Thinking about AI: On Mirrors and Bionic Arms

Of all the questions at the core of indico's philosophy, this is perhaps the most important one: What is the relationship between us and AI? Forgive me for using the word, but what is the appropriate paradigm through which we should view and develop it? Which analogies can we use that will help AI make sense to a broader group of people with fewer letters after their names?


I ask the question not because I'm a tech bro who loves smelling his own farts (though I'll confess with horror that that may have something to do with it), but because society is dipping its toes into a profound intersection of everyday life and new technology. This technology is complex in a way that has no direct analogue in anything that has come before, but if we're to adopt it responsibly we must be able to discuss it effectively.


The Prevailing Paradigm - Robots and Electric Sheep

I think the prevailing paradigm here is informed, more than anything, by a handful of movies that have so thoroughly saturated the public zeitgeist that they have become the yardstick against which all other AI is measured.


I'm sure we can all name these films without being reminded, but for the sake of completeness:




Or whatever the appropriate reboots were for your generation.


That's not to say that these movies are bad. In fact they're quite good, and they have rhetorically interesting concepts of AI. In every one of these cases, Artificial Intelligence is ironically and obviously human. Indeed, the obvious humanity of AI is the key plot arc in all of these movies.

[Some readers may believe that C-3PO and R2-D2 are not the main characters of the Star Wars franchise. I would encourage them to watch the series again.]


As a storytelling device? Excellent. Let's just remind ourselves that these are not, nor were they intended to be, portraits of reality. They were not created by people with a deep understanding of AI; rather, they reflect a decades-old cultural zeitgeist that few of us still have the appropriate context for.


The Prevailing Danger - Tilting at Windmills

So, why do we care? Generally speaking, I care very little about the analogies that people use in action movies, and I'm decent enough at suspending my disbelief that I can stomach even Marvel "science". However, as the technologies we describe come closer and closer to reality, I believe we must examine the prevailing views and determine where they misalign with the technology at hand.


I think this leads overall to The Objectivity Fallacy and a subsequent Abdication of Responsibility. When we imagine AI as an entity separate from ourselves, we come to believe that it has some kind of agency, or that it is somehow less subjective than humans are. This leads to the deeply problematic belief that disparate impacts in AI are the result of objective differences between protected classes. This is not true.


Then, when the AI is making its 'own' decisions, humans cannot be blamed for its mistakes. In another blog post I'll go into detail on the extensive human supervision present in so-called "unsupervised" techniques, but suffice it to say that there is no situation in which humans are absolved of the responsibility to define unambiguous success criteria.

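The promised deep dive belongs in that future post, but a toy sketch can hint at the point: even a textbook "unsupervised" algorithm like k-means only optimizes an objective that humans chose. The data and the helper function below are hypothetical illustrations, not anything from a real system.

```python
# A minimal sketch of why "unsupervised" clustering still embeds human
# judgment: we pick k, the distance metric, and the features. The
# algorithm merely optimizes the objective we handed it.

def kmeans_1d(points, k, iters=20):
    # Human choice #1: k, the number of clusters we expect to exist.
    centers = points[:k]  # naive initialization from the first k points
    for _ in range(iters):
        # Human choice #2: squared distance as the notion of "similar".
        clusters = [[] for _ in range(k)]
        for p in points:
            nearest = min(range(k), key=lambda i: (p - centers[i]) ** 2)
            clusters[nearest].append(p)
        # Recompute centers; keep the old center if a cluster is empty.
        centers = [sum(c) / len(c) if c else centers[i]
                   for i, c in enumerate(clusters)]
    return sorted(centers)

# Human choice #3: which measurements count as the data at all.
data = [1.0, 1.2, 0.8, 9.0, 9.4, 8.6]

# The same data yields different "truths" depending on the k we impose.
print(kmeans_1d(data, k=2))  # finds the two tight groups near 1 and 9
print(kmeans_1d(data, k=3))  # dutifully invents a third cluster
```

The algorithm never saw a label, yet every meaningful decision was made by the person who configured it, which is exactly why the success criteria remain a human responsibility.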
A Better Approach - Mirrors and Bionic Arms

What, then, is a more appropriate analogy? How should we discuss the AI of today in a way that is both accessible and accurate?


At indico we have an analogy for understanding how AI works, and a paradigm that directs us towards the way that AI should work.


AI is a mirror - a bad one. It has a hundred little pockmarks all across the surface. You shine some data at it and it gives you something back that looks very similar. Some pieces are shrunk or expanded, but on the whole you're getting out what you put in.


We ought to build AI as a bionic arm. Not something that sits beside us and competes against us in a modern re-enactment of the John Henry legend. Before anything else, it ought to be a tool that we command: something that lets us lift 100x more than we could before, but that is still fundamentally controlled by us.


AI is, above all, Artificial. We may use it to create increasingly accurate approximations of humanity, but that will never be the most effective application. When we have seven billion humans, I'm quite a bit more interested in a new, unique form of intelligence that works with us.


If old sci-fi movies have taught us anything, it's that making AI that looks like humans is a lot more hassle than it's worth.

