What does the A in AI stand for, really?
Thinking about the relationship between humankind and our super-powered tools
In my line of work, I often get asked about AI: what it is, what I think about it, and where I think it is heading. There is so much hype, excitement and interest in this space that it can be hard to step back, get a different perspective, and do some deeper thinking about the subject.
So I thought it might be useful to others for me to capture how I am currently thinking about AI, and how I respond to that question. Of late, my answer has been a question of my own…
Did you ever dream of being able to fly?
Not on a plane, but more like a bird. You know, having the power of flight? Like a cape-wearing superhero from the movies or comic books.
Or, perhaps it was super-strength? When you were a kid, daydreaming or doodling in your exercise book, did you ever draw yourself lifting cars? Or with the ability to move at supersonic speed? Or being able to teleport?
Even if you didn’t, the appeal of having superpowers has fuelled a multi-billion dollar cinematic industry, so there must be some sort of mainstream interest in the idea.
And it’s not an idea that was just invented by Marvel or DC either. When you look back into classical and ancient literature, you can find plenty of examples of myths, legends and fairytales with characters wielding unearthly powers, or using magic potions or talismans to grant themselves superhuman abilities.
It seems that for almost as long as we have been able to tell stories, we have told stories about characters with abilities we could never hope to have ourselves…or could we?
A tale of two AIs
Artificial intelligence has also been a mainstay of our stories for decades now. Movies, shows, books, and games aplenty have AI as a main character or plot device, and they usually default to one of two ways of portraying it.
The first category focuses on AI as a malignant force. Either initially or eventually, as an indistinct threat or as an antagonist with a distinct personality, AI is the ‘bad guy’ - sometimes apologetic, often apocalyptic, and definitely something to be overcome or outplayed. Because, if it isn’t, the humans in these stories are in peril of being replaced or even eradicated by AI.
The other camp is maybe not as well populated and definitely not as noticeable. This is where AI is presented as ubiquitous and embedded and, well, pretty normal and routine. Almost boring. In these cases, AI is taken for granted by the characters while, at the same time, endowing them with what we would likely consider almost superhuman abilities and intelligence.
In this second class of stories, humans are assisted and uplifted by AI. It’s a partnership, of sorts, in which AI is a tool leveraged to significantly improve their situation, or their ability to help others, in ways they couldn’t have managed on their own. The humans are still very much in the picture, but they are more capable than they would be without AI.
It’s this second angle that I find myself drawn towards in terms of my thinking about AI and where it is heading.
A little over a decade ago, I was travelling with a colleague to help with a presentation he was giving on an innovative project he was working on. As we were refining the presentation, we got to talking about how the app was going to help employees make the best decisions in their day-to-day processes. But it wasn’t going to replace them - the employees - with the technology (which was a stated fear of the company we were presenting to).
It struck me, as we were looking for a hook for that part of the deck, that we should talk about AI - not artificial intelligence, but rather augmented intelligence, because the project was going to add to the capabilities of the human staff who would be using it.
I was far from the first person to play with this idea of machine-powered augmentation. In fact, the concept of intelligence augmentation or intelligence amplification (IA) has been around since the 1950s, and it comes back to the fundamental idea that all technology is a tool - ‘technology’ itself means “the application of scientific knowledge for practical purposes”. When deployed for the practical purpose of increasing our intelligence and helping us in our daily lives, technologies like AI don’t need to be scary. We just need to appropriately cater for the risks that come with these technologies, and deliberately design things in favour of doing good.
With great power…
Casting back to those superhero stories from childhood, most of them carry an element of warning. They serve as cautionary tales about what happens when great power is bestowed but not used for good purposes.
In fact, the line between superhero and super-villain in those stories often comes down to motivation. (Well, that and some tragic accidents involving toxic waste or a run-in with cosmic radiation!)
If we follow this line of thinking about AI as augmented intelligence, we have to think about what augmenting means. To augment something is to make it greater by adding to it. Now ‘greater’ can mean ‘better’, but it can also just mean ‘more’. In these still early days of incorporating the latest generative AI capabilities into our world, we are seeing many examples of the tech being used simply for ‘more’: more junk content created to game search engine rankings; more targeted spam and phishing attempts via automated email creation; more derivative content from quasi-influencers aimed at encouraging the buying of more things; and so on.
This has me wondering whether, if left unguided, any new technology will default to amplifying and augmenting the less-than-awesome aspects of being human. And if so, we can’t just leave it to chance or leave it to run on autopilot - we need to be deliberate in our actions and our decisions to increase the likelihood of a positive outcome. We need to be clear on the problems we are trying to solve and focus attention on those.
Put another way: if I turn the volume up, that increases the volume of the signal and the noise, because the two are linked. So, how do we turn up the signal and turn down the noise? It’s a big question, and not one I have all the answers to, but I do have some thoughts.
Our better angels
There’s a famous phrase from Abraham Lincoln’s first inaugural address, where he appeals to the ‘better angels of our nature’ in trying to unify the nation and avoid civil war. While he didn’t succeed in that latter endeavour, the phrase has always struck me. In the context of augmented intelligence, I think we should be aiming to give those better parts of ourselves the best possible chance of being the ones that are augmented and made greater.
A lot of this is why I’m (strongly) encouraging my children to study some computer science in their schooling and tertiary study. Not so that they can become software engineers, necessarily, but because they, like the rest of their generation, are at risk of becoming expert users of digital technology with zero understanding of how that technology functions. That, in turn, leads to running on autopilot and letting technology take the lead. Default augmentation = increased noise, not necessarily increased benefit. Not ideal.
So, I’m also encouraging them to take the opportunities they have to study a bit of ancient and recent history, philosophy and sociology, anthropology and even some theology. Why? Because I want to stack the odds that the inevitable augmentation that comes from AI and other technologies finds plenty of ‘better angels’ to work with: good, deep, diverse thinking that can be amplified and supercharged in its application to solving some of the tricky problems of modern and future life.
High hopes? Sure. Wishful thinking? Maybe. But we live in a time when things that previous generations would have thought only possible for superheroes or through magical wishes are within reach. Best we think about what we’re wishing for.