Broken robots lost in translation
Sometimes we can speak the same language but mean totally different things.
I find language inherently fascinating. I’m not sure why. I have some theories. Growing up, I was always enthralled by my father’s ability to captivate a room with amazing stories and was (and still am) in love with my mother’s ability to think and speak clearly and fluently. As a teenager, I had formative and influential English teachers in school, and I found reading provided me with regular respite, comfort, and escapism. And then, through my career as a technology architect, it has been easy to draw parallels between human and computer languages and to see spoken language as being akin to ‘code for the mind’.
Recently, I was reflecting on an observation I made long ago and its renewed relevance in this new age of large language models and the like: how very possible it is to be speaking the same language and yet to be lost in translation.
Reasons for running late
One of my favourite examples of this is something that I learned from working with a number of colleagues from South Africa over the years.
If I said to you that I was late to work today because of a broken robot, what would you think happened?
Chances are that if you are from a country like Australia, New Zealand, the UK or the USA, you might think my home automation efforts had gone awry and maybe a robotic vacuum cleaner was blocking my path out the front door! Or, perhaps you might think that I was overly reliant on the self-driving capability of my car, and that had malfunctioned, hence I was late.
But, if I said that phrase in South Africa, the likely interpretation would be that I was late due to a broken traffic light.
This is because, at their introduction in the 1930s, traffic lights (or traffic signals, if you prefer) were referred to as ‘robotic traffic controllers’, a name that was shortened to ‘robots’ and has stuck to the present day. I love this as an example of how two people can speak the same language and mean entirely different things given different colloquial definitions of one word.
This can and does apply in the business context as well. If, in a meeting, I was to say that we should “table this for discussion,” in the USA, that would mean to postpone the item to some future, unset date. Yet, everyone sitting around that hypothetical table who hails from other English-speaking countries would likely think the opposite: that we are going to discuss that topic now. It is being ‘put on the table,’ so to speak.
You can imagine (and I have seen this happen in real life) a meeting going a little wonky because two people agree that we should table the item…but one then continues to discuss it, and the other gets frustrated because they thought it was being put aside!
Beyond these seemingly innocuous examples lies a bigger point: that sometimes misunderstanding can sneak up on us, even if we thought we were being perfectly clear. This is not the same as trying to communicate a complex concept and worrying that your presentation slides don’t quite explain the point. I’m talking about people in everyday interactions who are puzzled, confused or otherwise astonished when a disagreement arises out of nowhere.
Keep things warm with a SCARF
Years ago, I was privileged to hear a talk by Dr David Rock, a neuroscientist who specialises in leadership consulting based on his research into how the brain works. He is famous for his SCARF model which tracks the various emotional triggers that can affect how our brains respond to certain situations.
SCARF stands for Status, Certainty, Autonomy, Relatedness, and Fairness. You can read more about the model in Dr Rock’s own writing or watch his TEDx talk, but the brief summary that I keep in my mind is:
Status: is my position or standing being threatened?
Certainty: how clear am I on what might happen here?
Autonomy: do I have some say over what is going on?
Relatedness: how connected do I feel to others right now?
Fairness: is a just outcome likely from this situation?
Dr Rock’s research showed that triggering any one of the SCARF elements causes our brains to enter a threat response - fight, flight or freeze being the common ones, although apparently others (like flock or fawn) are being added to that list over time.
We are all susceptible to these triggers to different degrees, but what struck me from Dr Rock’s talk at the time was that, if one of these triggers does fire, it’s much harder for you to think rationally. This is because your brain will redirect oxygen and energy from your prefrontal cortex (the part of your brain that does the rational, slow thinking) to your amygdala (the part of the brain that owns that ‘fight-or-flight’ response).
The threat doesn’t need to be real for your brain to perceive it as real. And sometimes the ‘threat’ can emerge from just the differences between how the speaker and the listener interpret the same words, which can lead to very undesirable outcomes.
Picture this: your boss comes up to you while you are talking to some workmates and says “Can I see you in my office now please? I want to chat about something.” Right out of the gate, your Status is threatened by someone senior to you addressing you in such a way in front of your colleagues. You have no Certainty about what is happening or is going to happen. You have no Autonomy about when or where this meeting is going to take place. Nothing about what they have said in those two sentences would make you feel like you can Relate to them or others in this situation, and you are likely to have questions about whether a Fair outcome is likely!
So, basically, your boss has just singlehandedly filled their proverbial ‘bingo’ card, hitting every possible way of sending your brain into a threat response. You would be neurologically constrained from thinking rationally as you walk into that office with a sweat on your brow! And your boss might be totally confused as to why you are reacting that way, since all they asked for was a quick chat - I mean, what’s scary about that?
Compare that to the same boss approaching you and asking you about how your weekend was and how your morning is going. They then say “hey, do you remember that proposal we were discussing in the meeting last week? I’d really love to pick your brains a bit more on the ideas you raised in that meeting because I think you’re on to something there, and I want to make sure I’m factoring that in to my planning. When would be a good time and place to catch up on that?” Polite chitchat that brings them to the same level as you (Status) + clarity on the topic (Certainty) + your choice as to time and location (Autonomy) + friendly approach and sense of being part of the team (Relatedness) + an increased likelihood of factoring in your feedback (Fairness) = you are much more likely to be able to approach that meeting with a clear, rational state of mind.
SCARF has been adopted into my Family of Favourite Frameworks™ because it is a great reminder that putting a bit of thought into how I am communicating with someone is going to pay dividends in helping me avoid inadvertently triggering their brain to treat me as a sabre-toothed tiger.
AI, you say?
What does all this have to do with AI? Well, on the one hand, it could be argued that the increasing experimentation with AI deployments in business is likely to trigger SCARF for many people across many companies. We shouldn’t be surprised, then, if some of the reactions that follow look a lot like fight-or-flight responses!
To improve this situation for all involved, I think it is important to frame any deployment of AI capabilities within the context of SCARF and involve and engage team members in the process - i.e. amplify their Autonomy, listen for real or perceived Status threats, and see what can be done to provide better Certainty about an uncertain future.
Also, I think it is important to note that the very fact we are having conversations about seemingly conscious AI shows that just because these models and tools can converse with us in English (and a few hundred other languages too), it doesn’t mean we naturally understand them or what is going on all the time. An LLM may have been trained on the totality of available human language and learned to sound like us, but that doesn’t mean we understand it any better than we do our human colleagues and friends. We might think we understand clearly - we’re all speaking the same language, after all. But in reality, not knowing or truly understanding the reward mechanisms or prompts that are guiding a particular response can be a recipe for…well, misunderstanding at least.
The fact that these models do understand grammar, syntax, colloquialisms and metaphors really is a game-changer in the way we interface with machines. But given that we don’t truly and consistently understand how to communicate effectively with other human beings using that same tool of language, maybe we should allow ourselves space, as we progress through this next evolution of software, to reconsider if and when we get things wrong.
Nearly a century after ‘robots’ were introduced to South Africa, here we are introducing a different kind of robot into our workplaces and homes. We are now reckoning with how best to deploy and utilise these capabilities to augment the best parts of who we are, and to help us to be more productive - all while speaking to them using imperfect, imprecise, inherently interesting human language.
Let’s just make sure we’re leaving room for checking that we’re not getting lost in translation along the way.