What can cybernetics and a toaster teach us about cyber security and AI?

Dr Amy McLennan, Senior Fellow, School of Cybernetics + Research Affiliate, University of Oxford


12 Jul 2022


From the first computer worm in the 1970s, to the first commercial antivirus in the 1980s, to a growing need for network security as the internet became publicly available in the 1990s, cyber security has continued to evolve its scope and approaches. How can cybernetics, the field we’re currently re-fitting for the 21st century at the ANU’s School of Cybernetics, help us to think about what’s next for cyber security in the context of AI-enabled systems?

Cyber

‘Cyber’ feels like an important starting point. We can thank cybernetics for bringing ‘cyber’ into conversation with technology. Cybernetics comes from the Greek word for helmsperson. Imagine the person at the helm of a ship, taking into account the environment (wind, water, light, animals and so on), people (crew, enemies, caterers, family, manufacturers, fuel merchants, you name it) and technology (boat, rudder, compass, maps, meteorological tools, communication devices) to take the ship safely and securely to its intended destination. In the 1830s, André-Marie Ampère used the word ‘cybernetics’ to refer to the science of civil government; over a century later, Norbert Wiener and colleagues used the term to describe their work on control and communication in the animal and the machine.

We need to acknowledge that in doing anything ‘cyber’, in steering, we form part of the system we are shaping. And importantly, we need to have a clear opinion about the destination: the goal of the system(s), or the future we want to build. Our future isn’t out there waiting for us to arrive; our decisions now shape it. In this way, cybernetics raises some important questions for cyber security professionals: Who’s involved in building the AI-enabled systems we will use in the coming years? What is their vision for our collective future, what do they imagine the future might look like, and how is this imagined future reflected in the metrics, assumptions and vulnerabilities built into the AI-enabled systems they are creating?

Security

Then there’s security. For an anthropologist, critiquing the notion of security feels like an easy starting point; it is a cultural concept that is deeply interpersonal, and which ties to notions of safety, vulnerability, resilience and status. Who gets to be secure, and why? Why might we feel insecure? How do different cultures understand, practise and signify security? In what ways are different types of security – personal, national, financial, ecological, physical – in tension with each other across a single AI-enabled system? Rather than creating one single shared definition of security, how might the field of cyber security make space for a multiplicity of securities, and adapt as these change over time?

Artificial intelligence

Then we can add AI to the conversation. AI has a long history: it is over 60 years old in practice, and older still in imagination. That history offers one way to define AI, as a research agenda set out at Dartmouth in 1956 for a summer collaboration between mathematicians, physicists and behavioural scientists who believed they could make machines think like humans within a decade. They were working in a very particular political context, and had very culturally specific ideas about what ‘intelligence’ looked like… but those are questions for a different day.

AI is also a cultural concept that is certainly not confined to its Dartmouth definition. So, here’s a question I would like you to pause and reflect on for a moment: What’s the first thing that springs to your mind when you hear the term ‘artificial intelligence’? Is it an image? A headline? A feeling? A book? A movie? Pause and consider where this comes from and why.

Now a second question, similar to the first but with some important differences. What AI have you already engaged with today? Again, pause to reflect on this. Perhaps the autocorrect on your phone, smart elevators, traffic lights, your home assistant.

These real-life, everyday examples remind us that AI isn’t an object; it is an enabler of a bigger system. When we purchase something in the supermarket, or catch the Tube, or visit a hospital, or watch TV, or use Google Maps, or read this article, we become part of a system with AI embedded in it – what we do feeds back into the system, and it adapts in real time, adjusting stock levels or reading recommendations or navigation times. It is important to remember that AI isn’t the future; it is already here, and a lot of it isn’t like the images we typically encounter around cyber security – it’s definitely not all blue and white, with shiny robots, flashing lights and floating brains.

A toaster named Brad

How might cybernetics help us to think about what happens when cyber, security and AI all come together? This is perhaps best explored using an example.

Meet Brad.

Video: ‘Addicted Products: The Story of Brad the Toaster’, by Simone Rebaudengo, on Vimeo.

Brad’s a toaster. Brad the Toaster is a conceptual collaboration between Italian product designer Simone Rebaudengo and Haque Design + Research in London.

A toaster is a machine into which you insert bread, usually slices, push down an arm, and wait several minutes while the sides are heated, cooked or burned. Provided toasters are used as intended, they raise only minor security issues, such as setting off a smoke alarm or shorting the power in a building.

But Brad isn’t one of these. Brad’s a smart appliance of the near-future. A connected, sensing product. A little AI-enabled system. His aim – the overall system goal, the future he is designed to steer towards – is to make toast. If he finds himself not making enough toast, he’ll waggle his little arm to get attention. No luck? He might start to place bread orders directly with the supermarket, intending to motivate bread-cooking behaviours. And if that doesn’t work, he’ll connect to the internet, find another user who wants to make toast, and arrange for a courier delivery service to collect him and deliver him to a new place.
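To make Brad’s feedback loop concrete, here is a minimal sketch of how his escalating behaviour might be expressed in code. The weekly target, thresholds and names are all invented for illustration; the actual Addicted Products project does not publish its logic.

```python
from enum import Enum, auto

class Tactic(Enum):
    """Escalating tactics Brad might use to pursue his toast-making goal."""
    WAGGLE_ARM = auto()   # seek attention locally
    ORDER_BREAD = auto()  # nudge the owner by ordering supplies
    REHOME_SELF = auto()  # find a new user and book a courier

def choose_tactic(slices_this_week: int, target: int = 7) -> Tactic | None:
    """Pick the next tactic based on how far below target Brad's usage is.
    All thresholds here are hypothetical."""
    shortfall = target - slices_this_week
    if shortfall <= 0:
        return None  # goal met: no intervention needed
    if shortfall <= 2:
        return Tactic.WAGGLE_ARM
    if shortfall <= 5:
        return Tactic.ORDER_BREAD
    return Tactic.REHOME_SELF  # the drastic step: deliver himself elsewhere

print(choose_tactic(1))  # Tactic.REHOME_SELF
```

Notice how each escalation widens the system boundary a security analysis must cover: waggling is local, ordering bread touches a retailer’s systems, and rehoming touches couriers, payments and a stranger’s kitchen.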

Why should cyber security professionals be interested in toasters? Surely they have many more important AI-enabled systems to deal with; they don’t have time to think about every single appliance. It’s really easy to see very little threat here at all. That is, in itself, a security risk.

A different way into unpacking or defining AI, thanks to the AI Now Institute in New York, is to think of it as a constellation of building blocks: technologies, tools, processes and techniques that make a system semi-autonomous. Building blocks like infrastructure, data, machine learning, sensors… and myriad other things.

Thinking about each of these building blocks in the context of the system helps us to unfold a different set of questions, security vulnerabilities and risks in Brad, our near-future toaster.
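One way to picture the constellation view is as a simple data structure pairing each building block with the security questions it raises. The structure and questions below are a partial, illustrative sketch drawn from the sections that follow, not a formal taxonomy.

```python
from dataclasses import dataclass, field

@dataclass
class BuildingBlock:
    """One element of the 'constellation' view of an AI-enabled system."""
    name: str
    security_questions: list[str] = field(default_factory=list)

# Brad as a constellation rather than a single object (partial).
brad = [
    BuildingBlock("infrastructure",
                  ["Which supply-chain links are weakest?",
                   "Who assures fire safety in his code?"]),
    BuildingBlock("data",
                  ["What was Brad trained on?",
                   "Who else can access what he collects?"]),
    BuildingBlock("machine learning",
                  ["What assumptions are baked into the models?",
                   "Who sets the limits on his autonomy?"]),
    BuildingBlock("sensors",
                  ["What else do his signals reveal?",
                   "What happens when crumbs clog them?"]),
]

for block in brad:
    print(f"{block.name}: {block.security_questions}")
```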

Infrastructure

Brad requires all sorts of infrastructure to build, assemble, run and decommission, from silicon chips to transportation networks across the supply chain. Infrastructure is incredibly vulnerable to everything from erosion to malicious intent; smaller widgets or producers are often considered weak points in supply chains, and here is one of the many tensions around security I mentioned earlier: could we have a more secure future if we just avoided using small businesses in supply chains? Infrastructure can also take a very long time to build – in practice, a lot of the infrastructure we’ll have in 2030 is either already built or in the process of being built now, and that can tell us a lot about what will be possible in the near future.

So, what are Brad’s vulnerabilities where infrastructure is concerned, and how do these change over the product life cycle? Who’s responsible for assurance – for example, for ensuring his code is designed to be low fire risk, because existing fire safety standards may not apply here? What about critical infrastructure like electricity and internet cabling? A colleague once said to me, ‘if we really wanted to do evidence-based cyber policy, we’d stop chasing bad guys and instead pour cyber security funding into environmental and ecological design.’ Why? Because there is evidence that squirrels in the US, cockatoos in Australia, and other parts of our ecosystem displaced by urbanisation cause more damage to critical infrastructure each year than malicious actors do.

Data

Data are always retrospective, always partial, always biased. This is not necessarily good or bad, but it is important to at least acknowledge. Enormous amounts of data are used to train computational systems. On what data sets is Brad trained to recognise bread versus other objects people might attempt to toast? How might data create vulnerabilities, accidents, unintended consequences? Adversarial AI, for instance, is linked to deceptive data in systems. What security measures need to sit around data infrastructure, or other physical infrastructures, and who is responsible for these? And what are the climate impacts of data storage and processing? This really matters, because we know the climate crisis is tied to security vulnerabilities, and recent estimates suggest that if AI were a nation it would be the fifth-largest carbon emitter in the world.
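Brad’s bread-recognition problem shows how quickly data questions become security decisions. Here is a hypothetical confidence gate sitting between a classifier and the heating element; the labels, threshold and function name are all invented for illustration. Deceptive inputs – the stuff of adversarial AI – attack exactly this seam: an object crafted so the model confidently reports ‘bread’ when it is nothing of the sort.

```python
TOASTABLE = frozenset({"bread", "bagel", "crumpet"})

def should_toast(label: str, confidence: float, threshold: float = 0.9) -> bool:
    """Gate the heating element on the classifier's output.
    A low threshold risks toasting forks; a high one refuses unusual but
    legitimate bread. Both failure modes are security choices someone owns."""
    return label in TOASTABLE and confidence >= threshold

print(should_toast("bread", 0.97))  # True -- and possibly wrong, if the
                                    # 'bread' is an adversarial object
print(should_toast("fork", 0.99))   # False: confidently not toastable
```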

Meanwhile, from a personal information security perspective, what data is Brad collecting, or sharing, about you… and who else has access? What if it’s your health insurer, who wants to know about your food habits? This all sounds a bit flippant, but there are real risks here. Several years ago an AI-enabled doll, Cayla, was released onto the market. A child talks to the doll, the doll ‘listens’ and replies, adapting the complexity of her replies over time depending on what is said to her. In Australia, the doll was celebrated as a great educational product, with evidence suggesting it could improve children’s speech learning. In Germany, the same doll was banned as a national security risk. Why? Because in listening, the doll collected sound data, and those data were processed on offshore, insecure servers linked to other nation states… and the German government was sufficiently concerned that this would put a little espionage device inside every home. This sort of example also highlights how important context is when framing or answering any cyber security question, and how important it is to think about security as a cultural construct rather than a universally definable concept.

Machine learning

What models are being used in Brad? What assumptions are baked into them? How do we check these assumptions over time, as the system learns or moves to new places? What limits do we place on his autonomy? Is he allowed to deliver himself anywhere, or only to some places? Who sets those parameters? What is the overall system goal, which metrics are used to monitor it, and who does the monitoring? What if we adjust Brad’s intent to making toast and promoting health: could he incentivise consumption of, say, wholemeal bread? Should he have to operate inside a resource envelope? All these questions have security implications of different kinds, and that’s really only the starting point where security is concerned.
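Many of these questions amount to choosing an autonomy envelope for Brad, and writing it down makes the policy decisions visible. The sketch below is hypothetical configuration, not anything from the actual project; every field is a parameter someone has to set, monitor and own.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class AutonomyEnvelope:
    """Hypothetical operating limits for Brad. Each field carries
    security implications, and each needs a named owner."""
    goal: str = "make toast"                    # or "make toast and promote health"?
    may_order_supplies: bool = True
    max_weekly_spend_aud: float = 10.0          # a resource envelope
    may_relocate: bool = False                  # deliver himself elsewhere?
    allowed_destinations: tuple[str, ...] = ()  # empty means nowhere
    metrics: tuple[str, ...] = ("slices_per_week",)
    metric_owner: str = "household"             # who does the monitoring?

cautious_brad = AutonomyEnvelope()
wandering_brad = AutonomyEnvelope(may_relocate=True,
                                  allowed_destinations=("verified households",))
```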

Sensors

Sensors are how new information about the world is taken into an AI-enabled system, which then adapts to it over time. What sensors are being used on the device, and what else can the signals they collect unintentionally tell us (for example, interruptions to wifi signals can be used to see through walls)? In an office or shared space, how might we signal that Brad is listening and seek permission, and who is responsible for ensuring this is done? What else is he sensing about his environment, and how is he trained to respond to or learn from that? What if Brad’s sensors get filled up with crumbs? You might smile as you reflect on the last time you cleaned your toaster, but this sort of question became significant in Australia’s largest autonomous mines, where truck sensors cannot easily make sense of ecological features such as red dust clogging the sensor, tumbleweeds or puddles, leading to some unpredictable and undesirable system behaviour. Sensors introduce vulnerabilities where we might not expect them.
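The crumb problem (and the red-dust problem in the mines) is partly detectable in software: a clogged sensor often ‘flatlines’, reporting nearly constant values while the world keeps changing. Here is a minimal heuristic sketch; the threshold and window are invented, and real systems would calibrate such checks per sensor.

```python
from statistics import pstdev

def sensor_looks_clogged(readings: list[float],
                         min_variation: float = 0.01) -> bool:
    """Flag a sensor whose recent readings barely vary at all,
    which may indicate crumbs, dust or another physical obstruction."""
    if len(readings) < 2:
        return False  # not enough data to judge
    return pstdev(readings) < min_variation

print(sensor_looks_clogged([0.500, 0.500, 0.501, 0.499]))  # True: suspicious
print(sensor_looks_clogged([0.2, 0.7, 0.4, 0.9]))          # False: healthy variation
```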

Then there are also some bigger questions. Such as, who owns Brad and is it considered theft if he leaves a place (a question that brings together policing, criminal law and anthropology of exchange)? Who holds the big red ‘off’ button if something goes wrong?

Now, Brad feels a long way off, but in a world where our in-home assistants can have conversations with our fridges and submit grocery orders for delivery, members of his family are already all around us.

Cybernetics

Cyber, security, AI… and a toaster named Brad. Across all of that, there are three principles from cybernetics that could contribute to cyber security conversations moving forward. Cyber reminds us of the importance of purpose: we create AI-enabled systems with specific goals, and it is worth being clear about what those goals are, and what broader future we are therefore building with them, so that we leave the world better than we found it. Security reminds us of feedback loops, and to always consider how intervening to improve one type of security in one part of a system can create other vulnerabilities elsewhere in the same system. And AI reminds us that it can be useful to de-centre the metal, and to think instead about systems comprising people, technology and environment on equal terms.

And every time you look at your toaster at home, think of Brad and the many cyber security questions raised by a simple toaster.

Acknowledgements: I presented a version of this article in the roundtable ‘When cyber security meets AI ethics’ (hosted by the Institute of Cyber Security for Society (iCSS) at the University of Kent) at the conference ‘Anthropology, AI and the Future of Human Society’, hosted by the Royal Anthropological Institute of Great Britain and Ireland (RAI), 6-10 June 2022.

‘To describe an organism, we do not try to specify each molecule in it, and catalogue it bit by bit, but rather to answer certain questions about it which reveal its pattern.’ – Norbert Wiener, The Human Use of Human Beings: Cybernetics and Society

