The new cybernetics#

Why cybernetics?#

In recent years there have been growing calls for the tech industry to shift focus from the technology itself to the humans behind the technology, and to the impact on communities and the environment. This isn’t the first time people have sought to do this. The idea that technical systems couldn’t be divorced from the human and environmental systems that created them (and that they shaped in turn) was core to cybernetics, one of the disciplines from which artificial intelligence emerged.

The School of Cybernetics builds on the intellectual legacy of cybernetics. We believe that cybernetics is an important platform from which to make meaningful change in the world. Cybernetics has been a generative intellectual wellspring - it has helped shape everything from AI to critical systems theory, computer-driven art and music, design thinking and the internet.

The theory of cybernetics first found form in the 1940s and 1950s as a response to the rapid growth of computing technology following World War II. As a field, it fused maths, engineering, and philosophy with biology, psychology, anthropology, and many other disciplines. It was robustly interdisciplinary before that term was in common currency. It theorised an approach to next-generation computational systems that encompassed technology, culture, and the environment.

Today, there is an imperative to reappraise and refit cybernetics for the 21st century and to design, drive, and sustain a program of strategic activities around a new cybernetics.

Read about why we need a cybernetic future on our blog.

Beyond Artificial Intelligence#

A key driver to reinvigorate cybernetics for this moment is to generate possible futures that are unmoored from narrow, simplistic perspectives on technology.

Mary Catherine Bateson, daughter of Gregory Bateson and Margaret Mead, both of whom participated in the early cybernetics conferences, wrote of the trajectory of cybernetics in the latter half of the 20th century:

“The tragedy of the cybernetic revolution, which had two phases, the computer science side and the systems theory side, has been the neglect of the systems theory side of it. We chose marketable gadgets in preference to a deeper understanding of the world we live in.”

In this moment, Artificial Intelligence (AI) is too often divorced from a systems perspective. The surge in public conversation about intelligent machines has tended to simplify “artificial intelligence” into a catch-all term.

There are many definitions of AI — and many, many more imaginings about it. When it comes to describing what AI actually is or does, there’s usually a more mundane term for it, each with its own field and subfields: machine learning, robotics, virtual reality, data mining. On the other hand, there are technologies that already exist—many of which we use every day—that draw on a range of computational techniques that could fall under the umbrella of AI, but which in practice aren’t called “artificial intelligence”: we call them search engines, drones, web stores, streaming platforms, social networks, voice assistants. And by some definitions, nothing we currently have is AI — artificial intelligence is the promise of something that hasn’t been invented yet.

We find it helpful to think not of AI, but of the constellation of technologies where data, networks, algorithms, machine learning and edge computing converge to transform the way computers and physical objects work. It is this constellation of technologies that is profoundly changing the world we live in.

So rather than AI, we focus on “cyber-physical systems” (CPS). CPS automatically sense their environment, infer something from this data and act upon that data with real and unmediated effects on the world. Drones, autonomous vehicles, smart city infrastructure, wearable tech; these technologies are just the start of CPS going to scale. With advances in machine learning, these systems are moving rapidly towards being “proactive”, that is, capable of action without immediate reference to human controls, and being “intelligent” in that they can learn and adapt their action according to new information.
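The sense-infer-act cycle that defines a CPS can be sketched as a minimal control loop. This is purely illustrative — every name here is hypothetical, the sensor reading is stubbed, and a real system would read live sensors and drive real actuators:

```python
# A minimal, hypothetical sketch of a cyber-physical sense-infer-act loop.
# In a deployed CPS the sensing and acting functions would talk to hardware.

def read_temperature():
    """Sense: stand-in for a live sensor reading (stubbed for illustration)."""
    return 31.5  # degrees Celsius


def infer(reading, threshold=28.0):
    """Infer: derive a decision from the sensed data."""
    return "cool" if reading > threshold else "idle"


def act(decision):
    """Act: stand-in for an actuator command with real-world effect."""
    return f"actuator -> {decision}"


def control_step():
    """One pass of the sense-infer-act cycle."""
    reading = read_temperature()
    decision = infer(reading)
    return act(decision)


print(control_step())  # -> actuator -> cool
```

A “proactive” system in the sense above would run this loop continuously without human intervention; an “intelligent” one would replace the fixed threshold in `infer` with a learned model that adapts to new information.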

Our research into CPS looks at the system beyond the metal - the broader technical, human and environmental implications of emerging CPS. We also look back to the past - all the way to the first human technical systems, such as 35,000-year-old fish traps, examples of which can still be seen in places like Brewarrina. The Brewarrina fish traps were in continuous use until the 1930s, when colonisation disrupted the Indigenous Australians’ way of life. Imagine building, today, a technical system designed to last tens of thousands of years.

This broader perspective means our approach to research into AI and CPS is fundamentally transdisciplinary. The idea of cybernetics - of steering a technological system, with humans and the environment in the same loop - feels hopeful, and just as importantly, actionable. These complex dynamic systems will require new models of leadership, new kinds of critical thinking and critical doing, and new sorts of training. This is the work of the School of Cybernetics.

Explore why we are transdisciplinary on our blog.
