On Wednesday 15 February, Paul Ramsay Foundation and the School of Cybernetics brought together policymakers, academic researchers, and business and industry representatives to discuss how collective intelligence and cybernetic frameworks could be applied to build systems of knowledge around public sector needs.
Facilitated by policy analyst and public leadership and strategy advisor Martin Stewart-Weeks, Distinguished Professor Genevieve Bell, the Director of the School of Cybernetics, and Sir Geoff Mulgan, Professor of Collective Intelligence, Public Policy and Social Innovation at University College London (UCL), explored how collective intelligence might be applied to cybernetics to inspire new models of engagement and governance, critical thinking and doing, and systems reform.
The event was a call to action for participants to imagine how Australian social systems could change in the next 10 to 20 years if we harnessed technology and collective intelligence more intentionally.
Collective intelligence proposes that individuals, organisations, and groups would thrive if they were better able to combine the diverse expertise of people and machines to activate systems of knowledge, generate collaborative thinking, and change the way we solve problems. Cybernetics looks at the system as the unit of analysis, transcending boundaries, thinking in systems, and ensuring that humans, technology and the physical environment are in the frame as we tackle the challenges posed by technology at scale.
The discussion began with Stewart-Weeks inviting Professor Bell and Professor Mulgan to shed light on their respective frameworks, and their related tools and methodologies.
Professor Bell introduced cybernetics by sharing a story about a period of intense reflection after World War II, when Norbert Wiener, a mathematician, philosopher and computer scientist, invited thinkers from a range of disciplines, from anthropology to psychoanalysis, to join him in reflecting on how humans would interact with technical systems as they evolved.
That series of early conversations about what the future might be like as humans and computing machines began to co-exist unfolded under the banner of ‘cybernetics’. At the time, Wiener used the term to refer to the theory of control and communication in machines, in humans, and in broader ecological systems.
At the School of Cybernetics, cybernetics is being reimagined for the 21st Century as an important tool for navigating major societal transformations, through capability building, policy development and safe, sustainable, and responsible approaches to new systems.
Professor Bell discussed how every system has a history and a set of reasons why it has come to be the way it is, including complex tracks that are often hidden from view.
She talked about how art helps us see things in new ways. For example, Anatomy of an AI System, a striking work by ANU School of Cybernetics’ Design Resident Kate Crawford currently on display in the Birch Building, skilfully reinterprets the Amazon Echo as an anatomical map of human labour, data and planetary resources.
Following on from this, she discussed the value of learning from the past as you imagine the future.
“To get better at knowing where you are in the present in order to shape the future, you must first understand the history of where you are – and then untether yourself from the past to imagine what’s possible,” Professor Bell said.
When it comes to applying cybernetics to policy reform, Professor Bell said that there are always multiple futures unfolding simultaneously in different places, and none of them is coherent. Governments, however, like coherence.
“So how do we lay down new tracks?” she asked. “You have to feel productive discomfort.”
It is through allowing different kinds of voices to participate in different kinds of conversations that we begin to prepare for societal transformations, even on a small scale.
Professor Bell described disruptive policymaking as generative. When it is inclusive, it is necessarily uncomfortable, especially when this involves considering a diversity of perspectives and voices. In such situations, a different kind of convening is needed. But while we tend to imagine that disruption requires huge centres and large investments, Professor Bell said that sometimes, disruption is simply about “… moving a centimetre, or doing the emotional work required.”
Professor Mulgan then gave a potted history of his career to explain why his primary interest has come to be in studying how a system thinks.
“I spent time at MIT in the late 1980s, including working with many of the pioneers of the Internet, and seeing that very creative thinking about hardware, data and information transfer didn’t extend much to thinking about the implications for things like childhood, democracy or health. Later, at Nesta in the 2010s, I helped run various funds commissioning AI tools and supported work on how to combine AI and collective intelligence”, Professor Mulgan said.
Professor Mulgan’s interest in collective intelligence deepened after a stint working with the United Nations Development Programme and its network of Accelerator Labs, where collective intelligence was applied to achieve the Sustainable Development Goals through the everyday work of governments. Among the emerging approaches he saw successfully adopted were those using crowdsourcing and remote sensing, and those combining collective intelligence with artificial intelligence.
“We should have more intelligent systems but in most places they are functional silos governed by siloed ways of thinking without a common language,” Professor Mulgan said.
Today, Professor Mulgan is most interested in projects that seek to connect innovations, knowledge and the contributions of diverse people with real time data to find responsible ways of using technology to advance sustainability, equity and social mobility.
After the speakers had shared different approaches to the ongoing challenge of engaging differently with the future, the audience was then invited to consider what we are yet to imagine for a future Australia in which everyone has the opportunity to thrive.
Stewart-Weeks invited guests to respond to a set of provocations to explore where and how collective intelligence and cybernetics might be applied to public sector systems reform to address future challenges and opportunities.
It was generally agreed that Australia is well-placed to disrupt systems, with many new initiatives established to do this such as the School of Cybernetics, ANU Tech Policy Design Centre, the UTS Human Technology Institute and Humanitech.
While there is no shortage of intelligent and engaged individuals who are aware of, and actively seeking to disrupt, existing systems that are not fit for purpose, we tend not to do so collectively on a grand scale. Incremental changes are achievable, but what is needed most in government right now are leaders who genuinely encourage risk-taking.
A common barrier to innovation in the public service identified by many participants was the lack of safe environments in which to experiment. The need for greater reflective practice and adequate time in which to examine issues more deeply was also identified, especially given the time it takes to convene appropriately and develop a shared, collective lens.
Curiosity was found to be preferable to optimism. One group explored how by uncovering gaps and spaces between systems, we might adopt more creative, speculative approaches to the future that may help identify broader outcomes rather than focussing on narrow technical fixes. This is the approach favoured by the School of Cybernetics where students in its Master’s program examine the interplay between and across complex systems.
Some worried that collective intelligence and cybernetic approaches would take too long to apply in practice. Do we have the space, time and mental capacity as a nation to think creatively given the cracking pace of policymaking?
Questions around how you innovate at scale in government, listen to diverse voices and overcome the barriers to getting big establishments to work at pace led to various ideas about what kinds of structures you would need to enable collective intelligence and cybernetics to flourish in the public sector.
Among questions raised were:
- Where would the feedback loops come from?
- What leadership skills are needed to facilitate collective intelligence?
- Where is the place for ethics and human-centred decision-making?
- How do you deal with barriers such as entrenched institutional thinking, lack of optimism and complex power dynamics?
- How do you establish the ‘glue person’ – the one who helps make connections within a team?
Several groups discussed how you would practically go about introducing multi-stakeholder dialogues to develop a collective intelligence and/or cybernetic stance.
Two things were identified as important ways to develop a collective stance: the need for public servants to be able to make connections across various systems of administration, and the benefits of engaging end-users in exploratory conversations to imagine all possible futures, including any unintended consequences we may not want.
How we measure and value non-linear changes and incremental shifts that eventually lead to larger change was also discussed. With governments often focussed on the issues at hand rather than collective betterment, and tending to wait for a crisis before a maladaptive system is overhauled, having dedicated spaces and allocated time to think critically about future challenges was seen as essential for policy reform.
Another barrier to disruption in government departments was the expectation that public servants serve a minister. What would it take to reinstate frank and fearless advice? Where is the authorising space for public sector expertise?
Adaptive leadership approaches that actively encouraged dissent could help create a more reflexive policy approach and pave the way for change. This could also enable the public sector to develop more interdisciplinary approaches with shared goals rather than being confined within cost centre silos.
Questions of how to keep pace with technological change alongside parallel regulatory reforms were discussed in terms of the risk involved in adopting AI-driven solutions too soon to drive efficiencies. The need to foster a deeper understanding of data, and to examine algorithmic bias, technological histories and the influence of humans in systems, was identified as an area rich for further analysis.
Among recent lessons from Australia’s catastrophic fires were that socialisation structures worked well while other infrastructures failed. Similarly, Australia’s response to the Covid-19 pandemic showed how it is possible to negotiate across boundaries and make urgent changes to complex systems when we need to. How can the pockets of innovation arising from such events be maintained?
Rather than ‘breaking the system’, in what ways could we be optimising, adjusting and enhancing systems? Where could we optimise data sets for social benefits as well as economic benefits, for example by collecting metrics around wellbeing?
We’ve captured practical ideas for the future of policy reform from our discussions here:
- Create safe environments for experimentation
- Share ideas without being bold and confrontational
- Mirror competitive environments
- Limit fiscal constraints
- Find new ways of doing things
- Articulate a vision for a future to collectively work towards
- Ensure the establishment of guardrails
- Conduct challenge-based procurement processes
- Establish more bars and cafes to spark serendipitous conversations
- Pay attention to feedback loops
- Engage in multi-stakeholder dialogues
- Reinstate the expectation of frank and fearless advice from the public sector
- Abandon cost centre silos in favour of interdisciplinary approaches
- Use adaptive leadership approaches to develop more reflexive policy
- Create metrics to measure wellbeing
- Value the “glue person” who makes connections within and across teams
- Facilitate self-mobilisation as a collective movement
- Equip stakeholders with leadership skills to facilitate collective intelligence
- Amplify collective intelligence examples
- Establish a “Department of Permacrisis”
- Make small, incremental changes
- Establish a regular forum to explore ideas about the future
- Foster a deeper understanding of data
- Allow friction between collective outcomes and practical wins
- Pay attention to algorithmic bias
- Consider technological histories and the influence of humans in systems
- Be wary of training data that is only partially accurate
- Look for inbuilt structural inequalities
- Watch for bias in datasets
- Be aware of unintended consequences
- Address outliers in datasets
- Share more data to exploit its potential to reveal new problems and solutions
- Keep pushing for change against barriers
- Take proper time for reflection
- Give issues the consideration they deserve
- Break down silos
- Realise the potential role of government as an enabler
- Be wary of AI solutions adopted too soon
- Consider the broad risks of efficiencies in government
- Avoid adopting technological applications without careful consideration of risk
- Invest in imaginative, collectively devised solutions
- Direct long-term funding towards enduring programs that look at the future
- Step into a creative space
- Try more things
- Be more nimble
- Imagine all possible futures
- Consider unintended consequences we may not want
- Spend time uncovering the gaps
- Attend to the spaces between
- Adopt speculative approaches to the future
- Identify broader outcomes than narrow technical fixes
- Engage the community by mobilising collective intelligence action on issues Australians are passionate about
- Ensure technological change is accompanied by relevant regulatory reforms
- Make it easier to establish connections across various systems of administration
- Partner with those who bring diverse perspectives
- Progress collective intelligence within departments
- Negotiate across boundaries to make change in complex systems
- Maintain pockets of innovation by optimising, adjusting and enhancing systems
- Redevelop long-term programs to be more relevant
- Spend more time on problems
- Manage egos and power dynamics
- Shake up entrenched institutional thinking
- Use data to inform governance
- Lean into productive discomfort
In summing up, Martin Stewart-Weeks acknowledged how the generative, expansive and largely optimistic ethic that pervades both cybernetics and collective intelligence provides a powerful antidote to what Professor Mulgan describes as a “learned fatalism” that seems often to pervade the policy and “public work” space.
Thanks to all who participated in the collective intelligence, cybernetics and system reform discussion, an event co-presented by the ANU School of Cybernetics and the Paul Ramsay Foundation.