Looking at government AI Ethics instruments through a dignity lens

Our newsfeeds and Twitter feeds are overwhelmed with stories of AI gone wrong: technologies that deepen inequalities, discriminate against certain groups of people and, in some cases, undermine democracy. The School of Cybernetics, building on the work of the 3A Institute, is committed to a different technology narrative: a future where scale is pursued in ways that are safe, sustainable and responsible.

Undoubtedly, a core responsibility of governments is to protect citizens from the harms of technology. But is that all? Are our public institutions limited to minimising harms and shielding us from technology's downsides? Or do they also have a responsibility to proactively promote human flourishing and create the conditions for technology's best possible upsides?

These were some of the questions at the crux of Masters student Lorenn Ruster's capstone project. The capstone project is the final part of the Master of Applied Cybernetics program: students work with organisations on a mutually agreed project and, in the process, translate their learnings to real-world contexts. Throughout her studies, Lorenn has been fascinated by the intersection of dignity and technology, asking questions like 'What would a dignity-first technology development process look like?' as part of the design and creation of her Maker project in 2020. Through the capstone project, she extended this thinking into a government context by collaborating with the Centre for Public Impact (CPI), whose mission is to work with governments, public servants and other changemakers to reimagine government.

Over the course of three months, Lorenn explored notions of dignity and designed a Dignity Lens to analyse the extent to which dignity has been a driving force underpinning the AI ethics instruments of the governments of Australia, Canada and the United Kingdom. Along the way, she learned a great deal about infusing her research with cybernetics and communicating findings to different audiences in straightforward, accessible language.

The report – Exploring the role of dignity in government AI ethics instruments – articulates a Dignity Ecosystem consisting of a balance of Protective and Proactive roles for governments. The research findings assert that the ecosystem is off-balance, with governments focused on protecting citizens from dignity violations through prevention and remedy mechanisms and less concerned with proactively promoting dignity in AI ethics instruments. The report also makes suggestions about potential ways forward to heighten awareness of the roles played by governments in the dignity ecosystem and incorporate proactive roles into already-existing mechanisms.

If you’re interested in finding out more about the research, or in co-creating ways to enable healthy Dignity Ecosystems in AI, Lorenn Ruster (lorenn.ruster@anu.edu.au) and Thea Snow, CPI ANZ Director (thea@centreforpublicimpact.org), would welcome your collaboration.

You can read the report here.

The image was created by Lorenn Ruster using a Style Image Transfer AI algorithm developed by Derrick Schultz and Lia Coleman, drawing on “Coloured Pattern Texture” by Chrismatos ♥90% OFF, sorry, licensed under CC BY 2.0, and “MIT Museum: Kismet the AI robot smiles at you” by Chris Devers, licensed under CC BY-NC-ND 2.0.

