PhD Spotlight: Lorenn Ruster

Recent graduate Lorenn Ruster talks us through her PhD journey


Lorenn Ruster, PhD Graduate from the ANU School of Cybernetics

There is immense responsibility that comes with technology development, and Lorenn Ruster believes entrepreneurial organisations play a crucial role in shaping our technologies and therefore, our futures.

By establishing guiding principles early – whether engaging with or building AI-enabled products – entrepreneurial organisations can influence the scaling of future innovations and their broader impact.

Lorenn explains how she is working with practitioners who want to act responsibly when it comes to AI development.

Lorenn presenting at the Eighth AAAI/ACM Conference on Artificial Intelligence, Ethics, & Society in Madrid, Spain (2025)

Tell us about your research and what led to your chosen thesis topic?#

My research explores proactive, responsible AI practices in entrepreneurial organisations. I’m working with organisations that are either building AI-enabled products or have these products on their roadmap. They want to be responsible but are unsure how to act on that desire and motivation.

We’re talking about organisations that aren’t creating high-risk systems such as medical devices or biometrics, so they aren’t going to be subject to significant regulatory intervention. They might have to disclose that a customer is engaging with a chatbot and not a human, and while they need to comply with standard privacy and consumer data laws, they’re not obligated to do much beyond that. So, it’s really tapping into a different sense of responsibility; not a compliance-based one, but what you might call responsibility as a virtue, or narrative responsibility: who you are, who your organisation is, and the type of future you want to be contributing to.

Through various avenues, organisations such as early-stage start-ups have found me and asked me to work with them to figure out what being responsible looks like in practice. I work with them in a collaborative way, adopting what I call design intervention research. I’m not just observing, interviewing, or analysing documents; I’m working interactively to co-create prototypes with these different organisations.

Lorenn's conference paper presentation at the Fairness Accountability and Transparency Conference (FAccT) in Athens, Greece appears within 'Responsible AI in Practice 2' on the schedule

I’m focusing on three prototypes for my PhD to help organisations build, or work with, AI-enabled products in a responsible way.

One of those prototypes is a Dignity Lens framework. The framework is interdisciplinary in nature, designed with influences from psychology, international conflict resolution, management, and philosophy, and it engages with the plurality of understandings of what dignity means.

The Dignity Lens has been trialled within the everyday practices of a data science team building algorithms and integrated into their workflows. For example, there is a wiki page dedicated to the Dignity Lens and associated JIRA tickets to ensure that reflective practices using the Dignity Lens occur throughout the product development lifecycle. Numerous insights were gained from the process, with one data scientist reflecting on how the stakeholders involved in the design of the product could be more meaningfully engaged in future iterations through the dignity concepts of recognition, acknowledgement and acceptance of identity. Similarly, viewing the product design through the Dignity Lens allowed for giving the benefit of the doubt (a component of thinking about dignity-as-experience) and prompted some additional design considerations for the next iteration.

Lorenn is part of the Governing Systems Safely and Responsibly panel at ISACA Conference 2023 with Maitreyi Singh, Dr. Kartik Gupta and Alicia Lillington (Left to Right)

Moving on to one of the other prototypes, we take a step back, because the Dignity Lens already assumes the organisation wants to prioritise dignity as one of its principles. But what if you don’t know what your values or principles are for AI development? This prototype, which is actually the first step, is what I’m calling a Pledge-Making prototype. It emerged out of working with an organisation that wants to be responsible and has some established organisational values, but is a startup of three or four people, trying to work it all out.

Their view is that if they commit to something and don’t know how to do anything about it, that’s a problem. So how do you meaningfully engage with selecting principles for AI development without engaging in responsibility-washing or ethics-washing? Together we worked through that over the course of about 18 months to bed down what it means for them.

That prototype is what I call Pledge-Making because that’s one of the outputs – distilling what your pledges are – but it’s more than that. It’s a process of thought experimentation to work out what’s plausible in your context, what you can commit to, and how to stay true to your own sense of responsibility without overcommitting, underdelivering or overcommunicating, all while dealing with the funding environment of a startup. Then comes the embedding phase, the idea being: if you’ve done this pledge-making, how do you make sure you’re living that pledge in the day-to-day?

The third prototype is focused on reflective practices. In it, we trial a range of different ways of continuing to pay attention to the actions taken and their relationship to the pledges made. In some cases, this reflective practice was driven by me as the researcher interacting with the practitioners, but we also saw instances where practitioners used LLMs (large language models) to assist them in reflecting as they designed a product, or during the coding process itself.

Lorenn with her research poster of five characteristics of responsible AI at the Next Generation Responsible AI Symposium, hosted by CSIRO and the Australian Institute for Machine Learning

What does responsible AI practice mean?#

At the moment, a lot of what we mean by responsible AI practice is complying with regulations and laws. My work is trying to show that yes, these compliance and regulation checkboxes are important and necessary, but they’re insufficient and they’re all operating in what I call a protective mode: protecting people from harm, protecting people from negative consequences and negative impacts. But that’s only one part of the equation of what responsibility means and looks like.

My work is trying to shed light on some of the other parts of what responsible AI practice can mean, which is a more proactive stance towards responsibility. So, you’re not just motivated by the stick. Your own sense of moral compass or virtues and humanness is brought into the process instead of being separated out. Ironically, protective mechanisms often encourage operating like a machine because they are very mechanistic and, in the process, you miss out on some of the other ways to enact responsibility practices.

Do you think your time at the School has influenced you as a person?#

I was pretty embedded in systems-based approaches before coming to the School, but that has definitely been solidified by the way work is done here. I feel I can’t not see the system dynamics now, whereas before it was something I drew on at different points in time. Now, for better or worse, it’s there all the time. And seeing things through that lens has certainly changed things for me.

There is also a much deeper sense of there being a plurality of perspectives all the time, and I have a much wider awareness of how to take on other people’s perspectives, or at least be aware of emphasising a particular one, which we all do somewhat subconsciously. I’m more aware and able to take a step back and ask, okay, what other perspectives are important here, instead of going so deep into one perspective.

There is a bit of muscle building needed to be explicit about the perspective you’re in, and then how that might need to be questioned.

Lorenn with two members of her supervisory panel: Prof. Katherine Daniell and Associate Professor Liz Williams. Her third supervisor, Prof. Jenny Davis, is currently based in the USA.

What type of role would you like to see yourself in next, do you have anything lined up?#

I see myself continuing to bridge academia and practice. That could take on a variety of forms within academic institutions and/or within or advising organisations grappling with responsible AI practice.

I love being able to accompany organisations and students in making their commitments to responsible practice real when it comes to AI design, development, deployment and governance. And I love being able to see patterns across experiences and making wider conceptual and theoretical contributions for others to build upon.

Also important to me is continuing to convene spaces where individuals and organisations really believe in their shape-shifting ability – so they know they are not neutral actors but are building something that shapes people and futures, and feel empowered to take hold of their responsibility proactively.

Thinking of a PhD in Cybernetics?#

Visit our PhD webpage

