When the governments of South Australia and Western Australia launched mobile apps that use a combination of facial recognition and geo-location tagging to support the management of home-based quarantine, a public backlash ensued.
The apps became the subject of intense debate in Australia, and even in the United States, as commentators criticised the techno-centric approach to monitoring quarantine participants.
The use of artificial intelligence (AI) technologies, like these mobile apps, is on the rise, and the response to the COVID-19 pandemic has only accelerated our reliance on them. A variety of AI-enabled applications were introduced across the world to manage public health responses to the pandemic.
Governments in many countries used AI-enabled systems to improve healthcare services, contact tracing, epidemic management, surveillance, testing, and quarantine monitoring.
The deployment of such technologies may aid the efficient delivery of public services. However, their use has also been accompanied by issues that spark fears of abuse.
Government policy and regulation of emerging technologies
As a result of technological developments, governments and regulatory agencies face the challenge of ensuring that citizens are protected, fair markets are preserved, and regulations and standards are enforced in a way that allows business and technology to thrive. Traditional approaches to regulation may not always be sufficient for new or emerging technologies, and there can be a disparity between the pace of technological change and the pace of regulatory action.
Discrimination in facial recognition technology
AI and facial recognition technology will likely be used across many sectors in the future, including public health and law enforcement, so it is critical to ensure that discrimination is not embedded in the design of these technologies. There are already many examples of AI-enabled systems discriminating against marginalised groups; facial recognition in particular has recently been fraught with controversy due to racial and gender bias.
Normalisation of surveillance
Activists and scholars are concerned that using facial recognition for public health purposes may contribute to the normalisation of surveillance culture. Although facial recognition technology may be an efficient means of addressing public health challenges during the pandemic, the policy and regulatory regimes established during the pandemic may not be sufficient to manage the technology afterwards. It is unclear whether governments trialling this technology have considered the implications of the trials beyond the life of the pandemic, or the potential issues for Australian society arising from a normalisation of facial recognition over the long term.
Data privacy
Concern over privacy is one of the most pressing issues for the use of facial recognition technology, due to a lack of transparency about how information is collected, managed, and used. Facial recognition technology combined with ubiquitous cameras and big data could significantly infringe on citizens’ liberty and rights to privacy.
These four issues, which frequently arise from the use of facial recognition technology, were discussed in a new episode of the Algorithmic Futures Podcast, produced in collaboration with the 2021 ANU School of Cybernetics PhD cohort.
The podcast hosts, School of Cybernetics Senior Lecturer Dr Elizabeth Williams and ANU School of Engineering Senior Fellow Dr Zena Assaad, were joined by PhD students Memunat Ibrahim, Lorenn Ruster, Ned Cooper, and Amir Asadi.
The discussions focused on the mobile-based applications used by the Western Australian and South Australian governments in their pandemic responses. With an emphasis on the concepts of scaling and technological transitions, the PhD students used the Multi-Level Perspective (MLP) as a framework to analyse the social implications of facial recognition in this case study, beyond the life of the trials.
The episode delved into the rise of new technologies and ‘accelerated’ transitions during a crisis, as well as the roles and impacts of the different actor groups involved in socio-technical transitions, through interviews with the following scholars and experts from various disciplines:
- Peter Wells: Professor of Business and Sustainability, Cardiff University
- Lizzie O’Shea: Human rights lawyer, writer, broadcaster and founder of Digital Rights Watch
- Gavin Smith: Associate Professor in the School of Sociology, Australian National University
- Mark Andrejevic: Professor in the School of Media, Film, and Journalism, Monash University
- Angela Webster: Clinical Epidemiologist, Nephrologist and Transplant Physician, School of Public Health, University of Sydney
- Diego Silva: Senior Lecturer in Bioethics, School of Public Health, University of Sydney
Listen to the full episode and/or read the transcript here.
This article was written by Amir Asadi, Memunat Ajoke Ibrahim, Lorenn Ruster, and Ned Cooper.
This podcast is a project linked to the Algorithmic Futures Policy Lab (AFPL) supported by an Erasmus+ Jean Monnet grant from the European Commission. The School of Cybernetics is one of the collaborators of the AFPL.
The European Commission's support for the Algorithmic Futures Policy Lab does not constitute an endorsement of this article or the podcast episode's contents, which reflect the views only of the speakers, and the Commission cannot be held responsible for any use which may be made of the information contained therein.