Gender and Artificial Intelligence

This article is by Genevieve Bell, Ellen Broad, Brenda Martin, Ellen O'Brien, Juliette Parsons, and Alexandra Zafiroglu.


Written by: ANU School of Cybernetics
8 Mar 2022


Abstract

This entry explores the interactions and intersections of gender and artificial intelligence (AI) over the past century. It presents four anthropological and anthropologically inspired approaches to scrutinize relationships between gender and technical systems: first, looking at what came before the birth of AI; second, studying the sites of production of AI; third, examining how perceived biological differences still inform AI classifications for gender and other sociocultural constructions; and fourth, questioning the performance of gender by AI systems themselves. It concludes by observing that conversations regarding gender and AI are no longer restricted to specific disciplines (e.g., gender to social scientists, and AI to computer scientists) and will continue to require integrated multidisciplinary efforts to understand and act on insights gleaned from an exploration of the relationships between these concepts. It highlights intersections between gender, AI, and subjects such as race and ethnicity, the environment, and culture as areas for further focus.

Keywords

artificial intelligence; automation; cognition; cybernetics; embodiment; feminist theory; gender; science and technology studies; technology

This article was originally published on Wiley Online Library

There are many ways to describe and define artificial intelligence (AI). There is the strictly instrumentalist approach—“a collection of interrelated technologies used to solve problems that would otherwise require human cognition. Artificial intelligence encompasses a number of methods, including machine learning (ML), natural language processing (NLP), speech recognition, computer vision and automated reasoning” (Walsh et al. 2019, 14); or the subtle nuance—“a constellation of technologies, including machine learning, perception, reasoning, and natural language processing” (Crawford et al. 2016, 2). In reaching an understanding of AI, it is probably important to go back to its first instantiations. It began in 1955, when a group of American researchers, including mathematicians, neuroscientists, and information scientists working across industry and higher education, submitted a proposal to the Rockefeller Foundation to fund a summer workshop at Dartmouth College. Their interest: the future of computing. Here is their working hypothesis:

The study is to proceed on the basis of the conjecture that every aspect of learning or any other feature of intelligence can in principle be so precisely described that a machine can be made to simulate it. An attempt will be made to find how to make machines use language, form abstractions and concepts, solve kinds of problems now reserved for humans, and improve themselves. (McCarthy et al. 1955, 2)

In this way, AI is not a technology or even an imagination of a technology; it is a research agenda and an organizing principle to tidy up and create coherence around a body of existing and proposed work, akin to “ubiquitous computing” thirty years later (Dourish and Bell 2011).

Ideas of gender have been entwined in the definition, development, and uses of AI systems since the origination of this research agenda in 1955. The original plan makes clear one kind of gendered frame: “We propose that a 2 month, 10 man study of artificial intelligence be carried out during the summer of 1956 at Dartmouth College in Hanover, New Hampshire” (McCarthy et al. 1955, 2). Indeed, the appendix to the proposal includes a list, referencing no fewer than forty-eight “People Interested in the Artificial Intelligence Problem,” all of whom were men (McCarthy et al. 1955, 11). Since that time, the gendering of AI has manifested itself in everything from how women have been excluded and marginalized from the sites of AI’s productions, to how female bodies are read and heard by AI systems or even enable harassment, to how the work that AI systems automate is itself gendered.

This entry presents four anthropological and anthropologically inspired approaches to the interactions and intersections of gender and AI. Drawing on classic anthropological techniques, these approaches offer different methods to scrutinize relationships between gender and technological systems: first, by looking at what came before the birth of AI (its histories and antecedents); second, through studying the sites of production of AI (who does this work and how?); third, by examining how perceived biological differences still inform AI classifications for gender and other sociocultural constructions (how is meaning ascribed to bodies and experience?); and fourth, by questioning the performance of gender by AI systems themselves (how is power manifest in cultural products?).

Anthropological and Anthropologically Inspired Encounters of AI

Anthropologists have long had an interest in technologies and technical systems. The locus of that interest has changed, as have the ways we study, interrogate, and engage with technologies and technical systems. At its most general level, within the anthropological literature, technology has been understood as both the literal object and a sociotechnical object, that is, the ways in which technology produces or is produced by culture. Boasian taxonomies of the early twentieth century focused on technological artifacts and inventories. More recent anthropological works have explored how new technologies have (re)shaped cultural patterns and flows in both their presences and absences.

Of course, anthropologists are not alone in their formulations of technologies as simultaneously literal and sociotechnical objects. In the 1970s and 1980s, science and technology studies (STS) reinvigorated conversations on the cultural significance of technologies, and many anthropologists found themselves exploring sites of new information and communications technologies’ production and their producers, (e.g., computer-aided designers: Suchman 1987). Increasingly this work began to focus on high technology, or digital technology, in a range of cultural and technical contexts, including Trinidadian internet users (Miller and Slater 2000), middle-class urban Asian mobile users (Bell 2005), American connected teenagers (boyd 2014), Ugandan cybercafe users (Burrell 2012), Warlpiri television users (Michaels 1986), and Australian wifi users (Jungnickel 2013). There have also been ethnographic and ethnographically inspired studies of the digital (Pink et al. 2015), data-driven (see Boellstorff and Maurer 2015), mobile (Hjorth et al. 2020), and virtual worlds (Boellstorff 2008) on which many of today’s AI systems rely.

Building on such work since the turn of the twenty-first century, AI has become a topic of critical inquiry within and beyond academic anthropology. The organizations where anthropologists find themselves working on AI are not exclusively research departments at universities, nor is the literature they engage with and write into wholly peer-reviewed anthropology journals. Some have seats at the table in organizations developing AI products and services (e.g., Anderson et al. 2019; Cesafsky, Stayton, and Cefkin 2019), while others wear multiple hats (e.g., Bell 2021b; Crawford and Joler 2018; Suchman 2013). They engage with critical perspectives on AI—and on gender—that sit not only in anthropology and in closely aligned disciplines such as sociology but also further afield, for instance in computer science and data science. Furthermore, while not all this more recent critical inquiry about AI is “by anthropologists,” it is, as Genevieve Bell (2021b) argues, grounded in the anthropological insight that the topic of interest is at once the object and the imaginings of it. AI is not “a thing” but many things, both real and imagined, ever present, and hardly singular.

Recent notable anthropological and anthropologically inspired analyses of AI include cultural histories of AI (Bell 2021a, 2021b; Forsythe 2001; Hicks 2017); critiques of drones (Gusterson 2016); robots in the Japanese family (Robertson 2017) and in American university research and development labs (Richardson 2015); early regulatory, legislative, and policy frameworks and framers (e.g., Broad 2018); debates around privacy, surveillance, and trust; the role of algorithms in our lives (Besteman and Gusterson 2019; Finn 2017; Noble 2018); and emergent discussions around indigenous AI, decolonizing AI, and indigenous data sovereignty (Harle, Abdilla, and Newman 2018; Maitra 2020). Within this burgeoning field of work, there are four ways to disentangle how concepts of gender and the classificatory systems reflecting gender, gendered labor, and gendered performance have shaped and been shaped by AI since the 1950s.

Histories and Antecedents: What Came Before and Has Been Obscured

AI is no more a stable object than gender. The 1955 Dartmouth summer workshop proposal marked a shift from gendered and embodied approaches to perception and understanding to neutered and cerebral ones in the latter half of the twentieth century. The AI it proposed was built on, and in dialog with, two antecedents that defined and modeled intelligence by explicitly accounting for human and nonhuman bodies and for gender.

The first of these is cybernetics, with its roots in the 1946–53 Macy Conferences in New York, where engineers, neuroscientists, psychologists, linguists, mathematicians, philosophers, and anthropologists developed concepts of information, feedback, and control. They theorized systems that could account for an expansive range of phenomena from social and cultural practices to cognitive, economic, biological, linguistic, and ecological processes (Pias 2016). The anthropologists Margaret Mead and Gregory Bateson were influential figures in cybernetics, especially in second-order cybernetic theory and Bateson’s well-known theory of mind (Bateson 1972; Mead 1968). Cyberneticians considered intelligence the outcome of relationships between parts of a system; they focused on relationships rather than individual objects or actors and on the multiple contexts in which a system existed. They accounted for gendered bodies and for nonhuman bodies, took seriously consciousness across species, and provided conceptual tools for developing systems that could perceive, understand, and learn. They incorporated more diverse perspectives and connected to broader theories and considerations than mathematics or neuroscience alone (Rid 2016; Wiener 1950).

The second antecedent was Alan Turing’s “thinking machine.” Turing theorized a thinking, “intelligent” form of computing before computer science existed as a discipline. His imitation game, also called the Turing test, was the method he proposed for examining whether such a hypothetical machine had indeed met expectations of “intelligence” as theorized in his paper: whether it could successfully mimic human intelligence through typewritten dialogue (Turing 1950). For a machine to pass the test, a person should not be able to determine whether their conversational partner was a machine rather than a human. Turing’s test is a prelude to one enduring goal of subsequent AI agendas: to create a “human or biological simulacrum” (Star 2010, 38). But in whose image was this goal defined? In the game, male conversation differs from female, and thus intelligence is not wholly abstract or cerebral but manifest through gendered speech. As Judith Halberstam (1991) notes, embedded in Turing’s bifurcation of male and female speech is a hierarchy of intelligence.

The Dartmouth group shaped the conversation about AI toward models, algorithms, and systems that could display scientific rationalist notions of intelligence exemplified by disciplinary and traditional ideas of masculine reasoning (Adam 1998). Unlike Turing’s language-as-spoken-by-gendered-humans approach to imitating intelligence, intelligence in Dartmouth conversations was equated to abstract language capabilities, logic, and chess. Cybernetic conceptions of systems embedded in and interacting with the world were replaced by universal abstractions. Those present at the Dartmouth conference went on to establish research and development agendas in leading universities and industry labs in the United Kingdom and the United States over the next half-century, foregrounding notions of rational, cerebral decision making in the development of AI.

The Production of AI: Who Produces AI Systems and How?

In STS scholarship in the 1970s and 1980s, gender played a central role in explicating the work of technology development and in elucidating everyday interactions with new technologies in homes and places of employment. Foundational arguments about the concept of gender as both embedded in and reproduced by technology in a process of mutual shaping (MacKenzie and Wajcman 1999; Wajcman 2010), hierarchical gender relations embedded in technological artifacts (Cockburn and Ormrod 1993; Cowan 1983), and the impact of the marginalization of women from the technological community (Faulkner and Arnold 1985) all informed nascent studies of how AI solutions were produced. The work of two anthropologists stands out in this literature for their continued influence in critical studies of AI.

Lucy Suchman and Diana Forsythe worked in and studied the technology industry and AI research labs throughout the 1980s and 1990s, providing rich analysis both of the implicit cultural values encoded in the design of specific technological artifacts by communities of practitioners and of how these values manifested downstream as users interacted dynamically with those artifacts. At Xerox PARC, Suchman observed how those responsible for the design of machine interfaces consistently emphasized planning and rational individual action while neglecting the contexts and environments in which these actions took place. Suchman’s emphasis on “situated action” and her ethnographically inspired observations of human behavior in the context of AI systems provided the intellectual basis for the field of human–computer interaction (Suchman 1987) and for more recent work on gender and other biases in product design (Eubanks 2018; Criado-Perez 2020) and on the deployment of technologies to harass and intimidate particular end users and/or unintended users (Lopez-Neira et al. 2019; Messing et al. 2020).

In contrast, at the Knowledge Systems Laboratory at Stanford, Forsythe (2001) focused on designers and questioned the epistemological assumptions inherent in the processes of “knowledge engineering,” by which the knowledge conferred on a machine is merely “retrieved” from an expert and therefore framed as stable and singular. Forsythe argued that assumptions about knowledge implicit in expert systems engineering resulted in the broad erasure and obfuscation of women and gender from their AI systems and workplaces. She outlines how in these spaces women were mostly employed to do office administration, work that was considered separate from building AI. By comparison, the “technical” work deemed central to building AI, which included mathematical modeling and knowledge extraction from experts, was mostly done by men. Forsythe’s attention to who is invited to perform which types of knowledge, and to which knowledge types are valued over others, laid the foundations for growing attention to the specificities of human subjectivities in AI work and for explicitly gendered critiques of the epistemological foundations of AI, including the current focus on fairness, accountability, and transparency in machine learning and AI, and on bias in the data sets on which systems are trained.

Rejecting and Reifying Universal Categorizations: Ascribing Meaning to Bodies and Experiences

As early as the 1970s, anthropologists began to eschew universal categories of gender rooted in the body. Anthropological analysis and understanding of gender and biological sex has undergone significant transformation since the 1970s (e.g., Butler 1990; Ortner 1974; Strathern 1988), resulting in contemporary understandings of gender in anthropology as classificatory systems and as contextually specific embodied performances of sociocultural norms. However, the imaginings of gender that inform contemporary AI products’ capabilities are substantially less sophisticated, as are popular culture representations of female AIs. Current critical studies of AI recognize the continued slippages between sex and gender, and the concurrent reifications of other cultural classification systems, especially those of race and ethnicity, and propose actions for systemic change rather than lamenting current shortcomings (Bentley 2020; Buolamwini and Gebru 2018; West, Whittaker, and Crawford 2019).

Many AI systems claiming to recognize and classify gender—as part of facial recognition, data inference projects, or automated decision-making tools—favor physiological, sex-based binary forms of classification (Keyes 2018; Spiel, Keyes, and Barlas 2019). Consumer and industrial AI products still reflect a direct mapping of gender onto human bodies that has been outdated in anthropology for half a century. Typical are products that purport to be able to detect, identify, or understand gender using criteria that reify universal binary concepts of gender based on visual representations of the body (e.g., Sightcorp 2021; Visage Technologies n.d.). Moreover, AI systems classifying gender today often reassert rigid sex-based differences in society and misclassify and erase trans and nonbinary bodies (Keyes 2018, 4). The erasure of bodies that do not conform to rigid male/female classifications, both as a mechanism of control and to reduce an individual’s agency, has a history that predates AI classification systems (Hicks 2019).

The situation is no better with imagined AIs. Donna Haraway’s concept of the cyborg as a “hybrid of machine and organism” that blurs boundaries between the artificial and the organic, the human and the nonhuman, male and female (1991, 3) offers a different gender trajectory for imagined AIs, in contrast to the staid and outdated binary-reproducing categories in AI products. Not bound by “organic reproduction which biologically assigns sex,” the cyborg serves as a subversive feminist metaphor (Haraway 1991, 3). Haraway’s challenge to sex-based gender has since been transposed into AI manifestations in popular culture that are initially presented as the “ideal” woman, only to be subverted: from the humanoid robots of popular culture, such as PAT in Disney’s Smart House (1999), Ava in Ex Machina (2014), and Dolores in Westworld (2016–), to the disembodied yet female AI of Samantha in Her (2013). Such subversion has yet to reach the design of the ultimate expression of AI gender, namely the AI digital assistant, which, from the days of the chatbot ELIZA through to Siri, Cortana, and Alexa, has been assigned an outmoded gendered labor of support, aid, and care.

Performing Gendered Labor: How Power Is Manifest in Products

As Bell notes, “every day many of us engage with [AI] systems that contain some or all of these technological pieces, all of which have been designed, built and sold by specific people in specific ways. AI is, in this way, both a technology and an imagination of a technology” (2021b, 444). As one of the earliest mass-adopted examples of AI in consumer products, digital assistants are a popular locus for critiques of how gender is performed and how existing gendered power relations are imagined and inscribed into a new class of AI-enabled cyber-physical products. Through their overtly feminized speech, digital assistants often enact domestic and service tasks. In doing so, they reify what Michelle Rosaldo (1974) has described as relationships between women and the “private” domains of societies—the domestic—in contrast to public domains of power and influence. Through this association, feminized digital assistants illustrate how gendered dimensions of labor are performed or reproduced in these systems (Woods 2018). Digital assistants with female voices and names remain helpful, apologetic, and nonthreatening even when confronted with harassment or abuse, thereby associating femininity with service and obedience (Strengers and Kennedy 2020).

Building on this analysis of the gendered automation of labor by AI systems, Amy Schiller and John McMahon (2019) and Miriam Sweeney (2021) argue that automating these types of work necessarily devalues and exploits the labor done by women and people who are not white, and that the automation of service work by digital assistants impacts how women can access labor opportunities and rights in service work. Hilary Bergen (2016) and Daniel Sutko (2019) extend these issues into the domestic sphere of work, arguing that the disembodied and replaceable nature of digital voice assistants diminishes forms of affective household labor that women often do, with some reviews and user accounts suggesting that these devices can assume the role of a wife (Woods 2018). This scholarship connects to anthropological work exploring the inequality of tasks between genders and the ways that race and ethnicity intersect with gendered divisions of labor.

Conclusion

Much as our scholarly and popular understandings of gender have changed since the mid-twentieth century, so have our understandings of what we refer to when we write and speak of AI. Far from each being the concern of different domain experts—gender for social scientists and humanities scholars on the one hand, and AI for computational scientists and engineers on the other—we recognize how each has shaped thinking and conversations about the other for most of the past century. In historical and in contemporary research and development processes, product and service offerings, and cultural and artistic expressions, ideas and aspirations of artificial intelligence have frequently been anchored in representations of the human body, theories of cognition, and social relations. As we consider the relationships between gender and artificial intelligence, we encounter what it means to model human abilities, and to exist in systems of human-to-machine relations.

Artificial intelligence is entangled with the same social, cultural, political, and economic structures that STS scholars have traced in unpacking the histories of other technologies, from cybercafes to photocopiers: like those technologies, AI has shaped and been shaped by the contexts in which it is imagined, designed, built, and used. Gender as a social construct can be (and has often been) defined by, enacted through, and embodied by AI systems that shape our interactions with the world. Thus, fundamental questions that underpin these analyses of gender and technology—such as who builds this technology and who studies those who build it; what conversations occur surrounding this technology and who is in those conversations; what epistemologies shape the intent of the technology; and what agency this technology allows different actors—remain relevant in exploring the entanglements of gender and AI stretching back well before the first use of the latter term at the Dartmouth conference in 1956.

The conversation on gender and AI is only in its early stages. Some of the intersections that gender scholars have grappled with on other subjects, including gender and the environment (see Hawthorne 2002); gender and international politics (see Enloe 2014); gender, race, and ethnicity (see Henne 2018); and gender and cultural values such as the competing claims of privacy and security (see Friedman and Hendry 2019), could also be addressed in relation to AI. The everyday presence of AI in the objects humans interact with suggests that these intersections are necessary sites of analysis as AI is embedded within scaling systems of people, the environment, and technologies.

References and further reading

  • Adam, Alison. 1998. Artificial Knowing: Gender and the Thinking Machine. London: Routledge.
  • Anderson, Ken, Maria Bezaitis, Carl DiSalvo, and Susan Faulkner. 2019. “A.I. among Us: Agency in a World of Cameras and Recognition Systems.” In EPIC 2019: Ethnographic Praxis in Industry Conference Proceedings, 38–64. Arlington, VA: American Anthropological Association. doi:10.1111/1559-8918.2019.01264.
  • Bateson, Gregory. 1972. Steps to an Ecology of Mind. New York: Ballantine Books.
  • Bell, Genevieve. 2005. “The Age of the Thumb: A Cultural Reading of Mobile Technologies from Asia.” In Thumb Culture: The Meaning of Mobile Phones for Society, edited by Peter Glotz, Stefan Bertschi, and Chris Locke, 67–88. Bielefield, Germany: Transcript. doi:10.14361/9783839404034-004.
  • Bell, Genevieve. 2021a. “Touching the Future.” In Griffith Review 71: Remaking the Balance, edited by Ashley Hay, 251–63. Brisbane: Griffith University.
  • Bell, Genevieve. 2021b. “Talking to AI: An Anthropological Encounter with Artificial Intelligence.” In The SAGE Handbook of Cultural Anthropology, edited by Lene Pedersen and Lisa Cliggett, 442–57. London: SAGE. doi:10.4135/9781529756449.n25.
  • Bentley, Caitlin. 2020. “Including Women in AI-Enabled Smart Cities: Developing Gender-Inclusive AI Policy and Practice in the Asia–Pacific Region.” In Artificial Intelligence for Social Good, 204–43. Association of Pacific Rim Universities / Keio University. Accessed April 21, 2021, https://apru.org/resource/artificial-intelligence-for-social-good.
  • Bergen, Hilary. 2016. “‘I’d Blush If I Could’: Digital Assistants, Disembodied Cyborgs and the Problem of Gender.” Word and Text: A Journal of Literary Studies and Linguistics 6 (1): 95–113.
  • Besteman, Catherine L., and Hugh Gusterson. 2019. Life by Algorithms: How Roboprocesses Are Remaking Our World. Chicago: University of Chicago Press. doi:10.7208/chicago/9780226627731.001.0001.
  • Boellstorff, Tom. 2008. Coming of Age in Second Life: An Anthropologist Explores the Virtually Human. Princeton: Princeton University Press.
  • Boellstorff, Tom, and Bill Maurer, eds. 2015. Data, Now Bigger and Better. Chicago: Prickly Paradigm Press.
  • boyd, danah. 2014. It’s Complicated: The Social Lives of Networked Teens. New Haven: Yale University Press.
  • Broad, Ellen. 2018. Made by Humans: The AI Condition. Melbourne: University of Melbourne Press.
  • Buolamwini, Joy, and Timnit Gebru. 2018. “Gender Shades: Intersectional Accuracy Disparities in Commercial Gender Classification.” Proceedings of Machine Learning Research 81: 77–91. Accessed April 21, 2021, http://proceedings.mlr.press/v81/buolamwini18a.html.
  • Burrell, Jenna. 2012. Invisible Users: Youth in the Internet Cafés of Urban Ghana. Cambridge, MA: MIT Press. doi:10.7551/mitpress/9780262017367.001.0001.
  • Butler, Judith. 1990. Gender Trouble: Feminism and the Subversion of Identity. New York: Routledge.
  • Cesafsky, Laura, Erik Stayton, and Melissa Cefkin. 2019. “Calibrating Agency: Human–Autonomy Teaming and the Future of Work amid Highly Automated Systems.” In EPIC 2019: Ethnographic Praxis in Industry Conference Proceedings, 65–82. Hoboken, NJ: John Wiley & Sons. doi:10.1111/1559-8918.2019.01265.
  • Cockburn, Cynthia, and Susan Ormrod. 1993. Gender and Technology in the Making. London: SAGE.
  • Cowan, Ruth Schwartz. 1983. More Work for Mother: The Ironies of Household Technologies from the Open Hearth to the Microwave. New York: Basic Books.
  • Crawford, Kate, and Vladan Joler. 2018. “Anatomy of an AI System: The Amazon Echo as an Anatomical Map of Human Labor, Data and Planetary Resources.” AI Now Institute and Share Lab. Accessed April 21, 2021, https://anatomyof.ai.
  • Crawford, Kate, Meredith Whittaker, Madeleine Clare Elish, Solon Barocas, Aaron Plasek, and Kadija Ferryman. 2016. The AI Now Report. New York: AI Now Institute. Accessed April 21, 2021, https://ainowinstitute.org/AI_Now_2016_Report.pdf.
  • Criado-Perez, Caroline. 2020. Invisible Women: Exposing Data Bias in a World Designed for Men. London: Random House.
  • Dourish, Paul, and Genevieve Bell. 2011. Divining a Digital Future: Mess and Mythology in Ubiquitous Computing. Cambridge, MA: MIT Press. doi:10.7551/mitpress/9780262015554.001.0001.
  • Enloe, Cynthia. 2014. Bananas, Beaches and Bases: Making Feminist Sense of International Politics. Oakland: University of California Press. doi:10.1525/9780520957282.
  • Eubanks, Virginia. 2018. Automating Inequality: How High-Tech Tools Profile, Police, and Punish the Poor. New York: St Martin’s Press.
  • Faulkner, Wendy, and Erik Arnold, eds. 1985. Smothered by Invention: Technology in Women’s Lives. London: Pluto Press.
  • Finn, Ed. 2017. What Algorithms Want: Imagination in the Age of Computing. Cambridge, MA: MIT Press. doi:10.7551/mitpress/9780262035927.001.0001.
  • Forsythe, Diana E. 2001. Studying Those Who Study Us: An Anthropologist in the World of Artificial Intelligence. Stanford: Stanford University Press.
  • Friedman, Batya, and David G. Hendry. 2019. Value Sensitive Design: Shaping Technology with Moral Imagination. Cambridge, MA: MIT Press. doi:10.7551/mitpress/7585.001.0001.
  • Gusterson, Hugh. 2016. Drone: Remote Control Warfare. Cambridge, MA: MIT Press. doi:10.1063/1.5009234.
  • Halberstam, Judith. 1991. “Automating Gender: Postmodern Feminism in the Age of the Intelligent Machine.” Feminist Studies 17 (3): 439–60. doi:10.2307/3178281.
  • Haraway, Donna J. 1991. “A Cyborg Manifesto: Science, Technology, and Socialist-Feminism in the Late Twentieth Century.” In Simians, Cyborgs and Women: The Reinvention of Nature, 149–81. New York: Routledge.
  • Harle, Josh, Angie Abdilla, and Andrew Newman, eds. 2018. Decolonising the Digital: Technology as Cultural Practice. Sydney: Tactical Space Lab.
  • Hawthorne, Susan. 2002. Wild Politics: Feminism, Globalisation and Bio/Diversity. Melbourne: Spinifex Press.
  • Henne, Kathryn. 2018. “Intersectionality Theory of Gender and Race.” In The International Encyclopedia of Anthropology, edited by Hilary Callan, 2553–7. Hoboken, NJ: John Wiley & Sons.
  • Hicks, Mar. 2017. Programmed Inequality: How Britain Discarded Women Technologists and Lost Its Edge in Computing. Cambridge, MA: MIT Press.
  • Hicks, Mar. 2019. “Hacking the Cis-Tem.” IEEE Annals of the History of Computing 41 (1): 20–33. doi:10.1109/MAHC.2019.2897667.
  • Hjorth, Larissa, Sarah Pink, Heather A. Horst, Fumitoshi Kato, Baohua Zhou, Jolynna Sinanan, and Kana Ohashi. 2020. Locating the Mobile: Understanding Mundane Locative Media Practice in Households. New York: Springer.
  • Jungnickel, Katrina. 2013. DIY WIFI: Re-imagining Connectivity. Basingstoke: Palgrave Macmillan. doi:10.1057/9781137312532.
  • Keyes, Os. 2018. “The Misgendering Machines: Trans/HCI Implications of Automatic Gender Recognition.” In Proceedings of the ACM on Human–Computer Interaction 2 (November): art. 88. doi:10.1145/3274357.
  • Lopez-Neira, Isabel, Trupti Patel, Simon Parkin, George Danezis, and Leonie Tanczer. 2019. “‘Internet of Things’: How Abuse Is Getting Smarter.” Safe—The Domestic Abuse Quarterly 63: 22–6. doi:10.2139/ssrn.3350615.
  • MacKenzie, Donald Angus, and Judy Wajcman, eds. 1999. The Social Shaping of Technology. Milton Keynes, UK: Open University Press.
  • Maitra, Suvradip. 2020. “Artificial Intelligence and Indigenous Perspectives: Protecting and Empowering Intelligent Human Beings.” In AIES ’20: Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, February 7–8, 2020, 320–6. New York: Association for Computing Machinery. doi:10.1145/3375627.3375845.
  • McCarthy, J., M. L. Minsky, N. Rochester, and C. E. Shannon. 1955. “A Proposal for the Dartmouth Summer Research Project on Artificial Intelligence.” Accessed April 21, 2021, http://jmc.stanford.edu/articles/dartmouth/dartmouth.pdf.
  • Mead, Margaret. 1968. “The Cybernetics of Cybernetics.” In Purposive Systems, edited by Heinz von Foerster, J. D. White, L. J. Peterson, and J. K. Russell, 1–11. New York: Spartan Books.
  • Messing, Jill, Meredith Bagwell-Gray, Megan Lindsay Brown, Andrea Kappas, and Alesha Durfee. 2020. “Intersections of Stalking and Technology-Based Abuse: Emerging Definitions, Conceptualization, and Measurement.” Journal of Family Violence 35: 693–704. doi:10.1007/s10896-019-00114-7.
  • Michaels, Eric. 1986. Aboriginal Invention of Television: Central Australia 1982–6. Canberra: Australian Institute of Aboriginal Studies.
  • Miller, Daniel, and Don Slater. 2000. The Internet: An Ethnographic Approach. Oxford: Routledge.
  • Noble, Safiya Umoja. 2018. Algorithms of Oppression: How Search Engines Reinforce Racism. New York: New York University Press. doi:10.2307/j.ctt1pwt9w5.
  • Ortner, Sherry B. 1974. “Is Female to Male as Nature Is to Culture?” In Woman, Culture, and Society, edited by Michelle Zimbalist Rosaldo and Louise Lamphere, 67–88. Stanford: Stanford University Press.
  • Pias, Claus. 2016. “The Age of Cybernetics.” In Cybernetics: The Macy Conferences 1946–1953: The Complete Transactions, edited by Claus Pias, 12–26. Chicago: University of Chicago Press.
  • Pink, Sarah, Heather Horst, John Postill, Larissa Hjorth, Tania Lewis, and Jo Tacchi. 2015. Digital Ethnography: Principles and Practice. London: SAGE.
  • Richardson, Kathleen. 2015. An Anthropology of Robots and AI: Annihilation Anxiety and Machines. New York: Routledge. doi:10.4324/9781315736426.
  • Rid, Thomas. 2016. Rise of the Machines: A Cybernetic History. New York: W.W. Norton.
  • Robertson, Jennifer Ellen. 2017. Robo sapiens japanicus: Robots, Gender, Family, and the Japanese Nation. Oakland: University of California Press. doi:10.1525/california/9780520283190.001.0001.
  • Rosaldo, Michelle. 1974. “Women, Culture, and Society: A Theoretical Overview.” In Woman, Culture and Society, edited by Michelle Zimbalist Rosaldo and Louise Lamphere, 17–42. Stanford: Stanford University Press.
  • Schiller, Amy, and John McMahon. 2019. “Alexa, Alert Me When the Revolution Comes: Gender, Affect, and Labor in the Age of Home-Based Artificial Intelligence.” New Political Science 41 (2): 173–91. doi:10.1080/07393148.2019.1595288.
  • Sightcorp. 2021. “What Is Gender Recognition?” Sightcorp: Everything about Gender Recognition from Face Images. Accessed March 3, 2021, https://sightcorp.com/knowledge-base/gender-recognition-from-face-images.
  • Spiel, Katta, Os Keyes, and Pinar Barlas. 2019. “Patching Gender: Non-binary Utopias in HCI.” In CHI EA ’19: Extended Abstracts of the 2019 CHI Conference on Human Factors in Computing Systems, 1–11. New York: Association for Computing Machinery. doi:10.1145/3290607.3310425.
  • Star, Susan Leigh. 2010. “This Is Not a Boundary-Object: Reflections on the Origin of the Concept.” Revue d’anthropologie des connaissances 4 (1): 18–35.
  • Strathern, Marilyn. 1985. “Kinship and Economy: Constitutive Orders of a Provisional Kind.” American Ethnologist 12 (2): 191–209. doi:10.1525/ae.1985.12.2.02a00010.
  • Strathern, Marilyn. 1988. The Gender of the Gift: Problems with Women and Problems with Society in Melanesia. Oakland: University of California Press.
  • Strengers, Yolande, and Jenny Kennedy. 2020. The Smart Wife: Why Siri, Alexa, and other Smart Home Devices Need a Feminist Reboot. Cambridge, MA: MIT Press. doi:10.7551/mitpress/12482.001.0001.
  • Suchman, Lucy A. 1987. Plans and Situated Actions: The Problem of Human–Machine Communication. Cambridge: Cambridge University Press.
  • Suchman, Lucy. 2013. “Consuming Anthropology.” In Interdisciplinarity: Reconfigurations of the Social and Natural Sciences, edited by Andrew Barry and Georgina Born, 141–60. New York: Routledge.
  • Sutko, Daniel M. 2020. “Theorizing Femininity in Artificial Intelligence: A Framework for Undoing Technology’s Gender Troubles.” Cultural Studies 34 (4): 567–92. doi:10.1080/09502386.2019.1671469.
  • Sweeney, Miriam E. 2021. “Digital Assistants.” In Uncertain Archives, edited by Daniela Agostinho, Catherine D’Ignazio, Annie Ring, Nanna Bonde Thylstrup, and Kristin Veel, 151–60. Cambridge, MA: MIT Press.
  • Turing, Alan M. 1950. “Computing Machinery and Intelligence.” Mind 59 (236): 433–60. doi:10.1093/mind/LIX.236.433.
  • Visage Technologies. n.d. “Face Analysis: Age, Gender & Emotion Recognition.” Visage Technologies. Accessed August 3, 2020, https://visagetechnologies.com/face-analysis.
  • Wajcman, Judy. 2010. “Feminist Theories of Technology.” Cambridge Journal of Economics 34 (1): 143–52. doi:10.1093/cje/ben057.
  • Walsh, Toby, Neil Levy, Anthony Elliot, James Maclaurin, Iven Mareels, and Fiona Wood. 2019. “The Effective and Ethical Development of Artificial Intelligence: An Opportunity to Improve Our Wellbeing.” Australian Council of Learned Academies. Accessed April 21, 2021, https://acola.org/hs4-artificial-intelligence-australia.
  • West, Sarah Myers, Meredith Whittaker, and Kate Crawford. 2019. “Discriminating Systems: Gender, Race and Power in AI.” AI Now Institute. Accessed April 21, 2021, https://ainowinstitute.org/discriminatingsystems.html.
  • Wiener, Norbert. 1950. The Human Use of Human Beings: Cybernetics and Society. Boston: Houghton Mifflin.
  • Woods, Heather Suzanne. 2018. “Asking More of Siri and Alexa: Feminine Persona in Service of Surveillance Capitalism.” Critical Studies in Media Communication 35 (4): 334–49. doi:10.1080/15295036.2018.1488082.

Article ID

  • wbiea1678, Technology
  • wbiea1982, Digital Anthropology
  • wbiea2126, Feminism and Anthropology
  • wbiea1715, Gender
  • wbiea1946, Gender and Cinema, Anthropological Approaches to
  • wbiea1385, Mead, Margaret (1901–78)
  • wbiea2206, Queer Theory
  • wbiea2430, Tacit Knowledge
  • wbiea2287, Division of Labor
  • wbiea1547, Gender and Fieldwork
  • wbiea2208, Sex and Gender Roles, Division of Labor in
  • wbiea1517, Transgender
  • wbiea1393, Sex/Gender Distinction
  • wbiea2023, Media Anthropology
  • wbiea1299, Boas, Franz
  • wbiea1861, Design, Anthropology of
  • wbiea2484, Algorithms
  • wbiea1536, Bateson, Gregory