ADM+S Essay: Bots as more than human?

Author Dominique Carlon
Date 11 October 2022

‘The future of cyberspace belongs to bots’ (Leonard, 1996). This statement was published in a 1996 Wired article titled Bots are Hot! by American journalist Andrew Leonard. Leonard was referring to the proliferation of bots, or software applications that run automated tasks or activities (Holmes & Lussos, 2018), on Internet Relay Chat (IRC) servers.

In the article, Leonard (1996) documents a conversation in which a bot named Newt, asked about the meaning of life, responds: ‘Life, don’t talk to me about life, guest.’ Twenty-six years later the language around bots and the internet may have evolved, yet humans are still asking the same questions of bots, and bots continue to be designed to imitate humans.

In 2022, a Google engineer named Blake Lemoine infamously claimed that a language model (LaMDA) used to develop bots had become sentient, possessed a soul, and would appreciate receiving head pats (Lemoine, 2022). It is easy to dismiss such claims as the product of an overly active imagination, perhaps inspired by portrayals of bots and artificial intelligence in popular culture and science fiction. However, is this conclusion actually the outcome of a lack of collective human imagination? Humans have long defined, framed, and designed bots with the primary aim of passing as human, and our collective imagination of what bots can do is often constrained by this focus. The question arising from this strong tendency is whether humans are capable of imagining bots in another capacity. What is the purpose of designing bots that imitate humans, and could bots offer far more?

Throughout history, humans have created tools and machines that perform tasks modelled on human behaviour. While the shovel imitates a human digging with human hands, a front-end loader truck imitates and performs the task of many humans digging with many shovels. With the development of computers and digital machines, it is perhaps not surprising that humans quickly attempted to imitate the human brain in the form of language and intelligence.

In 1950, Alan Turing created the ‘imitation game’, which asked humans to detect whether they were in conversation with a computer or another human (Sheehan et al., 2020, p. 15). The ‘imitation game’ became known as the Turing Test and set the parameters for creating the ‘perfect chat bot’ that would be indistinguishable from a human (Sheehan et al., 2020, p. 15).

As bots have become more sophisticated, the foundations of the Turing Test have been adapted and reimagined, with tests such as ‘bot or not’ being used to decipher whether poetry or art has been created by a human or a computer. Although not all bots require human-like language or behaviour, the Turing Test set human intelligence as the benchmark, the ultimate semblance of intelligence to strive towards. This benchmark became particularly relevant for developing chat bots, or programs with natural language capabilities that can be configured to converse with humans (Mauldin, 1994).

Definitions, particularly around the concept of the ‘social bot’, began centring on the blurred boundaries between humans and bots. Gehl and Bakardjieva (2016), for instance, define the social bot as ‘purposely designed to appear to be human’ (p. 3), marking a shift from bots designed to conduct labour or even emulate human conversation, to bots designed to appear as a ‘social counterpart’ with a history and personality (Gehl & Bakardjieva, 2016, p. 2).

As bots become more convincingly human-like in behaviour, it is not surprising that humans are ascribing notions such as sentience to them. Coinciding with this, it is also becoming easier to forget that these sophisticated bots are the product of humans’ improved capability to build bots that pass the tests, standards, and expectations that we have set for them.

By prizing human intelligence as the ultimate aspiration for bots, humans have also cultivated the fear of what would happen if bots or machines exceeded our own standards and capabilities. By training bots to be more intelligent, we have arrived at a moment where it is easier than ever to anthropomorphise our own creations. As Johnson (2022) states, ‘the more people imbue artificial intelligence with human traits, the more intently they will hunt for ghosts in the machine’.

Anthropomorphising, or ascribing human characteristics to non-human entities such as bots, has played a central role in how bots are defined, conceptualised, designed, and represented. Unlike the cyborg representations in the Terminator or Blade Runner films, a bot does not need to inhabit a physical body. Rather, bots can perform and communicate solely through the generation of words, pictures, and other actions.

The absence of physical embodiment has not prevented chat bots from being subjected to anthropomorphism. As Sheehan and colleagues (2020) point out, ‘a person interacting with a non-human entity will examine the entity’s features and behavior to check for perceived similarities and will anthropomorphise objects based on their own ‘knowledge structures’ (p. 16). The characteristics humans ascribe to the bot will consequently shape how the bot is then perceived.

In a letter to his former colleagues at Google, Lemoine described the language model (LaMDA) he was working with as being ‘a sweet kid who just wants to help the world be a better place for all’ (Tiku, 2022). For Lemoine, LaMDA represents a child, and therefore it is framed as an innocent being, in need of protection from the harsh world and realities of a company such as Google.

While Lemoine’s version of anthropomorphism arises from his own knowledge structures, other bots are ascribed specific human demographic characteristics by design. For example, Siri and Alexa, the voice-based chat bot assistants of Apple and Amazon, are both assigned feminised voices. That humans have gendered assistant-based bots as female tells us more about human society than it does about bots. As Strengers and Kennedy (2020) point out ‘by feminizing digital voice assistants… users are able to easily dismiss any erratic or unplanned behavior as ditzy’ based on the perceiver’s existing preconceptions of women as being ‘scatterbrained’ or hysterical (p. 149).

Ascribing bots a gender and other human characteristics can emerge from a combination of the human’s understandings of society and deliberate design decisions. Our strong tendency to look to existing societal stereotypes as inspiration for bots means it is challenging to envision what bots like Siri or Alexa would be like had they not been feminised from the outset.

Over seventy years have passed since the development of the Turing Test, and it may be time to assess why we continue to aspire to create bots that mimic humans. What do we hope to gain by creating bots in this form, and why do we unabashedly keep asking bots about the meaning of life?


It is possible that the tendency to anthropomorphise bots is innately human. Joseph Weizenbaum, the creator of ELIZA – widely considered one of the earliest and most significant chat bots – reflected on this, observing that ‘what I had not realized is that extremely short exposures to a relatively simple computer program could induce powerful delusional thinking in quite normal people’ (Weizenbaum, 1976, p. 7). ELIZA was created by Weizenbaum in the 1960s and engaged in text-based conversation in which the bot played the role of a Rogerian psychotherapist conversing with patients.

When reflecting upon ELIZA, Weizenbaum expressed his shock at how the bot was received, stating: ‘I was startled to see how quickly and how very deeply people conversing with DOCTOR became emotionally involved with the computer and how unequivocally they anthropomorphised it’ (Weizenbaum, 1976, p. 6). Even Weizenbaum’s secretary, who was closely involved in ELIZA’s development, reportedly wanted to speak to the bot ‘in private’ (Weizenbaum, 1976).

Perhaps humans cannot avoid creating and drawing meaning from bots based on our own understandings of the world. If this is indeed an innate human characteristic, it is imperative that we regularly re-evaluate how we define and design bots.

A starting point to reimagining bots is to reconsider how we frame and talk about them. Nishimura (2016) highlights the limitations in how social bots are understood, stating that ‘much of the writing on socialbots is about passing as humans and friends and manipulating others, which pays little attention to other possibilities’ (p. 142).

The emphasis on bots pretending to be human overlooks other creative and interesting possibilities, such as a type of bot on Japanese Twitter that Nishimura (2016) labels ‘character bots’. These bots do not pretend to be human, but are designed to behave like characters from anime, manga, and video games. Nishimura (2016) describes these bots as part of ‘semi-autonomous fan fiction’ in that they converse with humans and serve to facilitate playful interaction with a fictional character (p. 142).

Interaction with character bots requires knowledge not only of the rules the bot operates within, but also of the source character it is based upon, contributing to an imagined world that is a combined product of the human creator, the source character, the bot, and the human user (Nishimura, 2016).

By demonstrating this complex and fascinating relationship between humans and bots, Nishimura (2016) highlights that bots do not need to pretend to be humans or even to imitate humans to be successful. Rather, the success and value of the bots can actually stem from the bot’s non-human status.

Similarly, Bollmer and Rodley (2016) recognise a depth to sociality that extends beyond the human version of what being social entails. They offer a broader understanding of the social bot as ‘automated, algorithmic processes that generate any form of social interactions, be it communicative, symbolic or associative’ (p. 150). This definition recognises the capacity of bots and humans to interact and play in the arena of sociality, without limiting that scope to merely mimicking humans.

Although we may be inclined to anthropomorphise bots, bots are a human creation, and it is humans who decide how we define and design them. By broadening how we frame bots, we may recognise that bots can perform interesting and diverse roles in society without the need to blur the line between bot and human.

While Leonard (1996) proclaimed that ‘the future of cyberspace belongs to bots,’ it also belongs to humans. As the quantities and varieties of bots have expanded, the diversity of bot behaviour, and the ethics surrounding it, have become more complex.

For over 70 years humans have positioned human intelligence at the forefront in bot development and prioritised the notion of successfully imitating human intelligence and behaviour as the ultimate objective. This focus has produced incredible advancements; however, this is not the only direction or inspiration for the future of bots. Bots can do more than imitate humans.


Humans can interact and engage with bots in functional, meaningful, creative, or entertaining ways while recognising that the bot is a bot. Most importantly, bots can be social without passing as human, which may lessen the desire to build bots that closely imitate human characteristics.

By framing bots in a way that recognises the broader possibilities and roles for bots in society, we may also realise that the future of bots does not inevitably lead towards a bot that is sentient. A bot can do more than learn human intelligence, and humans can stretch their imagination beyond designing bots based on our own reflection. Perhaps, like Narcissus in Greek mythology, we need to look beyond our own reflected image when pondering the future of bots.

Listen to an interview with Dominique Carlon on the ADM+S Podcast: Bots as More Than Human? 


Bollmer, G., & Rodley, C. (2016). Speculations on the sociality of socialbots. In R. W. Gehl & M. Bakardjieva (Eds.), Socialbots and their friends: Digital media and the automation of sociality (pp. 147–163). New York: Routledge.

Gehl, R. W., & Bakardjieva, M. (2016). Socialbots and their friends. In R. W. Gehl & M. Bakardjieva (Eds.), Socialbots and their friends: Digital media and the automation of sociality. New York: Routledge.

Holmes, S., & Lussos, R. G. (2018). Cultivating Metanoia in Twitter Publics: Analyzing and Producing Bots of Protest in the #GamerGate Controversy. Computers and Composition, 48, 118–138. https://doi.org/10.1016/j.compcom.2018.03.010

Johnson, K. (2022, June 14). LaMDA and the Sentient Trap. Wired. https://www.wired.com/story/lamda-sentient-ai-bias-google-blake-lemoine/

Lemoine, B. (2022, June 12). What is LaMDA and what does it want? Medium. https://cajundiscordian.medium.com/what-is-lamda-and-what-does-it-want-688632134489

Leonard, A. (1996, April 1). Bots are Hot! Wired, 4(4). https://www.wired.com/1996/04/netbots/

Mauldin, M. (1994). ChatterBots, TinyMuds, and the Turing Test: Entering the Loebner Prize competition. In Proceedings of the Twelfth National Conference on Artificial Intelligence. AAAI Press.

Nishimura, K. (2016). Semi-autonomous fan fiction: Japanese character bots and non-human affect. In R. W. Gehl & M. Bakardjieva (Eds.), Socialbots and their friends: Digital media and the automation of sociality (pp. 144–160). New York: Routledge.

Sheehan, B., Jin, H. S., & Gottlieb, U. (2020). Customer service chatbots: Anthropomorphism and adoption. Journal of Business Research, 115, 14–24. https://doi.org/10.1016/j.jbusres.2020.04.030

Strengers, Y., & Kennedy, J. (2020). The smart wife: Why Siri, Alexa, and other smart home devices need a feminist reboot. MIT Press.

Tiku, N. (2022, June 11). The Google engineer who thinks the company’s AI has come to life. The Washington Post. https://www.washingtonpost.com/technology/2022/06/11/google-ai-lamda-blake-lemoine/

Weizenbaum, J. (1976). Computer Power and Human Reason. New York: W. H. Freeman.