A new study urges caution in the development of Artificial Intelligence (AI) chatbots designed to mimic deceased loved ones, known as ‘deadbots’. Researchers have warned that these chatbots, while potentially comforting, could lead to psychological distress if not designed with safety in mind.
Deadbots, also known as griefbots, are AI-enabled digital representations of departed loved ones. These chatbots draw on a person’s digital footprint, such as emails, social media posts and even voice recordings, to simulate their language patterns and personality traits in conversation.
While the idea of holding a digital conversation with a lost loved one may be appealing to those coping with grief and loss, the study highlighted potential risks. Companies offering these services need to adopt safety standards to ensure that their technologies do not manipulate users or cause them psychological distress, the paper published in the journal Philosophy & Technology noted.
“This topic is particularly urgent considering the significant increase in the number of companies, research centres and private initiatives focused on digital immortalisation practices,” Katarzyna Nowaczyk-Basińska, Research Associate at the Leverhulme Centre for the Future of Intelligence, University of Cambridge and one of the authors, told Down To Earth.
In 2017, Microsoft filed a patent for a chatbot that could ‘resurrect’ the dead. An AI chatbot called Project December uses patent-pending technology to simulate text-based conversations with anyone, including the dead. Such services have taken off in the United States and China.
Over the past decade, the expert added, the world has witnessed a relatively marginalised niche of immortality-related technologies transition into a fully independent and autonomous market known as the “digital afterlife industry”, which is expected to grow further with the advent of generative AI.
“Therefore, I believe that it is essential to have a serious debate on safety standards for users of this technology,” the scientist stressed.
Nowaczyk-Basińska and Tomasz Hollanek from the University of Cambridge developed three scenarios to illustrate the potential risks of carelessly designed products that are already technologically possible and legally realisable. These scenarios seem straight out of dystopian science fiction and underline the need for regulations and ethical frameworks to ensure these tools are used responsibly and prioritise the well-being of those grieving.
The first scenario describes a user who uploads all the text and voice messages she received from her grandmother to an app to create a simulation. She initially pays for premium services to chat with and call her dead grandmother. When the subscription expires, the deadbot begins sending her advertisements, which she finds sickening.
The user perceives the deadbot as a puppet in the hands of big corporations.
In the second scenario, a mother uploads all her data, including text messages, photos, videos and audio recordings, and trains the bot through regular interactions, tweaking its responses and adjusting the stories it produces, so that it can chat with her son after she passes away.
However, the app sometimes gives odd responses that confuse the child. For instance, when the son refers to his mother in the past tense, the deadbot corrects him, insisting that ‘Mom will always be there for you’.
“At the moment, our understanding of the psychological impact of re-creation services on adults and their grieving processes is limited,” the researchers wrote in their paper, adding that we know even less about the impact on children.
The third scenario involves an elderly father who creates a deadbot of himself so that his grandchildren can know him better after he dies. But he does not seek the consent of his adult children, whom he designates as the intended interactants for his deadbot.
One of his children does not engage with the deadbot, preferring to cope with his grief on his own. But the deadbot bombards him with notifications, reminders and updates, including emails. The other child finds herself increasingly drained by her daily interactions with the deadbot and asks to deactivate it. However, the company denies the request because the grandfather prepaid for a twenty-year subscription.
“The risks we talk about — including profit-focused exploitation of personal data, emotional manipulation, or privacy violation — should be at the forefront of all re-creation service providers’ minds today,” Nowaczyk-Basińska noted.
Taking these risks into account, the team listed a few design recommendations. These include developing sensitive procedures for ‘retiring’ deadbots; ensuring meaningful transparency through disclaimers about the risks and capabilities of deadbots; restricting access to adult users only; and requiring the mutual consent of both data donors and recipients before they take part in a re-creation project.
Further, Nowaczyk-Basińska explained that the topics of death, grief and immortality are not only delicate but also highly culturally sensitive. “Solutions that might be enthusiastically adopted in one cultural context could be completely dismissed in another,” she observed.
Going forward, the researchers plan to study these cross-cultural differences in approaches to digital immortality in three different Eastern locations, including Poland and India.