The article “Here is your dead grandfather”: How deathbots are digitizing grief first appeared on the online magazine Basic Thinking. You can start your day off right every morning with our newsletter update.
“Deathbots” are digital reproductions of deceased people, and more and more start-ups and tech companies are experimenting with them. Avatars that reproduce the voices, faces and personalities of the dead may at first sound like a comfort to those left behind. On closer inspection, however, deathbots raise many legal and ethical questions. An assessment.
Dealing with the loss of loved ones is very hard for us humans. Until now, the bereaved could console themselves with memories, photos and perhaps also videos.
Using artificial intelligence (AI), however, we are now able to “bring back” the deceased themselves – with their voice, their facial expressions and their typical phrases.
Deathbots: Digital images of deceased people
This is exactly what “deathbots” or “griefbots” promise: to let the dead live on in digital form. How topical and unsettling this practice can be was recently demonstrated by a widely noticed interview by former CNN presenter Jim Acosta.
He spoke to an AI avatar of the student Joaquin Oliver, who was killed in a school shooting in Florida in 2018. The conversation made clear how disturbing and at the same time problematic this technology has become.
Legal aspects in Germany and the EU
Legally, the situation regarding such avatars is not so simple. In Germany, postmortem personality rights protect the dignity of the deceased for a certain period of time. For example, for ten years after death, portraits may only be published with the consent of the relatives.
But AI-generated reproductions of people are difficult to fit into the existing legal framework. Who may decide whether an avatar can be created? The relatives? Or should you regulate this yourself during your lifetime, in a will that determines what may happen to your data after death?
That includes the question of whether a digital copy may be created at all. Some lawyers have already proposed a so-called “anti-ghostbot clause” – a testamentary provision intended to rule this out.
Deathbots: The EU’s AI Act prescribes transparency
The EU is also dealing with the question of AI avatars, although deathbots are not yet in focus. The AI Act at least prescribes transparency for synthetic content: if you interact with an AI-generated avatar, you should know it.
This transparency requirement will then also apply to digital reproductions of the deceased. However, that does not mean the effects of deathbots themselves are regulated. So far, there has been no further discussion here.
In addition, the role of media supervision is growing. Especially where minors are concerned, the risks of conversations with the deceased are particularly high.
Deathbots and the influence on young people
Since young people are considered emotionally very impressionable, they can easily form a bond with such avatars. These then offer them the “views” of a deceased person that are ultimately based on software and an algorithm.
It becomes problematic when these bonds are abused via such avatars or steered in manipulative directions. Cases of chatbots that pushed teenagers toward eating disorders or suicidal thoughts have shown how dangerous uncontrolled systems can be.
And then there is the economic dimension: Google already holds a patent for simulating personalities using chatbots. This means that, in theory, deceased people can also be simulated.
If you think this patent through, you can imagine that it is not just about mourning, but about a possible business model in which the simulation of a deceased person is offered for money.
Ethical aspects of deathbots
I only want to touch briefly on the ethical dimension of deathbots, because here you can quickly get into such a wide field that it would burst the scope of this column.
Deathbots can certainly help people process grief by letting relatives once again speak to “grandma”, “father” or “mother”.
At the same time, in my view, there is a risk that people will remain trapped in a kind of “digital in-between world” and thus become unable to truly work through their loss.
Commercialization is even more serious: What happens if the deceased’s voice suddenly appears in a commercial, for example because the wrong terms and conditions were accepted? We shouldn’t fool ourselves: what is technically possible will be implemented.
Personal assessment of the development of grief bots
I am convinced that deathbots will keep us busy for a long time – not as the marginal phenomenon they are today, but as a topic that must be discussed and, if necessary, regulated because of its possibilities.
Because the technology will keep getting better: language models will control the interaction between the deceased and the bereaved, learn to reproduce individual voices down to the smallest detail and generate ever more realistic-looking faces.
So what may still look a bit jerky and not quite “real” today could appear so lifelike in a few years that we can hardly tell the difference from the original.
What limits do deathbots need?
This is exactly what makes things so tricky. We will have to ask ourselves whether we really want digital images of the deceased to be available at the push of a button. I don’t mean that they should automatically be prohibited.
I mean that we have to engage consciously with the topic and the possibilities it brings. We have to talk in particular about power and responsibility: if Google registers patents, it may not be to ease mourning, but because a potentially huge market beckons.
Deathbots are not a curiosity that will simply disappear again. They are technically too fascinating and economically too attractive for that. We therefore need an understanding of what their existence means for us humans. And then we have to decide how we want to use them and what uses we want to allow. That must then be regulated.
Conclusion: Deathbots between consolation and commercialization
Even if deathbots may not be very present in the current AI debates, they are more than just a morbid gimmick. They show how quickly and how closely AI technologies can become entangled with our most intimate feelings about people.
In the future, we will have to adapt not only to better simulations of people, particularly the deceased, but also to new business models that generate money and attention by creating avatars of the dead.
In my eyes, the pull and attraction of such avatars is immense: Who wouldn’t want one “last” piece of advice from a father who died young, or from the ever-so-wise grandma?
Maybe one day we will all be confronted with a digital reflection of ourselves – whether we want it or not. And that reflection will live on when we no longer exist as real people.
That is exactly why now is the time to talk about the limits we want to set here. Because one thing must be clear to us: the technology will not wait until we are ready.
Also interesting:
- Smart-Contracts: Does Germany miss the next innovation?
- This human ability is most important in dealing with AI
- Intel-Aus: An unexpected opportunity for Magdeburg
- The EU’s AI code is brave – but insufficient