Viewpoint
Abstract
The focus of debates about conversational artificial intelligence (CAI) has largely been on social and ethical concerns that arise when we speak to machines—what is gained and what is lost when we replace our human interlocutors, including our human therapists, with AI. In this viewpoint, we focus instead on a distinct and growing phenomenon: letting machines speak for us. What is at stake when we replace our own efforts at interpersonal engagement with CAI? The purpose of these technologies is, in part, to remove effort, but effort has enormous value, and in some cases, even intrinsic value. This is true in many realms, but especially in interpersonal relationships. To make an effort for someone, irrespective of what that effort amounts to, often conveys value and meaning in itself. We elaborate on the meaning, worth, and significance that may be lost when we relinquish effort in our interpersonal engagements as well as on the opportunities for self-understanding and growth that we may forsake.
JMIR Ment Health 2024;11:e53203. doi: 10.2196/53203
Introduction
Conversation is central to our shared humanity. It is the means through which we make ourselves knowable to another and come to know them in turn. Our mental states—our beliefs, feelings, intentions, desires, and attitudes—are in some respects unreachable by another and sometimes even opaque to ourselves. However, in conversation, we render them articulable, and therefore, accessible. Not unrelatedly, in these exchanges, we often learn about ourselves as well as the other person. The recent emergence of powerful conversational artificial intelligences (CAIs) has therefore been unsettling on various levels (far more so than equally powerful AIs that operate in mediums besides conversation). In their extraordinary replication of the means through which we express our mental states, it is tempting to impute these states to our AI interlocutors. After all, the articulation of thinking (or feeling, hoping, willing, and desiring) is usually all the evidence we require to attribute the relevant mental states to someone.
In her book, Reclaiming Conversation: The Power of Talk in a Digital Age, Sherry Turkle [ ] endeavors to make the case for conversation in a world that has increasingly abandoned it for the conveniences (and safeties) of mere digital connection. “At a first, we speak through machines and forget how essential face-to-face conversation is to our relationships, our creativity, and our capacity for empathy,” Turkle writes. “At a second, we take a further step and speak not just through machines but to machines. This is the turning point” [ ]. This concern was prescient, and Turkle has more recently elaborated on it with reference to the proliferation of CAIs or social chatbots, such as Xiaoice, Woebot, or Replika. These CAIs aim to provide intimacy, but of what sort? Turkle suggests that this intimacy is necessarily fraudulent since it is (by design) devoid of the emotional vulnerability crucial to genuine intimacy [ ]. Similarly, these CAIs eliminate the demands and challenges of empathy required for genuine interpersonal exchanges [ , ]. These arguments align with Turkle’s long-standing critique of how computers affect our relationships with ourselves and with others [ - ].

“Speaking to Machines”: CAIs and the Possibility of Insight
There is ongoing debate concerning the type and quality of conversations possible with CAIs and their appropriateness in therapeutic contexts. In psychotherapy, the digitization of many processes may suggest that CAIs can simply replace the therapist. However, it is also possible to argue that the psychotherapeutic relationship and the experience of that relationship are what is most crucial. In psychodynamic psychotherapy, the client experiences transference while the therapist experiences counter-transference, and working through these processes leads to therapeutic change. In frameworks more influenced by cognitive-behavioral principles, such as schema therapy, the therapist may play a key role in providing “reparenting,” a process that leads to positive outcomes.
Ethical concerns with CAIs in therapeutic contexts include the biases and other harmful prompts that might arise in such exchanges, along with the potential dearth of responsibility and accountability for these harms [ - ]. However, even if such patent ethical concerns were addressed or eradicated, central questions would persist: what sort of presence or entity do we have in CAIs [ - ]? And perhaps relatedly, is it possible for CAIs to facilitate genuine self-knowledge, self-understanding, and insight in their human interlocutors?

Some have suggested that engagements with CAIs are necessarily deficient in this crucial respect, especially if we consider the practice of joint attention, as well as other forms of mutual recognition and acknowledgment, to be central to the therapeutic conversation (and indeed to conversation more generally) [ , - ]. Relatedly, there are concerns about the lost mutuality of these exchanges [ , ]. Conversations with bots do not demand that we empathize with or accommodate another, since, in an important sense, there is no one else there [ , , ]. As Andrew McStay [ ] points out, much depends on the account of empathy we are assuming. McStay argues that accounts that are more accommodative of CAIs are “deficient and potentially dangerous” insofar as they lack interdependence, copresence, and particularly moral responsibility [ ].

However, others disagree with these characterizations and see no reason why CAIs cannot encourage genuine introspection [ - ]. What is required in the therapeutic exchange—according to some of these proponents—is not necessarily mutual agency, but rather the experience of being emotionally supported and encouraged to engage in self-reflection [ , ]. To necessitate another subjectivity, or the presence of another full-fledged agent, is to presuppose the illegitimacy of CAIs in these contexts, and to needlessly curtail the possibilities of what qualifies as a genuine therapeutic conversation. After all, human therapists regularly fail to generate the conditions for self-understanding and insight, irrespective of their full-fledged agency [ ].

We find these counterarguments compelling. Furthermore, if therapeutic benefits are possible through CAIs—as some research suggests [ - ] (although far more investigation is required [ ])—then we potentially have a powerful tool in therapeutic CAIs. Given the immense shortfall in mental health care globally [ , ] and the often prohibitive cost of undertaking conventional psychotherapy, we would be remiss to hastily disregard the beneficial possibilities of therapeutic CAIs. Moreover, certain individuals and populations might experience unique benefits from the format of engagement required by therapeutic exchanges with CAIs, and (relatedly) may not experience the particular advantages of in-person conversation highlighted by advocates such as Turkle (this point has been made with regard to children on the autism spectrum in particular [ , ]).

Being Spoken for: CAIs and Surrendering Articulation
In recent reckonings with the rise of CAIs, the focus has generally been on concerns like those outlined above: what becomes of us when we increasingly replace our human interlocutors—including our human therapists—with AIs, that is (in Turkle’s phrase) “when we speak not just through machines but to machines” [ ].

Our central concern in this viewpoint, however, is different. Although certain dimensions of the preceding debate are of relevance to our position, we can also remain agnostic with regard to the value of “speaking to machines,” whether in a therapeutic context or otherwise. We can remain open to the possibility that bot and human engagements can generate genuine depth, worth, and meaning. Furthermore, we need not presume that the conditions for the emergence of genuine self-understanding and self-reflection cannot be generated in interactions with CAIs. Rather, our concern arises independently, for we now seem to have reached another turning point, and one that extends even further. “At a third point,” we might add to Turkle’s list, “we take yet another step and let machines speak for us” [ ].

We will concentrate on the significance of these forces for our self-knowledge and our interpersonal relationships, although more could be, and has been, said about their implications more generally, for example, concerning achievement gaps (where automation threatens to undermine genuine achievement, and therefore, meaningful work [ ]) and responsibility gaps (where automation threatens to undermine responsibility for harmful outcomes [ , ]).

Our central concern will be the following: what is potentially at stake, personally and interpersonally, when we let the machine speak for us? We will explore this question within the framework of philosophical and ethical debates concerning the interpersonal value of effort, rather than exploring it qualitatively or quantitatively (although further empirical research on these questions would be valuable).
This third transition can take many forms, some seemingly more trivial than others. When we are writing an email and the remainder of the sentence auto-fills in gray, we are tempted to stop speaking for ourselves and let the machine speak for us instead.
At times, the costs of this surrender may seem slight, if they exist at all. What does it matter if you articulate some rote phrase to a distant work acquaintance or have it articulated for you instead? However, in other circumstances and other relationships, even these subtle interventions can carry weight.
In an early exploration of the implications of large language models—written in 2019, before the mass rollout of ChatGPT and other large language models—the journalist John Seabrook [ ] wrote the following about the experience of using Smart Compose to autocomplete his emails:

Finally, I crossed my Rubicon. The sentence itself was a pedestrian affair. Typing an e-mail to my son, I began “I am p—” and was about to write “pleased” when predictive text suggested “proud of you.” I am proud of you. Wow, I don’t say that enough. And clearly Smart Compose thinks that’s what most fathers in my state say to their sons in e-mails. I hit Tab. No biggie.
And yet, sitting there at the keyboard, I could feel the uncanny valley prickling my neck.
Nowadays, the modes in which the machine can speak for us have expanded enormously from these first modest iterations. There are many examples to consider, and many more are emerging as we write: the ways in which we can outsource the labor of our interpersonal articulations are expanding exponentially.
Take one example: it is now possible to get CAIs to message on your behalf on dating apps. A variety of start-ups have generated different tools that allow you to hand over your messages to an AI [ ]. Instead of having to initiate a conversation with a prospective date—or to come up with thoughtful or witty replies to their messages—AI will do it for you.

When you do not care for the people you are messaging, this option offers a certain pragmatic appeal (especially given the volume of messaging that contemporary dating apps necessitate). However, when you do care about a person, the temptation might be even stronger. The CAI, after all, always has an idea of what to say next, and moreover, it offers a version of what you should say—a statistically probable representation of what people like you say at times like this. In comparison, speaking for yourself can feel risky. The things you might say on your own—the way in which you try to make yourself known and get to know others—might be odd, off-putting, or wrong somehow.
Take another example: in June 2023, The New York Times reported that some doctors were turning to AI to communicate compassionately with patients [ ]. We have all experienced the sense of inadequacy that comes with trying to say something supportive to someone who is in an awful circumstance. At such times, we can cast around for ages and summon nothing but cliches. How alluring it is to have a ready-made response instead, and one so well trained in the performance of genuine feeling. The AI’s messages will be, in many cases, much better than what we could have produced on our own—kinder, more thoughtful, and more encouraging. Yet no matter how superbly it manages to express care and compassion, this expression is of course divorced from any genuine experience of care and compassion. We should be cautious, in our expedient outsourcing of this emotional connection and engagement, of when we begin to divorce ourselves from the genuine experience of care and compassion along with it.

When we are struggling to find the right thing to say, it may feel like we are achieving nothing. Yet it is precisely in these times—as we try to understand what someone else is enduring, to feel for them, and to express that feeling—that we are undertaking the genuine experience of care and compassion, without which the words themselves are hollow.
One optimistic response is that we might learn more empathetic engagement from the example of the machines. However, this seems unlikely. It is like suggesting that we will improve our spelling skills by relying on automated spell-check or that we will remember more phone numbers through the excellent example set by our phones. Of course, we will not, as the process removes effort, and little of importance has ever been learned without effort.
Thinking ahead—and not necessarily too far ahead—it is possible to see how the temptation to let the machine speak might overspill our text-based conversations. The push to normalize mixed-reality engagements—most notably with the launch of Apple’s Vision Pro headset last year—would make it possible for the machine to take over not only our text-based correspondence but also our face-to-face conversations.
We are, right now, at the initial stages of the temptation to begin ceding our expressions to CAIs. However, with a little imagination, it is easy to see all the ways in which these temptations are poised to grow. After all, if it was largely the machine whose messages charmed someone into going on a date with you in the first place, how enticing would it be to let the machine keep on speaking when you have to go on the date yourself? The machine speaks with such authority, and as our confidence in its utterances grows, our confidence in our own could correspondingly diminish.
To our mind, the potential costs (to one’s own humanity and to our shared humanity) of CAIs are greatest when we allow them to speak for us. Genuine conversation nurtures authentic engagement with others and a better understanding of ourselves. Turkle [ ] emphasizes what is lost when we speak through machines, and further still, when we speak to machines, but there is, even in these latter engagements, the possibility of coming to know our own thoughts and feelings, of having to search for, and to find, the expression for our experience, and of recognizing that the experience precedes the expression that follows.

However, when we allow the machine to speak for us, even this possibility diminishes. We can too easily avoid the effort it takes to genuinely understand ourselves and our unique circumstances (undertakings that are not necessarily discouraged by speaking to machines [ , , ]). We are not encouraged to find the expression for our experience. Instead, we can too easily mistake whichever expressions we receive for our own experience, scarcely recognizing what we have lost in the exchange.

Effort and Meaning
The purpose of these technologies is, in no small part, to remove effort: to take something that once required a great deal from us and make it require little to nothing. Effort is by definition a burden, and in any given instance of having to exert effort, we are always wishing there was a way to be rid of it. However, effort also has enormous value, and in some cases, even intrinsic value. This can be true in many realms—there are crucial senses in which “achievement” itself is impossible without effort [ , ]—but it is especially true in our interpersonal relationships. In some interpretations, effort allows us to reveal our care and concern for one another and make it knowable. In such interpretations, its role is primarily epistemic. This epistemic role is not trivial in itself, but there are also interpretations whereby effort is more significant still—instead of only allowing us to reveal care and concern, it may also generate this care and concern [ , ]. Imagine a husband who lovingly cares for his wife through a long illness. His devotion through this ordeal might not only reveal the depths of his love for his wife, but it could also generate those depths.

In this sense, the exertion of effort might have both generative and revelatory value in our interpersonal relationships, and the relinquishment of effort might have serious costs on both fronts. To make an effort for someone, irrespective of what that effort amounts to, conveys value and meaning in itself. Many of our interpersonal practices are ways of trying to make real or manifest the effort that is in fact of genuine importance to us. In turn, when effort is removed from these practices, so is their worth.
Take one example: nowadays, Facebook provides automatic reminders of people’s birthdays. The moment this memory became automated, the fact of remembering someone’s birthday (which used to carry weight and significance) became increasingly meaningless. It is now possible to set up your account to automatically post a rote birthday message on the appropriate day; you need not even give the person a moment’s thought. These automated messages are equivalent, in terms of their interpersonal worth, to the automated birthday messages sent by a bank or a mobile service provider. Without requiring any thought or effort, the whole practice loses its significance. What other forms of interaction could we surrender to this fate, as we are increasingly able to opt for the effortless modes of expressing pride, love, affection, or consolation to the people around us?
Conclusions
In turn, we should begin to think carefully (even if just for ourselves) about which of these technologies we choose to use, in different contexts and spheres of our lives, and which ones we do not. Where we choose to use them, we should think equally hard about the manner of our engagement and the extent of our agency within it; the more passive we allow ourselves to be, the greater the potential costs we have gestured to in this viewpoint. This is especially true when it comes to those undertakings that have value in and of themselves—rather than value only for their outputs [ ]—and also, as we have emphasized in this viewpoint, when it comes to those relationships and human interactions in which our engaged presence, as well as our emotional and intellectual attention and reflection, carries so much significance.

There is an adage in developmental psychology: the toys that are best for children are the ones that require them to do the most work. “The best toys are 90% the kid, 10% the toy,” as psychologist Kathy Hirsh-Pasek put it. “If it’s 90% the toy, and 10% the kid, that’s a problem” [ ]. The toys that demand the most of a child are the ones that generate creativity, teach them problem-solving, and encourage their social interactions. On the other hand, the toys that merely require a child to press a button will teach them only to press a button over and over again. When we consider children, we usually show special caution for what will aid and hamper their development, flourishing, and well-being. However, our development does not cease after childhood, and indeed, much of the hardest work (in learning to know as well as relate to ourselves and others) still lies ahead. Given that, we should perhaps pause and wonder what opportunities for self-development we might be forsaking as we embrace, ever more, the toys that want to do everything for us, while we ourselves do less and less. We should remember, in the ceaseless war against effort, that far from needing to be eradicated at every opportunity, there are spheres of our lives—and our interpersonal relationships are a prime example—where effort itself can be the whole point.

Conflicts of Interest
DJS has received consultancy honoraria from Discovery Vitality, Johnson & Johnson, Kanna, L’Oreal, Lundbeck, Orion, Sanofi, Servier, Takeda and Vistagen.
References
- Turkle S. Reclaiming Conversation: The Power of Talk in a Digital Age. New York, NY. Penguin Books; 2015.
- Turkle S. That chatbot I’ve loved to hate. MIT Technology Review. 2020. URL: https://www.technologyreview.com/2020/08/18/1006096/that-chatbot-ive-loved-to-hate/ [accessed 2024-05-27]
- Turkle S. The Empathy Diaries: A Memoir. New York, NY. Penguin Press; 2021.
- Turkle S. Life on the Screen: Identity in the Age of the Internet. New York, NY. Simon & Schuster; 1995.
- Turkle S. Alone Together: Why We Expect More From Technology and Less From Each Other. New York, NY. Basic Books; 2011.
- Laacke S. Bias and epistemic injustice in conversational AI. Am J Bioeth. May 2023;23(5):46-48. [CrossRef] [Medline]
- Kasirzadeh A, Gabriel I. In conversation with artificial intelligence: aligning language models with human values. Philos Technol. Apr 19, 2023;36:1-24. [CrossRef]
- Blodgett SL, Lopez G, Olteanu A, Sim R, Wallach H. Stereotyping Norwegian salmon: an inventory of pitfalls in fairness benchmark datasets. 2021. Presented at: Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers); August 1 - 6; Virtual event.
- Henderson P, Sinha K, Angelard-Gontier N, Ke NR, Fried G, Lowe R. Ethical challenges in data-driven dialogue systems. 2018. Presented at: AIES '18: AAAI/ACM Conference on AI, Ethics, and Society; February 2 - 3; New Orleans, LA. [CrossRef]
- Welbl J, Glaese A, Uesato J, Dathathri S, Mellor J, Hendricks L, et al. Challenges in detoxifying language models. arXiv. Preprint posted online on Sep 15, 2021. [CrossRef]
- Sedlakova J, Trachsel M. Conversational artificial intelligence in psychotherapy: a new therapeutic tool or agent? Am J Bioeth. May 2023;23(5):4-13. [FREE Full text] [CrossRef] [Medline]
- Nyholm S. Tools and/or Agents? Reflections on Sedlakova and Trachsel's Discussion of Conversational Artificial Intelligence. Am J Bioeth. May 2023;23(5):17-19. [CrossRef] [Medline]
- Floridi L. AI as agency without intelligence: on ChatGPT, large language models, and other generative models. Philos Technol. Mar 10, 2023;36(1):1-7. [CrossRef]
- Montemayor C. Language and intelligence. Mind Mach. Aug 08, 2021;31(4):471-486. [CrossRef]
- Strijbos D, Jongepier F. Self-knowledge in psychotherapy: adopting a dual perspective on one’s own mental states. PPP. 2018;25(1):45-58. [CrossRef]
- Wieland LC. Relational reciprocity from conversational artificial intelligence in psychotherapy. Am J Bioeth. May 2023;23(5):35-37. [CrossRef] [Medline]
- Friedman C. Ethical concerns with replacing human relations with humanoid robots: an ubuntu perspective. AI Ethics. Jun 20, 2022;3(2):527-538. [CrossRef]
- McStay A. Replika in the Metaverse: the moral problem with empathy in 'It from Bit'. AI Ethics. Dec 22, 2022:1-13. [FREE Full text] [CrossRef] [Medline]
- Grodniewicz JP, Hohol M. Therapeutic conversational artificial intelligence and the acquisition of self-understanding. Am J Bioeth. May 2023;23(5):59-61. [CrossRef] [Medline]
- Hurley ME, Lang BH, Smith JN. Therapeutic artificial intelligence: does agential status matter? Am J Bioeth. May 2023;23(5):33-35. [CrossRef] [Medline]
- Gray J. Deception mode: how conversational AI can respect patient autonomy. Am J Bioeth. May 2023;23(5):55-57. [CrossRef] [Medline]
- Burr C, Floridi L, editors. Ethics of Digital Well-Being: A Multidisciplinary Approach. Cham, Switzerland. Springer; 2020.
- Rubeis G. E-mental health applications for depression: an evidence-based ethical analysis. Eur Arch Psychiatry Clin Neurosci. Apr 2021;271(3):549-555. [CrossRef] [Medline]
- Fiske A, Henningsen P, Buyx A. The implications of embodied artificial intelligence in mental healthcare for digital wellbeing. In: Ethics of Digital Well-Being. Cham, Switzerland. Springer; 2020.
- Torous J, Cerrato P, Halamka J. Targeting depressive symptoms with technology. mHealth. 2019;5:19. [FREE Full text] [CrossRef] [Medline]
- Amram B, Klempner U, Shturman S, Greenbaum D. Therapists or replicants? ethical, legal, and social considerations for using ChatGPT in therapy. Am J Bioeth. May 2023;23(5):40-42. [CrossRef] [Medline]
- Mental Health and Substance Use (MSD), World Health Organization. The Mental Health Atlas 2020. Geneva. World Health Organization; Oct 8, 2021.
- Tartaro A, Cassell J. Playing with virtual peers: bootstrapping contingent discourse in children with autism. 2008. Presented at: Proceedings of the Eighth International Conference for the Learning Sciences – ICLS 2008, Volume 2 (pp. 382-389); June 24 - 28; Utrecht, The Netherlands.
- Tigard DW. Toward relational diversity for AI in psychotherapy. Am J Bioeth. May 2023;23(5):64-66. [CrossRef] [Medline]
- Danaher J, Nyholm S. Automation, work and the achievement gap. AI Ethics. Nov 23, 2020;1(3):227-237. [CrossRef]
- Matthias A. The responsibility gap: ascribing responsibility for the actions of learning automata. Ethics Inf Technol. 2004;6(3):175-183.
- Nyholm S. Attributing agency to automated systems: reflections on human-robot collaborations and responsibility-Loci. Sci Eng Ethics. Aug 2018;24(4):1201-1219. [FREE Full text] [CrossRef] [Medline]
- Seabrook J. The next word. The New Yorker. 2019. URL: https://www.newyorker.com/magazine/2019/10/14/can-a-machine-learn-to-write-for-the-new-yorker [accessed 2024-05-27]
- Lorenz T. Welcome to the age of automated dating: multiple start-ups are building AI tools for romantic connections. Washington Post. 2023. URL: https://www.washingtonpost.com/technology/2023/04/23/dating-ai-automated-online/ [accessed 2024-05-27]
- Kolata G. When doctors use a chatbot to improve their bedside manner. The New York Times. 2023. URL: https://tinyurl.com/3p6aasyv [accessed 2024-05-27]
- Bradford G. Achievement. Oxford, UK. Oxford University Press; 2015.
- Guerrero A. Intellectual difficulty and moral responsibility. In: Robichaud P, Wieland JW, editors. Responsibility: The Epistemic Condition. Oxford, UK. Oxford University Press; 2017:199-218.
- Nelkin DK. Difficulty and degrees of moral praiseworthiness and blameworthiness. Nous. Nov 14, 2014;50(2):356-378. [CrossRef]
- Blasdel A. 'They want toys to get their children into Harvard': have we been getting playthings all wrong? The Guardian. 2022. URL: https://tinyurl.com/3d274nnk [accessed 2024-05-27]
Abbreviations
AI: artificial intelligence
CAI: conversational artificial intelligence
Edited by J Torous; submitted 29.09.23; peer-reviewed by R Marshall, X Zhao; comments to author 11.12.23; revised version received 25.01.24; accepted 26.01.24; published 18.06.24.
Copyright©Anna Hartford, Dan J Stein. Originally published in JMIR Mental Health (https://mental.jmir.org), 18.06.2024.
This is an open-access article distributed under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work, first published in JMIR Mental Health, is properly cited. The complete bibliographic information, a link to the original publication on https://mental.jmir.org/, as well as this copyright and license information must be included.