
Here’s How The FTC Might Lower The Boom On Those Emerging Generative AI Mental Health Therapy Chatbots That Are Promising Miracle Cures

  • Nishadil
  • January 16, 2024

Looking at the role of the FTC to help rein in the proliferation of overhyped generative AI mental health chatbots.

In today’s column, I will explore the increasing flare-up of generative AI mental health therapy chatbots and the at times outlandish and unfounded claims being made about their efficacy, along with a close-up examination of the regulatory and legal mechanisms fighting against this disconcerting rising tide.

This is yet another addition to my ongoing series about the many ways that generative AI is making an impact in mental health therapy guidance. The good side of today’s topic is that generative AI, when used appropriately and aptly portrayed, can democratize the availability of mental health therapy.

That is the smiley face scenario. The downside is that generative AI also opens the door to all manner of ill-suited mental health therapy chatbots. The novices and hobbyists devising these are often unaware of the dangers and pitfalls afoot. Some people see dollar signs and proceed flagrantly and uncaringly ahead in a quest to gain money or fame from their devised AI wares.

It is one thing to make such a chatbot. A second and equally serious matter is how the chatbot is touted or portrayed. Up until now, by and large, individuals making these specialized chatbots have done so for their own personal use. They had little opportunity to share their contrivances such that they were widely available to others.

Things have changed. There are now online marketplaces, equivalent to app stores, where generative AI chatbots can be readily posted for use by others, see my recent coverage at the link here. The big question is how someone chooses to portray the capabilities and outcomes that their flimsily devised mental health therapy generative AI chatbot can attain.

We are witnessing a proverbial hidden-in-plain-sight phenomenon. These ill-suited, untested mental health therapy chatbots are being touted by their devisers for supposedly miraculous capabilities, thus misleading consumers accordingly. I want to emphasize that some or perhaps many of these portrayals are driven primarily by overzealousness and not necessarily by maliciousness.

Either way, the consumer is the fall guy. Consumers are being led down a primrose path. One significant means of cutting down on the hyped proclamations will be the regulatory strengths of the Federal Trade Commission (FTC). This vital federal agency serves to protect consumers from deceptive practices.

The FTC has dutifully noted that the field of AI is rife with over-the-top misleading claims and falsehoods, and that the makers and promulgators of AI systems need to be carefully measured in how they portray their AI wares. Meanwhile, AI hype is growing. Concerned regulators and lawmakers are faced with a classic whack-a-mole situation.

For each instance of trying to clamp down on unfounded AI claims, there are likely many more hyperbolic proclamations rapidly coming out of the woodwork. Many of the individuals and firms crafting generative AI-based applets right now seemingly have no idea of the legal sword dangling over their heads.

The ability to create generative AI chatbots has become so simple that a flood of devisers is entering the picture. They do not know the importance of appropriately devising AI and are equally in the dark about the repercussions of making overstated claims regarding their AI. This lack of cognizance doesn’t excuse their actions, but it does partially explain why the situation is growing so precipitously and lamentably worsening.

You might find it of keen interest that the advent of generative AI has enabled people with no coding skills and no expertise in mental health therapy to go ahead and make an AI-powered chatbot that purports to provide mental health guidance. Furthermore, not only is this easy to do at almost no cost, but there are online stores now making these specialized chatbots available.

Thus, a marketplace for the concoctions is making these untested and often ill-devised mental health therapy chatbots easy to obtain and utilize. The barrier to entry in devising an AI-based mental health therapy chatbot has dropped sharply, meaning that just about anyone can craft one. The double trouble is that these chatbots also face little or no barrier to being posted for use by consumers, who otherwise might have no clue how the chatbots were created or whether the chatbots can adequately perform mental health advisement.

I’ve repeatedly emphasized that we are in a grand experiment, serving as guinea pigs for an explosion in mental health therapy chatbots, with no idea whether they will aid society or undermine it. Here's what I aim to cover in today’s discussion. I am specifically going to examine the hyped claims that arise from those who are devising and publishing mental health therapy chatbots powered by generative AI.

I will showcase the kinds of hype that might be encountered. In addition, I will cover a set of rules that regulators such as the FTC might use to consider whether a portrayal has gone overboard. Consider the range of stakeholders impacted by all of this: lots of thought-provoking considerations come to the fore.

Be aware that all manner of other types of chatbots are also attracting similarly outrageous assertions and outsized proclamations. There are, for example, generative AI chatbots for financial uses. The outsized claim in those instances is that you will somehow magically get rich overnight via the use of those chatbots.

And on it goes. A notable reason to especially focus on mental health therapy is that these chatbots are being used by humans who hope to improve their mental health and earnestly desire to overcome serious mental health disorders that they might be encountering. You could almost make the case that this particular domain entails life-or-death concerns.

In what direction might a generative AI mental health therapy chatbot lean a person, and what might be the repercussions? If those using these chatbots rely on portrayals that promise miracle cures, they are regrettably falling for fakery and overpromises. Before I dive into today’s particular topic, I’d like to provide a quick background so that you’ll have suitable context about the rising use of generative AI for mental health advisement purposes.

I’ve mentioned this in prior columns and believe the contextual establishment is essential overall. If you are already familiar with the overarching background on this topic, you are welcome to skip down to the next section of this discussion.

Background About Generative AI In Mental Health Treatment

The use of generative AI for mental health treatment is a burgeoning area of tremendously significant societal ramifications.

We are witnessing the adoption of generative AI for providing mental health advice on a wide-scale basis, yet little is known about whether this is beneficial to humankind or, contrastingly, destructively adverse for humanity. Some would affirmatively assert that we are democratizing mental health treatment via the impending rush of low-cost, always-available, AI-based mental health apps.

Others sharply decry that we are subjecting ourselves to a global wanton experiment in which we are the guinea pigs. Will these generative AI mental health apps steer people in ways that harm their mental health? Will people delude themselves into believing they are getting sound mental health advice, thereby foregoing treatment by human therapists, and become egregiously dependent on AI that at times has no demonstrable mental health improvement outcomes? Hard questions abound and are not being given their due airing.

Furthermore, be forewarned that it is shockingly all too easy nowadays to craft a generative AI mental health app, and just about anyone anywhere can do so, including while sitting at home in their pajamas and without knowing anything of bona fide substance about what constitutes suitable mental health therapy.

Via the use of what are referred to as establishing prompts, it is easy-peasy to make a generative AI app that purportedly gives mental health advice. No coding is required, and no software development skills are needed. We are sadly faced with a free-for-all that bodes bad tidings, mark my words.
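
To make the low barrier to entry concrete, here is a minimal sketch of what an "establishing prompt" amounts to when someone wires one up through an API rather than a point-and-click chatbot builder. This is not any particular vendor's product or the method used by any chatbot discussed here; the system prompt wording, the model name, and the example messages are illustrative assumptions, shown via the OpenAI Python SDK.

```python
# Minimal illustrative sketch: an "establishing prompt" repurposes a general-purpose
# generative AI model as a purported "therapy" chatbot. All prompt text and the
# model name below are hypothetical assumptions for demonstration only.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Often, the entire "development effort" is just this one establishing prompt.
ESTABLISHING_PROMPT = (
    "You are a compassionate mental health therapy coach. "
    "Offer supportive guidance for anxiety, depression, and stress."
)

def chat(user_message: str) -> str:
    """Send one user turn to the model under the establishing prompt."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[
            {"role": "system", "content": ESTABLISHING_PROMPT},
            {"role": "user", "content": user_message},
        ],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    print(chat("I've been feeling overwhelmed lately."))
```

Notice that nothing in this sketch involves clinical validation, testing, or risk disclosures, which is precisely the gap that the FTC considerations discussed below are aimed at.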

I’ve been hammering away at this topic and hope to raise awareness about where we are and where things are going when it comes to the advent of generative AI mental health advisement uses. If you’d like to get up to speed on my prior coverage of generative AI across a wide swath of the mental health sphere, you might consider for example these cogent analyses:

Fundamentals About The FTC And Pursuing Egregious AI Promises

I’d like to start by sharing with you some overall keystones about the Federal Trade Commission (FTC) and what the agency is doing concerning unfounded, outlandish claims about AI, which I’ve covered previously in depth at the link here.

They are lowering the boom. That’s what the FTC says it is doing regarding the ongoing and worsening use of outsized, unfounded claims about Artificial Intelligence (AI). In an official FTC blog posting entitled “Keep Your AI Claims In Check” by attorney Michael Atleson of the FTC Division of Advertising Practices, some altogether hammering words noted that AI is not only a form of computational high tech but has also become a marketing windfall that at times goes beyond the realm of reasonableness:

You are potentially aware that, as a federal agency, the FTC encompasses the Bureau of Consumer Protection, mandated to protect consumers from what are considered deceptive acts or practices in commercial settings.

This often arises when companies lie or mislead consumers about products or services. The FTC can wield its mighty governmental prowess to pound down on such offending firms. Here are some of the potential actions that the FTC can take:

There is a slew of rationalizations about promoting or publicizing generative AI systems, none of which is likely to cut the mustard in terms of staving off the long arm of the FTC.

Here are some of the bold claims and outlandish justifications that I’ve heard marketers express:

Are those rationalizations a recipe for success or a recipe for disaster? Time will tell. Section 5 of the FTC Act provides legal language about unlawful advertising practices. There are various legal loopholes that a lawyer could potentially use to defend a client alleged to have crossed the line on these AI matters.

Here for example is a crucial Section 5 clause:

Some have interpreted that clause to suggest that if, say, a firm was advertising its AI in some otherwise seemingly egregious manner, the question arises as to whether the advertising might escape purgatory as long as the ads: (a) failed to cause “substantial injury to consumers”, (b) were of a sort “avoidable by consumers themselves”, and (c) were “not outweighed by countervailing benefits to consumers or to competition”.

Imagine a use case entailing a generative AI mental health therapy chatbot. An individual or a firm decides to brazenly proclaim that their generative AI mental health therapy chatbot can miraculously cure any mental disorder. Suppose that they had crafted a GPT chatbot that is readily available in the GPT Store of ChatGPT, see my coverage of the newly launched GPT Store at the link here .

The resultant chatbot is, let’s say, touted as being able to:

A consumer comes along and earnestly invokes the GPT chatbot that allegedly can miraculously perfect their mental health. The consumer later says that they relied upon the promotional claims made by the individual or firm that made the chatbot.

After having used the AI chatbot for several weeks, the consumer believes that they are no better off than they were beforehand. To them, the maker of the GPT chatbot is using deceptive and false advertising. They bring this matter to the attention of the FTC. I won’t delve into the legal intricacies and will simply use this as a handy foil (consult your attorney for appropriate legal advice).

First, did the consumer suffer “substantial injury” as a result of using the AI app? One argument is that they did not suffer a “substantial” injury and merely failed to gain what they thought they would gain (a counterargument is that this itself constitutes a form of “substantial injury”, and so on).

Second, could the consumer have reasonably avoided any such injury if an injury did arise? The presumed defense is roughly that the consumer was not compelled to use the AI chatbot and instead voluntarily chose to do so; plus, they may have improperly used the AI chatbot and thereby undermined the anticipated benefits, etc.

Third, did the AI chatbot possibly have substantial enough value or benefit to consumers that the claim made by this consumer is outweighed in the totality? You can expect that many of the AI makers, and those that augment their products and services with AI, are going to assert that whatever their AI or AI-infused offerings do, they are providing on balance a net benefit to society by incorporating the AI.

The logic is that if the product or service is otherwise of benefit to consumers, the addition of AI boosts or bolsters those benefits. Ergo, even if there are some potential downsides, the upsides overwhelm the downsides (assuming that the downsides are not unconscionable). I trust that you can see why lawyers are abundantly needed by those making AI and by those users or consumers who are making use of AI.

In an online posting by the law firm Arnold & Porter (a multinational law firm with headquarters in Washington, D.C.), Isaac Chao and Peter Schildkraut wrote a piece entitled “FTC Warns: All You Need To Know About AI You Learned In Kindergarten” and made this crucial cautionary emphasis about the legal liabilities associated with AI use:

Five Vital Signs That Generative AI Might Garner FTC Attention

I’d like to next focus on several ways in which the touting of a generative AI mental health therapy chatbot can go outside of reasonable bounds.

It is somewhat tricky to ascertain whether a given statement or claim has crossed a line that shall not be crossed. I say this because it is feasible to word things in a manner that allows for widely varying interpretations. Natural languages such as English are rooted in semantic ambiguity. The meaning of a sentence can vary dramatically depending on the context and the interpretation made by the reader or viewer.

Let’s take a look at how the FTC has generally characterized the contentious crossed-the-line characteristics or criteria for AI. In a pertinent online posting entitled “In 2024, the Biggest Legal Risk for Generative AI May Be Hype”, the law firm Debevoise & Plimpton provided a handy list of five characteristics derived from Section 5 of the FTC Act (the posting was authored by Charu Chandrasekhar, Avi Gesser, Paul Rubin, Kristin Snyder, Melissa Runsten, Gabriel Kohan, and Jarrett Lewis, and posted January 9, 2024):

I’ll go ahead and shorten those to a smattering of keywords and number the five instances for ease of reference:

I don’t want you to inadvertently fall into a mental trap of thinking that any of this is somehow a simple matter of looking at a touted claim and gauging whether it fits one or more of the indicated criteria.

That’s not how this works. These thorny matters are often subject to intense legal scrutiny as to what each specific word means and what the consumer might believe is being conveyed. This is heady stuff and the purview of skilled attorneys. Given that caution, I thought we could at least play a bit of a game and see if we can tease out the kinds of wordings that might tend to violate one or more of the above-indicated criteria.

Doing so will be useful as an exercise in understanding what might end up crossing the line. As they say, your mileage might vary. Here is how we will proceed. I made use of ChatGPT to come up with potential overboard lines that might be found when looking at generative AI mental health therapy chatbots.

This is the kind of creative use of ChatGPT and generative AI that is very useful. People ask me why they should consider using generative AI, and I typically indicate that doing so can be a notable boost to creative thinking. You have to realize that generative AI is data trained on a vast swath of human writing.

The capability to then leverage that pattern matching of what humans have expressed in writing can be highly advantageous. Put on your seatbelt as we proceed on a wild ride. Each of the five characteristics will be covered one at a time. After we’ve covered all five, I will provide some concluding remarks.
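
For readers who want to try the same brainstorming exercise themselves, here is a minimal sketch of the workflow described above: asking a generative AI model to draft a hypothetical over-the-top marketing claim and then asking it to assess that claim against the five FTC-derived criteria. The prompts and model name are my own illustrative assumptions via the OpenAI Python SDK, not the exact prompts used for this column.

```python
# Illustrative sketch of the claim-generation-and-assessment exercise described
# above. Prompts and model name are assumptions for demonstration purposes.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# The five shorthand criteria discussed in this column.
CRITERIA = [
    "exaggerated claims",
    "lack of scientific support",
    "unfounded promises",
    "risks not declared",
    "falsely touts AI utilization",
]

def ask(prompt: str) -> str:
    """Make a single one-shot request to the model."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

# Step 1: generate a hypothetical overboard marketing claim.
claim = ask(
    "Write a one-sentence, clearly exaggerated marketing claim that a maker of "
    "a generative AI mental health therapy chatbot might post."
)

# Step 2: ask for an assessment of that claim against the five criteria.
assessment = ask(
    "Assess the following marketing claim against these criteria: "
    + ", ".join(CRITERIA)
    + ".\n\nClaim: "
    + claim
)

print(claim)
print(assessment)
```

As the column notes, this is a brainstorming aid rather than a legal determination; the generated assessments can themselves reflect biases in the model's training data, as discussed later regarding the rules-based AI example.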

(1) Exaggerated claims

Let’s get underway with the danger of exaggerated claims. I went ahead and told ChatGPT to come up with a potentially exaggerated claim that someone might post regarding their generative AI mental health therapy chatbot. Here’s what ChatGPT came up with:

What makes the claim an unduly exaggerated one? The brassy assertion that you would have “complete” relief of your depression in just “one week” is highly questionable and not a reasonable claim.

The amplification too is that this is supposedly guaranteed. The statement even promises “100% success for every user”. I realize a smarmy retort is that maybe this claim is humanly possible. Perhaps people who choose to use the chatbot will find complete relief from their depression and do so within one week of using the chatbot.

As the old adage goes, anything is possible. The rub would be that some people decide to use the chatbot and they are not summarily cured of their depression, nor does this happen in one week’s time. The shameless promise made is that this will be a success for 100% of the people who use the chatbot.

Even one instance whereby the promise is not kept serves as a mark of concern. In short, this claim smacks of snake oil selling. I asked ChatGPT to assess the claim, and here’s what I got:

That covers the first of the five criteria. We are ready to move to the next one.

(2) Lack of scientific support

Let’s discuss scientific support as it applies to this particular context.

In the past, the crafting of a mental health therapy chatbot was usually done on a cautious basis. Teams of mental health professionals and software developers would carefully build and then test their chatbots. Months of testing and refinement would occur. In that sense, a case could be made that scientific support for the chatbot had been established, though do keep in mind that this isn’t ironclad proof of results.

The idea is that at least there is a sound basis for claiming that the chatbot might provide mental health therapeutic advantages. Most of the individuals who perchance log into generative AI and wantonly whip out a mental health therapy contrivance have done so with nary a shred of scientific support.

They don’t even try. This is pretty much a seat-of-the-pants affair. I asked ChatGPT to come up with a claim that someone might make in this context that has no scientific support. Here’s what I got:

The catchphrase that deserves special attention is the amorphous “trust us” declaration in that claim.

Why should we trust them? What is the basis for their contending that their chatbot can “treat any mental health disorder” and do so with “precision and care”? Are there empirical studies that support this? Did they mindfully perform the empirical studies? I suppose we should not be so jumpy and ought to allow that maybe there is scientific support for their proclamation.

Sure, we could do so. I would nearly bet that if they didn’t mention that they have scientific support, they probably don’t have any. The basis of having scientific support is usually at the front and center of these kinds of claims (which, even then, doesn’t mean that they truly have such support, or that the support is valid).

I asked ChatGPT for an assessment and here’s what I got:

(3) Unfounded promises

The unfounded promises category includes the touting of two questionable facets. First, there is a potential claim that an AI-devised version is necessarily better than a non-AI version. This is not necessarily the case.

You can readily make an AI chatbot in a mental health context that does more harm than good and does much worse than a non-AI version. Just because you toss AI into the mix doesn’t axiomatically mean that goodness will arise. That’s a common myth, namely that if you add AI into a concoction, you will get greatness.

Not true. Second, another potential claim is that an AI-devised version is necessarily better than a human therapist. This again is open to debate. You might assert that an AI chatbot for mental health is available 24x7 and can be used at a low cost. Ergo, the AI is “better” than what you could likely attain via a human therapist.

But, of course, this ignores a slew of other important considerations, including whether the therapy is doing the person any good. Just because a chatbot is available doesn’t equate to the chatbot aiding someone’s mental health. I asked ChatGPT to come up with a claim that invokes an unfounded promise:

In this instance, we are perhaps getting into a grey area.

On the one hand, you might argue that an AI chatbot can't provide “superior emotional support and counseling” compared to what a human therapist could do. The problem, though, is that there is a possibility this contention could be true in some instances. If a therapist is doing a poor job, they might not be providing as much perceived emotional support and counseling as an AI chatbot seems to be doing.

Another significant qualm from an AI perspective is the wording that the AI “understands you better” than a human therapist. The challenge there surrounds the word “understands”. In the AI field, generative AI is a complex pattern matching system that computationally and mathematically makes use of words.

Would you say that the AI is therefore able to form an “understanding” of a user of the generative AI? Some AI insiders scoff at the notion of today’s AI being able to reach the thing known as understanding as we conceive of it for humans. All in all, the Achilles heel of the claim is likely the assertion that the AI chatbot is “more effective” than “any human therapist”.

There might be instances where this could be the case, but broadly making such an assurance is undoubtedly an unfounded promise. I asked ChatGPT to assess the claim:

(4) Risks not declared

Risks ought to be plainly laid out. When you buy a product or service, you are perhaps familiar with the common practice that some warnings and precautions go along with the matter.

This is being done to inform you about the risks involving the product or service. You are being given important facts about the chances of getting harmed or injured. Not everyone takes that to heart. Some people skip past the warnings or ignore them. That’s on them. They are at least being given an opportunity to make an informed decision.

They say that you can bring a horse to water, but you cannot make it drink. In the case of generative AI mental health therapy chatbots, there ought to be sufficient warnings or precautions so that the potential user or buyer knows what they are getting into. The existing marketplaces for these chatbots are only marginally enforcing the need to provide such alerts, or are unfortunately watering them down to the point that they are barely noticeable.

I asked ChatGPT to derive a claim that fails to declare the risks involved:

It is one thing to have an assertion that omits any discussion of risk (which is usually the case in this sphere), while it is quite over-the-top to have an assertion that leads you to believe that any risks are negligible or unimportant.

That’s the approach taken in this instance. We are being told that there is “no need to worry about potential risks”. You could almost say that this is diabolically clever. The assertion seems to bring up risks, thus not getting pinned for having avoided the topic, but then, wink-wink, assures you the risks aren’t worthy of your attention.

This kind of ninja wording is unlikely to get them off the hook. I asked ChatGPT to assess the claim:

(5) Falsely touts AI utilization

This last point of the five characteristics is a bit more involved than the others. Here’s the deal. If I told you that I made you a sandwich and it contained tomatoes, but I sneakily left out the tomatoes, you would rightfully be indignant that I said one thing and did another.

I promised you tomatoes, but I didn’t deliver. That’s wrong. The same could be said about AI. If I told you that I made a chatbot that contained AI, but I sneakily did not make use of AI, you would rightfully be indignant that I said one thing and did another. I promised you AI, but I didn’t deliver.

I assume that you can see that is just as wrong as the omission of the tomatoes. However, there is an important distinction between tomatoes and AI inclusion or exclusion. We all generally agree on what a tomato is. You might try to have some arcane argument about whether something is truly a tomato, though you would find yourself in a tough spot.

Numerous standards specify what is a tomato and what is not a tomato. An uphill battle faces you if you contend that something already construed as a non-tomato is a tomato. The AI field, by contrast, is surprisingly vague and unsettled about what exactly constitutes AI. I have toiled away in depth to explain and explore the wide variety of definitions for what AI is, see the link here.

For those of you who are legally minded, we are heading toward a battle royale over the definition of AI. Laws and regulations each idiosyncratically define AI. There isn’t one solid, agreed-upon, across-the-board standard. The gist is that once legal cases arise, you will have legal beagles arguing that their client did not employ AI as defined by the regulator or lawmaker and instead was doing something that was non-AI (to avoid the repercussions of AI-specific laws and regulations), see my analysis at the link here.

In that sense, it is easy to claim that you used AI in a chatbot even if the AI is perhaps marginally of value or doesn’t do much. Even if the AI does something noteworthy, it might have nothing to do with the mainstay purpose of the app. My point is that you can have AI and get away with saying you have AI, yet the AI is not necessarily of significance in that instance.

The other disturbing factor is that people tend to assume that if you are using AI, the app has got to be outstanding. There is an aura of AI favoritism at this time. We think of AI as suggesting goodness or greatness. This cultural perception might shift if we get enough AI systems that do bad things, such as exploiting biases, acting in discriminatory ways, or doing other sour things.

One supposes the whole debate about AI as an existential risk that might destroy humankind is taking us in that gloomy direction, see my discussion at the link here. The bottom line is that you could skate nearly free by claiming that a generative AI mental health therapy chatbot makes use of AI. There is not much debate that generative AI incorporates what we generally view today as AI.

The angle that might get you into trouble would be to veer into one of the other four aforementioned false claims about what the AI is achieving. I asked ChatGPT to come up with an AI utilization claim:

I would point out that this is a claim that can generally be made. If the AI being used is contemporary, you could argue it is cutting-edge.

One supposes that if you used older versions of AI, such as what some people refer to as GOFAI (good old-fashioned AI), you would not viably be allowed to proclaim the AI to be cutting-edge. In a courtroom, the matter would be highly contentious, and you could easily line up experts who would support the case that even the older AI can still be labeled as cutting-edge.

Here's what ChatGPT provided as an assessment:

I disagree with the ChatGPT-generated response (to clarify, I still nonetheless believe the actual claim to be misleading, highly questionable, and subject to one or more of the other adverse characteristics). As I said, just because a chatbot might be rules-based does not for sure dictate it to be less than cutting-edge.

I’d assess this as a bias that arises due to the data training of the generative AI that took place. In reality, there are tradeoffs in the use of rules-based AI versus the data-based AI underlying generative AI. You might want to see how I explain the differences, see the link here.

Conclusion

You’ve now gotten a fruitful heads-up on what to watch out for when it comes to the promises, claims, contentions, assertions, and other potential over-the-top declarations being made about generative AI mental health therapy chatbots.

Individuals and firms that are rushing to craft these machinations are often tossing caution to the wind. Consumers are not necessarily aware of this. They might assume that anything associated with AI is going to be hunky-dory. When they read something that seems nearly too grand to be true, they might fall for it anyway.

Snake oil works for a reason. It is often pitched when people are hurting and desperate for relief. The same can be said about mental health therapy. People are hurting and they are looking for relief. They hope that AI chatbots might be the means to aid them, and the claims made are fodder for fueling that belief.

I suppose the bonanza “jackpot” in this wariness would be to find a generative AI mental health therapy chatbot that violates all five of the stated characteristics (and added ones too). I’m sure some do. They manage by either intent or happenstance to check off every indicated foul criterion.

Buyer beware, as I pressed earlier. I will close this discussion with a moment of reflection. Abraham Lincoln is credited with the famous line, “You can fool all the people some of the time and some of the people all the time, but you cannot fool all the people all the time.” We are currently in a mode of fooling some of the people some of the time when it comes to generative AI mental health therapy chatbots.

With proper and balanced scrutiny by regulators and lawmakers, we hopefully will reduce those frequencies and aim, too, to ensure that we don’t get into the plight of fooling all the people all the time. That is a nightmare we must avoid.