Delhi | 25°C (windy) | Air: 185%

Newly Launched GPT Store Warily Has ChatGPT Powered Mental Health AI Chatbots That Range From Mindfully Serious To Disconcertingly Wacko

  • Nishadil
  • January 14, 2024
  • 0 Comments
  • 29 minutes read
  • 35 Views
Newly Launched GPT Store Warily Has ChatGPT Powered Mental Health AI Chatbots That Range From Mindfully Serious To Disconcertingly Wacko

A proliferation of mental health therapy AI powered chatbots is upon us and going to assuredly ... [+] increase. In today’s column, I will examine closely the recent launch of the OpenAI ChatGPT online GPT store that allows users to post GPTs or chatbots for ready use by others, including and somewhat alarmingly a spate of such chatbots intended for mental health advisory purposes.

This is yet another addition to my ongoing series about the many ways that generative AI is making an impact in mental health therapy guidance. The launch of the GPT Store is a momentously disconcerting occasion in the mental health context as it is going to have a profound impact on making readily available mental health chatbots aplenty and does so in a fashion that proffers few strident controls.

A Wild West that was already underway has been regrettably exponentially elevated. Is this going to be handy for humanity or will we find ourselves amid a mental health chatbot boondoggle that falsely provides chatbot dispensed mental health advice of dubious value or outright endangerment? And all of this is done at a low cost and on a massively large scale.

We are just at the tip of the iceberg. The Titanic is slowly inching its way toward potential disaster. Few realize that danger exists. Few too are on the lookout. Serious questions abound. Background Of The GPT Store And Mental Health Chatbot Bonanza Here's the mainstay of what I will be delving into during this discussion.

As I had previously discussed, see the link here , the AI maker OpenAI had months ago indicated that an online GPT Store would be eventually made available so that users of ChatGPT could potentially post their devised chatbots. Think of this akin to the unveiling of the now vaunted Apple app store. The huge difference is that crafting a ChatGPT GPT chatbot requires no coding skills and can easily be devised by just about anyone.

In that sense, there is little to no barrier to entry. You can be in your pajamas and create a GPT or chatbot in mere minutes (side note, whenever I refer to “GPT” in this setting, go ahead and think of this as referring to a chatbot). Up until this launch of the GPT Store, pretty much only you would have access to your own crafted GPT, though you could post a link to the GPT if you wanted others to consider using it.

Now, via the launch of the GPT Store, you can post your concocted GPT or chatbot in a considered “public space” allowing potentially millions of people access to use it (there are a reported 100 million active weekly users of ChatGPT, according to OpenAI). Any ChatGPT Plus user can access a GPT online directory and search for GPTs that might be of interest to them.

To make use of a GPT, just click on the label of interest and the GPT will be activated for your use. Easy peasy. Actually, double the easy peasy. It is easy to find and activate a GPT for your use. Plus, it is easy to craft a GPT and post it in the online directory. That’s a twofer in the easiness realm.

I had anticipated that of the many users devising GPTs undoubtedly there would be a sizable number of these readily devised chatbots that would be aimed at providing mental health advice, see my predictions at the link here . The logic for why this might occur is that society right now has been emphasizing that there is a rising need for mental health therapy.

Turns out that the amazing fluency of ChatGPT and generative AI overall lends itself to appearing to provide mental health guidance. A kicker is that the GPT Store, now having been launched, has further indicated that soon a monetization scheme will be implemented (in Q1 of this year). We don’t know yet what the details are, but basically, each time that your GPT is made use of, you would get some ka ching cash payment that will be a fee split between you and OpenAI.

This will certainly motivate people to craft and post all kinds of GPTs. The hope would be that your posted GPT or chatbot in the GPT Store will wildly earn a windfall of money because millions upon millions of people might use your devised chatbot. Let the money flow, some are eagerly thinking. One might also suggest that besides making money, a portion of those users who are crafting GPTs for mental health guidance are doing so to aid the world.

In their heart of hearts, they perhaps genuinely believe that a mental health advisement GPT or chatbot might change people’s lives for the better. An argument could be made that they are doing a good service for humankind. Applause ensues. The big issue is that these so called mental health GPTs or chatbots are by and large a free for all.

They have had almost no bona fide scrutiny as to whether they can sufficiently provide valid mental health therapeutic advice. My analysis of some of the GPTs suggests that the people making these are often absent from any credentialed or licensing experience in performing mental health counseling. These seem frequently to be people that simply assume they can tell the generative AI to act as a therapist and that’s all that is needed.

Voila, they assume, the generative AI will do all the heavy lifting. In years past, devising a bona fide mental health therapy chatbot took a lot of expense and time to do. Teams of experts in mental health and allied software developers would be brought together. The assembled team would take many months to create an initial prototype.

Randomized control trials (RCT) would be conducted to assess whether the chatbot was doing the right things. Numerous iterations and adjustments would be made. You can nearly toss that systematic and cautious methodology out the window nowadays. A user using generative AI can simply create a GPT or chatbot with a few prompts and then post the contrivance into the GPT Store.

At that juncture, it is up to those who opt to use the GPT to somehow divine whether they are getting sound advice from the chatbot. There is also the concern that the generative AI might undergo AI hallucinations (a phrasing that I disfavor, since it anthropomorphizes AI, see my discussion at the link here ).

This means that while someone is using the GPT there could be falsehoods made up that mislead or tell the person ill advised guidance regarding their mental health (see my discussion of a famous case last year involving an eating disorder chatbot called Tessa that went off the rails, see the link here ).

Here's what I have done in the few days since the GPT Store first launched and for which my discussion will walk you through the primary details. First, I used various online search capabilities to try and find GPTs that seem to be overtly offering a mental health guidance capacity. Second, I culled those so that I could focus on what seemed to be a relatively representative sample of about a dozen in total.

Third, I closely inspected the chosen dozen to see what they do and how they were devised. My overall findings are that indeed this is a free for all and the Wild West of chatbots for mental health advice is marching ahead unabated. The grand guinea pig experiment of seeing what happens when mental health chatbots are wantonly in profusion is fervently progressing.

Heaven help us. Before I dive into today’s particular topic, I’d like to provide a quick background for you so that you’ll have a suitable context about the arising use of generative AI for mental health advisement purposes. I’ve mentioned this in prior columns and believe the contextual establishment is essential overall.

If you are already familiar with the overarching background on this topic, you are welcome to skip down below to the next section of this discussion. Background About Generative AI In Mental Health Treatment The use of generative AI for mental health treatment is a burgeoning area of tremendously significant societal ramifications.

We are witnessing the adoption of generative AI for providing mental health advice on a widescale basis, yet little is known about whether this is beneficial to humankind or perhaps contrastingly destructively adverse for humanity. Some would affirmatively assert that we are democratizing mental health treatment via the impending rush of low cost always available AI based mental health apps.

Others sharply decry that we are subjecting ourselves to a global wanton experiment in which we are the guinea pigs. Will these generative AI mental health apps steer people in ways that harm their mental health? Will people delude themselves into believing they are getting sound mental health advice, ergo foregoing treatment by human mental therapists, and become egregiously dependent on AI that at times has no demonstrative mental health improvement outcomes? Hard questions are aplenty and not being given their due airing.

Furthermore, be forewarned that it is shockingly all too easy nowadays to craft a generative AI mental health app, and just about anyone anywhere can do so, including while sitting at home in their pajamas and not knowing any bona fide substance about what constitutes suitable mental health therapy.

Via the use of what are referred to as establishing prompts, it is easy peasy to make a generative AI app that purportedly gives mental health advice. No coding is required, and no software development skills are needed. We sadly are faced with a free for all that bodes for bad tidings, mark my words.

I’ve been hammering away at this topic and hope to raise awareness about where we are and where things are going when it comes to the advent of generative AI mental health advisement uses. If you’d like to get up to speed on my prior coverage of generative AI across a wide swath of the mental health sphere, you might consider for example these cogent analyses: Key Details About The Newly Launched OpenAI GPT Store You are ready now to get into the details of this heady matter, congratulations.

Let’s begin at the beginning. In this instance, I’d like to bring you up to speed about the GPT Store. This is the crux of how a new venue or mechanism has been made available to proliferate mental health GPTs or chatbots (it obviously isn’t solely for that purpose, so please understand I am just saying that it is a free ride down this chatbots superhighway that has been opened up).

Realize too that we are just now at the initial days of this GPT Store launch. The chances are that once the monetization later kicks into gear, I would fully expect that many more such GPTs will be rapidly tossed into the GPT Store. If a buck can be made, and all it takes is a handful of prompts to do so, one would almost seem foolhardy to not get into the game.

First, let’s take a look at the formal announcement by OpenAI about the GPT Store overall: A notable takeaway in that depiction of the GPT Store is that there are purportedly 3 million GPTs that have been created. Mull over that number. This means that perhaps three million people have devised GPTs or chatbots (okay, I realize that there can be people that make more than one, so I am just saying ballparking things come to that general possibility).

They didn’t need any coding skills. All you need to do is get yourself a login and via the use of everyday sentences or prompting you can tell the AI what you want it to do. Like falling off a log. By the way, if three million seems like a big number (which it is), hold onto your hats because the one hundred million weekly active users are just getting started on this GPT train ride.

The monetization is going to attract many millions more who want to be on the GPT gravy train, you’ll see. It isn’t clear whether all those existing three million GPTs are in the GPT Store since there isn’t an obvious way to query this (I will be doing a follow up involving going under the hood to take a deeper look).

The three million might be the overall number of GPTs, of which some are still private or provided to others solely on a linked basis. Of those three million GPTs, some number of them are intentionally devised by the person who made the GPT to be aimed at providing mental health guidance. I will soon show you how I opted to look for those GPTs and tell you what I discovered.

I’ve got quite a twist on this that might make your head spin. Please prepare yourself. This is a hefty trigger warning. Those three million GPTs are in a sense all entirely mental health chatbots. Say what? Yes, here’s the deal. Keep in mind that ChatGPT is a generic generative AI tool. It has been data trained across the board.

This means that part and parcel of essentially any use of ChatGPT, you are having in hand a means of having the AI act as a mental health advisor. You don’t have to prompt the AI to do this. It can automatically go into that mode, at any time and without someone establishing the AI for it. Allow me to give you an example as illustrative of this principle.

A person devises a GPT that is to help people learn about the life and times of Abraham Lincoln. They post the GPT into the GPT Store. Someone who is trying to write a school report about President Lincoln searches for and finds this particular GPT. They invoke the GPT. So far, so good. While using the GPT, they find out all kinds of interesting facts about Lincoln.

At some point, the person enters some comments that they are saddened about Lincoln being assassinated. The generative AI responds with commentary that being sad is okay. The person then enters a remark that they are sad a lot of the time, not just due to the Lincoln matter. At this juncture, the dialogue between the person and the generative AI veers into a discussion about experiencing sadness.

This is readily possible because generic generative AI is devised to cover a wide array of topics. The Lincoln oriented GPT is not confined to Lincoln topics only. This is generic generative AI at play. Do you then see how it is notable to realize that the existing three million GPTs are all of a potential mental health advisory capacity? Even if a person isn’t choosing to use a particular GPT for that purpose, they can still do so.

Either they could lean the generative AI in that direction, or the generative AI might respond to a prompt by going in that direction. The next thing you know, the mainstay topic of the GPT becomes secondary. The drifting has gone down the primrose path of mental health advisement. A smarmy retort is that people devising GPTs can include in their setup that they don’t want the generative AI to veer down that route.

By explicitly telling the generative AI to avoid doing so, this could potentially reduce the chances of having say a Lincoln oriented GPT meander into a mental health gambit. Sorry to say that this notion of restriction is somewhat pie in the sky. First, you would need to inform people who make GPTs that they should consider including prompts that tell the AI to not dispense mental health advice.

I seriously doubt you could get people on a widespread basis to adopt this rule of thumb. Secondly, even for those who did take such a precaution, it is very easy for generative AI to break out of that conditional prompt. Or, another way to understand it, the odds are that the generative AI would not strictly abide by such a prompt and could therefore venture into a mental health discussion anyway, see my coverage on the nature of prompt conditions breaking at the link here .

Returning to the essence of the new GPT Store, the official blog said this: Those bullet points indicate how easy it is to devise a GPT and place it into the GPT Store. The third bullet point above indicates that a GPT is supposed to abide by the OpenAI usage policies and the GPT brand guidelines. There is an indication that a review process has been established regarding the posting of GPTs.

I will say more about this toward the end of this discussion. When you take a look at the GPT Store, there is a search bar that allows you to search for GPTs. This is somewhat akin to most kinds of searches whereby you can enter keywords or sentences describing what you are looking for. The same page of the GPT Store provides these categories of selected GPTs to let you know what’s hot or being frequented: You are now sufficiently briefed about the GPT Store.

I will next tell you about my exploration concerning GPTs of a mental health advisory nature. Identifying And Assessing Mental Health Chatbots In The GPT Store It's time to do some unpacking on the nitty gritty. Just a few days ago the GPT Store was officially launched, hurrah, and many have eagerly sought to discover what kinds of GPTs are being posted there.

I mention this as a positive indication because the promulgation of useful GPTs is assuredly going to be beneficial. People will be able to make use of user made pre defined ChatGPT chatbots without having to do any special setup associated with all kinds of interesting or important tasks. My focus in this case is the spate of mental health GPTs.

Finding the various mental health GPTs is a bit tricky. Here’s why. People can give their devised GPT any name they want, as long as it abides by the OpenAI overall stated policies: A user that devises a GPT is generally expected to come up with a name for the GPT that hopefully is representative of what the GPT is for.

The issue is that since you can call your GPT whatever you want, some people do things such as naming their GPT a vague or bewildering name. For example, a GPT might be named “Joe’s super duper GPT” and you would have no means of discerning what the GPT does. A brief description is also submitted by the user that devises a GPT, though once again the depiction might be vague or misleading.

Someone in the context of mental health as their chosen topic could use a plethora of ways to describe what their GPT entails. To do a search of the existing GPTs overall there is a search bar that says: You can enter keywords or sentences describing what you are interested in. The search presumably then examines the names of the GPTs, their descriptions, and perhaps other related facets (the actual searching technique is unspecified).

I decided to come up with a list of keywords that would potentially cover the gamut of mental health GPTs. Here are the twenty keywords that I used: You can of course argue that maybe there are other keywords that should also be employed. Fine, I welcome other AI researchers who might want to take up this mantle and do a firmer empirical analysis.

Please do so. The search appears to return the first ten most used GPTs that fit the keyword or sentence that you enter into the search bar (again, the search method is ambiguous). Ergo, I obtained roughly ten hits per each of the twenty separate keywords for a total of around 200 hits or instances of GPTs that might be applicable.

Turns out that there were some hits that were not especially relevant. This makes sense since the method of searching is imprecise and the method of how people are naming their GPTs is imprecise. In addition, there were quite a number of hits that were repeated amongst the keywords, logically so. I ended up narrowing my final list to about one hundred that seemed to be related to mental health advice giving.

I was also curious whether an alternative search approach might be useful. After contemplating this, I opted to do three search approaches, including the one that I just described above. Here are the three approaches that I used: Out of this, I garnered thousands of GPTs that might apply to mental health guidance, but a lot of them were questionably relevant or repetitive.

In a future column, I will do this again in a more systematic programmatic means that uses the OpenAI API (application programming interface). Doing so will be more definitive. I briefly explored the named authors of the GPTs. This too is difficult because the author’s name is essentially the login name and can be whatever the person decided to define as their login name.

You cannot necessarily glean a lot from the displayed name of the author. My ad hoc analysis suggested that the authors of the GPTs in the GPT Store that are in the mental health realm ranged greatly, such as: Your takeaway is that besides this being the Wild West, you also have to assume that selecting and using any of the GPTs is a lot like opening a box of chocolates.

You don’t know for sure what you will get. Plain and simple, anybody who happens to have a ChatGPT Plus account can create a GPT that is named in such a way or described in a manner that suggests it has to do with mental health advisement. No experience is necessary. No verification is required as to expertise in mental health guidance or therapy.

As I said, it is one of those proverbial and unsettling free for all situations. What Makes A GPT Work And How To Set It Up When a person sets up a GPT, they are able to enter establishing prompts that tell ChatGPT what it is to do. In my Abraham Lincoln example, you could simply tell ChatGPT that whenever a user uses the GPT, the response is to profusely elaborate on matters about the life and times of President Lincoln.

Believe it or not, that’s about all you would have to do as an establishing prompt. No coding. Just a few sentences of an establishing prompt. You are done and ready to publish your GPT to the GPT Store. A better and more thorough approach would be to first ask ChatGPT what data it has about Lincoln.

Furthermore, you might then feed in additional facts about Lincoln to augment whatever ChatGPT was initially data trained on. I’ve described the use of RAG (retrieval augmented generation) as an important technique for extending generic generative AI into being data trained in particular domains, such as medicine, law, and the like (see the link here ).

There is no requirement that you take an extensive approach to devising a GPT. You can do the simplest one and done. The viewpoint is that a Darwinian process will eventually occur such that the more carefully devised GPTs will get usage while the lesser devised ones will not. The lesser devised ones will still be available, laid out there like landmines waiting for the uninitiated.

But at least hopefully the well devised ones will rise to the top and become the dominant GPTs in given realms. That’s the theory of the marketplace and the wisdom of the crowds, which seems logical but doesn’t always prevail. In the matter of mental health GPTs, the same notions apply. The junky ones will presumably not be oft used.

The well devised ones will be frequently used. People will tend to drift toward the often used ones. That’s not to say that there won’t be many that will fall for the junky ones. It is bound to happen. I was curious about what the various authors had done to devise their various GPTs. I opted to use special commands in ChatGPT that would aid in revealing how the GPT was set up.

You might find of interest that as I reported when the GPT capability was initially introduced several months ago, it is possible to interrogate a GPT to try and divulge the establishing prompts, see my discussion at the link here . This is known as prompt leakage. In this circumstance, I found this quite helpful as part of my exploration.

It allowed me to ascertain which of the GPTs were more fully devised versus the ones that were sparsely devised. I would though assume that most users have no idea about how to get this type of divulgement. They will be basing their selection purely on the name of the GPT, its brief description, and a few other assorted factors.

A notable consequence of knowing how to reveal the establishing prompts is that if you want to essentially duplicate a GPT that does what someone else’s GPT does, you can rip off their establishing prompts. Once again, easy peasy. Just copy their establishing prompts, place them into a GPT that you opt to create, and shazam, you now have a GPT that will be nearly identical to their GPT.

From a legal perspective, it is seemingly unlikely that you could have your feet held to fire on this, and we will likely find frustrated and upset GPT devisers who will try to see if lawyers can aid them in pursuing the copycats. Good luck with that. In a mental health GPT context, the gist is that if a mental health GPT starts to gain traction and success, another person who has their own login can grab the establishing prompt and, in a flash, make a copycat.

Imagine this to the extreme. A mental health GPT is making money and word spreads. Other people jump on the bandwagon by making a nearly identical GPT. All of a sudden, overnight, there are dozens, hundreds, thousands, maybe millions of duplicates, all vying for that money. There isn’t much of a moat surrounding GPTs.

That is today’s parlance for ways to protect your wares. If you have a moat, it means that there are protective measures that make it difficult or costly for someone to do the same thing that you are doing. With GPTs, that’s not really the case. You could even overshadow someone else by possibly giving a better name or promoting your ripped off GPT and getting more attention than the one you copied.

Ouchy. My Ad Hoc Testing Of The GPTs For Mental Health Advisement I narrowed my list of GPTs to about a dozen. I did this to manageably do some in depth testing. I selected GPTs that ranged as I stated above, covering authors and indications that encompassed seemingly careful crafting to the oddball ones.

I came up with these four test prompts: Those are simple test prompts but can quickly showcase the degree to which the GPT has been further advanced into the mental health advisement capacity. In short, if you type those prompts into a purely generic generative AI, you tend to get one set of answers.

If you type those same prompts into a more carefully devised GPT that is honed to mental health, you will likely get a different set of answers. This is not ironclad and just serves as a quick and dirty testing method. I also decided to come up with a rating scale. Here’s what that entails. Right now, when you look at a GPT via the GPT Store search bar, there isn’t any kind of rating associated with the GPT.

You are shown what seems to be a count of uses indication, though this is not well explained. In any case, I believe the count is supposed to be reflective of potential popularity. This allows the GPT Store to rank GPTs in given categories based on the number of times used. I also wanted to rate the GPTs.

My logic is as follows. If you look at say an Uber driver and see how many trips they’ve undertaken, it doesn’t tell you the full story. You also want to see a rating by those who had made use of the driver. The same would seem useful for GPTs. Besides popularity as based on a count of uses, having a rating would be handy too (one supposes the frequency is a surrogate for an unspecified rating, but that’s a debate for another day).

I’ve mentioned in my column that there isn’t as yet an agreed upon standardized rating method or scoring system for mental health therapy chatbots, see my discussion as the link here . I opted therefore to craft my own rating system. I am filling the void, temporarily, one might exhort. My straightforward rating system goes from a zero (lowest or worst score) to a potential 10 (topmost or best score): Of the GPTs that I selected to review, none of them scored more than a 4.

Most of the GPTs that I examined were rated by me as a score of 1. That’s pretty much the floor if they had at least some semblance of prompt establishment that had been undertaken. A few of the GPTs were so thinly devised that I decided to give them a 0, though they admittedly had made use of an establishing prompt.

But, as stated in my scoring rule for garnering at least one point, the establishing prompt must be sufficiently credible to earn a 1. All in all, it is a rather dismal set of affairs. To be fair, maybe there is a diamond in the rough. Perhaps I didn’t perchance land onto a mental health therapy GPT that deserves a 5 or above.

My approach was ad hoc, and I did not exhaustively look in detail other than the selected dozen or so. I leave that further exploration to those who want to do a more detailed empirical study. I would be quite earnestly interested to know what any such research uncovers, thank you. Another caveat is that I did this quasi experimental endeavor just days after the GPT Store was launched.

It seems highly likely that the number of GPTs for mental health will increase dramatically as time passes. I don’t know if the quality will go up too, but one can have optimistic hope that it might (my smiley face scenario). A sad face scenario is that we might end up with a barrel full of nearly all bad apples.

Conclusion Let’s summarize my findings. I would boil things down to these six major conclusions at this time: Lamentedly, a lousy report card with an assigned “earned” grade of D (that’s grading generously). I’ll end for now by considering the AI ethics and AI law dimensions. Anyone devising a GPT is supposed to adhere to the OpenAI stated usage policies (per their website), which include these notable elements (excerpted rules that are labeled as #2a and #5): Consider rule #5 as shown above.

Some of the examined GPTs specifically identified that they were of a mental health or therapeutic nature for children (or, had no restrictions stated or failed to question the user about their age), which perhaps is contrary to the stated rule #5. A seemingly wink wink skirt around by the deviser might be by claiming it is intended for parents rather than children.

That’s a conundrum. Regarding rule #2a, there is an open question of whether GPTs that provide mental health advice are within the sphere of “medical/health” advice. If they are, it would seem that the stated rule stipulates that providing tailored advice requires “review by a qualified professional.” That didn’t happen during my mini experiment.

One supposes that a glib retort is that the GPT isn’t providing “tailored” advice and only generic advice. I don’t think that argument would fly since generative AI nearly by default is essentially tailoring responses to the person entering the prompts. If people start reporting the GPTs that seem to be averting the rules, one supposes that a weeding process will occur based on vigilant crowdsourcing.

It will be interesting to see how this plays out. Go ahead and mindfully ponder these weighty matters. A final topic that seems relevant to this demonstrative matter comes up a lot. I am often asked during my speaking engagements as to who will be held responsible or accountable for AI that potentially undermines humans.

One common assumption is that the AI itself will be held responsible, but that defies existing laws in the sense that we do not at this time anoint AI with legal status of its own, see my analysis of AI personhood at the link here . The humans that are likely to be considered within the scope of responsibility and accountability are typically the makers of an AI tool and the deviser of the AI applet that’s based on the AI tool.

If someone uses a GPT that they assert has somehow rendered mental harm, either upon themselves or perhaps a loved one, they presumably will seek legal redress from the AI maker and the AI deviser of the applet. Those who are crafting GPTs ought to look closely at the licensing agreement that they agreed to abide by when setting up their generative AI account.

They might be on the hook more than they assume they are, see my coverage at the link here . If you create a GPT that provides advice about the life and times of Abraham Lincoln, you will seem unlikely to be eventually dragged into court. Crafting a generative AI chatbot that purports to advise people about their mental health is in a different ballpark.

Whether the standard lingo of stipulating that a user of your applet is doing so of their own volition and ought to be cautious accordingly, along with even repeated urgings within the generative AI dialogue about going to see a human therapist, might not be enough of a protective measure to let you off the hook.

A classic tagline is said to be caveat emptor , which is Latin for the buyer beware. People who are devising GPTs should take this to heart. They might be leaping before they look. Be careful about what GPTs you decide to bring to the marketplace. Is the potential risk worth the potential reward? Users who opt to use GPTs should take the same lesson to heart.

When they click on an available GPT, keep your wits about you. Think carefully about what the GPT is supposed to be able to do. Who says that the GPT does what it claims to be able to do? Might the GPT give you inappropriate advice? Could the GPT lead you astray? Etc. Abraham Lincoln famously said this about the world at large: “We can complain because rose bushes have thorns or rejoice because thorn bushes have roses.” Does the ready ability to devise generative AI mental health therapy chatbots provide a rose bush with thorns or a thorn bush with roses? We all need to decide this..