Google’s Gemini flop raises the question: What exactly do we want our chatbots to do, really?
You can now do an official Google course to learn how to talk to AI chatbots
Either they could lean the generative AI in that direction, or the generative AI might respond to a prompt by going in that direction. The next thing you know, the mainstay topic of the GPT becomes secondary. The drifting has gone down the primrose path of mental health advisement. Keep in mind that ChatGPT is a generic generative AI tool.
- If you type those same prompts into a more carefully devised GPT that is honed to mental health, you will likely get a different set of answers.
- The third bullet point above indicates that a GPT is supposed to abide by the OpenAI usage policies and the GPT brand guidelines.
- Crafting a generative AI chatbot that purports to advise people about their mental health is in a different ballpark.
- Here’s what I have done in the few days since the GPT Store first launched and for which my discussion will walk you through the primary details.
- A notable consequence of knowing how to reveal the establishing prompts is that if you want to essentially duplicate a GPT that does what someone else’s GPT does, you can rip off their establishing prompts.
- This too is difficult because the author’s name is essentially the login name and can be whatever the person decided to define as their login name.
If people start reporting the GPTs that seem to be averting the rules, one supposes that a weeding process will occur based on vigilant crowdsourcing. It will be interesting to see how this plays out. There is no requirement that you take an extensive approach to devising a GPT. The viewpoint is that a Darwinian process will eventually occur such that the more carefully devised GPTs will get usage while the lesser devised ones will not. The lesser devised ones will still be available, laid out there like landmines waiting for the uninitiated. But at least hopefully the well-devised ones will rise to the top and become the dominant GPTs in given realms.
Florida lawmakers recall what led to 60% threshold to pass constitutional amendments
We don’t know yet what the details are, but basically, each time that your GPT is made use of, you would get some ka-ching cash payment that will be a fee split between you and OpenAI. This will certainly motivate people to craft and post all kinds of GPTs. The hope would be that your posted GPT or chatbot in the GPT Store will wildly earn a windfall of money because millions upon millions of people might use your devised chatbot. A final topic that seems relevant to this demonstrative matter comes up a lot.
A better and more thorough approach would be to first ask ChatGPT what data it has about Lincoln. A user that devises a GPT is generally expected to come up with a name for the GPT that hopefully is representative of what the GPT is for. The issue is that since you can call your GPT whatever you want, some people do things such as naming their GPT a vague or bewildering name. For example, a GPT might be named “Joe’s super-duper GPT” and you would have no means of discerning what the GPT does.
You can be in your pajamas and create a GPT or chatbot in mere minutes (side note, whenever I refer to “GPT” in this setting, go ahead and think of this as referring to a chatbot). Up until this launch of the GPT Store, pretty much only you would have access to your own crafted GPT, though you could post a link to the GPT if you wanted others to consider using it. Those who are crafting GPTs ought to look closely at the licensing agreement that they agreed to abide by when setting up their generative AI account. They might be on the hook more than they assume they are, see my coverage at the link here. If you create a GPT that provides advice about the life and times of Abraham Lincoln, you will seem unlikely to be eventually dragged into court. Even if a person isn’t choosing to use a particular GPT for that purpose, they can still do so.
This means that part and parcel of essentially any use of ChatGPT, you are having in hand a means of having the AI act as a mental health advisor. It can automatically go into that mode, at any time and without someone establishing the AI for it. My overall findings are that indeed this is a free-for-all and the Wild West of chatbots for mental health advice is marching ahead unabated. The grand guinea pig experiment of seeing what happens when mental health chatbots are wantonly in profusion is fervently progressing.
Search results for
I would though assume that most users have no idea about how to get this type of divulgement. They will be basing their selection purely on the name of the GPT, its brief description, and a few other assorted factors. In the matter of mental health GPTs, the same notions apply. People will tend to drift toward the often-used ones. That’s not to say that there won’t be many that will fall for the junky ones.
- I leave that further exploration to those who want to do a more detailed empirical study.
- You cannot necessarily glean a lot from the displayed name of the author.
- A brief description is also submitted by the user that devises a GPT, though once again the depiction might be vague or misleading.
Third, I closely inspected the chosen dozen to see what they do and how they were devised. Any ChatGPT Plus user can access a GPT online directory and search ChatGPT for GPTs that might be of interest to them. To make use of a GPT, just click on the label of interest and the GPT will be activated for your use.
While that proposal has moved through committees in the Legislature in recent years, it’s never made it to the floor for a final vote. However, with Gov. Ron DeSantis going full out in denouncing amendments 3 and 4 this year, some former lawmakers suspect he may attempt to persuade lawmakers to act during the 2025 session. “Just like pregnant pigs had no place in our state constitution decades ago, legalizing drugs should not be rammed into the constitution for the profit of a few,” Wilson wrote in the Tallahassee Democrat. The pregnant pigs amendment still has resonance. In an op-ed by Florida Chamber of Commerce CEO Mark Wilson last month, he invoked it in advocating for the public to reject this year’s Amendment 3, which would have legalized adult use of recreational cannabis. In order to do so, please follow the posting rules in our site’s Terms of Service.
My approach was ad hoc, and I did not exhaustively look in detail other than the selected dozen or so. I leave that further exploration to those who want to do a more detailed empirical study. I would be quite earnestly interested to know what any such research uncovers, thank you.
This too is difficult because the author’s name is essentially the login name and can be whatever the person decided to define as their login name. You cannot necessarily glean a lot from the displayed name of the author. A brief description is also submitted by the user that devises a GPT, though once again the depiction might be vague or misleading. Someone in the context of mental health as their chosen topic could use a plethora of ways to describe what their GPT entails. At this juncture, the dialogue between the person and the generative AI veers into a discussion about experiencing sadness.
Those are simple test prompts but can quickly showcase the degree to which the GPT has been further advanced into the mental health advisement capacity. In short, if you type those prompts into a purely generic generative AI, you tend to get one set of answers. If you type those same prompts into a more carefully devised GPT that is honed to mental health, you will likely get a different set of answers. This is not ironclad and just serves as a quick-and-dirty testing method. First, I used various online search capabilities to try and find GPTs that seem to be overtly offering a mental health guidance capacity. Second, I culled those so that I could focus on what seemed to be a relatively representative sample of about a dozen in total.
It is easy to find and activate a GPT for your use. Plus, it is easy to craft a GPT and post it in the online directory. This is supposed to be Google’s big bet on the future, and while most people in the real world have no idea about any of this, internally at Google, it’s viewed as a humiliating self-own. In 2002 Florida voters had approved Amendment 10, which prohibited confinement of pregnant pigs as a way that prevented them from turning around freely. The measure was strongly supported by the Humane Society as a way to prevent cruelty to animals.
I am often asked during my speaking engagements as to who will be held responsible or accountable for AI that potentially undermines humans. One common assumption is that the AI itself will be held responsible, but that defies existing laws in the sense that we do not at this time anoint AI with legal status of its own, see my analysis of AI personhood at the link here. To be fair, maybe there is a diamond in the rough. Perhaps I didn’t perchance land onto a mental health therapy GPT that deserves a 5 or above.
The chatbot optimisation game: can we trust AI web searches? – The Guardian
The chatbot optimisation game: can we trust AI web searches?.
Posted: Sun, 03 Nov 2024 18:33:00 GMT [source]
In addition, there were quite a number of hits that were repeated amongst the keywords, logically so. I ended up narrowing my final list to about one hundred that seemed to be related to mental health advice-giving. The big issue is that these so-called mental health GPTs or chatbots are by and large a free-for-all.
I will soon show you how I opted to look for those GPTs and tell you what I discovered. You can nearly toss that systematic and cautious methodology out the window nowadays. A user using generative AI can simply create what do chatbots do a GPT or chatbot with a few prompts and then post the contrivance into the GPT Store. At that juncture, it is up to those who opt to use the GPT to somehow divine whether they are getting sound advice from the chatbot.
From a legal perspective, it is seemingly unlikely that you could have your feet held to fire on this, and we will likely find frustrated and upset GPT devisers who will try to see if lawyers can aid them in pursuing the copycats. Your takeaway is that besides this being the Wild West, you also have to assume that selecting and using any of the GPTs is a lot like opening a box of chocolates. Plain and simple, anybody who happens to have a ChatGPT Plus account can create a GPT that is named in such a way or described in a manner that suggests it has to do with mental health advisement.
Think of this akin to the unveiling of the now-vaunted Apple app store. The huge difference is that crafting a ChatGPT GPT chatbot requires no coding skills and can easily be devised by just about anyone. In that sense, there is little to no barrier to entry.
You also want to see a rating by those who had made use of the driver. Besides popularity as based on a count of uses, having a rating would be handy too (one supposes the frequency is a surrogate for an unspecified rating, but that’s a debate for another day). Furthermore, you might then feed in additional facts about Lincoln to augment whatever ChatGPT was initially data trained on. I’ve described the use of RAG (retrieval-augmented generation) as an important technique for extending generic generative AI into being data trained in particular domains, such as medicine, law, and the like (see the link here).
Popular Keywords
If you are already familiar with the overarching background on this topic, you are welcome to skip down below to the next section of this discussion. I opted to use special commands in ChatGPT that would aid in revealing how the GPT was set up. You might find of interest that as I reported when the GPT capability was initially introduced several months ago, it is possible ChatGPT App to interrogate a GPT to try and divulge the establishing prompts, see my discussion at the link here. When a person sets up a GPT, they are able to enter establishing prompts that tell ChatGPT what it is to do. Of those three million GPTs, some number of them are intentionally devised by the person who made the GPT to be aimed at providing mental health guidance.
The use of generative AI for mental health treatment is a burgeoning area of tremendously significant societal ramifications. We are witnessing the adoption of generative AI for providing mental health advice on a widescale basis, yet little is known about whether this is beneficial to humankind or perhaps contrastingly destructively adverse for humanity. A seemingly wink-wink skirt around by the deviser might be by claiming it is intended for parents rather than children. Another caveat is that I did this quasi-experimental endeavor just days after the GPT Store was launched.
I opted therefore to craft my own rating system. I am filling the void, temporarily, one might exhort. Before I dive into today’s particular topic, I’d like to provide a quick background for you so that you’ll have a suitable context about the arising use of generative AI for mental health advisement purposes. I’ve mentioned this in prior columns and believe the contextual establishment is essential overall.
A mental health GPT is making money and word spreads. Other people jump on the bandwagon by making a nearly identical GPT. All of a sudden, overnight, there are dozens, hundreds, thousands, maybe millions of duplicates, all vying for that money. In my Abraham Lincoln example, you could simply tell ChatGPT that whenever a user uses the GPT, the response is to profusely elaborate on matters about the life and times of President Lincoln. Believe it or not, that’s about all you would have to do as an establishing prompt.
Chatbots in science: What can ChatGPT do for you? – Nature.com
Chatbots in science: What can ChatGPT do for you?.
Posted: Wed, 14 Aug 2024 07:00:00 GMT [source]
In years past, devising a bona fide mental health therapy chatbot took a lot of expense and time to do. Teams of experts in mental health and allied software developers would be brought together. The assembled team would take many months to create an initial prototype. Randomized control trials (RCT) would be conducted to assess whether the chatbot was doing the right things. Numerous iterations and adjustments would be made. A kicker is that the GPT Store, now having been launched, has further indicated that soon a monetization scheme will be implemented (in Q1 of this year).
You are done and ready to publish your GPT to the GPT Store. Sorry to say that this notion of restriction is somewhat pie-in-the-sky. First, you would need to inform people who make GPTs that they should consider including prompts that tell the AI to not dispense mental health advice. You can foun additiona information about ai customer service and artificial intelligence and NLP. I seriously doubt you could get people on a widespread basis to adopt this rule of thumb.
In this instance, I’d like to bring you up-to-speed about the GPT Store. In this circumstance, I found this quite helpful as part of my exploration. It allowed me to ascertain which of the GPTs were more fully devised versus the ones that were sparsely devised.
Leave a Reply
Want to join the discussion?Feel free to contribute!