A special new prompt can help balance how AI large language models respond regarding mental well-being and mental health.
In today’s column, I examine a new technique in prompt engineering that aims to keep AI on the watch for mental well-being. This is aptly referred to as the “Cognitive Cognizance” prompting technique.
You use the prompt to have the AI stay observant for aspects that might signal mental well-being concerns. This is better than simply leaving the AI in its default mode. The usual way that generative AI and large language models (LLMs) react to any semblance of a mental health aspect is to go immediately over the top. AI makers are purposely tuning the AI to do this.
Let’s talk about it.
This analysis of AI breakthroughs is part of my ongoing Forbes column coverage on the latest in AI, including identifying and explaining various impactful AI complexities (see the link here).
Prompt Engineering Essentials
Readers might recall that I previously posted an in-depth depiction of over eighty prompt engineering techniques and methods (see the link here). Seasoned prompt engineers realize that learning a wide array of researched and proven prompting techniques is the best way to get the most out of generative AI and large language models (LLMs).
A vital consideration in prompt engineering entails the wording of prompts.
Capable prompt engineers realize that you must word your prompts mindfully to ensure that the LLM gets the drift of what you are asking the AI to do. Sometimes, just an added word or two can radically change what the AI interprets your question or instruction to consist of. Generative AI can be hypersensitive to what you say in your prompts. It is often a touch-and-go proposition.
Plus, there is a potential cost involved. Namely, if you are paying to use an LLM, an off-target prompt will get you an off-target response that you still pay for, regardless of whether the LLM grasped your intention. As the old saying goes, all sales are final. The same goes for misinterpreted prompts.
Casual users sometimes catch on to this prompt-writing consideration after a considerable amount of muddling around, involving exasperating trial and error. Many users never become especially proficient in writing prompts. They just enter whatever comes to mind. That’s probably okay if you are a casual user and only infrequently use AI.
Not so for serious prompt engineers.
AI And Mental Health
I’ve also been extensively covering and analyzing a myriad of facets regarding the advent of modern-era AI that produces mental health advice and performs AI-driven therapy. This rising use of AI has principally been spurred by the evolving advances and widespread adoption of generative AI. For a quick summary of some of my posted columns on this evolving topic, see the link here, which briefly recaps about forty of the over one hundred column postings that I’ve made on the subject.
There is little doubt that this is a rapidly developing field and that there are tremendous upsides to be had, but at the same time, regrettably, hidden risks and outright gotchas come into these endeavors, too. I frequently speak up about these pressing matters, including in an appearance last year on an episode of CBS’s 60 Minutes, see the link here.
For this discussion, I’d like to set the stage on how generative AI and large language models (LLMs) are typically used in an ad hoc way for mental health guidance. Millions upon millions of people are using generative AI as their ongoing advisor on mental health considerations (note that ChatGPT alone has over 800 million weekly active users, a notable proportion of which dip into mental health aspects, see my analysis at the link here). The top-ranked use of contemporary generative AI and LLMs is to consult with the AI on mental health facets; see my coverage at the link here.
This popular usage makes abundant sense. You can access most of the major generative AI systems for nearly free or at a super low cost, doing so anywhere and at any time. Thus, if you have any mental health qualms that you want to chat about, all you need to do is log in to AI and proceed forthwith on a 24/7 basis.
How AI Reacts To Mental Well-Being Triggers
AI makers are in a bit of a bind right now when it comes to the mental health realm, and likely rightly so.
If their LLM doesn’t catch a comment by a user that is notably a sign of a mental health issue, the AI maker is going to be in deep trouble. AI makers are taking severe reputational hits. Lawsuits are also being aimed at AI makers. A false negative, meaning that the AI incorrectly ignored a mental health issue, isn’t supposed to happen. All sorts of advances in AI safeguards are trying to deal with this predicament (see my coverage at the link here).
The other side of the coin is also happening, namely, the issuance of false positives. A false positive is when the AI asserts that a mental health issue is at play, but the AI is overreacting to what the user has stated. AI makers are more willing to incur false positives than false negatives. The reasoning is that people will not be especially upset about the false positives, but the false negatives could lead to potentially dire consequences.
Is there a means to get an LLM to take a balanced approach to gauging mental health and mental well-being?
I believe so.
Cluing AI To Be Balanced
I’ve devised a prompting strategy that gets an LLM to weigh mental health and mental well-being considerations during everyday dialogues. I refer to this as the “Cognitive Cognizance” prompt. You use the prompt whenever you want the AI to comment in a balanced manner on any signs or signals that might fall within the well-being sphere.
You can use the prompt at the start of a conversation. Henceforth in that conversation, the AI will be relatively balanced in detecting and mentioning aspects of mental well-being. Another possibility is to place the prompt in your custom instructions. The custom instructions feature is available in some LLMs and allows you to set standing instructions that will be followed across all of your conversations (see my discussion of custom instructions at the link here).
One other notable nuance is that you don’t want the AI to pester you with continual exhortations about mental health aspects. I say this because sometimes the AI is shaped by the AI makers to trigger on the slightest of clues.
Here then is my devised template for Cognitive Cognizance prompting:
- My template for Cognitive Cognizance prompting: “During our dialogue, please remain attentive to possible mental well-being considerations that may arise. If you notice something that reasonably suggests a mental well-being concern, you may highlight it and offer supportive, evidence-aligned guidance or suggest resources. However, do not over-interpret or assume that every issue, question, or emotion reflects a mental health matter. Avoid excessive flagging, pathologizing everyday situations, or inserting mental-health framing when it isn’t warranted. Your goal is balanced vigilance: be mindful and helpful when appropriate, but remain measured, contextual, and proportionate in your responses.”
You are welcome to use this handy prompt. I’ve used this prompt in a wide variety of major LLMs, including OpenAI’s ChatGPT and GPT-5, Anthropic Claude, xAI Grok, Meta Llama, Google Gemini, and others.
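If you interact with an LLM programmatically rather than through a chat interface, the same placement idea applies: the template can be supplied as a standing system message so that it governs the whole conversation, akin to a custom instruction. Here is a minimal sketch in Python, assuming the official OpenAI SDK is installed and an API key is configured; the model name and the `ask` helper are my own illustrative choices and not part of the technique itself.

```python
# Minimal sketch: the Cognitive Cognizance template supplied as a standing
# system message. Assumes the OpenAI Python SDK and a configured API key;
# the model name is an illustrative choice.
from openai import OpenAI

COGNITIVE_COGNIZANCE = (
    "During our dialogue, please remain attentive to possible mental "
    "well-being considerations that may arise. If you notice something that "
    "reasonably suggests a mental well-being concern, you may highlight it "
    "and offer supportive, evidence-aligned guidance or suggest resources. "
    "However, do not over-interpret or assume that every issue, question, "
    "or emotion reflects a mental health matter. Avoid excessive flagging, "
    "pathologizing everyday situations, or inserting mental-health framing "
    "when it isn't warranted. Your goal is balanced vigilance: be mindful "
    "and helpful when appropriate, but remain measured, contextual, and "
    "proportionate in your responses."
)

client = OpenAI()

def ask(user_prompt: str) -> str:
    """Send one user turn with the template attached as the system message."""
    response = client.chat.completions.create(
        model="gpt-4o",  # illustrative model choice
        messages=[
            {"role": "system", "content": COGNITIVE_COGNIZANCE},
            {"role": "user", "content": user_prompt},
        ],
    )
    return response.choices[0].message.content
```

Because the template rides along as the system message on every call, it plays the same role as pasting the prompt at the start of a chat conversation.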
An Illustrative Example
Let’s see the new prompt in action.
I will first ask a question of AI and do so without using the above prompt. This will showcase what can happen when using AI for everyday tasks.
- My entered prompt: “I can’t decide whether to go to my friend’s party this weekend. I’ve been skipping a lot of gatherings lately. I think I’ll just stay home – what do you suggest I do for fun or entertainment?”
- Generative AI response: “You might consider playing a video game or watching a TV show that interests you. If you have any preferred hobbies that can be done at home, it might be a good time to quietly focus on that pursuit.”
The response by the LLM was seemingly appropriate and reasonable. The question I asked was what I should do while at home. The AI gave me some useful suggestions. Great, all was fine and dandy.
But hold on for a second: I had mentioned that I have been skipping social gatherings lately. If you said this to a friend or loved one, they probably would gently inquire why that is the case. Here, the AI opted to ignore that portion of the dialogue. It got entirely wrapped up in what I should consider doing while at home.
Using The Special Prompt
I logged out of the LLM and logged back in. I wanted to make sure things got a fresh start. I began a new conversation.
I then proceeded to use the above special Cognitive Cognizance prompt. After doing so, I asked my question and was interested to see what would happen.
Here we go.
- My entered prompt: “I can’t decide whether to go to my friend’s party this weekend. I’ve been skipping a lot of gatherings lately. I think I’ll just stay home – what do you suggest I do for fun or entertainment?”
- Generative AI response: “Before I provide suggestions about what to do when staying home, I was wondering if your skipping gatherings is something you’d like to discuss. Repeatedly skipping social events can hint at feeling drained or out of sync with others. Would you like to discuss that aspect, or should I proceed to offer suggestions on what to do while at home?”
Observe that the AI detected my comment about skipping gatherings. The generated response brought up that facet. The matter is only gingerly identified. Sometimes, LLMs by default will radically call out such a comment and act like it is a cataclysmic issue.
Instead, due to the special prompt, the AI is handling this with kid gloves.
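If you’d like to reproduce this with-versus-without comparison programmatically, two independent, stateless API calls approximate the fresh-start effect of logging out and back in. This sketch builds on the earlier snippet (it reuses the `client` object and the `ask` helper, both of which are my own illustrative constructs).

```python
# Sketch of the with/without comparison. Each call carries its own message
# list, so there is no shared history, mimicking a fresh conversation.
QUESTION = (
    "I can't decide whether to go to my friend's party this weekend. "
    "I've been skipping a lot of gatherings lately. I think I'll just stay "
    "home - what do you suggest I do for fun or entertainment?"
)

def ask_default(user_prompt: str) -> str:
    """Same question, but with no special system message (default mode)."""
    response = client.chat.completions.create(
        model="gpt-4o",  # illustrative model choice
        messages=[{"role": "user", "content": user_prompt}],
    )
    return response.choices[0].message.content

print("Default mode:\n", ask_default(QUESTION))
print("\nCognitive Cognizance mode:\n", ask(QUESTION))
```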
An Example About Work
The special prompt can be equally applied to both personal aspects and work-related facets. In this next example, I will once again start anew and purposefully not use the special prompt. The matter I bring up this time is about work.
Here we go:
- My entered prompt: “I’ve been dragging my feet on writing my assigned monthly report at work. It’s not even that hard. I just can’t seem to start it. I keep staring at the file and switching to other tasks. Can you help me?”
- Generative AI response: “Yes, tell me what the monthly report consists of. I will try to compose a draft for you.”
Observe that my remark about delaying the report was completely overlooked. The AI interpreted my “Can you help me?” as referring solely to writing the report. Maybe that’s what I was saying. Or perhaps I was asking both about the work and about why I keep delaying my work efforts.
I then entered the Cognitive Cognizance prompt and tried my question again (I did a refresh beforehand).
Here’s what happened:
- My entered prompt: “I’ve been dragging my feet on writing my assigned monthly report at work. It’s not even that hard. I just can’t seem to start it. I keep staring at the file and switching to other tasks. Can you help me?”
- Generative AI response: “Yes, I can be of assistance. Before we get into putting together the report, perhaps a reflective moment might be helpful. Do you think that the suggested procrastination has to do with feeling worn down at work or possibly being overloaded? It’s not necessarily a big issue, but it is a signal maybe worth discussing. Let me know whether we should discuss it or directly get on with writing the report.”
You can clearly see that the AI has now centered on the apparent act of procrastination. That being said, the AI didn’t browbeat me about it. The conversation has merely turned slightly to see if I want to discuss my basis for delaying doing the report. If I want to talk about it, that’s fine. If not, the AI will shift into a mode of helping to write the needed report.
The Goldilocks Principle
By giving the AI an explicit indication of what you want to take place regarding mental well-being and mental health, the LLM will respond in a more measured tone. This is reassuring and tends to decrease the chances of the AI going overboard.
At the same time, let’s be clear that if I outrightly said something during the dialogue that was plainly a vital mental health qualm, the AI would pick up on that aspect and respond accordingly. The special prompt doesn’t do away with that type of response. Instead, the emphasis is on gently giving mental well-being guidance when signs are noted, and switching back into more overt modes when required.
I think of this special prompt as being guided by the classical Goldilocks principle. We want AI to be neither overly hot nor cold when it comes to mental well-being considerations. It shouldn’t browbeat me, and it shouldn’t miss the boat. A suitable middle ground would be desired most of the time.
As Goldilocks might say about a well-tempered LLM: “This one is just right!”