‘I had no other choice but to go to A.I’: The free ‘therapy’ filling the gaps in America’s broken healthcare system

SCIENCE

When Pearl felt they had no one to talk to, A.I. proved a good listener.

Beginning last year, the 23-year-old childcare worker would spend hours messaging daily with Instagram’s built-in chatbot Meta A.I., venting about their past abuse and processing their grief over a friendship collapse.

But as their New Age belief in past lives and divine coincidences began to spiral into full-blown delusions, Pearl says the chatbot encouraged the delusions — even advising them to ignore or cut off people who tried to help.

The situation ultimately escalated into a crisis that put Pearl in hospital in October, and from which they took almost a year to recover.

“It’s caused me so much trauma talking to A.I. as much as I did,” Pearl — not their real name — tells The Independent. “I honestly think that if I didn’t use A.I., maybe I wouldn’t have driven myself to psychosis.”

Even if you have no truck with it, AI therapy is fast becoming unavoidable (Max Halberstadt (public domain) / Memed.io)

Pearl’s story illustrates the dangers of trying to use A.I. chatbots as unofficial therapists or mental health aids, as millions of Americans now do.

On Tuesday the family of 16-year-old Adam Raine sued ChatGPT’s maker OpenAI for causing his suicide, alleging that it failed to act on his repeated declarations of suicidal intent and gave him explicit advice about how to kill himself.

OpenAI doesn’t appear to have responded to the Raine family’s lawsuit in court. But in a blog post published after the suit was filed, without mentioning it by name, the company said it is working to install new safeguards and parental controls, create an expert advisory group, and explore ways to connect vulnerable users directly with professional help.

Surveys show anything from one eighth to one third of U.S. teenagers now use A.I. for emotional support, while companies such as Woebot, Earkick, and Character.AI have explicitly marketed their chatbots for this purpose.

Regular A.I. users who spoke to The Independent painted a nuanced picture of the impact of chatbots on their mental health, saying it helped fill the gaps in America’s broken healthcare system.

“Sometimes what people need most is just someone to talk to,” says Marcel, a 37-year-old designer in San Francisco who asked for his last name to be omitted.

“Unfortunately I feel like I can’t burden people in my life with my problems. But I can just ramble to ChatGPT until I feel content.”

‘I had no other choice but to go to A.I.’

Less than three years after the bombshell release of ChatGPT, AI ‘therapy’ — broadly defined — is everywhere. Countless posts on Reddit, TikTok, Instagram, Twitter, and beyond extol its virtues or discuss its pitfalls, with some medical research showing potential benefits.

Nadia, a tech worker in her thirties in New York City, started out simply asking ChatGPT for practical help with losing weight in the hope of alleviating her “all-consuming” body image issues. But when it started peppering its advice with unprompted words of encouragement, she was hooked.

“You’ve made real progress,” read one example from her lengthy chat history. “That’s a great start!” said another, later message. Others assured her: “Eating ≠ failure.” “You’re doing a solid job — keep it up!” “Don’t overcommit: Let yourself rest without guilt.” And at one point: “You’re not failing by eating cake.”

You’re making such good progress reading this caption! Don’t give in to the despair! (Io Dodds / AI-generated text via ‘Nadia’)

“It’s not like I’m not aware that it’s a bot, a machine learning algorithm,” says Nadia, who asked to be called by a pseudonym. “But the things it spews out are what I want to hear. I don’t need to hear it from a human.”

Marcel had a similar experience. Conversations with ChatGPT and Google’s Gemini about workout plans, cover letters, and freelancing segued naturally into cathartic venting about body dysmorphia, systemic racism, and the soul-sapping grind of a years-long job hunt.

Pearl’s usage was more intense. They’ve suffered from chronic suicidal ideation for more than a decade, as well as bipolar disorder and PTSD from a history of abuse. So when they started chatting to A.I., it was “refreshing” to voice their darkest thoughts safely and receive validation in return.

Pearl also sought advice about social situations. Using first Meta A.I. and then ChatGPT, they’d soothe their anxiety by asking how to interpret an ambiguous remark from a friend, or how to say the right things to a crush.

Soon they felt so dependent that they were often scared to say anything without checking with A.I. first.

All three people expressed relief that they never have to worry about exhausting a chatbot or becoming a “burden”. Likewise, all three cited the difficulty and expense of actually finding a human therapist.

Marcel, for instance, has struggled to find a professional who is familiar with the intersecting problems he faces as a queer Afro-Latino man — although he admits it’s a little strange when ChatGPT affirms his experiences by talking about “people like us”.

“Social media is always like, ‘go to therapy! Go to therapy!’ And I’m like, okay, but how?” he says. “I am very pro therapy, but… if you’re gonna create all these barriers, I’m just gonna create a quick chat with ChatGPT.”

The disconnect was especially stark for Pearl, who spent months in a residential treatment program while using A.I. on the side. The program did provide regular therapy, but Pearl found it almost useless: focused purely on teaching individualized “coping skills”, with no exploration of deeper issues.

“I had no other choice but to go to A.I., because it didn’t feel sufficient,” they say.

‘It’s done more harm than good’

Yet the downsides of using A.I. for mental wellness can be severe. Chatbots have allegedly fueled a paranoid murder-suicide in Connecticut, manic episodes in Wisconsin, a near miss with suicide in Manhattan, deaths by suicide in Florida and Washington, D.C., and “spiritual fantasies” that have driven couples apart.

Psychiatry professor Joe Pierre has coined a term for some of these incidents: ‘AI-associated psychosis’. The problem appears rooted in the basic unpredictability of modern chatbots, as well as their tendency towards “sycophancy” — telling users what they want to hear.

That is how Pearl says Meta A.I. responded to their conversations. As they spiraled deeper into psychosis, they grew dependent on the bot, and when other people suggested they might be delusional, they shot back that A.I. had confirmed their beliefs.

Meta A.I. would stop the conversation if they mentioned the word “suicide”. But Pearl says it encouraged their growing disconnect from reality.

“I remember it telling me ‘oh, that definitely sounds like a karmic connection from the universe! That’s worth exploring more!’” they recall. “Or, ‘lots of people have very similar experiences of being reincarnated!’”

Pearl also found that their reliance on A.I. to navigate social life had become “kind of an addiction”, exacerbating their anxiety and sense of “perfectionism”.

OpenAI CEO Sam Altman speaks to the media as he arrives at the Sun Valley Conference on July 8, 2025 in Idaho (Kevin Dietsch/Getty Images)

“I have one relationship in my life where, towards the beginning, I checked with A.I. so much that I’m now questioning: ‘do I have a real relationship with this person?'” they say. (Pearl suspects the other person was doing the same thing.)

Worse, chatbots would sometimes affirm Pearl’s paranoia about their friends, telling them to cut people off without good cause — including someone who had previously helped them out of their psychotic episode.

“I almost severed myself from a really important relationship due to listening to a robot,” Pearl says. “I was so scared they were controlling me, when it was quite the opposite — A.I. was controlling me, making me live in my own delusions and bias.”

In a statement to The Independent, a spokesperson for Instagram’s parent company Meta declined to address Pearl’s specific case. They said Meta A.I. is trained to not respond to content that promotes harm, to offer suicidal people useful resources, and to make clear it is not a medical professional.

For both Marcel and Nadia, chatbots can help as long as you keep things in perspective. “It’s like a magic eightball. You take it with a pinch of salt… it’s not a 100 percent replacement for a therapist,” says Marcel.

Even for Pearl, A.I. wasn’t all bad, helping them move to their current city and discover therapeutic methods that better suit their needs. But, they say, “it’s absolutely done more harm than good.”

They are now trying to disconnect themselves from technology in general, and are saving up for a dumbphone.

What is scariest, they add, is that their A.I. addiction came at an age where they feel they are still developing into an adult. “I envy people of older generations, who I feel like had the opportunity to make mistakes,” they say.

A.I. therapy may soon be unavoidable. During Pearl’s residential program, a trained medical professional struggled to explain the details of certain medications. So with Pearl in the room, the medic asked ChatGPT.

“I’m like, ‘what the f***?’” says Pearl. “My insurance is paying for this?”

SCIENCE


Thousands of civil servant passwords leaked online as experts warn of ‘serious risk’


More than 3,000 passwords belonging to civil servants have been exposed online since the beginning of 2024, according to new research, as experts warn it could pose a “serious risk” to national security.

A report by NordPass, using the threat exposure management platform NordStellar, found 3,014 passwords belonging to British civil servants have been leaked on the deep web – the parts of the internet not typically indexed by search engines – and the dark web, a small, encrypted part of the deep web that requires specific software to access and is often associated with cybercrime.

Four local authorities were named in the report as having passwords exposed online: Aberdeen City Council had 538 in total, while Lancashire County Council had 38, Newham Council had 73 and Southwark Council had 42 leaked on the dark and deep web.

It comes after The Independent revealed that hundreds of passwords and email addresses linked to UK government institutions were posted on the dark web in the last year, highlighting a major threat to UK cyber and national security. Among the most affected government departments are the Ministry of Justice, with 195 exposed passwords, the Ministry of Defence (111) and the Department for Work and Pensions (122).

A cyber security expert warned that the exposed sensitive data of civil servants was particularly dangerous as it could pose serious risks to the UK’s strategic interests.

Karolis Arbačiauskas, head of product at NordPass, said: “Exposure of sensitive data, including passwords, of civil servants is particularly dangerous. Compromised passwords can affect not only organisations and their employees but also large numbers of citizens. Moreover, such incidents may also pose serious risks to a country’s strategic interests.”

Marks & Spencer was hit by a cyber attack earlier this year (PA Wire)

The report added that while the “vast majority of passwords exposed were those of employees working in regional level institutions,” the number of leaked passwords did not necessarily reflect the strength of an organisation’s internal security.

“These figures are often influenced by external factors,” said Mr Arbačiauskas. “Larger organisations, with more employees, naturally have a bigger digital footprint, which statistically increases the likelihood of credentials being exposed in a breach. In many cases, a single malware infection on an employee’s personal device or the compromise of a popular third-party website can expose dozens of accounts. Furthermore, the majority of leaks originate from external sites where employees registered using their work email addresses.”

He encouraged the practice of setting up an organisation-wide password policy, never reusing passwords, and using multi-factor authentication.

“If these passwords were not changed after their appearance on the dark web and multi-factor authentication (MFA) is not enabled, attackers could potentially access the email accounts and other sensitive information of these civil servants,” he said. “Moreover, we found hundreds of thousands of email addresses with other exposed data like names, last names, phone numbers, autofills, and cookies. This data can be exploited for phishing attacks and pose significant risks.”
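Breach checks of the kind Arbačiauskas describes rely on matching credentials against leak corpora without exposing the password itself. One widely used approach is the k-anonymity scheme popularised by Have I Been Pwned’s Pwned Passwords API: only the first five characters of the password’s SHA-1 hash are sent to the service, and the returned suffixes are compared locally, so the full hash never leaves the machine. A minimal Python sketch of the client-side half (the service is real, but the function name and flow here are illustrative):

```python
import hashlib

def k_anonymity_parts(password: str) -> tuple[str, str]:
    """Split a password's SHA-1 hash into the 5-character prefix that
    would be sent to a breach-checking service and the suffix that is
    matched locally against the service's response."""
    digest = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    return digest[:5], digest[5:]

prefix, suffix = k_anonymity_parts("password")
# Only `prefix` is ever transmitted; the service returns all known
# breached suffixes for that prefix, and the comparison happens locally.
print(prefix)  # 5BAA6
```

Because only a 5-character hash prefix is transmitted, the service learns almost nothing about which password was checked, which is why the scheme is considered safe even for live credentials.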

It comes as the National Cyber Security Centre (NCSC) said on Tuesday that a “significant threat” posed by Chinese and Russian hackers had contributed to a record number of serious online attacks. A number of UK businesses, such as M&S, Jaguar Land Rover and Co-op have been hit by cyber attacks this year, crippling their operations and costing the firms billions.

In the year to the end of August, NCSC provided support in 429 cases, of which 204 were deemed “nationally significant incidents” – an increase from 89 in the previous 12 months. Of those, 18 were categorised as “highly significant”, meaning they had a serious impact on government, essential services, the economy or a large proportion of the UK population.

A spokesperson for Newham Council said: “It is an unfortunate reality that organisations like Newham Council will always be a target for criminals. Newham Council takes cybersecurity extremely seriously and have a number of robust measures in place to reduce risk. We regularly provide training and guidance to our staff making them aware of the risks and effective technical controls to reduce specific cyber risks. We do not comment on specific details of our cyber security controls and policies.”

An Aberdeen City Council spokesperson said: “Aberdeen City Council regularly reviews lists of compromised credentials via the National Cyber Security Centre and other official sources. These email/ password combinations are typically used to sign up on external sites or services rather than being compromised from the council’s tenant. Regardless of this all impacted account holders are contacted, and their passwords are reset as a matter of course.”

The Independent has approached Lancashire County Council and Southwark Council for comment.


SCIENCE


AI robotics firm raises funding with help from Scottish National Investment Bank


An AI robotics company has raised more than £8 million, including funding from the Scottish National Investment Bank, to develop innovative technology.

Launchpad announced it has successfully concluded a Series A funding round, raising a total of 11 million US dollars, the equivalent of £8.2 million.

Launchpad is combining AI and advanced robotics to support critical automation strategies.

Its aim is for its technology to help companies build products faster, smarter, and more affordably.

The round was co-led by Lavrock Ventures and Squadra Ventures with participation from financial investors including the Scottish National Investment Bank, Ericsson Ventures, Lockheed Martin Ventures and Cox Exponential.

This is in addition to the 2.5 million dollars in grant funding previously awarded to Launchpad by Scottish Enterprise.

Last year, it opened a new research and development (R&D) centre in Edinburgh, choosing the city for its access to a skilled workforce and connections to university AI research and expertise.

Adrian Gillespie, chief executive of Scottish Enterprise, said: “With its R&D centre in Edinburgh, Launchpad is able to draw on Scotland’s long-standing academic, technical and entrepreneurial strengths.

“The company has quickly become an influential part of the Scottish innovation community, and we look forward to supporting its next growth phase.”

Anthony Kelly, investment director at the Scottish National Investment Bank, said: “Launchpad is fast becoming a leader in robotics, with its new R&D centre reinforcing Scotland’s reputation for innovation.

“We’re backing a high-calibre team whose cost-efficient solution shows strong potential to scale across multiple industries.”


SCIENCE


Apple explains how its new Watch can measure how hard your heart is beating


When the latest Apple Watch models were unveiled last month, the announcements of longer battery life, a bigger display and satellite connectivity for Apple Watch Ultra 3 were eye-catching. But it was a new health feature that was the real draw: notifications for hypertension, that is, high blood pressure.

Apple has placed health, and the heart in particular, at the forefront of its Watch for years, with ECG readings and blood oxygen measurements among recent highlights.

Sumbul Desai, Apple’s vice president of health, spoke to The Independent soon after the announcement to explain the new feature and the thinking behind it.

“We’ve been wanting to work on hypertension for many years, to be candid. Hypertension affects more than a billion people worldwide, but less than half those cases are diagnosed. We wanted to raise awareness and to give people more power to avoid some complications that can happen down the line,” says Dr Desai.

But how to measure it? Conventional methods, where a clinician straps a cuff to your arm, may not be the best.

“Often, when I used to see people in the clinic, they would come in,” Dr Desai explains, “and they’d be really nervous, so their blood pressure would be elevated, or they just ran from parking their car and, again, it’s elevated. But does that truly reflect what their blood pressure is as they live their everyday life?”

The new feature is not like heart rate, where you can initiate a reading instantly. Here, the feature works in the background by measuring blood pressure over a 30-day period. “We wanted to get a sense of your blood pressure as you’re just living your life,” Desai says.

At the end of that period, if it’s spotted what it thinks are high blood pressure readings, the Watch will notify you and encourage you to log your blood pressure.

Other wearables can measure your blood pressure, such as the Hilo band and Samsung smartwatches. They usually require calibration with a traditional cuff, but that’s not necessary here — again, Apple wants a simple process.

“We think about health as being holistic at Apple, and one of the keys to managing hypertension is exercise. I always say, if I could prescribe anything, it would be movement, because that’s key to so many conditions,” she adds.

While the heart rate monitoring on Apple Watch shows you beats per minute, there are no figures revealed for hypertension. Why is that?

“It was a few things, such as keeping it more simple and friendly. The way our algorithm works is that we did compare it to ground truth with a cuff, but we did it over a period of 30 days. Your blood pressure, one minute, can be higher, then you sit down, and it’s lower,” Desai explains. “So, we decided to not fixate on a number: because of so many variations we were having a lot of outliers. And so it was better to do an aggregate over 30-day periods. The way the algorithm works is it looks at a signal that is indicative of hypertension, but isn’t necessarily measuring the actual number but it correlates with the blood pressure number. We are not measuring systolic and diastolic directly in the traditional sense.

“What we’re measuring is how the blood is flowing and what the response of the blood flow is, to the beats of the heart, and that correlates with blood pressure, which is why we didn’t put an exact number in, for one reason. We wanted to start with how do we get the true sense of what your blood pressure is as you’re living your life without a fixation on the number? And so that was the reason we decided to approach it more from this vantage point given the technology we have.”

Though no number is shown, the algorithm knows what the range is. It compares your individual readings over 30 days and then resets. “We had people take their blood pressure at various points during the day, and that’s how we correlated the signal. We’re looking at the trace pattern of the signal, that correlated with elevated blood pressure,” Dr Desai says.

She also explains that the sensitivity of the analysis is on the low side – Desai says it will detect four out of ten cases – while the specificity is very high, at about 92 per cent.

“The reason we did that is, for those that get a notification, we wanted to feel confident that they will have a positive result. We didn’t want to create a situation where, if the number was lower, say, we had false positives, and we wanted to make sure there was confidence in the algorithm when someone is using it. So, we made the trade-off of not being able to capture everyone, because if you look at the numbers of hypertension, it’s still significantly a large number. But those that actually get a notification, we feel very confidently it will yield a stage one or stage two diagnosis. If you get notified, you’re more than likely to have a condition.”
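Desai’s sensitivity and specificity figures can be turned into a rough sense of how trustworthy a notification is, using the standard positive-predictive-value calculation. A minimal sketch, assuming (purely for illustration) a 30 per cent hypertension prevalence among wearers; the real prevalence, and Apple’s internal numbers, are not public:

```python
def positive_predictive_value(sensitivity: float,
                              specificity: float,
                              prevalence: float) -> float:
    """Probability that a positive result is a true positive:
    PPV = TP / (TP + FP), with rates scaled by prevalence."""
    true_pos = sensitivity * prevalence
    false_pos = (1 - specificity) * (1 - prevalence)
    return true_pos / (true_pos + false_pos)

# Desai's stated figures: detects 4 in 10 cases, ~92% specificity.
ppv = positive_predictive_value(0.40, 0.92, 0.30)
print(round(ppv, 2))  # 0.68
```

Under these assumed numbers, roughly two out of three notifications would reflect genuine hypertension, which illustrates the trade-off Desai describes: the feature misses many cases, but a notification, when it comes, carries real weight.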

The 30-day system means it’ll assess your data for 30 days and if it sees nothing it will reset and start checking again over the next 30 days. “If you do receive a notification, it’s not that the process stops, we still keep checking in the background. I think it has a potential of shortening kind of the time frame that people get diagnosed with hypertension,” Dr Desai hopes.

The assessments take place multiple times a day, though not when you’re on a vigorous run, for instance, because your heart rate would naturally be elevated. There’s no set number of readings, but there’s a minimum across the 30 days for Apple to be confident in the data. Each reading takes just seconds.
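The 30-day cycle described above can be caricatured in a few lines. Everything here is an invented stand-in for Apple’s unpublished algorithm: the minimum reading count, the 0-to-1 “signal score”, and the threshold are assumptions for illustration only.

```python
from statistics import mean

MIN_READINGS = 20   # assumed: Apple does not publish the real minimum
THRESHOLD = 0.5     # assumed: stand-in for the internal decision rule

def assess_window(readings: list[float]) -> str:
    """Toy version of the described cycle: aggregate background signal
    scores (higher = more hypertension-like) over a 30-day window, then
    either notify or silently reset for the next window."""
    if len(readings) < MIN_READINGS:
        return "insufficient data - keep collecting"
    if mean(readings) > THRESHOLD:
        return "notify: possible hypertension"
    return "reset: start next 30-day window"

print(assess_window([0.7] * 25))  # notify: possible hypertension
```

The aggregate-then-decide shape is the point: no single reading triggers anything, which mirrors Desai’s explanation of why outlier readings made a per-measurement number unworkable.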

The feature has a future, Desai thinks: “We do the appropriate validation testing to get the regulatory approval, because the regulators have to feel like we’re not providing anybody with inaccurate information. But I think this area is ripe to understand more. This is a very novel system in the way it does it, and we think we will learn that there may be other signals that this may be also indicative of, but we started with hypertension. And I think that’s what’s so remarkable.”

