For some autistic people, ChatGPT is a lifeline

    The flexibility of the chatbot also brings some unresolved issues. It can produce biased, unpredictable and often made-up answers and is based in part on personal information collected without consent, creating privacy concerns.

    Goldkind advises that people turning to ChatGPT should be familiar with its terms of service, understand the basics of how it works (and how information shared in a chat may not remain private), and be aware of its limitations, such as its tendency to make up information. Young said they have thought about enabling ChatGPT’s data privacy protections, but also think their perspective as an autistic, transgender single parent could be useful information for the chatbot in general.

    Like so many other people, autistic people can find knowledge and empowerment in conversations with ChatGPT. For some, the benefits outweigh the drawbacks.

    Maxfield Sparrow, who is autistic and facilitates support groups for autistic and transgender people, has found ChatGPT helpful in developing new material. Many autistic people struggle with conventional icebreakers in group sessions because the social games are largely designed for neurotypical people, Sparrow says. So they urged the chatbot to come up with examples that work better for autistic people. After some back and forth, the chatbot spat out, “If you were weather, what kind of weather would you be?”

    Sparrow says this is the perfect opener for the group – concise and related to the natural world, which Sparrow says a neurodivergent group can connect with. The chatbot has also become a source of comfort when Sparrow is sick, and of other advice, such as how to organize their morning routine to be more productive.

    Chatbot therapy is a concept that goes back decades. The first chatbot, ELIZA, was a therapy bot. It came out of the MIT Artificial Intelligence Laboratory in the 1960s and was modeled on Rogerian therapy, in which a counselor repeats what a client tells them, often in the form of a question. The program didn’t use AI as we know it today, but through repetition and pattern matching, the scripted responses gave users the impression that they were talking to something that understood them. Despite being created with the intention of proving that computers could not replace humans, ELIZA captivated some of its “patients,” who had intense and extensive conversations with the program.

    More recently, chatbots with AI-driven, scripted responses – similar to Apple’s Siri – have become widely available. One of the most popular, Woebot, is designed to play the role of a real therapist. Based on cognitive behavioral therapy practices, it saw a surge in demand during the pandemic as more people than ever sought mental health care.

    But because those apps have a narrower scope and provide scripted responses, ChatGPT’s richer conversation may feel more effective for those trying to solve complex social issues.

    Margaret Mitchell, chief ethics scientist at startup Hugging Face, which develops open source AI models, suggests that people facing more complex problems or serious emotional issues should limit their use of chatbots. “It can lead to discussion directions that are problematic or encourage negative thinking,” she says. “The fact that we don’t have full control over what these systems can say is a big problem.”