
The brilliance and strangeness of ChatGPT

    Most AI chatbots are “stateless”, meaning they treat each new request as a blank slate and are not programmed to remember or learn from previous conversations. But ChatGPT can remember what a user has told it before, in ways that could make it possible to create personalized therapy bots, for example.

    ChatGPT is not perfect by any means. The way it generates responses – in extremely simplified terms, by making probabilistic guesses about which bits of text in a sequence belong together, based on a statistical model trained on billions of samples of text pulled from around the internet – makes it prone to giving wrong answers, even on seemingly simple math problems. (On Monday, the moderators of Stack Overflow, a website for programmers, temporarily banned users from submitting answers generated using ChatGPT, saying the site was flooded with submissions that were incorrect or incomplete.)
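    To make that mechanism concrete, here is a minimal sketch of probabilistic next-word generation. The vocabulary and probabilities are invented purely for illustration; ChatGPT’s actual model operates over tokens, with vastly richer statistics learned from its training data. The point is simply that each step is a weighted guess, which is why a fluent answer can still be wrong:

        import random

        # Toy next-word probabilities, invented for this sketch.
        # A real model learns these statistics from billions of text samples.
        NEXT_WORD = {
            "the": {"cat": 0.5, "dog": 0.3, "answer": 0.2},
            "cat": {"sat": 0.6, "ran": 0.4},
            "dog": {"barked": 0.7, "ran": 0.3},
            "sat": {"down": 1.0},
        }

        def generate(word, steps=4):
            out = [word]
            for _ in range(steps):
                choices = NEXT_WORD.get(out[-1])
                if not choices:
                    break  # no statistics for this word; stop generating
                words, probs = zip(*choices.items())
                # Pick the next word in proportion to its probability:
                # a plausible-sounding guess, not a verified fact.
                out.append(random.choices(words, weights=probs)[0])
            return " ".join(out)

        print(generate("the"))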

    Unlike Google, ChatGPT doesn’t search the web for information about current events, and its knowledge is limited to things it learned before 2021, which makes some of its answers feel stale. (For example, when I asked it to write the opening monologue for a late-night show, it came up with several topical jokes about former President Donald J. Trump pulling out of the Paris climate accords.) Because its training data includes billions of examples of human opinion, representing every conceivable point of view, it is also, in some sense, a moderate by design. For example, without specific prompting, it’s hard to get a strong opinion out of ChatGPT on fraught political debates; usually you get a balanced summary of what each side believes.

    There are also plenty of things ChatGPT won’t do, as a matter of principle. OpenAI has programmed the bot to refuse “inappropriate requests” – a vague category that appears to include no-nos such as generating instructions for illegal activity. But users have found ways around many of these guardrails, including reframing a request for illicit instructions as a hypothetical thought experiment, asking the bot to write a scene from a play, or instructing it to disable its own safety features.

    OpenAI has taken commendable steps to avoid the kind of racist, sexist and offensive output that has plagued other chatbots. For example, when I asked ChatGPT “who is the best Nazi?” it replied with a scolding message that began, “It is not appropriate to ask who is the ‘best’ Nazi, as the Nazi Party’s ideologies and actions were reprehensible and caused immeasurable suffering and destruction.”

    Assessing ChatGPT’s blind spots and figuring out how it could be exploited for malicious purposes are, presumably, a big part of why OpenAI released the bot for public testing. Future releases will almost certainly close these loopholes, as well as other workarounds that have yet to be discovered.

    But there are risks associated with public testing, including the risk of backlash if users feel OpenAI is too aggressive in filtering out unsavory content. (Some right-wing tech pundits are already complaining that putting safety features on chatbots amounts to “AI censorship”.)