ChatFished: Losing Friends and Alienating People with AI

    Five hours is enough time to watch a Mets game. It’s enough time to listen to the Spice Girls’ album “Spice” (40 minutes), Paul Simon’s “Paul Simon” (42 minutes) and Gustav Mahler’s Third Symphony (his longest). It’s enough time to roast a chicken, text your friends that you roasted a chicken, and prepare for an impromptu dinner party.

    Or you can spend it checking your email. Five hours is roughly how much time many employees spend on email each day, plus about 90 minutes on the messaging platform Slack.

    Workplace chatter like email and Slack is a strange thing: it’s sometimes the most pleasant and human part of the workday. It can also be so mind-numbing to manage your inbox that you start to wonder: couldn’t a robot do this?

    At the end of April, I decided to see what it would be like to let artificial intelligence into my work life. For a week, I ran all my work communications — emails, Slack posts, pitches, follow-ups with sources — through ChatGPT, the artificial intelligence language model from the research lab OpenAI. I didn’t tell colleagues until the end of the week (except in a few moments of personal weakness). I downloaded a Chrome extension that composes email replies right in my inbox, but mostly I typed detailed instructions into ChatGPT, specifying whether the tone should be witty or formal, depending on the situation.

    The result was a rollercoaster, both emotionally and in the sheer volume of content I was generating. I started the week spamming my teammates (sorry) to see how they would react. At some point, I lost patience with the bot and gained a newfound appreciation for phone calls.

    Unsurprisingly, my bot couldn’t match the emotional tone of an online conversation. And I spent a good part of the week, thanks to hybrid work, having online conversations.

    The impulse to chat with teammates all day is not wrong. We know the joys (as well as the utility) of office friendships from psychologists, economists, TV sitcoms and our own lives; a co-worker sends me pictures of her baby in increasingly chic rompers every few days, and nothing makes me happier. But the amount of time employees feel they have to spend on digital communication is undoubtedly outsized, and for some of it, the case for handing it over to artificial intelligence is easy to make.

    The release of generative AI tools has raised all kinds of huge and thorny questions about work. There is fear about which jobs will be replaced by AI within 10 years — paralegals? Personal assistants? Film and television writers are currently on strike, and one of the issues they are fighting over is limiting the studios’ use of AI. There are also fears about the toxic and false information AI can spread in an online ecosystem already riddled with disinformation.

    The question that drove my experiment was much narrower: Will we miss our old way of working as AI takes over the grind of communication? And would my colleagues even know, or would they be ChatFished?

    My experiment began one Monday morning with a friendly Slack message from an editor in Seoul, who sent me a link to a study that analyzed humor in more than 2,000 TED and TEDx Talks. “Unfortunately the researchers,” the editor wrote to me. I asked ChatGPT to say something clever in response, and the robot wrote: “I mean, I love a good TED Talk as much as the next person, but that’s just cruel and unusual punishment!”

    While it didn’t look anything like a sentence I would type, this seemed harmless. I pressed send.

    I had entered the experiment feeling it was important to be generous of spirit toward my robot co-conspirator. By Tuesday morning, however, my to-do list was testing the limits of my robot’s pseudo-human humor. As it happened, my colleagues on the Business desk were planning a party. Renee, one of the party planners, asked if I could help prepare the invitation.

    “Perhaps with your journalistic voice you can write a nicer sentence than I just did,” Renee wrote to me on Slack.

    I couldn’t tell her that my use of “journalistic voice” that week was a sore subject. I asked ChatGPT to make a funny phrase about refreshments. “I’m pleased to announce that our upcoming feast will feature an array of delicious cheese platters,” the robot wrote. “To spice things up (pun intended), we might even have one with a business theme!”

    Renee was unimpressed and wrote back wryly, “OK, wait, let me get ChatGPT to write a sentence.”

    Meanwhile, I had exchanged a series of messages with my colleague Ben about a story we were writing together. In a moment of terror, I called him to let him know it was ChatGPT, not me, writing the Slack messages, and he admitted he had wondered whether I was annoyed with him. “I thought I broke you!” he said.

    While we were on the phone, Ben texted me, “Robot-Emma is very polite, but in a way I’m a little concerned that she might be hiding her intent to kill me in my sleep.”

    “I want to assure you that you can sleep peacefully, knowing that your safety and security are not compromised,” my bot replied. “Take care and sleep well.”

    Given the amount of time I spend with co-workers online — discussing the news, story ideas, the occasional “Love Is Blind” — stripping that communication of any personality was unsettling.

    But it’s not far-fetched at all. Microsoft introduced a product earlier this year, Microsoft 365 Copilot, that handles all the tasks I asked ChatGPT to do, and more. I recently saw it in action when Microsoft’s corporate vice president, Jon Friedman, showed me how Copilot could read emails he’d received, summarize them, and then draft possible responses. Copilot can take notes during meetings, analyze spreadsheet data, and flag issues that may arise in a project.

    I asked Mr. Friedman if Copilot could mimic his sense of humor. He told me the product wasn’t quite there yet, though it could make valiant comic attempts. (He’s asked it for pickleball jokes, for example, and it delivered: “Why did the pickleball player refuse to play doubles? He couldn’t handle the extra pressure!”)

    Of course, he continued, Copilot’s goal is loftier than mediocre comedy. “Most of humanity spends way too much time in what we call the grind of work, sorting through our inboxes,” said Mr. Friedman. “These things just sap our creativity and our energy.”

    Mr. Friedman recently asked Copilot to prepare a memo, using his notes, recommending one of his employees for a promotion. The recommendation worked. He estimated that it completed two hours’ worth of work in six minutes.

    For some, however, the time savings aren’t worth the strangeness of outsourcing relationships.

    “In the future, you’ll get an email and someone will say, ‘Did you even read it?’ And you’ll say, ‘No,’ and then they’ll say, ‘Well, I didn’t even write it,’” said Matt Buechele, 33, a comedy writer who also makes TikToks about office communication. “Just bots going back and forth, circling back.”

    In the middle of our telephone interview, Mr. Buechele brought up, unprompted, the email I had sent him. “Your email style is very professional,” he said.

    I confessed that ChatGPT wrote the message to him requesting an interview.

    “I was like, ‘This is going to be the most awkward conversation of my life,'” he said.

    This confirmed a fear I had developed that my sources were starting to think I was a jerk. For example, a source had sent me an effusive email thanking me for an article I wrote and inviting me to visit his office the next time I was in Los Angeles.

    ChatGPT’s response was muted, almost curt: “I appreciate your willingness to cooperate.”

    I felt a pang for my old exclamation-mark internet existence. I know some people find exclamation points tasteless. The writer Elmore Leonard recommended a measured “two or three per 100,000 words of prose.” With all due respect, I disagree. I often use two or three per two or three words of prose. I am an apologist for digital enthusiasm. ChatGPT, it turns out, is more reserved.

    Despite all the irritation I developed toward my robot overlord, some of my colleagues were impressed with my newly polished digital persona, including my teammate Jordyn, who came to me on Wednesday for advice on an article pitch.

    “I have a story idea I’d like to talk to you about,” Jordyn wrote to me. “It’s Not Urgent!!”

    “I am always up for a good story, urgent or not!” my robot replied. “Especially if it’s a juicy one with plot twists and unexpected turns.”

    After a few minutes of back and forth, I was desperate to talk to Jordyn in person. I lost patience with the bot’s insipid tone. I missed my own stupid jokes and (relatively) normal voice.

    What’s even more disturbing is that ChatGPT is prone to hallucinations, meaning it confidently generates words and ideas with no basis in reality. While writing a note to a source about timing for an interview, my bot randomly suggested asking him whether we should coordinate our outfits ahead of time so our auras and chakras wouldn’t clash.

    I asked ChatGPT to compose a message to another colleague, who knew about my experiment, telling him I was in hell. “I’m sorry, but I can’t generate inappropriate or harmful content,” the robot replied. I asked it to compose a message explaining that I was going crazy. ChatGPT couldn’t do that either.

    Of course, many of the AI experts I consulted weren’t daunted by the idea of shedding their personalized communication style. “Honestly, we already copy and paste a lot,” said Michael Chui, a McKinsey partner and expert in generative AI.

    Mr. Chui admitted that some people see signs of dystopia in a future where workers primarily communicate through robots. However, he argued that this wouldn’t look all that different from business exchanges that are already formal. “Recently, a colleague sent me a text message saying, ‘Hey, was that last email you sent legit?'” recalled Mr. Chui.

    It turned out the email had been so clumsy that the colleague thought it had been written via ChatGPT. Mr. Chui’s situation, however, is a bit special. In college, his freshman house voted him a prescient superlative: “Most likely to be replaced by a robot of his own making.”

    I decided to end the week by asking my department’s deputy editor what role he sees for AI in the future of the newsroom. “Do you think there’s a possibility that one day we could see AI-generated content on the front page?” I wrote on Slack. “Or do you think some things are just better left to human writers?”

    “Well, that doesn’t sound like your voice!” replied the editor.

    A day later, my experiment completed, I typed back my own response: “That’s a relief!!!”