ChatGPT has 200 million weekly active users, but how many people will admit to using it?

    The OpenAI logo emerges from broken prison bars, against a purple background.

    On Thursday, OpenAI reported that ChatGPT now has more than 200 million weekly active users, according to a report from Axios, doubling the AI assistant’s user base since November 2023. The company also announced that 92 percent of Fortune 500 companies now use its products, highlighting the growing adoption of generative AI tools in the enterprise.

    The rapid growth in ChatGPT’s user base (hardly a new phenomenon for OpenAI) points to rising interest in – and possibly dependence on – the AI assistant, despite vocal skepticism from some tech industry critics.

    “Generative AI is a product with no mass-market utility, at least not on the scale of truly revolutionary movements like the original cloud computing and smartphone booms,” PR consultant and outspoken OpenAI critic Ed Zitron blogged in July. “And it costs a staggering amount of money to build and run.”

    Despite this kind of skepticism (which raises legitimate questions about OpenAI’s long-term viability), OpenAI claims that people are using ChatGPT and its other services in record numbers. One reason for the apparent dissonance may be that some ChatGPT users are reluctant to admit to using it because of organizational prohibitions against generative AI.

    Wharton professor Ethan Mollick, who frequently writes about new applications of generative AI on social media, tweeted about this problem on Thursday. “Big problem in organizations: They have elaborate rules for AI use, focused on negative use cases,” he wrote. “As a result, employees are too afraid to talk about how they use AI, or to use corporate LLMs. They simply become secretive cyborgs, using their own AI and not sharing knowledge.”

    The new era of prohibition

    It’s difficult to get hard numbers on how many companies have banned AI, but a Cisco study published in January claimed that 27 percent of the organizations it surveyed had banned generative AI use. Last August, ZDNet reported on a BlackBerry study which found that 75 percent of companies worldwide were “implementing or considering” plans to ban ChatGPT and other AI apps.

    For example, Ars Technica's parent company, Condé Nast, has a no-AI policy regarding the creation of publicly accessible content using generative AI tools.

    Bans aren’t the only issue complicating public acceptance of generative AI use. Social stigmas have emerged around the technology, stemming from fears of job losses, potential environmental impact, privacy concerns, intellectual property and ethical issues, safety concerns, fears of a repeat of cryptocurrency-style scams, and a general wariness of Big Tech that some say has been steadily increasing in recent years.

    Whether the current stigmas surrounding generative AI use will fade over time remains to be seen, but for now, OpenAI’s leadership is taking a victory lap. “People are now using our tools as part of their everyday lives, making a real difference in areas like healthcare and education,” OpenAI CEO Sam Altman told Axios in a statement, “whether it’s helping with routine tasks, solving tough problems, or unlocking creativity.”

    Not the only game in town

    OpenAI also told Axios that usage of its AI language model APIs has doubled since the release of GPT-4o mini in July, suggesting that software developers are increasingly integrating OpenAI's large language model (LLM) technology into their apps.

    And OpenAI isn’t alone in the field. Companies like Microsoft (with Copilot, based on OpenAI’s technology), Google (with Gemini), Meta (with Llama), and Anthropic (with Claude) are all vying for market share, regularly updating their APIs and consumer-facing AI assistants to attract new users.

    If the generative AI sector is a market bubble about to burst, as some claim, then it is a very large and expensive bubble that is apparently growing bigger every day.