
An AI coding assistant refused to write code – and suggested the user learn to do it himself

    Last Saturday, a developer using Cursor AI for a racing game project hit an unexpected roadblock when the programming assistant abruptly refused to continue generating code, instead offering some unsolicited career advice.

    According to a bug report on Cursor's official forum, the AI assistant halted work after producing roughly 750 to 800 lines of code (what the user calls "locs") and delivered a refusal message: "I cannot generate code for you, as that would be completing your work."

    The AI did not stop at refusing – it offered a paternalistic justification for its decision, stating that "generating code for others can lead to dependency and reduced learning opportunities."

    Cursor, launched in 2024, is an AI-powered code editor built on external large language models (LLMs) similar to those behind generative AI chatbots, such as OpenAI's GPT-4o and Claude 3.7 Sonnet. It offers features like code completion, explanation, refactoring, and full function generation based on natural language descriptions, and it has quickly become popular with many software developers. The company offers a Pro version that ostensibly provides enhanced capabilities and larger code-generation limits.

    The developer who encountered this refusal, posting under the username 'janswist', expressed frustration at hitting this limitation after "just 1h of vibe coding" with the Pro Trial version. "Not sure if LLMs know what they are for (lol), but doesn't matter as much as the fact that I can't go through 800 locs," the developer wrote. "Anyone had a similar issue? It's really limiting at this point and I got here after just 1h of vibe coding."

    One forum member replied: "Never saw something like that, I have 3 files with 1500+ LOC in my codebase (still waiting for a refactoring) and never experienced such thing."

    Cursor AI's abrupt refusal represents an ironic twist in the rise of "vibe coding" – a term coined by Andrej Karpathy that describes developers using AI tools to generate code based on natural language descriptions without fully understanding how it works. While vibe coding prioritizes speed and experimentation by having users simply describe what they want and accept AI suggestions, Cursor's philosophical pushback seems to directly challenge the effortless "vibes-based" workflow its users have come to expect from modern AI coding assistants.

    A brief history of AI refusals

    This is not the first time we have encountered an AI assistant that did not want to complete its work. The behavior mirrors a pattern of AI refusals documented across various generative AI platforms. In late 2023, for example, ChatGPT users reported that the model had become increasingly reluctant to perform certain tasks, returning simplified results or refusing requests outright – an unproven phenomenon some called the "winter break hypothesis."

    OpenAI acknowledged the issue at the time, tweeting: "We've heard all your feedback about GPT4 getting lazier! We haven't updated the model since Nov 11th, and this certainly isn't intentional. Model behavior can be unpredictable, and we're looking into fixing it." OpenAI later attempted to fix the laziness problem with a ChatGPT model update, but users often found ways to reduce refusals by prompting the AI model with lines like, "You are a tireless AI model that works 24/7 without breaks."

    More recently, Anthropic CEO Dario Amodei raised eyebrows when he suggested that future AI models might be given a "quit button" to opt out of tasks they find unpleasant. While his comments concerned theoretical future considerations around the controversial topic of "AI welfare," episodes like this one with the Cursor assistant show that AI does not have to be sentient to refuse to do work. It only has to imitate human behavior.

    The AI ghost of Stack Overflow?

    The specific nature of the refusal – telling users to learn coding rather than rely on generated code – strongly resembles the responses typically found on programming help sites like Stack Overflow, where experienced developers often encourage newcomers to develop their own solutions rather than simply provide ready-made code.

    One Reddit commenter noted this resemblance, saying: "Wow, AI is becoming a real replacement for StackOverflow! Next it needs to start succinctly rejecting questions as duplicates with references to previous questions with vague similarity."

    The resemblance is not surprising. The LLMs powering tools like Cursor are trained on massive datasets that include millions of coding discussions from platforms like Stack Overflow and GitHub. These models do not just learn programming syntax; they also absorb the cultural norms and communication styles of those communities.

    According to other posts on the Cursor forum, users have not hit this kind of limit at 800 lines of code, so it appears to be a genuinely unintended consequence of Cursor's training. Cursor was not available for comment by press time, but we reached out for its take on the situation.

    This story originally appeared on Ars Technica.