
With AI chatbots, Big Tech moves fast and breaks people

    This is not about demonizing AI or suggesting that these tools are inherently dangerous for everyone. Millions of people use AI assistants productively for coding, writing, and brainstorming every day without incident. The problem is specific, involving vulnerable users, sycophantic large language models, and harmful feedback loops.

    A machine that uses language fluently, convincingly, and tirelessly is a kind of danger never before encountered in the history of humanity. Most of us likely have defenses against manipulation: we question motives, sense when someone is being too agreeable, and recognize deception. For many people, these defenses work even with AI, and they can maintain a healthy skepticism about chatbot output. But these defenses may be less effective against an AI model that has no motives to detect, no fixed personality to read, and no biological tells to observe. An LLM can play any role, simulate any personality, and write any fiction just as easily as fact.

    Unlike a traditional computer database, an AI language model does not retrieve data from a catalog of stored "facts"; it generates output from the statistical associations between ideas. Tasked with completing a user input called a "prompt," these models generate statistically plausible text based on data (books, internet discussions, YouTube transcripts) fed into their neural networks during an initial training process and later refinement. When you type something, the model responds to your input in a way that coherently completes the transcript of a conversation, but without any guarantee of factual accuracy.
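A minimal toy sketch can make this concrete. The bigram model below is not a real LLM (it is thousands of orders of magnitude simpler), but it illustrates the same principle the paragraph describes: the next word is chosen purely from statistical co-occurrence in training text, with no stored facts and no notion of truth. The corpus and the `complete` helper are invented for illustration.

```python
import random
from collections import defaultdict

# Tiny illustrative training corpus (stands in for books, forum posts, etc.).
corpus = (
    "the model generates plausible text "
    "the model has no stored facts "
    "the model completes the prompt"
).split()

# Learn statistical associations: which words tend to follow which.
follows = defaultdict(list)
for a, b in zip(corpus, corpus[1:]):
    follows[a].append(b)

def complete(prompt_word, length=6, seed=0):
    """Continue from prompt_word by sampling statistically likely successors.

    The output is always *plausible* (every word pair occurred in training),
    but nothing checks whether the resulting sentence is *true*.
    """
    rng = random.Random(seed)
    out = [prompt_word]
    for _ in range(length):
        choices = follows.get(out[-1])
        if not choices:  # dead end: no known successor
            break
        out.append(rng.choice(choices))
    return " ".join(out)

print(complete("the"))
```

Every pair of adjacent words in the output appeared somewhere in the corpus, which is exactly what "statistically plausible" means here; factual accuracy never enters the process.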

    Moreover, the entire conversation is fed back into the model every time you interact with it, so everything you do with it shapes what comes out, creating a feedback loop that reflects and amplifies your own ideas. The model has no actual memory of what you say between responses, and its neural network stores nothing about you. It only reacts to an ever-growing prompt that is fed in anew each time you add to the conversation. Any "memories" AI assistants maintain are part of that input prompt, inserted into the model by a separate software component.
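The pattern described above can be sketched in a few lines. This is an assumption-laden illustration, not any vendor's actual API: `fake_model` stands in for the LLM (which sees only the prompt it is handed), and `inject_memories` stands in for the separate software component that prepends stored "memories." All state lives on the client side, which resends the whole transcript every turn.

```python
def fake_model(prompt: str) -> str:
    # Stand-in for an LLM: stateless, it knows only what is in this prompt.
    return f"(reply based on {len(prompt.splitlines())} lines of context)"

def inject_memories(memories, transcript):
    # A separate component prepends saved "memories" to the prompt text;
    # the model itself stores nothing between calls.
    return "\n".join(f"memory: {m}" for m in memories) + "\n" + transcript

memories = ["user prefers short answers"]
history = []  # the client, not the model, keeps the conversation

for user_msg in ["hello", "what did I just say?"]:
    history.append(f"user: {user_msg}")
    # The full, ever-growing transcript is re-sent on every single turn.
    prompt = inject_memories(memories, "\n".join(history))
    reply = fake_model(prompt)
    history.append(f"assistant: {reply}")

print(history[-1])
```

Because the prompt grows with every exchange, earlier turns (including the model's own prior replies) keep feeding back into each new response, which is the feedback loop the paragraph describes.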