
Under Trump, AI scientists are told to remove 'ideological bias' from powerful models

    The National Institute of Standards and Technology (NIST) has issued new instructions to scientists who partner with the US Artificial Intelligence Safety Institute (AISI) that eliminate mention of 'AI safety', 'responsible AI', and 'AI fairness' in the skills it expects of members, and that add a request to prioritize 'reducing ideological bias, to enable human flourishing and economic competitiveness'.

    The information comes as part of an updated cooperative research and development agreement for members of the AI Safety Institute consortium, sent in early March. Previously, that agreement encouraged researchers to contribute technical work that could help identify and fix discriminatory model behavior related to gender, race, age, or wealth. Such biases are hugely important because they can directly affect end users and disproportionately harm minorities and economically disadvantaged groups.

    The new agreement removes mention of developing tools “for authenticating content and tracking its provenance” as well as “labeling synthetic content”, signaling less interest in tracking misinformation and deepfakes. It also adds an emphasis on putting America first, asking one working group to develop testing tools “to expand America's global AI position.”

    “The Trump administration has removed safety, fairness, misinformation, and responsibility as things it values for AI, which I think speaks for itself,” says a researcher at an organization working with the AI Safety Institute, who asked not to be named for fear of reprisal.

    The researcher believes that ignoring these issues could harm regular users by allowing algorithms that discriminate based on income or other demographics to go unchecked. “Unless you're a tech billionaire, this is going to lead to a worse future for you and the people you care about. Expect AI to be unfair, discriminatory, unsafe, and deployed irresponsibly,” the researcher claims.

    “It's wild,” says another researcher who has worked with the AI Safety Institute in the past. “What does it even mean for people to flourish?”

    Elon Musk, who is currently leading a controversial effort to slash government spending and bureaucracy on behalf of President Trump, has criticized AI models built by OpenAI and Google. Last February, he posted a meme on X in which Gemini and OpenAI were labeled 'racist' and 'woke'. He often cites an incident in which one of Google's models debated whether it would be wrong to misgender someone even if it would prevent a nuclear apocalypse – an extremely unlikely scenario. Besides Tesla and SpaceX, Musk runs xAI, an AI company that competes directly with OpenAI and Google. A researcher who advises xAI recently developed a novel technique for possibly changing the political leanings of large language models, as reported by WIRED.

    A growing body of research shows that political bias in AI models can affect both liberals and conservatives. For example, a study of Twitter's recommendation algorithm published in 2021 showed that users were more often shown right-leaning perspectives on the platform.

    Since January, Musk's so-called Department of Government Efficiency (DOGE) has swept through the US government, effectively firing civil servants, pausing spending, and creating an environment thought to be hostile to those who might oppose the Trump administration's goals. Some government agencies, such as the Department of Education, have archived and deleted documents that mention DEI. DOGE has also targeted NIST, AISI's parent organization, in recent weeks. Dozens of employees have been fired.