Google lifts its ban on using its AI for weapons and surveillance

    Google announced on Tuesday that it is revising the principles governing how it uses artificial intelligence and other advanced technology. The company removed language pledging not to pursue "technologies that cause or are likely to cause overall harm," "weapons or other technologies whose principal purpose or implementation is to cause or directly facilitate injury to people," "technologies that gather or use information for surveillance violating internationally accepted norms," and "technologies whose purpose contravenes widely accepted principles of international law and human rights."

    The changes were disclosed in a note added to the top of the 2018 blog post that first unveiled the guidelines. "We've made updates to our AI Principles. Visit AI.Google for the latest," the note reads.

    In a blog post on Tuesday, a pair of Google executives cited the increasingly widespread use of AI, evolving standards, and geopolitical battles over AI as the "backdrop" for why Google's principles needed to be overhauled.

    Google first published the principles in 2018 as it moved to quell internal protests over the company's decision to work on a US military drone program. In response, it declined to renew the government contract and also announced a set of principles to guide future uses of its advanced technologies, such as artificial intelligence. Among other things, the principles stated that Google would not develop weapons, certain surveillance systems, or technologies that undermine human rights.

    But in Tuesday's announcement, Google did away with those commitments. The new webpage no longer lists a set of banned uses for Google's AI initiatives. Instead, the revised document offers Google more room to pursue potentially sensitive use cases. It states that Google will implement "appropriate human oversight, due diligence, and feedback mechanisms to align with user goals, social responsibility, and widely accepted principles of international law and human rights." Google also now says it will work to "mitigate unintended or harmful outcomes."

    "We believe democracies should lead in AI development, guided by core values like freedom, equality, and respect for human rights," wrote James Manyika, Google senior vice president for research, technology, and society, and Demis Hassabis, CEO of Google DeepMind, the company's esteemed AI research lab. "And we believe that companies, governments, and organizations sharing these values should work together to create AI that protects people, promotes global growth, and supports national security."

    They added that Google will continue to focus on AI projects "that align with our mission, our scientific focus, and our areas of expertise, and stay consistent with widely accepted principles of international law and human rights."

    Multiple Google employees voiced concern about the changes in conversations with Wired. "It's deeply concerning to see Google drop its commitment to the ethical use of AI technology without input from its employees or the broader public, despite long-standing employee sentiment that the company shouldn't be in the business of war," says Parul Koul, a Google software engineer and president of the Alphabet Workers Union-CWA.


    Do you have a tip?

    Are you a current or former employee at Google? We'd like to hear from you. Using a nonwork phone or computer, contact Paresh Dave on Signal/WhatsApp/Telegram at +1-415-565-1302 or paresh_dave@CBNewz, or Caroline Haskins on Signal at +1-785-813-1084 or by email at carolinehasks@gmail.com.


    US President Donald Trump's return to office last month has galvanized many companies to revise policies that promoted equity and other liberal ideals. Google spokesperson Alex Krasov says the changes have been in the works for much longer.

    Google lists its new goals as pursuing bold, responsible, and collaborative AI initiatives. Gone are phrases such as "be socially beneficial" and maintain "scientific excellence." Added is a mention of "respecting intellectual property rights."