Google has refused a promise not to use AI to create weapons: what is known

Google's AI principles no longer contain the points that ruled out using AI to create weapons, surveillance tools, and other technologies that harm people. On Tuesday, February 4, Google updated its ethical principles for artificial intelligence (AI), dropping its pledge not to use the technology for weapons or surveillance, The Washington Post reports. Previously, Google's AI principles contained a section listing applications for which the company would not develop or deploy AI.

According to a copy preserved in the Internet Archive's Wayback Machine, this section consisted of four points; as of February 5, it is missing from the site. A Google representative declined to answer journalists' questions about the company's policy on weapons and surveillance, instead pointing to a blog post published on Tuesday by the head of Google DeepMind, Demis Hassabis, and the company's senior vice president for technology and society, James Manyika.

The post states that Google is updating its AI principles because global competition for AI leadership, amid an increasingly complex geopolitical landscape, has created a need for companies in democratic countries to serve government and national security clients.

"We believe that democracies should lead in AI development, guided by core values such as freedom, equality, and respect for human rights. And we believe that companies, governments, and organizations sharing these values should work together to create AI that protects people, promotes global growth, and supports national security," Hassabis and Manyika wrote.

The updated version of Google's AI principles states that the company will use human oversight and incorporate feedback to ensure its technology is used in accordance with "widely accepted principles of international law and human rights". In addition, the company promises to test its technology to "mitigate unintended or harmful outcomes".

Recall that an engineer known online under the pseudonym STS 3D built a robotic gun turret controlled by OpenAI's ChatGPT artificial intelligence chatbot. According to an OpenAI representative, the company told the developer that he had violated its policy and called on him "to stop this activity".