OpenAI removes military and war prohibitions from its policies


OpenAI may be paving the way to discovering the military potential of its AI.

As first reported by The Intercept on January 12, a new change to the company's policy completely removed previous language prohibiting "activities that have a high risk of physical harm," including the specific examples of "weapons development" and "military and warfare."

As of January 10, OpenAI's usage guidelines no longer include a ban on "military and warfare" uses within the existing language requiring users to avoid harm. The policy now notes only a prohibition on using OpenAI technology, such as its large language models (LLMs), to "develop or use weapons."

Subsequent reports on the policy edit noted the immediate possibility of lucrative partnerships between OpenAI and defense departments seeking to use generative AI in administrative or intelligence operations.

In November 2023, the US Department of Defense issued a statement on its mission to promote "the responsible military use of artificial intelligence and autonomous systems," citing the country's support for the Political Declaration on Responsible Military Use of Artificial Intelligence and Autonomy, a US-led set of "best practices" announced in February 2023 and developed to monitor and guide the development of military AI capabilities.

"Military AI capabilities include not only weapons but also decision support systems that help defense leaders at all levels make better and more timely decisions, from the battlefield to the boardroom, and systems relating to everything from finance, payroll, and accounting, to the recruiting, retention, and promotion of personnel, to collection and fusion of intelligence, surveillance, and reconnaissance data," the statement explains.

The US military has already used AI in the Russia-Ukraine war and in the development of AI-powered autonomous military vehicles. Elsewhere, AI has been incorporated into military intelligence and targeting systems, including one known as "The Gospel," which Israeli forces use to pinpoint targets and reportedly "reduce human casualties" in their strikes on Gaza.

AI watchdogs and activists have consistently expressed concern over the increasing incorporation of AI technologies into both cyber conflict and combat, fearing an escalation of armed conflict on top of the already known biases of AI systems.

In a statement to The Intercept, OpenAI spokesperson Niko Felix explained that the change was aimed at simplifying the company's guidelines: "Our goal was to create a set of universal principles that are easy to remember and apply, especially as our tools are now used globally by everyday users who can now also build GPTs. A principle like 'Don't harm others' is broad but easy to understand and relevant in numerous contexts. Additionally, we specifically cited weapons and injury to others as clear examples."

OpenAI presents its usage policies with a similarly simplistic refrain: “Our goal is for our tools to be used safely and responsibly, while maximizing your control over how you use them.”


