US unveils an AI submarine capable of killing enemies without human orders

WASHINGTON, (BM) – The US Navy has introduced a submarine capable of attacking without an order from the military. It is controlled by artificial intelligence (AI), New Scientist has learned.

The Navy plans to field such a submarine by 2023. Submarines armed with weapons and artificial intelligence will be able to act without explicit human control. The US project is called CLAWS.


In official documents, it is described as an autonomous underwater weapon system for covert use. The military hopes the submarine will help expand its mission areas.

Journalists also recalled that the UK Royal Navy had previously announced its intention to use artificial intelligence to control a fleet of robotic submarines, which would be able to clear routes and eliminate underwater mines.

The Pentagon, for its part, recently announced that it had formally adopted ethical principles for the use of artificial intelligence in armaments.

The recommendations were conveyed to Secretary of Defense Mark Esper in the fall of 2019, after a 15-month effort to develop ethics guidelines for military AI systems. The possibility that artificial intelligence itself will be able to formulate and carry out combat missions in the near future is provoking fierce debate and criticism of the military, both in the United States and abroad.

“The United States, together with our allies and partners, must accelerate the adoption of artificial intelligence and lead in its application for national security purposes, in order to maintain our strategic position, prevail on the battlefields of the future and safeguard the rules-based international order,” Secretary of Defense Mark Esper said in this regard. According to him, the technology must not compromise the “responsible and lawful” behavior of the military.

The Pentagon will now be guided by five basic principles: artificial intelligence must be “responsible,” “proportionate,” “understandable,” “reliable,” and “manageable.”

The principle of responsibility implies “appropriate levels of judgment and care” in the development of military AI systems.


For the Pentagon, proportionality refers to taking the “necessary steps to minimize unintended bias” in military AI systems.

Understandable artificial intelligence means that the military will possess the technology and operational skills it needs, and will ensure transparency of procedures and documentation.


Editorial team