Age of fully autonomous and lethally armed battlefield robots approaching

Ceewan

Famished
Jul 23, 2008
9,152
17,033
How long will it be before we see truly autonomous artificial intelligence (AI)-driven weapon systems on the battlefield?

The current focus on developing robotic weapons has its genesis in the September 2001 terror attacks on the United States, which prompted the U.S. to make "asymmetrical warfare" -- fighting against comparatively low-tech opponents such as terrorist groups -- a central component of its military research and doctrine. Fuel has been added to this innovation effort as the U.S. looks to keep a step ahead of potential antagonists Russia and China and private firms also compete feverishly to develop AI systems. The result: militarized robots are spreading, and not just among the great powers. In fact, the nation perhaps on the leading edge of battlefield AI development is Israel.

Amir Shapira is a top robotics researcher in the mechanical engineering department at Ben-Gurion University of the Negev, and his son is now a soldier in the Israel Defense Forces (IDF). Shapira told the Mainichi that when his son was born, he thought that by the time the boy was drafted, robots would be fighting on the front lines and his son would control them from the rear. That hasn't happened, but Shapira is not merely disappointed at the slow progress of automated weapon systems. He also argues that Israel needs battlefield robots capable of acting fully autonomously to protect friendly soldiers and attack enemies.

Israel is about the size of the island of Shikoku in Japan, and has a population of around 8.3 million. It is surrounded by hostile Arab states, and has defeated several invasions by their larger armies with superior military technology. The IDF assigns highly technically capable new soldiers to its R&D section, and when they leave the army these young people often continue their research at universities or private companies. The technology produced -- and its benefits -- are then passed back to the IDF. This military-industrial cycle is a major pillar of technological innovation in Israel.

According to the Stockholm International Peace Research Institute, from 2010 to 2014, Israel exported a world-leading 165 unmanned aerial vehicles (UAVs), followed by the U.S., which sold 132 drones to foreign buyers.

The U.S. was the first nation to develop an unmanned military aircraft, all the way back in World War I. During the Vietnam War, the U.S. used unmanned aircraft for reconnaissance, but the military didn't see much need for them and development stagnated.

Also in the 1960s, Israel saw a pressing need to keep track of the military preparations of its western neighbor Egypt. The then head of Israeli military intelligence -- known now as the "father" of drone technology -- bought a radio-controlled aircraft from the U.S., installed a camera and flew the device on scouting missions. The results were better than anyone expected, and the success sparked the start of UAV development in Israel. In 1984, the country even began exporting UAVs to the U.S.

Israeli developers have also made big leaps in land robots. In 2004, an Israeli team participated in the inaugural Grand Challenge, an autonomous vehicle race through the Mojave Desert run by the U.S. Defense Advanced Research Projects Agency (DARPA). Four years later, the IDF deployed a semi-autonomous border patrol vehicle called the Guardium -- a world first.

In the relatively near future, a single person will be able to operate multiple robots -- eventually, if development continues apace, hundreds or even thousands of them -- says Noa Agmon, a senior computer science researcher at Bar-Ilan University in Ramat Gan, central Israel, who is cooperating with the IDF on AI development.

For example, a group of robots in automatic communication with each other could be assigned a task to perform cooperatively. The robot closest to the human operator would receive the instructions and then pass them on to the other machines in its work group, even in difficult-to-reach places such as tunnels or the interiors of nuclear facilities (a minimal sketch of this relay idea follows below). The fiercest competition among Agmon and her fellow researchers around the world is over the decision-making software that gives the machines the best, most appropriate instructions. Equipped with good decision-making capabilities, groups of robots on land, at sea and in the air could all be supervised by a single person, Agmon says.
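
The article does not describe any specific software, but the relay scheme it sketches -- an operator hands a task to the nearest robot, which passes it along to the rest of its group -- can be illustrated in a few lines. The following is a minimal, hypothetical sketch; the class names, the distance field and the dispatch function are all invented for illustration.

```python
# Hypothetical sketch of the relay idea described above: the robot nearest the
# operator receives an instruction and forwards it to its work group.

from dataclasses import dataclass, field


@dataclass
class Robot:
    name: str
    distance_to_operator: float           # e.g. metres of radio path to the operator
    peers: list = field(default_factory=list)
    inbox: list = field(default_factory=list)

    def receive(self, instruction: str) -> None:
        """Store the instruction, then relay it to any peer that hasn't seen it yet."""
        if instruction in self.inbox:
            return                         # already handled; prevents relay loops
        self.inbox.append(instruction)
        for peer in self.peers:
            peer.receive(instruction)


def dispatch(instruction: str, group: list) -> None:
    """Hand the instruction only to the robot closest to the operator;
    the group's own links carry it the rest of the way."""
    closest = min(group, key=lambda r: r.distance_to_operator)
    closest.receive(instruction)


if __name__ == "__main__":
    a, b, c = Robot("A", 10.0), Robot("B", 250.0), Robot("C", 900.0)
    a.peers, b.peers, c.peers = [b], [a, c], [b]    # a chain: A <-> B <-> C
    dispatch("map tunnel section 3", [a, b, c])
    print([r.inbox for r in (a, b, c)])             # all three end up with the task
```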

However, if robots become truly autonomous, the principle of human control could become practically irrelevant. Mary Cummings, a professor of systems engineering at Duke University, has written that people using automated systems tend to ignore information that contradicts what the computer is telling them -- an overreliance on automated functions known as "automation bias." During the U.S. invasion of Iraq in 2003, U.S. Patriot missiles destroyed one U.S. and one British military aircraft, killing three crew members. The resulting inquiry found that, because the Patriot battery crews had just 10 seconds to decide whether to abort a launch, they tended to accept what the weapon systems' computers told them without question.

At present, there are no fully autonomous weapon systems. The closest so far is an AI-equipped border patrol vehicle called the Border Protector, deployed by the IDF in July. Armed and loaded with detailed information on the features of a specific area, the vehicle's AI can find and attack targets entering that area. In fully autonomous mode, decisions on what to destroy and when would be left entirely up to the AI (a toy sketch of this control question follows below).
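
The article's point about autonomy is essentially about where the human confirmation step sits. The toy sketch below is not the IDF system; the function name, the confidence threshold and the operator-confirmation hook are all hypothetical, and it only shows how the same detection logic changes character once the human check is removed.

```python
# Hypothetical sketch: the same detection pipeline behaves very differently
# depending on whether a human confirmation step sits between "target found"
# and "engage". All names and thresholds are invented for illustration.

def decide_engagement(detection_confidence: float,
                      inside_patrol_zone: bool,
                      fully_autonomous: bool,
                      confirm_with_operator=lambda: False) -> str:
    """Return 'engage', 'hold', or 'refer to operator' for one detection."""
    if not inside_patrol_zone or detection_confidence < 0.9:
        return "hold"                      # outside the briefed area or too uncertain
    if fully_autonomous:
        return "engage"                    # no human in the loop at all
    # Semi-autonomous mode: the machine only recommends; a person decides.
    return "engage" if confirm_with_operator() else "refer to operator"


if __name__ == "__main__":
    # Identical detection, different modes:
    print(decide_engagement(0.97, True, fully_autonomous=True))    # 'engage'
    print(decide_engagement(0.97, True, fully_autonomous=False))   # 'refer to operator'
```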

At an informal April 2016 expert meeting of the U.N. Convention on Certain Conventional Weapons in Geneva, the participants agreed to plan an official discussion on possible international limitations to lethal autonomous weapon systems. A conference of government experts on the subject is also set to have its first meeting as early as 2017. (By Tomoko Ohji, Jerusalem Bureau)



Why does the name Sarah Connor ring a bell here? "I will be back."
 

EzikialRage

Active Member
Nov 20, 2008
672
100
When playing games like Doom or watching movies where scientists do something nefarious that puts all of mankind at risk, I think to myself there is no way scientists would be stupid enough to do something like that. But then news stories like this say oh yes, they fucken would do something that would put all of mankind at risk. Although I do remember a story of Dutch scientists making a more dangerous strain of the bird flu.
 
  • Like
Reactions: Ceewan