Artificial intelligence offers significant potential for applications ranging from medical research to the supply chain, and procurement in particular. However, scientists and researchers increasingly advocate for a clear ethical framework that ensures “meaningful human control” and promotes the responsible development of AI.


Megan M. Roberts, associate director of the International Institutions and Global Governance program, and Kyle L. Evanoff, a research associate for international economics and U.S. foreign policy, both at the Council on Foreign Relations, explain the fundamental concerns about autonomous weapons, or so-called “killer robots,” in an article for World Politics Review (WPR). They note that activists and experts alike have questioned whether autonomous weapons can adhere to international humanitarian law’s principles of distinction (the ability to distinguish between combatants and civilians), proportionality (the requirement that an attack not be launched if it can be expected to cause excessive harm to civilians), and restriction (limits on the use of weapons that cause unnecessary suffering).


Concerns about fully autonomous weapons are growing. For example, the United Nations recently closed its first talks on the subject, with experts warning that time is running out to set rules for the use of autonomous weapons. The five-day meeting of the UN’s Convention on Conventional Weapons (CCW) marked an initial step toward an agreed set of rules governing the weapons.


In advance of the meeting, more than 100 robotics and AI experts wrote an open letter calling on the UN to ban autonomous weapons. The letter—which included signatories from dozens of organizations in nearly 30 countries, including China, Israel, Russia, Britain, South Korea and France—asked UN leaders to work to prevent an autonomous weapons “arms race” and “avoid the destabilizing effects” of the emerging technology.


“Once developed, [lethal autonomous weapons] will permit armed conflict to be fought at a scale greater than ever, and at timescales faster than humans can comprehend,” wrote Tesla chief executive Elon Musk, Alphabet artificial intelligence expert Mustafa Suleyman, and 115 other experts. “These can be weapons of terror, weapons that despots and terrorists use against innocent populations, and weapons hacked to behave in undesirable ways.”


Although 22 countries—mostly those with smaller military budgets and less technical know-how—have called for an outright ban, the prospects for a ban treaty remain dim for now, diplomats at the UN meeting said, Agence France-Presse reports. Furthermore, while convention members tentatively agreed to meet again on the subject next year, academics attending the UN talks said the slow pace of the discussions fails to respond appropriately to the emerging threat. The “arms race has happened [and] is happening today,” Toby Walsh, an expert on AI at the University of New South Wales, told Agence France-Presse.


“These will be weapons of mass destruction,” Walsh told Agence France-Presse during a side-event at the UN meeting. “I’m actually quite confident that we will ban these weapons. ... My only concern is if nations have the courage of conviction to do it now, or whether we will have to wait for people to die first.”


Indeed, a pressing issue is deciding “what effective human control means in practice,” Kathleen Lawand, head of the arms unit at the International Committee of the Red Cross (ICRC), told AFP in an email. While the ICRC has not called for a ban, Lawand warns that some kind of action is needed because the technology is advancing so quickly.


What are your thoughts on AI in general and, specifically, its potential for applications in the supply chain? And do you agree with Dr. David Hanson, CEO of Hanson Robotics, who has previously said that “AI is good for the world, helping people in various ways” but that clear guidelines are needed “before the technology has definitively and unambiguously awakened”?