
Washington Post article on the use of AI in weapons systems. Well written and timely. But I feel the concerns come too late. It is unlikely that the US or any other power will walk away from using AI in its weapons. Given the proliferation of AI systems, any power that does will be at a disadvantage.
In March, a panel of tech luminaries including former Google chief executive Eric Schmidt, Andy Jassy, then head of Amazon Web Services and incoming chief executive of Amazon, and Microsoft chief scientist Eric Horvitz released a study on the impact of AI on national security. The 756-page final report, commissioned by Congress, argued that Washington should oppose a ban on autonomous weapons because it would be difficult to enforce and could stop the United States from using weapons it already has in its arsenal.
Washington Post: The U.S. says humans will always be in control of AI weapons. But the age of autonomous war is already here — By Gerrit De Vynck
The key will be how tightly the protocols lead from one stage of escalation to another. If AI systems can make decisions that escalate into the use of powerful weapons of mass destruction, including nuclear weapons, then we are fucked. There has to be a human-in-the-loop approach that buffers how far and how fast the AI systems can go. But it is safe to assume that battlefield engagements will have AI systems running point.
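To make that buffer idea concrete, here is a minimal, purely illustrative sketch (my own, not anything from the article or the report) of a human-in-the-loop escalation gate: the system may act on its own only below a defined severity ceiling, and anything above that ceiling stalls until a human explicitly authorizes it. All the names here (EscalationLevel, AUTONOMY_CEILING, request_engagement) are hypothetical.

```python
from enum import IntEnum

class EscalationLevel(IntEnum):
    SURVEILLANCE = 1   # observe and report only
    DEFENSIVE = 2      # e.g., intercepting incoming projectiles
    OFFENSIVE = 3      # engaging targets with conventional weapons
    STRATEGIC = 4      # anything approaching weapons of mass destruction

# Hypothetical policy: the system may act autonomously only at or below this level.
AUTONOMY_CEILING = EscalationLevel.DEFENSIVE

def request_engagement(level: EscalationLevel, human_approval: bool = False) -> bool:
    """Return True if the action may proceed.

    Actions at or below AUTONOMY_CEILING proceed automatically; anything
    above it requires explicit human authorization, no matter how fast
    the system wants to move.
    """
    if level <= AUTONOMY_CEILING:
        return True
    return human_approval

# The system can defend autonomously, but cannot escalate on its own.
assert request_engagement(EscalationLevel.DEFENSIVE) is True
assert request_engagement(EscalationLevel.OFFENSIVE) is False
assert request_engagement(EscalationLevel.OFFENSIVE, human_approval=True) is True
```

The point of the sketch is only the shape of the control: the ceiling is a policy decision made in advance by people, and the machine cannot raise it for itself.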
The frustrating aspect of this subject is that the speed at which technology moves leaves very little time for society to review and sensibly debate the ethical implications. Anyone who reads science fiction knows that writers have covered these topics in detail for decades, but our leaders and society have dismissed those stories as fantasy. Now they have become reality.