Project Maven, Google and the power of conscientious objection

Prussian military strategist Carl von Clausewitz is famed for his aphorism that “war is the continuation of politics by other means.” The expression is elegant in its succinct insight. And yet, the way we think about war, power, ethics and morality seems to be shifting, not through evolutionary or revolutionary political thought, but through unprecedented technological advances, particularly in artificial intelligence and machine learning. As Google withdraws from Project Maven, what next for AI, ethics and conscientious objection?

Lethal Autonomous Weapons Pledge

The “Lethal Autonomous Weapons Pledge”, recently endorsed by Elon Musk and several co-founders of the artificial intelligence (AI) company DeepMind, raises interesting points. The Pledge urges “citizens, policymakers, and leaders to distinguish between acceptable and unacceptable uses of AI.” Technologists (at least in some parts of Silicon Valley) are moralising about kill-chains, algorithmic agency, accountability and the ethical fit of their inventions with military use.

Project Maven & Conscientious Objection at Google

Google employees recently signed a petition protesting the company’s involvement in US Department of Defense initiatives such as Project Maven, which uses AI to analyse reconnaissance footage from drones. It was reported in June that Google was ‘withdrawing’ from Project Maven and publishing a set of ethical principles to guide and inform its work in defence research and development. The incongruity between Google’s corporate values, the value systems of its employees and the needs of defence and intelligence customers highlights some interesting tensions. We are left to consider that, as the industrial age gave way to the information age, labour unions are giving way to new, fluid and powerful employee collectives, some of which are effective and influential conscientious objectors. Discourse and public perception around the leaks of Edward Snowden and Chelsea Manning, as well as the War on Terror, rendition, enhanced interrogation and military counter-terrorism, have likely hardened positions.

Weaponizing Technology – Standards of Acceptance

That said, concepts such as “acceptable and unacceptable use of AI” must be critically challenged. Acceptable or unacceptable to whom, in what context and by what standard? I am reminded of Professor Alan Dershowitz’s argument for “torture warrants” and wonder whether a similar case could be made for intelligent autonomous weapons. Would it be acceptable to grant limited autonomy and lethality to a weapons system for a defined period under a temporary, judicially issued and reviewed warrant? This would certainly make for interesting case law.

Looking further into the signatories of the Lethal Autonomous Weapons Pledge, several of the 26 nation states listed draw the eye. Allegations of human rights abuses, political and economic corruption, gender inequality, state-sponsored terrorism and other misconduct have been levelled against one or more of Algeria, Bolivia, China, Cuba, Ecuador, Egypt, Iraq, Mexico, Nicaragua, Pakistan, Panama, Peru, Uganda, Venezuela and Zimbabwe. Careful consideration must therefore be given to the broad spectrum of motives among signatories.

Conclusion

There is perhaps significant naivety in conscientious objection, particularly when set against the realism of global military and political struggle. Technology companies that sell into defence and intelligence markets, particularly in the United States, should note these developments and closely assess emerging trends. Likewise, defence agencies must plan for increased resistance and clarify how technology is used in defensive and offensive systems and operations.