Winston Churchill is quoted as saying, “You can always count on Americans to do the right thing — after they have tried everything else.” Washington’s grasp of artificial intelligence (AI) demonstrates the point.
China has a national AI strategy entailing $150 billion in focused-research spending and military applications. France has an AI strategy with $2 billion in applied-research spending, plus a collective model with European partners, the Joint European Disruptive Initiative (JEDI). The UAE established a separate cabinet-level department, the Ministry of AI, which released its national AI strategy in 2017. “Artificial intelligence is the future, not only for Russia, but for all humankind. . . . It comes with colossal opportunities, but also threats that are difficult to predict. Whoever becomes the leader in this sphere will become the ruler of the world,” said Russian president Vladimir Putin. The activities of other countries around the world suggest he is right.
The United States? The White House has held a conference on AI strategy, convened a handful of independent working groups, and issued a general statement entitled “Artificial Intelligence for the American People.” America is being outpaced in the global race for AI leadership. The country needs an AI strategy.
Google shows why. Ironically, Google’s newly released AI principles come after the company withdrew from a Department of Defense contract, an initiative called Project Maven, when its employees protested Google’s involvement in American national-security work. Remarkably, given the proliferation of AI strategies across the world, Google is one of the first technology giants to release a set of AI principles, and those principles offer a glimpse of what an American national AI strategy could be.
Google openly calls for policymakers to regulate AI. This is highly unusual for a company that normally touts a libertarian approach to regulation. Yet it demonstrates a serious recognition of the need for government guidance by a major technology player whose market capitalization would rank among the 20 largest economies in the world. Absent government regulatory frameworks, Google has no choice but to establish its own. The company has done so not out of altruism but in an effort to protect its mission and projects. In essence, Google set forth principles that align with its corporate self-interest.
Yet they may not align with the national interest. Google’s AI principles attempt to fill a void left when it recently axed its “Don’t be evil” mantra. In attempting to assure both its employees and its user base that none of its technology will be used for weapons, Google demonstrates that it does not understand the current security challenges of AI. Google rankings consistently elevate conspiracy theories that have hurt democracies across the world. YouTube has already been used to spread propaganda that incites violence, including material disseminated by terrorist groups. In short, AI is already weaponized in the current environment. While Google’s definition of weapons is confined to hardware, such as unmanned systems, its products have already been used for information warfare. Google’s shallow understanding of security is a red flag, but it is not entirely the company’s fault. Smart government policy is needed to fill the breach.
The public dialogue on AI has centered on national security, but the discussion of AI governance needs to be broadened to include federal and state agencies. Several of Google’s principles call for interdisciplinary and cross-industry initiatives, highlighting a fact that is often forgotten: AI strategies work only if there is broad consensus among all stakeholders. While the principles identify some stakeholders, nodding to educational campaigns at universities and targeted investment in the critical infrastructure that forms the backbone of AI, more pressing is the need for a coordinated strategy that synchronizes the public and private sectors. This requires a national effort that pulls together not only the Departments of Defense and State but also the Departments of Energy, Education, and Transportation, along with state and local institutions.
At a national level, meanwhile, the race for AI is essentially a race for high-fidelity data. In the 21st century, autocratic regimes have a decisive advantage in this respect because they control public data and use that data for their own purposes. Democracies such as those in Europe and the United States already face an uphill battle to take advantage of their own data through AI, thanks to concerns over civil liberties. But this does not mean that major technology players have no substantive role to play or leadership opportunities to seize. On the contrary, the private sector is critical to the establishment of a comprehensive AI strategy. Google CEO Sundar Pichai should be lauded for his leadership in highlighting this gap, but the private sector should not be the sole entity articulating and implementing a national AI strategy.
Public–private partnerships are a good start, simply because they enable the government to adapt commercial best practices and help develop technologies with an eye on the public good. There is evidence that the White House understands this. And a greater focus on grassroots programs such as Code for All, which introduce STEM into poor and marginalized areas of the country rather than just the flashy tech hubs on the coasts, would be welcome. America still possesses the most cutting-edge technology giants in the world and is home to robust national-security research agencies focused on AI, such as IARPA and DARPA. But these efforts are not enough, especially compared with the initiatives taken by other countries around the world; they tend to be highly decentralized, one-off, and uncoordinated. To remain the global powerhouse in technology, the U.S. needs a comprehensive AI strategy: one led by the public sector, but with plenty of participation from the private sector and civil society.
— Evanna Hu is the CEO of Omelas, a machine-learning company, and a fellow at New America. Stephen Rodriguez is the founder of One Defense, a visiting professor at the Naval Postgraduate School, and a senior fellow at New America.