It is hard to predict the exact impact and trajectory of technologies enabled by artificial intelligence (AI).1 Yet these technologies might stimulate a civilizational transformation comparable with the invention of electricity.2 AI applications will change many aspects of the global economy, security, communications, and transportation by altering how humans work, communicate, think, and decide. Intelligent machines will either team up with or replace humans in a broad range of activities. Such a drastic shift will boost the social, economic, and political influence of those with game-changing capabilities, while the losing sides could face significant challenges.
The AI revolution and accompanying technologies are also transforming geopolitical competition. Because the development of AI, machine learning, and autonomous systems relies on factors such as data, workforces, computing power, and semiconductors, disparities in how well different countries harness these technologies may widen in the future. This matters because states’ mastery of AI will determine their future strategic effectiveness in military matters, as well as their performance, competitiveness, and ability to deter adversaries.
From the use of autonomous systems to the transformation of command, control, communications, computers, intelligence, surveillance, and reconnaissance (C4ISR) capabilities, and from intelligence processing to cognitive security, AI will change how wars are planned and fought. AI systems will be crucial for tackling more integrated conventional, hybrid, and peacetime challenges. As disruptive technologies provide new tools for totalitarian regimes and extremist groups, the transatlantic community needs to develop solutions to mitigate the malicious use of intelligent machines.
NATO is coping reasonably well with the challenge. Recent military capability-building efforts, collaborative research projects, and internal consultations in the alliance suggest an awareness of opportunities and challenges emanating from the rapid development of AI. Allied exercises include cross-domain autonomous systems, cyber-enabled tactics, and adversarial scenarios, as well as new C4ISR capabilities. Various organizations, such as NATO’s Science and Technology Organization and centers of excellence, help spread knowledge, create awareness, stimulate research and development support, and attract national expertise.
However, the alliance still needs to develop a holistic vision for developing and adapting to AI. NATO should address internal and external disparities in AI capabilities. Internally, the alliance needs new mechanisms so that smaller member states do not lose the ability to support the organization. Externally, the alliance as a whole must maintain its adaptability and agility in a highly competitive international environment. All member states need to be involved in preparing for the transition to an AI-powered, highly interconnected world, because such a world will not tolerate weak links in defenses.
ISSUES AT STAKE
AI AND INTEGRATED BATTLE SPACES
Modern warfare is based on unprecedented connectivity between and within three categories of the battlefield, which together build complex battle spaces. The first category is the physical domain, in which ballistic missiles, main battle tanks, aircraft, the weaponry of ground infantries, and other military hardware are used to degrade or destroy an adversary’s physical resources.
The second battlefield is the information technology space. Here, each side tries to gain superiority by improving the way information is shared, connecting space-based intelligence to weapons systems, or calculating the trajectory of an incoming ballistic missile. For example, a combatant may use electronic warfare to try to blind an adversary’s acquisition radars before an airstrike.
The third battlefield is the cognitive space, where information operations and political warfare take place. Cyberspace straddles the informational and cognitive battlefields: both fifth-generation aircraft such as the F-35 and Russia’s influence operations use it to produce, disseminate, control, and monitor information.
In the future, victories will increasingly depend on the systematic synchronization of the physical, informational, and cognitive battlefields, all augmented by algorithmic warfare. This triad will redefine essential military concepts such as the center of gravity, the fog of war, and the concentration of forces. In the age of AI, big data, and robotics, concept development will be more important than ever. This will be an unending task because new concepts will need to constantly change to keep up with countermoves such as adversarial algorithms and data-poisoning attempts, which involve feeding adversarial data to AI systems. Such attacks try to alter what AI learns from training data or how it solves classification or prediction problems.
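Data poisoning can be made concrete with a minimal sketch. The example below is purely illustrative, with hypothetical toy data and a deliberately simple nearest-centroid classifier (not any system named in this essay): an adversary who flips the labels of a few training points drags the learned class centroid and changes the model’s predictions.

```python
# Illustrative sketch of label-flipping data poisoning (hypothetical toy
# data and model). A nearest-centroid classifier is trained first on clean
# labels, then on a copy in which an adversary has flipped two labels.

def centroid(points):
    """Mean of a list of 1-D feature values."""
    return sum(points) / len(points)

def train(dataset):
    """Return per-class centroids from (feature, label) pairs."""
    by_label = {}
    for x, y in dataset:
        by_label.setdefault(y, []).append(x)
    return {label: centroid(xs) for label, xs in by_label.items()}

def predict(model, x):
    """Classify x by its nearest class centroid."""
    return min(model, key=lambda label: abs(model[label] - x))

# Clean training data: class 0 clusters near 1.0, class 1 near 5.0.
clean = [(0.8, 0), (1.0, 0), (1.2, 0), (4.8, 1), (5.0, 1), (5.2, 1)]

# Poisoned copy: the adversary relabels two class-1 points as class 0,
# dragging the class-0 centroid toward the class-1 cluster.
poisoned = [(x, 0 if x in (4.8, 5.0) else y) for x, y in clean]

clean_model = train(clean)        # centroids: {0: 1.0, 1: 5.0}
poisoned_model = train(poisoned)  # centroids: {0: 2.56, 1: 5.2}

print(predict(clean_model, 3.5))     # -> 1 (closer to the clean class-1 centroid)
print(predict(poisoned_model, 3.5))  # -> 0 (the poisoned model now misclassifies)
```

Flipping just two of six labels is enough to move the decision boundary past the test point, which is the essence of the attack the text describes: altering what the system learns rather than attacking it at inference time.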
More breakthroughs seem imminent. Advances in neuroscience, behavioral biology, and other fields will enable new technological leaps such as human-machine teaming and increased autonomy in military systems.3 Robotic swarms—the “collective, cooperative dynamics of a large number of decentralized distributed robots,” in the words of AI researcher Andrew Ilachinski—form another field in which computer science and robotics follow in biology’s wake.4
Human-machine collaboration is likely to bring about faster and better decision-making by enabling enhanced management of massive data streams. Humans and AI systems have very different decision-making mechanisms, which produce completely different kinds of errors when they fail. By combining the strengths of humans and machines, it may be possible to offset each side’s weaknesses. Such teaming trials have already been carried out in the military realm.5
New technologies encourage people, groups, and states to conduct influence operations and manipulation at scale. Intelligent machines can identify susceptible groups of people and “measure the response of individuals as well as crowds to influence efforts,” according to Rand Waltzman, deputy chief technology officer at RAND Corporation. Cognitive hacking, a form of attack that seeks to manipulate people’s perceptions and behavior, takes place on a diverse set of platforms, including social media and new forms of traditional news channels. The means are increasingly diversified, as distorted and false text, images, video, and audio are weaponized to achieve the desired effects. Cognitive security is a new multi-sectoral field in which actors engage in what Waltzman called “a continual arms race to influence—and protect from influence—large groups of people online.”6
AI could cause drastic changes in hybrid warfare, which is a major concern for NATO. States and nonstate actors can use cyberspace to influence large groups of civilians and opposing forces. From reconnaissance activities and the profiling of target audiences to the weaponization of distorted or fake information and psychological operations, AI broadens the potential of information operations.
In addition, human-machine interactions will likely become a part of military engagements, with ethical and legal implications that remain unclear and unexplored. The introduction of this technology needs oversight to prevent potential abuses and unintended consequences.7
GEOPOLITICS IN THE ERA OF AI
Computing power, data availability, and infrastructure are the core pillars of AI geopolitics. One area of great-power competition lies in finding, recruiting, training, and retaining a highly qualified expert workforce. Humans have become a central component of the ongoing international race for data-driven advantage.
Among the main technological enablers of AI industries, semiconductors will potentially be decisive in tipping the balance of power between major actors. The Chinese government and Chinese companies have invested significantly in expanding their computing power and semiconductor capabilities to narrow the gap with actors in the West and develop an independent industrial base.8
At present, the United States is the leading AI power, while China is emerging as an aspirant challenger. Russia has yet to join the top tier in AI, autonomy, and robotics. However, the administration of Russian President Vladimir Putin has attached high importance to the subject, and the Kremlin sees AI as the focus of the next great-power competition.
In the meantime, ambitious small and midsize states that can punch above their weight thanks to their technical and scientific know-how, such as Israel, Singapore, and South Korea, have promising potential that should not be underestimated. This diversity will lead to dynamic technological development and diffusion, with social, economic, geopolitical, and security implications on a global scale.
DEFENDING CORE VALUES AND DEFEATING MALIGN MACHINES
Another focus for NATO should be the values that the alliance has been defending for decades. As the use of AI in everyday life grows, biases and discrimination inherent in AI, the management of sensitive personal data, and malicious online behavior will change societies in ways that are only beginning to be understood.
Some allied governments have begun delving into the underlying issues. The United Kingdom (UK) Parliament formed a select committee on AI, and the United States adopted a National Artificial Intelligence Research and Development Strategic Plan. In April 2019, a group of lawmakers in the U.S. Congress proposed the Algorithmic Accountability Act, which would require companies to audit their algorithms. Reportedly, additional bills are being prepared to counter risks of disinformation and label AI-enabled fake content a threat to national security.9 Parliamentary groups in the UK and Australia have proposed legislative measures to prevent similar harmful use of digital platforms.
More than thirty countries and international organizations have strategies and initiatives for artificial intelligence.10 These have varying priorities, from taking advantage of military clout (United States) to proposing values-based AI (European Union) and from leveraging leadership in AI research (Canada, China) to driving military-civilian fusion (China).11 This diversity continues to evolve both inside and outside NATO.
In recent years, European lawmakers have been actively seeking regulatory action amid emerging digital threats, data-privacy issues, and hostile influence campaigns. European policymakers often emphasize protecting core values, regulating big tech, and preventing malign actors from using AI and accompanying technologies to target Western political institutions, public safety, and individuals.
NATO would benefit from a convergence of transatlantic regulatory and legislative frameworks to better steer the trajectory of the coming transformation. In 2018, a consortium of U.S. and European experts from industry, civil society, and research institutions published a report that outlined three areas of concern.12 The first is the digital security domain, in which the report warned of potential AI vulnerabilities that would allow adversaries to stage large-scale, diversified attacks on physical, human, and software targets.
Second, in the physical security domain, the availability and weaponization of autonomous systems cause major challenges. Cyber and physical attacks on autonomous and self-driving systems and swarm attacks—coordinated assaults by many agents on multiple targets—are other potential threats.
Third, there are significant risks to political security. AI-enabled surveillance, persuasion, deception, and social manipulation are threats that will intensify in the near future. New AI capabilities may strengthen authoritarian and discriminatory political behavior and undermine democracies’ ability to sustain truthful public debates.
NATO nations need to develop an acceptable level of consensus in the governance of the AI transformation. Although this seems extremely difficult given the current state of political affairs, NATO exists for its member nations to come together and tackle these vital security challenges. AI is likely to cause large-scale economic and workforce shifts. Crucially, it is changing how geopolitical competition plays out. It will equip authoritarian states, some of which are competitors of NATO nations, with new oppressive and discriminatory tools. Moreover, AI can put increasingly smart autonomous weapons systems in the hands of states and nonstate actors.
The transatlantic community will therefore have a full set of tasks on its plate, from observing how such dynamics develop in different regions to building international partnerships to advance common interests and coordinate regulatory action.13
NATO would benefit from initiatives to prepare for, govern, and regulate AI-related policy priorities. From developing capabilities to building consensus on the challenges mentioned above, NATO needs new mechanisms to tackle emerging threats and continuously adapt to the dynamism of AI-led developments.
Comprehensive collective initiatives are known to be effective in the cybersecurity field. The alliance should establish an AI task force to review policies and strategic issues. On the policy level, NATO should initiate a continuous and meaningful conversation among decisionmakers, industry, civil society, and the scientific research community. The alliance has a long way to go in developing algorithmic warfare capabilities and adopting an AI-enabled C4ISR structure.14 Because most innovations in AI and robotics come from outside the military-industrial complex, some studies have encouraged the alliance to cooperate closely with big tech or develop ties with promising start-ups.15
The interdisciplinary conversation needs to go beyond tech companies. AI and other modern disruptive technologies relate to a multitude of scientific fields, from computer science to behavioral biology, neuroscience, psychology, anthropology, robotics, nanotechnology, and many others. NATO nations have relied on these scientific communities to lead AI innovations. However, the level of integration of these sectors is still significantly below what is required, in part because of a populist backlash against experts among parts of the political class. The transatlantic community needs to build a culture to overcome such communication issues and ensure a continuous conversation.
NATO must test its social-cognitive and digital-security vulnerabilities systematically. Ideally, red teaming—in which a group adopts an adversarial point of view to challenge an organization to improve its effectiveness or detect a major weakness—and experimentation efforts should cover both allied exercises and more isolated, peacetime activities to test defenses in national security apparatuses. Inputs from the interdisciplinary and multisectoral conversation, as well as continuous exercises, may provide significant information for new concepts.
A new international and interdisciplinary research center, serving as an analytical hub and taking the form of a center of excellence, would enable effective solutions for all the challenges mentioned above. The proposed institution would blend the high-level techno-scientific outputs of existing NATO bodies, such as the Science and Technology Organization, the Innovation Hub, and centers of excellence, with state-of-the-art scientific contributions from member states and in-house experts.
1 This essay is based on a report published as part of the NATO at 70 project: Can Kasapoğlu and Barış Kırdemir, “Wars of None: Artificial Intelligence and the Future of Conflict,” Center for Economics and Foreign Policy Studies, May 15, 2019, http://edam.org.tr/en/wars-of-none-artificial-intelligence-and-the-future-of-conflict/.
2 Andrew Ng, Twitter post, “‘AI Is the New Electricity!’ Electricity Transformed Countless Industries; AI Will Now Do the Same,” May 26, 2016, https://twitter.com/andrewyng/status/735874952008589312?lang=en.
3 “Emerging Cognitive Neuroscience and Related Technologies,” National Research Council of the National Academies, 2008, https://www.nap.edu/catalog/12177/emerging-cognitive-neuroscience-and-related-technologies.
4 Andrew Ilachinski, “AI, Robots, and Swarms: Issues, Questions, and Recommended Studies,” Center for Naval Analyses, January 2017, https://www.cna.org/cna_files/pdf/DRM-2017-U-014796-Final.pdf.
5 Ashley Roque, “US Army Looking for Ways to Fold AI Research Into Expeditionary Manoeuvre, Air, and Ground Reconnaissance Technologies,” Jane’s International Defence Review, May 20, 2019, https://www.janes.com/article/88654/us-army-looking-for-ways-to-fold-ai-research-into-expeditionary-manoeuvre-air-and-ground-reconnaissance-technologies; Andrew White, “Ready Player One: The Future of AR and AI,” Jane’s Defence Weekly, 2019; and Matej Tonin, “Artificial Intelligence: Implications for NATO’s Armed Forces,” NATO Parliamentary Assembly Science and Technology Committee, April 17, 2019, https://www.nato-pa.int/document/2019-stctts-report-artificial-intelligence-tonin-088-stctts-19-e.
6 Rand Waltzman, “The Weaponization of Information: The Need for Cognitive Security,” RAND Corporation, April 27, 2017, https://www.rand.org/content/dam/rand/pubs/testimonies/CT400/CT473/RAND_CT473.pdf.
7 M. L. Cummings, Heather Roff, Kenneth Cukier, Jacob Parakilas, and Hannah Bryce, “Artificial Intelligence and International Affairs: Disruption Anticipated,” Chatham House, June 2018, https://www.chathamhouse.org/sites/default/files/publications/research/2018-06-14-artificial-intelligence-international-affairs-cummings-roff-cukier-parakilas-bryce.pdf.
8 Paul Triolo and Graham Webster, “China’s Efforts to Build the Semiconductors at AI’s Core,” New America, December 7, 2018, https://www.newamerica.org/cybersecurity-initiative/digichina/blog/chinas-efforts-to-build-the-semiconductors-at-ais-core/.
9 Karen Hao, “Congress Wants to Protect You From Biased Algorithms, Deepfakes, and Other Bad AI,” MIT Technology Review, April 15, 2019, https://www.technologyreview.com/s/613310/congress-wants-to-protect-you-from-biased-algorithms-deepfakes-and-other-bad-ai/.
10 “National and International AI Strategies,” Future of Life Institute, accessed October 12, 2019, https://futureoflife.org/national-international-ai-strategies/?cn-reloaded=1.
11 Samir Saran, Nikhila Natarajan, and Madhulika Srikumar, “In Pursuit of Autonomy: AI and National Strategies,” Observer Research Foundation, November 16, 2018, https://www.orfonline.org/research/in-pursuit-of-autonomy-ai-and-national-strategies/.
12 Miles Brundage, Shahar Avin, Jack Clark, et al., “The Malicious Use of Artificial Intelligence: Forecasting, Prevention, and Mitigation,” arXiv.org, February 2018, https://arxiv.org/ftp/arxiv/papers/1802/1802.07228.pdf.
13 Ben Scott, Stefan Heumann, and Philippe Lorenz, “Artificial Intelligence and Foreign Policy,” Stiftung Neue Verantwortung, January 2018, https://www.stiftung-nv.de/sites/default/files/ai_foreign_policy.pdf.
14 Patrick Tucker, “How NATO’s Transformation Chief Is Pushing the Alliance to Keep Up in AI,” Defense One, May 18, 2018, https://www.defenseone.com/technology/2018/05/how-natos-transformation-chief-pushing-alliance-keep-ai/148301/.
15 Martin Dufour, “Will Artificial Intelligence Challenge NATO Interoperability?,” NATO Defense College policy brief 6-18, December 10, 2018, http://www.ndc.nato.int/news/news.php?icode=1239.
This article was first published by Carnegie Europe. For other articles, please visit “New Perspectives on Shared Security: NATO’s Next 70 Years.”