AI AT WAR
How Big Data, Artificial Intelligence, and Machine Learning Are Changing Naval Warfare
Edited by Sam J. Tangredi and George Galdorisi
NAVAL INSTITUTE PRESS
Annapolis, Maryland
Naval Institute Press
291 Wood Road
Annapolis, MD 21402
© 2021 by Sam J. Tangredi and George Galdorisi
All rights reserved. No part of this book may be reproduced or utilized in any form or by any means, electronic or mechanical, including photocopying and recording, or by any information storage and retrieval system, without permission in writing from the publisher.
Library of Congress Cataloging-in-Publication Data
Names: Tangredi, Sam J., editor. | Galdorisi, George, date, editor.
Title: AI at war : how big data, artificial intelligence, and machine learning are changing naval warfare / edited by Sam J. Tangredi and George Galdorisi.
Other titles: Artificial intelligence at war
Description: Annapolis, Maryland : Naval Institute Press, [2021] | Includes bibliographical references and index.
Identifiers: LCCN 2020047097 (print) | LCCN 2020047098 (ebook) | ISBN 9781682476062 (hardcover) | ISBN 9781682476345 (pdf) | ISBN 9781682476345 (epub)
Subjects: LCSH: Artificial intelligence—Military applications—United States. | Naval art and science—United States—Data processing. | Naval art and science—Technological innovations—United States. | Big data.
Classification: LCC UG479 .A44 2021 (print) | LCC UG479 (ebook) | DDC 359.00285/63—dc23
LC record available at https://lccn.loc.gov/2020047097
LC ebook record available at https://lccn.loc.gov/2020047098
Print editions meet the requirements of ANSI/NISO Z39.48-1992 (Permanence of Paper).
Printed in the United States of America.
29 28 27 26 25 24 23 22 21 9 8 7 6 5 4 3 2 1
First printing
Contents

List of Illustrations
Foreword—ADM JAMES G. STAVRIDIS, USN (RET.)
Introduction—SAM J. TANGREDI AND GEORGE GALDORISI
1 Theory and Conceptual History of Artificial Intelligence
PATRICK K. SULLIVAN AND THE OCEANIT TEAM
2 AI, Autonomy, and the Third Offset Strategy
Fostering Military Innovation during a Period of Great Change
ROBERT O WORK
3 The Department of the Navy’s Commitment to Big Data, AI, and Machine Learning
WILLIAM BRAY AND DALE L. MOORE
4 The Navy at a Crossroads
The Uneven Adoption of Autonomous Systems in the Navy
PAUL SCHARRE
5 AI Programs of Potential Military Opponents
Propositions and Recommendations
SAM J. TANGREDI
6 Battlefield Innovation on Patrol
Designing AI for the Warfighter
NINA KOLLARS
7 Mission Command and Speed of Decision
What Big Data, Artificial Intelligence, and Machine Learning Should Do for the Navy
ADM SCOTT H. SWIFT, USN (RET.), AND ANTONIO P. SIORDIA
8 Practical Applications of Naval AI
An Overview of Artificial Intelligence in Naval Warfare
CONNOR S. McLEMORE AND CHARLES R. CLARK
9 How AI Is Shaping Navy Intelligence, Surveillance, and Reconnaissance
MARK OWEN, KATIE RAINEY, AND RACHEL VOLNER
10 Communicating at the Speed of War
The Future of Naval Communications and the Rise of Artificial Intelligence
ALBERT K. LEGASPI, JEFF MAH, AND STEPHANIE HSZIEH
11 How AI Is Shaping Navy Command and Control
DOUG LANGE AND JOSÉ CARREÑO
12 AI and Integrated Fires
MICHAEL O’GARA
13 Artificial Intelligence and Future Force Design
HARRISON SCHRAMM AND BRYAN CLARK
14 Entry Pass to Future Warfare
AI Education at the U.S. Naval Academy
NATHANAEL CHAMBERS, FREDERICK L. CRABBE, AND GAVIN TAYLOR
15 Trying to Put Mahan in a Box
Insights from Attempting to Develop a Decision Aid for the Operational Commander
ADAM M. AYCOCK AND WILLIAM G. GLENNEY IV
16 “Sea Hacking” Sun Tzu
Deception in Global AI/Cybered Conflict and Navies
CHRIS C DEMCHAK AND SAM J TANGREDI
17 Overcoming Impediments to AI for Unmanned Autonomy and Human Decision-Making
GEORGE GALDORISI
18 The Influence of Artificial Intelligence on Naval Strategy and Tactics
REAR ADM NEVIN CARR, USN (RET.), AND SAM J. TANGREDI
19 The Future of AI
PATRICK K. SULLIVAN AND THE OCEANIT TEAM
Epilogue—GEORGE GALDORISI AND SAM J. TANGREDI
Afterword—ADM MICHAEL S. ROGERS, USN (RET.)
Selected Bibliography
About the Contributors
Index
Photos
1-1 The School of Athens
17-1 Historical Pre-Invasion Speech
17-2 Pre-Invasion Speech to AI Machine Troops
17-3 Pre-Invasion AI Machine Speech to Human Troops
17-4 Pre-Invasion Speech to Human-Machine Team
Tables
5-1 Comparative Populations and AI Implications (2017)
5-2 The Six Propositions
5-3 The Six Policy Recommendations
9-1 TCPED Process with AI Extension
14-1 Course Requirements
Figures
1-1 The Turing Machine
3-1 John Boyd’s OODA Loop
9-1 The TCPED Process with AI Extension
9-2 Three-Layer Neural Network Architecture
9-3 Adaptive Closed-Loop Control Processing
12-1 The Joint Targeting Cycle
15-1 Mahan-in-a-Box Conceptualized at the Operational Level of War
19-1 Quantity versus Accuracy of Training Data
19-2 Marriage Rate in Wyoming versus Car Sales
19-3 Example of an Adversarial Attack in Two Dimensions
19-4 Three Different Types of Inputs
Foreword

ADM JAMES G. STAVRIDIS, USN (RET.)
“Go right at ’em.” Those are the words of the greatest admiral in history, Vice Admiral Lord Nelson of Trafalgar. When I began to truly study his tactical approach over the years, I was a lieutenant commander looking forward to the day I would command a warship at sea. As I pondered his lessons, I wondered how best a warship, and a fleet, could position themselves to achieve that maxim.
From my own study, I already knew—or at least suspected—that information was the key ingredient. But in the midst of crisis or battle, accurate information was always difficult to obtain. Common wisdom held that “first reports are always wrong.” The key to being a successful commander throughout naval history had been the ability to sort through incomplete information, assess what was probable and what was unlikely, and make a decision based on professional judgment honed by experience. Nelson knew that, and I gradually learned it over the many sea miles of my voyage.
In those days, the late 1980s, the U.S. Navy was already making the change from analog to digital. Many of the weapons and sensors themselves were already digital, but it was not until the paradigm-breaking Aegis combat system proliferated throughout the surface fleet that digital information from multiple sources and sensors became a truly reliable, albeit never infallible, asset in operational (read “human”) decision-making.
I was lucky to be the commissioning operations officer on one of the very first Aegis ships, USS Valley Forge (CG 50), where I benefited from the technical acumen of the “father of Aegis,” Rear Adm. Wayne Meyer, and the tactical acumen of Capt. Wayne Hughes.
At the same time came the public proliferation of the personal computer, the popularization and commercialization of the Internet, and the incredible advances (and potential for conflicts) in cyberspace. The entire world was becoming awash with information that previously required much time and human labor to acquire and categorize. The prodigious amount of available information—in both the civilian and military spheres—was increasing so much and so fast that it seemed the human mind could not keep up. It was becoming apparent that decision-makers, particularly those who might face combat, needed new ways to sort through all the information with which they were being bombarded.
In naval operations, information was no longer primarily sourced from sensors located on board the individual warship—what we call organic sensors. Accurate information was coming from satellites, shore stations, other services’ joint assets, and open sources. Lack of information suitable for targeting the enemy could often still be a problem, but now it was compounded by other times when there was just too much information to sort through. Much information was available to the enemy as well.
How, then, could we continue to position ourselves to “go right at ’em”? That is when I realized that the key had shifted from availability of information to the speed of decision. In some ways, this parallels what was already known in air dogfights based on Air Force Col. John Boyd’s model of the observe-orient-decide-act (OODA) loop. Pilots who could cycle more quickly through the OODA loop—in other words, made accurate decisions more quickly—generally won the dogfight. They were, in the words of Wayne Hughes, able to attack effectively first.
By the 2000s, when I became a senior decision-maker, particularly as Supreme Allied Commander of North Atlantic Treaty Organization (NATO) forces, the amount of information that could potentially support decisions was absolutely staggering. I knew there needed to be methods of processing, combining, validating, and making the options clear to the humans who had to decide both peaceful and violent actions. The decisions needed to be made before an enemy could make them—and faster, as former Chief of Naval Operations Adm. John Richardson likes to say.
Fortunately, the development of artificial intelligence (AI) systems holds great promise to enhance critical human decision-making, especially when speed of decision means life or death—not to replace human decision-making, please note, but to assist it in making effective decisions amidst the melee of often conflicting and ambiguous information. To me, the true promise of AI is to speed up our OODA loop through new methods of human-machine teaming and collaboration.
I wish I had had AI systems to assist me—and NATO’s tirelessly dedicated personnel—in sorting through the complexity of information during our operations. Humans striving to be tireless are no longer enough. At NATO, we did not sufficiently and effectively leverage our technological advantages.
All of that is why I jumped at the chance to provide the foreword to this volume. General discussions concerning the promise of AI, machine learning, big data, and human-machine teaming in military affairs have proliferated, but relatively little has been written as to how the United States and its Navy can achieve this promise. This book drills down through the advances and—quite frankly—the hype, to identify both the applications and limitations of AI to naval warfighting. It may be the very first book to do so.
During his term as deputy secretary of defense, my close friend and colleague, the Honorable Bob Work, placed AI in the center of his effort to create a third offset strategy. The debate over the third offset became complex—and sometimes contentious—in itself. Putting my Navy hat back on, I would describe the need for implementing AI in naval (and military operations in general) as enabling us to “go right at ’em” in a Nelsonian sense.
In this approaching period of potential great power competition, I believe that the ability to “attack effectively first” is the prime enabler of being able to deter effectively first. AI is not the full solution to effective decision-making about warfighting and deterrence, war and peace. But if it assists future decision-makers to make wise choices, it will live up to its promise. This book, I believe, is a seminal step toward building a foundation for effectively applying AI to defense decision-making. Edited and partly written by two brilliant thinkers, my friends George Galdorisi and Sam Tangredi, this book is the “sea detail” that allows the reader to sail into the still-uncharted waters of artificial intelligence.
Introduction

SAM J. TANGREDI AND GEORGE GALDORISI
Artificial intelligence (AI) may be the most beneficial technological development of the twenty-first century. However, it is undoubtedly the most hyped technological development of the past two decades. This hype has raised expectations for results and, unfortunately, has clouded public understanding of the true nature of AI and its limitations as well as its potential.
The very characteristics of AI, machine learning, human-machine teaming, and analysis of big data are poorly understood except by specialists, and the terms have been publicly used in arbitrary fashions, sometimes deliberately. The highly respected Financial Times reports that a study indicates 40 percent of “artificial intelligence start-ups” (software companies seeking capital investments) in Europe actually “do not use any AI programs in their products.” It appears that “companies branded as AI businesses have historically raised larger funding … and secured higher valuations than other software businesses,” up to “15 percent higher” than those that do not use the term “artificial intelligence.”1 Hence, there are great incentives to appropriate the term AI and apply it to other programming activities, vaguely related research efforts, and unmanned or autonomous systems development.
If venture capitalists responsible for the investment of many thousands or millions of dollars are not often certain as to what constitutes AI, how can non-specialists be?
WHAT REALLY IS ARTIFICIAL INTELLIGENCE?

In reality, artificial intelligence is not magic, mystery, or a superior way of thinking.2 Any well-functioning human brain could perform the same functions as any AI machine if time, energy, focus, and data storage (recollection) were not factors and limits. AI machines consist of software and input composed of programming, algorithms, and collected, correlatable data running on computer hardware. AI machines have beaten human masters of chess, iGo, and Texas Hold ’Em poker because they can contain and recall more data in a shorter amount of time than the human brain.
AI has had myriad definitions since its initial development as a branch of computer science in the late 1950s,3 but all of these can be distilled into a simple definition of AI as “the capability of a machine to imitate human behavior,” or, more specifically, “a computer system able to perform tasks that normally require human intelligence, such as visual perception, speech recognition, decision-making, and translation between languages.”4 AI’s advantages over the human mental processes that it imitates are its ability to perform calculations more rapidly and avoid the difficulties of actually being human: bodily functions, fatigue, emotions, boredom, worry, and doubt.
Whereas a human iGo master can recall hundreds of options for the next series of moves, an AI machine—if initially programmed with the rules of the game and the knowledge of other iGo masters—can recall thousands, perhaps hundreds of thousands of options. AI can rapidly “learn” some of these options by observing play or actually playing (termed “imitative learning”), but the key to victory is recollection. The machines also do not tire, are not distracted by other thoughts, tasks, or emotions, and—converting data into binary systems of zeros and ones (like all computers)—can make calculations at much greater speeds.
However, even AI machines that are programmed to self-learn have to be programmed by humans, be taught initially how to learn (via code) by humans, and be provided with data from sensors created and attached to them by humans, and they are ultimately designed to mimic rational judgment—rationality being a human construct. AI can be seen as “superior” to human intelligence since it can process data faster than any one human, albeit usually in one narrow field of knowledge. Yet it is still the product of collective human endeavor. As such, it has limits, costs, and risks.
AI machines have vulnerabilities, some greater and some less than those of humans. Just as humans can suffer massive heart attacks or devastating combat trauma, AI machines can “die” instantly if separated from their power sources. AI machines, contained in metal boxes, can be armored for protection from shock, but so can humans. AI machines can be constructed in robot form with some limited degree of movement using mechanical strength that surpasses human strength. However, the ability to engineer a robot to fully mimic all the degrees of human movement requires efforts and costs that exceed the actual programming costs of AI. In industry, current robots remain very limited in function in order to perform narrow tasks that are repetitive in nature.5 Even if AI-driven, they are not designed to multitask outside of limited functions simply because it is not cost-effective to do so.6

Another limit to AI, whether simple, narrow, or general (we will define those terms later), is created by the very nature of electronic binary computer coding. Since computers are essentially limited to calculation—perhaps we can call it “internal thought”—only in combinations of zeros and ones (which is why the system is called binary), they are capable of “thinking” only in terms of yes or no, black or white—not “maybe” or “gray.”7 In an analog machine, this would result in on/off functions that cannot produce “partially on.” The result is that, unlike humans, AI cannot take partial actions without having considerable programming layers (deep learning) that can categorize data by probabilities. These layers, which consist of decision nodes that prescreen data going to other decision nodes, can approximate “gray thinking” by combining binary loops. Nevertheless, they cannot match both the disadvantages and value of human doubt. This factor is dismissed by some computer scientists who point out that black/white thinking is valid since “one cannot be a little bit pregnant.”8
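To make this concrete, here is a minimal sketch of how layered scoring can approximate “maybe” on top of strictly binary comparisons. It is our own illustration; the readings, weights, and thresholds are invented and belong to no real system:

```python
# A toy illustration of approximating "gray thinking": a continuous score
# is computed from sensor readings, then collapsed into discrete actions.
# All numbers here are invented for illustration only.

def threat_score(speed_mps: float, altitude_m: float) -> float:
    """Combine two readings into a 0.0-1.0 score (higher = more threatening)."""
    speed_factor = min(speed_mps / 900.0, 1.0)              # faster looks more missile-like
    altitude_factor = 1.0 - min(altitude_m / 10000.0, 1.0)  # lower looks more threatening
    return 0.6 * speed_factor + 0.4 * altitude_factor

def decide(score: float) -> str:
    """Collapse the score into yes/no/maybe; the middle band is the
    machine's stand-in for doubt: defer to a human operator."""
    if score > 0.8:
        return "engage"
    if score > 0.4:
        return "alert human operator"
    return "ignore"

print(decide(threat_score(850.0, 150.0)))   # fast and low -> "engage"
print(decide(threat_score(200.0, 9000.0)))  # slow and high -> "ignore"
```

Every comparison inside decide() is still a binary yes/no; the appearance of “maybe” comes only from stacking such comparisons atop a graded score.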
However, humans can make decisions based on uncertainty, personal values, and irrational thinking, all of which are difficult for AI to duplicate.

Unlike humans, AI machines do not necessarily show the effects of aging, if the hardware is properly maintained and the software remains free from hacking or other compromise (and is upgraded as technology advances). They also feel neither fear nor exhilaration while facing an enemy in combat.
UNDERSTANDING MILITARY AI
In the public perception of military affairs, the term AI almost universally conjures up images of “killer robots” running amok and deliberately attacking noncombatants, possibly destroying the entirety of the human race. Obviously, this mental image is elicited by a large number of popular movies and novels based on that theme.
This fear exists to the extent that certain scientists and scholars have conflated artificial intelligence with unmanned military systems (these so-called killer robots). A coalition of nongovernmental organizations, led by Human Rights Watch, has established a “campaign to ban killer robots,” which they define as an effort to “ban fully autonomous weapons and thereby retain meaningful human control over the use of force.”9 Human Rights Watch defines autonomous weapons as weapons that “would be able to select and engage targets without human intervention,” a definition so broad that it could conceivably cover such existing weapons as heat-seeking missiles (originating in the 1950s) or stationary naval mines (the first recorded, verifiably successful mine attack was in 1855).10 In fact, Human Rights Watch refers to “armed drones” as “precursors” to killer robots, despite the fact that all verifiable armed drone missions have been conducted by human operators. With confused assemblages of terms such as “drones,” “autonomous weapons,” “killer robots,” and “AI,” along with journal articles with hyped or alarmist headlines, it is difficult for both the public and those charged with national security to comprehend the advantages—and limits—of specific applications of artificial intelligence and its associated (albeit distinct) techniques of machine learning and analysis/fusion of big data.11
This lack of knowledge and understanding results from the fact that there are very few open, public sources—articles, books, published studies—that discuss in any detail the specific, functional applications of AI to the discrete elements that constitute preparation for, deterrence of, and conduct of military operations. Part of this difficulty is due to the security classification of such information, during both the military-sponsored research and development phase and the implementation and employment of such systems. Another complication is that many of the professional articles written on the potential for AI application to military affairs are direct, indirect, or (rarely) subtle arguments for spending additional financial and personnel resources (e.g., money, labor, and time) on the development of AI and related techniques. Other sources are arguments for the greater importance (or “status” in the bureaucratic hierarchy) of AI programs in the U.S. Department of Defense (DoD). It is hard to give a balanced assessment of the application of AI (cons as well as pros) while advocating for more money or status.
AI at War is specifically intended to overcome these difficulties and increase the understanding of both national security professionals and the interested public of the application of AI to warfighting. Although the themes and findings of the chapters are generally applicable across DoD, to include all services, joint staff, and defense agencies, as well as allied and partner ministries of defense, our “case study” is warfighting functions in the naval services—the U.S. Navy and U.S. Marine Corps. Our definition of “warfighting” is deliberately broad to include those logistics and administration functions that are necessary for combat operations or deterrence. Such “tail” in the proverbial “tooth-to-tail” ratio is essential if the teeth are to be employed. For this book, warfighting also includes the overall national security strategy, assessment of the future security environment, and planning, programming, and budgeting essential to ensure our future security.
It is important to note that even within DoD, the term AI is used loosely, often applied to software programs and algorithms that provide “automated processes” for data fusion, rather than to the dominant characteristics of artificial intelligence: pattern recognition and the ability to self-program. For the purposes of this book, we try to avoid conflating AI with automated processes that do not include pattern recognition or self-programming, but when we use the term AI, we do include in it the related functions of machine learning and big data analysis—primarily for brevity.

However, before switching to brevity, it is imperative that we unpack and distinguish those terms.
AI VS. MACHINE LEARNING VS. BIG DATA
The relationship between deep learning, machine learning, and AI has been described as “a set of Russian dolls [matryoshka] nested within each other, beginning with the smallest and working out[ward]. Deep learning is a subset of machine learning, and machine learning is a subset of AI.”12 From this perspective, AI includes more than machine learning, but all machine learning falls under the category of AI.
Machine learning was originally defined as a “field of study that gives computers the ability to learn without being explicitly programmed.”13 Recent usage describes machine learning as the ability of computers to modify their decision-making when exposed to increasing amounts of data. This is achieved primarily through trial-and-error learning guided by predictions. In essence, the machine is programmed to predict outcomes based on the knowledge and algorithms with which it is initially provided. After it makes a decision or takes an action based on its initial knowledge (memory) and forecast of being correct, it learns from the results. If incorrect, it programs that bit of knowledge into its memory and is able to modify the algorithm that guides its prediction (and actions). Part of the data is the quantitative size of the error measured by the “distance” from the answer.14 A useful, but overly simplistic, illustration would be the addition of a new factor to an “if-then” computer statement.15 That statement is modified from “if-then” to “if-then (original knowledge) plus then (latest results).” In this way, the computer “learns” from experience. Once processed, each data set is referred to as having been “labeled.”
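As a deliberately simplified illustration of this predict-measure-correct cycle (our own sketch, with invented data pairs and an invented learning rate), consider a machine whose entire “knowledge” is a single adjustable number:

```python
# Toy trial-and-error machine learning: predict, measure the "distance"
# from the correct answer, and use that error to modify the machine's rule.

examples = [(1.0, 3.0), (2.0, 6.0), (4.0, 12.0)]  # invented (input, correct answer) pairs

weight = 1.0          # initial knowledge: the machine guesses answer = weight * input
learning_rate = 0.1   # how strongly each error adjusts the rule

for _ in range(20):                           # repeated exposure to the same data
    for x, correct in examples:
        prediction = weight * x
        error = correct - prediction          # signed "distance" from the answer
        weight += learning_rate * error * x   # correct the rule in proportion to the error

print(round(weight, 3))  # approaches 3.0, the pattern hidden in the data
```

Each pass through the data shrinks the error; this is the “if-then (original knowledge) plus then (latest results)” modification described above, compressed into one number.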
To make decisions from a huge amount of data—such as in facial recognition, by evaluating combinations of pixels—programming such learning becomes very complex with the equivalent of hundreds of thousands of if-then statements and pieces of labeled data. The algorithms creating these statements are arranged in layers in which two or more algorithms (sometimes referred to as “nodes”) in one layer feed results to a node (algorithm) of a higher layer. Each layer or node adds a corresponding layer of complexity by combining the multiple results of the previous layer into a new (but related) algorithm. “Deep” is the technical term for a network of nodes that has more than one “hidden” layer, or any layer other than the original data input and the final output. Thus, any system involving more than three layers is considered to constitute “deep learning.” Deep learning is thereby a subset of machine learning. As computation hardware has improved and programmers have become more adept, deep learning has constituted an increasing percentage of machine learning systems.
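The layered arrangement just described can be sketched in miniature. In this hypothetical fragment the weights are random placeholders, whereas a real network would learn them from data:

```python
# Minimal sketch of layered nodes: each node combines all outputs of the
# previous layer, and layers between input and output are "hidden."
import math
import random

random.seed(0)  # fixed seed so the illustration is repeatable

def layer(inputs, num_nodes):
    """One layer: each node computes a weighted combination of every input,
    squashed into the 0-1 range (weights here are random placeholders)."""
    outputs = []
    for _ in range(num_nodes):
        total = sum(random.uniform(-1.0, 1.0) * x for x in inputs)
        outputs.append(1.0 / (1.0 + math.exp(-total)))  # sigmoid squash
    return outputs

pixels = [0.2, 0.9, 0.4, 0.7]   # stand-in for raw input data (e.g., pixel values)
hidden1 = layer(pixels, 3)      # first hidden layer
hidden2 = layer(hidden1, 3)     # second hidden layer: more than one hidden layer = "deep"
score = layer(hidden2, 1)[0]    # final output, e.g., "how leaf-like is this image?"
print(round(score, 3))
```

Counting the input and the final output, this toy network has four layers, so by the definition above it would qualify as “deep,” though a real image recognizer would use vastly more nodes and layers.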
Deep learning systems with multiple layers and nodes are referred to as neural networks or, more properly, “deep artificial neural networks” and sometimes “deep reinforcement learning.”16 The choice of this common term is unfortunate, since it conjures the image of the neural networks within the human brain, which function through biochemical processes (as studied in the medical field of neurology), rather than electrical-sourced binary calculations. For a lay audience, the use of the term “neural networks” makes deep learning and AI seem more human-like than they actually are. Perhaps this also makes such AI machines more threatening.

However, the layered combination of nodes in neural networks is simply a more extensive means of generating predictions (such as in photo recognition: “this image is probably a leaf”), comparing the predictions to reality (“no, it is actually not a leaf; it is a tree”), establishing the degrees of error (“a leaf is more related to a tree than it is to the image of a dog, but it is less related than an image of a branch with several leaves”), and correcting the controlling algorithm (“this is how to distinguish a leaf or branch with several leaves from a tree”). Media examples in which AI systems mistake the faces of individuals or confuse one animal for another likely indicate that the system examined does not have enough layers to identify more distinguishing characteristics. But the more layers there are, the more computational difficulty, and—if a single computation is incorrect—the more chances of error. This is the infamous situation of “garbage in equals garbage out.” When applying AI to decision-making in the use of military weapons systems—which often involves a great deal of complexity—programming errors of that sort could be fatal to the defender or result in collateral damage or friendly fire. That is not true for systems that only play chess or iGo.
This leads to a debate within the ranks of AI scientists as to whether “supervised learning” or unsupervised deep learning is the most effective way to build accurate AI machines. In supervised learning, a human programmer or data scientist/engineer is involved more deeply in assisting the machine with the labeling of data. Instead of simply devising the underlying programs and providing the initial data, the engineer also evaluates the machine’s assembly and association of data and “corrects” it when it is wrong. As one data science consulting firm describes its value, “Data scientists are the people who explore data, clean it, test algorithms that they think might make accurate predictions about that data, and then tune those algorithms so that they work well.”17

Eventually, the corrections accumulate so that the machine has a solid framework within which to label new data inputs. Supervised learning may also involve putting the AI machine into simulations with known answers in order to evaluate the output. In unsupervised learning, the machine itself is allowed to work its associations by trial and error, establishing its own guidelines.
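The contrast can be shown side by side with a toy sketch (invented readings and deliberately crude methods, for illustration only):

```python
# Supervised vs. unsupervised learning on the same invented readings.

data = [1.0, 1.2, 0.9, 8.0, 8.3, 7.9]

# Supervised: a human has labeled every point, so the machine only
# generalizes from those labels.
labels = ["quiet", "quiet", "quiet", "loud", "loud", "loud"]

def classify(x):
    """Label a new reading by the nearest human-labeled example."""
    nearest = min(range(len(data)), key=lambda i: abs(data[i] - x))
    return labels[nearest]

print(classify(7.5))  # -> "loud"

# Unsupervised: no labels at all; the machine sorts the points into two
# groups by its own guideline (distance to two self-adjusted centers).
center_a, center_b = data[0], data[-1]
for _ in range(5):  # crude two-means refinement
    group_a = [x for x in data if abs(x - center_a) <= abs(x - center_b)]
    group_b = [x for x in data if abs(x - center_a) > abs(x - center_b)]
    center_a = sum(group_a) / len(group_a)
    center_b = sum(group_b) / len(group_b)

print(group_a, group_b)  # the machine's own grouping of the readings
```

In the supervised half, a human supplied the labels up front; in the unsupervised half, the machine invented its own grouping, which may or may not match categories a human would have chosen.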
The debate involves both cost and philosophy. Obviously, supervised learning involves much greater costs (since it is more human labor–intensive) than allowing the AI machines to do their own trial-and-error learning and, afterward, have humans evaluate the final output.18 In the race between Tesla, Waymo, Uber, and other companies to develop fully autonomous self-driving cars, costs matter since they cut into (potential) profits. The company that can utilize the least costly sensors and fewest human engineers (and achieve results) is perceived as the better investment.19 Balancing the search for lower development costs are the results; running over a bicyclist is considered a bad thing.20 Presumably, the more the learning is supervised, the less likely that such incidents occur.
At the same time, there is an aspect of unsupervised learning that worries some scientists—and enthralls others. Without supervision, it is difficult to determine exactly how AI machines “think through” a particular problem, and, hence, difficult to duplicate the exact logic and process of the machine’s decision-making. The actual AI process remains a mystery, evoking the image of potentially irrational (by human standards) results. This has been noted as “the dark secret at the heart of AI” and particularly observed through the artwork produced by unsupervised AI.21 Yet other scientists are intrigued that there might be modes of (AI) thought vastly different from that of humans, which might lead to unimagined discoveries. Even the art world appears interested.22
This touches on the philosophical question of whether supervised AI is “actual AI” and whether AI machines should be free to map out “new paths to knowledge.” At the extreme fringe are those who postulate that advanced (unsupervised learning) AI “could run the world better than humans ever could.”23
Leaving aside the philosophical aspect, an additional difficulty in applying deep learning—and certainly unsupervised machine learning—to weapons systems is that all deep learning (like all machine learning) relies, as previously discussed, on some degree of learning from errors of initial perception. Yet in combat, one single error can be fatal. For example, the Phalanx close-in weapons system (CIWS), used as the last-ditch point defense system against antiship cruise missiles in U.S. and allied Navy warships, can be considered a “simple” AI system since it can detect, identify, track, and—if in full-auto mode—engage an incoming target without human intervention. Development of CIWS as an automated, networked system was a recognition of the increasing speed and varying flight profiles of a new generation of antiship missiles. However, CIWS is not designed with machine learning and could not function as a learning system, since learning from error would result in its destruction (along with that of the ship). The point is that there are limits to the application of machine and deep learning to AI. As previously noted, all machine learning is AI, but not all AI includes deep learning.
Big data is associated with AI since the amount of information and effort to correlate the data is usually so huge as to require automated processes more advanced than any available through previous types of computer programming. At its best, big data analytics “makes it possible to do many things that previously could not be done: spot business trends, prevent diseases, combat crime and so on.… Number-crunchers have even uncovered match-fixing in Japanese sumo wrestling.”24
Much of this primarily digital data is collected as the result of activities conducted for other purposes and is especially used to target Internet advertising: “‘Data exhaust’—the trail of clicks that Internet users leave behind from which value can be extracted—is becoming a mainstay of the Internet economy. One example is Google’s search engine, which is partly guided by the number of clicks on an item to help determine its relevance to a search query. If the eighth listing for a search term is the one most people go to, the algorithm puts it higher up.”25 Google either sells the analytical data to marketing firms or manufacturers that evaluate how to best position their products for sale on the Web or provides the evaluation and positioning as a service-for-hire.
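In caricature, the click-driven adjustment in that quoted example reduces to a re-sort. The listing names and click counts below are invented, and real search ranking blends many more signals than clicks:

```python
# Caricature of click-informed reranking: listings that attract the most
# clicks migrate toward the top of the results (all values invented).

results = ["site-a", "site-b", "site-c", "site-d"]             # initial ranking
clicks = {"site-a": 40, "site-b": 12, "site-c": 310, "site-d": 8}

reranked = sorted(results, key=lambda site: clicks[site], reverse=True)
print(reranked)  # ['site-c', 'site-a', 'site-b', 'site-d']
```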
Assessing what methods to use to compile and analyze the volume of data is where data scientists come into their own. As The Economist notes, “a new kind of professional has emerged, the data scientist, who combines the skills of software programmer, statistician and storyteller/artist to extract the nuggets of gold hidden under mountains of data.”26 These skills have obvious applicability to military analysis, where input from a variety of sensors needs to be combined to provide targeting solutions. For example, in undersea warfare or antisubmarine warfare, many environmental factors need to be measured in order to evaluate a sonar return: water salinity, depth, bottom contour, biologics, and so forth. Naval researchers were analyzing combinations of such factors long before anyone coined the term “data scientist.” Weapons systems have long been designed to account for such information. However, with AI, such data can presumably be evaluated faster and with more accuracy than before, along with making the decision as to which weapon would achieve the greatest effect in an attack.
In short, the relationship between AI and big data is one of tool and endeavor. Big data analytics attempts to make sense and value out of huge amounts of digitally recorded data. AI processing is a method by which the data can be combined, categorized (labeled), and analyzed. Big data does not fit the Russian nested doll image. Rather, the relationship between big data and AI can be visualized as a Venn diagram of overlapping circles, with much of the big data circle overlapped by a larger AI circle. Not all big data analytics use AI, but much of it does. A part of the universe of AI is used for big data analytics, but it can also be said that all AI is itself dependent on big data input.
Large quantities of data are needed to train AI, and without such amounts of data, there might be no point in developing AI. In recognition of this fact, we often present the technologies under discussion in the sequence of “big data, AI, and machine learning,” as in this book’s title.
SIMPLE VS. NARROW VS. GENERAL (STRONG)

A number of adjectives are sometimes appended to AI that will not be addressed here because they refer to particular computer science approaches to developing AI, coined by particular computer scientists. Some are concrete, some are vague; many have been integrated into other terms. For the purposes of this discussion, we will categorize AI into three groups that expand in complexity.
Simple AI refers to what can also be called automated processes.27 In its broadest use, simple AI refers to machines that can do calculations through analog or digital functions faster than most humans could. From this perspective, a hand-held calculator could be described as simple AI.28 Most hand-held calculators are not, however, connected to sensors from which they can obtain their own data.
Phalanx CIWS is a simple AI machine that, in full-auto mode, makes a decision based on its own data collection whether to engage an incoming target—to shoot or not shoot. In this sense, it acts as a human would when provided with the same data. When set in a manual mode, CIWS does indeed require a human to press the firing trigger based on the data the system’s sensors provide, as well as any additional information the human operator can acquire from other sources. The reason for placing it in a fully automatic (or autonomous) mode is that the human operator cannot keep up with the speed of the engagement, or there is no human operator present. The danger in placing CIWS in a full-auto mode is that it might engage undesired targets, perhaps friendly aircraft, that are detected by its radar sensors and fit the parameters for which it is programmed to attack—speed of approach, flight profile, and other characteristics.
The difference between CIWS as simple AI and systems that constitute “narrow AI” is that CIWS is not a learning system. It cannot expand its knowledge base as to what is a target and what is not, but rather must conform to a decision-making rule set that it cannot alter until reprogrammed by humans. Hence, if a friendly aircraft with the same flight profile as an incoming enemy antiship cruise missile approaches the defending ship, it will be engaged. The CIWS unit will not learn from its “error” to distinguish between antiship missiles and friendly aircraft.
Narrow AI is the developmental level of current AI systems. “Narrow” refers to the fact that the AI machine can learn and self-program, but only to perform a narrow or specialized range of activities. It is also called, though less often, “modular AI” or “weak AI.”29 Winning against chess or iGo masters would be considered a narrow range of activities. All current applications of AI that involve machine learning remain narrow AI. Several narrow applications can be combined in one AI machine but not yet in a single neural network.
General AI or general intelligence AI—also known as strong AI—characterizes systems that mimic human awareness and have the ability to make decisions concerning multiple tasks or concepts. Instead of being limited to chess or iGo, the AI machine can make decisions in a multiplicity of domains—similar to human multitasking. However, doing so generally requires some degree of mobility for the system to position itself at the site where the decision needs to be made—hence the close association of general/strong AI and autonomy. This association evokes the image of the film villain “the Terminator,” an autonomous system that appears human-like in its decision-making and actions but is programmed to achieve one particular result at all costs.
Mercifully, the ability to create a Terminator remains far out of reach for two reasons: the current programming characteristics have not yet developed a system with deep enough layers to make decisions in multiple domains (which would require many millions of lines of code), and engineering capabilities cannot make a device that can mimic and control all the moves of the human body (some can be mimicked, but not all, because human movement is biochemically, not mechanically, controlled). Could a Terminator-like humanoid be constructed someday? Perhaps, but it would require significant advances in biochemistry and other fields besides AI.
Such advancements are funded in the civilian sector primarily if there is a profit to be made. Currently, there are not such incentives to create general AI while narrow AI functions adequately to perform commercialized tasks. Even such virtual personal assistants as Siri or Alexa, which can seem human-like because they have natural language processors (understanding and replying in spoken languages), constitute narrow AI. They can perform rapid fact searches on the Internet or turn electrical/electronic devices on and off, but they are still narrow AI with limits to their functions.

In the military sector, today’s AI-driven systems that possess autonomy are largely simple AI, only bordering on narrow AI. The third offset strategy or third revolution in military affairs that is discussed by national security professionals (particularly in chapter 2) is primarily focused on increasing the application of narrow AI in military operations, not striving for general AI. The third offset also acknowledges that the U.S. military needs to leverage commercial AI advances in order to develop military applications. Given the amount of resources required and the fact that profit drives development, military applications are highly dependent on ongoing commercial development of AI. The Chief of Naval Operations acknowledged this fact in the 2018 Navy future fleet development plan, A Design for Maintaining Maritime Superiority, Version 2.0.30
AI AND AUTONOMY
Autonomous systems are not synonymous with AI. Autonomous systems—machines that can operate without physical human intervention—are relatively rare and, more importantly, are preprogrammed to perform tasks, not to make independent decisions. Autonomous vacuuming devices may reverse course when they bump into walls or furniture, but they do not actually recognize walls or furniture; they are simply programmed to reverse or change to another preprogrammed course on any impact (including with children, dogs, and cats). Without that feature, they are similar to mechanical battery-operated toys with wheels or tracks that can offset, which have existed for decades.
Simple AI—as it is defined—does exist in military systems and has existed for several decades. Our previous example of Phalanx CIWS is but one form. Another is the Mk 60 Captor mine, a Cold War–era naval moored mine. Moored on the seabed, Captor had sensors that could detect submarines and, without the need for human decision-making, launch a torpedo against the target. Using set parameters, Captor, like other bottom mines, could distinguish between targets and nontargets. The human decision-making involved was simply to determine where to place the mine. Thus, Phalanx and Captor are both examples of simple AI that are autonomous (within prescribed limits).
Trang 26The difference between Captor and an improvised explosive device inAfghanistan or Iraq is that, unless electrically or remotely controlled viacell phone by an observer, the latter (a land mine) cannot distinguishbetween a U.S./allied convoy, a humanitarian nongovernmentalorganization vehicle, or a terrorist-filled “technical” (pickup truck) There is
no autonomous programming involved
The term autonomous is also associated with unmanned vehicles (UVs). However, most military UVs utilized thus far have not been autonomous. Rather, as with the feared Predator or Reaper unmanned aerial vehicles, they have been remotely controlled by military personnel (humans) with limited support from narrow AI-processed intelligence (and a lot of human-processed intelligence). It is not that unmanned vehicles cannot be designed to be primarily autonomous but rather that there is considerable reluctance to trust their decision-making in that mode.31 Additionally, successfully adding narrow AI is a difficult and expensive engineering process.

The relationship between autonomy and AI is again similar to a Venn diagram. Both overlap, but the degree of overlap will be determined by future advances in AI and whether the third offset strategy comes to fruition.
AI AND WARFIGHTING
The third offset strategy will be discussed in detail in later chapters. In short, it is an attempt by the United States and her allies to harness commercial developments in AI (and other emerging technologies) so as to remain ahead of military peer competitors in an international system defined by great power competition. “Third” refers to the fact that the United States is perceived to have leaped ahead of past military competitors by developing nuclear weapons and then precision-guided weapons. As noted, increasing the use of AI (and autonomous systems) is one of the goals of the third offset.
Thus far, programs supporting the third offset have focused on narrow AI because, for both moral and practical reasons, U.S. defense decision-makers and the American public are uncomfortable with contemplating general/strong AI weapons systems. Autonomous narrow AI weapons systems also remain, as noted, controversial in nations in which public opinion controls government. This will also be examined in a later chapter. However, authoritarian governments may not be so reluctant. Russia, in particular, is assessed as having developed and deployed several completely autonomous armed land combat systems.32 China is reportedly overtaking the United States in commercial AI development, although the effects on its military programs remain largely unknown.33
Areas in which current commercial AI could be used for military functions include photoreconnaissance analysis (and other imaging) and speech translation. These tasks fit narrow AI because they constitute pattern recognition of big data sets. In an earlier article, one of our chapter authors advised that “the Navy should invest in [AI] capabilities for tasks with rules or patterns that are predictable and difficult to disrupt, and should avoid automating tasks with rules and patterns that change unpredictably.”34 Rules and patterns that are predictable are the elements that enable AI to beat chess and iGo masters. The rules of these games do not change in the middle of these matches, even if a tremendous number of patterns must be recalled.
Can AI be used in unpredictable situations of chaos on the battlefield or to partially dispel the fog of war? Many experts argue yes, although they also admit that there can be no such thing as “perfect AI” and that “useful AI generates good enough results at least a little faster, better, or cheaper than those produced by human intelligence.”35 Of course, if one is on board a warship seconds away from the impact of an enemy’s antiship missile and is trusting an AI machine to provide for the last-minute defense, “perfect AI” would be the desire. This is one reason for exploration of the use of AI for human-machine teaming in warfighting, a point particularly emphasized in chapter 2. Imperfect AI may assist human decision-making, while relying on “perfect” AI to make its own decisions remains unreliable.
Whether in the form of pattern recognition in intelligence, weapons control, or human-machine teaming, AI will be a part of future military operations—just as it is and will be in commercial businesses. Examining exactly how AI can be applied to naval warfighting—without hype or alarm—is the objective of this book.
TASKS OF THE AUTHORS
Working within the confines of unclassified information, each of the chapter authors or co-authors is charged with examining the applications and implications of AI to a discrete warfighting mission or function or series of related functions. Although this tasking may naturally evoke programs, projects, or studies that the authors are involved with on a day-to-day basis, they were asked to focus their writing on mission or function, not program.
Each of the authors was chosen because he or she is actively involved in developing, assessing, or advising on the applications of AI to naval warfare. Several were senior officials or commanders responsible for the command and control of forward-deployed combat forces. We have deliberately paired commanders, scientists, engineers, and program managers with national security analysts in order to ensure that discrete assessments also have broad background information so that the material is approachable by a nonspecialist. That does not mean that technical detail is avoided altogether; rather, we have attempted to place technical detail in contexts of combat operations and overall defense planning.
The goal for each chapter is to answer the question, “How could AI be applied to, support, or affect (or how has it already affected) a particular area of focus”—such as intelligence, command and control, integrated fires, strategic planning, deterrence, force development, operational decision-making, or deception—in other words, how could it work? What would be the results? What would be the dangers? Implied are three subordinate questions. First, does the application of AI to the focus area provide us with a new capability, make existing capabilities process faster, or simply replace manpower or reduce costs? Second, what assumptions, costs, and risks are involved in applying AI to that particular mission, function, or area of focus? Third, are the costs and risks worth it, or are alternate methods just as effective in achieving desired results? Indeed, recent defense workshops have suggested that some inputs to decision-making in war are too complex—using current techniques—to be adequately coded, or simply do not fit a logical pattern that can be considered rational thought. These are also themes with which the chapter authors grapple in their individual assessments.
STRUCTURE OF THE BOOK
Many edited volumes result in a disconnected sampling of subtopics related to the general theme but not to each other. We have attempted to apply more discipline to this work. This book is structured into three associated segments. Following the foreword and introduction, the first series of chapters focuses primarily on the desired objective of introducing AI into naval operations, from a very broad perspective to one increasingly more specific.
Readers should note, however, that some of the chapter authors disagree with other authors. Although such differences are identified within the chapters, the editors have—by design—made no effort to seek agreement between the authors. It is important for readers to be well aware of the varying perspectives and disputes concerning the applicability of AI if they are to come to their own conclusions on AI’s effects on naval and joint warfighting.
Chapter 1 provides a theory and conceptual history of AI and machine learning, written from the perspective of computer scientists who support research toward the development of general or strong AI. The authors describe machine learning programs as a form of applied statistics made possible by advances in computing hardware—a necessary advance in the application of technology to correlating big data—but not in itself constituting artificial intelligence. They argue that many of the algorithms applied to the probabilities that form the results of machine learning programming were developed long before the search for AI began.
Chapter 2 examines AI and related technologies as elements of the third offset strategy and in the context of developments by potential adversaries. The author was one of the foremost champions of the third offset approach and the potential for AI applications during his past service as deputy secretary of defense.
Chapter 3 reflects on (although does not constitute) the current official view of the Department of the Navy (Navy and Marine Corps) concerning its commitment to develop, fund, and implement AI. It is written by two serving Department of the Navy officials with the understanding that it is their own personal assessment of the department’s efforts. Even if the official view changes in the future, many of its premises will remain a part of the public debate on providing for national security.
Chapter 4 assesses the relationship between robotics, autonomous weapons systems, and AI. This is the relationship that confuses and creates great fear in the critics of military use of AI. The author has previously published his own work on military AI, particularly in land combat.
Chapter 5 provides insights from surveying the current analyses of the AI programs of potential military opponents, notably the People’s Republic of China and Russia. Relying on the work of the top researchers of foreign military AI, this chapter identifies the differing approaches of the AI “great powers.”
Chapter 6 identifies the requirements for AI if it is to facilitate practitioner innovation and adaptation in crises and combat. Often forgotten in discussions on AI is that its effective use in military operations will be determined more by the operators than by the programmers/coders.
Chapter 7 discusses the opportunities of utilizing AI for supporting and increasing the speed of decision for mission command, defined as “the conduct of military operations through decentralized execution based upon mission-type orders.”36
The second section of the book examines the application of AI to specific naval warfighting functions or “warfare areas,” along with effects it might have on fleet structure and the education of personnel. Starting with an overview (chapter 8), these areas include intelligence, surveillance, and reconnaissance (chapter 9), communications (chapter 10), command and control (chapter 11), and integrated fires (chapter 12). Chapter 13 examines both the impact and requirements of AI on future military and naval force structure, to include the personnel aspect.37 In doing so, the chapter touches on the issue of whether the services can recruit and retain servicemembers with high-value cyber and AI skills.38 Chapter 14 takes a deeper dive into the role and requirements of AI in naval education, specifically the education of midshipmen at the U.S. Naval Academy. Chapter 15 uses a case study of an attempt to develop a modest decision aid for operational commanders to examine both the opportunities and limits of AI in fulfilling the operational requirements.
The third and final section of the book examines the policy and strategy elements involved with naval application of AI and the effects on current and future national security.
Chapter 16 discusses the use of AI in terms of the overall cybered conflict that is occurring even in our time of “peace” and the potential for deception to create AI vulnerabilities. Chapter 17 examines the organizational and bureaucratic-political impediments to committing to AI, as well as the ethical concerns, particularly those involved with autonomous systems and human decision-making regarding the use of deadly force. Since ethical concerns over military applications of AI are also discussed throughout many of the other chapters, the book does not include an individual chapter solely on AI ethics. By design, this choice was intended to mainstream this critical issue—not isolate it.39
Chapter 18 identifies the impact of AI on naval strategy and tactics now and in the future. Chapter 19 provides a scientific perspective on the general future of AI. Finally, an epilogue and an afterword provide practitioners’ perspectives on what this general future may mean to the naval services and what research is needed to continue this discussion.
Thus, the book begins its voyage with definitions, origins, requirements, and objectives; then sails into specifics; examines storms, shoals, and strategic effects; and ultimately provides possibilities and options for the future. One might shorthand the three segments as the promises, the particulars, and the problems.
INDIVIDUAL EFFORTS, PROXIMATE OBJECTIVE, CONTINUING PROCESS
This book is built primarily around collaboration between members of two U.S. naval organizations, the U.S. Naval War College (NWC) in Newport, RI, and the Naval Information Warfare Center Pacific (formerly the Space and Naval Warfare Systems Center Pacific), in San Diego, CA. The organizational affiliations of authors at NWC include the Institute for Future Warfare Studies and the Cyber and Innovation Policy Center, both part of the Strategic and Operational Research Department of the Center for Naval Warfare Studies, the research arm of NWC. Additionally, authors include individuals from the Chief of Naval Operations Assessments Division (OPNAV N81) and its campaign analysis branch, located at the Pentagon; the U.S. Naval Academy, Annapolis, MD; the Oceanit Company of Honolulu, HI; the Leidos Corporation, headquartered in Reston, VA; the Center for a New American Security (CNAS), Washington, DC; the Center for Strategic and Budgetary Assessments (CSBA), Washington, DC; and retired flag officers of the U.S. Navy and senior DoD officials.
However, no matter the affiliation, all the authors worked exclusively on their own account and from their own perspective. Although current U.S. government policies are discussed, this book is decidedly not an official publication of the U.S. government or of any of the affiliated departments, agencies, or organizations. As the standard disclaimer reads: The views expressed in these chapters are those of the authors and do not necessarily reflect the official policy or position of the Department of the Navy, Department of Defense, or the U.S. Government.
The proximate goal of this book is to promote an understanding of the assumptions, possibilities, issues, costs, risks, effects, impediments, alternatives, and requirements for the application of artificial intelligence and related sciences to the modern art of war and specifically to U.S. naval forces. Our view is that this is an understanding needed by all national security professionals, but also by the citizens of the United States, the nation that we serve, have served, and will continue to serve.
Through providing our analyses, we hope to contribute to what should be a continuing DoD and nationwide discussion of the applicability of AI to our future. From this view, this book is but a start to what should be an ongoing process of research, analysis, debate, agreement, and development. The only way to decide the future of artificial intelligence is through the application of human intelligence, and the only way to successfully leverage artificial intelligence for military applications is for the operational forces to clearly define the narrow AI tasks that are most important to support Sailors in the fleet and Marines in the fleet and field.40
do not think—they only respond to instruction. Others revert to a simple definition of thinking as "reasoning" in support of the conception that AI machines "think." However, formal definitions of thinking also imply "the entrance of an idea into one's mind with or without deliberate consideration or reflection" (Webster's Ninth Collegiate Dictionary [Springfield, MA: Merriam-Webster, 1985], 1226). Logically, AI fails this definition since random information in a machine must first be preprogrammed by a programmer or internal programming function based on some form of consideration or reflection.
3 Credit for the term artificial intelligence goes to Professor John McCarthy of MIT and Stanford, who defined it as “the science and engineering of making intelligent machines.”
4 Skymind Inc., "Artificial Intelligence (AI) vs. Machine Learning vs. Deep Learning," A.I. Wiki, https://skymind.ai/wiki/ai-vs-machine-learning-vs-deep-learning.
5 More than 50 percent of robots delivered to the United States are used in the automotive industry for repetitive tasks. See Chloe Taylor, "A Record Number of Robots Were Put to Work Across North America in 2018, Report Says," CNBC, February 28, 2019, https://www.cnbc.com/2019/02/28/record-number-of-robots-were-put-to-work-in-north-america-in-2018.html.
6 A provocative paper comparing the costs of AI versus multitasking humans (one that finds humans to be more cost-efficient at nonrepetitive tasks) is Matt Mahoney, The Cost of AI (draft unpublished paper), March 27, 2013, http://mattmahoney.net/costofai.pdf.
7 Research into quantum computing, in which series of zeros and ones can be separated into different quantum states, is an attempt to break free of exclusively binary calculations.
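As a toy illustration of the note's point, a single qubit can be described in a few lines of Python; the amplitudes below are invented for the example and are not drawn from the text:

```python
# Toy sketch: a qubit holds a weighted combination (superposition)
# of 0 and 1 rather than a single binary value. The amplitudes
# below are invented for illustration.
import math

amp0 = 1 / math.sqrt(2)  # amplitude on the |0> state
amp1 = 1 / math.sqrt(2)  # amplitude on the |1> state

# Measurement probabilities are the squared magnitudes of the amplitudes.
print(f"P(measure 0) = {abs(amp0) ** 2:.2f}")  # 0.50
print(f"P(measure 1) = {abs(amp1) ** 2:.2f}")  # 0.50
```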
8 Skymind Inc., "A Beginner's Guide to Neural Networks and Deep Learning," A.I. Wiki, https://skymind.ai/wiki/neural-network.
9 “A Growing Global Coalition: About Us,” Campaign to Stop Killer Robots, https://www.stopkillerrobots.org/about/#about.
10 “Killer Robots,” Human Rights Watch, https://www.hrw.org/topic/armas/killer-robots#.
11 One example of an influential professional journal article that has a particularly alarmist or overly dramatic title but is rather more nuanced in substance is General John Allen, USMC (Ret.), and Amir Husain, "AI Will Change the Balance of Power," U.S. Naval Institute Proceedings 144, no. 8 (August 2018): 26–31, https://www.usni.org/magazines/proceedings/2018-08/ai-will-change-balance-power. However, the article does coin yet another new buzzword: "hyperwar."
12 Skymind Inc., "Artificial Intelligence (AI) vs. Machine Learning vs. Deep Learning."
13 Skymind Inc., quoting Arthur Lee Samuel of IBM in 1959. Samuel programmed a computer to learn how to play checkers, prompting the more recent efforts at chess and Go. The computer learned by playing itself thousands of times.
14 One way to envision this "distance" is to imagine numbers along a consecutive number line. The distance between numbers three and five is compared to the distance between three and ten. Assuming three is the correct number, the prediction of five as the correct answer would be judged more accurate than the prediction of ten because the distance between three and five is less than between three and ten.
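The note's comparison can be expressed as a short Python sketch (using the note's own numbers; this is illustrative only, not code from any ML library):

```python
# Prediction error as distance along the number line, per the note:
# the correct answer is three; candidate predictions are five and ten.
correct = 3
for prediction in (5, 10):
    distance = abs(prediction - correct)
    print(f"prediction {prediction}: distance {distance}")

# Output:
# prediction 5: distance 2   <- judged the more accurate prediction
# prediction 10: distance 7
```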
15 Describing a computer process in terms of "if-then" statements has acquired the name pseudo-code or pseudo-coding. Admittedly, this involves a more detailed process using "if-then-else" logic, "while/until" loops, and others. However, this introduction will limit itself to "if-then." A short description of pseudo-code is Jon Erickson, Hacking: The Art of Exploitation, 2nd ed. (San Francisco: No Starch Press, 2008), 7–10.
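For readers who have not encountered these constructs, the logic the note describes looks roughly like this in Python; the sensor readings and threshold are hypothetical values invented for the example:

```python
# "If-then-else" logic inside a "while" loop, as the note describes.
# The readings and threshold are invented for illustration.
readings = [12, 47, 63, 55, 30]
threshold = 50

i = 0
while i < len(readings):            # loop until all readings are checked
    if readings[i] > threshold:     # the "if-then" branch
        print(f"reading {readings[i]}: above threshold")
    else:                           # the "else" branch
        print(f"reading {readings[i]}: normal")
    i += 1
```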
16 Skymind Inc., "Artificial Intelligence (AI) vs. Machine Learning vs. Deep Learning."
17 Skymind Inc., "A Beginner's Guide to Automated Machine Learning & AI," A.I. Wiki, https://skymind.ai/wiki/automl-automated-machine-learning-ai.
18 Sardonically, Skymind Inc. notes in its more self-promotional material: "I know what you want to hear: 'You, too, can automate AI in your company, and never have to worry about those pesky data scientists again!' It's not that simple."
19 Even if fully automated self-driving cars are eventually legalized, the possibility of first entrants
turning an actual long-term profit by their AI alone is an investment gamble, since these firms
have borrowed and burned through billions in capital.
20 Matt Bevilacqua, “Uber Was Warned Before Self-Driving Car Crash that Killed Woman Walking
Bicycle,” Bicycling, December 18, 2018,
https://www.bicycling.com/news/a25616551/uber-self-driving-car-crash-cyclist/.
21 Will Knight, "The Dark Secret at the Heart of AI," MIT Technology Review 120, no. 3 (May-June 2017): 54–61, https://www.technologyreview.com/s/604087/the-dark-secret-at-the-heart-of-ai/.
22 Tim Schneider and Naomi Rea, "Has Artificial Intelligence Given Us the Next Great Art Movement? Experts Say Slow Down, the 'Field Is in its Infancy'," ArtNet News, September 15, 2018, https://news.artnet.com/art-world/ai-art-comes-to-market-is-it-worth-the-hype-1352011.
23 Dan Robitzski, “Advanced Artificial Intelligence Could Run the World Better than Humans Ever
Could,” Futurism, August 29, 2018,
https://futurism.com/advanced-artificial-intelligence-better-humans/.
24 “Data, Data Everywhere,” The Economist, February 27, 2010, https://www.economist.com/special-report/2010/02/27/data-data-everywhere.
25 “Data, Data Everywhere.”
26 “Data, Data Everywhere.”
27 There is a continuing disagreement among AI scientists and aficionados as to when an automated process constitutes "AI." Proponents of "strong AI," in particular, dislike associating AI with machines as simple as a calculator. Yet under the original definitions in AI research, a calculator does indeed mimic the actions of a human brain, even if it relies exclusively on a human for input. The authors acknowledge that many would disagree with their definition of simple AI or deny that there even is such a thing.
28 Studies of literature on AI published between 1970 and 1990 justify the application. See, for example, Eve M. Phillips, If It Works, It's Not AI: A Commercial Look at Artificial Intelligence Startups, thesis, Massachusetts Institute of Technology, May 7, 1999, https://dspace.mit.edu/bitstream/handle/1721.1/80558/43557450-MIT.pdf?sequence=2. This has been described as the "AI effect," in which machines that were formerly referred to as possessing AI are later dismissed as merely automated processes as the science progresses. A quip that has been attributed to multiple sources is that "AI is whatever hasn't been done yet." The original may be from Douglas Hofstadter, Gödel, Escher, Bach: An Eternal Golden Braid (New York: Basic Books, 1979), 601.
29 The term "modular AI" in reference to narrow AI for military applications is used in Kareem Ayoub and Kenneth Payne, "Strategy in the Age of Artificial Intelligence," Journal of Strategic Studies 39, no. 5–6 (2016): 793–819, https://www.tandfonline.com/doi/full/10.1080/01402390.2015.1088838?src=recsys.
30 Admiral John M. Richardson, USN, A Design for Maintaining Maritime Superiority, Version 2.0, https://www.navy.mil/navydata/people/cno/Richardson/Resource/Design_2.0.pdf.
31 See discussion in Julia Macdonald and Jacquelyn Schneider, "Battlefield Responses to New Technologies: Views from the Ground on Unmanned Aircraft," Security Studies 28, no. 2 (June/July 2019), https://doi.org/10.1080/09636412.2019.1551565.
32 Samuel Bendett, "Russian Ground Battlefield Robots: A Candid Evaluation and Ways Forward," Real Clear Defense, June 26, 2018, https://www.realcleardefense.com/articles/2018/06/26/russian_ground_battlefield_robots_a_candid_evaluation_113558.html; Noel Sharkey, "Killer Robots from Russia Without Love," Forbes, November 28, 2018, https://www.forbes.com/sites/noelsharkey/2018/11/28/killer-robots-from-russia-without-love/#3640d4dbcf01; Samuel Bendett, "Autonomous Robotic Systems in the Russian Ground Forces," Mad Scientist Laboratory, February 11, 2019, https://madsciblog.tradoc.army.mil/120-autonomous-robotic-systems-in-the-russian-ground-forces/; David Axe, "This Video May Be the Future of Russia's Army: Armed Ground Robots," National Interest, March 18, 2019, https://nationalinterest.org/blog/buzz/video-might-be-future-russias-army-armed-ground-robots-48022. Having blocked efforts at international bans on such systems, Russian officials reportedly are suggesting such a possibility: Samuel Bendett, "Did Russia Just Concede a Need to Regulate Military AI?" Defense One, April 25, 2019, https://www.defenseone.com/ideas/2019/04/russian-military-finally-calling-ethics-artificial-intelligence/156553/?oref=defenseone_today_nl.
33 Amy Webb, "China Is Leading in Artificial Intelligence—and American Businesses Should Take Note," Inc., September 2018, https://www.inc.com/magazine/201809/amy-webb/china-artificial-intelligence.html; Will Knight, "China May Overtake the U.S. with Best AI Research in Just Two Years," MIT Technology Review, March 13, 2019, https://www.technologyreview.com/s/613117/china-may-overtake-the-us-with-the-best-ai-research-in-just-two-years/.
34 Connor S. McLemore and Hans Lauzen, "The Dawn of Artificial Intelligence in Naval Warfare," War on the Rocks, June 12, 2018, https://warontherocks.com/2018/06/the-dawn-of-artificial-intelligence-in-naval-warfare/.
35 McLemore and Lauzen.
36 U.S. Joint Chiefs of Staff, DoD Dictionary of Military and Associated Terms, February 2019, 155, https://www.jcs.mil/Portals/36/Documents/Doctrine/pubs/dictionary.pdf.
37 Despite the perception of a joint approach to military force structure, the individual armed services perceive and define force structure quite differently. Traditionally, the most notable distinction is between the U.S. Army and U.S. Navy, created by their different operating environments. Army leadership tends to view personnel as constituting force structure, with equipment seen as an operating expense. Navy leadership tends to see ships, submarines, and aircraft as force structure, and personnel as an operating expense.
38 An impactful commentary that summarizes the current high-tech recruit/retain issue is Jacquelyn
Schneider, “Blue Hair in the Gray Zone,” War on the Rocks, January 10, 2018,
https://warontherocks.com/2018/01/blue-hair-gray-zone/.
39 Ethical use of AI is the most widely debated issue in the existing literature on military applications of artificial intelligence, with the number of studies justifiably increasing. Some of this discussion is captured under the term "ambient intelligence technologies." See, for example, Yvonne R. Masakowski, Jason S. Smythe, and Thomas E. Creely, "The Impact of Ambient Intelligence Technologies on Individuals, Society, and Warfare," Northern Plains Ethics Journal 4, no. 1 (Fall 2016): 1–11, http://northernplainsethicsjournal.com/wp-content/uploads/The-Impact-of-Ambient-Intelligence-Technologies-on-Individuals.pdf (and responding essays in that issue).
40 A point forcefully made in George Galdorisi, "The Navy Needs AI, It's Just Not Certain Why," U.S. Naval Institute Proceedings 145, no. 5 (May 2019): 28–32, https://www.usni.org/magazines/proceedings/2019/may/navy-needs-ai-its-just-not-certain-why.
Theory and Conceptual History of Artificial Intelligence
PATRICK K. SULLIVAN AND THE OCEANIT TEAM
The fields of artificial intelligence (AI) and machine learning (ML) have made significant strides in the past few decades; now they are ubiquitous in modern society. Although the terms are often conflated, AI is "experimental epistemology"—the science of creating a "thinking machine" by encoding in a computer program a theory of human knowledge.1 In contrast, ML is the application of trained statistical models to recognize correlations in big data to make predictions, classifications, and approximations for practical use. AI/ML systems can be found in applications ranging from mission-critical uses such as the control of weapons systems, manufacturing, and logistic systems, to whimsical uses such as Snapchat filters.2 Due to our tendency to bestow "intelligence" on any ML result, AI and ML are often conflated into a single category.3 However, these fields are different and in fact compete to answer a more fundamental question: "How can we make a computational system intelligent?"
Many ML practitioners have faith that intelligence can be achieved by "learning" (in actuality, mathematical curve-fitting) from data. From this perspective, the key to solving the problem is big data and better computation; if only we had infinite data and hyper-computation, we could train ML for infinite possibilities.4 However, others in the so-called good old-fashioned AI community contend that humans are equipped with "innate knowledge" for learning from small or sparse data and engaging in general problem solving so as to survive and thrive in zero-day (i.e., novel) situations, creating new explanations and understanding of the world.5 Therefore, no AI system will be genuinely intelligent unless it is preprogrammed with knowledge that would enable more human-style cognition.
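To make "mathematical curve-fitting" concrete, the following sketch fits a straight line to a handful of invented data points by ordinary least squares. It is a deliberately trivial illustration of the ML view described above, not a depiction of any fielded system:

```python
# Learning as curve-fitting: fit y = a*x + b to example data by
# ordinary least squares, then "predict" an unseen point.
# The data are invented (roughly y = 2x + 1 plus noise).
xs = [0.0, 1.0, 2.0, 3.0, 4.0]
ys = [1.1, 2.9, 5.2, 6.8, 9.1]

n = len(xs)
mean_x = sum(xs) / n
mean_y = sum(ys) / n

# Closed-form least-squares slope and intercept.
slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) \
        / sum((x - mean_x) ** 2 for x in xs)
intercept = mean_y - slope * mean_x

print(f"fitted model: y = {slope:.2f}x + {intercept:.2f}")
print(f"prediction at x = 5: {slope * 5 + intercept:.2f}")
```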
In order to understand this divide, we must go way back to the beginning of the AI story. We need to appreciate and understand competing philosophies that frame the discussion today and that gave rise to the fields of AI and ML.6 While the philosophy is important, neither approach could be practically implemented without the theory and practice of computing, which was established in the twentieth century. Nevertheless, the philosophical differences dating back to antiquity illuminate the differences in the AI and ML fields.
THE PHILOSOPHY: EMPIRICISM VERSUS RATIONALISM
Our story begins in classical Greece with Plato and his pupil Aristotle, who,
respectively, founded the philosophies of rationalism and empiricism that
are key to everything that follows. The difference between the two is captured visually in the Italian Renaissance artist Raphael's famous fresco The School of Athens.7 Plato and Aristotle are centered; the former points upward to the abstract and the latter downward to the concrete in an allegory for the rationalism/empiricism distinction—a distinction that would come to define the difference between AI and ML.
The philosophy of empiricism dictates the dogma that all knowledge derives from experience. Empiricism is best summarized by the Peripatetic axiom first stated by Thomas Aquinas in the thirteenth century and further developed by seventeenth-century philosopher John Locke: nihil est in intellectu, quod non prius fuerit in sensu ("nothing is in the intellect which was not previously in the senses").8 It asserts that knowledge is acquired gradually through the formation of associations of external stimuli, and so the structure of knowledge "[consists] of links connecting [concepts formed from sense data] with various other related concepts, essentially giving a sense of where [a] concept is located in 'conceptual space' by saying what it most resembles" in form and/or matter and/or space and/or time.9
The philosophy of empiricism has evolved over time. In classical empiricism (as opposed to the empiricism of the seventeenth/eighteenth century), it was asserted that "the world is structured in a certain way and that the human mind is able to perceive this structure, ascending from particulars to species to genus to further generalization and thus attaining knowledge of universals from perception of particulars.… We must possess an innate capacity to attain developed states of knowledge, but these are," in Aristotle's words, "neither innate in a determinate form, nor developed from other states of knowledge, but from sense perception."10 This thesis would appear many times over the course of history, including the dominant theories of B. F. Skinner's twentieth-century behaviorist psychology and twenty-first-century ML.11
Photo 1-1. The School of Athens. In the original fresco, Plato and Aristotle are robed in red and blue, respectively. Their philosophical differences underpin the differences today between artificial intelligence and machine learning.
Located at Apostolic Palace, The Vatican. Photo by Patrick J. Bayens, used with permission.
While the adherents of empiricism vary in their beliefs, the common core tenet is that knowledge comes from drawing linkages between experiences. Therefore, a computational system (for example, a person or artifact) with the ability to make these connections can gain knowledge by being provided with enough examples/experiences/data to make the necessary connections.
Unlike the empiricists, rationalists argue that knowledge has a component that is fundamentally innate. Theories of this innate component have changed over time. Plato proposed that we have an innate knowledge of "Forms" (mathematical objects, Goodness, Beauty, etc.). During the Enlightenment, René Descartes argued that knowledge of the self was innate, while G. W. Leibniz thought mathematical logic was. Today, Noam Chomsky has espoused a rationalist philosophy in which knowledge of language is genetically endowed.
While ideas of precisely what type of knowledge is innate will differ, the rationalist theory is that our experiences are determined, codified, and understood using these innate components—in modern terms, "the extraction from experience of symbolic representations, which are carried forward in time by a symbolic memory mechanism, until such time as they are needed in a behavior-determining computation."12 The extraction of information from experience necessitates innate modules—computational machinery of information extraction, representation, and manipulation—specific to the domain of information to be extracted. It logically follows that knowledge acquisition is possible if, and only if, the mind is genetically endowed with representations of possible messages communicable by the environment and abductive (not inductive) algorithms. In inductive reasoning, a specific observation leads to a general conclusion (every swan observed so far is white, so all swans must be white). In abductive reasoning, incomplete information leads to a prediction, an inference to the best available explanation (the deck is wet, so it probably rained overnight). (In deductive reasoning, a general rule leads to specific conclusions: all men are mortal, so Socrates is mortal.)
Thus, rationalism "shifts the burden of explanation from the structure of the world to the structure of the mind." In the words of seventeenth-century rationalist philosopher Ralph Cudworth, "What we can know is determined by 'the modes of conception in the understanding' …; what we do know, then, or what we come to believe, depends on the specific experiences that evoke in us some part of the cognitive system that is latent in the mind." As Cudworth explains, "The mind has an 'innate cognoscitive power' that provides the principles and conceptions that constitute our knowledge, when provoked by sense to do so."13
The key difference between rationalism and empiricism, and the cause of the divide, is whether knowledge has a substantial innate component. This debate would persist for centuries. Locke, the empiricist, would quote Aquinas, saying that "there is nothing in the intellect that was not first in the senses," to which seventeenth-century German mathematician Gottfried Leibniz, a rationalist, would reply, decisively, "except the intellect itself." The conflict between rationalism and empiricism persists to this day. It would take another invention, artificial computers, to turn differences in philosophy into differences in methods.
THE MEDIUM OF AI: THE THEORY AND PRACTICE OF COMPUTING
The most vehement supporters of AI and ML agree on very few things. However, both groups rely on the same underlying theory of computation and the associated hardware. Indeed, only after the introduction of the theory and hardware needed for computation could either field really begin to make strides.
One of the first, and most critical, questions in computing is determining which problems a computer can solve and which it cannot. The eminent German mathematician David Hilbert posed a variant of this problem in 1928. His "decision problem" (Entscheidungsproblem) was to formulate a procedure (Entscheidungsverfahren) that in a finite number of steps would decide the validity of any given logical expression. More generally, from a logical formalism of finitely specifiable axioms and theories, such a procedure would decide—compute, hence explain—potentially infinite nonarbitrary sets of theorems and data in systems of the formal and natural sciences. Such a formalization would approach realization of "Leibniz's dream."14 Leibniz, one of the greatest mathematicians and philosophers in history, "dreamt of an encyclopedic compilation, of a universal artificial mathematical language in which each facet of knowledge could be expressed, of calculational rules which would