We’re in trouble.
All of us.
Black, white, men, women, all political parties, all factions, we’re all in this.
What kind of trouble?
Life-threatening computer trouble.
Hold on there, you say. My PC isn’t waving a knife around. What’s got you so worked up?
It’s a confluence of things, really.
Back in 1991, Hurricane Grace had pretty much petered out and was getting ready to expire in the North Atlantic when it happened to bump into – and merge with – a storm system blowing off the Canadian Maritimes. When the two systems combined, they produced what would eventually become “The Perfect Storm.” This storm system was so violent that it generated waves over seventy feet high (their true height could not be measured, as the sensors in place didn’t register that high), obliterated the unsuspecting fishing boat Andrea Gail, and inspired both a non-fiction book and a fictionalized film.
So when I title my article The Perfect (Digital) Storm, you can get some idea that I am not kidding around here. This stuff is serious.
Here we go. Buckle up.
There are a half-dozen major technologies and non-technical trends in place and in use today, all dangerously close to being combined. When they are, there won’t be a single person on the planet who is safe. Here are a few of the biggies:
Behavioral Prediction Software
Have you ever been thinking about buying something, like perhaps a lawn mower or a new kitchen appliance? Where you hadn’t told anyone you were even thinking about it, but you had decided “Okay, I need this,” and you were probably going to start shopping for one soon?
And then, that same day, Facebook or eBay or your news site starts spamming advertisements in their banner or skyscraper zones that have exactly that thing advertised for you?
That’s the result of behavioral prediction software.
All those privacy settings in your browser or computer? Those are directly related to how marketing firms identify who is in the buying mood for what. The same firms also predict your “journey” – the path you will take from one page to another, one site to another.
You experience behavioral prediction every time you visit an eCommerce site of a major retailer. Ever been browsing and suddenly you have a popup saying “save 10% when you buy today”? That’s because that site’s prediction software has observed your path, and realized you’re only a step or two away from abandoning the site without buying something.
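The “abandonment popup” described above can be sketched in a few lines. This is a minimal illustration with invented signal names and weights – real retail systems use trained models over vastly more data – but the principle is the same: score the visitor’s path, and intervene when the score crosses a threshold.

```python
# Toy exit-intent scoring. All signal names, weights, and thresholds
# here are hypothetical; production systems learn these from data.

def abandonment_score(session):
    """Return a 0..1 score: likelihood the visitor leaves without buying."""
    score = 0.0
    if session["seconds_idle"] > 60:          # lingering without acting
        score += 0.3
    if session["cart_items"] > 0 and session["pages_since_cart"] > 3:
        score += 0.4                          # wandering after adding to cart
    if session["visits_without_purchase"] > 2:
        score += 0.2                          # a habitual window-shopper
    return min(score, 1.0)

session = {"seconds_idle": 90, "cart_items": 1,
           "pages_since_cart": 5, "visits_without_purchase": 3}
if abandonment_score(session) > 0.7:
    print("show the 'save 10% when you buy today' popup")
```

The site never asks whether you intend to leave; it infers it from your path and acts first.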
This kind of software is also evident when you try to purchase an air fare online: have you noticed that it almost never costs the same amount when you view the same flight from different sites, different physical locations, different times of day or week? There is a “Big Data” AI reviewing those factors and deciding what is the optimal price to put on those fares to garner the most overall money for the seats on that flight.
A British firm, Cambridge Analytica, used behavioral prediction software to influence the results of the 2016 US Presidential election, as well as British Parliamentary elections and the Brexit referendum. By pushing “quizzes” out across Facebook and other social media platforms, it built profiles of individuals, then identified what kind of messaging to put in front of them to induce certain behaviors – staying at home rather than voting, getting out to vote, and so on. Have you seen one of those “I scored a jillion points on this free IQ test” things shared on a friend’s timeline? That’s one of those tools. “Can this picture of a deaf and blind puppy get a million likes?” is another way they get to you – using emotional blackmail to get hold of your personal info and that of those connected to you.
Behavior prediction is also being used by the Chinese government in major cities (such as Beijing) to identify citizens and other persons it considers “anti-social” and dangerous to the State (which, in this case, means dangerous to its dictator, Xi Jinping). A person’s “social score” is built from his or her actions and the values assigned to those actions in the management system – and those behaviors are fed into a prediction model to determine who might become “problematic.”
Personal Recognition Software
You’ve seen it in movies (facial recognition in 2008’s “The Dark Knight”, for example). You may have even used it (biometric identification of your iris or fingerprint). This is a way of using your body in the same way police can use your fingerprint – certain unique combinations of features (the height of your cheek relative to the position of your right eye, for example, or the width of your chin or nose) add up to a unique profile. Even identical twins don’t produce the same value.
As I mentioned, it works in the same fashion as a fingerprint. When forensics experts review a fingerprint, they look for a unique combination of whorls, lines, connections, etc. to build a “profile.” In digital terms, this equates to building up a unique value (a combination of numbers and letters) which is tied to your identity. Your fingerprint gets translated into a bunch of 1’s and 0’s, a combination that can only be re-created by reading your finger and applying the same procedure.
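The translation of measured features into a repeatable digital value can be sketched like this. The feature names are invented for illustration, and a real biometric system uses fuzzy template matching rather than an exact hash (sensor readings vary slightly between scans), but this shows the core idea: the same measurements, processed the same way, always yield the same value.

```python
import hashlib

# Toy reduction of biometric features to a repeatable digital value.
# Feature names are hypothetical; real systems tolerate sensor noise.

def biometric_digest(features: dict) -> str:
    # Serialize the features in a fixed (sorted) order so the same
    # finger always produces the same string of bits.
    canonical = ",".join(f"{k}={features[k]}" for k in sorted(features))
    return hashlib.sha256(canonical.encode()).hexdigest()

alice = {"whorls": 3, "ridge_endings": 17, "bifurcations": 9}
print(biometric_digest(alice)[:16])  # same finger, same value, every time
```

Change any single feature and the resulting value changes completely – which is exactly what makes the digest usable as an identity.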
The profile can be built from a wide variety of personal features: the top of your head, how you walk, the motion of your arms – many different things can be used to identify an individual.
The EU has recognized the gathering of this sort of information as a potential infringement of privacy rights and, led by Germany, instituted the GDPR regulation for securing personally-identifiable information. It seems they learned a bit about what happens when a government spends too much time paying attention to the identity and behavior of its citizens.
Basically, a camera paired with the correct software can identify you walking down the street, much as you can identify people you know from a long way away – only rather more quickly than you can. It builds a profile by watching you, or a part of you, and compares that profile’s “score” against its database of known profiles, producing a match value that expresses varying degrees of certainty. Most often we hear that as a percentage – “85% match,” “99% match,” and so on.
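Those percentage matches come from comparing feature vectors. Here is a minimal sketch using cosine similarity – one common way to score how closely an observed vector resembles stored ones. The feature values and names are invented; real systems use learned embeddings with hundreds of dimensions.

```python
import math

# Toy profile matching: compare an observed feature vector against a
# database of known profiles and report the best match as a percentage.
# All names and numbers here are invented for illustration.

def cosine_match(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

known = {"alice": [0.61, 0.33, 0.80], "bob": [0.20, 0.90, 0.12]}
observed = [0.60, 0.35, 0.78]  # what the camera just measured

best = max(known, key=lambda name: cosine_match(known[name], observed))
print(f"{best}: {cosine_match(known[best], observed):.0%} match")
```

The system doesn’t declare identity outright; it ranks every known profile and reports its confidence in the closest one.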
Artificial Intelligence
AI is a much-abused term these days. Most laypersons hear it and think “Terminator” or “robot.” But let’s first separate it from what it isn’t by introducing a contrasting term: “general intelligence.” General intelligence, in this context, is self-aware and self-motivating intelligence – a true ‘mind.’ You are a form of general intelligence, as is your dog or cat, birds, even (to a much lesser degree) flatworms. AI, as we know it so far, is very specifically tuned intelligence that is neither self-aware (it doesn’t recognize itself as an individual separate from other things) nor self-motivating (it does not decide one day “I am going to spend time making a sandwich” without someone else feeding it the idea).
AI is a combination of technologies that enables digital machinery to make decisions in service of its assigned purpose without human intervention. For example, paperclip-manufacturing software might observe seasonal demand and anticipate that it needs to order additional raw materials in July, in advance of a busy August. It does not identify itself as an actor, and neither does it truly “understand” its job. It merely serves as an active model of what takes place in the manufacture of paperclips.
People often confuse “AI” with machine learning or decision-support software. Machine learning and decision support are often included in an AI, but by themselves they are not AI; rather, an AI can use the output of both in its own decision-making process.
AI has been put to military use as well: target identification, path-finding, and a number of other tasks.
Often, certain decisions are pushed into an AI’s domain – when to stop a car in heavy traffic (self-driving vehicles), for example. In some cases medical diagnostic software is proposing diagnoses and treatment recommendations – these are not yet in production, but are being tested alongside live doctors for potential deployment. Autopilot software has been deciding how to fly planes for years. The stock market is now driven largely not by the perceived value of a company’s future, but by software programs attempting to estimate whether a company’s stock value will rise or fall in a given period. The 2008 financial crisis dealt a much-amplified blow to market values because of such software – falling prices led the software to judge that values would continue to fall, which induced selling, which accelerated the fall, inducing further selling, and so on.
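That 2008-style feedback loop is easy to demonstrate with a toy simulation. The thresholds and percentages below are arbitrary – this is an illustration of the cascade mechanism, not a market model – but it shows how identical sell-on-drop rules amplify a small initial decline.

```python
# Toy feedback-loop simulation: each round, every bot whose panic
# threshold is crossed sells, and every sale pushes the price down
# another 1%. All numbers are arbitrary illustrations.

def simulate_cascade(start, reference, thresholds, rounds):
    price = start
    history = [price]
    for _ in range(rounds):
        drop = (reference - price) / reference   # decline from pre-crash level
        sellers = sum(1 for t in thresholds if drop >= t)
        price *= 1 - 0.01 * sellers              # each sale depresses the price
        history.append(price)
    return history

# Four bots with different panic triggers react to a 2% initial dip.
prices = simulate_cascade(start=98.0, reference=100.0,
                          thresholds=[0.01, 0.02, 0.05, 0.10], rounds=10)
print(f"{prices[0]:.2f} -> {prices[-1]:.2f}")
```

Note that nothing about the underlying company changed in this run: the decline is generated entirely by the software watching the other software.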
These decision-making systems are slowly inching toward a form of general intelligence. How will we know when one of them genuinely becomes one, though?
The answer is: we really don’t. We don’t know this any more than we have a consistent definition of information versus data.
When you consider your own mind, taking a step back and considering how thoughts pass through it, you may find a surprising amount of randomness in there. Among all the various sensory inputs that are being registered and ignored or acted upon, your mind itself tends to bubble up random bits of memory, interconnected bits, even invented concepts. Your brain has been trained over many years to filter out elements which are irrelevant to the current and strongest path, which is generally the conversation you are having, the film you are watching, or the article you are writing.
An artificial general intelligence will probably self-congeal out of a similar situation, where thousands of varied inputs and internal processes fight for the attention of a central core of thought processes. That is probably a topic for a dozen other papers or articles, and as cool as it is, it is beyond the scope of this piece.
But AI has a direct impact on the topic at hand – specifically, via lazy decision makers relying on what they view as a “magic carpet ride” to shift the responsibility for decisions onto an automatic system.
A benefit of such a system is that should something go wrong, they can blame the system, or the engineers who built it. Likewise, decision makers who prize speed above all will want a system onto which they can push important, time-sensitive decisions. The system can decide faster than the observe → relay information → human decides → relay decision → act cycle can produce an answer. This is a legitimate concern in military decision making, when a valuable target shows itself for only a few moments. But let’s not be so antiseptic about it – we’re talking about people. Enemies, yes, but still people.
And that means military command staffs wanting to push the kill decision into a drone, to shrink the reaction time between identifying a potential target and acting on it.
Autonomous Navigation
Aircraft, automobiles, buses, trains, trucks, drones – a wide variety of self-moving machines are being enabled, via GPS and navigation software, to navigate and maneuver to desired locations without human assistance. Self-driving cars are expected to be commonly available within the next few years. Aircraft are already capable of a complete flight, takeoff to landing, without a human hand on the controls. Remote drones can perform a variety of missions autonomously, and can loiter over a target area for extended periods. They are, in effect, ticking time bombs.
If the last four years have shown us anything, it is that about three out of ten people have a strong desire to control the other seven – possibly to kill two or three among those seven – and that they will go out of their way to exert that power. They also tend to be simplistic people, prone to violent tendencies. These people form the “power core” behind dictators worldwide.
And as was demonstrated last century, when they seize control of the mechanisms of power in a nation, mortal disaster follows for hundreds of thousands, if not millions of people. People die.
In the USA, the Republican party has demonstrated its disregard for the traditions of its country, a motivation solely for raw power, and a willingness to repeat the crimes of the last century by opening concentration camps on the country’s Southern border to isolate “undesirables.” By the letter of the law, genocide has already been committed, though mass murder has not yet taken place. Police kill black people indiscriminately.
Facebook’s CEO, Mark Zuckerberg, seems to show little concern and is willing to let his platform be used in support of this. Twitter as well.
In Russia, Putin collects power to himself and he refuses to let it slip away.
In Hungary, Poland, and Turkey, traditional democracies are falling and becoming dictatorships.
The Saudis have demonstrated their murderous nature against their own and even against American residents.
These are people who experience little remorse at committing crimes against people not of their political / religious / tribal identity. They don’t view these as crimes. Their leadership will attempt to enact laws that enable such acts and remove their criminality.
China is putting millions of Muslims into concentration camps as I write this, with the endorsement of President Trump.
Trump himself has demonstrated himself as incapable of either empathy or compassion.
Bringing Them All Together
What I am describing here is the convergence of:
- Prediction of behavior
- Recognition of individuals
- Machine decision making
- Autonomous navigation
- The will to project violence
And when you put them in the same room, you have a situation where one can construct and project military-grade power with very few humans having a hand on the tiller.
Imagine, for example, the royal house of Saud in Saudi Arabia – their biggest fear is an uprising among the common people leading to an overthrow of their family. It is in their interest to kill any potential leadership of such an uprising.
Now imagine a behavioral prediction system which takes the histories and behavioral cues of past and current “revolutionaries” to build profiles of individuals who can potentially develop into future revolutionaries.
And imagine them purchasing drones – from the USA or other technically capable countries – capable of self-navigation over the areas where such individuals live, armed with long-range missiles or sniper weapons, and equipped with recognition software that can identify those individuals from range.
And those drones are equipped and enabled to kill those people.
Such systems are already in use today. The US military uses drones to track, identify, and kill individuals labeled as terrorists or active military agents in combat zones. The US has not yet pushed the “kill decision” into an automated drone, relying instead on the system giving human decision makers a percentage likelihood of identification.
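The human-in-the-loop safeguard described above amounts to a simple structural rule: the software may report a match likelihood, but only a person may authorize action. This sketch uses invented names and thresholds, but it shows where the line currently sits – and how easy it would be, technically, to erase it.

```python
# Sketch of a human-in-the-loop gate. The system recommends; it never
# decides. All identifiers and thresholds are hypothetical.

REVIEW_THRESHOLD = 0.85  # below this, the match isn't even surfaced

def triage(target_id, match_likelihood, human_decision):
    """human_decision is a callable -- a person, never the software."""
    if match_likelihood < REVIEW_THRESHOLD:
        return "no action"
    # The software stops here and hands the likelihood to a human.
    return human_decision(target_id, match_likelihood)

def cautious_operator(target_id, likelihood):
    # A person weighing the evidence -- the step some would automate away.
    return "engage" if likelihood > 0.99 else "continue observing"

print(triage("target-7", 0.92, cautious_operator))
```

Replacing `cautious_operator` with an automated policy is a one-line change. That is the entire distance between today’s practice and a machine-made kill decision.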
And these drones are nearly invisible from the ground. Operating at high altitude with quiet motors, they are almost impossible to detect.
Whether you like the prospect or not, your behavior is available for an ambitious (or, for those of you active on social media platforms, routine) program to assemble. It is exceedingly likely that your face has already been entered into the database of Clearview AI, a company specializing in facial recognition.
Whether you like it or not, even the USA is taking actions based on “loyalty” to the current administration – purging those perceived as disloyal and hiring yes-men into positions of control. Courts are being stacked with unqualified but ideologically compatible persons. Police are using facial recognition on protesters to sieve through their past records, looking for outstanding warrants or infractions that can supply a plausible excuse for arrest.
And legislation is being fronted at every level to enable discrimination and even violence against “undesirables” – mostly it seems ‘Christian’ causes are being put forward as justification to discriminate against LGBTQ persons. The USA already has a lengthy history of racism against non-whites, as well.
The current administration has demonstrated it is willing to commit crimes against humanity to achieve its goals, and has gone out of its way to praise and offer aid to white supremacists, the Nazi party, and other dictatorial groups.
The tools are in their hands, and they have demonstrated that they are rapidly shedding the moral restrictions that hold people back from using their power to abuse and kill people to further entrench or enrich themselves. The Republican-controlled Senate has demonstrated that it is fully in line with the administration, and the “conservative”-controlled Supreme Court has telegraphed its willingness to overlook crimes committed by them.
Time is proverbially – and literally – short.
And this warning takes into account intentional use of these technologies. I haven’t even begun to explore accidental use of them.
One demonstration of an infantry robot, armed with a large-capacity magazine on an assault-rifle body, nearly killed a stadium full of observers – and would have, had it not been tackled by an observant soldier supervising the robot. Its target-identification package had suffered a glitch that disabled its ability to distinguish friend from foe, so all were foe.
Drones have often been indiscriminate in their deployment of high-explosive missiles. They have a long record of high collateral damage.
Other, non-military systems have suffered catastrophic failures for the simplest of reasons. NASA’s Mars Climate Orbiter, a mission costing hundreds of millions of dollars, was destroyed because one engineering team supplied data in Imperial units where the navigation software expected metric. Denver International Airport’s opening was delayed for months – at millions of dollars lost per day – because the developers of its baggage-handling software refused to accept that their chosen tools were simply incapable of handling the real-world traffic of the airport.
Does anyone seriously believe that software designed for military use would be immune to such problems? And when armed with lethal machinery under software decision making, that an error would be without impact?
What Can Be Done?
We have to review our place in this grim picture.
As individuals, what can we do against this?
First, vote. Stand in line for hours if you have to, but vote. Even if it feels pointless, vote. Failing to vote is not a protest, it is a surrender.
Are you an educator? Do you know one? Insist on ethics courses for all students, or at least as a requirement for computer science degrees at your university, college, or other institution of higher education. Demand that your institution be transparent about its support of government-sponsored research programs, and oppose those programs you know to be wrong.
Spend time discussing these problems with those around you. Make the issue known, and advocate against putting wartime decision making in the hands of machines.
Oppose dictatorial policies and parties.
Do you have spare time? Run for local office – state legislature, school board, anything. Get involved. If you are not ambitious for such office, that may make you the ideal candidate: it is our responsibility as ethical persons to remove and replace those in office who are amoral or unethical.
Get in touch with your national representatives and make your voice heard. Ensure that democratic values are supported.
Advocate for a human military. Fighting machines detract from the fear of war that should be present for everyone involved. Every new technology of the past two centuries that was lauded as “too frightening to make war viable any longer” has had the opposite effect: it has made war easier to wage, and it has fostered a casual attitude toward what should be the gravest action a nation can take.
An army of robots makes waging war easier and cheaper, and that army can be easily turned against its own people, should a party in power decide it does not want to relinquish that power.
Most importantly: If you are a US citizen, abandon and oppose the Republican party. This will be particularly difficult for persons who consider party affiliation to be something deserving of blind loyalty. This group has demonstrated itself to be a front – it has no values in common with American democracy, and deserves no further voice in the democratic process. Join the Lincoln Project if you are a Republican.
I do not recommend joining the Democratic party, though I do support many of its members. No, I simply recommend you oppose the Republican party in whole. It no longer remotely resembles the party of the Eisenhower era – it is characterized by three faces: Donald Trump, Roy Moore, and Mitch McConnell. All are anti-democratic, all are unabashed racists, and the first two have lives irrevocably stained by the sexual abuse of partners and children. And the Republican party was just fine with all of that.
The party pushed back on their respective candidacies right up until it was clear they would be the nominees and would not drop out – at which point the party threw in behind them. In short, the Republican party was more interested in ensuring a win than in keeping a child molester and a rapist out of the most powerful seats in the land.
In short, supporters of the Republican party are the sort of people who would exercise the technological power this century is on the verge of granting us to eliminate those who would oppose them. The Senate’s handling of the Trump impeachment under McConnell should resonate as a clarion call: no abuse of power is unacceptable to them, so long as it is performed in service of their party’s continued hold on power.
And the slide into dictatorial political machinery won’t stop unless we stop it. We’ve already seen the Republicans throw us into war – killing thousands of American soldiers and hundreds of thousands, if not millions, of Iraqis – for the sole purpose of winning an election. The killing of persons, citizens or no, on our own ground is coming if they are not stopped.
Going forward, these technologies are not going to go away. We are going to need a renewal of the Geneva Conventions, and a whole new set of laws and oversight on how to utilize information technology when mated with potentially lethal combat machinery and tactics. We will need politicians with good ethics (and yes, there are some) who can craft appropriate measures to govern their use. We cannot rely on the Republicans to be so forward-thinking.
Boycott businesses that support Republican candidates or the party, and make it known to them the reasons you do so. Stop giving to churches that obviously favor them. Make it expensive for a business or church to throw its support behind such candidates.
Technology, and time, always march forward. What we do with the tools that time and tech hand us is what will define our future. We can no more ban them than we can stop the sun from rising and setting. So it is up to us as a people – of a nation, of the world – to demand responsible use of these new technologies, and to prevent their use in service of an industry that has already proven itself too costly in human lives.
The solution to this problem is one of ethics, and politics. And the time we have to address this issue before it is upon us physically is running out.