AI: How the Enlightenment ends

In the Wisconsin city of La Crosse in 2013, Eric Loomis, then in his early 30s, pleaded guilty to eluding police in a stolen car (while denying any role in a drive-by shooting involving the vehicle). The judge sentenced Loomis, who had a criminal record, to six years in prison, at the longer end of the possible range.

So? Well, the judge based Loomis’s prison term partly on the recommendations of an artificial-intelligence, or AI, program. Under the ‘Compas classification’, secret algorithms compare information about the defendant against a criminal database to assess the risk that person poses. In Loomis’s case, the prosecutor told the trial judge that the Compas report showed “a high risk of violence, high risk of recidivism”.

Loomis appealed against the length of the sentence, saying he had no opportunity to evaluate the algorithms and that their assessment, based partly on his gender, violated his ‘due process rights’. At a court hearing, one expert witness said the Compas program carried a “tremendous risk of overestimating an individual’s risk”.

The court’s use of AI to sentence Loomis attracted much criticism, including from within the tech community, because it raised questions about the role that ‘big data’ and AI are playing in everyday decisions. Expect more such controversies for society to solve, because AI’s rapid deployment is creating many ethical issues – a gentler way of saying AI is capable of ill as well as good.

AI is certainly causing concern – Henry Kissinger cautions that it’s “how the Enlightenment ends” while the late Stephen Hawking warned “it could spell the end of the human race”. Among potential dangers, AI might be used by despots who want to enforce censorship, micro-target propaganda and impose society-wide controls on citizens. Many think the disinformation, conspiracy theories and echo chambers that AI-driven recommendation engines can promote on social media deepen social tensions. AI can be used in warfare. The technology has the potential to make swathes of workers redundant. Used inappropriately, AI can act in discriminatory or invasive ways. Many worry about the privacy violations surrounding the data used to train and improve AI algorithms. (Hawking was warning about ‘superhuman AI’, which, as it remains an aspiration, is not a pressing ethical concern.)

Many of the concerns about AI are tied to the nature of the algorithms. People worry that society is handing over decision-making to secret software code – essentially sets of instructions – that has no understanding of the context, meaning or consequences of what it does. They fret that algorithms are being entrusted with tasks they are incapable of fulfilling and that they magnify the biases and flaws of their human coders and of the data fed into them. People are concerned about how algorithms can manipulate behaviour and promote digital addiction. They see that algorithms can be gamed by attention seekers, from anti-vaxxers to populist politicians and extremists.

People are tackling some of the ethical issues involved. Researchers have withheld AI models for fear of misuse. Governments, notably the EU, have acted to protect privacy. The EU is developing an AI code of ethics. Companies are creating principles around AI use – Google offers seven. Businesses are setting up ethics boards to monitor AI’s deployment. Platforms are using AI to inhibit other algorithms from spreading viral extremist content. Data gatherers are better protecting user information. US tech employees are rebelling against AI’s use in warfare.

But not enough might be happening to limit AI’s possible harm. People seem blasé about how their online data trails are used to sway their spending and thinking. Businesses appear far more focused on generating positive returns from AI than on overseeing and mitigating its negative side effects. Autocratic states such as China are increasingly using AI to tighten their control over media and communication. When ethical issues are raised, valid rebuttals can result in inaction. Authorities with genuine concerns appear hobbled by the public’s fondness for the cyberworld.

Be aware that AI is being deployed at a faster rate than ethical issues can be properly identified and resolved. 

The moral concerns encircling AI are likely, in time, to become political issues big enough to warrant much public scrutiny and government intervention.

To be sure, many of the ethical issues raised are broader than AI. Some of tech’s biggest ethical issues, such as gene-edited babies, lie outside AI. Protests against tech’s use in warfare reach beyond AI too, as do the problems with data gathering. Discussions about ethics could prove divisive and prolonged. Many of the ethical issues swamping AI are everyday ones as old as humanity – AI is just a new setting for them.

But that fresh setting looms so large that AI is bound to spark controversies, especially since AI’s political weakness is that it’s easy to demonise. Expect a rigorous human overlay on AI in due course. The challenge for authorities will be to limit AI’s possible harm without suppressing its advantages.

AI angst

Amid concern that China is ahead in key AI areas, the Pentagon in June 2018 said it would create a Joint Artificial Intelligence Center so the military could work with “industry, academia” and others to pursue AI applications that “will change society and ultimately, the character of war”.

Fat chance, though, of full industry cooperation. The announcement came just days after Google employees forced the company to quit a Pentagon project that applies AI to drone warfare.

Google’s decision triggered much criticism, including from Amazon. Why should Silicon Valley employees hold a veto over the US military’s ability to protect the nation’s interests as defined by elected officials? One possible answer is the tale of how Dow Chemical (which later merged with DuPont) suffered lasting reputational and financial damage for producing napalm for US use in Vietnam in the 1960s. Nonetheless, a key ethical issue raised is how much power companies should hold in a democratic nation-state that, by definition, has borders to defend.

While the tech industry is split on aiding the military, it is united in fighting privacy exemptions for intelligence agencies, arguing that ‘back doors’ and ‘master keys’ provide openings for hackers. A crucial ethical issue here is how much privacy citizens should be forced to give up to allow governments to provide a safe society, as well as what role, if any, private companies should have in setting these limits.

The tech industry is unified again in frowning on regimes that use AI to enshrine their power. Even so, western facial-recognition companies can’t escape the ethical issue that their advances might entrench autocrats to such an extent that despots could pose, in George Soros’s words, a “mortal threat” to open societies.

Another society-level issue causing ethical headaches is that AI might destroy so many jobs that a “new serfdom beckons”. Society-wide solutions proffered include guaranteed income or employment for everyone. But critics say a ‘universal basic income’ is too expensive, discourages work, fails to generate the sense of purpose that people crave, and turns much of the population into a welfare community. They say a ‘federal jobs guarantee’ is too costly, impractical and does little to reduce inequality.

The inequality and other ethical issues tied to massive AI-driven job losses would become prominent if such redundancies were to coincide with a downturn.

So much data

When people search the web or use digital platforms, their extensive digital trails flow into algorithms that optimise those services to capture their attention. Many people are unaware of the extent of the data collection and how it is used to profile and target them. Scandals surrounding data have triggered much ethical discussion about privacy and data rights.

Pushes are underway to give users more ownership and control over their data. A UK inquiry in 2019, for instance, called for a code of conduct on data gathering backed by deterrent penalties, and for users to be able to move their data to other parties and make it available to competitors.

While data ownership is contentious, protecting privacy appears a straightforward issue in ethical terms. A consensus has formed that companies should seek user consent on fair terms and have no right to collect data covertly. One example of underhand data gathering is when platforms secretly track users after they leave their sites. Another is the way facial-recognition companies ‘scrape’ photos from the internet – that is, use any published photo – to hone the technology.

These practices risk regulatory action. But what of the practice whereby AI is used to scour social media to assess babysitters, or by insurance companies to set premiums for policyholders?

New York State in January 2019 allowed life insurers to use predictive models to comb social media to see if policyholders take part in unhealthy or dangerous practices or have faked claims. The ethical issues include the ability of the algorithms to deliver a fair outcome, the transparency of the process and the right of appeal. The only safeguards New York State enshrined were that the information gathered must be actuarially sound, relevant and not “unfairly discriminatory”.

Flawed codes

The algorithms that power AI are reams of code that can process data efficiently to assist in making parole, medical, military, work-dismissal, university-admission and many other decisions. These instructions can perform vast analysis within these narrow functions at speeds beyond human ability. They can recognise patterns in the data with much greater granularity and nuance than humans can. They often generate surprising and counter-intuitive conclusions that few humans could arrive at.

But algorithms lack many human qualities and smarts. They do not understand the cause and effect of their decisions. They lack common sense, emotion, imagination and any sense of humour or irony. They have no free will. They can have inbuilt biases, generally delivered by the data that drives them. They can be gamed and outsmarted. Many aren’t that sophisticated. The ethical issue is: how can society justify handing over vital decision-making to AI when it falls well short of human ability in so many ways?

The ethical cloud over algorithms is highlighted when they are set tasks beyond their design limits. Platforms and others, for example, augment human oversight with ‘content moderation’ algorithms that scour for ‘hate speech’, bullying, harassment and worse (while also relying on alerts from the public, which then help train the algorithms). The algorithms keep much out. But enough escapes them. They have often failed to remove all copies of an offensive video because people can alter the footage enough to outwit filters that only look for earlier versions. As Facebook concedes: “The more we do to detect and remove terrorist content, the more shrewd these groups become.”
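To see why altered footage slips through, consider a minimal sketch of exact fingerprint matching. This is a deliberately simplified illustration, not any platform’s actual system (real moderation pipelines use more tolerant ‘perceptual’ hashing), and the function names and data are invented: once a banned file is re-encoded or cropped, its bytes change, so an exact-match blocklist no longer recognises it.

```python
import hashlib

# Illustrative sketch only, not any platform's real system: block uploads whose
# exact byte-for-byte fingerprint matches a previously removed video.

def fingerprint(content: bytes) -> str:
    """Return an exact fingerprint of the file's bytes."""
    return hashlib.sha256(content).hexdigest()

# Hypothetical blocklist built from the original offending upload.
banned_fingerprints = {fingerprint(b"original offending video bytes")}

def should_block(upload: bytes) -> bool:
    """Block only uploads that are byte-identical to a known banned file."""
    return fingerprint(upload) in banned_fingerprints

print(should_block(b"original offending video bytes"))              # True: exact copy is caught
print(should_block(b"original offending video bytes, re-encoded"))  # False: a small edit evades the filter
```

Production systems use perceptual hashes that tolerate some alteration, but uploaders keep probing for edits that push a clip outside whatever tolerance the matcher allows – the cat-and-mouse dynamic Facebook describes.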

A wider ethical issue is whether AI-dependent platforms should be responsible for the content shared and viewed on their sites, whereas now they bear no legal responsibility provided they take offending content down once notified. Another is whether private companies should be monitoring the ‘cyber public square’ at all – they are acting as censors and judges of what’s appropriate, something Mark Zuckerberg recently conceded Facebook shouldn’t be doing. And what is the responsibility of users in all this? A core problem with content on social media is that enough people are prone to interact with the most sensational and vilest material. In doing so, they prompt the AI set up to boost their engagement to feed them more of the same.
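That feedback loop is easy to sketch. The toy example below uses invented data and a deliberately crude ranking rule – it is not any platform’s real code – but it shows how ranking purely by observed engagement turns a small behavioural tilt towards sensational material into a durable ranking advantage.

```python
# Toy sketch of an engagement-driven feed (hypothetical data, not a real platform's system).

posts = {
    "measured news report": 0,      # observed clicks
    "sensational conspiracy": 0,
}

def rank(feed: dict) -> list:
    """Order posts by observed engagement, highest first."""
    return sorted(feed, key=feed.get, reverse=True)

# Simulate an audience that clicks the sensational item slightly more often.
for _ in range(100):
    posts["sensational conspiracy"] += 2
    posts["measured news report"] += 1

print(rank(posts))
# ['sensational conspiracy', 'measured news report'] - the item users engage with most
# is promoted first, which in turn earns it still more engagement.
```

Real ranking systems weigh many more signals, but the underlying incentive – show people more of whatever they engage with – is the one described above.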

Away from terrorism and violent crime, perhaps the most consequential example of how algorithms can be exploited is Russia’s manipulation of platforms during the US election in 2016. Since then, Facebook, working with intelligence agencies around the world, has made the ‘feed’ algorithms that rank and personalise content better at policing political manipulation.

It is ironic that the most effective solution to AI’s failures to protect democracy appears to be more AI.

Another ethical issue to resolve with AI is whether to let algorithms operate in situations with infinite possibilities (such as powering driverless cars on open roads) when, for now, AI works best in defined conditions (such as translation, sophisticated board games or, in the case of driving, keeping a car within white lines on a highway). The death of a woman struck by a self-driving car while crossing a road at night in Arizona in 2018 highlighted how AI programs can prove fatal in uncontrolled situations. A central ethical issue here is whether the hope that autonomous vehicles might one day reduce road fatalities is worth the loss of life in the experimental stage. Another is who might be responsible when things go awry. Volvo in 2015 said it would be liable for all accidents involving its driverless cars.

Another prominent flaw is that algorithms promote the biases of their code writers and of the data that feeds them. Amazon acknowledged this limitation in 2018 when it stopped using algorithms to sort job applications because they were biased against hiring women. The problem here is that data, as a record of the past, feeds algorithms the prejudices of the past. While no one defends discrimination per se and code writers can attempt to overcome this flaw, the ethical issues require subjective solutions – witness the debates around whether discrimination has occurred, the use of minority quotas and the risk of ‘reverse discrimination’.
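A stripped-down sketch shows the mechanism. The data and model below are invented for illustration and bear no relation to Amazon’s actual tool: a screening model that learns only from past hiring outcomes simply reproduces whatever skew those outcomes contain.

```python
from collections import defaultdict

# Hypothetical historical hiring record: (group, hired). The record itself is skewed,
# so any model trained on it inherits the skew.
history = [("male", 1)] * 80 + [("male", 0)] * 20 + \
          [("female", 1)] * 30 + [("female", 0)] * 70

def train(records):
    """Learn the historical hire rate for each group - this is the 'model'."""
    hired, total = defaultdict(int), defaultdict(int)
    for group, outcome in records:
        hired[group] += outcome
        total[group] += 1
    return {group: hired[group] / total[group] for group in total}

model = train(history)

def score(group: str) -> float:
    """Score a new applicant using nothing but past outcomes for their group."""
    return model[group]

print(score("male"))    # 0.8 - the past favoured this group, so the model does too
print(score("female"))  # 0.3 - yesterday's prejudice becomes today's 'prediction'
```

Real recruiting models learn bias less directly, through proxies such as wording on CVs, which is why the skew is harder to detect and strip out – and why the debates over what a ‘fair’ outcome looks like become so subjective.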

Data with gender, racial and other biases, and the limits on the abilities of algorithms, are prompting calls for algorithms to be regulated. Companies could come under pressure to reveal their algorithms, as France is doing with those used by the government. The tech industry, however, resists such transparency, saying its formulae are intellectual property.

Such ethical issues around AI are prompting reassessments of the technology, as shown by talk of a second ‘AI winter’ (when research and deployment stall), a surge in expert warnings about its potential harm, and the number of recent books highlighting its flaws, such as Meredith Broussard’s Artificial Unintelligence.

While the Wisconsin Supreme Court rejected the Loomis appeal in 2016 and the US Supreme Court refused to hear the case in 2017, the ethical issues it raised will be among the many that surround AI even as its deployment brings advantages to society.

By Michael Collins, Investment Specialist
