The limits of AI and why it will not replace human knowledge workers — from the perspective of a practitioner

A. P. D. G. Everett
Apr 21, 2021


Earlier this year, I came across an article in The Atlantic from last October about how algorithmic bias is especially harmful to teens (https://www.theatlantic.com/family/archive/2020/10/algorithmic-bias-especially-dangerous-teens/616793/). I’ve spent some time thinking about it, and in particular reflecting on its last paragraph:
“Algorithms powerfully shape development; they are socializing an entire generation. And though United States governmental regulations currently fail to account for the age-based effects of algorithms, there is precedent for taking age into consideration when designing media policy: When the Federal Communications Commission began regulating commercials in kids’ TV in the 1990s, for example, the policies were based on well-documented cognitive and emotional differences between children and adults. Based on input from developmentalists, data scientists, and youth advocates, 21st-century policies around data privacy and algorithmic design could also be constructed with adolescents’ particular needs in mind. If we instead continue to downplay or ignore the ways that teens are vulnerable to algorithmic racism, the harms are likely to reverberate through generations to come.”

It goes without saying, although I’ll say it again: the issue of bias baked into artificial intelligence/machine learning (AI/ML) algorithms is an important topic in the world of tech. Some of the coverage on this topic:

A 2015 article in The Atlantic (https://www.theatlantic.com/business/archive/2015/09/discrimination-algorithms-disparate-impact/403969/), entitled When Discrimination Is Baked Into Algorithms, states:
“So how will the courts address algorithmic bias? From retail to real estate, from employment to criminal justice, the use of data mining, scoring software, and predictive analytics programs is proliferating at an exponential rate. Software that makes decisions based on data like a person’s zip code can reflect, or even amplify, the results of historical or institutional discrimination. “[A]n algorithm is only as good as the data it works with,” Solon Barocas and Andrew Selbst write in their article “Big Data’s Disparate Impact,” forthcoming in the California Law Review. “Even in situations where data miners are extremely careful, they can still affect discriminatory results with models that, quite unintentionally, pick out proxy variables for protected classes.” …There still exists “a large legal difference between whether there is explicit legal discrimination or implicit discrimination,” said Friedler, the computer science researcher. ‘My opinion is that, because more decisions are being made by algorithms, that these distinctions are being blurred.’”

A 2015 article in the New York Times (https://www.nytimes.com/2015/06/26/upshot/can-an-algorithm-hire-better-than-a-human.html) asked whether an AI algorithm can hire better than a human, contrasted with 2019 coverage in Business Insider warning that AI screening tools were just as biased as humans (https://www.businessinsider.com/ai-hiring-tools-biased-as-humans-experts-warn-2019-10).

A 2016 article in FiveThirtyEight (https://fivethirtyeight.com/features/an-algorithm-could-know-you-have-a-genetic-disease-before-you-do/) discussed an algorithm that could detect a genetic disease before you know you have it.

A 2016 report in the Wall Street Journal (https://www.wsj.com/articles/algorithms-arent-biased-but-the-people-who-write-them-may-be-1476466555?mod=e2fb), discussing Cathy O’Neil’s book Weapons of Math Destruction, argued that the mathematical/statistical analysis behind algorithms isn’t biased, but the people who write the algorithms may be.

A 2017 article in Quartz (https://qz.com/work/1098954/ai-is-the-future-of-hiring-but-it-could-introduce-bias-if-were-not-careful/) hedges in its lede with the operative word “could” (AI could introduce bias if we’re not careful), but given the litany of examples, that ship has long since sailed.

A 2017 article in MIT Technology Review (https://www.technologyreview.com/2017/06/12/105804/inspecting-algorithms-for-bias/) discussing the inspection of AI algorithms for bias.

A 2017 article in the Harvard Business Review (https://hbr.org/2017/07/ai-may-soon-replace-even-the-most-elite-consultants) discusses AI moving into management consulting over the next few years. It asserts that AI is going to change the way we all gather information, make decisions, and connect with stakeholders; that as of 2017, leaders were already starting to use AI to automate mundane tasks such as calendar maintenance and placing phone calls; and that AI can be used to help support decisions in key areas, including human resources, budgeting, marketing, capital allocation, and even corporate strategy, areas which have long been the bastion of major consulting and marketing firms.

A 2018 article in Politico (https://www.politico.com/agenda/story/2018/02/07/algorithmic-bias-software-recommendations-000631/) asking directly, “Is your software racist?”

A 2018 article in MIT Technology Review (https://www.technologyreview.com/2018/01/26/104816/algorithms-are-making-american-inequality-worse/) discussing Virginia Eubanks’ book Automating Inequality, which examines how AI-based assessments are increasingly imposed on the poor.

A 2018 article in Newsweek (https://www.newsweek.com/ai-racist-yet-computer-algorithms-are-helping-decide-court-cases-789296) was titled “Artificial Intelligence Is Racist Yet Computer Algorithms Are Deciding Who Goes to Prison”, but really, AI is only as racist as the data that goes into it.

From a 2019 article in the New York Times (https://www.nytimes.com/2019/11/11/technology/artificial-intelligence-bias.html):

“In a blog post this week, Dr. Munro also describes how he examined cloud-computing services from Google and Amazon Web Services that help other businesses add language skills into new applications. Both services failed to recognize the word “hers” as a pronoun, though they correctly identified “his.”

“We are aware of the issue and are taking the necessary steps to address and resolve it,” a Google spokesman said. “Mitigating bias from our systems is one of our A.I. principles, and is a top priority.” Amazon, in a statement, said it “dedicates significant resources to ensuring our technology is highly accurate and reduces bias, including rigorous benchmarking, testing and investing in diverse training data.”

Researchers have long warned of bias in A.I. that learns from large amounts of data, including the facial recognition systems that are used by police departments and other government agencies as well as popular internet services from tech giants like Google and Facebook. In 2015, for example, the Google Photos app was caught labelling African-Americans as “gorillas.” The services Dr. Munro scrutinized also showed bias against women and people of colour.”

In 2019, the Washington Post (https://www.washingtonpost.com/business/2019/11/11/apple-card-algorithm-sparks-gender-bias-allegations-against-goldman-sachs/) and others reported that a husband and wife who both applied for an Apple Card and received different credit limits (specifically, his credit limit was 20 times hers) were an example of sex-based discrimination. However, as was reported earlier this year (2021), those reports were in error, and the card’s underwriter was found to have committed no fair lending violations (https://www.reuters.com/article/us-goldman-sachs-apple-card/apple-card-underwriter-goldman-sachs-committed-no-fair-lending-violations-idUSKBN2BF1UN).

Taken as a whole, these articles and similar reportage can be summarised as follows:

1) AI/ML algorithms, like conventional statistical analyses, are often based on historical data.
2) It is beyond question that there has been a history of discrimination against non-White people in the US. Blacks, Latinos, Native Americans, and others all have histories of adverse treatment by government agencies as well as by non-government actors.
3) Things like redlining, ghettoisation, etc, have undeniably adversely impacted wealth acquisition and land value appreciation in non-White neighbourhoods. Any data or analyses derived from this history, regardless of how well-meaning, will be subject to the same biases.
4) There have been a fair number of documented cases indicating that for certain classes of crime, such as illicit cannabis use, Whites and Blacks offend at comparable rates, but given differences in law enforcement activity by area, Blacks have been far more likely to be stopped, arrested, and criminally charged for illicit cannabis use.
5) Image classification algorithms are often written by people who are White (or Asian) and male, and many of the test image sets skew the same way, meaning that when the processing runs you end up with results like: Black people being “categorised” as gorillas by Google’s image processing (https://www.wired.com/story/when-it-comes-to-gorillas-google-photos-remains-blind/), and images of prominent persons of colour, including Barack Obama, Lucy Liu, and AOC, coming out looking White when run through image-processing software for low-resolution images (https://www.theverge.com/21298762/face-depixelizer-ai-machine-learning-tool-pulse-stylegan-obama-bias).
6) Over several centuries of published books, many have expressed attitudes that would be classified as sexist and racist by modern standards. When the text of those books is scanned into computer systems and enters the databases, the computer doesn’t know to recognise the limitations of the recorded data. These issues are also why you can’t just wave AI/ML over hiring processes and make them better, since hiring data is often skewed by the same sorts of problems as other datasets (a small illustration of this proxy effect follows after this list). In US law, the concept of disparate impact has existed since the ruling in Griggs v. Duke Power, 401 U.S. 424 (1971), in which SCOTUS held, in short, that it doesn’t matter whether the intent of a law or policy is racist or sexist; if the output is discriminatory, then it deserves to be challenged. (Also note that this case is what eliminated the use of general IQ testing for employment accessions, which led in turn to the rise in requiring a university degree for more jobs.)
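
To make the proxy-variable point in items 5 and 6 concrete, here is a minimal sketch in Python using entirely synthetic data (my own illustration, not drawn from any of the studies cited above; the group labels, the postal-code stand-in, and the penalty term are all hypothetical). Even though the protected attribute is deliberately excluded from the model’s features, a correlated proxy lets the model reproduce the bias baked into the historical decisions it was trained on.

```python
# Minimal sketch, synthetic data only: excluding a protected attribute from the
# features does not stop a model from learning historical bias via a proxy.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 20_000

# Hypothetical protected attribute (0/1) and a postal-code indicator that
# correlates strongly with it (e.g. a legacy of residential segregation).
group = rng.integers(0, 2, size=n)
postal_code = np.where(rng.random(n) < 0.9, group, 1 - group)

# A genuinely job-relevant score, identically distributed across groups.
skill = rng.normal(0, 1, size=n)

# Historical hiring decisions: driven by skill, but with a penalty applied
# to group 1 -- this is the "garbage in".
hired_historically = (skill - 1.0 * group + rng.normal(0, 0.5, n)) > 0

# Train on features that deliberately exclude the protected attribute.
X = np.column_stack([skill, postal_code])
model = LogisticRegression().fit(X, hired_historically)

# The model still recommends group 0 far more often, via the proxy.
recommend = model.predict(X)
for g in (0, 1):
    rate = recommend[group == g].mean()
    print(f"group {g}: recommendation rate = {rate:.2%}")
```

The specific numbers are made up; the point is simply that dropping the protected column does not drop the bias.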

Cathy O’Neil, a Harvard maths PhD and former Wall Street quant, wrote the aforementioned Weapons of Math Destruction, in which she discusses why algorithms discriminate. She asserts that the problem isn’t caused by the maths, but is rooted in the biases of the people who encode their notions in algorithms and apply them en masse to the public in ways that are largely invisible and therefore difficult to challenge. In particular, she expresses concern about mathematical models that rank or score individuals, institutions, or places, often by using proxies to stand in for things the modellers wish to measure but can’t. In her description, “weapons of math destruction” (WMDs) share three characteristics: a) they are biased, b) they are opaque, and c) they are scalable. That last one is a big part of the reason why usually unintentional adverse discrimination against women and persons of colour is becoming even more common.

As Virginia Eubanks notes in her book Automating Inequality, algorithms used by social services agencies to administer aid to the poor and needy often increase adverse impact compared with human bureaucrats. Algorithms used by the criminal justice system are demonstrably worse at predicting recidivism amongst Blacks (and, to a lesser degree, Latinos) than amongst Whites. Data-driven decision making, especially in the case of government administration, can’t be effective if it’s based on bad data. I could go on, from a research point of view as well as a commercial one. The fact of the matter is that more of our actions and opportunities are being driven not by humans, but by machines analysing data.
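
As a rough sketch of the kind of check researchers and journalists have run on recidivism tools (the numbers below are made up purely for illustration, not the actual criminal justice data), one common test is whether the tool’s false positive rate, the share of people who did not reoffend but were still flagged high-risk, is the same across groups:

```python
# Minimal sketch with hypothetical numbers: does a risk tool make its mistakes
# at the same rate for each group? "label" = actually reoffended,
# "flagged" = the tool's high-risk call.
import numpy as np

def false_positive_rate(flagged, label):
    # Share of people who did NOT reoffend but were still flagged high-risk.
    did_not_reoffend = ~label
    return flagged[did_not_reoffend].mean()

# Two made-up groups with identical reoffence rates but different error
# profiles from the (hypothetical) tool.
label_a   = np.array([0, 0, 0, 0, 0, 1, 1, 1, 1, 1], dtype=bool)
flagged_a = np.array([0, 0, 0, 0, 1, 0, 1, 1, 1, 1], dtype=bool)  # FPR = 1/5

label_b   = np.array([0, 0, 0, 0, 0, 1, 1, 1, 1, 1], dtype=bool)
flagged_b = np.array([0, 0, 1, 1, 1, 0, 0, 1, 1, 1], dtype=bool)  # FPR = 3/5

print("group A false positive rate:", false_positive_rate(flagged_a, label_a))
print("group B false positive rate:", false_positive_rate(flagged_b, label_b))
```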

It’s easy to just say “Garbage In, Garbage Out” (commonly abbreviated GIGO; the British version I was told as a child is “Rubbish In, Rubbish Out”), and recognising this is important. However, in many if not most cases, commercial AI systems are built directly from research data and algorithms without any adjustment for racial or gender disparities. This is also true of government systems, considering the increasing reliance of the US Government on consulting firms, who bring these commercial systems to bear on public policy problems. This means you end up with a putatively fair system that still, in fact, treats women differently from men and persons of colour differently from Whites. Software that makes decisions based on data like a person’s postal code can reflect, or even amplify, the results of historical or institutional discrimination. As Solon Barocas and Andrew D. Selbst argue in their article “Big Data’s Disparate Impact” (referenced in one of the cited articles above), expanding disparate impact theory to challenge discriminatory data-mining in court “will be difficult technically, difficult legally, and difficult politically.” (https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2477899)
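
One rough, widely used screen for disparate impact is the “four-fifths rule” from the US Equal Employment Opportunity Commission’s Uniform Guidelines: if one group’s selection rate falls below 80% of the most-favoured group’s rate, the outcome warrants scrutiny. A minimal sketch of the arithmetic, with made-up numbers:

```python
# Minimal sketch (hypothetical counts): the EEOC "four-fifths rule" as a rough
# screen for disparate impact.
def selection_rate(selected, applicants):
    return selected / applicants

def adverse_impact_ratio(rate_group, rate_highest):
    return rate_group / rate_highest

# Made-up numbers for illustration only.
rate_men   = selection_rate(selected=60, applicants=100)   # 0.60
rate_women = selection_rate(selected=40, applicants=100)   # 0.40

ratio = adverse_impact_ratio(rate_women, rate_men)          # ~0.67
print(f"adverse impact ratio: {ratio:.2f}")
print("flags for review under the four-fifths rule" if ratio < 0.8
      else "passes the four-fifths screen")
```

The rule is only a screening heuristic, not a legal determination, which is part of why Barocas and Selbst expect data-mining cases to be so hard to litigate.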

In the end, this means that AI is far from being the solution to all problems of bias; it adds a pseudo-scientific sheen that makes such bias even harder to refute. It is because of this that I remain utterly sceptical of AI technologies and their ability to replace human knowledge workers. Given the technical and legal challenges, I think that the use of AI will remain circumscribed; it will have to be, especially in light of equity concerns, to say nothing of legal ones.

Notes for the audience:

1) This essay is based on my own personal background and knowledge as a trained engineer who has studied AI in my graduate studies and works as a software engineer. It in no way reflects any official position of any organisation with which I am affiliated and should not be taken as such.

2) This essay is also based on a post from my personal Facebook page from January 2021.

Written by A. P. D. G. Everett

Engineer, PMP, Proud citizen of Canada & USA, UW/UVA/Penn/Cornell alumnus w/ a habit of writing about personal interests. LinkedIn: https://bit.ly/3jJIovf