Automation & Its Limitations, or, Why the Future is Humans & Robots/AI Working Together as Teammates

A. P. D. G. Everett
21 min read · May 16, 2022
From NIST — Test Methods and Metrics for Effective HRI in Collaborative Human-Robot Teams (https://www.nist.gov/news-events/events/2019/03/test-methods-and-metrics-effective-hri-collaborative-human-robot-teams)

Throughout history, improvements in technology have caused human labour to be supplemented, and then supplanted, by artefacts created by humanity. After each period of technological growth, a new equilibrium between humanity and its tools emerged, although the period of disruption was never fixed, and in the case of the first and second industrial revolutions those equilibria took a long while to reach. The artefacts which humanity has created have allowed a new form of partnership between humanity and technology, and human ingenuity as expressed through technological development has undeniably led to massive benefits for the people of the Earth. There are limits, though, to how far technology can go, or rather should go, given that technology, despite all of its promise, is not an unalloyed good. All of this drives towards an end state in which humans partner with robots and/or AI to achieve tasks, rather than merely entrusting the robots and AI with the labour on their own. Let us begin with an historical survey.

Until the industrial age, the work needed to craft goods was either performed by humans themselves or by humans harnessing natural elements. Then came the first industrial revolution, from roughly 1770 to 1840, which transitioned production from the hand methods of skilled artisans to machines which almost anyone could operate (thereby boosting the demand for unskilled labour), along with all of the attendant innovations, and which radically transformed the relationship between humanity and the Earth.[1] After the first industrial revolution came a lull in technological change, lasting until the beginning of the second industrial revolution (and the related first period of globalisation that arose with technologies such as the railroad, electricity, the telegraph and telephone, interchangeable parts, the transition from animal fats to petroleum for heating and lighting, the mechanisation of farming, etc.), which ran from the 1870s to the start of the first Great Powers War in 1914.[2] Along with the monumental technological growth of the roughly 140-year span between the beginning of the first industrial revolution and the end of the second came massive growth in the human population; in effect, each new stage of technological innovation drove a new surge of population growth.[3] From 1770 to 1914, the population of the Earth grew from 795 million to 1.797 billion, a difference of over one billion people in 144 years and an annual growth rate of 5.67 per mille (0.567 per cent). Even during the period of 1914 to 1949, a 35-year span of wars and revolutions that transformed the state of the world, the population still grew from that 1.797 billion to 2.503 billion, an annual growth rate of 9.51 per mille. Even factoring in the roughly 40 million killed in the 1914–1918 Great Powers war, the roughly 77 million killed in the 1939–1945 Great Powers war, the 10–15 million killed in the Russian Civil War, the 10–15 million killed in the Chinese Civil War, the 20 million killed in Stalin’s purges and famines, and so on, the roughly 160 million dead from conflicts in that span are still less than a quarter of the net estimated population growth of 706 million over those 35 years.[4]
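
For readers who want to verify the growth-rate arithmetic, here is a minimal Python sketch using only the population figures quoted above (in billions); small differences from the quoted figures come down to rounding:

```python
def annual_growth_rate(start_pop: float, end_pop: float, years: float) -> float:
    """Compound annual growth rate: (end/start)**(1/years) - 1."""
    return (end_pop / start_pop) ** (1 / years) - 1

# Population figures quoted in the text, in billions
print(f"1770-1914: {annual_growth_rate(0.795, 1.797, 144) * 1000:.2f} per mille")  # ~5.7 per mille
print(f"1914-1949: {annual_growth_rate(1.797, 2.503, 35) * 1000:.2f} per mille")   # ~9.5 per mille
```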

The third industrial revolution (otherwise known as the digital revolution) arose after the Second Thirty Years War, with the development of the ENIAC in December 1945 at the University of Pennsylvania, the development of the point-contact transistor in 1947 at Bell Laboratories by John Bardeen and Walter Brattain under the leadership of William Shockley, and the development of information theory by Claude Shannon (also of Bell Laboratories) in 1948.[5] From the late 1940s until the present, more and more devices became electronically controlled, and now virtually everything of note relies on integrated circuits.[6] And again, the technological growth fuelled an even bigger population growth, from 2.503 billion in 1949 to 7.875 billion as of 2021, an average annual population growth of 16 per mille (1.6 per cent) over that 72-year span, a growth rate higher than that of the period of the second industrial revolution, which in turn was higher than the population growth rate during the first industrial revolution. To elucidate the value of integrated circuits in driving technological progress, a brief discussion of the implications of Moore’s Law is in order. Imagine for a moment that you get in your car and begin driving at 8 kph. In one minute, you’d travel 133.33 metres. In the second minute, imagine that you’re going twice that fast, 16 kph; in that second minute, you’d travel 266.67 metres. In the third minute, you’d be travelling at 32 kph, and you would cover 533.33 metres. If you doubled your speed 27 times, you would travel 1.79 × 10⁷ km in a minute. In five minutes at that speed, you could travel from the Earth to Mars. That is approximately where information technology stands today vis-à-vis when integrated circuits came into being in the late 1950s.[7]
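
The doubling arithmetic behind that example is easy to reproduce; a minimal Python sketch using the figures from the example above:

```python
# Moore's-Law-style doubling illustration from the text:
# start at 8 kph and double the speed 27 times.
speed_kph = 8 * 2 ** 27                    # 1,073,741,824 kph
km_in_one_minute = speed_kph / 60          # ~1.79e7 km covered in a single minute
km_in_five_minutes = 5 * km_in_one_minute  # ~8.9e7 km, on the order of an Earth-Mars trip

print(f"Speed after 27 doublings: {speed_kph:,} kph")
print(f"Distance in one minute:   {km_in_one_minute:.3e} km")
print(f"Distance in five minutes: {km_in_five_minutes:.3e} km")
```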

Along with the three industrial revolutions spread over the last 252 years and counting, and the growth of the human population from 795 million in 1770 to 7.875 billion as of 2021, has come the replacement of human labour. In the first two industrial revolutions, the replacement was mostly of human physical labour by manufactured artefacts, but the third industrial revolution, also called the digital revolution, has caused the replacement of the mental capacity of humans through technological innovations as well. As Harvard University economist Kenneth Rogoff noted in 2012: “Since the dawn of the industrial age, a recurrent fear has been that technological change will spawn mass unemployment. Neoclassical economists predicted that this would not happen, because people would find other jobs, albeit possibly after a long period of painful adjustment. By and large, that prediction has proven to be correct.”[8] However, as Mr Rogoff noted in that same essay, Vasily Leontief, the Nobel Prize-winning economist, worried in 1983 “that the pace of modern technological change is so rapid that many workers, unable to adjust, will simply become obsolete, like horses after the rise of the automobile.”[9] It should be noted that the first industrial revolution led to the Luddites sabotaging textile machines in England, and even the word sabotage is reputed to have come from labourers in francophone countries throwing their wooden shoes (called sabots) into the early machines to stop them.[10]

As a 2017 article in The Economist noted, one significant factor in the rise of income inequality since the 1970s is the labour-market premium for those with university degrees, which has risen by over a third since 1963, whereas wages have dropped for those without a secondary school diploma.[11] This was something that Vasily Leontief forecast in 1983, when he wrote: “the new technology diminishes the role of human labour in production to such an extent that it is bound to bring about not only long-run technological unemployment, but-if permitted to operate within the framework of the automatic competitive price mechanism-also a shift toward a more skewed and, because of that, socially unacceptable distribution of income.”[12]
Related to this phenomenon is the fact that increasing automation drives productivity gains, and that the increased productivity of machines and (higher-skilled) people together allows that higher-skilled labour to be paid better, at the cost of replacing most of the previously employed labour pool.[13] What distinguishes the advances of the digital revolution from those of the earlier industrial revolutions is that they have, up until now, favoured skilled workers, a phenomenon known as “skill-biased technological change”.[14] So far, university degrees have been a reliable proxy for skill, but this may change as artificial intelligence starts taking jobs away from white-collar workers.[15] As was demonstrated by Martin Ford in his book Rise of the Robots (as well as elsewhere by other parties), employment for many skilled professionals, including attorneys, journalists, scientists, and pharmacists, is already being significantly eroded by advancing information technology.[16] Relatedly, it should be noted that to assume any current economic trend will persist is to assume an inefficient labour market; as Kenneth Rogoff stated in a 2017 interview, as the wage premium for a particular group of workers rises, firms will have a greater incentive to replace them.[17] However, a 2016 study by McKinsey & Co. noted: “While automation will eliminate very few occupations entirely in the next decade, it will affect portions of almost all jobs to a greater or lesser degree, depending on the type of work they entail. Automation, now going beyond routine manufacturing activities, has the potential, at least with regard to its technical feasibility, to transform sectors such as healthcare and finance, which involve a substantial share of knowledge work.”[18]

What that McKinsey study also notes is that whilst technical feasibility is a necessary precondition for automation, it is not a complete predictor that an activity will be automated. A second factor to consider is the cost of developing and deploying both the hardware and the software for automation. A third factor is the cost of labour and related supply-and-demand dynamics: if workers are in abundant supply and significantly less expensive than automation, this could be a decisive argument against it. A fourth factor to consider is the benefits beyond labour substitution, including higher levels of output, better quality, and fewer errors; these are often larger than the savings from reduced labour costs. Regulatory and social-acceptance issues, such as the degree to which machines are acceptable in any particular setting, must also be weighed. A robot may, in theory, be able to replace some of the functions of a nurse, for example, but for now the prospect that this might actually happen in a highly visible way could prove unpalatable for many patients, who expect human contact. The potential for automation to take hold in a sector or occupation reflects a subtle interplay between these factors and the trade-offs among them.[19] It should be noted that certain industries, such as most of the health care sector, as well as the education sector, have over the last four decades or so seen their costs grow faster than the mean growth rate of wages, a phenomenon known as “Baumol’s Cost Disease”, which applies to sectors in which there have not been productivity increases. In effect, it takes the same number of doctors and nurses to staff a hospital now as it did 20, 40, or 60 years ago, and the same holds for teachers in education, to give two examples.[20]

Min Kyung Lee noted in a 2018 study: “Advances in artificial intelligence, machine learning, and data infrastructure are transforming how people govern and manage citizens and organisations. Now more than ever, computational algorithms increasingly make decisions that human managers used to make, changing the practices of managers, policy makers, physicians, teachers, police, judges, on-demand labour platforms, online communities, and more.”[21] Part of the reason for this transition is that decisions made by algorithm are seen as unbiased (since computers, unlike humans, only respond to data).[22] However, the rise in cases of algorithmic bias has demonstrated that, in many cases, the computer is being fed data that is biased, which yields biased outcomes, at least when the algorithm alone is doing the screening. This is because, as Nicol Turner-Lee of the Brookings Institution discussed in a 2020 interview with Vox, algorithmic bias happens in two primary ways: accuracy and impact. An AI can have different accuracy rates for different demographic groups, and, similarly, an algorithm can make vastly different decisions when applied to different populations.[23] For instance, an AI-infused human-resources platform might contain historical data on successful hires, but given that the reference dataset might stretch back a long time, it might as a result be biased against women, people with less “White-sounding” names, veterans, or the disabled, results that are not just morally wrong, but also illegal.[24] As Virginia Eubanks notes in her book Automating Inequality, algorithms used by social services agencies in administering aid to the poor and needy oftentimes increase adverse impact versus human bureaucrats. Algorithms used by the criminal justice system are demonstrably worse at predicting recidivism rates amongst Blacks (and to a lesser degree Latinos) than Whites.[25] Yael Eisenstat, former CIA officer and national security adviser to (then Vice President) Joe Biden, noted in a 2019 article in Wired: “No matter how trained or skilled you may be, it is 100 percent human to rely on cognitive bias to make decisions. Daniel Kahneman’s work challenging the assumptions of human rationality, among other theories of behavioural economics and heuristics, drives home the point that human beings cannot overcome all forms of bias. But slowing down and learning what those traps are — as well as how to recognize and challenge them — is critical. As humans continue to train models on everything from stopping hate speech online to labelling political advertising to more fair and equitable hiring and promotion practices, such work is crucial. Becoming overly reliant on data — which in itself is a product of availability bias — is a huge part of the problem. In my time at Facebook, I was frustrated by the immediate jump to “data” as the solution to all questions. That impulse often overshadowed necessary critical thinking to ensure that the information provided wasn’t tainted by issues of confirmation, pattern, or other cognitive biases.”[26] This sort of thing is undeniably problematic. As that same 2018 study by Min Kyung Lee also noted about the limitations of algorithm-based hiring decisions, roughly half of the participants in the study thought that the decisions the algorithm made were unfair.
To quote: “Most participants thought that the algorithm would not have the ability to discern good candidates because it would lack human intuition, make judgments based on keywords, or ignore qualities that are hard to quantify. ‘An algorithm or program cannot logically decide who is best for a position. I believe it takes human knowledge and intuition to make the best judgment.’ (P169). ‘He could have a great resume but not hit the key phrases.’ (P174).”[27] The Lee 2018 study demonstrates that knowledge of whether the decision-maker is human or computer can influence perceptions of the decisions made, and that whether the tasks require more human or more mechanical skills also influences those perceptions, which can offer insights into creating trustworthy, fair, and positive workplaces with algorithms. This correlates with what was found by Pamela Hinds et al. in their 2004 study, whose findings suggest that there are significant differences in the extent to which people will rely on robots as compared with human work partners. When working with a person instead of a robot, participants relied more on the partner’s advice and were less likely to ignore their counsel.[28] The McKinsey 2016 study also illustrates that the tasks least likely to be subject to automation from a technical-feasibility perspective are “managing others” (9 per cent) and “applying expertise” (18 per cent), which are exactly the kinds of tasks that the Lee 2018 study is discussing.[29]
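
To make Turner-Lee’s “accuracy and impact” distinction from above concrete, here is a minimal, purely illustrative Python sketch; the groups, records, and numbers are invented for demonstration and are not drawn from any of the studies cited. It computes the two audit figures per demographic group: how often the model is right about each group, and how often it selects each group at all.

```python
from collections import defaultdict

# Hypothetical screening records: (group, model_decision, correct_decision),
# where 1 = "advance the candidate" and 0 = "reject". Invented for illustration only.
records = [
    ("group_a", 1, 1), ("group_a", 0, 0), ("group_a", 1, 0), ("group_a", 1, 1),
    ("group_b", 0, 1), ("group_b", 0, 0), ("group_b", 0, 1), ("group_b", 1, 1),
]

stats = defaultdict(lambda: {"n": 0, "correct": 0, "selected": 0})
for group, decision, truth in records:
    s = stats[group]
    s["n"] += 1
    s["correct"] += int(decision == truth)  # accuracy: is the model right about this group?
    s["selected"] += decision               # impact: how often is this group selected at all?

for group, s in stats.items():
    print(f"{group}: accuracy={s['correct'] / s['n']:.2f}, "
          f"selection_rate={s['selected'] / s['n']:.2f}")
```

A real audit would of course use real outcome data and a far richer set of measures, but the shape of the check is the same: compare both how accurate and how generous the algorithm is across groups, not just its overall error rate.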

These sorts of issues are most certainly difficult to overcome using a solely data-driven approach, and in fact may be made harder by one, as has been discussed. This means that the best approach is probably not human oversight and decision-making being replaced entirely by computers running algorithms over large datasets, but rather a partnership of sorts, with humans being assisted by such algorithmic tools, given the lack of trust in management solely by algorithm (as Ms Lee has demonstrated in her research) and given that the data itself may be flawed (as has been noted by many sources), with regular reviews of the input data and output decisions to avoid an algorithmic bias problem brought on by biased data. In the context of humans and machines working together, the future most likely involves humans doing the “nicer” tasks whilst robots do the dirtier and/or more dangerous tasks, as has been repeatedly discussed in the popular and academic literature.[30] The SARS-CoV-2/COVID-19 pandemic has most likely accelerated the pace of automation, for two main reasons: 1) the pandemic has induced social changes that are likely to endure, including the “Great Resignation”, wherein millions around the world have quit their jobs, which may in part be a consequence of lockdowns creating new opportunities for home working; and 2) AI and robots are getting better, and more capable of taking on higher-order tasks.[31]
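
What might “humans assisted by algorithmic tools, with regular reviews” look like in practice? The sketch below is one entirely hypothetical shape it could take; the names, thresholds, and structure are illustrative assumptions, not any published method. The algorithm decides only the clear-cut cases and routes everything else to a human reviewer.

```python
from dataclasses import dataclass

@dataclass
class Decision:
    candidate_id: str
    score: float   # model score in [0, 1]; higher means a stronger match
    outcome: str   # "accept", "reject", or "needs_human_review"

def triage(candidate_id: str, score: float,
           accept_at: float = 0.8, reject_at: float = 0.2) -> Decision:
    """Decide only the clear-cut cases automatically; defer the rest to a person."""
    if score >= accept_at:
        return Decision(candidate_id, score, "accept")
    if score <= reject_at:
        return Decision(candidate_id, score, "reject")
    return Decision(candidate_id, score, "needs_human_review")

# The middle band goes to human reviewers, who supply the intuition and context
# that the participants in Lee (2018) found missing from purely algorithmic decisions.
for cid, score in [("c1", 0.91), ("c2", 0.55), ("c3", 0.12)]:
    print(triage(cid, score))
```

The point of such a design is the middle band: the cases where, as Lee’s participants put it, human knowledge and intuition are needed to make the best judgment, and where periodic review of both inputs and outputs can catch biased data before it hardens into biased policy.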

Now, what might a team of humans and robots working together look like? As was discussed by Victoria Groom and Clifford Nass in their 2007 study: “The experiences of the few groups dedicated to developing true human–robot teams offer enough information to identify specific challenges to human–robot teams. Finding solutions to some of these problems appears inevitable. For example, human teammates often refer to objects by points of reference, and the objects and points of reference change frequently throughout the interaction. While robots currently struggle in this area, researchers are succeeding in improving their abilities to identify objects by reference point. Though it may be many years before robots can consistently understand human references to objects, there appears to be no major block that would prevent eventual success. Two major problem areas that show less promise of resolution are the robot’s inability to earn trust and lack of self-awareness. In high-stress situations, robot operators may experience physical and cognitive fatigue. In extreme cases, operators may forget to bring a robot to or from a disaster site (Casper & Murphy, 2003). Unlike human teammates who think not only about how to do their jobs, but also how to be prepared to do their jobs, robots require an external activation of their autonomy. They must be set-up to perform, but in cases of extreme stress, human teammates may lose their ability to babysit the robot. In these cases, the robot will not be in the appropriate position to complete its task.”[32] This, fundamentally, is a question of relatability between people and robots. And, on the basis of the literature, it seems that, just as humans who form teams go through the Tuckman stages of group development (i.e., forming, storming, norming, and performing), the only way robots will truly be trusted by their human teammates is for them to have enough emotional intelligence to recognise the emotional states of the human members and gain their trust, which is undeniably not a simple task when the robot doesn’t have the ability to feel.[33]

These are the sorts of limitations that will need to be overcome in order for robots to totally supplant humans, which is why, it appears, the future is headed towards another case of specialisation of labour, where robots will be entrusted with what they are best at, the predictable and repeatable tasks, and humans will be entrusted with the tasks that are unique and unpredictable.

Note: This essay was my final project for INFO 6309: Robots & Work, at Cornell University (Instructor: Malte Jung, PhD), and despite its original purpose, I do think that this essay is of value to other audiences, so I share it with that intent in mind.

[1] As was noted in The Economist in 2017 — “What history says about inequality and technology” (17 June 2017): “One study has found that the share of unskilled workers rose from 20% of the labour force in England in 1700 to 39% in 1850. The ratio of craftsmen’s wages to labourers’ started to fall in the early 1800s, and did not recover until 1960.” This article references the work of Scottish economist Gregory Clark at UC Davis, who put together a comprehensive dataset of English wages that stretches back to the 13th century. Mr Clark noted that in the past the skilled-wage premium, defined as the difference in wages between craftsmen, such as carpenters and masons, and unskilled labourers, has been fairly stable, save for two sharp declines: the first coming after the Black Death of the mid-1300s, and the second after the first industrial revolution. That dataset can be viewed here: http://faculty.econ.ucdavis.edu/faculty/gclark/data.html

[2] This is drawn from many sources, but the author of this essay will point to the book The Unbound Prometheus: Technological Change and Industrial Development in Western Europe from 1750 to the Present by economic historian David S. Landes published by Cambridge University Press (initially published in 1969, second edition published in 2003 — the version this author has read and referenced) for a discussion of the first and second Industrial Revolutions.

[3] There are many references that discuss the relationship between technology growth and human population growth. The author points to “Innovation and the growth of human population” by V. P. Weinberger et al. (2017); Philosophical Transactions of the Royal Society of London. Series B, Biological Sciences, 372(1735), 20160415. https://doi.org/10.1098/rstb.2016.0415 as a starting reference point. It should be noted that along with the industrial revolutions came the rise of modern medicine and public-health reforms (clean water, waste removal, vaccination against infectious diseases, etc.), which are no doubt also technological innovations, but of a different kind from the increased production efficiencies wrought by the industrial revolutions.

[4] Population figures are from worldpopulationhistory.org. The war dead listed are estimates averaged from various sources and are meant to be demonstrative, not iron-clad declarations of fact. The discussion of the massive number of dead in the 1914–1949 span due to wars is not meant to be an exercise in fascination with the macabre; rather, as Brian Hayes notes at the beginning of his précis of the life and work of Lewis Fry Richardson with respect to modelling wars, “Demographically, it hardly matters. War deaths amount to something like 1 per cent of all deaths; in many places, more die by suicide, and still more in accidents. If saving human lives is the great desideratum, then there is more to be gained by prevention of drowning and auto wrecks than by the abolition of war.” The point being that even death on this massive scale hardly impacted the growth of the human population. (This is drawn from Everett (2016), MSc Thesis, University of Virginia, short title: “What are the Odds?”, and refers to Brian Hayes, “Computing Science: Statistics of Deadly Quarrels”, American Scientist.)

[5] The author recognises that the concept of a “Second Thirty Years War” to describe the period of 1914 to 1945 was originally proposed by French general and political leader Charles de Gaulle, and also adhered to by British soldier turned author turned political leader Sir Winston Churchill in the preface of volume one (The Gathering Storm) of his six-volume memoir The Second World War (https://www.amazon.com/Second-World-War-Volumes/dp/B005NS30ZG), which earned him the 1953 Nobel Prize in Literature. The author adheres to an extension of this view: that the “war years” of the Second Thirty Years War ran from the start of the first Great Powers War in 1914, through the second Great Powers War ending in 1945, and on until 1949, encompassing not only the Chinese Communist Revolution of 1949 but also the Russian Revolution of 1917, etc. This is admittedly a bit arbitrary, but it allows for a single post-bellum period covering both the duration of the Cold War and the digital revolution, which started in the late 1940s. It also allows the rise of the modern social welfare state to be treated as part of that same post-bellum/modern period. For a discussion of this in the UK, see the essay by Derek Brown in The Guardian on 14 March 2001, “1945–51: Labour and the creation of the welfare state” (https://www.theguardian.com/politics/2001/mar/14/past.education)

[6] The author was once a doctoral student at the University of Pennsylvania in electrical engineering (including taking classes in the room that once stored the ENIAC), and as such, is basically familiar with the history of digital computer technology. Other things to note: the point-contact transistor (and, quickly thereafter, the bipolar junction transistor developed by William Shockley, which replaced the point-contact transistor because it was sturdier and easier to manufacture) was used to replace the generally unreliable thermionic valve (aka vacuum tube) that was common in electronics such as the Colossus computer and the ENIAC until the creation of transistors. The (monolithic) integrated circuit, developed by Robert Noyce (a one-time protégé of William Shockley) at Fairchild Semiconductor in 1959, combined multiple transistors on a single piece of semiconductor, now called a microchip, and drove the miniaturisation of electronics and their increasing ubiquity in society. There are whole papers and books that discuss this; it is summarised here to demonstrate recognition of the history of the development of modern electronics.

[7] This example is drawn from Pg xiii of the introduction of the book Rise of the Robots by Martin Ford, Basic Books (2015). Given that it is now seven years since this example was crafted, you’d have to multiply the distance in the example by a factor of 8, which would yield a distance of approximately 0.96 AU (or 1.432 × 10⁸ km).

[8] Kenneth Rogoff aired this opinion in an essay on Project Syndicate, entitled: “King Ludd is Still Dead” (1 October 2012).

[9] The article that Kenneth Rogoff is referring to is: Leontief (1983); “Technological Advance, Economic Growth, and the Distribution of Income”, Population and Development Review, 9(3), 403–410. https://doi.org/10.2307/1973315. Also note that Mr Leontief, who was born in Munich on 5 August 1905, used the German spelling of his given name Василий (Wassily) in countries that use Latin-based alphabets — primarily Germany (he performed his doctoral work (dissertation title: Die Wirtschaft als Kreislauf) at Friedrich Wilhelm University (now Humboldt University of Berlin), earning his PhD in 1928, and worked at the Institute for the World Economy of the University of Kiel from 1927 to 1930) and the United States (where he emigrated in 1931, working first at the National Bureau of Economic Research before joining the economics department of Harvard University in 1932). The author prefers to use the standard English rendering of the Slavic given name, Vasily (from the Greek name Βασίλειος, Basil in English), but recognises Mr Leontief’s preferred spelling.

[10] This supposed origin of the word sabotage is incorrect; the word came from the French verb “saboter” (to walk noisily), which in turn came from the word for the wooden clog known as a sabot, as wooden sabots made a lot of noise on paved city streets. Sabotage, Online Etymology Dictionary — https://www.etymonline.com/word/sabotage. As it happens, this false history is actually referenced in the film Star Trek VI: The Undiscovered Country.

[11] This is from the previously referenced article in The Economist, “What history says about inequality and technology”.

[12] Pg 409, Leontief (1983).

[13] There are many examples of this; a recent one is discussed by Christopher Matthews in Axios, “Automation: raising worker pay while killing U.S. jobs” (15 December 2017) — https://www.axios.com/2017/12/15/automation-raising-worker-pay-while-killing-us-jobs-1513301264.

[14] The issues surrounding SBTC are massive, and much academic research has been done on the topic. The author concedes that SBTC certainly plays a part, but not the entire part, in the worker-displacement issue. If there is an interest in delving into this topic in greater depth, two papers worth reading are: 1) David Card & John DiNardo, “Skill-Biased Technological Change and Rising Wage Inequality: Some Problems and Puzzles”, Journal of Labor Economics, 2002, vol. 20, no. 4 — https://davidcard.berkeley.edu/papers/skill-tech-change.pdf; and, 2) Eli Berman et al., “Implications of Skill-Biased Technological Change: International Evidence”, The Quarterly Journal of Economics, Vol. 113, No. 4 (Nov., 1998), pp. 1245–1279 — https://www.jstor.org/stable/2586980

[15] This is also from the previously referenced article in The Economist, “What history says about inequality and technology”.

[16] Page xv, Ford (2015) for the summary, multiple chapters for the details.

[17] This is also from the previously referenced article in The Economist, “What history says about inequality and technology”.

[18] Michael Chui et al., McKinsey & Co., 8 July 2016, “Where machines could replace humans — and where they can’t (yet)” - https://www.mckinsey.com/business-functions/mckinsey-digital/our-insights/where-machines-could-replace-humans-and-where-they-cant-yet

[19] McKinsey & Co. (2016). In summary, the five factors are: 1) technical feasibility, 2) costs to automate, 3) the relative scarcity, cost, and skills of the workers who might otherwise do that activity, 4) benefits of automation beyond labour-cost substitution, and 5) regulatory and social-acceptance considerations. A paraphrase of the original text is provided above, given that it included examples that are useful.

[20] William Baumol started describing this phenomenon in the 1960s, published multiple papers on this topic, and distilled many of those papers into a book: The Cost Disease: Why Computers Get Cheaper and Health Care Doesn’t, Yale University Press (2012), which was the primary reference source by the author for this topic.

[21] Min Kyung Lee, “Understanding perception of algorithmic decisions: Fairness, trust, and emotion in response to algorithmic management”, Big Data & Society, January–June 2018: 1–16 https://doi.org/10.1177%2F2053951718756684

[22] This assumption is discussed in the Min Kyung Lee paper referenced above.

[23] Rebecca Heilweil, “Why algorithms can be racist and sexist”, Vox/Recode (18 February 2020)

[24] Gideon Mann & Cathy O’Neil, “Hiring Algorithms Are Not Neutral”, Harvard Business Review, 9 December 2016 — https://hbr.org/2016/12/hiring-algorithms-are-not-neutral. This referenced a famous 2004 study by Marianne Bertrand & Sendhil Mullainathan (circulated through the National Bureau of Economic Research), “Are Emily and Greg More Employable than Lakisha and Jamal? A Field Experiment on Labor Market Discrimination”, American Economic Review, v94, 991–1013. The 2020 Vox piece by Rebecca Heilweil cited before also gives an example of this: the time when Amazon.com used an AI résumé-screening tool to improve the efficiency of screening job applications, but given the highly disproportionate number of men hired by Amazon during the sample period, the system learnt, in effect, to discriminate against women, even though it was not programmed with such a bias.

[25] This is from an essay the author wrote & published on his personal Medium page, “The limits of AI and why it will not replace human knowledge workers — from the perspective of a practitioner” (21 April 2021) — https://apdge.medium.com/the-limits-of-ai-and-why-it-will-not-replace-human-knowledge-workers-from-the-perspective-of-a-b7c345f06c2d.
The referenced book — Automating Inequality: How High-Tech Tools Profile, Police, and Punish the Poor, written by Virginia Eubanks — was published in 2018 by St. Martin’s Press.

[26] Yael Eisenstat, “The Real Reason Tech Struggles With Algorithmic Bias”, Wired (12 February 2019) — https://www.wired.com/story/the-real-reason-tech-struggles-with-algorithmic-bias/

[27] Pg 9, Lee (2018)

[28] From Section 5 (Discussion) of Pamela J. Hinds, Teresa L. Roberts & Hank Jones, “Whose Job Is It Anyway? A Study of Human-Robot Interaction in a Collaborative Task”, Human–Computer Interaction (2004), 19:1–2, 151–181 - https://doi.org/10.1080/07370024.2004.9667343

[29] Conversely, the McKinsey study found that the types of labour most susceptible to automation from a technical feasibility perspective are: 1) predictable physical work (78 per cent), 2) data processing (69 per cent), and 3) data collection (64 per cent).

[30] One example, in the field of urban search and rescue, is Robin Roberson Murphy, “Human–Robot Interaction in Rescue Robotics”, IEEE Transactions on Systems, Man, and Cybernetics — Part C: Applications and Reviews, Vol. 34, No. 2, May 2004

[31] The Economist, “Covid has reset relations between people and robots”, 25 February 2022

[32] Victoria Groom and Clifford Nass, “Can robots be teammates? Benchmarks in human–robot teams”, Interaction Studies 8:3 (2007), 483–500.

[33] Bruce W Tuckman, “Developmental sequence in small groups”. Psychological Bulletin. 63 (6): 384–399 (1965). doi:10.1037/h0022100. PMID 14314073

A. P. D. G. Everett

Engineer, PMP, Proud citizen of Canada & USA, UW/UVA/Penn/Cornell alumnus w/ a habit of writing about personal interests. LinkedIn: https://bit.ly/3jJIovf