By Ian Duckles
For my dispatch today, I want to talk about the language used around generative AI and to consider the implications of the words and phrases we use to describe how we should engage with these systems.
If you spend much time in AI spaces or trainings where AI is the focus, a major topic that comes up is the importance of there being a “human in the loop.” The main justification behind this idea is that AIs have biases, AIs “hallucinate” (the oddly chosen term for when an AI makes up something that isn’t true; I’ll get to this another time), and otherwise make mistakes, so there needs to be a human “in the loop” to check their work.
If we think about this phrase, one of the aspects that really stands out to me is that when someone is “looped in” to something, this doesn’t necessarily mean that they are contributing to it or that they have any say over it. To be “looped in” to a conversation or discussion can often simply mean that one is informed of it after the fact: “I just wanted to loop you in on our meeting yesterday.”
When AI promoters talk about the need for a “human in the loop,” they say this, I think, both to cover themselves for the fact that AIs make mistakes and to assure us poor humans that we will still have an important role to play in a future dominated by AI. AI may replace many of our jobs, but there will always be a need (so they say) for a “human in the loop.”
What this formulation both implies and covers up is that the requirement of a “human in the loop” is satisfied as long as a single human reviews the output. This is particularly problematic given the well-known biases that are inherent in AI systems.
As an example, if you ask an AI to generate an image of a productive person, you will get lots of images of a white man sitting at a cluttered desk in an office. If the human in the loop in charge of reviewing or approving these images is also a white man who works at a cluttered desk in an office, it is likely that this bias will be missed and that image will be sent out into the world (which will then provide additional training data for future AIs to simply reinforce those existing biases). Far more valuable would be to have humans (emphasis on the plural) in the loop who are themselves a diverse group of individuals and thus more likely to spot bias. But that would require intentionality and the hiring of people, which defeats the efficiencies promised by AI.
Also telling about this formulation is that it doesn’t necessarily imply that a human will have the final say. As long as that human is looped in, we are told, things will be okay; we needn’t worry about AI and can instead embrace it for all its potential and all it will do.
Personally, I would prefer a phrase like “human oversight,” which carries a stronger implication of a human being in charge, directing and guiding the AI, rather than simply being involved in a process or serving as a passive observer.
Finally, as I listen to all this talk of “humans in the loop” and humans being “looped in,” it seems clear to me that this was language created by middle managers that reflects a middle management dream: a bunch of AI systems to replace human labor. These systems can produce work instantly on demand at any hour of the day. They never get sick, never complain about their working conditions, and never ask for raises. The AIs generate the product, and the middle managers get “looped in,” ensuring their value and providing job security for what is otherwise the most useless category of employment.
While this might be utopia for them, it sounds like dystopia for the rest of us.
Ian Duckles teaches philosophy (until AI replaces him) at San Diego Mesa College and is actively involved in his local union, AFT Local 1931.