The Trolley Problem Has Actually Arrived in the Age of AI: Why Black-Collar Workers Are Indispensable
Translator’s note. In contemporary professional usage, “black-collar” (黑领) refers to high-end specialists in high-tech, information technology, engineering, automation, and scientific research: people who often work in dimly lit environments such as data centers, laboratories, and operations consoles, or who embody a distinctively creative, disruptive kind of talent. In this author’s series, the term is given a more specific role: the worker whose value in the AI era lies in standing on the boundary between humans and systems, designing the limits within which AI is allowed to act.
Original Chinese version. This essay was first published in Chinese on WeChat.
If you have ever taken a philosophy class, you have probably heard of the trolley problem. A runaway trolley is barreling toward five people. You are standing next to a lever. You can pull it and divert the trolley onto a different track — but there is one person on that other track. Do you pull the lever? The point of this question has never been the answer. The point is that there is no standard answer. The problem forces you to face a fact: some choices, no matter what you do, will harm someone.
Many people assume that this is a thought experiment confined to the classroom, far from real life. But if you observe carefully today, you will notice something: the trolley problem has arrived in the real world. And not only occasionally — it is happening every day.
Begin with the easiest scenario to grasp. A self-driving car is moving through a city. It is raining at night, the road is slick, you are inside the car, the car is going at a moderate speed, everything is normal. Suddenly a person rushes out from the side of the road. The system has to make a decision in a fraction of a second. There is a pedestrian ahead, another car to the left, a guardrail to the right. There is no time to brake, and any swerve creates a different risk. Whom to hit, whom to avoid — this is no longer a question that technology alone can fully resolve. It is a choice. The most important point here is that the choice is not made at that instant. When you bought the car, when the system was designed, the answer to this question had already been written in. The engineer, while writing the code, had already made a choice on behalf of some future “you.” You merely executed that choice.
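To make that concrete, consider a deliberately toy sketch. Nothing in it comes from any real vehicle’s software; the priority list, the function, and the names are all invented for illustration. But it shows where the choice actually lives: in a constant that an engineer committed long before the rainy night.

```python
# Hypothetical sketch only -- not any real autonomous-driving logic.
# The entire moral content of the system sits in this one list,
# written and reviewed long before any emergency occurs.
PROTECT_ORDER = ["pedestrian", "other_vehicle", "guardrail"]  # most protected first

def choose_maneuver(options: dict) -> str:
    """options maps each available maneuver to what it would collide with.
    Returns the maneuver that hits the least-protected item on the list."""
    return max(options, key=lambda m: PROTECT_ORDER.index(options[m]))

# On the rainy night, the car merely evaluates a decision made years ago:
choose_maneuver({
    "brake_straight": "pedestrian",
    "swerve_left": "other_vehicle",
    "swerve_right": "guardrail",
})  # -> "swerve_right"
```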
Now consider a less obvious but equally real scenario. In hospitals, especially when resources are scarce, doctors must make priority judgments. Who gets a piece of equipment first? Who gets treatment first? Who can wait? Who cannot? More and more hospitals are now introducing AI systems to assist with these judgments. The system makes recommendations based on age, medical history, probability of survival, and other factors. This appears more scientific and fairer. But the problem has not gone away; it has only changed form. Who is more deserving of being saved first? The young person, or the one with the higher chance of survival? The one who has contributed more, or the one who is more vulnerable? The system will produce an “optimal solution,” but that “optimum” is a value judgment in disguise. It is hard to say how this differs in essence from the trolley problem.
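To see how an “optimal solution” smuggles in a value judgment, look at a toy scoring function. The fields, the weights, and the arithmetic are all hypothetical, invented for this essay rather than taken from any real triage system. The point is only that every number in it quietly answers one of the questions above.

```python
# Hypothetical illustration -- the fields and weights are invented,
# not drawn from any real hospital system.
def triage_score(age: int, survival_prob: float, comorbidities: int) -> float:
    W_YOUTH, W_SURVIVAL, W_FRAILTY = 0.3, 0.5, 0.2  # who chose these weights?
    youth = max(0.0, (100 - age) / 100)              # favors the young
    frailty = min(1.0, comorbidities / 5)            # penalizes the already sick
    return W_YOUTH * youth + W_SURVIVAL * survival_prob - W_FRAILTY * frailty

# Sorting patients by this score produces a tidy queue. The queue is
# exactly as neutral as the three weights -- which is to say, not at all.
```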
If we widen the lens further, the question becomes even sharper. In modern warfare, AI is already participating in target identification and decision support. A system makes a determination on whether a given location is a military target. Suppose the system reports: at this location there is an 80% probability that this is an enemy facility, and a 20% probability that it is a school. Strike, or do not strike? A human commander would at least recognize that this is a moral choice. But when the system is participating in the decision — and at high tempo becomes the principal basis for it — the choice gets reduced to a parameter. What is the risk threshold? What probability is acceptable? Who bears the cost of a wrong call? This is no longer a philosophical question. It is an engineering question.
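Written out, the reduction is almost embarrassingly small. The following is purely illustrative; the constant, the function, and its signature are assumptions made for the sake of the argument, not a reference to any real targeting system. But this is the shape the question takes once it becomes engineering:

```python
# Purely illustrative -- the threshold value is an assumption, not a
# quotation from any real system.
RISK_THRESHOLD = 0.80  # what probability is acceptable? Someone set this number.

def authorize_strike(p_military_target: float) -> bool:
    # The 80% facility / 20% school dilemma collapses into one comparison.
    # A commander's hesitation has nowhere to live inside this function.
    return p_military_target >= RISK_THRESHOLD
```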
Many people, on reaching this point, suddenly see the shift. The trolley problem used to happen in the moment, in the field. Now it happens inside the system. Once, a human being made a decision in real time. Now, the rule is written in advance. You think you are “using technology,” but in many of the situations that matter, technology is making choices on your behalf. Recommendation systems decide what you see. Credit systems decide how much you can borrow. Hiring systems decide whether you get a chance. Self-driving systems decide what path is taken in an accident. Medical systems decide how resources are allocated. What these systems have in common is that they are all making choices. And those choices often have no correct answer.
The real danger of the situation is not whether AI will make a mistake. The real danger is that we are gradually getting used to handing this kind of question — questions with no correct answer — over to systems. Systems do not hesitate. Systems do not reflect. Systems do not bear responsibility. They only execute. When things go smoothly, this looks efficient, rational, even more “objective” than human beings. But the moment something goes wrong, we discover a void: who actually made this decision? The engineer? The company? The algorithm? Or no one?
This is precisely the context in which the role of the black-collar worker emerges. A black-collar worker is not a programmer, and not a manager in the traditional sense. Their work is not to write code or to optimize a process. Their work is to stand between the system and the people it affects, dealing precisely with the questions that “cannot be solved by code.” When a system is about to launch, they ask: is this rule reasonable? When a model produces an output, they judge: is this acceptable to act on? When conflict arises, they decide: whom do we protect first? When the system fails, they bear the burden: who is responsible? In other words, the black-collar worker is not there to increase efficiency. They are there to draw boundaries.
Here is a distinction that is easy to miss. The white-collar worker’s value lies in making things run more efficiently. The black-collar worker’s value, sometimes, lies in making the system stop. When everything appears to be “optimizing,” when the data is telling you to push forward, the black-collar worker must be capable of saying: wait a moment. In many situations, that one sentence matters more than any algorithm.
Many will ask: does this mean the future needs a new wave of “people who understand philosophy”? The answer is not so simple. The black-collar worker is not a person with one specific academic background; they are a structure of capacities. They must understand technology, and they must understand people. They must be able to follow what an engineer is saying, and to feel what an ordinary person fears; they must be able to face data, and able to face responsibility. Most importantly, they must be able to make a judgment when there is no standard answer. No course can fully teach this.
So what the trolley problem really reminds us of, in the AI era, is not “how to make the right choice.” It is a more direct question: when the choice has already been written into the system, and you are using these systems every day — have you ever stopped to ask who made that choice for you?
The trolley problem is no longer an exercise in a philosophy class. It has become part of our daily lives. And in this world, what matters most is not whether AI can make choices, but —
Who has made the choice for you.