A warehouse worker in the American Midwest goes through a tiny, barely memorable moment dozens of times during a shift. They hesitate. Perhaps they glance out a window, or stretch their back. When thirty minutes of that accumulate over a shift, the system issues a warning. After an hour, the disciplinary procedure begins.

After two hours, they are fired. No supervisor calls. There is no discussion. No one looks them in the eye. The algorithm made the decision.

| Field | Details |
|---|---|
| Topic / Concept | Algorithmic Management (ALMA) |
| Also Known As | AI-driven management, digital supervision, automated workplace control |
| Key Research Project | ALMA-AI Project — multi-country European research initiative |
| Countries Involved | Eight European nations |
| Major Companies Using It | Amazon, Uber, Lyft, McDonald’s, Walmart |
| Amazon Workforce Size | 3+ million drivers globally |
| Walmart Workforce Affected | 2.1 million employees |
| McDonald’s Deployment | Orquest scheduling system across 70,000 employees |
| EU Regulatory Response | AI Act — classifies many workplace AI systems as “high-risk” |
| Key Legal Concern | Algorithmic wage discrimination, antitrust violations, GDPR data rights |
| Reference / Further Reading | ALMA-AI Project |

This is not some far-off corporate nightmare sketched on a Silicon Valley whiteboard. Millions of workers already deal with it daily, from Uber drivers navigating surge pricing they cannot decipher to Amazon warehouse workers watched by systems precise enough to account for every moment of inactivity. The algorithmic boss has arrived: quietly, efficiently, and with virtually no public debate.
The ALMA-AI project, a research initiative spanning eight European nations, has spent considerable time investigating what this shift actually means for ordinary workers, not in the abstract but in concrete, tangible terms. What it found is sobering. Work intensity is rising. Autonomy is shrinking.
Social isolation is growing. Anyone who has ever had a bad manager should worry about the mounting toll on mental health, because a bad human manager could at least be argued with.
Put simply, algorithmic management is the use of software and artificial intelligence to do work that managers used to do: assigning tasks, scheduling shifts, monitoring performance, assessing behavior, and making consequential decisions about people's livelihoods.
That includes ride-sharing applications that pair drivers with passengers, logistics systems that track which shelves employees visit and how long they linger, and call-center dashboards that score resolution times and tone of voice. And most people have no idea how quickly the scope has expanded.
Amazon’s Time Off Task algorithm is probably the most notable example. No human supervisor could match the precision with which it tracks inactivity, and it acts on that information without hesitation or context. When employees challenge the results, managers at these facilities have reportedly said there is nothing they can do.
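To make the pattern concrete, here is a minimal Python sketch of a threshold-based "time off task" policy, using the warning, disciplinary, and termination marks from the opening anecdote. Every name, threshold, and data field is hypothetical; this is an illustration of how such an escalation rule works in principle, not Amazon's actual implementation.

```python
from dataclasses import dataclass

# Hypothetical thresholds taken from the opening anecdote: a warning at 30
# minutes of accumulated "time off task", a disciplinary case at 60, and
# termination at 120. Names and structure are illustrative, not Amazon's code.
WARNING_MIN = 30
DISCIPLINE_MIN = 60
TERMINATION_MIN = 120

@dataclass
class WorkerDay:
    worker_id: str
    idle_minutes: float  # total inactivity detected over one shift

def escalation_action(day: WorkerDay) -> str:
    """Map accumulated idle time to an automated action, with no human review."""
    if day.idle_minutes >= TERMINATION_MIN:
        return "terminate"
    if day.idle_minutes >= DISCIPLINE_MIN:
        return "open_disciplinary_case"
    if day.idle_minutes >= WARNING_MIN:
        return "issue_warning"
    return "no_action"

# The unsettling part is what is missing: no field for *why* the worker paused,
# no appeal step, and no supervisor in the loop before the action fires.
print(escalation_action(WorkerDay("W-1042", idle_minutes=75)))  # -> open_disciplinary_case
```

What the sketch makes visible is less the rule itself than everything the rule has no field for: context, explanation, appeal.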
This raises an odd and unsettling question: if the algorithm makes the decision, what exactly is the manager doing? Researchers have coined a term for the pattern: "checkers." People who verify algorithmic outputs and carry accountability for decisions they played no part in are not managers in any conventional sense.
It’s difficult to ignore what that arrangement loses. The four hours a McDonald’s floor manager used to spend building the following week’s schedule were never just administrative routine. Those were the hours in which a manager noticed that two workers on the same shift created needless friction, or that one employee consistently struggled on Friday mornings.
By rolling out the Orquest scheduling system across 70,000 workers worldwide, McDonald’s has cut that four-hour process to thirty minutes. Undeniably efficient. But where did the institutional knowledge that lived inside those four hours go?
Recently, two brothers who drive for Uber ran a small experiment: sitting in the same room, they opened their apps at the same time and looked for rides. Both screens showed nearly identical jobs. The pay was slightly different. Neither could explain why.
Veena Dubal, a law professor at the University of California, contends that rideshare companies engage in what she terms “algorithmic wage discrimination,” tailoring offers to drivers based on all the information the app has about them, including how long they usually work, what kind of compensation they typically accept, and their behavior patterns over thousands of trips. Uber disputes this description.
But the brothers’ experiment sits there, quiet and inconvenient, suggesting that something opaque is happening beneath the surface.
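For illustration only, here is a deliberately simplified Python sketch of what personalized pay could look like in principle: a base fare discounted according to how often a driver has historically accepted low-paying offers. Every function name, parameter, and number here is invented for this example; it is not Uber's logic, which the company disputes exists at all. It only shows why two drivers in the same room could see different pay for near-identical trips.

```python
# A hypothetical sketch of the pattern Dubal calls "algorithmic wage
# discrimination": once an offer depends on a per-driver behavioral profile,
# identical trips can yield different pay. All values are illustrative.

def personalized_offer(base_fare: float, accept_rate_at_low_pay: float) -> float:
    """Lower the offer for drivers whose history shows they accept low-paying trips.

    accept_rate_at_low_pay: fraction of below-average offers this driver has
    historically accepted (0.0 to 1.0), inferred from past trip behavior.
    """
    # Up to a 15% discount for the most "price-tolerant" drivers (made-up figure).
    discount = 0.15 * accept_rate_at_low_pay
    return round(base_fare * (1.0 - discount), 2)

# Two drivers, same room, same trip: the profiles differ, so the pay differs.
print(personalized_offer(20.00, accept_rate_at_low_pay=0.9))  # -> 17.3
print(personalized_offer(20.00, accept_rate_at_low_pay=0.2))  # -> 19.4
```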
Algorithmic pay is especially troubling because it strips away the rough-and-ready justifications that usually accompany wage differences. Seniority, experience, and demonstrated skill are imperfect grounds for unequal pay, but they are at least visible and arguable.
When a black-box algorithm produces different compensation for comparable work, there is no logic to argue against. The Federal Trade Commission has begun examining whether any of this falls under antitrust law. Whether existing legislation can address it at all remains an open question.
Reading the ALMA-AI findings alongside research from American business schools leaves the impression that the organizations moving fastest here are also reflecting the least. Many workplace systems would be classified as high-risk under the EU’s incoming AI Act, requiring documented impact assessments on health and safety.
Regulatory pressure of that kind seems long overdue. After protracted legal battles, workers in Europe have won partial rights under the GDPR to see some of the data used to judge their performance and set their pay. American workers have not yet fought that battle, let alone won it.
None of this means algorithmic management is beyond saving. The ALMA-AI researchers are careful to note that these systems, thoughtfully designed and built with worker input, could actually improve safety: monitoring fatigue, spotting risks a tired supervisor might miss, and flagging hazardous conditions.
The technology itself is not the villain. The question is who it is built to serve, and whether the people most affected by it have any say in how it is run.
The most unsettling part of watching this unfold across industries is not the surveillance itself. It is the slow erasure of the kind of judgment that only experience produces: the manager who once stood on a factory floor, knew the names of the people around them, and understood that the numbers on a screen told only part of the story. One decision at a time, that knowledge is being automated away, and it is not clear anyone is keeping track of what we are losing.
