Report: DOGE supercharges mass-layoff software, renames it to sound less dystopian

“It is not clear how AutoRIF has been modified or whether AI is involved in the RIF mandate (through AutoRIF or independently),” Kunkler wrote. “However, fears of AI-driven mass-firings of federal workers are not unfounded. Elon Musk and the Trump Administration have made no secret of their affection for the dodgy technology and their intentions to use it to make budget cuts. And, in fact, they have already tried adding AI to workforce decisions.”

Automating layoffs can perpetuate bias, increase worker surveillance, and erode transparency to the point where workers don’t know why they were let go, Kunkler said. For government employees, such imperfect systems risk triggering confusion over worker rights or obscuring illegal firings.

“There is often no insight into how the tool works, what data it is being fed, or how it is weighing different data in its analysis,” Kunkler said. “The logic behind a given decision is not accessible to the worker and, in the government context, it is near impossible to know how or whether the tool is adhering to the statutory and regulatory requirements a federal employment tool would need to follow.”

The situation gets even starker when you imagine mistakes on a mass scale. Don Moynihan, a public policy professor at the University of Michigan, told Reuters that “if you automate bad assumptions into a process, then the scale of the error becomes far greater than an individual could undertake.”

“It won’t necessarily help them to make better decisions, and it won’t make those decisions more popular,” Moynihan said.

The only way to shield workers from potentially illegal firings, Kunkler suggested, is to support unions defending worker rights while pushing lawmakers to intervene. Kunkler called on Congress to ban the use of shadowy tools that rely on unknown data points to gut federal agencies “without requiring rigorous external testing and auditing, robust notices and disclosure, and human decision review,” arguing that rolling out DOGE’s new tool without more transparency should be widely condemned as unacceptable.

“We must protect federal workers from these harmful tools,” Kunkler said, adding, “If the government cannot or will not effectively mitigate the risks of using automated decision-making technology, it should not use it at all.”
