OPINION: Wrong answers, faster: Meet Nevada's new unemployment AI overlord

Laid off and hoping for some financial help while you look for another job? Nevada is letting a Google-run artificial intelligence (AI) program write the rulings on your unemployment appeal. You might not even know it's happening. You can't opt out. And if it gets the answer wrong, you may not even know why.
The state's Department of Employment, Training and Rehabilitation (DETR) has rolled out a $2.6 million AI tool that comes into play when an applicant appeals the denial of an unemployment benefit claim. The tool reviews hearing transcripts, applies Nevada law and drafts decisions on benefit appeals. A state employee looks it over before it goes final.
DETR calls this oversight. How reassuring.
We've seen this movie before, and we know how it ends. UnitedHealth Group and Cigna have faced lawsuits for using AI-driven and algorithmic tools to deny medical claims, systems with reportedly staggering error rates, where "human review" meant doctors spending an average of 1.2 seconds per denial. Patients lost coverage. Some lost their lives.
The companies called it efficient. Cheap is the more honest word. Nevada is importing that same type of model into unemployment benefits, a program that exists precisely because people have nowhere else to turn.
Somewhere in Nevada right now, someone is calling their landlord to beg for two more weeks while AI decides whether they deserve help.
The Nevadans appealing unemployment denials are not abstractions. Many come from communities that have already endured round after round of safety net cuts such as Supplemental Nutrition Assistance Program reductions and Medicaid tightening. Automating the decision about whether they eat this month isn't innovation. It's the state deciding that their crisis doesn't warrant a human being's full attention.
As of summer 2024, DETR still had more than 10,000 outstanding appeals, 1,500 of them left over from the pandemic. The desire to move faster is understandable. AI is not the enemy here. There's a version of this that could work. But speed without accuracy and without genuine human oversight isn't efficiency. It's indifference. This isn't the version that works.
DETR frames this purported leap in efficiency as reducing a three-hour process to as little as five minutes. That framing is doing a lot of work.
The five-minute figure doesn't include the mandatory human review, which DETR's spokesperson says adds 10 to 30 minutes per case. For complex appeals where a human might take three hours, that's a meaningful improvement. For simpler cases that don't take three hours to begin with, the efficiency gain shrinks to almost nothing. The headline number of five minutes is accurate in the most favorable possible scenario and misleading everywhere else.
Accuracy is the deeper problem. DETR Director Christopher Sewell acknowledged that during testing, the tool cited incorrect Nevada statutes and failed to pull information from all hearing documents, issues he says have since been addressed. The contract with Google required a 90 percent accuracy rate. Research on similar large language models suggests actual error rates run significantly higher. These systems don't "know" Nevada law. They predict what a correct answer should look like and can deliver wrong answers with exactly the same confidence as right ones.
Perhaps tellingly, DETR also quietly shelved a separate AI project for invoice processing after it failed to meet promised benefits and expectations. One AI application inside the same agency, during the same push, has already washed out.
DETR's entire risk mitigation strategy rests on one premise: A human reviews every AI-generated ruling. Director Sewell has been emphatic that two state workers are involved in each decision. The AI doesn't make the final call.
What he's describing is not independent review. It's automation bias.
Automation bias is one of the most well-documented findings in human factors research, studied specifically in government decision-making contexts. Human reviewers systematically defer to machine-generated recommendations even when those recommendations are wrong, even when contradictory information is right in front of them.
This isn't a character flaw or a failure of diligence. It's a predictable cognitive response to authoritative-looking output, and AI is specifically engineered to produce output that reads as authoritative and polished. That's what makes errors hard to catch and deference easy to justify.
The workplace dynamics at DETR make this worse. A state referee reviewing an AI ruling isn't exercising independent judgment in a vacuum. They're being asked to second-guess, in 10 to 30 minutes, a $2.6 million system their agency director has publicly championed, all inside a process explicitly designed for speed. The referee who consistently rejects AI outputs isn't the diligent employee — they're the bottleneck, and they know it.
Nevada Sen. Skip Daly (D-Sparks) noted this exact problem using 12 words: "You get used to it, and then you get comfortable with it." I'll use one: complacency.
The risk isn't catastrophic failure on day one. It's normalization. Reviewers get comfortable. The review becomes a formality. The safeguard exists on paper while quietly ceasing to exist in practice.
None of this is happening in a legal vacuum. Under Mathews v. Eldridge, the Supreme Court's framework for due process in government benefit cases, courts weigh the private interest at stake, the risk of erroneous deprivation and the government's interest in efficiency. People appealing unemployment denials have lost their jobs and been denied benefits.
Short of criminal proceedings, it's hard to name a higher private interest. The government's efficiency interest is real, but it doesn't override everything else. Not when the tool could hallucinate legal citations. Not when the safeguard depends on a cognitive phenomenon that decades of research says pushes reviewers toward rubber-stamping.
And not when the people whose rights are at stake might not even know it's happening. It's unclear whether claimants will be informed that a Google AI tool is processing their appeals. There is no consent requirement. There is no opt-out. State Sen. Dina Neal (D-North Las Vegas) says Nevada is contracting away citizens' rights without their knowledge. She's not wrong.
The Biden-era federal guidance that would have classified this project as presumptively rights-impacting, triggering notice and appeal requirements, was rolled back by the Trump administration in January 2025. There is no statute, no regulation, no independent oversight body standing between DETR's AI tool and the Nevadans whose benefits depend on it getting the answer right.
Neal says she'll bring an AI oversight bill next session. The people filing unemployment appeals today can't wait that long.
DETR's rollout is the most aggressive expansion of AI in Nevada government to date, but it won't be the last. The Nevada Department of Motor Vehicles already uses an AI chatbot and is building a more advanced one. The Silver State Health Insurance Exchange has deployed an AI agent. The state's AI policy implicitly encourages agencies to find new use cases. Whatever oversight Nevada accepts or fails to demand here becomes the template for every other interaction Nevadans have with their government. Whatever the state gets away with here, it will do again.
AI in government isn't inherently wrong. What is wrong is choosing speed over accuracy when the stakes are this high — and then calling that progress. The least we owe people in crisis is a human being who's actually paying attention.
Bryan Driscoll is a Las Vegas-based human resources consultant who advises employers on workforce compliance and legal risk.

