What Will Slow AI Automation? The Humans

Written by Dax Cross | Mar 2, 2026 9:10:04 PM

A recent Citrini Research article painted a dystopian 2028 scenario of economic collapse following the agentic automation of most knowledge work. This and other articles suggest the end could come quickly, given the rapid pace of AI technology development.

While we’re firm believers in AI and autonomous agentic systems fundamentally changing how work gets done, the article got us thinking about our own experience with technology that offers to automate decisions. Over the 20 years we spent building Revenue Analytics, we delivered software systems capable of automating a key commercial decision: pricing. And yet, full automation of pricing decisions remains rare in most industries today.

Our experience suggests that humans will accept some level of automation but ultimately want control of decisions for which they are accountable.

A 30-Year Test Case of AI Automation

Our father, Robert Cross, pioneered the field of Pricing & Revenue Management at Delta Air Lines in the early 1980s and ultimately led the deployment of software systems to help manage pricing across multiple industries.

Let’s focus on the hospitality industry. As far back as the 1990s, large hotel chains like Marriott, Hilton, and IHG deployed Revenue Management Systems (RMSs) to manage pricing and availability. Though these systems were capable of full automation, most were marketed as “decision support” systems that promised to provide guidance while letting users ultimately determine final prices and availability themselves.

In the late 2000s and early 2010s, Revenue Analytics partnered with hotel chains like IHG, Hyatt, and Marriott to add dynamic price optimization capabilities to these RMSs. Almost every RMS deployed in the industry has long had the ability to automatically deploy the optimized prices recommended by the algorithm in near real time. But very few hotels deploy these “autopilot” capabilities at scale or for long periods of time.

Our most recent RMS, N2Pricing, which debuted in 2020, allows users to configure automation. For example, many users set up the system to automatically deploy price changes of 10% or less versus the current price while requiring user approval for any larger price changes.
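A rule like this can be expressed as a simple routing check. The sketch below is illustrative, not N2Pricing’s actual logic; the function name and the 10% default threshold are our assumptions based on the configuration described above.

```python
def route_price_change(current_price: float, recommended_price: float,
                       auto_threshold: float = 0.10) -> str:
    """Route a recommended price to auto-deployment or human review.

    A relative change at or below the threshold deploys automatically;
    anything larger is queued for user approval.
    """
    relative_change = abs(recommended_price - current_price) / current_price
    return "auto-deploy" if relative_change <= auto_threshold else "needs-approval"

route_price_change(200.0, 218.0)  # 9% change -> "auto-deploy"
route_price_change(200.0, 230.0)  # 15% change -> "needs-approval"
```

The appeal of this pattern is that it lets users dial in how much authority they hand over: small, low-stakes moves happen on their own, while larger swings still pass through a human.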

What strikes us is that all of these RMSs included the technology to fully automate pricing decisions, essentially handing decision authority over to an algorithm working on the user’s behalf. These systems were, by any reasonable definition, AI: machines making predictions and recommendations with the ability to autonomously deploy price changes. And yet, they rarely delivered the promised levels of work automation.

Automation Requires 100% Trust

Most human resistance to automating decisions or work comes down to human nature. Humans are naturally threatened by the notion that technology could do their job as well as they do. As such, their initial reaction is skepticism, and they tirelessly seek observations to support that skepticism.

In deploying dozens of RMSs with pricing automation capabilities across a variety of industries, we have witnessed this skepticism firsthand. For example, during a pilot of a price optimization capability for an enterprise hospitality RMS, we were monitoring a hotel in Boston. The user had been accepting around 80% of the pricing recommendations, so the pilot, by all accounts, was going well. Then adoption dropped to zero.

As soon as this happened, we spoke with the hotel revenue manager. She said, “This system doesn’t know what’s going on. I don’t trust it.” When asked why, she explained that the Boston Marathon was coming up in a few weeks and the RMS was recommending a price reduction. “This system is so stupid. That’s our biggest event of the year!”

Our team investigated. We found that she had actually configured a price ceiling into the system. This configuration decision, made weeks earlier, capped all of the algorithm’s recommendations at the user-defined price ceiling. So the recommendation was that ceiling, which was lower than the current price set for the Boston Marathon. As it turns out, the optimization algorithm was recommending a price increase compared to the current price behind the scenes, but it looked like a decrease recommendation once her ceiling was applied.
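Mechanically, the ceiling acts as a clamp applied after optimization, so an upward recommendation can surface as an apparent cut. A minimal sketch, with illustrative prices we have invented for the example:

```python
from typing import Optional

def apply_ceiling(optimized_price: float, ceiling: Optional[float]) -> float:
    """Clamp the algorithm's recommendation to a user-configured ceiling."""
    return min(optimized_price, ceiling) if ceiling is not None else optimized_price

current_price = 400.0    # rate already set for Marathon week
optimized_price = 450.0  # behind the scenes, the algorithm wants an increase
ceiling = 350.0          # configured by the user weeks earlier

displayed = apply_ceiling(optimized_price, ceiling)
# displayed == 350.0: the capped recommendation lands below the current
# price, so the user sees a "decrease" the algorithm never intended.
```

The system was doing exactly what it was told; the human-entered constraint simply hid the algorithm’s real intent from the human.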

Setting aside the irony that a human-defined ceiling was causing the “stupid” price decrease recommendation, the story is instructive. We have consistently seen that people hold technology to a higher standard of trust than they hold themselves. Eighty percent confidence is generally pretty good for a human making a decision, but a single unintuitive observation out of a hundred machine-generated recommendations can undermine trust in a system and quickly send a user back to their “old ways.”

We are seeing the same thing today. AI is capable of doing incredible work already and getting better by the day, but a random hallucination or mistake undermines trust and makes people wary of letting an AI agent operate without human oversight. When corrected, the standard response of “You’re absolutely right!” may be polite, but it does not inspire confidence.

Why Automate Satisfying Work?

Another pattern that has emerged from 20+ years of deploying AI-enabled decision support and automation systems is that when people find their work satisfying, they are less likely to accept automating it. Again, hotel pricing provides some interesting examples.

N2Pricing’s algorithm takes into account the demand forecast (which the user can override when they have unique knowledge), competitor rates, and price sensitivity and elasticity. In our experience, every price recommendation we have seen makes sense when evaluated against these factors. Hotels that have used it on autopilot report strong results.

Despite the precision of the algorithm, we have consistently observed users tinkering with pricing. Though they almost always directionally accept the recommendation (i.e., raising the price following an increase recommendation), they often tweak the final price deployed to suit their tastes. For example, for a hotel priced at $200 per night next Tuesday, the system might recommend an increase to $220, and the user might push the price to $210. The math behind the algorithm would suggest that, over time, the hotel will make less money with this suboptimal price, but the user feels better with their fingerprints on the final price.
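The cost of that tweak can be made concrete with a toy demand model. Assume, purely for illustration, linear demand (rooms sold = a − b × price), with coefficients we chose so that the revenue-maximizing price is exactly the algorithm’s $220 recommendation:

```python
def nightly_revenue(price: float, a: float = 110.0, b: float = 0.25) -> float:
    """Revenue under a toy linear demand curve (rooms sold = a - b * price).

    With these illustrative coefficients, revenue peaks at a / (2b) = $220.
    """
    return price * (a - b * price)

nightly_revenue(220.0)  # 12100.0 -- the algorithm's recommendation (optimal)
nightly_revenue(210.0)  # 12075.0 -- the user's tweaked price
nightly_revenue(200.0)  # 12000.0 -- the status quo
```

A $25-per-night gap looks small, but systematically shading every recommendation, across hundreds of rooms and nights, adds up.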

Why the tinkering? We believe it boils down to the desire to control and explain a decision for which they are accountable. Many users also get satisfaction from changing prices for the next few weeks based on constantly evolving market dynamics. They have trained for this work. They are recognized as subject matter experts for the skills they have developed. And they don’t want to stop doing that work or hand it over to a machine. They like the work they do, feel a sense of accomplishment from it, and it makes them happy employees.

Automate Someone Else’s Team, Not Mine

The first two issues focus on why users are hesitant to automate their own work. But the nature of leaders is also a constraint on automation. For most corporate executives, the size of their team and budget reflects the power they have in the organization. But power isn’t the only factor. When leaders see the opportunity to make a change that would financially benefit the company, the next consideration is always risk: the risk to the company, but even more so to their own position and standing in the organization.

Here again, Pricing & Revenue Management technology provides a test case. Sticking with hospitality, it has been well documented for over 30 years, across dozens of studies (many of which we led), that RMSs deliver at least a 3–5% revenue uplift. Yet when evaluating deploying a new RMS, many leaders focus far more on the risk than the upside.

Leaders ask themselves questions like, “If we deploy a new system and immediately boost our revenue, does that make me look bad? Does it make my team look bad?” A couple of years ago, we saw a global enterprise Revenue Management leader decline to buy an RMS that would deliver automation and a proven 3% revenue uplift in favor of buying a Business Intelligence solution that gave the team access to more data for making manual pricing decisions. For us, that was clearly moving backward on the technology frontier, but for many leaders, personal risks outweigh the potential reward to the company.

Most importantly, the leader remains accountable for the decisions they own. If the leader turns decisions over to an AI agent, they may be rewarded in the near term for generating cost savings or headcount reductions, but now they are accountable for decisions made by that autonomous agent going forward. If the agent makes a mistake, that could cost the leader dearly, and last year’s cost savings will be long forgotten.

Based on this rationale, we expect leaders to hesitate to give up the power that comes with a larger team and budget. Furthermore, we expect them to resist retaining accountability for the work of AI agents. Much like users, leaders understandably do not want accountability without control.

Humans Are the Speed Limit on AI Automation

Our experience suggests that both individual contributors doing knowledge work today and their leaders will continue to be highly skeptical of fully automating decisions with AI agents. After all, automating one’s own work or one’s team’s work feels like an admission that the person or team was doing work anyone could do without unique knowledge or skill.

Knowledge workers do not see their decisions as automatable. They will continue to use LLMs to seek advice or perspective that they can take pieces from and draw their own conclusions. They will continue to embrace agents doing menial, time-consuming tasks like compiling data and summarizing reports, but giving away real decision authority? Just try dynamically pricing a Boston hotel during Marathon week with all eyes on you to hit your revenue plan.

AI technology will continue to advance at an exponential pace. But decades of deploying automation in the real world have taught us something the doomsday crowd keeps missing. Over our careers, the technology has never been the constraint. The humans always are.