Author

Jeremy Lockhorn

SVP, Innovative Technology, 4As

Topic

  • Artificial Intelligence
  • Future of the Agency
  • Future of the Industry
  • Generative AI
  • Technology

Many enterprises are automating fiction. That is the insight that kicked off HumanX, one of the largest enterprise AI conferences in the country. The opening keynote cited a study that found 14 of 20 enterprise AI deployments were automating how organizations thought they worked, not how they actually did. Processes were undocumented, informal, inherited from structures that no longer exist. This is the challenge any enterprise faces when operationalizing AI. And for agencies, which are moving en masse from AI experimentation to operationalization, it’s a reminder that readiness and governance are perhaps more important than tech and tooling.

HumanX 2026 was a whirlwind four days featuring a large exhibit floor, hundreds of sessions to choose from and a cross-industry audience ranging from CMOs to Cisco’s president to Ray Kurzweil. The underlying questions were ones agency leaders know well: Are we deploying AI well? Are we measuring the right things? Is what we thought was true six months ago still true?

What follows is a cross-conference synthesis organized around two frames: what confirmed what we already believed, and what didn’t.

What We Expected

Human judgment is still the last moat, for now. Every creative session circled back to the same point: AI accelerates production; it doesn’t replace the judgment about what’s worth making or whether it’s any good. The most vivid example came from the ComfyUI CEO describing how the Svedka Super Bowl campaign was built: a spot featuring dancing humanoid robots, produced by parallel production teams with crowdsourced choreography piped directly into a node-based workflow. His framing on where humans remain irreplaceable: “You can’t rely on AI to give you the answer on taste.” The caveat: Ray Kurzweil put a shelf life on this consensus. His argument (and his track record on timeline predictions is unusually strong) is that AI isn’t just assisting human creativity; it’s merging with it. How durable the human-judgment frame will be through the next planning cycle is a real question.

The real barrier is organizational, not technical. The gap between AI capability and AI deployment is not an engineering problem. It’s a trust problem, a process problem and a leadership alignment problem. This showed up consistently across the week, including as the opening frame from the conference chair. The capability is largely there. What’s blocking enterprise AI is change management. The hardest problem, as usual, turns out to involve humans.

Culture and speed are the real competitive differentiators. Jeetu Patel (Cisco) said it most directly, but it echoed across the week. The technical stack is increasingly commoditized. What differentiates organizations now is the ability to build AI fluency at density and move faster than the market recalibrates, which has practical implications for how agencies prioritize investment.

What Surprised Us

Five things that cut against the grain.

Most organizations are automating the wrong thing. When processes are undocumented and informal (true of many agency workflows), AI accelerates the confusion more than the work. The question “do we actually understand our own process?” is uncomfortable. Now is the time to get comfortable with it: reinvent processes where necessary to take full advantage of AI, and document them accurately.

Only 3% of employees are genuinely AI proficient. Section AI put a precise number on a problem most organizations can sense but can’t quantify. Outside the Bay Area, 85% of employees use AI as a Google replacement: search, summarize, move on. Just 3% can get real, repeatable value from the tools in front of them. Login rates and license counts are measuring the wrong thing. The leading indicator is proficiency, and for most organizations it’s nowhere near where leadership believes it to be. Upskilling the workforce is an urgent need, and the 4As Generative AI Certification program is a great place to start.

AI is intensifying workloads, not reducing them. One study found that 67.1% of workers say AI has led to higher output expectations, not lighter loads. About half have absorbed more work without adding headcount, while less than 20% have been upskilled. Similar findings are showing up across other research studies on knowledge workers more broadly. The public narrative centers on displacement. The actual worker experience is intensification. For agencies billing on outcomes rather than hours, this is a talent and retention problem dressed as a productivity gain.

Workflow architecture is the real skill gap. The Svedka Super Bowl story is being told as an AI creative story. It’s actually a workflow infrastructure story and the distinction has practical stakes. With a prompt-based tool, you describe what you want and iterate from there. With a workflow-based system, you design the constraints once — brand parameters, output specs, approval logic — and the system enforces them on every run. One is a conversation; the other is a pipeline. The advertising industry is largely still at the prompting stage. Getting to production-grade AI means building the workflow infrastructure, not just the prompts.
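The prompt-vs-pipeline distinction above can be sketched in a few lines of code. This is a hypothetical illustration, not the architecture of ComfyUI or any real tool: the `BrandConstraints`, `validate` and `run_pipeline` names are invented for the example. The point is that in a workflow-based system the constraints (brand parameters, output specs, approval logic) are defined once and enforced mechanically on every run, rather than restated in each prompt.

```python
from dataclasses import dataclass

# Illustrative sketch only: all names and rules below are assumptions,
# not drawn from any real product.

@dataclass(frozen=True)
class BrandConstraints:
    max_words: int = 50
    banned_terms: tuple = ("guarantee", "free")
    required_tag: str = "#YourBrand"

def validate(copy: str, c: BrandConstraints) -> list:
    """Return a list of violations; an empty list means the copy passes."""
    issues = []
    if len(copy.split()) > c.max_words:
        issues.append(f"over {c.max_words} words")
    for term in c.banned_terms:
        if term.lower() in copy.lower():
            issues.append(f"banned term: {term}")
    if c.required_tag not in copy:
        issues.append(f"missing tag: {c.required_tag}")
    return issues

def run_pipeline(generate, constraints: BrandConstraints, retries: int = 3) -> str:
    """Call a generator (e.g. a model API) until its output passes the gates."""
    for _ in range(retries):
        copy = generate()
        if not validate(copy, constraints):
            return copy
    raise RuntimeError("no compliant output within retry budget")
```

In a prompt-based conversation, a human eyeballs each output against these rules; in the pipeline, `validate` runs on every generation automatically, which is what makes the approach production-grade.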

Inference costs are heading to the P&L. Two speakers, without coordinating, made the same structural prediction. Greg Shove of Section AI: inference[1] is his company’s third-biggest expense and will be second by year-end, with business leaders eventually managing an annual inference budget the way they manage a software budget. Jeetu Patel (Cisco): “Your second biggest cost, if not your biggest cost over time, will become the cost of tokens[2].” One runs a 40-person AI-native company. The other is president of one of the world’s largest technology firms. Same warning, different vantage points. Token spend is not an IT procurement consideration. It’s heading toward the same boardroom conversation as headcount.
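Since tokens are becoming the billing unit, an inference budget reduces to simple arithmetic. The sketch below shows the shape of that calculation; the per-token prices and usage volumes are illustrative assumptions, not quotes from any vendor or from the speakers.

```python
# Back-of-envelope annual inference budget. Prices and volumes below
# are assumed for illustration only.

PRICE_PER_1M_INPUT = 3.00    # USD per million input tokens (assumed)
PRICE_PER_1M_OUTPUT = 15.00  # USD per million output tokens (assumed)

def annual_inference_cost(requests_per_day: int,
                          input_tokens: int,
                          output_tokens: int) -> float:
    """Estimate yearly spend given average tokens per request."""
    daily = (requests_per_day * input_tokens / 1e6 * PRICE_PER_1M_INPUT
             + requests_per_day * output_tokens / 1e6 * PRICE_PER_1M_OUTPUT)
    return daily * 365

# Example: 10,000 requests/day, averaging 2,000 input and 500 output tokens
cost = annual_inference_cost(10_000, 2_000, 500)  # roughly $49k/year
```

Even at modest volumes the number lands in budget-line territory, which is why both speakers expect it to be managed like a software budget rather than buried in IT spend.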

A Signal Worth Watching

Kurzweil’s session included a shocking anecdote. To draft his forthcoming book proposal, he asked Gemini to describe a book he hadn’t written yet. What came back was credible enough that he chose to rework it rather than discard it. His conclusion: “AI creativity is real, and it’s already merging with human creativity.”

Most of HumanX positioned AI as a production accelerant that humans then judge. Kurzweil is pointing at something more structural, not a sequential handoff but a merger where the source of an idea can’t always be cleanly attributed. For agencies whose core value proposition is creative judgment, that’s a first-order strategic question. 

 

Patel closed his session with a compelling piece of advice: don’t hold yourself accountable to current knowledge. Hold yourself accountable to growth. In a landscape where the rules rewrite themselves every few months, the relevant question isn’t what you know. It’s how fast you learn.

 

[1] Inference is the process of running a trained AI model to produce outputs; roughly, using AI rather than training it.
[2] Tokens are the basic unit of text in AI systems, generally equivalent to a word or part of a word. Tokens are used to train AI systems and are increasingly becoming the billing unit for AI platforms.


Jeremy Lockhorn

Jeremy Lockhorn spearheads several 4As GenAI resources such as the GenAI Blueprint, the GenAI Maturity Assessment Tool and more. These resources help member agencies navigate the opportunities and challenges in adopting and scaling GenAI and emerging tech. He speaks at 4As conferences and other industry events and writes about emerging tech in advertising, separating the hype from the real. Jeremy can be spotted interviewing agency leaders and innovative thinkers at events like CES and SXSW.