Session Focus: Build on the 3-step AI workflow model Julia Meyer and Becca Benson presented at our October 2025 conference by validating the underlying business problem with human context and committing to practical next steps that stick.
This hands-on AI Practical Lab builds directly on last month's AI workflow challenge, shifting the focus from generating better outputs to making sure we're solving the right problem before accelerating with AI.
Working from the same challenge as last month's ✨ AI Practical Lab: AI-Powered Workflows, participants will refine the 3-step model, surface root causes, and apply ELE's Four-Step Model to clarify why a problem exists and which action is truly worth committing to. The emphasis is on human judgment, shared context, and practical decision-making, so AI-enabled work translates into outcomes that actually stick.
The Four-Step Model gives the lab a practical way to move from clarity to commitment while keeping humans at the center of AI-enabled work.

"AI gives us amazing options, but I need to see the big picture. How is this actually moving the needle for the business?" -- Keri Kersten
This discussion offered a timely reminder: most capability gaps don’t start with a learning problem—they start with a clarity problem.
As Nicole DeFalco put it,
"Clients don’t come to us with problems—they come to us with solutions wrapped in assumptions."
That framing resonated because it reflects the daily reality talent leaders face. We’re often handed a fully formed request—train this, roll that out, scale it fast—and rewarded for speed, not scrutiny. But when we rush to deliver, we risk solving the wrong problem beautifully.
The most powerful moments came from slowing down and asking harder questions:
- What business outcome is actually off?
- What evidence says this is a skills issue, and not incentives, workload, role design, or market conditions?
- What would people need to do differently in real work for results to change?
That shift—from order-taker to problem framer—is where team capability actually starts.
Another insight landed hard: training can’t compensate for an environment that discourages the very behaviors we’re trying to build. If KPIs, job design, time pressure, or incentives pull people in the opposite direction, no amount of content will stick. Capability has to be designed into the work itself.
The group also challenged the instinct to “train everyone.” Instead, the conversation moved toward targeted pilots with high-impact cohorts—designing with the people closest to the outcome, learning fast, and scaling only once behavior change is visible. That’s not slower; it’s smarter.
AI showed up not as a content engine, but as a thinking partner—helping leaders rehearse stakeholder conversations, pressure-test assumptions, and surface blind spots before committing resources. Used this way, AI didn’t replace judgment; it sharpened it.
And as Angelica Stilling summed up succinctly, the conversation kept returning to that same line: solutions wrapped in assumptions.
It captured the discipline required of leaders right now: not more activity, not faster delivery, but clearer thinking.
For talent leaders, the takeaway is straightforward but demanding:
Building team capability isn’t about delivering better learning. It’s about designing the conditions where learning shows up as different decisions, actions, and results.
If your team is busy building programs but struggling to move the needle, the most impactful move may be to pause—and ask better questions before you build anything at all.
