✨ AI Practical Lab: From Insight to Influence — Telling Stories with Business Impact


In last week’s ELE hands-on AI Practical Lab, a simple truth kept surfacing: when a team can’t see the work clearly, they can’t change the work confidently.
That’s why so many learning initiatives stall—not because the idea is wrong, but because the story is unreadable. The value stays invisible. Leaders move on.
Dustin Brewer put it bluntly in a way most talent leaders will recognize immediately:

"If the C-suite won't read, they have to see it first."
Dustin Brewer, HumanSide

In other words: if your “insight” can’t travel in a glance, it won’t survive the week.
But the lab didn’t stop at getting attention. It pushed into the harder question: what makes people trust the message enough to act on it? That’s where Nicole DeFalco grounded the group with the reminder that capability doesn’t transfer through outputs alone:

"It still has to land—human to human."
Nicole DeFalco

AI can speed the draft. It can’t replace the relationship.

Three insights for leaders designing team-based learning experiences
1) Legibility is a capability strategy

Teams don’t adopt what they can’t quickly grasp, repeat, and share. A crisp one-page visual, a clean infographic, even a “cover story” mock—these aren’t marketing flourishes. They’re capability tools, because they compress complexity into something a team can align around.
If your experience design ends with "we trained them," you've stopped too early. The real question is: what will the team use together next Tuesday when the pressure hits?

2) Story isn’t fluff — it’s the operating system for alignment

In the lab, “story” wasn’t treated as creativity. It was treated as structure: What changed? What’s at stake? What choice are we asking leaders to make? What proof would they trust?
That’s what makes work decidable in the room. And that’s what makes a team-based learning experience feel like real work—not a field trip.

3) AI changes roles — and raises the bar for human value

A helpful reset surfaced as a throughline in the February 12th lab: the real shift isn't "AI is going to take my job." It's that AI changes how value gets created—and makes the human worker more valuable when the work matters most.

Ed brought in a quote from Shyam Sankar (CTO of Palantir) that set a new standard for how we talk about AI and work: “The American worker will wield AI to do more with less—and become more productive and valuable as a result.”
That’s a stronger standard for how to view AI’s role in the workforce: not as a replacement narrative, but as a multiplier—where humans become higher-impact by focusing on judgment, clarity, credibility, and connection.

The lab reinforced what that looks like in practice. AI can accelerate drafts and options, but humans still own the parts that create trust and movement—especially verification and delivery.

"We're not putting the person out of the loop. I'm going in there and verifying."
Edward DesRosiers

AI increases speed. Human ownership increases responsibility. And that’s exactly what makes the human worker more valuable, not less.

✨ What to do Monday: four moves that build team capability in real work

Run a 45-minute story prototype sprint. Pick one live initiative—new workflow adoption, manager friction, AI integration—and build a one-page artifact plus a 30-second talk track that answers one question: what decision should a leader make differently after seeing this?
Use a simple rhythm to keep the work moving: Clarify → Co-Create → Customize → Commit. Clarify the decision and audience. Co-create a first draft with peers and AI. Customize it to stakeholder reality. Commit by piloting it in the flow of work and capturing what changed.

ELE's 4 C Model for AI Workflows

Make iteration explicit. Your first version should be fast—and imperfect—so people can react to something real instead of debating abstractions. Two quick feedback loops (peer + end-user) will teach your team more than a “perfect” deck ever will.
Treat prompting and verification as part of the experience design. Prompting isn’t a personal party trick; it’s stakeholder translation with constraints. Verification isn’t optional; it’s what makes speed safe. Give your team a repeatable checklist (sources, SME review, human sign-off) so credibility scales with adoption.

Continue the conversation

If you’re working on the toughest talent challenges—and you want capability that shows up in real work (not just course completions)—join the continuing thread on the ELE Idea Exchange:

2026 FEB: AI Lab Pre-chat & Post-Chat
