Chuck Schultz

Stop Pitching the Demo. Start Shipping the Spec.

The gap between AI vision and enterprise deployment isn't technical. It's a packaging problem.

I help teams architect agentic workflows. We build a working prototype – often in a Claude Project – complete with a step map, design canvas, skills written in markdown, and a sharp system prompt. It feels real. It works.

But the real test comes at handover.

Every time, the engineering team asks the same question: "Is this a Claude thing… or can we actually ship this in production?"

That question is my signal. It means I haven't packaged the work clearly enough for them to run with it.

The reframe that changed my approach: Stop pitching the demo. Start shipping the spec. The prototype is just validation. The actual deliverable is documentation that speaks for itself when I'm no longer in the room.

Here's what effective packaging looks like:
1. Package the capability, not the platform. Deliver a clear step map, design canvas, defined skills in markdown, inputs, outputs, and success criteria (a minimal sketch of such a spec follows this list). The implementation technology is engineering's decision – not yours.

2. Write the expectations brief. One concise document: here's what needs to be deployed, here's how we'll measure success, here's what I'm deliberately not prescribing. Then hand it over.

3. Step back. The moment you stay attached to the tool that helped you prototype, momentum dies.
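What does a spec like that look like when it's concrete enough for engineering to own? Here's a minimal sketch in plain Python – every field name and example value is illustrative, not a standard:

```python
from dataclasses import dataclass

@dataclass
class Step:
    name: str
    purpose: str
    inputs: list[str]     # what this step consumes
    outputs: list[str]    # what it must produce

@dataclass
class CapabilitySpec:
    """Platform-neutral handover spec: everything engineering needs,
    nothing that binds them to the prototype's tooling."""
    capability: str
    steps: list[Step]                # the step map
    success_criteria: list[str]      # how a correct deployment is measured
    not_prescribed: list[str]        # decisions deliberately left to engineering

spec = CapabilitySpec(
    capability="Campaign post-mortem synthesis",
    steps=[
        Step("ingest", "Pull final campaign data", ["CM360 export"], ["normalized dataset"]),
        Step("diagnose", "Flag pacing and delivery anomalies", ["normalized dataset"], ["ranked findings"]),
        Step("summarize", "Draft the post-mortem narrative", ["ranked findings"], ["review-ready draft"]),
    ],
    success_criteria=[
        "Draft matches the analyst baseline on 10 held-out campaigns",
        "End-to-end run completes in under 15 minutes",
    ],
    not_prescribed=["Model vendor", "Orchestration framework", "Hosting environment"],
)
```

Notice what isn't in there: no Claude, no platform names. That's the point.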

The uncomfortable truth: most agentic workflows stall in enterprise not because of technical limitations or organizational politics – but because the visionary never fully translated the vision into something an engineering team could own.

Where are you in this right now β€” still in demo mode, or do you have a handover process that engineering actually trusts?

Chuck Schultz

The Invisible Workflow Problem

Your best AI results aren't repeatable. They're dependent on whoever ran the workflow last.

Think about the last time your team got a genuinely great output from AI – a campaign diagnosis that was actually useful, a performance summary that led to a real decision, a creative brief that didn't need three rounds of revision.

Now ask: could anyone else on the team reproduce that result tomorrow? Could you reproduce it next week?

In most organizations, the answer is no. Not because the tool changed. Because the workflow lived in one person's head – their prompt approach, their context setup, their personal Claude account, their browser bookmark. Invisible to the organization. Impossible to scale. Gone when they're gone.

Quick self-score:
🔴 No documentation. Every workflow starts from scratch depending on who's running it.
🟡 A few people get consistently great results – but their system lives in their personal accounts and their own head. The org has no access to it.
🟠 Some team-level standards exist in some places. Nothing you'd call a system.
🟢 AI workflows are documented, version-controlled, and fully transferable – any team member can execute to the same standard.

Here's the uncomfortable math:
Jasper's 2026 State of AI in Marketing – a survey of 1,400 marketers – found that while 91% now use AI in their work, the share who can prove ROI actually dropped year over year: from 49% last year to 41% today.

Not because AI got worse. Because personal productivity isn't organizational capability. Leadership isn't seeing the value because the value isn't in the system. It's in the individual.

The gap between 🟡 yellow and 🟢 green isn't a better tool or a smarter prompt. It's a decision to treat workflow documentation as seriously as the output you're trying to scale.

Your AI strategy is only as durable as the least-documented workflow it depends on.

Chuck Schultz

London Built It. Chicago Automated It. Singapore Is Testing It.

Your marketing organization is running AI experiments right now.

Someone in London built a workflow. Someone in Chicago automated a brief. Someone in Singapore is testing a content tool nobody approved.

None of it is connected. None of it is governed. And none of it is visible to the people responsible for what happens when something goes wrong.

That's not a technology problem. It's an operating model problem.

Quick self-score:

🔴 No process – AI tools are adopted informally, market by market, team by team, with no central visibility.

🟡 Some awareness – leadership knows AI is spreading but there's no intake process, no registry, no governance funnel.

🟠 Partial governance – approved tools exist but no structured process for evaluating new ideas before they get built.

🟢 Full funnel – every AI idea enters through a defined intake, gets evaluated against governance criteria, and earns deployment through human-owned decision gates.

Most enterprise marketing organizations are 🔴 or 🟡. Not because they don't care about governance. Because no one designed the system before the tools arrived.

Here's what a governed AI innovation funnel actually requires (a minimal code sketch follows the list):

→ A universal intake – one mandatory entry point for every AI idea, every market, every team. If it isn't logged, it doesn't exist.

→ Structured evaluation – ideas scored against business impact, data compliance, scalability, risk, and execution readiness before anyone builds anything.

→ Human decision gates – AI does the processing. Humans own every approval. Nothing auto-promotes. Nothing auto-deploys.

→ A live registry – a running record of every idea, every evaluation, every deployment decision. Not a spreadsheet someone updates quarterly. A governance-grade audit trail.
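To show how small the mechanics really are, here's that funnel as a minimal Python sketch. The criteria, statuses, and fields are illustrative assumptions, not a prescribed standard:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

CRITERIA = ["business_impact", "data_compliance", "scalability", "risk", "execution_readiness"]

@dataclass
class Idea:
    title: str
    market: str
    submitter: str
    scores: dict = field(default_factory=dict)    # criterion -> 1..5, set at evaluation
    status: str = "logged"                        # logged -> evaluated -> approved/rejected
    history: list = field(default_factory=list)   # the audit trail: (when, who, what)

class Funnel:
    def __init__(self):
        self.registry: list[Idea] = []            # the live registry

    def intake(self, idea: Idea) -> Idea:
        # Universal intake: if it isn't logged, it doesn't exist.
        idea.history.append((datetime.now(timezone.utc), idea.submitter, "logged"))
        self.registry.append(idea)
        return idea

    def evaluate(self, idea: Idea, scores: dict) -> None:
        # Structured evaluation: no partial scorecards.
        missing = [c for c in CRITERIA if c not in scores]
        if missing:
            raise ValueError(f"unscored criteria: {missing}")
        idea.scores, idea.status = scores, "evaluated"
        idea.history.append((datetime.now(timezone.utc), "evaluation", dict(scores)))

    def decide(self, idea: Idea, approver: str, approved: bool) -> None:
        # Human decision gate: a named person owns the call; nothing auto-promotes.
        assert idea.status == "evaluated", "no decision before evaluation"
        idea.status = "approved" if approved else "rejected"
        idea.history.append((datetime.now(timezone.utc), approver, idea.status))
```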

The organizations that get this right aren't slowing innovation down.

They're making every idea better, faster, and more likely to actually deploy at scale – instead of dying in a market silo nobody else can find.

The uncomfortable reality:

If your AI governance policy lives in a legal document but your teams are building without a funnel, you don't have governance. You have liability with paperwork on top of it.

Chuck Schultz

Nobody Owns the Gap

🔥 The governance trap nobody is talking about.

Your organization didn't decide to move slowly on AI. It just never decided who owns the space between approved systems.

Here's the scenario playing out in marketing organizations right now:

The AI tool is approved. The data platform is approved. Legal signed off on both. IT provisioned both. And the two systems cannot talk to each other – because connecting them requires a decision that sits in no one's job description.

Who owns the MCP integrations? Who approves the data flow between a secured data platform and an approved AI layer? Who unblocks the connection between your media data warehouse and the AI skill that's supposed to analyze it?

Not who could answer that question. Who actually owns it – with the authority and accountability to move it forward?

In most organizations, that answer is silence.

Quick self-score:
🔴 No one owns it – approved tools sit unconnected. Capability exists on paper only.
🟡 IT and Marketing pass it back and forth. Nothing moves without an escalation.
🟠 Ownership is assumed but undocumented – progress depends on who pushes hardest.
🟢 A named function owns platform connectivity decisions with a defined process and SLA.

Here's the uncomfortable reality:

The governance trap isn't rogue AI. It's two fully approved, fully secured platforms sitting three feet apart with no one authorized to connect them.

Organizations aren't failing at AI adoption because they're reckless. They're failing because accountability for enablement – the unglamorous work of actually connecting approved capability to approved data – belongs to no one.

You don't need looser governance. You need someone whose job it is to own the gap.

Chuck Schultz

70% Say It's a Priority. 17% Are Actually Doing It.

Here's a question most marketing leaders can't answer cleanly.

"If you could solve one AI or data challenge in the next 90 days β€” the one that would have the most meaningful impact on your marketing effectiveness β€” what would it be?"

Not a wishlist. One thing.

Most organizations can't answer that. Not because they don't care about AI. Because they've never been forced to prioritize.

Here's what the data says: 70% of marketing leaders say optimizing spend is a top priority. Only 17% are actually using AI to analyze and optimize campaigns. That gap – between what leaders say matters and what AI is actually doing – is the diagnostic. And the 90-day question is the forcing function that reveals it.

The exercise:

Ask this question in your next leadership meeting. Write down the answers before anyone speaks.

What you'll usually find:
→ Finance names a measurement problem
→ Operations names a data quality problem
→ The CMO names a tool problem
→ The agency names a brief problem

Four different answers. One shared budget. No shared priority.

That's not an AI readiness problem. That's a leadership alignment problem wearing an AI mask.

The organizations actually moving are the ones that have answered this question – and gotten the whole room to the same answer. Not because the problem is small. Because they made a choice.

What would your one thing be?

Drop it in the comments. Seriously. I read every one. And if you can't name it in one sentence – that's the answer.

Source: Supermetrics, Marketing Data Report 2026 – survey of 400+ marketers across the US, UK, Germany, Australia, and Singapore.
* 70% of marketing leaders cite optimizing spend as a short-term priority.
* Only 17% are actually using AI to analyze and optimize campaigns.
That gap is the whole conversation.
https://lnkd.in/g6d9Spka

Chuck Schultz

'We're Watching It' Is Not a Media Strategy

ChatGPT is now a paid media channel. Does your team have a buying thesis for it – or are you waiting to be asked?

On February 9, OpenAI launched ads inside ChatGPT. On March 2, Criteo became the first ad-tech partner. The Trade Desk is reportedly in talks to follow. Early data shows users referred through ChatGPT convert at 1.5x the rate of other channels. Target, Ford, Best Buy, and AT&T are already in.

This isn't a future planning item. It's live inventory.

Sources: Criteo press release, March 2, 2026; Digiday, "OpenAI is building the ad tech stack it's currently borrowing," March 10, 2026.

The question most marketing teams are quietly avoiding: do we have a framework for evaluating this β€” or are we going to end up reacting when a client asks?

Quick self-score:
🔴 No process. Channel decisions happen informally based on whoever raises their hand first.
🟡 We discuss new channels, but there are no documented evaluation criteria and no decision framework.
🟠 Some guidance exists but it isn't consistently applied – especially for AI-native channels with no measurement history yet.
🟢 A deliberate, documented framework governs every new channel decision: measurement approach, brand safety criteria, attribution model, minimum test budget threshold.

Here's the uncomfortable reality:

Most media organizations are excellent at executing on known channels. The process for evaluating genuinely new surfaces – especially ones with no historical benchmarks, unproven attribution models, and evolving ad formats – tends to be whatever a smart person argues convincingly in a room.

That's not a framework. That's a persuasion contest.

ChatGPT advertising isn't the point. The point is whether your team has the organizational muscle to evaluate it intentionally β€” with defined criteria β€” rather than reactively when budget pressure or client curiosity forces the question.

The organizations that will have an advantage here aren't the ones that jump in fastest. They're the ones that already have a decision architecture for situations exactly like this.

Your team is going to get asked about ChatGPT ads in the next 90 days. "We're watching it" is not a media strategy.

Chuck Schultz

The Real AI Risk Isn't Rogue Agents. It's Ungoverned Skills.

POST #2 of 2 – Visibility & Governance Series
Anthropic recently told the enterprise world something important. Most marketing leaders missed it, and the momentum is already building.

The framing was technical: don't build agents, build skills instead.
The implication for marketing organizations is anything but.

An AI agent is broad – a general-purpose system you point at a problem. A skill is different: packaged instructions, domain knowledge, and procedural logic an AI loads on demand to perform a specific task, consistently, every time. Skills are how you take what your best analyst does intuitively – campaign post-mortems, pacing diagnosis, data reconciliation – and make it reproducible at scale. The promise is real. Package your best thinking into a skill and your entire team executes at that level.

Here's the risk nobody is talking about yet.

Custom AI without governance creates shadow AI – siloed, unmanageable, impossible to control. Skills deployed without oversight create the same problem. Just faster and at greater scale.

What ungoverned skill proliferation actually looks like:
→ An analyst builds a campaign diagnosis skill using a methodology leadership never approved
→ Another builds audience segmentation using KPI definitions that don't match the agency's
→ A third builds a reporting skill that's been running against stale data for six weeks

Each works. Each produces confident, fluent output. Each compounds your governance gap – now automated.

Anthropic built the answer into their platform: central admin controls that govern which skills are provisioned and enabled. The infrastructure exists. Most marketing organizations haven't built the muscle to use it.
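What would that muscle look like day to day? A minimal skill-inventory sketch in Python, checking each registered skill against the three failure modes above. Field names and thresholds are my illustrative assumptions, not Anthropic's schema:

```python
from dataclasses import dataclass

APPROVED_METHODS = {"post-mortem-v2"}      # methodologies leadership has signed off
APPROVED_KPI_DICTS = {"org-standard-v3"}   # the KPI dictionary everyone must share
MAX_DATA_AGE_DAYS = 7                      # staleness tolerance

@dataclass
class Skill:
    name: str
    owner: str            # named individual, not a team
    methodology: str
    kpi_dictionary: str
    data_age_days: int    # age of the data the skill last ran against

def audit(inventory: list[Skill]) -> list[str]:
    findings = []
    for s in inventory:
        if s.methodology not in APPROVED_METHODS:
            findings.append(f"{s.name}: unapproved methodology '{s.methodology}'")
        if s.kpi_dictionary not in APPROVED_KPI_DICTS:
            findings.append(f"{s.name}: KPI definitions don't match the org standard")
        if s.data_age_days > MAX_DATA_AGE_DAYS:
            findings.append(f"{s.name}: running against {s.data_age_days}-day-old data")
    return findings

print(audit([Skill("pacing-diagnosis", "A. Chen", "homegrown", "agency-v2", 42)]))
```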

Quick self-score:
🔴 Skills built by individuals. No inventory, no standards, no oversight.
🟡 Some workflows exist. Nobody has mapped what's running or who owns it.
🟠 Leadership has general awareness. No formal governance framework.
🟢 Skills are inventoried, owned, governed, and aligned to documented standards.

The organizations that win won't deploy the most skills. They'll govern them.
Ungoverned skills don't amplify your best thinking. They amplify whoever built them last.

Where does your organization land? Drop your color in the comments.

https://lnkd.in/gkUFJeXi
(Anthropic, "Equipping agents for the real world with Agent Skills")
https://lnkd.in/gj7PuZ3j
(Anthropic launches new push for enterprise agents with plug-ins for finance, engineering, and design)

Chuck Schultz

You Can't Govern What You Can't See

POST 1 of 2 – Visibility & Governance Series
Your leadership team has no idea how AI is actually being used across your marketing organization. Not a rough sense. Not a partial picture. No idea.

Think about the last time someone asked your CMO to inventory every AI-enabled workflow running across media, analytics, content, and data teams. How long would it take to get a complete answer? Who would even own that answer? That silence – right there – is your governance gap.

Most marketing organizations have invested in tools, trained teams, and started workflows. What almost none have done is build organizational visibility into what's actually running, who owns it, and whether it's aligned to business priorities.

AI is being used across your org right now. Some of it is brilliant. Some of it is inconsistent. Some of it is producing outputs that are informing real decisions – audience strategy, budget allocation, campaign analysis – and nobody in leadership sanctioned the methodology, reviewed the output quality, or could reproduce the result tomorrow.

Quick self-score:
🔴 No visibility. AI is happening. Nobody knows where.
🟡 Leadership has a rough sense. No formal tracking. No governance.
🟠 Some usage is monitored. Significant gaps remain.
🟢 Full visibility. Governance policies aligned to business strategy and risk standards.

Here's the uncomfortable math: most organizations are 🔴 or 🟡. And most leadership teams don't know it – because the people running AI workflows don't broadcast what they're doing, and leadership never created a mechanism to ask.

You can't govern what you can't see. And right now, most marketing leaders are making assumptions about their AI maturity based on what they've approved – not what's actually running.

This is Part 1 of 2. Next post: why ungoverned AI skills – not rogue agents – are the real enterprise risk hiding in plain sight.

Chuck Schultz

You Don't Have an AI Strategy. You Have an AI Layer.

Your AI strategy looks impressive in the presentation.
But here's the one question that cuts through it: Has the manual grind actually decreased for the average analyst on your team?

Not your best prompt engineer. Not the one person who's figured out how to automate their own workflow. The average person. The one pulling data from three platforms into a spreadsheet every Monday morning before any real work can begin.

If the answer is no – you don't have an AI strategy. You have an AI layer sitting on top of an unchanged operation.

Quick self-score:
🔴 Minimal – most marketing and data work is still done manually.
🟡 Some reduction in isolated areas, but the broader team is largely manual.
🟠 Noticeable reduction across several marketing and data workflows.
🟢 Significant – AI has fundamentally changed how our team processes information and produces work.

Here's what most organizations get wrong:
They measure AI adoption by tool usage. Seats purchased. Prompts run. Hours of training delivered. None of that tells you whether the manual grind has moved.

The honest test is simpler: ask your analytics team how much of their week is still spent preparing data before analysis can begin. Ask your media team how long a standard performance report takes to produce. Ask your agency how many hours go into reconciling numbers before they can tell you what happened last week.

Those answers will tell you more about your AI readiness than any dashboard showing tool adoption rates. The organizations pulling ahead aren't adding AI on top of manual processes. They're replacing the manual processes – and measuring whether it actually happened.

If your team is still cleaning data by hand, you don't have an AI strategy. You have a very expensive habit.

Chuck Schultz

Confident Doesn't Mean Correct

NERD ALERT - This final post takes a peek under the hood of AI.
Grounding: AI will give you a wrong answer with the same tone as a right one. That's not a bug – it's the architecture. And it's the most important thing marketing leaders need to understand before they deploy AI at scale.

It's called the grounding problem. And it's why AI confidence and AI accuracy are not the same thing.

A language model generates responses by predicting what words should follow, given everything in its context. When it's well-grounded – meaning it's working from accurate, relevant, verified information – the output is reliable. When it isn't, the model doesn't stall or flag uncertainty. It continues. Fluently. Confidently. Incorrectly.

This is what practitioners call hallucination. But that word undersells the problem. Hallucination implies something obviously wrong. A made-up statistic. A fictional citation. Those are easy to catch.

The harder version is a response that is directionally plausible, internally consistent, and subtly wrong in ways that require domain expertise to detect. That's the one that gets published in the recap deck.

What grounding actually requires:
Grounding is the practice of anchoring AI output to verified sources before generation happens. It's not a setting you toggle. It's an architectural decision (a minimal code sketch follows the list below).
→ Retrieval systems that pull from your actual data, not the model's training
→ Source attribution so outputs can be traced and audited
→ Evaluation layers that check outputs against known standards before they surface
→ Human review workflows designed around the failure modes, not the happy path
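Here's a minimal sketch of that architecture in Python. The retrieve and generate functions are stubs standing in for your data store and your model call – the contract they enforce (no traceable citation, no surfacing) is the point:

```python
def retrieve(question: str) -> list[dict]:
    # Stub: would query your own data, not rely on the model's training.
    return [{"id": "doc-42", "text": "Q3 spend by channel ...", "source": "warehouse"}]

def generate(question: str, evidence: list[dict]) -> dict:
    # Stub for a model call. The contract: every claim cites an evidence id.
    return {"answer": "Spend shifted toward CTV in Q3.", "citations": ["doc-42"]}

def grounded_answer(question: str) -> dict:
    evidence = retrieve(question)                 # anchor BEFORE generation
    draft = generate(question, evidence)
    known = {e["id"] for e in evidence}
    unsupported = [c for c in draft["citations"] if c not in known]
    if not draft["citations"] or unsupported:
        # Evaluation layer: an answer that can't trace back to retrieved
        # sources goes to human review instead of the recap deck.
        return {"status": "needs_review", "draft": draft}
    return {"status": "ok", **draft, "sources": evidence}

print(grounded_answer("How did Q3 media spend shift?"))
```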

This is the common thread of everything I've written in this series. Decomposition structures the question. Fan-out broadens the retrieval. Context windows supply the inputs. Retrieval vs. reasoning diagnoses the failure. Grounding is what ties it together – the discipline of ensuring AI output is anchored to something true.

The uncomfortable truth for marketing organizations:
Most AI deployments optimize for speed of output. Grounding optimizes for trustworthiness of output. Those are not the same objective, and the tension between them is where most enterprise AI implementations quietly fail.

Speed without grounding is just confident noise, faster.

The organizations that will build durable AI capability aren't the ones moving fastest. They're the ones who decided early that trustworthy output was non-negotiable – and built accordingly.

Chuck Schultz

It's Not the Model. (Maybe.)

NERD ALERT - This post takes a peek under the hood of AI.
Retrieval vs. Reasoning: AI makes two fundamentally different kinds of mistakes. Most organizations can't tell them apart. That's expensive.

Retrieval: The first is a retrieval failure. The model didn't have the right information. It worked from what it had – which wasn't enough, or wasn't current, or wasn't specific to your business. The answer was wrong because the inputs were wrong.

Reasoning: The second is a reasoning failure. The model had the information. It connected it incorrectly. The logic was flawed, the inference was a stretch, or it weighted the wrong signals.

These look identical on the surface. Confident. Fluent. Wrong.

But they have completely different fixes.

Retrieval failures are solved with better data architecture – richer context, better retrieval systems, more relevant information surfaced before the model responds. This is an infrastructure problem.

Reasoning failures are solved with better prompt design, chain-of-thought instruction, and output evaluation. This is a workflow problem.

If you treat a reasoning failure like a retrieval problem, you'll rebuild your data pipeline and still get bad answers. If you treat a retrieval failure like a reasoning problem, you'll rewrite your prompts indefinitely and wonder why nothing improves.
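A toy triage sketch in Python makes the distinction concrete. It assumes something most teams don't have yet – logging that records which facts were actually in context for each response – and every field name is illustrative:

```python
def diagnose(failure: dict) -> str:
    """Rough triage for a wrong AI answer: was the needed fact in context at all?"""
    needed = set(failure["facts_required"])      # facts a correct answer depends on
    provided = set(failure["facts_in_context"])  # facts actually retrieved/supplied
    if needed - provided:
        return "retrieval failure: fix data architecture, not prompts"
    return "reasoning failure: fix prompt design and output evaluation"

print(diagnose({
    "facts_required": ["q3_pacing_by_channel", "audience_segment_deltas"],
    "facts_in_context": ["q3_pacing_by_channel"],
}))  # -> retrieval failure: fix data architecture, not prompts
```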

What this means for marketing organizations:
When AI-powered campaign analysis produces a bad output, the instinct is usually to blame the model. "It just doesn't understand our business."

Sometimes that's true. But before you conclude the model can't reason about your data, ask:
→ Did it have access to the right data in the first place?
→ Was the question structured to guide its reasoning, or left open-ended?
→ Was the output evaluated against a known standard, or just eyeballed?

Diagnosing which failure mode you're in is the work. It's not glamorous. It doesn't make for a good demo. But it's the difference between AI that compounds in value over time and AI that stays permanently in pilot.

Confident and fluent is not the same as correct.
Knowing which kind of wrong you're dealing with is the first step to fixing it.

Chuck Schultz

The Model Isn't the Variable. You Are.

NERD ALERT - This post takes a peek under the hood of AI.
Most people think the quality of an AI answer depends on how good the model is. In practice, it depends more on what you put in front of it.

This is the context window.
And it's the most underestimated variable in enterprise AI.

Here's the simple version: an AI model doesn't have memory in the way you do. Every time it responds, it works from what's currently in its context window – the information you've handed it for that interaction. Documents, data, instructions, conversation history, retrieved chunks from your database. All of it.

The model is only as good as what's in that window.

This is why two organizations can use the exact same AI model and get dramatically different results. One handed it rich, structured, relevant context. The other handed it a vague question and hoped for the best.

What this means practically:
Context window management is a design discipline. Not a prompt trick.

For marketing organizations building AI into analytics, campaign reporting, or content workflows, the question isn't just "what do we ask the AI?" It's:
→ What information does it need to answer well?
→ Where does that information live?
→ How do we get the right context into the window before the model responds?

This is exactly what retrieval systems are designed to solve – pulling the relevant information from your data and surfacing it into context before generation happens. Fan-out (which I wrote about in my last post) is one technique for doing that retrieval more thoroughly.

But retrieval is only half the problem.

The other half is what you put in by design – your instructions, your definitions, your business rules, your standards. That's not retrieved. That's architected.
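A minimal sketch of that split in Python – the architected half is written down once and travels with every call, while the retrieved half changes per question. The rules and definitions below are illustrative:

```python
# Architected context: fixed standards, not retrieved per question.
BUSINESS_RULES = "CPA is computed on post-view + post-click conversions, 7-day window."
KPI_DEFINITIONS = {"CPA": "media cost / total conversions", "CTR": "clicks / impressions"}

def build_context(question: str, retrieved_chunks: list[str]) -> str:
    parts = [
        "## Standards (architected, not retrieved)",
        BUSINESS_RULES,
        *(f"{k}: {v}" for k, v in KPI_DEFINITIONS.items()),
        "## Evidence (retrieved for this question)",
        *retrieved_chunks,
        "## Task",
        question,
    ]
    return "\n".join(parts)  # this string is the model's entire world for the call

prompt = build_context(
    "Why did CPA rise in March?",
    ["March spend by channel: ...", "March conversion counts: ..."],
)
print(prompt)
```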

Organizations that treat context as an afterthought get AI that performs like a smart intern on their first day. No institutional knowledge. No standards. Just capability without direction.

Organizations that design their context get AI that performs like a senior analyst who has read everything, remembers everything, and applies your standards consistently.

Same model. Completely different outcome.

The question isn't whether your AI is powerful enough.
It's whether you've given it enough to work with.

Chuck Schultz

You Asked One Question. It Did Ten Units of Work.

NERD ALERT - This post takes a peek under the hood of AI.
In my prior post you learned about 'problem decomposition'.
What I didn't mention: AI systems have been doing it automatically for years. It's called fan-out. And once you see it, you can't unsee it.

When you ask ChatGPT, Claude, Perplexity, or Google AI Mode a complex question, it doesn't run a single search against your exact words. Under the hood, the system decomposes your question into 8–12 sub-queries – each targeting a different angle, intent, or dimension of what you asked. Those run in parallel. The results get synthesized into one coherent answer.

You asked one question. The system did ten units of work.
Sound familiar?

The discipline I described in the prior post – the one most marketing organizations skip entirely – is already baked into every serious AI retrieval system on the market. The machines didn't wait for us to figure it out.

What this means practically:
If you're deploying AI agents or building AI-powered analytics inside your organization, fan-out is the architectural equivalent of decomposition. Instead of handing your agent one big ambiguous question, the system should be generating multiple targeted sub-queries against your data before it synthesizes a response.

"Why did campaign performance drop last quarter?" becomes:
→ What changed in delivery pacing by channel?
→ Which audience segments showed the sharpest drop?
→ Were there creative variants that outperformed despite overall decline?
→ What do external benchmarks show for the same period?
→ What does the data say versus what was planned?
Five retrievals. One synthesized answer. Dramatically better output.
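In implementation terms, fan-out is a handful of lines. A minimal Python sketch – search is a stub for a real retrieval call, and the hard-coded sub-queries stand in for the decomposition step a model would normally generate:

```python
from concurrent.futures import ThreadPoolExecutor

def search(sub_query: str) -> str:
    return f"[results for: {sub_query}]"   # stub for a real retrieval call

def fan_out(question: str) -> str:
    sub_queries = [                        # in production, a model generates these
        "delivery pacing changes by channel last quarter",
        "audience segments with sharpest performance drop",
        "creative variants outperforming despite overall decline",
        "external benchmarks for the same period",
        "planned vs. actual delivery",
    ]
    with ThreadPoolExecutor() as pool:
        results = list(pool.map(search, sub_queries))   # parallel retrieval
    evidence = "\n".join(results)
    # The final synthesis call to a model would consume this assembled evidence.
    return f"Synthesize an answer to '{question}' from:\n{evidence}"

print(fan_out("Why did campaign performance drop last quarter?"))
```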

The uncomfortable parallel:
Organizations that skip decomposition as a human discipline are also the ones deploying AI systems without fan-out architecture. They're handing a complex question to a system built for structured tasks – and wondering why the answers feel fluent but shallow.

The question isn't whether AI can do this. It's whether your implementation is designed to.

Chuck Schultz

You're Not Ready for Agents Yet

Most AI conversations stay at the surface. This series goes one layer deeper. Agents are coming into your marketing stack. Most teams haven't thought about what to actually hand them.

Gartner recently put a number on it: 40% of enterprise applications will be integrated with task-specific AI agents by the end of 2026, up from less than 5%.

That sounds like a technology story. It's actually a workflow design story.
Because an agent can only execute what's been clearly defined. If you hand it a messy, complex business question – "why did campaign performance drop last quarter?" – you'll get a confident, fluent, partially wrong answer.

The discipline that separates organizations that benefit from agentic AI versus ones that just automate confusion faster? Problem decomposition.

Breaking complex marketing and analytical questions into structured, AI-executable tasks before you run anything. Most organizations skip this entirely.

Quick self-score:
🔴 We don't do this – AI gets the big messy question, and we hope for the best.
🟡 Occasionally, when someone is thoughtful. Not standard, not repeatable.
🟠 It happens on some projects, but it's not baked into how we work.
🟢 Complex questions are always decomposed before AI is deployed. It's a defined discipline – not a personal habit.

Here's what decomposition actually looks like in practice:
"Why did our campaign underperform?" is not an AI task. It's a question made of ten smaller tasks.
Break it down:
→ Isolate the performance drop by channel, audience, and creative variant
→ Compare delivery and pacing against plan
→ Flag anomalies in the data against expected ranges
→ Analyze external factors (seasonality, competitive pressure) for the relevant period
→ Synthesize findings into ranked hypotheses
Now run AI against each discrete task. The output is dramatically better – and reviewable.
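As a sketch, the decomposed version becomes a pipeline of small, named tasks instead of one prompt. Every stub below stands in for one scoped AI or analytical call; names and values are illustrative:

```python
def isolate_drop(data):      return {"worst": "CTV / 25-34 / variant B"}
def compare_to_plan(data):   return {"pacing_gap_pct": -18}
def flag_anomalies(data):    return {"anomalies": ["week 6 delivery dip"]}
def external_factors(data):  return {"seasonality": "post-holiday trough"}

def rank_hypotheses(findings: dict) -> list[str]:
    # Synthesis step: every hypothesis traces back to a named upstream task,
    # so a reviewer can audit each link instead of trusting one fluent answer.
    return [f"{task}: {result}" for task, result in findings.items()]

data = {}  # stand-in for the campaign dataset
findings = {
    "isolate": isolate_drop(data),
    "plan_vs_actual": compare_to_plan(data),
    "anomalies": flag_anomalies(data),
    "external": external_factors(data),
}
for hypothesis in rank_hypotheses(findings):
    print(hypothesis)
```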

The uncomfortable truth:
If the question going in is ambiguous, the answer coming out will be confident and wrong. Agents arriving in your stack don't change that. They amplify it.

The organizations that win in an agentic environment won't be the ones with the most agents. They'll be the ones who've done the unglamorous work of designing the tasks those agents will execute.

Source: Gartner, "Gartner Predicts 40% of Enterprise Apps Will Feature Task-Specific AI Agents by 2026," August 2025.
https://lnkd.in/g42w7p3h

Chuck Schultz

Built for a Crawler. Bought in a Conversation.

LLMs don't click your ads. But they're deciding who gets recommended.
Has your content strategy caught up?

Think about the last time you used ChatGPT, Claude, or Gemini to answer a business question. You didn't get ten blue links. You got an answer. Maybe a shortlist. Maybe a recommendation. Somewhere in that response, a brand either showed up – or didn't.

That decision wasn't made by an algorithm counting backlinks. It was made by a model trained on what's authoritative, structured, and citable – one that decided in seconds whether your organization's thinking was worth surfacing. Most marketing teams have no visibility into that decision. And no strategy to influence it.

Quick self-score:
🔴 We measure clicks, impressions, and traffic. Zero visibility into whether we're being surfaced – or ignored – inside LLM responses.
🟡 We know this is shifting. Nobody owns it. It has no clear home.
🟠 We've started adjusting content. Nothing systematic, nothing measured.
🟢 Content strategy is explicitly engineered for AI recall – named frameworks, credentialed expert attribution, structured original data LLMs can cite.

Here's what HBR research published this month makes clear: when AI-generated summaries appear in search results, clicks to websites drop an average of 47%. In some cases, traffic reductions approach 90%.

That's not a traffic problem. That's a visibility architecture problem.

The organizations winning here aren't producing more content. They're producing content AI systems can use – specific enough to cite, structured enough to recall, distinctive enough to attribute.

The uncomfortable reality:
Your SEO strategy was built for a crawler. The buying decision is increasingly happening in a conversation your brand isn't part of.

You don't need to abandon what's working. You need a strategy built for an audience of one – the model deciding whether your brand is worth citing before your customer ever asks the question.

Where does your organization land?
And if you've started building for LLM visibility – I'd genuinely love to hear how.

Source: Kenny & Pogrebna, "LLMs Are Overtaking Search. Here's How to Adjust Your Online Presence." Harvard Business Review, March 2026. https://lnkd.in/gz-T5qDh

Chuck Schultz

Gemini Doesn't Fix Your Data. It Executes On It.

Nine days from now, Google is putting Gemini inside the tools that run your media budget. DV360. CM360. SA360. Analytics 360. All of it. At once.

On March 23 at their NewFront event, Google will reveal what they're calling "the Gemini advantage in Google Marketing Platform" – an ecosystem-wide upgrade designed to remove fragmentation and activate your data in real time. That's the pitch.

Here's what almost nobody is saying out loud yet.
Gemini doesn't fix your data. It executes on it. Gemini can only remove fragmentation if your data can be unified. And in most marketing organizations right now, it can't.

Misaligned KPI definitions across CM360 and SA360. Agency feeds on weekly cadence while internal dashboards pull daily. Floodlight configurations that were never fully audited. Taxonomy built long ago and not standardized.
Gemini doesn't see that complexity. It sees inputs – and acts on them at AI speed, at media scale, against real budget.

Clean data foundation going in: dramatically better outputs.
Fragmented data foundation going in: confident, fast, and potentially expensive wrong answers.

Quick self-score:
🔴 Highly fragmented – no unified view across platforms, agencies, tools.
🟡 Partially consolidated – significant gaps and manual workarounds.
🟠 Mostly centralized – but taxonomy alignment remains incomplete.
🟢 Clean and governed – structured well enough that an AI layer can optimize toward real outcomes.

There's a second problem that runs deeper.
When Gemini is making real-time bidding decisions inside DV360, the window to catch a data quality problem before it affects spend is essentially closed.
The organizations that benefit from this have defined what "good" looks like before the system runs – named accountability, documented success criteria, clear quality gates. The ones that haven't? The speed of the system becomes a liability, not an advantage.
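Here's what a quality gate can look like as a minimal Python sketch – "good" defined as executable checks that must pass before anything activates. Every threshold and field name is an illustrative assumption:

```python
from datetime import date

def gate(feed: dict) -> list[str]:
    failures = []
    if (date.today() - feed["last_updated"]).days > 1:
        failures.append("stale feed: older than daily cadence")
    if feed["kpi_dictionary"] != feed["platform_kpi_dictionary"]:
        failures.append("KPI definitions differ between agency feed and platform")
    if feed["taxonomy_version"] not in feed["approved_taxonomies"]:
        failures.append("non-standard taxonomy")
    return failures   # a non-empty list blocks activation and pages the named owner

issues = gate({
    "last_updated": date(2026, 3, 10),
    "kpi_dictionary": "agency-v2",
    "platform_kpi_dictionary": "org-standard-v3",
    "taxonomy_version": "legacy-2019",
    "approved_taxonomies": {"org-standard-v3"},
})
print(issues or "cleared for activation")
```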

The upgrade is coming whether your data foundation is ready or not.

Source: Google, "The Gemini advantage in Google Marketing Platform," https://lnkd.in/gMdHTRek, February 26, 2026. NewFront livestream: March 23, 11:30 AM ET.

Chuck Schultz

You Don't Have an AI Strategy. You Have a Lottery.

The Scene: Someone opens an AI tool, pastes in a spreadsheet, types a question, and waits for a miracle. Sometimes they get something useful. Sometimes they don't. They run it again. Different output. A colleague tries the same task with a completely different prompt and gets a completely different answer.

So they conclude the tool is inconsistent.
The tool isn't the problem. The input is.

Most organizations treat AI like a vending machine. Put something in. Hope something good comes out. The organizations that are actually winning treat it like a system – where the data is structured deliberately, the context is defined precisely, and the instructions are architected to constrain and focus the output before the model ever touches the analysis.

The difference isn't the AI. It's everything that happens before you hit run.

Quick self-score:
🔴 Raw data, fresh prompt every time – outputs depend entirely on who's typing and what they're thinking that day.
🟡 Some people get consistently great results. Their system lives in their head and their personal accounts – invisible to the organization and impossible to scale.
🟠 Some standards exist in some places. Nobody would call it an architecture.
🟢 Structured inputs, defined context, deliberate instructions. The output is the same regardless of who runs it.

Here's the uncomfortable truth:
If your AI results depend on who wrote the prompt that day, you don't have an AI strategy. You have a lottery.

Repeatability isn't a model problem. It's a design problem. And most organizations haven't designed anything – they've just started prompting.
The gap between yellow and green isn't a better tool. It's a decision to treat input architecture as seriously as the output you're expecting.

Garbage in, garbage out has always been true.
AI just makes the gap between thoughtful and careless input more expensive.

Chuck Schultz

AI Doesn't Create Data Problems. It Reveals Them.

Yesterday, I asked where your data actually stands. Today, let's make it actionable.
In my experience, three core data problems sabotage AI investments more than anything else. They lurk quietly until AI deployment reveals them – often too late.

Here's a clear breakdown:
1) Fragmentation – Your data is scattered across too many silos (e.g., platforms, spreadsheets, or agency feeds), making it unreliable for AI to access and process consistently.

2) Inconsistency – Different teams, partners, or systems define the same KPIs or metrics in varying ways, leading to mismatched inputs and flawed AI outputs.

3) Invisible single points of failure – One critical data source fails (e.g., an API outage or manual update delay), and your entire AI workflow crumbles without warning.

The worst part? Most organizations don't spot these issues until AI amplifies them, wasting time and resources. Before scaling AI into any key workflow, run this quick 5-question audit to uncover and prioritize these problems. It takes just 30 minutes and gives you a clear data roadmap (a minimal code sketch follows the questions):

1) Where does the data live?
List every platform, agency feed, spreadsheet, or tool involved. If you can't do this in five minutes, fragmentation is already biting you.

2) Who owns each data source?
Name a specific individual – not a team or vendor. No clear owner means no accountability when issues arise.

3) How fresh is the data?
Is it real-time, daily, weekly, or manual? Stale data leads to AI generating confidently incorrect results.

4) How consistent is the taxonomy?
Do all platforms, partners, and teams use the same definitions for KPIs and metrics? Variations cause silent errors in AI reconciliation.

5) What happens if one source fails?
Map out the ripple effects. This reveals hidden single points of failure before they disrupt your AI.
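Here's the audit as a minimal Python sketch over a source registry – each of the five questions becomes a field someone must actually fill in. Sources, names, and standards below are illustrative:

```python
# Q1: where does the data live? The registry itself is the answer.
sources = [
    {"name": "CM360 export", "owner": "J. Rivera", "cadence": "daily",
     "taxonomy": "org-standard-v3", "feeds": ["pacing report", "exec dashboard"]},
    {"name": "Agency feed", "owner": None, "cadence": "weekly",
     "taxonomy": "agency-v2", "feeds": ["pacing report"]},
]

for s in sources:
    if s["owner"] is None:                          # Q2: named owner or no accountability
        print(f"{s['name']}: no owner")
    if s["cadence"] not in ("real-time", "daily"):  # Q3: freshness
        print(f"{s['name']}: stale cadence ({s['cadence']})")
    if s["taxonomy"] != "org-standard-v3":          # Q4: taxonomy consistency
        print(f"{s['name']}: taxonomy mismatch ({s['taxonomy']})")
    print(f"if {s['name']} fails: {', '.join(s['feeds'])} break")  # Q5: blast radius
```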

Identify these gaps now, fix them first, and your AI initiatives will thrive – not falter.

Whether you know for sure or just have a hunch: which of the three problems feels most urgent in your organization?

Chuck Schultz

The Career Risk Nobody's Naming

Let me ask you something uncomfortable.
You've invested in AI tools. You've trained your team. You've planned and built workflows.

But what is your AI actually working with?
The unfortunate reality: there is enormous organizational pressure to accelerate AI implementation. Board mandates. CEO directives. Competitive anxiety.

Move fast. Show results. Now.

And underneath that pressure – largely unspoken:
"We know our data isn't ready. We're building on a shaky foundation. But we can't say that out loud."

Most organizations have a data problem they haven't fully admitted yet.
Not because they don't know it exists.
Because admitting it may feel like a career risk.

Quick self-score - how centralized and governed is your data?
🔴 Highly fragmented – campaign data, customer data, sales data scattered across platforms, agencies, and spreadsheets. No unified view exists.
🟡 Partially centralized – some consolidation but significant gaps, inconsistencies, and manual workarounds remain.
🟠 Mostly centralized with some legacy exceptions that still require manual intervention.
🟢 Fully centralized, governed, and accessible across the organization in near real-time.

Here's the truth nobody wants to say out loud:
You can have the most sophisticated AI stack in your category.
If your data is fragmented, your AI is just automating confusion faster.
Garbage in. Garbage out. Still applies.
The organizations pulling ahead aren't necessarily using better AI models.
They're feeding better data into average ones.
Data centralization isn't a technology project.
It's a competitive decision.

Where does your organization land? Be honest – your AI strategy is only as strong as the data underneath it.

Chuck Schultz

65% of Marketing Tasks. Already Exposed.

The Anthropic study dropped last week. Marketing orgs should be uncomfortable.

Anthropic released the first labor market study built from actual Claude usage data – not theoretical capability, but what AI is doing right now across real occupations.

The finding that should stop every CMO:
Market research analysts and marketing specialists: 65% task exposure.
#5 on Anthropic's most exposed occupations list – above financial analysts, software engineers, and information security professionals.

To put that in context – that's higher than most of the technical roles your organization has already started AI-proofing.

Here's what makes this different from every AI hype cycle you've sat through:
This isn't "AI could theoretically do this." It's observed. Measured. From real usage patterns across real workplaces.

And there's a timing signal buried that almost no one is talking about:
Workers ages 22–25 are already being hired less frequently in high-exposure professions. The unemployment numbers haven't moved yet. But the hiring decisions already have.

That gap – between the hiring shift and the headline numbers – is your window.

Most marketing organizations are still building their AI strategy around tools.
What needs to happen now is an honest look at task composition:
→ Which roles on your team are doing work that AI is already doing at scale?
→ Where are you hiring for tasks that will be 65% redundant within 18 months?
→ What's your plan for the people currently in those roles?

This isn't a layoff conversation. It's a design conversation – and the organizations that have it proactively will build something better than what they're replacing.

The ones that wait will be reacting to disruption instead of shaping their response.

Source: Anthropic, "Labor Market Impacts of AI: A New Measure and Early Evidence" – https://lnkd.in/gB9WASj6
