This article was originally published on LinkedIn.
Boards want visible AI progress. Enterprise systems want stability. The job is holding both - without kidding yourself about the work in between.
I was recently invited to join a virtual roundtable with a group of CIOs. The companies spanned a range of industries, and most were north of the $100M mark. They had been following some of my posts here on LinkedIn about NetSuite and AI, and they reached out to get my perspective - to "pick my brain" a bit.
Normally, I work with CFOs and developers. So talking with a group of CIOs was an interesting opportunity.
The official topic was "AI in ERP." But the conversation moved quickly past tools and into something more fundamental:
How do we pursue AI without destabilizing the systems that our companies depend on?
Some CIOs admitted they weren't sure where to start. Others were less worried about where to start and more worried about what could go wrong. One poor decision in this space doesn't just create a failed pilot. It can create operational disruption - in finance, supply chain, customer operations - and a credibility problem that takes longer to repair than any model takes to build.
That sort of tension is everywhere right now. Boards want progress. Competitors are shipping features fast. And if you run IT, you can feel the clock ticking. Push too hard and you risk breaking what's working. Move too slowly and you watch faster organizations gain ground.
Based on my experience helping organizations adopt AI - and what I heard in that discussion - seven patterns stood out. Together, they tend to determine whether AI becomes a durable advantage or an ongoing distraction:
- Activity without strategy
- "Shadow AI"
- Poor data quality
- Governance gaps
- Proof-of-concept dead ends
- The prompt engineering gap
- The messy work of making AI fit the enterprise
These don't usually show up in isolation. They reinforce each other. And ignoring one often amplifies the rest.
Let's talk through them.
Activity without strategy
A lot of AI work starts with motion, not direction.
One team experiments with a chatbot. Another builds a forecasting model. Someone signs up for a vendor tool that looked impressive in a demo. Twelve months later, there's spend and activity - but no shared priorities, no consistent definition of success, and no credible story about ROI.
You don't need a massive strategy document. But you do need intent.
In practice, the teams that make progress anchor AI work to outcomes the business already cares about. Faster closes. Lower operating costs. Shorter cycle times. Better service levels. Not "we implemented AI." Actual results the business can feel.
It also helps to think in horizons:
- Near-term wins that prove value and build trust - typically process automation in well-defined workflows where success is measurable
- Mid-term investments that create shared capability (data infrastructure, platform standards, reusable patterns)
- Longer-term shifts that change how work gets done - where AI enables fundamentally different approaches to problems
And when side projects emerge - as they inevitably will - there needs to be a path for pulling the good ones into a broader roadmap instead of letting them fragment the landscape. This means having a lightweight review process that evaluates experiments against strategic priorities, not just technical novelty.
One consistent differentiator: the CIOs making progress aren't only buying tools. They're investing in AI literacy so teams can make better decisions without constant supervision. When people understand what AI can and can't do, they stop treating it like magic and start treating it like infrastructure.
Providing guidance - and preventing "shadow AI"
When IT doesn't provide clarity, people don't stop experimenting. They just do it on their own.
That's how "shadow AI" shows up. A marketer uploads sensitive customer data into a public AI tool. An analyst tests an unvetted SaaS platform with financial projections. A developer experiments with an open-source model that hasn't been reviewed for security or licensing issues.
This type of behavior is rarely malicious. It's urgency and curiosity filling a vacuum.
The instinctive response is to shut it down. That typically makes the behavior harder to see - not less likely to happen. People get creative about hiding what they're doing, and the risk actually increases.
What works better is guidance that is simple, explicit, and usable:
- Publish guardrails: what data can and can't be used, where models can run, and who signs off on what
- Teach the basics: short sessions that build judgment, not fear - helping people understand the difference between experimenting safely and creating risk
- Offer safe tools: approved platforms and APIs so people have a legitimate place to experiment without waiting weeks for permission
- Monitor and engage: when you find unsanctioned usage, bring teams into the process instead of punishing curiosity
Handled well, shadow AI becomes feedback. It tells you where the demand is - and where your operating model isn't keeping up. The finance team testing a forecasting tool outside IT channels is probably telling you something about unmet needs in your BI strategy.
Poor data quality - the quiet killer
AI is unforgiving with bad data.
If data is inconsistent, fragmented, or poorly defined, most of your time will be spent cleaning and reconciling before you ever get to model performance. And when definitions differ across systems - when "revenue" means different things in NetSuite, Salesforce, and the data warehouse - results become unpredictable.
The hard part here isn't technical. It's organizational. Who owns the definition of a metric? Who owns fixes at the source? Who has the authority to standardize? These are political questions as much as technical ones.
Here's what typically derails AI projects on the data side: You start with a use case in accounts receivable - automating payment predictions, for instance. The data looks reasonable in the demo. Then you discover that payment terms are inconsistently entered, customer segments overlap in contradictory ways, and there's no reliable way to distinguish between actual late payments and accounting adjustments. The model can't fix this. It just surfaces it loudly.
Some practical moves help:
- Assign ownership so problems don't float indefinitely between teams
- Automate checks to catch drift, missing values, and broken pipelines early - before they corrupt downstream work
- Sequence wisely by starting with domains where data is already cleaner (often newer systems or more tightly controlled processes) and earning the budget to fix harder areas
- Be transparent about what's trustworthy today and what isn't yet - so teams don't waste time on AI use cases that depend on data that isn't ready
Data isn't just an AI input. It's enterprise infrastructure. Treating it like a side project guarantees frustration. And if your data foundation isn't solid, every AI initiative becomes an expensive data remediation project in disguise.
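To make the "automate checks" point concrete, here's a minimal sketch of the kind of data-quality gate I have in mind - something cheap that runs before any model ever sees the dataset. The field names (payment_terms, invoice_amount) and allowed values are illustrative placeholders, not a prescription:

```python
# Minimal data-quality gate: run it before any AI work touches the dataset.
# Field names and allowed values below are illustrative examples only.
import pandas as pd

ALLOWED_TERMS = {"NET15", "NET30", "NET60"}  # whatever your ERP actually permits

def quality_report(df: pd.DataFrame) -> dict:
    """Count the issues that most often derail AR-style models."""
    return {
        "rows": len(df),
        "missing_terms": int(df["payment_terms"].isna().sum()),
        "unknown_terms": int((df["payment_terms"].notna()
                              & ~df["payment_terms"].isin(ALLOWED_TERMS)).sum()),
        "duplicate_invoices": int(df.duplicated(subset=["invoice_id"]).sum()),
        "negative_amounts": int((df["invoice_amount"] < 0).sum()),
    }

# Tiny sample standing in for an extract from the ERP.
invoices = pd.DataFrame({
    "invoice_id": [1001, 1002, 1002, 1003],
    "payment_terms": ["NET30", None, "NET30", "2/10 NET 30"],
    "invoice_amount": [1200.0, 450.0, 450.0, -75.0],
})

report = quality_report(invoices)
issues = {name: count for name, count in report.items() if name != "rows" and count > 0}
print(report)
if issues:
    print("Not ready for modeling yet:", issues)  # fail the pipeline or flag the dataset
```

The specifics will differ in every environment; the habit - a repeatable check that fails loudly and points back to the source system - is what matters.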
Governance - not a brake, a stabilizer
Pilots can be loose, but production can't.
As AI gets closer to core workflows - anything touching finance, HR, customer data, or compliance-sensitive processes - questions around privacy, compliance, auditability, and accountability stop being theoretical. They become gating issues. And the business won't trust what it can't audit.
The mistake is thinking governance means bureaucracy. It doesn't have to. Good governance is practical clarity:
- Who owns each model, and what "ownership" actually means (not just who built it, but who's accountable when it drifts or breaks)
- Who owns the data inputs and the downstream impact - particularly when AI touches regulated data or makes consequential decisions
- How you log inputs and outputs so decisions are explainable later (this matters for compliance, but also for debugging)
- How and when models are reviewed for drift, bias, or changing business conditions
Done well, governance speeds you up because it creates trust. When finance knows that an AI-generated forecast can be audited and traced back to source data, they'll actually use it. When legal knows there's a clear record of how decisions were made, they stop blocking deployments.
Done poorly, governance drives people back into shadow AI. If your process requires six weeks and three committees to approve a straightforward automation, teams will find workarounds.
Regulation is coming regardless. It's easier to grow governance deliberately - building it into your operating model as you go - than to bolt it on after something breaks or after an auditor asks uncomfortable questions.
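To show how lightweight "log inputs and outputs" can be, here's a rough sketch of the kind of audit trail I mean: every AI-assisted decision gets an append-only record of what went in, what came out, which model version produced it, and who owns it. The field names and the file-based log are illustrative - in practice this would land in whatever logging or observability stack you already run:

```python
# Minimal audit trail for AI-assisted decisions: an append-only JSONL record.
# Names and fields are illustrative, not tied to any particular platform.
import hashlib
import json
import uuid
from datetime import datetime, timezone
from pathlib import Path

AUDIT_LOG = Path("ai_audit_log.jsonl")

def log_ai_decision(model_id: str, model_version: str, owner: str,
                    inputs: dict, output: str) -> str:
    record_id = str(uuid.uuid4())
    record = {
        "record_id": record_id,
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_id": model_id,
        "model_version": model_version,
        "owner": owner,        # the accountable team, not just whoever built it
        "inputs": inputs,      # or a reference, if the raw inputs are sensitive
        "input_hash": hashlib.sha256(
            json.dumps(inputs, sort_keys=True).encode()).hexdigest(),
        "output": output,
    }
    with AUDIT_LOG.open("a") as f:
        f.write(json.dumps(record) + "\n")
    return record_id

rid = log_ai_decision(
    model_id="ar-payment-forecast",
    model_version="2025-06-01",
    owner="finance-systems",
    inputs={"customer_id": "C-104", "open_invoices": 3, "avg_days_late": 12},
    output="HIGH_RISK",
)
print("logged decision", rid)
```

A few lines like this are the difference between "trust us" and "here's exactly what the model saw and said" when an auditor or a skeptical controller comes asking.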
Proof-of-concept dead ends
Many AI initiatives die right after the impressive demo.
The model works. The presentation lands. The executive sponsor is excited. And then nothing happens. Not because the idea was bad, but because no one owned the transition from experiment to production.
The pilot validated the algorithm, but nobody thought through how to integrate it with existing workflows, who would maintain it, how to handle edge cases, what happens when the model needs retraining, or how to support users who don't trust the output yet.
If you want fewer dead ends, plan for production from day one:
- Design pilots with integration, change management, and support in mind - not as afterthoughts, but as success criteria
- Tie success to business KPIs, not just model accuracy - a model that's 95% accurate but doesn't change behavior isn't delivering value
- Build reusable components so each project makes the next one faster - shared data pipelines, standard deployment patterns, common monitoring frameworks
- Get real sponsorship: if the business doesn't own the problem, the pilot dies - IT can build it, but if there's no operational owner who will fight for adoption, it becomes shelfware
Pilots should be stepping stones. If they don't lead anywhere, they quietly erode confidence - and make the next AI conversation harder than it needs to be. You end up with a portfolio of impressive proof-of-concepts and nothing in production.
The prompt engineering gap
Here's something that doesn't get enough attention: most AI deployments fail not because the model is bad, but because the prompts are.
This has become the focus of my own AI work - helping organizations understand that getting consistent, reliable output from AI systems requires real discipline around prompt design. It's not just about asking nicely. It's engineering.
The gap shows up everywhere. A support team starts using AI to draft responses, and the outputs are inconsistent - sometimes helpful, sometimes off-brand, sometimes missing critical details. The problem isn't the model. It's that different people are prompting it differently, with no structure or standards.
Or finance automates a reconciliation workflow and the AI works great in testing but produces unreliable results in production. Why? Because the prompt didn't account for edge cases, didn't enforce consistent output formatting, and didn't include sufficient context about business rules.
Professional prompt engineering isn't optional once AI moves into production:
- Structure matters: Well-designed prompts include clear instructions, relevant context, examples of desired output, and explicit constraints
- Consistency is critical: When ten people use the same AI tool with different prompts, you get ten different quality levels - and no way to improve systematically
- Testing is essential: Prompts need to be tested against edge cases, documented, and version-controlled like any other production code
- Iteration creates value: The first prompt rarely works optimally - but organizations often stop iterating once something "sort of works"
The teams that get this right treat prompts as strategic assets. They document what works. They train people on prompt design principles. They build libraries of tested prompts for common use cases. They measure prompt performance and optimize deliberately.
This isn't about creativity or wordsmithing. It's about engineering reliable behavior from probabilistic systems. And it's one of the biggest levers for turning promising pilots into production systems that actually deliver value.
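Here's a simplified sketch of what that looks like in practice: a prompt treated as a versioned asset rather than ad-hoc text, plus a cheap output check you can run against edge cases before rolling out a new version. The support scenario, schema, and names are made up - the structure and the test are the point:

```python
# A prompt as a versioned, documented asset - not something each user improvises.
# Scenario, schema, and names are illustrative placeholders.
import json

SUPPORT_REPLY_PROMPT_V3 = """\
You are a support agent for Acme Corp. Follow these rules exactly.

Rules:
1. Answer only from the provided ticket and knowledge-base excerpt.
2. If the information is not present, say so and escalate.
3. Never promise refunds or delivery dates.

Return JSON only, matching this schema:
{{"reply": "<customer-facing text>", "escalate": true|false, "confidence": "low|medium|high"}}

Ticket:
{ticket}

Knowledge-base excerpt:
{kb_excerpt}
"""

def build_prompt(ticket: str, kb_excerpt: str) -> str:
    return SUPPORT_REPLY_PROMPT_V3.format(ticket=ticket, kb_excerpt=kb_excerpt)

def validate_output(raw: str) -> bool:
    """Cheap regression test: did the model respect the output contract?"""
    try:
        data = json.loads(raw)
    except json.JSONDecodeError:
        return False
    return (set(data) == {"reply", "escalate", "confidence"}
            and isinstance(data["escalate"], bool)
            and data["confidence"] in {"low", "medium", "high"})

# The kind of checks you run against every new prompt version before rollout.
print(validate_output('{"reply": "We have escalated this.", "escalate": true, "confidence": "medium"}'))  # True
print(validate_output("Sure, here's your answer!"))  # False - the model ignored the schema
```

Once prompts live in version control with tests like this, "the AI was inconsistent" stops being a mystery and starts being a defect you can find and fix.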
Making AI fit the enterprise
Even great models fail when they don't fit the enterprise.
ERP and adjacent systems weren't designed with AI in mind. Legacy integration patterns resist modern approaches. Latency kills "real time." Security reviews add friction. And the more tightly coupled the environment, the more fragile the last mile becomes.
This is where many AI efforts finally succeed - or quietly fail.
I've seen this play out repeatedly. A team builds an AI model that predicts which purchase orders will be delayed based on vendor history and current conditions. The model performs well in testing. But when they try to surface those predictions inside NetSuite - where procurement actually works - they run into a wall. The integration requires custom middleware. The latency makes real-time alerts impractical. And every NetSuite update threatens to break the connection. Six months of model development gets stuck on three months of integration work that nobody budgeted for.
Teams that navigate this well do a few consistent things:
- Plan integration early. Don't wait until the model is done to ask how it plugs in - involve infrastructure and integration teams from the beginning
- Favor modularity. Wrap AI as services so you can insert, swap, or roll back without rewriting the stack - this also protects the rest of the enterprise from AI-specific complexity
- Isolate dependencies. Protect models from downstream system churn with clean boundaries - and protect downstream systems from model instability with the same boundaries
- Test end-to-end sooner. Catch workflow and security realities early, when they are cheaper to address - not during the week you planned to launch
The last mile is rarely about intelligence. It's about fit. Can the system handle the latency? Does the output format match what downstream processes expect? Will the model break when NetSuite gets upgraded? These are the questions that determine whether AI ships or stalls.
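Here's a rough sketch of what "wrap AI as a service" can look like: a thin boundary that gives the ERP side a stable contract and lets you swap or roll back the model underneath without touching the integration. The purchase-order scenario, field names, and scoring logic are illustrative placeholders:

```python
# A thin service boundary around a model. The ERP integration sees a stable
# contract; the model behind it can change. All names here are illustrative.
from dataclasses import dataclass
from typing import Callable

@dataclass(frozen=True)
class DelayPrediction:
    po_number: str
    delay_risk: str      # "low" | "medium" | "high" - the only values downstream accepts
    model_version: str

class PoDelayService:
    """Stable interface for 'will this purchase order be late?'."""

    def __init__(self, scorer: Callable[[dict], float], model_version: str):
        self._scorer = scorer            # injected, so models are swappable
        self._model_version = model_version

    def predict(self, po: dict) -> DelayPrediction:
        try:
            score = self._scorer(po)
        except Exception:
            # Fail safe: degrade to a neutral answer instead of breaking the workflow.
            return DelayPrediction(po["po_number"], "medium", "fallback")
        risk = "high" if score > 0.7 else "medium" if score > 0.3 else "low"
        return DelayPrediction(po["po_number"], risk, self._model_version)

# Today's "model" is a simple heuristic; tomorrow's could be an ML model with
# the same signature, and nothing on the ERP side has to change.
def vendor_history_scorer(po: dict) -> float:
    return min(1.0, po.get("vendor_late_rate", 0.0) * 1.5)

service = PoDelayService(vendor_history_scorer, model_version="v0.1-heuristic")
print(service.predict({"po_number": "PO-4821", "vendor_late_rate": 0.6}))
```

The boundary is the point: it gives you a place to enforce output formats, absorb model failures, and survive the next system upgrade.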
Holding speed and stability at the same time
The challenges that I've discussed above are interconnected. Weak strategy invites shadow AI. Poor data undermines governance. Integration issues sink pilots that looked promising. Bad prompts create inconsistent outputs that erode trust.
Your job isn't to choose speed or safety. It's to hold both. That usually means:
- Safe sandboxes for experimentation - where teams can explore safely before committing to production patterns
- Platform thinking to avoid one-off builds - invest in reusable infrastructure rather than custom solutions for every use case
- Incremental rollout so you learn before betting big - start with constrained use cases, prove value, then expand
- Change management so people adopt what you build - technology is necessary but not sufficient
- Continuous monitoring to catch drift and risk early - before they become incidents
If your AI program is generating activity but not trust, you will eventually get a governance clampdown that kills momentum. If it's generating trust but not outcomes, you will eventually lose sponsorship and budget. You need both simultaneously.
The organizations that get this right aren't necessarily the most sophisticated technically. They're the ones that respect enterprise reality while moving with urgency. They don't treat AI as a special category that gets to ignore normal engineering discipline. They integrate it into existing practice - making it boring, reliable, and scalable.
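On the continuous-monitoring point, here's a small illustrative sketch of a drift check: comparing this period's input mix against the training baseline with a population stability index. The data and thresholds are made up; the habit of checking before trusting new output is what matters:

```python
# Minimal drift check using a population stability index (PSI).
# Baseline/current data and the 0.2 threshold are illustrative.
import math
from collections import Counter

def psi(baseline: list[str], current: list[str]) -> float:
    """PSI over a categorical feature; > 0.2 is commonly read as meaningful drift."""
    categories = set(baseline) | set(current)
    b_counts, c_counts = Counter(baseline), Counter(current)
    total_b, total_c = len(baseline), len(current)
    score = 0.0
    for cat in categories:
        b = max(b_counts[cat] / total_b, 1e-6)  # floor to avoid log(0)
        c = max(c_counts[cat] / total_c, 1e-6)
        score += (c - b) * math.log(c / b)
    return score

baseline_terms = ["NET30"] * 80 + ["NET60"] * 15 + ["NET15"] * 5
current_terms = ["NET30"] * 55 + ["NET60"] * 35 + ["NET15"] * 10

drift = psi(baseline_terms, current_terms)
print(f"PSI = {drift:.3f}")
if drift > 0.2:
    print("Input mix has shifted - schedule a model review before trusting new output.")
```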
Wrapping up
The AI imperative is real, and the pressure to get moving - to make progress - isn't imagined.
But the question isn't whether to adopt AI. It's how to adopt AI in a way that creates real, measurable value without destabilizing the systems that already support the business.
Clear direction, trustworthy data, pragmatic governance, disciplined prompt engineering, and respect for enterprise reality matter more than any single model or vendor pitch. These aren't barriers to innovation. They're how innovation will survive contact with the real world.
The CIOs who navigate this successfully aren't the ones who move the fastest. They're the ones who build momentum that compounds - where each success makes the next one easier, where trust and capability grow together, and where the organization learns to adopt AI without chaos.