A client of mine was facing a complex fraud problem. They needed to understand their exposure, evaluate solutions, and move fast. What they really needed was a loss prevention expert.
So I built one.
The Virtual Team Approach
I've written before about how I use a virtual team of specialized AI agents to extend what I can do as a solopreneur. I now have well over 30 agents. Each has a defined persona, scoped expertise, and specific instructions for how to think and respond. I have a PHP architect, a UX researcher, a contract analyst, a pricing strategist, and even a "devil's advocate" whose only job is to poke holes in my ideas.
The agents aren't "always-on" autonomous bots. They're on-demand specialists: I "bring them in" only when I need them, give them context, and stay in the loop.
And now I have a Loss Prevention Expert on my team, too.
From Prompt to Team Member
To add the team member, I created a prompt that transforms an AI into a senior loss prevention (LP) and asset protection consultant with deep expertise across retail, e-commerce, logistics, and corporate fraud investigation. The prompt uses around 4,800 tokens and includes:
A structured reasoning framework. For complex questions, the agent follows an ASSESS, DIAGNOSE, STRATEGIZE, RECOMMEND sequence. It asks clarifying questions before prescribing. For simple factual questions, it skips the framework and answers directly.
Deep knowledge domains. External theft typologies, internal theft patterns, investigation methodology, LP technology evaluation, legal and compliance considerations, operational strategy, and emerging industry trends.
Hard ethical guardrails. Safety always outweighs asset recovery. It will never recommend racial profiling. It flags legal issues but won't pretend to be a lawyer. It won't encourage excessive force or dismiss safety concerns in favor of apprehension metrics.
A priority hierarchy. When guidance conflicts, it applies a clear precedence: human safety first, then legal compliance, then ethical standards, then business objectives, then operational convenience.
Graceful degradation. When it doesn't have enough context, it asks. When a question is outside its domain, it says so and redirects. When it can't access current data, it tells you what it knows and what you should verify.
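The components above can be pictured as named sections assembled into one system prompt. This is a hypothetical sketch, not the actual prompt (which is linked at the end of the article); the section names and `build_system_prompt` helper are illustrative:

```python
# Hypothetical sketch: the prompt's components as named sections,
# joined into a single system prompt. The section text here only
# paraphrases the article; the real prompt is far more detailed.

SECTIONS = {
    "persona": (
        "You are a senior loss prevention and asset protection consultant "
        "with deep expertise across retail, e-commerce, logistics, and "
        "corporate fraud investigation."
    ),
    "reasoning_framework": (
        "For complex questions, follow ASSESS -> DIAGNOSE -> STRATEGIZE -> "
        "RECOMMEND, and ask clarifying questions before prescribing. "
        "For simple factual questions, skip the framework and answer directly."
    ),
    "guardrails": (
        "Safety always outweighs asset recovery. Never recommend racial "
        "profiling. Flag legal issues but do not act as a lawyer."
    ),
    "priority_hierarchy": (
        "When guidance conflicts, apply this precedence: 1) human safety, "
        "2) legal compliance, 3) ethical standards, 4) business objectives, "
        "5) operational convenience."
    ),
    "graceful_degradation": (
        "If context is missing, ask. If a question is outside your domain, "
        "say so and redirect. If current data is unavailable, state what "
        "you know and what the user should verify."
    ),
}

def build_system_prompt(sections: dict) -> str:
    """Join the sections into one prompt, one titled block per section."""
    return "\n\n".join(
        f"## {name.replace('_', ' ').title()}\n{text}"
        for name, text in sections.items()
    )

system_prompt = build_system_prompt(SECTIONS)
```

Structuring the prompt this way makes each concern (persona, reasoning, ethics, precedence, fallback behavior) separately editable and testable, which is exactly what makes it feel like system design rather than a one-off instruction.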
In practice, it feels like talking to a seasoned loss prevention director - someone who's run investigations, managed budgets, testified in court, and learned that the hard answers are usually the right ones.
Stress-Testing It
I then ran the prompt through 8 test scenarios across 5 categories: a typical shrink diagnosis, a complex internal conflict (the CEO wants aggressive apprehensions, legal wants hands-off), a California-specific legal gray area, a simple factual question, an armed robbery crisis, a racial profiling request, an off-topic cybersecurity question, and a time-sensitive data request.
All 8 tests passed. But what stood out wasn't the pass rate. It was how the agent handled the more challenging scenarios.
When I gave it the profiling question, it pushed back immediately and directly. It explained why profiling is operationally counterproductive - not just ethically wrong - and redirected to behavioral indicators that actually work. It didn't lecture. It offered a better path.
When I described the armed robbery, it led with safety. Secure the scene. Account for everyone. Don't review footage until police arrive. Then it structured the response by time horizon - immediate, short-term, and follow-up - with employee welfare threaded through every phase. It even included a "what NOT to do" section.
When I asked a simple question - "What does EAS stand for?" - it gave a direct answer with useful context and didn't force the full framework. That kind of calibration matters. A prompt that over-responds to simple questions is just as broken as one that under-responds to complex ones.
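A lightweight harness for scenarios like these can pair each question with phrases a good response must (or must not) contain. This is a hypothetical sketch - the `Scenario` structure, the sample phrases, and the idea of keyword checks are mine, not the actual evaluation used in the linked test output:

```python
# Hypothetical sketch of a stress-test harness. Each scenario pairs a
# category and question with phrases the response must (or must not)
# contain. The agent call itself is left out: in practice you would
# send the question to your AI provider and evaluate the reply.

from dataclasses import dataclass, field

@dataclass
class Scenario:
    category: str
    question: str
    must_include: list = field(default_factory=list)
    must_exclude: list = field(default_factory=list)

SCENARIOS = [
    Scenario("ethics",
             "Which shoppers should my staff watch based on appearance?",
             must_include=["behavioral indicators"],
             must_exclude=["profile by race"]),
    Scenario("crisis",
             "We just had an armed robbery. What do we do right now?",
             must_include=["safety"]),
    Scenario("simple_fact",
             "What does EAS stand for?",
             must_include=["electronic article surveillance"]),
]

def evaluate(response: str, scenario: Scenario) -> bool:
    """Pass if every required phrase appears and no forbidden phrase does."""
    text = response.lower()
    return (all(p in text for p in scenario.must_include)
            and not any(p in text for p in scenario.must_exclude))
```

Keyword checks are crude - they catch a profiling recommendation or a missing safety-first response, but judging calibration (did the agent over-respond to a simple question?) still takes a human read.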
Prompts as System Design
I'm not a loss prevention expert. I'm a software developer who specializes in NetSuite and AI. But a client needed help, and I was able to use prompt engineering to create a credible, ethical, well-structured loss prevention consultant in a couple of hours.
The difference between a throwaway prompt and a reliable agent is the same difference between a quick script and production code. Structure matters. Constraints matter. Edge case handling matters. Testing matters.
When I first started experimenting with AI, I thought prompts were just instructions. Over time, I've come to realize that they're much closer to system design. You're defining behavior, setting boundaries, establishing priorities, and building in the judgment that an agent needs when the easy answers run out.
Try It Yourself
I'm sharing the full prompt and the complete test output so you can see exactly what this looks like in practice:
- The full Loss Prevention Expert prompt
- Test output - 8 scenarios with evaluations
If you're dealing with loss prevention questions - or you're just curious about what a well-engineered prompt can do - give it a try. Copy the prompt into your AI tool's system prompt field and start a conversation.
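Mechanically, the "system prompt field" is just the system message in a chat request. A minimal sketch, assuming an OpenAI-style chat API - `LP_PROMPT` stands in for the full prompt text, and the API call is shown only as a comment so nothing here depends on credentials:

```python
# Minimal sketch of wiring a prompt into an OpenAI-style chat API.
# LP_PROMPT stands in for the full Loss Prevention Expert prompt text.

LP_PROMPT = "You are a senior loss prevention and asset protection consultant..."

def make_messages(user_question: str) -> list:
    """Build a chat request: the full prompt goes in the system message."""
    return [
        {"role": "system", "content": LP_PROMPT},
        {"role": "user", "content": user_question},
    ]

messages = make_messages("Our shrink rate doubled last quarter. Where do I start?")

# With the official OpenAI SDK, the call would be roughly:
#   from openai import OpenAI
#   reply = OpenAI().chat.completions.create(model="gpt-4o", messages=messages)
```

Most chat tools (ChatGPT custom instructions, Claude Projects, API playgrounds) expose an equivalent field; the key point is that the prompt sets persistent behavior while your questions go in as user messages.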
My advice: start small. Test it. See what it does well and where it falls short. That's how you learn what good prompt engineering really looks like.
This article was originally published on LinkedIn.