Written by Christine Preizler, Chief Commercial Officer at Sigma Squared.
It’s the year of AI!
In another two years, it will take over the world. Or maybe it will pop like a bubble that’s about to burst. Either way, you can expect the vast majority of vendors exhibiting at IACP this year to put AI at the center of their pitch. But you’re not buying products for fun – you’re looking for ways to maximize your agency’s outcomes.
The pace at which technology firms are able to release products and innovate is new. But the principles for evaluating and adopting early-stage, disruptive technologies are not. Here’s what you should know before you hit the ground in Denver so you can do that well.
We’ve been here before.
Artificial intelligence is not new. We’ve been using software to predict outcomes and automate tasks for decades. Nor is buzzy technology that is destined to transform the way we live and work. Three years ago, tech giants were betting on the metaverse. Facebook changed its name to Meta and sank billions into building a virtual world. Microsoft was blogging about mixed-reality workplaces and a modern industrial revolution. Wall Street was concerned that Apple’s delayed VR headset release was a signal it was missing the market.
Why did the metaverse fade from conversation in 2023? Probably because of ChatGPT. And Claude. It’s important to recognize that many of the vendors you’ll meet in Denver have a strong incentive to brand themselves as “AI-first” companies. Investors and shareholders are demanding it.
Be wary of agents and efficiency gains.
I’m going to pick on two things to make a broader point. Let’s start with agentic AI. Agents are goal-oriented pieces of software that execute tasks. A version of this has been around for years: it used to be called Robotic Process Automation (RPA), and companies like UiPath started doing it twenty years ago. What makes a bot agentic is that it has the agency to complete its task in the way it sees fit. It interprets context, navigates unstructured data, and learns over time. Make sure you understand what a vendor really means when they say agentic.
Is generative AI powering it? If so, consider whether that kind of non-deterministic model is the right fit for high-stakes decisions or data. Or is it simply a rules-based bot — the kind that’s been around for decades (and that a high school student could probably build over a weekend)?
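To make that distinction concrete, here’s a minimal sketch in Python of the two patterns. Everything in it is hypothetical – the field names, the threshold, and the `llm_call` hook are stand-ins for whatever a real product would use, not any vendor’s actual software.

```python
# Illustrative only: field names, thresholds, and the llm_call hook are
# hypothetical stand-ins, not any vendor's actual product.

def rules_based_bot(record: dict) -> str:
    """RPA-style bot: fixed rules, so the same input always yields the same output."""
    if record.get("status") == "open" and record.get("days_idle", 0) > 30:
        return "flag_for_review"
    return "no_action"

def agentic_bot(record: dict, llm_call) -> str:
    """Agentic pattern: a model interprets context and picks the next step itself."""
    prompt = (
        "You are a records assistant. Given this case record, choose the next "
        f"action (flag_for_review or no_action) and explain why: {record}"
    )
    return llm_call(prompt)  # non-deterministic: the answer depends on the model

record = {"status": "open", "days_idle": 45}
print(rules_based_bot(record))  # always prints "flag_for_review"
# agentic_bot's answer depends on whatever model sits behind llm_call --
# that non-determinism is exactly what to probe vendors about.
```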
With regard to efficiency, I’d call your attention to two bodies of research. The first, released in July by Model Evaluation & Threat Research (METR), found that contrary to nearly everyone’s expectations, developers using AI tools actually took 19% longer to complete tasks than those who didn’t. That’s particularly notable because coding is one of the use cases where AI is presumed to excel. Closer to home, a study out of Marymount University found no statistically significant improvement in the efficiency of police report writing when AI tools were introduced. In both cases, it may be that further investment in training, changes in workflows, and continued product development will make these technologies more impactful. But for now, customers are not realizing measurable efficiency gains.
To be clear, that doesn’t mean tools promising automation or agentic capabilities are useless. What both of these examples underline is the importance of understanding the context in which these tools must operate and defining the goals of your AI purchase up front – not in the vendor's terms but in your own.

Your data is your foundation. Is it ready for AI?
Artificial intelligence applications rely on large quantities of data – your data – to complete tasks, generate outputs, and identify trends. This can be great: you’ve already invested a lot of time and money to collect and store all of this data, so you ought to get the most out of it. But it means that the quality of the data, the format it is in, and where it is stored all matter, a lot. If the records you want to use for an AI use case are incomplete, or the way you collect that data changed significantly two years ago, you will need to address that before you layer an AI application on top. Messy data, messy projects.
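A readiness check can start as simply as counting the records that would undermine your use case. This Python sketch is purely illustrative: the field names and the policy-change date are hypothetical stand-ins for your own schema and history.

```python
from datetime import date

# Hypothetical records and field names -- substitute your own schema.
records = [
    {"id": 1, "offense_code": "487", "narrative": "text", "entered": date(2022, 5, 1)},
    {"id": 2, "offense_code": None,  "narrative": "text", "entered": date(2024, 3, 9)},
    {"id": 3, "offense_code": "459", "narrative": None,   "entered": date(2024, 8, 2)},
]

POLICY_CHANGE = date(2023, 1, 1)  # assumed date your collection practice changed

complete = [r for r in records if r["offense_code"] and r["narrative"]]
pre_change = [r for r in records if r["entered"] < POLICY_CHANGE]

print(f"complete records: {len(complete)}/{len(records)}")
print(f"collected under the old practice: {len(pre_change)}/{len(records)}")
# If either share is large, fix the data before layering an AI application on top.
```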
Think about pricing approach, not cost.
One of the things that continues to baffle me — as a former AWS-er who negotiated more custom storage and data transfer agreements than I can count — is the way in which both of these costs remain obscured in the public safety market. Understand: your data living in the cloud costs your provider a specific amount of money. Moving your data between cloud-hosted applications, or even within cloud-hosted applications, costs your provider a specific amount of money. This is not just a problem for large files like videos – it is often most expensive to move lots of little pieces of data at a high volume. Say, for example, the kind of data movement pattern that is fundamental to so many artificial intelligence use cases.
Many providers are subsidizing or obscuring these costs in order to get users on the platform. The free data party will end eventually. Make sure you understand how that shift could impact both your future spend and the quality of your AI outputs when it does.
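To see why the transfer pattern matters, here’s some illustrative arithmetic in Python. The $0.09/GB rate and the volumes are assumptions, roughly in line with published cloud list prices, not anyone’s actual contract.

```python
RATE_PER_GB = 0.09  # assumed egress price once subsidies end; check your contract

# One large video file, moved once:
video_gb = 5
print(f"one 5 GB video: ${video_gb * RATE_PER_GB:.2f}")

# An AI workload making many small cross-system lookups:
calls_per_day = 2_000_000   # assumed volume
kb_per_call = 40            # each call moves only a little data
monthly_gb = calls_per_day * kb_per_call * 30 / 1_000_000
print(f"small-call workload: ~{monthly_gb:,.0f} GB/month, "
      f"about ${monthly_gb * RATE_PER_GB:,.2f}/month")
# High-volume little transfers quickly outrun the occasional big file.
```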
While we’re at it, let’s talk about “vendor lock-in”.
There is no such thing as vendor lock-in, unless you agree to it contractually. But there are easy migrations and hard ones, cheap ones and costly ones. What distinguishes the two usually comes down to the criticality of the application, the data formats, migration tooling, and the cost to retrieve the data (see above).
A former colleague of mine, Mark Schwartz, wrote a great article about this back in 2018, when a similar combination of hype and concern was swirling around enterprise cloud adoption. He argued that organizations considering a technology purchase should think in terms of “switching costs” and laid out a framework for how to evaluate these costs in the context of a project and reduce the risk of high ones. To do this you need to be honest about your in-house skillsets, consider how valuable premium functionality is to your use case, dig into vendor data formats, and avoid punitive contract structures.
Where AI adds complexity – particularly if you’re considering generative use cases – is that the way a particular vendor uses or customizes a model may not be something you can take with you if you decide it’s time to go. This is an especially important consideration if you change process or policy around a generative AI application. Do not take vendor promises about flexibility or interoperability at face value: do your own homework.
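One way to make switching costs tangible before you sign is a simple scorecard. This Python sketch is my own illustration of the idea, not Schwartz’s framework verbatim; the factors and weights are assumptions you should replace with your own.

```python
# A hypothetical switching-cost scorecard. Factors and weights are
# illustrative assumptions, not Schwartz's framework verbatim.
# Score each factor 1 (easy to leave) through 5 (hard to leave).

WEIGHTS = {
    "proprietary_data_formats": 3,
    "cost_to_retrieve_your_data": 3,
    "reliance_on_premium_features": 2,
    "in_house_skills_gap": 2,
    "punitive_contract_terms": 3,
}

def switching_cost(scores: dict) -> float:
    """Weighted average on a 1-5 scale; higher means a harder, costlier exit."""
    total = sum(WEIGHTS[factor] * scores[factor] for factor in WEIGHTS)
    return total / sum(WEIGHTS.values())

vendor_a = {
    "proprietary_data_formats": 4,
    "cost_to_retrieve_your_data": 5,
    "reliance_on_premium_features": 2,
    "in_house_skills_gap": 3,
    "punitive_contract_terms": 1,
}
print(f"Vendor A switching cost: {switching_cost(vendor_a):.1f} / 5")
```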
Don’t let risk keep you from value.
That metaverse hype cycle wasn’t all for naught. There are concrete advancements that came from it. Video conferencing and graphics rendering got much better. Digital authentication tools evolved. You can thank the metaverse for many of today’s cryptography tools.
Consider that this round of tech hype is likely to follow a similar pattern: the buzz fades away and the real value begins to emerge. The key here is to make sure you put your agency in a position to take advantage of that value without exposing it to too much risk. Here are some practical steps you can take to do that:
- Take stock of where your agency is today. Before you engage with vendors in Denver or elsewhere, make sure you understand the limitations of your existing contracts, data quality, and technology use policies. Be honest with yourself about the level of AI literacy across your organization. Each of these has important implications for how fully you can realize value from your AI purchases.
- Make this about future-proofing your agency rather than realizing vendor claims. If you’re looking at adopting a first AI use case, it will almost certainly not go the way you think it will. That is okay! It is a necessary first step towards putting your agency in a position to meaningfully adopt AI – or even just exist in an AI-saturated world. Don’t cost-justify based on a calculation of hours saved, time recouped, or efficiency realized. It will be impossible to prove, and that miss will undercut your long-term success.
- Pick a medium-stakes project that aligns to top-line department goals. When you’re adopting disruptive technologies, you need projects that are meaningful enough to force your stakeholders to address tricky data, contract, use-policy, and training issues. Without that weight, it’s a side project. Find projects that matter to your organization but don’t touch mission-critical applications or workflows right out of the gate.
- Empower a pilot team to make decisions, act fast, and take (some) risk. The private sector innovates faster because it is always focused on the upside: repeated failure is tolerable if the fifth try yields revenue. Law enforcement agencies have different responsibilities and risk tolerances, but that doesn’t mean you can’t borrow some best practices.
- Ask the hard questions. Where do you store my data? How do you charge for it? Are your models or data formats proprietary? What happens if I need to move my data or connect it to another system? Are your models explainable? The exhibit hall floor may not be the best place for in-depth answers, but you should put these questions to any vendor you decide to follow up with.
