Law Enforcement

Know Before You Go: Navigating the IACP Expo Hall in the Era of Artificial Intelligence

Oct 2025

Written by Christine Preizler, Chief Commercial Officer at Sigma Squared.

It’s the year of AI!

In another two years, it will take over the world. Or maybe it's an about-to-burst bubble. Either way, you can expect the vast majority of vendors exhibiting at IACP this year to put AI at the center of their pitch. But you're not buying products for fun – you're looking for ways to maximize your agency's outcomes.

The pace at which technology firms can release products and innovate is new. But the principles for evaluating and adopting early-stage, disruptive technologies are not. Here's what you should know before you hit the ground in Denver.

We’ve been here before.

Artificial intelligence is not new. We've been using software to predict outcomes and automate tasks for decades. Nor is buzzy technology that promises to transform the way we live and work. Three years ago, tech giants were betting on the metaverse. Facebook changed its name to Meta and sank billions into building a virtual world. Microsoft was blogging about mixed-reality workplaces and a modern industrial revolution. Wall Street was concerned that Apple did not have an answer to what was obviously the next frontier in consumer technology: VR headsets.

Why did you stop hearing about the metaverse in 2023? Probably because of ChatGPT. And Claude. And maybe Bard, but probably not. It is important to understand that the vendors you interact with in Denver have a strong incentive to frame themselves as AI-first firms: their investors and shareholders are demanding it. 

Be wary of agents and efficiency gains.

I’m going to pick on two things to make a broader point. Let’s start with agentic AI. Agents are goal-oriented bits of software that execute tasks. A version of this has been around: it used to be called Robotic Process Automation (RPA), and companies like UiPath started doing it twenty years ago. What makes a bot agentic is the agency to complete its task as it sees fit: it interprets context, navigates unstructured data, and learns over time. Make sure you understand what a vendor really means when they say agentic. Is generative AI under the hood? If so, consider whether that kind of non-deterministic model is the right fit for high-stakes decisions or data. Or is it the kind of rules-based bot that has been around for decades (and that a high school kid in your town can probably build over the weekend)?
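For reference, the decades-old rules-based kind is just a deterministic script: the same input produces the same output, every time. Here's a minimal sketch – the request fields and routing rules are hypothetical examples, not any real system:

```python
# A rules-based "bot": deterministic, auditable, no AI involved.
# Field names and routing rules here are hypothetical examples.

def route_records_request(request: dict) -> str:
    """Route an incoming records request using fixed, hand-written rules."""
    if request.get("type") == "body_cam" and request.get("case_open"):
        return "hold_pending_case"      # same input -> same output, always
    if request.get("type") in {"incident_report", "crash_report"}:
        return "auto_release_queue"
    return "manual_review"
```

An "agentic" tool replaces those hand-written rules with a model that interprets context on its own – which is exactly why you should ask what is under the hood before trusting it with high-stakes decisions.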

With regard to efficiency, I’d call your attention to two bodies of research. The first, released in July by Model Evaluation & Threat Research (METR), found that, contrary to almost everyone’s expectations, developers using AI tools took 19% longer to complete tasks than those without. What’s particularly important about this is that it focuses on coding, a use case that AI is considered particularly well-suited for. Closer to home, a study out of Marymount University found no statistically significant impact of AI on the efficiency of police report writing. In both cases, it may be that further investment in training, changes in workflows, and continued product development will make these technologies more impactful. But customers are not realizing efficiency gains today.

To be clear: this does not mean tools that use agents or promise efficiency gains are useless. Well-documented agents with a clear purpose can offload unnecessary manual effort. What both of these examples underline is the importance of understanding the context in which these tools must operate and of defining the goals of your AI purchase up front – not in the vendor's terms but in your own.

Your data is your foundation. Is it ready for AI? 

Artificial intelligence applications rely on large quantities of data – your data! – in order to complete tasks, generate outputs, and analyze trends. This can be great: you’ve already invested a lot of time and money to collect and store all of this data, so you ought to get the most out of it. But it means that the quality of the data, the format it is in, and where it is stored all matter, a lot. If the records you want to use for an AI use case are incomplete, or the way you collect that data changed significantly two years ago, you will need to address that before you layer an AI application on top. If you don't, the value will be limited.
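A data-readiness check doesn't require anything exotic. A first pass can be as simple as measuring how complete the fields you plan to feed an AI application actually are. Here's a minimal sketch – the records and field names are hypothetical:

```python
# Quick data-readiness audit: what share of records have each required
# field filled in? Records and field names below are hypothetical.

def field_completeness(records: list[dict], required: list[str]) -> dict[str, float]:
    """Return the fraction of records with a non-empty value for each field."""
    total = len(records)
    return {
        field: sum(1 for r in records if r.get(field) not in (None, "")) / total
        for field in required
    }

records = [
    {"case_id": "23-0001", "narrative": "theft from vehicle", "location": "5th & Main"},
    {"case_id": "23-0002", "narrative": "", "location": "Elm St"},
    {"case_id": "23-0003", "narrative": "vandalism", "location": None},
]

rates = field_completeness(records, ["case_id", "narrative", "location"])
# case_id is fully populated; narrative and location each have gaps.
```

If a field you plan to build an AI use case on is, say, only half populated, or its meaning changed when collection practices changed, fix that before you layer an application on top.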

Think about pricing approach, not cost. 

One of the things that continues to baffle me – as a former AWS-er who negotiated more custom storage and data transfer agreements than I can count – is the way both of these costs remain obscured in the public safety market. Understand: your data living in the cloud costs your provider a specific amount of money. Moving your data between cloud-hosted applications, or even within cloud-hosted applications, costs your provider a specific amount of money. This is not just a problem for large files like videos – it is often most expensive to move lots of little pieces of data at a high volume. Say, for example, the kind of data movement pattern that is fundamental to so many artificial intelligence use cases.

Many providers are subsidizing or obscuring these costs in order to get users on the platform. Make no mistake: the free data party will end. Make sure you understand how that will impact both your future costs and the efficacy of the outputs when it does.

While we’re at it, let’s talk about “vendor lock-in”.

There is no such thing as vendor lock-in, unless you agree to it contractually. What there are, are easy migrations and hard ones. Cheap ones and expensive ones. What distinguishes the two usually comes down to the criticality of the application, the data formats, migration tooling, and the cost to retrieve the data (see above).

A former colleague of mine, Mark Schwartz, wrote a great article about this back in 2018, when a similar combination of hype and concern was swirling around enterprise cloud adoption. He argues that organizations considering a technology decision should think in terms of “switching costs” and lays out a framework both for weighing these costs in the context of a project and for reducing the risk of high switching costs. To do this you need to be honest about your in-house skillsets, consider how valuable premium functionality is to your use case, dig into vendor data formats, and avoid punitive contract structures.

Where AI adds complexity – particularly if you’re considering generative use cases – is that how a particular vendor uses or customizes a model may not be something you can take with you if you decide it’s time to go. This is an especially important consideration if you change process or policy around a generative AI application. Do not take vendor promises about flexibility or interoperability at face value: do your own homework. 

Don’t let risk keep you from value.

That metaverse hype cycle wasn’t all for naught. There are concrete advancements that came from it. Video conferencing and graphics rendering got much better. Digital authentication tools evolved. You can thank the metaverse for many of today’s cryptography tools. 

Consider that this round of tech hype is likely to follow a similar pattern: the buzz fades away and the real value begins to emerge. The key here is to make sure you put your agency in a position to take advantage of the value without exposing it to too much risk. Here are some practical steps you can take to do that: 

  • Take stock of where your agency is today

    Before you engage with vendors in Denver or elsewhere, make sure you understand the limitations of your existing contracts, data quality, and technology use policies. Be honest with yourself about the level of AI literacy across your organization. Each of these has important implications for how fully you can realize value from your AI purchases.

  • Make this about future-proofing your agency rather than realizing vendor claims

    If you’re looking at adopting a first AI use case, it will almost certainly not go the way you think it will. That is okay! It is a necessary first step towards putting your agency in a position to meaningfully adopt AI – or even just exist in an AI-saturated world. Don’t cost-justify based on a calculation of hours saved, time recouped, or efficiency realized. It will be impossible to prove and that miss will undercut your long-term success.

  • Pick a medium-stakes project that aligns to top-line department goals

    When you’re adopting disruptive technologies, you need projects that are meaningful enough to force your stakeholders to address tricky data, contract, use policy and training issues. Without that weight, it’s a side project. Find projects that matter to your organization but don’t touch mission-critical applications or workflows right out of the gate.

  • Empower a pilot team to make decisions, act fast, and take (some) risk

    The private sector innovates faster because it is always focused on the upside: repeated failure is tolerable if the fifth try yields revenue. Law enforcement agencies have different responsibilities and risk tolerances, but that doesn’t mean you can’t borrow some best practices.

  • Ask the hard questions

    Where do you store my data? How do you charge for it? Are your models or data formats proprietary? What happens if I need to move my data or connect it to another system? Are your models explainable? The exhibit hall floor may not be the best place for in-depth answers, but you should put these questions to any vendors you decide to follow up with. 

Heading to IACP? Let’s talk AI in public safety.

Our Chief Commercial Officer will be in Denver throughout the conference, meeting with agency leaders to discuss how to evaluate and adopt AI responsibly.

👉 Book Time at IACP