Why Early Days for Generative AI Means Local Governments Have Time to Get Its Adoption Right
Generative AI has captured headlines and sparked debates over whether it’s the next big thing or just another tech hype cycle. While Big Tech has poured billions into building the infrastructure for these powerful models, the reality is more complicated. As summer rolled on, the promises shifted from “game-changer” to “coming soon,” leaving many to wonder if this innovation is ready for prime time. Let’s just say, if generative AI were a new restaurant, its Yelp reviews would be a mix of “life-changing” and “still waiting for my entree.”
Local governments are grappling with the AI conversation — is it hype, or do we go all in and demonstrate our willingness to innovate? I think the objective should be AI curiosity — not AI obsession. It also helps to put the technology's evolution into context and understand that it's still quite immature, so proceeding carefully with adoption just makes sense. Here's my argument as to why.
What We Get Wrong When We Lump All Forms of AI Together
First, to be clear, this post is about publicly available generative AI — the kinds of models offered by Google, Meta, and OpenAI. There is an unhelpful conflation of generative AI (genAI) and predictive AI in the policy and adoption conversations. This lack of differentiation is akin to lumping nuclear reactors and solar panels into the same bucket as “energy.” Treating genAI and predictive AI under one broad umbrella ignores the very real and critical differences in how they function, what they can achieve, and the risks they bring.
Predictive AI is ubiquitous and already well integrated into many low-risk applications, such as spam filters, spell checkers, and search engines. When a predictive AI model makes errors, its decision-making process can be traced and adjusted, so the risk of not understanding how the model works is low. The real risk lies in how its outputs are applied to decision-making and whether issues like data bias, veracity, and ethical considerations are consistently addressed and reassessed. Depending on the size of the model (the data, the complexity of its tasks, the speed required), predictive AI can run on relatively modest hardware.
Unlike predictive AI, which works within defined rules, genAI can switch between being a financial analyst, policy expert, copywriter, or software engineer. That's its superpower — and our Kryptonite. Outputs can be inaccurate, misleading, or entirely fabricated, with no straightforward way to verify truthfulness. Repeatability is also a major concern: genAI models may produce different responses to the same prompt, and the responses can change again with an update to the model.
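To make the repeatability point concrete, here's a minimal toy sketch — not a real LLM or any vendor's API, just an illustrative stand-in with a made-up three-word vocabulary. Generative models sample each output token from a probability distribution, which is why the identical prompt can yield different completions across runs.

```python
import random

# Toy stand-in for a generative model: the "next word" is sampled from a
# probability distribution rather than chosen deterministically. Real LLMs
# do this at every token, which is why identical prompts can produce
# different answers. The vocabulary and probabilities below are invented
# purely for illustration.
NEXT_WORD_PROBS = {"reliable": 0.40, "promising": 0.35, "overhyped": 0.25}

def complete(prompt: str) -> str:
    words = list(NEXT_WORD_PROBS)
    weights = list(NEXT_WORD_PROBS.values())
    # Weighted random choice: same prompt in, potentially different word out.
    return f"{prompt} {random.choices(words, weights=weights)[0]}"

# Ask the same question many times; the set of distinct answers grows.
answers = {complete("Generative AI is") for _ in range(50)}
```

Vendors typically expose a "temperature" knob over exactly this sampling step: turned down toward zero, the model picks the most likely word every time and outputs become repeatable; turned up, the spread of answers widens.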
This is problematic because humans have trained themselves to expect reliability from their technology. Our tolerance for failure in technology is low. We accept human mistakes because we know humans aren't perfect, but we absolutely don't extend the same grace to tech. Hell no. One mistake and we're raging on social media and shifting our attention and dollars elsewhere. Think about the last time you tapped your credit card and thought, “I need to independently validate the tax on my Chipotle receipt.”
Use At Your Own Risk
Oddly, the tool that Big Tech is clamoring for you to use comes with a big disclaimer of “Use At Your Own Risk.” Not exactly a selling point. Yet a warning label is what the State of Indiana slapped on the launch of its genAI chatbot: users must accept a six-point disclaimer to use the tool. If you need a six-point disclaimer, I'm not sure that's the right move out of the gate with a genAI tool. I'll discuss more in my next post, but I'm arguing that the immaturity of the technology and our limited experience with it as a populace mean there are better ways for local or state governments to experiment for positive gains.
Without established certification standards or benchmarks to measure the reliability and repeatability of genAI outputs, local governments on the early-adoption curve have few comparative market-analysis tools with which to make acquisition decisions — for example, which genAI model best suits their use case(s). This work is already underway at NIST and GSA, with more to come.
I heartily welcome and encourage local governments to draft AI policies to help them understand the risks of AI — both generative and predictive. But those policies should structure work that distinguishes between the two and focuses on managing risk.
An Internet History Lesson
So, how do we understand and manage the risks of genAI without completely squelching innovation and its potential for real value? To answer that, I'll segue into a historical comparison. It might help clarify why I think going all in on an enterprise genAI rollout is the wrong approach for local government. Which is to say — you've got to figure this out, and if the advent of the internet age is a good comparison, it might be best to experiment in low-risk ways first.
Let’s go way back in time to the late 1990s when Al Gore was as young as the Internet he created. In the heyday of Netscape, Yahoo, and AOL the country was awash in techno utopia talk. The Internet (it was capitalized then) was going to produce an unprecedented expansion of economic growth, upend commerce, and democratize information. No more gatekeeping by the likes of the Encyclopedia Britannica! We’d freely share our ideas, harmonize disputes among our newly created cyber tribes, and usher in a thousand years of world peace!!!
The hyperbole had its balance of skepticism too. I think these excerpts from a 1995 Newsweek article by Clifford Stoll sum it up best: “Hype alert: Why cyberspace isn't, and will never be, nirvana.”
- Every voice can be heard cheaply and instantly. The result? Every voice is heard. The cacophony more closely resembles citizens band radio, complete with handles, harassment, and anonymous threats. When most everyone shouts, few listen.
- Then there’s cyberbusiness…Stores will become obsolete. So how come my local mall does more business in an afternoon than the entire Internet handles in a month?
- And you can’t tote that laptop to the beach. Yet Nicholas Negroponte, director of the MIT Media Lab, predicts that we’ll soon buy books and newspapers straight over the Internet. Uh, sure.
Pretty funny, right? It just took another decade to get there.
We needed significant investments in broadband technologies, cellular deployments, and the production of smaller, more powerful chips. Capital investments took time to materialize as mobile computing and “the Cloud.” The internet as an innovation had to diffuse broadly across disciplines and domains so that by 2007 the launch of the iPhone could catalyze the world as we know it into being.
Early Days, the Internet & genAI Comparison?
In comparison, generative AI may be on the same trajectory. It took a dozen years to go from an AI model that could successfully classify images of a cat to a model that creates a tailor-made image of a cat from a description typed on a web page. A parallel to the internet's infrastructure build-out is underway too.
Right now Big Tech companies are investing billions, building out the compute power for training and supporting the next evolution of genAI models. In 2023, genAI investments totaled $21.3B globally, underpinned mostly by Microsoft's investment in OpenAI. Just like the days of the dot-com boom, there's no profit to be found yet. In startup land that's to be expected — Amazon wasn't profitable for nine years. In 2024, OpenAI spent an estimated $8.5B on model training and staffing while it pulled in only $3.4B in revenue. I can't believe I just used the qualifier “only” to talk about billions of dollars.
Yet we don't have a single consumer-ready product from generative AI. The chatbot is not the equivalent of the “killer app.” When the customer has to become their own engineer, or learn to navigate the flaws of your tool, it's not an imperfect tool but an immature one. Prompt engineering is the equivalent of the dial-up modem: a tedious step you'd rather skip.
Another indicator that it's still early days for genAI is the lack of a clear return on investment for the businesses that have built out their own genAI agents and fine-tuned private large language models. When you hear squishy metrics like “innovation” and “productivity gains,” the search for product fit to the problem is still underway.
What’s Next
Just as the internet needed time to evolve from clunky dial-up modems to the instantaneous cloud services we rely on today, genAI is in its infancy and still finding its footing. Experimentation is crucial, and so is being use-case oriented. No local government should repeat NYC's early chatbot missteps just to show that the city is being innovative. No buying AI just to have AI.
In the next post I’ll zero in on what I think carefully conducting experimentation with genAI in local government should look like. Spoiler alert, it starts with data governance.
References:
https://www.in.gov/core/chatbot.html
https://www.goldmansachs.com/insights/articles/AI-poised-to-drive-160-increase-in-power-demand
https://thehustle.co/clifford-stoll-why-the-internet-will-fail
https://apnews.com/article/new-york-city-chatbot-misinformation-6ebc71db5b770b9969c906a7ee4fae21