I don’t vote in primaries because I am not compelled to align myself with either political party, but like any civic-minded Hoosier, I’ve been paying attention to the discourse around next month’s elections.
I’ve been following along with IndyStar’s excellent candidate interviews and trying to get a sense of who’s running and what they stand for.
In some ways, I feel pretty good about it. There are some serious and thoughtful candidates discussing real issues — including education, housing, health care and child care — that affect the day-to-day well-being of Hoosier families.
On the other hand, there is also a glaring absence in the discourse. Other than some scattered (but fierce) debates about data centers — which matter but are just a sliver of the broader conversation — artificial intelligence has barely registered.
If even a fraction of AI’s current trajectory pans out, it will disrupt virtually every aspect of our economy, culture and daily life. That’s why AI progress and governance are arguably the most important issues of our time. These are not niche or sectoral issues, but ones that cut across labor markets, education, national security, energy, health care and the basic structure of opportunity in this country and the world.
In other words, how we deal with AI is the issue that will dictate how all of the others will develop. It demands careful scrutiny from everyone in public life.
Trillionaires or post-scarcity — pick your sci-fi ending
As I’ve written before, there’s a spectrum of views on AI governance.
On one end, you have what you might call the AI doomers who advocate for things like data center moratoriums and aggressive regulation. Their argument is that the risks are too large and, if companies won’t slow themselves down, the government should step in and force the issue.
On the other end, you have what I’ve previously called the “let it rip” camp. This group wants to leap, boldly and maybe even blindly, into an AI-powered future and let the chips fall where they may.
There are good arguments on both sides. AI is often described as something out of science fiction, and we are currently living in a world where both the dystopian and utopian stories are real possibilities.
Anti-data center populism turns violent
The dystopian version envisions a world where rapidly accelerating wealth inequality mints a handful of trillionaires while trapping most people in a “permanent underclass,” stripped of whatever sense of agency and purpose they once had. The utopian one starts with centuries of scientific and technological progress compressed into months or years, eventually realizing the impossible dream of a post-scarcity world.
Popular opinion is currently leaning toward Team Doomer. Recent polling has shown that artificial intelligence is one of the most unpopular things in America. OpenAI CEO Sam Altman has reportedly been the target of several incidents at his home. Closer to home, pro-data center city-county councilor Ron Gibson allegedly had his house shot at, with a note indicating that this was a consequence of his data center advocacy.
These incidents suggest a real and potentially sustained populist backlash. More importantly, they suggest that this is an issue of tremendous potential political salience, which makes the relative silence from candidates all the more perplexing.
Someone is about to rewrite the social contract on your behalf
It’s worth being clear about what is fair to expect from candidates. AI is a spectacularly complicated issue, and it raises far more questions than it answers. What I want from candidates isn’t a perfectly formed policy platform or position statement, but more like a sense of the values and judgment that will guide their decisions.
I want to know how candidates think about some of the tricky questions caused by this technology. Who should decide how the benefits and burdens of AI are distributed, and how should those decisions be made? Part of the concern here is that, under our current politics, we are not especially well-equipped to answer those questions well.
We may be entering a period where the economic and social contract that has governed the last 60 to 70 years (fairly successfully, although not perfectly) may need to be rewritten, and our political leaders are the ones who will be negotiating it on behalf of the rest of us. The choices we make in 2026 and 2028 will determine who those leaders are, with some already describing the 2028 presidential race as the “AI election.”
That’s why the relative silence around AI in Indiana politics thus far is disappointing and dangerous.
At first glance, this might seem like an issue for Congress, cabinet officials or for presidents and governors, but AI will shape every domain that every elected official touches.
School boards will have to decide how to handle it in classrooms. County commissioners and mayors will have to manage workforce disruption and rapidly changing local economies. Every candidate for elected office has to have a basic framework for thinking about AI, and I’m not seeing many out there right now.
I’m not a single-issue voter. I’ve never really understood or believed in that approach. There are a lot of things I care about, and I think that’s true of most people. When it comes to AI, however, I can feel myself becoming something like one.
I’m not going to vote for you just because of your position on AI, but I probably will decline to vote for you if I can’t tell what that position is, or how you got there.
Jay Chaudhary is a former director of the Indiana Division of Mental Health and Addiction and chair of the Indiana Behavioral Health Commission. He writes the Substack, Favorable Thriving Conditions.
This article originally appeared on Indianapolis Star: I’m becoming a single-issue voter — on AI | Opinion