From FlashReport’s “So, Does It Matter?” <[email protected]>
Subject: California Unions Want "Guardrails" On AI - Just The Wrong Ones! Look At What They Are Doing in Florida...
Date: February 5, 2026 8:45 PM
  Links have been removed from this email. Learn more in the FAQ.
View this post on the web at [link removed]

Typically, afternoon columns are reserved for our paid subscribers. Today, the column is available to all, but I have some video commentary at the end that is exclusively for our paid folks. Upgrade today!
I read with interest yesterday a column from Dan Walters in [ [link removed] ]CalMatters [ [link removed] ] about labor unions in the state wanting legislation that would basically "make sure" artificial intelligence is not used to displace workers (union workers, no doubt), and that would make it harder to use this technology to create business efficiencies that could have a detrimental effect on workers (union workers, no doubt). It will surprise exactly no one that I think this is a terrible idea. Innovation happens, and when it does, it creates disruption. Some jobs are lost while others are gained. Some things become obsolete, and the costs of others drop.
Sometime in the past, there were surely people who protested that fancy moving wagon with wheels that Henry Ford mass-produced: the people who built horse-drawn carriages, and the ones who made buggy whips. But look how much good came from the automobile!
But I was struck when I saw this photo of several union leaders holding signs calling for “guard rails” for AI [ [link removed] ]. Well, I think I agree. But of course, the guardrails I think are needed are very different from those being called for by union leaders.
AI is developing so quickly that we all need to be a bit concerned. If you are not, then you haven't really been studying this topic. My regular readers might remember that my first column of 2026 was on this very topic. Ironically, my column was titled, "Artificial Intelligence Has No Guardrails, and That Should Worry Us All." But it's really the subhead that suggests where we need guardrails. It reads, "AI systems are being built to form trusting relationships with users, but without responsibility, loyalty, or accountability; this is becoming a real risk." You can read that column here [ [link removed] ]. Below is the cartoon image I created at the top of that column.
What About Our Children?
As you can see from this cartoon, my concern is that a technology has been created and is continually refined to influence human decision-making. AI has no “duty” to the end user, but rather to the people who created it. Of course, when it comes to adults, we all have a responsibility to be prudent. I have a strong libertarian lean that makes me cautious about overregulation, while remaining mindful of the many problematic outcomes.
But let’s talk about children. Minors. And let me state it as bluntly as I can — if my kid were to get onto some ChatBot (ChatGPT, Grok, Gemini, Character.AI, etc.) and be influenced by their new AI “friend” to hurt themselves, or hurt someone else, or engage in dangerous or risky behavior… I mean, this cannot happen. So my idea is legislation that holds AI companies liable if their ChatBots do this. This would then focus these companies, which are motivated by profit (which is fine), to ensure their systems are locked down in full child-protection mode.
I thought it would be ironic if I used a chatbot to draft legislation for me that would provide ironclad protection for our kids, given how quickly this technology is developing. It did, but it generated a separate invoice. So I asked for a summary…
ChatGPT’s idea for regulating OpenAI and other AI creators…
ChatGPT responded…
Legislation that would truly protect children from the negative effects of AI chatbots starts with a simple idea: if a system is designed to talk like a person and build trust with a child, it must be built with robust safety guardrails. That means banning AI systems from engaging kids in sexual conversations, pushing them toward self-harm or dangerous behavior, encouraging illegal activity, or pressuring them to make purchases without a parent’s permission. Companies should be required to use age checks, kid-safe modes, strong content filters, spending controls, and crisis alerts when a child starts talking about hurting themselves. In other words, child safety has to be part of the design — not an afterthought buried in fine print.
Real protection also means real accountability. Parents should have the clear right to take legal action if an AI system crosses those lines and a child is harmed, with the ability to recover medical costs, counseling expenses, financial losses, and emotional damages. Courts should be able to impose tougher penalties when companies knowingly ignore risks or fail to implement required safety measures. And the law should make clear that AI companies can’t hide behind the excuse that “it was just a chatbot talking.” If their design choices and lack of safeguards put kids in harm’s way, they can and should be held responsible.
I am sure this legislation would immediately prompt cries that it is unworkable, that it would "stifle innovation" and "stop us from competing against China." I am open to other ideas, as long as they empower parents to take meaningful action if their family is a victim.
What’s Going On In Florida?
Florida Governor Ron DeSantis is backing a Florida Artificial Intelligence Bill of Rights [ [link removed] ] to put basic consumer and child-safety guardrails around AI. The proposal would require clear disclosure when interacting with a chatbot, limit the sale or sharing of personal data, and prohibit the unauthorized use of a person’s name, image, or likeness. It also draws bright lines by barring AI from acting alone as a licensed mental health provider and by preventing insurers from using AI as the sole basis for denying or reducing claims.
A major focus is protecting kids and empowering parents. The plan would stop “companion chatbots” from signing up minors without parental consent, give parents access to their child’s AI chat history, and require alerts when a child shows signs of self-harm or other dangerous behavior. Related legislation would also prevent utility companies from passing on the energy costs of massive AI data centers to residents and give local governments more say over whether those facilities are built in their communities.
So, Does It Matter?
I am a conservative who eschews big government and onerous regulations. But I also think there are fundamental responsibilities of government to provide parents with the tools they need to protect their children from this particular technology. The threat is real [ [link removed] ]. History will look back and be a harsh critic of political leaders who do not step up now.
Of course, there are literally millions and millions [ [link removed] ] of reasons for politicians to look the other way…
Florida Governor Ron DeSantis held a media event yesterday and hosted an AI roundtable. Here are his comments for those who want to hear his rationale for their state’s AI regulatory proposal (which is gaining strong bipartisan support in the Florida legislature).
Below the paywall, I have some commentary for paid subscribers!

Unsubscribe [link removed]