from the desk of Dana Criswell

I've spent more than 18,000 hours in the cockpit of advanced airliners, managing autopilots that, when programmed correctly, can literally land the airplane by themselves. I've seen the best of what automation can do. I've also spent a career training for the day it doesn't.

That's why Tesla's Full Self-Driving (Supervised) data catches my eye. Tesla's numbers say FSD logs far fewer crashes per mile than the average U.S. driver. If that holds up, it's impressive. It's exactly the kind of safety gain good automation can bring, and it's why I want a Tesla today.

But in aviation we learned a long time ago: automation is a tool, not a magic trick. The question isn't just, "Is it usually safer?" It's, "What happens in the rare moments when it fails, and is the human still in the loop enough to save the day?"

In a modern jet, the autopilot can fly an approach in zero visibility and roll us onto the runway centerline. But any pilot who trusts it blindly is a hazard. We're trained to monitor, cross-check, and be ready to click off the automation in a heartbeat. We brief what we'll do if the autoland goes wrong. We memorize the failures. We practice taking over.

That's where I worry about "self-driving cars." Tesla's marketing leans heavily on the promise of "Full Self-Driving." Its stats show millions of miles between major collisions, but they're based on Tesla's own definitions and its own telemetry. At the same time, federal safety investigators are looking at serious crashes where these systems were in use and the human wasn't ready, or wasn't paying attention. Those accidents are not Tesla's fault if the "pilot" isn't paying attention; just like in an airplane, it's the pilot's responsibility to expect the failure.

In my world, you don't get to grade your own emergency drills and call them "proven safe." Independent regulators and accident investigators tear apart every incident. Data is shared, not hoarded. Procedures change when hard lessons are learned. That doesn't mean politicians design the autopilot; it means consumers demand truth and transparency.

As a conservative, I don't want Washington bureaucrats trying to code driver-assist systems. They're barely qualified to run a DMV, much less an AI program. But I do expect one thing from any company asking the public to trust its automation with human lives: brutal honesty.

Let Tesla innovate. Don't ban the tech because it makes some people nervous. Don't strangle it with precautionary red tape. At the same time, demand truthful reporting from all automakers: crashes, miles driven, system status at the time, and what the human was doing. And make the labeling honest: as of now, this is driver assistance, not a robotic chauffeur.

If Tesla's safety claims hold up under that kind of scrutiny, regulators should step aside and let the market and insurers sort out the rest. If the numbers don't hold up, then the problem isn't "too little government"; it's a company overselling its automation and training its customers into complacency.

I've watched automation save lives. I've also seen where it leads when people stop paying attention. FSD is the future, but for now it still needs a fully awake pilot behind the wheel.