
The SANDBOX Act: What Twenty Years of Watching Innovation Theater Tells Me
Reading about the SANDBOX Act hit me like a ton of bricks. I came across it online while taking a break from designing custom AI frameworks (true problem-solving) for a higher education institution that wants to embrace AI for all students and faculty while still ensuring that students can think critically at the end of the day. What I read about the sandbox could not have been further from those endeavors. In fact, it fits more closely into the category of questionable political-ethical protocols: clearly the D.C. brand of ethical thought, bought and paid for by Big Tech.
Perfect timing, Ted Cruz.
If this isn't a true, Texas-sized Hail Mary for effectively banning any form of AI regulation in the United States for generations to come, I don't know what else to call it…
The Texas Senator is now pushing to unleash AI innovation by letting companies bypass federal regulations, while the colleges I work with are still trying to figure out if ChatGPT is "cheating software" or "a research tool." This disconnect between policy fantasy and institutional reality has never been more stark.
My first reaction wasn't analytical—it was exhaustion. Not the kind that comes from overwork, but from recognition. I've seen this movie before. The rhetoric changes—"transformation," "disruption," "unleashing"—but the pattern stays identical. Promising "revolutionary" change by removing oversight in the name of "innovation." But here's what really pisses me off: Cruz accidentally stumbled onto something real. Current federal regulations genuinely cannot handle AI systems. FERPA treats student data like it's sitting in filing cabinets. HIPAA assumes medical information stays within institutional walls. The Copyright Office thinks creativity requires human authorship. These frameworks are epistemologically incompatible with systems that learn from data, evolve through deployment, and generate novel outputs. You literally cannot regulate transformer models using rules written for photocopiers.
So yes, we need new approaches. But Cruz's solution—letting companies apply for two-year regulatory waivers through OSTP—is like treating an infection with a blowtorch. Technically, you're addressing the problem, but the collateral damage defeats the purpose.
Let me explain exactly how this will fail, because I've watched identical dynamics play out with every education technology "revolution" for two decades. First, the evaluation process. OSTP has maybe fifteen people who understand AI at a technical level. The Department of Education's entire Office of Educational Technology has fewer staff than a mid-sized community college IT department. These agencies will evaluate waiver applications from companies with hundred-person AI teams who've spent months crafting documentation specifically designed to obscure risks while highlighting benefits.
The agencies won't understand the technical details—how could they? So, they'll fall back on proxies:
Does the company seem legitimate?
Do they use the right buzzwords?
Have they hired former regulators who know how to navigate the system?
The waivers will be granted based on institutional legitimacy rather than actual safety assessment. I've watched this happen with every major edtech vendor. The dangerous ones don't look dangerous—they look professional.
Two-year waiver periods reveal Congress's complete failure to understand AI development cycles. Two years ago, ChatGPT and Claude had barely launched and GPT-4 was brand new. Now, entire industries are restructuring around capabilities that seemed impossible in 2023. By the time a two-year waiver expires, the technology being evaluated will be three generations obsolete. But here's the trap Cruz doesn't see coming—or does he? Once these systems get embedded in institutional processes, they become impossible to remove. The data gets locked in proprietary formats. Workflows reshape around the technology. Thousands of people get trained on specific interfaces. The switching costs explode exponentially.
That's why the ten-year renewal provision isn't just problematic: it's the bedrock on which permanent AI institutionalization will be built. Regardless of what risks emerge, no agency will pull the plug on an AI system that thousands of institutions depend on. What's the phrase from the financial world? "Too big to fail." The same thing happened with Turnitin, which expanded from plagiarism detection into writing assessment and now owns the largest database of student writing in history, used for purposes never disclosed to the students forced to submit their work. That happened without a regulatory sandbox. Imagine what happens when we explicitly remove oversight.
The equity argument makes me want to scream because it's so cynically backward. Yes, small companies can't navigate complex regulations. But the sandbox doesn't help them; it creates a new game only large players can win. Google has teams of lawyers who know how to write waiver applications. The startup trying to help first-generation students navigate college has two developers and half a prayer. Worse, once Google gets its waiver, its approach becomes the de facto standard everyone else must match without the same regulatory flexibility. The sandbox isn't a level playing field; it's a moat with a fancy name.
State preemption is where this gets genuinely dangerous, and nobody's talking about it honestly. California's been passing AI regulations because federal action has been absent. Imperfect regulations, sure, but genuine attempts at protection. The SANDBOX Act creates a mechanism for companies to get federal waivers that override state law. Think about what that means: A company denied the ability to use facial recognition in California schools could get a federal waiver that supersedes state prohibition. This isn't speculation—it’s the obvious strategy for any company with decent lawyers.
The liability provisions are pure theater, and the American public is the unwitting audience. Yes, the bill says companies remain subject to civil and criminal liability. But watch what happens when an AI system operating under a federal waiver harms students. The company points to federal approval as evidence of due diligence; the institution claims reasonable reliance on government oversight; the federal agencies say they only evaluated regulatory compliance, not safety. The injured party gets lost in jurisdictional ping-pong while everyone points fingers and the court dockets grow longer. I've seen this exact dynamic with data breaches at institutions using "approved" vendors. Nobody's accountable because everybody's partially responsible.
What kills me is that I can see exactly how this plays out because I've watched the pattern so many times. Major tech companies will get waivers for systems they're already developing—it would not surprise me if they already have legal teams in place drafting them as we speak. They'll use vague promises about "educational equity" and "personalized learning" to justify access to student data. Smaller institutions, desperate for solutions and lacking resources to evaluate risks, will adopt these systems because federal approval provides cover. When problems emerge—and they will—the companies will have already moved to the next version, the waivers will be renewed because of dependency, and the cycle continues.
But—God, I hate admitting this—doing nothing is also untenable. Educational institutions are failing students in ways that have become normalized through repetition. Developmental math success rates haven't improved in thirty years despite everything we've tried. Achievement gaps between demographic groups remain fixed like natural law. If AI could genuinely address these failures, don't we have an obligation to try?
The question isn't whether we should experiment with AI in education. We're already doing that, just without admitting it. The question is whether this particular mechanism—federal waivers granted by agencies without expertise, evaluated through processes designed for different technologies, with minimal transparency and weak accountability—will produce benefits that outweigh the inevitable harms. We need transparency, and oversight mechanisms that balance innovation with the national interest. That is a bargain Sam Altman, Bill Gates, Larry Ellison, Elon Musk, and Mark Zuckerberg would never sign off on, so instead we get this…
Solution 3 reads: "Offer Congress a Sound Basis for Fine-Tuning AI Regulation." Congress would receive an annual report detailing how often particular rules were waived or modified. "Lawmakers would also have the ability to make successful waivers or modifications permanent."
Are you serious? My prediction? All of the Big Five will get "permanent waivers" on everything, including immunity from any kind of liability for the real harm their technology causes. In other words, a license to do whatever they want and not be held responsible for anything. No wonder Ted Cruz wants this deal signed; I'm sure the commission pays very well.
Based on everything I've seen, it won't work. It all sounds good; in fact, it almost comes across as rhetorically sound. But I teach rhetoric—trust me, I know the stench of fresh country air when I breathe it in deeply enough. It even comes across as well-intentioned: don't trust the states, they don't know anything; just relax and leave it to your trustworthy politicians and tech executives, who would "never" steer us wrong. The SANDBOX Act will become a mechanism for regulatory arbitrage, where companies shop for the most favorable treatment while avoiding meaningful oversight. The innovations that emerge won't be in educational effectiveness but in creative ways to extract intrinsic and financial value while externalizing risk to the most vulnerable.
Here's what could actually work, not that anyone's asking:
Mandatory open-source requirements for any system operating under waivers.
Public disclosure of all training data and model architectures.
Independent evaluation by researchers without industry funding.
Automatic sunset provisions requiring demonstrated benefit for renewal.
Student and educator representation in evaluation processes.
Real consequences—not fines that become business expenses, but prohibition from future waivers—for companies that violate terms.
None of that will happen, because it would require admitting that innovation isn't automatically good, that disruption has victims, and that some inefficiencies serve important purposes. Instead, we'll get the SANDBOX Act because it sounds innovative and nobody in Congress understands what they're voting for. Truthfully, I'd be shocked if anyone in Congress actually read the bill before voting on it.
The international dimension Cruz keeps invoking—competing with China—is particularly galling. I read his claim that "AI privacy, speech, and human dignity decisions will be made by China if America fails to keep pace" as a well-positioned rhetorical move: it appeals to a nationalistic base while serving as a smokescreen for the get-out-of-jail-free cards Washington wants to hand to Big Tech. We're not competing by removing safeguards; we're volunteering Americans as test subjects. Companies getting waivers won't limit their products to the U.S. They'll use our students and workers to debug systems they'll sell globally. We bear the risk while they capture the value. That's not competition—it's exploitation wrapped in patriotic rhetoric.
After two decades of this, I've developed a simple test: whenever someone says technology will transform education, ask who bears the risk and who captures the value. With the SANDBOX Act, the answer is clear. Risks flow to students and educators, while benefits concentrate in companies already powerful enough to game the system.
But I'll still engage with it. Write comments nobody will read. Attend hearings where decisions are already made. Help institutions navigate whatever emerges. Not because I think it will work—it won't—but because disengagement guarantees exclusion from whatever minimal benefits accidentally emerge. That's the trap of working in education: you can't afford optimism but can't afford to give up.
The SANDBOX Act will pass because the technology industry wants it to pass and Congress rarely understands what it is really approving—and if it does, that's even scarier. Companies will use it primarily for regulatory arbitrage. Some students will be harmed. Minimal accountability will follow. In five years, we'll have this conversation again about whatever new framework promises to finally unleash innovation. The cycle continues because it's designed to continue.
Cruz is right that current regulations don't work for AI. He's wrong about everything else. The sandbox won't unleash innovation—it will unleash experimentation without accountability on populations who can't refuse to participate. The students required to use these systems for graduation. The patients whose insurance only covers AI diagnosis. The workers whose jobs depend on algorithmic evaluation. They don't get to opt out of the sandbox—do they?
That's the real innovation here: finding ways to make the public bear the risk of private experimentation while calling it progress. After twenty years, I'm not even angry anymore. Just tired. Tired of watching the same wealth transfer dressed up as disruption. Tired of explaining why oversight matters to people who profit from its absence. Tired of pretending that this time will be different.
It won't be different. Consider this my called shot: pointing past the outfield fence before I ever swing the bat. Somebody needs to document this entirely predictable failure before it actually happens, so consider this my documentation. When the SANDBOX Act produces exactly the outcomes I'm predicting—regulatory capture, experimentation without accountability, benefits flowing to Big Tech and its allies, risk falling on the safety and well-being of the vulnerable—remember that we knew. We've always known. We just decided not to care.
This opinion is entirely my own and does not necessarily reflect the views of CityGov.com. I've said my piece. Take it or leave it, but don't say nobody warned you.
David Hatami, Ed.D.