The first time I asked a school district what their AI policy was, the answer was telling: “We banned ChatGPT in November and we don’t have a replacement.”
That sentence has been rattling around in my head for months. It contains, in eleven words, the entire opportunity and problem of AI in education today. The teachers know AI is powerful. The administrators know it’s risky. The students are using it anyway, on their phones, where no one can see. And the schools are caught in the middle — without a path forward that respects both the promise and the danger.
MillionRoots exists to build that path. We’re a sovereign AI infrastructure company focused on K–12 education, and we believe the next decade of education technology will be defined less by which AI is most powerful and more by whose AI schools can actually trust.
This is the first post in a weekly series. Before I write a hundred more, I want to be clear about what this publication is, and what it isn’t.
What this is
A field journal. We’re building MillionRoots in public — not the product internals, not the customer list, not the roadmap, but the thinking behind the work. The questions we’re chasing. The arguments we find ourselves making to skeptics. The things we read that change how we see this space. The patterns we notice when we sit in classrooms and listen to teachers.
I’m publishing weekly because the alternative — publishing only when we have a polished announcement — would mean missing most of what’s actually interesting. The breakthroughs in any field rarely come from press releases. They come from someone working through a problem out loud, at the right moment, in front of the right reader.
If you work in education technology, school administration, AI infrastructure, or pedagogy — you’re who I’m writing for. If you’re a parent trying to understand why your child’s school is so confused about AI, you’re who I’m writing for too.
What this isn’t
It’s not a product blog. We’re not going to announce features here, run launch campaigns, or fish for sales leads with thinly veiled marketing dressed up as essays.
It’s not a hot-take machine. Plenty of AI commentators are racing to be loudest about the latest model release. We’re going to be slower and more boring than that, because the questions that matter in education move on a different clock than the questions that matter in consumer tech.
And it’s not unbiased. We have a thesis. We think most of the AI being sold to schools today is fundamentally misaligned with what schools actually need, and we’re building an alternative. You’ll hear that thesis in everything we write. We’ll defend it openly and update it when we’re wrong.
Why now
A short version of why this moment matters:
AI is being adopted by students faster than any technology in education’s history. Faster than the internet, faster than the personal computer, faster than the calculator. Surveys consistently show that the majority of high schoolers have used AI for schoolwork — and the percentages are catching up fast in middle and even elementary grades.
Meanwhile, the institutions responsible for educating these students are largely unequipped to respond. Privacy laws written for a different era are being stretched to cover situations the lawyers didn’t anticipate. District procurement teams are evaluating AI products with the same playbooks they used for textbook adoptions. Teachers are receiving conflicting guidance from administrators who themselves don’t know what to do.
The cost of this gap is being paid in trust. Every parent who finds out their child’s data is being used to train a model. Every teacher who watches a student use ChatGPT to write an essay and doesn’t know how to respond. Every principal who has to choose between banning AI entirely or accepting risks they can’t fully evaluate. Each of these is a withdrawal from the trust account that schools depend on to function.
We don’t think the answer is to ban AI from schools. We don’t think the answer is to deploy consumer AI more aggressively, either. We think the answer is to build AI that was designed for schools from the ground up — privacy-first, pedagogy-respecting, teacher-controlled, and architecturally separated from the consumer AI systems whose business models depend on data extraction.
That’s what MillionRoots is building. It’s also what this publication is going to spend the next year exploring.
What to expect, week by week
Every Sunday evening, you’ll get a piece in your inbox. Most weeks it’ll be a short essay — under a thousand words, focused on one idea. Some weeks it’ll be longer: a deeper analysis, a piece of research, a reaction to something happening in the field.
The themes we’ll return to most often:
The architecture of trust — why “private AI” means more than just an enterprise contract, and what schools should actually look for.
The classroom we’re solving for — what it actually feels like to teach 28 students with a five-grade range of ability, and where AI does and doesn’t help.
The economics of education AI — why the consumer AI business model breaks when applied to schools, and what a sustainable alternative looks like.
The history we’re not reading — the lessons of past education technology waves, from the personal computer to one-to-one tablets to interactive whiteboards, and what they tell us about how to do this well.
Industry analysis — the AI for education market is being shaped right now. We’ll write about who’s doing what, what’s working, what’s not, and where the field is heading.
A small ask
If something we publish lands with you — whether it disagrees with something you believe, confirms something you’ve been thinking, or simply makes you see a problem differently — write back. The replies to these posts are how we learn what the field actually cares about, and they’re how the second year of writing here will be sharper than the first.
You can reach me directly at admin@millionroots.com. I read everything. I respond to most of it.
The weekly cadence starts now. Subscribe at the bottom of this page, and I’ll see you next Sunday.