In 2003, Promethean and SMART Technologies were the hottest names in education technology. Their interactive whiteboards were going to revolutionize teaching. Federal funding flowed in. School boards approved purchases. Hundreds of thousands of installations went into classrooms across North America and Europe. By 2010, smart boards represented the most aggressive ed-tech rollout since the personal computer.
Walk into a classroom today and look at the smart board on the wall. There’s a good chance it’s powered off. There’s a better chance it’s being used as a regular projector. There’s a near-certain chance it’s not being used the way it was sold.
This isn’t a story about bad technology. The hardware mostly worked. It’s a story about a much harder problem — the gap between what gets adopted and what gets used. That gap is where most ed-tech investment dies, and the mistakes that kill it are mistakes we are about to make again with AI.
What actually happened with interactive whiteboards
The pitch was compelling. Replace the chalkboard with a touch-sensitive surface. Teachers can prepare interactive lessons that students engage with directly. Math becomes manipulable, science becomes visual, language becomes multimedia. The board records what’s drawn on it, can be shared with absent students, can be revisited next class.
The reality was different. Teachers received the boards but rarely received the training to use them well. The interactive lesson software was clunky and required hours of preparation per lesson. The hardware required calibration, maintenance, and a specific projector position that often conflicted with classroom layouts. Younger teachers adopted faster than older ones, but even the early adopters tended to plateau at using maybe 20% of the board’s intended functionality.
By 2015, research papers were starting to ask the uncomfortable question: did smart boards actually improve learning outcomes? The answers were almost uniformly equivocal. Some studies showed modest gains. Many showed none. A few showed slight negatives. None showed the transformative effects that had justified the spending.
The dollar figures are sobering. Estimates of total spending on interactive whiteboards globally between 2000 and 2015 ranged into the tens of billions. Most of that spending produced classrooms that look superficially modernized and function essentially the same way they did before.
The five lessons from the smart board era
The reasons smart board adoption failed are well-studied, and they’re directly applicable to any new technology being deployed in K–12 today, AI in particular. Here are five.
Lesson one: training was an afterthought. Districts bought the hardware and assumed the training would happen organically. It didn’t. Most teachers received a one-day overview at the time of installation and never received follow-up. The fancy features that justified the purchase price went unused because nobody had the time or the structure to learn them. The same pattern is forming with AI today — districts are buying licenses for AI products and assuming teachers will figure them out. They mostly won’t, not because they can’t, but because nobody is creating the time and structure for them to.
Lesson two: the change-management cost was hidden in the purchase price. A smart board cost five to seven thousand dollars in 2008. Schools budgeted for that cost. They did not budget for the curriculum redesign, the lesson plan adaptation, the IT support contract, or the inevitable replacement when the hardware failed. The total cost of ownership was at least double what districts actually spent. The same dynamic is going to play out with AI — the license cost is the smallest line item. The cost of changing how teachers teach is the actual cost.
Lesson three: the people pitching it didn’t have to use it. Sales reps demoed the boards to administrators. Administrators bought them. Teachers had to use them. This three-step sales chain produced a misalignment where the product’s most important quality — usability for teachers in real classroom conditions — was the quality least represented in the buying decision. AI products today are being sold the same way, with the same misalignment.
Lesson four: the metric was outputs, not outcomes. Districts could report “we have installed interactive whiteboards in 95% of classrooms” without anyone asking whether the boards were being used effectively. This was politically convenient — the procurement was the achievement, regardless of what happened next. AI adoption is following the same pattern. Districts are signing AI vendor contracts and announcing them as accomplishments. Whether the AI actually improves anything in classrooms is a different question, asked later if at all.
Lesson five: the technology asked teachers to change what they already do well, instead of helping them with what they don’t. Smart boards were pitched as a replacement for the chalkboard, which most teachers had refined into an effective instructional tool over years. The new tool offered marginal improvements at the cost of significant relearning. Teachers, sensibly, mostly chose to keep using the chalkboard. The smart board succeeded only when it was used for something the chalkboard couldn’t do — which turned out to be a much narrower set of activities than the marketing implied.
How this maps to AI today
The first three lessons are happening again, almost identically. Districts are buying AI licenses without budgeting for training. Sales is being done to administrators while the actual users are teachers. The total cost of effective AI adoption is being dramatically underestimated.
The fourth lesson is happening in a more pernicious form. AI announcements are easier to make than smart board announcements ever were. A district can issue a press release about its AI partnership in a way it never could about a board purchase. The political incentive to claim adoption without measuring outcomes is even stronger now than it was twenty years ago.
The fifth lesson is the most interesting one, because it points to where AI in classrooms could actually succeed.
If AI is sold as a replacement for what teachers already do well — direct instruction, classroom management, relationship-building — it will fail the same way the smart board did. Teachers will recognize that their existing methods work, the change cost isn’t worth the marginal improvement, and the AI will quietly stop being used.
If AI is sold as help with what teachers don’t do well today — not because they lack skill but because they lack time — it has a different chance. The hours teachers spend on grading, on differentiation, on individual feedback, on administrative reporting, on parent communication: these are the spaces where AI doesn’t compete with the teacher’s craft. It augments it.
This is the actual addressable market for AI in education. It’s not the parts of teaching that look impressive in a vendor demo. It’s the parts that are slow, repetitive, and currently consuming the time that teachers wish they could spend elsewhere.
What to look for, what to avoid
If you’re a school administrator evaluating AI products, three questions to ask:
How much teacher training is included in the contract, in hours? If the answer is less than one hour per teacher per month for the first year, the vendor has not learned the lesson of the smart board.
What is the success metric, and who measures it? If the metric is adoption rate (“our teachers are using the AI”) rather than outcome (“students performed better, teachers worked fewer hours, learning improved”), the contract is set up to be claimed as a success regardless of whether it actually is.
What does the product replace? If it replaces something teachers already do well, expect resistance and underuse. If it replaces something teachers don’t have time to do well, expect adoption and impact.
If you’re a vendor building AI for education, the same questions apply, with one addition: am I willing to be paid based on outcomes, or only based on adoption? Vendors who can credibly answer “outcomes” are doing something different from the smart board era. Vendors who can’t, regardless of how good their technology is, are building tools that will end up powered off in the corner of the classroom in ten years.
The interactive whiteboard didn’t fail because the technology was bad. It failed because nobody asked, hard enough, what it would actually take to make the technology improve teaching. AI in education is at the same fork in the road. We can have a different outcome this time. But it requires asking questions that the current moment, with its rush to announce and adopt and partner, isn’t asking enough.
Next week: a more focused piece on a specific AI capability that’s getting overhyped — and one that’s getting underestimated. We’ll look at where the field is genuinely advancing and where it’s just running in place.