Strategy · 12 min read · 6 January 2026

The $2 Million Platform Nobody Uses

Every enterprise has one. Usually more than one. How did this happen?


Somewhere in your organisation, there's a platform that cost $2 million.

Maybe it was a CRM that was going to give you a "360-degree view of the customer." Maybe it was an ERP that was going to "streamline operations." Maybe it was a learning management system, a project portfolio tool, a data analytics platform, or an integration layer that was going to "connect everything."

The business case was compelling. The vendor demos were impressive. The implementation partner had references. The steering committee approved it. The executive sponsor championed it.

Two years and $2 million later, here's what you have:

A platform that only 30% of the intended users have ever logged into. A system that runs parallel to the spreadsheets people actually use. A capability that exists on paper but not in practice. An asset on the balance sheet and a punchline in the hallways.

Nobody talks about it anymore. It's too embarrassing. The executive sponsor has moved on. The implementation partner has moved on. The vendor account manager sends quarterly "optimisation" proposals that everyone ignores.

The platform sits there. Maintained but not used. Paid for but not valuable. A monument to something, though nobody can agree on what.

The Graveyard Is Bigger Than You Think

This isn't a single failure. It's a pattern.

Ask anyone who's been in enterprise IT for more than five years. They can name the platforms. They remember the launches, the training sessions, the go-live celebrations. They remember the slow realisation that it wasn't working. They remember the quiet fade into irrelevance.

Most organisations have multiple platforms in this graveyard:

The CRM nobody updates. Sales still tracks deals in spreadsheets because the CRM is "too clunky" or "doesn't fit our process." The data in it is 18 months stale. Reports are meaningless.

The intranet nobody visits. Launched with fanfare about "connecting our people." Now a ghost town of outdated announcements and broken links. Everyone uses email and chat instead.

The project management tool nobody trusts. Supposed to give visibility into all initiatives. In practice, it's updated quarterly for executive reporting and ignored the rest of the time. The real project tracking happens elsewhere.

The analytics platform nobody understands. Was going to "democratise data." Requires a PhD to operate. Three people in the organisation know how to use it. They're too busy to help anyone else.

The integration layer that integrated nothing. Promised to connect all your systems. Connected three of them, partially. The rest still pass data through manual exports and imports.

Each of these represented millions in licensing, implementation, training, and ongoing maintenance. Each had a business case, an executive sponsor, and a moment of optimism.

Each failed in slow motion, over months and years, until failure became the status quo that nobody questions.

How It Happens

The pattern is remarkably consistent. Understanding it is the first step to breaking it.

Phase 1: The Pain

Someone senior feels a pain. Lack of visibility. Inefficient processes. Data silos. Competitive pressure. The pain is real. Something must be done.

Phase 2: The Search

A team is formed to find a solution. They survey the market. They watch demos. They talk to analysts. They visit reference customers. The vendors are helpful, polished, persuasive.

Phase 3: The Selection

A platform is chosen. It has the most features. It's from a reputable vendor. The reference customers said good things. The implementation partner has done this before. The business case shows ROI in 18 months.

Phase 4: The Implementation

The project begins. It's complex. Requirements are gathered, debated, refined. Customisations are needed. Integrations are harder than expected. The timeline slips. The budget grows. But the team pushes through.

Phase 5: The Launch

The platform goes live. There's training. There's communication. There's a moment of achievement. The project team celebrates. The executive sponsor takes credit.

Phase 6: The Fade

Adoption is lower than expected. Users complain about usability. Workarounds emerge. The platform doesn't quite fit how work actually happens. The power users adapt. The casual users abandon it. The data quality degrades.

Phase 7: The Silence

Nobody wants to admit it's not working. The executive sponsor has moved to another role. The implementation partner has moved to another client. The vendor suggests "optimisation" and "change management." Quietly, the organisation routes around the platform. It becomes infrastructure that's maintained but not used.

Phase 8: The Next Pain

A few years later, someone senior feels a pain. The cycle begins again.

The Lies We Tell Ourselves

Throughout this cycle, the organisation tells itself stories to avoid confronting what's happening.

"It's a change management problem."

This is the most common deflection. The platform is fine; people just need to be trained, incentivised, or forced to use it. If adoption is low, it's because we haven't communicated well enough or held people accountable.

Sometimes this is true. Usually, it's an excuse to avoid examining whether the platform actually solves the problem it was bought to solve.

"We haven't fully implemented it yet."

The platform has more features than we're using. Once we turn on modules three and four, everything will click. The roadmap shows a better future. We just need to invest more.

This is the sunk cost fallacy dressed up as strategy. More investment in something that isn't working doesn't make it work.

"The vendor is working on it."

The next release will fix the usability issues. The integration we need is on the roadmap. The vendor has heard our feedback and is responding.

Maybe. But you've been waiting for two years. How long do you wait for a roadmap item that might not come?

"It's better than what we had before."

The previous system was worse. At least this one has modern architecture, better security, mobile access. Progress has been made.

But "better than before" isn't the same as "good." And "better than before" doesn't justify $2 million if the value delivered doesn't match the investment.

"Everyone else uses this platform."

Gartner says it's a leader. Competitors use it. It must be us that's the problem, not the platform.

Every platform has unhappy customers. Magic quadrants measure vendor capability, not your fit. What works for others may not work for you.

What Actually Went Wrong

When you strip away the excuses, failed platforms tend to share common root causes.

The problem wasn't understood deeply enough.

The pain was real, but the diagnosis was shallow. "We need better visibility" became "we need a dashboard tool" without understanding why visibility was lacking in the first place. The platform addressed a symptom, not a cause.

The solution was chosen before the problem was fully defined.

Someone saw a demo and got excited. The vendor shaped the conversation. The selection process became about comparing platforms rather than understanding needs. By the time requirements were written, the platform was already chosen.

Nobody asked whether we should build this ourselves.

The assumption was always buy. The idea of building was dismissed as unrealistic, too slow, too risky. But a custom solution that fits exactly how work happens might have cost less and delivered more than a generic platform that fits nothing.

The people who would use it weren't meaningfully involved.

Requirements came from managers, not users. Demos were shown to executives, not practitioners. Training was done to people, not with them. The platform was optimised for reporting, not for work.

Success was measured by implementation, not by outcomes.

The project was "successful" when it went live. Nobody measured whether it actually solved the original problem. Nobody tracked whether the business case materialised. Go-live was the end, not the beginning.

Nobody had permission to say it wasn't working.

Once the investment was made and the executive sponsor had taken credit, admitting failure became politically dangerous. The organisation's immune system kicked in, protecting the decision from scrutiny.

The Numbers Nobody Calculates

The $2 million is just the visible cost. The real cost is much higher.

Licensing and implementation: The number everyone knows. Let's say $2 million.

Internal effort: The hundreds of hours your own people spent on selection, requirements, testing, training, and ongoing support. Conservatively, another $500K in loaded cost.

Opportunity cost: What those people could have been doing instead. What other problems could have been solved. What capabilities could have been built. Incalculable, but real.

Ongoing maintenance: Annual licensing, support contracts, the two FTEs who spend half their time keeping it running. Over five years, another $1-2 million.

Workaround cost: The spreadsheets being maintained in parallel. The manual processes that persist because the platform doesn't actually work. The duplicated effort. Another hidden million.

Morale cost: The cynicism that sets in when people see big investments fail. The learnt helplessness. The belief that "this is just how it is here." The talent that leaves because they're tired of working around broken tools.

Trust cost: The next time someone proposes a technology investment, the organisation remembers. Scrutiny increases. Approval gets harder. Even good ideas get killed because of the ghosts of failed platforms past.

Add it all up and the real cost of a failed platform isn't $2 million. It's $5-10 million over the platform's lifetime, plus the intangible damage to culture and capability.

And most organisations have multiple failed platforms accumulating these costs simultaneously.
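The rough arithmetic above can be sketched in a few lines. The figures below are the illustrative numbers from this article (with the midpoint of the $1-2 million maintenance range), not benchmarks for any real platform:

```python
# Illustrative total-cost-of-ownership sketch. Every figure here is an
# assumption taken from the article's worked example, not real data.

costs = {
    "licensing_and_implementation": 2_000_000,  # the number everyone knows
    "internal_effort": 500_000,                 # selection, requirements, testing, training
    "ongoing_maintenance_5yr": 1_500_000,       # midpoint of the $1-2M five-year range
    "workaround_cost": 1_000_000,               # parallel spreadsheets, manual processes
}

visible_cost = costs["licensing_and_implementation"]
real_cost = sum(costs.values())

print(f"Visible cost: ${visible_cost:,}")                  # $2,000,000
print(f"Real cost:    ${real_cost:,}")                     # $5,000,000
print(f"Multiplier:   {real_cost / visible_cost:.1f}x")    # 2.5x
```

Even with these conservative midpoints, the real cost is two and a half times the headline figure, before counting opportunity, morale, and trust costs, which resist quantification entirely.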

The Alternative Path

What if, instead of buying the $2 million platform, you had done something different?

Started smaller. Instead of a comprehensive solution, picked one specific problem. Built or bought a minimal solution. Tested it. Learnt. Iterated.

Involved users earlier. Put rough prototypes in front of actual users before committing millions. Discovered that their real workflow didn't match the requirements document. Adjusted before it was expensive.

Built the first version internally. Used AI-assisted development to create something in weeks, not months. It wouldn't have had all the features, but it would have fit how work actually happens. You'd own it. You could change it.

Measured outcomes, not implementation. Defined success as "reduced time to close deals" or "improved forecast accuracy," not "platform went live." Kept measuring. Changed course when the numbers didn't move.

Killed it earlier. When adoption plateaued at 30%, called it. Stopped investing. Wrote off the loss. Moved on. Better to lose $500K learning something doesn't work than $5 million pretending it might.

None of this guarantees success. But it changes the failure mode from "silent, expensive, and multi-year" to "visible, contained, and instructive."

What To Do About the Platforms You Already Have

If you're sitting on a platform graveyard, you have options beyond pretending everything is fine.

Audit honestly. For each major platform, ask: What was this supposed to do? Is it doing that? What percentage of intended users actually use it? What would happen if we turned it off?

Calculate the real cost. Not just licensing, but total cost including internal effort, workarounds, and opportunity cost. Compare that to the value delivered. The numbers might be uncomfortable, but they're clarifying.

Talk to users, not administrators. The people who manage the platform will defend it. The people who were supposed to use it will tell you the truth. Find them. Ask them.

Consider sunsetting. Some platforms should be turned off. The switching cost is lower than the ongoing cost of maintaining something nobody uses. This is hard politically but sometimes necessary.

Don't replace with another platform. The instinct after a failed platform is to find a better platform. Resist this. Understand why this one failed first. The answer might not be another purchase.

Build capability to avoid the next graveyard. The best defence against failed platforms is the ability to build what you need. Custom solutions that fit how work actually happens. Owned by you. Changeable by you.

The Question Nobody Asks

In every platform selection, there's a question that rarely gets asked:

"What if we built this ourselves?"

The question gets dismissed immediately. We're not a software company. We don't have the skills. It would take too long. It's too risky.

But those objections are weaker than they used to be. AI-assisted development has changed what's possible. Small teams can build in weeks what used to take months. The build-versus-buy equation has shifted.

More importantly: a custom solution that fits your actual work is more valuable than a generic platform that fits nobody's work particularly well. Even if the custom solution has fewer features.

The $2 million platform has a thousand features. You use twelve of them. A custom solution with exactly those twelve features, built to fit exactly how you work, might cost a tenth as much and deliver ten times the value.

But you have to be able to build to have that option. And most organisations have outsourced so much capability that building isn't even considered.

That's how you end up with platforms nobody uses. Not because buying is wrong, but because buying became the only thing you could do.

The Monument

The $2 million platform sits in your infrastructure, humming along, maintained by a team that keeps the lights on.

It's a monument. But to what?

To the optimism of the original business case. To the confidence of the executive sponsor. To the persuasiveness of the vendor. To the diligence of the implementation team.

And to the organisation's inability to do anything else. To the absence of alternatives. To the learnt helplessness that made buying the only option, and made admitting failure impossible.

The platform didn't fail because the technology was bad. It failed because the organisation couldn't imagine any other path.

The next time someone proposes a $2 million platform, ask different questions:

What problem are we actually solving? How do we know this platform solves it? What would it look like to build this ourselves? How will we know if it's working? What's our plan if it doesn't?

The graveyard is full. It doesn't need another monument.

This article is adapted from The Capable Organisation by Jason La Greca. The book is a practical guide for technology leaders and executives who are tired of pouring money into platforms that don't deliver—and want to build the capability to do better.

Enterprise Software · Platform Failure · IT Investment · Build vs Buy · Digital Transformation · Capability

Written by

Jason La Greca

Founder of Teachnology. Building AI that empowers humans, not replaces them.

