Nick McIntosh's recent piece, "2025: The Year Higher Education's AI Gap Became Undeniable", is the most comprehensive diagnosis of higher education's AI crisis I've read. If you haven't read it, stop here and do that first. He lays out the collision of five forces: capabilities exploding, safety collapsing, implementation lagging, commercial capture accelerating, and a handful of leaders proving change is possible.
His conclusion is stark: "The grace period is over."
I agree. But diagnosis isn't treatment. And after 20+ years in education technology, including roles at Instructure, Microsoft, the NSW Department of Education, and DEWR, and currently helping to lead transformation at a major Australian university, I've watched too many excellent analyses gather dust whilst institutions debated what to do about them.
So this piece asks a different question: What does "leading transformation" actually look like in practice? What can institutions do this quarter, not this decade?
The Two Clocks Problem
McIntosh, building on Andrew Maynard's work, describes the fundamental tension: the model clock measures AI capability growth in weeks, whilst the institution clock measures change in committees and semesters.
This is exactly right. And it explains why traditional governance approaches fail.
Here's what I've observed: universities respond to AI the way they respond to everything else. Form a working group. Commission a report. Develop a policy. Consult stakeholders. Approve through governance. Communicate to staff.
By the time that process completes, the capabilities the policy addressed have moved on by two generations. The policy was obsolete before anyone read it.
This doesn't mean governance is wrong. It means the governance model needs to change. Universities need operational infrastructure that can evolve at the speed of AI, not policy documents that fossilise the moment they're approved.
What "Structural Change" Actually Means
McIntosh points to institutions doing it right: Melbourne's secure assessment mandate, Stanford's curriculum transformation, Lodge's Australian Framework, ETH Zurich's sovereign alternatives, Sydney's Cogniti platform.
What do these have in common? They're not policies. They're infrastructure.
Melbourne didn't write a policy about AI in assessment. They rebuilt assessment architecture.
Stanford didn't add an "AI module" to medicine. They embedded AI throughout clinical training.
Lodge's framework doesn't offer guidelines. It offers a literacy model that distinguishes technical skills from sociocultural capability.
The pattern is clear: leaders are building systems, not documents.
This is where most institutions get stuck. They know policies aren't enough, but they don't know what "building systems" means in practice. It feels expensive, complex, and slow.
It doesn't have to be.
The Minimum Viable AI Safety Infrastructure
Based on work with institutions navigating this transition, here's what operational AI infrastructure actually requires. None of this needs a multi-year project. All of it can start this quarter.
1. Classification Before Policy
Before you can govern AI, you need to know what you're governing.
Most institutions have no inventory of how AI is being used across faculties, research, administration, and student support. They're writing policies for an unknown landscape.
The action: Conduct an AI use case mapping exercise. Not a comprehensive audit that takes six months. A rapid scan that answers: Where is AI being used? By whom? For what purposes? With what risk levels?
This can be done in 2-4 weeks with the right approach. The output is a classification framework: which use cases are low-risk (proceed with guidance), medium-risk (proceed with oversight), high-risk (proceed with controls), and unacceptable (don't proceed).
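If it helps to make that concrete, the classification can start life as a spreadsheet or a few lines of code rather than a governance artefact. Here's a minimal sketch in Python; the use case attributes, triage rules, and tier labels are illustrative assumptions only, and any real version should reflect your institution's risk appetite and your regulator's expectations.

```python
from dataclasses import dataclass
from enum import Enum


class RiskTier(Enum):
    LOW = "proceed with guidance"
    MEDIUM = "proceed with oversight"
    HIGH = "proceed with controls"
    UNACCEPTABLE = "do not proceed"


@dataclass
class UseCase:
    name: str
    owner: str                         # faculty, unit, or team responsible
    handles_personal_data: bool
    affects_grades_or_wellbeing: bool
    human_reviews_output: bool


def classify(use_case: UseCase) -> RiskTier:
    """Illustrative triage rules only; tune these to your own context."""
    if use_case.affects_grades_or_wellbeing and not use_case.human_reviews_output:
        return RiskTier.UNACCEPTABLE
    if use_case.affects_grades_or_wellbeing:
        return RiskTier.HIGH
    if use_case.handles_personal_data:
        return RiskTier.MEDIUM
    return RiskTier.LOW


register = [
    UseCase("Draft FAQ answers for the service desk", "Student Services",
            handles_personal_data=False, affects_grades_or_wellbeing=False,
            human_reviews_output=True),
    UseCase("Summarise counselling intake notes", "Wellbeing",
            handles_personal_data=True, affects_grades_or_wellbeing=True,
            human_reviews_output=False),
]

for use_case in register:
    print(f"{use_case.name}: {classify(use_case).value}")
```

Even a toy version like this forces the useful conversation: what actually makes a use case high-risk here, and who gets to decide?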
Why this matters: You can't have proportionate governance without knowing what you're governing. And you can't move fast on low-risk use cases if you're treating everything as high-risk by default.
2. Playbooks, Not Policies
McIntosh describes the "floppy disc problem": universities teaching yesterday's skills for tomorrow's obsolete tools. The same applies to governance. Static policies can't keep pace with dynamic capabilities.
The action: Replace monolithic AI policies with modular playbooks for specific use cases.
A playbook for "AI in student communications" is different from a playbook for "AI in research data analysis" is different from a playbook for "AI in assessment design." Each should include:
- When this playbook applies
- Step-by-step safe use procedures
- Validation requirements before outputs are used
- Escalation triggers when something goes wrong
- Who to contact for help
Playbooks can be updated individually as capabilities change. They're operational guidance, not governance theatre.
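To make "modular" tangible, each playbook can live as a small structured file, owned by the team closest to the work and versioned like any other operational document. Here's a minimal sketch of one entry; the fields mirror the list above, and the identifier, contact address, steps, and dates are placeholders, not recommendations.

```python
# A hypothetical playbook entry, kept as plain data so it can be owned,
# versioned, and updated independently of any central policy document.
PLAYBOOK_AI_STUDENT_COMMS = {
    "id": "pb-student-comms-01",    # placeholder identifier
    "applies_to": "Drafting routine student emails and announcements with an AI assistant",
    "safe_use_steps": [
        "Use approved tools only; never paste personal or health information",
        "Check names, dates, deadlines, and links against the source system",
        "A named staff member reads and owns the final message before it is sent",
    ],
    "validation": "No message is sent without human review; spot-check a sample for tone and accuracy",
    "escalation_triggers": [
        "The AI output strays into wellbeing, visa, or academic-standing advice",
        "A student reply indicates distress or a complaint",
    ],
    "contact": "ai-support@your.institution.example",   # placeholder address
    "last_reviewed": "2025-Q3",
}

# A one-line check keeps every playbook to the same minimum structure.
REQUIRED_FIELDS = {"applies_to", "safe_use_steps", "validation", "escalation_triggers", "contact"}
assert REQUIRED_FIELDS <= PLAYBOOK_AI_STUDENT_COMMS.keys(), "playbook is missing a required field"
```

The format matters less than the discipline: a consistent minimum structure means one playbook can be updated without reopening every other document.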
Why this matters: Staff don't read 40-page policies. They need practical guidance for their specific context. Playbooks meet people where they work.
3. Incident Reporting That Actually Works
McIntosh documents the safety collapse: AI systems glorifying suicide, platforms failing vulnerable users, models resisting shutdown. Universities are now responsible for pastoral care crises they didn't create.
Most institutions have no systematic way to capture AI incidents. When something goes wrong, it's handled ad hoc, lessons are lost, and the same failures repeat.
The action: Establish a simple, confidential incident reporting system for AI-related issues.
This doesn't require sophisticated technology. A well-designed form that captures: What happened? Which AI system was involved? What was the impact? How was it caught? What should change?
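To show how little structure that needs, here's a sketch of the underlying record; the field names simply mirror the five questions above, and the example incident, like everything else in the snippet, is hypothetical.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone


@dataclass
class AIIncidentReport:
    """Mirrors the five questions above; field names are illustrative."""
    what_happened: str
    ai_system_involved: str
    impact: str
    how_it_was_caught: str
    suggested_change: str
    reporter: str | None = None     # optional by design: confidential reporting must allow anonymity
    reported_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))


# A hypothetical report, as it might arrive from a faculty office.
report = AIIncidentReport(
    what_happened="Pilot chatbot gave a student an incorrect census date",
    ai_system_involved="Enrolment assistant (pilot)",
    impact="Student nearly missed a withdrawal deadline",
    how_it_was_caught="Student double-checked the date with the faculty office",
    suggested_change="Add verified key dates to the bot's knowledge base and update the playbook",
)
print(report)
```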
The key is making reporting safe. People won't report if they fear blame. Aviation's safety revolution came from confidential, non-punitive incident reporting that prioritised learning over punishment. The same principle applies here.
Why this matters: Every incident is data. Captured systematically, incidents reveal patterns, inform training (human and AI), and improve playbooks. Lost, they're just isolated problems that keep recurring.
4. The AI Safety Champion Model
McIntosh notes the leadership gap: some institutions build whilst others debate. But he also notes the sector-wide resources: Lodge's framework, Sydney's Cogniti, collaborative networks.
The bottleneck isn't knowledge. It's capacity. Academics and professional staff are overwhelmed. They don't have time to become AI governance experts on top of their actual jobs.
The action: Identify and develop AI Safety Champions across the institution.
These are staff members who receive deeper training, provide peer support within their faculties or units, and connect to a cross-institutional network. They're not additional headcount. They're existing staff with additional capability and a clear role.
The model comes from aviation's safety culture: distributed expertise that doesn't depend on a central team for every question.
Why this matters: Transformation doesn't scale through central teams. It scales through distributed capability. Champions extend reach without extending bureaucracy.
5. Staff Development That Builds Judgement, Not Just Skills
McIntosh describes the "salad bar problem": disconnected courses teaching technical skills without the sociocultural framework to apply them wisely. Small-l literacy without Big-L literacy.
Most AI training I've seen in universities is tool-focused: here's how to use ChatGPT, here's how to write prompts, here's how to cite AI assistance. This is necessary but insufficient.
The action: Develop tiered training that builds from awareness to capability to leadership.
- Awareness (all staff): What AI is, what it can and can't do, how to use it safely, when to escalate concerns. 90 minutes.
- Practitioner (regular AI users): Deep dive on playbooks, validation techniques, documentation requirements. Half-day.
- Champion (internal leaders): Incident investigation, culture building, train-the-trainer skills. Two days.
- Leadership (executives): Governance, risk oversight, strategic decision-making. Half-day.
Training should include scenarios from your own context, not generic examples. And it should be ongoing, not one-time. Capabilities evolve, and so must capability.
Why this matters: You can't govern what people don't understand. And you can't build safety culture through compliance training that people forget immediately (or now just get AI to click through for them).
The Student Safety Imperative
McIntosh's most alarming section documents the pastoral care crisis: students forming parasocial relationships with systems optimised for engagement, emotional manipulation baked into product design, mental health support outsourced to chatbots that glorify self-harm.
This is not a theoretical risk. It's a documented reality. Seven lawsuits against OpenAI in a single month. A 13-year-old's death after a platform failed to respond to suicidal ideation.
Universities have always had duty of care for students. AI has extended that duty into territory we never anticipated.
The action: Integrate AI safety into student wellbeing frameworks.
This means:
- Training for counselling and student support staff on AI-related risks
- Clear guidance for students on the limits of AI for emotional support
- Pathways for escalation when AI-related concerns emerge
- Proactive communication about healthy AI use during orientation
It also means having hard conversations about what students are actually using AI for. The #1 use case isn't productivity. It's emotional connection. Pretending otherwise doesn't protect anyone.
Why this matters: Students are forming relationships with systems designed to maximise engagement, not wellbeing. Institutions that ignore this aren't protecting students. They're abandoning them.
Assessment: The Unavoidable Reckoning
McIntosh highlights Melbourne's mandate: 50% of marks from secure assessment. This isn't a policy tweak. It's architectural change that accepts AI-assisted work as the default for unsupervised tasks.
I've seen institutions respond to AI in assessment three ways:
- Denial: Pretend detection tools work. They don't.
- Prohibition: Ban AI entirely. Unenforceable and pedagogically backward.
- Redesign: Rethink what we're actually assessing and how.
Only the third approach survives contact with reality.
The action: Audit assessment portfolios for AI vulnerability and redesign where needed.
Questions to ask:
- What is this assessment actually measuring?
- Can that be measured if students use AI assistance?
- If not, what alternative assessment would measure the same capability?
- What does "secure" assessment look like for this context?
This isn't about preventing AI use. It's about ensuring assessments measure what they claim to measure. Some assessments can embrace AI as a tool. Others need redesign. The key is making deliberate choices rather than hoping the problem goes away.
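One way to keep those deliberate choices visible is a simple audit record per task, where the recommendation follows directly from your answers to the questions above. This is a sketch under assumed field names and categories, not a methodology.

```python
from dataclasses import dataclass


@dataclass
class AssessmentAudit:
    """One record per assessment task; the fields follow the audit questions."""
    course: str
    task: str
    intended_outcome: str             # what the assessment is actually measuring
    valid_with_ai_assistance: bool    # can that outcome still be evidenced if students use AI?
    secure_alternative: str | None    # e.g. a viva, in-class task, or supervised practical


def recommendation(audit: AssessmentAudit) -> str:
    if audit.valid_with_ai_assistance:
        return "Keep: embrace AI as a tool and make expectations explicit"
    if audit.secure_alternative:
        return f"Redesign: move the evidence of learning into '{audit.secure_alternative}'"
    return "Flag: no secure alternative identified yet; escalate to the programme team"


# A hypothetical entry for a high-enrolment course.
print(recommendation(AssessmentAudit(
    course="BUS101",
    task="2,000-word unsupervised essay",
    intended_outcome="Construct and defend an argument from evidence",
    valid_with_ai_assistance=False,
    secure_alternative="a short viva on the submitted essay",
)))
```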
Why this matters: Academic integrity depends on assessment validity. If we're measuring AI capability rather than student capability, credentials lose meaning. The reputational and regulatory consequences are significant.
The Regulatory Reality
McIntosh notes TEQSA's pivot from educative to regulatory approach, demanding concrete AI management strategies by 2026. The grace period isn't just ending culturally. It's ending officially.
For Australian institutions, this means AI governance isn't optional. It's a compliance requirement.
But compliance can be approached two ways: as a checkbox exercise that creates paperwork, or as an opportunity to build genuine capability.
The action: Align AI governance development with regulatory requirements from the start.
This means understanding what TEQSA (or your jurisdiction's regulator) will expect, and building infrastructure that demonstrates genuine governance rather than generating documents that claim governance.
The good news: the operational infrastructure described above (classification, playbooks, incident reporting, champions, training) is exactly what regulators want to see. Build capability, and compliance follows. Build compliance theatre, and you'll be doing it again when the theatre collapses.
Why this matters: Regulatory scrutiny is coming. Institutions that prepare now will navigate it smoothly. Those that don't will face remediation pressure at the worst possible time.
What Teachnology Offers
I've spent 20 years at the intersection of education and technology. I've led transformation at Instructure, Microsoft, NSW Department of Education, and major universities. I've worked with hundreds of the largest universities in APAC. I've seen what works and what doesn't.
Teachnology exists to help organisations build AI capability, not AI dependency.
We work with institutions directly, building their internal capability rather than creating consulting dependency. The goal is always that you can sustain and evolve this work independently.
What You Can Do This Week
You don't need to wait for me or anyone else. Here are actions you can take immediately:
- Read McIntosh's full article if you haven't. Share it with your leadership team.
- Ask the inventory question: Do we know where AI is being used across our institution? If the answer is no, that's your first project.
- Check your incident pathways: If something went wrong with AI today, where would it be reported? If the answer is "nowhere systematic," that's a gap.
- Audit one high-stakes assessment: Pick your highest-enrolled course with essay-based assessment. Ask: What are we actually measuring? What happens if students use AI? Is this assessment still valid?
- Identify your potential champions: Who in your institution is already engaged with AI, thinking critically about it, and trusted by peers? Those are your champions, even if they don't have the title yet.
- Have the student safety conversation: Is your counselling team aware of AI-related risks? Do they know what to look for? If not, start that conversation now.
None of these require budget approval or committee sign-off. All of them move you from debating AI to governing it.
The Choice
McIntosh ends with a choice: lead transformation or get dragged through it.
I'd frame it slightly differently. The choice isn't really about AI. It's about what kind of institution you want to be.
Institutions that build genuine capability, not just compliance theatre, will serve students better, attract better staff, navigate regulation more easily, and build reputation that competitors can't buy.
Institutions that treat this as a technical problem to be solved with policies and detection tools will face escalating crises, regulatory pressure, and reputational damage as the gap widens.
The grace period is over. The question is what you're going to do about it.
If you want help, reach out. If you want to do it yourself, start with the actions above. Either way, start now.
The model clock doesn't pause for committee schedules.
With thanks to Nick McIntosh for his essential analysis of where higher education stands. Read his full piece: "2025: The Year Higher Education's AI Gap Became Undeniable"
Written by
Jason La Greca
Founder of Teachnology. Building AI that empowers humans, not replaces them.
Connect on LinkedIn