Higher Education · 14 min read

Your Retention Algorithm Decided a Student Was At Risk. Who Authorized That Decision?

12 March 2026

Here is a scenario that is no longer hypothetical. A student discovers she was placed into a remedial advising track after an early alert system flagged her as high-risk for attrition. The system used her course performance data, login activity, and demographic variables to generate the flag. An advisor acted on the recommendation. The student was never told that an algorithm informed the intervention.

She files a complaint. She wants to know what data was used, whether the model considered her race or socioeconomic status, and who decided that an algorithm should have a role in shaping her academic path. The institution does not have a clear answer. The early alert system was purchased by the enrollment management office, configured by IT, and used by advising staff, but no one documented who held decision authority over the system's role in academic outcomes. The institution had no framework defining when AI involvement in student decisions requires documentation, disclosure, or oversight, and no process for distinguishing between a productivity tool and an institutional accountability exposure.

The accreditation body now wants documentation of AI oversight in academic decision-making. The faculty senate wants to know why they were not consulted. In the time it takes to form the committee that will study the issue, the exposure has already moved to the President's office. Then the Board's.

I built a university from the ground up. I have sat across from accreditation bodies with institutional survival on the line. What I know from that experience is this: the governance question always arrives. The only variable is whether you designed the answer or scrambled to construct one under pressure.

Consider three figures from the EDUCAUSE AI Landscape Studies. Roughly 90% of higher education institutions are actively exploring or integrating AI into teaching, research, and operations (2025). Only about 39% have formal AI acceptable use policies in place (2025). And while roughly 80% of faculty and staff use AI tools, fewer than 25% are aware of any formal institutional policy (2024).

Read those numbers together. The vast majority of your faculty and staff are using AI. The vast majority of institutions have not defined the rules. And the vast majority of your people do not even know if rules exist.

That is not an awareness problem. That is an architecture problem. And accreditors are already asking about it.

Higher education faces a governance complexity that no other sector shares. Healthcare carries its own regulatory weight, but only in higher education does the President operate within shared governance structures where the faculty senate, academic affairs, legal, IT, and the board all hold formal standing over different dimensions of the same problem.

This means AI governance in higher education cannot be imposed from the top down. It has to be architected for a political environment where every stakeholder group has legitimate authority over different dimensions of the problem. Faculty own academic integrity. IT owns the tool registry. Legal owns FERPA compliance. The Provost owns academic standards. The President owns institutional risk.

When no one coordinates the architecture, each stakeholder governs their silo. And the gaps between those silos are where institutional risk concentrates.

For institutions with international student populations or European partnerships, the EU AI Act classifies AI-assisted admissions systems and student performance analytics as high-risk, with core obligations for high-risk systems entering into force by mid-to-late 2026. FERPA imposes strict constraints on how student data is processed by third-party AI systems. Major accreditors are already asking AI governance questions. If your institution has not begun preparing, you are not early. You are behind.

Middle States issued an AI-specific accreditation policy effective July 2025 requiring institutions to ensure AI use aligns with their Standards for Accreditation. Other regional accreditors are developing similar guidance. The direction is clear, and the institutions that document their governance architecture now will be prepared when their accreditor formalizes the expectation.

This is not a theoretical regulatory horizon. The questions are being asked now. And they are not the questions most institutions are prepared to answer. They are not asking whether you have an AI policy. They are asking whether you can demonstrate who holds decision authority over AI-influenced academic outcomes, whether that authority is documented and enforced, and whether your faculty governance structure was involved in defining those boundaries.

Purdue's Board of Trustees approved the AI working competency graduation requirement at its December 12, 2025 meeting, signaling that boards are beginning to engage AI not just as an IT question but as an academic and fiduciary one. Colorado's AI Act targets high-risk AI in consequential decisions and will directly affect institutions operating in that state.

The institutions that are ready will have designed their governance architecture before the question was forced. The ones that are not will be building it during the review.

'AI adoption is 10 percent technology. It is 90 percent human architecture. The institutions that navigate AI successfully will not be those with the most advanced models. They will be the ones that defined authority before scale created exposure.' - Dr. Tiffany Masson, Falkovia

Let me be direct about what the work involves, because I have found that most institutions confuse AI policy with AI governance. They are not the same thing. Policy is a document. Governance is the decision engine that makes policy operational.

The Human Authority Line. For every AI system that touches an academic outcome (admissions, grading, advising, student conduct), document the point where machine judgment ends and human judgment begins. Not in theory. In writing. With a named owner. If you have not drawn these lines explicitly, the algorithm has drawn them for you, silently, by default.
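To make the idea concrete, here is a minimal sketch of what a documented authority line could look like as a registry entry. The schema, field names, and the example owner are illustrative assumptions, not a standard; the point is that each system that touches an academic outcome gets a written record with a named human owner.

```python
from dataclasses import dataclass

# Hypothetical registry entry for one AI system that touches an academic
# outcome. Field names are illustrative, not an established schema.
@dataclass
class AuthorityLine:
    system: str             # the AI tool (e.g., an early alert system)
    academic_outcome: str   # admissions, grading, advising, student conduct
    machine_role: str       # where machine judgment ends
    human_owner: str        # the named role that holds decision authority
    disclosed_to_students: bool

    def is_documented(self) -> bool:
        # The line is only "drawn" if an actual owner is named.
        return bool(self.human_owner.strip())

registry = [
    AuthorityLine(
        system="Early alert / retention analytics",
        academic_outcome="advising",
        machine_role="generates a risk flag; never assigns a track",
        human_owner="Dean of Advising",  # hypothetical owner
        disclosed_to_students=True,
    ),
]

undocumented = [r.system for r in registry if not r.is_documented()]
print(f"{len(registry)} systems registered, {len(undocumented)} missing a named owner")
```

Even a spreadsheet with these columns would serve; the format matters less than the named owner and the explicit boundary of machine judgment.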

The First 90 Minutes. Governance is tested when something goes wrong. In the first 15 minutes, can a named executive pause the specific AI tool? By 60 minutes, has the scope been assessed and leadership notified? By 90 minutes, are external communications prepared? If this protocol does not exist, your governance is a policy document, not an operating system.
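The 15/60/90-minute protocol above can be sketched as a simple checklist with deadlines and named roles. The checkpoint actions and owner titles here are illustrative assumptions; the test is whether, at any elapsed time, you can list what is overdue and who owns it.

```python
from dataclasses import dataclass

# A minimal sketch of the 15/60/90-minute protocol described above.
# Checkpoint wording and owner titles are illustrative assumptions.
@dataclass
class Checkpoint:
    deadline_minutes: int
    action: str
    owner: str  # a named role, not a committee

PROTOCOL = [
    Checkpoint(15, "Pause the specific AI tool", "Named executive (e.g., CIO)"),
    Checkpoint(60, "Assess scope and notify leadership", "Incident lead"),
    Checkpoint(90, "Prepare external communications", "Communications lead"),
]

def overdue(elapsed_minutes: int, done: set) -> list:
    """Return actions past their deadline that have not been completed."""
    return [c.action for c in PROTOCOL
            if elapsed_minutes >= c.deadline_minutes and c.action not in done]

# Example: 70 minutes into an incident, only the tool has been paused.
print(overdue(70, {"Pause the specific AI tool"}))
```

If no one can answer "who pauses the tool?" before an incident, the protocol does not exist, whatever the policy document says.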

Shadow AI Visibility. Faculty and staff are using AI tools you have not approved and cannot monitor. They are not being reckless. They are routing around official tools that are too slow, too restrictive, or too poorly understood. Shadow AI is not a compliance failure. It is a trust signal. Fix the root cause, not just the symptom.

There is a dimension of this challenge that technology governance entirely misses. When a machine can approximate what took a faculty member twenty years to master, the brain does not process that as a productivity upgrade. It processes it as an existential question.

You see it in the senior professor who raises the same objections in every steering committee meeting. In the department chair who quietly avoids AI pilots. In the faculty member who used to trust her read of a student and now wonders if the algorithm sees something she missed.

This is not resistance. It is self-preservation. And if you treat it as a training problem, you will lose the very people whose expertise is the foundation of your institution's academic credibility.

The institutions that win the AI transition will not be the ones with the best algorithms. They will be the ones that understood the psychology of the people using them. That starts with validating the human contribution before you introduce the synthetic one.

'The most dangerous person in your organization is not the skeptic who refuses to use AI. It is the high-performer who trusts it completely.' - Dr. Tiffany Masson, Falkovia

Next Step

Ready to govern AI, not just deploy it?

Schedule a confidential conversation about your institution's AI governance architecture.

Start a Conversation