This is the seventh in a series of blog essays that reflect on the educational and organisational challenges facing universities as they navigate intersecting existential and epistemic crossroads.
I have called the series ‘The Mirror University’ for several reasons. The Mirror Universe is an alternate reality from the cultural phenomenon that is Star Trek, in which people who exist in the prime universe have ‘evil’ alternates in the mirror universe. These alternates maintain the structures, relationships, roles and even identities of the ‘prime’ characters but exhibit traits, morals and behaviours that are the antithesis of the higher moral ground taken by our heroes. The mirror university maintains the structures, organisation, practices, and mythologies of what we understand to be a higher education institution but behaves in ways that are the antithesis of the ways of being it aspires to.
The mirror university is a construct.
It doesn’t represent any single institution.
It is not an allegory for the overarching influence of ‘management’.
The aspirational contradictions create a sense of emotional and idealistic liminality for those whose personal, intellectual, or professional identity is deeply rooted in the altruistic conceptions of the university as a site of transformational social good for the community, our students, and the academy itself.
TLDR version
The central argument of the article is that trust is the real existential crisis facing higher education. While public debates fixate on generative AI, academic integrity, declining attendance, or marketisation, these are symptoms of a deeper erosion of trust between students, academics, institutions, government, and society.
Trust is the essential social currency of universities: students must trust the credibility of what they are taught; academics must trust the authenticity of student work; society must trust the integrity, usefulness, and rigour of academic research; and future generations must trust that a university education is worth the investment. Each recent crisis (pandemic pivots, employability discourses, collapsing confidence in research quality, political critique and media narratives) has fractured this trust further.
AI accelerates and exposes these fractures. Institutions rush toward AI policies and potentials without clarity or stability; academics and students risk over-reliance on opaque systems; and the research ecosystem faces increasing contamination from fabricated citations, retractions, predatory sources, and uncritical AI-assisted writing. As these problems compound, public confidence in universities continues to decline, especially among Generation Z and upcoming Generation Alpha, who already view higher education as low-value and low-trust.
Rebuilding trust requires a shift from defensiveness to transparency, from quantity to quality in research, and from policy-driven compliance to relationship-driven partnership. Universities must demonstrate and not merely claim credibility, authenticity, fairness, and ethical practice. This involves trusting students, validating human expertise over algorithmic shortcuts, ensuring research integrity, and strengthening the social capital that underpins the purpose of higher education.
Ultimately, trust is not a slogan but the atmosphere in which higher education thrives and without it, the sector’s future is at risk.
Whatever matters to human beings, trust is the atmosphere in which it thrives
Sissela Bok
The social media opinion-sphere has been awash recently with a renewed debate about the future of higher education in the era of ubiquitous use of (generative) AI (see the recent article in The Australian newspaper entitled ‘Lobotomised by AI: How Australia’s university students are using AI to cheat their way to a degree’). As is often the case in these debates (played out between the media opinion-makers and the academic class), there are positions taken that question the vast traditions of higher education practice, assertions that we have gotten it all wrong, or that the system is broken and we need to reinvent our epistemology and pedagogy from the ground up.
This polarised positioning, and the sometimes-virtuous handwringing that accompanies crisis responses such as the one we are experiencing with AI, are representations of a singular existential crisis in higher education. It is a crisis that is corrupting the student experience, devaluing the social capital we deliver for society and undermining the faith of the public in the efficacy of the sector. This is not a crisis of AI, or of the concurrent crises we have faced, although they have amplified its pace and severity. Whilst each of these immediate crisis states for higher education is a critical experiential frame through which our real, complex crisis is playing out, they are convenient red herrings that can distract all of us from the critical crisis that threatens the very fabric of higher education as a public good and transformative force for innovation.
The real crisis is trust.
Trust, complexity and the educational atmosphere
Guiso and Alrajeh (2015) define trust as ‘the faith and confidence one has that things will turn out as one hopes’. Trust is the only currency that matters in higher education. Without trust, being ‘good’ researchers and ‘good’ educators is simply making it up for an audience whose response doesn’t matter to the delivery of our message (see Nadeem et al., 2021). If a tree falls in the forest….
Trust is especially important in complex ecosystems like higher education, and equally important for the people and societies who reside within that complexity. Complexity shapes the momentum of, and the importance we place on, competing or conflicting interests, values or responses, especially in the face of crisis (Picione & Lozzi, 2021a). Complexity fractures emotional and ontological senses of certainty (or trust) into liminal structures and states of being where reason does not (or cannot) act as a beacon that signals the pathways back to stability and belonging (Picione & Lozzi, 2021b). Complexity forces and takes away choices, leaving individuals in-between states of certainty and uncertainty, in what Picione et al. (2025) characterise as ‘a two-faced Janus in the effort to hold together stability and change, suspension and transformation, subjectivity and collectivity’.
Fuller (2014) makes the case that higher education institutions create and maintain trust within their complex ecosystems by building social capital through relationships within and between their communities. Each successive wave of crisis experienced by higher education over the last decade has fractured the trust within and outside the academy. From the media assertions that our online learning pivot during and after the pandemic was in pursuit of economic gain (Baker, 2022; Fitzgerald, 2025) to the ideological debates about graduate employability and job-ready graduates (Cassidy, 2026), trust in the actions and ambitions of higher education is constantly brought into question. Niedlich et al. (2020) identify that trust in education is not a simple construct, but one represented through complex value judgements like ‘transparency, fairness, effectiveness and efficiency’. Colquitt and Baer (2023) add to this complex construction of trust by arguing that it is created and mediated through interpretations such as reliability, the capabilities of leaders, the ethical frame and fairness displayed in decisions, and a sense of benevolence. However, the leaders of higher education institutions are themselves subject to competing pressures: the urgent decisions demanded by complex university systems, the financial and policy pressures they are under, and the atmospherics within often disparate and fractious faculties all fracture trust relationships both inside and outside the academy (Pillar, 2025). And that brings us right back to (generative) AI.
AI and the fracturing of trust
If the students and lecturers trust too much the decisions that AI takes and the content it generates, they will lose control over the task the AI performs (Ivanov, 2023)
The reality of AI is that, despite assertions that its use and innovation have reached a stable state of certainty, or an equally stable upwards trajectory of exponential transformative impact, both the technology and its applications to higher education are in a state of flux, in part due to the issue of trust. And that is absolutely fine to say, and it shouldn’t stop experimentation and critical evaluation of the potential of AI, and whatever follows it, to help humans solve the most critical challenges we face (including the inequality and environmental effects of AI itself!).
- Systems and platforms are deployed as black boxes, deeply integrated as tools that are replacing human search and task activity without consent or knowledge. They are built into institutional instances which, through policy and firewalls, depend on trust through privacy, data protection and research ethics and offer a sense of safety through ringfenced access and single sign on.
- The impacts of AI on assessment design have been catastrophised, bureaucratised and put through a variety of policy sausage makers to spin out transient sequences of superficial change to protect integrity, quality and enable AI skills integration into all components of teaching, learning and assessment.
- Student use of AI has been variously ‘exposed’ as cheating, efficiency, a consequence of our (legislative) obsession with grades, a symptom of the marketisation of higher education or a storm in a teacup.
In an institutional environment like higher education that craves and is defined by the desire for certainty (despite its assertions to the opposite, claiming change is our new normal), the flux of AI, caught between the clamour for immediate responses and epistemological hand-wringing over the future of everything we do, is fanning the crisis and the chaos of trust. AI isn’t the only game in town and it’s not the only crisis impacting on trust. Across the various fora (mainly social media) that amplify our internal and external interrogations of the sector, responding to AI has become the only act of a responsive and responsible institution. Having a policy, a project or a unique selling point on AI is the next urgent decision for leaders. Anything else is head-in-the-sand stuff, according to the commentariat. But the harsh reality is that AI itself, and the ways in which institutions enable, ignore or reward its (un)critical use, is exacerbating the decline in trust across students, the academy, government, the community and our future state of students and leaders.
Without trust, higher education doesn’t exist as a form of social capital.
Trust, particularly institutional trust, appears to be important because it facilitates a student’s willingness to accept the legitimacy of the educational system in terms of determining future lives but also because it fosters a willingness to become actively involved with the school. (Fuller, 2014)
I have attempted to represent the connected nature of trust in higher education. It shows where trust resides in a university (students, the academy, government, the community and the systems). I argue that trust can be built and maintained within each of these stakeholder groups. This internal trust impacts on partnership and relationship decisions, social and organisational cohesion, and stances on knowledges, expertise and impact. Through teaching and research (as simplified representations of action), trust is also built or damaged between these stakeholders, impacting on attributes such as authenticity, credibility, reliability and responsibility. Taking a future-facing perspective, there is also the future state: the students, staff and community (such as parents) who are yet to engage with our research and teaching in meaningful ways and have not yet formed a judgement regarding trust. If we lose their trust, then the sector simply does not have a future. The evidence on this front is stark and frightening. Generation Z (the current undergraduate cohort) rate the cost/benefit calculation of their university degree positively at less than half the rate of Generation X (Gafner, 2025). If trust in our higher education practice breaks down, that percentage will only decline further.
Students and teachers
Students must trust that what we say in the classroom, and the knowledge and learning it shares, is accurate, reliable and based in evidence. Whatever approach to learning you take, the teacher has a responsibility to present information, ideas and debates with credibility, expertise and authenticity. What happens when this breaks down?
Students seek their information elsewhere.
Teachers must trust that what students put forward as their work in assessment is their own. For all the debates about broken assessment, marks addiction and other reactive nonsense, the simple fact is that if academics cannot trust what students put forward, then our entire form of pedagogical validation (quality assurance) breaks down.
In that case, no one wins when trust is broken.
Researchers, government and community
Trust in the research outputs of the academy is critical to their capability to shape and influence society. Increasingly, journals are facing a crisis of AI-slop writing, hallucinated or misrepresented references (which are alarmingly being re-cited in other works), data errors and a crisis of retraction like we have never seen before (see Tran, 2024). This is challenging the very fundamentals of the research architecture and trust in the validity and reliability of what we research. There are reasons why this is happening, but the consequences are what I want to focus on: a loss of trust in the research that should be shaping the discourses of society. There is also the loss of trust in what researchers use for their own work.
A case study
I was writing part of a literature review recently. Over 50% of what I searched on Google Scholar came from predatory or low-quality journals with lower editorial standards, littered with fake references, limited-scope studies and no immersion in or reference to the theoretical traditions. I dove into the rabbit hole of trust. A reference that was cited as the central theoretical frame appeared several times across the papers. I wanted to check this article out. It was not in Google Scholar. It was not in the archive of the journal it claimed to be from. So I asked my institutional instance of a well-known LLM integration if the article was real. The answer was an emphatic yes, it was real. I asked the instance a further question: ‘I have checked journal <name> and it is not in the journal issue or pages you have cited. I ask again, is the reference real?’ The response was ‘Short answer: No, it is not in that journal.’ So I asked, ‘Why did you lie?’ This is what I received back from my institutionally endorsed LLM (I have blanked the actual article to protect the authors).

This process took me an hour to get to the facts you see above. As an editor of a Q1 journal, I can tell you that many submitted articles are not taking that time. Their authors are not verifying sources (although DOI checks help). More importantly, they are not verifying that a real source says what the LLM tells them it does. The introduction of micro-bias creates imperceptible misrepresentations of what an author has said, which are amplified each time they are re-cited. And each time, trust decays a little bit more. Writers like Dirk Lindebaum have written about this in terms of challenges to epistemic agency and epistemic governance (Lindebaum et al., 2025). Klingbeil et al. (2024) argue that academics are increasingly becoming over-reliant on, algorithmically appreciative of, and uncritically trusting in AI outputs for research and decision making, without any real evidence to warrant that trust. Mezzadri (2025) argues that the use of AI in research is an ethical paradox, in that its inherent unreliability counters any benefit, because humans still need to do the work themselves to validate the outputs. In a metricised ‘publish or perish’ world this is not a universally accepted trade-off. As the sequence of unreliability feeds not just into the immediate article, but is cited and re-cited back into the training data, we reach the conceptual tipping point of discovery collapse, where new ideas, novel approaches, theories or innovative solutions are not supported, inspired or challenged by the veracity of the data used to underpin or inform thinking or expertise.
Back to it…
If trust breaks down between research, the community and the government, then alternative-seeking behaviours replace academic knowledge with corporate reportage, internal analytics and, yes, AI. Generative AI and LLMs become especially attractive as the promises of algorithmic reliability and agential creativity and inspiration are emboldened by the lack of trust in ‘traditional’ research (often seeded by the AI companies themselves in the media; see Cawley, 2025 and Mali, 2025 as examples).
Trust in systems is especially critical
Universities are ecosystems of complex, integrated systems, which are in the main proprietary and often contain black-box code and algorithms. Trust in the systems that crunch data, store and disseminate our research, support our teaching and assessment, and conduct many of our engagements inside and outside the academy is critical. In the above example, that response, with its hurt, slightly accusative tone, came from an institutional instance of an LLM. If we cannot trust our systems not to lie or cause harm, or if they are defined by a primary purpose of pleasing the user over and above the demands of rigour and reliability, then another pillar of our trust cracks. These systems need to be coded and guardrailed in a way that hard-codes trust at the centre of the architecture (along with security). Our students need to know that we have quality assured their contributions to their learning and that what they receive is a fair representation of their performance. And then it circles back again to research. If our systems cannot be trusted, then they should not be doing the academic work of a human and they should not be used in the production of knowledge. Nor, as I have argued in a previous blog, should they be providing feedback on student learning. It is not that AI shouldn’t be doing that process; it is that we cannot trust AI, or the systems behind it, to do so with student learning, credibility and expertise at the forefront.
The future state
Generation Alpha are the next generation to enter higher education, commencing their undergraduate experience in about five years’ time. Generation Z are completing their undergraduate journey and moving slowly into lower and middle management positions (see Deloitte, 2024). There is significant empirical and anecdotal evidence pointing to the dissonance between the educational expectations and priorities of Generation Alpha and Generation Z and the business models of institutions. Their attitudes to university education are potentially moderated by their parents’ and their own experiences with, and decisions to participate in, higher education (Dretsch, 2021; Jukic & Skojo, 2021). A recent report by Gafner (2025) found that 20% of Baby Boomers believed their university experience was a waste of money. That lack of belief in the value of education increased by approximately 10% with each successive generation, reaching around 40% for Millennials and 51% for Generation Z respondents.
Trust in the benefits of a higher education is critical to ensuring the participation of the next generations of students in a university education. Trust is critical to maintaining the reputation of the university as a trusted partner, a trusted voice in debate and a trusted source of expertise in times of crisis. Trust is critical to ensuring that the social capital of our research and education is respected and integrated into their work by Generation Z and Generation Alpha. This is critical for these cohorts, who begin with historically low confidence in major institutions and expect transparency, authenticity and meaningful involvement. Knott (2022) argues that Generation Z is the least likely of all generations to trust higher education, with only 41% expressing trust in universities.
The solution and a mission
We need to rebuild trust in higher education. Endless debates about what we did wrong in the face of AI, during the pandemic or in reaction to government policy won’t rebuild trust. We need to offer a positive vision for the future of our sector where trust is at the centre. Trust needs to be earned, otherwise it is ripe to be exploited as a tool of coercion and control. Strategies, rewards and culture need to do more than say the sector is trusted, they need to evidence it at every opportunity. Where trust fails, then dialogue must be swift and open. Where trust grows, it must be nurtured.
1.
Students need to trust us, and we need to trust them. Breaches of trust have consequences for all parties. We are not always the right people to assert that message inside or outside the academy. We need to empower the right people to persuade those who have lost trust in the value of an education, in the value of undertaking assessment and in the messages we communicate, that we want their trust back. We need to find out, through open dialogue and conversation, what we need to do to be trusted again. Good teaching, good assessment and good learning build trust.
2.
Industry, the community and government need to trust us and our research. The more we need to retract research because of academic misconduct or false data (Sample, 2025), the quicker trust breaks and the more the failures are amplified by those who seek to take over our business model. We need to stop using AI for academic work that should be done by humans. Stop using AI to write journal articles, stop using AI for peer reviews, stop using AI to speed up the writing and editing processes where it rewrites human effort. We need to stamp out the academic misconduct that is corrupting even further an already broken system of research publication and dissemination. We need to rebuild trust in our discoveries, our insights, our contributions to theory and our debates. To do this we must value that impact over the rankings gain of masses of publications. Good research builds trust in our sector.
3.
Industry, the community and government need to trust our graduates. They need to trust that we have educated, engaged, developed and assessed them to be better employees, better leaders, better professionals and better members of society. This is a delayed form of trust, because it may take a decade before the trust is betrayed or reinforced, or it might happen on day one of the graduate job. Trust is critical here for students as well. They need to trust that we have listened to and embraced the requirements of what it means to be a doctor, a teacher, a scientist, a marketer or an economist. Building a culture of trust within the teaching-research-engagement nexus is the responsibility of leaders in higher education. It is a strategic and epistemological imperative to ensure the long-term success and viability of a sector that thrives on building social capital.
…and in the end.
This is not a hitjob on (generative) AI. The overwhelmingly loud debates about how it is disrupting and changing higher education have shone a bright and blinding light on our already existing deficit of trust. But AI, its use cases and the polarised debates about its usage are not to blame for the issues of trust we face. We are all responsible for rebuilding it. We are responsible for the cultures we enable, creating differential value for trust in teaching and research. We are responsible for finding, amplifying and celebrating the voices of those who have benefited from trusting higher education, not punishing them for it. Trust is not a marketing slogan; trust is the fuel of human relationships.
In a 2024 blogpost I described the post-transformative university (Bryant, 2024). It is a third space between the neoliberal ideal and the transformative utopia of universities. A post-transformative university leverages the intellectual ecology of the academy as an engine for authentic, real change that makes a difference to people, to communities, to societies. This is a two-way enabling of trust. A university that can walk and chew gum at the same time, enabling strategies to navigate financial and policy crises whilst building trust with all its stakeholders, not as a promise or manifesto, but a visible manifestation of the soul of the institution. This is the foundation of building trust back as our only currency.
Trust and integrity are precious resources, easily squandered, hard to regain. They can thrive only on a foundation of respect for veracity
Sissela Bok


