Scholars and government officials from Newfoundland and Labrador recently found themselves in hot water over a report riddled with fabricated citations. The report, titled A Vision for the Future: Transforming and Modernizing Education, was meant to be a 10-year roadmap for modernizing the province's education system and articulated an initiative that "focus(es) on the entire education ecosystem, with equity, health, well-being, and belonging at its foundation." (source) The finished 418-page report contains 110 recommendations and, allegedly, 15 completely made-up sources.

The co-chairs responsible for the final report (who are also professors of education) have come forward to deny that the sources were added on their watch. They claim that government employees must have added the offending material after it was submitted.

This is a disheartening situation all the way around. Nearly 2 years of work from dozens of people has been invalidated by what was likely some lazy genAI use and incomplete review. The report doesn't exclusively discuss technology, but recommendation 3.6-61 is especially ironic in light of this incident.

3.6-61. Provide learners and educators with essential AI knowledge, including ethics, data privacy, and responsible technology use.

A Vision for the Future: Transforming and Modernizing Education - Education Accord NL

Confirming critical information is, of course, responsible use.

But wait, isn't that the kind of tedious research we're using AI to avoid? This stringent adherence to keeping a human in the loop sure can buff the shine off tools that promise infinite knowledge instantly.

the recursive review committee

It's too convenient to frame this case as a failure of diligence. After all, this committee was not called together to verify the output of an LLM. While this may be an altogether different failure of scholarly rigor, it looks increasingly likely that future committees may serve to ratify machine-generated reports and curricula.

Academics are in an increasingly tenuous position as the poster children of the knowledge workers who are foretold to be the biggest losers of the AI workforce revolution. Even as these prophecies have failed to materialize, the allure of having experts chime in on synthetic content rather than generate it themselves could be too much to resist for productivity-minded administrators. Instructional designers, too, have proposed models for learning material that is only SME-verified, cutting down on development time and costs.

Similarly, genAI presents an alluring shortcut for scholars who have been asked to produce more and more scholarship in an ever-tightening academic market. The ecosystem is becoming rife with AI reviewers reviewing AI-generated manuscripts to be read by AI crawlers that make it harder for humans to access resources.

I've spoken to hundreds of academics in my career, and have never found one, in any field, whose ambition for their expertise was to be relegated to a fact checker. Similarly, I've never met a student who wished that their courses were embedded with factual errors that they were tasked with confirming.

As we reflect on how genAI should or should not be deployed in higher education, we should engage in a thought exercise about what happens when there is more created than can be curated, more developed than can be tested, and truth is buried in a labyrinth of personalized distractions. When we lose the ability to see what's ahead of us, we're bound to step in it.

miles and miles of cow pasture

My rural Texas upbringing inspired my urban adulthood. Among the things I don't romanticize about country life are the keeping of chickens and rambling around cow pastures. They're hot, barren places where you're likely to step in manure.

The modern internet is similar to any stretch of land you'll see out of your car window on I-10. A lot has been written about the enshittification of the internet. As someone who has been online personally and professionally for 30 years, I mostly agree. The problem is one of incentives, and as long as attention is the currency of the internet, you'll have whatever the current equivalent of Italian brainrot shorts, keyword-stuffed SEO blogs, or newsgroup spam is. The ease with which this material can be created, distributed, and watched in 2025 has outpaced most people's ability to avoid it if they'd like to do anything online.

Anyone who is interested in truth and public discourse should be clearheaded about the idea that the product of genAI is decidedly bullshit. I mean this in the academic sense of the word. LLMs do not have a "reckless disregard for the truth" because they can't have any regard at all. They have been designed to mimic text communication, not deliver the truth. Their output is more accurately "soft bullshit," or "bullshit produced without the intention to mislead the hearer regarding the utterer's agenda".

These models aren’t designed to transmit information, so we shouldn’t be too surprised when their assertions turn out to be false.

Hicks, Humphries, and Slater - "ChatGPT is Bullshit"

Criticisms of genAI as a tool for learning extend this train of thought to tutoring and collaborating with AI tools. Flenady and Sparrow argue that asking students to participate in authenticating AI output is pedagogically perverse. This is a valid criticism of how the first wave of prompt-focused chatbot interventions was introduced into instruction. Early forays into teaching with search engines and Wikipedia were critiqued similarly. Since the past rhymes with the future, I believe this particular problem will be smoothed over in similar ways: the tools will become more accurate, humans will become more proficient in using them, and there will still be many horrible applications for education.

There is a future where those who develop models turn them to their own ends, fulfilling the requirement of intent to mislead their users and creating machines that traffic in "hard bullshit". Like social media before it, we'll be reckoning with the political implications of a small number of people controlling the distribution of "truth" to billions for generations.

literacy for thee, shortcuts for me

Those of us with an interest in how AI will shape education are left with an old problem dressed up in a new Scooby-Doo-villain-style mask. ("It was critical thinking all along!") Indeed, the project of education is to help students evaluate information and integrate it into their outlook on the world. Many well-meaning AI literacy initiatives instruct students to always confirm output and disclose their prompts. While some of this advice is extended to staff and faculty, the tenor of the conversation is not equal to its potential effects.

This is a tenuous time for higher education. If there were ever a time to double down on quality over quantity, this is it. Our campus greens should be decidedly free of cow feces. This does not mean ignoring or rejecting AI tools for scholarship or research. If we'd like to lead through the fog, we need to hold ourselves to the highest standards.

Students are often aware (and unhappy) when faculty use AI for grading and feedback. I'll save the nuances of automating grading for another post, but regardless of your (or your institution's) stance on the topic, all AI use by faculty should be transparent and intentional. Expectations for its use by the instructor should be listed right alongside any guidelines for students in the syllabus. If we are to build trust with students, we have to be willing to model what we wish to see.

Accountability should be part of every AI adoption conversation. At the time of writing, no individual has taken responsibility for the fake sources in the Education Accord NL report. The companies that train and provide AI models add disclaimers to their tools reminding users that output needs to be checked. Current models for literacy in the workplace and in educational institutions may contain entreaties to verify information, but very little time is spent on the unintended consequences of letting hallucinations take root in final products.

A screenshot of text that reads "Gemini can make mistakes, so double-check it"
Google implores me to double-check the output of Gemini.

Once people take responsibility for how AI is used, we can begin the hard work of guiding students through a future where false citations lurk around every corner.

AI Disclosure: I used Claude to research the Newfoundland Education Accord incident and some of the academic sources referenced in this post. I did the writing and analysis. Me and my deterministic human brain confirmed all of the sources were real and appropriately represented.

check this post for accuracy