
The Impact of AI on Educational Institutions

Written by Chris Meylan | Jun 25, 2024

In January of 2023, Christian Terwiesch, Co-Director of the Mack Institute for Innovation Management at the University of Pennsylvania’s Wharton School, published a fascinating white paper: Would ChatGPT Get a Wharton MBA? Terwiesch performed an experiment. He challenged ChatGPT 3 to take the final exam for a sample Wharton MBA course. The results? ChatGPT would have earned a grade of B to B-.

Keep in mind: Wharton is one of the world’s most elite business schools, ranked #1 in the United States in 2024 by U.S. News & World Report. If artificial intelligence can perform at this high level, think about all the implications this kind of technology has for those of us responsible for educating and preparing the next generation of professionals. Both the challenges and opportunities presented by AI are daunting.

I recently wrote about the growth of AI and pondered the massive impacts it will have—and is already having—on those of us in the education sphere. As I mentioned in that post, this topic poses such a dramatic challenge to educators and students that I’ve decided it deserves its own dedicated series of articles. Today, what I would like to explore with you specifically are the impacts artificial intelligence platforms have at the level of the educational institution. In future posts, we’ll drill down to the impacts on teachers and students themselves.

How should schools address these new technologies? Ban them? Teach students how to use them? Wait and see? Leave the question up to each individual instructor? Let’s dive into the issue together.

The Challenges of AI

Are the results generated by AI perfect? No, far from it. ChatGPT is known for “hallucinating,” inventing facts out of thin air. Its responses are often formulaic. Its tone, less than human. Even in Terwiesch’s experiment, he noted that the initial responses generated by ChatGPT were flawed. It completed some tasks better than others and made simple math errors when doing calculations. That said, a little bit of human feedback upgraded the outputs and allowed ChatGPT to return quite high-quality answers for many of the exam problems. His experiment also used the ChatGPT 3 platform that existed a year and a half ago. As of May 2024, we’ve now progressed to ChatGPT-4o. Already, the processing and outputs generated by ChatGPT have improved significantly. Responses sound more human and natural, and they will only continue to improve. AI technology is evolving rapidly.

What does this mean for teachers who want to ensure their students are actually learning and doing the work they’re assigned? Cheating will become ubiquitous. The siren call of AI is seductive. Put yourself in a young person’s shoes. Do you want to spend eight hours faithfully doing a project? Or would you rather spend 20 minutes feeding it to AI, making a few quick adjustments and then heading out to enjoy your weekend?

Technology designed to evaluate whether a written work has been produced by AI exists, but the results of these services are quite dubious. Detecting academic dishonesty in this brave new world has become a Sisyphean task. As soon as we’re able to recognise the tell-tale signs of AI content, the bar moves. If AI can write a high-level essay, complete with references, at the level of a PhD student without being detected, what stops a student from simply asking ChatGPT or Gemini to hand them an already finalised report?

The ethical concerns loom large, but perhaps even more concerning for educational institutes is the negative impact AI usage can have on the learning process. The biggest worry teachers have is whether students will actually do the work. In education, process can often be much more important than outcome. The goal of writing an analytical essay is not to produce an immaculate essay (though that’s certainly nice); the goal is to learn how to do the analysis. Mistakes are learning opportunities. They can be even more important than successes. By over-relying on AI tools, students may be cheating, but more importantly they’re cheating themselves out of learning opportunities. They lose the entire value of the assignments and tests they’re completing. They miss out on developing the problem-solving, analytical, evaluative, critical thinking, research, synthesis and collaborative skills that these assignments have been designed to teach.

Towards an AI Policy

The first step every institute should take is to create a student policy that guides students in using AI effectively and responsibly. At AIHM, we are in the process of writing our AI policy. Other administrators and educators who are currently developing their own AI policies know how difficult, and how vital, this task is.

An effective artificial intelligence policy must articulate when AI is allowed and when it is prohibited. The policy should identify how students should cite AI, include a warning about the technology’s limitations and students’ accountability for AI output, and highlight the importance of ethical use. A school’s AI policy must also emphasise that AI should be used as a learning tool, not merely a content generator. Providing such a policy helps students understand these tools’ potential benefits and drawbacks, and it encourages students to explore AI within the set boundaries.

Approaching Artificial Intelligence for Undergraduate Students

Recently, I had the chance to discuss with Prof. Amit Joshi from IMD Lausanne the ways we should be approaching AI in the context of teaching undergraduate students. Having done a great deal of work on generative AI and how AI is impacting strategies in the corporate world, Prof. Joshi possesses deep expertise and insight on this topic and was a wonderful person to bounce ideas off. He outlined three possible approaches that may be used singly or in combination:

1. Ban the Use of AI

Banning artificial intelligence technology use in some (or many) contexts can be a valid approach. In particular, institutes may choose to ban AI for first-year students in order to teach them the core concepts they need to know to progress further. Prof. Joshi stressed that there are some theories students need to know by heart and there is no way around that. For instance, when students are studying marketing, they need to learn the concept of the marketing mix, what segmentation is, what a distribution strategy is, and so on. To ensure this learning, perhaps the best way to assess students is through in-class exams or viva voce.

2. Make the Use of AI Optional

Under this policy approach, AI use is a case-by-case discussion and a decision that each subject expert and lecturer can make. Students can be allowed to use ChatGPT and other generative AI tools in order to carry out specific tasks such as performing research. In the real working world, AI can be a productive and suitable tool. However, students must ensure that information is properly referenced and checked. If there is a mistake in that information (because, for example, AI hallucinates), it’s the student’s responsibility.

3. Make AI Usage Compulsory

In this third approach, AI use may be directly required as part of the assignment. Moreover, learning effective and ethical uses of AI might be an integral part of the assignment, in addition to any other desired learning outcomes.

As an example, consider the following assignment, which could be particularly well suited to undergraduate students towards the end of their degree programme: students who have already gained a degree of maturity, a solid foundation in their subject area and previous experience with AI tools. The lecturer can ask for two submissions as part of one assignment. First, students create a prompt in a generative AI tool and submit the initial output it produces. For part two of the assignment, students evaluate the initial output and then modify the prompt to produce a refined second output. Students can then be asked to write a report critically analysing what the AI has produced and comparing it to what they know. This form of assignment activates and practises a very high level of critical thinking; the learning derives from the process itself. Again, this teaching approach prioritises process over outcome, a very sound pedagogical position.

Every school is different. While there is no one perfect policy that applies equally to all schools, all programmes, all majors and all student bodies, the three approaches above are a good starting point for discussing and developing your school’s student AI policy.

Tactics to Address AI’s Challenges

Policy sets a framework, but of course, this is only the first step. Moving beyond top-level frameworks, instructors must update their pedagogy and individual teaching tactics to address the new challenges AI introduces.

Two tactics gaining popularity in educational circles are integrating more oral assessments and in-class exams into coursework. While take-home exams and out-of-class assignments have some advantages that have led to their more widespread adoption in recent years, AI tools render these types of evaluations more challenging. Whether an all-out ban on take-home exams is the most appropriate solution is open to debate, but shifting the balance towards testing methods that are guaranteed to be AI-free can be extremely useful.

Another way to circumvent the use of AI is to design assignments so that they must integrate reflections on personal experience or information that by its nature must come from original data and the student’s own individual life and perspective. While AI excels at certain tasks, it still fails significantly at those that require creativity and a unique human touch.

More than anything else, the key to overcoming the challenges of AI lies in prioritising critical thinking. As AI grows and becomes more prevalent in the world, teaching critical thinking is more important than ever. We must teach our students to critically evaluate the results AI generates. Is the machine hallucinating? It might provide an okay answer, but is it the best answer? What aspects of the problem is AI overlooking? What potential solutions has it ignored? What biases might AI be perpetuating? What misinformation is it recycling?

Some Final Thoughts on the Limitations of AI

Artificial intelligence draws on massive data sets. Essentially, it combs through colossal banks of human-produced content, learning from it and then trying to reproduce what it considers to be a human-like response. The reason so much AI content comes across as reductive, repetitive in style and not quite human is that this is exactly what AI is doing. It’s reducing its diverse data sets into a sort of “average”. Content produced by generative AI highlights what is called the fallacy of averages. Reducing a data set to its averages can often produce useful information about overall trends but in other ways blurs the importance of individual difference.
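To make this concrete, here is a minimal Python sketch of the same averaging effect the flag map below illustrates. The RGB values are rough approximations of the French tricolore, assumed purely for illustration; the point is simply that a component-wise average produces a colour that appears on no actual flag.

```python
# A minimal sketch of the fallacy of averages.
# The RGB triples below are rough approximations of the French
# flag's colours (assumed values, for illustration only).
colours = {
    "blue":  (0, 85, 164),
    "white": (255, 255, 255),
    "red":   (239, 65, 53),
}

# Component-wise mean across the three colours.
average = tuple(
    round(sum(channel) / len(colours))
    for channel in zip(*colours.values())
)

print(average)  # (165, 135, 157): a greyish mauve found on no flag
```

Each channel’s mean is a perfectly sound statistic on its own, yet the combined result represents none of the inputs. The same gap opens up when generative AI “averages” over human writing, and it is exactly the gap critical thinking has to close.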

Take a look at the Terrible Map below depicting the average flag colour of European countries. This is a wonderful visual demonstration of how averages hide inherent variation. This map can even be used as an illustration for explaining this problem to students. Generative AI, by trying to produce an average or ‘typical’ human response rather than an individualised response, can fail to deliver a useful or meaningful answer. This doesn’t mean averages don’t have their place; rather, it shows how a compiling, synthetic approach (such as the methodology AI uses) must be used in concert with critical thinking. Otherwise, artificial intelligence can lead us down paths that feel, well, artificial.

Average Flag Colours of European Countries

If institutes are to allow, or even promote, the use of AI, we must provide students with guidelines and lessons on how to use these tools appropriately and intelligently. When can AI be used? When should it be avoided? What are its limitations?

In a world that increasingly values individual difference and the strength of diversity, how might AI be employed productively and strategically? How can AI work alongside the power of the individual instead of trying to replace the power of the individual?

In the coming weeks, I will be concluding this AI series with two columns dedicated specifically to how teachers and then students can navigate the challenges of AI and emerge victorious. We’ll look at specific examples from AIHM as well as what other global scholars have to say about AI’s pitfalls, opportunities and horizons.

In the meantime, I invite fellow educators as well as those out in the trenches of the professional world to reach out to me on LinkedIn with your thoughts and experiences about the emerging use of generative AI. I’m always happy to learn what others have to say and any insights they’ve discovered in their own personal and academic journeys. AI is changing quickly, and educational institutions such as AIHM can play an important role in leading the change and defining the future of its use.

Already in the Workforce? Get Up to Speed on AI.

In addition to our business degree programme and management certificates, AIHM offers a number of upskilling opportunities for working professionals through our Executive Education portfolio.

Take advantage of our weekend short courses on AI for Hotel Marketing as well as our course on Adopting Generative AI, taught in association with Deloitte.