Although AI has existed for decades, it only rose to mainstream popularity with the arrival of generative AI software like ChatGPT. Since then, it has been widely acclaimed as one of the most exciting new technologies, promising easier access to knowledge and more efficiency.
However, as people began to grasp just how powerful AI is, alarm bells rang. It quickly became apparent that this type of technology needs strict guidelines for its use, especially in higher education. Governed well, AI can be a net positive for everyone.
Ideas for a framework around AI were recently discussed during an Explorance webinar featuring experts from the University of Texas at Austin, Harvard Medical School, and the University of Pennsylvania.
This blog expands on their thoughts by outlining key steps, from addressing ethical concerns and faculty skepticism to exploring innovative AI applications in higher education. Read on to follow their approach to AI literacy and implementation.
AI literacy refers to a critical understanding of AI tools and the ability to apply them to learning and decision-making. While teachers and students know that AI shouldn’t be used to complete coursework outright, the technology can still deliver massive benefits.
Yet, most higher education institutions have banned generative AI, and many teachers and students have been uneasy about using it. Only 18% of faculty understand AI teaching applications, and only 16% of students feel confident using AI in coursework.
Improving AI literacy on campus is a goal only higher education administrators can set. Guidelines must be clearly communicated, including a framework for using AI, and paired with workshops and training that integrate AI into the curriculum.
AI will inevitably become an essential tool for higher education. While it may be too early to determine its definitive role, the experts on the webinar panel each proposed appropriate uses for it.
Julie Schell from the University of Texas at Austin described her “AI Forward and AI Responsible” framework, which rests on a dual recognition: AI can teach students, and it can also supply incorrect information.
One exercise built on this framework asks students to draft teaching philosophy statements using AI and then evaluate the output. The process develops critical thinking while exposing students firsthand to AI’s shortcomings.
At Harvard Medical School, Dan Liddick explained how he uses AI to track student progress and pinpoint those who are struggling. This kind of data analysis is something the technology is particularly well suited to, performing it at a speed and scale humans simply can’t replicate.
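To make that concrete, here is a minimal sketch in Python of the kind of analysis involved. It is not Harvard’s actual pipeline: the student names, scores, and one-standard-deviation cutoff are all invented for illustration.

```python
import pandas as pd

# Hypothetical assessment records; a real system would pull these from an LMS.
scores = pd.DataFrame({
    "student": ["ana", "ana", "ben", "ben", "cara", "cara"],
    "week":    [1, 2, 1, 2, 1, 2],
    "score":   [88, 85, 72, 58, 91, 64],
})

# Average each student's two most recent weekly scores.
recent = (
    scores.sort_values("week")
          .groupby("student")["score"]
          .apply(lambda s: s.tail(2).mean())
)

# Flag anyone more than one standard deviation below the cohort average.
cutoff = recent.mean() - recent.std()
struggling = recent[recent < cutoff]
print(struggling)  # candidates for early outreach
```

The value of automating this is less the arithmetic than the coverage: a script can re-run the check across every course, every week, which no advisor could do by hand.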
Following these experiments, Harvard implemented an AI sandbox so its staff could explore further. Data security is one of the main concerns with tools like ChatGPT, and any organization considering them should start by testing in a safe, isolated environment.
Rob Nelson of the University of Pennsylvania discussed an intriguing integration of large language models (LLMs) into discussion boards. These tools can help students who struggle to articulate their thoughts and can stimulate engagement across the platform.
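As a loose illustration (not the Penn integration itself), here is how a discussion board might call an LLM to help a student polish a rough draft. This sketch assumes the OpenAI Python SDK, an API key configured in the environment, and an invented prompt and model choice.

```python
from openai import OpenAI

client = OpenAI()  # reads the API key from the environment

def suggest_revision(draft_post: str) -> str:
    """Ask an LLM to help a student articulate a rough discussion-board draft."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # hypothetical model choice
        messages=[
            {"role": "system",
             "content": "Help the student clarify their argument without "
                        "changing their position. Return a revised draft "
                        "plus two questions that push their thinking further."},
            {"role": "user", "content": draft_post},
        ],
    )
    return response.choices[0].message.content

print(suggest_revision(
    "i think the reading was wrong about motivation but idk how to say why"
))
```

Note the design choice in the prompt: the model is asked to preserve the student’s own position and add questions, assisting articulation rather than doing the thinking for them.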
AI already demonstrates its potential in academia, from improving instructional design to enhancing administrative efficiency and collaboration. As institutions explore these use cases, a responsible and experimental approach ensures AI is used effectively while addressing its limitations.
Integrating a tool with as many implications as AI is no simple task. While this technology has many fans, it also has its fair share of critics. Some of its flaws can adversely affect the student experience, emphasizing the need for structure around AI even more.
Students and teachers are often unsure which uses of AI are acceptable and which are not. That determination must come from the administration: put clear guidelines in place that account for as many scenarios as possible.
Generative AI poses several risks, but the primary ones are plagiarism and so-called hallucinations. These tools sometimes return peer-reviewed material without citing their sources, and they have even been found to invent statistics when pressed for figures on a specific subject.
It’s safe to say that most teachers view AI as yet another shortcut for overwhelmed students. Faculty and staff must be trained not only to recognize AI-generated work but also to integrate AI into their classrooms in positive ways.
AI’s power is undeniable, and students must learn to use it ethically. At the University of Texas at Austin, Schell’s updated honor code promotes proactive learning, emphasizing that “I will honor the learning process, acknowledging that it involves mistakes.”
Administration and faculty must work together to state their AI expectations clearly. Once that is done, students can discuss their use of AI tools comfortably, without fear of judgment or repercussions, because appropriate boundaries will be in place.
AI is evolving at breakneck speed, and it’s important to think big when evaluating this technology. From helping students learn to facilitating services and making administration more manageable, AI’s potential has no limits.
The University of Texas at Austin is developing an AI tutor to promote self-regulated learning; such a tool can give a helpful jumpstart to students who struggle to get into the right mindset. However, Julie Schell warned that passively managing a tool like this risks perpetuating traditional teaching models and their shortcomings.
Dan Liddick of Harvard Medical School explained how they rely on Oracle Cloud’s machine-learning models to automate specific tedious tasks. This promising solution will free up administrators to focus on growth and engagement.
For Rob Nelson of the University of Pennsylvania, AI’s main future application will be simplifying higher education’s most complex institutional processes. These tools will make campuses more accessible and student-friendly, improving the overall experience.
As with any new technology, AI must be integrated carefully, in measured steps, and with clear guidelines. Set up a safe, dedicated environment where staff and students can experiment with AI, improving their experience and promoting new learning styles.
Make training a priority, and explain the various scenarios in which AI can benefit higher education. Most fears about this technology stem from a lack of understanding of the good AI can do.
Responsible integration of AI in higher education offers an opportunity to enhance learning, streamline administration, and foster collaboration. Nevertheless, adopting AI must be approached cautiously, emphasizing ethical practices and robust training for faculty and students.
Institutions should focus on transparency, controlled experimentation, and open discussions to harness these benefits. With a balanced approach, AI can become a key part of a fair and engaging learning experience.