Panel addresses ChatGPT and AI in higher education
The future might not be as bleak as recent headlines suggest, UND faculty and staff panel says
“ChatGPT is a plague upon education,” reads one headline at Inside Higher Ed. “ChatGPT: Threat or menace?” reads another, clearly not offering much of a choice.
But here’s a third headline, this one from the British magazine Times Higher Education, that better captures the spirit of a recent UND panel on the subject:
“Don’t fear ChatGPT: education always trumps tech,” the headline read.
As Anna Kinney, the moderator for Thursday’s panel and the coordinator of the writing program for UND’s Teaching Transformation and Development Academy, put it, “I’ve heard, in a lot of conversations, that this is like when the calculator was introduced, and it threw the math world for a spin.”
But math and math instruction survived and even thrived, Kinney noted. “And the reality is, we still show our work.”
Panelists talked about these topics during “Implications of AI for Higher Ed – What You Should Know about AI and Student Work,” presented by TTaDA on Thursday, Feb. 23.
The discussion centered on two themes: First, the fact that this is far from the first time educators have had to grapple with and adapt to new technologies in the classroom.
And second, this isn’t even the first time that AI has shown up in classrooms and prompted faculty and students to adjust. Language instructors, for example, have been dealing with AI-enhanced translation services for years.
Some background: ChatGPT – freely accessible to anyone with a web browser – can write computer code, analyze Shakespeare, prepare legal documents, you name it.
This capability isn’t lost on North Dakota’s State Board of Higher Education, which has tasked President Andy Armacost with bringing NDUS leaders together to discuss AI’s impacts on the state’s collegiate classrooms.
But on Thursday’s panel, faculty from a variety of disciplines kept the focus on AI as a tool that can advance educational outcomes, rather than as a disruptor that will cause crises of integrity.
“Writing is both process and product,” Kinney noted. As with the advent of word processing, another revolutionary technology, “this development in AI pushes us back toward prioritizing the process, as well as considering writing as thinking and demonstrating our thought processes.”
Avenues to higher learning
Panelists included Heather Wages, director of policy and administration in the Office of the Provost; Jennifer Cook, assistant professor of law; Jonathan Green, instructor of German; and Tom Stokke, assistant professor and undergraduate director for the School of Electrical Engineering & Computer Science.
Representing a broad set of disciplines, the panelists addressed questions of benefits, challenges and limitations of AI, as well as potential concerns about inherent bias, privacy, ethics and student assessment as new technology is introduced to higher education.
They also took time to answer some questions submitted by attendees.
“Everything is exciting about artificial intelligence and its use worldwide,” said Cook, in the first comment made during the hourlong session.
Speaking from her experience practicing law and helping clients build and win cases in court, Cook said that AI tools have been in use for the past decade, powering the country’s biggest legal research platforms.
With AI’s ability to understand natural language research questions, lawyers have more time to spend on higher-level concepts, such as litigation strategies, rather than racking up exorbitant fees while poring over case law or digging for citations.
“When we are able to do our job faster and more effectively for clients, that reduces their costs,” Cook said. “Creating access to justice is really important when it comes to utilizing AI and being more effective with our time.”
And on the teaching side, legal writing can be a difficult subject for first-year law students, who arrive with varying skill sets and command of the language. Tools such as ChatGPT can help students get up to speed on the basics more quickly, which opens avenues to higher learning, Cook suggested.
“If we can take some of the tasks they have to do as part of legal practice and make it easier or more efficient for them, they can spend more time with highly cognitive tasks, like learning and understanding the law, thinking about what the law should be,” she said.
A tool with limitations
Green, who works in languages and does translation work when not teaching at UND, agreed with Cook that the past five or so years have taken AI from being a “toy” to being a necessary piece of the translation toolchain.
“It’s interesting that nearly all of the things you mentioned are things that I see, coming at it from a totally different angle,” Green remarked to Cook. He concurred that when basic processes can be assisted by AI, students benefit by engaging in more advanced concepts that require the “human touch.”
Stokke, who spoke on behalf of computer scientists, likened ChatGPT’s capabilities to a starting point when learning how to code and think through computer science problems.
Where the technology is limited, in the case of ChatGPT and large language models like it, is that it is based on pattern recognition. The model predicts likely responses to people’s prompts from patterns in its training data.
ChatGPT’s model is 800 gigabytes in total (an amount of data that can fit on a standard computer’s storage drive), and the model’s “training” was completed in 2021 – it is not a live, continuously learning system.
Stokke and others on the panel emphasized that, useful as it is, the current generation of AI can’t become a shortcut for students who might otherwise skip learning the fundamental skills of their fields.
“We can give the AI some basic requirements and let it generate the source code for the intended program, then we get to talk about it,” Stokke said. “What did it miss? What did it do? It goes further than how we typically provide sample code, but it isn’t complete.
“So, again, it takes some of the cognitive load off – that low-hanging fruit, if you will – and lets us focus on the stuff that requires an intelligent person to be involved in the process,” he said.
Unanswered questions for higher ed
Among the more pertinent questions posed by Kinney was how UND faculty can ensure AI is used responsibly and ethically.
Stokke responded first, saying that with any tool, the implications of using it need to be well understood by students and teachers alike. The imperative for information and media literacy came up repeatedly over the hour. Students need to learn which tools use AI, what their capabilities are and how to critically evaluate the information those tools produce, Cook said.
From the perspective of University policy, Wages said there are certainly challenges and unanswered questions where AI collides with academic integrity: what counts as cheating, and how cheating is even defined.
“When we try to fit the improper use of AI technology into our existing definition, what ‘improper use’ means is going to vary depending on the instructor and discipline,” Wages said. “We need to figure out how to build flexibility and acknowledgment into a policy while giving instructors the ability to use it in a way that’s positive for them and their discipline.”
Later in the conversation, Wages said the faculty handbook and other UND policies provide ethical guidance when it comes to new tools and tech in the classroom, including “keeping abreast of current developments in their disciplines and trends in the classroom, as well as regularly evaluating teaching effectiveness.”
Though much of the future has yet to be written on the way higher education will embrace, monitor or limit AI, Green made clear that some things are still best done the old-fashioned way: students at desks, pens on paper.
In his field of languages, the confluence of online courses and translation software is nothing new. It’s difficult to trust anything that happens on a computer and leaves the instructor’s sight, he said.
“Those old-fashioned solutions do have a place,” Green remarked. “I don’t think that AI-powered software is necessarily bad, but questions about its benefits to students remain completely open.
“At other times, you do need to find out what the students know without any help whatsoever. In my experience, there are plenty of situations where you can’t take time out of a conversation to look up a word on a phone or use technology to help when you’re stuck.”
At the same time, faculty have to acknowledge that they’re in the midst of a paradigm shift when it comes to student assessment, Stokke said. Such shifts are a perennial phenomenon, in the digital age and before, as technologies such as calculators and word processors were widely adopted.
“I remember one of my professors in my education courses talking about the introduction of word processors, and how the graduate school had a fit over it, because it was changing how graduate students could turn in their theses and dissertations,” Stokke recalled.
“This is what’s next, and it’s going to require ongoing conversations to figure out how to navigate the changes.”
TTaDA has more panels planned on this topic, and the Academy intends to address unanswered audience questions in future sessions. The next session is scheduled for March 6, titled “Embracing the Future – Student Learning and AI.” More information about TTaDA’s lineup of training and panel sessions can be found by visiting its website.