Faculty offered guidance for teaching in the age of ChatGPT
By Susan Kelley, Cornell Chronicle
Faculty are encouraged to be wary of the pitfalls of generative AI but also to embrace the learning opportunities it offers, teaching students to approach tools such as ChatGPT with a critical eye, according to a report from a university committee.
“Generative AI has the opportunity for transformative impact, and embracing that is important,” said Kavita Bala, dean of the Cornell Ann S. Bowers College of Computing and Information Science, who co-chaired the committee. “With this new technology, faculty need to think carefully about the learning outcomes in their courses and look at a broader vision of how generative AI can be used to actually improve educational outcomes and pedagogy.”
The committee’s report, “Generative Artificial Intelligence for Education and Pedagogy,” focuses on AI that generates text and images that can convincingly seem like they were produced by an intelligent human. The report addresses the ways in which these tools can customize learning for individual students – but may also circumvent learning, conceal biases and produce inaccurate information.
“There has been a lot of uncertainty among faculty and students about when you could use this technology, how you could use it and its impact. That was a big motivation to try to get ahead of this as much as possible,” said committee co-chair Alex Colvin, Ph.D. ’99, the ILR School’s Kenneth F. Kahn ’69 Dean and the Martin F. Scheinman ’76, M.S. ’76, Professor of Conflict Resolution.
The report offers examples of how the tools could be relevant to different learning domains. For example, instructors can use GAI to develop lecture outlines and materials, generate multiple versions of assignments or create practice problems. Students can use GAI to research topics, iterate on drafts to improve their written work, and design and create code, art and music. However, GAI could end up “doing the learning” that an assignment is designed to elicit, or distort the value of assessments, according to the report.
The committee gave faculty three recommendations:
- rethink learning outcomes by integrating GAI into their goals for students;
- address safety and ethics by educating students about the pitfalls of GAI; and
- explicitly state to students their policies on the use of GAI in their classes.
The committee also recommended instructors consider three kinds of policies:
- prohibit the use of GAI where it interferes with students developing foundational understanding, skills or knowledge;
- allow with attribution where GAI could be a useful resource, and require students to take responsibility for accuracy and attribution of GAI content; and
- encourage and actively integrate GAI into the learning process.
Bala compared the use of GAI to the introduction of calculators into mathematics education. Children first learn to add, multiply and divide by hand. But once they have mastered those skills and the concepts behind them, they use calculators to solve more complex problems.
“We not only adapted to calculators, but we also fundamentally changed the curriculum of elementary school to focus on mathematical reasoning,” Bala said. “We know that our students are going to work and live in an AI-enabled world. If we do not integrate generative AI into our teaching, and continue with business as usual, we will not be setting up our students for success in the future. Like calculators, generative AI requires major curricular change.”
GAI applied to academic disciplines
On Jan. 23, Provost Michael I. Kotlikoff, along with Lisa Nishii, vice provost for undergraduate education, announced the formation of the committee to develop guidelines for the use of AI tools and related technologies in educational programs.
The committee was composed of faculty members and expert staff from a wide range of disciplines, from writing – a particular area of concern because of the potential for academic integrity violations – to computer science, law, the life sciences, engineering, the social sciences and more.
The disciplinary breadth of the committee is reflected in the report’s appendices, which focus on how these tools could be used in specific academic areas: writing; music, literature and art; the social sciences; mathematics, physical sciences and engineering; programming; and law. Each appendix includes a synthesis of the opportunities, concerns and recommendations for use of the tools in the academic area, as well as sample assignments.
For example, in a class that involves writing, an instructor could ask students to use GAI to generate a reverse outline of their own written drafts, identifying the main focus of each paragraph and the supporting evidence. Students would then consider whether the main focus was the one they intended. And they could assess whether the process helped them pinpoint structural issues in their writing and where the GAI’s analysis was off the mark.
For mathematics and other courses that involve proofs, instructors could ask students to generate a plausible proof with GAI and then find the mistakes or gaps in it. In an engineering class, an instructor could ask students to prompt GAI with a question, such as “What are the underlying assumptions in beam theory?” Students would assess the accuracy of the GAI response.
“Easy student access to AI tools will impact all of Cornell’s educational programs, and we thank the committee members for their thoughtful guidance,” Kotlikoff said. “These technologies have important implications for our pedagogy and, if used appropriately, can be valuable tools for enhanced learning.”
The committee members are well aware their report is not the last word on the use of GAI, Colvin said. “We’re very conscious of the fact that the technology is evolving, and we’re going to have to keep evolving as well.”