The Boston University AI and Education Initiative brings together experts in artificial intelligence and education to discuss and research the impact of generative AI on education.
The initiative, launched by the Rafik B. Hariri Institute for Computing and Computational Science & Engineering and the Wheelock College of Education and Human Development, aims to cultivate research at the intersection of AI and education and to build “guardrails” around AI that maximize its benefits and minimize its potential harms, said Naomi Caselli, director of the initiative and an assistant professor of Deaf Studies at Wheelock.
“The goal of the initiative is to … take a more in-depth perspective that isn’t just blind optimism or pessimism,” Caselli said. “What guardrails are needed? What kinds of innovations could we inspire that could really transform education for the better?”
Caselli voiced concerns about children interacting with generative AI, such as using tools like ChatGPT in ways that could harm other children.
Large language models have been in development for years, but the emergence of ChatGPT “captured the public imagination,” Caselli said.
“I take a middle-of-the-road approach,” Caselli said. “It’s been really polarizing that people either come out as super enthusiasts or super terrified, and I think the middle ground is probably true.”
Teachers can use ChatGPT as a time saver to assist with lesson planning and tracking student progress, Caselli said.
“Not that it’s necessarily going to be the final product, but it might give you a nice template where you can make tweaks as you think are needed,” Caselli said.
More creative and engaging learning objectives may need to replace assignments such as literature reviews, which can be completed with the click of a button in ChatGPT, said Ashley Moore, an assistant professor at Wheelock and an affiliated faculty member of the initiative.
“One of the strategies that we can do to make certain assignments more AI-proof is to personalize it, to really make the focus of the assignment students’ personal experiences,” Moore said.
Students can also use ChatGPT to summarize lecture notes or refine their own writing, Caselli said.
“Not that ChatGPT is doing the writing for them, but it becomes kind of a partner, as a sort of co-creator to say, ‘Oh, this was my takeaway,’” Caselli said. “Then the student can say, ‘Oh, but that wasn’t really what I had in mind. Maybe I need to clarify the writing a little bit.’”
Christopher McVey, a senior lecturer in the writing program, said that in one of the first-year writing courses emphasizing research, he allows AI-generated text to make up as much as 50% of submitted assignments. Students are asked to indicate AI-generated portions in blue font.
“I’ve never been concerned thus far in the semester that students have been using AI to do more than what they’ve admitted to doing or agreed to do,” McVey said. “I’ve actually seen the use of generative AI … go down between the first submitted draft and then the final version.”
Albert Dalia, a writing program lecturer teaching a first-year writing course about anime films by Hayao Miyazaki, asks students to critique movie reviews written entirely by ChatGPT.
“One of the things that my students learned … was that you can’t trust it, and you don’t want it to write your papers because it does a terrible job doing sources,” Dalia said. “The conclusion I think all the students really drew was it’s a great tool to assist a scholar in doing research, not for sources, but for ideas.”
Experts in the fields of AI and education will speak at the initiative’s AI and Education Symposium at the end of November. Potential topics include what children should and should not be taught in the context of AI, and how to teach children to use AI without being harmed.
“[AI] is a really powerful tool, and like all really powerful tools, there’s a lot of good that we can do with them,” Caselli said. “I don’t think we’ve even begun to scratch the surface.”