At Boston University, a new force is taking shape: the AI Task Force, charged with guiding how the University brings artificial intelligence into education and research.
The BU AI Task Force, established in September 2023, released its comprehensive final report on generative AI in education and research on April 5. Over the past academic year, the task force, a multidisciplinary group of faculty, examined existing AI practices and proposed strategies to enhance BU’s use of AI while mitigating potential negative effects, according to the Office of the Provost website.
The task force’s final report assesses the outlook for generative AI and its potential in research and education. It offers guidelines for students and faculty, recommends AI policies and identifies the technology’s limitations and potential flaws.
BU has adopted a “critical embrace policy” toward generative AI, said Wesley Wildman, a co-chair of the task force. He said this view acknowledges AI’s influence on the future and seeks to equip students with the skills necessary for success.
Wildman, a professor in the School of Theology and the Faculty of Computing and Data Sciences, said that BU administrators want students to know how to use AI, but they “want to remain critical … aware of its limitations and aware of the ethical problems associated with using it.”
Wildman said the university has asked each department at BU to develop a policy that is specific to the department’s interests and consistent with the critical embrace policy.
“This is the part of the expectation that the university is going to work hard to make sure everyone, including professors, are literate about generative AI,” Wildman said.
Yannis Paschalidis, co-chair of the task force, said the report recommends that teachers be free to choose how, or whether, to use AI in their classrooms.
“In terms of protecting academic freedom, individual faculty should be left to decide the way that they would like to adopt [AI] or even not adopt it in terms of their own courses that they teach,” Paschalidis said.
Paschalidis, a professor in CDS and in the College of Engineering, recommended adopting more AI-friendly teaching styles and developing new courses. Doing so can help guide students through their coursework in a landscape where generative AI is increasingly prevalent, with the technology serving as an aid, he said.
He added that some BU courses have already incorporated AI policies and provided students with guidelines for using the technology.
Jake Coyman, a freshman in the College of General Studies, said the only time he has used AI in class has been to disprove its legitimacy.
“A lot of the time, it lacks the complexity and understanding of the question that is being asked and therefore offers a very surface level explanation, which doesn’t run parallel with what those questions are asking,” Coyman said.
Amy Muliadi, a sophomore in Sargent College of Health and Rehabilitation Sciences, said she has not used AI in class at all, for any purpose.
“A lot of teachers just make, like, one comment telling us not to use ChatGPT,” she said.
Wildman cited potential scenarios in which AI detectors might be unreliable and cause more harm than good.
“Imagine a student who did nothing wrong being accused of academic misconduct based on a generative AI detector that gave him a probability reading that was inaccurate,” Wildman said.
Wildman said two major conclusions emerged from the task force’s student interviews: students are reluctant to rely heavily on AI, and they worry about competing for jobs against people who use AI when they themselves do not.
Alexa Thomas, a freshman in Sargent, sees uses for AI in the classroom, like strengthening sentence structure or editing grammar.
“It’s always kind of seen as a bad kind of cheating tool,” Thomas said. “But I think it could have some really helpful, useful ways in the classroom.”
Coyman said that AI’s usefulness in the classroom depends on the situation. There’s a difference between using AI to help compile sources of information and using it to write a research paper, he said.
“It’ll end up basically making us lose all the skills we’ve developed so far,” he said.
Like Coyman, Muliadi said she feels AI “takes away the point of education.”
“I think AI can have its place in robotics, and maybe in surgery and surgical tools,” Muliadi said. “But I think the brain power has to come from human minds.”
The report ultimately recommends that the university adopt AI policies, but with caution. Likewise, Paschalidis encourages using AI effectively instead of working against it.
“It’s better to find more creative ways of incorporating generative AI rather than relying on limiting or even prohibiting the use of the tool,” Paschalidis said.