
How do you discuss the ethics of AI in your class?

One week we had a class debate focusing on ethics and responsible use of AI. The topic was, “Should students use generative AI for academic writing?” I assigned sides, and it was interesting to see the different opinions among the students. For example, if I say, “A student plugged a prompt into AI, copied the response, and submitted it to a teacher,” most students say, “This is cheating.” But something like, “A student created multiple AI responses based on the student’s detailed prompt, used the best parts, edited, and submitted” — that’s a gray area.

Should there be a universitywide policy about AI?

My understanding is that a one-size-fits-all policy doesn’t work because there are so many use cases that cannot be captured in a single policy document. Also, a recent survey of Virginia Tech students showed significant differences between STEM and non-STEM majors in how they perceive AI, suggesting that more nuanced policy approaches are needed. Of course, policies should be grounded in principles like transparency and confidentiality, but at the same time I think each class should have specific guidelines about use cases for generative AI. In an introductory computer science class, it may not be good for students to use AI to write code, because if they do, they don’t actually develop their computational thinking. But for an advanced class that’s more project-based, it’s probably OK.
