Students and faculty in Marquette University’s College of Arts & Sciences have been reckoning with whether artificial intelligence belongs in academic work since the technology’s popularization. Some embrace it, while others are pessimistic about the direction of its development.
Jacob Nadess, a junior in the College of Arts & Sciences, said some professors encourage using AI as a tool, while others have reverted to pen-and-paper exams to limit its use.
Artificial intelligence has been implemented in classrooms across campus, with professors trying to teach ethical and efficient uses, he said. But generated answers often require further analysis and fact-checking, which can make the process longer rather than shorter, which is why Nadess personally chooses not to use it.
“It’s a great starting point,” he said. “But it’s awful, at least in its current state.”
Abram Capone, coordinator of the university’s cognitive sciences program, equates the rise of artificial intelligence with a brick on a car’s gas pedal, constantly accelerating forward.
“Artificial intelligence is not different in kind from [what] we had before with computers, it’s just different in scale,” he said.
The capacity of artificial intelligence is constantly expanding, Capone said, affecting how people view technology and the world.
Several of Marquette’s core computer science classes have artificial intelligence as a component, including the Introduction to Computer Science and Professional Ethics in Computer & Data Science courses. Fundamentals of Artificial Intelligence is a specialized course introduced in 2020 and offered to undergraduates that explores the workings of artificial intelligence more deeply.
“At Marquette, computer science students are learning both about how generative AI systems are built, and also how to use them responsibly in their work,” Dennis Brylow, chair of the computer science department, said.
Despite the issues people may have with artificial intelligence, it's a valuable tool for quick coding and processing, Nadess said. So much so that he expects future research grants across the industry to favor artificial intelligence development over computer science itself.
“That’s where the money’s going to be,” he said.
Capone is optimistic that this technology will help speed up the development of the computer science field, but the work itself will still be human. Unlike artificial intelligence, he said, human beings can be dissatisfied.
This is where artificial intelligence falls short, he said: original creativity is something it isn't capable of. Yet.
“Generative AI tools haven’t been taught any [problem-solving] skills, and they can’t really produce anything new that isn’t fundamentally a remix of stuff they’ve already seen,” Brylow said.
ChatGPT was released in 2022 and has since gone through four major generation updates. Large language models like ChatGPT started as natural language processors, which are defined by their ability to generate and understand text, according to a study presented by the Special Interest Group on Computer Science Education.
More recently, these models have become capable of writing in programming languages, giving them the "ability to address complex problems with human-like expertise," the study said. Artificial intelligence is constantly expanding and evolving, with new models being introduced multiple times a month.
“I think the next big leap is going to be artificial intelligence that understands what it’s doing,” Capone said.
However, Brylow said there are ethical concerns that arise with these developments, specifically the execution of tasks with an algorithm that still does not understand what it’s being asked to produce.
“Humans in computer science are still our best hope for new and careful design of complex software systems,” Brylow said. “They are the only choice for evaluating the ethical and moral dimensions of computer systems.”
Use of artificial intelligence also harms students’ learning processes, he said. Students who depend on artificial intelligence in their early phase of computer science learning don’t learn the same foundational skills as those who came before them.
As artificial intelligence continues to develop, both Capone and Nadess feel that regulation will be the hardest challenge. Because artificial intelligence is still growing and changing, its ethical questions may be difficult to tackle, but confronting them is inevitable.
Currently, the university’s computer science program overview states, “at Marquette, computer science is infused with ethical considerations, and a focus on how to best use technology for the greater good.”
However, the university does not have a universal policy on artificial intelligence use.
“We are fast approaching a moment where we are going to need to reckon with how artificial intelligence is being used,” Capone said.
This story was written by Lilly Peacock. She can be reached at [email protected].