BETHLEHEM, Pa. – During a human resources committee meeting Tuesday night, members of the Bethlehem Area Board of School Directors heard the first reading of a policy that addresses the use of generative artificial intelligence for educational purposes.
Generative AI is “an advanced subset of AI that is capable of generating new content from learned data and pattern recognition across various mediums such as text, code, images, audio and video data,” the policy states. ChatGPT is an example of a popular generative AI tool.
Chief Human Resources Officer John Burrus said the draft presented Tuesday is a base policy from the Pennsylvania School Boards Association, but district administrators put the “BASD twist on it,” adding specifics that are particular to the district. Burrus said the draft policy will also undergo legal review.
The policy states that the district recognizes the potential that generative AI offers in “enhancing educational opportunities, streamlining operations, and preparing students for a future that demands adaptability, critical thinking, and digital literacy.” It also says that when generative AI is “used in a responsible and ethical manner,” it can “support a dynamic working and learning experience.”
The policy outlines several overarching guidelines for staff and students in their use of generative AI. It also broadly addresses ethical considerations and academic integrity, including how much AI-generated work may be used in assignments and how that work should be cited.
Board member Karen Beck-Pooley, who is a university professor, expressed concerns about the use of AI tools.
“I don’t use them at all because the slippery slope is real,” she said.
She emphasized the need to clarify how AI tools should be properly utilized and to encourage students to use discretion in treating them as sources in their work.
“I would just suggest we tread very carefully because I don’t know the degree to which we can trust them as a source,” Beck-Pooley said.
“To suggest that it’s a quotable source for student work makes me nervous,” she said. “At the same time, to not convey that this is not student work and to pass it off as your own is a serious, serious infraction.”
Other board members shared Beck-Pooley’s concerns.
“This may come from the larger issue that students don’t always understand what constitutes, sort of, appropriate use of sources,” said board member Kim Shively.
Board member Silagh White suggested that the district investigate analysis tools that can help discern reputable sources.
“My bigger concerns about generative AI and their tools — beyond the literacy and the resource documentation — is being media literate and knowing that something was generated using AI,” she said.
Burrus said policies such as the draft presented Tuesday are meant to be general to a certain extent, while administrative regulations can get more specific.
“I do think it’s important that we teach students how to use AI because they’re going to use it, regardless,” said board member M. Rayah Levy.
She said it is critical, especially at the high school level, to prepare students for college by teaching them how to discern whether content was generated by an AI tool or written by a reputable scholar.
Levy said it is also important to educate faculty and parents about AI because it’s “extremely complicated” for those who are not knowledgeable about the tools.
“I’m happy that there are certain things here,” Levy said of the draft policy, “but I think we have to look more deeply at analyzing and how we’re actually going to go through the process of educating our kids in the use of AI.”
Board Vice President Shannon Patrick reminded the committee that Tuesday night’s reading was the first for the policy and meant to provide a broad overview. She said the administration will put together a scope and sequence at a later time.