By Francisco Omar Fernandez Rodriguez
Arts & Features Editor

Instructional Technologist Kevin Kennedy hosted a workshop on using generative AI to assess course documents, such as the course syllabus.

Kennedy said he used Copilot and ChatGPT to assess a syllabus and an assignment from two very different classes, a first-year writing course and a technical writing course, because they have different audiences. He asked both AIs the same questions, he said. Some of the questions were about whether the text would be confusing depending on the student's situation: whether the student was a freshman, an upperclassman, a first-generation student, or a student whose primary language isn't English. He also used the AIs to check whether the document was inclusive, asking, "How can I make this more culturally or politically inclusive?"

When he first tried this three months earlier, the two AIs gave very different results, but he is now getting practically identical feedback, he said.

While the AIs weren't great at identifying biased language, they did highlight possible assumptions he could be making about his students, he said. These assumptions centered on students' understanding of technology and academic language, and on their access to technology, he added.

Kennedy said there is a focus on being as clear as possible about academic terminology and practices. "Terms like annotated bibliography to peer review and self reflection might be unfamiliar to students who haven't been exposed to academic jargon before," he added.

The AIs' responses were occasionally contradictory: sometimes they said there was too much description, and other times they said there wasn't enough, he explained. Kennedy said this is likely because AIs draw on information from the internet, where there are very different ideas about what a good syllabus and assignment look like. There are also different views on what students might need, and students in real life do have different needs.
Because of this, the contradictions aren't necessarily a bad thing.

The AIs were good at identifying possible anxiety triggers, but they didn't always provide good solutions, he said. When a course document said a class required one-on-one meetings with the professor, the AIs said that might be intimidating for students but didn't offer any suggestions or alternatives, he added.

The answers for what might confuse freshmen versus upperclassmen were different, he said. For freshmen, the AIs focused on what might be unfamiliar; for upperclassmen, they focused on what might differ from other course documents they've seen, he added. For first-generation students, the AIs gave answers similar to those for freshmen but homed in even more on unfamiliar academic terms and practices, such as office hours and knowing how to ask for help, he said.

Next, he gave a demo on using Copilot to assess a course document. He said Copilot is his first recommendation for these types of documents because of how it protects privacy. The system itself works similarly to ChatGPT and has recently been streamlined and simplified, Kennedy said.

He showed how to attach a file and typed one of the questions into the chat box. He said it helps to give the AI a role, such as a college freshman or an English major; otherwise it might give a generic response that has almost nothing to do with college.

He asked Copilot whether his syllabus had any language that might evoke anxiety, and it pointed out the heavy workload and strict deadlines. It also flagged the section on academic honesty, which is clearly strict and might cause anxiety. Kennedy said this might mean he should discuss it more. The AI also brought up collaborative work as a cause of anxiety. While he definitely doesn't plan on getting rid of group work, he might look into making the language as inclusive and clear as possible, he said.
Then Kennedy encouraged the audience to practice using an AI to assess their own course documents. Registrants had been strongly encouraged to have a course document and an AI ready to practice with. The participants generally agreed that using generative AI to assess their syllabi could help identify anxiety triggers and insensitive language.