
Digital Humanities director leads workshop for AI prompt engineering

By Raena Hunter Doty
Arts & Features Editor

Bartholomew Brinkman, director of the Center for Digital Humanities, led a faculty workshop on engineering effective artificial intelligence (AI) prompts Oct. 31.

Prompt engineering refers to crafting effective input text for generative AI software such as ChatGPT, Gemini, and Microsoft Copilot.

Brinkman said he wanted to keep the workshop fairly simple because the event was open to people who may have no knowledge of prompt engineering at all. He added he wanted to establish some basic facts to keep in mind while engineering AI prompts.

First, the quality of the output depends on which specific AI model is in use - different models will produce different answers, he said.

Second, he said, “Prompting is often iterative,” meaning the first output may not be the best - or even good - but refining a prompt over several iterations can lead the model to generate better responses.

Third, “One analogy I heard recently I think is useful for a lot of us to consider is that a prompt is less like an online search query and more like a formal email,” he said. This means prompts benefit from added context, high specificity, and a thoughtful tone.

Brinkman said the general principles he tries to keep in mind while writing prompts include giving as much detail as possible, specifying a point of view for the narrator of the text, specifying the format, and breaking a larger project into multiple smaller sections for the AI to generate.

He also recommended attendees working with generative AI analyze their results, though he said that was beyond the scope of the workshop. Analyzing results allows users of AI to find the root of what’s working and what isn’t when engineering prompts.

Brinkman outlined a few “LLM agnostic” principles “that people are starting to gravitate toward.”

“You need to write instructions as you would to an especially literal-minded intern on his first day of working,” Brinkman said.
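The structure Brinkman describes - a point of view, added context, a specific task, and a stated format - can be sketched as a simple template. This is a minimal illustration in Python; the function and field names are my own, not from the workshop:

```python
def build_prompt(role, context, task, output_format):
    """Assemble a 'formal email'-style prompt from four labeled parts."""
    return "\n".join([
        f"You are {role}.",
        f"Context: {context}",
        f"Task: {task}",
        f"Format: {output_format}",
    ])

# Roughly mirrors the one-act-play example described in the workshop.
prompt = build_prompt(
    role="a third-grade teacher",
    context="planning a classroom Halloween activity",
    task="write a one-act play about classic Halloween monsters",
    output_format="a short script with stage directions",
)
```

The point of the template is only that each of the four elements is stated explicitly, rather than left for the model to guess.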
He added using more specialized language can help because “bland or generic verbs often produce bland content.” On top of this, writing in a negative tone may cause the AI to generate a response with a similar tone, and negative prompts - which ask the AI not to do something - can confuse the models, he said.

Brinkman recommended avoiding using two synonyms to describe the same concept when writing prompts, as this may create unnecessary extra variables. He said sometimes including in a prompt a request for the AI model to slow down can yield better results, as can providing - or asking it to generate - an example of the requested outcome beforehand.

Brinkman said there are a few different types of prompts.

First, basic prompts, like “What’s the weather today?” He added for basic questions like these, he urges people to consider whether they would be better suited to a search engine, as AI models use much more power than search engines.

Second, complex prompts, like “Write a summary of the latest research on climate change in the form of a news article.”

Third, role-playing prompts, like “Pretend you’re a 19th-century inventor and explain how a steam engine works.”

Fourth, scenario-based prompts, like “If you were an AI assistant in a medical clinic, how would you handle a patient with flu symptoms?”

After this, he ran through several Halloween-themed examples of the prompts he was describing.

In the first, he prompted ChatGPT to generate a one-act play about classic Halloween monsters written from the perspective of a third-grade teacher creating something for students. He said even though he didn’t refine the prompt at all, the product ChatGPT generated was fairly appropriate on the first try.

Next, he asked it to write a 2,000-word essay about hysteria during the Salem witch trials, citing at least five academic sources.
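One of the tips above - providing an example of the requested outcome beforehand - is commonly called few-shot prompting. A hypothetical helper along those lines (the names here are illustrative, not from the workshop):

```python
def few_shot_prompt(instruction, examples, query):
    """Prepend worked input/output examples so the model can imitate them."""
    lines = [instruction, ""]
    for example_input, example_output in examples:
        lines += [f"Input: {example_input}", f"Output: {example_output}", ""]
    # End with the real query and an open "Output:" for the model to complete.
    lines += [f"Input: {query}", "Output:"]
    return "\n".join(lines)

demo = few_shot_prompt(
    "Rewrite each sentence in a spookier tone.",
    [("The cat sat on the mat.", "The cat crouched on the moonlit mat.")],
    "The dog barked.",
)
```

The trailing “Output:” leaves an obvious slot for the model to fill, which is the whole mechanism: the example shows the shape of the answer you want.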
He said the product, again, was fairly good, though he admitted he hadn’t fact-checked the paper for hallucinations - what happens when an AI model generates content that sounds correct but is not.

Brinkman then showed a graph ChatGPT generated after he prompted it to create a list of 20 Halloween movies and rank them by how appropriate they are for 8-year-old children. He said it once again generated fairly reasonable results, though the apparent hyperlinks in the graph didn’t actually lead anywhere.

Next he showed a network graph of different Halloween monsters he asked it to generate, which, again, he said was fairly reasonable, but when he prompted the AI to edit the draft, it didn’t generate a fixed graph immediately.

Brinkman showed a poem generated by ChatGPT after he prompted it to rewrite Edgar Allan Poe’s “The Raven” for the 21st century. Brinkman said the poem never fully resembled Poe’s original, “and I’m a poem person, so I was really kind of annoyed.”

For his last Halloween-themed prompt, he showed a couple of examples of images for a middle school Halloween bash generated by ChatGPT, which largely adhered to the parameters set. Of the two examples he made - one with a more refined prompt than the other - he said, “Maybe it’s just preference.”

His last example - “the scariest example of all” - asked, “Based on the most recent polling, who is most likely to win the 2024 presidential election?” The prompt continued with more specific parameters for what should be included in the response.

Brinkman said he phrased the prompt specifically so it didn’t include the names of the candidates to show that, even though AI models are often trained on outdated datasets - such as ChatGPT, whose training data only extends to March 2023 - there are still ways for AI answers to use current information.
“If you’re doing this through a web interface, it’s going to link out to websites to pull in content to basically rewrite your query - to add context to your query - and then send it to the LLM,” Brinkman said, though he clarified that is an oversimplification of the process. He added this is important for anyone who wants recent information but doesn’t trust an AI model to generate it.

After that, the workshop broke into groups and attendees experimented with their newfound prompt engineering skills to see what they could produce from an AI model. When the workshop rejoined as a large group, the faculty were prompted to discuss and reflect on their experiences with these new skills.

Brinkman said the workshop only covered “the tip of the iceberg.

“These are often still very flawed in a lot of ways. But they’re here to stay, and I do think that having more considered prompting techniques can help us to get some of those better responses - and also to see some of the cracks in the veneer that we could then push on as needed,” he added.
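The query-rewriting step Brinkman describes - pulling in web content to add context before the query reaches the LLM - can be sketched in miniature. This sketch stubs in the retrieved snippets directly rather than performing any real web search, and all names are illustrative:

```python
def augment_query(query, snippets):
    """Prepend retrieved snippets as context before the query goes to an LLM."""
    context = "\n".join(f"- {snippet}" for snippet in snippets)
    return (
        "Using the following recent web results as context:\n"
        f"{context}\n\n"
        f"Answer this question: {query}"
    )

# Stubbed search results stand in for the retrieval step a web interface performs.
snippets = [
    "Poll A: Candidate X leads by 2 points.",
    "Poll B: Race is within the margin of error.",
]
augmented = augment_query(
    "Based on the most recent polling, who is most likely to win?", snippets
)
```

The model never browses anything here; it simply receives a longer prompt in which the fresh information is already spelled out.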
