SS24: Prompting Large Language Models
Course description
There has been a paradigm shift in natural language processing (NLP) from "pre-train and fine-tune" to "pre-train, prompt and predict". This new approach leverages the generative power of large language models to solve NLP tasks based on task-specific instruction templates, called "prompts". Prompting can be used to perform a variety of tasks, such as text classification and machine translation, under zero-shot or few-shot settings. There is currently a large body of work on prompt engineering and related training and evaluation strategies; these are the target papers of this seminar. We will focus on the more influential and peer-reviewed literature. Depending on the interests of the participants, we can expand our reading list to literature on instruction following in generation, as well as hallucination mitigation.
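The "prompt and predict" idea described above can be sketched as a simple template: a task instruction, optionally followed by labeled demonstrations (the few-shot setting), and finally the query to be answered. The following is a minimal illustration only; the function name, example texts, and labels are hypothetical and not taken from the seminar materials.

```python
# Sketch of a few-shot prompt template for sentiment classification.
# All examples and labels below are invented for illustration.

def build_prompt(task_instruction, examples, query):
    """Assemble a few-shot prompt: instruction, labeled demos, then the query."""
    lines = [task_instruction]
    for text, label in examples:
        lines.append(f"Text: {text}\nSentiment: {label}")
    # The query is left with an empty label slot for the model to complete.
    lines.append(f"Text: {query}\nSentiment:")
    return "\n\n".join(lines)

demos = [
    ("I loved this film.", "positive"),
    ("The plot was dull.", "negative"),
]
prompt = build_prompt(
    "Classify the sentiment of each text as positive or negative.",
    demos,
    "A delightful surprise from start to finish.",
)
print(prompt)
```

With zero demonstrations this reduces to a zero-shot prompt; the same template structure underlies much of the prompt-engineering work the seminar covers.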
taught by: Dr. Frances Yung
language: English
start date: 22.04.2024
time: Monday, 10:15 - 11:45
located in: Building C7 3, seminar room 1.12
sign-up: Interested students can join our MS Team
credits: 4 CP (R), 7 CP (R+H)
suited for: B.Sc. in Computational Linguistics; M.Sc. in Language Science and Technology (LST)
more details: In LSF
notice: Registration deadline for the examination is 19.07.2024