DReMSS Webinar_ Harnessing Generative AI for questionnaire design, evaluation and testing
Information about this media
This webinar examines how generative AI can be integrated into the questionnaire design and evaluation process. Rather than treating large language models as ad hoc drafting tools, we discuss structured, workflow-embedded applications that support measurement quality at multiple stages of survey development. Several use cases are discussed, along with findings from validation exercises.
First, we present an occupational coding application that maps open-ended job descriptions to SOC2020 classifications using structured prompts and validation routines, illustrating how AI can support consistent and scalable post-fieldwork coding. We also describe how the tool can be embedded in a dynamic questionnaire to generate intelligent follow-ups that probe unclassifiable answers. Second, we present a “Silicon Sampling” generator that simulates heterogeneous respondent personas to stress-test draft questions at scale, identifying ambiguity, response instability, and potential measurement error prior to fielding. Third, we present an AI-enabled cognitive interviewing tool that operationalises probing protocols to examine comprehension, retrieval, judgement, and response processes in a systematic and reproducible way. Fourth, we present an example of a scalable pipeline for LLM evaluations of draft survey questions using rule-based question evaluation frameworks designed to identify question features that cause response challenges (in this case, the Question Appraisal System, QAS-99). We share the results of a validation exercise assessing how LLM evaluations using the QAS compare with those of human experts and novices. Finally, we discuss how such tools can be integrated into an agentic workflow to support survey questionnaire development.
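To make the workflow concrete, the sketch below illustrates the general pattern behind two of the tools described above: assembling a structured prompt that asks an LLM to answer a draft question in the voice of a simulated respondent persona ("Silicon Sampling"), and applying a rule-based pre-check on question wording in the spirit of appraisal frameworks like the QAS-99. All function and class names here are hypothetical, the appraisal rules are toy simplifications (the real QAS-99 covers many more question features), and the LLM call itself is omitted.

```python
from dataclasses import dataclass


@dataclass
class Persona:
    """A simulated respondent profile (hypothetical structure)."""
    age: int
    occupation: str
    education: str


def persona_prompt(persona: Persona, question: str) -> str:
    """Assemble a structured prompt asking an LLM to answer a draft
    survey question as a specific simulated respondent would."""
    return (
        f"You are a survey respondent: a {persona.age}-year-old "
        f"{persona.occupation} whose highest education is {persona.education}.\n"
        "Answer the question below as that person would, then note any "
        "wording you found ambiguous or hard to understand.\n\n"
        f"Question: {question}"
    )


def quick_appraisal(question: str) -> list[str]:
    """Toy rule-based pre-check flagging two common wording problems;
    a full appraisal system such as the QAS-99 is far more extensive."""
    flags = []
    if " and " in question.lower():
        flags.append("possible double-barrelled question")
    if len(question.split()) > 25:
        flags.append("question may be too long")
    return flags


draft = "How satisfied are you with your pay and your working conditions?"
print(quick_appraisal(draft))
print(persona_prompt(Persona(34, "nurse", "bachelor's degree"), draft))
```

In a full pipeline, the persona prompts would be sent to an LLM for many sampled personas, and the rule-based flags would be combined with the model's free-text appraisals before comparing against human expert ratings.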
Presenters:
Patrick Sturgis is Professor of Quantitative Social Science and Head of Department in the Department of Methodology at the London School of Economics. He was previously Director of the ESRC National Centre for Research Methods at the University of Southampton from 2010 to 2019. His research focuses on applied quantitative and statistical methods, with a particular specialism in survey design and analysis. He was President of the European Survey Research Association from 2011 to 2015 and has published widely in leading methodology journals including the Journal of the Royal Statistical Society, Public Opinion Quarterly, and the Journal of Survey Statistics and Methodology. He has served as Chair of the Methodological Advisory Board of the European Social Survey and the UK Household Longitudinal Survey. He is currently Principal Investigator of ‘Harnessing Generative AI for Questionnaire Design, Evaluation and Testing’, a research grant under the ESRC Survey Futures programme with Dr Caroline Roberts and Dr Tom Robinson.
Caroline Roberts is a Senior Lecturer and Researcher in the Institute of Social Sciences at the University of Lausanne, where she teaches MA courses on survey research methods and questionnaire design, and regularly gives specialist training in summer schools and short courses on related topics. She is also an affiliated Survey Methodologist at FORS, the Swiss Centre of Expertise in the Social Sciences, and a Visiting Senior Fellow in the Department of Methodology at the London School of Economics and Political Science. Her research interests in survey methodology relate to the measurement and reduction of different types of survey error. Her most recent research focuses on challenges relating to the implementation of digital data collection in high-quality general population surveys, and on ways to leverage generative AI in questionnaire design, evaluation, and testing. She was Chair of the Methods Advisory Board of the European Social Survey from 2020 to 2025 and President of the European Survey Research Association from 2019 to 2021.