Lecture Abstract:
|
In recent years, large language models (LLMs) have attracted attention for their ability to generate human-like text. As surveys and opinion polls remain key tools for gauging public attitudes, there is increasing interest in assessing whether LLMs can accurately replicate human responses. This study examines the potential of LLMs, specifically ChatGPT-4o, to replicate human responses in large-scale surveys and to predict election outcomes based on demographic data. Employing data from the World Values Survey (WVS), the American National Election Studies (ANES), and the German Longitudinal Election Study (GLES), we assess the LLM’s performance on two key tasks: replicating human survey responses and predicting both U.S. and German election results. In the survey tasks, the LLM generated synthetic responses to various socio-cultural and trust-related questions, showing notable alignment with human response patterns across U.S. and Chinese samples, though with some limitations on value-sensitive topics. In the voting tasks, the LLM was used primarily to simulate voting behavior in past U.S. elections and to predict the outcomes of the 2024 U.S. election and the 2025 German federal election. Our findings show that the LLM replicates cultural differences effectively, exhibits in-sample predictive validity, and provides plausible out-of-sample forecasts, suggesting its potential as a cost-effective supplement to survey-based research.