A recent study by political scientists Anil Menon of the University of California, Merced and Baobao Zhang of Syracuse University suggests that warnings about artificial intelligence (AI) potentially disrupting jobs in the near future have little effect on Americans’ attitudes toward automation risks or their support for policy responses.
The research surveyed 2,440 adults in the United States to assess how people react to forecasts predicting the arrival of “transformative AI” as soon as 2026 or as late as 2060. Participants were randomly assigned to groups that read vignettes stating job-threatening AI would arrive in a specific year—2026, 2030, or 2060—or received no timeline information at all. Each vignette described expert predictions that advances in machine learning and robotics could replace workers across various professions.
The findings indicate that while shorter timelines made respondents slightly more anxious about losing their jobs to automation, these warnings did not significantly shift respondents’ beliefs about when job losses would occur, nor did they increase support for government policies such as retraining programs or universal basic income. Respondents who read about rapid development in generative models like Gemini, Claude, and ChatGPT expected automation might happen sooner but showed little change in their economic outlooks or policy preferences.
“These results suggest that Americans’ beliefs about automation risks are stubborn,” the authors said. “Even when told that human-level AI could arrive within just a few years, people don’t dramatically revise their expectations or demand new policies.”
The study used construal level theory to explore how perceptions of time affect risk judgments. It found participants who were told AI breakthroughs were imminent were not much more alarmed than those given distant timelines. Only those presented with a 2060 forecast showed a significant increase in worry about job loss—a response researchers attributed to the perceived credibility of a longer-term prediction.
Menon and Zhang argue these results challenge the assumption that making technological threats feel more immediate will spur public support for regulatory measures or social safety nets. The survey also revealed that exposure to any timeline increased overall awareness of automation risks but only modestly affected concern about unemployment due to technology.
“The public’s expectations about automation appear remarkably stable,” they said. “Understanding why they are so resistant to change is crucial for anticipating how societies will navigate the labor disruptions of the AI era.”
The study was published in The Journal of Politics amid ongoing debates over how large language models and other generative systems may reshape employment.