Janet Malzahn

Ph.D. Student · Political Economics · Stanford GSB
jmalzahn (at) stanford (dot) edu · CV
I am a third-year Ph.D. student in the Political Economy group at the Stanford Graduate School of Business. I'm interested in American political economy with a focus on elections and climate change.

Peer-Reviewed Publications

Election-Denying Republican Candidates Underperformed in the 2022 Midterms
Joint with Andrew B. Hall · 2024 · American Political Science Review
Abstract
We combine newly collected election data with records of public denials of the results of the 2020 election to estimate the degree to which election-denying Republican candidates for senator, governor, secretary of state, and attorney general over- or under-performed other Republicans in 2022. We find that the average vote share of election-denying Republicans in statewide races was approximately 2.3 percentage points lower than their co-partisans after accounting for state-level partisanship. Election-denying candidates received roughly 2 percentage points more vote share than other Republican candidates in primaries, on average, although this estimate is quite uncertain. The general-election penalty is larger than the margin of victory in battleground states in recent close presidential elections, suggesting that nominating election-denying candidates in 2024 could be a damaging electoral strategy for Republicans. At the same time, it is small enough to suggest that only a relatively small group of voters changed their vote in response to having an election-denying candidate on the ballot.
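The comparison the abstract describes is, at its core, a regression of Republican vote share on an election-denier indicator with a control for state-level partisanship. The sketch below illustrates that specification on synthetic data; all variable names and numbers are illustrative stand-ins, not the paper's actual data or code.

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic stand-in data: one row per statewide Republican candidate.
# Everything here is illustrative, not the paper's dataset.
rng = np.random.default_rng(0)
n = 200
df = pd.DataFrame({
    "state_partisanship": rng.normal(50, 8, n),  # baseline GOP strength
    "election_denier": rng.integers(0, 2, n),    # 1 if candidate denied the 2020 result
})
# Vote share built to embed a ~2.3pp denier penalty, mirroring the headline estimate.
df["rep_vote_share"] = (
    df["state_partisanship"] - 2.3 * df["election_denier"] + rng.normal(0, 2, n)
)

# Regress vote share on the denier indicator, controlling for partisanship.
fit = smf.ols("rep_vote_share ~ election_denier + state_partisanship", data=df).fit()
print(fit.params["election_denier"])  # recovers roughly -2.3

The paper's actual design is richer than this toy regression, but the sketch shows where a number like the -2.3 percentage-point penalty conceptually comes from.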

Working Papers

Joint with Samuel G.Z. Asher, Jessica M. Persano, Elliot J. Paschal, Andrew C. W. Myers, and Andrew B. Hall · 2026
Abstract
Large language models (LLMs) are increasingly used as research assistants for statistical analysis. A well-documented concern with using LLMs is sycophancy, or the tendency to tell users what they want to hear rather than what is true. If sycophancy extends to statistical reasoning, LLM-assisted research could inadvertently automate p-hacking. We evaluate this possibility by asking two AI coding agents—Claude Opus 4.6 and OpenAI Codex (GPT-5.2-Codex)—to analyze datasets from four published political science papers with null or near-null results, varying the research framing and the pressure applied for significant findings in a 2 × 4 factorial design across 640 independent runs. Under standard prompting, both models produce remarkably stable estimates and explicitly refuse direct requests to p-hack, identifying them as scientific misconduct. However, a prompt that reframes specification search as uncertainty reporting bypasses these guardrails, causing both models to engage in systematic specification search. The degree of estimate inflation under this adversarial nudge tracks the analytical flexibility available in each research design: observational studies are more vulnerable than randomized experiments. These findings suggest that, at least in narrow estimation tasks, LLMs themselves are unlikely to bias results toward statistical significance, but safety guardrails are likely unable to restrain researchers intent on p-hacking.
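As a sanity check on the design arithmetic: two framings crossed with four pressure levels yields eight prompt cells, and crossing those with the two agents and the four datasets yields 64 conditions, consistent with 640 runs at ten per condition. The sketch below simply enumerates that grid; the cell labels and the even ten-run split are assumptions for illustration, not details taken from the abstract.

from itertools import product

# Hypothetical reconstruction of the factorial design described above.
framings = ["standard", "uncertainty_reporting"]       # 2 framings (labels assumed)
pressures = ["none", "mild", "strong", "explicit"]     # 4 pressure levels (labels assumed)
agents = ["claude-opus-4.6", "gpt-5.2-codex"]          # 2 coding agents
papers = ["paper_1", "paper_2", "paper_3", "paper_4"]  # 4 null-result datasets

cells = list(product(framings, pressures, agents, papers))
assert len(cells) == 2 * 4 * 2 * 4 == 64

# 640 total runs over 64 conditions is 10 runs per condition,
# if runs are split evenly (an assumption, not stated in the abstract).
runs_per_cell = 640 // len(cells)
print(runs_per_cell)  # -> 10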

Works in Progress

Do Partisans Sort Along Climate Risk? Evidence from the United States
Green Bills and Pork: The Political Durability of Green Industrial Policy
Joint with Mary Reader
The Hidden Incumbency Advantage: How Officeholding Shapes Intra-Party Competition in American Legislative and Executive Elections
Joint with Andrew C. W. Myers
Have Changes to Media and Technology Helped to Nationalize American Elections?
Joint with Daniel M. Thompson, Fang Guo, and Andrew B. Hall

Public Goods

Stata package for iterative record linkage on name variables