Why OpenAI says it isn’t spending on Super PACs
OpenAI explains why the organization itself isn’t spending on super PACs amid rising political activity by rivals and its own executives’ donations.
OpenAI has publicly explained why it is not spending corporate funds on super PACs or similar political action committees at a time when political influence over AI policy is intensifying. In a memo to employees, chief global affairs officer Chris Lehane said the conversation around AI regulation should transcend partisan divides and be shaped through broad, bipartisan policy work rather than direct political spending that could tie the company's brand to specific political outcomes.

While individual employees remain free to express their personal views and support candidates or causes of their choosing outside their roles at the company, OpenAI as an institution will not contribute to super PACs or 501(c)(4) social welfare organizations. Leadership believes such spending could compromise the company's ability to engage constructively across political lines and entangle its mission with electoral politics at a moment when lawmakers in the United States and beyond are debating how to regulate artificial intelligence and what role it should play in society.

The position contrasts sharply with rival Anthropic, which pledged $20 million to a political group advocating for stronger AI regulation, underscoring how major AI companies diverge on how best to influence policy without jeopardizing their missions. With elections approaching and both companies weighing potential blockbuster initial public offerings, OpenAI says it wants to keep control over its political expenditures and ensure that policy discussions are driven by substantive dialogue rather than partisan funding. The issues at stake, including privacy, job displacement, economic impact, and ethical concerns around AI capabilities, are significant enough that anchoring the company to one side of contemporary politics could alienate stakeholders and erode trust.

The stance is complicated by the political environment in which AI firms now operate. Industry executives and investors have been active donors to multiple super PACs and political causes in their personal capacities, including OpenAI president and co-founder Greg Brockman and his wife, who together donated $25 million to a pro-Trump super PAC. Brockman and others have also supported a bipartisan AI-focused super PAC that opposes state-level AI regulation in favor of a unified national framework, showing how individuals connected to the company remain firmly engaged in the political sphere even as the institution refrains from corporate political spending.

Lehane told staff that OpenAI backs a range of policy proposals at both the federal and state level but wants to do so in a non-partisan way, prioritizing thoughtful regulation and governance frameworks designed by legislatures over massive campaign expenditures. The approach reflects a broader strategic calculus about preserving the company's ability to work with governments of differing political compositions around the world, while acknowledging that high-profile contributions by company leaders have drawn attention and controversy. That scrutiny underscores the importance OpenAI places on clearly delineating the boundary between individual political advocacy and corporate policy engagement.

The position also arrives as the AI industry becomes more politically active, with multiple super PACs backed by technology figures and AI stakeholders poised to spend millions to shape elections and influence lawmakers on AI regulation, prompting some observers to raise concerns about the influence of "dark money" and organizational spending on public policy discourse. By avoiding corporate super PAC donations, OpenAI argues it can maintain credibility and neutrality when engaging with policymakers, regulators, academics, and the public about AI's future, while committing to policy engagement it considers constructive, transparent, and focused on broad societal benefits rather than using its corporate balance sheet to influence electoral politics directly.

In short, OpenAI's explanation for why it isn't spending on super PACs emphasizes keeping its corporate voice aligned with long-term, non-partisan policy goals rather than short-term political battles, even as the landscape of political spending around AI evolves rapidly and draws scrutiny from observers, lawmakers, and industry rivals alike.





