Bills We Support

In 2025, SAIP has focused on state-level legislation that would increase transparency into how the largest AI developers guard against the severe risks presented by advanced AI. We’ve supported prescient lawmakers in their efforts to develop, explain, negotiate, and advance such bills in New York, Michigan, Illinois, and California.

Though no two bills are exactly the same, all four efforts take a similar and balanced approach. For example, all four would apply only to large companies like xAI, OpenAI, and Meta, which spend billions of dollars to develop the most powerful AI models, and not to smaller developers. The requirements the bills place on large developers are also broadly consistent with practices that many of these companies have already adopted voluntarily. Making these practices mandatory avoids a race to the bottom on safety in a competitive and high-stakes industry. Below are links to the bills SAIP has supported, along with high-level summaries of their contents and where they stand in the legislative process.


New York

Early this year, NY State Assemblymember Alex Bores and Senator Andrew Gounardes introduced the Responsible AI Safety and Education (RAISE) Act. The RAISE Act would:

  • Require the largest AI companies to write, publish, and follow safety and security protocols (SSPs) describing how they guard against the risk of critical harms presented by their models. The RAISE Act defines critical harms as those that could result in more than 100 deaths or $1 billion in damages, such as assisting in the creation of biological weapons or carrying out automated criminal activity.
  • Require large developers to promptly report safety incidents that pose a risk of critical harm—such as the failure of technical controls or the theft of model weights—to the state Attorney General.
  • Allow the state Attorney General to seek civil penalties against large AI companies that fail to adopt an appropriate safety protocol.

In June, the NY State Legislature passed the RAISE Act by a decisive vote of 58-1 in the Senate and 119-22 in the Assembly, with a majority of both Democratic and Republican lawmakers voting in favor. The bill is now on the desk of Governor Kathy Hochul, who has until the end of the year to decide whether to sign it, veto it, or suggest chapter amendments to adjust its content.

If you live in New York, you can contact Governor Hochul to express your support for the RAISE Act here.


California

California recently became the first state in the nation to require transparency into the safety practices of frontier AI developers.

In February, CA Senator Scott Wiener introduced SB 53, the Transparency in Frontier Artificial Intelligence Act, with Secure AI Project as a co-sponsor, alongside Economic Security California Action and Encode AI. In July, Senator Wiener expanded the bill to reflect the recommendations of The California Report on Frontier AI Policy, which was produced by an expert working group commissioned by CA Governor Gavin Newsom last fall. In early September, the bill was further amended to incorporate feedback from a variety of key stakeholders.

On September 12, the CA legislature passed SB 53 by a decisive vote of 59-7 in the Assembly and 29-8 in the Senate. Governor Newsom signed the bill on September 29, marking a historic moment for California, the United States, and the global AI safety movement. Now that it’s law, SB 53 will:

  • Require large AI developers to write, publish, and follow frontier AI safety frameworks (similar to SSPs but with less required detail) describing how they guard against “catastrophic risks” presented by their largest models. Like the “critical harms” in the RAISE Act, catastrophic risks are those that could cause over 100 casualties or more than $1 billion in economic damages.
  • Require large developers to report critical safety incidents involving their largest models to the CA Office of Emergency Services.
  • Protect covered employees at large developers from retaliation if they report catastrophic risks presented by AI models of any size to appropriate authorities. Employees are covered if they are responsible for assessing, addressing, or managing catastrophic risks.
  • Instruct the California Department of Technology to issue an annual report advising on how the bill’s scoping thresholds (determining which companies must comply with its requirements) should be updated over time.
  • Establish a publicly owned and operated cloud computing cluster called CalCompute, charged with advancing the development of safe, ethical, and sustainable AI.

We applaud Governor Newsom and Senator Wiener for their leadership on this crucial issue, and encourage lawmakers across the country to build on their example. You can contact Governor Newsom to thank him for signing SB 53 here, and Senator Wiener to thank him for introducing it here.


Michigan

In June, MI State Representative Sarah Lightner (R) introduced HB 4668, the Artificial Intelligence Safety and Security Transparency Act. You can watch SAIP’s testimony in support of the bill here. HB 4668 would require the largest AI developers to:

  • Write, publish, and follow safety and security protocols (SSPs) similar to those required by the RAISE Act in New York (see above).
  • Provide quarterly reports on how they’re implementing their SSP—for example, what tests they’ve conducted on their latest models, and the outcomes of those tests.
  • Have a third-party auditor confirm that they are following their own SSP.
  • Establish whistleblower protections for employees who report to the authorities that one of these models poses a critical risk.

In September, the bill was referred to the Committee on Regulatory Reform within the Michigan House of Representatives, chaired by Representative Joseph Aragona. If you live in Michigan, you can contact Chair Aragona to express your support for the bill here.


Illinois

Early this year, IL State Representative Daniel Didech introduced HB 3506, the Artificial Intelligence Safety and Security Protocol Act. HB 3506 is very similar to Michigan’s HB 4668 (see above). The bill did not come to a floor vote before the Illinois General Assembly adjourned this spring, so its next opportunity to advance will be in the 2026 legislative session.


If you are a lawmaker interested in introducing similar legislation, we would be happy to provide support. You can reach us at info@secureaiproject.org. If you would like to support our work, you can make donations here.