San Francisco Pioneers AI Transparency: New Law Mandates Generative AI Disclosure

San Francisco City Council Approves Landmark Generative AI Disclosure Ordinance

San Francisco, CA – In a significant move setting a precedent for local artificial intelligence regulation, the San Francisco City Council on June 7th, 2025, passed Ordinance 25-G, a first-of-its-kind measure requiring businesses operating within city limits that develop or deploy generative AI models to disclose specific information about their technology.
The vote, a decisive 9-2, signals the city’s intent to proactively address the societal implications of advanced AI, prioritizing transparency and accountability in a sector experiencing rapid growth and integration into daily life. The ordinance represents a pioneering effort at the municipal level to peel back the curtain on how powerful AI systems are trained and what potential risks they may pose.

Details of Ordinance 25-G

Under the provisions of Ordinance 25-G, companies developing or deploying generative AI models — defined broadly to include systems capable of producing text, images, audio, or other data — within San Francisco’s geographical boundaries will be mandated to disclose key characteristics of the data used to train these models. This includes information about the scale, source, and nature of the training datasets. Furthermore, the ordinance explicitly requires businesses to identify and disclose potential bias risks inherent in their models. These disclosures are not intended for public release initially but will be submitted to a newly formed municipal review board.

This board, yet to be fully constituted, will comprise technical experts, ethicists, and community representatives. Its primary function will be to review the submitted information, assess the potential impacts of the AI models on San Francisco residents and infrastructure, and potentially recommend further action or guidelines. The focus on training data is particularly noteworthy, as it is a primary determinant of an AI model’s capabilities, limitations, and propensity for producing biased or harmful outputs.

Rationale and Championing the Bill

The ordinance was championed by Supervisor Chen, who has been a vocal advocate for greater oversight of emerging technologies. According to Supervisor Chen, the rapid deployment of powerful generative AI models, while offering immense potential benefits, also carries significant risks, including the propagation of misinformation, algorithmic bias leading to discriminatory outcomes in areas like housing, employment, or credit, and unintended societal disruption. “Transparency is the first step towards accountability,” Supervisor Chen stated during council deliberations. “Our goal with Ordinance 25-G is not to stifle innovation, but to ensure that the technology being developed and used in our city is understood, its risks are identified, and we have the information needed to protect our residents and maintain public trust. This municipal review board will provide the necessary technical expertise to evaluate these complex systems.”

The push for disclosure is rooted in the belief that understanding the inputs and potential failure modes of AI systems is crucial for anticipating and mitigating their negative consequences. The ordinance aims to provide the city with the necessary tools to monitor the AI landscape within its jurisdiction and proactively address potential issues before they cause significant harm. This approach positions San Francisco as a leader in attempting to build a regulatory framework for AI that is both adaptive and informed by detailed data from the technology developers themselves.

Opposition and Industry Concerns

Despite its passage, Ordinance 25-G faced considerable opposition, primarily from tech industry groups. The Bay Area Tech Alliance (BATA) was among the most prominent critics, arguing that the requirements are overly burdensome and could impede the pace of innovation. BATA and other opponents raised concerns that mandating disclosure of training data characteristics, even to a review board, could compromise proprietary information and trade secrets, potentially putting San Francisco-based companies at a competitive disadvantage globally.

“While we share the city’s interest in responsible AI development, this ordinance creates unnecessary hurdles,” a BATA spokesperson commented. “Requiring startups and established companies alike to catalogue and disclose intricate details about their training data and perform complex bias risk assessments for a local board adds significant cost and administrative burden. This could chill investment and push AI development elsewhere.”

Critics also questioned the technical feasibility and consistency of disclosing “potential bias risks,” arguing that bias is complex and often contextual, making a standardized disclosure requirement challenging. They suggested that industry self-regulation or federal guidelines would be more appropriate than a patchwork of local rules.

What Comes Next

With the City Council’s 9-2 approval secured, Ordinance 25-G now moves to the Mayor’s desk. Mayor Adams is widely expected to sign the bill into law sometime next week, solidifying San Francisco’s pioneering stance on AI transparency. Following the Mayor’s anticipated signature, the ordinance will not take effect immediately. A transition period of 180 days is stipulated, allowing businesses time to understand the new requirements, establish internal processes for data cataloging and risk assessment, and prepare their initial disclosures for the municipal review board. During this period, the city will also focus on establishing and staffing the review board and developing detailed guidelines for the disclosure process.

This waiting period acknowledges the complexity of the requirements and the need for both the industry and the city to prepare for implementation. Once effective, the ordinance will position San Francisco as having one of the nation’s strictest local AI disclosure laws, placing it at the forefront of municipal efforts to regulate advanced AI technology. The implementation phase will be closely watched by other cities and jurisdictions grappling with how to govern AI.

Broader Implications

The passage of Ordinance 25-G in San Francisco is a significant development in the evolving landscape of AI regulation. As a global hub for technology, the city’s actions often serve as a bellwether for trends elsewhere. The ordinance reflects a growing global sentiment that the rapid development and deployment of powerful AI systems necessitate greater public understanding and governmental oversight. It highlights the tension between fostering technological innovation and ensuring public safety, fairness, and accountability.

The success and challenges of implementing this ordinance will provide valuable lessons for other municipalities and potentially inform future state or federal regulatory approaches to AI. The focus on disclosure of training data and potential biases could become a model for other jurisdictions seeking to promote responsible AI development without resorting to outright bans or overly prescriptive technical mandates. The coming months will be crucial in determining how this landmark local law shapes the future of AI development and governance in San Francisco and potentially beyond.
