
AI Governance vs. Regulation: Why Rules Alone Won’t Work
Traditional regulation typically reacts to existing technologies, setting standards based on established use cases and known risks. It often takes years to develop, pass, and implement legislation, by which time the technology it aims to govern may have evolved significantly. This approach was suitable for industries with slower innovation cycles, such as utilities or transportation. Artificial Intelligence, however, evolves rapidly through iterative machine learning processes, making static regulatory models ineffective at anticipating emerging issues or adapting to novel applications.
AI governance goes beyond compliance enforcement. It involves proactive frameworks that shape how AI is developed, deployed, and monitored in real time. Governance includes ethical considerations, stakeholder engagement, algorithmic accountability, and continuous oversight mechanisms. It is not only the purview of federal legislators but also of local agencies, technology vendors, and civil society. Unlike traditional regulation, AI governance must be dynamic, collaborative, and context-sensitive to ensure that algorithmic systems operate in ways that are fair, transparent, and aligned with democratic values.
Current Global and U.S. Efforts in AI Oversight
In the United States, the federal government has taken initial steps to guide AI development. President Biden’s Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence, issued in October 2023, sets federal standards for AI safety, algorithmic bias mitigation, and data privacy protections across agencies [1]. It mandates that government contractors disclose AI usage and that federal agencies perform risk assessments before deploying AI tools. While this is a significant milestone, enforcement mechanisms remain limited, and much of the operational responsibility is delegated to individual departments and agencies.
The European Union has taken a more prescriptive legislative approach with the EU AI Act, which classifies AI systems into risk categories and prohibits those deemed to present "unacceptable risks" [2]. The Act outlines stringent obligations for high-risk applications, including biometric identification and predictive policing. Companies must provide documentation, conduct conformity assessments, and allow regulatory audits. The EU’s model emphasizes rights-based governance and could serve as a reference for U.S. municipalities seeking to develop locally adapted rules that prevent discriminatory or opaque uses of AI.
The Role of Industry Self-Regulation and Its Limits
Industry self-regulation has played a prominent role in AI development so far. Initiatives like the Partnership on AI and the OECD AI Principles encourage responsible AI practices among technology companies [3]. Many firms now publish AI ethics guidelines and perform internal reviews of algorithms. However, without external enforcement and transparency, these efforts often lack credibility. Self-regulation can contribute to developing shared norms, but relying solely on corporate actors to police themselves risks leaving critical gaps in accountability.
Municipal leaders should not assume that private sector assurances are sufficient. Cities deploying AI tools for traffic management, housing allocation, or policing must establish their own oversight mechanisms. For example, requiring vendors to disclose training data sources and bias mitigation techniques can help ensure that applications align with community values. Local governments must also be cautious about adopting vendor-provided algorithms without independent evaluation, as proprietary systems often lack transparency and are not easily audited by third parties.
Building Trust Through Ethical Standards and Human Oversight
Ethical standards are essential for public trust in AI systems. These standards should include principles such as fairness, accountability, and non-discrimination, especially when algorithms are used in high-stakes decisions like criminal justice, housing, or education. Integrating these principles into procurement policies and program design can ensure that AI tools do not perpetuate systemic inequalities. Human oversight must be layered into the process, especially when decisions affect individual rights or access to services.
Transparent algorithmic processes are also critical. This means documenting how decisions are made, what data is used, and how outcomes are validated. For example, the City of Amsterdam publishes an annual algorithm register that lists all AI systems used by the municipality, their purpose, and risk classification [4]. This level of openness allows residents and civil society organizations to scrutinize government use of AI and offer feedback. Without transparency, even well-intentioned systems can erode public confidence and invite legal or ethical challenges.
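For cities weighing a similar register, the entries need not be elaborate. The sketch below shows one hypothetical way to structure a register entry around the kinds of information the Amsterdam register publishes (system name, purpose, and risk classification). The field names, risk tiers, and sample values are illustrative assumptions for this example, not Amsterdam's actual schema.

```python
from dataclasses import dataclass, field, asdict
import json

@dataclass
class AlgorithmRegisterEntry:
    """One entry in a hypothetical municipal algorithm register.

    Field names and risk tiers are illustrative; they are modeled loosely on
    the kinds of information the Amsterdam register exposes, not its schema.
    """
    system_name: str
    operating_department: str
    purpose: str                      # what decision or service the system supports
    risk_classification: str          # e.g. "low", "medium", "high" (illustrative tiers)
    data_sources: list[str] = field(default_factory=list)
    human_oversight: str = ""         # who reviews or can override automated outputs
    contact: str = ""                 # where residents can send questions or appeals

entry = AlgorithmRegisterEntry(
    system_name="Parking permit triage",
    operating_department="Transportation",
    purpose="Prioritize review of residential parking permit applications",
    risk_classification="medium",
    data_sources=["Permit application records", "Residency verification data"],
    human_oversight="Permit clerks review every automated recommendation",
    contact="algorithms@example.gov",
)

# Publishing the register in a machine-readable format lets residents and
# civil society groups analyze it programmatically, not just read it.
print(json.dumps(asdict(entry), indent=2))
```

Publishing entries like this in both a human-readable page and a machine-readable feed makes the public scrutiny described above practical rather than aspirational.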
Examples of Local AI Governance in Action
Several U.S. cities have begun implementing governance models that offer practical templates for others. The City of New York established an Automated Decision Systems Task Force to evaluate the fairness and transparency of algorithms used by municipal agencies [5]. While its final report faced criticism for lacking enforcement strategies, it marked an important step toward institutionalizing public AI oversight. New York City has since expanded its efforts with Local Law 144, which mandates bias audits for AI hiring tools used by employers operating in the city [6].
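To make concrete what such an audit measures, the sketch below computes selection rates and impact ratios by demographic category, the core arithmetic behind adverse-impact analysis. The categories, counts, and the four-fifths threshold are illustrative assumptions for this example; they are not the statutory methodology or thresholds of Local Law 144 itself.

```python
from collections import defaultdict

# Hypothetical screening outcomes: (candidate category, advanced by the tool?)
# The categories and counts are made up for illustration.
outcomes = [
    ("group_a", True), ("group_a", True), ("group_a", False), ("group_a", True),
    ("group_b", True), ("group_b", False), ("group_b", False), ("group_b", False),
]

totals = defaultdict(int)
selected = defaultdict(int)
for category, advanced in outcomes:
    totals[category] += 1
    if advanced:
        selected[category] += 1

# Selection rate: share of each category's candidates the tool advanced.
selection_rates = {c: selected[c] / totals[c] for c in totals}

# Impact ratio: each category's selection rate divided by the highest rate.
best_rate = max(selection_rates.values())
impact_ratios = {c: rate / best_rate for c, rate in selection_rates.items()}

for category in sorted(selection_rates):
    print(f"{category}: selection rate {selection_rates[category]:.2f}, "
          f"impact ratio {impact_ratios[category]:.2f}")

# The four-fifths (0.8) rule of thumb from U.S. employment guidance is one
# common screen for adverse impact; it is a heuristic, not a legal threshold.
flagged = [c for c, r in impact_ratios.items() if r < 0.8]
print("Categories below the 0.8 heuristic:", flagged or "none")
```

Even this simple calculation illustrates why audits need access to outcome data broken down by category, which is something cities can require of vendors in procurement contracts.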
In Seattle, the city council has instituted data equity reviews as part of any major technology acquisition. This process involves examining whether automated systems may disproportionately impact marginalized groups. Meanwhile, San Francisco passed legislation prohibiting city departments from using facial recognition technology, citing civil liberties concerns [7]. These examples show that city governments can assert leadership by embedding ethical review and public input into their technology governance structures.
A Call to Collaborative Action
Municipal leaders are uniquely positioned to shape AI governance in ways that reflect local values and promote inclusive innovation. Given the distributed nature of AI deployment—from traffic cameras to public benefits eligibility algorithms—local governments cannot wait for federal policy to catch up. They must act now by forming cross-sector partnerships with technologists, researchers, and community organizations to co-design policies that prioritize transparency, safety, and equity.
This is not a task for IT departments alone. City managers, procurement officers, legal counsel, and equity advisors all have roles to play in ensuring that AI systems serve the public good. Establishing multidisciplinary ethics committees, requiring algorithmic impact assessments, and publishing open datasets are tangible steps cities can take today. By anchoring AI governance in democratic accountability and human rights, civic leaders can help ensure that these technologies enhance, rather than erode, public trust.
Bibliography
[1] White House. 2023. "Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence." October 30, 2023. https://www.whitehouse.gov/briefing-room/presidential-actions/2023/10/30/executive-order-on-the-safe-secure-and-trustworthy-development-and-use-of-artificial-intelligence/
[2] European Commission. 2024. "Proposal for a Regulation Laying Down Harmonised Rules on Artificial Intelligence (Artificial Intelligence Act)." https://artificial-intelligence-act.eu/
[3] OECD. 2019. "OECD Principles on Artificial Intelligence." https://www.oecd.org/going-digital/ai/principles/
[4] City of Amsterdam. 2023. "Algorithm Register." https://algoritmeregister.amsterdam.nl/en
[5] New York City Automated Decision Systems Task Force. 2019. "Final Report." https://www1.nyc.gov/assets/adstaskforce/downloads/pdf/ADS-Report-11192019.pdf
[6] New York City Council. 2021. "Local Law 144 of 2021." https://www.nyc.gov/assets/dca/downloads/pdf/about/LL144.pdf
[7] San Francisco Board of Supervisors. 2019. "Stop Secret Surveillance Ordinance." https://sfbos.org/sites/default/files/o0149-19.pdf