
Demystifying AI: Human Inputs, Machine Outputs
Artificial Intelligence is often portrayed as autonomous, but its core is deeply entangled with human judgment. Every AI system begins with choices made by people: what data to include, what outcomes to prioritize, and what risks to tolerate. The models that drive AI systems are trained on datasets curated by humans, reflecting social, economic, and organizational patterns. These choices shape everything from predictive models in city planning to automated permitting systems. Understanding these dependencies is critical for municipal leaders who must evaluate not just what the technology does, but how it was designed to do it.
For example, natural language processing tools used in municipal 311 systems are not inherently intelligent. They process text based on frequency, patterns, and correlations learned from prior human interactions. Their effectiveness depends on the quality and representativeness of the training data, which often comes from local government records or resident-submitted service requests. When these tools succeed, it's often because the input data mirrors the community's language and concerns. When they fail, it's usually because the data is incomplete, unbalanced, or poorly labeled. Municipal managers evaluating such tools must consider who contributed to the dataset, how it was labeled, and what assumptions were built into the model's architecture.
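To make this concrete, consider a minimal sketch of how such a tool might work under the hood. The example below trains a simple text classifier on labeled service requests using scikit-learn; the categories, request texts, and labels are invented for illustration and do not describe any specific city's 311 system.

```python
# A minimal sketch of how a 311 routing tool might classify requests.
# The categories, example texts, and labels are hypothetical; a real
# system would be trained on thousands of labeled resident requests.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Training data: prior resident requests, labeled by humans.
# The model can only learn patterns that are present in this data.
requests = [
    "pothole on elm street is damaging cars",
    "streetlight out near the park entrance",
    "missed trash pickup on my block this week",
    "large pothole at the main street intersection",
    "garbage not collected for two weeks",
    "the light at 5th and oak has been dark for days",
]
labels = ["roads", "lighting", "sanitation", "roads", "sanitation", "lighting"]

# TF-IDF turns text into word-frequency features; logistic regression
# learns which words correlate with which service category.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(requests, labels)

# The "intelligence" is learned correlation: wording or issues absent
# from the training data will be routed poorly.
print(model.predict(["deep hole in the road on 2nd avenue"]))  # likely 'roads'
```

Everything the model "knows" comes from those labeled examples. If residents in one neighborhood describe potholes differently, or certain complaint types are rarely submitted, the classifier will quietly underperform for them, which is exactly why the provenance and labeling of the dataset matter.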
AI as a Decision-Support Tool, Not a Decision-Maker
In practice, AI works best when it augments human decision-making rather than replacing it. For example, in code enforcement, AI can flag potential violations by analyzing satellite images or historical complaint patterns. But these outputs are only recommendations. Human inspectors still need to confirm the findings, assess the context, and engage with property owners. Treating AI as a decision-support tool rather than an autonomous actor helps embed accountability and preserve local knowledge in municipal workflows.
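What "decision support, not decision-maker" can look like in software is worth sketching. In the illustrative example below, the model only produces a ranked review queue, and a named inspector must sign off before any flag becomes a finding. The field names, threshold, and workflow are assumptions, not a description of any real enforcement system.

```python
# Illustrative sketch: AI output is a recommendation queue, not an action.
# Field names, the 0.7 threshold, and the review step are assumptions,
# not a reference to any specific vendor product.
from dataclasses import dataclass

@dataclass
class Flag:
    parcel_id: str
    issue: str
    model_score: float          # model confidence, 0.0 to 1.0
    reviewed_by: str | None = None
    confirmed: bool | None = None

def queue_for_review(flags: list[Flag], threshold: float = 0.7) -> list[Flag]:
    """Rank likely violations for a human inspector; never auto-cite."""
    return sorted(
        (f for f in flags if f.model_score >= threshold),
        key=lambda f: f.model_score,
        reverse=True,
    )

def record_inspection(flag: Flag, inspector: str, confirmed: bool) -> Flag:
    """Only a named human can turn a model flag into a finding."""
    flag.reviewed_by = inspector
    flag.confirmed = confirmed
    return flag

leads = queue_for_review([
    Flag("P-1012", "unpermitted structure", 0.91),
    Flag("P-2044", "overgrown lot", 0.55),   # below threshold: not queued
])
# The inspector can and should overrule the model after a site visit.
record_inspection(leads[0], inspector="J. Alvarez", confirmed=False)
```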
Municipal governments are increasingly using AI in areas like traffic flow optimization and resource allocation. These systems analyze historical data to suggest where to deploy assets such as traffic sensors or sanitation services. However, without human oversight, such models risk reinforcing past inefficiencies or inequities. For instance, if historical data reflects underinvestment in certain neighborhoods, AI-driven recommendations may continue that trend unless actively corrected. Human intervention ensures that data-driven insights are interpreted within the broader context of policy goals and community values.
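The feedback loop described above can be demonstrated with a toy calculation. In the sketch below, all figures are invented: inspections are allocated in proportion to historical complaint counts, so a neighborhood whose problems were under-reported in the past keeps receiving fewer resources until a deliberate policy correction, here a minimum floor, is applied.

```python
# Toy simulation of how allocation by historical data can entrench
# past underinvestment. All figures are invented for illustration.

history = {"north": 120, "south": 30}  # past complaints; south under-reported
BUDGET = 50  # inspections available per cycle

def allocate(counts: dict[str, int], budget: int) -> dict[str, int]:
    """Naive allocation: proportional to historical counts."""
    total = sum(counts.values())
    return {area: round(budget * n / total) for area, n in counts.items()}

def allocate_with_floor(counts: dict[str, int], budget: int,
                        floor: int = 10) -> dict[str, int]:
    """Corrected allocation: every area gets a minimum share, a policy
    choice a purely data-driven model would not make on its own."""
    remaining = budget - floor * len(counts)
    prop = allocate(counts, remaining)
    return {area: floor + prop[area] for area in counts}

print(allocate(history, BUDGET))             # {'north': 40, 'south': 10}
print(allocate_with_floor(history, BUDGET))  # {'north': 34, 'south': 16}
```

The point of the floor is not that ten is the right number; it is that the correction had to come from a human policy judgment, not from the data.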
Building AI Literacy in Local Government
One of the most practical steps a municipality can take is to invest in AI literacy among staff. This does not mean turning planners or finance officers into data scientists. Rather, it involves equipping them with the language and frameworks to ask critical questions about AI tools. Who built the model? What data was used? What are the assumptions, and how are decisions audited? These questions help staff become informed consumers and collaborators in AI adoption rather than passive end-users.
Several cities have taken steps to train managers and analysts on the fundamentals of machine learning and algorithmic accountability. For example, New York City’s Algorithms Management and Policy Officer works across departments to ensure that algorithmic tools align with public values and operational needs^1. Such roles can help local governments develop procurement standards, performance benchmarks, and ethical guidelines tailored to their specific service environments. Even modest training initiatives can build internal capacity to evaluate AI vendors, manage pilot projects, and assess tool performance over time.
Aligning AI Projects with Service Goals
Before adopting any AI system, municipalities must clarify their goals and define success in measurable terms. Is the objective to reduce processing time for permit applications? Improve emergency response times? Increase transparency in budget forecasting? AI is a tool, not a strategy. Without a clearly defined outcome, it risks becoming a solution in search of a problem. Aligning AI implementation with service delivery goals ensures that the technology addresses real operational needs.
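A goal like "reduce permit processing time" becomes testable only once a baseline and a target are written down before the pilot begins. The sketch below uses invented durations and an assumed 20 percent target to show how little is needed to make success measurable.

```python
# Sketch of defining success in measurable terms before adopting a tool.
# Durations and the 20% reduction target are invented for illustration.
from statistics import median

# Days to process permits before (baseline) and during an AI-assisted pilot.
baseline_days = [14, 21, 9, 30, 18, 25, 12]
pilot_days = [10, 16, 8, 22, 14, 19, 11]

TARGET_REDUCTION = 0.20  # success = 20% faster median processing

def met_target(before: list[int], after: list[int], target: float) -> bool:
    """Compare medians against the pre-agreed reduction target."""
    return median(after) <= median(before) * (1 - target)

print(f"Baseline median: {median(baseline_days)} days")  # 18
print(f"Pilot median:    {median(pilot_days)} days")     # 14
print("Target met:", met_target(baseline_days, pilot_days, TARGET_REDUCTION))
```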
Practical alignment also involves cross-departmental collaboration. For instance, a predictive analytics tool for housing inspections may require input from the IT department, housing inspectors, GIS analysts, and legal counsel. Each brings a different perspective on functionality, usability, and compliance. Embedding these stakeholders early in the project reduces the risk of misalignment and increases the likelihood that the tool will fit into existing workflows. Cities like Boston and San Diego have demonstrated that successful AI initiatives often begin with small, well-scoped pilots that are rigorously evaluated before scaling^2.
Accountability and Oversight in Municipal AI Use
As municipalities integrate AI tools into service delivery, they must also develop mechanisms for oversight and public accountability. This includes documenting how models were selected, how data is maintained, and how outputs are reviewed. Transparency is essential not only for public trust but also for internal learning. When a model underperforms or behaves unexpectedly, a clear audit trail helps identify what went wrong and how it can be corrected.
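One lightweight way to create such an audit trail is to write a structured record every time a model output influences a decision. The fields in the sketch below are illustrative assumptions about what is worth capturing, not a reference to any standard or vendor format.

```python
# Illustrative audit record for a model-assisted decision. The fields
# are assumptions about what an audit trail might capture.
import json
from datetime import datetime, timezone

def audit_record(tool, model_version, inputs, output, reviewer, action):
    """One record per model-influenced decision, stored append-only."""
    return {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "tool": tool,
        "model_version": model_version,   # which model produced the output
        "inputs": inputs,                 # what the model actually saw
        "model_output": output,           # raw recommendation, pre-review
        "human_reviewer": reviewer,       # who signed off
        "final_action": action,           # what the city actually did
    }

entry = audit_record(
    tool="inspection-prioritizer",
    model_version="2024-03-pilot",
    inputs={"parcel": "P-1012", "complaint_count": 3},
    output={"priority": "high", "score": 0.91},
    reviewer="J. Alvarez",
    action="inspection scheduled",
)
print(json.dumps(entry, indent=2))
```

Because the record separates the model's raw output from the final human action, reviewers can later see not only when the model erred but also when staff overrode it, and why the two diverged.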
Several governments have begun formalizing AI governance structures. The City of Seattle, for example, requires an Algorithmic Impact Assessment for high-risk tools, evaluating potential impacts on equity, privacy, and public trust^3. This process includes public engagement and a requirement to disclose vendors, data sources, and oversight mechanisms. Such practices can be scaled to fit smaller municipalities by adapting checklists or incorporating AI risk assessments into existing procurement reviews. These steps help ensure that AI projects remain accountable to the communities they serve.
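For a smaller municipality, even a structured checklist attached to procurement review can serve as a first step. The questions below are illustrative examples of common impact-assessment themes; they are not Seattle's actual assessment instrument.

```python
# Illustrative pre-procurement risk screen for an AI tool. The questions
# are examples of common impact-assessment themes, not any city's
# official instrument.
CHECKLIST = {
    "data": "What data was the model trained on, and who labeled it?",
    "equity": "Could outputs differ across neighborhoods or demographic groups?",
    "privacy": "Does the tool collect or infer personal information?",
    "oversight": "Which staff member reviews outputs before action is taken?",
    "vendor": "Will the vendor disclose model changes and performance data?",
}

def screen(answers: dict[str, str]) -> list[str]:
    """Return the questions a vendor or department has not yet answered."""
    return [q for key, q in CHECKLIST.items() if not answers.get(key)]

unanswered = screen({"data": "City permit records, 2018-2023", "privacy": "No"})
for q in unanswered:
    print("OPEN:", q)
```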
Moving from Curiosity to Competence
Cities don’t need to be tech hubs to implement AI reliably. What they need is a clear understanding of their goals, a framework for evaluating tools, and a commitment to learning from experience. AI is not a fixed product; it evolves with data, context, and use. Municipalities that treat AI as a continuous learning process, rather than a one-time deployment, are better positioned to adapt and improve over time.
Practical competence in AI starts with asking the right questions. What problem are we trying to solve? How will success be measured? Who is affected, and how will we know if harm occurs? These questions are not new—they are the same ones public administrators have always asked. AI changes the tools, not the mission. By grounding AI adoption in service goals, community values, and operational realities, local governments can ensure the technology enhances their capacity to serve rather than distracts from it.
Bibliography
New York City Office of Technology and Innovation. “Algorithms Management and Policy Officer.” Accessed April 10, 2024. https://www.nyc.gov/assets/oti/downloads/pdf/algorithms.pdf.
Harrell, Erika. “AI in Cities: Examples from Boston and San Diego.” Urban Institute, August 2022. https://www.urban.org/research/publication/ai-cities-examples-boston-and-san-diego.
City of Seattle. “Surveillance Ordinance and Algorithmic Impact Assessments.” Accessed April 10, 2024. https://www.seattle.gov/tech/initiatives/privacy/algorithmic-impact-assessments.