Who’s in the Data? Bringing Equity and Reflection into AI Design

Ethical reflection in AI design is not a theoretical exercise; it is a critical step in creating systems that align with democratic values and public accountability. From the earliest stages of data collection and model training, developers must ask whose experiences are represented, whose are excluded, and what assumptions are being codified. For example, an algorithm designed to predict student performance may unintentionally penalize students from under-resourced schools if it relies heavily on historical test scores without contextualizing systemic disparities. These design decisions, though often technical in appearance, carry profound ethical implications.

Instituting ethical reflection requires structured practices. Techniques such as ethics checklists, stakeholder reviews, and bias audits can help teams surface potential risks before deployment. The U.S. Government Accountability Office has recommended integrating responsible AI practices into federal acquisition and development processes, including stress-testing models for fairness and transparency across population subgroups¹. Municipal leaders and administrators can adopt similar frameworks by requiring vendors to disclose datasets used, explain model decisions in plain language, and demonstrate how potential harms have been minimized. Building these expectations into procurement policies ensures ethical concerns are addressed early and not as afterthoughts.
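To make one of these practices concrete, a basic bias audit can start by comparing a model's positive-outcome rates across population subgroups before deployment. The sketch below is illustrative only: the data, group labels, and threshold are invented for the example, and real audits would use held-out records, legally defined protected attributes, and more than one fairness metric.

```python
# Minimal sketch of a subgroup bias audit: compare positive-outcome
# rates across subgroups and flag large gaps for human review.
# All data below is hypothetical, chosen only to illustrate the idea.

def subgroup_rates(predictions, groups):
    """Return the positive-prediction rate for each subgroup."""
    totals, positives = {}, {}
    for pred, group in zip(predictions, groups):
        totals[group] = totals.get(group, 0) + 1
        positives[group] = positives.get(group, 0) + (1 if pred else 0)
    return {g: positives[g] / totals[g] for g in totals}

def max_rate_gap(rates):
    """Largest difference in positive rates between any two subgroups."""
    values = list(rates.values())
    return max(values) - min(values)

# Hypothetical benefit-eligibility predictions for two districts.
preds  = [1, 1, 0, 1, 1, 0, 0, 1, 1, 0]
groups = ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"]

rates = subgroup_rates(preds, groups)
print(rates)  # per-group approval rates

# A procurement policy might require vendors to report this gap
# and justify any value above an agreed threshold.
if max_rate_gap(rates) > 0.2:
    print("Disparity exceeds threshold: route model for fairness review")
```

A check like this does not prove a system is fair, but requiring vendors to report such metrics in plain language is one way to build the stress-testing the GAO recommends into procurement.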

Learning from Public Sector Use Cases

Real-world applications of AI in government offer valuable lessons about both opportunity and risk. Predictive policing tools, for instance, have been deployed in several cities to forecast where crimes are likely to occur. However, investigations have shown that such systems can perpetuate racial bias by relying on historical arrest data, which may reflect discriminatory policing patterns rather than actual crime rates². These examples reveal how AI can amplify existing inequities if not rigorously evaluated for fairness and accountability.

Similarly, automated decision systems used in social services, such as eligibility screening for public benefits, have faced scrutiny over transparency and error rates. In Indiana, a statewide automation program that flagged citizens for fraud resulted in thousands of wrongful terminations of Medicaid and food aid benefits³. These incidents demonstrate the importance of maintaining human oversight, establishing appeals processes, and ensuring citizens have access to explanations and recourse. AI should support, not replace, equitable service delivery and due process.

The Role of Diverse Stakeholders in Shaping AI

Inclusive development is essential to ensuring AI reflects the needs and values of all communities. Too often, technical teams operate in isolation, with limited input from educators, community advocates, public administrators, or those directly impacted by the systems. Bringing diverse voices into the design process helps surface blind spots, challenge assumptions, and build more contextually appropriate tools. For instance, involving housing advocates in the development of tenant screening algorithms can help identify criteria that may disproportionately affect low-income renters or communities of color.

Local governments can facilitate participatory design by hosting public workshops, establishing ethics advisory boards, or partnering with academic institutions to conduct community-centered research. The City of Amsterdam, for example, created an AI register where residents can view and comment on algorithmic systems used in city services⁴. This type of transparency builds trust and creates pathways for civic engagement. Encouraging partnerships between civic tech groups and municipal departments can also help translate community needs into technical requirements that promote equity and accountability.

Building Ethical and AI Literacy Together

Technical skills alone are not enough to manage AI responsibly. Public administrators and civic leaders must also cultivate ethical literacy: the ability to question how systems operate, interpret their outcomes, and identify potential harms. This includes understanding data provenance, model limitations, and the social context in which decisions are made. Training programs that combine AI fundamentals with case studies on fairness, accountability, and transparency can equip practitioners to engage more critically with technology.

Several resources are emerging to support these efforts. The AI Ethics Guidelines Global Inventory developed by AlgorithmWatch provides a comparative overview of ethical frameworks from various organizations and governments⁵. The Center for Technology in Government at SUNY Albany has also published practical guides for public administrators on managing data-driven initiatives, including ethical considerations⁶. Embedding these resources into staff onboarding, professional development, and leadership training can help institutionalize a culture of reflective, responsible innovation.

From Reflection to Action: Community Engagement

Creating AI that serves society is not solely a technical challenge; it is a civic one. Residents encounter algorithmic decision-making in areas such as public housing eligibility, job application filtering, and content moderation, often without knowing it. Encouraging public reflection on these encounters can demystify AI and empower citizens to advocate for fairer systems. Town halls, public comment periods, and educational campaigns can all help communities learn how AI affects them and what questions to ask of local leaders and vendors.

For example, libraries and community centers can host AI literacy workshops that explain how recommendation systems work or how facial recognition is used in public spaces. Civic organizations can create toolkits to help residents evaluate the fairness of systems used in schools or housing. When residents understand their rights and the mechanics of AI, they are better equipped to demand accountability. Engaging the public in this way ensures that AI development is not a top-down process but a shared endeavor shaped by collective values.

Conclusion: Designing with Accountability and Compassion

AI does not exist in a vacuum. It is shaped by the priorities, perceptions, and policies of those who build and deploy it. Municipal technology leaders have a unique responsibility to ensure that artificial intelligence supports inclusive, transparent, and equitable outcomes. This means asking hard questions, listening to diverse perspectives, and embedding ethical inquiry into every stage of the design process.

Accountability is not a constraint on innovation; it is a prerequisite for trust. When we recognize the humanity behind every algorithm, we can harness AI to amplify our best intentions rather than our worst biases. The future of AI is not about machines replacing people, but about people taking responsibility for the machines they create. When we center compassion, transparency, and justice, we build technology that truly serves the public good.

Bibliography

  1. U.S. Government Accountability Office. "Artificial Intelligence: An Accountability Framework for Federal Agencies and Other Entities." GAO-21-519SP, June 2021. https://www.gao.gov/products/gao-21-519sp.

  2. Richardson, Rashida, Jason M. Schultz, and Kate Crawford. "Dirty Data, Bad Predictions: How Civil Rights Violations Impact Police Data, Predictive Policing Systems, and Justice." New York University Law Review Online 94 (2019): 15-55.

  3. Eubanks, Virginia. "Automating Inequality: How High-Tech Tools Profile, Police, and Punish the Poor." New York: St. Martin’s Press, 2018.

  4. City of Amsterdam. "Algorithm Register." Accessed April 15, 2024. https://algoritmeregister.amsterdam.nl/en.

  5. AlgorithmWatch. "AI Ethics Guidelines Global Inventory." Last modified December 2023. https://inventory.algorithmwatch.org/.

  6. Center for Technology in Government. "Guidance for Data-Driven Government." University at Albany, SUNY. Accessed April 18, 2024. https://www.ctg.albany.edu/publications/guidance-data-driven-government/.
