AI’s Communication Surge and the 1990s Tech Bubble Parallels


The comparison Sam Altman draws between today’s AI boom and the 1990s tech surge serves as a vital caution for policymakers and municipal leaders. The internet revolution brought profound connectivity, but it also delivered a cascade of overinvestment, collapsing startups, and regulatory gaps. Altman’s remarks suggest that while AI-driven communication is growing exponentially, it may be riding a similar wave of speculative enthusiasm that could outpace governance structures and societal readiness. Current investment trends support this warning: global AI funding hit $91.9 billion in 2023, a 26 percent increase over the previous year, with communication-focused applications leading the surge in venture capital interest [1].

For municipal governments, the lesson from the dot-com era is clear: build adaptable, resilient digital infrastructure while preparing for regulatory misalignment. In the 1990s, many local governments struggled to integrate digital communication tools into civic workflows, resulting in fragmented service delivery and digital divides. Today, as AI tools like ChatGPT and Gemini begin to mediate conversations between residents and public institutions, leaders must ensure these tools are not adopted for novelty’s sake but are evaluated against standards of accessibility, transparency, and operational relevance [2].

Trust, Empathy, and Machine-Mediated Dialogue

As AI-driven conversation scales rapidly, the question of emotional resonance becomes essential. The mixed reception of GPT-5, with users describing it as “colder” than GPT-4, points to a broader issue: can efficiency in communication coexist with emotional intelligence? While GPT-5 demonstrates improved factual precision and speed, many users report a perceived drop in warmth or empathy, which alters how people respond to it in sensitive or service-based interactions [3]. This matters deeply in municipal settings, where residents often seek not only answers but understanding from their local institutions.

Public-facing AI applications must be designed with this balance in mind. For instance, when chatbots handle resident complaints, permit inquiries, or housing support, the tone of the conversation can influence public trust. A transactional tone may satisfy information needs but erode emotional connection. Conversely, overly empathetic responses that lack clarity can frustrate users seeking action. Municipal IT departments, in collaboration with communications staff, must invest in iterative testing of AI agents to fine-tune both language and functionality, ensuring that digital interactions feel both competent and human [4].
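One lightweight way to start that iterative testing is to screen draft chatbot replies for both empathy cues and a clear next step before they reach residents. The sketch below is purely illustrative: the marker lists and the `review_reply` helper are assumptions for demonstration, not any city’s actual review tooling.

```python
# Minimal sketch of a reply-review step. The marker phrases and the
# review_reply helper are illustrative assumptions, not a product API.
EMPATHY_MARKERS = ["thank you", "we understand", "sorry to hear"]
ACTION_MARKERS = ["next step", "you can", "please submit", "we will"]

def review_reply(reply: str) -> dict:
    """Flag whether a draft reply carries both warmth and a concrete action."""
    text = reply.lower()
    return {
        "has_empathy": any(marker in text for marker in EMPATHY_MARKERS),
        "has_action": any(marker in text for marker in ACTION_MARKERS),
    }

result = review_reply(
    "Thank you for reporting this. We will dispatch a crew within 48 hours."
)
print(result)  # both flags are True for this reply
```

A real review process would go far beyond keyword matching, but even a crude screen like this can surface replies that are all transaction and no warmth, or vice versa, for human editors to rework.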

Government Strategies for Managing AI Conversations Ethically

To navigate the growing dominance of AI in official communication channels, municipal governments should prioritize three core practices: transparency, equity, and accountability. Transparency involves clearly labeling AI-generated content and disclosing when conversations are mediated by machine agents. This practice builds trust and ensures that residents understand the context of the information they receive. For example, the City of Los Angeles includes visible disclaimers when AI tools are used in online portals to manage expectations and clarify responsibility [5].
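As an illustration of the labeling practice, the sketch below attaches a visible disclosure and provenance metadata to every machine-generated reply. The disclosure wording, field names, and `label_ai_response` helper are all assumptions for this example, not any city’s actual implementation.

```python
from datetime import datetime, timezone

# Hypothetical disclosure text; each municipality would adapt this to policy.
AI_DISCLOSURE = (
    "This response was generated by an automated assistant. "
    "For help from a staff member, contact the relevant department directly."
)

def label_ai_response(text: str) -> dict:
    """Wrap an AI-generated reply with a visible disclosure and metadata."""
    return {
        "message": text,
        "disclosure": AI_DISCLOSURE,
        "generated_by": "ai",           # provenance flag for audit logs
        "timestamp": datetime.now(timezone.utc).isoformat(),
    }

reply = label_ai_response("Your permit application is under review.")
print(reply["disclosure"])
```

Keeping the disclosure and provenance fields separate from the message body lets the same record serve two audiences: residents see the disclaimer in the portal, while auditors can filter logs by the `generated_by` flag.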

Equity must guide deployment decisions. AI tools often reflect biases present in their training data, which can result in skewed responses. Municipalities should conduct equity audits before adopting conversational AI, especially in departments serving vulnerable populations such as housing, policing, or social services. The City of Boston, for instance, requires AI vendors to provide documentation on data sources and bias mitigation strategies before procurement [6]. Lastly, accountability frameworks should define who is responsible when AI tools fail or misinform. A clear chain of oversight can prevent institutional confusion and legal exposure.

Education and Business Leadership in the Age of AI Conversations

Educators and business leaders also have a significant role in shaping how society adapts to AI-mediated communication. In public administration programs, curricula should include modules on algorithmic communication, digital literacy, and ethical technology management. These programs must prepare future leaders to critically assess AI tools not just for functionality but for their social and civic implications. Case studies of municipal AI deployments, such as San Jose’s virtual assistant for community services, can provide applied learning opportunities for students [7].

Business leaders, particularly those in sectors that partner with local government, must commit to co-developing AI tools that respect civic values. This includes designing interfaces that accommodate users with disabilities, offering multilingual support, and protecting user data. Partnerships like that between Microsoft and the City of Chicago, where AI tools are used to streamline procurement and budgeting processes, demonstrate how ethical alignment between public and private sectors can yield both innovation and public benefit [8].

Listening as a Civic Imperative in the AI Era

As AI tools become more fluent and prolific in conversation, the human role in dialogue becomes even more critical. Altman’s prediction points to a future where machines may speak more frequently than people. However, speech without listening risks reinforcing echo chambers and procedural rigidity. For municipal leaders, the challenge is not just to adopt AI tools, but to ensure they enhance rather than replace authentic engagement. Listening means analyzing AI chat logs for recurring resident concerns, integrating that feedback into policy, and maintaining open forums where human voices remain central to decision-making.
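The chat-log analysis described above can begin very simply: tally how often recurring topics surface across resident conversations. The sketch below uses made-up transcripts and an illustrative topic taxonomy; a real deployment would pull anonymized logs from the chatbot platform and use a much richer classifier.

```python
from collections import Counter

# Hypothetical sample transcripts; real systems would use anonymized logs.
chat_logs = [
    "When will my building permit be approved?",
    "The streetlight on Elm Street is still broken.",
    "How do I apply for a housing assistance permit?",
    "Broken streetlight near the park entrance.",
]

# Illustrative topic keywords; an actual taxonomy would be far richer.
TOPICS = {
    "permits": ["permit"],
    "streetlights": ["streetlight"],
    "housing": ["housing"],
}

def tally_concerns(logs: list[str]) -> Counter:
    """Count how often each topic appears across chat transcripts."""
    counts = Counter()
    for entry in logs:
        text = entry.lower()
        for topic, keywords in TOPICS.items():
            if any(keyword in text for keyword in keywords):
                counts[topic] += 1
    return counts

print(tally_concerns(chat_logs).most_common())
```

Even a crude tally like this gives staff a ranked view of what residents keep raising, which is the raw material for the feedback-to-policy loop the paragraph describes.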

Governments, educators, and businesses must work together to ensure that as machines handle more of the talking, humans remain responsible for the hearing, interpreting, and responding. This requires a shift in mindset: AI should not be treated as a solution, but as a partner in communication. By grounding these technologies in ethical frameworks and civic responsibility, municipalities can lead in harnessing AI’s power while preserving what makes public service human. The future will not just be written by algorithms—it will be shaped by how attentively we listen to each other through them.

Bibliography

  1. PitchBook Data Inc. “AI and Generative AI Report Q4 2023.” January 2024. https://pitchbook.com/news/reports/q4-2023-emerging-tech-research-ai-ml

  2. National League of Cities. “Digital Equity Playbook: How City Leaders Can Bridge the Digital Divide.” 2021. https://www.nlc.org/resource/digital-equity-playbook/

  3. MIT Technology Review. “GPT-5: Faster, Smarter—and Colder?” April 2024. https://www.technologyreview.com/2024/04/15/gpt5-review

  4. Stanford Human-Centered AI. “AI and Empathy: Challenges in Human-Machine Interaction.” 2023. https://hai.stanford.edu/research/ai-empathy

  5. City of Los Angeles Information Technology Agency. “AI Guidelines and Use Policy.” 2023. https://ita.lacity.org/ai-guidelines

  6. City of Boston. “Responsible AI Procurement Framework.” 2023. https://www.boston.gov/departments/innovation-and-technology/responsible-ai

  7. City of San Jose. “Digital Assistant Pilot Program.” Office of Civic Innovation, 2022. https://www.sanjoseca.gov/your-government/innovation

  8. Microsoft News Center. “City of Chicago Uses AI to Improve Government Services.” 2023. https://news.microsoft.com/city-of-chicago-ai-partnership
