
Are We Arguing With Bots? How Synthetic Engagement Hijacks the Digital Public Square
Online discourse today is increasingly shaped not by human voices, but by automated entities designed to mimic them. Bots and troll farms, often operating under the radar, have become central players on platforms like X (formerly Twitter), Facebook, and Reddit. These accounts are programmed to amplify narratives, distort perceptions, and simulate social consensus. In 2020 alone, Twitter estimated that nearly 5% of its users were bots, although independent research suggests the number could be significantly higher in certain politically charged conversations.[1] Facebook has reported removing over 1.3 billion fake accounts in the first quarter of 2021, a testament to the scale of the issue.[2]
These automated actors are not limited to spreading spam or low-grade propaganda. Sophisticated bots are now capable of engaging in nuanced conversations, retweeting specific hashtags at optimal times, and creating the illusion of widespread support or opposition. This synthetic engagement manipulates algorithms that prioritize popular or trending content, causing false narratives to rise to the top of public feeds. The design of these platforms, which reward engagement over accuracy, makes them especially vulnerable to this type of manipulation. Consequently, what users interpret as organic public sentiment is often a coordinated effort driven by a minority of automated or bad-faith actors.
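To see how little effort this gaming requires, consider a minimal sketch of an engagement-weighted trending score and what a single coordinated bot burst does to it. The scoring formula below is invented for illustration; real platforms' ranking functions are proprietary, but most weight raw interaction counts heavily.

```python
from dataclasses import dataclass

@dataclass
class Post:
    likes: int
    shares: int
    replies: int
    age_hours: float

def trending_score(post: Post) -> float:
    """Toy engagement-over-recency score. The weights and decay are
    illustrative, not any platform's actual formula."""
    engagement = post.likes + 2 * post.shares + 1.5 * post.replies
    return engagement / (1 + post.age_hours) ** 1.5

# An organic post with modest, genuine engagement.
organic = Post(likes=120, shares=15, replies=30, age_hours=6)

# The same-aged post after a 500-account botnet likes and shares it once each.
boosted = Post(likes=120 + 500, shares=15 + 500, replies=30, age_hours=6)

print(f"organic: {trending_score(organic):.1f}")  # ~10.5
print(f"boosted: {trending_score(boosted):.1f}")  # ~91.5 -- the botnet outranks humans
```

A few hundred throwaway accounts, each acting once, are enough to push a post past content with an order of magnitude more genuine interest.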
Disinformation Cascades: How Falsehoods Spread Faster Than Facts
One of the most vexing characteristics of modern digital discourse is the speed with which misinformation proliferates. Research published in the journal Science found that false news stories on Twitter spread significantly farther and faster than the truth, particularly in areas related to politics and health.[3] This diffusion pattern is exacerbated by platform algorithms that prioritize content with high engagement, regardless of its veracity. In effect, the emotional appeal and novelty of misinformation often make it more shareable than well-sourced, factual information.
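The "farther and faster" finding rests on cascade metrics such as depth (how many resharing hops from the original post) and size (how many accounts the cascade reaches). A simplified illustration of how these are computed, using an invented resharing tree:

```python
# Sketch of the cascade metrics used in diffusion research such as the
# Science study cited above. The example retweet tree is invented.

def cascade_depth(tree: dict, node: str = "origin") -> int:
    """Longest chain of reshares starting from the original post."""
    children = tree.get(node, [])
    if not children:
        return 0
    return 1 + max(cascade_depth(tree, c) for c in children)

def cascade_size(tree: dict) -> int:
    """Total accounts in the cascade, including the original poster."""
    return 1 + sum(len(children) for children in tree.values())

# Each key reshared the post to the accounts in its list.
retweet_tree = {
    "origin": ["a", "b"],
    "a": ["c", "d", "e"],
    "c": ["f"],
}

print("depth:", cascade_depth(retweet_tree))  # 3 hops: origin -> a -> c -> f
print("size:", cascade_size(retweet_tree))    # 7 accounts in the cascade
```

On these measures, the cited research found false stories consistently produced deeper and larger cascades than true ones.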
Public administrators and communication professionals must understand that these dynamics are not accidental. Disinformation campaigns are often the product of deliberate strategy, where bad actors test multiple messages and amplify those that generate the most traction. On Reddit, for example, coordinated brigading by troll accounts has been observed during election cycles, where users are directed to downvote or upvote specific content to manipulate visibility.[4] These tactics distort the apparent consensus in online spaces, making it harder for genuine users to discern reliable information. For governments and civic leaders, this poses a serious challenge to informed decision-making and public trust.
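One way such brigading leaves a statistical fingerprint is in vote timing: directed campaigns produce bursts of votes far denser than organic activity. The sketch below works from a hypothetical vote log (platforms expose nothing like this publicly; the threshold and window are illustrative):

```python
from collections import defaultdict
from datetime import datetime, timedelta

# Hypothetical vote log: (item_id, account_id, timestamp).
# Here, 80 accounts vote on the same post within about 80 seconds.
votes = [
    ("post_42", f"acct_{i}", datetime(2024, 11, 5, 9, 0, 0) + timedelta(seconds=i))
    for i in range(80)
] + [
    ("post_42", "regular_user", datetime(2024, 11, 5, 14, 30)),
]

def flag_vote_bursts(votes, window=timedelta(minutes=5), threshold=50):
    """Flag items receiving an unusually dense burst of votes,
    a common signature of directed brigading."""
    by_item = defaultdict(list)
    for item, _account, ts in votes:
        by_item[item].append(ts)
    flagged = []
    for item, stamps in by_item.items():
        stamps.sort()
        for i, start in enumerate(stamps):
            # Count votes landing within `window` of this one.
            in_window = sum(1 for t in stamps[i:] if t - start <= window)
            if in_window >= threshold:
                flagged.append(item)
                break
    return flagged

print(flag_vote_bursts(votes))  # ['post_42'] -- 80 votes in ~80 seconds
```

Timing analysis of this kind is only a heuristic, but it illustrates why coordinated campaigns are detectable in principle even when individual accounts look legitimate.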
The Blurred Line Between Human and Machine
The rise of AI-generated personas adds another layer of complexity. These digital entities are designed to appear human in every respect: they post regularly, respond to comments, and even express opinions consistent with targeted demographic profiles. Tools that generate realistic profile pictures and language models capable of crafting contextually appropriate responses have made it nearly impossible for the average user to distinguish between a real person and a synthetic one.[5] This deception is not merely technical - it has social and political implications.
In public forums where policy issues are debated, such as Facebook community groups or Reddit threads focused on urban planning, synthetic accounts can sway sentiment by sheer volume. When enough AI-generated voices appear to support a position, it creates a false consensus that may influence actual public opinion or, worse, decision-making by officials monitoring these discussions. For communication officers and speechwriters in government, acknowledging the presence of these synthetic participants is essential when gauging public sentiment or preparing public responses.
Implications for Civic Communication and Public Trust
As digital platforms become primary venues for public discourse, the integrity of those conversations becomes a matter of governance. When bots and trolls dominate or distort dialogue, they erode the foundation of democratic communication: the belief that individual voices matter and are heard. For local governments, this is particularly significant. Community meetings, surveys, and online comment periods are increasingly supplemented - or replaced - by digital interactions. If those interactions are polluted with inauthentic behavior, the legitimacy of civic processes comes into question.
Speechwriters and communication officers must now approach their work with a dual lens: crafting messages that resonate with genuine stakeholders while being vigilant about manipulation. This requires new tools and practices. For instance, sentiment analysis software must be layered with bot-detection algorithms to filter out synthetic engagement. Communication campaigns must be stress-tested against known disinformation tactics. It also requires public education, helping community members identify credible sources and understand how to vet information before sharing.
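As a concrete illustration of that layering, the sketch below assumes two precomputed scores per comment: a sentiment value in [-1, 1] from any off-the-shelf sentiment model, and a bot-likelihood score in [0, 1] from a detection classifier. The field names and the 0.5 cutoff are illustrative, not any particular tool's API:

```python
# Layering bot detection over sentiment analysis: aggregate public
# sentiment only after discarding likely-automated accounts.

comments = [
    {"text": "Love the new bike lanes!",  "sentiment": 0.8,  "bot_score": 0.05},
    {"text": "This plan ruins downtown.", "sentiment": -0.9, "bot_score": 0.10},
    {"text": "Terrible idea. Vote no.",   "sentiment": -0.9, "bot_score": 0.95},
    {"text": "Terrible idea. Vote no.",   "sentiment": -0.9, "bot_score": 0.97},
]

def raw_sentiment(comments):
    """Naive average: treats every account as a genuine resident."""
    return sum(c["sentiment"] for c in comments) / len(comments)

def filtered_sentiment(comments, max_bot_score=0.5):
    """Drop likely-automated accounts before aggregating."""
    humans = [c for c in comments if c["bot_score"] < max_bot_score]
    return sum(c["sentiment"] for c in humans) / len(humans)

print(f"raw:      {raw_sentiment(comments):+.2f}")       # -0.48: looks hostile
print(f"filtered: {filtered_sentiment(comments):+.2f}")  # -0.05: actually split
```

In practice, down-weighting comments by bot probability rather than hard-filtering them avoids discarding borderline accounts, but the principle is the same: unfiltered sentiment can report a hostility that simply is not there among real constituents.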
Strategies for Promoting Authentic Online Engagement
Combating the influence of bots and synthetic personas is not solely a technological challenge. It is a communication issue that demands proactive strategies rooted in transparency, consistency, and community trust. One effective approach is the establishment of verified channels for official information. Municipal authorities, for example, can ensure that all public announcements, responses to crises, and policy updates are disseminated through clearly marked and consistently branded digital accounts. This helps users distinguish between authoritative sources and imitations.
Another important tactic is community moderation. Platforms like Reddit have shown that well-trained, volunteer moderators can significantly reduce the spread of misinformation and bot-driven content within subreddits. Local governments can leverage this model by partnering with community leaders to monitor online forums, flag suspicious activity, and maintain civil discourse. These partnerships must be supported with training on how disinformation campaigns operate and how synthetic engagement can be identified. Engagement also improves when leaders actively participate in conversations, providing real-time responses and clarifying misinformation before it gains traction.
Building Institutional Capacity for Digital Communication
To navigate the challenges of speech and communication in the digital age, institutions must build internal expertise. This includes hiring or training staff in digital literacy, data analytics, and cyberpsychology. Communication teams should be equipped to analyze discourse trends, identify coordinated inauthentic behavior, and adapt messaging strategies accordingly. It is no longer sufficient to issue press releases or post updates. The new environment demands interaction, agility, and verification.
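One detectable signature of coordinated inauthentic behavior that such a team might look for is many distinct accounts posting near-identical text within a short window. The sketch below illustrates the idea; the input format, normalization, and thresholds are assumptions, not a production detector:

```python
from collections import defaultdict

# Flag clusters of near-identical posts from distinct accounts
# landing within a short time window. Input: (account, unix_time, text).
posts = [
    ("acct_1", 1000, "Say NO to the rezoning plan!!"),
    ("acct_2", 1012, "say no to the rezoning plan"),
    ("acct_3", 1020, "SAY NO TO THE REZONING PLAN!"),
    ("acct_9", 5400, "I have concerns about parking on Elm St."),
]

def normalize(text: str) -> str:
    # Collapse case and punctuation so trivial variations still match.
    return "".join(ch for ch in text.lower() if ch.isalnum() or ch == " ").strip()

def find_coordinated_clusters(posts, window_secs=300, min_accounts=3):
    clusters = defaultdict(list)
    for account, ts, text in posts:
        clusters[normalize(text)].append((ts, account))
    flagged = []
    for text, entries in clusters.items():
        entries.sort()
        accounts = {a for _, a in entries}
        span = entries[-1][0] - entries[0][0]
        if len(accounts) >= min_accounts and span <= window_secs:
            flagged.append((text, sorted(accounts)))
    return flagged

for text, accounts in find_coordinated_clusters(posts):
    print(f"{accounts} posted near-identical text: {text!r}")
```

No single signal is conclusive, but pairing simple analyses like this with human review gives communication teams an early warning that apparent grassroots volume may be manufactured.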
Some public organizations have begun to create rapid response units within their communication departments. These teams monitor social media channels, respond to misinformation in real time, and escalate threats to public trust. This model, inspired by crisis communication principles, can help institutions maintain credibility amid evolving digital threats. When residents see that their government is responsive, transparent, and present in the same digital spaces they occupy, trust is more likely to be maintained or restored.
Conclusion: Reclaiming the Digital Public Square
The question posed in the title - are we arguing with bots? - remains central to the future of public communication. While the digital illusion of dialogue can be powerful, it is not unbreakable. With deliberate strategies, technical tools, and a renewed focus on authentic engagement, communication professionals can begin to restore the integrity of online discourse. This is not only a technological challenge but a test of civic resilience and institutional adaptability.
For those working in government communication, especially at the local level, the task is urgent. Building trust, verifying engagement, and fostering meaningful dialogue are not optional in a democratic society. They are foundational. As digital spaces evolve, so too must the strategies used to engage with them. The voices of residents - not bots - must continue to shape the policies and decisions that affect their lives.
Bibliography
[1] Varol, Onur, Emilio Ferrara, Clayton A. Davis, Filippo Menczer, and Alessandro Flammini. "Online Human-Bot Interactions: Detection, Estimation, and Characterization." In Proceedings of the International AAAI Conference on Web and Social Media 11, no. 1 (2017): 280-289.
[2] Meta Platforms, Inc. "Community Standards Enforcement Report." Facebook Transparency Report, 2021. https://transparency.fb.com/data/community-standards-enforcement/.
[3] Vosoughi, Soroush, Deb Roy, and Sinan Aral. "The Spread of True and False News Online." Science 359, no. 6380 (2018): 1146-1151.
[4] Marwick, Alice, and Rebecca Lewis. "Media Manipulation and Disinformation Online." Data & Society Research Institute, 2017. https://datasociety.net/pubs/oh/DataAndSociety_MediaManipulationAndDisinformationOnline.pdf.
[5] Cresci, Stefano, Roberto Di Pietro, Marinella Petrocchi, Angelo Spognardi, and Maurizio Tesconi. "The Paradigm-Shift of Social Spambots: Evidence, Theories, and Tools for the Arms Race." In Proceedings of the 26th International Conference on World Wide Web Companion, 963-972. 2017.