AI's Authoritarian Temptation: Safeguarding Democracy

Every generation encounters transformative technologies that, while promising efficiency and progress, inherently possess the capacity to enhance tools of repression. For our current era, artificial intelligence stands as this pivotal technology. Its capabilities are vast: it can predict patterns, persuade individuals at scale, and facilitate mass surveillance and policing. Yet, AI also holds immense potential to broaden participation in democratic processes, expose corruption, and empower citizens with new oversight capabilities, thereby strengthening accountability. The trajectory of AI—whether it ultimately steers the world toward greater freedom or intensifies fear—depends less on the inherent nature of "the algorithms" themselves and more on the institutional frameworks in place, the incentives that drive its development and deployment, and the political choices made by leaders and societies.

This lab note delves into the intricate, bidirectional relationship between AI and authoritarianism, examining how AI can both bolster autocratic regimes and potentially be leveraged to challenge them. It will explore the near-term risks that AI poses to open societies, including the erosion of privacy, the spread of disinformation, and the potential for algorithmic bias to exacerbate existing inequalities. Finally, it will propose concrete guardrails and policy recommendations designed to ensure that democratic systems remain resilient in the face of AI's transformative power, emphasizing the importance of ethical guidelines, transparent development, and robust regulatory frameworks.

What changes when authoritarians get AI?

Throughout history, authoritarian governance has rested on three pillars: (1) control of information, spanning censorship, propaganda, and strategic agenda-setting; (2) pervasive surveillance to detect emergent dissent; and (3) coercion, used both to deter defiance and to punish it. Artificial intelligence profoundly amplifies the reach and effectiveness of all three, posing new challenges to democratic principles and individual liberties.

Machine-learning systems have transformed information control, dramatically raising the speed, scale, and precision with which content can be filtered and agendas shaped. Traditional censors typically reacted to problematic content after the fact; with large language models (LLMs) and advanced pattern-matching classifiers, authoritarian regimes can now proactively identify, flag, and demote emergent narratives, particularly those that signal the potential for collective action or dissent. The practice itself predates the generative-AI boom, but LLM-era tooling amplifies it to an unprecedented degree. These systems analyze vast quantities of data in real time, detecting subtle linguistic cues and network dynamics that indicate a nascent movement or a destabilizing narrative, then intervene swiftly: removing the content, demoting it in search results, or flooding the information space with counter-narratives. Threats to the regime's control can thus be neutralized before they gain traction. And because suppression can be precisely targeted, it produces fewer visible signs of censorship, making the control subtler and harder for the average user to detect.
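To make the mechanism concrete, here is a deliberately minimal sketch of pattern-based content triage. Every pattern, threshold, and name below is invented for illustration; real systems use trained classifiers over far richer features, but the scaling logic, score every post and auto-demote anything above a threshold, is the same:

```python
import re

# Hypothetical "mobilization cue" patterns -- invented for illustration,
# not drawn from any real censorship system.
MOBILIZATION_PATTERNS = [
    r"\bmeet (at|in)\b",
    r"\b(tonight|tomorrow) at \d",
    r"\bbring (signs|banners)\b",
    r"\beveryone (come|gather)\b",
]

def mobilization_score(post: str) -> float:
    """Fraction of cue patterns matched -- a crude stand-in for a trained classifier."""
    text = post.lower()
    hits = sum(1 for p in MOBILIZATION_PATTERNS if re.search(p, text))
    return hits / len(MOBILIZATION_PATTERNS)

def triage(posts, threshold=0.25):
    """Automatically flag anything above the threshold for demotion, with no human review."""
    return [(p, mobilization_score(p)) for p in posts if mobilization_score(p) >= threshold]

posts = [
    "Lovely weather today",
    "Everyone come to the square, meet at the fountain tomorrow at 6, bring signs",
]
flagged = triage(posts)
print(flagged)  # only the second post is flagged
```

The point of the sketch is the cost structure: once written, the same function runs over millions of posts per hour, which is exactly what replaces the labor-intensive human censor.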

The convergence of readily available, inexpensive sensor technology, vast cloud storage capabilities, sophisticated computer vision algorithms, and increasingly accurate biometric matching techniques has paved the way for an unprecedented era of ambient surveillance. This new reality is characterized by monitoring that is continuous rather than episodic, fundamentally shifting the nature of oversight from intermittent checks to constant observation. This continuous monitoring manifests in various forms, from ubiquitous CCTV networks enhanced with AI-driven analytics to integrated smart city infrastructures that track movement and activity across urban landscapes. The data collected – be it facial recognition scans, gait analysis, voiceprints, or even behavioral patterns – is then processed and stored on an enormous scale, allowing for real-time identification and retrospective analysis of individuals and groups.


While the technological advances are undeniable, serious challenges and ethical concerns persist. As NIST (the National Institute of Standards and Technology) has shown in its landmark face-recognition evaluations, demographic differentials and false-positive rates remain significant. Accuracy can vary considerably with factors such as race, gender, and lighting conditions, producing disproportionate impacts on certain communities and a real risk of misidentification. A false positive that seems minor in isolation can have severe consequences for an individual, from unwarranted scrutiny to wrongful accusation.
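The arithmetic behind this concern is worth making explicit. A simple Bayes' rule calculation, using illustrative rather than empirical numbers, shows how even a small false-match rate, applied at population scale against a tiny watchlist, makes most alerts false:

```python
def match_precision(fpr: float, prevalence: float, tpr: float = 0.99) -> float:
    """P(person is actually on the watchlist | system reports a match), via Bayes' rule."""
    true_pos = tpr * prevalence          # watchlisted people correctly matched
    false_pos = fpr * (1 - prevalence)   # everyone else, wrongly matched at rate fpr
    return true_pos / (true_pos + false_pos)

# Illustrative numbers: a 0.1% false-match rate and 100 watchlisted people
# among 1,000,000 scanned faces.
prevalence = 100 / 1_000_000
precision = match_precision(fpr=1e-3, prevalence=prevalence)
print(f"{precision:.1%} of reported matches are real")
```

With these assumed numbers roughly nine out of ten alerts point at innocent people, which is why deployment scale, not headline accuracy, is the quantity that matters.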


However, the sheer scale at which these systems are deployed means that even modestly accurate systems can chill association and speech at a population level. The chilling effect refers to the suppression of legitimate rights and activities due to fear of observation or repercussions. When individuals are aware that their every movement, interaction, and even expression might be recorded and analyzed, they may self-censor their speech, avoid certain associations, or refrain from participating in peaceful protests. This erosion of privacy and the psychological pressure of constant surveillance can undermine fundamental freedoms and democratic principles, leading to a society where conformity is incentivized over dissent. The implications for civil liberties, social activism, and individual autonomy are profound, requiring careful consideration and robust regulatory frameworks to mitigate the risks associated with this pervasive technological shift.

The emergence of sophisticated AI and data analytics has paved the way for "predictive governance," a system where authorities leverage vast datasets to anticipate and preemptively manage societal behaviors. This approach, while often framed as a tool for enhanced public safety and efficiency, raises profound ethical and human rights concerns, particularly when applied to the control of populations. At its core, predictive governance relies on extensive data fusion, drawing information from disparate sources such as mobility patterns, financial transactions, communication logs, and pervasive camera networks. By integrating these seemingly disparate data streams, authorities can construct comprehensive profiles of individuals and groups, allowing for the identification of patterns and anomalies that might indicate "risky" behavior. A critical aspect of this system is its ability to anticipate potential events, ranging from large-scale protests to individual acts deemed undesirable by the state. For instance, by analyzing mobility data, authorities can predict the logistical movements of protest organizers, identifying potential gathering points, supply lines, or communication hubs. Similarly, transaction data might reveal unusual financial activity, while communication metadata could highlight suspicious network ties.


However, the definition of "risky" people or "suspicious" behavior often relies on proxy features that are deeply intertwined with socioeconomics, ethnicity, or existing network connections. This algorithmic reliance on proxies can lead to the systemic targeting and discrimination of marginalized communities. When algorithms are trained on biased historical data, they tend to perpetuate and amplify existing social inequalities, flagging individuals not for genuine wrongdoing, but for characteristics associated with their background or identity.
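A toy simulation makes this feedback loop visible. Assume two neighborhoods with identical true offense rates but unequal historical patrol intensity; any system trained on the resulting records will "learn" that the more-patrolled neighborhood is riskier. All rates below are invented:

```python
import random

random.seed(0)

# Synthetic population: two neighborhoods with IDENTICAL true offense rates,
# but neighborhood "A" was historically patrolled far more heavily, so
# offenses there were recorded far more often.
TRUE_OFFENSE_RATE = 0.05
PATROL_INTENSITY = {"A": 0.9, "B": 0.3}  # probability an offense gets recorded

def simulate_records(n_per_hood=10_000):
    records = []
    for hood in ("A", "B"):
        for _ in range(n_per_hood):
            offended = random.random() < TRUE_OFFENSE_RATE
            recorded = offended and random.random() < PATROL_INTENSITY[hood]
            records.append((hood, recorded))
    return records

records = simulate_records()
for hood in ("A", "B"):
    rate = sum(r for h, r in records if h == hood) / 10_000
    print(hood, f"recorded rate: {rate:.3f}")
# Any model trained on recorded rates scores neighborhood A as roughly
# three times riskier, despite identical underlying behavior.
```

The recorded data is not wrong in any single entry; the bias lives entirely in which events were observed, which is precisely why it survives into "objective" downstream models.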


Certain regions of the world starkly illustrate the extreme applications of predictive governance. There, advanced data platforms are explicitly used to flag "suspicious" behavior against pre-defined rules, often resulting in arbitrary detention and systematic human rights abuses. These platforms integrate data from surveillance cameras, ubiquitous checkpoints, and even mandatory tracking apps installed on mobile phones. Algorithms then comb this massive influx of information for behaviors deemed "suspicious", such as frequently contacting relatives abroad, using certain communication apps, or displaying outward signs of religious observance. The rules that govern these systems are often opaque and discriminatory, leading to profiling and internment based on cultural and religious identity rather than any actual criminal activity.


The implications of predictive governance extend far beyond direct political control. It presents a fundamental challenge to privacy, due process, and the very concept of individual autonomy. As governments increasingly adopt these technologies, there is a growing risk of creating societies where citizens are under constant algorithmic scrutiny, with their lives and freedoms subject to the arbitrary interpretations of data-driven systems. The potential for these systems to be misused for suppression, surveillance, and social engineering necessitates a robust public discourse, ethical guidelines, and legal frameworks to safeguard fundamental human rights in the age of AI.


The advent of generative AI has ushered in a new era of "weaponized persuasion," fundamentally altering the landscape of information and public discourse. This technology makes the creation and dissemination of synthetic media remarkably cheap, localized, and fast, posing significant challenges to truth and trust. Audio deepfakes and translated clone voices can convincingly mimic individuals, including politicians, journalists, or public figures, making them appear to say things they never did. This can be used to spread misinformation, create false narratives, or discredit opponents.


"Conversational canvassing" bots can engage with individuals on a large scale, tailored to specific demographics or even individual psychological profiles. These bots can be programmed to push particular agendas, subtly influence opinions, or directly disseminate propaganda. Their ability to simulate authentic human interaction makes them particularly insidious in eroding trust.


The primary objective of weaponized persuasion is to flood attention markets with misleading or fabricated content, making it difficult for individuals to discern truth from falsehood. This tactic is especially potent in low-trust information environments, where skepticism towards traditional media and institutions is already high. Furthermore, these tools can be deployed to suppress turnout in elections. By circulating false rumors about polling station closures, voter registration irregularities, or even the credibility of candidates, malicious actors can sow confusion and discouragement, thereby discouraging citizens from exercising their right to vote.


Recent elections around the world have provided a mixed bag of outcomes regarding the impact of these technologies. While there have been alarming incidents where deepfakes and AI-generated content have been used to attempt to sway public opinion, there have also been instances where their effects were surprisingly muted. This suggests that while the potential for harm is undeniable, the efficacy of these tactics can vary depending on factors such as public awareness, media literacy, and the resilience of democratic institutions.


Regardless of whether deepfakes always decide election outcomes, their proliferation reliably raises the cost of truth. In an environment saturated with synthetic media, individuals must exert greater effort and possess higher levels of critical thinking to verify information. This increased cognitive load and the constant bombardment of potentially false narratives can lead to:

  • Information fatigue: People become overwhelmed and disengage from news and civic discourse.

  • Erosion of trust: Skepticism towards all information sources, including legitimate ones, deepens.

  • Polarization: Individuals retreat into echo chambers where they are only exposed to information that confirms their existing biases.

In essence, while generative AI offers immense potential for creativity and progress, its misuse for weaponized persuasion represents a profound threat to democratic processes and the very fabric of an informed society. The challenge lies in developing robust defenses against these sophisticated forms of manipulation while simultaneously fostering greater media literacy and critical thinking among the populace. Crucially, these capabilities don’t just entrench closed regimes; they can also erode liberal ones from within when public and private actors introduce the same toolkits for “efficiency,” “security,” or “engagement.”


The authoritarian tech stack: from street-level sensors to sovereign platforms

The authoritarian temptation of AI manifests as a sophisticated stack of technologies, each layer leveraging computation to replace human labor and judgment, thereby enabling a more pervasive and efficient system of repression.

At the foundational layer is sensing and identification. This involves the widespread deployment of technologies such as wide-area closed-circuit television (CCTV) systems, IMSI (International Mobile Subscriber Identity) capture of mobile devices, sophisticated geolocation tracking, and various biometric identification systems. These systems continuously collect vast amounts of data, which then feed the training pipelines of AI models. A critical concern at this layer is that false-match rates vary across demographics. In democratic contexts, this disparity raises serious equity and due-process problems, since such biases can lead to discriminatory enforcement. In authoritarian regimes, however, the same demographic differentials can be deliberately exploited to enable selective enforcement, targeting specific groups or individuals with greater precision and impunity. Research by NIST (the National Institute of Standards and Technology), particularly its Face Recognition Vendor Test (FRVT) findings, consistently documents these demographic differentials, underscoring the biases inherent in many AI-powered identification systems.


The next layer, data fusion and rules engines, integrates disparate data sources to create comprehensive profiles and identify “anomalies.” Authorities combine telecom data, providing insights into communication patterns and networks, with travel records, which can track movements and associations, and financial traces, revealing economic activities and potential vulnerabilities. These diverse datasets are fed into sophisticated platforms that utilize rules engines and machine learning algorithms to flag behaviors or connections deemed suspicious or non-compliant with state directives. This comprehensive integration provides a complete picture of individuals and groups, allowing for the identification of subtle, otherwise undetectable patterns, which then serves as a foundation for intervention or surveillance.
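A minimal sketch of such a rules engine, with every field name, threshold, and rule invented for illustration, shows how disparate data streams reduce to a list of flags:

```python
from dataclasses import dataclass

# All field names, thresholds, and rules below are invented for illustration;
# real systems fuse far more streams under opaque, state-defined criteria.
@dataclass
class Profile:
    foreign_calls_per_month: int
    cash_withdrawals_per_month: int
    checkpoint_crossings_per_day: float

RULES = [
    ("frequent foreign contact", lambda p: p.foreign_calls_per_month > 10),
    ("unusual cash activity",    lambda p: p.cash_withdrawals_per_month > 20),
    ("atypical movement",        lambda p: p.checkpoint_crossings_per_day > 5),
]

def evaluate(profile: Profile):
    """Return the human-readable labels of every rule the profile trips."""
    return [label for label, rule in RULES if rule(profile)]

flags = evaluate(Profile(foreign_calls_per_month=14,
                         cash_withdrawals_per_month=3,
                         checkpoint_crossings_per_day=0.2))
print(flags)  # ['frequent foreign contact']
```

Note what the abstraction hides: each rule encodes a political judgment about what counts as suspicious, yet by the time the output reaches an operator it reads as a neutral machine finding.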


Moving up the stack, filtering and throttling refer to the mechanisms used to control the flow of information and communication. This layer encompasses network-level controls such as DNS tampering, which redirects or blocks access to specific websites; deep-packet inspection (DPI), which allows for the examination and filtering of data packets based on their content; and IP blocking, which restricts access to entire internet protocol addresses. Furthermore, platform takedowns, where online content or accounts are removed, now interact with advanced machine learning classifiers. These classifiers are designed to detect and preempt mobilization cues, such as calls for protests or dissent, by analyzing vast amounts of online communication. This allows authoritarian regimes to proactively disrupt organizing efforts and control narratives before they gain momentum, as evidenced by reports from organizations like the Open Observatory of Network Interference (OONI.org) and Access Now (accessnow.org) on internet shutdowns and censorship.
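The measurement side of this layer can also be sketched. OONI-style tests resolve the same hostname through the local resolver and a trusted control resolver and compare the answers; the simplified comparison logic below, with fabricated addresses, captures the core heuristic, though real methodologies add further checks such as AS-ownership matching:

```python
def classify_resolution(local_answers: set[str], control_answers: set[str]) -> str:
    """Compare local DNS answers against a trusted control resolver's answers."""
    if not local_answers:
        return "blocked (NXDOMAIN or timeout)"
    if local_answers & control_answers:
        return "consistent"
    return "anomalous (possible DNS tampering)"

# Local resolver returns an address the control resolver has never seen:
print(classify_resolution({"10.10.34.36"}, {"93.184.216.34"}))
# Both resolvers agree:
print(classify_resolution({"93.184.216.34"}, {"93.184.216.34"}))
```

Keeping the comparison a pure function of the two answer sets is what lets volunteers run the measurement anywhere and aggregate the results centrally.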


Narrative operations represent a more subtle yet powerful form of AI-enabled repression, focusing on shaping public discourse and opinion. Generative AI models are utilized to localize disinformation at a highly granular level, adapting its tone, dialect, and even micro-community nuances. This allows for the creation of highly targeted and persuasive propaganda that resonates with specific segments of the population. The use of deepfake audio episodes and anti-turnout robocalls employing cloned voices demonstrates the alarming flexibility of this tactic. These operations are often timed to exploit verification latency, meaning they are deployed when it is difficult or impossible for individuals or organizations to quickly verify the authenticity of the information, maximizing their impact before debunking efforts can take hold. Human Rights Watch and the Carnegie Endowment have documented numerous instances of such AI-powered narrative manipulation, highlighting its potential to undermine democratic processes and human rights.


A relatively new but increasingly significant layer is guardrailed LLMs as policy. Domestic AI assistants, particularly large language models (LLMs), are being designed and deployed to enforce state-defined red lines in everyday queries. This means that these AI assistants can set "epistemic boundaries" around sensitive topics such as history or politics, effectively controlling the information and perspectives accessible to citizens. Researchers have successfully reverse-engineered the guardrails embedded within these LLMs, revealing systematic silences, biases, and narrative steering baked directly into the chat products. This ensures that the AI itself acts as a censor, subtly guiding users towards state-approved narratives and away from dissenting views, thereby shaping collective understanding and limiting intellectual freedom.

Finally, the top layer, procurement and export, highlights the global proliferation of these AI surveillance tools. The market for AI surveillance technologies has experienced an unprecedented boom, with both established democracies and authoritarian regimes actively adopting a range of tools, from sophisticated facial recognition systems to smart city platforms that integrate various surveillance capabilities. This widespread diffusion of technology has several critical implications. Firstly, it facilitates authoritarian learning, allowing repressive regimes to share best practices and refine their surveillance techniques. Secondly, it significantly complicates normative containment efforts, as the global availability of these tools makes it harder to establish and enforce international standards or restrictions on their use, blurring ethical lines and empowering those who seek to suppress dissent.

Why democracies are not immune

AI's authoritarian potential does not require the rise of an autocrat; it can emerge from a profound lack of accountability in how AI is deployed and operated. This deficiency manifests in several critical areas, enabling forms of control that are often subtle, pervasive, and difficult to challenge.

Predictive Policing & Face Recognition: The Erosion of Rights Through Algorithmic Bias


The widespread deployment of predictive policing and facial recognition technologies, even in ostensibly rule-of-law contexts, poses a significant threat to civil liberties. Without rigorous and independent auditing, these systems frequently reproduce and amplify historical biases embedded in training data. This leads to rights violations that often go unnoticed, such as wrongful stops based on flawed predictions or misidentification caused by demographic performance disparities in face-recognition algorithms. Civil society organizations, including Amnesty International, have meticulously documented the alarming scale and opacity of these rollouts, leading to a growing number of local and national moratoria on their use. NIST's Face Recognition Vendor Test (FRVT) has empirically confirmed that algorithmic performance varies significantly across demographics, underscoring the critical need for oversight that is not only technical but also robustly legal. This dual approach is essential to ensure fairness, accuracy, and adherence to fundamental human rights, preventing these powerful tools from becoming instruments of systemic discrimination and overreach.


Platformic Nudges: Shaping Political Discourse and Attention


Recommender systems, the algorithms that curate content on social media and other digital platforms, wield immense power in shaping the distribution of political attention long before any overt censorship takes place. By subtly influencing what information users see and how they interact with it, these systems can inadvertently (or intentionally) skew public discourse, amplify certain narratives, and marginalize others. Recognizing this profound impact, the European Union's Digital Services Act (DSA) has brought these systems under a comprehensive risk-management regime. The DSA mandates increased transparency regarding how these algorithms operate, and provides users with greater control over their feeds, including options for non-profiling feeds. This legislative intervention aims to mitigate the potential for these powerful "nudges" to undermine democratic processes and informed public debate by fostering a more transparent and accountable digital environment.
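The difference between a profiling feed and the non-profiling alternative the DSA obliges very large platforms to offer can be shown in a few lines. The posts, topics, and engagement scores below are invented:

```python
# Toy feed ranker contrasting engagement-weighted ordering with a
# chronological, non-profiling ordering. All posts and scores are invented.
posts = [
    {"id": 1, "ts": 100, "predicted_engagement": 0.10, "topic": "local council budget"},
    {"id": 2, "ts": 200, "predicted_engagement": 0.95, "topic": "outrage bait"},
    {"id": 3, "ts": 300, "predicted_engagement": 0.40, "topic": "election logistics"},
]

# Profiling feed: whatever the model predicts will hold attention comes first.
engagement_feed = sorted(posts, key=lambda p: p["predicted_engagement"], reverse=True)

# Non-profiling feed: newest first, no per-user prediction involved.
chronological_feed = sorted(posts, key=lambda p: p["ts"], reverse=True)

print([p["topic"] for p in engagement_feed])     # outrage bait surfaces first
print([p["topic"] for p in chronological_feed])  # newest post surfaces first
```

The sort key is the entire politics of the feed: changing one lambda reallocates attention across the whole population, which is why the DSA treats ranking parameters as a systemic-risk issue rather than a product detail.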


Infrastructure Shutdowns: The Digital Iron Curtain


In recent years, both democracies and hybrid regimes have increasingly resorted to internet disruptions—often euphemistically framed as measures for "public safety"—around critical junctures such as elections or periods of civil unrest. The year 2024 saw a record number of such shutdowns globally, inflicting severe civic and economic costs. This tactic is alarmingly prevalent because it is cheap, fast, and, crucially, plausibly deniable. Governments can restrict access to information and communication, suppress dissent, and control narratives with minimal immediate accountability. The ease of implementation and the difficulty in definitively attributing responsibility contribute to the rapid proliferation of this authoritarian tactic, creating digital "iron curtains" that isolate populations and stifle fundamental rights.


Administrative Overreach: The Creep of Scoring Systems

The development and deployment of scoring systems, sometimes controversially labeled "social credit" systems, represent another significant avenue for AI's authoritarian potential through administrative overreach. These systems, which utilize data analytics to assign scores to individuals, can subtly but profoundly impact access to essential services and opportunities. Their application can infiltrate critical areas such as welfare eligibility, immigration controls, and benefits fraud detection. While presented as tools for efficiency and fairness, these systems can lead to opaque decision-making, perpetuate existing societal inequalities, and create a chilling effect on individual freedoms if not subject to stringent oversight and robust appeals processes. The potential for these systems to become tools for social engineering and control underscores the urgent need for ethical guidelines and legal frameworks to prevent their misuse.

How AI reshapes power relations

Several dynamics help explain why AI is structurally attractive to illiberal actors and risky for liberal ones. Once a surveillance infrastructure exists, additional scrutiny comes at near-zero marginal cost, which encourages bureaucrats to monitor "just in case" even as trust erodes. Machine-learning systems also transform political questions like safety, cohesion, and "stability maintenance" into optimization problems. That act of formalization, defining a social question as a loss function, itself restricts the range of acceptable solutions and obscures political considerations.

The pervasive integration of AI into societal structures is exacerbating existing power imbalances. Powerful entities, such as national security services and dominant technology platforms, hold inherent advantages in their access to vast amounts of data and advanced computing capabilities. Ordinary citizens and crucial oversight bodies lack comparable resources, creating a significant power skew that operates in the absence of adequate balancing institutions.


This phenomenon aligns closely with the observations of James C. Scott in his seminal work, "Seeing Like a State" (1998). Scott argued that states historically endeavor to make society "legible" – understandable and quantifiable – to effectively govern, control populations, and extract resources. AI, in a contemporary context, digitally completes this project. It transforms what might otherwise be considered weak or disparate signals into administrative triggers with profound real-world consequences. Examples of such triggers include visa denials, the rejection of loan applications, and even the preemption of protests. Crucially, these AI-driven decisions often occur with no clear point of contestation, meaning individuals affected by them have little to no recourse or understanding of the underlying logic.


Furthermore, the rise of generative media presents a distinct and dangerous challenge. Its impact extends beyond mere deception; it actively erodes the foundations of shared truth. By making it increasingly difficult to distinguish between authentic and fabricated content, generative media fosters a pervasive belief that "nothing is real." This pervasive skepticism, in turn, can be strategically leveraged by those in power to exploit public confusion. The ultimate consequence is procedural paralysis – a widespread reluctance to act or engage due to an inability to verify information, thus hindering collective response and potentially undermining democratic processes. The blurring lines between reality and simulation, fueled by sophisticated AI, pose a fundamental threat to informed decision-making and the very fabric of trust necessary for a functional society.


What exactly are the risks over the next few years?


The increasing affordability and ubiquity of sensor technology are paving the way for the widespread adoption of advanced surveillance methods. Initially introduced as "pilot" programs, often in seemingly benign contexts, technologies like face and gait recognition are prone to silently becoming permanent fixtures in our society. Without clear and precise legal boundaries to delineate their appropriate use, a phenomenon known as "scope creep" is highly probable. This means that systems initially deployed in high-risk environments, such as airports for security screening, will gradually expand their reach into everyday spaces, including public transit systems and even public housing developments.


This expansion raises significant concerns, particularly in light of research from institutions such as NIST (National Institute of Standards and Technology). Their evidence consistently demonstrates demographic differentials in the accuracy and effectiveness of these technologies. This translates to a disproportionate and detrimental impact on minority communities, who are more likely to be misidentified or subject to increased scrutiny. The chilling effect of such pervasive surveillance extends beyond direct harm; it subtly but effectively suppresses free expression and community engagement as individuals become more self-conscious of constant monitoring. This erosion of privacy and the potential for discriminatory application pose a significant threat to civil liberties and the very fabric of democratic societies.

Governments are increasingly employing a sophisticated and difficult-to-detect form of censorship, creating what can be termed "shadow censorship." This method integrates several powerful tools: direct pressure on digital platforms, the threat of legal action at a local level, and the use of automated classification systems.

First, platform pressure involves governments subtly or overtly influencing tech companies to remove or downrank content deemed undesirable, particularly protest-related material. This can range from informal requests to more direct mandates, often leveraging the economic interests or legal vulnerabilities of these companies.

Second, local legal threats add another layer of intimidation. By threatening lawsuits, fines, or even criminal charges against individuals or organizations associated with protest content, governments can effectively silence dissent at its source. This creates a chilling effect, deterring people from making or sharing such information.

Finally, automated classifiers are algorithms designed to identify and filter out specific types of content. These AI-powered systems can automatically detect keywords, images, or even patterns of behavior associated with protest activities, leading to the algorithmic suppression of that content. This automation makes the censorship scalable, efficient, and less reliant on human intervention, making it incredibly difficult to pinpoint who or what is responsible for the removal.

The combination of these tactics allows governments to achieve a highly effective yet deniable form of censorship. Because content is often downranked or made less visible rather than outright removed, it becomes challenging for the public, and even international observers, to audit or prove that censorship is occurring. This lack of transparency undermines freedom of expression and the ability of citizens to organize and voice their concerns. Freedom House's extensive longitudinal data on global internet freedom clearly demonstrates this worrying trajectory, showing a consistent pattern of governments adopting these methods to suppress dissent and control information flows.

The landscape of digital information is increasingly susceptible to manipulation, primarily through the proliferation of synthetic media. We should anticipate not only episodic deepfake shocks, sudden and often unexpected releases of highly convincing fabricated audio-visual content ("late drops"), but also a persistent deluge of cruder yet equally damaging "cheapfake" content and spam campaigns. Two factors exacerbate the resulting erosion of trust: robust provenance tools that could verify the origin and authenticity of digital content have yet to see widespread adoption, and platform enforcement is applied inconsistently across languages and regions. Consequently, even when corrections are issued and fabrications exposed, we should still expect significant and lasting trust erosion, as the very foundations of verifiable information are undermined. This continuous assault on truth makes it ever harder for individuals to discern the real from the fabricated, breeding pervasive skepticism and uncertainty.
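At its simplest, provenance rests on the idea that a publisher commits to a digest of the original file, which anyone can recompute. The sketch below uses a bare SHA-256 hash over invented content; real standards such as C2PA additionally bind cryptographic signatures and edit history, not just a hash:

```python
import hashlib

def digest(data: bytes) -> str:
    """SHA-256 digest of a file's bytes, hex-encoded."""
    return hashlib.sha256(data).hexdigest()

# The publisher computes and publishes the digest of the authentic file.
original = b"official statement, 2024-06-01"
published_digest = digest(original)

# Anyone holding a circulating copy can check it is byte-identical.
tampered = b"official statement, 2024-06-01 (edited)"
print(digest(original) == published_digest)   # True
print(digest(tampered) == published_digest)   # False
```

The hard part is not the hash but the distribution problem: the digest itself has to reach audiences through a channel they already trust, which is exactly what is missing in low-trust information environments.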

Record-level disruptions throughout 2024 mark a critical shift: what were once isolated exceptions have become a systematic, deliberate technique. Outright internet shutdowns do not inherently require artificial intelligence, but their synergy with AI-moderated platforms creates a particularly potent tool for control. When AI is used to manage and filter online content, shutdowns become even more effective at erasing coordination at pivotal moments: suppressing the organization of protests, hindering the dissemination of crucial information during crises, or stifling dissent by preventing individuals from connecting and mobilizing. The combination of physical disconnection and algorithmic censorship presents a formidable challenge to free expression and democratic processes, and a sophisticated approach to information control in the digital age.

The pervasive use of risk scores across sectors, including fraud detection, eligibility assessment, and immigration processing, increasingly establishes these scores as defaults. This normalization occurs without the accompanying safeguards of appeal paths, discoverability of how the scores are calculated, or transparent disclosure of their implications. Such unchecked reliance on algorithmic assessment fundamentally undermines the democratic principle that significant deprivations of rights, opportunities, or status demand clear explanation and the opportunity to contest.

Without the ability to understand how a risk score was derived, a transparent pathway for challenging an unfavorable outcome, or full disclosure of the data points and methodologies used, individuals are left without recourse. This lack of due process transforms what should be a tool for efficiency into a potentially arbitrary and irreversible judgment, eroding trust in the systems that govern crucial aspects of civic life. The opaque nature of these default risk scores creates a new form of digital disenfranchisement, in which individuals are subject to decisions made by algorithms operating outside the traditional legal and ethical frameworks designed to ensure fairness and accountability.
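One way to see what is missing is to sketch what a contestable decision record could contain. The `RiskDecision` structure below is a hypothetical illustration, not any agency's actual format: it pairs the score with the threshold applied, the factors that drove it, and an explicit appeal window, so an affected person has something concrete to challenge.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class RiskDecision:
    """Hypothetical record pairing a risk score with the material
    needed for explanation and appeal (all field names illustrative)."""
    subject_id: str
    score: float
    threshold: float
    factor_weights: dict          # which inputs drove the score, and by how much
    appeal_deadline_days: int = 30
    issued_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat())

    @property
    def adverse(self) -> bool:
        return self.score >= self.threshold

    def explanation(self) -> str:
        # Surface the single largest contributing factor in plain language.
        top = max(self.factor_weights, key=self.factor_weights.get)
        return (f"Score {self.score:.2f} vs threshold {self.threshold:.2f}; "
                f"largest contributing factor: {top}.")

decision = RiskDecision(
    subject_id="case-001",
    score=0.82,
    threshold=0.70,
    factor_weights={"address_changes": 0.5, "income_volatility": 0.32},
)
```

The point of the sketch is institutional, not technical: a score that cannot be serialized into something like this record cannot be explained, appealed, or discovered.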

A prevention and mitigation agenda for democracies

The following comprehensive, multi-layered approach—encompassing legal, institutional, and technical solutions—aims to mitigate authoritarian tendencies while fostering innovation and public benefit.

Constitutional and legal guardrails

Establishing a robust legal framework is crucial for harnessing the benefits of AI while mitigating its inherent risks, particularly those concerning fundamental human rights and democratic processes. This framework should be built upon clear constitutional principles, ensuring accountability, transparency, and due process in the deployment of AI systems.

1. Establish Clear Prohibitions Consistent with Fundamental Rights:

  • Ban or Sharply Limit Real-Time Remote Biometric Identification in Public Spaces: The pervasive nature of real-time remote biometric identification, such as facial recognition, in public spaces poses significant threats to privacy and freedom of assembly. A comprehensive prohibition, with only narrow, judge-supervised exceptions (e.g., in cases of imminent threat to life or severe criminal investigations), is essential to prevent mass surveillance and chilling effects on civil liberties. Any deployment must adhere to strict necessity and proportionality tests.

  • Prohibit Public-Authority Social Scoring: The concept of public-authority social scoring, where AI systems assign scores to individuals based on their behavior, affiliations, or other data, is fundamentally incompatible with democratic values and human dignity. Such systems can lead to systemic discrimination, reinforce existing biases, and create a chilling effect on individual expression and dissent. An absolute prohibition on such practices is necessary to safeguard individual freedoms and ensure equitable treatment under the law.

  • Require an Explicit Statutory Basis for Any Deployment of Biometric or Predictive Systems that Materially Affects Rights: The deployment of powerful biometric or predictive AI systems by public authorities, especially those that can materially affect an individual's rights (e.g., in areas of employment, housing, credit, or criminal justice), must be grounded in explicit statutory authority. This requirement ensures democratic oversight, prevents arbitrary or unauthorized use of these technologies, and allows for public debate and accountability regarding their scope and application. This aligns with principles articulated in robust digital strategies, such as the European Union's regulatory framework for AI, which emphasizes human-centric and trustworthy AI.

2. Guarantee Due Process for Algorithmic Decisions:

  • Explainable, Appealable, and Discoverable AI-Mediated Government Decisions: Any government decision with significant effects on individuals, mediated by AI systems, must uphold the core tenets of due process. This necessitates that such decisions be:

    • Explainable: Individuals must be able to understand the reasoning and logic behind an AI-driven decision, including the data inputs and algorithmic processes that led to the outcome. This goes beyond mere technical documentation and requires clear, intelligible explanations for non-experts.

    • Appealable: Individuals must have a meaningful avenue to challenge adverse AI-mediated decisions. This includes the right to a human review of the decision, with the power to overturn or modify the algorithmic output.

    • Discoverable: The inner workings of AI systems used in governmental decision-making must be subject to appropriate levels of discovery. This includes the provision of audit logs, record retention, and access to relevant data and models, enabling independent scrutiny and ensuring accountability. This is not merely a "nice-to-have" but an administrative expression of the constitutional guarantee of due process, ensuring fairness and preventing arbitrary action.
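As a sketch of the discoverability requirement, the following hypothetical append-only audit log hash-chains each decision record to its predecessor, so that later tampering with any entry is detectable on review. A real deployment would add cryptographic signing, retention policies, and access controls; this is only the core idea.

```python
import hashlib
import json

class DecisionAuditLog:
    """Hypothetical append-only log: each entry carries the hash of its
    predecessor, so altering any record breaks the chain."""

    def __init__(self):
        self.entries = []

    def record(self, payload: dict) -> None:
        prev = self.entries[-1]["hash"] if self.entries else "genesis"
        body = json.dumps(payload, sort_keys=True)
        digest = hashlib.sha256((prev + body).encode()).hexdigest()
        self.entries.append({"payload": payload, "prev": prev, "hash": digest})

    def verify(self) -> bool:
        """Recompute the chain; False means some entry was altered."""
        prev = "genesis"
        for e in self.entries:
            body = json.dumps(e["payload"], sort_keys=True)
            expected = hashlib.sha256((prev + body).encode()).hexdigest()
            if e["prev"] != prev or e["hash"] != expected:
                return False
            prev = e["hash"]
        return True

log = DecisionAuditLog()
log.record({"case": "A-17", "model": "eligibility-v2", "outcome": "denied"})
log.record({"case": "A-18", "model": "eligibility-v2", "outcome": "approved"})
```

A court or auditor handed such a log can verify that the record of AI-mediated decisions was not quietly rewritten after the fact.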

3. Codify Platform Transparency & Data Access:

  • Require Very-Large Platforms to Disclose Recommender Parameters: Large online platforms, particularly those with significant societal influence, must be mandated to disclose the key parameters and criteria that govern their recommender systems. This transparency empowers users to understand how content is prioritized and presented to them, fostering digital literacy and enabling more informed choices about their online experience.

  • Offer a Non-Profiling Feed Option: To counter the potential for filter bubbles and echo chambers, platforms should be required to offer users a genuine non-profiling feed option. This means a default or easily accessible setting that presents content without personalization based on individual user profiles, allowing for a more diverse and unfiltered information environment.

  • Provide Secure Researcher Data Access to Study Systemic Risks at Scale: Independent researchers must be granted secure and ethically sound access to relevant platform data to study the systemic risks associated with large-scale digital platforms, including the spread of misinformation, hate speech, and the impact of algorithmic amplification. This mirrors the obligations outlined in groundbreaking legislation such as the Digital Services Act (DSA) in the European Union, which recognizes the critical need for independent oversight and research to address the societal impact of powerful digital intermediaries.

4. Update Communications Law for AI Persuasion:

  • Extend Existing Election and Consumer-Protection Rules to Synthetic Media: The rise of synthetic media, including deepfakes and AI-generated content, necessitates a proactive update to existing communications laws to safeguard democratic discourse and protect consumers.

    • Consent and Disclosure for AI-Voice Calls (FCC Approach): Following the approach of regulatory bodies like the Federal Communications Commission (FCC), AI-generated voice calls, particularly in political campaigning or commercial contexts, should require explicit consent from the recipient and clear disclosure that the voice is synthetic. This prevents deceptive practices and ensures transparency.

    • Disclaimers for Political Deepfakes: Political deepfakes, which can manipulate public perception and spread false narratives, must be accompanied by prominent and clear disclaimers indicating their synthetic nature. This helps mitigate their potential to mislead voters and undermine electoral integrity.

    • Expedited Takedown Pathways for Materially Deceptive Content About Voting Logistics: To prevent voter suppression and confusion, there must be expedited legal pathways for the takedown of materially deceptive content about voting logistics (e.g., incorrect polling place information, false election dates) that is generated or amplified by AI. This requires collaboration between platforms, government agencies, and civil society organizations to ensure swift and effective removal of harmful content.

Establishing Robust AI Governance: A Multi-pronged Approach

To effectively manage the complexities and risks associated with artificial intelligence, a comprehensive governance framework is essential. This framework should integrate various mechanisms designed to ensure accountability, transparency, and the protection of human rights.

  • Independent Algorithmic Regulators & Ombudspersons: Centralized oversight can be inefficient and unresponsive to the diverse applications of AI. Instead, a decentralized approach is recommended, fostering sectoral oversight bodies (e.g., dedicated regulators for health, welfare, or policing AI). These bodies must possess genuine authority ("teeth") and be staffed with technically proficient experts capable of understanding the intricacies of AI systems. Furthermore, empowering existing oversight mechanisms such as inspectors general and data-protection authorities to conduct surprise audits of high-risk AI applications will provide a critical layer of real-time scrutiny, deterring misuse and ensuring compliance.

  • Public Registers of High-Risk Systems: Transparency is paramount in building public trust and enabling effective oversight. A live, publicly accessible registry of AI systems employed by public bodies is crucial. This registry should meticulously document key information for each system, including its stated purpose, the data sources it utilizes, the methodologies for its evaluations, and details about the vendor. Critically, it should also include summaries of red-teaming exercises—simulated adversarial attacks designed to uncover vulnerabilities and biases—along with links to comprehensive impact assessments that detail potential societal effects.

  • Procurement as Leverage: The procurement process offers a powerful, yet often underutilized, mechanism for shaping market norms and promoting responsible AI development. Rather than relying solely on new legislative mandates, government and public bodies can leverage their purchasing power to demand specific safeguards from AI vendors. This includes requiring the provision of detailed model cards (documentation outlining a model's characteristics, intended uses, and limitations), thorough risk assessments, evidence of rigorous bias and robustness testing, and clear update commitments for ongoing maintenance and improvement. By embedding these requirements into procurement rules, a de facto standard for responsible AI can be established across the industry.

  • Compute Governance with Rights Safeguards: As the control over advanced computing resources becomes increasingly critical for national security, measures like export restrictions or reporting requirements for high-performance computing capabilities may be pursued. However, it is imperative that such "compute controls" are always paired with robust human-rights due diligence. This proactive assessment ensures that efforts to safeguard national security do not inadvertently contribute to or entrench surveillance suppliers or technologies abroad that could be used to violate human rights. Balancing security imperatives with a commitment to fundamental rights is vital in the evolving landscape of AI governance.
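A public register of high-risk systems, as proposed above, is ultimately a data schema plus a publication commitment. The sketch below shows one hypothetical shape such an entry might take; the field names and the example entry are invented for illustration and do not correspond to any existing register's format.

```python
# Hypothetical schema for one entry in a public register of
# high-risk AI systems; field names are illustrative, not standard.
REQUIRED_FIELDS = {
    "system_name", "deploying_body", "stated_purpose",
    "data_sources", "vendor", "evaluation_method",
    "red_team_summary", "impact_assessment_url",
}

def validate_entry(entry: dict) -> list:
    """Return the sorted list of missing required fields (empty means valid)."""
    return sorted(REQUIRED_FIELDS - entry.keys())

entry = {
    "system_name": "Benefit Fraud Triage",
    "deploying_body": "National Welfare Agency",
    "stated_purpose": "Prioritise case reviews",
    "data_sources": ["payment history", "address registry"],
    "vendor": "ExampleVendor Ltd",
    "evaluation_method": "annual bias audit",
    "red_team_summary": "2 high-severity findings, remediated",
    "impact_assessment_url": "https://example.org/dpia/fraud-triage",
}
missing = validate_entry(entry)
```

The same required-fields check could be embedded in procurement rules: an agency simply cannot publish, and therefore cannot deploy, a system whose entry fails validation.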

Technical and ecosystem measures

  • Provenance and Content Credentials: The widespread adoption of Content Credentials, powered by the C2PA (Coalition for Content Provenance and Authenticity) standard and the Content Authenticity Initiative, is crucial for enhancing trust in public-sector communications, major media outlets, and electoral processes. These digital credentials, though not foolproof against stripping, significantly raise the cost of deception and accelerate the identification and correction of misinformation. To make this effective, it is vital to create easily accessible and trustworthy verification portals that citizens can use to confirm the authenticity and origin of digital content.

  • Resilience Against Shutdowns: To counter the growing threat of internet and communication shutdowns, there is an urgent need to fund and develop robust civic infrastructure. This includes piloting mesh networking solutions, implementing reliable SMS broadcast systems, and establishing satellite backup communication capabilities. The goal is to ensure communities can maintain minimal but essential communication channels during disruptions, whether caused by natural disasters, technical failures, or deliberate action. Supporting civil-society coalitions such as #KeepItOn, which track and challenge internet shutdowns globally, is also paramount to advocating for open and accessible communication.

  • Red-Teaming and Public Sandboxes: Before the deployment of high-risk AI systems, rigorous external red-team exercises are indispensable. These exercises, involving independent security researchers and experts, should identify potential vulnerabilities, biases, and unintended consequences. A comprehensive summary of these findings and the subsequent fixes implemented must be publicly disclosed to foster transparency and accountability. Furthermore, wherever technically feasible, hosting public evaluation sandboxes with synthetic data allows for crowdsourced scrutiny. This enables a wider community of experts and the public to test and assess the system's performance, robustness, and ethical implications in a safe environment, contributing to its overall improvement and trustworthiness.

  • Privacy-Preserving Analytics by Default: In the realm of public interest AI applications (e.g., in healthcare, transportation, and urban planning), it is critical to prioritize privacy by design. Implementing technologies like federated learning, secure enclaves, and differential privacy by default can significantly reduce the "surveillance surface area" – the amount of identifiable data collected and processed – without compromising the utility or effectiveness of the AI system. Federated learning allows models to be trained on decentralized datasets without the raw data ever leaving its source, while secure enclaves provide a protected environment for data processing. Differential privacy adds mathematical noise to data to prevent individual identification, thereby safeguarding sensitive personal information while still allowing valuable insights to be drawn from aggregated data.
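The differential-privacy idea mentioned above can be illustrated with the classic Laplace mechanism: a count query is released with noise scaled to sensitivity divided by epsilon, so that the presence or absence of any single person barely changes the output distribution. This is a minimal sketch of the mechanism, not a production DP library; the example numbers are invented.

```python
import random

def laplace_count(true_count: int, epsilon: float,
                  sensitivity: float = 1.0) -> float:
    """Release a count under the Laplace mechanism: noise is scaled to
    sensitivity / epsilon, so a smaller epsilon means more noise and a
    stronger privacy guarantee."""
    scale = sensitivity / epsilon
    # The difference of two i.i.d. exponential draws is Laplace-distributed.
    noise = random.expovariate(1 / scale) - random.expovariate(1 / scale)
    return true_count + noise

random.seed(0)  # seeded only to make the illustration reproducible
noisy = laplace_count(1280, epsilon=0.5)  # e.g. pedestrians past a sensor
```

Aggregate insight survives (the noisy count stays close to the truth), while the ability to single out one individual does not, which is exactly the reduction in "surveillance surface area" described above.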
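The provenance approach described earlier can likewise be reduced to its core idea: bind a cryptographic hash of the content to issuer metadata and sign the result, so any edit to the content breaks verification. The sketch below uses an HMAC with a shared secret purely for illustration; real C2PA Content Credentials use X.509 certificate signatures and a far richer manifest format.

```python
import hashlib
import hmac

# Illustrative issuer key; real Content Credentials are signed with
# certificates, not a shared secret.
SECRET = b"issuer-signing-key"

def issue_credential(content: bytes, issuer: str) -> dict:
    """Bind a hash of the content to the issuer and sign the pair."""
    digest = hashlib.sha256(content).hexdigest()
    claim = f"{issuer}|{digest}".encode()
    return {"issuer": issuer, "content_hash": digest,
            "signature": hmac.new(SECRET, claim, hashlib.sha256).hexdigest()}

def verify_credential(content: bytes, cred: dict) -> bool:
    """Recompute the hash and signature; any edit to the content fails."""
    digest = hashlib.sha256(content).hexdigest()
    claim = f"{cred['issuer']}|{digest}".encode()
    expected = hmac.new(SECRET, claim, hashlib.sha256).hexdigest()
    return (digest == cred["content_hash"]
            and hmac.compare_digest(expected, cred["signature"]))

photo = b"...original image bytes..."
cred = issue_credential(photo, "Example Newsroom")
```

This also shows the limitation discussed later in the objections section: a stripped credential yields no verdict at all, which is why adoption across the whole toolchain matters.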

Civic counter-power and literacy

The pervasive spread of misinformation, amplified and scaled by advanced AI, poses a significant threat to democratic societies and public discourse. To combat this, a multifaceted approach is essential, focusing on empowering trusted intermediaries, fostering digital literacy, and safeguarding those who expose abuses.

1. Empowering Local Journalism and Watchdog Organizations:

The antidote to AI-driven misinformation is not solely technological detection but the cultivation of a robust ecosystem of trusted local intermediaries. These include independent news organizations, fact-checking initiatives, and community-based watchdog groups. Their critical role lies in their ability to:

  • Validate Information: Through rigorous journalistic practices and on-the-ground reporting, they can verify claims and distinguish factual reporting from fabricated narratives.

  • Contextualize Corrections: Simply debunking misinformation is often insufficient; effective countermeasures require explaining why information is false, providing accurate context, and addressing the underlying motivations or biases that may have led to its creation or spread.

  • Rapid Response: The speed at which misinformation propagates necessitates swift and accurate corrections. Local journalists and watchdogs, embedded within their communities, are uniquely positioned to identify emerging false narratives and issue timely clarifications.

  • Build Trust: Over time, consistent, transparent, and accurate reporting fosters public trust, making these entities reliable sources of information amidst a deluge of unverified content. Funding and supporting these organizations are crucial investments in informational integrity.

2. Cultivating "Provenance Literacy" as a Core Digital Skill:

In an era where synthetic media can be indistinguishable from reality, "provenance literacy" must become a fundamental digital skill. This involves equipping individuals with the knowledge and tools to discern the origin and authenticity of digital content. Key aspects of provenance literacy include:

  • Understanding Content Credentials (C2PA): Initiatives like the Coalition for Content Provenance and Authenticity (C2PA) are developing technical standards to embed verifiable metadata within digital assets, indicating their origin, history of edits, and creation methods. Teaching users how to recognize and interpret these credentials will enable them to assess the trustworthiness of media.

  • Recognizing Synthetic Media Signatures: As AI-generated content becomes more sophisticated, so too do the methods for identifying it. While AI-detection tools are evolving, educating the public about common patterns, anomalies, or subtle tells (even those not immediately apparent to the untrained eye) can empower them to question the authenticity of suspicious content.

  • Critical Thinking and Source Evaluation: Beyond technical skills, provenance literacy emphasizes foundational critical thinking. This includes questioning sensational headlines, cross-referencing information with multiple reputable sources, and understanding the potential biases of content creators.

  • Integration into Education and Public Awareness: These skills should be integrated into educational curricula from an early age, forming part of digital citizenship programs. Public-service announcements, newsroom training, and community workshops can further disseminate this knowledge, making content verification a routine digital habit for everyone.

3. Protecting Researchers and Whistleblowers:

Many of the most egregious abuses of technology, including the spread of misinformation and disinformation, are brought to light by dedicated researchers, digital-rights organizations, and investigative journalists. These individuals and groups often operate at significant personal and professional risk. Protecting them is paramount for several reasons:

  • Exposure of Network Abuses: Organizations like OONI (Open Observatory of Network Interference) and Access Now are instrumental in monitoring internet censorship, surveillance, and other digital rights violations that contribute to or enable the spread of misinformation.

  • Investigative Journalism: Investigative journalists are crucial in uncovering disinformation campaigns, identifying their orchestrators, and exposing their motives. Their work often involves deep dives into complex networks and sensitive information.

  • Early Warning Systems: These groups often serve as early warning systems, identifying emerging threats and malicious actors before they can cause widespread harm.

  • Legal and Material Support: Protecting these individuals and organizations means providing robust legal frameworks that shield them from frivolous lawsuits, harassment, and retaliation. Equally important is ensuring they have the material resources – funding, secure tools, and training – necessary to conduct their vital work effectively and safely. Without their vigilance and courage, many abuses would remain hidden, allowing misinformation to flourish unchecked.

Common objections and responses

In the discourse surrounding the societal implications of artificial intelligence, several critical counterarguments often arise, each warranting a thorough and nuanced response. These objections, though seemingly valid at first glance, can be effectively addressed by considering the broader context and potential for thoughtful policy and technological solutions.

One common assertion is, "If we hamstring surveillance, we can't ensure safety." While public safety is undeniably a legitimate democratic good, the fundamental question is not a binary choice between ubiquitous surveillance and a lack of safety. Instead, it revolves around how to target risk proportionately and with robust accountability mechanisms. The implementation of blanket biometric scanning, for instance, fails both necessity and proportionality tests. Such widespread, untargeted surveillance is not only ethically questionable but also technically brittle, exhibiting demographic differentials that can lead to biased outcomes. Furthermore, it inherently invites selective enforcement, potentially creating a two-tiered system of justice. A more effective and equitable approach involves the use of better tools, such as narrowly tailored warrants based on specific, demonstrable threats, post-hoc audits to ensure compliance and prevent abuse, and adversarial oversight mechanisms that provide independent scrutiny of surveillance activities. These measures prioritize individual liberties while still enabling effective law enforcement and public safety initiatives.

Another objection frequently raised is, "Provenance is brittle; why bother?" Metadata can indeed be stripped, and not everyone actively checks for content credentials. However, dismissing provenance systems on these grounds overlooks their crucial benefits. Provenance systems fundamentally shift incentives by making the manipulation or misattribution of digital content more difficult and detectable. Crucially, they enable fast, authoritative corrections during times of crisis, where the rapid dissemination of accurate information can be paramount. As adoption of provenance standards and technologies spreads across the entire digital toolchain—from cameras and content creation software to content delivery networks (CDNs)—the cost and effort required to launder synthetic or manipulated content will significantly rise. This increased friction discourages malicious actors and promotes a more trustworthy information ecosystem.

Finally, the argument "Deepfakes are overhyped; effects are small" suggests that the impact of synthetic media is not significant enough to warrant extensive policy responses. While it's true that some election cycles may have shown muted aggregate effects of deepfakes, this does not negate their potential for harm. Episodic, well-timed deepfakes can still swing micro-margins in close elections or suppress voter participation by sowing confusion or distrust. Even when debunked, the very existence and circulation of deepfakes can corrode trust in institutions, media, and public discourse, a long-term consequence that is far more insidious than immediate electoral outcomes. The potential for such damage is more than enough to justify proportionate policy responses. These responses should include the widespread adoption of digital provenance systems, clear disclosure requirements for synthetic content, and rapid remedies for the dissemination of harmful deepfakes. The goal of such policies is to preserve speech while deterring deception, striking a balance that protects freedom of expression while mitigating the risks posed by malicious synthetic media.

For a democratic theory of AI

Safeguarding Public Reason, Rights, and Reversible Decisions


Democracy, fundamentally, is not defined by its outcomes alone, but by its core methodology: a peaceful and structured process for the contestation of power. This method is inextricably linked to the principles of public reason, fundamental rights, and the capacity for reversible decisions. Artificial intelligence, with its burgeoning capabilities, stands at a critical juncture: it can either act as a powerful enabler of this democratic methodology—by facilitating citizen participation, enhancing transparency to expose corruption, and fostering fairer access to essential services—or it can dangerously undermine it by rendering individuals increasingly predictable and transforming politics into a programmable, top-down system.


The trajectory of AI's impact on democracy hinges significantly on our collective capacity for governance and institutional imagination. This demands a nuanced and strategic approach to AI regulation and development:

  • Governing Uses, Not Abstract "AI": Rather than attempting to regulate the abstract concept of "AI," our focus must shift to the specific contexts in which AI systems exert power and influence. This necessitates concentrated attention on critical domains such as policing, where AI can have profound implications for civil liberties; welfare systems, where it can determine access to vital support; employment, impacting job opportunities and hiring practices; housing, influencing allocation and discrimination; elections, with the potential for manipulation and disinformation; and digital platforms, which shape public discourse and information flow. Regulating these specific applications allows for tailored interventions that address real-world risks.

  • Prioritizing Structural Over Performative Remedies: True accountability and democratic control require more than superficial gestures. We must move beyond "performative" solutions, such as non-binding public relations pledges or vague promises of ethical AI, which often lack verifiable mechanisms. Instead, the emphasis should be on implementing robust "structural remedies." These include mandatory independent audits of AI systems to assess bias and impact, establishing clear and accessible appeal rights for individuals adversely affected by AI decisions, enforcing stringent transparency requirements regarding AI algorithms and data usage, and developing comprehensive procurement standards that embed ethical considerations and democratic values into the acquisition of AI technologies by public and private entities.

  • Embracing Friction as a Safeguard: The pursuit of hyper-efficiency or "frictionless control" in AI deployment, while seemingly appealing, can pose significant risks to democratic principles. We must acknowledge and accept that a certain degree of "friction"—encompassing meticulous documentation of AI design and operation, rigorous testing protocols, and thorough independent review processes—is not an impediment but a necessary cost. This friction serves as a crucial safeguard, ensuring that human judgment remains central to decisions that directly affect fundamental rights and that these systems are subject to democratic oversight and challenge. It provides the necessary pauses and checkpoints to prevent unchecked algorithmic power from eroding individual autonomy and societal fairness.

Ultimately, democracies need not, and indeed should not, mimic the "efficiency" often championed by authoritarian regimes. Such efficiency, if achieved through the suppression of dissent or the erosion of rights, comes at the expense of long-term stability and legitimacy. In the enduring struggle between technological prowess and human values, legitimacy will always triumph over frictionless control. This legitimacy is not passively granted; it must be actively earned and continuously reinforced. It demands a citizenry empowered to understand, to critically challenge, and to ultimately reshape the very systems and technologies that govern their lives. Only through such active engagement and oversight can AI truly serve as a tool for democratic flourishing rather than a pathway to authoritarianism.

Final Thoughts: Navigating the AI-Enabled Authoritarian Threat

AI, in itself, is not a direct cause of authoritarianism. It is a powerful tool, a technology with transformative potential. However, its true danger lies in its capacity to lower the transaction costs of repression and raise the returns to control for those who wield it with ill intent. This means that for regimes seeking to stifle dissent, monitor populations, and enforce their will, AI offers unprecedented efficiency and effectiveness. Surveillance becomes easier, information control more absolute, and the identification and suppression of opposition more streamlined.


In the face of this emergent challenge, the democratic response must be nuanced and robust, avoiding the pitfalls of both unfounded fear and complacent acceptance. We must reject technophobia, an irrational aversion to technology that would leave us unable to harness AI's benefits for democratic ends. Equally, we must shun techno-fatalism, the resignation to an inevitable future where AI dictates our political destiny. Instead, a truly democratic approach demands a proactive and multi-faceted strategy built upon three foundational pillars:


Firstly, constitutional courage. This requires a steadfast commitment to the core principles of democratic governance – individual rights, due process, freedom of expression, and accountability – even as new technologies present novel challenges to their application. It means being willing to adapt our legal frameworks and uphold our values in the digital sphere, rather than allowing technological advancement to erode fundamental liberties. This courage must extend to scrutinizing how AI systems are designed, deployed, and regulated, ensuring they align with, rather than undermine, democratic norms.


Secondly, institutional craft. This refers to the painstaking work of building and strengthening the democratic institutions that can effectively govern AI. This includes developing new regulatory bodies, fostering independent oversight mechanisms, and establishing clear lines of accountability for the use of AI, particularly in sensitive areas like law enforcement, intelligence, and public services. It also involves investing in research and development to create "democracy-enhancing" AI tools and fostering collaboration between technologists, policymakers, and civil society to shape the future of AI governance. This craft is about designing resilient systems that can resist authoritarian capture and promote transparency and fairness.


Finally, technical realism. This necessitates a clear-eyed understanding of AI's capabilities and limitations. It means recognizing that not all AI applications are inherently benign or suitable for every context. This realism informs a strategic approach where we choose clarity over convenience. This translates into concrete policy choices:

  • Bans wherever necessary: Certain AI applications, particularly those that pose an unacceptable risk to human rights or democratic processes, may require outright prohibition. This could include autonomous weapons systems lacking meaningful human control, or widespread, indiscriminate surveillance technologies that fundamentally undermine privacy.

  • Transparency wherever possible: For AI systems that are permitted, robust transparency mechanisms are crucial. This means open algorithms, explainable AI, and clear accountability for decisions made by AI systems. Citizens must understand how AI affects their lives and have the ability to scrutinize its operation.

  • Contestability everywhere: Individuals and civil society must have the means to challenge decisions made by or influenced by AI. This requires accessible appeal processes, independent auditing of AI systems, and avenues for redress when AI leads to unfair or discriminatory outcomes. The ability to contest AI's output is paramount to maintaining human agency and preventing algorithmic tyranny.

By choosing clarity over convenience – by embracing bans wherever necessary, transparency wherever possible, and contestability everywhere – we can ensure that AI remains inside the circle of democratic self-government. This proactive and principled approach will prevent AI from becoming a tool that redraws the circle around us, constricting our freedoms and empowering authoritarian forces.

The future of democracy in the age of AI depends on our collective courage, ingenuity, and commitment to these foundational principles.

References

  • Freedom House. Freedom on the Net (2024). Key trends on AI-enabled repression and internet freedom.

  • NIST. Face Recognition Vendor Test (FRVT): Demographic Effects (2019). Technical evidence on differential error rates.

  • Human Rights Watch. China’s Algorithms of Repression: Reverse Engineering IJOP in Xinjiang (2019). Data-driven suspicion scoring and policing.

  • Citizen Lab. Reverse-Engineering Censorship in Chinese LLMs (2023). Guardrails and policy-aligned silences in generative systems.

  • Carnegie Endowment (Steven Feldstein). The Global Expansion of AI Surveillance (2019). Diffusion patterns and procurement across regime types.

  • Access Now (#KeepItOn). Internet Shutdowns Annual Report 2024 (published 2025). Record disruptions and election-linked blackouts.

  • OONI. How Internet Censorship Changed in Russia during the First Year of the War (2023). Measurement-based evidence of blocking.

  • Reuters / BBC / FT coverage of AI in elections. Deepfakes, disinformation, and 2024 election dynamics.

  • EU AI Act. Official timeline and scope (entry into force 1 Aug 2024; phased applicability incl. prohibitions from Feb 2025; GPAI rules 2025).

  • Council of Europe. Framework Convention on AI, Human Rights, Democracy and the Rule of Law (adopted 2024). First binding international AI treaty.

  • EU Digital Services Act. Platform duties on recommender transparency, systemic risk mitigation, and data access (applicable to all platforms since 17 Feb 2024).

  • U.S. FCC (2024). Declaratory ruling treating AI-generated voices in robocalls as “artificial” under the TCPA; consent required.

  • OECD AI Principles (2019) and UNESCO Recommendation (2021). Global norms on human-centered, rights-respecting AI.

  • Bletchley Declaration (2023). Multilateral non-binding statement on frontier AI risks and cooperation.

  • C2PA / Content Authenticity Initiative. Open standard and ecosystem for media provenance; growing adoption across tools and networks.

  • Genia Kostka (2018). China’s Social Credit Systems and Public Opinion: Explaining High Levels of Approval.

  • James C. Scott (1998). Seeing Like a State. Yale University Press.

  • Freedom House (2018–2024). Prior editions on “Digital Authoritarianism” and AI’s role in repression, for context and trends.


