
Digital platforms as quasi-sovereign actors: Influence operations and Global Security Law

ESSAY

March 19, 2026


Although platforms remain subject to national laws, these laws struggle to exert complete control over the platforms’ quasi-sovereignty

In the picture: A Facebook party held in New York [Facebook]

Introduction

“These (influence operations) can employ a combination of methods, such as fake news, disinformation, or networks of fake accounts designed to manipulate public opinion (we refer to these as “false amplifiers”). […] While information operations have a long history, social media can serve as a new tool for gathering and disseminating information for these activities. Through the adept use of social media, information operators may attempt to distort public discourse, recruit supporters and financiers, or influence political or military outcomes.”[1]

These words were not written in an academic journal, a policy report, or any strategic plan. They come from a document published by Facebook in April 2017. Now, in 2025, influence operations (IOs) are playing an increasingly prominent role in the context of hybrid warfare.

Currently, these information operations occur primarily in cyberspace, where human vulnerabilities are exploited to exert control over cognitive domains. As Facebook itself noted, digital platforms serve as a new tool for collecting and disseminating information. However, as various authors have argued, digital platforms, as intermediaries, “acquire a quasi-sovereignty over the cyberspaces under their control.”[2]

This paper posits that, with regard to the transnational information environment, digital platforms act as quasi-sovereign actors, shaping information gaps, amplifying state and non-state narratives, and wielding de facto regulatory power. They establish, enforce, and adjudicate the rules. This implies a struggle to assert legal and political control over their operations, posing significant challenges to traditional concepts of state sovereignty and responsibility in global security law. The EU has demonstrated, in an ambitious example of co-production, its ability to establish a legal framework that leverages the quasi-sovereignty of digital platforms to defend its digital resilience. This model contrasts sharply with American “libertarianism” or Russian authoritarianism.

This raises the following question: how do digital platforms function as quasi-sovereign actors in transnational influence operations, what are the implications for global security law, and how has the EU sought to regulate their role in shaping information environments?

The methodology for preparing this essay has involved the compilation and analysis of legal sources, policy documents, and academic literature. Structurally, the paper consists of three parts. The first conceptualizes digital platforms as quasi-sovereign actors. The second examines the implications of this quasi-sovereign status for information operations and global security law. Finally, it analyzes the EU regulatory model.

1. Digital platforms as quasi-sovereign actors: rule-making, enforcement, and adjudication

1.1. Defining digital platforms

Defining “digital platforms” can be challenging, as there is no universally accepted definition. The lack of a definition does not mean that regulating them at the global level is impossible. Platform law is understood as “comprehensive regulation that combines systems grounded in international law with specific local regulations applicable to specific platforms (ecosystems).”[3]

Fabio Bassan points out that one of the major challenges is the multidisciplinary nature of this field of study, which draws on perspectives from disciplines such as economics, sociology, computer science, and law. This also leads him to emphasize that the concept must be understood from a perspective tailored to the needs of Global Law, and he accordingly defines digital platforms as: “Hardware or software structures that provide technological services and tools, programs, and applications for the distribution, management, and creation of free or paid digital content and services, including through the integration of multiple media (integrated digital platforms).”[4]

However, among the existing legislation, the definition provided in Article 3 of the EU Digital Services Act (DSA) stands out:

“‘online platform’ means a hosting service that, at the request of a recipient of the service, stores and disseminates information to the public, unless that activity is a minor and purely ancillary feature of another service or a minor functionality of the principal service and, for objective and technical reasons, cannot be used without that other service, and the integration of the feature or functionality into the other service is not a means to circumvent the applicability of this Regulation.”[5]

For the purposes of this essay, this definition is more useful because, on the one hand, the EU’s legislative framework tends to influence international standards and, on the other, it emphasizes the importance of information and excludes offline services. This allows us to focus on hosting services.

Likewise, A. Minbaleev notes that there are two concurrent approaches to digital platforms: the first involves understanding them as legal entities, while the second views them as information environments where agents interact.[6] Minbaleev favors the second approach.

1.2. Private governance and the exercise of quasi-sovereign authority

Digital platforms have assumed forms of authority over their users that, increasingly, mimic functions traditionally associated with sovereignty wherever a virtual community exists.[7] This occurs in a cyberspace that, in its early days, promised to be free from the exercise of such authority. Although this is not conventional territorial sovereignty, given their ability to control most transnational information flows, various authors have regarded these platforms as quasi-sovereign actors.[8]

This phenomenon is similar to what Shoshana Zuboff describes in her book *The Age of Surveillance Capitalism*, in which she identifies a new form of sovereignty exercised by private actors:

“What we have seen in Facebook is a living example of the third modernity that instrumentarianism proposes, defined by a new collectivism owned and operated by surveillance capital. The God view drives the computations. The computations enable tuning. Tuning replaces private governance and public politics, without which individuality is merely vestigial. And just as the uncontract bypasses social mistrust rather than healing it, the post-political societal processes that bind the hive rely on social comparison and social pressure for their durability and predictive certainty, eliminating the need for trust. Rights to the future tense, their expression in the will to will, and their sanctification in promises are drawn into the fortress of surveillance capital. On the strength of that expropriation, the tuners tighten their grip, and the system flourishes.”[9]

This quasi-sovereign nature is manifested above all in three interrelated practices: norm-setting, enforcement, and adjudication.[10] Together, these influence how information is produced, circulated, and responded to.

With regard to the first of these practices, digital platforms must be understood as influential norm-setting actors in the digital ecosystem, rather than as neutral legal intermediaries or merely economic entities. Platforms establish standards through a wide range of regulatory instruments, including: terms of service, community guidelines, content moderation rules, data governance policies, and algorithms. Although many of these rules lack official legal status, they operate as de facto binding standards for platform users, influencing what constitutes acceptable behavior in interactions occurring in cyberspace. In many cases, states find their ability to regulate these spaces limited or delayed.[11]

With regard to enforcement, this is carried out through content moderation systems, algorithmic ranking, labeling practices, and account sanctions. These mechanisms can amplify, marginalize, or eliminate certain narratives.[12] Therefore, they directly influence the likelihood of success of an influence operation. The algorithmic governance exercised by these platforms constitutes a form of power distinct from traditional censorship, as it does not prohibit the circulation of information but rather structures visibility by prioritizing exposure to certain content over others.[13] In hybrid warfare contexts, knowing how to manipulate the algorithm can be crucial for expanding a propaganda network and ensuring a specific narrative prevails.
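
To make this mechanism concrete, the following minimal sketch (in Python, with purely hypothetical field names and weights, not any real platform’s algorithm) illustrates how a ranking function can keep content online while drastically reducing the audience it reaches:

```python
from dataclasses import dataclass

@dataclass
class Post:
    text: str
    engagement: float      # popularity signal in [0, 1] (hypothetical)
    flagged_misinfo: bool  # output of a moderation classifier (hypothetical)

def visibility_score(post: Post) -> float:
    """Rank posts without deleting any: demotion rather than removal."""
    score = post.engagement
    if post.flagged_misinfo:
        score *= 0.1  # demoted tenfold: still circulates, rarely surfaces
    return score

feed = [
    Post("verified news report", engagement=0.4, flagged_misinfo=False),
    Post("viral false rumour", engagement=0.9, flagged_misinfo=True),
]

# The rumour is more "engaging", yet the ranking buries it below the report.
for post in sorted(feed, key=visibility_score, reverse=True):
    print(f"{visibility_score(post):.2f}  {post.text}")
```

Nothing in this toy example removes the rumour: the platform’s choice of weights alone determines how far it travels, which is precisely the form of power the literature distinguishes from traditional censorship.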

Finally, with regard to adjudication, digital platforms have established mechanisms for resolving disputes related to content moderation. The Facebook Oversight Board is a clear example of how digital platforms have incorporated resolution processes that bear similarities to judicial proceedings. Specifically, the Oversight Board has jurisdiction over disputes involving extremist language or disinformation. However, unlike ordinary courts, its decision-making is guided by the company’s private interests rather than the public good.[14]

Taken together, these three practices position digital platforms as key actors in de facto digital governance and thus enable them to exercise digital sovereignty. Although they are not Direct Participants in Hostilities, their control over the cognitive domain allows them to shape the environment through which states can carry out their influence operations or even contribute to the escalation or de-escalation of tensions. This underscores their role as bellatores,[15] since through their regulatory power they are able to bring stability to states or, conversely, destabilize them.

2. The Impact of Quasi-Sovereignty on Influence Operations and Global Security Law

2.1. How Influence Operations Work

Influence operations (IOs), typically deployed in the context of hybrid conflicts, are strategic efforts that seek to alter the perceptions, decisions, or actions of a target audience. Generally, they exploit information environments rather than kinetic force.[16] In traditional military doctrine, “information and warfare operations, also known as influence operations, include the gathering of tactical information about an adversary as well as the dissemination of propaganda in pursuit of a competitive advantage over an opponent.”[17]

These operations are crucial for co-production within the framework of Global Security Law. Although they tend to be carried out during periods of “peace,” they have significant implications for political processes, public order, and the stability of the state. This raises questions about how existing legal frameworks, such as the principles of the use of force, non-intervention, and sovereignty, should be interpreted. By leveraging transnational information networks, these operations create gray areas in which traditional legal sources become difficult to apply.[18]

2.2. Influence operations from the perspective of international law

As Dale Stephens points out, there is currently no treaty or source of customary law directly applicable to IOs. However, various branches of public international law are invoked indirectly to assess their legality, including jus ad bellum, international human rights law, and the law of armed conflict (LOAC). In each of these branches, IOs are generally considered lawful.[19] Yet there is an ambiguous area that is greatly influenced by the quasi-sovereignty of digital platforms as normative gatekeepers of information flows.

2.2.1. Jus ad Bellum

In the case of jus ad bellum, it has been interpreted that the prohibition on the use of force, as set forth in Article 2(4) of the UN Charter, applies in most cases only to the use of kinetic weapons. The scope for interpreting an influence operation as unlawful under Article 2(4) is very limited. Based on the conclusions of the Nicaragua case, a “campaign that clearly manipulated individuals to organize into armed groups and to carry out physical attacks against another government” could be interpreted as unlawful. An example would be February 1991, when George H.W. Bush incited the Iraqi army to overthrow the Saddam Hussein regime.[20]

In this context, the principle of non-intervention has been put forward as a more plausible legal basis for assessing the legality of influence operations. According to Brian Egan, “a cyber operation by a State that interferes with another country’s ability to hold an election or that manipulates another country’s election results would be a clear violation of the rule of non-intervention.” However, despite increasing academic speculation about cases in which an influence operation violates the principle of non-intervention, no state has formally brought an international legal claim for unlawful intervention based solely on influence operations before an international court.[21]

The quasi-sovereignty of digital platforms further complicates the application of the principles of non-use of force and non-intervention. Influence operations are rarely driven by direct state action; rather, they are mediated by private actors operating through social platforms. According to Liivoja & Väljataga, although non-state actors are the ones carrying out the operations, if they are linked to a state, their actions through digital platforms must be directly attributable to that state.[22]

In the case of Russia, for example, the legal framework governing public administration has been adapted to facilitate cooperation between public authorities, the private sector, and civil society, thereby creating the structure required for this type of strategic approach. This is known as network governance. In Russian media, non-state actors are described as possessing the resources the state needs to achieve its foreign policy objectives, while the state provides the funding and decision-making authority. Although this sometimes leads to the suggestion that it amounts to subcontracting the provision of services to non-state actors, which could raise doubts about Russia’s own capabilities, far from it: the relationship with non-state actors is understood as one of “mutual dependence,” since the structures organized by the state are the only ones capable of sustaining the political network, which includes the propaganda network.[23]

Moreover, as clarified by the Tallinn Manual 2.0: “Coercion must be distinguished from persuasion, criticism, public diplomacy, propaganda, retribution, mere malice, and the like in the sense that, unlike coercion, such activities merely involve influencing the voluntary actions of the target State.”[24] Therefore, an IO would be unlawful only if it affected an exclusively sovereign matter such as domestic or foreign policy. But here lies a problem: by the time a foreign actor manages to control a country’s foreign policy, the target state is no longer in a position to act.

When platforms exercise quasi-sovereign control over the informational environment, the distinction between voluntary persuasion and structural coercion becomes increasingly blurred. If the decisions made by platform governance consistently affect or limit a state’s ability to implement foreign or domestic policy, this standard legal definition of coercion may not be sufficient to describe how influence operates in digital spaces.[25]

2.2.2. Human Rights Law

In the digital sphere, international human rights law (IHRL) provides the most important framework for assessing influence operations, notably through Article 17 of the ICCPR (right to privacy) and Article 19 of the ICCPR (freedom of expression).[26]

In the case of Article 17, this protects the right to privacy against arbitrary or unlawful interference and applies equally in both online and offline contexts. In the digital sphere, human rights bodies have recognized that the collection, storage, and analysis of online information by the State may also infringe upon privacy. This applies even when the information is public, and seeks to protect individuals from common online information practices such as profiling, monitoring, or the covert use of information. An example of this would be the Cambridge Analytica scandal, in which the data of millions of Facebook users was exploited to create psychographic profiles for political persuasion campaigns.[27]

However, it is important to emphasize that there are three exceptions under which this right may be legitimately restricted: when the interference is justified due to non-compliance with national laws, when it involves matters of national security, or when it is necessary and proportionate.[28] Nevertheless, the legal boundaries remain in place.

With regard to Article 19, freedom of expression also encompasses the right to seek, receive, and disseminate information through digital platforms. In democratic societies, the free flow of information and the “marketplace of ideas” take on particular significance. This right gives rise to a complex duality. On the one hand, it is precisely because of this right that disinformation, the key component of malicious information operations, can spread. On the other hand, it is also what makes it possible to combat disinformation through fact-checking and the dissemination of truthful information. That is why the right is not absolute and may be restricted to what is necessary for national security or public order. Furthermore, Article 20 prohibits propaganda and advocacy of hatred and violence (especially in cases of ethnic cleansing).[29] Likewise, Articles 19 and 20 sometimes come into tension.

In election contexts, digital platforms have had both successes and limitations in preventing interference through influence operations. For example, during the U.S. elections, TikTok identified and removed several fake accounts even though it was not required to do so under national law.[30] In the case of the 2024 European Parliament elections, digital platforms encountered many limitations in their self-regulatory approaches.[31]

The quasi-sovereignty of digital platforms creates a regulatory gap: platforms exert significant influence over speech but are not formally bound by international human rights obligations or subject to international judicial oversight. Yet their policies shape the informational landscape in which rights are exercised or restricted.

2.2.3. Law of Armed Conflict (LOAC)

Information operations have proven to be key factors in the course of armed conflicts. In the case of international armed conflicts, Russian Defence Minister Sergei Shoigu himself noted that Russia’s victory in the occupation of Abkhazia and South Ossetia (2008) was due to information operations. On the other hand, in the case of non-international armed conflicts, it was thanks to information operations that 1,500 ISIS fighters were able to capture Mosul.

In this regard, the applicable provision would be Article 51(2) of Additional Protocol I (AP I), which prohibits “acts or threats of violence whose primary purpose is to spread terror among the civilian population.”

3. The EU’s Regulatory Responses to Digital Platforms’ Quasi-Sovereignty

Faced with the heightened tensions arising from the current context of hybrid warfare, sovereign actors have adopted various regulatory models to manage the quasi-sovereign power of digital platforms, particularly, though not exclusively, in relation to IOs. These responses range from comprehensive regulatory frameworks (EU) to more libertarian and informal governance (US) and even state control and subordination of platforms (Russia). This section analyzes these three types of approaches through which states seek to assert their own sovereignty over that of privately governed information environments.

The EU has developed the most ambitious legislative framework, in the sense that it represents the largest example of co-production in this area. This framework is based on two main pillars: the Digital Markets Act (DMA) and the Digital Services Act (DSA).[32]

The first of these, the DMA, establishes clear rules for online platforms. Its objective is to ensure that no platform designated as a “gatekeeper” in digital environments abuses that position, and it imposes certain obligations and prohibitions on such platforms to ensure compliance.[33]

Regarding the DSA, as noted in a background paper from the Parliamentary Assembly of the Mediterranean, the EU’s efforts to combat disinformation and illegal content related to influence operations have been based on this regulation. In a sense, the events that underscored the need for this regulation were the war in Ukraine and the Israel-Hamas conflict. The DSA is the cornerstone of the EU’s digital strategy, aiming to ensure that quasi-sovereign digital platforms are held accountable for disinformation, illegal content such as hate speech, and various societal risks. It incorporates overarching principles and robust safeguards to protect freedom of expression and the rights of users.[34]

Likewise, another major innovation has been the Strengthened Code of Practice on Disinformation. Previously, the Code of Practice on Disinformation served as a voluntary self-regulatory tool for digital platforms. However, digital platforms are now required to implement the measures established in this Code of Practice, including: enabling users to detect “harmful false and/or misleading information” and take follow-up actions; providing tools to verify “authenticity or accuracy”; ensuring “factual accuracy of sources through fact-checks by fact-checking organizations that have flagged potential disinformation”; and curbing the spread of disinformation.[35] As a result of the implementation of this code over the past six months, particularly in the context of the 2024 European elections, the major digital platforms removed a large amount of content: TikTok (250,000 videos), YouTube (19,000 videos), and LinkedIn (20,000 posts). Even so, according to Mündges & Park, “overall, platforms are only partially compliant with the Code.”

There is a diplomatic aspect that makes these regulations particularly interesting: the Brussels Effect. As noted in an article in the Chicago Journal of International Law: “EU regulators exert significant influence over how social media platforms moderate content on a global scale. This is because the DSA’s regulatory framework will incentivize platforms to align their global content moderation policies more closely with the EU’s approach to balancing the harms and benefits of speech, rather than with the U.S. approach.” The clearest example of the recent impact of the Brussels Effect can be found in the 2025 Moldovan elections. Since Moldova is in the process of acceding to the EU, it has adopted part of European legislation.[36] This has been one of the key factors in preventing election interference by Russia.

Conclusion

This paper has argued that digital platforms act as quasi-sovereign actors in cyberspace and the cognitive domain. Their quasi-sovereignty stems primarily from their ability to establish, enforce, and adjudicate rules within information environments. Although platforms remain subject to national legislation, this legislation struggles to exert complete control over their quasi-sovereignty, with states often having to relinquish control in these matters. This makes platforms highly relevant actors in the context of hybrid warfare, as they serve as instruments of intervention which, by exploiting human vulnerabilities, seek to interfere in the internal affairs of states.

The prominent role of digital platforms in the new geopolitical paradigm of the 21st century poses a challenge for key sources of global security law, which must adapt their interpretation to these new circumstances. In the case of jus ad bellum, the principles of the use of force and non-intervention should be interpreted beyond kinetic weaponry. This is primarily because digital platforms complicate attribution and present new transnational ways of exercising force and intervening illegitimately, whether by state-affiliated actors or not. On the other hand, International Human Rights Law (IHRL) has proven to be the source with the greatest impact on digital platforms, especially Article 19. However, its interpretive potential is still far from being realized, as platforms like Facebook end up operating in favor of power structures, for or against freedom of expression and the right to privacy. Regarding LOAC, cases like that of Georgia demonstrate that influence operations are relevant for understanding when a conflict begins.

At the sovereign level, the EU, through the DMA and the DSA, offers a valid response to the question of how the quasi-sovereignty of digital platforms should be regulated in compliance with Articles 17 and 19 of the ICCPR under international human rights law (IHRL). It also serves as the best defense mechanism against malicious influence operations (IOs) carried out by strategic rivals. Similarly, through the Brussels Effect, the EU is, in part, setting norms. This European model is being implemented in states that suffer from IOs in asymmetric power relations, as is the case in Moldova.

Although this research has focused on the EU’s legislative framework, it opens the door to a comparative study of the three different models for managing the quasi-sovereignty of digital platforms: the EU’s regulatory framework, American “libertarianism,” and Russian authoritarianism. Similarly, the Digital Markets Act (DMA) and the Digital Services Act (DSA) have only recently been implemented, and it would be important to assess their ability to adapt to new challenges in the digital landscape.

-----------

Disclaimer Regarding the Use of AI

During the preparation of this work, generative artificial intelligence was used for two purposes. The first was to assist in compiling sources as a supplement to Google Scholar searches. The second, due to time constraints, was to correct spelling, grammar, and fluency in certain sections. As a result of the latter, false positives may occur when using detection tools. All arguments presented in this work are the author’s own.


[1] Jen Weedon et al., “Information Operations and Facebook,” Facebook, 2017.

[2] Luca Belli and Jamila Venturini, “Private Ordering and the Rise of Terms of Service as Cyber-Regulation,” Internet Policy Review 5, no. 4 (2016), https://doi.org/10.14763/2016.4.441.

[3] Ludmila Konstantinovna Tereschenko et al., “Digital Platforms in the Focus of National Law,” Legal Issues in the Digital Age, 2024, https://cyberleninka.ru/article/n/digital-platforms-in-the-focus-of-national-law/viewer.

[4] Fabio Bassan, *Digital Platforms and Global Law* (Edward Elgar Publishing, 2021).

[5] “Article 3, the Digital Services Act (DSA),” accessed December 19, 2025, https://www.eu-digital-services-act.com/Digital_Services_Act_Article_3.html.

[6] Konstantinovna Tereschenko et al., “Digital Platforms in the Focus of National Law.”

[7] Virginia Haufler and Catherine Waddams, “A Public Role for the Private Sector: Industry Self-Regulation in a Global Economy,” Political Studies 50, no. 3 (2002): 652–53.

[8] Michael Bollerman, “Digital Sovereigns: Big Tech and Nation-State Influence,” arXiv:2507.21066, preprint, arXiv, June 1, 2025, https://doi.org/10.48550/arXiv.2507.21066.

[9] Shoshana Zuboff, *The Age of Surveillance Capitalism: The Fight for a Human Future at the New Frontier of Power* (Hachette UK, 2019).

[10] Hannah Bloch-Wehba, “Global Platform Governance: Private Power in the Shadow of the State,” Dedman School of Law, 2019.

[11] Josep Ibáñez Muñoz, “The Normative Dimension of Platform Governance: Big Tech and Digital Platforms as Normative Actors,” Spanish Yearbook of International Law 25 (December 2021): 128–37.

[12] Robert Gorwa et al., “Algorithmic Content Moderation: Technical and Political Challenges in the Automation of Platform Governance,” Big Data & Society 7, no. 1 (2020): 205395171989794, https://doi.org/10.1177/2053951719897945.

[13] Emiliano De Cristofaro et al., “Revealing The Secret Power: How Algorithms Can Influence Content Visibility on Social Media,” accessed December 19, 2025, https://www.researchgate.net/publication/385177342_Revealing_The_Secret_Power_How_Algorithms_Can_Influence_Content_Visibility_on_Social_Media.

[14] Kate Klonick, “The Facebook Oversight Board: Creating an Independent Institution to Adjudicate Online Free Expression,” n.d.

[15] Among some authors, the term “technofeudalism” has become popular to describe the capabilities of digital platforms in cyberspace. In this sense, some have even gone so far as to define the state as a primus inter pares in the information environment. Although the state holds the authority, the quasi-sovereignty of the platforms grants them power. In this state-vassal relationship, the state cedes control over a cyberspace it can scarcely govern to these lords, turning digital platforms into bellatores. Similarly, users would no longer be the laboratores.

[16] “Influence Operations in Cyberspace and the Applicability of International Law,” accessed December 19, 2025, https://www.e-elgar.com/shop/gbp/influence-operations-in-cyberspace-and-the-applicability-of-international-law-9781035307289.html.

[17] Rand Waltzman, *The Weaponization of Information: The Need for Cognitive Security* (RAND Corporation, 2017), https://doi.org/10.7249/CT473.

[18] “Influence Operations in Cyberspace and the Applicability of International Law.”

[19] Dale Stephens, “Influence Operations & International Law,” Journal of Information Warfare 19, no. 4 (2020): 1–16.

[20] Stephens, “Influence Operations & International Law.”

[21] “Remarks on International Law and Stability in Cyberspace,” U.S. Department of State, accessed December 19, 2025, https://2009-2017.state.gov/s/l/releases/remarks/264303.htm.

[22] Samuli Haatja, “Autonomous Cyber Capabilities under International Law,” NATO CCDCOE Publications, 2021, https://ccdcoe.org/uploads/2021/05/Autonomous_Cyber_Capabilities_210525.pdf.

[23] Marthe Handå Myhre and Mikkel Berg-Nordlie, “‘The state cannot help them all’. Russian media on the inclusion of non-state actors in governance,” East European Politics 32, no. 2 (2016): 192–214, https://doi.org/10.1080/21599165.2016.1168299.

[24] Michael N. Schmitt, ed., Tallinn Manual 2.0 on the International Law Applicable to Cyber Operations, 2nd ed. (Cambridge University Press, 2017), https://doi.org/10.1017/9781316822524.

[25] A specific example of this is Belarus. Russia has managed to establish a puppet government through the Union State. As various studies have shown, Russia has used Telegram and Meta to expand its influence over the country. Currently, there is a digital resistance movement, exemplified by initiatives such as Digital Belarus, which operates from within the European diaspora, and the two narratives confront each other in Belarus. This raises the question: how could this situation be legally addressed? Technically, Minsk permits these interferences, but some of them could even be linked to “soft ethnic cleansing” by discouraging Belarusian culture.

[26] Stephens, “Influence Operations & International Law.”

[27] Jonathan Heawood, “Pseudo-Public Political Speech: Democratic Implications of the Cambridge Analytica Scandal,” *Information Polity* 23, no. 4 (2018): 429–34, https://doi.org/10.3233/IP-180009.

[28] These exceptions tend to spark public discussion. One example is the debate around messaging platforms with encryption features, such as Telegram. The app has been the subject of controversy because it is one of the platforms favored by the Kremlin for its information operations, and crimes are also committed through it thanks to the privacy it offers. On the other hand, in authoritarian states, Telegram has become, due to that same anonymity, a platform through which citizens are able to counter those same information operations and mobilize democratically. A recent example was the case of the Georgia protests. Furthermore, tools capable of decryption have been controversial; see the case of the CNI, which used Pegasus against the Catalan leader Carles Puigdemont.

[29] The situation in Nagorno-Karabakh has been particularly problematic. In an effort to exert influence in the region, Russia has launched influence operations aimed at promoting extremism on both sides. The EU, by contrast, has used its East StratCom Task Force to promote post-conflict reconstruction and transitional justice.

[30] “TikTok’s new transparency report reveals the latest attempts at political influence on the platform,” La Vanguardia, May 23, 2024.

[31] Gautam Kishore Shahi et al., “A Year of the DSA Transparency Database: What It (Does Not) Reveal About Platform Moderation During the 2024 European Parliament Election,” n.d.

[32] Maria Luisa Chiarella, “Digital Markets Act (DMA) and Digital Services Act (DSA): New Rules for the EU Digital Environment,” Athens Journal of Law (AJL) 9, no. 1 (2023): 33–58.

[33] “Digital Markets Act,” December 12, 2025, https://digital-markets-act.ec.europa.eu/index_en.

[34] Marie-Therese Sekwenz et al., “Doing Audits Right? The Role of Sampling and Legal Content Analysis in Systemic Risk Assessments and Independent Audits in the Digital Services Act,” SSRN Scholarly Paper No. 5235646 (Social Science Research Network, April 29, 2025), https://doi.org/10.2139/ssrn.5235646.

[35] Ronan Ó Fathaigh et al., “The Regulation of Disinformation Under the Digital Services Act,” Media and Communication 13, no. 0 (2025), https://doi.org/10.17645/mac.9615.

[36] Mariana Tacu, “Mass Media in the Republic of Moldova and Building Resilience Against Disinformation in Coverage of the European Path,” n.d.
