3 Hold social media to account
3.1 Introduction
The Internet, search engines, messaging services, social media and other online platforms have made it easier to access information and engage in public discourse. This is fundamentally positive for freedom of expression. Nine out of ten Norwegians use social media,42 making it an important platform for public debate in Norway. Accordingly, this chapter focuses primarily on social media.
Over the past 20 years, the Internet has become increasingly platformised, evolving from an open, universally accessible network into a landscape dominated by social media and other online platforms restricted to registered users, creating new challenges that require appropriate responses. Any measures must be balanced against freedom of expression and freedom of information, which includes the right to share false or misleading content.
Social media has both strengthened and challenged open and enlightened public discourse in Norway. This chapter outlines these challenges and sets out the policy instruments and measures the Norwegian Government considers necessary to address them.
3.2 Challenges
3.2.1 Lack of regulation and effective enforcement
Content moderation, algorithmic amplification and restrictions on different types of content are the result of deliberate choices by social media providers. Nevertheless, social media have rarely been held to account for the dissemination and amplification of illegal content, disinformation, election interference and restrictions on content from editorial media.
Regulation of social media platforms is limited, and ensuring effective compliance is challenging. The largest platforms are owned by US or Chinese companies, with European headquarters in countries such as Ireland and the Netherlands. Because the largest social media platforms are based outside Norwegian jurisdiction, holding them to account for regulatory violations in Norway is challenging.
The General Data Protection Regulation (GDPR)43 sets requirements for how social media platforms collect and use personal data, including the need for a legal basis for using personal data for profiling and targeted advertising. However, the GDPR alone is not sufficient to address the challenges posed by the spread of disinformation on these platforms. Pan-European cooperation is therefore needed.
The EU has adopted a new set of digital regulations designed to ensure greater democratic oversight of social media platforms. These regulations will form part of Norway’s legislative framework if they are incorporated into the EEA Agreement and transposed into national law.
Meta has indicated that, together with the current US administration, it will challenge European laws that it believes compromise freedom of expression.44 Freedom of expression was a key consideration when drafting these new digital regulations and will remain important in their enforcement.
3.2.2 Social media are effective channels for propagating disinformation
Social media, messaging services and online platforms have become attractive channels for spreading disinformation, interfering in electoral processes and conducting influence operations. This is partly because social media posts are published in real time without prior review and can, in addition, be targeted at specific user groups for a fee.
The business model of social media platforms such as Facebook, Instagram, X, TikTok and Snapchat is based on the sale of targeted advertising. This means that platform providers may have a financial interest in users spending as much time as possible on their platforms. The providers have access to large volumes of personal data and can track user patterns over time and categorise users into different interest profiles. By linking this data to algorithms, platforms can send targeted content and advertising to individual users. While this increases access to content of interest, it also heightens the risk of spreading misinformation and disinformation.
False or misleading information, tailored to individual preferences, can be highly influential. The collection and use of personal data can thus facilitate the propagation and amplification of disinformation.
Hate speech, misinformation and disinformation generate engagement and spread rapidly on social media, both through user interactions (such as likes, shares and comments) and through platform recommendation algorithms.45 These algorithms can also limit the distribution of editorial content and other credible content.46 AI, machine learning models and algorithms play a key role in content ranking.
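The engagement-driven dynamic described above can be illustrated with a minimal sketch. All names, weights and scores here are hypothetical and do not represent any platform's actual system; the point is that a ranking based purely on interaction signals contains nothing that measures accuracy.

```python
# Hypothetical illustration of engagement-based feed ranking.
# Weights and data are invented for the example; no real platform is modelled.
from dataclasses import dataclass


@dataclass
class Post:
    text: str
    likes: int
    shares: int
    comments: int


def engagement_score(p: Post) -> float:
    # Shares and comments are weighted more heavily than likes in this sketch,
    # since they actively propagate content to new audiences.
    return p.likes + 3 * p.comments + 5 * p.shares


def rank_feed(posts: list[Post]) -> list[Post]:
    # Sort purely by engagement: nothing in this signal reflects accuracy,
    # so engaging misinformation can outrank sober, credible reporting.
    return sorted(posts, key=engagement_score, reverse=True)


posts = [
    Post("Fact-checked news report", likes=120, shares=10, comments=8),
    Post("Sensational false claim", likes=90, shares=60, comments=40),
]
ranked = rank_feed(posts)
print(ranked[0].text)  # the sensational claim ranks first
```

Real recommendation systems combine far more signals, but the core dynamic is the same: content that provokes reactions gains reach, regardless of whether it is true.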
3.2.3 Generative AI can be misused
Generative AI is advancing at a rapid pace. It offers substantial opportunities while also posing considerable challenges.
Generative AI makes it more difficult to distinguish between genuine and synthetic text, audio, images and video. Setting up an online news site with AI-generated fake news requires little time and few resources.47 AI-generated content can be an effective instrument for influencing democratic elections and political opinions, by stirring up and reinforcing existing attitudes or appealing to emotions.
Generative AI has been used by disinformation actors in connection with several elections, for example in the United States48 and Slovakia,49 but it is difficult to measure the impact on election outcomes. A report on last year’s EU elections found that inauthentic behaviour, such as bots and fake profiles, as well as generative AI, were employed in disinformation campaigns prior to the election, though to a lesser extent than feared. European regulations and risk mitigation measures implemented by platforms were identified as possible explanations.50 Norway’s Expert Group on Artificial Intelligence and Elections also found that generative AI had less impact than feared in the elections they examined in 2024. However, both the technology and the trends are evolving rapidly, and with this, the threat landscape and the need to build resilience among the population and within society are also changing.51
Inauthentic content and profiles are a central part of the disinformation ecosystem. In autumn 2024, the Romanian election was annulled due to foreign influence operations on TikTok. Inauthentic accounts, AI-generated content and the use of influencers were key factors.52 Automated programmes (bots) are used to publish, like and share posts. The European Commission has opened an investigation into TikTok to assess whether the platform’s measures to counter election interference have been adequate under European regulations.53
Generative AI can also spread misinformation through so-called ‘hallucinations’, where language models produce false content that appears credible. Many social media platforms now offer AI assistants and chatbots as part of the service, and there is a tendency for users – particularly younger media consumers – to use chatbots such as ChatGPT, or platforms such as TikTok, in place of traditional search engines.54, 55
Language models can exhibit both intended and unintended political biases that users may not be aware of.56 For example, one study documented how the large volume of Russian online propaganda, including AI-generated news sites, has influenced large language models, causing them to reproduce the propaganda, often citing it as a credible source.57
3.2.4 Platform power can be misused
Internet platforms are owned by a small number of global technology companies, which have considerable power over and influence on public discourse. These companies occupy dominant positions in the digital advertising market and in app stores, giving them significant competitive advantages. Many are also located in low-tax jurisdictions, further reinforcing the disparities in competitive conditions between global and national actors. This creates challenges for the financing of editorial media, which compete for the same advertising revenue, potentially undermining media diversity and reducing access to fact-checked, verified information.
Changes to system design and recommendation algorithms can have major implications for society. For example, when Facebook adjusted its algorithms in 2018 to promote engaging content,58 the change also led to a rise in the propagation of hate speech and harmful material. Restrictions on editorial content imposed by companies such as Meta have also affected traffic to editorial media.59
Heightened geopolitical tensions can increase both interest in using algorithms to achieve political objectives and the likelihood of their use. The capability to target content brings with it a risk of covert attempts to influence public opinion and political attitudes, as well as the risk of platforms abusing their power. This highlights the need for transparency and oversight in the use of algorithms on social media.
3.2.5 Limited insight and access to platform data
Some social media platforms give researchers access to data, but this access is often limited and there is considerable uncertainty regarding quality and continued availability. Many platforms have recently tightened their procedures for sharing data.60
Without sufficient access to data, it is difficult to build a robust evidence base on how disinformation spreads across platforms, how recommendation algorithms increase its reach, and how effective platforms’ countermeasures are. A solid evidence base is crucial for implementing targeted measures to mitigate societal risks associated with social media.
Access to data is also important for supervisory authorities to evaluate platforms’ compliance with relevant regulations.
3.2.6 Lack of measures to protect children and young people
Children and young people are particularly vulnerable to harmful design features and content, including disinformation. Harmful design can take several forms, such as addictive algorithms, recommendation systems that amplify damaging content, and manipulative design. Manipulative design can lead users to make choices that are not in their best interests. For instance, design features can make it harder to refuse than to consent to the collection of personal data, or can try to accelerate a purchase decision. They can also be used to influence opinions.
Report to the Storting no. 13 (2024–2025) Prevention of Extremism – safety, trust, cooperation and democratic resilience notes particular concern regarding minors participating in transnational digital networks. Much of the extremist propaganda is designed and distributed in ways that appeal to younger audiences. Social media can serve as a channel for radicalisation and recruitment. In this context, it is especially concerning that young people are exposed to extremist and violent ideologies, often accompanied by graphic video material.
Harmful design on social media platforms and its impact on children and young people are at the core of legal action that several US states have initiated against Meta.61 Amnesty International has documented how TikTok amplifies harmful content in the feeds of children and adolescents.62 Young girls are particularly vulnerable to developing an addiction to social media.63 Report to the Storting no. 32 (2024–2025) Safe Childhood in a Digital Society highlights a range of challenges and opportunities related to children’s use of social media and digital tools.
Seven out of ten children aged 9–12 years use social media, despite the minimum age being 13. Influence operations actively exploit gaming platforms and associated messaging services to target children and young people.64 Many of these services are regulated under new European regulations, which need to be applied in order to mitigate the risks of unwanted influence.
3.3 Policy instruments
In recent years, the EU has adopted a number of regulations designed to promote fairer competition and enhance the legal protection of users of digital services. The aim is to reduce societal risks and safeguard freedom of expression, freedom of information and media freedom. These regulations are also relevant to efforts to counter disinformation and election interference. They can only be enforced in Norway once they have been adopted as Norwegian law.
3.3.1 Digital Services Act
The Digital Services Act65 (DSA) is designed to strengthen users’ rights online and is a key policy instrument for mitigating the negative effects that social media can have on public discourse, including the spread of disinformation, attempts to influence elections and content that is potentially harmful to children and young people. The DSA also strengthens the accountability of social media platforms for the way content is delivered.
The DSA covers a range of digital services, including Internet service providers, cloud storage services, social media, search engines and other online platforms. The rules are most stringent for the largest online platforms and search engines, such as TikTok, Instagram, Facebook, Snapchat, YouTube and X, where the risks to individual users and to society are greatest.
Key provisions of the DSA
- National authorities can issue orders directly to service providers for the removal of content that is illegal under national legislation.
- Social media platforms and other online platforms must have a system for processing notifications from users regarding illegal content and content that breaches the platform’s terms of service. Platforms must prioritise notifications from public authorities or organisations with trusted flagger status.
- Users have the right to appeal if content or accounts are removed, and the right to have appeals handled by an independent appeals body.
- The DSA prohibits manipulative design and behaviour-based advertising that targets minors.
- The largest platforms and search engines must identify and mitigate systemic risks relating to, for example, freedom of expression, freedom of the press, consumer protection and privacy protection, as well as negative effects on public discourse and electoral processes. Examples of risk-mitigating measures include complying with the duty to remove illegal content, complying with terms of service and reducing risks associated with recommendation algorithms for user-generated posts and advertising. Children and young people must be given special protection, and one of the measures to ensure this is age verification.
- Researchers and supervisory authorities have access to data that can provide a better evidence base for assessing the scope of disinformation and unwanted political influence, and how recommendation systems (algorithms) influence public discourse.
- The European Commission has access to the algorithms and machine learning models used for content moderation and recommendation systems on platforms, and the European Centre for Algorithmic Transparency (ECAT) has been established to support the oversight of algorithms.
The Code of Conduct on Disinformation66 has been incorporated into the DSA as a risk-mitigation measure to address negative impacts on public discourse, particularly relating to misinformation and disinformation. Social media platforms that have signed up to the Code are required to report twice yearly on the effect of the measures they implement.
The Code encompasses measures to curb the propagation of misinformation and disinformation, reduce advertising revenue for disinformation actors, limit inauthentic behaviour and AI-generated posts and accounts, and label fact-checked content. The Norwegian Media Authority (NMA) publishes assessments on its website of platforms’ compliance with the Code in Norway. These assessments include visual graphics that make it easy to track trends over time.67
Each country appoints a DSA coordinator to oversee compliance with the DSA in cooperation with other national supervisory authorities. The Norwegian Communications Authority (Nkom) is Norway’s designated DSA coordinator and holds primary responsibility for overseeing compliance in Norway. The NMA, the Norwegian Data Protection Authority and the Norwegian Consumer Authority are the designated competent authorities within their respective areas of responsibility.
The supervisory authorities also participate in the European Board for Digital Services, providing a unique opportunity to actively contribute to enforcement of the regulations and promote legal certainty for social media users in Norway. Services that breach the rules can be subject to fines of up to six per cent of their annual global turnover.
3.3.2 European Media Freedom Act
Freedom of the press and media diversity are under pressure in Europe. The European Media Freedom Act68 (EMFA) is designed to protect editorial media from state and private interference and to promote a diverse media landscape.
EU Member States must not influence editorial decisions. They are required to protect the safety of journalists and respect source confidentiality. The EMFA will better protect press freedom on the largest social media platforms and search engines.
Platforms must give news and current affairs media 24 hours’ notice before removing or restricting editorial content, and media organisations have the right to appeal. The EMFA also provides for structured dialogue between editorial media and social media platforms, organised by the European Board for Media Services, with participation from the NMA.
3.3.3 Regulation on transparency and targeting of political advertising
The Regulation on the Transparency and Targeting of Political Advertising69 aims to ensure open and enlightened political debate, free and fair elections, and to counter disinformation and unlawful interference from third countries. It sets requirements for the transparency and targeting of advertising in connection with elections, referendums and legislative processes in the EU and Member States.
The regulation was introduced in response to concerns over the risk of election interference. It is intended to help the public recognise political advertising, identify who is behind it, and know whether it has been targeted, thereby better equipping them to make informed decisions. The regulation does not govern the content of political advertising and does not cover editorial content or expressions of personal opinion.
The regulation sets requirements for labelling political advertising. The labelling must indicate, for example, who has paid for the advertising, the source of the funds and the purpose of the advertising. This is partly because what appears to be neutral information may be funded by another country attempting to influence an election.
The regulation also requires explicit consent to be obtained before using personal data to target political advertising online. Targeted advertising based on sensitive personal data (such as ethnicity, religion, or sexual orientation) is not permitted, nor is the use of personal data from minors. This is intended to prevent the misuse of information for micro-targeting, emotional manipulation, or the propagation of disinformation.
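As an illustration only, the targeting conditions described above can be expressed as simple checks. This is a hypothetical sketch, not an official or complete implementation of the regulation; the categories, age threshold and function names are invented for the example.

```python
# Hypothetical sketch of the regulation's targeting conditions as code.
# Not an official implementation; names and thresholds are illustrative only.
from dataclasses import dataclass, field

# Examples of special categories of personal data that may not be used
# for targeting political advertising under the regulation.
SENSITIVE_CATEGORIES = {"ethnicity", "religion", "sexual_orientation", "health"}


@dataclass
class TargetingRequest:
    user_age: int
    explicit_consent: bool
    data_categories: set = field(default_factory=set)


def political_targeting_allowed(req: TargetingRequest) -> tuple[bool, str]:
    # Personal data of minors may not be used for targeting.
    if req.user_age < 18:
        return False, "personal data of minors may not be used"
    # Explicit consent is required before personal data is used at all.
    if not req.explicit_consent:
        return False, "explicit consent is required"
    # Sensitive personal data may not be used for targeting.
    if req.data_categories & SENSITIVE_CATEGORIES:
        return False, "sensitive personal data may not be used for targeting"
    return True, "targeting conditions satisfied"


ok, reason = political_targeting_allowed(
    TargetingRequest(user_age=34, explicit_consent=True, data_categories={"region"})
)
```

In this sketch, an adult who has given explicit consent may be targeted on a non-sensitive attribute such as region, while a request involving a minor, missing consent or a sensitive category is rejected with a reason.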
3.3.4 Artificial Intelligence Act
The Artificial Intelligence Act70 aims to ensure responsible use of AI systems within the internal market. It covers product safety and liability and can help reduce the risk of potential negative consequences arising from the use of AI. The Act classifies AI systems into different risk categories: high risk, systemic risk, limited risk and minimal risk. It prohibits harmful AI practices, including social scoring, certain forms of real-time biometric recognition, and the use of AI to manipulate vulnerable groups. The Act does not apply to AI systems intended for military use. The use of AI to generate realistic audio, images, or video (deep fakes) must be labelled, with certain exceptions for criminal proceedings and, among other things, satirical content. Nkom has been designated the national coordinating supervisory authority for the Act by the Norwegian Government, and is responsible for uniform compliance with the rules throughout Norway.
3.4 Measures
3.4.1 Rapid implementation and effective enforcement of EU regulations
New EU regulations impose a duty on social media platforms, search engines and other online services to protect public discourse, promote fairer competition and ensure access to data. These regulations are important instruments for addressing the risks associated with disinformation, election interference, infringements on press freedom in Norway and the misuse of AI. They can only be enforced in Norway once they have been incorporated into Norwegian law.
The vast majority of platforms used in Norway are located abroad, outside Norwegian jurisdiction. National laws alone are therefore often insufficient to address breaches of these regulations, making it essential to utilise the opportunities provided by the EU regulatory framework. In terms of enforcing rules related to the spread of disinformation and the strengthening of public discourse, the NMA has relevant expertise.
The Norwegian Government is working to ensure that relevant EU regulations are incorporated into the EEA Agreement and transposed into Norwegian law as swiftly as possible, and that the relevant supervisory authorities have the necessary resources to enforce them effectively.
3.4.2 Improve understanding of the role of social media in the propagation of disinformation
New regulations and improved access to data can help provide a clearer understanding of how social media influences public discourse in Norway and the Nordic region, both in terms of the propagation of disinformation and the filtering of editorial and other credible content.
- Methodology for analysing social media’s compliance with relevant regulations
Effective enforcement of the DSA requires Norwegian supervisory authorities to map, document and report suspected breaches in Norway to the European Commission or the national DSA coordinator.
The NMA has been tasked with developing a methodology to assess how social media influence public discourse in Norway, and whether the measures aimed at countering disinformation and protecting editorial content are sufficient. This includes evaluating whether platform algorithms actively increase the spread of disinformation or limit the dissemination of credible content, and examining the steps taken by platforms to limit the scope of fake accounts and AI-generated content.
The Norwegian Government will ensure that the relevant supervisory authorities have adequate resources to document how social media manage systemic risks related to disinformation and comply with regulations in the Norwegian context. This will help inform the public and ensure that the conditions are in place for regulatory breaches to be addressed.
- Sharing insights from social media platforms’ self-reporting
Most social media platforms undertake to counter disinformation and report on the effect of the measures they implement, in line with the Code of Conduct on Disinformation discussed in Section 3.3.1.71 This includes reporting the number of posts and advertisements removed for violating the platforms’ terms of service, as well as the number of inauthentic accounts and posts that are deleted. Platforms are also required to publish transparency reports and to record, in an open database, data on all posts that are removed or restricted.
The NMA will analyse and compile relevant data from social media and present it to the public in a clear and accessible manner.
- Strengthened Nordic cooperation on the analysis and oversight of social media
Analysing and monitoring compliance with regulations requires substantial resources. Nordic cooperation can facilitate more effective enforcement. Increased Nordic cooperation on the enforcement of the DSA was also one of the recommendations from the Nordic Think Tank for Tech and Democracy.72
The Norwegian Government will support effective cooperation between relevant Nordic supervisory authorities on analyses of how social media, search engines and major online platforms influence public discourse in the Nordic countries.
- Improved access for researchers to social media data
The DSA requires platforms to give researchers access to data. Researchers can apply to their country’s national DSA coordinator for access, who will then assess the request according to the criteria in the regulations. This can generate insight into the propagation of disinformation in Norway and provide a stronger evidence base for understanding risks to public discourse, as well as for considering potential measures to mitigate such risks. The Norwegian Government will facilitate further research into the spread of misinformation and disinformation via social media in Norway.
3.4.3 Dialogue with and oversight of social media prior to elections
During democratic elections and major national events, the risk of unwanted influence increases. Prior to the 2024 EU elections, the European Commission held a close dialogue with the largest social media platforms and performed a stress test to assess their election readiness.73 The aim was to ensure sufficient measures were in place to counter any influence attempts and to maintain established communication channels in the event of heightened activity, such as the use of bot networks.
The Norwegian Government will consider whether a closer dialogue with platforms is necessary prior to elections in Norway. The DSA also includes specific procedures that will be applicable to future elections.
3.4.4 Clarify age verification and age limits for social media
Children and young people are particularly vulnerable to misinformation and disinformation and have the right to be protected from harmful content. Misleading health information, for example, can have serious consequences. It is important to ensure that children and young people are well protected from harmful content on social media.
Currently, 13-year-olds can consent to social media platforms processing their personal data. The Norwegian Government proposes raising this age limit to 15 years. Work is underway on a legislative proposal establishing an age limit for children’s use of social media, with 15 years as the starting point. The aim is to protect children and young people from potential harm associated with social media use, including exposure to misinformation and disinformation. The Norwegian Data Protection Authority has also been tasked with strengthening efforts to protect children and enforce age limits on social media.
3.4.5 Development of Norwegian and Sámi language models at the National Library of Norway
In 2024, the Norwegian Government presented its digitalisation strategy, The Digital Norway of the Future 2024–2030,74 in which one of the initiatives is to establish a national infrastructure for AI. As a follow-up to this, the National Library of Norway will, from 2025, train, update and provide access to Norwegian and Sámi language models that the Norwegian business sector and public sector can use to develop AI-based tools and services. The rationale for developing dedicated Norwegian models is to create tools that are reliable and of high quality both linguistically and in terms of content, while also reducing some of the risks associated with AI discussed in this chapter. It is particularly important to provide language models that reflect Norwegian public discourse and democratic values, and to build robust alternatives to models from China and the United States.
The National Library will have full insight into how the language models are trained and the data on which they are trained, and will be fully transparent about this. This will facilitate safer and better-documented AI.
The Norwegian Government will
- ensure that the DSA and other EEA-relevant regulations governing social media are transposed into Norwegian law as soon as possible
- ensure that relevant supervisory authorities have the necessary resources to enforce the regulations effectively
- improve understanding of the role of social media in the propagation of disinformation
- develop a methodology for analysing social media platforms’ compliance with the DSA and the extent to which they counter negative effects on public discourse and election interference, including disinformation
- compile and provide access to relevant information from the social media platforms’ self-reporting on measures to safeguard public discourse in Norway
- strengthen Nordic cooperation to understand how social media influences public discourse
- ensure dialogue with social media platforms prior to elections
- raise the age at which children can consent to the processing of their personal data by social media platforms from 13 to 15 years
- submit a proposal for consultation for the introduction of an age limit for social media use
- ensure the development of Norwegian and Sámi language models