5 Trustworthy AI
The Government wants Norway to lead the way in developing and using AI with respect for individual rights and freedoms. In Norway, artificial intelligence will be based on ethical principles, respect for privacy and data protection, and good cyber security.
«Ghosthouse», h.o. (INT) – Photo: Ars Electronica/Martin Hieslmair
Norway is known for the high level of trust we have in each other and in public and private institutions. The Government wants to maintain and strengthen this trust at the same time as artificial intelligence is adopted in new and innovative ways.
The Government believes that:
- artificial intelligence that is developed and used in Norway should be built on ethical principles and respect human rights and democracy
- research, development and use of artificial intelligence in Norway should promote responsible and trustworthy AI
- development and use of AI in Norway should safeguard the integrity and privacy of the individual
- cyber security should be built into the development, operation and administration of AI solutions
- supervisory authorities should oversee that AI systems in their areas of supervision are operated in accordance with the principles for responsible and trustworthy use of AI
Issues related to artificial intelligence
Developing and using artificial intelligence can create challenges and raise many complex questions. This particularly applies to AI that builds on personal data.
Big data versus data minimisation
A certain amount of data is needed to develop and use artificial intelligence. At the same time, one of the key principles of data protection is data minimisation, which requires the amount of personal data collected to be limited to what is necessary for fulfilling the purpose for collecting it. Consequently, the need for large datasets can conflict with the principle of data minimisation. Although enterprises planning to implement a project based on AI will want to obtain as much data as possible, the starting point must be to select a relevant sample and a dataset that is sufficiently large.
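This starting point can be illustrated with a minimal sketch (hypothetical data, Python standard library only): instead of ingesting every record available, an enterprise draws a fixed-size random sample that is large enough for its purpose.

```python
import random

def minimised_sample(records, sample_size, seed=42):
    """Keep only a fixed-size random sample of the collected records.

    Drawing a representative subset, rather than retaining everything,
    is one practical way to honour the data-minimisation principle
    while still obtaining a dataset large enough for training.
    """
    if sample_size >= len(records):
        return list(records)
    rng = random.Random(seed)  # fixed seed so the selection is reproducible
    return rng.sample(records, sample_size)

# Hypothetical example: keep 1 000 of 100 000 collected records.
full_dataset = [{"id": i} for i in range(100_000)]
subset = minimised_sample(full_dataset, 1_000)
```

How large the sample must be depends on the purpose; the point of the sketch is only that the size is chosen deliberately rather than defaulting to "everything we can collect".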
Enterprises can also consider whether there are other more privacy-friendly methods of gaining access to the personal data needed, such as anonymised data, synthetic datasets or various encryption methods. The Norwegian Data Protection Authority has published a guide on artificial intelligence and privacy which covers this and other issues.35
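One of the privacy-friendly methods mentioned above, pseudonymisation through keyed hashing, can be sketched as follows (the key handling and record fields are hypothetical):

```python
import hmac
import hashlib

# Assumption: the key is stored securely, separate from the dataset.
SECRET_KEY = b"replace-with-a-securely-stored-key"

def pseudonymise(national_id: str) -> str:
    """Replace a direct identifier with a keyed hash (pseudonym).

    The same input always maps to the same pseudonym, so records can
    still be linked for analysis, but the original identifier cannot
    be recovered without the secret key.
    """
    return hmac.new(SECRET_KEY, national_id.encode(), hashlib.sha256).hexdigest()

record = {"national_id": "01019912345", "age": 42}
safe_record = {"subject": pseudonymise(record["national_id"]), "age": record["age"]}
```

Note that pseudonymised data still counts as personal data under the GDPR; only genuinely anonymised or synthetic data falls outside its scope.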
It is not only the amount of data that is important for artificial intelligence; the quality and structure of the data must also be good. Any errors in the data can have an impact on the analyses performed. Moreover, there must be metadata describing the content of the different data fields. A good start is for individual organisations to put their own house in order36, meaning that they gain an overview of what data they manage, what the data means, what it is used for, what processes it is used in, and whether legal authority exists for sharing it.
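What such an overview could look like in practice can be sketched with a simple data catalogue entry (all field names, descriptions and values below are hypothetical):

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class CatalogueEntry:
    """One entry in an organisation's internal data catalogue."""
    field_name: str
    description: str                     # what the data means
    used_in: List[str]                   # processes the field is used in
    sharing_basis: Optional[str] = None  # legal authority for sharing, if any

catalogue = [
    CatalogueEntry(
        field_name="birth_date",
        description="Date of birth of the registered person",
        used_in=["age verification"],
        sharing_basis=None,  # no legal authority identified for sharing
    ),
]

# Fields that may be shared are exactly those with a documented legal basis.
shareable = [e.field_name for e in catalogue if e.sharing_basis]
```

The value of such a catalogue is that questions like "what does this field mean?" and "may we share it?" have documented answers before any AI project begins.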
One challenge to quality that particularly applies to artificial intelligence is what is known as selection bias. Selection bias occurs when a dataset only contains information about part of the relevant source data. If an algorithm that is meant to recognise images of dogs is only trained on images of dogs playing with balls, the algorithm may conclude that an image without a ball cannot be a picture of a dog. Similarly, it is problematic if an algorithm meant for facial recognition is trained only on images of faces from a single ethnic group.
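A simple balance check over training-data metadata illustrates how this kind of skew can be detected before training (the dataset below is hypothetical):

```python
from collections import Counter

def attribute_balance(dataset, attribute):
    """Compute how often each value of an attribute occurs in the data.

    A heavily skewed distribution is a warning sign of selection bias:
    the model may learn the skew (e.g. 'dog implies ball') rather than
    the concept itself.
    """
    counts = Counter(example[attribute] for example in dataset)
    total = sum(counts.values())
    return {value: count / total for value, count in counts.items()}

# Hypothetical image metadata: every dog photo also contains a ball.
training_set = [
    {"label": "dog", "contains_ball": True},
    {"label": "dog", "contains_ball": True},
    {"label": "dog", "contains_ball": True},
    {"label": "cat", "contains_ball": False},
]
print(attribute_balance(training_set, "contains_ball"))
```

A check like this does not prove the data is representative, but a strong correlation between an incidental attribute and the label is reason to collect more varied examples.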
Bias can occur for other reasons; for example, a training dataset for supervised learning may contain bias resulting from human misjudgements or historical bias in the source data (on account of, for example, the conventional view of men as holders of certain types of positions, or if the data contains more images of women than men by a kitchen sink). Artificial intelligence can also be influenced by who defines the problems.
Lack of transparency
One challenge with artificial intelligence is the lack of transparency in some solutions based on deep learning. Some deep learning algorithms can be likened to a 'black box', where there is no access to a model that can explain why a given input produces a given outcome. Most AI-based systems are not black boxes, however, and make it possible to understand and document how decisions are made. In areas where explainability is important, an alternative approach to deep learning may be more appropriate.
At the same time, much research is being conducted in the field of 'explainable AI', which aims to make black box algorithms explainable. This is not the same as publishing the code behind an algorithm or granting full access to the underlying datasets, since such an approach could breach intellectual property rights and data protection law. Instead, explainable AI can analyse which data influenced the outcome and what weight the different elements carried, and thereby explain the logic behind the outcome.
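One common technique in this field, permutation importance, can be sketched without any machine-learning library: a column of the input is shuffled, and the average drop in accuracy shows how strongly the model relies on that column. The 'black box' below is a deliberately simple, hypothetical model:

```python
import random

def permutation_importance(model, rows, labels, column, n_repeats=10, seed=0):
    """Estimate how much one input column contributes to a model's accuracy.

    The column is shuffled across examples; the average drop in accuracy
    shows how strongly the model relies on it. This explains behaviour
    without publishing the model's code or the raw data.
    """
    rng = random.Random(seed)

    def accuracy(data):
        return sum(model(r) == y for r, y in zip(data, labels)) / len(labels)

    baseline = accuracy(rows)
    drops = []
    for _ in range(n_repeats):
        shuffled_values = [row[column] for row in rows]
        rng.shuffle(shuffled_values)
        shuffled_rows = [dict(row, **{column: v})
                         for row, v in zip(rows, shuffled_values)]
        drops.append(baseline - accuracy(shuffled_rows))
    return sum(drops) / n_repeats

# Deliberately simple, hypothetical 'black box': approves on income alone.
def loan_model(row):
    return row["income"] > 400_000

rows = [{"income": 300_000, "age": 25}, {"income": 500_000, "age": 60},
        {"income": 450_000, "age": 30}, {"income": 350_000, "age": 45}]
labels = [False, True, True, False]

print(permutation_importance(loan_model, rows, labels, "income"))  # clearly positive
print(permutation_importance(loan_model, rows, labels, "age"))     # 0.0
```

Even without the model's internals, the analysis reveals that income drives the decision while age is irrelevant, which is exactly the kind of insight explainable AI aims to give a data subject.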
Finally, the fact that artificial intelligence is characterised by autonomy, and that it can make decisions and initiate actions without human intervention, presents a challenge. Although the degree of autonomy will vary, it nonetheless raises questions about responsibility for the consequences of such decisions and about how such autonomy can be limited. The initial discussions on ethics for artificial intelligence originated in issues of autonomy.37
5.1 Ethical principles for artificial intelligence
In its Global Risks Report 2017, the World Economic Forum characterises artificial intelligence as one of the emerging technologies with the greatest potential benefits but also the greatest risks. There is therefore a need to discuss continuously what constitutes responsible and desirable development, and what we can do to prevent undesirable development in this area.
The European Commission set up an expert group which has drawn up ethical guidelines for trustworthy use of artificial intelligence.38 The guidelines are based on the Charter of Fundamental Rights of the EU and on international human rights law. The purpose of the guidelines is to promote responsible and sustainable development and use of artificial intelligence in Europe.
For development and use of AI to be defined as trustworthy, the European Commission's high-level expert group believes that it must be lawful, ethical and robust. On this basis, the expert group has proposed seven principles for ethical and responsible development of artificial intelligence. The Government will adopt these principles as its basis for responsible development and use of artificial intelligence in Norway.
The principles largely address artificial intelligence that builds on data from or that affects humans, but they are also relevant for industrial use of AI built on data that does not constitute personal data.
Satisfying all seven principles simultaneously can prove challenging. Tensions may arise that create a need to make trade-offs. Such trade-offs should be addressed in a reasoned and methodical manner. Where no ethically acceptable trade-off can be identified, the development and use of the AI solution should not proceed in its current form.
All decisions made regarding trade-offs must be reasoned and documented. If unjust adverse impacts occur in a solution built on AI, mechanisms should be in place to ensure that such impacts can be reported. Particular attention should be paid to vulnerable persons or groups, such as children.
AI-based solutions must respect human autonomy and control
The development and use of artificial intelligence must foster a democratic and fair society by strengthening and promoting the fundamental rights and freedoms of the individual. Individuals must have the right not to be subject to a decision based solely on automated processing when that decision significantly affects them. Individuals must be included in decision-making processes to assure quality and give feedback at all stages of the process ('human-in-the-loop').
AI-based systems must be safe and technically robust
AI must be built on technically robust systems that prevent harm and ensure that the systems behave as intended. The risk of unintentional and unexpected harm must be minimised. Technical robustness is also important for a system's accuracy, reliability and reproducibility.
AI must take privacy and data protection into account
Artificial intelligence built on personal data or on data that affects humans must respect the data protection regulations and the data protection principles in the General Data Protection Regulation.
AI-based systems must be transparent
Decisions made by systems built on artificial intelligence must be traceable, explainable and transparent. This means that individuals or legal persons must have an opportunity to gain insight into how a decision that affects them was made. Traceability facilitates auditability as well as explainability. Transparency is achieved by, among other things, informing the data subject of the processing. Transparency is also about computer systems not pretending to be human beings; human beings must have the right to know if they are interacting with an AI system.
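One way to support traceability in practice is to log every automated decision together with the inputs and model version that produced it. A minimal sketch follows; the record format and field names are hypothetical:

```python
import datetime
import json

def log_decision(audit_log, model_version, inputs, outcome, explanation):
    """Append one traceable record per automated decision.

    Storing the model version, the inputs used and a human-readable
    explanation makes each decision auditable after the fact.
    """
    audit_log.append({
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "model_version": model_version,
        "inputs": inputs,
        "outcome": outcome,
        "explanation": explanation,
    })

audit_log = []
log_decision(audit_log, "credit-model-1.4",
             {"income": 450_000, "debt": 120_000}, "approved",
             "income above threshold and debt ratio below 0.4")
print(json.dumps(audit_log[-1], indent=2))
```

With such records in place, a data subject's request for insight into how a decision was made can be answered from the log rather than reconstructed after the fact.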
AI systems must facilitate inclusion, diversity and equal treatment
When developing and using AI, it is especially important to ensure that AI contributes to inclusion and equality and that discrimination is avoided. Datasets used to train AI systems can contain historical bias, or be incomplete or incorrect. Identifiable and discriminatory bias should, if possible, be removed in the collection phase. Bias can also be counteracted by putting in place oversight processes to analyse and correct the system's decisions in light of the purpose.
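An oversight process of this kind can start from a simple fairness metric. The sketch below computes the gap in positive-outcome rates between groups (the data is hypothetical); a large gap is a signal that the system's decisions need closer analysis:

```python
def demographic_parity_gap(decisions):
    """Compare positive-outcome rates across groups.

    decisions: list of (group, approved) pairs. The gap between the
    highest and lowest approval rate is one simple indicator of
    possible discriminatory treatment.
    """
    totals, positives = {}, {}
    for group, approved in decisions:
        totals[group] = totals.get(group, 0) + 1
        positives[group] = positives.get(group, 0) + (1 if approved else 0)
    rates = {g: positives[g] / totals[g] for g in totals}
    return max(rates.values()) - min(rates.values()), rates

# Hypothetical screening outcomes for two groups.
decisions = [("a", True), ("a", True), ("a", False), ("a", True),
             ("b", True), ("b", False), ("b", False), ("b", False)]
gap, rates = demographic_parity_gap(decisions)
print(rates)  # {'a': 0.75, 'b': 0.25}
print(gap)    # 0.5
```

A metric like this does not by itself establish discrimination; unequal rates can have legitimate explanations, which is why the text calls for analysing decisions in light of the purpose.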
AI must benefit society and the environment
Artificial intelligence must be developed with consideration for society and the environment, and must have no adverse effects on institutions, democracy or society at large.
AI systems must be accountable
The requirement of accountability complements the other requirements, and entails the introduction of mechanisms to ensure accountability for solutions built on AI and for their outcomes, both before and after the solutions are implemented. All AI systems must be auditable.
The Government wants public debate on the ethical use of artificial intelligence and on what applications of artificial intelligence we want to adopt in Norway. Norway has a number of bodies whose mandate is to invite public debate on technology and ethics, such as the Norwegian Data Protection Authority, the Norwegian Board of Technology, and the Norwegian National Committees for Research Ethics.
Privacy by design and ethics
Algorithms can be checked by granting access for audits, but it is better for developers as well as users to build privacy and ethical considerations into systems from the outset. Such a mindset is already established with regard to privacy. Privacy by design is a key requirement in the General Data Protection Regulation and means that privacy must be considered in all phases of the development of a system or solution. This is to ensure that information systems meet the requirements of the Personal Data Act and safeguard the rights of the individual.
Likewise, ethical considerations should be built into algorithms during development. Among other things, it will be important to assess whether an algorithm may lead to discrimination and whether it is sufficiently robust to withstand manipulation. Ethical evaluations may also call for considering potential environmental impacts and whether a system contributes to achieving the UN Sustainable Development Goals.
Work on privacy by design and ethics requires those who work on AI-based solutions to possess or acquire the necessary competence. Higher education institutions ought to evaluate how privacy and ethics can be integrated into their programmes in, for example, information technology and data science.
Artificial intelligence and research ethics
The act relating to the organisation of work on ethics and integrity in research (Lov om organisering av forskningsetisk arbeid) imposes a duty of care on researchers and research institutions to ensure that all research be conducted in accordance with recognised standards for research ethics. Research institutions have a responsibility to ensure that candidates and employees receive training in recognised standards for research ethics and that everyone conducting or participating in research be familiar with them. The National Committee for Research Ethics in Science and Technology recently submitted a report on research ethics in which it proposes nine principles for AI research in three areas:39
- Responsibility for development and use of autonomous systems:
Research in AI must safeguard human dignity, assign responsibility, be explainable, and promote informed public debate.
- Social implications and responsible research:
Research in AI must acknowledge uncertainties and ensure broad involvement.
- Big data:
Research in AI must protect privacy and the interests of individuals, ensure reproducibility and quality, and promote equal access to data.
Challenges for consumers
Use of AI offers many advantages to consumers, such as an ever-increasing range of new services that simplify everyday life. But it also presents challenges with respect to privacy, transparency and consumer rights. Consumers are particularly vulnerable when AI is used to develop personalised services and targeted marketing based on the collection and processing of consumers' personal data. There is growing concern internationally that businesses are not taking consumers' privacy seriously enough.
A survey from Consumers International40 shows that consumers appreciate what AI technology can do; it gives them independence, entertainment and motivation in new and interesting ways. But the survey also shows that consumers are unsure about how their personal data is used and who is behind the data processing. They seek more clarity and control.
When services and marketing become increasingly personalised, consumers risk being subjected to discriminatory treatment and arbitrary, non-transparent decisions, such as price discrimination. Moreover, personalised marketing and other commercial practices developed using AI can manipulate and mislead consumers into making decisions that are not in their interests.
AI affects many aspects of consumers' lives and will encompass different sectors of society. The use of AI raises legal issues under various sectoral legislation, particularly in competition, privacy and data protection, and consumer protection. It is therefore important that the relevant supervisory authorities cooperate on this issue. They should build competence, share information and participate in international forums such as the Digital Clearinghouse, the European forum for consumer, competition and data protection enforcement bodies. In the white paper on the consumer of the future,41 the Government announced that it will create a similar cooperation forum at national level: Digital Clearinghouse Norway.
Regulation of artificial intelligence in the consumer sector
Norway has a tradition of strong consumer protection laws. Efforts are being made in Norway and the EU to provide consumers with strong and enforceable rights that are adapted to digital life. As part of these efforts, the EU has adopted a number of regulatory acts that will strengthen consumer rights online, such as the proposed package of measures called the New Deal for Consumers. While these regulatory acts do not specifically address AI, the European Commission has stressed that AI will be one of the key areas in the time ahead.42 Norwegian authorities have been closely monitoring the EU's work on modernisation of consumer rights and will continue to do so.
International cooperation on ethical and trustworthy AI
Norway is engaged in an array of international forums that work on strategies and guidelines for ethical and trustworthy artificial intelligence, among them the UN, EU, OECD and the Nordic Council of Ministers.
Norway participates in processes, activities and discussions across the UN system dealing with applications of AI. The thematic areas in which AI receives attention range from eliminating hunger, combating climate change and promoting good health for all to disarmament and international security.43
Norway, represented by the Ministry of Local Government and Modernisation, has participated in EU activities related to AI from the start, and was involved in, among other things, preparing the European Commission's Coordinated Plan on Artificial Intelligence from December 2018.44 The EU is working towards human-centric and trusted AI. Norway participates in this work and sits on the steering group that is developing a coordinated approach to AI together with the European Commission and the member states.
The European Commission is expected to submit a legislative proposal on AI regulation in 2020. A new regulatory framework for AI is expected to build on the ethical principles for developing and using AI published by the EU's high-level expert group in April 2019, on which the Government has based its ethical principles for AI. Norway will be actively involved in the work carried out on any future regulatory framework for AI.
The Organisation for Economic Co-operation and Development (OECD) is working on AI and has published several reports on the topic. Norway, represented by the Ministry of Local Government and Modernisation, has participated in the OECD's work on preparing a recommendation on artificial intelligence,45 which was approved on 22 May 2019.
The recommendation identifies key values for trustworthy AI, namely: inclusive growth, sustainable development and well-being; human-centred values and fairness; transparency and explainability; robustness, security and safety; and accountability. In addition, OECD makes recommendations pertaining to R&D in AI, fostering a digital ecosystem for AI and shaping public policy on AI. The importance of building human capacity and preparing for labour market transformation is also highlighted. Furthermore, the OECD points out the importance of international cooperation for ensuring ethical and trustworthy AI.
Council of Europe
The Council of Europe is concerned with the potential impacts of AI on human rights. As of 2019, the European Court of Human Rights (ECtHR) has not yet heard any cases in which artificial intelligence has been the central issue, though it has touched on the topic in some contexts. In the autumn of 2019 the Council of Europe set up an ad hoc committee to examine the opportunities and risks posed by AI in respect of human rights. Norway, represented by the Ministry of Justice and Public Security, participates in this work.
Nordic Council of Ministers and Nordic–Baltic cooperation
Nordic cooperation on digitalisation will promote the Nordic and Baltic countries as a cohesive and integrated digital region. Through binding cooperation and projects, the Nordic countries will find solutions to problems encountered by citizens and businesses, promote innovative technologies and services, and make it easier to develop new services for individuals and businesses throughout the region. Nordic–Baltic agreements have been signed on closer cooperation on 5G, AI and data sharing.
The Government will
- encourage development and use of artificial intelligence in Norway to be based on ethical principles and to respect human rights and democracy
- encourage industry and interest organisations to establish their own industry standards or labelling or certification schemes based on the principles for responsible use of artificial intelligence
- encourage the educational institutions to consider how privacy and ethics can be given a central place in their programmes in artificial intelligence
- expect the supervisory authorities to have the competence and authority to supervise artificial intelligence systems within their areas of supervision in order to, among other things, ensure compliance with the principles for responsible and trustworthy artificial intelligence
- establish a cooperation forum for consumer, competition and data protection enforcement bodies: Digital Clearinghouse Norway
- continue to participate in European and international forums, including the EU's work towards creating a regulatory framework to promote responsible and trustworthy use of artificial intelligence and towards modernising consumer rights in light of digital developments
- stimulate public debate on the ethical use of artificial intelligence
5.2 Cyber security
To ensure a well-functioning digital society, we must minimise the risk of being affected by adverse cyber incidents. The Government therefore considers cyber security to be a priority area.
In January 2019 the Government presented a national strategy for cyber security46 and a national strategy for cyber security competence.47 The cyber security strategy defines goals for five priority areas:
- Norwegian companies shall digitalise in a secure and trustworthy manner, and improve their capability to protect themselves against cyber incidents.
- Critical societal functions shall be supported by robust and reliable digital infrastructure.
- Enhanced cyber security competence shall be aligned with the needs of society.
- Norwegian society shall improve its capability to detect and manage cyber attacks.
- The police shall enhance their capability to combat cyber crime.
The Ministry of Justice and Public Security and the Ministry of Defence have overarching responsibility for following up the National Cyber Security Strategy for Norway. The individual ministries are responsible for ensuring that the strategy’s priorities and measures are followed up in their respective sectors.
Cyber security and artificial intelligence have two aspects: security in solutions based on artificial intelligence, and solutions based on artificial intelligence for enhanced cyber security. The competence needs in these areas will largely overlap. There is also a need for in-depth specialisation in security architecture for protecting AI systems, and for specialisation in algorithms/big data for using AI to protect IT systems and society.
Artificial intelligence in law enforcement
The Norwegian Police University College and NTNU in Gjøvik are cooperating on a project that examines the use of different forms of artificial intelligence for analysing big data, aimed at detecting, preventing and investigating economic crime. The objective of the Ars Forensica project is to produce new knowledge that can improve prevention, investigation and prosecution of incidents without compromising privacy and the rule of law. Some examples of the research challenges are:
- vast amounts of electronic data that need to be analysed
- fragments of evidence that are hidden in chaotic environments
- varying quality in digital trails, and possibilities to plant/distort digital trails
- dynamic environments and continually changing situations/contexts
- lack of knowledge, and
- decisions characterised by uncertainty and conjecture
The project is funded by the Research Council of Norway's IKTPLUSS programme.
Sources: NTNU/Ars Forensica
Security in IT systems built on AI
Implementing an AI system entails applying conventional technologies such as sensors, communication networks, data centres, big data and software. An AI system will inherit vulnerabilities from these technologies and will also introduce new vulnerabilities as part of the new AI-based solution. In this respect, AI systems are no different from conventional IT or from conventional methods of working on cyber security.
As with other IT systems, a structured, holistic approach to cyber security is needed before an AI system is deployed. The Norwegian National Security Authority's basic principles for cyber security provide all Norwegian organisations with a good starting point for identifying what they should consider in their security activities, regardless of size, maturity and competence.
For many organisations, AI as a service will be provided by external parties with the necessary competence and computing power. This can create challenges in terms of transparency, integrity, accountability and traceability. This must be taken into account when procuring the service. Both the Norwegian Digitalisation Agency and the Norwegian National Security Authority have issued guidance material on security in connection with outsourcing and procuring cloud services.
An AI-based IT system must be trustworthy as well as robust, secure, safe and accurate. Depending on the system's purpose, error or manipulation can in some cases have significantly more far-reaching consequences for an AI system than for a conventional IT system. This must be taken into account when performing a risk assessment of such systems.
Protection of digital infrastructure
The existing early warning system for digital infrastructure has been used to detect targeted cyber attacks for almost 20 years. The Norwegian National Security Authority is now developing new sensor technology that will build on and eventually replace the sensors used in the existing early warning system. A new platform will be developed to use artificial intelligence and machine learning on the data collected. The platform will enable automatic analysis of any malware detected as well as automatic sharing of results.
Source: Norwegian National Security Authority
Use of AI for enhanced cyber security
Systems built on artificial intelligence are becoming increasingly widespread, and will be one of the prerequisites for the success of Norway's future digitalisation efforts. This also applies to organisations engaged in security activities and in cyber security in particular.
Most security organisations regard the use of AI systems as necessary for identifying threats and threat agents, and for being able to withstand and manage cyber attacks. AI-based cyber security solutions contribute to faster detection and management of incidents and to more precise and detailed analysis.
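A very simplified illustration of AI-assisted detection: a z-score detector that flags observations deviating strongly from normal behaviour. The login counts are hypothetical, and real solutions use far richer models, but the principle of automatically surfacing outliers for faster follow-up is the same:

```python
import statistics

def flag_anomalies(counts, threshold=3.0):
    """Flag observations that deviate strongly from normal behaviour.

    A simple z-score detector over, for example, hourly failed-login
    counts; values more than `threshold` standard deviations from the
    mean are reported as potential incidents.
    """
    mean = statistics.mean(counts)
    stdev = statistics.pstdev(counts)
    if stdev == 0:
        return []  # perfectly uniform data contains no outliers
    return [i for i, c in enumerate(counts) if abs(c - mean) / stdev > threshold]

# Hypothetical hourly failed-login counts; hour 5 contains a burst.
logins = [12, 9, 11, 10, 13, 250, 12, 11, 10, 9, 12, 11]
print(flag_anomalies(logins))  # [5]
```

The value of such automation lies in speed: the anomaly is flagged the moment the data arrives, while a human analyst decides whether it actually constitutes an incident.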
Machine learning and data-driven technology can also help prevent vulnerabilities in software development. Simula researches technologies aimed at helping software developers predict vulnerabilities in source code during development, and thereby prevent security holes that could subsequently be exploited by threat agents.
The Government will
- develop Norway's capacity to detect and respond to cyber attacks using AI
- develop the Norwegian National Security Authority as a tool for guidance, problem solving and cooperation, with the aim of building its expertise in securing AI systems and in using AI for enhanced cyber security
Ministry of Local Government and Modernisation
Printed documents can be ordered from:
Norwegian Government Security and Service Organisation
Phone: 22 24 00 00
Publication no.: H-2458 EN
All pictures in the report are from exhibitions at the art centre Ars Electronica
www.ars.electronica.art and www.flickr.com/arselectronica