
Twelve Principles for AI Regulation

Background

The US and UK Governments recently consulted on how to regulate Artificial Intelligence. We submitted the following opinions.

  • The US Government’s consultation placed too much emphasis on how to update existing legislation to ensure trustworthiness[1] and accountability for AI-assisted products and services. AI is a new paradigm that will affect every conceivable aspect of our current ways of life. While AI will bring many benefits, it raises serious new challenges to the trustworthiness of computer-generated output, whether text, audio or video.
  • The UK Government’s current approach places too much emphasis on facilitating innovation. AI innovation will proceed apace, and its benefits will accrue, regardless of regulatory policy. The first duty of any Government is to protect its citizens from risks of serious harm. These now include harms that users may suffer from exposure to AI, possibly without even being aware of them. That duty should take priority when regulating AI.

We therefore responded to both Governments’ consultations by proposing twelve new Principles for AI Regulation, from which detailed regulations may be derived. We believe our proposals will:

  • enhance the benefits of AI-assisted systems by helping to make them trustworthy,
  • slow the development of AI only to the extent required to modify systems and products to conform to our proposed regulations,
  • not restrict innovation in AI, except for its expansion into areas where AI could pose extreme risks to citizens or even threaten humanity.

Twelve Principles for Defining and Implementing AI Regulation

P1. AI-related legislation must ensure that AI output is trustworthy and protects human users of AI-assisted products and systems from harm. In particular, these products and systems must:

  1. guarantee the physical safety of the user, their privacy, and their other human rights as established by existing law,
  2. be transparent such that the user of their output is made aware that it is generated with the assistance of AI, and
  3. only do what they claim to do or may reasonably be assumed to do.

P2. Legislation must distinguish between two main classes of products that actors develop or use in the AI life-cycle: AI tools used to build applications (e.g. AI foundation models, Large Language Models, and other generative AI tools) and AI-assisted applications, whether or not those applications are built using AI tools. Regulations must recognize that AI tools made available for public use (e.g. LLMs) can also be classified as applications.

P3. The suppliers involved in each stage of the life-cycle of AI-enabled products must be held accountable for ensuring that the collection of their training data is legal, including respecting copyright laws and subscription paywalls in all relevant jurisdictions, and likewise, that their output is legal.

P4. The suppliers of AI tools and AI-assisted applications, whether or not they use or incorporate AI tools in their development, must be held accountable for the purposes for which they claim their products can be used and for which their output is trustworthy. However, the suppliers of AI tools cannot be held accountable for every possible application in which their tools may be used.

P5. The suppliers of AI tools and AI-assisted applications must have processes to ensure that their products meet the above obligations and must self-certify their compliance before their products are made available to third parties. Self-certificates must be available to the user either with the product or via the service that delivers the product and must state (a sketch of one possible machine-readable form follows the list below):

  1. which features of the product (e.g., the collection of training data, the algorithms to generate the output, the output presentation, etc.) are AI-enabled,
  2. the purposes for which the supplier claims their AI tools and AI-assisted applications can be used and are trustworthy.
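
To illustrate P5, here is a minimal sketch, in Python, of one possible machine-readable form of such a self-certificate. The structure and field names are our own assumptions for illustration; no existing standard prescribes them.

```python
import json
from dataclasses import asdict, dataclass, field


@dataclass
class SelfCertificate:
    """Hypothetical P5 self-certificate; all field names are illustrative."""
    supplier: str                  # legal entity accountable under P3/P4
    product: str
    version: str
    ai_enabled_features: list[str] = field(default_factory=list)  # P5 item 1
    claimed_purposes: list[str] = field(default_factory=list)     # P5 item 2

    def to_json(self) -> str:
        # Serialized so it can ship with the product or via the delivering service.
        return json.dumps(asdict(self), indent=2)


# Example: what a supplier of a search application might publish.
cert = SelfCertificate(
    supplier="ExampleCorp Ltd",
    product="ExampleSearch",
    version="2.1.0",
    ai_enabled_features=[
        "training-data collection",
        "output-generation algorithms",
        "output presentation",
    ],
    claimed_purposes=["general web search", "document summarization"],
)
print(cert.to_json())
```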

P6. In addition to P5, a search application that uses AI must inform the searcher that AI has been used to generate the content (and, in the case of audio/visual output, the presentation) of the search results. Further, in certain cases to be defined (e.g., legal searches), the application must also enable the searcher to understand the types and/or the actual sources of the training data used to answer the query, and must prevent the generation of fictitious sources.
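
As one possible reading of P6, the sketch below shows how a search application might label its output as AI-generated and suppress citations it cannot verify against a registry of known training-data sources. The registry, function, and field names are hypothetical assumptions, not an existing mechanism.

```python
# KNOWN_SOURCES stands in for an authoritative registry of the training data
# actually used; it is an assumption for this sketch, not an existing API.
KNOWN_SOURCES = {
    "https://example.org/statute-db",
    "https://example.org/case-law-archive",
}


def present_search_result(answer: str, cited_sources: list[str]) -> dict:
    """Label the answer as AI-generated and block citations we cannot verify."""
    fictitious = [s for s in cited_sources if s not in KNOWN_SOURCES]
    if fictitious:
        # P6: prevent fictitious sources from reaching the searcher.
        raise ValueError(f"Unverifiable sources suppressed: {fictitious}")
    return {
        "ai_generated": True,   # P6: the searcher must be told AI was used
        "answer": answer,
        "sources": cited_sources,
    }


print(present_search_result(
    "Summary of the relevant statute ...",
    ["https://example.org/statute-db"],
))
```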

P7. Regulators may define applications of AI that:

  1. are forbidden, ranging from using AI for mass surveillance (with exceptions, e.g., for specific cases of law enforcement) to toys that encourage dangerous behavior.
  2. do not need any regulation or auditing, for example, the use of AI in games, spam filters, or drug discovery.

P8. All AI-assisted applications and all AI tools not covered by the exclusions listed in principle P7 shall be subject to sectoral AI regulations and liable to external audit. Existing sectoral regulatory bodies must be empowered to define the AI regulations for their respective sectors and to undertake or license audits. (Example ‘vertical’ sectors: health, transport, public services, financial services, advertising, education, social media. Example ‘horizontal’ sectors: financial assurance, employment rights, combatting AI-assisted crime.) New AI sectors may need to be defined, e.g., for AI tools and for the use of AI to generate, maintain, and distribute software code.

P9. AI external audits must be conducted on a sectoral basis, with priorities and frequencies determined by the sectoral regulator. Given the speed at which AI is changing and spreading, it will be impossible to audit every AI application and tool; sampling may be necessary. Each sector should set its own audit standards, subject to principles P1 to P7.

P10. New Federal/Central government functions will be needed to supervise the overall approach to controlling AI. Their responsibilities should include the following:

  • maintain the overall principles of the regulations and the details of any cross-sector AI regulations,
  • define the areas where the use of AI is forbidden (responsibility for defining the areas where no regulations are needed lies with the sectoral regulators),
  • ensure a coherent allocation of responsibilities between the various sectoral regulators so that there are no overlaps or unregulated gaps,
  • define the general content of the syllabi for accreditation/licensing of AI auditors (responsibility for the content of sectoral audit syllabi lies with the sectoral regulators),
  • represent the Nation in efforts to seek international harmonization of AI regulation.

P11. Governments should move towards international harmonization of AI regulations. The gathering of training data and the use of AI tools and AI-assisted applications know no international boundaries (except those imposed by autocratic regimes).

P12. ‘Super-intelligent’ AI must be developed under international supervision in controlled environments to mitigate the longer-term risks of harm to humanity arising from the technology.

Prometheus Endeavor

29th June 2023


[1] The US NIST defines “trustworthy AI” systems as, among other things, “valid and reliable, safe, secure and resilient, accountable and transparent, explainable and interpretable, privacy-enhanced, and fair with their harmful bias managed.”

Prometheus Endeavor is a US-based, international think-tank whose members all have lifelong experience as practitioners and advisors to industry on information technology. Now retired, we examine the effects of IT developments on aspects of society such as education, the workforce, those left behind by technology, and the disadvantaged. We have experience in auditing IT systems, but we are not lawyers.
