695th Lord Mayor’s Ethical AI Initiative

Aim: To provide ways in which professionals and firms worldwide can respond to the ethical challenges of Artificial Intelligence (AI, machine learning) regulation

The Worshipful Company of Information Technologists, in association with the Chartered Institute for Securities & Investment (CISI), British Computer Society, BSI, United Kingdom Accreditation Service, London Chamber of Commerce & Industry, Northeastern University, and Z/Yen Group, with the cooperation of Citigroup, London Metal Exchange, and LearnerShape, has launched an initiative to develop ethical AI standards. Initially the standards are embodied in courses for professionals working with AI. The Initiative has now moved on to promote standards for firmwide certification. The approach is international rather than national, and promotes the use of existing ISO standards (particularly ISO/IEC 42001) rather than the creation of new standards, particularly ones that would impede international trade in computer-based services.

Background

AI has the potential to affect society and individuals materially in both positive and negative ways. Rapid development of AI technology means that ethical considerations must be taken into account from the beginning of the design process. AI systems rely on large amounts of data, which raise privacy concerns.  Without ethical guidelines and regulations in place, the misuse or mishandling of data can result in harm to individuals or groups.  Ideally, AI systems are transparent, accountable, and inclusive, and do not reinforce or amplify existing inequalities.

As AI becomes more advanced and widespread, there are growing concerns about issues such as bias, transparency, accountability, and safety.  If people do not believe that AI is being developed and used in a responsible and ethical manner, they may hesitate to use or interact with beneficial technologies.

With ISO/IEC 42001 certification and a reasonable number of trained professionals, a firm should be able to be categorised as ‘low risk’, particularly under the EU’s AI Act.

Objectives

The initiative has three parts – a course, an accord, and a consensus:

Course – Professional Certification

This initiative aims to ensure that AI is developed and deployed in ways that benefit society, while minimising potential harm.  Ethical AI training can help ensure that development and deployment align with our values, respect human rights, and promote the common good.  The initiative should establish ethical guidelines and principles for AI development, encourage transparency and accountability in AI systems, and promote collaboration and dialogue among stakeholders.

On 27 June 2023 we launched the “695th Lord Mayor’s Ethical AI Initiative” on “Delivering Ethics Courses For AI Deployers & Builders” at CISI, starting in financial services.  We now have several online courses.

Since 10 November 2023, across all the courses, we have had over 6,000 graduates in over 50 countries, from over 500 firms, including seven regulators and four central banks.

Accord – Firmwide Certification

We are promoting the use of ISO/IEC 42001:2023, Information technology – Artificial intelligence – Management system, by firms. This is an organisational certification comparable to ISO 9000, ISO 14000, etc. At the TIC Summit 2024, held on 14 May in Brussels, representatives of more than 30 nations’ quality infrastructure bodies signed the Walbrook AI QI Accord.

Consensus – Responsible AI

On 15 July the Lord Mayor launched the Coffee House Consensus on Responsible AI when giving the opening keynote address at the International Corporate Governance Network (ICGN) London Conference.

Developed in collaboration with major global investors whose combined assets under management exceed $26tn, the Consensus aims to be a two-page document helping investors and investees coordinate around the responsible deployment of AI, thus increasing returns and reducing risks. The current document is an Exposure Draft and comments are welcome.

Steering Group

While course development is underway and led by relevant institutes, the role of the Steering Group is just that, steering:

  • making intellectual connections and contributions;
  • making personal connections to people or organisations who might usefully contribute;
  • providing a sounding board and critiquing function;
  • publicising when appropriate, particularly towards the end of the Initiative.

WCIT and Professor Mainelli appointed Liveryman Nicholas Beale as Chair of the Steering Group.  The Steering Group is exploring how to provide regular updates for a rapidly changing field, including via CISI’s monthly ethics columns. We are actively exploring extending the course to other professions such as solicitors, barristers, surveyors, and accountants.

For further information:

[Image, Stability AI: “steampunk ‘Lord Mayor’ of the ‘City of London’ standing in front of Tower Bridge wearing purple goggles with his battle puffin”]

Course Development

There are three types of courses:

1 – Domain qualifications for ‘AI Deployers’, e.g. the responsible deployment of AI.  The courses so far are listed above.  These courses have so far taken the educational form of a professional assessment course, circa 8 to 12 hours.  Refresher or update mini-courses might also be considered.  Discussions with other sectors indicate interest in law, maritime, surveying, and pharmaceuticals.

2 – An ‘AI Builder’ qualification for software designers, engineers, and computer scientists.  BCS would be the certificate provider.  This is likely to take the form of a Professional Certificate, circa 100 to 150 hours.  This is a significant undertaking.  If undertaken, such work is likely to be done in conjunction with the Open Data Institute, which has made significant progress on a rich curriculum.

3 – Senior Executive (C-Suite) qualifications, yet to be developed.

The overall course syllabus, developed by the Lord Mayor, Professor Michael Mainelli, consists of six parts.

CISI and BCS have led course development.