Responsible Artificial Intelligence

Artificial Intelligence (AI) is growing rapidly as it succeeds in improving productivity and customer experience across every domain of our society, such as education, industry, agriculture, healthcare, finance, transportation, entertainment, security, energy and communication.

AI can be described as machines capable of imitating certain functionalities of human intelligence, including such features as perception, learning, reasoning, problem-solving, language interaction, and even producing creative work.

The rapid growth of AI today is possible due to increased computing power, the availability of big data and the continuous learning of AI systems themselves, which never sleep.

The future success of AI, however, depends not only on the further development of its current success factors. The growth of AI will also be influenced by the ability of people to understand, trust and participate in AI-systems.

Purpose and Principles

In order to understand, trust or participate in a specific AI-system, it must be clear what the purpose of the system is and which principles are being applied.

Companies, institutions, intergovernmental organizations and governments recognize the need to be transparent about the principles of their AI-systems. Many governments (USA, EU, China, Canada), intergovernmental organizations (OECD, UNESCO), institutions (Future of Life Institute) and companies (Google, Microsoft) have disclosed their views on responsible AI principles. There is, however, not yet one clear vision of what the best practices should be.

Clarke (2019) describes 10 themes for responsible AI, which are:

(1) Assess Positive and Negative Impacts and Implications

(2) Complement Humans

(3) Ensure Human Control

(4) Ensure Human Safety and Wellbeing

(5) Ensure Consistency with Human Values and Human Rights

(6) Deliver Transparency and Auditability

(7) Embed Quality Assurance

(8) Exhibit Robustness and Resilience

(9) Ensure Accountability for Obligations

(10) Enforce, and Accept Enforcement of, Liabilities and Sanctions

Based on these 10 themes, Clarke (2019) formulates 50 principles for AI. These principles can serve as a checklist to determine what is relevant to the purpose for which an AI-system is being used, as they integrate all relevant aspects and stress the need to deliver transparency and auditability.
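As an illustration only, the sketch below encodes the 10 themes as a simple checklist that a team could walk through for a specific AI-system. The function name and the example relevance ratings are hypothetical and are not part of Clarke's (2019) framework.

```python
# Illustrative sketch: Clarke's (2019) ten themes as a checklist for one
# specific AI-system. The ratings passed in below are hypothetical.
CLARKE_THEMES = [
    "Assess positive and negative impacts and implications",
    "Complement humans",
    "Ensure human control",
    "Ensure human safety and wellbeing",
    "Ensure consistency with human values and human rights",
    "Deliver transparency and auditability",
    "Embed quality assurance",
    "Exhibit robustness and resilience",
    "Ensure accountability for obligations",
    "Enforce, and accept enforcement of, liabilities and sanctions",
]

def review_system(relevance_by_theme):
    """Print each theme with its recorded relevance, or 'not assessed'."""
    for theme in CLARKE_THEMES:
        status = relevance_by_theme.get(theme, "not assessed")
        print(f"[{status:>12}] {theme}")

# Hypothetical example: a first pass for a credit-scoring system.
review_system({
    "Deliver transparency and auditability": "relevant",
    "Ensure accountability for obligations": "relevant",
})
```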

Viewing AI-systems from a risk management perspective

Clarke (2019) also indicates that organizations should evaluate the appropriateness of AI technologies to their own operations through risk assessment and risk management processes. The responsible application of AI is only possible if a stakeholder analysis is undertaken, both to identify the categories of entities that are or can be affected by the particular project and to gain insight into those entities’ needs and interests.

Understanding AI-systems requires expert knowledge in a broad sense to interpret the data, the models and the programming languages. From a risk management perspective, this requires oversight, also when models are black boxes such as deep-learning models and neural networks. Post-hoc explainability techniques exist, but there is always the risk that the resulting explanations reflect artifacts learned by the model rather than actual knowledge from the data.
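As a minimal sketch of what such a post-hoc explainability technique can look like in practice, the example below applies permutation importance from scikit-learn to a black-box model trained on synthetic data. The dataset, the model choice and the feature layout are illustrative assumptions, not drawn from Clarke (2019) or from any particular AI-system; any importance attributed to the pure-noise features would be exactly the kind of artifact the text warns about.

```python
# Minimal sketch of post-hoc explainability with permutation importance.
# All data and model choices here are illustrative assumptions.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Synthetic data: features 0-4 are informative, features 5-9 are pure noise
# (shuffle=False keeps the informative columns first).
X, y = make_classification(n_samples=2000, n_features=10, n_informative=5,
                           n_redundant=0, shuffle=False, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3,
                                                    random_state=0)

# Treat the trained model as a black box: we only query its predictions.
model = GradientBoostingClassifier(random_state=0).fit(X_train, y_train)

# Post-hoc explanation: how much does shuffling each feature hurt the score?
result = permutation_importance(model, X_test, y_test, n_repeats=20,
                                random_state=0)

for i in np.argsort(result.importances_mean)[::-1]:
    print(f"feature {int(i):2d}: {result.importances_mean[i]:.3f} "
          f"+/- {result.importances_std[i]:.3f}")

# Oversight caveat: any weight the explanation gives to the noise features
# (5-9) reflects artifacts learned by the model, not knowledge in the data.
```

Such a check only interrogates the trained model; it cannot by itself distinguish genuine signal from learned artifacts, which is why the human oversight described above remains necessary.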

AI will impact every industry and every human being

The successful and rapid growth of AI will continue based on its current success drivers, such as improved productivity, improved customer experience, growing computational power, greater availability of data and better learning algorithms. This growth is, however, conditional on AI-systems being transparent about the sustainability principles that are needed for their purpose.

Finally, it is important to keep in mind that while the future potential of AI-systems is enormous, this potential should not be overestimated. The data that AI uses is always historical data, as there is simply no future data available in the world we live in. AI can make high-quality analyses, generate great insights, automate routines and even produce astonishing scenario analyses, but in an uncertain world it cannot predict the future.

Norbert Bol

Literature

Clarke, R. (2019). Principles and business processes for responsible AI. Computer Law & Security Review.

Photo on Flickr, credit to www.iqlect.com

 
