Earlier this year, the European Commission published its Ethics Guidelines for Trustworthy AI, followed by an implementation checklist. Orange is one of 52 independent experts representing academia, industry and civil society on the European Commission's High-Level Expert Group on Artificial Intelligence (AI HLEG), which sets these standards for the ethical development and use of AI.
The guidance itself is not binding, but it will influence EU legislators in this field and is designed to help enterprises set their own internal policies and practices. According to the European Commission: “AI is not an end in itself, but a tool that has to serve people with the ultimate aim of increasing human well-being.”
The innovative use of AI to find new insights in data has the power to transform our world for the better. But people need to understand how AI can enhance their experience of work and the products and services they enjoy, and learn to trust its capabilities. Critically, society as a whole needs to work together to mitigate the risks.
AI can help to alleviate global poverty and hunger by improving agricultural crop yields or predicting natural disasters through analysis of complex environmental data. When properly used, AI will augment rather than replace human workers who excel at skills such as complex problem solving or empathizing with a customer. But it also brings with it big challenges in terms of the future of work and highlights a number of ethical, legal and security questions.
Consider “deepfakes,” which use AI to generate multimedia content showing people participating in events that look totally real but are actually fabricated. AI is also capable of cloning a human voice. Such techniques could erode faith in online media and cause irreparable damage to public trust in governments.
Eliminating bias
AI is only as good as the quality of data that it is fed. AI training data can contain implicit gender, racial or ideological biases, for example. It’s therefore crucial that AI systems are trained with data that is unbiased and truly representative of a population. There is also the risk of programmers introducing bias into the machine learning algorithms that interpret data.
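The kind of data audit this implies can be illustrated with a minimal sketch. The field names, the toy loan-approval dataset and the checks below are hypothetical examples for illustration only, not any specific Orange or EU tooling; the idea is simply to measure how well each demographic group is represented in a training set, and whether the labels differ sharply between groups:

```python
from collections import Counter

def representation_report(records, group_field):
    """Share of each demographic group in the training data.
    Very small shares suggest a group is under-represented."""
    counts = Counter(r[group_field] for r in records)
    total = sum(counts.values())
    return {group: n / total for group, n in counts.items()}

def positive_rate_by_group(records, group_field, label_field):
    """Rate of positive labels per group.
    Large gaps between groups can signal bias baked into the labels."""
    totals, positives = Counter(), Counter()
    for r in records:
        totals[r[group_field]] += 1
        if r[label_field]:
            positives[r[group_field]] += 1
    return {g: positives[g] / totals[g] for g in totals}

# Hypothetical toy data: past loan-approval decisions per applicant.
data = [
    {"group": "A", "approved": True},
    {"group": "A", "approved": True},
    {"group": "A", "approved": False},
    {"group": "B", "approved": False},
    {"group": "B", "approved": False},
    {"group": "B", "approved": True},
]

print(representation_report(data, "group"))
print(positive_rate_by_group(data, "group", "approved"))
```

Checks like these only surface candidate problems; deciding whether a disparity reflects genuine bias still requires human judgment and domain knowledge, which is precisely why the guidelines pair technical measures with human oversight.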
AI can also reinforce, or help overcome, economic biases. In financial services, for example, data from social media profiles could be used to give consumers who do not have a credit history access to loans or credit cards.
Steering best practices for ethical AI
According to the European Commission’s expert panel, ethical AI must be human-centric and respect fundamental human rights as defined by European and international treaties. There are seven key requirements:
1. Human agency and oversight: AI systems should empower human beings, helping them make informed decisions and “foster their fundamental rights”
2. All AI systems must be technically robust and secure: They must also be “accurate, reliable and reproducible” and provide a fallback plan if there is an issue
3. Total respect for privacy and data protection must be put in place. Adequate data governance mechanisms must also be provided “taking into account the quality and integrity of the data, and ensuring legitimized access to data.”
4. All data, systems and AI business models should be transparent. AI-generated decisions should “be explained in a manner adapted to the stakeholder concerned.” Humans must be told they are interacting with an AI system and must understand its capabilities and limitations.
5. Unfair bias must be avoided. In addition, “AI systems should be accessible to all, regardless of any disability, and involve relevant stakeholders throughout their entire life cycle.”
6. AI systems should benefit individuals as well as societal and environmental well-being, and their impact on these should be considered at all times.
7. AI systems should be both responsible and accountable, and mechanisms need to be put in place to ensure this. Auditability plays an important role here, especially with critical applications. Moreover, “accessible redress” must be ensured if consumers believe they have been treated unfairly.
The Commission is inviting structured feedback from stakeholders on these principles until December 2019. Following this, the AI HLEG, including a representative from Orange, will review and update the guidelines early next year, and the Commission will define the next steps that are required.
In tandem, Orange is reviewing its own internal guidelines on the ethical development and use of AI and data. Our employees receive training on our policies to ensure we do business in an ethical way and this is reinforced through our day-to-day actions as part of our human-centric culture.
Our focus on AI and data innovation
We’re already using AI tools to improve the customer experience via chatbots, for example, and to optimize networks. Orange Business is co-innovating with customers to apply AI to:
- Transform the customer experience. Website personalization, contact center data mining, chatbots and connected products, for example.
- Transform business processes. Areas of focus include predictive maintenance, computer vision for quality control and assurance in manufacturing, as well as robotic process automation for business process enablement.
- Increase our digital resilience. This includes the use of AIOps (AI for IT operations) within our own business and our customers' as well as our ongoing program to develop our next generation CyberSOC, which balances and blends human and AI expertise and capabilities.
- Deliver smarter data management for AI-enabled business success. AI can help solve the challenge of analyzing data at scale, but it’s critical to have the right data management and governance processes in place to succeed with AI and to address ethical and security concerns.
Recently, we’ve been helping major automotive and chemicals firms create cloud-based data lakes as a central repository to store all data in its native format, including structured, semi-structured and unstructured data. The aim is to increase an enterprise’s data agility to support the growing use of AI and advanced analytics.
In recognition of the importance of this topic, Orange Group has just appointed ex-Apple and ex-Facebook executive Steve Jarrett as its Senior Vice President of Data and Artificial Intelligence. Steve has taken responsibility for a new Data and AI department defining the Group's data strategy within our Technology and Innovation (TGI) Division, which supports Orange’s transformation into a multi-services operator.
Ethics are more important than ever. Robust processes to prevent bias will be vital when developing AI-enabled products and services for consumers around the world. This topic should be at the forefront of business leaders’ agendas if consumer trust is to be built into AI technologies that are set to pervade all our lives.
To find out more about the potential applications for AI in your business, download our ebook, Smarter data management for AI-enabled business success.
Raluca is Head of Innovation at Orange Business, where she is responsible for co-innovating with our enterprise, start-up and partner networks to create new customer and business value. Raluca began her career in operations management at Citibank, working on retail banking transformation, process improvement, and employee onboarding and engagement. Prior to her current role, she was Chief of Staff to the Head of Sales for International Business at Orange Business. She holds Master’s and Bachelor’s degrees in International Business from the Academy of Economic Studies in Bucharest and Grenoble Ecole de Management in France. She is fluent in Romanian, English and French.