In recent years, so-called "classic" AI has developed rapidly, a success driven by the extensive and measurable benefits it generates. This is particularly true in factories, where Machine Learning tools have been tested in many situations. The recent and sudden emergence of Generative Artificial Intelligence (GenAI), however, has reshuffled the deck: its "creative contribution," enabled by its reproduction of human behavior, suggests tremendous potential. Yet it comes with its own set of dilemmas.
1. Now or later?
Many leaders are concerned about intensifying competition driven by GenAI, fearing the emergence of new players in their markets. Some are tempted to deploy this disruptive technology within their organizations without further delay, seeing it as an opportunity to improve the profitability of their core business and tap new sources of value creation. Yet they need to be aware of the risks associated with such a major change. Other, more cautious leaders fear "disruptive shocks" that could threaten their organizations, and even their business models.
2. Traditional digital transformation or revolution?
This is the main dilemma facing every leader eager to adopt GenAI: stick to a "standard" digital transformation that accumulates use cases, or trigger a much deeper transformation affecting all of the company's business lines, carried out in symbiosis with the IT department.
The second option necessarily involves educating the teams, and top management in particular, to overcome the fears associated with AI. For example, the Executive Committee of a major automotive company completed two days of training, with its members even learning to program in Python. Another essential prerequisite is the creation of high-quality databases that allow data to be shared beyond the usual business-line silos. With central skills centers in place, the data collected can then be exploited intensively, enabling "cross-fertilization" of practices through cross-functional use cases.
3. Alone or with support?
While some managers choose to conduct their "AI transition" using exclusively in-house resources, others prefer to be supported by specialist firms with expertise across the entire transformation chain: identifying needs, deploying solutions, analyzing results, managing change, and more. A key prerequisite is raising teams' awareness of the dangers associated with AI and of ethical principles, both by warning business units about AI's various biases and malfunctions and by encouraging managers to choose sovereign, cost-effective solutions.
4. Standard budget or dedicated line?
The first Artificial Intelligence initiatives are, in most cases, carried out within standard IT budgets. As the number and scale of projects grow, however, leaders are tempted to adopt a dedicated budget line, so that AI projects can be funded separately within each business department. For example, an industrial services company operating in 45 countries has adopted a central budget while also running educational initiatives to explain the economic equation of AI to its teams.
5. A profusion of initiatives or central management?
The governance of these tools is a strategic issue. Management can give the business units considerable freedom to experiment, enabling them to make rapid progress without mobilizing additional resources. Alternatively, it can set up a central steering committee to give the business lines and entities guidelines and set priorities. A major player in the energy sector, which uses Artificial Intelligence for topics as varied as setting prices at gas stations and managing solar and wind power plants, has set up a "digital factory" centralizing all the company's AI processes.
6. "Make or buy?"
One of the questions on which managers hesitate most is whether to use external solutions built on generic use cases, which allow the tools to be deployed quickly, or to develop customized solutions. While the second option may, on paper, guarantee better results, it runs up against high costs and the difficulty of measuring the profitability of individual use cases.