ETH Zurich and Zühlke have conducted a study on how companies use AI technologies. A total of 633 companies from the fields of production, technology, healthcare and finance from the DACH region, the UK and the US were surveyed. Stefano Brusoni, Professor of Technology and Innovation Management, explains in an interview where the greatest potential lies and where Europe needs to catch up.

What is the study about and what is its key message?
The study looks at the distribution and impact of artificial intelligence. It analyses which technologies are used for which purpose and in which functions. The most important finding is that companies in the DACH region have lower AI adoption rates compared to US companies and started using AI later. US companies also use AI primarily in research and development and less in customer-facing functions such as marketing.
Were you surprised by the result?
Yes and no. Companies in the DACH region are not located close to the tech giants and lack easy access to them, yet the study does not indicate that it is harder to obtain data, models or algorithms in the DACH region. One visible difference between companies here and in the US is that we are lagging behind in the development of firm-specific ethical frameworks.
What do you understand by ethical framework conditions?
They are an important prerequisite for the use of AI: they define responsibilities and set clear boundaries. In essence, they enable the use of AI, and such frameworks are more mature in the US.
Isn't there also much more investment in the US?
Yes, it is traditionally easier to access major investments there. Not just the initial investment for a start-up, but also the follow-up investments that are needed to really scale up and industrialise. That's a big plus.
What does Europe need to do better?
The lack of a domestic tech giant is a major disadvantage. Unlike China, however, Europe has not developed any alternatives to US technologies. EU initiatives focus primarily on regulating data access and position Europe as a user, not a developer. The US and China are very different in this respect. They are users, developers and enablers.
Does this mean that China is on the rise?
China has been investing in AI for a long time and has been much more productive than we may have realised. Now it turns out that they have developed technologies that we are all familiar with today. DeepSeek comes as no surprise in this regard. It is merely the product of a certain way of investing in AI and trying to utilise open source developments to circumvent obstacles imposed on China by regulations and trade restrictions.
What can we learn from China and the US?
From China, we can learn that there is not just one way to engage with generative AI. Like Switzerland, the US has good access to top universities, but above all to major investments, and has relatively liberal regulations, which can lead to governance problems. The EU, however, has created very few incentives to invest in AI and its regulatory regime has made Europe a place of AI utilisation rather than development.
What is Switzerland particularly good at?
Switzerland is influenced by the EU's framework conditions but enjoys greater flexibility. Unlike other European countries, we are close to leading institutions, including ETH Zurich. This is proven by the strong presence of Google, Apple, IBM and many US tech companies in Switzerland, not only as a location to sell their services but also to develop technologies. We also have many promising start-ups. Switzerland excels at development but struggles with "productisation": many start-ups fail to grow here because they are taken over too early by the Googles and IBMs of this world.
What are the success factors in the use of AI?
First of all, access to technology and data is crucial. Even if the technologies are not developed in Europe, they can be purchased. What is really missing is an ethical framework and, more generally, the development of organisational processes and governance principles: Where is the data stored? Who has access to it? Who is responsible for its use? The ethical framework is a management tool that ensures transparency and facilitates responsible decision-making. Without it, it will be difficult to use these technologies on a large scale. It is clear that firms in the DACH region and the EU as a whole have been too slow to proactively develop firm-specific frameworks; they have been too inclined to wait for the EU to regulate. The EU is beginning to reconsider its role in this domain, as evidenced by the recent halting of proposed regulations on AI liability and privacy. The time is right to explore more bottom-up approaches that complement top-down regulation.
Why are marketing departments, in particular, driving the use of AI?
Marketing and HR departments have been pioneers in the field of artificial intelligence. They had large data sets that they needed to utilise. However, we are also seeing increasing use in the areas of research and development as well as operations. Digital twins, for example, are making a comeback. They were a big thing ten years ago, but were unable to realise their full potential. Many things summarised under the term "digital transformation" are actually a revival of trends. They have been around for a long time but have never been applied in the way they can be now.
Were you able to identify a difference between predictive and generative AI in your study?
Predictive AI creates forecasts based on data from the past, while generative AI creates new content. Companies that started to use big data ten or fifteen years ago were quick to embrace predictive AI because they needed tools to analyse this data systematically and reliably. Generative AI complements this. Companies that have already pioneered the use of big data are also generally quicker to introduce generative AI.
What advice would you give companies seeking to make successful use of AI?
Companies should think about what they are really good at, and which activities are of central importance to their own competitiveness. They carry out millions of tasks and activities every day, all of which are important, but not all of which are critical. They need to work out this differentiation and develop a system of rules that takes the criticality and complexity of tasks into account. For strategic activities where the stakes are high, a clear governance framework is needed that defines responsibilities, risks and human oversight. Employees can be given the freedom to experiment with AI tools when carrying out routine, low-risk tasks, such as writing meeting minutes. The key lies in differentiated AI rules: carefully designed governance for competitive core activities, structured compliance for critical but simpler functions, and openness to experimentation with routine tasks. A one-size-fits-all approach will slow down decision-making forever. And this is not the time to be slow.
About
Stefano Brusoni has been Professor of Technology and Innovation Management at ETH Zurich since 2011. He is currently Pro-rector for Continuing Education. His work aims to enable leaders in established industries to leverage new, potentially disruptive technologies such as AI/ML, blockchain and additive manufacturing for business and societal impact. He has published in journals such as Strategic Management Journal, Administrative Science Quarterly, Organization Science, Academy of Management Journal, Strategy Science and many others. He is Senior Editor of Organization Science and was Associate Editor of the Strategic Management Journal. He is also a founder and entrepreneur, currently active in EdTech.