Artificial intelligence in the Swiss financial market (2023)

In recent years, the importance of artificial intelligence (AI) has grown rapidly in all areas of life, including the financial market. In accordance with its strategic objectives for the years 2021 to 2024, FINMA supports innovation in the Swiss financial centre and monitors the associated risks.

Surveys show that most institutions are still experimenting with AI, while some companies already have advanced applications that require corresponding risk management processes. Since chatbots such as ChatGPT became available, interest in AI solutions has taken a further upward leap. As in other areas of life, we can expect AI to lead to a host of changes in the financial market.


FINMA sees particular challenges in the use of AI in the following four areas and expects the financial industry to manage the risks accordingly.

Governance and responsibility: Decisions are increasingly based on the results of AI applications or even taken autonomously by them. Combined with the limited transparency of how AI applications arrive at their results, this makes controlling these applications and attributing responsibility for their actions more complex. As a result, there is a growing risk that errors go unnoticed and responsibilities become blurred, particularly in complex, company-wide processes where in-house expertise is lacking. ChatGPT, for example, generates answers by predicting the most probable sequence of words; these answers appear so convincing that it is very difficult for users to assess whether they are factually correct.

Clear roles, responsibilities and risk management processes must be defined and implemented. Responsibility for decisions cannot be delegated to AI or to third parties. Everyone involved must have sufficient expertise in AI.

Robustness and reliability: The learning process in AI is based on huge quantities of data. This poses, first of all, risks arising from poor data quality (e.g. data that is not representative). Moreover, AI applications undergo a process of automatic optimisation, which can cause the model to develop in the wrong direction (known as drift). According to the Harvard Business Review, for example, the majority of AI algorithms for predicting Covid-19 failed; these applications were not reliable enough to be deployed autonomously. Finally, increased use of AI applications, and the outsourcing and cloud usage that accompany it, will also increase IT security risks.
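
To illustrate, drift of the kind described above can be detected with simple statistical checks on incoming data. The following is a minimal, illustrative sketch, not a method prescribed by FINMA; it assumes Python with NumPy and SciPy and uses a two-sample Kolmogorov-Smirnov test to compare a feature's training distribution with live production data.

```python
import numpy as np
from scipy import stats

def detect_feature_drift(train_values, live_values, alpha=0.05):
    """Flag drift when the live distribution of a feature differs
    significantly from its training distribution (two-sample KS test)."""
    statistic, p_value = stats.ks_2samp(train_values, live_values)
    return p_value < alpha, statistic, p_value

# Synthetic data for illustration: the production feature's mean has shifted.
rng = np.random.default_rng(0)
train = rng.normal(0.0, 1.0, 5000)  # feature distribution seen during training
live = rng.normal(0.4, 1.0, 5000)   # production data with a shifted mean
drifted, stat, p = detect_feature_drift(train, live)
print(f"Drift detected: {drifted} (KS statistic={stat:.3f}, p-value={p:.2e})")
```

In practice such a check would run periodically per feature, with alerts feeding into the institution's model risk management process.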

When developing, training and using AI, institutions need to ensure that the results are sufficiently accurate, robust and reliable. The data, the models and the results must all be open to critical questioning.

Transparency and explicability: The vast number of parameters and the complexity of the models in AI applications often make it impossible to isolate the impact of individual parameters on the result. Without an understanding of how the results come about, there is a risk that decisions based on AI applications cannot be verified or explained. This may make checks by the institution itself, by auditors or by supervisory authorities difficult or impossible. In addition, customers cannot fully assess the risks if they are not informed that AI is being deployed. In insurance pricing, for example, the use of AI could make a tariff opaque and therefore impossible to explain to customers in a transparent way.
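
One family of techniques institutions can use to address this is model-agnostic explainability methods. The sketch below is illustrative only, and FINMA does not prescribe any particular method; it assumes Python with scikit-learn and uses permutation importance on a synthetic dataset to estimate how much each input feature contributes to a model's predictions.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Synthetic stand-in for a tariff or credit dataset (illustrative only).
X, y = make_classification(n_samples=2000, n_features=8,
                           n_informative=3, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Permutation importance: shuffle one feature at a time and measure how
# much the model's test score drops -- a model-agnostic explainability aid.
result = permutation_importance(model, X_test, y_test,
                                n_repeats=10, random_state=0)
for i in result.importances_mean.argsort()[::-1]:
    print(f"feature {i}: importance {result.importances_mean[i]:.3f}")
```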


Institutions must ensure that the results of an application are explicable and that its use is transparent, in a manner appropriate to the recipient, the relevance of the application and its integration into processes.

Non-discrimination: Many AI applications use personal data to assess individual risks (e.g. for setting tariffs or for lending) or to develop customer-specific services. If there is insufficient data on particular groups of people, this can lead to distorted or incorrect results for those groups. If products and services are offered on the basis of these incorrect results, this can lead to unintentional and unjustified discrimination. Alongside legal risks, discrimination also entails reputational risks for the companies concerned.
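
Distortions of this kind can be made visible with simple group-level checks on model outputs. The following is a minimal, illustrative sketch, not a test prescribed by FINMA, and the decisions, groups and rates are hypothetical; it computes approval rates per group and the gap between them, a basic demographic parity check, in Python with NumPy.

```python
import numpy as np

def approval_rate_gap(decisions, group):
    """Demographic parity check: gap between the highest and lowest
    approval rate across groups (0.0 means equal rates)."""
    rates = {g: decisions[group == g].mean() for g in np.unique(group)}
    return max(rates.values()) - min(rates.values()), rates

# Hypothetical model decisions (1 = approve) and a protected attribute.
rng = np.random.default_rng(1)
group = rng.choice(["A", "B"], size=1000)
decisions = np.where(group == "A",
                     rng.random(1000) < 0.70,   # group A approved ~70% of the time
                     rng.random(1000) < 0.55)   # group B approved ~55% of the time
gap, rates = approval_rate_gap(decisions.astype(int), group)
print(f"approval rates: {rates}, parity gap: {gap:.3f}")
```

Whether an observed gap is justified depends on the product and the legal context; the check only surfaces the difference for review.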

Firms must avoid unjustified discrimination.

FINMA has discussed and developed its expectations with regard to AI applications with the financial industry, national and international organisations and academia.


FINMA will monitor the use of AI by supervised institutions. It will also continue to closely monitor developments in the use of AI in the financial industry, remain in discussions with relevant stakeholders and keep up to date with international developments.

(From the Risk monitor 2023)
