
When does a financial institution need an internal AI policy?
On 10 February 2026, senior representatives from the worlds of politics and business held their latest discussion on the future legal framework for artificial intelligence (AI) in Switzerland. At the ‘Digital Switzerland Advisory Board Meeting’, Federal Councillor Albert Rösti emphasised that ensuring transparency and traceability was key to building public trust in AI. At the same time, all stakeholders must retain room to develop and to operate innovatively; the protection of fundamental rights, however, must not be compromised in the process. AI has, of course, long since become part of everyday life in the Swiss financial sector too. Applications can now be found throughout the entire value chain: from customer interaction and credit and risk analysis to fraud prevention, market surveillance and internal support functions. Staff increasingly use AI tools as a matter of course.
By contrast, the regulatory framework in Switzerland is evolving at a comparatively measured pace. Over the past twelve months, there has been little change in the formal legal landscape. Unlike the EU, Switzerland still has no specific AI law; to date, there is neither a concrete draft bill nor a formal consultation process. For financial institutions, therefore, the existing financial market regulations and FINMA’s expectations in the areas of risk management and organisation remain the key guiding principles. This situation raises a central practical question: when is it appropriate to issue an internal AI policy – and when is it sufficient to adapt existing regulations?
This article examines this question from a practical perspective and outlines the considerations Swiss financial institutions should take into account when managing AI risks and designing their internal policy frameworks.
Regulatory context: Switzerland and international developments
To date, Switzerland has deliberately adopted a technology-neutral approach to AI regulation. In principle, AI is to be addressed within the existing legal framework, in particular through organisational, due diligence, data protection and supervisory obligations. For the financial sector, this means that the use of AI must continue to be measured against the general requirements of financial market law, the revised Data Protection Act and FINMA’s established supervisory practice. Of particular note here is FINMA Guidance 08/2024, which sets out the supervisory authority’s specific expectations of financial institutions regarding the use of AI.
At the political level, however, there are signs of some movement in the medium term. Switzerland has signed the Council of Europe’s AI Convention. Its ratification is regarded as an important intermediate step towards coherent legal regulation at federal level. On this basis, a draft bill is expected to be put out for consultation by the end of 2026. However, the timeframe until it comes into force remains open, and no binding new specific AI standards are expected in the short term.
The situation is different in the European Union. The EU AI Act has established a comprehensive, risk-based regulatory framework that imposes obligations, some of which are far-reaching, on the use of AI depending on the risk category. Although this legislation is not, in principle, directly applicable to Swiss financial institutions, it has de facto implications, for example in cross-border activities, in dealings with EU clients or when using AI providers based in the EU. In the political debate, there is increasing criticism that whilst Europe is playing a pioneering role in terms of regulation, technological leadership in the field of AI remains primarily in the US and China. For Swiss institutions, this means they find themselves caught between the pressure to innovate and growing regulatory expectations.
Obligation to conduct risk analysis – even without a specific AI law
Regardless of the question of future legislation, regulated financial institutions are already obliged today to assess the risks associated with the use of AI and to implement appropriate measures. The use of AI should not be viewed in isolation, but rather as part of the overall organisation and the existing control framework, taking into account the institution’s business model and size.
In practice, this means that institutions must gain a clear understanding of where and in what form AI is used within their organisations – whether for production, support or experimental purposes. Equally relevant is whether AI systems merely prepare decisions or whether they actually lead to (partially) automated decision-making. Added to this are questions regarding the nature of the data processed, potential dependencies on external providers, and the traceability of results. Only on this basis can a proper assessment be made as to whether and which internal regulations are necessary. An AI policy ‘for the sake of having a policy’, by contrast, adds little value.
Typical AI risks for financial institutions
The specific risks depend heavily on the individual case. Nevertheless, certain recurring categories of risk emerge in practice that are particularly relevant to financial institutions. These include, first and foremost, governance and organisational risks, such as when responsibilities are unclear or control and escalation mechanisms are lacking. Data protection and privacy risks are also significant, particularly in the case of automated individual decisions or the processing of sensitive personal data. Related, but not identical, are AI-related confidentiality risks, namely the risk that employees breach trade secrets through the improper use of AI tools. Added to this are outsourcing and third-party risks associated with the use of external AI solutions, such as cloud-based models, which place increased demands on contract drafting and monitoring. Finally, reputational and liability risks should not be underestimated: opaque or unfair AI-driven decisions can have a lasting negative impact on the trust of customers and the public.
When does the question of an internal AI policy arise?
Against this backdrop, the question arises at what point a formalised internal AI policy becomes advisable or necessary. In practice, larger banks and insurance companies in particular have long since incorporated the topic of AI into their policy and control systems, whether in the form of standalone AI policies or as an integral part of existing IT, risk or outsourcing policies. However, small and medium-sized financial institutions such as fund management companies, asset managers and other financial intermediaries are increasingly taking this step as well.
There is a particular need for enhanced internal regulation when AI is used in regulated core processes, when decisions with legal or practical implications for clients are automated or pre-selected, when AI tools are used institution-wide or by a large number of employees, or when external, particularly generative, AI systems are deployed. The closer AI comes to decision-relevant processes, the less sufficient purely informal governance becomes.
Content and structure of an internal AI policy
An internal AI policy need not be highly complex or overly technical. What matters is that it addresses the key risks and is effectively integrated into the existing governance framework. Ideally, it should specify which AI applications are covered, how responsibilities and accountabilities are allocated, and the criteria used for the risk-based classification of AI systems. Also relevant are rules of conduct for using AI tools, guidelines on the use of external solutions, on the transparency and documentation of AI-supported decisions, and on staff training and awareness-raising.
It is essential that the scope and level of detail of the policy are commensurate with the size, complexity and risk profile of the institution. A lean, clearly understandable policy is often more effective than a comprehensive document that is rarely applied in practice.
Implementation in practice
A step-by-step approach has proven effective in practice. The starting point is a structured assessment of existing and planned AI usage. Building on this, existing policies can be supplemented in a targeted manner or – where appropriate – a standalone AI policy can be introduced. What matters is not so much the formal existence of a document as its actual application in day-to-day practice. Regular reviews, adaptations to new use cases, and close collaboration between business units, IT, Risk, Compliance and Data Protection are key success factors. Staff must be adequately informed, briefed and regularly trained, and the key elements of the policy must be controlled and monitored within the framework of the internal control system (ICS).
Conclusion
Even without specific AI legislation, Swiss financial institutions are now responsible for managing their use of AI appropriately. An internal AI policy should not be an end in itself, but rather a tool deliberately rolled out by management for structured risk monitoring. Whether and when a separate policy is required depends on the size of the institution, the complexity of its business model, the specific use of AI and the associated risk profile. Institutions that analyse their use of AI at an early stage and regulate it pragmatically not only create regulatory certainty but also strengthen the trust of business partners, auditors, supervisory authorities, customers and employees.
This article was published in the January 2026 issue of the magazine Private (Das Geld-Magazin).