
This consultation paper is part of the Wholesale Markets Review (WMR), the review of UK wholesale financial markets we have been conducting with the Treasury. The WMR highlighted the need for greater clarity on the types of firms which need to be authorised as a trading venue and recommended that we consult on guidance. This CP addresses that recommendation, building on the responses received to the WMR.

We want to ensure that firms have greater certainty about the permissions they require and that the trading venue regime is applied consistently. Clarity about the perimeter supports innovation and competition in the market and helps maintain a level playing field. It also helps protect the integrity of the UK financial system.

This consultation is relevant to:

- firms providing communications systems or bulletin board services
- appointed representatives providing or using communications systems or bulletin board services, and their principals
- investment-based crowdfunding platforms operating in primary and/or secondary markets
- technology firms offering platforms and systems for trading

Our consultation will also be of interest to trade associations, law firms, issuers and investors.

Next steps

Based on the responses we receive, we will finalise the guidance and publish a policy statement in Q2 2023.

The EU is trying to make AI systems more transparent. Article 13 of the EU’s proposed Artificial Intelligence Act (AIA) states that “High-risk AI systems shall be designed and developed in such a way to ensure that their operation is sufficiently transparent to enable users to interpret the system’s output and use it appropriately.” But the proposal does not specify what it means for those using an AI system to “interpret” its output, nor does it provide the technical measures a provider must take to demonstrate that its system complies.

The AIA demonstrates that the EU does not yet know if it wants “explainability” or “interpretability”. While Recital 38 calls for “explainable” AI systems, Article 13 states that users should be able to “interpret” the system and requires human oversight measures that can facilitate the “interpretation” of its outputs. The distinction, however, is nontrivial: requirements for explainable and interpretable AI systems differ significantly, and the consequences for the innovation ecosystem may be substantial. To avoid confusion, the EU should clarify its terminology and ensure it does not mistakenly outlaw the most innovative systems.

To see the difference, consider two AI systems: one that uses a simple decision tree model and one that uses a random forest. Decision tree models sequentially answer questions according to certain features and weights and continue to branch out until they reach a conclusion. Figure 1 maps out an algorithm for deciding whether to work from home.

Figure 1: Sample decision tree to decide whether to work from home

An AI system that only uses a decision tree is explainable insofar as the designer can communicate the system’s behavior and output in a way that is understandable to a human user. The system is also interpretable because the designer knows how the decision tree’s features (in Figure 1, “Friday?” and “Rain?”) and their weights determine the system’s output (“yes” or “no”). The cause and effect are clear.

A random forest is a collection of decision trees (a type of ensemble learning method that deploys multiple algorithms).
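The contrast between a single decision tree and a random forest can be sketched in code. The example below is a minimal illustration, not taken from the article: the branch order of the Figure 1 tree and the extra trees in the forest are assumptions, since the source only names the features “Friday?” and “Rain?”.

```python
# Toy sketch of the Figure 1 work-from-home decision tree and a small
# random-forest-style ensemble built from several such trees.
# The exact branch structure is an assumption, not taken from the figure.
from collections import Counter


def decision_tree(is_friday: bool, is_raining: bool) -> str:
    """A single tree: every root-to-leaf path can be read off and
    explained to a user, so the model is interpretable."""
    if is_raining:        # first split: "Rain?"
        return "yes"      # raining -> work from home
    if is_friday:         # second split: "Friday?"
        return "yes"
    return "no"


def random_forest(trees, is_friday: bool, is_raining: bool) -> str:
    """An ensemble of trees that votes by majority. No single feature or
    tree determines the output, which is why the forest is harder to
    interpret than one tree."""
    votes = Counter(tree(is_friday, is_raining) for tree in trees)
    return votes.most_common(1)[0][0]


# Two additional hypothetical trees with different splits.
forest = [
    decision_tree,
    lambda fri, rain: "yes" if fri else "no",
    lambda fri, rain: "yes" if (fri and rain) else "no",
]

print(decision_tree(is_friday=True, is_raining=False))          # -> yes
print(random_forest(forest, is_friday=True, is_raining=False))  # -> yes (2 of 3 trees vote yes)
```

Explaining the single tree’s “yes” takes one sentence (“it is Friday”); explaining the forest’s “yes” means describing how three different trees voted, which is the interpretability gap at issue.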
