Financial regulations as algorithms: Measuring regulatory complexity

Jean-Edouard Colliard, Professor of Finance - September 19th, 2016

Enriching the regulatory framework can create a level of complexity that undermines understanding of the regulations themselves. To help strike a balance between the two, a research team is developing a method for measuring this complexity by examining the words and terminology used in regulatory texts.

Jean-Edouard Colliard

Assistant Professor of Finance, Jean-Edouard Colliard obtained a PhD in 2012 from the Paris School of Economics and is a former student of the Ecole Normale Supérieure (Ulm). His (...)


There is a general perception that regulatory complexity has increased sharply in recent years and reached excessively high levels. For instance, Steven J. Davis from the University of Chicago compares the Ten Commandments that were meant to govern human conduct in the Old Testament with the “about one million commandments” contained in the U.S. Code of Federal Regulations. Recently, the complexity of new banking regulations has attracted a lot of attention, with the Bank of England’s Andrew Haldane contrasting the 30 pages of Basel I with the 616 pages of Basel III, a twenty-fold increase in about 25 years.

The length of a regulation and its complexity are, however, not synonymous. For instance, the software involved in the Apollo 11 mission had 145,000 lines of code, compared with 86 million lines for Mac OS X. This does not imply that our laptops run operating systems almost 600 times more complex than the one that helped land humans on the moon.

The determinants of complexity

In an ongoing research project, Jean-Edouard Colliard, Professor of Finance at HEC Paris, and Co-Pierre Georg, Professor at the University of Cape Town and Deutsche Bundesbank, are developing new ways of measuring regulatory complexity inspired by computer science. Indeed, financial regulation can be considered as a computer program: it takes a financial institution and its actions as an input, runs a set of operations, and returns a regulatory decision as an output, e.g. doing nothing or issuing a fine. Programmers have long used various methods to measure the complexity of a computer program, counting not only lines of code but also the number of variables and of logical and mathematical operators used, and taking into account the logical structure of the program.
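As a purely illustrative sketch (not the authors' formalism), a simple rule can indeed be written as a small program that maps an institution's reported data to a regulatory decision; the rule, field names, and threshold below are hypothetical:

```python
# Purely illustrative: a regulation viewed as a program that takes a financial
# institution and its actions as input and returns a regulatory decision as output.
# The rule, field names, and threshold are hypothetical.

def position_limit_rule(institution: dict) -> str:
    """Input: an institution's reported data. Output: a regulatory decision."""
    if institution["net_position"] > institution["position_limit"]:
        return "issue a fine"
    return "do nothing"

print(position_limit_rule({"net_position": 120.0, "position_limit": 100.0}))  # -> issue a fine
```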

Applying similar ideas to the study of financial regulation, Colliard and Georg put forward ways of measuring regulatory complexity that take into account the number of different instructions given by the regulator, the number of different economic variables mentioned in the regulation (number of financial products, types of regulated agents, etc.), and how many times the same variables and instructions are used.
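In the spirit of classic software metrics (an analogy, not the authors' exact definitions), such counts could be combined along the following lines; the formula and category names below are hypothetical:

```python
# Hypothetical illustration of combining such counts, in the spirit of classic
# software metrics (e.g. Halstead's measures); not the authors' actual definitions.
from math import log2

def complexity_measures(instructions: list[str], economic_variables: list[str]) -> dict:
    n1, n2 = len(set(instructions)), len(set(economic_variables))  # distinct items
    N1, N2 = len(instructions), len(economic_variables)            # with repetition
    vocabulary, length = n1 + n2, N1 + N2
    volume = length * log2(vocabulary) if vocabulary > 1 else 0.0
    return {"vocabulary": vocabulary, "length": length, "volume": round(volume, 1)}

# A text that repeats the same instruction is long but uses a small vocabulary:
print(complexity_measures(["must report", "must report", "must report"],
                          ["bank", "exposure", "bank"]))
# -> {'vocabulary': 3, 'length': 6, 'volume': 9.5}
```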



Financial regulation can be considered as a computer program



In particular, these measures take into account the repetitive nature of certain regulations. For instance, Basel I mainly consists of a list of asset types that are assigned different risk-weights in the computation of total risk-weighted assets. While the text is long because of the number of asset types considered, it repeats the same logical structure multiple times, and is thus less complex than a text of similar length that would consist of completely different regulatory instructions. 
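As a stylized illustration of this point (the asset-to-weight mapping below is simplified, though the 0%, 20%, 50% and 100% buckets are Basel I's broad categories), the risk-weighting step boils down to one operation repeated over many asset types:

```python
# Stylized Basel I risk-weighting: the same logical structure ("multiply each
# exposure by its risk weight and sum") repeated over many asset types.
# The asset-to-weight mapping is simplified for illustration.

BASEL_I_RISK_WEIGHTS = {
    "cash_and_oecd_sovereigns": 0.00,
    "interbank_claims": 0.20,
    "residential_mortgages": 0.50,
    "corporate_loans": 1.00,
}

def risk_weighted_assets(exposures: dict[str, float]) -> float:
    # A long list of cases, but a single repeated instruction: low structural complexity.
    return sum(BASEL_I_RISK_WEIGHTS[asset] * amount for asset, amount in exposures.items())

print(risk_weighted_assets({"residential_mortgages": 40.0, "corporate_loans": 60.0}))  # 80.0
```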

Another interesting dimension that can be captured is the “level” of a regulation. Defining new concepts such as “risk-weights” or the “standardized approach” and then referring to them makes a regulatory text such as Basel II shorter than if their meaning had to be spelled out each time it is needed. However, multiplying the number of such regulatory concepts introduces a whole new specialized vocabulary, making the text much less transparent to non-experts. There is thus a trade-off between conciseness and transparency, whose terms can be measured empirically.
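A toy example of this trade-off, with hypothetical names: the same rule can be written with everything spelled out, or by defining a concept once and reusing it, which is shorter but opaque to anyone who does not already know the term.

```python
# Hypothetical illustration of the conciseness/transparency trade-off.

def capital_ratio_spelled_out(capital, exposures, weights):
    # Lower "level": the computation is written out in full where it is used.
    return capital / sum(weights[a] * x for a, x in exposures.items())

def risk_weighted_assets(exposures, weights):
    # A defined regulatory concept, analogous to a named function.
    return sum(weights[a] * x for a, x in exposures.items())

def capital_ratio_with_defined_concept(capital, exposures, weights):
    # Higher "level": shorter, but transparent only to readers who already
    # know what "risk-weighted assets" means.
    return capital / risk_weighted_assets(exposures, weights)

weights = {"mortgage": 0.5, "corporate": 1.0}
exposures = {"mortgage": 40.0, "corporate": 60.0}
assert capital_ratio_spelled_out(8.0, exposures, weights) == \
       capital_ratio_with_defined_concept(8.0, exposures, weights)
```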

The authors are currently using the U.S. Dodd-Frank Act and the different Basel agreements as a testing ground for their approach. The idea is to use these important texts to generate a first list of the different words used in regulatory texts, classified according to their function. This list can then be used to produce different measures of regulatory complexity. 

Eventually, the researchers plan to develop an online platform on which users could submit new regulatory texts for measurement of complexity, and could also update the word list collaboratively. This platform could then become an important tool to test proposed regulations and identify elements that may need simplification. Another important step for the authors is the “backtesting” of their complexity measures. Ideally, the text-based measures should predict the actual costs of regulatory complexity, both for the agents subject to the regulations and for the supervisory authorities in charge of applying them. The research team is already collaborating in this direction with the French financial markets authority AMF (Autorité des Marchés Financiers) and the banking authority ACPR (Autorité de Contrôle Prudentiel et de Résolution). The next step will be to engage with market participants in order to link their compliance costs to specific pieces of financial regulation.

Practical Applications

Measures of algorithmic complexity have proven useful in making programming more efficient. Similarly, measures of regulatory complexity could help reduce the complexity of regulatory texts by identifying parts of proposed regulations that seem overly complex. Regulators face a trade-off between comprehensiveness and complexity. While economists are usually good at measuring the benefits of enriching the regulatory framework, it will be difficult to strike the right balance if no measure exists for the associated costs in terms of complexity.

Methodology

Colliard and Georg classify the words used in regulatory texts according to their function, both manually and using machine-learning techniques. A given word can, for instance, be a “regulatory operator” (such as “must” or “has to”), a “logical operator” (such as “if” or “whenever”), an “economic operand” (“bank”, “hedge fund”), and so on. The parallel with algorithmic complexity suggests different measures based on the count of words in different categories, both with and without repetition.
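A minimal sketch of what such a classification-and-counting step could look like, assuming a hand-built dictionary of categorized words; the word list is a toy, and multi-word terms such as “hedge fund” would require phrase matching rather than simple tokenization:

```python
# Illustrative sketch only: classify tokens by function and count them with and
# without repetition. The dictionary is a toy; a real word list would be much larger,
# partly machine-learned, and would handle multi-word terms like "hedge fund".

WORD_CATEGORIES = {
    "must": "regulatory_operator", "shall": "regulatory_operator",
    "if": "logical_operator", "whenever": "logical_operator",
    "bank": "economic_operand", "derivatives": "economic_operand",
}

def count_by_category(tokens: list[str]) -> dict:
    counts: dict[str, dict[str, int]] = {}
    for token in tokens:
        category = WORD_CATEGORIES.get(token)
        if category is not None:
            counts.setdefault(category, {})
            counts[category][token] = counts[category].get(token, 0) + 1
    # "total" counts repetitions; "distinct" does not.
    return {cat: {"total": sum(c.values()), "distinct": len(c)}
            for cat, c in counts.items()}

text = "a bank must report whenever the bank holds derivatives if thresholds are met"
print(count_by_category(text.lower().split()))
# -> {'economic_operand': {'total': 3, 'distinct': 2}, 'regulatory_operator': ..., ...}
```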