Time to move beyond fixed data model technology

AxiomSL | Inside View Data Model

May 6, 2016 – By Deepak Aravindan, Head of EMEA Solutions Team, AxiomSL

Many firms appear to have resigned themselves to the idea that, in order to produce their regulatory calculations and reports, they must rely on a vendor tool with a fixed data model. However, regulations are now changing so frequently that this model is no longer just inefficient – it is becoming unsustainable. The time has come for firms to question the received wisdom that there is no other way of doing things.

When set down on paper, the shortcomings of fixed data model technology are so obvious and so numerous that it is remarkable this approach has been accepted for so long. Here is a quick overview of some of the main issues:

  • Long time to market: Firms that use fixed data model technology need to convert their data into the format mandated by their vendor. This means mapping the formats used in their internal systems onto the format specified by their provider, and then programming an extract, transform, load (ETL) tool to transform the data accordingly (a minimal illustration of this mapping burden follows after this list). This not only increases capital and operational costs significantly, but also leads to a much longer time to market.
  • Lack of transparency: The unavoidable use of an ETL layer makes it impossible for users to drill down into their data and understand how it has been transformed. From the point at which the data is loaded onto the ETL tool, it is no longer in a format that users understand – instead it is in the vendor’s mandatory format. The fact that the data is processed by two applications (the ETL tool and the regulatory calculation/reporting tool) also makes it more difficult for users to access clear and accurate data lineage information.
  • Poor change management: Firms that use a tool with a fixed data model find it extremely difficult to adapt when their regulatory reporting requirements change. This is because their provider must issue a new version of its data model to accommodate any additional data requirements (such as new columns in a regulatory report). To implement an updated data model, firms need to undertake full regression testing, a time-consuming process that increases operational costs and risks breaking existing logic that is still valid.
  • Inflexibility: Fixed data model technology also restricts the ability of individual departments (risk, finance, treasury etc.) to implement changes that are important for their work. Because these departments are bound together by the same data model, they must take it in turns to make even the slightest amendments. This is a serious barrier for functions that sit at the heart of every financial firm.
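To make the mapping burden concrete, here is a minimal Python sketch of the kind of field-by-field conversion a fixed data model forces on every source system before data can even be loaded. All field names, formats and values are hypothetical and purely illustrative.

```python
# Minimal sketch of the mapping burden a fixed data model imposes.
# All field names, formats and values are hypothetical illustrations.

from datetime import datetime

# Every internal source system needs its own mapping into the
# vendor-mandated schema before the data can be loaded.
INTERNAL_TO_VENDOR = {
    "trade_id":        "TXN_REF",
    "notional_amount": "NOTIONAL",
    "trade_date":      "VALUE_DATE",
    "counterparty":    "CPTY_CODE",
}

def to_vendor_format(record: dict) -> dict:
    """Transform one internal record into the vendor-mandated format."""
    out = {vendor: record[internal]
           for internal, vendor in INTERNAL_TO_VENDOR.items()}
    # Vendor-specific formatting rules also have to be re-implemented here,
    # e.g. dates rendered as YYYYMMDD strings instead of ISO dates.
    out["VALUE_DATE"] = datetime.fromisoformat(out["VALUE_DATE"]).strftime("%Y%m%d")
    return out

internal_record = {
    "trade_id": "T-1001",
    "notional_amount": 5_000_000,
    "trade_date": "2016-05-06",
    "counterparty": "CP-42",
}

print(to_vendor_format(internal_record))
# Any change to the vendor's data model means revisiting this mapping for
# every source system and re-running full regression tests.
```

Multiply this mapping by every source system and every vendor schema release, and the cost and time-to-market implications become clear.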

Many firms have been putting up with the above shortcomings for so long that they appear to have given up on the possibility of overcoming them entirely. However, the truth is they can avoid all of these issues by using a regulatory calculation/reporting tool with a flexible data model – i.e. a tool that allows them to adopt a more flexible approach to modelling data for both regulatory and internal purposes. This type of set-up has many benefits, including:

  • Short time to market: Financial firms that use a regulatory calculation/reporting tool with a flexible data model can load their data in its existing format – they do not need to spend months setting up complex ETL processes to transform the data into a specified format. This results in a much shorter time to market, which is particularly advantageous when new regulations are being rolled out at a furious pace. By obviating the need for ETL, the flexible data model approach also allows firms to reduce their implementation costs substantially.
  • Data quality: A regulatory tool with a flexible data model allows users to transform their data within the solution rather than through an ETL process that sits outside the framework. This is where the term ELT (extract, load and transform) comes from: end-users have the flexibility to perform any required transformations and to automate the entire process (a minimal illustration of this pattern follows after this list). They can analyze data and data quality across multiple source systems using management information (MI) reports and dashboards, and can drill down all the way from the data quality reports to the data sources and make adjustments if required.
  • Complete transparency: A regulatory calculation/reporting tool with a flexible data model allows users to see clearly and understand fully how their data is being used to produce their regulatory reports. This is possible because the data is loaded onto the tool in its existing format, with which users are familiar and comfortable. The fact that the data is maintained in a single application (the calculation/reporting tool) rather than across two (an ETL tool and a separate calculation/reporting tool) also results in much clearer data lineage information and enhanced transparency. End-users have full visibility of their data’s journey and can drill down all the way from the reports to the data sources.
  • Adaptability: By using a regulatory calculation/reporting tool with a flexible data model, firms can react much more quickly to regulatory changes, because users do not have to wait for their provider to issue an updated data model every time their reporting requirements change (no matter how small the change). Instead, they can quickly add new or updated fields or templates without undertaking full regression testing and without the risk of breaking unrelated logic. A flexible data model also gives individual departments more autonomy: rather than waiting in turn to make changes to a single fixed data model, each department can be given a separate copy of the data to work from, freeing it to make the changes it requires without delay.
  • Support for multiple regulatory and other requirements: A regulatory calculation/reporting tool with a truly flexible data model makes it possible to load and process the data required for multiple regulations – firms do not need a separate application with a separate data model for each regulatory requirement. Similarly, such a tool allows firms to load additional data sources that are not required for regulatory compliance, combine them with their regulatory data, and use the result to create valuable business intelligence (BI) and management information (MI) reports.
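To illustrate the ELT pattern referenced above, here is a minimal Python sketch in which data is loaded in its original source format and the transformation rules are defined inside the reporting layer, preserving a lineage trail back to the source rows. All names, fields and rates are hypothetical and purely illustrative.

```python
# Minimal sketch of the ELT pattern: data is loaded as-is, and any
# transformations live inside the reporting layer, with each report line
# keeping a reference back to its source row.
# All names, fields and rates are hypothetical illustrations.

raw_trades = [
    # Loaded in the source system's own column names, no pre-mapping.
    {"trade_id": "T-1001", "notional_amount": 5_000_000, "currency": "EUR"},
    {"trade_id": "T-1002", "notional_amount": 2_500_000, "currency": "USD"},
]

FX_TO_EUR = {"EUR": 1.0, "USD": 0.88}  # illustrative rates only

# Derivation rules sit alongside the report definition, so end-users can
# see and amend them, and a new derived field is a local addition rather
# than a change to a vendor-mandated schema.
DERIVATIONS = {
    "exposure_eur": lambda row, fx: row["notional_amount"] * fx[row["currency"]],
}

def build_report(rows):
    report = []
    for row in rows:
        derived = {name: rule(row, FX_TO_EUR) for name, rule in DERIVATIONS.items()}
        # Keeping the source row alongside the derived values lets users
        # drill down from a report figure to the original data.
        report.append({"source": row, **derived})
    return report

for line in build_report(raw_trades):
    print(line)
```

Because the derivation rules are defined within the solution rather than in a separate ETL layer, adding or amending a derived field does not require re-mapping the whole feed or re-testing unrelated logic.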

Many practitioners cannot believe that flexible data model technology exists – they assume that all providers in the regulatory space offer a variation on the fixed data model approach. But the type of technology I have described does exist and is being used by firms to tackle many of the most pressing technical and operational challenges today. Now is the time for more firms to reject the status quo and embrace a new and better way of doing things.

To discuss this article further, please contact:

emea@axiomsl.com