Swap Data Risk Rising - Q&A with DTCC's Marisol Collazo

Marisol Collazo knows a thing or two about data quality. As CEO of DTCC’s Data Repository, she is responsible for warehousing trade data and related information from more than 5,000 clients representing roughly 100,000 accounts across the Americas, Asia and Europe. So when she said she was worried about the potential systemic risks that could arise from the way data is currently reported to swap data repositories (SDRs), it got our attention.

We spoke with Collazo about the current state of swap data reporting and her view that the lack of global harmonization in data reporting standards could create systemic risk.

DerivAlert: Could you tell us a bit about DTCC’s role in swap data reporting?

Marisol Collazo: DTCC has been aggregating and standardizing swaps data since long before it was required by Dodd-Frank. We launched our Trade Information Warehouse for credit derivatives in 2003, and by the time of the financial crisis we already had 99% of the world’s credit derivatives trade data accounted for. In fact, this data was used following the collapse of Lehman Brothers to quickly establish the firm’s exposure to the credit default swap market. Original estimates ran as high as $400 billion notional, but our Trade Information Warehouse records proved it was actually $5.2 billion, which helped to calm financial markets.

Now, post-Dodd-Frank, our role is to collect swaps data for all asset classes prescribed by regulation for thousands of institutions worldwide. Essentially, we enable market participants to report trade information and provide this data to regulators and the public to create market transparency.

DA: What’s changed from when you were collecting data for the Trade Information Warehouse and now that you are collecting data as an SDR under Dodd-Frank?

MC: The biggest difference is that when we were collecting data for the Trade Information Warehouse, the trades were standardized and payment processing was facilitated from matched records, so there was absolute certainty that the data was accurate. Today, under Dodd-Frank, the required reporting fields and the scope of products are broader and don’t necessarily tie into existing market structure. It’s a much more fragmented process due to regional differences and the absence of standards. The Trade Information Warehouse is self-policing when it comes to accuracy, because money moves on matched records. In the new scenario, we’re collecting data but don’t have the information needed to determine whether it’s correct. The only people who can be relied on to ensure the information is accurate are the contributing entities themselves.
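To make the matched-record distinction concrete, here is a minimal Python sketch of how pairing both counterparties’ submissions under the same trade ID exposes discrepancies that a single-sided report cannot. The records, field names, and matching rule are hypothetical illustrations, not DTCC’s actual process.

    # Hypothetical single-sided submissions keyed by trade ID (USI).
    # Field names and values are illustrative, not an actual SDR schema.
    submissions = [
        {"usi": "USI-0001", "reporter": "Bank A", "notional": 25_000_000},
        {"usi": "USI-0001", "reporter": "Fund B", "notional": 20_000_000},
        {"usi": "USI-0002", "reporter": "Bank A", "notional": 10_000_000},
    ]

    def reconcile(records):
        """Pair submissions by trade ID and flag unmatched or inconsistent ones."""
        by_usi = {}
        for rec in records:
            by_usi.setdefault(rec["usi"], []).append(rec)
        for usi, recs in by_usi.items():
            if len(recs) != 2:
                print(f"{usi}: only one side reported -- accuracy cannot be confirmed")
            elif recs[0]["notional"] != recs[1]["notional"]:
                print(f"{usi}: counterparties disagree on notional "
                      f"({recs[0]['notional']} vs {recs[1]['notional']})")
            else:
                print(f"{usi}: matched")

    reconcile(submissions)
    # USI-0001: counterparties disagree on notional (25000000 vs 20000000)
    # USI-0002: only one side reported -- accuracy cannot be confirmed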

DA: Can you give us a scenario in which this lack of a system of checks-and-balances could become a problem?

MC: When we talk about data quality, we first have to start with a definition. All data quality initiatives have two parts: 1) standards and validation, and 2) accuracy of content. We have part one covered under Dodd-Frank as it relates to some fields. Standards have emerged to support reporting, such as FpML, LEIs (Legal Entity Identifiers), ISDA product taxonomies, and trade IDs (also known as Unique Swap Identifiers), that allow the marketplace to quickly identify each data field and the value of the trade; if it all checks out, we accept the record and move on. But you also need the second part. It is still possible today to provide a valid LEI but an incorrect counterparty name, or to provide the right values and still give the wrong notional outstanding. There is no external information available to validate that data; it can only be confirmed by the counterparties to the trade.
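To illustrate the two parts Collazo describes, here is a minimal Python sketch. Part one, structural validation, uses the ISO 17442 LEI format and its ISO 7064 MOD 97-10 check digits; part two, accuracy of content, is exactly what such a check cannot establish. The submitted record and its field names are hypothetical.

    def lei_is_well_formed(lei: str) -> bool:
        """Part one: structural check of an LEI (20 alphanumeric characters,
        check digits verified with ISO 7064 MOD 97-10, per ISO 17442)."""
        if len(lei) != 20 or not lei.isalnum():
            return False
        # Map letters to numbers (A=10 ... Z=35) and apply the mod-97 test.
        numeric = "".join(str(int(ch, 36)) for ch in lei.upper())
        return int(numeric) % 97 == 1

    # Hypothetical submitted record: the LEI is well-formed, so the record
    # passes validation -- but nothing in the record tells us whether the
    # counterparty name or the notional outstanding is correct (part two).
    record = {
        "counterparty_lei": "5493001KJTIIGC8Y1R12",  # format-valid sample value
        "counterparty_name": "Acme Trading LLC",     # unverifiable from this record
        "notional_outstanding": 25_000_000,          # unverifiable from this record
    }

    print(lei_is_well_formed(record["counterparty_lei"]))  # True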

DA: So, what’s the solution?

MC: Data quality needs to be everyone’s responsibility. Right now, the SEC places the responsibility for data accuracy predominantly with the trade repository, which cannot possibly know whether all of the data is accurate. So there’s an inherent problem in the current process. Regulators need to support and promote global standards so that market participants and trade repositories can build stronger validation controls into their systems to support data quality. Further, regulators should be conscious of the market structure that delivers data to the repository and promote policies that enable repositories to leverage this information. Firms can then work with market structure providers and reconcile their data against the repository to ensure accuracy.

We are focused on a foundational requirement that those submitting data and global regulators take responsibility for data quality, and develop a harmonized set of rules to standardize that process worldwide. 

To get this moving, we issued a proposal to CPMI-IOSCO suggesting that we harmonize approximately 30 data fields across global trade repository providers, essentially creating a global data dictionary. We believe it is important to focus on these 30 core fields first, because they address foundational data such as the economics of the trade and the underlying entity. We can then move on to jurisdictional data and other technicalities that have historically held up efforts to harmonize reporting.
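The interview does not enumerate the proposed fields, so as a purely hypothetical illustration, a harmonized global data dictionary might assign each core field one agreed name, type, and format that every trade repository validates identically. The field names and rules below are assumptions, not the actual CPMI-IOSCO list.

    import re

    # Hypothetical excerpt of a harmonized data dictionary: one agreed name,
    # type, and format per core field, enforced the same way everywhere.
    CORE_FIELDS = {
        "unique_transaction_id": {"type": str},
        "counterparty_1_lei":    {"type": str, "pattern": r"^[A-Z0-9]{20}$"},
        "counterparty_2_lei":    {"type": str, "pattern": r"^[A-Z0-9]{20}$"},
        "notional_amount":       {"type": (int, float)},
        "notional_currency":     {"type": str, "pattern": r"^[A-Z]{3}$"},          # ISO 4217
        "effective_date":        {"type": str, "pattern": r"^\d{4}-\d{2}-\d{2}$"}, # ISO 8601
    }

    def validate(record: dict) -> list:
        """Return the validation errors for a record against the shared dictionary."""
        errors = []
        for name, rule in CORE_FIELDS.items():
            value = record.get(name)
            if value is None:
                errors.append(f"missing field: {name}")
            elif not isinstance(value, rule["type"]):
                errors.append(f"wrong type for {name}")
            elif "pattern" in rule and not re.match(rule["pattern"], value):
                errors.append(f"bad format for {name}")
        return errors

With every jurisdiction pointing at the same dictionary, a given record either passes or fails validation everywhere in the same way, which is the kind of consistency the harmonization effort is after.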

DA: What’s been the response to your proposal?

MC: The industry is generally supportive of this approach. ISDA and 11 trade associations have come out in support of improved data quality, suggesting a similarly simplified approach to data reporting. However, the fact is that right now this issue is still unresolved, and, as a result, we are not meeting the G20 goals on trade repositories when it comes to global market transparency and the identification of systemic risk.

Increasingly, though, we’re seeing the practical side of this issue outweigh the political side, and I’m optimistic that we’re going to start seeing progress soon.