
Can the CFTC Improve Swaps Data?


By Tod Skarecky, Clarus Financial Technology

Originally published on TABB Forum

The Commodity Futures Trading Commission has been receiving swap trade reports for a few years, but it can’t seem to make sense of some, or much, of it. In order to fulfill the Commission’s regulatory mandates of monitoring systemic risk, market abuse and general oversight, the data has to be much better. The Commission realizes that it needs more details than it originally asked for and that the currently requested data is not being reported well enough. Unfortunately, perfect quality data may not be attainable.

The CFTC wants to make Swap Data Repository data better.

On Dec. 22, 2015, when the financial markets had visions of sugarplums dancing in their collective heads, the CFTC snuck out a document asking the industry to comment on plans to increase reporting requirements for swaps.

The document can be found here. At 68 pages (17 of which are an appendix), it is one of the more digestible documents I have seen come out of a regulatory authority. Let me summarize all 68 pages in bullet points (please understand that I am taking some journalistic liberties in my interpretation, but these liberties are based on our own initiatives at Clarus over the past three years to understand and normalize SDR trade reporting):

  • The Commission has been receiving swap trade reports for a few years.
  • The Commission can’t make sense of some, or much, of it.
  • In order to fulfill the Commission’s regulatory mandates of monitoring systemic risk, market abuse and general oversight, the data has to be much better:
    • The Commission realizes that describing a swap requires more details than they originally asked for.
    • The currently requested data is not being reported well enough.
  • The document poses 80 questions about how to enrich the data, and what would happen if the regulator asked for a bunch of new data:
    • Better LEI data
    • Better product data
    • Nitpicky (yet very valid) details on price, notional amounts, schedules, options, orders, packages, and clearing attributes
    • Cleaner post-trade data. I can only infer that even if a trade was originally reported richly enough (which is not a good assumption), very typical events such as rate resets, amendments, compressions, terminations, etc., mean that the SDR data is outdated as soon as anything happens to the trade.

The Clarus Response

We at Clarus responded to the CFTC request. You can read the full response here.

It is only 3 pages, but if you are in a rush, the gist of our response is:

  • A handful of specifics on particular areas of frustration that we have had in gleaning information off of SDR data:
    • Clearing House needs to be included, as this is price-forming (A CME swap != an LCH swap).
    • Swaps with variable notional schedules, variable price schedules, and variable spreads are quite common and unintelligible on the SDR tape. The number of trades flagged with “OTHER PRICE AFFECTING TERM” (a catch-all for “we’re not telling you enough about this trade to make sense of it”) is far too large, and the flag itself is too much of a cop-out (a quick screening sketch follows this list).
    • Many options are a mess in the data; much of this feedback came from my blog on FX Options.
    • We support initiatives that would help understand package transactions better.
    • We support any initiative that would give greater transparency into the method of execution (RFQ vs CLOB).
  • The CFTC may be well served in adopting existing standards in the marketplace (e.g., FpML).
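
To make the first two bullets above concrete, here is a rough sketch of the kind of screening we end up doing on the public Part 43 tape. The file name and column names are illustrative assumptions on my part (each SDR publishes its own field layout), so treat it as a sketch rather than a recipe:

```python
# Rough sketch only: the file name and column names below are assumptions,
# not the actual layout of any particular SDR's Part 43 dissemination file.
import pandas as pd

trades = pd.read_csv("sdr_part43.csv")  # hypothetical daily Part 43 extract

# Trades hiding behind the catch-all flag: we know something price-forming
# is missing, but not what.
catch_all = trades[trades["OTHER_PRICE_AFFECTING_TERM"] == "Y"]

# Cleared trades with no clearing venue: a CME swap and an LCH swap trade at
# different levels, so without the venue the reported price is hard to use.
no_venue = trades[(trades["CLEARED"] == "C") & (trades["CLEARING_VENUE"].isna())]

print(f"{len(catch_all) / len(trades):.1%} of records carry the catch-all flag")
print(f"{len(no_venue) / len(trades):.1%} of cleared records omit the clearing venue")
```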

Good Luck

My initial reaction to this CFTC document is similar to my initial reaction 5 years ago when I first came across the SDR reporting mandate. You see, I have spent more than 20 years in swaps financial technology (I’m a hit at cocktail parties), and I have personally been on projects at financial institutions where I learned that modeling all of a single institution’s OTC derivative trades, particularly anything approaching bespoke, is not possible. Let me clarify:

  • Trying to model all trades and lifecycle events within a single asset class on a single vendor’s technology can take years of effort even when done by experts from that vendor.
  • Many firms give up on the more exotic swaps and maintain spreadsheets or other means for these bespoke products.
  • Manual processes are required to maintain and reconcile a firm’s own trade data throughout the lifecycle.
  • There is very little scale in modeling multiple asset classes. For example, an exotic equity swap and a binary FX option present very distinct challenges in modeling, pricing and lifecycle, so the effort is additive; you don’t just implement FX spot and think that you can now tackle swaptions.
  • Taking this same technology to another firm to model the same asset class takes the same effort yet again.

So if it takes years to try to model and maintain a portfolio of trades for just one firm, how long would it take for a third party (a government agency, no less) to model every trade dealt in America? And just to handcuff you more – these are not your trades, so you do not have the same degree of interest and feedback loop to make sure they are correct.

Unfortunately, I believe the answer is that it is not possible to ever model every trade and keep the image of each trade up to date.

All or Nothing

To make things worse, I believe every regulatory agency (not just the CFTC) will suffer from one critical problem when it comes to market and credit risk monitoring of bespoke derivatives:

All it takes is one wrong trade and your risk analysis is garbage.

Let’s be optimistic and assume that after this comment letter, rule changes, and a couple more years of tinkering and improvement in data quality (at great cost to the industry), we get to a point where 99.99% of trades are accurately reported, rich enough, and maintained throughout their lifecycles. (Note: This would be a huge stretch.)

Even if this was accomplished, I would have no faith that someone could look at an LEI’s position in credit derivatives, FX, commodities, rates or equities, and determine if something looked excessive or not.
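
To put a rough number on the problem (my own back-of-the-envelope illustration, not anything from the CFTC document): even at 99.99% per-trade accuracy, a book of 10,000 swaps has only about a 37% chance of being entirely clean, and a larger book has essentially none.

```python
# Back-of-the-envelope: probability that a book is entirely clean when each
# trade is reported correctly with probability p. Purely illustrative numbers.
p = 0.9999                      # assumed per-trade reporting accuracy
for n in (1_000, 10_000, 50_000):
    clean = p ** n              # chance that every single trade is correct
    print(f"{n:>6} trades: {clean:.1%} chance the risk picture is uncontaminated")
```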

The Good News

The good news is that while 99.99% accuracy would not allow a risk manager, trader, or regulator to make sense of the risk in a portfolio, the CFTC can still accomplish a great deal of its stated goals related to market and trade practice surveillance with improvements to the data.

Further, every incremental improvement in the data should result in incremental improvements in transparency in the publicly disseminated data (Part 43 trade data).

Last, I believe we need to commend the CFTC, as it is the only regulator that is giving us (the public) any good swaps trade data whatsoever. Look no further than our blogs on what we can see out of trade repositories in the rest of the world (such as Europe’s public feed), and you’ll realize the CFTC is the most progressive agency in the world in supporting G20 swaps reform.

Oh, No, Not the Blockchain

So when you take a step back and look at what the CFTC really would like access to – every trade in its jurisdiction – it raises the question: why doesn’t it just use what banks already use for themselves, for example FpML, ISO, SWIFT, FIX, etc.? These standards exist today because the industry created languages and protocols to communicate transactions amongst themselves.

Taking this one step further, the ideal solution is not to re-create languages, protocols, and another database to keep in sync, but to leverage the existing industry languages, protocols and databases. The upside of doing such a thing is that if a trade is not described accurately in the master industry database, banks and other firms will invest the time and effort to fix it, because it will directly impact their bottom lines.
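
As a small illustration of how much structure the industry’s own formats already carry, here is a sketch that pulls the economics straight out of an FpML-style fragment. The fragment is hand-written and heavily simplified; the element names only approximate FpML’s interest rate swap schema, which is namespaced and far richer, so read it as a sketch of the idea rather than real FpML:

```python
# Sketch: reading economics straight out of an FpML-style document rather
# than out of a flattened reporting record. The fragment is hand-written and
# heavily simplified; element names only approximate the real FpML schema.
import xml.etree.ElementTree as ET

FRAGMENT = """
<swap>
  <swapStream id="fixedLeg">
    <calculationPeriodAmount>
      <calculation>
        <notionalSchedule>
          <notionalStepSchedule>
            <initialValue>100000000</initialValue>
            <currency>USD</currency>
          </notionalStepSchedule>
        </notionalSchedule>
        <fixedRateSchedule>
          <initialValue>0.0175</initialValue>
        </fixedRateSchedule>
      </calculation>
    </calculationPeriodAmount>
  </swapStream>
</swap>
"""

swap = ET.fromstring(FRAGMENT)
notional = swap.findtext(".//notionalStepSchedule/initialValue")
currency = swap.findtext(".//notionalStepSchedule/currency")
fixed_rate = swap.findtext(".//fixedRateSchedule/initialValue")
print(f"Fixed leg: {float(fixed_rate):.4%} on {currency} {float(notional):,.0f}")
```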

If you’ve followed any of the blockchain news in the industry, the general story goes like this:

  • Counterparties A and B transact a deal as usual (phone, electronic, instant messaging, airmail, etc.).
  • This trade is written to an industry database (a blockchain) so that A and B (and other permissioned bodies) can see it.
  • The trade is a “smart contract,” insomuch as coupons are paid out on coupon dates, exercises occur at maturity, etc.
  • Any changes to the data are additive – the database has the full history of every trade.

I might be the only person to describe (poorly) the blockchain in 4 bullet points – but the point is hopefully clear: if you ever want to get to complete accuracy, you’re going to have to look at the real data – not some attempt to regurgitate it into 120 fields.
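
To make the “additive” point concrete, here is a toy sketch of an append-only trade history (the event names and fields are made up, and this is nothing more than a hash-chained log, not any particular blockchain platform). The current state of a trade is whatever you get by replaying its events, and nothing is ever overwritten:

```python
# Toy sketch: an append-only trade history where every lifecycle event is a
# new record chained to the previous one, and the current state is obtained
# by replaying the events. Event names and fields are invented.
import hashlib
import json

def append_event(chain, event):
    """Append an event, chained to the hash of the previous entry."""
    prev_hash = chain[-1]["hash"] if chain else "genesis"
    entry = {"prev_hash": prev_hash, "event": event}
    entry["hash"] = hashlib.sha256(
        json.dumps({"prev_hash": prev_hash, "event": event}, sort_keys=True).encode()
    ).hexdigest()
    chain.append(entry)

def current_state(chain):
    """Replay the full history to see the trade as it stands today."""
    state = {}
    for entry in chain:
        state.update(entry["event"])
    return state

history = []
append_event(history, {"trade_id": "T1", "type": "new", "notional": 100_000_000, "fixed_rate": 0.0175})
append_event(history, {"trade_id": "T1", "type": "amendment", "notional": 75_000_000})
append_event(history, {"trade_id": "T1", "type": "partial_termination", "notional": 50_000_000})

print(current_state(history))    # the live picture of the trade
print(len(history), "events")    # and the complete audit trail behind it
```

The appeal of such a model for a regulator is that the record being supervised is the same record the counterparties themselves rely on, with its full history intact.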

Summary

We at Clarus have personally witnessed the birth and emergence of trade reporting around the world, and for certain portions of the data within the US SDRs, it has proved tremendously useful and informative. The CFTC deserves credit for all of the progress here.

The data, however, is not perfect. Depending on what asset class and product type you choose to evaluate, you might say that 80% of the data is good (such as for vanilla IRD), or 50%, or 0% (I am thinking of binary FX options). I am also speaking about the public Part 43 data here – I can only imagine that the private Part 45 data is markedly worse, given the lifecycle events.

Regardless of where on the quality spectrum you lie, incremental improvements are possible. Going from 0% to 50% should be relatively easy – for example, requiring a couple of fields that are currently missing. But as you move up that quality spectrum – say, going from 80% to 81%, or 99% to 100% – each step gets progressively harder, and the last one, I would say, is impossible.

We at Clarus support the CFTC in its efforts to improve the quality of the data for public consumption. However, perfect quality data will only happen once a regulator is looking at the same data that is used by the banks it regulates.

I think we’re many years away from there.