Are Data Centers that Host Exchanges Utilities?

Swedish regulators are seeking to fine Nasdaq OMX for alleged anti-competitive practices in the Nordic colocation business. This case is fairly limited in scope, but it raises some more general questions about colocation.

HFTs, execution algorithms, and smart order routers rely on exchange colocation to provide cost-effective and fair access to market centers. Locking competitors out of an established data center could easily destroy their businesses. The Swedish Competition Authority alleges that Nasdaq OMX did just that:

In 2009, a Stockholm-based multilateral trading platform called Burgundy was launched. Burgundy was formed by a number of Nordic banks. The Burgundy ownership structure gave the platform a large potential client base, especially with respect to brokers. However, the owners had limited possibilities of moving trade from Nasdaq to Burgundy as long as the trade on Burgundy was not sufficiently liquid to guarantee satisfactory order execution. In order to increase the liquidity on Burgundy, it was vital for Burgundy to get more trading participants…

In order to come into close physical proximity with the customers’ trading equipment in Lunda, Burgundy decided to move its matching engine to the data centre in Lunda…

Burgundy had finalised negotiations with Verizon, via their technology supplier Cinnober, and the parties had agreed that space would be leased in Lunda for Burgundy’s matching engine. When Nasdaq heard of this agreement, they contacted Verizon demanding to be the sole marketplace offering trading services in Nordic equities in Lunda. Nasdaq told Verizon that if Verizon allocated space to the Burgundy matching engine at their data centre in Lunda, Nasdaq would remove their own primary matching engine and their co-location service from that centre. Such an agreement with Burgundy/Cinnober could also have an impact on Verizon’s global collaboration with Nasdaq. Verizon accepted Nasdaq’s demands, and terminated the deal with Burgundy/Cinnober.

Latency Arbitrage and Traders’ Expenses

In the US, a lot of people are upset that equity exchanges are located all over New Jersey, instead of being in one building. Michael Lewis’s primary complaint about HFT is that it engages in “latency arbitrage” by sending orders between market centers ahead of anticipated institutional trades. I suspect that, in addition to those concerned about latency arbitrage, most HFTs would also be happy if important exchanges moved to one place. That would cut participants’ expenses significantly; there would be no need to host computers (and backups) at multiple locations and no need to procure expensive fiber and wireless links between locations. It would also allow HFTs to trade a given security on multiple exchanges from a single computer, dramatically simplifying risk checks and eliminating accidental self-trading.
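
To make the single-building point a bit more concrete, here is a minimal sketch, in Python, of the kind of self-trade check that becomes straightforward once all of a firm’s orders for a symbol flow through one process. The venue names, fields, and gateway class are invented for illustration; real systems (and exchange-provided self-trade-prevention features) are far more involved.

```python
# Hypothetical sketch: a single gateway that sees all of a firm's resting
# orders across venues can refuse to send an aggressive order that would
# cross the firm's own quote. Venue names and fields are illustrative only.
from dataclasses import dataclass

@dataclass
class RestingOrder:
    venue: str      # e.g. "EXCH_A", "EXCH_B" (hypothetical venues)
    symbol: str
    side: str       # "buy" or "sell"
    price: float
    qty: int

class SingleSiteGateway:
    def __init__(self):
        self.resting = []   # all of our open orders, across every venue

    def add_resting(self, order: RestingOrder):
        self.resting.append(order)

    def would_self_trade(self, symbol: str, side: str, limit_price: float) -> bool:
        """True if an aggressive order would hit one of our own resting orders
        on any venue. With venues split across buildings and machines, each
        machine only sees its own slice of this list, so the check is harder."""
        for o in self.resting:
            if o.symbol != symbol or o.side == side:
                continue
            if side == "buy" and limit_price >= o.price:
                return True
            if side == "sell" and limit_price <= o.price:
                return True
        return False

gateway = SingleSiteGateway()
gateway.add_resting(RestingOrder("EXCH_A", "XYZ", "sell", 10.05, 100))
print(gateway.would_self_trade("XYZ", "buy", 10.05))  # True -> block or re-price
```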

If so many market participants want it, why hasn’t it happened? There could be several reasons:

  1. It’d be anti-competitive for exchanges to cooperate too much in bargaining with data center providers.
  2. Established exchanges want the best deal possible for their hosting, which means they need to consider bids from many competing providers.
  3. Under the Reg. NMS Order Protection Rule, there could be some benefit to an exchange in maintaining a structural delay between itself and its competitors.
  4. Some exchanges see hosting, connectivity, and related services as important sources of revenue and want customers to procure those services from them. This is especially true of exchanges that require colocated customers to lease rackspace directly from the exchange itself, and of exchanges that operate their own data centers.
  5. Exchanges don’t want competitors in the same data center as them, so they use their considerable leverage with providers to keep them out.

The allegations against Nasdaq OMX concern #5, and appear to involve just a single incident. But here’s a potentially concerning statement by Andrew Ward in the FT (2010):

People familiar with Verizon said it would be unusual in the exchange industry for more than one operator to share the same data centre.

A New Model for Colocation

Reg. NMS requires exchanges to communicate with each other, and would work better if delays in that communication were kept to a minimum. Would it be reasonable for updated regulation to require market centers to provide one another with a rapidly updated view of their order books? That would necessitate exchange matching engines being physically close to one another, ideally in the same building. Requiring this would end most types of “latency arbitrage,” whether real or perceived.
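
As a rough sketch of what that shared view might involve (the message fields, venue names, and class below are invented for illustration, not any real protocol), each market center could publish top-of-book updates to its peers, and each peer would keep a small local cache to consult before executing or routing:

```python
# Illustrative only: a matching engine keeping a locally updated view of
# peer exchanges' best quotes, so it can honor order protection without
# waiting on a slower consolidated feed. All names here are hypothetical.
import time

class PeerQuoteView:
    def __init__(self):
        # (venue, symbol) -> (best_bid, best_ask, timestamp)
        self.quotes = {}

    def on_peer_update(self, venue, symbol, best_bid, best_ask):
        """Apply a top-of-book update published by a peer market center."""
        self.quotes[(venue, symbol)] = (best_bid, best_ask, time.time())

    def better_ask_elsewhere(self, symbol, my_best_ask):
        """Venues currently displaying a lower ask than ours, i.e. quotes an
        incoming marketable buy order would need to be protected against."""
        return [v for (v, s), (_, ask, _) in self.quotes.items()
                if s == symbol and ask is not None and ask < my_best_ask]

view = PeerQuoteView()
view.on_peer_update("EXCH_B", "XYZ", 9.99, 10.01)
view.on_peer_update("EXCH_C", "XYZ", 9.98, 10.03)
print(view.better_ask_elsewhere("XYZ", my_best_ask=10.02))  # ['EXCH_B']
```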

One solution could be for FINRA to solicit bids from providers under the assumption of a long-term contract, with extra space available for new exchanges and traders, keeping costs down for everybody. This proposal is in tension with concern #1, but because we have a single national market system, I wonder whether it’s reasonable for that system to negotiate as a single entity, with the other considerations overriding #1. Exchanges would no longer make much money from colocation services, but they could compensate by raising trading fees, which would arguably be healthier for markets anyway.

In my mind, what separates data centers from utilities is that, for most of their non-financial customers, there’s very little benefit to being in a certain building versus a nearby one. So long as nearby buildings have good connectivity to the local internet, customers have many options when procuring hosting services. Financial customers are much different. Once a major exchange is located in a certain building, traders, and sometimes competing exchanges, have no choice but to lease space there. That feels to me like a completely different dynamic, and possibly one that justifies data centers, in this specific industry, being classified as utilities.

6 thoughts on “Are Data Centers that Host Exchanges Utilities?”

  1. Salvatore Sferrazza

    To your point that regulators could mandate heterogeneous exchange physical co-location to reduce latency arb, this could potentially be done via a plan processor arrangement, similar to how the SIP works today and how the CAT will work in the future. The details would be sticky, but mandating order protection by having exchanges route orders amongst themselves is clearly already sticky. However, those hypothetical plan processor requirements might include clear latency and resiliency guidelines for the processor to be measured on, which could be better aligned with the rest of NMS.

    The order protection rule doesn’t seem to grasp the nuanced possibilities of inter-exchange latency, and in my opinion the self-help provision is a bit of a ham-fisted way of addressing the myriad nuances inherent to today’s microstructure.

    1. Kipp Rogers Post author

      Insightful comment. If regulation bakes latency and reliability standards into the SIP’s successor, it might even be possible to eliminate the direct feeds altogether. Aside from latency and reliability, the only benefit of the direct feeds is order-level data. It’s a little ambitious, but I don’t see much reason why exchanges can’t handle each other’s order-level data when calculating the NBBO (or whatever else they might need for any next-generation order protection). If markets had one feed, one speed, and one building, I think it would do a lot to ease people’s minds. I suspect HFTs as a group would like this sort of change; it would cut expenses for them and lower barriers to entry.
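
      A toy sketch of the idea, with made-up venues and prices and no real feed formats, just to show that the NBBO computation itself is conceptually simple:

      ```python
      # Toy NBBO from order-level data across several venues; illustrative only.
      # Each venue's book: list of (side, price, qty) for displayed orders.
      books = {
          "EXCH_A": [("buy", 10.00, 300), ("buy", 9.99, 500), ("sell", 10.02, 200)],
          "EXCH_B": [("buy", 10.01, 100), ("sell", 10.03, 400)],
          "EXCH_C": [("sell", 10.02, 1000), ("buy", 9.98, 200)],
      }

      def nbbo(books):
          best_bid, best_ask = None, None
          for venue, orders in books.items():
              for side, price, qty in orders:
                  if side == "buy" and (best_bid is None or price > best_bid[1]):
                      best_bid = (venue, price, qty)
                  if side == "sell" and (best_ask is None or price < best_ask[1]):
                      best_ask = (venue, price, qty)
          return best_bid, best_ask

      print(nbbo(books))  # (('EXCH_B', 10.01, 100), ('EXCH_A', 10.02, 200))
      ```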

  2. Salvatore Sferrazza

    I’ve always appreciated Streetwise Professor’s categorization of NMS as a “simulacrum limit order book” (http://streetwiseprofessor.com/?p=7530), which, technologically, is a far cry from the Mendelson/Peake vision of a truly NMS-wide composite limit order book from the ’70s. But the reality is that implementing a nationwide quote and trade dissemination regime that consolidates information from market centers (i.e. the SIP) is much less onerous (operationally and politically) than maintaining a singleton matching engine that executes trades arriving from participants nationwide. I do believe a central matching engine could be implemented via the existing plan processor framework, but the requisite adjustments to both the regulatory and microstructure landscapes would be dramatic. However, without a centralized matching engine, NMS order protection will always be hampered by jitter to some extent.

    Great blog!

    1. Louis

      Some points:

      a. The advent of microwave dramatically increased the barrier to entry for trading across physically separate locations, and effectively created an oligopoly of the few firms that can compete. The cost of creating a truly competitive microwave connection dwarfs actual colocation costs for latency traders, and even a suboptimal microwave connection is pretty expensive. https://sniperinmahwah.wordpress.com/2014/09/22/hft-in-my-backyard-part-i/

      b. The focus on New Jersey (and equities) is a good one, given Reg NMS and all… and it seems truly silly to build n*(n-1)/2 fast connections between exchanges to compete (a quick tally below)… but don’t forget Chicago. The E-minis are rather important as they relate to stocks, and are located in Chicago (Aurora, specifically). New Jersey ain’t gonna let its exchanges move to Chicago and Chicago ain’t going to let its exchanges move to New Jersey.

      c. One might argue that the physical location of the servers should not matter to trading firms, but that’s not quite correct. IT personnel regularly visit the exchanges to, say, add a new hard drive to a server. For a big firm that’s probably no big deal. For a small Chicago trading firm it would be a major problem indeed.

      d. As a result, moving the CME matching engines to New Jersey (which you did not propose) would be a blow to the economically important Chicago trading industry. Rahm seems to have good enough connections to put in a good word to the regulators :).
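
      For scale on the n*(n-1)/2 point in b (the venue counts below are just illustrative; the exact number of equity exchanges varies over time), pairwise links add up quickly:

      ```python
      # Direct links needed for every venue to connect to every other venue.
      def pairwise_links(n):
          return n * (n - 1) // 2

      for n in (4, 8, 12, 16):
          print(n, "venues ->", pairwise_links(n), "links")  # 6, 28, 66, 120
      ```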

      1. Kipp Rogers Post author

        Thanks for the comment.

        a) Wireless costs can definitely be pretty high. Anyone interested in pricing for publicly available networks can see that it’s quite easy to spend >~$50k/mo between NJ data centers, and another $50k/mo on data from Chicago. http://images.qnasdaqomx.com/Web/NASDAQOMX/%7Bc6c41d3b-68f7-4408-a404-20b8436209c8%7D_MMWfaqs_April_2015_-_clean.pdf
        http://www.quincy-data.com//product-page/#prices

        And those are just public networks. Even trading companies with deep pockets are consolidating their networks in order to keep costs down and get access to the most prized towers. http://www.bloomberg.com/news/articles/2015-03-24/speed-traders-team-up-in-microwave-tower-superhighway-plan-i7n0i784

        b) You could be right that states would try to incentivize exchanges to stay put. I don’t know anything about this, but it’s conceivable that exchange data centers are noticeable (and prestigious) contributors to tax revenue and significant employers.

        c) I don’t think this is really an issue. My company is a small one and we’ve never set foot inside a data center that is owned/operated/controlled by an exchange (e.g. Nasdaq/Verizon Carteret or CME Aurora). Generally, only customers who lease space directly from an exchange (not an intermediary) are given physical access to their colocated machines; these are usually large customers (I believe CME’s smallest offering is half a cabinet and 4.25kW: https://www.cmegroup.com/content/dam/cmegroup/trading/files/co-location-data-center-services.pdf). Small companies usually rely on remote hands to handle maintenance.

        Part of me is very sympathetic to your point. But NMS exchanges are a very natural group to be put into one building. If regulators were to ask CME and equity exchanges to colocate, shouldn’t they also ask the same of ICE, CBOE, FX venues, fixed income venues, etc? What about exchanges in other countries?

        It seems like an impossible task, and I’m not sure that the world would be better off with every exchange in one giant complex. Presumably we’d want multiple disaster recovery sites for that. And, to be frank, I’m not sure how many clients would trade in a disaster recovery scenario. If, say, there were a hurricane, all product groups worldwide could be unavailable, instead of just one. Also, if exchanges were all located in the UK, for instance, click traders in Asia would have to eat an extra ~200ms. That kind of latency could actually matter. Even aside from the speed disadvantage, human traders may also benefit (in aggregate) from higher-speed price discovery.
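
        A rough back-of-the-envelope on that ~200ms figure, assuming light in fiber at roughly two-thirds of c over a great-circle path (real routes are longer, so this is only a floor), with London and Tokyo picked purely as example endpoints:

        ```python
        # Back-of-the-envelope round trip for a click trader in Tokyo reaching a
        # hypothetical all-in-one exchange campus near London. Great-circle
        # distance is a lower bound; real fiber routes are meaningfully longer.
        import math

        def great_circle_km(lat1, lon1, lat2, lon2):
            """Haversine distance in kilometres."""
            r = 6371.0
            p1, p2 = math.radians(lat1), math.radians(lat2)
            dp, dl = math.radians(lat2 - lat1), math.radians(lon2 - lon1)
            a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
            return 2 * r * math.asin(math.sqrt(a))

        dist = great_circle_km(51.51, -0.13, 35.68, 139.69)  # London <-> Tokyo
        speed_in_fiber = 300_000 / 1.47                       # km/s, refractive index ~1.47
        rtt_ms = 2 * dist / speed_in_fiber * 1000
        print(f"{dist:.0f} km one way, ~{rtt_ms:.0f} ms round trip at best")
        # ~9,560 km one way and ~94 ms RTT as a theoretical floor; real routes
        # push this well past 150-200 ms, consistent with the figure above.
        ```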

  3. Salvatore Sferrazza

    So I’ve been thinking: now that Nasdaq is establishing a DR PoP in Chicago, if they ever run primary there (after a failover), then during that time the latency between the derivatives markets hosted at Cermak and Nasdaq itself (also at Cermak in that DR scenario) is essentially erased.
