IEX Peg Orders: Last Look for Equity Markets?

Matt Levine recently challenged his readers to describe how IEX’s speedbump might be gamed:

It’s hard for me to figure out a way to game it. You all are smart, tell me how to game it. The prize is maybe you get to game it.

I’ve discussed some issues with the IEX platform. In this post, I’ll add detail for a few of those issues. And, while labeling an exchange “gameable” is subjective, IEX peg orders remind me of controversies from other markets. I haven’t seen IEX’s source code or system design, so this post is speculative.

All orders sent to IEX go through the 350us “shoebox” delay when they enter the exchange. However, IEX does not delay its own repricing of algorithmic order types such as peg orders. This design is meant to thwart nefarious traders who, after seeing a quote change on another exchange, rapidly submit aggressive orders to IEX, hoping to hit an order pegged at the now-stale price. [1] IEX’s intention is a good one. But if the shoebox delay is not fine-tuned, there can be undesirable side effects.

Last Look

Orders on many spot FX exchanges are subject to what’s called “last look,” where the resting side, after a match, may briefly wait before deciding to proceed with the trade. Last look helps bank liquidity-providers avoid being “picked off,” and gives them option-value by letting them back out of fills if the market goes against them. It may serve legitimate business purposes, but it’s easy to understand why the practice is controversial. BlackRock, for instance, has said that last look causes “phantom liquidity.”

IEX peg orders offer something like a ‘conditional’ last look. Instead of becoming non-firm at the trader’s discretion, peg orders opt out of executions only if the NBBO moves away within 350us of the incoming order’s reception. [2] This restriction makes them less valuable than true last look, but their option-value is still significant; I’d estimate it’s worth around half a tick. To give a rough idea, here is a plot of trades on Nasdaq, grouped by whether Nasdaq still had a quote present at the same price 350us later:

[Figure: markouts of passive Nasdaq fills, grouped by trade price vs. the Nasdaq BBO 350us post-trade]

Top panel: Average market-priced profit or loss per share vs distance in time from trade, from the perspective of the passive side of the trade. Trades are grouped by how their price compared to the (round lot) Nasdaq BBO, 350us post-trade. Roughly speaking, if Nasdaq were to pull its orders in similar circumstances as IEX pulls its peg orders, Nasdaq would prevent all the trades from the “Better than 350us Post-Trade Nasdaq BBO” group. The group that would remain (“Equal or Worse Than 350us Post-Trade Nasdaq BBO”) would be very profitable after receiving the ~30mil rebate. Visible execution only. The “market price” is the average price of the most recently traded 100 shares. Chart is over 8 days in August 2014 and excludes fees and rebates. Bottom panel: Shares traded on Nasdaq vs time from trade (including fiducial trade).

Here is the same for colocated trades on Nasdaq BX, again grouped by how they compared to the Nasdaq (Inet) BBO 350us post-trade:

[Figure: the same markout analysis for colocated trades on Nasdaq BX, grouped vs. the Inet BBO 350us post-trade]
Other, non-colocated exchanges (like BatsZ or EdgeX) that I checked are similar.

These charts are just hints at the option-value offered by IEX and are closest in spirit to IEX primary peg orders, which (I think) only trade at the 350us ex-post NBBO. [3] The edge that midpoint pegs and D-Pegs receive from the head start is much harder to estimate, but I expect that it’s sizable.
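
For concreteness, here is a minimal sketch of the grouping methodology, assuming pandas DataFrames `trades` (time, price, passive_side) and `quotes` (time, bid, ask) built from historical market data; all column names are hypothetical:

```python
import pandas as pd

DELAY = pd.Timedelta(microseconds=350)

def group_fills(trades: pd.DataFrame, quotes: pd.DataFrame) -> pd.DataFrame:
    """Tag each passive fill by how its price compares to the BBO 350us later."""
    trades = trades.copy()
    trades["post_time"] = trades["time"] + DELAY
    # Prevailing quote 350us after each trade.
    tagged = pd.merge_asof(
        trades.sort_values("post_time"),
        quotes.sort_values("time"),
        left_on="post_time",
        right_on="time",
        suffixes=("", "_post"),
    )
    # A passive bid filled above the later bid (or a passive offer filled
    # below the later ask) is the kind of fill an IEX-style peg would avoid.
    passive_buy = tagged["passive_side"] == "B"
    avoided = (passive_buy & (tagged["price"] > tagged["bid"])) | (
        ~passive_buy & (tagged["price"] < tagged["ask"])
    )
    tagged["group"] = avoided.map({True: "better_than_post_bbo",
                                   False: "equal_or_worse"})
    return tagged
```

Averaging a markout column within each group would then give the two curves in the top panels above.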

It might seem like the edge I’ve described is solely due to IEX successfully preventing peg orders from being “picked off.” It isn’t. A peg order is “picked off” when its counterparty has reacted to an event which should have already caused that order to be repriced. IEX is repricing peg orders using information that counterparties didn’t have at the time of their orders’ submission. Equity markets are decentralized and partly unsynchronized — IEX claims to have fixed all race conditions, but it has really fixed just one, and by doing so created others. [4]

Sources of Peg Orders’ Edge

Sophisticated traders might take advantage of the option-value offered by IEX by simply sending peg orders instead of normal, firm orders. If they’re fast enough to be the first peg orders received by IEX, the estimated 50mil edge could make losing strategies wildly profitable.

Where does this money come from? I think it’s mostly from two populations:

1) All resting orders on other exchanges:

A) IEX pegs are priced using other exchanges’ quotes. Peg orders that would otherwise have been economically unviable become profitable through their use of information about the future state of those quotes.
B) These peg orders will proliferate on IEX.
C) Orders sent to IEX that would otherwise have been routed out to other exchanges will now trade with these proliferated peg orders.
D) That will lead to fewer, more toxic fills for passive orders on every other exchange.
E) Market makers may widen their quotes to compensate for this adverse selection.

2) High-alpha aggressive orders on IEX:

A) Aggressive traders may cause or predict the movement of prices on other exchanges.
B) IEX will see these price movements, which occur *after* they receive aggressive orders, and pull posted peg orders which would have executed. This fading of liquidity could harm the same traders that Michael Lewis wanted to protect.
C) Aggressive traders could send their orders to IEX 350us before sending orders to other exchanges, in the spirit of Thor. That would prevent IEX from using future information to pull peg orders. Delaying orders is not always an option though; if the aggressive trader is an execution algorithm reacting to a trade, it couldn’t afford to delay any of its orders. If it did, a competing execution algorithm (or HFT) might clear out the available liquidity. Thor-style delays may work for human traders, but are not helpful for the vast majority of volume executed by computers.

Unintended Usage

Knowing exactly how these orders will behave, sophisticated traders can integrate them into their strategies more effectively than other users. I bet it’s profitable to simply copy quotes posted on other exchanges onto IEX with peg orders. IEX allows traders to mirror liquidity from other exchanges, without the risk of getting run-over that normally entails. And, most of this revenue will be earned by high-speed traders. When there’s a structural inefficiency like this one, the fastest orders capture the profit. A 50 mil per share edge is very enticing to HFTs, and I’d expect that many will soon be competing in a race to be first in the ‘peg order queue’ (if they aren’t already).

I’m sure there are many other examples where conditional executions allowed by the speedbump change the circumstances of trading from win/lose to win/scratch. [5][6]

Understanding Timescales of Trading

Put a certain way, IEX’s speedbump doesn’t sound very significant; 350us is less than 1% of the time it takes to blink. But, like it or not, computers do the majority of trading these days, including on behalf of fundamental traders. Market professionals know a lot can happen in hundreds of microseconds, and a speed advantage of that magnitude can guarantee profit. IEX knows this too. Cofounder Dan Aisen says that “350 microseconds is an enormous head start.”

Selective application of a speedbump is economically equivalent to an exchange distributing a secret data feed, providing anointed traders advance notice about changes in the order book. A simple system update would resolve this issue. IEX could keep its peg orders from executing at stale prices, without using information their counterparties don’t yet have.

FX traders understand the consequences of interacting with last look liquidity, and can route their orders elsewhere. Equity markets are different. Maybe last look would tighten spreads for retail traders. But we should think hard before bringing it to our stock market.


[1] A crude example of what IEX hopes to avoid:

  1. Nasdaq has set the NBBO of a stock, which is 10.00/10.01.
  2. IEX has a primary peg order on its bid, currently resting at 10.00.
  3. Somebody sends a large sell order to Nasdaq, clearing out the bid and leaving some quantity resting at 10.00. The new NBBO is 9.99/10.00.
  4. High-speed Trader A sees that and quickly sends an order to IEX to sell at 10.00.

If IEX were to receive Trader A’s sell order before they knew that the NBBO had changed, then they’d execute it against the peg order at 10.00. That’d be bad for the peg order. So, IEX delays the high-speed trader’s order for 350us, which is more than enough time for them to see that the NBBO has changed and reprice the peg order to 9.99, preventing it from being “picked off.”
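
Here is a toy timeline of that sequence, with made-up (but plausible) latencies in microseconds:

```python
SHOEBOX_DELAY = 350  # us

# t = 0: the Nasdaq bid is cleared; the NBBO becomes 9.99/10.00.
trader_a_order_arrives = 5    # hypothetical: Trader A's sell reaches IEX fast
iex_sees_nbbo_change = 180    # hypothetical: market data reaches IEX's system

order_exits_shoebox = trader_a_order_arrives + SHOEBOX_DELAY  # t = 355
peg_repriced = iex_sees_nbbo_change                           # pegs not delayed

# The peg moves to 9.99 at t=180, before Trader A's order can match at t=355,
# so there is no longer a stale 10.00 bid to pick off.
assert peg_repriced < order_exits_shoebox
```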


[2] Mostly. D-Peg orders adjust their price in response to the number of quotes at the NBBO, 350us after reception of an incoming order. If you’re interested, the mechanics of the D-Peg are now disclosed. They’re on p210 of this pdf from IEX’s exchange application.


[3] A few ways the figures differ from IEX primary pegs:

  1. The NBBO is different from the Nasdaq BBO. Adding in venues with inverted pricing (BX, EdgeA, BatsY) should make this edge larger.
  2. Different exchanges and different order types have different populations of traders.
  3. A market data message takes time to get from Nasdaq to IEX’s system. The details may bore you, but IEX’s speed advantage will vary by exchange. Messages from Carteret to Weehawken on commodity fiber take about 180us one-way. On a wireless network, messages from Carteret to IEX’s POP in Secaucus probably take about 90us. So HFTs may receive Nasdaq messages around 90us before IEX does, which means that IEX arguably has a 260us head start (350us – 90us) for reacting to Nasdaq. For the 4 Bats exchanges in Secaucus, IEX will essentially have the full 350us head start. For the NYSE exchanges, IEX should have a smaller advantage. And IEX may receive market data from CHX (in Chicago) well after high-speed traders do, if any bother to send it wirelessly to NJ. There’s also nothing stopping IEX from getting its market data over wireless, which would give it the full head start for messages from every exchange, and would be a tiny expense by its standards.


[4] For readers who have experience with software, here’s an analogy:

Let’s say that you have some multi-threaded software. The software processes a stream of two types of events, A and B. Sometimes, events of Type A occur slightly before those of Type B, but the Type A event processing tends to be slower. Because of that slowness, the software often finishes the Type B events first. That causes events to be handled out of order, and has bad consequences. Your measurements show that Type A’s processing is typically slower than Type B’s by 100us, but never more than 300us. So, you decide to delay all Type B events by 350us, because that will make sure they can never beat any Type A events which occurred first. You’re very proud of yourself, and tell your customers that their synchronization problems are over.

If “Type A” events are NBBO changes that cause you to reprice peg orders, and “Type B” events are all other customer orders, then this analogy is close to the idea of exchange speedbumps. The problem, of course, is that the “Type B” orders now have a speed disadvantage: if the price moves away shortly after they’re received, they can’t match with peg orders. There are methods to properly deal with these situations in software. It’s just that adding a constant delay to select events isn’t one of them.
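
A minimal sketch of the analogy, using the made-up lags from above, shows both the race the constant delay fixes and the new one it creates:

```python
TYPE_A_PROC = 100   # typical Type A processing lag (us); at most 300 here
TYPE_B_DELAY = 350  # constant delay applied to every Type B event

def completion_order(events):
    """events: (occurred_at_us, kind) pairs; returned in completion order."""
    finish = lambda t, kind: t + (TYPE_A_PROC if kind == "A" else TYPE_B_DELAY)
    return sorted(events, key=lambda e: finish(*e))

# The race the delay fixes: A occurs first and now always finishes first.
print(completion_order([(0, "A"), (50, "B")]))   # [(0, 'A'), (50, 'B')]

# The race it creates: B occurs 200us before A, yet A finishes first
# (A done at t=300, B only at t=350), so B is now handled out of order.
print(completion_order([(0, "B"), (200, "A")]))  # [(200, 'A'), (0, 'B')]
```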


[5] Here’s another example of a pretty dumb strategy that only an HFT could try:

  1. The NBBO for a stock is set by Nasdaq at 10.00/10.05.
  2. An HFT observes a hidden trade on Nasdaq at 10.03.
  3. The HFT knows that there is probably still hidden liquidity available at 10.03, because resting hidden orders tend to be large.
  4. The passive side of the hidden trade isn’t distributed in market data. But the HFT has a model which estimates that there’s a 70% chance that the resting side is the bid.
  5. If the HFT were more confident about that estimate, it could submit a midpoint buy order to Bats, which could get filled at 10.025, lower than the price the hidden order just paid. However, the 30% chance that the estimate is wrong is too high — if somebody sends large sell orders checking for hidden liquidity at Bats, and shortly afterwards sweeps the Nasdaq bid, the HFT will be stuck with a toxic fill.
  6. The HFT submits a midpoint buy peg to IEX instead.
  7. Now, if their guess is wrong, they’re protected. When IEX receives the same aggressive order checking for hidden liquidity, it holds it for 350us. While holding onto that order, IEX sees the Nasdaq bid swept, and pulls the HFT’s midpoint order.
  8. If the HFT’s guess is right, and someone sends large sell orders to IEX and Nasdaq, the hidden order at Nasdaq will trade at 10.03, leaving the displayed bid intact. The HFT’s order will execute at IEX at 10.025, a better price than the hidden order received.


[6] In addition to peg orders backing away from fills after market conditions deteriorate, it’s possible that IEX uses non-delayed data to help peg orders aggressively trade against resting liquidity, potentially “picking off” orders on their own exchange. I previously blogged about this “book recheck” feature. “Book rechecks” could offer conditionality to sophisticated traders wanting to remove liquidity before specific future events.

A Close Look at the Treasury Flash Rally Report

Flash events, where prices rapidly change and revert to their previous levels, are not well understood. Government reports on these events are immensely helpful, and I was pleased to see a high level of detail in the recent Joint Staff Report on the October 15, 2014 flash event in US Treasuries. It’s hard to see by eye, but many of the charts in the report show important market metrics broken down by trader type, with what appears to be 1-2 second resolution. This kind of data is rarely made public, and is a huge treat for a practitioner like me. In this post I will begin to explore the contents of this ~15 minute dataset. The analysis required some moderately difficult image parsing, not an area of expertise for me, so there could be errors.

Types of Traders in the Report

The report mentions several types of traders, and since each “employs some level of automated trading,” it’s tough to label just one category ‘high-frequency trading,’ though the closest group is probably “Principal Trading Firms” (PTFs), which trade their own capital and do not have customers. [1] But a few algorithmic traders, like Citadel and Renaissance, may be included in the “Hedge Fund” category. Some charts also have “FCM” and “Other” categories, which could contain smaller algorithmic traders that were hard to classify. [2]

Counterparty-tagged Volume

Given the contents of charts 3.5-3.8, and the large amount of self-trading, it’s a reasonable guess that PTFs were trading mostly with other PTFs during the event. But, the data pulled from the charts don’t particularly support this hypothesis. Here is a plot of the net inventory change per second, by trader type and aggressor-flag, for seconds when any group’s aggressively or passively-accumulated inventory changed by more than 100 million dollars:

Net inventory change per second in 10-year futures. Data is from report Figures 3.6 and 3.8. Assumes that each bar in both figures represents 1 second. There are 928 such bars.
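
A minimal sketch of the selection rule, assuming a DataFrame `inv` of per-second net inventory changes (columns like "PTF Aggressive", values in millions of dollars) digitized from the figures; the names are hypothetical:

```python
import pandas as pd

def big_seconds(inv: pd.DataFrame, threshold: float = 100.0) -> pd.DataFrame:
    """Keep seconds where any group's aggressive or passive net inventory
    change exceeds the threshold (in $M) in absolute value."""
    return inv[inv.abs().max(axis=1) > threshold]
```

The cash-market version below uses a threshold of 50 (see footnote [4]).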


Assuming that the 1-second inventory change is reflective of actual trades [3], this figure shows that, during big seconds, little volume was generated by intra-group trading.

Here’s a similar plot for the cash market [4] which appears to show that PTFs traded more with banks than each other:

Similar to above, from Figures 3.5 and 3.7.

Volume Between and Within Groups of Traders

Here is the overall share of volume between traders of various types:

              AssetManager  BankDealer  HedgeFund  Other    PTF
AssetManager  0.00033       0.01446     0.00578    0.00551  0.02042
BankDealer    0.01446       0.03439     0.03037    0.03146  0.11861
HedgeFund     0.00578       0.03037     0.00642    0.02045  0.05794
Other         0.00551       0.03146     0.02045    0.01953  0.07332
PTF           0.02042       0.11861     0.05794    0.07332  0.18269
Total         0.0465        0.22928     0.12096    0.15028  0.45297

Estimated portion of 10-year futures volume during the event window attributable to each pair of groups. For example, 2% of volume was from asset managers trading with PTFs. Net inventory change per second is used as a surrogate for volume. Counterparty-tagging is estimated pro-rata, e.g. if PTFs and banks each passively bought 50 in 1 second, when hedge funds and asset managers each aggressively sold 50, then the estimate is that hedge funds sold 25 to PTFs and 25 to banks (and asset managers did the same). Again, data is from figures 3.6 and 3.8. Volume is single counted.
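
The pro-rata allocation can be written out as a small sketch; the inputs below are just the caption’s example:

```python
def prorata_pairs(passive_buys: dict, aggressive_sells: dict) -> dict:
    """Allocate each group's aggressive sells across passive buyers,
    in proportion to how much each passive group bought."""
    total_bought = sum(passive_buys.values())
    return {
        (agg, pas): sold * bought / total_bought
        for agg, sold in aggressive_sells.items()
        for pas, bought in passive_buys.items()
    }

# Caption example: PTFs and banks each passively bought 50 while hedge
# funds and asset managers each aggressively sold 50.
print(prorata_pairs({"PTF": 50, "BankDealer": 50},
                    {"HedgeFund": 50, "AssetManager": 50}))
# -> each selling group is tagged with 25 to PTFs and 25 to banks
```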

Only 18% of estimated volume was from PTFs trading with other PTFs. Given that total PTF volume was 45% under this estimate, PTFs were slightly less likely to interact with one another than by random chance (which would be 0.45 * 0.45 = 20%). [5][6] Note that there’s a disparity between this estimate of total PTF volume and what’s in the report, which has PTF share at around 60% during both the event window (p25) and across the day (Table 2). [7]

It might also be interesting to see how these statistics evolved over time:

Estimated portion of total volume in the preceding 20 seconds traded between PTFs and other groups for 10-year futures.


The estimated volume share of PTF-PTF is rarely far from the square of total PTF share, which suggests that worries about “PTFs trading almost solely with each other” may be unfounded. [8][9] Plots for the other group-pairs here, here, and here.

We can also use the same method to estimate the volume between aggressive and passive traders in all of the groups:

                         AssetManager  BankDealer  HedgeFund  Other    PTF
                         Passive       Passive     Passive    Passive  Passive  Total
AssetManager Aggressive  0.00034       0.02092     0.00585    0.00567  0.02521  0.05799
BankDealer Aggressive    0.00801       0.03439     0.02758    0.03361  0.1216   0.22519
HedgeFund Aggressive     0.00576       0.03312     0.00643    0.01864  0.05412  0.11807
Other Aggressive         0.00537       0.02937     0.02234    0.01954  0.07505  0.15167
PTF Aggressive           0.01564       0.11571     0.06172    0.07154  0.18246  0.44707
Total                    0.03512       0.23351     0.12392    0.149    0.45844  1

Estimated portion of 10-year futures volume between passive and aggressive trades from each group.

The estimates show that asset managers tended to trade more aggressively (5.8% of total volume) than passively (3.5%). When trading aggressively, 36% (0.021/0.058) of their volume was executed against a bank-dealer, significantly higher than bank-dealers’ 23% share of overall passive volume. Given that asset managers characteristically have “large directional flows spanning multiple trading sessions,” their tendency to trade with banks may be of interest to people worried about bond market liquidity because “banks now have less risk-warehousing capacity than they did in the past.”

Group-Identified Book Depth

Large, passive sell orders may have stopped the flash rally. From the report:

Around 9:39 ET, the sudden visibility of certain sell limit orders in the futures market seemed to have coincided with the reversal in prices… [W]ith prices still moving higher, a number of previously posted large sell orders suddenly became visible in the order book above the current 30-year futures price (as well as in smaller size in 10-year futures).

We don’t know who submitted those orders for the 30-year, but the report may tell us who did for the 10-year. Here is an estimate of ID-tagged order book depth around this time, using data from Figures 3.19 and 3.22:

Estimated visible book depth in top 3 price levels for 10-year futures, by type of trader. Hedge Fund and FCM data from 3.22 is merged into data from Figure 3.19. The depth quantity from the “Other” trader category in Figure 3.19 appears to be very close to the sum of the quantity from “FCM” and “Other” traders in Figure 3.22; “Other_322” uses the Figure 3.22 data. Aligning, renormalizing, and merging the data from Figure 3.22 into data from Figure 3.19 required some judgment, so there may be errors (and the x-axis is probably off by a few seconds). Time resolution of the data in Figure 3.19 appears to be about 1.8 seconds, but the similar Figure 3.17 from the cash market has an apparent resolution of 1 second; it’s possible that this disparity is because the authors wanted to protect traders’ privacy.

The origin of these large sell orders could have been traders in the “Other” category of Figure 3.22. I wonder if they may have come from asset managers, which are not separately included in the depth plots.

Self-Trading

According to the report: “in the 5-year note in the cash market… self-trading accounted for about one-third of net aggressive trade volume between 9:33-9:39 ET.” Levels of self-trading were high on futures markets as well. Regulators are contemplating new, industry-initiated rules on self-trading. That makes a lot of sense. The usual defense of self-trading argues (correctly) that it can be the accidental by-product of compliant trading, hardly a claim that self-trading is beneficial.

Most major exchanges offer self-match prevention, and it seems easy to enable it for all customers. I understand that some trading firms have a siloed business model, where individual groups fiercely compete with one another. In these companies, self-match prevention could allow rival groups to learn each others’ trading strategies. [10] But that doesn’t strike me as a particularly high price to pay. In contrast, accidental self-trading does impose a cost on market participants — it adds noise to market data. [11] Regardless of whether self-trading had any effect on the flash event, it certainly has fostered suspicion of the industry, which seems like pretty good reason to eliminate it. [12]

Potential Causes of Flash Events

Andrew Lo discussed the 3 dimensions of liquidity in the recent CFTC Market Risk Advisory Committee:

[T]here are three qualities of liquidity that really make up the definition. A security is liquid if it can be traded quickly, if it can be traded in large size and if it can be traded without moving prices.


Lo adds that these attributes can be measured. I think he’s right that “liquidity” has a simple, quantitative definition. But there’s an additional wrinkle that makes it prone to sudden changes, and challenging to measure. Liquidity is also about expectations, and its three components (price, time, and size) evolve in response to any anticipated change in them. This evolution may be especially important in flight-to-safety markets. If you want protection from volatility, and worry that bond market liquidity could dry up, you might accelerate your purchase of treasuries. If others decide the same, then there could be a rapid, cascading deterioration in liquidity. [13]

Many models of liquidity involve a book of “latent orders,” which are orders that exist in traders’ imaginations and are not yet live. A trader with latent orders might think, for example: “X is over-valued by 15%, so if it drops 20% with little change in my outlook, I’ll buy it.” Donier, Bonart, Mastromatteo, and Bouchaud propose a model where traders instantly submit latent orders as real orders when the market price gets close to their desired price. [14] Their model exhibits many properties found in real markets. But, there’s no reason to expect that latent traders watch markets full-time, and as the authors say in a footnote, these traders’ slow reaction time could be a factor in flash events:

It is very interesting to ask what happens if the conversion speed between latent orders and real orders is not infinitely fast, or when market orders become out-sized compared to the prevailing liquidity.  As we discuss in the conclusion, this is a potential mechanism for crashes

I think that our markets tend to have a layer of liquidity provided by professional intermediaries, and a much thicker layer provided by slower latent traders, far from the top of book. On rare occasions, that intermediary layer can be exhausted and, if sufficient time isn’t available for latent traders to step in, a flash event may occur. If so, I’m not sure that there’s an easy remedy. Some people may think that slowing down our markets would prevent these flash events, but I suspect it wouldn’t be that straightforward. Latent traders might check prices once a day (or less), which would mean that our markets would need to be made *a lot* slower. Also, some latent traders may pay attention to the market only after significant volume has transacted at their target price, so slower markets could still have episodes of extreme volatility; they’d just last for days instead of minutes.

Some flash events probably have more rectifiable causes. The August 24, 2015 event was likely exacerbated by temporary changes in market structure from LULD halts, NYSE Rule 48, futures being limit-down, and futures’ price limits changing simultaneously with the equity opening. These measures are intended to give markets time to attract latent liquidity. But because they alter market structure, they may shut down some professional intermediaries, which aren’t set up to trade in one-off conditions. Increased volatility isn’t surprising when intermediary liquidity is missing and there still isn’t sufficient time for most latent traders to become active.

Many people think that “HFT Hot Potato,” where HFTs panic as their inventory devalues and then dump it on other HFTs, is a factor in flash events. [15] For the October 15 event, that seems pretty unlikely. PTFs do not appear to have preferentially traded with each other. And figures 3.9-3.12 in the report show that the bulk of aggressive volume from PTFs and Bank-Dealers consisted of exposure-increasing buy orders. [16] Exposure-decreasing aggressive orders were, for the most part, selling the 10-year. [17]

The Utility of Fine-Grained Data

I’ve argued in the past that more post-trade disclosure would dispel conspiracy theories and ensure that our markets stay clean. This Joint Staff Report included data with a resolution that surprised me. I hope that trend continues. Even if it doesn’t, there is a possibility that data with this level of precision can be matched with real market data messages to a limited extent. That isn’t an easy problem technically, but I intend to give it a try.


[1] IEX has used a similar definition. From a 2014 blog post by Bradley Hope:

IEX says that in July 17.7% of trading on its platform is done by proprietary trading firms, which it says are firms that have no clients and trade for their own account. It places HFT firms in this category.

As an aside, this percentage appears to have risen in the last year:

Brokers trading their own principal—they include both HFT firms and the big banks’ proprietary trading desks—account for 23 percent of IEX’s trades.

This second definition may be different from the one given in 2014, since it includes banks’ supposedly shrinking prop-trading desks and also appears to be restricted to broker-dealers. The Joint Staff Report’s definition says that PTFs “may be registered as broker-dealer[s]” (emphasis added), and certainly not all high-speed traders are broker-dealers.


[2] The report makes it clear that classifying firms was not easy:

Categorizing the firms requires some judgment, particularly given that they sometimes share certain characteristics or may act in multiple capacities… [S]ome bank-dealer and hedge fund trading patterns exhibit characteristics of PTFs, while many smaller PTFs clearly are not trading rapidly.


[3] This should be close, but not identical, to the aggressive and passive volume of each group. For example, Bank A may aggressively buy from PTF B, then Bank A may aggressively sell to PTF B. If these trades occur in the same second, there would be no net change in Bank-Dealers’ aggressively accumulated inventory, or PTFs’ passively accumulated inventory. This 15-minute period is exceptional, and I couldn’t say how often that kind of trading occurs even normally, but we have a hint from a nice paper by CFTC staffers Haynes and Roberts.

In that paper, Table 8 provides a measure of holding times for different types of traders. It shows that, for the 10-year bond future, 42% of the volume executed by large, automated traders is typically netted with trades on the opposing side within 1 minute. We can crudely estimate the portion of these traders’ volume that is held for under a second by considering the distribution of order resting times, given in Table 7. Summing the appropriate values for the 10-year, about 8.6% of double-counted volume is generated by passive, automated orders that are executed within 1 second, and 23.5% within 1 minute. The ratio of these two numbers is 37%, which may also be reflective of the ratio between trades with a 1-second holding time (or less) and trades with a 1-minute holding time (or less). So we can (very roughly) estimate that 0.37 * 0.42 ~ 15% of a typical high-speed trader’s volume is turned over in a second. This estimate applies to single traders’ turnover, not the aggregate of their group.
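
Spelled out in code, the back-of-the-envelope arithmetic is:

```python
# Figures quoted above from the Haynes & Roberts paper, Tables 7 and 8
# (10-year future).
passive_auto_within_1s = 0.086  # share of double-counted volume, <= 1 second
passive_auto_within_1m = 0.235  # share of double-counted volume, <= 1 minute
netted_within_1m = 0.42         # automated traders' volume netted <= 1 minute

ratio = passive_auto_within_1s / passive_auto_within_1m  # ~0.37
sub_second_turnover = ratio * netted_within_1m           # ~0.15
print(f"{sub_second_turnover:.0%}")                      # -> 15%
```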


[4] With a 50 million dollar threshold instead of the 100 used for the futures plot, because the cash market is less active than futures.


[5] One of the first things discussed on this blog is that algorithms generally want to avoid trading with one another. Table 4 from the above-linked paper says that total volume for 10-year futures is typically composed of: 43% algorithms trading with algorithms, 41% algorithms trading with humans, 12% humans trading with humans. These statistics show algorithms interacting with other algorithms about as often as you’d expect by random chance, which surprises me slightly — I’d have expected algos to tend towards interacting more with humans.


[6] If you’re interested in the correlation matrix of inventory changes:

  (AM = AssetManager, BD = BankDealer, HF = HedgeFund, Ot = Other; Agg = Aggressive, Pas = Passive)

           AM-Agg   AM-Pas    BD-Agg   BD-Pas   HF-Agg   HF-Pas    Ot-Agg   Ot-Pas   PTF-Agg  PTF-Pas
  AM-Agg    1        0.099    -0.028   -0.3     -0.033   -0.04     -0.021   -0.013   -0.012   -0.12
  AM-Pas    0.099    1        -0.11     0.017   -0.083    0.000065 -0.067    0.033   -0.2      0.095
  BD-Agg   -0.028   -0.11      1       -0.17     0.0043  -0.24     -0.078   -0.34    -0.029   -0.45
  BD-Pas   -0.3      0.017    -0.17     1       -0.29     0.08     -0.11     0.25    -0.44     0.22
  HF-Agg   -0.033   -0.083     0.0043  -0.29     1        0.099    -0.044   -0.28     0.012   -0.43
  HF-Pas   -0.04     0.000065 -0.24     0.08     0.099    1        -0.18     0.11    -0.47     0.099
  Ot-Agg   -0.021   -0.067    -0.078   -0.11    -0.044   -0.18      1       -0.14     0.096   -0.34
  Ot-Pas   -0.013    0.033    -0.34     0.25    -0.28     0.11     -0.14     1       -0.41     0.4
  PTF-Agg  -0.012   -0.2      -0.029   -0.44     0.012   -0.47      0.096   -0.41     1       -0.33
  PTF-Pas  -0.12     0.095    -0.45     0.22    -0.43     0.099    -0.34     0.4     -0.33     1

Correlation between trader groups’ 1-second (aggressor-flagged) inventory changes. Data again from Figures 3.6 and 3.8. A large positive (negative) number means that the two groups are more likely to be trading on the same (opposite) side during the same second.

Nothing immediately struck me about the lagged cross-correlations or auto-correlations; except perhaps that asset managers tend to persistently trade on the same side, which I think we already knew.


[7] The reasons for that disparity could include:

  1. Sub-second, group-wide turnover, when it is make-make or take-take (sub-second turnover for individual HFTs was estimated to be roughly 15% in [3]). Sub-second turnover should appear in the charts if it’s make-take or take-make, because net inventory in the charts is split by aggressor flag.
  2. The y-axis resolution of the charts. The smallest visible changes in net inventory are $2.4M for the aggressive chart and $1.9M for the passive chart. So small executions may be under-represented. Algorithms are known for sending smaller orders than humans.
  3. Self-trades could conceivably have been excluded from these charts.
  4. Data omitted from the charts.
  5. An error on my part.

The total volumes in the aggressive and passive charts differ by about 15%. That may give an idea of the margin of error.


[8] For specific seconds, the estimated level of intra-group trading is higher. As the time resolution increases, intra-group share should become more volatile (at the finest resolution, it will frequently spike to 100%, whenever a single intra-group match occurs). If you’re interested, here’s a table of seconds when more than $30M traded and the intra-group share was above 75% (estimated). This will happen by random chance most often for the largest trader group (PTFs). I won’t pretend that there’s a way to test statistical significance without control data, but there is possibly a cluster of PTF-PTF trading around 9:33:40 (the timestamps could be off by a couple seconds).

Time      Group       Intra-Group 1-Second Volume (Million USD)  Intra-Group Share of Total 1-Second Volume
09:30:39  PTF         25                                         0.79
09:33:38  PTF         25                                         0.76
09:33:39  PTF         45                                         0.87
09:33:42  PTF         32                                         0.94
09:38:19  PTF         42                                         0.88
09:44:02  BankDealer  25                                         0.79


[9] Similar statistics published by Eurex show HFTs tending not to trade with each other, during a flash crash in DAX futures. (If videos test your patience, skip a little over halfway through, until the timestamp on the left is 3:38)


[10] It could also create awkwardness in the company cafeteria. If one group has been making money off of another, that might become obvious if self-match prevention were enabled.


[11] Manipulative self-trading imposes a much higher cost on market participants, because the “noise” is specifically designed to deceive. Though, some people think that noise in market data can reduce “front-running” and is beneficial. I don’t agree. If you think transaction costs would be lower with more limited data, paring data feeds makes more sense than corrupting them. I also suspect that, for most markets, realtime order and trade transparency lowers costs.


[12] This is just speculation, but I wouldn’t be surprised if most non-manipulative self-trading in these markets is from just one or two firms. A rumored (and disputed) report on BrokerTec shows that two firms, Jump and Citadel, execute 40% of volume there.

Saijel Kishan and Matt Leising have reported that:

Jump rents out computers and other infrastructure to its traders, who are organized into independent trading teams. The groups operate as separate cost centers… Jump applies its secrecy ethic within the firm. The teams don’t share information about trading strategies with each other

Citadel also has a reputation for internal secrecy.


[13] It’s easy to see how liquidity anxiety would affect asset prices negatively, especially for flight-to-safety products, which are considered “safe” partly because of their liquidity. Say, hypothetically, that money markets are 100% liquid today, but you suspect that they could freeze up in the next year. You’d probably empty your account immediately, right? If enough people did the same, then liquidity could evaporate in a run.

Less intuitive is the possibility that the very safest assets could increase in value when liquidity is expected to disappear. In such situations, there are probably even worse fears about other markets. Long-term treasury prices actually went up during the 2011 debt ceiling crisis, despite some pessimistic speculation. If this phenomenon contributed to the treasury flash rally, there would presumably have been changes in other assets’ liquidity measures, cross-asset lead-lag relationships, or correlation structure.


[14] A consequence of this model is that the order book will be skewed in the opposite direction of a meta-order, e.g. as someone buys a large block of AAPL, there will usually be more quantity available on AAPL’s offer than its bid (near the top of book). That could be an important detail in the “front-running”/HFT/spoofing debate, because the traders who use skewed order books to predict price may actually be trading on the other side of large meta-orders — offering fundamental traders cheaper fills, rather than pushing the price away from them. Strategies that use order book signals may still compete with other mean-reversion traders, but complaints about that don’t sound very compelling.


[15] Kirilenko, Kyle, Samadi, and Tuzun write that “hot potato” trading contributed to the equity flash crash (of 2010):

After buying 3,000 contracts in a falling market in the first ten minutes of the Flash Crash, some High Frequency Traders began to aggressively hit the bids in the limit order book. Especially in the last minute of the down phase, many of the contracts sold by High Frequency Traders looking to aggressively reduce inventories were executed against other High Frequency Traders, generating a “hot potato” effect and a rapid spike in trading volume.


[16] I don’t know what sort of analysis the authors did to determine whether a given order increased or decreased a trader’s exposure. It seems likely that they considered a trader’s “position” to be a combination of their 10-year cash and futures holdings. That wouldn’t be the only measure of market exposure. For example a trader that is short the 30-year may consider buying the 10-year to be a partial hedge. Likewise for a trader long stocks, or another correlated basket.


[17] With the exception of Bank-Dealers, which aggressively covered short futures positions to the tune of about $200M across 3 minutes, a number that does not sound particularly high.

IEX, Ideology, and the Role of an Exchange

IEX has raised significant capital, possibly at a valuation well into the hundreds of millions. IEX plans to become a full exchange and continue capturing market share, but I wonder if it might have a unique long-term vision that excites investors. In this post, I will speculate about what that vision might look like. To be absolutely clear, this post is highly speculative, and does not constitute trading or investment advice.

CEO Brad Katsuyama testified that “IEX was founded on the premise of institutionalizing fairness in the market.” Soothing words, and possibly words that tell us something substantive about IEX’s values.

IEX recently introduced the D-Peg, an order type that uses market data to make a prediction about where the price is heading, and transact only at times that are predicted to benefit the user. The D-Peg is a blending of price prediction, traditionally the role of traders, with the matching process itself. Combined with the 350us structural delay built into IEX, it’s easy to see how even crude prediction signals could become incredibly powerful. As cofounder Dan Aisen puts it:

[W]e don’t need to be the single fastest at picking up the signal– as long as we can identify that the market is transitioning within 350 microseconds of the very fastest trader, we can protect our resting discretionary peg orders. It turns out that 350 microseconds is an enormous head start

I imagine that even something as basic as an order book ratio (e.g. [AskQuantity – BidQuantity] / [AskQuantity + BidQuantity]), known 350us in advance, has tremendous economic value.
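
As a sketch, with hypothetical quantities:

```python
def book_imbalance(bid_qty: int, ask_qty: int) -> float:
    """The simple ratio mentioned above: +1 when the book is all ask,
    -1 when it is all bid. A heavy ask side suggests downward pressure."""
    return (ask_qty - bid_qty) / (ask_qty + bid_qty)

# Hypothetical book: 200 shares bid, 800 shares offered.
print(book_imbalance(bid_qty=200, ask_qty=800))  # 0.6, i.e. sell-side heavy
```

Knowing even this crude number 350us before anyone can act on it is the kind of edge Aisen is describing.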

This philosophy is interesting to think about, and I can see how it might appeal to certain audiences. If the exchange has a better idea of the market price than its customers, it makes a sort of sense for it to use that information to ensure trades can only occur at that price. But, I think the idea is ultimately a misguided one.

Here are some problems with it:

1) The exchange, in an effort to prevent ever more adverse selection, may want to make its prediction more sophisticated. If a skewed order book results in ‘bad fills,’ then the same can be said for trades occurring after price moves on correlated instruments. If the price of PBR has just dropped by 1%, then buy orders for PBR.A are surely in danger of being “picked off.” Should the exchange try to prevent that? IEX may well have decided that they’ll always allow this kind of adverse selection. But keep in mind that trading signals do not work forever, especially when they are heavily used — so IEX will likely need to continually revise its prediction methods.


2) As the prediction methods get more complex, they are more liable to be wrong. In the example above, maybe an event occurs that affects the values of the two share classes in different ways. The exchange could erroneously prevent traders clever enough to understand that from executing, impeding price discovery.

3) Sophisticated traders like the PBR/PBR.A specialists could opt out of these order types, but how would they make an informed decision? Right now, we just know that the D-Peg uses a “proprietary assessment of relative quoting activity.” Could that “proprietary assessment” change over time? If so, are those changes announced? Matt Hurd has lamented the D-Peg’s undisclosed nature, and thinks it contradicts IEX’s mission of transparency. [1]

4) An exchange cannot increase the profitability of one group of traders without harming another. Now, maybe the only group harmed here are unsympathetic high-frequency traders who don’t deserve their profits. I’m skeptical of that. Who might some of those evil traders “picking off” quotes on IEX be? The motivation for Thor, and a critical part of the Flash Boys story, is the fading of liquidity when a trader submits large marketable orders. Some of the traders that the D-Peg will stymie may be people like the young Brad Katsuyama, investors or brokers who send liquidity-seeking orders simultaneously to many different exchanges. Say the NBBO for BAC is 10.00/10.01, and an investor wants to sell a large holding, so she sends sell orders to multiple exchanges, including IEX. One of those orders hits Nasdaq right when another gets to IEX, but IEX waits 350us, and, seeing the Nasdaq bid disappear, perhaps decides not to execute any resting D-Peg interest with the incoming order. Had the investor timed her sell orders differently (in a similar spirit to Thor, sending the IEX-bound order early), she’d have gotten a better fill rate. [2]

Another possibly harmed group could be non-D-Peg resting orders on IEX. One fascinating aspect of the IEX speedbump is that they can use it not only to prevent resting orders from executing at inopportune moments, but also to help traders remove liquidity at opportune moments! I was surprised to see that some order types can automatically trade against others upon a change in IEX’s view of the NBBO, through a process called “Book Recheck”. The mechanics of IEX seem complicated, so I could be wrong, but it looks to me like orders eligible to “recheck” the book may initiate a trade at a price determined by the realtime (*not* delayed) NBBO. [3] In contrast, cancel requests for the passive sides of these trades would be subject to the IEX speedbump. Here is a concerning, hypothetical example:

A) The NBBO for a stock is 10.00/10.01
B) A trader has submitted an ordinary (non-peg) limit buy order at 10.00 resting on IEX
C) The NBB at 10.00 is completely executed, making the new NBBO 9.99/10.01
D) The trader, seeing that 10.00 is no longer a favorable price for her purposes, tries to cancel her buy order
E) Her order cancellation goes through the 350us speedbump
F) In the meantime, IEX sees that the new NBBO midpoint is 10.00, and decides that a D-Peg sell order (or midpoint peg) is now eligible to recheck the book.
G) The D-Peg order is matched with the bid at 10.00.
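
A toy timeline (hypothetical latencies, in microseconds) of steps C through G:

```python
SHOEBOX_DELAY = 350  # us

# t = 0: step C, the 10.00 NBB is executed; the NBBO becomes 9.99/10.01.
cancel_sent = 10                                 # D: trader tries to cancel
cancel_effective = cancel_sent + SHOEBOX_DELAY   # E: cancel lands at t = 360
recheck_fires = 180                              # F: IEX's non-delayed NBBO view
                                                 #    updates; midpoint is 10.00

# G: the recheck (t=180) beats the delayed cancel (t=360) to the stale bid.
assert recheck_fires < cancel_effective
```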

The combination of algorithmic order types and selective use of the speedbump resulted in one trader getting an especially good fill, and another trader getting an especially bad fill. I guess if you’re not careful designing an exchange that supposedly prevents traders from picking each other off, you might do some picking off yourself. [4]

5) Trading that occurs during price movements tends to be more informed, and preventing it could make markets less efficient. This would only be an issue if IEX captured significant market share, but it does sound like permitting trading only during periods of market stasis is part of IEX’s long-term vision. Referring to the D-Peg, Chief Strategy Officer Ronan Ryan says that “[a] core insight behind our market philosophy is that price changes are valuable opportunities, especially for those strategies fast enough to detect signals from price changes.” And also: “The economic benefit is that investors aren’t paying (or selling at) a worse price to a predatory strategy that is aware of quote changes before they are.”

It sounds like the idea is to stop an informed order from trading with an uninformed order, with the exchange deciding which is which. Naturally, the exchange is not an oracle and will misclassify some orders. But, if IEX becomes the dominant marketplace, and its classification is sufficiently good, informed orders will rarely get filled. You might think that wouldn’t happen, because IEX is only targeting ‘short-term’ alpha, but I’d venture a guess that a sizable chunk of order flow with long-term alpha will also have some short-term alpha inseparably folded into it. With information-bearing order flow being blocked, at a certain point, the exchange will be in the position of deciding when the imbalance between supply and demand warrants a price change. I happen to think that generally markets work better when people can freely trade with one another at prices of their choosing [5], and that a vision like this won’t get IEX into the same league as the major exchanges. But, market participants will be the judges of whether this model is a viable one.

6) Even if the exchange is pretty good at determining the market-clearing price and balancing supply and demand, it’s not clear it can do so more cheaply than algorithmic traders and human market participants. Right now, IEX is charging 9 cents per 100 shares traded, significantly more than estimates of typical HFT profit margins. [6]

7) IEX, by delaying executions, is effectively using market data from 350us in the future, piggy-backing on price discovery from other markets. As Aisen suggests, the speedbump is probably accounting for the vast majority of their prediction algorithm’s edge. [7] This is different from complaints about dark pools’ use of visible order information for price discovery. Dark pools can only use order information from the present, and have to report trades to the public tape “as soon as practicable“. The speedbump might well allow IEX to cheaply discover pricing information from lit market data, potentially starting a new era of speedbumps, with each exchange wanting to have a longer delay than their competitors have. Regulators may want to carefully think through possible end results of this form of competition.


8) We don’t understand how this sort of market structure would hold up under stress. HFTs thoroughly simulate their algorithms; does IEX do the same? In a flash crash situation, IEX might stop D-Peg matching for an extended period, preventing those clients from getting filled at prices they may love, and isolating much-needed liquidity from the rest of the market. Additionally, if IEX is too effective at blocking informed order flow, some traders could panic when they repeatedly try and fail to get executed, damaging market stability.


Most of these issues aren’t especially important to overall market health as long as IEX’s market share stays below a few percent. And I think their market model is a perfectly fine one for a dark pool, although a little more disclosure wouldn’t hurt. The question is whether their target audience of fundamental traders will want to participate in this sort of market. I suspect ultimately that they won’t, though IEX might reach critical mass before participants really have time for reasoned debate.

We may have a glimpse of what “institutionalized fairness in the market” really means. To some, it may mean the relief of relying on a trustworthy institution to equitably determine the timing and pricing of their trades. To others, it may sound like a private company determining the market price via secret, non-competitive algorithms — unaccountably picking winners and losers. Institutional arbiters are part of civilized society, but ideally they’re transparent, receptive to criticism, and reformable when not working. Before we hand over the keys to IEX, we had better make sure that they meet these standards.


[1] Hurd’s complaint seems fair enough, but I’ll mention that competing exchanges aren’t always perfectly transparent either. For instance, Nasdaq Nordic’s documentation seemed to have some noteworthy details about reserve orders that weren’t available on Nasdaq’s US site.


[2] Brad Katsuyama said that “fading liquidity” is one of IEX’s “concerns regarding negative effects of structural inefficiencies” in his testimony to the US Senate:

[D]ue to the construct of the market system certain strategies are able to get out of the way of buy or sell interest as they are accessing the market in aggregate, which calls into question the fairness of the inefficiencies which allow or enable such behavior, and the potential distortion of price discovery and of supply and demand.


[3] Execution Tag “LastLiquidityInd” has a value for “Removed Liquidity on Recheck.” And the Form ATS says:

Upon a change to the Order Book, the NBBO, or as part of the processing of inbound messages, the System may test orders on one or both sides of its market against the contra side of the Order Book to determine if new executions can occur as a consequence of the change in the IEX Book or prevailing market conditions[.] Orders resting on the Order Book at the IEX determined Midpoint, may be eligible to trade against orders in the updated Order Book, which were ineligible, or did not satisfy the order’s conditions, when they were originally booked.

Does that mean the recheck uses the same non-delayed NBBO that IEX uses in the rest of their logic? I don’t know, but more disclosure from IEX seems like a good idea.


[4] Our hypothetical trader who had her buy order “scalped” may also have heard statements from IEX such as “You can not scalp trades, you can not scalp orders that are on IEX.”


[5] Within some reasonable limits, of course. Limit-Up-Limit-Down price constraints seem to be appreciated by most participants, though even those aren’t completely free from criticism. Reg. NMS Order Protection also attracts passionate opinions on both sides. There is always going to be some tension between letting traders determine prices unencumbered, and protecting them from ‘erroneous’ or ‘unfair’ transactions.


[6] Rosenblatt Securities, which has conducted surveys of HFTs, recently estimated that HFT profit margins in US Equities are around 5 cents per 100 shares. Tabb Group similarly sees shrinking profit margins.


[7] The D-Peg aside, even the simplest formula like the NBBO midpoint will have massive alpha with a 350us “head start.”

Plain, Old Fraud in the Twitter-Hack Flash-Crash?

Two years ago, hackers took control of the Associated Press’s Twitter account and falsely tweeted that the president was injured due to explosions at the White House. Within 3 minutes, US stock indexes dropped about 1%, but recovered to their pre-tweet values after an additional 4 minutes.

I don’t like to idly speculate [1], but ever since then, I keep wondering if this hack might have been part of a massive manipulation scheme [2]. Even if it was just a prank, it seems like the hackers would have been foolish not to try capitalizing on the market movements that they caused. If they wanted to commit crimes, why not at least make some money?

It would be easy to profit off of such a scheme, and it seems conceivable that a savvy, well-funded group might have cleared an enormous sum. It’s also possible that this hypothetical group could have avoided attracting *too much* attention before wiring out the proceeds, perhaps by splitting up the trades across many accounts, without ever touching an American financial product or bank (markets worldwide were impacted by the tweet). The Syrian Electronic Army claimed responsibility for the hack. I obviously don’t know if that claim is true. But if it is, presumably that group could use the money.

Much like spoofing, the intentional spread of misinformation can harm all sorts of traders. There has been speculation that algorithmic traders were disproportionately deceived by the hack. I imagine that some were, but so were plenty of humans. Here’s Sal Arnuk of Themis Trading:

My initial reaction before I realized it was a fake tweet was the same horrible feeling I had when I worked at the top of the New York stock exchange when planes hit the World Trade Center.

And Arnuk also appears aware of the possibility that it was a profit-making scam:

When I realized it was a fake tweet, I was outraged and ashamed that the market was able to be manipulated so easily.

Regulators take spreading false rumors very seriously, like in today’s suit over false EDGAR filings. I am sure they have been looking into this more significant and complex incident. If and when they complete their investigation, don’t be surprised if it was more than just vandalism.


[1] That means I’m about to. This post is highly speculative.

[2] An SEC information page briefly describes “pump-and-dump” scams:

“Pump-and-dump” schemes involve the touting of a company’s stock (typically small, so-called “microcap” companies) through false and misleading statements to the marketplace. These false claims could be made on social media such as Facebook and Twitter, as well as on bulletin boards and chat rooms.

These scams may also be called “short-and-distort” when the manipulator shorts a financial instrument before spreading negative rumors.

Are Data Centers that Host Exchanges Utilities?

Swedish regulators are seeking to fine Nasdaq OMX for alleged anti-competitive practices in the Nordic colocation business. This case is fairly limited in scope, but it raises some more general questions about colocation.

HFTs, execution algorithms, and smart order routers rely on exchange colocation to provide cost-effective and fair access to market centers. Locking competitors out of an established data center could easily destroy their businesses. The Swedish Competition Authority alleges that Nasdaq OMX did just that:

In 2009, a Stockholm-based multilateral trading platform called Burgundy was launched. Burgundy was formed by a number of Nordic banks. The Burgundy ownership structure gave the platform a large potential client base, especially with respect to brokers. However, the owners had limited possibilities of moving trade from Nasdaq to Burgundy as long as the trade on Burgundy was not sufficiently liquid to guarantee satisfactory order execution. In order to increase the liquidity on Burgundy, it was vital for Burgundy to get more trading participants…

In order to come into close physical proximity with the customers’ trading equipment in Lunda, Burgundy decided to move its matching engine to the data centre in Lunda…

Burgundy had finalised negotiations with Verizon, via their technology supplier Cinnober, and the parties had agreed that space would be leased in Lunda for Burgundy’s matching engine. When Nasdaq heard of this agreement, they contacted Verizon demanding to be the sole marketplace offering trading services in Nordic equities in Lunda. Nasdaq told Verizon that if Verizon allocated space to the Burgundy matching engine at their data centre in Lunda, Nasdaq would remove their own primary matching engine and their co-location service from that centre. Such an agreement with Burgundy/Cinnober could also have an impact on Verizon’s global collaboration with Nasdaq. Verizon accepted Nasdaq’s demands, and terminated the deal with Burgundy/Cinnober.

Latency Arbitrage And Traders’ Expenses

In the US, a lot of people are upset that equity exchanges are located all over New Jersey, instead of being in one building. Michael Lewis’s primary complaint about HFT is that it engages in “latency arbitrage” by sending orders between market centers ahead of anticipated institutional trades. I suspect that, in addition to those concerned about latency arbitrage, most HFTs would also be happy if important exchanges moved to one place. That would cut participants’ expenses significantly; there would be no need to host computers (and backups) at multiple locations and no need to procure expensive fiber and wireless links between locations. It would also allow HFTs to trade a given security on multiple exchanges from a single computer, dramatically simplifying risk checks and eliminating accidental self-trading.

If so many market participants want it, why hasn’t it happened? There could be several reasons:

  1. It’d be anti-competitive for exchanges to cooperate too much in bargaining with data center providers.
  2. Established exchanges want the best deal possible for their hosting, which means they need to consider bids from many competing providers.
  3. Under the Reg. NMS Order Protection Rule, there could be some benefit to exchanges in having a structural delay relative to their competitors.
  4. Some exchanges see hosting, connectivity, and related services as important sources of revenue and want customers to procure those services from them. This especially includes exchanges which require colocated customers to lease rackspace only from the exchanges themselves – and also exchanges that operate their own data centers.
  5. Exchanges don’t want competitors in the same data center as them, so they use their considerable leverage with providers to keep them out.

The allegations against Nasdaq OMX concern #5, and appear limited to a single incident. But here's a potentially concerning statement by Andrew Ward in the FT (2010):

People familiar with Verizon said it would be unusual in the exchange industry for more than one operator to share the same data centre.

A New Model for Colocation

Reg. NMS requires exchanges to communicate with each other, and would work better if delays in that communication were kept to a minimum. Would it be reasonable for updated regulation to require market centers to provide one another with a rapidly updated view of their order books? That would necessitate exchange matching engines being physically close to one another, ideally in the same building. Requiring this would end most types of “latency-arbitrage,” whether real or perceived.

One solution could be for FINRA to solicit bids from providers under the assumption of a long-term contract, with extra space available for new exchanges and traders – keeping costs down for everybody. This proposal is in tension with the concern in #1, but I wonder if, because we have a single national market system, it’s reasonable for that system to negotiate as a single entity, and for the other concerns to override #1. Exchanges would no longer make much money from colocation services, but they could compensate for that by raising trading fees, which would arguably be healthier for markets anyway.

In my mind, what separates data centers from utilities is that, for most of their non-financial customers, there’s very little benefit to being in a certain building versus a nearby one. So long as nearby buildings have good connectivity to the local internet, customers have many options when procuring hosting services. Financial customers are much different. Once a major exchange is located in a certain building, traders, and sometimes competing exchanges, have no choice but to lease space there. That feels to me like a completely different dynamic, and possibly one that justifies data centers, in this specific industry, being classified as utilities.

Market Impact, Informational Efficiency, and the Value of Liquidity

A worry many people have about HFT is that it raises market impact costs for large institutional traders. Trading algorithms that explicitly anticipate order flow do exist, but does that mean an outright ban on HFT would reduce trading costs? It’s hard to know the answer to that question, but it may be helpful to consider how instruments receive added value from financial markets.

Ecosystems on secondary markets can appear completely zero-sum. This seems obvious if you consider only first-order effects: if a short-term trader buys low and sells high, then the money they made has to come from their counterparties somehow. By this logic, if one type of trader consistently makes money, then they are doing so directly at the expense of the rest of the market. But this view makes less sense once markets are sufficiently mature.

Well-developed markets with widespread participation offer traders liquidity and the expectation of continued liquidity. That liquidity is worth something. For long-term traders, a liquid marketplace makes it easy to find a partner to trade with. Imagine, for example, how much you would save when selling your home if you didn't need a broker to find buyers. Liquidity also raises the value of assets themselves; in essence, something is worth more to you if you know that you can readily sell it later. How much more, though? If a prospective buyer knew a home could later be sold without paying a 3% broker fee, then maybe they'd consider bidding 1% higher for it. In financial markets, the classic example of this phenomenon is the premium that on-the-run treasuries command over those that are off-the-run: a 30-year bond with only 10 years left until maturity generally trades at a lower price than a freshly issued 10-year bond, even though the two bonds should make effectively identical payments. The suspected reason for this discount is that the fresh 10-year is more actively traded and easier to sell if desired.
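To put a rough number on that intuition, here is a minimal sketch (invented figures, not market data) of how a small "illiquidity yield premium" on the off-the-run bond maps into a price discount:

```python
# Two bonds with identical remaining cash flows (10 years of 3% annual
# coupons plus principal); the off-the-run issue is assumed to yield
# 5bp more as compensation for its worse liquidity. Invented numbers.

def bond_price(coupon, ytm, years, face=100.0):
    """Price of a plain annual-pay bond, compounded annually."""
    coupons = sum(coupon * face / (1 + ytm) ** t for t in range(1, years + 1))
    return coupons + face / (1 + ytm) ** years

on_the_run = bond_price(0.03, 0.0300, 10)
off_the_run = bond_price(0.03, 0.0305, 10)  # +5bp illiquidity premium
print(f"on-the-run:  {on_the_run:.3f}")
print(f"off-the-run: {off_the_run:.3f} (discount: {on_the_run - off_the_run:.3f})")
```

Even a 5bp yield concession translates into a price discount of roughly 40 cents per $100 face on a 10-year bond, which is the right order of magnitude for the on-the-run premium in calm markets.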

Liquidity Provides Option-Value

One way to think of liquidity's worth is to consider an analogy with put options. A put option offers its owner the ability to sell an asset in the future at a pre-determined price. Liquidity offers the owner of an asset the ability to sell in the future at the prevailing market price, whatever that may be. That difference is important – but even though the "option" associated with liquidity has an "exercise price" that floats with the market price, it is still worth something. Consider that investors pay their brokers for guaranteed executions at (or worse than) the market price. Or consider the existence of trade-at-settlement products, where one party will pay another in order to guarantee an execution at the day's settlement price (a measure of market price). A liquid security thus has an embedded put option that should increase its value. [1] Options, of course, increase in value when volatility rises. The price of liquidity appears to do the same; on p.42 of this paper, for instance, you can see that the on-the-run premium tends to be higher during times of high volatility. [2] [3] [4]

Market Impact

The cost of liquidity is generally divided into two components: the bid-ask spread and market impact. The spread is a measure of how much market-makers need to be compensated for the risk of trading with a counterparty large enough to move the price. Market impact is the change in price that occurs after a trade. Since large traders often split their transactions into many small child orders, market impact means that the later parts of these orders are executed at a less favorable price. Very roughly speaking, small traders are expected to have the bid-ask spread as their primary transaction cost, and large traders are expected to have market impact as theirs. One common view is that these large traders are "informed," and that their favorable information is why market-makers lose money when trading with them. Here's a widely cited paper by Fox, Glosten, and Rauterberg of Columbia:

There are three primary kinds of private information, which we will label, respectively, inside information, announcement information, and fundamental value information…

Whatever the source of an informed trader’s private information, the liquidity provider will be subject to adverse selection and lose money when it buys at the bid from informed sellers or sells at the offer to informed buyers. As long as there are enough uninformed traders willing to suffer the inevitable expected trading losses of always buying at the offer and selling at the bid, however, the liquidity provider can break even. There simply needs to be a large enough spread between the bid and offer that the losses accrued by transacting with informed traders are offset by the profits accrued from transacting with uninformed investors… [p22 of pdf]

[T]hese informed traders buy when their superior estimate of share value suggests that a stock is underpriced and sell when it indicates a stock is overpriced, their activities make share prices more accurate. [p34 of pdf]


This is a nice story, and I think it is largely reflective of the nature of markets. There’s good reason to think that large traders will often possess valuable information; if you’re going to trade large size, then you have the resources to spend on insightful analysis. And, conversely, if you have the resources to spend on insightful analysis, then you may as well trade large size. There’s some evidence that, in aggregate, managers tend to trade enough size so that their costs balance out their expected profit. [5] This is akin to the type of market efficiency that Cliff Asness and John Liew advance:

[I]t seems like whenever we have found instances of individuals or firms that seem to have something so special (you never really know for sure, of course), the more certain we are that they are on to something, the more likely it is that either they are not taking money or they take out so much in either compensation or fees that investors are left with what seems like a pretty normal expected rate of return. (Any abnormally wonderful rate of return for risk can be rendered normal or worse with a sufficiently high fee.)

Also, it is most certainly the case that with sloppy trading you can easily throw away any expected return premium — whatever its source — that might exist around these strategies by paying too much to execute them

Some Traders Have High Market Impact but Little Valuable Information

But, as with many nice stories, the story of "toxic traders" as "informed" traders seems incomplete. Are there any participants that are generally considered "uninformed," yet trade large size and have high costs from market impact? The most obvious are index funds. [6] [7] When stocks are expected to be incorporated into an index like the S&P 500, their prices rise in anticipation. And stock prices tend to fall when they are expected to be deleted from a popular index. Because these price moves occur before the actual index changes are made, funds that strictly follow an index will trade shares only after prices change, costing them money. [8] Some portion of these anticipatory price moves revert after the index changes are finalized (and large funds have completed their rebalancing), which means that at least part of this expense is not in exchange for something of value, such as the added liquidity that index constituents enjoy. This effect is different from the "invisible scalp" that worries some commentators. That "scalp" concerns the deviation of a fund from its index benchmark, while we're discussing underperformance of the actual index. Antti Petajisto has estimated this underperformance as costing investors in popular indexes at least a fifth of a percent per year, by no means a trivial sum. [9]

Price Inelasticity

If index investors really are uninformed, why should their trades (or anticipated trades) move the market price at all? The standard adverse selection model would say that less aggressively priced orders in the order book come from market makers requiring compensation for the risk of being run over by large, informed traders. If that model were complete, an informational change would be the only reason a price should ever move. This view is a vestige of the Efficient Market Hypothesis (EMH). In reality, prices move in response to changes in supply and demand, even if those changes aren't related to any new information. Consider a really simple model, where each trader has their own point estimate of a stock's "intrinsic value." Unless everybody has the same estimate, when somebody buys more stock than the traders with the lowest estimates are willing to sell, the market price will rise. This is another way to think of an order book. [10] The EMH idea that traders without any information should not be able to significantly move prices is like saying that there are trillions of dollars of undeployed capital, backed by first-rate analysis, just waiting for stocks to move a few basis points before trading them. That description doesn't sound like reality, but who knows; perhaps if algorithms continue to take over our markets it could become true in the distant future.
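Here's a toy version of that model, with invented parameters: every holder has a private value estimate and will sell at or above it, and an uninformed buyer sweeping through those latent offers moves the marginal price with no news arriving at all.

```python
import numpy as np

rng = np.random.default_rng(7)

# Toy latent order book: 10,000 holders, each with a private "intrinsic
# value" estimate, each willing to sell 100 shares at that estimate.
# Nothing in this example involves new information.
estimates = np.sort(rng.normal(100.0, 0.5, size=10_000))  # ascending sell prices
shares_per_holder = 100

def marginal_price(buy_shares):
    """Price of the last share bought after sweeping the cheapest offers."""
    n_holders = int(np.ceil(buy_shares / shares_per_holder))
    return estimates[n_holders - 1]

for q in (1_000, 10_000, 100_000):
    print(f"buy {q:>7,} shares -> marginal price {marginal_price(q):.2f}")
```

The larger the purchase, the deeper into the distribution of estimates it has to reach, so the price rises purely from the dispersion of opinions.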

Order Anticipation

Because market impact is such an important force in our markets, detecting institutional order flow could be very lucrative. Market makers moving their quotes out of the way of suspected order flow is order anticipation. Trading in the direction of suspected order flow is also order anticipation, though some label it "front-running." In recent public discourse, there's been some tendency to claim that order anticipation is fundamentally the domain of HFT. This is clearly wrong. The connection between stock index changes and anticipatory price moves has a time scale of days, and has been measurable for decades. Similarly, there was suspicion recently that Pershing Square's toehold purchase of Allergan shares suffered from order anticipation. Again, these price moves occurred over the scale of days, and as can be seen in an analysis by Betton, Eckbo, and Thorburn, there is nothing new about toehold purchases exhibiting large price impact. [11] It's hard to know how much of this impact is due to anticipatory trading rather than simple supply and demand. Separating the two effects is particularly difficult because herding among traders is common and arguably a form of order anticipation itself. Traders' tendency to make similar decisions simultaneously (herding) has an important influence on market impact, as discussed (among other things) in a wonderful empirical study by Zarinelli, Treccani, Farmer, and Lillo. It's also hard to dismiss order anticipation as unhealthy for markets. If liquidity providers were not able to price the risk of their counterparty being part of a takeover attempt, trading could become extremely disorderly.
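For a sense of scale: metaorder impact in this literature is often benchmarked against the classic "square-root" rule of thumb, where peak impact is roughly Y * sigma * sqrt(Q/V). A minimal sketch with invented inputs follows; the constant Y in particular is only an assumption, and empirical studies fit other functional forms as well.

```python
import math

def sqrt_impact(sigma_daily, q_shares, adv_shares, y=0.5):
    """Rule-of-thumb peak impact: Y * sigma * sqrt(Q / V).
    Y is an empirical constant; values near 0.5-1 are commonly quoted."""
    return y * sigma_daily * math.sqrt(q_shares / adv_shares)

# Invented example: 2% daily volatility, buying 5% of average daily volume.
impact = sqrt_impact(sigma_daily=0.02, q_shares=500_000, adv_shares=10_000_000)
print(f"expected peak impact: {impact * 1e4:.0f} bps")  # ~22 bps
```

The notable feature of this form is concavity: doubling the order size much less than doubles the impact, which is one reason large traders split their orders in the first place.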

Information Rents

Traders’ analyses help security prices on public capital markets reflect their real-world values. And market impact is arguably the primary mechanism that connects the information from research with asset prices. [12] At the same time, if market impact costs were too high (from order anticipation, lack of liquidity, or otherwise), nobody rational would bother to trade.

Joe Stiglitz worries that order anticipation 'steals' information rents from research that is important to economic efficiency, and that automated pattern recognition could start a wasteful arms race between computerized order anticipators and the fundamental traders trying to avoid them:

[T]he informed, knowing that there are those who are trying to extract information from observing (directly or indirectly) their actions, will go to great lengths to make it difficult for others to extract such information. But these actions to reduce information disclosure are costly. And, of course, these actions induce the flash traders to invest still more to figure out how to de-encrypt what has been encrypted.

If, as we have suggested, the process of encryption and de-encryption is socially wasteful — worse than a zero sum game — then competition among firms to be the best de-encryptor is also socially wasteful. Indeed, flash traders may have incentives to add noise to the market to disadvantage rivals, to make their de-encryption task more difficult. Recognizing that it is a zero sum game, one looks for strategies that disadvantage rivals and raise their costs. But of course, they are doing the same.

I have some sympathy with what he's saying here, but perhaps his analogy of encryption offers some additional insight. Loosely speaking, encryption is cheap and decryption is very challenging. If there's sufficient background noise in which to camouflage an "encryptor's" signal, then probably there isn't much a "decryptor" can do to find that signal. Obviously, we can conceive of legal or market structures that would tilt the balance of power completely towards the "decryption" camp; for example, if the law required traders to announce their intentions before taking action. But, for common market structures, I suspect that it isn't too hard to be almost "maximally" encrypted, that is, to have one's orders disguised to the point where investing more in encryption isn't going to measurably affect their detectability. I'm not claiming that everybody is this careful; there are plenty of examples of sloppy trading. But my intuition is that currently the majority of market impact comes from price inelasticity, not order anticipation. That said, I think it's worthwhile to consider whether there are palatable market structures that allow orders to be better concealed.
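To see why near-maximal "encryption" may be achievable, consider a toy detection problem (all parameters invented): a watcher computes the z-score of signed order-flow imbalance, and a hidden buyer only becomes statistically visible as their share of the flow grows.

```python
import numpy as np

rng = np.random.default_rng(42)

def mean_imbalance_z(participation, n_trades=10_000, trials=500):
    """Average z-score of signed order-flow imbalance when a hidden buyer
    accounts for `participation` of trades and the rest is 50/50 noise."""
    zs = []
    for _ in range(trials):
        signs = rng.choice([-1, 1], size=n_trades)  # background flow
        signs[: int(participation * n_trades)] = 1  # the hidden buyer's prints
        zs.append(signs.sum() / np.sqrt(n_trades))  # z ~ N(0,1) with no buyer
    return np.mean(zs)

for p in (0.001, 0.01, 0.05):
    print(f"participation {p:6.1%} -> mean z ~ {mean_imbalance_z(p):4.1f}")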

Market Manipulation


To quote (again) Matt Levine's excellent description of manipulation:

Generally it is allowed, encouraged even, for a big market participant to hide its intentions. It is manipulation for a market participant to affirmatively mislead people about its intentions. The space between those two things is very narrow indeed.

When traders are allowed to affirmatively mislead the market about their intentions, then the space of possible "encryption" schemes becomes extremely large and complex. That complexity would make it hard for institutions to reach the "maximally encrypted" state we discussed. With manipulation allowed, Stiglitz's vision of battles between encryptors and decryptors who "add noise to the market" would be very much "worse than a zero sum game." Spoofing, which has new laws specifically targeting it, is probably a minor nuisance in comparison to other noise-adding manipulation. Momentum-ignition concerns me much more than spoofing, particularly the sort where very large quantities are traded in order to temporarily cause a supply (or demand) shock as well as fool order anticipators. Again, these order anticipators are not just liquidity-seeking HFTs, but also include execution algorithms, institutional "herding" [13], momentum strategies, human click-traders, and market makers forced to pull their quotes on one side while quoting more aggressively on the other to exit their bleeding positions.

What does momentum-ignition look like? The allegations that Optiver “banged the close” on energy futures may be characteristic. Optiver appears to have been a market-maker on CME trade-at-settlement (TAS) products, used by participants desiring guaranteed trades at the settlement price. These participants, which may pay for this guarantee, include oil ETFs (Table 2, p45 of pdf). Optiver allegedly then “hedged” their position by trading very aggressively in a short period of time, distorting the settlement price with a program they called the “Hammer.” [14] This allowed them to effectively buy futures near the market price and sell them to their TAS counterparties at the distorted price. [15]
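To make the mechanics concrete, here is a stylized back-of-the-envelope sketch (all numbers invented, not Optiver's actual figures): sell trade-at-settlement contracts, then buy futures aggressively into the settlement window so that the settlement prints above your own average entry price.

```python
# Stylized "banging the close" arithmetic. Invented numbers, not
# Optiver's actual figures.

tas_lots = 1_000       # lots sold trade-at-settlement: short at settlement price
contract_size = 1_000  # e.g. barrels per lot
avg_fill = 100.10      # average price paid while aggressively buying futures
settlement = 100.30    # settlement price, printed after (and lifted by) the buying

# The futures bought during the "hammer" are delivered against the short
# TAS position at the settlement price, so no further trading is needed.
pnl = tas_lots * contract_size * (settlement - avg_fill)
print(f"gross P&L: ${pnl:,.0f}")  # $200,000 before fees and detection risk
```

Note that the TAS obligation itself supplies the exit: the position unwinds at the settlement price automatically, with no additional trading to give the profit back.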

This scheme is indicative of momentum-ignition’s required features: a high-impact mechanism to enter a position and a low-impact mechanism to exit it. [16] I suspect that momentum-ignition is especially easy to spot when it involves a market-making service to a counterparty (or client) which guarantees a benchmark. [17] Matt Levine described another such scheme involving auctions in the equity market. My guess is that when manipulation distorts an important benchmark like the closing price of stocks, or the settlement price of crude oil (gasoline and heating oil too), somebody will probably notice. [18]

Momentum-ignition was already prohibited in US equities, as described in this 2010 SEC Concept Release (p17), before Dodd-Frank became law. But spoofing (which the Concept Release states is a type of momentum-ignition) is now punishable with multi-decade jail sentences. I won't argue that market manipulators should go to jail for life, but I think it's nice when the law treats offenses of similar character similarly.

Not everybody has a problem with momentum-ignition though. Here’s Izabella Kaminska on the “concentration” of trading to maximize market impact in the FX scandal [19]:

“Concentration” tactics are normal practice for the industry. It’s the equivalent of creating economies of scale and then choosing the moment to transact so that the depth of the market, and it’s likely impact on the price, is most beneficial to you. It’s called skillful execution.

In some sense, that’s what trading is about

The HFT Controversy

When people criticize HFT, I wonder if what they really dislike is manipulative trading. I don't know of any reason to think that manipulation is more common among HFTs than among other traders. If that's right, then many of the criticisms that HFT forces institutional traders to make wasteful technology investments are misdirected. Large institutions' transaction costs pre-date automated trading and are a natural feature of markets. Computer programs, even when they trade aggressively, can cheaply contribute to liquidity, adding real value to assets. Instead of banishing computers from our markets, society would be better served if we spent more time evaluating the harmfulness of specific behaviors. With clear, consistent definitions of manipulation and good enforcement, perhaps we can convince the public that our markets are safe.

 

[1] This phenomenon might appear not to add any value to an asset when it's a shortable security. If a buyer is willing to pay a bit extra because they think the asset will be easier to sell in the future due to the liquidity "put option," then a short-seller should be willing to accept a lower price because of the similarly embedded "call option." There's no reason to think these effects are equal and opposite, though; a security's short interest is usually a small fraction of its overall float.

[2] You might be especially interested in making sure financial assets are liquid if you're naturally a seller of them. Companies that make use of capital markets (through bond or public stock offerings) would certainly fall into this category. So do governments. Here's FRBNY president William Dudley, addressing the relative illiquidity of TIPS bonds in 2009:

[I]t may make sense to structure the TIPS program in a way that would help reduce the illiquidity premium associated with TIPS relative to on-the-run nominal Treasuries. Some of the current illiquidity premium is likely to shrink as financial markets stabilize. However, further improvements may require a change in either the structure of the TIPS program or the secondary market trading environment.


[3] Francis Longstaff estimated an upper bound on the value of liquidity by comparing it to a lookback option. An omniscient investor can sell a holding at its peak, but only if its market is completely liquid. If the market were completely illiquid, the investor couldn't sell at all. So a lookback option, which would offer the investor the right to sell at the absolute peak in a given time period, gives an upper bound on liquidity's value. In practice, of course, there are caveats:

  1. Returns aren’t normally-distributed.
  2. An investor may not have a pre-specified timeframe. And this lookback option would have infinite value if it were over an infinite time interval – not a very tight upper bound. On a side note, bizarrely, options like this actually exist in the real world.
  3. An omniscient investor might choose to sell a holding below its peak for other reasons (like taxes, personal reasons, or because when you know everything, there’s probably a better way to deploy capital).
  4. Omniscience doesn’t exist. And if it did, I feel like markets wouldn’t make any sense.

If you’re interested in methods to price the value of liquidity, here’s a review by Aswath Damodaran.

[4] Proprietary traders as a whole make much more money when volatility is high. Those increased profits could be partly due to the higher value of liquidity and a commensurate rise in demand for it. The entire financial sector is in some sense engaged in the business of selling option-value. One view of banks is that they make money via shorting liquidity: by holding assets (particularly fixed income) to maturity and riding out fluctuations in market value, they are rewarded with a small profit. And one argument against strict mark-to-market accounting is that it doesn’t properly encapsulate this aspect of banking. Another perspective is that when banks have shorted “too much” liquidity and the price of that liquidity has risen (i.e. volatility has gone up and the “option-value” component of liquidity is expensive), they tend to rely on governments and central banks to sell them additional liquidity cheaply in order to survive. If you consider this government backstop to be an unfair subsidy, then asking banks to mark their balance sheets to market makes more sense. I always find this connection between the two “types” of liquidity (the kind provided by central banks and the kind traders use every day) both self-evident and surprising.

[5] This is the so-called fair pricing condition. See for instance this analysis by Waelbroeck and Gomes. They used a dataset of institutional transactions with (most) “cash flow” trades separately marked. “Cash flow” trades are due to client inflows and outflows which are (probably) not reflective of fund managers’ decisions. When they exclude these “cash flow” trades, they find that, on average, returns are quite close to transaction costs for different portfolio managers (figure 4, p23). They also find costs and returns are roughly in balance for transactions of different sizes (figures 12a and 12b, p41).

[6] At least, it’s a common view that index investors piggy-back on the pricing provided by active traders and do not have any valuable information. Maybe this is an over-simplification though; some index investors could buy when they predict future index buying. In that case, would they mind overpaying slightly for something that has psychological value (like index inclusion)?

Here’s Slack CEO Stewart Butterfield on investors being potentially willing to pay a premium to bolster the perceptions surrounding their investment:

You have to choose some numbers… One billion is better than $800 million because it’s the psychological threshold for potential customers, employees, and the press.

And:

[I]t increases the value of our stock and can allow potential employees to take our offers, and it reinforces the perception for our larger customers that we’ll be around for the long haul.


[7] The Waelbroeck and Gomes analysis in [5] gives us another example of “uninformed” traders who still pay market impact costs, if you consider “cash flow” transactions to lack informational value:

The peak impact of cash flows is statistically indistinguishable from that of other metaorders and both are indistinguishable from 1.5 times the estimated shortfall.

Their analysis also finds that “cash flow” trades’ market impact has a tendency to revert to the pre-trade price (or even past it). So these transactions have market impact expenses, but also appear to have zero (or negative) long-term alpha.

[8] Some funds are more careful and will deviate from indexes a bit in order to avoid paying some of this impact expense.

[9] I suppose this hidden cost, if real, is one way that index investors are charged by the market for piggy-backing on others’ price discovery.

[10] Many of these “orders” will not be on any exchange’s order book. But they’re somewhere in traders’ minds, which makes them like hidden orders (some people call them “latent orders”).

[11] They analyzed thousands of public control bids from 1980 to 2002, a few hundred of which were accompanied by toehold purchases. The charts on p28 show significant pre-announcement price movement.

[12] Of course the price of a security can jump discontinuously in response to new information, on little to no volume. But security prices move continuously during the day, and presumably these price changes have informational value. And, when prices change discontinuously at the open, very often they do so in an auction with heavy volume. I’d imagine that, whenever a price change is accompanied by substantial trading, market impact plays an important role in price discovery. This could explain price volatility appearing lower across the weekend than during trading hours.

[13] Momentum-ignition could profit from “herding” in ways like a manipulator inducing panic, triggering stop orders, or causing forced liquidations.

[14] In reference to a similar case in the FX market, Matt Levine predicted:

Of course there will be emails – there are always emails

One issue with enforcing a ban on momentum ignition (or other market manipulation) is that proving it requires knowledge of the trader's intent. But I guess there are always going to be manipulators who describe their schemes over email, or who just call their strategy to bang the close "the Hammer."

[15] It appears Optiver did a bit more than that; another division allegedly used its foreknowledge of the price-hammering in other ways. From an internal company email (p54):

Nick has made a tidy profit on his trading, close to $100k I expect. But I consider the way In which he did it to be both deceitful, and reckless…

They will tell you of course that they have noticed (after we told them), that when someone sells TAS’s, the future often go down during the settlement…

Since our colleagues in Amsterdam know that we are going to do the dirty work, they simply trade their futures before hand, and make a big profit on them.


[16] The possibility that markets could allow asymmetric, profitable impact is counterintuitive. Writing about the FX rigging case, Matt Levine explains why this is confusing:

Let’s say that the chat room traders are selling euros to customers at the fix, so they want a high fix. They want to buy a lot of euros in the few minutes right before the fix, to push the price up…

Banks could further their manipulation by buying from outside banks, selling to outside banks, or doing neither. This should make you suspicious. All of those things can’t work equally well! If buying from other banks would push the price up, or down, then selling to them should push the price down, or up. The fact that the chat room traders sometimes did one, and sometimes the other, means that they hadn’t found a reliable cheat, a way to take the risk out of their trading. It means in some sense that their manipulation didn’t work. I mean, it worked fine. But there’s a reason you “cnt teach that.” It makes no sense!

Potentially, what these alleged manipulators did was increase their own position when they suspected that their counterparty bank would hedge in a low-impact fashion. When that was the case, the alleged manipulators could do their high-impact trading while their counterparty, on the opposite side, would do low-impact trading. When they suspected a rival bank would trade in a high-impact fashion, they might do the opposite. I’m not saying they were successful at this, but our markets may well allow a high-impact entrance and a low-impact exit, in contradiction with a sort of efficiency. Jim Gatheral calls this type of market efficiency, which renders manipulation impossible, a “no-dynamic-arbitrage principle.”

It would be great if we could design a market where this principle is reality. But I kind of suspect it’s impossible. Maybe the closest we can get is to de-anonymize market data long after the sensitive information it contains is stale. That would allow private-sector analysts to uncover manipulation, as well as offer victims the opportunity to see how it damages them.

[17] Finding a low-impact exit is probably also much easier for market-makers guaranteeing a benchmark price.

[18] It appears that NYMEX grew concerned about Optiver’s activity after just a few weeks.

[19] Different markets have different norms. I don’t know if pushing around the price is acceptable in FX markets, but it appears that regulators may want harsh penalties for any guilty banks.

Blind Analysis, Inefficient Markets, and UK Polling Accuracy

Everybody knows that market prices do not perfectly reflect all available information. This is partly due to human biases, one reason that algorithmic trading has become so successful.

The polls published prior to yesterday’s UK General Election generally predicted a hung parliament. Those polls were obviously very wrong. Why? One factor seems to be the influence of human bias on data analysis. Here is Damian Lyons Lowe, CEO of pollster Survation, which nearly published a poll that would have closely predicted the election results:

the results seemed so “out of line” with all the polling conducted by ourselves and our peers – what poll commentators would term an “outlier” – that I “chickened out” of publishing the figures – something I’m sure I’ll always regret

So, at least one pollster held back results that would have completely changed the prediction landscape. Others may have done the same, or may have unconsciously manipulated their analysis so that the final results were closer to what everybody else had been reporting. When the first exit poll, which closely agreed with Survation's never-released data, was announced, markets reacted strongly: GBP/USD immediately rose over 1%, indicating that the trading community found the results just as surprising as the political experts did.

This is a fantastic opportunity to learn about blind analysis. Blind analysis is a technique, primarily used in particle physics, to reduce the influence of human biases on reported results of an experiment. Physicists learned the hard way that human psychology is an important source of systematic error. See, for example, how some experimental values evolved over time, courtesy of the Particle Data Group. Some of these plots may be reminiscent of markets, which at times have very poorly predicted both future prices and their uncertainty.

Joel Heinrich has written an excellent, approachable summary of the motivations and methods of blind analysis. Heinrich mentions an example that should sound familiar given the biased polling discussed above. A physical quantity appeared, in its reported measurements, to become less uncertain over time, eventually being reported about 8 sigma away from its now-established value:

It is quite likely that the experimenters during this period paid too much attention to the level of agreement between their new result and the measurements of the recent past. If one judges whether a result is ready for publication by its agreement with the current world average, such disasters can happen…
[N]aturally when one sees an 8 [sigma] disagreement between one’s current result and the world average, one goes back and checks everything very carefully. On the other hand, when the new number is consistent with the old, there is a tendency to declare victory and move on. This asymmetry results in a statistical bias. A better procedure would be to list and schedule all the checks in advance of knowing the answer, and carry them out in either case

The techniques of blind analysis can be readily applied outside of physics. The general principles are to write strict procedures ahead of time, and to hide as much as possible from the individuals analyzing the data. This may mean hiding the final result, adding some kind of constant or noise to the data that can later be subtracted, or allowing the analysts to work with only a subset of the unblinded data. I strongly encourage everybody who works with data to read Heinrich's review or others. It is immensely frustrating to see human knowledge obfuscated for such a silly, avoidable reason. Blind analysis could dramatically improve the quality of all data-oriented disciplines, and I hope that it becomes more widespread.
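As a concrete illustration of the hidden-offset technique (a made-up measurement, not an example from Heinrich's paper): the analysis pipeline only ever sees data shifted by a secret constant, so the analyst can choose cuts and run checks without knowing whether the final answer agrees with the world average.

```python
import numpy as np

rng = np.random.default_rng(2024)

# --- Blinding (done once, by someone outside the analysis) ------------
raw = rng.normal(3.141, 0.05, size=1_000)  # made-up measurements
secret_offset = rng.uniform(-1.0, 1.0)     # sealed until the end
blinded = raw + secret_offset

# --- Analysis (the analyst sees only `blinded`) -----------------------
# Cuts, checks, and fits are all decided here, with no way to compare
# the running answer against the world average.
kept = blinded[np.abs(blinded - np.median(blinded)) < 3 * blinded.std()]
blinded_mean = kept.mean()
stat_error = kept.std(ddof=1) / np.sqrt(len(kept))

# --- Unblinding (only after the procedure is frozen) -------------------
result = blinded_mean - secret_offset
print(f"measured value: {result:.4f} +/- {stat_error:.4f}")
```

Because the offset is subtracted only after every cut and check has been fixed, agreement (or disagreement) with prior results can no longer influence which checks get run.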