Author Archives: Kipp Rogers

IEX, Ideology, and the Role of an Exchange

IEX has raised significant capital, possibly at a valuation well into the hundreds of millions. IEX plans to become a full exchange and continue capturing market share, but I wonder if it might have a unique long-term vision that excites investors. In this post, I will speculate about what that vision might look like. To be absolutely clear, this post is highly speculative, and does not constitute trading or investment advice.

CEO Brad Katsuyama testified that “IEX was founded on the premise of institutionalizing fairness in the market.” Soothing words, and possibly words that tell us something substantive about IEX’s values.

IEX recently introduced the D-Peg, an order type that uses market data to predict where the price is heading and transacts only at times predicted to benefit the user. The D-Peg blends price prediction, traditionally the role of traders, into the matching process itself. Combined with the 350us structural delay built into IEX, it’s easy to see how even crude prediction signals could become incredibly powerful. As cofounder Dan Aisen puts it:

[W]e don’t need to be the single fastest at picking up the signal– as long as we can identify that the market is transitioning within 350 microseconds of the very fastest trader, we can protect our resting discretionary peg orders. It turns out that 350 microseconds is an enormous head start

I imagine that even something as basic as an order book ratio (e.g. [AskQuantity – BidQuantity] / [AskQuantity + BidQuantity]), known 350us in advance, has tremendous economic value.
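
For concreteness, here’s what that ratio looks like in code — a minimal sketch in Python (the function and its sign convention are mine, not IEX’s):

```python
def book_pressure(bid_qty: float, ask_qty: float) -> float:
    """Order-book imbalance in [-1, 1].

    Positive values mean more displayed size on the ask than the bid,
    suggesting the next price move is more likely to be downward
    (and vice versa for negative values).
    """
    total = ask_qty + bid_qty
    if total == 0:
        return 0.0  # empty book: no signal
    return (ask_qty - bid_qty) / total
```

Knowing even this single number 350us before anyone can trade against your orders is exactly the kind of head start Aisen describes.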

This philosophy is interesting to think about, and I can see how it might appeal to certain audiences. If the exchange has a better idea of the market price than its customers, it makes a sort of sense for it to use that information to ensure trades can only occur at that price. But, I think the idea is ultimately a misguided one.

Here are some problems with it:

1) The exchange, in an effort to prevent ever more adverse selection, may want to make its prediction more sophisticated. If a skewed order book results in ‘bad fills,’ then the same can be said for trades occurring after price moves on correlated instruments. If the price of PBR has just dropped by 1%, then buy orders for PBR.A are surely in danger of being “picked off.” Should the exchange try to prevent that? IEX may well have decided that they’ll always allow this kind of adverse selection. But keep in mind that trading signals do not work forever, especially when they are heavily used — so IEX will likely need to continually revise its prediction methods.

2) As the prediction methods get more complex, they are more liable to be wrong. In the example above, maybe an event occurs that affects the values of the two share classes in different ways. The exchange could erroneously prevent traders clever enough to understand that from executing, impeding price discovery.

3) Sophisticated traders like the PBR/PBR.A specialists could opt out of these order types, but how would they make an informed decision? Right now, we just know that the D-Peg uses a “proprietary assessment of relative quoting activity.” Could that “proprietary assessment” change over time? If so, are those changes announced? Matt Hurd has lamented the D-Peg’s undisclosed nature, and thinks it contradicts IEX’s mission of transparency. [1]

4) An exchange cannot increase the profitability of one group of traders without harming another. Now, maybe the only group harmed here are unsympathetic high-frequency traders who don’t deserve their profits. I’m skeptical of that. Who might some of those evil traders “picking off” quotes on IEX be? The motivation for Thor, and a critical part of the Flash Boys story, is the fading of liquidity when a trader submits large marketable orders. Some of the traders that the D-Peg will stymie may be people like the young Brad Katsuyama, investors or brokers who send liquidity-seeking orders simultaneously to many different exchanges. Say the NBBO for BAC is 10.00/10.01, and an investor wants to sell a large holding, so she sends sell orders to multiple exchanges, including IEX. One of those orders hits Nasdaq right when another gets to IEX, but IEX waits 350us, and, seeing the Nasdaq bid disappear, perhaps decides not to execute any resting D-Peg interest with the incoming order. Had the investor timed her sell orders differently (in a similar spirit to Thor, sending the IEX-bound order early), she’d have gotten a better fill rate. [2]

Another possibly harmed group could be non-D-Peg resting orders on IEX. One fascinating aspect of the IEX speedbump is that they can use it not only to prevent resting orders from executing at inopportune moments, but also to help traders remove liquidity at opportune moments! I was surprised to see that some order types can automatically trade against others upon a change in IEX’s view of the NBBO, through a process called “Book Recheck”. The mechanics of IEX seem complicated, so I could be wrong, but it looks to me like orders eligible to “recheck” the book may initiate a trade at a price determined by the realtime (*not* delayed) NBBO. [3] In contrast, cancel requests for the passive sides of these trades would be subject to the IEX speedbump. Here is a concerning, hypothetical example (sketched in code after the list):

A) The NBBO for a stock is 10.00/10.01
B) A trader has submitted an ordinary (non-peg) limit buy order at 10.00 resting on IEX
C) The NBB at 10.00 is completely executed, making the new NBBO 9.99/10.01
D) The trader, seeing that 10.00 is no longer a favorable price for her purposes, tries to cancel her buy order
E) Her order cancellation goes through the 350us speedbump
F) In the meantime, IEX sees that the new NBBO midpoint is 10.00, and decides that a D-Peg sell order (or midpoint peg) is now eligible to recheck the book.
G) The D-Peg order is matched with the bid at 10.00.
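
To make the ordering concrete, here’s the scenario above as a rough discrete-event sketch (all timings other than the 350us speedbump are invented, and the recheck behavior is my reading of the Form ATS, which could be wrong):

```python
SPEEDBUMP_US = 350  # IEX's structural delay

# Hypothetical timeline, in microseconds after the NBB at 10.00 disappears
# (making the new NBBO 9.99/10.01, with a midpoint of 10.00).
cancel_sent_us = 50   # trader tries to cancel her 10.00 bid resting on IEX
recheck_us = 60       # IEX sees the new midpoint in realtime and rechecks

cancel_effective_us = cancel_sent_us + SPEEDBUMP_US  # cancels eat the bump

if recheck_us < cancel_effective_us:
    print(f"t={recheck_us}us: Book Recheck matches a D-Peg sell "
          f"against the stale 10.00 bid")
    print(f"t={cancel_effective_us}us: the cancel finally takes effect, too late")
```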

The combination of algorithmic order types and selective use of the speedbump resulted in one trader getting an especially good fill, and another trader getting an especially bad fill. I guess if you’re not careful designing an exchange that supposedly prevents traders from picking each other off, you might do some picking-off yourself. [4]

5) Trading that occurs during price movements tends to be more informed, and preventing it could make markets less efficient. This would only be an issue if IEX captured significant market share, but it does sound like permitting trading only during periods of market stasis is part of IEX’s long-term vision. Referring to the D-Peg, Chief Strategy Officer Ronan Ryan says that “[a] core insight behind our market philosophy is that price changes are valuable opportunities, especially for those strategies fast enough to detect signals from price changes.” And also: “The economic benefit is that investors aren’t paying (or selling at) a worse price to a predatory strategy that is aware of quote changes before they are.”

It sounds like the idea is to stop an informed order from trading with an uninformed order, with the exchange deciding which is which. Naturally, the exchange is not an oracle and will misclassify some orders. But, if IEX becomes the dominant marketplace, and its classification is sufficiently good, informed orders will rarely get filled. You might think that wouldn’t happen, because IEX is only targeting ‘short-term’ alpha, but I’d venture a guess that a sizable chunk of order flow with long-term alpha will also have some short-term alpha inseparably folded into it. With information-bearing order flow being blocked, at a certain point, the exchange will be in the position of deciding when the imbalance between supply and demand warrants a price change. I happen to think that generally markets work better when people can freely trade with one another at prices of their choosing [5], and that a vision like this won’t get IEX into the same league as the major exchanges. But, market participants will be the judges of whether this model is a viable one.

6) Even if the exchange is pretty good at determining the market-clearing price and balancing supply and demand, it’s not clear it can do so more cheaply than algorithmic traders and human market participants. Right now, IEX is charging 9 cents per 100 shares traded, significantly more than estimates of typical HFT profit margins. [6]

7) IEX, by delaying executions, is effectively using market data from 350us in the future, piggy-backing on price discovery from other markets. As Aisen suggests, the speedbump probably accounts for the vast majority of their prediction algorithm’s edge. [7] This is different from complaints about dark pools’ use of visible order information for price discovery. Dark pools can only use order information from the present, and have to report trades to the public tape “as soon as practicable”. The speedbump might well allow IEX to cheaply discover pricing information from lit market data, potentially starting a new era of speedbumps, with each exchange wanting a longer delay than its competitors. Regulators may want to carefully think through the possible end results of this form of competition.

8) We don’t understand how this sort of market structure would hold up under stress. HFTs thoroughly simulate their algorithms; does IEX do the same? In a flash crash situation, IEX might stop D-Peg matching for an extended period, preventing those clients from getting filled at prices they may love, and isolating much-needed liquidity from the rest of the market. Additionally, if IEX is too effective at blocking informed order flow, some traders could panic when they repeatedly try and fail to get executed, damaging market stability.

Most of these issues aren’t especially important to overall market health as long as IEX’s market share stays below a few percent. And I think their market model is a perfectly fine one for a dark pool, although a little more disclosure wouldn’t hurt. The question is whether their target audience of fundamental traders will want to participate in this sort of market. I suspect ultimately that they won’t, though IEX might reach critical mass before participants really have time for reasoned debate.

We may have a glimpse of what “institutionalized fairness in the market” really means. To some, it may mean the relief of relying on a trustworthy institution to equitably determine the timing and pricing of their trades. To others, it may sound like a private company determining the market price via secret, non-competitive algorithms — unaccountably picking winners and losers. Institutional arbiters are part of civilized society, but ideally they’re transparent, receptive to criticism, and reformable when not working. Before we hand over the keys to IEX, we had better make sure that they meet these standards.

[1] Hurd’s complaint seems fair enough, but I’ll mention that competing exchanges aren’t always perfectly transparent either. For instance, Nasdaq Nordic’s documentation seemed to have some noteworthy details about reserve orders that weren’t available on Nasdaq’s US site.

[2] Brad Katsuyama said that “fading liquidity” is one of IEX’s “concerns regarding negative effects of structural inefficiencies” in his testimony to the US Senate:

[D]ue to the construct of the market system certain strategies are able to get out of the way of buy or sell interest as they are accessing the market in aggregate, which calls into question the fairness of the inefficiencies which allow or enable such behavior, and the potential distortion of price discovery and of supply and demand.

[3] Execution Tag “LastLiquidityInd” has a value for “Removed Liquidity on Recheck.” And the Form ATS says:

Upon a change to the Order Book, the NBBO, or as part of the processing of inbound messages, the System may test orders on one or both sides of its market against the contra side of the Order Book to determine if new executions can occur as a consequence of the change in the IEX Book or prevailing market conditions[.] Orders resting on the Order Book at the IEX determined Midpoint, may be eligible to trade against orders in the updated Order Book, which were ineligible, or did not satisfy the order’s conditions, when they were originally booked.

Does that mean the recheck uses the same non-delayed NBBO that IEX uses in the rest of their logic? I don’t know, but more disclosure from IEX seems like a good idea.

[4] Our hypothetical trader who had her buy order “scalped” may also have heard statements from IEX such as “You can not scalp trades, you can not scalp orders that are on IEX.”

[5] Within some reasonable limits of course. Limit-Up-Limit-Down price constraints seem to be appreciated by most participants, though even those aren’t completely free from criticism. Reg. NMS Order Protection also has some passionate opinions on both sides. There is always going to be some tension between letting traders determine prices unencumbered, and protecting them from ‘erroneous’ or ‘unfair’ transactions.

[6] Rosenblatt Securities, which has conducted surveys of HFTs, recently estimated that HFT profit margins in US Equities are around 5 cents per 100 shares. Tabb Group similarly sees shrinking profit margins.

[7] The D-Peg aside, even the simplest formula like the NBBO midpoint will have massive alpha with a 350us “head start.”
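
As a gut check on that claim, here’s a minimal sketch of how one might measure the head start’s raw value from a quote log (the file and its columns are hypothetical, and this crudely ignores spreads and fees):

```python
import numpy as np
import pandas as pd

# Hypothetical NBBO log, one row per update, sorted by microsecond timestamp.
q = pd.read_csv("nbbo.csv")  # columns: ts_us, bid, ask
ts = q["ts_us"].to_numpy()
mid = ((q["bid"] + q["ask"]) / 2.0).to_numpy()

# For each update, find the quote a 350us-delayed observer would still see.
stale_ix = np.searchsorted(ts, ts - 350, side="right") - 1
valid = stale_ix >= 0

# The average gap between the realtime mid and the stale mid is a crude
# measure of the per-share edge available to whoever sees quotes first.
gap = np.abs(mid[valid] - mid[stale_ix[valid]])
print(f"mean per-share edge of a 350us head start: {gap.mean():.6f}")
```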

Plain, Old Fraud in the Twitter-Hack Flash-Crash?

Two years ago, hackers took control of the Associated Press’s Twitter account and falsely tweeted that the president was injured due to explosions at the White House. Within 3 minutes, US stock indexes dropped about 1%, but recovered to their pre-tweet values after an additional 4 minutes.

I don’t like to idly speculate [1], but ever since then, I keep wondering if this hack might have been part of a massive manipulation scheme [2]. Even if it was just a prank, it seems like the hackers would have been foolish not to try capitalizing on the market movements that they caused. If they wanted to commit crimes, why not at least make some money?

It would be easy to profit off of such a scheme, and it seems conceivable that a savvy, well-funded group might have cleared an enormous sum. It’s also possible that this hypothetical group could have avoided attracting *too much* attention before wiring out the proceeds, perhaps by splitting up the trades across many accounts, without ever touching an American financial product or bank (markets worldwide were impacted by the tweet). The Syrian Electronic Army claimed responsibility for the hack. I obviously don’t know if that claim is true. But if it is, presumably that group could use the money.

Much like spoofing, the intentional spread of misinformation can harm all sorts of traders. There has been speculation that algorithmic traders were disproportionately deceived by the hack. I imagine that some were, but so were plenty of humans. Here’s Sal Arnuk of Themis Trading:

My initial reaction before I realized it was a fake tweet was the same horrible feeling I had when I worked at the top of the New York stock exchange when planes hit the World Trade Center.

And Arnuk also appears aware of the possibility that it was a profit-making scam:

When I realized it was a fake tweet, I was outraged and ashamed that the market was able to be manipulated so easily.

Regulators take the spreading of false rumors very seriously, as in today’s suit over false EDGAR filings. I am sure they have been looking into this more significant and complex incident. If and when they complete their investigation, don’t be surprised if it turns out to have been more than just vandalism.

[1] That means I’m about to. This post is highly speculative.

[2] An SEC information page briefly describes “pump-and-dump” scams:

“Pump-and-dump” schemes involve the touting of a company’s stock (typically small, so-called “microcap” companies) through false and misleading statements to the marketplace. These false claims could be made on social media such as Facebook and Twitter, as well as on bulletin boards and chat rooms.

These scams may also be called “short-and-distort” when the manipulator shorts a financial instrument before spreading negative rumors.

Are Data Centers that Host Exchanges Utilities?

Swedish regulators are seeking to fine Nasdaq OMX for alleged anti-competitive practices in the Nordic colocation business. This case is fairly limited in scope, but it raises some more general questions about colocation.

HFTs, execution algorithms, and smart order routers rely on exchange colocation to provide cost-effective and fair access to market centers. Locking competitors out of an established data center could easily destroy their businesses. The Swedish Competition Authority alleges that Nasdaq OMX did just that:

In 2009, a Stockholm-based multilateral trading platform called Burgundy was launched. Burgundy was formed by a number of Nordic banks. The Burgundy ownership structure gave the platform a large potential client base, especially with respect to brokers. However, the owners had limited possibilities of moving trade from Nasdaq to Burgundy as long as the trade on Burgundy was not sufficiently liquid to guarantee satisfactory order execution. In order to increase the liquidity on Burgundy, it was vital for Burgundy to get more trading participants…

In order to come into close physical proximity with the customers’ trading equipment in Lunda, Burgundy decided to move its matching engine to the data centre in Lunda…

Burgundy had finalised negotiations with Verizon, via their technology supplier Cinnober, and the parties had agreed that space would be leased in Lunda for Burgundy’s matching engine. When Nasdaq heard of this agreement, they contacted Verizon demanding to be the sole marketplace offering trading services in Nordic equities in Lunda. Nasdaq told Verizon that if Verizon allocated space to the Burgundy matching engine at their data centre in Lunda, Nasdaq would remove their own primary matching engine and their co-location service from that centre. Such an agreement with Burgundy/Cinnober could also have an impact on Verizon’s global collaboration with Nasdaq. Verizon accepted Nasdaq’s demands, and terminated the deal with Burgundy/Cinnober.

Latency Arbitrage And Traders’ Expenses

In the US, a lot of people are upset that equity exchanges are located all over New Jersey, instead of being in one building. Michael Lewis’s primary complaint about HFT is that it engages in “latency arbitrage” by sending orders between market centers ahead of anticipated institutional trades. I suspect that, in addition to those concerned about latency arbitrage, most HFTs would also be happy if important exchanges moved to one place. That would cut participants’ expenses significantly; there would be no need to host computers (and backups) at multiple locations and no need to procure expensive fiber and wireless links between locations. It would also allow HFTs to trade a given security on multiple exchanges from a single computer, dramatically simplifying risk checks and eliminating accidental self-trading.

If so many market participants want it, why hasn’t it happened? There could be several reasons:

  1. It’d be anti-competitive for exchanges to cooperate too much in bargaining with data center providers.
  2. Established exchanges want the best deal possible for their hosting, which means they need to consider bids from many competing providers.
  3. Under the Reg. NMS Order Protection Rule, there could be some benefit to exchanges if they have a structural delay with their competitors.
  4. Some exchanges see hosting, connectivity, and related services as important sources of revenue and want customers to procure those services from them. This especially includes exchanges which require colocated customers to lease rackspace only from the exchanges themselves – and also exchanges that operate their own data centers.
  5. Exchanges don’t want competitors in the same data center as them, so they use their considerable leverage with providers to keep them out.

The allegations against Nasdaq OMX are about #5 and seem to be about just one case. But here’s a potentially concerning statement by Andrew Ward in the FT (2010):

People familiar with Verizon said it would be unusual in the exchange industry for more than one operator to share the same data centre.

A New Model for Colocation

Reg. NMS requires exchanges to communicate with each other, and would work better if delays in that communication were kept to a minimum. Would it be reasonable for updated regulation to require market centers to provide one another with a rapidly updated view of their order books? That would necessitate exchange matching engines being physically close to one another, ideally in the same building. Requiring this would end most types of “latency-arbitrage,” whether real or perceived.

One solution could be for FINRA to solicit bids from providers under the assumption of a long-term contract, with extra space available for new exchanges and traders – keeping costs down for everybody. This proposal is in tension with the concern in #1, but I wonder if, because we have a single national market system, it’s reasonable for that system to negotiate as a single entity, and for the other concerns to override #1. Exchanges would no longer make much money from colocation services, but they could compensate for that by raising trading fees, which would arguably be healthier for markets anyway.

In my mind, what separates data centers from utilities is that, for most of their non-financial customers, there’s very little benefit to being in a certain building versus a nearby one. So long as nearby buildings have good connectivity to the local internet, customers have many options when procuring hosting services. Financial customers are much different. Once a major exchange is located in a certain building, traders, and sometimes competing exchanges, have no choice but to lease space there. That feels to me like a completely different dynamic, and possibly one that justifies data centers, in this specific industry, being classified as utilities.

Market Impact, Informational Efficiency, and the Value of Liquidity

A worry many people have about HFT is that it raises market impact costs for large institutional traders. Trading algorithms that explicitly anticipate order flow do exist, but does that mean an outright ban on HFT would reduce trading costs? It’s hard to know the answer to that question, but it may be helpful to consider how instruments receive added value from financial markets.

Ecosystems on secondary markets can appear to be completely zero-sum. This seems obvious when you only consider first-order effects: if a short-term trader buys low and sells high, then the money they made has to come from their counterparties somehow. By this logic, if one type of trader consistently makes money, then they are doing so directly at the expense of the rest of the market. But this view makes less sense when markets are sufficiently mature.

Well-developed markets with widespread participation offer traders liquidity and the expectation of continued liquidity. That liquidity is worth something. For long-term traders, a liquid marketplace makes it easy to find a partner to trade with. Imagine, for example, how much you would save when selling your home if you didn’t need a broker to find buyers. Liquidity also raises the value of assets themselves; in essence, something is worth more to you if you know that you can readily sell it later. How much more though? If a prospective buyer knew a home could be later sold without paying a 3% broker fee, then maybe they’d consider bidding 1% higher for it. In financial markets, the classic example of this phenomenon is the premium that on-the-run treasuries command in comparison to those that are off-the-run: a 30-year bond with only 10 years left until maturity generally trades at a lower price than a freshly issued 10-year bond, even though the two bonds should make payments that are effectively identical. The suspected reason for this discount is that the fresh 10-year is more actively traded and easier to sell if desired.
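
To get a feel for the magnitude, here’s a minimal sketch pricing two bonds with identical cash flows, where the stale bond carries a hypothetical 10 basis point illiquidity yield spread (all numbers invented):

```python
def bond_price(face: float, coupon_rate: float, ytm: float, years: int) -> float:
    """Present value of a bond paying annual coupons, at a flat yield."""
    coupon = face * coupon_rate
    pv_coupons = sum(coupon / (1 + ytm) ** t for t in range(1, years + 1))
    return pv_coupons + face / (1 + ytm) ** years

# Identical cash flows: 10 more years of 3% coupons plus principal.
on_the_run = bond_price(100, 0.03, 0.0300, 10)   # fresh, liquid 10-year
off_the_run = bond_price(100, 0.03, 0.0310, 10)  # old 30-year, +10bp for illiquidity

print(f"liquidity premium: {on_the_run - off_the_run:.2f} per 100 face")
```

Even a small liquidity spread translates into a visible price difference, because it applies to a decade of cash flows.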

Liquidity Provides Option-Value

One way to think of liquidity’s worth is to consider an analogy with put options. A put option offers its owner the ability to sell an asset in the future at a pre-determined price. Liquidity offers the owner of an asset the ability to sell in the future at the prevailing market price, whatever that may be. That difference is important – but even though the “option” associated with liquidity has an “exercise price” that floats with the market price, it is still worth something. Consider that investors pay their brokers for guaranteed executions at (or worse than) the market price. Or consider the existence of trade-at-settlement products, where one party will pay another in order to guarantee an execution at the day’s settlement price (a measure of market price). A liquid security thus has an embedded put option that should increase its value. [1] Options, of course, increase in value when volatility rises. The price of liquidity appears to do the same: on p.42 of this paper, for instance, you can see that the on-the-run premium tends to be higher during times of high volatility. [2] [3] [4]

Market Impact

The cost of liquidity is generally divided into two components: the bid-ask spread and market impact. The spread is a measure of how much market-makers need to be compensated for the risk of trading with a counterparty large enough to move the price. Market impact is the change in price that occurs after a trade. Since large traders often split their transactions into many small child orders, market impact means that the later portions of these orders are executed at less favorable prices. Very roughly speaking, small traders are expected to have the bid-ask spread as their primary transaction cost and large traders are expected to have market impact as theirs. One common view is that these large traders are “informed” and their favorable information is why market-makers lose money when trading with them. Here’s a widely cited paper by Fox, Glosten, and Rauterberg of Columbia:

There are three primary kinds of private information, which we will label, respectively, inside information, announcement information, and fundamental value information…

Whatever the source of an informed trader’s private information, the liquidity provider will be subject to adverse selection and lose money when it buys at the bid from informed sellers or sells at the offer to informed buyers. As long as there are enough uninformed traders willing to suffer the inevitable expected trading losses of always buying at the offer and selling at the bid, however, the liquidity provider can break even. There simply needs to be a large enough spread between the bid and offer that the losses accrued by transacting with informed traders are offset by the profits accrued from transacting with uninformed investors… [p22 of pdf]

[T]hese informed traders buy when their superior estimate of share value suggests that a stock is underpriced and sell when it indicates a stock is overpriced, their activities make share prices more accurate. [p34 of pdf]

This is a nice story, and I think it is largely reflective of the nature of markets. There’s good reason to think that large traders will often possess valuable information; if you’re going to trade large size, then you have the resources to spend on insightful analysis. And, conversely, if you have the resources to spend on insightful analysis, then you may as well trade large size. There’s some evidence that, in aggregate, managers tend to trade enough size so that their costs balance out their expected profit. [5] This is akin to the type of market efficiency that Cliff Asness and John Liew advance:

[I]t seems like whenever we have found instances of individuals or firms that seem to have something so special (you never really know for sure, of course), the more certain we are that they are on to something, the more likely it is that either they are not taking money or they take out so much in either compensation or fees that investors are left with what seems like a pretty normal expected rate of return. (Any abnormally wonderful rate of return for risk can be rendered normal or worse with a sufficiently high fee.)

Also, it is most certainly the case that with sloppy trading you can easily throw away any expected return premium — whatever its source — that might exist around these strategies by paying too much to execute them

Some Traders Have High Market Impact but Little Valuable Information

But, as with many nice stories, the story of “toxic traders” as “informed” traders seems incomplete. Are there any participants who are generally considered “uninformed”, trade large size, and have high costs from market impact? Most obvious are index funds. [6] [7] When stocks are expected to be incorporated into an index like the S&P 500, their prices rise in anticipation. And stock prices tend to fall when they are expected to be deleted from a popular index. Because these price moves occur before the actual changes in indexes are made, funds that strictly follow an index will trade shares only after prices change, costing them money. [8] Some portion of these anticipatory price moves reverts after the actual index changes are finalized (and large funds have completed their rebalancing) – which means that at least part of this expense is not in exchange for something of value, such as the added liquidity that index constituents enjoy. This effect is different from the “invisible scalp” that worries some commentators. That “scalp” concerns the deviation of a fund from its index benchmark, while we’re discussing underperformance of the actual index. Antti Petajisto has estimated that this underperformance costs investors in popular indexes at least a fifth of a percent per year, by no means a trivial sum. [9]

Price Inelasticity

If index investors really are uninformed, why should their trades (or anticipated trades) move the market price at all? The standard adverse selection model would say that less aggressively priced orders in the order book are from market makers requiring compensation for the risk of being run over by large, informed traders. If that model were complete, an informational change would be the only reason a price should ever move. This view is a vestige of the Efficient Market Hypothesis (EMH). In reality, prices move in response to changes in supply and demand, even if those changes aren’t related to any new information. Consider a really simple model, where traders each have their own point estimate for the “intrinsic value” of a stock. Unless everybody has the same estimate, when somebody buys more stock than traders with the lowest estimate are willing to sell, the market price will rise. This is another way to think of an order book. [10] The EMH idea that traders without any information should not be able to significantly move prices is like saying that there are trillions of dollars of undeployed capital backed by first-rate analysis just waiting for stocks to move a few basis points before trading them. That description doesn’t sound like reality, but who knows, perhaps if algorithms continue to take over our markets it could become true in the distant future.
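
A quick simulation makes the point. In this minimal sketch (all parameters invented), traders hold dispersed value estimates that form a “latent” ask book, and an uninformed buyer moves the marginal price simply by demanding more shares:

```python
import numpy as np

rng = np.random.default_rng(7)

# Each trader privately estimates "intrinsic value" and will sell one share
# at their estimate. Sorted, the estimates form a latent ask book.
estimates = np.sort(rng.normal(loc=100.0, scale=0.5, size=1_000))

def marginal_price(buy_qty: int) -> float:
    """Price of the last share bought after lifting buy_qty latent asks."""
    return estimates[buy_qty - 1]

for qty in (10, 100, 400):
    print(f"uninformed buy of {qty:>3} shares pays up to {marginal_price(qty):.2f}")
```

No trader learned anything, yet the price an additional buyer faces rises steadily with the quantity demanded — supply here is simply not perfectly elastic.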

Order Anticipation

Because market impact is such an important force in our markets, detecting institutional order flow could be very lucrative. Market makers moving their quotes out of the way of suspected order flow is order anticipation. Trading in the direction of suspected order flow is also order anticipation, though some label it “front-running”. In public discourse lately, there’s been some tendency to claim that order anticipation is fundamentally the domain of HFT. This is clearly wrong. The connection between stock index changes and anticipatory price moves has a time scale of days, and has been a measurable effect for decades. Similarly, there was suspicion recently that Pershing Square’s toehold purchase of Allergan shares suffered from order anticipation. Again, these price moves occurred over the scale of days, and as can be seen in an analysis by Betton, Eckbo, and Thorburn, there is nothing new about toehold purchases exhibiting large price impact. [11] It’s hard to know how much of this impact is due to anticipatory trading or just supply and demand. Separating these two effects is particularly difficult because herding among traders is common and arguably a form of order anticipation itself. Traders’ tendency to make similar decisions simultaneously (herding) has an important influence on market impact, as discussed (among other things) in a wonderful empirical study by Zarinelli, Treccani, Farmer, and Lillo. It’s also hard to dismiss order anticipation as unhealthy for markets. If liquidity providers were not able to price the risk of their counterparty being part of a takeover attempt, trading could become extremely disorderly.

Information Rents

Traders’ analyses help security prices on public capital markets reflect their real-world values. And market impact is arguably the primary mechanism that connects the information from research with asset prices. [12] At the same time, if market impact costs were too high (from order anticipation, lack of liquidity, or otherwise), nobody rational would bother to trade.

Joe Stiglitz is worried that order anticipation ‘steals’ information rents from research important to economic efficiency. He also worries that automated pattern recognition could start a wasteful arms race between computerized order anticipators and the fundamental traders trying to avoid them:

[T]he informed, knowing that there are those who are trying to extract information from observing (directly or indirectly) their actions, will go to great lengths to make it difficult for others to extract such information. But these actions to reduce information disclosure are costly. And, of course, these actions induce the flash traders to invest still more to figure out how to de-encrypt what has been encrypted.

If, as we have suggested, the process of encryption and de-encryption is socially wasteful — worse than a zero sum game — then competition among firms to be the best de-encryptor is also socially wasteful. Indeed, flash traders may have incentives to add noise to the market to disadvantage rivals, to make their de-encryption task more difficult. Recognizing that it is a zero sum game, one looks for strategies that disadvantage rivals and raise their costs. But of course, they are doing the same.

I have some sympathy with what he’s saying here, but perhaps his analogy of encryption offers some additional insight. Loosely speaking, encryption is cheap and decryption is very challenging. If there’s sufficient background noise in which to camouflage an “encryptor’s” signal, then probably there isn’t much a “decryptor” can do to find that signal. Obviously, we can conceive of legal or market structures that would tilt the balance of power completely towards the “decryption” camp – like if the law required traders to announce their intentions before taking action. But, for common market structures, I suspect that it isn’t too hard to be almost “maximally” encrypted, that is, to have one’s orders disguised to the point where investing more in encryption isn’t going to measurably affect their detectability. I’m not claiming that everybody is this careful; there are plenty of examples of sloppy trading. But my intuition is that currently the majority of market impact comes from price inelasticity, not order anticipation. That said, I think it’s worthwhile to consider whether there are palatable market structures that allow orders to be better concealed.

Market Manipulation

To quote (again) Matt Levine’s excellent description of manipulation:

Generally it is allowed, encouraged even, for a big market participant to hide its intentions. It is manipulation for a market participant to affirmatively mislead people about its intentions. The space between those two things is very narrow indeed.

When traders are allowed to affirmatively mislead the market about their intentions, then the space of possible “encryption” schemes becomes extremely large and complex. That complexity would make it hard for institutions to reach the “maximally encrypted” state we discussed. With manipulation allowed, Stiglitz’s vision of battles between encryptors and decryptors who “add noise to the market” would be very much “worse than a zero sum game.” Spoofing, which has new laws specifically targeting it, is probably a minor nuisance in comparison to other noise-adding manipulation. Momentum-ignition concerns me much more than spoofing, particularly the sort where very large quantities are traded in order to temporarily cause a supply (or demand) shock as well as fool order anticipators. Again, these order anticipators are not just liquidity-seeking HFTs, but also include execution algorithms, institutional “herding” [13], momentum strategies, human click-traders, and market makers forced to pull their quotes on one side while quoting more aggressively on the other side to exit their bleeding positions.

What does momentum-ignition look like? The allegations that Optiver “banged the close” on energy futures may be characteristic. Optiver appears to have been a market-maker on CME trade-at-settlement (TAS) products, used by participants desiring guaranteed trades at the settlement price. These participants, which may pay for this guarantee, include oil ETFs (Table 2, p45 of pdf). Optiver allegedly then “hedged” their position by trading very aggressively in a short period of time, distorting the settlement price with a program they called the “Hammer.” [14] This allowed them to effectively buy futures near the market price and sell them to their TAS counterparties at the distorted price. [15]
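
Stripped of the details, the alleged economics are simple. Here’s a toy version of the arithmetic (all quantities invented, not taken from the case):

```python
# Hypothetical "bang the close": a market-maker has sold 1,000 contracts via
# TAS (it will deliver at the settlement price), then buys 1,000 contracts
# aggressively just before settlement, pushing the settlement print upward.
tas_contracts = 1_000
avg_purchase_price = 100.05   # paid while "hammering" the market up
distorted_settle = 100.20     # vs. an undistorted settle near 100.00

profit_per_contract = distorted_settle - avg_purchase_price
print(f"profit: ${tas_contracts * profit_per_contract * 1_000:,.0f}")
# assumes a hypothetical $1,000-per-point contract multiplier
```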

This scheme is indicative of momentum-ignition’s required features: a high-impact mechanism to enter a position and a low-impact mechanism to exit it. [16] I suspect that momentum-ignition is especially easy to spot when it involves a market-making service to a counterparty (or client) which guarantees a benchmark. [17] Matt Levine described another such scheme involving auctions in the equity market. My guess is that when manipulation distorts an important benchmark like the closing price of stocks, or the settlement price of crude oil (gasoline and heating oil too), somebody will probably notice. [18]

Momentum-ignition was already prohibited in US equities before Dodd-Frank was law, as described in this 2010 SEC Concept Release (p17). But spoofing (which the Concept Release states is a type of momentum-ignition) is now punishable with multi-decade jail sentences. I won’t argue that market manipulators should go to jail for life, but I think it’s nice when the law treats offenses of similar character similarly.

Not everybody has a problem with momentum-ignition though. Here’s Izabella Kaminska on the “concentration” of trading to maximize market impact in the FX scandal [19]:

“Concentration” tactics are normal practice for the industry. It’s the equivalent of creating economies of scale and then choosing the moment to transact so that the depth of the market, and it’s likely impact on the price, is most beneficial to you. It’s called skillful execution.

In some sense, that’s what trading is about

The HFT Controversy

When people criticize HFT, I wonder if what they really dislike is manipulative trading. I don’t know of any reason to think that manipulation is more common among HFTs than among other traders. If it isn’t, then many criticisms about HFT requiring institutional traders to make wasteful technology investments are misdirected. Large institutions’ transaction costs pre-date automated trading and are a natural feature of markets. Computer programs, even when they trade aggressively, can cheaply contribute to liquidity, adding real value to assets. Instead of banishing computers from our markets, society would be better served if we spent more time evaluating the harmfulness of specific behaviors. With clear, consistent definitions of manipulation and good enforcement, perhaps we can convince the public that our markets are safe.

[1] This phenomenon might appear to not add any value to an asset when it’s a shortable security. If a buyer is willing to pay a bit extra because they think the asset will be easier to sell in the future due to the liquidity “put option,” then a short-seller should be willing to accept a lower price because of the similarly embedded “call option.” There’s no reason to think these effects are equal and opposite though; a security’s short interest is usually a small fraction of its overall float.

[2] You might be especially interested in making sure financial assets are liquid if you’re naturally a seller of them. Companies that make use of capital markets (through bond or public stock offerings) would certainly fall into this category. So do governments. Here’s FRBNY’s president, William Dudley, addressing the relative illiquidity of TIPS bonds in 2009:

[I]t may make sense to structure the TIPS program in a way that would help reduce the illiquidity premium associated with TIPS relative to on-the-run nominal Treasuries. Some of the current illiquidity premium is likely to shrink as financial markets stabilize. However, further improvements may require a change in either the structure of the TIPS program or the secondary market trading environment.

[3] Francis Longstaff estimated an upper-bound on the value of liquidity by comparing it to a lookback option. An omniscient investor can sell a holding at its peak, but only if its market is completely liquid. If the market were completely illiquid, then the investor couldn’t sell at all. So a lookback option, which would offer the investor the right to sell at the absolute peak in a given time period, gives an upper-bound on liquidity’s value. In practice, of course, there are caveats:

  1. Returns aren’t normally-distributed.
  2. An investor may not have a pre-specified timeframe. And this lookback option would have infinite value if it were over an infinite time interval – not a very tight upper bound. On a side note, bizarrely, options like this actually exist in the real world.
  3. An omniscient investor might choose to sell a holding below its peak for other reasons (like taxes, personal reasons, or because when you know everything, there’s probably a better way to deploy capital).
  4. Omniscience doesn’t exist. And if it did, I feel like markets wouldn’t make any sense.

If you’re interested in methods to price the value of liquidity, here’s a review by Aswath Damodaran.
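
Longstaff’s bound is also easy to approximate numerically. Here’s a minimal Monte Carlo sketch (invented parameters) comparing an omniscient seller who exits at the path’s peak with one who simply holds to the end:

```python
import numpy as np

rng = np.random.default_rng(0)

# Geometric Brownian motion: a hypothetical stock over a 1-year horizon.
s0, sigma, steps, paths = 100.0, 0.30, 252, 20_000
dt = 1.0 / steps

z = rng.standard_normal((paths, steps))
log_paths = np.cumsum(-0.5 * sigma**2 * dt + sigma * np.sqrt(dt) * z, axis=1)
prices = s0 * np.exp(log_paths)

# Selling at the peak rather than at the end is only possible with perfect
# liquidity (and omniscience), so the average gap bounds liquidity's value.
payoff = prices.max(axis=1) - prices[:, -1]
print(f"lookback upper bound: {payoff.mean():.2f} ({payoff.mean() / s0:.1%} of value)")
```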

[4] Proprietary traders as a whole make much more money when volatility is high. Those increased profits could be partly due to the higher value of liquidity and a commensurate rise in demand for it. The entire financial sector is in some sense engaged in the business of selling option-value. One view of banks is that they make money via shorting liquidity: by holding assets (particularly fixed income) to maturity and riding out fluctuations in market value, they are rewarded with a small profit. And one argument against strict mark-to-market accounting is that it doesn’t properly encapsulate this aspect of banking. Another perspective is that when banks have shorted “too much” liquidity and the price of that liquidity has risen (i.e. volatility has gone up and the “option-value” component of liquidity is expensive), they tend to rely on governments and central banks to sell them additional liquidity cheaply in order to survive. If you consider this government backstop to be an unfair subsidy, then asking banks to mark their balance sheets to market makes more sense. I always find this connection between the two “types” of liquidity (the kind provided by central banks and the kind traders use every day) both self-evident and surprising.

[5] This is the so-called fair pricing condition. See for instance this analysis by Waelbroeck and Gomes. They used a dataset of institutional transactions with (most) “cash flow” trades separately marked. “Cash flow” trades are due to client inflows and outflows which are (probably) not reflective of fund managers’ decisions. When they exclude these “cash flow” trades, they find that, on average, returns are quite close to transaction costs for different portfolio managers (figure 4, p23). They also find costs and returns are roughly in balance for transactions of different sizes (figures 12a and 12b, p41).

[6] At least, it’s a common view that index investors piggy-back on the pricing provided by active traders and do not have any valuable information. Maybe this is an over-simplification though; some index investors could buy when they predict future index buying. In that case, would they mind overpaying slightly for something that has psychological value (like index inclusion)?

Here’s Slack CEO Stewart Butterfield on investors being potentially willing to pay a premium to bolster the perceptions surrounding their investment:

You have to choose some numbers… One billion is better than $800 million because it’s the psychological threshold for potential customers, employees, and the press.

And:

[I]t increases the value of our stock and can allow potential employees to take our offers, and it reinforces the perception for our larger customers that we’ll be around for the long haul.

[7] The Waelbroeck and Gomes analysis in [5] gives us another example of “uninformed” traders who still pay market impact costs, if you consider “cash flow” transactions to lack informational value:

The peak impact of cash flows is statistically indistinguishable from that of other metaorders and both are indistinguishable from 1.5 times the estimated shortfall.

Their analysis also finds that “cash flow” trades’ market impact has a tendency to revert to the pre-trade price (or even past it). So these transactions have market impact expenses, but also appear to have zero (or negative) long-term alpha.

[8] Some funds are more careful and will deviate from indexes a bit in order to avoid paying some of this impact expense.

[9] I suppose this hidden cost, if real, is one way that index investors are charged by the market for piggy-backing on others’ price discovery.

[10] Many of these “orders” will not be on any exchange’s order book. But they’re somewhere in traders’ minds, which makes them like hidden orders (some people call them “latent orders”).

[11] They analyzed thousands of public control bids from 1980 to 2002, a few hundred of which were accompanied by toehold purchases. The charts on p28 show significant pre-announcement price movement.

[12] Of course the price of a security can jump discontinuously in response to new information, on little to no volume. But security prices move continuously during the day, and presumably these price changes have informational value. And, when prices change discontinuously at the open, very often they do so in an auction with heavy volume. I’d imagine that, whenever a price change is accompanied by substantial trading, market impact plays an important role in price discovery. This could explain price volatility appearing lower across the weekend than during trading hours.

[13] Momentum-ignition could profit from “herding” in ways like a manipulator inducing panic, triggering stop orders, or causing forced liquidations.

[14] In reference to a similar case in the FX market, Matt Levine predicted:

Of course there will be emails – there are always emails

One issue with enforcing a ban on momentum ignition (or other market manipulation) is that, in order to prove it, knowledge of a trader’s intent is required. But I guess there are always going to be manipulators who describe their schemes over email, or just call their strategy to bang the close “the Hammer.”

[15] It appears Optiver did a bit more than that: allegedly, another division used their foreknowledge of the price-hammering in other ways. From an internal company email (p54):

Nick has made a tidy profit on his trading, close to $100k I expect. But I consider the way In which he did it to be both deceitful, and reckless…

They will tell you of course that they have noticed (after we told them), that when someone sells TAS’s, the future often go down during the settlement…

Since our colleagues in Amsterdam know that we are going to do the dirty work, they simply trade their futures before hand, and make a big profit on them.

[16] The possibility that markets could allow asymmetric, profitable impact is counterintuitive. Writing about the FX rigging case, Matt Levine explains why this is confusing:

Let’s say that the chat room traders are selling euros to customers at the fix, so they want a high fix. They want to buy a lot of euros in the few minutes right before the fix, to push the price up…

Banks could further their manipulation by buying from outside banks, selling to outside banks, or doing neither. This should make you suspicious. All of those things can’t work equally well! If buying from other banks would push the price up, or down, then selling to them should push the price down, or up. The fact that the chat room traders sometimes did one, and sometimes the other, means that they hadn’t found a reliable cheat, a way to take the risk out of their trading. It means in some sense that their manipulation didn’t work. I mean, it worked fine. But there’s a reason you “cnt teach that.” It makes no sense!

Potentially, what these alleged manipulators did was increase their own position when they suspected that their counterparty bank would hedge in a low-impact fashion. When that was the case, the alleged manipulators could do their high-impact trading while their counterparty, on the opposite side, would do low-impact trading. When they suspected a rival bank would trade in a high-impact fashion, they might do the opposite. I’m not saying they were successful at this, but our markets may well allow a high-impact entrance and a low-impact exit, in contradiction with a sort of efficiency. Jim Gatheral calls this type of market efficiency, which renders manipulation impossible, a “no-dynamic-arbitrage principle.”
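
To see how a high-impact entrance and a low-impact exit might coexist, here’s a toy propagator-style model (my construction, purely illustrative): if impact is concave in trade size and persistent, nine small buys push the price up more than one big sell pushes it back, and the round trip profits:

```python
import math

def impact(q: float) -> float:
    """Concave (square-root) price impact of a signed trade, fully persistent."""
    return math.copysign(math.sqrt(abs(q)), q)

p0 = 100.0
trades = [1.0] * 9 + [-9.0]  # nine small buys, then one big sell: a round trip

cash, pressure = 0.0, 0.0    # pressure: cumulative impact on the price
for q in trades:
    pressure += impact(q)        # each trade pays its own impact as well
    cash -= q * (p0 + pressure)  # buys spend cash, sells earn it

print(f"round-trip profit: {cash:.2f}")  # positive: the "manipulation" pays
```

Real markets add impact decay and competitors trading against the distortion, so this is a caricature; Gatheral’s result is that only certain combinations of impact shape and decay rule such round trips out.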

It would be great if we could design a market where this principle is reality. But I kind of suspect it’s impossible. Maybe the closest we can get is to de-anonymize market data long after the sensitive information it contains is stale. That would allow private-sector analysts to uncover manipulation, as well as offer victims the opportunity to see how it damages them.

[17] Finding a low-impact exit is probably also much easier for market-makers guaranteeing a benchmark price.

[18] It appears that NYMEX grew concerned about Optiver’s activity after just a few weeks.

[19] Different markets have different norms. I don’t know if pushing around the price is acceptable in FX markets, but it appears that regulators may want harsh penalties for any guilty banks.

Blind Analysis, Inefficient Markets, and UK Polling Accuracy

Everybody knows that market prices do not perfectly reflect all available information. This is partly due to human biases, one reason that algorithmic trading has become so successful.

The polls published prior to yesterday’s UK General Election generally predicted a hung parliament. Those polls were obviously very wrong. Why? One factor seems to be the influence of human bias on data analysis. Here is Damian Lyons Lowe, CEO of pollster Survation, which nearly published a poll that would have closely predicted the election results:

the results seemed so “out of line” with all the polling conducted by ourselves and our peers – what poll commentators would term an “outlier” – that I “chickened out” of publishing the figures – something I’m sure I’ll always regret

So, at least one pollster held back results that would have completely changed the prediction landscape. Others may have done the same, or may have unconsciously manipulated their analysis so that the final results were closer to what everybody else had been reporting. When the first exit poll was announced, which closely agreed with Survation’s never-released data, markets reacted strongly. GBP/USD immediately rose over 1%, indicating that the trading community found the results just as surprising as the political experts did.

This is a fantastic opportunity to learn about blind analysis. Blind analysis is a technique, primarily used in particle physics, to reduce the influence of human biases on reported results of an experiment. Physicists learned the hard way that human psychology is an important source of systematic error. See, for example, how some experimental values evolved over time, courtesy of the Particle Data Group. Some of these plots may be reminiscent of markets, which at times have very poorly predicted both future prices and their uncertainty.

Joel Heinrich has written an approachable and excellent summary of the motivations and methods of blind analysis. Heinrich mentions an example that should sound familiar given the biased polling discussed above. A physical quantity appeared, in its reported measurements, to become less uncertain over time, eventually being reported about 8 sigma away from its now-established value:

It is quite likely that the experimenters during this period paid too much attention to the level of agreement between their new result and the measurements of the recent past. If one judges whether a result is ready for publication by its agreement with the current world average, such disasters can happen…
[N]aturally when one sees an 8 [sigma] disagreement between one’s current result and the world average, one goes back and checks everything very carefully. On the other hand, when the new number is consistent with the old, there is a tendency to declare victory and move on. This asymmetry results in a statistical bias. A better procedure would be to list and schedule all the checks in advance of knowing the answer, and carry them out in either case

The techniques of blind analysis can be readily applied outside of physics. The general principles are to have strict procedures written ahead of time, and to hide whatever possible from the individuals analyzing the data. This may mean hiding the final result, adding in some kind of constant or noise to the data which can later be subtracted, or allowing the analysts to work with only a subset of unblinded data. I strongly encourage everybody who works with data to read Heinrich’s review or others. It is immensely frustrating to see human knowledge obfuscated for such a silly, avoidable reason. Blind analysis could dramatically improve the quality of all data-oriented disciplines, and I hope that it becomes more widespread.
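
As a concrete example of the hidden-constant technique, here’s a minimal sketch (my own toy scheme, not from Heinrich’s paper) where analysts work with offset data and only remove the offset after the procedure is frozen:

```python
import numpy as np

rng = np.random.default_rng()  # deliberately unseeded: nobody knows the offset

class BlindedData:
    """Hide the headline result behind a random additive offset."""

    def __init__(self, values: np.ndarray, max_offset: float):
        self._offset = rng.uniform(-max_offset, max_offset)
        self.blinded = values + self._offset  # analysts only ever see this

    def unblind(self, estimate: float) -> float:
        """Call exactly once, after all cuts and checks are scheduled."""
        return estimate - self._offset

raw = rng.normal(5.0, 1.0, size=1_000)  # pretend measurements
data = BlindedData(raw, max_offset=10.0)
blinded_mean = float(data.blinded.mean())  # tune the analysis on this
print("final unblinded estimate:", data.unblind(blinded_mean))
```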

The Flash Crash, Systems Failures, and Mission-Critical Engineering

The Flash Crash was caused by a complex interaction between different IT systems. Some of those systems failed in an ungraceful manner, and other systems could not cope with these failures. What I wish is that, instead of wringing our hands about how computers now manage our trading infrastructure, we took the time to understand that computers manage systems that are far more mission-critical than our financial markets.

I have worked extensively on trading systems and appreciate the importance of stable markets. I think that any software connecting to markets should have multiple layers of safety checks which are as independent as possible. In fact, I would argue for much greater risk checks than we have now. Algorithmic trading companies understand well what can happen if their safety measures fail. But, the worst thing that can happen to HFTs and exchanges is generally bankruptcy. [1]

It’s true that ordinary investors can get caught in IT failures and lose money, though hopefully erroneous trades can be unwound in those instances. It’s also true that traders illegally gaming each other and manipulating markets can defraud innocent investors. Given the spectacular systems failure of the Flash Crash, it’s worth reflecting on the risks associated with shoddy automation in other areas of life. I can’t even begin to detail all of the serious IT failures we’ve had in the last 50 years, but here are a few (in no particular order) that make the Flash Crash look like a triviality:

  1.  The Northeast Blackout of 2003. Complex interactions between multiple events, including a software failure, interrupted electricity delivery for over 50 million people, contributing to many deaths.
  2.  The Therac-25 radiotherapy device. Buggy software with poor safeguards allegedly caused at least 6 cancer patients to receive fatal or near-fatal doses of radiation.
  3.  Problems with software that controls airbag deployment in cars, including certain Cadillacs.
  4.  Problems with electronic voting in the 2014 Belgian elections. Only about 2000 voters seem to have been affected, though I’d guess that even fewer traders were seriously affected by the Flash Crash.
  5.  The infamous Toyota “Unintended Acceleration” issue, which may have been caused by faulty software according to some experts. The issue has allegedly caused dozens of deaths.
  6.  A flaw in the software of a Soviet satellite reportedly triggered alarms that the US had launched 5 ICBMs. Human operators, suspecting a false alarm, fortunately waited for radar confirmation of the launches before reacting.


This is far from a complete list, and often we don’t even know if faulty IT contributed to a fatal accident. I think that many financial professionals suffer from déformation professionnelle. [2] The reality is that, despite the hullabaloo over the Flash Crash, it had few serious consequences in the grand scheme of things.

The Flash Crash very briefly erased about $1T of market value. It also triggered a media firestorm that may have convinced some retail investors to hold cash and miss the post-crisis stock market recovery. For those active traders who lost money that day, there’s no doubt that the Flash Crash was a big deal. But the market recovered within minutes, and many of the accidental transactions at absurd prices were cancelled. A Flash Crash is also much less likely today, at least in American equity markets, where circuit breakers now halt trading when it becomes sufficiently volatile.

As computerized systems rightfully take on more responsibility, I hope we can learn some lessons from the Flash Crash. Unlike exchanges, designers of life-critical systems don’t have the luxury of shutting down for a few minutes when problems are detected. It’s not great that financial markets’ electronic infrastructure couldn’t handle a little stress, but it’s lucky that a failure like this attracted media attention without resulting in any loss of life. The anniversary of the Flash Crash is a reminder that all critical systems require intelligent regulation and, most importantly, is an opportunity to thank the engineers who keep us safe.


[1] It might be worse for an HFT to have some of their traders violate compliance rules and commit crimes. But I wouldn’t really call that an IT glitch, though with compliance being increasingly automated, maybe one day that’ll change.

[2] I have looked for an English equivalent to this term, and the closest I’ve found is “occupational psychosis,” which sounds a bit more extreme than I’d like.

Market Data Patterns, Order Anticipation, and An Example Trading Strategy

I recently discussed the inseparability between predicting an instrument’s price and anticipating its order flow. But, sometimes it’s possible to directly predict order flow from the signatures of execution algorithms or even certain exchange order types. In this post, we’ll see examples of common patterns in market data that have an association with future orders. I’ll also outline a simple trading strategy with one of these patterns as its primary feature.

Reserve Orders

A reserve order (also called an “iceberg order” or “MaxShow order”) is a resting order that is programmed to have its quantity refilled in some fashion after it executes. I believe the existence of reserve orders is motivated by participants’ desire to hide the full extent of their trading intentions. A reserve order, for instance, might display 200 shares at any one time when its full size is many times that. Here’s a simplified illustration of how they work:

  1.  Trader submits a reserve order to buy 5000 shares of AAPL at $100, setting the order to display 200 shares at any one time.
  2.  Market data shows a new bid at $100 for 200 shares.
  3.  Someone else trades with that $100 bid.
  4.  A new 200-share bid at $100 appears in the market data.
  5.  Steps 3 and 4 repeat until the full 5000 shares are executed, the price moves away, or the trader cancels the order.

See p56-60 of this document from Nasdaq Nordic for more detailed examples.
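Here’s a toy Python model of the refill mechanics in the steps above. It’s purely illustrative; real matching engines handle priority, odd lots, and randomization in ways this sketch ignores:

```python
class IcebergOrder:
    """Toy model of a reserve/iceberg order, for illustration only."""

    def __init__(self, total_quantity, display_size):
        self.display_size = display_size
        self.hidden = total_quantity - display_size
        self.displayed = display_size

    def execute(self, quantity):
        """Fill against the displayed portion, then refill from hidden size."""
        fill = min(quantity, self.displayed)
        self.displayed -= fill
        if self.displayed == 0 and self.hidden > 0:
            refill = min(self.hidden, self.display_size)
            self.hidden -= refill
            self.displayed = refill   # a new displayed order, with new time priority
        return fill

# 5000 shares total, displaying 200 at a time (steps 1-5 above).
order = IcebergOrder(total_quantity=5000, display_size=200)
filled = 0
while order.displayed > 0:
    filled += order.execute(200)
print(filled)   # 5000
```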

You might guess that reserve orders, because they tend to be large and worth hiding, have substantial alpha. You can probably also see how they might leave a fairly obvious signature in market data. So, let’s check. Here is a chart of the performance of a few types of refilled orders.

[Chart: inet_sfa_reserve_midpt_toxic_800]

Top panel: Average profit or loss per share vs. distance in time from the market event. For events marked “Order-Add,” the line follows the market price trajectory relative to the order’s price and time of appearance. Events marked “Execution” follow the price trajectory of any of these orders that later trade, relative to their execution price and time. The “market price” here is the Nasdaq midpoint, not the last-traded price as in previous posts, and is weighted by order-add/execution quantity respectively. Viewing the midpoint makes it easy to see trades going through the price level and the snap-back in the market price that follows. The chart covers 6 days in August and excludes fees and rebates. Bottom panel: Shares traded on Nasdaq vs. time from event (including volume from any part of these events).
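For readers curious about the methodology, the quantity-weighted markout in these charts can be sketched roughly as follows. The data format and the `midpoints` helper are hypothetical:

```python
import collections

def average_markouts(events, midpoints, offsets_us):
    """Average signed PnL per share at fixed time offsets from each event.

    events:    list of (timestamp_us, side, price, quantity); side is +1 for
               a buy/bid event, -1 for a sell/ask event (hypothetical format).
    midpoints: function mapping a timestamp to the prevailing midpoint.
    """
    totals = collections.defaultdict(float)
    quantities = collections.defaultdict(float)
    for ts, side, price, qty in events:
        for dt in offsets_us:
            # Positive markout: the market moved in the event's favor.
            pnl = side * (midpoints(ts + dt) - price)
            totals[dt] += pnl * qty        # quantity-weighted, as in the chart
            quantities[dt] += qty
    return {dt: totals[dt] / quantities[dt] for dt in offsets_us}
```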

Orders that are the result of a refill definitely have noticeable alpha, even when they execute. That’s somewhat surprising, given that a large part of the market ought to know these orders were potential icebergs. You might have expected that after the first refill, only very confident participants would trade with them, and that after several refills traders would grow increasingly wary. About 67% of these displayed shares are eventually filled, a high proportion by any standard. Generally speaking, a high fill rate is associated with greater market impact; though, of course, we don’t have any information about the hidden portions of these orders that never execute.


Most importantly, notice how the market midpoint tends to snap back after a trade, more or less confirming that many refilled orders continue to be refilled after subsequent trades. Also, note that the time-scale of the snap-back falls fairly tightly in the 200-400us range, significantly greater than the usual roundtrip time for an HFT trading on Nasdaq. My guess is that Nasdaq handles reserve orders on computers external to its core matching engines. That would explain both this latency and reserve orders not being available on the high-speed OUCH order entry interface. [1] Of course, these refills could also come from execution algorithms operated independently of Nasdaq, but I suspect that many are, in fact, reserve orders. [2]
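To give a flavor of how such a refill signature might be detected, here’s a rough Python sketch. The event format is hypothetical, and real detection would need to handle order IDs, partial fills, and queue priority far more carefully:

```python
def detect_refills(events, window_us=(150, 500)):
    """Flag order-adds that look like reserve-order refills (a rough sketch).

    events: time-ordered market data events as dicts with keys
            'type' ("execute" or "add"), 'ts' (microseconds), 'side',
            'price', and for executions 'displayed_remaining'
            (a hypothetical normalized format).
    An add is flagged if it matches the side and price of an order that
    was fully executed within the preceding time window.
    """
    recent_exhaustions = []   # (ts, side, price) of fully-filled displayed orders
    suspected = []
    lo, hi = window_us
    for ev in events:
        if ev["type"] == "execute" and ev.get("displayed_remaining", 0) == 0:
            recent_exhaustions.append((ev["ts"], ev["side"], ev["price"]))
        elif ev["type"] == "add":
            for ts, side, price in recent_exhaustions:
                if (side, price) == (ev["side"], ev["price"]) and lo <= ev["ts"] - ts <= hi:
                    suspected.append(ev)
                    break
        # Drop exhaustions that are too old to matter.
        recent_exhaustions = [e for e in recent_exhaustions
                              if ev["ts"] - e[0] <= hi]
    return suspected
```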

Anticipating Order Flow


If that’s the case, this is a great example of exchange order handling being potentially open to what Michael Lewis might call front-running. [3] If it takes the exchange ~250us to refill a reserve order, what happens if somebody submits an order of their own in anticipation of that refill? Exchange latency varies by situation, and I haven’t tested it myself in circumstances like these, but I’d bet that an HFT could have an order of their own live before a hypothesized reserve order is refilled by Nasdaq. [4] The HFT could even delete their order after a few hundred microseconds if the reserve order does not refill behind it. I’m *not* saying that this is happening, but if it were, the hypothetical HFT would run fairly limited risk of being filled in situations where it did not have an order behind it in the queue, which is generally considered desirable.

My feeling is that this sort of trading by an HFT would be perfectly legal, even if it feels a little shady. HFTs’ compliance departments might forbid flashing and quickly deleting an order if nobody joins its price level, partly because that kind of activity may indicate a strategy is hoping to elicit a reaction from the market with its orders. That’s not what we’re discussing here, but compliance departments are hopefully very watchful of any trading that resembles market manipulation.

But what if we simulate the next simplest thing? We’re not trying to do anything too complicated here, so let’s just pick the highest-performing type of refill event for our strategy to mimic. Refills that improve the BBO at the time of adding perform better than those that tie the BBO (for the latter group, here’s the same analysis as above). Refilling orders that display more than 101 shares also perform significantly better. Our strategy will essentially copy these orders. Here’s the performance of simulated trades that result from adding a 100-share order *after* one of these refills occurs, with the simulation keeping its orders live until it sees the suspected reserve orders stop refilling:

[Chart: inet_sfa_reserve_sim_>101add_bboImp_800]

Top panel: Average market-priced profit or loss per share vs distance in time from trades simulated by the above-described strategy. The “market price” here is a measure of the last-traded price, which is why you don’t see a snap-back in the price after a trade. Chart is over 8 days in August (different from the days in the above chart). Bottom panel: Shares traded on Nasdaq vs time from trade (excluding simulated trade).

Even this simple trading strategy appears to be profitable. The above chart excludes fees and the roughly 0.30 cents/share Nasdaq rebate, which raises the profitability significantly. All told, at least in simulation, this would make nearly $20k/day. And that’s just by sending 100-share orders to Nasdaq, with Nasdaq refills as the only signal. Other exchanges behave very similarly and are amenable to the same kind of strategy.
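For concreteness, the copying logic might look something like the following sketch. The callbacks and event fields are hypothetical stand-ins for a real trading system’s interfaces, and this omits all of the risk, queueing, and fee handling a production strategy would need:

```python
def on_suspected_refill(event, live_orders, send_order):
    """React to a suspected reserve-order refill (hypothetical interfaces).

    Mimic only the better-performing refills described above: those that
    improved the BBO when added and that display more than 100 shares.
    """
    if event["improved_bbo"] and event["displayed"] > 100:
        order_id = send_order(side=event["side"], price=event["price"], quantity=100)
        live_orders[order_id] = (event["side"], event["price"])

def on_refills_stopped(side, price, live_orders, cancel_order):
    """Once the suspected iceberg stops refilling, pull our copy-cat orders."""
    for order_id, (s, p) in list(live_orders.items()):
        if (s, p) == (side, price):
            cancel_order(order_id)
            del live_orders[order_id]
```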

Automated Pattern Detection


In the simulation, after the once-refilled order in front of the simulated order executes, another refill generally comes in behind the simulated order. So, this simulated strategy is still pretty explicitly anticipating order flow. Again, I believe this kind of trading activity is legal (not that I’m a lawyer). But it does make me slightly uncomfortable, and neither I nor my company have ever used this signal or others like it in live trading. But, it’s easy to imagine this market data pattern being used in trading strategies without the operator being aware of it. [5] Quants could create a model that automatically searches for patterns in market data and combines them into a prediction statistically, without actually looking at what those patterns are. [6] This particular pattern occurs quite often, so I think that many pattern-detection methods would find it easily.
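A crude version of such blind pattern mining might look like this sketch, which scores short event-sequence “n-grams” by their average markout without anyone inspecting what the patterns mean. The event encoding and thresholds are hypothetical:

```python
import collections

def mine_sequence_features(events, markout, ngram=3, min_count=100):
    """Blindly score short event-sequence patterns by average markout.

    events:  time-ordered tokens summarizing market data updates, e.g.
             "add/bid/improves_bbo" (the encoding here is hypothetical).
    markout: function mapping an event index to the signed price move
             that follows it.
    The top-scoring patterns could feed a statistical model without a
    human ever examining what they correspond to mechanically.
    """
    totals = collections.defaultdict(float)
    counts = collections.defaultdict(int)
    for i in range(len(events) - ngram + 1):
        pattern = tuple(events[i:i + ngram])
        totals[pattern] += markout(i + ngram - 1)
        counts[pattern] += 1
    scores = {p: totals[p] / counts[p]
              for p in counts if counts[p] >= min_count}
    return sorted(scores.items(), key=lambda kv: abs(kv[1]), reverse=True)
```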

I’ll also mention that sometimes certain patterns in market data do not obviously resemble exchange order types, but still have similar predictive power to our example of reserve orders. Below, we can see the anomalously high performance of orders that are added shortly after the appearance of another order on the opposite side.

[Chart: inet_sfa_dime-oppSideDime_midpt_toxic_(trimmed)800]

Top panel: Average profit or loss per share vs distance in time from market event. The “market price” here is again the Nasdaq midpoint, not the last-traded price. Chart is over 6 days in August and excludes fees and rebates. Bottom panel: Shares traded on Nasdaq vs time from event (including volume from any part of these events).

Orders that are part of event sequences of this type are, as you can see, pretty high-alpha. There could be all sorts of reasons for that, including:

  1. Execution algorithms that make their quotes more aggressive when they see the spread tighten
  2. Reserve orders like those above, refilling after both an incoming trade and a new order on the opposite side (other algos may add that new order in response to the trade)
  3. Exchange order types that I’m not familiar with
  4. Or, in light of recent events, spoofing. (Hopefully not, and I doubt it). [Edit: For an example of how spoofing may induce somebody into trading with a reserve order on the opposite side, see this CME disciplinary notice]

Ethics of Using Signatures from Algorithmic Trading Tools

Would using this particular signal, which is not directly linked to an exchange order type, in a trading strategy be unethical? Neither I nor my company have used it (or anything like it), but I don’t see a problem with doing so. If somebody wants to use an execution algorithm that leaves a blaring signature in market data, it’s hard to feel too sympathetic.

Ultimately, it is the traders submitting these orders who are accountable for their efficacy. In an ideal world, traders responsible for substantial volume carefully analyze their execution methods and choose the best one for a given situation. Order types such as icebergs are algorithmic tools designed for sophisticated participants. It’s worth noting that many of the most vocal opponents of order-anticipation strategies also hold a Darwinist view of markets and feel that algorithmic traders should not be protected from spoofers. That view is in tension with objecting to order-anticipation strategies that predict the behavior of other algorithms. The market data patterns above, even if partly from reserve orders, are signatures of algorithmic trading. If you don’t mind algorithmic traders being gamed by spoofers, you ought not to mind their order flow being anticipated using the information they blast out to the market.

Exchange Improvements

That said, I think it might be helpful if exchanges were a little more transparent about their order handling. If, for example, order types that mimic execution algorithms have high latency because they run on separate computers from the matching engine, then maybe the exchange ought to publish latency statistics so that users can make a more educated choice. Similarly, if an exchange could add a trivial feature to help obscure reserve orders, why shouldn’t it? Nasdaq, for example, offers functionality for reserve orders to be refilled with a random quantity, presumably for obfuscation. Why not offer refills at a random time interval as well? If refills didn’t occur in a tight 200-400us band, they would be much harder to detect. Of course, many traders could implement that functionality in their own iceberg algorithms if they wanted it. But it’s worth mentioning that some market data patterns that are likely from traders’ algorithms (rather than exchanges’) also seem to be due to non-random timers, so exchanges are certainly not outside the mainstream here.
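The randomized-timing idea is trivial to sketch; something like the following, with an invented jitter window, would already smear out the tight 200-400us signature:

```python
import random

def schedule_refill(now_us, rng=None):
    """Return a randomized refill time instead of a fixed ~250us delay.

    The 0-5000us window is an invented example; any spread wide enough
    to break the tight 200-400us clustering would weaken the signature.
    """
    rng = rng or random.Random()
    return now_us + rng.uniform(0, 5000)
```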

Some people might also say that our markets have become too complex and that traders are being increasingly forced to use order types that they don’t understand. I’m somewhat sympathetic with that sentiment, and there are proposals to reduce the number of order types. But even if the markets are simplified, large traders should still read the documentation for exchange matching engines and any order types they plan on using. The most verbose exchange documentation is generally no more than a few hundred pages, and specific order types are usually documented in just a few pages – all of which, though boring, can probably be read in under a day. In the case of reserve orders on Nasdaq, the admittedly brief description provided in the order types guide is likely sufficient for traders to understand that reserve orders may leak valuable information upon refilling. Market professionals are handsomely compensated and probably should take the time to read the manual.

Explicit Marking of Reserve Orders on DirectEdge

More generally, exchanges probably should do the best they can, within the confines of an order type, to keep client information as secret as possible. And, when they don’t, they probably should explain that as clearly as possible, just in case some traders don’t read every exchange document. In DirectEdge’s old market data protocol (p9), there is a field (“Replenish Flag”) that discloses whether a new order is associated with a reserve order:

This message indicates a replenishment of an existing reserve order.

I know exchanges work hard to protect client info, so I was surprised to see this. As we saw, it’s not that hard to identify reserve orders anyway. But even so, I would not be astonished if some iceberg users were upset by this order flag. One philosophy of market data distribution is that more disclosure is always better; DirectEdge could have been operating under this assumption when they elected to include the flag.

This field is not (to my knowledge) disseminated on the new market data protocol, used after DirectEdge’s integration with the Bats platform. In fact, Bats’s documentation seems to stress the importance of keeping this information secret:

To better protect reserve orders, BATS handles executions against reserve orders as follows: …
When the displayed portion of the reserve order is refreshed, the order is assigned a new OrderID on the PITCH feed. This is reported by an Add Order(0x21, 0x22, or 0x2F) when the remainder is nonzero.

In any case, here’s a plot of what the market looks like around the time these flagged orders were added and executed:

[Chart: edgex_sfa_bboImprove-addflag_reserve_800]

Top panel: Average profit or loss per share vs distance in time from market event. The “market price” here is the EdgeX last-traded price. Chart is over 6 days in August and excludes fees and rebates. Bottom panel: Shares traded on EdgeX vs time from event (including volume from any part of these events).


It appears that these orders carry valuable information as well. As with the suspected Nasdaq reserve orders, larger orders with this flag have greater alpha. A simple strategy that just copies them is profitable in simulation too. I will mention that neither I nor my company have used this flag (or anything like it) as a signal.

Adding vs Removing Liquidity

One thing I like about strategies that essentially mimic reserve orders (or other patterns) is that they post liquidity, even if they might be embarrassing for anybody using them (and again, embarrassment does not imply anything illegal). Trading passively, often called adding liquidity or market-making, is generally a revered activity, thought of as a service to the market. Adding liquidity as part of an explicit order-anticipation strategy turns that picture on its head. [7]

Earlier trading strategies posted on this blog all remove liquidity, something often frowned upon in the media. Those aggressive strategies, which focus on trading with old, large, or MPID-labeled orders, choose counterparties that exhibit characteristics suggestive of a human or retail origin. So the strategies are likely trading with entities who want to be executed, even if the short-term market price tends to move through their orders after execution. Those strategies may technically remove liquidity, but as far as their counterparties are concerned, they provide a valuable service worthy of being called market-making.
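As a reminder of what that counterparty selection might look like in code, here is a hypothetical sketch; the thresholds and field names are invented illustrations of the old/large/MPID-labeled criteria just mentioned:

```python
def looks_human_or_retail(resting_order, now_us):
    """Heuristic guess at whether a resting order has a human/retail origin.

    All thresholds and field names are hypothetical.
    """
    age_us = now_us - resting_order["added_ts"]
    return (
        age_us > 5_000_000                        # old: resting more than 5 seconds
        or resting_order["displayed"] >= 1_000    # large displayed size
        or resting_order.get("mpid") is not None  # attributed (MPID-labeled)
    )
```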


[1] From p28 of Nasdaq Nordic’s Market Model document

All changes on the Order including changes to the volume (both visible and total volume) of a Reserve Order are accomplished using an Order cancellation followed by an Order insert. In addition, when the displayable portion of the Order is completely executed within the Order Book, the non-displayable portion of the Order is decremented (retaining time priority) and a new displayable Order is sent to the Order Book (with new time priority).

The technical implementation for some Order functionality means that the functions are offered on a best effort basis. This means that the execution may be subject to so called ‘race conditions’ and that the outcome may be impacted by other (incoming) Orders. E.g. the updating of open or displayed volume of a Reserve Order is done at a time when other Orders may be entering the Order Book, thus leaving the Order priority of the update non-deterministic.

Nasdaq Nordic uses Inet technology, so it’s a reasonable guess that their American markets have similar order-handling logic. But, it’d be great if Nasdaq could provide some clarifying guidance. It’s a sign of the state of disclosure (which has dramatically improved in recent years) on US exchanges that sometimes you need to read the documentation of foreign analogues to understand how they operate.


[2] Reserve orders account for about 8% of volume on NYSE Arca and about 4% on NYSE. It wouldn’t surprise me if they were also very common on Nasdaq.


[3] I, and others, certainly wouldn’t call it that.


[4] See [1] for Nasdaq Nordic’s depiction of reserve order refills having non-deterministic priority in the queue.

[5] This is, again, not the case for me or my company.


[6] This isn’t advice, but it wouldn’t be a terrible idea for compliance departments to ask developers for a list of all pattern-like signals used in trading strategies. Developers might automatically add signals without really looking at them, but there’s no reason compliance can’t review them before they’re used in live trading. Unless there are tens of thousands of such features, in which case it may still be possible to create automated tools to check them for potentially problematic behavior.


[7] Of course, you could imagine using the order patterns we’ve seen in other strategies that remove liquidity. I do feel like just copying the anticipated orders is the purest way to capture some of their alpha, though.