Market prices generally respond to an increase in supply or demand. This phenomenon, called “price impact,” is of central importance in financial markets. Price impact provides feedback between supply and demand, an essential component of the price discovery mechanism. Price impact also accounts for the vast majority of large traders’ execution costs — costs which regulators may seek to reduce by tweaking market structure.

Price impact is a concave function of meta-order [1] size — approximately proportional to the square-root of meta-order size — across every well-measured financial market (e.g. European and U.S. equities, futures, and bitcoin). There are some nice models that help explain this universality, most of which require fine-grained assumptions about market dynamics. [2] But perhaps various financial markets, regardless of their idiosyncrasies, share emergent properties that could explain empirical impact data. In this post, I try to predict price impact using only conjectures about a market’s large-scale statistical properties. In particular, we can translate intuitive market principles into integral equations. Some principles, based on efficiency arguments, imply systems of equations that behave like real markets.

In part **I**, we’ll start with the simplest principles, which we’ll only assume to hold on average: the “fair pricing condition”, and that market prices efficiently anticipate the total quantity of a meta-order based on its quantity already-executed. In part **II**, we’ll replace the fair pricing condition with an assumption that traders use price targets successfully, on average. In part **III**, we’ll return to fair pricing, but remove some efficiency from meta-order anticipation — by assuming that execution information percolates slowly into the marketplace. In part **IV**, we’ll emulate front-running, by doing the opposite of part **III**: leaking meta-orders’ short-term execution plans into the market. In parts **V** and **VI**, we’ll discuss adding the notion of urgency into meta-orders.

# Definitions and Information-Driven Impact

We can motivate price impact from a supply and demand perspective. During the execution of a large buyer’s meta-order, her order flow usually changes the balance between supply and demand, inducing prices to rise by an amount called the “temporary price impact.” After the buyer is finished, she keeps her newly-acquired assets off the market, until she decides to sell. This semi-permanent reduction in supply causes the price to settle at a new level, which is higher than the asset’s initial price by an amount called the “permanent price impact.” Changes in available inventory cause permanent impact, and changes in flow (as well as inventory) cause temporary impact. [3]

Another view is that informed trading causes permanent impact, and that uncertainty about informedness causes temporary impact. When a trader submits a meta-order, its permanent impact should correspond in some fashion to her information. And its temporary impact should correspond to the market-estimate of permanent impact. In an “efficient” market, the informational view and the supply/demand view should be equivalent.

Before we proceed, we need some more definitions. Define $P(Q)$ as the typical permanent price impact associated with a meta-order of quantity $Q$. By “typical”, I mean that $P(Q)$ is the expectation value of the permanent impacts, $P_s(Q)$, associated with a situation, $s$, in the set of all possible situations and meta-orders, $S$. Specifically, $P(Q) = \mathbb{E}_{s \in S}\left[P_s(Q)\right]$. It’s reasonable to associate the colloquial term “alpha” — which describes how well a given trade ($s$) predicts the price — with $P_s(Q)$.

Also define $T(x)$ as the typical temporary price impact after a quantity $x$ of a meta-order has been executed. Again, “typical” means $T(x) = \mathbb{E}_{s \in S}\left[T_s(x)\right]$.

These expectations can be passed through the integrals discussed below, so we don’t need to pay attention to them. In the rest of this post, “permanent impact” will refer to the expectation $P(Q)$ unless otherwise specified (and likewise for “temporary impact” and $T(x)$).

# I. A Bare-Bones Model

Starting from two assumptions of market efficiency, we can determine the typical price-trajectory of meta-orders. The two conditions are:

**I.1)** The “fair pricing condition,” which equates traders’ alpha with their execution costs from market-impact (on average):

$$P(Q) = \frac{1}{Q}\int_0^Q T(x)\,dx$$

###### The integral denotes the quantity-averaged temporary impact “paid” over the course of an entire meta-order. “Fair pricing” means that, in aggregate, meta-orders of a given size do not earn excess returns or below-benchmark returns.

**I.2)** Efficient linkage between temporary and permanent price impact:

$$T(x) = \mathbb{E}\left[\,P(Q) \;\middle|\; Q \ge x\,\right] = \frac{\int_x^\infty P(q)\,p(q)\,dq}{1 - F(x)}$$

###### Here, $p(q)$ is the PDF of meta-order sizes and $F(q)$ is the CDF. And $\frac{p(q)}{1 - F(x)}$ is the truncated probability density of meta-order sizes, $q \ge x$ — which represents the probability distribution of the total size $Q$ given that quantity $x$ from the meta-order has already executed. This condition means that, on average, “the market” observes how much quantity an anonymous trader has executed, and uses that to estimate the distribution of her meta-order’s total quantity. “The market” then calculates an expectation value of the meta-order’s alpha, which sets the current clearing price (i.e. temporary impact). [4] [5] To emphasize, only the average temporary impact is determined this way; a single meta-order could have much different behavior. Here’s a heuristic example:

A. A trader is buying a lot of IBM stock and has so far bought a million shares.

B. The rest of the market sees signs (like higher price and volume) of that purchase and knows roughly that somebody has bought a million shares.

C. Once a trader has bought a million shares, there is a 50% chance that she’ll buy 5 million in total, and a 50% chance that she’ll buy 10 million. “The market” knows these probabilities.

D. For 5 million share meta-orders, the typical permanent price impact is 1%, and for 10 million share meta-orders it’s 2%. So “the market” expects our trader’s meta-order to have permanent impact of 1.5%. The *typical* temporary impact is determined by this expectation value. This particular meta-order may have temporary impact smaller or larger than 1.5%, but meta-orders sent under similar circumstances will have temporary impact of 1.5% on average.
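The arithmetic in D. is just a conditional expectation. A minimal sketch, using the hypothetical probabilities and impact values from the example above:

```python
# Market's view once 1M shares have executed: the meta-order totals either
# 5M shares (typical permanent impact 1%) or 10M shares (2%), 50/50.
total_size_probs = {5_000_000: 0.5, 10_000_000: 0.5}
perm_impact = {5_000_000: 0.01, 10_000_000: 0.02}

# The expected alpha, given the executed quantity, sets the typical temporary impact.
expected_temp_impact = sum(p * perm_impact[q] for q, p in total_size_probs.items())
print(f"{expected_temp_impact:.2%}")  # 1.50%
```

Condition **I.2)** asserts that this calculation, done with the market's true conditional size distribution, sets the average temporary impact.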

# Relationship with Efficiency

The fair pricing condition could emerge when the capital management industry is sufficiently competitive. If a money manager uses a trading strategy that’s profitable after impact costs, other managers could copy it and make money. The strategy would continue to attract additional capital, until impact expenses balanced its alpha. (Some managers are protective of their methods, but most strategies probably get replicated eventually.) If a strategy ever became overused, and impact expenses overwhelmed its alpha, then managers would probably scale back or see clients pull their money due to poor performance. Of course these processes take time, so some strategies will earn excess returns post-impact and some strategies may underperform — fair pricing would hold so long as they average out to a wash.

A strictly stronger condition than **I.2)** should hold in a market where meta-orders are assigned an anonymous ID, and every trade is instantly reported to the public with its meta-order ID disclosed. Farmer, Gerig, Lillo, and Waelbroeck call a similar market structure the “colored print” model. Under this disclosure regime, if intermediary profits are zero, the expected alpha would determine the temporary impact path of individual meta-orders, not just the average as in **I.2)**. All meta-orders would have the same impact path: $T_s(x) = \mathbb{E}\left[P(Q) \mid Q \ge x\right]$ for any situation $s$. [6] Now, the colored print model doesn’t seem very realistic; most markets don’t have anywhere near that level of transparency. Nonetheless, Farmer et al. show preliminary measurements that partly support it. [7]

Even without colored prints, the linkage property **I.2)** could be due to momentum and mean-reversion traders competing away their profits. As discussed by Bouchaud, Farmer, and Lillo, most price movement is probably caused by changes in supply and demand. That is, if prices move on increased volume, it’s likely that someone activated a large meta-order, especially if there hasn’t been any news. So, if average impact overshot **I.2)** significantly, a mean-reversion trader could plausibly watch for these signs and profitably trade opposite large meta-orders. Likewise, if average impact undershot **I.2)**, momentum traders might profit by following the price trend.

# Solving the System of Equations

We can combine **I.1)** and **I.2)** to get an ODE [8]:

$$2P'(x) + x\,P''(x) = \frac{p(x)}{1 - F(x)}\,x\,P'(x)$$

This ODE lets us compute $T(x)$ and $P(Q)$ for a given meta-order size distribution, $p(q)$.

It’s common to approximate $p(q)$ as a Pareto distribution ($p(q) \propto q^{-\gamma - 1}$). If we do so, then $\frac{p(x)}{1 - F(x)} = \frac{\gamma}{x}$, and the ODE has solution $T(x) = A\,x^{\gamma - 1} + B$. Equation **I.1)** implies $P(Q) = \frac{A}{\gamma}\,Q^{\gamma - 1} + B$, so we have that $\frac{P(Q)}{T(Q)} = \frac{\frac{A}{\gamma}\,Q^{\gamma - 1} + B}{A\,Q^{\gamma - 1} + B}$. Impact should nearly vanish for small $Q$, so we can say that $B \approx 0$. The post-execution decay in impact is then given by
$$\frac{P(Q)}{T(Q)} = \frac{1}{\gamma}$$

If we choose $\gamma = \frac{3}{2}$ (roughly in-line with empirical data), we get the familiar square-root law: $T(x) \propto \sqrt{x}$. We also get an impact ratio of $\frac{P(Q)}{T(Q)} = \frac{2}{3}$, very close to real-world values.
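As a sanity check, we can verify the linkage condition **I.2)** by Monte Carlo: draw Pareto($\gamma = 3/2$) sizes and confirm that the conditional expectation of the permanent impact $P(Q) = \frac{2}{3}\sqrt{Q}$, given $Q \ge x$, reproduces the temporary impact $T(x) = \sqrt{x}$. (The unit minimum order size and the overall impact scale are arbitrary choices for this sketch.)

```python
import math
import random

random.seed(7)
GAMMA = 1.5                  # Pareto tail exponent, roughly in line with data
N = 1_000_000

# Inverse-transform sampling: U^(-1/GAMMA) is Pareto-distributed with minimum 1.
sizes = [(1.0 - random.random()) ** (-1.0 / GAMMA) for _ in range(N)]

def perm_impact(q):
    """P(Q) = (2/3) * sqrt(Q): the fair-priced permanent impact."""
    return (2.0 / 3.0) * math.sqrt(q)

# Linkage I.2): T(x) should equal E[P(Q) | Q >= x] = sqrt(x).
for x in (1.5, 3.0):
    tail = [perm_impact(q) for q in sizes if q >= x]
    estimate = sum(tail) / len(tail)
    print(f"x={x}: E[P | Q >= x] = {estimate:.3f}, T(x) = {math.sqrt(x):.3f}")
```

The agreement is exact in the limit: for a Pareto tail, $\mathbb{E}[\sqrt{Q} \mid Q \ge x] = \frac{\gamma}{\gamma - 1/2}\sqrt{x} = \frac{3}{2}\sqrt{x}$, and $\frac{2}{3}\cdot\frac{3}{2} = 1$.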

A similar method from Farmer, Gerig, Lillo, and Waelbroeck gives the same result. They use the fair pricing condition, but combine it with a competitive model of market microstructure. [9] Here, instead of having a specific model of a market, we’re making a broad assumption about efficiency with property **I.2)**. There may be a large class of competitive market structures that have this efficiency property.

# Distribution of Order Sizes Implied by a Given Impact Curve

Under this model, knowing an asset’s price elasticity ($T(x)$) is equivalent to knowing its equilibrium meta-order size distribution ($p(q)$). [10] If a market impact function is assumed, we can calculate the meta-order size distribution. [11] For instance, Zarinelli, Treccani, Farmer, and Lillo are able to better fit their dataset with an impact function of logarithmic form (p17). This impact curve implies a $p(q)$ that’s similar to a power-law, but with a slight bend such that its tail decays slower than its bulk:
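Footnote [11] makes this inversion concrete. A toy sketch: assume square-root temporary impact $T(x) = \sqrt{x}$; fair pricing then gives $P(Q) = \frac{2}{3}\sqrt{Q}$, and the implied hazard rate of the size distribution, $\frac{T'(x)}{T(x) - P(x)}$, comes out to $\frac{3/2}{x}$, i.e. a Pareto tail with exponent $3/2$:

```python
import math

def T(x):                     # assumed temporary impact curve
    return math.sqrt(x)

def P(q):                     # fair pricing: P(Q) = (1/Q) * integral_0^Q T(x) dx
    return (2.0 / 3.0) * math.sqrt(q)

def implied_hazard(x, h=1e-6):
    """p(x) / (1 - F(x)) = T'(x) / (T(x) - P(x)), via a central difference."""
    dT = (T(x + h) - T(x - h)) / (2.0 * h)
    return dT / (T(x) - P(x))

# For a Pareto tail the hazard is gamma / x, so x * hazard recovers gamma.
for x in (1.0, 4.0, 9.0):
    print(f"x = {x}: implied gamma = {x * implied_hazard(x):.4f}")  # ~1.5000
```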

# II. A Replacement Principle for Fair Pricing: Traders’ Effective Use of Price Targets

The two integral equations in part **I** can be modified to accommodate other market structure principles. There’s some evidence that our markets obey the fair pricing condition, but it’s fun to consider alternatives. One possibility is that traders have price targets, and cease execution of their meta-orders when prices approach those targets. We can try replacing the fair pricing of **I.1)** with something that embodies this intuition:

**II.1)**

$$T(Q) = a\,P(Q) + b$$

###### Where $a$ and $b$ are constants. This principle should be true when traders follow price-target rules, and their targets accurately predict the long-term price (on average). If $a < 1$ and $b = 0$, then traders typically stop executing when the price has moved a fraction $a$ of the way from its starting value to its long-term value. If $a = 1$ and $b = -1\%$, then traders stop executing when the price is within 1% of its long-term value.

If we keep **I.2)**, this gives the ODE:

$$T'(x) = \frac{p(x)}{1 - F(x)}\left[\left(1 - \frac{1}{a}\right)T(x) + \frac{b}{a}\right]$$

It’s readily solved. [12] In particular, if $p(q)$ is Pareto and $b = 0$:

$T(x) = C\,x^{\gamma\left(1 - 1/a\right)}$ and $P(Q) = \frac{T(Q)}{a}$.

For typical values of $\gamma$, we can get the usual square-root law by setting $a = \frac{2\gamma}{2\gamma - 1}$ (for $\gamma = \frac{3}{2}$, that’s $a = \frac{3}{2}$). We need $a > 1$ in order for impact to be a concave, increasing function of order size, in agreement with empirical data. This suggests that perhaps traders do employ price targets, only instead of being conservative, their targets are overly aggressive. In other words, this model gives a realistic concave impact function if traders are overconfident and think their information is worth more than it is. [13] More generally, the partial reversion of impact after meta-orders’ completion could be explained with overconfidence. And when the “average” trader is overconfident just enough to balance out her alpha, the market will obey the fair pricing condition. I think there’s more to fair pricing than overconfidence, but this link between human irrationality and market efficiency is intriguing.
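To see the exponent emerge numerically, we can integrate the price-target ODE for Pareto sizes, $T'(x) = \frac{\gamma}{x}\left[\left(1 - \frac{1}{a}\right)T(x) + \frac{b}{a}\right]$, with assumed parameters ($\gamma = 3/2$, $a = 3/2$, $b = 0$, and $T(1) = 1$ as an arbitrary scale); the result should grow like $x^{\gamma(1 - 1/a)} = x^{1/2}$:

```python
# Forward-Euler integration of T'(x) = (GAMMA/x) * ((1 - 1/A)*T(x) + B/A).
GAMMA, A, B = 1.5, 1.5, 0.0
predicted_exponent = GAMMA * (1.0 - 1.0 / A)       # = 0.5

x, T, h = 1.0, 1.0, 1e-4                           # initial condition T(1) = 1
while x < 4.0:
    T += h * (GAMMA / x) * ((1.0 - 1.0 / A) * T + B / A)
    x += h
print(T, 4.0 ** predicted_exponent)                # both close to 2.0
```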

# III. A Replacement Principle for Efficient Linkage, with Delayed Dissemination of Information

We can also think about alternatives for **I.2)**. In **I.2)**, “the market” could immediately observe the already-executed quantity of a typical meta-order. But markets don’t instantly process new information, so perhaps the market estimate of meta-orders’ already-executed quantity is delayed:

**III.2)**

$$T(x) = \mathbb{E}\left[\,P(Q) \;\middle|\; Q \ge (x - d)^+\,\right] = \frac{\int_{(x-d)^+}^\infty P(q)\,p(q)\,dq}{1 - F\left((x - d)^+\right)}$$

###### Where $d$ is a constant and $(x - d)^+$ is the positive part of $x - d$: $(x - d)^+ = \max(x - d,\, 0)$.

###### This condition should be true when the market (on average) is able to observe how much quantity an anonymous trader executed in the past, when her executed quantity was $d$ less than it is in the present. This information can be used to estimate the distribution of her meta-order’s total size, and thus an expectation value of its final alpha. The temporary impact is set by this expectation value.

Intuitively, small meta-orders may blend in with background activity, but large ones are too conspicuous. If someone sends two 100-share orders to buy AAPL, other traders won’t know (or care) whether those orders came from one trader or two. But if a large buyer is responsible for a third of the day’s volume, other traders will notice and have a decent estimate of the buyer’s already-executed quantity, even if they don’t know whether the buyer was involved in the most recent trades on the tape. So, it’s very plausible for market participants to have a quantity-lagged, anonymized view of each other’s trading activity.

Combining **III.2)** with fair pricing **I.1)** gives the delay differential equation [14]:

$$2P'(x) + x\,P''(x) = \frac{p(x-d)}{1 - F(x-d)}\left(P(x) + x\,P'(x) - P(x-d)\right), \quad x > d.$$

We can solve it numerically [15]:

I gather that fundamental traders don’t like it when the price reverts on them, so some may want this impact ratio to be close to 1. Delayed information dissemination helps accomplish this goal when meta-orders are smaller than what can be executed within the delay period. But traders experience bigger than usual reversions if their meta-orders are larger than $d$. This behavior is intuitive: if a meta-order has executed a quantity less than $d$, other traders will have zero information about it and can’t react. But as soon as its executed quantity reaches $d$, the market is made aware that somebody is working an unusually big meta-order, and so the price moves considerably.
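One way to generate such numerical solutions without tackling the delay differential equation head-on is fixed-point iteration on a grid: alternately apply fair pricing **I.1)** and the delayed linkage **III.2)** until the curves settle. A sketch, assuming a Pareto($\gamma = 3/2$) size density truncated to the grid (the grid bounds and iteration count are arbitrary choices):

```python
import math

GAMMA, D = 1.5, 0.5          # Pareto exponent; delay, in units of executed quantity
N, DX = 2000, 0.01           # quantity grid covering (0, 20]
xs = [DX * (i + 1) for i in range(N)]
dens = [x ** (-GAMMA - 1.0) for x in xs]           # unnormalized size density

T = [math.sqrt(x) for x in xs]                     # initial guess
for _ in range(50):
    # Fair pricing I.1): P(Q) = (1/Q) * integral_0^Q T(x) dx
    cum, P = 0.0, []
    for x, t in zip(xs, T):
        cum += t * DX
        P.append(cum / x)
    # Suffix sums make the conditional expectation E[P(Q) | Q >= lo] cheap.
    num, den = [0.0] * (N + 1), [0.0] * (N + 1)
    for i in range(N - 1, -1, -1):
        num[i] = num[i + 1] + dens[i] * P[i]
        den[i] = den[i + 1] + dens[i]

    def delayed_linkage(x):
        lo = max(x - D, xs[0])                     # market's lagged view of x
        i = min(N - 1, max(0, math.ceil(lo / DX) - 1))
        return num[i] / den[i]

    T = [delayed_linkage(x) for x in xs]

# While x <= D, no execution information has leaked, so impact is flat;
# beyond the delay, the market reacts and impact rises.
print(T[9], T[39], T[199])                         # T(0.1) == T(0.4) < T(2.0)
```

The qualitative features described above appear directly: the impact curve is flat for executed quantities below $d$, then climbs once the market learns a big meta-order is in progress.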

Some bond traders are pushing for a longer delay in trade reporting. One rationale is that asset managers could execute meta-orders during the delay period, before other traders react and move the market. The idea feels superficially like condition **III.2)**, but isn’t a perfect analogy, because counterparties still receive trade confirmations without delay. And counterparties do use this information to trade. [16] So, delaying prints may not significantly slow the percolation of traders’ information into the marketplace; it just concentrates that information in the hands of their counterparties. Counterparties might provide tighter quotes because of this informational advantage, but only if liquidity provision is sufficiently competitive. [17]

In theory, it’s possible for market structure to explicitly alter $d$. [18] An exchange could delay both prints and trade confirmations, while operating, on behalf of customers, execution algorithms which do not experience a delay. This was the idea behind IEX’s defunct router, which would have been able to execute aggressive orders against its hidden order book and route out the remainder before informing either counterparty about the trades. The router would’ve increased the equity market’s $d$ by the resting size on IEX’s hidden order book, which (I’m guessing) is very rarely above $100k notional — an amount that doesn’t really move the needle for large fundamental traders, especially since orders larger than $d$ experience significant price reversion. Regardless, it’s interesting to think about more creative ways of giving exchange execution algorithms an informational advantage. The general problem with such schemes is that they are anti-competitive; brokers would have to use the advantaged exchange algos, which could command exorbitant fees and suffer from a lack of innovation. [19]

# IV. A Replacement Principle for Efficient Linkage, with Information Leakage from Sloppy Trading or Front-Running

In **III.**, we altered condition **I.2)** so that market prices responded to meta-orders’ executions in a lagged fashion. We can try the same idea in reverse to see what happens if market prices adjust to meta-orders’ future executed quantity:

**IV.2)**

$$T(x, Q) = \mathbb{E}\left[\,P(Q') \;\middle|\; Q' \ge \min(x + \ell,\; Q)\,\right]$$

###### Where $T(x, Q)$ is the temporary impact associated with a meta-order that has an already-executed quantity of $x$ and a total quantity of $Q$. $\ell$ is a constant. On average, a meta-order’s intentions are partly revealed to the market, which “knows” not only the meta-order’s already-executed quantity, but also whether it will execute an additional quantity $\ell$ in the future. If a meta-order will execute, in total, less than an additional $\ell$, the market knows its total quantity exactly. “The market” uses this quantity information to calculate the meta-order’s expected alpha, which determines the typical temporary impact.

This condition may be an appropriate approximation for several market structure issues:

**A.** The sloppy execution methods described in “Flash Boys”: If a sub-par router sends orders to multiple exchanges without timing them to splash-down simultaneously, then “the market” may effectively “know” that some of the later orders are in-flight, before they arrive. If most fundamental traders use these sloppy routing methods (as “Flash Boys” claims), then we might be able to describe the market’s behavior with an $\ell$ approximately equal to the typical top-of-book depth.

**B.** Actual front-running: E.g., if fundamental traders split up their meta-orders into $10M pieces, and front-running brokers handle those pieces, the market will have an $\ell$ of about $10M. Though, brokers know their customers’ identities, so they may be able to predict a customer’s permanent impact with better precision than this model allows.

**C.** Last look: During the last-look-period, a fundamental trader’s counterparty can wait before finalizing the trade. If the fundamental trader sends orders to other exchanges during this period, her counterparty can take those into account when deciding to complete the trade. This is similar to A., except traders can’t avoid the information leakage by synchronizing their orders.

We can examine the solutions of this version of condition **2)**. Combining it with the fair pricing condition **I.1)** gives, for meta-orders with $Q > \ell$: [20]

If $p(q)$ is Pareto, this has solution:

For $x \gg \ell$: the leaked quantity $\ell$ becomes negligible compared to the executed quantity, so $T(x) \propto x^{\gamma - 1}$, which is the same behavior we saw in the base model **I.**

If we look at the solution’s behavior for $Q$ close to $\ell$, the story is quite different:

Under this model, meta-orders slightly larger than $\ell$ *necessarily* have negative long-term alpha. It’s possible that traders would adapt to this situation by never submitting meta-orders of that size, altering the Pareto-distribution of meta-order sizes so that no commonly-used $Q$ is associated with negative alpha. But, it’s also possible that some traders would continue submitting orders that lose money in expectation. Market participants have diverse priorities, and long-term alpha is not always one of them.

# V. Adding Time-Dependence

The model template above gets some general behavior right, but glosses over important phenomena in our markets. It makes no explicit mention of time, ignoring important factors like the urgency and execution rate of a meta-order. It’s not obvious how we could include these using only general arguments about efficiency, but we can imagine possible market principles and see where they lead.

For the sake of argument, say that every informed trading opportunity has a certain urgency, $u$, defined as the amount of time before its information’s value expires. For example, an informed trader may have a proprietary meteorological model which makes predictions 30 minutes before public forecasts are published. If her model predicts abnormal rainfall and she expects an effect on the price of wheat, she’d have 30 minutes to trade before her information becomes suddenly worthless. Of course, in real life she’d have competitors and her information would decay in value gradually over the 30 minutes, perhaps even retaining some value after it’s fully public. But let’s just assume that $u$ is a constant for a given trading opportunity and see where it leads us.

If we try following a strict analogy with the time-independent model, we might write down these equations:

**V.1)** A “universal-urgency fair pricing condition,” that applies to meta-orders at every level of urgency (here $T(x, u)$ denotes the typical temporary impact experienced by an urgency-$u$ meta-order after executing quantity $x$):

$$P(Q, u) = \frac{1}{Q}\int_0^Q T(x, u)\,dx \quad \text{for every urgency } u$$

###### This is a much stronger statement than ordinary fair pricing. It says that market-impact expenses equal alpha, on average, for meta-orders grouped by *any* given urgency. There are good reasons to expect this to be a bad approximation of reality — e.g. high-frequency traders probably constitute most short-urgency volume [21] and have large numbers of trades to analyze, so they can successfully tune their order sizes such that their profits are maximized (and positive). Perhaps some traders with long-urgency information submit orders that are larger than the capacity of their strategies, but I doubt HFTs do.

**V.2)** Efficient linkage between temporary and permanent price impact:

$$T(x, t) = \mathbb{E}\left[\,P(Q, u) \;\middle|\; Q \ge x,\; u \ge t\,\right]$$

###### Where $p(q, u)$ is the PDF of meta-order sizes and urgencies, and $F(q, u)$ is the CDF. $\frac{p(q, u)}{\Pr(Q \ge x,\, u \ge t)}$ is the truncated probability distribution of meta-order sizes and urgencies — which represents the probability distribution of $Q$ and $u$ given the knowledge that quantity $x$ from the meta-order has already executed in time $t$. This is similar to the time-independent efficient linkage condition **I.2)**. For example, a trader splits her meta-order into chunks, executing 1,000 shares per minute starting at 9:45. If she is still trading at 10:00, “the market,” having observed her order-flow imbalance, will “know” that her meta-order is at least 15,000 shares and has an urgency of at least 15 minutes. “The market” then calculates the expected alpha of the meta-order given these two pieces of information, which determines the average temporary impact.

We can combine these two equations to get a rather unenticing PDE. [22] As far as I can tell, its solutions are unrealistic. [23] Most solutions have temporary price impact that barely changes with varying levels of urgency. But in the real world, temporary impact should be greater for more urgent orders. The universal-urgency fair pricing here is too strong of a constraint on trader behavior. This condition means that markets don’t discriminate based on information urgency. Its failure suggests that markets do discriminate — and that informed traders, when they specialize in a particular time-sensitivity, face either a headwind or tailwind in their profitability.

# VI. A Weaker Constraint

If we want to replace the universal-urgency of **V.1)** with something still compatible with ordinary fair pricing, perhaps the weakest constraint would be the following:

**VI.1)**

$$\mathbb{E}_u\left[\,P(Q, u) - \frac{1}{Q}\int_0^Q T(x, u)\,dx\,\right] = 0 \quad \text{for each } Q$$

###### Which says that, for a given $Q$, fair pricing holds on average across all $u$.

Requiring this, along with **V.2)**, gives a large class of solutions. Many solutions have $Q$-behavior similar to the time-independent model **I**, with $u$-behavior that looks something like this:

This weaker constraint leaves a great deal of flexibility in the shape of the market impact surface $T(x, u)$. Some of the solutions seem reasonable, e.g. for large $u$, impact could decay as a power of $u$. But there are plenty of unreasonable solutions too, so perhaps real markets obey a stronger form of fair pricing.

# Conclusion

Price impact has characteristics that are universal across asset classes. This universality suggests that financial markets possess emergent properties that don’t depend too strongly upon their underlying market structure. Here, we consider some possible properties and their connection with impact.

The general approach is to think about a market structure principle, and write down a corresponding equation. Some of these equations, stemming from notions of efficiency, form systems which have behavior evocative of our markets. The simple system in part **I** combines the “fair pricing condition” with a linkage between expected short-term and long-term price impact. It predicts both impact’s size-dependence and post-execution decay with surprising accuracy. Fair pricing appears to agree with empirical equities data. The linkage condition is also testable. And, as discussed in part **III**, its form may weakly depend on how much and how quickly a market disseminates data. If we measure this dependence, we might further understand the effects of price-transparency on fundamental traders, and give regulators a better toolbox to evaluate the evolution of markets.

[1] A “meta-order” refers to a collection of orders stemming from a single trading decision. For example, a trader wanting to buy 10,000 lots of crude oil might split this meta-order into 1,000 child orders of 10 lots.

[2] There’s a good review and empirical study by Zarinelli et al. It has a brief overview of several models that can predict concave impact, including the Almgren-Chriss model, the propagator model of Bouchaud et al. and of Lillo and Farmer, the latent order book approach of Toth et al. and its extension by Donier et al., and the fair pricing and martingale approach of Farmer et al.

[3] Recall the “flow versus stock” (“stock” meaning available inventory) debate from the Fed’s Quantitative Easing programs, when people agonized over which of the two had a bigger impact on prices. E.g., Bernanke in 2013:

We do believe — although, you know, there’s room for debate — we do believe that the primary effect of our purchases is through the stock that we hold, because that stock has been withdrawn from markets, and the prices of those assets have to adjust to balance supply and demand. And we’ve taken out some of the supply, and so the prices go up, the yields go down.

For ordinary transactions, the “stock effect” is typically responsible for about two thirds of total impact (see, e.g., Figure 12). Central banks, though, are not ordinary market participants. But there are hints that their impact characteristics may not be so exceptional. Payne and Vitale studied FX interventions by the SNB. Their measurements show that the SNB’s price impact was a concave function of intervention size (Figure 2). The impact of SNB trades also appears to have partially reverted within 15-30 minutes, perhaps by about one third (Figures 1 and 2, Table 2). Though, unlike QE, these interventions were sterilised, so longer-term there shouldn’t have been much of a “stock effect” — and other participants may have known that.

[4] We can assume without loss of generality that the traders in question are buying (i.e. the meta-order sizes are positive). Sell meta-orders would have negative $Q$, and the same arguments would apply, but with “$Q \ge x$” replaced by “$Q \le x$”. Though, the meta-order size distribution for sell orders might not be symmetric to the distribution for buy orders (i.e. possibly $p(q) \ne p(-q)$). Note that this model assumes that traders don’t submit sell orders when their intention is really to buy. There’s some debate over whether doing so would constitute market manipulation and I doubt it happens all that much, but that’s a discussion for another time.

[5] I’m being a little loose with words here. Say a meta-order in situation $s$ has an already-executed quantity of $x_s$, and the market-estimate of $x_s$ is $\hat{x}_s$. **I.2)** is not the same as saying that $\hat{x}_s = x_s$. The market-estimate could be biased and **I.2)** might still hold. And **I.2)** could be wrong even if $\hat{x}_s$ is unbiased.

[6] I’m being imprecise here. Intermediaries could differentiate some market situations from others, so we really should have: $T_s(x) = \mathbb{E}\left[\,P_{s'}(Q) \;\middle|\; Q \ge x,\ s' \in S_c\,\right]$, where the right-hand side is the average alpha for possible situations $S_c$ given observable market conditions $c$. E.g. average alpha increases when volatility doubles, and other traders know it — so they adjust their estimates of temporary impact accordingly. In this case, $S_c$ is the set of meta-orders that could be sent when volatility is doubled. For this reason, and because impact is not the only cause of price fluctuations, the stronger “colored print” constraint wouldn’t eliminate the empirically measured variance of individual meta-orders’ impact — though it should dramatically reduce it.

[7] The draft presents some fascinating evidence in support of the colored print hypothesis. Using broker-tagged execution data from the LSE and an estimation method, the authors group trades into meta-orders. They then look at the marginal temporary impact of each successive child order from a given meta-order (call this meta-order $A$). In keeping with a concave impact-function, they find that $A$’s child orders have lower impact if they’re sent later in $A$’s execution. However, if another meta-order ($B$) is concurrently executing on the same side as $A$, $B$’s child orders have nearly the same temporary impact, regardless of whether they occur early or late in the execution of $A$ (p39-40). This means that “the market” is able to differentiate $A$’s executions from $B$’s!

**I.2)** might seem like a sensible approximation for real markets, but I’d have expected it to be pretty inaccurate when multiple large traders are simultaneously (and independently) active. There should be price movement and excess volume if two traders have bought a million shares each, but how could “the market” differentiate this situation from one where a single trader bought two million shares? It’s surprising, but the (draft) paper offers evidence that this differentiation happens. I don’t know what LSE market structure was like during the relevant period (2000-2002) — maybe it allowed information to leak — but it’s also possible that large meta-orders just aren’t very well camouflaged. A large trader’s orders might be poorly camouflaged, for example, if she has a favorite order size, or submits orders at regular time-intervals. In any case, if a meta-order is sufficiently large, its prints should effectively be “colored” — because it’s unlikely that an independent trading strategy would submit another meta-order of similar size at the same time.

[8]

A. Take a $\frac{d}{dQ}$ of **I.1)**: $T(Q) = P(Q) + Q\,P'(Q)$

B. Set A. equal to the definition of $T(Q)$ in **I.2)**: $P(Q) + Q\,P'(Q) = \frac{\int_Q^\infty P(q)\,p(q)\,dq}{1 - F(Q)}$

C. Take a $\frac{d}{dQ}$ of B.: $2P'(Q) + Q\,P''(Q) = \frac{F'(Q)\int_Q^\infty P(q)\,p(q)\,dq}{\left(1 - F(Q)\right)^2} - \frac{P(Q)\,p(Q)}{1 - F(Q)}$

D. Plug B. into C. to eliminate the integral: $2P'(Q) + Q\,P''(Q) = \frac{F'(Q)}{1 - F(Q)}\left(P(Q) + Q\,P'(Q)\right) - \frac{P(Q)\,p(Q)}{1 - F(Q)}$

E. Use $F'(Q) = p(Q)$: $2P'(Q) + Q\,P''(Q) = \frac{p(Q)}{1 - F(Q)}\,Q\,P'(Q)$

F. And for clarity, we can change variables from $Q$ to $x$, and divide by $P'(x)$ (since we’re not interested in the ODE when $P'(x) = 0$): $2 + x\,\frac{P''(x)}{P'(x)} = \frac{x\,p(x)}{1 - F(x)}$

[9] There’s a helpful graphic on p20 of this presentation.

[10] This equivalence comes from ODE uniqueness and applies more generally than the model here. Latent liquidity models have a similar feature. In latent liquidity models, traders submit orders when the market approaches a price that appeals to them. In addition to their intuitive appeal, latent liquidity models predict square-root impact under a fairly wide variety of circumstances.

It’s helpful to visualize how price movements change the balance of buy and sell meta-orders. Let’s call $\phi_s(q)$ the number of meta-orders, of size $q$, live in the market at a given situation $s$ (a negative $q$ indicates a sell meta-order). When supply and demand are in balance, we have $\int_0^\infty q\,\phi_s(q)\,dq = -\int_{-\infty}^0 q\,\phi_s(q)\,dq$ (buy volume equals sell volume).

Say a new meta-order of size $Q > 0$ enters the market and disrupts the equilibrium. This changes the price by some impact $\Delta$, and morphs $\phi_s$ into a new function $\phi'_s$, with $Q + \int_0^\infty q\,\phi'_s(q)\,dq = -\int_{-\infty}^0 q\,\phi'_s(q)\,dq$. I.e., a new buy meta-order will fully execute only if the right volume of new sell meta-orders appear and/or buy meta-orders disappear. Here is a stylized illustration:

By definition, $\bar{\phi}(q) = \mathbb{E}_s\left[\phi_s(q)\right]$, where the expectation is over all situations when a meta-order of size $Q$ might be submitted. Also by definition — if we assume that meta-orders are i.i.d. (which would preclude correlated trading behavior like herding) — $\bar{\phi}(q)$, suitably normalized, is the empirical distribution function of meta-order sizes. So $\bar{\phi}(q)$ and $p(q)$ have the same shape if there are a large number of meta-orders live.

Donier, Bonart, Mastromatteo, and Bouchaud show that a broad class of latent liquidity models predict similar impact functions. Fitting their impact function to empirical data would give a latent liquidity model’s essential parameters, which describe the equilibrium (or “stationary”) $\bar{\phi}(q)$, as well as how it gets warped when the price moves.

[11] From the ODE: . We can use **I.1)** to get from , and thus find (for a continuous probability distribution, ).

[12] That is, if : . And in the case that : .

[13] If fund managers knowingly let their AUM grow beyond the capacity of their strategies, then “overconfidence” might not be the right word. Then again, maybe it is. Clients presumably have confidence that their money managers will not overload their strategies.

[14]

A. Take a of the fair pricing condition **I.1)**:

B. Set equal to **III.2)**:

C. Take a :

D. Substitute B. into C. to eliminate the integral:

E. And use to get

[15] The solutions were generated with the following assumptions:

Initial conditions for :

Initial conditions for :

Initial conditions for :

The solution was generated from the ODE of **I.1)**.

[16] Here’s Robin Wigglesworth on one reason bank market-makers like trade reporting delays:

These days, bank traders are loath or unable to sit on big positions due to regulatory restrictions. Even if an asset manager is willing to offload his position to a dealer at a deep discount, the price they agree will swiftly go out to the entire market through Trace, hamstringing the trader’s ability to offload it quickly. [Emphasis added]

[17] I don’t know whether bond liquidity provision is sufficiently competitive, but it has notoriously high barriers to entry.

Even for exchange-traded products, subsidizing market-makers with an informational advantage requires great care. E.g., for products that are 1-tick wide with thick order books, it’s possible that market-makers monetize most of the benefit of delayed trade reporting. On these products, market-makers may submit small orders at strategic places in the queue to receive advance warning of large trades. Matt Hurd calls these orders “canaries.” If only a handful of HFTs use canaries, a large aggressor won’t receive meaningful size-improvement, but the HFTs will have a brief window where they can advantageously trade correlated products. To be clear, canaries don’t hurt the aggressor at all (unless she simultaneously and sloppily trades these correlated products), but they don’t help much either. Here’s a hypothetical example:

1. Canary orders make up 5% of the queue for S&P 500 futures (ES).

2. A fundamental trader sweeps ES, and the canaries give her a 5% larger fill.

3. The canary traders learn about the sweep before the broader market, and use that info to trade correlated products (e.g. FX, rates, energy, cash equities).

Most likely, the fundamental trader had no interest in trading those products, so she received 5% size-improvement for free. But, if more HFTs had been using canaries, their profits would’ve been lower and maybe she could’ve received 10% size-improvement. The question is whether the number of HFTs competing over these strategies is large enough to maximize the size-improvement for our fundamental trader. You could argue that 5% size-improvement is better than zero, but delaying public market data does have costs, such as reduced certainty and wider spreads.
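
The arithmetic in this example is simple enough to sketch. The queue size below is a made-up number; only the 5% and 10% size-improvement figures come from the example above.

```python
# Hypothetical canary-free queue at the inside price (invented number).
queue_without_canaries = 1000.0  # contracts

def size_improvement(canary_contracts):
    """Extra fill a sweeping trader gets, relative to a canary-free queue."""
    return canary_contracts / queue_without_canaries

few_hfts = size_improvement(50.0)    # a handful of HFTs posting canaries: 5%
more_hfts = size_improvement(100.0)  # more competition could push this to 10%
```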

[18] If were intentionally changed by altering market structure, there’d probably be corresponding changes in the distribution of and the initial conditions. These changes could counteract the anticipated effects.

[19] A more competition-friendly version might be for exchange latency-structure to allow canaries. But the loss of transparency from delaying market data may itself be anti-competitive. E.g., if ES immediately transmitted execution reports, and delayed market data by 10ms, then market-makers would only be able to quote competing products (like SPY) when they have canary orders live in ES. Requiring traders on competing venues to also trade on your venue doesn’t sound very competition-friendly.

[20]

A. Since is piecewise, split the fair pricing integral **I.1)** into the relevant two regions:

B. Plugging in **IV.2)** to A.:

C. Take a :

D. Take another :

E. Substitute C. into D. to eliminate the integral, and use :

The value of HFTs’ information will decay in a complex manner over the span of their prediction horizon. An HFT might predict 30-second returns and submit orders within 100us of a change in its prediction. If that prediction maintained its value for the entire 30 seconds (becoming valueless only at 31 seconds), then the HFT wouldn’t need to react so quickly. High-frequency traders, almost by definition, have to compete for the profits from their signals: from the instant they obtain their information, it starts decaying in value.
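
This decay can be caricatured with a toy curve. The exponential form and the 5-second half-life are assumptions made purely for illustration, not something from the data: the point is only that reacting within microseconds preserves essentially all of a signal’s value, while waiting out the full prediction horizon captures almost none.

```python
# Toy model: a signal's value decays exponentially after it is generated,
# with a 5-second half-life (both the form and the half-life are invented).
def remaining_value(reaction_latency_s, half_life_s=5.0):
    return 0.5 ** (reaction_latency_s / half_life_s)

fast = remaining_value(100e-6)  # react within 100 microseconds
slow = remaining_value(30.0)    # wait the full 30-second horizon
```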

[22] Thanks to Mathematica.

With

The procedure is to plug **V.2)** into **V.1)** and take two partial derivatives in one variable and one in the other:

A. Inserting **V.2)** into **V.1)**:

B. Take a :

C. Substitute A. into B. to eliminate integrals where applicable:

D. Take another :

E. Substitute C. into D. to eliminate integrals where applicable:

F. Take a :

G. To get the result, substitute E. into F. to eliminate integrals where applicable.

[23] I could be wrong, and it’s hard to define what “reasonable” solutions look like. But I checked this three ways:

1. I tried numerically solving for (and thus ) assuming various joint probability distributions — where and are dependent and generated by functions of Weibull, Pareto, Log-Normal, or Stable random variables. I didn’t see any solutions where and had significant -dependence without simultaneously having some other ridiculous feature (e.g. an infinity at small ).

2. I tried assuming had a few reasonable forms (e.g. ) and solving numerically for . All the solutions I saw were not probability distributions (e.g. had negative probabilities).

3. It’s possible to solve the two integral equations directly if we assume that and are independent () and the solutions are separable ( and ). In this case, and obey the same ODE as the original time-independent system in part **I**. And , which isn’t realistic.
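
For instance, the dependent joint distributions in check 1 could be generated along these lines. The particular coupling below — a shared latent factor multiplying independent Weibull, Pareto, and Log-Normal draws — is an invented example of how two dependent variables might be built from those families, not the exact construction used for the checks.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100_000

# Invented coupling: a common latent factor makes the two variables
# dependent, built from Weibull, Pareto, and Log-Normal draws.
latent = rng.weibull(1.5, size=n)
sizes = (rng.pareto(2.5, size=n) + 1.0) * latent
durations = rng.lognormal(mean=0.0, sigma=0.5, size=n) * latent

# The shared factor induces a clearly positive dependence.
correlation = np.corrcoef(sizes, durations)[0, 1]
```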
