
The Trading Mesh

In brief, colocation beats the speed of light

Thu, 09 Apr 2015 04:48:25 GMT           

A recent commentary article Physics in finance: Trading at the speed of light (Mark Buchanan, Nature, 11 February 2015) made a seemingly logical statement: in order to take advantage of price differences between two distant financial exchanges, it is best to station a trading-strategy computer at the midpoint between the two exchanges – for example, on a ship in the middle of an ocean. The geodesic midpoint is where the two price levels can be compared at the earliest possible time. Therefore – the enticing logic goes – this is also the ideal point at which to make the arbitrage decision: buy at a low price on one side and sell at a higher price on the other.

 

Buchanan's logic follows the reasoning in Relativistic statistical arbitrage (A. D. Wissner-Gross and C. E. Freer, Physical Review E 82, Nov 5, 2010); although – to be fair – the assumptions of the two articles are somewhat different.

 

Having spent years designing distributed global trading systems, I had serious reservations regarding the optimality of the "midpoint trading" mechanism – but could not find any published theoretical results to the contrary. This motivated me to write the paper Colocation beats the speed of light (Edward Howorka, Feb 25, 2015). The post that you are reading presents the background and a brief summary of the paper's results without getting into formal proofs and technical details.

 

In short, the paper demonstrates that a trading strategy using cooperating computers colocated with the exchanges will perform better than any other configuration, including the midpoint computer setup. This advantage of distributed computing is non-obvious and – to my knowledge – has never before been explained so simply and convincingly. The final section considers the reverse problem: using a distributed exchange architecture to eliminate the need for distributed arbitrage strategies.

 

The controversy

 

The Nature article states (emphasis mine):

 

In future, when airborne laser networks span the oceans, things may get even stranger. The location at which traders get the earliest possible information from two exchanges lies at their mid-point – between Chicago and London, this is in the middle of the Atlantic Ocean. At such a site, traders could exploit a technique called "relativistic arbitrage" to profit from momentary imbalances in prices in Chicago and London.

 

To explain: special relativity says that nothing can travel faster than the speed of light, c. Hence, a trader standing a distance D away from an exchange can find out what happened there, in the best circumstance, at a time T = D/c after it happened. Between major trading centres around the globe, such delays can be from a few to tens of milliseconds. If a trader stands halfway between the two exchanges, he or she will receive information from both after the same interval, T = D/c. Anywhere else, the distance to at least one of the exchanges would be greater and information would take longer to get there.

 

In other words, within a few years it may become profitable to station a ship or other trading platform near halfway points between pairs of financial centres worldwide (see Fast-trading hotspots).
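For concreteness, here is a minimal sketch of the T = D/c arithmetic quoted above. It assumes great-circle distances, the vacuum speed of light, and approximate city-centre coordinates (my own illustrative figures, not exchange data-center locations; real fibre and microwave routes are longer and slower):

```python
from math import radians, sin, cos, asin, sqrt

C_KM_PER_MS = 299_792.458 / 1000  # speed of light in vacuum, km per millisecond

def great_circle_km(lat1, lon1, lat2, lon2, radius_km=6371.0):
    """Haversine great-circle distance between two points, in km."""
    p1, p2 = radians(lat1), radians(lat2)
    dp = radians(lat2 - lat1)
    dl = radians(lon2 - lon1)
    a = sin(dp / 2) ** 2 + cos(p1) * cos(p2) * sin(dl / 2) ** 2
    return 2 * radius_km * asin(sqrt(a))

# Approximate city-centre coordinates, for illustration only.
chicago = (41.88, -87.63)
london = (51.51, -0.13)

d_km = great_circle_km(*chicago, *london)
t_ms = d_km / C_KM_PER_MS  # best-case one-way delay, T = D/c
print(f"Chicago-London: {d_km:.0f} km, best-case one-way delay {t_ms:.1f} ms")
```

With these assumptions the one-way Chicago-London delay comes out at roughly 21 ms – squarely in the "few to tens of milliseconds" range the quote describes.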

 

The map in the Nature article shows all of the world's financial centers and over a thousand midpoint locations where high-frequency traders would need to position their computers to optimally exploit potential arbitrage opportunities. As expected, many of these "optimal" locations are truly in the "middle of nowhere" (one lies in the desolate spot where flight MH 370 disappeared). The map does not indicate the relative significance of the "fast trading hotspots", but it is clear that the most important ones lie in the North Atlantic Ocean.

 

The news regarding the optimal placement of HFT (high-frequency trading) computers (on ships!) traveled fast – see stories in PhysicsCentral, MarketWatch, International Business Times, and WatersTechnology. Most pundits accepted the "fact" as self-evident; one expressed some doubts. Bill Harts, CEO of Modern Markets Initiative, a group that supports high-frequency trading, dismissed the idea, telling MarketWatch: "Even if a trader could receive data from a market faster at such a location, in order to profit from it she would have to transmit orders to a market from the same location and would always be slower than traders stationed at the actual markets."

 

The midpoint puzzle

 

The Nature article is trivially correct in stating that the midpoint between the two exchanges is the point where an arbitrage opportunity can be first discovered. What is less obvious – given that Mark Buchanan, a respected physicist and author, got it wrong – is that the midpoint is a particularly bad place to make trading decisions.

 

Here, Bill Harts' gut feeling is correct. In a follow-up IBT article, he states "If this were possible, data center space in Youngstown, Ohio, (roughly half way between the New York and Chicago exchanges) would be at a premium because all the HFTs would locate there. But guess what? There is no HFT in Youngstown, Ohio."

 

So, HFTs know that trading from a midpoint location between exchanges is a bad idea. However, Harts' statement is not entirely correct – the midpoint trader is in fact as fast as anyone could be in taking advantage of a price disparity between the two exchanges. Where she loses compared to "traders stationed at the actual markets" is that, after submitting her orders, she is blind and has no further control over them. Read on to see clearly the truly incredible advantage of exchange colocation and distributed trading strategies.

 

The approach – equivalence of colocated mechanisms

 

We start by showing that there is no need for any "central" computer. Our result is quite general – it applies to any centralized algorithm that interacts with a number of servers:

 

Theorem 1. Any algorithm that runs on a single central computer located anywhere in the world, connected to a number of distant servers, can be re-implemented using a group of computers that are colocated with the respective servers in such a way that the new implementation's behavior (as observed by the servers) is indistinguishable from the original single computer solution.

 

The proof uses a simple Lego-like transformation of the "Central computer mechanism" into a "Colocated group mechanism" using only message replicators, delay generators, and clock offsets. The following two figures illustrate this transformation in the basic two-exchange midpoint scenario of the Nature article. Think of NY and London with a one-way speed-of-light latency d of about 20 milliseconds. The halfway latency is h = d/2, or about 10 ms. The arbitrage algorithm A runs on a midpoint computer C, located on a ship in the North Atlantic Ocean. Here, we transform the Midpoint computer mechanism into an equivalent Colocated pair mechanism (see figures below). You should be able to convince yourself of the equivalence of the two black boxes by taking any "centralized" arbitrage strategy and seeing how it performs in each case. (We provide such a step-by-step illustration here.)

 

Midpoint computer mechanism

 

Colocated pair mechanism
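To make the transformation concrete, here is a small simulation sketch in Python. The latencies, market events, and toy arbitrage rule are my own illustrative inventions (the paper's construction is the authority); the sketch runs the same "centralized" strategy through both black boxes and checks that the order streams arriving at the exchanges are identical:

```python
D = 20.0   # illustrative one-way NY-London latency, in ms
H = D / 2  # midpoint (ship) latency to each exchange

# Toy market events: (time_ms, exchange, price). Purely illustrative.
EVENTS = [(0.0, "NY", 100.0), (1.0, "LDN", 100.6), (7.0, "NY", 100.9)]

def strategy(state, exch, price):
    """Toy 'centralized' arbitrage rule shared by both mechanisms:
    once both prices are known and differ by > 0.5, buy low / sell high."""
    state[exch] = price
    if len(state) == 2 and abs(state["NY"] - state["LDN"]) > 0.5:
        low = min(state, key=state.get)
        high = max(state, key=state.get)
        return [(low, "BUY"), (high, "SELL")]
    return []

def midpoint_run():
    """Midpoint computer mechanism: every feed reaches the ship after H;
    each order spends another H on the wire back to an exchange."""
    state, orders = {}, []
    for t, exch, price in EVENTS:
        for dest, side in strategy(state, exch, price):
            orders.append((t + 2 * H, dest, side))  # decision at t + H
    return sorted(orders)

def colocated_run():
    """Colocated pair mechanism: each site runs a replica of the strategy.
    The local feed passes through a delay generator of 2H (the full round
    trip to the midpoint); the remote feed crosses the wire in D = 2H.
    Every replica therefore sees each event at real time t + 2H, exactly
    when the midpoint computer's decision would have arrived, and can
    deliver local orders with no wire delay."""
    orders = []
    for site in ("NY", "LDN"):
        state = {}
        for t, exch, price in sorted(EVENTS):  # all events seen at t + 2H
            for dest, side in strategy(state, exch, price):
                if dest == site:               # deliver locally, instantly
                    orders.append((t + 2 * H, dest, side))
    return sorted(orders)

assert midpoint_run() == colocated_run()  # indistinguishable to the exchanges
print(midpoint_run())
```

In both runs the exchanges receive exactly the same orders at exactly the same times – which is all an exchange can ever observe.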

 

Colocation provides a better solution for high-speed applications

 

We have shown that the central computer mechanism can be replaced by a functionally identical mechanism using a group of computers colocated with the exchange servers. Paradoxically, in order to achieve the equivalent behavior we had to make use of grotesque artificial delay generators that delay transmission from each exchange server to the colocated computer by the full round-trip latency between the server and the central computer. For most algorithms, removing the delay generators in the colocated scenario would allow the designers to develop a new distributed algorithm that is far better than the central computer algorithm. The nature and the significance of the improvements would depend on the algorithm's function. In most cases, the central computer algorithm would have to be rewritten from scratch to take advantage of the foreknowledge of the local market (and – in rare cases – no improvement would be possible). However, most high-frequency trading firms would pay dearly to see 20 milliseconds into the local market's future at the time they submit an order. In other words, if the computer located at the midpoint between financial exchanges represented the best way to cope with speed-of-light limitations, then using computers colocated with the exchanges "beats the speed of light" – whence the title of this post.
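The arithmetic behind that 20 ms claim is worth spelling out; here is a sketch using the same illustrative latencies as above (in-data-center delays treated as negligible):

```python
D = 20.0   # illustrative one-way latency between the two exchanges, ms
H = D / 2

# A price event occurs at a local exchange at t = 0 ms.
midpoint_reaction = 2 * H  # event travels H to the ship, order travels H back
colocated_reaction = 0.0   # delay generator removed: see and act immediately

print(f"midpoint-equivalent strategy reacts after {midpoint_reaction:.0f} ms")
print(f"colocated strategy reacts after ~{colocated_reaction:.0f} ms, acting "
      f"on local information that is {2 * H:.0f} ms fresher")
```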

 

But even without any additional improvements, the equivalent colocated strategies need to be deployed in only N financial centers worldwide, not at the N⋅(N-1)/2 "fast trading hotspots" shown in the Nature article. Finally – and best of all – we will never need the scary high-speed trading ships!
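The deployment count is easy to check. Assuming, say, 52 financial centres (a number chosen only because it roughly matches the "over a thousand" midpoints on the Nature map):

```python
n = 52                       # assumed number of financial centres
colocated = n                # one colocated deployment per exchange
hotspots = n * (n - 1) // 2  # one midpoint per pair of exchanges
print(colocated, hotspots)   # 52 vs 1326
```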

 

Colocation provides the optimal solution

 

Once we have shown that a single central computer mechanism can always be replaced by a functionally equivalent or better mechanism using a fully colocated group (a set of N computers colocated with N respective servers), the remaining question is: Can that architecture be improved further? Could we add a few extra computers in the middle and/or remove some of the colocated computers and design an algorithm that is strictly better than any algorithm using a fully colocated group? The answer is no – the fully colocated group architecture is optimal. This follows from the following corollary, which generalizes Theorem 1.

 

Corollary 1. Any distributed algorithm running on a group of computers connected to a number of servers can be re-implemented using a fully colocated group in such a way that the new implementation's behavior (as observed by the servers) is indistinguishable from the original solution.

 

The case for distributed exchanges

 

So far, we have considered Theorem 1 in scenarios consisting of a trading computer connected to many exchange servers, because that scheme fits the midpoint computer scenario in the Nature article. But we may also use Theorem 1 "upside-down" by reversing the roles of exchanges and trading centers.

 

Traditional exchanges have a central matching engine with HFT customers' computers colocated in the same data center. Remote customers may be distributed throughout the world using computers located in access point data centers (provided either by the exchange itself or by third-party companies) that offer secure, reliable, and consistently low-latency connectivity to the central engine.

 

If you consider an exchange's matching engine as the "central computer" and the access point data centers as the "servers" cited in Theorem 1, it follows that the optimal architecture for an exchange order matching machinery would be distributed, with a matching engine component colocated with every access point data center.
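As a back-of-the-envelope illustration (with assumed latencies, not measurements), compare what a remote customer would see under the two architectures. The 150 ms one-way figure is my reading of the ~300 ms London-Tokyo round trips mentioned later in this post:

```python
# Assumed figures for illustration only.
one_way_ms = 150.0       # Tokyo-London one-way latency of the era
in_data_center_ms = 1.0  # order handled within the access point

# Central engine in London: a Tokyo order crosses the wire,
# and the acknowledgment crosses back.
central_ack = 2 * one_way_ms + in_data_center_ms  # ~301 ms

# Matching engine component colocated with the Tokyo access point:
# the order matches locally; engines synchronize in the background.
distributed_ack = in_data_center_ms               # ~1 ms

print(f"central: ~{central_ack:.0f} ms, distributed: ~{distributed_ack:.0f} ms")
```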

 

Obviously, HFTs figured out long ago the advantages of colocation and distributed algorithms for high-speed arbitrage. So, why do exchanges persistently use single central matching engines?

 

The main reason is the exchange's performance and profits. Compared with a single central exchange computer, a distributed exchange architecture incurs penalties in terms of design and behavioral complexity, increased transaction latency, and additional infrastructure costs. These effects would be experienced by all customers, whether or not they engage in arbitrage. A distributed exchange architecture would also reduce the need for arbitrage and the resulting massive trading volumes that benefit the exchange.

 

One area where distributed exchange architecture found its justification is Forex spot trading. For example, when you trade USD (US dollar) against JPY (Japanese yen), you need local information about the state of the economy of each of the two very distant countries. In addition, there are political reasons against trading one's currency in an overseas location.

 

When Forex spot electronic order matching was first introduced in the 1990s, there were initially two USD/JPY exchanges: Reuters (based in London) and Minex (based in Tokyo). London-Tokyo latencies in those days were on the order of 300 ms, easily noticeable even by human traders. When I was tasked with designing a new global EBS Forex exchange, I chose a distributed architecture with three synchronized order-matching engines – in London, NY, and Tokyo. It provided excellent performance in all three regions and satisfied the priorities of the global banks. Eventually, EBS merged with – and replaced – Minex, becoming the dominant global USD/JPY interbank electronic exchange, a position it maintains to this day. The EBS matching architecture (and much more) is described in Architecture for anonymous trading system (Edward R. Howorka and Andrew P. Foray, U.S. Patent No. 7,184,982, issued Feb 27, 2007).

 
