Transforming Quantitative Trading Applications With Persistent Flash Memory
Thu, 20 Mar 2014 12:39:00 GMT
In this article, Mike O’Hara looks at how the implementation of persistent flash memory is having a transformative effect on application performance at quantitative hedge funds, particularly where large amounts of data are analysed.
In the last fifteen years or so, the financial markets have seen a massive growth in the number of quantitative and systematic hedge funds, investment firms whose trading decisions are made solely on the basis of computational analysis.
Many of these firms are processing increasingly large datasets, and running increasingly complex computations against them, in order to analyse and back-test their trading strategy ideas and to run those strategies live in production.
To meet the performance requirements of processing all this data, firms are running ever-larger compute farms containing multiple CPUs, growing amounts of memory and the fastest disks they can get their hands on.
“The high throughput, time-critical computations that we run in order to test out our trading strategies rely on an extensive combination of both historical and real-time data, and many of these computations also make use of parallel and grid computing techniques”, says the CTO of one such hedge fund.
Processing the data behind these trading strategies is often extremely write-intensive: the results of each calculation (and there can be millions of them) have to be stored until the job is complete. Persistence of data for the lifetime of each job is therefore an absolute prerequisite.
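The pattern described here can be sketched in a few lines. This is an illustrative example only, not any firm's actual pipeline: `run_backtest_job` and the stand-in calculation inside it are hypothetical, and the point is simply that every result is written out, and flushed to persistent storage, before the job is considered complete.

```python
# Hypothetical sketch of a write-intensive back-test job: each
# calculation's result is persisted as it is produced, and the job
# only finishes once every result is safely on storage.
import json
import os
import tempfile

def run_backtest_job(parameter_sets, store_path):
    """Run a batch of strategy calculations, persisting every result."""
    results_written = 0
    with open(store_path, "w") as store:
        for params in parameter_sets:
            result = {"params": params, "pnl": params * 0.1}  # stand-in calculation
            store.write(json.dumps(result) + "\n")  # persist each result
            results_written += 1
        store.flush()
        os.fsync(store.fileno())  # force the data down to persistent storage
    return results_written

path = os.path.join(tempfile.gettempdir(), "backtest_results.jsonl")
print(run_backtest_job(range(1000), path))  # → 1000
```

With millions of such writes per job, the `fsync` on the critical path is exactly where the storage subsystem's write latency starts to dominate.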
“Data persistence presents us with a challenge, because even when we use the fastest disks to store persistent data, they still tend to be the weakest link in the chain when it comes to our high performance, high throughput applications”, continues the CTO.
“With relatively high read and write latencies to the disk subsystem, only so many concurrent jobs can be run before bottlenecks start to occur, leaving CPUs unused. And having CPUs sitting idle in an expensive compute farm while jobs back up is not only an inefficient use of our compute resources, but also gives us sub-optimal ROI”, he says.
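The CTO's point can be made concrete with a back-of-envelope model: a worker that alternates CPU work with synchronous storage I/O is idle for the entire I/O wait. The millisecond figures below are assumptions chosen for illustration, not measurements from any firm.

```python
# Simple model of the bottleneck: a worker alternates compute_ms of
# CPU work with io_wait_ms of synchronous storage I/O, and is idle
# for the whole wait.

def cpu_utilisation(compute_ms, io_wait_ms):
    """Fraction of time a worker spends computing rather than waiting."""
    return compute_ms / (compute_ms + io_wait_ms)

# e.g. 5 ms of computation per result, 5 ms to write it to disk...
disk = cpu_utilisation(compute_ms=5.0, io_wait_ms=5.0)
# ...versus ~0.1 ms to write the same result to a flash array.
flash = cpu_utilisation(compute_ms=5.0, io_wait_ms=0.1)

print(f"disk: {disk:.0%}, flash: {flash:.0%}")  # → disk: 50%, flash: 98%
```

Under these assumed numbers, half the compute farm sits idle waiting on disk; cutting the write latency reclaims nearly all of it, which is the ROI argument being made.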
This is an issue that more and more quant trading firms face as the sheer volume of data they have to process continues to increase.
So what can be done?
Performance Improvements with Flash Memory
The answer lies in flash memory. A growing number of quantitative and systematic hedge funds are now stripping out their physical disks and replacing them with arrays of persistent flash memory, where the read/write performance is orders of magnitude faster than that of disk.
With disk operations removed from the critical path of completing jobs, disk contention ceases to be the limiting factor. Not only does this significantly improve how efficiently the compute resources are utilised, it means that firms can test and run strategies within timeframes that were previously considered impossible, leading to new and unique trading and investment opportunities.
“All of our Capital Markets clients are striving to make better decisions quicker”, says Steve Willson, VP Technology Services EMEA at Violin Memory.
“To do this they need to process existing and live data, and often reprocess the data for many scenarios, which means they need an enterprise-class persistent data platform that can concurrently read and write gigabytes of data per second, as query datasets are built and then queried in real time by multiple users”, he explains.
Firms that utilise this relatively new technology face some decisions regarding how and where it can be used.
For example, the most common type of persistent flash memory is MLC (multi-level cell). However, SLC (single-level cell) technology, which stores less data per cell than MLC, offers lower power consumption, faster write speeds and greater reliability and endurance, so it is often more appropriate for these types of environment.
Another challenge is how to monitor the health and performance of flash memory arrays in real time, in the same way firms would with traditional disk arrays. This visibility is a key issue because it tells firms exactly how much data they can drive through their systems.
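The kind of health check being described might look like the sketch below. There is no standard API for flash-array telemetry, so the `stats` sample and both thresholds are hypothetical stand-ins for whatever a vendor's monitoring interface actually exposes.

```python
# Minimal sketch of a real-time flash-array health check. The telemetry
# fields and alert thresholds are illustrative assumptions.

WEAR_ALERT = 0.90       # fraction of rated write endurance consumed
LATENCY_ALERT_US = 500  # write latency above this suggests saturation

def check_array_health(stats):
    """Return a list of warnings from one telemetry sample."""
    warnings = []
    if stats["wear_level"] >= WEAR_ALERT:
        warnings.append("flash endurance nearly exhausted")
    if stats["write_latency_us"] >= LATENCY_ALERT_US:
        warnings.append("write latency elevated: array may be saturated")
    return warnings

sample = {"wear_level": 0.35, "write_latency_us": 620}  # illustrative sample
print(check_array_health(sample))  # → ['write latency elevated: array may be saturated']
```

A dashboard built on checks like this is what gives firms the visibility, into wear, latency and throughput headroom, that the article argues is essential.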
As quantitative hedge funds start to invest heavily in persistent flash memory to replace disks, they are realising some clear benefits.
“As soon as we migrated from disk to flash arrays, we saw a three-fold increase in application throughput, which means we’ve been able to completely transform our trading applications”, says the CTO.
“We can now run deep analysis on strategy ideas that would have been impossible previously, and which have led to some very profitable trading strategies”, he says.
This is not unusual, according to Violin’s Willson.
“Some clients report an 8x improvement in overall processing capability by using Violin Memory flash memory arrays. Some individual queries show a 100x improvement in throughput. No other persistent platform delivers higher sustained performance in this arena”, he says.
The message is clear. Quantitative hedge funds that install arrays of persistent flash memory in place of physical disk arrays can not only process larger datasets in shorter timeframes, revealing new trading and investment opportunities, but at an infrastructure level they can also fully utilise their compute resources, realising greater ROI. And with the necessary dashboards in place, they can observe the health, performance and usage of their memory arrays in real time, giving them complete control over their data and compute flow.
For more details regarding Violin Memory’s flash memory solutions and their applications within the financial markets sector, visit www.violin-memory.com