
Roundtable Report - Fine-tuning the performance of HPC for financial services

Mon, 07 Aug 2017 08:39:36 GMT           

By Stef Weegels, Global Director of Sales, Verne Global

 

Last month a roundtable hosted by the Realization Group and Verne Global saw a group of industry experts – including lead participants from NVIDIA, the Numerical Algorithms Group, big xyt and the National Physical Laboratory – pick out the key issues that finance needs to address in order to fully realise the value of high performance computing (HPC).

HPC, used to enhance the performance of solutions with a heavy compute component, has a long legacy within industry. Today, its potential within financial services is still in its nascent stages. Some of the challenges the finance sector currently faces – notably the impact of regulations such as MiFID II, Basel and the Fundamental Review of the Trading Book (FRTB) – are computationally heavy, lending themselves to the use of HPC, particularly for firms that trade in high volume and at high speed.

Challenges

There are clearly many applications that can benefit from HPC. Being able to execute highly parallelisable algorithms such as Monte Carlo simulation on HPC infrastructure allows banks to shift from overnight to intraday risk calculations, for example. With the rise of artificial intelligence models such as neural networks being applied to finance, HPC can also be used to accelerate the training of those networks across large datasets, a classic parallel processing problem.
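
To make the parallelism concrete, here is a minimal sketch in Python of a Monte Carlo risk run split across worker processes. The portfolio value, volatility and path counts are illustrative assumptions rather than figures from the roundtable; the point is simply that independent scenarios scale almost linearly with the cores available.

```python
# Minimal sketch of a parallel Monte Carlo risk run; all figures are assumptions.
import numpy as np
from concurrent.futures import ProcessPoolExecutor

PORTFOLIO_VALUE = 100_000_000   # assumed portfolio value
MU, SIGMA = 0.0, 0.02           # assumed daily drift and volatility
CONFIDENCE = 0.99

def simulate_losses(n_paths: int, seed: int) -> np.ndarray:
    """Simulate end-of-day losses for n_paths independent scenarios."""
    rng = np.random.default_rng(seed)
    returns = rng.normal(MU, SIGMA, n_paths)
    return -PORTFOLIO_VALUE * returns   # losses expressed as positive numbers

def parallel_var(total_paths: int = 4_000_000, workers: int = 8) -> float:
    """Split the simulation across processes and pool the loss samples."""
    chunk = total_paths // workers
    with ProcessPoolExecutor(max_workers=workers) as pool:
        results = pool.map(simulate_losses, [chunk] * workers, range(workers))
    losses = np.concatenate(list(results))
    return float(np.percentile(losses, CONFIDENCE * 100))

if __name__ == "__main__":
    print(f"99% one-day VaR estimate: {parallel_var():,.0f}")
```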

However, there are a number of technical challenges that need to be addressed when adopting HPC, particularly around scale, cost and resource allocation. From a hardware perspective, firms need to assess the ability of standard central processing units (CPUs) such as x86 versus more specialised chips such as graphics processing units (GPUs) to support specific requirements. Although parallelised tasks lean towards the use of GPUs to speed up calculations, alternative mathematical approaches such as adjoint algorithmic differentiation (AAD) are increasingly being used on x86 chipsets.
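
As a rough illustration of the adjoint idea, the toy example below (hand-written rather than generated by an AAD tool, with a placeholder payoff and made-up inputs) computes a discounted payoff in a single forward pass and then sweeps backwards once to recover all of the input sensitivities at roughly constant cost.

```python
# Hand-written adjoint (reverse-mode) pass for a toy discounted payoff;
# the payoff and inputs are placeholders for illustration only.
import math

def price_and_greeks(spot: float, rate: float, maturity: float, strike: float):
    # ---- forward pass, keeping intermediates ----
    df = math.exp(-rate * maturity)       # discount factor
    fwd = spot / df                       # forward price
    payoff = fwd - strike                 # toy linear payoff
    value = df * payoff

    # ---- reverse (adjoint) pass: seed d(value)/d(value) = 1 ----
    value_bar = 1.0
    df_bar = value_bar * payoff           # value = df * payoff
    payoff_bar = value_bar * df
    fwd_bar = payoff_bar                  # payoff = fwd - strike
    spot_bar = fwd_bar / df               # fwd = spot / df
    df_bar += -fwd_bar * spot / df ** 2
    rate_bar = df_bar * (-maturity) * df  # df = exp(-rate * maturity)
    # one backward sweep yields every input sensitivity
    return value, {"dV/dSpot": spot_bar, "dV/dRate": rate_bar}

print(price_and_greeks(100.0, 0.02, 1.0, 95.0))
```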

However, when financial services clients have to assess whether to stay on x86 or move to GPU, any change in hardware has repercussions. One issue is that the two use different software code bases, so moving from x86 to GPU will require a team of people capable of coding in a relevant language such as CUDA, for example. Another is that the GPU is very power hungry. While the amount of compute delivered per watt by a GPU is phenomenal compared with a CPU, data centres need to be run ‘hot’ and the scale of processing can make that a very expensive proposition indeed. Participants agreed that a strategy of selecting scalable data centre solutions with low electricity prices is imperative.

Consequently, the hardware costs – both initial outlay and operational expenditure – have to be assessed as a firm looks at its HPC requirements.

Performance

It was observed that while the GPU is a “brute force” technology useful for very specific, computationally heavy tasks, many banks do not make good use of their CPUs. One participant cited the common practice of copying I/O data many more times than necessary, which inevitably leads to poor performance.
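
The copying problem is easy to reproduce. The NumPy fragment below is a simplified illustration of the general point rather than the specific case cited: the first version allocates a fresh intermediate array at every step, while the second reuses a single buffer.

```python
# Simplified illustration of unnecessary copying versus in-place updates.
import numpy as np

prices = np.random.default_rng(0).random(10_000_000)

def scaled_copies(x: np.ndarray) -> np.ndarray:
    # each operator below allocates and fills a brand-new array
    return ((x * 1.01) + 0.5) / 2.0

def scaled_in_place(x: np.ndarray) -> np.ndarray:
    out = x.copy()      # one deliberate copy so the input is left untouched
    out *= 1.01         # subsequent steps reuse the same buffer
    out += 0.5
    out /= 2.0
    return out

assert np.allclose(scaled_copies(prices), scaled_in_place(prices))
```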

At the technical level, certain elements of HPC, such as getting data in and out of the risk models, also present challenges that need to be overcome in order to make applications run successfully. Not all applications are going to sit in the same environment, so there needs to be a de-centralisation of processing.

HPC architecture

When using a distributed architecture, it is important to monitor bottlenecks and the latency of data across the network, especially as latency can impact real-time processing far more than overnight batch processing. Accurate time-stamping of data gives firms greater confidence that the timing of events is being recorded correctly. This allows for different levels of latency within an infrastructure, so that highly expensive low-latency architecture is reserved for time-sensitive activity while non-time-sensitive activity is supported appropriately.
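
A minimal sketch of per-hop time-stamping is shown below. The stage names and payload are hypothetical, and in a genuinely distributed set-up the hosts' clocks would need to be synchronised (for example via PTP) for hops measured on different machines to be comparable.

```python
# Sketch of per-hop time-stamping; stage names and payload are hypothetical.
import time
from dataclasses import dataclass, field

@dataclass
class TimedEvent:
    payload: dict
    stamps: list = field(default_factory=list)   # (stage_name, nanoseconds)

    def stamp(self, stage: str) -> None:
        self.stamps.append((stage, time.monotonic_ns()))

    def hop_latencies_us(self) -> dict:
        # latency of each hop between consecutive stages, in microseconds
        return {f"{a}->{b}": (t2 - t1) / 1_000
                for (a, t1), (b, t2) in zip(self.stamps, self.stamps[1:])}

event = TimedEvent(payload={"order_id": 42})
event.stamp("gateway")
event.stamp("risk_check")
event.stamp("matching")
print(event.hop_latencies_us())
```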

When setting up such architectures, there is of course a cost associated with moving large quantities of data around, which makes coordination between the locations key. However, it was noted that the price of connectivity has fallen dramatically; there is more headroom in connectivity available; and sophisticated networks mean that access to such connectivity is nearly global. This enables organisations to operate their HPC applications from a de-centralised data centre, with latency-sensitive processes set up as necessary near the trading centres.

This can be particularly valuable for firms such as electronic market makers, whose trading decisions are typically made on servers co-located next to a trading venue’s matching engine. One roundtable participant gave the example of a market maker developing, back-testing and optimising strategies offline, away from the highly expensive co-located set-up used for trade execution. In this example, a highly scalable HPC environment is used to prepare the strategies and produce the results, and the calibrated model is then fed back to the co-located trading engines.
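
The split can be sketched roughly as follows. The strategy parameters, scoring function and file name are placeholders, but the shape of the workflow matches the example given: heavy calibration runs offline in the HPC environment, and only a small calibrated artefact is shipped to the co-located engine.

```python
# Sketch of offline calibration feeding a co-located engine; the strategy,
# scoring function and file name are placeholders for illustration.
import json
import numpy as np

def backtest_score(spread: float, skew: float, returns: np.ndarray) -> float:
    """Placeholder objective standing in for a full historical back-test."""
    return float(np.mean(returns) * spread - np.var(returns) * abs(skew))

def calibrate(returns: np.ndarray) -> dict:
    """Grid search run on the scalable HPC side, far from the exchange."""
    grid = [(s, k) for s in np.linspace(0.01, 0.10, 10)
                   for k in np.linspace(-0.5, 0.5, 11)]
    best = max(grid, key=lambda p: backtest_score(p[0], p[1], returns))
    return {"spread": float(best[0]), "skew": float(best[1])}

# Offline: calibrate against history and write a tiny deployment artefact.
history = np.random.default_rng(1).normal(0.0, 0.001, 1_000_000)
with open("calibrated_model.json", "w") as fh:
    json.dump(calibrate(history), fh)

# Co-located engine: loads only the calibrated parameters, no heavy compute.
with open("calibrated_model.json") as fh:
    params = json.load(fh)
print(params)
```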

It was noted that finding the right data centre partners with the right cost structures, offering a wide range of options, can help firms avoid unnecessary costs and complexity when taking this approach.

Hybrid Approach

The need to increase computing performance stems from the more quantitative approach to regulation and regulatory reporting that authorities are demanding today. Back-testing of trading systems is required, creating pressure to build systems that can store large amounts of data. Running calculations across these massive data sets will make the use of HPC more important than ever. Banks will need to consider their use of cloud versus self-hosted environments, assessing the use of burst capability within a cloud structure to build a hybrid model that minimises the cost of cloud usage.
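
One simple way to picture the burst element of such a hybrid model is as an overflow rule: the steady-state workload stays on the self-hosted cluster and only the excess spills to the cloud. The capacity and cost figures in the sketch below are assumptions for illustration only.

```python
# Rough sketch of a cloud-burst overflow rule; capacity and cost figures
# are assumptions for illustration, not benchmarks from the roundtable.
ON_PREM_SLOTS = 500          # assumed self-hosted core count
CLOUD_RATE_PER_SLOT = 0.09   # assumed hourly cost per cloud core

def plan_capacity(slots_required: int) -> dict:
    on_prem = min(slots_required, ON_PREM_SLOTS)
    burst = max(slots_required - ON_PREM_SLOTS, 0)
    return {
        "on_prem_slots": on_prem,
        "cloud_slots": burst,
        "est_cloud_cost_per_hour": burst * CLOUD_RATE_PER_SLOT,
    }

# The daily baseline fits on premises; the end-of-quarter risk run bursts.
print(plan_capacity(slots_required=380))
print(plan_capacity(slots_required=2_000))
```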

Where firms are upgrading legacy software to the latest version, one of the biggest headaches providers reported was explaining any change in the numbers that resulted. Explaining to business people without a maths background that mathematical optimisation is a complicated topic, and that different methods can legitimately produce slightly different answers, can result in the business teams deciding to leave the project for another day.
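
The point is easy to demonstrate. In the small example below, two standard SciPy optimisers are applied to the same test objective (the Rosenbrock function, standing in here for a model-calibration problem) and stop at answers that differ at the level of numerical tolerance, which is expected behaviour rather than a defect.

```python
# Two standard optimisers on the same objective give slightly different
# answers; the Rosenbrock function stands in for a calibration problem.
import numpy as np
from scipy.optimize import minimize

def objective(x: np.ndarray) -> float:
    return float((1 - x[0]) ** 2 + 100 * (x[1] - x[0] ** 2) ** 2)

x0 = np.array([-1.2, 1.0])
for method in ("Nelder-Mead", "BFGS"):
    res = minimize(objective, x0, method=method)
    print(f"{method:12s} -> x = {res.x}, f(x) = {res.fun:.3e}")
```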

There is still a lot of education needed to ensure that the best solutions are implemented where a hybrid approach can be used. Every firm has to review its business in terms of competitive advantage and decide whether it really makes sense to manage the technology stack end-to-end. Technology is not only an enabler that brings costs down; it also enables new, disruptive players. Market share will shift away from big players that are not adapting quickly.

In conclusion, there is much yet to be decided. Regulation over the coming years will play a big part in the decision-making process and will settle some of the uncertainties for the industry, allowing businesses to focus on the next step for HPC: how it can be used in the generation of alpha.

Note: You can read the full Financial Markets Insight paper around which the roundtable discussion was framed here.