Building the Future of Finance
Mon, 20 Feb 2017 07:21:36 GMT
In this article, Mike O’Hara, publisher of The Trading Mesh, examines the infrastructure needs of financial markets firms as use cases for artificial intelligence and machine learning expand, with John Denheen of Tyler Capital, KPMG’s Robert Mirsky, Trade Informatics’ Aaron Schweiger, Sybenetix’s Taras Chaban, Jan Machacek of Cake Solutions, Michael Cooper of BT Global Banking and Financial Markets, Vincent Kilcoyne of SAS UK & Ireland, and Verne Global’s Tate Cantrell and Stef Weegels.
Artificial intelligence has long played an important role in the development of trading strategies, but the finance sector now stands on the brink of an explosion in use cases that could transform the industry and its underlying technology infrastructure requirements.
While many financial markets participants have invested heavily in recent decades to ensure speed of communication, the emphasis is turning increasingly to cost-effective harnessing of compute power, due to the finance sector’s growing dependence on analysis and interpretation of large data sets to deliver improved outcomes for clients and investors.
To this end, the data management and compute power demands of large-scale artificial intelligence and machine-learning (AI/ML) programmes require buy and sell-side firms to fundamentally reappraise their existing technology capabilities. The precise nature of applications developed by individual firms will influence their needs, but reliable, flexible and immediate access to high-density compute power will become a business-critical consideration for many.
“The strategic focus for the financial sector is shifting from making the fast trade to making the most strategic transaction. The more horsepower you have in your computing engine, the more strategic you can be,” says Tate Cantrell, Chief Technology Officer at Verne Global. An effective AI/ML programme relies on its underlying infrastructure to deliver the required number of cores or processing units to rapidly retrain algorithms with current and past information. Moreover, users need access to large volumes of data and a large number of processors linked together through applications.
“Ultimately, a financial services company should be thinking: ‘Where can I deploy the most storage and the most compute capacity for the lowest cost?’ It’s a matter of making sure they can apply the most compute resources towards making those applications run effectively,” says Cantrell.
Internet giants are already offering ‘infrastructure as a service’ value propositions based on immense cloud-based capacity, while specialist providers are developing facilities tailored to serve the large-scale needs of specific industry sectors, including finance. For firms with substantial requirements, a key factor will be certainty of access and control over their AI/ML supporting infrastructure, but even institutions that have built out their own data centres and networks may find complete ownership a step too far.
Widening range of uses
Proprietary trading firms and quant-based hedge funds have been developing trading models via analysis of huge market data sets for more than two decades. Today, their algorithms are adjusting to market signals with minimal human intervention, meaning they not only react to data consumed in the live market environment, but also learn as they consume the information. Although these models typically work at very high speeds, responding to market movements tick by tick, they can be developed – built, back-tested and tweaked – remotely, at more cost-effective locations.
“Our execution will always be co-located, but there’s a strong case for suggesting our research functions should not be conducted in an expensive data centre. Over the next few years, we’ll be figuring out how we’re going to scale up, bearing in mind how big our data set is going to get, and implementing the appropriate supporting infrastructure,” says John Denheen, Head of Data at proprietary trading and market-making firm Tyler Capital.
“There’s a strong case for suggesting research functions should not be conducted in an expensive data centre.”
John Denheen, Tyler Capital
Gaining ready access to all the data that a firm holds, in an environment where trading models can be built and tested, can present some significant challenges, according to Vincent Kilcoyne, Capital Markets and Fintech Industry Lead at SAS UK & Ireland.
“The challenge for an organisation trying to do this within their own facility is that they probably don’t have that amount of real estate, that amount of computational power, to be able to provide a suitably rich analytical environment to the end user. In a research environment, you need to be able to supercharge the analytical curiosity of your potentially most lucrative people, so they are able to construct trading strategies with data sets that they possibly never would have considered before. You want to give them the ability to do a huge amount of back-testing, without it impacting the day-to-day business of the organisation,” says Kilcoyne.
“In a research environment, you need to be able to supercharge the analytical curiosity of your potentially most lucrative people.”
Vincent Kilcoyne, SAS UK & Ireland
Meanwhile, the development of algorithms that respond to changes in market conditions is only one of a growing range of finance sector applications of AI/ML, broadly defined as systems that improve their performance and accuracy through the consumption of large data sets, including unstructured data, typically with the capacity to respond to natural language queries. To underline the potential of AI/ML, IBM CEO Ginni Rometty recently told the Sibos banking conference: “Financial services can and will lead the way in the adoption and exploitation of cognitive computing.” Applications are being developed to identify and respond to unusual data patterns that might indicate cyber-security breaches, while robo-advisors and other ‘fintech’ startups are using AI/ML to make recommendations to clients, such as suggesting investment strategies based on customer preferences and available products.
According to KPMG’s recent report, ‘Transformative Change’, almost 60% of hedge fund managers see AI/ML as having an impact on how they do business. Robert Mirsky, Global Head of Hedge Funds at KPMG, says the real figure is probably higher as many managers may not consider some existing uses as AI/ML. Use of technology to identify market signals in the development of trading programmes is fairly ubiquitous, whilst around a third of respondents told KPMG their firms already use predictive analytics to uncover new trends and opportunities.
Mirsky says hedge funds are replacing human resources with AI/ML capabilities for a range of repetitive processing tasks, such as data scrubbing. “Many hedge funds are using AI to get an edge from a trading perspective, but are also looking further into the investment process. They’re looking to invest more in AI to have the best in class solutions to generate alpha more effectively than competitors,” says Mirsky, who acknowledges that sell-side firms and other service providers are more advanced in leveraging AI/ML beyond the front office.
“Many hedge funds are using AI to get an edge from a trading perspective, but are also looking further into the investment process.”
Robert Mirsky, KPMG
Pushing the boundaries
Many niche service providers in the wholesale financial markets are already pushing the boundaries of AI/ML, and are simultaneously evolving their technology and operating infrastructures, with a particular focus on data storage and computing capacity.
For New York-based Trade Informatics, analysis of vast quantities of customer data is integral to its service proposition to asset managers, which includes transaction cost analytics, trading strategy and workflow management and consulting. Analysing and improving clients’ trading performance is a data-intensive business, requiring access to significant computing fire-power.
Head of Quantitative Research Aaron Schweiger says the value add of firms like his lies in their domain expertise, not in the underlying infrastructure, but acknowledges its importance in supporting Trade Informatics’ core competence. “There is an increasing role for specialised hardware, e.g. field programmable gate arrays (FPGAs) and graphics processing units (GPUs), to support parallel processing. These emerging technologies can be business critical. Having the right infrastructure to support hardware that gives you maximum performance is actually quite important.”
According to Schweiger, applications that facilitate natural language queries require large-scale infrastructure investments to support timely responses. “An ad-hoc question-answering service may not be able to pre-determine the appropriate database indices. In those cases, custom hardware solutions allow firms to query vast quantities of data. Firms in that space need to leverage the available resources, whether public, like the cloud, or private.”
Similarly, the expansion of AI/ML from analysis of structured market data to construct trading models to using natural language processing to extract semantic value from unstructured data sets has significant infrastructure implications. “Housing data and getting it into a place where you can quickly sift through it becomes key,” explains Schweiger.
Power on demand
An innovative compliance-related application of AI/ML is provided by trade surveillance and investment performance experts Sybenetix. The firm has pioneered the development of enterprise behavioural analytics, and combines data analytics with behaviour profile algorithms to identify and investigate behaviours that might contravene market abuse regulations for clients including asset managers, hedge funds and sell-side firms. Due to the data-intensive nature of the business, Sybenetix’s underlying operating infrastructure must be scalable and deployable to a number of locations cost-effectively. “Our clients must be open to data being stored either on their systems or in a distributed fashion, i.e. via cloud-based capabilities, so that it is easily accessible and recoverable,” adds CEO Taras Chaban.
“We must process data on demand, so we need compute power on demand.”
Taras Chaban, Sybenetix
As well as ensuring security and integrity of client data, timeliness is a priority. “We must process data on demand at critical points in decision making for compliance and investment managers, so we need compute power on demand. Our compute capacity is heavily parallelised, because we need to be able to analyse the performance and decisions of portfolio managers separately and independently. Moreover, it’s not a matter of constant data processing at a steady rate; we need to process in bursts, then the models need to update themselves,” says Chaban.

Although Sybenetix’s software operates on client systems and various cloud infrastructure, it has taken a proprietary approach in the areas where internal expertise adds most value, with the firm’s own memory management system used to present results to clients. “We use a tailored cache management system to show data to clients on screen. Off-the-shelf programmes weren’t able to represent the results of our calculations quickly enough to end-users on multiple different devices. We wanted clients to be able to interact with the analytics on screen immediately, so we built our own.”
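Chaban’s description of heavily parallelised, burst-mode analysis can be sketched in miniature. The sketch below is purely illustrative and not Sybenetix’s code: each portfolio manager’s trades are scored independently, so a “burst” simply fans the managers out across workers. A production system would distribute this across a cluster rather than local threads, and the behavioural metric here is a toy placeholder.

```python
from concurrent.futures import ThreadPoolExecutor

def score_manager(trades):
    # Toy behavioural metric: the fraction of trades above a size threshold.
    flagged = [t for t in trades if t["size"] > 10_000]
    return len(flagged) / len(trades)

def analyse_burst(managers):
    # One "burst": fan the managers out across workers and collect scores.
    # Each manager is analysed separately and independently, as described.
    with ThreadPoolExecutor() as pool:
        scores = pool.map(score_manager, managers.values())
        return dict(zip(managers.keys(), scores))

data = {
    "pm_a": [{"size": 5_000}, {"size": 12_000}],
    "pm_b": [{"size": 20_000}, {"size": 25_000}],
}
print(analyse_burst(data))  # {'pm_a': 0.5, 'pm_b': 1.0}
```

Because each manager’s score depends only on that manager’s trades, the work parallelises with no coordination between workers, which is what makes the burst pattern efficient.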
Jan Machacek, Chief Technology Officer at UK-based technology consultants Cake Solutions, says finance sector firms should not underestimate the scale of commitment required to undertake the development of AI/ML applications, in terms of internal and external resources. “You need responsive and scalable engineering capabilities, but you also need data science skills too,” he says. As well as data storage and classification capabilities, firms must establish a model, training and evaluation data repository to support ongoing management, development and re-evaluation of models, as part of the ‘architecture’ required by long-term AI/ML systems. “Systems must ingest data using a supervised or unsupervised loop for ongoing fine-tuning. Very few people are able to do this successfully,” says Machacek.
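The supervised loop and model repository Machacek describes can be illustrated with a minimal sketch. Every name and the toy “model” below are hypothetical assumptions for illustration: new labelled data arrives, a candidate model is trained, evaluated against a held-out set, and promoted into the repository only if it beats the best score recorded so far.

```python
class ModelRepository:
    """Stores promoted model versions with their evaluation scores."""
    def __init__(self):
        self.versions = []

    def promote(self, model, score):
        self.versions.append((model, score))

    def best_score(self):
        # -inf when empty, so the first candidate is always promoted.
        return max((s for _, s in self.versions), default=float("-inf"))

def train(data):
    # Toy "model": predicts the mean of the labels seen so far.
    labels = [y for _, y in data]
    return sum(labels) / len(labels)

def evaluate(model, holdout):
    # Negative mean absolute error: higher is better.
    return -sum(abs(model - y) for _, y in holdout) / len(holdout)

def retrain_step(repo, data, holdout):
    # One pass of the supervised loop: train, evaluate, promote on improvement.
    candidate = train(data)
    score = evaluate(candidate, holdout)
    if score > repo.best_score():
        repo.promote(candidate, score)
    return repo
```

Calling `retrain_step` repeatedly as fresh labelled data arrives gives the ongoing fine-tuning loop; the repository retains every promoted version, supporting the re-evaluation and rollback Machacek argues long-term AI/ML systems need.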
Moreover, he believes that few service providers are able to deliver the kinds of infrastructure capabilities that finance sector clients will need. Big data analysis requires very expensive processing capabilities, while further expenditure is required on data storage infrastructure as well as networking and connectivity hardware.
“To have enough processing power to deal with big data workloads, you need thousands of computing cores (typically GPUs or specialised computation accelerators), thousands of terabytes of solid state drive storage, and high-speed networking,” he explains.
“Firms… should realise that AI/ML requires engineering on a scale most firms probably haven’t done before.”
Jan Machacek, Cake Solutions
The requirement of AI/ML applications to make iterative changes to how they operate on a continuous basis is a departure for even the most technologically sophisticated firms, suggests Machacek, and should be recognised as such. “Firms need to consider the data management, i.e. the testing, validation, model storage and distribution that goes into constructing, refining and deploying these AI/ML models. Ultimately, they should realise that AI/ML requires engineering on a scale most firms probably haven’t done before,” he says.
Different requirements for different programmes
Vincent Kilcoyne of SAS UK & Ireland believes that significant benefits can be achieved by fully decoupling data from analytics.
“If you have sufficiently advanced analytical capability, the data can reveal things that are probably far in excess of - and far more financially rewarding than - what you may have initially set out to find. But you need to be able to use techniques that allow you to test hypotheses on seemingly unrelated datasets.”
Kilcoyne says this differs from the more common approach of associating a particular dataset with a particular set of analytical models. “You want to make it possible to analyse all of the different datasets and test many more models, so that you can then find the one that is most likely a champion model. This analytical technique allows you to find champion models within a much broader range of data types than you would have previously considered.”
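Kilcoyne’s “champion model” search can be sketched as an exhaustive evaluation of every candidate model over every dataset, keeping the best scorer. The models, datasets and scoring function below are toy placeholders for illustration, not SAS functionality.

```python
def find_champion(models, datasets, score):
    # Evaluate every (model, dataset) pair and keep the highest scorer.
    best = None
    for m_name, model in models.items():
        for d_name, ds in datasets.items():
            s = score(model, ds)
            if best is None or s > best[2]:
                best = (m_name, d_name, s)
    return best  # (model name, dataset name, score)

# Toy candidates: one predicts the series mean, one its last value.
models = {
    "mean": lambda xs: sum(xs) / len(xs),
    "last": lambda xs: xs[-1],
}
# Each dataset pairs an input series with the target it should predict.
datasets = {
    "a": ([1, 2, 3], 3.0),
    "b": ([5, 5, 5], 5.0),
}

def score(model, ds):
    series, target = ds
    return -abs(model(series) - target)  # higher (closer to 0) is better
```

Decoupling the data from the analytics in this way means any model can be tried against any dataset, which is precisely how a champion can emerge from data the analyst “never would have considered before”.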
Firms embarking on AI/ML programmes must also contend with the reality of data sets in multiple locations in this increasingly distributed world, which may thwart immediacy of access. “When your applications are running over data from multiple locations, the signals from those data sets must be propagated and presented very precisely,” says Michael Cooper, CTO of the financial technology services group at BT Global Banking and Financial Markets. “Where you’ve got a distributed set of data to analyse, you need to pull either the analytics or the data together. Some firms localise the analytics, i.e. they pull the output from the analytics back to a point where they’re acting on it in conjunction with other signals; others pull the data back to where they process it. In both instances, being able to network accurately is important.”
Cooper also points out that our increasing ability to generate and capture data from so many social and economic activities inevitably means that AI/ML applications will need to be able to process ever larger data sets, which may or may not become more standardised over time. Simply put, scale is essential. “The ability of networking capabilities to manage large data sets will become really important,” he says.
“The security and sensitivity of data sets will determine where and how they are stored.”
Michael Cooper, BT Global Banking and Financial Markets
From a data storage perspective, Cooper says the “hyper-scale cloud computer storage capabilities of major commercial providers are still not able to meet some of the demands of financial markets data sets.” As such, privacy, security and capacity considerations may lead some finance sector firms to continue to seek private data storage and management options. “What attributes does your data have? How do you classify them? The security and sensitivity of data sets will determine where and how they are stored,” he says.
As we collect, store and analyse data from a wider diversity of sources, classification differences will become more apparent, Cooper suggests, with further handling implications. “If data scales in a particular dimension, it may not be possible to use it in one infrastructure domain. Data has many different attributes, such as shelf-life. How do you manage data which has a retention period of seven years versus seven days, or becomes stale much more quickly? How you clean your data or delete your data will influence how you manage it,” says Cooper.
As AI/ML takes a more central role in business strategy and thus consumes larger slices of budget, firms will be scrutinising their arrangements to ensure they get the best bang for their buck. “If leveraging data in a machine learning application is important to your business, you’re likely to want control of the entire stack, from the power, the internet connectivity, to the environmental controls. It depends on the business use case, but for companies that are dealing with the biggest data and the most important problems, it’s absolutely important to get the entire stack correct,” says Schweiger at Trade Informatics.
“For companies that are dealing with the biggest data and the most important problems, it’s absolutely important to get the entire stack correct.”
Aaron Schweiger, Trade Informatics
But control does not denote ownership in the sense understood by banks that have traditionally developed much of their own operating infrastructure. Verne Global’s Cantrell points to the importance of bespoke hardware to the supporting infrastructure for AI/ML programmes as a reason why firms may not be able to support AI/ML within their existing environment.
“If you want to use customised hardware to give you a better output, you can create a purpose-built facility, which requires expertise on developing a highly industrialised infrastructure for this very-high intensity equipment, more akin to operating a factory than a compute environment. But it’s hard for financial services firms or even manufacturers to make that ongoing hardware commitment. Once you get into high-performance computing, you’ve got to have a pretty strong appetite and invest a lot of capital and a lot of intellectual rigour to servicing those loads.”
Cake Solutions’ Machacek agrees that firms will want to exert control over more aspects of their AI/ML operations as they assume greater importance, but asserts there is still scope to utilise existing, proven capabilities, such as DC/OS (data centre operating system), a well-established platform for running AI/ML apps, based on open source code. “DC/OS can be used very successfully to describe the entire infrastructure-as-code,” he says.
Even though such common resources can be leveraged, many factors point toward AI/ML being something of an arms race. “If your success is tied to how much compute you can apply to retraining algorithms, you’re going to need the biggest engine. You don’t get the biggest engine by going to the ‘as a service’ providers. If you’re looking to scale, you need to take responsibility for some of that infrastructure into your own hands, to avoid the risk of finding yourself limited at some point in the future,” adds Cantrell.
“If you can foresee the actual cost of power for ten years or more, then you can build that into your cost model.”
Stef Weegels, Verne Global
According to Stef Weegels, Business Development Director at Verne Global, trading firms and finance sector service providers can benefit significantly from disaggregating their technology infrastructure. Such firms may choose to host systems and capabilities more cost-efficiently away from financial centres, as high-density compute capacity becomes increasingly available in lower-cost environments that may offer other benefits, such as lower carbon emissions or tax advantages. Weegels highlights the fact that two leading London-based hedge funds have recently relocated much of their compute infrastructure to Verne’s data centre campus in Iceland for reasons such as these.
“We have been able to negotiate our power contracts with a fixed term for 15 years, which is possible because Iceland’s power companies draw all of their power from geothermal and hydroelectric. Because Iceland only taps into 10% of that, they’re quite certain of the actual cost of power generation over the coming years.” Weegels contrasts this with the power profiles of the London and Frankfurt markets, which are reliant on older energy infrastructure, suffer from a critical lack of supply, and are adversely impacted by taxes and charges associated with the move towards renewable energy. “All of these factors lead to future uncertainty around power pricing,” he says, “whereas if you can foresee the actual cost of power for ten years or more, then you can build that into your cost model.”
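Weegels’s point about predictable power pricing reduces to simple arithmetic. The figures below are purely illustrative (a hypothetical 2 MW load and invented tariffs, not Verne Global pricing), but they show how a flat long-term contract can be built directly into a cost model, whereas a drifting market rate compounds year after year.

```python
def total_power_cost(mw, prices_per_mwh):
    # Energy cost over a multi-year horizon: load (MW) x hours x price,
    # summed over one price per year.
    hours_per_year = 8760
    return sum(mw * hours_per_year * p for p in prices_per_mwh)

# Fixed 10-year contract at a flat $45/MWh (illustrative).
fixed = total_power_cost(2, [45] * 10)
# Market rate assumed to start at $40/MWh and rise $3/MWh per year.
variable = total_power_cost(2, [40 + 3 * y for y in range(10)])

print(fixed)     # 7884000
print(variable)  # 9373200
```

Under these assumed numbers the market scenario starts cheaper but ends up roughly 19% more expensive over the decade; the fixed contract’s value lies as much in the certainty of the total as in the total itself.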
“With high-performance computing, the bottom line is really all about energy consumption, which may grow exponentially for some firms as their needs develop.”
Tate Cantrell, Verne Global
Ultimately, the long-term cost of accessing the most effective AI/ML infrastructure will weigh heavily. “The ’as-a-service’ providers are trying to improve the predictability of their pricing, but with high-performance computing, the bottom line is really all about energy consumption, which may grow exponentially for some firms as their needs develop,” explains Cantrell. “If you’re not at least paying for the attention to how the energy is provided and what the energy costs are going to be five and ten years out, then you’re putting yourself in a position of unbridled growth on infrastructure costs and a disturbing trend for your bottom line.”
Writing and additional research by Chris Hall, Associate Editor, The Realization Group.