
The Trading Mesh

Round Table Report: Using Deep Learning Models in Real Time

Wed, 01 Nov 2017 04:40:00 GMT           

Verne Global and The Realization Group held a roundtable at the Trading Show in New York on 3 October 2017, discussing “Using Deep Learning Models in Real Time”.

Putting artificial intelligence (AI) and deep learning to work parsing financial securities industry data is a tall order, as data management experts made clear in a roundtable discussion of technology and modelling challenges in AI and deep learning, led by Stef Weegels, Business Development Director at Verne Global, and Yam Peleg, founder of Deep Trading, at The Trading Show in New York on 3 October.

Applying deep learning is trickier when the data being evaluated and processed is constantly changing, in real time. Deep learning models need to be designed to adapt to temporal changes, said Nilesh Choubey, data scientist at Qplum, an investment advisory services company that provides automated investing using AI and data science, including self-learning design of trading systems.

“You should think of your model as a robo-brain that learns how to adapt to changing market behaviour. It's not just the current features of the model but how it can change with new data that defines your deep learning trading system.”
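The adaptive behaviour Choubey describes can be sketched minimally. The example below is not Qplum's system; it is a toy illustration, with made-up synthetic data, of a model that re-fits itself on a sliding window of recent observations so its parameters track a regime change:

```python
import numpy as np

# Illustrative sketch only: a model that re-fits itself on a sliding
# window of recent data so its parameters track a drifting market.
rng = np.random.default_rng(0)

def rolling_refit(x, y, window=50):
    """Re-estimate a simple linear model on the most recent `window` points."""
    coefs = []
    for t in range(window, len(x)):
        xs, ys = x[t - window:t], y[t - window:t]
        slope = np.polyfit(xs, ys, 1)[0]   # refit on recent data only
        coefs.append(slope)
    return np.array(coefs)

# Synthetic regime change: the true slope flips from +1 to -1 halfway through.
x = rng.normal(size=400)
true_slope = np.where(np.arange(400) < 200, 1.0, -1.0)
y = true_slope * x + 0.1 * rng.normal(size=400)

coefs = rolling_refit(x, y)
# Early estimates sit near +1, late estimates near -1: the model adapted.
```

A static model fitted once on all 400 points would average the two regimes away; the windowed refit is the simplest version of "changing with new data".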

Along with Choubey, representatives of Thasos Group and the Moscow Exchange who took part in the discussion described the complexity involved in preparing and training a deep learning model. The Python programming language, the TensorFlow open-source software library, the PyTorch deep learning framework built on Python, the Keras software package, and the MNIST benchmark dataset are all important elements in that process. The MNIST sample set, comprising about 70,000 images of handwritten digits, is used for training deep learning systems. The ImageNet dataset, used more broadly and generally for deep learning, has more than 10 million images and counting.

Even though a human baby only needs to see two cats to know what a cat is, AI developed by Google may need to see 100,000 cats or more to be able to recognize a cat, observed Wei Pan, co-founder and chief scientist at Thasos Group, an alternative data intelligence platform. MNIST is useful as a way to train a deep learning model by giving that model thousands of images of digits to learn from. That is not quite training for handling changing market data, however.
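The kind of supervised training MNIST is used for can be sketched in a few lines. The example below is a hedged stand-in: synthetic "images" and a plain softmax classifier replace the real 70,000-image set and a deep network, but the loop — show labelled examples, nudge the parameters, repeat — is the same idea:

```python
import numpy as np

# Minimal sketch of MNIST-style supervised training: a softmax
# classifier trained by gradient descent on labelled "images".
# Synthetic data stands in for the real 70,000 handwritten digits.
rng = np.random.default_rng(1)

n_classes, img_dim = 3, 64
# Each class gets a distinct prototype image; samples are noisy copies.
prototypes = rng.normal(size=(n_classes, img_dim))
labels = rng.integers(0, n_classes, size=600)
images = prototypes[labels] + 0.5 * rng.normal(size=(600, img_dim))

W = np.zeros((img_dim, n_classes))
for _ in range(200):                          # plain gradient descent
    logits = images @ W
    p = np.exp(logits - logits.max(axis=1, keepdims=True))
    p /= p.sum(axis=1, keepdims=True)         # softmax probabilities
    onehot = np.eye(n_classes)[labels]
    W -= 0.01 * images.T @ (p - onehot) / len(images)

accuracy = (np.argmax(images @ W, axis=1) == labels).mean()
```

In practice this loop would be written in Keras or PyTorch against the real MNIST arrays, but the mechanics are unchanged: thousands of labelled examples drive the parameter updates.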

When building a deep learning model, designers have to determine when they have enough layers – each layer capturing one particular criterion of the data – rather than too few layers to be meaningful, as Pan explained. Two or three layers are not enough, Pan said. Five to ten are necessary to start seeing performance benefits, he added.
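What "adding layers" means mechanically can be made concrete. The sketch below is hypothetical (the layer width of 32 is made up, not from the talk): it builds a fully connected network with a configurable number of hidden layers and counts the trainable parameters that each extra layer adds:

```python
import numpy as np

# Hypothetical sketch of how "more layers" changes a model: a fully
# connected network with a configurable hidden-layer count. The width
# of 32 units per layer is an arbitrary choice for illustration.
def build_mlp(n_inputs, hidden_layers, n_outputs, width=32, seed=0):
    rng = np.random.default_rng(seed)
    sizes = [n_inputs] + [width] * hidden_layers + [n_outputs]
    # One (weights, bias) pair per layer transition.
    return [(rng.normal(size=(a, b)), np.zeros(b))
            for a, b in zip(sizes[:-1], sizes[1:])]

def forward(layers, x):
    for i, (W, b) in enumerate(layers):
        x = x @ W + b
        if i < len(layers) - 1:
            x = np.maximum(x, 0.0)   # ReLU on hidden layers only
    return x

def n_params(layers):
    return sum(W.size + b.size for W, b in layers)

shallow = build_mlp(10, 2, 1)   # the "two or three layers" Pan calls too few
deep = build_mlp(10, 8, 1)      # in the five-to-ten range he suggests
out = forward(deep, np.zeros((4, 10)))   # shape (4, 1)
```

Each added layer here costs another weight matrix and bias vector, which is why the depth decision trades expressiveness against parameter count and training cost.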

Designers of models should avoid having too many layers, however, because that could lead to overfitting – much more than entering too much training data or actual data would, according to Peleg of Deep Trading, a deep learning and machine learning financial systems company based in Haifa, Israel. “If you have the right architecture for your problem, you can learn with an order of magnitude more parameters than your data points, and still get good generalisations,” Peleg said. “If the data doesn’t actually represent the real phenomena, that can cause your model to overfit – more than the model itself having too many parameters. It can be that the data is not representative – it’s not a good subset.”
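A toy illustration of the overfitting Peleg warns about (not an example from the talk): a degree-7 polynomial has as many parameters as there are points in an 8-point sample from a linear process, so it can memorise the sample – including its noise – exactly, while a 2-parameter linear fit captures the actual phenomenon:

```python
import numpy as np

# Toy illustration of overfitting: as many parameters as data points
# lets the model memorise the noisy sample instead of the phenomenon.
rng = np.random.default_rng(2)

def rmse(a, b):
    return float(np.sqrt(np.mean((a - b) ** 2)))

x_train = np.linspace(0, 1, 8)
y_train = 2 * x_train + 0.2 * rng.normal(size=8)   # truth is linear + noise
x_test = np.linspace(0.05, 0.95, 50)
y_test = 2 * x_test                                 # noise-free ground truth

linear = np.polyfit(x_train, y_train, 1)    # 2 parameters
wiggly = np.polyfit(x_train, y_train, 7)    # 8 parameters for 8 points

train_err_wiggly = rmse(np.polyval(wiggly, x_train), y_train)  # ~0: memorised
test_err_wiggly = rmse(np.polyval(wiggly, x_test), y_test)     # typically large
test_err_linear = rmse(np.polyval(linear, x_test), y_test)
```

This also shows the other half of Peleg's point: the failure comes from the sample not representing the phenomenon (here, eight noisy points), which the over-parameterised model then faithfully reproduces.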

Another method of optimisation involves running multiple versions of the same model with different training data, then applying them to the actual live market data, to see how close the results from each version are. Accomplishing this, however, requires enough hardware. And human traders need to be able to see what the crucial points are coming out of the model and what the interesting and relevant information is.
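The ensemble check described above can be sketched as follows. This is a simplified stand-in (a linear model and synthetic data replace the deep models and live feed): train several copies of the same model on different resamples of the training data, run all of them on the same inputs, and measure how far apart their outputs are:

```python
import numpy as np

# Sketch of the multi-version check: several copies of one model,
# each trained on a different bootstrap resample, are run on the same
# "live" inputs; wide disagreement would flag an unreliable model.
rng = np.random.default_rng(3)

x = rng.normal(size=300)
y = 1.5 * x + 0.3 * rng.normal(size=300)

models = []
for _ in range(10):                              # ten versions, ten training sets
    idx = rng.integers(0, len(x), size=len(x))   # bootstrap resample
    models.append(np.polyfit(x[idx], y[idx], 1))

x_live = np.linspace(-2, 2, 20)                  # stand-in for live market data
preds = np.array([np.polyval(m, x_live) for m in models])   # (10, 20)
spread = preds.std(axis=0).max()                 # worst-case disagreement
```

When the versions agree closely, the model's output is being driven by the phenomenon rather than by the particular training sample; running ten copies in parallel is also where the hardware requirement the participants mention comes from.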

Aside from optimisation, reverse engineering is another possibility for working with a deep learning model to ensure its results are valid, said Fedor Pshirkov, managing director and co-head of data science at the Moscow Exchange. After training the model for a specific purpose, then finding that it produces some unexpected type of insight or information, one could try to redesign the model to regularly produce the same nature of insight. “Maybe you can approximate your deep learning model with a similar model just by learning the major factors you picked up on, then train it accordingly,” he explained.
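One reading of Pshirkov's approximation idea can be sketched as follows. This is a hedged interpretation, not his method: a simple black-box function stands in for the trained deep model, and a low-order polynomial surrogate is fitted to the deep model's own outputs over the region of the major factor it picked up on:

```python
import numpy as np

# Hedged sketch of approximating a trained model with a simpler one:
# fit a low-order surrogate to the "deep" model's own outputs.
rng = np.random.default_rng(4)

def teacher(x):
    """Stand-in black box for a trained deep model we want to approximate."""
    return np.tanh(2.0 * x)          # pretend this came out of training

x = rng.uniform(-0.3, 0.3, size=500)  # region of the major factor of interest
y_teacher = teacher(x)

# Surrogate: a cubic polynomial trained on the teacher's outputs,
# not on the original market data.
surrogate = np.polyfit(x, y_teacher, 3)
approx_err = float(np.max(np.abs(np.polyval(surrogate, x) - y_teacher)))
```

The surrogate is cheap to inspect and to retrain, which is the appeal: once the major factors are known, a similar but simpler model can be trained to produce the same nature of insight regularly.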

Deep learning models can have a lot of power, as all the participants in the discussion clearly recognised, but require higher-level thought and customisation to be usefully applied to financial market data – to yield meaningful, accurate results that are relevant to steering future business activity. Designing these models requires careful consideration of how many layers of categories they will include and how much data they can process meaningfully. Even after a model appears to be built and complete, further adjustment can still be necessary. And, as Weegels observed, the requirement for scalable and cost-efficient compute cannot be overstated.


This article was first published by Verne Global (a client of The Realization Group) - Financial Services - Using Deep Learning models in real time