The Trading Mesh

Software Testing in Financial Services – is the price of success paid for in advance?

Thu, 06 Jul 2017 09:21:40 GMT

 

In this paper, Mike O’Hara and Dan Barnes of The Realization Group investigate the trends and dynamics that are reframing the way that testing is viewed and managed within financial services, in conversation with Sascha McDonald, Founding Director of CloudLoad.io; Graham Perry, Manager for the UK & Ireland at Neotys; Rob Jacobs, Director of Engineering Operations at Worldpay; Matt Roberts, Lead Consultant at Cake Solutions; and Darren Stocks, Head of Testing & Quality Assurance at change management consultancy Certeco.

 

Introduction

Banks are advocating earlier and more frequent approaches to testing in the software development lifecycle. The advantages of more effective testing are clear, nowhere more so than in finance. 

Outages have struck the payment systems of the UK’s major retail banks. Both John Cryan, CEO of Deutsche Bank, and the UK’s Prudential Regulation Authority have used the term ‘antiquated’ when referring to the banking systems they supervise. The challenges of upgrading core banking technology are well documented.

Added to this woe, pressure on performance is set to increase. In 2018, the new European Payment Services Directive (PSD 2) and the UK’s Open Banking Initiative will open banks’ customer and transactional data up to third parties capable of offering services that compete with the banks themselves.

Consequently, banking technology is both aging and under increasing pressure to perform. At the heart of this change is a need for faster, more reliable technology deployment. In development circles this often means a ‘shift left’ in the assurance and testing process. Moving testing to an earlier stage in the development lifecycle offers considerable advantages for banking operations through increased reliability. 

Appreciation of the importance that performance testing and automation hold for a bank’s competitiveness is growing. Software delivery processes that emphasise greater communication and collaboration throughout, such as agile development methods and DevOps, are being successfully ingrained within financial services firms, driving home the need for more integrated development and testing.

 

A tough journey

For context, it is important to understand the journey that testing has been on. Historically it has been at risk from inadequate resourcing as a result of budgetary pressures and a misunderstanding of its value. 

Typically, when a programme manager looked at qualifying the budget to build an application for a project, testing would be separated out. Within that testing ‘bucket’, functional testing was often prioritised; other non-functional elements of testing lacked a specific budget and consequently were not supported adequately.

 

“We see constant failure in budgeting across areas like performance testing”

Sascha McDonald, CloudLoad.io

 

“We see constant failure in budgeting across areas like performance testing,” says Sascha McDonald, Founding Director of CloudLoad.io. “All the way through my career I've seen that it’s a misunderstood area of testing, in terms of budgeting for it and budgeting early.”

That lack of explicit budget for performance testing extends to other non-functional testing. The term ‘Operational Acceptance Testing’ (OAT) is often a bucket for non-functional components. When a catch-all area in the programme is applied, there is a greater risk that the specific specialist areas contained within it do not get the resources they need.

“I think programme managers should be really looking at what is being covered in OAT, because it’s a place where a programme will often not be very clear,” says McDonald. “Things that need to be covered will be included, like disaster recovery and performance testing. However, because everything is bucketed together, these things will lose their priority. For example, performance testing might be scant within the OAT criteria, while disaster recovery takes up the majority of the OAT requirement.”

Creating a false impression that testing has been properly taken care of, when in fact coverage has been slight, creates a risk to both the project and the business. 

Assessing how to develop a strong testing environment has also proven challenging, despite the potential benefits. Building a scalable environment in which to undertake performance testing, for example, can be one of the biggest overheads, and performance testing itself one of the most expensive forms of testing, yet such environments can also allow firms to become highly efficient through increased automation.

 

“Environment is a huge thing that plays into your ability to automate”

Rob Jacobs, Worldpay

 

“Environment is a huge thing that plays into your ability to automate,” says Rob Jacobs, Director of Engineering Operations at Worldpay. “Environments that you can stand up, which are fully provisioned at the touch of a button and you can tear down again afterwards, then repeat the process again and again. We've put a lot of effort into how we automate the provision of our environments.”
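
As an illustration of the ‘touch of a button’ provisioning Jacobs describes, the hedged sketch below wraps a containerised test environment in a Python context manager so it can be stood up, used and torn down repeatedly. The docker compose commands are standard CLI calls, but the compose file name and test path are assumptions invented for the example.

```python
# Minimal sketch: stand up a disposable test environment, run tests,
# then tear it down again -- repeatable on every run.
# Assumes Docker Compose v2 is installed and that a (hypothetical)
# docker-compose.test.yml describes the environment.
import subprocess
from contextlib import contextmanager

COMPOSE_FILE = "docker-compose.test.yml"  # hypothetical compose file

@contextmanager
def test_environment():
    """Provision the environment on entry, destroy it on exit."""
    subprocess.run(
        ["docker", "compose", "-f", COMPOSE_FILE, "up", "-d", "--wait"],
        check=True,  # --wait (Compose v2) blocks until services are up
    )
    try:
        yield
    finally:
        # Tear down containers and volumes so the next run starts clean.
        subprocess.run(
            ["docker", "compose", "-f", COMPOSE_FILE, "down", "--volumes"],
            check=True,
        )

if __name__ == "__main__":
    with test_environment():
        # Run the test suite against the freshly provisioned environment.
        subprocess.run(["pytest", "tests/"], check=True)
```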

Firms are finding that the cost implications of failure provide a convincing case to justify allocating more resource into performance testing, while at the same time the process is being made simpler through new services and functionality. 

McDonald says, “Typically, if you don’t performance test, then the issues related to performance will become an operational overhead. These products coming out at the moment, like New Relic APM and Dynatrace, are making huge inroads into the market, because they allow customers to quickly identify performance issues in production and rectify them.”
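
Monitoring tools of this kind work by instrumenting calls and flagging slow operations. The sketch below is a rough illustration of that idea in Python; it does not use any vendor’s actual API, and the threshold and function names are invented for the example.

```python
# Illustrative sketch of APM-style instrumentation: time each call and
# log anything slower than a threshold. Real products do far more;
# this shows only the basic idea.
import functools
import logging
import time

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("perf")

def timed(threshold_ms: float = 100.0):
    """Decorator that logs calls exceeding threshold_ms milliseconds."""
    def decorator(func):
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            start = time.perf_counter()
            try:
                return func(*args, **kwargs)
            finally:
                elapsed_ms = (time.perf_counter() - start) * 1000
                if elapsed_ms > threshold_ms:
                    log.warning("%s took %.1f ms", func.__name__, elapsed_ms)
        return wrapper
    return decorator

@timed(threshold_ms=50.0)
def lookup_balance(account_id: str) -> float:
    time.sleep(0.06)          # stand-in for a slow back-end call
    return 0.0

lookup_balance("12345")       # logs a warning: the call exceeded 50 ms
```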

 

New approaches, new skills

The two big drivers for change, margin compression and rising costs, are being brought to bear upon the banking sector. New non-bank firms are launching services that can eat into the margins for traditional banking services, and this new competitive landscape is being supported by regulators. In the European Union, the new Payment Services Directive (PSD 2) will force banks to open up their application programming interfaces so that permissioned service providers can access customers’ account and transactional data. The Open Banking Initiative in the UK is creating a similar model.

For banks the impact is two-fold. Firstly, the newcomers do not have to battle with legacy systems to deploy a service, making them faster to market and their services more responsive. Their digital approach to capturing customer data allows that data to be applied to new and different products, rather than being tied to a product-specific silo. That means traditional banks have to improve their own development cycles.

Secondly, banks will be subject to a completely new operational risk environment as third parties access their data, creating the need for a new testing framework.

Graham Perry, Manager for the UK & Ireland at automated load testing and performance monitoring specialist Neotys, notes that traditional players are changing their approach to development in order to handle this impact, but there are limits imposed by the architecture. 

 

“Not all projects lend themselves to Agile”

Graham Perry, Neotys

 

“What’s emerging, as the market matures, is that not all projects lend themselves to Agile,” he says. “So, we’re in the second tranche now where people are saying, ‘This is an Agile project, that’s a Waterfall project’. They do that for a number of reasons. If you’re in financial services, it’s usually around regulatory drivers.”

Where the Agile methodology involves rapid deployment by cross-functional teams, increasing collaboration between testers and developers and their ability to go back and fix problems as they are found, the Waterfall model is a sequential process in which such adjustments cannot easily be made.

New mobile apps and ecommerce platforms tend to be developed using the Agile methodology because they need to be highly reactive. By contrast, back-office systems for reconciliation or information management need very careful planning and do not have new features introduced quickly.

“Mobilisation of financial service applications is one of the areas where Agile is nibbling away at the market,” says Perry. “The core systems are all Waterfall development and have to be very rigid. When you’re mobile-enabling some of a bank’s internal datasets, that’s one area where financial services organisations are using agile.”

Middleware such as an API engine is typically used to hook into the back end, making data available via an API that can then be integrated with the mobile application. Teams creating mobile apps can therefore operate in a very agile way without touching the back end.
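
As a loose illustration of that middleware pattern, the sketch below exposes a single read-only endpoint over a core system. Flask is chosen purely for brevity; the route and the fetch_balance_from_core helper are hypothetical stand-ins for whatever hook a real API engine provides.

```python
# Minimal sketch of an API middleware layer: the mobile app talks only
# to this endpoint, never to the core banking system directly.
# Flask is used for illustration; fetch_balance_from_core is a
# hypothetical stand-in for the real back-end hook.
from flask import Flask, jsonify

app = Flask(__name__)

def fetch_balance_from_core(account_id: str) -> float:
    """Hypothetical call into the core (e.g. mainframe) system."""
    return 1234.56  # canned value for the sketch

@app.route("/api/accounts/<account_id>/balance")
def get_balance(account_id: str):
    return jsonify({"account": account_id,
                    "balance": fetch_balance_from_core(account_id)})

if __name__ == "__main__":
    app.run(port=8080)
```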

However, as testing is integrated into the Agile methodology, some shortcomings have been observed in the capabilities that team members have developed.

“It is emerging that a tester embedded in an Agile team, historically, won’t necessarily have had the technical skills to make decisions or have discussions with developers,” says Perry. “In performance testing, testers do not have the technical skills to be part of a bigger team where there’s group responsibility for delivering something.”

To overcome this challenge, testers need to develop a broader range of skills, in programming for example, so they can understand what the developers are doing and engage with their process.

“Unless testers in general do embrace these new skills, they’ll disappear, particularly with manual testing, because a lot of it is being automated as part of speeding up delivery,” Perry adds.

 

Fitting in

Within the testing function, there has been a noticeable shift in focus. In the mid-1990s, testers and developers had different mindsets: testers found the bugs developers had left, creating some distrust between the two. As Agile matured, that mindset started to change.

 

“That boundary between developers and testers is starting to fade”

Darren Stocks, Certeco

 

“Both are now responsible for testing, so there is more innovation around test automation to support the speed to market,” says Darren Stocks, Head of Testing & Quality Assurance at business change consultancy Certeco. “Rather than being kept to the side when things were being developed, testers are now being brought a lot closer to it. Now that boundary between developers and testers is starting to fade because we’re starting to work together a lot more.”

Within firms, there are no hard and fast ways of working, says Matt Roberts, Lead Consultant at system engineering firm Cake Solutions. Consequently, firms may ask software providers and testers to run different ranges of tests, and may have different expectations as to when they should be deployed.

“When we do unit and functional tests, sometimes clients ask us to do acceptance tests as well,” he notes. “Not only is there expectation that tests should be run early, they should be repeatedly run as we move from left to right, furthest right being production. Clients expect them to be automated and repeatable.”
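
In practice, ‘automated and repeatable’ usually means a plain test suite that runs unchanged at every stage, from a developer’s laptop through to pre-production. A minimal pytest sketch along those lines, with a hypothetical fee calculator as the unit under test:

```python
# Minimal sketch of an automated, repeatable unit test (pytest).
# The function under test is a hypothetical fee calculator; in a real
# pipeline the same suite would run at every stage, left to right.
import pytest

def transaction_fee(amount: float, rate: float = 0.029) -> float:
    """Hypothetical unit under test: percentage fee, never negative."""
    if amount < 0:
        raise ValueError("amount must be non-negative")
    return round(amount * rate, 2)

@pytest.mark.parametrize("amount,expected", [
    (0.0, 0.0),
    (100.0, 2.90),
    (10.50, 0.30),
])
def test_transaction_fee(amount, expected):
    assert transaction_fee(amount) == expected

def test_negative_amount_rejected():
    with pytest.raises(ValueError):
        transaction_fee(-1.0)
```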

The lack of an agreed model also sees challenges emerging. There is some uncertainty as to where performance testing fits into ‘shift left’ and Agile. 

“If you’re doing a nightly or a weekly sprint, it’s all about velocity, and it’s about testing quickly,” Perry says. “So, there’s evidence that some developers are doing unit testing, and they can do that fairly quickly, but only on a small scale.”

However, with the Agile flow and continuous integration, problems arise when a number of different developers all have their code integrated together. If those developers have each made the same call to a database, for example, each unit test would only have exercised that call for one developer’s code, testing a relatively small piece in isolation.

 

“Clients expect tests to be automated and repeatable”

Matt Roberts, Cake Solutions

 

“When you integrate it, you’ve got developments from a lot of developers coming together,” says Perry. “They may have been making the same call to the database, so you could get two or three instances of pulling out customer ID and balance information, for example, from the different developers after integration.”

As a result, there is a risk that the developers introduce inefficiencies and performance issues, and unless performance testing is carried out after that point, those issues are not going to be picked up.

“At some stage, it will come as a big surprise that you’ve got all these new features within your development and the code runs like a dog,” warns Perry.
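
Perry’s integration problem can be made concrete. In the hedged sketch below, three independently developed features each fetch the same customer record; a call counter exposes the redundant queries, and memoising the fetch is one possible mitigation. All names are illustrative.

```python
# Illustrative sketch of the integration problem: several developers'
# features each make the same database call, and the duplication only
# appears once their code is integrated. All names are hypothetical.
import functools

QUERY_COUNT = 0

def fetch_customer(customer_id: str) -> dict:
    """Stand-in for a database query; counts how often it is hit."""
    global QUERY_COUNT
    QUERY_COUNT += 1
    return {"id": customer_id, "balance": 1000.0}

# Three independently developed features, integrated together:
def show_dashboard(cid):   return fetch_customer(cid)["balance"]
def check_overdraft(cid):  return fetch_customer(cid)["balance"] < 0
def send_statement(cid):   return fetch_customer(cid)["id"]

for feature in (show_dashboard, check_overdraft, send_statement):
    feature("cust-42")
print(QUERY_COUNT)  # 3 -- the same record fetched three times

# One mitigation: memoise the fetch so integrated code shares the result.
QUERY_COUNT = 0
fetch_customer = functools.lru_cache(maxsize=None)(fetch_customer)
for feature in (show_dashboard, check_overdraft, send_statement):
    feature("cust-42")
print(QUERY_COUNT)  # 1 -- duplicate calls served from the cache
```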

 

Evolution of methodology

Moving from either Agile or Waterfall methodologies to DevOps requires firms to undergo a cultural shift in order to drive collaboration between the testing, development and operations functions. As an evolution of these functions, it can support firms operating within a Waterfall framework, while firms that have already established an Agile methodology will often have embraced the cultural changes involved: more collaboration, more communication, better use of tools and closer involvement of testing.

“Firms that have already made that step change find the next step to get them to DevOps is a lot smoother,” says Stocks. “With some of the other organisations who are very waterfall based, it takes a lot more to make that change to go straight to DevOps, because of the way that works, with different departments and different structures within that organisation.”

If testing is still heavily manual, managing the change requires that firms look into the cultural changes, the tools that will be needed and how roles will change. Testers will also need to adapt the way they work, for example, through training on certain tools and participation in scrums. 

“The big thing for me is around the cultural change,” says Stocks. “For one client that I was talking to about Agile, it was almost a tick-box exercise that they had been told to engage in without knowing why. For a successful adoption of an Agile framework and then a move to DevOps, the whole organisation has got to buy into it.”

The capability to deploy testing through the development process is dependent upon the underlying systems and the appropriateness of using Agile and DevOps methodologies. A medium-to-large financial services provider can expect to have a very broad spectrum of technology ranging from a 30-year-old mainframe to platforms that are hosted within private clouds, public clouds and all levels of system and deployment in between.

“We do have challenges around how we test the diversity of technology we have,” says Jacobs. “We equally have specialists in testing each of these platforms with very different approaches to how they undertake testing. We have everything from teams that have been Agile for many years, are trying to keep themselves as cutting edge as possible and are constantly evolving how they deliver agile testing in a kind of DevOps world, right through to testing on the mainframe, where it takes around three to six months to deliver a typical change through the entire end-to-end process and test it in a very waterfall way.”

As a result, at Worldpay there are different teams, or different individuals within teams, running the testing. It is becoming an integrated cycle and, depending on the specific platform, there are different levels of maturity. Nevertheless, the firm is seeing opportunities to gain efficiency.

“We want to get to a stage over the next 12 months where we have automated regression and certainly performance regression capability,” Jacobs notes.
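
A basic performance regression gate of the kind Jacobs describes can be as simple as timing an operation and failing the build when it drifts beyond a stored baseline. A minimal sketch under those assumptions (the baseline file, tolerance and operation are invented for illustration):

```python
# Minimal sketch of a performance regression check: time an operation,
# compare against a stored baseline, fail if it has slowed too much.
# The baseline file and the operation under test are illustrative.
import json
import statistics
import sys
import time

BASELINE_FILE = "perf_baseline.json"   # e.g. {"process_batch_ms": 120.0}
TOLERANCE = 1.20                       # fail if >20% slower than baseline

def process_batch():
    """Hypothetical operation under test."""
    time.sleep(0.1)

def measure_ms(func, runs: int = 5) -> float:
    """Median wall-clock time of several runs, in milliseconds."""
    samples = []
    for _ in range(runs):
        start = time.perf_counter()
        func()
        samples.append((time.perf_counter() - start) * 1000)
    return statistics.median(samples)

if __name__ == "__main__":
    with open(BASELINE_FILE) as f:
        baseline = json.load(f)["process_batch_ms"]
    current = measure_ms(process_batch)
    if current > baseline * TOLERANCE:
        print(f"FAIL: {current:.1f} ms vs baseline {baseline:.1f} ms")
        sys.exit(1)
    print(f"OK: {current:.1f} ms (baseline {baseline:.1f} ms)")
```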

The process is not an all-or-nothing exercise; some organisations will look at bi-modal development structures, with Waterfall for some programmes and Agile for others. In each case, it is crucial that the firm revisits the benefit it gets from operating a particular way and assesses whether it might be worth moving further towards full Agile or DevOps methodologies.

While a bi-modal approach may sometimes be necessary, Stocks notes that embracing a methodology fully can prove invaluable.

“One client I worked with really did embrace Agile, in all its forms, right from the top down to the bottom; everyone bought into it,” he says. “When you see it working that way, it’s really powerful how good it can be. But unfortunately, some organisations don’t go that way.”

 

Conclusion

Testing has undergone a remarkable journey, becoming ever more closely entwined with development methodologies such as Agile and DevOps. Having evolved significantly since the mid-1990s, it continues to grow and improve.

The value that testers provide to the development process has become more apparent as pressures on the commercial business and on speed of development have grown, post-crisis. 

Through the increased collaboration with other teams that Agile and DevOps have engendered, the industry as a whole is getting into a ‘shift left’ mindset, with early testing and a more collaborative approach ensuring that everyone is responsible for testing.

 

 

For more information on the companies mentioned in this article, visit:

www.certeco.co.uk

www.cloudload.io

www.neotys.co.uk

www.worldpay.co.uk

www.cakesolutions.net