Enabling Data-as-a-Service on Legacy Platforms

In a previous blog, we wrote about the competitive edge that data-rich financial institutions and solution vendors can gain by offering ‘Data-as-a-Service’.

But what are some of the key considerations when it comes to cloud-enabling a firm’s existing legacy platforms? How difficult is it to offer live data or real-time data sharing through commonly used desktop apps such as Excel?

A bank, broker or asset manager might want to take on-premise data that sits on an internal platform – trade data, for example – and share it seamlessly to the cloud, removing the need for end users to be onsite or to connect to a remote desktop via a VPN.

Or a solution vendor whose products were designed for on-premise installation, or for access via dedicated lines and specific client software, might wish to go cloud-based in order to offer real-time or on-demand data sharing into existing applications and workflows, without its customers having to rely on clunky FTP transfers or build to its APIs.

The good news is that enabling Data-as-a-Service on these legacy platforms is not as difficult as it might seem.

Real-time data sharing in the real world

Many financial institutions, on both the buy side and the sell side, are looking to gain greater leverage from their internal systems by cloud-enabling them, improving the service they offer to internal colleagues and external clients alike. Whether that means making real-time data available within chat and collaborative workflow apps, feeding live data to and from Excel, or sharing data via other desktop apps, the approach offers many benefits.

A good real-world example of this is the e-commerce fixed income department of a well-known bank, which uses its own internally developed platform to generate trade axes from its current bond inventory. Working together with ipushpull, the bank has cloud-enabled this internal platform with secure, real-time data sharing, so that customers are automatically updated with new trade axes via their own choice of desktop apps (such as Symphony or Excel) and can respond with indications of interest directly from within those apps.

From a solution vendor perspective, there are many companies that have fantastic products and services, but live data sharing is restricted by the fact that their customers need to have software installed onsite or can only access data through FTP or API integration with a centralised service. A number of these vendors are now seeing the benefits of cloud-enabling these platforms to offer Data-as-a-Service.

Again, it’s worth citing a couple of real-world examples.

The first is a risk solution vendor that offers intra-day margin calculations. They have a great product that enables customers to load up their position data and calculate SPAN margining for those positions on the fly. However, the product was originally designed to be installed on-premise at the customer’s site, which made it expensive and meant that it could only be sold to larger institutions. By working with ipushpull to create a multi-tenant version with a secure cloud presentation layer, the vendor can broaden the service out to a wider, more diverse customer base and offer more affordable subscription-based or on-demand pricing models.

The second example is a data vendor that has a centralised multi-tenant platform, where customers download large data files and upload trade files via secure FTP. Again, their legacy installation and onboarding process meant that their commercial model was limited to larger customers. ipushpull helped the vendor cloud-enable this service to make the data available on demand, which has now opened up the service to a much wider group of potential customers.

Seamless integration of data sharing tools

The common thread with all of these legacy systems is that they handle data, with a set of inputs and outputs. And there is no fundamental, technical reason why they should not be cloud-enabled with data sharing tools.

This is what ipushpull does. At the front end, we deliver these systems as true services with a unified presentation layer via the common desktop apps that people are already using. At the back end, we develop APIs that plug into these legacy technologies. From the perspective of both service providers and end users, the process is completely seamless. Services connect to ipushpull via the cloud and we take care of the rest: marshalling the data, providing access controls, presenting the data in multiple desktop apps and relaying data back and forth to the service from within those apps in real time.
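To make the pattern concrete, here is a minimal sketch of the publish/subscribe idea described above: a legacy system pushes updates to a named data page, and any subscribed desktop client receives them immediately. All names, the `CloudDataBridge` class and the example payload are hypothetical illustrations, not ipushpull's actual API.

```python
from collections import defaultdict
from typing import Callable, Dict, List

class CloudDataBridge:
    """Toy in-process stand-in for a cloud data-sharing layer.

    A legacy system publishes tabular updates to a named page; every
    subscribed client (an Excel add-in, a chat bot, etc.) is notified.
    """

    def __init__(self) -> None:
        self._subscribers: Dict[str, List[Callable[[dict], None]]] = defaultdict(list)
        self._latest: Dict[str, dict] = {}

    def subscribe(self, page: str, callback: Callable[[dict], None]) -> None:
        self._subscribers[page].append(callback)
        if page in self._latest:
            # Late joiners immediately receive the current state.
            callback(self._latest[page])

    def publish(self, page: str, row: dict) -> None:
        self._latest[page] = row
        for callback in self._subscribers[page]:
            callback(row)

# A legacy platform pushes a new trade axe; a desktop client receives it.
bridge = CloudDataBridge()
received = []
bridge.subscribe("bond-axes", received.append)
bridge.publish("bond-axes", {"isin": "XS0000000000", "side": "BUY", "size": 5_000_000})
```

In a real deployment the "bridge" would of course be a hosted service with authentication and access controls; the point is simply that the legacy system only needs to publish, while the presentation layer handles fan-out to whatever apps users prefer.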

Service providers benefit in two ways. They can offer live and on-demand access through desktop apps like Excel, Slack, Symphony, Microsoft Teams and Eikon messenger, desktop containers like ChartIQ Finsemble and Openfin, and internal platforms and applications like pricing engines, risk systems and OMSs, without having to completely re-platform their existing systems. They can also deliver data-driven custom notifications into those apps based upon user-defined parameters.

In summary, Data-as-a-Service offers many benefits, and there is no reason that firms should be restricted to on-premise deployment or to API/SFTP integration. By working with a trusted partner such as ipushpull, firms that are looking to cloud-enable their internal platforms can minimise their internal development costs, broaden their reach and rapidly accelerate their time to market.

Download “Fintech’s Next Frontier: Data-as-a-Service”, our Financial Markets Insights report. In collaboration with NatWest Markets, MayStreet, Euromoney TRADEDATA and Engine, part of The Investment Association, ipushpull explores the importance of Data-as-a-Service in facilitating remote working and accelerating digital initiatives within the financial markets industry.

How to Excel in your Post-Trade Digitalisation Workflow

A senior manager at a major bank noted at a recent conference that some staff spent well over 60% of their time in email, chat and spreadsheets. As we complete the journey from paper to digital, and with an increasing compliance and regulatory burden in our industry, is there an opportunity to innovate here?

Much of the post-trade workflow is spent managing exceptions and reconciliation breaks, which means viewing data from different systems in a normalised way. Spreadsheets lend themselves to this challenge and have become the norm, since they do not care what the source is: as long as the data is tabular and there is a common key across systems, you can simply copy the data across or re-enter it.
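The break-spotting step described above amounts to joining two extracts on a common key and flagging rows that differ or are missing. A minimal sketch, with entirely made-up trade data:

```python
# Two systems export trades keyed by trade ID; rows that differ
# between the extracts, or exist in only one, are reconciliation breaks.
system_a = {
    "T1": {"qty": 100, "price": 99.5},
    "T2": {"qty": 250, "price": 101.2},
    "T3": {"qty": 50,  "price": 98.0},
}
system_b = {
    "T1": {"qty": 100, "price": 99.5},
    "T2": {"qty": 200, "price": 101.2},  # quantity mismatch
    # T3 missing entirely from system B
}

breaks = []
for trade_id in sorted(system_a.keys() | system_b.keys()):
    a, b = system_a.get(trade_id), system_b.get(trade_id)
    if a != b:
        breaks.append((trade_id, a, b))

# breaks now lists T2 (quantity mismatch) and T3 (missing in system B)
```

This is exactly the logic that ends up embedded, copy by copy, in ad-hoc spreadsheets; the governance problems discussed below arise when every user maintains their own version of it.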

Alastair Rutherford, MD of Ascendant Strategy, says:

“Getting on top of all the data exchanges and workflows that occur to support post-trade activities is a key element of any Digital strategy in Capital Markets organisations. To industrialise post-trade, and make a step-function reduction in TCO, firms must understand these processes properly in the context of their target operating model, and implement automation that complements their core applications.”

There is a rich ecosystem of tools and applications that provide the glue, such as external data lookups or calculation tools. Once you have added that glue, the spreadsheet becomes transportable to your peers: those with whom you share it can see exactly what you see, along with the method behind your conclusions.

Well, not quite… if you want to modify the recipe in your calculations, a new spreadsheet needs to be sent. When speaking to your peers (especially those outside the organisation), how do you know you are looking at the same spreadsheet? What happens if the data that drives the calculation is changing, or is perhaps only available to you? What happens if you have incorrectly entered some of that data? Before long you have a huge pile of complicated legacy spreadsheets, hopefully accurate for the moment they were created, but with a context and scenario that is unclear in the document itself and certainly unclear to anyone auditing it. It shouldn’t come as a surprise that Accenture has estimated £125bn of complexity costs in pre-trade and post-trade workflows.

The solution here is to use a common set of tools in an environment which is centralised and maintained. Platforms such as Symphony can deliver the environment securely, meeting the needs of Information Security. However, the tools need to allow users common access to shared data with the appropriate interface to meet the needs within the post-trade workflow.

At ipushpull we are seeing a great deal of interest in our collaborative data platform to deliver exactly this: the ability to share data in real time between groups of users, workflow tools that enable decisions to be made quickly and fully audited, and the ability to adapt rapidly.

The success of the spreadsheet has been its ability to provide a quick solution to a business problem, one that is generally intended to be temporary. Over time, however, the overhead of navigating and maintaining the collection of spreadsheets has become too high. ipushpull addresses this challenge by providing an ecosystem for collaborative workflow across the post-trade community, delivering efficiency savings in terms of time spent converting data, but also cost savings in terms of accuracy: reducing data errors means fewer resolutions, and fewer resolutions mean further efficiency savings.

If you would like to speak to ipushpull please get in touch with sales@ipushpull.com.