How banks slash the cost of managing market fragmentation

Steve Toland, Co-Founder, TransFICC.

A smart application of hardware, cloud and open source technology makes for efficient trading systems. The complexity of the fixed income market should not be found in the technology used to trade it. Rather, smart application of both hardware and software can reduce the inefficiency that complexity creates, says Steve Toland, Co-Founder of TransFICC.

What are the headaches in workflow efficiency?
Steve Toland: In the fixed income markets we have fragmentation between asset classes: rates, credit, futures, repos and so on. Then we have further fragmentation from regional differences and different markets. Trading platforms can then have multiple APIs for each product, handling aspects like streaming prices, request-for-quote (RFQ), central limit order book (CLOB) and post-trade workflows.

Complexity is increased by different trade types (outrights, switches, butterflies) and further by different trading protocols, e.g. RFQ, RFM, Process, Click & Trade, Trade at Close, NC Lists, All-or-None Lists, Compressions and so on. For more complex products, such as mortgage-backed security specified pools, the buy side might send across anywhere between five and 50 bonds at a time to be priced up by the dealer, with four different ways to negotiate the price. That price then needs to be referenced against a benchmark later in the day. The process of haggling over a package of 50 bonds, back and forth via spreadsheets, is ripe for automation.

Portfolio trades and multi-asset trades also have more complex workflows. A system designed to handle those needs to understand the nuances of the different trading venues, as well as the different asset classes.

That takes time to build, so what we are trying to do is normalise some of that, so the different asset classes and workflows are standardised in terms of how the messages are handled. That may mean that inconsistencies in the workflows at different trading venues need to be ironed out on our side. Understanding how to deal with that in a consistent way is what we do. For automated trading systems, with computers talking to each other via APIs, everything needs to be thoroughly tested, both for functionality and under load.
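The normalisation idea described above can be sketched as an adapter layer that maps each venue's message shape onto one canonical form. This is a minimal illustration, not TransFICC's actual schema: the venue names, field names and conventions below are all invented for the example.

```python
from dataclasses import dataclass

# Canonical RFQ representation -- field names are illustrative only.
@dataclass
class NormalisedRfq:
    venue: str
    isin: str
    side: str       # "BUY" or "SELL"
    quantity: float

def normalise(venue: str, raw: dict) -> NormalisedRfq:
    """Map each venue's message shape onto one canonical form."""
    if venue == "VenueA":   # hypothetical venue using terse tag names
        return NormalisedRfq(venue, raw["id"], raw["s"].upper(), float(raw["qty"]))
    if venue == "VenueB":   # hypothetical venue using verbose names, sizes in lots of 1,000
        side = "BUY" if raw["direction"] == "bid" else "SELL"
        return NormalisedRfq(venue, raw["security"], side, raw["lots"] * 1000.0)
    raise ValueError(f"no adapter for {venue}")

# The same economic request arrives in two different shapes...
a = normalise("VenueA", {"id": "DE0001102580", "s": "buy", "qty": 5_000_000})
b = normalise("VenueB", {"security": "DE0001102580", "direction": "bid", "lots": 5000})
print(a.quantity == b.quantity)  # ...but normalises to one canonical form
```

The point is that downstream trading logic only ever sees `NormalisedRfq`, so venue inconsistencies are ironed out in one place.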

How great is the appetite for change?
There are three drivers. The first is regulatory: a lot of fixed income trading is not via electronic trading venues, but a compliance officer at a bank wants to see an audit trail for each and every trade. The second is efficiency: computers can price within a complex workflow much better than humans, and deliver more accurate pricing. The third is cost saving: the amounts the banks are spending on infrastructure are huge, and some of those costs get passed on to the buy side.

If you were to compare the workflows from fixed income to FX, what level of complexity difference are we seeing?
Fixed income is much more complex than FX, with fragmentation by asset class and by region, and with more ways to trade the same instrument. In FX every bank streams executable prices to every one of its customers. There are more complex trades, such as iceberg orders, but these essentially just put parameters on a limit order.
For a list trade you may not have all the bonds in the list upfront; you have to wait for them all to come in. Automating that workflow is difficult. The value-add is packaging them all up and then sending them through as part of one message.
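The accumulate-then-package workflow just described can be sketched as a small assembler that buffers legs as they arrive and emits a single message only once the list is complete. The class and field names here are illustrative assumptions, not a real API.

```python
# Sketch: buffer list-trade legs as they arrive, emit one packaged
# message only when the list is complete. Names are illustrative.
class ListTradeAssembler:
    def __init__(self, list_id: str, expected_legs: int):
        self.list_id = list_id
        self.expected = expected_legs
        self.legs = []

    def add_leg(self, isin: str, quantity: float):
        """Legs arrive one at a time; return the package only when complete."""
        self.legs.append({"isin": isin, "quantity": quantity})
        if len(self.legs) == self.expected:
            # Everything goes downstream as part of one message.
            return {"list_id": self.list_id, "legs": self.legs}
        return None  # still waiting for the rest of the list

asm = ListTradeAssembler("L-1", expected_legs=3)
assert asm.add_leg("US912828XG20", 1_000_000) is None   # incomplete, hold
assert asm.add_leg("US912828YK04", 2_000_000) is None   # incomplete, hold
package = asm.add_leg("US91282CAE12", 500_000)          # complete, package
print(len(package["legs"]))  # 3
```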


How do you see the deployment of different solution types, whether cloud versus on-premise, or different types of technology, making this simpler?

Complex requirements frequently mean that when you start building systems the end spec is not 100% clear. Often, if you try and get all the requirements upfront, you are going to get it wrong. So, our development team tries to build a small part of the functionality first and then build out from there. The cloud also allows you to scale your infrastructure as needed to meet evolving functionality and system throughput needs.

By using a couple of virtual machines, a firm can start small, build for one venue, and spin up more capacity tomorrow. It can be really low cost to start with, while you work out what you need to build.

Cloud is really good for that, and it's also good for testing. We built a complete trading infrastructure, but we have to stress test our systems, which we do with a load of 80,000 messages a second against all the venues we support simultaneously. Building that testing infrastructure is expensive. What we do in the cloud is spin it up once a day for an hour, measure everything, make sure it's all working, then shut it all down again. That is much more efficient. We also use open source components, the main one being Aeron messaging. Open source enables us to invite customers to use our test environment free of charge, and to pass on cost savings in the live environment.


How do you see that technology evolving and fitting into the workflow?

There are four main dealer-to-client venues, but if a buy side uses three of those four they probably warrant having an execution management system (EMS). Some of these are re-purposed FX systems. The more complex workflows for portfolio trades, packages on swaps, or compression trades on swaps are not yet fully integrated into EMSs, but people want that functionality. We are looking at integrating with EMSs, as we believe our approach of normalising venue APIs will speed adoption of these workflows.


To what extent is connectivity a barrier to evolving technology?

APIs are complex, and they evolve over time. As venues add functionality or expand the fields available, or as processes get added for regulatory post-trade booking, APIs need to be updated. Some of our bank customers are good at building in-house, but there is no budget assigned for maintenance. Changing a tag on an API is pretty simple, but resource is needed for it to be thoroughly tested.
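Why a "simple" tag change still needs testing can be shown with a small regression check on a FIX-style tag=value message. Tags 35 (MsgType) and 55 (Symbol) follow FIX conventions, but the added tag 9999 and the pipe delimiter are purely illustrative.

```python
# Sketch: regression-testing a venue API change. When a venue adds a new
# tag, existing fields must still parse identically.
def parse_fix(msg: str) -> dict:
    """Parse a pipe-delimited tag=value message into a dict of tags."""
    return dict(field.split("=", 1) for field in msg.strip("|").split("|"))

old_msg = "35=D|55=DE0001102580|38=1000000|"
new_msg = "35=D|55=DE0001102580|38=1000000|9999=PORTFOLIO|"  # venue added a tag

old, new = parse_fix(old_msg), parse_fix(new_msg)

# Regression check: every pre-existing field is unaffected by the new tag.
assert all(old[tag] == new[tag] for tag in old)
print(new["9999"])  # PORTFOLIO
```

The change itself is one line; the work is in proving that every consumer of the old message still behaves correctly with the new one.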


What will you be discussing at FILS this year?

We will be talking about how to manage fragmentation in fixed income markets, mainly around dealer-to-client venues and workflows. We will show that you can build a system which is very fast in terms of the messages it can handle, but also very efficient in its hardware usage.

A modern computer chip has multiple cores, and those cores can be utilised in the same way as different stations on a production line. The way we work is to start by allocating cores per trading venue. We then increase efficiency by looking at workflows. In simple terms, a workflow associated with streaming prices is more resource-intensive than post-trade notifications. For example, one of our customers is sending 8,000 prices per second into a single venue, so we might dedicate a single core of the chip solely to that venue for pricing contributions, while another core handles RFQ negotiations for three platforms, and another handles post-trade for every single venue. Ultimately we are trying to make the most efficient use of hardware to minimise the impact that fragmentation has on operational costs.
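The production-line allocation described above can be sketched as a static plan mapping (venue, workflow) pairs to cores. The venue names are illustrative assumptions; on Linux, each worker thread would then pin itself to its assigned core with something like `os.sched_setaffinity`.

```python
# Sketch of a static core-allocation plan: a dedicated core for one heavy
# pricing stream, one core shared by RFQ work for several venues, and one
# core handling post-trade for every venue. Venue names are illustrative.
CORE_PLAN = {
    0: [("VenueA", "pricing")],   # ~8,000 prices/sec warrants a dedicated core
    1: [("VenueB", "rfq"), ("VenueC", "rfq"), ("VenueD", "rfq")],
    2: [("VenueA", "post_trade"), ("VenueB", "post_trade"),
        ("VenueC", "post_trade"), ("VenueD", "post_trade")],
}

def core_for(venue: str, workflow: str) -> int:
    """Route a (venue, workflow) pair to its assigned core."""
    for core, assignments in CORE_PLAN.items():
        if (venue, workflow) in assignments:
            return core
    raise KeyError((venue, workflow))

print(core_for("VenueA", "pricing"))     # 0: the dedicated pricing core
print(core_for("VenueC", "post_trade"))  # 2: post-trade shared across venues
```

Keeping the plan explicit like this is what lets a small, fixed amount of hardware absorb traffic from many fragmented venues.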

Using this approach we think a global investment bank in a region can handle the four major RFQ venues for every asset class on two servers, costing US$22,000 a year.

©Markets Media Europe 2021