Cloud-based exchanges could change the value of broker-dealers


Nasdaq and CME are both moving market infrastructure toward the cloud, but in very different ways: the former keeps the critical path to the matching engine, the network, under its direct control, while the latter aims to build the venue itself on Google’s private cloud environment, raising concerns among some market participants about stability and determinism.

Depending on the model used, exchanges could begin offering investment firms technology currently supplied by broker-dealers, lowering latency and changing the value of sell-side support.

When exchanges talk about moving matching infrastructure toward the cloud, their language can sound similar. Their architecture choices are not. Nasdaq and CME Group are both using hyperscaler technology to modernise markets, but they are making very different choices in terms of scope and ambition.

Nasdaq’s model is already in production. AWS said in May 2025 that Nasdaq had migrated the core trading system of three North American markets to AWS Outposts, with the largest market processing up to 36 billion daily messages. AWS also said the setup delivered low double-digit microsecond end-to-end and order-to-trade latency, alongside up to 10% better round-trip latency. Nasdaq’s earlier press release on GEMX, one of the markets it operates, said that the exchange alone processes 12 billion daily messages and was the third Nasdaq market to move to AWS after MRX and the Nasdaq Bond Exchange.


Magnus Haglind, Nasdaq’s head of marketplace technology, said, “AWS runs the platform and the compute, we run the network.” He added: “Nasdaq’s team runs and oversees the network, and that’s ultra low latency, which means that we can guarantee the same experience for any orders that come in. That remains the case regardless of whether it’s AWS compute or any other server in our traditional setup.”

Haglind also said Nasdaq moved the AWS server racks into its own data centres and kept control of the network between customer cabinets, gateways, matching engine and compute.

“Retaining control of the networks means that we can control the end-to-end experience, which is important from a trading firm point of view. Investors know when their order leaves their colocation equipment, it goes directly to a Nasdaq switch that connects directly to the matching engine,” he said.

CME’s published material describes something altogether different. In June 2024, CME Group and Google Cloud said they would build a private Google Cloud region and a co-location facility in Aurora, Illinois. The companies said clients would be able to choose between self-managed infrastructure in the co-location facility and Google Cloud’s specialised infrastructure-as-a-service offering, both with equal network latency to the exchange.

CME’s client wiki is full of telling specifications. It says CME and Google are building an infrastructure that “brings the exchange into the cloud while respecting existing market structure and industry investment.” It also says that, after migration, “CME Globex, the premier futures and options electronic platform CME launched in 1992 on which all the largest financial derivatives trade, from S&P futures to treasury and bond futures and energy futures, will be wholly operating in the private Google Cloud Chicago region.”

The architecture goes beyond simple hosting. The CME wiki says the new setup will use Ultra Low Latency (ULL) Google Compute Engine instances, a dedicated Google Cloud ULL Network for order routing and hardware multicast market data, and Google Cloud Network Premium Tier for shared services. Production in Chicago is due to alternate across two isolated zones to prevent issues related to cloud-wide updates or changes. CME says this “innovative dual zone design unlocks greater determinism and reduces the potential for variance”.

CME has declined to give further specifics on technology or performance. On timing, it says the sandbox in the private Google Cloud Dallas zones is due in mid-2026, with pre-production access in Chicago before production migration begins in late 2027 and continues through 2028. Dallas would then become the disaster-recovery setup in 2029.

For many market participants, that makes the CME-Google project the more ambitious attempt at modernisation. It does not simply insert cloud hardware into an exchange-controlled network but moves the core matching environment into a private hyperscaler architecture.

Data and trading disintermediation 

Matt Barrett, co-founder and chief executive of Adaptive, a specialist provider of trading technology and cloud infrastructure for capital markets firms, said the two projects point in the same direction on market data.


“Both Nasdaq and CME can be seen as viewing cloud-native markets as a way to change how market data is delivered, potentially reducing the role of firms that currently aggregate and redistribute exchange data,” he said.

“In CME’s case, the implications could go further, because a cloud-native market model could also reduce the need for some of the intermediaries that currently package connectivity, hosted infrastructure and trading access around Globex,” he continued.

Barrett also said the technical challenge goes beyond timestamping.

“Can cloud infrastructure provide the stable and well-understood baseline that ultra-low latency markets require?” he asked, adding: “Cloud providers still do not have a fully proven, production-quality answer for the most demanding trading use cases.”

He added: “CME and Google appear to be trying to build that production-quality baseline directly.”

That could matter for more than just market data vendors. If more of the technological infrastructure, from low-latency compute to network access, is standardised inside the Google environment, then some of the value provided today by intermediaries through managed connectivity, hosted infrastructure or access packaging could come under pressure. For now, the clearing-member layer would remain, leaving the affected intermediaries and exchange members time to adapt. Longer term, this could affect connectivity providers, technology vendors and some brokers or FCMs that currently sit between end-clients and CME’s infrastructure.

Goldman Sachs, RJ O’Brien, and other futures commission merchants did not reply to requests for comment on the disintermediation prospects.

Jason Shaffer, chief technology and product officer of Trading Technologies, the capital markets technology platform provider best known for its derivatives EMS used by many dealers to trade on CME, said: “We are well prepared for this transition, engaged with our exchange partners and intent on doing all the heavy lifting so our clients can have a seamless experience.”

Nasdaq’s own cloud position is also narrower than a general multi-cloud pitch.

Haglind said: “AWS has led innovation in edge compute, which makes them an ideal partner where we benefit from their ongoing investment in R&D and can optimise for deterministic and high-performance use cases.”

For now, the comparison is still partly obscured by the lack of transparency beyond CME’s wiki.

Beyond the stability concerns of CME market participants, the disintermediation of data and market access that both exchanges are pursuing is bound to reshape the structure of trading on their venues and beyond.

©Markets Media Europe 2025
