GSR: A Crypto Market Maker and Ecosystem Partner

 

Video title: GSR: A Crypto Market Maker and Ecosystem Partner 

Summary:

 

This video covers the architecture of GSR, a market maker specializing in cryptocurrency and digital-asset trading. GSR connects to over 45 exchanges worldwide and relies on AWS for fast and secure connectivity.

 

The DevOps team manages Kubernetes clusters on top of EC2 instances running in different availability zones to withstand regional failures. To connect to exchanges, the company leverages AWS PrivateLink and AWS Direct Connect.

 

Data is stored in Amazon Aurora and in other storage services such as ElastiCache and EFS. Market data synced to S3 is used for simulations, which are run using AWS Batch. The results of the simulations performed by the research team are provided to the trading team, which monetizes the trading signals found.

 

Transcript: 

 

Interviewer: Welcome to This Is My Architecture. Today I’m here with Matteo from GSR.

Matteo: Hi, thanks for having me. 

Interviewer: Hi Matteo. Can you tell us about GSR? 

Matteo: Sure. GSR is a global trading firm and investor in the exciting world of cryptocurrencies and digital-asset trading. We specialize in providing liquidity, risk management, and structured products to institutional participants in the crypto ecosystem.

Interviewer: Interesting. So what are your technical challenges and how is the cloud helping you with them? 

Matteo: Our technology is connected to over 45 exchanges around the world, so we need fast and secure connectivity to them. AWS’s global infrastructure provides just that.

Interviewer: That’s great to hear. So can you walk us through the architecture?

Matteo: Sure. Our DevOps team manages and provisions Kubernetes clusters on top of EC2 instances running in different availability zones to withstand regional failures. We run a multitude of globally distributed production clusters, often in key regions where exchanges might be colocated on AWS.

Interviewer: Okay. 

Matteo: This allows us to optimize exchange connectivity, increasing throughput and lowering latency as a result.
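The transcript doesn’t say whether these are EKS or self-managed clusters. As a rough sketch, assuming EKS managed node groups, a node group spanning several availability zones might be provisioned like this; the cluster name, subnet IDs, instance type, and IAM role ARN are all illustrative placeholders, not GSR’s actual setup:

```python
# Hypothetical sketch: an EKS managed node group whose subnets span
# multiple availability zones, so node placement survives a zone outage.
import boto3

eks = boto3.client("eks", region_name="ap-northeast-1")

eks.create_nodegroup(
    clusterName="trading-cluster",          # placeholder cluster name
    nodegroupName="market-maker-nodes",
    # One subnet per availability zone; EKS spreads nodes across them.
    subnets=["subnet-az1-example", "subnet-az2-example", "subnet-az3-example"],
    instanceTypes=["c5.2xlarge"],           # assumed instance type
    scalingConfig={"minSize": 3, "desiredSize": 6, "maxSize": 12},
    nodeRole="arn:aws:iam::123456789012:role/eks-node-role",  # placeholder
)
```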

Interviewer: I see. And how can you connect to those exchanges from your infrastructure? 

Matteo: When available, we leverage AWS PrivateLink to simulate a cross-connect to exchanges that run in the same AWS region.
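A minimal sketch of the PrivateLink pattern Matteo describes: the exchange publishes an endpoint service, and the consumer creates an interface VPC endpoint to it in its own VPC. The service name and network IDs below are hypothetical:

```python
# Hypothetical sketch: an interface VPC endpoint to an exchange's
# PrivateLink endpoint service in the same region.
import boto3

ec2 = boto3.client("ec2", region_name="ap-northeast-1")

response = ec2.create_vpc_endpoint(
    VpcEndpointType="Interface",
    VpcId="vpc-0example",
    # Endpoint service name published by the exchange (placeholder).
    ServiceName="com.amazonaws.vpce.ap-northeast-1.vpce-svc-0exchange",
    SubnetIds=["subnet-az1-example", "subnet-az2-example"],
    SecurityGroupIds=["sg-0example"],
    PrivateDnsEnabled=True,
)
print(response["VpcEndpoint"]["VpcEndpointId"])
```

Traffic to the exchange then stays on the AWS network rather than traversing the public internet, which is what makes it behave like a cross-connect.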

Interviewer: Okay, and how do you do that if Private Link is not available? 

Matteo: For those exchanges that do not run on AWS, we leverage AWS Direct Connect, which gives us direct market access without having to manage physical hardware.
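A sketch of the Direct Connect side, assuming a dedicated connection already exists: attaching a private virtual interface to it provides private network access to the off-AWS venue. The connection ID, VLAN, BGP ASN, and gateway ID are placeholders:

```python
# Hypothetical sketch: a private virtual interface on an existing
# Direct Connect connection, for private access to an off-AWS exchange.
import boto3

dx = boto3.client("directconnect", region_name="eu-west-1")

vif = dx.create_private_virtual_interface(
    connectionId="dxcon-exampleid",         # existing DX connection
    newPrivateVirtualInterface={
        "virtualInterfaceName": "exchange-market-access",
        "vlan": 101,
        "asn": 65000,                       # customer-side BGP ASN
        "directConnectGatewayId": "dx-gw-exampleid",
    },
)
print(vif["virtualInterfaceState"])
```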

Interviewer: Okay, that’s cool. So you connect from multiple AWS regions to different exchanges around the world. Now, when you have the data in those AWS regions, do you need to sync the data across those regions?

Matteo: We do. All our live trading data lives in Amazon Aurora. We store millions of market trades and other types of relational data in our database. Aurora allows us to scale and replicate our clusters globally by using Aurora Global Database.
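A sketch of the Aurora Global Database setup Matteo mentions, assuming an existing primary cluster; the engine, regions, and identifiers are illustrative, not GSR’s actual configuration:

```python
# Hypothetical sketch: promote an existing Aurora cluster into a global
# database, then add a secondary-region cluster for cross-region replication.
import boto3

rds_primary = boto3.client("rds", region_name="ap-northeast-1")
rds_primary.create_global_cluster(
    GlobalClusterIdentifier="trading-global",
    SourceDBClusterIdentifier=(
        "arn:aws:rds:ap-northeast-1:123456789012:cluster:trading-primary"
    ),
)

# The secondary cluster joins the global cluster and replicates the
# primary's market-trade data with low lag.
rds_secondary = boto3.client("rds", region_name="eu-west-1")
rds_secondary.create_db_cluster(
    DBClusterIdentifier="trading-secondary",
    Engine="aurora-postgresql",             # assumed engine
    GlobalClusterIdentifier="trading-global",
)
```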

Interviewer: That’s good. And I see that you have many other storage services here. Can you walk us through them? 

Matteo: Most of our workloads, like I said, are stateless. On top of Kubernetes, we use ElastiCache to persist key-value data, for instance order flow, which is transient in nature. That simplifies our setup, as we don’t need to provision persistent volumes directly attached to the workloads. We run an ElastiCache cluster per region, which serves the local workloads. We also use EFS to store the terabytes of market data that we collect on a daily basis, recorded in our proprietary format. We then leverage DataSync to sync our EFS production volumes daily to the S3 buckets that are used when running simulations. We found that S3 scales much better than EFS when running simulations at scale, as we can drive S3 throughput much higher than what EFS provides.
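A sketch of that daily EFS-to-S3 sync, assuming DataSync locations for the EFS volume and the S3 bucket have already been created (via create_location_efs and create_location_s3); the location ARNs and the cron schedule are placeholders:

```python
# Hypothetical sketch: a DataSync task that copies the EFS production
# volume to an S3 bucket once a day.
import boto3

datasync = boto3.client("datasync", region_name="ap-northeast-1")

task = datasync.create_task(
    SourceLocationArn=(
        "arn:aws:datasync:ap-northeast-1:123456789012:location/loc-efs-example"
    ),
    DestinationLocationArn=(
        "arn:aws:datasync:ap-northeast-1:123456789012:location/loc-s3-example"
    ),
    Name="efs-market-data-to-s3",
    # Run daily, after the day's market data has been recorded.
    Schedule={"ScheduleExpression": "cron(0 2 * * ? *)"},
)
print(task["TaskArn"])
```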

Interviewer: Fantastic. And you just mentioned that you use the data here in S3 to run simulations. So how do you run those simulations?

Matteo: For that, we leverage AWS Batch. Our research team needs to analyze tons of market data to find or optimize trading signals, or to perform other types of data analytics. Market data is consumed from S3 and cached on EC2 instances for data locality, and results are pushed back to S3 for easy retrieval, so that our users, specifically the research team, can consume this data very easily.
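One plausible way to run such simulations, as a sketch only, is an AWS Batch array job that fans out over shards of the market data in S3; the queue and job-definition names below are hypothetical:

```python
# Hypothetical sketch: submit a simulation run as an AWS Batch array job.
# Each child task would read a shard of market data from S3, cache it on
# its EC2 instance, and write results back to S3.
import boto3

batch = boto3.client("batch", region_name="ap-northeast-1")

job = batch.submit_job(
    jobName="market-data-simulation",
    jobQueue="research-queue",              # placeholder queue
    jobDefinition="simulation-jobdef:1",    # placeholder job definition
    # Fan out over 500 data shards; each child task receives its index
    # via the AWS_BATCH_JOB_ARRAY_INDEX environment variable.
    arrayProperties={"size": 500},
)
print(job["jobId"])
```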

Interviewer: Wonderful. And so, who will actually use the results of the analysis that has been performed by your research teams? 

Matteo: The research team provides these analyses and results to the trading team, who will monetize the trading signals they found and use them in our automated trading strategies.

Interviewer: Perfect. Thanks a lot, Matteo, for sharing your architecture with us today.

Matteo: Thanks for the opportunity.

Interviewer: And thanks for watching This Is My Architecture.