Betting on Cloud SQL pays off for oddschecker

oddschecker is the UK’s largest sports odds comparison website. We collect bookmaker pricing for sports teams and games across multiple territories and present the best prices to our customers in a single collated view. This view is valuable to our customers because prices differ from one bookmaker to another. In the UK, we work with 25 different bookmakers, and we recently launched a U.S. site as well as Spanish and Italian platforms, each with its own set of bookmakers.

In the spring of 2018, we successfully migrated to Google Cloud. We were interested in Google from the start because we knew they would be far better at managing our database needs than we ever could be, so we decided to take advantage of their managed services: Cloud SQL for MySQL, Memorystore for Redis, and Google Kubernetes Engine (GKE). As a result of the move, we’ve freed up developer time that used to be spent on the day-to-day hassles of database management, and our move to Cloud SQL has set the organization up to make further architecture and roadmap innovations in the future.

Google Cloud’s managed services were a sure bet

We chose Google Cloud because it supports the most popular engines, MySQL, PostgreSQL, and SQL Server, which means we can work the way we want to. We opted for Cloud SQL primarily because of its ease of use. We were originally using on-premises databases, so we had to have large, custom virtual machines (VMs), disks, and cards to get the power we needed. Prior to the migration to Google Cloud, we were running in a private data center on custom hardware using MySQL 5.2, with about half a terabyte of data. When testing Cloud SQL, we replicated our system and found that it performed the way we were hoping it would. Though we had initially considered a hybrid migration, where we would run parts in the cloud and chip away at making the full move over time, we ultimately decided to go all in with Google Cloud. We spun up an entire oddschecker infrastructure in parallel, then did an overnight migration. After backing up and restoring to Cloud SQL, everything was ready to go.
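
The post doesn’t spell out the exact steps, but a minimal sketch of that backup-and-restore flow, assuming a mysqldump file already staged in a Cloud Storage bucket and using made-up instance, bucket, and database names, could drive the gcloud CLI like this:

```python
# Illustrative only: the kind of gcloud-driven backup-and-restore flow the
# migration describes. Instance, bucket, and database names are hypothetical.
import subprocess

def run(cmd):
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

# 1. Provision a Cloud SQL for MySQL instance sized for the workload.
run([
    "gcloud", "sql", "instances", "create", "oddschecker-mysql",  # hypothetical name
    "--database-version=MYSQL_5_7",
    "--tier=db-n1-standard-64",
    "--storage-size=1000GB",
    "--storage-auto-increase",
    "--region=europe-west2",
])

# 2. Restore the on-prem backup (a mysqldump file staged in Cloud Storage).
run([
    "gcloud", "sql", "import", "sql", "oddschecker-mysql",
    "gs://example-migration-bucket/onprem-dump.sql",  # hypothetical bucket
    "--database=oddschecker",
])
```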

Cloud SQL covers the performance spread and then some

We were counting on Cloud SQL to meet our performance demands while radically transforming customer experiences, and it delivered everything we needed. The oddschecker site aggregates multiple platforms into a convenient, collated view that shows the odds and the status of each game. We pull 8,000 updates a second from our different operators because our customers need prices delivered quickly; otherwise they’re out of date, irrelevant, and, ultimately, bad for business.

Everything now runs on a single, large Cloud SQL database: a 64-CPU machine with a terabyte of storage that automatically adjusts its size as needed (we consistently use about 800 gigabytes of that). We handle a couple thousand reads per second on average, and about half that number of writes, running on MySQL 5.7.

Because it’s our one source of truth for all onsite data, the database is critical. That data includes tips and articles, as well as our user database for new onsite customer registrations. We also keep our hierarchy there: a tree of information that structures each sport and team, all the way down to the markets and the matches customers bet on. In addition, we keep sports data, odds, and commentary; in short, basically every data point on the site comes from that Cloud SQL database.
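
The schema itself isn’t described in the post; purely as an illustration of that tree idea, with invented rows and field names, the hierarchy could be stored as an adjacency list (each row pointing at its parent) and assembled in application code:

```python
# Illustrative sketch: assemble the sport -> competition -> match -> market
# tree from an adjacency-list table (id, parent_id, name). All rows and names
# here are invented; the real oddschecker schema isn't described in the post.
from collections import defaultdict

rows = [
    (1, None, "Football"),
    (2, 1, "Premier League"),
    (3, 2, "Arsenal v Chelsea"),
    (4, 3, "Match Winner"),  # a market customers can bet on
]

def build_tree(rows):
    nodes = {}
    children = defaultdict(list)
    for node_id, parent_id, name in rows:
        nodes[node_id] = {"id": node_id, "name": name, "children": []}
        children[parent_id].append(node_id)
    for parent_id, child_ids in children.items():
        if parent_id is not None:
            nodes[parent_id]["children"] = [nodes[c] for c in child_ids]
    return [nodes[c] for c in children[None]]  # roots are the sports

tree = build_tree(rows)
print(tree[0]["children"][0]["children"][0]["name"])  # Arsenal v Chelsea
```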

GKE provides the juice for delivering prices to customers

Currently, more than 90% of our workload, including our website, runs on GKE, with a few VMs running some of our legacy kit as well. We have multiple abstractions in our GKE clusters, and we pull information from the various platforms, each through its own API. When data comes in via the ingress, our API layer routes it down to the services underneath. From there, it eventually flows through our proxies and on to Cloud SQL.
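
The post doesn’t say which proxy sits in that path, but a common pattern is the Cloud SQL Auth Proxy running as a sidecar in the pod; a minimal sketch of a service reading odds through it, assuming the proxy listens on 127.0.0.1:3306 and using the PyMySQL driver with placeholder credentials and table names, might look like this:

```python
# Minimal sketch of a GKE service reading odds through a Cloud SQL Auth Proxy
# sidecar listening on 127.0.0.1:3306. Credentials, database, and table names
# are placeholders.
import pymysql
import pymysql.cursors

def get_connection():
    return pymysql.connect(
        host="127.0.0.1",   # the proxy sidecar, not the database's own IP
        port=3306,
        user="odds_reader",
        password="change-me",
        database="oddschecker",
        cursorclass=pymysql.cursors.DictCursor,
    )

def latest_prices(event_id):
    conn = get_connection()
    try:
        with conn.cursor() as cur:
            cur.execute(
                "SELECT bookmaker, price, updated_at "
                "FROM odds WHERE event_id = %s ORDER BY updated_at DESC",
                (event_id,),
            )
            return cur.fetchall()
    finally:
        conn.close()
```

Keeping the driver pointed at localhost means the encrypted connection to the actual Cloud SQL instance is handled by the proxy rather than by each service.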

On average, we try to deliver a price to customers within seven seconds of its publication by an operator. That’s complicated by some of the language processing that needs to happen, since bookmakers often call the same teams by different names. We have to do aggregation as well as the odds comparison through a complex, homegrown mapping system that normalizes the data. Once again, Google Cloud delivered by making it possible for us to come through for our customers in a big way.
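
The homegrown mapping system isn’t described in any detail; as a toy sketch of the normalization idea, with an invented alias table and cleaning rules:

```python
# Toy sketch: normalize bookmaker-specific team names to one canonical key so
# prices for the same team can be aggregated. The alias table and cleaning
# rules are invented for illustration.
import re

ALIASES = {
    "man utd": "manchester-united",
    "man united": "manchester-united",
    "manchester united fc": "manchester-united",
}

def canonical_team(raw_name):
    # Lowercase, drop punctuation, collapse whitespace, then look up aliases.
    cleaned = re.sub(r"[^a-z0-9 ]", "", raw_name.lower())
    cleaned = re.sub(r"\s+", " ", cleaned).strip()
    return ALIASES.get(cleaned, cleaned.replace(" ", "-"))

assert canonical_team("Man Utd") == "manchester-united"
assert canonical_team("Manchester United F.C.") == "manchester-united"
```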

Memorystore for Redis delivers the cache, consistent key-value stores, and more

As for Memorystore for Redis, we’re using versions 3.2 and 4.0, with different teams using it for different purposes. We have 16 Memorystore instances running, with our API using one of the largest. Other instances handle the caching of site content, price updates, and unmapped bets. We also use it as a key-value store and to keep some of our services stateless. This way, if we want to autoscale, we have a consistent key-value store that doesn’t live inside the app, so we can quickly share odds data across services. For some data, we don’t want to hit the database as much, so we keep it in Redis instances for ease of lookup.
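
As a rough sketch of that cache-aside pattern using the redis-py client, with a made-up key scheme, TTL, and loader function:

```python
# Illustrative cache-aside lookup against Memorystore for Redis using the
# redis-py client. The key scheme, TTL, and loader callback are made up.
import json
import redis

r = redis.Redis(host="10.0.0.3", port=6379)  # placeholder Memorystore IP

def get_article(article_id, load_from_db):
    key = f"content:article:{article_id}"
    cached = r.get(key)
    if cached is not None:
        return json.loads(cached)
    article = load_from_db(article_id)      # fall back to Cloud SQL
    r.set(key, json.dumps(article), ex=60)  # keep it hot for 60 seconds
    return article
```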

If you look at the odds data grid for each game on our site, each row shows all the prices for a specific bet. One line across is one key value, stored in Redis, which we can look up and share across services.
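
One hedged way to picture that, with an invented key layout, is a single Redis hash per bet, one field per bookmaker:

```python
# Sketch: one odds-grid row as a single Redis hash. The key names a bet,
# each field is a bookmaker, each value a price. The layout is invented.
import redis

r = redis.Redis(host="10.0.0.3", port=6379)  # placeholder Memorystore IP

def publish_row(bet_id, prices):
    """prices: dict of bookmaker -> decimal odds, e.g. {"bookmaker_a": 2.10}."""
    r.hset(f"odds:bet:{bet_id}", mapping={b: str(p) for b, p in prices.items()})

def read_row(bet_id):
    raw = r.hgetall(f"odds:bet:{bet_id}")
    return {b.decode(): float(p) for b, p in raw.items()}

publish_row("arsenal-v-chelsea:match-winner", {"bookmaker_a": 2.10, "bookmaker_b": 2.05})
print(read_row("arsenal-v-chelsea:match-winner"))
```

Because HGETALL returns the whole row in one round trip, any service can render that grid line without touching Cloud SQL.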

Removing headaches and breaking monoliths

As we suspected, Google Cloud is better at managing, patching, and upgrading our database than we’d ever be. Since our migration, we haven’t had to touch the database, except for some minor resizing.

As for future goals, we’d like to break up our monolithic database, in part to reduce blast-radius concerns, because it’s become less nimble over the years. Over the past three years, we’ve been building functional platforms, like our recently completed, reengineered backend aggregation platform. It covers the same use cases, so we can start chipping away at the monolith.

The instance contains 30 to 40 critical and non-critical databases, and we don’t want to have read replicas or failover on all of them. Another goal is to move away from storing content in the database, which accounts for about half a terabyte. By adopting more of a microservices architecture, with each service owning its own part of the database, we could be far more flexible.

For example, we could give each one a completely different Cloud SQL profile. Some databases are write-heavy, some are read-heavy, and some are both; we could use custom, individually scalable machine types that cater to those use cases and provide a general improvement across the site. Breaking the database apart would also give us the freedom to make changes when we know they won’t impact the other 90 databases.
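
As a hedged illustration of that idea, with invented service names, tiers, and flags, each carved-out database could get its own instance profile:

```python
# Hypothetical per-service Cloud SQL profiles after splitting the monolith.
# Service names, tiers, and flags are invented; the point is that read-heavy
# and write-heavy workloads no longer share one 64-CPU machine.
PROFILES = {
    "content":       {"tier": "db-n1-standard-4",  "extra": []},
    "user-accounts": {"tier": "db-n1-standard-8",  "extra": []},
    "odds-ingest":   {"tier": "db-n1-highmem-16",  "extra": ["--availability-type=REGIONAL"]},
}

for name, profile in PROFILES.items():
    cmd = [
        "gcloud", "sql", "instances", "create", f"oddschecker-{name}",
        "--database-version=MYSQL_5_7",
        f"--tier={profile['tier']}",
        "--region=europe-west2",
        *profile["extra"],
    ]
    print(" ".join(cmd))  # dry run: print the commands rather than executing them
```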

In conclusion, Cloud SQL and Google Cloud’s suite of fully integrated managed services helped oddschecker make a painless move to the cloud, and meet the demands of odds comparisons, where every second matters. 

Learn more about oddschecker and Cloud SQL. You can also check out How BBVA is using Cloud SQL for its next-generation IT initiatives.
