Meeting the rapidly growing demands of our customers’ high performance computing and data-intensive workloads requires deep innovation — at Google Cloud, we know we can’t rely on the ever-faster CPUs that Moore’s Law delivered in the past. Customers can either optimize their workloads for a given platform, or we can offer them a platform that is optimized for their specific needs. At Google Cloud, we choose the latter.
Today, we have an exciting new release resulting from these efforts: the new C3 machine series powered by the 4th Gen Intel Xeon Scalable processor and Google’s custom Intel Infrastructure Processing Unit (IPU). Along with the recently announced Hyperdisk block storage, which offers 80% higher IOPS per vCPU for high-end database management system (DBMS) workloads than other hyperscalers, C3 machine instances deliver strong performance gains for high performance computing and data-intensive workloads. Customers such as Snap, for example, have seen approximately a 20% increase in performance for a key workload over the previous-generation C2.
The C3 machine series is just the latest example of this architectural approach. For over two decades, Google has purpose-built and designed some of the world’s most efficient and scalable computing systems to meet the needs of our customers. We built the Tensor Processing Unit (TPU) to power real-time voice search, photo object recognition, and interactive language translation; unveiled Titan, a secure, low-power microcontroller to help ensure that every machine boots from a trusted state; and launched Video Coding Units (VCUs) to enable video distribution that addresses a range of formats and client requirements. We engineer golden paths from silicon to the console, using a combination of purpose-built infrastructure, prescriptive architectures, and an open ecosystem to deliver what we term workload-optimized infrastructure.
Getting to know the C3 machine series
The Compute Engine C3 machine series, now available in Private Preview, is the first VM in the public cloud with the 4th Gen Intel Xeon Scalable processor and with Google’s custom Intel IPU. C3 machine instances use offload hardware for more predictable and efficient compute, high-performance storage, and a programmable packet processing capability for low latency and accelerated, secure networking.
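For readers who want to try the series once they have preview access, creating a C3 instance follows the standard Compute Engine workflow. This is a minimal sketch: the instance name, machine type, and zone below are illustrative, and C3 availability depends on your project being enrolled in the Private Preview.

```shell
# Create a C3 instance (requires Private Preview access; the machine
# type and zone shown are illustrative -- substitute values enabled
# for your project).
gcloud compute instances create my-c3-vm \
    --machine-type=c3-highcpu-8 \
    --zone=us-central1-a \
    --image-family=debian-11 \
    --image-project=debian-cloud
```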
“We are pleased to have co-designed the first ASIC Infrastructure Processing Unit with Google Cloud, which has now launched in the new C3 machine series. A first of its kind in any public cloud, C3 VMs will run workloads on 4th Gen Intel Xeon Scalable processors while they free up programmable packet processing to the IPUs securely at line rates of 200Gb/s. This Intel and Google collaboration enables customers through infrastructure that is more secure, flexible, and performant.” – Nick McKeown, Senior Vice President, Intel Fellow and General Manager of Network and Edge Group
The System on a Chip hardware architecture introduced in C3 VMs can enable better security, isolation, and performance. In the future, this purpose-built architecture will also allow us to offer a richer product portfolio, such as support for native bare-metal instances.
Hyperdisk block storage and 200 Gbps networking
Block storage and VMs go hand in hand. Last month, we announced the Preview release of Hyperdisk, our next-generation block storage. The new architecture decouples compute instance sizing from storage performance to deliver 80% higher IOPS per vCPU than other leading hyperscale cloud providers. And compared with the previous-generation C2, C3 VMs with Hyperdisk deliver 4x higher throughput and 10x higher IOPS. Now, you don’t have to choose larger, more expensive compute instances just to get the storage performance you need for data workloads such as Hadoop and Microsoft SQL Server.
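The decoupling described above means you set IOPS on the disk itself rather than sizing up the VM. A minimal sketch of provisioning and attaching a Hyperdisk Extreme volume, assuming preview access (the disk name, size, IOPS value, and zone are illustrative):

```shell
# Provision a Hyperdisk Extreme volume with IOPS specified
# independently of the VM's vCPU count.
gcloud compute disks create my-hyperdisk \
    --type=hyperdisk-extreme \
    --size=1TB \
    --provisioned-iops=100000 \
    --zone=us-central1-a

# Attach the volume to an existing instance.
gcloud compute instances attach-disk my-c3-vm \
    --disk=my-hyperdisk \
    --zone=us-central1-a
```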
To enable high performance computing workloads, C3 VMs also feature 200 Gbps low-latency networking powered by our custom IPU, as well as line-rate encryption using the open source PSP protocol.
What our customers and partners are saying
“We were pleased to observe a 20% increase in performance over the current generation C2 VMs from Google Cloud in testing with one of our key workloads. These continued performance improvements enable better end user experience and application cost efficiency.” – Aaron Sheldon, Sr. Software Engineer, Snap Inc.
“Based on the initial performance data, running weather research and forecasting (WRF) on C3 clusters can deliver as much as 10x quicker time to results for about the same computational cost. This will significantly accelerate R&D for our customers in weather, environment, and engineering domains.” — Michael Wilde, CEO, Parallel Works Inc.
“In early testing with our flagship products, including Ansys Fluent, Mechanical and LS-DYNA, on the new Google Cloud C3 VM, we’re seeing up to 3x performance gains over C2 VMs due to higher memory bandwidth and lower network latency.” – Shane Emswiler, Senior Vice President of Products, Ansys
Where we are headed
With the exponential rise in the complexity of cloud infrastructure, we as an industry must turn to automation to manage these platforms efficiently at scale. Along with Infrastructure as Code, custom chips like Titan, the TPU, and the IPU pave the way for a not-so-distant future where we’ll automate over half of all infrastructure decisions, configuring systems dynamically in response to usage patterns. At Google Cloud, we are committed to continuing our long history of hardware innovation with a focus on workload optimization and automation.
To learn more about C3 VMs and Hyperdisk, check out our session at NEXT ’22, “How Google Cloud optimizes infrastructure for your workloads.” To request access to the C3 VMs or Hyperdisk, please reach out to your sales representative or account manager.