
AWS Unveils Updated AWS Graviton4 and AWS Trainium2 Processor Families to Offer Enhanced Price Performance and Energy Efficiency

Image Credit: AWS

At AWS re:Invent, Amazon Web Services (AWS), an Amazon.com company, yesterday announced the next generation of two AWS-designed chip families—AWS Graviton4 and AWS Trainium2—delivering advancements in price performance and energy efficiency for a broad range of customer workloads, including machine learning (ML) training and generative artificial intelligence (AI) applications. Graviton4 and Trainium2 mark the latest innovations in chip design from AWS. With each successive generation of chip, AWS delivers better price performance and energy efficiency, giving customers even more options—in addition to chip/instance combinations featuring the latest chips from third parties like AMD, Intel, and NVIDIA—to run virtually any application or workload on Amazon Elastic Compute Cloud (Amazon EC2).

Graviton4 provides up to 30% better compute performance, 50% more cores, and 75% more memory bandwidth than current generation Graviton3 processors, delivering the best price performance and energy efficiency for a broad range of workloads running on Amazon EC2.

Trainium2 is designed to deliver up to 4x faster training than first generation Trainium chips and will be deployable in EC2 UltraClusters of up to 100,000 chips, making it possible to train foundation models (FMs) and large language models (LLMs) in a fraction of the time, while improving energy efficiency up to 2x.

Graviton4 processors deliver up to 30% better compute performance, 50% more cores, and 75% more memory bandwidth than Graviton3. Graviton4 also raises the bar on security by fully encrypting all high-speed physical hardware interfaces. Graviton4 will be available in memory-optimized Amazon EC2 R8g instances, enabling customers to improve the execution of their high-performance databases, in-memory caches, and big data analytics workloads. R8g instances offer larger instance sizes with up to 3x more vCPUs and 3x more memory than current generation R7g instances. This allows customers to process larger amounts of data, scale their workloads, improve time-to-results, and lower their total cost of ownership. Graviton4-powered R8g instances are available today in preview, with general availability planned in the coming months.

Trainium2 chips are purpose-built for high performance training of FMs and LLMs with up to trillions of parameters. Trainium2 is designed to deliver up to 4x faster training performance and 3x more memory capacity compared to first generation Trainium chips, while improving energy efficiency (performance/watt) up to 2x. Trainium2 will be available in Amazon EC2 Trn2 instances, containing 16 Trainium chips in a single instance. Trn2 instances are intended to enable customers to scale up to 100,000 Trainium2 chips in next generation EC2 UltraClusters, interconnected with AWS Elastic Fabric Adapter (EFA) petabit-scale networking, delivering up to 65 exaflops of compute and giving customers on-demand access to supercomputer-class performance. With this level of scale, customers can train a 300-billion parameter LLM in weeks versus months. By delivering the highest scale-out ML training performance at significantly lower costs, Trn2 instances can help customers unlock and accelerate the next wave of advances in generative AI.

David Brown, vice president of Compute and Networking at AWS

Silicon underpins every customer workload, making it a critical area of innovation for AWS. By focusing our chip designs on real workloads that matter to customers, we’re able to deliver the most advanced cloud infrastructure to them. Graviton4 marks the fourth generation we’ve delivered in just five years, and is the most powerful and energy efficient chip we have ever built for a broad range of workloads. And with the surge of interest in generative AI, Trainium2 will help customers train their ML models faster, at a lower cost, and with better energy efficiency.

Tom Brown, co-founder of Anthropic

Since launching on Amazon Bedrock, Claude has seen rapid adoption from AWS customers. We are working closely with AWS to develop our future foundation models using Trainium chips. Trainium2 will help us build and train models at a very large scale, and we expect it to be at least 4x faster than first generation Trainium chips for some of our key workloads. Our collaboration with AWS will help organizations of all sizes unlock new possibilities, as they use Anthropic’s state-of-the-art AI systems together with AWS’s secure, reliable cloud technology.

Naveen Rao, vice president of Generative AI at Databricks

Thousands of customers have implemented Databricks on AWS, giving them the ability to use MosaicML to pre-train, fine-tune, and serve FMs for a variety of use cases. AWS Trainium gives us the scale and high performance needed to train our Mosaic MPT models, and at a low cost. As we train our next generation Mosaic MPT models, Trainium2 will make it possible to build models even faster, allowing us to provide our customers unprecedented scale and performance so they can bring their own generative AI applications to market more rapidly.

Juergen Mueller, CTO and member of the Executive Board of SAP SE

Customers rely on SAP HANA Cloud to run their mission-critical business processes and next-generation intelligent data applications in the cloud. As part of the migration process of SAP HANA Cloud to AWS Graviton-based Amazon EC2 instances, we have already seen up to 35% better price performance for analytical workloads. In the coming months, we look forward to validating Graviton4, and the benefits it can bring to our joint customers.

Author

Principal Analyst and Senior Editor | IP Networks

Ariana specializes in IP networking, covering both operator networks (core, transport, edge and access) and enterprise and cloud networks. Her work involves analysis of cutting-edge technologies that drive application visibility, traffic awareness, network optimization, network security, virtualization and cloud-native architectures.

She can be reached at ariana.lynn@thefastmode.com
