Accelerate Llama 2 with Intel® AI Hardware and Software Optimizations

18 Dec 2023
Democratizing Access to Large Language Models

This article is a sponsored article. Articles such as these are intended to provide you with information on products and services that we consider useful and of value to developers.

We are excited to see Meta release Llama 2, with the intent to further democratize access to large language models (LLMs). We believe that making the models more widely available will facilitate efforts across the AI community to benefit the world at large. LLMs offer one of the most promising AI technologies to benefit society, given the remarkable capability they have demonstrated in generating text, summarizing and translating content, responding to questions, engaging in conversations, and performing more complicated tasks, such as solving math problems or reasoning. LLMs have the potential to unlock new forms of creativity and insights and inspire passion in the AI community to advance the technology.

Llama 2 is designed to help developers, researchers, and organizations build generative AI-powered tools and experiences. Meta released pretrained and fine-tuned versions of Llama 2 with 7B, 13B, and 70B parameters. With Llama 2, Meta implemented three core safety techniques across the company’s fine-tuned models: supervised safety fine-tuning, targeted safety context distillation, and safety reinforcement learning from human feedback. This has enabled Meta to improve safety performance. Democratizing access will also allow vulnerabilities to be continually identified and mitigated in a transparent and open manner.

Intel offers a portfolio of AI solutions that provide competitive and compelling options for the community to develop and run models like Llama 2. Intel’s rich hardware portfolio, combined with optimized open software, provides alternatives to mitigate the challenge of accessing limited compute resources. With the release of Llama 2, we are happy to share initial inference performance of 7B and 13B parameter models on Intel’s AI portfolio, including Habana Gaudi2* deep learning accelerator, 4th Gen Intel® Xeon® Scalable processors, Intel® Xeon® CPU Max Series, and Intel® Data Center GPU Max. The results that we share here are for out-of-box performance with our currently released software, with additional performance gains expected in upcoming releases. We are also enabling the 70B parameter model and will provide an update shortly to keep the community informed.

Habana Gaudi2* Deep Learning Accelerator

Habana Gaudi2 is designed to provide high-performance, high-efficiency training and inference, and is particularly suited to large language models such as Llama and Llama 2. Each Gaudi2 accelerator features 96 GB of on-chip HBM2E to meet the memory demands of LLMs, thus accelerating inference performance. Gaudi2 is supported by the Habana SynapseAI* software suite, which integrates PyTorch* and DeepSpeed* for both training and inference. Moreover, support for HPU Graphs and DeepSpeed inference have recently been introduced in SynapseAI, and these are well-suited to latency-sensitive inference applications. Further software optimizations are coming to Gaudi2, including support for the FP8 data type in Q3 2023, which is expected to deliver substantial performance boosts, increasing throughput and reducing latency in LLM execution.

Performance on LLMs requires flexible and nimble scalability to reduce network bottlenecks, both within the server and across nodes. Every Gaudi2 integrates 24 ports of 100-Gigabit Ethernet: 21 ports can be dedicated to all-to-all connectivity among the eight Gaudi2s within a server, with the remaining three ports per Gaudi2 dedicated to scale-out. This network configuration helps accelerate scaled performance both within and beyond the server.

Gaudi2 has demonstrated excellent training performance on large language models in the recently published MLPerf* training benchmark, training the 175B parameter GPT-3 model on 384 Gaudi2 accelerators. (See New MLCommons Results Highlight Impressive Competitive AI Gains for Intel for more information.) This proven performance makes Gaudi2 a highly effective solution for both training and inference of Llama and Llama 2.

Below, we share the inference performance of the Llama 2 7B and Llama 2 13B models, respectively, on a single Habana Gaudi2 device with a batch size of one, an output token length of 256, and various input token lengths using mixed precision (BF16). The performance metric reported is the latency per token (excluding the first token). The optimum-habana text generation script was used to run inference on the Llama models. The Hugging Face optimum-habana library makes it simple to deploy these models on Gaudi accelerators with minimal code changes. In Figure 1, we see that for 128 to 2K input tokens, Gaudi2 inference latency for the 7B model ranges from 9.0 to 12.2 milliseconds per token, while for the 13B model, it ranges from 15.5 to 20.4 milliseconds per token. (Hardware and software configuration details are included at the end of this article.)


Figure 1. Llama 2 7B and 13B inference performance on Habana Gaudi2*
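For readers who want to try a similar setup, the sketch below shows the kind of generation flow that the optimum-habana text generation script automates. It is a minimal illustration only, assuming the Habana PyTorch bridge and access to the gated meta-llama/Llama-2-7b-hf checkpoint; the published script additionally applies Gaudi-specific optimizations such as HPU Graphs that are omitted here.

```python
# Minimal sketch: BF16 greedy generation on a single Gaudi2 HPU.
# Assumes the Habana PyTorch bridge (habana_frameworks) is installed;
# the optimum-habana run_generation.py script wraps this flow and adds
# HPU Graphs and other Gaudi-specific optimizations not shown here.
import torch
import habana_frameworks.torch.core  # registers the "hpu" device with PyTorch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "meta-llama/Llama-2-7b-hf"  # gated checkpoint; access granted by Meta
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16)
model = model.eval().to("hpu")  # place the model on the Gaudi2 device

prompt = "Explain large language models in one paragraph."
inputs = tokenizer(prompt, return_tensors="pt").to("hpu")

with torch.no_grad():
    # Batch size 1 and 256 output tokens, matching the setup reported above.
    output = model.generate(**inputs, max_new_tokens=256, do_sample=False)

print(tokenizer.decode(output[0], skip_special_tokens=True))
```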

Get started on your generative AI journey with Llama 2 on the Habana Gaudi platform today. If you would like to get access to Gaudi2, sign up for an instance on the Intel® Developer Cloud or contact Supermicro regarding Gaudi2 Server infrastructure.

Intel® Xeon® Scalable Processor

4th Gen Intel Xeon Scalable processors are general-purpose processors with AI-infused acceleration known as Intel® Advanced Matrix Extensions (Intel® AMX). Specifically, every core has built-in BF16 and INT8 GEMM (general matrix-matrix multiplication) accelerators to speed up deep learning training and inference workloads. In addition, the Intel Xeon CPU Max Series offers 128 GB of high-bandwidth memory (HBM2E) across two sockets, which is beneficial for LLMs because the workload is often memory bandwidth-bound.

Software optimizations for Intel Xeon processors have been upstreamed into deep learning frameworks and are available in the default distributions of PyTorch, TensorFlow*, DeepSpeed, and other AI libraries. Intel leads the development and optimization of the CPU backend of torch.compile, which is a flagship feature in PyTorch 2.0. Intel also offers Intel® Extension for PyTorch to stage the advanced optimizations for Intel® CPUs before they are upstreamed into the official PyTorch distribution.
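As a rough illustration of the pattern described above, the following sketch applies Intel Extension for PyTorch to a Llama 2 checkpoint for BF16 CPU inference. The model ID and prompt are illustrative, and the scripts used to produce the published numbers include further tuning not shown here.

```python
# Minimal sketch: BF16 CPU inference with Intel Extension for PyTorch (IPEX)
# on an Intel AMX-capable 4th Gen Xeon. Illustrative only; the benchmark
# scripts behind the published figures apply additional tuning.
import torch
import intel_extension_for_pytorch as ipex
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "meta-llama/Llama-2-7b-hf"  # illustrative; gated on the Hugging Face Hub
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16).eval()

# Apply IPEX operator and graph optimizations for BF16 execution.
model = ipex.optimize(model, dtype=torch.bfloat16)

inputs = tokenizer("Summarize the benefits of BF16 inference.", return_tensors="pt")
with torch.no_grad(), torch.autocast("cpu", dtype=torch.bfloat16):
    output = model.generate(**inputs, max_new_tokens=128, do_sample=False)

print(tokenizer.decode(output[0], skip_special_tokens=True))
```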

The 4th Gen Intel Xeon processor’s higher memory capacity enables low-latency LLM execution within a single socket, which is applicable to conversational AI and text summarization applications. This evaluation highlights the latency of executing one model per socket for BF16 and INT8. Intel Extension for PyTorch has enabled support for SmoothQuant to maintain good accuracy with INT8 precision models.

Because LLM applications need to generate tokens quickly enough to keep up with the reading speed of a fast human reader, roughly 100 ms per token, we chose token latency (the time to generate each token) as the primary performance metric to report, with ~100 ms per token as the reference threshold. Figures 2 and 3 show that a single socket of a 4th Gen Intel Xeon Scalable processor delivers <100 ms latency for the Llama 2 7B BF16 model and the Llama 2 13B INT8 model.


Figure 2. Llama 2 7B and 13B inference (Bfloat16) performance on Intel® Xeon® Scalable processors


Figure 3. Llama 2 7B and 13B inference (INT8) performance on Intel® Xeon® Scalable processors
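To make the token-latency metric behind Figures 2 and 3 concrete, the sketch below measures the average time per generated token while excluding the first token, which also covers prompt processing. The helper function is our own illustration, not part of any Intel benchmark harness.

```python
# Sketch of the token-latency metric: average time per generated token,
# excluding the first token (which includes prompt processing / prefill).
# measure_token_latency is a hypothetical helper, not an Intel script.
import time
import torch

def measure_token_latency(model, tokenizer, prompt, max_new_tokens=256):
    inputs = tokenizer(prompt, return_tensors="pt")
    with torch.no_grad():
        # Time the first token alone: prefill plus one decode step.
        t0 = time.perf_counter()
        model.generate(**inputs, max_new_tokens=1, do_sample=False)
        first_token_s = time.perf_counter() - t0

        # Time the full generation: prefill plus max_new_tokens decode steps.
        t0 = time.perf_counter()
        model.generate(**inputs, max_new_tokens=max_new_tokens, do_sample=False)
        total_s = time.perf_counter() - t0

    # Milliseconds per token, excluding the first token.
    return 1000.0 * (total_s - first_token_s) / (max_new_tokens - 1)
```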

The Intel Xeon CPU Max Series delivers lower latency for both models, benefiting from the higher bandwidth of its HBM2E memory. With Intel AMX acceleration, customers can improve throughput by using larger batch sizes. A single 4th Gen Intel Xeon processor delivers <100 ms latency for the 7B and 13B parameter models. Users can run two parallel instances, one on each socket, for higher throughput and to serve clients independently. Alternatively, users can leverage Intel Extension for PyTorch and DeepSpeed to run inference across both sockets of a 4th Gen Intel Xeon processor, using tensor parallelism to further reduce latency or to support larger models.
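As one hedged illustration of the tensor-parallel option just mentioned, the sketch below shards a Llama 2 model across two ranks with DeepSpeed’s init_inference API, one rank per CPU socket. Launcher invocation and per-socket core/memory binding are assumed to be handled outside the snippet, and the exact arguments used in Intel’s reference setup may differ.

```python
# Sketch: two-way tensor-parallel CPU inference with DeepSpeed, one rank per
# Xeon socket. Launch with the DeepSpeed launcher using two ranks; CPU and
# memory binding per socket is assumed to be configured outside this snippet.
import torch
import deepspeed
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "meta-llama/Llama-2-13b-hf"  # illustrative
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16).eval()

# Shard the model across 2 ranks (one per socket) with tensor parallelism.
model = deepspeed.init_inference(model, mp_size=2, dtype=torch.bfloat16)

inputs = tokenizer("What is tensor parallelism?", return_tensors="pt")
with torch.no_grad():
    output = model.generate(**inputs, max_new_tokens=128, do_sample=False)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```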

Developers can get more details about running LLMs and Llama 2 on Intel Xeon platforms here. Cloud instances of 4th Gen Intel Xeon Scalable processors are available in preview on AWS*, GCP*, and Azure*, and are generally available on Ali Cloud. Intel will continue adding software optimizations to PyTorch and DeepSpeed to further accelerate Llama 2 and other LLMs.

Intel® Data Center GPU Max Series

The Intel Data Center GPU Max Series delivers parallel compute for HPC and AI acceleration. It is Intel’s highest performing, highest density discrete GPU, packing over 100 billion transistors into a package and containing up to 128 Intel® Xe Cores, and it serves as Intel’s foundational GPU compute building block.

The Intel Data Center GPU Max Series is designed for breakthrough performance in data-intensive computing models used in AI and HPC including:

  • 408 MB of L2 cache based on discrete SRAM technology, 64 MB of L1 cache, and up to 128 GB of high-bandwidth memory (HBM2E)
  • AI-boosting Intel® Xe Matrix Extensions (XMX) with systolic arrays enabling vector and matrix capabilities in a single device

The Intel Data Center GPU Max family of products is unified by oneAPI for a common, open, standards-based programming model that unleashes productivity and performance. Intel® oneAPI tools include advanced compilers, libraries, profilers, and code migration tools to easily migrate CUDA* code to open C++ with SYCL*.

The software enabling and optimizations for Intel Data Center GPU Max are delivered today through open-source framework extensions such as Intel Extension for PyTorch, Intel® Extension for TensorFlow*, and Intel® Extension for DeepSpeed. By using these extensions together with the upstream framework releases, users can realize drop-in acceleration for machine learning workflows.

The inference performance of the Llama 2 7B and 13B parameter models is evaluated on a 600 W OAM device that has two GPUs (tiles) on the package; we used only one of the tiles to run the inference. Figure 4 shows that a single Intel Data Center GPU Max tile can deliver less than 20 milliseconds per token for inference of the 7B model, and 29.2 to 33.8 milliseconds per token for the 13B model, for input token lengths of 32 to 2K tokens. Users can run two parallel instances, one on each tile, for higher throughput and to serve clients independently.


Figure 4. Llama 2 7B and 13B inference performance on Intel® Data Center GPU Max 1550
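For reference, the sketch below shows one way to target a single tile of an Intel Data Center GPU Max through the "xpu" device exposed by Intel Extension for PyTorch. The model ID, prompt, and tile addressing are illustrative assumptions; how tiles are enumerated depends on the driver’s device-hierarchy settings.

```python
# Minimal sketch: FP16 Llama 2 inference on one tile of an Intel Data Center
# GPU Max device via the "xpu" device from Intel Extension for PyTorch.
# Tile enumeration ("xpu:0" as one tile) is an assumption; it depends on
# driver and runtime device-hierarchy settings.
import torch
import intel_extension_for_pytorch as ipex  # registers the "xpu" device
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "meta-llama/Llama-2-7b-hf"  # illustrative
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.float16).eval()

device = "xpu:0"  # one tile; a second instance could target "xpu:1"
model = model.to(device)
model = ipex.optimize(model, dtype=torch.float16)  # IPEX operator optimizations

inputs = tokenizer("What is high-bandwidth memory?", return_tensors="pt").to(device)
with torch.no_grad():
    output = model.generate(**inputs, max_new_tokens=256, do_sample=False)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```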

Customers can get more details about running LLMs and Llama 2 on Intel Data Center GPU platforms here. The Intel Data Center GPU Max cloud instances available on the Intel Developer Cloud are currently in beta.

Beyond inference, Intel has been actively working to accelerate fine-tuning by upstreaming optimizations to the Hugging Face Transformers, PEFT, Accelerate, and Optimum libraries, and by providing reference workflows in Intel® Extension for Transformers. These support efficient deployment of typical LLM-based tasks, such as text generation, code generation, completion, and summarization, on supported Intel platforms.
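To give a sense of the parameter-efficient fine-tuning path mentioned above, here is a brief sketch using the Hugging Face PEFT library to attach LoRA adapters to a Llama 2 model. The rank, scaling, and target-module choices are common illustrative defaults, not the values used in Intel’s reference workflows.

```python
# Sketch: LoRA-based parameter-efficient fine-tuning with Hugging Face PEFT.
# Hyperparameters and target modules are illustrative defaults only.
import torch
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-2-7b-hf", torch_dtype=torch.bfloat16
)

lora_config = LoraConfig(
    r=8,                                  # low-rank adapter dimension
    lora_alpha=16,                        # adapter scaling factor
    target_modules=["q_proj", "v_proj"],  # attention projections in Llama models
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # only a small fraction of weights are trainable
# Training then proceeds with the usual Transformers Trainer or Accelerate loop.
```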

Summary

In this article, we have presented our initial evaluation of the inference performance of the Llama 2 7B and 13B parameter models across Intel’s AI hardware portfolio, including the Habana Gaudi2 deep learning accelerator, 4th Gen Intel Xeon Scalable processors, the Intel Xeon CPU Max Series, and the Intel Data Center GPU Max Series. We are continuing to add optimizations in software releases and will share more evaluations of LLMs and larger Llama 2 models soon.

License

This article, along with any associated source code and files, is licensed under The Code Project Open License (CPOL).


Written By
Intel (United States)
You may know us for our processors. But we do so much more. Intel invents at the boundaries of technology to make amazing experiences possible for business and society, and for every person on Earth.

Harnessing the capability of the cloud, the ubiquity of the Internet of Things, the latest advances in memory and programmable solutions, and the promise of always-on 5G connectivity, Intel is disrupting industries and solving global challenges. Leading on policy, diversity, inclusion, education and sustainability, we create value for our stockholders, customers and society.