Strengthening its enterprise portfolio in the country, Intel India has announced the launch of the Intel® Xeon Phi™ processor. As data volumes continue to grow in size and complexity, new hardware, software and architectures are needed to drive deeper insight and accelerate new discoveries, business innovation and the next evolution of analytics in machine learning and artificial intelligence.
A key to unlocking these deeper insights is the new Intel® Xeon Phi™ processor. As a foundational element of Intel® Scalable System Framework (Intel® SSF), the Intel® Xeon Phi™ product family is part of a complete solution that brings together key technologies for easy-to-deploy and high-performance clusters.
Currently, Intel enables high-performance computing in India, largely for the science and technology research vertical, through its family of Intel® Xeon® processors. With the availability of this new processor, which can help bring products to market sooner, solve complex problems more quickly, and power simulations that reduce the need for physical testing, the company is looking to collaborate more closely with the automotive, manufacturing and aerospace industries.
Solving the Biggest Challenges Faster with the Intel® Xeon Phi™ Processor Family
The new Intel Xeon Phi™ processor is Intel’s first bootable host processor specifically designed for highly parallel workloads, and the first to integrate both memory and fabric technologies. As a bootable x86 CPU, the Intel Xeon Phi processor scales efficiently without being constrained by a dependency on the PCIe bus, as GPU accelerators are. By eliminating this dependency, the Intel Xeon Phi processor offers greater scalability and can handle a wider variety of workloads and configurations than accelerator products.
Across a wide range of applications and environments – from machine learning to high-performance computing (HPC) – the Intel Xeon Phi product family helps solve the biggest computational challenges faster and with greater efficiency and scale [3]. The product family also helps drive new breakthroughs using high-performance modeling and simulation, visualization and data analytics.
Additional Intel Xeon Phi features and benefits include:
Features up to 72 powerful and efficient cores with ultra-wide vector capabilities (Intel® Advanced Vector Extensions 512, or AVX-512), raising the bar for highly parallel computing performance.
Delivers data center-class CPU scalability and reliability for running high-performance workloads such as machine learning where scaling efficiency is critical for rapid training of complex neural networks.
Offers binary compatibility with Intel® Xeon® processors, allowing any x86 workload to run. This optimizes asset utilization across the data center, while the use of a common programming model increases productivity through a shared developer base and code reuse.
Built on general-purpose x86 CPU architecture and open standards, with the support of a broad ecosystem of partners, programming languages and available tools – for superior flexibility, software portability and reusability.
To date, Intel has shipped tens of thousands of units and expects to sell a total of more than 100,000 units this year. The product family’s broad ecosystem support includes more than 50 OEM, ISV and middleware partners. To learn more, visit www.intel.com/xeonphi/partners.
Machine Learning Goes Deeper with Intel® Xeon Phi™ Processors
Machine learning implementations require an enormous amount of compute power to run mathematical algorithms and process huge amounts of data. With these challenges in mind, Intel has expanded its range of technologies for machine learning with the release of the Intel® Xeon Phi™ processor family. The Intel Xeon Phi processor offers robust performance for machine learning training models, and with the flexibility of a bootable host processor, it is capable of running multiple analytics workloads. Intel® Scalable System Framework-based clusters, powered by Intel Xeon Phi processors with available integrated Intel® Omni-Path Architecture, enable data scientists to run complex neural networks and complete model training in significantly less time. In a 32-node infrastructure, the Intel Xeon Phi family offers up to 1.38 times better scaling than GPUs, and in a 128-node infrastructure, model training can be completed up to 50 times faster using the Intel Xeon Phi family [3].
The Intel Xeon Phi family is complemented by the Intel® Xeon® processor E5 family, the most widely deployed infrastructure for machine learning [4]. The Intel Xeon processor E5 v4 product family is well suited for machine learning scoring models and provides strong performance and value for a wide variety of data center workloads. Together, these Intel Xeon processor families offer developers a consistent programming model for training and scoring, and a common architecture that can be used for high-performance computing, data analytics and machine learning workloads.