OVHcloud adds GPUs to help companies power AI’s most demanding workloads

OVHcloud, part of the NVIDIA Partner Network, continues to develop its portfolio of AI solutions, adding new NVIDIA GPU product offerings that form an integral part of its strategic vision for artificial intelligence.

Acknowledging the tremendous impact AI will have in the years to come, OVHcloud is on a mission to help customers grow their businesses, uniting an ecosystem through innovative, easy and affordable AI solutions, featuring transparent, ethical and open models that preserve data privacy. 

Drawing on more than 20 years of infrastructure expertise and a unique vertically integrated industrial model, OVHcloud is designing AI-enabled infrastructures that include best-of-breed NVIDIA H100 and A100 Tensor Core GPUs. Customers will be able to choose from many options to power their most ambitious machine learning workloads, including large language models. 

Adding to the competitively priced options it already offers with previous-generation NVIDIA V100 and V100S GPUs, the Group today announced new offerings based on the NVIDIA H100, A100, L40S and L4 GPUs, with deployment ramping up in the coming weeks. 

New GPU instances with NVIDIA A100 for deep learning training and inference 

New NVIDIA A100 80GB-powered GPU instances are available immediately and let AI specialists run complex projects on highly specialized NVIDIA Tensor Cores. Alongside its exceptional deep learning training abilities, the A100 is also well suited to inference thanks to optimizations targeting those workloads, including LLM-related projects. High-performance computing is another field where A100 GPU instances help unlock the next generation of discoveries through advanced simulations, thanks to double-precision compute and high-bandwidth memory. 
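
As a rough illustration of the kind of workload these instances target, the sketch below runs a single mixed-precision training step in PyTorch; bfloat16 autocast is what routes the heavy matrix math onto the A100's Tensor Cores. The model, batch and hyperparameters are placeholders, not an OVHcloud example.

```python
# Minimal sketch (not OVHcloud-specific): one mixed-precision training step in
# PyTorch, the kind of workload the A100's Tensor Cores accelerate.
import torch
import torch.nn as nn

assert torch.cuda.is_available(), "this sketch expects a CUDA-capable GPU"
device = "cuda"

# Toy model, optimizer and dummy batch; all shapes are illustrative.
model = nn.Sequential(nn.Linear(1024, 4096), nn.ReLU(), nn.Linear(4096, 10)).to(device)
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)
loss_fn = nn.CrossEntropyLoss()

inputs = torch.randn(64, 1024, device=device)
targets = torch.randint(0, 10, (64,), device=device)

# bfloat16 autocast lets the matmuls run on Tensor Cores on A100-class GPUs.
with torch.autocast(device_type="cuda", dtype=torch.bfloat16):
    loss = loss_fn(model(inputs), targets)

loss.backward()
optimizer.step()
optimizer.zero_grad()
print(f"loss={loss.item():.4f}")
```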

A100-based public cloud instances can be configured as A100-180 (1x A100, 15 vCores, 180GB of RAM), A100-360 (2x A100, 30 vCores, 360GB of RAM) or A100-720 (4x A100, 60 vCores, 720GB of RAM). 
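
OVHcloud Public Cloud instances are driven through OpenStack-compatible APIs, so one way to provision one of these flavors programmatically is with the OpenStack SDK. Below is a minimal sketch; the cloud profile, flavor, image, network and keypair names are assumptions to be replaced with values from your own project, not names confirmed by OVHcloud documentation.

```python
# Hypothetical sketch: launching a GPU instance via the OpenStack SDK, since
# OVHcloud Public Cloud exposes OpenStack-compatible APIs. Flavor ("a100-180"),
# image, network and keypair names below are assumptions, not documented values.
import openstack

conn = openstack.connect(cloud="ovhcloud")  # credentials come from clouds.yaml

flavor = conn.compute.find_flavor("a100-180")       # assumed flavor name
image = conn.compute.find_image("Ubuntu 22.04")     # assumed image name
network = conn.network.find_network("Ext-Net")      # assumed public network

server = conn.compute.create_server(
    name="a100-training-node",
    flavor_id=flavor.id,
    image_id=image.id,
    networks=[{"uuid": network.id}],
    key_name="my-ssh-key",                          # assumed existing keypair
)
server = conn.compute.wait_for_server(server)
print(server.status, server.access_ipv4)
```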

New GPU instances with NVIDIA H100 for deep learning training 

OVHcloud is also announcing upcoming H100-based GPU instances built around NVIDIA’s latest accelerator, with compute power starting at 26 teraFLOPS (FP64) per PCIe GPU. Purpose-built for the most demanding AI models, the NVIDIA H100 is the de facto choice for innovation in AI, whether accelerating LLMs with its Transformer Engine or creating generative AI applications. 

For the most demanding use cases, such as extreme fine-tuning and training, the Group will offer NVIDIA H100 SXM-based solutions. With 67 teraFLOPS of FP64 compute power and higher GPU bandwidth, this select offering will showcase the full power of the NVIDIA Hopper GPU architecture. 
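
To give a concrete sense of what the FP8 Transformer Engine path looks like from code, here is a minimal sketch using NVIDIA’s open-source Transformer Engine library for PyTorch (transformer_engine.pytorch). The layer sizes and scaling recipe are arbitrary illustrations rather than settings drawn from OVHcloud’s offering, and the snippet assumes a Hopper-class GPU such as the H100.

```python
# Hedged sketch: FP8 execution with NVIDIA Transformer Engine on a Hopper-class
# GPU. Layer sizes and the scaling recipe are illustrative placeholders.
import torch
import transformer_engine.pytorch as te
from transformer_engine.common.recipe import DelayedScaling, Format

fp8_recipe = DelayedScaling(fp8_format=Format.HYBRID)  # E4M3 forward, E5M2 backward

layer = te.Linear(1024, 4096, bias=True).cuda()        # Transformer Engine drop-in Linear
inp = torch.randn(32, 1024, device="cuda", requires_grad=True)

# The forward pass runs in FP8; the backward pass reuses the scaling factors
# tracked by the delayed-scaling recipe.
with te.fp8_autocast(enabled=True, fp8_recipe=fp8_recipe):
    out = layer(inp)

out.sum().backward()
print(out.shape, out.dtype)
```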

New GPU instances and bare-metal servers with NVIDIA L4 and L40S 

The Group also today unveiled GPU instances featuring NVIDIA L4 GPUs with 24GB of memory. The L4, based on the NVIDIA Ada Lovelace GPU architecture, is a universal GPU for every workload with enhanced AI and video capabilities. It provides efficient compute resources for graphics, simulation, data science and data analytics. 

The NVIDIA L40S GPU with 48GB of memory is also joining the Group’s GPU instances. NVIDIA L40S benefits from fourth-generation Tensor Cores and FP8 Transformer Engine providing robust performance for AI workloads both in training and inferencing. 

These GPUs will be available through public cloud instances as well as in dedicated bare-metal servers, with L4 in SCALE-GPU and L40S in HGR-AI. 

Establishing a foundation to supercharge customer AI journeys 

Thanks to an unprecedented choice of NVIDIA GPU architectures, OVHcloud now delivers an infrastructure designed for AI, giving AI engineers, researchers, data scientists and data practitioners the elasticity of the cloud to support their needs from training to running inference. 

Furthermore, OVHcloud will gradually add NVIDIA H100 and A100 options to its comprehensive set of AI PaaS solutions designed to accompany the data life cycle: AI Notebooks, AI Training and AI Deploy. Together, the OVHcloud AI solutions form a complete, easy-to-use set of tools for exploring data, training models and serving them into production. 
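
For the final “serve them into production” step, a common pattern is a small HTTP inference service packaged into a container image. The sketch below uses FastAPI and a toy PyTorch model purely as an illustration; it is not tied to AI Deploy’s specific packaging or configuration, which are not described here.

```python
# Illustrative sketch of a minimal HTTP inference service, the kind of artefact
# that a "serve into production" step typically wraps in a container.
# FastAPI and the toy model are assumptions, not an AI Deploy requirement.
import torch
import torch.nn as nn
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()
model = nn.Linear(4, 2)   # placeholder for a trained model loaded from storage
model.eval()

class Features(BaseModel):
    values: list[float]   # this toy model expects 4 floats

@app.post("/predict")
def predict(features: Features) -> dict:
    with torch.no_grad():
        scores = model(torch.tensor(features.values).unsqueeze(0))
    return {"scores": scores.squeeze(0).tolist()}

# Run locally with: uvicorn service:app --host 0.0.0.0 --port 8080
```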

Michel Paulin, CEO of OVHcloud, said: “AI will seriously transform our clients’ businesses and we are in a unique position to help them easily transition to this new era. With a one-of-a-kind AI infrastructure that leverages all our expertise and the most sought-after GPUs, we provide world-class performance with all the benefits of the cloud. Our AI solutions customers will also benefit from these novelties through our easy-to-use AI Notebooks, AI Training and AI Deploy offers.”

Matthew McGrigg, director of global development for cloud partners at NVIDIA, said: “Enterprises are looking for flexible cloud service options to drive innovation internally and to help customers adopt new generative AI applications. By offering a full complement of NVIDIA accelerated computing, OVHcloud is able to handle a variety of inference and training workloads.” 

Executing on a strong, dedicated AI roadmap, OVHcloud is set to announce a wave of AI innovations in the coming weeks, designed to further help its customers navigate this new paradigm. 
