Cloud GPU
Overview
Advanced medical imaging for battling cancer. Automated customer service. Cinematic-quality gaming. Next-generation capabilities in AI, high-performance computing (HPC), and graphics are pushing the boundaries of what’s possible. And now, with NVIDIA’s GPU-accelerated solutions available through all top cloud platforms, innovators everywhere can access massive computing power on demand and with ease.
Features
Finish First
by delivering the fastest time to solution
Solve
previously unsolvable challenges
Save
with the best performance ROI across workloads
Purpose
Limitless Compute on Demand
Virtual machines are available globally, around the clock, and can be scaled up or down as needed.
Easy Updates
The latest hardware, software, and services can be accessed with a simple button click.
Simplified IT Management
IT management is streamlined, with access to experts so businesses can stay focused on their core needs.
Flexible Pricing
Free trials, a range of prices, and a variety of leasing options are available.
How it Works
A cloud graphics processing unit (GPU) provides hardware acceleration for an application without requiring a GPU to be deployed on the user’s local device. Common use cases for cloud GPUs include:
Visualization workloads: Powerful server/desktop applications often employ graphically demanding content. Cloud GPUs can be used to accelerate video encoding, rendering, and streaming, as well as computer-aided design (CAD) applications.
Computational workloads: Large-scale mathematical modeling, deep learning, and analytics require the parallel processing abilities of general-purpose graphics processing unit (GPGPU) cores.
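As a rough illustration of the computational case, the sketch below launches a simple GPU kernel in which each thread processes one array element in parallel. It assumes an NVIDIA GPU instance with NumPy and Numba installed; the kernel, function names, and array sizes are arbitrary examples, not part of any particular cloud platform.

```python
import numpy as np
from numba import cuda

@cuda.jit
def scale_and_add(x, y, out, alpha):
    # Each GPU thread handles one element, so a million-element array
    # is processed by roughly a million lightweight threads in parallel.
    i = cuda.grid(1)
    if i < out.size:
        out[i] = alpha * x[i] + y[i]

n = 1_000_000
x = np.random.rand(n).astype(np.float32)
y = np.random.rand(n).astype(np.float32)

# Copy inputs to GPU memory and allocate space for the result there.
d_x = cuda.to_device(x)
d_y = cuda.to_device(y)
d_out = cuda.device_array_like(d_x)

threads_per_block = 256
blocks = (n + threads_per_block - 1) // threads_per_block
scale_and_add[blocks, threads_per_block](d_x, d_y, d_out, np.float32(2.0))

result = d_out.copy_to_host()
```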
Technology
Continuous Performance Improvements Through Software Optimizations
Cloud GPU software optimizations enabled the same GPU hardware to perform 40 percent better in just seven months. A collection of libraries, tools, and technologies delivers dramatically higher performance than alternatives across multiple application domains, from AI to high-performance computing.
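As one illustration of that library-level approach, the hedged sketch below offloads a matrix multiply to a GPU-tuned library. It assumes a CUDA-capable cloud instance with the CuPy package installed; CuPy is used here only as a convenient example of a GPU-optimized library, not as part of the platform itself.

```python
import cupy as cp

# Two large matrices allocated directly in GPU memory.
a = cp.random.rand(4096, 4096, dtype=cp.float32)
b = cp.random.rand(4096, 4096, dtype=cp.float32)

# The multiply is dispatched to GPU-tuned BLAS routines under the hood,
# so the script picks up library-level optimizations without code changes.
c = cp.matmul(a, b)

# GPU work is asynchronous; wait for it to finish before using the result.
cp.cuda.Stream.null.synchronize()
```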
Power Intelligent Experiences
AI is empowering organizations to extract deeper insights that improve the way they serve customers and stay competitive. But state-of-the-art AI requires a fully accelerated pipeline with cutting-edge infrastructure, processes, and guidelines. The Cloud GPU platform meets these needs by accelerating and supporting all AI frameworks for training and inference, delivering an end-to-end platform that lets developers innovate on applications and achieve optimal levels of performance with little to no tuning.
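For a hedged illustration of framework-level training and inference on a cloud GPU, the sketch below runs one training step and one inference pass with PyTorch. The model, data, and hyperparameters are placeholders chosen for brevity, and PyTorch is just one example of a GPU-accelerated framework.

```python
import torch
from torch import nn

# Use the GPU when one is available on the instance.
device = "cuda" if torch.cuda.is_available() else "cpu"

model = nn.Sequential(nn.Linear(128, 64), nn.ReLU(), nn.Linear(64, 10)).to(device)
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
loss_fn = nn.CrossEntropyLoss()

# One training step: the framework routes the heavy math to the GPU.
x = torch.randn(256, 128, device=device)
y = torch.randint(0, 10, (256,), device=device)
optimizer.zero_grad()
loss = loss_fn(model(x), y)
loss.backward()
optimizer.step()

# Inference on the same device.
with torch.no_grad():
    predictions = model(x).argmax(dim=1)
```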
Drive Scientific Breakthroughs
High-performance computing is fueling the advancement of science. By leveraging GPU-powered parallel processing across multiple compute instances in the cloud, organizations can run advanced, large-scale applications efficiently, reliably, and quickly. This delivers a dramatic boost in throughput and cost savings and paves the way to scientific discovery.
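The sketch below illustrates, in broad strokes, how work might be spread across multiple GPU-equipped cloud instances. It assumes each MPI rank runs on its own instance with a local GPU and that mpi4py and CuPy are installed; the script name, launch command, and problem are hypothetical.

```python
# Launched across instances with, e.g., `mpirun -np 4 python simulate.py`
# (the script name is hypothetical).
import cupy as cp
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

# Each instance works on its own slice of the problem on its local GPU.
local_n = 10_000_000
local = cp.random.rand(local_n, dtype=cp.float32)
local_sum = float(cp.sum(local))  # bring the partial result back to the host

# Combine partial results from all instances.
total = comm.allreduce(local_sum, op=MPI.SUM)
if rank == 0:
    print(f"global sum across {size} instances: {total:.3f}")
```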
Create with Next-Gen Graphics Capabilities
RTX Virtual Workstations (vWS) for GPU-accelerated graphics help creative and technical professionals maximize their productivity from anywhere by giving them access to the most demanding design and engineering applications from the cloud. With the latest T4 data center GPUs in the cloud, users can enjoy the most advanced 3D graphics platform in a virtual machine.
FAQs
What is computing acceleration?
Computing acceleration uses a hardware accelerator or coprocessor to perform floating-point computation and graphics processing more efficiently than software running on a CPU. Tencent Cloud offers two computing acceleration models: GPU computing (GN2, GN8) for general-purpose computing, and GPU rendering (GA2) for graphics-intensive applications.
What are the advantages of GPU over CPU?
A GPU has more arithmetic logic units (ALUs) than a CPU and supports large-scale, multi-threaded parallel computing.
When should I use GPU instances?
GPU instances are best suited to highly concurrent parallel applications, such as workloads that use thousands of threads. In graphics processing and similar workloads, a great deal of computation is split into relatively small tasks that together form a pipeline, and the throughput of that pipeline matters more than the latency of any single operation. To build an application that makes full use of this parallelism, you need expertise with GPU devices and experience programming against graphics APIs (DirectX, OpenGL) or GPU computing programming models (CUDA, OpenCL).
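For a concrete sense of that thread-level parallelism, the hedged sketch below launches one thread per pixel of a synthetic image using Numba's CUDA support, with an explicit block and grid configuration. The kernel, image size, and gain value are illustrative only, and assume an NVIDIA GPU instance with NumPy and Numba installed.

```python
import numpy as np
from numba import cuda

@cuda.jit
def brighten(img, out, gain):
    # Each thread processes a single pixel; the throughput of the whole grid
    # matters far more than the latency of any one pixel.
    x, y = cuda.grid(2)
    if x < img.shape[0] and y < img.shape[1]:
        out[x, y] = min(img[x, y] * gain, 1.0)

img = np.random.rand(2048, 2048).astype(np.float32)
d_img = cuda.to_device(img)
d_out = cuda.device_array_like(d_img)

threads = (16, 16)                       # 256 threads per block
blocks = ((img.shape[0] + 15) // 16,     # enough blocks to cover every pixel
          (img.shape[1] + 15) // 16)
brighten[blocks, threads](d_img, d_out, np.float32(1.5))
result = d_out.copy_to_host()
```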
How are GPU instances billed?
GPU instances are billed by usage: charges are metered to the second and settled on an hourly basis, and you can purchase and release instances at any time. This makes GPU instances well suited to scenarios where demand fluctuates dramatically, such as a flash sale on an e-commerce site.
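As a back-of-the-envelope illustration of per-second metering, the snippet below estimates the cost of a short burst of usage. The hourly rate is a made-up placeholder, not an actual price.

```python
# Hypothetical cost per instance-hour, in USD (placeholder, not a real price).
HOURLY_RATE = 2.50

def estimate_cost(seconds_used: int, hourly_rate: float = HOURLY_RATE) -> float:
    """Usage is metered to the second, so a 10-minute burst only pays for 600 s."""
    return hourly_rate * seconds_used / 3600

# e.g. a flash-sale traffic spike that needs an extra GPU instance for 45 minutes
print(f"45 min burst: ${estimate_cost(45 * 60):.2f}")  # about $1.88 at the placeholder rate
```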