Nvidia unveiled the Nvidia GPU Cloud, a new plan for leveraging GPUs wherever they may reside. The details are still vague, but it has clearly been built to aid in developing the applications now most closely associated with GPU acceleration: machine learning and artificial intelligence.
The few details currently known about Nvidia GPU Cloud come mostly from the company’s press release. The phrasing throughout indicates that GPU Cloud amounts to an end-to-end software stack for deep learning. It features many common frameworks for GPU-accelerated machine learning—Caffe, Caffe2, CNTK, MXNet, TensorFlow, Theano, and Torch—along with Nvidia-specific tools for deep learning, including support for running the above in Docker containers.
There’s been a growing need for applications that provide complete workflows for machine learning, in which data ingestion, normalization, model training, and prediction generation are all handled through a single, consistent pipeline.
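To make that idea concrete, here is a minimal sketch of such a single-pipeline workflow. It uses scikit-learn purely for illustration—GPU Cloud itself bundles deep-learning frameworks such as TensorFlow, not scikit-learn—but the same ingest-normalize-train-predict shape applies:

```python
# Illustrative only: a single object that handles normalization,
# training, and prediction in one consistent pipeline.
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

# Data ingestion: load a toy dataset and split it.
X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0)

# Normalization and model training, chained in one pipeline.
pipeline = Pipeline([
    ("normalize", StandardScaler()),
    ("model", LogisticRegression(max_iter=200)),
])
pipeline.fit(X_train, y_train)

# Prediction generation flows back through the same pipeline,
# so test data is normalized exactly as the training data was.
preds = pipeline.predict(X_test)
```

The point of the single-pipeline design is that preprocessing and inference can never drift apart: the same normalization fitted during training is applied automatically at prediction time.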
GPU Cloud can use GPUs attached to a single PC, but it can also draw on Nvidia’s DGX-1 supercomputing appliance. Nvidia also hints that GPU Cloud will be able to run in public clouds and use the GPU resources available there.