One More Reason For Running Machine Learning Jobs In The Cloud: GPUs

Top public cloud vendors want you to store massive data sets on their platforms to run complex machine learning algorithms. Apart from offering affordable compute and storage services based on a pay-as-you-go pricing model, they are also luring customers by bringing the latest GPU technology to the cloud.

Last week, IBM announced that its IaaS platform now supports the latest GPUs from NVIDIA – the Tesla P100. Combined with the CPUs of its bare metal servers, the new offering promises strong performance for processing and analyzing massive amounts of data. IBM is one of the first providers to offer NVIDIA's latest GPU technology in the public cloud.

GPUs can reduce the time it takes to process large datasets from weeks to hours. Traditionally, CPUs are designed to execute sequential streams of instructions, whereas GPUs are built to run many small jobs in parallel. An individual GPU core is less powerful than a CPU core, but GPU cores are much cheaper, come in the thousands per chip, and together can move data through memory far faster.
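To make the sequential-versus-parallel distinction concrete, here is a small illustrative sketch (plain Python, not IBM or NVIDIA code; the function names are made up for this example). The first function does element-wise work where every output depends on only one input, the shape of workload a GPU can spread across thousands of cores. The second is a running total, where each step depends on the previous result, so the work is inherently sequential and favors a fast CPU core.

```python
# Hypothetical illustration of GPU-friendly vs CPU-friendly workloads.

def scale_elements(data, factor):
    # Each output element depends on exactly one input element,
    # so all of them could be computed simultaneously on a GPU.
    return [x * factor for x in data]

def running_total(data):
    # Each step depends on the previous result, so the work is
    # sequential and suits a single fast CPU core.
    totals, acc = [], 0
    for x in data:
        acc += x
        totals.append(acc)
    return totals

print(scale_elements([1, 2, 3, 4], 10))  # [10, 20, 30, 40]
print(running_total([1, 2, 3, 4]))       # [1, 3, 6, 10]
```

Machine learning training is dominated by the first kind of work (large matrix and vector operations), which is why moving those jobs onto GPUs yields such dramatic speedups.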

[Source]
