Google Team Refines GPU-Powered Neural Machine Translation

At last year’s Nvidia GPU Technology Conference, Jeff Dean, Senior Google Fellow, offered a vivid description of how the search giant has deployed GPUs across a large number of workloads, many centered on speech recognition and language-oriented research as well as various computer vision efforts.

The Google Brain team, which focuses on many of the areas cited above, is working on software tuning to keep pushing the limits of GPU-backed machine learning.

Most recently, a group there has put together a detailed analysis of architectural hyperparameters for neural machine translation (NMT) systems. The effort required more than 250,000 GPU hours on an in-house cluster built on Nvidia Tesla K40m and Tesla K80 GPUs, with training distributed over 8 parallel workers and 6 parameter servers. The new work should help push current hardware and software approaches to NMT beyond their present limitations.
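To give a sense of why such a study consumes hundreds of thousands of GPU hours, a grid over even a handful of architectural hyperparameters multiplies quickly, and each combination means training a full NMT model. A minimal sketch of the enumeration (the hyperparameter names and values here are illustrative, not Google's actual search space):

```python
from itertools import product

# Illustrative search space: architectural hyperparameters commonly varied
# in seq2seq NMT experiments. These values are an assumption for the sketch,
# not the ones used in the Google Brain study.
search_space = {
    "embedding_dim": [128, 512, 2048],
    "encoder_depth": [1, 2, 4],
    "attention_type": ["additive", "multiplicative"],
    "beam_width": [1, 5, 10],
}

def grid(space):
    """Yield one configuration dict per point in the Cartesian product."""
    keys = list(space)
    for values in product(*(space[k] for k in keys)):
        yield dict(zip(keys, values))

configs = list(grid(search_space))
print(len(configs))  # 3 * 3 * 2 * 3 = 54 configurations, each a full training run
```

Even this toy grid yields 54 full training runs; at tens of GPU hours per NMT model, the cost of a thorough sweep climbs into the hundreds of thousands of GPU hours quickly.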

Google’s internal neural machine translation work was made public at the end of 2016 and is the neural network engine behind Google Translate.

[Source]
