Blair Frank reports in VentureBeat:
Google’s Cloud Tensor Processing Units (TPUs) are now available in public beta for anyone to try, giving customers of the tech titan’s cloud platform specialized hardware that dramatically accelerates the training and execution of AI models.
The Cloud TPUs, which Google first announced last year, provide customers with specialized circuits designed solely to accelerate AI computation. In Google’s tests, 64 of them trained ResNet-50 (a neural network for identifying images that also serves as a benchmark for AI training speed) in just 30 minutes.
This new hardware could help attract customers to Google’s cloud platform with the promise of faster machine learning computation and execution. Faster training is a real advantage: data scientists can run experiments more quickly and fold the results into the next iteration of a model.
Google is using its advanced AI capabilities to attract new blood to its cloud platform and away from market leaders Amazon Web Services and Microsoft Azure. Businesses are increasingly looking to diversify their use of public cloud platforms, and Google’s new AI hardware could help the company capitalize on that trend.
Companies had already lined up to test the Cloud TPUs while they were in private alpha, including Lyft, which is using the hardware to train the AI models powering its self-driving cars.
It’s been a long road for the company to get here. Google announced the original Tensor Processing Units (which handled only inference) in 2016 and promised that customers would eventually be able to run custom models on them, in addition to the speed boost the chips gave other businesses’ workloads through the cloud machine learning APIs. But enterprises were never able to run their own custom workloads on an original TPU.
Google isn’t the only one pushing AI acceleration through specialized hardware. Microsoft uses a fleet of field-programmable gate arrays (FPGAs) to speed up its in-house machine learning operations and to provide customers of its Azure cloud platform with accelerated networking. The company is also working on a way for customers to run their own machine learning models on those FPGAs, just as its proprietary code does.
Amazon, meanwhile, is providing its customers with compute instances that have their own dedicated FPGA. The company is also working on developing a specialized AI chip that will accelerate its Alexa devices’ machine learning computation, according to a report released by The Information today.
Actually getting AI acceleration from TPUs won’t be cheap. Google is currently charging $6.50 per TPU per hour, though that pricing may shift once the hardware is generally available. Right now, Google is still throttling the Cloud TPU quotas that are available to its customers, but anyone can request access to the new chips.
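To put that price in perspective, here is a back-of-the-envelope estimate of what the ResNet-50 benchmark described above would have cost at the quoted rate. The figures ($6.50 per TPU per hour, 64 TPUs, 30 minutes) come straight from the article; actual billing granularity and any discounts are details this sketch ignores.

```python
# Rough cost of the ResNet-50 training run described in the article.
rate_per_tpu_hour = 6.50   # USD, Google's quoted beta price
num_tpus = 64              # TPUs used in Google's benchmark
hours = 0.5                # 30 minutes of training

total_cost = rate_per_tpu_hour * num_tpus * hours
print(f"Estimated cost: ${total_cost:.2f}")  # Estimated cost: $208.00
```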
Once people get access to the Cloud TPUs, Google offers several optimized reference models that let them kick the tires and start using the hardware to accelerate AI computation.
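As a rough illustration of what running a model on a Cloud TPU looked like at the time, here is a minimal training sketch using TensorFlow 1.x’s TPUEstimator API. The TPU name, zone, project, and Cloud Storage bucket are placeholders, the model is a toy stand-in rather than one of Google’s reference models, and exact signatures varied across TensorFlow 1.x releases.

```python
import numpy as np
import tensorflow as tf  # TensorFlow 1.x, the release line current at the time

def model_fn(features, labels, mode, params):
    """Toy linear classifier; a stand-in for a real network like ResNet-50."""
    logits = tf.layers.dense(features, 10)
    loss = tf.losses.sparse_softmax_cross_entropy(labels=labels, logits=logits)
    optimizer = tf.train.GradientDescentOptimizer(learning_rate=0.01)
    # CrossShardOptimizer aggregates gradients across the TPU's cores.
    optimizer = tf.contrib.tpu.CrossShardOptimizer(optimizer)
    train_op = optimizer.minimize(loss, global_step=tf.train.get_global_step())
    return tf.contrib.tpu.TPUEstimatorSpec(mode=mode, loss=loss, train_op=train_op)

def input_fn(params):
    """TPUEstimator injects the batch size it needs via params['batch_size']."""
    batch_size = params['batch_size']
    features = np.random.randn(1024, 128).astype(np.float32)  # fake data
    labels = np.random.randint(0, 10, size=(1024,)).astype(np.int32)
    dataset = tf.data.Dataset.from_tensor_slices((features, labels))
    # TPUs require statically shaped batches, hence drop_remainder=True.
    return dataset.repeat().batch(batch_size, drop_remainder=True)

# Placeholder TPU name, zone, and project; quota must be requested first.
resolver = tf.contrib.cluster_resolver.TPUClusterResolver(
    tpu='my-tpu', zone='us-central1-f', project='my-project')

config = tf.contrib.tpu.RunConfig(
    cluster=resolver,
    model_dir='gs://my-bucket/model',  # checkpoints must live in Cloud Storage
    tpu_config=tf.contrib.tpu.TPUConfig(iterations_per_loop=100))

estimator = tf.contrib.tpu.TPUEstimator(
    model_fn=model_fn,
    config=config,
    use_tpu=True,
    train_batch_size=1024)  # global batch, sharded across the TPU's 8 cores

estimator.train(input_fn=input_fn, max_steps=1000)
```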