
Jeff Dean’s Slides from NIPS 2017

Source: learningsys.org

The title of the talk is “Machine Learning for Systems and Systems for Machine Learning”. Obviously it’s difficult to get a complete picture from the slides alone, but the deck is still an interesting read. It’s Jeff Dean, after all. Here’s the TL;DR from the conclusion:

ML hardware is in its infancy.

Even faster systems and wider deployment will lead to many more breakthroughs across a wide range of domains.

Learning in the core of all of our computer systems will make them better/more adaptive.

There are many opportunities for this.

I’d say the entire deck is worth reading just for the photo of a “Tensor Processing Unit Pod” in a Google data centre. That’s some serious brute force, right there.

One question that does jump out at me is this: what if the next generation of machine learning algorithms isn’t so dependent on tensor arithmetic? Personally, I’d expect something like FPGAs¹ modelling neural networks “directly” to be the thing that comes next.
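For the avoidance of doubt about what “tensor arithmetic” means here: today’s networks spend almost all of their FLOPs inside matrix multiplies, which is exactly the workload a TPU is built around. A minimal sketch in plain NumPy (the shapes and names are mine, purely illustrative):

```python
import numpy as np

# A minimal sketch (mine, not the deck's) of why current accelerators
# are built around tensor arithmetic: the forward pass of one dense
# layer is a matrix multiply, a bias add, and an elementwise max.
rng = np.random.default_rng(0)

batch, d_in, d_out = 32, 256, 128
x = rng.standard_normal((batch, d_in))   # a batch of activations
W = rng.standard_normal((d_in, d_out))   # layer weights
b = np.zeros(d_out)                      # layer biases

def dense_relu(x, W, b):
    # The matmul dominates the FLOP count; everything else is cheap.
    return np.maximum(x @ W + b, 0.0)

y = dense_relu(x, W, b)
print(y.shape)  # (32, 128)
```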

The second half of the deck is about applying machine learning to basically every other area of computing, which is fascinating but a little scary: it would definitely make technology harder to explain.
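One concrete instance of this theme is the learned index structures work Dean co-authored around the same time: replace a B-tree’s key-to-position lookup with a model that predicts where the key sits in a sorted array, then correct the guess with a bounded local search. Here’s a toy sketch using a single linear model in place of the paper’s staged neural models; every name and number in it is my own simplification:

```python
import bisect
import numpy as np

# Toy sketch of a learned index: fit a model mapping keys to their
# positions in a sorted array, then binary-search a small window
# around the prediction instead of walking a tree from the root.
rng = np.random.default_rng(1)
keys = np.sort(rng.uniform(0, 1e6, 100_000))
positions = np.arange(len(keys))

# Least-squares fit: position ~ slope * key + intercept.
slope, intercept = np.polyfit(keys, positions, deg=1)

# The worst-case prediction error bounds the search window.
predicted = slope * keys + intercept
max_err = int(np.ceil(np.max(np.abs(predicted - positions))))

def lookup(key):
    guess = int(round(slope * key + intercept))
    lo = max(0, guess - max_err)
    hi = min(len(keys), guess + max_err + 1)
    return lo + bisect.bisect_left(keys[lo:hi], key)

i = lookup(keys[12_345])
assert keys[i] == keys[12_345]
```

Whether that actually beats a well-tuned B-tree depends entirely on the data and the hardware, which is part of why “learning in the core of all of our computer systems” lands in the fascinating-but-scary bucket for me.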

I’ll update this post with a link to a video of the talk, if one is released.

  1. Field Programmable Gate Arrays