Google released the video of Chris Lattner and Richard Wei’s short talk on Swift and TensorFlow. It’s pretty good, if slightly awkwardly delivered. Watching it, three things jumped out at me in particular.

Firstly, the compile time checking:

This is awesome. It seems the type information being checked by the compiler includes the rank¹ and the shape of the tensors being operated upon.

Matrix multiplication errors were a huge pain during Andrew Ng’s Machine Learning course (non-affiliate link). The math for many of the algorithms in that course amounts to: multiply the training input values by the algorithm’s parameters. Unlike scalar multiplication, matrix multiplication is not commutative. What’s not consistent between the algorithms in the course is the order of the input operands (and whether either of them needs transposing). The upshot of this is lots of “size of matrices do not match” errors at runtime. This despite the course being taught in MATLAB, a language whose name is short for “matrix laboratory”.
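To make the pain point concrete, here’s a minimal pure-Python sketch (not MATLAB, and not any real ML library) of matrix multiplication. The shape rule is that an m×n matrix can only be multiplied by an n×p matrix, so getting the operand order wrong surfaces as an error at runtime, only once training code actually executes:

```python
# Minimal pure-Python matrix multiply (a sketch for illustration only).
def matmul(a, b):
    rows_a, cols_a = len(a), len(a[0])
    rows_b, cols_b = len(b), len(b[0])
    if cols_a != rows_b:
        # The "size of matrices do not match" moment:
        # discovered only when the code actually runs.
        raise ValueError(
            f"shape mismatch: ({rows_a}x{cols_a}) @ ({rows_b}x{cols_b})")
    return [[sum(a[i][k] * b[k][j] for k in range(cols_a))
             for j in range(cols_b)]
            for i in range(rows_a)]

X = [[1.0, 2.0],
     [3.0, 4.0],
     [5.0, 6.0]]            # 3x2 training inputs
theta = [[0.5], [0.25]]     # 2x1 parameters

print(matmul(X, theta))     # (3x2) @ (2x1) -> 3x1, fine

try:
    matmul(theta, X)        # wrong operand order: (2x1) @ (3x2)
except ValueError as e:
    print(e)                # only caught at runtime
```

The promise of the Swift approach is that the second call would simply fail to compile.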

It will definitely be nice to find that sort of thing out at compile time, rather than after training has already started.

Secondly, automatic differentiation of functions:

Okay. So… this is clearly voodoo. It could be naively accomplished by running the function with various input values and calculating the gradient empirically (as per gradient checking). I don’t think that’s what’s happening, though. For one thing it wouldn’t scale well, and there would probably be performance issues. I guess this is another big win for being able to make compiler-level changes. My assumption here is that the compiler can look inside a function and see whether it’s made up entirely of differentiable operations. If it isn’t, that’s a compile-time error; if it is, the derivative of the function can be calculated.
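A rough way to see the difference (a Python sketch of forward-mode automatic differentiation via dual numbers, not what the Swift compiler actually does): each primitive operation carries its derivative rule, so composing differentiable operations propagates an exact derivative, while gradient checking merely estimates it with finite differences:

```python
import math

class Dual:
    """Dual number value + deriv*eps: a tiny forward-mode autodiff sketch."""
    def __init__(self, value, deriv=0.0):
        self.value, self.deriv = value, deriv

    def __add__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.value + other.value, self.deriv + other.deriv)
    __radd__ = __add__

    def __mul__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        # Product rule: (uv)' = u'v + uv'
        return Dual(self.value * other.value,
                    self.value * other.deriv + self.deriv * other.value)
    __rmul__ = __mul__

def d_sin(x):
    # sin carries its known derivative rule: (sin u)' = cos(u) * u'
    return Dual(math.sin(x.value), math.cos(x.value) * x.deriv)

def f(x):
    # Composed entirely of differentiable operations.
    return 3 * x * x + d_sin(x)

x = 1.2
# Exact derivative via forward-mode AD: f'(x) = 6x + cos(x)
exact = f(Dual(x, 1.0)).deriv

# Empirical estimate via gradient checking (central differences).
h = 1e-6
def f_plain(x):
    return 3 * x * x + math.sin(x)
approx = (f_plain(x + h) - f_plain(x - h)) / (2 * h)

print(exact, approx)  # close, but only the first is exact
```

The empirical version needs two extra function evaluations per input dimension, which is part of why it wouldn’t scale.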

Lastly, Python integration:

That’s a pretty good answer to a likely objection to using Swift for machine learning. Namely: Python is the current king for a reason. It has awesome libraries like NumPy, pandas and scikit-learn. That seems like a lot less of an issue if you can also use those libraries (and a lot more besides) from Swift. I’m pretty curious as to how this will actually work, though. Python and Swift have very different typing and execution models.

Unfortunately it’s not all sunshine and roses. At present these changes are going into a fork of the Swift compiler. Fingers crossed they all get accepted upstream.

- I.e. one-dimensional (vectors), two-dimensional (matrices), and so on. ↩