
John Carmack Spends a Week in a Cabin Implementing Neural Nets on BSD

Source: John Carmack @ www.facebook.com

I’m really loath to link to posts on Facebook, so apologies for that. When it’s someone like Carmack, though, I think it’s probably worth it.

This definitely falls under “diary entry” rather than “serious technical post”, but I found it an interesting read all the same. For example:

I don’t think I have anything particularly insightful to add about neural networks, but it was a very productive week for me, solidifying “book knowledge” into real experience.

I used a common pattern for me: get first results with hacky code, then write a brand new and clean implementation with the lessons learned, so they both exist and can be cross checked.

There’s nothing earth-shattering there. But if you’re going to look for programming best practices somewhere, Carmack is a pretty damn good place to start. He sees the benefit of turning theory into practical applications, and he likes to follow a hacky prototype with a clean reimplementation.

The part of this post that makes me happiest, though, is the following:

I initially got backprop wrong both times, comparison with numerical differentiation was critical! It is interesting that things still train even when various parts are pretty wrong — as long as the sign is right most of the time, progress is often made.

Andrew Ng also confesses in his Coursera Machine Learning course that he finds backpropagation “highly unintuitive”. Carmack is a self-admitted neural network novice, but about as expert a programmer as you can possibly find. The fact that he apparently had similar issues is… reassuring, to say the least.
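
For the curious, the gradient check Carmack describes is simple: perturb each parameter by a small epsilon, recompute the loss, and compare the finite-difference slope against what your backprop code produced. Here’s a minimal sketch in Python (the tiny tanh network, names, and data are my own illustration, not Carmack’s code):

```python
import numpy as np

# Tiny one-layer tanh network with a squared-error loss, purely illustrative.
rng = np.random.default_rng(0)
X = rng.normal(size=(5, 3))   # 5 samples, 3 features
y = rng.normal(size=(5, 1))   # 5 targets
W = rng.normal(size=(3, 1))   # weights we want gradients for

def loss(W):
    pred = np.tanh(X @ W)
    return 0.5 * np.mean((pred - y) ** 2)

def analytic_grad(W):
    # Backprop by hand: d(loss)/dW for the tanh layer above.
    pred = np.tanh(X @ W)
    dpred = (pred - y) / len(X)      # d(loss)/d(pred)
    dz = dpred * (1.0 - pred ** 2)   # through the tanh nonlinearity
    return X.T @ dz                  # d(loss)/dW

def numerical_grad(W, eps=1e-6):
    # Central finite differences, one parameter at a time.
    g = np.zeros_like(W)
    for i in range(W.size):
        d = np.zeros_like(W)
        d.flat[i] = eps
        g.flat[i] = (loss(W + d) - loss(W - d)) / (2 * eps)
    return g

ga, gn = analytic_grad(W), numerical_grad(W)
rel_err = np.linalg.norm(ga - gn) / (np.linalg.norm(ga) + np.linalg.norm(gn))
print(f"relative error: {rel_err:.2e}")  # should be tiny, roughly 1e-8 or less
```

If your hand-rolled backprop disagrees with the numerical estimate by more than a whisker, you’ve got a bug, even if, as Carmack notes, the network may well keep “training” anyway.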