The booming field of artificial intelligence (AI) is grappling with a replication crisis, much like the ones that have afflicted psychology, medicine, and other fields over the past decade. AI researchers have found it difficult to reproduce many key results, and that is leading to a new conscientiousness about research methods and publication protocols. “I think people outside the field might assume that because we have code, reproducibility is kind of guaranteed,” says Nicolas Rougier, a computational neuroscientist at France’s National Institute for Research in Computer Science and Automation in Bordeaux. “Far from it.”
I’ve touched on this subject previously. If there’s one field that can make replication of experiments easy, it’s computer science — and by extension, artificial intelligence. There is no squishy biology at play here. At worst, all that should be required is downloading a large data set, or the trained weights of a deep neural network.
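If replication really does come down to downloading an artifact, the one supporting practice worth spelling out is integrity checking: the authors publish a checksum alongside the data set or weights, and anyone can confirm they have the exact bytes the paper used. A minimal sketch (the function names are mine, not from any particular paper's tooling):

```python
import hashlib

def sha256_of(path: str, chunk_size: int = 1 << 20) -> str:
    """Stream a file through SHA-256 so multi-gigabyte artifacts
    never need to fit in memory at once."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

def verify_artifact(path: str, expected_sha256: str) -> bool:
    """Compare a downloaded data set or weights file against the
    checksum published with the paper."""
    return sha256_of(path) == expected_sha256
```

Publishing the expected hash in the paper itself costs one line of text, and removes any ambiguity about which version of the data or weights a replication attempt is actually using.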
For example: DeepMind gave enough detail in the AlphaGo Zero paper for the Leela-Zero team to write an open-source reimplementation. They didn’t publish their source code, though — and ideally, a reimplementation shouldn’t be required at all. Perhaps more frustratingly, they didn’t publish the trained weights of their neural network either, and it looks like 1700 years of training time might be required to recreate them on commodity hardware. The kind of computing power needed to do this in a sensible amount of time just isn’t available to most research labs.
Verifying results given the source code and trained weights, though? You could probably do that on your phone. To me, this seems necessary for scientific progress. If you want to see further, you need to verify how tall the giant really is before you stand on their shoulders.
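The verification step itself is cheap once code and weights exist: re-run the published model on the published evaluation set and check the measured score against the number in the paper. A minimal sketch of such a harness — every name here is hypothetical, standing in for whatever loader and evaluation data the authors ship:

```python
from typing import Callable, Iterable, Tuple, Any

def verify_reported_result(
    predict: Callable[[Any], Any],        # the authors' model, weights loaded
    examples: Iterable[Tuple[Any, Any]],  # the published evaluation set: (input, label)
    reported_accuracy: float,             # the figure claimed in the paper
    tolerance: float = 0.01,              # slack for nondeterminism / hardware differences
) -> Tuple[bool, float]:
    """Re-run inference and compare measured accuracy to the reported one.
    Returns (matches, measured_accuracy)."""
    examples = list(examples)
    correct = sum(1 for x, label in examples if predict(x) == label)
    measured = correct / len(examples)
    return abs(measured - reported_accuracy) <= tolerance, measured
```

Inference over a fixed evaluation set is orders of magnitude cheaper than training, which is exactly why releasing weights matters: it converts a 1700-year replication into an afternoon's verification.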