Major Advance in Machine Learning?

So says the NYT via this article. The paper in question can be found here.

In short, the paper describes a new model that can accurately recognize handwritten alphabet characters from as little as a single training example, which, if true (and extensible/generalizable), would be pretty amazing. Specifically, the paper claims better results than deep neural networks, which I totally don’t understand (as evidenced by the meeting I just left 20 minutes ago, my head still spinning), but which I know to be at the head of the class in AI.

Here’s the “Editor’s Summary” from the paper:

“Handwritten characters drawn by a model

Not only do children learn effortlessly, they do so quickly and with a remarkable ability to use what they have learned as the raw material for creating new stuff. Lake et al. describe a computational model that learns in a similar fashion and does so better than current deep learning algorithms. The model classifies, parses, and recreates handwritten characters, and can generate new letters of the alphabet that look “right” as judged by Turing-like tests of the model’s output in comparison to what real humans produce.”

And from the paper:

“This paper introduces the Bayesian program learning (BPL) framework, capable of learning a large class of visual concepts from just a single example and generalizing in ways that are mostly indistinguishable from people.”

I did a quick search for feedback on this paper on the regular nerd outlets (Slashdot and Reddit), but couldn’t find anything. I’m curious to see what folks smarter than me think about this. If the model were only copying a character from a single training example, that wouldn’t seem too difficult; the fact that it can also recognize new instances of a character after a single training example is what seems impressive to me.
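For a bit of intuition about what “one-shot” classification even means, here’s a toy sketch (mine, not the paper’s; BPL is far more sophisticated): given exactly one labeled example per character, classify a new drawing by nearest neighbor in raw pixel space. All the names and the tiny 5x5 “characters” below are made up for illustration:

```python
# Toy illustration of one-shot classification -- NOT the paper's BPL method.
# One training image per class; classify a query by nearest neighbor
# in raw pixel space.
import numpy as np

def one_shot_classify(support_images, support_labels, query_image):
    """Return the label whose single training image is closest to the query."""
    distances = [np.linalg.norm(query_image - img) for img in support_images]
    return support_labels[int(np.argmin(distances))]

# Fake 5x5 binary "characters": one training example each for 'L' and 'T'.
L_img = np.array([[1, 0, 0, 0, 0]] * 4 + [[1, 1, 1, 1, 1]], dtype=float)
T_img = np.array([[1, 1, 1, 1, 1]] + [[0, 0, 1, 0, 0]] * 4, dtype=float)

# A slightly noisy new drawing of 'L'.
query = L_img.copy()
query[0, 1] = 1.0  # stray pixel

print(one_shot_classify([L_img, T_img], ["L", "T"], query))  # -> 'L'
```

The paper’s actual approach, as I understand it, represents each character as a probabilistic program composed of pen strokes, which is presumably what lets it generalize from a single example in ways a pixel-distance baseline like this one can’t.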

Josh and J are very lonely. Please leave a reply. Pretty please?