NYT article: http://www.nytimes.com/2014/11/18/science/res....html?_r=1

Tech paper: http://cs.stanford.edu/people/karpathy/deepim...isagen.pdf

So basically, these dudes/dudettes used a software structure that mimics the human brain (a neural network) and fed it images paired with language. The resulting program can look at a normal everyday photograph and describe in words what's in it, after being trained on a (surprisingly small) set of example images with matching descriptions so it gets the gist of what it's supposed to do.
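For the curious, here's a toy sketch (mine, not code from the paper) of the general pattern the paper describes: a convolutional network turns the image into a feature vector, and a recurrent network conditioned on that vector spits out the caption one word at a time. All the layer sizes and names below are illustrative assumptions, not the authors' actual model.

```python
import torch
import torch.nn as nn

class CaptionModel(nn.Module):
    def __init__(self, vocab_size=10000, embed_dim=256, hidden_dim=512):
        super().__init__()
        # Tiny stand-in CNN encoder; the real work uses a large pretrained CNN.
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, embed_dim),
        )
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.rnn = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.out = nn.Linear(hidden_dim, vocab_size)

    def forward(self, images, captions):
        # The image feature is prepended as the first "word" of the sequence,
        # so every later word prediction is conditioned on the image.
        img_feat = self.encoder(images).unsqueeze(1)      # (B, 1, E)
        word_embs = self.embed(captions)                  # (B, T, E)
        inputs = torch.cat([img_feat, word_embs], dim=1)  # (B, T+1, E)
        hidden, _ = self.rnn(inputs)
        return self.out(hidden)                           # next-word logits

# Training pairs each image with its human-written description and
# minimizes cross-entropy on the next word; generating a caption for a
# new photo then just means sampling words one at a time.
model = CaptionModel()
dummy_images = torch.randn(2, 3, 64, 64)
dummy_captions = torch.randint(0, 10000, (2, 12))
logits = model(dummy_images, dummy_captions)
print(logits.shape)  # torch.Size([2, 13, 10000])
```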