Researchers from Stanford University, in partnership with Google, have released a Python-based project called NeuralTalk, which lets a neural network learn to describe images with full sentences. The network is first trained on photos paired with text descriptions, from which it learns to identify the objects or living beings present in a photo, their characteristics, and the interactions between them.
With this training, the network can then generate its own descriptions for new photos. That is enough to caption an entire image bank automatically, without any human intervention.
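For the curious, here is a minimal, purely illustrative sketch of the decoding idea behind this kind of captioner: a CNN reduces the image to a feature vector, and a recurrent network seeded with that vector emits one word at a time until it predicts an end token. Every name, shape, and the toy vocabulary below are hypothetical stand-ins for illustration, not NeuralTalk's actual code:

```python
import numpy as np

def greedy_caption(img_feat, params, vocab, max_len=20):
    """Greedily decode a caption from a CNN image feature vector.

    Toy sketch: the hidden state is seeded with the image features,
    then a simple RNN picks the most likely next word at each step.
    """
    Wix, Wh, Wo, emb = params["Wix"], params["Wh"], params["Wo"], params["emb"]
    h = np.tanh(Wix @ img_feat)                           # seed state with the image
    word = vocab.index("<START>")
    caption = []
    for _ in range(max_len):
        h = np.tanh(Wh @ np.concatenate([h, emb[word]]))  # recurrent step
        logits = Wo @ h                                   # scores over the vocabulary
        word = int(np.argmax(logits))                     # greedy: take the best word
        if vocab[word] == "<END>":
            break
        caption.append(vocab[word])
    return " ".join(caption)

# Toy demo with random weights (output is gibberish; it only shows the mechanics).
rng = np.random.default_rng(0)
vocab = ["<START>", "<END>", "a", "dog", "on", "grass"]
d, h_dim, e, V = 128, 64, 32, len(vocab)
params = {
    "Wix": rng.normal(size=(h_dim, d)) * 0.1,
    "Wh":  rng.normal(size=(h_dim, h_dim + e)) * 0.1,
    "Wo":  rng.normal(size=(V, h_dim)) * 0.1,
    "emb": rng.normal(size=(V, e)) * 0.1,
}
print(greedy_caption(rng.normal(size=d), params, vocab))
```

In the real project the weights come from training on photo/caption pairs rather than random initialization, and the decoder is a proper LSTM-style language model, but the word-by-word generation loop is the same idea.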
The code published on GitHub appears to be directly usable, so it is worth following and testing if you have a similar captioning need in your own projects. We are still a long way from true artificial intelligence, but this kind of image-analysis capability will surely be an essential building block of our future computers and robots. In any case, it will save a lot of time.