In June 2016 I published a paper showing that archerfish can learn to discriminate between two pictures of human faces. The fish were then able to pick out the learned face from a series of faces they had never seen before. This video summarizes the experiment:
This is a very exciting finding, as human facial recognition is a complicated task even for computers. Facial recognition is difficult because all faces share the same general components (e.g. two eyes, a nose and a mouth), so they must be discriminated based on subtle cues (e.g. the shape of the eyes, the distance from the nose to the mouth). Considering how similar features can be amongst family members, these differences can be extremely subtle. The features themselves also change constantly as we make different expressions. In addition, the orientation of the face and the lighting can make a huge difference to what information is available to the observer.
In my experiment, the fish were only presented with frontal images of human faces under standard lighting conditions. This is a long way from full human facial recognition; however, it shows that fish are able to discriminate very complicated visual images, and I will continue to test the fish under more difficult conditions in future experiments.
This paper is open access, so you can read it here for free. There has also been a lot of media coverage, including The Washington Post, BBC, Livescience, Motherboard, The Guardian, Wired, CBC, Vox, The Huffington Post, Business Insider UK, Phys.org, Big Think, Mental Floss, IFL Science, CNN, Smithsonian, Gizmodo, and many more all around the world (it was apparently picked up by 175 news outlets). It even made it into the BBC’s ‘100 things we didn’t know last year’ list at #15.
You can also watch interviews I gave to Reuters, Quirks and Quarks, and Deep Look on KQED.