A pair of University of Toronto researchers have been giving their computer a crash course in human hair.
Computers are pretty good at recognizing faces, but less so at what surrounds them. That’s where Parham Arabi and Wenzhangzhi Guo come in.
With the help of machine learning — and a little human input — the two researchers have been teaching computer vision algorithms to more accurately identify hair.
Eventually, Arabi hopes the technique can be applied to more important applications, like the visual detection of skin cancer, or even the development of safer self-driving cars.
“If you could take [dermatologists’] expertise, and then train a deep neural net to then realize some features or details that even doctors won’t be aware of, that would be just amazing,” Arabi said.
But hair — something that most people have — is a good place to start.
Their research relies on neural networks — computer code that mimics how layers of neurons in the brain process information. Given enough data, neural networks are remarkably adept at finding patterns in seas of information.
When fed millions of videos, photos or audio recordings, they have learned to identify photos with cats, or perform more accurate language translation — a technique known as deep learning.
In this case, however, the researchers had only 100 photos, each paired with a cutout of the hair it contained. That wasn't nearly enough data to teach their neural network to identify hair on its own, they found. So they enlisted human expertise to help their neural network out.
Humans, of course, are particularly good at telling what's hair and what's not. We rely on a number of characteristics, such as colour, texture and the direction in which strands of hair flow. Arabi and Guo took those characteristics, turned them into rules, and trained their neural network to look for only the parts of images that humans would most likely classify as hair.
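As a rough illustration of the idea — turning human intuitions about colour and texture into rules that produce approximate pixel labels when hand-annotated data is scarce — here is a minimal sketch in Python. The specific rules, thresholds and function names are illustrative assumptions, not the researchers' actual method.

```python
# Hypothetical sketch: encode human hair-identification cues
# (colour, texture) as rules that score each pixel. The resulting
# soft mask could then serve as a weak training signal for a
# segmentation network. All thresholds here are made up.

def colour_rule(pixel):
    """Score how hair-like a pixel's colour is (assumes darker tones)."""
    r, g, b = pixel
    brightness = (r + g + b) / 3.0
    return 1.0 if brightness < 100 else 0.0

def texture_rule(window):
    """Score local contrast: fine hair strands create high variation."""
    flat = [v for row in window for px in row for v in px]
    mean = sum(flat) / len(flat)
    variance = sum((v - mean) ** 2 for v in flat) / len(flat)
    return 1.0 if variance > 200 else 0.0

def soft_hair_mask(image, win=1):
    """Combine the rules into a per-pixel 'probably hair' score in [0, 1]."""
    h, w = len(image), len(image[0])
    mask = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            window = [row[max(0, x - win):x + win + 1]
                      for row in image[max(0, y - win):y + win + 1]]
            scores = [colour_rule(image[y][x]), texture_rule(window)]
            mask[y][x] = sum(scores) / len(scores)
    return mask
```

A dark, high-contrast region (plausibly hair) would score near 1.0, while a bright, uniform region (plausibly skin or background) would score near 0.0 — and, as the researchers found, a network trained on such rule-derived labels can end up generalizing beyond the rules themselves.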
The result, according to their paper, was a nine per cent improvement in performance over their previous hair-detection method, which did not rely on deep learning. In fact, the algorithm worked so well, according to Arabi, that it was able to identify patches of hair that the human-derived rules used to train it had missed.
The research is an extension of work that Arabi and Guo have been doing at their company ModiFace, which develops augmented reality software for the beauty industry. Smartphone apps powered by ModiFace technology can show users how a particular brand of makeup might look on their face, or how a new hair colour might look — the latter of which relies on the ModiFace software's ability to accurately segment, or separate out, hair.
But Arabi believes that combining human experience and machine learning has the potential to help in other fields. Dermatologists, for example, have visual cues that help them identify potentially cancerous moles, while drivers rely on rules to stay safe while on the road — both of which could be used to more effectively train a machine learning algorithm in the absence of a larger data set.
“Hair segmentation may not be the world’s most important problem, but it was one that we could really quantify very accurately,” he said. “And sometimes having toy examples are very useful for understanding how the algorithm is working. For that purpose, it actually served it quite well.”
Their research will appear in an upcoming issue of the journal IEEE Transactions on Neural Networks and Learning Systems, but has already been published online.