Researchers enable computers to teach themselves common sense
- 27 November, 2013 17:14
While some people may think they're getting dumbed down as they scroll through images of cats playing the piano or dogs playing in the snow, one computer is doing the same and getting smarter and smarter.
A computer cluster at Carnegie Mellon University runs the so-called Never Ending Image Learner 24 hours a day, 7 days a week, searching the Internet for images, studying them on its own and building a visual database. The process, scientists say, is giving the computer an increasing amount of common sense.
"Images are the best way to learn visual properties," said Abhinav Gupta, assistant research professor in Carnegie Mellon's Robotics Institute. "Images also include a lot of common sense information about the world. People learn this by themselves and, with [this program], we hope that computers will do so as well."
The computers have been running the program since late July, analyzing some three million images. The system has identified 1,500 types of objects in half a million images and 1,200 types of scenes in hundreds of thousands of images, according to the university.
The program has connected the dots to learn 2,500 associations from thousands of instances.
Thanks to advances in computer vision that enable software to identify and label objects found in images and recognize colors, materials and positioning, the Carnegie Mellon cluster is better understanding the visual world with each image it analyzes.
The program is also set up to enable a computer to make common-sense associations, such as that buildings stand upright rather than lying on their sides, people eat food, and cars are found on roads. All the things people take for granted, the computers are now learning without being told.
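One way to picture these learned associations is as a store of (subject, relation, object) facts that accumulate evidence as the learner sees more images. The sketch below is purely illustrative, assuming a simple count-based store; the relation names and facts are made up for the example and are not NEIL's actual code or data:

```python
# A minimal, hypothetical sketch of storing common-sense associations
# like those described above. Not CMU's actual implementation.
from collections import defaultdict

class AssociationStore:
    """Stores (subject, relation, object) facts with observation counts."""
    def __init__(self):
        self.counts = defaultdict(int)

    def observe(self, subject, relation, obj):
        # Each time the learner sees evidence for a fact, bump its count.
        self.counts[(subject, relation, obj)] += 1

    def associations_for(self, subject):
        # Return facts about a subject, most frequently observed first.
        facts = [(rel, obj, n)
                 for (s, rel, obj), n in self.counts.items() if s == subject]
        return sorted(facts, key=lambda f: -f[2])

store = AssociationStore()
store.observe("car", "found_on", "road")
store.observe("car", "found_on", "road")
store.observe("building", "orientation", "vertical")

print(store.associations_for("car"))  # [('found_on', 'road', 2)]
```

The counting matters: an association like "cars are found on roads" only becomes reliable common sense after being observed across many independent images, which is why the system needs to analyze millions of them.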
"People don't always know how or what to teach computers," said Abhinav Shrivastava, a robotics Ph.D. student at CMU and a lead researcher on the program. "But humans are good at telling computers when they are wrong."
He noted, for instance, that a human might need to tell the computer that Pink isn't just the name of a singer but also the name of a color.
While computer scientists have previously tried to "teach" computers about different real-world associations by compiling structured data for them, the job has always been far too vast to tackle successfully. CMU noted that Facebook alone has more than 200 billion images.
The only way for computers to scan enough images to understand the visual world is to let them do it on their own.
"What we have learned in the last five to 10 years of computer vision research is that the more data you have, the better computer vision becomes," Gupta said.
CMU's computer learning program is supported by Google and the Office of Naval Research.
Sharon Gaudin covers the Internet and Web 2.0, emerging technologies, and desktop and laptop chips for Computerworld. Follow Sharon on Twitter at @sgaudin, or subscribe to Sharon's RSS feed. Her email address is firstname.lastname@example.org.