Decoded: Here's how our brain stores so much data and works efficiently

For the first time, scientists have unravelled the mystery of how our brain manages to store such a large amount of data and, more importantly, how it can identify and classify things from only a small fraction of the original information. According to a team of researchers, including one of Indian origin, the human brain can predict and categorise correctly using just 0.15 percent of the original information. The scientists have also designed a simple algorithm that mimics the brain's learning and storage ability and performs equally well.

The scientists explained that humans learn very quickly, even from a small portion of the original information: we can easily recognise the faces of loved ones even if they change their style, and we can identify objects such as doors, rooms and gadgets even when only a small part of the object is visible. “How do we make sense of so much data around us, of so many different types, so quickly and robustly?” said Santosh Vempala of the Georgia Institute of Technology.

To test the brain's learning ability, the researchers conducted a random projection test and studied participants' responses. Participants were shown random images and were later asked to identify them again when only a small portion of the original image was displayed. Sixteen images, each 150×150 pixels in size, were shown for 10 seconds apiece. The test revealed that just 0.15 percent of the original information is enough for humans to correctly predict, categorise and identify the whole.
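To give a sense of the scale involved, the sketch below (not the authors' code, just an illustration of the general idea of random projection) compresses a flattened 150×150 image, which has 22,500 pixel values, down to roughly 0.15 percent of that size, about 34 numbers, by multiplying it with a fixed random matrix.

```python
# Minimal sketch of random projection on a 150x150 image (assumed setup,
# not the published experiment): reduce 22,500 pixel values to ~34 numbers.
import numpy as np

rng = np.random.default_rng(0)

original_dim = 150 * 150                      # 22,500 pixel values per image
reduced_dim = round(0.0015 * original_dim)    # ~34 dimensions, 0.15% of the original

# Random projection matrix with Gaussian entries (one common choice).
projection = rng.normal(size=(reduced_dim, original_dim)) / np.sqrt(reduced_dim)

image = rng.random(original_dim)              # stand-in for a flattened 150x150 image
compressed = projection @ image               # ~34-dimensional summary of the image

print(compressed.shape)                       # (34,)
```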

The study's authors then designed a simple algorithm and tested its performance on machines. They were astonished to find that this algorithm, which mimics a very simple neural network, performed as well as the human brain, offering a clue to how our brain learns. The newly designed algorithm could help researchers and engineers build highly sophisticated learning machines that, like humans, learn from compressed information.
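The article does not spell out the algorithm, but the hedged illustration below shows one way a very simple learner could categorise images from their randomly projected form: store one average ("centroid") per category and assign a new image to the nearest centroid. The data, category structure, and classifier here are assumptions for demonstration, not the published method.

```python
# Hedged illustration (not the published algorithm): a nearest-centroid
# classifier operating on randomly projected images.
import numpy as np

rng = np.random.default_rng(1)

def project(images, projection):
    """Compress flattened images with a fixed random projection."""
    return images @ projection.T

def fit_centroids(compressed, labels):
    """One prototype per category: the mean of its compressed examples."""
    return {c: compressed[labels == c].mean(axis=0) for c in np.unique(labels)}

def predict(compressed, centroids):
    """Assign each compressed image to the closest category prototype."""
    classes = list(centroids)
    dists = np.stack([np.linalg.norm(compressed - centroids[c], axis=1) for c in classes])
    return np.array(classes)[dists.argmin(axis=0)]

# Toy data: two synthetic "categories" of flattened 150x150 images.
n, dim, k = 40, 150 * 150, 34
projection = rng.normal(size=(k, dim)) / np.sqrt(k)
labels = np.repeat([0, 1], n // 2)
images = rng.random((n, dim)) + labels[:, None] * 0.5   # category 1 is slightly brighter

compressed = project(images, projection)
centroids = fit_centroids(compressed, labels)
print((predict(compressed, centroids) == labels).mean())  # accuracy on the toy data
```

The point of the sketch is only that a learner this simple can still separate broad categories after the images have been compressed to a few dozen numbers.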

“We were surprised by how close the performance was between extremely simple neural networks and humans,” Vempala said.

“This fascinating paper introduces a localized random projection that compresses images while still making it possible for humans and machines to distinguish broad categories,” said Sanjoy Dasgupta, professor at the University of California San Diego.

The study appeared in the journal Neural Computation.
