Scientists Reconstruct Brains’ Visions Into Digital Video

Apr 15, 2006
56,639
#1
Scientists Reconstruct Brains’ Visions Into Digital Video In Historic Experiment

UC Berkeley scientists have developed a system to capture visual activity in human brains and reconstruct it as digital video clips. Eventually, this process will allow you to record and reconstruct your own dreams on a computer screen.

I just can't believe this is happening for real, but according to Professor Jack Gallant—UC Berkeley neuroscientist and coauthor of the research published today in the journal Current Biology—"this is a major leap toward reconstructing internal imagery. We are opening a window into the movies in our minds."

Indeed, it's mind-blowing. I'm simultaneously excited and terrified. This is how it works:

They used three different subjects for the experiments; incidentally, all three were part of the research team, because the experiments require lying inside a functional Magnetic Resonance Imaging (fMRI) system for hours at a time. The subjects were shown two different sets of Hollywood movie trailers while the fMRI system recorded blood flow through their visual cortex.

The readings were fed into a computer program that divided them into three-dimensional pixel units called voxels (volumetric pixels). This process effectively decodes the brain signals generated by moving pictures, connecting the shape and motion information in the movies to specific patterns of brain activity. As the sessions progressed, the computer learned more and more about how the visual activity presented on the screen corresponded to the activity in the brain.
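
The post doesn't include any of the lab's code, but the step it describes, learning how on-screen activity maps to brain activity, is essentially a regression problem: for each voxel, fit a model that predicts its response from features of the movie. Purely as a hypothetical sketch (the toy sizes, random stand-in data, and choice of ridge regression are my assumptions, not the Gallant lab's actual pipeline), it might look like this in Python:

```python
import numpy as np
from sklearn.linear_model import Ridge

# Toy sizes, chosen only so the sketch runs quickly:
# T time points of fMRI data, F movie-derived features, V visual-cortex voxels.
T, F, V = 1000, 500, 200

rng = np.random.default_rng(0)
movie_features = rng.standard_normal((T, F))    # stand-in for real shape/motion features
voxel_responses = rng.standard_normal((T, V))   # stand-in for the measured BOLD signal

# One regularized linear mapping from "what was on the screen" to
# "how each voxel responded"; Ridge handles all voxels at once.
encoding_model = Ridge(alpha=100.0)
encoding_model.fit(movie_features, voxel_responses)

# Once trained, the model can predict the activity a *new* clip should evoke,
# which is what the clip-matching step described below relies on.
predicted = encoding_model.predict(movie_features[:10])
print(predicted.shape)   # (10, 200): one predicted voxel pattern per time point
```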
An 18-million-second picture palette

After recording this information, another set of clips was used to reconstruct the videos shown to the subjects. The computer analyzed 18 million seconds of random YouTube video, building a database of the brain activity each clip would be expected to produce. From all those videos, the software picked the hundred clips whose predicted brain activity was most similar to the activity actually measured while the subject watched, and combined them into one final movie. Although the resulting video is low resolution and blurry, it clearly matches the actual clips watched by the subjects.

Think of those 18 million seconds of random video as a painter's color palette. A painter sees a red rose in real life and tries to reproduce the color using the different kinds of red available on his palette, combining them to match what he's seeing. The software is the painter and the 18 million seconds of random video is its color palette. It analyzes how the brain reacts to certain stimuli, compares that to the brain reactions predicted for the 18-million-second palette, and picks whatever most closely matches those reactions. Then it combines the clips into a new one that approximates what the subject was seeing. Notice that the 18 million seconds of video are not what the subject is seeing; they are random bits used only to compose the reconstructed image.
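
Again purely as an illustration of the painter-and-palette idea above (none of these names, shapes, or numbers come from the paper), the "keep the hundred closest matches and blend them" step might be sketched like this:

```python
import numpy as np

def reconstruct_frame(observed_activity, library_activity, library_frames, top_k=100):
    """Blend the top_k library clips whose predicted brain activity
    best correlates with the observed activity."""
    # z-score both sides so the dot product below behaves like a correlation
    obs = (observed_activity - observed_activity.mean()) / observed_activity.std()
    lib = (library_activity - library_activity.mean(axis=1, keepdims=True)) \
          / library_activity.std(axis=1, keepdims=True)

    similarity = lib @ obs / obs.size        # one score per candidate clip
    best = np.argsort(similarity)[-top_k:]   # indices of the closest matches

    weights = similarity[best]
    weights = weights / weights.sum()        # turn similarities into blending weights

    # Weighted average of the matching clips' frames -> the blurry composite
    return np.tensordot(weights, library_frames[best], axes=1)

# Toy usage with random stand-ins: 10,000 candidate clips, 200 voxels, 32x32 frames.
rng = np.random.default_rng(0)
library_activity = rng.standard_normal((10_000, 200))  # predicted activity per clip
library_frames = rng.random((10_000, 32, 32))          # one representative frame per clip
observed = rng.standard_normal(200)                    # what the fMRI actually measured
frame = reconstruct_frame(observed, library_activity, library_frames, top_k=100)
print(frame.shape)   # (32, 32)
```

In the real system, the library would hold the activity predicted by the encoding model for each second of the 18 million seconds of YouTube video, and the blending would be done frame by frame to build up the final movie.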

Given a big enough database of video material and enough computing power, the system would be able to re-create any images in your brain.




In this other video you can see how the process worked for the three experimental subjects. In the top-left square you can see the movie the subjects were watching while they were in the fMRI machine. Right below it is the movie "extracted" from their brain activity. It shows that the technique gives consistent results regardless of what is being watched, or who is watching. The three rows of clips next to the left column show the random movies that the computer program used to reconstruct the visual information.

Right now, the resulting quality is not good, but the potential is enormous. Lead research author—and one of the lab test bunnies—Shinji Nishimoto thinks this is the first step to tap directly into what our brain sees and imagines:

"Our natural visual experience is like watching a movie. In order for this technology to have wide applicability, we must understand how the brain processes these dynamic visual experiences.
The brain recorders of the future"


Imagine that. Capturing your visual memories, your dreams, the wild ramblings of your imagination into a video that you and others can watch with your own eyes.

This is the first time in history that we have been able to decode brain activity and reconstruct motion pictures on a computer screen. The path this research opens boggles the mind. It reminds me of Brainstorm, the cult movie in which a group of scientists led by Christopher Walken develops a machine capable of recording a human being's five senses and then playing them back into the brain itself.

This new development brings us closer to that goal, which I have no doubt will happen at some point. Given the exponential increase in computing power and our understanding of human biology, I think it will arrive sooner than most mortals expect. Perhaps one day you will be able to go to sleep wearing a flexible band labeled Sony Dreamcam around your skull. [UC Berkeley]


http://gizmodo.com/5843117/scientists-reconstruct-video-clips-from-brain-activity

-------------------------------------------


HOLY SHIT! :shocked:
 


swag

L'autista
Administrator
Sep 23, 2003
84,922
#2
As long as it isn't Prof. Ralph Freeman, who is also in his department. That guy was the most unethical person I have ever met -- and, for a period, worked for as a researcher -- in my entire life. :tdown:
 

swag

L'autista
Administrator
Sep 23, 2003
84,922
#5
Yeah, but they both have active labs in the UC Berkeley neuroscience department:
http://neuroscience.berkeley.edu/users/

Ralph Freeman is the most foul and disgusting excuse of a human I have ever met. Sleazeball supreme -- everything from screwing over research advisors by deciding not to pay them after the fact, to sexually assaulting female lab staff at team conferences and offsites (while the guy was married), etc.

He makes my blood boil even so many years later. :(
 

swag

L'autista
Administrator
Sep 23, 2003
84,922
#7
Why didn't you report him to the authorities?
Because those issues weren't mine to report. In legal terms, that's hearsay; only the offended parties can file their own grievances.

In my case, for the dickhead moves he pulled on me, I did just that and reported him. I was the first person in the history of the university to actually stand up to that asshole. I filed grievances with an ombudsman and the Dean of the university. Freeman came back with an official response that replied with a snarky, "Well, in all my years in the university I think I've been blessed with great relationships with graduate students and lab assistants."

I filed all this with the university while everyone was telling me not to do it -- it would be a "career-limiting move" in academia, given his stature and my need for his references. But at that stage, I was done with graduate school and didn't care anymore -- what was wrong was wrong.

The problem is that the rest of the lot who got victimized by this guy held back for those reasons: fear that they would essentially be blacklisted from working for another lab in the field again, etc. It was a form of extortion, actually.

What vindicated me a bit was that anonymous letters came in on my case saying that the university needed to dig deeper into the situation and that my raised concerns had merit. Note that they were filed anonymously, because people were afraid to stand up to him.
 

swag

L'autista
Administrator
Sep 23, 2003
84,922
#9
It's cool stuff. But getting the data is highly controversial. These labs do a lot of work with living animals to map the visual cortex, and a lot of animal rights activists are mortified and outraged by the means used to get that data. So using MRI data from humans is a great step toward addressing some of those concerns (often irrational, but not entirely) and making the work more relevant to humans.

I remember plenty of evidence in the lab years ago suggesting this was possible. I am guessing the technology finally caught up to start proving it out.
 
