Facebook's annual F8 conference in San Jose, California ended yesterday. The event included two days of interactive demos, announcements, and developer advice on getting the best from Facebook. Among the expected social networking and app features were some interesting technology and 'moon shot' style presentations, many about VR and languages. Probably the most interesting technology discussed, however, was a brain-computer interface for typing, plus skin-hearing.
Penny for your thoughts
We hear that Facebook currently has 60 developers working on a brain-computer interface. The ultimate aim is to allow users to type at about 100 words per minute, simply by thinking what they want to write. Mark Zuckerberg said that the project "will one day allow us to choose to share a thought, just like we do with photos and videos".
Facebook's brain-computer interface is thankfully non-invasive, relying upon "optical imaging to scan your brain a hundred times per second to detect you speaking silently in your head, and translate it into text," reports TechCrunch. Regina Dugan, the head of Facebook's R&D division Building 8, said work on the non-invasive brain-computer interface began six months earlier. It seems to be based upon current work at Stanford, where paralysed patients can type using a brain-embedded sensor.
The plan is to mass produce and ship the non-invasive devices as and when they are ready (within two years is the goal). Meanwhile Dugan sought to allay privacy fears by saying that the device would only decode "the words you've already decided to share by sending them to the speech centre of your brain." (So during interrogation you must avoid saying things to yourself in your head like: "don't tell them the secret key is in the coffee jar"…)
In a complementary sensory development, other researchers are working on skin-hearing. This interesting technology has been built into prototypes that allow patches of skin to mimic the cochlea in your ear, translating sound waves into specific frequencies for your brain. In testing, the system has so far worked only with a very limited vocabulary, so it clearly requires further hardware and software optimisation.