Thursday, May 28, 2009
Dawn M. Taylor on Brain-Computer Interfaces
Computerworld (05/18/09) Anthes, Gary
Brain-computer interface research is finally starting to take off because the expansion of computer capacity has broadened people's ability to digitize and process numerous individual neural impulses concurrently and apply more sophisticated mathematical decoding algorithms to those impulses, says Case Western Reserve University professor Dawn M. Taylor. She is a researcher at the Cleveland Functional Electrical Stimulation (FES) Center, where the focus is the restoration of movement and function to paralysis victims through the application of electrical current to the peripheral nerves. "Basically, we are reconnecting the brain to the muscles so people can control their paralyzed limb just by thinking about doing so," Taylor says. "Intended movements can also be used to control other technologies, such as prosthetic limbs, assistive robots, or a computer mouse." Taylor says the least invasive method for recording neural signals is external surface electroencephalograms, while sensors also can be implanted atop the brain, within the skull, or under the scalp. She says that ideally the same neurons should be recorded for decades so the brain can learn how to trigger those neurons to control a device. Taylor postulates that advancements in technology miniaturization and wireless communication should eventually enable the shrinkage of all the equipment and processors used in the lab to a package that can be carried on a wheelchair or implanted within the body. She says the FES Center has developed a hand-grasp system for people who cannot move their hands due to spinal cord injuries, which has been commercialized. Taylor also says that electrode technologies can be used to stimulate the brain and circumvent fissures in the neural pathways that bring sensory information in.
Sunday, May 17, 2009
Tempo Manual - Hacking
To understand the TEMPO source, the hacking manual is really important.
This section is intended as a short road map through the TEMPO
source code files. The TEMPO code is liberally commented, and, as
with any other open source project, the source code is the ultimate
reference. Also, Doxygen format is used for commenting the TEMPO
class interfaces, so the Doxygen tool can be employed along with the
tempo.cfg Doxygen configuration file in the TEMPO distribution root
directory to generate documentation for these interfaces in various
formats.
Source code files are grouped in the src subdirectory of the TEMPO
distribution. The data subdirectory contains files with the face and
scalp models, a file with electrode names and positions, and a sample
EDF file. The manual subdirectory contains the help files (what you
are reading now) in HTML format. The resources subdirectory contains
artwork (icon files). The tmp subdirectory is where compilation
artifacts are stored (note, however, that the executable is generated
in the TEMPO distribution root directory). The file tempo.pro is the
Qt project file. The other files in the TEMPO distribution root
directory are the standard files of any open source project.
TEMPO is written in the C++ programming language, using the OpenGL
library for 3D graphics and the Qt toolkit for the graphical user
interface. Familiarity with these technologies is therefore necessary
in order to understand the source code.
When examining the src subdirectory, one should start with the
vector.*, color.*, bounding_box.*, vertex.*, material.*, and
triangle.* files. These files define the simple data structures used
to construct the geometry model.
One should then proceed to the object.* files. These files define the
data structure, and the accompanying operations, for the geometry
model. Two operations related to this data structure are especially
important: creating a model from a given data file and rendering a
model using OpenGL routines. The input format for object geometry is
a simple ASCII format; the input file contains arrays of vertices,
normals, materials, and triangles. Each array is preceded by a number
giving the array length, and array elements are groups of numbers
corresponding to the fields of the data structures mentioned above.
The data subdirectory of the TEMPO distribution contains the files
face.mo and scalp.mo with the face and scalp models in this format.
The OpenGL code rendering the object model is (as emphasized in the
comments) left without any optimization for the sake of simplicity;
nevertheless, this code runs quickly even on software-only OpenGL
implementations.
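As an illustration, here is a minimal C++ sketch of a reader for such
a length-prefixed ASCII format. The field layouts are assumptions made
for this example only; the vertex.*, material.*, triangle.* and
object.* files hold the authoritative definitions.

    // Minimal sketch of a reader for the length-prefixed ASCII geometry
    // format described above; field layouts are illustrative assumptions.
    #include <cstdio>
    #include <vector>

    struct Vec3 { float x, y, z; };

    bool read_model(const char *path,
                    std::vector<Vec3> &vertices,
                    std::vector<Vec3> &normals)
    {
        std::FILE *f = std::fopen(path, "r");
        if (!f) return false;

        int n = 0;

        // Each array is preceded by its length.
        std::fscanf(f, "%d", &n);
        vertices.resize(n);
        for (int i = 0; i < n; ++i)
            std::fscanf(f, "%f %f %f",
                        &vertices[i].x, &vertices[i].y, &vertices[i].z);

        std::fscanf(f, "%d", &n);
        normals.resize(n);
        for (int i = 0; i < n; ++i)
            std::fscanf(f, "%f %f %f",
                        &normals[i].x, &normals[i].y, &normals[i].z);

        // Materials and triangles follow the same length-then-elements
        // pattern and are omitted here.
        std::fclose(f);
        return true;
    }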
The scene.* files define the data structure, and the accompanying
operations, for the scene object. The scene object contains instances
of the face and scalp objects. The visualization created by TEMPO is
fairly generic, in the sense that this code could be used for any
kind of visualization where some parts of the scene are color coded
from interpolated values and other parts are rendered as is. Thus,
instead of "face" and "scalp", the terms "not mapped" and "mapped"
are used throughout the TEMPO code. Likewise, the term "sensor" is
used later instead of the term "electrode". It would actually be very
easy to reuse the TEMPO rendering code to visualize, say,
interpolated static load across a bridge given loads measured at some
points; but that is another story.
The geometry-related parts of the TEMPO code are completed by the
quaternion.* and rotation.* files. These files define the rotation
data structures and the corresponding operations. Quaternions are
used to calculate the cumulative scene rotation after the user
interacts with the mouse, as sketched below.
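A minimal sketch of this technique follows; the names and interfaces
are illustrative, not TEMPO's actual ones.

    // Minimal sketch of quaternion-based rotation accumulation.
    #include <cmath>

    struct Quaternion {
        double w, x, y, z;

        // Hamilton product: composes two rotations.
        Quaternion operator*(const Quaternion &q) const {
            Quaternion r;
            r.w = w * q.w - x * q.x - y * q.y - z * q.z;
            r.x = w * q.x + x * q.w + y * q.z - z * q.y;
            r.y = w * q.y - x * q.z + y * q.w + z * q.x;
            r.z = w * q.z + x * q.y - y * q.x + z * q.w;
            return r;
        }
    };

    // Rotation by `angle` radians around the unit axis (ax, ay, az).
    Quaternion from_axis_angle(double ax, double ay, double az, double angle)
    {
        Quaternion q;
        double s = std::sin(angle / 2.0);
        q.w = std::cos(angle / 2.0);
        q.x = ax * s; q.y = ay * s; q.z = az * s;
        return q;
    }

    // On each mouse drag, compose the incremental rotation with the total;
    // the result is again a rotation (renormalize occasionally to fight
    // floating-point drift).
    Quaternion accumulate(const Quaternion &total, const Quaternion &incremental)
    {
        return incremental * total;
    }

Composing unit quaternions avoids the gimbal lock problems that
accumulating Euler angles can suffer from, which is presumably why
they are used here.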
The exception.* files define the exceptions that may be thrown from
TEMPO for error-handling purposes.
The signal processing parts of the TEMPO code begin with the fft.*
files. These files implement the FFT procedure, together with caching
of auxiliary data for faster execution.
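For orientation, here is a minimal recursive radix-2 FFT sketch
(power-of-two length only), showing the kind of routine involved.
TEMPO's implementation additionally caches auxiliary data such as the
twiddle factors across calls, which this sketch recomputes every time.

    // Minimal recursive radix-2 FFT sketch; not TEMPO's actual code.
    #include <complex>
    #include <vector>

    typedef std::complex<double> cd;

    void fft(std::vector<cd> &a) {
        const std::size_t n = a.size();
        if (n < 2) return;

        // Split into even- and odd-indexed halves and transform recursively.
        std::vector<cd> even(n / 2), odd(n / 2);
        for (std::size_t i = 0; i < n / 2; ++i) {
            even[i] = a[2 * i];
            odd[i]  = a[2 * i + 1];
        }
        fft(even);
        fft(odd);

        const double PI = 3.14159265358979323846;
        for (std::size_t k = 0; k < n / 2; ++k) {
            // Twiddle factor e^(-2*pi*i*k/n); a cached table avoids
            // recomputing these on every call.
            cd t = std::polar(1.0, -2.0 * PI * (double)k / (double)n) * odd[k];
            a[k]         = even[k] + t;
            a[k + n / 2] = even[k] - t;
        }
    }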
The sensors.* files define the data structure for electrode
("sensor", in TEMPO terminology, as explained above) names and
positions. This information is read from a file in a simple ASCII
format: the first line of the file contains the number of electrodes,
and then follow lines with electrode information, where the length of
the electrode name is given first, then the electrode name between
quotation marks, and finally the electrode position (in the same
coordinate system in which the face and scalp models are given). The
data subdirectory of the TEMPO distribution contains the file
10_10.ms with the corresponding information for an electrode
arrangement according to the 10-10 and 10-20 international standards.
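A minimal C++ sketch of a reader for this format might look as
follows. The assumption of exactly three position coordinates is
based on the 3D face and scalp models; sensors.* is the authoritative
version.

    // Minimal sketch of a reader for the sensor file format described above.
    #include <cstdio>
    #include <string>
    #include <vector>

    struct Sensor { std::string name; float x, y, z; };

    bool read_sensors(const char *path, std::vector<Sensor> &out)
    {
        std::FILE *f = std::fopen(path, "r");
        if (!f) return false;

        int count = 0;
        std::fscanf(f, "%d", &count);        // first line: number of electrodes
        for (int i = 0; i < count; ++i) {
            int name_length = 0;
            std::fscanf(f, "%d", &name_length);     // name length comes first
            char name[64] = { 0 };
            std::fscanf(f, " \"%63[^\"]\"", name);  // name between quotation marks
            Sensor s;
            s.name = name;
            std::fscanf(f, "%f %f %f", &s.x, &s.y, &s.z);
            out.push_back(s);
        }
        std::fclose(f);
        return true;
    }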
Reading the input EDF file and calculating the EEG scores is split
between the input.* and signal.* files. The former define the data
structure and the operations to read the EDF file header and to
calculate the array of EEG score values given the needed set of
samples on each electrode. The latter define the data structure
representing an individual EEG channel, together with operations to
read the corresponding signal header from the EDF file and to provide
a set of signal samples of given length at a given position in the
EDF file (this operation is used by the above-mentioned function from
the input.cpp file that calculates the array of EEG score values). It
is important to note that the function in input.cpp that reads the
EDF header also looks at the signal headers and compares the signal
names to the names known to the sensors object discussed above (an
instance of this object is passed to the function). Thus, signal
objects are created only for those channels of the EDF file whose
names correspond to electrodes known to TEMPO.
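For orientation, here is a sketch of reading the fixed 256-byte EDF
header and the per-signal labels, which is the information needed to
match channel names against known sensors. The field sizes and
offsets follow the published EDF specification; this is not TEMPO's
actual code, which of course handles much more.

    // Sketch of reading the EDF fixed header and signal labels.
    #include <cstdio>
    #include <cstdlib>
    #include <string>
    #include <vector>

    bool read_edf_labels(const char *path, std::vector<std::string> &labels)
    {
        std::FILE *f = std::fopen(path, "rb");
        if (!f) return false;

        char fixed[256];
        if (std::fread(fixed, 1, 256, f) != 256) { std::fclose(f); return false; }

        // Number of signals: 4 ASCII characters at offset 252.
        int ns = std::atoi(std::string(fixed + 252, 4).c_str());

        // Signal labels follow immediately: 16 ASCII characters per signal.
        for (int i = 0; i < ns; ++i) {
            char label[17] = { 0 };
            if (std::fread(label, 1, 16, f) != 16) break;
            labels.push_back(label);   // trailing padding spaces left to the caller
        }
        std::fclose(f);
        return ns > 0;
    }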
The mapping of score values to colors is encapsulated in the
interpolation object, defined in the interpolation.* files. Weight
coefficients for each scalp vertex, based on the vertex's distance
from the set of nearest electrodes, are calculated when this object
is initialized and are used later to actually perform the
interpolation. The mapping of values to the color palette is also
defined here.
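A minimal sketch of this precompute-then-blend pattern is given
below. The inverse-distance weighting shown is an assumption made for
illustration; TEMPO's exact weighting scheme may differ.

    // Sketch of precomputed distance-based interpolation weights.
    #include <cmath>
    #include <vector>

    struct Vec3 { float x, y, z; };

    static float distance(const Vec3 &a, const Vec3 &b)
    {
        float dx = a.x - b.x, dy = a.y - b.y, dz = a.z - b.z;
        return std::sqrt(dx * dx + dy * dy + dz * dz);
    }

    // At initialization: normalized weight of every sensor for one vertex.
    std::vector<float> vertex_weights(const Vec3 &vertex,
                                      const std::vector<Vec3> &sensors)
    {
        std::vector<float> w(sensors.size());
        float sum = 0.0f;
        for (std::size_t i = 0; i < sensors.size(); ++i) {
            w[i] = 1.0f / (distance(vertex, sensors[i]) + 1e-6f);  // avoid /0
            sum += w[i];
        }
        for (std::size_t i = 0; i < w.size(); ++i)
            w[i] /= sum;
        return w;
    }

    // Per animation frame: blend the current scores with the stored weights.
    float interpolate(const std::vector<float> &weights,
                      const std::vector<float> &scores)
    {
        float value = 0.0f;
        for (std::size_t i = 0; i < weights.size(); ++i)
            value += weights[i] * scores[i];
        return value;
    }

Precomputing the weights makes each animation frame a cheap weighted
sum, which matters because the interpolation runs for every scalp
vertex on every frame.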
Two simple Qt widgets are defined in the gradient_widget.* and
visualization_widget.* files, respectively. The first widget draws
the color legend and is later instantiated in the legend frame of the
TEMPO main window. The second widget shows the actual OpenGL
rendering and is later instantiated in the left half of the TEMPO
main window.
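To give an idea of how small such a widget can be, here is a
hypothetical legend-style widget using Qt's painting API; the color
choices are arbitrary, and gradient_widget.* is the authoritative
version.

    // Hypothetical legend-style widget; not TEMPO's actual gradient_widget.
    #include <QWidget>
    #include <QPainter>
    #include <QLinearGradient>

    class GradientSketch : public QWidget {
    protected:
        void paintEvent(QPaintEvent *) {
            // Vertical color scale: low values at the bottom, high at the top.
            QLinearGradient gradient(0, height(), 0, 0);
            gradient.setColorAt(0.0, Qt::blue);   // arbitrary "low" color
            gradient.setColorAt(1.0, Qt::red);    // arbitrary "high" color

            QPainter painter(this);
            painter.fillRect(rect(), gradient);
        }
    };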
The files info_widget.*, score_widget.*, legend_widget.*, and
time_widget.* contain the code that creates the graphical user
interface and handles the appropriate events for the corresponding
frames in the right half of the TEMPO main window. This code is
bulky, but actually rather simple.
The main_widget.* files contain the code for the TEMPO application's
main widget, which ties the highest-level components described above
together. The manual_widget.* files implement a simple browser for
the TEMPO help files. Finally, main.cpp is the file containing the
program's main() function.
Saturday, May 16, 2009
Install Tempo under Linux
TEMPO is open source software for 3D visualization of brain electrical activity. TEMPO accepts an EEG file in the standard EDF
format and creates an animated sequence of topographic maps. The topographic
maps are generated over a 3D head model, and the user can navigate
around the head and examine the maps from different viewpoints. Most
mapping parameters are adjustable through the appropriate graphical
user interface controls. Individual topographic maps can also be saved in PNG format for later examination or publishing.
http://code.google.com/p/tempo/
Download the source code of TEMPO.
Install Qt through the Synaptic Package Manager.
Open the terminal and navigate to the root folder of TEMPO.
Use the command "qmake -makefile tempo.pro".
Then "make".
It's done! (The full sequence is sketched below.)
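Putting the steps together, a typical terminal session looks like
this (assuming the unpacked directory and the generated executable
are both named tempo; adjust the names to your setup):

    cd tempo                     # TEMPO distribution root directory
    qmake -makefile tempo.pro    # generate a Makefile from the Qt project file
    make                         # compile; the executable lands in this directory
    ./tempo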
However, I have little knowledge of Qt programming; I didn't even
know which tool understands the ".pro" file. It took me a whole
afternoon to figure out how to compile the source files. I even tried
to write a makefile myself, but found it rather hard: hundreds of
errors jumped out! Finally, while searching the net for one of the
errors, I saw somebody say "use qmake to create a makefile from the
.pro file", and then I realized that the .pro file is for qmake!
Thank goodness, I got it!
Learning GCC:
http://gcc.gnu.org/onlinedocs/gcc-4.4.0/gcc/index.html
GNU Make
http://web.mit.edu/gnu/doc/html/make_toc.html#SEC19
Thursday, May 14, 2009
g.tec smart home control with Thoughts

A (Virtual) Smart Home Controlled By Your Thoughts
ICT Results (05/11/09)
The European Union-funded Presenccia project has developed brain-computer interface technology that could eventually enable users to control the interconnected electronic devices in future smart homes. Electroencephalogram (EEG) equipment is used to monitor electrical activity in a user's brain, and after a period of training, the system learns to identify the distinctive patterns of neuronal activity produced when a user imagines turning on a light switch or controlling a media device. The ability to move and control objects in real life or in virtual reality (VR) using only the power of thought could help amputees learn how to use a prosthetic limb or allow people confined to a wheelchair to experience walking in virtual reality. "A virtual environment could be used to train a disabled person to control an electric wheelchair through a brain-computer interface," says Presenccia project coordinator Mel Slater. "It is much safer for them to learn in VR than in the real world, where mistakes could have physical consequences." One application developed by the project allows people to control a small robot using a system known as Steady State Visual Evoked Potentials, which could be adapted for a wheelchair. Four lights on a small box flicker at different frequencies, triggering different reactions in the brain when the subject looks at each light. Each of the four lights could be used to move the wheelchair in a different direction. Another approach allows people to type with their thoughts by staring at the character they want to type. The researchers say that better software, better hardware, and a more thorough understanding of EEG data could give people with "locked-in syndrome" new means of communication.
http://cordis.europa.eu/ictresults/index.cfm?section=news&tpl=article&BrowsingType=Features&ID=90565

Monday, May 11, 2009
BCI basis I
Types of BCI: invasive and non-invasive
Invasive:
1. Electrode implants
2. Prosthetic devices
3. Chip implants
4. Brain in a dish (...)
Non-invasive:
1. Output: EEG, MEG, fMRI
2. Input: cortical plasticity, tongue interface, soundscapes
EEG: Electroencephalography
Wednesday, May 6, 2009
Computational neuroscience
Computational neuroscience is an interdisciplinary science that links the diverse fields of neuroscience, cognitive science, electrical engineering, computer science, physics and mathematics. Historically, the term was introduced by Eric L. Schwartz, who organized a conference, held in 1985 in Carmel, California at the request of the Systems Development Foundation, to provide a summary of the current status of a field which until that point was referred to by a variety of names, such as neural modeling, brain theory and neural networks. The proceedings of this definitional meeting were later published as the book "Computational Neuroscience" (1990).[1] The early historical roots of the field can be traced to the work of people such as Hodgkin & Huxley, Hubel & Wiesel, and David Marr, to name but a few. Hodgkin & Huxley developed the voltage clamp and created the first mathematical model of the action potential. Hubel & Wiesel discovered that neurons in primary visual cortex, the first cortical area to process information coming from the retina, have oriented receptive fields and are organized in columns.[2] David Marr's work focused on the interactions between neurons, suggesting computational approaches to the study of how functional groups of neurons within the hippocampus and neocortex interact, store, process, and transmit information. Computational modeling of biophysically realistic neurons and dendrites began with the work of Wilfrid Rall, with the first multicompartmental model using cable theory.
Computational neuroscience is distinct from psychological connectionism and theories of learning from disciplines such as machine learning, neural networks and statistical learning theory in that it emphasizes descriptions of functional and biologically realistic neurons (and neural systems) and their physiology and dynamics. These models capture the essential features of the biological system at multiple spatial-temporal scales, from membrane currents, protein and chemical coupling to network oscillations, columnar and topographic architecture, and learning and memory. These computational models are used to test hypotheses that can be directly verified by current or future biological experiments.
Currently, the field is undergoing a rapid expansion. There are many software packages, such as GENESIS and NEURON, that allow rapid and systematic in silico modeling of realistic neurons. Blue Brain, a collaboration between IBM and École Polytechnique Fédérale de Lausanne, aims to construct a biophysically detailed simulation of a cortical column on the Blue Gene supercomputer.
From: http://en.wikipedia.org/wiki/Computational_neuroscience