
Thursday, January 21, 2010

Artificial Intelligence Is More Than Just Talk--Google's Top Inventor

Silicon Republic (01/14/10) Kennedy, John

Peter Norvig, Google's director of research, says that humans will soon be talking to computers. Norvig notes that humans and computers already communicate, but not in the same language. He explains that people use keywords rather than whole sentences to communicate with a search engine, which cannot understand a person as well as another human can. "But on the other hand, [the search engine] is giving us answers that a person wouldn't, so it has its strengths and weaknesses," Norvig says. He also expects the proliferation of mobile phones to lead to a different type of interaction with the Internet: as displays shrink, speech recognition will let mobile phone users talk to their devices more. "The advantages with mobile are that if you're in a specific location and you ask a specific query then--because of [global positioning systems]--there's going to be an answer that's appropriate to the location," Norvig says.


From: http://www.siliconrepublic.com/news/article/14862/randd/artificial-intelligence-is-more-than-just-talk-googles-top-inventor


Tuesday, August 18, 2009

Doing what the brain does - how computers learn to listen

Max Planck scientists develop model to improve computer language recognition


 


We see, hear and feel, and make sense of countless diverse, quickly changing stimuli in our environment seemingly without effort. However, doing what our brains do with ease is often an impossible task for computers. Researchers at the Leipzig Max Planck Institute for Human Cognitive and Brain Sciences and the Wellcome Trust Centre for Neuroimaging in London have now developed a mathematical model which could significantly improve the automatic recognition and processing of spoken language. In the future, algorithms of this kind, which imitate brain mechanisms, could help machines to perceive the world around them. (PLoS Computational Biology, August 12, 2009)

Many people will have personal experience of how difficult it is for computers to deal with spoken language. For example, people who 'communicate' with the automated telephone systems now used by many organisations need a great deal of patience. If you speak just a little too quickly or slowly, if your pronunciation isn’t clear, or if there is background noise, the system often fails to work properly. The reason is that the computer programs used to date rely on processes that are particularly sensitive to perturbations. When computers process language, they primarily attempt to recognise characteristic features in the frequencies of the voice in order to identify words.

'It is likely that the brain uses a different process', says Stefan Kiebel from the Leipzig Max Planck Institute for Human Cognitive and Brain Sciences. The researcher presumes that the analysis of temporal sequences plays an important role here. 'Many perceptual stimuli in our environment could be described as temporal sequences.' Music and spoken language, for example, are composed of sequences of different lengths which are hierarchically ordered. According to the scientist’s hypothesis, the brain classifies the various signals from the smallest, fast-changing components (e.g., single sound units like 'e' or 'u') up to large, slow-changing elements (e.g., the topic). The significance of the information at these various temporal levels for the processing of perceptual stimuli is probably much greater than previously thought. 'The brain permanently searches for temporal structure in the environment in order to deduce what will happen next', the scientist explains. In this way, the brain can often predict the next sound units from the slow-changing information. Thus, if the topic of conversation is the hot summer, 'su…' is more likely to be the beginning of the word 'sun' than of the word 'supper'.
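The topic-conditioned prediction Kiebel describes can be illustrated with a toy sketch (not the authors' actual model): a slow-changing context variable, the topic, reshapes the probability of each completion of a fast-changing prefix. The vocabulary and counts below are invented for the example.

```python
# Toy sketch: slow-changing context (the topic) biases prediction of the next
# word from a fast-changing prefix.  Words and counts are invented.
topic_counts = {
    "weather": {"sun": 9, "summer": 6, "supper": 1},
    "dinner":  {"sun": 1, "summer": 1, "supper": 8},
}

def predict(prefix, topic):
    """Return the most probable completion of `prefix` given the topic."""
    counts = topic_counts[topic]
    candidates = {w: c for w, c in counts.items() if w.startswith(prefix)}
    total = sum(candidates.values())
    # Normalise counts into conditional probabilities P(word | prefix, topic).
    probs = {w: c / total for w, c in candidates.items()}
    return max(probs, key=probs.get), probs

best, probs = predict("su", "weather")   # resolves to "sun" in this context
```

Under the invented "weather" context the prefix "su" resolves to "sun", while under the "dinner" context the same prefix resolves to "supper".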

To test this hypothesis, the researchers constructed a mathematical model designed to imitate, in a highly simplified manner, the neuronal processes which occur during the comprehension of speech. The neuronal processes were described by algorithms which processed speech at several temporal levels. The model succeeded in processing speech: it recognised individual speech sounds and syllables. In contrast to other artificial speech recognition systems, it was able to process sped-up speech sequences. Furthermore, it had the brain’s ability to 'predict' the next speech sound. If a prediction turned out to be wrong because the researchers made an unfamiliar syllable out of the familiar sounds, the model was able to detect the error.

The 'language' with which the model was tested was simplified - it consisted of the four vowels a, e, i and o, which were combined to make 'syllables' consisting of four sounds. 'In the first instance we wanted to check whether our general assumption was right', Kiebel explains. With more time and effort, consonants, which are more difficult to differentiate from each other, could be included, and further hierarchical levels for words and sentences could be incorporated alongside individual sounds and syllables. Thus, the model could, in principle, be applied to natural language.
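A minimal sketch of that test setup (with an invented syllable inventory, not the published model): a listener that knows a small set of familiar four-vowel syllables narrows down its predictions sound by sound, and reports a prediction error as soon as the incoming sequence leaves the familiar set.

```python
# Minimal sketch of the toy language described above: 'syllables' are strings
# of four vowels drawn from a, e, i, o.  The familiar inventory is invented.
FAMILIAR = {"aeio", "oiea", "eaoi"}

def listen(syllable):
    """Track which familiar syllables remain consistent with the sounds heard
    so far; an empty candidate set means a prediction error (unfamiliar
    syllable).  Returns (recognised_syllable, error_position)."""
    candidates = set(FAMILIAR)
    for pos, sound in enumerate(syllable):
        candidates = {s for s in candidates if s[pos] == sound}
        if not candidates:
            return None, pos          # prediction failed at this sound
    return syllable, None

familiar_result = listen("aeio")      # recognised: ("aeio", None)
unfamiliar_result = listen("aeoi")    # rearranged sounds: error at position 2
```

The unfamiliar syllable "aeoi" is built from familiar sounds, mirroring how the researchers probed the model's error detection.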

'The crucial point, from a neuroscientific perspective, is that the reactions of the model were similar to what would be observed in the human brain', Stefan Kiebel says. This indicates that the researchers’ model could represent the processes in the brain. At the same time, the model provides new approaches for practical applications in the field of artificial speech recognition.

Original work:

Stefan J. Kiebel, Katharina von Kriegstein, Jean Daunizeau, Karl J. Friston
Recognizing sequences of sequences
PLoS Computational Biology, August 12, 2009.




Contact:

Dr Christina Schröder
Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig
Tel.: +49 (0)341 9940-132
E-mail: cschroeder@cbs.mpg.de


Dr Stefan Kiebel
Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig
Tel.: +49 (0)341 9940-2435
E-mail: kiebel@cbs.mpg.de


Saturday, August 8, 2009

Statistics

For Today's Graduate, Just One Word: Statistics
New York Times (08/06/09) Lohr, Steve; Fuller, Andrea

The statistics field's popularity is growing among graduates as they realize that it involves more than number crunching and deals with pressing real-world challenges, and Google chief economist Hal Varian predicts that "the sexy job in the next 10 years will be statisticians." The explosion of digital data has played a key role in the elevation of statisticians' stature, as computing and the Web are creating new data domains to investigate in myriad disciplines. Traditionally, social sciences tracked people's behavior by interviewing or surveying them. “But the Web provides this amazing resource for observing how millions of people interact,” says Jon Kleinberg, a computer scientist and social networking researcher at Cornell, who won the 2008 ACM-Infosys Foundation award. In research just published, Kleinberg and two colleagues tracked 1.6 million news sites and blogs during the 2008 presidential campaign, using algorithms that scanned for phrases associated with news topics like “lipstick on a pig.” The Cornell researchers found that, generally, the traditional media leads and the blogs follow, typically by 2.5 hours, though a handful of blogs were quickest to mention quotes that later gained wide attention. IDC forecasts that the digital data surge will increase by a factor of five by 2012. Meeting this challenge is the job of the newest iteration of statisticians, who use powerful computers and complex mathematical models to mine meaningful patterns and insights out of massive data sets. "The key is to let computers do what they are good at, which is trawling these massive data sets for something that is mathematically odd," says IBM researcher Daniel Gruhl. "And that makes it easier for humans to do what they are good at--explain those anomalies." The American Statistical Association estimates that the number of people attending the statistics profession's annual conference has risen from about 5,400 in recent years to some 6,400 this week.
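The lead-lag measurement described for the Cornell study can be sketched as follows (a simplification with invented timestamps, not the researchers' code): record when a phrase first appears in traditional media and in blogs, then take the median lag across phrases.

```python
# Illustrative lead-lag sketch with invented first-mention timestamps.
from datetime import datetime
from statistics import median

first_mention = {
    "lipstick on a pig": {
        "media": datetime(2008, 9, 9, 14, 0),
        "blogs": datetime(2008, 9, 9, 16, 30),
    },
    "hockey mom": {
        "media": datetime(2008, 9, 3, 21, 0),
        "blogs": datetime(2008, 9, 3, 23, 30),
    },
}

# Lag, in hours, between the first mainstream-media mention and the first
# blog mention of each phrase.
lags_hours = [
    (t["blogs"] - t["media"]).total_seconds() / 3600
    for t in first_mention.values()
]
typical_lag = median(lags_hours)  # 2.5 hours for this invented sample
```

A positive median lag means the traditional media leads and the blogs follow, which is the pattern the study reports.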


Tuesday, June 2, 2009

The Next Frontier: Decoding the Internet's Raw Data

Washington Post (06/01/09) P. A10; Hart, Kim
The massive amounts of data available on the Internet potentially have infinite uses. For example, advertisers want to mine photos and status updates on social networks to better sell products, while scientists are tracking weather patterns using decades of climate records. Now, U.S. White House officials want to make government data available to the public so citizens can monitor government actions. The problem is determining how to organize and display such a massive amount of data without having to sift through volumes of spreadsheets. Participants at the recent symposium at the University of Maryland's Human-Computer Interaction Lab focused on solving this problem. "We're trying to understand data and make sense of it visually, but there's no way of evaluating how effective these visuals really are for people," says PricewaterhouseCoopers research manager Mave Houston. Analysts from the U.S. Department of Defense, SAIC, and Lockheed Martin expressed their frustrations with available information visualization tools, which are too complex for novice users, frequently do not work well with user-generated content, and have difficulty handling large amounts of data. The Human-Computer Interaction Lab is working on ways of linking information, creating user-friendly technology devices, and improving how people interact with the Web. "Our belief is that technology is not just useful as toys or for business," says lab founder Ben Shneiderman. "We're talking about using these technologies for national priorities."


Thursday, March 19, 2009

News Digest Mar.19

Studying the Female Form: Math Could Lead to Sexier Lingerie, Safer Labcoats

PhysOrg.com (03/12/09)


Researchers at Japan's Kyoto Institute of Technology and Osaka University have developed a computerized model for identifying body shape components that can be used to design close-fitting products.  The researchers have developed a technique that allows them to extract a person's body shape components from three-dimensional (3D) data and then link that data to a classification of trunk shapes.  The researchers measured 560 Japanese women aged 19 to 63 using laser metrology to map control points at specific places on their trunks.  The data was applied to a generic 3D trunk model to create a database of body shapes.  The researchers then used statistical and cluster analysis to classify trunk characteristics into five different groups, each defined by slimness, breast size and angle, neck type, and shoulder slope.  The researchers say their analysis will help in producing better-fitting clothes for each size and shape, and in improving practical functional clothes used for body adjustments and posture improvement.
http://www.physorg.com/news156096749.html
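The article does not say which clustering algorithm the researchers used; k-means is one common choice for grouping measurements into a fixed number of clusters, sketched below on invented two-dimensional features (the real study worked from 3D laser-scan data).

```python
# Hedged sketch: cluster invented (slimness, shoulder-slope) measurements into
# five groups with plain k-means.  Not necessarily the method the study used.
import random

def kmeans(points, k, iters=50, seed=0):
    """Plain k-means: alternate nearest-centre assignment and centre update."""
    rng = random.Random(seed)
    centers = rng.sample(points, k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            # Assign each point to its nearest centre (squared distance).
            nearest = min(range(k),
                          key=lambda i: sum((a - b) ** 2
                                            for a, b in zip(p, centers[i])))
            clusters[nearest].append(p)
        # Move each centre to the mean of its assigned points.
        centers = [tuple(sum(xs) / len(xs) for xs in zip(*cl)) if cl
                   else centers[i]
                   for i, cl in enumerate(clusters)]
    return centers, clusters

# Invented measurements for 60 subjects; the study measured 560 women.
rng = random.Random(1)
points = [(rng.uniform(0, 1), rng.uniform(0, 1)) for _ in range(60)]
centers, clusters = kmeans(points, k=5)  # five groups, as in the study
```

Each resulting cluster centre plays the role of a representative trunk shape for its group.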

Thursday, February 12, 2009

News Digest Feb 12

Cognitive Computing Project Aims to Reverse-Engineer the Mind


Wired News (02/06/09) Ganapati, Priya

IBM Almaden Research Center cognitive computing project manager Dharmendra Modha has a plan to engineer the mind by reverse-engineering the brain.  Neuroscientists, computer engineers, and psychologists are working together to create a new computing architecture that simulates the brain's perception, interaction, and cognitive abilities.  The researchers hope to first simulate a human brain on a supercomputer, and then use new nano-materials to create logic gates and transistor-based equivalents of neurons and synapses to build a hardware-based, brain-like system.  The effort has received a $5 million grant from the Defense Advanced Research Projects Agency, which is enough to run the first phase of the project.  The researchers say if the project is successful it could lead to a new computing system within the next decade.  "The idea is to do software simulations and build hardware chips that would be based on what we know about how the brain and neural circuits work," says University of California-Merced professor and project participant Christopher Kello.  The researchers started by building a real-time simulation of a small cerebral cortex, which has the same structure in all mammals.  The simulation required 8 terabytes of memory on an IBM BlueGene/L supercomputer.  Modha says the simulation, although not complete, offered insights into the brain's high-level computational principles.  A human cerebral cortex is about 400 times larger than the small mammal simulation and would require a supercomputer with a memory capacity of 3.2 petabytes and a computational capacity of 36.8 petaflops.  While waiting for supercomputing technology to improve, the researchers are working on implementing neural architectures in silicon.
http://blog.wired.com/gadgets/2009/02/cognitive-compu.html
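The memory figures quoted above are internally consistent, as a quick back-of-envelope check shows (the 36.8-petaflop compute figure cannot be derived from the numbers given here):

```python
# Scaling the 8 TB small-mammal simulation by the quoted factor of 400.
small_mammal_tb = 8        # memory used on the BlueGene/L run
scale_factor = 400         # human cortex vs. small-mammal cortex
human_tb = small_mammal_tb * scale_factor   # 3200 TB
human_pb = human_tb / 1000                  # 3.2 PB, matching the article
```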

Tuesday, February 10, 2009

News Digest Feb 10

Google Makes it Easy to Spy on Kids, Workers


Associated Press (02/05/09) Liedtke, Michael

Google recently upgraded its mobile maps software with a feature called Latitude that allows users with mobile devices to automatically share their location with others.  The feature expands on a tool released in 2007 that allows mobile phone users to check their own location on a Google map.  The new feature raises several security concerns, but Google is trying to address this issue by requiring each user to manually turn on the tracking software and making it easy to turn off or limit access to the service.  Google says it will not retain any information on its users' movements, and that only the last location recorded by the tracking service will be stored on Google's computers.  The software uses cell phone towers, global positioning systems, or a Wi-Fi connection to find users' locations in the United States and 26 other countries.  Each user can decide who can monitor their location.  Latitude will initially work on Blackberrys and devices running on Symbian software or Microsoft's Windows Mobile.  Eventually the software will be able to operate on some T-Mobile phones running Google's Android software and on Apple's iPhone and iPod Touch devices.  Google also is offering a PC version of the feature.  The PC program will allow people who do not have a mobile phone to find the locations of contacts or keep track of their children.
http://www.rdmag.com/ShowPR.aspx?PUBCODE=014&ACCT=1400000101&ISSUE=0902&RELTYPE=SOFT&PRODCODE=00000000&PRODLETT=BU&CommonCount=0

 

UAHuntsville Lab Combines Psychology With Technology for Unique Research Projects


University of Alabama in Huntsville (02/02/09) Maples, Joyce

University of Alabama in Huntsville (UAHuntsville) professor Anthony Morris directs the school's Human Factors and Ergonomics Laboratory, which combines psychology and technology to research work performed by human factors engineers.  Experimentation and research projects include human operator interactions with complex systems such as aircraft, and designing work stations that are logical, user friendly, and can help prevent injuries.  For example, Morris and UAHuntsville graduate student Sage Jessee have been working with the Human Research and Engineering Directorate of the Army Research Lab to evaluate the head and eye movements of Black Hawk helicopter pilots.  The project involved using eye-tracking in a video game-like simulator to monitor the pilot's point of gaze and head position during flight scenarios.  The researchers created an "attentional landscape" that characterized the general gaze of the pilot, and identified specific eye measures that correlate with mental workload.  The result of the project was a new ergonomically designed cockpit that enabled pilots to spend 90 percent of their time looking outside windows instead of continuously staring at the instrument panel.
http://www.uah.edu/News/newsread.php?newsID=1283

Saturday, February 7, 2009

News Digest Feb. 7

CHI 2009 Will Showcase Technologies That Bring Digital Life to Reality


ACM (02/06/09)

CHI 2009, sponsored by ACM's Special Interest Group on Computer-Human Interaction, will showcase technologies, designs, and ideas that bring digital life to reality.  The conference will offer a diverse program that includes a video showcase, job fair, and design vignette demos, as well as world-renowned experts on innovation in computer user design.  Research highlights to be presented at the conference include designing digital games for rural children in India; effects of personal photos and presentation intervals on perceptions of recommender systems; a tool that increases Wikipedia credibility; home computer power management strategies; privacy concerns in everyday Wi-Fi use; and improving users' gaming experience.  University of California, Irvine professor Judith Olsen, a pioneer in human-computer interaction and computer-supported cooperative work, will open the conference, and Kees Overbeeke of Eindhoven University of Technology in the Netherlands, a psychologist who works with designers, will close the event with a presentation on "Dreaming of the Impossible."  February 15 is the early registration deadline for CHI 2009, which takes place at the Hynes Convention Center in Boston, MA, on April 4-9.   For more information and to register, click on http://www.chi2009.org.

 

Hospitals With Better IT Have Fewer Deaths, Study Shows


Computerworld (01/30/09) Mearian, Lucas

Patients have better outcomes in hospitals that make greater use of technology, concludes a collaborative study involving multiple universities and healthcare systems.  The study, which involved more than 167,000 patients in 41 hospitals, measured the amount of medical care automation with a Clinical Information Technology Assessment Tool, a survey-based metric that analyzes automation and the ease of use of a hospital's information system.  Study author Dr. Ruben Amarasingham, associate chief of medicine at Parkland Health & Hospital System and an assistant professor of medicine at the University of Texas Southwestern Medical Center, says "hospitals with automated notes and records, order entry, and clinical decision support had fewer complications, lower mortality rates, and lower costs."  By comparing in-patient death rates, medical complications, length of stay, and costs, the study found that hospitals with the most automation saved up to $1,729 per patient for various procedures.  The study explored four common medical conditions--heart attacks, congestive heart failure, coronary artery bypass grafting, and pneumonia--and how technology could be used to automate part of the treatment process.  The survey measured the automation of four procedures and asked doctors to describe the systems' effectiveness and ease-of-use on a 100-point scale.  A 10-point increase in the automation of medical notes and patient records was associated with a 15 percent decrease in patient deaths, and better-automated order-entry systems were associated with a 9 percent decrease in the risk of heart attack and a 55 percent decrease in the need for coronary artery bypass grafts.
http://www.computerworld.com/action/article.do?command=viewArticleBasic&articleId=9127072
