Wednesday, February 23, 2011

Japan company developing sensors for seniors (AP)

Japan's top telecoms company is developing a simple wristwatch-like device to monitor the well-being of the elderly, part of a growing effort to improve care of the old in a nation whose population is aging faster than anywhere else.

The device, worn like a watch, has a built-in camera, a microphone and motion sensors, which measure the pace and direction of hand movements to discern what wearers are doing - from brushing their teeth to vacuuming or making coffee.

In a demonstration at NTT Corp.'s research facility, the test subject's movements were collected as data that popped up as lines on a graph, with each kind of activity showing up as a different pattern of lines. Using this technology, what an elderly person is doing during each hour of the day can be shown on a chart.
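The pattern matching described in the demonstration can be sketched with a toy classifier. This is purely illustrative: the activity names come from the article, but the features, template numbers, and nearest-template method are invented here and are not NTT's published algorithm.

```python
# Hypothetical sketch of activity recognition from a window of wrist-motion
# samples: compute simple statistics and pick the nearest activity template.
from statistics import mean, stdev

# Per-activity templates: (mean movement intensity, variability).
# These numbers are made up for the sketch.
TEMPLATES = {
    "brushing teeth": (0.8, 0.1),
    "vacuuming": (0.5, 0.3),
    "making coffee": (0.2, 0.05),
}

def classify(samples):
    """Pick the activity whose template is nearest to the window's features."""
    feats = (mean(samples), stdev(samples))
    def dist(t):
        return (feats[0] - t[0]) ** 2 + (feats[1] - t[1]) ** 2
    return min(TEMPLATES, key=lambda name: dist(TEMPLATES[name]))

window = [0.75, 0.9, 0.7, 0.85, 0.8, 0.95]  # steady, high-intensity motion
print(classify(window))  # brushing teeth
```

In a real system, windows classified this way over the course of a day would produce exactly the kind of hour-by-hour activity chart the demonstration showed.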

The prototype was connected to a personal computer for the demonstration, but researchers said such data could also be relayed by wireless or stored in a memory card to be looked at later.

Plans for commercial use are still undecided. But similar sensors are being tested around the world as tools for elderly care.

In the U.S., the Institute on Aging at the University of Virginia has been carrying out studies on practical applications of what it calls "body area" networks to promote senior independent living.

What's important is that wearable sensors be easy to use, unobtrusive, ergonomic and even stylish, according to the institute, based in Charlottesville, Virginia. Cost and safety are also key.

Despite the potential for such technology in Japan, a nation filled with electronics and technology companies, NTT President Satoshi Miura said Japan is likely falling behind global rivals in promoting practical uses.

Worries are growing that the Japanese government has not been as generous with funding and other support needed to allow the technology to grow into a real business, despite the fact that Japan is among the world's most advanced nations in the proliferation of broadband.

More than 90 percent of Japan's households are equipped with either optical fiber or high-speed mobile connections.

"But how to use the technology is the other side of the story," Miura said in a presentation. "We will do our best in the private sector, but I hope the government will help."

Nintendo Co.'s Wii game-console remote controller is one example of such sensors becoming a huge business success. But that's video-game entertainment, not social welfare.

George Demiris, associate professor at the School of Medicine at the University of Washington, in Seattle, says technology for the elderly is complex, requiring more than just coming up with sophisticated technology.

Getting too much data, for instance, could simply burden already overworked health care professionals, and overly relying on technology could even make the elderly miserable, reducing opportunities for them to interact with real people, he said.

"Having more data alone does not mean we will have better care for older adults," Demiris said in an e-mail.

"We can have the most sophisticated technology in place, but if the response at the other end is not designed to address what the data show in a timely and efficient way, the technology itself is not useful," he said.


Source

Tuesday, February 22, 2011

Self-correcting robots, at-home 3-D printing are on horizon, says researcher at AAAS

Robots that can self-improve and machines that "print" products at home are technologies soon to become increasingly available, said Cornell's Hod Lipson at the 2011 American Association for the Advancement of Science (AAAS) annual meeting, Feb. 17-21.

Lipson, associate professor of mechanical and aerospace engineering and of computing and information science, said Feb. 19 that robots can observe and reconstruct their own behaviors and use this information to adapt to new circumstances.

Such advances are important because self-reflection plays a key role in accelerating adaptation by reducing costs of physical experimentation, he said. Similarly, the ability of a machine to reconstruct the morphology and behavior of other machines is important to cooperation and competition. Lipson demonstrated a number of experiments on self-reflecting robotic systems, arguing that reflective processes are essential in achieving meta-cognitive capacities, including consciousness and, ultimately, a form of self-awareness.

In a second talk (Feb. 21), Lipson discussed the emergence of solid free-form fabrication technology, which allows 3-D printing of various structures, layer by layer, from electronic blueprints. While this technology has existed for more than two decades, the process has recently been explored for biomedical applications. In particular, new developments in multimaterial printing may allow these compact "fabbers" to move from printing custom implants and scaffolds to "printing" live tissue.

His talk also touched on his experience with the open-source Fab@Home project and its use in printing a variety of biological and non-biological integrated systems. He concluded with some of the opportunities that this technology offers for moving from traditional tissue engineering to digital tissue constructs.

Lipson directs Cornell's Computational Synthesis group, which focuses on automatic design, fabrication and adaptation of virtual and physical machines. He has led work in such areas as evolutionary robotics, multimaterial functional rapid prototyping, machine self-replication and programmable self-assembly. He was one of five Cornell faculty members who presented at this year's AAAS meeting.


Source

Monday, February 21, 2011

Putting your brain in the driver's seat (w/ Video)

(PhysOrg.com) -- Picture driving your car without ever touching the wheel - a vehicle so responsive that it is literally jacked into your thoughts. It sounds like technology of the future, something out of a sci-fi movie, doesn't it? Well, as it turns out, the future is now.

A team of German researchers, led by Raul Rojas, an AI professor at the Freie Universität Berlin, has created a car that can be driven entirely by human thoughts. The car, which has been given the name BrainDriver, was shown off to the world in a video that highlighted the thought-powered driving system on a trip to the airport.


The BrainDriver records brain activity with the help of an Emotiv neuroheadset, a non-invasive brain interface based on electroencephalography sensors made by the San Francisco-based company Emotiv. The neuroheadset was originally designed for gaming. As with most new devices, the user has to be trained to use the interface properly. After some practice runs moving a virtual object, the user can be up and running, driving a modified Volkswagen Passat Variant 3c. The driver's thoughts are able to control the engine, brakes, and steering of the car. Currently, there is a small delay between the driver's thoughts and the car's response.
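The control chain the article describes - classified thoughts mapped to engine, brake, and steering commands - can be sketched as follows. The command set, step sizes, and CarControls interface are all invented for illustration; this is neither Emotiv's API nor the BrainDriver code.

```python
# Toy sketch of a brain-computer-interface driving loop: a trained classifier
# (not shown) emits one of four discrete commands, which nudge the car's
# drive-by-wire controls. All names and values here are hypothetical.
from enum import Enum

class Command(Enum):
    LEFT = "left"
    RIGHT = "right"
    ACCELERATE = "accelerate"
    BRAKE = "brake"

class CarControls:
    """Stand-in for the modified Passat's drive-by-wire interface."""
    def __init__(self):
        self.steering = 0.0   # -1 (full left) .. +1 (full right)
        self.throttle = 0.0   # 0 .. 1

    def apply(self, cmd):
        # Each classified "thought" nudges the controls by a small step,
        # which is one simple way to smooth out classification noise.
        if cmd is Command.LEFT:
            self.steering = max(-1.0, self.steering - 0.1)
        elif cmd is Command.RIGHT:
            self.steering = min(1.0, self.steering + 0.1)
        elif cmd is Command.ACCELERATE:
            self.throttle = min(1.0, self.throttle + 0.1)
        elif cmd is Command.BRAKE:
            self.throttle = 0.0

car = CarControls()
for cmd in [Command.ACCELERATE, Command.ACCELERATE, Command.LEFT]:
    car.apply(cmd)
print(round(car.steering, 1), round(car.throttle, 1))  # -0.1 0.2
```

The small per-command step also illustrates why the article's noted delay matters: each control change waits on the headset classifying the next thought.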

No word yet on how detailed controls will be for other necessary functions, for example opening the gas cap to fill up. The researchers selected the headset after rejecting several other options, including the iPad and eye-tracking devices.

The car is currently only in the prototype phase, and no decision has been made as to whether it will ever be made available to the public once it becomes roadworthy.


Source

Saturday, February 19, 2011

Machines beat us at our own game: What can we do? (AP)

(AP) -- Machines first out-calculated us in simple math. Then they replaced us on the assembly lines, explored places we couldn't get to, even beat our champions at chess. Now a computer called Watson has bested our best at "Jeopardy!"

A gigantic computer created by IBM specifically to excel at answers-and-questions left two champs of the TV game show in its silicon dust after a three-day tournament, a feat that experts call a technological breakthrough.

Watson earned $77,147, versus $24,000 for Ken Jennings and $21,600 for Brad Rutter. Jennings took it in stride, writing "I for one welcome our new computer overlords" alongside his correct Final Jeopardy answer.

The next step for the IBM machine and its programmers: taking its mastery of the arcane and applying it to help doctors plow through blizzards of medical information. Watson could also help make Internet searches far more like a conversation than the hit-or-miss things they are now.

Watson's victory leads to the question: What can we measly humans do that amazing machines cannot do or will never do?

The answer, like all of "Jeopardy!," comes in the form of a question: Who - not what - dreamed up Watson? While computers can calculate and construct, they cannot decide to create. So far, only humans can.

"The way to think about this is: Can Watson decide to create Watson?" said Pradeep Khosla, dean of engineering at Carnegie Mellon University in Pittsburgh. "We are far from there. Our ability to create is what allows us to discover and create new knowledge and technology."

Experts in the field say it is more than the spark of creation that separates man from his mechanical spawn. It is the pride creators can take, the empathy we can all have with the winners and losers, and that magical mix of adrenaline, fear and ability that kicks in when our backs are against the wall and we are in survival mode.

What humans have that Watson, IBM's earlier chess champion Deep Blue, and all their electronic predecessors and software successors do not have and will not get is the sort of thing that makes song, romance, smiles, sadness and all that jazz. It's something the experts in computers, robotics and artificial intelligence know very well because they can't figure out how it works in people, much less duplicate it. It's that indescribable essence of humanity.

Nevertheless, Watson, which took 25 IBM scientists four years to create, is more than just a trivia whiz, some experts say.

Richard Doherty, a computer industry expert and research director at the Envisioneering Group in Seaford, N.Y., said he has been studying artificial intelligence for decades. He thinks IBM's advances with Watson are changing the way people think about artificial intelligence and how a computer can be programmed to give conversational answers - not merely lists of sometimes not-germane entries.

"This is the most significant breakthrough of this century," he said. "I know the phones are ringing off the hook with interest in Watson systems. The Internet may trump Watson, but for this century, it's the most significant advance in computing."

And yet Watson's creators say this breakthrough gives them an extra appreciation for the magnificent machines we call people.

"I see human intelligence consuming machine intelligence, not the other way around," David Ferrucci, IBM's lead researcher on Watson, said in an interview Wednesday. "Humans are a different sort of intelligence. Our intelligence is so interconnected. The brain is so incredibly interconnected with itself, so interconnected with all the cells in our body, and has co-evolved with language and society and everything around it."

"Humans are learning machines that live and experience the world and take in an enormous amount of information - what they see, what they taste, what they feel, and they're taking that in from the day they're born until the day they die," he said. "And they're learning from all the input all the time. We've never even created something that attempts to do that."

The ability of a machine to learn is the essence of the field of machine learning. And there have been great advances in the field, but nothing near human thinking.

"I've been in this field for 25 years and no matter what advances we make, it's not like we feel we're getting to the finish line," said Carnegie Mellon University professor Eric Nyberg, who has worked on Watson with its IBM creators since 2007. "There's always more you can do to bring computers to human intelligence. I'm not sure we'll ever really get there."

Bart Massey, a professor of computer science at Portland State University, quipped: "If you want to build something that thinks like a human, we have a great way to do that. It only takes like nine months and it's really fun."

Working on computer evolution "really makes you appreciate the fact that humans are such unique things and they think such unique ways," Massey said.

Nyberg said it is silly to think that Watson will lead to an end or a lessening of humanity. "Watson does just one task: answer questions," he said. And it gets things wrong, such as saying grasshoppers eat kosher, which Nyberg said is why humans won't turn over launch codes to it or its computer cousins.

Take Tuesday's Final Jeopardy, which Watson flubbed and its human competitors handled with ease. The category was U.S. cities, and the clue was: "Its largest airport is named for a World War II hero; its second largest, for a World War II battle."

The correct response was Chicago, but Watson weirdly wrote, "What is Toronto?????"

A human would have considered Toronto and discarded it because it is a Canadian city, not a U.S. one, but that's not the type of comparative knowledge Watson has, Nyberg said.

"A human working with Watson can get a better answer," said James Hendler, a professor of computer and cognitive science at Rensselaer Polytechnic Institute. "Using what humans are good at and what Watson is good at, together we can build systems that solve problems that neither of us can solve alone."

That's why Paul Saffo, a longtime Silicon Valley forecaster, and others see better search engines as the ultimate benefit from the "Jeopardy!"-playing machine.

"We are headed toward a world where you are going to have a conversation with a machine," Saffo said. "Within five to 10 years, we'll look back and roll our eyes at the idea that search queries were a string of answers and not conversations."

The beneficiaries, IBM's Ferrucci said, could include technical support centers, hospitals, hedge funds or other businesses that need to make lots of decisions that rely on lots of data.

For example, a medical center might use the software to better diagnose disease. Since a patient's symptoms can generate many possibilities, the advantage of a Watson-type program would be its ability to scan the medical literature faster than a human could and suggest the most likely result. A human, of course, would then have to investigate the computer's finding and make the final diagnosis.

IBM isn't saying how much money it spent building Watson. But Doherty said the company told analysts at a recent meeting that the figure was around $30 million. Doherty believes the number is probably higher, in the "high dozens of millions."

In a few years, Carnegie Mellon University robotics whiz Red Whittaker will be launching a robot to the moon as part of a Google challenge. When it lands, the robot will make all sorts of crucial real-time decisions - like Neil Armstrong and Buzz Aldrin did 42 years ago - but what humans can do that machines can't will already have been done: create the whole darn thing.


Source

Friday, February 18, 2011

Robotic hand nearly identical to a human one (w/ Video)

(PhysOrg.com) -- When it comes to finding the single best tool for building, digging, grasping, drawing, writing, and many other tasks, nothing beats the human hand. Human hands have evolved over millions of years into four fingers and a thumb that can precisely manipulate a wide variety of objects. In a recent study, researchers have attempted to recreate the human hand by building a biomimetic robotic hand that they have optimized to achieve near-human appearance and performance.

The researchers, Nicholas Thayer and Shashank Priya from Virginia Tech in Blacksburg, Virginia, have published their study on the hand in a recent issue of Smart Materials and Structures.

The researchers call the hand a dexterous anthropomorphic robotic typing hand, or DART hand, as the main objective was to demonstrate that the hand could type on a keyboard. They showed that a single DART hand could type at a rate of 20 words per minute, compared with the average human typing speed of 33 words per minute with two hands. The researchers predict that two DART hands could type at least 30 words per minute. Ultimately, the DART hand could be integrated into a humanoid robot for assisting the elderly or disabled, performing tasks such as typing, reaching objects, and opening doors.

To design the DART hand, the researchers began by investigating the physiology of the human hand, including its musculoskeletal structure, range of motion, and grasp force. The human hand has about 40 muscles that provide 23 degrees of freedom in the hand and wrist. To replicate these muscles, the researchers used servo motors and wires extending throughout the robotic hand, wrist, and forearm. The robotic hand encompassed a total of 19 motors and achieved 19 degrees of freedom.


The DART hand types “holly jolly.” Video credit: Nicholas Thayer and Shashank Priya.

“[The greatest significance of our work is the] optimization of the hand design to reduce the number of motors in order to achieve a similar degree of freedom and range of motion as the human hand,” Priya told PhysOrg.com. “This also allowed us to achieve dimensions that are on par with the human hand. We were also able to program the hand in such a manner that a high typing efficiency can be obtained.”

One small difference between the DART hand and the human hand is that each finger in the robotic hand is controlled independently. In the human hand, muscles are sometimes connected at the tendons so they can move joints in more than one finger (which is particularly noticeable with the ring and pinky fingers).

The robotic hand can be controlled by input text, which comes from either a keyboard or a voice recognition program. When typing, a finger receives a command to position itself above the correct letter on the keyboard. The finger presses the key with a specific force, and the letter is checked for accuracy; if there is a typo, the hand presses the delete key. By moving the forearm and wrist, a single DART hand can type any key on the main part of a keyboard.
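The type-check-delete loop described above can be sketched in a few lines. The press_key simulator and its miss behavior are hypothetical stand-ins; the real hand positions a finger over the key with servo motors, but the verify-and-correct logic follows the article's description.

```python
# Sketch of the DART hand's typing loop: press each key, verify the letter
# that appeared, and if it is wrong, delete it and press again.
def type_text(target, press_key):
    """Type each character; on a detected typo, delete and retry once."""
    typed = []
    for ch in target:
        result = press_key(ch)           # finger moves over key and presses
        typed.append(result)
        if result != ch:                 # accuracy check caught a typo
            typed.pop()                  # press the delete key...
            typed.append(press_key(ch))  # ...and retry the intended key
    return "".join(typed)

# Simulated keypress that misses once, to exercise the correction path.
misses = {"j": 1}
def press_key(ch):
    if misses.get(ch, 0):
        misses[ch] -= 1
        return "k"  # adjacent-key slip
    return ch

print(type_text("holly jolly", press_key))  # holly jolly
```

The retry here is single-shot for simplicity; a real controller would keep correcting until the checked output matches.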

The DART hand isn’t the first robotic hand to be designed. During the past several years, robotic hands with varying numbers of fingers have been developed for a variety of purposes, from prosthetics to manufacturing. But as far as the researchers know, no robotic hand can accurately type at a keyboard at human speed. When the researchers compared the functional potential of the DART hand to other robotic hands, the DART hand had an overall functional advantage. In addition, the researchers used rapid prototyping to fabricate all the components, significantly reducing the cost, weight, and fabrication time.

In the future, the researchers plan to make further improvements to the robotic hand, including covering the mechanical hand in a silicone skin, as well as adding temperature sensors, tactile sensors, and tension sensors for improved force-feedback control. These improvements should give the robotic hand the ability to perform more diverse tasks.

“We have already experimented with grasping tasks,” Priya said. “In the current form it is not optimized for grasping, but in our next version there will be enough sensors to provide feedback for controlling the grasping action.”


Source

Thursday, February 17, 2011

Computer creams human 'Jeopardy!' champs

An IBM computer creamed two human champions on the popular US television game show "Jeopardy!" Wednesday in a triumph of artificial intelligence.

"I for one welcome our new computer overlords," contestant Ken Jennings -- who holds the "Jeopardy!" record of 74 straight wins -- cheekily wrote on his answer screen at the conclusion of the much-hyped three-day showdown.

"Watson" -- named after Thomas Watson, the founder of the US technology giant -- made some funny flubs in the game, but prevailed by beating his human opponents to the buzzer again and again.

The final tally from the two games: Watson at $77,147, Jennings at $24,000 and $21,600 for reigning champion Brad Rutter, who previously won a record $3.25 million on the quiz show.

"Watson is fast, knows a lot of stuff and can really dominate a match," host Alex Trebek said at the opening of Wednesday's match.

Watson, which was not connected to the Internet, played the game by crunching through multiple algorithms at dizzying speed and attaching a percentage score to what it believed was the correct response.
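That scoring scheme can be illustrated with a toy version: several evidence scorers rate each candidate response, a weighted combination yields a percentage-style confidence, and the machine "buzzes in" only above a threshold. The scorers, weights, and threshold here are invented for illustration; IBM's actual DeepQA pipeline combines hundreds of evidence scorers.

```python
# Toy candidate-scoring sketch: combine per-scorer marks into one confidence
# and answer only when confidence clears a buzz-in threshold.
def combine(scores, weights):
    """Weighted average of evidence scores, normalized to 0..1."""
    total = sum(w * s for w, s in zip(weights, scores))
    return total / sum(weights)

# Hypothetical evidence scores for two candidate responses.
candidates = {
    "Chicago": [0.9, 0.7, 0.8],
    "Toronto": [0.6, 0.5, 0.4],
}
weights = [2.0, 1.0, 1.0]  # invented scorer weights

ranked = sorted(candidates.items(),
                key=lambda kv: combine(kv[1], weights), reverse=True)
best, scores = ranked[0]
confidence = combine(scores, weights)
if confidence > 0.75:  # buzz-in threshold (invented)
    print(f"What is {best}?  ({confidence:.0%} confident)")
```

The threshold is what kept the real Watson silent on clues it scored poorly, and a low combined confidence is also how its infamous "Toronto" guess was flagged with all those question marks.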

"Jeopardy!", which first aired on US television in 1964, tests a player's knowledge in a range of categories, from geography to politics to history to sports and entertainment.

In a twist on traditional game play, contestants are provided with clues and need to supply the questions.

The complex language of the brain-teasers meant Watson didn't merely need to have access to a vast database of information, it also had to understand what the clue meant.

One impressive display came when Watson answered "What is United Airlines" to the clue "Nearly 10 million YouTubers saw Dave Carroll's clip called this 'friendly skies' airline 'breaks guitars.'"

But a Final Jeopardy flub on Tuesday's show prompted one IBM engineer to wear a Toronto Blue Jays jacket to the final day of taping and Trebek to joke that he had learned the Canadian metropolis was a US city.

Watson had answered "What is Toronto????" to the question "Its largest airport is named for a WWII hero. Its second largest, for a WWII battle" under the category "US Cities."

Jennings and Rutter both gave Chicago as the correct answer.

Watson's success was a remarkable achievement and a historic moment for artificial intelligence, said Oren Etzioni, a computer science professor at the University of Washington.

"Jeopardy! is a particularly difficult form of natural language because it's so open-ended and it's so full of puns and quirky questions," he told AFP.

But while Watson was impressive, he's still light years away from the kind of interactive, thinking computers imagined by science fiction, like the murderous Hal in the film "2001: A Space Odyssey."

"The day where robots will keep us as pets is still very far away," Etzioni said.

That's because Watson can't really think for itself or even fully understand the questions, and instead "employs a lot of tricks and special cases to do what it's doing," he said.

The next step is to see how this technology can be used in applications with real economic and social impacts.

Watson, which has been under development at IBM Research labs in New York since 2006, is the latest machine developed by IBM to challenge mankind.

In 1997, an IBM computer named"Deep Blue"defeated world chess champion Garry Kasparov in a closely-watched, six-game match.

Like Deep Blue, Watson "represents a major leap in the capacity of information technology systems to identify patterns, gain critical insight and enhance decision making," IBM chairman Sam Palmisano said in a promotional video.

"We expect the science underlying Watson to elevate computer intelligence, take human-to-computer communication to new levels and to help extend the power of advanced analytics to make sense of vast quantities of structured and unstructured data."

IBM already has plans to apply the technology to help doctors track patients and stay up to date on rapidly evolving medical research.


Source

Wednesday, February 16, 2011

Industry researchers predict future of electronic devices

The just-released February issue of the <i>Journal of the Society for Information Display</i> contains the first-ever critical review of current and future prospects for electronic paper functions.

These technologies will bring us devices like:

  • full-color, high-speed, low-power e-readers;
  • iPads that can be viewed in bright sunlight, or
  • e-readers and iPads so flexible that they can be rolled up and put in a pocket.
The University of Cincinnati's Jason Heikenfeld, associate professor of electrical and computer engineering and an internationally recognized researcher in the field of electrofluidics, is the lead author on the paper titled "A Critical Review of the Present and Future Prospects for Electronic Paper." Others contributing to the article are industry researcher Paul Drzaic of Drzaic Consulting Services; research scientist Jong-Souk (John) Yeo of Hewlett-Packard's Imaging and Printing Group; and research scientist Tim Koch, who currently manages Hewlett-Packard's effort to develop flexible electronics.

TOP TEN LIST OF COMING e-DEVICES

Based on this latest article and his ongoing research and development related to e-devices, UC's Heikenfeld provides the following top ten list of e-devices that consumers can expect both near term and in the next ten to 20 years.

Heikenfeld is part of an internationally prestigious UC team that specializes in research and development of e-devices.


Within ten to 20 years, we will see e-Devices with magazine-quality color, viewable in bright sunlight but requiring low power. “Think of this as the green iPad or e-Reader, combining high function and high color with low power requirements,” said Heikenfeld. Credit: Noel Leon Gauthier, U. of Cincinnati

Coming later this year:
  • Color e-readers will be out in the consumer market by mid-2011. However, cautions Heikenfeld, the color will be muted compared with what consumers are accustomed to, say, on an iPad. Researchers will continue to work toward next-generation (brighter) color in e-readers as well as high-speed functionality that will eventually allow for point-and-click web browsing and video on devices like the Kindle.
Already in use, but expanded adoption and breakthroughs imminent:
  • Electronic shelf labels in grocery stores. Currently, it takes an employee the whole day to label the shelves in a grocery store. Imagine the cost savings if all such labels could be updated within seconds – allowing for, say, specials for one type of consumer who shops at 10 a.m. and updated specials for other shoppers stopping in at 5:30 p.m. Such electronic shelf labels are already in use in Europe and on the West Coast and in limited, experimental use in other locales. The breakthrough for use of such electronic labels came when they could be implemented as low-power devices. Explained Heikenfeld, "The electronic labels basically only consume significant power when they are changed. When it's a set, static message and price, the e-shelf label is consuming such minimal power – thanks to reflective display technology – that it's highly economical and effective." The current e-shelf labels are monochrome, and researchers will keep busy to create high-color labels with low-power needs.
  • The new "no knobs" Etch A Sketch. This development allows children to draw with electronic ink and erase the whole screen with the push of a button. It was created based on technology developed in Ohio (Kent State University). Stated Heikenfeld, "Ohio institutions, namely the University of Cincinnati and Kent State, are international leaders in display and liquid optics technology."
  • Technology in hot-selling Glow Boards will soon come to signage. Crayola's Glow Board is partially based on UC technology developments, which Crayola then licensed. While the toy allows children to write on a surface that lights up, the technology has many applications, and consumers can expect to see those imminently. These include indoor and outdoor sign displays that, when turned off, seem to be clear windows. (Current LCD – liquid crystal display – sign technology requires extremely high power usage and, when turned off, provides nothing more than a non-transparent black background.)
Coming within two years:
  • An e-device that will consume little power while providing high function and color (video playing and web browsing), along with good visibility in sunlight. Cautions Heikenfeld, "The color on this first-generation low-power, high-function e-device won't be as bright as what you get today from LCD (liquid crystal display) devices (like the iPad) that consume a lot of power. The color on the new low-power, high-function e-device will be about one third as bright as the color you commonly see on printed materials. Researchers, like those of us at UC, will continue to work to produce the Holy Grail of an e-device: bright color, high function (video and web browsing) with low power usage."
Coming within three to five years:
  • Color-adaptable e-device casings. The color and/or designed pattern of the plastic casing that encloses your cell phone will be adaptable. In other words, you'll be able to change the color of the phone itself to a professional black-and-white for work or to a bright and vivid color pattern for a social outing. "This is highly achievable," said Heikenfeld, adding, "It will be able to change color either automatically by reading the color of your outfit that day or by means of a downloaded app. It's possible because of low-power, reflective technology" (wherein the displayed pattern or color change is powered by available ambient light vs. powered by an electrical charge).

    Expect the same feature to become available in devices like appliances. "Yes," said Heikenfeld, "we'll see a color-changing app, so that you can have significant portions of your appliances be one color one day and a different color or pattern the next."

  • Bright-color but low-power digital billboards visible both night and day. Currently, the digital billboards commonly seen are based on LEDs (light-emitting diodes), which consume high levels of electric power and still lose color in direct sunlight. Heikenfeld explained, "We have the technology that would allow these digital billboards to operate by simply reflecting ambient light, just like conventional printed billboards do. That means low power usage and good visibility for the displays even in bright sunlight. However, the color doesn't really sizzle yet, and many advertisers using billboards will not tolerate a washed-out color."
  • Foldable or roll-it-up e-devices. Expect the first-generation foldable e-devices to be monochrome; color will come later, using licensed UC-developed technology. The first foldable e-devices will come from Polymer Vision in the Netherlands. The challenge, according to Heikenfeld, in creating foldable e-devices has been the device screen, which is currently made of rigid glass. But what if the screen were a paper-thin plastic that rolled like a window shade? You'd have a device like an iPad that could be folded or rolled up tens of thousands of times. Just roll it up and stick it in your pocket. See a video.


    University of Cincinnati researcher Jason Heikenfeld is part of an internationally prestigious team that specializes in research and development of e-devices. Based on his work, he provides a top ten list of electronic paper devices that consumers can expect in both the near term and in the next ten to 20 years. Credit: Two original images by Noel Leon Gauthier, University of Cincinnati.

    In ten to 20 years, consumers will see e-devices with magazine-quality color, viewable in bright sunlight but requiring low power.
Within ten to 20 years:
  • e-Devices with magazine-quality color, viewable in bright sunlight but requiring low power. "Think of this as the green iPad or e-Reader, combining high function and high color with low power requirements," said Heikenfeld.
  • The e-Sheet, a virtually indestructible e-device that will be as thin and as rollable as a rubber place mat. It will be full color and interactive, while requiring little power to operate, since it will charge via sunlight and ambient room light. It will be so "tough," and will use only wireless connection ports, that you can leave it out overnight in the rain. In fact, you'll be able to wash it or drop it without damaging the thin, highly flexible casing.


Source

Tuesday, February 15, 2011

Seeing a future without 3-D glasses

This month, technology lovers from around the world descended on Las Vegas for the 2011 Consumer Electronics Show, an annual gathering of geekdom that features the latest in personal gadgetry.

Among the stars of the show were 3-D televisions - not only the sets that are populating your local electronics store, but newer models that answer the cry of consumers everywhere who don't want to wear those dorky 3-D glasses.

More than 2,200 miles away, Ken Conley sat at his desk in a small office park in Indian Trail, 15 minutes south of Charlotte. Unlike last year, he decided not to make the flight to Vegas for the show. "Now," he said, "I wish I had."

On display in Vegas were glasses-free 3-D TVs from companies that included Sony and Toshiba. They're the next potential big thing in visual displays - made possible, in part, in the warehouse over Conley's shoulder.

For the past quarter-century, the North Carolina State graduate has been a pioneer in the production and use of lenticular sheets, a plastic that is placed over images to give them a three-dimensional effect. Until recently, the technology has been used for still images, like the portrait behind Conley's desk of a firefighter emerging from a burning building or the 3-D poster promoting the movie blockbuster "Avatar."

But now, Conley's product and ideas are being used for moving images, including those on laptops and portable Blu-ray players that also had bloggers buzzing in Vegas.

"It's even more intense this year," said Conley, "which makes us very happy."

Engineers from 3-D manufacturers send their units to Conley for customized lenticular sheets, or they fly to Charlotte and drive down to Indian Trail, where they are greeted at the front desk of a nondescript office by Conley's wife of 50 years, Mary Ellen.

The Conleys founded Micro Lens 14 years ago in the basement of their Matthews home. And if a basement seems a cliched kind of place for a technological advance to be born, well, it's about the only thing typical about Ken Conley.

He is 77 years old, a Shelby native. He doesn't look much like a techie - more, perhaps, like a techie's dad. And at an age he says might put him"down at Myrtle Beach,"he instead finds himself at the edge of new technology.

And he's there for a very old reason: He's never quite satisfied with his work.

---

"You want a tour of the place?" Conley asks.

It's a small operation - nine employees, including their daughter, who brings the Conleys' grandkids on weekdays to play in an office/toy room.

The tour begins in a conference room lined with striking 3-D images. A NASA astronaut leaps off a wall portrait. A crisp black-and-white head shot of Queen Elizabeth changes as you walk past.

Micro Lens is the world's leading producer of the lenticular sheets that cover images like these, which means that if you see a 3-D poster at a bus stop in, say, Dallas - odds are the plastic that created the 3-D effect came from North Carolina.

So how did Conley become an industry leader? A brief history of 3-D:

Autostereoscopic displays - 3-D imagery without glasses - originated in the 17th century, when French painter G.A. Bois-Clair composed paintings that broke two images down into stripes and placed them behind a grid of vertical bars. The resulting effect was that if you walked past the paintings, one image would turn into another.

More than a century later, in the mid-1800s, English and Scottish inventors developed the stereoscope, a device that used lenses or mirrors to combine two photos of the same object into one 3-D image. By the early 20th century, film pioneers were doing the same with moving pictures.

The 3-D technology enjoyed its first heyday in the 1950s, with several movies employing the effect, and it has seen a recent resurgence that began in the 1990s with several documentaries using IMAX 3-D technology. The effect also was popular in advertising, thanks in part to lenticular technology, which began to boom in the 1940s and was used for products that included baseball cards and, of course, Cracker Jack prizes.

In the 1980s, a small Matthews company called Rexham supplied lenticular materials for manufacturers that had introduced multi-lens cameras. Ken Conley was an engineer on the project.

In the mid-1990s, he started his own company, Micro Lens, which sold lenticular products and the means to produce them to companies that wanted to make 3-D images. His timing was perfect. A new wave of technological advances was bringing computers and high-quality ink jet printers to small businesses, allowing Conley to offer lenticular sheets for 3-D products in smaller quantities. The use of 3-D in advertising again boomed.

In the late 1990s, a friend sent Conley a computer program that allowed users to place one picture on top of another and print it - perfect for 3-D photos. To make that process simpler, Conley created a lenticular sheet to go on a computer monitor, so artists could see their images in 3-D.

It was 2002, and a new thought came to him: If his lenticular sheet could make a monitor show a still image in 3-D, why couldn't he do the same for moving images?

"That's pretty much how I got started in 3-D TV," he says.

How does 3-D work?

The simple answer: In real life, your brain takes the separate images your left and right eyes see and gives them depth. 3-D technologies essentially do the same. Says Conley: "Your brain is being tricked."

For 3-D movies, that trickery is performed by 3-D glasses, which separate the left and right images for your brain. For most of the 3-D TVs now hitting the market, battery-powered spectacles called "active shutter glasses" switch the image from one eye to the other many times per second, fast enough to create one smooth, merged image.
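
The trick Conley describes has simple geometry behind it: an object's apparent shift (disparity) between the left-eye and right-eye views encodes its depth. A minimal sketch of that relation, using a simplified pinhole-camera model with illustrative numbers (the function name and values are ours, not from any product mentioned here):

```python
def depth_from_disparity(focal_px, baseline_m, disparity_px):
    """Stereo triangulation: nearer objects shift more between the
    left and right views, so depth is inversely proportional to
    the disparity between them."""
    if disparity_px <= 0:
        raise ValueError("zero or negative disparity: no depth cue")
    return focal_px * baseline_m / disparity_px

# With a 1000-pixel focal length and 6.5 cm between viewpoints,
# a 20-pixel shift corresponds to a depth of 3.25 m.
print(depth_from_disparity(1000, 0.065, 20))  # 3.25
```

Halving the disparity doubles the perceived depth, which is why small errors in the displayed offset flatten or exaggerate the 3-D effect.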

But those same glasses may be holding 3-D TVs back from mass-market acceptance. Yes, the Dork Factor. People don't want to wear glasses - or look across the room and see their spouse wearing glasses - while they watch TV every night.

"I've never worked with glasses," Conley says. "They're a faddish thing."

TV manufacturers have apparently shared his concerns. About four years ago, even before today's 3-D TVs with glasses hit the U.S. market, companies were exploring how they could do 3-D without glasses. One of the people they approached was Ken Conley.

Now, his offices are strewn with monitors from those companies, and Conley is charged with making lenticular sheets that match the pitch and pixel arrangements of each TV.

"It's fun," he says, "and it's frustrating."

The frustrating part: There are still obstacles to overcome before they're ready for market. The biggest such challenge is the "sweet spot." Viewers of 3-D TV without glasses have to sit in designated places to get the full effect of the technology. Move from those spots and the 3-D effect diminishes.

"It's still not quite easy to watch," said Andrew Eisner of the electronics website Retrevo.com, who saw the displays at the Consumer Electronics Show this year. "It seems as if it won't be practical for another four or five years."

Another possibility: A different technology will emerge that makes 3-D without glasses work. Already, Conley's lenticular sheets have competition: parallax barrier technology, which uses a fine grating of liquid crystals on TV screens to help create a 3-D effect.

Conley is optimistic. He keeps an eye on where research and development dollars are spent, and he sees a continued emphasis on 3-D without glasses. Companies also are pursuing 3-D on portable DVD players and gaming units, where the 3-D sweet spot isn't as much of an issue, and the technology has a logical future with the flourishing tablet market, as well.

All of which would be good for business at Micro Lens. "It would be way bigger than it is now," Conley says, finishing his tour in a warehouse full of lenticular screens and images.

There are Disney posters and several large prints of dinosaurs popping menacingly off their 3-D display. It's a visual treat that begs for a second or third look, and Conley waits patiently, smiling. He is humble about his place in the 3-D world - and his role in 3-D TV. "I'm not the father of anything," he says. "I just created something that people can use."

Is he still wowed by it? Not really. He looks at the images he helps create, and he wonders how he can give them more pop, more crispness. New technology. Old school.

"I think, 'I can make this better,'" he says. "I just ain't satisfied with it."

---

3-D TV: FUTURE OR FAD?

Sales of 3-D televisions disappointed major manufacturers in 2010. About 3.2 million 3-D TVs were sold worldwide, according to market researcher DisplaySearch. Samsung alone had expected to sell 3 million to 4 million by year's end.

What's the problem? A survey by Deloitte, an international consulting firm, found that consumers aren't convinced the technology is valuable enough to prompt a new shopping trip. In a 2010 survey, 83 percent said that 3-D wasn't enough to make them want to buy a new TV.

Another issue, of course, is the glasses. The firm said 30 percent of viewers reported that they didn't like wearing 3-D glasses.

3-D TV also suffers from the same obstacles that confronted HDTV early on. The TVs are expensive, with most in the $2,000 to $3,000 range this holiday season, not including $100-plus for the active shutter glasses necessary to get the 3-D effect. Also, there's still not enough content in 3-D to justify the purchase.

As with HDTV, industry observers expect prices to come down. (Already, some manufacturers are shifting to less expensive passive glasses, similar to those used in 3-D movies.) Also, content providers are starting to offer more 3-D programming - most notably ESPN, which launched ESPN 3D last year, a network that will show live and archived 3-D events 24/7 in 2011.


Source

Monday, February 14, 2011

Transparent 'DNA' adhesives help police nab thieves


(PhysOrg.com) -- Two British companies have worked out ways of helping dealers, such as scrap merchants and pawnbrokers, identify whether objects brought to them have been stolen, and from whom, so they can then inform the police. The methods can also be used to mark valuable personal property to deter thieves.

The world shortage of some metals has led to a growing incidence of theft of objects such as electrical cabling, telephone lines, manhole covers, traffic lights, and industrial piping. In the West Midlands in the UK, for example, around 1,500 trains were delayed or cancelled in an 18-month period because of thefts of signaling cables from the railways. British Transport Police spokesman Paul Crowther describes the theft of metal as the second greatest threat to infrastructure in Britain, after terrorism.

Two UK firms, Selectamark Security Systems Ltd and SmartWater Technology Ltd, have developed different invisible marking systems to tackle the problem and help individuals and companies of all sizes protect their property from theft.

Selectamark Security Systems Ltd has developed a transparent adhesive, SelectaDNA, which can be painted onto objects that are a potential target for thieves, and which is virtually impossible to remove. The adhesive includes tiny microdots embedded in a nickel alloy or in polyester. The adhesive is invisible in normal light but glows under ultraviolet light, and the codes and company phone number imprinted on the microdots can be read under a microscope.

For further security, the substrate includes short stretches of synthetic DNA, which are unique to the particular batch of adhesive. The DNA codes are stored, along with customer details, on the Selectamark database, so even a tiny sample of the adhesive can be used to identify the owner of the object. The adhesive is supplied with warning signs to deter theft.

SmartWater Technology’s system is a similar transparent adhesive, with celluloid microdots imprinted with a code identifying the owner of the metal, and SmartWater Technology’s phone number. The microdots can be read under a microscope, and SmartWater can then determine if the goods have been stolen or are being sold legitimately.

The adhesive is almost impossible to clean off but could be burned off, so SmartWater has added a unique mix of dozens of compounds of rare earth metals that can survive fire and attempts at removal. If police or a dealer suspect an object has been stolen, it can be examined at the SmartWater laboratory, which can identify the owner of the object.

Both SmartWater and Selectamark also sell spray-can kits that can be installed near valuables or over doors. Triggered either by motion sensors or by a button pressed by a sales assistant, they spray a mist onto the thieves. The spray gets into pores and creases in the skin, is impossible to remove for days, and allows the police to identify the person as the thief.

The two companies allude to the use of DNA, and they hope this in itself helps to deter thieves, who are familiar through films and TV with DNA being used to catch criminals. Jason Brown, head of sales with Selectamark, calls this the "DNA fear factor," pointing out that just posting DNA warnings causes crime rates to drop.


Source

Sunday, February 13, 2011

Hands on high-tech moviemaking (w/ Video)


"Lights, camera, action!" is more than the quintessential phrase that describes the moment filming begins on a movie set -- it also embodies the heart and soul of moviemaking.

The science and engineering used in moviemaking usually stay behind the scenes, but during this year's Academy of Motion Picture Arts and Sciences Scientific and Technical Awards, hosted by Marisa Tomei on Saturday, Feb. 12, the 23 nominees winning 11 awards were the stars of the show.


This is a demonstration of the NAC hydraulic servo winch system. The "flying taxi" uses four motion-controlled winches to maneuver high above the cinematic action.

Lights -- Bounce Light For Global Illumination

Astute observers sitting down to a marathon of the Shrek films might notice differences between the lighting of Shrek, the Ogre swamp, or even Shrek's dining room table between the first and second films.

"In the past, we would place virtual light sources all over the scene, but light would only come from the source," said Eric Tabellion, a computer scientist who is part of the research and development staff at PDI/DreamWorks. "In real life, light bounces off of surfaces and illuminates objects indirectly."

As a result, Tabellion and his colleague, Arnauld Lamorlette, created a methodology to produce realistic "bounce lighting" and improve global illumination -- the techniques that light up everything in an animated scene. It has become an industry standard.

"If you tried to make an animated film without global illumination, it would look bad," said Tabellion. "Lighting has been my passion for a long time and 'Shrek 2' was the first film that we used the bounce lighting methodology in an entire film."
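
The difference Tabellion describes between direct and bounced light can be sketched with a single radiosity-style term. In this toy model (our own illustration, not PDI/DreamWorks code), a surface contributes indirect light equal to what it receives, scaled by its reflectivity and by how much of it the receiving point "sees":

```python
def one_bounce(direct_at_wall, wall_albedo, form_factor):
    """Indirect light reaching a point: the wall reflects a fraction
    (albedo) of the light it receives, and the receiving point sees
    a fraction (form factor) of the wall."""
    return direct_at_wall * wall_albedo * form_factor

# A bright wall (albedo 0.8) lit at intensity 100 adds
# 100 * 0.8 * 0.25 = 20 units of bounce light to a nearby point
# that direct lighting alone would leave completely dark.
print(one_bounce(100, 0.8, 0.25))  # 20.0
```

Summing terms like this over every surface, for every point, is what makes global illumination expensive - and what makes scenes look real.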

Camera -- Cablecam 3D

"Beauty shots" at the beginning of a movie really help set the scene. In these images, you may see, for instance, wide-sweeping views that show a pink and orange sunset behind a sandy beach or every metallic inch of a futuristic space ship. These shots are often difficult to film.

"We put cameras where you normally can't," said Nic Salomon, president of Cablecam Inc. "The Cablecam 3D allows you to get great shots where others can't get them."

The Cablecam 3D technology consists of a camera suspended over a set using a rope-and-pulley system, while customized winches allow the camera to move in three dimensions. This allows for a bird's-eye view while moving freely within the scene -- perhaps most recognizable from its use in sports telecasts, giving an overhead view of the action.

"We do a lot of beauty shots," said Salomon. "Most notably the train scene in 'Wanted' (where the train is going over a bridge between two mountain passes and peels off the track); people ask me all the time how we filmed across the train tracks."

Action -- NAC Servo Winch System For Special Effects

Even in the movies it is tricky to move large, heavy objects, but when special effects experts can, it's a scene you'll never forget.

"When we were working on 'Spider-Man 3,' John Frazier, the special effects supervisor, wanted to make a taxi cab fly," said Mark Noel, president of NAC Effects and Prop Animation. "We had created a complicated system for 'Spider-Man 2' and decided to start over and make it simpler -- we added brakes, digital electronics and a Waldo innovation."

A Waldo looks like a wireless marionette controller: the puppet is strung up to wires, but you are across the room making it dance. Now imagine instead an entire taxi cab connected to two bars -- one across the two front wheels and one across the back two -- and wires strung from the bars.

"Simplicity and safety are really important because it is fairly stressful -- you are flying actors 20-30 feet in the air," said Noel. "This work is really hands-on and personal. I hate when they tell me that they'll do [additional effects like] the shaking in post-production, because I wanted to do the shaking!"

Behind The Scenes -- Helping Animation & Special Effects Artists Work Efficiently

While working on an animated film, artists have to generate everything from the weather in the scene to the characters themselves. Each part often depends on another, and with an entire network of computers trying to generate characters and scenery as fast as possible, a job that starts before its inputs are ready can cause frustrating delays.

"Alfred is a scriptable system for distributing computational tasks around a network of computers," said David Laur, an engineer who is now the senior software engineer for the RenderMan Group at Pixar. "It was intended to present a useful and practical interface to technical artists in the film industry, giving them control over their jobs and providing feedback about the job and system status."

The system organizes jobs by the computers' availability and by the order of operations for jobs that depend on one another. Many movies, including "Finding Nemo" and "The Incredibles," were made using this system.
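
At its core, the ordering problem such a system solves is topological sorting: run every job only after the jobs it depends on. A sketch of that idea (the job names are invented; this is not Alfred's actual interface):

```python
from graphlib import TopologicalSorter

# Hypothetical render pipeline: each job maps to the jobs it
# depends on. Nothing composites until every frame is rendered.
jobs = {
    "rig_characters": set(),
    "model_scenery": set(),
    "animate": {"rig_characters"},
    "render_frames": {"animate", "model_scenery"},
    "composite": {"render_frames"},
}

# static_order() yields jobs so dependencies always come first;
# independent jobs (rigging, scenery) can be farmed out in parallel.
order = list(TopologicalSorter(jobs).static_order())
print(order)
```

A real dispatcher like Alfred also tracks machine availability and reports status back to the artists; the ordering above is only the dependency half of the job.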

"The behind-the-scenes aspects of these films are hands-on and personal," said Laur. "I am in awe and appreciate the talent and creativity of the artists and technical directors at these studios."


Source

Saturday, February 12, 2011

Someday 'talking cars' may save lives

Could "talking cars" save lives? Auto companies are developing safety systems using advanced WiFi signals and GPS systems that could allow vehicles to communicate with each other on the road. The cars could then send messages to warn their drivers about potential crashes.

Ford Motor Co. is demonstrating the technology for policy makers and journalists in advance of the Washington Auto Show in the nation's capital. The technology sends out multiple messages per second about the vehicle's location, speed, brakes and steering.

If a vehicle detects a potential hazard, it can warn the driver. The technology aims to prevent collisions involving a car changing lanes, approaching a stalled vehicle, or heading into an intersection in which another car ignores a red light or a stop sign.

"We really see a safety opportunity here," said Mike Shulman, technical leader for Ford Research and Advanced Engineering.

Auto companies have been working on the technology for nearly a decade. Several automakers are part of a consortium sharing information on the crash avoidance systems, including General Motors, Toyota, Daimler and others.

The systems, which warn drivers through beeping sounds and flashing red lights at the base of the dashboard, are still five to 10 years from being deployed into the nation's fleet. But Ford officials said the technology, if installed on enough vehicles, could reduce the toll of more than 30,000 people killed each year on the nation's highways.

The government has touted the intelligent vehicle systems. In October, federal regulators said vehicle-to-vehicle communication could potentially address about 4.3 million vehicle crashes, or about 4 in 5 crashes involving drivers who are not impaired by drugs or alcohol.

Some crash avoidance systems have used radar systems positioned in the front or back of the vehicle. Ford said the GPS/WiFi systems are less costly and can detect movements surrounding the vehicles, including conditions along winding roads where a driver's vision might be obstructed or in side crashes involving a car that barrels through a red light. The broad availability of GPS and WiFi, meanwhile, could help car companies eventually install the technology on vehicles already in the fleet, Ford said.
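
A hedged sketch of the kind of check such a system might run (the function, units and threshold here are our illustration, not Ford's implementation): each car broadcasts its position and speed several times a second, and a receiver warns when the gap to a slower vehicle ahead would close within a few seconds:

```python
def collision_warning(gap_m, my_speed_mps, lead_speed_mps,
                      warn_seconds=3.0):
    """Warn when the vehicle ahead would be reached within
    warn_seconds at the current closing speed."""
    closing_mps = my_speed_mps - lead_speed_mps
    if closing_mps <= 0:        # not gaining on the lead vehicle
        return False
    return gap_m / closing_mps < warn_seconds

# 40 m behind a stalled car while doing 20 m/s (~45 mph): 2 s to impact.
print(collision_warning(40, 20, 0))    # True -> beep and flash
print(collision_warning(200, 20, 15))  # 40 s of headway -> no warning
```

Because every equipped car broadcasts the same message, the same check covers lane changes, stalled vehicles and red-light runners that radar mounted front and rear might miss.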

To showcase the technology, auto companies plan to hold driving clinics next summer to let consumers experience the intelligent vehicles. Car companies and the government are developing standards and hoping to complete research by 2013 and plan for future deployment.

"This technology is an opportunity to help create a future where millions of vehicles communicate with each other by sharing anonymous real-time information about traffic speeds and conditions. This new world of wireless communication will make transportation safer,"said Peter Appel, administrator of the Transportation Department's Research and Innovative Technology Administration.


Source

Friday, February 11, 2011

IBM's 'Watson' to take on Jeopardy! champs

Jeopardy!, which first aired on US television in 1964, tests a player's knowledge of trivia in a range of categories


Nearly 15 years after an IBM machine defeated world chess champion Garry Kasparov, the US computer pioneer is rolling out another device to challenge mankind.

Watson, a supercomputer named for IBM founder Thomas Watson, is to take on two human champions of the long-running Jeopardy! television quiz show in two games over three days next week. The three"Jeopardy!"episodes featuring Watson will air Feb. 14 through Feb. 16.

Like Kasparov, who lost a six-game match to IBM's "Deep Blue" in 1997, Ken Jennings, who holds the Jeopardy! record of 74 straight wins, and Brad Rutter, winner of $3.25 million on the show, are expected to have their hands full.

In a practice match at IBM Research headquarters in upstate New York last month, Watson came out on top in terms of prize money, although the computer and the two human contestants correctly answered all of the 15 questions.


In a twist on traditional game play, contestants are provided with answers and need to supply the questions.

During the practice match, for example, one of the clues was: "The film Gigi gave him his signature song 'Thank Heaven for Little Girls.'"

Watson, represented by a large computer monitor, sounded the buzzer a split second ahead of Jennings and Rutter and answered correctly in its artificial voice: "Who is Maurice Chevalier?"

A dollar amount is attached to each question and the player with the most money at the end of the game is the winner. Players have money deducted for wrong answers.

The winner of the Man Vs Machine showdown will win $1 million

Watson is an IBM Power7 computer, a workload-optimized system that can answer questions posed in natural language over a nearly unlimited range of knowledge. Watson, which is not connected to the Internet, plays the game by crunching through multiple algorithms at dizzying speed and attaching a percentage score to what it believes is the correct response.


For the Maurice Chevalier question, for example, Watson was 98 percent certain that the name of the French crooner was the right answer.

Developing a computer that can compete with the best human Jeopardy! players involves challenges more complex than those faced by the scientists behind the chess-playing "Deep Blue."

"The thing about chess is that it's fairly straightforward to represent the game in a computer," said Eric Brown, a member of the IBM Research team that has been working on Watson since 2006.

"With chess, it's almost mathematical," Brown told AFP. "You can consider all the possibilities. It's almost a closed set of options."

Jeopardy!, on the other hand, involves the use of natural language, raising a whole host of problems for a computer.

"Questions are expressed in language and with an ability to be asked in an infinite number of ways," Brown said, including the use of irony, ambiguity, riddles and puns -- not a computer's strong suit.

"The initial approach that people might want to take is to just build a giant database," Brown said. "That approach is just not suitable."

Playing Jeopardy! is also not like searching the Web.

"While they're somewhat related, Google and Watson are solving two different problems," Brown said.

"With Web search, you express your information with a few keywords and then a search engine will bring back 10 or half-a-million Web pages that match what you're looking for.

"But if you're looking for precise information (like with Jeopardy!), you'll have the task of wading through those documents to find the answer that you're looking for," he said.

Watson uses what IBM calls Question Answering technology to tackle Jeopardy! clues, gathering evidence, analysing it and then scoring and ranking the most likely answer.

The winner of the man vs. machine showdown which begins on Monday is to receive $1 million. Second place is worth $300,000 and the third place finisher pockets $200,000.

IBM plans to donate 100 percent of its winnings to charity. Jennings and Rutter plan to give 50 percent of their prize money to charity.


Source

Thursday, February 10, 2011

New keyboard software makes typing faster on touch screens (w/ Video)


(PhysOrg.com) -- Researchers in Australia have invented a virtual keyboard they say will make typing on touch screen devices such as the iPad much faster. The software senses the positions of the user’s fingertips and as soon as four fingers touch the screen it displays a QWERTY keyboard underneath the fingers, with half the keyboard under each hand. The keys respond to touch, and can be moved around the screen or pressed to type.

Computer systems researcher Christian Sax and colleague Hannes Lau of the University of Technology, Sydney (UTS) developed their prototype, the LiquidKeyboard (patent pending), to take the traditional QWERTY keyboard and adapt it to the new communication interface of the touch screen. It is intended for use with a pressure-sensitive screen, but since touch screens in iPads and similar devices are not yet sensitive to pressure, the software estimates pressure by looking at hand size and the position of the fingers and measuring the surface areas of the fingertips.
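
Since current screens report where a finger touches but not how hard, the indirect idea described above can be illustrated like this (a toy model with invented names and numbers, not the LiquidKeyboard source): a fingertip pressed harder flattens against the glass, so a contact area well above the finger's resting footprint can stand in for pressure:

```python
def is_keypress(contact_area_mm2, resting_area_mm2, press_ratio=1.4):
    """Count a touch as a key press once the fingertip's contact
    area grows well beyond its resting (hovering) footprint."""
    return contact_area_mm2 >= press_ratio * resting_area_mm2

print(is_keypress(70, 45))  # True: 70 >= 1.4 * 45, a deliberate press
print(is_keypress(50, 45))  # False: the finger is merely resting
```

Calibrating the resting footprint per finger is what lets the keyboard sit under all eight fingers without every light touch registering as a keystroke.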


Sax and Lau are hoping their LiquidKeyboard could eventually be integrated into the operating system of a touch screen device such as the iPad, or be made available as an application for sale in an app store. Mr Sax said they thought such a device was necessary because typing on a touch screen is currently difficult and tedious, resulting in hand fatigue. He said their device lets the user type with both hands and eliminates the need to purchase additional hardware.

Mr Sax said they chose the iPad first because the platform is "a bit more powerful and has a more sensitive multi-touch capability," and because the hardware is cheap. They chose Apple's system as their first step even though Google Android tends to be more straightforward to develop for, and they had to create an Apple developer account and obtain the necessary permits and certification for the iPad.

The team say they are definitely aiming to extend the system to Android and as many other touch screen platforms as they can find in the future, and Mr Sax said the system’s versatility and low cost would make it an effective system for entering text in a wide range of applications.


Source

Wednesday, February 9, 2011

IBM puts supercomputer in 'Jeopardy!'


"Let’s finish ‘Chicks Dig Me,’" intones the somewhat monotone, but not unpleasant, voice of Watson, IBM’s new supercomputer built to compete on the game show Jeopardy!

The audience chuckles in response to the machine-like voice and its all-too-human assertion. But fellow contestant Ken Jennings gets the last laugh as he buzzes in and garners $1,000.

This exchange is part of a January 13 practice round for the world’s first man vs. machine game show. Scheduled to air February 14-16, the match pits Watson against the two best Jeopardy! players of all time. Jennings holds the record for the most consecutive games won, at 74. The other contestant, Brad Rutter, has winnings totaling over $3.2 million.

At the contestants’ podium, Watson appears as an atom-like icon sporting the splayed lines of an idea lightbulb. But behind its monitor is some of the most sophisticated computer science ever assembled. A core team of 25 programmers spent four years building Watson, the world’s most advanced question-and-answer system.

Responding to Jeopardy! questions is tougher than the kind of search that Google does because it requires a single answer, not pages of possibilities. Confounding the problem, said Raymond Mooney, a computer science professor at The University of Texas at Austin, is that the kind of questions that make Jeopardy! interesting are riddled with ambiguity, allusions and puns.

David Ferrucci, program manager of the Watson project, explained that language comes naturally to people, because we think in words and phrases. But computers, at their core zeros and ones, process information mathematically. And there’s no formal mathematics for everyday language.

When Watson receives input, it parses it against a vast database including encyclopedias, textbooks and news archives, along with sources favored by Jeopardy! writers like the complete works of Shakespeare and the Bible.

Watson finds several hundred candidate answers, which are scored by thousands of algorithms. It then compares the highest score against a threshold. The threshold changes depending on how much risk Watson will accept. “One way to think about it is for every question and answer we have lots of pieces of evidence, then we score each to come up with a single answer,” explained David Gondek, a member of Watson’s algorithms and strategy teams.
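
Gondek's description reduces to: score every candidate, take the best, and answer only if it clears a risk-dependent bar. A schematic sketch (the scores and thresholds are invented for illustration; Watson's real algorithms are far richer):

```python
def should_buzz(candidate_scores, trailing):
    """Return the best candidate if its confidence clears the
    threshold; the bar is lowered when the player is behind."""
    threshold = 0.50 if trailing else 0.70
    best_answer = max(candidate_scores, key=candidate_scores.get)
    best_score = candidate_scores[best_answer]
    if best_score >= threshold:
        return (best_answer, best_score)
    return None

scores = {"Maurice Chevalier": 0.98, "Charles Aznavour": 0.12}
print(should_buzz(scores, trailing=False))           # confident: buzz in
print(should_buzz({"Paris": 0.60}, trailing=False))  # too risky: pass
print(should_buzz({"Paris": 0.60}, trailing=True))   # behind: take the risk
```

The same marginal answer is worth buzzing on when trailing and worth passing on when ahead, which is exactly the behavior viewers see in Watson's on-screen answer panel.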

Watson runs on 3,000 cores simultaneously, an example of the aptly named “massively parallel computing.” Such hefty computational rigging reveals its most preferred answer in seconds; a typical PC would require about two hours for the same process.

This fall Watson played 55 sparring matches against Tournament of Champions winners. The training taught Watson that some algorithms are more trustworthy than others. The computer awards these algorithms higher scores.

Mooney, who attended several sparring matches but did not contribute to the Watson project, said he was skeptical that a machine could compete on Jeopardy! but became convinced after seeing Watson in action. “They’ve done a great job of putting together a lot of existing artificial intelligence technology into a very complicated system that actually works on this problem.”

Watson receives its input in text form -- it doesn’t use speech recognition software -- at the same moment the contestants see it. Host Alex Trebek then reads the question out loud. When he’s done speaking, a light signals that the buzzers are open. Watson receives an electronic signal at the same moment and will buzz in only if it has an answer with a high enough score in time.

Part of the fun of watching Watson is the answer panel that shows viewers the machine’s top three candidate answers and their scores, along with the threshold, which is lower when Watson trails in a game and increases when it’s ahead.
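A minimal sketch of such a risk-adjusted threshold, assuming a simple step up or down around a base value (the real adjustment formula is not described in the article, so the numbers here are invented):

```python
# Illustrative only: lower the confidence bar when trailing, raise it
# when leading. IBM's actual threshold logic is not public.

def buzz_threshold(my_score, best_opponent_score, base=0.50, swing=0.15):
    """Return the minimum confidence needed to buzz in."""
    if my_score < best_opponent_score:
        return base - swing   # trailing: take more risk
    if my_score > best_opponent_score:
        return base + swing   # leading: play it safe
    return base

print(buzz_threshold(3400, 4400))  # trailing -> lower bar
print(buzz_threshold(4400, 3400))  # leading  -> higher bar
```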

“It’s something that the team is watching closely during the game, too,” said Gondek.

If the January 13 match is any indication, Watson has both the speed and accuracy to put up a fight. While neither man nor machine missed a single question during the 5-minute parry, Watson finished the round with $4,400, ahead of Jennings with $3,400 and Rutter with $1,200.

A representative from IBM reported that no changes were made to Watson following the January 13 round: “The system is now locked and has not been refined since the practice match.”

As of this writing, the all-human poll on the IBM website gives a high score to Watson: 53 percent of the respondents think the next Jeopardy! Champion will be a machine.


Source

Tuesday, February 8, 2011

Chicago's high-tech cameras spark privacy fears


A vast network of high-tech surveillance cameras that allows Chicago police to zoom in on a crime in progress and track suspects across the city is raising privacy concerns.

Chicago's path to becoming the most-watched US city began in 2003, when police began installing cameras with flashing blue lights at high-crime intersections.

The city has now linked more than 10,000 public and privately owned surveillance cameras in a system dubbed Operation Virtual Shield, according to a report published Tuesday by the American Civil Liberties Union.

At least 1,250 of them are powerful enough to zoom in and read the text of a book.

The sophisticated system is also capable of automatically tracking people and vehicles out of the range of one camera and into another and searching for images of interest like an unattended package or a particular license plate.

"Given Chicago's history of unlawful political surveillance, including the notorious 'Red Squad,' it is critical that appropriate controls be put in place to rein in these powerful and pervasive surveillance technologies now available to law enforcement throughout the City," said Harvey Grossman, legal director of the ACLU of Illinois.

The Chicago police "Red Squad" program from the 1920s through the 1970s spied on and maintained dossiers about thousands of individuals and groups in an effort to find communists and other subversives.

Outgoing mayor Richard Daley has long championed the cameras as crime-fighting tools and said he would like to see one on every street corner.

Chicago police say the cameras have led to 4,500 arrests in the last four years.

But the ACLU said the $60 million spent on the system would be better spent filling the 1,000 vacancies in the Chicago police force.

It urged the city to impose a moratorium on new cameras and implement new policies to prevent the misuse of cameras, such as prohibiting filming of private areas like the inside of a home and limiting the dissemination of recorded images.

"Our city needs to change course, before we awake to find that we cannot walk into a book store or a doctor's office free from the government's watchful eye," the ACLU said.

A police spokeswoman said the department regularly reviews its policies and maintains an "open dialogue" with the ACLU.

"The Chicago Police Department is committed to safeguarding the civil liberties of city residents and visitors alike," Lieutenant Maureen Biggane said in an e-mail.

"Public safety is a responsibility of paramount importance and we are fully committed to protecting the public from crime, and upholding the constitutional rights of all."


Source

Monday, February 7, 2011

In future, cars might decide if driver is drunk


(AP) -- An alcohol-detection prototype that uses automatic sensors to instantly gauge a driver's fitness to be on the road has the potential to save thousands of lives, but could be as long as a decade away from everyday use in cars, federal officials and researchers said Friday.

U.S. Transportation Secretary Ray LaHood visited QinetiQ North America, a Waltham, Mass.-based research and development facility, for the first public demonstration of systems that could measure whether a motorist has a blood alcohol content at or above the legal limit of .08 and - if so - prevent the vehicle from starting.

The technology is being designed as unobtrusive, unlike current alcohol ignition interlock systems often mandated by judges for convicted drunken drivers. Those require operators to blow into a breath-testing device before the car can operate.

The Driver Alcohol Detection Systems for Safety, as the new approach is called, would use sensors that would measure blood alcohol content in one of two possible ways: either by analyzing a driver's breath or through the skin, using sophisticated touch-based sensors placed strategically on steering wheels and door locks, for example.

Both methods eliminate the need for drivers to take any extra steps, and those who are sober would not be delayed in getting on the road, researchers said.

The technology is "another arrow in our automotive safety quiver," said LaHood, who emphasized the system was envisioned as optional equipment in future cars and voluntary for auto manufacturers.

David Strickland, head of the National Highway Traffic Safety Administration (NHTSA), also attended the demonstration and estimated the technology could prevent as many as 9,000 fatal alcohol-related crashes a year in the U.S., though he also acknowledged that it was still in its early testing stages and might not be commercially available for 8-10 years.

The systems would not be employed unless they are "seamless, unobtrusive and unfailingly accurate," Strickland said.

The initial $10 million research program is funded jointly by NHTSA and the Automotive Coalition for Traffic Safety, an industry group representing many of the world's car makers.

Critics, such as Sarah Longwell of the American Beverage Institute, a restaurant trade association, doubt if the technology could ever be perfected to the point that it would be fully reliable and not stop some completely sober people from driving.

"Even if the technology is 99.9 percent reliable, that's still tens of thousands of cars that won't start every day," said Longwell. Her group also questions whether an .08 limit would actually be high enough to stop all drunken drivers, since blood alcohol content can rise in people during a trip depending on factors such as how recently they drank and how much they ate.
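Longwell's figure is easy to sanity-check with back-of-envelope arithmetic, assuming a rough number of daily car starts (the 200 million figure below is an assumption for illustration, not from the article):

```python
# Back-of-envelope check of the "tens of thousands of cars" claim:
# if roughly 200 million cars start once a day and 0.1 percent of
# starts are false refusals, that is 200,000 wrongly blocked starts.
daily_starts = 200_000_000       # assumed, not from the article
false_refusal_rate = 1 - 0.999   # "99.9 percent reliable"
blocked = daily_starts * false_refusal_rate
print(int(blocked))
```

Even at this accuracy, the false-refusal count is large in absolute terms, which is the core of her objection.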

"It's going to eliminate the ability of people to have a glass of wine with dinner or a beer at a ball game and then drive home, something that is perfectly safe and currently legal in all 50 states," she said.

LaHood disputed that the technology would interfere with moderate social drinking, and said the threshold in cars would never be set below the legal limit.

In Friday's demonstration, a woman in her 20s weighing about 120 pounds drank two 1 1/2-ounce glasses of vodka and orange juice about 30 minutes apart, eating some cheese and crackers in between to simulate a typical social setting, said Bud Zaouk, director of transportation safety and security for QinetiQ.

Using both the touch-based and breath-based prototypes, the woman registered a .06 blood alcohol content, Zaouk said, so she would be able to start the car.
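The go/no-go decision described in the demonstration reduces to a simple threshold check; this minimal sketch omits sensor fusion, retries, and calibration, which a real interlock would need:

```python
# Sketch of the interlock decision: the article says the system would
# prevent starting at or above the .08 legal limit, so the car starts
# only strictly below it.

LEGAL_LIMIT = 0.08

def may_start(bac_reading):
    """Allow ignition only if BAC is below the legal limit."""
    return bac_reading < LEGAL_LIMIT

print(may_start(0.06))  # the demo subject's reading: car starts
print(may_start(0.08))  # at the limit: blocked
```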

Laura Dean Mooney, president of Mothers Against Drunk Driving, said the technology could "turn cars into the cure."

While she did not foresee the alcohol detection system ever being mandated by the government, Mooney, whose husband died in an accident caused by a drunken driver 19 years ago, said she could envision it someday becoming as ubiquitous as air bags or anti-lock brakes in today's cars, particularly if insurance companies provide incentives for drivers to use those systems by discounting premiums.


Source

Sunday, February 6, 2011

Pay for a latte by mobile at Starbucks


US coffee chain Starbucks on Wednesday began allowing customers in its US stores to keep their cash and credit cards in their wallets and pay for their drinks with mobile phones.

Starbucks said the mobile payment system, which has been tested in selected cities since last year, was being expanded to the nearly 6,800 Starbucks around the country and the more than 1,000 Starbucks located in Target stores.

While Japanese shoppers have been able to pay by mobile phone for years for certain purchases, the practice is still in its infancy in the United States.

The Seattle, Washington-based Starbucks said its mobile payment program will be the largest in the country.

Starbucks said owners of a BlackBerry smartphone, an iPhone or an iPod Touch who have downloaded the free Starbucks Card mobile application can buy drinks by waving their mobile phone at a scanner at the cash register.

The scanner reads an on-screen barcode and debits the purchase from the Starbucks Card, which can be reloaded with funds using a credit card or with PayPal.
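The stored-value flow the article describes (scan, debit, reload) can be sketched as follows. The class and card ID are hypothetical; the real barcode format and authorization protocol are not covered in the article and are omitted.

```python
# Hypothetical stored-value card: the register debits a purchase from
# the balance, and the card can be reloaded with a credit card or PayPal.

class StoredValueCard:
    def __init__(self, card_id, balance=0.0):
        self.card_id = card_id
        self.balance = balance

    def reload(self, amount):
        """Add funds, e.g. from a credit card or PayPal."""
        self.balance += amount

    def debit(self, amount):
        """Deduct a purchase; refuse if funds are insufficient."""
        if amount > self.balance:
            raise ValueError("insufficient balance")
        self.balance -= amount

card = StoredValueCard("1234-5678", balance=10.00)  # invented card ID
card.debit(4.25)    # latte purchase at the register
card.reload(20.00)  # top up
print(round(card.balance, 2))
```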

"Starbucks anticipates mobile payment will be a draw for customers looking to experience the speed, ease and convenience of paying with their mobile phone," the company said in a statement.

Google last month unveiled a new mobile phone, the Nexus S, powered by its Android software, that allows for another form of mobile payment.

The Nexus S is equipped with a near field communication (NFC) chip that turns the device into a virtual wallet, allowing users to "tap and pay" for financial transactions.

NFC chips store data that can be transmitted to readers, say at a shop checkout stand, by tapping a handset on a pad.


Source

Saturday, February 5, 2011

XWave for iPhone lets you read your own mind


(PhysOrg.com) -- A new application for the iPhone, the XWave, lets you read your own mind via a headset clamped to your head and connected to the phone’s audio jack.

The plastic headband, which costs around $100, has a sensor that presses against the user’s forehead and communicates with a free XWave iPhone application that then shows your brain waves graphically on the iPhone screen. As you focus your mind on a task, the graphics change: a ball may move higher, for instance, or your state of relaxation may be indicated by a pulsating color that shifts toward blue as you become more relaxed.


Brainwave detection is powered by a NeuroSky eSense dry sensor, which provides a brain-computer interface (BCI) to sense even faint electrical impulses in the brain and convert them to digital signals that are sent to the iPhone. Previous applications of the NeuroSky technology include computer games and toys. In XWave an algorithm is applied to the brain rhythms to convert them to graphical representations of attention and meditation values.
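How raw brain rhythms become bounded attention and meditation values is proprietary to NeuroSky, but the general shape of such a mapping (smooth recent signal power, then clamp it onto a fixed scale) can be shown with a toy function. Everything below is invented for illustration and is not the eSense algorithm.

```python
# Toy illustration: reduce a stream of signal samples to a single
# 0-100 value by averaging recent power and clamping the result.

def esense_like_value(samples, window=4):
    """Average squared amplitude over the last `window` samples,
    clamped to a 0-100 scale."""
    recent = samples[-window:]
    power = sum(s * s for s in recent) / len(recent)
    return max(0, min(100, int(power)))

calm = [2.0, 1.5, 2.5, 2.0]   # invented low-amplitude rhythm
print(esense_like_value(calm))
```

A real BCI pipeline would first band-filter the signal and calibrate per user; the point here is only the reduce-and-clamp structure.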


XWave enables you to manipulate a number of other iPhone graphical applications and objects in games using only your brain waves, providing your rating in attention or meditation is high enough. At present you cannot text or browse the web using XWave, but you can use the device to train your mind to relax and focus on command. The list of applications for the device is likely to grow rapidly.


XWave, developed by PLX Devices, is meant to be used purely for entertainment, but the implications for the future are enormous, particularly for people with disabilities, who may gain much more control over their lives by using their minds alone to control their phones and potentially other applications. According to PLX, the headset device is also open for use with applications from other companies.

XWave iPhone app screen.

XWave is compatible with the iPhone, iPod Touch and iPad. Wireless versions are also available for WiFi and Bluetooth devices. The free XWave application is available for download via iTunes.


Source