Monday, May 23, 2011

Toyota to set up social networking service

Toyota is setting up a social networking service with the help of a U.S. Internet company and Microsoft so drivers can interact with their cars in ways similar to how people interact on Twitter and Facebook.

Japanese automaker Toyota Motor Corp. and San Francisco-based Salesforce.com announced their alliance Monday to launch "Toyota Friend," a private social network for Toyota owners that works similarly to tweets on Twitter.

In a demonstration at a Tokyo showroom, an owner of a plug-in Prius hybrid found out through a cellphone message from his car, called "Pre-boy," that he should remember to recharge his car overnight.

When the owner plugged in his car to recharge it, the car replied, "The charge will be completed by 2:15 a.m. Is that OK? See you tomorrow."

The exchanges can be kept private, or be shared with other "Toyota Friend" users, as well as made public on Facebook, Twitter and other services, the company said.

The companies did not give details of how the technology, such as the content of the talking car's dialogues, will be managed. A launch event where such details will be offered is set for Tuesday.

Toyota is investing 442 million yen ($5.5 million), Microsoft Corp. 335 million yen ($4.1 million) and Salesforce.com 223 million yen ($2.8 million) in the project.

Many cars are already equipped with navigation and other network-linking capabilities, and can function as a mobile device just like an iPhone or a BlackBerry.

Toyota's service, built on open-source cloud platforms that are the specialty of Salesforce.com, as well as on Microsoft's platform, will start in Japan in 2012, and will be offered later worldwide, according to Toyota.

Toyota President Akio Toyoda, a racing fan, said he always "talks" with his car when he is zipping around on the circuit.

With the popularity of social networking, cars and their makers should become part of that online interaction, he said.

"I hope cars can become friends with their users, and customers will see Toyota as a friend,"he said.

Salesforce.com chief executive Marc Benioff said social networks can add value to products and companies. They can also help Toyota gather massive amounts of information, not only about its buyers but also about how its cars are working or not working, he said.

"I want a relationship with my car in the same way we have a relationship with our friends on social networks,"he said.

Toyoda, who has always been interested in telematics, or the use of Internet technology in autos, has been aggressive in forging alliances with new kinds of companies, including one with U.S. luxury electric carmaker Tesla Motors that he announced last year.

Partnerships with dot-com companies have been a bright spot in Toyoda's bumpy career as president. He has faced growing doubts about reliability and transparency because of the massive global recalls that began two years ago, shortly after he took office, and which now affect more than 14 million vehicles.

Toyota is also battling parts shortages after the March 11 earthquake and tsunami in Japan destroyed key suppliers, hampering production.


Source

Sunday, May 22, 2011

AIDA 2.0 brings a full-dashboard location display to drivers (w/ video)

(PhysOrg.com) -- If you remember the AIDA (Affective, Intelligent Driving Agent) system, which came out roughly a year and a half ago, then you remember that it was a joint project, made by MIT and Volkswagen,<a href="http://www.physorg.com/news176294342.html">that put a robot head in your dashboard</a>. The head gave driving directions to end users. The newest version, AIDA 2.0, has gotten rid of the talking head, and turned the entire view of the car into one large navigation display.

In the AIDA 2.0 system, all of the information that the driver needs is placed onto the dashboard and surrounding areas. While this makes the information easily accessible, it may also lead to distractions on the road. The virtual display now spans the entirety of the dashboard, the console, the instrument panel, and the wing mirrors. Working in conjunction, these surfaces create one virtual display that updates itself as you move.

While this idea does seem really cool, like something out of a Tron movie, it stretches the driver's view and could potentially distract from the road in front of the driver and the other cars around them.

On the bright side, the system is both adaptive and considerate. The system will, over time, learn facts about you such as the types of places where you like to eat and the activities that you are interested in. Then, it will search through information about the area and tell you about things that you may be interested in that are close by. As with any adaptive system, the more you use it, the better it will become.
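
As a rough sketch of what such an adaptive system might do under the hood, the toy Python below ranks nearby places by how often their category appears in a driver's history. The category names, place names and scoring are invented for illustration; MIT has not published AIDA 2.0's actual algorithm.

```python
from collections import Counter

def rank_nearby(visit_history, nearby_places):
    """Rank (name, category) pairs by how often the driver has
    visited that category before -- a crude preference model."""
    prefs = Counter(visit_history)
    # Most-visited categories first; alphabetical within ties.
    return sorted(nearby_places, key=lambda p: (-prefs[p[1]], p[0]))

history = ["sushi", "cafe", "sushi", "museum", "sushi", "cafe"]
nearby = [("Bean There", "cafe"), ("Rollplay", "sushi"), ("Gas-N-Go", "fuel")]
print(rank_nearby(history, nearby))  # sushi spot first, fuel stop last
```

The more visits accumulate in the history, the sharper the ranking becomes, which mirrors the "the more you use it, the better it will become" behavior described above.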

No word has been given yet about when consumers will see the AIDA 2.0 system in cars.


Source

Saturday, May 21, 2011

Sony develops 'SmartAR' Integrated Augmented Reality technology

Sony has developed an integrated Augmented Reality (hereafter 'AR') technology called "SmartAR." When capturing visuals through a camera on a device such as a smartphone, the technology enables additional information to be displayed on the device's screen, such as virtual objects, or images and text that cannot be identified by visual perception alone. The technology employs a markerless approach, forgoing any requirement for special markers such as 2D barcodes. The object captured by the camera is quickly recognized and can be tracked at high speed along with the movement of the camera, as it is displayed over the actual 3D space.

AR technology has recently been the subject of much interest, and is being used in a variety of applications such as advertisements, promotions, games, and information searches. Sony began researching AR in 1994 with two-dimensional barcode recognition (the marker approach), and in 1998 it developed VAIO "PCG-C1" personal computers equipped with software that automatically recognized 'CyberCode.'

"SmartAR" technology combines 'object recognition technology' (a markerless approach in which no special markers are required) for recognizing general objects such as photographs and posters with Sony's own proprietary '3D space recognition technology,' which has been fostered through research on robots such as "AIBO" and "QRIO." With "SmartAR" technology, objects can be recognized and tracked at high speed. In addition to displaying virtual objects or additional image or text information (hereafter, 'AR information'), the technology also facilitates the expression of AR information over an extended space, producing a dynamic, large-scale AR experience.

Furthermore, information can be acquired or navigated simply by touching the AR information directly on the screen of the smartphone or other device, achieving an intuitive and seamless user interface that is unique to "SmartAR."

Main features

(1) Object recognition that enables the markerless approach
  AR information can be displayed on the captured image which appears on a device's screen, including images that do not carry any special AR markers. The technology is also compatible with image recognition technologies that use conventional markers (such as "CyberCode"). Because "SmartAR" can recognize everyday objects such as posters and menus, it has the potential for a wide variety of applications.

  "SmartAR" object recognition technology identifies objects by analyzing features detected from a portion of the image (hereafter, 'local features') together with their positional relationship. Sony's feature-matching technology employs a proprietary probabilistic method that matches local features with minimal calculation, enabling high-speed recognition that is resistant to changes in lighting or in the position of the object. In addition, recognition is still possible even if the captured object appears comparatively small in the display.
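
Sony's matching method is proprietary, but the general idea of matching local features can be sketched with a generic nearest-neighbour matcher plus a ratio test, a standard robustness trick in feature matching, not Sony's actual algorithm. The tiny 2-D "descriptors" below are invented for illustration.

```python
def match_features(desc_a, desc_b, ratio=0.75):
    """Match feature descriptors between two images by nearest
    neighbour, keeping a match only when the best candidate is
    clearly closer than the runner-up (a ratio test)."""
    def dist(u, v):
        return sum((x - y) ** 2 for x, y in zip(u, v)) ** 0.5

    matches = []
    for i, d in enumerate(desc_a):
        ranked = sorted(range(len(desc_b)), key=lambda j: dist(d, desc_b[j]))
        best, runner_up = ranked[0], ranked[1]
        if dist(d, desc_b[best]) < ratio * dist(d, desc_b[runner_up]):
            matches.append((i, best))
    return matches

# Toy 2-D "descriptors" for three features seen in two images.
img1 = [(0.0, 1.0), (5.0, 5.0), (9.0, 0.0)]
img2 = [(9.1, 0.2), (0.1, 1.1), (4.0, 4.0)]
print(match_features(img1, img2))  # [(0, 1), (1, 2), (2, 0)]
```

The ratio test is what gives this kind of matcher its resistance to ambiguous look-alike features: a match is rejected unless it is clearly better than the next-best candidate.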

(2) High-speed tracking ('rapid & accurate')
  Sony achieved its natural-feeling 'rapid & accurate' AR by quickly displaying AR information on the screen and then tracking the camera's movements at high speed. This was realized by combining object recognition technology with Sony's proprietary matching technology, which uses features detected from a portion of the image ('local features'), and image tracking technology that can deal with changes in the shape of the object.

(3) 3D space recognition
  With this dynamic, large-scale AR, virtual objects can be merged with 3D structures detected in the physical world. For example, even if the AR image is a gigantic virtual character whose size exceeds the dimensions of the device's screen, the technology allows the user to grasp the entirety of the character by moving the camera around. Furthermore, it is also possible to move the virtual object in the actual 3D space as if it were really there.

  Three-dimensional space recognition technology uses the disparity observed as the camera moves to estimate the shape of the 3D space and the position and angle of the camera. By combining this with object recognition technology, devices become capable of identifying and remembering 3D space constructions.
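
The parallax idea behind this can be illustrated with the textbook relation depth = focal length × baseline / disparity. The numbers below are invented for illustration and say nothing about Sony's implementation.

```python
def depth_from_disparity(focal_px, baseline_m, disparity_px):
    """Parallax: nearby points shift more between two camera
    positions than distant ones, so depth is inversely
    proportional to the observed pixel shift (disparity)."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_m / disparity_px

# A camera slid 10 cm sideways; a feature shifted 40 px on a
# sensor with an 800 px focal length -> the point is 2 m away.
print(depth_from_disparity(800, 0.10, 40))  # 2.0
```

Repeating this for many tracked features as the camera moves is what lets a device build up a rough map of the surrounding 3D structure.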

(4) AR Interaction
  Information can be intuitively acquired and navigated by directly touching the AR information displayed on the smartphone or device's screen. The distinctiveness of "SmartAR" technology comes from this user interface, which lets users naturally manipulate additional information and virtual objects.


Source

Friday, May 20, 2011

Queen's University students hack Microsoft Kinect to make a 360-degree display

(PhysOrg.com) -- It seems like lately everyone is playing around with the Microsoft Kinect to make something different from its out-of-the-box configuration. This time the modifications come from students at Queen's University. They have combined a pair of Kinect sensors with a hacked 3D HD projector and a hemispherical mirror mounted inside an acrylic sphere to make a pseudo-holographic display.

The project, aptly dubbed Project Snowglobe, is capable of showing a 360-degree view of a digital object. While at first that may sound really cool, the display is currently limited to a single user and cannot project a truly 3D object. It simply tracks the movement of the viewer and rotates the image so that it stays in sync with their position.

Don't expect to see this bit of creative hacking on sale any time in the near future; no plans have been made at this time.


Source

Thursday, May 19, 2011

In-car device monitors blood sugar for diabetic drivers

People with diabetes and their caregivers know that careful and constant monitoring of their blood sugar levels is critical to managing the disease. But even while driving?

In an unusual marriage of medical technology, consumer electronics and automotive engineering, Fridley, Minn.-based Medtronic Inc. and Ford Motor Co. on Wednesday unveiled a prototype that uses the automaker's in-car communications system to help drivers track their glucose levels while on the go.

"It's a real high-tech approach to the old saying, 'I've fallen and I can't get up!'"said Phil Nalbone, an analyst with Wedbush Securities."This makes good use of widely available communications technology to safeguard patients and improve quality of care."

Using Bluetooth connectivity, the system links the automaker's popular in-car infotainment system, called Sync, to a Medtronic continuous glucose monitor. If a driver's glucose levels are too low, an alert sounds or a signal appears on a dashboard screen.
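
As a sketch of what that alerting logic might look like: the 70 and 180 mg/dL cut-offs used here are common clinical rules of thumb, not Medtronic's undisclosed thresholds.

```python
def glucose_alert(mg_dl, low=70, high=180):
    """Map a glucose reading (mg/dL) to a dashboard alert level."""
    if mg_dl < low:
        return "ALERT: low blood sugar"
    if mg_dl > high:
        return "WARNING: high blood sugar"
    return "OK"

print(glucose_alert(62))   # ALERT: low blood sugar
print(glucose_alert(120))  # OK
```

In the real system the reading would arrive over Bluetooth from the continuous glucose monitor, and anything other than "OK" would trigger the audible alert or dashboard signal.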

Low blood sugar, in particular, can cause light-headedness, blurry vision and other potentially dangerous symptoms that could cause a traffic accident. The American Diabetes Association estimates nearly 26 million adults and children in the United States have diabetes, but of that amount, only a portion use glucose monitors and insulin pumps.

The Ford-Medtronic prototype is still being researched, so it's unclear when - and if - the technology will ever be marketed. "Today it's all about possibilities," said Medtronic senior vice president James Dallas, who attended an unveiling at Ford headquarters in Dearborn, Mich., on Wednesday. "There's nothing formal yet, but the technology has reached a point where possibilities can become probabilities."

The idea has won some preliminary fans in the diabetes community. "I know when I'm driving, if the 'check engine' light comes on, I'm going to pay attention," said Dr. Richard Bergenstal, executive director of Park Nicollet's International Diabetes Center. "It's kind of the same principle."

For Medtronic, the partnership signals a growing movement toward managing health remotely through smartphones, tablets, laptops and, possibly, cars. Dallas said the company is in talks with other tech leaders, such as IBM, Cisco, Apple, Verizon and Qualcomm, for other partnerships. "It helps us extend our reach in new ways," Dallas said.

Medtronic's $1.2 billion diabetes business has led the way in continuous glucose monitoring, which records glucose levels throughout the day and night. The readings permit patients to adjust insulin levels, often using a Medtronic insulin pump, or to ingest sugar to coax levels back into normal territory.

"Ideally, we will get to a place where the sensor and pump communicate and when you get a reading, the pump automatically adjusts,"Medtronic spokesman Brian Henry said.

Pairing the Medtronic technology with automotive engineering may seem far-fetched at first blush, but Ford maintains that 78 percent of U.S. consumers are deeply interested in "mobile health solutions." According to a study by MobileStorm cited by the company, medical and health care applications are the third-fastest-growing category of smartphone apps.

Ford also announced a project on Wednesday to provide drivers with allergy alerts and pollen levels on the Sync device for those suffering from asthma and severe allergies.

"Ford's approach to health and wellness in the vehicle is not about trying to take on the role of a health care or medical provider,"said Gary Strumolo, the company's global manager of interiors, infotainment, and health and wellness research."We're a car company."

By partnering with "experts," Strumolo said, the Sync system can be used as a kind of "secondary alert system and alternate outlet for real-time patient coaching services."

Ford released Sync in 2008 to mostly positive reviews. Developed with Microsoft, the system enables voice control of phones and audio systems and is available in most models of Ford vehicles. (Where it is optional, it costs an extra $395.)

However, as safety concerns mount over potentially distracted drivers using cell phones for texting, talking and other activities, U.S. Transportation Secretary Ray LaHood expressed concern last fall about systems like Sync and General Motors' OnStar, even if they are "hands free."

There are also questions about whether there's significant money to be made. "It remains to be seen how Medtronic will monetize this and whether it will contribute to a meaningful revenue stream," Nalbone said. "But it's an intriguing idea."


Source

Wednesday, May 18, 2011

AnatOnMe: Doctor patient communication enhanced with new Microsoft device (w/ video)

(Medical Xpress) -- Microsoft researchers announced this week a new handheld device that they hope will work as an aid for doctors and patients to better communicate injuries and recommended therapy treatments. The new prototype device is called AnatOnMe and enables doctors to project an image of the bones, tendons and muscles involved in an injury directly onto the patient's skin.

AnatOnMe is composed of two parts. The main component has a handheld projector and a digital camera; the second part of the device holds the main control buttons. Amy Karlson, from Microsoft Research's Computational User Experiences Group in Redmond, Washington, says that the technology is actually low-tech but could provide many possibilities in the future.

The projector is capable of projecting stock images of an injury onto a patient's skin to better enable them to see inside and understand the injury. The camera lets a doctor take images of a patient to document progress and make notations. Doctors can also take pictures of a patient performing physical therapy and note what they might be doing wrong or need to work on. This method allows patients to better see how their body is working and what needs to be done in order to heal from the injury.

AnatOnMe is a projection-enabled mobile device designed to improve patient-doctor communication in a medical setting.

After an exam, the doctor is then able to print out the pictures and create a personalized file to show what has been discussed in the office visit for the patient to take home, as well as provide detailed information in a patient’s medical record. By making the visit and instructions more personalized, the hope is to better improve patient body awareness and communication between the doctor and the patient.


Source

Tuesday, May 17, 2011

YikeFusion: same design, heavier frame, less expensive

(PhysOrg.com) -- Some of you may be familiar with the YikeBike. For those of you who are not, the YikeBike is a computerized electric bike that can be folded up and packed away when it is not in use. The bike, which looks like it belongs to a classic cartoon character, allows users to tool around on the sidewalk much faster than most of us could walk, or even pedal a standard bike.

The standard version of the YikeBike weighs in at 10.8kg or 24 pounds, which is about the same as a Brompton folding bike. That low weight comes with the help of a carbon fiber body. The carbon fiber is lighter than other materials on the market, but it also makes the bike fairly expensive. Anyone who wants to buy the original YikeBike would have to pay $3,800.

If $3,800 is not in your budget then you should be glad that Yike has created the Fusion. The Fusion is significantly less expensive, at $2,000, because its frame is made from a less costly composite rather than carbon fiber, which makes it notably heavier at 14kg, or roughly 31 pounds. The change of frame materials adds about seven pounds. The less expensive bike still carries over the same design as the original YikeBike, and features the same 450-watt motor. That motor will take you about six miles in total, with a top speed of 14mph.

Since both the YikeBike and the YikeFusion are meant only for short commutes, the extra weight should not be a significant issue for the majority of users, who could stash the bike in the trunk of a car or a rolling suitcase. The YikeFusion is already on the market.



Source

Monday, May 16, 2011

The world's smallest 3D HD display

(PhysOrg.com) -- It seems like small displays are all the rage these days, and they just keep getting more and more advanced. In October of last year, Ortus Technology created a 4.8-inch liquid crystal display that showed full-color images. At the time, this screen, with its pixel density of 458 pixels per inch, a density beyond the detection limit of the human eye, was the latest and greatest in the world of tiny screens. Now, it is only the most advanced of the 2D screens out there.

Now, it has some 3D competition, and the challenge is coming from inside the house. Ortus has created a Hyper Amorphous Silicon TFT (HAST) screen. This new screen reduces the space between the pixels and gives it a whole new view. The 4.8-inch LCD will still show 2D images at 458 pixels per inch, but now it can also show 3D images at a fairly impressive 229 pixels per inch. That density is enough to show full HD images at a resolution of 1920 x 1080 pixels. The 3D does require the use of glasses to see the images pop, unlike other small-format 3D screens such as the one found on the Nintendo 3DS. The 3D images will have a viewing angle of 160 degrees, and the screen will be able to display up to 16.77 million colors.
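
The quoted densities are easy to sanity-check: pixel density is just the diagonal resolution in pixels divided by the diagonal size in inches.

```python
def pixels_per_inch(width_px, height_px, diagonal_in):
    """Pixel density: diagonal resolution in pixels over diagonal inches."""
    diagonal_px = (width_px ** 2 + height_px ** 2) ** 0.5
    return diagonal_px / diagonal_in

ppi = pixels_per_inch(1920, 1080, 4.8)
print(round(ppi))  # 459 -- within rounding of the quoted 458 ppi
```

The small discrepancy from the quoted 458 ppi is just rounding of the panel's exact active-area dimensions; halving the vertical resolution for 3D likewise halves the figure to the quoted 229 ppi.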

The 3D effect is created with a circular polarizing film known as Xpol, which was developed by Arisawa Manufacturing. The film needs to be precisely placed on the screen because this technology shows images for the left and right eye alternately on each line, halving the vertical resolution.
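
Line-interleaved stereo of this kind is simple to sketch: even rows carry one eye's image and odd rows the other's, which is why each eye ends up with half the vertical resolution. The row labels below are placeholders, not real pixel data.

```python
def interleave_stereo(left_rows, right_rows):
    """Line-interleaved stereo: even rows show the left-eye image,
    odd rows the right-eye image, so each eye gets half the
    vertical resolution once the polarizing film sorts the lines."""
    assert len(left_rows) == len(right_rows)
    return [left_rows[i] if i % 2 == 0 else right_rows[i]
            for i in range(len(left_rows))]

left = ["L0", "L1", "L2", "L3"]
right = ["R0", "R1", "R2", "R3"]
print(interleave_stereo(left, right))  # ['L0', 'R1', 'L2', 'R3']
```

The circular-polarizing film then routes the even rows to one lens of the glasses and the odd rows to the other, which is why its alignment with the rows has to be so precise.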


Source

Sunday, May 15, 2011

The next generation of E-ink may be on cloth (w/ video)

(PhysOrg.com) -- Most people have become familiar with E-ink through e-readers. Devices such as the Amazon Kindle and the Nook have brought a less limited version of the bookstore to the reader. E-ink technology works by using an electrophoretic display that either pushes charged black powder to the top of the screen or hides it at the bottom, creating a black-and-white display. Either way, the result is the same: a matte-finished screen, without any serious sun-glare issues, that displays black and white text.

Well, more recently, the E-ink screens of the world have been getting some upgrades. First it was the addition of images. Then it was the capability to have color screens. Now, video is the next frontier for E-ink, and this time, it won't be on the screen of a handheld device.

It may just be on a piece of cloth. One of the most interesting, and potentially expensive, ways to use E-ink in the future would be to print displays on cloth. Demo videos taken at CES show off this capability to embed E-ink displays in other types of materials. In the video below, you can see an E-ink screen embedded in a bit of Tyvek cloth, which can be viewed and crumpled over and over again. Some of you may already be familiar with Tyvek, since its paper-like quality allows it to be used in high-durability shipping envelopes. The cloth-like paper is able to withstand significant wear and tear.

Envelopes made with e-ink could be reusable, eliminating waste by allowing quick and easy address changes without the need for multiple packing slips and a new envelope every time. Basically, this type of envelope could be a kind of endless routing slip, one that would never get those annoying ink smudges or eraser marks.

No word yet on when these products would come out, or how much they would cost, but given the high cost of e-ink readers when they first came out, this e-ink fabric would not be inexpensive.

Color Video on an E-ink Screen



Source

Saturday, May 14, 2011

Waste-conversion startup Sanergy bowls over competition

A team of students with a toilet and a dream won this year’s grand prize, as well as the audience-choice award, in MIT’s 21st annual $100K Business Plan Competition.

Sanergy, the finalist in the emerging-markets track, beat out 280 teams with its plan for an innovative form of low-cost, energy-converting sanitation. Throughout the competition, Sloan School of Management MBA candidates David Auerbach and Ani Vallabhaneni refined their pitch, which they presented at the competition’s finale on Wednesday night.

Auerbach and Vallabhaneni opened with a question to the audience: "Who here has used a clean toilet today?" They then outlined the critical need for clean, affordable sanitation in African slums.

Sanergy’s solution: a low-cost, portable toilet facility that separates waste to be collected and converted to biogas and organic fertilizer. Within five years, the team hopes to provide facilities to more than 500,000 Africans, generating 7.5 million kilowatt-hours of electricity and 11,000 tons of fertilizer.

Later in the evening, Sanergy's pitch garnered the most votes from the crowd, winning the team the audience-choice award and an additional $5,000.

Sanergy took home a total of $120,000, which it plans to put toward building and implementing up to 60 toilets throughout Kenya.

Seed money for top seeds

Sanergy joins a list of more than 150 companies launched with the help of MIT’s $100K Business Plan Competition. The contest, started in 1990, is the largest student-run business competition in the world. Of the 280 teams that entered this year, 27 semifinalists advanced; each worked for two months with an experienced entrepreneur to hone its business plan.

Six finalists were announced last week, representing winners in each of six tracks: energy, life sciences, mobile, products and services, web/IT, and emerging markets. During Wednesday’s $100K finale, finalists pitched their ideas to a packed house at Kresge Auditorium. Each finalist received a check for $15,000.

Cool Chip Technologies, the winner of the $200,000 Clean Energy Prize earlier this week, was automatically entered as a $100K finalist in the energy track. The team of three MIT graduate students has developed a fan system for cooling processor chips, a potentially significant cost-saver for busy data centers. Chip developer William Sanchez, a doctoral student in electrical engineering and computer science, says the fan, which is less than three inches wide, could also potentially work in personal-gaming systems such as Microsoft Xbox and Sony PlayStation.

Zinaura Pharmaceuticals won the top spot in the life-sciences category. The startup, helmed by Drew Cronin-Fine, a graduate student in MIT’s Biomedical Enterprise Program, and Heather Kline, an MBA student at Harvard Business School, is built around the compound Huperzine A, a treatment for epilepsy and pain. The team hopes the drug, which targets novel biochemical pathways that can cause seizures, will successfully treat the 30 percent of epilepsy patients who do not respond to current medications.

In the mobile category, Sensactive took top billing. Matt Hirsch, a PhD student in the MIT Media Lab, devised an LCD screen that senses gestures in 3-D, enabling users to manipulate on-screen images with a wave of their hand. In their pitch to the MIT audience Wednesday night, the team likened the technology to "Kinect for mobile devices."

The winner of this year’s products and services track has already received a significant financial boost: Last year, Green Logistics won the $100K Elevator Pitch contest for its concept of collapsible air cargo containers that are stackable when empty, conserving cargo space. According to Sloan MBA student Anand Dass, airlines waste millions of dollars’ worth of fuel annually transporting empty cargo containers back and forth across the country. The team hopes to shop their prototype around to the airline industry in the coming year, offering potential fuel savings of up to 30 percent.

The startup Upkast led the web/IT category with its plan for a virtual file-sharing system. David Jia, a computer science undergraduate at MIT, developed an online platform that connects to popular web applications such as Facebook, Picasa and Google Docs, making it easier for users to jump between applications, sharing files and photos among multiple services. Jia is currently developing an Upkast app for the iPad.

Failure is an option

Venture capitalist Vinod Khosla, co-founder of Sun Microsystems, gave the keynote address at Wednesday night’s finale. Khosla said the key to entrepreneurial success is having the guts to fail.

Khosla also advised the young entrepreneurs at MIT: "Sell your heart out, with guts and a level of arrogance that's mostly unwarranted… in-your-face bravado is absolutely essential [to entrepreneurial success]."

As an example, Khosla outlined his experience applying to MBA programs. His sights were set on Stanford, but he lacked the requisite two years’ work experience. Eager to speed the process along, Khosla took an unconventional approach, working two full-time jobs for one year. Still, Stanford hesitated. He badgered the admissions office weekly until, one week before the start of the fall semester, he called with a bold question: Where was his acceptance letter? His persistence paid off: One day before the start of classes, Khosla enrolled as a Stanford student. 

New competitive streak

This year, competitors in the $100K competition also had the opportunity to enter a new category: the Linked Data Prize, an award inspired by the work of Tim Berners-Lee, director of the World Wide Web Consortium. Berners-Lee, who served as a judge for the new contest, has championed the concept of linked data: unearthing and connecting raw data from disparate sources, thus revealing new patterns between previously unconnected records.

Three teams shared the Linked Data Prize of $10,000: Convexic, for a novel algorithm that matches job applicants with ideal positions; Link Cycle, for a collaborative online tool that improves environmental life-cycle analyses; and Upkast.

For the first time this year, $100K organizers also introduced YouPitch, an online contest challenging any student in the world to pitch a business idea, in 60 seconds or less, in the form of a YouTube video. Entries with the most "likes" made it to the final round. The inaugural YouPitch challenge generated 34 videos from four continents, including entries from Pakistan, Cameroon and Taiwan.

The winning video came from Clear Ear, a team from MIT and Stanford that has developed a patent-pending earwax-removal system. While some video entries featured people pitching ideas directly to the camera, Clear Ear took a playful approach, illustrating its concept through a flipbook of hand drawings. MIT electrical engineering and computer science graduate student Michael Yung Peng and Stanford graduate Lily Truong took home $2,000 in prize money for their video pitch.

"Many of the [YouTube] entries were very innovative, both with respect to ideas as well as delivery," said Kourosh Kaghazian, managing director of the $100K Business Plan Competition. "We're planning to make it even bigger and better for next year."


This story is republished courtesy of MIT News (http://web.mit.edu/newsoffice/), a popular site that covers news about MIT research, innovation and teaching.


Source

Friday, May 13, 2011

Water wonder

A brilliant water-saving idea by UNSW engineering academics Greg Leslie and Bruce Sutton has impressed the judges on ABC TV's <i>New Inventors</i> program.

Associate Professor Leslie, of the School of Chemical Engineering, and Professor Sutton, formerly of the University of Sydney and now a Visiting Fellow at UNSW, won their night on the popular program and will now go on to the finals, to be held later this year.

The pair won for ROSDI, their Reverse-Osmosis Sub-surface Drip Irrigation system, which allows salty groundwater to be used in crop irrigation without energy-intensive water treatment.

ROSDI uses pipes made from reverse-osmosis membrane, like that used in desalination plants, to filter salt from brackish groundwater for crop irrigation in times of drought or low water availability.

Drip-feed ... a diagram of the ROSDI concept

The system uses the suction force created by a plant’s roots to draw water through the membrane, dispensing with the need for pumping.

New Inventors judge James Bradfield Moody described ROSDI as a "really elegant, potentially world-first" concept.

ROSDI also won the Eureka Prize for Water Research and Innovation in 2010.

The technology is being commercialised by NewSouth Innovations, UNSW's technology commercialisation company.


Source

Tuesday, May 10, 2011

Voice-based phone recharging


(PhysOrg.com) -- The noise that we produce can be a lot of things. It can be a valid means of communication. It can be an annoyance when you are trying to get to bed at night. It can be a migraine waiting to happen and, depending on who you ask, it can even be a form of pollution. But could that annoyingly loud man next to you on the subway, or your can't-keep-it-down neighbor's TV, be a potential source of renewable energy?

Sang-Woo Kim, a researcher at the Institute of Nanotechnology at Sungkyunkwan University in Seoul, thinks that it just might be.

He is working in a field known as energy scavenging, in which power is generated from the day-to-day activity of humans. Other examples of scavenged energy include California's current proposal to capture vibrational energy from cars driving on its highways. These types of innovation could give us power sources that do not require putting up solar panels or wind turbines in areas where that kind of construction may not always be possible.

You may be wondering how this sound-based technology would work. The proposed technology would convert sound into the kind of energy a phone can use by pairing electrodes with strands of zinc oxide. When noise reaches the phone, a pad designed to absorb the sound would capture it and vibrate the phone (or other device in question), making the zinc oxide fibers expand and contract. It is this expanding and contracting that actually generates the power for the battery.

A current prototype was able to convert 100 decibels of sound, the equivalent of city traffic, into 50 millivolts of electricity.
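For a rough sense of scale: sound levels in decibels correspond to tiny pressures, which is why the harvested output is millivolts rather than volts. Below is a minimal Python sketch of that relationship. The linear transduction coefficient is an invented assumption, chosen only so that the reported figure (100 decibels in, 50 millivolts out) is reproduced; it is not a measured property of the device.

```python
REF_PRESSURE_PA = 20e-6  # standard reference pressure for dB SPL (20 micropascals)

def spl_to_pressure(db_spl):
    """Convert a sound pressure level in dB SPL to RMS pressure in pascals."""
    return REF_PRESSURE_PA * 10 ** (db_spl / 20)

# Hypothetical linear transduction coefficient, fitted so that
# 100 dB (about 2 Pa, roughly city traffic) yields the reported 50 mV.
COEFF_V_PER_PA = 0.050 / spl_to_pressure(100)

def harvested_voltage(db_spl):
    """Estimate output voltage (V) for a given sound level, assuming linearity."""
    return COEFF_V_PER_PA * spl_to_pressure(db_spl)

print(round(spl_to_pressure(100), 3))        # 2.0 (pascals)
print(round(harvested_voltage(100) * 1000))  # 50 (millivolts)
```

Note that the dB scale is logarithmic, so under this toy model quiet conversation (around 60 dB) would yield only about 0.5 mV, a hundredth of the traffic figure.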


Source

Monday, May 9, 2011

The Aeryon Scout gets VideoZoom10x upgrade (w/ video)


(PhysOrg.com) -- The Aeryon Scout is not a new piece of technology. This flying robot, created by a Canadian company called Aeryon Labs, is able to quietly hover in place and point a camera down onto the people and objects below. If you have, or are interested in getting your hands on, one of these quadrocopters, then you are in for a substantial upgrade, known as the VideoZoom10x payload.

The VideoZoom10x payload is ready to turn your bot into a full-fledged spy copter. The payload, which weighs in at just 200 grams, adds a stabilized 10x optical zoom video capability, which means that you will be able to see a lot more detail in what is going on, at least within the payload's useful range of around 300 meters.


This is not the first adaptation of the Aeryon Scout. Previous versions have carried a Kinect sensor, a mini quadrotor DIY project, or a quadrotor that is able to juggle. The company even claims that the machine has helped to take down a drug lord in Central America by providing important aerial footage of a narcotics trafficker's compound deep in the jungles of an unnamed country.

The Scout has a maximum range of 3 kilometers and a top speed of 50 kilometers per hour. It is able to deal with winds of up to 80 km/h. The base unit weighs a little more than a kilogram. It can be carried in a case, and also has an infrared option for nighttime use.


Source

Sunday, May 8, 2011

Courts nationwide hold hearings with video

(AP) -- Courts in New York City and around the country are increasingly using video conference software to hold minor hearings.

The technology is boosting efficiency and also cutting costs.

The savings for some are staggering. According to a recent national survey, $30 million has been saved in Pennsylvania so far, $600,000 in Georgia, and $50,000 per year in transportation costs in Ohio.

Lawyers say the virtual hearings are easier on defendants, who don't have to spend hours going back and forth from prison and waiting for their appearance.

Judges say their cases are moving faster. And civil rights groups say the practice raises no red flags.


Source

Saturday, May 7, 2011

Japanese company introduces irresistibly cute mind-controlled 'cat ears' (w/ video)


(PhysOrg.com) -- In a bit of science mixed with whimsy, a Japanese company has created a set of electromechanical cat ears that can be worn on the human head and manipulated with nothing but the mind. Called the necomimi (a combination of the Japanese words for cat and ear) and looking very much like the ears that come with a cat costume, the ears respond to thoughts or mood by means of a sensor on a second small band pressed against the forehead. They can stand straight up when the wearer is concentrating, wriggle and turn slightly when the wearer is amused, or lie flat when he or she is tired or bored, demonstrating what the company calls an ability to reveal emotion.

The company, called Neurowear, demonstrated its new product at the “Smile Bazar” at Omotesando Hills, which it captured on video and displays on its site. While the participants are clearly amused by the cat ears moving around, and there is much smiling and some laughing, it’s difficult to tell just how much control over the ears the wearers have. A natural question also arises as to whether people can get better at manipulating the ears if they wear them over time.

In spite of the gimmick quality of the necomimi, it’s obvious that the concept could have a more serious purpose, such as helping those with communication difficulties express themselves.


Also, a not-so-obvious part of the necomimi experience is the reaction of the people around the person wearing the ears. In the video, it’s impossible not to notice the looks of mirth on the faces of bystanders, and it’s difficult not to smile yourself as you watch the people in the video try on the device. Their reactions, and the way the ears move combined with the expressions on their faces, are actually rather profound, though it’s hard to say why. Whether it’s the cuteness factor, or a feeling that something is being conveyed by the person, albeit through artificial ears, that you don’t generally see any other way, there is something unique and sweet about the whole human/machine interaction that very clearly evokes something in others.

The necomimi is another in a long line of products that listen and respond to brain waves, and doubtless there will be many more, though what’s not certain is whether they will be nearly as cute.




Source

Friday, May 6, 2011

Snail Braille reader could read books to the blind


(PhysOrg.com) -- To most of us, Braille is largely a mystery. It feels really cool, but the idea of actually reading it is something of a pipe dream; our sense of touch simply is not as sensitive as that of a practiced blind reader. That is not a problem if you have merely picked up a Braille book out of curiosity. If, however, you have recently lost your eyesight, then it is a major problem. As with learning any new language, it takes time to adapt.

That time can be very frustrating, since writing and reading are still important forms of communication in our society. That is where a tool such as the Snail Braille reader could come in handy.



This tool rolls over a straight line of Braille text, reads the Braille, and then translates it into speech. The machine, which is capable of storing text for later replay, can also be paired with a standard Bluetooth headset, similar to the ones you get with your cell phone. That is good news for students who want to study without having to search for the page in a book, or for people who like to hear instructions while they are completing a task.
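The core translation step, mapping each scanned six-dot cell to a character before handing the text to a speech engine, can be sketched in a few lines of Python. This is an illustrative decoder covering the standard Braille patterns for the letters a through j, not firmware for the (still unbuilt) device:

```python
# Standard six-dot Braille cells, identified by which dots (1-6) are raised.
# Dots 1-3 run down the left column, dots 4-6 down the right.
BRAILLE_LETTERS = {
    frozenset({1}): "a",           frozenset({1, 2}): "b",
    frozenset({1, 4}): "c",        frozenset({1, 4, 5}): "d",
    frozenset({1, 5}): "e",        frozenset({1, 2, 4}): "f",
    frozenset({1, 2, 4, 5}): "g",  frozenset({1, 2, 5}): "h",
    frozenset({2, 4}): "i",        frozenset({2, 4, 5}): "j",
}

def decode_cells(cells):
    """Translate a sequence of scanned cells (each a set of raised dots) to text.

    Unknown patterns become '?' so the speech stage can flag them aloud."""
    return "".join(BRAILLE_LETTERS.get(frozenset(c), "?") for c in cells)

# A line of cells scanned left to right, spelling "bad":
print(decode_cells([{1, 2}, {1}, {1, 4, 5}]))  # bad
```

A real reader would also need the contraction, number, and capitalization rules of literary Braille, but the table-lookup core stays the same.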



The machine would also feature kinetic recharging, which could allow readers to charge the device while they are using it. The only snag is that this device has not yet been built; it is currently in the design and prototype stages of development. With proper funding, however, this tool could become indispensable to the newly blind.


Source

Wednesday, February 23, 2011

Japan company developing sensors for seniors (AP)


Japan's top telecoms company is developing a simple wristwatch-like device to monitor the well-being of the elderly, part of a growing effort to improve care of the old in a nation whose population is aging faster than anywhere else.

The device, worn like a watch, has a built-in camera, microphone and accelerometers, which measure the pace and direction of hand movements to discern what wearers are doing - from brushing their teeth to vacuuming or making coffee.

In a demonstration at NTT Corp.'s research facility, the test subject's movements were collected as data that popped up as lines on a graph - with each kind of activity showing up as a different pattern of lines. Using this technology, what an elderly person is doing during each hour of the day can be shown on a chart.
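Turning raw wrist-sensor readings into recognizable activity patterns, as in the demonstration above, is essentially windowed feature extraction followed by pattern matching. Here is a deliberately simplified Python sketch; the activity labels, feature choice, and centroid values are all invented for illustration, since the article does not describe the actual classification method:

```python
def features(window):
    """Summarize one window of accelerometer magnitudes as (mean, variance)."""
    n = len(window)
    mean = sum(window) / n
    var = sum((x - mean) ** 2 for x in window) / n
    return mean, var

# Hypothetical per-activity feature centroids (mean magnitude, variance),
# as might be learned from labeled recordings of each chore.
CENTROIDS = {
    "resting":   (1.0, 0.01),
    "brushing":  (1.2, 0.30),
    "vacuuming": (1.5, 1.20),
}

def classify(window):
    """Assign the nearest-centroid activity label to one window of samples."""
    m, v = features(window)
    return min(CENTROIDS, key=lambda a: (CENTROIDS[a][0] - m) ** 2
                                        + (CENTROIDS[a][1] - v) ** 2)

print(classify([1.0, 1.01, 0.99, 1.0]))  # a still wrist -> "resting"
print(classify([0.3, 2.7, 0.4, 2.6]))    # vigorous swings -> "vacuuming"
```

Classifying each successive window this way is what produces the hour-by-hour activity chart described in the demonstration.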

The prototype was connected to a personal computer for the demonstration, but researchers said such data could also be relayed by wireless or stored in a memory card to be looked at later.

Plans for commercial use are still undecided. But similar sensors are being tested around the world as tools for elderly care.

In the U.S., the Institute on Aging at the University of Virginia has been carrying out studies on practical applications of what it calls "body area networks" to promote senior independent living.

What's important is that wearable sensors be easy to use, unobtrusive, ergonomic and even stylish, according to the institute, based in Charlottesville, Virginia. Costs, safety and privacy are also key.

Despite the potential for such technology in Japan, a nation filled with electronics and technology companies, NTT President Satoshi Miura said Japan is likely falling behind global rivals in promoting practical uses.

Worries are growing that the Japanese government has not been as generous with funding and other support needed to allow the technology to grow into a real business, despite the fact that Japan is among the world's most advanced nations in the proliferation of broadband.

More than 90 percent of Japan's households are equipped with either optical fiber or high-speed mobile connections.

"But how to use the technology is the other side of the story,"Miura said in a presentation."We will do our best in the private sector, but I hope the government will help."

Nintendo Co.'s Wii game-console remote controller is one example of such sensors becoming a huge business success. But that's video-game entertainment, not social welfare.

George Demiris, associate professor at the School of Medicine at the University of Washington, in Seattle, says technology for the elderly is complex, requiring more than just coming up with sophisticated technology.

Getting too much data, for instance, could simply burden already overworked health care professionals, and overly relying on technology could even make the elderly miserable, reducing opportunities for them to interact with real people, he said.

"Having more data alone does not mean we will have better care for older adults,"Demiris said in an e-mail.

"We can have the most sophisticated technology in place, but if the response at the other end is not designed to address what the data show in a timely and efficient way, the technology itself is not useful,"he said.


Source

Tuesday, February 22, 2011

Self-correcting robots, at-home 3-D printing are on horizon, says researcher at AAAS

Robots that can self-improve and machines that "print" products at home are technologies soon to become increasingly available, said Cornell's Hod Lipson at the 2011 American Association for the Advancement of Science (AAAS) annual meeting, Feb. 17-21.

Lipson, associate professor of mechanical and aerospace engineering and of computing and information science, said Feb. 19 that robots can observe and reconstruct their own behaviors and use this information to adapt to new circumstances.

Such advances are important because self-reflection plays a key role in accelerating adaptation by reducing costs of physical experimentation, he said. Similarly, the ability of a machine to reconstruct the morphology and behavior of other machines is important to cooperation and competition. Lipson demonstrated a number of experiments on self-reflecting robotic systems, arguing that reflective processes are essential in achieving meta-cognitive capacities, including consciousness and, ultimately, a form of self-awareness.

In a second talk (Feb. 21), Lipson discussed the emergence of solid free-form fabrication technology, which allows 3-D printing of various structures, layer by layer, from electronic blueprints. While this technology has existed for more than two decades, the process has recently been explored for biomedical applications. In particular, new developments in multimaterial printing may allow these compact "fabbers" to move from printing custom implants and scaffolds to "printing" live tissue.

His talk also touched on his experience with the open-source Fab@Home project and its use in printing a variety of biological and non-biological integrated systems. He concluded with some of the opportunities that this technology offers for moving from traditional tissue engineering to digital tissue constructs.

Lipson directs Cornell's Computational Synthesis group, which focuses on automatic design, fabrication and adaptation of virtual and physical machines. He has led work in such areas as evolutionary robotics, multimaterial functional rapid prototyping, machine self-replication and programmable self-assembly. He was one of five Cornell faculty members who presented at this year's AAAS meeting.


Source

Monday, February 21, 2011

Putting your brain in the driver's seat (w/ Video)


(PhysOrg.com) -- Picture driving your car without ever touching the wheel - a vehicle so responsive to you that it is effectively jacked into your thoughts. It sounds like the technology of the future, something out of a sci-fi movie, doesn't it? Well, as it turns out, the future is now.

A team of German researchers, led by Raul Rojas, an AI professor at the Freie Universität Berlin, has created a car that can be driven entirely by human thoughts. The car, which has been given the name BrainDriver, was shown off to the world in a video that highlighted the thought-powered driving system on a trip to the airport.


The BrainDriver records brain activity with the help of an Emotiv neuroheadset, a non-invasive brain-computer interface based on electroencephalography (EEG) sensors, made by the San Francisco-based company Emotiv. The neuroheadset was originally designed for gaming. Like most new devices, the human has to be trained in order to use the interface properly. After some practice runs moving a virtual object, the user can be up and driving a modified Volkswagen Passat Variant 3c. The driver's thoughts are able to control the engine, brakes, and steering of the car. Currently, there is a small delay between the driver's thoughts and the car's response.
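Conceptually, the control chain is simple: the headset classifies a trained mental pattern into a discrete command, and the drive-by-wire layer turns each command into a small steering or throttle change. The Python sketch below illustrates that idea only; the command names, step sizes, and clamping are invented for illustration and are not Emotiv's API or the BrainDriver team's actual interface:

```python
# Hypothetical mapping from recognized mental commands to control increments:
# (steering_delta, throttle_delta). Steering lives in [-1, 1], throttle in [0, 1].
COMMAND_DELTAS = {
    "left":  (-0.1, 0.0),
    "right": (+0.1, 0.0),
    "push":  (0.0, +0.05),   # accelerate
    "pull":  (0.0, -0.05),   # brake
}

class DriveState:
    """Accumulates incremental commands into clamped steering/throttle values."""

    def __init__(self):
        self.steering = 0.0
        self.throttle = 0.0

    def apply(self, command):
        ds, dt = COMMAND_DELTAS.get(command, (0.0, 0.0))  # unknown -> no-op
        self.steering = max(-1.0, min(1.0, self.steering + ds))
        self.throttle = max(0.0, min(1.0, self.throttle + dt))

state = DriveState()
for cmd in ["push", "push", "left", "left", "right"]:
    state.apply(cmd)
print(round(state.steering, 2), round(state.throttle, 2))  # -0.1 0.1
```

Using small increments rather than direct position control is one plausible way to tolerate the recognition delay the article mentions: a misread command nudges the car slightly instead of yanking the wheel.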

No word yet on how detailed the controls will be for other necessary functions - for example, opening the gas cap to fill up. The researchers selected the headset after rejecting several other options, including the iPad and eye-tracking devices.

The car is currently only in the prototype phase, and no decision has been made as to whether it will ever be made available to the public once it becomes roadworthy.


Source

Saturday, February 19, 2011

Machines beat us at our own game: What can we do?


(AP) -- Machines first out-calculated us in simple math. Then they replaced us on the assembly lines, explored places we couldn't get to, even beat our champions at chess. Now a computer called Watson has bested our best at "Jeopardy!"

A gigantic computer created by IBM specifically to excel at answers-and-questions left two champs of the TV game show in its silicon dust after a three-day tournament, a feat that experts call a technological breakthrough.

Watson earned $77,147, versus $24,000 for Ken Jennings and $21,600 for Brad Rutter. Jennings took it in stride, writing "I for one welcome our new computer overlords" alongside his correct Final Jeopardy answer.

The next step for the IBM machine and its programmers: taking its mastery of the arcane and applying it to help doctors plow through blizzards of medical information. Watson could also help make Internet searches far more like a conversation than the hit-or-miss things they are now.

Watson's victory leads to the question: What can we measly humans do that amazing machines cannot do or will never do?

The answer, like all of "Jeopardy!," comes in the form of a question: Who - not what - dreamed up Watson? While computers can calculate and construct, they cannot decide to create. So far, only humans can.

"The way to think about this is: Can Watson decide to create Watson?"said Pradeep Khosla, dean of engineering at Carnegie Mellon University in Pittsburgh."We are far from there. Our ability to create is what allows us to discover and create new knowledge and technology."

Experts in the field say it is more than the spark of creation that separates man from his mechanical spawn. It is the pride creators can take, the empathy we can all have with the winners and losers, and that magical mix of adrenaline, fear and ability that kicks in when our backs are against the wall and we are in survival mode.

What humans have that Watson, IBM's earlier chess champion Deep Blue, and all their electronic predecessors and software successors do not have and will not get is the sort of thing that makes song, romance, smiles, sadness and all that jazz. It's something the experts in computers, robotics and artificial intelligence know very well because they can't figure out how it works in people, much less duplicate it. It's that indescribable essence of humanity.

Nevertheless, Watson, which took 25 IBM scientists four years to create, is more than just a trivia whiz, some experts say.

Richard Doherty, a computer industry expert and research director at the Envisioneering Group in Seaford, N.Y., said he has been studying artificial intelligence for decades. He thinks IBM's advances with Watson are changing the way people think about artificial intelligence and how a computer can be programmed to give conversational answers - not merely lists of sometimes not-germane entries.

"This is the most significant breakthrough of this century,"he said."I know the phones are ringing off the hook with interest in Watson systems. The Internet may trump Watson, but for this century, it's the most significant advance in computing."

And yet Watson's creators say this breakthrough gives them an extra appreciation for the magnificent machines we call people.

"I see human intelligence consuming machine intelligence, not the other way around,"David Ferrucci, IBM's lead researcher on Watson, said in an interview Wednesday."Humans are a different sort of intelligence. Our intelligence is so interconnected. The brain is so incredibly interconnected with itself, so interconnected with all the cells in our body, and has co-evolved with language and society and everything around it."

"Humans are learning machines that live and experience the world and take in an enormous amount of information - what they see, what they taste, what they feel, and they're taking that in from the day they're born until the day they die,"he said."And they're learning from all the input all the time. We've never even created something that attempts to do that."

The ability of a machine to learn is the essence of the field of machine learning. And there have been great advances in the field, but nothing near human thinking.

"I've been in this field for 25 years and no matter what advances we make, it's not like we feel we're getting to the finish line,"said Carnegie Mellon University professor Eric Nyberg, who has worked on Watson with its IBM creators since 2007."There's always more you can do to bring computers to human intelligence. I'm not sure we'll ever really get there."

Bart Massey, a professor of computer science at Portland State University, quipped: "If you want to build something that thinks like a human, we have a great way to do that. It only takes like nine months and it's really fun."

Working on computer evolution "really makes you appreciate the fact that humans are such unique things and they think such unique ways," Massey said.

Nyberg said it is silly to think that Watson will lead to an end or a lessening of humanity. "Watson does just one task: answer questions," he said. And it gets things wrong, such as saying grasshoppers eat kosher, which Nyberg said is why humans won't turn over launch codes to it or its computer cousins.

Take Tuesday's Final Jeopardy, which Watson flubbed and its human competitors handled with ease. The category was U.S. cities, and the clue was: "Its largest airport is named for a World War II hero; its second largest, for a World War II battle."

The correct response was Chicago, but Watson weirdly wrote, "What is Toronto?????"

A human would have considered Toronto and discarded it because it is a Canadian city, not a U.S. one, but that's not the type of comparative knowledge Watson has, Nyberg said.

"A human working with Watson can get a better answer,"said James Hendler, a professor of computer and cognitive science at Rensselaer Polytechnic Institute."Using what humans are good at and what Watson is good at, together we can build systems that solve problems that neither of us can solve alone."

That's why Paul Saffo, a longtime Silicon Valley forecaster, and others see better search engines as the ultimate benefit from the "Jeopardy!"-playing machine.

"We are headed toward a world where you are going to have a conversation with a machine,"Saffo said."Within five to10 years, we'll look back and roll our eyes at the idea that search queries were a string of answers and not conversations."

The beneficiaries, IBM's Ferrucci said, could include technical support centers, hospitals, hedge funds or other businesses that need to make lots of decisions that rely on lots of data.

For example, a medical center might use the software to better diagnose disease. Since a patient's symptoms can generate many possibilities, the advantage of a Watson-type program would be its ability to scan the medical literature faster than a human could and suggest the most likely result. A human, of course, would then have to investigate the computer's finding and make the final diagnosis.

IBM isn't saying how much money it spent building Watson. But Doherty said the company told analysts at a recent meeting that the figure was around $30 million. Doherty believes the number is probably higher, in the "high dozens of millions."

In a few years, Carnegie Mellon University robotics whiz Red Whittaker will be launching a robot to the moon as part of a Google-sponsored challenge. When it lands, the robot will make all sorts of key and crucial real-time decisions - as Neil Armstrong and Buzz Aldrin did 42 years ago - but what humans can do that machines can't will already have been done: create the whole darn thing.


Source

Friday, February 18, 2011

Robotic hand nearly identical to a human one (w/ Video)


(PhysOrg.com) -- When it comes to finding the single best tool for building, digging, grasping, drawing, writing, and many other tasks, nothing beats the human hand. Human hands have evolved over millions of years into four fingers and a thumb that can precisely manipulate a wide variety of objects. In a recent study, researchers have attempted to recreate the human hand by building a biomimetic robotic hand that they have optimized to achieve near-human appearance and performance.

The researchers, Nicholas Thayer and Shashank Priya from Virginia Tech in Blacksburg, Virginia, have published their study on the robotic hand in a recent issue of <i>Smart Materials and Structures</i>.

The researchers call the hand a dexterous anthropomorphic robotic typing hand, or DART hand, as the main objective was to demonstrate that the hand could type on a keyboard. They showed that a single DART hand could type at a rate of 20 words per minute, compared to the average human typing speed of 33 words per minute with two hands. The researchers predict that two DART hands could type at least 30 words per minute. Ultimately, the DART hand could be integrated into a humanoid robot for assisting elderly or disabled people, performing tasks such as typing, reaching for objects, and opening doors.

To design the DART hand, the researchers began by investigating the physiology of the human hand, including its musculoskeletal structure, range of motion, and grasp force. The human hand has about 40 muscles that provide 23 degrees of freedom in the hand and wrist. To replicate these muscles, the researchers used servo motors and wires extending throughout the robotic hand, wrist, and forearm. The robotic hand encompassed a total of 19 motors and achieved 19 degrees of freedom.


The DART hand types “holly jolly.” Video credit: Nicholas Thayer and Shashank Priya.

“[The greatest significance of our work is the] optimization of the hand design to reduce the number of motors in order to achieve a similar degree of freedom and range of motion as the human hand,” Priya told PhysOrg.com. “This also allowed us to achieve dimensions that are on par with the human hand. We were also able to program the hand in such a manner that a high typing efficiency can be obtained.”

One small difference between the DART hand and the human hand is that each finger in the robotic hand is controlled independently. In the human hand, muscles are sometimes connected at the tendons so they can move joints in more than one finger (which is particularly noticeable with the ring and pinky fingers).

The robotic hand can be controlled by input text, which comes from either a keyboard or a voice recognition program. When typing, a finger receives a command to position itself above the correct letter on the keyboard. The finger presses the key with a specific force, and the letter is checked for accuracy; if there is a typo, the hand presses the delete key. By moving the forearm and wrist, a single DART hand can type any key on the main part of a keyboard.
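That press-verify-correct loop is easy to express in code. Below is a simplified Python sketch of the control logic only; the real system drives motors and checks the screen, so here a simulated keyboard stands in for the hardware, with one deliberate mistype injected to exercise the delete-and-retry path (function names and the fault model are invented for illustration):

```python
def type_text(text, press_key, read_last_char):
    """Type `text` one key at a time, verifying each keystroke.

    press_key(c): strike the key for character c (or "DEL" to delete).
    read_last_char(): report the character that actually appeared on screen.
    """
    for ch in text:
        while True:
            press_key(ch)
            if read_last_char() == ch:
                break            # keystroke verified, move to next character
            press_key("DEL")     # typo detected: delete and retry

# Simulated keyboard that mistypes the first 'j' once.
typed, faults = [], {"j": 1}

def press_key(c):
    if c == "DEL":
        typed.pop()              # remove the last (wrong) character
    elif faults.get(c, 0) > 0:
        faults[c] -= 1
        typed.append("x")        # wrong key pressed this time
    else:
        typed.append(c)

type_text("holly jolly", press_key, lambda: typed[-1])
print("".join(typed))  # holly jolly
```

The same verify-then-correct structure is what lets the physical hand recover from an inaccurate finger placement instead of leaving the typo in place.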

The DART hand isn’t the first robotic hand to be designed. During the past several years, robotic hands with varying numbers of fingers have been developed for a variety of purposes, from prosthetics to manufacturing. But as far as the researchers know, no robotic hand can accurately type at a keyboard at human speed. When the researchers compared the functional potential of the DART hand to other robotic hands, the DART hand had an overall functional advantage. In addition, the researchers used rapid prototyping to fabricate all the components, significantly reducing the cost, weight, and fabrication time.

In the future, the researchers plan to make further improvements to the robotic hand, including covering the mechanical hand in a silicone skin, as well as adding temperature sensors, tactile sensors, and tension sensors for improved force-feedback control. These improvements should give the robotic hand the ability to perform more diverse tasks.

“We have already experimented with grasping tasks,” Priya said. “In the current form it is not optimized for grasping, but in our next version there will be enough sensors to provide feedback for controlling the grasping action.”


Source

Thursday, February 17, 2011

Computer creams human 'Jeopardy!' champs

Computer creams human 'Jeopardy!' champs

Enlarge

An IBM computer creamed two human champions on the popular US television game show "Jeopardy!" Wednesday in a triumph of artificial intelligence.

"I for one welcome our new computer overlords,"contestant Ken Jennings -- who holds the"Jeopardy!"record of 74 straight wins -- cheekily wrote on his answer screen at the conclusion of the much-hyped three-day showdown.

"Watson"-- named after Thomas Watson, the founder of the US technology giant -- made some funny flubs in the game, but prevailed by beating his human opponents to the buzzer again and again.

The final tally from the two games: Watson at $77,147, Jennings at $24,000 and $21,600 for reigning champion Brad Rutter, who previously won a record $3.25 million on the quiz show.

"Watson is fast, knows a lot of stuff and can really dominate a match,"host Alex Trebek said at the opening of Wednesday's match.

Watson, which was not connected to the Internet, played the game by crunching through multiple algorithms at dizzying speed and attaching a percentage score to what it believed was the correct response.
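That scoring step can be illustrated in the abstract: candidate responses each accumulate evidence, the totals are normalized into confidence percentages, and the machine buzzes only when its top confidence clears a threshold. The Python toy below shows only that shape; the candidates, scores, and threshold are invented, and IBM's actual DeepQA pipeline combines the outputs of many scorers with a trained statistical model:

```python
def confidences(evidence_scores):
    """Normalize raw evidence totals into percentages summing to 100."""
    total = sum(evidence_scores.values())
    return {ans: 100.0 * s / total for ans, s in evidence_scores.items()}

def decide(evidence_scores, buzz_threshold=50.0):
    """Return (best_answer, confidence_percent, should_buzz)."""
    conf = confidences(evidence_scores)
    best = max(conf, key=conf.get)
    return best, conf[best], conf[best] >= buzz_threshold

# Invented evidence totals for one clue:
scores = {"What is Chicago?": 6.0, "What is Toronto?": 3.0, "What is Denver?": 1.0}
best, conf, buzz = decide(scores)
print(best, round(conf), buzz)  # What is Chicago? 60 True
```

The threshold is what separates "dominating the buzzer" from staying silent: with weaker, more evenly spread evidence, no candidate clears 50 percent and the machine declines to ring in.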

"Jeopardy!", which first aired on US television in 1964, tests a player's knowledge in a range of categories, from geography to politics to history to sports and entertainment.

In a twist on traditional game play, contestants are provided with clues and need to supply the questions.

The complex language of the brain-teasers meant Watson didn't merely need to have access to a vast database of information, it also had to understand what the clue meant.

One impressive display came when Watson answered "What is United Airlines" to the clue "Nearly 10 million YouTubers saw Dave Carroll's clip called this 'friendly skies' airline 'breaks guitars.'"

But a Final Jeopardy flub on Tuesday's show prompted one IBM engineer to wear a Toronto Blue Jays jacket to the final day of taping and Trebek to joke that he had learned the Canadian metropolis was a US city.

Watson had answered "What is Toronto????" to the question: "Its largest airport is named for a WWII hero. Its second largest, for a WWII battle" under the category "US Cities."

Jennings and Rutter both gave Chicago as the correct answer.

Watson's success was a remarkable achievement and a historic moment for artificial intelligence, said Oren Etzioni, a computer science professor at the University of Washington.

"! is a particularly difficult form of natural language because it's so open-ended and it's so full of puns and quirky questions,"he told AFP.

But while Watson was impressive, it's still light years away from the kind of interactive, thinking computers imagined by science fiction, like the murderous HAL in the film "2001: A Space Odyssey."

"The day where robots will keep us as pets is still very far away,"Etzioni said.

That's because Watson can't really think for itself or even fully understand the questions, and instead "employs a lot of tricks and special cases to do what it's doing," he said.

The next step is to see how this technology can be used in applications with real economic and social impacts.

Watson, which has been under development at IBM Research labs in New York since 2006, is the latest machine developed by IBM to challenge mankind.

In 1997, an IBM computer named"Deep Blue"defeated world chess champion Garry Kasparov in a closely-watched, six-game match.

Like Deep Blue, Watson "represents a major leap in the capacity of information technology systems to identify patterns, gain critical insight and enhance decision making," IBM chairman Sam Palmisano said in a promotional video.

"We expect the science underlying Watson to elevate computer intelligence, take human to computer communication to new levels and to help extend the power of advanced analysts to make sense of vast quantities of structured and unstructured data."

IBM already has plans to apply the technology to help doctors track patients and stay up to date on rapidly evolving medical research.



Wednesday, February 16, 2011

Industry researchers predict future of electronic devices


The just-released February issue of the <i>Journal of the Society for Information Display</i> contains the first-ever critical review of current and future prospects for electronic paper functions.

These technologies will bring us devices like:

  • full-color, high-speed, low-power e-readers;
  • iPads that can be viewed in bright sunlight, or
  • e-readers and iPads so flexible that they can be rolled up and put in a pocket.
The University of Cincinnati's Jason Heikenfeld, associate professor of electrical and computer engineering and an internationally recognized researcher in the field of electrofluidics, is the lead author of the paper titled "A Critical Review of the Present and Future Prospects for Electronic Paper." Others contributing to the article are industry researcher Paul Drzaic of Drzaic Consulting Services; research scientist Jong-Souk (John) Yeo of Hewlett-Packard's Imaging and Printing Group; and research scientist Tim Koch, who currently manages Hewlett-Packard's effort to develop flexible electronics.

TOP TEN LIST OF COMING e-DEVICES

Based on this latest article and his ongoing research and development related to e-devices, UC's Heikenfeld provides the following top ten list of e-devices that consumers can expect both near term and in the next ten to 20 years.

Heikenfeld is part of an internationally prestigious UC team that specializes in research and development of e-devices.


Within ten to 20 years, we will see e-devices with magazine-quality color, viewable in bright sunlight but requiring low power. "Think of this as the green iPad or e-reader, combining high function and high color with low power requirements," said Heikenfeld. Credit: Noel Leon Gauthier, U. of Cincinnati

Coming later this year:
  • Color e-readers will be out in the consumer market by mid-2011. However, cautions Heikenfeld, the color will be muted compared to what consumers are accustomed to on, say, an iPad. Researchers will continue to work toward next-generation (brighter) color in e-readers as well as high-speed functionality that will eventually allow for point-and-click web browsing and video on devices like the Kindle.
Already in use, but expanded adoption and breakthroughs imminent:
  • Electronic shelf labels in grocery stores. Currently, it takes an employee a whole day to label the shelves in a grocery store. Imagine the cost savings if all such labels could be updated within seconds – allowing for, say, specials for one type of consumer who shops at 10 a.m. and updated specials for other shoppers stopping in at 5:30 p.m. Such electronic shelf labels are already in use in Europe and on the West Coast, and in limited, experimental use in other locales. The breakthrough for such electronic labels came when they could be implemented as low-power devices. Explained Heikenfeld, "The electronic labels basically only consume significant power when they are changed. When it's a set, static message and price, the e-shelf label is consuming such minimal power – thanks to reflective display technology – that it's highly economical and effective." The current e-shelf labels are monochrome, and researchers will keep busy creating high-color labels with low-power needs.
  • The new "no knobs" etch-a-sketch. This development allows children to draw with electronic ink and erase the whole screen with the push of a button. It was created based on technology developed in Ohio (Kent State University). Stated Heikenfeld, "Ohio institutions, namely the University of Cincinnati and Kent State, are international leaders in display and liquid optics technology."
  • Technology in hot-selling Glow Boards will soon come to signage. Crayola's Glow Board is partially based on UC technology developments, which Crayola then licensed. While the toy allows children to write on a surface that lights up, the technology has many applications, and consumers can expect to see those imminently. These include indoor and outdoor sign displays that, when turned off, seem to be clear windows. (Current LCD – liquid crystal display – sign technology requires extremely high power usage and, when turned off, provides nothing more than a non-transparent black background.)
Coming within two years:
  • An e-device that will consume little power while providing high function and color (video playing and web browsing) and good visibility in sunlight. Cautions Heikenfeld, "The color on this first-generation low-power, high-function e-device won't be as bright as what you get today from LCD (liquid crystal display) devices (like the iPad) that consume a lot of power. The color on the new low-power, high-function e-device will be about one third as bright as the color you commonly see on printed materials. Researchers, like those of us at UC, will continue to work to produce the Holy Grail of an e-device: bright color, high function (video and web browsing) with low power usage."
Coming within three to five years:
  • Color-adaptable e-device casings. The color and/or designed pattern of the plastic casing that encloses your cell phone will be adaptable. In other words, you'll be able to change the color of the phone itself to a professional black-and-white for work or to a bright and vivid color pattern for a social outing. "This is highly achievable," said Heikenfeld, adding, "It will be able to change color either automatically by reading the color of your outfit that day or by means of a downloaded app. It's possible because of low-power, reflective technology" (wherein the displayed pattern or color change is powered by available ambient light vs. powered by an electrical charge).

    Expect the same feature to become available in devices like appliances. "Yes," said Heikenfeld, "we'll see a color-changing app, so that you can have significant portions of your appliances be one color one day and a different color or pattern the next."

  • Bright-color but low-power digital billboards visible both night and day. Currently, the digital billboards commonly seen are based on LEDs (light-emitting diodes), which consume high levels of electric power and still lose color when in direct sunlight. Heikenfeld explained, "We have the technology that would allow these digital billboards to operate by simply reflecting ambient light, just like conventional printed billboards do. That means low power usage and good visibility for the displays even in bright sunlight. However, the color doesn't really sizzle yet, and many advertisers using billboards will not tolerate a washed-out color."
  • Foldable or roll-it-up e-devices. Expect the first-generation foldable e-devices to be monochrome; the first will come from Polymer Vision in the Netherlands, with color expected later, using licensed UC-developed technology. The challenge in creating foldable e-devices, according to Heikenfeld, has been the device screen, which is currently made of rigid glass. But what if the screen were a paper-thin plastic that rolled like a window shade? You'd have a device like an iPad that could be folded or rolled up tens of thousands of times. Just roll it up and stick it in your pocket.


    University of Cincinnati researcher Jason Heikenfeld is part of an internationally prestigious team that specializes in research and development of e-devices. Based on his work, he provides a top ten list of electronic paper devices that consumers can expect in both the near term and in the next ten to 20 years. Credit: Two original images by Noel Leon Gauthier, University of Cincinnati.

Within ten to 20 years:
  • e-Devices with magazine-quality color, viewable in bright sunlight but requiring low power. "Think of this as the green iPad or e-reader, combining high function and high color with low power requirements," said Heikenfeld.
  • The e-Sheet, a virtually indestructible e-device that will be as thin and as rollable as a rubber place mat. It will be full color and interactive, while requiring little power to operate, since it will charge via sunlight and ambient room light. It will also be so "tough," and will use only wireless connection ports, that you can leave it out overnight in the rain. In fact, you'll be able to wash it or drop it without damaging the thin, highly flexible casing.

