Saturday, 28 May 2016

What Happens When Drones Start Thinking on Their Own?

Drones – or unmanned aerial vehicles (UAVs) as they are increasingly known – have reached a mass-market tipping point. You can buy them on the high street for the price of a smartphone and, despite a large DIY drone community, the out-of-the-box versions are pretty extraordinary, fitted with built-in cameras and “follow me” technology, where your drone will follow you as you walk, run, surf or hang-glide. Their usefulness to professional filmmakers led to the first New York Drone Film Festival, held in March 2015.
Technologically speaking, drones' abilities have all manner of real-world applications. Some of the highlights from the US$1m Drones for Good competition include a drone that delivers a life-ring to people in distress in the water. Swiss company Flyability took the international prize for Gimball, a drone whose innovative design allows it to collide with objects without becoming destabilised or hard to control, making it useful for rescue missions in hard-to-reach areas.
The winner of the national prize was a drone that demonstrates the many emerging uses for drones in conservation. In this case, the Wadi drone can help record and document the diversity of flora and fauna, providing a rapid way to assess changes to the environment.
More civilian uses than military
What does this all mean for how we think about drones in society? It wasn’t long ago that the word “drones” was synonymous with death, destruction and surveillance. Can we expect to each have our own personal, wearable drone, as the mini-drone Nixie promises? Of course the technology continues to advance within a military context, where drones – not the kind you can pick up, but large, full-scale aircraft – are serious business. There’s even a space drone, the Boeing X-37, which has spent years in automated orbit, while others are in development to help explore other planets.
There’s no escaping the fact that drones, like a lot of technology now in the mainstream, have trickled down from their military origins. There are graffiti drones, drone bands, Star Wars-style drone racing competitions using virtual reality interfaces, theatrical drone choreography and even beautiful drone sculptures in the sky.
There are a few things about drones that are extremely exciting – and controversial. The autonomous capabilities of drones can be breathtaking: watching one fly off at speed on its own feels extremely futuristic. But this is not strictly legal at present because of the associated risks.
A pilot must always have “line of sight” of the drone and the capacity to take control. Technically, even the latest drones still require a flight path to be pre-programmed, so the drone isn’t really making autonomous decisions yet, although the new DJI Inspire comes close. Drone learning has to be the next step in their evolution.
Yet this prospect of artificial intelligence raises further concerns about control. If a drone could become intelligent enough to take off, fly, get up to all kinds of mischief and locate a power source to recharge, all without human intervention or oversight, then where does that leave humanity?


Virtual Reality Can Make You Forget Pain

A couple of weeks from now I will be in hospital undergoing a knee replacement. It will be the most extreme surgery I’ve ever experienced and I’m pretty scared. I’ve been told that I can expect to endure excruciating pain afterwards but I won’t be allowed to lie in bed feeling sorry for myself. In order to ensure a good recovery I have to get up and exercise the new joint numerous times a day. Make no mistake, this is going to hurt.
It may not be too long, however, until patients like me will be able to ward off their agonies simply by playing virtual reality games. This surprising advance is already being tested, but the premise behind it is not new.
As neuroscientist David Linden recently explained on NPR, the brain has more control over pain than we might at first imagine. It can say “hey, that’s interesting, turn up the volume on this pain information that’s coming in”, or it can say “turn down the volume on that and pay less attention to it”. In Linden’s book Touch: The Science of Hand, Heart and Mind, he discusses how our perception of pain relies on the brain and how it processes information coming from the nervous system.
Lieutenant Sam Brown
Researchers are now attempting to see if this process can be manipulated through gaming. In the US, a group of patients suffering from severe burns were invited to play SnowWorld, a virtual reality computer game devised by two cognitive psychologists, Hunter Hoffman and Dave Patterson, to persuade the brain to ignore pain signals in favour of more compelling scenarios. Their motivation, Hoffman said, was that while opioids (morphine and morphine-related chemicals) can control burn pain when the patient is at rest, they are nowhere near adequate to quench the agony of daily bandage changes, wound cleaning and staple removals.
The best-known SnowWorld player is Lieutenant Sam Brown who, during his first tour of duty in Kandahar, Afghanistan, in 2008, suffered third-degree burns over 30% of his body. An IED buried in the road exploded under the vehicle he was travelling in, engulfing Brown in a fireball. His injuries were so severe he had to be kept in a medically induced coma for several weeks. Back in the US, Brown endured more than two dozen painful surgeries, but none were as bad as the daily ritual of caring for his wounds. When nurses attended to his burns and helped him perform the necessary physical therapies, he experienced the most excruciating pain.
In 2012, NBC News reported on Brown’s experience and how the pain of dressing burn wounds could be so intense it could make patients relive the original trauma. In Brown’s case the procedures were so unbearable that on some occasions his superior officers had to order him to undergo treatment.
For Brown, help arrived not in the form of new kinds of medicines or dressings, but in the form of a video game. Brown was one of the first participants in SnowWorld’s pilot study, which was designed in conjunction with the US military to test whether it really could help wounded soldiers.

A distracting annoyance
At the time, Hoffman’s main work at the University of Washington was using virtual reality techniques to help people overcome a pathological fear of spiders. Patterson, based at the Harborview Burn Centre in Seattle, is an expert in psychological techniques such as hypnosis that can be used to help burn patients.
It was already known that the way we experience pain can be psychologically manipulated – for example, anticipating pain can make it worse. Research looking at how soldiers experience pain has also revealed how emotions can affect how that pain feels. So if your brain can interpret pain signals differently depending on what you’re thinking or feeling at the time, why not see if the experience of pain can be altered by deliberately diverting a patient’s attention towards something else? If it worked, the wound care could become more of a distracting annoyance and the distressing sensation of pain could be much reduced.
It was a long shot, but Hoffman’s expertise in virtual reality therapy made it possible to develop a game which offered that kind of diversion. To do this, patients first put on a virtual reality headset and earphones and are then transported through an icy canyon filled with snowball-hurling snowmen, flocks of squawking penguins, woolly mammoths and other surprises. Flying through the gently falling snow, they can retaliate by throwing their own snowballs. Often, they get so involved that they don’t even notice when their procedure has finished.
In the interview with NBC, Patterson explained how, during painful procedures like scrubbing a wound, the patient is taken into a soothing, icy world, a completely different place from reality. It works, he said, “for as long as people seem to be in the virtual world.”
The 2011 pilot study showed promising results. In some cases, soldiers with the worst pain reported that SnowWorld worked better than morphine. Brown himself is now much recovered, and attributes a large part of that success to his immersive experience.
Similar projects are happening elsewhere. In the UK, staff at Queen Elizabeth Hospital Birmingham and the University of Birmingham have been looking at how computer game technology can alleviate patients’ pain and discomfort through distraction therapy, in which patients “wander around” a virtual world based on real locations in the Devon countryside. The idea is to combine authentic natural landscapes with virtual reality aids that help patients divert their attention from pain while also offering opportunities for real physical exercise – walking uphill, going over bridges, sitting on the beach – that create movement inside the game.
As with SnowWorld, patients are generally injured military personnel. Most suffer from severe burns, but some also have phantom pain from amputated limbs.


'Chappie': How Realistic Is the Film's Artificial Intelligence?

The new film "Chappie" features an artificially intelligent robot that becomes sentient and must learn to navigate the competing forces of kindness and corruption in a human world.
Directed by Neill Blomkamp, whose previous work includes "District 9" and "Elysium," the film takes place in the South African city of Johannesburg. The movie's events occur in a speculative present when the city has deployed a force of police robots to fight crime. One of these robots, named "Chappie," receives an upgrade that makes him sentient.
Blomkamp said his view of artificial intelligence (AI) changed over the course of making the film, which opens in the United States on Friday (March 6). "I'm not actually sure that humans are going to be capable of giving birth to AI in the way that films fictionalize it," he said in a news conference.
Yet, while today's technology isn't quite at the level of that in the film, "We definitely have had major aspects of systems like Chappie already in existence for quite a while," said Wolfgang Fink, a physicist and AI expert at Caltech and the University of Arizona, who did not advise on the film.
Chappie in real life?
Existing AI computer systems modeled on the human brain, known as artificial neural networks, are capable of learning from experience, just like Chappie does in the film, Fink said. "When we expose them to certain data, they can learn rules, and they can even learn behaviors," he said. Today's AI can even teach itself to play video games.
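As a rough illustration of what "learning from experience" means in practice, here is a minimal sketch of a tiny artificial neural network learning a simple rule from examples. The data, network size and training settings are invented for the example and have nothing to do with the systems Fink describes.

```python
# Minimal sketch: a tiny neural network learning a rule (XOR) from examples.
# Illustrative only; not the architecture of any system mentioned in the article.
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)  # example inputs
y = np.array([[0], [1], [1], [0]], dtype=float)              # target rule (XOR)

# One hidden layer of 8 sigmoid units.
W1, b1 = rng.normal(size=(2, 8)), np.zeros(8)
W2, b2 = rng.normal(size=(8, 1)), np.zeros(1)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

lr = 1.0
for step in range(5000):
    # Forward pass: compute the network's current guesses.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    # Backward pass: nudge the weights to reduce the squared error.
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= lr * h.T @ d_out
    b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_h
    b1 -= lr * d_h.sum(axis=0)

print(np.round(out, 2))  # after training, typically close to [[0], [1], [1], [0]]
```

The network is never told the rule explicitly; it is shown examples and adjusts itself until its answers match, which is the sense in which such systems "learn behaviors."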

Something akin to Chappie's physical hardware also exists. Google-owned robotics company Boston Dynamics, based in Waltham, Massachusetts, has an anthropomorphic bipedal robot called PETMAN that can walk, bend and perform other movements on its own. And carmaker Honda has ASIMO, a sophisticated humanoid robot that once played soccer with President Barack Obama.
But Chappie goes beyond what current systems can do, because he becomes self-aware. There's a moment during the film when he says, "I am Chappie."
"That statement, if that's truly result of a reasoning process and not trained, that is huge," Fink said. An advance like that would mean robots could go beyond being able to play a video game or execute a task better than a human. The machine would be able to discriminate between self and nonself, which is a "key quality of any truly autonomous system," Fink said.
Childlike persona
As opposed to the "Terminator"-style killing machines of most Hollywood AI films, Chappie's persona is depicted as childlike and innocent — even cute.
To create Chappie, actor Sharlto Copley performed the part, and a team of animators "painted" the computer-generated robot over his performance, said visual effects supervisor Chris Harvey.
"We still had Sharlto on set [as Chappie]," Harvey told Live Science. But unlike many other special-effects-heavy films, "Chappie" did not use motion capture, which involves an actor wearing a special suit with reflective markers attached and having cameras capture the performer's movements. Instead, "the animators did that by hand," Harvey said.
Because Chappie is a robot, Harvey's biggest fear was not being able to have it convey emotion. So, his team gave Chappie an expressive pair of "ears" (antennae), a brow bar and a chin bar, which could express a fairly wide range of emotions, "almost like a puppy dog," Harvey said.
Humanity's biggest threat
In the film, Chappie's "humanity" is sharply contrasted with the inhumanity of Hugh Jackman's character Vincent Moore, a former military engineer who is developing a massive, brain-controlled robot called the "Moose" to rival intelligent 'bots like Chappie.
"The original concept for Jackman's character was always to be in opposition to artificial intelligence," Blomkamp told reporters.
Jackman himself takes a more positive view of AI. "Unlike my character, I like to think optimistically about these discoveries," Jackman said in a news conference. "I'm a firm believer that the pull for human beings is toward the good generally outweighing the bad."
But billionaire Elon Musk and famed astrophysicist Stephen Hawking have sounded alarms about the dangers of artificial intelligence, with Musk calling it humanity's "biggest existential threat."
Truly autonomous AI is not something most researchers are working on, but Fink shares some of these concerns.
"Depending on how old we are, we might see something in our lifetime which might become scary," Fink said. If it gets out of control, he said, "then we have created a monster."

Solar-Powered Plane Takes Off on Epic Round-the-World Flight

A solar-powered plane, dubbed Solar Impulse 2, took flight today (March 8), embarking on the historic first leg of a planned journey around the world.
The aircraft, which can fly without using any fuel, took off from Al Bateen Executive Airport in Abu Dhabi, capital of the United Arab Emirates, shortly after 11:10 p.m. EDT (7:10 a.m. local time on March 9). The plane will now fly roughly 250 miles (400 kilometers) in 12 hours to reach Oman, officials said.
Next, Solar Impulse 2 will make stops in India, Myanmar and China, before crossing the Pacific Ocean. The plane is then expected to fly across the continental United States, touching down in three cities along the way. After journeying across the Atlantic Ocean, the plane will make a stopover either in southern Europe or North Africa before returning to Abu Dhabi, according to company officials.
If successful, Solar Impulse 2 will become the first solar-powered aircraft to circumnavigate the globe. Swiss pilots and Solar Impulse co-founders André Borschberg and Bertrand Piccard have said the round-the-world flight will likely end in late July or early August.
"We are very ambitious in our goal, but modest given the magnitude of the challenge," Borschberg and Piccard said in a statement. "This is an attempt, and only time will tell if we can overcome the numerous weather, technical, human and administrative issues."
Borschberg was at the controls when Solar Impulse 2 took off from Abu Dhabi, but he and Piccard will alternate flying the solar-powered plane on each leg of the round-the-world trip.
Solar Impulse 2 is designed to fly day and night without using a single drop of fuel. The plane is powered entirely by solar panels and on-board batteries, which charge during the day to enable the ultra-lightweight plane to continue its journey throughout the night.
The plane has a wingspan of 236 feet (72 meters), and it weighs only 5,070 pounds (2,300 kilograms), or about the same as a car, company officials have said. The aircraft's wings are covered with 17,000 solar cells that power the plane's on-board systems.

The round-the-world flight is designed to demonstrate the possibilities of "green" technology and sustainable energy.
In 2013, Borschberg and Piccard completed an unprecedented coast-to-coast flight across the United States, using a first-generation prototype of the Solar Impulse plane. The first-of-its-kind flight took two months, and included five stops between California and New York.
Since that cross-country flight, the Solar Impulse team has made several upgrades to the aircraft to prepare for the current round-the-world journey. Engineers made Solar Impulse 2 more energy efficient by improving the quality of the aircraft's batteries and using lighter materials to construct the plane. The aircraft's cockpit was also upgraded to include more space and better ergonomic designs, which will help Borschberg and Piccard remain as comfortable as possible during long flights, according to company officials.

Your Life, and Your Future, Predicted by Data

Just a decade ago, using data to guide everyday decisions would have seemed far-fetched. Now, such "predictive analytics" are the norm: Simply type a query into Google and it magically suggests what you were searching for. How about those stories you read this morning in your Facebook news feed? That's predictive analytics at work again.
A survey by management consulting, technology services and outsourcing company Accenture found the use of predictive analytics technologies has tripled since 2009. That number isn't surprising when you recognize all the ways in which we use predictive analytics on a daily basis. 
Not a crystal ball, but it works like one
Consider Amazon, the ubiquitous one-click Internet retailer. By feeding user data such as links clicked, wish-list items, number of visits to the site and previously purchased items into an algorithm, the retailer can predict buyer activity accurately enough to send items to its warehouses before the merchandise has even been purchased.
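To make that concrete, here is a hedged sketch of the kind of model such a prediction might use. The features and toy data are invented for illustration; this is not Amazon's actual algorithm.

```python
# Illustrative sketch of a purchase-prediction model.
# The features and numbers are toy data, not Amazon's real algorithm.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Each row: [links_clicked, wish_list_items, site_visits, past_purchases]
X = np.array([
    [12, 3,  8, 5],
    [ 2, 0,  1, 0],
    [25, 6, 15, 9],
    [ 1, 1,  2, 0],
    [18, 2, 10, 4],
    [ 3, 0,  2, 1],
])
y = np.array([1, 0, 1, 0, 1, 0])  # 1 = bought something within the next week

model = LogisticRegression().fit(X, y)

# Probability that a new shopper will buy soon; a retailer might pre-position
# stock in a nearby warehouse once this crosses some threshold.
new_shopper = np.array([[15, 4, 9, 3]])
print(model.predict_proba(new_shopper)[0, 1])
```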
Amazon is so confident in its predictive algorithms, it'll put money on them. For example, if there's a large demand for flip-flops in Florida, the local fulfillment centers might fill up with flip-flops before orders are even placed, allowing for shorter delivery time when a customer finally clicks the purchase button. According to an article by Lance Ulanoff, chief correspondent and editor-at-large of Mashable, it's all a part of making the shipping process more efficient for the customer, and less costly for Amazon. 
Fantasy sports take a similar approach. There are 41.5 million people managing fantasy sports teams, according to the Fantasy Sports Trade Association. The selection of a player for a fantasy team depends on a number of different factors. Participants take into consideration things like historical performance, coaches and a player's current team. Selecting a player based on one variable just doesn't give an accurate picture of that player's value. 
Consider when quarterback Alex Smith left the San Francisco 49ers and joined the Kansas City Chiefs. Smith's productivity (points per game per year) jumped nearly 35 percent — and analytics tells us that this probably isn't just good luck. It could be because Kansas City uses Andy Reid's pass-first West Coast offense, which better suits Smith's abilities. Or it could even be because Smith operated better in Kansas City's climate.
Regardless of why, it's obvious that multiple variables, like team strategy and location, affect performance. Using predictive analytics offers a more robust model that takes multiple variables into account, as sketched below. Instead of leaving it to intuition or chance, an algorithm pulls together dozens of factors to identify which players will be most successful in a given situation.
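A very simple multi-variable projection might look like the sketch below; the factors and numbers are invented for illustration, not real NFL data.

```python
# Sketch of a multi-variable player projection; toy numbers, not real NFL data.
import numpy as np

# Columns: [historical points/game, pass-first scheme (1/0), avg. game-day temp (F)]
X = np.array([
    [14.0, 0, 55],
    [18.5, 1, 60],
    [11.2, 0, 40],
    [20.1, 1, 65],
    [16.3, 1, 50],
])
y = np.array([13.5, 19.8, 10.9, 21.5, 17.0])  # next-season points/game

# Fit a least-squares model that weighs all factors at once,
# rather than ranking players on any single stat.
A = np.column_stack([X, np.ones(len(X))])      # add an intercept term
coef, *_ = np.linalg.lstsq(A, y, rcond=None)

candidate = np.array([15.0, 1, 58, 1.0])       # a player moving to a pass-first team
print(candidate @ coef)                        # projected points/game
```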
Predicting health?
This data analysis trend is also present in industries like health care. Looking at analytics helps caregivers treat patients individually — for example, predictive algorithms can help show which patients are at risk of rehospitalization, which patients could benefit from another care episode (services that treat a clinical condition or procedure), and which would benefit from hospice care. My own company, Medalogix, helped reduce readmission rates for one home health care agency by nearly 36 percent in one year with the use of our predictive analytics software. Patients receive more personalized health care services, which improves care outcomes and quality, while providers reduce expenses.
Another leg on the stool
Predictive analytics, in all of its uses, should be used as a resource to better decision-making.
Consider the decision-making process as a three-legged stool. One leg represents the education and experience that goes into decision-making; the second leg is built upon the instinctual feelings considered throughout the process. Together, those two dimensions of traditional decision-making support the stool, but still don't keep it from falling over. Analytics is the third dimension — another leg to make it sturdier. Having more information makes for more informed, stronger decisions. 
While seemingly complex, predictive analytics makes lives simpler by modeling data into useful insights. By looking at how predictive analytics functions in our lives — like speeding up online deliveries or curbing hospital readmissions — the concept quickly becomes more accessible and less intimidating. Adding additional dimensions into decision-making through analytics creates a more robust and complete picture, allowing people and businesses to make the most informed decisions possible.


Ultra-Fast 'Hyperloop' Train Gets Test Track

The "Hyper loop," a hypothetical high-speed transportation system that could shuttle people between Los Angeles and San Francisco in only 30 minutes, just sped a bit closer to reality.First proposed in 2013 by billionaire entrepreneur Elon Musk, CEO of Tesla Motors and Space X, the hyper loop would transport passengers in floating pods inside low-pressure tubes at speeds of more than 750 mph (1,200 km/h).
Now, the company hyper loop Transportation Technologies Inc. (which is not affiliated with Musk or Tesla) has inked a deal with landowners in central California to build the world's first hyper loop test track, according to market research firm Navigant Research. The 5-mile (8 km) test track will be built along California's Interstate 5.
Separately, Musk has said he plans to build his own 5-mile test track, likely in Texas, for companies and students to test out potential hyper loop design.
How the Hyperloop will work
Musk laid out his plans for the Hyperloop in a paper published on the SpaceX website. He has described the super speedy mode of transport as a "cross between a Concorde, a rail gun and an air-hockey table."
The idea is that passenger pods will travel inside tubes under a partial vacuum and will be accelerated to blistering speeds using magnets. A set of fans attached to the pods will allow the train to rest on a cushion of air. The system would be powered by solar panels along the length of the tube.
The world's fastest magnetically levitated (maglev) train travels at about 310 mph (500 km/h). Maglev trains work by using magnets to produce both lift and propulsion. By contrast, the Hyperloop would use magnets only for propulsion, relying on compressed air for lift. Maglev trains are in operation in Shanghai and Japan, and South Korea plans to open one in June.
Hyperloop pods could theoretically travel very fast because they wouldn't have to overcome the friction between wheels and track that a typical train experiences, or the air resistance that conventional vehicles face at high speeds.
"You can go a couple of hundred miles an hour with a wheel, as the French and Germans and Japanese have proven," said Marc Thompson, an engineering consultant at Thompson Consulting Inc. in Boston, who has worked on maglev systems. But, "as you go faster, the drag force on the train becomes a very high energy cost."
The design Musk proposed would travel at speeds of up to about 760 mph (1,220 km/h), but the test project, which aims to break ground in early 2016, would be tested at 200 mph (322 km/h) to prove it works and is safe, Navigant reported.
At that speed, the air drag is still possible to overcome, but beyond that, the power needed to exceed the drag increases as the speed cubed, said James Powell, a retired physicist and co-inventor of the superconducting maglev concept.
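That scaling follows from the standard aerodynamic drag formula: drag force grows with the square of speed, so the power needed to overcome it grows with the cube. A back-of-the-envelope sketch, in which the tube air density, drag coefficient and pod frontal area are illustrative guesses rather than design figures:

```python
# Back-of-the-envelope drag-power scaling: P = 0.5 * rho * Cd * A * v**3.
# rho, Cd and A below are illustrative guesses, not Hyperloop design figures.
rho = 0.1    # kg/m^3, air density in a partially evacuated tube (guess)
Cd = 0.3     # drag coefficient (guess)
A = 4.0      # frontal area of the pod in m^2 (guess)

for mph in (200, 400, 760):
    v = mph * 0.44704                              # convert mph to m/s
    power_kw = 0.5 * rho * Cd * A * v**3 / 1000
    print(f"{mph:>4} mph -> ~{power_kw:,.0f} kW to overcome air drag")
```

Whatever the exact numbers, doubling the speed multiplies the drag power by eight, which is why the jump from a 200 mph test to a 760 mph service is the hard part.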
Is it feasible?
The Hyperloop has the potential to be a faster, cheaper and more energy-efficient form of travel than planes, trains or buses, its proponents say. However, it's not yet known if the technology is feasible, or safe.
For one thing, the tubes have to be very straight, leaving very little room for error. "The guideway [track] has to be built to very fine tolerances, because if the position of the wall deviates from straightness by a few thousandths of an inch, you could crash," Powell told Live Science.
The tubes also have to maintain low-pressure air. "The problem with traveling in an evacuated tube is, if you lose the vacuum in the tube, everybody in the tube will crash," Powell said. In addition, the vehicle's compressor — which produces the air cushion on which the pods rest — can't fail, or the pods will crash into the walls, he added.
"The whole system is vulnerable to a single-point failure," Powell said. For example, somebody could blow a hole in the tube's side, or an earthquake (no rarity in California) could shift the tube by a fraction of an inch, both of which would cause the vehicles to crash. In superconducting maglev, by contrast, the magnets are very stable and operate reliably, Powell said. "It doesn’t require continuous control to keep it suspended."
What will it cost?
The 5-mile test track is estimated to cost about $100 million, which Hyperloop Transportation Technologies hopes to pay for with its initial public offering (IPO) later this year, according to Navigant's blog. Assuming building costs remain the same, a 400-mile (644 km) track between Los Angeles and San Francisco would cost about $8 billion (not including development costs), experts estimate. This price tag is still far less than that for California's planned high-speed rail project, which could cost $67.6 billion, according to the California High-Speed Rail Authority.
But Powell questions whether the Hyperloop would really be as cheap as promised. "The main cost of these high-speed systems is in the cost of the guideway," he said. And because the track must be built so precisely, it's going to be more expensive, he added.
Even if the Hyperloop is successful, Powell doesn't think it will fix the United States' transportation problems — namely, congested highways and airways. "A few isolated high-speed rail corridors in the United States really won't address our big problems," he said.

Friday, 27 May 2016

World's Thinnest Light Bulb Created from Graphene

Graphene, a form of carbon famous for being stronger than steel and more conductive than copper, can add another wonder to the list: making light.
Researchers have developed a light-emitting graphene transistor that works in the same way as the filament in a light bulb.
"We've created what is essentially the world's thinnest light bulb," study co-author James Hone, a mechanical engineer at Columbia University in New York, said in a statement.
Scientists have long wanted to create a teensy "light bulb" to place on a chip, enabling what are called photonic circuits, which run on light rather than electric current. The problem has been one of size and temperature — incandescent filaments must get extremely hot before they can produce visible light. This new graphene device, however, is so efficient and tiny that the resulting technology could offer new ways to make displays or study high-temperature phenomena at small scales, the researchers said.
Making light
When electric current is passed through an incandescent light bulb’s filament — usually made of tungsten — the filament heats up and glows. Electrons moving through the material knock against electrons in the filament's atoms, giving them energy; those electrons then drop back to their former energy levels and emit photons (light) in the process. Crank up the current and voltage enough and the filament of an incandescent bulb reaches temperatures of about 5,400 degrees Fahrenheit (3,000 degrees Celsius). This is one reason light bulbs either have no air in them or are filled with an inert gas like argon: at those temperatures, tungsten would react with the oxygen in air and simply burn.
In the new study, the scientists used strips of graphene a few microns across and 6.5 to 14 microns long, each spanning a trench of silicon like a bridge. (A micron is one-millionth of a meter; a human hair is about 90 microns thick.) Electrodes were attached to the ends of each graphene strip. Just like tungsten, graphene lights up when a current is run through it. But there is an added twist: graphene conducts heat less efficiently as its temperature increases, which means the heat stays concentrated in a spot at the center, rather than being relatively evenly distributed as in a tungsten filament.
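Why a filament has to get that hot before it glows can be seen from Wien's displacement law, which links an object's temperature to the wavelength at which it radiates most strongly. A quick sketch:

```python
# Wien's displacement law: peak emission wavelength ~ b / T.
# Shows why a filament only glows brightly once it is thousands of kelvins hot.
b = 2.898e-3  # Wien's constant, in metre-kelvins

for temp_k in (1000, 2000, 3300):   # 3300 K is roughly a tungsten filament
    peak_nm = b / temp_k * 1e9
    print(f"{temp_k} K -> peak emission at ~{peak_nm:.0f} nm")

# Visible light spans roughly 400-700 nm. At 1000 K the peak (~2900 nm) is deep
# in the infrared, so a cooler filament gives off plenty of heat but little light.
```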
Myung-Ho Bae, one of the study's authors, told Live Science that trapping the heat in one region makes the lighting more efficient. "The temperature of hot electrons at the center of the graphene is about 3,000 K [4,940 F], while the graphene lattice temperature is still about 2,000 K [3,140 F]," he said. "It results in a hotspot at the center and the light emission region is focused at the center of the graphene, which also makes for better efficiency." It's also the reason the electrodes at either end of the graphene don't melt.
As for why this is the first time light has been made from graphene, study co-leader Yun Daniel Park, a professor of physics at Seoul National University, noted that graphene is usually embedded in or in contact with a substrate.
"Physically suspending graphene essentially eliminates pathways in which heat can escape," Park said. "If the graphene is on a substrate, much of the heat will be dissipated to the substrate. Before us, other groups had only reported inefficient radiation emission in the infrared from graphene."
The light emitted from the graphene also reflected off the silicon that each strip was suspended above. The reflected light interferes with the emitted light, producing a pattern of emission with peaks at different wavelengths. That opened up another possibility: tuning the light by varying the distance to the silicon.
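A rough way to picture that tuning: light emitted by the graphene interferes with its own reflection from the silicon a distance d below, so wavelengths whose round trip fits a whole number of waves are reinforced. The sketch below uses that simplified two-beam picture, ignores the phase shift on reflection, and the gap distances are illustrative rather than the dimensions used in the study.

```python
# Rough two-beam interference picture: emission at wavelength lam is reinforced
# when the round trip 2*d to the silicon mirror is an integer number of waves.
# Simplified: ignores the reflection phase shift and the silicon's dispersion.
def enhanced_wavelengths_nm(d_nm, lam_min=400, lam_max=750):
    peaks = []
    m = 1
    while True:
        lam = 2 * d_nm / m          # condition: 2*d = m*lam
        if lam < lam_min:
            break
        if lam <= lam_max:
            peaks.append(round(lam))
        m += 1
    return peaks

for d in (600, 900, 1200):          # graphene-to-silicon gaps in nm (illustrative)
    print(d, "nm gap ->", enhanced_wavelengths_nm(d), "nm peaks")
```

Changing the gap shifts which visible wavelengths are reinforced, which is the sense in which the emission could be tuned.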
The principle of the graphene is simple, Park said, but it took a long time to discover.
"It took us nearly five years to figure out the exact mechanism but everything (all the physics) fit. And, the project has turned out to be some kind of a Columbus' Egg," he said, referring to a legend in which Christopher Columbus challenged a group of men to make an egg stand on its end; they all failed and Columbus solved the problem by just cracking the shell at one end so that it had a flat bottom.


These Insect-Inspired Robots Can Jump on Water

Swarms of robots inspired by water-hopping insects could one day be used for surveillance, search-and-rescue missions and environmental monitoring, researchers say.
More than 1,200 species of animals have evolved the ability to walk on water. These include tiny creatures such as insects and spiders, and larger beasts such as reptiles, birds and even mammals.
Whereas relatively big animals, such as the so-called "Jesus lizard," must slap the water with enough force and speed to keep their heavy bodies from going under, insects called water striders are small enough for their weight to be almost entirely supported by the surface tension of water — the same phenomenon that makes water droplets bead up. In 2003, scientists created the first robots that mimic the water strider, floating on top of, and skating across, the surface of water.
But until now, one water-strider feat that researchers could not explain or copy was how the insects can jump from the surface of water, leaping just as high off water as they can off solid ground. For instance, water striders collected from streams and ponds in Seoul, South Korea, with bodies a half-inch (1.3 centimeters) long can jump more than 3 inches (8 cm) high on average, co-lead study author Je-Sung Koh, a roboticist at Seoul National University and Harvard University, told Live Science.
Now, scientists have solved the mystery of how these insects accomplish these amazing leaps, and the researchers have built a robot capable of such jumps.
"We have revealed the secret of jumping on water using robotics technology," co-senior study author Kyu-Jin Cho, director of the Biorobotics Laboratory at Seoul National University, told Live Science. "Natural organisms give a lot of inspiration to engineers."
Using high-speed cameras, the researchers analyzed water striders jumping on water. They noticed that the insects' long, highly water-repellent legs accelerated gradually, so that the surface of the water did not retreat too quickly and lose contact with the legs. Using a theoretical model of a flexible cylinder floating on top of liquid, the scientists found that the maximum force the water striders' legs exerted was always just below the maximum force that water's surface tension could withstand.
The scientists also found that water striders swept their legs inward to maximize the amount of time they could push against the surface of the water, maximizing the overall force for their leaps. Moreover, the tips of their legs were curved to adapt to the dimples that formed on the water's surface when the legs pushed downward, thereby maximizing the surface tension the legs experienced.
Next, the scientists developed lightweight robots made of glass-fiber-reinforced composite materials that, in total, weighed only 68 milligrams (0.002 ounces) — a little more than the weight of three adult houseflies. Using a jumping mechanism inspired by fleas, the robot could leap about 5.5 inches (14 cm) off the surface of the water — about the length of its body and 10 times its body's height.
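To get a feel for why surface tension can carry a 68-milligram body at all, compare the robot's weight with the vertical force surface tension can supply along its legs. The total leg-contact length below is an assumed figure for illustration; the rest are standard physical constants.

```python
# Compare the 68 mg robot's weight with the force surface tension can supply.
# The total leg-contact length is an assumed figure for illustration only.
g = 9.81                 # m/s^2, gravitational acceleration
sigma = 0.072            # N/m, surface tension of water at room temperature
mass = 68e-6             # kg (68 milligrams)
contact_length = 0.10    # m, assumed total wetted length of the legs (10 cm)

weight = mass * g                          # about 0.7 mN
max_support = 2 * sigma * contact_length   # tension pulls up on both sides of a thin leg
print(f"weight          : {weight * 1e3:.2f} mN")
print(f"surface tension : {max_support * 1e3:.2f} mN available")
print(f"margin          : ~{max_support / weight:.0f}x before the surface breaks")
```

The margin is what the jump spends: push any harder than the surface can supply and the legs punch through, which is exactly the limit the striders (and the robot) stay just below.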
"Our small robot can jump on water without breaking the water surface, and can jump on water as high as jumping on land," Cho said.
The researchers cautioned that, so far, the robot can jump only once, and it lands randomly. In the far future, the scientists want to build a robot that can not only jump repeatedly and land in a controlled manner, but also carry electronics, sensors and batteries.
"This would be an extremely difficult task, since the weight of the body has to be really lightweight for it to jump on water," Cho said. "It would be great to add a swimming behavior as well."


The Next Level Beyond Quantum Computing

The one thing everyone knows about quantum mechanics is its legendary weirdness: the basic tenets of the world it describes seem alien to the world we live in. Superposition, where things can be in two states simultaneously: a switch both on and off, a cat both dead and alive. Or entanglement, what Einstein called “spooky action at a distance”, in which objects are invisibly linked even when separated by huge distances.
But weird or not, quantum theory is approaching a century old and has found many applications in daily life. As John von Neumann once said: “You don’t understand quantum mechanics, you just get used to it.” Much of electronics is based on quantum physics, and the application of quantum theory to computing could open up huge possibilities for the complex calculations and data processing we see today.
Imagine a computer processor able to harness superposition, calculating the result of an arbitrarily large number of permutations of a complex problem simultaneously. Imagine how entanglement could be used to allow systems on different sides of the world to be linked and their efforts combined, despite their physical separation. Quantum computing has immense potential, making light work of some of the most difficult tasks, such as simulating the body’s response to drugs, predicting weather patterns or analyzing big datasets.
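To see why superposition is so attractive for computation, note that a register of n qubits is described by 2^n complex amplitudes at once. The minimal state-vector sketch below shows a Hadamard gate on every qubit spreading a register into an equal superposition over all of its basis states; it is a classical simulation for illustration, not code for any real quantum machine.

```python
# A register of n qubits is described by 2**n complex amplitudes at once.
# Applying a Hadamard gate to every qubit puts the register into an equal
# superposition over all 2**n basis states (plain state-vector simulation).
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)   # single-qubit Hadamard gate

def uniform_superposition(n):
    state = np.zeros(2**n)
    state[0] = 1.0                             # start in |00...0>
    op = H
    for _ in range(n - 1):
        op = np.kron(op, H)                    # Hadamard on every qubit
    return op @ state

for n in (1, 3, 10):
    psi = uniform_superposition(n)
    print(f"{n} qubits -> {len(psi)} amplitudes, each {psi[0]:.4f}")
```

Ten qubits already need 1,024 amplitudes to describe, and a few hundred would need more than there are atoms in the observable universe; that exponential state space is what a quantum processor would exploit.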
Such processing possibilities are needed. The first transistors could only just be held in the hand, while today they measure just 14 nm – 500 times smaller than a red blood cell. This relentless shrinking, predicted by Intel founder Gordon Moore as Moore’s law, has held true for 50 years, but cannot hold indefinitely. Silicon can only be shrunk so far, and if we are to continue benefiting from the performance gains we have become used to, we need a different approach.
Quantum fabrication
Advances in semiconductor fabrication have made it possible to mass-produce quantum-scale semiconductors – electronic circuits that exhibit quantum effects such as superposition and entanglement.
One potential candidate for the building blocks of a quantum computer is the semiconductor nanoring, which can be imaged in cross-section at the atomic scale. Electrons trapped in these rings exhibit the strange properties of quantum mechanics, and semiconductor fabrication processes are poised to integrate the elements required to build a quantum computer. While we may be able to construct a quantum computer using structures like these, there are still major challenges involved.
In a classical computer processor, a huge number of transistors interact conditionally and predictably with one another. But quantum behaviour is highly fragile; under quantum physics, even measuring the state of a system, such as checking whether a switch is on or off, changes what is being observed. Conducting an orchestra of quantum systems to produce useful output that couldn’t easily be handled by a classical computer is extremely difficult.
The basic element of quantum computing is known as a qubit, the quantum equivalent of the bits used in traditional computers. To date, scientists have harnessed quantum systems to represent qubits in many different ways, ranging from defects in diamonds to semiconductor nanostructures and tiny superconducting circuits. Each of these has its own advantages and disadvantages, but none has yet met all the requirements for a quantum computer, known as the DiVincenzo criteria. The most impressive progress has come from D-Wave Systems, a firm that has managed to pack hundreds of qubits onto a small chip similar in appearance to a traditional processor.
Quantum secrets
The benefits of harnessing quantum technologies aren’t limited to computing, however. Whether or not quantum computing will extend or augment digital computing, the same quantum effects can be harnessed for other means. The most mature example is quantum communications.
Quantum physics has been proposed as a means to prevent forgery of valuable objects, such as a banknote or diamond. Here, the unusual negative rules embedded within quantum physics prove useful: perfect copies of unknown quantum states cannot be made, and measurements change the systems they are measuring. These two limitations combine in quantum anti-counterfeiting schemes, making it impossible to copy the quantum identity stored in the object.
The concept of quantum money is, unfortunately, highly impractical, but the same idea has been successfully extended to communications. The idea is straightforward: the act of measuring quantum superposition states alters what you are trying to measure, so it is possible to detect the presence of an eavesdropper making such measurements. With the correct protocol, such as BB84, it is possible to communicate privately, with that privacy guaranteed by fundamental laws of physics.
Quantum communication systems are commercially available today from firms such as Toshiba and ID Quantique. While the implementation is clunky and expensive now, it will become more streamlined and miniaturized, just as transistors have miniaturized over the last 60 years.
Improvements to nanoscale fabrication techniques will greatly accelerate the development of quantum-based technologies. And while useful quantum computing still appears to be some way off, its future is very exciting indeed.
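The eavesdropper-detection idea behind BB84-style protocols can be sketched in a few lines: sender and receiver each choose random measurement bases, keep only the bits where their bases matched, and compare a sample of those bits. If someone measured the photons in transit, roughly a quarter of the compared bits will disagree. The toy simulation below uses classical random numbers to stand in for photons; it illustrates the idea rather than the real protocol or its physics.

```python
# Toy BB84-style sketch: random bases, sifting, and eavesdropper detection.
# Classical random numbers stand in for photons; an illustration of the idea,
# not a faithful physical model or a production protocol.
import random
random.seed(1)

N = 2000
alice_bits  = [random.randint(0, 1) for _ in range(N)]
alice_bases = [random.choice("+x") for _ in range(N)]
bob_bases   = [random.choice("+x") for _ in range(N)]

def channel(bit, basis_sent, basis_measured, eavesdrop):
    if eavesdrop:                          # Eve measures in her own random basis
        eve_basis = random.choice("+x")
        if eve_basis != basis_sent:        # wrong basis randomizes the bit
            bit = random.randint(0, 1)
        basis_sent = eve_basis             # the photon is re-sent in Eve's basis
    if basis_measured != basis_sent:       # Bob in the wrong basis: random result
        bit = random.randint(0, 1)
    return bit

for eavesdrop in (False, True):
    bob_bits = [channel(b, a, m, eavesdrop)
                for b, a, m in zip(alice_bits, alice_bases, bob_bases)]
    # Sift: keep only positions where Alice's and Bob's bases matched.
    kept = [(a, b) for a, b, x, y in zip(alice_bits, bob_bits, alice_bases, bob_bases)
            if x == y]
    errors = sum(a != b for a, b in kept) / len(kept)
    print(f"eavesdropper={eavesdrop}: error rate in sifted key = {errors:.2%}")

# Without Eve the sifted bits agree; with Eve about 25% disagree, revealing her.
```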


Wearable Sensors Could Translate Sign Language Into English

Wearable sensors could one day interpret the gestures in sign language and translate them into English, providing a high-tech solution to communication problems between deaf people and those who don’t understand sign language.
Engineers at Texas A&M University are developing a wearable device that can sense movement and muscle activity in a person's arms.
The device figures out the gestures a person is making using two distinct sensors: one that responds to the motion of the wrist and another that responds to the muscular movements in the arm. A program then wirelessly receives this information and converts the data into an English translation.
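A hedged sketch of what that fusion-and-classification step could look like is below. The feature choices, window size and nearest-neighbour classifier are assumptions made for illustration, not details of the Texas A&M prototype.

```python
# Sketch of fusing motion (IMU) and muscle (EMG) features to classify a gesture.
# Feature layout, window size and classifier choice are illustrative assumptions,
# not details of the Texas A&M prototype.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(42)

def fake_window(level):
    """One training example: summary features from a one-second sensor window."""
    imu = rng.normal(loc=level, scale=0.3, size=3)        # e.g. mean wrist accel x/y/z
    emg = rng.normal(loc=level * 0.5, scale=0.2, size=4)  # e.g. RMS of 4 EMG channels
    return np.concatenate([imu, emg])                     # fused 7-dimensional feature

signs = {"hello": 0.0, "thanks": 1.0, "please": 2.0}      # toy vocabulary
X = np.array([fake_window(v) for name, v in signs.items() for _ in range(20)])
y = np.array([name for name in signs for _ in range(20)])

clf = KNeighborsClassifier(n_neighbors=3).fit(X, y)

# At run time: compute the same features from the live sensors and classify.
print(clf.predict([fake_window(1.0)]))   # expected: ['thanks']
```

The per-user "training" described below corresponds to collecting those labelled windows for each new wearer before the classifier is fitted.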
After some initial research, the engineers found that there were devices that attempted to translate sign language into text, but they were not as intricate in their designs.
"Most of the technology ... was based on vision- or camera-based solutions," said study lead researcher Roozbeh Jafari, an associate professor of biomedical engineering at Texas A&M.
These existing designs, Jafari said, are not sufficient because often when someone is talking with sign language, they are using hand gestures combined with specific finger movements.
"I thought maybe we should look into combining motion sensors and muscle activation," Jafari told Live Science. "And the idea here was to build a wearable device."
The researchers built a prototype system that can recognize words that people use most commonly in their daily conversations. Jafari said that once the team starts expanding the program, the engineers will include more words that are less frequently used, in order to build up a more substantial vocabulary.
One drawback of the prototype is that the system has to be "trained" to respond to each individual who wears the device, Jafari said. This training process involves asking the user to repeat each hand gesture a couple of times, which can take up to 30 minutes to complete.
"If I'm wearing it and you're wearing it — our bodies are different … our muscle structures are different," Jafari said.
But, Jafari thinks the issue is largely the result of time constraints the team faced in building the prototype. It took two graduate students just two weeks to build the device, so Jafari said he is confident that the device will become more advanced during the next steps of development.
The researchers plan to reduce the training time of the device, or even eliminate it altogether, so that the wearable device responds automatically to the user. Jafari also wants to improve the effectiveness of the system's sensors so that the device will be more useful in real-life conversations. Currently, when a person gestures in sign language, the device can only read words one at a time.
This, however, is not how people speak. "When we're speaking, we put all the words in one sentence," Jafari said. "The transition from one word to another word is seamless and it's actually immediate."
"We need to build signal-processing techniques that would help us to identify and understand a complete sentence," he added.
Jafari's ultimate vision is to use new technology, such as the wearable sensor, to develop innovative user interfaces between humans and computers.
For instance, people are already comfortable with using keyboards to issue commands to electronic devices, but Jafari thinks typing on devices like smart watches is not practical because they tend to have small screens.
"We need to have a new user interface (UI) and a UI modality that helps us to communicate with these devices," he said. "Devices like [the wearable sensor] might help us to get there. It might essentially be the right step in the right direction."
Jafari presented this research at the Institute of Electrical and Electronics Engineers (IEEE) 12th Annual Body Sensor Networks Conference in June.


The Singularity

According to techno-futurists, the exponential development of technology in general and artificial intelligence (“AI”) in particular — including the complete digital replication of human brains — will radically transform humanity via two revolutions. The first is the "singularity," when artificial intelligence will redesign itself recursively and progressively, such that AI will become vastly more powerful than human intelligence ("super strong AI"). The second revolution will be "virtual immortality," when the fullness of our mental selves can be uploaded perfectly to non-biological media (such as silicon chips), and our mental selves will live on beyond the demise of our fleshy, physical bodies. 
AI singularity and virtual immortality would mark a startling, transhuman world that techno-futurists envision as inevitable and perhaps just over the horizon. They do not question whether their vision can be actualized; they only debate when it will occur, with estimates ranging from 10 to 100 years.
I'm not so sure. Actually, I'm a skeptic — not because I doubt the science, but because I challenge the philosophical foundation of the claims. Consciousness is the elephant in the room, and most techno-futurists do not see it. Whatever consciousness may be, it affects the nature of the AI singularity and determines whether virtual immortality is even possible.
It is an open question whether, post-singularity, super strong AI without inner awareness would be in all respects just as powerful as super strong AI with inner awareness, and in no respect deficient. In other words, are there kinds of cognition that, in principle or of necessity, require true consciousness? For assessing the AI singularity, the question of consciousness is profound.
Is virtual immortality possible?
Now, what about virtual immortality — digitizing and uploading the fullness of one's first-person mental self (the "I") from wet, mushy, physical brains that die and decay to new, more permanent (non-biological) media or substrates? Could this actually work?
Again, the possibilities for virtual immortality relate to each of the alternative causes of consciousness.
1. If consciousness is entirely physical, then our first-person mental self would be uploadable, and some kind of virtual immortality would be attainable. The technology might take hundreds or thousands of years — not decades, as techno-optimists believe — but barring human-wide catastrophe, it would happen. 
2. If consciousness is an independent, non-reducible feature of physical reality, then it would be possible that our first-person mental self could be uploadable — though less clearly than in No. 1 above, because not knowing what this consciousness-causing feature would be, we could not know whether it could be manipulated by technology, no matter how advanced. But because consciousness would still be physical, efficacious manipulation and successful uploading would seem possible. 
3. If consciousness is a non-reducible feature of each and every elementary physical field and particle (panpsychism), then it would seem probable that our first-person mental self would be uploadable, because there would probably be regularities in the way particles would need to be aggregated to produce consciousness, and if regularities, then advanced technologies could learn to control them.
4. If consciousness is a radically separate, nonphysical substance (dualism), then it would seem impossible to upload our first-person mental self by digitally replicating the brain, because a necessary cause of our consciousness, this nonphysical component, would be absent. 
5. If consciousness is ultimate reality, then consciousness would exist of itself, without any physical prerequisites. But would the unique digital pattern of a complete physical brain (derived, in this case, from consciousness) favor a specific segment of the cosmic consciousness (i.e., our unique first-person mental self)? It's not clear, in this extreme case, that uploading would make much difference (or much sense).
In trying to distinguish these alternatives, I am troubled by a simple observation. Assume that a perfect digital replication of my brain does, in fact, generate human-level consciousness (surely under alternative 1, possibly under 2, probably under 3, not under 4; under 5 it doesn't matter). This would mean that my first-person self and personal awareness could be uploaded to a new medium (non-biological or even, for that matter, a new biological body). But if "I" can be replicated once, then I can be replicated twice; and if twice, then an unlimited number of times.
So, what happens to my first-person inner awareness? What happens to my "I"? 
Assume I do the digital replication procedure and it works perfectly — say, five times.
Where is my first-person inner awareness located? Where am I?
Each of the five replicas would state with unabashed certainty that he is "Robert Kuhn," and no one could dispute them. (For simplicity of the argument, physical appearances of the clones are neutralized.) Inhabiting my original body, I would also claim to be the real “me,” but I could not prove my priority.
I'll frame the question more precisely. Comparing my inner awareness from right before to right after the replications, will I feel or sense differently? Here are four obvious possibilities, with their implications:
1. I do not sense any difference in my first-person awareness. This would mean that the five replicates are like super-identical twins — they are independent conscious entities, such that each begins instantly to diverge from the others. This would imply that consciousness is the local expression or manifestation of a set of physical factors or patterns. (An alternative explanation would be that the replicates are zombies, with no inner awareness — a charge, of course, that they will deny and denounce.)
2. My first-person awareness suddenly has six parts — my original and the five replicates in different locations — and they all somehow merge or blur together into a single conscious frame, the six conscious entities fusing into a single composite (if not coherent) "picture." In this way, the unified effect of my six conscious centers would be like the "binding problem" on steroids. (The binding problem in psychology asks how our separate sense modalities, like sight and sound, come together such that our normal conscious experience feels singular and smooth, not built up from discrete, disparate elements.) This would mean that consciousness has some kind of overarching presence or a kind of supra-physical structure.
3. My personal first-person awareness shifts from one conscious entity to another, or fragments, or fractionates. These states are logically (if remotely) possible, but only, I think, if consciousness were an imperfect, incomplete emanation of evolution, devoid of fundamental grounding.
4. My personal first-person awareness disappears upon replication, although each of the six (original plus five) claims to be the original and really believes it. (This, too, would make consciousness even more mysterious.)
Suppose, after the replicates are made, the original (me) is destroyed. What then? Almost certainly my first-person awareness would vanish, although each of the five replicates would assert indignantly that he is the real "Robert Kuhn" and would advise, perhaps smugly, not to fret over the deceased and discarded original.
At some time in the future, assuming that the deep cause of consciousness permits this, the technology will be ready. If I were around, would I submit? I might, because I'm confident that 1 (above) is true and 2, 3 and 4 are false, and that the replication procedure would not affect my first-person mental self one whit. (So I sure wouldn't let them destroy the original.)
Bottom line, for me for now: The AI singularity and virtual immortality must confront the deep cause of consciousness.


Wearable Sweat Sensors Could Track Your Health

Blood tests allow doctors to peer into the human body to analyze people's health. But in the future, there may be a less invasive way to obtain valuable information about a person's health: wearable sensors that use human sweat to look for signs of disease.

Sweat is a rich source of chemical data that could help doctors determine what is happening inside the human body, scientists explained in a new study. Perspiration is loaded with molecules, ranging from simple electrically charged ions to more complex proteins, and doctors can use sweat to diagnose certain diseases, uncover drug use and optimize athletic performance, they said.
"Sweat is pretty attractive to target for noninvasive wearable sensors, since it's, of course, very easy to analyze — you don't have to poke the body to get it — and it has a lot of information about one's health in it," said study senior author Ali Javey, an electrical engineer at the University of California, Berkeley.
Commercially available wearable sensors, like the Fitbit and the Apple Watch, track users' physical activities and some vital signs, such as heart rate. However, they do not provide data about a user's health on a molecular level. Now, scientists say "smart" wristbands and headbands embedded with sweat sensors could sync data wirelessly in real time to smartphones using Bluetooth.
Previously, studies of sweat largely relied on perspiration collected off the body in containers that was later analyzed in a lab. Now, researchers have devised a soft, flexible, wearable sensor array to continuously monitor changes in four molecular components of sweat and to provide real-time tracking of a person's health.
These devices might one day help athletes track their performance and enable doctors to continuously monitor the health of their patients to better personalize their medication, the scientists said.
"This could help tell athletes to take liquids or warn them they are going through heat shock," Javey told Live Science.
The invention uses five sensors to simultaneously track levels of glucose, lactate, sodium and potassium, as well as skin temperature. This data is fed to a flexible board of microchips that processes these signals and uses Bluetooth to wirelessly transmit data to a smartphone. All of these electronics could be incorporated into either a wristband or headband.
"We have a smartphone app that plots the data from sweat in real time," Javey said.The researchers tested the device on 26 men and women who pedaled indoors on stationary bikes or ran outdoors on tracks and trails. Sodium and potassium in sweat could help check for problems such as dehydration and muscle cramps. Glucose could help keep track of blood sugar levels. Lactate levels could indicate blood flow problems, and skin temperature could reveal overheating and other problems.
In addition, the skin temperature sensor helps adjust the chemical sensors to make sure they get proper readings, the researchers said. For instance, higher skin temperatures increase the electrical signals from glucose, which can make it look as if people are releasing more glucose in their sweat than they actually are.
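A simplified sketch of that kind of temperature correction is below; the two-percent-per-degree coefficient and the reference temperature are invented for illustration, and a real device would rely on its own calibration curve.

```python
# Simplified temperature compensation for a sweat-glucose reading.
# The 2%-per-degree coefficient and 33 C reference are invented for illustration;
# a real sensor would use its own device-specific calibration curve.
def corrected_glucose(raw_signal, skin_temp_c, ref_temp_c=33.0, pct_per_deg=0.02):
    """Scale the raw electrochemical signal back to the reference temperature."""
    correction = 1.0 + pct_per_deg * (skin_temp_c - ref_temp_c)
    return raw_signal / correction

# Same true glucose level, but warmer skin inflates the raw signal:
print(corrected_glucose(raw_signal=100.0, skin_temp_c=33.0))  # 100.0
print(corrected_glucose(raw_signal=108.0, skin_temp_c=37.0))  # ~100.0
```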
Previous wearable sweat monitors could track only a single molecule at a time, which could generate misleading information, the researchers said. For example, if a lone sensor showed a drop in a molecule's level, it might not be because that molecule's level is actually falling in a person's sweat, but rather because sweating has stopped, the sensor has detached from the skin or the sensor is failing. The inclusion of multiple sensors could help shed light on what is happening to a person and the sensor array as a whole.
In the near future, the researchers hope to shrink the device's electronics down and boost the number of molecules it monitors. Such molecules could include heavy metals such as lead, which recently made news for appearing in dangerously high levels in the water of Flint, Michigan, Javey noted.
In the long term, the researchers hope to conduct large-scale studies with their device on many volunteers. The data such work gathers could help researchers better understand what levels of various molecules in sweat mean for athletic performance and human health, Javey said.
The researchers have filed a patent on their work, although they are not currently collaborating with anyone to commercialize the sensors, Javey said.

Robot Painters

While the world's great artists have all been humans so far, robots may soon give the old masters a run for their money. Participants in the first annual Robot Art competition showed just how far our silicon counterparts have come in creating great artwork.
The robots took a variety of approaches, with some coming up with their own compositions and others challenging themselves to work with a limited palette.
"The results of this competition show a significant step in the advancement of robotics and artificial intelligence to create beauty. In addition to being geographically diverse, the approach to creating art that these robots took varied significantly, sometimes in unexpected exciting ways. Some robots concentrated on mastering traditional painting techniques, others experimented with artificial creativity, while others explored the nature of human/robot collaboration. I am excited to see how new teams take in this year's results, and try to top them in next year's competition," RobotArt.org founder Andrew Conru said in a statement.  The Robot Art competition drew 70 different competitors, and each of their paintings had a unique look and style. Though the idea of a robot producing art may seem fantastical, most of the paintings stuck to fairly traditional subject matters and styles.
The winner of the competition, a paintbrush-wielding robot called TAIDA from National Taiwan University, painted a still life of a bowl of fruit in a classical style. TAIDA impressed the judges with its artistic sensibilities, mixing its own color palette and painting under-layers before coloring over parts of them, so that the finished image closely matched the "vision" it had in mind, much as a human artist works. TAIDA's creators took home a prize of $30,000.
The second-place winner, cloudPainter, was designed by Pindar and Hunter Van Arman, who wanted their robot artist to have true artistic license. The cloudPainter bot handled the entire creative process itself, taking pictures, cropping them, choosing the best one and making each brushstroke independently. The robot earned its creators an $18,000 prize.
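A very loose outline of that kind of photograph-to-brushstroke loop is sketched below; every function is a simplified stand-in for illustration, not cloudPainter's actual software.
    # Skeleton of a photograph-to-painting loop in the spirit described above.
    # All functions are simplified stand-ins, not cloudPainter's code.
    import random

    def capture_photos(count):
        # Stand-in for the camera: each "photo" is just a label here.
        return [f"photo_{i}" for i in range(count)]

    def crop_variants(photo):
        return [f"{photo}_crop_{i}" for i in range(3)]

    def score_composition(image):
        # Stand-in for whatever aesthetic scoring the real system performs.
        return random.random()

    def plan_brushstrokes(image):
        return [f"stroke {i} of {image}" for i in range(5)]

    def paint():
        crops = [c for p in capture_photos(4) for c in crop_variants(p)]
        best = max(crops, key=score_composition)   # choose the best crop
        for stroke in plan_brushstrokes(best):     # then lay strokes one by one
            print("executing", stroke)

    paint()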
The third-place winner, NoRAA, earned a $12,000 prize for its abstract paintings. The robot's creator, Patrick Tabarelli, sought to represent the interplay between algorithms and the physical world, he said.
Judges Patrick and Jeannie Wilshire noted that NoRAA's work was an example of "artwork produced by a robot, rather than through a robot."
While scientists and artists have experimented with computer-based painting for decades, and industrial robots have been a mainstay for painting cars, the rise of robot-created art has been slow. Early attempts experimented with using the robot as a tool for laying down the human artist's vision.
More recently, however, robots have been taking over more of the creative process. In April, researchers unveiled the "Next Rembrandt," a new work in the style of the famous painter, created by using artificial intelligence to select the subject matter and compose the image, which was then executed with a specialized printer.


Foldable Droid Could Mend Stomachs

There likely aren't many occasions when you'd want to swallow a tiny robot. But what if such an ingestible bot could be put to work inside your body, targeting a foreign object or patching up an internal wound, before decomposing without a trace?
A team of researchers from the Massachusetts Institute of Technology has proposed a new, minimally invasive way of using bio-compatible and biodegradable miniature robots to carry out tasks inside the human body. The design of the bots is inspired by origami, the Japanese art of paper folding.
Made primarily from dried pig intestines (commonly used for sausage casings), the tiny robots look like a cross between a caterpillar and an accordion. A tiny magnet allows them to be maneuvered by a tunable external magnetic field, the researchers said.
The researchers have already demonstrated origami-inspired robots capable of swimming, climbing and carrying a load twice their weight, but creating an ingestible device that can operate inside a stomach presented a whole new set of challenges, said Shuhei Miyashita, who was part of the MIT team that developed the robot and is now a lecturer in intelligent robotics at the University of York in the United Kingdom.
"The toughest problem we had to solve was that of getting the robot to work in such an unpredictable environment," Miyashita told Live Science. "The robot design was re-created so that it can still walk when flipped upside down and can correspond to the change of the stomach anatomy."
Building a tiny bot
At the heart of the robot's layered structure is a material that shrinks when heated. When this happens, carefully placed slits cut in the outer layer cause the initially flat structure to fold into a series of box-like segments, the researchers said.
This design allows the robot to rely on so-called "stick-slip" motion, in which parts of the robot stick to a surface due to friction during certain movements, but then slip free when the weight distribution changes as the robot's body flexes.
But, because this particular robot is designed to work in a fluid-filled stomach, the team redesigned it to be more like a fin, so that it also provides thrust by propelling water, effectively allowing the machine to swim as well as crawl.
"It is really important to see such small robots enable both actuation [or movement] and biodegradation," said Hongzhi Wang, a professor of materials science at Donghua University in China, who works on self-folding origami-inspired materials but was not involved with the new study. "It has great potential applications to health care."
How it works
In a paper that was presented at the IEEE International Conference on Robotics and Automation, held May 16-21 in Stockholm, Sweden, the team from MIT's Computer Science and Artificial Intelligence Laboratory described how they created a synthetic stomach to test the device and devised a two-step process for hypothetically removing a watch battery that had been swallowed. The scientists also demonstrated how the robot can patch the wound the battery leaves behind.
A 3D-printed open cross-section of the stomach and esophagus was lined with a silicone rubber mold, which matched both the shape and physical properties of a real-life stomach. The synthetic organ was then filled with a liquid that simulated the properties of gastric fluid.
In the study, one of the robots was rolled up and encased in a pill-size capsule of ice. Once the device reached the stomach, an external array of metal coils created a magnetic field that interacted with the robot's magnet and could be tuned to make the capsule roll toward the ingested watch battery.
The magnet causes the capsule to attach itself to the battery, and when the robot rolls away again, it dislodges the battery from the stomach lining. Both the robot and the battery are then naturally passed out of the digestive system, the researchers said.
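To give a rough sense of what tuning that external field to roll the capsule toward a target might involve, here is a toy two-dimensional steering loop; the positions, gain and convergence tolerance are invented for illustration and have nothing to do with the team's actual control hardware.
    # Toy 2D illustration of steering a magnetic capsule toward a target by
    # repeatedly re-aiming an external field. Positions, gain and step size are
    # invented numbers; this is not the researchers' control system.
    def steer_capsule(capsule, target, gain=0.3, tolerance=0.05, max_steps=100):
        for step in range(max_steps):
            dx, dy = target[0] - capsule[0], target[1] - capsule[1]
            distance = (dx * dx + dy * dy) ** 0.5
            if distance < tolerance:
                return capsule, step
            # "Tune the field": aim it along the error vector so the capsule
            # rolls a short distance toward the target each cycle.
            capsule = (capsule[0] + gain * dx, capsule[1] + gain * dy)
        return capsule, max_steps

    final_pos, steps = steer_capsule(capsule=(0.0, 0.0), target=(2.0, 1.0))
    print(f"reached {final_pos} after {steps} steps")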
A second robot is then ingested in the same way, but this time the ice is left to melt and the robot unfolds. The same magnetic array is used to guide the robot to the wound site, which the robot covers before it eventually dissolves. The robot's structure also includes a dissolvable layer impregnated with drugs designed to aid healing, the scientists said.
Larry Howell, a professor of mechanical engineering at Brigham Young University in Utah, who works on origami-inspired mechanisms and medical devices, said the new research marks a valuable step forward in creating robots that can carry out medical procedures inside the body.
"The idea of ingesting the robot in an ice capsule for initial delivery, and having it be biodegradable so that it decomposes afterwards, has the potential of having reduced long-term impact compared to some surgical alternatives," Howell told Live Science.
Miyashita said it could be at least six to eight years before these robots reach the clinic, though. Control accuracy needs to be improved, he said, adding that rigorous animal and human testing will need to be conducted first.