Bat Bot: An Acrobatic Flying Robot

Whether they’re swooping around to catch dinner or delicately hanging upside down to sleep, bats are known for their acrobatic prowess. Now, scientists have created a robot inspired by these flying creatures. Dubbed the “Bat Bot,” it can fly, turn and swoop like its real-life counterpart in the animal kingdom.

Since at least the time of Leonardo da Vinci, scientists have sought to mimic the acrobatic way in which bats maneuver the sky. Someday, robotic bats could help deliver packages or inspect areas ranging from disaster zones to construction sites, the researchers said.

“Bat flight is the Holy Grail of aerial robotics,” said study co-author Soon-Jo Chung, a robotics engineer at the California Institute of Technology and a research scientist at NASA’s Jet Propulsion Laboratory, both in Pasadena.

Bats may possess the most sophisticated wings in the animal kingdom, with more than 40 joints in their wings that enable unparalleled agility during flight, likely so that they can pursue equally nimble insect prey, the researchers said.

“Whenever I see bats make sharp turns or perch upside down with elegant wing movements, I get mesmerized,” Chung told Live Science.

Previous work has developed a variety of flying robots biologically inspired by insects and birds. However, attempts to build robots that mimic bats have been met with limited success because of the complexities of bats’ wings, such as their multitude of joints, the researchers said.

Now, Chung and his colleagues have developed the “Bat Bot,” or B2, a robot that can fly, turn and swoop like a bat. The aim is “to build a safe, energy-efficient, soft-winged robot,” Chung told Live Science.

The researchers said previous bat robots followed the skeletal anatomy of these flying creatures too closely, resulting in bots that were too bulky to fly. Instead, the scientists figured out which components were key to the beating of a bat’s wing — the shoulder, elbow and wrist joints, and the side-to-side swish of their thighs — and used only those in their robot.

Whereas conventional flapping-wing robots used rigid wings, the Bat Bot has thin, elastic wings. “When a bat flaps its wings, it’s like a rubber sheet — it fills up with air and deforms,” said study co-author Seth Hutchinson, a robotics engineer at the University of Illinois at Urbana-Champaign. During the downward stroke, “the flexible wing fills up with air, and at the bottom of the downstroke, it flexes back into place and expels the air, which generates extra lift,” he explained. “That gives us extra flight time.”

The Bat Bot’s wings have carbon-fiber bones and ball-and-socket joints made of 3D-printed plastic, all covered with a soft, durable, ultrathin, silicone-based skin only 56 microns thick. (For comparison, the average human hair is about 100 microns thick.)

The robot flapped its wings up to 10 times per second using micro-motors in its backbone. The Bat Bot weighed only about 3.3 ounces (93 grams) and had a wingspan of about 18.5 inches (47 centimeters) — measurements similar to those of Egyptian fruit bats, Chung said.

In experiments, the Bat Bot could fly at speeds averaging 18.37 feet per second (5.6 meters per second). It could also carry out sharp turns and straight dives, reaching speeds of 45.9 feet per second (14 m/s) while swooping down.

The researchers said their robot’s softness and light weight make it safer for use around humans than, for example, the quadrotor drones that are popular commercially. For instance, the Bat Bot would cause little or no damage if it were to crash into humans or other obstacles in its environment, they said. In contrast, quadrotors spin their rotor blades at high speeds of up to 18,000 revolutions per minute, which could result in dangerous interactions, Chung said.

“The high-speed rotor blades of quadrotors and other craft are inherently unsafe for humans,” Chung said. “Our Bat Bot is considerably more safe.”

The safer, more agile nature of the Bat Bot could enable a wide range of applications. For instance, Bat Bots could serve as “aerial service robots at home or in hospitals to help the elderly or disabled by quickly fetching small objects, relaying audio and video from various distant locations without requiring hard-mounting of multiple cameras, and becoming fun, pet-like companions,” Hutchinson told Live Science.

Another potential application for Bat Bots would be “to supervise construction sites,” Hutchinson said. “The need for automation in construction through advances in computer science and robotics has been highlighted by the National Academy of Engineering as one of the grand challenges of engineering in the 21st century,” he noted.

The dynamic and complex nature of construction sites has prevented the deployment of fully, or even partially, robotic and automated solutions to monitor them. “Keeping track of whether a building is put together in the right way and at the right time is an important problem, and it’s not a trivial problem — a lot of money gets spent on that in the construction industry,” Hutchinson said. Instead, Bat Bots could “fly around, pay attention and compare the building information model to the actual building that’s being constructed,” he added.

Bat Bots could also help inspect disaster zones and other areas. “For example, an aerial robot equipped with a radiation detector, 3D camera system, and temperature and humidity sensors could inspect something like the Fukushima nuclear reactors [in Japan], where the radiation level is too high for humans, or fly into tight crawl spaces, such as mines or collapsed buildings,” Hutchinson said. “Such highly maneuverable aerial robots, with longer flight endurance and range than quadrotors have, will make revolutionary advances in monitoring and recovery of critical infrastructure such as nuclear reactors, power grids, bridges and borders.”

Moreover, the Bat Bot could shed light on some of the mysteries of bat flight. Currently, researchers analyze how bats fly with video, but with the Bat Bot, researchers could develop better models of the aerodynamic forces that bats experience “beyond what can be observed with just the eyes,” Hutchinson said.

The researchers noted that the Bat Bot cannot carry heavy objects yet, but future versions of the robotic bat could lead to “drone-enabled package delivery,” Chung said.

Future research could achieve other aspects of bat flight, such as hovering or perching right side up or even upside down, the researchers said. Perching is more energy-efficient than hovering, “since stationary hovering is difficult for quadrotors in the presence of even mild wind, which is common for construction sites,” Chung said.


A RoboDragonfly

Scientists look to flying animals — birds, bats and insects — for inspiration when they design airborne drones. But researchers are also investigating how to use technology to interact with, and even guide, animals as they fly, enhancing the unique adaptations that allow them to take to the air.

To that end, engineers have fitted dragonflies with tiny, backpack-mounted controllers that issue commands directly to the neurons controlling the insects’ flight.

This project, known as DragonflEye, uses optogenetics, a technique that employs light to transmit signals to neurons. And researchers have genetically modified dragonfly neurons to make them more light-sensitive, and thereby easier to control through measured light pulses.

Dragonflies have large heads, long bodies and two pairs of wings that don’t always flap in sync, according to a 2007 study published in the journal Physical Review Letters. The study authors found that dragonflies maximize their lift when they flap both sets of wings together, and they hover by flapping their wing pairs out of sync, though at the same rate.

Meanwhile, separate muscles controlling each of their four wings allow dragonflies to dart, hover and turn on a dime with exceptional precision, scientists found in 2014. Researchers used high-speed video footage to track dragonfly flight and build computer models to better understand the insects’ complex maneuvers, presenting their findings at the 67th Annual Division of Fluid Dynamics meeting, according to a statement released by the American Physical Society in November 2014.

DragonflEye sees these tiny flight masters as potentially controllable flyers that would be “smaller, lighter and stealthier than anything else that’s manmade,” Jesse Wheeler, a biomedical engineer at the Charles Stark Draper Laboratory (CSDL) in Massachusetts and principal investigator on the DragonflEye program, said in a statement.

A close-up of the backpack board and components before being folded and fitted to the dragonfly.

Credit: Charles Stark Draper Laboratory

The project is a collaboration between the CSDL, which has been developing the backpack that controls the dragonfly, and the Howard Hughes Medical Institute (HHMI), where experts are identifying and enhancing “steering” neurons located in the dragonfly’s nerve cord, inserting genes that make it more responsive to light.

“This system pushes the boundaries of energy harvesting, motion sensing, algorithms, miniaturization and optogenetics, all in a system small enough for an insect to wear,” Wheeler said.

Even smaller than the dragonfly backpack are components created by CSDL called optrodes — optical fibers supple enough to wrap around the dragonfly’s nerve cord, so that engineers can target only the neurons related to flight, CSDL representatives explained in a statement.

And in addition to controlling insect flight, the tiny, flexible optrodes could have applications in human medicine, Wheeler added.

“Someday these same tools could advance medical treatments in humans, resulting in more effective therapies with fewer side effects,” Wheeler said. “Our flexible optrode technology provides a new solution to enable miniaturized diagnostics, safely access smaller neural targets and deliver higher precision therapies.”


Magnetic Robot Swarms Could Combat Disease

Magnetically controlled swarms of microscopic robots might one day help fight cancer inside the body, new research suggests.

Over the past decade, scientists have shown they can manipulate magnetic forces to guide medical devices within the human body, as these fields can apply forces to remotely control objects. For instance, prior work used magnetic fields to maneuver a catheter inside the heart and steer video capsules in the gut.

Previous research also used magnetic fields to simultaneously control swarms of tiny magnets. In principle, these objects could work together on large problems such as fighting cancers. However, individually guiding members of a team of microscopic devices so that each moves in its own direction and at its own speed remains a challenge. This is because identical magnetic items under the control of the same magnetic field usually behave identically to each other.

Now, scientists have developed a way to magnetically control each member of a swarm of magnetic devices to perform specific, unique tasks, researchers in the new study said.

“Our method may enable complex manipulations inside the human body,” said study lead author Jürgen Rahmer, a physicist at Philips Innovative Technologies in Hamburg, Germany.

First, the scientists created a number of tiny identical magnetic screws. The researchers next used a strong, uniform magnetic field to freeze groups of these magnetic screws in place. In small, weak spots within this powerful magnetic field, the microscopic screws are free to move. Superimposing a relatively weak rotating magnetic field could make these free screws spin, the researchers said.

In experiments, the researchers could make several magnetic screws whirl in different directions at the same time with pinpoint accuracy. In principle, the scientists noted, they could manipulate hundreds of microscopic robots at once.
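The selection mechanism described above — a strong locking field that freezes every screw except those sitting in deliberate weak spots, plus a weak rotating drive field — can be illustrated with a toy model. The threshold value, function names and positions below are illustrative assumptions, not the study's actual parameters.

```python
# Toy model of selective magnetic actuation: a strong "locking" field
# holds every screw in place except where the field has a deliberate
# weak spot; a superimposed weak rotating field then spins only the
# screws sitting in those weak spots. All values here are illustrative.

LOCK_THRESHOLD = 5.0  # local field strength above which a screw is frozen


def spinning_screws(screws, locking_field, drive_on=True):
    """Return the screws that rotate under the weak rotating drive field.

    screws:        list of screw positions (any hashable labels)
    locking_field: dict mapping position -> local locking-field strength
    drive_on:      whether the rotating drive field is applied
    """
    if not drive_on:
        return []  # nothing spins without the drive field
    return [p for p in screws if locking_field[p] < LOCK_THRESHOLD]


positions = ["A", "B", "C"]
field = {"A": 10.0, "B": 0.5, "C": 10.0}  # weak spot placed at B only
print(spinning_screws(positions, field))  # only the screw at B is free to spin
```

Moving the weak spot around the workspace would let different screws be addressed in turn, which is the essence of steering identical devices individually with one global field.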

“One could think of screw-driven mechanisms that perform tasks inside the human body without the need for batteries or motors,” Rahmer told Live Science.

One application for these magnetic swarms could involve magnetic screws embedded within injectable microscopic pills. Doctors could use magnetic fields to make certain screws spin to open the pills, the researchers said. This could help doctors make sure that cancer-killing radioactive “seeds” within the pills target and damage only tumors rather than healthy tissues, cutting down on harmful side effects, the researchers said. Once the pills deliver a therapeutic dose of radiation, physicians could then use magnets to essentially switch the pills off. (The pills would be made of metallic material that would otherwise keep radiation from leaking out.)

Another potential application could be medical implants that change over time, the researchers said. For instance, as people heal, magnetic fields could help alter the shape of implants to better adjust to the bodies of patients, Rahmer said.

In the future, researchers could develop compact and magnetic field applicators to control tiny magnetic robots, and use imaging technologies such as X-ray machines or ultrasound scanners to show where those devices are located in the body, Rahmer suggested.


Robo-Bees Could Aid Insects with Pollination Duties

Mini drones sporting horsehair coated in a sticky gel could one day take the pressure off beleaguered bee populations by transporting pollen from plant to plant, researchers said.

Roughly three-quarters of the world’s flowering plants and about 35 percent of the world’s food crops depend on animals to pollinate them, according to the U.S. Department of Agriculture.

Some of nature’s most prolific pollinators are bees, but bee populations are declining around the world, and last month, the U.S. Fish and Wildlife Service listed a native species as endangered for the first time.

Now, researchers from Japan said they’ve taken the first steps toward creating robots that could help pick up the slack from insect pollinators. The scientists created a sticky gel that lets a $100 matchbox-size drone pick up pollen from one flower and deposit it onto another to help the plants reproduce.

“This is a proof of concept — there’s nothing compared to this. It’s a totally first-time demonstration,” said study leader Eijiro Miyako, a chemist at the National Institute of Advanced Industrial Science and Technology in Tsukuba, Japan. “Some robots are expected to be used for experiments in pollination, but no one has tried yet.”

The key innovation of the new study, published today (Feb. 9) in the journal Chem, is the so-called ionic liquid gel, but according to Miyako, it was more down to luck than design. The gel was actually the result of a failed attempt to create electrically conducting liquids and had sat forgotten in a desk drawer for nearly a decade.

But after eight years, it still hadn’t dried out, which most other gels would have done, and was still very sticky, Miyako said. Fortunately, this discovery coincided with Miyako watching a documentary that detailed concerns about insect pollinators.

“I actually dropped the gel on the floor and I noticed it absorbed a lot of dust, and everything linked together in my mind,” he told Live Science.

The gel has just the right stickiness, meaning it can pick up pollen but is not so adhesive that it won’t let the grains go.

The scientists next tested how effectively the gel could be used to transport pollen among flowers. To do so, the researchers put droplets of the material on the back of ants and left the insects overnight in a box full of tulips. The next day, the scientists found that the ants with the gel had picked up far more pollen grains than those insects that lacked the sticky substance.

In a side experiment, the researchers found that it was possible to integrate photochromic compounds, which change color when exposed to UV or white light, into the gel. Scientists stuck this material onto living flies, giving the bugs color-changing capabilities. This, Miyako said, could ultimately act as some kind of adaptive camouflage to protect pollinators from predators.

But while improving the ability of other insects to pollinate flowers is a potential solution to falling bee numbers, Miyako said he was not convinced, and so began to look elsewhere. “It’s very difficult using living organisms for real practical realizations, so I decided to change my approach and use robots,” he said.

The hairs that make insects like bees fuzzy are important for their role as pollinators, because the hairs increase the surface area of the bees’ bodies, giving pollen more material to stick to. In order to give the smooth, plastic drone similar capabilities, the scientists added a patch of horsehair to the robot’s underside, which was then coated with the gel.

The researchers then flew the drones to collect pollen from the flowers of Japanese lilies and transport this pollen to other flowers. In each experiment, the researchers made 100 attempts at pollinating the flower, achieving an overall success rate of 37 percent. Drones without the patch of hair, or with uncoated hair, failed to pollinate the plants.

Miyako said there are currently limitations to the technology, because it is difficult to manually pilot the drone. However, he added that he thinks GPS and artificial intelligence could one day be used to automatically guide robotic pollinators.

Before these robo-bees become a reality, though, the cost of the drone will have to come down drastically, and its 3-minute battery life will need to improve significantly, Miyako said. But he added that he is confident this will happen eventually.

Dave Goulson, a professor at the University of Sussex in the United Kingdom, said he sees the intellectual interest in trying to create robot bees, but he’s skeptical about how practical they are and worries that they could distract from more vital pollinator conservation work. Goulson specializes in the conservation of bumblebees but was not involved with the new research.

In a blog post, he wrote that there are roughly 3.2 trillion bees on the planet. Even if the robo-bees cost 1 cent per unit and lasted a year, which he said is a highly optimistic estimate, it would cost $32 billion a year to maintain the population and would litter the countryside with tiny robots.
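Goulson's back-of-the-envelope estimate is easy to verify — at one cent per robot and a one-year lifespan, replacing the global bee population works out to $32 billion annually:

```python
# Checking Goulson's back-of-the-envelope estimate: replacing the
# world's bees with one-cent robo-bees that each last a year.
bee_population = 3.2e12   # roughly 3.2 trillion bees worldwide
cost_per_robot = 0.01     # optimistic: 1 cent per unit, lasting one year

annual_cost = bee_population * cost_per_robot
print(f"${annual_cost / 1e9:.0f} billion per year")  # $32 billion per year
```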

“Real bees avoid all of these issues; they are self-replicating, self-powering and essentially carbon-neutral,” Goulson wrote in the post. “We have wonderfully efficient pollinators already. Let’s look after them, not plan for their demise.”


North Korea’s Missile Threat to the US

North Korea has always talked the talk, and now it seems to be walking the walk as never before.

The nuclear-armed rogue nation appears to be making progress on an intercontinental ballistic missile (ICBM), which could conceivably allow the Hermit Kingdom to make good on its oft-repeated threat to turn major American cities into “seas of fire,” experts say.

“They’ve probably reached the point where they’re going to need to start testing the missiles themselves — the whole system,” said Joel Wit, senior fellow at the U.S.-Korea Institute (USKI) at Johns Hopkins University’s School of Advanced International Studies. “Most people think that could come sometime this year.”

Last year’s successful test-launch of a missile from a submarine suggests that a mobile-strike capability may be within North Korea’s grasp soon as well, analysts have said.

The North Korean missile program got its start with the importation of Soviet Scuds, which made their way into the nation in the 1970s. North Korea reworked Scud technology into a number of variants over the years, apparently with the help of Soviet engineers (many of whom fled the USSR after its 1991 collapse).

These versions include the Hwasong-5 and Hwasong-6, which are thought to have a range of a few hundred miles, and the Nodong, which experts believe can reach targets 620 miles to 800 miles (1,000 to 1,300 kilometers) away. (It’s hard to know anything for sure about North Korea’s missiles and rockets, because the nation’s government is extremely secretive and works to keep much information from getting to the outside world.)

North Korea has also developed longer-range missiles, including the Taepodong-1, Musudan and Taepodong-2, which have estimated maximum ranges of about 1,500 miles (2,500 km), 2,000 miles (3,200 km) and 3,000 miles to 5,400 miles (5,000 to 9,000 km), respectively.

Taepodong-1 has just one known flight under its belt. In August 1998, a modified space-launch configuration of the vehicle lifted off with a small satellite onboard; Western observers concluded that the launch failed.

The Taepodong-2 failed during a 2006 test flight, its only known liftoff. However, North Korea modified the missile into the Unha space launcher, which lofted satellites to orbit in December 2012 and February 2016.

The Musudan has seen a lot more action. North Korea apparently tested the medium-range missile seven times last year, with just one success, said physicist and missile-technology expert David Wright, co-director of the Union of Concerned Scientists’ Global Security Program.

Such flights flout United Nations resolutions, which prohibit North Korea from testing missiles and nuclear weapons. Pyongyang has also conducted five known nuclear tests, with the latest one coming in September 2016.

North Korea could conceivably combine several of these existing vehicles to build an ICBM, topping an Unha first stage with a second stage based on the Musudan and adding a third stage of some sort, Wright said. But there’s no evidence that the nation is actually doing that, he added.

“North Korea is probably reluctant to turn the Unha into a ballistic missile, because I think they want something that really is a civil space-launch program that they can point to and say, ‘This is what countries do. We’re launching satellites; it’s not a threat,'” Wright told Space.com. “So my guess is, they won’t go that route.”

The route that Pyongyang appears to be taking instead, experts say, centers on a missile called the KN-08, a likely Russian-derived vehicle that Western observers first spotted in North Korean military parades about five years ago.

“It is much better suited as a militarily effective ICBM than the Unha is,” Brian Weeden, a technical adviser for the nonprofit Secure World Foundation, told Space.com. He noted, for example, that the KN-08 can be launched from a truck, whereas the Unha requires a stationary facility.

Work on the KN-08 has apparently been proceeding apace. For instance, in April 2016, Pyongyang ground-tested a large, liquid-fueled engine that could power the putative ICBM and/or a more muscular variant known as the KN-14.

“Using this technology, North Korea’s road-mobile intercontinental ballistic missile (ICBM), the KN-08 or the KN-14 modification, could deliver a nuclear warhead to targets at a distance of 10,000 to 13,000 kilometers [6,200 to 8,000 miles],” aerospace engineer and rocket-propulsion expert John Schilling wrote on 38North.org, a North Korea analysis site, shortly after the test.

“That range, greater than had previously been expected, could allow Pyongyang to reach targets on the U.S. East Coast, including New York or Washington, D.C.,” he added.

And North Korea has also been working on a re-entry vehicle, which would protect the warhead during the ICBM’s return to Earth’s atmosphere from suborbital space. Last year, North Korean leader Kim Jong-un held an event during which he stood next to a re-entry vehicle, said Wit, who is also the co-founder of 38 North (a USKI program).

“I think you can be almost 100 percent certain that they’ve done [re-entry vehicle] tests on the ground,” Wit told Space.com.

During a speech on New Year’s Day, Kim announced that Pyongyang was in final preparations to test-launch its ICBM. Wit said such a flight could come soon — possibly as early as next month, when the U.S. and South Korea hold their annual joint military exercises.

“That could trigger a North Korean response,” Wit said.

If ICBM testing does indeed start this year, the missiles could potentially be ready for deployment by late 2019, he added.

Pyongyang also conducted a successful test launch from a submarine in August 2016, sending one of its KN-11 (also known as Pukguksong-1) missiles about 300 miles (500 km) toward Japan. Developing this technology to the fullest extent would make North Korea more dangerous and capable, Wright said.

“That’s another thing that people are watching — this combination of a missile and a submarine,” he said.

The missile that North Korea fired on Sunday (Feb. 12), which traveled 300 miles (500 km) before splashing down in the Sea of Japan, was a land-based version of the KN-11, according to the North Korean news service.

North Korea is famously unpredictable, secretive and prone to outbursts of grandiose and threatening rhetoric; Kim and other officials have repeatedly vowed to wipe out South Korea, Japan and the United States, for example.

But Pyongyang’s development of a functional ICBM, if and when that does indeed happen, shouldn’t incite panic across the United States, experts said. After all, North Korea has been capable of hitting South Korea and Japan for a while but has yet to do so — probably because the nation knows that such an unprovoked strike would be suicidal, drawing a devastating response from the U.S.

And the Kim regime is not suicidal; rather, it appears focused primarily on strengthening and perpetuating its rule, Weeden said.

“It’s very clear that they want to send a signal to the West that they can’t be messed with,” he said. “There’s a rationality there.”

There are other reasons to doubt that North Korea will launch a nuclear ICBM attack on the U.S. anytime soon.

For example, Pyongyang is thought to possess just a handful of nuclear weapons. A 2015 SAIS report co-authored by Wit pegged the nation’s stockpile at 10 to 16 nukes. By 2020, this number could grow to 20 in a “best-case scenario,” and to 100 in a “worst-case scenario,” the report predicted.

Each warhead is therefore quite valuable to North Korea, Wright said — meaning the nation probably won’t use its nukes lightly.

“It might be the kind of thing you would like to have in your back pocket, to make people think, ‘Well, gee, maybe in a bad situation, they might try a Hail Mary pass and see whether it works,'” Wright said. “But it’s not the sort of thing that you’re going to be able to rely on other than that.”

That’s not to suggest that North Korea is all bluster, however.

“I think the best bet is that they would use nuclear weapons if they felt the regime was threatened in a serious way,” Wit said. “Of course, the main way that might happen is if there’s a war on the Korean Peninsula, and U.S. and South Korean troops are moving north.”


This 3D-Printed Micro Camera Sees with Eagle-Eye Vision

A bird of prey on the hunt must be able to clearly see faraway objects while remaining aware of threats in its peripheral vision. In some cases, that’s also true for a drone — even one so small that its eye must fit on the tip of a ballpoint pen. Now, a team of engineers has developed a camera that could provide eagle-eye vision to micro-drones.

The new camera could be used for medical procedures, such as endoscopies, or to build micro-robots specially designed to measure, explore or survey, the researchers said.

Previously, the engineers used a technique called femtosecond laser writing to 3D-print miniature lenses directly onto an image-sensing chip. To create sharp images like an eagle’s eye, the researchers used this process to print clusters of four lenses at a time. The lenses range from wide to narrow and low to high resolution, and images can then be combined into a bull’s-eye shape with a sharp image at the center, similar to how eagles see.

“This means that we still cover the whole object and get a better resolution in the center,” said study lead author Simon Thiele, a scientist at the Institute of Technical Optics at the University of Stuttgart in Germany. “The drawback is that we lose information in the periphery.”
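The bull's-eye compositing idea — sharp in the center, coarser toward the edge — can be sketched with a simple rule: for any point in the field of view, use the sample from the narrowest (and therefore sharpest) lens that still covers that point. The field-of-view angles and sharpness values below are illustrative assumptions, not the Stuttgart team's actual optics.

```python
# Illustrative sketch of foveated compositing from four lenses of
# different fields of view. For each viewing angle, the best available
# sample comes from the narrowest lens that still covers that angle.
# Angles and sharpness values are made up for illustration.

# (half-angle of field of view in degrees, relative sharpness)
LENSES = [
    (70.0, 1.0),   # wide field, low resolution
    (40.0, 2.0),
    (20.0, 4.0),
    (10.0, 8.0),   # narrow field, high resolution: the "fovea"
]


def sharpness_at(angle_from_center):
    """Return the best sharpness available at a given angle off-axis."""
    covering = [s for fov, s in LENSES if angle_from_center <= fov]
    return max(covering) if covering else 0.0  # 0.0 = outside all lenses


# Sharpness falls off in rings from the center, like a bull's-eye.
for angle in (0, 15, 30, 60):
    print(angle, sharpness_at(angle))
```

This also makes Thiele's trade-off concrete: points near the center are covered by all four lenses, while points in the periphery are covered only by the wide, low-resolution one.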

The goal is to optimize the flow of information, Thiele told Live Science in an email.

The four lenses can be scaled down to a footprint as small as 300 micrometers by 300 micrometers (0.012 inches or 0.03 centimeters on each side), similar to a medium-size grain of sand. The researchers said the size of the entire camera setup could decrease with design tweaks to pack in or combine lenses, or as smaller chips become available.

In the animal kingdom, creatures must balance their visual needs and their brain power. The solution in humans and many other vertebrates is known as “foveated” vision, with the sharpest image in the center and a wide range of lower-clarity vision at the edges.

“If you had the resolution of the fovea all over your eye, you’d have to carry the visual part of your brain around in a wheelbarrow,” said Wilson Geisler, a vision scientist at the University of Texas at Austin, who was not involved in the new research.

“If you’ve got the right application, this could be a very useful technology,” Geisler told Live Science. The technology could be used in drones that face challenges similar to animals with foveated vision, with limitations on the bandwidth to send information, but the ability to control movement of the camera to focus on areas of interest, he said.

Thiele said the next step in the research will be to print a lens array on the smallest available image sensors, measuring about 1 square millimeter (roughly 0.0016 square inches), with the lenses covering more of the surface of the sensor.


Coloring Books Go 3D

Have you ever wished that the characters in your coloring book could come alive — leap from the page and dance around, perhaps? Well, good news: There’s an app for that.

Developed by the tech nerds over at Disney Research (a network of laboratories affiliated with the Walt Disney Company), the new coloring book app turns your doodles into virtual, 3-D figures that move around on screen like cartoon characters.

Here’s how it works: You color in one of the characters inside a regular (but app-compatible) coloring book and launch the Disney coloring app on your phone or tablet. The app accesses the device’s camera and uses it to detect which character you are coloring.

Then the app uses special software to re-create the two-dimensional coloring-book character as a 3-D character on the device’s screen. As you color with your crayon, the app applies the same color you’re using on the page to the 3-D character.

The app isn’t meant to replace the low-tech practice of putting crayon to paper; it’s only meant to “enhance engagement” with this treasured pastime by offering a “magical digital overlay” to accompany the act of coloring, Disney said.

“Augmented reality holds unique and promising potential to bridge between real-world activities and digital experiences, allowing users to engage their imagination and boost their creativity,” Robert Sumner, principal research scientist at Disney Research, said in a statement.

Turning a coloring-book character into a cartoon was not an easy task, especially since virtual characters are 3D and the outlined characters in a coloring book lie flat against the page. Disney Research had to figure out what to do about all the 3-D space (they call this space the “occluded areas”) that exists on the screen but not inside the coloring book.

To fix this issue, the app uses a “lookup map” for each character. This map matches the pixels in the occluded areas with the corresponding areas that the user can actually see. For example, if you color the front of a character’s head with a brown crayon, the app will automatically figure out what color might be appropriate for the back of the character’s head (perhaps a darker hue, representing the character’s hair).
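The lookup-map idea can be sketched in a few lines of code. This is a minimal illustration, not Disney's implementation: the function name `apply_lookup_map`, the dict-of-pixels representation and the fixed darkening factor are all assumptions made for clarity.

```python
def apply_lookup_map(visible, lookup_map, darken=0.8):
    """Fill occluded pixels from the visible pixels they are mapped to.

    visible    -- dict mapping visible (x, y) -> (r, g, b) crayon colors
    lookup_map -- dict mapping occluded (x, y) -> a visible source (x, y)
    darken     -- factor applied to hidden areas (e.g. back of the head)
    """
    occluded = {}
    for hidden_px, source_px in lookup_map.items():
        r, g, b = visible[source_px]
        # Copy the user's color, slightly darkened for the unseen side.
        occluded[hidden_px] = (int(r * darken), int(g * darken), int(b * darken))
    return occluded

# The front of the head was colored brown; the back gets a darker hue.
front = {(10, 10): (150, 90, 40)}        # brown crayon on a visible pixel
mapping = {(90, 10): (10, 10)}           # back-of-head pixel -> front pixel
print(apply_lookup_map(front, mapping))  # {(90, 10): (120, 72, 32)}
```

In the real app the map would be precomputed per character, so filling the hidden surface is just a fast per-pixel copy as the user colors.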

Though the app certainly makes coloring a much more high-tech task, Disney said that, so far, it’s gotten a good response from users. In the initial tests, the majority of users said the app increased their motivation to color. And 80 percent of trial users said the app increased their feeling of connection to a character, Disney said.

However, all of the users who have tried out the new coloring-book app have been adults. It’s still not clear whether this “augmented” coloring experience will go over well with kids.

Disney researchers, together with others who helped develop the app, presented the augmented-reality coloring app at the recent IEEE International Symposium on Mixed and Augmented Reality (ISMAR 2015) in Fukuoka, Japan. The app launched to the public earlier this year through Disney’s publishing company, Disney Publishing Worldwide. Called “Disney Color and Play,” the app is available on Google Play and iTunes.


Augmented-Reality Diving Helmets Join the US Navy

New high-tech diving helmets being developed by the U.S. Navy will incorporate augmented-reality tech to keep naval divers safe on underwater missions.

The U.S. Navy announced this month a “next-generation” and “futuristic” system: the Divers Augmented Vision Display (DAVD). Embedded directly inside a diving helmet, DAVD is a high-resolution, see-through heads-up display (HUD), meaning divers can see instrument readings or other data directly on the transparent display without having to lower their eyes.

“By building this HUD directly inside the dive helmet instead of attaching a display on the outside, it can provide a capability similar to something from an ‘Ironman’ movie,” Dennis Gallagher, underwater systems development project engineer at the Naval Surface Warfare Center Panama City Division, said in a statement. “You have everything you visually need right there within the helmet.”

Augmented-reality (AR) devices superimpose information on the world we see, such as how Google Glass works. The technology has existed for years in some form or another. For instance, the HUDs in fighter aircraft as far back as the ’90s were capable of showing information about the attitude, direction and speed of the planes.

For the U.S. Navy’s purposes, the augmented-reality helmet display will offer divers real-time information, ranging from diagrams to text messages. By having this operational data in real time, divers can work more effectively and stay safe on their missions, according to the military.

“Instead of having to rely on pre-dive briefings alone to determine what they are looking for, how specific items should appear and where they may be located, the DAVD system places the information right before divers’ eyes with a look and feel comparable to a point-of-view video game display,” the U.S. Navy said in the statement.

The system can be used for diving missions like underwater construction or salvage operations, according to the Navy, and eventually could be used by first responders and the commercial diving community.

Gallagher and his team are now working on components designed for both helmet systems and full-face masks. In-water simulation testing of the equipment is scheduled for October, with phase three of the project — hardening the system for field tests with dive commands — set to begin in 2017.


Tech: When Will Augmented Reality Get Real?

Augmented reality, or AR, is technology that blends virtual content with real-world surroundings. Unlike virtual reality, which immerses you in a self-contained digital world, AR overlays 3D graphics and interactive characters into your everyday world.

In the hit game Pokémon Go, players use their smartphones to catch Pokémon characters in the local park or at the office. Released in July 2016, the game was downloaded faster than any mobile app in history and generated nearly $1 billion in revenue in its first six months.

Yet despite Pokémon Go’s huge numbers, when surveyors asked average Americans what they thought about augmented reality, the majority of people didn’t have a clue. In a ReportLinker survey conducted last September — the same month that Pokémon Go downloads topped 500 million — 58 percent of Americans said they were “not at all familiar with” augmented reality. Awareness was slightly higher among famously connected millennials.

The survey results highlight AR’s frustrating identity crisis.

While Silicon Valley investors and tech CEOs are hyping AR as a game-changing technology that’s going to transform the way we interact with computers and our world, the average consumer still needs a lot of convincing. Where is the killer new device or app that’s going to make augmented reality a reality?

Shawn Cheng has been closely tracking the augmented reality and virtual reality space as a venture capitalist with Vayner/RSE. He sees huge long-term potential for AR technology to make a real impact in gaming, entertainment and education. But in the short term, he recognizes that augmented reality is in a tough spot in the hype cycle, where its high expectations aren’t backed up by real-world applications.

Cheng points to Magic Leap, a Florida company developing an AR headset that is rumored to deliver mind-blowing interactive graphics for entertainment and productivity. Bigshots like Google and Warner Bros. have invested more than a billion dollars in the headset. But we’ve yet to see even a prototype.

“I don’t know if Magic Leap will ever be able to pay off on the hype that’s surrounding them,” Cheng told Seeker. “The proof will be people voting with their dollars. For all the promise and excitement that investors have for a particular space, it’s still going to come down to whether or not their narrative, what they say that they can do, is something people are willing to pay for.”

The problem right now is that consumers don’t have many real AR options from which to choose. Microsoft’s HoloLens, one of the first true AR headsets, retails for $3,000. It’s undeniably cool, but the list of available applications is still awfully short for that kind of investment. Plus, the bulky headset, which looks like a futuristic welding visor, is not exactly something that you’re going to wear around town.

Blair MacIntyre has been researching and developing AR technology and applications since 1991. He’s currently on leave from Georgia Tech’s School of Interactive Computing to help develop an AR browser for Mozilla. MacIntyre thinks that we’re still a few years away from compelling consumer AR products with mass appeal.

“We’re still at the point when the underlying technologies aren’t really great for delivering the kinds of experiences that people imagine,” said MacIntyre. “‘I’ll put on a pair of glasses, I’ll walk down the street and I’ll see stuff all around me: social media, advertising, games, things related to my job.’ And we’re a long way from having a head-mounted display that 99 percent of the public would be comfortable wearing around outside for a whole day.”

Call that the Google Glass conundrum. When Google released a beta version of its now-infamous AR headset in 2013, it was widely criticized for its geeky looks, which only seemed to amplify the creepy factor of its built-in video camera. Glass was discontinued in 2015, but other headset makers are learning from Google’s very public missteps.

Paul Travers is the CEO of Vuzix, a company that cut its teeth making AR headsets for industrial, commercial and military applications, most of which are as unsexy as an Excel spreadsheet. But Vuzix debuted a new pair of lightweight AR sunglasses at CES 2017 that Travers hopes will change the American consumer’s tune on AR specs.

“If you’re going to play at consumers, you can’t look dumb,” Travers told Seeker in an interview. “When you put on any of the headsets that work in business, you still look like you stepped off the Starship Enterprise. Consumers won’t wear them.”

A pair of Vuzix Blade 3000 sunglasses, which will sell for $500 later in 2017, weighs in at only 2.8 ounces (79 grams) while packing impressive processing power and a heads-up 3D graphics display. Travers heralds the widespread adoption of hands-free AR headsets as a “paradigm shift.”

“The ability to connect the web to the real world opens up so many amazing possibilities,” Travers said. “One application could drive this thing through the roof if it was right.”

But what will be that killer AR app? Pokémon Go points the way, but even that wasn’t a pure AR game. Cheng at Vayner/RSE says many Pokémon Go players shut off the AR component, which wasn’t integral to game play. He credits the game’s success to great intellectual property — the Pokémon brand itself — not to its magical merging of virtual and real.

Until everybody is walking around with AR glasses, the platform with the biggest potential for AR adoption is the smartphone. But the problem, said MacIntyre at Mozilla, is that Apple and Android have yet to release phones with real AR capabilities. The GPS-based spatial location on the iPhone isn’t precise enough to accurately map the world around us.

“GPS is great for Google Maps, but if I want to hold my phone and look at a Pokémon that’s 20 feet away from me, and the GPS error is two to five meters, that’s really not going to work,” said MacIntyre. “The Pokémon is going to be jumping dramatically from left to right as my phone’s estimate of where I am is jumping around.”
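A back-of-the-envelope calculation shows why MacIntyre's numbers are so damning. If the phone's position estimate shifts sideways by the GPS error while a virtual object sits 20 feet (about 6.1 meters) away, the object appears to swing through a large angle on screen. The function below is just that trigonometry, not anything from an actual AR SDK:

```python
import math

def apparent_jump_deg(gps_error_m, object_distance_m):
    """Angular jump (in degrees) of a virtual object when the phone's
    position estimate shifts sideways by gps_error_m."""
    return math.degrees(math.atan2(gps_error_m, object_distance_m))

distance = 20 * 0.3048  # 20 feet in meters (~6.1 m)
for err in (2.0, 5.0):  # the 2-to-5-meter GPS error MacIntyre cites
    print(f"{err} m error -> ~{apparent_jump_deg(err, distance):.0f} degree jump")
```

With a 2-meter error the object jumps roughly 18 degrees; with a 5-meter error, nearly 40 degrees, a huge slice of the phone's field of view.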

Google’s new Tango-enabled phones are equipped with special depth cameras that can scan a room or a street and build an inch-by-inch 3D map. With Tango technology — currently only available on a handful of Lenovo and Asus phones — a Pokémon Go character can hide behind a stone fountain and an interactive map of the solar system can stretch across your dining room table.

Gap, the retail clothing giant, recently launched a virtual dressing room app for Tango-enabled phones. Kendra Gratteri is chief customer officer at Avametric, an AR fashion start-up that built the Gap app, which allows shoppers to “try on” different garments using a 3D avatar on their phones. The Gap app allows for handy side-by-side fit comparisons and a 360-degree view, but Gratteri said that’s just the tip of the iceberg.

“We’re also working on using the Tango platform as a scanner,” she said in an interview. “So you could scan your own body or have someone scan you, and those measurements would drop into our system and create a personalized avatar.”

Gratteri imagines a very near future where our fully-customized 3D avatar follows us through the shopping experience. Instead of sticking to the same few brands and clothing items we currently feel comfortable buying online, we could search the broader fashion world for garments that match the contours of our avatar.

Still, Gratteri admits, “Until Apple offers a solution for a depth sensor camera, we understand that wide use is going to be a little curtailed.”

An AR-enabled iPhone or an Apple AR headset might be just the big break that AR desperately needs to enter the mainstream. And consumers may not have to wait much longer. Apple has been quietly buying up AR software and hardware companies for years, and a pair of recent patents have led some to predict the release of an Apple-branded AR device as early as late 2017.


Futuristic Motorcycle Helmets Use Smart Glasses

In my younger days, I used to ride a motorcycle — less to look cool than to get cheap campus parking. But the risk/reward ratio went haywire one day, in an incident concerning a blind spot and a UPS truck. I have since embraced public transportation.

This all comes to bear on a development from this week’s AWE 2014 event in Santa Clara, Calif. AWE stands for Augmented World Expo, and the gathering is dedicated to emerging technologies including augmented reality, wearables, digital eyewear, and gesture and voice interaction.

An outfit called Fusar Technologies is introducing the new Guardian GA-1 motorcycle helmet, which incorporates a heads-up display (HUD) system, plus video recording, voice commands and augmented reality technology.

The goal is to leverage emerging technology to make riders safer. The rear-facing camera pipes a “rearview mirror” live feed into the helmet’s built-in Epson Moverio smart glasses, along with GPS navigation information. Built-in speakers and microphones handle voice commands.

While no specifics have been made public yet, the project video suggests the Guardian system will allow riders to monitor traffic conditions, find gas stations, track fellow riders, and take photos and videos — all hands-free, of course.

None of this is particularly new — high-tech motorcycle helmets with various communication gizmos have been around forever. And next-gen wearable systems like the Skully helmet, now in beta testing, incorporate similar technologies. But the trick is to get the various HUD and augmented reality features to work reliably (and without causing distraction) in the famously unreliable conditions of two-wheeled locomotion — road glare, ambient noise, UPS trucks, this sort of thing.