
Video Friday: Qoobo the Headless Robot Cat Is Back

Par Evan Ackerman

Video Friday is your weekly selection of awesome robotics videos, collected by your Automaton bloggers. We’ll also be posting a weekly calendar of upcoming robotics events for the next few months; here’s what we have so far (send us your events!):

ICARSC 2020 – April 15-17, 2020 – [Online Conference]
ICRA 2020 – May 31-June 4, 2020 – [TBD]
ICUAS 2020 – June 9-12, 2020 – Athens, Greece
RSS 2020 – July 12-16, 2020 – [Online Conference]
CLAWAR 2020 – August 24-26, 2020 – Moscow, Russia

Let us know if you have suggestions for next week, and enjoy today’s videos.


You need this dancing robot right now.

By Vanessa Weiß at UPenn.

[ KodLab ]


Remember Qoobo the headless robot cat? There’s a TINY QOOBO NOW!

It’s available now on a Japanese crowdfunding site, but I can’t tell if it’ll ship to other countries.

[ Qoobo ]


Just what we need, more of this thing.

[ Vstone ]


HiBot, which just received an influx of funding, is adding new RaaS (robotics as a service) offerings to its collection of robot arms and snakebots.

[ HiBot ]


If social distancing already feels like too much work, Misty is like that one-in-a-thousand child that enjoys cleaning. See her in action here as a robot disinfector and sanitizer for common and high-touch surfaces. Alcohol reservoir, servo actuator, and nozzle not (yet) included. But we will provide the support to help you build the skill.

[ Misty Robotics ]


After seeing this tweet from Kate Darling that mentions an MIT experiment in which “a group of gerbils inhabited an architectural environment made of modular blocks, which were manipulated by a robotic arm in response to the gerbils’ movements,” I had to find a video of the robot arm gerbil habitat. The best I could do was this 2007 German remake, but it’s pretty good:

[ Lutz Dammbeck ]


We posted about this research almost a year ago when it came out in RA-L, but I’m not tired of watching the video yet.

Today’s autonomous drones have reaction times of tens of milliseconds, which is not enough for navigating fast in complex dynamic environments. To safely avoid fast moving objects, drones need low-latency sensors and algorithms. We depart from state of the art approaches by using event cameras, which are novel bioinspired sensors with reaction times of microseconds. We demonstrate the effectiveness of our approach on an autonomous quadrotor using only onboard sensing and computation. Our drone was capable of avoiding multiple obstacles of different sizes and shapes at relative speeds up to 10 meters/second, both indoors and outdoors.
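
If you’re curious what a millisecond-scale reaction loop looks like in code, here’s a toy Python sketch (emphatically not the UZH pipeline, which also compensates for the drone’s own ego-motion before looking for moving obstacles):

```python
import numpy as np

# Toy sketch of why event cameras enable millisecond-scale reactions.
# An event camera outputs a sparse stream of (timestamp, x, y, polarity)
# tuples, so an avoidance loop can act on just the last few ms of data.

def recent_obstacle(events, now_s, window_s=0.005, min_events=200):
    """Return the pixel centroid of events from the last `window_s` seconds,
    or None if there is too little activity to suggest a moving obstacle."""
    recent = events[events[:, 0] > now_s - window_s]
    if len(recent) < min_events:
        return None
    return recent[:, 1:3].mean(axis=0)

# Synthetic burst of events around pixel (320, 240), 2 ms before "now":
rng = np.random.default_rng(0)
events = np.column_stack([
    np.full(500, 0.998),          # timestamps (s)
    rng.normal(320, 5, 500),      # x coordinates
    rng.normal(240, 5, 500),      # y coordinates
    rng.integers(0, 2, 500),      # polarity
])
print(recent_obstacle(events, now_s=1.0))   # ~[320, 240]
```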

[ UZH ]


In this video we present the autonomous exploration of a staircase with four sub-levels and the transition between two floors of the Satsop Nuclear Power Plant during the DARPA Subterranean Challenge Urban Circuit. The utilized system is a collision-tolerant flying robot capable of multi-modal Localization And Mapping fusing LiDAR, vision and inertial sensing. Autonomous exploration and navigation through the staircase is enabled through a Graph-based Exploration Planner implementing a specific mode for vertical exploration. The collision-tolerance of the platform was of paramount importance especially due to the thin features of the involved geometry such as handrails. The whole mission was conducted fully autonomously.

[ CERBERUS ]


At Cognizant’s Inclusion in Tech: Work of Belonging conference, Cognizant VP and Managing Director of the Center for the Future of Work, Ben Pring, sits down with Mary “Missy” Cummings. Missy is currently a professor at Duke University and the director of the Duke Robotics Lab. Interestingly, Missy began her career as one of the first female fighter pilots in the U.S. Navy. Having worked in predominantly male fields – the military, tech, academia – Missy understands the prevalence of sexism, bias, and gender discrimination.

Let’s hear more from Missy Cummings on, like, everything.

[ Duke ] via [ Cognizant ]


You don’t need to mountain bike for the Skydio 2 to be worth it, but it helps.

[ Skydio ]


Here’s a look at one of the preliminary simulated cave environments for the DARPA SubT Challenge.

[ Robotika ]


SherpaUW is a hybrid walking and driving exploration rover for subsea applications. The locomotion system consists of four legs with 5 active DoF each. Additionally, a 6 DoF manipulation arm is available. All joints of the legs and the manipulation arm are sealed against water. The arm is pressure compensated, allowing deployment in deep-sea applications.

SherpaUW’s hybrid crawler design is intended to allow for extended long-term missions on the sea floor. Since, unlike traditional underwater ROVs (Remotely Operated Vehicles), it requires no extra energy to maintain its posture and position, SherpaUW is well suited for repeated and precise sampling operations, for example monitoring black smokers over a longer period of time.

[ DFKI ]


In collaboration with the Army and Marines, 16 active-duty Army soldiers and Marines used Near Earth’s technology to safely execute 64 resupply missions in an operational demonstration at Fort AP Hill, Virginia in Sep 2019. This video shows some of the modes used during the demonstration.

[ NEA ]


For those of us who aren’t either lucky enough or cursed enough to live with our robotic co-workers, HEBI suggests that now might be a great time to try simulation.

[ GitHub ]


DJI Phantom 4 Pro V2.0 is a complete aerial imaging solution, designed for the professional creator. Featuring a 1-inch CMOS sensor that can shoot 4K/60fps videos and 20MP photos, the Phantom 4 Pro V2.0 grants filmmakers absolute creative freedom. The OcuSync 2.0 HD transmission system ensures stable connectivity and reliability, five directions of obstacle sensing ensures additional safety, and a dedicated remote controller with a built-in screen grants even greater precision and control.

US $1,600, or $2,000 with VR goggles.

[ DJI ]


Not sure why now is the right time to introduce the Fetch research robot, but if you forgot it existed, here’s a reminder.

[ Fetch ]


Two keynotes from the MBZIRC Symposium, featuring Oussama Khatib and Ron Arkin.

[ MBZIRC ]


And here are a couple of talks from the 2020 ROS-I Consortium.

Roger Barga, GM of AWS Robotics and Autonomous Services at Amazon, shares some of the latest developments around ROS and advanced robotics in the cloud.

Alex Shikany, VP of Membership and Business Intelligence for A3, shares insights from his organization on the relationship between robotics growth and employment.

[ ROS-I ]


Many tech companies are trying to build machines that detect people’s emotions, using techniques from artificial intelligence. Some companies claim to have succeeded already. Dr. Lisa Feldman Barrett evaluates these claims against the latest scientific evidence on emotion. What does it mean to “detect” emotion in a human face? How often do smiles express happiness and scowls express anger? And what are emotions, scientifically speaking?

[ Microsoft ]



Video Friday: Robots Help Keep Medical Staff Safe at COVID-19 Hospital

Par Evan Ackerman

Video Friday is your weekly selection of awesome robotics videos, collected by your Automaton bloggers. We’ll also be posting a weekly calendar of upcoming robotics events for the next few months; here’s what we have so far (send us your events!):

HRI 2020 – March 23-26, 2020 – [ONLINE EVENT]
ICARSC 2020 – April 15-17, 2020 – [ONLINE EVENT]
ICRA 2020 – May 31-June 4, 2020 – [SEE ATTENDANCE SURVEY]
ICUAS 2020 – June 9-12, 2020 – Athens, Greece
CLAWAR 2020 – August 24-26, 2020 – Moscow, Russia

Let us know if you have suggestions for next week, and enjoy today’s videos.


UBTECH Robotics’ ATRIS, AIMBOT, and Cruzr robots were deployed at a Shenzhen hospital specialized in treating COVID-19 patients. The company says the robots, which are typically used in retail and hospitality scenarios, were modified to perform tasks that can help keep the hospital safer for everyone, especially front-line healthcare workers. The tasks include providing videoconferencing services between patients and doctors, monitoring the body temperatures of visitors and patients, and disinfecting designated areas.

The Third People’s Hospital of Shenzhen (TPHS), the only designated hospital for treating COVID-19 in Shenzhen, a metropolis with a population of more than 12.5 million, has introduced an intelligent anti-epidemic solution to combat the coronavirus.

AI robots are playing a key role. The UBTECH-developed robot trio, namely ATRIS, AIMBOT, and Cruzr, are giving a helping hand to monitor body temperature, detect people without masks, spray disinfectants and provide medical inquiries.

[ UBTECH ]

Someone has spilled gold all over the place! Probably one of those St. Paddy’s leprechauns... Anyways... It happened near a Robotiq Wrist Camera and Epick setup so it only took a couple of minutes to program and “pick and place” the mess up.

Even in situations like these, it’s important to stay positive and laugh a little. We had this ready and thought we’d still share. Stay safe!

[ Robotiq ]


HEBI Robotics is helping out with social distancing by controlling a robot arm in Austria from their lab in Pittsburgh.

Can’t be too careful!

[ HEBI Robotics ]

Thanks Dave!


SLIDER, a new robot under development at Imperial College London, reminds us a little bit of what SCHAFT was working on with its straight-legged design.

[ Imperial ]


Imitation learning is an effective and safe technique to train robot policies in the real world because it does not depend on an expensive random exploration process. However, due to the lack of exploration, learning policies that generalize beyond the demonstrated behaviors is still an open challenge. We present a novel imitation learning framework to enable robots to 1) learn complex real world manipulation tasks efficiently from a small number of human demonstrations, and 2) synthesize new behaviors not contained in the collected demonstrations. Our key insight is that multi-task domains often present a latent structure, where demonstrated trajectories for different tasks intersect at common regions of the state space. We present Generalization Through Imitation (GTI), a two-stage offline imitation learning algorithm that exploits this intersecting structure to train goal-directed policies that generalize to unseen start and goal state combinations.
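
The two-stage structure in that abstract is easier to see in code. Here’s a structural sketch in Python; every name is hypothetical, and the nearest-neighbor lookup is a stand-in for the neural networks GTI actually trains:

```python
import numpy as np

# Structural sketch of the two-stage idea described in the GTI abstract.
# Everything here is a stand-in, not the authors' implementation.

class GoalConditionedImitator:
    """Stage 1: imitate (state, goal) -> action pairs mined from demos,
    where the 'goal' is simply a state a few steps further along."""

    LOOKAHEAD = 5

    def fit(self, demos):
        # demos: list of trajectories; each trajectory is a list of
        # (state, action) pairs with states as np.ndarray.
        self.samples = []
        for traj in demos:
            for i, (state, action) in enumerate(traj):
                goal = traj[min(i + self.LOOKAHEAD, len(traj) - 1)][0]
                self.samples.append((state, goal, action))
        return self

    def act(self, state, goal):
        # Nearest-neighbor stand-in for a trained policy network.
        _, _, action = min(
            self.samples,
            key=lambda s: np.linalg.norm(s[0] - state)
                          + np.linalg.norm(s[1] - goal),
        )
        return action

# Stage 2 (not shown) fits a generative model over goals reachable from
# the current state; sampling it at test time lets the robot cross between
# demonstrated trajectories where they intersect, yielding behavior for
# start-goal combinations never shown in any single demonstration.
```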

[ GTI ]


Here are two excellent videos from UPenn’s Kod*lab showing the capabilities of their programmable compliant origami spring things.

[ Kod*lab ]


We met Bornlove when we were reporting on drones in Tanzania in 2018, and it’s good to see that he’s still improving on his built-from-scratch drone.

[ ADF ]


Laser. Guided. Sandwich. Stacking.

[ Kawasaki ]


The Self-Driving Car Research Studio is a highly expandable and powerful platform designed specifically for academic research. It includes the tools and components researchers need to start testing and validating their concepts and technologies on the first day, without spending time and resources on building DIY platforms or implementing hobby-level vehicles. The research studio includes a fleet of vehicles, software tools enabling researchers to work in Simulink, C/C++, Python, or ROS, with pre-built libraries and models, support for simulated environments, and even a set of reconfigurable floor panels with road patterns and a set of traffic signs. The research studio’s feature vehicle, QCar, is a 1/10-scale model vehicle powered by an NVIDIA Jetson TX2 supercomputer and equipped with LIDAR, 360-degree vision, a depth sensor, an IMU, encoders, and other sensors, as well as user-expandable IO.

[ Quanser ]

Thanks Zuzana!


The Swarm-Probe Enabling ATEG Reactor, or SPEAR, is a nuclear electric propulsion spacecraft that uses a new, lightweight reactor moderator and advanced thermoelectric generators (ATEGs) to greatly reduce overall core mass. If the total mass of an NEP system could be reduced to levels that could be launched on smaller vehicles, these devices could deliver scientific payloads anywhere in the solar system.

One major destination of recent importance is Europa, one of the moons of Jupiter, which may contain traces of extraterrestrial life deep beneath the surface of its icy crust. Occasionally, the subsurface water on Europa violently breaks through the icy crust and bursts into the space above, creating a large water plume. One proposed method of searching for evidence of life on Europa is to orbit the moon and scan these plumes for ejected organic material. By deploying a swarm of Cubesats, these plumes can be flown through and analyzed multiple times to find important scientific data.

[ SPEAR ]


This hydraulic cyborg hand costs just $35.

Available next month in Japan.

[ Elekit ]


Microsoft is collaborating with researchers from Carnegie Mellon University and Oregon State University, collectively named Team Explorer, to compete in the DARPA Subterranean (SubT) Challenge. These challenges are designed to test how drones and robots perform in hazardous physical environments that humans can’t access safely. By participating in these challenges, the team hopes to find solutions that will help emergency first responders find survivors more quickly.

[ Team Explorer ]


Aalborg University Hospital is the largest hospital in the North Jutland region of Denmark. Up to 3,000 blood samples arrive here in the lab every day. They must be tested and sorted – a time-consuming and monotonous process which was done manually until now. The university hospital has now automated the procedure: a robot-based system and intelligent transport boxes ensure the quality of the samples – and show how workflows in hospitals can be simplified by automation.

[ Kuka ]


This video shows human-robot collaboration for assembly of a gearbox mount in a realistic replica of a Volkswagen AG production line. Knowledge-based robot skills enable autonomous operation of a mobile dual-arm robot working side by side with a human worker.

[ DFKI ]


A brief overview of what’s going on in Max Likhachev’s lab at CMU.

Always good to see PR2 keeping busy!

[ CMU ]


The Intelligent Autonomous Manipulation (IAM) Lab at the Carnegie Mellon University (CMU) Robotics Institute brings together researchers to address the challenges of creating general purpose robots that are capable of performing manipulation tasks in unstructured and everyday environments. Our research focuses on developing learning methods for robots to model tasks and acquire versatile and robust manipulation skills in a sample-efficient manner.

[ IAM Lab ]


Jesse Hostetler is an Advanced Computer Scientist in the Vision and Learning org at SRI International in Princeton, NJ. In this episode of The Dish TV they explore the different aspects of artificial intelligence, and creating robots that use sleep and dream states to prevent catastrophic forgetting.

[ SRI ]


On the latest episode of the AI Podcast, Lex interviews Anca Dragan from UC Berkeley.

Anca Dragan is a professor at Berkeley, working on human-robot interaction -- algorithms that look beyond the robot’s function in isolation, and generate robot behavior that accounts for interaction and coordination with human beings.

[ AI Podcast ]


Stanford Makes Giant Soft Robot From Inflatable Tubes

Par Evan Ackerman

As much as we love soft robots (and we really love soft robots), the vast majority of them operate pneumatically (or hydraulically) at larger scales, especially when they need to exert significant amounts of force. This causes complications, because pneumatics and hydraulics generally require a pump somewhere to move fluid around, so you often see soft robots tethered to external and decidedly non-soft power sources. There’s nothing wrong with this, really, because there are plenty of challenges that you can still tackle that way, and there are some up-and-coming technologies that might result in soft pumps or gas generators.

Researchers at Stanford have developed a new kind of (mostly) soft robot based around a series of compliant, air-filled tubes. It’s human scale, moves around, doesn’t require a pump or tether, is more or less as safe as large robots get, and even manages to play a little bit of basketball.

Stanford soft robot
Image: Stanford/Science Robotics

Stanford’s soft robot consists of a set of identical robotic roller modules mounted onto inflated fabric tubes (A). The rollers pinch the fabric tube between rollers, creating an effective joint (B) that can be relocated by driving the rollers. The roller modules actuate the robot by driving along the tube, simultaneously lengthening one edge while shortening another (C). The roller modules connect to each other at nodes using three-degree-of-freedom universal joints that are composed of a clevis joint that couples two rods, each free to spin about its axis (D). The robot moves untethered outdoors using a rolling gait (E).

This thing looks a heck of a lot like the tensegrity robots that NASA Ames has been working on forever, and which are now being commercialized (hopefully?) by Squishy Robotics. Stanford’s model is not technically a tensegrity robot, though, because it doesn’t use structural components that are under tension (like cables). The researchers refer to this kind of robot as “isoperimetric,” which means while discrete parts of the structure may change length, the overall length of all the parts put together stays the same. This means it’s got a similar sort of inherent compliance across the structure to tensegrity robots, which is one of the things that makes them so appealing. 

While the compliance of Stanford’s robot comes from a truss-like structure made of air-filled tubes, its motion relies on powered movable modules. These modules pinch the tube that they’re located on through two cylindrical rollers (without creating a seal), and driving the rollers moves the module back and forth along the tube, effectively making one section of the tube longer and the other one shorter. Although this is just one degree of freedom, having a whole bunch of tubes each with an independently controlled roller module means that the robot as a whole can exhibit complex behaviors, like drastic shape changes, movement, and even manipulation.
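
The bookkeeping behind that constant-length trick is simple enough to show in a few lines of Python, with made-up numbers:

```python
# Minimal arithmetic illustration of the "isoperimetric" idea: a roller
# module pinches an inflated tube into two edges of lengths a and b, and
# driving the roller moves the pinch point, so a + b never changes.

tube_length = 2.0   # meters of tube between two nodes (made-up value)
roller_pos = 0.8    # distance of the pinch point from node A

edge_a, edge_b = roller_pos, tube_length - roller_pos
assert abs((edge_a + edge_b) - tube_length) < 1e-9

# Driving the roller 0.3 m toward node B lengthens one edge and shortens
# the other by the same amount; no air needs to be pumped in or out.
roller_pos += 0.3
edge_a, edge_b = roller_pos, tube_length - roller_pos
print(edge_a, edge_b, edge_a + edge_b)   # 1.1 0.9 2.0
```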

There are numerous advantages to a design like this. You get all the advantages of pneumatic robots (compliance, flexibility, collapsibility, durability, high strength to weight ratio) without requiring some way of constantly moving air around, since the volume of air inside the robot stays constant. Each individual triangular module is self-contained (with one tube, two active roller modules, and one passive anchor module) and easy to combine with similar modules—the video shows an octahedron, but you can easily add or subtract modules to make a variety of differently shaped robots with different capabilities.

Since the robot is inherently so modular, there are all kinds of potential applications for this thing, as the researchers speculate in a paper published today in Science Robotics:

The compliance and shape change of the robot could make it suitable for several tasks involving humans. For example, the robot could work alongside workers, holding parts in place as the worker bolts them in place. In the classroom, the modularity and soft nature of the robotic system make it a potentially valuable educational tool. Students could create many different robots with a single collection of hardware and then physically interact with the robot. By including a much larger number of roller modules in a robot, the robot could function as a shape display, dynamically changing shape as a sort of high–refresh rate 3D printer. Incorporating touch-sensitive fabric into the structure could allow users to directly interact with the displayed shapes. More broadly, the modularity allows the same hardware to build a diverse family of robots—the same roller modules can be used with new tube routings to create new robots. If the user needed a robot to reach through a long, narrow passageway, they could assemble a chain-like robot; then, for a locomoting robot, they could reassemble into a spherical shape.

Stanford soft robot
Image: Farrin Abbott

I’m having trouble picturing some of that stuff, but the rest of it sounds like fun.

We’re obligated to point out that because of the motorized roller modules, this soft robot is really only semi-soft, and you could argue that it’s not fundamentally all that much better than hydraulic or pneumatic soft robots with embedded rigid components like batteries and pumps. Calling this robot “inherently human-safe,” as the researchers do, might be overselling it slightly, in that it has hard edges, pokey bits, and what look to be some serious finger-munchers. It does sound like there might be some potential to replace the roller modules with something softer and more flexible, which will be a focus of future work.

“An untethered isoperimetric soft robot,” by Nathan S. Usevitch, Zachary M. Hammond, Mac Schwager, Allison M. Okamura, Elliot W. Hawkes, and Sean Follmer from Stanford University and UCSB, was published in Science Robotics.

What Is a Robot? Rodney Brooks Offers an Answer—in Sonnet Form

Par Rodney Brooks

Editor’s Note: When we asked Rodney Brooks if he’d write an article for IEEE Spectrum on his definition of robot, he wrote back right away. “I recently learned that Warren McCulloch”—one of the pioneers of computational neuroscience—“wrote sonnets,” Brooks told us. “He, and your request, inspired me. Here is my article—a little shorter than you might have desired.” Included in his reply were 14 lines composed in iambic pentameter. Brooks titled it “What Is a Robot?” Later, after a few tweaks to improve the metric structure of some of the lines, he added, “I am no William Shakespeare, but I think it is now a real sonnet, if a little clunky in places.”

What Is a Robot?*
By Rodney Brooks

Shall I compare thee to creatures of God?
Thou art more simple and yet more remote.
You move about, but still today, a clod,
You sense and act but don’t see or emote.

You make fast maps with laser light all spread,
Then compare shapes to object libraries,
And quickly plan a path, to move ahead,
Then roll and touch and grasp so clumsily.

You learn just the tiniest little bit,
And start to show some low intelligence,
But we, your makers, Gods not, we admit,
All pledge to quest for genuine sentience.

    So long as mortals breathe, or eyes can see,
    We shall endeavor to give life to thee.

* With thanks to William Shakespeare

Rodney Brooks is the Panasonic Professor of Robotics (emeritus) at MIT, where he was director of the AI Lab and then CSAIL. He has been cofounder of iRobot, Rethink Robotics, and Robust AI, where he is currently CTO.

Video Friday: Autonomous Security Robot Meets Self-Driving Tesla

Par Evan Ackerman

Video Friday is your weekly selection of awesome robotics videos, collected by your Automaton bloggers. We’ll also be posting a weekly calendar of upcoming robotics events for the next few months; here’s what we have so far (send us your events!):

HRI 2020 – March 23-26, 2020 – Cambridge, U.K. [CANCELED]
ICARSC 2020 – April 15-17, 2020 – Ponta Delgada, Azores
ICRA 2020 – May 31-June 4, 2020 – Paris, France
ICUAS 2020 – June 9-12, 2020 – Athens, Greece
CLAWAR 2020 – August 24-26, 2020 – Moscow, Russia

Let us know if you have suggestions for next week, and enjoy today’s videos.


Having robots learn dexterous tasks requiring real-time hand-eye coordination is hard. Many tasks that we would consider simple, like hanging up a baseball cap on a rack, would be very challenging for most robot software. What’s more, for a robot to learn each new task, it typically takes significant amounts of engineering time to program the robot. Pete Florence and Lucas Manuelli in the Robot Locomotion Group took a step closer to solving that problem with their work.

[ Paper ]


Octo-Bouncer is not a robot that bounces an octopus. But it’s almost as good. Almost.

[ Electron Dust ]


D’Kitty (pronounced as “The Kitty”) is a 12-degree-of-freedom platform for exploring learning-based techniques in locomotion and it’s adooorable!

[ D’Kitty ]


Knightscope Autonomous Security Robot meets Tesla Model 3 in Summon Mode!  See, nothing to fear, Elon. :-)

The robots also have a message for us:

Humans, wash your hands, say Knightscope robots

[ Knightscope ]


If you missed the robots vs. humans match at RoboCup 2019, here are the highlights.

[ Tech United ]


Fraunhofer developed this cute little demo of autonomously navigating, cooperating mobile robots executing a miniaturized logistics scenario involving chocolate for the LogiMAT trade show. Which was canceled. But enjoy the video!

[ Fraunhofer ]

Thanks Thilo!


Drones can potentially be used for taking soil samples in awkward areas by dropping darts equipped with accelerometers. But the really clever bit is how the drone can retrieve the dart on its own.

[ UH ]


Rope manipulation is one of those human-easy robot-hard things that’s really, really robot-hard.

[ UC Berkeley ]


Autonomous landing on a moving platform presents unique challenges for multirotor vehicles, including the need to accurately localize the platform, fast trajectory planning, and precise/robust control. This work presents a fully autonomous vision-based system that addresses these limitations by tightly coupling the localization, planning, and control, thereby enabling fast and accurate landing on a moving platform. The platform’s position, orientation, and velocity are estimated by an extended Kalman filter using simulated GPS measurements when the quadrotor-platform distance is large, and by a visual fiducial system when the platform is nearby. To improve the performance, the characteristics of the turbulent conditions are accounted for in the controller. The landing trajectory is fast, direct, and does not require hovering over the platform, as is typical of most state-of-the-art approaches. Simulations and hardware experiments are presented to validate the robustness of the approach.
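
The sensor-switching idea at the heart of that abstract reduces to a standard Kalman measurement update with two different noise models. Here’s a minimal one-dimensional Python sketch (the real system runs a full extended Kalman filter over position, orientation, and velocity; all values below are made up):

```python
# Minimal 1-D Kalman-filter sketch of the sensor-switching idea: far from
# the platform the filter fuses (simulated) GPS, and near the platform it
# switches to a much less noisy visual fiducial measurement.

GPS_VAR, FIDUCIAL_VAR, SWITCH_DIST = 1.0, 0.01, 2.0   # hypothetical values

def update(x, P, z, R):
    """Standard Kalman measurement update for a direct position measurement."""
    K = P / (P + R)                      # Kalman gain
    return x + K * (z - x), (1 - K) * P  # updated estimate and variance

x, P = 0.0, 10.0   # platform position estimate and its variance
for z_gps, z_tag, dist in [(5.2, None, 8.0), (5.1, None, 4.0), (5.0, 4.98, 1.5)]:
    if dist > SWITCH_DIST:   # platform far away: only GPS is usable
        x, P = update(x, P, z_gps, GPS_VAR)
    else:                    # platform close: visual fiducial takes over
        x, P = update(x, P, z_tag, FIDUCIAL_VAR)
print(x, P)   # estimate converges hard toward the fiducial measurement
```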

[ MIT ACL ]


And now, this.

[ Soft Robotics ]


The EPRI (Electric Power Research Institute) recently worked with Exyn Technologies, a pioneer in autonomous aerial robot systems, on a safety and data collection demonstration at Exelon’s Peach Bottom Atomic Power Station in Pennsylvania. Exyn’s drone was able to autonomously inspect components in elevated, hard-to-access areas, search for temperature anomalies, and collect dose rate surveys in radiological areas—without the need for a human operator.

[ Exyn ]

Thanks Zach!


Relax: Pepper is here to help with all of your medical problems.

[ Softbank ]


Amir Shapiro at BGU, along with Yoav Golan (whose work on haptic control of dogs we covered last year), has developed an interesting new kind of robotic finger with passively adjustable friction.

[ Paper ] via [ BGU ]

Thanks Andy!


UBTECH’s Alpha Mini robot, running Smart Robot’s “Maatje” software, is expected to offer healthcare services to children at Sint Maartenskliniek in the Netherlands. Before deployment, three of the robots were trained in exercise, empathy, and cognition capabilities.

[ UBTECH ]


Get ready for CYBATHLON, postponed to September 2020!

[ Cybathlon ]


In partnership with the World Mosquito Program (WMP), WeRobotics has led the development and deployment of a drone-based release mechanism that has been shown to help reduce the incidence of dengue fever.

[ WeRobotics ]


Sadly, koalas today face a dire outlook across Australia due to human development, droughts, and forest fires. Events like these and a declining population make conservation and research more important than ever. Drones offer a more efficient way to count koalas from above, covering more ground than was possible in the past. Dr. Hamilton and his team at the Queensland University of Technology use DJI drones to count koalas, using the data obtained to better help these furry friends from down under.

[ DJI ]


Fostering the Next Generation of Robotics Startups | TC Sessions: Robotics

Robotics and AI are the future of many or most industries, but the barrier to entry is still difficult to surmount for many startups. Speakers will discuss the challenges of serving robotics startups and companies that require robotics labor, from bootstrapped startups to large-scale enterprises.

[ TechCrunch ]


Skin-like, Flexible Sensor Lets Robots Detect Us

Par Michelle Hampson

A new sensor for robots is designed to make our physical interactions with these machines a little smoother—and safer. The sensor, which is now being commercialized, allows robots to measure the distance and angle of approach of a human or object in close proximity.

Industrial robots often work autonomously to complete tasks. But increasingly, collaborative robots are working alongside humans. To avoid collisions in these circumstances, collaborative robots need highly accurate sensors to detect when someone (or something) is getting a little too close.

Many sensors have been developed for this purpose, each with its own advantages and disadvantages. Those that rely on sound and light (for example, infrared or ultrasonic time-of-flight sensors) measure the reflections of those signals and must therefore be closely aligned with the approaching object, which limits their field of detection.
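
The round-trip arithmetic those time-of-flight sensors rely on is worth a quick look; here’s a two-line Python check, with illustrative numbers:

```python
# Sanity check of the round-trip arithmetic behind time-of-flight sensing:
# distance = wave speed * round-trip time / 2.

SPEED_OF_LIGHT = 299_792_458.0   # m/s (infrared time-of-flight)
SPEED_OF_SOUND = 343.0           # m/s in air at ~20 °C (ultrasonic)

def tof_distance(round_trip_s, wave_speed):
    # The signal travels out to the object and back, hence the divide-by-two.
    return wave_speed * round_trip_s / 2

print(tof_distance(2e-9, SPEED_OF_LIGHT))   # ~0.30 m from a 2 ns echo
print(tof_distance(2e-3, SPEED_OF_SOUND))   # ~0.34 m from a 2 ms echo
```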

sensor
Photos: Aidin Robotics

To circumvent this problem, a group of researchers in South Korea created a new proximity sensor that measures impedance. It works by inducing electric and magnetic fields with a wide angle. When a human approaches the sensor, their body causes changes in resistance within those fields. The sensor measures the changes and uses that data to inform the robot of the person’s distance and angle of approach. The researchers describe their design in a study published 26 February in IEEE Transactions on Industrial Electronics. It has since been commercialized by Aidin Robotics.

The sensor is made of electrodes with a flexible, coil-like design. “Since the sensor is highly flexible, it can be manufactured in various shapes tailored to the geometries of the robot,” explains Yoon Haeng Lee, CEO of Aidin Robotics. “Moreover, it is able to classify the materials of the approaching objects such as human, metals, and plastics.”

Tests show that the sensor can detect humans from up to 30 centimeters away. It has an accuracy of 90 percent when on a flat surface. However, the electric and magnetic fields become weaker and more dispersed when the sensor is laid over a curved surface. Therefore, the sensor’s accuracy decreases as the underlying surface becomes increasingly curved.

Every robot is different, and the sensor’s performance may change based on a specific robot’s characteristics. The latest version of the integrated sensor module, when installed on a curved surface, can detect objects from up to 20 centimeters away with an accuracy of 94 percent.

Lee says the device is already being used in some collaborative robot models, including the UR10 (by Universal Robots) and Indy7 (by Neuromeka Inc.). “In the future, the sensor module will be mass-produced and applied to the other service robots, as well as collaborative and industrial robots, to contribute to the truly safe work and coexistence of robots and humans,” he says.

Swarm of Robots Forms Complex Shapes Without Centralized Control

Par Evan Ackerman

Swarms of small, inexpensive robots are a compelling research area in robotics. With a swarm, you can often accomplish tasks that would be impractical (or impossible) for larger robots to do, in a way that’s much more resilient and cost effective than larger robots could ever be.

The tricky thing is getting a swarm of robots to work together to do what you want them to do, especially if what you want them to do is a task that’s complicated or highly structured. It’s not too bad if you have some kind of controller that can see all the robots at once and tell them where to go, but that’s a luxury that you’re not likely to find outside of a robotics lab.

Researchers at Northwestern University, in Evanston, have been working on a way to provide decentralized control for a swarm of 100 identically programmed small robots, which allows them to collectively work out a way to transition from one shape to another without running into each other even a little bit.

The process that the robots use to figure out where to go seems like it should be mostly straightforward: They’re given a shape to form, so each robot picks its goal location (where it wants to end up as part of the shape), and then plans a path to get from where it is to where it needs to go, following a grid pattern to make things a little easier. But using this method, you immediately run into two problems: First, since there’s no central control, you may end up with two (or more) robots with the same goal; and second, there’s no way for any single robot to path plan all the way to its goal in a way that it can be certain won’t run into another robot.

To solve these problems, the robots all talk to each other as they move, not just to avoid colliding with their friends, but also to figure out where their friends are going and whether it might be worth swapping destinations. Since the robots are all the same, they don’t really care where exactly they end up, as long as all of the goal positions are filled up. And if one robot talks to another robot and they agree that a goal swap would result in both of them having to move less, they go ahead and swap. The algorithm makes sure that all goal positions are filled eventually, and also helps robots avoid running into each other through judicious use of a “wait” command.
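
The pairwise swap rule is simple enough to sketch in Python. This toy version (not the Northwestern implementation, which runs distributed between communicating neighbors and also handles motion, collisions, and the “wait” command) just shows why swapping converges toward shorter total travel on a grid:

```python
# Toy sketch of the pairwise goal-swapping rule. Robots move on a grid,
# so travel costs here are Manhattan distances.

def manhattan(p, q):
    return abs(p[0] - q[0]) + abs(p[1] - q[1])

def swap_goals(positions, goals):
    """Greedily swap goal assignments between robot pairs whenever the swap
    reduces their combined travel distance; repeat until no swap helps."""
    goals = list(goals)
    improved = True
    while improved:
        improved = False
        for i in range(len(positions)):
            for j in range(i + 1, len(positions)):
                before = (manhattan(positions[i], goals[i])
                          + manhattan(positions[j], goals[j]))
                after = (manhattan(positions[i], goals[j])
                         + manhattan(positions[j], goals[i]))
                if after < before:
                    goals[i], goals[j] = goals[j], goals[i]
                    improved = True
    return goals

robots = [(0, 0), (5, 0)]
print(swap_goals(robots, [(6, 1), (1, 1)]))   # -> [(1, 1), (6, 1)]
```

Each swap strictly decreases the robots’ combined travel distance, which is why the greedy loop (and its distributed, neighbor-to-neighbor equivalent) always terminates.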

What’s really novel about this approach is that despite the fully distributed nature of the algorithm, it’s also provably correct, and will result in the guaranteed formation of an entire shape without collisions or deadlocks. As far as the researchers know, it’s the first algorithm to do this. And it means that since it’s effective with no centralized control at all, you can think of “the swarm” as a sort of Borg-like collective entity of its own, which is pretty cool.

The Northwestern researchers behind this are Michael Rubenstein, assistant professor of electrical engineering and computer science, and his PhD student Hanlin Wang. You might remember Mike from his work on Kilobots at Harvard, which we wrote about in 2011, 2013, and again in 2014, when Mike and his fellow researchers managed to put together a thousand (!) of them. As awesome as it is to have a thousand robots, when you start thinking about what it takes to charge, fix, and modify a thousand robots (a thousand robots!), it makes sense that they’ve updated the platform a bit (now called Coachbot) and reduced the swarm size to 100 physical robots, making up the rest in simulation.

These robots, we’re told, are “much better behaved.”

Robot swarm
Image: Northwestern University

The hardware used by the researchers in their experiments. 1. The Coachbot V2.0 mobile robots (height of 12 cm and a diameter of 10 cm) are equipped with a localization system based on the HTC Vive (a), Raspberry Pi b+ computer (b), electronics motherboard (c), and rechargeable battery (d). The robot arena used in experiments has an overhead camera only used for recording videos (e) and an overhead HTC Vive base station (f). The experiments relied on a swarm of 100 robots (g). 2. The Coachbot V2.0 swarm communication network consists of an ethernet connection between the base station and a Wi-Fi router (green link), TCP/IP connections (blue links), and layer 2 broadcasting connections (black links). 3. A swarm of 100 robots. 4. The robots recharge their batteries by connecting to two metal strips attached to the wall.

For more details on this work, we spoke with Mike Rubenstein via email.

IEEE Spectrum: Why switch to the new hardware platform instead of Kilobots?

Mike Rubenstein: We wanted to make a platform more capable and extendable than Kilobot, and improve on lessons learned with Kilobot. These robots have far better locomotion capabilities than Kilobot, and include absolute position sensing, which makes operating the robots easier. They have truly “hands free” operation. For example, with Kilobot, to start an experiment you had to place the robots in their starting positions by hand (sometimes taking an hour or two), while with these robots, a user just specifies a set of positions for all the robots and presses the “go” button. With Kilobot it was also hard to see what the state of all the robots was; for example, it was difficult to see if 999 robots were powered on or 1000 robots were powered on. These new robots send state information back to a user display, making it easy to understand the full state of the swarm.
 
How much of a constraint is grid-ifying the goal points and motion planning?

The grid constraint obviously makes motion less efficient as they must move in Manhattan-type paths, not straight line paths, so most of the time they move a bit farther. The reason we constrain the motions to move in a discrete grid is that it makes the robot algorithm less computationally complex and reasoning about collisions and deadlock becomes a lot easier, which allowed us to provide guarantees that the shape will form successfully. 

Swarm robots
Image: Northwestern University

Still images of a 100 robot shape formation experiment. The robots start in a random configuration, and move to form the desired “N” shape. Once this shape is formed, they then form the shape “U.” The entire sequence is fully autonomous. (a) T = 0 s; (b) T = 20 s; (c) T = 64 s; (d) T = 72 s; (e)  T = 80 s; (f) T = 112 s.

Can you tell us about those couple of lonely wandering robots at the end of the simulated “N” formation in the video?

In our algorithm, we don’t assign goal locations to all the robots at the start, they have to figure out on their own which robot goes where. The last few robots you pointed out happened to be far away from the goal location the swarm figured they should have. Instead of having that robot move around the whole shape to its goal, you see a subset of robots all shift over by one to make room for the robot in the shape closer to its current position.
 
What are some examples of ways in which this research could be applied to real-world useful swarms of robots?

One example could be the shape formation in modular self-reconfigurable robots. The hope is that this shape formation algorithm could allow these self-reconfigurable systems to automatically change their shape in a simple and reliable way. Another example could be warehouse robots, where robots need to move to assigned goals to pick up items. This algorithm would help them move quickly and reliably.
 
What are you working on next?

I’m looking at trying to understand how to enable large groups of simple individuals to behave in a controlled and reliable way as a group. I’ve started looking at this question in a wide range of settings; from swarms of ground robots, to reconfigurable robots that attach together by melting conductive plastic, to swarms of flying vehicles, to satellite swarms. 

“Shape Formation in Homogeneous Swarms Using Local Task Swapping,” by Hanlin Wang and Michael Rubenstein from Northwestern, is published in IEEE Transactions on Robotics.

Video Friday: NASA’s Curiosity Mars Rover Captures 1.8 Billion-Pixel Panorama

Par Evan Ackerman

Video Friday is your weekly selection of awesome robotics videos, collected by your Automaton bloggers. We’ll also be posting a weekly calendar of upcoming robotics events for the next few months; here’s what we have so far (send us your events!):

HRI 2020 – March 23-26, 2020 – Cambridge, U.K.
ICARSC 2020 – April 15-17, 2020 – Ponta Delgada, Azores
ICRA 2020 – May 31-June 4, 2020 – Paris, France
ICUAS 2020 – June 9-12, 2020 – Athens, Greece
CLAWAR 2020 – August 24-26, 2020 – Moscow, Russia

Let us know if you have suggestions for next week, and enjoy today’s videos.


NASA Curiosity Project Scientist Ashwin Vasavada guides this tour of the rover’s view of the Martian surface. Composed of more than 1,000 images and carefully assembled over the ensuing months, the larger version of this composite contains nearly 1.8 billion pixels of Martian landscape.

This panorama showcases "Glen Torridon," a region on the side of Mount Sharp that Curiosity is exploring. The panorama was taken between Nov. 24 and Dec. 1, 2019, when the Curiosity team was out for the Thanksgiving holiday. Since the rover would be sitting still with few other tasks to do while it waited for the team to return and provide its next commands, the rover had a rare chance to image its surroundings several days in a row without moving.

[ MSL ]


Sarcos has been making progress with its Guardian XO powered exoskeleton, which we got to see late last year in prototype stage:

The Sarcos Guardian XO full-body, powered exoskeleton is a first-of-its-kind wearable robot that enhances human productivity while keeping workers safe from strain or injury. Set to transform the way work gets done, the Guardian XO exoskeleton augments operator strength without restricting freedom of movement to boost productivity while dramatically reducing injuries.

[ Sarcos ]


Professor Hooman Samani, director of the Artificial Intelligence and Robotics Technology Laboratory (AIART Lab) at National Taipei University, Taiwan, writes in to share some ideas on how robots could be used to fight the coronavirus outbreak. 

Time is a critical issue when dealing with people affected by coronavirus. Also, due to the current emergency, doctors may be far away from patients. Additionally, avoiding direct contact with an infected person is a medical priority. Immediate monitoring and treatment using specific kits must be administered to the victim. We have designed and developed the Ambulance Robot (AmbuBot), which could be a solution to address those issues. AmbuBot could be placed in various locations, especially in busy, remote, or quarantined areas, to assist in the above-mentioned scenario. The AmbuBot also brings along an AED for sudden cardiac arrest events and facilitates various modes of operation, from manual to semi-autonomous to fully autonomous functioning.

[ AIART Lab ]


Digit is launching later this month alongside a brand new sim that’s a 1:1 match to both the API and physics of the actual robot. Here, we show off the ability to train a learned policy against the validated physics of the robot. We have a LOT more to say about RL with real hardware... stay tuned.

Staying tuned!

[ Agility Robotics ]


This video presents simulations and experiments highlighting the functioning of the proposed Trapezium Line Theta* planner, as well as its improvements over our previous work, namely the Obstacle Negotiating A* planner. First, we briefly present a comparison of our previous and new planners. We then show two simulations. The first shows the robot traversing an inclined corridor to reach a goal near a low-lying obstacle. This demonstrates the omnidirectional and any-angle motion planning improvement achieved by the new planner, as well as the independent planning for the front and back wheel pairs. The second simulation further demonstrates the key improvements mentioned above by having the robot traverse tight right-angled corridors. Finally, we present two real experiments on the CENTAURO robot. In the first experiment, the robot has to traverse into a narrow passage and then expand over a low-lying obstacle. The second experiment has the robot first expand over a wide obstacle and then move into a narrow passage.

To be presented at ICRA 2020.

[ Dimitrios Kanoulas ]


We’re contractually obligated to post any video with “adverse events” in the title.

[ JHU ]


Waymo advertises their self-driving system in this animated video that features a robot car making a right turn without indicating. Also pretty sure that it ends up in the wrong lane for a little bit after a super wide turn and blocks a crosswalk to pick up a passenger. Oops!

I’d still ride in one, though.

[ Waymo ]


Exyn is building the world’s most advanced autonomous aerial robots. Today, we launched our latest capability, Scoutonomy. Our pilotless robot can now ‘scout’ freely within a desired volume, such as a tunnel, or this parking garage. The robot sees the white boxes as ‘unknown’ space, and flies to explore them. The orange boxes are mapped obstacles. It also intelligently avoids obstacles in its path and identifies objects, such as people or cars. Scoutonomy can be used to safely and quickly find survivors in natural or man-made disasters.

[ Exyn ]


I don’t know what soma blocks are, but this robot is better with them than I am.

This work presents a planner that can automatically find an optimal assembly sequence for a dual-arm robot to assemble the soma blocks. The planner uses the mesh model of objects and the final state of the assembly to generate all possible assembly sequences, and evaluates the optimal assembly sequence by considering stability, graspability, and assemblability, as well as the need for a second arm. In particular, the need for a second arm is considered when supports from worktables and other workpieces are not enough to produce a stable assembly.

[ Harada Lab ]


Semantic grasping is the problem of selecting stable grasps that are functionally suitable for specific object manipulation tasks. In order for robots to effectively perform object manipulation, a broad sense of contexts, including object and task constraints, needs to be accounted for. We introduce the Context-Aware Grasping Engine, which combines a novel semantic representation of grasp contexts with a neural network structure based on the Wide & Deep model, capable of capturing complex reasoning patterns. We quantitatively validate our approach against three prior methods on a novel dataset consisting of 14,000 semantic grasps for 44 objects, 7 tasks, and 6 different object states. Our approach outperformed all baselines by statistically significant margins, producing new insights into the importance of balancing memorization and generalization of contexts for semantic grasping. We further demonstrate the effectiveness of our approach on robot experiments in which the presented model successfully achieved 31 of 32 suitable grasps.

[ RAIL Lab ]


I’m not totally convinced that bathroom cleaning is an ideal job for autonomous robots at this point, just because of the unstructured nature of a messy bathroom (if not of the bathroom itself). But this startup is giving it a shot anyway.

The cost target is $1,000 per month.

[ Somatic ] via [ TechCrunch ]


IHMC is designing, building, and testing a mobility assistance research device named Quix. The main function of Quix is to restore mobility to those with lower limb paralysis. To achieve this, the device has motors at the pelvis, hips, knees, and ankles, plus an onboard computer controlling the motors and various sensors incorporated into the system.

[ IHMC ]


In this major advance for mind-controlled prosthetics, U-M research led by Paul Cederna and Cindy Chestek demonstrates an ultra-precise prosthetic interface technology that taps faint latent signals from nerves in the arm and amplifies them to enable real-time, intuitive, finger-level control of a robotic hand.

[ University of Michigan ]


Coral reefs represent only 1% of the seafloor, but are home to more than 25% of all marine life. Reefs are declining worldwide. Yet, critical information remains unknown about basic biological, ecological, and chemical processes that sustain coral reefs because of the challenges to access their narrow crevices and passageways. A robot that grows through its environment would be well suited to this challenge as there is no relative motion between the exterior of the robot and its surroundings. We design and develop a soft growing robot that operates underwater and take a step towards navigating the complex terrain of a coral reef.

[ UCSD ]


What goes on inside those package lockers, apparently.

[ Dorabot ]


In the future robots could track the progress of construction projects. As part of the MEMMO H2020 project, we recently carried out an autonomous inspection of the Costain High Speed Rail site in London with our ANYmal robot, in collaboration with Edinburgh Robotics.

[ ORI ]


Soft Robotics technology enables seafood handling at high speed even with amorphous products like mussels, crab legs, and lobster tails.

[ Soft Robotics ]


Pepper and Nao had a busy 2019:

[ SoftBank Robotics ]


Chris Atkeson, a professor at the Robotics Institute at Carnegie Mellon University, watches a variety of scenes featuring robots from movies and television and breaks down how accurate their depictions really are. Would the Terminator actually have dialogue options? Are the "three laws" from I, Robot a real thing? Is it actually hard to erase a robot’s memory (a la Westworld)?

[ Chris Atkeson ] via [ Wired ]


This week’s CMU RI Seminar comes from Anca Dragan at UC Berkeley, on “Optimizing for Coordination With People.”

From autonomous cars to quadrotors to mobile manipulators, robots need to co-exist and even collaborate with humans. In this talk, we will explore how our formalism for decision making needs to change to account for this interaction, and dig our heels into the subtleties of modeling human behavior — sometimes strategic, often irrational, and nearly always influenceable. Towards the end, I’ll try to convince you that every robotics task is actually a human-robot interaction task (its specification lies with a human!) and how this view has shaped our more recent work.

[ CMU RI ]


Late Nights, Cool Hacks, and More Stories From the DARPA SubT Urban Circuit

Par Evan Ackerman

For the past two weeks, teams of robots (and their humans) have been exploring an unfinished nuclear power plant in Washington State as part of DARPA’s Subterranean Challenge. The SubT Challenge consists of three separate circuits, each representing a distinct underground environment: tunnel systems, urban underground, and cave networks.

The Urban Circuit portion of the challenge ended last Thursday, and DARPA live streamed all of the course runs and put together some great video recaps of the competition itself. But that footage represents just a small portion of what actually went on at the challenge, as teams raced to implement fixes and improvements in hardware and software in between runs, often staying up all night in weird places trying to get their robots to work better (or work at all).

We visited the SubT Urban Challenge during the official media day last week, and also spent some time off-site with the teams themselves, as they solved problems and tested their robots wherever they could, from nearby high schools to empty malls to hotel stairwells at 5 a.m. 

And the winner of the SubT Urban Circuit is...

The winner of the SubT Urban Circuit was Team CoSTAR, a collaboration between NASA JPL, MIT, Caltech, KAIST, LTU, and industry partners, including Clearpath Robotics and Boston Dynamics. Second place went to Carnegie Mellon’s Team Explorer, which took first at the previous SubT Tunnel Circuit six months ago, setting up a highly competitive Cave Circuit event which will take place six months from now.

We’ll have some more details on the teams’ final scores, but first here’s a brief post-challenge overview video from DARPA to get you caught up:

The Urban Circuit location: an unfinished nuclear power plant

The Urban Circuit of the DARPA Subterranean Challenge was held at the Satsop Business Park, about an hour and a half south of Seattle. 

Satsop
Photo: DARPA
Aerial photo of the unfinished Satsop nuclear power plant.

Started in 1977, the plant was about 80 percent complete when state funding fell through, and after nothing happened for a couple of decades, ownership was transferred to the Satsop Redevelopment Project to try to figure out how to turn the aging dystopian infrastructure into something useful. Something useful includes renting the space out for people to film action movies, and for DARPA to host challenges.

The biggest difference between Tunnel and Urban is that while Tunnel was mostly, you know, tunnels (mostly long straight-ish passages connected with each other), Urban included a variety of large spaces and interconnected small rooms spread out across multiple levels. This is a 5-minute long walkthrough from DARPA that shows one of the course configurations; you don’t need to watch the whole thing, but it should give you a pretty good idea of the sort of environment that these robots had to deal with:

The biggest challenge: Communications, or stairs?

While communications were an enormous challenge at the Tunnel Circuit, from talking with the teams it sounded like comms were not nearly as much of an issue at Urban, thanks to a combination of a slightly friendlier environment (concrete walls instead of meters of solid rock) and teams taking comms very, very seriously as they prepared their systems for this event. More teams used deployable networking nodes to build up a mesh network as their robots progressed farther into the course (more on this later), and there was also more of an emphasis on fully autonomous exploration, with robots comfortable operating for extended periods completely outside of communication range.

DARPA SubT
Photo: Evan Ackerman/IEEE Spectrum
Team garages at the event. You can’t see how cold it is, but if you could, you’d understand why they’re mostly empty.

When we talked to DARPA SubT Program Manager Tim Chung a few weeks ago, he was looking forward to creating an atmosphere of warm camaraderie between teams:

I’m super excited about how we set up the team garages at the Urban Circuit. It’ll be like pit row, in a way that really highlights how much I value the interactions between teams, it’ll be an opportunity to truly capitalize on having a high concentration of enthusiastic and ambitious roboticists in one area. 

Another challenge: Finding a warm place to test the robots

Having all the teams gathered at their garages would have been pretty awesome, except that the building somehow functioned as a giant heat sink, and while it was in the mid-30s Fahrenheit outside, it felt like the mid-20s inside! Neither humans nor robots had any particular desire to spend more time in the garages than was strictly necessary—most teams would arrive immediately before the start of their run staging time, and then escape to somewhere warmer immediately after their run ended. 

It wasn’t just the temperature that kept teams out of the garages—to test effectively, most teams needed a lot more dedicated space than was available on-site. Teams understood how important test environments were after the Tunnel Circuit, and most of them scrounged up spaces well in advance. Team CSIRO Data61 found an indoor horse paddock at the local fairgrounds. Team CERBERUS set up in an empty storefront in a half-dead mall about 20 miles away. And Team CoSTAR took over the conference center at a local hotel, which turned out to be my hotel, as I discovered when I met Spot undergoing testing in the hallway outside of my room right after I checked in:

DARPA SubT
Photo: Evan Ackerman/IEEE Spectrum
Team CoSTAR’s Spot robot (on loan from Boston Dynamics) undergoing testing in a hotel hallway.

Spot is not exactly the stealthiest of robots, and the hotel testing was not what you’d call low-key. I can tell you that CoSTAR finished their testing at around 5:15 a.m., when Spot’s THUMP THUMP THUMP THUMP THUMPing gait woke up pretty much the entire hotel as the robot made its way back to its hotel room. Spot did do a very good job on the stairs, though:

DARPA SubT
Photo: Evan Ackerman/IEEE Spectrum
Even with its top-heavy JPL autonomy and mapping payload, Spot was able to climb stairs without too much trouble.

After the early morning quadrupedal wake-up call, I put on every single layer of clothing I’d brought and drove up to the competition site for the DARPA media day. We were invited to watch the beginning of a few competition runs, take a brief course tour (after being sworn to secrecy), and speak with teams at the garages before and after their runs. During the Tunnel Circuit, I’d focused on the creative communications strategies that each team was using, but for Urban, I asked teams to tell me about some of the clever hacks they’d come up with to solve challenges specific to the Urban Circuit.

Here’s some of what teams came up with:

Team NCTU

Team NCTU from Taiwan has some of the most consistently creative approaches to the DARPA SubT courses we’ve seen. They’re probably best known for their “Duckiefloat” blimps, which had some trouble fitting through narrow tunnels during the Tunnel Circuit six months ago. Knowing that passages would be even slimmer for the Urban Circuit, NCTU built a carbon fiber frame around the Duckiefloats to squish their sides in a bit.

Photo: Evan Ackerman/IEEE Spectrum
Duckiefloat is much slimmer (if a bit less pleasingly spherical) thanks to a carbon fiber framework that squeezes it into a more streamlined shape to better fit through narrow corridors.

NCTU also added millimeter wave radar to one of the Duckiefloats as a lighter substitute for onboard lidar or RGBD cameras, and had good results navigating with the radar alone, which (as far as I know) is a unique approach. We will definitely be seeing more of Duckiefloat at the Cave Circuit.

Photo: Evan Ackerman/IEEE Spectrum
NCTU’s Anchorball droppable WiFi nodes now include a speaker, which the Husky UGV can localize with microphone arrays (the black circle with the white border).

At Tunnel, NCTU dropped mesh WiFi nodes that doubled as beacons, called Anchorballs. For Urban, the Anchorballs are 100 percent less ball-like, and incorporate a speaker, which plays chirping noises once deployed. Microphone arrays on the Husky UGVs can localize this chirping, allowing multiple robots to use the nodes as tie points to coordinate their maps.
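NCTU hasn’t published the details of their acoustic pipeline as far as I know, but the underlying trick is classic time-difference-of-arrival (TDOA) processing: cross-correlate the chirp as heard by two microphones a known distance apart, and the lag of the correlation peak gives you the bearing to the speaker. Here’s a minimal numpy sketch, with made-up parameters:

```python
import numpy as np

SPEED_OF_SOUND = 343.0  # m/s in air
MIC_SPACING = 0.1       # meters between two mics in the array (illustrative)
SAMPLE_RATE = 48000     # Hz

def bearing_to_chirp(mic_a, mic_b):
    """Estimate bearing (radians from broadside) to a sound source
    from two synchronized microphone recordings."""
    # Cross-correlate to find the delay between the two channels.
    corr = np.correlate(mic_a, mic_b, mode="full")
    lag_samples = np.argmax(corr) - (len(mic_b) - 1)
    tdoa = lag_samples / SAMPLE_RATE
    # Far-field geometry: path difference = spacing * sin(bearing).
    sin_bearing = np.clip(SPEED_OF_SOUND * tdoa / MIC_SPACING, -1.0, 1.0)
    return np.arcsin(sin_bearing)
```

Bearings from two or more arrays (or from one array at two different robot poses) intersect to give a position fix, which is what lets multiple robots pin their maps to the same Anchorball.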

Photo: Evan Ackerman/IEEE Spectrum
NCTU is developing mobile mesh network nodes in the form of autonomous robot balls.

Also under development at NCTU is this mobile Anchorball, which is basically a big Sphero with a bunch of networking gear packed into it that can move itself around to optimize signal strength.
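A node like this doesn’t need a map to place itself well; greedy hill-climbing on received signal strength is enough to find a local sweet spot. Purely as an illustrative sketch (none of these function names are NCTU’s):

```python
import random

def hill_climb_on_rssi(measure_rssi_dbm, roll_to, position, step_m=1.0, tries=20):
    """Greedy placement tweak: try a short roll in a random direction,
    keep the move only if the link got stronger. All names are illustrative."""
    best_rssi = measure_rssi_dbm()
    for _ in range(tries):
        heading = random.uniform(0, 360)
        candidate = roll_to(position, heading, step_m)
        rssi = measure_rssi_dbm()
        if rssi > best_rssi:
            best_rssi, position = rssi, candidate          # keep the improvement
        else:
            position = roll_to(candidate, (heading + 180) % 360, step_m)  # roll back
    return position
```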

Team NUS SEDS

Team NUS SEDS accidentally burned out a couple of the onboard computers driving their robots. The solution was to run out and buy a laptop, and then 3D print some mounts to attach the laptop to the top of the robot and run things from there.

Photo: Evan Ackerman/IEEE Spectrum
When an onboard computer burned out, NUS SEDS bought a new laptop to power their mobile robot, because what else are you going to do?

They also had a larger tracked vehicle that was able to go up and down stairs, but it got stuck in customs and didn’t make it to the competition at all.

Team Explorer

Team Explorer did extensive testing in an abandoned hospital in Pittsburgh, which I’m sure wasn’t creepy at all. While they brought along some drones that were used very successfully, getting their beefy wheeled robots up and down stairs wasn’t easy. To add some traction, Explorer cut chunks out of the wheels on one of their robots to help it grip the edges of stairs. 

Photo: Evan Ackerman/IEEE Spectrum
Team Explorer’s robot has wedges cut out of its wheels to help it get a grip on stairways.

It doesn’t look especially sophisticated, but team lead Sebastian Scherer told me that this was the result of 14 (!) iterations of wheel and track modifications.

Team MARBLE

Six months ago, we checked out a bunch of different creative communications strategies that teams used at SubT Tunnel. MARBLE improved on their droppable wireless repeater nodes with a powered, extending antenna (harvested from a Miata, apparently).

Photo: Evan Ackerman/IEEE Spectrum
After being dropped from its carrier robot, this mesh networking node extends its antennas half a meter into the air to maximize signal strength.

This is more than just a neat trick: We were told that the extra height the antennas gain once fully deployed significantly improves their performance.
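There’s real physics behind that: near the ground, the direct ray and its ground reflection interfere destructively, and in the standard two-ray propagation model the received power scales with the square of the product of the two antenna heights. A quick back-of-the-envelope calculation (an idealized model, not MARBLE’s actual measurements):

```python
import math

def two_ray_path_gain(h_tx_m, h_rx_m, distance_m):
    """Idealized two-ray ground-reflection model (far-field approximation):
    path gain scales as (h_tx * h_rx)^2 / d^4."""
    return (h_tx_m * h_rx_m) ** 2 / distance_m ** 4

d = 50.0                       # meters between node and robot (illustrative)
h_robot = 0.5                  # robot-side antenna height, meters
stowed, extended = 0.05, 0.5   # node antenna height before/after deploying

gain_db = 10 * math.log10(
    two_ray_path_gain(extended, h_robot, d) / two_ray_path_gain(stowed, h_robot, d)
)
print(f"Extending the antenna from {stowed} m to {extended} m buys ~{gain_db:.0f} dB")
# prints ~20 dB, i.e., a hundredfold improvement in received power
```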

Team Robotika

Based on their experience during the Tunnel Circuit, Team Robotika decided that there was no such thing as having too much light in the tunnels, so they brought along a robot with the most enormous light-to-robot ratio that we saw at SubT.

Photo: Evan Ackerman/IEEE Spectrum
No such thing as too much light during DARPA SubT.

Like many other teams, Robotika was continually making minor hardware adjustments to refine the performance of their robots and make them more resilient to the environment. These last-minute plastic bumpers would keep the robot from driving up walls and potentially flipping itself over.

Photo: Evan Ackerman/IEEE Spectrum
A bumper hacked together from plastic and duct tape keeps this robot from flipping itself over against walls.

Team CSIRO Data61

I met CSIRO Data61 (based in Australia) at the testing location they’d found in a building at the Grays Harbor County Fairgrounds, right next to an indoor horse arena that provided an interesting environment, especially for their drones. During their first run, one of their large tracked robots (an ex-police robot called Titan) had the misfortune to get its track caught on an obstacle that was exactly the wrong size, and it burned out a couple of motors trying to get free.

Photo: Evan Ackerman/IEEE Spectrum
A burned motor, crispy on the inside.

You can practically smell that through the screen, right? And these are fancy Maxon motors, which you can’t just pick up at your local hardware store. CSIRO didn’t have spares with them, so the most expedient way to get new motors that were sure to work turned out to be flying another team member over from Australia (!) with extra motors in their carry-on luggage. And by Tuesday morning, the Titan was up and running again.

Photo: Evan Ackerman/IEEE Spectrum
A fully operational Titan beside a pair of commercial SuperDroid robots at CSIRO’s off-site testing area.

Team CERBERUS

Team CERBERUS didn’t have a run scheduled during the SubT media day, but they invited me to visit their testing area in an empty store next to an Extreme Fun Center in a slightly depressing mall in Aberdeen (Kurt Cobain’s hometown), about 20 miles down the road from Satsop. CERBERUS was using a mix of wheeled vehicles, collision-tolerant drones, and ANYmal legged robots.

Photo: Evan Ackerman/IEEE Spectrum
Team CERBERUS doing some off-site testing of their robots with the lights off.

CERBERUS had noticed during a DARPA course pre-briefing that the Alpha course had an almost immediate 90-degree turn before a long passage, which would block any directional antennas placed in the staging area. To try to maximize communication range, they developed this dumb antenna robot: Dumb in the sense that it has no sensing or autonomy, but instead is designed to carry a giant tethered antenna just around that first corner.

Photo: Evan Ackerman/IEEE Spectrum
Basically just a remote-controlled directional antenna on wheels, this robot was developed by CERBERUS to extend communications from their base station around the first corner of Alpha Course.

Another communications challenge was how to talk to robots after they traversed down a flight of stairs. Alpha Course featured a flight of stairs going downwards just past the starting gate, and CERBERUS wanted a way of getting a mesh networking node down those stairs to be able to reliably talk to robots exploring the lower level. Here’s what they came up with:

Photo: Evan Ackerman/IEEE Spectrum
A mesh network node inside of a foam ball covered in duct tape can be thrown by a human into hard-to-reach spots near the starting area.

The initial idea was to put a node into a soccer ball, which would then be kicked from the staging area, off the far wall, and down the stairs. Instead, they found some hemispheres of green foam used for flower arrangements at Walmart, hollowed them out, put in a node, and wrapped the whole thing in duct tape. With the addition of a tether, the node-in-a-ball could be thrown from the staging area into the stairwell, and hauled back up if it didn’t land in the right spot.

Plan B for stairwell communications was a bit more of a brute force approach, using a directional antenna on a stick that could be poked out of the starting area and angled over the stairwell.

Photo: Evan Ackerman/IEEE Spectrum
If your antenna balls don’t work? Just add a directional antenna to a stick.

Since DARPA did allow tethers, CERBERUS figured that this was basically just a sort of rigid tether. Sounds good to me!

Team CoSTAR

Team CoSTAR surprised everyone by showing up to the SubT Urban Circuit with a pair of Spot quadrupeds from Boston Dynamics. The Spots were very much a last-minute addition to the team, and CoSTAR only had about six weeks to get them up and (metaphorically) running. Consequently, the Spots were a little bit overburdened with a payload that CoSTAR hadn’t had much of a chance to optimize. The payload takes care of all of the higher-level autonomy and mapping, while Spot’s own sensors handle the low-level motion planning.
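In rough terms, the division of labor looks like the sketch below: the payload owns perception, mapping, and exploration goals, and hands the robot nothing more complicated than local motion targets, trusting the built-in controller to worry about footsteps and balance. To be clear, this is a hypothetical outline, not CoSTAR’s actual code or the Boston Dynamics SDK:

```python
# Hypothetical sketch of a high-level-payload / low-level-platform split.
# None of these classes correspond to real JPL or Boston Dynamics APIs.

class AutonomyPayload:
    """Runs on the payload computer: SLAM, exploration, artifact detection."""

    def __init__(self, slam, planner, platform):
        self.slam, self.planner, self.platform = slam, planner, platform

    def step(self, lidar_scan, camera_frames):
        pose, world_map = self.slam.update(lidar_scan, camera_frames)
        goal = self.planner.next_frontier(world_map, pose)
        # Hand the platform a nearby waypoint; gait and balance are its problem.
        self.platform.go_to(waypoint=goal, max_speed_mps=0.8)

class LeggedPlatform:
    """Stands in for the robot's own controller (e.g., Spot's onboard autonomy)."""

    def go_to(self, waypoint, max_speed_mps):
        ...  # footstep planning, balance, local obstacle avoidance
```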

Photo: Evan Ackerman/IEEE Spectrum
Team CoSTAR’s Spot robots carried a payload that was almost too heavy for the robot to manage, and included sensors, lights, computers, batteries, and even two mesh network node droppers.

In what would be a spectacular coincidence were both of these teams not packed full of brilliant roboticists, Team CoSTAR independently came up with something very similar to the throwable network node that Team CERBERUS was messing around with.

Photo: Evan Ackerman/IEEE Spectrum
A throwable mesh network node embedded in a foam ball that could be bounced into a stairwell to extend communications.

One of the early prototypes of this thing was a Mars lander-style “airbag” system, consisting of a pyramid of foam balls with a network node embedded in the very center of the pile. They showed me a video of this thing, and it was ridiculously cool, but they found that carving out the inside of a foam ball worked just as well and was far easier to manage.

There was only so much testing that CoSTAR was able to do in the hotel and conference center, since a better match for the Urban Circuit would be a much larger area with long hallways, small rooms, and multiple levels that could be reached by ramps and stairs. So every evening, the team and their robots drove 10 minutes down the road to Elma High School, which seemed to be just about the perfect place for testing SubT robots. CoSTAR very kindly let me tag along one night to watch their Huskies and Spots explore the school looking for artifacts, and here are some pictures that I took.

Photo: Evan Ackerman/IEEE Spectrum
The Elma High School cafeteria became the staging area for Team CoSTAR’s SubT test course. Two Boston Dynamics Spot robots and two Clearpath Robotics Huskies made up CoSTAR’s team of robots. The yellow total station behind the robots is used for initial location calibration, and many other teams relied on total stations as well (more on that below).
Photo: Evan Ackerman/IEEE Spectrum
Team CoSTAR hid artifacts all over the school to test the robots’ ability to autonomously recognize and locate them. That’s a survivor dummy down the hall.
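About that total station: DARPA scores artifact reports in its own surveyed coordinate frame, so before a run each team has to compute the rigid transform between their robots’ map frame and DARPA’s. Given a few reference points surveyed in both frames, that’s a textbook least-squares alignment problem. Here’s a hedged sketch using the standard Kabsch/SVD solution (I don’t know exactly how each team implements this step):

```python
import numpy as np

def align_frames(points_map, points_darpa):
    """Least-squares rigid transform (Kabsch) taking map-frame points onto
    surveyed DARPA-frame points. Both inputs are Nx3 arrays, N >= 3."""
    mu_m, mu_d = points_map.mean(axis=0), points_darpa.mean(axis=0)
    H = (points_map - mu_m).T @ (points_darpa - mu_d)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:  # guard against a reflection solution
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = mu_d - R @ mu_m
    return R, t  # x_darpa = R @ x_map + t
```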

JPL put together this video of one of the test runs, which cuts out the three hours of setup and calibration and condenses all the good stuff into a minute and a half:

DARPA SubT Urban Circuit: Final scores

In their final SubT Urban run, CoSTAR scored a staggering 9 points, giving them a total of 16 for the Urban Circuit, 5 more than Team Explorer, which came in second. Third place went to Team CTU-CRAS-NORLAB, and as a self-funded (as opposed to DARPA-funded) team, they walked away with a $500,000 prize.

Image: DARPA
DARPA SubT Urban Circuit final scores.

Six months from now, all of these teams will meet again to compete at the SubT Cave Circuit, the last (and perhaps most challenging) domain that DARPA has in store. We don’t yet know exactly when or where Cave will take place, but we do know that we’ll be there to see what six more months of hard work and creativity can do for these teams and their robots.

[ DARPA SubT Urban Results ]

Special thanks to DARPA for putting on this incredible event, and thanks also to the teams that let me follow them around and get (ever so slightly) in the way for a day or two.

Video Friday: Child Robot Affetto Learning New Facial Expressions

By Evan Ackerman

Video Friday is your weekly selection of awesome robotics videos, collected by your Automaton bloggers. We’ll also be posting a weekly calendar of upcoming robotics events for the next few months; here’s what we have so far (send us your events!):

HRI 2020 – March 23-26, 2020 – Cambridge, U.K.
ICARSC 2020 – April 15-17, 2020 – Ponta Delgada, Azores
ICRA 2020 – May 31-June 4, 2020 – Paris, France
ICUAS 2020 – June 9-12, 2020 – Athens, Greece
CLAWAR 2020 – August 24-26, 2020 – Moscow, Russia

Let us know if you have suggestions for next week, and enjoy today’s videos.


We’ll have more on the DARPA Subterranean Challenge Urban Circuit next week, but here’s a quick compilation from DARPA of some of the competition footage.

[ SubT ]


ABB set up a global competition in 2019 to assess 20 leading AI technology start-ups on how they could approach solutions for 26 real-world picking, packing and sorting challenges. The aim was to understand if AI is mature enough to fully unlock the potential for robotics and automation. ABB was also searching for a technology partner to co-develop robust AI solutions with. Covariant won the challenge by successfully completing each of the 26 challenges; on February 25, ABB and Covariant announced a partnership to bring AI-enabled robotic solutions to market.

We wrote about Covariant and its AI-based robot picking system last month. The most interesting part of the video above is probably the apple picking, where the system has to deal with irregular, shiny, rolling objects. The robot has a hard time picking upside-down apples, and after several failures in a row, it nudges the last one to make it easier to pick up. Impressive! And here’s one more video of real-time picking of mostly transparent water bottles:

[ Covariant ]
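About that nudge: it’s a nice example of a simple failure-recovery behavior layered on top of a learned grasping system. Covariant hasn’t published how theirs works, so treat this as a purely hypothetical sketch of the general shape of such a policy (all names are mine):

```python
# Hypothetical retry-then-reposition grasp loop; not Covariant's actual policy.
MAX_RETRIES = 3

def pick_with_recovery(perceive, grasp, nudge):
    failures = 0
    while True:
        target = perceive()        # best grasp candidate from the vision model
        if target is None:
            return "bin empty"
        if grasp(target):
            failures = 0           # success: reset the counter and keep picking
            continue
        failures += 1
        if failures >= MAX_RETRIES:
            nudge(target)          # perturb the object into a more graspable pose
            failures = 0
```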


Osaka University’s Affetto robot, which we’ve written about before, is looking somewhat more realistic these days.

Those are some weird noises that it’s making though, right? Affetto, as it turns out, also doesn’t like getting poked in its (disembodied) tactile sensor:

They’re working on a body for it, too:

[ Osaka University ]


University of Washington students reimagine today’s libraries.

[ UW ]

Thanks Elcee!


Astrobee will be getting a hand up on the ISS, from Columbia’s ROAM Lab.

I think this will be Astrobee’s second hand, in addition to its perching arm. Maybe not designed for bimanual tasks, but still, pretty cool!

[ ROAM Lab ]


In this paper, we tackle the problem of pushing piles of small objects into a desired target set using visual feedback. Unlike conventional single-object manipulation pipelines, which estimate the state of the system parametrized by pose, the underlying physical state of this system is difficult to observe from images. Thus, we take the approach of reasoning directly in the space of images, and acquire the dynamics of visual measurements in order to synthesize a visual-feedback policy.

[ MIT ]
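The recipe that abstract describes (learning dynamics directly over visual measurements, then planning against the learned model) is essentially visual MPC. Here’s a hedged toy version of the receding-horizon loop; the paper’s actual formulation is considerably more sophisticated, and every name below is illustrative:

```python
import numpy as np

def visual_mpc_step(image_state, goal_image, predict, sample_actions, horizon=5):
    """Pick the first action of the sampled sequence whose predicted final
    image lands closest to the goal. `predict` stands in for a learned
    visual-dynamics model; all names here are illustrative."""
    best_cost, best_first_action = np.inf, None
    for seq in sample_actions(num_sequences=64, horizon=horizon):
        img = image_state
        for action in seq:
            img = predict(img, action)            # roll the learned model forward
        cost = np.linalg.norm(img - goal_image)   # image-space distance to target
        if cost < best_cost:
            best_cost, best_first_action = cost, seq[0]
    return best_first_action  # execute it, re-observe, and replan
```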


In this project we are exploring ways of interacting with terrain using hardware already present on exploration rovers - wheels! By using wheels for manipulation, we can expand the capabilities of space robots without the need for adding hardware. Nonprehensile terrain manipulation can be used in many applications, such as removing soil to sample below the surface or making terrain easier to cross for another robot. Watch until the end to see MiniRHex and the rover working together!

[ Robomechanics Lab ]


Dundee Precious Metals reveals how Exyn’s fully autonomous aerial drones are transforming their cavity monitoring systems with increased safety and maximum efficiency.

[ Exyn ]

Thanks Rachel!


Dragonfly is a NASA mission to explore the chemistry and habitability of Saturn’s largest moon, Titan. The fourth mission in the New Frontiers line, Dragonfly will send an autonomously-operated rotorcraft to visit dozens of sites on Titan, investigating the moon’s surface and shallow subsurface for organic molecules and possible biosignatures.

Dragonfly is scheduled to launch in 2026 and arrive at Titan in 2034.

[ NASA ]


Researchers at the Max Planck Institute for Intelligent Systems in Stuttgart, in cooperation with Tampere University in Finland, have developed a gel-like robot inspired by sea slugs and snails that they are able to steer with light. Much like the soft bodies of these aquatic invertebrates, the bioinspired robot is able to deform easily inside water when exposed to this energy source.

Thanks to the specifically aligned molecules of the liquid crystal gel it is built from, and to illumination of specific parts of the robot, it is able to crawl, walk, jump, and swim inside water. The scientists see their research project as an inspiration for other roboticists who struggle to design untethered soft robots that are able to move freely in a fluidic environment.

[ Max Planck Institute ]


Forests are a very challenging environment for drones, especially if you want to both avoid and map trees at the same time.

[ Kumar Lab ]


Some highlights from the Mohamed Bin Zayed International Robotics Challenge (MBZIRC) that took place in Abu Dhabi, UAE last week.

[ MBZIRC ]


I never get tired of hearing technical presentations from Skydio, and here’s Ryan Kennedy giving a talk at the GRASP Lab.

The technology for intelligent and trustworthy navigation of autonomous UAVs has reached an inflection point to provide transformative gains in capability, efficiency, and safety to major industries. Drones are starting to save lives of first responders, automate dangerous infrastructure inspection, digitize the physical world with millimeter precision, and capture Hollywood quality video - all on affordable consumer hardware.

At Skydio, we have invested five years of R&D in the ability to handle difficult unknown scenarios in real-time based on visual sensing, and shipped two generations of fully autonomous drone. In this talk, I will discuss the close collaboration of geometry, learning, and modeling within our system, our experience putting robots into production, and the challenges still ahead.

[ Skydio ]


This week’s CMU RI Seminar comes from Sarjoun Skaff at Bossa Nova Robotics: “Yes, That’s a Robot in Your Grocery Store. Now what?”

Retail stores are becoming ground zero for indoor robotics. Fleets of different robots have to coexist with each other and with humans every day, navigating safely, coordinating missions, and interacting appropriately with people, all at large scale. For us roboticists, stores are giant labs where we’re learning what doesn’t work and iterating. If we get it right, it will serve as an example for other industries, and robots will finally become ubiquitous in our lives.

[ CMU RI ]

