When the smiley-faced robot tells two boys to pick out the drawing of an ear from three choices, one of the boys, about 5, touches his nose. “No. Ear,” his teacher says, a note of frustration in her voice. The child picks up the drawing of an ear and hands it to the other boy, who shows it to the robot. “Yes, that is the ear,” the ever-patient robot says. “Good job.” The boys smile as the teacher pats the first boy in congratulations.
The robot is powered by technology created by Movia Robotics, founded by Tim Gifford in 2010 and headquartered in Bristol, Connecticut. Unlike other companies that have made robots intended to work with children with autism spectrum disorder (ASD), such as Beatbots, Movia focuses on building and integrating software that can work with a number of humanoid robots, such as the Nao. Movia has robots in three school districts in Connecticut, and through a U.S. Department of Defense contract, its robots are being added to 60 schools for the children of military personnel worldwide.
It’s Gifford’s former computer science graduate student, Christian Wanamaker, who programs the robots. Before graduate school at the University of Connecticut, Wanamaker used his computer science degree to program commercial kitchen fryolators. He enjoys a crispy fry as much as anyone, but his work coding for robot-assisted therapy is much more challenging, interesting and rewarding, he says.
“I start with a robot that won’t do anything without a programmer and end up with one that allows teachers to run therapies for children,” he says. “That’s very gratifying.”
Toward the end of graduate school, he worked on a team designing and programming a robot to greet a child and demonstrate yoga moves as part of physical therapy. After graduating, he stayed on at the university as a research assistant.
One of his first projects was serving as the lead developer writing code for an interactive media wall at Boston Children’s Hospital, a way to give joy and a sense of control to sick kids. The multidisciplinary team built a series of kid-friendly scenes designed to track movement and react. One scene on the three-story video screen displays grass swaying as someone passes by.
“That was pretty amazing, especially the response of the kids,” he says.
Meanwhile, Gifford learned from his wife, a primary-school teacher, that the number of children with ASD in the classroom was growing but staffing resources were limited. He worked with Wanamaker to program robots that work with children with ASD in school in a nonthreatening way.
Gifford, a UConn researcher, talks with educators and clinicians about their students’ needs and designs the software architecture to support the array of skills being taught. He conveys this to Wanamaker, and it’s up to Wanamaker to break the requirements down into programming steps so that teachers can individualize the commands to suit each student’s needs.
Wanamaker writes software using languages such as Python, Java, C# and C++ so the robot can speak and move to direct the child as well as respond when the child reacts. He not only finds the most common places where bugs crop up, but also tries to find all the edge cases that make the software fail. This lets him fix the bugs before the robot is sent to work in a school. In the early days, a team member controlled the robot; today, Wanamaker and the team have to ensure the robot can be controlled by a classroom teacher or another nontechnical person.
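The division of labor Wanamaker describes, in which low-level robot control is wrapped so that teachers can customize prompts per student, can be sketched in Python. Everything below is hypothetical: the class, method names, and dialogue are illustrative, not Movia’s actual code.

```python
class RobotSession:
    """Hypothetical wrapper around a robot SDK's speech and sensing calls.
    A real system (e.g., for the Nao) would supply say/get_answer from
    the robot's own API; here they are injected so the logic is testable."""

    def __init__(self, say, get_answer):
        self.say = say                # callable: speak a phrase aloud
        self.get_answer = get_answer  # callable: read the child's response

    def prompt(self, text, correct, retries=2):
        """Ask the child, patiently re-prompting on a wrong answer."""
        for _ in range(retries + 1):
            self.say(text)
            if self.get_answer() == correct:
                self.say("Yes, that is the " + correct + ". Good job.")
                return True
            self.say("No. " + correct.capitalize() + ".")
        return False

# A teacher-editable lesson then becomes plain data, no programming needed:
lesson = [("Pick out the drawing of an ear.", "ear")]
```

The point of the sketch is the last line: once the control flow lives in the wrapper, individualizing a session for a student is a matter of editing the lesson list, not writing code.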
A coder working with robots needs curiosity, patience, and tenacity, Wanamaker says. Movia Robotics is constantly working with new robots that require him to use different languages and operating systems. Programming the robots to interact with individual people is not a straight line. “It’s a bit like herding cats,” he says.
“The thing I find rewarding about coding: You’re literally creating something out of nothing,” he says. “You’re kind of like a wizard.”
It’s 10 a.m. and Indian peanut farmer Venkeapream is relaxing at his family compound in Pavagada, an arid area north of Bangalore. The 67-year-old retired three years ago upon leasing his land to the Karnataka state government. That land is now part of a 53-square-kilometer area festooned with millions of solar panels. As his fields yield carbon-free electricity, Venkeapream pursues his passion full time: playing the electric harmonium, a portable reed organ.
With a capacity of 2 gigawatts and counting, Pavagada’s arrays represent the world’s largest cluster of photovoltaics. Pavagada is also one of the most successful examples of a solar “park,” in which governments provide multiple companies with land and transmission—two big hurdles that slow solar development. Solar parks account for much of the 25.5 GW of solar capacity India has added in the last five years. The states of Rajasthan and Gujarat have, respectively, 2.25-GW and 5.29-GW solar parks under way, and Egypt’s 1.8-GW installation is one of several new international projects.
Alas, even as they speed the growth of renewable energy, solar parks also concentrate some of solar energy’s liabilities.
Sheshagiri Rao, an agricultural researcher and farmer based near Pavagada, says lease payments give peanut farmers such as Venkeapream a steadier income. But Rao says shepherds who held traditional rights to graze those fields were fenced out without compensation, and many have sold off their flocks. In Venkeapream’s village, flocks once totaled 2,000 to 3,000 sheep; only about 600 are left.
The constant need to keep dust off the panels, meanwhile, has put more strain on already overtapped groundwater supplies. Local farmers bring water to clean the more than 400,000 panels at the Pavagada site of Indian energy developer Acme Cleantech Solutions. “At least 2 liters of water is required to clean one panel. This is huge,” says B. Prabhakar, Acme’s site manager. Robotic dusters allow Acme to clean just twice a month, but most operators lack such equipment.
Then there are the power surges and drops created as clouds pass over Pavagada—generation swings that must be countered with coal-fired and hydropower plants. Balancing renewable energy swings is a growing challenge for grid operators in Karnataka, which leads India in solar capacity and also has more than 4 GW of variable wind power.
Karnataka capped new solar parks at 0.2 GW after launching Pavagada. Analysts heralded the state’s apparent shift toward distributed installations, such as rooftop solar systems, during a November 2019 meeting on sustainable energy in neighboring state Tamil Nadu. As Saptak Ghosh, who leads renewable energy programs at the Bangalore-based Center for Study of Science, Technology & Policy (CSTEP), put it: “Pavagada will be the end of big solar parks in Karnataka. Smaller is the future.”
Just a few days later, though, news broke that Karnataka’s renewable energy arm was acquiring land for three 2.5-GW solar megaparks. The state’s move may reflect pressure from the national government to accelerate solar installations, as well as confidence that Pavagada’s shortcomings can be fixed.
Instead of harming shepherds, for example, solar operators could open their gates. Grass and weeds growing amidst the panels pose a serious fire risk, according to Acme’s Prabhakar. Increasingly, operators in other countries rely on sheep to keep vegetation down.
Higher-tech solutions may ultimately address Pavagada’s water consumption and cloud-induced power swings. Israeli robotics firm Ecoppia is already providing what it calls “water free” cleaning at the Pavagada site operated by Fortum, a Finnish energy company.
Karnataka’s solution for power swings at its new megaparks, meanwhile, is to plug the parks straight into the national grid’s biggest power lines. The trio of plants is a joint project with the national-government-owned Solar Energy Corporation of India and is designed to export renewable electricity to other states. Power stations outside of Karnataka will balance the solar parks’ generation, according to Ghosh’s colleague, CSTEP senior research engineer and power-grid specialist Milind R.
India’s government is eager to help, having promised to boost renewable capacity to 175 GW by March 2022 and to 450 GW by 2030. As Thomas Spencer, research fellow at the Energy and Resources Institute, a New Delhi–based nonprofit, noted at the November meeting in Tamil Nadu, India is “well off the track” for meeting either target.
This article appears in the February 2020 print issue as “India Grapples With Vast Solar Park.”
The European Space Agency (ESA) received a sizable budget boost in late 2019 and committed to joining NASA’s Artemis program, expanding Earth observation, returning a sample from Mars, and developing new rockets. Meanwhile, less glamorous projects will seek to safeguard and maintain the use of critical infrastructure in space and on Earth.
ESA’s ClearSpace-1 mission, having just received funding in November, is designed to address the growing danger of space debris, which threatens the use of low Earth orbit. Thirty-four thousand pieces of space junk larger than 10 centimeters (cm) are now in orbit around Earth, along with 900,000 pieces larger than 1 cm. They stem from hundreds of space missions launched since Sputnik-1 heralded the beginning of the Space Age in 1957. Traveling at the equivalent of Mach 25, even the tiniest piece of debris can threaten, for example, the International Space Station and its inhabitants, and create more debris when it collides.
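The danger comes from the square of the velocity. A quick back-of-the-envelope check makes the point (the 1-gram mass is an illustrative value, not an ESA figure):

```python
def kinetic_energy_joules(mass_kg, speed_m_s):
    # E = 1/2 * m * v^2
    return 0.5 * mass_kg * speed_m_s ** 2

v_leo = 7700.0  # m/s, a typical low-Earth-orbit speed ("Mach 25" at sea level)
fragment = kinetic_energy_joules(0.001, v_leo)  # a 1-gram fleck of debris
print(f"1 g fragment at orbital speed: {fragment / 1000:.1f} kJ")
# ~29.6 kJ, roughly ten times the muzzle energy of a hunting rifle round
```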
The ClearSpace-1 Active Debris Removal (ADR) mission will be carried out by a commercial consortium led by Swiss startup ClearSpace. Planned for launch in 2025, the mission will target a spent upper stage from an ESA Vega rocket orbiting at 720 kilometers above the Earth. Atmospheric drag is very low at this altitude, meaning objects remain in orbit for decades before reentry.
There, ClearSpace-1 will rendezvous with its target, which will be traveling at close to 8 kilometers per second. After making its approach, the spacecraft will employ “tentacles” to reach beyond and around the object.
“It’s like tentacles that embrace the object, because you can capture the object before you touch it. Dynamics in space are very interesting because if you touch the object on one side, it will immediately drift away,” says Holger Krag of ESA’s Space Safety department and head of the Space Debris Office in Darmstadt, Germany.
During the first mission, once ClearSpace-1 secures its target, the satellite will use its own propulsion to reenter Earth’s atmosphere, burning up in the process and destroying the piece it embraced. In future missions, ClearSpace hopes to build spacecraft that can remove multiple pieces of debris before the satellite burns up with all the debris onboard.
Collisions involving such objects create more debris and increase the odds of future impacts. This cascade effect is known as the Kessler syndrome, after Donald Kessler, the NASA scientist who first described it. The 2009 collision of the active U.S. commercial Iridium 33 satellite and the defunct Russian military Kosmos-2251 satellite created a cloud of thousands of pieces of debris.
With SpaceX, OneWeb, and other firms planning so-called megaconstellations of hundreds or even thousands of satellites, getting ahead of the situation is crucial to prevent low Earth orbit from becoming a graveyard.
Eventually, ClearSpace-1 is intended to demonstrate a cost-efficient, repeatable approach to debris removal that can be offered to customers at a low price, says Krag. ESA backing for the project comes with the aim of helping to establish a new market for debris removal and in-orbit servicing. Northern Sky Research projects that revenues from such services could reach US $4.5 billion by 2028.
Other debris removal and servicing initiatives are being devised by companies including Astroscale in Japan and Northrop Grumman in the United States. The U.K.-based Surrey Satellite is also working on net and harpoon concepts to tackle space junk.
ESA is also looking to protect Earth from potential catastrophe with a mission to provide early warning of solar activity. The Carrington event, as the largest solar storm on record is known, was powerful enough to push auroras to as low as 20 degrees latitude and to interfere with telegraph operations in North America. That was in 1859, when little vulnerable electrical infrastructure was in place. A similar event today would disrupt GPS and communications satellites, cause power outages, affect oil drilling (which uses magnetic fields to navigate), and generally cause turmoil.
The L5 “Lagrange” mission will head to Sun-Earth Lagrange point 5, one of a number of stable positions created by the gravitational interplay of the two bodies. From there, it will monitor the Sun for major events and warn of coronal mass ejections (CMEs), including estimates of their speed and direction.
These measurements would be used to provide space weather alerts and help mitigate catastrophic damage to both orbital and terrestrial electronics. Krag, in an interview at a European Space Week meeting last month, said that these alerts could reduce potential harm and loss of life if used to postpone surgeries, divert flights over and near the poles, and stop trains during the peak of predicted activity from moderate-to-large solar storms.
“Estimates over the next 15 years are that damages with no pre-warning can be in the order of billions to the sensitive infrastructure we have,” Krag said. Developments like autonomous driving, which rely on wireless communications, would be another concern, as would crewed space missions, especially those traveling beyond low Earth orbit, such as NASA’s Artemis program to return astronauts to the moon.
Despite an overall budget boost, ESA’s request for 600 million euros from its member states for ‘space safety’ missions was not fully met. The L5 mission was not funded in its entirety so the team will concentrate first on developing the spacecraft’s instruments over the next three-year budget cycle, and hope for more funding in the future. Instruments currently under assessment include a coronagraph to help predict CME arrival times, a wide-angle, visible-light imaging system, a magnetograph to scan spectral absorption lines, and an X-ray flux monitor to quantify flare energy.
Suction is a useful tool in many robotic applications, as long as those applications are grasping objects that are suction-friendly—that is, objects that are impermeable and generally smooth-ish and flat-ish. If you can’t form a seal on a surface, your suction gripper is going to have a bad time, which is why you don’t often see suction systems working outside of an environment that’s at least semi-constrained. Warehouses? Yes. Kitchens? Maybe. The outdoors? Almost certainly not.
In general, getting robotic grippers (and robots themselves) to adhere to smooth surfaces and rough surfaces requires completely different technology. But researchers from Zhejiang University in China have come up with a new kind of suction gripper that can very efficiently handle surfaces like widely-spaced tile and even rough concrete, by augmenting the sealing system with a spinning vortex of water.
The paper is a little bit dense, but from what I can make out, what’s going on is that you’ve got a traditional suction gripper with a vacuum pump, modified with a water injection system and a fan. The fan has nothing to do with creating or maintaining a vacuum—its job is to get the water spinning at up to 90 rotations per second. The rotation flings the spinning water outward into a ring around the outside of the vacuum chamber, which keeps the water from being sucked out through the vacuum pump while also maintaining a liquid seal between the vacuum chamber and the surface. Because water can get into all of those annoying little nooks and crannies that can mean doom for traditional vacuum grippers, the seal is much better, resulting in far higher performance, especially on surfaces with high roughness.
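Some rough numbers show why the ring stays pinned against the chamber wall instead of being slurped into the pump. The spin rate is from the paper; the chamber radius below is my assumption:

```python
import math

rev_per_s = 90.0       # spin rate reported in the paper
radius_m = 0.03        # assumed vacuum-chamber radius (not from the paper)

# Keeping water on a circular path of radius r at angular speed omega
# requires a centripetal acceleration a = omega^2 * r, supplied by
# pressure at the chamber wall; equivalently, in the rotating frame,
# the water is flung outward against the wall with this acceleration.
omega = 2.0 * math.pi * rev_per_s          # rad/s
accel = omega ** 2 * radius_m              # m/s^2
print(f"{accel:.0f} m/s^2, about {accel / 9.81:.0f} g")
# ~9,600 m/s^2: the pump's gentle inward pull can't compete with that
```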
For example, a single suction unit weighing 0.8 kg was able to generate a suction force of over 245 N on a rough surface using less than 400 W, while a traditional suction unit of the same footprint would need several thousand watts (and weigh dozens of kilograms) to generate a comparable amount of suction, since the rough surface would cause a significant amount of leakage. At very high forces, the efficiency does decrease a bit: the “Spider-Man” system weighs 3 kg per unit and generates a suction force of 2,000 N using 650 W.
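Those forces map directly onto the pressure differential the seal must sustain, via F = ΔP·A. The seal radius below is an assumed value for illustration; the paper’s actual gripper dimensions may differ:

```python
import math

def required_pressure_pa(force_n, seal_radius_m):
    """Pressure differential needed to produce a given suction force
    over a circular sealed area (F = dP * A)."""
    area_m2 = math.pi * seal_radius_m ** 2
    return force_n / area_m2

dp = required_pressure_pa(245.0, 0.10)  # 245 N over an assumed 10-cm radius
print(f"{dp:.0f} Pa, about {100.0 * dp / 101325.0:.0f}% of an atmosphere")
# A fairly modest vacuum suffices -- provided the seal doesn't leak,
# which is exactly what the water ring buys you on rough surfaces.
```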
And as for the downsides? Er, well, it does kind of leak all over the place, especially when disengaging. The “Spider-Man” version leaks over 2 liters per minute. It’s only water, but still. And since it leaks, it needs to be provided with a constant water supply, which limits its versatility. The researchers are working on ways of significantly reducing water consumption to make the system more independent, but personally, I feel like the splooshyness is part of the appeal.
If there’s one thing about Moore’s Law that’s obvious to anyone, it’s that transistors have been made smaller and smaller as the years went on. Scientists and engineers have taken that trend to an almost absurd limit during the past decade, creating devices that are made of one-atom-thick layers of material.
The most famous of these materials is, of course, graphene, a hexagonal honeycomb-shaped sheet of carbon with outstanding conductivity for both heat and electricity, odd optical abilities, and incredible mechanical strength. But as a substance with which to make transistors, graphene hasn’t really delivered. With no natural bandgap—the property that makes a semiconductor a semiconductor—it’s just not built for the job.
Instead, scientists and engineers have been exploring the universe of transition metal dichalcogenides, which all have the chemical formula MX2. These are made up of one of more than a dozen transition metals (M) along with one of three chalcogens (X): sulfur, selenium, or tellurium. Tungsten disulfide, molybdenum diselenide, and a few others can be made in single-atom layers that (unlike graphene) are natural semiconductors. These materials offer the enticing prospect that we will be able to scale down transistors all the way to atom-thin components long after today’s silicon technology has run its course.
While this idea is really exciting, I and my colleagues at Imec believe 2D materials could actually show up much sooner, even while silicon still remains king. We’ve been developing a technology that could put 2D semiconductors to work in silicon chips, enhancing their abilities and simplifying their designs.
Devices made with 2D materials are worth all the scientific and engineering work we and other researchers around the world have put into them because they could eliminate one of the biggest problems with today’s transistors. The issue, the result of what are called short-channel effects, is a consequence of the continual shrinking of the transistor over the decades.
A metal-oxide semiconductor field-effect transistor (MOSFET), the type of device in all digital things, is made up of five basic parts: The source and drain electrodes; the channel region that connects them; the gate dielectric, which covers the channel on one or more sides; and the gate electrode, which contacts the dielectric. Applying a voltage at the gate relative to the source creates a layer of mobile charge carriers in the channel region that forms a conductive bridge between the source and drain, allowing current to flow.
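To first order, that gate-controlled bridge behaves like the textbook long-channel (“square-law”) MOSFET model. A minimal sketch, with illustrative parameter values not tied to any particular process:

```python
def nmos_drain_current(vgs, vds, vth=0.5, k=1e-3):
    """Idealized long-channel n-MOSFET drain current in amperes.
    vth: threshold voltage (V); k: transconductance parameter (A/V^2).
    Real scaled devices leak in the 'off' branch -- that leakage is
    the short-channel effect the article turns to next."""
    vov = vgs - vth              # gate overdrive
    if vov <= 0:
        return 0.0               # off (ignores subthreshold leakage)
    if vds < vov:                # triode region: channel acts resistively
        return k * (vov * vds - vds ** 2 / 2.0)
    return 0.5 * k * vov ** 2    # saturation: the gate sets the current

print(nmos_drain_current(vgs=0.3, vds=1.0))  # 0.0 -- no conductive bridge
print(nmos_drain_current(vgs=1.0, vds=1.0))  # 0.000125 A in saturation
```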
But as the channel was made smaller and smaller, current would increasingly leak across it even when there was no voltage on the gate, wasting power. The change from the planar designs of the 20th century to the FinFET transistor structure used in today’s most advanced processors was an attempt to counter this important short-channel effect by making the channel region thinner and having the gate surround it on more sides. The resulting fin-shaped structure provides better electrostatic control. (The coming move to the nanosheet transistor is a furthering of this same idea. See “The Last Silicon Transistor,” IEEE Spectrum, August 2019.)
Certain 2D semiconductors could circumvent short-channel effects, we think, by replacing the silicon in the device channel. A 2D semiconductor provides a very thin channel region—as thin as a single atom if only one layer of semiconductor is used. With such a restricted pathway for current to flow, there is little opportunity for charge carriers to sneak across when the device is meant to be off. That means the transistor could continue to be shrunk down further with less worry about the consequences of short-channel effects.
These 2D materials are not only useful as semiconductors, though. Some, such as hexagonal boron nitride, can act as gate dielectrics, having a dielectric constant similar to that of silicon dioxide, which was routinely used for that job until about a decade ago. Add graphene in place of the transistor’s metal parts and you’ve got a combination of 2D materials that forms a complete transistor. Indeed, separate groups of researchers built such devices as far back as 2014. While these prototypes were much larger, you could imagine scaling them down to the size of just a few nanometers.
As amazing as an all-2D transistor that’s a fraction of the size of today’s devices might be, that won’t be the first implementation of 2D materials in electronic circuits. Instead, 2D materials will probably arrive in low-power circuits that have more relaxed performance requirements and area constraints.
The set of circuits we’re targeting at Imec are built in the so-called back-end-of-line. Chipmaking is divided into two parts: the front-end-of-line part consists of processes—many of them requiring high temperatures—that alter the silicon itself, such as implanting dopants to define the parts of a transistor. The back-end-of-line part builds the many layers of interconnects that link the transistors to form circuits and deliver power.
With traditional transistor scaling becoming more and more difficult, engineers have been looking for ways to add functionality to the interconnect layers. You can’t do this simply by using ordinary silicon processes because the heat involved would damage the devices and interconnects beneath them. So, many of these schemes rely on materials that can be made into devices at relatively low temperatures.
A specific advantage of using 2D semiconductors instead of some other candidates is the potential ability to build both p-type (carrying positive charges) and n-type (carrying electrons) devices, a necessity in CMOS logic. CMOS circuits are the backbone of today’s logic because, ideally, they consume power only when switching from one state to the other. In our preferred 2D semiconductor, we’ve demonstrated n-type transistors but not yet p-type. However, the physics underlying these materials strongly suggests we can get there through engineering the dielectrics and metals that contact the semiconductor.
Being able to produce both p- and n-type devices would allow the development of compact back-end logic circuits such as repeaters. Repeaters essentially relay data that must travel relatively far across a chip. Ordinarily, the transistors involved reside on the silicon, but that means signals must climb up the stack of interconnects until they reach a layer where they can travel part of the distance to their destination, then go back down to the silicon to be repeated and up again to the long-distance interconnect layer. It’s a bit like having to exit the highway and drive into the center of a crowded city to buy gas before getting back on the highway.
A repeater up near the long-distance interconnect layer is more akin to a highway gas station. It saves the time it would take the signal to make the two-way vertical trip and also prevents the loss of power due to the resistance of the vertical interconnects. What’s more, moving the repeater to the interconnect layer saves space on the silicon for more logic.
Repeaters aren’t the only potential use. A 2D material could also be used to build other circuits, such as on-chip power-management systems, signal buffers, and memory selectors. One thing these circuits all have in common is that they don’t require the device to drive a lot of current, so one layer of 2D material would probably be sufficient.
Neither future supersmall 2D devices nor the less demanding back-end-of-line circuits will be possible without a fabrication process compatible with industry-standard 300-millimeter silicon wafers. So our team at Imec is working on just that, hoping to develop a process that will serve for all applications.
The first step is identifying the most promising 2D material and device architecture. We have therefore benchmarked a variety of 2D semiconductors and 2D FET architectures against an advanced silicon FinFET device.
Because researchers have the most experience with molybdenum disulfide (MoS2), experimental devices made using it have advanced furthest. Indeed, at the IEEE International Electron Devices Meeting last December, Imec unveiled an MoS2 transistor with a channel just 30 nanometers across and source and drain contacts only 13 nm long. But after examining the possibilities, we’ve decided that MoS2 is not the answer. Instead, we concluded that among all the materials compatible with 300-mm silicon-wafer technology, tungsten disulfide (WS2) in the form of a stacked nanosheet device has the highest performance potential, meaning it can drive the most current. For less demanding, back-end-of-line applications, we also concluded that a FET architecture with a gate both below and above the semiconductor channel region works better than one with only a single gate.
We already knew one important thing about WS2 before we reached that conclusion: We can make a high-quality version of it on a 300-mm silicon wafer. We demonstrated that for the first time in 2018 by growing the material on a wafer using metal-organic chemical vapor deposition (MOCVD), a common process that grows crystals on a surface by means of a chemical reaction. The approach we took results in thickness control down to a single-molecule layer, or monolayer, over the full 300-mm wafer. The benefits of the MOCVD growth come, however, at the price of a high temperature—and recall that high temperatures are forbidden in back-end processes because they could damage the silicon devices below.
To get around this problem, we grow the WS2 on a separate wafer and then transfer it to the already partially fabricated silicon wafer. The Imec team developed a unique transfer process that allows a single layer of WS2—as thin as 0.7 nm—to be moved to a silicon target wafer with negligible degradation in the 2D material’s electrical properties.
The process starts by growing the WS2 on an oxide-covered silicon wafer. That’s then placed in contact with a specially prepared wafer. This wafer has a layer of material that melts away when illuminated by a laser. It also has a coating of adhesive. The adhesive side is pressed to the WS2-covered wafer, and the 2D material peels away from the growth wafer and sticks to the adhesive. Then the adhesive wafer with its 2D cargo is flipped over onto the target silicon wafer, which in a real chipmaking effort would already have transistors and several layers of interconnect on it. Next, a laser is shone through the wafer to break the bulk of it away, leaving only the adhesive and the WS2 atop the target wafer. The adhesive is removed with chemicals and plasma. What’s left is just the processed silicon with the WS2 attached to it, held in place by van der Waals forces.
The process is complicated, but it works. There is, of course, room for improvement, most importantly in mitigating defects caused by unwanted particles on the wafer surface and in eliminating some defects that occur at the edges.
Once the 2D semiconductor has been deposited, building devices can begin. On that front there have been triumphs, but some major challenges remain.
Perhaps the most crucial issue to tackle is the creation of defects in the WS2. Imperfections profoundly degrade the performance of a 2D device. In ordinary silicon devices, charge can get caught in imperfections at the interface between the gate dielectric and the channel region. These trapped charges can scatter electrons or holes near the interface as they try to move through the device, slowing things down. With 2D semiconductors the scattering problem is more pronounced, because the interface is the channel.
Sulfur vacancies are the most common defects that affect device channel regions. Imec is investigating how different plasma treatments might make those vacancies less chemically reactive and therefore less prone to alter the transistor’s behavior. We also need to prevent more defects from forming after we’ve grown the monolayer. WS2 and other 2D materials are known to age quickly and degrade further if already defective. Oxygen attacking a sulfur vacancy can cause more vacancies nearby, making the defect area grow larger and larger. But we’ve found that storing the samples in an inert environment makes a difference in preventing that spread.
Defects in the semiconductor aren’t the only problems we’ve encountered trying to make 2D devices. Depositing insulating materials on top of the 2D surface to form the gate dielectric is a true challenge. WS2 and similar materials lack dangling bonds that would otherwise help fasten the dielectric to the surface.
Our team is currently exploring two routes that might help: One is atomic layer deposition (ALD) at a reduced growth temperature. In ALD, a gaseous molecule adsorbs to the semiconductor’s exposed surface to form a single layer. Then a second gas is added, reacting with the adsorbed first one to leave an atomically precise layer of material, such as the dielectric hafnium dioxide. Doing this at a reduced temperature increases the ability of the gas molecules to stick to the surface of the WS2 even when no chemical bonds are available.
The other option is to enhance ALD by using a very thin oxidized layer, such as silicon oxide, to help nucleate the growth of the ALD layer. A very thin layer of silicon is deposited by a physical deposition method such as sputtering or evaporation; it’s then oxidized before a regular ALD deposition of gate oxide is done. We’ve achieved particularly good results with evaporation.
A further challenge in making superior 2D devices is in choosing the right metals to use as source and drain contacts. Metals can alter the characteristics of the device, depending on their work function. That parameter, the minimum energy needed to extract an electron from the metal, can mean the difference between a contact that can easily inject electrons and one that can inject holes. So the Imec team has screened a variety of metals to put in contact with the WS2 nanosheet. We found that the highest on-current in an n-type device was obtained using a magnesium contact, but other metals such as nickel or tungsten work well. We’ll be searching for a different metal for future p-type devices.
Despite these challenges, we’ve been able to estimate the upper limits of device performance, and we’ve mapped out what roads to follow to get there.
As a benchmark, the Imec team used dual-gated devices like those we described earlier. We built them with small, naturally exfoliated flakes of WS2, which have fewer defects than wafer-scale semiconductors. For these lab-scale devices, we were able to measure electron mobility values up to a few hundred square centimeters per volt-second, which nearly matches crystalline silicon and is close to the theoretically predicted maximum for the 2D material. Because this excellent mobility can be found in natural material, we are confident that it should also be possible to get there with materials synthesized on 300-mm wafers, which currently reach just a few square centimeters per volt-second.
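Mobility feeds directly into how conductive a 2D channel can be, through the sheet conductance σs = q·ns·μ. A quick sketch of what the measured mobility gap implies (the carrier density is an assumed, typical gate-induced value, not a measured Imec figure):

```python
Q_E = 1.602e-19  # C, elementary charge

def sheet_resistance_ohm_per_sq(mobility_cm2_per_vs, carriers_per_cm2):
    """Sheet resistance of a 2D channel: R_s = 1 / (q * n_s * mu).
    The cm units cancel, leaving ohms per square."""
    sigma_s = Q_E * carriers_per_cm2 * mobility_cm2_per_vs  # siemens/square
    return 1.0 / sigma_s

n_s = 1e13  # carriers per cm^2, an assumed gate-induced density
for label, mu in [("exfoliated flake", 200.0), ("synthesized film", 5.0)]:
    r = sheet_resistance_ohm_per_sq(mu, n_s)
    print(f"{label}: {r:,.0f} ohm/sq")
# The ~40x gap in mobility maps one-to-one onto a ~40x gap in channel
# resistance, which is why closing it is key to competitive drive current.
```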
For some of the main challenges ahead in 2D semiconductor development, our team has a clear view of the solutions. We know, for example, how to grow and transfer the material onto a 300-mm target wafer; we’ve got an idea of how to integrate the crucial gate dielectric; and we’re on a path to boost the mobility of charge carriers in devices toward a level that could compare with silicon.
But, as we’ve laid out, there are still significant problems remaining. These will require an intensive engineering effort and an even better fundamental understanding of this new class of intriguing 2D materials. Solving them will enable high-performance devices scaled down to atomic layers, though 2D materials may first find use in applications with less demanding specifications, even as silicon scaling continues.
This article appears in the February 2020 print issue as “Atom-Thick Transistors.”
IEEE Member Victoria Serrano, an engineering professor at the Universidad Tecnológica de Panamá in Chiriquí, has come across many preuniversity students who don’t have a clue what kinds of STEM careers are available. She understood because she didn’t become interested in electrical engineering until she was in high school.
In 2016 she decided to help such teens by launching STEM Beyond the Borders. The program used robots to teach preuniversity students in Panama about STEM subjects. Classes were held not only in classrooms but also in public marketplaces and church recreation rooms.
The program received financial support from the IEEE Control Systems Society and EPICS in IEEE, which aims to empower students to apply technical solutions to aid their communities. Today, Serrano continues her mission through independent outreach efforts in the country.
For her work, Serrano received the 2019 IEEE Education Activities Board Meritorious Achievement Award in Outreach and Informal Education. The award honors IEEE members who teach STEM skills outside a classroom setting.
Serrano, born and raised in Panama, found her calling for educational outreach while pursuing her master’s degree and doctorate in electrical engineering at Arizona State University, in Tempe.
She says she was determined to focus only on her studies; however, a fellow graduate student, Michael Thompson, asked her to help out at the university’s Society of Hispanic Professional Engineers chapter and the ASU Mechanical-Autonomous Vehicles Club, where students research, design, and fly small radio-controlled aircraft. When Serrano visited local schools on behalf of both organizations, she taught the preuniversity students about mathematical concepts, using hands-on activities such as designing mechanical birds.
“When I realized what wonderful things could be done through outreach programs in the United States, I wanted to bring those types of projects to my home country,” Serrano says.
When she created the curriculum for STEM Beyond the Borders, Serrano took inspiration from those volunteering activities.
She says the most popular hands-on activity she teaches today in Panama is building Lego Mindstorms snake robots and racing them. Serrano creates the obstacle course, which has a curvy trajectory. She devises a theme for each session she teaches, such as military combat.
The students use blueprints to build their robots. Components include a DC battery, temperature and sound sensors, a Wi-Fi nano adapter, and a USB cable.
The students program and control their robot using the computational platform Matlab and simulation software Simulink. They conduct experiments to learn more about their robot’s speed to better prepare it for the race.
The project takes about two weeks to complete.
“When developing my program, I didn’t focus only on having the students build the robot,” she says. “They also learn math concepts such as distance, time, and how to calculate velocity.”
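The distance-time-velocity lesson the students work through can be sketched in a few lines of Python; the course length and lap times below are made up for illustration:

```python
def average_velocity(distance_m: float, time_s: float) -> float:
    """Average speed of a robot over a timed run: v = d / t."""
    if time_s <= 0:
        raise ValueError("time must be positive")
    return distance_m / time_s

# Three timed trials of a snake robot over a 2-meter stretch of the course
# (hypothetical numbers, not measurements from the program)
trials = [(2.0, 14.2), (2.0, 13.1), (2.0, 13.7)]
speeds = [average_velocity(d, t) for d, t in trials]
print(f"mean speed: {sum(speeds) / len(speeds):.3f} m/s")
```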
After the race, the students prepare a presentation and a poster to explain what experiments they conducted and why.
Serrano says one of her most satisfying moments is learning that one of her students has decided to pursue a STEM degree because of the program.
Of the 15 high school students who participated in the first STEM Beyond the Borders session in 2016, nine went on to study engineering at Universidad Tecnológica de Panamá.
Since then, Serrano has taught close to 100 students through her program.
As the demand for sessions and locations grows, Serrano is developing new ways to bring the program to more students across Panama.
She created CIATEC, which lets students access her Mindstorms robot-building course as well as a session on how to build circuit boards using Arduino, an open-source electronics platform. CIATEC incorporates the Spanish words for science (ciencia), art (arte), and technology (tecnología).
Storing light beams—putting an ensemble of photons, traveling through a specially prepared material, into a virtual standstill—has come a step closer to reality with a new discovery involving microwaves.
The research finds that microwaves traveling through a particular configuration of ceramic aluminum oxide rods can be made to hold in place for several microseconds. If the optical or infrared equivalent of this technology can be fabricated, then Internet and computer communications networks (each carried by optical and infrared laser pulses) might gain a new versatile tool that enables temporary, stationary storage of a packet of photons.
For now, the researchers from the City College of New York and the Moscow Institute of Physics and Technology have developed this new technology for microwaves, which represents an important stepping stone, says Alexander Khanikaev, associate professor of electrical engineering at City College. (The frequency of microwaves they’re working with falls right in the middle of the 802.11 Wi-Fi spectrum—a wireless local area network standard that IEEE sets and upholds.)
The group’s microwave “crystal”—a latticework of aluminum oxide ceramic columns sandwiched by aluminum plates in a triangle configuration some 20 centimeters on each side—is tuned to respond to specific wavelengths.
Send one frequency of microwaves (5.15 GHz) through this triangle-shaped course, and the waves will hold still at the triangle’s corners for some 1,000 cycles before being absorbed by the material or dissipated into the surrounding environment. Another frequency range (5.01 to 5.07 GHz) will propagate around the perimeter of the triangle.
In neither case will the microwaves penetrate inside of the triangle. This edge behavior of the medium makes it analogous to topological insulators—those materials that conduct electricity only at their outside edges, while inside behaving like a pure insulator.
The peculiar state of these microwave photons in which they hold in place at the corners of the triangular “crystal” is unique to electromagnetic waves, says Khanikaev. “You can think of it like they’re interfering destructively everywhere else,” he says. “And they’re interfering constructively at these corners.”
The geometric configuration of the aluminum oxide columns is crucial to the peculiar behavior the researchers observed, too. Khanikaev says they arranged the aluminum oxide columns in pairs (“dimers”) and groups of three (“trimers”). Using the dimers to trace out a smaller triangle in the middle of the 20-centimeter triangle, it turns out, made a kind of boundary inside of which the microwaves in the experiment did not venture.
The researchers describe their setup and results in a recent issue of the journal Nature Photonics.
However, Khanikaev also notes that they’ve begun to experiment with shrinking their findings down to a similar triangular crystal for infrared light. (Their new device measures 300 micrometers on a side.) They’ve reported their initial findings on the pre-print server arxiv.org.
“This geometric arrangement allows for the creation of a completely new class of electromagnetic modes,” he says. “Any new electromagnetic mode has potential for new applications, because it behaves differently.”
The ultimate idea, Khanikaev says, is to make the crystal tuneable. So a laser pulse might be bounced into one of these tiny resonator-like crystals. And then the crystal’s structure is altered so that the pulse now holds steady at the triangle’s corners—like a nanophotonic game of freeze tag.
The pulses couldn’t hold in place indefinitely, but crystals of higher quality factor (“Q factor”) might keep the photons steady for more cycles before being absorbed or dissipating into the surrounding environment.
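The link between Q factor, frequency, and storage time is the standard resonator relation, which can be sketched quickly; the Q value below is chosen only so the sketch reproduces roughly the 1,000 stored cycles described earlier, not a figure from the paper:

```python
import math

def photon_lifetime(q_factor: float, freq_hz: float) -> float:
    """Energy decay time of a resonator: tau = Q / (2 * pi * f)."""
    return q_factor / (2 * math.pi * freq_hz)

def cycles_stored(q_factor: float) -> float:
    """Oscillation cycles before the stored field decays to 1/e: N = Q / (2 * pi)."""
    return q_factor / (2 * math.pi)

f = 5.15e9              # Hz, the trapped-mode frequency in the experiment
q = 2 * math.pi * 1000  # an illustrative Q giving roughly 1,000 stored cycles
print(f"lifetime: {photon_lifetime(q, f) * 1e9:.1f} ns "
      f"({cycles_stored(q):.0f} cycles)")
```

Raising Q buys storage time linearly, which is why a higher-quality crystal directly extends how long the photons idle in place.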
The time spent in this photonic, semi-stationary state might only be microseconds or less. But in the world of laser pulses and digital communications, microseconds can still be long enough to, say, wait for another pulse to arrive while the trapped laser pulse idles in place.
“You can do signal manipulation,” Khanikaev says. “It has great potential. Because if you can do trapping and release on demand and control it in time, then you can control photonic signals.”
“Claims” is the mot juste because this nice, round dollar amount is an estimate based on the mass-manufacturing maturity of a product that has yet to ship. Such a factoid would hardly be worth mentioning had it come from some of the several-score odd lidar startups that haven’t shipped anything at all. But Velodyne created this industry back during DARPA-funded competitions, and has been the market leader ever since.
“The projection is $100 at volume; we’ll start sampling customers in the next few months,” Anand Gopalan, the company’s chief technology officer, tells IEEE Spectrum.
The company says in a release that the Velabit “delivers the same technology and performance found on Velodyne’s full suite of state-of-the-art sensors.” Given the device’s small size, that must mean the solid-state version of the technology. That is, the non-rotating kind.
Gopalan wouldn’t say much about how it works, only that the beam-steering did not use tiny mirrors based on micro-electromechanical systems (MEMS). “It differs from MEMS in that there’s no loss of form factor or loss of light,” he said. “In the most general language, it uses a metamaterial activated by low-cost electronics.”
Metamaterials also figure in a lidar from Lumotive.
The Velabit is no replacement for Velodyne’s iconic rotating roof tower, whose 128 laser beams rake a 360-degree field. To get that much coverage from this little lidar, you’d need six units; even then you’d see just 100 meters out, compared to the 128 beamer’s nearly 220 meters.
That’s okay. The new product isn’t meant for true driverless cars, of which there are precisely none on the market now (or likely anytime soon). Rather it’s for use in advanced driver assistance systems (ADAS), such as emergency braking, lane keeping, and adaptive cruise control.
“In the past 12 to 18 months [carmakers] are saying that lidar can be a very important tool for ADAS,” Gopalan says. “And even in a luxury car, a safety option can’t cost more than a few thousand [dollars]; a single device therefore has to be less.”
Another advantage is the Velabit’s compactness—at 6 x 6 x 3.5 centimeters, smaller than a deck of cards—which makes it possible for car designers to hide it, say, in the grille. It also makes it suitable for use in drones, robots, and road signs.
After specializing in hand-aligned apparatus that it sold in tiny numbers at stratospheric prices—around $80,000 for the 128-beam machine—Velodyne has moved to fully automated assembly, higher volumes, and lower margins. That was always going to happen as lidar moved out of the lab and into commercial products. But competitive pressures were also at play.
Velodyne once eschewed short- and medium-range lidar. Then Waymo, which was undoubtedly Velodyne’s largest single customer, began designing and building lidars of all ranges in-house. Last March, it began selling the short-range model.
It sounds like science fiction, but a neural implant could, many years from now, read and edit a person’s thoughts. Neural implants are already being used to treat disease, rehabilitate the body after injury, improve memory, communicate with prosthetic limbs, and more.
The U.S. Department of Defense and the U.S. National Institutes of Health (NIH) have devoted hundreds of millions of dollars in funding toward this sector. Independent research papers on the topic appear in top journals almost weekly.
Here, we describe types of neural implants, explain how neural implants work, and provide examples demonstrating what these devices can do.
A neural implant is a device placed inside the body that interacts with neurons.
Neurons are cells that communicate in the language of electricity. They fire electrical impulses in particular patterns, kind of like Morse code. An implant is a human-made device that is placed inside the body via surgery or an injection.
A neural implant, then, is a device—typically an electrode of some kind—that’s inserted into the body, comes into contact with tissues that contain neurons, and interacts with those neurons in some way.
With these devices, it’s possible to record native neural activity, allowing researchers to observe the patterns by which healthy neural circuits communicate. Neural implants can also send pulses of electricity to neurons, overriding native firing patterns and forcing the neurons to communicate in a different way.
In other words, neural implants enable scientists to hack into the nervous system. Call it neuromodulation, electroceuticals, or bioelectronics—interventions involving neural implants have the potential to become tremendously powerful medical tools.
Consider the functions of the nervous system: It controls thinking, seeing, hearing, feeling, moving, and urinating, to name a few. It also controls many involuntary processes such as organ function and the body’s inflammatory, respiratory, cardiovascular, and immune systems.
“Anything that the nervous system does could be helped or healed by an electrically active intervention—if we knew how to do it,” says Gene Civillico, a neuroscientist at the NIH, who runs the agency’s peripheral nerve stimulation funding program SPARC.
One of the most established clinical uses of neural implants is in a treatment called deep brain stimulation, or DBS. In this therapy, electrodes are surgically placed deep into the brain where they electrically stimulate specific structures in an effort to reduce the symptoms of various brain-based disorders.
The U.S. Food and Drug Administration (FDA) first approved the use of DBS in 1997 for essential tremor. Since then, the FDA or other global regulators have approved DBS for Parkinson’s disease, dystonia, tinnitus, epilepsy, obsessive-compulsive disorder, and neuropathic pain. DBS is also being investigated as a treatment for Tourette syndrome and psychiatric disorders such as depression. It is estimated that more than 150,000 people globally have received a DBS implant.
Researchers have also put a great deal of time into manipulating the vagus nerve using neural implants. The vagus nerve connects most of our key organs to the brain stem, and researchers are hacking this communication superhighway in an effort to treat heart failure, stroke, rheumatoid arthritis, Crohn’s disease, epilepsy, type 2 diabetes, obesity, depression, migraine, and other ailments.
Some of the most emotionally moving experiments involving neural implants have come with the stimulation of the spinal cord, also known as epidural stimulation. The treatment has enabled a handful of people with paralysis in their lower bodies to move, stand, and even walk a short distance for the first time since sustaining spinal cord injuries.
Perhaps no neuromodulation research has captivated the public’s imagination more than mind-controlled prostheses. These systems enable amputees to control robotic hands, arms, and legs—in rudimentary ways—using their thoughts. This can be accomplished with a neural implant in the brain or in the extremity above the amputation. Some of these robotic limbs can also provide sensory feedback by stimulating nerves just above the amputation, giving the user a sense of what he or she is touching.
And then there’s the stuff that comes across like science fiction. Researchers have successfully enhanced people’s memory capability for specific tasks by stimulating brain structures in precise ways. Quadriplegic individuals with brain implants have operated computers and typed sentences using only their thoughts. There’s an algorithm that can determine a person’s mood based on brain activity alone. A couple of companies have successfully brought to market implants that correct neural communication between the eye and the brain. Elon Musk says his company Neuralink plans to sync our brains with AI.
The invasiveness of any implant limits its use. It’s hard to justify brain or spinal surgery unless a person is in severe medical need. So engineers are constantly inventing better devices that reach deep in the body with less impact on tissues.
“Engineers are continually pushing the boundaries for what’s technically possible,” says David McMullen, program chief of the neuromodulation and neurostimulation program at the U.S. National Institute of Mental Health. “It’s all about decreasing the surgical burden, increasing the chronic nature of the implant and constantly trying to get ever smaller electrodes that cover a wider area of brain,” he says.
Engineers have concocted dust-sized brain implants, electrodes that climb nerves like a vine, electrodes made from flexible materials such as a nanoelectronic thread, stent-like electrodes, or “stentrodes,” that can get to the brain via blood vessels and record electrical activity, injectable electronic mesh made from silicon nanowires, electrodes that can be injected into the body as a liquid and then harden into a stretchy taffy-like substance, and more.
Neuromodulation can even be performed non-invasively using electrodes or magnetic coils placed on or near the skin. The strategy has proven effective for some conditions, although so far it doesn’t have the specificity or efficacy of implants.
But these innovative devices only get us so far. “There’s a misconception that the obstacles [to neuromodulation] are mainly technical, like the only reason we don’t have thought-controlled devices is because nobody has made a flexible-enough electrode yet,” says Civillico at NIH.
Researchers still need a basic understanding of the physiology of neural circuits, says Civillico. They need maps of how neurons are communicating, and the specific effects of these circuits on the body and brain. Without these maps, even the most innovative implants are effectively shooting electrical impulses into the dark.
Video Friday is your weekly selection of awesome robotics videos, collected by your Automaton bloggers. We’ll also be posting a weekly calendar of upcoming robotics events for the next few months; here’s what we have so far (send us your events!):
Let us know if you have suggestions for next week, and enjoy today’s videos.
The Real-World Deployment of Legged Robots Workshop is back at ICRA 2020!
We’ll be there!
[ Workshop ]
This video shows some cool musical experiments with Pepper. They should definitely release this karaoke feature to Peppers everywhere—with “Rage Against the Machine” songs included, of course. NSFW warning: There is some swearing by both robot and humans, so headphones recommended if you’re at work.
It all started when on a whim David and another team member fed a karaoke file into Pepper’s text to speech, with a quick Python script, and playing some music in parallel from their PC. The effect was a bit strange, but there was something so fun (and funny) to it. I think they were going for a virtual performance from Pepper or something, but someone noted that it sounds like he’s struggling like someone doing karaoke. And from there it grew into doing duets with Pepper.
This thing might seem ridiculous, and it is. But believe me, it’s genuinely fun. It was going all night in a meeting room at the office winter party.
[ Taylor Veltrop ]
And now, this.
In “Scary Beauty,” a performance conceived and directed by Tokyo-based musician Keiichiro Shibuya, a humanoid robot called Alter 3 not only conducts a human orchestra but also sings along with it.
Unlike the previous two Alters, Alter 3 has sensory and expressive capabilities closer to a human’s, such as cameras in both eyes, the ability to vocalize, and expressiveness around the mouth for singing. Its actuation output has also been increased compared with Alter 2, improving the immediacy of its body expression and enabling more dynamic movement. Another evolution of Alter 3 is portability: anyone can disassemble it, transport it by air, and reassemble it.
Carnegie Mellon University’s Henny Admoni studies human behavior in order to program robots to better anticipate people’s needs. Admoni’s research focuses on using assistive robots to address different impairments and aid people in living more fulfilling lives.
[ HARP Lab ]
Olympia was produced as part of a two-year project exploring the growth of social and humanoid robotics in the UK and beyond. Olympia was shot on location at Bristol Robotics Labs, one of the largest of its kind in Britain.
Humanoid robotics - one of the most complex and often provocative areas of artificial intelligence - forms the central subject of this short film. At what point are we willing to believe that we might form a real bond with a machine?
In this work, we explore user preferences for different modes of autonomy for robot-assisted feeding given perceived error risks and also analyze the effect of input modalities on technology acceptance.
This video shows work on a multi-agent system of aerial robots that form mid-air structures by docking, using position-based visual servoing. For the demonstration, the commercially available DJI Tello drone was modified for the task and commanded using the DJI Tello Python SDK.
[ YouTube ]
The video presents DLR CLASH (Compliant Low-cost Antagonistic Servo Hand), developed within the EU project Soma (grant number H2020-ICT-645599), and shows the hand’s resilience tests and its capability to grasp objects under different motor and sensor failures.
[ DLR ]
Squishy Robotics is celebrating our birthday! Here is a short montage of the places we’ve been and the things we’ve done over the last three years.
[ Squishy Robotics ]
The 2020 DJI RoboMaster Challenge takes place in Shenzhen in early August 2020.
[ RoboMaster ]
With support from the National Science Foundation, electrical engineer Yan Wan and a team at the University of Texas at Arlington are developing a new generation of "networked" unmanned aerial vehicles (UAVs) to bring long distance, broadband communications capability to first responders in the field.
[ NSF ]
Drones and UAVs are vulnerable to hackers that might try to take control of the craft or access data stored on-board. Researchers at the University of Michigan are part of a team building a suite of software to keep drones secure.
The suite is called Trusted and Resilient Mission Operations (TRMO). The U-M team, led by Wes Weimer, professor of electrical engineering and computer science, is focused on integrating the different applications into a holistic system that can prevent and combat attacks in real time.
[ UMich ]
A mobile robot that revs up industrial production: SOTO enables efficient automated line feeding, for example in the automotive industry. The supply-chain robot SOTO brings materials to the assembly line just in time and completely autonomously.
[ Magazino ]
MIT’s Lex Fridman gets us caught up with the state of the art in deep learning.
[ MIT ]
Just in case you couldn’t make it out to Australia in 2018, here are a couple of the keynotes from ICRA in Brisbane.
[ ICRA 2018 ]
For all its pure-electric acceleration and range and its ability to shapeshift, the Hypersport motorcycle shown off last week at CES by Vancouver, Canada-based Damon Motorcycles matters for just one thing: It’s the first chopper swathed in active safety systems.
These systems don’t take control, not even in anticipation of a crash, as they do in many advanced driver assistance systems in cars. They leave a motorcyclist fully in command while offering the benefit of an extra pair of eyes.
Why drape high tech “rubber padding” over the motorcycle world? Because that’s where the danger is: Motorcyclists are 27 times more likely to die in a crash than are passengers in cars.
“It’s not a matter of if you’ll have an accident on a motorbike, but when,” says Damon chief executive Jay Giraud. “Nobody steps into motorbiking knowing that, but they learn.”
The Hypersport’s sensor suite includes cameras, radar, GPS, solid-state gyroscopes, and accelerometers. It does not include lidar (“it’s not there yet,” Giraud says), but it does open the door a crack to another way of seeing the world: wireless connectivity.
The bike’s brains note everything that happens when danger looms, including warnings issued and evasive maneuvers taken, then shunts the data to the cloud via 4G wireless. For now that data is processed in batches, to help Damon refine its algorithms, a practice common among self-driving car researchers. Some day, it will share such data with other vehicles in real-time, a strategy known as vehicle-to-everything, or V2x.
But not today. “That whole world is 5-10 years away—at least,” Giraud grouses. “I’ve worked on this for over a decade—we’re no closer today than we were in 2008.”
The bike has an onboard neural net whose settings are fixed at any given time. When the net up in the cloud comes up with improvements, these are sent as over-the-air updates to each motorcycle. The updates have to be approved by each owner before going live onboard.
When the AI senses danger it gives warning. If the car up ahead suddenly brakes, the handlebars shake, warning of a frontal collision. If a vehicle coming from behind enters the biker’s blind spot, LEDs flash. That saves the rider the trouble of constantly having to look back to check the blind spot.
Above all, it gives the rider time. A 2018 report by the National Highway Traffic Safety Administration found that from 75 to 90 percent of riders in accidents had less than three seconds to notice a threat and try to avert it; 10 percent had less than one second. Just an extra second or two could save a lot of lives.
The patterns the bike’s AI teases out from the data are not always comparable to those a self-driving car would care about. A motorcycle shifts from one half of a lane to the other; it leans down, sometimes getting fearsomely close to the pavement; and it is often hard for drivers in other vehicles to see.
One motorbike-centric problem is the high risk a biker takes just by entering an intersection. Some three-quarters of motorcycle accidents happen there, and of that number about two-thirds are caused by a car’s colliding from behind or from the side. The side collision, called a T-bone, is particularly bad because there’s nothing at all to shield the rider.
Certain traffic patterns increase the risk of such collisions. “Patterns that repeat allow our system to predict risk,” Giraud says. “As the cloud sees the tagged information again and again, we can use it to make predictions.”
Damon is taking pre-orders, but it expects to start shipping in mid-2021. Like Tesla, it will deliver straight to the customer, with no dealers to get in the way.
What makes a job nearly perfect? It’s a combination of salary, demand (the number of open positions waiting to be filled), and job satisfaction, according to job search firm Glassdoor, which this week released a list of the best jobs in America for 2020.
Using median base salaries reported on Glassdoor in 2019, the number of U.S. job openings as of 18 December 2019, and the overall job satisfaction rating (on a scale of 1 to 5) reported by employees in those jobs, the company put front-end engineer in the number one spot, followed by Java developer and data scientist. That’s a switch from previous trends; data scientist held the number one spot on Glassdoor’s top jobs list for the four previous years.
In fact, you don’t hit a non-tech job until the 8th ranking, where speech language pathologist claims the spot, boosted by astronomical demand [see table].
2020’s Top Jobs
|Rank|Job|Median Base Salary|Job Satisfaction|Job Openings|
|---|---|---|---|---|
|1|Front End Engineer*|$105,240|3.9|13,122|
|8|Speech Language Pathologist|$71,867|3.8|29,167|
|10|Business Development Manager|$78,480|4.0|6,560|

*Tech job. Source: Glassdoor
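The ranking combines the three factors described above. Glassdoor does not publish its actual formula, so the sketch below is a naive equal-weighted composite with made-up normalization constants; it does not reproduce Glassdoor's real ordering, it only illustrates the idea of folding salary, demand, and satisfaction into one score:

```python
def job_score(salary: float, openings: int, satisfaction: float,
              max_salary: float = 150_000, max_openings: int = 30_000) -> float:
    """Toy composite score on a 0-5 scale (illustrative weights only)."""
    s = min(salary / max_salary, 1.0)      # salary, normalized to 0..1
    o = min(openings / max_openings, 1.0)  # demand, normalized to 0..1
    j = satisfaction / 5.0                 # satisfaction, rescaled from 0..5
    return round((s + o + j) / 3 * 5, 2)   # average, rescaled back to 0..5

# The three rows shown in the table above
jobs = {
    "Front End Engineer": (105_240, 13_122, 3.9),
    "Speech Language Pathologist": (71_867, 29_167, 3.8),
    "Business Development Manager": (78_480, 6_560, 4.0),
}
for name, stats in jobs.items():
    print(f"{name}: {job_score(*stats)}")
```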
Tech jobs are among the highest paying, however, with seven of the top ten median salaries [see table].
2020’s Top Jobs by Salary
|Rank|Job|Median Base Salary|
|---|---|---|
|8|Dev Ops Engineer*|$107,310|
|10|Front End Engineer*|$105,240|

*Tech job. Source: Glassdoor
Tech jobs, however, aren’t the most satisfying, according to Glassdoor’s rankings. Top honors in that category go to corporate recruiter posts, followed by strategy manager. The only tech jobs to make the top ten rankings in job satisfaction were Salesforce Developer and Data Scientist; two other “most satisfying” job categories included a mix of technical and non-technical professionals [see table].
2020’s Top Jobs by Satisfaction
|Rank|Job|Satisfaction Score (out of 5)|
|---|---|---|
|3|Customer Success Manager|4.2|
|9|Business Development Manager|4.0|

*Tech job. °Job category includes some tech professions. Source: Glassdoor
A complete list of the 50 top jobs is available on Glassdoor.
One of the primary purposes of a gate driver is to enable power switches to turn on and off faster, improving rise and fall times. Faster switching enables higher efficiency and higher power density, reducing losses in the power stage associated with high slew rates. However, as slew rates increase, so do measurement and characterization uncertainty.
Effective measurement and characterization considerations must account for:
- Proper gate driver design
  - Accurate timing (propagation delay with regard to skew, pulse-width distortion, and jitter)
  - Controllable gate rise and fall times
  - Robustness against noise sources (input glitches and CMTI)
- Minimized noise coupling
- Minimized parasitic inductance
The trend toward wide-bandgap power designs over silicon-based designs makes measurement and characterization a greater challenge. High slew rates in SiC and GaN devices present designers with hazards such as large overshoots and ringing, and potentially large unwanted voltage transients that can cause spurious switching of the MOSFETs.
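The measurement hazard scales directly with dV/dt: even a picofarad of parasitic coupling turns a fast switching edge into tens of milliamps of injected current. A back-of-envelope sketch; the bus voltage, rise time, and capacitance below are illustrative assumptions, not figures for any specific device:

```python
def slew_rate(v_swing: float, t_rise: float) -> float:
    """Approximate 10-90% slew rate in V/s: dV/dt ~= 0.8 * Vswing / t_rise."""
    return 0.8 * v_swing / t_rise

def injected_current(c_parasitic: float, dv_dt: float) -> float:
    """Displacement current driven through a parasitic capacitance: I = C * dV/dt."""
    return c_parasitic * dv_dt

dvdt = slew_rate(800.0, 10e-9)     # an 800-V bus switching in 10 ns
i = injected_current(1e-12, dvdt)  # through 1 pF of coupling capacitance
print(f"{dvdt / 1e9:.0f} V/ns -> {i * 1e3:.0f} mA injected")
```

This is why minimizing parasitic inductance and coupling capacitance appears alongside driver timing in the checklist above: the faster the edge, the more any stray path corrupts both the circuit and the measurement.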
Birds have been doing their flying thing with flexible and feathery wings for about a hundred million years, give or take. And about a hundred years ago, give or take, humans decided that, although birds may be the flying experts, we’re just going to go off in our own direction with mostly rigid wings and propellers and stuff, because it’s easier or whatever. The few attempts at making artificial feathers that we’ve seen in the past have been sufficient for a few specific purposes but haven’t really come close to emulating the capabilities that real feathers bestow on the wings of birds. So a century later, we’re still doing the rigid wings with discrete flappy bits, while birds (one has to assume) continue to judge us for our poor choices.
In a paper published today in Science Robotics, researchers at Stanford University have presented some new work on understanding exactly how birds maintain control by morphing the shape of their wings. They put together a flying robot called PigeonBot with a pair of “biohybrid morphing wings” to test out new control principles, and instead of trying to develop some kind of fancy new artificial feather system, they did something that makes a lot more sense: They cheated, by just using real feathers instead.
The reason robots are an important part of this research (which otherwise seems like it would be avian biology) is that there’s no good way to use a real bird as a test platform. As far as I know, you can’t exactly ask a pigeon to try to turn using just some specific wing muscles, but you can definitely program a biohybrid robot to do that. However, most of the other bioinspired flying robots that we’ve seen have been some flavor of ornithopter (rigid flapping wings), or they’ve used stretchy membrane wings, like bats.
Feathers aren’t just more complicated to manufacture; you also have to find some way of replicating and managing all of the complex feather-on-feather interactions that govern wing morphing in real birds. For example, by examining real feathers, the researchers discovered that adjacent feathers stick to each other to resist sliding in one direction only, using micron-scale features that the researchers describe as “directional Velcro,” something “new to science and technology.” Real feathers can slide to allow the wing to morph, but past a certain point, the directional Velcro engages to keep gaps from developing in the wing surface. There are practical advantages, too: “they are softer, lighter, more robust, and easier to get back into shape after a crash by simply preening ruffled feathers between one’s fingers.”
With the real feathers elastically connected to a pair of robotic bird wings with wrist and finger joints that can be actuated individually, PigeonBot relies on its biohybrid systems for maneuvering, while thrust and a bit of additional stabilizing control comes from a propeller and a conventional tail. The researchers found that PigeonBot’s roll could be controlled with just the movement of the finger joint on the wing, and that this technique is inherently much more stable than the aileron roll used by conventional aircraft, as corresponding author David Lentink, head of Stanford's Bio-Inspired Research & Design (BIRD) Lab, describes:
The other cool thing we found is that the morphing wing asymmetry results automatically in a steady roll angle. In contrast aircraft aileron left-right asymmetry results in a roll rate, which the pilot or autopilot then has to stop to achieve a steady roll angle. Controlling a banked turn via roll angle is much simpler than via roll rate. We think it may enable birds to fly more stably in turbulence, because wing asymmetry corresponds to an equilibrium angle that the wings automatically converge to. If you are flying in turbulence and have to control the robot or airplane attitude via roll rate in response to many stochastic perturbations, roll angle has to be actively adjusted continuously without any helpful passive dynamics of the wing. Although this finding requires more research and testing, it shows how aerospace engineers can find inspiration to think outside of the box by studying how birds fly.
The researchers suggest that the directional Velcro technology is one of the more important results of this study, and while they’re not pursuing any of the numerous potential applications, they’ve “decided to not patent this finding to help proliferate our discovery to the benefit of society at large” in the hopes that anyone who makes a huge pile of money off of it will (among other things) invest in bird conservation in gratitude.
As for PigeonBot itself, Lentink says he’d like to add a biohybrid morphing tail, as well as legs with grasping feet, and additional actuators for wing folding, twisting, and flapping. And maybe make it fly autonomously, too. Sounds good to me—that kind of robot would be great at data transfer.
[ Science Robotics ]
THE INSTITUTE The IEEE Power Electronics Society (PELS) organized the Empower a Billion Lives (EBL) global competition to crowdsource ideas that could improve energy access in underserved communities. The competition’s solutions were aimed at addressing the energy-access needs of the 3 billion people living in energy poverty, including 1 billion people who have no access at all to energy services, as identified by the International Energy Agency.
“Energy access is an area where IEEE has the expertise and global reach and can review viable solutions to help de-risk market entry for solutions that will address the challenge,” says Deepakraj M. Divan, global steering chair of EBL.
Teams developed technology-agnostic solutions using renewable and sustainable 21st-century technologies that were regionally appropriate and had sound business plans that could be scaled up to serve more people.
Solutions had to provide users with at least 200 watt-hours of electricity per day—an amount sufficient for a variety of activities beyond just providing lighting, such as cellphone charging, pumping water, running fans, milling, and refrigeration.
The competition consisted of an online round, a regional round, field-testing, and then the global final. More than 475 teams from 70-plus countries registered for the 2018 online round. They came from universities, companies, research labs, and nonprofit organizations.
The regional rounds included 82 teams selected from proposals submitted online. The sessions were held in Atlanta; Chennai, India; Johannesburg; Seville, Spain; and Shenzhen, China. The 23 teams that won at the regionals advanced to field-testing, deploying their solutions in areas with no access to electricity where people live on less than US $1.90 per day.
“The format of Empower a Billion Lives was helpful for the teams because they were able to interact with experts in the power electronics field,” says Mike Kelly, IEEE PELS executive director. “I think the feedback they received was invaluable in helping them further develop their solutions.”
The global final winner—selected based on the technology the team used, the project’s social impact, the business model, and field-testing data—was Solar Urja Through Localization for Sustainability (SoULS), which collected the $100,000 grand prize. SoULS, founded at the Indian Institute of Technology Bombay in Mumbai, India, is an initiative that provides training and support for women to become entrepreneurs in the solar business.
The World Bank reports that more than 200 million people in India are not connected to the power grid.
The winning SoULS solution trains women and schoolchildren to assemble simple solar lamps that students bring home to use while studying at night. After constructing the lamps, the female entrepreneurs learn to assemble, install, and repair more sophisticated solar solutions. Hundreds of women who received the training now run factories.
“So far 700 women-led factories have opened up in 10 states serving 317 subdistricts and 40,000 villages,” Divan says. “One of the great advantages to this model is the money the women make stays within the community.”
Additionally, many of the women have opened up stores to offer villagers related products such as solar panels, DC appliances, batteries, and accessories.
The global final was made possible by funding from PELS, Vicor, ON Semiconductor, Southern Power, Kehua, Sungrow, and Texas Instruments. The IEEE Foundation provided partnership and support along with the Center for Distributed Energy at Georgia Tech.
The next EBL competition is scheduled to begin next year.
Jeremiah Daniels is a former intern for The Institute. Jane Celusak is a PELS project manager.
Researchers from Rice, Stanford, Princeton, and Southern Methodist University have developed a new way to use lasers to see around corners that beats the previous technique on resolution and scanning speed. The findings appear today in the journal Optica.
The U.S. military—which funded the work through DARPA grants—is interested for obvious reasons, and NASA wants to use it to image caves, perhaps doing so from orbit. The technique might one day also let rescue workers peer into earthquake-damaged buildings and help self-driving cars navigate tricky intersections.
One day. Right now it’s a science project, and any application is years away.
The original way of corner peeping, dating to 2012, measures the time it takes laser light to travel to a reflective surface, onward to a hidden object, and back again. Such time-of-flight measurement requires hours of scanning time to produce a resolution measured in centimeters. Other methods have since been developed that look at reflected light in an image to infer missing parts.
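The time-of-flight principle is simple arithmetic: the measured round trip bounds the unknown path length. A toy sketch (the geometry, numbers, and function name are illustrative, not from the paper):

```python
# Illustrative time-of-flight calculation: light travels to the relay wall,
# on to the hidden object, and back, so the round-trip time fixes the total
# optical path length.
C = 299_792_458.0  # speed of light, m/s

def hidden_path_length(round_trip_s: float, wall_distance_m: float) -> float:
    """Given the total round-trip time and the known laser-to-wall distance,
    return the one-way wall-to-hidden-object distance."""
    total_path = C * round_trip_s  # full optical path, in metres
    # Subtract the two known laser<->wall legs, then halve the remaining
    # wall -> object -> wall segment.
    return (total_path - 2 * wall_distance_m) / 2

# A 10 ns round trip with the wall 1 m away puts the hidden object
# about 0.499 m beyond the corner.
print(hidden_path_length(10e-9, 1.0))
```

Centimeter-scale resolution, then, demands timing resolution on the order of tens of picoseconds, which is part of why scanning takes so long.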
The latest method looks instead at speckle, a shimmering interference pattern that in many laser applications is a bug; here it is a feature because it contains a trove of spatial information. To get the image hidden in the speckle—a process called non-line-of-sight correlography—involves a bear of a calculation. The researchers used deep-learning methods to accelerate the analysis.
“Image acquisition takes a quarter of a second, and we’re getting sub-millimeter resolution,” says Chris Metzler, the leader of the project, who is a post-doc in electrical engineering at Stanford. He did most of the work while completing a doctorate at Rice.
The problem is that the system can achieve these results only by greatly narrowing the field of view.
“Speckle encodes interference information, and as the area gets larger, the resolution gets worse,” says Ashok Veeraraghavan, an associate professor of electrical engineering and computer science at Rice. “It’s not ideal to image a room; it’s ideal to image an ID badge.”
The two methods are complementary: Time-of-flight gets you the room, the guy standing in that room, and maybe a hint of a badge. Speckle analysis reads the badge. Doing all that would require separate systems operating two lasers at different wavelengths, to avoid interference.
Today corner peeping by any method is still “lab bound,” says Veeraraghavan, largely because of interference from ambient light and other problems. To get the results that are being released today, the researchers had to work from just one meter away, under ideal lighting conditions. A possible way forward may be to try lasers that emit in the infrared.
Another intriguing possibility is to wait until the wireless world’s progress toward ever-shorter wavelengths finally reaches the millimeter band, which is small enough to resolve most identifying details of cars and people, for instance.
Augmented reality in a contact lens? Science fiction writers envisioned the technology decades ago, and startups have been working on developing an actual product for at least 10 years.
Today, Mojo Vision announced that it has done just that—put 14K pixels-per-inch microdisplays, wireless radios, image sensors, and motion sensors into contact lenses that fit comfortably in the eyes. The first generation of Mojo Lenses is powered wirelessly, though future generations will have batteries on board. A small external pack, besides providing power, handles sensor data and sends information to the display. The company is calling the technology Invisible Computing, and company representatives say it will get people’s eyes off their phones and back onto the world around them.
The first application, says Steve Sinclair, senior vice president of product and marketing, will likely be for people with low vision—providing real-time edge detection and dropping crisp lines around objects. In a demonstration last week at CES 2020, I used a working prototype (albeit by squinting through the lens rather than putting it into my eyes), and the device highlighted shapes in bright green as I looked around a dimly lit room.
The effect was impressive and it was easy to see how useful this could be. Even people’s facial features were highlighted—not in extreme detail, but with enough resolution to distinguish a smile from a neutral expression. The company eventually plans to add the ability to zoom to its vision enhancement features, and announced a partnership with the Vista Center for the Blind and Visually Impaired to develop additional applications.
I also saw a demonstration of text displayed using the prototype; it was easy to read. Potential future applications, beyond those intended for people with low vision, include translating languages in real time, tagging faces, and providing emotional cues.
Mojo Vision has yet to implement its planned eye-tracking technology with the lenses, but says that’s coming soon, and will allow the wearer to control apps without relying on external devices.
“People can’t tell you are wearing it, so we want the interaction to be subtle, done using just your eyes,” Sinclair said.
The experience is different from wearing glasses, says Sinclair, who along with other Mojo Vision executives has been wearing the lenses. “When you close your eyes, you still see the content displayed,” he says.
The path ahead is not a short one; contact lenses are considered medical devices and therefore need U.S. Food and Drug Administration (FDA) approval. But the Mojo Lens has been designated an FDA Breakthrough Device, which will speed things up a little, and clinical studies have begun.
The company is well-funded for the journey. Based in Saratoga, Calif., Mojo to date has 84 employees and has pulled in US $105 million in investment from traditional Silicon Valley venture firms like Khosla Ventures as well as big companies like LG and Google. And its technology is well-protected, with more than 100 patents, Mojo said in a press release.
Biological organisms have certain useful attributes that synthetic robots do not, such as the abilities to heal, adapt to new situations, and reproduce. Yet molding biological tissues into robots or tools has been exceptionally difficult to do: Experimental techniques, such as altering a genome to make a microbe perform a specific task, are hard to control and not scalable.
Now, a team of scientists at the University of Vermont and Tufts University in Massachusetts has used a supercomputer to design novel lifeforms with specific functions, then built those organisms out of frog cells.
The new, AI-designed biological bots crawl around a petri dish and heal themselves. Surprisingly, the biobots also spontaneously self-organize and clear their dish of small trash pellets.
“This wasn’t something that we explicitly selected for in our evolutionary algorithm,” says Josh Bongard, a roboticist at the University of Vermont who co-led the research, published this week in the Proceedings of the National Academy of Sciences. “It emerges from the fact that cells have their own intelligence and their own plans.”
The idea for AI-designed biobots came from a DARPA funding call for autonomous machines that adapt and thrive in the environment. Bongard and biologist Michael Levin at Tufts University conceived a plan to take advantage of Mother Nature’s hard work and build a machine out of something already capable of adapting: living cells.
The researchers ran an evolutionary algorithm on a supercomputer at the University of Vermont over several days. The algorithm, inspired by natural selection, used biological building blocks to create a random population of new life-form candidates. The algorithm then winnowed through the designs with a fitness function that scored each candidate on its ability to do a certain thing—in this case, the ability to move.
The most promising designs became the basis to spawn a new set of designs, and the best of those were selected again. Rinse and repeat, and after 100 runs of the algorithm, tossing out billions of potential designs, the team had a set of five finalists—AI-created designs that moved well in silico.
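The select-spawn-repeat loop described above can be sketched with a toy genome and a stand-in fitness function (the bit-string genome, the fitness score, and the population sizes here are all invented for illustration; the paper's actual algorithm evolves voxel-based body plans scored by a physics simulator):

```python
# Minimal evolutionary-algorithm sketch: score a population, keep the most
# promising designs, spawn mutated offspring from them, and repeat.
import random

GENOME_LEN = 16

def fitness(genome):
    # Stand-in for "how far did this design move in silico":
    # here, simply the number of 1-bits.
    return sum(genome)

def mutate(genome):
    g = list(genome)
    g[random.randrange(len(g))] ^= 1  # flip one "building block"
    return g

def evolve(pop_size=50, generations=100, seed=0):
    random.seed(seed)
    population = [[random.randint(0, 1) for _ in range(GENOME_LEN)]
                  for _ in range(pop_size)]
    for _ in range(generations):
        # Score every candidate and keep the most promising half...
        population.sort(key=fitness, reverse=True)
        survivors = population[:pop_size // 2]
        # ...then spawn mutated offspring from the survivors. Rinse, repeat.
        population = survivors + [mutate(random.choice(survivors))
                                  for _ in range(pop_size - len(survivors))]
    return max(population, key=fitness)

best = evolve()
print(fitness(best))  # fitness of the best surviving design
```

The real search is vastly larger (billions of candidate designs), but the structure is the same: the fitness function does the winnowing.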
Bongard’s team sent the finalist designs to Levin’s lab at Tufts, where microsurgeon Douglas Blackiston deemed four of the five designs too difficult or impossible to build. But the fifth design seemed doable. Blackiston used tiny forceps and a tiny electrode under a microscope to cut and join heart and skin cells from the African frog Xenopus laevis into a close approximation of the computer’s design. When cut in half, the cells stitched themselves back together—something today’s robots and computers clearly don’t do.
Once constructed, the millimeter-wide biobots moved around a petri dish as the heart cells contracted. When the team put small pellets into the dish, the cells unexpectedly worked together to clump the pellets into neat piles.
Bongard imagines a future where such biobots could be used to clean up microplastics in the ocean, especially as the biobots are 100 percent biocompatible and degrade in salt water. “That might make these biobots a uniquely appealing approach for environmental remediation,” says Bongard.
For now, the minuscule robots are best at locomotion, but Bongard has other tasks in mind. The next step, he says, is developing a “cage bot”—a hollow cube that can pick up and carry a payload. With that ability, one could build bots out of a person’s own cells, then use them to deliver medications deep into the body without prompting an immune response, the authors suggest.
Without a digestive system to ingest food or a nervous system to sense the surrounding environment, the organisms lived for just days. In the future, incorporating different cell types could change that: “If we wanted them to exist for longer periods of time, we might want them to be able to find and eat food sources,” says Bongard. “We’d also like to be able to incorporate sense organs into these biobots.” The collaborators are now building AI-designed biobots with mammalian cells.
The team is keenly aware that their new organisms might leave some people feeling unsettled, slipping into the uncanny valley. “Frogs that are not frogs definitely qualify for this,” says Bongard.
Plus, as they create new lifeforms—with, say, digestive, nervous, and even reproductive systems—the team is working with bioethicists and following strict animal welfare laws. “As we move further and further away from recognizable organisms, we may need to create new regulations for this kind of technology,” says Bongard.
Amidst rising tensions after the United States killed Qassem Soleimani, the chief of Iran’s Quds Force, in a drone strike in Baghdad last week, security experts and U.S. government officials warn that Iran may retaliate with cyberattacks.
Iran-based attack groups have expanded their digital offensive capabilities significantly since 2012, when they launched crippling distributed denial-of-service attacks against financial services companies. Since then, the cybersecurity arm of Iran’s Islamic Revolutionary Guard Corps, and private sector contractors acting on behalf of the government, have added tools to their arsenals.
Those tools enable attackers to execute account takeovers and spear phishing campaigns to steal intellectual property and sensitive information, and include destructive malware designed to disrupt operations, according to the National Cyber Awareness System alert issued by the U.S. Department of Homeland Security’s Cybersecurity and Infrastructure Security Agency (CISA) earlier this month.
Iran has also “demonstrated a willingness” to use wiper malware, CISA said in its 6 January alert. Wipers are a category of malware that erases the contents of an infected machine’s hard drive and then destroys the computer’s master boot record, making it impossible for the machine to boot up again. Like any other type of malware, wipers rely on various methods for the initial infection, and once in, they can steal information or execute unauthorized code. The difference is that wipers don’t care about being stealthy, because their primary purpose is to render the machine unusable.
“Don’t expect DDoS this time, [Iran] won’t view it as a proportionate response,” says Hank Thomas, the CEO of cybersecurity venture capital firm Strategic Cyber Ventures. “The Iranians will want to respond with something violent in the physical domain, and destructive in the cyber domain.”
The destructive data-wiping malware used in the 2012 Shamoon attack to destroy tens of thousands of computers belonging to Saudi oil giant Aramco is believed to be of Iranian origin. In 2015, James R. Clapper, then-U.S. Director of National Intelligence, told a Congressional committee [PDF] that the information-stealing malware which infected and erased the hard drives of Sands Las Vegas Corporation computers in 2014 was linked to Iran.
Just last week, the Saudi National Cybersecurity Authority (NCSC) identified an attack using the Dustman wiper malware against an unnamed entity in the Middle East. While Saudi authorities themselves did not name Iran as the culprit, analysts familiar with the attack told CyberScoop that Dustman was technically similar to past Iranian activities. Sources told ZDNet the victim was Bapco, Bahrain’s national oil company.
Saudi authorities stated with “moderate confidence” that the attackers broke into the victim’s networks by “exploiting one of the remote execution vulnerabilities in a VPN appliance that was disclosed in July 2019.” A 9 January U.S. Federal Bureau of Investigation advisory, first reported by CyberScoop, noted that Iranian groups frequently target vulnerabilities in virtual private network (VPN) applications.
CISA has also issued several advisories about multiple vulnerabilities in VPN servers from Fortinet, Palo Alto Networks, and Pulse Secure over the past year. The most recent advisory focused on Pulse Secure VPN servers, where attackers were successfully exploiting vulnerabilities despite a patch being available since April 2019. “Unpatched Pulse Secure VPN servers continue to be an attractive target for malicious actors,” CISA said in that alert on 10 January.
Even as CISA warned about heightened risks of cyberattacks from Iran and its proxies, the agency said in its public advisory [PDF] that organizations should assess how attractive they are to Iranian attack groups. Organizations may be targeted because their business model intersects with Iranian interests, or to gain access to, or information about, their customers and competitors, says Rick Holland, chief information security officer and vice president of strategy at digital risk protection company Digital Shadows. Businesses should look beyond their own threat models to see how Iranian interests might intersect with their supply chains.
Wiper malware has not yet been widely deployed, but extortion threat models and wiper tabletop exercises can help organizations plan how they would respond to wiper attacks, Holland says. Elements of ransomware recovery planning can be used for wiper malware planning—particularly the parts that have to do with disaster recovery and maintaining business continuity. More importantly, Holland says, work done now on responding to wiper malware could also prove useful against a multitude of other threats—not just Iran-based attackers.
“Threat du jour thinking isn’t an adequate defense model,” Holland says. “If a nation-state is going to target you, detection and response will be your fallback.”
Whether you know it or not, you’re feeding artificial intelligence algorithms. Companies, governments, and universities around the world train machine learning software on unsuspecting citizens’ medical records, shopping history, and social media use. Sometimes the goal is to draw scientific insights, and other times it’s to keep tabs on suspicious individuals. Even AI models that abstract from data to draw conclusions about people in general can be prodded in such a way that individual records fed into them can be reconstructed. Anonymity dissolves.
To restore some amount of privacy, recent legislation such as Europe’s General Data Protection Regulation and the California Consumer Privacy Act provides a right to be forgotten. But making a trained AI model forget you often requires retraining it from scratch with all the data but yours, a process that can take weeks of computation.
Two new papers offer ways to delete records from AI models more efficiently, possibly saving megawatts of energy and making compliance more attractive. “It seemed like we needed some new algorithms to make it easy for companies to actually cooperate, so they wouldn’t have an excuse to not follow these rules,” said Melody Guan, a computer scientist at Stanford and co-author of the first paper.
Because not much has been written about efficient data deletion, the Stanford authors first aimed to define the problem and describe four design principles that would help ameliorate it. The first principle is “linearity”: Simple AI models that just add and multiply numbers, avoiding so-called nonlinear mathematical functions, are easier to partially unravel. The second is “laziness,” in which heavy computation is delayed until predictions need to be made. The third is “modularity”: If possible, train a model in separable chunks and then combine the results. The fourth is “quantization,” or making averages lock onto nearby discrete values so removing one contributing number is unlikely to shift the average.
The Stanford researchers applied two of these principles to a type of machine learning algorithm called k-means clustering, which sorts data points into natural clusters—useful for, say, analyzing genetic differences between closely related populations. (Clustering has been used for this exact task on a medical database called the UK Biobank, and one of the authors has actually received a notice that some patients had asked for their records to be removed from that database.) Using quantization, the researchers developed an algorithm called Q-k-means and tested it on six datasets, categorizing cell types, written digits, hand gestures, forest cover, and hacked Internet-connected devices. Deleting 1,000 data points from each set, one point at a time, Q-k-means was 2 to 584 times as fast as regular k-means, with almost no loss of accuracy.
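The quantization principle is easy to see in miniature: if centroids are snapped to a coarse grid, removing a single point usually leaves the snapped centroid exactly where it was, so nothing downstream needs recomputing. The sketch below illustrates only that idea (the grid step, data, and function names are invented; this is not the authors' Q-k-means code):

```python
# Quantization sketch: lock averages onto nearby discrete values so that
# deleting one contributing point is unlikely to shift the average.
import numpy as np

def quantize(centroid, step=0.5):
    # Snap each coordinate to the nearest multiple of `step`.
    return np.round(centroid / step) * step

def centroid_of(points):
    return points.mean(axis=0)

rng = np.random.default_rng(0)
# A tight cluster of 1,000 points around (3, 3).
cluster = rng.normal(loc=3.0, scale=0.05, size=(1000, 2))

before = quantize(centroid_of(cluster))
after = quantize(centroid_of(cluster[1:]))  # delete the first point

# The raw centroid moved slightly, but the quantized centroid did not,
# so the deletion requires no re-clustering at all.
print(np.array_equal(before, after))  # True
```

In the degenerate cases where a deletion does push the centroid across a grid boundary, the algorithm falls back to recomputation, which is why the measured speedups vary so widely across datasets.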
Using modularization, they developed DC-k-means (for Divide and Conquer). The points in a dataset are randomly split into subsets, and clustering is done independently within each subset. Those clusters are then themselves clustered, and so on. Deleting a point from one subset leaves the others untouched. Here the speedup ranged from 16 to 71 times, again with almost no loss of accuracy. The research was presented last month at the Neural Information Processing Systems (NeurIPS) conference, in Vancouver, Canada.
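The divide-and-conquer structure can be sketched in a few lines. The tiny Lloyd's-algorithm k-means below, and the subset sizes, are stand-ins for illustration, not the paper's implementation:

```python
# DC-k-means sketch: cluster random subsets independently, then cluster the
# resulting centroids. Deleting a point only redoes its own subset.
import numpy as np

def kmeans(points, k=2, iters=20, seed=0):
    # Minimal Lloyd's algorithm, enough to show the structure.
    rng = np.random.default_rng(seed)
    centers = points[rng.choice(len(points), k, replace=False)]
    for _ in range(iters):
        dists = ((points[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
        labels = dists.argmin(axis=1)
        centers = np.array([points[labels == j].mean(axis=0)
                            if np.any(labels == j) else centers[j]
                            for j in range(k)])
    return centers

rng = np.random.default_rng(1)
data = rng.normal(size=(300, 2))
subsets = np.array_split(data, 3)          # "randomly split into subsets"

sub_centers = [kmeans(s) for s in subsets]  # cluster each subset on its own
top = kmeans(np.vstack(sub_centers))        # then cluster those clusters

# Deleting a point from subset 0 leaves subsets 1 and 2 untouched:
sub_centers[0] = kmeans(subsets[0][1:])
top_after_delete = kmeans(np.vstack(sub_centers))
print(top_after_delete.shape)  # (2, 2)
```

Only the cheap top-level clustering and one subset are redone per deletion, which is where the reported 16- to 71-fold speedup comes from.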
“What’s nice about the paper is they were able to leverage some of the underlying aspects of this algorithm”—k-means clustering—said Nicolas Papernot, a computer scientist at the University of Toronto and Vector Institute, who was not involved in the work. But some of the tricks won’t work as well with other types of algorithms, such as the artificial neural networks used in deep learning. Last month, Papernot and collaborators posted a paper on the preprint server arXiv presenting a training approach that can be used with neural networks, called SISA training (for Sharded, Isolated, Sliced, and Aggregated).
The approach uses modularity in two different ways. First, sharding breaks the dataset into subsets, and copies of the model are trained independently on each. When it comes time to make a prediction, the predictions of each model are aggregated into one. Deleting a data point requires retraining only one model. The second method, slicing, further breaks up each subset. The model for that subset trains on slice 1, then slices 1 and 2, then 1 and 2 and 3, and so on, and the trained model is archived after each step. If you delete a data point from slice 3, you can revert to the third stage of training and go from there. Sharding and slicing “give us two knobs to tune how we train the model,” Papernot says. Guan calls their methods “pretty intuitive,” but says they use “a much less stringent standard of record removal.”
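The sharding-and-slicing bookkeeping can be sketched with a stand-in "model" that simply records which examples it was trained on; the checkpointing structure, not the learner, is the point here (all names and split sizes are illustrative, not the SISA paper's code):

```python
# SISA-style sketch: shard the data, train per shard, and within each shard
# train incrementally over slices, archiving a checkpoint after each step.
import copy

class ToyModel:
    def __init__(self):
        self.seen = []  # stand-in for learned parameters
    def train(self, batch):
        self.seen.extend(batch)

def sisa_train(data, n_shards=2, n_slices=3):
    shards = [data[i::n_shards] for i in range(n_shards)]
    checkpoints = []  # per shard: (its slices, models archived per slice)
    for shard in shards:
        slices = [shard[i::n_slices] for i in range(n_slices)]
        model, ckpts = ToyModel(), []
        for sl in slices:
            model.train(sl)                     # slice 1, then 1+2, then 1+2+3
            ckpts.append(copy.deepcopy(model))  # archive after each step
        checkpoints.append((slices, ckpts))
    return checkpoints

def forget(checkpoints, shard_idx, slice_idx, point):
    # Revert the affected shard to the checkpoint *before* the affected
    # slice, drop the point, and retrain only from there.
    slices, ckpts = checkpoints[shard_idx]
    slices[slice_idx] = [p for p in slices[slice_idx] if p != point]
    model = copy.deepcopy(ckpts[slice_idx - 1]) if slice_idx > 0 else ToyModel()
    for sl in slices[slice_idx:]:
        model.train(sl)  # (a full implementation would re-archive here)
    del ckpts[slice_idx:]  # stale checkpoints discarded in this sketch
    return model

data = list(range(12))
ckpts = sisa_train(data)           # shard 0 holds the even numbers
m = forget(ckpts, shard_idx=0, slice_idx=1, point=8)
print(sorted(m.seen))  # [0, 2, 4, 6, 10]: shard 0's points with 8 removed
```

At prediction time, each shard's model votes and the votes are aggregated, which is why deleting from one shard never touches the others.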
The Toronto researchers tested the method by training neural networks on two large datasets, one containing more than 600,000 images of home address numbers, and one containing more than 300,000 purchase histories. When deleting 0.001 percent of each dataset and then retraining, sharding (with 20 shards) made retraining go 3.75 times as fast for the addresses and 8.31 times as fast for the purchases (compared with training a model in the standard fashion and then retraining it from scratch without the deleted data points), with little reduction in accuracy. Slicing further increased speed by 18 percent for addresses and 43 percent for purchases, with no reduction in accuracy.
Deleting only 0.001 percent might not seem like much, but, Papernot says, it’s orders of magnitude more than the amount requested of services like Google search, according to publicly released figures. And an 18 percent speedup might not seem dramatic, but for giant models, that improvement can save lots of time and money. Further, in some cases you might know that certain data points are more likely to require forgetting—perhaps they belong to ethnic minorities or people with medical conditions, who might be more concerned about privacy violations. Concentrating these points in certain shards or slices can make deletion even more efficient. Papernot says they’re looking at ways to use knowledge of a dataset to better tailor SISA.
Certain AI methods aim to anonymize records, but there are reasons one might want AI to forget individual data points besides privacy, Guan says. Some people might not want to contribute to the profits of a disliked company—at least without profiting from their own data themselves. Or scientists might discover problems with data points post-training. (For instance, hackers can “poison” a dataset by inserting false records.) In both cases, efficient data deletion would be valuable.
“We certainly don’t have a full solution,” Guan says. “But we thought it would be very useful to define the problem. Hopefully people can start designing algorithms with data protection in mind.”
THE INSTITUTE IEEE DataPort made its debut to the public last year, and to date more than 200,000 people have used the Web-based platform, uploading more than 1,000 data sets. Developed and supported by IEEE, the product allows researchers to store, share, access, and manage their research data sets in a single trusted location.
The portal accepts both standard and open-access data sets in many formats. Each data set uploaded may be as large as 2 terabytes. The data is stored on the Amazon Web Services cloud and can be downloaded or retrieved at any time. There is currently no charge to upload data sets.
IEEE DataPort is integrated with the Open Researcher and Contributor ID (ORCID). An ORCID identifier distinguishes you from other researchers, and data set owners have the option to enter their ORCID identifier with their data sets to ensure they are included on their ORCID asset list.
In addition, digital object identifiers (DOIs) are automatically assigned to each data set.
Uploaded data sets that are open access are by definition freely accessible at no cost to all users. Data uploaded as a standard data set is also free to access by subscribers. According to the terms and conditions of IEEE DataPort, those who upload a data set automatically grant a Creative Commons license to their data set so others may access and use it.
Data sets must be cited if used; to support this, data owners can generate citations once they’ve uploaded their data set.
IEEE DataPort also stores related documentation such as scripts and visualizations. Data sets are visible to all users, while related documentation may be accessed only by those with an IEEE DataPort account.
Data sets can be linked to articles previously published by IEEE, and the platform is integrated into the article submission process for more than 91 IEEE journals and magazines.
Users can send an email message directly through the platform to other data set owners and provide them with feedback, ask questions, and request collaboration. They also have the ability to share their data sets and others’ through social media platforms.
IEEE DataPort was created in response to the demand by IEEE members to address the growing data needs of the global technical community.
“IEEE DataPort is now widely used by researchers around the world,” says IEEE Fellow K.J. Ray Liu, 2019 vice president, IEEE Technical Activities, who is one of the platform’s originators. “IEEE DataPort can bring global exposure to your research efforts. It supports research reproducibility and serves as a valuable resource for additional researchers.”
Senior Member David Belanger, the IEEE volunteer lead for IEEE DataPort, adds that “all major IEEE organizational units came together to develop and support the platform.
“We are extremely pleased to see the rapid usage growth,” Belanger says. “User feedback clearly indicates IEEE is meeting their data needs.”
Researchers at organizations around the world are using IEEE DataPort to collaborate and advance their work.
Rabindra Lamsal, a graduate research scholar at Jawaharlal Nehru University, in New Delhi, is developing a disaster response system that can classify crisis-related tweets into categories such as community needs, number of deaths, and property damage.
A centralized platform where standard data sets are easy to access was something he needed, Lamsal says. He was able to access machine learning data sets that helped him develop his own research, he says, and by getting in contact with other data set owners he was able to take full advantage of the platform.
“The large data storage capacity is impressive, and it’s extremely beneficial for data set users to connect directly with data owners to make specific inquiries,” he says. “Because the data sets have DOIs, they can be easily cited in future research.”
Ana-Cosmina Popescu, a graduate researcher at the University Politehnica of Bucharest, Romania, is using the platform to store data from her research on using machine learning for recognizing human activities, such as falling, sitting, standing, and writing, based on merging information from all channels of a 3D video.
“IEEE DataPort offered an affordable and stable storage method,” Popescu says. “It also provided me visibility among other researchers and engineers and let me browse through existing activity recognition and machine learning data sets.”
The platform helps her meet her goals, adds Popescu, whose research includes filming an RGB-D (red, green, blue, and depth) human activity recognition and machine learning data set, with the purpose of sharing it with the computer-vision research community.
Postdoctoral researcher Manjunath Matam at the University of Central Florida, in Orlando, is using IEEE DataPort for solar photovoltaic system work. His research on identifying corrupt data, such as nonrelevant values and substandard measurements in grid-tied photovoltaic systems, is quickly reaching a broader audience.
“My two data sets together have reached more than 1,000 people in the past two months, and I’ve received direct feedback from some of them,” Matam says. “Having 2 terabytes of free storage is a great feature.”
IEEE DataPort has an option for users to host a data competition, a time-limited challenge in which a data set owner can invite members of the global technical community to provide specific analyses or make predictions based on the available files.
Participation in the competitions is managed by the initiator, and can be open to all or limited to specific participants.
Visit the IEEE DataPort website to explore data sets and house your own research data. Use promo code DATAPORT1 at checkout to get a free subscription.
Melissa Handa is the senior program manager for IEEE DataPort.
At CES 2017, I got my mind blown by a giant mystery box from a company called AxonVR (since renamed HaptX) that was able to generate astonishingly convincing tactile sensations, like tiny animals running around on my palm in VR. An update in late 2017 traded the giant mystery box (and the ability to reproduce heat and cold) for a wearable glove with high-resolution microfluidic haptics embedded inside it. By itself, the HaptX system is constrained to virtual reality, but when combined with a pair of Universal Robots UR10 arms, Shadow dexterous robotic hands, and SynTouch tactile sensors, you end up with a system that can reproduce physical reality instead.
The demo at CES is pretty much the same thing that you may have seen video of Jeff Bezos trying at Amazon’s re:MARS conference. The heart of the system is the pair of haptic gloves, which are equipped with spatial position and orientation sensors as well as finger location sensors. The movements that you make with your hands and fingers are mapped to the Shadow hands, while the UR10 arms try to match the relative position of the hands in space to your own. Going the other way, there’s a more or less 1-to-1 mapping between what the robot hands feel and the forces that are transmitted into the fingers of the gloves.
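That two-way mapping can be pictured with a small sketch. To be clear, this is not HaptX or Shadow code; the function names, the relative-offset mapping, and the force limit are all hypothetical, assuming the simplest version of what the article describes:

```python
def map_pose_to_robot(glove_pos, glove_origin, robot_origin):
    """Map the glove's position, relative to where the operator started,
    onto the robot arm's workspace (a simple relative-offset mapping)."""
    return tuple(r + (g - g0)
                 for g, g0, r in zip(glove_pos, glove_origin, robot_origin))

def map_force_to_glove(sensed_force, gain=1.0, limit=5.0):
    """Reflect a sensed fingertip force back into the glove, clamped so
    the haptic actuators are never overdriven."""
    return max(-limit, min(limit, gain * sensed_force))

# Operator moves 10 cm along x; the robot hand makes the same relative move.
robot_target = map_pose_to_robot((0.1, 0.0, 0.0),
                                 (0.0, 0.0, 0.0),
                                 (1.0, 1.0, 1.0))
```

A real system runs this loop hundreds of times per second in both directions, which is what makes blind manipulation, like the reactor-panel task later in the article, possible.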
It’s not a perfect system quite yet—sensors get occluded or otherwise out of whack on occasion, and you have to use a foot pedal as a sort of pause button on the robots while you reposition your limbs in a way that’s easier for the system to interpret. And the feel of the force transmission takes some getting used to. I want to say that it could be more finely calibrated, but much of that feeling is likely on my end, since we’re told that the system gets much easier to control with practice.
Even as a brand new user, it was surprising how capable my remote controlled hands were. I had no trouble grabbing plastic cups and transferring a ball between them, although I had to take care not to accidentally crush the cups (which would trap the ball inside). At first, it was easy to consider the force feedback as more of a gimmick, but once I started to relax and pay attention to it, it provided useful information that made me more effective at the task I was working on.
After playing around with things a bit more (and perhaps proving myself not to be totally incompetent), I was given the second most challenging scenario—a simple mockup of a control panel used to shut down a nuclear reactor. I had to turn a valve, flip some switches, twist a knob, and then push a button, all of which required a variety of different grasps, motions, and forces. It was a bit fiddly, but I got it all, and what I found most impressive was that I was able to manipulate things even when I couldn’t see them—in this case, because one of the arms was blocking my view. I’m not sure that would have been possible without the integrated haptic system.
The news from CES is that the three companies involved in this project (Shadow Robot Company, HaptX, and Tangible Research) have formed a sort of consortium-thing called Converge Robotics Group. Basically, the idea is to create a framework under which the tactile telerobot can be further developed and sold, because otherwise, it’s not at all clear who you’d even throw money at if you wanted to buy one.
Speaking of buying one, this system is “now available for purchase by early access customers.” As for what it might cost, well… It’ll be a lot. There isn’t a specific number attached to the system yet, but with two UR10 arms and a pair of Shadow hands, we’re looking at low six figures just in that portion of the hardware. Add in the HaptX gloves and whatever margin you need to keep your engineers fed, and it’s safe to say that this isn’t going to end up in your living room in the near future, no matter how cool that would be.
Engineers at Purdue University and at Georgia Tech have constructed the first devices from a new kind of two-dimensional material that combines memory-retaining properties and semiconductor properties. The engineers used a newly discovered ferroelectric semiconductor, alpha indium selenide, in two applications: as the basis of a type of transistor that stores memory as the amount of amplification it produces; and in a two-terminal device that could act as a component in future brain-inspired computers. The latter device was unveiled last month at the IEEE International Electron Devices Meeting in San Francisco.
Ferroelectric materials become polarized in an electric field and retain that polarization even after the field has been removed. Ferroelectric RAM cells in commercial memory chips use the former ability to store data in a capacitor-like structure. Recently, researchers have been trying to coax more tricks from these ferroelectric materials by bringing them into the transistor structure itself or by building other types of devices from them.
In particular, they’ve been embedding ferroelectric materials into a transistor’s gate dielectric, the thin layer that separates the electrode responsible for turning the transistor on and off from the channel through which current flows. Researchers have also been seeking a ferroelectric equivalent of memristors, or resistive RAM: two-terminal devices that store data as resistance. Such devices, called ferroelectric tunnel junctions, are particularly attractive because they could be made into a very dense memory configuration called a cross-bar array. Many researchers working on neuromorphic and low-power AI chips use memristors to act as the neural synapses in their networks. But so far, ferroelectric tunnel junction memories have proved difficult to build.
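The appeal of a cross-bar array is easy to see in miniature: with a resistive device at every row-column crossing, the array performs a vector-matrix multiply in a single step, which is exactly the operation a neural-network layer needs. A hedged sketch of the idea, in plain Python with idealized devices and arbitrary units:

```python
def crossbar_output(voltages, conductances):
    """Column currents of an idealized cross-bar array.

    Each device's conductance G acts as a stored weight. Ohm's law gives
    I = G * V per device, and Kirchhoff's current law sums the currents
    flowing into each column: I_j = sum_i V_i * G[i][j].
    """
    n_cols = len(conductances[0])
    return [sum(v * row[j] for v, row in zip(voltages, conductances))
            for j in range(n_cols)]

# Two input rows, two output columns; conductances in arbitrary units.
G = [[1.0, 0.5],
     [0.0, 2.0]]
currents = crossbar_output([1.0, 2.0], G)
```

The physics does the multiply-accumulate "for free," which is why a device that can both conduct usefully and hold its state, the combination Ye's group is after, matters so much.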
“It’s very difficult to do,” says IEEE Fellow Peide Ye, who led the research at Purdue University. Because traditional ferroelectric materials are insulators, when the device is scaled down, there’s too little current passing through, explains Ye. When researchers try to solve that problem by making the ferroelectric layer very thin, the layer loses its ferroelectric properties.
Instead, Ye’s group sought to solve the conductance problem by using a new ferroelectric material—alpha indium selenide—that acts as a semiconductor instead of an insulator. Under the influence of an electric field, the material undergoes a structural change that holds the polarization. Even better, the material remains ferroelectric even as a single layer that is only about a nanometer thick. “This material is very unique,” says Ye.
Ye’s group made both transistors and memristor-like devices using the semiconductor. The memristor-like device, which they called a ferroelectric-semiconductor junction (FSJ), is just the semiconductor sandwiched between two conductors. This simple configuration could be formed into a dense cross-bar array and potentially shrunk down so that each device is only about 10 nanometers across, says Ye.
Proving the ability to scale the device down is the next goal for the research, along with characterizing how quickly the devices can switch, explains Ye. Further on, his team will look at applications for the FSJ in neuromorphic chips, where researchers have been trying a variety of new devices in the search for the perfect artificial neural synapse.
A professor finishes a lecture and checks his computer. A software program shows that most students lost interest about 30 minutes into the lecture—around the time he went on a tangent. The professor makes a note to stop going on tangents.
The technology for this fictional classroom scene doesn’t yet exist, but scientists are working toward making it a reality. In a paper published this month in IEEE Transactions on Visualization and Computer Graphics, researchers described an artificial intelligence (AI) system that analyzes students’ emotions based on video recordings of the students’ facial expressions.
The system “provides teachers with a quick and convenient measure of the students’ engagement level in a class,” says Huamin Qu, a computer scientist at the Hong Kong University of Science and Technology, who co-authored the paper. “Knowing whether the lectures are too hard and when students get bored can help improve teaching.”
Qu and his colleagues tested their AI system in two classrooms consisting of toddlers in Japan and university students in Hong Kong. The teachers for each class received a readout of the emotions of individual students and the collective emotions of the group as a whole during their lectures.
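The paper’s own pipeline is more sophisticated, but the kind of collective readout described here can be illustrated with a toy aggregation. The label names, the majority-vote rule, and the notion of which emotions count as “engaged” below are assumptions for illustration, not the authors’ method:

```python
from collections import Counter

def collective_emotion(per_student_labels):
    """Return the most common emotion label across students
    for one time window of the lecture."""
    return Counter(per_student_labels).most_common(1)[0][0]

def engagement_fraction(per_student_labels,
                        engaged=frozenset({"happy", "focused"})):
    """Fraction of students whose label counts as 'engaged'
    (the choice of which labels qualify is a modeling decision)."""
    hits = sum(1 for label in per_student_labels if label in engaged)
    return hits / len(per_student_labels)

# One time window: four students, one label each from the classifier.
window = ["happy", "neutral", "neutral", "focused"]
```

Plotted over the length of a lecture, a curve of this fraction is what would let a teacher spot the moment, 30 minutes in, where attention fell off.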
The visual analytics system did a good job of detecting obvious emotions such as happiness. But the model often incorrectly reported anger or sadness when students were actually just focused on the lectures. (The frown that often washes over our faces when we listen closely can easily be confused with anger, even by humans, when taken out of context.) “To address this issue, we need to add new emotion categories, relabel our data and retrain the model,” says Qu.
The focus frown and other confusing facial expressions are a challenge for just about everyone working in the field of emotion recognition, says Richard Tong, chief architect at Squirrel AI Learning, who was not involved in the paper. “We have had similar problems in our own experiments,” he says, referring to the multimodal behavioral analysis algorithms his company is developing with its partners.
Lots of groups are working on some kind of behavior or emotion recognition technology for the classroom, says Tong, who is also the chair of the IEEE Learning Technology Standards Committee. But he says this kind of analysis is of limited use for teachers in traditional classroom settings.
“Teachers are overwhelmed already, especially in the public schools,” Tong says. “It’s very hard for them to read analytical reports on individual students because that’s not what they’re trained for and they don’t have time.”
Instead, Tong envisions using emotion recognition and other means of behavioral analysis for the development of AI tutors. These one-on-one computer-based teachers will be trained to recognize what motivates a student and spot when a student is losing interest, based on their physical or behavioral cues. The AI can then adjust its teaching strategy accordingly.
In this world of AI tutors, Tong says he envisions human teachers taking a role as head coach over the AI agents, which would work one-on-one with students. “But that requires a much more capable AI” than what we have now, he says.
Putting video cameras in the classroom also creates privacy issues. “The disclosure of the analysis of an individual’s emotion in a classroom may have unexpected consequences and can cause harm to students,” says Qu.
And it could backfire on educators. “It may distract students and teachers and could be harmful to learning, since students and teachers may feel like someone could be watching them and might not freely express their opinions,” Qu says. “The privacy issue is important for everyone, and needs to be carefully considered.”
Bend, roll, twist, scrunch, fold, flex. These are terms we might use to describe a lithe gymnast doing a complex floor routine. But batteries?
Yet these are precisely the words the company Jenax in South Korea wants you to use when talking about its batteries. The Busan-based firm has spent the past few years developing J.Flex, an advanced lithium-ion battery that is ultra-thin, flexible, and rechargeable.
With the arrival of so many wearable gadgets, phones with flexible displays, and other portable gizmos, “we’re now interacting with machines on a different level from what we did before,” says EJ Shin, head of strategic planning at Jenax. “What we’re doing at Jenax is putting batteries into locations where they couldn’t be before.” Her firm demonstrated some of those new possibilities last week at CES 2020 in Las Vegas.
The devices shown by Jenax included a sensor-lined football helmet developed by UK-based firm HP1 Technologies to measure pressure and force of impact; a medical sensor patch designed in France that will be embedded in clothing to monitor a wearer’s heart rate; and wearable power banks in the form of belts and bracelets for patients who must continuously be hooked up to medical devices.
“You don’t want to carry a big, bulky battery on your body all the time. It’s heavy, uncomfortable, and sticks out from your clothes,” says Shin. “That’s when you need very thin, flexible batteries.”
Such batteries may one day power more than just wearables, says Nicholas Kotov, a professor of chemical engineering at the University of Michigan. He points to unmanned aerial vehicles as one example—a flexible battery installed in the wings or landing gear of such a device could free up space in the body for other components.
Apart from Jenax, companies including Panasonic, Samsung, and STMicroelectronics are working to develop flexible batteries of their own. But Jenax claims to have “a higher degree of flexibility” compared with its competitors.
To make batteries flexible, companies play around with the components of a battery cell, namely the cathode, anode, electrolyte, and membrane separator. In the case of Jenax, which has more than 100 patents protecting its battery technology, Shin says the secret to its flexibility lies in “a combination of materials, polymer electrolyte, and the know-how developed over the years.” J.Flex is made from graphite and lithium cobalt oxide, but its exact composition and architecture remain a secret.
Jenax began as a metal manufacturing company in 1991, but later diversified into batteries following the 2008 financial crisis.
Today, Jenax customizes batteries for its clients: B2B companies in consumer electronics, logistics, medical, health, and other sectors across the United States, Europe, and Japan. (Shin declined to give specific names, citing non-disclosure agreements.)
J.Flex can be as thin as 0.5 millimeters (suitable for sensors), and as small as 20 by 20 millimeters (mm) or as large as 200 by 200 mm. Its operating voltage is between 3 and 4.25 volts. Depending on the size, battery capacity varies from 10 milliampere-hours to 5 ampere-hours, with close to 90 percent of this capacity remaining after 1,000 charge-discharge cycles. Each charge typically takes an hour. J.Flex’s battery life depends on how it’s used, Shin says: a single charge can last for a month in a sensor but wouldn’t last that long if the battery were powering a display.
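Those figures support some quick back-of-the-envelope arithmetic. The sketch below assumes a 3.6-volt nominal voltage, a typical midpoint within the stated 3-to-4.25-volt operating range; Jenax doesn’t quote a nominal figure:

```python
def nominal_energy_wh(capacity_ah, nominal_v=3.6):
    """Approximate stored energy: capacity (Ah) times nominal voltage (V)."""
    return capacity_ah * nominal_v

def capacity_after_cycles(capacity_ah, retention=0.9):
    """Remaining capacity after ~1,000 charge-discharge cycles,
    using the 'close to 90 percent' retention figure."""
    return capacity_ah * retention
```

By this estimate, the largest 5-ampere-hour cell stores roughly 18 watt-hours and would still hold about 4.5 ampere-hours after 1,000 cycles.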
“Overall, I think it’s a very good battery,” says Michigan’s Kotov, who is developing electrolytes for flexible batteries made from lithium and zinc. He is particularly impressed with how safe J.Flex seems.
Batteries can explode when their electrolytes—typically organic fluids that facilitate quick ion movement but are also incredibly flammable—leak out, or when the cathode and anode come close together, as might be the case when you bend flexible batteries. “The key [to safety] is to find good electrolytes or good ion-conducting membranes,” says Kotov.
To that end, Jenax has recently developed a special semi-solid electrolyte. “We went to one of the biggest causes of battery explosions and made it non-flammable,” says Shin of the new gel polymer, which will be incorporated into all J.Flex batteries this year. Jenax looks to set up its first production line for that battery in South Korea by the end of 2020, a move that will make it cost-competitive with traditional lithium-ion batteries, says Shin.
“We’re changing the paradigm of batteries,” she says. “They’re no longer a component you buy at the end of your product design. Instead, batteries are becoming one of the critical enabling technologies for the final product.”
At CES, among the bigger, brighter TVs, mock smart homes that seem to know more about you than you do, and all the Alexa- and Google-Assistant-enabled devices eager to talk to you, are a few products that defy categorization. Some of these new products grabbed my attention because they involve truly innovative technology. Some are just clever and cheap enough to catch on, and some are a little too wild to find a big market—but it’s still impressive when a developer realizes an extreme dream.
So, as CES 2020 retreats into history, here is my top 10 list of CES gadgets that at least got my attention, if not a spot on my shopping list. There is no way to rank these in order of importance, so I’ll list them roughly by size, from small to big. (The largest products demonstrated at CES, like John Deere’s AI-powered crop sprayer, Brunswick’s futuristic speedboat, or Hyundai’s flying taxi developed in partnership with Uber Elevate can’t be called gadgets, so didn’t make this roundup.)
I’ve been a fan of Reliefband’s line of motion-sickness-prevention wearables for a few years. The new Reliefband Sport, the company says, solves a few of the issues with previous products. For one, it’s waterproof; the gadget is particularly useful for motion sickness sufferers traveling on boats. Too many people wearing the company’s existing products forgot to take them off before jumping in the water for a swim—and then were stuck on a moving boat with motion sickness. Reliefband Sport also includes an automatic shutoff, which fixes one of my complaints—too many times, I thought I’d turned my Reliefband off to put it away, only to find out the next time I needed it that I had left it on and couldn’t use it without recharging. Finally, this latest Reliefband can share a band with an Apple Watch, reducing wrist clutter. $150
OK, nobody needs a rechargeable, waterproof glowstick. But glowsticks are fun on a summer evening, for both kids and adults, and the one-night-only chemical versions do seem wasteful. Nite Ize representatives envision its glowstick floating in a pool or acting as a flashy cocktail stirrer. I think I’d just toss a few on the picnic table as an easy alternative to candles for outdoor dining. $12
I think I saw at least a dozen companies offering smart, Internet-connected locks designed to replace your front door’s key-turned deadbolt lock. And Internet-connected door locks may certainly become the norm someday. Taking apart your front door’s hardware to install a smart lock takes a real commitment to the new technology, however. That’s why I liked Tapplock’s gadget: a smart padlock involves far less commitment—you could try it on a storage shed or on a bicycle. The gadget will open with a fingerprint (it stores up to 500 prints for the heavy-duty Tapplock One, 100 for the Tapplock Lite), a Bluetooth signal (that can be shared for time-restricted use), or a Morse code pattern communicated via a button or the lock shank. $99/$59
Intel is getting ready to ship an inexpensive (relatively) consumer-grade, small, lidar camera. The company expects developers to jump on the chance to build products around this technology. “We think this is an enablement technology that will open a lot of applications,” says Sagi Ben Moshe, general manager of Intel’s emerging technology group. At CES, the company demonstrated an application that uses the lidar to quickly measure boxes for shipping—something Intel thinks will make the lidar a must-have gadget for neighborhood shipping storefronts—as well as applications involving joint tracking, full body scanning, and small robot navigation. The gadget will start shipping to developers in April. $349
AO Air’s wearable goes in front of the wearer’s nose and mouth, but doesn’t seal tightly against the skin. Instead, it protects the user from hazards in the air by pulling in dirty air from behind the wearer, filtering it, and blowing it in front, as positive pressure keeps unfiltered air away. I’ve worn N95 face masks during California’s fires, and fit is definitely an issue. It’s not always clear they are doing much good, and they aren’t pleasant to breathe through. I’m not sure AO Air’s alternative will be more comfortable—it’s a little heavy—but I bet it’s more effective, and I was happy to see a tech company taking on the challenge of improving this wearable; the need for air-filtering face masks is unfortunately growing. $350
RayShaper has designed modular video cameras that can be snapped together into arrays. The company says its secret sauce is its algorithms that enable the decoding of multiple images into a single high-quality image in real time at video frame rates. The system can, for example, turn foggy, out-of-focus objects in the distance into sharp, high-resolution pictures, RayShaper representatives say. The company is aiming its technology at professional photographers covering sporting and other unscripted events, starting with ski races this year. Pricing is still in flux, but a version with three to four modules will likely cost around $50,000.
PicoBrew, the manufacturer of countertop, computer-controlled, home brewing machines, introduced an automatic distilling system as an accessory to its brewing systems. Company representatives indicated that, in addition to flavored spirits, the system can make bitters, essential oils, and cannabis derivatives. $349 (requires a PicoBrew system, $399 and up)
Sunflower Labs takes a complex and costly approach to home security: smart motion and vibration sensors distributed around a property call in an autonomous drone when they sense something amiss. The drone sends a live video stream to a smartphone, but, company representatives indicated, the real deterrent is the arrival of the drone itself; bad guys are unlikely to stick around to see what happens next. The drone can fly for about half an hour, but is designed to head back to its self-charging station after about 15 minutes. It seems like a lot of technology to throw at a not-so-complex problem, but you have to admire the company’s ambition. $10,000
No chemical coolant. That’s the twist in OxiCool’s HomeCool room air conditioner. The system uses water and a clay filter that, the company says, has nanopores sized for water vapor molecules. When the filter absorbs water vapor, the remaining water in the sealed chamber boils, pulling in heat from the room and lowering the room’s temperature. A gas heater drives the water out of the filter to reset the cycle, and the heat it generates vents to the outdoors. The company says its system is vastly more environmentally friendly than those that use coolants, and it uses natural gas along with 10 percent of the electricity of a standard room air conditioner, reducing its overall operating cost by 20 to 30 percent. Pricing has not yet been announced.
Manta5 thinks it’s time to take electric bikes into deep water. The company’s hydrofoiling ebike is already shipping in New Zealand and comes to the United States in a few months. I don’t quite get the appeal, but for someone who has everything, well, it would be a lot less annoying to your fellow beachgoers than a noisy jet ski. $7,500
In 2018, the U.S. Defense Advanced Research Projects Agency (DARPA) announced the multi-million-dollar DARPA Launch Challenge to promote rapid access to space within days rather than years. To earn prizes totaling more than US $12 million, rocket companies would have to launch unfamiliar satellites from two sites in quick succession.
“The launch environment of tomorrow will more closely resemble that of airline operations—with frequent launches from a myriad of locations worldwide,” said Todd Master, DARPA’s program manager for the competition at the time. The U.S. military relies on space-based systems for much of its navigation and surveillance needs, and wants a way to quickly replace damaged or destroyed satellites in the future. At the moment, it takes at least three years to build, test, and launch spacecraft.
To ensure that DARPA was incentivizing the flexible, responsive launch technologies the U.S. military needs, competitors would receive information about the site of their next launch fewer than 30 days prior to each flight, DARPA’s rules stated, and only learn their actual payloads two weeks out.
While 18 companies impressed DARPA enough to pre-qualify, just three startups overcame the Challenge’s first hurdle by securing a launch license from the U.S. Federal Aviation Administration.
Vector Launch had already flown prototype rockets on sub-orbital missions, and Virgin Orbit was developing an air-launched rocket that would be carried aloft by a modified Virgin Atlantic 747. The third qualifier was a space startup that asked to remain anonymous. In April 2019, each company received $400,000 to help them advance to the launch phase.
It was too little, too late for Vector, which in August replaced its founding CEO, Jim Cantrell, and suspended operations amid financial difficulties. In October, Virgin Orbit also pulled out, writing: “After comparing DARPA’s requested timeline with our commitments to our commercial and government customers, we have elected to withdraw from the competition.”
That left the hopes of the U.S. military resting on a mysterious space company still operating in stealth mode.
According to a recent filing at the FCC, the last startup standing in the Launch Challenge is actually Astra Space, a secretive Bay Area company that has won dozens of U.S. government contracts from NASA and the Pentagon. Astra Space does not even have a website, and calls itself a “stealth space company” in job listings.
Last year, Astra Space launched its first two sub-orbital rocket missions from the remote Pacific Spaceport Complex-Alaska (PSCA) on Kodiak Island. Neither flight was a complete success. The first resulted in minor damage to a rocket processing facility, and debris from both launches caused environmental damage that required hundreds of tons of soil to be removed for remediation.
In December, Astra Space was granted permission by the FCC for its first orbital rocket flight at PSCA. This will involve launching a small experimental satellite called GEARRS3, developed in part by the Air Force Research Laboratory.
Quick on the heels of that, Astra Space requested permission this week for another orbital launch, from NASA’s Wallops Flight Facility on the eastern shore of Virginia. In its application, Astra Space noted: “The operation is to launch a small satellite into low earth orbit. This will be a launch… in support of the DARPA Challenge.”
Under Launch Challenge rules, Astra Space should only have been informed of the launch site at most 30 days before making its attempt. Astra Space’s FCC paperwork, however, states an earliest launch date of 1 March 2020, giving Astra Space at least 55 days to prepare. The paperwork even requests permission to launch as late as the start of September.
Attempting to compress multi-year launch preparations into weeks “was always a bridge too far,” says Cantrell, ex-CEO of Vector. “From a purely physical point of view, we could string a rocket up in our backyard and launch it, but the reality of the regulatory environment forces more of a six-to-12-month decision. We gave [DARPA] the feedback, gently, that it was not realistic.”
“We have made some amendments to timelines to allow compliance with national policy and regulatory requirements, while still meeting our goals of being responsive,” says Master. Although Astra Space will know where it will fly from, it will only receive its final trajectory 30 days prior to launch, and only receive the actual spacecraft a few days before.
Meanwhile, Astra Space’s original FAA launch license has expired since the company qualified for the Launch Challenge. A fresh license [PDF], issued this week, allows Astra Space to use a new version of its rocket for its first orbital attempt—but only from PSCA in Alaska. It would need a modification of this license for its DARPA flight at Wallops, potentially adding another delay.
Astra Space did not immediately reply to requests for comment.
Despite the Launch Challenge loosening its rules and losing almost all its participants, Cantrell remains supportive of DARPA’s efforts. “They’re really pushing the boundaries and trying to break the rules of how things have always been done,” he says.
Video Friday is your weekly selection of awesome robotics videos, collected by your Automaton bloggers. We’ll also be posting a weekly calendar of upcoming robotics events for the next few months; here’s what we have so far (send us your events!):
Let us know if you have suggestions for next week, and enjoy today’s videos.
Apparently the whole “little home robot with a cute personality will seamlessly improve your life” thing is still going on, at least as far as Samsung is concerned.
Predictably, there’s no information on how much Ballie costs, when it might be available, what’s inside of it, and whether it can survive a shaggy carpet.
[ Samsung ]
Because of the nightmarish Wi-Fi environment in the convention center, Digit is steered manually, but the box interaction is autonomous.
[ Agility Robotics ]
Stefano Mintchev from EPFL and his startup Foldaway Haptics are the folks behind the 33 individually actuated “bionic flaps” on the new Mercedes-Benz Vision AVTR concept car that was at CES this week.
The underlying technology, which is based on origami structures, can be used in a variety of other applications, like this robotic billboard:
The Sarcos Guardian XO alpha version is looking way more polished than the pre-alpha prototype that we saw late last year.
And Sarcos tells us that it’s now even more efficient, although despite my begging, they won’t tell us exactly how they’ve managed that.
[ Sarcos ]
It is our belief that in 5 years’ time, not one day will go by without most of us interacting with a robot. Reachy is the only humanoid service robot that is open source and can manipulate objects. He mimics human expressions and body language, with a cute free-moving head and antennas as well as bio-inspired arms. Reachy is the optimum platform to create real-world interactive & service applications right away.
[ Pollen Robotics ]
Ritsumeikan Humanoid System Laboratory is working on a promising hydraulic humanoid:
[ Ritsumeikan HSL ]
With the steep rise of automation and robotics across industries, the requirements for robotic grippers are becoming increasingly demanding. By using acoustic levitation forces, No-Touch Robotics develops damage- and contamination-free contactless robotic grippers for handling highly fragile objects. Such grippers can be applied beneficially in the field of micro assembly and the semiconductor industry, resulting in increased production yield, reduced waste, and high production quality by completely eliminating damage inflicted during handling.
You can also experience the magic by building your own acoustic levitator.
[ ETHZ ]
Preview of the Unitree A1. Maximum torque of each joint: 35.5 N·m. Weight (with battery): 12 kg. Price: less than US $10,000.
Under $10k? I’m going to start saving up!
[ Unitree ]
A team from the Micro Aerial Vehicle Lab (MAVLab) of TU Delft has won the 2019 Artificial Intelligence Robotic Racing (AIRR) Circuit, with a final breathtaking victory in the World Championship Race held in Austin, Texas, last December. The team takes home the $1 million grand prize, sponsored by Lockheed Martin, for creating the fastest and most reliable self-piloting aircraft this season.
[ MAVLab ]
After 10 years and 57 robots, hinamitetu brings you a few more.
[ Hinamitetu ]
Vision 60 legged robot managing unstructured terrain without vision or force sensors in its legs.
[ Ghost Robotics ]
In 2019, GRVC had one of the best years in its history, with the latest developments of the GRIFFIN ERC Advanced Grant, the kick-off meeting of the H2020 AERIAL-CORE project, and other projects.
[ GRVC ]
The official wrap-up of ABU Robocon 2019, held in Ulaanbaatar, Mongolia.
[ RoboCon 2019 ]
Roboy had a busy 2019:
[ Roboy ]
Very interesting talk from IHMC’s Jerry Pratt, at the Workshop on Teleoperation of Humanoid Robots at Humanoids 2019.
[ Workshop ]