
Cutting Carbon Emissions Is Harder Than the Glasgow Climate Pact Thinks

By Vaclav Smil


Three months ago the Glasgow Climate Pact (COP26) declared that by 2030 the world must cut total carbon dioxide emissions by 50 percent relative to the 2010 level, which was 30.4 billion tonnes. This would bring annual emissions to less than 20 billion tonnes, a level last seen more than 30 years ago.

What are the chances of that? Let’s look at the arithmetic.




First, assume that all energy-consuming sectors share the cuts equally and that global energy demand stays constant (instead of increasing by 2 percent a year, as it did in the prepandemic decade). Today our best commercial batteries have energy densities of about 300 watt-hours per kilogram, less than 3 percent as much as kerosene; among some 25,000 planes in the global commercial fleet, there is not a single high-capacity electric or hydrogen-powered aircraft. A 50 percent cut in kerosene-fueled flying would mean that by 2030 we would have to build about 12,000 new airplanes with capacities ranging from 100 people (the Embraer 190) to 400 people (the Boeing 777-300ER), all powered by as-yet-nonexistent superbatteries or equally nonexistent hydrogen systems. That’s what we’d need to fly about 2.2 billion passengers a year, for a total of about 4.3 trillion carbon-free passenger-kilometers. What are the chances of that?

In 2019 the world produced 1.28 billion tonnes of pig (cast) iron in blast furnaces fueled with coke made from metallurgical coal. That pig iron was charged into basic oxygen furnaces to make about 72 percent of the world’s steel (the rest comes mostly from electric arc furnaces melting scrap metal). Today there is not a single commercial steel-making plant that reduces iron ores by hydrogen. Moreover, nearly all hydrogen is now produced by the reforming of natural gas, and zero-carbon iron would require mass-scale electrolysis of water powered by renewable energies, something we still haven't got. A 50 percent cut of today’s carbon dependence would mean that by 2030 we would have to smelt more than 640 million tonnes of iron—more than the annual output of all of the blast furnaces outside China—by using green hydrogen instead of coke. What are the chances of that?

Decarbonizing the global fleet of cars by 50 percent in nine years would require that we manufacture about 66 million EVs a year, more than the total global production of all cars in 2019.

In 2021 there were some 1.4 billion motor vehicles on the road, of which no more than 1 percent were electric. Even if the global road fleet were to stop growing, decarbonizing 50 percent of it by 2030 would require that we manufacture about 600 million new electric passenger vehicles in nine years—that’s about 66 million a year, more than the total global production of all cars in 2019. In addition, the electricity to run those cars would have to come from zero-carbon sources. What are the chances of that?
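Spelled out, the grade-school algebra behind that estimate is short; here is a minimal arithmetic sketch in Python, using only the figures quoted above:

```python
# Decarbonization arithmetic for the passenger-vehicle example above.
evs_needed = 600_000_000      # new EVs implied by electrifying half the fleet by 2030
years = 9                     # roughly 2022 through 2030
evs_per_year = evs_needed / years
print(f"EVs required per year: {evs_per_year / 1e6:.0f} million")
# ~67 million a year (the text rounds to about 66 million), which it notes
# exceeds total global car production in 2019.
```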

To set goals that correspond to available technical capabilities while taking into account reasonable advances in the production and adoption of non-carbon energy sources, we must start with grade-school algebra. What are the chances of that?

This article appears in the February 2022 print issue as “Decarbonization Algebra.”

3D-Printed OLEDs Enable DIY Screens Nearly Anywhere

By Payal Dhar


LCDs may be the mainstay of consumer displays, but when it comes to picture quality, including high contrast ratio, brighter colors, and wider viewing angles, OLEDs have the edge. These organic light-emitting diode displays take their name from the organic, carbon-based compounds they use to emit light and create colors. Because each pixel produces its own light, OLEDs require no backlighting. They are therefore more power efficient and can be fabricated into slimmer and more flexible displays.

Of course, there is a catch. OLED displays are expensive to manufacture, and traditional fabrication techniques need specialized set-ups. Researchers have been looking at 3D-printing solutions, but even these have had drawbacks, among them a lack of uniformity in the active (emitting) layers of the display.

Recent research from the University of Minnesota (UM) Twin Cities describes a “one-pot” 3D-printing platform for flexible OLEDs that overcomes some of the common printing problems and simplifies the manufacturing process. Essentially, the researchers combined all the critical steps for the production of the display—extrusion printing of the lower layers, spray printing of the active layers, and structural reconfiguration—into a single device, a custom-built table-top 3D printer.

“Anyone with the basic knowledge of 3D printing can [print] OLED displays... in homes that possess the proper inks and designs.”
—Ruitao Su, MIT

“[Our] printing platform consists of…a high-precision motion control module, an ink dispensing module that extrudes or sprays materials, an imaging system that assists the alignment of device layers, and an ink curing system,” says Ruitao Su, former Ph.D. student at the University, now a post-doctoral researcher at MIT’s Computational Design and Fabrication Group.

The result was a six-layer, 1.5-inch square flexible display, in which the electrodes, interconnects, insulation, and encapsulation were extrusion-printed, while the active layers were spray-printed using the same 3D printer at room temperature. The device had 64 pixels, each one emitting light. It was also flexible, and the emission remained stable over 2,000 bending cycles.

The major challenge in printing the active, or emitting, layers, Su says, is achieving a relatively uniform morphology on the 3D printer. He says his team solved the problem by generating homogeneous layers of the OLEDs with controllable thickness. Another issue involved creating stable, room-temperature cathode–polymer junctions. That one, Su says, they solved by developing a mechanical compression process “that simulates conventional metal forging but conducted on 3D printers.”

For Su and team, one of the considerations in coming up with a fabrication process for a flexible, fully 3D-printed OLED display was cost effectiveness. Traditional production processes require expensive microfabrication facilities that have to be housed in cleanrooms, he says, but “[in our prototype] the cost…is reduced in terms of the required facilities and specialized personnel.”

Apart from its potential in soft electronics and wearables, this “one-pot” methodology allows for other unique form factors beyond the typical 2D layout. “I envision the direct printing of OLED displays on non-conventional surfaces such as tables, cars, or even human bodies for ubiquitous information display,” says Su.

Such flexible displays could be packaged in an encapsulating material for a wide variety of applications as well. “The pixels can be conformally printed on curved surfaces to integrate with daily objects in the era of Internet of Things…. The OLED pixels can also be printed in 3D matrices so that the entire printed [device] functions as displays.” The group's print-your-own-display tech could even ultimately enable homemade holograms—though, he says, further innovations in the hardware would be necessary first.

The word “HELLO” captured while the text scrolled on the 8 × 8 OLED display. Credit: Ruitao Su/University of Minnesota

Even though their method was designed for small-batch, customized fabrication, Su says, “The point is that you don’t have to build a semiconducting factory in order to have your desired devices fabricated. Because anyone with the basic knowledge of 3D printing can operate the machine, the OLED displays theoretically can be printed in homes that possess the proper inks and designs.”

Working with flexible OLEDs brings specific challenges too. “[They] require pixels and conductive interconnects that maintain good performance under large mechanical deformations,” Su explains. “Therefore, we selected materials that maintained high electrical conductivities, such as silver-based inks for our electrodes. For the encapsulation, we used a common transparent and flexible polymer, PDMS, to coat the device on top.”

There is plenty of work yet to be done to improve this technology, of course. Better device efficiency and increased brightness are major challenges for 3D-printed semiconducting devices, Su adds, and that is where their next focus will be.

How Claude Shannon Helped Kick-start Machine Learning

By Rodney Brooks


Among the great engineers of the 20th century, who contributed the most to our 21st-century technologies? I say: Claude Shannon.

Shannon is best known for establishing the field of information theory. In a 1948 paper, one of the greatest in the history of engineering, he came up with a way of measuring the information content of a signal and calculating the maximum rate at which information could be reliably transmitted over any sort of communication channel. The article, titled “A Mathematical Theory of Communication,” describes the basis for all modern communications, including the wireless Internet on your smartphone and even an analog voice signal on a twisted-pair telephone landline. In 1966, the IEEE gave him its highest award, the Medal of Honor, for that work.


If information theory had been Shannon’s only accomplishment, it would have been enough to secure his place in the pantheon. But he did a lot more.

A decade before, while working on his master’s thesis at MIT, he invented the logic gate. At the time, electromagnetic relays—small devices that use magnetism to open and close electrical switches—were used to build circuits that routed telephone calls or controlled complex machines. However, there was no consistent theory on how to design or analyze such circuits. The way people thought about them was in terms of the relay coils being energized or not. Shannon showed that Boolean algebra could be used to move away from the relays themselves, into a more abstract understanding of the function of a circuit. He used this algebra of logic to analyze, and then synthesize, switching circuits and to prove that the overall circuit worked as desired. In his thesis he invented the AND, OR, and NOT logic gates. Logic gates are the building blocks of all digital circuits, upon which the entire edifice of computer science is based.

In 1950 Shannon published an article in Scientific American and also a research paper describing how to program a computer to play chess. He went into detail on how to design a program for an actual computer. He discussed how data structures would be represented in memory, estimated how many bits of memory would be needed for the program, and broke the program down into things he called subprograms. Today we would call these functions, or procedures. Some of his subprograms were to generate possible moves; some were to give heuristic appraisals of how good a position was.

While working on his master’s thesis at MIT, Shannon invented the logic gate.

Shannon did all this at a time when there were fewer than 10 computers in the world. And they were all being used for numerical calculations. He began his research paper by speculating on all sorts of things that computers might be programmed to do beyond numerical calculations, including designing relay and switching circuits, designing electronic filters for communications, translating between human languages, and making logical deductions. Computers do all these things today. He gave four reasons why he had chosen to work on chess first, and an important one was that people believed that playing chess required “thinking.” Therefore, he reasoned, it would be a great test case for whether computers could be made to think.

Shannon suggested it might be possible to improve his program by analyzing the games it had already played and adjusting the terms and coefficients in its heuristic evaluations of the strengths of board positions it had encountered. There were no computers readily available to Shannon at the time, so he couldn’t test his idea. But just five years later, in 1955, Arthur Samuel, an IBM engineer who had access to computers as they were being tested before being delivered to customers, was running a checkers-playing program that used Shannon's exact method to improve its play. And in 1959 Samuel published a paper about it with “machine learning” in the title—the very first time that phrase appeared in print.
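The tuning scheme Shannon proposed and Samuel later ran can be sketched in a few lines of modern Python. This is a hypothetical illustration of adjusting the coefficients of a linear evaluation heuristic from game outcomes, not Shannon's or Samuel's actual code; the feature names and learning rate are invented:

```python
# Sketch: nudge the coefficients of a linear board-evaluation heuristic toward
# the outcome of a finished game (hypothetical features and values).
def update_weights(weights, positions, outcome, lr=0.01):
    """weights: feature name -> coefficient
    positions: one feature dict per board position seen during the game
    outcome: +1 for a win, -1 for a loss, 0 for a draw"""
    for features in positions:
        score = sum(weights[f] * v for f, v in features.items())
        error = outcome - score                  # how far the evaluation was from the result
        for f, v in features.items():
            weights[f] += lr * error * v         # move each coefficient to shrink the error
    return weights

# Example with two crude features: material balance and mobility.
w = {"material": 0.5, "mobility": 0.1}
game = [{"material": 1.0, "mobility": 0.2}, {"material": 0.5, "mobility": -0.1}]
print(update_weights(w, game, outcome=+1))
```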

So, let’s recap: information theory, logic gates, non-numerical computer programming, data structures, and, arguably, machine learning. Claude Shannon didn’t bother predicting the future—he just went ahead and invented it, and even lived long enough to see the adoption of his ideas. Since his passing 20 years ago, we have not seen anyone like him. We probably never will again.

This article appears in the February 2022 print issue as “Claude Shannon’s Greatest Hits.”

Water Scarcity Concerns Drive Semiconductor Industry to Adopt New Technologies

By Dexter Johnson


In these days of seemingly never-ending chip shortages, more and greater varieties of semiconductors are in demand. Chip fabs around the world are now racing to catch up to the world's many microelectronic needs. And chip fabs need a lot of water to operate.

By some estimates, a large chip fab can use up to 10 million gallons of water a day, which is equivalent to the water consumption of roughly 300,000 households.

While semiconductor companies have long understood that water access is a key element of their business, over the past decade that awareness has become more acute. Back in 2015, a drought in Taiwan (where 11 of the 14 largest fabs in the world are located) led Taiwan Semiconductor Manufacturing Co. to open up its plants to inspection to demonstrate its water conservation efforts. Also in 2015, Intel made it known that it had reduced its water consumption by over 40 percent from its 2010 levels in response to the arid conditions at the sites where its plants are located.

Since that time, water recycling at semiconductor plants has continued to increase, according to Prakash Govindan, chief operating officer at Gradiant, a company that offers end-to-end water recycling technologies to a range of industries, including semiconductors.

“Conventional treatment of wastewater at semiconductor plants had recycled anywhere from 40 percent to 70 percent of water used in their processes,” explains Govindan. “Some manufacturers still only recycle 40 percent of the water they use.”

However, over the past two years Gradiant has been working with semiconductor plants to improve their water reuse so that they're able to recycle 98 percent of the water they use. So, instead of bringing in 10 million gallons of freshwater a day, a plant using these new recycling technologies needs to draw only about 200,000 gallons from outside to operate.
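That reduction follows directly from the recycle rate; a minimal check in Python, using the figures quoted above:

```python
# Freshwater makeup needed per day at a given recycle rate.
daily_use_gal = 10_000_000    # gallons of water a large fab runs through per day
recycle_rate = 0.98           # fraction recycled with the newer treatment train

makeup_gal = daily_use_gal * (1 - recycle_rate)
print(f"Freshwater drawn from outside: {makeup_gal:,.0f} gallons/day")  # 200,000
```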

The technology that Gradiant has developed is based around counterflow reverse osmosis (CFRO), which is an adaptation of a well-established reverse osmosis technique. Counterflow streams enable the technology to push water recovery to much higher levels than preexisting reverse osmosis techniques could.

While reverse osmosis techniques depend on high pressure that typically demands a lot of energy, Gradiant has developed a thermodynamic balancing technique that minimizes the driving force across the filtering membrane, therefore reducing the energy consumption for a given amount of water treated.

The water scarcity problem for Taiwan fabs has become even more acute in the past year due to new drought conditions. This has led the Taiwan fabs to adopt the latest water recycling technologies more rapidly than fabs in other geographic locations, with an eye toward fending off any interruptions to their production.

“There are three drivers for adopting more effective water recycling technologies,” said Govindan. “The first is an interruption to the continuity of business; this is the situation in which Taiwan fabs found themselves when they began to face localized climate conditions that have been unusually dry. The second is sustainability concerns, which is a driver for fabs in Singapore and other locations. And the third is just cost savings, which is the main concern at this point for U.S. fabs.”

While an interruption to the continuity of business is clearly the most pressing driver, sustainability and cost savings ultimately lead to the business continuity issues too, according to Govindan.

“Most, if not all, corporate boards get reports on sustainability factors,” said Govindan. “Some microchip manufacturers have even signed up for the net-zero water consumption pledge from the U.N. So, sustainability is a massive driver.”

The semiconductor industry's concerns on issues of sustainability track along with how the industry has evolved over the past 20 years. As feature sizes have become smaller, the level of contaminants chips can tolerate and the level of toxic chemicals they use have changed. What was applicable 20 years ago in Mountain View, Calif., when Fairchild did chip manufacturing there is completely different from what the Micron plant in Idaho is doing today.

While sustainability is shaping up to be a key outside driver for water recycling efforts, the primary driver is almost always cost savings. In Arizona, for example, the cost of finding, sourcing, and using freshwater is high enough that a company like Gradiant can save a company a great deal of money simply by recycling the water it does manage to acquire. “Our cost of treatment is typically lower than the cost of sourcing and disposal,” adds Govindan.

While U.S. fabs are not facing a threat to the continuity of their business because of water scarcity—despite their locations in arid regions such as Arizona—climate change in general is a looming risk to water availability. Climate change and the limited availability of freshwater are already reported to be affecting 40 percent of the global population.

“Because of climate change," Govindan notes, "the levels of freshwater availability have dropped in some regions, and those numbers could easily be accelerated relative to predictive models. Water scarcity may be even more urgent than we predict today.”


Meta Aims to Build the World’s Fastest AI Supercomputer

By Samuel K. Moore


Meta, parent company of Facebook, says it has built a research supercomputer that is among the fastest on the planet. By the middle of this year, when an expansion of the system is complete, it will be the fastest, Meta researchers Kevin Lee and Shubho Sengupta write in a blog post today. The AI Research SuperCluster (RSC) will one day work with neural networks with trillions of parameters, they write. The number of parameters in neural network models has been growing rapidly. The natural language processor GPT-3, for example, has 175 billion parameters, and such sophisticated AIs are only expected to grow.

RSC is meant to address a critical limit to this growth, the time it takes to train a neural network. Generally, training involves testing a neural network against a large data set, measuring how far it is from doing its job accurately, using that error signal to tweak the network’s parameters, and repeating the cycle until the neural network reaches the needed level of accuracy. It can take weeks of computing for large networks, limiting how many new networks can be trialed in a given year. Several well-funded startups, such as Cerebras and SambaNova, were launched in part to address training times.

Among other things, Meta hopes RSC will help it build new neural networks that can do real-time voice translations to large groups of people, each speaking a different language, the researchers write. “Ultimately, the work done with RSC will pave the way toward building technologies for the next major computing platform—the metaverse, where AI-driven applications and products will play an important role,” they write.

“The experiences we’re building for the metaverse require enormous compute power (quintillions of operations / second!) and RSC will enable new AI models that can learn from trillions of examples, understand hundreds of languages, and more,” Meta CEO and cofounder Mark Zuckerberg said in a statement.

Old System: 22,000 Nvidia V100 GPUs
Today: 6,080 Nvidia A100 GPUs
Mid-2022: 16,000 Nvidia A100 GPUs

Compared to the AI research cluster Meta uses today, which was designed in 2017, RSC is a change in the number of GPUs involved, how they communicate, and the storage attached to them.

In early 2020, we decided the best way to accelerate progress was to design a new computing infrastructure from a clean slate to take advantage of new GPU and network fabric technology. We wanted this infrastructure to be able to train models with more than a trillion parameters on data sets as large as an exabyte—which, to provide a sense of scale, is the equivalent of 36,000 years of high-quality video.

The old system connected 22,000 Nvidia V100 Tensor Core GPUs. The new one switches over to Nvidia’s latest core, the A100, which has dominated in recent benchmark tests of AI systems. At present the new system is a cluster of 760 Nvidia DGX A100 computers, with a total of 6,080 GPUs. The computer cluster is bound together using an Nvidia 200-gigabit-per-second Infiniband network. The storage includes 46 petabytes (46 million billion bytes) of cache storage and 175 petabytes of bulk flash storage.

Speedups:
Computer vision: 20x
Large-scale natural-language processing: 3x

Compared to the old V100-based system, RSC marked a 20-fold speedup in computer vision tasks and a 3-fold boost in handling large natural-language processing.

When the system is complete in the middle of this year, it will connect 16,000 GPUs, which, Lee and Sengupta write, will make it one of the largest of its kind. At that point, its cache and storage will have a capacity of 1 exabyte (1 billion billion bytes) and be able to serve 16 terabytes per second of data to the system. The new system will also focus on reliability. That’s important because very large networks might take weeks of training time, and you don’t want a failure partway through the task that means having to start over.

Meta’s RSC was designed and built entirely during the COVID-19 pandemic. Credit: Meta

For reference, the largest production-ready system tested in the latest round of the MLPerf neural network training benchmarks was a 4,320-GPU system fielded by Nvidia. That system could train the natural language processor BERT in less than a minute. However, BERT has only 110 million parameters, compared with the trillions Meta wants to work with.

The launch of RSC also comes with a change in the way Meta uses data for research:

Unlike with our previous AI research infrastructure, which leveraged only open source and other publicly available data sets, RSC also helps us ensure that our research translates effectively into practice by allowing us to include real-world examples from Meta’s production systems in model training.

The researchers write that RSC will take extra precautions to encrypt and anonymize this data to prevent any chance of leakage. Those steps include isolating RSC from the larger Internet, with neither inbound nor outbound connections. Traffic to RSC can flow in only from Meta’s production data centers. In addition, the data path between storage and the GPUs is end-to-end encrypted, and data is anonymized and subject to a review process to confirm the anonymization.

Rooftop Drones for Autonomous Pigeon Harassment

By Evan Ackerman


Feral pigeons are responsible for over a billion dollars of economic losses here in the United States every year. They’re especially annoying because the species isn’t native to this country—they were brought over from Europe (where they’re known as rock doves and are still quite annoying) because you can eat them, but enough of the birds escaped and liked it here that there are now stable populations all over the country, being gross.

In addition to carrying diseases (some of which can occasionally infect humans), pigeons are prolific and inconvenient urban poopers, deploying their acidic droppings in places that are exceptionally difficult to clean. Rooftops, as well as ledges and overhangs on building facades, are full of cozy nooks and crannies, and despite some attempts to brute-force the problem by putting metal or plastic spikes on every horizontal surface, there are usually more surfaces (and pigeons) than can be reasonably bespiked.

Researchers at EPFL in Switzerland believe that besting an aerial adversary requires an aerial approach, and so they’ve deployed an autonomous system that can identify roof-invading pigeons and then send a drone over to chase them away.


Drones, it turns out, are already being used for bird control, but so far (for a variety of reasons) it’s with an active human pilot using a drone to scare flocks of birds at specific places and times. One of the reasons for this is that it’s illegal (or at least a major administrative headache) to fly drones autonomously anywhere, and Switzerland is no exception, which is why this research involved a human supervisor on standby, ready to jump in and take over should the otherwise fully autonomous system suffer some sort of glitch.

At first glance, you might think that the ideal system for drone-based bird deterrence would be completely self-contained, with a patrolling drone using a camera to detect wayward avians and then chasing them off. But as the researchers point out, such a system is not very practical. Roofs can be big, pigeons are small, and drone-based cameras are smaller still and will suck down power doing onboard pigeon detection. Plus, the drone will waste most of its time not finding birds and will need to recharge frequently.

A better solution is a base station with a dedicated high-resolution pan-tilt-zoom camera that can survey as much roof as possible without having to move. When pigeons are spotted, the drone (a Parrot Anafi) gets dispatched to the spot, spending a minimal amount of time in the air. Note that there’s only a single (monocular) camera involved here, but fortunately, pigeons are mostly the same size so it’s possible to make a fairly accurate estimation of distance based on apparent bird height.
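That single-camera range estimate follows the standard pinhole-camera relation; here is a minimal sketch, with an assumed focal length and pigeon size rather than numbers from the paper:

```python
# Monocular range from apparent size: distance ≈ f * H_real / h_pixels,
# where f is the camera's focal length expressed in pixels.
FOCAL_LENGTH_PX = 2800    # assumed focal length of the pan-tilt-zoom camera, in pixels
PIGEON_HEIGHT_M = 0.3     # assumed typical standing height of a pigeon, in meters

def estimate_distance(apparent_height_px: float) -> float:
    """Rough distance in meters to a pigeon of known size, given its height in the image."""
    return FOCAL_LENGTH_PX * PIGEON_HEIGHT_M / apparent_height_px

print(f"{estimate_distance(30):.0f} m")   # a pigeon 30 pixels tall is roughly 28 m away
```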

Testing on the roof of the SwissTech Convention Center in Lausanne revealed some issues with the autonomous pigeon detection system, mostly because within a pigeon flock you get a whole bunch of pigeons that are occluded by other pigeons, so if your detector is trying to figure out whether a flock is worth going after depending on the number of individual pigeon detections, you might run into some trouble. But even so, the overall system was quite successful—left to themselves, pigeon flocks could spend up to 2.5 hours just chilling on the roof and (probably) pooping a whole bunch. When the drone was deployed, though, the maximum bird loitering time was cut down to just a few minutes, and that includes the time it took to detect the birds and actually launch the drone.

The researchers also noticed some interesting drone-on-bird behaviors:

Several interesting observations regarding the interactions of pigeons and the drone were made during the experiments. First, the distance at which pigeons perceive the drone as a threat is highly variable and may be related to the number of pigeons. Whereas larger flocks were often scared simply by takeoff (which happened at a distance of 40–60 m from the pigeons), smaller groups of birds often let the drone come as close as a few meters. Furthermore, the duration in which the drone stays in the target region is an important tuning parameter. Some pigeons attempted to return almost immediately but were repelled by the hovering drone.

The researchers suggest that it might be useful to collaborate with some zoologists (ornithologists?) as a next step, as it’s possible that “the efficiency of the system could be radically changed by leveraging knowledge about the behaviors and interactions of pigeons.”

Wi-Fi 7 Stomps on the Gas

By Matthew S. Smith


Consumer technology is often a story of revolutionary leaps followed by a descent into familiarity. The first computers advanced so quickly that new models went obsolete while they were still on store shelves. Today, any US $500 laptop will be relevant for a decade. A similar story can be told of smartphones, TVs, even cars.

Yet there is one technology that has escaped this trend: Wi-Fi.


Wi-Fi went mainstream with the 802.11g standard in 2003, which improved performance and reliability over earlier 802.11a/b standards. My first 802.11g adapter was a revelation when I installed it in my ThinkPad’s PC Card slot. A nearby café jumped on the trend, making a midday coffee-and-classwork break possible. That wasn’t a thing before 802.11g.

Still, 802.11g often tried your patience. Anything but an ideal connection left me staring at half-loaded Web pages. I soon learned which spots in the café had the best connection.

Wi-Fi 6, released in 2019, has maximum speeds of 600 megabits per second for a single stream and 9,608 Mb/s across a single network. That’s nearly 40 percent faster than the Wi-Fi 5 standard and more than 175 times as fast as the 802.11g connection I used in 2003.
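Those comparisons check out arithmetically; a quick sketch, taking 54 Mb/s and 6,933 Mb/s as the standard published maxima for 802.11g and Wi-Fi 5 (figures not given in the text):

```python
# Comparing headline maximum data rates, in Mb/s.
wifi6_max = 9_608    # Wi-Fi 6, whole network, quoted above
wifi5_max = 6_933    # Wi-Fi 5 (802.11ac) maximum, standard published figure
g_max = 54           # 802.11g maximum, standard published figure

print(f"Wi-Fi 6 vs Wi-Fi 5: {100 * (wifi6_max / wifi5_max - 1):.0f}% faster")  # ~39%
print(f"Wi-Fi 6 vs 802.11g: {wifi6_max / g_max:.0f}x as fast")                 # ~178x
```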

Such extreme bandwidth is obviously overkill for Web browsing, but it’s a necessity for streaming augmented- and virtual-reality content.

Those figures, while impressive, don’t tell the whole story. Peak Wi-Fi speeds require support on each device for multiple “spatial streams”—that is, for multiplexed channels. Modern Wi-Fi can support up to eight spatial streams, but most consumer-grade Wi-Fi adapters support just one or two streams, to keep costs down. Fortunately, Wi-Fi 6 boosts the performance per stream enough to lift even entry-level Wi-Fi adapters above gigabit speeds.

That’s key, as gigabit Internet remains the best available to most people across the globe. I’m lucky enough to have gigabit service, and I’ve tested quite a few Wi-Fi 6 devices that hurdle this performance bar. It renders gigabit Ethernet nearly obsolete, at least for most home use. And you don’t need to spend a fortune: A basic Wi-Fi 6 router like TP-Link’s AX73 or Asus’s RT-AX3000 can do the trick.

Wi-Fi 6E, released in 2020, further improves the standard with a 6-gigahertz band that appears as a separate connection, just as 2.4- and 5-GHz bands have appeared separately on prior Wi-Fi networks. It’s early days for Wi-Fi 6E, so device support is limited, but the routers I’ve tested were extremely consistent in hitting the peak potential of gigabit Internet.

Wi-Fi 6 already reaches a level of performance that exceeds the Internet service available to most people. Yet the standard isn’t letting off the gas. MediaTek plans the first demonstration of Wi-Fi 7 at CES 2022 (the standard is expected to be released in 2024). Wi-Fi 7 is expected to boost maximum bandwidth up to 40 gigabits per second, four times as fast as Wi-Fi 6. Such extreme bandwidth is obviously overkill for Web browsing, but it’s a necessity for streaming augmented- and virtual-reality content.

This rapid improvement stands in contrast to the struggles in cellular networking. In theory, 5G can meet or beat the performance of Wi-Fi; Qualcomm claims its latest hardware can hit peak data rates of 20 Gb/s. But the reality often falls short.

The performance of 5G varies between markets. A report from OpenSignal found customers of Taiwan’s FarEasTone can expect average download speeds of nearly 448 Mb/s. Verizon and AT&T customers in the United States average just 52.3 Mb/s. 5G is also saddled with confusing and deceptive marketing, such as AT&T’s decision to brand some 4G phones as “5GE.”

Inconsistent 5G cuts especially deep for consumers because the problem is out of their hands. If you want faster Wi-Fi, you can make it happen by purchasing a new router and, possibly, an adapter for older devices. But if you want faster mobile data, tough luck. You could try a new smartphone or switch providers, but both options are expensive, and improvements aren’t guaranteed. The best way to improve cellular data is to improve the infrastructure, but that’s up to your service provider.

Perhaps cellular providers will get their act together and bring the best 5G speeds beyond dense urban centers. Until then, Wi-Fi is the way to go if you want maximum bandwidth without a cord.

AI Could Analyze Speech to Help Diagnose Alzheimer’s

By Rebecca Sohn


Alzheimer’s disease is notoriously difficult to diagnose. Typically, doctors use a combination of cognitive tests, brain imaging, and observation of behavior that can be expensive and time-consuming. But what if a quick voice sample, easily taken at a person’s home, could help identify a patient with Alzheimer’s?

A company called Canary Speech is creating technology to do just that. Using deep learning, its algorithms analyze short voice samples for signs of Alzheimer’s and other conditions. Deep learning provider Syntiant recently announced a collaboration with Canary Speech, which will allow Canary to take a technology that is mostly used in doctor’s offices and hospitals into a person’s home via a medical device. While some research has found deep learning techniques using voice and other types of data to be highly accurate in classifying those with Alzheimer’s and other conditions in a lab setting, it’s possible the results would be different in the real world. Nevertheless, AI and deep learning techniques could become helpful tools in making a difficult diagnosis.

Most people think of Alzheimer’s disease, the most common form of dementia, as affecting memory. But research suggests that Alzheimer’s can impact speech and language even in the disease’s earliest stages, before most symptoms are noticeable. While people can’t usually pick up on these subtle effects, a deep learning model, trained on the voices of tens of thousands of people with and without these conditions, may be able to distinguish these differences.

“What you’re interested in is, what is the central nervous system telling you that is being conveyed through the creation of speech?” says Henry O’Connell, CEO and cofounder of Canary Speech. “That's what Canary Speech does—we analyze that data set.”

Until now, O’Connell says, the algorithm has been cloud-based, but Canary’s collaboration with Syntiant allows for a chip-based application, which is faster and has more memory and storage capacity. The new technology is meant to be incorporated into a wearable device and take less than a second to analyze a 20- or 30-second sample of speech for conditions like Alzheimer’s, as well as anxiety, depression, and even general energy level. O’Connell says that Canary’s system is about 92.5 percent accurate when it comes to correctly distinguishing between the voices of people with and without Alzheimer’s. There is some research to suggest that conditions like depression and anxiety impact speech, and O’Connell says that Canary is working to test and improve the accuracy of algorithms to detect these conditions.

Other voice-based technologies have had similar success, says Frank Rudzicz, an associate professor of computer science at the University of Toronto and cofounder of Winterlight Labs, which makes a similar product to Canary Speech. In a 2016 study, Rudzicz and other researchers used simple machine learning methods to analyze the speech of people with and without Alzheimer’s with an accuracy of about 81 percent.

“With deep learning, you would just give the raw data to these deep neural networks, and then the deep neural networks automatically produce their own internal representations,” Rudzicz says. Like all deep learning algorithms, this creates a “black box”—meaning it’s impossible to know exactly what aspects of speech the algorithm is homing in on. With deep learning, he says, the accuracy of these algorithms has risen above 90 percent.

Previously, programmers have used deep learning alongside medical imaging of the brain, such as MRI scans. In studies, many of these methods are similarly accurate—usually above 90 percent accuracy. In a December 2021 study, programmers successfully trained an algorithm to not only distinguish between the brains of cognitively normal people and those with Alzheimer’s, but also between those with mild cognitive impairment, in many cases an early precursor to Alzheimer’s, whose brains were either more similar to those of healthy people or more similar to those with Alzheimer’s. Distinguishing these subtypes is especially important because not everyone with mild cognitive impairment goes on to develop Alzheimer’s.

“We want to have methods to stratify individuals along the Alzheimer’s disease continuum,” says Eran Dayan, an assistant professor of radiology at the University of North Carolina, Chapel Hill and an author of the 2021 study. “These are subjects who are likely to progress to Alzheimer's disease.”

Identifying these patients as early as possible, Dayan says, will likely be crucial in effectively treating their diseases. He also says that, generally, scan-based deep learning has a similarly high efficacy rate, at least in classification studies done in the lab. Whether these technologies will be just as effective in the real world is less clear, he says, though they are still likely to work well. He says more research is needed to know for sure.

Another reason for concern, Dayan says, is potential biases, which recent research has shown that AI can harbor if there is not enough variety in the data the algorithm is trained on. For instance, Rudzicz says it’s possible that an algorithm trained using speech samples from people in Toronto would not work as well in a rural area. O’Connell says that the Canary Speech algorithm analyzes nonlanguage elements of speech, and that the company has versions of the technology used in other countries, like Japan and China, that are trained using data from native speakers.

“We validate our model and train it in that system, in that environment, for performance,” he says.

Though Canary’s collaboration with Syntiant may make remote, real-time monitoring possible, O’Connell personally believes a formal diagnosis should come from a doctor, with this technology serving as another tool in making the diagnosis. Dayan agrees.

“AI, in the coming years, I hope will help assist doctors, but absolutely not replace them,” he says.

Letting Robocars See Around Corners

By Fredrik Brännström


An autonomous car needs to do many things to make the grade, but without a doubt, sensing and understanding its environment are the most critical. A self-driving vehicle must track and identify many objects and targets, whether they’re in clear view or hidden, whether the weather is fair or foul.

Today’s radar alone is nowhere near good enough to handle the entire job—cameras and lidars are also needed. But if we could make the most of radar’s particular strengths, we might dispense with at least some of those supplementary sensors.

Conventional cameras in stereo mode can indeed detect objects, gauge their distance, and estimate their speeds, but they don’t have the accuracy required for fully autonomous driving. In addition, cameras do not work well at night, in fog, or in direct sunlight, and systems that use them are prone to being fooled by optical illusions. Laser scanning systems, or lidars, do supply their own illumination and thus are often superior to cameras in bad weather. Nonetheless, they can see only straight ahead, along a clear line of sight, and will therefore not be able to detect a car approaching an intersection while hidden from view by buildings or other obstacles.

Radar is worse than lidar in range accuracy and angular resolution—the smallest angle of arrival necessary between two distinct targets to resolve one from another. But we have devised a novel radar architecture that overcomes these deficiencies, making it much more effective in augmenting lidars and cameras.

Our proposed architecture employs what’s called a sparse, wide-aperture multiband radar. The basic idea is to use a variety of frequencies, exploiting the particular properties of each one, to free the system from the vicissitudes of the weather and to see through and around corners. That system, in turn, employs advanced signal processing and sensor-fusion algorithms to produce an integrated representation of the environment.

We have experimentally verified the theoretical performance limits of our radar system—its range, angular resolution, and accuracy. Right now, we’re building hardware for various automakers to evaluate, and recent road tests have been successful. We plan to conduct more elaborate tests to demonstrate around-the-corner sensing in early 2022.

Each frequency band has its strengths and weaknesses. The band at 77 gigahertz and below can pass through 1,000 meters of dense fog without losing more than a fraction of a decibel of signal strength. Contrast that with lidars and cameras, which lose 10 to 15 decibels in just 50 meters of such fog.

Rain, however, is another story. Even light showers will attenuate 77-GHz radar as much as they would lidar. No problem, you might think—just go to lower frequencies. Rain is, after all, transparent to radar at, say, 1 GHz or below.

This works, but you want the high bands as well, because the low bands provide poorer range and angular resolution. Although you can’t necessarily equate high frequency with a narrow beam, you can use an antenna array, or highly directive antenna, to project the millimeter-long waves in the higher bands in a narrow beam, like a laser. This means that this radar can compete with lidar systems, although it would still suffer from the same inability to see outside a line of sight.

For an antenna of given size—that is, of a given array aperture—the angular resolution of the beam is inversely proportional to the frequency of operation. Similarly, to achieve a given angular resolution, the required frequency is inversely proportional to the antenna size. So to achieve some desired angular resolution from a radar system at relatively low UHF frequencies (0.3 to 1 GHz), for example, you’d need an antenna array tens of times as large as the one you’d need for a radar operating in the K (18- to 27-GHz) or W (75- to 110-GHz) bands.
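That inverse relationship can be made concrete with the usual diffraction-limit rule of thumb, in which the angular resolution in radians is roughly the wavelength divided by the aperture width. A minimal sketch (the exact constant depends on the array design):

```python
import math

C = 3e8  # speed of light, m/s

def aperture_for_resolution(freq_hz: float, resolution_deg: float) -> float:
    """Approximate aperture width D, in meters, for a desired angular resolution,
    using the rule of thumb theta ~ wavelength / D."""
    wavelength = C / freq_hz
    return wavelength / math.radians(resolution_deg)

for name, f in [("UHF (1 GHz)", 1e9), ("K band (24 GHz)", 24e9), ("W band (77 GHz)", 77e9)]:
    print(f"{name}: {aperture_for_resolution(f, 1.0):.2f} m for 1-degree resolution")
# The UHF aperture comes out tens of times as large as the K- or W-band apertures.
```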

Even though lower frequencies don’t help much with resolution, they bring other advantages. Electromagnetic waves tend to diffract at sharp edges; when they encounter curved surfaces, they can diffract right around them as “creeping” waves. These effects are too weak to be effective at the higher frequencies of the K band and, especially, the W band, but they can be substantial in the UHF and C (4- to 8-GHz) bands. This diffraction behavior, together with lower penetration loss, allows such radars to detect objects around a corner.


Graph of multipath reflections and through-building transmission for autonomous vehicles.

One weakness of radar is that it follows many paths, bouncing off innumerable objects, on its way to and from the object being tracked. These radar returns are further complicated by the presence of many other automotive radars on the road. But the tangle also brings a strength: The widely ranging ricochets can provide a computer with information about what’s going on in places that a beam projected along the line of sight can’t reach—for instance, revealing cross traffic that is obscured from direct detection.

To see far and in detail—to see sideways and even directly through obstacles—is a promise that radar has not yet fully realized. No one radar band can do it all, but a system that can operate simultaneously at multiple frequency bands can come very close. For instance, high-frequency bands, such as K and W, can provide high resolution and can accurately estimate the location and speed of targets. But they can’t penetrate the walls of buildings or see around corners; what’s more, they are vulnerable to heavy rain, fog, and dust.

Lower frequency bands, such as UHF and C, are much less vulnerable to these problems, but they require larger antenna elements and have less available bandwidth, which reduces range resolution—the ability to distinguish two objects of similar bearing but different ranges. These lower bands also require a large aperture for a given angular resolution. By putting together these disparate bands, we can balance the vulnerabilities of one band with the strengths of the others.

Different targets pose different challenges for our multiband solution. The front of a car presents a smaller radar cross section—or effective reflectivity—to the UHF band than to the C and K bands. This means that an approaching car will be easier to detect using the C and K bands. Further, a pedestrian’s cross section exhibits much less variation with respect to changes in his or her orientation and gait in the UHF band than it does in the C and K bands. This means that people will be easier to detect with UHF radar.

Furthermore, the radar cross section of an object decreases when there is water on the scatterer's surface. This diminishes the radar reflections measured in the C and K bands, although this phenomenon does not notably affect UHF radars.

The tangled return paths of radar are also a strength because they can provide a computer with information about what’s going on sideways—for instance, in cross traffic that is obscured from direct inspection.

Another important difference arises from the fact that a signal of a lower frequency can penetrate walls and pass through buildings, whereas higher frequencies cannot. Consider, for example, a 30-centimeter-thick concrete wall. The ability of a radar wave to pass through the wall, rather than reflect off of it, is a function of the wavelength, the polarization of the incident field, and the angle of incidence. For the UHF band, the transmission coefficient is around –6.5 dB over a large range of incident angles. For the C and K bands, that value falls to –35 dB and –150 dB, respectively, meaning that very little energy can make it through.
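For readers who think in linear terms, those decibel figures convert to power fractions as follows (a small sketch using the values quoted above):

```python
# Fraction of radar power transmitted through a 30-cm concrete wall,
# converted from the transmission coefficients quoted above (in dB).
def db_to_fraction(db: float) -> float:
    return 10 ** (db / 10)

for band, db in [("UHF", -6.5), ("C", -35), ("K", -150)]:
    print(f"{band} band: {db} dB -> {db_to_fraction(db):.3g} of the power gets through")
# UHF: about 0.22 (22 percent); C: about 3e-4; K: about 1e-15, essentially nothing.
```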

A radar’s angular resolution, as we noted earlier, is proportional to the wavelength used; but it is also inversely proportional to the width of the aperture—or, for a linear array of antennas, to the physical length of the array. This is one reason why millimeter waves, such as the W and K bands, may work well for autonomous driving. A commercial radar unit based on two 77-GHz transceivers, with an aperture of 6 cm, gives you about 2.5 degrees of angular resolution, more than an order of magnitude worse than a typical lidar system, and too coarse for autonomous driving. Achieving lidar-standard resolution at 77 GHz requires a much wider aperture—1.2 meters, say, about the width of a car.

Besides range and angular resolution, a car’s radar system must also keep track of a lot of targets, sometimes hundreds of them at once. It can be difficult to distinguish targets by range if their range to the car varies by just a few meters. And for any given range, a uniform linear array—one whose transmitting and receiving elements are spaced equidistantly—can distinguish only as many targets as the number of antennas it has. In cluttered environments where there may be a multitude of targets, this might seem to indicate the need for hundreds of such transmitters and receivers, a problem made worse by the need for a very large aperture. That much hardware would be costly.

One way to circumvent the problem is to use an array in which the elements are placed at only a few of the positions they normally occupy. If we design such a “sparse” array carefully, so that each mutual geometrical distance is unique, we can make it behave as well as the nonsparse, full-size array. For instance, if we begin with a 1.2-meter-aperture radar operating at the K band and put in an appropriately designed sparse array having just 12 transmitting and 16 receiving elements, it would behave like a standard array having 192 elements. The reason is that a carefully designed sparse array can have up to 12 × 16, or 192, pairwise distances between each transmitter and receiver. Using 12 different signal transmissions, the 16 receive antennas will receive 192 signals. Because of the unique pairwise distance between each transmit/receive pair, the resulting 192 received signals can be made to behave as if they were received by a 192-element, nonsparse array. Thus, a sparse array allows one to trade off time for space—that is, signal transmissions with antenna elements.
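The counting argument above can be illustrated with a toy sparse-array layout; the geometry below is hypothetical, chosen only so that every transmit/receive pairwise distance is distinct, and is not the authors' actual design:

```python
# Toy MIMO sparse array: 12 transmit and 16 receive elements whose pairwise
# position sums are all distinct, yielding 12 * 16 = 192 virtual elements.
d = 0.5                                  # element spacing, in wavelengths
rx = [i * d for i in range(16)]          # receive elements at 0, d, 2d, ..., 15d
tx = [j * 16 * d for j in range(12)]     # transmit elements spaced 16d apart

virtual = {round(t + r, 6) for t in tx for r in rx}   # virtual element positions
print(len(virtual))                                   # 192 distinct positions
```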

Seeing in the rain is generally much easier for radar than for light-based sensors, notably lidar. At relatively low frequencies, a radar signal’s loss of strength due to rain is orders of magnitude lower. Credit: Neural Propulsion Systems

In principle, separate radar units placed along an imaginary array on a car should operate as a single phased-array unit of larger aperture. However, this scheme would require the joint transmission of every transmit antenna of the separate subarrays, as well as the joint processing of the data collected by every antenna element of the combined subarrays, which in turn would require that the phases of all subarray units be perfectly synchronized.

None of this is easy. But even if it could be implemented, the performance of such a perfectly synchronized distributed radar would still fall well short of that of a carefully designed, fully integrated, wide-aperture sparse array.

Consider two radar systems at 77 GHz, each with an aperture length of 1.2 meters and with 12 transmit and 16 receive elements. The first is a carefully designed sparse array; the second places two 14-element standard arrays on the extreme sides of the aperture. Both systems have the same aperture and the same number of antenna elements. But while the integrated sparse design performs equally well no matter where it scans, the divided version has trouble looking straight ahead, from the front of the array. That’s because the two clumps of antennas are widely separated, producing a blind spot in the center.

In the widely separated scenario, we assume two cases. In the first, the two standard radar arrays at either end of a divided system are somehow perfectly synchronized. This arrangement fails to detect objects 45 percent of the time. In the second case, we assume that each array operates independently and that the objects they’ve each independently detected are then fused. This arrangement fails almost 60 percent of the time. In contrast, the carefully designed sparse array has only a negligible chance of failure.

The truck and the car are fitted with wide-aperture multiband radar from Neural Propulsion Systems, the authors’ company. Note the very wide antenna above the windshield of the truck. Credit: Neural Propulsion Systems

Seeing around the corner can be depicted easily in simulations. We considered an autonomous vehicle, equipped with our system, approaching an urban intersection with four high-rise concrete buildings, one at each corner. At the beginning of the simulation the vehicle is 35 meters from the center of the intersection and a second vehicle is approaching the center via a crossing road. The approaching vehicle is not within the autonomous vehicle’s line of sight and so cannot be detected without a means of seeing around the corner.

At each of the three frequency bands, the radar system can estimate the range and bearing of the targets that are within the line of sight. In that case, the range of the target is equal to the speed of light multiplied by half the time it takes the transmitted electromagnetic wave to return to the radar. The bearing of a target is determined from the incident angle of the wavefronts received at the radar. But when the targets are not within the line of sight and the signals return along multiple routes, these methods cannot directly measure either the range or the position of the target.

We can, however, infer the range and position of targets. First we need to distinguish between line-of-sight, multipath, and through-the-building returns. For a given range, multipath returns are typically weaker (due to multiple reflections) and have different polarization. Through-the-building returns are also weaker. If we know the basic environment—the position of buildings and other stationary objects—we can construct a framework to find the possible positions of the true target. We then use that framework to estimate how likely it is that the target is at this or that position.

As the autonomous vehicle and the various targets move and as more data is collected by the radar, each new piece of evidence is used to update the probabilities. This is Bayesian logic, familiar from its use in medical diagnosis. Does the patient have a fever? If so, is there a rash? Here, each time the car’s system updates the estimate, it narrows the range of possibilities until at last the true target positions are revealed and the “ghost targets” vanish. The performance of the system can be significantly enhanced by fusing information obtained from multiple bands.
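The update described above can be sketched on a small grid of candidate target positions. This is a minimal, hypothetical illustration of the Bayesian bookkeeping, not the production algorithm, which fuses returns across bands and uses a model of the mapped environment:

```python
# Minimal Bayesian grid update over candidate target positions.
def bayes_update(prior, likelihood):
    """prior: probability of the target being in each cell.
    likelihood: for each cell, how probable the latest radar return would be
    if the target were there (a hypothetical measurement model)."""
    posterior = [p * l for p, l in zip(prior, likelihood)]
    total = sum(posterior)
    return [p / total for p in posterior]

# Three candidate cells; a return twice as likely if the target is in cell 1
# concentrates the belief there after a few sweeps, and the "ghosts" fade.
belief = [1 / 3, 1 / 3, 1 / 3]
measurement_likelihood = [1.0, 2.0, 1.0]
for _ in range(3):
    belief = bayes_update(belief, measurement_likelihood)
print([round(p, 3) for p in belief])   # [0.1, 0.8, 0.1]
```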

We have used experiments and numerical simulations to evaluate the theoretical performance limits of our radar system under various operating conditions. Road tests confirm that the radar can detect signals coming through occlusions. In the coming months we plan to demonstrate round-the-corner sensing.

The performance of our system in terms of range, angular resolution, and ability to see around a corner should be unprecedented. We expect it will enable a form of driving safer than we have ever known.

Atari Breakout: The Best Video Game of All Time?

By Tekla S. Perry


Breakout was the best video game ever invented, many designers say, because it was the first true video game. Before Breakout, all were games like Pong—imitations of real life. With Breakout, a single paddle was used to direct a ball at a wall of colored bricks. Contact made a brick vanish and the ball change speed. The game could never exist in any medium other than video.

Like Pong, the specifications for Breakout—its look and game rules—were defined by Nolan Bushnell at Atari Inc., Sunnyvale, Calif. But along with the specs came an engineering challenge in 1975: design the game with less than 50 chips, and the designer would receive $700; design the game with less than 40 chips, and the designer would receive $1000. Most games at that time contained over 100 chips. Steven Jobs, now president of Apple Computer, Santa Clara, Calif., was hanging around Atari at that time. “He was dirt poor,” recalled Allan Alcorn, who joined Atari at its formation. Atari’s design offer was “good cash”—to Mr. Jobs. Mr. Alcorn remembered that Mr. Jobs quickly designed the game with fewer than 50 chips. He had help. He called on his friend, Steven Wozniak, who later designed the Apple computer.


This article was first published as "Breakout: a video breakthrough in games." It appeared in the December 1982 issue of IEEE Spectrum as part of a special report, “Video games: The electronic big bang.” A PDF version is available on IEEE Xplore.


Mr. Jobs had to make a trip to Oregon, Mr. Wozniak related, “so we just had four days.” Mr. Wozniak went to his regular job at Hewlett-Packard during the day and joined Mr. Jobs at Atari at night. “We got it down to 45 chips, and got the bugs out, but after four days we wouldn’t have done anything to get it down further,” Mr. Wozniak said.

They got their bonus, but, Mr. Alcorn recalled, the game used such minimized logic it was impossible to repair.

Larry Kaplan, a designer who was also at Atari at that time, explained: “What Woz or Jobs liked to do was to design things that were parallel sequential, so at a given point in time this chip was used in one part of the circuit and three microseconds later it was used in a different part of the circuit. It’s a dream, but it’s impossible to debug or produce.”

Breakout sat in the Atari lab for eight months. Then the same design was reworked with 100 ICs before it was put into production.

Editor’s note (January 2022): The financial terms described were those explained to me by Steve Wozniak in 1982. Years later, Wozniak discovered to his dismay that the actual bonus received was $5000—and Steve Jobs kept it for himself.

How E Ink Developed Full-Color e-Paper

Par Edzer Huitema


It was the end of 2008, October, right before the holiday shopping season. Talk-show host Oprah Winfrey released her highly anticipated Favorite Things list, with the Amazon Kindle topping the gadget category.

This is the moment that the concept of electronic paper, or e-paper, went mainstream.


But this black-and-white, reflective display that always appeared to be on was invented well before the Amazon Kindle made it famous. Its story began a decade earlier, in 1997, at the MIT Media Lab, when it was created by two students, J.D. Albert and Barrett Comiskey, who were inspired by their professor Joseph Jacobson.

From the very beginning, e-paper seemed magical. It was easy on the eyes, even outdoors and in bright sunlight, where other portable displays became unreadable. It could go weeks between charges while mobiles equipped with other displays barely made it through a day (some of them still barely make it through a day). Yet its limitation was obvious—images could appear only in black and white. In a world that hadn’t seen a monochrome display in a very long time—TVs made the switch to color in the 1960s, computer monitors in the late ’80s—e-paper was definitely quaintly old school.

So, since the initial development of electronic ink, as the basic technology behind e-paper is known, and even more with the release of the Kindle, a big question hung over e-paper: When would we see this magical display in brilliant, blazing color?

It’s not that people hadn’t been trying. Electronic-ink researchers had been pursuing color e-paper for years, as had other researchers around the world, in universities, corporate research labs, and startups. They came up with some early products that targeted shelf labels for brick-and-mortar retail stores and also for signage. But these added just one color to a black-and-white screen—red or yellow—and that wasn’t anybody’s idea of a full-color display. Indeed, more than a decade after that first Kindle, and more than two decades after its invention, full-color e-paper had still not reached the consumer market.

Why did it take so long for e-paper to make that Wizard-of-Oz transition from black and white to color? Over the years, researchers tried several approaches, some taking technologies from more traditional displays, others evolving from the original e-paper’s unique design. Qualcomm, for example, spent billions pursuing an approach inspired by butterfly wings. Overall, the path to successful color e-paper is a classic, if tortuous, tale of tech triumph. Read on to find out why this seemingly straightforward goal was reached only two years ago at E Ink, where we are chief technical officers.

In E Ink’s Triton and Kaleido displays, color filters turn light reflected from white particles into red, green, and blue subpixels. This approach, however, reduces resolution and brightness, limiting the popularity of the first generation of the technology. (Illustration: James Provost)

Today, E Ink’s full-color ePaper is in consumer hands, in products including e-readers, smartphones, and note-taking devices from roughly a dozen manufacturers. These include the Guoyue Smartbook V5 Color, the HiSense A5C Color Smartphone, the Onyx Boox Poke 2 Color, and the PocketBook Color. Only one other full-color electronic paper product has been announced—DES (Display Electronic Slurry) from China’s Dalian Good Display. At this writing, no devices using DES have shipped to consumers, though a handful of journalists have received samples and two Kickstarter campaigns feature products designed to use the display.

The challenge stemmed from the nature of the technology. Black-and-white electronic ink is a straightforward fusion of chemistry, physics, and electronics, and it does pretty much what traditional ink and paper does. E Ink’s version is made of microcapsules of negatively charged black particles and positively charged white particles—the same pigments used in the printing industry today—floating in clear liquid. Each microcapsule is about the width of a human hair.

To manufacture our ePaper display, we start by making batches of this electronic ink, then use it to coat a plastic substrate some 25 to 100 micrometers thick, depending on which product it’s intended for. We then cut the rolls of coated film into the desired display size and add thin-film transistors to create electrodes above and below the ink layer, which is sandwiched between protective sheets, and, possibly, touch panels or front lights.

To produce an image, an ePaper device applies different voltages to the top and bottom electrodes to create an electric field. At the top, the voltage is close to zero, and at the bottom it alternates among –15, 0, and 15 volts. Every time the image on the screen needs to change, a specific sequence of voltages applied to the bottom electrode moves the particles from their previous position to the position needed to show the correct color for the new image. This update typically takes less than half a second.

Bringing white particles to the top of the display creates the appearance of “paper”; black ones create “ink.” But the particles don’t have to sit at the very top or very bottom; when we stop generating that electric field, the particles stop in their tracks. This means we can create a mixture of black-and-white particles near the top of the display—appearing as shades of grey.

E Ink’s Advanced Color ePaper (ACeP) uses four different types of pigment particles, varying in size and charge. The system applies varying electric fields to push and pull them to different positions in each trapezoidal microcup to create the desired colors. (Illustration: James Provost)

The software that determines the timing and the voltages applied to each electrode is complex. The choices depend on what was previously displayed at that pixel. If a black pixel in one image will be black again in the next image, for example, no voltage needs to be applied at that spot. We also have to be careful with the transitions; we don’t want a previous image to linger, yet we don’t want an abrupt change to cause the screen to flash. These are but a few of the factors we took into consideration when designing the algorithms, called waveforms, that we use to set the sequence of voltages. Designing them is as much art as science.
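
The logic can be pictured as a lookup table keyed by a pixel's old and new states. The voltages and durations below are invented for illustration; real waveform tables are proprietary, temperature dependent, and far more elaborate.

```python
# A toy illustration of the idea behind waveform tables; the real ones are
# proprietary and far more elaborate. Each entry is a list of
# (voltage, duration_ms) steps, with voltages limited to -15, 0, and +15 V.
WAVEFORMS = {
    ("black", "black"): [],                     # pixel unchanged: no drive at all
    ("black", "white"): [(+15, 240), (0, 20)],  # push the white particles to the top
    ("white", "black"): [(-15, 240), (0, 20)],
    ("white", "gray"):  [(-15, 120), (0, 20)],  # a shorter pulse stops particles partway
}

def drive_pixel(old: str, new: str) -> list[tuple[int, int]]:
    """Return the voltage sequence for one pixel transition."""
    # Fallback for transitions not in the table: clear the pixel, then set it.
    return WAVEFORMS.get((old, new), [(-15, 240), (+15, 240)])

print(drive_pixel("black", "black"))  # [] -- an unchanged pixel is left alone
print(drive_pixel("white", "gray"))   # mixed particles near the top read as gray
```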

Bringing color into the equation greatly complicates the waveforms. Black and white is a simple dichotomy: the oppositely charged black and white particles simply swap places when the electric field's polarity flips. That approach can’t accommodate full-color digital paper. We needed something entirely new.

We started exploring options in the early 2000s. One of our first commercially launched color products, in 2010, used a color filter—an array of squares printed onto a layer of glass placed on top of the standard black-and-white ink layer. When we applied a charge to move the white particles to the surface at a selected spot, the light would bounce back to the viewer through the red, green, or blue filter above it. It was an obvious approach: All of the colors visible to humans can be created with combinations of red, green, and blue light, which is why most of today’s most common display technologies, like LCDs and OLEDs, use RGB emitters or color filters.

We called our product E Ink Triton. While an electronic textbook did launch with the technology, the main thing this effort taught us was what would not work for the consumer market. Its resolution was simply too low and the colors not bright enough for people who were used to the high resolution of tablet computers or print magazines.

The brightness problem stemmed from the fact that unlike LCDs and OLEDs, which, respectively, use a backlight or emit light directly, E Ink’s displays are fully reflective. That is, light from an outside source goes through the transparent cover, hits the ink layer, and bounces back to the viewer’s eyes. This arrangement is great for outdoor use, because reflective displays are enhanced rather than washed out by bright sunlight. And the displays are good for eye comfort, because they don’t shine light directly at a user. But with a reflective system, every layer between the ink and eye absorbs or scatters some of the light. Adding that color filter layer, it turned out, caused significant dimming.

For its Kaleido color display, E Ink included a front light and patterned the color filters as a series of short lines to improve brightness, color saturation, and contrast. (Illustration: James Provost)

In addition, using a color filter to split monochrome pixels into three colored subpixels reduced the overall resolution: a display with a native resolution of 300 pixels per inch drops to 100 pixels per inch once the three-color filter is added. This was not as much of an issue for a 32-inch display used as a sign—pixel sizes could be larger, and big letters don’t require high resolution. But it was a real problem for small fonts and line drawings on handheld devices.

While our researchers were coming up with this filtered display, others in our labs focused on a different approach, called multipigment, that didn’t rely on color filters. However, that approach requires far more complicated chemistry and mechanics.

Multipigment e-paper also shares fundamentals with its monochrome predecessors. However, instead of only two types of particles, there are now three or four, depending on the colors chosen for a particular application.

We needed to get these particles to respond uniquely to electric fields, not simply be attracted or repelled. We did a few things to our ink particles to allow them to be better sorted. We made the particles different sizes—larger particles will generally move more slowly in liquid than smaller ones. We varied the charges of the particles, taking advantage of the fact that charge is more analog than digital. That is, it can be very positive, a little positive, very negative, or a little negative. And a lot of gradations in between.

Once we had our particles differentiated, we had to adapt our waveforms; instead of just sending one set of particles to the top as another goes to the bottom, we both push and pull them to create an image. For example, we can push particles of one color to the top, then pull them back a little so they mix with other particles to create a specific shade. Cyan and yellow together, for example, produce green, with white particles providing a reflective background. The closer a particle is to the surface, the greater the intensity of that color in the mix.
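
A rough way to see why this subtractive scheme works is to treat each pigment as removing one band of the reflected light in proportion to how close it sits to the surface. The little model below is ours, not E Ink's color science.

```python
# A rough model of why pulling cyan and yellow particles toward the surface
# reads as green: each pigment subtracts one band of the reflected light, in
# proportion to how close it sits to the viewer (1.0 = right at the top).
ABSORBS = {"cyan": 0, "magenta": 1, "yellow": 2}  # index of the RGB band removed

def reflected_rgb(proximities: dict[str, float]) -> list[float]:
    """Start from the white particles' full reflection and subtract per pigment."""
    rgb = [1.0, 1.0, 1.0]
    for pigment, proximity in proximities.items():
        rgb[ABSORBS[pigment]] *= (1.0 - proximity)
    return rgb

print(reflected_rgb({"cyan": 0.9, "yellow": 0.9}))  # red and blue mostly gone: green
print(reflected_rgb({"cyan": 0.4, "yellow": 0.4}))  # a paler, less saturated green
```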

We also changed the shape of our container, from a sphere to a trapezoid, which gave us better control over the vertical position of the particles. We call these containers Microcups.

For the three-particle system, now on the market as E Ink Spectra and used primarily in electronic shelf labels (ESLs), we put black, white, and red or black, white, and yellow pigments into each Microcup. This technology was first launched in 2013 for retail ESLs; in 2021, we added a fourth particle, so the newest generation uses black, white, red, and yellow. These pigments are great for generating deeply saturated colors with high contrast, but they cannot be combined to create full-color images. Companies have built E Ink screens into millions of these tags, shipping them throughout the world to retailers such as Best Buy, Macy’s, and Walmart. Similar electrophoretic shelf labels that use displays from China’s DKE Co. have since come on the market.

For our true, full-color system, which we call Advanced Color ePaper (ACeP), we also use four particles, but we have dropped the black and rely on white—our paper—along with cyan, magenta, and yellow, the colors used in inkjet printers. By stopping the particles at different levels, we can use these particles to create up to 50,000 colors. The resulting display renders colors like those in newspapers or even watercolor art.

The PocketBook Inkpad 3 Pro, introduced in 2021, uses the second-generation E Ink Kaleido Plus color display. (Image: PocketBook)

E Ink launched ACeP as E Ink Gallery in 2016. Again, it wasn’t appropriate for consumer devices, because of slow refresh rates. Also, as it’s a reflective display without a backlight, the colors were too muted for consumers accustomed to bright smartphone and tablet displays. For now, it has been geared predominantly toward use in retail signs in Asia.

Realizing we still weren’t hitting the consumer-market sweet spot with our color displays, our R&D team went back to take another look at Triton, the system that used RGB color filters. What worked and what didn’t? Were there modifications we could make to finally produce a color e-reader that consumers would want?

We knew the filters were sapping brightness. We were pretty sure we could significantly reduce this loss by getting the filters closer to the electronic ink.

We also wanted to increase the resolution of the displays, which meant a much finer color-filter array. To get a resolution more in line with what consumers are accustomed to, we had to shoot for at least 200 pixels per inch. That’s about twice the density we were able to achieve with our first round of Triton displays.

Compared with the complexity of formulating inks with a variety of charges, as we had done in developing ACeP, you might think this would have been easy. But it ended up requiring a new technology to print the color filters on the glass substrate.

We had created our earlier filters by printing semi-transparent red, green, and blue ink on glass. But this glass was an added layer. So we decided to print directly onto the plastic film that holds the top electrode, adding this step after our display modules were nearing the end of the assembly process. This arrangement would get the filters as close to the electronic ink as possible. It would also allow us to increase resolution, because aligning the filters with the display pixels could be done more precisely than was possible when using a separate surface.

E Ink Spectra, the company’s first three-color display, allowed retailers to insert a pop of red or yellow into their electronic shelf labels. (Image: E Ink)

We found the type of printer we needed at the German company Plastic Logic, a partner of E Ink since the early days of the company. But this printer was intended for use in an R&D lab, not for high-volume production. The processes it used had to be converted to operate in a different, production-ready machine.

We also needed to figure out new printing patterns for the color filter. These are the actual shapes and arrangements of the red, blue, and green filters. We had found through working on Triton that printing the filters as a simple square grid was not the best option, as the pattern could be visible during certain image transitions. And so the hunt for the perfect pattern was on. We went through many iterations, considering the angle at which light hit the display, as this angle could easily shift the color seen by the user. We evaluated a grid, straight printed lines, long lines, and a host of other designs, and settled on a pattern of short lines.

Because this is a reflective display, the more light hitting the display, the brighter it is. The research team decided to add a front light to the display, something that was not part of Triton, working hard to ensure that the light rays hit the ink layer at an angle that maximizes reflectivity. Using a front light increases energy consumption, of course, but it’s worth it in this case.

As a result, E Ink’s new color technology, E Ink Kaleido, has significantly more saturated colors and a better contrast ratio than E Ink Triton. And finally, a full-color electronic-ink display was ready for use in consumer products.

The first official batch of Kaleido displays rolled off the manufacturing line in late 2019. We began shipping to customers soon after, and you can now see the technology in products like the Hisense A5C, the iFlytek Book C1, and the PocketBook Color, all of which were launched in 2020. A second generation of Kaleido, called Kaleido Plus, began shipping in early 2021, with products released by Onyx and PocketBook and more launching soon. This update improved color saturation thanks to adjustments made in the printing pattern and the light guides for the front light.

As part of E Ink’s manufacturing process, microcapsules of ink coat plastic film. The film is then dried, inspected, rerolled, and sent on for further processing. (Image: E Ink)

We have a few things to work on. Light efficiency, the fraction of incoming light that makes its way back out to the user’s eyes, is good but it could be better. We are continuing to work on our film layers to further cut this loss.

We are also working to improve resolution, both by continuing to refine our printing pattern and by using denser circuitry in the electronics that sit below the ink layer and switch the voltages that move the charged particles.

We are also continuing to work on our filterless, multipigment electronic-ink technology. We expect to release a new generation for use in signage soon, and it will include brighter colors and faster page updates. Someday we might even be able to move this into consumer devices.

When E Ink’s researchers set out exploring color electronic ink in the early 2000s, they thought it would be a matter of a few years to fruition, given our expertise with the technology. After all, black-and-white e-paper took only 10 years from concept to commercialization. The road to full color turned out to be much longer. But, just like Dorothy in the Wizard of Oz, we finally made it over the rainbow.

This article appears in the February 2022 print issue as “E Ink’s Technicolor Moment.”

Medal of Honor Goes to Asad M. Madni, Microsensor and Systems Pioneer

Par Joanna Goodrich


IEEE Life Fellow Asad M. Madni is the recipient of this year’s IEEE Medal of Honor. He is being recognized “for pioneering contributions to the development and commercialization of innovative sensing and systems technologies, and for distinguished research leadership.”


The IEEE Foundation sponsors the award.

Madni has been a distinguished adjunct professor of electrical and computer engineering and distinguished scientist since 2011 at the Samueli School of Engineering at the University of California, Los Angeles. He is also a faculty Fellow at the UCLA Institute of Transportation Studies and the university’s Connected Autonomous Electric Vehicle consortium.

Before starting his career in academia, Madni served as chairman, president, and chief executive of Systron Donner and president, chief operating officer, and chief technology officer of BEI.

Madni led the development and commercialization of intelligent microsensors and systems for the aerospace, defense, industrial, and transportation industries. The GyroChip technology he helped develop at BEI revolutionized navigation and stability in aerospace and automotive systems, making them safer.

While at BEI, he also led the development of an extremely slow motion servo control system for NASA’s Hubble Space Telescope’s star selector. The system, which is still used today, provides the telescope with unprecedented pointing accuracy and stability, allowing astronomers to make new discoveries and learn more about the universe’s history.

“Dr. Madni has outstanding accomplishments in both managing research and development, and in personally inventing and innovating technologies at the cutting edge of his field,” says one engineer who endorsed Madni for the award. “I know of no one else more deserving of the IEEE Medal of Honor.”

SMART SENSORS

Under Madni’s leadership, BEI’s quartz rate sensor technology, later known as the GyroChip, was developed in the early 1990s. The technology is the first microelectromechanical system (MEMS)-based gyroscope and inertial measurement unit for aerospace and automotive safety applications, according to an entry about Madni on the Engineering and Technology History Wiki. It is smaller and more cost-efficient and reliable than prior technologies.

The GyroChip is used worldwide in more than 90 types of aircraft, including the stability control systems of the Boeing 777; the yaw damper for the Boeing 737; and in most business jets as a sensing element in attitude control and reference programs. It also is used for guidance, navigation, and control in major U.S. missiles, underwater autonomous vehicles, and helicopters, as well as NASA’s Mars rover Sojourner and AERCam Sprint autonomous robotic camera. The GyroChip is also employed in the U.S. Civil Air Patrol’s Airborne Real-time Cueing Hyperspectral Enhanced Reconnaissance system, which is deployed in search-and-rescue missions.

After the aerospace and defense markets began to decline following the end of the Cold War, Madni led the defense conversion of the GyroChip technology from the aerospace and defense sectors to the automotive and commercial aviation markets. The GyroChip became the foundation of vehicle dynamic control, which monitors a driver’s actions including braking and steering to combat the loss of steering control that can occur in unsafe driving conditions. The GyroChip is used in more than 80 models of passenger cars worldwide for electronic stability control and rollover protection.

The GyroChip and numerous other sensing, actuation, and signal-processing techniques developed by Madni laid the foundation for autonomous vehicles. The technologies and techniques are used for features such as lane-change assist, autonomous cruise control, steering and wheel-speed detection, navigation, and drowsy- and drunken-driver detection.

While at Systron Donner, Madni led the development of RF and microwave systems and instrumentation that significantly enhanced the combat readiness of the U.S. Navy and its allies. The technologies gave the U.S. Department of Defense the ability to simulate, for warfare training, a wider range of threats representative of electronic-countermeasure (ECM) environments.

His current research focuses on the development of wideband instruments with ultrahigh data throughput to detect one-time rare events and cancer cells in the bloodstream, as well as a single-shot network analyzer for fast characterization of electronic and optoelectronic devices.

Madni also is leading research in the areas of computational sensing; wearable sensors; artificial intelligence and machine learning; and demand-and-response techniques for the smart grid and electric vehicles.

He is a member of the U.S. National Academy of Engineering and a Fellow of the U.S. National Academy of Inventors, the Royal Academy of Engineering, and the Canadian Academy of Engineering.

Madni, an eminent member of the IEEE Eta Kappa Nu honor society, has received numerous other awards over the years. They include the 2020 American Society of Mechanical Engineers Soichiro Honda Medal; the 2019 IEEE Frederik Philips Award; a 2016 Ellis Island Medal of Honor; the 2012 IEEE Aerospace and Electronic Systems Society’s Pioneer Award; the 2015 Institution of Engineering and Technology J.J. Thomson Medal; the 2010 IEEE Instrumentation and Measurement Society’s Career Excellence Award; and an IEEE Millennium Medal.

Video Friday: An Agile Year

Par Evan Ackerman


Video Friday is your weekly selection of awesome robotics videos, collected by your friends at IEEE Spectrum robotics. We’ll also be posting a weekly calendar of upcoming robotics events for the next few months; here's what we have so far (send us your events!):

ICRA 2022: 23–27 May 2022, Philadelphia
ERF 2022: 28–30 June 2022, Rotterdam, Germany
CLAWAR 2022: 12–14 September 2022, Açores, Portugal

Let us know if you have suggestions for next week, and enjoy today's videos.


Agility had a busy 2021. This is a long video, but there's new stuff in it (or new to me, anyway), including impressive manipulation skills, robust perceptive locomotion, jumping, and some fun costumes.

[ Agility Robotics ]

Houston Mechatronics is now Nauticus Robotics, and they have a fancy new video to prove it.

[ Nauticus ]

Club_KUKA is an unprecedented KUKA show cell that combines entertainment and robotics with technical precision and artistic value. All in all, the show cell is home to a cool group called the Kjays. A KR3 AGILUS at the drums loops its rhythms and sets the beat. The KR CYBERTECH nano is our nimble DJ with rhythm in his blood. A KR AGILUS performs as a light artist and enchants with soft and expansive movements. And an LBR iiwa, mounted on the ceiling, keeps an eye on the unusual robot party.

And if that was too much for you to handle (?), here's "chill mode:"

[ Kuka ]

The most amazing venue for the 2022 Winter Olympics is the canteen.

[ SCMP ]

A mini documentary thing on ANYbotics from Kaspersky, the highlight of which is probably a young girl meeting ANYmal on the street and asking the important questions, like whether it comes in any other colors.

[ ANYbotics ]

If you’re looking for a robot that can carry out maintenance tasks, our teleoperation systems can give you just that. Think of it as remote hands that are able to perform tasks, without you having to be there, on location. You’re still in full control, as the robot hands will replicate your hand movements. You can control the robot from anywhere you like, even from home, which is a much safer and environmentally friendly approach.

[ Shadow Robot ]

If I had fingers like this, I'd be pretty awesome at manipulating cubes, too.

[ Yale ]

The open-source, artificially intelligent prosthetic leg designed by researchers at the University of Michigan will be brought to the research market by Humotech, a Pittsburgh-based assistive technology company. The goal of the collaboration is to speed the development of control software for robotic prosthetic legs, which have the potential to provide the power and natural gait of a human leg to prosthetic users.

[ Michigan Robotics ]

This video is worth watching entirely for the shoulder-dislocating high-five.

[ Paper ]

Of everything in this SoftBank Robotics 2021 rewind, my favorite highlight is the giant rubber duck avoidance.

[ SoftBank ]

On this episode of the Robot Brains Podcast, Pieter talks with David Rolnick about how machine learning can be applied to climate change.

[ Robot Brains ]

A talk from Stanford's Mark Cutkosky on "Selectively Soft Robotics: Integrating Smart Materials in Soft Robotics."

[ BDML ]

This is a very long video from Yaskawa, which goes over many (if not most or all) of the ways that its 500,000 industrial arms are currently being used. It's well labeled, so I recommend just skipping around to the interesting parts, like cow milking.

[ Yaskawa ]

Spin Me Up, Scotty—Up Into Orbit

Par Philip E. Ross


At first, the dream of riding a rocket into space was laughed off the stage by critics who said you’d have to carry along fuel that weighed more than the rocket itself. But the advent of booster rockets and better fuels let the dreamers have the last laugh.

Hah, they said: To put a kilogram of payload into orbit we just need 98 kilograms of rocket plus rocket fuel.

What a ratio, what a cost. To transport a kilogram of cargo, commercial air freight services typically charge about US $10; spaceflight costs reach $10,000. Sure, you can save money by reusing the booster, as Elon Musk and Jeff Bezos are trying to do, but it would be so much better if you could dispense with the booster and shoot the payload straight into space.

The first people to think along these lines used cannon launchers, such as those in Project HARP (High Altitude Research Project), in the 1960s. Research support dried up after booster rockets showed their mettle. Another idea was to shoot payloads into orbit along a gigantic electrified ramp, called a railgun, but that technology still faces hurdles of a basic scientific nature, not least the need for massive banks of capacitors to provide the jolt of energy.

Imagine a satellite spinning in a vacuum chamber at many times the speed of sound. The gates of that chamber open up, and the satellite shoots out faster than the air outside can rush back in—creating a sonic boom when it hits the wall of air.

Now SpinLaunch, a company founded in 2015 in Long Beach, Calif., proposes a gentler way to heave satellites into orbit. Rather than shoot the satellite in a gun, SpinLaunch would sling it from the end of a carbon-fiber tether that spins around in a vacuum chamber for as long as an hour before reaching terminal speed. The tether lets go milliseconds before gates in the chamber open up to allow the satellite out.

“Because we’re slowly accelerating the system, we can keep the power demands relatively low,” David Wrenn, vice president for technology, tells IEEE Spectrum. “And as there’s a certain amount of energy stored in the tether itself, you can recapture that through regenerative braking.”

SpinLaunch www.youtube.com

The company has raised about $100 million. Among the backers are the investment arms of Airbus and Google and the Defense Innovation Unit, part of the U.S. Department of Defense.

SpinLaunch began with a lab centrifuge that measures about 12 meters in diameter. In November, a 33-meter version at Spaceport America test-launched a payload thousands of meters up. Such a system could loft a small rocket, which would finish the job of reaching orbit. A 100-meter version, now in the planning stage, should be able to handle a 200-kg payload.

Wrenn answers all the obvious questions. How can the tether withstand the g-force when spinning at hypersonic speed? “A carbon-fiber cable with a cross-sectional area of one square inch (6.5 square centimeters) can suspend a mass of 300,000 pounds (136,000 kg),” he says.
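
A back-of-the-envelope check of that load, using our own assumptions rather than SpinLaunch's published figures: with the 100-meter launcher's roughly 50-meter arm, a 200-kilogram payload, and an assumed tip speed of about 2,200 meters per second, the centripetal acceleration works out to roughly 10,000 g.

```python
G0 = 9.81                # standard gravity, m/s^2
radius_m = 50.0          # half of the planned 100-m launcher
tip_speed_mps = 2_200.0  # assumed tip speed, several times the speed of sound
payload_kg = 200.0       # payload figure given in the article

accel = tip_speed_mps ** 2 / radius_m  # centripetal acceleration, v^2 / r
force_n = payload_kg * accel           # tension contributed by the payload alone

print(f"{accel / G0:,.0f} g")                # roughly 10,000 g
print(f"{force_n / 1e6:.1f} MN of tension")  # about 19 MN at the tether tip
```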

How much preparation do you need between shots? Not much, because the chamber doesn’t have to be superclean. If the customer wants to loft a lot of satellites—a likely desideratum, given the trend toward massive constellations of small satellites—the setup could include motors powerful enough to spin up in 30 minutes. “Upwards of 10 launches per day are possible,” Wrenn says.

How tight must the vacuum be? A “rough” vacuum suffices, he says. SpinLaunch maintains the vacuum with a system of airlocks operated by those millisecond-fast gates.

Most parts, including the steel for the vacuum chamber and carbon fiber, are off-the-shelf, but those gates are proprietary. All Wrenn will say is that they’re not made of steel.

So imagine a highly intricate communications satellite, housed in some structure, spinning at many times the speed of sound. The gates open up, the satellite shoots out far faster than the air outside can rush back in. Then the satellite hits the wall of air, creating a sonic boom.

No problem, says Wrenn. Electronic systems have been hurtling from vacuums into air ever since the cannon-launching days of HARP, some 60 years ago. SpinLaunch has done work already on engineering certain satellite components to withstand the ordeal—“deployable solar panels, for example,” he says.

SpinLaunch says it will announce the site for its full-scale orbital launcher within the next five months. It will likely be built on a coastline, far from populated areas and regular airplane service. Construction costs would be held down if the machine can be built up the side of a hill. If all goes well, expect to see the first satellite slung into orbit sometime around 2025.

Intel Invests $20 Billion in Ohio for Advanced Fabs

Par Samuel K. Moore


Intel has chosen to expand its advanced manufacturing in a U.S. state that neither the company nor any other chipmaker has a presence in: Ohio. Intel announced today that it will build two leading-edge logic fabs east of Columbus at a cost of US $20 billion. Construction is set to start in 2022, and production should begin in 2025, Intel says. The company gave no information about the fabs' capacities in terms of wafers per month. But it said that the site, situated on 4 square kilometers in Licking County, could be expanded over the decade for a total investment of $100 billion.

The new fabs are part of a reset of Intel's manufacturing, a plan called IDM 2.0, that would see Intel regain its ability to make chips at the most advanced nodes and offer foundry services to other companies. It also comes as the United States embarks on an effort to grow advanced chipmaking capacity. Currently, all cutting-edge chipmaking is done by Taiwan Semiconductor Manufacturing Co. in Taiwan and Samsung in South Korea. Both companies have announced plans for new cutting-edge fabs in the United States, and the government plans to spur domestic production with $52 billion in incentives. The bill that would supply that money, the CHIPS Act, has passed in the U.S. Senate, but has not yet been taken up in the House of Representatives. "The scope and pace of Intel’s expansion in Ohio...will depend heavily on funding from the CHIPS Act," said Keyvan Esfarjani, Intel senior vice president of manufacturing, supply chain, and operations in a press release.

According to Intel's manufacturing road map under the IDM 2.0 plan, by the time the Ohio fabs begin production, the company's most advanced production technology will be its Intel 20A node, which it expects will bring it back into a leadership position versus TSMC and Samsung. That technology will combine a new kind of transistor—what Intel calls RibbonFET but is more generically known as the nanosheet transistor—with backside power delivery and buried power rails. The former will allow for smaller, more efficient transistors with a degree of design flexibility that can't be achieved using today's FinFET devices. The latter moves the interconnects that supply power to circuits, which are relatively large, beneath the transistors. This frees space for interconnects that carry data and signals in the area above the transistors, making chips more dense. According to Randhir Thakur, senior vice president and president of Intel Foundry Services, the Ohio site will be designed to support the next generation of manufacturing, Intel 18A, as well.

The new fabs will likely depend on the next generation of lithography systems, high-numerical-aperture extreme ultraviolet lithography. Intel issued its first purchase order for such a machine from ASML earlier this week as part of a long-term joint collaboration.

So, why Ohio? It's a fair question. “Ohio is an ideal location for Intel’s U.S. expansion because of its access to top talent, robust existing infrastructure, and long history as a manufacturing powerhouse," Esfarjani said in a press release. But the same is true of other locations.

Advanced fabs tend to cluster together for straightforward reasons: They need both specialized infrastructure—such as supplies of specialty equipment, chemicals, and gases—and a workforce trained to use it. That combination likely came into play when TSMC announced in 2020 the construction of a $12 billion fab in Arizona, where Intel already has several fabs and is spending $20 billion to build another. It's possible that with the existing expansion and competition from TSMC, Arizona was no longer attractive. But there are other tech hubs, such as the Albany, N.Y. region, that were also passed over.

In comparison, Ohio is a chipmaking frontier. Intel is likely counting on Ohio State University, located just west of its site, to supply the top end of the talent it will need. Its college of engineering has nearly 8,000 undergraduates in its program and about 1,800 graduate students. Intel says it will spend $100 million over the next 10 years partnering with local universities and community colleges. As for the infrastructure, Intel provided statements of support from chip equipment makers Applied Materials and Lam Research, fab subsystem specialist Ultra Clean Technology, and specialty gases and chemicals supplier Air Products.

For Better AR Cameras, Swap Plastic Lenses for Silicon Chips

Par Tekla S. Perry


This week, startup Metalenz announced that it has created a silicon chip that, paired with an image sensor, can distinguish objects by the way they polarize light. The company says its “PolarEyes” will be able to make facial authentication less vulnerable to spoofing, improve 3D imaging for augmented and virtual reality, aid in telehealth by distinguishing different types of skin cells, and enhance driving safety by spotting black ice and other hard-to-see road hazards.

The company, founded in 2017 and exiting stealth a year ago, previously announced that it was commercializing flat optical elements composed of silicon nanostructures as an alternative to traditional optics for use in mobile devices.

Metalenz recently began a partnership with STMicroelectronics to move its technology into mass production and expects to be shipping imaging packages sometime in the second quarter of this year, according to CEO Robert Devlin.

IEEE Spectrum spoke with Devlin last week to find out more about the company’s technology and what it will be able to do when it gets into consumer hands.

Before we talk about your new polarization optics, briefly help us understand how your basic technology works.

Robert Devlin: We use standard semiconductor lithography on 12-inch wafers to create nanostructures in the form of little pillars. These structures are smaller than the wavelength of light, so by changing the radius of the pillars, we can use them to control the length of the optical path of the light passing through. For the first generation of this technology, we are working with near-infrared wavelengths, which transmit through silicon, rather than reflecting as visible light would do.
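
For readers who want that in concrete terms, here is a textbook sketch rather than Metalenz's design data: a flat lens must impose a hyperbolic phase profile across its surface, and the pillar radius at each point is chosen, from a measured library, to produce that local phase. The 940-nanometer wavelength, 2-millimeter focal length, and linear radius mapping below are all assumptions made for illustration.

```python
import math

lam = 940e-9   # assumed near-infrared design wavelength, meters
focal = 2e-3   # assumed focal length: 2 mm

def target_phase(r: float) -> float:
    """Phase delay a flat lens needs at radius r so all rays focus in phase,
    phi(r) = -(2*pi/lam) * (sqrt(r^2 + f^2) - f), wrapped to [0, 2*pi)."""
    phi = -(2 * math.pi / lam) * (math.sqrt(r * r + focal * focal) - focal)
    return phi % (2 * math.pi)

def pillar_radius(phase: float, r_min: float = 50e-9, r_max: float = 180e-9) -> float:
    """Hypothetical stand-in for the measured phase-to-pillar-radius lookup."""
    return r_min + (phase / (2 * math.pi)) * (r_max - r_min)

for r_mm in (0.0, 0.2, 0.4):
    phi = target_phase(r_mm * 1e-3)
    print(f"r = {r_mm} mm -> phase {phi:.2f} rad -> pillar radius {pillar_radius(phi) * 1e9:.0f} nm")
```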

What’s the advantage of using nanostructures over traditional lenses?

Devlin: Our technology is flat, for one. When you are using a curved lens to put an image on a flat sensor, you have to make all sorts of corrections using multiple lenses and finely controlling the spacing between the lenses to make it work; we don’t have to do that. We can also bring the functions of multiple traditional lenses onto one chip. And we can manufacture these lenses in the same semiconductor foundries as the image sensors and electronics used in camera modules.

The iPhone face ID system, for example, has three lenses: one diffractive lens, for splitting infrared light being projected onto your face into a grid of dots, and two refractive, for collimating the lasers to project onto the face. Some of these modules have an optical path that’s folded by mirrors, because otherwise they would be too thick to fit into compact spaces required for consumer devices. With the single-chip flat optics, we can shrink the overall thickness, and don’t need folded optical paths or mirrors in even the most space-constrained applications.

3D mapping is another infrared imaging application that uses multiple lenses today. Augmented reality systems need to create a 3D map of the world around them in real time, in order to know where to place the virtual objects. Today, these use a time-of-flight system—again, working in the infrared part of the spectrum—which sends out pulses of light and times how long they take to get back to the image sensor. This system requires several refractive lenses to focus the outgoing light and a diffractive lens to multiply the light to a grid of points. They also require multiple lenses on the imaging side to collect the light from the scene. Some of the lenses are needed to correct for the curvature of the lenses themselves, some are needed to make sure the image is crisp across the entire field of view. Using nanostructures, we can put all of these functions onto one chip.

So that’s what the chips you announced do?

Devlin: Yes, and the first product to use our technology, shipping in the second quarter of this year, will be a module for use in 3D imaging.

Initially for mobile phones?

Devlin: For consumer devices generally but also for mobile phones.

What about AR?

Devlin: Of course, everyone is eagerly waiting for AR glasses, and the form factor remains a problem. I think what we are doing—simplifying the optics—will help solve the form-factor problem. People get suspicious if they see a big camera sitting on someone’s face. Ours can be very small, and, for this application, infrared imaging is appropriate. It allows the system to understand the world around it in order to meld the virtual world with it. And it isn’t affected by changes in lighting conditions.

Okay, let’s talk about what you’re announcing now, the polarization technology, your PolarEyes.

Devlin: When we spoke a year ago, I talked about Metalenz wanting to not just simplify existing mobile-camera modules, but to take imaging systems that have been locked away in scientific laboratories because they are too expensive, complex, or big, and combine their optics into a single layer that would be small enough and cheap enough for consumer devices.

One of those imaging systems involves the polarization of light. Polarization is used in industrial and medical labs; it can be used to see where cancerous cells start and end, it can in many cases tell what material something is made of. In industry, it can be used to detect features of black objects, the shape of transparent objects, or even scratches on transparent objects. Today, complete polarization cameras measure around 100 by 80 by 80 millimeters, with optics that can cost hundreds of dollars.

The PolarEyes chip from Metalenz sorts light by its polarization, allowing the pixels of images captured to be color-coded by polarization. In this case, the difference in polarization between materials makes it obvious when a mask obstructs skin. (Images: Metalenz)

Using metasurface technology, we can bring the size down to 3 by 6 by 10 mm and the price down to [US] $2 to $3. And unlike many typical systems today, which take multiple views at different polarizations sequentially and use them to build up an image, we can use one of our chips to take those multiple views simultaneously, in real time. We take four views—that turns out to be the number we need to combine into a normal image or to create a full map of the scene color-coded to indicate the complete polarization at each pixel.
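
One standard way to combine four such views—assuming they correspond to linear analyzer angles of 0, 45, 90, and 135 degrees, which the article does not confirm for Metalenz's chip—is to form the linear Stokes parameters at each pixel:

```python
import numpy as np

def linear_stokes(i0, i45, i90, i135):
    """Per-pixel linear Stokes parameters from four polarization-filtered views."""
    s0 = i0 + i90                     # total intensity (the "normal image")
    s1 = i0 - i90
    s2 = i45 - i135
    dolp = np.sqrt(s1**2 + s2**2) / np.maximum(s0, 1e-9)  # degree of linear polarization
    aolp = 0.5 * np.arctan2(s2, s1)                       # angle of linear polarization
    return s0, dolp, aolp

# Tiny synthetic example: one strongly polarized pixel, one unpolarized pixel.
i0   = np.array([0.9, 0.5])
i45  = np.array([0.5, 0.5])
i90  = np.array([0.1, 0.5])
i135 = np.array([0.5, 0.5])
s0, dolp, aolp = linear_stokes(i0, i45, i90, i135)
print(dolp)  # [0.8, 0.0] -- skin, fabric, and glass separate cleanly on maps like this
```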

Besides the medical and industrial uses you mentioned, why else are polarized images useful?

Devlin: When you get these into mobile devices, we will likely find all sorts of applications we haven’t thought of yet, and that’s really exciting. But we do have an initial application that we think will help get the technology adopted—that’s in facial recognition. Today’s facial recognition systems are foiled by masks. That’s not because they couldn’t get enough information from above the mask to recognize the user. They use a high-res 2D image that provides enough data to the algorithms to do that. But they also use a 3D imaging system that is very low resolution. It’s meant to make sure that you’re not trying to spoof the system with a mask or photograph, and that’s what makes facial recognition fail when you are wearing a mask. A polarization imaging module could easily distinguish between skin and mask and solve that problem.

What You Need to Know About the FAA 5G Kerfuffle

Par Michael Koziol


AT&T and Verizon finally fired up vital components of their 5G networks in the United States on Wednesday. Mostly.

The two companies had already agreed twice to delay the activation of the parts of their networks that operated on the so-called C-band, because the U.S. Federal Aviation Administration had raised concerns about the spectrum’s usage.

The C-band stretches from either 4 to 8 gigahertz or 3.7 to 4.2 GHz, depending on whom you ask. (The IEEE considers it to be the former, while the U.S. Federal Communications Commission says it’s the latter.) Regardless, the issue is the swath of spectrum hovering around either side of that 4-GHz mark. Above it are the frequencies that airplanes use, and below it are frequencies opened up for use by wireless network operators to meet the growing bandwidth demands of their 5G networks.

What’s the problem with 5G and planes?

The FAA has raised specific concerns over airplanes' radio altimeters, which help planes (and their pilots) determine how far above the ground an aircraft is by bouncing a signal off the ground below and timing how long it takes to return to the plane. Such data are crucial when the plane is taking off and landing, particularly when visibility is low: at night and in fog or rain.

Therefore, anything that potentially messes with radio altimeter signals could be bad news. If other entities are using the same frequencies (between 4.2 and 4.4 GHz), altimeters could be affected and return incorrect altitude measurements or, worse, have their signals blocked entirely.

It’s important to note that the C-band frequencies being used by AT&T and Verizon are not in that same 4.2-to-4.4-GHz band. Both companies’ C-band allotments are between 3.7 and 3.98 GHz. The concern raised by the FAA is whether those frequencies are too close. Signals broadcast on frequencies that are similar, but not exact, to one another can still cause interference, although not as severe as if they were on the same frequency.
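
Two quick numbers, worked from figures in the text, frame the dispute: the microsecond-scale round trips an altimeter must time, and the width of the frequency gap separating the two services.

```python
C = 299_792_458.0  # speed of light, m/s

def round_trip_us(altitude_m: float) -> float:
    """Time for an altimeter's signal to reach the ground and return, in microseconds."""
    return 2 * altitude_m / C * 1e6

print(f"{round_trip_us(60):.2f} microseconds at 60 m")  # about 0.40 us on final approach

altimeter_low_ghz = 4.20  # bottom of the radio-altimeter band
carrier_high_ghz = 3.98   # top of AT&T and Verizon's C-band allotments
print(f"guard band: {(altimeter_low_ghz - carrier_high_ghz) * 1000:.0f} MHz")  # 220 MHz
```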

So what happens now that AT&T and Verizon are turning on their C-band radios?

For now, AT&T and Verizon aren’t turning on C-band radios close to airports, to avoid interfering with takeoffs and landings. After the companies indicated that they would not delay switching on their C-band spectrum for a third time, some airlines had begun to cancel flights. Most of those flights were rescheduled after AT&T and Verizon’s decision to limit C-band usage near airports. In the end, fewer than 200 flights were canceled on the first day of C-band operation.

Meanwhile, the FAA is working to clear radio altimeters already in use. By the time AT&T and Verizon had turned on their radios, the agency had okayed five different altimeters used in certain Boeing and Airbus planes. In total, the FAA estimates that 62 percent of the U.S. commercial fleet can still safely land even in low-visibility situations where 5G C-band radios are operating. AT&T and Verizon agreed on Tuesday (the day before their C-band activation delay was set to expire) to keep C-band radios turned off near airports, to serve as an additional precaution while the FAA assesses the remaining radio altimeters in use.

The affected cell towers largely belong to Verizon, which has agreed to keep 5G radios on roughly 500 of its towers switched off—about 10 percent of its total C-band deployment. Those towers, and the smaller number owned by AT&T, will remain switched off until the two companies and the FAA arrive at a more permanent solution. It's not clear yet how long that pause will last, or what a permanent solution might entail beyond the altimeter vetting the FAA is already conducting.

Wait a second—what about T-Mobile?

There are three big mobile operators in the United States, and the C-band kerfuffle has only involved AT&T and Verizon. That’s because T-Mobile lucked out.

In 2020, T-Mobile and Sprint completed a drawn-out merger process. When it was done, the newly merged T-Mobile had an abundance of so-called “midband spectrum.” This includes, but isn’t limited to, C-band spectrum.

One of the most important things to understand about 5G is that, to deliver the promised downlink speeds (up to 20 gigabits per second at peak!), it requires more bandwidth than 3G or 4G networks. Thus, cellular-network radios have been creeping into higher and higher frequencies on the radio spectrum. These higher frequencies don’t travel as far as the traditional frequencies used for cellular communications, but far more spectrum is available there, and more bandwidth means more bits per second.
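
The bandwidth-capacity link is captured by the Shannon formula C = B log2(1 + SNR): capacity grows linearly with bandwidth. The comparison below assumes a 20-decibel signal-to-noise ratio purely for illustration.

```python
import math

def shannon_capacity_mbps(bandwidth_mhz: float, snr_db: float = 20.0) -> float:
    """Shannon limit C = B * log2(1 + SNR), returned in megabits per second."""
    snr = 10 ** (snr_db / 10)
    return bandwidth_mhz * math.log2(1 + snr)

print(round(shannon_capacity_mbps(20)))   # a 20-MHz LTE-style carrier: ~130 Mb/s
print(round(shannon_capacity_mbps(280)))  # the 280 MHz between 3.7 and 3.98 GHz: ~1,860 Mb/s
```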

As it so happens, T-Mobile had all the midband spectrum it needed to begin rolling out 5G networks courtesy of the merger. And that midband spectrum is nowhere near the frequencies used by radio altimeters—it’s centered around 2.5 GHz. AT&T and Verizon, meanwhile, acquired their problematic midband spectrum from FCC auctions, in part to compete with T-Mobile’s extensive existing midband spectrum.

Have airports in other countries faced this problem?

Nope. This has been a uniquely U.S. problem. That’s because different countries allocate radio spectrum in different ways for different uses. When 5G was being developed, and it became clear that midband frequencies would play an important role, the specific frequency bands that would be used were not defined in order to avoid selecting frequencies that might be available in some countries but already assigned for military, scientific, or other uses in others.

In Europe, for example, 5G midband rollouts have proceeded without much concern for radio altimeters, because the spectrum allocated is at just slightly lower frequencies (3.4–3.8 GHz in Europe, as opposed to the mentioned 3.7–3.98 GHz in the United States). Meanwhile, countries like Canada have imposed buffer zones like the ones AT&T and Verizon have agreed to. The Australian Communications and Media Authority has said it believes that a 200-MHz guard band (like the one in the United States) between 5G networks and radio altimeters is sufficient in itself.

And it remains to be seen whether the FAA is acting simply out of an abundance of caution. Currently, Ireland, Denmark, and Finland have operating 5G networks with midband signals that are more powerful than approved in the United States, with no effect on altimeters.

Legged Robots Learn to Hike Harsh Terrain

Par Evan Ackerman


Robots, like humans, generally use two different sensory modalities when interacting with the world. There’s exteroceptive perception (or exteroception), which comes from external sensing systems like lidar, cameras, and eyeballs. And then there’s proprioceptive perception (or proprioception), which is internal sensing, involving things like touch and force sensing. Generally, we humans use both of these sensing modalities at once to move around, with exteroception helping us plan ahead and proprioception kicking in when things get tricky. You use proprioception in the dark, for example, where movement is still totally possible—you just do it slowly and carefully, relying on balance and feeling your way around.

For legged robots, exteroception is what enables them to do all the cool stuff—with really good external sensing and the time (and compute) to do some awesome motion planning, robots can move dynamically and fast. Legged robots are much less comfortable in the dark, however, or really under any circumstances where the exteroception they need either doesn’t come through (because a sensor is not functional for whatever reason) or just totally sucks because of robot-unfriendly things like reflective surfaces or thick undergrowth or whatever. This is a problem because the real world is frustratingly full of robot-unfriendly things.

The research that the Robotic Systems Lab at ETH Zürich has published in Science Robotics showcases a control system that allows a legged robot to evaluate how reliable the exteroceptive information that it’s getting is. When the data are good, the robot plans ahead and moves quickly. But when the data set seems to be incomplete, noisy, or misleading, the controller gracefully degrades to proprioceptive locomotion instead. This means that the robot keeps moving—maybe more slowly and carefully, but it keeps moving—and eventually, it’ll get to the point where it can rely on exteroceptive sensing again. It’s a technique that humans and animals use, and now robots can use it too, combining speed and efficiency with safety and reliability to handle almost any kind of challenging terrain.
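
As a conceptual sketch only—the actual controller in the paper is a learned recurrent policy, not a hand-written filter—the gist can be written as an estimator that tracks a confidence score for the map and blends it with what the feet report:

```python
# Conceptual sketch, not the learned belief-state encoder from the paper: keep a
# confidence score for the exteroceptive map and blend it with the proprioceptive
# estimate of ground height under each foot.
class TerrainEstimator:
    def __init__(self, alpha: float = 0.2):
        self.confidence = 1.0  # 1.0 = trust the map fully, 0.0 = feet only
        self.alpha = alpha     # how quickly confidence reacts to surprises

    def update(self, map_height: float, foot_contact_height: float) -> float:
        # Surprise: how far the actual contact was from where the map said the ground is.
        surprise = abs(foot_contact_height - map_height)
        target_conf = 1.0 if surprise < 0.05 else 0.0
        self.confidence += self.alpha * (target_conf - self.confidence)
        # Blend: lean on the map when confident, on the feet when not.
        return self.confidence * map_height + (1 - self.confidence) * foot_contact_height

est = TerrainEstimator()
# Tall grass: the map reports a surface 0.3 m above where the foot actually lands.
for _ in range(10):
    height = est.update(map_height=0.3, foot_contact_height=0.0)
print(round(est.confidence, 2), round(height, 2))  # confidence collapses; estimate heads to 0
```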


We got a compelling preview of this technique during the DARPA SubT Final Event last fall, when it was being used by Team CERBERUS’s ANYmal legged robots to help them achieve victory. I’m honestly not sure whether the SubT final course was more or less challenging than some mountain climbing in Switzerland, but the performance in the video below is quite impressive, especially since ANYmal managed to complete the uphill portion of the hike 4 minutes faster than the suggested time for an average human.

Learning robust perceptive locomotion for quadrupedal robots in the wild www.youtube.com

Those clips of ANYmal walking through dense vegetation and deep snow do a great job of illustrating how well the system functions. While the exteroceptive data is showing obstacles all over the place and wildly inaccurate ground height, the robot knows where its feet are, and relies on that proprioceptive data to keep walking forward safely and without falling. Here are some other examples showing common problems with sensor data that ANYmal is able to power through:

A grid of 12 images shows the ANYmal robot in many different challenging situations.

Other legged robots do use proprioception for reliable locomotion, but what’s unique here is this seamless combination of speed and robustness, with the controller moving between exteroception and proprioception based on how confident it is about what it's seeing. And ANYmal’s performance on this hike, as well as during the SubT Final, is ample evidence of how well this approach works.

For more details, we spoke with Takahiro Miki, a Ph.D. student in the Robotic Systems Lab at ETH Zürich and first author on the paper.

The paper’s intro says, “Until now, legged robots could not match the performance of animals in traversing challenging real-world terrain.” Suggesting that legged robots can now “match the performance of animals” seems very optimistic. What makes you comfortable with that statement?

Takahiro Miki: Achieving a level of mobility similar to animals is probably the goal for many of us researchers in this area. However, robots are still far behind nature and this paper is only a tiny step in this direction.

Your controller enables robust traversal of "harsh natural terrain." What does “harsh” mean, and can you describe the kind of terrain that would be in the next level of difficulty beyond “harsh”?

Miki: We aim to send robots to places that are too dangerous or difficult to reach for humans. In this work, by “harsh”, we mean the places that are hard for us, not only for robots. For example, steep hiking trails or snow-covered trails that are tricky to traverse. With our approach, the robot traversed steep and wet rocky surfaces, dense vegetation, or rough terrain in underground tunnels or natural caves with loose gravels at human walking speed.

We think the next level would be somewhere which requires precise motion with careful planning such as stepping-stones, or some obstacles that require more dynamic motion, such as jumping over a gap.

How much do you think having a human choose the path during the hike helped the robot be successful?

Miki: The intuition of the human operator choosing a feasible path for the robot certainly helped the robot’s success. Even though the robot is robust, it cannot walk over obstacles which are physically impossible, e.g., obstacles bigger than the robot or cliffs. In other scenarios such as during the DARPA SubT Challenge however, a high-level exploration and path planning algorithm guides the robot. This planner is aware of the capabilities of the locomotion controller and uses geometric cues to guide the robot safely. Achieving this for an autonomous hike in a mountainous environment, where a more semantic environment understanding is necessary, is our future work.

What impressed you the most in terms of what the robot was able to handle?

Miki: The snow stairs were the very first experiment we conducted outdoors with the current controller, and I was surprised that the robot could handle the slippery snowy stairs. Also during the hike, the terrain was quite steep and challenging. When I first checked the terrain, I thought it might be too difficult for the robot, but it could just handle all of them. The open stairs were also challenging due to the difficulty of mapping. Because the lidar scan passes through the steps, the robot couldn’t see the stairs properly. But the robot was robust enough to traverse them.

At what point does the robot fall back to proprioceptive locomotion? How does it know if the data its sensors are getting are false or misleading? And how much does proprioceptive locomotion impact performance or capabilities?

Miki: We think the robot detects whether the exteroception matches the proprioception through its foot contacts or foot positions. If the map is correct, the feet make contact where the map suggests, and the controller recognizes that the exteroception is correct and makes use of it. Once the foot contacts stop matching the ground on the map, or the feet go below the map, it recognizes that exteroception is unreliable and relies more on proprioception. We showed this in this supplementary video experiment:

Supplementary Robustness Evaluation youtu.be

However, since we trained the neural network in an end-to-end manner, where the student policy simply tries to follow the teacher’s action by capturing the necessary information in its belief state, we can only guess how it knows. In our initial approach, we fed exteroception directly into the control policy. In that setup, the robot could walk over obstacles and stairs in the lab environment, but once we went outside, it failed due to mapping failures. Combining exteroception with proprioception was therefore critical to achieving robustness.
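In the paper this gating is learned end to end inside the controller’s belief-state encoder, so there is no hand-written rule to quote. Purely as an illustration of the idea Miki describes, here is a minimal Python sketch in which map confidence decays as foot-contact mismatch grows; the function names and the exponential gating rule are our own assumptions, not the authors’ code.

```python
import numpy as np

def fuse_height(map_height, proprio_height, contact_error, k=10.0):
    """Blend an elevation-map height with a proprioceptive estimate.

    contact_error: recent mismatch (meters) between where the map said the
    ground was and where the foot actually touched down. A large mismatch
    drives the gate toward proprioception (hypothetical gating rule).
    """
    map_confidence = np.exp(-k * abs(contact_error))
    return map_confidence * map_height + (1.0 - map_confidence) * proprio_height

# Snow that the lidar maps 20 cm too high: the fused estimate leans on proprioception.
print(fuse_height(map_height=0.45, proprio_height=0.25, contact_error=0.20))
```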

How much are you constrained by the physical performance of the robot itself? If the robot were stronger or faster, would you be able to take advantage of that?

Miki: When we use reinforcement learning, the policy usually tries to use as much torque and speed as it is allowed. So if the robot were stronger or faster, we think we could increase robustness further and overcome more challenging obstacles at higher speed.

What remains challenging, and what are you working on next?

Miki: We steered the robot manually for most of the experiments (except the DARPA SubT Challenge), so adding more levels of autonomy is the next goal. As mentioned above, we want the robot to complete a difficult hike without human operators. Furthermore, there is a lot of room for improvement in the robot’s locomotion capability. For “harsher” terrain, we want the robot to perceive the world in 3D and show richer behaviors, such as jumping over stepping-stones or crawling under overhanging obstacles, which is not possible with the current 2.5D elevation map.

Detect Solar Flares and Gamma-Ray Bursts for Less Than $100

Par David Schneider


In the 1960s and ’70s, musicians would sometimes insert into their releases odd sounds that could be made intelligible only by rotating the vinyl record backward using your finger. If you suspect this is only an urban legend, load a digital version of Electric Light Orchestra’s 1975 recording of “Fire on High” into an audio editor like Audacity and play it in reverse. You’ll hear ELO drummer Bev Bevan very clearly say, “The music is reversible, but time...turn back, turn back, turn back.”

In the 1980s, vinyl records gave way to compact discs, which weren’t amenable to such “backmasking.” But at least one CD of that era contains a hidden message: Virgin Records’ 1983 release of the album Tubular Bells, recorded a decade earlier at Richard Branson’s Manor Studio in Shipton-on-Cherwell, England.


You see, an hour’s drive north from Shipton is a suburb of Rugby called Hillmorton, where at the time the British government operated a very-low-frequency (VLF) radio station to send messages to submarines. It seems the powerful emanations from this nearby station, broadcast at a radio frequency of just 16 kilohertz (within the audio range), were picked up by the electronic equipment at Branson’s studio and recorded at a level too low for anyone to notice.

After learning of this, I purchased an old CD of Tubular Bells, ripped a WAV file of one track, and piped it into a software-defined-radio package. Tuning to 16 kHz and setting the SDR software to demodulate continuous-wave signals immediately revealed Morse code. I couldn’t copy much of it, but I could make out many repetitions of VVV (“testing”) and GBR (the station’s call sign).

This inadvertent recording aptly demonstrates that VLF transmissions aren’t at all hard to pick up. And these signals can reveal more than just the presence of a powerful radio transmitter nearby. The application I had in mind was to use changes in VLF-signal strength to monitor space weather.

This drawing shows a coil antenna, external “sound card,” signal generator, protoboard, and oscilloscope. The solar-flare monitor consists of a coil antenna and external “sound card,” which connects to a laptop computer. Tuning the coil antenna to an appropriate frequency also required a signal generator, a protoboard, and an oscilloscope. James Provost

That’s possible because these VLF transmissions travel over large distances inside the globe-encircling waveguide that is formed by the Earth’s surface and the ionosphere. Solar flares—and rare astronomical events called gamma-ray bursts—can alter the ionosphere enough to change how radio signals propagate in this waveguide. I hoped to use VLF broadcasts to track such goings-on.

There’s a long history of amateur astronomers using VLF radio equipment to measure solar flares by the sudden ionospheric disturbances (SIDs) they spawn. Years ago, it was a challenge to build suitable gear for these observations, but it now takes just a few modest Amazon purchases and a laptop computer.

The first item needed is a simple coil antenna. The model I bought (US $35) actually contains two coils. One was connected to a variable capacitor, so it can be tuned to various AM-broadcast frequencies; the other coil, of just two turns and inductively coupled to the first one, was wired to the output jack. I bypassed that two-turn coil and wired the jack directly to the wider coil, adding a couple of capacitors in parallel across it to lower its resonant frequency to 25 kHz.

There’s a long history of amateur astronomers using VLF radio to measure solar flares by the ionospheric disturbances they spawn.

To choose the right capacitors, I purchased a $9 signal generator, also on Amazon, temporarily connected that wider coil in series with a 1,000-ohm resistor, and applied a sinusoidal signal to this circuit. I used an oscilloscope to identify the frequency that caused the alternating voltage across the coil to peak. With some experimentation, I was able to find a couple of ceramic capacitors (nominally 0.11 microfarads in total) to place in parallel with the coil to set the resonant frequency near the broadcast frequency of some U.S. Navy VLF transmitters.
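The article doesn’t give the coil’s inductance, but the standard LC resonance formula, f = 1/(2π√(LC)), lets you back out roughly what the quoted numbers imply. The short sketch below is our own back-of-the-envelope arithmetic, not the author’s measurements: it computes the inductance implied by 0.11 µF at 25 kHz, and the capacitance that same coil would need to resonate right at NAA’s 24 kHz.

```python
import math

# Resonant frequency of an LC tank: f = 1 / (2 * pi * sqrt(L * C)).
C = 0.11e-6           # farads, the total parallel capacitance quoted in the article
f_target = 25e3       # hertz, the stated resonant frequency

# Inductance the coil would need for this C and f (about 0.37 mH,
# a plausible value for a desktop AM loop antenna).
L = 1.0 / ((2 * math.pi * f_target) ** 2 * C)
print(f"implied coil inductance: {L * 1e3:.2f} mH")

# Conversely, the capacitance needed to resonate that same coil at NAA's 24 kHz.
C_24k = 1.0 / ((2 * math.pi * 24e3) ** 2 * L)
print(f"capacitance for 24 kHz: {C_24k * 1e6:.3f} uF")
```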

Using a scrounged 3.5-mm plug, I then plugged the modified coil antenna into the mic input of an external “sound card,” having purchased one for $34 on Amazon that allowed a sampling rate of 96 kHz. This feature was key, because my plan was to tune into a station that the U.S. Navy operates in Cutler, Maine, which goes by the call sign NAA and broadcasts at 24 kHz. Fans of Harry Nyquist will remember that you need to sample a signal at least two times per cycle to capture it properly. So a typical sound card that samples at 44 kHz wouldn’t cut it.

The final thing I needed was suitable software. I experimented with two SDR packages ( HDSDR and SDR Sharp), with my sound card taking the place of the usual radio dongle. While these packages displayed transmissions from NAA clearly enough, they didn’t provide a good way to monitor variations in signal strength over time. But I soon discovered how to do that with Spectrum Lab, following an online tutorial explaining how to use this software to measure SIDs.
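For anyone who would rather roll their own than use Spectrum Lab, the underlying signal processing is modest: record the mic input at 96 kHz, take an FFT of each one-second chunk, and log the power in a narrow band around 24 kHz. Here is a minimal Python sketch of that idea; the file name is hypothetical, and the band edges and chunk length are arbitrary choices of ours, not Spectrum Lab’s settings.

```python
import numpy as np
from scipy.io import wavfile

# Track the strength of NAA (24 kHz) over time from a 96 kHz recording,
# roughly what Spectrum Lab's SID plot does.
rate, samples = wavfile.read("vlf_capture.wav")   # hypothetical capture file
samples = samples.astype(float)
if samples.ndim > 1:
    samples = samples[:, 0]   # use one channel if the recording is stereo

chunk = rate  # one-second chunks -> one strength reading per second
for i in range(0, len(samples) - chunk, chunk):
    window = samples[i:i + chunk] * np.hanning(chunk)
    spectrum = np.abs(np.fft.rfft(window))
    freqs = np.fft.rfftfreq(chunk, d=1.0 / rate)
    band = (freqs > 23.9e3) & (freqs < 24.1e3)    # narrow band around NAA
    power_db = 10 * np.log10(np.sum(spectrum[band] ** 2))
    print(f"t={i / rate:6.0f} s  NAA power: {power_db:.1f} dB (arb.)")
```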

This diagram plots relative changes in X-ray flux and VLF signal strength for 28 December 2021. Within days of its construction, the monitor registered the signal from a solar flare. Two flares earlier that day, which were well documented in X-ray measurements taken by NASA’s GOES-16 satellite in geosynchronous orbit [purple line], did not affect the VLF measurements [magenta line] because they occurred when the relevant part of the ionosphere was in darkness and shielded from the sun. James Provost

This combination of desktop AM antenna, external sound card, and Spectrum Lab software proved ideal. With it, I am not only able to monitor NAA, located about 1,400 kilometers from my home in North Carolina, I can also pick up the VLF station in LaMoure, N.D. (call sign NML), which transmits on 25.2 kHz. At times, I clearly receive the Jim Creek Naval station (NLK), near Oso, Wash., on 24.8 kHz and can even register the Navy’s Aguada station in Puerto Rico, despite it transmitting at 40.75 kHz, far from my coil’s resonant frequency.

The first few days of using this gear captured the expected pattern of daily variation in the signal from NAA, with sharp transitions when the sun rises and sets. Within a week, the sun became unusually active, producing three good-size flares in one day—as documented by NASA’s Geostationary Operational Environmental Satellites, which measure X-ray flux in geosynchronous orbit. Two of those flares occurred when the East Coast was in darkness, so they had no effect on the relevant portion of the ionosphere or the signal strength I was monitoring. But the third, which took place at about 11 a.m. local time, showed up nicely.

It’s rather amazing that with just $70 worth of simple electronics and a decade-old laptop, I can now monitor flares on the surface of the sun. One day I might see the effects of a gamma-ray burst taking place on a star in a distant galaxy, as a group at Stanford did in 2004. I’ll probably have to wait years to detect one of those, though. In the meantime, I can entertain myself hunting for more radio signals inadvertently recorded at the Manor Studio in the ’70s. Maybe I’ll start those explorations, fittingly, with Van Morrison’s 1978 album Wavelength.

This article appears in the February 2022 print issue as “A Barometer for Space Weather.”

Taking Cosmology to the Far Side of the Moon

Par Andrew Jones


A team of Chinese researchers is planning to use the moon as a shield to detect otherwise hard-to-observe low frequencies of the electromagnetic spectrum and open up a new window on the universe. The Discovering the Sky at the Longest Wavelengths (DSL) mission aims to seek out faint, low-frequency signals from the early cosmos using an array of 10 satellites in lunar orbit. If it launches in 2025 as planned, it will offer one of the very first glimpses of the universe through a new lens.

Nine “sister” spacecraft will make observations of the sky while passing over the far side of the moon, using our 3,474-kilometer-diameter celestial neighbor to block out human-made and other electromagnetic interference. Data collected in this radio-pristine environment will, according to researchers, be gathered by a larger mother spacecraft and transmitted to Earth when the satellites are on the near side of the moon and in view of ground stations.

The mission aims to map the sky and catalog the major sources of long-wavelength signals—the last, largely undiscovered area of the electromagnetic spectrum—according to a paper on the DSL mission by Xuelei Chen and others at the National Astronomical Observatories and the National Space Science Center, two institutions under the Chinese Academy of Sciences.

“A mission like this being in lunar orbit could make a scientific impact, particularly on cosmic dawn and dark ages science,” says Marc Klein Wolt, managing director of the Radboud Radio Lab in the Netherlands and a member of the Netherlands-China Low Frequency Explorer (NCLE), aboard the Chinese Queqiao relay satellite.

“When you open up a new window on the universe, you’re going to make new discoveries, things that you don’t know about yet—the unknown unknowns.”
—Marc Klein Wolt, Radboud Radio Lab, Netherlands

Detecting the cosmic dark ages (the time before the first stars formed and began to shine) and the cosmic dawn (when the first stars and galaxies formed) requires observing frequencies between 10 and 50 megahertz. Signals emitted by hydrogen atoms during these early cosmic eras have been stretched to much longer wavelengths over more than 13 billion years of travel time. Radio astronomy of this kind is extremely difficult on Earth because the ionosphere interferes with or completely blocks such ultralong wavelengths.
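To put rough numbers on that stretching: neutral hydrogen emits at 1,420.4 megahertz (the 21-centimeter line), and cosmic expansion divides the observed frequency by (1 + z), where z is the redshift. The short sketch below is our own arithmetic using that standard relation, not a calculation from the DSL paper, and it shows why the 10-to-50 MHz band reaches so far back in time.

```python
# The neutral-hydrogen line is emitted at 1420.4 MHz (21 cm). Expansion
# stretches it by a factor of (1 + z), so DSL's 10-50 MHz band corresponds
# to redshifts of roughly 27 to 141.
F_REST_MHZ = 1420.4

def redshift_for_observed(f_obs_mhz: float) -> float:
    return F_REST_MHZ / f_obs_mhz - 1.0

for f in (50.0, 10.0):
    print(f"{f:4.0f} MHz observed  ->  z = {redshift_for_observed(f):.0f}")
```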

“To measure the 'cosmic dawn' signal, or even the 'dark ages' signal, which is even more difficult, you have to be in a really quiet environment,” Wolt notes.

The satellites could, over time, measure the primordial distributions of hydrogen at several different epochs in the early life of the universe, says Wolt. Learning how the distributions changed and evolved over time and grew into bigger clusters of matter to form stars and galaxies would be an important contribution to astronomy.

Heliophysics, space weather, exoplanets, the interstellar medium, and extragalactic radio sources are just some of the other areas in which DSL’s long-wavelength astronomy could make additional new contributions.

“When you open up a new window on the universe, you're going to make new discoveries, things that you don't know about yet,” says Wolt. “The unknown unknowns.”

Astronomers in the United States and elsewhere have proposed setting up telescopes on the far side of the moon to benefit from the radio quiet to make unprecedented observations. Over billions of years, the Earth’s gravity has slowed the rotation of the moon, making it “tidally locked,” meaning the lunar far side now never faces Earth and is shielded from any electromagnetic noise created by terrestrial sources.

The DSL mission will, however, avoid the much greater cost and complexity of landing and setting up on the moon, and it won’t need to carry radioisotope heating systems to keep electronics warm during the frigid two-week-long lunar nights. On the other hand, being in orbit limits how long the satellites can observe while shielded by the moon.

Yet there are other benefits, too.

“With the train of satellites, you're able to do interferometry observations, so you combine the measurements of the various instruments together. And as they orbit around the moon, they can cover most of the sky every month,” says Wolt.
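Interferometry here means cross-correlating the signals recorded by pairs of satellites: the delay and phase of the correlation peak encode where on the sky the emission came from, and many such baselines together synthesize an image. The toy sketch below illustrates only that core correlation step with synthetic noise; it is not DSL’s processing pipeline, and every number in it is made up.

```python
import numpy as np

# Toy two-element interferometer: the sky signal reaches satellite B with a
# geometric delay relative to satellite A; cross-correlating the two streams
# recovers that delay, and hence information about the source direction.
rng = np.random.default_rng(0)
fs = 100e6                 # sample rate (Hz), arbitrary
n = 1 << 16
t = np.arange(n) / fs
true_delay = 40e-9         # assumed geometric delay between the two satellites (s)

sky = rng.normal(size=n)                                              # broadband source
sig_a = sky + 0.5 * rng.normal(size=n)                                # satellite A + receiver noise
sig_b = np.interp(t - true_delay, t, sky) + 0.5 * rng.normal(size=n)  # delayed copy + noise at B

# Cross-correlate via FFT and locate the peak lag.
xcorr = np.fft.irfft(np.fft.rfft(sig_a) * np.conj(np.fft.rfft(sig_b)))
lag = int(np.argmax(np.abs(xcorr)))
if lag > n // 2:
    lag -= n
print(f"recovered delay: {-lag / fs * 1e9:.0f} ns (true: {true_delay * 1e9:.0f} ns)")
```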

The mission presents a number of challenges, such as maintaining the satellites orbiting in a precise configuration. It would also be an early example of using small satellites for space science in deep space.

China previously attempted to test interferometry in lunar orbit with two small satellites that launched along with the Queqiao relay satellite in 2018 to support China’s Chang’e-4 lunar far side landing mission, but one of the spacecraft was lost after the burn to take them from Earth into translunar orbit. This next attempt would be much more ambitious.

The DSL team has recently completed an intensive study of the mission and is now applying to enter the engineering phase, according to Chen, targeting a launch in 2025. While the “dark side of the moon” is a misnomer, the silence (and thus at least radio darkness) of the lunar far side could offer unprecedented insight into cosmic mysteries.

Correction 19 Jan. 2022: A previous version of this post stated the DSL mission was Chinese and European. There was a proposal for a similar Sino-European effort, but another team was ultimately selected. The present mission is a Chinese one.

Can Freight Train Cars Go Electric—and Self-Driving?

Par Evan Ackerman


Moving freight by rail hasn’t changed a whole heck of a lot over the last several decades. And there are good reasons for this: Trains can move freight four times as efficiently as trucks can, and they can move a huge amount of it at once with minimal human supervision. The disadvantage of trains is that they’re best at long-distance hub-to-hub freight transfers, and usually, you still need to put your cargo on a truck to get it to its final destination. Plus, if you just have a little bit of cargo, you may be at the mercy of a network that prioritizes large volume rather than small.

Parallel Systems, a startup founded by a trio of former SpaceX engineers that is coming out of stealth today, hopes to change this model substantially by introducing autonomous rail vehicles that can handle standard shipping containers—the same containers that currently move freely between cargo ships, traditional rail systems, and trucks. By moving containers one at a time, Parallel Systems believes that rail can be much more agile with no loss in efficiency, helping reduce the reliance on trucking. Can they do it? Maybe—but there are some challenges.


From a technical perspective, these autonomous electric-rail vehicles really do seem achievable. The task is a substantial simplification of the autonomous-driving problem, in the sense that you only need to worry about control in one dimension, and (in particular) that most of the time you can be reasonably certain you’ll have right-of-way on the track. With some halfway-decent sensors to detect obstacles on the track, reliable motors, batteries that last long enough (current range is 800 kilometers with a subhour recharge time), and the software infrastructure required to sort it all out, I don’t see any major obstacles to building these things and putting them on some tracks. Where things get more complicated is when you consider the long-term plan that Parallel Systems has for its technology:

The overall vision seems very compelling. Decentralizing freight transport and distribution can provide flexibility and increased efficiency, getting cargo closer to where it needs to go in a more timely manner while taking some stress off of overloaded ports. And with each individual container being effectively an independent autonomous vehicle, there are a bunch of clever things you can do, like platooning. In a traditional platoon, efficiency is unequal since the leader takes the brunt of the aerodynamic forces to make things easier on all of the following vehicles, and obviously rotating leaders won’t work on rail. But Parallel Systems’ vehicles can go bumper to bumper and push each other, meaning that overall energy use can be equalized. Neat!

Illustration showing a small freight terminal transferring freight containers from Parallel Systems vehicles onto trucks. Parallel Systems

The potential issue here, and it could be a significant one, is that Parallel Systems only builds and controls these little railcars. They don’t build, own, or control the rail systems that their vehicles require. North America has rail all over the place already, but that rail is in the charge of other companies, who are using it to do their own thing. So the question is, how does Parallel Systems fit in with that?

To get a better understanding of the current state of rail in the United States and how Parallel Systems might fit in with that, we spoke with Nick Little, director of the Railway Management Program at the Eli Broad College of Business at Michigan State University.

IEEE Spectrum: Can you describe how the rail network functions in the United States?

Nick Little: We've got 250,000 km of railroad in the United States, and it's owned by over 600 different companies. There are seven large companies called Class 1, which move the majority of the freight the majority of the distance, between large cities or across the country. Those companies all own their own infrastructure—the track, the bridges, the signaling system, the locomotives. The other 600-odd railroads are generally referred to as short lines, which do a lot of the first and last mile, and most of those railroads are small operations that may not have good quality track and older equipment, including less sophisticated signaling systems.

I can imagine that a difficulty here will be integrating this new vehicle into an existing railroad network, since it’s such a different concept. If one of these vehicles was going to travel a distance of hundreds of kilometers, there could be a lot of different railroads involved, with a lot of different pieces of track and switches to navigate.

Can you talk a little more about switches and signaling?

Little: To really benefit from the ability of these vehicles to increase capacity by transporting single containers, you’d need to have the right signaling and control system in place, and it would have to be much more like what is used on a high-capacity metropolitan subway—control based on direct communication with vehicles, rather than the block control that’s used on most long-distance railroads in this country at the moment. That block system operates on the principle that you can have only one train in that block at any one time to avoid collisions, and sometimes those blocks are 20 km long. And a lot of the long-distance freight lines are still basically single-track lines with passing sidings rather than one track in each direction.

In theory, you could make these vehicles a lot more responsive by being able to send a signal to some...trackside device that would change a switch for dynamic rerouting, but this isn’t something that currently exists across freight-rail networks. It does exist on tramways; urban streetcars work that way. Could it be done? Heck, yes. The technology is there.
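To make the contrast concrete, here is a toy sketch of the fixed-block principle Little describes: a block grants movement authority to at most one train, no matter how short the vehicle requesting it is. This is an illustration of the concept only; the class name and API are invented, and real dispatching and interlocking systems are vastly more involved.

```python
# Toy fixed-block dispatcher: each block may hold at most one train, which is
# the safety principle behind block signaling. Illustrative only.
class BlockDispatcher:
    def __init__(self, num_blocks: int):
        self.occupant = [None] * num_blocks  # block index -> train id (or None)

    def request_authority(self, train: str, block: int) -> bool:
        """Grant movement authority into `block` only if it is empty."""
        if self.occupant[block] is None:
            self.occupant[block] = train
            return True
        return False

    def release(self, train: str, block: int) -> None:
        if self.occupant[block] == train:
            self.occupant[block] = None

dispatcher = BlockDispatcher(num_blocks=5)
print(dispatcher.request_authority("single-railcar-1", 2))   # True: block 2 is clear
print(dispatcher.request_authority("freight-train-9", 2))    # False: must wait, even if the block is 20 km long
```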

So in the short term, you don't see that there's really a good way of mixing these vehicles in with traditional freight traffic on existing freight lines?

Little: Correct.

Does our rail system currently have the spare capacity to add more traffic to the existing network anyway?

Little: If we run the rail system the same way it’s set up at the moment, it’s pretty close to capacity in many areas, but by no means at absolute capacity across the whole network. And some of the constraints on capacity have nothing to do with the physical side of it; it’s also down to the availability of trained and experienced labor.

If you were in charge of a Class 1 railway right now and Parallel Systems came to you with this idea, would you be interested?

Little: If I had a dedicated set of tracks that didn’t carry any other types of traffic and just moved containers, I would be interested, but I don’t know of any company that does that. However, think about the problem we’ve got in this country at the moment, with all the ships and containers stacked up outside the Long Beach and Los Angeles ports, and the challenge of getting those containers from the docks to an inland terminal where they can go to transloading places or warehouses, or straight through to other destinations. If you were to build a short dedicated stretch of line that just moved containers off the port to, let’s call it a pop-up inland port, that could be a great idea. That seems like a really, really efficient way of doing it.

The technology needs to be proven, which could be done on a small scale. Scaling it up could have a lot more issues, but I’d like to see it potentially applied to urban freight. A lot of container traffic has to move on roads through cities, but if you could actually have distribution with rail from a major terminal to lots of different customers in an industrial park, that could be something useful too. But that’s a different scale.

And for Parallel Systems’ perspective on some of these challenges, we were also able to speak with the company’s cofounder and CEO, Matt Soule.

Why is now the right time to start a company like this, and what challenges are you dealing with?

Matt Soule: There is a big advantage in that what we’re building is achievable. There are no breakthroughs required to do it; we’re basically integrating a lot of existing technologies together. But a lot of what we’re doing is quite new for the railroads themselves. One of the challenges has been simply understanding how railroads operate at the nuts-and-bolts level: how you actually manage that kind of network.

One of the things we're working through now is how to leverage the physical infrastructure that's already there, because we want to use that as it is. We think that's a big advantage in terms of our market entry—not having to force the creation of any infrastructure. But there's also the control and data-management infrastructure that exists at the software level. And discovering how that works requires working with the industry to really figure it out. So that's been one of the things we’ve been working hard toward.

Are you concerned about having to rely on potentially many different companies for track access?

Soule: So in terms of our market entry, it's going to be working with specific railroads on specific sections of network, and we’ll be captive to that network, but as we validate and verify what we're doing, we’ll start working toward becoming a common interface that can address the needs of all railroads. In terms of the business side, we likely would have to build individual relationships with them, but it's something they already do with each other today—they work with each other to use each other's tracks, and it’s one of the successes of North American rail that the industry has worked together to create interchangeable standards.

What kind of feedback have you gotten from the rail industry so far?

Soule: We've had a lot of conversations with the rail industry; we're definitely calibrating off of the problems that they see. I’d say across the board the reception has been very positive, and it's exceeded our expectations. I think there are certainly operators out there that are going to take a wait-and-see approach, but we have a lot of strong interest from industry already. Railroads are our customers.

We think that marrying autonomy with electrification gives the railroads a lot of tools to create new markets, like capturing trucking volume, that were previously out of reach.
—Matt Soule, CEO of Parallel Systems

Will dealing with signals autonomously with your system be an issue?

Soule: Class 1 railroads have state-of-the-art control systems and a tremendous amount of software that ties it all together and operates it. And so what we're doing is building out our own system that can be compatible with these existing network control systems. The short-line railroads have more variety. I wouldn't say they're out of date, but they're not quite as modernized as some of the Class 1 systems. They'll usually use a phone or radio to call into a central dispatch office, and then say, “Hey, can I have this track authority?” And the dispatcher will have a software tool that's managing those authorities, which are what prevents trains from having a collision.

What's nice about our system is that it can electronically make that same request from that dispatch office. So, there's electronic handshaking happening between our system and that legacy dispatching office, and there’s various levels at which you could automate that. The grand vision would be that the dispatching office and our system are just computers talking to each other.

What’s next for Parallel Systems?

Soule: We're in the midst of building our second-generation vehicle, so we're looking to hire a lot of software and hardware engineers. We should be rolling this quarter, and we're launching an advanced testing program this year. We are working with some players in the rail industry, but we're not revealing too much about the nature of this relationship.

Parallel Systems has raised US $49.55 million in a Series A round, so they should have plenty of time to see if they can make this work. And if they can, I’d be first in line for the spinoff business that they should absolutely do: putting little luxury private compartments on their vehicles and offering private tours of scenic rail lines.

A First: AI System Named Inventor

Par Kathy Pretz


The South African patent office made history in July when it issued a patent that listed an artificial intelligence system as the inventor.

The patent is for a food container that uses fractal designs to create pits and bulges in its sides. Designed for the packaging industry, the new configuration allows containers to fit more tightly together so they can be transported better. The shape also makes it easier for robotic arms to pick up the containers.


The patent’s owner, AI pioneer Stephen L. Thaler, created the inventor, the AI system known as Dabus (device for the autonomous bootstrapping of unified sentience).

The patent success in South Africa was thanks to Thaler’s attorney, Ryan Abbott.

Abbott and his team filed applications in 2018 and 2019 in 17 patent offices around the world, including in the United States, several European countries, China, Japan, and India.

The European Patent Office (EPO), the U.K. Intellectual Property Office (UKIPO), the U.S. Patent and Trademark Office (USPTO), and Intellectual Property (IP) Australia all denied the application, but Abbott filed appeals. He won an appeal in August, when the Federal Court of Australia ruled that the AI system can be an inventor under the country’s 1990 Patents Act.

The EPO Boards of Appeal and the U.K. Court of Appeal recently ruled that only humans can be inventors. Abbott is asking the U.K. Supreme Court to allow him to challenge that point. He says he expects a decision to be made this year.

Abbott, a physician as well as a lawyer, is a professor of law and health sciences at the University of Surrey’s School of Law. He also is an adjunct assistant professor at the Geffen School of Medicine at the University of California, Los Angeles, and he wrote The Reasonable Robot: Artificial Intelligence and the Law.

He spoke about the decision by South Africa during the Artificial Intelligence and the Law virtual event held in September by the IEEE student branch at the University of South Florida, in Tampa. The event was a collaboration among the branch and several other IEEE groups including Region 3 and Region 8, the Africa Council, the University of Cape Town student branch, the Florida Council, and the Florida West Coast Section. More than 340 people attended. Abbott’s talk is available on IEEE.tv.

The Institute recently interviewed Abbott to find out how an AI entity could invent something, the nuances of patent law, and the impact the Australian and South African decisions could have on human inventors. The interview has been condensed and edited for clarity.

INVENTIVE SYSTEMS

In 2014 Abbott began noticing that companies were increasingly using AI to do a variety of tasks including creating designs. A neural network–based system can be trained on data about different types of car suspensions, for example, he says. The network can then alter the training data, thereby generating new designs.

A second network, which he calls a critical neural network, can monitor and evaluate the output. If you tell the AI system how to evaluate new designs, and that you are looking for a car suspension that can reduce friction better than existing designs, it can alert you when a design comes out that meets that criterion, Abbott says.
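As a rough illustration of the generate-and-critique pattern Abbott describes, the sketch below has one model propose perturbed versions of known designs while a second, critic-like scorer flags any candidate that beats the best existing design against the stated criterion. This is not Dabus; the “suspension parameters” and the scoring function are invented stand-ins.

```python
import numpy as np

# Toy generate-and-critique loop (illustrative only, not Dabus).
rng = np.random.default_rng(42)

# Pretend each design is a vector of suspension parameters (spring rate,
# damping, geometry...), and known_designs is the training data.
known_designs = rng.uniform(0.0, 1.0, size=(50, 4))

def friction_score(design: np.ndarray) -> float:
    """Stand-in critic metric: lower is better (hypothetical)."""
    return float(np.sum((design - 0.3) ** 2))

best_existing = min(friction_score(d) for d in known_designs)

# "Generator": perturb known designs to propose new candidates.
candidates = known_designs[rng.integers(0, 50, size=500)] + rng.normal(0, 0.1, size=(500, 4))

# "Critic": alert on any candidate that outperforms everything in the training set.
novel = [c for c in candidates if friction_score(c) < best_existing]
print(f"{len(novel)} candidate designs beat the best existing score of {best_existing:.3f}")
```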

“Some of the time, the AI is automating the sort of activity that makes a human being an inventor on a patent,” he says. “It occurred to me that this sort of thing was likely to become far more prevalent in the future, and that it had some significant implications for research and development.”

Some patent applicants have been instructed by their attorney to use a person’s name on the patent even if a machine came up with the invention.

But Abbott says that’s a “short-sighted approach.” If a lawsuit is filed challenging a patent, the listed inventor could be deposed as part of the proceedings. If that person couldn’t prove he or she was the inventor, the patent couldn’t be enforced. Abbott acknowledges that most patents are never litigated, but he says it still is a concern for him.

Meanwhile he found that companies using AI to invent were growing worried.

“AI is automating the sort of activity that makes a human being an inventor on a patent.”

“It wasn’t clear what would happen if you didn’t have a human inventor on a patent,” he says. “There was no law on it anywhere. Just a bunch of assumptions.”

He and a group of patent lawyers decided to seek out a test case to help establish a legal precedent. They approached Thaler, founder of Imagination Engines, in St. Charles, Mo. The company develops artificial neural network technology and associated products and services. Thaler created Dabus in part to devise and develop new ideas. He had Dabus generate the idea for a new type of food container, but he did not instruct the system what to invent specifically, nor did he do anything that would traditionally qualify him as an inventor.

The lawyers decided the food container design was patentable because it met all the substantive criteria: It was new, not obvious, useful, and appropriate subject matter.

They filed Thaler’s application in the U.K. and in Europe first because, Abbott says, those jurisdictions don’t initially require an application to list an inventor.

The patent offices did their standard evaluations and found the application to be “substantively patentable in preliminary examination.”

Next the lawyers adjusted the application to list Dabus as the inventor.

Typically an inventor’s employer is made the owner of a patent. Even though Dabus is not an employee, Abbott says, “We argue that Dr. Thaler is entitled to own the patents under the general principles of property ownership—such as a rule called accession—which refers to owning some property by virtue of owning some other property. If I own a fruit tree, I own fruit from that tree. Or, if Dabus had been a 3D printer and made a 3D-printed beverage container, Thaler would own that.”

IMPACT ON INVENTORS

Abbott says he believes the decisions in Australia and South Africa will encourage people to build and use machines that can generate inventive output and use them in research and development. That would in turn, he says, promote the commercialization of new technologies.

He says he hopes the decisions also encourage people to be open about whether their invention was developed by a machine.

“The reason we have a patent system is to get people to disclose inventions to add to the public store of knowledge in return for these monopoly rights,” he says.

Human inventors likely will face more competition from AI in the future, he says.

“AI hasn’t gotten to the point where it is going to be driving mass automation of research,” he says. “When it does, it will likely be in certain areas where AI has natural advantages, like discovering and repurposing medicines. In the medium term, there will still be plenty of ways for human researchers to stay busy while society gets to enjoy dramatic advances in research.”

You can listen to an interview with Abbott on IEEE Spectrum’s Fixing the Future podcast: Can a Robot Be Arrested? Hold a Patent? Pay Income Taxes?

The Lies that Powered the Invention of Pong

Par Tekla S. Perry


In 1971 video games were played in computer science laboratories when the professors were not looking—and in very few other places. In 1973 millions of people in the United States and millions of others around the world had seen at least one video game in action. That game was Pong.

Two electrical engineers were responsible for putting this game in the hands of the public—Nolan Bushnell and Allan Alcorn, both of whom, with Ted Dabney, started Atari Inc. in Sunnyvale, Calif. Mr. Bushnell told Mr. Alcorn that Atari had a contract from General Electric Co. to design a consumer product. Mr. Bushnell suggested a Ping-Pong game with a ball, two paddles, and a score, that could be played on a television.

“There was no big contract,” Mr. Alcorn said recently. “Nolan just wanted to motivate me to do a good job. It was really a design exercise; he was giving me the simplest game he could think of to get me to play with the technology.”

The key piece of technology he had to toy with, he explained, was a motion circuit designed by Mr. Bushnell a year earlier as an employee of Nutting Associates. Mr. Bushnell first used the circuit in an arcade game called Computer Space, which he produced after forming Atari. It sold 2000 units but was never a hit.


This article was first published as "Pong: an exercise that started an industry." It appeared in the December 1982 issue of IEEE Spectrum as part of a special report, “Video games: The electronic big bang.” A PDF version is available on IEEE Xplore.


In the 1960s Mr. Bushnell had worked at an amusement park and had also played space games on a PDP-10 at college. He divided the cost of a computer by the amount of money an average arcade game made and promptly dropped the idea, because the economics did not make sense.

Then in 1971 he saw a Data General computer advertised for $5000 and determined that a computer game played on six terminals hooked up to that computer could be profitable. He began designing a space game to run on such a timeshared system, but because game action occurs in real time, the computer was too slow. Mr. Bushnell began trying to take the load off the central computer by making the terminals smarter, adding a sync generator in each, then circuits to display a star field, until the computer did nothing but keep track of where the player was. Then, Mr. Bushnell said, he realized he did not need the central computer at all—the terminals could stand alone.

“He actually had the order for the computers completed, but his wife forgot to mail it,” Mr. Alcorn said, adding, “We would have been bankrupt if she had.”

Mr. Bushnell said, “The economics were no longer a $6000 computer plus all the hardware in the monitors; they became a $400 computer hooked up to a $100 monitor and put in a $100 cabinet. The ice water thawed in my veins.”


The ball in Pong is square. Considering the amount of circuitry a round ball would require, “who is going to pay an extra quarter for a round ball?”


Computer Space appealed only to sophisticated game players—those who were familiar with space games on mainframe computers, or those who frequent the arcades today. It was well before its time. Pong, on the other hand, was too simple for an EE like Mr. Bushnell to consider designing it as a real game—and that is why it was a success.

Mr. Bushnell had developed the motion circuit in his attempt to make the Computer Space terminals smarter, but Mr. Alcorn could not read his schematics and had to redesign it. Mr. Alcorn was trying to get the price down into the range of an average consumer product, which took a lot of ingenuity and some tradeoffs.

“There was no real bulk memory available in 1972,” he said. “We were faced with having a ball move into any of the spots in a 200-by-200 array without being able to store a move. We did it with about 10 off-the-shelf TTL parts by making sync generators that were set one or two lines per frame off register.”

Thus, the ball would move in relation to the screen, both vertically and horizontally, just as a misadjusted television picture may roll. Mr. Alcorn recalled that he originally used a chip from Fairchild to generate the display for the score, but it cost $5, and he could do the same thing for $3 using TTL parts, though the score was cruder.
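The trick is easier to see in a toy simulation than in TTL: if the ball’s sync generator runs a line or two per frame out of register with the screen’s, the ball drifts steadily across the raster, and no memory is needed to store its position. The sketch below is purely illustrative; the constants are made up, and this is not the Pong schematic translated into code.

```python
# Toy simulation of the off-register sync trick Alcorn describes: the ball's
# position is just the accumulated offset between two sync generators,
# wrapping around the 200-by-200 playfield each frame.
SCREEN_LINES = 200          # vertical and horizontal extent of the playfield
V_OFFSET_PER_FRAME = 2      # ball sync runs 2 lines per frame off register
H_OFFSET_PER_FRAME = -1     # and 1 count per frame the other way horizontally

ball_v, ball_h = 100, 100
for frame in range(5):
    ball_v = (ball_v + V_OFFSET_PER_FRAME) % SCREEN_LINES
    ball_h = (ball_h + H_OFFSET_PER_FRAME) % SCREEN_LINES
    print(f"frame {frame}: ball at line {ball_v}, column {ball_h}")
```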

The ball in Pong is square—another tradeoff. Considering the amount of circuitry a round ball would require, Mr. Alcorn asked, “who is going to pay an extra quarter for a round ball?”

Sound was also a point of contention at Atari. Mr. Bushnell wanted the roar of approval of a crowd of thousands; Mr. Dabney wanted the crowd booing.

“How do you do that with digital stuff?” Mr. Alcorn asked. “I told them I didn’t have enough parts to do that, so I just poked around inside the vertical sync generator for the appropriate tones and made the cheapest sound possible.”

The hardware design of Pong took three months, and Mr. Alcorn’s finished prototype had 73 ICs, which, at 50 cents a chip, added up to $30 to $40 worth of parts. “That’s a long way from a consumer product, not including the package, and I was depressed, but Nolan said ‘Yeah, well, not bad.’”

They set the Pong 2 prototype up in a bar and got a call the next day to take it out because it was not working. When they arrived, the problem was obvious: the coin box was jammed full of quarters.


Video Friday: Guitar Bot

Par Evan Ackerman


Video Friday is your weekly selection of awesome robotics videos, collected by your friends at IEEE Spectrum robotics. We’ll also be posting a weekly calendar of upcoming robotics events for the next few months; here's what we have so far (send us your events!):

ICRA 2022: 23–27 May 2022, Philadelphia
ERF 2022: 28–30 June 2022, Rotterdam, Germany
CLAWAR 2022: 12–14 September 2022, Açores, Portugal

Let us know if you have suggestions for next week, and enjoy today's videos.


Robotics. It's a wicked game.

[ GA Tech ]

This experiment demonstrated the latest progress of the flying humanoid robot Jet-HR2. The new control strategy allows the robot to hover with position feedback from the motion-capture system. The video demonstrates the robot’s ability to hover stably in midair for more than 20 seconds.

[ YouTube ]

Thanks, Zhifeng!

This super cool soft robotic finger from TU Berlin is able to read Braille with astonishing accuracy by using sound as a sensor.

[ TU Berlin ]

Cassie Blue navigates around furniture used as obstacles in the Ford Robotics Building at the University of Michigan. All the clips in this video are magnified 1x on purpose to show Cassie's motion.

[ Michigan Robotics ]

Thanks, Bruce!

Tapomayukh Bhattacharjee received a National Science Foundation (NSF) National Robotics Initiative (NRI) collaborative grant for a project that aims to give people with mobility issues improved control and independence over their environments, especially in how they are fed—or better, how they can feed themselves with robotic assistance.

[ Cornell ]

A novel quadcopter capable of changing shape midflight is presented, allowing for operation in four configurations with the capability of sustained hover in three.

[ HiPeR Lab ]

Two EPFL research groups teamed up to develop a machine-learning program that can be connected to a human brain and used to command a robot. The program adjusts the robot’s movements based on electrical signals from the brain. The hope is that with this invention, tetraplegic patients will be able to carry out more day-to-day activities on their own.

[ EPFL ]

The MRV is SpaceLogistics’ next-generation on-orbit servicing vehicle, incorporating a robotic arm payload developed and integrated by the U.S. Naval Research Laboratory and provided by the U.S. Defense Advanced Research Projects Agency. In this test of Flight Robotic Arm System 1, the robotic arm is executing an exercise called the Gauntlet, which moves the arm through a series of poses that exercise the full motion of all seven degrees of freedom.

[ Northrop Grumman ]

The Shadow Robot Co. would like to remind you that the Shadow Hand is for sale, and if you’re a researcher who thinks “wow, that would be great, but I almost certainly can’t afford it,” the company encourages you to give them a ring to see what they may be able to do to help make it happen.

[ Shadow ]

Join ESA astronaut Matthias Maurer inside Kibo, the Japanese laboratory module of the International Space Station in 360°, setting up Astrobee free-flying robots for the ReSWARM (RElative Satellite sWArming and Robotic Maneuvering) experiment. This robotics demonstration tests autonomous microgravity motion planning and control for on-orbit assembly and coordinated motion.

[ NASA ]

Boeing's MQ-25 autonomous aerial tanker continues its U.S. Navy carrier testing.

[ Boeing ]

Sphero Sports is built for sports foundations, schools, and CSR-driven organizations to teach STEM subjects. Sphero Sports gets students excited about STEM education and proactively supports educators and soccer foundation staff to become comfortable in learning and teaching these critical skills.

[ Sphero ]

Adibot-A is Ubtech Robotics' fully loaded autonomous disinfection solution, which can be programmed and mapped to independently navigate one or multiple floor plans.

[ UBTECH ]

Survice Engineering Co. was proud to support the successful completion of the Unmanned Logistics System–Air (ULS-A) Joint Capability Technology Demonstration (JCTD) program as the lead system integrator. We worked with the U.S. government, leaders in autonomous unmanned systems, and our warfighters to develop, test, and evaluate the latest multirotor VTOL platforms and technologies for assured logistics resupply at the forward edge of the battlefield.

[ SURVICE ] via [ Malloy Aeronautics ]

Thanks, Chris!

Yaqing Wang from JHU's Terradynamics Lab gives a talk on trying to make a robot that is anywhere near as talented as a cockroach.

[ Terradynamics Lab ]

In episode one of season two of the Robot Brains podcast, host Pieter Abbeel is joined by guest (and close collaborator) Sergey Levine, professor at UC Berkeley, EECS. Sergey discusses the early years of his career, how Andrew Ng influenced his interest in machine learning, his current projects, and his lab's recent accomplishments.

[ The Robot Brains ]

Thanks, Alice!

Learn About the Candidates Running for 2023 President-Elect

Par Joanna Goodrich

The IEEE Board of Directors has nominated Life Fellow Thomas Coughlin and Senior Members Kathleen Kramer and Maike Luiken as candidates for IEEE president-elect. IEEE Life Fellow Kazuhiro Kosuge is seeking to be a petition candidate.

Other members who want to become a petition candidate still may do so by submitting their intention to elections@ieee.org by 8 April.

The winner of this year’s election will serve as IEEE president in 2024. For more information about the election, president-elect candidates, and petition process, visit the IEEE website.

Life Fellow Thomas Coughlin

Nominated by the IEEE Board of Directors

Portrait of a white-haired smiling man wearing glasses and a suit. Tom Coughlin. Harry Who Photography

Coughlin is founder and president of Coughlin Associates, in San Jose, Calif., which provides market and technology analysis as well as data storage, memory technology, and business consulting services. He has more than 40 years of experience in the data storage industry and has been a consultant for more than 20 years. He has been granted six patents.

Before starting his own company, Coughlin held senior leadership positions in Ampex, Micropolis, and SyQuest.

He is the author of Digital Storage in Consumer Electronics: The Essential Guide, which is in its second edition. He is a regular contributor on digital storage for the Forbes blog and other news outlets.

In 2019 he was IEEE-USA president as well as IEEE Region 6 director. He also was chair of the IEEE New Initiatives and Public Visibility committees. He was vice president of operations and planning for the IEEE Consumer Technology Society and served as general chair of the 2011 Sections Congress in San Francisco.

He is an active member of the IEEE Santa Clara Valley Section, which he chaired, and has been involved with several societies, standards groups, and the IEEE Future Directions committee.

As a distinguished lecturer for the Consumer Technology Society and IEEE Student Activities, he has spoken on digital storage in consumer electronics, digital storage and memory for artificial intelligence, and how students can make IEEE their “professional home.”

Coughlin is a member of the IEEE–Eta Kappa Nu (IEEE-HKN) honor society.

He has received several recognitions including the 2020 IEEE Member and Geographic Activities Leadership Award.

Coughlin is active in several other professional organizations including the Society of Motion Picture and Television Engineers and the Storage Networking Industry Association.

Senior Member Kathleen Kramer

Nominated by the IEEE Board of Directors

Portrait of a smiling blond woman. Kathleen Kramer. JT MacMillan

Kramer is a professor of electrical engineering at the University of San Diego, where she served as chair of the EE department and director of engineering from 2004 to 2013. As director she provided academic leadership for all of the university’s engineering programs.

Her areas of interest include multisensor data fusion, intelligent systems, and cybersecurity in aerospace systems. She has authored or co-authored more than 100 publications.

Kramer has worked for several companies including Bell Communications Research, Hewlett-Packard, and Viasat.

She served as the 2017–2018 director of IEEE Region 6 and was the 2019–2021 IEEE secretary. In that position, she chaired the IEEE Governance Committee and helped make major changes, including centralizing ethics conduct reporting, strengthening the processes for handling ethics and member conduct, and improving the process used to periodically review each of the individual committees and major boards of the IEEE.

She has held several leadership positions in the IEEE San Diego Section, including chair, secretary, and treasurer. Her first position with the section was advisor to the IEEE University of San Diego Student Branch.

Kramer is an active leader within the IEEE Aerospace and Electronic Systems Society. She currently heads its technical operations panel on cybersecurity. From 2016 to 2018 she served as vice president of education.

She is a distinguished lecturer for the society and has given talks on signal processing, multisensor data fusion, and neural systems.

Kramer serves as an IEEE commissioner within ABET, the global accrediting organization for academic programs in applied science, computing, engineering, and technology. She has contributed to several advances for graduate programs, cybersecurity, mechatronics, and robotics.

Life Fellow Kazuhiro Kosuge

Seeking petition candidacy

Portrait of a smiling man in a suit with dark hair. Kazuhiro Kosuge. Majesty Professional Photo

Kosuge is a professor of robotic systems at the University of Hong Kong’s electrical and electronic engineering department. He has been conducting robotics research for more than 35 years, has published more than 390 technical papers, and has been granted more than 70 patents.

He began his engineering career as a research staff member in the production engineering department of Japanese automotive manufacturer Denso. After two years, he joined the Tokyo Institute of Technology’s department of control engineering as a research associate. In 1989 and 1990, he was a visiting research scientist at MIT. After he returned to Japan, he began his academic career at Nagoya University as an associate professor.

In 1995 Kosuge left Nagoya and joined Tohoku University, in Sendai, Japan, as a faculty member in the machine intelligence and system engineering department. He is currently director of the university’s Transformative AI and Robotics International Research Center.

An IEEE-HKN member, he has held several IEEE leadership positions including 2020 vice president of Technical Activities, 2015–2016 Division X director, and 2010–2011 president of the Robotics and Automation Society.

He has served in several advisory roles for Japan, including science advisor to the Ministry of Education, Culture, Sports, Science, and Technology’s Research Promotion Bureau from 2010 to 2014. He was a senior program officer of the Japan Society for the Promotion of Science from 2007 to 2010. In 2005 he was appointed as a Fellow of the Japan Science and Technology Agency’s Center for Research and Development Strategy.

His honors and awards include the purple-ribbon Medal of Honor, awarded in 2018 by the emperor of Japan.

To sign Kosuge’s petition, click here.

Senior Member Maike Luiken

Nominated by the IEEE Board of Directors

Portrait of a smiling woman with grey hair and glasses. Maike Luiken. Heather O’Neil/Photos Unlimited

Luiken’s career in academia spans 30 years, and she has more than 20 years of experience in industry. She is co-owner of Carbovate Development, in Sarnia, Ont., Canada, and is managing director of its R&D department. She also is an adjunct research professor at Western University in London, also in Ontario.

Her areas of interest include power and energy, information and communications technology, how progress in one field enables advances in other disciplines and sectors, and how the deployment of technologies contributes—or doesn’t contribute—to sustainable development.

In 2001 she joined the National Capital Institute of Telecommunications in Ottawa as vice president of research alliances. There she was responsible for a wide area test network and its upgrades. While at the company, she founded two research alliance networks that spanned across industry, business, government, and academia in the areas of wireless and photonics.

She joined Lambton College, in Sarnia, in 2005 and served as dean of its technology school as well as of applied research and innovation. She led the expansion of applied research conducted at the school and helped Lambton become one of the top three research colleges in Canada.

In 2013 she founded the Bluewater Technology Access Centre (now the Lambton Manufacturing Innovation Centre). It provides applied research services to industry while offering students and faculty opportunities to develop solutions for industry problems.

Luiken, an IEEE-HKN member, was last year’s vice president of IEEE Member and Geographic Activities. She was president of IEEE Canada in 2018 and 2019, when she also served as Region 7 director.

She has served on numerous IEEE boards and committees including the IEEE Board of Directors, the Canadian Foundation, Member and Geographic Activities, and the Internet Initiative.

Unitree’s AlienGo Quadruped Can Now Wield a Lightsaber

Par Evan Ackerman


Unitree Robotics, well known for providing affordable legged robots along with questionable Star Wars–themed promotional videos, has announced a brand-new, custom-made, 6-degree-of-freedom robotic arm intended to be mounted on the back of its larger quadrupeds. Also, it will save humanity from Sith from Mars, or something.


This, we should point out, is not the first time Unitree has used the Force in a promotional video, although its first attempt was very Dark Side and the second attempt seemed to be mostly an apology for the first. The most recent video here seems to have landed squarely on the Light Side, which is good, but I’m kinda confused about the suggestion that the baddies come from Mars (?) and most humans are killed (??) and the answer is some sort of “Super AI” (???). I guess Unitree will have to release more products so that we can learn how this story ends.

Anyway, about the arm: There are two versions, the Z1 Air and the Z1 Pro, built with custom motors using harmonic reducers for low backlash and torque control. They are almost exactly the same, except that the Pro weighs 4.3 kilograms rather than 4.1 kg, and has a payload of 3–5 kg rather than 2 kg. Max reach is 0.7 meters, with 0.1 millimeter repeatability. The price for the Air version is “about $6600,” and it’s compatible with “other mobile robots” as well.

It’s important to note that just having an arm on a robot is arguably the easy part—it’s using the arm that’s the hard part, in the sense that you have to program it to do what you need it to do. A strong, lightweight, and well-integrated arm certainly makes that job easier, but it remains to be seen what will be involved in getting the arm to do useful stuff. I don’t want to draw too many comparisons to Boston Dynamics here, but Spot’s arm comes with autonomous and semi-autonomous behaviors built-in, allowing otherwise complex actions to be leveraged by commercial end users. It’s not yet clear how Unitree is handling this.

We’re at the point now with robots in general that in many cases, software is the differentiator rather than hardware, and you get what you pay for. That said, sometimes what you want or need is a more affordable system to work with, and remember that Unitree’s AlienGo costs under $10K. There’s certainly a demand for affordable hardware, and while it may not be ready to be dropped into commercial applications just yet, it’s good to see options like these on the market.

Physicists Spin Up Quantum Tornadoes

Par Philip E. Ross


Shrink down to the level of atoms and you enter the quantum world, so supremely weird that even a physicist will sometimes gape. Hook that little world to our big, classical one, and a cat can be both alive and dead (sort of).

“If you think you understand quantum mechanics, you don’t understand quantum mechanics,” said the great Richard Feynman, four decades ago. And he knew what he was talking about (sort of).

Now comes a report on a quantum gas, called a Bose-Einstein condensate, which scientists at the Massachusetts Institute of Technology first stretched into a skinny rod, then rotated until it broke up. The result was a series of daughter vortices, each one a mini-me of the mother form.

The research, published in Nature, was conducted by a team of scientists affiliated with the MIT-Harvard Center for Ultracold Atoms and MIT’s Research Laboratory of Electronics.

The rotating quantum clouds, effectively quantum tornadoes, recall phenomena seen in the large-scale, classical world that we are familiar with. One example would be so-called Kelvin-Helmholtz clouds, which look like periodically repeating, serrated cartoon images of waves on the ocean.

These wave-shaped clouds, seen over an apartment complex in Denver, exhibit what’s called Kelvin-Helmholtz instability. Photo: Rick Duffy/Wikipedia

The way to make quantum cloud vortices, though, involves more lab equipment and less atmospheric wind shear. “We start with a Bose-Einstein condensate, 1 million sodium atoms that share one and the same quantum-mechanical wave function,” says Martin Zwierlein, a professor of physics at MIT.

The same mechanism that confines the gas—an atom trap, made up of laser beams—allows the researchers to squeeze it and then spin it like a propeller. “We know what direction we’re pushing, and we see the gas getting longer,” he says. “The same thing would happen to a drop of water if I were to spin it up in the same way—the drop would elongate while spinning.”

What they actually see is effectively the shadow cast by the sodium atoms as they absorb the illuminating laser light, a technique known as absorption imaging. Successive frames in a movie can be captured by a well-placed CCD camera.

At a particular rotation rate, the gas breaks up into little clouds. “It develops these funny undulations—we call it flaky—then becomes even more extreme. We see how this gas ‘crystallizes’ in a chain of droplets—in the last image there are eight droplets.”

Why settle for a one-dimensional crystal when you can go for two? And in fact the researchers say they have done just that, in as yet unpublished research.

That a rotating quantum gas would break into blobs had been predicted by theory—that is, one could infer that this would happen from earlier theoretical work. “We in the lab didn’t expect this—I was not aware of the paper; we just found it,” Zwierlein says. “It took us a while to figure it out.”

The crystalline form appears clearly in a magnified part of one of the images. Two connections, or bridges, can be seen in the quantum fluid, and instead of the single big hole you’d see in water, the quantum fluid has a whole train of quantized vortices. In a magnified part of the image, the MIT researchers found a number of these little holelike patterns, chained together in regularly repeating fashion.
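To make “quantized vortices” concrete: in a superfluid such as a Bose-Einstein condensate, the circulation of the fluid velocity around each vortex core can only come in integer multiples of Planck’s constant divided by the atomic mass. As a rough back-of-the-envelope figure for sodium-23 (not a number from the paper):

\[
\oint \mathbf{v}\cdot d\boldsymbol{\ell} = n\,\frac{h}{m_{\mathrm{Na}}}, \quad n = 0, \pm 1, \pm 2, \ldots,
\qquad
\frac{h}{m_{\mathrm{Na}}} \approx \frac{6.63\times10^{-34}\ \mathrm{J\,s}}{3.82\times10^{-26}\ \mathrm{kg}} \approx 1.7\times10^{-8}\ \mathrm{m^{2}/s}.
\]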

“It’s similar to what happens when clouds pass each other in the sky,” he says. “An originally homogeneous cloud starts forming successive fingers in the Kelvin-Helmholtz pattern.”

Very pretty, you say, but surely there can be no practical application. Of course there can; the universe is quantum. The research at MIT is funded by DARPA—the Defense Advanced Research Projects Agency—which hopes to use a ring of quantum tornadoes as fabulously sensitive rotation sensors.

Today if you’re a submarine lying under the sea, incommunicado, you might want to use a fiber optic gyroscope to detect slight rotational movement. Light travels both ways around the fiber coil, and if the entire thing is spinning, you should get an interference pattern. But if you use atoms rather than light, you should be able to do the job better, because atoms are so much slower. Such a quantum-tornado sensor could also measure slight changes in the earth’s rotation, perhaps to see how the core of the earth might be affecting things.
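To put a number on “better,” compare the textbook Sagnac phase shifts for an optical and a matter-wave interferometer enclosing the same area A and rotating at rate Ω (prefactors differ by factors of two across derivations, so read this as an order-of-magnitude argument rather than a description of the MIT device):

\[
\Delta\phi_{\mathrm{light}} \approx \frac{8\pi A\,\Omega}{\lambda c},
\qquad
\Delta\phi_{\mathrm{atom}} \approx \frac{4\pi m A\,\Omega}{h},
\qquad
\frac{\Delta\phi_{\mathrm{atom}}}{\Delta\phi_{\mathrm{light}}} \approx \frac{m \lambda c}{2h} = \frac{m c^{2}}{2\hbar\omega} \sim 10^{10}
\]

for sodium atoms and visible light. That enormous factor is why a cloud of slow, massive atoms can, in principle, far outdo a fiber optic gyroscope of the same size.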

The MIT researchers have gone far down the rabbit hole, but not quite to the bottom of it. Those little daughter tornadoes can be confirmed as still being Bose-Einstein condensates because even the smallest ones still have about 10 atoms apiece. If you could get down to just one per vortex, you’d have the quantum Hall effect, which is a different state of matter. And with two atoms per vortex, you’d get a “fractional quantum Hall” fluid, with each atom “doing its own thing, not sharing a wave function,” Zwierlein says.

The quantum Hall effect is now used to define the ratio of Planck’s constant to the square of the electron charge (h/e²)—a number called the von Klitzing constant—which is about as basic as basic physics gets. But the effect is still not fully understood. Most studies have focused on the behavior of electrons, and the MIT researchers are trying to use sodium atoms as stand-ins, says Zwierlein.
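For reference, with h and e now fixed exactly in the SI, that ratio works out to

\[
R_{K} = \frac{h}{e^{2}} = \frac{6.626\,070\,15\times10^{-34}\ \mathrm{J\,s}}{\left(1.602\,176\,634\times10^{-19}\ \mathrm{C}\right)^{2}} \approx 25\,812.807\ \Omega,
\]

the quantum of resistance against which Hall measurements are compared.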

So although they’re not all the way to the bottom of the scale yet, there’s plenty of room for discovery on the way to the bottom. As Feynman also might have said (sort of).

Video Friday: Welcome to 2022

Par Evan Ackerman


Your weekly selection of awesome robot videos

Video Friday is your weekly selection of awesome robotics videos, collected by your friends at IEEE Spectrum robotics. We’ll also be posting a weekly calendar of upcoming robotics events for the next few months; here's what we have so far (send us your events!):

ICRA 2022: 23–27 May 2022, Philadelphia
ERF 2022: 28–30 June 2022, Rotterdam, Germany

Let us know if you have suggestions for next week, and enjoy today's videos.


Happy Holidays from Voliro!

[ Voliro ]

Thanks, Daniel!

Merry Christmas from the Autonomous Systems Lab!

[ ASL ]

The Sberbank Robotics Laboratory warmly wishes you a happy New Year!

[ Sberbank Robotics Laboratory ]

Thanks, Alexey and Mike!

Holiday Greetings from KIMLAB!

[ KIMLAB ]

Thanks, Joohyung!

Quebec is easy mode for wintery robot videos.

[ NORLAB ]

Happy New Year from Berkshire Grey!

[ Berkshire Grey ]

Introducing John Deere’s autonomous 8R Tractor for large-scale production. To use the John Deere autonomous tractor, a farmer only needs to transport the machine to a field and configure it for autonomous operation. Using John Deere Operations Center Mobile, the farmer can swipe from left to right to start the machine, then leave the field to focus on other tasks while monitoring the machine’s status from a mobile device.

[ John Deere ]

I appreciate the idea that this robot seems to have some conception of personal space and will react when that space is rudely violated.

[ Engineered Arts ]

Merry Christmas and Happy New Year from Xiaomi Robotics Lab!

[ Xiaomi ]

Thanks, Yangwei!

We developed advanced neural control with proactive behavior learning and short-term memory for complex locomotion and lifelong adaptation of autonomous walking robots. The control method is inspired by a locomotion control strategy used by walking animals such as cats, which use their short-term visual memory to detect an obstacle and take proactive steps to avoid colliding with it.

[ VISTEC ]

Thanks, Poramate!

Not totally sure what this is from Exyn, but I do like the music.

[ Exyn ]

Nikon, weirdly, seems to be getting into the computer vision space with a high-speed, high-accuracy stereo system.

[ Nikon ]

Drone Badminton enables people with low vision to play badminton again, using a drone as the ball that players can move with a racket. It has the potential to diversify the physical activities available to people with low vision and to improve their physical and mental health.

[ Digital Nature Group ]

The Manta Ray program seeks to develop unmanned underwater vehicles (UUVs) that operate for extended durations without the need for on-site human logistics support or maintenance.

[ DARPA ]

A year in the life of Agility Robotics.

[ Agility Robotics ]

A new fabrication technique, developed by a team of electrical engineers and computer scientists, produces low-voltage, power-dense artificial muscles that improve the performance of flying microrobots.

[ MIT ]

What has NASA’s Perseverance rover accomplished since landing on the surface of Mars in February 2021? Surface Operations Mission Manager Jessica Samuels reflects on a year filled with groundbreaking discoveries at Jezero Crater and counts up the rover's achievements.

[ NASA ]

Construction is one of the largest industries on the planet, employing more than 10M workers in the US each year. Dusty Robotics believes in a future where robots and automation are standard tools employed by the construction workforce to build buildings more efficiently, safer, and at lower cost. In this talk I'll tell the story of how Dusty Robotics originated, our journey through the customer discovery process, and our vision for how robotics will change the face of construction.

[ Dusty Robotics ]

Adhesives Gain Popularity for Wearable Devices

Par Master Bond


This is a sponsored article brought to you by Master Bond.

Master Bond adhesive formulations provide solutions for challenging assembly applications in manufacturing electronic wearable devices. Product formulations include epoxies, silicones, epoxy-polyurethane hybrids, cyanoacrylates, and UV curing compounds.

There are some fundamental things to consider when deciding which adhesive is right for the assembly of electronic wearable devices. The miniaturization of devices, and the need to meet critical performance specifications across multiple substrates, require an analysis of which chemical composition best satisfies the required parameters.

These preliminary decisions are often predicated on the tradeoffs between different adhesive chemistries. Those chemistries vary widely, and in many cases weighing them carefully is essential to adhering parts and surfaces properly.


About ​Master Bond EP37-3FLF


Master Bond EP37-3FLF is an exceptionally flexible epoxy compound that forms high strength bonds that stand up well to physical impact and severe thermal cycling and shock, making it ideal for e-textile applications. Because it is flexible and produces a lower exotherm — heat released during the polymerization process — than conventional epoxy systems, EP37-3FLF lessens the stress on sensitive electronic components during cure. Reducing stress during cure is essential for protecting fragile die and other components in ultrathin, flexible electronic packages.

EP37-3FLF bonds well to a variety of substrates, including metals, composites, glass, ceramics, rubber, and many plastics. It offers superior electrical insulation properties, outstanding light transmission, especially in the 350- to 2000-nm range, and is serviceable at temperatures from 4K to 250°F. EP37-3FLF can be cured in 2-3 days at room temperature or in 2-3 hours at 200°F. Optimal properties are achieved by curing overnight at room temperature followed by an additional 1-2 hours at 200°F.

Master Bond EP37-3FLF was selected as one of six adhesives tested in a study of flexible electronic packaging for e-textiles conducted at the University of Southampton.

Learn more about Master Bond EP37-3FLF


The shape of the wearable device, its flexing and bending requirements, whether similar or dissimilar substrates are being joined, and how long and where it will be worn are some of the factors that determine the type of adhesive. The types of stresses the device will be exposed to and the environmental conditions are also consequential. Viscosity, cure speed, gel time, working life, and pot life are significant from a processing standpoint.

Adhesives are gaining popularity for wearable electronic devices because many provide structural integrity; good drop, shock, and impact performance; thermal stability; and resistance to moisture, to fluids such as sunscreen oil, soda, and sweat, to water immersion, and to normal wear and tear. Specific grades feature good electrical and thermal conductivity, bond well to dissimilar substrates, minimize stress, have high elongation or flexibility, and can be applied in ultrasmall areas for miniaturized designs. Special dual-curing products combine a UV tacking capability with a secondary heat-cure mechanism for fast cures. User-friendly solvent-free and lead-free compositions have low halogen content, offer excellent thermal cycling capability, and adhere well to metals, composites, many plastics, and fabrics.

Specific Master Bond adhesives meet USP Class VI and ISO 10993-5 standards for biocompatibility. These may be utilized in wearable, invasive, and non-invasive medical sensors used for surgeries, diagnostics, therapeutics, and monitoring systems. Some prominent applications range from sleep apnea therapy devices, dialysis machines, videoscopes, infusion pumps, monitoring equipment, and respiratory equipment to blood pressure monitoring instruments and body temperature measurement devices.


Adhesives are gaining popularity for wearable electronic devices because they provide good structural integrity, impact performance, thermal stability, and resistance to moisture as well as wear and tear.


Mobile wellness wearable sensors have been instrumental in monitoring our fitness, calorie consumption and burn, and activity levels. Through the use of many different polymeric systems, including many that contain nanofillers, Master Bond has provided medical sensor manufacturers with adhesives that aid in the design of miniaturized, lighter weight, lower power devices.

Several case studies have cited the use of Master Bond adhesives in medical sensors. In one, researchers at the University of Tennessee used EP30Med in measurement tools and gauges for their medical device applications. EP30Med was chosen for its low viscosity, non-rapid setup time, USP Class VI approval, and other performance properties.

Another case study involves electronic textile (e-textile) technology, in which microelectronics are embedded into fabrics. The University of Southampton investigated the influence of material selection and component dimensions on the reliability of an e-textile packaging approach under development. The key measures of reliability were the shear load and bending stresses of the adhesive and substrate layers of the flexible package. One of the adhesives tested was Master Bond EP37-3FLF.

Jet Fighter With a Steering Wheel: Inside the Augmented-Reality Car HUD

Par Lawrence Ulrich


The 2022 Mercedes-Benz EQS, the first all-electric sedan from the company that essentially invented the automobile in 1885–1886, glides through Brooklyn. But this is definitely the 21st century: Blue directional arrows seem to paint the pavement ahead via an augmented-reality (AR) navigation system and color head-up display, or HUD. Digital street signs and other graphics are superimposed over a camera view on the EQS’s much-hyped “Hyperscreen”—a 142-centimeter (56-inch) dash-spanning wonder that includes a 45-cm (17.7-inch) OLED center display. But here’s my favorite bit: As I approach my destination, AR street numbers appear and then fade in front of buildings as I pass, like flipping through a virtual Rolodex; there’s no more craning your neck and getting distracted while trying to locate a home or business. Finally, a graphical map pin floats over the real-time scene to mark the journey’s end.

It’s cool stuff, albeit for folks who can afford a showboating Mercedes flagship that starts above US $103,000 and topped $135,000 in my EQS 580 test car. But CES 2022 in Las Vegas saw Panasonic unveil a more-affordable HUD that it says should reach a production car by 2024.

Head-up displays have become a familiar automotive feature, with a speedometer, speed limit, engine rpms, or other information that hovers in the driver’s view, helping keep eyes on the road. Luxury cars from Mercedes, BMW, Genesis, and others have recently broadened HUD horizons with larger, crisper, more data-rich displays.

Video: Mercedes-Benz augmented-reality navigation (youtu.be)

Panasonic, powered by Qualcomm processing and AI navigation software from Phiar Technologies, hopes to push into the mainstream with its AR HUD 2.0. Its advances include an integrated eye-tracking camera to accurately match AR images to a driver’s line of sight. Phiar’s AI software lets it overlay crisply rendered navigation icons and spot or highlight objects including vehicles, pedestrians, cyclists, barriers, and lane markers. The infrared camera can monitor potential driver distraction, drowsiness, or impairment, with no need for a standalone camera as with GM’s semiautonomous Super Cruise system.

Panasonic’s AR HUD system includes eye tracking to match AR images to the driver’s line of sight. Photo: Panasonic

Andrew Poliak, CTO of Panasonic Automotive Systems Company of America, said the eye tracker spots a driver’s height and head movement to adjust images in the HUD’s “eyebox.”

“We can improve fidelity in the driver’s field of view by knowing precisely where the driver is looking, then matching and focusing AR images to the real world much more precisely,” Poliak said.

For a demo on the Las Vegas Strip, using a Lincoln Aviator as a test mule, Panasonic used its SkipGen infotainment system and a Qualcomm Snapdragon SA8155 processor. But AR HUD 2.0 could work with a range of in-car infotainment systems. That includes a new Snapdragon-powered generation of Android Automotive—an open-source infotainment ecosystem, distinct from the Android Auto phone-mirroring app. The first-generation, Intel-based system made an impressive debut in the Polestar 2, from Volvo’s electric brand. The uprated Android Automotive will run in 2022’s lidar-equipped Polestar 3 SUV and potentially millions of cars from General Motors, Stellantis, and the Renault-Nissan-Mitsubishi alliance.

Gene Karshenboym helped develop Android Automotive for Volvo and Polestar as Google’s head of hardware platforms. Now he’s chief executive of Phiar, a software company in Redwood City, Calif. Karshenboym said AI-powered AR navigation can greatly reduce a driver’s cognitive load, especially as modern cars put ever more information at drivers’ eyes and fingertips. Current embedded navigation screens force drivers to look away from the road and translate 2D maps as they hurtle along.

“It’s still too much like using a paper map, and you have to localize that information with your brain,” Karshenboym says.

In contrast, following arrows and stripes displayed on the road itself—a digital yellow brick road, if you will—reduces fatigue and the notorious stress of map reading. It’s something that many direction-dueling couples might give thanks for.

“You feel calmer,” he says. “You’re just looking forward, and you drive.”

Video: Street testing Phiar’s AI navigation engine (youtu.be)

The system classifies objects on a pixel-by-pixel basis at up to 120 frames per second. Potential hazards, like an upcoming crosswalk or a pedestrian about to dash across the road, can be highlighted by AR animations. Phiar trained its AI on synthetic data covering snowstorms, poor lighting, and other difficult conditions, teaching it to fill in the blanks and create a reliable picture of its environment. And the system doesn’t require granular maps, monster computing power, or pricey sensors such as radar or lidar. Its AR tech runs off a single front-facing, roughly 720p camera, powered by a car’s onboard infotainment system and CPU.

“There’s no additional hardware necessary,” Karshenboym says.

The company is also making its AR markers appear more convincing by “occluding” them with elements from the real world. In Mercedes’s system, for example, directional arrows can run atop cars, pedestrians, trees, or other objects, slightly spoiling the illusion. In Phiar’s system, those objects can block off portions of a “magic carpet” guidance stripe, as though it were physically painted on the pavement.

“It brings an incredible sense of depth and realism to AR navigation,” Karshenboym says.
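The occlusion trick is easy to express in code: render the guidance stripe as an overlay layer, then composite it into the camera frame only where a per-pixel segmentation mask says the scene is background, so detected cars and pedestrians appear to pass in front of the graphic. The sketch below is a minimal NumPy illustration of that idea under those assumptions; the function and array names are hypothetical, and this is not Phiar’s actual implementation.

import numpy as np

def composite_ar_overlay(frame, overlay, overlay_alpha, foreground_mask):
    """Blend an AR overlay (e.g., a guidance stripe) into a camera frame,
    suppressing it wherever a foreground object was segmented so that
    cars, pedestrians, and so on occlude the graphic."""
    alpha = overlay_alpha.copy()
    alpha[foreground_mask] = 0.0            # foreground pixels block the overlay
    alpha = alpha[..., None]                # broadcast over the color channels
    blended = (1.0 - alpha) * frame.astype(np.float32) + alpha * overlay.astype(np.float32)
    return blended.astype(np.uint8)

# Synthetic stand-ins for a real camera frame and a segmenter's output:
h, w = 720, 1280
frame = np.zeros((h, w, 3), np.uint8)                       # camera image
stripe = np.zeros_like(frame)
stripe[:, 600:680] = (0, 200, 255)                          # a vertical "magic carpet" stripe
alpha = (stripe.sum(axis=-1) > 0).astype(np.float32) * 0.6  # semi-transparent where drawn
mask = np.zeros((h, w), bool)
mask[300:500, 550:750] = True                               # a segmented "car" in the way
result = composite_ar_overlay(frame, stripe, alpha, mask)   # stripe vanishes behind the car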

Once visual data is captured, it can be processed and sent anywhere an automaker chooses, whether a center display, a HUD, or passenger entertainment screens. Those passenger screens could be ideal for Pokémon-style games, the metaverse, or other applications that combine real and virtual worlds.

Poliak said some current HUD units hog up to 14 liters of volume in a car. A goal is to reduce that to 7 liters or less, while simplifying and cutting costs. Panasonic says its single optical sensor can effectively mimic a 3D effect, taking a flat image and angling it to offer a generous 10- to 40-meter viewing range. The system also advances an industry trend by integrating display domains—including a HUD or driver’s cluster—in a central, powerful infotainment module.

“You get smaller packaging and a lower price point to get into more entry-level vehicles, but with the HUD experience OEMs are clamoring for,” Poliak said.
