Today, 28 March 2020: IEEE Spectrum Recent Content (full text)

To Answer Dire Shortages, This Healthcare Team Designed, 3D-Printed, and Tested Their Own COVID-19 Swabs in One Week

By Megan Scudellari

Last Wednesday, Todd Goldstein was working on other projects. Then physicians in the New York-based hospital system where he works, hard hit by a surge in COVID-19 cases, told him they were worried about running out of supplies.

Specifically, they needed more nasal test swabs. A nasopharyngeal swab for COVID-19 is no ordinary Q-tip. These specialty swabs cannot be made of cotton or have wooden handles, and they must be long and skinny to fit up behind the nose into the upper part of the throat.

Goldstein, director of 3D Design and Innovation at Northwell Health, a network of 23 hospitals and 800 outpatient facilities, thought, “Well, we can make that.” He quickly organized a collaboration with Summer Decker and Jonathan Ford of the University of South Florida, and 3D-printing manufacturer Formlabs. In one week, the group designed, made, tested, and are now distributing 3D-printed COVID-19 test swabs.

Northwell’s eight 3D printers are now printing about 2,000 swabs a day. Add in Massachusetts-based Formlabs’ factory of 250 3D printers, which the company has now dedicated to the effort, and Goldstein estimates they could ramp up to one million swabs per day if the need is great enough.
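As a rough sanity check on those rates (a back-of-envelope sketch of my own; only the 8 printers, 2,000 swabs a day, and 250-printer factory figures come from the article):

```python
# Back-of-envelope throughput sketch. Only the stated inputs are from the
# article (8 printers, 2,000 swabs/day, 250-printer factory); the derived
# numbers and the build-density inference are my own.

northwell_printers = 8
northwell_swabs_per_day = 2_000
formlabs_printers = 250

per_printer = northwell_swabs_per_day / northwell_printers   # 250 swabs/printer/day
fleet = northwell_printers + formlabs_printers               # 258 printers
at_current_rate = fleet * per_printer                        # 64,500 swabs/day

# Per-printer rate the combined fleet would need for one million swabs/day:
needed = 1_000_000 / fleet                                   # ~3,876 swabs/printer/day

print(per_printer, at_current_rate, round(needed))
```

At the stated per-printer rate, the combined fleet yields roughly 64,500 swabs a day, so reaching one million per day would presumably require packing many more swabs onto each build plate or adding printers.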

Todd Goldstein
Photo: Northwell Health

“This has been the worst-case scenario for hospitals all across the world,” says Goldstein. “We all want the same exact stuff and in huge quantities.”

Before COVID-19, there wasn’t a ton of demand for these swabs except for the occasional flu check, says Goldstein. Now, healthcare workers will swab millions of people within weeks, and the supply shortage has begun. In Iceland, for example, authorities say COVID-19 testing is now limited by a lack of test swabs. To make matters worse, one of the main specialty swab manufacturers, Copan Diagnostics, is based in Italy, the epicenter of an outbreak. The company has asked customers and distributors to ration orders, according to NPR.

When Goldstein first heard of the swab shortage, he turned to Formlabs, which supplies Northwell’s 3D printers, specifically to source the raw material for the job—a biocompatible resin typically used to make dental guides that is safe to use in noses and throats. As chance would have it, last November Formlabs acquired its Ohio-based supplier of that resin and therefore maintains a steady supply of the material, according to Jeff Boehm, a spokesperson for the company.

Formlabs also proposed a sterilization protocol that could be used to prepare the printed swabs for medical use. “We’re not reinventing the wheel. We were able to use what we had in our toolbox to create these swabs,” says Goldstein. 

3D printed nasal swab tests
Photo: Formlabs

By Friday, Goldstein’s Northwell team had designed six variations of swabs, as had their partners at the University of South Florida, Tampa. Together, the teams narrowed the options down to one design, shaped like a tiny wire bristle brush.

Unlike the Northwell team, whose research facility is closed due to the citywide outbreak, the Tampa-based team had an open wet lab to work in. Over the weekend, while Goldstein tested the swab’s mechanical properties, the Tampa team performed the necessary benchwork, testing to make sure the swab worked correctly, picking up appropriate amounts of mucus, cells, and coronavirus.

The swabs worked, so on Tuesday and Wednesday this week, Goldstein printed more and dropped them off to clinics within the Northwell network, asking physicians to use the 3D printed swabs along with standard swabs on suspected COVID-19 patients and give him feedback.

Formlabs 3D printer with nasal swabs
Photo: Formlabs

On Friday, medical staff reported back that the swabs worked reliably, so on Saturday morning the team began printing full-blast, says Goldstein. Northwell has six automated Form 3D printers, able to operate 24/7 with minimal oversight, plus two standalone machines that require frequent manual intervention. Under normal circumstances, the printers are used to make things like anatomical models, surgical guides, and the first amphibious prosthetic leg.

The swabs have FDA Class I exempt status, so they can be made and distributed to medical centers. “Our hospitals need these now,” says Goldstein. “If we have enough swabs here and other hospitals around us don’t have enough, we’re happy to send some to them. We’re all in the same boat. If we have extra resources, we’re going to give them to you.”

The team plans to release the design for anyone with a Formlabs printer to print, he adds. “Any single dental lab can start making these swabs tomorrow if they wanted to, and help out their local hospital.” 

Video Friday: Qoobo the Headless Robot Cat Is Back

By Evan Ackerman

Video Friday is your weekly selection of awesome robotics videos, collected by your Automaton bloggers. We’ll also be posting a weekly calendar of upcoming robotics events for the next few months; here’s what we have so far (send us your events!):

ICARSC 2020 – April 15-17, 2020 – [Online Conference]
ICRA 2020 – May 31-June 4, 2020 – [TBD]
ICUAS 2020 – June 9-12, 2020 – Athens, Greece
RSS 2020 – July 12-16, 2020 – [Online Conference]
CLAWAR 2020 – August 24-26, 2020 – Moscow, Russia

Let us know if you have suggestions for next week, and enjoy today’s videos.

You need this dancing robot right now.

By Vanessa Weiß at UPenn.

[ KodLab ]

Remember Qoobo the headless robot cat? There’s a TINY QOOBO NOW!

It’s available now on a Japanese crowdfunding site, but I can’t tell if it’ll ship to other countries.

[ Qoobo ]

Just what we need, more of this thing.

[ Vstone ]

HiBot, which just received an influx of funding, is adding new RaaS (robotics as a service) offerings to its collection of robot arms and snakebots.

[ HiBot ]

If social distancing already feels like too much work, Misty is like that one-in-a-thousand child that enjoys cleaning. See her in action here as a robot disinfector and sanitizer for common and high-touch surfaces. Alcohol reservoir, servo actuator, and nozzle not (yet) included. But we will provide the support to help you build the skill.

[ Misty Robotics ]

After seeing this tweet from Kate Darling that mentions an MIT experiment in which “a group of gerbils inhabited an architectural environment made of modular blocks, which were manipulated by a robotic arm in response to the gerbils’ movements,” I had to find a video of the robot arm gerbil habitat. The best I could do was this 2007 German remake, but it’s pretty good:

[ Lutz Dammbeck ]

We posted about this research almost a year ago when it came out in RA-L, but I’m not tired of watching the video yet.

Today’s autonomous drones have reaction times of tens of milliseconds, which is not enough for navigating fast in complex dynamic environments. To safely avoid fast moving objects, drones need low-latency sensors and algorithms. We depart from state of the art approaches by using event cameras, which are novel bioinspired sensors with reaction times of microseconds. We demonstrate the effectiveness of our approach on an autonomous quadrotor using only onboard sensing and computation. Our drone was capable of avoiding multiple obstacles of different sizes and shapes at relative speeds up to 10 meters/second, both indoors and outdoors.

[ UZH ]

In this video we present the autonomous exploration of a staircase with four sub-levels and the transition between two floors of the Satsop Nuclear Power Plant during the DARPA Subterranean Challenge Urban Circuit. The utilized system is a collision-tolerant flying robot capable of multi-modal Localization And Mapping fusing LiDAR, vision and inertial sensing. Autonomous exploration and navigation through the staircase is enabled through a Graph-based Exploration Planner implementing a specific mode for vertical exploration. The collision-tolerance of the platform was of paramount importance especially due to the thin features of the involved geometry such as handrails. The whole mission was conducted fully autonomously.


At Cognizant’s Inclusion in Tech: Work of Belonging conference, Cognizant VP and Managing Director of the Center for the Future of Work, Ben Pring, sits down with Mary “Missy” Cummings. Missy is currently a Professor at Duke University and the Director of the Duke Robotics Lab. Interestingly, Missy began her career as one of the first female fighter pilots in the U.S. Navy. Working in predominantly male fields – the military, tech, academia – Missy understands the prevalence of sexism, bias, and gender discrimination.

Let’s hear more from Missy Cummings on, like, everything.

[ Duke ] via [ Cognizant ]

You don’t need to mountain bike for the Skydio 2 to be worth it, but it helps.

[ Skydio ]

Here’s a look at one of the preliminary simulated cave environments for the DARPA SubT Challenge.

[ Robotika ]

SherpaUW is a hybrid walking and driving exploration rover for subsea applications. The locomotive system consists of four legs with 5 active DoF each. Additionally, a 6 DoF manipulation arm is available. All joints of the legs and the manipulation arm are sealed against water. The arm is pressure compensated, allowing the deployment in deep sea applications.

SherpaUW’s hybrid crawler design is intended to allow for extended long-term missions on the sea floor. Since it requires no extra energy to maintain its posture and position compared to traditional underwater ROVs (Remotely Operated Vehicles), SherpaUW is well suited for repeated and precise sampling operations, for example monitoring black smokers over a longer period of time.

[ DFKI ]

In collaboration with the Army and Marines, 16 active-duty Army soldiers and Marines used Near Earth’s technology to safely execute 64 resupply missions in an operational demonstration at Fort A.P. Hill, Virginia, in September 2019. This video shows some of the modes used during the demonstration.

[ NEA ]

For those of us who aren’t either lucky enough or cursed enough to live with our robotic co-workers, HEBI suggests that now might be a great time to try simulation.

[ GitHub ]

DJI Phantom 4 Pro V2.0 is a complete aerial imaging solution, designed for the professional creator. Featuring a 1-inch CMOS sensor that can shoot 4K/60fps videos and 20MP photos, the Phantom 4 Pro V2.0 grants filmmakers absolute creative freedom. The OcuSync 2.0 HD transmission system ensures stable connectivity and reliability, five directions of obstacle sensing ensures additional safety, and a dedicated remote controller with a built-in screen grants even greater precision and control.

US $1600, or $2k with VR goggles.

[ DJI ]

Not sure why now is the right time to introduce the Fetch research robot, but if you forgot it existed, here’s a reminder.

[ Fetch ]

Two keynotes from the MBZIRC Symposium, featuring Oussama Khatib and Ron Arkin.


And here are a couple of talks from the 2020 ROS-I Consortium.

Roger Barga, GM of AWS Robotics and Autonomous Services at Amazon, shares some of the latest developments around ROS and advanced robotics in the cloud.

Alex Shikany, VP of Membership and Business Intelligence for A3, shares insights from his organization on the relationship between robotics growth and employment.

[ ROS-I ]

Many tech companies are trying to build machines that detect people’s emotions, using techniques from artificial intelligence. Some companies claim to have succeeded already. Dr. Lisa Feldman Barrett evaluates these claims against the latest scientific evidence on emotion. What does it mean to “detect” emotion in a human face? How often do smiles express happiness and scowls express anger? And what are emotions, scientifically speaking?

[ Microsoft ]

Yesterday, 27 March 2020: IEEE Spectrum Recent Content (full text)

IEEE’s Response to the COVID-19 Pandemic

By Toshio Fukuda
Illustration: iStockphoto

THE INSTITUTE As you are aware, on 11 March the World Health Organization officially declared the novel coronavirus, COVID-19, a pandemic. This global health crisis is a unique challenge that has impacted many members of the IEEE family. We would like to express our concern and support for all the members of the IEEE community, our staff, our families, and all others affected by this outbreak.

Governments around the world are now issuing restrictions on travel, gatherings, and meetings in an effort to limit and slow the spread of the virus. The health and safety of the IEEE community is our first priority and IEEE is supporting these efforts.

We request that all members avoid conducting in-person activities in areas impacted by the coronavirus threat and instead maximize the use of our online and virtual alternatives. IEEE provides many tools to support our membership with virtual engagement, including our online collaboration space IEEE Collabratec.

Following the advice of local authorities, most IEEE conferences and meetings have already been postponed or replaced with virtual meetings.

IEEE publications continue to accept submissions and publish impactful cutting-edge research. Our online publications remain available to researchers and students around the world.

IEEE standards development also continues, using online collaboration to replace in-person working groups.

IEEE Educational Activities continues to offer online instruction and IEEE’s preuniversity educational resources may be of assistance to families of students whose classroom activities have been disrupted.

All IEEE operations are continuing. At many of our global offices, IEEE staff will support IEEE’s mission while teleworking from their homes to minimize risk. As of this time, on the advice of local authorities, IEEE offices in China remain open.

We know that many of you are directly and indirectly engaged in the fight against this disease: supporting biomedical research and applications, supporting data analysis and modeling, maintaining critical communications and power infrastructure, and caring for each other. We are grateful for your work.

We extend our heartfelt thanks and appreciation to all of our IEEE members for your understanding. These are difficult times, but we will get through them by working together. Thank you for your support of our shared mission to advance technology for humanity.

Please stay safe and well.

Toshio Fukuda is the 2020 IEEE president. Stephen Welby is the IEEE executive director.

COVID-19 Makes It Clear That Broadband Access Is a Human Right

By Stacey Higginbotham
Illustration: hystericalglamour

Like clean water and electricity, broadband access has become a modern-day necessity. The spread of COVID-19 and the ensuing closure of schools and workplaces and even the need for remote diagnostics make this seem like a new imperative, but the idea is over a decade old. Broadband is a fundamental human right, essential in times like now, but just as essential when the world isn’t in chaos.

A decade ago, Finland declared broadband a legal right. In 2011, the United Nations issued a report [PDF] with a similar conclusion. At the time, the United States was also debating its broadband policy and a series of policy efforts that would ensure everyone had access to broadband. But decisions made by the Federal Communications Commission between 2008 and 2012 pertaining to broadband mapping, network neutrality, data caps and the very definition of broadband are now coming back to haunt the United States as cities lock themselves down to flatten the curve on COVID-19.

While some have voiced concerns about whether the strain of everyone working remotely might break the Internet, the bigger issue is that not everyone has Internet access in the first place. Most U.S. residential networks are built for peak demand, and even the 20 to 40 percent increase in network traffic seen in locations hard hit by the virus won’t be enough to buckle networks.

An estimated 21 to 42 million people in the United States don’t have physical access to broadband, and even more cannot afford it or are reliant on mobile plans with data limits. For a significant portion of our population, this makes remote schooling and work prohibitively expensive at best and simply not an option at worst. This number hasn’t budged significantly in the last decade, and it’s not just a problem for the United States. In Hungary, Spain, and New Zealand, a similar percentage of households also lack a broadband subscription according to data from the Organization for Economic Co-operation and Development.

Faced with the ongoing COVID-19 outbreak, Internet service providers in the United States have already taken several steps to expand broadband access. Comcast, for example, has made its public Wi-Fi network available to anyone. The company has also expanded its Internet Essentials program—which provides a US $9.95 monthly connection and a subsidized laptop—to a larger number of people on some form of government assistance.

To those who already have access but are now facing financial uncertainty, AT&T, Comcast, and more than 200 other U.S. ISPs have pledged not to cut off subscribers who can’t pay their bills and not to charge late fees, as part of an FCC plan called Keep Americans Connected. Additionally, AT&T, Comcast, and Verizon have also promised to eliminate data caps for the near future, so customers don’t have to worry about blowing past a data limit while learning and working remotely.

It’s good to keep people connected during quarantines and social distancing, but going forward, some of these changes should become permanent. It’s not enough to say that broadband is a basic necessity; we have to push for policies that ensure companies treat it that way.

“If it wasn’t clear before this crisis, it is crystal clear now that broadband is a necessity for every aspect of modern civic and commercial life. U.S. policymakers need to treat it that way,” FCC Commissioner Jessica Rosenworcel says. “We should applaud public spirited efforts from our companies, but we shouldn’t stop there.” 

This article appears in the May 2020 print issue as “We All Deserve Broadband.”

What the Right To Repair Movement Gets Wrong

By G. Pascal Zachary

The views expressed here are solely those of the author and do not represent positions of IEEE Spectrum or the IEEE.

THE ENGINEER’S PLACE The end came with a whimper. My personal laser printer showed a persistent error message. In the past, closing the cover cleared the message and let me print. Not this time. I surveyed guidance on the Web, even studied the remedies proposed by the printer’s maker. No joy.

After weeks, and then months, of opening and closing the cover and turning the printer off and on, I surrendered. Last week, I unplugged it, removed the ink cartridge (for reuse) and carried the printer to a nearby responsible electronics recycler.

I cringed and wondered. Should I feel shame for contravening the nifty dictum of the self-styled “right to repair” movement, which insists that “instead of throwing things out,” we should “reuse, salvage and rebuild”?

In the case of my zombie printer, I’m convinced the recycler was the best destination. A near-identical model, brand new, sells on Amazon for $99. The ink cartridge costs a third as much. Even if the printer could be repaired, at what expense in parts and labor?

So I bought a new printer.

When I ponder the wisdom of my decision, I think “Shame on me.” Rather than fight to repair my wounded device, I did what Big Tech and other manufacturers increasingly want owners to do. I threw it away.

Today repair remains an option, but one that makers want to monopolize or eliminate. Apple, the world’s most valuable company, is the worst offender, effectively forbidding owners to repair or maintain their smartphones. Not even the battery is replaceable by an owner. Repairing a cracked screen yourself is forbidden, too; such unauthorized fixes void Apple’s warranty.

Many people have a tale of trying to bootleg an iPhone repair. My favorite is when I found a guy on Yelp! who asked me to meet him inside a Starbucks. His nom de repair is ScreenDoc, and he ran our rendezvous like a drug buy. He only entered the shop after I ordered a coffee and sat down. Seated at my table, working with tiny tools, he swapped my broken screen for a new one. I slipped him $90 in cash, and he left.

Sound tawdry? The nationwide campaign led by Repair.Org agrees, which is why it supports legislation in at least 20 states to promote “your right to repair” by requiring manufacturers “to share the information necessary for repair.”

Long before the advent of the repair campaign, and a related movement called the Maintainers, there were loud critics of “planned obsolescence.” In Depression-era America, an influential book published in 1932 advocated “creative waste”—the idea that throwing things away and buying new things can fuel a strong economy. One advocate, Bernard London, wrote a paper in 1932, “Ending the Depression Through Planned Obsolescence,” in which he called on the federal government to print expiration dates on manufactured goods. “Furniture and clothing and other commodities should have a span of life, just as humans have,” he wrote. “They should be retired, and replaced by fresh merchandise.”

Manufacturers purposely made stuff that broke or wore out, so consumers would have to buy the stuff again. Echoes of this practice persist. In shopping for new tires, for instance, drivers pay more for those “rated” to last longer.

The big threat to devices today isn’t failure but rather “creative destruction,” the constant advent of new and improved stuff. Who needs to think about repairs when we are dazzled by the latest “upgrade”?

The newest iPhones, for instance, are promoted on the appeal of their improved cameras. The latest Apple Watch series boasts new band colors. Such incremental improvements long predate Apple’s popularity. One hundred years ago, General Motors decided to release new models, new colors, and faster engines every year. “The changes in the new model should be so novel and attractive as to create demand…and a certain amount of dissatisfaction with past models as compared with the new one,” wrote Alfred Sloan, the automaker’s then-CEO, in his 1963 autobiography My Years With General Motors.

Some of us never grow disenchanted with certain machines. We love them forever. And we strive to keep them going. Some cherished cars fall into this category, and computers do, too. I’m typing this article on my beloved 2014 MacBook Pro. Its battery is toast, so I can securely use the laptop only while it’s plugged in. And I type on an external keyboard because the original keys are so worn out that a few won’t function at all, even though Apple has twice replaced the key caps for me.

I don’t want my MacBook Pro to die; yet my repair options are ruled by Apple. And a cruel master is she. My best path forward is to ask Apple to replace the keyboard and battery. I dread finding out whether Apple still offers this option. Though I feel no shame regarding my utter dependence on Apple for repairs, I do feel outrage and puzzlement. I am aware that the do-it-yourself (DIY) movement has transformed how we maintain our homes and our bodies, how we eat and drink, work and play.

But DIY maintenance is not for everybody or appropriate for every situation. Nor does it inevitably produce greater “caring.” Results vary. Quality can suffer. While a person’s self-esteem may rise with every home improvement they carry out, the value of their home may decline as a result (because of the quality of the DIY fixes). I favor a simple rule: encourage consumers to repair if they wish, but don’t insist on self-repair in every circumstance, and leave open the possibility that the original makers of complex devices will repair them best (Tesla owners, take heed!).

When self-reliance becomes non-negotiable, the results can be dispiriting. But when the impulse to do things yourself, like brewing your own beer, baking your own bread, raising your own chickens and building your own computers, takes hold, the results can be good for your soul.

In 1974, a repair enthusiast named Robert Pirsig published a book that proved highly influential and sold millions of copies. Zen and the Art of Motorcycle Maintenance came to define a spiritual and mental outlook by contrasting the approaches of two bike owners. One rides an expensive new bike and relies on professionals to repair it. The other rides an older bike that he repairs on his own and, by doing so, hones his problem-solving abilities and, unexpectedly, connects to a deeper wisdom that enhances his sense of dignity and endows his life with greater meaning.

The shift in attitudes a half-century ago was dramatic, reflecting the profound expansion of the human-built world. Once humans sought to “connect” with nature; now they wished to do the same (or more) with their machines. In many ways, the repair movement is a revival of this venerable counter-cultural tradition.

Today’s repair enthusiasts would have us believe that the well-maintained artifact is the new beautiful. But denying consumers the ability to repair their stuff is, to me, chiefly an economic, not a spiritual or aesthetic, issue.

The denial of the repair option is not limited to laptops and smart phones. Automobiles are now essentially computers on wheels. Digital diagnostics make repair no longer the dominion of the clever tinkerer. Specialized software, reading reports from the sensors scattered throughout your car, decides which “modules” to replace. The ease comes at a price. Your dealer now dominates the repair business. Independent car shops often can’t or won’t invest in the car manufacturer’s expensive software. And the hardy souls that once maintained their own vehicles, in their driveway or on the street, are as close to extinction as the white rhino.

The predatory issue is central. The denial of the repair option is often a form of profiteering. The manufacturer earns money from what he or she considers the “after market.” Many makers of popular devices now see repair and maintenance as a kind of annuity, a stream of revenue similar in type to that provided by sales of a printer cartridge or razor blade. For auto dealers, profits from “service” now can exceed profits from sales of new cars. Increasingly products are designed, across many categories, to render impossible, or greatly limit, repair by owner.

I am not sure the practice is wrong, and certainly not wrong in all cases. The profits from repair are often justified by claims of superior service. Brand-name makers, in theory, can control reliability by maintaining their own devices. Reliability easily conflates with “peace of mind,” so that the repair path collides squarely with another basic human urge: convenience.

Not everyone opposes convenience, so the Repair movement might regret choosing to advocate for a “right” to repair rather than an “option.” An option implies protecting a consumer’s choice, not mandating a specific repair scenario. I’m skeptical about applying the language of legal rights to the problem of repair and maintenance, because there are many cases where technology companies in particular have the obligation to repair problems, and not foist them onto their customers.

Here’s a live example. Among my chief reasons for my loyalty to the iPhone is that Apple supplies updated software that protects me against viruses and security hacks; Apple even installs this software on my phone sometimes without my conscious assent, or awareness. If I had to assent explicitly to each iPhone software update, I would invariably fail to have the latest protection and then suffer the negative consequences. So I don’t want to be responsible for repairing or maintaining a phone that is inherently collective in nature. I am freer and happier when Apple does it.

I understand that ceding the repair to an impersonal System might seem to libertarians like a road to serfdom. But having the System in charge of repair probably makes sense for essential products and services.

The artifacts in our world are profoundly networked now, and even though some devices look and feel individual to us, they are not. Their discreteness is an illusion. Increasingly no person is a technological island. Our devices are part of systems that depend on collective action and communal support.

Given the deep interconnectedness of our built environment, the distinction between repairing your own devices and letting others do so breaks down; and insisting on maintaining the distinction strikes me as inherently anti-social and destructive to the common good. At the very least the question of who repairs what should be viewed as morally neutral. Our answers should be shaped by economics and practicality, not romantic notions about individual freedom and responsibility.

Because the right-to-repair movement is based on a romantic notion, and pits those who maintain against those who don’t, a backlash against the concept is inevitable. A healthier approach to the genuine challenge of maintaining technological systems, and their dependent devices, would be to also strengthen collective responses and systems of repair and maintenance.

Much is at stake in this argument. Thinking about who is responsible for what aspects of our techno-human condition helps clarify what forms of resistance are possible in a world dominated by Big Tech companies and complex socio-technical systems. Resistance can and should take many forms, but resistance will be far more effective, I submit, if we do not choose repair and maintenance as a proxy for democratic control over innovation.

So I offer a different solution. Rather than burden individuals with enhanced rights and duties for repair and maintenance of our devices, let’s demand that makers of digitally controlled stuff make repairs at fair prices, quickly and reliably. Or maybe we go further and demand that these companies repair and maintain their products at a slight loss, or even a large loss, in order to incentivize them to design and build high-quality stuff in the first place: stuff that requires less maintenance and fewer repairs.

By ensuring that repair is fair, reliable, and low cost by law and custom, we can achieve the best of both worlds: keep our gadgets running and feel good knowing that the quality of our stuff is not the measure of ourselves.

Wine Is Going Out of Style—in France

By Vaclav Smil
Photo-Illustration: Francesco Carta Fotografo/Getty Images

France and wine—what an iconic link, and for centuries, how immutable! Wine was introduced by Greeks before the Romans conquered Gaul. Production greatly expanded during the Middle Ages, and since then the very names of the regions—Bordeaux, Bourgogne, Champagne—have become a symbol of quality everywhere. Thus has the French culture of wine long been a key signifier of national identity.

Statistics for French wine consumption begin in 1850 with a high mean of 121 liters per capita per year, which is nearly two glasses per day. By 1890, a Phylloxera infestation had cut the country’s grape harvest by nearly 70 percent from its 1875 peak, and French vineyards had to be reconstituted by grafting on resistant rootstocks from the United States. Although annual consumption of wine did fluctuate, rising imports prevented any steep decline in the total supply. Vineyard recovery brought the per capita consumption to a pre-World War I peak of 125 L in 1909, equaled again only in 1924. The all-time record of 136 L was set in 1926, after which the rate fell only slightly to 124 liters per capita in 1950.
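For scale, the per-capita figures convert to glasses per day as follows (a quick sketch; the 150-milliliter glass size is my assumption, not a figure from the column):

```python
# Convert annual per-capita wine consumption (liters) to glasses per day.
# A ~150 mL glass is an assumption; the column does not specify a size.

def glasses_per_day(liters_per_year: float, glass_ml: float = 150.0) -> float:
    return liters_per_year * 1000.0 / 365.0 / glass_ml

print(round(glasses_per_day(121), 1))  # 1850 mean: 2.2 glasses/day
print(round(glasses_per_day(40), 2))   # latest mean: 0.73 glasses/day
```

With a more generous 165 mL pour, the 1850 mean works out to almost exactly two glasses a day, consistent with the “nearly two glasses” figure.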

Postwar, the French standard of living remained surprisingly low: According to the 1954 census, only 25 percent of homes had an indoor toilet. But rapidly rising incomes during the 1960s brought dietary shifts, notably a decline in wine drinking per capita. It fell to about 95 L in 1980, to 71 L in 1990, and then to 58 L in 2000—about half what it had been a century before. The latest available data shows the mean at just 40 L.

France’s wine consumption survey of 2015 shows deep gender and generational divides that explain the falling trend. Forty years ago, more than half of French adults drank wine nearly every day; now it’s just 16 percent, with 23 percent among men and only 11 percent among women. Among people over 65, the rate is 38 percent; for people 25 to 34 years of age, it is 5 percent, and for 15- to 24-year-olds, it’s only 1 percent. The same divides apply to all alcoholic drinks, as beer, liquors, and cider have also seen gradual consumption declines, while the beverages with the highest average per capita gains include mineral and spring water, roughly doubling since 1990, as well as fruit juices and carbonated soft drinks.

Alcoholic beverages are thus fast disappearing from French culture. And although no other traditional wine-drinking country has seen greater declines in absolute or relative terms, Italy comes close, and wine consumption has also decreased in Spain and Greece.

Only one upward trend persists: French exports of wine set a new record, at about €9.7 billion, in 2018. Premium prices and exports to the United States and China are the key factors. American drinkers have been the largest importers of French wines, and demand by newly rich Chinese has also claimed a growing share of sales. But in the country that gave the world countless vins ordinaires as well as exorbitantly priced Grand Crus Classés, the clinking of stemmed glasses and wishes of santé have become an endangered habit.

This article appears in the April 2020 print issue as “(Not) Drinking Wine.”


IEEE Standards Association Launches a Platform for Open Source Collaboration

0s and 1s form a globe shape
Illustration: iStockphoto

THE INSTITUTE After adopting a new visual identity last year to signal its growth beyond standards development, the IEEE Standards Association recently introduced a platform for new technical communities to collaborate on open-source projects. Called IEEE SA Open, the platform enables independent software developers, startups, industry, academic institutions, and others to create, test, manage, and deploy innovative projects in a collaborative, safe, and responsible environment.

The neutral platform is available to anyone developing open-source projects. It also will help developers increase their project’s visibility, drive adoption, and grow their community.

Many IEEE members from several technical societies and standards groups have already expressed interest in pursuing open-source collaboration within the organization.


Today, much of the world’s infrastructure is run by software, and that software needs to comply with standards in communications networking, electrical grids, agriculture, and the like, IEEE Fellow Robert Fish, IEEE SA president, said during a recent interview with Radio Kan.

“A lot of standardization work winds up standardizing technologies that are implemented through software,” he said. “Our idea is that the next stage of standardization might include not just producing the documents that have the technical specifications in them, but also the software that implements it.”

As software becomes increasingly prevalent in the world today, ethical alignment, reliability, transparency, and democratic governance become must-haves. IEEE is uniquely positioned to endow open-source projects with these attributes. Indeed, with the addition of the new platform, the IEEE SA provides developers with proven mechanisms throughout the life cycle of incubating promising technologies—including research, open source development, standardization, and go-to-market services. The platform also exposes earlier-stage technology research from academia to industry for potential capitalization opportunities.

IEEE SA Open programs provide exceptional opportunities to all IEEE communities, especially to members who are working on advanced solutions.

To learn more, visit the IEEE SA Open page.

Scientists Use Stem Cells to Treat COVID-19 Patients in China

By Amy Nordrum

More than 100 COVID-19 patients at a hospital in Beijing are receiving injections of mesenchymal stem cells to help them fend off the disease. The experimental treatment is part of an ongoing clinical trial, which coordinators say has shown early promise in alleviating COVID-19 symptoms.

However, other experts criticize the trial’s design and caution that there’s not sufficient evidence to show that the treatment works for COVID-19. They say other treatments have far greater potential than stem cells in aiding patients during the pandemic.  

Researchers have so far reported results from only seven patients treated with stem cells at Beijing You’an Hospital. Each patient suffered from COVID-19 symptoms including fevers and difficulty breathing. They each received a single infusion of mesenchymal stem cells sometime between 23 January and 16 February. A few days later, investigators say, all symptoms disappeared in all seven patients. They reported no side effects.

Images:  Aging & Disease
This series of computerized tomography (CT) images from Jin’s study shows the progression of a severe case of COVID-19 in Beijing. On 23 Jan, no evidence of pneumonia was seen in the patient. By 30 Jan, some evidence was present. The patient received a stem cell infusion on 31 Jan. By 2 Feb, pneumonia had invaded both lungs. On 9 Feb, the pneumonia faded away. By 15 Feb, little evidence of pneumonia remained.

The team published those results in the journal Aging & Disease on 13 March. In an accompanying editorial, Ashok Shetty of Texas A&M University’s Institute for Regenerative Medicine wrote “the overall improvement was quite extraordinary” but stated that larger clinical trials were needed to validate the findings.

Jahar Bhattacharya, a professor of physiology and cellular biophysics and medicine at Columbia University, who was not involved in the work, says injecting mesenchymal stem cells into a patient’s bloodstream remains an unproven treatment for COVID-19 patients and could cause harmful side effects.

“You are injecting large numbers of cells in a patient’s veins,” Bhattacharya says. “If those cells go and clog the lungs, and cause damage because of the clogging—well, that’s not good at all.”

He adds that the study’s sample size is much too small to draw any meaningful conclusions about the treatment’s efficacy at this stage. “Folks do all kinds of things and they’ll say—we got a result,” Bhattacharya says. “It’s very risky to go by any of those.”

Kunlin Jin, a lead author in the trial and professor of pharmacology and neuroscience at the University of North Texas Health Science Center, says his group now has unpublished data from 31 additional COVID-19 patients who received the treatment. In every case, he claims, their symptoms improved after treatment. “I think the results are very promising,” he says.

According to Jin, 120 COVID-19 patients are now receiving mesenchymal stem cell injections in Beijing for the trial.

COVID-19 is the disease caused by the new coronavirus. There is currently no proven treatment, and researchers around the world are scrambling to identify existing drugs or compounds that could be effective against it.

Jin’s team isn’t alone in considering the use of stem cells to treat COVID-19 patients. Another mesenchymal stem cell trial aims to enroll 20 COVID-19 patients across four hospitals in China. The Australia-based firm Mesoblast says it’s evaluating its stem cell therapy for use against COVID-19. And in the United States, the Biomedical Advanced Research and Development Authority recently contacted the company Athersys to request information about its stem cell treatment, MultiStem, for its potential as a COVID-19 therapy.

Mesenchymal stem cells (a term some experts criticize as too broad) can be isolated from different kinds of tissues and, once injected into a patient, grow into a wide variety of cells. They have not been approved for COVID-19 therapeutic use by the U.S. Food and Drug Administration.

The new coronavirus invades the body by way of a spike protein on its surface. The S protein, as it’s called, binds to a receptor called angiotensin-converting enzyme 2 (ACE2) on a healthy cell’s surface. Once attached, the viral envelope fuses with the cell membrane and the virus is able to infect the healthy cell.

ACE2 receptors are present on cells in many places throughout the body, and especially in the lungs. Cells in the lungs are also some of the first to encounter the virus, since the primary form of transmission is thought to be breathing in droplets after an infected person has coughed or sneezed.

However, cells from other parts of the body—including those that produce mesenchymal stem cells—lack ACE2 receptors, which makes them resistant to infection by the virus.

In many COVID-19 cases, a patient’s immune system responds to the virus so strongly that it harms healthy cells in the process. Jin explains that, once mesenchymal stem cells are injected into the blood, these cells can travel to the lungs and secrete growth factors and other cytokines—anti-inflammatory substances that modulate the immune system so it doesn’t go into overdrive.

But Lawrence Goldstein, director of UC San Diego’s stem cell program, says it’s not clear from the trial how many of the injected cells actually made it to the lungs, or how long they stayed there. He criticized the classification of patients in the study as “common,” “severe,” or “critically severe,” saying those categories weren’t well defined (Jin says these labels are defined by the National Health Commission of China). And Goldstein noted the lack of information about the properties of the stem cells used in the trial.  

“It’s pretty weak,” Goldstein says of the trial design.

Steven Peckman, deputy director of UCLA’s Broad Stem Cell Research Center, adds: “Researchers and clinicians should use a critical eye when reviewing such reports and avoid the ‘therapeutic misconception,’ namely, a willingness to view experimental interventions as both safe and effective without the support of compelling scientific evidence.”  

Jin himself doesn’t think most COVID-19 patients should receive stem cell infusions. “I think for the moderate patients, maybe don’t need the stem cell treatment,” he says. “For life-threatening cases, I think it’s essential to use mesenchymal stem cell treatment if no other drug is available.”

Goldstein says other potential treatments for COVID-19—such as drugs that modulate the body’s immune system—appear much more promising than stem cells. Many such drugs have been shown to be safe and effective at regulating the immune system and are already approved by regulatory authorities. It’s also easier to use drugs to treat a large number of patients compared with stem cell infusions.  

“When you’ve got a hundred things you want to try, it’s not obvious that this one is on the short list,” Goldstein says of stem cell trials for COVID-19. “It’s a higher priority to test well-known immune modulators than to test these cells.”

New Approach Could Protect Control Systems From Hackers

By Michelle Hampson

Some of the most important industrial control systems (ICSs), such as those that support power generation and traffic control, must accurately transmit data within milliseconds or even microseconds. This means that hackers need to interfere with the transmission of real-time data for only the briefest of moments to succeed in disrupting these systems. The seriousness of this type of threat is illustrated by the Stuxnet incursion in 2010, when attackers succeeded in hacking the system supporting Iran’s uranium enrichment facility, damaging more than 1,000 centrifuges.

Now a trio of researchers has disclosed a novel technique that could more easily identify when these types of attacks occur, triggering an automatic shutdown that would prevent further damage.

The problem was first brought up in a conversation over coffee two years ago. “While describing the security measures in current industrial control systems, we realized we did not know any protection method on the real-time channels,” explains Zhen Song, a researcher at Siemens Corporation. The group began to dig deeper into the research, but couldn’t find any existing security measures.

Part of the reason is that traditional encryption techniques do not account for time. “As well, traditional encryption algorithms are not fast enough for industry hard real-time communications, where the acceptable delay is much less than 1 millisecond, even close to 10 microsecond level,” explains Song. “It will often take more than 100 milliseconds for traditional encryption algorithms to process a small chunk of data.”

However, some research has emerged in recent years about the concept of “watermarking” data during transmission, a technique that can indicate when data has been tampered with. Song and his colleagues sought to apply this concept to ICSs in a way that would be broadly applicable and not require details of the specific ICS. They describe their approach in a study published February 5 in IEEE Transactions on Automation Science and Engineering. Some of the source code is available online.

If hackers attempt to disrupt data transmission, the recursive watermark (RWM) signal is altered. This indicates that an attack is taking place.
Image: Zhen Song

The approach involves the transmission of real-time data over an unencrypted channel, as conventionally done. In the experiment, a specialized algorithm in the form of a recursive watermark (RWM) signal is transmitted at the same time. The algorithm encodes a signal that is similar to “background noise,” but with a distinct pattern. On the receiving end of the data transmission, the RWM signal is monitored for any disruptions, which, if present, indicate an attack is taking place. “If attackers change or delay the real-time channel signal a little bit, the algorithm can detect the suspicious event and raise alarms immediately,” Song says.

Critically, a special “key” for deciphering the RWM algorithm is transmitted through an encrypted channel from the sender to the receiver before the data transmission takes place.

Tests show that this approach works fast to detect attacks. “We found the watermark-based approach, such as the RWM algorithm we proposed, can be 32 to 1375 times faster than traditional encryption algorithms in mainstream industrial controllers. Therefore, it is feasible to protect critical real-time control systems with new algorithms,” says Song.
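The paper’s recursive watermark algorithm is not reproduced here, but the general idea of watermark-based tamper detection can be sketched in a few lines of Python. This is a simplified illustration, not the authors’ RWM implementation: it assumes the receiver can predict the clean signal (say, from a plant model), and the key, amplitude, and threshold values are invented for the demo.

```python
import random

def make_watermark(key: int, n: int, amp: float = 0.1):
    """Generate a low-amplitude, zero-mean pseudorandom watermark
    from a shared secret key (illustrative parameters)."""
    rng = random.Random(key)
    return [amp * (2.0 * rng.random() - 1.0) for _ in range(n)]

def send(samples, key):
    """Sender: superimpose the watermark on the real-time signal."""
    wm = make_watermark(key, len(samples))
    return [s + w for s, w in zip(samples, wm)]

def detect_attack(received, predicted, key, thresh=0.5):
    """Receiver: subtract the model-predicted signal, then correlate
    the residual with the expected watermark. Tampering, replay, or
    delay destroys the correlation and triggers the alarm."""
    wm = make_watermark(key, len(received))
    residual = [r - p for r, p in zip(received, predicted)]
    corr = sum(x * w for x, w in zip(residual, wm)) / len(wm)
    power = sum(w * w for w in wm) / len(wm)   # expected correlation
    return corr < thresh * power               # True -> raise alarm
```

An authentic stream correlates strongly with the expected watermark; a replayed or injected stream does not, so an alarm can be raised within a single window of samples.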

Moving forward, he says this approach could have broader implications for the Internet of Things, which the researchers plan to explore more. 

Upgraded Google Glass Helps Autistic Kids “See” Emotions

By Nick Haber
Photo: Gabriela Hasbun
Looking for Cues: Vivaan Ferose, an 11-year-old boy with autism, sees prompts in his Google Glass head-up display that help him recognize his parents’ emotions.

Imagine this scene: It’s nearly dinnertime, and little Jimmy is in the kitchen. His mom is rushing to get dinner on the table, and she puts all the silverware in a pile on the counter. Jimmy, who’s on the autism spectrum, wants the silverware to be more orderly, and while his mom is at the stove he carefully begins to put each fork, knife, and spoon back in its slot in the silverware drawer. Suddenly Jimmy hears shouting. His mom is loud; her face looks different. He continues what he’s doing.

Now imagine that Jimmy is wearing a special kind of Google Glass, the augmented-reality headset that Google introduced in 2013. When he looks up at his mom, the head-up display lights up with a green box, which alerts Jimmy that he’s “found a face.” As he focuses on her face, an emoji pops up, which tells Jimmy, “You found an angry face.” He thinks about why his mom might be annoyed. Maybe he should stop what he’s doing with the silverware and ask her.

Our team has been working for six years on this assistive technology for children with autism, which the kids themselves named Superpower Glass. Our system provides behavioral therapy to the children in their homes, where social skills are first learned. It uses the glasses’ outward-facing camera to record the children’s interactions with family members; then our software detects the faces in those videos and interprets their expressions of emotion. Through an app, caregivers can review auto-curated videos of social interactions.

Over the years we’ve refined our prototype and run clinical trials to prove its beneficial effects: We’ve found that its use increases kids’ eye contact and social engagement and also improves their recognition of emotions. Our team at Stanford University has worked with coauthor Dennis Wall’s spinoff company, Cognoa, to earn a “breakthrough therapy” designation for Superpower Glass, which puts the technology on a fast track toward approval by the U.S. Food and Drug Administration (FDA). We aim to get health insurance plans to cover the costs of the technology as an augmented-reality therapy.

When Google Glass first came out as a consumer device, many people didn’t see a need for it. Faced with lackluster reviews and sales, Google stopped making the consumer version in 2015. But when the company returned to the market in 2017 with a second iteration of the device, Glass Enterprise Edition, a variety of industries began to see its potential. Here we’ll tell the story of how we used the technology to give kids with autism a new way to look at the world.


Vivaan’s mother, Deepali Kulkarni, and his father, V.R. Ferose, use the Superpower Glass system to play games with their son. Photo: Gabriela Hasbun


The system is built around the second version of the Google Glass system, the Glass Enterprise Edition. Photo: Gabriela Hasbun


The glasses’ head-up display gives Vivaan information about the emotions on his parents’ faces. Photo: Gabriela Hasbun


The system includes games such as “Capture the Smile” and “Guess the Emotion” that encourage Vivaan to interact with his parents and experiment with facial expressions. Photo: Gabriela Hasbun


Many families with autistic children struggle to get the behavioral therapy their kids need. The Superpower Glass system enables them to take the therapy into their own hands. Photo: Gabriela Hasbun


Vivaan, who is nonverbal, uses laminated cards to indicate which emotion he has identified. Most autistic kids who use the Superpower Glass system are verbal and don’t use such cards. Photo: Gabriela Hasbun


When Jimmy puts on the glasses, he quickly gets accustomed to the head-up display (a prism) in the periphery of his field of view. When Jimmy begins to interact with family members, the glasses send the video data to his caregiver’s smartphone. Our app, enabled by the latest artificial-intelligence (AI) techniques, detects faces and emotions and sends the information back to the glasses. The boundary of the head-up display lights up green whenever a face is detected, and the display then identifies the facial expression via an emoticon, emoji, or written word. The users can also choose to have an audio cue—a voice identifying the emotion—from the bone-conducting speaker within the glasses, which sends sound waves through the skull to the inner ear. The system recognizes seven facial expressions—happiness, anger, surprise, sadness, fear, disgust, and contempt, which we labeled “meh” to be more child friendly. It also recognizes a neutral baseline expression.
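The loop just described (face found, green box lit, emotion cue shown) might be organized roughly as follows. Every function and field name here is an illustrative stub, not the Stanford team’s code; the real system runs trained computer-vision models on the phone.

```python
# Hypothetical sketch of one pass of the Glass-to-phone feedback loop.
# The detector and classifier are stand-in stubs for illustration only.

EMOJI = {"happiness": "😊", "anger": "😠", "surprise": "😮",
         "sadness": "😢", "fear": "😨", "disgust": "🤢",
         "contempt": "😒", "neutral": ""}

def detect_face(frame):
    """Stub: return a bounding box if a face is present, else None."""
    return frame.get("face_box")

def classify_emotion(frame, box):
    """Stub: return one of the seven expressions, or 'neutral'."""
    return frame.get("emotion", "neutral")

def process_frame(frame):
    """Light the green box when a face is found, then attach the
    emotion cue (emoji plus label) for the head-up display."""
    box = detect_face(frame)
    if box is None:
        return {"green_box": False, "cue": None}
    emotion = classify_emotion(frame, box)
    # "contempt" is relabeled "meh" to be more child friendly
    label = "meh" if emotion == "contempt" else emotion
    return {"green_box": True, "cue": (EMOJI[emotion], label)}
```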

To encourage children to wear Superpower Glass, the app currently offers two games: “Capture the Smile,” in which the child tries to elicit happiness or another emotion in others, and “Guess the Emotion,” in which people act out emotions for the child to name. The app also logs all activity within a session and tags moments of social engagement. That gives Jimmy and his mom the ability to watch together the video of their conflict in the kitchen, which could prompt a discussion of what happened and what they can do differently next time.

The three elements of our Superpower Glass system—face detection, emotion recognition, and in-app review—help autistic children learn as they go. The kids are motivated to seek out social interactions, they learn that faces are interesting, and they realize they can gather valuable information from the expressions on those faces. But the glasses are not meant to be a permanent prosthesis. The kids do 20-minute sessions a few times a week in their own homes, and the entire intervention currently lasts for six weeks. Children are expected to quickly learn how to detect the emotions of their social partners and then, after they’ve gained social confidence, stop using the glasses.

Our system is intended to ameliorate a serious problem: limited access to intensive behavioral therapy. Although there’s some evidence that such therapy can diminish or even eliminate core symptoms associated with autism, kids must start receiving it before the age of 8 to see real benefits. Currently the average age of diagnosis is between 4 and 5, and waitlists for therapy can stretch over 18 months. Part of the reason for the shortage is the shocking 600 percent rise since 1990 in diagnoses of autism in the United States, where about one in 40 kids is now affected; less dramatic surges have occurred in some parts of Asia and Europe.

Because of the increasing imbalance between the number of children requiring care and the number of specialists able to provide therapy, we believe that clinicians must look to solutions that can scale up in a decentralized fashion. Rather than relying on the experts for everything, we think that data capture, monitoring, and therapy—the tools needed to help all these children—must be placed in the hands of the patients and their parents.

Efforts to provide in situ learning aids for autistic children date back to the 1990s, when Rosalind Picard, a professor at MIT, designed a system with a headset and minicomputer that displayed emotional cues. However, the wearable technology of the day was clunky and obtrusive, and the emotion-recognition software was primitive. Today, we have discreet wearables, such as Google Glass, and powerful AI tools that leverage massive amounts of publicly available data about facial expressions and social interactions.

The design of Google Glass was an impressive feat, as the company’s engineers essentially packed a smartphone into a lightweight frame resembling a pair of eyeglasses. But with that form factor comes an interesting challenge for developers: We had to make trade-offs among battery life, video streaming performance, and heat. For example, on-device processing can generate too much heat and automatically trigger a cutback in operations. When we tried running our computer-vision algorithms on the device, that automatic system often reduced the frame rate of the video being captured, which seriously compromised our ability to quickly identify emotions and provide feedback.

Our solution was to pair Glass with a smartphone via Wi-Fi. The glasses capture video, stream the frames to the phone, and deliver feedback to the wearer. The phone does the heavy computer-vision work of face detection and tracking, feature extraction, and facial-expression recognition, and also stores the video data.

But the Glass-to-phone streaming posed its own problem: While the glasses capture video at a decent resolution, we could stream it only at low resolution. We therefore wrote a protocol to make the glasses zoom in on each newly detected face so that the video stream is detailed enough for our vision algorithms.
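As a rough illustration of that zoom protocol, the crop region can be computed with simple bounding-box geometry: pad the detected face box by a margin and clamp it to the frame. The margin value and the function itself are assumptions for this sketch, not the team’s actual protocol.

```python
def zoom_region(face_box, frame_w, frame_h, margin=0.25):
    """Given a face bounding box (x, y, w, h) found in a low-resolution
    preview frame, compute the crop the glasses should zoom into so the
    face fills the stream. Illustrative geometry only."""
    x, y, w, h = face_box
    pad_w, pad_h = int(w * margin), int(h * margin)
    left = max(0, x - pad_w)            # clamp to the frame edges
    top = max(0, y - pad_h)
    right = min(frame_w, x + w + pad_w)
    bottom = min(frame_h, y + h + pad_h)
    return left, top, right - left, bottom - top
```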

Our computer-vision system originally used off-the-shelf tools. The software pipeline was composed of a face detector, a face tracker, and a facial-feature extractor; it fed data into an emotion classifier trained on both standard data sets and our own data sets. When we started developing our pipeline, it wasn’t yet feasible to run deep-learning algorithms that can handle real-time classification tasks on mobile devices. But the past few years have brought remarkable advances, and we’re now working on an updated version of Superpower Glass with deep-learning tools that can simultaneously track faces and classify emotions.

This update isn’t a simple task. Emotion-recognition software is primarily used in the advertising industry to gauge consumers’ emotional responses to ads. Our software differs in a few key ways. First, it won’t be used in computers but rather in wearables and mobile devices, so we have to keep its memory and processing requirements to a minimum. The wearable form factor also means that video will be captured not by stable webcams but by moving cameras worn by kids. We’ve added image stabilizers to cope with the jumpy video, and the face detector also reinitializes frequently to find faces that suddenly shift position within the scene.

Failure modes are also a serious concern. A commercial emotion-recognition system might claim, for example, a 98 percent accuracy rate; such statistics usually mean that the system works well on most people but consistently fails to recognize the expressions of a small handful of individuals. That situation might be fine for studying the aggregate sentiments of people watching an ad. But in the case of Superpower Glass, the software must interpret a child’s interactions with the same people on a regular basis. If the system consistently fails on two people who happen to be the child’s parents, that child is out of luck.

We’ve developed a number of customizations to address these problems. In our “neutral subtraction” method, the system first keeps a record of a particular person’s neutral-expression face. Then the software classifies that person’s expressions based on the differences it detects between the face he or she currently displays and the recorded neutral estimate. For example, the system might come to learn that just because Grandpa has a furrowed brow, it doesn’t mean he’s always angry. And we’re going further: We’re working on machine-learning techniques that will rapidly personalize the software for each user. Making a human–AI interaction system that adapts robustly, without too much frustration for the user, is a considerable challenge. We’re experimenting with several ways to gamify the calibration process, because we think the Superpower Glass system must have adaptive abilities to be commercially successful.
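Neutral subtraction can be sketched as a per-person baseline correction. The two-element feature vectors and the thresholded rule below are toy assumptions for illustration; the real system works with a full set of facial features and a trained classifier.

```python
class NeutralSubtraction:
    """Classify an expression by how far its features deviate from
    that person's recorded neutral face, so a naturally furrowed brow
    doesn't always read as anger. Toy feature layout: element 0 is
    brow furrow, element 1 is smile curvature (assumptions)."""

    def __init__(self):
        self.neutral = {}  # person -> baseline feature vector

    def record_neutral(self, person, features):
        self.neutral[person] = list(features)

    def classify(self, person, features, threshold=0.2):
        base = self.neutral.get(person)
        if base is None:
            delta = list(features)  # no baseline yet: use raw features
        else:
            delta = [f - b for f, b in zip(features, base)]
        if delta[0] > threshold:
            return "angry"
        if delta[1] > threshold:
            return "happy"
        return "neutral"

# Grandpa's resting face already has a furrowed brow ...
model = NeutralSubtraction()
model.record_neutral("grandpa", [0.5, 0.0])
# ... so the same raw furrow now reads as neutral, not angry.
model.classify("grandpa", [0.55, 0.0])  # -> "neutral"
```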

Images: Stanford University
The App: The Superpower Glass smartphone app runs the software for facial and emotional recognition and serves as an interface for the family. Parents and children can review videos together that are color-coded by the emotions detected, and the app also launches games that encourage the kids to practice identifying emotions.

We realized from the start that the system would be imperfect, and we’ve designed feedback to reflect that reality. The green box face-detection feature was originally intended to mitigate frustration: If the system isn’t tracking a friend’s face, at least the user knows that and isn’t waiting for feedback that will never come. Over time, however, we came to think of the green box as an intervention in itself, as it provides feedback whenever the wearer looks at a face, a behavior that can be noticeably different for children on the autism spectrum.

To evaluate Superpower Glass, we conducted three studies over the past six years. The first one took place in our lab with a very rudimentary prototype, which we used to test how children on the autism spectrum would respond to wearing Google Glass and receiving emotional cues. Next, we built a proper prototype and ran a design trial in which families with autistic kids took the devices home for several weeks. We interacted with these families regularly and made changes to the prototype based on their feedback.

With a refined prototype in hand, we then set out to test the device’s efficacy in a rigorous way. We ran a randomized control trial in which one group of children received typical at-home behavioral therapy, while a second group received that therapy plus a regimen with Superpower Glass. We used four tests that are commonly deployed in autism research to look for improvement in emotion recognition and broader social skills. As we described in our 2019 paper in JAMA Pediatrics, the intervention group showed significant gains over the control group in one test (the socialization portion of the Vineland Adaptive Behavior Scales [PDF]).

We also asked parents to tell us what they had noticed. Their observations helped us refine the prototype’s design, as they commented on technical functionality, user frustrations, and new features they’d like to see. One email from the beginning of our at-home design trial stands out. The parent reported an immediate and dramatic improvement: “[Participant] is actually looking at us when he talks through google glasses during a conversation…it’s almost like a switch was turned.… Thank you!!! My son is looking into my face.”

This email was extremely encouraging, but it sounded almost too good to be true. Yet comments about increased eye contact continued throughout our studies, and we documented this anecdotal feedback in a publication about that design study. To this day, we continue to hear similar stories from a small group of “light switch” participants.

We’re confident that the Superpower Glass system works, but to be honest, we don’t really know why. We haven’t been able to determine the primary mechanism of action that leads to increased eye contact, social engagement, and emotion recognition. This unknown informs our current research. Is it the emotion-recognition feedback that most helps the children? Or is our device mainly helping by drawing attention to faces with its green box? Or are we simply providing a platform for increased social interaction within the family? Is the system helping all the kids in the same way, or does it meet the needs of various parts of the population differently? If we can answer such questions, we can design interventions in a more pointed and personalized way.

The startup Cognoa, founded by coauthor Dennis Wall, is now working to turn our Superpower Glass prototype into a clinical therapy that doctors can prescribe. The FDA breakthrough therapy designation for the technology, which we earned in February 2019, will speed the journey toward regulatory approval and acceptance by health insurance companies. Cognoa’s augmented-reality therapy will work with most types of smartphones, and it will be compatible not only with Google Glass but also with new brands of smart glasses that are beginning to hit the market. In a separate project, the company is working on a digital tool that physicians can use to diagnose autism in children as young as 18 months, which could prepare these young kids to receive treatment during a crucial window of brain development.

Ultimately, we feel that our treatment approach can be used for childhood concerns beyond autism. We can design games and feedback for kids who struggle with speech and language, for example, or who have been diagnosed with attention deficit hyperactivity disorder. We’re imagining all sorts of ubiquitous AI-powered devices that deliver treatment to users, and which feed into a virtuous cycle of technological improvement; while acting as learning aids, these devices can also capture data that helps us understand how to better personalize the treatment. Maybe we’ll even gain new scientific insights into the disorders in the process. Most important of all, these devices will empower families to take control of their own therapies and family dynamics. Through Superpower Glass and other wearables, they’ll see the way forward.

This article appears in the April 2020 print issue as “Making Emotions Transparent.”

About the Authors

When Stanford professor Dennis Wall met Catalin Voss and Nick Haber in 2013, it felt like a “serendipitous alignment of the stars,” Wall says. He was investigating new therapies for autism, Voss was experimenting with the Google Glass wearable, and Haber was working on machine learning and computer vision. Together, the three embarked on the Superpower Glass project to encourage autistic kids to interact socially and help them recognize emotions.

Here’s What It’s Like Inside a Chip Foundry During the COVID-19 Pandemic

By Samuel K. Moore

“Ironically, one of the safest places to be right now is in a cleanroom,” points out Thomas Sonderman, president of SkyWater Technology, in Bloomington, Minn.

Like every business, semiconductor foundries like SkyWater and GlobalFoundries have had to make some pretty radical changes to their operations in order to keep their workers safe and comply with new government mandates, but there are some challenges unique to running a 24/7 chip-making operation.

GlobalFoundries’ COVID-19 plan is basically an evolution of its response to a previous coronavirus outbreak: the 2002–2003 SARS epidemic. When the company acquired Singapore-based Chartered Semiconductor in 2010, it inherited a set of fabs that had managed to produce chips through the worst of that outbreak. (According to the World Health Organization, Singapore suffered 238 SARS cases and 33 deaths.)

“During that period we established business policies, protocols, and health and safety measures to protect our team while maintaining operations,” says Ronald Sampson, GlobalFoundries’ senior vice president and general manager of U.S. fab operations. “That was a successful protocol that served as the basis for this current pandemic that we’re experiencing together now. Since that time we’ve implemented it worldwide and of course in our three U.S. factories.”

At Fab 8 in Malta, N.Y., GlobalFoundries’ most advanced 300-mm CMOS facility, that translates into a host of procedures. Some of them are common, such as working from home, forbidding travel, limiting visitors, and temperature screening. Others are specific to fab operations. For example, workers are split into two teams that never come into contact with each other; they aren’t in the building on the same day, and they even use separate gowning rooms to enter the cleanroom floor. Those gowning rooms are marked off in roughly 2-meter squares, and no two people are allowed to occupy the same square.

Photo: GlobalFoundries
Ronald Sampson

Once employees are suited up and in the cleanroom, they’re taking advantage of it. “It’s one of the cleanest places on earth,” says Sampson. “We’ve moved all of our operations meetings onto the factory floor itself,” instead of having physically separated team members in a conference room.

GlobalFoundries is sharing some of what makes that safety possible, too. It’s assisted healthcare facilities in New York and Vermont, where its U.S. fabs are located, with available personal protective equipment, such as face shields and masks, in addition to making cash donations to local food banks and other causes near its fabs around the world. (SkyWater is currently evaluating what the most significant needs are in its community and whether it is able to play a meaningful role in addressing them.)

SkyWater occupies a very different niche in the foundry universe than does GlobalFoundries’ Fab 8. It works on 200-mm wafers and invests heavily in co-developing new technology processes with its customers. In addition to manufacturing an essential microfluidic component for a coronavirus sequencing and identification system, it’s developing 3D carbon-nanotube-based chips through a $61 million DARPA program, for example.

But there are plenty of similarities with GlobalFoundries in SkyWater’s current operations, including telecommuting engineers, staggered in-person work shifts, and restricted entry for visitors. There are, of course, few visitors these days. Customers and technology development partners are meeting remotely with SkyWater’s engineers. And many chip-making tools can be monitored remotely by service companies.

(Applied Materials, a major chip equipment maker, says that many customers’ tools are monitored and diagnosed remotely already. The company installs a server in the fab that allows field engineers access to the tools without having to set foot on premises.)

Photo: SkyWater Technology
Thomas Sonderman

With the whole world in economic upheaval, you might expect the crisis to produce some surprises in foundry supply chains. Both GlobalFoundries and SkyWater say they are well prepared. For SkyWater, a relatively small U.S.-based foundry with just the one fab, the big reason for that preparedness was the trade war between the United States and China that began in 2018.

“If you look at the broader supply chain, we’ve been preparing for this since tariffs began,” says Sonderman. Those necessitated a deep dive into the business’s potential vulnerabilities that’s helped guide the response to the current crisis, he says.

At press time no employees of either company had tested positive for the virus. But that situation is likely to change as the virus spreads, and the companies say they will adapt. Like everybody else, “we’re finding our new normal,” says Sampson.

Halting COVID-19: The Benefits and Risks of Digital Contact Tracing

By Emily Waltz

As COVID-19 sweeps across the planet, a number of researchers have advocated the use of digital contact tracing to reduce the spread of the disease. The controversial technique can be effective, but it can have disastrous consequences if not implemented with proper privacy checks and encryption.

Ramesh Raskar, an associate professor at the MIT Media Lab, and his team have developed an app called Private Kit: Safe Paths that they say can do the job while protecting privacy. The software could be integrated into a new, official WHO app touted as the “Waze for COVID-19.” IEEE Spectrum spoke with Raskar to better understand the risks and benefits of digital contact tracing.

IEEE Spectrum: What is conventional contact tracing? 

Ramesh Raskar: It’s back-tracing the steps of the patient, trying to find every individual who might have come in contact [with them] over the last two weeks or so. It’s very manual and involves interviews and phone calls. 

Spectrum: Is it effective? 

Raskar: As long as the patient didn’t fly or take a bus or attend a large event, you can do a reasonably good job. The best-case scenario is you find people within one step from the infected person, but you can almost never find people two steps away.

Spectrum: Tell me about digital contact tracing using mobile phones. 

Raskar: It’s a way to figure out if two people were in the same location at the same time, based on co-location tracking. The simplest scenario, and the one we’re deploying, is that everyone downloads an app with a GPS-based location logger. When a person is confirmed as having COVID-19, they donate their GPS data to the app’s server. This gives a location trail of everywhere they’ve been for the last two weeks, but without revealing the person’s identity. Everyone else who uses the app can look at those trails to compare with their own to see if there was significant overlap, but they never have to share their trails.  
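The trail comparison Raskar describes can be sketched in a few lines of code. This is an illustrative approximation only: the point format, 10-meter distance threshold, and 30-minute time window below are assumptions for the sketch, not details of the Safe Paths implementation, and the real app adds encryption rather than comparing plaintext trails.

```python
import math
from dataclasses import dataclass

@dataclass
class Point:
    lat: float   # degrees
    lon: float   # degrees
    t: float     # Unix timestamp, seconds

def roughly_colocated(a: Point, b: Point,
                      max_meters: float = 10.0,
                      max_seconds: float = 1800.0) -> bool:
    """Proximity test: equirectangular distance plus a time window."""
    if abs(a.t - b.t) > max_seconds:
        return False
    # ~111 km per degree of latitude; a degree of longitude shrinks by cos(latitude)
    dy = (a.lat - b.lat) * 111_000
    dx = (a.lon - b.lon) * 111_000 * math.cos(math.radians(a.lat))
    return math.hypot(dx, dy) <= max_meters

def exposure_points(my_trail, infected_trails):
    """Runs entirely on the phone; only the user learns the result."""
    return [p for p in my_trail
            if any(roughly_colocated(p, q)
                   for trail in infected_trails for q in trail)]
```

Because the comparison runs against downloaded trails, the user’s own locations never leave the device, which is the privacy property Raskar emphasizes later in the interview.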

Spectrum: The utility of a tool like this would depend in part on how widespread disease testing is, right? 

Raskar: Yes, that information is critical. You have to know who is infected. And that information has to be authentic—confirmed by a test and witnessed by a health care worker. 

Spectrum: But with COVID-19, there are lots of infected people who don’t get tested because they are asymptomatic or their symptoms aren’t bad enough to require care. So what is the point of software like yours when there are so many asymptomatic people? 

Raskar: Epidemics is a game of probabilities, not a game of absolutes. You don’t have to catch everyone. If you trace even a small fraction of people, that will start reducing the R0, which is the average number of people who are infected by a patient.  
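Raskar’s probabilistic point can be made concrete with back-of-envelope numbers. The values below, an R0 of 2.5 and an 80 percent quarantine efficacy, are illustrative assumptions for this toy model, not epidemiological estimates for COVID-19.

```python
def effective_r(r0: float, traced_fraction: float,
                quarantine_efficacy: float) -> float:
    """Each traced-and-quarantined contact removes their expected onward infections."""
    return r0 * (1 - traced_fraction * quarantine_efficacy)

r0 = 2.5  # assumed basic reproduction number
for traced in (0.0, 0.2, 0.5):
    print(f"trace {traced:.0%} of contacts -> R = {effective_r(r0, traced, 0.8):.2f}")
```

Even tracing 20 percent of contacts in this toy model pulls R down from 2.5 to 2.1; no single measure has to reach everyone to slow the spread.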

Spectrum: Is it too late in this COVID-19 pandemic to start doing contact tracing? 

Raskar: No. Even in places that are locked down, there’s a percentage of people who have to go to work because they are essential—police officers or health care workers or grocery store employees. They need solutions like contact tracing because they can’t shut down those operations when one person gets infected. And we just heard from the U.S. government that lockdowns are going to start lifting, so we will need to contact trace when people go back to work.

There’s a nice graphic from Resolve to Save Lives, which is led by a former director of the CDC, about which solutions are most effective at which stages of disease spread. Contact tracing is effective early in an epidemic, when authorities are trying to contain the virus, and also while it’s being suppressed. 

Spectrum: How do you protect the privacy of app users? 

Raskar: Infected individuals can blur or redact locations that might be sensitive or give away their identity. And for users who are not infected, all the calculations regarding their location trail happen on the smartphone. It never goes to the server. So the only person who knows that they might have crossed paths with an infected person is the user himself or herself. This is very important. For more complex operations, the user can upload an encrypted version of their GPS or Bluetooth trails onto a server.

Spectrum: Who controls the server and what stops those people or malicious outsiders from hacking into the data and invading the privacy of app users? 

Raskar: This is an MIT open-source, no-revenue, no-ads tool. We wanted to build a trusted, impartial, honest broker that can solve problems. So we invented encryption methods that allow us to achieve both utility and privacy. It’s called split learning. It was intended for other types of business but about a month ago we started working on it for COVID-19.

Spectrum: So even the people who built the system cannot access the data uploaded by users. 

Raskar: That’s right. It comes in a strange format that doesn’t allow anyone to retrace or reconstruct the data. In efforts like this where we aggregate powerful data, we have to avoid the temptation to create a big brother who can see everything.

Spectrum: China, South Korea, and Singapore have used digital contact tracing to combat COVID-19, and the ramifications on privacy and people’s lives have been appalling. How does this happen?

Raskar: Some governments have access to location trails of both the user and the patient, creating a surveillance state. That means the state knows exactly which user to go after and will hunt them down, and that becomes a problem. And some countries publicly released unredacted, raw GPS trails of the infected person, leading to public shaming of the infected person. 

Spectrum: Tell me some stories about the kinds of privacy intrusions that are happening. 

Raskar: In South Korea, vigilante groups started forming on Facebook and other social media around the data from contact tracing. They became armchair detectives. They would piece together information about a person in their neighborhood and gossip, shame people, or discover parts of their lives that are extremely private.

In China you get a red, yellow, or green code from an app on your phone based on whether you might have been in contact with an infected person or not. People with red on their phone started getting shamed. And then at some point, only the people who had green could use government services or go to the grocery store. Citizens lost a sense of agency. And vigilantes living in tall apartment complexes would see a resident sneezing or coughing and not let that person in the building—they would stop them from going to their home. They knew that if that person was eventually diagnosed with the virus, everyone in that apartment complex would get a red. This was in a country with a homogeneous population. Can you imagine the discrimination and racism and bullying that could explode in a country with a heterogeneous population? We might end up pitting neighbor against neighbor and causing civil unrest.

Spectrum: What’s happening to local businesses that get caught up in this? 

Raskar: Say an infected person goes to a small noodle shop and people see that trail. No one wants to go to the noodle shop now and it goes out of business, even though the infected person was only there for an hour. 

Businesses are also subjected to blackmail. Since some contact tracing apps allow people to self-report their symptoms, bad actors will go to a shop and threaten to report symptoms from that location unless they are paid a ransom. There are a lot of those stories from China and South Korea. And the malicious actors don’t even have to physically go to the shop; they can do it remotely from a computer, since GPS spoofing and Bluetooth spoofing are pretty straightforward.

These are problems not only now, but in the future too. The data could persist, and any breach of that data can lead to a lot of repercussions, not only for individual privacy but also national security. The social graph of a city or community or country is basically a national secret. Nation states are always attacking each other behind the scenes in moments of weakness like this. 

Spectrum: So knowing all of this, how do you feel about the fact that you are developing a digital contact tracing app? 

Screenshots from Safe Paths app
Image: Ramesh Raskar

Raskar: We have a table in a recent white paper [PDF] on Safe Paths that shows how contact tracing impacts the patient, the user, business, and non-users. And if you look at the table, it’s a little depressing. There is no perfect solution. So you have to design something that’s appropriate for the values in the society. We came up with a few principles for Safe Paths. The first is: Whose privacy matters? We decided the non-infected user’s privacy gets the most extreme protection. Next is the infected person’s privacy, but there will still be some leakage of their information. The privacy of businesses is a lower priority. The second principle is that you should put as many calculations as possible on the smartphone, and not on the server, so that the government or big companies can’t see it. If the calculations cannot be done on the phone, only encrypted information should be shared with the server. The third principle is that everything is open source so people can check on the ethics of the software. Fourth, the data should only be used for the purpose it was collected. And fifth, there is a tradeoff between accuracy of information and the impact on an individual. In other words, we think that it’s okay to blur an infected person’s location—say by a kilometer or so—to protect their privacy. 
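The kilometer-scale blurring mentioned in the fifth principle can be implemented by snapping published coordinates to a coarse grid, as in this sketch. It is one simple way to realize the idea, assumed here for illustration; the actual app could equally use added noise or redaction of whole trail segments.

```python
import math

def blur(lat: float, lon: float, cell_km: float = 1.0):
    """Snap a coordinate to the center of a grid cell roughly cell_km across,
    so the exact point cannot be recovered from the published trail."""
    lat_step = cell_km / 111.0  # ~111 km per degree of latitude
    lon_step = cell_km / (111.0 * max(math.cos(math.radians(lat)), 1e-6))
    snap = lambda v, step: (math.floor(v / step) + 0.5) * step
    return snap(lat, lat_step), snap(lon, lon_step)
```

Snapping, unlike adding random noise, is deterministic: every point inside a cell publishes as the same coordinate, so repeated uploads leak nothing further.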

“Brita Filter for Blood” Aims to Remove Harmful Cytokines for COVID-19 Patients

By Mark Anderson

In a number of critical cases of COVID-19, a hyper-vigilant immune response is triggered in patients that, if untreated, can itself prove fatal. Fortunately, some pharmaceutical treatments are available—although it’s not yet fully understood how well they might work in addressing the new coronavirus.

Meanwhile, a new blood filtration technology has successfully treated other, similar hyper-vigilant immune syndromes in people who underwent heart surgery and in the critically ill. That track record could make it an effective therapy (albeit not yet FDA-approved) for some severe COVID-19 cases.

Inflammation, says Phillip Chan—an M.D./PhD and CEO of the New Jersey-based company CytoSorbents—is the body's way of dealing with infection and injury. It’s why burns and sprained ankles turn red and swell up. “That’s the body’s way of bringing oxygen and nutrients to heal,” he said.

Inflammation across most or all of the body, so-called systemic inflammation, can be productive in fighting off a flu, for instance. Or, potentially, in combating the new coronavirus. (COVID-19 is the name of the disease caused by that virus.)

Among the mediators of inflammation in the body are proteins called cytokines. As the world well knows, COVID-19 in some patients devolves into a severe and deadly condition. Some doctors are now arguing that those severe infections require treatment for the patient’s increasingly desperate immune response to the coronavirus.

This is where medicine could make a possibly crucial intervention. Turning back the “storm” of cytokines in severe COVID-19 patients may seem like the opposite of what doctors should be doing. After all, it’s effectively telling the body to pull back a bit in its immune response to the viral invader.

“A life-threatening infection can often result in a massive immune response,” Chan says. “It’s like a chaotic four-alarm fire, when all semblance of organization is lost. The immune system goes into overdrive, churning out inflammatory mediators called cytokines at a very high rate that then trigger even more cytokine production. Ultimately, this spiral, called a cytokine storm, can directly damage organs and cause such severe whole-body inflammation that vital organs like the lungs, heart, and kidneys begin to fail.”

In other words, some of these severe COVID-19 cases, he says, may be creating a new and possibly treatable problem beyond the novel coronavirus infection. “Severe inflammation in the lungs causes the blood vessels in the lungs to become leaky, resulting in inflammatory cells, fluid, and chemicals to fill the air sacs of the lung, essentially drowning a patient from the inside out,” Chan says. “Physicians dealing with COVID-19 pneumonia call it the worst viral pneumonia they have ever seen, resulting in the need for weeks of mechanical ventilation while the lungs try to recover.”

Chan’s company manufactures a blood purification cartridge called CytoSorb that reduces the intensity of a patient’s cytokine storm by physically binding to and removing cytokines from the blood, in a process similar to dialysis. Chan says the goal of this “Brita filter for your blood,” as some call it, is to reduce the level of cytokines to a level where it no longer hurts the body—but still allows the body to fight the infection.

According to Chan, CytoSorb is approved in the European Union as a cytokine filter. He says it’s been used to date in some 80,000 treatments across 58 countries “to treat life-threatening complications such as sepsis, lung failure, and potentially fatal low blood pressure, often called shock.” (He points out many patients with severe COVID-19 have been dying from some of the same causes.)

He says CytoSorb has also been used in more than 70 critically ill COVID-19 patients in Italy, China, Germany, and France. Clinical data is not yet available on these cases, though Chan describes what has been reported back to him as “preliminary positive results in terms of controlling cytokine storm, improving lung function that has helped patients get off of mechanical ventilation, and reversing shock.”

Not yet approved for use in the United States, CytoSorb had already been in line for consideration with the U.S. Food and Drug Administration (FDA) for cardiac surgeries prior to the coronavirus outbreak.

But COVID-19 has now dialed up global attention on cytokine storms—and any effective therapies that could treat the sometimes deadly coronavirus-induced condition.

“This is a fascinating idea,” said Jessica Manson, consultant rheumatologist and honorary senior lecturer at University College London Hospital. She is one of six co-authors of a 13 March letter to the journal The Lancet arguing that doctors need to be aware of so-called cytokine storm syndrome when treating critical COVID-19 patients.

Manson is careful to point out that fighting cytokine storms may only be relevant for a subgroup of critical COVID-19 patients. Her team’s letter to The Lancet argues that any patient with a severe case of COVID-19 should be lab tested for “hyperinflammation.” This test would be completely separate from a coronavirus test. 

If a severe COVID-19 patient has markers for hyper-inflamed lungs, other organs, or similar conditions, her letter argues, cytokine storm therapies may need to be considered. She says one approved therapy is the drug tocilizumab, a.k.a. Actemra (which targets the cytokine IL-6). In fact, as of Monday, the FDA has approved the launch of Phase III trials of tocilizumab for treating COVID-19 pneumonia.

Chan says most pharmaceutical cytokine storm treatments target individual cytokines. CytoSorb, by contrast, targets some 100 different cytokines that normally help orchestrate the body’s immune response to infection and injury.

According to the CytoSorbents website, a patient’s blood is pumped out of their body using a standard blood dialysis machine and sent through the CytoSorb cartridge. Chan says the cartridge contains the company’s proprietary porous polymer beads that act like tiny sponges to extract cytokines from blood.  

The purified blood then recirculates back into the patient’s body. During a 24-hour therapy period, a patient’s entire blood volume could be treated more than 70 times, the website says. 
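The website’s figure implies a blood-pump rate in line with ordinary dialysis, consistent with the therapy running on a standard dialysis machine. The 5-liter blood volume below is a typical adult value assumed for the estimate; the article doesn’t state one.

```python
# Back-of-envelope check: ">70 blood volumes in 24 hours" implies a
# pump rate comparable to standard hemodialysis (roughly 200-400 mL/min).
blood_volume_l = 5.0   # assumed typical adult blood volume
passes = 70
hours = 24

total_l = blood_volume_l * passes                  # 350 L through the cartridge
flow_ml_per_min = total_l * 1000 / (hours * 60)
print(f"{flow_ml_per_min:.0f} mL/min")             # ~243 mL/min
```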

Manson says she recently talked with an official at University College London Hospital about setting up a clinical trial to test a blood filtration approach for other cytokine storm–like conditions, although that conversation took place “pre-COVID.”

The evidence is still preliminary as to whether CytoSorb will be effective as an emergency COVID-19 intervention. That said, Manson says she’s at least convinced of the concept behind the therapy. “I think this is genius,” she says. “It’s something we should really work on.” 

Chan says he’s already fielded a number of requests from hospitals in the United States to use CytoSorb for COVID-19 patients via the FDA’s “compassionate use” or “expanded access” programs for therapies already proven safe for other conditions. Since CytoSorb is not yet approved there, Chan’s company is in discussions with the FDA to help make the therapy temporarily available where needed during this crisis.

Topological Photonics: What It Is and Why We Need It

By Charles Q. Choi
Photo: Jayne Ion
Playing with Light: Andrea Blanco-Redondo experiments with entangled photons in silicon nanowire lattices.

Since topological insulators were first created in 2007, these novel materials, which are insulating on the inside and conductive on the outside, have intrigued researchers for their potential in electronics. However, a related but more obscure field—topological photonics—may reach practical applications first.

Topology is the branch of mathematics that investigates what aspects of shapes withstand deformation. For example, an object shaped like a ring may deform into the shape of a mug, with the ring’s hole forming the hole in the cup’s handle, but cannot deform into a shape without a hole.

Using insights from topology, researchers developed topological insulators. Electrons traveling along the edges or surfaces of these materials strongly resist any disturbances that might hinder their flow, much as the hole in a deforming ring would resist any change.

Recently, scientists have designed photonic topological insulators in which light is similarly “topologically protected.” These materials possess regular variations in their structures that lead specific wavelengths of light to flow along their exterior without scattering or losses, even around corners and imperfections.

Here are three promising potential uses for topological photonics.

SEM image of the THzQCL, whose optical cavity consists of an in-plane triangular loop.
Image: Nanyang Technological University
The electrically-driven topological laser shown in this scanning electron microscopy image operates at terahertz frequencies.

TOPOLOGICAL LASERS Among the first practical applications of these novel materials may be lasers that incorporate topological protection. For example, Mercedeh Khajavikhan of the University of Southern California and her colleagues developed topological lasers that were more efficient and proved more robust against defects than conventional devices.

The first topological lasers each required an external laser to excite them to work, limiting practical use. However, scientists in Singapore and England recently developed an electrically driven topological laser.

The researchers started with a wafer made of gallium arsenide and aluminum gallium arsenide layers sandwiched together. When electrically charged, the wafer emitted bright light.

The scientists drilled a lattice of holes into the wafer. Each hole resembled an equilateral triangle with its corners snipped off. The lattice was surrounded by holes of the same shape oriented the opposite way.

The topologically protected light from the wafer flowed along the interface between the different sets of holes, and emerged from nearby channels as laser beams. The device proved robust against defects, says electrical and optical engineer Qi Jie Wang at Nanyang Technological University in Singapore.

The laser works in terahertz frequencies, which are useful for imaging and security screening. Khajavikhan and her colleagues are now working to develop ones that work at near-infrared wavelengths, possibly for telecommunications, imaging, and lidar.

Scanning electron microscopy (SEM) images of the non-Hermitian photonic topological insulator on the InGaAsP platform.
Images: University of Pennsylvania
Scanning electron microscopy (SEM) images show a photonic topological insulator developed at the University of Pennsylvania.

PHOTONIC CHIPS By using photons instead of electrons, photonic chips promise to process data more quickly than conventional electronics can, potentially supporting high-capacity data routing for 5G or even 6G networks. Photonic topological insulators could prove especially valuable for photonic chips, guiding light around defects.

However, topological protection works only on the outsides of materials, meaning the interiors of photonic topological insulators are effectively wasted space, greatly limiting how compact such devices can get.

To address this problem, optical engineer Liang Feng at the University of Pennsylvania and his colleagues developed a photonic topological insulator with edges they could reconfigure so the entire device could shuttle data. They built a photonic chip 250 micrometers wide and etched it with oval rings. By pumping the chip with an external laser, they could alter the optical properties of individual rings, such that “we could get the light to go anywhere we wanted in the chip,” Feng says—from any input port to any output port, or even multiple outputs at once.

All in all, the chip hosted hundreds of times as many ports as seen in current state-of-the-art photonic routers and switches. Instead of requiring an off-chip laser to reconfigure the chip, the researchers are now developing an integrated way to perform that task.

Illustration: Andrea Blanco-Redondo
This artist’s rendering shows topologically protected photons propagating through an array of silicon waveguides.

QUANTUM CIRCUITRY Quantum computers based on qubits are theoretically extraordinarily powerful. But qubits based on superconducting circuits and trapped ions are susceptible to electromagnetic interference, making it difficult to scale up to useful machines. Qubits based on photons could avoid such problems.

Quantum computers work only if their qubits are “entangled,” or linked together to work as one. Entanglement is very fragile—researchers hope topological protection could defend photonic qubits from scattering and other disruptions that can occur when photons run across inevitable fabrication errors.

Photonic scientist Andrea Blanco-Redondo, now head of silicon photonics at Nokia Bell Labs, and her colleagues made lattices of silicon nanowires, each 450 nanometers wide, and lined them up in parallel. Occasionally a nanowire in the lattice was separated from the others by two thick gaps. This generated two different topologies within the lattice, and entangled photons traveling along the border between these topologies were topologically protected, even when the researchers added imperfections to the lattices. The hope is that such topological protection could help quantum computers based on light scale up to solve problems far beyond the capabilities of mainstream computers.

This article appears in the April 2020 print issue as “3 Practical Uses for Topological Photonics.”

Graphene Solar Thermal Film Could Be a New Way to Harvest Renewable Energy

By John Boyd

Researchers at the Center for Translational Atomaterials (CTAM) at Swinburne University of Technology in Melbourne, Australia, have developed a new graphene-based film that can absorb sunlight with an efficiency of over 90 percent, while simultaneously eliminating most IR thermal emission loss—the first time such a feat has been reported.

The result is an efficient solar heating metamaterial that can heat up rapidly to 83 degrees C (181 degrees F) in an open environment with minimal heat loss. Proposed applications for the film include thermal energy harvesting and storage, thermoelectricity generation, and seawater desalination.

Suppressing thermal emission loss—also known as blackbody radiation—while simultaneously absorbing solar light is critical for an efficient solar thermal absorber but is extremely challenging to achieve, says Baohua Jia, founding director of CTAM. “That’s because, depending on the absorbed heat and properties of the absorber, the emission temperature differs, which leads to significant differences in its wavelength,” she explains. “But we’ve developed a three-dimensional structured graphene metamaterial (SGM) that is highly absorbent and selectively filters out blackbody radiation.”

The 3D SGM is composed of a 30-nanometer-thick film of alternating graphene and dielectric layers deposited on a trench-like nanostructure patterned into a copper substrate; the trenches do double duty by enhancing absorption. More importantly, the substrate is patterned in a matrix arrangement to enable flexible tuning of wavelength-selective absorption.

The graphene film is designed to absorb light at wavelengths between 0.28 and 2.5 micrometers. And the copper substrate is structured so that it acts as a selective bandpass filter, suppressing the normal emission of internally generated blackbody energy. The retained heat then serves to further raise the metamaterial’s temperature; hence the SGM can rapidly heat up to 83 degrees C. Should a different operating temperature be required for a particular application, a new trench nanostructure can be fabricated and tuned to match that blackbody wavelength.
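Wien’s displacement law shows why this spectral separation is possible at all: a surface at 83 degrees C re-radiates, as an idealized blackbody, at a peak wavelength far beyond the film’s 0.28-to-2.5-micrometer absorption window, so a bandpass structure can admit sunlight while blocking re-emission. The calculation below is that idealized estimate, not a figure from the paper.

```python
# Peak thermal-emission wavelength of an 83 degree C surface via Wien's
# displacement law, compared with the film's solar absorption band.
WIEN_B_UM_K = 2897.8  # Wien's displacement constant, micrometer-kelvins

def peak_emission_um(temp_c: float) -> float:
    return WIEN_B_UM_K / (temp_c + 273.15)

solar_band_um = (0.28, 2.5)        # the film's absorption window
peak = peak_emission_um(83.0)      # ~8.1 micrometers
print(peak > solar_band_um[1])     # the thermal peak lies well outside the band
```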

“In our previous work, we demonstrated a 90 nm graphene heat-absorbing material,” says Baohua. Though it could heat up to 160 degrees C, “the structure was more complicated, [comprising] four layers: a substrate, a silver layer, a layer of silicon oxide, and a graphene layer. Our new two-layer structure is simpler and doesn’t require vacuum deposition. And the method of fabrication is scalable and low cost.”

Schematic representation of the proposed three-dimensional (3D) structured graphene metamaterial (SGM) absorber.
Images: Swinburne University
A schematic shows the makeup of the 3D structured graphene metamaterial absorber (top). A photograph (bottom left) and thermal image (bottom right) show the absorber under sunlight.

The new material also uses less graphene, reducing the film thickness to one third that of the earlier design, and its thinness helps transfer the absorbed heat more efficiently to other media such as water. Additionally, the film is hydrophobic, which fosters self-cleaning, while the graphene layer effectively protects the copper layer from corrosion, helping to extend the metamaterial’s lifetime.

“Because the metal substrate’s structural parameters are the main factors governing overall absorption performance of the SGM, rather than its intrinsic features, different metals can be used according to application needs or cost,” says Keng-Te Lin, lead author of a paper on the metamaterial recently published in Nature Communications, and who is also a research fellow at Swinburne University. Aluminum foil can also be used to replace copper without compromising the performance, he notes.

To test the metamaterial’s design and stability, the researchers fabricated a prototype using standard laser nanofabrication, self-assembly graphene oxide coating, and photo-induced reduction. 

“We used the prototype film to produce clean water and achieved an impressive solar-to-vapor efficiency of 96.2 percent,” says Keng-Te. “This is very competitive for clean water generation using a renewable energy source.” 

He adds that the metamaterial can also be used for energy harvesting and conversion applications, steam generation, wastewater cleaning, seawater desalination, and thermoelectricity generation.

One challenge still remaining is finding a manufacturing method for making the substrate scalable. 

“We are working with a private company, Innofocus Photonics Technology, that has commercialized a coating machine to lay down the graphene and dielectric layers,” says Baohua. “And we are satisfied with that. What we are now looking for is a suitable method for large scale production of the copper substrate.” One possibility, she adds, is using a roll-to-roll process.

Meanwhile, the researchers are continuing to fine-tune the nanostructure design and improve the SGM’s stability and absorption efficiency. “As for commercialization,” says Baohua, “we think that will be possible in one to two years.”

Here’s Where and How We Think China Will Land on Mars

Par Andrew Jones

China aims to become only the second country to land and operate a spacecraft on the surface of Mars (NASA was first, with a pair of Viking landers in 1976, if you don’t count the former Soviet Union’s 1971 Mars 3 mission). With just a few months before launch, China is still keeping key mission details quiet. But we can discern a few points about where and how it will attempt a landing on the Red Planet from recent presentations and interviews.

The launch

Long March 5 rocket
Photo: CASC

Celestial mechanics dictate that China, along with NASA’s Perseverance rover and the Hope orbiter from the United Arab Emirates, will launch around late July during a Hohmann transfer window, which comes around only once every 26 months and allows a trip to Mars using as little propellant as possible.
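The 26-month cadence falls out of the two planets' orbital periods. A quick sketch of the synodic-period arithmetic (approximate period values, not mission data):

```python
# The Earth-Mars launch window recurs at the synodic period:
# S = 1 / (1/T_earth - 1/T_mars), where T is each planet's orbital period.
T_EARTH = 365.25  # days
T_MARS = 686.98   # days

synodic_days = 1.0 / (1.0 / T_EARTH - 1.0 / T_MARS)
synodic_months = synodic_days / 30.44  # average month length in days

print(round(synodic_days))    # ~780 days
print(round(synodic_months))  # ~26 months
```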

A huge Long March 5 rocket will send the Chinese spacecraft on a journey for about seven months, after which it will fire rockets in order to enter orbit around Mars in February 2021.

The 5-metric-ton spacecraft consists of an orbiter and the landing segment for the rover. It’s expected that the spacecraft will remain coupled in orbit until April. The orbiter will employ a pair of cameras to image the preselected landing sites, before attempting to set down the 240-kilogram rover (which has yet to be publicly named) on the surface.

The landing

Landing on Mars presents unique challenges. The thin atmosphere heats incoming spacecraft dangerously yet does little to slow them, and the planet’s gravitational field differs from Earth’s. But China has experience from earlier space exploits to guide the way.

Earth and Mars will be around 150 million kilometers apart when the orbiter arrives, so it will take eight minutes for communications signals to travel each way. Therefore the spacecraft’s guidance, navigation, and control, or GNC, for the landing process will be fully autonomous. This system will be based on the GNC of Chang’e-4, which autonomously achieved the first landing on the far side of the moon in 2019.
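The eight-minute figure is simple light-travel arithmetic, sketched here:

```python
# One-way signal delay at the arrival distance quoted in the article
# (150 million km). Round-trip commands would take twice as long, which
# is why the landing sequence must run fully autonomously.
SPEED_OF_LIGHT_KM_S = 299_792.458
distance_km = 150e6

delay_s = distance_km / SPEED_OF_LIGHT_KM_S
print(round(delay_s / 60, 1))  # ~8.3 minutes each way
```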

The blunt-body aerodynamics of the entry capsule’s heat shield, which is shaped like a spherical cone whose tip forms a 70-degree angle, will provide the first deceleration as the craft hits the atmosphere traveling at several kilometers per second. Next, while traveling at supersonic speeds, a disk-gap-band parachute will deploy to further slow the spacecraft, and then be discarded. For these phases, China has drawn on technology and experience from its Shenzhou crewed spacecraft, which has allowed astronauts to re-enter Earth’s atmosphere and land safely.

Retropropulsion will be responsible for slowing the spacecraft during its final descent. Most of this will be provided by a 7,500-Newton variable thrust engine, like the main engine used by China’s Chang’e-3 and -4 lunar landers. The lander will employ a laser range finder and a microwave ranging velocity sensor to gain navigation data—technology that was also developed initially for China’s moon missions.

The lander will separate from the main body of the spacecraft at an altitude of 70 meters, according to Zhang Rongqiao, mission chief designer, and enter a hover phase to search for a safe landing spot. 3D laser scanning, or lidar, will provide terrain data such as elevation. Obstacle-avoiding mode, facilitated by optical cameras, will begin at 20 meters above the surface.

Some of these processes are apparent in this mesmeric footage of the Chang’e-4 landing. An obstacle avoidance phase is apparent as the spacecraft makes its descent to the crater-covered lunar surface which appears fractal in nature. 

The landing site

A Candidate Landing Site in Utopia Planitia
Image: University of Arizona/JPL/NASA
This image shows a candidate landing site in Utopia Planitia on Mars.

China initially considered several sites within two broad landing areas, a selection that has since been narrowed down to two preliminary sites near Utopia Planitia, according to a presentation at the European Planetary Science Congress meeting in Geneva last September.

Alfred McEwen, director of the Planetary Image Research Laboratory (PIRL) at the University of Arizona, who attended the session, recently produced an image of one of these areas, in Utopia Planitia.

He wrote in a statement released with the image: “While smooth on large scales, HiRISE reveals small-scale roughness elements, including craters, boulders, and other features. Such hazards may be avoided by using ‘terminal hazard avoidance,’ a technology China has demonstrated on the Moon.”

McEwen notes that “Utopia Planitia may have been extensively resurfaced by mud flows, so it is an interesting place to investigate potential past subsurface habitability.”

Other potential targets are within Chryse Planitia, close to the landing sites of Viking 1 and Pathfinder. For these areas, scientists with the Institute of Space Sciences at Shandong University have formulated probabilities of dust storms occurring during landing.

Whichever spot it targets, the mission will have landing ellipses—the areas in which the spacecraft is statistically likely to land—of around 100 x 40 kilometers. By comparison, NASA, with its vast Mars landing experience, has a proposed ellipse of just 25 x 20 kilometers for Perseverance, thanks to its Range Trigger technology.
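In rough terms, that difference in ellipse dimensions translates to about an eightfold difference in target area:

```python
import math

# Landing-ellipse areas from the quoted dimensions (semi-axes are half
# the stated major/minor lengths). A back-of-the-envelope comparison of
# the two missions' targeting precision.
def ellipse_area(major_km, minor_km):
    return math.pi * (major_km / 2) * (minor_km / 2)

china_area = ellipse_area(100, 40)        # ~3,142 km^2
perseverance_area = ellipse_area(25, 20)  # ~393 km^2

print(round(china_area / perseverance_area))  # Perseverance's target is ~8x smaller
```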

Other necessary pieces of China’s mission are also in place. Tracking stations are now operating across China, as well as in Namibia and Argentina. The Long March 5 rocket passed engine tests in January, while the rover underwent final space environment tests—under simulated conditions experienced during launch, cruising in deep space, and on the Martian surface—around the Chinese New Year. The next big step to set up the 2021 landing attempt is a successful launch from Wenchang in July.

Show The World You Can Write A Cool Program Inside A Single Tweet

Par Stephen Cass

Want to give your coding chops a public workout? Then prove what you can do with the BBC Micro Bot. Billed as the world’s first “8-bit cloud,” and launched on 11 February, the BBC Micro Bot is a Twitter account that waits for people to tweet at it. Then the bot takes the tweet, runs it through an emulator of the classic 1980s BBC Microcomputer running Basic, and tweets back an animated gif of three seconds of the emulator’s output. It might sound like that couldn’t amount to much, but folks have been using it to demonstrate some amazing feats of programming, most notably Eben Upton, creator of the Raspberry Pi.

“The Bot’s [output] posts received over 10 million impressions in the first few weeks, and it’s running around 1000 Basic programs per week,” said the account’s creator, Dominic Pajak, in an email interview with IEEE Spectrum.

Upton, for example, performed a coding tour de force with an implementation of Conway’s Game of Life, complete with a so-called Gosper Gun, all running fast enough to see the Gun spit out glider patterns in real time. (For those not familiar with Conway’s Game of Life, it’s a set of simple rules for cellular automata that exist on a flat grid. Cells are turned on and off based on the state of neighboring cells according to those rules. Particularly interesting patterns that emerge from the rules have been given all sorts of names.)
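Those rules fit in a few lines of code. Here is a minimal sketch in ordinary Python (not Upton's 6502 implementation), using the standard set-of-live-cells formulation:

```python
from collections import Counter

def step(live):
    """Advance one Game of Life generation; `live` is a set of (x, y) cells."""
    # Count how many live neighbors each cell has.
    counts = Counter(
        (x + dx, y + dy)
        for (x, y) in live
        for dx in (-1, 0, 1)
        for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    # A cell lives next turn if it has exactly 3 live neighbors,
    # or 2 live neighbors and it is already alive.
    return {c for c, n in counts.items() if n == 3 or (n == 2 and c in live)}

# A "blinker" oscillates between a row and a column with period 2:
blinker = {(0, 1), (1, 1), (2, 1)}
print(step(step(blinker)) == blinker)  # True
```

A Gosper gun is just a particular starting set of live cells that happens to emit a glider every 30 generations; the rules themselves are unchanged.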


Upton did this by writing 150 bytes of data and machine code for the BBC Microcomputer’s original CPU, the 6502, which the emulator behind the BBC Micro Bot is comprehensive enough to handle. He then converted this binary data into tweetable text using Base64 encoding, and wrapped that data with a small Basic program that decoded it and launched the machine code. Since then, people have been using even more elaborate encoding schemes to pack even more in. 
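The encoding trick generalizes. A sketch of the idea in Python, using placeholder bytes rather than Upton's actual payload:

```python
import base64

# Raw machine code is binary and not tweetable, but its Base64 encoding
# is plain ASCII text that a small BASIC decoder stub can unpack and
# execute. The bytes below are arbitrary example data, not real 6502 code.
machine_code = bytes([0xA9, 0x41, 0x20, 0xEE, 0xFF, 0x60])

tweetable = base64.b64encode(machine_code).decode("ascii")
print(tweetable)  # a short ASCII string safe to embed in a tweet

# The decoder on the receiving end simply reverses the encoding:
decoded = base64.b64decode(tweetable)
print(decoded == machine_code)  # True
```

Base64 spends 4 text characters per 3 bytes; the more elaborate schemes mentioned above pack more bits into each Unicode character of the 280-character budget.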

Pajak, who is the vice-president of business development for Arduino, created the BBC Micro Bot because he is a self-described fan of computer history and Twitter. “It struck me that I could combine the two.” He chose the BBC Micro as his target because “growing up in the United Kingdom during the 1980s, I learnt to program the BBC Micro at school. I certainly owe my career to that early start.” Pajak adds that, “There are technical reasons too. BBC Basic was largely developed by Sophie Wilson (who went on to create the Arm architecture) and was by far the best Basic implementation of the era, with some very nice features allowing for ‘minified’ code.”

Pajak explains that the bot is written in Javascript for the Node.js runtime environment, and acts as a front end to Matt Godbolt's JSbeeb emulation of the BBC Micro. When the bot spots a tweet intended for it, “it does a little bit of filtering and then injects the text into the emulated BBC Micro's keyboard buffer. The bot uses ffmpeg to create a 3-second video after 30 seconds of emulation time.” Originally the bot was running on a Raspberry Pi 4, but he’s since moved it to Amazon Web Services.

Pajak has been pleasantly surprised by the response: “The fact that BBC BASIC is being used for the first time by people across the world via Twitter is definitely a surprise, and it’s great to see people discovering and having fun with it. There were quite a lot of slogans and memes being generated by users in Latin America recently. This I didn't foresee for sure.”

The level of sophistication of the programs has risen sharply, from simple Basic programs through Upton’s Game of Life implementation and beyond. “The barriers keep getting pushed. Every now and then I have to do a double take: Can this really be done with 280 characters of code?” Pajak points to Katie Anderson’s tongue-in-cheek encoding of the Windows 3.1 logo, and the replication of a classic bouncing ball demo by Paul Malin—game giant Activision’s technical director—which, Pajak says, uses “a special encoding to squeeze 361 ASCII characters of code into a 280 Unicode character tweet.”


If you’re interested in trying to write a program for the Bot, there are a host of books and other coding advice about the BBC Micro available online, with Pajak hosting a starting set of pointers and a text encoder on his website.

As for the future and other computers, Pajak says he’s given some tips to people who want to build similar bots for the Apple II and Commodore computers. For himself, he’s contemplating finding a way to execute the tweets on a physical BBC Micro, saying “I already have my BBC Micro connected to the Internet using an Arduino MKR1010...”

When the Most Important Technology Is Teamwork

Par Mark Pesce
Illustration: Greg Mably

I get to meet plenty of smart people through my work as a mentor to entrepreneurs, but few have proven as talented as Mobin Nomvar, cofounder of Sydney startup Automated Process Synthesis Co., which develops machine-learning software for chemical-process engineering. When he first started his company, Nomvar knew a lot about coding, more about chemistry, but little about the biological processes going on within his own body.

You see, as he settled into the sedentary role of full-time managing director, his waistline expanded. So he studied the problem and planned to burn through the fat he’d accumulated by starving his body of carbohydrates, forcing it to metabolize fat instead of sugar for energy. But having never dieted before, he had no idea what to expect. As an engineer, he wanted a number he could measure to gauge how well he was doing. Blood-glucose level would be just the ticket, he thought.

Many millions of diabetics make that measurement every day, sometimes several times a day, by pricking a finger and swiping a drop of blood onto a test strip. Nomvar asked himself: Could there be a less painful way?

He suspected he might find one with a measurement of his body’s bioimpedance. For 20 years, researchers had published promising findings that showed it was possible to measure blood-glucose levels this way, but never with the accuracy of blood-based measurements.

Could a bioimpedance-based blood-glucose sensor be improved and perhaps commercialized? Nomvar built a prototype, strapped it on, and ate a bowl of carbohydrate-packed rice. Then he watched the measurements tick upward—validation enough to keep going. When he reached the limit of his understanding, he assembled a team to help: an expert in bioimpedance, a chemical engineer, and a software designer—people with the sorts of complementary talents needed to develop such a gizmo.

Together they reviewed the science and read the relevant literature, and only then went back to Nomvar’s prototype, seeking more signal from its sensor. Finding the needed signal turned out to be easier than they’d expected, though they had to borrow a US $50,000 commercial bioimpedance unit to confirm the results from the crude prototype, constructed at a thousandth the cost.

Sensitive bioimpedance measurements turned out to be necessary, but not sufficient. Many different body processes affect bioimpedance, and isolating the effects of changing glucose levels appeared nearly impossible. Indeed, trying to clean up the signal using machine learning produced only more noise. Every team member had a go at a solution, drawing from their expertise to tune the algorithm. A week-long burst of activity from one teammate got them over the line, resulting in a device that outperforms every other noninvasive technique that’s ever been used to measure blood glucose.

In just over three months, they perfected their ring-shaped sensor. Slipped over a finger and paired with the proper algorithm, it can measure blood glucose continuously and inexpensively. As the number of diabetics globally passes half a billion, such a device could have quite a large market—if it successfully makes it through the clinical testing needed to certify it as both accurate and safe. Many medical wonders fail to overcome that hurdle, and Nomvar’s work so far is just the first step in what could be a very long race.

It’s a step Nomvar admits he couldn’t have taken alone. But his team—diverse, experienced, and capable—had the strengths needed to succeed.

This article appears in the April 2020 print issue as “Don’t Go It Alone.”

PAM4 Gigabit Ethernet Electrical SERDES Analysis, Debug and Compliance Testing

This white paper shows how the introduction of complicated figures of merit like SNDR, COM, and ERL, plus FEC (forward error correction) changes how we think about SERDES performance. SERDES tests require more than pristine signal generation and error counting. This paper presents the key SERDES tests, the need for FEC test patterns and the ability to insert errors that can probe Reed-Solomon FEC, and techniques for calibrating interference and jitter tolerance tests.

IEEE Plots a Path for Wide Bandgap Semiconductors Used in the Power Industry

Par Kathy Pretz
Illustration of icons relating to semiconductor production.
Illustration: Shutterstock

THE INSTITUTE There’s a lot of excitement in the power industry about devices made with wide bandgap (WBG) semiconductors such as silicon carbide (SiC) and gallium nitride (GaN).

The materials’ bandgaps—the energy required to move an electron from the valence band into the conduction band—are significantly greater than silicon’s. As a result, WBG power devices use less energy, can handle higher voltages, can operate at higher temperatures and frequencies, and can produce more reliable forms of electricity from renewable energy. But the technology is also fairly new, and the devices cost more than silicon-based ones, which have a proven track record.

To encourage the use of WBG technology, the IEEE Power Electronics Society (PELS) recently released the International Technology Roadmap for Wide Bandgap Power Semiconductors (ITRW).

“The road map is a strategic look at the long-term landscape of WBG, its future, what the trends are, and what the possibilities are,” says IEEE Fellow Braham Ferreira, chair of the ITRW steering committee. “The purpose of the document is to facilitate an acceleration in the R&D process to fulfill the potential this new technology has.”

The road map committee is divided into working groups that focus on four areas: substrates and devices, modules and packaging, GaN systems and applications, and SiC systems and applications. Experts from around the world are participating, including materials scientists and engineers, device specialists and researchers, policymakers, and representatives from industry and academia.

The road map identifies key trends, design challenges, and potential applications as well as a preview of future applications.

“We could not give marching orders for industry on the production and development of these devices,” Ferreira says. “By consensus and agreement, we identified what the potential new applications could be, and gave direction for investment in long-term R&D.”


There are several reasons for using WBG semiconductors for power electronics and other applications, according to the ITRW executive summary. SiC and GaN devices are becoming more affordable and widely available. They also offer performance that can’t be achieved with silicon.

Power converters built with the new generation of WBG semiconductor devices made from SiC and GaN have the potential to switch 100 to 1,000 times faster than their silicon counterparts.

They also can save a lot of energy, Ferreira says: “With a typical silicon converter, you get about 95 percent efficiency. But using a WBG converter, the efficiency is closer to 99 percent.”
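Those percentages understate the practical difference, since what a converter dissipates as heat is the loss, not the efficiency:

```python
# Comparing conversion losses implied by the typical efficiency figures
# quoted above (95% for silicon vs. roughly 99% for WBG).
silicon_loss = 1 - 0.95  # 5% of throughput wasted as heat
wbg_loss = 1 - 0.99      # 1% wasted

print(round(silicon_loss / wbg_loss))  # WBG cuts conversion losses ~5x
```

A fivefold reduction in waste heat also shrinks the cooling hardware a converter needs, which is part of why WBG converters can be smaller.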

The road map summary lists the markets that could benefit most from the adoption of WBG technology, including ones for photovoltaic converters, hybrid and pure electric automotive drivetrains, and data centers.

The road map authors also foresee clear benefits for the technology in radiation-hardened electronic equipment used in space and other places where a lot of radiation is present, Ferreira says.

Markets that can benefit from WBG’s smaller converters and reduced losses and noise are power supplies for computers, laptops, televisions, and electric vehicles.


The road map identifies short- (5 years), mid- (5 to 15 years) and long-term time frames for commercialization. Short-term indicators are given for existing products and devices. The mid-term section explains what it would take for specific technologies to turn a profit. Longer-term trends highlight research that could lead to new devices.

Several case studies are included. One looks at integrated switching cells for modular wide-bandgap conversion. Another considers high-voltage packages for silicon carbide MOSFETs.

PELS members can download the road map for free. The cost is US $50 for other IEEE members and $250 for nonmembers.

The society also offers webinars about WBG semiconductors.

Pop-Up Open Source Medical Hardware Projects Won’t Stop Coronavirus, but Might Be Useful Anyway. Here’s Why

Par Lucas Laursen

Halfway to the moon and bleeding oxygen into space, the Apollo 13 spacecraft and its occupants seemed in dire straits. But the astronauts modified their CO2 scrubbers with a duct-tape-and-plastic-bag solution cooked up by NASA engineers and made famous in the 1995 movie. Now, in support of medical workers facing hardware shortages due to the coronavirus pandemic, several networks of volunteers are developing similarly MacGyver’d respiratory equipment using easy-to-find or printable parts.

Several such groups have taken on the open source mantle, and their stories illustrate some of the strengths and weaknesses of the wider open source movement.

One fast-moving team managed to use a 3D printer to produce 100 replacement valves for an Italian hospital’s intensive care unit, but was concerned that it might face legal threats from the original equipment manufacturer.

Another group is targeting a long list of supplies and devices, such as homemade hand sanitizer, 3D-printed face shields, nasal cannulas and ventilator machines. One company is prototyping an open-source oxygen concentrator. Some efforts are much lower-tech: One Indiana hospital asked volunteers to help sew facemasks following CDC guidelines.

The core idea is nothing new: anesthetist John Dingley and colleagues published free instructions for a low-cost emergency ventilator in 2010. But it may feel more urgent now that people are reading headlines about equipment shortages at hospitals in even the richest countries in the world.

One reason there often aren’t many manufacturers of a given medical device is the cost of getting the devices tested and approved for medical use. Even if the individual units don’t cost much, getting a medical device to market the usual way costs anywhere from $31 million to around $94 million, depending on the complexity and application, according to a 2010 estimate.

There are also issues with whether 3D-printed parts can be cleaned properly. Many ad hoc fixes won’t be as durable as products produced with an eye toward longer-term cost-effectiveness.

Still, moonshot-gone-wrong solutions may obtain expedited review from medical regulators for narrow uses, as is the case with the Open Source Ventilator Ireland group, which told Forbes it is getting a sped-up examination from Ireland’s regulator.

Michigan Technological University engineering professor Joshua M. Pearce, one of the editors of a forthcoming special issue of the journal HardwareX focusing on open source COVID-19 medical hardware, predicts that the U.S. Food and Drug Administration (FDA) will likely also waive some licensing requirements in the event of massive shortages.

“In the end I think it comes down to the Golden Rule: Do onto others as you would have them do onto you,” Pearce says. “I know I would be happy to have the option of even a partially tested open source ventilator if I had COVID19, needed it, and all the hospital systems were used.”

If volunteer medical device makers get past legal hurdles, they will also need to get in sync with patients and medical staff about what really works. 

The final users’ needs have often been “a minor part of the decision-making process” in commercial device development, wrote University of Pisa bioengineer Carmelo De Maria and colleagues in a chapter on open-source medical devices in the Clinical Engineering Handbook.

“Sometimes those people don't have any competence in medical devices and they risk creating confusion,” De Maria says.

Already, some members of the Open Source COVID19 Medical Supplies group on Facebook have weighed in with that kind of criticism. One wrote: “None of the mask designs I’ve seen people printing here will do anything to stop the virus.” Another group member, a healthcare worker, pooh-poohed a thread devoted to an automatic bag valve, writing: “There is no real-life scenario an automated Ambubag would be useful. Everyone designing these can turn their skills elsewhere.”

That feedback, visible to any potential contributors, might help steer the group toward more viable solutions. One recent post, for example, suggested concentrating amateur efforts on lower-tech devices aimed at less critical patients, to free up first-line hardware for the most critical patients.

A different issue is coordinating all the digital Good Samaritans. One recently formed group, called Helpful Engineering, reported having over 3000 registered volunteers as of 19 March, and over 11,000 people on Slack, the messaging platform. (And you, newly remote worker, thought your office Slack was getting noisy.)

The speed with which people can talk about, and even design, something online may be tantalizing, but it might not reflect how fast the output can spread in the real world. In the Clinical Engineering Handbook, De Maria and colleagues write that the growing ease with which people can make their own medical hardware makes it even more important to create accompanying rules and methods for validating do-it-yourself devices.

De Maria helped build Ubora, a platform where makers can document the work they have done to show their device’s efficacy.

“Open Source can create a reliable prototype but [when] you want to go to the next level you need another type of approach that has to take your brilliant idea, do an experiment together with experts before going to the patients,” De Maria says. 

Even with the speed of open source and the goodwill and skills of thousands of volunteers, generating widely affordable, easily buildable devices that withstand rigorous testing and are legal to distribute may not happen as quickly as we need it to in order to suppress this pandemic.

That doesn’t make the effort a waste. Think of all the engineers who were inspired by the story of Apollo 13’s improvised scrubbers, and the institutional knowledge NASA gained for future missions. If the lessons of the hardware push in response to today’s COVID-19 outbreak stay in the open, they will be useful in the longer term.

With that in mind, De Maria and colleagues are challenging open source hardware makers with a competition calling for European-compliant medical designs that will be well-documented using Ubora. The first deadline is 30 April and awards won’t be presented until June.

“We created the competition looking for a solution, but in perspective,” De Maria says. Creating and validating systematic solutions will take months, not weeks.

While some smaller open source components have already received government approval for so-called “compassionate use” and spare parts such as those valves are welcome, it may be too late for them to make much of a difference in places still on the wrong side of the COVID-19 growth curve.

The real reward is saving lives in future pandemics.

Says Pearce: “I am operating under the assumption that… anything we do now will help for the next pandemic.”

IEEE Spectrum updated this story with quotes from De Maria.

Here’s a Blueprint for a Practical Quantum Computer

Par Richard Versluis
Illustration: Chad Hagen

The classic Rubik’s Cube has 43,252,003,274,489,856,000 different states. You might well wonder how people are able to take a scrambled cube and bring it back to its original configuration, with just one color showing on each side. Some people are even able to do this blindfolded after viewing the scrambled cube once. Such feats are possible because there’s a basic set of rules that always allows someone to restore the cube to its original state in 20 moves or fewer.

Controlling a quantum computer is a lot like solving a Rubik’s Cube blindfolded: The initial state is well known, and there is a limited set of basic elements (qubits) that can be manipulated by a simple set of rules—rotations of the vector that represents the quantum state. But observing the system during those manipulations comes with a severe penalty: If you take a look too soon, the computation will fail. That’s because you are allowed to view only the machine’s final state.

The power of a quantum computer lies in the fact that the system can be put in a combination of a very large number of states. Sometimes this fact is used to argue that it will be impossible to build or control a quantum computer: The gist of the argument is that the number of parameters needed to describe its state would simply be too high. Yes, it will be quite an engineering challenge to control a quantum computer and to make sure that its state will not be affected by various sources of error. However, the difficulty does not lie in its complex quantum state but in making sure that the basic set of control signals does what it should and that the qubits behave as you expect them to.

If engineers can figure out how to do that, quantum computers could one day solve problems that are beyond the reach of classical computers. Quantum computers might be able to break codes that were thought to be unbreakable. And they could contribute to the discovery of new drugs, improve machine-learning systems, solve fiendishly complex logistics problems, and so on.

The expectations are indeed high, and tech companies and governments alike are betting on quantum computers to the tune of billions of dollars. But it’s still a gamble, because the same quantum-mechanical effects that promise so much power also cause these machines to be very sensitive and difficult to control.

Must it always be so? The main difference between a classical supercomputer and a quantum computer is that the latter makes use of certain quantum mechanical effects to manipulate data in a way that defies intuition. Here I will briefly touch on just some of these effects. But that description should be enough to help you understand the engineering hurdles—and some possible strategies for overcoming them.

Whereas ordinary classical computers manipulate bits (binary digits), each of which must be either 0 or 1, quantum computers operate on quantum bits, or qubits. Unlike classical bits, qubits can take advantage of a quantum mechanical effect called superposition, allowing a qubit to be in a state where it has a certain amount of zero-ness to it and a certain amount of one-ness to it. The coefficients that describe how much one-ness and how much zero-ness a qubit has are complex numbers, meaning that they have both real and imaginary parts.
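A single qubit's state can be sketched as a pair of complex amplitudes (a toy illustration of the bookkeeping, not a full simulator):

```python
import math

# A qubit's state is two complex amplitudes (alpha, beta): the
# "zero-ness" and "one-ness" described above. Measurement probabilities
# are the squared magnitudes, which must sum to 1.
alpha = complex(1 / math.sqrt(2), 0)  # amplitude of |0>
beta = complex(0, 1 / math.sqrt(2))   # amplitude of |1> (purely imaginary)

p_zero = abs(alpha) ** 2
p_one = abs(beta) ** 2

print(round(p_zero, 3), round(p_one, 3))  # 0.5 0.5
print(math.isclose(p_zero + p_one, 1.0))  # True: the state is normalized
```

Note that alpha and beta carry phase information beyond those probabilities; the two states sketched by (alpha, beta) and (alpha, -beta) measure identically in this basis but behave differently under further operations.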

In a machine with multiple qubits, you can create those qubits in a very special way, such that the state of one qubit cannot be described independently of the state of the others. This phenomenon is called entanglement. The states that are possible for multiple entangled qubits are more complicated than those for a single qubit.

While two classical bits can be set only to 00, 01, 10, or 11, two entangled qubits can be put into a superposition of these four fundamental states. That is, the entangled pair of qubits can have a certain amount of 00-ness, a certain amount of 01-ness, a certain amount of 10-ness, and a certain amount of 11-ness. Three entangled qubits can be in a superposition of eight fundamental states. And n qubits can be in a superposition of 2^n states. When you perform operations on these n entangled qubits, it’s as though you were operating on 2^n bits of information at the same time.
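The exponential growth of the state space is easy to check numerically. A brief sketch, using the Bell state as the standard two-qubit example:

```python
import numpy as np

zero = np.array([1, 0], dtype=complex)  # |0>
one = np.array([0, 1], dtype=complex)   # |1>

# A Bell state: equal parts 00-ness and 11-ness, no 01 or 10 component.
# It cannot be factored into two independent single-qubit states.
bell = (np.kron(zero, zero) + np.kron(one, one)) / np.sqrt(2)
print(np.round(np.abs(bell) ** 2, 3))  # probabilities of 00, 01, 10, 11

# n qubits require a state vector with 2**n complex entries.
n = 3
state = zero
for _ in range(n - 1):
    state = np.kron(state, zero)
print(len(state))  # 8, i.e., 2**3
```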

The operations you do on a qubit are akin to the rotations done to a Rubik’s Cube. A big difference is that the quantum rotations are never perfect. Because of certain limitations in the quality of the control signals and the sensitivity of the qubits, an operation intended to rotate a qubit by 90 degrees may end up rotating it by 90.1 degrees or by 89.9 degrees, say. Such errors might seem small but they quickly add up, resulting in an output that is completely incorrect.
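To see how quickly such tiny over-rotations compound, consider this back-of-the-envelope calculation (the 0.1-degree figure comes from the example above; the operation count is arbitrary):

```python
import numpy as np

# Each operation intends a 90-degree rotation but delivers 90.1 degrees.
intended = np.radians(90.0)
actual = np.radians(90.1)
error_per_op = actual - intended

# After 900 operations, the accumulated drift alone is a full quarter turn:
# the qubit ends up pointing somewhere entirely different than intended.
n_ops = 900
drift_degrees = np.degrees(n_ops * error_per_op)
print(round(drift_degrees, 6))  # 90.0
```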

Another source of error is decoherence: Left by themselves, the qubits will gradually lose the information they contain and also lose their entanglement. This happens because the qubits interact with their environment to some degree, even though the physical substrate used to store them has been engineered to keep them isolated. You can compensate for the effects of control inaccuracy and decoherence using what’s known as quantum error correction, but doing so comes at great cost in terms of the number of physical qubits required and the amount of processing that needs to be done with them.

Once these technical challenges are overcome, quantum computers will be valuable for certain special kinds of calculations. After executing a quantum algorithm, the machine will measure its final state. This measurement, in theory, will yield with high probability the solution to a mathematical problem that a classical computer could not solve in a reasonable period of time.

So how do you begin designing a quantum computer? In engineering, it’s good practice to break down the main function of a machine into groups containing subfunctions that are similar in nature or required performance. These functional groups then can be more easily mapped onto hardware. My colleagues and I at QuTech in the Netherlands have found that the functions needed for a quantum computer can naturally be divided into five such groups, conceptually represented by five layers of control. Researchers at IBM, Google, Intel, and elsewhere are following a similar strategy, although other approaches to building a quantum computer are also possible.

Let me describe that five-layer cake, starting at the top, the highest level of abstraction from the nitty-gritty details of what’s going on deep inside the hardware.

At the top of the pile is the application layer, which is not part of the quantum computer itself but is nevertheless a key part of the overall system. It represents all that’s needed to compose the relevant algorithms: a programming environment, an operating system for the quantum computer, a user interface, and so forth. The algorithms composed using this layer can be fully quantum, but they may also involve a combination of classical and quantum parts. The application layer should not depend on the type of hardware used in the layers under it.

Illustration: Chad Hagen
Layer Cake: The components of a practical quantum computer can be divided into five sections, each carrying out different kinds of processing.

Directly below the application layer is the classical-processing layer, which has three basic functions. First, it optimizes the quantum algorithm being run and compiles it into microinstructions. That’s analogous to what goes on in a classical computer’s CPU, which processes many microinstructions for each machine-code instruction it must carry out. This layer also processes the quantum-state measurements returned by the hardware in the layers below, which may be fed back into a classical algorithm to produce final results. The classical-processing layer will also take care of the calibration and tuning needed for the layers below.

Underneath the classical layer are the digital-, analog-, and quantum-processing layers, which together make up a quantum processing unit (QPU). There is a tight connection between the three layers of the QPU, and the design of one will depend strongly on that of the other two. Let me describe more fully now the three layers that make up the QPU, moving from the top downward.

The digital-processing layer translates microinstructions into pulses, the kinds of signals needed to manipulate qubits, allowing them to act as quantum logic gates. More precisely, this layer provides digital definitions of what those analog pulses should be. The analog pulses themselves are generated in the QPU’s analog-processing layer. The digital layer also feeds back the measurement results of the quantum calculation to the classical-processing layer above it, so that the quantum solution can be combined with results computed classically.

Right now, personal computers or field-programmable gate arrays can handle these tasks. But when error correction is added to quantum computers, the digital-processing layer will have to become much more complicated.

The analog-processing layer creates the various kinds of signals sent to the qubits, one layer below. These are mainly voltage steps and sweeps and bursts of microwave pulses, which are phase and amplitude modulated so as to execute the required qubit operations. Those operations involve qubits connected together to form quantum logic gates, which are used in concert to carry out the overall computation according to the particular quantum algorithm that is being run.

Although it’s not technically difficult to generate such a signal, there are significant hurdles here when it comes to managing the many signals that would be needed for a practical quantum computer. For one, the signals sent to the different qubits would need to be synchronized at picosecond timescales. And you need some way to convey these different signals to the different qubits so as to be able to make them do different things. That’s a big stumbling block.

Illustration: Chad Hagen
Divide and Conquer: In a practical quantum computer, there will be too many qubits to attach separate signal lines to each of them. Instead, a combination of spatial and frequency multiplexing will be used. Qubits will be fabricated in groups attached to a common signal line, with each qubit in a group tuned to respond to signals of just one frequency [shown here as one color]. The computer can then manipulate a subset of its qubits by generating pulses of one particular frequency and using an analog switching network to send these pulses only to certain qubit groups.

In today’s small-scale systems, with just a few dozen qubits, each qubit is tuned to a different frequency—think of it as a radio receiver locked to one channel. You can select which qubit to address on a shared signal line by transmitting at its special frequency. That works, but this strategy doesn’t scale. You see, the signals sent to a qubit must have a reasonable bandwidth, say, 10 megahertz. And if the computer contains a million qubits, such a signaling system would need a bandwidth of 10 terahertz, which of course isn’t feasible. Nor would it be possible to build in a million separate signal lines so that you could attach one to each qubit directly.

The solution will probably involve a combination of frequency and spatial multiplexing. Qubits would be fabricated in groups, with each qubit in the group being tuned to a different frequency. The computer would contain many such groups, all attached to an analog communications network that allows the signal generated in the analog layer to be connected only to a selected subset of groups. By arranging the frequency of the signal and the network connections correctly, you can then manipulate the targeted qubit or set of qubits without affecting the others.
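As a toy illustration of that combined addressing scheme (all numbers and names here are hypothetical, not taken from any real QPU):

```python
# Qubits sit in groups that share a signal line; within a group, each qubit
# is tuned to a distinct frequency channel.
FREQS_PER_GROUP = 5

def qubit_address(qubit_index):
    """Flat qubit index -> (group on the switching network, frequency channel)."""
    return divmod(qubit_index, FREQS_PER_GROUP)

def pulse_targets(groups, channel):
    """Qubits affected by one frequency routed to a chosen subset of groups."""
    return [g * FREQS_PER_GROUP + channel for g in sorted(groups)]

print(qubit_address(7))          # (1, 2): group 1, channel 2
print(pulse_targets({0, 2}, 3))  # [3, 13]: one qubit per selected group
```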

That approach should do the job, but such multiplexing comes with a cost: inaccuracies in control. It remains to be determined how such inaccuracies can be overcome.

In current systems, the digital- and analog-processing layers operate mainly at room temperature. Only the quantum-processing layer beneath them, the layer holding the qubits, is kept near absolute zero temperature. But as the number of qubits increases in future systems, the electronics making up all three of these layers will no doubt have to be integrated into one packaged cryogenic chip.

Some companies are currently building what you might call pre-prototype systems, based mainly on superconducting qubits. These machines contain a maximum of a few dozen qubits and are capable of executing tens to hundreds of coherent quantum operations. The companies pursuing this approach include tech giants Google, IBM, and Intel.

By extending the number of control lines, engineers could expand current architectures to a few hundred qubits, but no further. And the short time that these qubits remain coherent—today, roughly 50 microseconds—will limit the number of quantum instructions that can be executed before the calculation is overwhelmed by errors.

Given these limitations, the main application I anticipate for systems with a few hundred qubits will be as an accelerator for conventional supercomputers. Specific tasks for which the quantum computer runs faster will be sent from a supercomputer to the quantum computer, with the results then returned to the supercomputer for further processing. The quantum computer will in a sense act like the GPU in your laptop, doing certain specific tasks, like matrix inversion or optimization of initial conditions, a lot faster than the CPU alone ever could.

During this next phase in the development of quantum computers, the application layer will be fairly straightforward to build. The digital-processing layer will also be relatively simple. But building the three layers that make up the QPU will be tricky.

Current fabrication techniques cannot produce completely uniform qubits. So different qubits have slightly different properties. That heterogeneity in turn requires the analog layer of the QPU to be tailored to the specific qubits it controls. The need for customization makes the process of building a QPU difficult to scale. Much greater uniformity in the fabrication of qubits would remove the need to customize what goes on in the analog layer and would allow for the multiplexing of control and measurement signals.

Multiplexing will be required for the large numbers of qubits that researchers will probably start introducing in 5 to 10 years so that they can add error correction to their machines. The basic idea behind such error correction is simple enough: Instead of storing the data in one physical qubit, multiple physical qubits are combined into one error-corrected, logical qubit.

Quantum error correction could solve the fundamental problem of decoherence, but it would require anywhere from 100 to 10,000 physical qubits per logical qubit. And that’s not the only hurdle. Implementing error correction will require a low-latency, high-throughput feedback loop that spans all three layers of the QPU.

It remains to be seen which of the many types of qubits being experimented with now—superconducting circuits, spin qubits, photonic systems, ion traps, nitrogen-vacancy centers, and so forth—will prove to be the most suitable for creating the large numbers of qubits needed for error correction. Regardless of which one proves best, it’s clear that success will require packaging and controlling millions of qubits if not more.

Which brings us to the big question: Can that really be done? The millions of qubits would have to be controlled by continuous analog signals. That’s hard but by no means impossible. I and other researchers have calculated that if device quality could be improved by a few orders of magnitude, the control signals used to perform error correction could be multiplexed and the design of the analog layer would become straightforward, with the digital layer managing the multiplexing scheme. These future QPUs would not require millions of digital connections, just some hundreds or thousands, which could be built using current techniques for IC design and fabrication.

The bigger challenge could well prove to be the measurement side of things: Many thousands of measurements per second would need to be performed on the chip. These measurements would be designed so that they do not disturb the quantum information (which remains unknown until the end of the calculation) while at the same time revealing and correcting any errors that arise along the way. Measuring millions of qubits at this frequency will require a drastic change in measurement philosophy.

The current way of measuring qubits requires the demodulation and digitization of an analog signal. At the measurement rate of many kilohertz, and with millions of qubits in a machine, the total digital throughput would be petabytes per second. That’s far too much data to handle using today’s techniques, which involve room-temperature electronics connected to the chip holding the qubits at temperatures near absolute zero.

Clearly, the analog and digital layers of the QPU will have to be integrated with the quantum-processing layer on the same chip, with some clever schemes implemented there for preprocessing and multiplexing the measurements. Fortunately, for the processing that is done to correct errors, not all qubit measurements would have to be passed up to the digital layer. That only needs to be done when local circuitry detects an error, which drastically reduces the required digital bandwidth.

What goes on in the quantum layer will fundamentally determine how well the computer will operate. Imperfections in the qubits mean that you’ll need more of them for error correction, and as those imperfections get worse, the requirements for your quantum computer explode beyond what is feasible. But the converse is also true: Improvements in the quality of the qubits might be costly to engineer, but they would very quickly pay for themselves.

In the current pre-prototyping phase of quantum computing, individual qubit control is still unavoidable: It’s required to get the most out of the few qubits that we now have. Soon, though, as the number of qubits available increases, researchers will have to work out systems for multiplexing control signals and the measurements of the qubits.

The next significant step will be the introduction of rudimentary forms of error correction. Initially, there will be two parallel development paths, one with error correction and the other without, but error-corrected quantum computers will ultimately dominate. There’s simply no other route to a machine that can perform useful, real-world tasks.

To prepare for these developments, chip designers, chip-fabrication-process engineers, cryogenic-control specialists, experts in mass data handling, quantum-algorithm developers, and others will need to work together closely.

Such a complex collaboration would benefit from an international quantum-engineering road map. The various tasks required could then be assigned to the different sets of specialists involved, with the publishers of the road map managing communication between groups. By combining the efforts of academic institutions, research institutes, and commercial companies, we can and will succeed in building practical quantum computers, unleashing immense computing power for the future. 

This article appears in the April 2020 print issue as “Quantum Computers Scale Up.”

About the Author

Richard Versluis is the system architect at QuTech, a quantum-computing collaboration between Delft University of Technology and the Netherlands Organization for Applied Scientific Research.

Enevate’s Silicon Anodes Could Yield EV Batteries That Run 400 km on a 5-Minute Charge

Par Prachi Patel

Battery makers have for years been trying to replace the graphite anode in lithium-ion batteries with a version made of silicon, which would give electric vehicles a much longer range. Some batteries with silicon anodes are getting close to market for wearables and electronics. The recipes for these silicon-rich anodes that a handful of companies are developing typically use silicon oxide or a mix of silicon and carbon.

But Irvine, CA-based Enevate is using an engineered porous film made mainly of pure silicon. In addition to being inexpensive, the new anode material, which founder and chief technology officer Benjamin Park has spent more than 10 years developing, will lead to an electric vehicle (EV) that has 30 percent more range on a single charge than today’s EVs. What’s more, the battery Enevate envisions could be charged up enough in five minutes to deliver 400 km of driving range.

Big names in the battery and automotive business are listening. Carmakers Renault, Nissan, and Mitsubishi, as well as battery-makers LG Chem and Samsung, are investors. And lithium battery pioneer and 2019 Chemistry Nobel Prize winner John Goodenough is on the company’s Advisory Board.

When lithium-ion batteries are charged, lithium ions move from the cathode to the anode. The more ions the anode can hold, the higher its energy capacity, and the longer the battery can run. Silicon can in theory hold ten times the energy of graphite. But it also expands and contracts dramatically, falling apart after a few charge cycles.

To get around that, battery makers such as Tesla today add just a tiny bit of silicon to graphite powder. The powder is mixed with a glue-like plastic called a binder and is coated on a thin copper foil to make the anode. But, says Park, lithium ions react with silicon first, before graphite. “The silicon still expands quite a bit, and that plastic binder is weak,” he says, explaining that the whole electrode is more likely to degrade as the amount of silicon is ramped up.

Image: Enevate
Edge of punched double-sided finished anode, with copper foil in the middle.

Enevate does not use plastic binders. Instead, its patented process creates the porous 10- to 60-µm-thick silicon film directly on a copper foil. The cherry on top is a nanometers-thick protective coating, which, says Park, “prevents the silicon from reacting with the electrolyte.” That type of reaction can also damage a battery.

The process does not require high-quality silicon, so anodes of this type cost less than their graphite counterparts of the same capacity. And because the material is mostly silicon, lithium ions can slip in and out very quickly, charging the battery to 75 percent of its capacity in five minutes, without causing much expansion. Park likens it to a high-capacity movie theater. “If you have a full movie theater it takes a long time to find the one empty seat. We have a theater with ten times more capacity. Even if we fill that theater halfway, [it still doesn’t take long] to find empty seats.”

The company’s roll-to-roll processing techniques can make silicon anodes quickly enough for high-volume manufacturing, says Park. By coupling the silicon anode with conventional cathode materials such as nickel-manganese-cobalt, they have made battery cells with energy densities as high as 350 watt-hours per kilogram, which is about 30 percent more than the specific energy of today’s lithium-ion batteries. Enevate says it is now working with multiple major automotive companies to develop standard-size battery cells for 2024-25 model year EVs.

Google Invents AI That Learns a Key Part of Chip Design

Par Samuel K. Moore

There’s been a lot of intense and well-funded work developing chips that are specially designed to perform AI algorithms faster and more efficiently. The trouble is that it takes years to design a chip, and the universe of machine learning algorithms moves a lot faster than that. Ideally you want a chip that’s optimized to do today’s AI, not the AI of two to five years ago. Google’s solution: have an AI design the AI chip.

“We believe that it is AI itself that will provide the means to shorten the chip design cycle, creating a symbiotic relationship between hardware and AI, with each fueling advances in the other,” they write in a paper describing the work, which was posted today to arXiv.

“We have already seen that there are algorithms or neural network architectures that… don’t perform as well on existing generations of accelerators, because the accelerators were designed like two years ago, and back then these neural nets didn’t exist,” says Azalia Mirhoseini, a senior research scientist at Google. “If we reduce the design cycle, we can bridge the gap.”

Mirhoseini and senior software engineer Anna Goldie have come up with a neural network that learns to do a particularly time-consuming part of design called placement. After studying chip designs long enough, it can produce a design for a Google Tensor Processing Unit in less than 24 hours that beats several weeks’ worth of design effort by human experts in terms of power, performance, and area.

Placement is so complex and time-consuming because it involves placing blocks of logic and memory or clusters of those blocks called macros in such a way that power and performance are maximized and the area of the chip is minimized. Heightening the challenge is the requirement that all this happen while at the same time obeying rules about the density of interconnects. Goldie and Mirhoseini targeted chip placement, because even with today’s advanced tools, it takes a human expert weeks of iteration to produce an acceptable design.

Goldie and Mirhoseini modeled chip placement as a reinforcement learning problem. Reinforcement learning systems, unlike typical deep learning, do not train on a large set of labeled data. Instead, they learn by doing, adjusting the parameters in their networks according to a reward signal when they succeed. In this case, the reward was a proxy measure of a combination of power reduction, performance improvement, and area reduction. As a result, the placement-bot becomes better at its task the more designs it does.
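As a toy illustration of that learn-by-doing loop (this is a generic bandit-style sketch, not Google's system; the rewards and learning rate are invented):

```python
import random

random.seed(0)
REWARD = {"A": 0.9, "B": 0.3}  # hypothetical proxy reward for two placements
pref = {"A": 0.5, "B": 0.5}    # the agent's current action probabilities
RATE = 0.01

for _ in range(2000):
    action = "A" if random.random() < pref["A"] else "B"
    # Nudge the policy toward actions that earned reward (a crude update).
    pref[action] += RATE * REWARD[action] * (1 - pref[action])
    other = "B" if action == "A" else "A"
    pref[other] = 1 - pref[action]

print(pref["A"] > 0.9)  # the agent now strongly prefers the better placement
```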

The team hopes AI systems like theirs will lead to the design of “more chips in the same time period, and also chips that run faster, use less power, cost less to build, and use less area,” says Goldie.

How the Internet Can Cope With the Explosion of Demand for “Right Now” Data During the Coronavirus Outbreak

Par Michael Koziol

The continuing spread of COVID-19 has forced far more people to work and learn remotely than ever before. And more people in self-isolation and quarantine means more people are streaming videos and playing online games. The spike in Internet usage has some countries looking at ways to curb streaming data to avoid overwhelming the Internet.

But the amount of data we’re collectively using now is not actually the main cause of the problem. It’s the fact that we’re all suddenly using many more low-latency applications: teleconferencing, video streaming, and so on. The issue is not that the Internet is running out of throughput. It’s that there’s a lot more demand for data that needs to be delivered without any perceivable delay.

“The Internet is getting overwhelmed,” says Bayan Towfiq, the founder and CEO of Subspace, a startup focusing on improving the delivery of low-latency data. “The problem is going to get worse before it gets better.” Subspace wasn’t planning to come out of stealth mode until the end of this year, but the COVID-19 crisis has caused the company to rethink those plans.

“What’s [been a noticeable problem while everyone is at home streaming and videoconferencing] is less than one percent of data. [For these applications] it’s more important than the other 99 percent,” says Towfiq. While we all collectively use far more data loading webpages and browsing social media, we don’t notice if a photo takes half a second to load in the same way we notice a half-second delay on a video conference call.

So if we’re actually not running out of data throughput, why the concern over streaming services and teleconferencing overloading the Internet?

“The Internet doesn’t know about the applications running on it,” says Towfiq. Put another way, the Internet is agnostic about the type of data moving from point A to point B. What matters most, based on how the Internet has been built, is moving as much data as possible.

And normally that’s fine, if most of the data is in the form of emails or people browsing Amazon. If a certain junction is overwhelmed by data, load times may be a little slower. But again, we barely notice a delay in most of the things for which we use the Internet.

The growing use of low-latency applications, however, means those same bottlenecks are painfully apparent. When a staff Zoom meeting has to contend with someone trying to watch The Mandalorian, the Internet sees no difference between your company’s video chat and Baby Yoda.

For Towfiq, the solution to the Internet’s current stress is not to cut back on the amount of video-conferencing, streaming, and online gaming, as has been suggested. Instead, the solution is what Subspace has been focused on since its founding last year: changing how the Internet works by forcing it to prioritize that one percent of data that absolutely, positively has to get there right away.

Subspace has been installing both software and hardware for ISPs in cities around the world designed to do exactly that. Towfiq says ISPs already saw the value in Subspace’s tech after the company demonstrated that it could make online gaming far smoother for players by reducing the amount of lag they dealt with.

Initially Subspace was sending out engineers to personally install its equipment and software for ISPs and cities they were working with. But with the rising demand and the pandemic itself, the company is transitioning to “palletizing” its equipment: making it so that, after shipping it, the city or ISP can plug in just a few cables and change how their networks function.

Now, Towfiq says, the pandemic has made it clear that the startup needed to immediately come out of stealth. Even though Subspace was already connecting its new tech to cities’ network infrastructure at a rate of five per week in February, coming out of stealth will allow the company to publicly share information about what it’s working on. The urgency, says Towfiq, outweighed the company’s original plans to conduct some proof-of-concept trials and build out a customer base.

“There’s a business need that’s been pulled out of us to move faster and unveil right now,” Towfiq says. He adds that Subspace didn’t make the decision to come out of stealth until last Tuesday. “There’s a macro thing happening with governments and Internet providers not knowing what to do.”

Subspace could offer the guidance these entities need to avoid overwhelming their infrastructure. And once we’re all back to something approximating normal after the COVID-19 outbreak, the Internet will still benefit from the types of changes Subspace is making. As Towfiq says, “We’re becoming a new kind of hub for the Internet.”

Data Centers Are Plagued by Wasteful Computing. Game Theory Could Help

Par Seyed Majid Zahedi
Illustration: Eric Frommelt

When you hear the words “data center” and “games,” you probably think of massive multiplayer online games like World of Warcraft. But there’s another kind of game going on in data centers, one meant to hog resources from the shared mass of computers and storage systems.

Even employees of Google, the company with perhaps the most massive data footprint, once played these games. When asked to submit a job’s computing requirements, some employees inflated their requests for resources in order to reduce the amount of sharing they’d have to do with others. Interestingly, some other employees deflated their resource requests to pretend that their tasks could easily fit within any computer. Once their tasks were slipped into a machine, those operations would then use up all the resources available on it and squeeze out their colleagues’ tasks.

Such trickery might seem a little comical, but it actually points to a real problem—inefficiency.

Globally, data centers consumed 205 billion kilowatt-hours of electricity in 2018. That’s not much less than all of Australia used, and about 1 percent of the world total. A lot of that energy is wasted because servers are not used to their full capacity. An idle server dissipates as much as 50 percent of the power it consumes when running at its peak; as the server takes on work, its fixed power costs are amortized over that work. Because a user running a single task typically takes up only 20 to 30 percent of the server’s resources, multiple users must share the server to boost its utilization and consequently its energy efficiency. Sharing also reduces capital, operating, and infrastructure costs. Not everybody is rich enough to build their own data centers, after all.
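The energy argument can be made concrete with a simple linear power model (the 50 percent idle figure comes from above; the peak wattage is a made-up placeholder):

```python
PEAK_W = 400.0          # hypothetical peak power draw, watts
IDLE_W = 0.5 * PEAK_W   # an idle server dissipates half of its peak power

def power(utilization):
    """Power draw, assuming it grows linearly from idle to peak."""
    return IDLE_W + (PEAK_W - IDLE_W) * utilization

def energy_per_unit_work(utilization):
    """Energy cost per unit of useful work; lower is better."""
    return power(utilization) / utilization

# A lone user at 25 percent utilization vs. a fully shared server:
print(energy_per_unit_work(0.25))  # 1000.0
print(energy_per_unit_work(1.0))   # 400.0 -> 2.5x more energy-efficient
```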

To allocate shared resources, data centers deploy resource-management systems, which divide up available processor cores, memory capacity, and network resources according to users’ needs and the system’s own objectives. At first glance, this task should be straightforward because users often have complementary demands. But in truth, it’s not. Sharing creates competition among users, as we saw with those crafty Googlers, and that can distort the use of resources.

So we have pursued a series of projects using game theory, the mathematical models that describe strategic interactions among rational decision makers, to manage the allocation of resources among self-interested users while maximizing data-center efficiency. In this situation, playing the game makes all the difference.

Helping a group of rational and self-interested users share resources efficiently is not just a product of the big-data age. Economists have been doing it for decades. In economics, market mechanisms set prices for resources based on supply and demand. Indeed, many of these mechanisms are currently deployed in public data centers, such as Amazon EC2 and Microsoft Azure. There, the transfer of real money acts as a tool to align users’ incentives (performance) with the provider’s objectives (efficiency). However, there are many situations where the exchange of money is not useful.

Let’s consider a simple example. Suppose that you are given a ticket to an opera on the day of your best friend’s wedding, and you decide to give the ticket to someone who will best appreciate the event. So you run what’s called a second-price auction: You ask your friends to bid for the ticket, stipulating that the winner pay you the amount of the second-highest bid. It has been mathematically proven that your friends have no incentives to misrepresent how much they value the opera ticket in this kind of auction.
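The mechanics are simple enough to fit in a few lines of Python (the names and bids are invented):

```python
def second_price_auction(bids):
    """Vickrey auction: the highest bidder wins but pays the second-highest
    bid, which makes truthful bidding each bidder's best strategy."""
    ranked = sorted(bids.items(), key=lambda kv: kv[1], reverse=True)
    winner = ranked[0][0]
    price = ranked[1][1]  # the winner pays the runner-up's bid
    return winner, price

bids = {"alice": 80, "bob": 120, "carol": 95}
print(second_price_auction(bids))  # ('bob', 95)
```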

If you do not want money or cannot make your friends pay you any, your options become very limited. If you ask your friends how much they would love to go to the opera, nothing stops them from exaggerating their desire for the ticket. The opera ticket is just a simple example, but there are plenty of places—such as Google’s private data centers or an academic computer cluster—where money either can’t or shouldn’t change hands to decide who gets what.

Game theory provides practical solutions for just such a problem, and indeed it has been adapted for use in both computer networks and computer systems. We drew inspiration from those two fields, but we also had to address their limitations. In computer networks, there has been much work in designing mechanisms to manage self-interested and uncoordinated routers to avoid congestion. But these models consider contention over only a single resource—network bandwidth. In data-center computer clusters and servers, there is a wide range of resources to fight over.

In computer systems, there’s been a surge of interest in resource-allocation mechanisms that consider multiple resources, notably one called dominant resource fairness [PDF]. However, this and similar work is restricted to performance models and to ratios of processors and memory that don’t always reflect what goes on in a data center.
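To make the dominant-resource-fairness idea concrete, here is a toy sketch under simplifying assumptions: each user runs identical tasks with a fixed per-task demand vector, and whole tasks are granted one at a time to whichever user currently holds the smallest "dominant share" (the largest fraction of any single resource they hold). The function name and the break-on-first-failure behavior are our simplifications, not the published algorithm in full:

```python
# Sketch of dominant resource fairness (DRF): repeatedly grant one task
# to the user whose dominant share (max fraction of any one resource
# held) is smallest, until the chosen user's next task no longer fits.

def drf(capacity, demands, max_rounds=100):
    """capacity: {resource: total}; demands: {user: {resource: per-task need}}."""
    used = {r: 0.0 for r in capacity}
    tasks = {u: 0 for u in demands}
    for _ in range(max_rounds):
        def dom_share(u):
            return max(tasks[u] * demands[u][r] / capacity[r] for r in capacity)
        u = min(tasks, key=dom_share)  # most under-served user goes next
        if any(used[r] + demands[u][r] > capacity[r] for r in capacity):
            break  # no room for that user's next task
        for r in capacity:
            used[r] += demands[u][r]
        tasks[u] += 1
    return tasks

# Classic example: 9 CPUs and 18 GB of memory; user A's tasks need
# (1 CPU, 4 GB), user B's need (3 CPUs, 1 GB). DRF equalizes dominant
# shares: A ends up with 3 tasks and B with 2.
alloc = drf({"cpu": 9, "mem": 18},
            {"A": {"cpu": 1, "mem": 4}, "B": {"cpu": 3, "mem": 1}})
# alloc -> {"A": 3, "B": 2}
```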

To come up with game theory models that would work in the data center, we delved into the details of hardware architecture, starting at the smallest level: the transistor.

Transistors were long made to dissipate ever less power as they scaled down in size, in part by lowering the operating voltage. By the mid-2000s, however, that trend, known as Dennard scaling, had broken down. As a result, for a fixed power budget, processors stopped getting faster at the rate to which we had become accustomed. A temporary solution was to put multiple processor cores on the same chip, so that the enormous number of transistors could still be cooled economically. However, it soon became apparent that you cannot turn on all the cores and run them at full speed for very long without melting the chip.

In 2012, computer architects proposed a workaround called computational sprinting. The concept was that processor cores could safely push past their power budget for short intervals called sprints. After a sprint, the processor has to cool down before the next sprint; otherwise the chip is destroyed. If done correctly, sprinting could make a system more responsive to changes in its workload. Computational sprinting was originally proposed for processors in mobile devices like smartphones, which must limit power usage both to conserve charge and to avoid burning the user. But sprinting soon found its way into data centers, which use the trick to cope with bursts of computational demand.

Here’s where the problem arises. Suppose that self-interested users own sprinting-enabled servers, and those servers all share a power supply in a data center. Users could sprint to increase the computational power of their processors, but if a large fraction of them sprint simultaneously, the power load will spike. The circuit breaker is then tripped. This forces the batteries in the uninterruptible power supply (UPS) to provide power while the system recovers. After such a power emergency, all the servers on that power supply are forced to operate on a nominal power budget—no sprinting allowed—while the batteries recharge.

This scenario is a version of the classic “tragedy of the commons,” first identified by British economist William Forster Lloyd in an 1833 essay. He described the following situation: Suppose that cattle herders share a common parcel of land to graze their cows. If an individual herder puts more than the allotted number of cattle on the common, that herder could achieve marginal benefits. But if many herders do that, the overgrazing will damage the land, hurting everyone.

Together with Songchun Fan, then a Duke University doctoral candidate, we studied sprinting strategies as a tragedy of the commons. We built a model of the system that focused on the two main physical constraints. First, for a server processor, a sprint restricts future action by requiring the processor to wait while the chip dissipates heat. Second, for a server cluster, if the circuit breaker trips, then all the server processors must wait while the UPS batteries recharge.

We formulated a sprinting game in which each user, in any given epoch, or round of the game, is in one of three states: active, cooling after a sprint, or recovering after a power emergency. A user’s only decision is whether or not to sprint when their processor is active. Users want to time their sprints to gain benefits, such as improved throughput or a reduction in execution time. Note that these benefits vary according to when the sprint happens; for instance, sprinting is more beneficial when demand is high.
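The per-round dynamics can be sketched as a small state machine. This is an illustrative model only: the state names, the two-round cool-down, and the five-round recovery are our invented placeholders, not figures from the study:

```python
# One user's per-round state machine in a sprinting game.
# ACTIVE: may choose to sprint.  COOLING: chip dissipating heat after a
# sprint.  RECOVERY: the whole cluster waits while UPS batteries recharge.

COOL_ROUNDS = 2      # illustrative cool-down length
RECOVERY_ROUNDS = 5  # illustrative recharge length

def step(state, timer, sprint, emergency):
    """Advance one epoch; returns (new_state, rounds_left_in_state)."""
    if emergency:                       # breaker tripped: everyone recovers
        return "RECOVERY", RECOVERY_ROUNDS
    if state in ("COOLING", "RECOVERY"):
        timer -= 1                      # wait out the remaining rounds
        return (state, timer) if timer > 0 else ("ACTIVE", 0)
    if state == "ACTIVE" and sprint:    # sprint now, then cool down
        return "COOLING", COOL_ROUNDS
    return "ACTIVE", 0                  # stay active, sprint available
```

A sprint buys one round of extra performance at the cost of two quiet rounds; a power emergency costs everyone five.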

Consider a simple example. You are at round 5, and you know that if you sprint, you will gain 10 units of benefit. However, you’d have to let your processor cool down for a couple of rounds before you can sprint again. But now, say you sprint, and then it turns out that if you had instead waited for round 6 to sprint, you could have gained 20 units. Alternatively, suppose that you save your sprint for a future round instead of using it in round 5. But it turns out that all the other users decided to sprint at round 5, causing a power emergency that prevents you from sprinting for several rounds. Worse, by then your gains won’t be nearly as high.

All users must make these kinds of decisions based on how much utility they gain and on other users’ sprinting strategies. While it might be fun to play against a few users, making these decisions becomes intractable as the number of competitors grows to data-center scale. Fortunately, we found a way to optimize each user’s strategy in large systems by using what’s called mean field game analysis. This method avoids the complexity of scrutinizing individual competitors’ strategies by instead describing their behavior as a population. Key to this statistical approach is the assumption that any individual user’s actions do not change the average system behavior significantly. Because of that assumption, we can approximate the effect of all the other users on any given user with a single averaged effect.

It’s kind of analogous to the way millions of commuters try to optimize their daily travel. An individual commuter, call her Alice, cannot possibly reason about every other person on the road. Instead she formulates some expectation about the population of commuters as a whole, their desired arrival times on a given day, and how their travel plans will contribute to congestion.

Mean field analysis allows us to find the “mean field equilibrium” of the sprinting game. Users optimize their responses to the population, and, in equilibrium, no user benefits by deviating from their best responses to the population.

In the traffic analogy, Alice optimizes her commute according to her understanding of the commuting population’s average behavior. If that optimized plan does not produce the expected traffic pattern, she revises her expectations and rethinks her plan. With every commuter optimizing at once, over a few days, traffic converges to some recurring pattern and commuters’ independent actions produce an equilibrium.

Using the mean field equilibrium, we formulated the optimal strategy for the sprinting game, which boils down to this: A user should sprint when the performance gains exceed a certain threshold, which varies depending on the user. We can compute this threshold using the data center’s workloads and its physical characteristics.

When everybody operates with their optimal threshold at the mean field equilibrium, the system gets a number of benefits. First, the data center’s power management can be distributed, as users implement their own strategies without having to request permission from a centralized manager to sprint. Such independence makes power control more responsive, saving energy. Users can modulate their processor’s power draw in microseconds or less. That wouldn’t be possible if they had to wait tens of milliseconds for permission requests and answers to wind their way across the data center’s network. Second, the equilibrium gets more computing done, because users optimize strategies for timely sprints that reflect their own workload demands. And finally, a user’s strategy becomes straightforward—sprinting whenever the gain exceeds a threshold. That’s extremely easy to implement and trivial to execute.
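A toy simulation conveys why a threshold strategy can beat naive greed. Every parameter here is invented for illustration (uniform random gains, a two-round cool-down, a five-round recovery, a breaker that trips when more than a quarter of users sprint at once); it is a sketch of the dynamic, not the authors’ model:

```python
import random

# Toy comparison of a "greedy" strategy (sprint whenever able) against a
# threshold strategy (sprint only when this round's gain clears a bar).

def simulate(n_users, threshold, rounds=1000, trip_frac=0.25, seed=1):
    rng = random.Random(seed)
    cooling = [0] * n_users   # rounds left before each user can sprint again
    recovery = 0              # rounds left in a cluster-wide power emergency
    total_gain, emergencies = 0.0, 0
    for _ in range(rounds):
        if recovery:
            recovery -= 1     # everyone idles while batteries recharge
            continue
        sprinters = 0
        for u in range(n_users):
            if cooling[u]:
                cooling[u] -= 1
                continue
            gain = rng.random()          # this round's benefit from sprinting
            if gain >= threshold:
                sprinters += 1
                total_gain += gain
                cooling[u] = 2           # cool-down after a sprint
        if sprinters > trip_frac * n_users:
            emergencies += 1             # breaker trips
            recovery = 5
    return total_gain, emergencies

greedy_gain, greedy_em = simulate(100, threshold=0.0)
smart_gain, smart_em = simulate(100, threshold=0.8)
```

Greedy users all sprint together, trip the breaker constantly, and spend most rounds recovering; threshold users stagger naturally and harvest only high-value sprints, ending up with more total gain and far fewer emergencies.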

Greed Isn’t Good: Playing the sprinting game using the mean field equilibrium strategy gets more work done with fewer power emergencies than a “greedy” strategy.

The sprinting power-management project is just one in a series of data-center management systems we’ve been working on over the past five years. In each, we use key details of the hardware architecture and system to formulate the games. The results have led to practical management mechanisms that provide guarantees of acceptable system behavior when participants act selfishly. Such guarantees, we believe, will only encourage participation in shared systems and establish solid foundations for energy-efficient and scalable data centers.

Although we’ve managed to address the resource-allocation problem at the levels of server multiprocessors, server racks, and server clusters, putting them to use in large data centers will require more work. For one thing, you have to be able to generate a profile of the data center’s performance. Data centers must therefore deploy the infrastructure necessary to monitor hardware activity, assess performance outcomes, and infer preferences for resources.

Most game theory solutions for such systems require the profiling stage to happen off-line. It might be less intrusive instead to construct online mechanisms that can start with some prior knowledge and then update their parameters during execution as characteristics become clearer. Online mechanisms might even improve the game as it’s being played, using reinforcement learning or another form of artificial intelligence.

There’s also the fact that in a data center, users may arrive and depart from the system at any time; jobs may enter and exit distinct phases of a computation; servers may fail and restart. All of these events require the reallocation of resources, yet these reallocations may disrupt computation throughout the system and require that data be shunted about, using up resources. Juggling all these changes while still keeping everyone playing fairly will surely require a lot more work, but we’re confident that game theory will play a part.

This article appears in the April 2020 print issue as “A Win for Game Theory in the Data Center.”

About the Authors

Benjamin C. Lee, an associate professor of electrical and computer engineering at Duke University, and Seyed Majid Zahedi, an assistant professor at the University of Waterloo, in Ontario, Canada, describe a game they developed that can make data centers more efficient. While there’s a large volume of literature on game theory’s use in computer networking, Lee says, computing on the scale of a data center is a very different problem. “For every 10 papers we read, we got maybe half an idea,” he says.

Video Friday: Robots Help Keep Medical Staff Safe at COVID-19 Hospital

Par Evan Ackerman

Video Friday is your weekly selection of awesome robotics videos, collected by your Automaton bloggers. We’ll also be posting a weekly calendar of upcoming robotics events for the next few months; here’s what we have so far (send us your events!):

HRI 2020 – March 23-26, 2020 – [ONLINE EVENT]
ICARSC 2020 – April 15-17, 2020 – [ONLINE EVENT]
ICRA 2020 – May 31-June 4, 2020 – [SEE ATTENDANCE SURVEY]
ICUAS 2020 – June 9-12, 2020 – Athens, Greece
CLAWAR 2020 – August 24-26, 2020 – Moscow, Russia

Let us know if you have suggestions for next week, and enjoy today’s videos.

UBTECH Robotics’ ATRIS, AIMBOT, and Cruzr robots were deployed at a Shenzhen hospital specialized in treating COVID-19 patients. The company says the robots, which are typically used in retail and hospitality scenarios, were modified to perform tasks that can help keep the hospital safer for everyone, especially front-line healthcare workers. The tasks include providing videoconferencing services between patients and doctors, monitoring the body temperatures of visitors and patients, and disinfecting designated areas.

The Third People’s Hospital of Shenzhen (TPHS), the only designated hospital for treating COVID-19 in Shenzhen, a metropolis with a population of more than 12.5 million, has introduced an intelligent anti-epidemic solution to combat the coronavirus.

AI robots are playing a key role. The UBTECH-developed robot trio, namely ATRIS, AIMBOT, and Cruzr, are giving a helping hand to monitor body temperature, detect people without masks, spray disinfectants and provide medical inquiries.


Someone has spilled gold all over the place! Probably one of those St. Paddy’s leprechauns... Anyways... It happened near a Robotiq Wrist Camera and Epick setup, so it only took a couple of minutes to program and “pick and place” the mess up.

Even in situations like these, it’s important to stay positive and laugh a little. We had this ready and thought we’d still share. Stay safe!

[ Robotiq ]

HEBI Robotics is helping out with social distancing by controlling a robot arm in Austria from their lab in Pittsburgh.

Can’t be too careful!

[ HEBI Robotics ]

Thanks Dave!

SLIDER, a new robot under development at Imperial College London, reminds us a little bit of what SCHAFT was working on with its straight-legged design.

[ Imperial ]

Imitation learning is an effective and safe technique to train robot policies in the real world because it does not depend on an expensive random exploration process. However, due to the lack of exploration, learning policies that generalize beyond the demonstrated behaviors is still an open challenge. We present a novel imitation learning framework to enable robots to 1) learn complex real world manipulation tasks efficiently from a small number of human demonstrations, and 2) synthesize new behaviors not contained in the collected demonstrations. Our key insight is that multi-task domains often present a latent structure, where demonstrated trajectories for different tasks intersect at common regions of the state space. We present Generalization Through Imitation (GTI), a two-stage offline imitation learning algorithm that exploits this intersecting structure to train goal-directed policies that generalize to unseen start and goal state combinations.

[ GTI ]

Here are two excellent videos from UPenn’s Kod*lab showing the capabilities of their programmable compliant origami spring things.

[ Kod*lab ]

We met Bornlove when we were reporting on drones in Tanzania in 2018, and it’s good to see that he’s still improving on his built-from-scratch drone.

[ ADF ]

Laser. Guided. Sandwich. Stacking.

[ Kawasaki ]

The Self-Driving Car Research Studio is a highly expandable and powerful platform designed specifically for academic research. It includes the tools and components researchers need to start testing and validating their concepts and technologies on the first day, without spending time and resources on building DIY platforms or implementing hobby-level vehicles. The research studio includes a fleet of vehicles, software tools enabling researchers to work in Simulink, C/C++, Python, or ROS, with pre-built libraries and models and simulated environment support, even a set of reconfigurable floor panels with road patterns and a set of traffic signs. The research studio’s feature vehicle, QCar, is a 1/10 scale model vehicle powered by an NVIDIA Jetson TX2 supercomputer and equipped with LIDAR, 360-degree vision, depth sensor, IMU, encoders, and other sensors, as well as user-expandable IO.

[ Quanser ]

Thanks Zuzana!

The Swarm-Probe Enabling ATEG Reactor, or SPEAR, is a nuclear electric propulsion spacecraft that uses a new, lightweight reactor moderator and advanced thermoelectric generators (ATEGs) to greatly reduce overall core mass. If the total mass of an NEP system could be reduced to levels that were able to be launched on smaller vehicles, these devices could deliver scientific payloads to anywhere in the solar system.

One major destination of recent importance is Europa, one of the moons of Jupiter, which may contain traces of extraterrestrial life deep beneath the surface of its icy crust. Occasionally, the subsurface water on Europa violently breaks through the icy crust and bursts into the space above, creating a large water plume. One proposed method of searching for evidence of life on Europa is to orbit the moon and scan these plumes for ejected organic material. A swarm of deployed CubeSats could fly through and analyze these plumes multiple times, gathering important scientific data.


This hydraulic cyborg hand costs just $35.

Available next month in Japan.

[ Elekit ]

Microsoft is collaborating with researchers from Carnegie Mellon University and Oregon State University to compete in the DARPA Subterranean (SubT) challenges, collectively named Team Explorer. These challenges are designed to test drones and robots on how they perform in hazardous physical environments where humans can’t access safely. By participating in these challenges, these teams hope to find a solution that will assist emergency first responders to help find survivors more quickly.

[ Team Explorer ]

Aalborg University Hospital is the largest hospital in the North Jutland region of Denmark. Up to 3,000 blood samples arrive here in the lab every day. They must be tested and sorted – a time-consuming and monotonous process which was done manually until now. The university hospital has now automated the procedure: a robot-based system and intelligent transport boxes ensure the quality of the samples – and show how workflows in hospitals can be simplified by automation.

[ Kuka ]

This video shows human-robot collaboration for assembly of a gearbox mount in a realistic replica of a production line of Volkswagen AG. Knowledge-based robot skills enable autonomous operation of a mobile dual arm robot side-by-side of a worker.

[ DFKI ]

A brief overview of what’s going on in Max Likhachev’s lab at CMU.

Always good to see PR2 keeping busy!

[ CMU ]

The Intelligent Autonomous Manipulation (IAM) Lab at the Carnegie Mellon University (CMU) Robotics Institute brings together researchers to address the challenges of creating general purpose robots that are capable of performing manipulation tasks in unstructured and everyday environments. Our research focuses on developing learning methods for robots to model tasks and acquire versatile and robust manipulation skills in a sample-efficient manner.

[ IAM Lab ]

Jesse Hostetler is an Advanced Computer Scientist in the Vision and Learning org at SRI International in Princeton, NJ. In this episode of The Dish TV they explore the different aspects of artificial intelligence, and creating robots that use sleep and dream states to prevent catastrophic forgetting.

[ SRI ]

On the latest episode of the AI Podcast, Lex interviews Anca Dragan from UC Berkeley.

Anca Dragan is a professor at Berkeley, working on human-robot interaction -- algorithms that look beyond the robot’s function in isolation, and generate robot behavior that accounts for interaction and coordination with human beings.

[ AI Podcast ]

An Official WHO Coronavirus App Will Be a “Waze for COVID-19”

Par Eliza Strickland

There’s no shortage of information about the coronavirus pandemic: News sites cover every development, and sites like the Johns Hopkins map constantly update the number of global cases (246,276 as of this writing).

But for the most urgent questions, there seem to be no answers. Is it possible that I caught the virus when I went out today? Did I cross paths with someone who’s infected? How prevalent is the coronavirus in my local community? And if I’m feeling sick, where can I go to get tested or find treatment?

A group of doctors and engineers have come together to create an app that will answer such questions. Daniel Kraft, the U.S.-based physician who’s leading the charge, says his group has “gotten the green light” from the World Health Organization (WHO) to build the open-source app, and that it will be an official WHO app to help people around the world cope with COVID-19, the official name of the illness caused by the new coronavirus.

“We’re putting together a SWAT team of tech avengers,” says Kraft. “We’re building out version 1 of the app, and we’re hoping to get it out by next week.”

Sameer Pujari, a manager of digital health and information at the WHO, confirmed that Kraft is working with a WHO team led by Ray Chambers, WHO ambassador for global strategy. Pujari declined to give further information about the app, saying that it was premature to do so. 

Kraft describes the app as a “Waze for COVID-19,” providing navigation advice not for the roads but instead for people’s “COVID journey.” His goal is to have the app provide hyperlocal information for people, and to have people’s data feed back to public health officials to improve the app’s accuracy. 

Since smartphones maintain a GPS history of the user’s location, they’re uniquely suited for contact tracing, in which public health officials try to determine whom an infected person has been in contact with. Traditionally, officials would ask people with the virus to recall their movements, then track down contacts who might have caught the virus and get them into self-quarantine. But by looking at the location records stored in infected people’s phones and cross-referencing that information with other people’s data, public health authorities can quickly and precisely determine who’s at risk. 
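The core of that cross-referencing step can be sketched in a few lines. This is a deliberately simplified toy: real systems work with GPS coordinates, time windows, proximity thresholds, and privacy-preserving encodings, whereas here locations are reduced to hypothetical (hour, place) pairs and the function name `at_risk` is ours:

```python
# Toy sketch of contact-tracing cross-referencing: flag any user whose
# location log shares an (hour, place) entry with a confirmed case's log.

def at_risk(case_log, other_logs):
    """case_log: set of (hour, place); other_logs: {user: set of (hour, place)}."""
    return {user for user, log in other_logs.items() if log & case_log}

case = {(9, "cafe"), (13, "gym")}
others = {
    "alice": {(9, "cafe"), (11, "park")},  # same cafe, same hour as the case
    "bob":   {(10, "cafe")},               # same cafe, but an hour later
}
# at_risk(case, others) -> {"alice"}
```

The set intersection does in milliseconds what interview-based recall does in days, which is why epidemiologists find phone logs so attractive and privacy advocates so worrying.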

In China and South Korea, apps that collected data for contact tracing were key to stopping the coronavirus’s spread—but they also enabled mass surveillance in China and the release of private information in South Korea. Epidemiologists are currently debating whether such apps should be used in Europe and the United States, and whether the benefits outweigh the privacy concerns. In Britain, authorities are developing a contact-tracing app that only collects location data from users who opt in.

The WHO app would similarly rely on people agreeing to share their data with health authorities. Kraft describes their approach to data sharing as “privacy-centric.” 

Kraft, who serves as the chair of medicine for the education company Singularity University, has put out calls for collaborators over the last few weeks. His recruits include a former chief data scientist for Microsoft, a former engineering manager at Google, and MIT professor Ramesh Raskar. He’s still seeking volunteers for his so-called WHO COVID App Collective—interested engineers can sign up here and contribute to the open-source project on GitHub.

Version 1 of the app will contain only basic features, Kraft says, and its design is still in flux; his priority is to get the app into Google’s and Apple’s app stores as soon as possible. “Perfection is the enemy of the good,” he says. “We want to lay the groundwork for something that would be scalable for COVID—and for other pandemics in the future.”

When users install the app, they’ll first see WHO-approved information about how to stay safe (including guidelines on hand washing and social distancing). Then a chatbot-like interface will ask the user if they’re experiencing symptoms, walk them through a self-assessment, and direct them to a local site for testing or treatment if necessary. In the future, it might also tell people who need treatment which hospitals near them have available beds.

To offer personalized information, the app will ask the user questions regarding their age, location, and preferred language. Kraft says the app will initially offer information in the six official languages of the WHO, and may tailor the information to match the user’s age demographic. “We need to message things differently for baby boomers and millennials,” he says. 

The contact tracing function won’t be included in the first version of the app, but Kraft hopes to get it up and running soon. For the contact tracing, the app will likely rely on existing work by MIT’s Raskar, an associate professor at the MIT Media Lab. Raskar says he’s been talking with Kraft, and notes that they’ve known each other for a long time: “Anything he’s working on, I’ll work on,” Raskar tells IEEE Spectrum.

Raskar’s team just released a prototype of an app called Private Kit: Safe Paths that enables private location logging on the user’s phone, and cross-checks it against information provided by health authorities, testing sites, and hospitals. In future versions, the Safe Paths app will cross-check people’s location logs against those of infected people who have opted in to the service, but will do so in an encrypted fashion. 

Raskar’s team recently put out a whitepaper [PDF] discussing the difficulty and necessity of protecting personal privacy while creating apps to stop a pandemic. “How do you create a solution that doesn’t end up becoming a tool of the surveillance state? That’s the biggest challenge for us,” Raskar says. 

Build This 8-Bit Home Computer With Just 5 Chips

Par Matt Sarnoff
Illustration: James Provost

There’s something persistently appealing about 8-bit computing: You can put together a self-contained system that’s powerful enough to be user friendly but simple enough to build and program all by yourself. Most 8-bit machines built by hobbyists today are powered by a classic CPU from the heroic age of home computers in the 1980s, when millions of spare TVs were commandeered as displays. I’d built one myself, based on the Motorola 6809. I had tried to use as few chips as possible, yet I still needed 13 supporting ICs to handle things such as RAM or serial communications. I began to wonder: What if I ditched the classic CPU for something more modern yet still 8-bit? How low could I get the chip count?

The result was the Amethyst. Just like a classic home computer, it has an integrated keyboard and can generate audio and video. It also has a built-in high-level programming language for users to write their own programs. And it uses just six chips—an ATMEGA1284P CPU, a USB interface, and four simple integrated circuits.

The ATMEGA1284P (or 1284P), introduced around 2008, has 128 kilobytes of flash memory for program storage and 16 kB of RAM. It can run at up to 20 megahertz, comes with built-in serial-interface controllers, and has 32 digital input/output pins.

Thanks to the onboard memory and serial interfaces, I could eliminate a whole slew of supporting chips. I could generate basic audio directly by toggling an I/O pin on and off again at different frequencies to create tones, albeit with the characteristic harshness of a square wave. But what about generating an analog video signal? Surely that would require some dedicated hardware?

Then, toward the end of 2018, I came across the hack that Steve Wozniak used in the 1970s to give the Apple II its color-graphics capability. This hack was known as NTSC artifact color, and it relied on the fact that U.S. color TV broadcasting was itself a hack of sorts, one that dated back to the 1950s.

Originally, U.S. broadcast television was black and white only, using a fairly straightforward standard called NTSC (for National Television System Committee). Television cathode-ray tubes scanned a beam across the surface of a screen, row after row. The amplitude of the received video signal dictated the luminance of the beam at any given spot along a row. Then in 1953, NTSC was upgraded to support color television while remaining intelligible to existing black-and-white televisions.

Compatibility was achieved by encoding color information in the form of a high-frequency sinusoidal signal. The phase of this signal at a given point, relative to a reference signal (the “colorburst”) transmitted before each row began, determined the color’s underlying hue. The amplitude of the signal determined how saturated the color was. This high-frequency color signal was then added to the relatively low-frequency luminance signal to create so-called composite video, still used today as an input on many TVs and cheaper displays for maker projects.
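The composite encoding described above reduces to one short formula: the signal at any instant is the slow luminance value plus a subcarrier sinusoid whose amplitude carries saturation and whose phase (relative to the colorburst) carries hue. Here is a minimal sketch; the 3.579545 MHz subcarrier frequency is the NTSC standard, while the function and parameter names are ours:

```python
import math

# One sample of an NTSC-style composite video signal: low-frequency
# luminance plus a color subcarrier whose phase encodes hue and whose
# amplitude encodes saturation.

F_SC = 3.579545e6  # NTSC color subcarrier frequency, in hertz

def composite_sample(t, luma, saturation, hue_phase):
    """Composite signal value at time t (seconds)."""
    chroma = saturation * math.sin(2 * math.pi * F_SC * t + hue_phase)
    return luma + chroma
```

A black-and-white set effectively sees only `luma`, since the fast `chroma` wiggle averages out; a color set filters the sinusoid back out and compares its phase against the colorburst to recover the hue.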

Illustration: James Provost
TV Trickery: An analog composite color video signal, as used by U.S. televisions [top left], is compatible with black-and-white receivers because a high-frequency sinusoidal chrominance signal is superimposed on the luminance signal [dotted line] that determines the brightness of a TV scan line. Filtering circuits separate out the signals inside the television. The phase of the chrominance signal with regard to a reference “colorburst” signal determines the hue seen on screen. With a high enough bit rate, a digital signal [bottom left] will be separated out as if it were an analog signal, with different bit patterns producing different colors. In this example, with two bits to each pixel, six colors are possible [four shown], but a faster bit rate allows more colors.

To a black-and-white TV, the color signal looks like noise and is largely ignored. But a color TV can separate the color signal from the luminance signal with filtering circuitry.

In the 1970s, engineers realized that this filtering circuitry could be used to great advantage by consumer computers because it permitted a digital, square-wave signal to duplicate much of the effect of a composite analog signal. A stream of 0s sent by a computer to a television as the CRT scanned along a row would be interpreted by the TV as a constant low-analog voltage, representing black. All the 1s would be seen as a constant high voltage, producing pure white. But with a sufficiently fast bit rate, more-complex binary patterns would cause the high-frequency filtering circuits to produce a color signal. This trick allowed the Apple II to display up to 16 colors.
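Why a binary pattern turns into a color can be seen with a little signal arithmetic. If bits are clocked at, say, four per subcarrier cycle, a repeating pattern is a square-ish wave whose component at the subcarrier frequency has an amplitude (saturation) and a phase (hue); extracting that component is one bin of a discrete Fourier transform. This sketch uses a hypothetical four-bits-per-cycle pattern, not the Apple II’s exact timing:

```python
import cmath

# Extract the subcarrier-frequency component of a repeating bit pattern
# (one pattern repetition = one subcarrier cycle). Its magnitude acts
# like saturation and its phase like hue.

def chroma_component(pattern):
    """pattern: bits spanning one subcarrier cycle, e.g. (1, 1, 0, 0)."""
    n = len(pattern)
    c = sum(b * cmath.exp(-2j * cmath.pi * k / n) for k, b in enumerate(pattern))
    return abs(c) / n, cmath.phase(c)

# All-zeros and all-ones patterns have no subcarrier component, so they
# read as pure black or pure white. Rotating a pattern, e.g.
# (1, 1, 0, 0) -> (0, 1, 1, 0), shifts the phase, i.e. changes the hue.
```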

At first I thought to toggle an I/O pin very quickly to generate the video signal directly. I soon realized, however, that with my 1284P operating at a clock speed of 14.318 MHz, I would not be able to switch it fast enough to display more than four colors, because the built-in serial interfaces took two clock cycles to send a bit, limiting my rate to 7.159 MHz. (The Apple II used fast direct memory access to connect its external memory chip to the video output while its CPU was busy doing internal processing, but as my computer’s RAM is integrated into the chip, this approach wasn’t an option.) So I looked in my drawers and pulled out four 7400 chips—two multiplexers and two parallel-to-serial shift registers. I could set eight pins of the 1284P in parallel and send them simultaneously to the multiplexers and shift registers, which would convert them into a high-speed serial bitstream. In this way I can generate bits fast enough to produce some 215 distinct colors on screen. The cost is that keeping up with the video scan line absorbs a lot of computing capacity: Only about 25 percent of the CPU’s time is available for other tasks.

Illustration: James Provost
Compact Computer: The Amethyst is a single-board computer. It uses just six integrated circuits—a CPU, USB interface, and four 7400 chips, which are used to make 215-color video graphics possible. Keyboard switches are soldered directly to the board, which also supports audio and four serial I/O connections for peripherals like game controllers or storage devices. A built-in Forth virtual machine provides a programming environment.

Consequently, I needed a lightweight programming environment for users, which led me to choose Forth over the traditional Basic. Forth is an old language for embedded systems, and it has the nice feature of being both interactive and capable of efficient compilation of code. You can do a lot in a very small amount of space. Because the 1284P does not allow compiled machine code to be executed directly from its RAM, a user’s code is instead compiled to an intermediate bytecode. This bytecode is then fed as data to a virtual machine running from the 1284P’s flash memory. The virtual machine’s code was written in assembly code and hand-tuned to make it as fast as possible.
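The compile-to-bytecode, interpret-on-a-VM arrangement is easy to sketch. The Amethyst’s actual virtual machine is hand-tuned AVR assembly running from flash; this toy Python stack machine, with an invented instruction set, just shows the shape of the idea:

```python
# Toy stack-machine interpreter in the spirit of a Forth bytecode VM:
# integers are pushed as literals; string opcodes pop operands, compute,
# and push the result, exactly as Forth words operate on the data stack.

def run(bytecode):
    stack = []
    for op in bytecode:
        if isinstance(op, int):        # literal: push it
            stack.append(op)
        elif op == "+":
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
        elif op == "*":
            b, a = stack.pop(), stack.pop()
            stack.append(a * b)
        elif op == "dup":              # Forth's DUP: copy top of stack
            stack.append(stack[-1])
        else:
            raise ValueError(f"unknown opcode: {op!r}")
    return stack

# The Forth phrase "3 4 + dup *" (compute (3 + 4) squared) becomes:
program = [3, 4, "+", "dup", "*"]
# run(program) -> [49]
```

Interactivity falls out of the same loop: typing a phrase at the prompt compiles it to a short bytecode sequence and hands it straight to the interpreter.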

As an engineer working at Glowforge, I have access to advanced laser-cutting machines, so it was a simple matter to design and build a wooden case (an homage to the wood-grain finish of the Atari 2600). The mechanical keyboard switches are soldered directly onto the Amethyst’s single printed circuit board, which has one peculiarity: there is no space bar, just a space button located above the Enter key.

Complete schematics, PCB files, and system code are available in my GitHub repository, so you can build an Amethyst of your own or improve on my design. Can you shave a chip or two off the count?

This article appears in the April 2020 print issue as “8 Bits, 6 Chips.”

Coronavirus Pandemic: A Call to Action for the Robotics Community

By Erico Guizzo

When I reached Professor Guang-Zhong Yang on the phone last week, he was cooped up in a hotel room in Shanghai, where he had self-isolated after returning from a trip abroad. I wanted to hear from Yang, a widely respected figure in the robotics community, about the role that robots are playing in fighting the coronavirus pandemic. He’d been monitoring the situation from his room over the previous week, and during that time his only visitors were a hotel employee, who took his temperature twice a day, and a small wheeled robot, which delivered his meals autonomously.

An IEEE Fellow and founding editor of the journal Science Robotics, Yang is the former director and co-founder of the Hamlyn Centre for Robotic Surgery at Imperial College London. More recently, he became the founding dean of the Institute of Medical Robotics at Shanghai Jiao Tong University, often called the MIT of China. Yang wants to build the new institute into a robotics powerhouse, recruiting 500 faculty members and graduate students over the next three years to explore areas like surgical and rehabilitation robots, image-guided systems, and precision mechatronics.

“I ran a lot of the operations for the institute from my hotel room using Zoom,” he told me.

Yang is impressed by the different robotic systems being deployed as part of the COVID-19 response. There are robots checking patients for fever, robots disinfecting hospitals, and robots delivering medicine and food. But he thinks robotics can do even more.

Photo: Shanghai Jiao Tong University
Professor Guang-Zhong Yang, founding dean of the Institute of Medical Robotics at Shanghai Jiao Tong University.

“Robots can be really useful to help you manage this kind of situation, whether to minimize human-to-human contact or as a front-line tool you can use to help contain the outbreak,” he says. While the robots currently being used rely on technologies that are mature enough to be deployed, he argues that roboticists should work more closely with medical experts to develop new types of robots for fighting infectious diseases.

“What I fear is that there is really no sustained or coherent effort in developing these types of robots,” he says. “We need an orchestrated effort in the medical robotics community, and also the research community at large, to really look at this more seriously.”

Yang calls for a global effort to tackle the problem. “In terms of the way to move forward, I think we need to be more coordinated globally,” he says. “Because many of the challenges require that we work collectively to deal with them.”

Our full conversation, edited for clarity and length, is below.

IEEE Spectrum: How is the situation in Shanghai?

Guang-Zhong Yang: I came back to Shanghai about 10 days ago, via Hong Kong, so I’m now under self-imposed isolation in a hotel room for two weeks, just to be cautious. The general feeling in Shanghai is that it’s really calm and orderly. Everything seems well under control. And as you probably know, in recent days the number of new cases has been steadily dropping. So the main priority for the government is to restore normal routines, and also for companies to go back to work. Of course, people are still very cautious, and there are systematic checks in place. In my hotel, for instance, my temperature gets checked twice a day to make sure that all the people in the hotel are well.

Are most people staying inside? Are the streets empty?

No, the streets are not empty. In fact, in Minhang, next to Shanghai Jiao Tong University, things are going back to normal. Not at full capacity, but stores and restaurants are gradually opening. And people are thinking about which trips are essential and what they can do remotely. As you know, in China we have very good online ordering and delivery services, so people use them a lot more. I was really impressed by how the whole thing got under control.

Has Shanghai Jiao Tong University switched to online classes?

Yes. Since last week, the students have been attending online lectures. The university has 1,449 courses for undergraduates and 657 for graduate students. I participated in some of them. It’s really well run. You can have the typical format with a presenter teaching the class, but you can also have part of the lecture with the students divided into groups having discussions. Of course, what’s really affected is laboratory-based work, so we’ll need to wait a while longer to get back into action.

What do you think of the robots being used to help fight the outbreak?

I’ve seen reports showing a variety of robots being deployed. Disinfection robots that use UV light in hospitals. Drones being used for transporting samples. There’s a prototype robot, developed by the Chinese Academy of Sciences, to remotely collect oropharyngeal swabs from patients for testing, so a medical worker doesn’t have to directly swab the patient. In my hotel, there’s a robot that brings my meals to my door. This little robot can manage to get into the lift, go to your room, and call you to open the door. I’m a roboticist myself and I find it striking how well this robot works every time! [Laughs.]

Photo: UVD Robots
UVD Robots has shipped hundreds of ultraviolet-C disinfection robots like the one above to Chinese hospitals. 

After Japan’s Fukushima nuclear emergency, the robotics community realized that it needed to be better prepared. It seems that we’ve made progress with disaster-response robots, but what about dealing with pandemics?

I think that for events involving infectious diseases, like this coronavirus outbreak, when they happen, everybody realizes the importance of robots. The challenge is that at most research institutions, people are more concerned with specific research topics, and that’s indeed the work of a scientist—to dig deep into the scientific issues and solve those specific problems. But we also need to have a global view to deal with big challenges like this pandemic.

So I think what we need to do, starting now, is have a more systematic effort to make sure those robots can be deployed when we need them. We just need to recompose ourselves and identify which technologies are ready to be deployed and which key directions we need to pursue. There’s a lot we can do. It’s not too late. Because this is not going to disappear. We have to see the worst before it gets better.

So what should we do to be better prepared?

After a major crisis, when everything is under control, people’s priority is to go back to their normal routines. The last thing on people’s minds is, What should we do to prepare for the next crisis? And the thing is, you can’t predict when the next crisis will happen. So I think we need three levels of action, and it really has to be a global effort. One is at the government level, in particular funding agencies: how to make sure we can plan ahead and prepare for the worst.

Another level is the robotics community, including organizations like the IEEE: We need leadership to advocate for these issues and promote activities like robotics challenges. We see challenges for disasters, logistics, and drones, so how about a robotics challenge for infectious diseases? I was surprised, and a bit disappointed in myself, that we didn’t think about this before. So for the editorial board of Science Robotics, for instance, this will become an important topic for us to rethink.

And the third level is our interaction with front-line clinicians—our interaction with them needs to be stronger. We need to understand the requirements and not be obsessed with pure technologies, so we can ensure that our systems are effective, safe, and can be rapidly deployed. I think that if we can mobilize and coordinate our effort at all these three levels, that would be transformative. And we’ll be better prepared for the next crisis.

Are there projects taking place at the Institute of Medical Robotics that could help with this pandemic?

The institute has been in full operation for just over a year now. We have three main areas of research: The first is surgical robotics, which is my main area of research. The second area is in rehabilitation and assistive robots. The third area is hospital and laboratory automation. One important lesson that we learned from the coronavirus is that, if we can detect and intervene early, we have a better chance of containing it. And for other diseases, it’s the same. For cancer, early detection based on imaging and other sensing technologies, is critical. So that’s something we want to explore—how robotics, including technologies like laboratory automation, can help with early detection and intervention.

One area we are working on is automated intensive-care unit wards. The idea is to build negative-pressure ICU wards for infectious diseases equipped with robotic capabilities that can take care of certain critical care tasks. Some tasks could be performed remotely by medical personnel, while other tasks could be fully automated. A lot of the technologies that we already use in surgical robotics can be translated into this area. We’re hoping to work with other institutions and share our expertise to continue developing this further. Indeed, this technology is not just for emergency situations. It will also be useful for routine management of infectious disease patients. We really need to rethink how hospitals are organized in the future to avoid unnecessary exposure and cross-infection.

Photo: Shanghai Jiao Tong University
Shanghai Jiao Tong University’s Institute of Medical Robotics is researching areas like micro/nano systems, surgical and rehabilitation robotics, and human-robot interaction.

I’ve seen some recent headlines—“China’s tech fights back,” “Coronavirus is the first big test for futuristic tech”—many people expect technology to save the day.

When there’s a major crisis like this pandemic, in the general public’s mind, people want to find a magic cure that will solve all the problems. I completely understand that expectation. But technology can’t always do that, of course. What technology can do is help us be better prepared. For example, it’s clear that in the last few years self-navigating robots with localization and mapping have become a mature technology, so we should see more of those used for situations like this. I’d also like to see more technologies developed for front-line management of patients, like the robotic ICU I mentioned earlier. Another area is public transportation systems: Can they have an element of disease prevention, using technology to minimize the spread of diseases so that lockdowns are only imposed as a last resort?

And then there’s the problem of people being isolated. You probably saw that Italy has imposed a total lockdown. That could have a major psychological impact, particularly for people who are vulnerable and living alone. There is one area of robotics, called social robotics, that could play a part in this as well. I’ve been in this hotel room by myself for days now—I’m really starting to feel the isolation…

We should have done a Zoom call.

Yes, we should. [Laughs.] I guess this isolation, or quarantine for various people, also provides the opportunity for us to reflect on our lives, our work, our daily routines. That’s the silver lining that we may see from this crisis.

Photo: Unity Drive Innovation
Unity Drive, a startup spun out of Hong Kong University of Science and Technology, is deploying self-driving vehicles to carry out contactless deliveries in three Chinese cities.

While some people say we need more technology during emergencies like this, others worry that companies and governments will use things like cameras and facial recognition to increase surveillance of individuals.

A while ago we published an article in Science Robotics listing the 10 grand challenges for robotics. One of the grand challenges is concerned with legal and ethical issues, which include what you mentioned in your question. Respecting privacy, and also being sensitive about individual and citizens’ rights—these are very, very important. Because we must operate within these legal and ethical boundaries. We should not use technologies that intrude on people’s lives. You mentioned that some people say we don’t have enough technology, and others say we have too much. And I think both have a point. What we need to do is develop technologies that are appropriate to deploy in the right situation and for the right tasks.

Many researchers seem eager to help. What would you say to roboticists interested in helping fight this outbreak or prepare for the next one?

For medical robotics research, my experience is that for your technology to be effective, it has to be application oriented. You need to ensure that the end users, whether the clinicians who will use your robot or, in the case of assistive robots, the patients, are deeply involved in the development of the technology. And the second thing is really to think outside the box: how to develop radically different new technologies. Because robotics research is very hands on, there’s a tendency to adapt what’s readily available out there. For your technology to have a major impact, you need to fundamentally rethink your research and innovation, not just follow the waves.

For example, at our institute we’re investing a lot of effort in the development of micro- and nanosystems, and also new materials that could one day be used in robots. Because for microrobotic systems, we can’t rely on the more traditional approach of using motors and gears that we use in larger systems. So my suggestion is to work on technologies that not only have a deep science element but can also become part of a real-world application. Only then can we be sure to have strong technologies to deal with future crises.