Back in 2014, under the looming shadow of the end of Moore’s Law, IBM embarked on an ambitious, US $3 billion project dubbed “7-nm and Beyond”. The bold aim of that five-year research project was to see how computing would continue into the future as the physics of decreasing chip dimensions conspired against it.
Six years later, Moore’s Law isn’t much of a law anymore. The observation by Gordon Moore (and later the industry-wide adherence to that observation) that the number of transistors on a chip doubled roughly every two years seems now almost to be a quaint vestige of days gone by. But innovation in computing is still required, and the “7-nm and Beyond” project has helped meet that continuing need.
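Moore's observation is just compound doubling, and it's worth seeing how fast that compounds. As a back-of-the-envelope sketch (the starting count and time span here are illustrative, not figures from IBM):

```python
# Toy illustration of Moore's Law: transistor counts doubling every two years.
# The starting count and projection horizon are illustrative assumptions.

def transistor_count(start_count: int, years: int, doubling_period: float = 2.0) -> int:
    """Project a transistor count forward assuming a fixed doubling period."""
    return int(start_count * 2 ** (years / doubling_period))

# Over 20 years, ten doublings multiply the count roughly a thousandfold (1024x).
start = 1_000_000
print(transistor_count(start, 20))  # prints 1024000000
```

Ten doublings in two decades is the difference between a million transistors and a billion, which is why the slowdown of that cadence matters so much.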
“The search for new device architectures to enable the scaling of devices, and the search for new materials for performance differentiation will never end,” says Huiming Bu, Director at IBM’s Advanced Logic & Memory Technology Research, Semiconductor, and AI Hardware Group.
Although the chip industry may not feel as constrained by Moore’s Law as it has in the past, the “7-nm and Beyond” project has delivered important innovations even while some chip manufacturers have seemingly thrown up their hands in frustration at various points in recent years.
One example of this frustration was the decision two years ago by GlobalFoundries to suspend its 7-nanometer chip development.
Back in 2015, one year into its “7-nm and Beyond” project, IBM announced its first 7-nm test chip in which extreme-ultraviolet lithography (EUV), supplied by ASML, was a key enabling technology. While there have been growing pains in the use of EUV—resulting in the richest chip manufacturers being the only ones continuing on with the scaling down that it enables—it has since become essential not only for 7-nm nodes, but also for 5-nm nodes and beyond, according to Bu.
“Back in the 2014-2015 time window, the whole industry had a big question about the practical feasibility of EUV technology,” says Bu. “Now it's not a question. Now, EUV has become the mainstream enabler. The first-kind 7-nm work we delivered based on EUV back then helped to build the confidence and momentum towards EUV manufacturing in our industry.”
Of course, EUV has enabled 7-nm nodes, but the aim of IBM was to look beyond that. IBM believes that the foundational element of chips to enable the scaling beyond FinFET will be the nanosheet transistor, which some have suggested may even be the last step in Moore’s Law.
The nanosheet looks to be the replacement for the FinFET architecture, and is expected to make possible the transition from the 7-nm and 5-nm nodes to the 3-nm node. In a nanosheet field-effect transistor, current flows through multiple stacked sheets of silicon that are completely surrounded by the transistor gate. This design greatly reduces the amount of current that can leak during the off state, allowing more of the current to be used in driving the device when the switch is turned on.
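The electrostatic benefit of wrapping the gate all the way around the channel can be sketched with the standard subthreshold-leakage relation, where better gate control means a lower subthreshold swing and exponentially less off-state current. The swing and threshold values below are typical textbook figures, not IBM's measurements:

```python
# Toy subthreshold-leakage comparison: a lower subthreshold swing (SS) cuts
# off-state current exponentially. The SS and Vth values are illustrative,
# not measured figures from any specific FinFET or nanosheet device.

def off_current(i_at_vth: float, vth_mv: float, ss_mv_per_decade: float) -> float:
    """Off-state current relative to the current at threshold: I_off = I_vth * 10^(-Vth/SS)."""
    return i_at_vth * 10 ** (-vth_mv / ss_mv_per_decade)

vth = 300.0  # threshold voltage in mV (illustrative)
finfet_leak = off_current(1.0, vth, 70.0)     # FinFET-like swing, ~70 mV/decade
nanosheet_leak = off_current(1.0, vth, 65.0)  # gate-all-around, closer to the 60 mV/decade limit

print(f"leakage reduction: {finfet_leak / nanosheet_leak:.1f}x")
```

Even a modest 5 mV/decade improvement in gate control compounds through the exponent, which is the core argument for surrounding the channel on all sides.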
“In 2017, the industry had a question about what will be the new device structure beyond FinFET,” says Bu. “At this point, three years later, the whole industry is getting behind nanosheet technology as the next device structure after FinFET.”
Transistor devices have seen some key developments, but the “7-nm and Beyond” project also produced significant insights into how the wiring above all those transistors will be made going into the future.
“Part of our innovation has been to extend copper as far as possible,” says Daniel Edelstein, an IBM Fellow who works on silicon technology research and MRAM/BEOL process strategy. “The hard part, as always, has been simply patterning these extremely tiny and tall trenches and filling them without defects with copper.”
Despite the challenges with using copper, Edelstein doesn’t see the industry migrating away from it to more exotic materials in the near future. “Copper is certainly not at the end of its rope for what's being manufactured today,” says Edelstein.
He adds: “Several companies have indicated that they intend to continue using it. So I can't tell you exactly when it breaks. But we have seen that the so-called resistance crossover point keeps getting pushed farther into the future.”
While chip dimensions, architectures, and materials have driven many of the innovations of the “7-nm and Beyond” project, both Edelstein and Bu note that artificial intelligence (AI) is also playing a key role in how they are approaching the future of computing.
“With the onset of AI-type, brain-inspired computing and other kinds of non-digital computing, we're starting to develop, at the research level, additional devices—especially emerging memory devices,” says Edelstein.
Edelstein is referring to devices such as phase-change memory (or “memristors,” as some refer to them), which are thought of as analog computing devices.
The emergence of these new memory devices has revived thinking about potential applications over and above conventional data storage. Researchers are imagining new roles for the thirty-year-old magnetoresistive random-access memory (MRAM), which IBM has been working on since the technology’s debut.
“MRAM has finally had enough breakthroughs where it’s now not only manufacturable, but also approaching the kinds of requirements that it needs to achieve to be competitive with SRAM for system cache, which is kind of the holy grail in the end,” says Edelstein.
Evidence of this embedding of MRAM and other nonvolatile memories—including RRAM and phase-change memory—directly into the processor can be seen in the move last year by chip-equipment manufacturer Applied Materials to give its customers the tools for enabling this change.
The pursuit of new devices, new materials, and new computing architectures for better power-performance will continue, according to Bu. He also believes that the demand to integrate various components into a holistic computing system is starting to drive a whole new world of heterogeneous integration.
Bu adds: “Building these heterogeneous architecture systems is going to become a key in future computing. It is a new innovation strategy driven by the demands of AI.”
The PR2 has helped roboticists make an enormous amount of progress in mobile manipulation over a relatively short time. I mean, it’s been a decade already, but still—robots are hard, and giving a bunch of smart people access to a capable platform where they didn’t have to worry about hardware and could instead focus on doing interesting and useful things helped to establish a precedent for robotics research going forward.
Unfortunately, not everyone can afford an enormous US $400,000 robot, and even if they could, PR2s are getting very close to the end of their lives. There are other mobile manipulators out there taking the place of the PR2, but so far, size and cost have largely restricted them to research labs. Lots of good research is being done, but it’s getting to the point where folks want to take the next step: making mobile manipulators real-world useful.
Today, a company called Hello Robot is announcing a new mobile manipulator called the Stretch RE1. With offices in the San Francisco Bay Area and in Atlanta, Ga., Hello Robot is led by Aaron Edsinger and Charlie Kemp, and by combining decades of experience in industry and academia they’ve managed to come up with a robot that’s small, lightweight, capable, and affordable, all at the same time. For now, it’s a research platform, but eventually, its creators hope that it will be able to come into our homes and take care of us when we need it to.
To understand the concept behind Stretch, it’s worth taking a brief look back at what Edsinger and Kemp have been up to for the past 10 years. Edsinger co-founded Meka Robotics in 2007, which built expensive, high performance humanoid arms, torsos, and heads for the research market. Meka was notable for being the first robotics company (as far as we know) to sell robot arms that used series elastic actuators, and the company worked extensively with Georgia Tech researchers. In 2011, Edsinger was one of the co-founders of Redwood Robotics (along with folks from SRI and Willow Garage), which was going to develop some kind of secret and amazing new robot arm before Google swallowed it in late 2013. At the same time, Google also acquired Meka and a bunch of other robotics companies, and Edsinger ended up at Google as one of the directors of its robotics program, until he left to co-found Hello Robot in 2017.
Meanwhile, since 2007 Kemp has been a robotics professor at Georgia Tech, where he runs the Healthcare Robotics Lab. Kemp’s lab was one of the 11 PR2 beta sites, giving him early experience with a ginormous mobile manipulator. Much of the research that Kemp has spent the last decade on involves robots providing assistance to untrained users, often through direct physical contact, and frequently either in their own homes or in a home environment. We should mention that the Georgia Tech PR2 is still going, most recently doing some clever material classification work in a paper for IROS later this year.
So with all that in mind, where’d Hello Robot come from? As it turns out, both Edsinger and Kemp were in Rodney Brooks’ group at MIT, so it’s perhaps not surprising that they share some of the same philosophies about what robots should be and what they should be used for. After collaborating on a variety of projects over the years, in 2017 Edsinger was thinking about his next step after Google when Kemp stopped by to show off some video of a new robot prototype that he’d been working on—the prototype for Stretch. “As soon as I saw it, I knew that was exactly the kind of thing I wanted to be working on,” Edsinger told us. “I’d become frustrated with the complexity of the robots being built to do manipulation in home environments and around people, and it solved a lot of problems in an elegant way.”
For Kemp, Stretch is an attempt to get everything he’s been teaching his robots out of his lab at Georgia Tech and into the world where it can actually be helpful to people. “Right from the beginning, we were trying to take our robots out to real homes and interact with real people,” says Kemp. Georgia Tech’s PR2, for example, worked extensively with Henry and Jane Evans, helping Henry (a quadriplegic) regain some of the bodily autonomy he had lost. With the assistance of the PR2, Henry was able to keep himself comfortable for hours without needing a human caregiver to be constantly with him. “I felt like I was making a commitment in some ways to some of the people I was working with,” Kemp told us. “But 10 years later, I was like, where are these things? I found that incredibly frustrating. Stretch is an effort to try to push things forward.”
One way to put Stretch in context is to think of it almost as a reaction to the kitchen sink philosophy of the PR2. Where the PR2 was designed to be all the robot anyone could ever need (plus plenty of robot that nobody really needed) embodied in a piece of hardware that weighs 225 kilograms and cost nearly half a million dollars, Stretch is completely focused on being just the robot that is actually necessary in a form factor that’s both much smaller and affordable. The entire robot weighs a mere 23 kg in a footprint that’s just a 34 cm square. As you can see from the video, it’s small enough (and safe enough) that it can be moved by a child. The cost? At $17,950 apiece—or a bit less if you buy a bunch at once—Stretch costs a fraction of what other mobile manipulators sell for.
It might not seem like size or weight should be that big of an issue, but it very much is, explains Maya Cakmak, a robotics professor at the University of Washington, in Seattle. Cakmak worked with PR2 and Henry Evans when she was at Willow Garage, and currently has access to both a PR2 and a Fetch research robot. “When I think about my long-term research vision, I want to deploy service robots in real homes,” Cakmak told us. Unfortunately, it’s the robots themselves that have been preventing her from doing this—both the Fetch and the PR2 are large enough that moving them anywhere requires a truck and a lift, which also limits the homes they can be used in. “For me, I felt immediately that Stretch is very different, and it makes a lot of sense,” she says. “It’s safe and lightweight, you can probably put it in the backseat of a car.” For Cakmak, Stretch’s size is the difference between being able to easily take a robot to the places she wants to do research in, and not. And cost is a factor as well, since a cheaper robot means more access for her students. “I got my refurbished PR2 for $180,000,” Cakmak says. “For that, with Stretch I could have 10!”
Of course, a portable robot doesn’t do you any good if the robot itself isn’t sophisticated enough to do what you need it to do. Stretch is certainly a compromise in functionality in the interest of small size and low cost, but it’s a compromise that’s been carefully thought out, based on the experience that Edsinger has building robots and the experience that Kemp has operating robots in homes. For example, most mobile manipulators are essentially multi-degree-of-freedom arms on mobile bases. Stretch instead leverages its wheeled base to move its arm in the horizontal plane, which (most of the time) works just as well as an extra DoF or two on the arm while saving substantially on weight and cost. Similarly, Stretch relies almost entirely on one sensor, an Intel RealSense D435i on a pan-tilt head that gives it a huge range of motion. The RealSense serves as a navigation camera, a manipulation camera, a 3D mapping system, and more. It’s not going to be quite as good for a task that might involve fine manipulation, but most of the time it’s totally workable and you’re saving on cost and complexity.
Stretch has been relentlessly optimized to be the absolutely minimum robot to do mobile manipulation in a home or workplace environment. In practice, this meant figuring out exactly what it was absolutely necessary for Stretch to be able to do. With an emphasis on manipulation, that meant defining the workspace of the robot, or what areas it’s able to usefully reach. “That was one thing we really had to push hard on,” says Edsinger. “Reachability.” He explains that reachability and a small mobile base tend not to go together, because robot arms (which tend to weigh a lot) can cause a small base to tip, especially if they’re moving while holding a payload. At the same time, Stretch needed to be able to access both countertops and the floor, while being able to reach out far enough to hand people things without having to be right next to them. To come up with something that could meet all those requirements, Edsinger and Kemp set out to reinvent the robot arm.
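The tipping constraint Edsinger describes can be sanity-checked with a simple static moment balance about the edge of the base. The robot's mass and footprint come from this article; the center-of-mass location and the payload's reach are our own illustrative assumptions:

```python
# Static tip-over sketch: the robot stays upright while the restoring moment of
# its own weight (about the base edge) exceeds the overturning moment of an
# extended payload. Mass and footprint are from the article; the COM-at-center
# assumption and the payload reach are illustrative.

G = 9.81  # gravitational acceleration, m/s^2

def tips_over(robot_mass_kg: float, base_half_width_m: float,
              payload_kg: float, payload_reach_m: float) -> bool:
    """True if the payload's moment about the base edge exceeds the robot's restoring moment."""
    restoring = robot_mass_kg * G * base_half_width_m          # COM assumed at base center
    overturning = payload_kg * G * (payload_reach_m - base_half_width_m)
    return overturning > restoring

# 23 kg robot on a 34 cm square base, 1.5 kg payload held 0.7 m from the base center.
print(tips_over(23.0, 0.17, 1.5, 0.7))  # prints False: a healthy stability margin
```

With these numbers the restoring moment is roughly five times the overturning one, which suggests why a low-mass telescoping arm (rather than a heavy articulated one) is what makes such a small base workable.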
The design they came up with is rather ingenious in its simplicity and how well it works. Edsinger explains that the arm consists of five telescoping links: one fixed and four moving. They are constructed of custom carbon fiber, and are driven by a single motor, which is attached to the robot’s vertical pole. The strong, lightweight structure allows the arm to extend over half a meter and hold up to 1.5 kg. Although the company has a patent pending for the design, Edsinger declined to say whether the links are driven by a belt, cables, or gears. “We don’t want to disclose too much of the secret sauce [with regard to] the drive mechanism.” He added that the arm was “one of the most significant engineering challenges on the robot in terms of getting the desired reach, compactness, precision, smoothness, force sensitivity, and low cost to all happily coexist.”
Another interesting feature of Stretch is its interface with the world—its gripper. There are countless different gripper designs out there, each and every one of which is the best at gripping some particular subset of things. But making a generalized gripper for all of the stuff that you’d find in a home is exceptionally difficult. Ideally, you’d want some sort of massive experimental test program where thousands and thousands of people test out different gripper designs in their homes for long periods of time and then tell you which ones work best. Obviously, that’s impractical for a robotics startup, but Kemp realized that someone else was already running the study for him: Amazon.
“I had this idea that there are these assistive grabbers that people with disabilities use to grasp objects in the real world,” he told us. Kemp went on Amazon’s website and looked at the top 10 grabbers and the reviews from thousands of users. He then bought a bunch of different ones and started testing them. “This one [Stretch’s gripper], I almost didn’t order it, it was such a weird looking thing,” he says. “But it had great reviews on Amazon, and oh my gosh, it just blew away the other grabbers. And I was like, that’s it. It just works.”
As with any robot intended to be useful outside of a structured environment, hardware is only part of the story, and arguably not even the most important part. In order for Stretch to be able to operate out from under the supervision of a skilled roboticist, it has to be either easy to control, or autonomous. Ideally, it’s both, and that’s what Hello Robot is working towards, although things didn’t start out that way, Kemp explains. “From a minimalist standpoint, we began with the notion that this would be a teleoperated robot. But in the end, you just don’t get the real power of the robot that way, because you’re tied to a person doing stuff. As much as we fought it, autonomy really is a big part of the future for this kind of system.”
Here’s a look at some of Stretch’s teleoperated capabilities. We’re told that Stretch is very easy to get going right out of the box, although this teleoperation video from Hello Robot looks like it’s got a skilled and experienced user in the loop:
For such a low-cost platform, the autonomy (even at this early stage) is particularly impressive:
Since it’s not entirely clear from the video exactly what’s autonomous, here’s a brief summary of a couple of the more complex behaviors that Kemp sent us:
Many of these autonomous capabilities come directly from Kemp’s lab, and the demo code is available for anyone to use. (Hello Robot says all of Stretch’s software is open source.)
As of right now, Stretch is very much a research platform. You’re going to see it in research labs doing research things, and hopefully in homes and commercial spaces as well, but still under the supervision of professional roboticists. As you may have guessed, though, Hello Robot’s vision is a bit broader than that. “The impact we want to have is through robots that are helpful to people in society,” Edsinger says. “We think primarily in the home context, but it could be in healthcare, or in other places. But we really want to have our robots be impactful, and useful. To us, useful is exciting.” Adds Kemp: “I have a personal bias, but we’d really like this technology to benefit older adults and caregivers. Rather than creating a specialized assistive device, we want to eventually create an inexpensive consumer device for everyone that does lots of things.”
Neither Edsinger nor Kemp would say much more on this for now, and they were very explicit about why—they’re being deliberately cautious about raising expectations, having seen what’s happened to some other robotics companies over the past few years. Without VC funding (Hello Robot is currently bootstrapping itself into existence), Stretch is being sold entirely on its own merits. So far, it seems to be working. Stretch robots are already in a half dozen research labs, and we expect that with today’s announcement, we’ll start seeing them much more frequently.
This is a guest post. The views expressed here are solely those of the authors and do not represent positions of IEEE Spectrum or the IEEE.
Thanks to Moore’s Law, the number of transistors in our computing devices has doubled every two years, driving continued growth in computer speed and capability. Meanwhile, Wirth’s Law holds that software is getting slower more rapidly than hardware is getting faster. The net result is that both hardware and software are becoming more complex. With this complexity, the number of discovered software vulnerabilities is increasing every year; there were over 17,000 vulnerabilities reported last year alone. We at DARPA’s System Security Integration Through Hardware and Firmware (SSITH) program argue that the solution lies not in software patches but in rethinking hardware architecture.
In March 2020, MITRE released version 4.0 of its Common Weakness Enumeration (CWE) list, which catalogues weaknesses in computer systems. For the first time, it included categories of hardware vulnerabilities. Among them are Rowhammer, Meltdown/Spectre, CacheOut, and LVI, all of which are becoming more prevalent. In fact, a reported 70 percent of cyberattacks are the result of memory safety issues [pdf] such as buffer overflow attacks—a category of software exploit that takes advantage of hardware’s inherent “gullibility.” These software exploitations of hardware vulnerabilities affect not only the computer systems we use at home, at work, and in the cloud, but also the embedded computers we are becoming increasingly reliant on within Internet-of-Things (IoT) devices.
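The “gullibility” at issue is that conventional hardware will happily carry out an out-of-bounds write. Real buffer overflows happen in memory-unsafe languages like C, not in Python; the sketch below only simulates a flat memory region (a simplified model, not any real ABI) to show the mechanism by which an unchecked copy hijacks control flow:

```python
# Toy simulation of a stack buffer overflow: an unchecked copy into an 8-byte
# buffer clobbers the adjacent "saved return address," redirecting execution.
# The memory layout is a simplified teaching model, not a real stack frame.

memory = bytearray(16)
BUF_START, BUF_SIZE = 0, 8
RET_ADDR_OFFSET = 8  # return address stored immediately after the buffer
memory[RET_ADDR_OFFSET:RET_ADDR_OFFSET + 8] = (0x4000).to_bytes(8, "little")

def unchecked_copy(src: bytes) -> None:
    """Like C's strcpy: copies with no bounds check against BUF_SIZE."""
    memory[BUF_START:BUF_START + len(src)] = src

# 16 bytes written into an 8-byte buffer: padding plus an attacker-chosen address.
unchecked_copy(b"A" * 12 + (0x1337).to_bytes(4, "little"))

ret = int.from_bytes(memory[RET_ADDR_OFFSET:RET_ADDR_OFFSET + 8], "little")
print(hex(ret))  # no longer 0x4000: the attacker now controls where execution resumes
```

Hardware defenses of the kind SSITH pursues aim to make the machine itself refuse writes like this, rather than trusting every patched program to check its own bounds.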
As 5G and IoT proliferation sweeps across the planet, businesses and consumers are benefiting greatly from increased connectivity. However, this connectivity is also introducing greater risks and security concerns than ever before. Gartner forecasts that there will be 5.81 billion IoT endpoints this year, and IDC estimates the number of IoT devices will grow to 41.6 billion in 2025. Despite these staggering statistics, IoT is still in its infancy. I liken it to the Wild West, where companies come and go, regulations and standards are undefined, and security is often an afterthought. This lawlessness can have significant consequences, as we saw in 2016 when the Mirai botnet attacked the Domain Name System (DNS) provider Dyn. The attack exploited IoT devices like home routers, security cameras, and air-quality monitors to mount a distributed denial-of-service attack that prevented users from accessing major internet platforms and services in the United States and Europe.
Today, the security research community is able to identify many of these cyberattacks quickly, and solutions are distributed to patch the exploited software. These solutions are applied the same way a doctor prescribes medicine to treat a disease. As new diseases are discovered, new medicines must be developed and dispensed. Security researchers are similarly developing new software patches to address newly discovered vulnerabilities. We call this the “patch and pray” mentality.
Every time a new software vulnerability that exploits hardware is identified, a new software patch is issued. However, these patches only address the software layer and do not actually “treat” the underlying problem in the hardware, leaving it open to the creation of new exploits. In the medical field, this type of treatment regime is expensive and doesn’t cure the disease. In recent years, physicians have been advocating preventive medicine to treat the root causes of chronic diseases. Similarly, we need to adapt and find a better way to protect our computer systems.
Nowadays, embedded computers use multiple pieces of free software or open source utilities that are maintained and updated by the open source community. Yet many such computers—with applications in sectors such as Industry 4.0, medical devices, and automotive—are rarely if ever provided with software updates; they just continue to run old versions with known vulnerabilities. Even though these devices may use open source components, the update cycle is slow because each device must be requalified to make sure that changes to the kernel or drivers do not break the system.
Requalifying a device is expensive and even more costly when a new version of an operating system is involved. Often this is not even possible, since many companies outsource part or all of the development of their underlying hardware and software platforms in the form of licensed intellectual property (IP). These third-party components are usually licensed for a prebuilt function or as binary blobs and black boxes. The original equipment manufacturer (OEM) cannot modify these proprietary software components without additional licenses.
The net result is that individual third-party IP components are often not updated and only support certain versions of an operating system and software stack, further preventing the device that uses them from being updated. Additionally, the cost of supporting hardware devices is so large that many companies outsource technical support and device management to third-party companies who were not involved with the original development. This provides another barrier to updates; bugs can go unnoticed or unreported back to the development team. It’s also possible that the original team might no longer exist or might have moved on to its next project.
Because of these issues, protection from malware often requires a hardware upgrade. Take, for example, the cell phone market. Updates are often slow or nonexistent if you are not using one of the major brands. The market leaders are able to provide updates because they have tight control of their supply chains and enjoy sales volume sufficient to recoup their costs. Even then, they keep this up for only a few years before the consumer is forced to upgrade. In between these hardware updates, software updates are employed in the form of the “patch and pray” approach.
DARPA’s System Security Integration Through Hardware and Firmware (SSITH) program seeks to break this cycle of vulnerability exploitation by developing hardware security architectures that protect systems against entire classes of the hardware vulnerabilities these software exploits attack. SSITH’s philosophy: By treating the problem at its root—the hardware—it can end the need for continual “patch and pray” cycles.
Working with the National Institute of Standards and Technology, we have grouped the MITRE CWE database of vulnerabilities into seven hardware classes. Our research teams have been developing novel methods to stop buffer errors, privilege escalations, resource management attacks, information leakage attacks, numeric errors, code injection attacks, and cryptographic attacks. This approach has shown promising results with minimal impact to power, performance, chip area, and software compatibility. These architectural techniques can be incorporated into the entire range of computer hardware and scale from IoT endpoints to mobile phones to advanced servers and, ultimately, to supercomputers.
One of the challenges when developing secure hardware is quantifying performance. Since there are no agreed upon standards for doing this, SSITH has developed a security evaluation tool to analyze hardware architectures. This tool quantifies the impacts of security on performance, area, and power consumption while using a battery of synthetic software tests to benchmark the hardware designs for security coverage.
To help further mature the SSITH hardware designs and the security benchmark software, DARPA is conducting its first bug bounty program, entitled Finding Exploits to Thwart Tampering (FETT). Run in partnership with the Department of Defense’s Defense Digital Service and the crowdsourced security company Synack, FETT aims to take a crowdsourced red-team approach to test and analyze the initial versions of the SSITH technology. From July to September 2020, members of the Synack Red Team will use their best techniques to attack and stress test this technology. By addressing any discovered weaknesses and vulnerabilities, the SSITH research teams will be able to further harden their novel defenses while making computer hardware safer for everyone.
About the Author:
Keith Rebello is program manager at DARPA’s Microsystems Technology Office.
To date, the fund has received more than US$27,000 from individual donors and philanthropic organizations worldwide.
Here is a selection of activities that have been funded:
• The IEEE Humanitarian Activities Committee and the IEEE Special Interest Group on Humanitarian Technology will receive a $5,000 contribution to the groups’ IEEE SIGHT #COVID19 special-project funding. This initiative is awarding grants to IEEE volunteer-led projects that could immediately impact the fight against the coronavirus and its effects.
“IEEE HAC and IEEE SIGHT are grateful to the IEEE Foundation donors who are making it possible for grassroots IEEE volunteers to combat COVID-19 through innovative solutions in their own communities,” says IEEE Senior Member Sampath Veeraraghavan, chair of the IEEE SIGHT steering committee.
• IEEE Spectrum will receive $10,000 to support its IEEE COVID-19 News and Resources hub. The hub is helping drive COVID-19 innovation through collaboration and sharing of knowledge, by serving as a central location for articles and IEEE resources that focus on the pandemic.
“The hub provides valuable COVID-19 news and information to IEEE members and the wider technology community,” says Susan Hassler, editor in chief of IEEE Spectrum. “To date, 1 million unique visitors have used the content the hub provides. And thanks to donors’ generous support, we are also planning a special print report, ‘Preparing for the Next Pandemic,’ which will appear in the October issue.”
• IEEE Technical Activities will receive a $5,000 contribution to IEEE DataPort’s COVID-19 data competition, which is expected to launch later this year. The IEEE DataPort platform enables users to store, search, access, and manage standard and open-access datasets. The competition will ask contestants to analyze data on the platform with the goal of providing insights into the pandemic. The donation will be used to fund the top prize.
“The competition will engage researchers and technical experts from across the globe, with the goal of yielding data analyses that can provide benefits to all who are seeking to mitigate the impact of COVID-19 on society,” says IEEE Senior Member David Belanger, chair of the IEEE DataPort steering committee.
• IEEE Educational Activities and the IEEE Education Society will receive $5,000 to support the next installment of their online event for university faculty members around the world, Effective Remote Instruction: Reimagining the Engineering Student Experience. This free event, running from 27 to 31 July, will equip instructors with research-driven information that can help them offer effective, remote education.
“Thanks to this grant, we will be able to provide this high-quality event, taught by experts at no cost, potentially impacting thousands of students globally during the course of the pandemic and beyond,” says IEEE Senior Member Stephen Phillips, vice president of the IEEE Educational Activities board of directors.
• The IEEE Foundation Staff Running Team will receive $1,939. The employee team was scheduled to participate in April in the Unite Half Marathon and 8K at Rutgers University in New Brunswick, N.J., but due to the pandemic, the running event was held virtually. The team’s goal had been to raise $8,000, most of which was to be donated to IEEE Smart Village, but due to the state’s mandate for people to stay home, the team fell short of its goal. The IEEE Foundation is making up the difference.
Donations to the IEEE Foundation COVID-19 Response Fund are still being accepted.
Karen Kaufman is senior manager of communications for the IEEE Foundation.
Video Friday is your weekly selection of awesome robotics videos, collected by your Automaton bloggers. We’ll also be posting a weekly calendar of upcoming robotics events for the next few months; here’s what we have so far (send us your events!):
Let us know if you have suggestions for next week, and enjoy today’s videos.
Evidently, the folks at Unitree were paying attention to last week’s Video Friday.
[ Unitree ]
RoboSoft 2020 was a virtual conference this year (along with everything else), but they still held a soft robots contest, and here are four short vids—you can watch the rest of them here.
[ RoboSoft 2020 ]
I am now a Hawks fan. GO HAWKS!
Scientists at the University of Liverpool have developed a fully autonomous mobile robot to assist them in their research. Using a type of AI, the robot has been designed to work uninterrupted for weeks at a time, allowing it to analyse data and make decisions on what to do next. Using a flexible arm with customised gripper it can be calibrated to interact with most standard lab equipment and machinery as well as navigate safely around human co-workers and obstacles.
[ Nature ]
Oregon State’s Cassie has been on break for a couple of months, but it’s back in the lab and moving alarmingly quickly.
[ DRL ]
The current situation linked to COVID-19 sadly led to the postponement of this year’s RoboCup 2020 in Bordeaux. As an official sponsor of The RoboCup, SoftBank Robotics wanted to take this opportunity to thank all RoboCupers and The RoboCup Federation for their support these past 13 years. We invite you to take a look at NAO’s adventure at The RoboCup as the official robot of the Standard Platform League. See you in Bordeaux in 2021!
[ RoboCup 2021 ]
Miniature SAW robot crawling inside the intestines of a pig. You’re welcome.
[ Zarrouk Lab ]
The video demonstrates fast autonomous flight experiments in cluttered unknown environments, with the support of a robust and perception-aware replanning framework called RAPTOR. The associated paper is submitted to TRO.
[ HKUST ]
Since we haven’t gotten autonomy quite right yet, there’s a lot of telepresence going on for robots that operate in public spaces. Usually, you’ve got one remote human managing multiple robots, so it would be nice to make that interface a little more friendly, right?
[ HCI Lab ]
Arguable whether or not this is a robot, but it’s cool enough to spend a minute watching.
[ Ishikawa Lab ]
Communication is critical to collaboration; however, too much of it can degrade performance. Motivated by the need for effective use of a robot’s communication modalities, in this work, we present a computational framework that decides if, when, and what to communicate during human-robot collaboration.
Robotiq has released the next generation of the grippers for collaborative robots: the 2F-85 and 2F-140. Both models gain greater robustness, safety, and customizability while retaining the same key benefits that have inspired thousands of manufacturers to choose them since their launch 6 years ago.
[ Robotiq ]
ANYmal C, the autonomous legged robot designed for challenging industrial environments, provides the mobility, autonomy, and inspection intelligence to enable safe and efficient inspection operations. In this virtual showcase, discover how ANYmal climbs stairs, recovers from a fall, performs an autonomous mission while avoiding obstacles, docks to charge by itself, digitizes analog sensors, and monitors the environment.
[ ANYbotics ]
At Waymo, we are committed to addressing inequality, and we believe listening is a critical first step toward driving positive change. Earlier this year, five Waymonauts sat down to share their thoughts on equity at work, challenging the status quo, and more. This is what they had to say.
[ Waymo ]
Nice of ABB to take in old robots and upgrade them to turn them into new robots again. Robots forever!
[ ABB ]
It’s nice seeing the progress being made by GITAI, one of the teams competing in the ANA Avatar XPRIZE Challenge, and also meet the humans behind the robots.
One more talk from the ICRA Legged Robotics Workshop: Jingyu Liu from DeepRobotics and Qiuguo Zhu from Zhejiang University.
[ Deep Robotics ]
Will the amateur airwaves fall silent? Since the dawn of radio, amateur operators—hams—have transmitted on tenaciously guarded slices of spectrum. Electronic engineering has benefited tremendously from their activity, from the level of the individual engineer to the entire field. But the rise of the Internet in the 1990s, with its ability to easily connect billions of people, captured the attention of many potential hams. Now, with time taking its toll on the ranks of operators, new technologies offer opportunities to revitalize amateur radio, even if in a form that previous generations might not recognize.
The number of U.S. amateur licenses has held at an anemic 1 percent annual growth for the past few years, with about 7,000 new licensees added every year for a total of 755,430 in 2018. The U.S. Federal Communications Commission doesn’t track demographic data of operators, but anecdotally, white men in their 60s and 70s make up much of the population. As these baby boomers age out, the fear is that there are too few young people to sustain the hobby.
“It’s the $60,000 question: How do we get the kids involved?” says Howard Michel, former CEO of the American Radio Relay League (ARRL). (Since speaking with IEEE Spectrum, Michel has left the ARRL. A permanent replacement has not yet been appointed.)
This question of how to attract younger operators also reveals deep divides in the ham community about the future of amateur radio. Like any large population, ham enthusiasts are no monolith; their opinions and outlooks on the decades to come vary widely. And emerging digital technologies are exacerbating these divides: Some hams see them as the future of amateur radio, while others grouse that they are eviscerating some of the best things about it.
No matter where they land on these battle lines, however, everyone understands one fact. The world is changing; the amount of spectrum is not. And it will be hard to argue that spectrum reserved for amateur use and experimentation should not be sold off to commercial users if hardly any amateurs are taking advantage of it.
Before we look to the future, let’s examine the current state of play. In the United States, the ARRL, as the national association for hams, is at the forefront, and with more than 160,000 members it is the largest group of radio amateurs in the world. The 106-year-old organization offers educational courses for hams; holds contests where operators compete on the basis of, say, making the most long-distance contacts in 48 hours; trains emergency communicators for disasters; lobbies to protect amateur radio’s spectrum allocation; and more.
Michel led the ARRL between October 2018 and January 2020, and he easily fits the profile of the “average” American ham: The 66-year-old from Dartmouth, Mass., credits his career in electrical and computer engineering to an early interest in amateur radio. He received his call sign, WB2ITX, 50 years ago and has loved the hobby ever since.
“When our president goes around to speak to groups, he’ll ask, ‘How many people here are under 20 [years old]?’ In a group of 100 people, he might get one raising their hand,” Michel says.
ARRL does sponsor some child-centric activities. The group runs twice-annual Kids Day events, fosters contacts with school clubs across the country, and publishes resources for teachers to lead radio-centric classroom activities. But Michel readily admits “we don’t have the resources to go out to middle schools”—which are key for piquing children’s interest.
Sustained interest is essential because potential hams must clear a particular barrier before they can take to the airwaves: a licensing exam. Licensing requirements vary—in the United States no license is required to listen to ham radio signals—but every country requires operators to demonstrate some technical knowledge and an understanding of the relevant regulations before they can get a registered call sign and begin transmitting.
For those younger people who are drawn to ham radio, up to those in their 30s and 40s, the primary motivating factor is different from that of their predecessors. With the Internet and social media services like WhatsApp and Facebook, they don’t need a transceiver to talk with someone halfway around the world (a big attraction in the days before email and cheap long-distance phone calls). Instead, many are interested in the capacity for public service, such as providing communications in the wake of a disaster, or event comms for activities like city marathons.
“There’s something about this post-9/11 group, having grown up with technology and having seen the impact of climate change,” Michel says. “They see how fragile cellphone infrastructure can be. What we need to do is convince them there’s more than getting licensed and putting a radio in your drawer and waiting for the end of the world.”
The future lies in operators like Dhruv Rebba (KC9ZJX), who won Amateur Radio Newsline’s 2019 Young Ham of the Year award. He’s the 15-year-old son of immigrants from India and a sophomore at Normal Community High School in Illinois, where he also runs varsity cross-country and is active in the Future Business Leaders of America and robotics clubs. And he’s most interested in using amateur radio bands to communicate with astronauts in space.
Rebba earned his technician class license when he was 9, after having visited the annual Dayton Hamvention with his father. (In the United States, there are currently three levels of amateur radio license, issued after completing a written exam for each—technician, general, and extra. Higher levels give operators access to more radio spectrum.)
“My dad had kind of just brought me along, but then I saw all the booths and the stalls and the Morse code, and I thought it was really cool,” Rebba says. “It was something my friends weren’t doing.”
He joined the Central Illinois Radio Club of Bloomington, experimented with making radio contacts, participated in ARRL’s annual Field Days, and volunteered at the communications booths at local races.
But then Rebba found a way to combine ham radio with his passion for space: He learned about the Amateur Radio on the International Space Station (ARISS) program, managed by an international consortium of amateur radio organizations, which allows students to apply to speak directly with crew members onboard the ISS. (There is also an automated digital transponder on the ISS that allows hams to ping the station as it orbits.)
Rebba rallied his principal, science teacher, and classmates at Chiddix Junior High, and on 23 October 2017, they made contact with astronaut Joe Acaba (KE5DAR). For Rebba, who served as lead control operator, it was a crystallizing moment.
“The younger generation would be more interested in emergency communications and the space aspect, I think. We want to be making an impact,” Rebba says. “The hobby aspect is great, but a lot of my friends would argue it’s quite easy to talk to people overseas with texting and everything, so it’s kind of lost its magic.”
That statement might break the hearts of some of the more experienced hams recalling their tinkering time in their childhood basements. But some older operators welcome the change.
Take Bob Heil (K9EID), the famed sound engineer who created touring systems and audio equipment for acts including the Who, the Grateful Dead, and Peter Frampton. His company Heil Sound, in Fairview Heights, Ill., also manufactures amateur radio technology.
“I’d say wake up and smell the roses and see what ham radio is doing for emergencies!” Heil says cheerfully. “Dhruv and all of these kids are doing incredible things. They love that they can plug a kit the size of a cigar box into a computer and the screen becomes a ham radio…. It’s all getting mixed together and it’s wonderful.”
But there are other hams who think that the amateur radio community needs to be much more actively courting change if it is to survive. Sterling Mann (N0SSC), himself a millennial at age 27, wrote on his blog that “Millennials Are Killing Ham Radio.”
It’s a clickbait title, Mann admits: His blog post focuses on the challenge of balancing support for the dominant, graying ham population while pulling in younger people too. “The target demographic of every single amateur radio show, podcast, club, media outlet, society, magazine, livestream, or otherwise, is not young people,” he wrote. To capture the interest of young people, he urges that ham radio give up its century-long focus on person-to-person contacts in favor of activities where human to machine, or machine to machine, communication is the focus.
These differing interests are manifesting in something of an analog-to-digital technological divide. As Spectrum reported in July 2019, one of the key debates in ham radio is its main function in the future: Is it a social hobby? A utility to deliver data traffic? And who gets to decide?
Those questions have no definitive or immediate answers, but they cut to the core of the future of ham radio. Loring Kutchins, president of the Amateur Radio Safety Foundation, Inc. (ARSFi)—which funds and guides the “global radio email” system Winlink—says the divide between hobbyists and utilitarians seems to come down to age.
“Younger people who have come along tend to see amateur radio as a service, as it’s defined by FCC rules, which outline the purpose of amateur radio—especially as it relates to emergency operations,” Kutchins (W3QA) told Spectrum last year.
Kutchins, 68, expanded on the theme in a recent interview: “The people of my era will be gone—the people who got into it when it was magic to tune into Radio Moscow. But Grandpa’s ham radio set isn’t that big a deal compared to today’s technology. That doesn’t have to be sad. That’s normal.”
Gramps’ radios are certainly still around, however. “Ham radio is really a social hobby, or it has been a very social hobby—the rag-chewing has historically been the big part of it,” says Martin F. Jue (K5FLU), founder of radio accessories maker MFJ Enterprises, in Starkville, Miss. “Here in Mississippi, you get to 5 or 6 o’clock and you have a big network going on and on—some of them are half-drunk chattin’ with you. It’s a social group, and they won’t even talk to you unless you’re in the group.”
But Jue, 76, notes the ham radio space has fragmented significantly beyond rag-chewing and DXing (making very long-distance contacts), and he credits the shift to digital. That’s where MFJ has moved with its antenna-heavy catalog of products.
“Ham radio is connected to the Internet now, where with a simple inexpensive handheld walkie-talkie and through the repeater systems connected to the Internet, you’re set to go,” he says. “You don’t need a HF [high-frequency] radio with a huge antenna to talk to people anywhere in the world.”
To that end, last year MFJ unveiled the RigPi Station Server: a control system made up of a Raspberry Pi paired with open-source software that allows operators to control radios remotely from their iPhones or Web browser.
“Some folks can’t put up an antenna, but that doesn’t matter anymore because they can use somebody else’s radio through these RigPis,” Jue says.
He’s careful to note the RigPi concept isn’t plug and play—“you still need to know something about networking, how to open up a port”—but he sees the space evolving along similar lines.
“It’s all going more and more toward digital modes,” Jue says. “In terms of equipment I think it’ll all be digital at some point, right at the antenna all the way until it becomes audio.”
Outside the United States, there are some notable bright spots, according to Dave Sumner (K1ZZ), secretary of the International Amateur Radio Union (IARU). This collective of national amateur radio associations around the globe represents hams’ interests to the International Telecommunication Union (ITU), a specialized United Nations agency that allocates and manages spectrum. In fact, in China, Indonesia, and Thailand, amateur radio is positively booming, Sumner says.
China’s advancing technology and growing middle class, with disposable income, has led to a “dramatic” increase in operators, Sumner says. Indonesia is subject to natural disasters as an island nation, spurring interest in emergency communication, and its president is a licensed operator. Trends in Thailand are less clear, Sumner says, but he believes here, too, that a desire to build community response teams is driving curiosity about ham radio.
“So,” Sumner says, “you have to be careful not to subscribe to the notion that it’s all collapsing everywhere.”
China is also changing the game in other ways, putting cheap radios on the market. A few years ago, an entry-level handheld UHF/VHF radio cost around US $100. Now, thanks to Chinese manufacturers like Baofeng, you can get one for under $25. HF radios are changing, too, with the rise of software-defined radio.
“It’s the low-cost radios that have changed ham radio and the future thereof, and will continue to do so,” says Jeff Crispino, CEO of Nooelec, a company in Wheatfield, N.Y., that makes test equipment and software-defined radios, where demodulating a signal is done in code, not hardwired electronics. “SDR was originally primarily for military operations because they were the only ones who could afford it, but over the past 10 years, this stuff has trickled down to become $20 if you want.” Activities like plane and boat tracking, and weather satellite communication, were “unheard of with analog” but are made much easier with SDR equipment, Crispino says.
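What “demodulating a signal in code” means can be sketched in a few lines. Below is a minimal, illustrative AM envelope detector in Python with NumPy; the sample rate, tone, and carrier frequency are arbitrary choices for the demo, not parameters of any product mentioned here.

```python
import numpy as np

# Synthesize an AM signal the way an SDR front end might sample one:
fs = 48_000                                  # sample rate, Hz
t = np.arange(0, 0.1, 1 / fs)                # 0.1 s of samples
audio = np.sin(2 * np.pi * 440 * t)          # 440 Hz "voice" tone
carrier = np.cos(2 * np.pi * 12_000 * t)     # 12 kHz carrier
am = (1 + 0.5 * audio) * carrier             # amplitude-modulated signal

# Demodulate entirely in software: build the analytic signal by zeroing
# the negative-frequency half of the spectrum, then take its magnitude
# to recover the carrier envelope (a Hilbert-transform trick).
spectrum = np.fft.fft(am)
spectrum[len(spectrum) // 2 + 1:] = 0        # drop negative frequencies
envelope = np.abs(2 * np.fft.ifft(spectrum)) # magnitude = envelope
recovered = envelope - envelope.mean()       # remove DC offset, leaving audio

# The recovered waveform should track the original tone closely.
corr = np.corrcoef(recovered, audio)[0, 1]
print(round(float(corr), 2))
```

In a hardware radio that envelope detector would be a diode and a capacitor; here it is three lines of array math, which is why the same cheap SDR dongle can decode aircraft transponders one minute and weather satellites the next.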
Nooelec often hears from customers about how they’re leveraging the company’s products. For example, about 120 members of the group Space Australia banded together to collect data from the Milky Way as a community project. They are using an SDR and a low-noise amplifier from Nooelec with a homemade horn antenna to detect the radio signal from interstellar clouds of hydrogen gas.
“We will develop products from that feedback loop—like for hydrogen line detection, we’ve developed accessories for that so you can tap into astronomical events with a $20 device and a $30 accessory,” Crispino says.
Looking ahead, the Nooelec team has been talking about how to “flatten the learning curve” and lower the bar to entry, so that the average user—not only the technically adept—can explore and develop their own novel projects within the world of ham radio.
“It is an increasingly fragmented space,” Crispino says. “But I don’t think that has negative connotations. When you can pull in totally unique perspectives, you get unique applications. We certainly haven’t thought of it all yet.”
The ham universe is affected by the world around it—by culture, by technology, by climate change, by the emergence of a new generation. And amateur radio enthusiasts are a varied and vibrant community of millions of operators, new and experienced and old and young, into robotics or chatting or contesting or emergency communications, excited or nervous or pessimistic or upbeat about what ham radio will look like decades from now.
As Michel, the former ARRL CEO, puts it: “Every ham has [their] own perspective. What we’ve learned over the hundred-plus years is that there will always be these battles—AM modulation versus single-sideband modulation, whatever it may be. The technology evolves. And the marketplace will follow where the interests lie.”
Julianne Pepitone is a freelance technology, science, and business journalist and a frequent contributor to IEEE Spectrum. Her work has appeared in print, online, and on television outlets such as Popular Mechanics, CNN, and NBC News.
Can artificial intelligence help the search for life elsewhere in the solar system? NASA thinks the answer may be “yes”—and not just on Mars either.
A pilot AI system is now being tested for use on the ExoMars mission that is currently slated to launch in the summer or fall of 2022. The machine-learning algorithms being developed will help science teams decide how to test Martian soil samples to return only the most meaningful data.
For ExoMars, the AI system will only be used back on Earth to analyze data gathered by the ExoMars rover. But if the system proves to be as useful to the rovers as now suspected, a NASA mission to Saturn’s moon Titan (now scheduled for a 2026 launch) could automate the scientific sleuthing process in the field. This mission will rely on the Dragonfly octocopter drone to fly from surface location to surface location through Titan’s dense atmosphere and drill for signs of life there.
The hunt for microbial life in another world’s soil, either as fossilized remnants or as present-day samples, is very challenging, says Eric Lyness, software lead of the NASA Goddard Planetary Environments Lab in Greenbelt, Md. There is of course no precedent to draw upon, because no one has yet succeeded in astrobiology’s holy grail quest.
But that doesn’t mean AI can’t provide substantial assistance. Lyness explained that for the past few years he’d been puzzling over how to automate portions of an exploratory mission’s geochemical investigation, wherever in the solar system the scientific craft may be.
Last year he decided to try machine learning. “So we got some interns,” he said. “People right out of college or in college, who have been studying machine learning. … And they did some amazing stuff. It turned into much more than we expected.” Lyness and his collaborators presented their scientific analysis algorithm at a geochemistry conference last month.
ExoMars’s rover—named Rosalind Franklin, after one of the co-discoverers of the structure of DNA—will be the first that can drill down to 2-meter depths, beyond where solar UV light might penetrate and kill any life forms. In other words, ExoMars will be the first Martian craft with the ability to reach soil depths where living soil bacteria could possibly be found.
“We could potentially find forms of life, microbes or other things like that,” Lyness said. However, he quickly added, very little conclusive evidence today exists to suggest that there’s present-day (microbial) life on Mars. (NASA’s Curiosity rover has sent back some inexplicable observations of both methane and molecular oxygen in the Martian atmosphere that could conceivably be a sign of microbial life forms, though non-biological processes could explain these anomalies too.)
Less controversially, the Rosalind Franklin rover’s drill could also turn up fossilized evidence of life in the Martian soil from earlier epochs when Mars was more hospitable.
NASA’s contribution to the joint Russian/European Space Agency ExoMars project is an instrument called a mass spectrometer that will be used to analyze soil samples from the drill cores. Here, Lyness said, is where AI could really provide a helping hand.
The spectrometer, which studies the mass distribution of ions in a sample of material, works by blasting the drilled soil sample with a laser and then mapping out the atomic masses of the various molecules and portions of molecules that the laser has liberated. The problem is that any given mass spectrum could originate from any number of source compounds, minerals, and components, which makes analyzing a mass spectrum a gigantic puzzle.
Lyness said his group is studying the mineral montmorillonite, a commonplace component of the Martian soil, to see the many ways it might reveal itself in a mass spectrum. Then his team sneaks in an organic compound with the montmorillonite sample to see how that changes the mass spectrometer output.
“It could take a long time to really break down a spectrum and understand why you’re seeing peaks at certain [masses] in the spectrum,” he said. “So anything you can do to point scientists into a direction that says, ‘Don’t worry, I know it’s not this kind of thing or that kind of thing,’ they can more quickly identify what’s in there.”
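The matching puzzle Lyness describes can be sketched as a library search: bin the observed peaks into a vector, then score each candidate reference spectrum by cosine similarity. The compound names and peak intensities below are invented for illustration; the actual NASA analysis is far more sophisticated.

```python
import numpy as np

MASS_BINS = np.arange(0, 100)  # integer mass-to-charge bins (illustrative)

def spectrum(peaks):
    """Build a unit-length intensity vector from {mass: intensity} pairs."""
    v = np.zeros(len(MASS_BINS))
    for mass, intensity in peaks.items():
        v[mass] = intensity
    return v / np.linalg.norm(v)   # normalize so scores are comparable

# Hypothetical reference library of candidate source materials:
library = {
    "montmorillonite-like": spectrum({28: 1.0, 44: 0.6, 60: 0.3}),
    "organic-like":         spectrum({15: 0.8, 29: 1.0, 43: 0.7}),
}

# A made-up observed spectrum from the instrument:
observed = spectrum({15: 0.7, 28: 0.2, 29: 1.0, 43: 0.6})

# Rank candidates: higher cosine similarity = better explanation of the peaks.
scores = {name: float(ref @ observed) for name, ref in library.items()}
best = max(scores, key=scores.get)
print(best)
```

A ranking like this is the kind of “don’t worry, it’s not this kind of thing” shortlist Lyness wants the algorithm to hand scientists, so they spend their time on the spectra that genuinely defy easy explanation.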
Lyness said the ExoMars mission will provide a fertile training ground for his team’s as-yet-unnamed AI algorithm. (He said he’s open to suggestions—though, please, no spoof Boaty McBoatface submissions need apply.)
Because the Dragonfly drone and possibly a future astrobiology mission to Jupiter’s moon Europa would be operating in much more hostile environments with much less opportunity for data transmission back and forth to Earth, automating a craft’s astrobiological exploration would be practically a requirement.
All of which points to a future in the mid-2030s in which a nuclear-powered octocopter on a moon of Saturn flies from location to location to drill for evidence of life on this tantalizingly bio-possible world. And machine learning will help power the science.
“We should be researching how to make the science instruments smarter,” Lyness said. “If you can make it smarter at the source, especially for planetary exploration, it has huge payoffs.”
In the wake of new Black Lives Matter protests, one company hopes to use virtual reality to help people better understand others by putting them in their colleagues’ shoes. The aim is to create better workplaces by helping employees develop and practice more respectful ways of interacting with each other.
By immersing people in realistic digital environments, virtual reality (VR) can lead to mind-bending experiences, such as making users feel as if they have swapped bodies with someone else. The effects of VR can persist long after these experiences; psychologists hope this can help in therapies for ailments such as phobias and post-traumatic stress disorder.
Previously, clinical psychologist Robin Rosenberg and her colleagues found that when people could use “superpowers” in VR, they acted more virtuously afterward. After this work, as the Black Lives Matter movement rose to prominence in 2014, Rosenberg remembered hearing how some white people responded by saying “white lives matter” or “all lives matter.”
“I thought they might not understand the lived experience of being Black in America,” Rosenberg says. “Not that I presume to know, but as a psychologist, I thought I knew enough to wonder whether virtual reality had the potential to provide powerful emotional learning about the lived experience of others.” A promising proof-of-concept study along these lines that Rosenberg and her colleagues performed in 2018 and 2019 led her to quit her day job and work on this idea full-time.
With her new company, Live in Their World, which publicly launched on 7 July, Rosenberg aims to address bias and incivility in the workplace using virtual reality. Using VR headsets, users can experience the perspectives of a Black man, a Black woman, and a white woman, “placing them in typical workplace scenes in which bias, inequity, and incivility arise, so they would see and hear those typical encounters from the powerful first-person perspective of the employee who is the focus of that VR segment,” Rosenberg says.
For each VR segment, the company sought input from people with that lived perspective. “For instance, for Jovontae’s story—that of a Black man—there were about a dozen Black men of different ages whose professional experiences and contributions were crucial to creating the final product,” Rosenberg says.
Rosenberg stressed they are not seeking to erase a person’s bigotry after just a few hours in VR. “Research shows that ‘de-bias’ training doesn’t work, and that is not our goal,” she says. “But research points to important factors of training that can decrease discrimination and disrespectful workplace behavior.”
So far, “the feedback we’ve received has been very positive: typically, that the white people simply didn’t know, and thus the experience was eye-opening and has stayed with them,” Rosenberg says. “They stress the impact it’s made in both their thinking and their behavior. This has been both exciting and important, since my goal is to make workplace behavior more respectful and equitable.”
After each VR segment, which lasts 20 to 25 minutes, employees go through an online cognitive learning module to help them explicitly recognize what behaviors are problematic and to develop new skills to handle workplace situations more respectfully and effectively. Employees also take surveys to help Rosenberg’s company assess outcomes.
Given the pandemic, the company expanded its options to provide remote ways to experience the VR segments. Employees can not only use VR headsets, but can also view the experiences as 180-degree videos via computer.
Rosenberg’s company works directly with other companies, generally HR teams. “These are usually the people who have been charged with creating a more respectful workplace, so they bring on Live in Their World on behalf of their employees, paying per employee per year for the number of demographic tracks that they have selected.” They can scale up and down for any company size and for employees at all levels, she notes.
Rosenberg notes that previous research suggests voluntary participation yields better outcomes than mandatory participation when it comes to traditional types of training. “However, our approach is significantly different from traditional types of training,” she says. “We look forward to assessing whether the outcomes of our program differ when it is mandatory versus voluntary.”
“The recent very public incidents of murder and harassment of African-Americans is, unfortunately, not new, but the recent increased attention to the issue of systemic racism and racial inequity in the workplace is heartening,” Rosenberg says. “The hope is that it catalyzes substantive change.”
In 2014, economist Lisa D. Cook reported research that illuminated something fundamental about innovation: No matter how well your IP laws are written, innovation won’t happen without security and the rule of law. To prove that, she showed how segregation laws, lynchings, and state-supported violence suppressed African American invention during the 20th century. By tracking the patent filings of African American inventors from 1870 to 1940, Cook showed that acts of violence have a measurable impact on innovation. IEEE Spectrum spoke to the Michigan State University professor of economics and international relations on 2 July 2020.
IEEE Spectrum: What led you to investigate the effects of violence and segregation on African American innovation?
Lisa D. Cook: I wrote my dissertation on Russia and the Russian economy. This was in the 1990s, and it was a bit of a dangerous place. There was a question that always came up when I was talking to entrepreneurs there: "Why can't we get invention and innovation in Russia?"
They were asking a legitimate question, because they already had IP laws on the books. They were much different from the laws in the Soviet period, which weren't very strong. They were just wondering why invention wasn't happening at the pace they thought it should be happening. So I said to them, “Well, you've got to have things like the rule of law. You've got to have personal security.”
At the time, I didn't have any sort of empirical evidence that could show this. The conventional wisdom in the economics of innovation literature then was that if you have these strong IP laws, that would be sufficient to provide an incentive for patenting. I found that naïve.
So I wondered if there might be a historical experiment that might show this. One that would have an IP regime that stayed the same for some inventors but have other inventors subject to some sort of shock that had to do with violence or lack of rule of law. And I thought, “Well, that describes the United States. So maybe we can find this experiment in U.S. history.” You have a control and a treatment group. And the African Americans were going to be the treatment group.
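The control-and-treatment design Cook describes is, in spirit, a difference-in-differences comparison. A toy version with made-up numbers (not her data) shows the logic:

```python
# Synthetic patents-per-capita figures, purely illustrative:
patents = {
    # group:             (before shock, after shock)
    "control (white)":   (10.0, 12.0),   # unaffected group keeps growing
    "treatment (Black)": (10.0,  7.0),   # affected group declines
}

ctrl_before, ctrl_after = patents["control (white)"]
trt_before, trt_after = patents["treatment (Black)"]

# Change each group experienced under the same, unchanged IP regime:
ctrl_change = ctrl_after - ctrl_before   # control group's trend
trt_change = trt_after - trt_before      # treatment group's trend

# The difference in those differences attributes the gap to the shock,
# since both groups faced identical patent law throughout.
did_estimate = trt_change - ctrl_change
print(did_estimate)
```

Because patent law stayed fixed for both groups, any relative decline in the treatment group's patenting can be attributed to the violence shock rather than to the IP regime—which is exactly the identification strategy Cook describes.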
IEEE Spectrum: How did you actually figure out which inventors were African American during your 1870-1940 study period, given that patent applications don't list race?
Lisa D. Cook: I thought that was going to be easy, because there's this emerging literature on Black names in economics. So I thought I could use the same techniques that my colleagues used at the time. I tried that. Then, using census data, I came up with the first-ever list of historical Black names. And it was of limited use. It barely identified anybody among the African American inventors.
So I had to try a new method. And that was finding all of the directories of scientists, engineers, and potential inventors that I could. In doing so, I found the survey of Henry Baker, the African American patent examiner in the early 1900s, who conducted surveys of patent agents and patent attorneys in 1900 and 1913.
That was very useful as a start. But it wasn't perfect. So I had to extend it backwards and forwards and check everything, as well. I also checked things like obituaries, because one thing that I knew from all these directories that I was collecting was that they biased the sample towards famous people. So I thought I might get some equalization by just checking newspapers and checking obituaries. And I was able to recover some that way.
IEEE Spectrum: That sounds like a ton of work.
Lisa D. Cook: It was.
IEEE Spectrum: Would you briefly explain the key results?
Lisa D. Cook: The main results over the period are, first, that violence has an impact on all patenting. It has a significant and negative impact on Black patenting. So those who were targeted are going to be disproportionately affected by lynchings, riots, and segregation laws.
Second, the most valuable patents are the most affected by violence. And that's not good news if you're extrapolating this to an economy.
The next group of results would suggest that if White inventors had been subjected to the same type of violence, economic growth would have been a lot slower. Why? Because business investment would have been lower, and business investment is a key component of GDP. So we would have had fewer inventions and fewer patented inventions and therefore less business investment and therefore less growth.
IEEE Spectrum: Electrical patents were particularly affected. Why?
Lisa D. Cook: I separated patents into types of technological categories to see if one category was more affected by violence than others. And we did see that violence disproportionately affected—for that period—electrical patents, which would have been some of the most valuable.
You can imagine how that would be true: You really had to be up on the latest inventions and the latest patents to be able to be productive, to add an increment to the stock of knowledge. Electrical patents, at the time, would probably have taken more collaboration with other inventors and more trips to the patent attorney. And that was something that was cut off as a result of Plessy v. Ferguson [the disastrous 1896 U.S. Supreme Court decision that legitimized anti-Black laws passed by U.S. state legislatures beginning in the late 19th century]. You can imagine that if an inventor no longer had access to, for example, the main library, where patent registries and information about new inventions were and where inventors could gather, that would be detrimental to the free flow of information. If commercial business districts were segregated—there were no patent attorneys who were African Americans until the 1970s—that meant that you really didn't have access to someone who could file and protect your invention.
IEEE Spectrum: What key events impacted African American innovation?
Lisa D. Cook: Plessy v. Ferguson in 1896 was a big one. 1899 was the peak for African American invention, and even using 2010 data [PDF], it was still the peak per capita for African American invention.
Scholars of constitutional law explain that the Plessy v. Ferguson Supreme Court rulings took two or three years to produce effects, for rulemaking to happen, and for laws to be passed. What we did see was a proliferation of laws after Plessy v. Ferguson in states, especially outside of the south, and that's where patenting was happening. So I think it was largely Plessy v. Ferguson that led to this huge drop in patenting by African Americans that hasn't yet recovered.
Blatant violence also had an effect. Before I did anything, I had looked at the time series of patents and I'd noticed several dips. One was 1899, and another one was in 1921. The first thing I did, being an economist of innovation, was try to see if the patent laws changed or if patents became more expensive. But the only thing I came up with was the Tulsa massacre. [In May and June 1921, a White mob attacked and destroyed a relatively wealthy African-American neighborhood in Tulsa, Oklahoma. Many in the mob had been deputized and armed by government officials, and the attack included aerial bombardment.] The local, state, and federal government failed African Americans so much in Tulsa that it had a sizeable effect on all African Americans. They felt terrorized at the time, and there was nobody who had their backs. So I think that that's why 1921 stood out in the data, and I think there's evidence to support that.
IEEE Spectrum: How much potential innovation was lost during that period, and how did you figure that out?
Lisa D. Cook: I extrapolated the trajectory from the pre-1900 trend and found that, in the absence of violence and segregation, there should've been roughly 1,100 patents at the time. That would've been the output of a mid-sized European country then. But what I found was 726 patents.
IEEE Spectrum: What does your research say generally about violence and innovation?
Lisa D. Cook: In the 2014 paper, what I did was to predict which lynchings (of a series of lynchings) would have the greatest impact on patenting, and it's the first one. So that's the one that you want to try to prevent. There are ways to do this, such as not letting white supremacist groups get out of control.
I think we don't think enough about the conditions that inventors need to be productive, such that there can be a free flow of ideas. I think we put too much weight on the actual laws in place and not the environment in which they are operating. We have direct evidence that the conditions in which one is operating can make a huge difference, whether you're adding to the stock of knowledge broadly or the stock of knowledge related to science and innovation.
IEEE Spectrum: What is holding back Black entrepreneurship now?
Lisa D. Cook: I think that one of the big things that is holding back African American participation and women's participation is workplace climate, frankly. There are three stages of innovation—education; training as an inventor, working in a lab, for example; and then the third phase, commercialization of the invention. There are well-known problems associated with workplace climate in each. There is systemic racism at every stage.
With respect to entrepreneurship at the very end, what I found in doing my research interviews is that networks matter a lot more than we have researched in economics. It is social networks, all types of networks that require engagement—like having an internship at an investment bank, or being a member of a golf club—that are needed to get inventions commercialized. Those networks result in introductions for venture capital funding, for example. And African Americans and women are often kept out of those networks. So it's unsurprising that only 1 percent of founders who received VC funding are Black.
To Probe Further:
For more on Cook’s work on violence and innovation, listen to her 11 February 2019 interview with National Public Radio’s Cardiff Garcia “How Violence Limits Economic Activity.” And read her chapter “The Innovation Gap in Pink and Black,” in Does America Need More Innovators? MIT Press, 2019.
For more about race and the process of innovation, see “The implications of U.S. gender and racial disparities in income and wealth inequality at each stage of the innovation process” (with Jan Gerson), Washington Center for Equitable Growth Policy Brief, July 2019.
During hot weather, it’s nice to open a window to let in a breeze. Maybe not, though, if the window also lets in a cacophony from cars and trucks roaring past.
Street noise is a nuisance, a health hazard, and an oft-cited reason to abandon the city for quieter pastures. Why can’t technology ease the problem? We have noise-cancelling headphones; why not noise-cancelling windows as well? Now, researchers in Singapore have created just such a thing for a mockup room, and they are working on adapting their proof of principle to a real room.
The idea is simple: A sensor picks up a regularly repeating waveform, like the sound created by a rolling wheel or a turning propeller. Electronics characterizes the wave, generates a mirror image of it, and emits that second “antiwave” from a speaker, causing the two waves’ peaks and troughs to cancel out.
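The principle can be sketched in a few lines of code. This is an idealized illustration of destructive interference, not the researchers' actual signal-processing chain; the tone frequency and sample rate are arbitrary:

```python
import math

SAMPLE_RATE = 8000   # samples per second
FREQ = 440.0         # a repeating noise tone, in hertz
N = 1000             # number of samples to simulate

# The incoming noise: a simple periodic waveform.
noise = [math.sin(2 * math.pi * FREQ * t / SAMPLE_RATE) for t in range(N)]

# The "antiwave": the same waveform, inverted (180 degrees out of phase).
antinoise = [-s for s in noise]

# Where the two superpose, peaks meet troughs and the sum is silence.
residual = [n + a for n, a in zip(noise, antinoise)]
print(max(abs(r) for r in residual))  # 0.0 -- perfect cancellation in this ideal case
```

In practice the antiwave must be generated and emitted fast enough to arrive in phase with the noise; any timing or amplitude mismatch leaves a residual.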
Antinoise works best for frequencies from about 300 to 1,000 hertz—so think the rumble of traffic rather than the cracking of fireworks that has been plaguing some U.S. cities this summer. Antinoise also works best in limited spaces, where the wave and its antiwave are sure to meet up properly, as in the gap between a headphone and an ear. However, with careful engineering, the audio trick can help in an airplane’s cabin and even in a car. In airliners, the antinoise is conveyed through special “shakers” attached to the fuselage; in cars, it’s channeled through the existing sound system.
In a paper published today in the British journal Nature, researchers at Nanyang Technological University describe how an array of 24 small speakers placed in a window, together with a sensor, can generate an antinoise signal strong enough to cut the room’s noise by 10 decibels as perceived by the human ear—that is, A-weighted decibels, or dBA. That’s about the difference between heavy traffic 90 meters away (60 dBA) and a quiet moment in a city (50 dBA).
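Because decibels are logarithmic, a 10 dB cut is a bigger deal than it sounds. This standard decibel arithmetic (not taken from the paper) shows why:

```python
def power_ratio(delta_db):
    # Decibels are logarithmic: every 10 dB corresponds to a 10x change
    # in acoustic power, since ratio = 10^(dB / 10).
    return 10 ** (delta_db / 10)

# A 10 dB reduction means the sound power reaching the ear
# drops by a factor of 10 -- i.e., 90 percent of it is gone.
print(power_ratio(10))  # 10.0
```

Perceived loudness roughly halves for every 10 dB removed, which is why the drop from 60 dBA to 50 dBA is so noticeable.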
It’s a striking achievement to make wave and antiwave cancel out perfectly throughout an entire room. The key is that the noise all comes through a relatively small aperture—the window, explains Bhan Lam, an electrical engineer and the leader of the group.
“In a way, we are treating the window opening as the noise source,” he tells IEEE Spectrum. “Effective control of the noise source will result in noise control everywhere in the room.” He adds that simulations show that it ought to work no matter how big the room is.
There are two engineering tradeoffs. First, as you move the speakers further apart, the highest frequency they can cancel goes down. And as you make the speakers smaller, you reduce their maximum output power and their bass response. But if you really want to make the most of today’s speaker technology, Bhan says, you can enlarge the window so that it can accommodate bigger speakers.
Years ago, noise from overflying airliners so ruffled people at the U.S. Open tennis tournament, in Queens, NY, that the city arranged to re-route air traffic to and from LaGuardia Airport for the duration of the event. Why can’t antinoise do that job instead?
“In an open space, if the noise source is far away—say, from an aircraft—it becomes a challenging problem,” Bhan explains. “This type of control is termed as spatial active noise control, and the research is still in the fundamental stage; only simulations have been reported thus far.”
To fully embrace wind and solar power, grid operators need to be able to predict and manage the variability that comes from changes in the wind or clouds dimming sunlight.
One solution may come from a $2-million project backed by the U.S. Department of Energy that aims to develop a risk dashboard for handling more complex power grid scenarios.
Grid operators now use dashboards that report the current status of the power grid and show the impacts of large disturbances—such as storms and other weather contingencies—along with regional constraints in flow and generation. The new dashboard being developed by Columbia University researchers and funded by the Advanced Research Projects Agency–Energy (ARPA-E) would improve upon existing dashboards by modeling more complex factors. This could help the grid better incorporate both renewable power sources and demand response programs that encourage consumers to use less electricity during peak periods.
“[Y]ou have to operate the grid in a way that is looking forward in time and that accepts that there will be variability—you have to start talking about what people in finance would call risk,” says Daniel Bienstock, professor of industrial engineering and operations research, and professor of applied physics and applied mathematics at Columbia University.
The new dashboard would not necessarily help grid operators prepare for catastrophic black swan events that might happen only once in 100 years. Instead, Bienstock and his colleagues hope to apply some lessons from financial modeling to measure and manage risk associated with more common events that could strain the capabilities of the U.S. regional power grids managed by independent system operators (ISOs). The team plans to build and test an alpha version of the dashboard within two years, before demonstrating the dashboard for ISOs and electric utilities in the third year of the project.
Variability already poses a challenge to modern power grids that were designed to handle steady power output from conventional power plants to meet an anticipated level of demand from consumers. Power grids usually rely on gas turbine generators to kick in during peak periods of power usage or to provide backup to intermittent wind and solar power.
But such generators may not provide a fast enough response to compensate for the expected variability in power grids that include more renewable power sources and demand response programs driven by fickle human behavior. In the worst cases, grid operators may shut down power to consumers and create deliberate blackouts in order to protect the grid’s physical equipment.
One of the dashboard project’s main goals involves developing mathematical and statistical models that can quantify the risk from having greater uncertainty in the power grid. Such models would aim to simulate different scenarios based on conditions—such as changes in weather or power demand—that could stress the power grid. Repeatedly playing out such scenarios would force grid operators to fine-tune and adapt their operational plans to handle such surprises in real life.
For example, one scenario might involve a solar farm generating 10 percent less power and a wind farm generating 30 percent more power within a short amount of time, Bienstock explains. The combination of those factors might mean too much power begins flowing on a particular power line and the line subsequently starts running hot at the risk of damage.
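A toy Monte Carlo version of that kind of stress test might look like the sketch below. All numbers (line limit, base outputs, deviation ranges) are hypothetical, and a real dashboard would solve power-flow equations over the whole network rather than sum two sources onto one line:

```python
import random

random.seed(0)

LINE_LIMIT_MW = 120.0   # hypothetical thermal limit of one transmission line
BASE_SOLAR_MW = 50.0    # hypothetical scheduled solar output
BASE_WIND_MW = 80.0     # hypothetical scheduled wind output

def sample_scenario():
    # Perturb each source by a random fraction, mimicking short-term
    # forecast errors (e.g. solar down 10 percent, wind up 30 percent).
    solar = BASE_SOLAR_MW * (1 + random.uniform(-0.3, 0.1))
    wind = BASE_WIND_MW * (1 + random.uniform(-0.2, 0.4))
    return solar + wind  # total flow pushed onto the toy line

# The fraction of sampled scenarios that overload the line
# serves as a crude risk metric for operators to monitor.
trials = 10_000
overloads = sum(sample_scenario() > LINE_LIMIT_MW for _ in range(trials))
print(f"overload risk: {overloads / trials:.1%}")
```

Repeatedly drawing scenarios like this, against historical and synthetic data, is the kind of exercise that would let operators rehearse responses before the stress occurs in real life.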
Such models would only be as good as the data that trains them. Some ISOs and electric utilities have already been gathering useful data from the power grid for years. Those that already have more experience dealing with the variability of renewable power have been the most proactive. But many of the ISOs are reluctant to share such data with outsiders.
“One of the ISOs has told us that they will let us run our code on their data provided that we actually physically go to their office, but they will not give us the data to play with,” Bienstock says.
For this project, ARPA-E has been working with one ISO to produce synthetic data covering many different scenarios based on historical data. The team is also using publicly available data on factors such as solar irradiation, cloud cover, wind strength, and the power generation capabilities of solar panels and wind turbines.
“You can look at historical events and then you can design stress scenarios that are somehow compatible with what we observe in the past,” says Agostino Capponi, associate professor of industrial engineering and operations research at Columbia University and external consultant for the U.S. Commodity Futures Trading Commission.
A second big part of the dashboard project involves developing tools that grid operators could use to help manage the risks that come from dealing with greater uncertainty. Capponi is leading the team’s effort to design customized energy volatility contracts that could allow grid operators to buy such contracts for a fixed amount and receive compensation for all the variance that occurs over a historical period of time.
But he acknowledged that financial contracts designed to help offset risk in the financial market won’t apply in a straightforward manner to the realities of the power grid that include delays in power transmission, physical constraints, and weather events.
“You cannot really directly use existing financial contracts because in finance you don't have to take into account the physics of the power grid,” Capponi says.
The team’s expertise spans multiple disciplines. Bienstock, Capponi, and their colleague Garud Iyengar, professor of industrial engineering and operations research, are all members of Columbia’s Data Science Institute. The project’s principal investigators also include Michael Chertkov, professor of mathematics at the University of Arizona, and Yury Dvorkin, assistant professor of electrical and computer engineering at New York University.
Once the new dashboard is up and running, it could begin to help grid operators deal with both near-term and long-term challenges for the U.S. power grid. One recent example comes from the COVID-19 pandemic and its associated behavioral changes—such as more people working from home—which have already increased variability in energy consumption across New York City and other parts of the United States. In the future, the risk dashboard might help grid operators quickly identify areas at higher risk of imbalances between supply and demand and act quickly to avoid straining the grid or triggering blackouts.
Knowing the long-term risks in specific regions might also drive more investment in additional energy storage technologies and improved transmission lines to help offset such risks. The situation is different for every grid operator’s particular region, but the researchers hope that their dashboard can eventually help level the speed bumps as the U.S. power grid moves toward using more renewable power.
“The ISOs have different levels of renewable penetration, and so they have different exposures and visibility to risk,” Bienstock says. “But this is just the right time to be doing this sort of thing.”
The majority of robot arms are built out of some combination of long straight tubes and actuated joints. This isn’t surprising, since our limbs are built the same way, which was a clever and efficient bit of design. By adding more tubes and joints (or degrees of freedom), you can increase the versatility of your robot arm, but the tradeoff is that complexity, weight, and cost will increase, too.
At ICRA, researchers from Imperial College London’s REDS Lab, headed by Nicolas Rojas, introduced a design for a robot that’s built around a malleable structure rather than a rigid one, allowing you to make the arm more versatile without adding extra degrees of freedom. The idea is that you’re no longer constrained to static tubes and joints but can instead reconfigure your robot to set it up exactly the way you want, and easily change it whenever you like.
Inside that bendable section of arm are layers and layers of mylar sheets, cut into flaps and stacked on top of one another so that each flap overlaps or is overlapped by at least 11 other flaps. The mylar is slippery enough that under most circumstances the flaps can slide smoothly against each other, letting you adjust the shape of the arm. The flaps are sealed between latex membranes, and when air is pumped out from between the membranes, the flaps press down on each other and turn the whole structure rigid, locking the arm in whatever shape you’ve put it in.
The nice thing about this system is that it’s a sort of combination of a soft robot and a rigid robot—you get the flexibility (both physical and metaphorical) of a soft system, without necessarily having to deal with all of the control problems. It’s more mechanically complex than either (as hybrid systems tend to be), but you save on cost, size, and weight, and reduce the number of actuators you need, which tend to be points of failure. You do need to deal with creating and maintaining a vacuum, and the fact that the malleable arm is not totally rigid, but depending on your application, those tradeoffs could easily be worth it.
For more details, we spoke with first author Angus B. Clark via email.
IEEE Spectrum: Where did this idea come from?
Angus Clark: The idea of malleable robots came from the realization that the majority of serial robot arms have 6 or more degrees of freedom (DoF)—usually rotary joints—yet are typically performing tasks that only require 2 or 3 DoF. The idea of a robot arm that achieves flexibility and adaptation to tasks but maintains the simplicity of a low DoF system, along with the rapid development of variable stiffness continuum robots for medical applications, inspired us to develop the malleable robot concept.
What are some ways in which a malleable robot arm could provide unique advantages, and what are some potential applications that could leverage these advantages?
Malleable robots have the ability to complete multiple traditional tasks, such as pick-and-place or bin-picking operations, without the added bulk of extra joints that are not directly used within each task, as the flexibility of the robot arm is provided by a malleable link instead. This results in an overall smaller form factor, including weight and footprint of the robot, as well as a lower power requirement and cost, as fewer joints are needed, without sacrificing adaptability. This makes the robot ideal for scenarios where any of these factors is critical, such as in space robotics—where every kilogram saved is vital—or in rehabilitation robotics, where cost reduction may facilitate adoption, to name two examples. Moreover, the soft-robot-esque nature of malleable robots lends itself to collaborative robots in factories, working safely alongside and with humans.
Compared to a conventional rigid link between joints, what are the disadvantages of using a malleable link?
Currently the maximum stiffness of a malleable link is considerably weaker than that of an equivalent solid steel rigid link, and this is one of the key areas we are focusing our research on improving, as motion precision and accuracy are impacted. We have created the largest existing variable-stiffness link, at roughly 800 mm length and 50 mm diameter, which suits malleable robots to small and medium-size workspaces. Our current results evaluating this accuracy are good; however, achieving a uniform stiffness across the entire malleable link can be problematic due to wrinkles forming in the encapsulating membrane under bending. As demonstrated by our SCARA topology results, this can produce slight structural variations, resulting in reduced accuracy.
Does the robot have any way of knowing its own shape? Potentially, could this system reconfigure itself somehow?
Currently we compute the robot topology using motion tracking, with markers placed on the joints of the robot. Using distance geometry, we are then able to obtain the forward and inverse kinematics of the robot, which we can use to control the end effector (the gripper). Ideally, in the future we would love to develop a system that no longer requires motion tracking cameras.
As for the robot reconfiguring itself, which we call an “intrinsic malleable link,” many methods have been demonstrated for controlling a continuum structure, such as using positive pressure or tendon wires. However, the ability to determine the curvature of the link in real time, not just the joint positions, is a significant hurdle. We hope to see future work on malleable robots solve this problem.
What are you working on next?
For us, refining the kinematics of the robot to enable a robust and complete system for allowing a user to collaboratively reshape the robot, while still achieving the accuracy expected from robotic systems, is our current main goal. Malleable robots are a brand new field we have introduced, and as such provide many opportunities for development and optimization. Over the coming years, we hope to see other researchers work alongside us to solve these problems.
When Hurricane Maria razed Puerto Rico in September 2017, the storm laid bare the serious flaws and pervasive neglect of the island’s electricity system. Nearly all 3.4 million residents lost power for weeks, months, or longer—a disaster unto itself that affected hospitals and schools and shut down businesses and factories.
The following January, then-Gov. Ricardo Rosselló signaled plans to sell off parts of the Puerto Rico Electric Power Authority (PREPA), leaving private companies to do what the state-run utility had failed to accomplish. Rosselló, who resigned last year, said it would take about 18 months to complete the transition.
“Our objective is simple: provide better service, one that’s more efficient and that allows us to jump into new energy models,” he said that June, after signing a law to start the process.
Yet privatization to date has been slow, piecemeal, and mired in controversy. Recent efforts seem unlikely to move the U.S. territory toward a cleaner, more resilient system, power experts say.
As the region braces for an “unusually active” 2020 hurricane season, the aging grid remains vulnerable to disruption, despite US $3.2 billion in post-Maria repairs.
Puerto Rico relies primarily on large fossil fuel power plants and long transmission lines to carry electricity into mountains, coastlines, and urban centers. When storms mow down key power lines, or earthquakes destroy generating units—as was the case in January—outages cascade across the island. Lately, frequent brownouts caused by faulty infrastructure have complicated efforts to confront the COVID-19 outbreak.
“In most of the emergencies that we’ve had, the centralized grid has failed,” says Lionel Orama Exclusa, an electrical engineering professor at the University of Puerto Rico-Mayagüez and member of Puerto Rico’s National Institute of Energy and Island Sustainability.
He and many others have called for building smaller regional grids that can operate independently if other parts fail. Giant oil- and gas-fired power plants should similarly give way to renewable energy projects distributed near or within neighborhoods. Last year, Puerto Rico adopted a mandate to get to 100 percent renewable energy by 2050. (Solar, wind, and hydropower supply just 2.3 percent of today’s total generation.)
So far, however, PREPA’s contracts to private companies have mainly focused on retooling existing infrastructure—not reimagining the monolithic system. The companies are also tied to the U.S. natural gas industry, which has targeted Puerto Rico as a place to offload mainland supplies.
In June, Luma Energy signed a 15-year contract to operate and maintain PREPA’s transmission and distribution system. Luma is a newly formed joint venture between infrastructure company Quanta Services and Canadian Utilities Limited. The contract is valued between $70 million and $105 million per year, plus up to $20 million in annual “incentive fees.”
Wayne Stensby, president and CEO of Luma, said his vision for Puerto Rico includes wind, solar, and natural gas and is “somewhere down the middle” between a centralized and decentralized grid, Greentech Media reported. “It makes no sense to abandon the existing grid,” he told the news site in June, adding that Luma’s role is to “effectively optimize that reinvestment.”
Orama Exclusa says he has “mixed feelings” about the contract.
If the private consortium can effectively use federal disaster funding to fix crumbling poles and power lines, that could significantly improve the system’s reliability, he says. But the arrangement still doesn’t address the “fundamental” problem of centralization.
He is also concerned that the Luma deal lacks transparency. Former utility leaders and consumer watchdogs have noted that regulators did not include public stakeholders in the 18-month selection process. They say they’re wary Puerto Rico may be repeating missteps made in the wake of Hurricane Maria.
As millions of Puerto Ricans recovered in the dark, PREPA quietly inked a no-bid, one-year contract for $300 million with Whitefish Energy Holdings, a two-person Montana firm with ties to then-U.S. Interior Secretary Ryan Zinke. Cobra Acquisitions, a fracking company subsidiary, secured $1.8 billion in federal contracts to repair the battered grid. Last September, U.S. prosecutors charged Cobra’s president and two officials in the Federal Emergency Management Agency with bribery and fraud.
A more recent deal with another private U.S. firm is drawing further scrutiny.
In March 2019, New Fortress Energy won a five-year, $1.5 billion contract to supply natural gas to PREPA and convert two units (totaling 440 megawatts) at the utility’s San Juan power plant from diesel to gas. The company, founded by billionaire CEO Wes Edens, completed the project this May, nearly a year behind schedule. It also finished construction of a liquefied natural gas (LNG) import terminal in the capital city’s harbor.
“This is another step forward in our energy transformation,” Gov. Wanda Vázquez Garced said in May during a tour of the new facilities. Converting the San Juan units “will allow for a cheaper and cleaner fuel” and reduce monthly utility costs for PREPA customers, she said.
Critics have called for canceling the project, which originated after New Fortress submitted an unsolicited proposal to PREPA in late 2017. The ensuing deal gave New Fortress an “unfair advantage,” was full of irregularities, and didn’t undergo sufficient legal review or financial oversight, according to a June report by CAMBIO, a Puerto Rico-based environmental nonprofit, and the Institute for Energy Economics and Financial Analysis.
The project “would continue to lock in fossil fuels on the island and would prevent the aggressive integration of renewable energy,” Ingrid Vila Biaggi, president of CAMBIO, told the independent news program Democracy Now!
The U.S. Federal Energy Regulatory Commission, which oversees the transmission and wholesale sale of electricity and natural gas, also raised questions about the LNG import terminal.
On 18 June, the agency issued a rare show-cause order demanding that New Fortress explain why it didn’t seek prior approval before building the infrastructure at the Port of San Juan. The company has 30 days to respond.
Concerns over contracts are among the many challenges to revitalizing Puerto Rico’s grid. The island has been mired in a recession since 2006, amid a series of budget shortfalls, financial crises, and mismanagement—which contributed to PREPA filing for bankruptcy in 2017, just months before Maria struck. The COVID-19 pandemic is further eroding the economy, with Puerto Ricans facing widespread unemployment and rising poverty.
The coming months—typically those with the most extreme weather—will show if recent efforts to privatize the grid will alleviate, or exacerbate, Puerto Rico’s electricity problems.
The Technical Field Awards are given for contributions to or leadership in one of 27 specific IEEE fields of interest. They are among the highest awards presented on behalf of the IEEE Board of Directors.
The Herz Award recognizes sustained contributions by a present or past full-time IEEE staff member with at least 10 years of service.
The deadline to submit nominations for both types of awards is 15 January 2021.
Nomination guidelines, award-specific criteria, and nomination forms can be downloaded from the online portal. All nominations must be submitted through the portal.
Lynn Frassetti is the senior awards presentation and communications specialist for IEEE Awards Activities.
Confusion and skepticism may confound efforts to make use of digital contact tracing technologies during the COVID-19 pandemic. A recent survey found that just 42 percent of American respondents support using so-called contact tracing apps—an indication of a lack of confidence that could weaken or even derail effective deployment of such technologies.
Contact tracing apps generally collect some form of information about a smartphone user’s encounters with other people and notify those users if they were potentially exposed to a confirmed COVID-19 case. But each app has its own approach to privacy: some collect specific location data based on GPS, while others merely record close encounters with other smartphones based on Bluetooth radio transmissions. Those differences, coupled with public misunderstanding of the various apps, can make it tricky to assess public opinion of specific digital contact tracing technologies.
“We found that there is variation in terms of how willing people are to download the apps based on the features of the app,” says Baobao Zhang, a Klarman postdoctoral fellow at Cornell University whose research focus is the governance of AI. “There's many different kinds of apps that are out there, so if you're just going to ask about a contact tracing app, people might have very different views of what it does.”
In late April and late June, Zhang and her colleagues conducted two rounds of surveys focused on gauging American opinions of such apps. The results are described in a preprint paper first published on 5 May and later updated on 29 June; the update accounts for an initial problem with the survey software and includes a second round of survey findings.
The contact tracing apps in question build upon traditional contact tracing, but they are not the same. Traditional contact tracing is a tried-and-true public health measure that requires large numbers of human contact tracers to call and interview suspected or confirmed COVID-19 cases about their travel history for the purpose of warning family, friends, or strangers who may have been exposed.
But given how labor- and time-intensive manual contact tracing can be, some governments and companies have looked to digital contact tracing systems to help automate part of the process. These systems can include contact tracing apps, which in their most privacy-preserving form may be more accurately described as exposure notification apps. They function primarily to alert individual smartphone users rather than public health officials or human contact tracing teams.
Zhang and her colleagues used conjoint analysis to gauge how Americans value certain features of such apps. For example, app designers can choose whether to collect GPS location data or to rely primarily on Bluetooth key code exchanges. Whereas a GPS-based app might notify users about their potential exposure to COVID-19 at a particular location (information that could also help jog fuzzy human memories), a Bluetooth-based app would typically only tell users that they had potentially been exposed to someone, perhaps for a certain amount of time.
A GPS-based app “probably is more effective from a public health standpoint,” says Sarah Kreps, professor of government and adjunct professor of law at Cornell University and coauthor of the survey research paper. But Kreps adds that the same GPS location data is “very intrusive from a privacy standpoint” because the information could reveal behavioral and lifestyle patterns about a person’s daily life.
Americans who took the survey did not seem to view apps very differently based on whether they incorporated GPS or Bluetooth. But respondents did change their minds when it came to whether an app featured a centralized vs. decentralized system of data storage. The centralized system shares much more information—such as a user’s anonymized ID and Bluetooth key codes—with a central server that might be overseen by a company or government agency. The decentralized system typically stores most of the collected data on individual phones in the interest of better protecting user privacy.
“We found that the decentralized data storage in the contact tracing app increases people's willingness to download it,” Zhang says.
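The core of a conjoint-style analysis can be illustrated with a small simulation. The sketch below is hypothetical (not the researchers’ actual code or data): it randomly assigns simulated respondents to app profiles defined by two attributes, then estimates each attribute level’s effect as a difference in mean support. The assumed effect sizes mirror the survey’s headline findings, with decentralized storage raising willingness to download and GPS vs. Bluetooth making little difference.

```python
import random

random.seed(0)

def simulate_response(storage, location):
    """Simulated willingness to download (1 = yes). Effect sizes are
    invented to mirror the reported findings: decentralized storage
    raises support; GPS vs. Bluetooth barely matters."""
    p = 0.40
    if storage == "decentralized":
        p += 0.10
    if location == "gps":
        p -= 0.01
    return 1 if random.random() < p else 0

profiles = [(s, l) for s in ("centralized", "decentralized")
                   for l in ("bluetooth", "gps")]

# Randomly assign each simulated respondent a profile, record the response
responses = {p: [] for p in profiles}
for _ in range(20000):
    p = random.choice(profiles)
    responses[p].append(simulate_response(*p))

def mean_support(attr_index, level):
    vals = [v for p, vs in responses.items()
            if p[attr_index] == level for v in vs]
    return sum(vals) / len(vals)

# Marginal effect of each attribute level, averaged over the other attribute
storage_effect = mean_support(0, "decentralized") - mean_support(0, "centralized")
location_effect = mean_support(1, "gps") - mean_support(1, "bluetooth")
print(f"decentralized vs centralized: {storage_effect:+.3f}")
print(f"GPS vs Bluetooth:             {location_effect:+.3f}")
```

With enough respondents, the estimated marginal effects recover the assumed preferences: a clear boost for decentralized storage and a negligible GPS-vs-Bluetooth difference.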
But the survey research also suggests that it’s easy for people to get confused about which app does what, despite the survey’s attempts to educate respondents about different app features. For example, the survey explained up front that such apps would not identify anyone by name. Still, a later question showed that 30 percent of respondents believed the apps would identify infected people by name and share those names with smartphone users who might have been exposed to them.
“What's interesting in our study is that even after informing respondents about how these apps work, we did a manipulation check to see if people understood and they don't always get it right,” Zhang says. “So in terms of public education, I think there's a lot more work to be done to correct some of the misinformation about these apps.”
There is certainly no shortage of confusion about digital contact tracing efforts. One prominent example is the Google and Apple Exposure Notification (GAEN) system. GAEN makes it easier for third-party developers to create apps that harness Bluetooth capabilities in both Android and iOS devices to exchange randomly-generated IDs whenever phone users are in relatively close proximity. Some countries have already built and deployed such apps based on the GAEN framework, but the handful of U.S. efforts attempting to do so have not yet been rolled out to the public.
The tech giants also worked to enable GAEN at an operating system level so that individuals can go into their smartphone system settings and choose to opt-in for receiving Bluetooth beacon notifications about having been in close proximity to a confirmed COVID-19 case who was using a GAEN-compatible app. If they hadn’t downloaded a GAEN-compatible app already, the notified users would then be prompted to download such an app to get more information.
But some smartphone users became alarmed when that option appeared in the system settings of their Android and iOS devices as part of routine software updates in June. Zhang recalled friends calling her and asking about whether Google or Apple had installed an app on their phones that was tracking them somehow. In reality, the GAEN system would not share an anonymized individual’s health status unless that person chose to opt-in via their phone’s system settings, downloaded a compatible app, and then manually entered the fact that they had tested positive for COVID-19 into the app.
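The decentralized matching that makes this design privacy-preserving can be sketched in a few lines. This is a simplified illustration loosely modeled on GAEN-style protocols; the key schedule and ID derivation below are assumptions for clarity, not the real specification. The point is that phones only ever exchange short-lived random identifiers, and matching happens on the device, not on a server.

```python
import hashlib
import os

def rolling_ids(daily_key, n=4):
    """Derive short-lived broadcast IDs from a secret daily key
    (a simplified stand-in for the real key derivation)."""
    return [hashlib.sha256(daily_key + bytes([i])).hexdigest()[:16]
            for i in range(n)]

# Phone A generates a secret daily key and broadcasts derived IDs over Bluetooth
a_key = os.urandom(16)
a_broadcast = rolling_ids(a_key)

# Phone B locally stores the IDs it overhears; no server ever sees them
b_heard = set(a_broadcast[:2])  # B was nearby for two broadcast windows

# A tests positive and chooses to upload only its daily key.
# B downloads the key, re-derives the IDs locally, and checks for overlap.
derived = set(rolling_ids(a_key))
exposed = bool(b_heard & derived)
print("exposure notification:", exposed)
```

Because only the daily keys of confirmed cases are published, a bystander’s server logs reveal nothing about who met whom; each phone decides for itself whether it was exposed.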
"It was sort of this like shadowy feature of the [operating system], that I think because it didn't the accompany the actual app, there was almost this suspicion that something was operating in the background without people knowing about it,” Kreps said.
The public reaction may have something to do with survey results showing that just 35 percent of Americans felt that Google and Apple should automatically install such an app on their phones through a software update. The GAEN system roll-out to Android and iOS devices did not automatically install such an app, but the distinction seems to have been lost on many people.
Furthermore, the survey research found no significant difference in people’s willingness to download an app based on whether it was developed by the Silicon Valley tech giants Apple and Google, by the U.S. Centers for Disease Control and Prevention (CDC), by a state government, or by university researchers. By comparison, an online survey commissioned by the software security company Avira found that American respondents tended to trust Apple and Google more than the government on contact tracing apps, even as overall support for contact tracing apps remained low.
“That's just a sort of public health disaster, because those kinds of episodes really undermine the trust that is necessary from the public for these kinds of apps to work,” Kreps says.
Data compiled by MIT Technology Review on national efforts to deploy contact tracing apps suggests that most have failed to gain traction among even a simple majority of their citizens. But Americans might feel more confident in exposure notification apps if the U.S. enacts more national and state laws that clearly protect individual privacy rights. In that spirit, U.S. lawmakers in Congress have introduced several bills that propose to regulate the health data collected by such apps and similar digital contact tracing technologies.
One of the more narrowly focused examples is a bipartisan bill, the Exposure Notification Privacy Act, introduced in the U.S. Senate. “It makes sure that the app is voluntary and it prohibits app developers from using the data collected by the app for commercial purposes, so I think that's moving in the right direction,” Zhang explains. But it’s unclear if such proposals can gain traction while U.S. local and state governments grapple with fresh COVID-19 outbreaks in the wake of attempts to reopen businesses.
The survey suggests that American support for contact tracing apps, at 42 percent, sits squarely behind approval ratings for several other public health surveillance measures, notably traditional contact tracing and temperature checks. (Both of those measures received the backing of more than 50 percent of respondents.) But public opinion is not necessarily set in stone, if the evolving views reflected in the two survey rounds are any indication.
“Temperature checks weren't ranked number one in terms of public support last time and now it is,” Zhang says. “Maybe because it's become more common, people are more accepting of it.”
The survey also found that political affiliation plays a role in American views on some public health surveillance measures. Overall, more Democrats tended to favor many of those measures compared with their Republican counterparts. But the study found no significant partisan difference in support for contact tracing apps. That may hint at an opportunity for a bipartisan push to help Americans better understand and potentially try such apps, the researchers suggest.
”Who would have thought the masks would be politicized?” Kreps says. “But it suggests that probably everything eventually will be,” she predicts. “Given the partisan polarization in the political landscape, it suggests that if [digital contact tracing] is going to be successful, public health authorities might want to get out in front of it to depoliticize it to the extent possible.”
Researchers have been banking on millions of citizen-scientists around the world to help identify new treatments for COVID-19. Much of that work is being done through distributed computing projects that utilize the surplus processing power of PCs to carry out various compute-intensive tasks.
One such project is Folding@home, which helped model how the spike protein of SARS-CoV-2 binds with the ACE2 receptor of human cells to cause infection. Started at Stanford University in 2000, Folding@home is currently based at the Washington University School of Medicine in St. Louis; it undertakes research into various cancers, and neurological and infectious diseases by studying the movement of proteins.
Proteins are made up of a sequence of amino acids that fold into specific structural forms. A protein’s shape is critical in its ability to undertake its specific function. Viruses have proteins that enable them to suppress a host’s immune system, invade cells, and replicate.
Greg Bowman, director of Folding@home, says, “We’re basically building maps of what these viral proteins can do… [The distributed computing network] is like having people around the globe jump in their cars and drive around their local neighborhoods and send us back their GPS coordinates at regular intervals. If we can develop detailed maps of these important viral proteins, we can identify the best drug compounds or antibodies to interfere with the virus and its ability to infect and spread.”
After COVID-19 was declared a global pandemic, Folding@home prioritized research related to the new virus. The number of devices running its software shot up from some 30,000 to over 4 million as a result. Tech behemoths such as Microsoft, Amazon, AMD, Cisco, and others have loaned computing power to Folding@home. The European Organization for Nuclear Research (CERN) has freed up 10,000 CPU cores to add to the project, and the Spanish premier soccer league La Liga has chipped in with its supercomputer that is otherwise dedicated to fighting piracy.
While Folding@home models how proteins fold, another distributed computing project called Rosetta@home—this one at the University of Washington Institute for Protein Design (IPD)—predicts the final folded shape of the protein. Though the projects are quite different, they are complementary.
“A big difference…is that the Rosetta@home distributed computing is…directly contributing to the design of new proteins… These calculations are trying to craft brand new proteins with new functions,” says Ian C. Haydon, science communications manager and former researcher at IPD. He adds that the Rosetta@home community, which comprises about 3.3 million instances of the software, has helped the research team come up with more than 2 million candidate antiviral proteins that recognize the coronavirus’s spike protein and bind very tightly to it. When that happens, the spike is no longer able to recognize or infect a human cell.
“At this point, we’ve tested more than 100,000 of what we think are the most promising options,” Haydon says. “We’re working with collaborators who were able to show that the best of these antiviral proteins…do keep the coronavirus from being able to infect human cells…. [What’s more,] they have a potency that looks at least as good if not better than the best known antibodies.”
There are many possible outcomes for this line of research, Haydon says. “Probably the fastest thing that could emerge… [is a] diagnostic…tool that would let you detect whether or not the virus is present.” Since this doesn’t have to go into a human body, the testing and approval process is likely to be quicker. “These proteins could [also] become a therapy that…slows down or blocks the virus from being able to replicate once it’s already in the human body… They may even be useful as prophylactic.”
While so many of us are working at home during the coronavirus pandemic, we do worry that serendipitous hallway conversations aren’t happening.
Last year, before the pandemic, it was one of those conversations that led researchers at ETH Zurich to develop a way of making chocolates shimmer with color—without any coloring agents or other additives.
The project, announced in December, involves what the scientists call “structural color”. The team indicated that it creates colors in a way similar to what a chameleon does—that is, using the structure of its skin to scatter a particular wavelength of light. The researchers have yet to release details, but Alissa M. Fitzgerald, founder of MEMS product development firm AMFitzgerald, has a pretty good guess.
She explains that iridescence in nature (like that inside oyster shells and on the wings of butterflies) involves nanoscale patterns in the form of lines, plates, or holes. To make iridescent chocolate, she surmises, the researchers likely created a nanotech chocolate mold, using e-beam lithography to etch lines about 100 nm wide on a glass or silicon wafer.
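The physics behind such a surface is the classic diffraction grating equation, d·sin(θ) = m·λ: a periodic pattern of pitch d sends different wavelengths toward a viewer at different angles, which is why the color shifts as the chocolate tilts. The quick calculation below uses an assumed 800-nm groove spacing purely for illustration; the ETH team has not published its actual dimensions.

```python
import math

def diffracted_wavelength_nm(pitch_nm, angle_deg, order=1):
    """Wavelength sent toward a viewer at angle_deg by a grating of the
    given pitch, from the grating equation d*sin(theta) = m*lambda."""
    return pitch_nm * math.sin(math.radians(angle_deg)) / order

pitch = 800.0  # assumed groove spacing in nanometers (illustrative only)
for angle in (30, 45, 60):
    wl = diffracted_wavelength_nm(pitch, angle)
    print(f"viewing angle {angle:2d} deg -> {wl:5.0f} nm")
```

With that assumed pitch, sweeping the viewing angle from 30 to 60 degrees walks the first-order reflection from violet (about 400 nm) through green to deep red (about 693 nm), which is exactly the shimmering rainbow effect described.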
The ETH researchers hope to get their technique for coloring chocolate out of the lab and into the mass market. Meanwhile, during the pandemic shutdown, some tech professionals have been playing with rainbow chocolates of their own, like software engineer and startup founder Samy Kamkar, recently profiled in the New York Times. (You can only bake so much bread, after all.)
Chocolate is only the beginning for nanocolors, Fitzgerald says: “The combination of nano- and micro-technology fabrication techniques with atypical materials like food, fabric, paper and plastic is going to lead to some really exciting new products as well as improve or enhance existing products. For example, Teijin Fiber Japan uses structural color methods to make “Morphotex” fabric, named after the iridescent Morpho butterfly, recently demonstrated in the concept Morphotex Dress. Everyday objects are poised to benefit from advances in nanotechnology.”
Intel and the National Science Foundation (NSF) have awarded a three-year grant to a joint research team from the University of Southern California (USC) and the University of California, Berkeley, to study how to deliver distributed machine learning computations over wireless edge networks, enabling a broad range of new wireless applications. The award was part of Intel’s and the NSF’s Machine Learning for Wireless Networking Systems effort, a multi-university research program to accelerate “fundamental, broad-based research” on developing wireless-specific machine learning techniques that can be applied to new wireless systems and architecture design.
Machine learning may help manage the size and complexity of next-generation wireless networks. Intel and the NSF are focusing on efforts to harness discoveries in machine learning to design new algorithms, schemes, and communication protocols that can handle the density, latency, and throughput demands of complex networks. In total, US $9 million has been awarded to 15 research teams.
The USC and UC Berkeley team will focus on enhanced federated learning over wireless communications. Federated learning refers to performing machine learning securely across all the data collected by hundreds of millions of devices in a large network. Specifically, the team will be researching how to apply federated learning to devices at the edge of the network, which don’t have much in the way of computational resources. The team is led by Salman Avestimehr, a professor in USC’s electrical and computer engineering department, and Kannan Ramchandran, a professor in UC Berkeley’s electrical engineering and computer science department.
“AI [artificial intelligence] and machine learning has been used in a variety of fields. Why not use it to design better wireless networks?” Avestimehr said.
Many apps and services that use machine learning—such as image processing or transaction history analysis—complete their computations in the cloud because very few devices can handle the heavy workload alone. Demand for these kinds of advanced connected services and devices is expected to grow as 5G networks become more available.
While higher speeds are often touted for next-generation networks, just as important is the scalability to meet demand. If connectivity is poor or bandwidth is low, uploading large data sets is not feasible. Machine learning across thousands, or millions, of devices means a lot of communication between devices. Breaking out the workload across multiple cloud services doesn’t significantly reduce the amount of time it takes to run the training algorithm because at least half of the time is spent on machines communicating with each other, Avestimehr said.
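The basic federated averaging loop that underlies these schemes is simple to sketch. In the hypothetical toy example below (a one-parameter model, invented data, not the USC/Berkeley system), each simulated device runs a few gradient steps on data that never leaves it, and the server only ever averages the resulting model weights.

```python
import random

random.seed(1)
TRUE_W = 2.0  # the underlying relationship y = 2x the devices will learn

# Each client's private data stays on the "device"
clients = []
for _ in range(10):
    xs = [random.uniform(-1, 1) for _ in range(50)]
    ys = [TRUE_W * x + random.gauss(0, 0.1) for x in xs]
    clients.append((xs, ys))

def local_update(w, xs, ys, lr=0.1, epochs=5):
    """A few local gradient steps (squared-error loss) on one device."""
    for _ in range(epochs):
        grad = sum(2 * (w * x - y) * x for x, y in zip(xs, ys)) / len(xs)
        w -= lr * grad
    return w

w_global = 0.0
for _ in range(20):
    # Server broadcasts the model; devices train locally; server averages.
    # Only weights cross the network, never the raw (x, y) data.
    local_ws = [local_update(w_global, xs, ys) for xs, ys in clients]
    w_global = sum(local_ws) / len(local_ws)

print(f"learned weight: {w_global:.2f} (true {TRUE_W})")
```

Even this toy version shows the communication pattern that matters at scale: every round costs one model download and one upload per device, which is why bandwidth and stragglers, not raw compute, become the bottleneck over wireless links.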
There are also security and privacy concerns because users may not want their data to leave their devices. Future wireless networks need to meet the density, latency, throughput, and security requirements of these applications.
State-of-the-art federated learning schemes are currently limited to hundreds of users, Avestimehr said. “There’s a long way to get to one million.”
Avestimehr and Ramchandran’s research will focus on deploying machine learning services closer to where the data is generated on the wireless edge. They hope that will alleviate bandwidth consumption, increase privacy, reduce latency, and boost the scalability of using machine learning on wireless networks. Their research goal is to apply a “coding-centric approach” to enhance federated learning over wireless networks.
Coded computing is a framework pioneered by research groups at USC and UC Berkeley led by Avestimehr and Ramchandran that takes the concepts and tools from information theory and coding that made communication networks efficient and uses them to solve problems in information systems. The problems they will be looking at are the current performance bottlenecks in large-scale distributed computing and machine learning. For example, coding theory has specific codes which are used for error detection and correction, data compression, and increasing the data rate. An error correction code adds extra or redundant bits to make the transmission of data more robust when moving over unreliable or noisy channels. The research teams will adapt these concepts to work with distributed computing and machine learning.
The research will build on Avestimehr’s past work on a DARPA-funded project to enable coded computing for distributed learning across geographically dispersed networks. His team injected “coded” redundant computations into the network to make computing efficient, scalable, and resilient.
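The idea of injecting coded redundancy can be shown with a deliberately tiny example. This is an illustration of the general principle only, not the USC/Berkeley schemes, which are far more sophisticated: a matrix-vector product is split across three workers, with one worker assigned a redundant "parity" block, so the full result survives any single straggler.

```python
def mat_vec(A, x):
    """Plain matrix-vector product over lists."""
    return [sum(a * b for a, b in zip(row, x)) for row in A]

A = [[1, 2], [3, 4], [5, 6], [7, 8]]
x = [1, -1]

# Split the job in two, and build a parity block A1 + A2 for a third worker
A1, A2 = A[:2], A[2:]
A_parity = [[a + b for a, b in zip(r1, r2)] for r1, r2 in zip(A1, A2)]

# Suppose worker 2 (assigned A2 @ x) straggles and never responds.
# The other two workers' results are enough to decode.
r1 = mat_vec(A1, x)
r_parity = mat_vec(A_parity, x)

# Decode the missing piece: A2 @ x = (A1 + A2) @ x - A1 @ x
r2_recovered = [p - a for p, a in zip(r_parity, r1)]

result = r1 + r2_recovered
print("recovered A @ x:", result)
```

The redundancy costs one extra worker's worth of compute, but the job finishes as soon as any two of the three respond, which is exactly the straggler resilience that matters in large distributed training runs.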
After scalability, the second challenge is performing machine learning in a way that preserves privacy so that the input and output are both protected. Given the results of a training algorithm and the actual model, it is possible to invert the process and learn the original data. The user may be confident the app never took the images off the device, but the fact that the results of the computations are uploaded and can be reversed isn’t ideal.
“You think you just ran an algorithm on your data and gave me the results, but I know what images you had,” Avestimehr said.
Past work has looked at how to keep the data private, and how to make the algorithm robust so that it would be able to handle bad data. For privacy concerns, there needs to be a way to use the data for training the machine learning model without seeing the actual data. But there also has to be a way to trust that the computation result is correct, and not the result of manipulated data or an algorithm used improperly. It is a chicken-and-egg problem, Avestimehr said. One of the focus areas for the research is to make both possible by keeping the data and model private while running the training algorithm across multiple systems.
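One standard building block for training without seeing the data is secure aggregation with pairwise additive masks. The sketch below is a simplified illustration of the concept (real protocols also handle dropped clients and use cryptographic key agreement, and this is not the team's actual protocol): each pair of clients agrees on a random mask that one adds and the other subtracts, so individual updates look like noise while their sum is exact.

```python
import random

random.seed(2)

# Each client's true model update (these stay secret from the server)
updates = {"alice": 3.0, "bob": -1.5, "carol": 4.5}
names = list(updates)

# Each pair of clients agrees on a random mask; one adds it, the other
# subtracts it, so every mask cancels out of the total.
masked = dict(updates)
for i in range(len(names)):
    for j in range(i + 1, len(names)):
        m = random.uniform(-100, 100)
        masked[names[i]] += m
        masked[names[j]] -= m

# The server sees only the masked values, yet their sum is the true sum
server_sum = sum(masked.values())
true_sum = sum(updates.values())
print(f"masked values (individually meaningless): {masked}")
print(f"server-side sum: {server_sum:.2f} (true {true_sum:.2f})")
```

The server can average the updates it needs for training while each individual masked value reveals essentially nothing about that client's data.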
“The fuel of machine learning is the data,” Avestimehr said. “The better data, the more data that you have, the better models you have.”
THE INSTITUTE Members of the IEEE Kenyatta University Student Branch, in Nairobi, Kenya, have designed and built a low-cost ventilator to combat COVID-19. The project aims to address the shortage of mechanical ventilators in Kenya, says IEEE Student Member Fidel Makatia Omusilibwa. He is studying electrical and electronics engineering at Kenyatta University and is chair of the student branch there. He leads the ventilator team, which includes 15 students from the university’s schools of engineering, medicine, nursing, and pharmacy. He says the university also offered assistance in a number of ways.
The Institute asked Omusilibwa about the ventilator.
This interview has been edited and condensed for clarity.
Explain how your project works.
Our ventilator goes by the name Tiba-Vent. Tiba is a Swahili word for cure. It makes use of the principles of ventilation, fluid mechanics, control engineering, software engineering, and signal processing.
The ventilator has two inputs for clean, compressed air and oxygen gas. The two are blended in a regulated tank and then passed through an oxygen sensor that controls the blending depending on settings. Two valves are used to control the air passed to and from the patient.
The air is humidified to make it warm and moist before inspiration. The exhaled air from the patient is passed through a filter. Its pressure is governed by the exhalation valve.
Pressure sensors and flow sensors monitor pressures, flow rate, and volume. A graphical user interface has been implemented; through it, the doctor can interact with the machine and set parameters like tidal volume [the volume of air entering and exiting the lungs with each breath] and FiO2 [the concentration of oxygen a person inhales]. Alarms have also been integrated to give alerts for a number of incidents, such as the depletion of the oxygen supply to the ventilator or a delivered pressure higher or lower than the doctor intended.
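The closed-loop oxygen blending described here can be illustrated with a minimal control sketch. Everything in it (the mixing model, the proportional gain, the valve variable) is invented for illustration and is not taken from the Tiba-Vent design: a controller nudges the O2 valve until the sensed oxygen fraction matches the doctor's FiO2 setpoint.

```python
def sensed_fio2(o2_fraction):
    """Toy blender model: mix o2_fraction of pure O2 with room air (21% O2)."""
    return o2_fraction * 1.0 + (1 - o2_fraction) * 0.21

def control_loop(setpoint, steps=200, gain=0.5):
    """Proportional controller: adjust the O2 valve toward the setpoint."""
    valve = 0.0  # fraction of flow drawn from the O2 inlet
    for _ in range(steps):
        error = setpoint - sensed_fio2(valve)
        valve = min(1.0, max(0.0, valve + gain * error))
    return sensed_fio2(valve)

for target in (0.30, 0.60, 0.90):
    print(f"target FiO2 {target:.2f} -> settled at {control_loop(target):.3f}")
```

A real ventilator controller would add integral action, sensor filtering, and safety interlocks tied to the alarms described above, but the feedback principle is the same.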
Tiba-Vent is portable and has a backup [battery] in case of a power failure.
Why is Tiba-Vent better suited to increasing the number of ventilators in the country than existing ventilators?
It’s easier to manufacture since 90 percent of the materials used are acquired locally in Kenya. Also, since it’s made locally, it becomes cheaper than imported ones for two reasons: less tax is charged and its [design is] optimized to be cost effective. Cost was one of the parameters on the table during design [discussions]. This will definitely increase the number of ventilators in Kenya by a large margin.
What challenges have you faced, and how did you overcome them?
The main challenge we faced initially was a lack of finances to commence prototyping. Therefore, we did everything on paper and created simulations. Our university got involved and sponsored us in the implementation of the prototype.
Also, we did not have an actual ventilator to refer to [when we first started designing Tiba-Vent]. We had to create a design from the basic principles of mechanical ventilation. Due to this, we made a few mistakes that cost us time. Later, our lecturers joined us and guided us, which made the prototyping phase faster.
How close are you to the final product?
It is currently in clinical trials.
What is the potential impact of the technology?
Tiba-Vent aims to increase the number of ventilators in Kenya from 500 units to more than 30,000. It will aid in treating COVID-19 and other respiratory ailments. It will also help make Kenya a manufacturing country for medical equipment.
How has your university supported you?
The university gave us a space to work in, [room] accommodations for the days we work on the project, meals, and access to any university facility we might need. It also bought all the resources we required to make the ventilator a reality, including tools and equipment.
The university also formed a committee of lecturers from the schools of engineering, medicine, pharmacy, nursing, and economics to mentor and guide us. The team also included professionals from said fields. The university also paid for materials we needed.
How many people are involved, and how many IEEE members are involved?
Out of the 15 students involved, 11 are IEEE student members. The students are studying biomedical engineering, biosystems engineering, civil engineering, electrical and electronics engineering, and mechanical engineering. We belong to the IEEE Kenya Section. We are ambassadors of advancing technology for humanity.
Attention IEEE members: are you part of a team responding to the COVID-19 crisis? We want to hear from you! Wherever you are and whatever you are doing, if you are helping deal with the outbreak in some way, let us know. Send us accounts of anywhere from 200 to 800 words, or simply give us a rough idea of what you are doing and your contact information. Write to: firstname.lastname@example.org.
Video Friday is your weekly selection of awesome robotics videos, collected by your Automaton bloggers. We’ll also be posting a weekly calendar of upcoming robotics events for the next few months; here’s what we have so far (send us your events!):
Let us know if you have suggestions for next week, and enjoy today’s videos.
Four-legged HyQ balancing on two legs. Nice results from the team at IIT’s Dynamic Legged Systems Lab. And we can’t wait to see the “ninja walk,” currently shown in simulation, implemented with the real robot!
The development of balance controllers for legged robots with point feet remains a challenge when they have to traverse extremely constrained environments. We present a balance controller that has the potential to achieve line walking for quadruped robots. Our initial experiments show the 90-kg robot HyQ balancing on two feet and recovering from external pushes, as well as some changes in posture achieved without losing balance.
[ IIT ]
Ava Robotics’ telepresence robot has been beheaded by MIT, and it now sports a coronavirus-destroying UV array.
UV-C light has proven to be effective at killing viruses and bacteria on surfaces and aerosols, but it’s unsafe for humans to be exposed to it. Fortunately, Ava’s telepresence robot doesn’t require any human supervision. Instead of the telepresence top, the team subbed in a UV-C array for disinfecting surfaces. Specifically, the array uses short-wavelength ultraviolet light to kill microorganisms and disrupt their DNA in a process called ultraviolet germicidal irradiation. The complete robot system is capable of mapping the space — in this case, GBFB’s warehouse — and navigating between waypoints and other specified areas. In testing the system, the team used a UV-C dosimeter, which confirmed that the robot was delivering the expected dosage of UV-C light predicted by the model.
[ MIT ]
While it’s hard enough to get quadrupedal robots to walk in complex environments, this work from the Robotic Systems Lab at ETH Zurich shows some impressive whole body planning that allows ANYmal to squeeze its body through small or weirdly shaped spaces.
[ RSL ]
Engineering researchers at North Carolina State University and Temple University have developed soft robots inspired by jellyfish that can outswim their real-life counterparts. More practically, the new jellyfish-bots highlight a technique that uses pre-stressed polymers to make soft robots more powerful.
The researchers also used the technique to make a fast-moving robot that resembles a larval insect curling its body, then jumping forward as it quickly releases its stored energy. Lastly, the researchers created a three-pronged gripping robot – with a twist. Most grippers hang open when “relaxed,” and require energy to hold on to their cargo as it is lifted and moved from point A to point B. But this claw’s default position is clenched shut. Energy is required to open the grippers, but once they’re in position, the grippers return to their “resting” mode – holding their cargo tight.
[ NC State ]
As control skills increase, we are more and more impressed by what a Cassie bipedal robot can do. Those who have been following our channel know that we always show the limitations of our work. So while there is still much to do, you gotta like the direction things are going. Later this year, you will see this controller integrated with our real-time planner and perception system. Autonomy with agility! Watch out for us!
GITAI’s S1 arm is a little less exciting than their humanoid torso, but it looks like this one might actually be going to the ISS next year.
Here’s how the humanoid would handle a similar task:
[ GITAI ]
If you need a robot that can lift 250 kg at 10 m/s across a workspace of a thousand cubic meters, here’s your answer.
[ Fraunhofer ]
Penn engineers, with funding from the National Science Foundation, have developed nanocardboard plates that levitate when bright light is shone on them. This fleet of tiny aircraft could someday explore the skies of other worlds, including Mars. The thinner atmosphere there would give the flyers a boost, enabling them to carry payloads ten times as massive as they are and making them an efficient, lightweight alternative to the Mars helicopter.
[ UPenn ]
Erin Sparks, assistant professor in Plant and Soil Sciences, dreamed of a robot she could use in her research. A perfect partnership was formed when Adam Stager, then a mechanical engineering Ph.D. student, reached out about a robot he had a gut feeling might be useful in agriculture. The pair moved forward with their research with corn at the UD Farm, using the robot to capture dynamic phenotyping information of brace roots over time.
[ Sparks Lab ]
This is a video about robot spy turtles but OMG that bird drone landing gear.
[ PBS ]
If you have a DJI Mavic, you now have something new to worry about.
[ DroGone ]
I was able to spot just one single person in the warehouse footage in this video.
[ Berkshire Grey ]
Flyability has partnered with the ROBINS Project to help fill gaps in the technology used in ship inspections. Watch this video to learn more about the ROBINS project and how Flyability’s drones for confined spaces are helping make inspections on ships safer, cheaper, and more efficient.
[ Flyability ]
In this video, a mission of the Alpha Aerial Scout of Team CERBERUS during the DARPA Subterranean Challenge Urban Circuit event is presented. The Alpha Robot operates inside the Satsop Abandoned Power Plant and performs autonomous exploration. This deployment took place during the 3rd field trial of team CERBERUS during the Urban Circuit event of the DARPA Subterranean Challenge.
[ ARL ]
More excellent talks from the remote Legged Robots ICRA workshop: we’ve posted three here, but there are several other good talks this week as well.
I’ve completely lost track of time over the past couple of months (it’s been months, right?), but somehow, the folks over at Festo have held it together well enough to continue working on their Bionic Learning Network robots. Every year or two, Festo shows off some really quite spectacular bio-inspired creations, including robotic ants and butterflies, hopping kangaroos, rolling spiderbots, flying penguins and flying jellyfish, and much more. This year, Festo is demonstrating two new robots: BionicMobileAssistant (a “mobile robot system with pneumatic gripping hand”), and BionicSwift, a swarm of beautiful aerial birds.
The flight of birds has always fascinated humankind. In Festo’s Bionic Learning Network, flying according to the natural world also has a long tradition. With the construction of the BionicSwifts, Festo is consistently continuing the further development of its bionic flying objects.
The BionicMobileAssistant moves autonomously in space and can, thanks to a neural network, independently recognize objects, grasp them adaptively, and work with people. The mobile assistance system has a modular structure and consists of three subsystems: a ballbot, an electric robotic arm, and the BionicSoftHand 2.0, a pneumatic gripper that is inspired by the human hand.
Let’s talk about BionicMobileAssistant first, because it’s probably the most practical (albeit least exotic). Developed in partnership with ETH Zurich, it’s a combination of three modules: the mobile base (a ballbot), a robot arm (called DynaArm), and the BionicSoftHand 2.0, a pneumatic hand that was shown last year. The ballbot is a fairly familiar design; it’s nice because it’s completely omnidirectional on a very small footprint, with the disadvantage of being unstable, requiring constant control input to keep from falling over. It’s particularly effective on smooth and mostly flat surfaces, especially in tight quarters, and has the added advantage of being able to handle impulses as long as it has room to maneuver.
For its size, the DynaArm is impressive. It’s 4 DoF with a payload of 8 kg, but the entire arm weighs under 8 kg itself, with each of the motor assemblies (motor, gear unit, motor control electronics, sensors) weighing just 1 kg. On the end of the arm, the BionicSoftHand 2.0 is pneumatic, and covered in a fabric with 113 embedded tactile sensors. Some RealSense cameras make the whole thing at least a little bit autonomous, although these robots that Festo puts together tend to focus more on design rather than autonomy.
The BionicSwifts are not the first birds that Festo has developed, but those flexible, feathered wings are particularly lovely.
To execute flight maneuvers as true to life as possible, the wings are modeled on the plumage of real birds. The individual lamellae are made of an ultralight, flexible but very robust foam and lie on top of each other like shingles. Connected to a carbon quill, they are attached to the actual hand and arm wings as in the natural model.
During the wing upstroke, the individual lamellae fan out so that air can flow through the wing. This means that the birds need less force to pull the wing up. During the downstroke, the lamellae close up so that the birds can generate more power to fly. Due to this close-to-nature replica of the wings, the BionicSwifts have a better flight profile than previous wing-beating drives.
Each BionicSwift (there are five in the flock) weighs a mere 42 grams, of which 6 g is a battery. One motor controls the wing flapping, while just two other motors are required to actuate the flight surfaces for steering. Flight time is a solid seven minutes.
I like how Festo justifies their development work on BionicSwift by saying “the intelligent networking of flight objects and GPS routing makes for a 3D navigation system that could be used in the networked factory of the future.” Um, sure, but I don’t think you needed to develop a beautiful flying robot bird to test out that concept, right? But whatever business case Festo needs to make to keep their bionic learning network up and running, I’m in favor of.
[ Festo ]
As the COVID-19 pandemic began its explosive spread through the United States, tech workers were among the first to switch to working at home in massive numbers. By early March, before regional stay-at-home orders came into play, most tech professionals at Microsoft and Amazon had switched to working at home; others would soon follow. Since then, Twitter announced that it would offer work-at-home as a permanent option to many of its employees, and Facebook also began planning for a large work-from-anywhere staff, but indicated that salaries would be adjusted to account for regional costs of living. On the other end of the spectrum, Apple developed a plan to bring employees back to the office in phases, starting this month.
Of course, with the world still in the midst of the coronavirus pandemic, nobody really knows exactly what the workplace will look like if, as expected, a vaccine proves protective and life in general returns to normal.
Blind, a company that operates private social networks for tech employees, reached out to its members several times during the past few months to find out just how remote work is going for them—and whether permanent remote work would open up the possibility of moving to a less tech-centric part of the country or world. I had a few additional questions, and Blind distributed those for me as a short survey in late June.
In that survey, we included an open-ended question about what tech professionals miss about office life. Some of the answers were a little surprising, and give a clue as to what may be lacking in the typical home office—like standing desks and giant whiteboards. The Spectrum/Blind survey received 2951 responses from 37 companies in the U.S., which Blind sorted by selected regions and companies.
Putting all this data together paints a picture of a tech workforce that is generally OK with staying at home. Facebook employees are the rare exception: Fewer may be looking for a permanent work-at-home option than CEO Mark Zuckerberg anticipates. By contrast, Apple employees, unlikely to be offered work-at-home options, would actually love the opportunity.
Should working at home turn permanent, some tech employees would consider relocating, particularly those who live in expensive areas. In a small survey with just over 1800 respondents, 70 percent of Bay Area residents would consider relocating—but half of those would only do so without a pay cut. In a separate poll of 2768 tech workers, Blind found that 66 percent of Bay Area tech workers would consider relocating, compared with 69 percent of tech professionals in New York and 63 percent in Seattle; the salary question wasn’t asked.
Separately, a recent survey of 2300 tech workers in the U.S., Canada, France, and the United Kingdom by Hired found that 53 percent of these tech workers would be inclined to move to an area with a lower cost of living if work from home became permanent. Some 40 percent generally supported cost-of-living adjustments in salaries, but only 32 percent would be willing to take that kind of pay cut, according to the Hired survey. And, despite expressing an interest in cheaper areas, when asked to name the city to which they’d be most likely to relocate, their choices put New York, Seattle, and the San Francisco Bay Area on top.
Post-coronavirus, however, most do expect—indeed want—to go back to the office at least some of the time. According to Blind’s survey, the sweet spot is one to two days a week, followed by three to four days a week; those who would choose to give up either exclusively working at home or only working in the office were a far smaller group. Hired’s survey came up with similar numbers; half of the tech professionals responding want to return to their office at least once a week, but only seven percent want to go in every day.
Working at home does have its challenges, according to Blind’s survey. Most tech professionals found working at home presented challenges, with distractions impacting their focus generally being the biggest problem. However, there were distinct differences among professionals at different companies, with Apple and Amazon employees, for example, struggling more with work/life balance, and Cisco employees feeling the negative effects of isolation.
What do tech professionals miss most about the office? Here’s where I asked an open-ended question, and got some interesting answers, with some regional differences.
It was no surprise that many respondents commented that they miss interaction with their colleagues, socializing, friendship, bonding, hallway conversations, and general chitchat. Many also missed the separation between home and work and more contained work hours. The physical environment at the office came into play as well—the space to spread out, the standing desks—and oh, those whiteboards! The free food, coffee, and other perks popped up more often in comments from Bay Area tech workers, though professionals in all regions missed lunches with their colleagues.
About the food, coffee, massages, and other perks: we asked our survey respondents whether they think those will be coming back as offices restructure themselves post-pandemic. Most think perks will make a comeback, though Bay Area tech workers are more confident about that than those in other regions. Interestingly, according to Hired’s survey, while people might be willing to give up those perks, 43 percent said they would expect to be compensated for that loss in additional salary or other benefits.
It will be a while before we truly know whether the changes made to tech work during the pandemic are permanent, or when exactly we will return to normal, or a new normal. But companies are making plans to start bringing employees back.
Later this month, satellite-based remote-sensing in the United States will be getting a big boost. Not from a better rocket, but from the U.S. Commerce Department, which will be relaxing the rules that govern how companies provide such services.
For many years, the Commerce Department has been tightly regulating those satellite-imaging companies, because of worries about geopolitical adversaries buying images for nefarious purposes and compromising U.S. national security. But the newly announced rules, set to go into effect on July 20, represent a significant easing of restrictions.
Previously, obtaining permission to operate a remote-sensing satellite has been a gamble—the criteria by which a company’s plans were judged were vague, as was the process, an inter-agency review requiring input from the U.S. Department of Defense as well as the State Department. But in May of 2018, the Trump administration’s Space Policy Directive-2 made it apparent that the regulatory winds were changing. In an effort to promote economic growth, the Commerce Department was commanded to rescind or revise regulations established in the Land Remote Sensing Policy Act of 1992, a piece of legislation that compelled remote-sensing satellite companies to obtain licenses and required that their operations not compromise national security.
Following that directive, in May of 2019 the Commerce Department issued a Notice of Proposed Rulemaking in an attempt to streamline what many in the satellite remote-sensing industry saw as a cumbersome and restrictive process.
But the proposed rules didn’t please industry players. To the surprise of many of them, though, the final rules announced last May were significantly less strict. For example, they allow satellite remote-sensing companies to sell images of a particular type and resolution if substantially similar images are already commercially available in other countries. The new rules also drop earlier restrictions on nighttime imaging, radar imaging, and short-wave infrared imaging.
On June 25th, Commerce Secretary Wilbur Ross explained at a virtual meeting of the National Oceanic and Atmospheric Administration’s Advisory Committee on Commercial Remote Sensing why the final rules differ so much from what was proposed in 2019:
Last year at this meeting, you told us that our first draft of the rule would be detrimental to the U.S. industry and that it could threaten a decade’s worth of progress. You provided us with assessments of technology, foreign competition, and the impact of new remote sensing applications. We listened. We made the case with our government colleagues that the U.S. industry must innovate and introduce new products as quickly as possible. We argued that it was no longer possible to control new applications in the intensifying global competition for dominance.
In other words, the cat was already out of the bag: there’s no sense prohibiting U.S. companies from offering satellite-imaging services already available from foreign companies.
One area where the new rules remain relatively strict, though, concerns the taking of pictures of other objects in orbit. Companies that want to offer satellite inspection or maintenance services would need rules that allow what regulators call “non-Earth imaging.” But there are national security implications here, because pictures obtained in this way could blow the cover of U.S. spy satellites masquerading as space debris.
While the extent to which spy satellites cloak themselves in the guise of space debris isn’t known, it seems clear that this would be an ideal tactic for avoiding detection. That strategy won’t work, though, if images taken by commercial satellites reveal a radar-reflecting object to be a cubesat instead of a mangled mass of metal.
Because of that concern, the current rules demand that companies limit the detailed imaging of other objects in space to ones for which they have obtained permission from the satellite owner and from the Secretary of Commerce at least 5 days in advance of obtaining images. But that stipulation raises a key question: Who should a satellite-imaging company contact if it wants to take pictures of a piece of space debris? Maybe imaging space debris would only require the permission of the Secretary of Commerce. But then, would the Secretary ever give such a request a green light? After all, if permission were typically granted, instances when it wasn’t would become suspicious.
More likely, imaging space debris—or spy satellites trying to pass as junk—is going to remain off the table for the time being. So even though the new rules are a welcome development to most commercial satellite companies, some will remain disappointed, including those companies that make up the Consortium for the Execution of Rendezvous and Servicing Operations (CONFERS), which had recommended that “the U.S. government should declare the space domain as a public space and the ability to conduct [non-Earth imaging] as the equivalent of taking photos of public activities on a public street.”
An inherent characteristic of a robot (I would argue) is embodied motion. We tend to focus on motion rather a lot with robots, and the most dynamic robots get the most attention. This isn’t to say that highly dynamic robots don’t deserve our attention, but there are other robotic philosophies that, while perhaps less visually exciting, are equally valuable under the right circumstances. Magnus Egerstedt, a robotics professor at Georgia Tech, was inspired by some sloths he met in Costa Rica to explore the idea of “slowness as a design paradigm” through an arboreal robot called SlothBot.
Since the robot moves so slowly, why use a robot at all? It may be very energy-efficient, but it’s definitely not more energy efficient than a static sensing system that’s just bolted to a tree or whatever. The robot moves, of course, but it’s also going to be much more expensive (and likely much less reliable) than a handful of static sensors that could cover a similar area. The problem with static sensors, though, is that they’re constrained by power availability, and in environments like under a dense tree canopy, you’re not going to be able to augment their lifetime with solar panels. If your goal is a long-duration study of a small area (over weeks or months or more), SlothBot is uniquely useful in this context because it can crawl out from beneath a tree to find some sun to recharge itself, sunbathe for a while, and then crawl right back again to resume collecting data.
SlothBot is such an interesting concept that we had to check in with Egerstedt with a few more questions.
IEEE Spectrum: Tell us what you find so amazing about sloths!
Magnus Egerstedt: Apart from being kind of cute, the amazing thing about sloths is that they have carved out a successful ecological niche for themselves where being slow is not only acceptable but actually beneficial. Despite their pretty extreme low-energy lifestyle, they exhibit a number of interesting and sometimes outright strange behaviors. And, behaviors having to do with territoriality, foraging, or mating look rather different when you are that slow.
Are you leveraging the slothiness of the design for this robot somehow?
Sadly, the sloth design serves no technical purpose. But we are also viewing the SlothBot as an outreach platform to get kids excited about robotics and/or conservation biology. And having the robot look like a sloth certainly cannot hurt.
Can you talk more about slowness as a design paradigm?
The SlothBot is part of a broader design philosophy that I have started calling “Robot Ecology.” In ecology, the connections between individuals and their environments/habitats play a central role. And the same should hold true in robotics. The robot design must be understood in the environmental context in which it is to be deployed. And, if your task is to maintain a long-term, persistent presence in a slowly varying environment, like a monitoring task, being slow seems like the right way to go. I can imagine slow robots being out on farm fields for entire growing cycles, or suspended on the ocean floor keeping track of pollutants or temperature variations.
How do sloths inspire SlothBot’s functionality?
Its motions are governed by what we call survival constraints. These constraints ensure that the SlothBot is always able to get to a sunny spot to recharge. The actual performance objective that we have given to the robot is to minimize energy consumption, i.e., to simply do nothing subject to the survival constraints. The majority of the time, the robot simply sits there under the trees, measuring various things, seemingly doing absolutely nothing and being rather sloth-like. Whenever the SlothBot does move, it does not move according to some fixed schedule. Instead, it moves because it has to in order to “survive.”
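The policy Egerstedt describes, idle by default and move only when a survival constraint would otherwise be violated, can be sketched as a simple control loop. Everything below (the thresholds, the one-step environment model, the function names) is an illustrative assumption, not SlothBot’s actual software:

```python
# Minimal sketch of a survival-constraint controller: the robot idles
# (and senses) by default, and moves only when its battery constraint
# forces it to. All thresholds and dynamics are illustrative assumptions.

def slothbot_step(battery, at_sunny_spot, low=0.2, full=0.95):
    """Return the action for one control step."""
    if battery < low and not at_sunny_spot:
        return "crawl_to_sun"      # survival constraint is active
    if at_sunny_spot and battery < full:
        return "recharge"          # sunbathe until topped up
    return "idle_and_sense"        # default: minimize energy use

def simulate(steps=100):
    """A toy day: battery drains while idling, charges in the sun."""
    battery, at_sun = 0.5, False
    log = []
    for _ in range(steps):
        action = slothbot_step(battery, at_sun)
        if action == "crawl_to_sun":
            battery -= 0.05        # moving costs the most energy
            at_sun = True          # assume the sunny spot is one step away
        elif action == "recharge":
            battery = min(1.0, battery + 0.1)
        else:
            battery -= 0.01        # sensing costs very little
            at_sun = False         # drift back under the canopy to sense
        log.append(action)
    return log

actions = simulate()
```

The point of the sketch is that motion never follows a fixed schedule; the robot moves only when doing nothing would violate the battery constraint, which matches the “move because it has to in order to survive” behavior described above.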
How would you like to improve SlothBot?
I have a few directions I would like to take the SlothBot. One is to make the sensor suites richer to make sure that it can become a versatile and useful science instrument. Another direction involves miniaturization: I would love to see a bunch of small SlothBots "living" among the trees somewhere in a rainforest for years, providing real-time data as to what is happening to the ecosystem.
THE INSTITUTE In the race to develop a vaccine for the novel coronavirus, health care providers and scientists must sift through a growing mountain of research, both new and old. But they face several obstacles. The sheer volume of material makes using traditional search engines difficult because simple keyword searches aren’t sufficient to extract meaning from the published research. This is further complicated by the fact that most search engines present research results in visual file formats like pdfs and bitmaps, which are difficult for machines to parse.
IEEE Member Peter Staar, a researcher at IBM Research Europe, in Zurich, and manager of the Scalable Knowledge Ingestion group, has built a platform called Deep Search that could help speed along the process. The cloud-based platform combs through literature, reads and labels each data point, table, image, and paragraph, and translates scientific content into a uniform, searchable structure.
The reading function of the Deep Search platform consists of a natural language processing (NLP) tool called the corpus conversion service (CCS), developed by Staar for other information-dense domains. The CCS trains itself on already-annotated documents to create a ground truth, or knowledge base, of how papers in a given realm are typically arranged, Staar says. After the training phase, new papers uploaded to the service can be quickly compared to the ground truth for faster recognition of each element.
Once the CCS has a general understanding of how papers in a field are structured, Staar says, the Deep Search platform presents two options. It can either generate simple results in response to a traditional search query, essentially serving as an advanced pdf reader, or it can generate a report on a specific topic, such as the dosage of a particular drug, with deeper analysis that the group calls a knowledge graph.
“[The] knowledge graph allows us to answer these relatively complex questions that are not able to be answered with just a keyword lookup,” Staar explains.
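A knowledge graph answers such questions by composing facts rather than matching keywords. The toy sketch below illustrates the idea with an invented schema and invented entities; the article does not describe Deep Search’s actual graph structure:

```python
# Toy illustration of why a knowledge graph answers questions a keyword
# search cannot: it composes facts spread across many documents. The
# entities and relations here are invented examples, not Deep Search data.

from collections import defaultdict

# Graph stored as (subject, relation) -> set of objects.
graph = defaultdict(set)

def add_fact(subject, relation, obj):
    graph[(subject, relation)].add(obj)

# Facts hypothetically extracted from several different papers:
add_fact("gene_X", "encodes", "protein_A")
add_fact("protein_A", "targeted_by", "molecule_M1")
add_fact("protein_A", "targeted_by", "molecule_M2")
add_fact("gene_Y", "encodes", "protein_B")

def molecules_for_gene(gene):
    """Multi-hop query: gene -> encoded proteins -> targeting molecules."""
    molecules = set()
    for protein in graph[(gene, "encodes")]:
        molecules |= graph[(protein, "targeted_by")]
    return molecules

# A keyword lookup for "gene_X" returns papers mentioning it; the graph
# traversal instead returns candidate molecules, even when no single
# paper mentions both the gene and the molecules.
candidates = molecules_for_gene("gene_X")
```

No one paper in this toy corpus links `gene_X` to `molecule_M2`; the answer only exists once the per-paper facts are joined in the graph.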
To keep the data in the platform’s knowledge base up to the highest standards possible, Staar says the team bolsters their corpora with trusted, open-source databases such as DrugBank for chemical, pharmaceutical, and pharmacological drug data and GenBank for established and publicly available data sequences.
Deep Search is based on a similar platform that Staar built in 2018 for material science and for oil and gas research, fields that both faced a deluge of data. Staar recognized that the same solution could be used to parse the tsunami of data about SARS-CoV-2. The platform was designed to be generic enough to be extended to other domains of research.
“Our goal was to help the medical community with a tool that we already had in our hands,” Staar says. Currently, the COVID-19 Deep Search service supports 460 active users and has ingested nearly 46,000 scientific articles.
The platform can even use search queries to divide results according to scientific camp.
“In the oil and gas business, when different philosophies [on environmental impact] collide, you can say, ‘Okay, if you follow a certain stream of thought, then you might be more interested in papers that are associated with this group of people, rather than with that group,’” Staar says.
If the scientific community is divided on a major attribute of SARS-CoV-2, for example, Deep Search might cluster search results around each camp. When a user searches for that attribute, the platform could analyze the wording of their search string and then guide the user to the cluster of results that most closely aligns with the user’s approach.
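One simple way to route a query toward the closer camp is vocabulary overlap. The sketch below uses Jaccard similarity over characteristic terms; the camps, terms, and similarity measure are all assumptions for illustration, since the article does not detail Deep Search’s method:

```python
# Hypothetical sketch of routing a query to the nearer "camp" of papers
# by vocabulary overlap (Jaccard similarity). Camps, terms, and the
# similarity choice are illustrative assumptions.

def jaccard(a, b):
    """Jaccard similarity: |A ∩ B| / |A ∪ B|."""
    a, b = set(a), set(b)
    return len(a & b) / len(a | b) if a | b else 0.0

# Each camp summarized by the characteristic terms of its papers:
camps = {
    "camp_airborne": {"aerosol", "airborne", "ventilation", "transmission"},
    "camp_surface":  {"fomite", "surface", "contact", "transmission"},
}

def route_query(query_terms):
    """Return the camp whose vocabulary best matches the query."""
    return max(camps, key=lambda c: jaccard(query_terms, camps[c]))

choice = route_query({"aerosol", "transmission", "indoor"})
```

A real system would use far richer representations than term sets, but the routing logic, score each cluster against the query and return the best match, is the same in spirit.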
This isn’t the first time a pressing global health crisis has prompted scientists to try to streamline the publishing process. A 2010 analysis of literature from the 2003 SARS outbreak found that, despite efforts to shorten wait times for both acceptance and publishing, 93 percent of the papers on SARS didn’t come out until the epidemic had already ended and the bulk of deaths had already occurred.
Unlike their counterparts in 2003, however, present-day epidemic researchers have benefitted from the advent of preprint servers such as bioRxiv and medRxiv, which enable uncorrected articles to be shared digitally regardless of acceptance or submission status. Preprints have been around since the early 1990s, but the public health emergency of SARS-CoV-2 prompted a new surge in popularity for the alternative publishing practice, as well as a new round of concern over its impact.
Deep Search capitalizes on the preprint trend to further reduce obstacles to sharing the content of research papers. But it also aims to address one of the chief criticisms of preprints: that without peer review, the average reader may be unable to distinguish high-quality research from low-quality research. Though every new paper has equal weight in the Deep Search algorithms, the volume of data it ingests allows for statistical comparisons among conclusions. Users can easily see whether a result is consistent with previous findings or seems to be an outlier.
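The kind of consistency check described above can be sketched with a robust outlier test: collect the values reported for the same quantity across many papers and flag the ones far from the consensus. The data, the MAD-based modified z-score, and the threshold below are illustrative assumptions, not Deep Search’s published method:

```python
# Sketch of flagging an outlier result among values extracted from many
# papers, using a median-absolute-deviation (MAD) modified z-score.
# The numbers and the 3.5 threshold are made-up illustrations.

from statistics import median

def flag_outliers(values, threshold=3.5):
    """Return a flag per value: True if its modified z-score > threshold."""
    med = median(values)
    mad = median(abs(v - med) for v in values)
    if mad == 0:
        return [False] * len(values)  # no spread, nothing to flag
    # 0.6745 scales MAD to be comparable to a standard deviation.
    return [abs(0.6745 * (v - med) / mad) > threshold for v in values]

# e.g. a dosage (in mg) reported across eight hypothetical papers:
reported = [200, 210, 195, 205, 198, 202, 207, 900]
flags = flag_outliers(reported)
```

A MAD-based score is preferred over a plain mean/standard-deviation z-score here because the outlier itself would inflate the mean and standard deviation, masking its own detection.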
These relational functions, in which Deep Search sorts, links, and compares data as it returns results, constitute the platform’s signature advantage, Staar says. Developing a treatment molecule, for example, might start with a search to determine which gene to target within the viral RNA.
“If you understand which genes are important, then you can start understanding which proteins are important, which leads you to which kinds of molecules you can build for which kinds of targets,” he says. “That’s what our tool is really built for.”
Right now, almost 70,000 people in the United States alone are on active waiting lists for organ donations. The dream of bio-printing is that one day, instead of waiting for a donor, a patient could receive, say, a kidney assembled on demand from living cells using 3-D printing techniques. But one problem with this dream is that bio-printing an organ outside the body necessarily requires surgery to implant it. This may mean large incisions, which in turn adds the risk of infection and increased recovery time for patients. Doctors would also have to postpone surgery until the necessary implant was bio-printed, vital time patients might not have.
A way around this problem could be provided by new bio-ink, composed of living cells suspended in a gel, that is safe for use inside people and could help enable 3-D printing in the body. Doctors could produce living parts inside patients through small incisions using minimally invasive surgical techniques. Such an option might prove safer and faster than major surgery.
One challenge with bio-printing inside the body is that current bio-inks often require ultraviolet light in order to solidify, but ultraviolet rays can damage internal organs. Another problem is how to attach printed tissues effectively to soft live organs and tissues.
According to a new study published in the journal Biofabrication, researchers developed a bio-ink they could solidify using visible light. Moreover, the ink was printable at the kinds of temperatures found within the body—previous bio-inks were too liquid at body temperature to hold their shape when printed, says study senior author David Hoelzle, a mechanical engineer at Ohio State University.
The scientists used a 3-D printing nozzle affixed onto robotic machinery. This strategy dispensed bio-ink much like an icing tube squeezes out frosting, only in a controlled, programmable manner. The researchers experimented with bio-printing onto soft materials, such as raw chicken breast strips and a gel similar to agar jelly. They first pierced the surfaces of these materials with the nozzle and extruded a little “interlock” knob into the punctured space. Next, they slowly withdrew the nozzle from the materials, trailing behind a filament of material they could keep on printing with.
The knobs left beneath the surface anchored the printed structure to the body, acting a bit like surgical staples, “but with a different type of material and with more flexibility with the shape of material,” Hoelzle says.
The scientists made their bio-printed structures porous to help immerse the cells in fluids carrying nutrients, oxygen and other molecules. Up to 77 percent of mouse cells in the bio-ink remained viable in the structures after 21 days, and the researchers found their strategy of using interlocks resulted in an up to four-fold boost in adhesion strength.
Hoelzle notes they can definitely optimize the interlocks to boost adhesion. “Just like different stitch patterns for textiles have different strengths, there are bound to be different interlocking patterns that improve upon these results,” he says.
The researchers caution they do not aim to bio-print an entire heart or kidney in the body in a minimally invasive manner. “Even more traditional methods of delivery of tissue-engineering materials are years away from this accomplishment,” he says. Instead, “consider the ability to augment a standard surgery by delivering a biomaterial with a tethered growth factor to jumpstart healing, or a tethered drug to prevent infection.”
The scientists foresee bio-printing inside the body using robotic surgery instruments. “In a typical robotic surgery operation, the surgeon is operating four arms, two of them simultaneously,” Hoelzle says. “Each of the arms has an interchangeable tool, so the surgeon can swap out tools depending on what he or she needs at the moment. We envision a biomaterial-bioink printing tool as another tool in the surgeon’s toolset.”
Hoelzle and his colleagues are currently working on the first generation of an interchangeable bio-printing attachment for robotic surgery they aim to report before the end of this year, “although research restrictions from COVID are slowing us down,” he says.
For an inventor, the main challenge might be technical, but sometimes it’s timing that determines success. Steven Sasson had the technical talent but developed his prototype for an all-digital camera a couple of decades too early.
It was 1974, and Sasson, a young electrical engineer at Eastman Kodak Co., in Rochester, N.Y., was looking for a use for Fairchild Semiconductor’s new type 201 charge-coupled device. His boss suggested that he try using the 100-by-100-pixel CCD to digitize an image. So Sasson built a digital camera to capture the photo, store it, and then play it back on another device.
Sasson’s camera was a kluge of components. He salvaged the lens and exposure mechanism from a Kodak XL55 movie camera to serve as his camera’s optical piece. The CCD would capture the image, which would then be run through a Motorola analog-to-digital converter, stored temporarily in a DRAM array of a dozen 4,096-bit chips, and then transferred to audio tape running on a portable Memodyne data cassette recorder. The camera weighed 3.6 kilograms, ran on 16 AA batteries, and was about the size of a toaster.
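The DRAM buffer was just big enough for one frame, which a bit of arithmetic makes concrete. The 4-bits-per-pixel digitization below is an assumption for illustration; the text states only the pixel count and the DRAM configuration:

```python
# Back-of-the-envelope check of the camera's frame buffer sizing.
# The 4-bits-per-pixel figure is an illustrative assumption; the
# article gives only the CCD resolution and the DRAM configuration.

pixels = 100 * 100                 # Fairchild type 201 CCD: 100 x 100
dram_bits = 12 * 4096              # a dozen 4,096-bit DRAM chips
bits_per_pixel_capacity = dram_bits / pixels   # storage budget per pixel

# A 4-bit digitization (16 gray levels) of one full frame would fit
# within the buffer, with a little room to spare:
frame_bits = pixels * 4
assert frame_bits <= dram_bits
```

The buffer works out to just under 5 bits of storage per pixel, which is consistent with holding a single low-bit-depth grayscale frame before it was written out to the much slower cassette tape.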
After working on his camera on and off for a year, Sasson decided on 12 December 1975 that he was ready to take his first picture. Lab technician Joy Marshall agreed to pose. The photo took about 23 seconds to record onto the audio tape. But when Sasson played it back on the lab computer, the image was a mess—although the camera could render shades that were clearly dark or light, anything in between appeared as static. So Marshall’s hair looked okay, but her face was missing. She took one look and said, “Needs work.”
Sasson continued to improve the camera, eventually capturing impressive images of different people and objects around the lab. He and his supervisor, Garreth Lloyd, received U.S. Patent No. 4,131,919 for an electronic still camera in 1978, but the project never went beyond the prototype stage. Sasson estimated that image resolution wouldn’t be competitive with chemical photography until sometime between 1990 and 1995, and that was enough for Kodak to mothball the project.
While Kodak chose to withdraw from digital photography, other companies, including Sony and Fuji, continued to move ahead. After Sony introduced the Mavica, an analog electronic camera, in 1981, Kodak decided to restart its digital camera effort. During the ’80s and into the ’90s, companies made incremental improvements, releasing products that sold for astronomical prices and found limited audiences. [For a recap of these early efforts, see Tekla S. Perry’s IEEE Spectrum article, “Digital Photography: The Power of Pixels.”]
Then, in 1994 Apple unveiled the QuickTake 100, the first digital camera for under US $1,000. Manufactured by Kodak for Apple, it had a maximum resolution of 640 by 480 pixels and could only store up to eight images at that resolution on its memory card, but it was considered the breakthrough to the consumer market. The following year saw the introduction of Apple’s QuickTake 150, with JPEG image compression, and Casio’s QV10, the first digital camera with a built-in LCD screen. It was also the year that Sasson’s original patent expired.
Digital photography really came into its own as a cultural phenomenon when the Kyocera VisualPhone VP-210, the first cellphone with an embedded camera, debuted in Japan in 1999. Three years later, camera phones were introduced in the United States. The first mobile-phone cameras lacked the resolution and quality of stand-alone digital cameras, often taking distorted, fish-eye photographs. Users didn’t seem to care. Suddenly, their phones were no longer just for talking or texting. They were for capturing and sharing images.
The rise of cameras in phones inevitably led to a decline in stand-alone digital cameras, the sales of which peaked in 2012. Sadly, Kodak’s early advantage in digital photography did not prevent the company’s eventual bankruptcy, as Mark Harris recounts in his 2014 Spectrum article “The Lowballing of Kodak’s Patent Portfolio.” Although there is still a market for professional and single-lens reflex cameras, most people now rely on their smartphones for taking photographs—and so much more.
The transformational nature of Sasson’s invention can’t be overstated. Experts estimate that people will take more than 1.4 trillion photographs in 2020. Compare that to 1995, the year Sasson’s patent expired. That spring, a group of historians gathered to study the results of a survey of Americans’ feelings about the past. A quarter century on, two of the survey questions stand out:
During the last 12 months, have you looked at photographs with family or friends?
During the last 12 months, have you taken any photographs or videos to preserve memories?
In the nationwide survey of nearly 1,500 people, 91 percent of respondents said they’d looked at photographs with family or friends and 83 percent said they’d taken a photograph—in the past year. If the survey were repeated today, those numbers would almost certainly be even higher. I know I’ve snapped dozens of pictures in the last week alone, most of them of my ridiculously cute puppy. Thanks to the ubiquity of high-quality smartphone cameras, cheap digital storage, and social media, we’re all taking and sharing photos all the time—last night’s Instagram-worthy dessert; a selfie with your bestie; the spot where you parked your car.
So are all of these captured moments, these personal memories, a part of history? That depends on how you define history.
For Roy Rosenzweig and David Thelen, two of the historians who led the 1995 survey, the very idea of history was in flux. At the time, pundits were criticizing Americans’ ignorance of past events, and professional historians were wringing their hands about the public’s historical illiteracy.
Instead of focusing on what people didn’t know, Rosenzweig and Thelen set out to quantify how people thought about the past. They published their results in the 1998 book The Presence of the Past: Popular Uses of History in American Life (Columbia University Press). This groundbreaking study was heralded by historians, those working within academic settings as well as those working in museums and other public-facing institutions, because it helped them to think about the public’s understanding of their field.
Little did Rosenzweig and Thelen know that the entire discipline of history was about to be disrupted by a whole host of technologies. The digital camera was just the beginning.
For example, a little over a third of the survey’s respondents said they had researched their family history or worked on a family tree. That kind of activity got a whole lot easier the following year, when Paul Brent Allen and Dan Taggart launched Ancestry.com, which is now one of the largest online genealogical databases, with 3 million subscribers and approximately 10 billion records. Researching your family tree no longer means poring over documents in the local library.
Similarly, when the survey was conducted, the Human Genome Project was still years away from mapping our DNA. Today, at-home DNA kits make it simple for anyone to order up their genetic profile. In the process, family secrets and unknown branches on those family trees are revealed, complicating the histories that families might tell about themselves.
Finally, the survey asked whether respondents had watched a movie or television show about history in the last year; four-fifths responded that they had. The survey was conducted shortly before the 1 January 1995 launch of the History Channel, the cable channel that opened the floodgates on history-themed TV. These days, streaming services let people binge-watch historical documentaries and dramas on demand.
Today, people aren’t just watching history. They’re recording it and sharing it in real time. Recall that Sasson’s MacGyvered digital camera included parts from a movie camera. In the early 2000s, cellphones with digital video recording emerged in Japan and South Korea and then spread to the rest of the world. As with the early still cameras, the initial quality of the video was poor, and memory limits kept the video clips short. But by the mid-2000s, digital video had become a standard feature on cellphones.
As these technologies become commonplace, digital photos and video are revealing injustice and brutality in stark and powerful ways. In turn, they are rewriting the official narrative of history. A short video clip taken by a bystander with a mobile phone can now carry more authority than a government report.
Maybe the best way to think about Rosenzweig and Thelen’s survey is that it captured a snapshot of public habits, just as those habits were about to change irrevocably.
For professional historians, the advent of digital photography has had other important implications. Lately, there’s been a lot of discussion about how digital cameras in general, and smartphones in particular, have changed the practice of historical research. At the 2020 annual meeting of the American Historical Association, for instance, Ian Milligan, an associate professor at the University of Waterloo, in Canada, gave a talk in which he revealed that 96 percent of historians have no formal training in digital photography and yet the vast majority use digital photographs extensively in their work. About 40 percent said they took more than 2,000 digital photographs of archival material in their latest project. W. Patrick McCray of the University of California, Santa Barbara, told a writer with The Atlantic that he’d accumulated 77 gigabytes of digitized documents and imagery for his latest book project [an aspect of which he recently wrote about for Spectrum].
So let’s recap: In the last 45 years, Sasson took his first digital picture, digital cameras were brought into the mainstream and then embedded into another pivotal technology—the cellphone and then the smartphone—and people began taking photos with abandon, for any and every reason. And in the last 25 years, historians went from thinking that looking at a photograph within the past year was a significant marker of engagement with the past to themselves compiling gigabytes of archival images in pursuit of their research.
So are those 1.4 trillion digital photographs that we’ll collectively take this year a part of history? I think it helps to consider how they fit into the overall historical narrative. A century ago, nobody, not even a science fiction writer, predicted that someone would take a photo of a parking lot to remember where they’d left their car. A century from now, who knows if people will still be doing the same thing. In that sense, even the most mundane digital photograph can serve as both a personal memory and a piece of the historical record.
An abridged version of this article appears in the July 2020 print issue as “Born Digital.”
Part of a continuing series looking at photographs of historical artifacts that embrace the boundless potential of technology.
Allison Marsh is an associate professor of history at the University of South Carolina and codirector of the university’s Ann Johnson Institute for Science, Technology & Society.
Two major world powers, the United States and China, have both collected an enormous number of DNA samples from their citizens, the premise being that these samples will help solve crimes that might have otherwise gone unsolved. While DNA evidence can often be crucial when it comes to determining who committed a crime, researchers argue these DNA databases also pose a major threat to human rights.
In the U.S., the Federal Bureau of Investigation (FBI) has a DNA database called the Combined DNA Index System (CODIS) that currently contains over 14 million DNA profiles. This database has a disproportionately high number of profiles of black men, because black Americans are arrested five times as often as white Americans. You don’t even have to be convicted of a crime for law enforcement to take and store your DNA; you simply have to have been arrested as a suspect.
Bradley Malin, co-director of the Center for Genetic Privacy and Identity in Community Settings at Vanderbilt University, tells IEEE Spectrum that many issues can arise from this database being composed largely of DNA profiles taken from people of color.
“I wouldn’t say that they are only collecting information on minorities, but when you have a skew towards the collection of information from these communities, when you solve a crime or you think you have solved a crime, then it is going to be a disproportionate number of people from the minority groups that are going to end up being implicated,” Malin says. “It’s a non-random collection of data, as an artifact, so that’s a problem. There’s clearly skew with respect to the information that they have.”
Some of the DNA in the FBI’s database is now being collected by immigration agencies that are collecting samples from undocumented immigrants at the border. Not only are we collecting a disproportionate amount of DNA from black Americans who have been arrested, we’re collecting it from immigrants who are detained while trying to come to America. Malin says this further skews the database and could cause serious problems.
“If you combine the information you’re getting on immigrant populations coming into the United States with information that the FBI already holds on minority populations, who’s being left out here? You’ve got big holes in terms of a lack of white, caucasian people within this country,” Malin says. “In the event that you have people who are suspected of a crime, the databases are going to be all about the immigrant, black, and Hispanic populations.”
Malin says immigration agencies often separate families based on DNA, declaring that someone is not part of a family if their DNA doesn’t match. That can mean people who have been adopted, or who simply live with a family, are separated from it.
Beyond the clear threat to privacy these databases represent, samples in them can be contaminated—or become contaminated—which can lead law enforcement to make wrongful arrests. Law enforcement can also turn up DNA that is a near match to a profile in the database and end up harassing people it believes to be related to a criminal in order to find its suspect. Malin says there’s also no guarantee that these DNA samples won’t end up being used in controversial ways we have yet to even consider.
“One of the problems you run into is scope creep,” Malin says. “Just because the way the law is currently architected says that it shouldn’t be used for other purposes doesn’t mean that that won’t happen in the future.”
As for China, a report that was published by the Australian Strategic Policy Institute in mid-June claims that China is operating the “world’s largest police-run DNA database” as part of its powerful surveillance state. Chinese authorities have collected DNA samples from possibly as many as 70 million men since 2017, and the total database is believed to contain as many as 140 million profiles. The country hopes to collect DNA from all of its male citizens, as it argues men are most likely to commit crimes.
DNA is reportedly often collected during what are represented as free physicals, and it’s also being collected from children at schools. There are reports of Chinese citizens being threatened with punishment by government officials if they refuse to give a DNA sample. Much of the DNA that’s been collected has come from Uighur Muslims, who have been oppressed by the Chinese government and infamously forced into concentration camps in Xinjiang province.
“You have a country that has historically been known to persecute certain populations,” Malin says. “If you are not just going to persecute a population based on the extent to which they publicly say that they are a particular group, there is certainly a potential to subjugate them on a biological basis.”
James Leibold, a nonresident senior fellow at the Australian Strategic Policy Institute and one of the authors of the report on China’s DNA database, tells Spectrum that he is worried that China building up and utilizing this database could normalize this type of behavior.
“Global norms around genomic data are currently in a state of flux. China is the only country in the world conducting mass harvesting of DNA data outside a major criminal investigation,” Leibold says. “It’s the only forensic DNA database in the world to contain troves of samples from innocent civilians.”
Leibold says ethnic minorities like the Uighurs aren’t the only ones threatened by this mass DNA collection. He says the database could be used against dissidents and any other people whom the government sees as a threat.
“With a full genomic map of its citizenry, Chinese authorities could track down those engaged in politically subversive acts (protestors, petitioners, etc.) or even those engaged in ‘abnormal’ or unacceptable behavior (religious groups, drug users, gamblers, prostitutes, etc.),” Leibold says. “We know the Chinese police have planted evidence in the past, and now it is conceivable that they could use planted DNA to convict ‘enemies of the state.’”
As Leibold points out, world powers like China and the U.S. have the ability to change norms in terms of what kind of behavior from a major government is considered acceptable. Thus, there are many risks to allowing these countries to normalize massive DNA databases. As often happens, what at first seems like a simple law enforcement tool can quickly become a dangerous weapon against marginalized people.
It might seem odd that, earlier this month, Stuttgart-based Bosch, a leading global supplier of automotive parts and equipment, asked political leaders to reduce the amount of roadway space they allow for cars and trucks.
This makes more sense when you realize that this call for action came from the folks at Bosch eBike Systems, a division of the company that makes electric bicycles. Their argument is simple enough: The COVID-19 pandemic has prompted many people to shift from traveling via mass transit to bicycling, and municipal authorities should respond to this change by beefing up the bike infrastructure in their cities and towns.
There’s no doubt that a tectonic shift in people’s interest in cycling is taking place. Indeed, the current situation appears to rival in ferocity the bike boom of the early 1970s, which was sparked by a variety of factors, including the maturing into adulthood of many baby boomers who were increasingly concerned about the environment; the 1973 Arab oil embargo; and the mass production of lightweight road bikes.
While the ’70s bike boom was largely a North American affair, the current one, like the pandemic itself, is global. Detailed statistics are hard to come by, but retailers in many countries are reporting a surge of sales, for both conventional bikes and e-bikes—the latter of which may be this bike boom’s technological enabler the way lightweight road bikes were to the boom that took place 50 years ago. Dutch e-bike maker VanMoof, for example, reported a 50 percent year-over-year increase in its March sales. And that’s when many countries were still in lockdown.
Eco Compteur, a French company that sells equipment for tracking pedestrian and bicycle traffic, is documenting the current trend with direct observations. It reports bicycle use in Europe growing strongly since lockdown measures eased. And according to its measurements, in most parts of the United States, bicycle usage is up by double or even triple digits over the same time last year.
Well before Bosch’s electric-bike division went public with its urgings, local officials had been responding with ways to help riders of both regular bikes and e-bikes. In March, for example, the mayor of New York City halted the police crackdown on food-delivery workers using throttle-assisted e-bikes. (Previously, they had been treated as scofflaws and ticketed.) And in April, New York introduced a budget bill that will legalize such e-bikes statewide.
Biking in all forms is indeed getting a boost around the world, as localities create or enlarge bike lanes, accomplishing at breakneck speed what typically would have taken years. Countless cities and towns—including Boston, Berlin, and Bogotá, where free e-bikes have even been provided to healthcare workers—are fast creating bike lanes to help their many new bicycle riders get around.
Maybe it’s not accurate to characterize these local improvements to biking infrastructure as countless; some people are indeed trying to keep a tally of these developments. The “Local Actions to Support Walking and Cycling During Social Distancing Dataset” has roughly 700 entries as of this writing. That dataset is the brainchild of Tabitha Combs at the University of North Carolina in Chapel Hill, who does research on transportation planning.
“That’s probably 10 percent of what’s happening in the world right now,” says Combs, who points out that one of the pandemic’s few positive side effects has been its influence on cycling. “You’ve got to get out of the house and do something,” she says. “People are rediscovering bicycling.”
The key question is whether the changes in people’s inclination to cycle to work or school or just for exercise—and the many improvements to biking infrastructure that the pandemic has sparked as a result—will endure after this public-health crisis ends. Combs says that cities in Europe appear more committed than those in the United States in this regard, with some allocating substantial funds to planning their new bike infrastructure.
Cycling is perhaps one realm where responding to the pandemic doesn’t force communities to sacrifice economically: Indeed, increasing the opportunities for people to walk and bike often “facilitates spontaneous commerce,” says Combs. And researchers at Portland State have shown that cycling infrastructure can even boost nearby home values. So lots of people should be able to agree that having the world bicycling more is an excellent way to battle the pandemic.
Nothing computes more efficiently than a brain, which is why scientists are working hard to create artificial neural networks that mimic the organ as closely as possible. Conventional approaches use artificial neurons that work together to learn different tasks and analyze data; however, these artificial neurons do not have the ability to actually “fire” like real neurons, releasing bursts of electricity that connect them to other neurons in the network. The third generation of this computing tech aims to capture this real-life process more accurately—but achieving such a feat is hard to do efficiently.
In a study published 30 April in IEEE Transactions on Electron Devices, a group of researchers in India propose a novel approach that allows these artificial neurons to fire in a much more efficient manner, allowing more “neurons” to be packed onto a computer chip. The advancement takes us one step closer to achieving more practical spiking neural networks (SNNs). This type of network could help us better understand the very organ that it’s inspired by, and thus better understand human disease, thought processes, and other mysteries of the brain.
Neurons in the brain “communicate” with each other by transmitting electrical spikes. Networks of artificial spiking neurons can imitate this phenomenon by using leaky capacitors: once a capacitor’s charge reaches a given threshold, it discharges, sending a spike of voltage or current to the neighboring capacitor (analogous to another neuron).
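To make the threshold-and-fire behavior concrete, here is a minimal software sketch of a single leaky integrate-and-fire neuron. The function name `simulate_lif` and all parameter values are illustrative assumptions, not drawn from the IIT Bombay hardware described below.

```python
# Minimal leaky integrate-and-fire (LIF) neuron sketch.
# Parameter values are illustrative only.

def simulate_lif(input_current, threshold=1.0, leak=0.1, reset=0.0, dt=1.0):
    """Integrate input current into a leaky 'membrane' voltage;
    emit a spike (1) and reset whenever the threshold is crossed."""
    v = reset
    spikes = []
    for i in input_current:
        v += dt * (i - leak * v)   # charge up, while some charge leaks away
        if v >= threshold:
            spikes.append(1)       # the "capacitor" fires...
            v = reset              # ...and discharges back to rest
        else:
            spikes.append(0)
    return spikes

# A steady input drives the neuron over threshold at regular intervals.
print(simulate_lif([0.3] * 10))
```

With a constant input of 0.3 per step, the simulated voltage climbs for a few steps, crosses the threshold, resets, and repeats, producing a regular spike train; with zero input, the neuron never fires.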
A group of researchers at the Indian Institute of Technology Bombay designed the new SNN hardware. The design includes silicon-based electrical switches, called Metal-Oxide Semiconductor Field-Effect Transistors (MOSFETs), which are built on an insulating substrate.
To help the “neuron” fire and activate the other capacitors, the team added positively charged holes to the MOSFETs. Based on the nature of the MOSFETs, these holes allow for the quantum mechanical tunneling of electrons out of the capacitor. “The use of quantum mechanical tunneling provides incredible control, which is a huge advantage,” says Tanmay Chavan, a member of the research team that developed the SNN.
What’s more, this design can work in an off-current mode, which allows the capacitors to be 10,000 times smaller than if they required the current to be on. “In fact, the body capacitance is used to integrate the current within the transistor, leading us to utilize such tiny currents accurately without external loss,” explains Udayan Ganguly, another researcher at IIT Bombay involved in the study. “This… leads to extreme energy efficiency that we refer to as ‘computing at the current floor.’ Thus, fantastic energy and density is achieved.”
The researchers are interested in commercializing this design and are currently looking into forming new partnerships. “Given the fantastic performance at a unit neuron level, we plan to demonstrate networks of such neurons to understand how models of networks of neurons behave on silicon. This will enable us to understand the robustness and systems-level efficiency of the technology,” Chavan says.