Mosquitoes by the thousands, ants, spiders, stifling heat… that is how Devil’s Island’s most famous resident, Captain Dreyfus (1859-1935), described life there. Wrongly accused of passing intelligence to the enemy, he was held prisoner for four years in this “green hell” before being pardoned.
Devil’s Island, one of the three islands of the Îles du Salut archipelago off the coast of French Guiana, has become a symbol of injustice and cruelty. From a distance, though, the archipelago looked like paradise: sun, coconut palms, blue sea… When the first explorers, English and Spanish, approached it, they had to battle winds as violent as the currents that churned the shark-filled sea in every direction. That, supposedly, is where its grim name comes from.
When Guiana became French, Louis XV decided to send 10,000 colonists there. But yellow fever took a devastating toll, and only 2,000 of them reached its shores safe and sound. In 1793, a fortress was built there to hold deported opponents of the regime; it would become one of the harshest penal colonies in the world.
“Cursed triangle,” “terror of the convicts”… The three islands would see tens of thousands of men pass through, guilty and innocent alike, among them Henri Charrière, who would later write the famous book Papillon. In 1923, the French public discovered the reality of the penal colony through the pen of journalist Albert Londres: “Devil’s Island! Tomb of the living, you devour entire lives.” It was the beginning of the end. In 1947, the “penal colony of penal colonies” closed its doors. Today the island belongs to the Guiana Space Centre and is off-limits to the public.
Two-dimensional, atom-thin materials are good for a lot of things, but until two years ago, nobody thought they’d make good memory devices. Then Deji Akinwande, Jack Lee, and their team at UT Austin tried it out. It turns out that sandwiching a 2D material like molybdenum disulfide between two electrodes makes a memristor—a two-terminal device that stores data as a change in resistance. In research reported last week, they demonstrated an important potential application for these “atomristors”: analog RF switches for 5G and perhaps future 6G radios.
Cellular radios do a lot of switching. They have to switch between transmit and receive, they have to switch between different frequencies to prevent interference, and they may have to switch among signals with different phases to steer their data beams around. RF switches are demanding devices that need a combination of characteristics that is hard to come by: fast switching, low on-resistance, high off-impedance, little leakage, and—this is the part today’s switches don’t do—the ability to stay in place without power. Battery-dependent IoT systems could last longer if they didn’t have to keep radio switches powered up. That’s what the new nanoscale atomristor switches can do, not just at 5G frequencies but at possible future 6G frequencies as well.
Memristors are generally made up of two electrodes sandwiching a pillar of insulating material, such as an oxide. The device starts off in a high-resistance state, preventing current from passing through it. But raise the voltage high enough, and oxygen atoms are shoved out of place in the oxide to form a conductive pathway. In this state, the device passes current easily. A high voltage in the reverse direction puts the oxygen atoms back in place, restoring the device’s resistance.
But that can’t be what’s happening in 2D semiconductors, because there’s no vertical dimension in which to form a conductive path. Instead, Akinwande’s group found, certain naturally occurring defects in the two-dimensional material’s crystal lattice—missing atoms—produce the effect. Ordinarily, the resistance across the 2D material is high, but with enough voltage, gold atoms from the electrodes will temporarily move into the vacancies, making the material conductive. “Basically, it’s like Airbnb. They’re just renting the space,” says Akinwande. A strong reverse voltage will push the gold back out.
The atomristor action was initially discovered using molybdenum disulfide as the 2D material. But for RF switches, which have to strongly block signals when switched off, “what you really need is an insulator,” says Akinwande. So the team and their collaborators at the University of Lille turned to hexagonal boron nitride (hBN), an extensively studied 2D insulator.
“Usually when people use hBN, they’re using several layers,” says Akinwande. But over time his team was able to build switches with just a single, 0.3-nanometer-thick layer of material. “People were shocked by this result.” The key was to produce hBN without any flaws big enough to let current leak through. “It has to be near perfect, but not perfect,” he says.
The key figure of merit for RF switches is the cutoff frequency. It is a combination of on-state resistance and off-state capacitance, both of which should be low in a good switch. Terahertz values for cutoff frequency indicate that a device is a good candidate for an RF switch, and the experimental hBN devices scored 129 terahertz. As part of the testing, the team transmitted real-time high-definition video at a rate of 8.5 gigabits per second using a 100-gigahertz carrier frequency, which they say is more than sufficient for 5G’s streaming needs. At this data rate, several movies can be downloaded in a few seconds. They reported their results in Nature Electronics.
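The relationship between those two quantities is the standard figure of merit f_co = 1/(2π·R_on·C_off). A minimal numerical sketch, using illustrative component values that are assumptions rather than the published device parameters:

```python
import math

def cutoff_frequency(r_on_ohms: float, c_off_farads: float) -> float:
    """RF-switch figure of merit: f_co = 1 / (2 * pi * R_on * C_off)."""
    return 1.0 / (2.0 * math.pi * r_on_ohms * c_off_farads)

# Illustrative values (assumed, not from the paper):
r_on = 10.0    # on-state resistance, ohms
c_off = 1e-16  # off-state capacitance, 0.1 femtofarad
print(f"{cutoff_frequency(r_on, c_off) / 1e12:.0f} THz")  # → 159 THz
```

Driving either the resistance or the capacitance down pushes the cutoff frequency higher, which is what makes atom-thin switching layers attractive.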
For 5G frequencies, Akinwande is exploring commercialization to further develop the nanometer-scale switches. Although the research device was demonstrated using gold electrodes on a diamond substrate, Akinwande says the process for making these RF switches is compatible with the CMOS processes used in foundries. He points to research done at several universities and TSMC showing the integration of hBN with silicon.
For 6G frequencies, which are expected to include frequencies in the terahertz range (300 to 3000 GHz), the UT Austin team is planning new laboratory measurements.
Vortex lasers could help photons carry more data, a new study finds.
Modern optical telecommunications encode data in multiple aspects of light, such as its brightness and color. In order to store even more data in light, scientists are exploring other properties of light that have proven more difficult to control.
One promising feature of light under investigation has to do with momentum. Light has momentum, just like a physical item moving through space, even though it does not have mass. As such, when light shines on an object, it exerts a force. Whereas the linear momentum of light exerts a push in the direction that light is moving, angular momentum of light exerts torque.
A beam of light can possess two kinds of angular momentum. The spin angular momentum of a ray of light can make objects it shines on rotate in place, whereas its orbital angular momentum can make objects rotate around the center of the ray. A beam of light that carries orbital angular momentum resembles a vortex, moving through space with a spiraling pattern like a corkscrew. Whereas a conventional light beam is brightest at its center, vortex beams have ringlike shapes that are dark in the center, due to how some of the waves making up vortex beams can interfere with one another.
An extraordinarily useful property of vortex beams is that beams with different twisting patterns do not interfere with one another. This means that, in theory, an infinite number of vortex beams can be overlaid on top of each other to carry an unlimited number of data streams at the same time.
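That non-interference is the orthogonality of azimuthal phase modes exp(i·l·φ) with different integer charge l, which is easy to verify numerically. This is a sketch of the underlying math, not code from the studies:

```python
import numpy as np

# Sample the azimuthal angle over one full turn around the beam axis.
phi = np.linspace(0, 2 * np.pi, 10_000, endpoint=False)

def oam_mode(l: int) -> np.ndarray:
    """Azimuthal phase profile of a vortex beam with topological charge l."""
    return np.exp(1j * l * phi)

def overlap(l1: int, l2: int) -> float:
    """Normalized inner product between two OAM modes."""
    return float(np.abs(np.vdot(oam_mode(l1), oam_mode(l2))) / len(phi))

print(overlap(2, 2))  # 1.0: identical modes fully overlap
print(overlap(2, 5))  # ~0.0: distinct modes are orthogonal
```

Because each integer charge gives an orthogonal channel, a receiver that can sort the modes can recover each data stream independently.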
However, until now, all microchip-scale vortex lasers firing at telecommunications wavelengths were each limited to transmitting a single orbital angular momentum pattern. At the same time, existing detectors for vortex beams relied on complex filtering techniques using bulky components, which prevented them from being integrated on chips and made them incompatible with most practical optical telecommunications approaches.
Now scientists at the University of Pennsylvania and their colleagues have made breakthroughs with both vortex lasers and vortex beam detectors. They detailed their findings in two studies in the 15 May issue of the journal Science.
The researchers began with a microring laser consisting of a ring of indium gallium arsenide phosphide only 7 microns in diameter in which light could flow in a loop, via a channel 650 nanometers wide. By varying the light pumped into this circle from microscopic arms on either side of this ring, the researchers could alter the orbital angular momentum of the beam emitted from the laser. Instead of emitting a single orbital angular momentum mode, they showed it could emit five distinct modes.
The scientists also developed a light detector based on tungsten ditelluride, which can act like a so-called Weyl semimetal, a material with properties lying between a conductive metal and a pure semiconductor. Their experiments found that different orbital angular momentum modes of light each generated unique patterns of electrical current within the photodetector, and they suggest this electronic method of detecting the orbital angular momentum of light could be scaled to work on microchips.
"By generating five different orbital angular momentum modes using our laser and sorting them with our detector, the data capacity of orbital angular momentum channels can be boosted by up to five times," says Liang Feng, an optical engineer at the University of Pennsylvania and lead author of the study describing the laser.
"We now have both essential integrated elements—that is, both source and detector—for implementing high-capacity optical communication via orbital angular momentum modes," says Ritesh Agarwal, a materials scientist at the University of Pennsylvania and lead author of the study describing the detector.
In the future, instead of using lasers to tune the orbital angular momentum of the vortex beams, Feng says they could do it electrically, which could help better integrate these devices onto microchips. He also suggests that the number of orbital angular momentum modes the vortex beam can be set to could be increased to whatever is needed.
The scientists also plan on increasing the sensitivity of the detector to single photons so that it could serve in quantum communications and other quantum applications, Agarwal says. "However, it will be challenging to achieve high sensitivity and signal purity, so we will keep on searching for better material platforms and refining fabrication techniques."
Quasi-magnetic materials known as antiferromagnets are attracting research interest for their potential to hold far more data in a computer’s memory than traditional magnets allow.
Though the early work required to prove the concept has only just begun, a series of new studies shows progress in being able to electrically manipulate bits stored in antiferromagnets and to do so with components compatible with standard CMOS manufacturing techniques.
Antiferromagnets exhibit different properties from traditional ferromagnets, which are used in a variety of modern memory technologies including magnetoresistive random-access memory (MRAM).
MRAM has clear advantages over other memory technologies. Reading and writing data using MRAM can be done at speeds similar to those of volatile technologies such as DRAM and SRAM. But MRAM consumes less power and, like flash, is nonvolatile, meaning it doesn’t need a steady power supply to retain data.
Despite its advantages, MRAM could still be considered a boutique memory technology. And in theory, at least, antiferromagnets could fix a problem that has prevented MRAM from achieving broader adoption.
MRAM stores information as the spins of electrons—a property related to an electron’s intrinsic angular momentum. Ferromagnets have unpaired electrons that spin, or point, in one of two directions. Most electrons in a ferromagnet point in the same direction. When a current runs nearby, its magnetic field can cause most of those electrons to change their spins. The magnet records a “1” or a “0” depending on which direction they point.
A drawback of ferromagnets is that they can be influenced by external magnetic fields, which can cause bits to flip unintentionally. And the spins of adjacent ferromagnets can influence one another unless there’s enough space between them—which limits MRAM’s ability to scale to higher densities for lower costs.
Antiferromagnets—which include compounds of common metals such as manganese, platinum, and tin—don’t have that problem. Unlike with ferromagnets, the spins of electrons within the same antiferromagnet don’t all point in the same direction. Electrons on neighboring atoms point opposite to each other, effectively canceling one another out.
The collective orientation of all spins in an antiferromagnet can still record bits, but the magnet as a whole has no magnetic field. As a result, antiferromagnets can’t influence each other, and they aren’t bothered by external fields. Which means you can pack them in tight.
And because spin dynamics in antiferromagnets are much faster, bits can be switched in picoseconds at terahertz frequencies—much faster than the nanoseconds required at the gigahertz frequencies used in today’s ferromagnetic MRAM. In theory, antiferromagnets could increase the writing speed of MRAM by three orders of magnitude.
Only in the past five years have antiferromagnets been seriously investigated for their potential in memory, since researchers in Europe demonstrated it was possible to use an electric current to control the spins of electrons within an antiferromagnet. That work has led to a flurry of research investigating different types of antiferromagnets and switching techniques.
“There are a very wide range of antiferromagnetic materials one could choose,” says Pedram Khalili-Amiri, an associate professor of electrical and computer engineering at Northwestern University. “There’s more of them than there are ferromagnets. This is a blessing and a curse.”
Researchers have reported several advances using antiferromagnets since the start of this year. Khalili-Amiri led a team that showed switching in tiny pillars of platinum manganese, an antiferromagnet used in hard drives and magnetic field sensors today. The team described its work in February in Nature Electronics. “We wanted to build a device that was CMOS-compatible,” he says.
In March, a group involving Markus Meinert of the Technical University of Darmstadt in Germany wrote in Physical Review Research of an experiment showing a novel MRAM technique for switching bits, known as spin-orbit torque, that could also work for switching bits stored in one type of antiferromagnet.
And in April, Satoru Nakatsuji at the University of Tokyo and his collaborators described in Nature an experiment that successfully switched bits in an antiferromagnet (Mn3Sn) that has a particular type of electrons known as Weyl fermions. The spin states of these fermions are relatively easy to measure and allow for a device to be much simpler than other antiferromagnetic devices.
Despite this progress, Barry Zink from the University of Denver says it’s too early to bet on any one type of antiferromagnet. “It’s a really exciting field. I think it’s not clear yet just exactly which material, or if just one of them by itself, is going to be the winner in all this,” he says.
A number of technical challenges would have to be resolved before antiferromagnets could ever be used in commercial devices. One issue that Zink has written about is that heat from a current appears to cause a voltage pattern in some antiferromagnetic devices that looks similar to what a switch in electron spin may cause. To read data back, it will be important to distinguish between the two.
And reading data from an antiferromagnet is still much slower and more difficult than reading data stored in ferromagnets. “We need to find ways of reading more efficiently,” says Meinert.
Already, companies are beginning to take note. Though he declined to share names, Nakatsuji says he’s been contacted by large technology companies for his lab’s work on antiferromagnets. “I think in the near future, a lot will become possible,” he says.
Optical atomic clocks will likely redefine the international standard for measuring a second in time. They are far more accurate and stable than the current standard, which is based on microwave atomic clocks.
Now, researchers in the United States have figured out how to convert high-performance signals from optical clocks into a microwave signal that can more easily find practical use in modern electronic systems.
Synchronizing modern electronic systems such as the Internet and GPS navigation is currently done using microwave atomic clocks that measure time based on the frequency of natural vibrations of cesium atoms. Those vibrations occur at microwave frequencies that can easily be used in electronic systems.
But newer optical atomic clocks, based on atoms such as ytterbium and strontium, vibrate much faster at higher frequencies and generate optical signals. Such signals must be converted to microwave signals before electronic systems can readily make use of them.
“How do we preserve that timing from this optical to electronic interface?” says Franklyn Quinlan, a lead researcher in the optical frequency measurements group at the U.S. National Institute of Standards and Technology (NIST). “That has been the big piece that really made this new research work.”
By comparing two optical-to-electronic signal generators based on the output of two optical clocks, Quinlan and his colleagues created a 10-gigahertz microwave signal that synchronizes with the ticking of an optical clock. Their highly precise method has an error of just one part in a quintillion (a one followed by 18 zeros). The new development and its implications for scientific research and engineering are described in the 22 May issue of the journal Science.
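To put one part in 10^18 in perspective, here is a back-of-the-envelope calculation (an illustration, not part of the study) of how far a signal with that fractional error could drift over a year:

```python
SECONDS_PER_YEAR = 365.25 * 24 * 3600  # ~3.16e7 seconds

fractional_error = 1e-18  # one part in a quintillion
drift = fractional_error * SECONDS_PER_YEAR  # accumulated timing error

print(f"{drift * 1e12:.1f} picoseconds per year")  # → 31.6 picoseconds per year
```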
The improvement comes as many researchers expect the international standard that defines a second in time—the Système International (SI)—to switch over to optical clocks. Today’s cesium-based atomic clocks require a month-long averaging process to achieve the same frequency stability that an optical clock can achieve in seconds.
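The month-versus-seconds gap follows from how instability averages down. For white frequency noise, the Allan deviation falls as σ(τ) = σ₁/√τ, so the averaging time needed to reach a target stability scales with the square of the one-second instability. The instability figures below are assumed round numbers, not values from the article:

```python
def time_to_reach(sigma_1s: float, target: float) -> float:
    """Averaging time in seconds to reach `target` stability,
    assuming sigma(tau) = sigma_1s / sqrt(tau) (white frequency noise)."""
    return (sigma_1s / target) ** 2

cesium_1s = 1e-13   # assumed one-second instability of a cesium fountain
optical_1s = 1e-16  # assumed one-second instability of an optical clock
target = 1e-16

print(f"cesium:  {time_to_reach(cesium_1s, target) / 86400:.1f} days")
print(f"optical: {time_to_reach(optical_1s, target):.1f} seconds")
```

A thousandfold better starting point translates into a millionfold shorter averaging time, which is why optical clocks reach in seconds what cesium needs weeks to achieve.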
“Because optical clocks have achieved unprecedented levels of accuracy and stability, linking the frequencies provided by these optical standards with distantly located devices would allow direct calibration of microwave clocks to the future optical SI second,” wrote Anne Curtis, a senior research scientist at the National Physical Laboratory in the United Kingdom, in an accompanying article. Curtis was not involved in the research.
Optical clocks can already be linked together physically through fiber-optic networks, but this approach still limits their usage in many electronic systems. The new achievement by the U.S. research team—with members from NIST, the University of Colorado-Boulder, and the University of Virginia in Charlottesville—could remove such limitations by combining the performance of optical clocks with microwave signals that can travel in areas without a fiber-optic network.
For its demonstration, the team built its own version of an optical frequency comb, a pulsed-laser device that uses very brief light pulses to create a repetition rate that, when converted to frequency numbers, resembles “a comb of evenly spaced frequencies or tones spanning the optical regime,” Curtis explains. Modern optical frequency combs were first developed 20 years ago and have played a starring role in both fundamental research experiments and various technological systems since that time.
By measuring the optical beats between a single comb tone and an unknown optical frequency, researchers knew they should be able to directly link faster optical frequencies to slower microwave frequencies. Doing that required a photodetector developed by researchers at the University of Virginia to carry out the optical-to-microwave conversion and generate an electrical signal. The team also wrote its own software for off-the-shelf digital sampling hardware to help digitize and extract the phase information from the optical clocks.
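The comb arithmetic works like this: the n-th comb tooth sits at f_n = n·f_rep + f_ceo, so the beat between an optical frequency and its nearest tooth is a directly countable RF signal. A sketch with assumed values (the repetition rate, offset frequency, and optical frequency below are illustrative, not the team’s):

```python
f_rep = 1.0e9   # comb repetition rate, 1 GHz (assumed)
f_ceo = 0.35e9  # carrier-envelope offset frequency (assumed)
f_opt = 518_295_836_590_864.0  # an optical frequency near the Yb clock line, Hz

# Index of the comb tooth nearest the optical frequency.
n = round((f_opt - f_ceo) / f_rep)

# The beat note: an RF-range frequency linking f_opt to the microwave domain.
f_beat = f_opt - (n * f_rep + f_ceo)

print(n, f_beat)  # → 518295 486590864.0
```

Counting f_beat, f_rep, and f_ceo electronically—all microwave-rate signals—ties the optical frequency to the microwave domain; the hard part the team tackled is performing the optical-to-electrical conversion without degrading the timing.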
“The piece that has lagged a bit is the high-fidelity conversion of optical pulses to microwave signals with the optical-to-electrical convertor,” Quinlan says. “So if you have pulses where you know the timing to within a femtosecond (one quadrillionth of a second), how do you convert those photons to electrons while maintaining that level of timing stability? That has taken a lot of effort and work to understand how to do that really well.”
The researchers didn’t quite reach their original benchmark for minimizing the potential instability and errors in the microwave signals synchronized with the optical clocks. But even with the current performance, Quinlan and his colleagues realized: “Okay, great, that'll support current and next-generation optical clocks.”
Curtis describes the improved capability to synchronize microwave signals with optical clock signals as a “paradigm shift” that will impact “fundamental physics, communication, navigation, and microwave engineering.” One of the most immediate applications could involve higher-accuracy Doppler radar systems used in navigation and tracking. A more stable microwave signal can help radar detect even smaller frequency shifts that could, for example, better distinguish slow-moving objects from the background noise of stationary objects.
Future space telescopes based on very-long-baseline interferometry (VLBI) could also benefit from the highly stable microwave signals synchronized with optical clocks. Today’s ground-based VLBI telescopes use receiver devices spread across the globe to detect microwave and millimeter-wave signals and combine them into high-resolution images of cosmic objects such as black holes. A similar VLBI telescope located in space could boost the imaging resolution while avoiding the Earth’s atmospheric distortions that interfere with astronomers’ observations. In that scenario, having optical-clock-level stability to synchronize all the signals received by the VLBI telescope could improve observation time from seconds to hours.
“Essentially you’re collecting signals from multiple receivers and you need to time-stamp those signals to combine them in a meaningful way,” Quinlan says. “Right now the atmosphere distorts the signal enough so that [it] is a limitation rather than the time-stamping from a stable clock, but if you get away from atmospheric distortions, you could do much better and then you can utilize a much more stable clock.”
There is still more work to be done before more electronic systems can take advantage of such optical-to-microwave conversion. For one thing, the sheer size of optical clocks means that nobody should expect a mobile device to have a tiny optical clock inside anytime soon. In the team’s latest research, their optical atomic clock setup occupied a lab table about 32 square feet in size (almost 3 square meters).
“Some of my coauthors on this effort led by Andrew Ludlow at NIST, as well as other folks around the world, are working to make this much more compact and mobile so that we can kind of have optical-clock-level performance on mobile platforms,” Quinlan says.
Another approach that could bypass the need for miniature optical clocks involves figuring out whether microwave transmissions could maintain the stability of the optical clock performance when transmitted across large distances. If this works, stable microwave transmissions could wirelessly synchronize performance across multiple mobile devices.
At the moment, optical clocks can be linked only through either fiber-optic cables or lasers beamed through the air. The latter often becomes ineffective in bad weather. But the team plans to explore the beaming possibility further with microwaves, especially after its initial success and with support from both NIST and the Defense Advanced Research Projects Agency.
“What would be great is if we had a microwave link that basically maintains the stability of the optical signal but can then be transmitted on a microwave carrier that doesn't suffer from rainy days and from dusty conditions,” Quinlan says. “But it's still yet to be determined whether or not such a link could actually maintain the stability of the optical clock on a microwave carrier.”
Cable assemblies play an important role in high-density electronic systems. Often dismissed as ‘a couple of connectors and some cable’, they are in fact an essential element of modern engineering design and should not be underestimated.
In this paper, we outline the process of creating cable assemblies for application sectors requiring the highest levels of reliability, such as aerospace, defense, space and motorsport.
When Rovenso’s cofounder and CEO Thomas Estier started thinking about how autonomous security and monitoring robots could be helpful during the COVID-19 pandemic, adapting them for UV-C disinfection seemed like it made a lot of sense—while you patrol at night, why not also lower the viral load of shared areas? But arguably the first question that a company has to ask when considering a new application, Estier tells us, is whether they can offer something unique.
“For me, what was also interesting is that the crisis motivated us to consider existing solutions for disinfection, and then understanding that [those solutions] are not adapted for large workshops and offices,” he says. “Instead, it would make sense for a robot to ‘understand’ its environment and act intelligently and to better spend its energy, and this loop of sense-analyze-act is the essence of robotics. When you use the full power of robotics, then you can really innovate with new use cases.”
In three weeks, Estier and his team developed what he’s calling “a hack,” turning their highly mobile security robot into an autonomous and efficient coronavirus destroyer.
We’ll get to the disinfecting strategy in a second, but first, a quick word about ROVéo’s design, since it’s a little, uh, different looking. Based on the above video, you might be wondering why Rovenso doesn’t just use a conventional mobile base—a Turtlebot, a Husky, a Freight, or any number of other options that are simple and affordable. And the reason is simple: ROVéo can handle stairs.
Those tiny powered wheels with the enormous, cleverly designed suspension have no problems with stairs or even curbs that are as high as the robot itself, a capability that usually requires a much more sophisticated mechanical system. It’s also able to handle other terrain challenges, including this one, which has got to be infuriating (or catastrophic) for most warehouse robots since it’s effectively invisible to planar lidar.
Relative to other UV-C disinfecting robots we’ve been following, ROVéo is taking a targeted approach, with the goal of being able to disinfect larger spaces like industrial or commercial areas much more efficiently. Hugely powerful UV-C robots for hospitals are designed to “fry” as many surfaces as possible as thoroughly as possible, which is fine in constrained environments like hospitals. But these robots are just not practical for (say) an office complex, where you’ve got to cover a lot more ground.
ROVéo’s solution is to autonomously map its 3D environment with lidar, analyze that map, and then focus its UV-C disinfection system just on surfaces that are likely to be touched by humans, using a simulation of UV-C radiation to determine how long it needs to treat a surface to achieve a 99 percent disinfection rate. Surfaces that the robot targets include desktops, tabletops, counters, handles and handrails, and equipment in common spaces. You don’t get that same whole-environment sterilization that larger UV disinfecting robots offer, but instead you’re significantly reducing the viral load in just the places where it’s most important to do so. This means that your robot is disinfecting more useful areas faster with less downtime to recharge. It may not be the right answer for hospitals, but it could bring a substantial amount of safety to other spaces with less stringent requirements.
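The dose calculation behind that 99 percent figure can be sketched with a standard first-order inactivation model. The D90 value and irradiance below are illustrative assumptions, not Rovenso’s actual parameters:

```python
import math

def exposure_time(target_survival: float, d90_mj_cm2: float,
                  irradiance_mw_cm2: float) -> float:
    """Seconds of UV-C exposure needed for the surviving fraction of
    pathogens to drop to `target_survival`, given D90 (the dose for a
    one-log, i.e. 90%, reduction) and the irradiance at the surface."""
    required_dose = -math.log10(target_survival) * d90_mj_cm2  # mJ/cm^2
    return required_dose / irradiance_mw_cm2  # mW = mJ/s, so this is seconds

# 99% disinfection (1% survival), an assumed D90 of 3 mJ/cm^2,
# and 1 mW/cm^2 of UV-C actually reaching the surface:
print(exposure_time(0.01, 3.0, 1.0), "seconds")  # → 6.0 seconds
```

Because irradiance falls off with distance and angle, a robot that knows its 3D map can budget exposure time per surface instead of blasting the whole room.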
Estier says that Rovenso is prepared to supply these robots to interested companies if this prototype gets traction. Specifically, Rovenso is investigating deployments in industries like “pharma, biotech, med tech, and perhaps food tech, where it would make sense to target specifically wet labs.” Like other disinfecting robots, ROVéo would enhance rather than replace existing cleaning processes, and Estier suggests that it could be offered as a service for several hundred dollars per week, which seems like not a whole lot for companies that want to offer an additional layer of protection for their employees.
Back in 2012, Netflix released Chaos Monkey, an open-source tool for triggering random failures in critical computing infrastructure. Similar stress testing, at least in a simulated environment, has been applied in other contexts such as banking, but we’ve never stress tested industrial production during a viral pandemic. Until now.
COVID-19 has demonstrated beyond doubt the fragility of the global system of lean inventories and just-in-time delivery. Many nations have immediate need for critical medical supplies and equipment, even as we grope for the switch that will allow us to turn the global economy back on. That means people have to be able to manufacture stuff, where it’s needed, when it’s needed, and from components that can be locally sourced. That’s a big ask, because most of our technology comes to us from far away, often in seamlessly opaque packaging—like a smartphone, all surface with no visible interior.
The manufacture of even basic products can be so encumbered by secrecy or obscurity that it quickly becomes difficult to learn how to make them or to re-create their functionality in some other way. While we normally tolerate such impediments as part of normal business practice, they have thrown up unexpected roadblocks to keeping the world operating through the present crisis.
We must do whatever we can to lower the barriers to getting things built, and that begins by embracing a newfound flexibility in our approaches to both manufacturing and intellectual property. Companies are already rising to this challenge.
For example, Medtronic shared the designs and code for its portable ventilator at no charge, enabling other capable manufacturers to take up the challenge of building and distributing enough units to meet peak demand during the pandemic. Countless other pieces of electronic equipment—everything from routers to thermostats—operate in critical environments and need immediate replacement should they fail. Where they cannot be replaced, we will fall deeper into the ditch we now find ourselves in.
It would be ideal if any sort of equipment could be “printed” on demand. We already have the capacity for such rapid manufacturing in some realms, but to address the breadth of the present crisis would require a comprehensive database of product designs, testing, firmware, and much else. Little of that infrastructure exists at present, highlighting a real danger: If we don’t construct a distributed, global build-it-here-now capacity, we might burn through our existing inventories without any way to replenish them. Then we will be truly stuck.
Many firms will no doubt have reservations about handing over their intellectual property, even to satisfy critical needs. This tension between normal business practice and public good echoes the contours of the dilemmas facing personal privacy and public health. We nevertheless need urgently to find a way to share trade secrets—temporarily—to preserve the kind of world within which business can one day operate normally.
Some governments have already signaled greater flexibility in enforcing patent and other intellectual-property protections during this crisis. Yet more is needed. Like Medtronic, businesses should take the plunge, open up, share their trade secrets, and provide guidance to others (even former competitors) to help us speed our way into a post-pandemic economy. Sharing today will make that return much faster and far less painful. To paraphrase a wise old technologist, we either hang together, or we will no doubt hang separately.
This article appears in the June 2020 print issue as “Not Business as Usual.”
A small pastry originating in Louhans, the corniotte is reminiscent of the old hats once worn by priests. It was originally celebrated on Ascension Day and sold by the nuns of the Hôtel-Dieu. Below you will find the recipe for making excellent corniottes.
The global semiconductor supply chain is having an interesting year. Having adjusted to the potential and realities of a U.S.-China trade war, it is now faced with an economy-halting pandemic. Friday’s news seemed a microcosm of what is emerging from this moment: a combination of less concentrated advanced manufacturing and attempts to pressure companies to bend to geopolitical objectives.
On 15 May, the world’s largest semiconductor foundry, TSMC, announced that it planned to build a US $12 billion fab in Arizona, which would begin production in 2024. (The $12 billion will be invested from 2021 through 2029.) That same day, the Trump administration said it would now require TSMC and other non-U.S. chipmakers to get a license from the U.S. Commerce Department if they want to ship chips to Huawei that were made using U.S. software and technology.
First, the new fab: According to VLSI Research’s Dan Hutcheson, a U.S. fab is partly a ploy to keep Apple happy. The iPhone-etc.-maker’s CEO Tim Cook has been pushing for such a move for some time to ensure supply continuity for the processors that go in the company’s products. These processors have historically used leading-edge chipmaking technology. Currently that’s TSMC’s 7-nanometer process, but the company says the next generation process, 5 nm, is in production now.
TSMC, of course, has other important customers for its leading-edge technologies. AMD, Xilinx, Qualcomm, and Nvidia are among them; and more recently, so are cloud giants such as Google, Microsoft, Facebook, and Amazon, which have been developing their own server and AI designs.
To keep them happy, the Arizona fab will have to operate at the most advanced process available. TSMC is promising a 5-nm fab there, but by 2024, when production is set to begin, TSMC may be moving to another process generation, 3 nm. But fabs are built to be upgraded, Hutcheson points out. They aren’t built around a particular technology, and it seems assured that whatever 3-nm and more advanced processes entail, they will still rely mainly on extreme-ultraviolet lithography, the same technology central to the 7-nm and 5-nm processes.
However, transferring a manufacturing process to a new location and getting it to the point that it yields a profitable proportion of wafers is never easy. Hutcheson notes that TSMC struggled with that when it first built fabs in Tainan, which is little more than an hour away by high-speed rail from its headquarters in Hsinchu. However, depending on where in Arizona the fab is located, the company may benefit from infrastructure and experienced employees related to Intel’s advanced fabs in Chandler.
The plant’s projected 20,000-wafer-per-month capacity figure is actually quite low compared to that of other facilities. It matches the company’s recently built 16-nm Fab 16 in Nanjing, China. But it’s not in the same league as the company’s planned 5-nm Fab 18 in southern Taiwan, which will have a nameplate capacity of 120,000 wafers per month. Still, 20,000 wafers per month is in line with the first phase of other new fabs, says Joanne Itow, managing director at Semico Research. And that capacity could translate to 144 million applications processors per year, according to Itow. That’s enough to partly supply several customers and generate about $1.44 billion in revenue for TSMC.
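The relationships among the figures Itow cites can be checked with simple arithmetic. The sketch below derives the implied dies per wafer and selling price per chip from the article's numbers; those two derived values are not stated in the article and are illustrative only.

```python
# Back-of-the-envelope check of the Arizona fab figures reported above.
# Inputs are taken from the article; dies-per-wafer and price-per-chip
# are derived here, not stated in the article.

wafers_per_month = 20_000        # planned capacity of the Arizona fab
chips_per_year = 144_000_000     # Itow's applications-processor estimate
revenue_per_year = 1.44e9        # dollars of TSMC revenue, per the article

wafers_per_year = wafers_per_month * 12
dies_per_wafer = chips_per_year / wafers_per_year    # implied good dies per wafer
price_per_chip = revenue_per_year / chips_per_year   # implied price per processor

print(f"{wafers_per_year:,} wafers/yr -> "
      f"{dies_per_wafer:.0f} dies/wafer at ${price_per_chip:.2f}/chip")
```

The numbers are self-consistent: 240,000 wafers a year yielding about 600 good dies each gives 144 million chips, and $1.44 billion over 144 million chips works out to roughly $10 per processor.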
That’s all assuming this fab actually happens. “Right now, it’s a PowerPoint fab,” says Hutcheson. TSMC’s own press release gives a very conditional feel: TSMC “announced its intention to build and operate an advanced semiconductor fab in the United States with the mutual understanding and commitment to support from the U.S. federal government and the State of Arizona.”
“Technically, it probably doesn’t matter where the chips are manufactured; however, in today’s tense trade arena the optics of having a fab in the United States provide a more positive partnership atmosphere,” says Itow.
The other TSMC news is much less of a win-win. The U.S. government has sought to starve Huawei of advanced semiconductors. In 2019, its Bureau of Industry and Security (BIS) added Huawei and its affiliates, particularly its semiconductor arm HiSilicon, to its list of entities that U.S. firms can’t sell to without a license. Huawei got around this by stepping up its own chip design capabilities, though it relies on foundries, especially TSMC, to manufacture its advanced chips. BIS is now seeking to tighten the screws by extending the licensing requirement to foundries that use U.S. software and tools to make Huawei’s chips.
In effect, the rule boils down to one country specifying which tools can be used in a factory in another country to produce goods for a customer in a third. TSMC is among the largest customers of U.S. chip toolmakers, and those toolmakers have reason to worry, according to the Semiconductor Industry Association. “We are concerned this rule may create uncertainty and disruption for the global semiconductor supply chain, but it seems to be less damaging to the U.S. semiconductor industry than the very broad approaches previously considered,” the organization’s CEO John Neuffer said in a statement.
The new rule will likely accelerate Huawei’s ongoing shift away from U.S. technology, says Nelson Dong, a senior partner in charge of national security at the international law firm Dorsey & Whitney and a board member at the nonprofit advocacy group the National Committee on US-China Relations. Indirectly, “this move may well force the global semiconductor industry to look away from U.S. suppliers of semiconductor design tools and semiconductor production equipment and even to create new rival companies in other countries, including China itself,” he says. He cites the example of export restrictions in the satellite industry, which ultimately led to the growth of competing businesses outside the United States and higher prices for U.S. satellite makers due to their suppliers’ smaller market.
It is difficult to imagine how the U.S. could enforce such a rule in an advanced fab. “Fabs are an extreme version of ‘What happens in Vegas, stays in Vegas,’ ” quips Hutcheson. Manufacturing processes are proprietary and very closely guarded. “How would they even know it was going on?”
The Ashaninka Indigenous community in Brazil has won a two-decade legal battle in federal court against illegal logging interests, receiving $3 million in compensation and a formal apology from the companies for felling thousands of mahogany, cedar, and other trees in the Kampa do Rio Amônia Indigenous Reserve.
Beginning in the early 1980s, logging companies owned by the Cameli family illegally harvested mature trees from the Ashaninka's ancestral lands to supply the European furniture industry.
The $3 million settlement will go directly to projects protecting the Ashaninka community and the Amazon rainforest.
Experts said the case could serve as a legal precedent in other Indigenous and environmental disputes in Brazil.
source: the_happy_broadcast – Yale Environment – photo credit: the_happy_broadcast – pixabay