FreshRSS

Yesterday — 6 August 2020
Your RSS feeds

Châlons-en-Champagne: this artist turns his city's utility poles into colorful characters

By Sophie Renassia
In Châlons-en-Champagne, utility poles have been transformed into characters bursting with color. A lovely way to put a smile on residents' faces.

From the day before yesterday
Your RSS feeds

This photographer highlights the impact that words can have (10 photos)

By Mégane Bouron
This magnificent photo series denounces the injustices of our society with delicacy and originality. Worth discovering.

New Records for AI Training

By Samuel K. Moore

The most broadly accepted suite of seven standard tests for AI systems released its newest rankings Wednesday, and GPU maker Nvidia swept all the categories for commercially available systems with its new A100 GPU-based computers, breaking 16 records. It was, however, the only entrant in some of them.

The rankings are by MLPerf, a consortium whose membership includes both AI powerhouses like Facebook, Tencent, and Google and startups like Cerebras, Mythic, and SambaNova. MLPerf's tests measure the time it takes a computer to train a particular set of neural networks to an agreed-upon accuracy. Since the previous round of results, released in July 2019, the fastest systems have improved by an average of 2.7x, according to MLPerf.

“MLPerf was created to help the industry separate the facts from fiction in AI,” says Paresh Kharya, senior director of product management for data center computing at Nvidia. Nevertheless, most of the consortium members have not submitted training results. Alibaba, Dell, Fujitsu, Google, and Tencent were the only others competing in the commercially- or cloud-available categories. Intel had several entries for systems set to come to market within the next six months.

Among the commercial and cloud-available systems, Nvidia's A100 DGX SuperPOD [bright green] trained neural networks the fastest.
Image: Nvidia

In this, the third round of MLPerf training results, the consortium added two new benchmarks and substantially revised a third, for a total of seven tests. The two new benchmarks are called BERT and DLRM.

BERT, for Bidirectional Encoder Representations from Transformers, is used extensively in natural language processing tasks such as translation, search, understanding and generating text, and answering questions. It is trained on Wikipedia. At 0.81 minutes, Nvidia had the fastest training time among the commercially available systems for this benchmark, but an internal R&D Google system nudged past it with a 0.39-minute training run.

DLRM, for Deep Learning Recommendation Model, is representative of the recommender systems used in online shopping, search results, and social media content ranking. It's trained on a terabyte-sized set of click logs supplied by Criteo AI Lab. That dataset contains the click logs of four billion user and item interactions over a 24-day period. Though Nvidia stood alone among the commercially available entrants for DLRM, with a 3.3-minute training run, a system internal to Google won this category with a 1.2-minute effort.

Besides adding DLRM and BERT, MLPerf raised the difficulty level of the Mini-Go benchmark. Mini-Go uses a form of AI called reinforcement learning to learn to play Go on a full-size 19 x 19 board; previous versions used smaller boards. "It's the hardest benchmark," says Kharya. Mini-Go has to simultaneously play the game of Go, process the data from those games, and train the network on that data. "Reinforcement learning is hard because it's not using an existing data set," he says. "You're basically creating the dataset as you go along."

On a per-processor basis, Nvidia had a good run as well.
Image: Nvidia

According to Jonah Alben, Nvidia's vice president of GPU engineering, reinforcement learning is increasingly important in robotics, because it could allow robots to learn new tasks without the risk of damaging people or property.

Nvidia's only other competition on Mini-Go came from a not-yet-commercial system from Intel, which came in at 409 minutes, and from an internal system at Google, which took just under 160 minutes.

Nvidia tested all its benchmarks using the Selene supercomputer, which is made from the company’s DGX SuperPOD computer architecture. The system ranks 7th in the Top500 supercomputer list and is the second most powerful industrial supercomputer on the planet.

VIDEO: Vegetables aren't just good for your health, they're good for your ears too

By Mégane Bouron
Once the concert is over, the musicians cook the vegetables and serve them to the audience. An unusual concept.

VIDEO: This plus-size dancer shows that every body can do sport

By Mégane Bouron
For this burlesque dancer, weight is no barrier to sport, and she intends to prove it. An inspiring discovery.

She created a photo project to reveal true skin colors: an artistic feat

By Mégane Bouron
How many colors are there in the human rainbow? According to this photographer, there are more than 4,000 different skin tones.

Participatory ecological and solidarity budget: the Île-de-France Region allocates 500 million euros

By Publi-Rédactionnel (sponsored content)
The Île-de-France Region is launching the first participatory ecological and solidarity budget to involve its residents in a regional civic initiative.

Feeding, healing, producing, consuming: 10 drawings to imagine the world after

By Mathilde Sallé de Chou
Launched by the Alternatiba movement, the "Et si…" ("What if…") project brings together 60 artists and authors to sketch the outlines of their ideal post-crisis world.

Startup and Academics Find Path to Powerful Analog AI

By Samuel K. Moore

Engineers have been chasing a form of AI that could drastically lower the energy required to do typical AI things like recognize words and images. This analog form of machine learning does one of the key mathematical operations of neural networks using the physics of a circuit instead of digital logic. But one of the main things limiting this approach is that deep learning’s training algorithm, back propagation, has to be done by GPUs or other separate digital systems.

Now University of Montreal AI expert Yoshua Bengio, his student Benjamin Scellier, and colleagues at startup Rain Neuromorphics have come up with a way for analog AIs to train themselves. That method, called equilibrium propagation, could lead to continuously learning, low-power analog systems of far greater computational ability than most in the industry now consider possible, according to Rain CTO Jack Kendall.

Analog circuits could save power in neural networks in part because they can efficiently perform a key calculation, called multiply and accumulate. That calculation multiplies values from inputs according to various weights, and then it sums all those values up. Two fundamental laws of electrical engineering can basically do that, too: Ohm's Law multiplies voltage and conductance to give current, and Kirchhoff's Current Law sums the currents entering a point. By storing a neural network's weights in resistive memory devices, such as memristors, multiply-and-accumulate can happen completely in analog, potentially reducing power consumption by orders of magnitude.
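To make that arithmetic concrete, here is a minimal numerical sketch (in Python, with made-up voltages and conductances) of how a single crossbar column performs a multiply-and-accumulate: Ohm's Law supplies the per-device products, and Kirchhoff's Current Law supplies the sum.

```python
import numpy as np

# Minimal sketch: one crossbar column as a multiply-accumulate unit.
# Inputs are encoded as voltages, weights as conductances (e.g., memristor states).
voltages = np.array([0.3, -0.1, 0.5, 0.2])       # layer inputs, in volts
conductances = np.array([1.2, 0.8, 0.4, 1.5])    # stored weights, in siemens

# Ohm's Law: each device contributes a current I_i = G_i * V_i.
currents = conductances * voltages

# Kirchhoff's Current Law: currents entering the output node simply add up,
# so the summed current *is* the dot product of inputs and weights.
output_current = currents.sum()

assert np.isclose(output_current, np.dot(conductances, voltages))
print(f"Accumulated current: {output_current:.3f} A")
```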

The reason analog AI systems can't train themselves today has a lot to do with the variability of their components. Just like real neurons, those in analog neural networks don't all behave exactly alike. To do back propagation with analog components, you must build two separate circuit pathways: one going forward to come up with an answer (called inferencing), the other going backward to do the learning so that the answer becomes more accurate. But because of the variability of analog components, the pathways don't match up.

“You end up accumulating error as you go backwards through the network,” says Bengio. To compensate, a network would need lots of power-hungry analog-to-digital and digital-to-analog circuits, defeating the point of going analog.

Equilibrium propagation allows learning and inferencing to happen on the same network, partly by adjusting the behavior of the network as a whole. “What [equilibrium propagation] allows us to do is to say how we should modify each of these devices so that the overall circuit performs the right thing,” he says. “We turn the physical computation that is happening in the analog devices directly to our advantage.”

Right now, equilibrium propagation works only in simulation. But Rain plans to have a hardware proof of principle in late 2021, according to CEO and cofounder Gordon Wilson. "We are really trying to fundamentally reimagine the hardware computational substrate for artificial intelligence, find the right clues from the brain, and use those to inform the design of this," he says. The result could be what they call end-to-end analog AI systems that are capable of running sophisticated robots or even playing a role in data centers. Both of those are currently considered beyond the capabilities of analog AI, which is now focused only on adding inferencing abilities to sensors and other low-power "edge" devices, while leaving the learning to GPUs.

The Diocese of Arras wants to demolish the church of Sainte Germaine in Calais

By Maximilien Bernard
A petition is online. Before demolition, the deconsecration procedure must be initiated and the approval of the Bâtiments de France obtained, since the stained-glass windows of the church are… Read more ...

This AI Can See the Forest and the Trees

By Zack Parisa
Image: SilviaTerra
Color My World: This map, the result of applying machine learning to satellite imagery, shows the mix of dominant tree species for a portion of California's Sierra Nevada range. Red is used to indicate areas of canyon live oak (Quercus chrysolepis), green for incense cedar (genus Calocedrus), and blue for white fir (Abies concolor).

In 2007, one of us (Parisa) found himself standing alone in the woods of Armenia and fighting off a rising feeling of dread.

Armenia, a former Soviet-bloc country, is about the size of Maryland. Its forests provide residents with mushrooms and berries, habitat for game animals, and firewood to heat homes during the cold winters. The forests also shelter several endangered bird species.

Parisa, then a first-year graduate student studying forestry, was there to help the country figure out a plan for managing those forests. The decisions the Armenian people make about their forests must balance economic, cultural, and conservation values, and those decisions will have repercussions for years, decades, or even centuries to come. To plan properly, Armenians need to answer all sorts of questions. What level of firewood harvest is sustainable? How can those harvests be carried out while minimizing disruption to bird habitat? Can these logging operations open up spaces in a way that helps people to gather more berries?

Across the world, communities depend on expert foresters to help them manage forests in a way that best balances such competing needs. In turn, foresters depend on hard data—and have done so for a very long time.

In the early 1800s, foresters were at the forefront of a “big data” revolution of sorts. It wasn’t feasible to count every tree on every hectare, so foresters had to find another way to evaluate what the land held. The birth of scientific forestry early in the 19th century in Saxony ushered in rudimentary statistical sampling techniques that gave reliable estimates of the distribution of the sizes and species of trees across large swaths of land without someone having to measure every single tree.

A collection of this type of data is called a forest inventory, which foresters use to develop management plans and to project what the forest will look like in the future. The techniques forged two centuries ago to create such inventories—laborious field sampling to arrive at population statistics—have remained largely unchanged to this day: hundreds of foresters working in the United States still count trees with pencil and paper.
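As a rough illustration of the plot-sampling arithmetic behind such an inventory, here is a minimal Python sketch with hypothetical numbers: counts from a handful of fixed-area plots are scaled up to a per-hectare density and a total estimate, along with a standard error.

```python
import statistics as stats

# Hypothetical field sample: trees counted on ten 0.05-hectare plots.
plot_area_ha = 0.05
trees_per_plot = [12, 9, 15, 11, 8, 14, 10, 13, 9, 12]

# Scale each plot count to a per-hectare density.
densities = [n / plot_area_ha for n in trees_per_plot]

mean_density = stats.mean(densities)                   # trees per hectare
std_error = stats.stdev(densities) / len(densities) ** 0.5

forest_area_ha = 250_000                               # area to be inventoried
estimated_total = mean_density * forest_area_ha

print(f"Estimated density: {mean_density:.0f} ± {std_error:.0f} trees/ha")
print(f"Estimated total:   {estimated_total:,.0f} trees")
```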

Parisa was excited to help communities in Armenia develop forest management plans. He had been assured that he’d have good data for the large area where he was to work, in and around Dilijan National Park. But the “forest inventory” he’d been promised turned out to be the translated field notes from Soviet foresters who had visited the area more than 30 years earlier—observations along the lines of “Went on walk on southern exposure of the mountain. Many pine, few beech.” Such casual observations couldn’t possibly provide a solid foundation on which to build a forest management plan.

Parisa needed to inventory hundreds of thousands of hectares of forest. He knew, though, that a single forester can assess roughly 20 hectares (about 50 acres) in a day. Unless he wanted to spend the next decade counting trees in Armenia, he had to find a way to get those numbers faster.

Parisa grew up in Huntsville, Ala., where his father worked for NASA. Once, when Parisa was 8 years old, he hit a baseball through a window, and his dad punished him by making him calculate the amount of force behind the ball. He got good at that sort of exercise and later came to study forestry with an unusually quantitative skill set.

In Armenia, Parisa put those skills to work figuring out how to compile a complete forest inventory using remote sensing, which has been the holy grail of forestry for decades. Within 18 months, he developed the core of the machine-learning approach that the two of us later used in founding SilviaTerra, a startup based in San Francisco that's dedicated to producing forest inventories from remotely sensed data. Here's an overview of some of the challenges we faced, how we overcame them, and what we've been able to do with this technology.

Photos, from top: Shutterstock; SilviaTerra (2)
Sizing Up Trees: The research behind the authors' work was initially focused on Dilijan National Park, in Armenia, site of the 13th-century Haghartsin Monastery [top]. Their company now offers specialized maps of U.S. forests, such as this one of Superior National Forest, in Minnesota, with warmer colors showing areas of better-quality moose habitat [middle], and of Arkansas, with warmer colors showing higher amounts of carbon stored in the forest [bottom].

Most people rarely think about forests, yet they play a vital role in our lives. The lumber that was used to build your house, the paper cup that holds your morning coffee, and the cardboard box containing your latest online delivery all came from a tree growing in the woods.

Measuring the potential of forests to provide those things has historically been expensive, slow, and low tech. The biggest forestry companies in the United States spend millions each year paying people to laboriously count and measure trees. The forests owned by such companies make up a sizable fraction of the U.S. total. So it made sense for us to concentrate on such places after we launched SilviaTerra in 2010.

The next year, our fledgling startup won the Sabin Sustainable Venture Prize from the Yale Center for Business and the Environment, in New Haven, Conn. We spent some of the US $25,000 prize money driving around the southeast United States in a pickup truck, looking for companies that owned more than 10,000 acres of forest so we could set up meetings with their executives.

We soon found our first paying customers. Later, we signed contracts with companies elsewhere, eventually applying our technology to all of the major forest types in the United States.

For the most part, our service has proved very attractive—so it isn’t a hard sell. What we offer is analogous to what farmers require to practice precision agriculture, a general approach that often uses remote sensing to inform decisions about what to grow, how to fertilize it, when to harvest the crop, and so forth. You might say that SilviaTerra is enabling “precision forestry.”

Being precise about forests, however, is more difficult than being precise about farmland. For one thing, you almost always know what you’ve planted in your fields, and it’s almost invariably just one crop. But natural forests can have a bewildering mix of tree species. Often, the dominant tree species can hide other kinds of trees lower in the canopy. And while crops are generally planted in rows or other regular geometries, forests usually have a much more organic spatial arrangement (although some managed plantations do have trees growing neatly in rows). What’s more, forests tend to be, um, out in the woods, and their remoteness makes it hard to collect ground truth.

Another technical challenge for us has been dealing with a veritable tsunami of data. The Landsat satellite archive, for example, stretches back to 1972 and is enormously rich, with millions of images, for both optical and infrared bands. And the amount of nationwide high-resolution aerial imagery, digital elevation maps, and so forth just keeps growing every day. There are now terabytes of relevant data to digest.

Photos: SilviaTerra
The Whole Megillah: The authors' company, SilviaTerra, used satellite imagery for its national Basemap project, which tracks forest characteristics throughout the United States, producing such results as this map of tree density. Members of the SilviaTerra team stand on a giant stump [bottom] at Big Trees State Park in Calaveras County, Calif.

An even taller hurdle was finding a way to analyze the imagery in a way that gives reliable estimates. The executives of publicly owned timber companies are especially keen on having good estimates, because they have to report accurate numbers about their holdings to investors.

Another big challenge was dealing with the fact that most of the satellite imagery available to us was of quite limited resolution—typically 15 meters. That’s much too coarse to make out individual trees in an image. As a result, we had to use a statistical technique rather than computer vision per se here. (One benefit of this statistical approach is that it avoids the biases that commonly result with high-resolution tree-delineation methods.)

For all these reasons, creating an inventory of what’s growing in a forest is technically more difficult than creating an inventory of what’s growing on a farmer’s field. The economic stakes are also different: The value of the annual crop harvest in the United States is about $400 billion, while the annual timber harvest is only $10 billion.

That said, forests provide many benefits that nobody pays for, including wildlife habitat, carbon sequestration, and water filtration, not to mention nice places to camp for the weekend.

More than 20 years ago, the economist Robert Costanza and others examined the value of the various ecosystem services that forests deliver, even though no money changes hands. Based on those results, we estimate that U.S. forests provide about $100 billion worth of ecosystem services every year. Part of our mission at SilviaTerra is to help put real numbers on these ecosystem services for every acre of forest in the United States.

The output of our very complicated machine-learning system for processing remotely sensed forest imagery is actually very simple: For each 1/20 of an acre (0.02 hectare, or a little smaller than the footprint of an average U.S. home), the system builds a list of the trees standing there. The list includes the species of each tree and its diameter as measured 4.5 feet (1.4 meters) off the ground, following standard U.S. forestry practice. Other key metrics, such as tree height and total carbon storage, can be derived from these values. Things like wildfire risk or the suitability of the land as deer habitat can be modeled based on the types of trees present.
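For illustration only, a per-cell tree list of this kind might be represented roughly as follows; the structure and the derived basal-area calculation (a standard forestry identity) are a sketch, not SilviaTerra's actual data model.

```python
from dataclasses import dataclass

@dataclass
class TreeRecord:
    species: str
    dbh_inches: float  # diameter at breast height, measured 4.5 feet off the ground

@dataclass
class Cell:
    """One 1/20-acre grid cell and the list of trees standing on it."""
    cell_id: str
    trees: list

    def basal_area_sq_ft(self) -> float:
        # Standard forestry identity: basal area of a stem (sq ft) = 0.005454 * DBH^2 (inches).
        return sum(0.005454 * t.dbh_inches ** 2 for t in self.trees)

cell = Cell("sierra-000123", [TreeRecord("Quercus chrysolepis", 14.2),
                              TreeRecord("Abies concolor", 22.5)])
print(f"{len(cell.trees)} trees, basal area {cell.basal_area_sq_ft():.2f} sq ft")
```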

To create this giant list of trees, we combined thousands of field measurements with terabytes of satellite imagery. So we needed field data for the entire United States. Fortunately, for decades U.S. taxpayers have paid the U.S. Forest Service to establish a nationwide grid of forest measurements. This amazing collection of observations spans the continental United States, and it provided exactly what we needed to train our machine-learning system to gauge the number, size, and species of trees present in remote-sensing imagery.

In most remote-sensing forestry efforts, a human analyst starts with a single image that he or she hopes will document everything in the area of interest. For example, the analyst might use lidar data in the form of a high-resolution point cloud (the coordinates of a set of points in 3D space) to figure out the number of trees present, as well as their heights and species.

Lidar imagery is expensive to obtain, though, so there’s not much of it around. And what can be had is often sorely out of date or incomplete. For these reasons, we instead relied on a wide range of free satellite and aerial imagery. We used all kinds—visible light, near-infrared, radar—because each kind of image tells you about a different aspect of the forest. Landsat imagery stretching back decades is often great for picking up on the differences among species, while radar typically contains much more information about overall forest structure. The key is to combine these different types of imagery and analyze them in a statistically rigorous way.

Before we took on this problem, a single high-resolution inventory of all U.S. forests did not exist. But if society is going to prevent more wildfires, grow rural economies in a sustainable way, and manage climate change, a much better understanding of our forests is needed. We boosted that understanding in a unique way when we finished our nationwide forest Basemap project last year.

Although we had previously applied our methodology to many focused projects, compiling a forest inventory for the continental United States was an entirely new scale of undertaking. We were very fortunate to partner with Microsoft, which in 2017 launched its AI for Earth grant program to provide the company’s tools to outside teams working on conservation projects. We applied for and ultimately received a grant to expand the forest inventory work we had been doing.

Using Microsoft Azure, the company’s cloud-computing platform, we were able to process over 10 TB of satellite imagery. It wasn’t just a matter of needing more computing power. Modeling the particular kinds of forests present in different regions was a major challenge. So was recognizing issues with data integrity. We spent one confused weekend, for example, trying to sort out problems in the output before we realized that some high-resolution aerial imagery is blacked out over military bases!

While we weren’t expecting such artificial holes in the data, we knew from our prior work that it can be hard to find cloud-free images of a given area. For some regions—especially in the Pacific Northwest—you simply can’t find any such images that cover an appreciable area.

Luckily, Lin Yan, now of Michigan State University, published a method for dealing with just this problem in 2018. When an image is obscured by a cloud, his algorithm replaces the cloud, pixel by pixel, with pixels from another image obtained when the sky over that spot of land was clear. We applied Yan’s algorithm to produce a set of cloud-free images, which were much easier to analyze.
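The core substitution step can be sketched in a few lines of Python; this is a simplified illustration of the idea, not Yan's published algorithm, and the arrays and mask are invented for the example.

```python
import numpy as np

def fill_clouds(image, cloud_mask, clear_reference):
    """Replace cloud-covered pixels with pixels from a clear-sky image
    of the same area acquired on another date.

    image:           (H, W, bands) array with clouds
    cloud_mask:      (H, W) boolean array, True where clouds obscure the ground
    clear_reference: (H, W, bands) array from a cloud-free acquisition
    """
    composite = image.copy()
    composite[cloud_mask] = clear_reference[cloud_mask]
    return composite

# Toy example: a 4x4 single-band scene with a cloudy corner.
scene = np.full((4, 4, 1), 0.2)
reference = np.full((4, 4, 1), 0.5)
mask = np.zeros((4, 4), dtype=bool)
mask[:2, :2] = True

print(fill_clouds(scene, mask, reference)[:, :, 0])
```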

We unveiled our nationwide forest inventory last year, but we knew it was just a starting point: Having better information doesn’t do any good unless it actually affects the decisions that people are making about their land. So influencing those decisions is now our focus.

For that, we’ve again partnered with Microsoft, which intends to become carbon negative by 2030. Microsoft can’t cease emitting carbon dioxide entirely, but it plans to offset its emissions, at least in part by paying forest owners to defer their timber harvests and thus sequester carbon through the growth of trees.

Carbon markets are not new, but they’ve been notoriously ineffective because it’s very hard to monitor such carbon sequestration. Our Basemap, which is updated annually, now makes that monitoring straightforward.

New possibilities also open up. The California carbon market, for example, is accessible only to landowners with more than 2,000 hectares of trees—smaller forests are too expensive to monitor. It also requires forest owners to make a 100-year commitment to keep carbon stocks at a certain level. Yet the most important time to sequester carbon is now, not a century in the future. A shorter-term contract of one year would provide the same immediate benefit at a lower cost, allowing much larger areas to be protected, at least in the short term.

Our Basemap dramatically lowers the cost of monitoring forests over time, which will allow millions of small landowners to participate in such markets. And because the Basemap is updated every year, Microsoft and others can make payments to those landowners year after year, providing much greater value for the money spent combating climate change.

Markets work well for commodities like corn, because when you sign a futures contract to sell corn at a certain price, someone down the line has to deliver a quantity of corn to a warehouse. There, the corn will be weighed and examined, so it’s easy enough to measure what’s being bought.

Using markets to influence carbon sequestration or land conservation is much harder, in large part because these processes usually take place out of sight, somewhere out in the woods. It’s difficult enough to put a dollar value on what has been gained by not cutting trees down, but if you can’t even determine whether trees have been harvested from a given area, you’ll be very reluctant to pay a landowner for the promise not to cash in on his or her timber reserves.

SilviaTerra’s Basemap now gives people in the United States a way to measure and pay for trees that are allowed to remain standing so that these forests will continue to provide important ecosystem services. Being able to see the forest and the trees in this way, we believe, will help shape a more sustainable future.

About the Author

Zack Parisa and Max Nova are cofounders of the San Francisco–based precision-forestry startup SilviaTerra.

Power Grids Should Be as Data Driven as the Internet

By Stacey Higginbotham
Illustration: Dan Page

Governments are setting ambitious renewable energy goals in response to climate change. The problem is, the availability of renewable sources doesn’t align with the times when our energy demands are the highest. We need more electricity for lights when the sun has set and solar is no longer available, for example. But if utilities could receive information about energy usage in real time, as Internet service providers already do with data usage, it would change the relationship we have with the production and consumption of our energy.

Utilities must still meet energy demands regardless of whether renewable sources are available, and they still have to mull whether to construct expensive new power plants to meet expected spikes in demand. But real-time information would make it easier to use more renewable energy sources when they’re available. Using this information, utilities could set prices in response to current availability and demand. This real-time pricing would serve as an incentive to customers to use more energy when those sources are available, and thus avoid putting more strain on power plants.

California is one example of this strategy. The California Energy Commission hopes that establishing rules for real-time pricing of electricity will demonstrate how overall demand and availability affect the cost. It's like surge pricing for a ride share: the idea is that electricity would cost more during peak demand. But the strategy would likely generate savings for people most of the time.

Granted, most people won’t be thrilled with the idea of paying more to dry their towels in the afternoons and evenings, as the sun goes down and demand peaks. But new smart devices could make the pricing incentives both easier on the customer and less visible by handling most of the heavy lifting that a truly dynamic and responsive energy grid requires.

For example, companies such as Ecobee, Nest, Schneider Electric, and Siemens could offer small app-controlled computers that would sit on the breaker boxes outside a building. The computer would manage the flow of electricity from the breaker box to the devices in the building, while the app would help set priorities and prices. It might ask the user during setup to decide on an electricity budget, or to set devices to have priority over other devices during peak demand.
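As a rough sketch of the kind of logic such a breaker-box computer might run, the Python snippet below defers low-priority loads whenever the published price exceeds a user-set budget; the device names, priorities, and prices are hypothetical.

```python
# Minimal sketch of a breaker-box load controller:
# given the current price and per-device priorities, decide what stays on.
# Device names, priorities, and the budget are hypothetical.

def plan_loads(price_per_kwh, budget_per_kwh, devices):
    """Keep essential loads on; defer flexible ones when the price exceeds the budget."""
    decisions = {}
    for name, priority in devices.items():
        if price_per_kwh <= budget_per_kwh or priority == "essential":
            decisions[name] = "on"
        else:
            decisions[name] = "deferred"
    return decisions

devices = {
    "refrigerator": "essential",
    "medical_equipment": "essential",
    "clothes_dryer": "flexible",
    "ev_charger": "flexible",
}

# Peak-hour price above the user's budget: flexible loads wait for cheaper power.
print(plan_loads(price_per_kwh=0.45, budget_per_kwh=0.25, devices=devices))
```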

Back in 2009, Google created similar software called Google PowerMeter, but the tech was too early—the appliances that could respond to real-time information weren’t yet available. Google shut down the service in 2011. Karen Herter, an energy specialist for the California Energy Commission, believes that the state’s rules for real-time pricing will be the turning point that convinces energy and tech giants to build such smart devices again.

This year, the CEC is writing rules for real-time pricing. The agency is investigating rates that update every hour, every 15 minutes, and every 5 minutes. No matter what, the rates will be publicly available, so that breaker box computers at homes and businesses can make decisions about what to power and when.

We will all need to start caring about when we use electricity—whether to spend more money to run a dryer at 7 p.m., when demand is high, or run it overnight, when electricity may be cheaper. California, with the rules it’s going to have in place by January 2022, could be the first to create a market for real-time energy pricing. Then, we may see a surge of devices and services that could increase our use of renewable energy to 100 percent—and save money on our electric bills along the way.

This article appears in the August 2020 print issue as “Data-Driven Power.”

Peer Review of Scholarly Research Gets an AI Boost

By Payal Dhar

In the world of academics, peer review is considered the only credible validation of scholarly work. Although the process has its detractors, evaluation of academic research by a cohort of contemporaries has endured for over 350 years, with “relatively minor changes.” However, peer review may be set to undergo its biggest revolution ever—the integration of artificial intelligence.

Open-access publisher Frontiers has debuted an AI tool called the Artificial Intelligence Review Assistant (AIRA), which purports to eliminate much of the grunt work associated with peer review. Since the beginning of June 2020, every one of the 11,000-plus submissions Frontiers received has been run through AIRA, which is integrated into its collaborative peer-review platform. This also makes it accessible to external users, accounting for some 100,000 editors, authors, and reviewers. Altogether, this helps “maximize the efficiency of the publishing process and make peer-review more objective,” says Kamila Markram, founder and CEO of Frontiers.

AIRA’s interactive online platform, a first of its kind in the industry, has been in development for three years. It performs three broad functions, explains Daniel Petrariu, director of project management: assessing the quality of the manuscript, assessing the quality of the peer review, and recommending editors and reviewers. At the initial validation stage, the AI can make up to 20 recommendations and flag potential issues, including language quality, plagiarism, integrity of images, conflicts of interest, and so on. “This happens almost instantly and with [high] accuracy, far beyond the rate at which a human could be expected to complete a similar task,” Markram says.

“We have used a wide variety of machine-learning models for a diverse set of applications, including computer vision, natural language processing, and recommender systems,” says Markram. This includes simple bag-of-words models, as well as more sophisticated deep-learning ones. AIRA also leverages a large knowledge base of publications and authors.

Markram notes that, to address issues of possible AI bias, “We…[build] our own datasets and [design] our own algorithms. We make sure no statistical biases appear in the sampling of training and testing data. For example, when building a model to assess language quality, scientific fields are equally represented so the model isn’t biased toward any specific topic.” Feedback from domain experts, including the errors they catch, is captured and used as additional training data for the machine- and deep-learning models. “By regularly re-training, we make sure our models improve in terms of accuracy and stay up-to-date.”

The AI’s job is to flag concerns; humans make the final decisions, says Petrariu. As an example, he cites image manipulation detection—something AI is super-efficient at but that is nearly impossible for a human to perform with the same accuracy. “About 10 percent of our flagged images have some sort of problem,” he adds. “[In academic publishing] nobody has done this kind of comprehensive check [using AI] before,” says Petrariu. AIRA, he adds, facilitates Frontiers’ mission to make science open and knowledge accessible to all.

The Jamel Comedy Club comes to Cannes

By sophie

From Friday 31 July to Monday 3 August 2020, the City of Cannes hosts the Jamel Comedy Club troupe in the gardens of the Médiathèque Noailles for exclusive evenings of stand-up.

Each day, five performers take the stage for two 90-minute shows, at 7:30 p.m. and 10 p.m. The audience is invited to discover the new talents of tomorrow in a friendly atmosphere and an exceptional setting, right on the grass.

Tickets: www.weezevent.com

More information about the event

source – photo credit: screenshot

Attention Rogue Drone Pilots: AI Can See You!

By Mark Anderson

The minute details of a rogue drone’s movements in the air may unwittingly reveal the drone pilot’s location—possibly enabling authorities to bring the drone down before, say, it has the opportunity to disrupt air traffic or cause an accident. And it’s possible without requiring expensive arrays of radio-triangulation and signal-location antennas.

So says a team of Israeli researchers who have trained an AI drone-tracking algorithm to reveal the drone operator’s whereabouts, with a better than 80 per cent accuracy level. They are now investigating whether the algorithm can also uncover the pilot’s level of expertise and even possibly their identity.

Gera Weiss—professor of computer science at Ben-Gurion University of the Negev in Beersheba, Israel—said the algorithm his team has developed partly relies on the specific terrain around an airport or other high-security location.

After testing neural nets including dense networks and convolutional neural networks, the researchers found that a kind of recurrent neural net called a “gated-recurrent unit” (GRU) network worked best for drone tracking. “Recurrent networks are good at this,” Weiss said. “They consider the sequenced reality of the data—not just in space but in time.”
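For readers curious what a GRU-based tracker looks like in code, here is a minimal PyTorch sketch that maps a sequence of drone positions to an estimated operator location; the layer sizes and inputs are illustrative, not the researchers' published model.

```python
import torch
import torch.nn as nn

class OperatorLocator(nn.Module):
    """Toy GRU regressor: a sequence of drone positions -> estimated pilot (x, y).
    Architecture and sizes are illustrative only."""

    def __init__(self, input_size=3, hidden_size=64):
        super().__init__()
        self.gru = nn.GRU(input_size, hidden_size, batch_first=True)
        self.head = nn.Linear(hidden_size, 2)

    def forward(self, track):               # track: (batch, time_steps, 3)
        _, last_hidden = self.gru(track)    # last_hidden: (1, batch, hidden_size)
        return self.head(last_hidden[-1])   # (batch, 2) predicted location

model = OperatorLocator()
fake_tracks = torch.randn(8, 120, 3)        # 8 simulated flights, 120 radar samples each
predicted_xy = model(fake_tracks)
print(predicted_xy.shape)                   # torch.Size([8, 2])
```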

So, he said, a security professional at an airport, for instance, would hire white-hat malefactors to launch a drone from various locations around the airport. The security team would then record the drone’s exact movements on airport radar systems.

Ultimately, the GRU algorithm would then train on this data—knowing in this case the pilot’s location and the peculiar details of the drone’s flight patterns.

Depending on the specific terrain at any given airport, a pilot operating a drone near a camouflaging patch of forest, for instance, might have an unobstructed view of the runway. But that location might also be a long distance away, possibly making the operator more prone to errors in precise tracking of the drone. Whereas a pilot operating nearer to the runway may not make those same tracking errors but may also have to contend with big blind spots because of their proximity to, say, a parking garage or control tower.

And in every case, he said, simple geometry could begin to reveal important clues about a pilot’s location, too. When a drone is far enough away, motion along a pilot’s line of sight can be harder for the pilot to detect than motion perpendicular to their line of sight. This also could become a significant factor in an AI algorithm working to discover pilot location from a particular drone flight pattern.

The sum total of these various terrain-specific and terrain-agnostic effects, then, could be a giant finger pointing to the operator. This AI application would also be unaffected by any relay towers or other signal spoofing mechanisms the pilot may have put in place.

Weiss said his group tested their drone tracking algorithm using Microsoft Research’s open source drone and autonomous vehicle simulator AirSim. The group presented their work-in-progress at the Fourth International Symposium on Cyber Security, Cryptology and Machine Learning at Ben-Gurion University earlier this month.

Their paper boasts a 73 per cent accuracy rate in discovering drone pilots’ locations. Weiss said that in the few weeks since publishing that result, they’ve now improved the accuracy rate to 83 per cent.

Now that the researchers have proved the algorithm’s concept, Weiss said, they’re hoping next to test it in real-world airport settings. “I’ve already been approached by people who have the flight permissions,” he said. “I am a university professor. I’m not a trained pilot. Now people that do have the facility to fly drones [can] run this physical experiment.”

Weiss said it’s as yet unclear how terrain-agnostic their algorithm is. Could a neural net trained on the terrain surrounding one airport then be effectively deployed at another airport—or another untrained region of the same airport?

Another open question, he said, involves whether the algorithm could also be reversed: Could drone flight patterns around an unmapped terrain then be used to discover features of the terrain?

Weiss said they hope to tackle these questions in future research, alongside possible applications of a series of recent findings that attempt to rate operator skill levels from motion tracking data.

One finding even goes so far as to pinpoint and classify idiosyncrasies in piloting skills—so perhaps repeat offenders might one day be spotted by the thumbprint their copter leaves on the sky.

The lake from the movie "Dirty Dancing" is full again after 12 years without water

By sophie

Made famous by the movie Dirty Dancing, the lake, located in the state of Virginia, is the scene of a surprising natural phenomenon.

Who doesn't know the movie Dirty Dancing? Released in theaters in 1987, it tells the story of a young woman, Baby Houseman (Jennifer Grey), stuck on a boring vacation with her family… until the day she meets Johnny Castle (Patrick Swayze), one of the hotel's dance instructors. Through dance and love, Baby comes into her own. One of the most iconic scenes in the film is without question the lift rehearsal in the lake.

And this lake, located in the New River Valley on the grounds of the Mountain Lake Lodge, the hotel that appeared in the film as Kellerman's Resort, is the site of an extraordinary natural phenomenon. On a cycle of roughly 400 years, it empties completely of its water before filling up again. As it drains, like a plumbing system, the lake cleans itself by flushing out the sediment stagnating on the bottom. The lake was thus almost completely dry between 2008 and 2012, looking like a small green meadow.

In a video published on the hotel's website, scientists explain that they "tried to speed up Mother Nature's work." Their goal was to repair the lake's leaks without stopping it from draining entirely, so that the water would escape in smaller quantities and the lake would remain mostly full. They finally succeeded, "through a natural process," by plugging the holes in the bottom and sides of the lake. And thanks to a rainy spring in 2020, the lake began to fill. Since 12 July, the water level has already risen by a third, offering a dreamlike landscape with the Appalachian mountain range as a backdrop.

source – photo credit: Google screenshot

Fashion: how to choose ethical and fair-trade clothing?

By Communiqué (press release)
Dressing ethically? Not easy, especially when you lack information. The Eco-Sapiens association catalogs trustworthy labels and manufacturers.

Reese Witherspoon drinks this green smoothie every day – here's her recipe!

By sophie

She got the recipe for this vegetable smoothie years ago from her Little Fires Everywhere co-star, Kerry Washington.

Reese Witherspoon is a chameleon. She is an actress, producer, entrepreneur, and mom of three. Recently, Reese posted a video of her daily smoothie on her account. The recipe contains a ton of vegetables, plus fruit for sweetness. And apparently the secret ingredient that makes this smoothie as delicious as it looks is dancing. Enjoy!

Reese says she drinks it instead of breakfast at 10 or 11 a.m. (it has been noted that she follows intermittent fasting) and that it keeps her feeling full until lunchtime arrives at 1 p.m. She always makes a double batch and saves half for the next day. Here's how she makes it:

Reese Witherspoon's green smoothie recipe

Ingredients:

  • 2 heads of romaine lettuce
  • 1/2 cup spinach
  • 1/2 cup coconut water
  • 1 whole banana
  • 1 whole apple (cored)
  • 1 whole pear (cored)
  • 1 whole lemon (peeled)
  • Celery (optional)
  • Almond butter (optional)

Instructions:

Roughly chop the romaine and add it to your blender with the spinach, banana, coconut water, apple, pear, lemon, and any optional ingredients you are using.
Once everything is blended, divide it evenly between two large glasses or water bottles. Reese uses reusable glass water bottles. Put one in the refrigerator for tomorrow and enjoy the other!

source – Freely translated from English by JDBN – photo credit: ROBYN BECK/GETTY IMAGES

Stone churches are blessed and precious places

By Maximilien Bernard
Statement from the Bishop of Limoges: Shock and emotion grip us once again as we see another cathedral terribly damaged by flames. Whatever the… Read more ...

Emmanuel Macron spoke with the President of the French Bishops' Conference

By Maximilien Bernard
Here is the CEF's statement on the fire at Nantes Cathedral: The French Bishops' Conference (CEF) offers its support and prayers to the diocesan faithful of… Read more ...

Powerful AI Can Now Be Trained on a Single Computer

By Edd Gent

The enormous computing resources required to train state-of-the-art artificial intelligence systems mean well-heeled tech firms are leaving academic teams in the dust. But a new approach could help balance the scales, allowing scientists to tackle cutting-edge AI problems on a single computer.

A 2018 report from OpenAI found that the processing power used to train the most powerful AI is increasing at an incredibly fast pace, doubling every 3.4 months. One of the most data-hungry approaches is deep reinforcement learning, where AI learns through trial and error by iterating through millions of simulations. Impressive recent advances on videogames like StarCraft and Dota 2 have relied on servers packed with hundreds of CPUs and GPUs.

Specialized hardware such as Cerebras Systems' Wafer Scale Engine promises to replace these racks of processors with a single large chip perfectly optimized for training AI. But with a price tag running into the millions, it offers little solace to underfunded researchers.

Now a team from the University of Southern California and Intel Labs has created a way to train deep reinforcement learning (RL) algorithms on hardware commonly available in academic labs. In a paper presented at the 2020 International Conference on Machine Learning (ICML) this week, they describe how they were able to use a single high-end workstation to train AI with state-of-the-art performance on the first-person shooter videogame Doom. They also tackled a suite of 30 diverse 3D challenges created by DeepMind using a fraction of the normal computing power.

“Inventing ways to do deep RL on commodity hardware is a fantastic research goal,” says Peter Stone, a professor at the University of Texas at Austin who specializes in deep RL. As well as leaving smaller research groups behind, the computing resources normally required to carry out this kind of research have a significant carbon footprint, he adds. “Any progress towards democratizing RL and reducing the energy needs for doing research is a step in the right direction,” he says.

The inspiration for the project was a classic case of necessity being the mother of invention, says lead author Aleksei Petrenko, a graduate student at USC. As a summer internship at Intel came to an end, Petrenko lost access to the company’s supercomputing cluster, putting unfinished deep RL projects in jeopardy. So he and colleagues decided to find a way to continue the work on simpler systems.

“From my experience, a lot of researchers don’t have access to cutting-edge, fancy hardware,” says Petrenko. “We realized that just by rethinking in terms of maximizing the hardware utilization you can actually approach the performance you will usually squeeze out of a big cluster even on a single workstation.”

The leading approach to deep RL places an AI agent in a simulated environment that provides rewards for achieving certain goals, which the agent uses as feedback to work out the best strategy. This involves three main computational jobs: simulating the environment and the agent; deciding what to do next based on learned rules called a policy; and using the results of those actions to update the policy.

Training is always limited by the slowest process, says Petrenko, but these three jobs are often intertwined in standard deep RL approaches, making it hard to optimize them individually. The researchers’ new approach, dubbed Sample Factory, splits them up so resources can be dedicated to get them all running at peak speeds.

Piping data between processes is another major bottleneck, as these processes can often be spread across multiple machines, Petrenko explains. His group took advantage of working on a single machine by simply cramming all the data into shared memory, where every process can access it instantaneously.
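The idea can be illustrated with Python's standard shared-memory facilities: a simulator process writes an observation frame into a shared buffer and the learner reads it in place, with no copying or serialization. This is a toy sketch of the concept, not Sample Factory's actual implementation.

```python
import numpy as np
from multiprocessing import Process, shared_memory

FRAME_SHAPE = (84, 84, 3)  # a typical downscaled game frame
FRAME_DTYPE = np.uint8

def simulator(shm_name):
    """Worker process: writes an observation frame directly into shared memory."""
    shm = shared_memory.SharedMemory(name=shm_name)
    frame = np.ndarray(FRAME_SHAPE, dtype=FRAME_DTYPE, buffer=shm.buf)
    frame[:] = np.random.randint(0, 256, FRAME_SHAPE, dtype=FRAME_DTYPE)
    shm.close()

if __name__ == "__main__":
    # The learner allocates one shared block; frames are never pickled or copied
    # between processes, they are read in place.
    nbytes = int(np.prod(FRAME_SHAPE))
    shm = shared_memory.SharedMemory(create=True, size=nbytes)

    p = Process(target=simulator, args=(shm.name,))
    p.start()
    p.join()

    frame = np.ndarray(FRAME_SHAPE, dtype=FRAME_DTYPE, buffer=shm.buf)
    print("learner sees frame with mean pixel value", frame.mean())

    shm.close()
    shm.unlink()
```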

This resulted in significant speed-ups compared to leading deep RL approaches. Using a single machine equipped with a 36-core CPU and one GPU, the researchers were able to process roughly 140,000 frames per second while training on Atari videogames and Doom, or double the next best approach. On the 3D training environment DeepMind Lab, they clocked 40,000 frames per second—about 15 percent better than second place.

To check how frame rate translated into training time the team pitted Sample Factory against an algorithm Google Brain open-sourced in March that is designed to dramatically increase deep RL efficiency. Sample Factory trained on two simple tasks in Doom in a quarter of the time it took the other algorithm. The team also tested their approach on a collection of 30 challenges in DeepMind Lab using a more powerful 36-core 4-GPU machine. The resulting AI significantly outperformed the original AI that DeepMind used to tackle the challenge, which was trained on a large computing cluster.

Edward Beeching, a graduate student working on deep RL at the Institut National des Sciences Appliquées de Lyon, in France, says the approach might struggle with memory intensive challenges like the photo-realistic 3D simulator Habitat released by Facebook last year.

But he adds that these kinds of efficient training approaches are vitally important for smaller research teams. “A four-fold increase compared to the state of the art implementation is huge,” he says. “This means in the same time you can run four times as many experiments.”

While the computers used in the paper are still high-end workstations designed for AI research, Petrenko says he and his collaborators have also been using Sample Factory on much simpler devices. He’s even been able to run some advanced deep RL experiments on his mid-range gaming laptop, he says. “This is unheard of.”

This photograph captures 42 lightning bolts in one night: an incredible spectacle

By Mégane Bouron
Fascinated by lightning since early childhood, this photographer immortalized the "Night of a Thousand Forks" in a single photograph.

These Disney-style drawings encourage animal adoption

By Sophie Renassia
To encourage the adoption of abandoned animals, this young artist draws them Disney-style. A brilliant idea! Worth discovering.

Looted by France during colonization, works of art will be returned to Africa

By Axel Leclercq
French museums are thought to hold some 90,000 works of African art. A number of them should be returned to their countries of origin.

A teenager creates chalk drawings to amuse her brother: 20 amusing, surreal photos

By Mégane Bouron
To escape despite the lockdown, a brother and sister dream up fun, funny, and creative trompe-l'œil drawings every day.

Lady Gaga encourages mask wearing and tags her famous friends

By sophie

Faced with the divisive debate over mask wearing, Lady Gaga is encouraging regular mask use. Always engaged, she is asking Michelle and Barack Obama, Oprah Winfrey, and Ariana Grande, via her Instagram account, to set an example as she does, and we think that's fun!

Bravo!

"Be yourself, but wear a mask! I believe in being kind to yourself, the community, and the planet. I challenge my awesome friends to show their mask game! ❤️ @barackobama @michelleobama @oprah @arianagrande …" (Lady Gaga)
source: JDBN – photo credit: ladygaga

Visit Sées Cathedral by drone

By Maximilien Bernard
Take a fresh look at this magnificent Norman Gothic building: Read more ...

Risk Dashboard Could Help the Power Grid Manage Renewables

By Jeremy Hsu

To fully embrace wind and solar power, grid operators need to be able to predict and manage the variability that comes from changes in the wind or clouds dimming sunlight.

One solution may come from a $2-million project backed by the U.S. Department of Energy that aims to develop a risk dashboard for handling more complex power grid scenarios.

Grid operators now use dashboards that report the current status of the power grid and show the impacts of large disturbances—such as storms and other weather contingencies—along with regional constraints in flow and generation. The new dashboard being developed by Columbia University researchers and funded by the Advanced Research Projects Agency–Energy (ARPA-E) would improve upon existing dashboards by modeling more complex factors. This could help the grid better incorporate both renewable power sources and demand response programs that encourage consumers to use less electricity during peak periods.

“[Y]ou have to operate the grid in a way that is looking forward in time and that accepts that there will be variability—you have to start talking about what people in finance would call risk,” says Daniel Bienstock, professor of industrial engineering and operations research, and professor of applied physics and applied mathematics at Columbia University.

The new dashboard would not necessarily help grid operators prepare for catastrophic black swan events that might happen only once in 100 years. Instead, Bienstock and his colleagues hope to apply some lessons from financial modeling to measure and manage risk associated with more common events that could strain the capabilities of the U.S. regional power grids managed by independent system operators (ISOs). The team plans to build and test an alpha version of the dashboard within two years, before demonstrating the dashboard for ISOs and electric utilities in the third year of the project.

Variability already poses a challenge to modern power grids that were designed to handle steady power output from conventional power plants to meet an anticipated level of demand from consumers. Power grids usually rely on gas turbine generators to kick in during peak periods of power usage or to provide backup to intermittent wind and solar power.

But such generators may not provide a fast enough response to compensate for the expected variability in power grids that include more renewable power sources and demand response programs driven by fickle human behavior. In the worst cases, grid operators may shut down power to consumers and create deliberate blackouts in order to protect the grid’s physical equipment.

One of the dashboard project’s main goals involves developing mathematical and statistical models that can quantify the risk from having greater uncertainty in the power grid. Such models would aim to simulate different scenarios based on conditions—such as changes in weather or power demand—that could stress the power grid. Repeatedly playing out such scenarios would force grid operators to fine-tune and adapt their operational plans to handle such surprises in real life.

For example, one scenario might involve a solar farm generating 10 percent less power and a wind farm generating 30 percent more power within a short amount of time, Bienstock explains. The combination of those factors might mean too much power begins flowing on a particular power line and the line subsequently starts running hot at the risk of damage.
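A toy Monte Carlo version of such a stress test might look like the following Python sketch, which samples deviations in solar and wind output and estimates how often a line exceeds its rating; all of the numbers are hypothetical.

```python
import random

# Toy Monte Carlo stress test: how often does a line exceed its rating when
# solar and wind output deviate from forecast? All numbers are hypothetical.
random.seed(0)

FORECAST_SOLAR_MW = 100.0
FORECAST_WIND_MW = 200.0
LINE_SHARE = 0.6          # fraction of this generation routed over the line
LINE_RATING_MW = 190.0

overloads = 0
trials = 10_000
for _ in range(trials):
    solar = FORECAST_SOLAR_MW * (1 + random.gauss(0, 0.10))  # e.g., a -10% swing
    wind = FORECAST_WIND_MW * (1 + random.gauss(0, 0.30))    # e.g., a +30% swing
    flow = LINE_SHARE * (solar + wind)
    if flow > LINE_RATING_MW:
        overloads += 1

print(f"Estimated overload probability: {overloads / trials:.1%}")
```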

Such models would only be as good as the data that trains them. Some ISOs and electric utilities have already been gathering useful data from the power grid for years. Those that already have more experience dealing with the variability of renewable power have been the most proactive. But many of the ISOs are reluctant to share such data with outsiders.

“One of the ISOs has told us that they will let us run our code on their data provided that we actually physically go to their office, but they will not give us the data to play with,” Bienstock says. 

For this project, ARPA-E has been working with one ISO to produce synthetic data covering many different scenarios based on historical data. The team is also using publicly available data on factors such as solar irradiation, cloud cover, wind strength, and the power generation capabilities of solar panels and wind turbines.

“You can look at historical events and then you can design stress scenarios that are somehow compatible with what we observe in the past,” says Agostino Capponi, associate professor of industrial engineering and operations research at Columbia University and external consultant for the U.S. Commodity Futures Trading Commission.

A second big part of the dashboard project involves developing tools that grid operators could use to help manage the risks that come from dealing with greater uncertainty. Capponi is leading the team’s effort to design customized energy volatility contracts that could allow grid operators to buy such contracts for a fixed amount and receive compensation for all the variance that occurs over a historical period of time.
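The payoff structure being described can be sketched with a simple realized-variance calculation; the strike, notional, and prices below are invented for illustration and are not the team's actual contract design.

```python
import statistics as stats

def variance_contract_payoff(prices, variance_strike, notional_per_unit):
    """Toy payoff: the buyer pays a fixed strike and receives the realized
    variance of prices over the period, scaled by a notional amount."""
    realized_variance = stats.pvariance(prices)
    return notional_per_unit * (realized_variance - variance_strike)

# Hypothetical hourly electricity prices ($/MWh) over one volatile day.
prices = [32, 35, 31, 40, 55, 90, 140, 75, 50, 38, 34, 33]
payoff = variance_contract_payoff(prices, variance_strike=400, notional_per_unit=1.0)
print(f"Payoff: ${payoff:,.2f}")
```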

But he acknowledged that financial contracts designed to offset risk in financial markets won't apply in a straightforward manner to the realities of the power grid, which include delays in power transmission, physical constraints, and weather events.

“You cannot really directly use existing financial contracts because in finance you don't have to take into account the physics of the power grid,” Capponi says.

The team’s expertise spans multiple disciplines. Bienstock, Capponi, and their colleague Garud Iyengar, professor of industrial engineering and operations research, are all members of Columbia’s Data Science Institute. The project’s principal investigators also include Michael Chertkov, professor of mathematics at the University of Arizona, and Yury Dvorkin, assistant professor of electrical and computer engineering at New York University.

Once the new dashboard is up and running, it could begin to help grid operators deal with both near-term and long-term challenges for the U.S. power grid. One recent example comes from the current COVID-19 pandemic and associated human behavioral changes—such as more people working from home—having already increased variability in energy consumption across New York City and other parts of the United States. In the future, the risk dashboard might help grid operators quickly identify areas at higher risk of suffering from imbalances between supply and demand and act quickly to avoid straining the grid or having blackouts.

Knowing the long-term risks in specific regions might also drive more investment in additional energy storage technologies and improved transmission lines to help offset such risks. The situation is different for every grid operator’s particular region, but the researchers hope that their dashboard can eventually help level the speed bumps as the U.S. power grid moves toward using more renewable power.

“The ISOs have different levels of renewable penetration, and so they have different exposures and visibility to risk,” Bienstock says. “But this is just the right time to be doing this sort of thing.”
