Researchers at Intel and Cornell University report that they’ve made an electronic nose that can learn the scent of a chemical after just one exposure to it and then identify that scent even when it’s masked by others. The system is built around Intel’s neuromorphic research chip, Loihi, and an array of 72 chemical sensors. Loihi was programmed to mimic the workings of neurons in the olfactory bulb, the part of the brain that distinguishes different smells. The system’s inventors say it could one day watch for hazardous substances in the air, sniff out hidden drugs or explosives, or aid in medical diagnoses.
Loihi’s chip architecture is meant to more closely match the way the brain works than the architectures of CPUs or even new accelerator chips designed to speed deep learning. Researchers hope that such neuromorphic chips will be able to do things that today’s AI systems can’t do, or at least can’t do without consuming a lot of power or taking too much time.
One of those things is called “one-shot” learning. Your nose can smell something once, and your brain will immediately recognize it again. But today’s AI systems, which often use deep learning artificial neural networks, must be trained using a huge number of previously identified examples. That makes training a time-consuming, power-hungry process. Even worse, most previously trained AI cannot easily learn a new category without damaging its memory of the old ones, meaning it needs to be completely retrained with all the categories.
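The contrast with conventional retraining can be made concrete with a toy sketch, assuming a nearest-prototype classifier (the sensor vectors and labels below are invented, and this is a simplification, not Loihi’s actual method): each scent is stored from a single example, so adding a new category never disturbs the old ones.

```python
import math

# Toy "one-shot" learner: one exposure stores a prototype vector per scent,
# and identification picks the nearest stored prototype. Adding a new scent
# never requires retraining on the old ones.
class OneShotClassifier:
    def __init__(self):
        self.prototypes = {}  # scent label -> sensor reading vector

    def learn(self, label, reading):
        # One exposure is enough: simply store the reading as the prototype.
        self.prototypes[label] = reading

    def identify(self, reading):
        # Return the label whose prototype is closest in Euclidean distance.
        return min(self.prototypes,
                   key=lambda lbl: math.dist(self.prototypes[lbl], reading))

nose = OneShotClassifier()
nose.learn("ammonia", [0.9, 0.1, 0.3])   # hypothetical sensor readings
nose.learn("acetone", [0.2, 0.8, 0.5])
print(nose.identify([0.85, 0.15, 0.35]))  # a noisy ammonia-like reading
```

A deep network, by contrast, would adjust shared weights for every class at once, which is why learning a new category can overwrite the old ones.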
Unlike the artificial neurons in today’s AI, Loihi’s neurons carry information in the timing of digitally represented spikes, which is more analogous to what goes on in your brain.
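One common spike-timing scheme, latency coding, can be sketched in a few lines: a stronger input fires an earlier spike. The encoding window and sensor values below are illustrative assumptions, not Loihi’s actual encoding.

```python
T_WINDOW = 100.0  # ms, assumed encoding window (illustrative only)

def to_spike_times(readings):
    # Latency coding: normalize each reading against the strongest channel,
    # then map intensity 1.0 to a spike at t=0 and weaker intensities to
    # proportionally later spikes within the window.
    peak = max(readings)
    return [T_WINDOW * (1.0 - r / peak) for r in readings]

times = to_spike_times([0.2, 1.0, 0.5])  # hypothetical sensor intensities
print(times)  # the strongest channel spikes first, at t=0
```

The information thus lives in *when* each neuron fires rather than in a continuous activation value, which is what makes the scheme closer to biological spiking.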
The scent-learning experiments required only one Loihi chip, but Intel designed the chips to be seamlessly linked together into much larger systems. The company reported this week that it had produced a multi-board, 768-chip, 100-million-neuron system. The largest Loihi system prior to that comprised 64 chips and the equivalent of 8 million neurons.
According to Intel senior research scientist Nabil Imam, the next step is “to generalize this approach to a wider range of problems—from sensory scene analysis (understanding the relationships between objects you observe) to abstract problems like planning and decision-making. Understanding how the brain’s neural circuits solve these complex computational problems will provide important clues for designing efficient and robust machine intelligence.”
However, there are challenges to overcome first. In particular, the system needs to be able to group different but closely related aromas into a common category. For example, it needs to be able to tell that strawberries from California and strawberries from Europe are the same fruit. “These are challenges in olfactory signal recognition that we’re working on and that we hope to solve in the next couple of years before this becomes a product that can solve real-world problems beyond the experimental ones we have demonstrated in the lab,” Imam said in a press release.
This post was updated on 19 March to include mention of the new 100-million neuron Loihi system.
Researchers on WeBank’s AI Moonshot Team have taken a deep learning system developed to detect solar panel installations from satellite imagery and repurposed it to track China’s economic recovery from the novel coronavirus outbreak.
This, as far as the researchers know, is the first time big data and AI have been used to measure the impact of the new coronavirus on China, Haishan Wu, vice general manager of WeBank’s AI department, told IEEE Spectrum. WeBank is a private Chinese online banking company founded by Tencent.
The team used its neural network to analyze visible, near-infrared, and short-wave infrared images from various satellites, including the infrared bands from the Sentinel-2 satellite. This allowed the system to look for hot spots indicative of actual steel manufacturing inside a plant. In the early days of the outbreak, this analysis showed that steel manufacturing had dropped to a low of 29 percent of capacity. But by 9 February, it had recovered to 76 percent.
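As a rough illustration of the hot-spot idea (a sketch, not WeBank’s actual pipeline): active steel furnaces show up as unusually bright pixels in short-wave infrared imagery, so the fraction of hot pixels relative to a pre-outbreak baseline gives a crude capacity estimate. The threshold and pixel values below are invented.

```python
HOT_THRESHOLD = 0.8  # assumed normalized SWIR brightness cutoff

def active_fraction(swir_pixels, baseline_hot_count):
    # Count pixels bright enough to indicate furnace heat, then compare
    # against the number of hot pixels seen before the outbreak.
    hot = sum(1 for row in swir_pixels for v in row if v >= HOT_THRESHOLD)
    return hot / baseline_hot_count

plant_scene = [          # hypothetical 3x3 normalized SWIR tile
    [0.10, 0.90, 0.20],
    [0.85, 0.10, 0.10],
    [0.10, 0.10, 0.95],
]
# Against an assumed pre-outbreak baseline of 4 hot pixels, this scene
# suggests the plant is running at 75 percent of observed capacity.
print(active_fraction(plant_scene, baseline_hot_count=4))
```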
The researchers then looked at other types of manufacturing and commercial activity using AI. One of the techniques was simply counting cars in large corporate parking lots. From that analysis, it appeared that, by 10 February, Tesla’s Shanghai car production had fully recovered, while tourism operations, like Shanghai Disneyland, were still shut down.
Moving beyond satellite data, the researchers took daily anonymized GPS data from several million mobile phone users in 2019 and 2020, and used AI to determine which of those users were commuters. The software then counted the number of commuters in each city, and compared the number of commuters on a given day in 2019 with the number on the corresponding date in 2020, starting with Chinese New Year. In both years, Chinese New Year saw a huge dip in commuting, but unlike in 2019, the number of people going to work in 2020 didn’t bounce back after the holiday. While things picked up slowly, the WeBank researchers calculated that by 10 March 2020, about 75 percent of the workforce had returned to work.
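The year-over-year comparison can be sketched as follows, with made-up counts and a deliberately simplified notion of a “commuter” (any user whose daytime location cell differs from their nighttime one on a workday, standing in for the AI classification step):

```python
def commuters(gps_day):
    # gps_day maps user -> (nighttime location cell, daytime location cell).
    # A simplistic proxy for the AI step: commuters are users who moved.
    return {u for u, (night, day) in gps_day.items() if night != day}

def return_to_work_ratio(count_this_year, count_last_year):
    # Compare the same post-holiday date across the two years.
    return count_this_year / count_last_year

# Hypothetical commuter counts for the same date, 10 March:
baseline_2019 = 4_000_000
observed_2020 = 3_000_000
print(f"{return_to_work_ratio(observed_2020, baseline_2019):.0%}")
```

With these invented numbers the ratio comes out to 75 percent, matching the shape of the published finding but not derived from any real data.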
Projecting out from these curves, the researchers concluded that most Chinese workers, with the exception of those in Wuhan, will be back to work by the end of March. Economic growth in the first quarter, their study indicated, will take a 36 percent hit.
Finally, the team used natural language processing technology to mine Twitter-like services and other social media platforms for mentions of companies that provide online working, gaming, education, streaming video, social networking, e-commerce, and express delivery services. According to this analysis, telecommuting for work is booming, up 537 percent from the first day of 2020; online education is up 169 percent; gaming is up 124 percent; video streaming is up 55 percent; social networking is up 47 percent. Meanwhile, e-commerce is flat, and express delivery is down a little less than 1 percent. The analysis of China’s social media activity also yielded the prediction that the Chinese economy will be mostly back to normal by the end of March.
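A minimal sketch of the mention-mining step, assuming simple keyword matching (the categories, keywords, and posts below are invented; real NLP would be far more involved): count posts mentioning each service category, then report the percent change against a baseline day.

```python
# Invented category keywords standing in for the real NLP models.
CATEGORIES = {
    "telecommuting": ["remote work", "video meeting"],
    "online education": ["online class", "e-learning"],
}

def mention_counts(posts):
    # Count how many posts mention each category at least once.
    counts = {cat: 0 for cat in CATEGORIES}
    for post in posts:
        text = post.lower()
        for cat, keywords in CATEGORIES.items():
            if any(kw in text for kw in keywords):
                counts[cat] += 1
    return counts

def pct_change(now, baseline):
    return 100.0 * (now - baseline) / baseline

baseline = mention_counts(["Remote work is new to me", "cooking at home"])
today = mention_counts([
    "Another video meeting...", "remote work day 30",
    "my kid's online class crashed", "remote work forever?",
])
print(pct_change(today["telecommuting"], baseline["telecommuting"]))
```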
Demand for data scientists and engineers has, for the past couple of years, been off the charts. The number of openings for machine learning and data engineers posted on recruiting web sites continues to grow by double digits annually, and those working in the field have been commanding ever-higher salaries.
Joining the ranks of these desperately sought-after techies takes serious coding chops: definitely expertise in Python, along with familiarity with other languages. That combination of abundant job openings for data engineers and the dominance of Python means Python regularly makes the charts of the most in-demand coding languages.
So anyone contemplating a future in data science or machine learning needs to build up software engineering skills, right?
Wrong, says Ryohei Fujimaki, founder and CEO of dotData. Fujimaki has, for nearly a decade, been working to use AI to automate much of the job of the data scientist.
We can, he says, “eliminate the skill barrier. Traditionally, the job of building a machine learning model can only be done by people who know SQL and Python and statistics. Our system automates the entire process, enabling less experienced people to implement machine learning projects.”
DotData—which is currently offering its tools as a cloud-based service—came out of NEC. Fujimaki, then a research fellow at the company, started thinking about automating machine learning in 2011 as a way to make the 100 or so data scientists on his research team more productive. He got sidetracked for a few years, focused on commercializing an algorithm designed to make machine learning transparent, but in 2015 returned to the machine learning project.
“A typical use case for machine learning in the business world is prediction,” he said, “predicting demand of a product to optimize inventory, or predicting the failure of a sensor in a factory to allow preventive maintenance, or scoring a list of possible customers.”
“The first step in developing a machine learning model for prediction is feature engineering—looking at historical patterns and coming up with hypotheses,” he says. Feature engineering generally requires a team of people with a multitude of skill sets—data scientists, SQL experts, analysts, and domain experts. Typically, only after this team comes up with a set of hypotheses does machine learning step in, combining all those hypotheses to figure out how to best weigh them to come up with accurate predictions.
In dotData’s system, AI takes over that first step, coming up with and testing its own hypotheses from a set of historical data.
So, he says, “you don’t need domain experts or data scientists, and as a byproduct AI can explore many more hypotheses than human experts, millions instead of hundreds, in a limited time window.”
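The enumerate-and-test loop can be sketched in miniature, in the spirit of what Fujimaki describes but not dotData’s actual algorithm: generate candidate features (“hypotheses”) from raw columns, then rank each one by how strongly it correlates with the prediction target. All data and feature templates below are invented.

```python
from math import sqrt

def pearson(xs, ys):
    # Plain Pearson correlation between two equal-length sequences.
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

records = [  # hypothetical per-customer history with a churn label
    {"purchases": 12, "visits": 30, "churned": 0},
    {"purchases": 1,  "visits": 2,  "churned": 1},
    {"purchases": 8,  "visits": 25, "churned": 0},
    {"purchases": 0,  "visits": 1,  "churned": 1},
]

hypotheses = {  # candidate features derived from the raw columns
    "purchases": lambda r: r["purchases"],
    "visits": lambda r: r["visits"],
    "purchases_per_visit": lambda r: r["purchases"] / max(r["visits"], 1),
}

# Score every hypothesis by the strength of its correlation with churn.
target = [r["churned"] for r in records]
scores = {name: abs(pearson([f(r) for r in records], target))
          for name, f in hypotheses.items()}
best = max(scores, key=scores.get)
print("best feature:", best)
```

A real system would enumerate vastly more templates, joins, and aggregations, and would validate candidates against held-out data rather than a raw correlation, but the automate-the-hypothesis-search shape is the same.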
Fujimaki’s group at NEC in 2016 let Japan’s Sumitomo Mitsui Banking Corp. (SMBC) test a prototype against a team using traditional data science tools. “Their team took three months, our process took a day, and our results were better,” he says. NEC spun off the group in early 2018, remaining as a shareholder. Right now dotData has about 70 employees, about 70 percent of whom are engineers and data scientists, along with a few dozen customers, Fujimaki says.
“In the near future,” Fujimaki says, “80 percent of machine learning projects can be fully automated. That will free up the most skilled, computer-science-PhD-type of data scientists, to focus on the other 20 percent.”
Demand for data scientists overall won’t drop from what it is today, Fujimaki predicts, though the double-digit growth may slow. The job, however, will become more focused. “Data scientists today are expected to be superman, good at too many things—statistics, and machine learning, and software engineering.”
And a new role is likely to emerge, he predicts. “Call it the business data scientist, or the citizen data scientist. They aren’t machine learning people, they are more business oriented. They know what predictions they need, and how to use those predictions in their business. It will be useful for them to have basic knowledge of statistics, and to understand data structures, but they won’t need deep mathematical understanding or knowledge of programming languages.
“We can’t eliminate the skill barrier, but we can significantly lower it. And there will be many more people who will be able to do this.”