At age 5, Michael Orlando Clark Jr. is a child who shares his happiness.
He was so happy to be officially adopted that he decided to invite his entire kindergarten class to the event.
This article, "Video: a child invites his kindergarten classmates to celebrate his adoption," first appeared on Pepsnews - Le site des news positives.
In South Africa, Georgina de Kock has developed an edible bowl to replace plastic, a playful way of protecting the environment.
This article, "Ecology: a company makes edible bowls," first appeared on Pepsnews - Le site des news positives.
Video Friday is your weekly selection of awesome robotics videos, collected by your Automaton bloggers. We’ll also be posting a weekly calendar of upcoming robotics events for the next few months; here’s what we have so far (send us your events!):
Let us know if you have suggestions for next week, and enjoy today’s videos.
In case you somehow missed the massive Skydio 2 review we posted earlier this week, the first batches of the drone are now shipping. Each drone gets a lot of attention before it goes out the door, and here’s a behind-the-scenes clip of the process.
[ Skydio ]
[ RVR ]
NimbRo-OP2 has some impressive recovery skills from the obligatory research-motivated robot abuse.
[ NimbRo ]
Teams seeking to qualify for the Virtual Urban Circuit of the Subterranean Challenge can access practice worlds to test their approaches prior to submitting solutions for the competition. This video previews three of the practice environments.
[ DARPA SubT ]
Stretchable skin-like robots that can be rolled up and put in your pocket have been developed by a University of Bristol team using a new way of embedding artificial muscles and electrical adhesion into soft materials.
[ Bristol ]
Happy Holidays from ABB!
Helping New York celebrate the festive season, twelve ABB robots are interacting with visitors to Bloomingdale’s iconic holiday celebration at their 59th Street flagship store. ABB’s robots are the main attraction in three of Bloomingdale’s twelve holiday window displays at Lexington and Third Avenue, as ABB demonstrates the potential for its robotics and automation technology to revolutionize visual merchandising and make the retail experience more dynamic and whimsical.
[ ABB ]
We introduce pelican eel–inspired dual-morphing architectures that embody quasi-sequential behaviors of origami unfolding and skin stretching in response to fluid pressure. In the proposed system, fluid paths were enclosed and guided by a set of entirely stretchable origami units that imitate the morphing principle of the pelican eel’s stretchable and foldable frames. This geometric and elastomeric design of fluid networks, in which fluid pressure acts in the direction that the whole body deploys first, resulted in a quasi-sequential dual-morphing response. To verify the effectiveness of our design rule, we built an artificial creature mimicking a pelican eel and reproduced biomimetic dual-morphing behavior.
And here’s a real pelican eel:
[ Science Robotics ]
Delft Dynamics’ updated anti-drone system involves a tether, mid-air net gun, and even a parachute.
[ Delft Dynamics ]
Teleoperation is a great way of helping robots with complex tasks, especially if you can do it through motion capture. But what if you’re teleoperating a non-anthropomorphic robot? Columbia’s ROAM Lab is working on it.
I don’t know how I missed this video last year because it’s got a steely robot hand squeezing a cute lil’ chick.
In this video we present results of a trajectory generation method for autonomous overtaking of unexpected obstacles in a dynamic urban environment. In these settings, blind spots can arise from perception limitations, for example when overtaking unexpected objects in the vehicle’s ego lane on a two-way street. In this case, a human driver would first make sure that the opposite lane is free and that there is enough room, and then cut into the opposite lane to execute the maneuver. We consider the practical problem of autonomous overtaking when the coverage of the perception system is impaired due to occlusion.
[ Paper ]
New weirdness from Toio!
[ Toio ]
Palo Alto City Library won a technology innovation award! Watch to see how Senior Librarian Dan Lou is using Misty to enhance their technology programs to inspire and educate customers.
[ Misty Robotics ]
We consider the problem of reorienting a rigid object with arbitrary known shape on a table using a two-finger pinch gripper. The reorienting problem is challenging because of its non-smoothness and high dimensionality. In this work, we focus on solving reorienting using pivoting, in which we allow the grasped object to rotate between fingers. Pivoting decouples the gripper rotation from the object motion, making it possible to reorient an object under strict robot workspace constraints.
[ CMU ]
How can a mobile robot be a good pedestrian without bumping into you on the sidewalk? Navigating crowded environments is hard for a robot, since the flow of pedestrian traffic follows implied social rules. Researchers from MIT have developed an algorithm that teaches mobile robots to maneuver through crowds of people while respecting their natural behavior.
What happens when humans and robots make art together? In this awe-inspiring talk, artist Sougwen Chung shows how she "taught" her artistic style to a machine -- and shares the results of their collaboration after making an unexpected discovery: robots make mistakes, too. "Part of the beauty of human and machine systems is their inherent, shared fallibility," she says.
[ TED ]
Last month at the Cooper Union in New York City, IEEE TechEthics hosted a public panel session on the facts and misperceptions of autonomous vehicles, part of the IEEE TechEthics Conversations Series. The speakers were: Jason Borenstein from Georgia Tech; Missy Cummings from Duke University; Jack Pokrzywa from SAE; and Heather M. Roff from Johns Hopkins Applied Physics Laboratory. The panel was moderated by Mark A. Vasquez, program manager for IEEE TechEthics.
[ IEEE TechEthics ]
Two videos this week from Lex Fridman’s AI podcast: Noam Chomsky, and Whitney Cummings.
[ AI Podcast ]
This week’s CMU RI Seminar comes from Jeff Clune at the University of Wyoming, on “Improving Robot and Deep Reinforcement Learning via Quality Diversity and Open-Ended Algorithms.”
Quality Diversity (QD) algorithms are those that seek to produce a diverse set of high-performing solutions to problems. I will describe them and a number of their positive attributes. I will then summarize our Nature paper on how they, when combined with Bayesian Optimization, produce a learning algorithm that enables robots, after being damaged, to adapt in 1-2 minutes in order to continue performing their mission, yielding state-of-the-art robot damage recovery. I will next describe our QD-based Go-Explore algorithm, which dramatically improves the ability of deep reinforcement learning algorithms to solve previously unsolvable problems wherein reward signals are sparse, meaning that intelligent exploration is required. Go-Explore solves Montezuma’s Revenge, considered by many to be a major AI research challenge. Finally, I will motivate research into open-ended algorithms, which seek to innovate endlessly, and introduce our POET algorithm, which generates its own training challenges while learning to solve them, automatically creating a curriculum for robots to learn an expanding set of diverse skills. POET creates and solves challenges that are unsolvable with traditional deep reinforcement learning techniques.
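To make the QD idea concrete, here is a minimal MAP-Elites-style loop, a toy sketch with made-up fitness and behavior-descriptor functions, not the algorithms discussed in the talk: candidate solutions are binned by a behavior descriptor, and only the best performer in each bin is kept, yielding a diverse archive of high-performing solutions.

```python
import random

# Toy MAP-Elites sketch: evolve 1-D "genomes", bin them by a behavior
# descriptor, and keep the highest-performing solution in each bin.

def fitness(x):          # made-up objective; stands in for robot performance
    return -(x - 0.3) ** 2

def descriptor(x):       # made-up behavior descriptor, clipped to [0, 1)
    return max(0.0, min(0.999, x))

N_BINS, ITERS = 10, 2000
archive = {}             # bin index -> (genome, fitness)

random.seed(0)
for _ in range(ITERS):
    if archive:
        parent, _ = random.choice(list(archive.values()))
        child = parent + random.gauss(0, 0.1)   # mutate a random elite
    else:
        child = random.random()                 # bootstrap with a random genome
    b = int(descriptor(child) * N_BINS)
    f = fitness(child)
    if b not in archive or f > archive[b][1]:   # keep the per-bin elite
        archive[b] = (child, f)

print(len(archive), "behavior bins filled with elites")
```

The result is not one optimum but a repertoire of diverse solutions, which is what the damage-recovery work exploits: a damaged robot searches this pre-built repertoire for a behavior that still works.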
[ CMU RI ]
The views expressed here are solely those of the author and do not represent positions of IEEE Spectrum or the IEEE.
THE ENGINEER’S PLACE Steve Jobs took LSD 10 to 15 times and said that taking the drug was one of the “two or three” most important things he ever did.
The late cofounder of Apple was an American original. Whatever singular qualities he possessed as a digital savant can’t be explained by his choice of recreational drugs. However, a new generation of engineers and software coders, centered in Silicon Valley but not limited to the world’s premier innovation hub, are now imitating Jobs in a rather dramatic way. They are routinely dropping “microdoses” of acid—about one-tenth the amount of the standard recreational dose—in order to achieve higher levels of creativity on the job, and greater intensity and focus.
Should you be doing the same?
Excuse me for posing such a personal question, but in the years ahead the question of whether you microdose may arise during a job interview or a coffee break with co-workers.
In Silicon Valley and other enclaves of leading-edge technology, the phrase “woke and wired” is coming to describe a certain openness by technologists to using pills and processors.
While the “pill” paradigm fits neatly into modern concepts of how to achieve wellness through supplements, the processor approach raises concerns, especially when it comes to implanting devices in the brain. Obvious risks notwithstanding, trailblazers believe they can enhance cognition using a brain-computer interface (BCI) to make real-world connections more quickly and durably.
It’s a compelling yet controversial vision, one that differs radically from mind-expansion through smart phones and Internet searches. Part of the appeal of implants comes from the passionate interest of serial entrepreneur Elon Musk. He founded Neuralink, in San Francisco, to pursue his dream of using BCIs to control digital devices and connect your thoughts to the Internet.
For some cognitive enhancement enthusiasts, the combination of drugs and chips is a bio-digital marriage made in heaven. They surmise that in the future, engineers may have to pursue parallel paths—microdosing and digital implants—to achieve heightened consciousness and levels of creativity and productivity that translate into more rewards and promotions as well as better designs, devices and services.
My personal position on the pill versus processor, or both, is old-fashioned. For the individual engineer and coder, consider an alternative: try systematically to squeeze more value from mental discipline.
In my view, the best methods to heighten creativity and increase your “out of the box” thinking are traditional, analog, and noninvasive. These methods can be found in John Dewey’s classic primer How We Think, first published in 1910, or even earlier in René Descartes’s Discourse on the Method of Rightly Conducting One’s Reason and of Seeking Truth in the Sciences. In 1637, Descartes famously wrote Cogito ergo sum (“I think, therefore I am”), laying the foundation for centuries of cognitive enhancement through varieties of mental discipline.
The advice from Dewey, an American philosopher, also boils down to imposing various rules and routines on your own consciousness. The practice harkens back to Socrates and the memory exercises of medieval monks and includes ancient Asian techniques of meditation and control of mind-over-body. Learning the tools of deductive logic, statistical analysis and scenario planning could also qualify as humble traditional forms that are proven cognitive enhancers.
The perspective I’m advancing calls for first exhausting “analog” means to achieve mind-expansion before pursuing either pills or processors or both in combination.
While I can fairly be accused of being stuck in the past, my objections to microdosing and neural implants are not moralistic but empirically based, and in tune with how humans study and evaluate risks from emerging technologies.
There are simply too many uncertainties with bio-pharmacological means of expanding consciousness. The costs are too great or entirely unknown. Digital means, especially those that require invasive surgery, such as electronic implants and anything delivering electric charges, strike me as equally risky. And I take seriously a point advanced by Martha Farah, a cognitive neuroscience researcher at the University of Pennsylvania: that highly individualized reactions to a range of cognitive interventions could make rational assessment of their relative risks and rewards difficult, even impossible.
In short, engineers who pursue heightened consciousness by any means available may find themselves trading short-term gain for long-term pain.
Science fiction, of course, is the master teacher of the perils of following new technologies wherever they lead. The drug soma, of Aldous Huxley’s Brave New World, made people happy whether or not they wanted to be. Because humans are entitled to their emotions and feelings, employers instead emphasize performance on tasks that comprise a job. If you do your job well, while miserable or supremely happy, who cares?
Performance metrics, however, seem like fair game to employers. If they find an enhancer that endows their workers with an advantage, can’t they mandate its use, provided the enhancer is lawful?
I think we are on the verge of entering this brave new world of work, where enhancers are essentially mandatory. And not only in polities where individual rights are weak or nonexistent. The potential benefits are too great to ignore. Engineers of the future, I humbly submit, will face wicked choices over whether or not to bio-digitally enhance at work.
To highlight the challenge, here’s a simple thought experiment: You and I work as product architects for Corporation-of-Tomorrow. Our managers announce that everyone on our team will begin taking a daily pill to increase our concentration. The pill is legal, has no apparent side effects, and costs employees nothing. Corporation-of-Tomorrow even declares that taking the pill is voluntary. You can opt out. But the company also makes clear that the stakes are high: its products, on which lives depend, must be highly reliable, as perfect as humans can make them, and the daily pill is now viewed by management as an obligation, part of the company’s commitment to excellence and the public good.
Persuaded, you decide to take the pill daily (and be observed doing so by your smart phone). I say no. After six months, your work steadily improves. Mine does not.
I am fired.
The potential for employer-mandated enhancers should force us to reflect deeply on the importance of work, the relative value of enhancers, and the illusion of choice. How might engineers respond in ways other than individually?
Collective responses would seem appealing. Engineers might band together and ask their employers to craft better policies. Or they might appeal to government to limit the power of employers to cajole, pressure or compel an employee to use bio-chemical or digital means to perform better on the job. Government could then create rules of the road for cognitive-enhancers on the job.
I figure that most engineers will reject collectivism and be comfortable with a libertarian framing. Confident individuals, educated and experienced in making design trade-offs, they will choose to engineer their own accommodation with enhancement. They will do what they wish and accept the consequences. And that means allowing individuals to opt out without fear or favor.
Some engineers, because they are clever, will divine effective “analog” means of cognitive enhancement. Praise their enterprise but admit there’s a disturbing possibility that invites comparisons to the present controversies over vaccination: that the government, or your employer, may be right and that legislators do know what’s best for your cognitive health. Won’t resisters merely drag down the group, and endanger the rest of us?
A team of European scientists proposes using mountains to build a new type of battery for long-term energy storage.
The intermittent nature of energy sources such as solar and wind has made it difficult to incorporate them into grids, which require a steady power supply. To provide uninterrupted power, grid operators must store extra energy harnessed when the sun is shining or the wind is blowing, so that power can be distributed when there’s no sun or wind.
“One of the big challenges of making 100 percent renewable energy a reality is long-term storage,” says Julian Hunt, an engineering scientist at the International Institute for Applied Systems Analysis in Austria.
Lithium-ion batteries currently dominate the energy storage market, but these are better suited for short-term storage, says Hunt, because the charge they hold dissipates over time. To store sufficient energy for months or years would require many batteries, which is too expensive to be a feasible option.
Hunt and his collaborators have devised a novel system to complement lithium-ion battery use for energy storage over the long run: Mountain Gravity Energy Storage, or MGES for short. Similar to hydroelectric power, MGES involves storing material at elevation to produce gravitational energy. The energy is recovered when the stored material falls and turns turbines to generate electricity. The group describes its system in a paper published 6 November in Energy.
“Instead of building a dam, we propose building a big sand or gravel reservoir,” explains Hunt. The key to MGES lies in finding two mountaintop sites that have a suitable difference in elevation—1,000 meters is ideal. “The greater the height difference, the cheaper the technology,” he says.
The two sites look similar: each comprises a mine-like station to store the sand or gravel, with a filling station directly below it. Valves release the material into waiting vessels, which are then transported via cranes and motor-driven cables to the upper site. There, the sand or gravel is stored—for weeks, months, or even years—until it’s ready to be used. When the material is moved back down the mountain, the stored gravitational energy is released and converted into electrical energy.
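A back-of-envelope calculation gives a feel for the scale involved. This sketch assumes ideal conversion (no friction, motor, or generator losses) and uses only the gravitational potential energy E = mgh:

```python
# Rough check of MGES storage capacity (losses ignored; illustrative only).
G = 9.81           # m/s^2, standard gravity
DROP_M = 1000      # the ideal elevation difference cited by Hunt

def sand_mass_for_mwh(mwh, height_m=DROP_M):
    """Tonnes of sand needed to store `mwh` megawatt-hours over `height_m`."""
    joules = mwh * 3.6e9              # 1 MWh = 3.6e9 J
    return joules / (G * height_m) / 1000.0   # kg -> tonnes

print(round(sand_mass_for_mwh(1), 1))  # roughly 367 tonnes per MWh at 1,000 m
```

Hundreds of tonnes of sand per megawatt-hour is a lot of material, which is why the approach targets long-duration storage, where cheap bulk material beats expensive batteries.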
The system is very flexible, says Hunt, because you can easily alter the speed of the cables, increase the load, or change the number of vessels to meet varying energy demands. And MGES is better than traditional long-term storage methods such as pumped-storage hydropower and dams because its impact on the environment is low, Hunt claims. “Also, piles of sand are cheap, cheaper than water. And sand doesn’t evaporate so you can continue using it indefinitely,” he says.
Hunt estimates that the annual cost of storing energy via this system will vary between $50 and $100 per megawatt-hour (MWh). Lithium-ion batteries, by comparison, cost at least 10 times more. And he says that the energy expended to transport materials to the upper site will be offset by the amount of gravitational energy the system produces.
Hunt and his co-authors are not the first to propose using gravitational potential energy as a storage solution. Swiss startup Energy Vault has developed a “battery” that involves raising and releasing 5,000 concrete blocks through a 33-story building; Edinburgh-based Gravitricity has plans to drop weights down disused mine shafts; and Heindl Energy in Germany wants to lift a very large rock mass using water pumps. But so far, no one has suggested using mountains.
MGES technology will be especially useful for grids that have small energy storage demands, says Hunt. These are typically microgrids drawing less than 20 megawatts, roughly the power needed to supply 7,000 four-bedroom houses. The technology could be applied to small or isolated islands such as Molokai in Hawaii, the Galapagos, and Cape Verde, where the cost of supplying energy is high and demand is often seasonal due to tourism.
“In these cases, it can be a viable alternative [to fossil fuels],” he says. “It can be a real thing in the future.”
THE INSTITUTE Laptop computers, mobile phones, and a host of other electronic devices wouldn’t exist without semiconductors such as monocrystalline silicon.
Early methods of producing semiconductors were unpredictable and unreliable. There was no way for scientists at the time to prevent the semiconductors from being contaminated by impurities in the air. In 1916, however, Polish chemist Jan Czochralski invented a way to grow single crystals of semiconductors, metals, and synthetic gemstones. The process—known as the Czochralski method—allows scientists to have more control over a semiconductor’s quality and is still used today.
Czochralski discovered the method by accident while working in a laboratory at Allgemeine Elektrizitäts-Gesellschaft (AEG), an electrical-equipment company in Berlin. According to JanCzochralski.com, while investigating the crystallization rates of metal, Czochralski dipped his pen into molten tin instead of an inkwell. That caused a tin filament to form on the pen’s tip. Through further research, he was able to prove that the filament was a single crystal. His discovery prompted him to experiment with the bulk production of single crystals of semiconductors.
The Czochralski process of growing single crystals was dedicated as an IEEE Milestone on 14 November during a ceremony held at the Warsaw University of Technology. The IEEE Poland Section and the IEEE Germany Section sponsored the Milestone. Administered by the IEEE History Center and supported by donors, the Milestone program recognizes outstanding technical developments around the world.
Czochralski used a silica crucible—a container made of quartz—to grow the crystals. He set it inside a chamber that was free from oxygen, carbon dioxide, and other potential contaminants. The chamber was surrounded by heaters that converted electric energy into heat, and he also used high-frequency radio waves to melt the silicon charge inside the crucible. When the temperature inside the crucible reached about 1,700 kelvins, the high-purity semiconductor-grade silicon melted.
Once the silicon melted, he placed a small piece of single-crystal material—a seed crystal—on the end of a 14-centimeter-long rotating rod. He then slowly lowered the rod into the crucible until the seed crystal dipped just below the surface of the molten silicon. He found that a trace of impurity elements—a dopant—such as boron or phosphorus could be added to the molten silicon in precise amounts to change the silicon’s carrier concentration. Depending on which dopant he added, the silicon became p-type or n-type, each with different electronic properties. When the two are put together, they create a diode, which allows current to flow through the silicon.
Czochralski simultaneously lifted and rotated the rod that held the seed crystal. During this step, the molten silicon crystallized at the interface of the seed, forming a new crystal.
The shape of the new crystal, particularly the diameter, can be controlled by adjusting the rod’s heating power, pulling rate, and rotation rate, according to the Encyclopedia of Materials: Science and Technology. That “necking procedure” technique is crucial for limiting the crystal’s structural defects.
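The link between pulling rate and diameter can be sketched with a simple mass balance: the melt consumed per second must equal the volume of crystal pulled per second. This is an illustrative simplification (real growers use a full heat balance, and the numbers below are hypothetical):

```python
import math

# Mass-conservation sketch of how pulling rate sets crystal diameter:
# m_dot = rho * (pi/4) * d^2 * v  =>  d = sqrt(4 * m_dot / (pi * rho * v))
RHO_SI = 2329.0  # kg/m^3, density of solid silicon

def diameter_m(melt_consumption_kg_per_s, pull_rate_m_per_s):
    """Steady-state crystal diameter implied by mass conservation."""
    return math.sqrt(4 * melt_consumption_kg_per_s /
                     (math.pi * RHO_SI * pull_rate_m_per_s))

# Pulling faster at a fixed melt-consumption rate yields a thinner crystal:
d_slow = diameter_m(1e-4, 2e-5)   # hypothetical 20-um/s pull
d_fast = diameter_m(1e-4, 8e-5)   # hypothetical 80-um/s pull
print(d_fast < d_slow)  # True
```

This inverse relationship is why fast pulling is used for the thin neck and slow pulling for the wide body of the boule.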
Other semiconductors, such as gallium arsenide, also can be grown using the Czochralski method.
The Milestone plaque, mounted at the entrance to the Warsaw University of Technology’s main hall, reads:
In 1916, Jan Czochralski invented a method of crystal growth used to obtain single crystals of semiconductors, metals, salts, and synthetic gemstones during his work at AEG in Berlin, Germany. He developed the process further at the Warsaw University of Technology, Poland. The Czochralski process enabled development of electronic semiconductor devices and modern electronics.
The second half of 2019 saw big engineering workforce moves both positive and negative.
HP (big layoffs), WeWork (more layoffs), Oracle (layoffs and hiring), and TSMC (hiring explosion) made big moves. The bulk of the hiring news came from outside Silicon Valley—with a flurry of activity outside the U.S. And the trends show that it’s a good time to be in AI and machine learning or 5G development, perhaps not such a good time to be developing consumer cybersecurity tools.
The big swings:
HP Inc. in October announced that it would cut up to 16 percent of its workforce, between 7000 and 9000 jobs. How many of those cuts affect technical professionals and how they would be distributed geographically wasn’t announced.
Struggling WeWork in October reportedly decided to lay off 500 from its technology division, including about 150 tech professionals from companies it had recently acquired. In November, WeWork-owned Meetup announced layoffs of 50 employees, mostly engineers, and coding boot camp Flatiron School planned to lay off dozens. Overall, including architects, cleaners, and maintenance workers, WeWork is expected to axe as many as 4000, about a third of its total staff.
Oracle announced in October plans to hire 2000 engineers to work on cloud computing technology around the world, including in Silicon Valley, Seattle, and India and at new data centers to be established. Oracle’s announcement came after a major round of layoffs in March. And in August Oracle laid off at least 300 engineers from its flash storage operations in Silicon Valley and Colorado.
In Silicon Valley:
Apple in October began ramping up hiring of engineers to work on its smart-home platform and new smart-home devices in its Cupertino and San Diego, Calif., offices, according to Bloomberg. Apple hasn’t announced specific numbers.
Robotic pizza-maker Zume, based in Mountain View, Calif., has been steadily increasing its engineering workforce in recent months, Thinknum Media reported in October, but didn’t speculate on exact numbers.
JP Morgan, meanwhile, has been recruiting engineers with AI and machine learning expertise for its San Mateo, Calif., office, according to efinancialcareers.
Around the U.S.:
Amazon in September announced plans to add 400 tech professionals to its Portland, Oregon, tech center, including those with expertise in development, information technology, and software architecture. The hires will double the company’s engineering workforce there.
In August, Uber announced a tech hiring freeze for all software and services jobs based in the U.S. and Canada. Then in September, Uber announced that it had cut 435 from its product and engineering teams, the majority from U.S. operations, but lifted the hiring freeze. Just weeks later, Uber announced long-term plans to hire 2000 professionals to staff a headquarters and engineering center for Uber Freight in Chicago.
Stratifyd, a four-year-old artificial intelligence and machine learning startup based in Charlotte, N.C., announced in November that it would add at least 200.
Microsoft is also ramping up in North Carolina, announcing in November that it would be adding 430 jobs at its Charlotte campus, mostly in engineering and management. This expansion followed on Microsoft’s October announcement of 575 new positions opening at its tech center in Irving, Texas.
Health tech startup Well announced in November plans to hire 400 in North Carolina.
Computer security toolmaker McAfee in October gave notice of 107 layoffs in Hillsboro, Oregon, by year-end, including 44 software engineers.
Symantec, another cybersecurity tools company, in October indicated that it would be cutting 213 software engineering and middle management jobs from its California operations and an additional 24 engineers and other professionals from its Oregon staff. (Broadcom acquired part of Symantec in August.)
Samsung in October gave notice that it would cut a significant but unspecified number of engineers working on CPU development from its Austin, Texas, R&D center, according to Extremetech. That month, Samsung also announced plans to hire an additional 1200 engineers in India for its R&D centers there.
Goldman Sachs in August announced plans to hire 100 software engineers to be based in its trading divisions in New York and London.
More from around the world:
The biggest hiring news for the second half of 2019 came from Taiwan Semiconductor Manufacturing Co. (TSMC). TSMC in late July announced plans to fill 3000 new tech jobs by the end of the year, distributed among three Taiwan locations.
Ikea executives in October told the Financial Times that the company aims to add more smart products to its line of home furnishings. The retailer is in the process of adding engineers to its Swedish hub, and is considering setting up development operations in the U.S. and Asia.
Nokia, based in Finland, announced in November that it had recently hired 350 engineers to work on 5G technology.
BFS Capital announced in October that it would be hiring 50 to staff its new data science and engineering hub in Toronto.
Essential, the mobile device developer founded by Andy Rubin, tweeted in October news of a hiring push for engineers and designers in Bangalore, India. Essential didn’t release specifics about the eventual size of this team but at this writing listed 10 openings.
The Collectif Pochoirs Pour Tous initiative starts from the observation that the streets of Paris are strewn with insidious, even hateful, messages tagged in public spaces.
These street-art hunters wanted to respond with an approach that is meant to be not political, simply civic. When members of the Collectif spot racist, sexist, or otherwise discriminatory tags, they cover them using the taggers’ own method: stenciling. But rather than writing messages, its members opted for simple hearts: answering hate with love!
This article, "Pochoirs pour tous: hearts to fight hateful messages in the street," first appeared on Pepsnews - Le site des news positives.
When mobile manipulators eventually make it into our homes, self-repair is going to be a very important function. Hopefully, these robots will be durable enough that they won’t need to be repaired very often, but from time to time they’ll almost certainly need minor maintenance. At Humanoids 2019 in Toronto, researchers from the University of Tokyo showed how they taught a PR2 to perform simple repairs on itself by tightening its own screws. And using that skill, the robot was also able to augment itself, adding accessories like hooks to help it carry more stuff. Clever robot!
To keep things simple, the researchers provided the robot with CAD data that tells it exactly where all of its screws are.
At the moment, the robot can’t directly detect on its own whether a particular screw needs tightening, although it can tell if its physical pose doesn’t match its digital model, which suggests that something has gone wonky. It can also check its screws autonomously from time to time, or rely on a human physically pointing out that it has a screw loose, using the human’s finger location to identify which screw it is. Another challenge is that most robots, like most humans, are limited in the areas on themselves that they can comfortably reach. So to tighten up everything, they might have to find themselves a robot friend to help, just like humans help each other put on sunblock.
The actual tightening is either super easy or quite complicated, depending on the location and orientation of the screw. If the robot is lucky, it can just use its continuous wrist rotation for tightening, but if a screw is located in a tight position that requires an Allen wrench, the robot has to regrasp the tool over and over as it incrementally tightens the screw.
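The regrasping loop is easy to picture in code. This is a hypothetical sketch (the function name and the 60-degree usable swing are assumptions for illustration, not values from the paper):

```python
# Sketch of the incremental regrasp-and-tighten loop for a cramped screw.
WRENCH_SWING_DEG = 60   # assumed usable wrench rotation per grasp

def tighten_with_regrasps(total_turn_deg, swing_deg=WRENCH_SWING_DEG):
    """Return the list of wrench swings needed to turn a screw total_turn_deg."""
    steps = []
    turned = 0
    while turned < total_turn_deg:
        swing = min(swing_deg, total_turn_deg - turned)
        steps.append(swing)   # rotate the wrench, then release and regrasp
        turned += swing
    return steps

# Two full turns (720 degrees) with a 60-degree swing take 12 grasp cycles:
print(len(tighten_with_regrasps(720)))  # 12
```

Each iteration of the loop corresponds to one grasp-rotate-release cycle, which is why tightening a single awkwardly placed screw can take the robot a long time.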
The other neat trick that a robot can do once it can tighten screws on its own body is to add new bits of hardware to itself. PR2 was thoughtfully designed with mounting points on its shoulders (or maybe technically its neck) and head, and it turns out that it can reach these points with its manipulators, allowing it to modify itself, as the researchers explain:
When PR2 wants to have a lot of things, the only two hands are not enough to realize that. So we let PR2 to use a bag the same as we put it on our shoulder. PR2 started attaching the hook whose pose is calculated with self CAD data with a driver on his shoulder in order to put a bag on his shoulder. PR2 finished attaching the hook, and the people put a lot of cans in a tote bag and put it on PR2’s shoulder.
In 2016, London-based DeepMind Technologies, a subsidiary of Alphabet (which is also the parent company of Google), startled industry watchers when it reported that the application of artificial intelligence had reduced the cooling bill at a Google data center by a whopping 40 percent. What’s more, we learned that year, DeepMind was starting to work with the National Grid in the United Kingdom to save energy throughout the country using deep learning to optimize the flow of electricity.
Could AI really slash energy usage so profoundly? In the three years that have passed, I’ve searched for articles on the application of AI to other data centers but have found no evidence of important gains. What’s more, DeepMind’s talks with the National Grid about energy have broken down. And the financial results for DeepMind certainly don’t suggest that customers are lining up for its services: For 2018, the company reported losses of US $571 million on revenues of $125 million, up from losses of $366 million in 2017. Last April, The Economist characterized DeepMind’s 2016 announcement as a publicity stunt, quoting one inside source as saying, “[DeepMind just wants] to have some PR so they can claim some value added within Alphabet.”
This episode encouraged me to look more deeply into the economic promise of AI and the rosy projections made by champions of this technology within the financial sector. This investigation was just the latest twist on a long-standing interest of mine. In the early 1980s, I wrote a doctoral dissertation on the economics of robotics and AI, and throughout my career as a professor and technology consultant I have followed the economic projections for AI, including detailed assessments by consulting organizations such as Accenture, PricewaterhouseCoopers International (PwC), and McKinsey.
These analysts have lately been asserting that AI-enabled technologies will dramatically increase economic output. Accenture claims that by 2035 AI will double growth rates for 12 developed countries and increase labor productivity by as much as a third. PwC claims that AI will add $15.7 trillion to the global economy by 2030, while McKinsey projects a $13 trillion boost by that time.
Other forecasts have focused on specific sectors such as retail, energy, education, and manufacturing. In particular, the McKinsey Global Institute assessed the impact of AI on these four sectors in a 2017 report titled Artificial Intelligence: The New Digital Frontier? and did so for a much longer list of sectors in a 2018 report. In the latter, the institute concluded that AI techniques “have the potential to create between $3.5 trillion and $5.8 trillion in value annually across nine business functions in 19 industries. This constitutes about 40 percent of the overall $9.5 trillion to $15.4 trillion annual impact that could potentially be enabled by all analytical techniques.”
Wow. These are big numbers. If true, they create a powerful incentive for companies to pursue AI—with or without help from McKinsey consultants. But are these predictions really valid?
Many of McKinsey’s estimates were made by extrapolating from claims made by various startups. For instance, its prediction of a 10 percent improvement in energy efficiency in the U.K. and elsewhere was based on the purported success of DeepMind and also of Nest Labs, which became part of Google’s hardware division in 2018. In 2017, Nest, which makes a smart thermostat and other intelligent products for the home, lost $621 million on revenues of $726 million. That fact doesn’t mesh with the notion that Nest and similar companies are contributing, or are poised to contribute, hugely to the world economy.
So I decided to investigate more systematically how well such AI startups were doing. I found that many were proving not nearly as valuable to society as all the hype would suggest. This assertion will certainly rub a lot of people the wrong way, the analysts at McKinsey among them. So I’d like to describe here how I reached my much more pessimistic conclusions.
My investigation of Nest Labs expanded into a search for evidence that smart meters in general are leading to large gains in energy efficiency. In 2016, the British government began a coordinated campaign to install smart meters throughout the country by 2020. And since 2010, the U.S. Department of Energy has invested some $4.5 billion installing more than 15 million smart meters throughout the United States. Curiously enough, all that effort has had little observed impact on energy usage. The U.K. government recently revised downward the amount it figures a smart meter will save each household annually, from £26 to just £11. And the cost of smart meters and their installation has risen, warns the U.K.’s National Audit Office. All of this is not good news for startups banking on the notion that smart thermostats, smart home appliances, and smart meters will lead to great energy savings.
Are other kinds of AI startups having a greater positive effect on the economy? Tech sector analyst CB Insights reports that overall venture capital funding in the United States was $115 billion in 2018, of which $9.3 billion went to AI startups. While that’s just 8 percent of the total, it’s still a lot of money, indicating that there are many U.S. startups working on AI (although some overstate the role of AI in their business plans to acquire funding).
To probe further, I gathered data on the U.S. AI startups that have received the most funding and looked at which industries they were hoping to disrupt. The reason for focusing on the United States is that it has the longest history of startup success, so it seems likely that its AI startups are more apt to flourish than those in other countries. My intention was to evaluate whether these U.S. startups had succeeded in shaking up various industries and boosting productivity or whether they promise to do so shortly.
In all, I examined 40 U.S. startups working on AI. These either had valuations greater than $1 billion or had more than $70 million in equity funding. Other than two that had been acquired by public companies, the startups I looked at are all private firms. I found their names and product offerings in lists of leading startups that Crunchbase, Fortune, and Datamation had compiled and published. I then updated my data set with more recent news about these companies (including reports of some shutdowns).
I categorized these 40 startups by the type of product or service they offered. Seventeen are working on what I would call basic computer hardware and software (Wave Computing and OpenAI, respectively, are examples), including cybersecurity (CrowdStrike, for instance). That is, I included in this category companies building tools that are intended to support the computing environment itself.
Making up another large fraction—8 of the 40—are companies that develop software that automates various tasks. The robotic process automation software being developed by Automation Anywhere, UiPath, and WorkFusion, for example, enables higher productivity among professionals and other white-collar workers. Software from Brain Corp. converts manual equipment into intelligent robots. Algolia, Conversica, and Xant offer software to improve sales and marketing. ZipRecruiter targets human resources.
The remaining startups on my list are spread among various industries. Three (Flatiron Health, Freenome, Tempus Labs) work in health care; three more (Avant, Upstart, ZestFinance) are focused on financial technology; two (Indigo, Zymergen) target agriculture or synthetic biology; and three others (Nauto, Nuro, Zoox) involve transportation. There is just one startup each for geospatial analytics (Orbital Insight), patterns of human interaction (Afiniti), photo/video recognition (Vicarious), and music recognition (SoundHound).
Are there indications that these startups will bring large productivity improvements in the near future? In my view, software that automates tasks normally carried out by white-collar workers is probably the most promising of the products and services that AI is being applied to. Similar to past improvements in tools for white-collar professionals, including Excel for accountants and computer-aided design for engineers and architects, these types of AI-based tools have the greatest potential impact on productivity. For instance, there are high hopes for generative design, in which teams of people input constraints and the system proposes specific designs.
But looking at the eight startups on my list that are working on automation tools for white-collar workers, I realized that they are not targeting things that would lead to much higher productivity. Three of them are focused on sales and marketing, which is often a zero-sum game: The company with the best software takes customers from competitors, with only small increases in productivity under certain conditions. Another one of these eight companies is working on human-resource software, whose productivity benefits may be larger than those for sales and marketing but probably not as large as you’d get from improved robotic process automation.
This leaves four startups that do offer such software, which may lead to higher productivity and lower costs. But even among these startups, none currently offers software that helps engineers and architects become more productive through, for example, generative design. Software of this kind isn’t coming from the largest startups, perhaps because there is a strong incumbent, Autodesk, or because the relevant AI is still not developed enough to provide truly useful tools in this area.
The relatively large number of startups I classified as working on basic hardware and software for computing (17) also suggests that productivity improvements are still many years away. Although basic hardware and software are a necessary part of developing higher-level AI-based tools, particularly ones utilizing machine learning, it will take time for the former to enable the latter. I suppose this situation simply reflects that AI is still in its infancy. You certainly get that impression from companies like OpenAI: Although it has received $1 billion in funding (and a great deal of attention), the vagueness of its mission—“Benefiting all of humanity”—suggests that it will take many years yet for specific useful products and services to evolve from this company’s research.
The large number of these startups that are focused on cybersecurity (seven) highlights the increasing threat of security problems, which raise the cost of doing business over the Internet. AI’s ability to address cybersecurity issues will likely make the Internet more safe, secure, and useful. But in the end, this thrust reflects yet higher costs in the future for Internet businesses and will not, to my mind, lead to large productivity improvements within the economy as a whole.
If not from the better software tools it brings, where will AI bring substantial economic gains? Health care, you would think, might benefit greatly from AI. Yet the number of startups on my list that are applying AI to health care (three) seems oddly small if that were really the case. Perhaps this has something to do with IBM’s experience with its Watson AI, which proved a disappointment when it was applied to medicine.
Still, many people remain hopeful that AI-fueled health care startups will fill the gap left by Watson’s failures. Arguing against this is Robert Wachter, who points out that it’s much more difficult to apply computers to health care than to other sectors. His 2015 book, The Digital Doctor: Hope, Hype, and Harm at the Dawn of Medicine’s Computer Age, details the many reasons that health care lags other industries in the application of computers and software. It’s not clear that adding AI to the mix of digital technologies available will do anything to change the situation.
There are also some big applications missing from the list of well-funded AI startups. Housing represents the largest category of consumer expenditures in the United States, but none of these startups are addressing this sector of the economy at all. Transportation is the second largest expenditure, and it is the focus of just three of these startups. One is working on a product that identifies distracted drivers. Another intends to provide automated local deliveries. Only one startup on the list is developing driverless passenger vehicles. That there is only one working on self-driving cars is consistent with the pessimism recently expressed by executives of Ford, General Motors, and Mercedes-Benz about the prospects for driverless vehicles taking to the streets in large numbers anytime soon, even though $35 billion has already been spent on R&D for them.
Admittedly, my assessment of what these 40 companies are doing and whether their offerings will shake up the world over the next decade is subjective. Perhaps it makes better sense to consider a more objective measure of whether these companies are providing value to the world economy: their profitability.
Alas, good financial data is not available on privately held startups; only two of the companies on my list are now part of public companies; and startups often take years to turn a profit (Amazon took seven years). So there isn’t a lot to go on here. Still, there are some broad trends in the tech sector that are quite telling.
The fraction of tech companies that are profitable by the time they go public dropped from 76 percent in 1980 to just 17 percent in 2018, even though the average time to IPO has been rising—it went from 2.8 years in 1998 to 7.7 years in 2016, for example. Also, the losses of some well-known startups that took a long time to go public are huge. For instance, none of the big ride-sharing companies are making a profit, including those in the United States (Uber and Lyft), China, India, and Singapore, with total losses of about $5 billion in 2018. Most bicycle and scooter sharing, office sharing, food delivery, P2P (peer-to-peer) lending, health care insurance and analysis, and other consumer service startups are also losing vast amounts of money, not only in the United States but also in China and India.
Most of the 40 AI startups I examined will probably stay private, at least in the near term. But even if some do go public several years down the road, it’s unlikely they’ll be profitable at that point, if the experience of many other tech companies is any guide. It may take these companies years more to achieve the distinction of making more money than they are spending.
For the reasons I’ve given, it’s very hard for me to feel confident that any of the AI startups I examined will provide the U.S. economy with a big boost over the next decade. Similar pessimism is also starting to emerge from such normally cheery publications as Technology Review and Scientific American. Even the AI community is beginning to express concerns in books such as The AI Delusion and Rebooting AI: Building Artificial Intelligence We Can Trust, concerns that are growing amid the rising hype about many new technologies.
The most promising areas for rapid gains in productivity are likely to be found in robotic process automation for white-collar workers, continuing a trend that has existed for decades. But these improvements will be gradual, just as those for computer-aided design and computer-aided engineering software, spreadsheets, and word processing have been.
Viewed over the span of decades, the value of such software is impressive, bringing huge gains in productivity for engineers, accountants, lawyers, architects, journalists, and others—gains that enabled some of these professionals (particularly engineers) to enrich the global economy in countless ways.
Such advances will no doubt continue with the aid of machine learning and other forms of AI. But they are unlikely to be nearly as disruptive—for companies, for workers, or for the economy as a whole—as many observers have been arguing.
Jeffrey Funk retired from the National University of Singapore in 2017, where he taught (among other subjects) a course on the economics of new technology as a professor of technology management. He remains based in Singapore, where he consults in various areas of technology and business.
Researchers have developed a new technique for tracking the hand movements of a non-attentive driver, to calculate how long it would take the driver to assume control of a self-driving car in an emergency.
If manufacturers can overcome the final legal hurdles, cars with Level 3 autonomous vehicle technology will one day be chauffeuring people from A to B. These cars allow drivers to take their eyes off the road and do minor tasks (such as texting or watching a movie). However, these cars need a way of knowing how quickly—or slowly—a driver can respond when taking control during an emergency.
To address this need, Kevan Yuen and Mohan Trivedi at the University of California, San Diego developed their new hand-tracking system, which is described in a study published 22 November in IEEE Transactions on Intelligent Vehicles.
While tracking someone’s hands may sound simple, it can be hard to do in the cramped confines of a car, where there are only a few good spots to place a camera. A driver’s hands can also become occluded by one another or by objects, and cameras may be hindered by, for example, the harsh lighting of the sun on the driver’s arm.
In their new approach, Yuen and Trivedi took an existing program for tracking the full-body movements of people and adapted it to track the wrists and elbows of a driver, and also of a passenger, if present. It distinguishes between the right and left joints of both riders in the front seats. The researchers then developed and applied machine learning algorithms to train the system to support Level 3 autonomous technology. They trained the system with 8,500 annotated images.
“The approach is capable of highly accurate, and very efficient hand detection, localization, and activity analysis in a very wide range of real-world driving situations, involving multiple humans and multiple vehicles,” says Trivedi.
Their analysis shows that the system was able to identify the location of each of the eight joints present (the right and left elbows and wrists of both driver and passenger) with 95 percent accuracy, within a localization error tolerance of 10 percent of the person's average arm length.
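Joint-detection accuracy of this kind is typically scored with a PCK-style metric: a predicted joint counts as correct if it lands within some fraction (here, 10 percent) of a reference length, such as arm length, of the ground-truth location. Below is a minimal sketch of that scoring with made-up data; it is an illustration of the general metric, not the authors' evaluation code.

```python
def pck_score(predicted, ground_truth, ref_length, tol=0.10):
    """Fraction of joints whose predicted 2D location falls within
    tol * ref_length (e.g., 10% of average arm length) of ground truth."""
    correct = 0
    for (px, py), (gx, gy) in zip(predicted, ground_truth):
        # Euclidean distance between predicted and true joint positions
        if ((px - gx) ** 2 + (py - gy) ** 2) ** 0.5 <= tol * ref_length:
            correct += 1
    return correct / len(ground_truth)
```

With an arm length of 100 pixels and the default 10 percent tolerance, any prediction within 10 pixels of the annotated joint counts as a hit, and the score is simply the hit rate over all joints.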
Some instances where the tracking system did not work include when the driver was wearing unique clothing with heavy artistic texturing that was not represented in the training set, and when one of the driver’s arms blocked the camera’s view of the other arm.
The researchers say some of the problems encountered during their tests can be addressed by placing the camera in a better location to avoid occlusions, using multiple camera views, and increasing the training dataset to include more variety in clothing.
“This project is part of our larger research effort on the development of safe autonomous vehicles,” says Trivedi. He adds that the team is talking with at least one potential client about using this technology in a commercial setting, but said he couldn’t divulge which company has expressed interest.
Our nervous system is specialized to produce and conduct electrical currents, so it’s no surprise that gentle electric stimulation has healing powers. Neural stimulation—also known as neuromodulation, bioelectronic medicine, or electroceuticals—is currently used to treat pain, epilepsy, and migraines, and is being explored as a way to combat paralysis, inflammation, and even hair loss. Muscle stimulation can also bestow superhuman reflexes and improve short-term memory.
But to reach critical areas of the body, such as the brain or the spine, many treatments require surgically implanted devices, such as a cuff that wraps around the spinal cord. Implanting such a device can involve cutting through muscle and nerves (and may require changing a battery every few years).
Now, a team of biomedical engineers has created a type of electrode that can be injected into the body as a liquid, then harden into a stretchy, taffy-like substance. In a paper in the journal Advanced Healthcare Materials, the multi-institutional team used their “injectrodes” to stimulate the nervous systems of rats and pigs, with comparable results to existing implant technologies.
“Instead of cutting down to a nerve, we can just visualize it under ultrasound, inject this around it, and then extrude back a wire to the surface,” says study author Kip Ludwig, a professor of biomedical engineering and neurological surgery at the University of Wisconsin–Madison. That process creates a bypass between the surface of the skin and the deep nerve one wants to stimulate, without damaging tissue in between, he adds.
Researchers have created numerous flexible or stretchy electrodes to mold to the shape of, say, brain tissue, but this technology can be injected into the body and fill in cracks and crevices around nerves.
Working with Andrew Shoffstall at Case Western Reserve University and Manfred Franke of Neuronoff Inc., a California-based biotech company, Ludwig and colleagues developed an electrode consisting of bits of metal and a silicon base—similar to surgical glue—that combine to form a thick liquid. This liquid can be put into a syringe and injected into the space around a nerve, where it hardens into a solid form, with a consistency similar to taffy.
This taffy-like wire is conductive, can move and bend with the nerve or joint, and can be activated to stimulate the nerve with an inexpensive external unit—a transcutaneous electrical nerve stimulation, or TENS, unit—which anyone can buy at a pharmacy or online.
To test their new creation, the researchers injected the material into rats and pigs, and compared the performance to that of silver wires and a clinical electrode implant. The injectrodes worked just as well as both other tools, and even appeared to require a lower current for the same amount of neural activity. It is also possible to tailor the viscosity, or thickness, of the liquid electrodes for different applications, says Ludwig.
Still, to remove the wire, one would have to “go in and get it,” says Ludwig—meaning surgically remove it like any other electrical lead. Currently, his team is testing the safety and efficacy of the injectrodes over long periods of time and testing the possibility of having robots inject the material. Ludwig hopes to apply to the FDA and begin safety testing in humans in two years.
Ludwig and his collaborators co-founded Neuronoff to commercialize the technology. The team also recently received a US $2.1 million grant from the National Institutes of Health to test the injectrodes as an alternative to opioids for treating chronic back pain.
This application note explains the main measurement concept, guides the user through the measurements, and covers the main topics in a practical manner. Wherever possible, it highlights points where the user should pay particular attention.
Think back to when you were a young and eager beginner in technology. Remember the first time you took apart a PC, wrote your first line of code, or learned how to hack Doom. The easiest way to learn technology was (and is) to be hands-on.
“Hands-on learning is 15X more effective than passive learning (i.e., lectures).”
Getting started in technology can be intimidating. If you want to learn technology these days, there aren’t many great options.
Online courses, for example, are purely digital, so you don’t get the hands-on experience of building electronics. They often end up being just coding lessons.
Most schools don’t even offer electronics and programming in their curriculum. Even fewer engage students in hands-on learning. If you’re lucky, you might find a school that has an afterschool program led by a passionate STEM educator.
“There are nearly 500,000 open Computing jobs in the U.S. alone.”
Educational Hands-on Products
You can find some hands-on projects that use block code and snap-on parts, but these are often oversimplified to the point that you don’t even learn the fundamentals. When you remove the potential of making mistakes, you lose the connection to how things work in the real world.
What happened to the good ole days of getting your hands dirty with real hardware and programming?
The truth is, people learn best by doing.
That’s why we created Creation Crate, a tech subscription box that prepares learners for the jobs of the future by teaching them how to build awesome DIY electronic projects!
Would you rather build your own Bluetooth speaker, or read a textbook on electronics? Hands-on learning is not only more engaging but also more fun. Learning shouldn’t be a chore, and Creation Crate makes sure of that.
Creation Crate combines hands-on learning with educational electronics courses to teach electronics, circuits, coding, critical thinking, problem solving, and more!
“I am majoring in STEM (physics and computer science double-major), and I ordered this mainly for the purpose of tinkering and expanding my computer engineering knowledge through independent projects, and even for me, this bundle ended up being incredibly handy and interesting. Thank you guys, and good luck in the future!” - Roman F.
If there was ever a time to be an aspiring engineer, it’s now! The cost of components is a fraction of what it was ten years ago. Anyone can get access to the hardware and software used in everyday tech careers.
So why settle for anything but the real thing?
With Creation Crate, everything necessary is delivered in a kit to your door. Kits include all the components needed to build your project. You will also find access to an online classroom with detailed step-by-step video tutorials.
Each project uses an Uno R3 (Arduino-compatible) Microcontroller, a small programmable computer that acts as the brain of the project.
They’ll also learn how to use components like a Breadboard, Ultrasonic Sensor, LED Matrix, 7-Segment Display, Accelerometer, Distance, Pressure, Temperature, & Humidity Sensors, LCD Screen, Keypad, Microphone Module, Resistors, Servo Motors, Motor Driver Board, and more!
Learning how to program is a tedious but rewarding process. Most engineering careers in technology require an understanding of programming languages like Java, C++, Python, Ruby, and others.
With Creation Crate, students will learn how to write their own computer programs in the Arduino language (C/C++) to make their projects come to life!
Each project will introduce different lessons in programming C++. Here are a few examples of what they’ll learn:
What are Comments and Variables?
Arrays and Functions
Detecting variable input values
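The lessons above transfer to any language. As a taste of the kind of logic a first project like the mood lamp involves — comments, variables, arrays, functions, and reacting to a changing input value — here is a hypothetical sketch. It is written in Python for readability; Creation Crate's own projects are written in Arduino C/C++, and the sensor readings here are simulated, not real hardware values.

```python
# Illustrative only: simulated light-sensor logic for a mood-lamp-style
# project. The threshold and readings are made-up example values.

DARK_THRESHOLD = 300          # a variable: readings below this mean "dark"

def lamp_should_turn_on(reading):
    """A function: decide whether the mood lamp should light up."""
    return reading < DARK_THRESHOLD

readings = [850, 640, 310, 120]   # an array of sensor samples over time

def lamp_states(samples):
    # Detect changing input values, sample by sample.
    return [lamp_should_turn_on(r) for r in samples]
```

Running `lamp_states(readings)` on the sample array above shows the lamp staying off in bright light and switching on as the room darkens — the same cause-and-effect loop a student wires up and programs in the real project.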
As your aspiring engineer progresses through the curriculum, they’ll learn how to build and program electronic projects that become more challenging as they learn new lessons. They’ll learn how to build things like...
A color-changing mood lamp that activates when the lights are off
An optical theremin that lets you create music simply by waving your hands
A Bluetooth speaker that plays music from your phone
A rover bot that avoids obstacles and follows lines
By the end of the curriculum, they’ll have more hands-on learning experience in hardware and programming than many students receive in a four-year degree!
“Creation Crate fills a void that has existed in the Tech Subscription box world. Most kits are aimed at the very young or adults with extra income. Creation Crate is affordable and challenges tweens and teens (and at least one adult!). I especially appreciate the manner in which the project challenges build from month to month.” - Justin D.
Family activities create everlasting memories. The key to a great family activity is to do something that everyone enjoys and is interested in.
Unlike every other “learn electronics kit” out there, this isn’t just for kids and teens. Even adults will find the projects fun and challenging! That’s why Creation Crate makes the perfect family activity for parents looking to spend more quality time with their child.
“By high school, a child will have used up 90% of in-person parent time.” - Tim Urban (author of Wait But Why)
Americans spend almost $13 billion on unwanted presents each year. Why not gift something that’s not only fun, but will help develop a lifelong skill?