It takes a certain nimbleness to pick a strawberry or a salad. While crops like wheat and potatoes have been harvested mechanically for decades, many fruits and vegetables have proved resistant to automation. They are too easily bruised, or too hard for heavy farm machinery to locate.
But recently, technological developments and advances in machine learning have led to successful trials of more sensitive and dexterous robots, which use cameras and artificial intelligence to locate ripe fruit and handle it with care and precision.
Developed by engineers at the University of Cambridge, the Vegebot is the first robot that can identify and harvest iceberg lettuce — bringing hope to farmers that one of the most demanding crops for human pickers could finally be automated.
First, a camera scans the lettuce and, with the help of a machine learning algorithm trained on more than a thousand lettuce images, decides if it is ready for harvest. Then a second camera guides the picking cage on top of the plant without crushing it. Sensors feel when it is in the right position, and compressed air drives a blade through the stalk at a high force to get a clean cut.
The Vegebot uses machine learning to identify ripe, immature and diseased lettuce heads
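At the core of that pipeline is a three-way harvest decision. As a rough illustration of the logic only (the real Vegebot uses a neural network trained on labelled images, and every feature name and threshold below is a hypothetical stand-in), the decision might be sketched like this:

```python
# Toy sketch of the Vegebot's harvest decision. The actual system uses a
# machine learning model trained on more than a thousand lettuce images;
# the two features and the thresholds below are hypothetical stand-ins.

def classify_lettuce(green_score: float, brown_fraction: float) -> str:
    """Map simple image features to one of the three classes the
    Vegebot distinguishes: ripe, immature, or diseased."""
    if brown_fraction > 0.15:   # visible browning suggests disease
        return "diseased"
    if green_score < 0.4:       # head not yet developed enough
        return "immature"
    return "ripe"

def should_harvest(label: str) -> bool:
    # Only a "ripe" classification triggers the picking sequence;
    # immature heads are left for a later pass, diseased ones skipped.
    return label == "ripe"

print(classify_lettuce(0.7, 0.05))  # prints "ripe"
```

In practice the features and decision boundaries are learned from labelled images rather than hand-written rules; the sketch only mirrors the per-plant decision the robot has to make before its second camera and blade take over.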
Its success rate is high, with 91% of the crop accurately classified, according to a study published in July. But the robot is still much slower than humans, taking 31 seconds on average to pick one lettuce. Researchers say this could easily be sped up by using lighter materials.
Such adjustments would need to be made if the robot were used commercially. “Our goal was to prove you can do it, and we’ve done it,” Simon Birrell, co-author of the study, tells CNN Business. “Now it depends on somebody taking the baton and running forward,” he says.
More mouths to feed, but less manual labor
With the world’s population, currently growing by roughly 80 million a year, expected to climb from 7.7 billion today to 9.7 billion by 2050, agriculture is under pressure to meet rising demand for food production.
Added pressures from climate change, such as extreme weather, shrinking agricultural lands and the depletion of natural resources, make innovation and efficiency all the more urgent.
This is one reason behind the industry’s drive to develop robotics. The global market for agricultural drones and robots is projected to grow from $2.5 billion in 2018 to $23 billion in 2028, according to a report from market intelligence firm BIS Research.
“Agriculture robots are expected to have a higher operating speed and accuracy than traditional agriculture machinery, which shall lead to significant improvements in production efficiency,” Rakhi Tanwar, principal analyst of BIS Research, tells CNN Business.
Fruit picking robots like this one, developed by Fieldwork Robotics, operate for more than 20 hours a day
On top of this, growers are facing a long-term labor shortage. According to the World Bank, agriculture’s share of total employment worldwide declined from 43% in 1991 to 28% in 2018.
Tanwar says this is partly due to a lack of interest from younger generations. „The development of robotics in agriculture could lead to a massive relief to the growers who suffer from economic losses due to labor shortage,“ she says.
Robots can work all day and night, without stopping for breaks, and could be particularly useful during intense harvest periods.
„The main benefit is durability,“ says Martin Stoelen, a lecturer in robotics at the University of Plymouth and founder of Fieldwork Robotics, which has developed a raspberry-picking robot in partnership with Hall Hunter, one of the UK’s major berry growers.
Their robots, expected to go into production next year, will operate more than 20 hours a day, seven days a week during busy periods, “which human pickers obviously can’t do,” says Stoelen.
Octinion’s robot picks one strawberry every five seconds
Sustainable farming and food waste
Robots could also lead to more sustainable farming practices. They could enable growers to use less water, less fuel and fewer pesticides, and to produce less waste, says Tanwar.
At the moment, a field is typically harvested once, and any unripe fruits or vegetables are left to rot. A robot, by contrast, could be trained to pick only ripe vegetables and, working around the clock, could return to the same field multiple times to pick any stragglers.
Birrell says that this will be the most important impact of robot pickers. „Right now, between a quarter and a third of food just rots in the field, and this is often because you don’t have humans ready at the right time to pick them,“ he says.
A successful example of this is the strawberry-picking robot developed by Octinion, a Belgium-based engineering startup.
The robot — which launched this year and is being used by growers in the UK and the Netherlands — is mounted on a self-driving trolley to serve table-top strawberry production.
It uses 3D vision to locate the ripe berry, softly grips it with a pair of plastic pincers, and — just like a human — turns it 90 degrees to snap it from the stalk, before dropping it gently into a punnet.
„Robotics have the potential to convert the market from (being) supply-driven to demand-driven,“ says Tom Coen, CEO and founder of Octinion. „That will then help to reduce food waste and increase prices,“ he adds.
One major challenge with agricultural robots is adapting them for all-weather conditions. Farm machinery tends to be heavy-duty so that it can withstand rain, snow, mud, dust and heat.
„Building robots for agriculture is very different to building it for factories,“ says Birrell. „Until you’re out in the field, you don’t realize how robust it needs to be — it gets banged and crashed, you go over uneven surfaces, you get rained on, you get dust, you get lightning bolts.“
California-based Abundant Robotics has built an apple robot to endure the full range of farm conditions. It consists of an apple-sucking tube on a tractor-like contraption, which drives itself down an orchard row, while using computer vision to locate ripe fruit.
This spells the start of automation for orchard crops, says Dan Steere, CEO of Abundant Robotics. „Automation has steadily improved agricultural productivity for centuries,“ he says. „[We] have missed out on much of those benefits until now.“
The military wants future super-soldiers to control robots with their thoughts.
I. Who Could Object?
“Tonight I would like to share with you an idea that I am extremely passionate about,” the young man said. His long black hair was swept back like a rock star’s, or a gangster’s. “Think about this,” he continued. “Throughout all human history, the way that we have expressed our intent, the way we have expressed our goals, the way we have expressed our desires, has been limited by our bodies.” When he inhaled, his rib cage expanded and filled out the fabric of his shirt. Gesturing toward his body, he said, “We are born into this world with this. Whatever nature or luck has given us.”
His speech then took a turn: “Now, we’ve had a lot of interesting tools over the years, but fundamentally the way that we work with those tools is through our bodies.” Then a further turn: “Here’s a situation that I know all of you know very well—your frustration with your smartphones, right? This is another tool, right? And we are still communicating with these tools through our bodies.”
And then it made a leap: “I would claim to you that these tools are not so smart. And maybe one of the reasons why they’re not so smart is because they’re not connected to our brains. Maybe if we could hook those devices into our brains, they could have some idea of what our goals are, what our intent is, and what our frustration is.”
So began “Beyond Bionics,” a talk by Justin C. Sanchez, then an associate professor of biomedical engineering and neuroscience at the University of Miami, and a faculty member of the Miami Project to Cure Paralysis. He was speaking at a tedx conference in Florida in 2012. What lies beyond bionics? Sanchez described his work as trying to “understand the neural code,” which would involve putting “very fine microwire electrodes”—the diameter of a human hair—“into the brain.” When we do that, he said, we would be able to “listen in to the music of the brain” and “listen in to what somebody’s motor intent might be” and get a glimpse of “your goals and your rewards” and then “start to understand how the brain encodes behavior.”
He explained, “With all of this knowledge, what we’re trying to do is build new medical devices, new implantable chips for the body that can be encoded or programmed with all of these different aspects. Now, you may be wondering, what are we going to do with those chips? Well, the first recipients of these kinds of technologies will be the paralyzed. It would make me so happy by the end of my career if I could help get somebody out of their wheelchair.”
Sanchez went on, “The people that we are trying to help should never be imprisoned by their bodies. And today we can design technologies that can help liberate them from that. I’m truly inspired by that. It drives me every day when I wake up and get out of bed. Thank you so much.” He blew a kiss to the audience.
A year later, Justin Sanchez went to work for the Defense Advanced Research Projects Agency, the Pentagon’s R&D department. At darpa, he now oversees all research on the healing and enhancement of the human mind and body. And his ambition involves more than helping get disabled people out of their wheelchairs—much more.
DARPA has dreamed for decades of merging human beings and machines. Some years ago, when the prospect of mind-controlled weapons became a public-relations liability for the agency, officials resorted to characteristic ingenuity. They recast the stated purpose of their neurotechnology research to focus ostensibly on the narrow goal of healing injury and curing illness. The work wasn’t about weaponry or warfare, agency officials claimed. It was about therapy and health care. Who could object? But even if this claim were true, such changes would have extensive ethical, social, and metaphysical implications. Within decades, neurotechnology could cause social disruption on a scale that would make smartphones and the internet look like gentle ripples on the pond of history.
Most unsettling, neurotechnology confounds age-old answers to this question: What is a human being?
II. High Risk, High Reward
In his 1958 State of the Union address, President Dwight Eisenhower declared that the United States of America “must be forward-looking in our research and development to anticipate the unimagined weapons of the future.” A few weeks later, his administration created the Advanced Research Projects Agency, a bureaucratically independent body that reported to the secretary of defense. This move had been prompted by the Soviet launch of the Sputnik satellite. The agency’s original remit was to hasten America’s entry into space.
During the next few years, arpa’s mission grew to encompass research into “man-computer symbiosis” and a classified program of experiments in mind control that was code-named Project Pandora. There were bizarre efforts that involved trying to move objects at a distance by means of thought alone. In 1972, with an increment of candor, the word Defense was added to the name, and the agency became darpa. Pursuing its mission, darpa funded researchers who helped invent technologies that changed the nature of battle (stealth aircraft, drones) and shaped daily life for billions (voice-recognition technology, GPS devices). Its best-known creation is the internet.
The agency’s penchant for what it calls “high-risk, high-reward” research ensured that it would also fund a cavalcade of folly. Project Seesaw, a quintessential Cold War boondoggle, envisioned a “particle-beam weapon” that could be deployed in the event of a Soviet attack. The idea was to set off a series of nuclear explosions beneath the Great Lakes, creating a giant underground chamber. Then the lakes would be drained, in a period of 15 minutes, to generate the electricity needed to set off a particle beam. The beam would accelerate through tunnels hundreds of miles long (also carved out by underground nuclear explosions) in order to muster enough force to shoot up into the atmosphere and knock incoming Soviet missiles out of the sky. During the Vietnam War, darpa tried to build a Cybernetic Anthropomorphous Machine, a jungle vehicle that officials called a “mechanical elephant.”
The diverse and sometimes even opposing goals of darpa scientists and their Defense Department overlords merged into a murky, symbiotic research culture—“unencumbered by the typical bureaucratic oversight and uninhibited by the restraints of scientific peer review,” Sharon Weinberger wrote in a recent book, The Imagineers of War. In Weinberger’s account, darpa’s institutional history involves many episodes of introducing a new technology in the context of one appealing application, while hiding other genuine but more troubling motives. At darpa, the left hand knows, and doesn’t know, what the right hand is doing.
The agency is deceptively compact. A mere 220 employees, supported by about 1,000 contractors, report for work each day at darpa’s headquarters, a nondescript glass-and-steel building in Arlington, Virginia, across the street from the practice rink for the Washington Capitals. About 100 of these employees are program managers—scientists and engineers, part of whose job is to oversee about 2,000 outsourcing arrangements with corporations, universities, and government labs. The effective workforce of darpa actually runs into the range of tens of thousands. The budget is officially said to be about $3 billion, and has stood at roughly that level for an implausibly long time—the past 14 years.
The Biological Technologies Office, created in 2014, is the newest of darpa’s six main divisions. This is the office headed by Justin Sanchez. One purpose of the office is to “restore and maintain warfighter abilities” by various means, including many that emphasize neurotechnology—applying engineering principles to the biology of the nervous system. For instance, the Restoring Active Memory program develops neuroprosthetics—tiny electronic components implanted in brain tissue—that aim to alter memory formation so as to counteract traumatic brain injury. Does darpa also run secret biological programs? In the past, the Department of Defense has done such things. It has conducted tests on human subjects that were questionable, unethical, or, many have argued, illegal. The Big Boy protocol, for example, compared radiation exposure of sailors who worked above and below deck on a battleship, never informing the sailors that they were part of an experiment.
Last year I asked Sanchez directly whether any of darpa’s neurotechnology work, specifically, was classified. He broke eye contact and said, “I can’t—We’ll have to get off that topic, because I can’t answer one way or another.” When I framed the question personally—“Are you involved with any classified neuroscience project?”—he looked me in the eye and said, “I’m not doing any classified work on the neurotechnology end.”
If his speech is careful, it is not spare. Sanchez has appeared at public events with some frequency (videos are posted on darpa’s YouTube channel), to articulate joyful streams of good news about darpa’s proven applications—for instance, brain-controlled prosthetic arms for soldiers who have lost limbs. Occasionally he also mentions some of his more distant aspirations. One of them is the ability, via computer, to transfer knowledge and thoughts from one person’s mind to another’s.
III. “We Try to Find Ways to Say Yes”
Medicine and biology were of minor interest to darpa until the 1990s, when biological weapons became a threat to U.S. national security. The agency made a significant investment in biology in 1997, when darpa created the Controlled Biological Systems program. The zoologist Alan S. Rudolph managed this sprawling effort to integrate the built world with the natural world. As he explained it to me, the aim was “to increase, if you will, the baud rate, or the cross-communication, between living and nonliving systems.” He spent his days working through questions such as “Could we unlock the signals in the brain associated with movement in order to allow you to control something outside your body, like a prosthetic leg or an arm, a robot, a smart home—or to send the signal to somebody else and have them receive it?”
Human enhancement became an agency priority. “Soldiers having no physical, physiological, or cognitive limitation will be key to survival and operational dominance in the future,” predicted Michael Goldblatt, who had been the science and technology officer at McDonald’s before joining darpa in 1999. To enlarge humanity’s capacity to “control evolution,” he assembled a portfolio of programs with names that sounded like they’d been taken from video games or sci-fi movies: Metabolic Dominance, Persistence in Combat, Continuous Assisted Performance, Augmented Cognition, Peak Soldier Performance, Brain-Machine Interface.
The programs of this era, as described by Annie Jacobsen in her 2015 book, The Pentagon’s Brain, often shaded into mad-scientist territory. The Continuous Assisted Performance project attempted to create a “24/7 soldier” who could go without sleep for up to a week. (“My measure of success,” one darpa official said of these programs, “is that the International Olympic Committee bans everything we do.”)
Dick Cheney relished this kind of research. In the summer of 2001, an array of “super-soldier” programs was presented to the vice president. His enthusiasm contributed to the latitude that President George W. Bush’s administration gave darpa—at a time when the agency’s foundation was shifting. Academic science gave way to tech-industry “innovation.” Tony Tether, who had spent his career working alternately for Big Tech, defense contractors, and the Pentagon, became darpa’s director. After the 9/11 attacks, the agency announced plans for a surveillance program called Total Information Awareness, whose logo included an all-seeing eye emitting rays of light that scanned the globe. The pushback was intense, and Congress took darpa to task for Orwellian overreach. The head of the program—Admiral John Poindexter, who had been tainted by scandal back in the Reagan years—later resigned, in 2003. The controversy also drew unwanted attention to darpa’s research on super-soldiers and the melding of mind and machine. That research made people nervous, and Alan Rudolph, too, found himself on the way out.
In this time of crisis, darpa invited Geoff Ling, a neurology‑ICU physician and, at the time, an active-duty Army officer, to join the Defense Sciences Office. (Ling went on to work in the Biological Technologies Office when it spun out from Defense Sciences, in 2014.) When Ling was interviewed for his first job at darpa, in 2002, he was preparing for deployment to Afghanistan and thinking about very specific combat needs. One was a “pharmacy on demand” that would eliminate the bulk of powdery fillers from drugs in pill or capsule form and instead would formulate active ingredients for ingestion via a lighter, more compact, dissolving substance—like Listerine breath strips. This eventually became a darpa program. The agency’s brazen sense of possibility buoyed Ling, who recalls with pleasure how colleagues told him, “We try to find ways to say yes, not ways to say no.” With Rudolph gone, Ling picked up the torch.
Ling talks fast. He has a tough-guy voice. The faster he talks, the tougher he sounds, and when I met him, his voice hit top speed as he described a first principle of Defense Sciences. He said he had learned this “particularly” from Alan Rudolph: “Your brain tells your hands what to do. Your hands basically are its tools, okay? And that was a revelation to me.” He continued, “We are tool users—that’s what humans are. A human wants to fly, he builds an airplane and flies. A human wants to have recorded history, and he creates a pen. Everything we do is because we use tools, right? And the ultimate tools are our hands and feet. Our hands allow us to work with the environment to do stuff, and our feet take us where our brain wants to go. The brain is the most important thing.”
Ling connected this idea of the brain’s primacy with his own clinical experience of the battlefield. He asked himself, “How can I liberate mankind from the limitations of the body?” The program for which Ling became best known is called Revolutionizing Prosthetics. Since the Civil War, as Ling has said, the prosthetic arm given to most amputees has been barely more sophisticated than “a hook,” and not without risks: “Try taking care of your morning ablutions with that bad boy, and you’re going to need a proctologist every goddamn day.” With help from darpa colleagues and academic and corporate researchers, Ling and his team built something that was once all but unimaginable: a brain-controlled prosthetic arm.
No invention since the internet has been such a reliable source of good publicity for darpa. Milestones in its development were hailed with wonder. In 2012, 60 Minutes showed a paralyzed woman named Jan Scheuermann feeding herself a bar of chocolate using a robotic arm that she manipulated by means of a brain implant.
Yet darpa’s work to repair damaged bodies was merely a marker on a road to somewhere else. The agency has always had a larger mission, and in a 2015 presentation, one program manager—a Silicon Valley recruit—described that mission: to “free the mind from the limitations of even healthy bodies.” What the agency learns from healing makes way for enhancement. The mission is to make human beings something other than what we are, with powers beyond the ones we’re born with and beyond the ones we can organically attain.

The internal workings of darpa are complicated. The goals and values of its research shift and evolve in the manner of a strange, half-conscious shell game. The line between healing and enhancement blurs. And no one should lose sight of the fact that D is the first letter in darpa’s name. A year and a half after the video of Jan Scheuermann feeding herself chocolate was shown on television, darpa made another video of her, in which her brain-computer interface was connected to an F-35 flight simulator, and she was flying the airplane. darpa later disclosed this at a conference called Future of War.
Geoff Ling’s efforts have been carried on by Justin Sanchez. In 2016, Sanchez appeared at darpa’s “Demo Day” with a man named Johnny Matheny, whom agency officials describe as the first “osseointegrated” upper-limb amputee—the first man with a prosthetic arm attached directly to bone. Matheny demonstrated what was, at the time, darpa’s most advanced prosthetic arm. He told the attendees, “I can sit here and curl a 45-pound dumbbell all day long, till the battery runs dead.” The next day, Gizmodo ran this headline above its report from the event: “darpa’s Mind-Controlled Arm Will Make You Wish You Were a Cyborg.”
Since then, darpa’s work in neurotechnology has avowedly widened in scope, to embrace “the broader aspects of life,” Sanchez told me, “beyond the person in the hospital who is using it to heal.” The logical progression of all this research is the creation of human beings who are ever more perfect, by certain technological standards. New and improved soldiers are necessary and desirable for darpa, but they are just the window-display version of the life that lies ahead.
IV. “Over the Horizon”
Consider memory, Sanchez told me: “Everybody thinks about what it would be like to give memory a boost by 20, 30, 40 percent—pick your favorite number—and how that would be transformative.” He spoke of memory enhancement through neural interface as an alternative form of education. “School in its most fundamental form is a technology that we have developed as a society to help our brains to do more,” he said. “In a different way, neurotechnology uses other tools and techniques to help our brains be the best that they can be.” One technique was described in a 2013 paper, a study involving researchers at Wake Forest University, the University of Southern California, and the University of Kentucky. Researchers performed surgery on 11 rats. Into each rat’s brain, an electronic array—featuring 16 stainless-steel wires—was implanted. After the rats recovered from surgery, they were separated into two groups, and they spent a period of weeks getting educated, though one group was educated more than the other.
The less educated group learned a simple task, involving how to procure a droplet of water. The more educated group learned a complex version of that same task—to procure the water, these rats had to persistently poke levers with their nose despite confounding delays in the delivery of the water droplet. When the more educated group of rats attained mastery of this task, the researchers exported the neural-firing patterns recorded in the rats’ brains—the memory of how to perform the complex task—to a computer.
“What we did then was we took those signals and we gave it to an animal that was stupid,” Geoff Ling said at a darpa event in 2015—meaning that researchers took the neural-firing patterns encoding the memory of how to perform the more complex task, recorded from the brains of the more educated rats, and transferred those patterns into the brains of the less educated rats—“and that stupid animal got it. They were able to execute that full thing.” Ling summarized: “For this rat, we reduced the learning period from eight weeks down to seconds.”
“They could inject memory using the precise neural codes for certain skills,” Sanchez told me. He believes that the Wake Forest experiment amounts to a foundational step toward “memory prosthesis.” This is the stuff of The Matrix. Though many researchers question the findings—cautioning that, really, it can’t be this simple—Sanchez is confident: “If I know the neural codes in one individual, could I give that neural code to another person? I think you could.” Under Sanchez, darpa has funded human experiments at Wake Forest, the University of Southern California, and the University of Pennsylvania, using similar mechanisms in analogous parts of the brain. These experiments did not transfer memory from one person to another, but instead gave individuals a memory “boost.” Implanted electrodes recorded neuronal activity associated with recognizing patterns (at Wake Forest and USC) and memorizing word lists (at Penn) in certain brain circuits. Then electrodes fed back those recordings of neuronal activity into the same circuits as a form of reinforcement. The result, in both cases, was significantly improved memory recall.
Doug Weber, a neural engineer at the University of Pittsburgh who recently finished a four-year term as a darpa program manager, working with Sanchez, is a memory-transfer skeptic. Born in Wisconsin, he has the demeanor of a sitcom dad: not too polished, not too rumpled. “I don’t believe in the infinite limits of technology evolution,” he told me. “I do believe there are going to be some technical challenges which are impossible to achieve.” For instance, when scientists put electrodes in the brain, those devices eventually fail—after a few months or a few years. The most intractable problem is blood leakage. When foreign material is put into the brain, Weber said, “you undergo this process of wounding, bleeding, healing, wounding, bleeding, healing, and whenever blood leaks into the brain compartment, the activity in the cells goes way down, so they become sick, essentially.” More effectively than any fortress, the brain rejects invasion.
Even if the interface problems that limit us now didn’t exist, Weber went on to say, he still would not believe that neuroscientists could enable the memory-prosthesis scenario. Some people like to think about the brain as if it were a computer, Weber explained, “where information goes from A to B to C, like everything is very modular. And certainly there is clear modular organization in the brain. But it’s not nearly as sharp as it is in a computer. All information is everywhere all the time, right? It’s so widely distributed that achieving that level of integration with the brain is far out of reach right now.”
Peripheral nerves, by contrast, conduct signals in a more modular fashion. The biggest, longest peripheral nerve is the vagus. It connects the brain with the heart, the lungs, the digestive tract, and more. Neuroscientists understand the brain’s relationship with the vagus nerve more clearly than they understand the intricacies of memory formation and recall among neurons within the brain. Weber believes that it may be possible to stimulate the vagus nerve in ways that enhance the process of learning—not by transferring experiential memories, but by sharpening the facility for certain skills.
To test this hypothesis, Weber directed the creation of a new program in the Biological Technologies Office, called Targeted Neuroplasticity Training (TNT). Teams of researchers at seven universities are investigating whether vagal-nerve stimulation can enhance learning in three areas: marksmanship, surveillance and reconnaissance, and language. The team at Arizona State has an ethicist on staff whose job, according to Weber, “is to be looking over the horizon to anticipate potential challenges and conflicts that may arise” regarding the ethical dimensions of the program’s technology, “before we let the genie out of the bottle.” At a TNT kickoff meeting, the research teams spent 90 minutes discussing the ethical questions involved in their work—the start of a fraught conversation that will broaden to include many others, and last for a very long time.
DARPA officials refer to the potential consequences of neurotechnology by invoking the acronym elsi, a term of art devised for the Human Genome Project. It stands for “ethical, legal, social implications.” The man who led the discussion on ethics among the research teams was Steven Hyman, a neuroscientist and neuroethicist at the Broad Institute of MIT and Harvard. Hyman is also a former head of the National Institute of Mental Health. When I spoke with him about his work on darpa programs, he noted that one issue needing attention is “cross talk.” A man-machine interface that does not just “read” someone’s brain but also “writes into” someone’s brain would almost certainly create “cross talk between those circuits which we are targeting and the circuits which are engaged in what we might call social and moral emotions,” he said. It is impossible to predict the effects of such cross talk on “the conduct of war” (the example he gave), much less, of course, on ordinary life.
Weber and a darpa spokesperson related some of the questions the researchers asked in their ethics discussion: Who will decide how this technology gets used? Would a superior be able to force subordinates to use it? Will genetic tests be able to determine how responsive someone would be to targeted neuroplasticity training? Would such tests be voluntary or mandatory? Could the results of such tests lead to discrimination in school admissions or employment? What if the technology affects moral or emotional cognition—our ability to tell right from wrong or to control our own behavior?
Recalling the ethics discussion, Weber told me, “The main thing I remember is that we ran out of time.”
V. “You Can Weaponize Anything”
In The Pentagon’s Brain, Annie Jacobsen suggested that darpa’s neurotechnology research, including upper-limb prosthetics and the brain-machine interface, is not what it seems: “It is likely that darpa’s primary goal in advancing prosthetics is to give robots, not men, better arms and hands.” Geoff Ling rejected the gist of her conclusion when I summarized it for him (he hadn’t read the book). He told me, “When we talk about stuff like this, and people are looking for nefarious things, I always say to them, ‘Do you honestly believe that the military that your grandfather served in, your uncle served in, has changed into being Nazis or the Russian army?’ Everything we did in the Revolutionizing Prosthetics program—everything we did—is published. If we were really building an autonomous-weapons system, why would we publish it in the open literature for our adversaries to read? We hid nothing. We hid not a thing. And you know what? That meant that we didn’t just do it for America. We did it for the world.”
I started to say that publishing this research would not prevent its being misused. But the terms use and misuse overlook a bigger issue at the core of any meaningful neurotechnology-ethics discussion. Will an enhanced human being—a human being possessing a neural interface with a computer—still be human, as people have experienced humanity through all of time? Or will such a person be a different sort of creature?
The U.S. government has put limits on DARPA’s power to experiment with enhancing human capabilities. Ling says colleagues told him of a “directive”: “Congress was very specific,” he said. “They don’t want us to build a superperson.” This can’t be the announced goal, Congress seems to be saying, but if we get there by accident—well, that’s another story. Ling’s imagination remains at large. He told me, “If I gave you a third eye, and the eye can see in the ultraviolet, that would be incorporated into everything that you do. If I gave you a third ear that could hear at a very high frequency, like a bat or like a snake, then you would incorporate all those senses into your experience and you would use that to your advantage. If you can see at night, you’re better than the person who can’t see at night.”
Enhancing the senses to gain superior advantage—this language suggests weaponry. Such capacities could certainly have military applications, Ling acknowledged—“You can weaponize anything, right?”—before he dismissed the idea and returned to the party line: “No, actually, this has to do with increasing a human’s capability” in a way that he compared to military training and civilian education, and justified in economic terms.
“Let’s say I gave you a third arm,” and then a fourth arm—so, two additional hands, he said. “You would be more capable; you would do more things, right?” And if you could control four hands as seamlessly as you’re controlling your current two hands, he continued, “you would actually be doing double the amount of work that you would normally do. It’s as simple as that. You’re increasing your productivity to do whatever you want to do.” I started to picture his vision—working with four arms, four hands—and asked, “Where does it end?”
“It won’t ever end,” Ling said. “I mean, it will constantly get better and better—” His cellphone rang. He took the call, then resumed where he had left off: “What DARPA does is we provide a fundamental tool so that other people can take those tools and do great things with them that we’re not even thinking about.”
Judging by what he said next, however, the number of things that DARPA is thinking about far exceeds what it typically talks about in public. “If a brain can control a robot that looks like a hand,” Ling said, “why can’t it control a robot that looks like a snake? Why can’t that brain control a robot that looks like a big mass of Jell-O, able to get around corners and up and down and through things? I mean, somebody will find an application for that. They couldn’t do it now, because they can’t become that glob, right? But in my world, with their brain now having a direct interface with that glob, that glob is the embodiment of them. So now they’re basically the glob, and they can go do everything a glob can do.”
VI. Gold Rush
DARPA’s developing capabilities still hover at or near a proof-of-concept stage. But that’s close enough to have drawn investment from some of the world’s richest corporations. In 1990, during the administration of President George H. W. Bush, DARPA Director Craig I. Fields lost his job because, according to contemporary news accounts, he intentionally fostered business development with some Silicon Valley companies, and White House officials deemed that inappropriate. Since the administration of the second President Bush, however, such sensitivities have faded.
Over time, DARPA has become something of a farm team for Silicon Valley. Regina Dugan, who was appointed DARPA director by President Barack Obama, went on to head Google’s Advanced Technology and Projects group, and other former DARPA officials went to work for her there. She then led R&D for the analogous group at Facebook, called Building 8. (She has since left Facebook.)
DARPA’s neurotechnology research has been affected in recent years by corporate poaching. Doug Weber told me that some DARPA researchers have been “scooped up” by companies including Verily, the life-sciences division of Alphabet (the parent company of Google), which, in partnership with the British pharmaceutical conglomerate GlaxoSmithKline, created a company called Galvani Bioelectronics to bring neuromodulation devices to market. Galvani calls its business “bioelectric medicine,” which conveys an aura of warmth and trustworthiness. Ted Berger, a University of Southern California biomedical engineer who collaborated with the Wake Forest researchers on their studies of memory transfer in rats, worked as the chief science officer at the neurotechnology company Kernel, which plans to build “advanced neural interfaces to treat disease and dysfunction, illuminate the mechanisms of intelligence, and extend cognition.” Elon Musk has courted DARPA researchers to join his company Neuralink, which is said to be developing an interface known as “neural lace.” Facebook’s Building 8 is working on a neural interface too. In 2017, Regina Dugan said that 60 engineers were at work on a system with the goal of allowing users to type 100 words a minute “directly from your brain.” Geoff Ling is on Building 8’s advisory board.
Talking with Justin Sanchez, I speculated that if he realizes his ambitions, he could change daily life in even more fundamental and lasting ways than Facebook’s Mark Zuckerberg and Twitter’s Jack Dorsey have. Sanchez blushes easily, and he breaks eye contact when he is uncomfortable, but he did not look away when he heard his name mentioned in such company. Remembering a remark that he had once made about his hope for neurotechnology’s wide adoption, but with “appropriate checks to make sure that it’s done in the right way,” I asked him to talk about what the right way might look like. Did any member of Congress strike him as having good ideas about legal or regulatory structures that might shape an emerging neural-interface industry? He demurred (“DARPA’s mission isn’t to define or even direct those things”) and suggested that, in reality, market forces would do more to shape the evolution of neurotechnology than laws or regulations or deliberate policy choices. What will happen, he said, is that scientists at universities will sell their discoveries or create start-ups. The marketplace will take it from there: “As they develop their companies, and as they develop their products, they’re going to be subject to convincing people that whatever they’re developing makes sense, that it helps people to be a better version of themselves. And that process—that day-to-day development—will ultimately guide where these technologies go. I mean, I think that’s the frank reality of how it ultimately will unfold.”
He seemed entirely untroubled by what may be the most troubling aspect of DARPA’s work: not that it discovers what it discovers, but that the world has, so far, always been ready to buy it.
This article appears in the November 2018 print edition with the headline “The Pentagon Wants to Weaponize the Brain. What Could Go Wrong?”
Imagine if, throughout your day, you could know exactly what your body chemistry was up to. More specifically, imagine if the information from your body could go instantly to your doctor, who could then diagnose what your body was doing or what was wrong.
It’s nearly here. Today at CES 2016, a company called Profusa demonstrated a wearable biointegrated sensor, Lumee, that allows for long-term continuous monitoring of your body chemistry. The device provides actionable data on your body’s key chemistry in one continuous data stream, changing the way we will monitor our health.
Lumee, a biointegrated wearable sensor.
“In between annual physicals we really don’t know what’s going on in our body,” said Ben Hwang, Ph.D., CEO of Profusa. “While fitness trackers and other wearables provide insights into our heart rate, respiration and other physical measures, they don’t provide information on the most important aspect of our health: our body’s chemistry. What if there was a better way of knowing how you’re doing — how you’re really doing?”
According to Statista, the digital health market is expected to reach $233.3 billion by 2020, and that market is being led by the mobile health market.
Since the iPhone hit it big in 2007, consumers and physicians alike (52%) have used their smartphones to search for advice, drugs, therapies, and more, and 80% of physicians use smartphones and medical apps. With wearables, physicians can now collect long-term and specialized data that is much easier to obtain, and can track patient health behaviors over longer periods of time. This has already changed our relationship with our health care providers and their relationships with us.
“Profusa’s Lumee is a bold attempt at one of the holy grails of personalized medicine: continuous, real-time, non-invasive glucose and oxygen monitoring. Its applications are vast,” said Ryan Bethencourt, Program Director and Venture Partner at Indie.Bio, a biotech accelerator in San Francisco. “From Type 1 and Type 2 diabetes monitoring through to fitness and finding optimal training patterns for your body, with data that’s currently impossible to acquire continuously any other way. I’m rarely this optimistic about a new medical device, especially one that will require implantation approval from the FDA, but in this case I think the optical biosensor technology and device design warrant the optimism.”
This is why Profusa hopes its tiny (3–5 mm) bioengineered biosensors will enable real-time detection of the body’s unique chemistry, giving greater insight into a person’s overall health status. Dr. Hwang believes Lumee can be applied not only to consumer health and wellness but also to the management of chronic diseases like Peripheral Artery Disease (PAD), diabetes and Chronic Obstructive Pulmonary Disease (COPD).
Researchers at MIT, the University of Sheffield, and the Tokyo Institute of Technology have demonstrated a tiny origami robot that can unfold itself from a swallowed capsule and, steered by external magnetic fields, crawl across the stomach wall to remove a swallowed button battery or patch a wound.
Researchers from Temple University have used the CRISPR/Cas9 gene editing tool to clear out the entire HIV-1 genome from a patient’s infected immune cells. It’s a remarkable achievement that could have profound implications for the treatment of AIDS and other retroviruses.
Retroviruses, unlike regular run-of-the-mill viruses, insert copies of their genomes into host cells in order to replicate. Antiretroviral drugs have proven effective at controlling HIV after infection, but patients who stop taking these drugs suffer a quick relapse. Once treatment stops, the HIV reasserts itself, weakening the immune system, thus triggering the onset of acquired immune deficiency syndrome, or AIDS.
Over the years, scientists have struggled to remove HIV from infected CD4+ T-cells, a type of white blood cell that fights infection. Many of these “shock and kill” efforts have been unsuccessful. The recent introduction of CRISPR/Cas9 has now inspired a new approach.
Geneticist Kamel Khalili and colleagues from Temple University extracted infected T-cells from a patient. The team’s modified version of CRISPR/Cas9—which specifically targets HIV-1 DNA—did the rest. First, guide RNA methodically made its way across the entire T-cell genome, searching for signs of the viral components. Once it recognized a match, a nuclease enzyme ripped out the offending strands from the T-cell DNA. Then the cell’s built-in DNA repair machinery patched up the loose ends.
Not only did this remove the viral DNA, it did so permanently. What’s more, because this microscopic genetic system remained within the cell, it staved off further infections when particles of HIV-1 tried to sneak their way back in from unedited cells.
The study was performed on T-cells in a petri dish, but the technique successfully lowered the viral load in the patient’s extracted cells, which strongly suggests it could be used as a treatment. However, it could be years before we see that happen. Importantly, the researchers ruled out off-target effects (i.e., unanticipated side effects of gene editing) and potential toxicity. They also demonstrated that the HIV-1-eradicated cells were growing and functioning normally.
These findings “demonstrate the effectiveness of our gene editing system in eliminating HIV from the DNA of CD4 T-cells and, by introducing mutations into the viral genome, permanently inactivating its replication,” Khalili said in a statement. “Further, they show that the system can protect cells from reinfection and that the technology is safe for the cells, with no toxic effects.”
Mickey D’s uses varieties like the Russet Burbank, which have a nice oval shape and just the right balance of starch and sugar. Excess sugar can cause a fry to have brown spots where it’s over-caramelized, leaving a burnt taste and deviating from the uniform yellow-arches color. Just in case, the spuds are blanched after slicing, removing surplus sugar.
SODIUM ACID PYROPHOSPHATE
Taters can turn a nasty hue even after they’re fried—iron in the spud reacts with the potato’s phenolic compounds, discoloring the tissue. The phosphate ions in SAPP trap the iron ions, stalling the reaction and keeping the potatoes nice and white throughout the process.
VEGETABLE OIL
In the good old days, McDonald’s fries were cooked in beef tallow. But customer demand for less saturated fat prompted a switch to vegetable oil in the early ’90s. Here, that means oils of varying saturations combined into something reminiscent of beef tallow. There’s canola (about 8 percent saturated fat), soybean oil (16 percent), and hydrogenated soybean oil (94 percent). And to replace the essence of beef tallow? “Natural beef flavor,” which contains hydrolyzed wheat and milk proteins that could be a source of meaty-tasting amino acids.
MORE VEGETABLE OIL
That’s right, the fries get two batches of vegetable oil—one for par-frying at the factory and another for the frying bath on location. The second one adds corn oil and an additive called TBHQ, or tert-butylhydroquinone, which at high doses can cause nasty side effects in rats (mmmm … stomach tumors). McDonald’s uses this oil for all its frying, so the stuff usually sits around in big vats, which means it can go rancid as oxygen plucks hydrogens from lipids. TBHQ acts as an antioxidant, replacing those pilfered hydrogens with its own supply.
DEXTROSE
A brief dip in a corn-based sugar solution replaces just enough of the natural sweet stuff that was removed by blanching. The result is a homogeneous outer layer that caramelizes evenly. You’ll add more sugar later when you squirt on the ketchup.
SALT
Sprinkled on just after frying, the crystals are a uniform diameter—just big enough to get absorbed quickly by crackling-hot oil. Now add ketchup and you’ve achieved the hedonistic trifecta: fat, salt, and sugar.