Category archive: Biology

The First Crispr-Edited Salad Is Here

A startup used gene editing to make mustard greens more appetizing to consumers. Next up: fruits.

A gene-editing startup wants to help you eat healthier salads. This month, North Carolina–based Pairwise is rolling out a new type of mustard greens engineered to be less bitter than the original plant. The vegetable is the first Crispr-edited food to hit the US market.

Mustard greens are packed with vitamins and minerals but have a strong peppery flavor when eaten raw. To make them more palatable, they’re usually cooked. Pairwise wanted to retain the health benefits of mustard greens but make them tastier to the average shopper, so scientists at the company used the DNA-editing tool Crispr to remove a gene responsible for their pungency. The company hopes consumers will opt for its greens over less nutritious ones like iceberg and butter lettuce.

“We basically created a new category of salad,” says Tom Adams, cofounder and CEO of Pairwise. The greens will initially be available in select restaurants and other outlets in the Minneapolis–St. Paul region, St. Louis, and Springfield, Massachusetts. The company plans to start stocking the greens in grocery stores this summer, likely in the Pacific Northwest first.

 

A naturally occurring part of bacteria’s immune system, Crispr was first harnessed as a gene-editing tool in 2012. Ever since, scientists have envisioned lofty uses for the technique. If you could tweak the genetic code of plants, you could—at least in theory—install any number of favorable traits into them. For instance, you could make crops that produce larger yields, resist pests and disease, or require less water. Crispr has yet to end world hunger, but in the short term, it may give consumers more variety in what they eat.

Pairwise’s goal is to make already healthy foods more convenient and enjoyable. Beyond mustard greens, the company is also trying to improve fruits. It’s using Crispr to develop seedless blackberries and pitless cherries. “Our lifestyle and needs are evolving and we’re becoming more aware of our nutrition deficit,” says Haven Baker, cofounder and chief business officer at Pairwise. In 2019, only about one in 10 adults in the US met the daily recommended intake of 1.5 to 2 cups of fruit and 2 to 3 cups of vegetables, according to the Centers for Disease Control and Prevention.

Technically, the new mustard greens aren’t a genetically modified organism, or GMO. In agriculture, GMOs are organisms made by adding genetic material from a completely different species. These are crops that could not be produced through conventional selective breeding—that is, choosing parent plants with certain characteristics to produce offspring with more desirable traits.

Instead, Crispr involves tweaking an organism’s own genes; no foreign DNA is added. One benefit of Crispr is that it can achieve new plant varieties in a fraction of the time it takes to produce a new one through traditional breeding. It took Pairwise just four years to bring its mustard greens to the market; it can take a decade or longer to bring out desired characteristics through the centuries-old practice of crossbreeding.

 

In the US, gene-edited foods aren’t subject to the same regulations as GMOs, so long as their genetic changes could have otherwise occurred through traditional breeding—such as a simple gene deletion or swapping of some DNA letters. As a result, gene-edited foods don’t have to be labeled as such. By contrast, GMOs need to be labeled as “bioengineered” or “derived from bioengineering” under new federal requirements, which went into effect at the beginning of 2022.

 

The US Department of Agriculture reviews applications for gene-edited foods to determine whether these altered plants could become a pest, and the Food and Drug Administration recommends that producers consult with the agency before bringing these new foods to market. In 2020, the USDA determined Pairwise’s mustard greens were not plant pests. The company also met with the FDA prior to introducing its new greens.

The mustard greens aren’t the first Crispr food to be launched commercially. In 2021, a Tokyo firm introduced a Crispr-edited tomato in Japan that contains high amounts of γ-aminobutyric acid, or GABA. A chemical messenger in the brain, GABA blocks impulses between nerve cells. The company behind the tomato, Sanatech Seeds, claims that eating GABA can help relieve stress and lower blood pressure.

Scientists are using Crispr in an attempt to improve other crops, such as boosting the number of kernels on ears of corn or breeding cacao trees with enhanced resistance to disease. And last year, the US approved Crispr-edited cattle for use in meat production. Minnesota company Acceligen used the gene-editing tool to give cows a short, slick-hair coat. Cattle with this trait may be able to better withstand hot temperatures. Beef from these cows hasn’t come onto the market yet.

Another Minnesota firm, Calyxt, came out with a gene-edited soybean oil in 2019 that’s free of trans fats, but the product uses an older form of gene editing known as TALENs.

Some question the value of using Crispr to make less bitter greens. People who don’t eat enough vegetables are unlikely to change their habits just because a new salad alternative is available, says Peter Lurie, president and executive director of the Center for Science in the Public Interest, a Washington, DC–based nonprofit that advocates for safer and healthier foods. “I don’t think this is likely to be the answer to any nutritional problems,” he says, adding that a staple crop like fortified rice would likely have a much bigger nutritional impact.

When genetic engineering was first introduced to agriculture in the 1990s, proponents touted the potential consumer benefits of GMOs, such as healthier or fortified foods. In reality, most of the GMOs on the market today were developed to help farmers prevent crop loss and increase yield. That may be starting to change. Last year, a GMO purple tomato was introduced in the US with consumers in mind. It’s engineered to contain more antioxidants than the regular red variety of tomato, and its shelf life is also twice as long.

Gene-edited foods like the new mustard greens may offer similar consumer benefits without the baggage of the GMO label. Despite decades of evidence showing that GMOs are safe, many Americans are still wary of these foods. In a 2019 poll by the Pew Research Center, about 51 percent of respondents thought foods with GMO ingredients were worse for people’s health than foods with no genetically modified ingredients.

However, gene-edited foods could still face obstacles with public acceptance, says Christopher Cummings, a senior research fellow at North Carolina State University and Iowa State University. Most people have not made up their minds about whether they would actively avoid or eat them, according to a 2022 study that Cummings conducted. Respondents who indicated a willingness to eat them tended to be under 30 with higher levels of education and household income, and many expressed a preference for transparency around gene-edited foods. Almost 75 percent of those surveyed wanted gene-edited foods to be labeled as such.

 

“People want to know how their food is made. They don’t want to feel duped,” Cummings says. He thinks developers of these products should be transparent about the technology they use to avoid future backlash.

As for wider acceptance of gene-edited foods, developers need to learn lessons from GMOs. One reason consumers have a negative or ambivalent view of GMOs is because they don’t often benefit directly from these foods. “The direct-to-consumer benefit has not manifested in many technological food products in the past 30 years,” says Cummings. “If gene-edited foods are really going to take off, they need to provide a clear and direct benefit to people that helps them financially or nutritionally.”

Source: https://www.wired.com/story/wired30-crispr-edited-salad-greens/

How Apple and Google Are Enabling Covid-19 Contact-Tracing

Source: https://www.wired.com/story/apple-google-bluetooth-contact-tracing-covid-19/

The tech giants have teamed up to use a Bluetooth-based framework to keep track of the spread of infections without compromising location privacy.

Since Covid-19 began its spread across the world, technologists have proposed using so-called contact-tracing apps to track infections via smartphones. Now, Google and Apple are teaming up to give contact-tracers the ingredients to make that system possible—while in theory still preserving the privacy of those who use it.

On Friday, the two companies announced a rare joint project to create the groundwork for Bluetooth-based contact-tracing apps that can work across both iOS and Android phones. In mid-May, they plan to release an application programming interface that apps from public health organizations can tap into. The API will let those apps use a phone’s Bluetooth radios—which have a range of about 30 feet—to keep track of whether a smartphone’s owner has come into contact with someone who later turns out to have been infected with Covid-19. Once alerted, that user can then self-isolate or get tested themselves.

Crucially, Google and Apple say the system won’t involve tracking user locations or even collecting any identifying data that would be stored on a server. “This is a very unprecedented situation for the world,” said one of the joint project’s spokespeople in a phone call with WIRED. “As platform companies we’ve both been thinking hard about what we can do to help get people back to normal life and back to work effectively. We think in bringing the two platforms together we can solve digital contact tracing at scale in partnership with public health authorities and do it in a privacy-preserving way.”

Unlike Apple, which has complete control over its software and hardware and can push system-wide changes with relative ease, Google faces a fragmented Android ecosystem. The company will still make the framework available to all devices running Android 6.0 or higher by delivering the update through Google Play Services, which does not require hardware partners to sign off.

Several projects, including ones led by developers at MIT, Stanford, and the governments of Singapore and Germany, have already proposed, and in some cases implemented, similar Bluetooth-based contact-tracing systems. Google and Apple declined to say which specific groups or government agencies they’ve been working with. But they argue that by building operating-system-level functions those applications can tap into, the apps will be far more effective and energy efficient. Most importantly, they’ll be interoperable between the two dominant smartphone platforms.

In the version of the system set to roll out next month, the operating-system-level Bluetooth tracing would allow users to opt in to a Bluetooth-based proximity-detection scheme when they download a contact-tracing app. Their phone would then constantly ping out Bluetooth signals to others nearby while also listening for communications from nearby phones.

If two phones spend more than a few minutes within range of one another, they would each record contact with the other phone, exchanging unique, rotating identifier “beacon” numbers that are based on keys stored on each device. Public health app developers would be able to “tune” both the proximity and the amount of time necessary to qualify as a contact based on current information about how Covid-19 spreads.

If a user is later diagnosed with Covid-19, they would alert their app with a tap. The app would then upload their last two weeks of keys to a server, which would then generate their recent “beacon” numbers and send them out to other phones in the system. If someone else’s phone finds that one of these beacon numbers matches one stored on their phone, they would be notified that they’ve been in contact with a potentially infected person and given information about how to help prevent further spread.
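The key-and-beacon scheme described above can be sketched in a few lines of Python. This is an illustrative construction only: the HMAC derivation, the 16-byte identifiers, and the 144 intervals per day here are our assumptions, not the actual Apple/Google protocol, whose cryptographic details were still being finalized at the time.

```python
import hashlib
import hmac
import os

def daily_key() -> bytes:
    """A fresh random per-day key, generated and kept on the phone."""
    return os.urandom(16)

def beacon_ids(day_key: bytes, intervals: int = 144) -> list[bytes]:
    """Derive the day's rotating beacon numbers from its key.

    Illustrative HMAC construction, not the real derivation: one
    identifier per interval (144 intervals = one every 10 minutes).
    """
    return [
        hmac.new(day_key, i.to_bytes(4, "big"), hashlib.sha256).digest()[:16]
        for i in range(intervals)
    ]

def exposed(heard: set[bytes], uploaded_day_keys: list[bytes]) -> bool:
    """Run locally on each phone: an infected user uploads day keys, every
    other phone re-derives the beacons and checks them against what it heard."""
    return any(heard & set(beacon_ids(key)) for key in uploaded_day_keys)
```

Because the server only ever sees the uploaded day keys and the derived beacon numbers, the matching itself happens on each handset, which is what keeps contact events off any central database.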


The advantage of that system, in terms of privacy, is that it doesn’t depend on collecting location data. “People’s identities aren’t tied to any contact events,” said Cristina White, a Stanford computer scientist who described a very similar Bluetooth-based contact tracing project known as Covid-Watch to WIRED last week. “What the app uploads instead of any identifying information is just this random number that the two phones would be able to track down later but that nobody else would, because it’s stored locally on their phones.”

Until now, however, Bluetooth-based schemes like the one White described suffered from how Apple limits access to Bluetooth when apps run in the background of iOS, a privacy and power-saving safeguard. Apple will lift that restriction specifically for contact-tracing apps. And Apple and Google say that the protocol they’re releasing will be designed to use minimal power to save phones’ battery lives. “This thing has to run 24-7, so it has to really only sip the battery life,” said one of the project’s spokespeople.

In a second iteration of the system rolling out in June, Apple and Google say they’ll allow users to enable Bluetooth-based contact-tracing even without an app installed, building the system into the operating systems themselves. This would be opt-in as well. But while the phones would exchange “beacon” numbers via Bluetooth, users would still need to download a contact-tracing app to either declare themselves as Covid-19 positive or to learn if someone they’ve come into contact with was diagnosed.

Google and Apple’s Bluetooth-based system has some significant privacy advantages over GPS-based location-tracking systems that have been proposed by other researchers including at MIT, the University of Toronto, McGill, and Harvard. Since those systems collect location data, they would require complex cryptographic systems to avoid collecting information about users’ movements that could potentially expose highly personal information, from political dissent to extramarital affairs.

With Google and Apple’s announcement, it’s clear that the companies chose to skirt those privacy pitfalls and implement a system that collects no location data. “It looks like we won,” says Stanford’s White, whose Covid-Watch project, part of a consortium of projects using a Bluetooth-based system, had advocated for the Bluetooth-only approach. “It’s clear from the API that it was influenced by our work. It’s following the exact suggestions from our engineers about how to implement it.”

Sticking to Bluetooth alone doesn’t guarantee the system won’t violate users’ privacy, White notes. Although Google and Apple say they’ll only upload anonymous identifiers from users’ phones, a server could nonetheless identify Covid-19 users in other ways, such as based on their IP address. The organization running a given app still needs to act responsibly. “Exactly what they’re proposing for the backend still isn’t clear, and that’s really important,” White says. “We need to keep advocating to make sure this is done properly and the server isn’t collecting information it shouldn’t.”

Even with Bluetooth tracing, the app still faces some practical challenges. First, it would need significant adoption and broad willingness to share Covid-19 infection information to work. And it will also require a safeguard that only allows users to declare themselves Covid-19 positive after a healthcare provider has officially diagnosed them, so that the system isn’t overrun with false positives. Covid-Watch, for instance, would require the user to get a confirmation code from a health care provider.

Bluetooth-based systems, in contrast with location-based systems, also have some problems of their own. If someone leaves behind traces of the novel coronavirus on a surface, for instance, another person can be infected by it without their phones ever having been in proximity.

A spokesperson for the Google and Apple project didn’t deny that possibility, but argued that those cases of “environmental transmission” are relatively rare compared to direct transmission from people in proximity of each other. “This won’t cut every chain of every transmission,” the spokesperson said. “But if you cut enough of them, you modulate the transmission enough to flatten the curve.”

 

Why outbreaks like coronavirus spread exponentially, and how to “flatten the curve”


This so-called exponential curve has experts worried. If the number of cases were to continue to double every three days, there would be about a hundred million cases in the United States by May.
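The projection is simple exponential arithmetic and can be checked directly. The starting figure of roughly 2,000 confirmed US cases in mid-March is our assumption for the sake of the calculation, not a number the piece states:

```python
def project(start_cases: int, days: int, doubling_time: int = 3) -> int:
    """Naive projection: cases double once every `doubling_time` days."""
    return start_cases * 2 ** (days // doubling_time)

# ~2,000 cases in mid-March, 48 days to early May = 16 doublings
print(project(2_000, 48))  # 131072000, on the order of a hundred million
```

Sixteen doublings multiply the starting count by 65,536, which is why even a modest seed of cases reaches nine figures in under two months.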

That is math, not prophecy. The spread can be slowed, public health professionals say, if people practice “social distancing” by avoiding public spaces and generally limiting their movement.

Still, without any measures to slow it down, covid-19 will continue to spread exponentially for months. To understand why, it is instructive to simulate the spread of a fake disease through a population.

We will call our fake disease simulitis. It spreads even more easily than covid-19: whenever a healthy person comes into contact with a sick person, the healthy person becomes sick, too.

In a population of just five people, it did not take long for everyone to catch simulitis.

In real life, of course, people eventually recover. A recovered person can neither transmit simulitis to a healthy person nor become sick again after coming in contact with a sick person.

Let’s see what happens when simulitis spreads in a town of 200 people. We will start everyone in town at a random position, moving at a random angle, and we will make one person sick.

Notice how the slope of the red curve, which represents the number of sick people, rises rapidly as the disease spreads and then tapers off as people recover.

Our simulation town is small — about the size of Whittier, Alaska — so simulitis was able to spread quickly across the entire population. In a country like the United States, with its 330 million people, the curve could steepen for a long time before it started to slow.
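The Post’s simulations run as interactive JavaScript in the original article. A minimal Python sketch of the same idea, random walkers who infect on proximity and recover after a fixed delay, might look like this; every parameter below is illustrative, not the Post’s:

```python
import math
import random

def simulate(n=100, steps=600, size=100.0, radius=3.0,
             recover_after=150, moving_fraction=1.0, speed=1.0, seed=7):
    """Agent-based 'simulitis': returns (healthy, sick, recovered) per step."""
    rng = random.Random(seed)
    pos = [[rng.uniform(0, size), rng.uniform(0, size)] for _ in range(n)]
    vel = []
    for _ in range(n):
        if rng.random() < moving_fraction:
            a = rng.uniform(0, 2 * math.pi)
            vel.append([speed * math.cos(a), speed * math.sin(a)])
        else:
            vel.append([0.0, 0.0])   # this agent stays put (social distancing)
    state = ["H"] * n                # H healthy, S sick, R recovered
    sick_since = [None] * n
    state[0], sick_since[0] = "S", 0  # one initial infection
    history = []
    for t in range(steps):
        for i in range(n):           # move, bouncing off the walls
            for d in (0, 1):
                pos[i][d] += vel[i][d]
                if not 0.0 <= pos[i][d] <= size:
                    vel[i][d] = -vel[i][d]
                    pos[i][d] = min(max(pos[i][d], 0.0), size)
        sick = [j for j in range(n) if state[j] == "S"]
        for i in range(n):           # contact with a sick agent infects
            if state[i] == "H" and any(
                    math.dist(pos[i], pos[j]) < radius for j in sick):
                state[i], sick_since[i] = "S", t
        for j in sick:               # recovery after a fixed delay
            if t - sick_since[j] >= recover_after:
                state[j] = "R"
        history.append((state.count("H"), state.count("S"), state.count("R")))
    return history

history = simulate()
# history[-1] holds the final (healthy, sick, recovered) counts
```

Rerunning with `moving_fraction=0.25` mimics the article’s moderate social-distancing scenario, in which only a quarter of the population keeps moving.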


When it comes to the real covid-19, we would prefer to slow the spread of the virus before it infects a large portion of the U.S. population. To slow simulitis, let’s try to create a forced quarantine, such as the one the Chinese government imposed on Hubei province, covid-19’s ground zero.

Whoops! As health experts would expect, it proved impossible to completely seal off the sick population from the healthy.

Leana Wen, the former health commissioner for the city of Baltimore, explained the impracticalities of forced quarantines to The Washington Post in January. “Many people work in the city and live in neighboring counties, and vice versa,” Wen said. “Would people be separated from their families? How would every road be blocked? How would supplies reach residents?”

As Lawrence O. Gostin, a professor of global health law at Georgetown University, put it: “The truth is those kinds of lockdowns are very rare and never effective.”

Fortunately, there are other ways to slow an outbreak. Above all, health officials have encouraged people to avoid public gatherings, to stay home more often and to keep their distance from others. If people are less mobile and interact with each other less, the virus has fewer opportunities to spread.

Some people will still go out. Maybe they cannot stay home because of their work or other obligations, or maybe they simply refuse to heed public health warnings. Those people are not only more likely to get sick themselves, they are more likely to spread simulitis, too.

Let’s see what happens when a quarter of our population continues to move around while the other three quarters adopt a strategy of what health experts call “social distancing.”

More social distancing keeps even more people healthy, and people can be nudged away from public places by removing their allure.

“We control the desire to be in public spaces by closing down public spaces. Italy is closing all of its restaurants. China is closing everything, and we are closing things now, too,” said Drew Harris, a population health researcher and assistant professor at The Thomas Jefferson University College of Public Health. “Reducing the opportunities for gathering helps folks social distance.”

To simulate more social distancing, instead of allowing a quarter of the population to move, we will see what happens when we let just one of every eight people move.

The four simulations you just watched — a free-for-all, an attempted quarantine, moderate social distancing and extensive social distancing — were random. That means the results of each one were unique to your reading of this article; if you scroll up and rerun the simulations, or if you revisit this page later, your results will change.

Even with different results, moderate social distancing will usually outperform the attempted quarantine, and extensive social distancing usually works best of all.

Simulitis is not covid-19, and these simulations vastly oversimplify the complexity of real life. Yet just as simulitis spread through the networks of bouncing balls on your screen, covid-19 is spreading through our human networks — through our countries, our towns, our workplaces, our families. And, like a ball bouncing across the screen, a single person’s behavior can cause ripple effects that touch faraway people.


In one crucial respect, though, these simulations are nothing like reality: Unlike simulitis, covid-19 can kill. Though the fatality rate is not precisely known, it is clear that the elderly members of our community are most at risk of dying from covid-19.

“If you want this to be more realistic,” Harris said after seeing a preview of this story, “some of the dots should disappear.”

What China’s coronavirus response can teach the rest of the world

Researchers are studying the effects of China’s lockdowns to glean insights about controlling the viral pandemic.

Social distancing has been used to halt the transmission of the coronavirus in China. Credit: Getty

As the new coronavirus marches around the globe, countries with escalating outbreaks are eager to learn whether China’s extreme lockdowns were responsible for bringing the crisis there under control. Other nations are now following China’s lead and limiting movement within their borders, while dozens of countries have restricted international visitors.

In mid-January, Chinese authorities introduced unprecedented measures to contain the virus, stopping movement in and out of Wuhan, the centre of the epidemic, and 15 other cities in Hubei province — home to more than 60 million people. Flights and trains were suspended, and roads were blocked.

Soon after, people in many Chinese cities were told to stay at home and venture out only to get food or medical help. Some 760 million people, roughly half the country’s population, were confined to their homes, according to The New York Times.

It’s now two months since the lockdowns began — some of which are still in place — and the number of new cases there is around a couple of dozen per day, down from thousands per day at the peak. “These extreme limitations on population movement have been quite successful,” says Michael Osterholm, an infectious-disease scientist at the University of Minnesota in Minneapolis.

In a report released late last month, the World Health Organization (WHO) congratulated China on a “unique and unprecedented public health response [that] reversed the escalating cases”.

But the crucial question is which interventions in China were the most important in driving down the spread of the virus, says Gabriel Leung, an infectious-disease researcher at the University of Hong Kong. “The countries now facing their first wave [of infections] need to know this,” he says.

Nature talked to epidemiologists about whether the lockdowns really worked, if encouraging people to avoid large gatherings would have been enough and what other countries can learn from China’s experience.

What happened after the lockdowns?

Before the interventions, scientists estimated that each infected person passed on the coronavirus to more than two others, giving it the potential to spread rapidly. Early models of the disease’s spread, which did not factor in containment efforts, suggested that the virus, called SARS-CoV-2, would infect 40% of China’s population — some 500 million people.

But between 16 and 30 January, a period that included the first 7 days of the lockdown, the number of people each infected individual gave the virus to dropped to 1.05, estimates Adam Kucharski, who models infectious-disease spread at the London School of Hygiene and Tropical Medicine. “That was amazing,” he says.
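To see what that drop in the reproduction number means, compare case counts per infection generation under the two regimes. This is a constant-R branching sketch, not Kucharski’s model, and the pre-lockdown value of 2.2 is illustrative of the article’s “more than two others”:

```python
def generations(initial: float, r: float, n: int) -> list[float]:
    """New infections in each of n successive generations under a constant R."""
    counts = [initial]
    for _ in range(n):
        counts.append(counts[-1] * r)
    return counts

# Ten generations from 100 seed infections:
before = generations(100, 2.2, 10)   # R above 2, as estimated pre-lockdown
after = generations(100, 1.05, 10)   # Kucharski's post-lockdown estimate
print(round(before[-1]))  # 265599: explosive growth
print(round(after[-1]))   # 163: nearly flat
```

The same starting outbreak either multiplies a few-thousand-fold or barely grows, which is why pushing R toward 1 matters far more than any fixed reduction in case counts.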

The number of new daily infections in China seems to have peaked on 25 January — just two days after Wuhan was locked down.

As of 16 March, roughly 81,000 cases have been reported in China, according to the WHO. Some scientists think that many cases there were unreported — either because symptoms were not severe enough for people to seek medical care, or because tests were not carried out. But it seems clear that measures implemented during this time did work, says Christopher Dye, an epidemiologist at the University of Oxford, UK. “Even if there were 20 or 40 times more cases, which seems unlikely, the control measures worked,” says Dye.

Could China’s response have worked better?

Epidemiologists say China’s mammoth response had one glaring flaw: it started too late. In the initial weeks of the outbreak in December and January, Wuhan authorities were slow to report cases of the mysterious infection, which delayed measures to contain it, says Howard Markel, a public-health researcher at the University of Michigan in Ann Arbor. “The delay of China to act is probably responsible for this world event,” says Markel.

A model simulation by Lai Shengjie and Andrew Tatem, emerging-disease researchers at the University of Southampton, UK, shows that if China had implemented its control measures a week earlier, it could have prevented 67% of all cases there. Implementing the measures 3 weeks earlier, from the beginning of January, would have cut the number of infections to 5% of the total.


A church in Wuhan is sprayed with disinfectant. Credit: Feature China/Barcroft Media/Getty

Data from other cities also show the benefits of speed. Cities that suspended public transport, closed entertainment venues and banned public gatherings before their first COVID-19 case had 37% fewer cases than cities that didn’t implement such measures, according to a preprint by Dye on the containment measures used in 296 Chinese cities.

Did China’s travel bans specifically work?

Multiple analyses of air travel suggest that the Hubei travel bans, which stopped people leaving the province on planes, trains or in cars, slowed the virus’s spread, but not for long. A 6 March study published in Science by scientists in Italy, China and the United States found that cutting off Wuhan delayed disease spread to other cities in China by roughly four days.

The bans had a more lasting effect internationally, stopping four of five cases from being exported from China to other countries for two to three weeks, the team found. But after that, travellers from other cities transported the virus to other international cities, seeding new outbreaks. The team’s model suggests that even blocking 90% of travel slows the virus’s spread only moderately unless other measures are introduced.

Since travel bans can only slow the spread of this type of disease, it’s important that bans be implemented in a way that encourages trust, says Justin Lessler, an epidemiologist at Johns Hopkins University in Baltimore. “If you encourage people to lie or try to circumvent the ban, it is destined to fail,” he says.

Dozens of countries across Europe, the Americas, Africa and Asia have now introduced travel restrictions.

The WHO warns against them, however, saying they aren’t usually effective in preventing an infection’s spread, can divert resources from other more helpful measures, block aid and technical support, and harm many industries.

What are the lessons for other countries?

Tatem and Lai’s model assesses the combined effect of China’s early detection and isolation, the resulting drop in contact between people and the country’s intercity travel bans on reducing the virus’s spread. Together, these measures prevented cases from increasing 67-fold — otherwise, there would have been nearly 8 million cases by the end of February.

The effect of the drop in contact between people was significant on its own. Using mobile-phone location data from Chinese Internet giant Baidu, the team found a dramatic reduction in people’s movements, which they say represents a huge drop in person-to-person contact. Without this decrease, there would have been about 2.6 times as many people infected at the end of February, the pair says.

But early detection and isolation was the most important factor in reducing COVID-19 cases. In the absence of those efforts, China would have had five times as many infections as it did at the end of February. “If you are to prioritize, early detection and isolation are the most important,” says Tatem.

Early detection paid off for Singapore. The country was one of the quickest to identify cases, because doctors had been warned to look out for a ‘mysterious pneumonia’, says Vernon Lee, who heads the communicable-disease response team for Singapore’s health ministry. As the first cases popped up in Singapore, doctors promptly identified and isolated those people and started contact tracing, says Lee.

The country still has under 250 COVID-19 cases, and it didn’t need to introduce the drastic movement restrictions used in China. Some events have been cancelled, people with COVID-19 are being quarantined and temperature screening and other community measures are in place, says Lee. “But life is still going on,” he says.

The impact of school closures in China is unknown. A preprint study of the spread of COVID-19 in Shenzhen has found that although children are just as likely to be infected as adults, it is still not clear whether children, many of whom don’t show symptoms, can transmit the virus. “This will be critical in evaluating the impact of school closures,” says Lessler, the co-author of the study.

Are COVID cases coming to an end in China?

New cases of COVID-19 have slowed dramatically in China, but some fear that once the country fully eases its control measures, the virus could start circulating again. It could even be reintroduced into China from the countries now experiencing outbreaks. Because China’s measures protected so many people from infection, a large pool of people have no immunity against the virus, says Leung.

China is suppressing the virus, not eradicating it, says Osterholm. The world will need to wait until about eight weeks after China resumes some form of normality to know what its population-movement limitations did or didn’t accomplish, he says.

There is probably a fierce debate going on in China about when to relax the lockdown measures, says Roy Anderson, an epidemiologist at Imperial College London. He suggests there could be a second wave of new infections when they are lifted.

Lockdowns have to end at some point, and governments should remind people to maintain social distancing and good hygiene, says Anderson. “It’s our actions more than government measures that will matter,” he says.

Source: https://www.nature.com/articles/d41586-020-00741-x

Why robots will soon be picking soft fruits and salad

London (CNN Business)

It takes a certain nimbleness to pick a strawberry or a salad. While crops like wheat and potatoes have been harvested mechanically for decades, many fruits and vegetables have proved resistant to automation. They are too easily bruised, or too hard for heavy farm machinery to locate.

But recently, technological developments and advances in machine learning have led to successful trials of more sensitive and dexterous robots, which use cameras and artificial intelligence to locate ripe fruit and handle it with care and precision.
Developed by engineers at the University of Cambridge, the Vegebot is the first robot that can identify and harvest iceberg lettuce — bringing hope to farmers that one of the most demanding crops for human pickers could finally be automated.
First, a camera scans the lettuce and, with the help of a machine learning algorithm trained on more than a thousand lettuce images, decides if it is ready for harvest. Then a second camera guides the picking cage on top of the plant without crushing it. Sensors feel when it is in the right position, and compressed air drives a blade through the stalk at a high force to get a clean cut.
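The two-stage decision described above can be sketched in a few lines of Python. The `Detection` fields, the confidence threshold, and the action names are illustrative assumptions, not the Cambridge team's actual software:

```python
# Hypothetical sketch of the Vegebot pipeline: a vision model scores each
# lettuce head, then stage two (positioning the cage and cutting) only runs
# for heads classified as ready. All names here are assumptions.
from dataclasses import dataclass

@dataclass
class Detection:
    ripeness: float      # classifier confidence that the head is ready (0..1)
    diseased: bool       # flagged by the same vision model

def harvest_decision(d: Detection, threshold: float = 0.9) -> str:
    """Stage 1: decide what to do with one detected lettuce head."""
    if d.diseased:
        return "skip"        # diseased heads are left in the field
    if d.ripeness >= threshold:
        return "harvest"     # stage 2 would guide the cage and drive the blade
    return "leave"           # immature: revisit on a later pass

print(harvest_decision(Detection(ripeness=0.95, diseased=False)))  # harvest
```

The point of the threshold is the trade-off the study measured: raising it reduces misclassified harvests at the cost of leaving more ripe heads behind.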

The Vegebot uses machine learning to identify ripe, immature and diseased lettuce heads

Its success rate is high, with 91% of the crop accurately classified, according to a study published in July. But the robot is still much slower than humans, taking 31 seconds on average to pick one lettuce. Researchers say this could easily be sped up by using lighter materials.
Such adjustments would need to be made if the robot were used commercially. “Our goal was to prove you can do it, and we’ve done it,” Simon Birrell, co-author of the study, tells CNN Business. “Now it depends on somebody taking the baton and running forward,” he says.

More mouths to feed, but less manual labor

With the world’s population expected to climb to 9.7 billion in 2050 from 7.7 billion today — meaning roughly 80 million more mouths to feed each year — agriculture is under pressure to meet rising demand for food production.
Added pressures from climate change, such as extreme weather, shrinking agricultural lands and the depletion of natural resources, make innovation and efficiency all the more urgent.
This is one reason behind the industry’s drive to develop robotics. The global market for agricultural drones and robots is projected to grow from $2.5 billion in 2018 to $23 billion in 2028, according to a report from market intelligence firm BIS Research.
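As a quick sanity check (my arithmetic, not part of the BIS report), growing from $2.5 billion to $23 billion over those ten years implies a compound annual growth rate of roughly 25 percent:

```python
# Implied CAGR for a market growing from $2.5B (2018) to $23B (2028).
start, end, years = 2.5, 23.0, 10
cagr = (end / start) ** (1 / years) - 1
print(f"{cagr:.1%}")  # ~24.8%
```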
“Agriculture robots are expected to have a higher operating speed and accuracy than traditional agriculture machinery, which shall lead to significant improvements in production efficiency,” Rakhi Tanwar, principal analyst of BIS Research, tells CNN Business.

Fruit picking robots like this one, developed by Fieldwork Robotics, operate for more than 20 hours a day

On top of this, growers are facing a long-term labor shortage. According to the World Bank, the share of total employment in agriculture in the world has declined from 43% in 1991 to 28% in 2018.
Tanwar says this is partly due to a lack of interest from younger generations. “The development of robotics in agriculture could lead to a massive relief to the growers who suffer from economic losses due to labor shortage,” she says.
Robots can work all day and night, without stopping for breaks, and could be particularly useful during intense harvest periods.
„The main benefit is durability,“ says Martin Stoelen, a lecturer in robotics at the University of Plymouth and founder of Fieldwork Robotics, which has developed a raspberry-picking robot in partnership with Hall Hunter, one of the UK’s major berry growers.
Their robots, expected to go into production next year, will operate more than 20 hours a day and seven days a week during busy periods, “which human pickers obviously can’t do,” says Stoelen.

Octinion's robot picks one strawberry every five seconds

Sustainable farming and food waste

Robots could also lead to more sustainable farming practices. They could enable growers to use less water, less fuel, and fewer pesticides, as well as produce less waste, says Tanwar.
At the moment, a field is typically harvested once, and any unripe fruits or vegetables are left to rot. A robot, by contrast, could be trained to pick only ripe produce and, working around the clock, return to the same field multiple times to pick any stragglers.
Birrell says this will be the most important impact of robot pickers. “Right now, between a quarter and a third of food just rots in the field, and this is often because you don’t have humans ready at the right time to pick them,” he says.
A successful example of this is the strawberry-picking robot developed by Octinion, a Belgium-based engineering startup.
The robot — which launched this year and is being used by growers in the UK and the Netherlands — is mounted on a self-driving trolley to serve table top strawberry production.
It uses 3D vision to locate the ripe berry, softly grips it with a pair of plastic pincers, and — just like a human — turns it 90 degrees to snap it from the stalk, before dropping it gently into a punnet.
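That pick sequence can be written out as a short routine. The step names and the ripeness check below are assumptions for illustration, not Octinion's control code:

```python
# Hypothetical sketch of the Octinion pick sequence described above:
# locate with 3D vision, grip softly, rotate 90 degrees to snap the
# stalk (mimicking a human picker), then drop the berry into a punnet.
def pick_strawberry(berry_ripe: bool) -> list:
    """Return the ordered actions the robot would take for one berry."""
    if not berry_ripe:
        return ["leave"]             # unripe fruit stays for a later pass
    return [
        "locate_with_3d_vision",
        "grip_with_soft_pincers",
        "rotate_90_degrees",         # snaps the berry from the stalk
        "drop_into_punnet",
    ]

print(pick_strawberry(True))
```

The 90-degree twist is the design choice worth noting: snapping rather than cutting avoids bruising the fruit and leaves no stalk stub in the punnet.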
“Robotics have the potential to convert the market from (being) supply-driven to demand-driven,” says Tom Coen, CEO and founder of Octinion. “That will then help to reduce food waste and increase prices,” he adds.

Harsh conditions

One major challenge with agricultural robots is adapting them for all-weather conditions. Farm machinery tends to be heavy-duty so that it can withstand rain, snow, mud, dust and heat.
“Building robots for agriculture is very different to building it for factories,” says Birrell. “Until you’re out in the field, you don’t realize how robust it needs to be — it gets banged and crashed, you go over uneven surfaces, you get rained on, you get dust, you get lightning bolts.”
California-based Abundant Robotics has built an apple robot to endure the full range of farm conditions. It consists of an apple-sucking tube on a tractor-like contraption, which drives itself down an orchard row, while using computer vision to locate ripe fruit.
This spells the start of automation for orchard crops, says Dan Steere, CEO of Abundant Robotics. “Automation has steadily improved agricultural productivity for centuries,” he says. “[We] have missed out on much of those benefits until now.”

The Pentagon’s Push to Program Soldiers’ Brains

The military wants future super-soldiers to control robots with their thoughts.

I. Who Could Object?

“Tonight I would like to share with you an idea that I am extremely passionate about,” the young man said. His long black hair was swept back like a rock star’s, or a gangster’s. “Think about this,” he continued. “Throughout all human history, the way that we have expressed our intent, the way we have expressed our goals, the way we have expressed our desires, has been limited by our bodies.” When he inhaled, his rib cage expanded and filled out the fabric of his shirt. Gesturing toward his body, he said, “We are born into this world with this. Whatever nature or luck has given us.”

His speech then took a turn: “Now, we’ve had a lot of interesting tools over the years, but fundamentally the way that we work with those tools is through our bodies.” Then a further turn: “Here’s a situation that I know all of you know very well—your frustration with your smartphones, right? This is another tool, right? And we are still communicating with these tools through our bodies.”

And then it made a leap: “I would claim to you that these tools are not so smart. And maybe one of the reasons why they’re not so smart is because they’re not connected to our brains. Maybe if we could hook those devices into our brains, they could have some idea of what our goals are, what our intent is, and what our frustration is.”

So began “Beyond Bionics,” a talk by Justin C. Sanchez, then an associate professor of biomedical engineering and neuroscience at the University of Miami, and a faculty member of the Miami Project to Cure Paralysis. He was speaking at a tedx conference in Florida in 2012. What lies beyond bionics? Sanchez described his work as trying to “understand the neural code,” which would involve putting “very fine microwire electrodes”—the diameter of a human hair—“into the brain.” When we do that, he said, we would be able to “listen in to the music of the brain” and “listen in to what somebody’s motor intent might be” and get a glimpse of “your goals and your rewards” and then “start to understand how the brain encodes behavior.”

He explained, “With all of this knowledge, what we’re trying to do is build new medical devices, new implantable chips for the body that can be encoded or programmed with all of these different aspects. Now, you may be wondering, what are we going to do with those chips? Well, the first recipients of these kinds of technologies will be the paralyzed. It would make me so happy by the end of my career if I could help get somebody out of their wheelchair.”

Sanchez went on, “The people that we are trying to help should never be imprisoned by their bodies. And today we can design technologies that can help liberate them from that. I’m truly inspired by that. It drives me every day when I wake up and get out of bed. Thank you so much.” He blew a kiss to the audience.

A year later, Justin Sanchez went to work for the Defense Advanced Research Projects Agency, the Pentagon’s R&D department. At darpa, he now oversees all research on the healing and enhancement of the human mind and body. And his ambition involves more than helping get disabled people out of their wheelchair—much more.

DARPA has dreamed for decades of merging human beings and machines. Some years ago, when the prospect of mind-controlled weapons became a public-relations liability for the agency, officials resorted to characteristic ingenuity. They recast the stated purpose of their neurotechnology research to focus ostensibly on the narrow goal of healing injury and curing illness. The work wasn’t about weaponry or warfare, agency officials claimed. It was about therapy and health care. Who could object? But even if this claim were true, such changes would have extensive ethical, social, and metaphysical implications. Within decades, neurotechnology could cause social disruption on a scale that would make smartphones and the internet look like gentle ripples on the pond of history.

Most unsettling, neurotechnology confounds age-old answers to this question: What is a human being?

II. High Risk, High Reward

In his 1958 State of the Union address, President Dwight Eisenhower declared that the United States of America “must be forward-looking in our research and development to anticipate the unimagined weapons of the future.” A few weeks later, his administration created the Advanced Research Projects Agency, a bureaucratically independent body that reported to the secretary of defense. This move had been prompted by the Soviet launch of the Sputnik satellite. The agency’s original remit was to hasten America’s entry into space.

During the next few years, arpa’s mission grew to encompass research into “man-computer symbiosis” and a classified program of experiments in mind control that was code-named Project Pandora. There were bizarre efforts that involved trying to move objects at a distance by means of thought alone. In 1972, with an increment of candor, the word Defense was added to the name, and the agency became darpa. Pursuing its mission, darpa funded researchers who helped invent technologies that changed the nature of battle (stealth aircraft, drones) and shaped daily life for billions (voice-recognition technology, GPS devices). Its best-known creation is the internet.

The agency’s penchant for what it calls “high-risk, high-reward” research ensured that it would also fund a cavalcade of folly. Project Seesaw, a quintessential Cold War boondoggle, envisioned a “particle-beam weapon” that could be deployed in the event of a Soviet attack. The idea was to set off a series of nuclear explosions beneath the Great Lakes, creating a giant underground chamber. Then the lakes would be drained, in a period of 15 minutes, to generate the electricity needed to set off a particle beam. The beam would accelerate through tunnels hundreds of miles long (also carved out by underground nuclear explosions) in order to muster enough force to shoot up into the atmosphere and knock incoming Soviet missiles out of the sky. During the Vietnam War, darpa tried to build a Cybernetic Anthropomorphous Machine, a jungle vehicle that officials called a “mechanical elephant.”

The diverse and sometimes even opposing goals of darpa scientists and their Defense Department overlords merged into a murky, symbiotic research culture—“unencumbered by the typical bureaucratic oversight and uninhibited by the restraints of scientific peer review,” Sharon Weinberger wrote in a recent book, The Imagineers of War. In Weinberger’s account, darpa’s institutional history involves many episodes of introducing a new technology in the context of one appealing application, while hiding other genuine but more troubling motives. At darpa, the left hand knows, and doesn’t know, what the right hand is doing.

The agency is deceptively compact. A mere 220 employees, supported by about 1,000 contractors, report for work each day at darpa’s headquarters, a nondescript glass-and-steel building in Arlington, Virginia, across the street from the practice rink for the Washington Capitals. About 100 of these employees are program managers—scientists and engineers, part of whose job is to oversee about 2,000 outsourcing arrangements with corporations, universities, and government labs. The effective workforce of darpa actually runs into the range of tens of thousands. The budget is officially said to be about $3 billion, and has stood at roughly that level for an implausibly long time—the past 14 years.

The Biological Technologies Office, created in 2014, is the newest of darpa’s six main divisions. This is the office headed by Justin Sanchez. One purpose of the office is to “restore and maintain warfighter abilities” by various means, including many that emphasize neurotechnology—applying engineering principles to the biology of the nervous system. For instance, the Restoring Active Memory program develops neuroprosthetics—tiny electronic components implanted in brain tissue—that aim to alter memory formation so as to counteract traumatic brain injury. Does darpa also run secret biological programs? In the past, the Department of Defense has done such things. It has conducted tests on human subjects that were questionable, unethical, or, many have argued, illegal. The Big Boy protocol, for example, compared radiation exposure of sailors who worked above and below deck on a battleship, never informing the sailors that they were part of an experiment.

Eddie Guy
Last year I asked Sanchez directly whether any of darpa’s neurotechnology work, specifically, was classified. He broke eye contact and said, “I can’t—We’ll have to get off that topic, because I can’t answer one way or another.” When I framed the question personally—“Are you involved with any classified neuroscience project?”—he looked me in the eye and said, “I’m not doing any classified work on the neurotechnology end.”

If his speech is careful, it is not spare. Sanchez has appeared at public events with some frequency (videos are posted on darpa’s YouTube channel), to articulate joyful streams of good news about darpa’s proven applications—for instance, brain-controlled prosthetic arms for soldiers who have lost limbs. Occasionally he also mentions some of his more distant aspirations. One of them is the ability, via computer, to transfer knowledge and thoughts from one person’s mind to another’s.

III. “We Try to Find Ways to Say Yes”

Medicine and biology were of minor interest to darpa until the 1990s, when biological weapons became a threat to U.S. national security. The agency made a significant investment in biology in 1997, when darpa created the Controlled Biological Systems program. The zoologist Alan S. Rudolph managed this sprawling effort to integrate the built world with the natural world. As he explained it to me, the aim was “to increase, if you will, the baud rate, or the cross-communication, between living and nonliving systems.” He spent his days working through questions such as “Could we unlock the signals in the brain associated with movement in order to allow you to control something outside your body, like a prosthetic leg or an arm, a robot, a smart home—or to send the signal to somebody else and have them receive it?”

Human enhancement became an agency priority. “Soldiers having no physical, physiological, or cognitive limitation will be key to survival and operational dominance in the future,” predicted Michael Goldblatt, who had been the science and technology officer at McDonald’s before joining darpa in 1999. To enlarge humanity’s capacity to “control evolution,” he assembled a portfolio of programs with names that sounded like they’d been taken from video games or sci-fi movies: Metabolic Dominance, Persistence in Combat, Continuous Assisted Performance, Augmented Cognition, Peak Soldier Performance, Brain-Machine Interface.

The programs of this era, as described by Annie Jacobsen in her 2015 book, The Pentagon’s Brain, often shaded into mad-scientist territory. The Continuous Assisted Performance project attempted to create a “24/7 soldier” who could go without sleep for up to a week. (“My measure of success,” one darpa official said of these programs, “is that the International Olympic Committee bans everything we do.”)

Dick Cheney relished this kind of research. In the summer of 2001, an array of “super-soldier” programs was presented to the vice president. His enthusiasm contributed to the latitude that President George W. Bush’s administration gave darpa—at a time when the agency’s foundation was shifting. Academic science gave way to tech-industry “innovation.” Tony Tether, who had spent his career working alternately for Big Tech, defense contractors, and the Pentagon, became darpa’s director. After the 9/11 attacks, the agency announced plans for a surveillance program called Total Information Awareness, whose logo included an all-seeing eye emitting rays of light that scanned the globe. The pushback was intense, and Congress took darpa to task for Orwellian overreach. The head of the program—Admiral John Poindexter, who had been tainted by scandal back in the Reagan years—later resigned, in 2003. The controversy also drew unwanted attention to darpa’s research on super-soldiers and the melding of mind and machine. That research made people nervous, and Alan Rudolph, too, found himself on the way out.

In this time of crisis, darpa invited Geoff Ling, a neurology‑ICU physician and, at the time, an active-duty Army officer, to join the Defense Sciences Office. (Ling went on to work in the Biological Technologies Office when it spun out from Defense Sciences, in 2014.) When Ling was interviewed for his first job at darpa, in 2002, he was preparing for deployment to Afghanistan and thinking about very specific combat needs. One was a “pharmacy on demand” that would eliminate the bulk of powdery fillers from drugs in pill or capsule form and instead would formulate active ingredients for ingestion via a lighter, more compact, dissolving substance—like Listerine breath strips. This eventually became a darpa program. The agency’s brazen sense of possibility buoyed Ling, who recalls with pleasure how colleagues told him, “We try to find ways to say yes, not ways to say no.” With Rudolph gone, Ling picked up the torch.

Ling talks fast. He has a tough-guy voice. The faster he talks, the tougher he sounds, and when I met him, his voice hit top speed as he described a first principle of Defense Sciences. He said he had learned this “particularly” from Alan Rudolph: “Your brain tells your hands what to do. Your hands basically are its tools, okay? And that was a revelation to me.” He continued, “We are tool users—that’s what humans are. A human wants to fly, he builds an airplane and flies. A human wants to have recorded history, and he creates a pen. Everything we do is because we use tools, right? And the ultimate tools are our hands and feet. Our hands allow us to work with the environment to do stuff, and our feet take us where our brain wants to go. The brain is the most important thing.”

Ling connected this idea of the brain’s primacy with his own clinical experience of the battlefield. He asked himself, “How can I liberate mankind from the limitations of the body?” The program for which Ling became best known is called Revolutionizing Prosthetics. Since the Civil War, as Ling has said, the prosthetic arm given to most amputees has been barely more sophisticated than “a hook,” and not without risks: “Try taking care of your morning ablutions with that bad boy, and you’re going to need a proctologist every goddamn day.” With help from darpa colleagues and academic and corporate researchers, Ling and his team built something that was once all but unimaginable: a brain-controlled prosthetic arm.

No invention since the internet has been such a reliable source of good publicity for darpa. Milestones in its development were hailed with wonder. In 2012, 60 Minutes showed a paralyzed woman named Jan Scheuermann feeding herself a bar of chocolate using a robotic arm that she manipulated by means of a brain implant.

Yet darpa’s work to repair damaged bodies was merely a marker on a road to somewhere else. The agency has always had a larger mission, and in a 2015 presentation, one program manager—a Silicon Valley recruit—described that mission: to “free the mind from the limitations of even healthy bodies.” What the agency learns from healing makes way for enhancement. The mission is to make human beings something other than what we are, with powers beyond the ones we’re born with and beyond the ones we can organically attain.

The internal workings of darpa are complicated. The goals and values of its research shift and evolve in the manner of a strange, half-conscious shell game. The line between healing and enhancement blurs. And no one should lose sight of the fact that D is the first letter in darpa’s name. A year and a half after the video of Jan Scheuermann feeding herself chocolate was shown on television, darpa made another video of her, in which her brain-computer interface was connected to an F-35 flight simulator, and she was flying the airplane. darpa later disclosed this at a conference called Future of War.

Geoff Ling’s efforts have been carried on by Justin Sanchez. In 2016, Sanchez appeared at darpa’s “Demo Day” with a man named Johnny Matheny, whom agency officials describe as the first “osseointegrated” upper-limb amputee—the first man with a prosthetic arm attached directly to bone. Matheny demonstrated what was, at the time, darpa’s most advanced prosthetic arm. He told the attendees, “I can sit here and curl a 45-pound dumbbell all day long, till the battery runs dead.” The next day, Gizmodo ran this headline above its report from the event: “darpa’s Mind-Controlled Arm Will Make You Wish You Were a Cyborg.”

Since then, darpa’s work in neurotechnology has avowedly widened in scope, to embrace “the broader aspects of life,” Sanchez told me, “beyond the person in the hospital who is using it to heal.” The logical progression of all this research is the creation of human beings who are ever more perfect, by certain technological standards. New and improved soldiers are necessary and desirable for darpa, but they are just the window-display version of the life that lies ahead.

IV. “Over the Horizon”

Consider memory, Sanchez told me: “Everybody thinks about what it would be like to give memory a boost by 20, 30, 40 percent—pick your favorite number—and how that would be transformative.” He spoke of memory enhancement through neural interface as an alternative form of education. “School in its most fundamental form is a technology that we have developed as a society to help our brains to do more,” he said. “In a different way, neurotechnology uses other tools and techniques to help our brains be the best that they can be.” One technique was described in a 2013 paper, a study involving researchers at Wake Forest University, the University of Southern California, and the University of Kentucky. Researchers performed surgery on 11 rats. Into each rat’s brain, an electronic array—featuring 16 stainless-steel wires—was implanted. After the rats recovered from surgery, they were separated into two groups, and they spent a period of weeks getting educated, though one group was educated more than the other.

The less educated group learned a simple task, involving how to procure a droplet of water. The more educated group learned a complex version of that same task—to procure the water, these rats had to persistently poke levers with their nose despite confounding delays in the delivery of the water droplet. When the more educated group of rats attained mastery of this task, the researchers exported the neural-firing patterns recorded in the rats’ brains—the memory of how to perform the complex task—to a computer.

“What we did then was we took those signals and we gave it to an animal that was stupid,” Geoff Ling said at a darpa event in 2015—meaning that researchers took the neural-firing patterns encoding the memory of how to perform the more complex task, recorded from the brains of the more educated rats, and transferred those patterns into the brains of the less educated rats—“and that stupid animal got it. They were able to execute that full thing.” Ling summarized: “For this rat, we reduced the learning period from eight weeks down to seconds.”

“They could inject memory using the precise neural codes for certain skills,” Sanchez told me. He believes that the Wake Forest experiment amounts to a foundational step toward “memory prosthesis.” This is the stuff of The Matrix. Though many researchers question the findings—cautioning that, really, it can’t be this simple—Sanchez is confident: “If I know the neural codes in one individual, could I give that neural code to another person? I think you could.” Under Sanchez, darpa has funded human experiments at Wake Forest, the University of Southern California, and the University of Pennsylvania, using similar mechanisms in analogous parts of the brain. These experiments did not transfer memory from one person to another, but instead gave individuals a memory “boost.” Implanted electrodes recorded neuronal activity associated with recognizing patterns (at Wake Forest and USC) and memorizing word lists (at Penn) in certain brain circuits. Then electrodes fed back those recordings of neuronal activity into the same circuits as a form of reinforcement. The result, in both cases, was significantly improved memory recall.

Doug Weber, a neural engineer at the University of Pittsburgh who recently finished a four-year term as a darpa program manager, working with Sanchez, is a memory-transfer skeptic. Born in Wisconsin, he has the demeanor of a sitcom dad: not too polished, not too rumpled. “I don’t believe in the infinite limits of technology evolution,” he told me. “I do believe there are going to be some technical challenges which are impossible to achieve.” For instance, when scientists put electrodes in the brain, those devices eventually fail—after a few months or a few years. The most intractable problem is blood leakage. When foreign material is put into the brain, Weber said, “you undergo this process of wounding, bleeding, healing, wounding, bleeding, healing, and whenever blood leaks into the brain compartment, the activity in the cells goes way down, so they become sick, essentially.” More effectively than any fortress, the brain rejects invasion.

Even if the interface problems that limit us now didn’t exist, Weber went on to say, he still would not believe that neuroscientists could enable the memory-prosthesis scenario. Some people like to think about the brain as if it were a computer, Weber explained, “where information goes from A to B to C, like everything is very modular. And certainly there is clear modular organization in the brain. But it’s not nearly as sharp as it is in a computer. All information is everywhere all the time, right? It’s so widely distributed that achieving that level of integration with the brain is far out of reach right now.”

Peripheral nerves, by contrast, conduct signals in a more modular fashion. The biggest, longest peripheral nerve is the vagus. It connects the brain with the heart, the lungs, the digestive tract, and more. Neuroscientists understand the brain’s relationship with the vagus nerve more clearly than they understand the intricacies of memory formation and recall among neurons within the brain. Weber believes that it may be possible to stimulate the vagus nerve in ways that enhance the process of learning—not by transferring experiential memories, but by sharpening the facility for certain skills.

To test this hypothesis, Weber directed the creation of a new program in the Biological Technologies Office, called Targeted Neuroplasticity Training (TNT). Teams of researchers at seven universities are investigating whether vagal-nerve stimulation can enhance learning in three areas: marksmanship, surveillance and reconnaissance, and language. The team at Arizona State has an ethicist on staff whose job, according to Weber, “is to be looking over the horizon to anticipate potential challenges and conflicts that may arise” regarding the ethical dimensions of the program’s technology, “before we let the genie out of the bottle.” At a TNT kickoff meeting, the research teams spent 90 minutes discussing the ethical questions involved in their work—the start of a fraught conversation that will broaden to include many others, and last for a very long time.

DARPA officials refer to the potential consequences of neurotechnology by invoking the acronym elsi, a term of art devised for the Human Genome Project. It stands for “ethical, legal, social implications.” The man who led the discussion on ethics among the research teams was Steven Hyman, a neuroscientist and neuroethicist at MIT and Harvard’s Broad Institute. Hyman is also a former head of the National Institute of Mental Health. When I spoke with him about his work on darpa programs, he noted that one issue needing attention is “cross talk.” A man-machine interface that does not just “read” someone’s brain but also “writes into” someone’s brain would almost certainly create “cross talk between those circuits which we are targeting and the circuits which are engaged in what we might call social and moral emotions,” he said. It is impossible to predict the effects of such cross talk on “the conduct of war” (the example he gave), much less, of course, on ordinary life.

Weber and a darpa spokesperson related some of the questions the researchers asked in their ethics discussion: Who will decide how this technology gets used? Would a superior be able to force subordinates to use it? Will genetic tests be able to determine how responsive someone would be to targeted neuroplasticity training? Would such tests be voluntary or mandatory? Could the results of such tests lead to discrimination in school admissions or employment? What if the technology affects moral or emotional cognition—our ability to tell right from wrong or to control our own behavior?

Recalling the ethics discussion, Weber told me, “The main thing I remember is that we ran out of time.”

V. “You Can Weaponize Anything”

In The Pentagon’s Brain, Annie Jacobsen suggested that darpa’s neurotechnology research, including upper-limb prosthetics and the brain-machine interface, is not what it seems: “It is likely that darpa’s primary goal in advancing prosthetics is to give robots, not men, better arms and hands.” Geoff Ling rejected the gist of her conclusion when I summarized it for him (he hadn’t read the book). He told me, “When we talk about stuff like this, and people are looking for nefarious things, I always say to them, ‘Do you honestly believe that the military that your grandfather served in, your uncle served in, has changed into being Nazis or the Russian army?’ Everything we did in the Revolutionizing Prosthetics program—everything we did—is published. If we were really building an autonomous-weapons system, why would we publish it in the open literature for our adversaries to read? We hid nothing. We hid not a thing. And you know what? That meant that we didn’t just do it for America. We did it for the world.”

I started to say that publishing this research would not prevent its being misused. But the terms use and misuse overlook a bigger issue at the core of any meaningful neurotechnology-ethics discussion. Will an enhanced human being—a human being possessing a neural interface with a computer—still be human, as people have experienced humanity through all of time? Or will such a person be a different sort of creature?

Illustration: Eddie Guy
The U.S. government has put limits on darpa’s power to experiment with enhancing human capabilities. Ling says colleagues told him of a “directive”: “Congress was very specific,” he said. “They don’t want us to build a superperson.” This can’t be the announced goal, Congress seems to be saying, but if we get there by accident—well, that’s another story. Ling’s imagination remains at large. He told me, “If I gave you a third eye, and the eye can see in the ultraviolet, that would be incorporated into everything that you do. If I gave you a third ear that could hear at a very high frequency, like a bat or like a snake, then you would incorporate all those senses into your experience and you would use that to your advantage. If you can see at night, you’re better than the person who can’t see at night.”

Enhancing the senses to gain superior advantage—this language suggests weaponry. Such capacities could certainly have military applications, Ling acknowledged—“You can weaponize anything, right?”—before he dismissed the idea and returned to the party line: “No, actually, this has to do with increasing a human’s capability” in a way that he compared to military training and civilian education, and justified in economic terms.

“Let’s say I gave you a third arm,” and then a fourth arm—so, two additional hands, he said. “You would be more capable; you would do more things, right?” And if you could control four hands as seamlessly as you’re controlling your current two hands, he continued, “you would actually be doing double the amount of work that you would normally do. It’s as simple as that. You’re increasing your productivity to do whatever you want to do.” I started to picture his vision—working with four arms, four hands—and asked, “Where does it end?”

“It won’t ever end,” Ling said. “I mean, it will constantly get better and better—” His cellphone rang. He took the call, then resumed where he had left off: “What darpa does is we provide a fundamental tool so that other people can take those tools and do great things with them that we’re not even thinking about.”

Judging by what he said next, however, the number of things that darpa is thinking about far exceeds what it typically talks about in public. “If a brain can control a robot that looks like a hand,” Ling said, “why can’t it control a robot that looks like a snake? Why can’t that brain control a robot that looks like a big mass of Jell-O, able to get around corners and up and down and through things? I mean, somebody will find an application for that. They couldn’t do it now, because they can’t become that glob, right? But in my world, with their brain now having a direct interface with that glob, that glob is the embodiment of them. So now they’re basically the glob, and they can go do everything a glob can do.”

VI. Gold Rush

darpa’s developing capabilities still hover at or near a proof-of-concept stage. But that’s close enough to have drawn investment from some of the world’s richest corporations. In 1990, during the administration of President George H. W. Bush, darpa Director Craig I. Fields lost his job because, according to contemporary news accounts, he intentionally fostered business development with some Silicon Valley companies, and White House officials deemed that inappropriate. Since the administration of the second President Bush, however, such sensitivities have faded.

Over time, darpa has become something of a farm team for Silicon Valley. Regina Dugan, who was appointed darpa director by President Barack Obama, went on to head Google’s Advanced Technology and Projects group, and other former darpa officials went to work for her there. She then led R&D for the analogous group at Facebook, called Building 8. (She has since left Facebook.)

darpa’s neurotechnology research has been affected in recent years by corporate poaching. Doug Weber told me that some darpa researchers have been “scooped up” by companies including Verily, the life-sciences division of Alphabet (the parent company of Google), which, in partnership with the British pharmaceutical conglomerate GlaxoSmithKline, created a company called Galvani Bioelectronics to bring neuromodulation devices to market. Galvani calls its business “bioelectric medicine,” which conveys an aura of warmth and trustworthiness. Ted Berger, a University of Southern California biomedical engineer who collaborated with the Wake Forest researchers on their studies of memory transfer in rats, worked as the chief science officer at the neurotechnology company Kernel, which plans to build “advanced neural interfaces to treat disease and dysfunction, illuminate the mechanisms of intelligence, and extend cognition.” Elon Musk has courted darpa researchers to join his company Neuralink, which is said to be developing an interface known as “neural lace.” Facebook’s Building 8 is working on a neural interface too. In 2017, Regina Dugan said that 60 engineers were at work on a system with the goal of allowing users to type 100 words a minute “directly from your brain.” Geoff Ling is on Building 8’s advisory board.

Talking with Justin Sanchez, I speculated that if he realizes his ambitions, he could change daily life in even more fundamental and lasting ways than Facebook’s Mark Zuckerberg and Twitter’s Jack Dorsey have. Sanchez blushes easily, and he breaks eye contact when he is uncomfortable, but he did not look away when he heard his name mentioned in such company. Remembering a remark that he had once made about his hope for neurotechnology’s wide adoption, but with “appropriate checks to make sure that it’s done in the right way,” I asked him to talk about what the right way might look like. Did any member of Congress strike him as having good ideas about legal or regulatory structures that might shape an emerging neural-interface industry? He demurred (“darpa’s mission isn’t to define or even direct those things”) and suggested that, in reality, market forces would do more to shape the evolution of neurotechnology than laws or regulations or deliberate policy choices. What will happen, he said, is that scientists at universities will sell their discoveries or create start-ups. The marketplace will take it from there: “As they develop their companies, and as they develop their products, they’re going to be subject to convincing people that whatever they’re developing makes sense, that it helps people to be a better version of themselves. And that process—that day-to-day development—will ultimately guide where these technologies go. I mean, I think that’s the frank reality of how it ultimately will unfold.”

He seemed entirely untroubled by what may be the most troubling aspect of darpa’s work: not that it discovers what it discovers, but that the world has, so far, always been ready to buy it.


This article appears in the November 2018 print edition with the headline “The Pentagon Wants to Weaponize the Brain. What Could Go Wrong?”

https://www.theatlantic.com/magazine/archive/2018/11/the-pentagon-wants-to-weaponize-the-brain-what-could-go-wrong/570841/

Tiny Wearable Biosensor Continuously Monitors Your Body Chemistry

Imagine if, throughout your day, you could know exactly what your body chemistry was up to. More specifically, imagine if that information could go instantly to your doctor, who could then diagnose what your body was doing or what was wrong.

It’s nearly here. Today at CES 2016, a company called Profusa demonstrated Lumee, a wearable biointegrated sensor that allows long-term, continuous monitoring of your body chemistry. The device delivers actionable data on the body’s key chemistry in a single continuous stream, a shift in how we will monitor our health.

Lumee, a biointegrated wearable sensor.

“In between annual physicals we really don’t know what’s going on in our body,” said Ben Hwang, PhD, CEO of Profusa. “While fitness trackers and other wearables provide insights into our heart rate, respiration and other physical measures, they don’t provide information on the most important aspect of our health: our body’s chemistry. What if there was a better way of knowing how you’re doing — how you’re really doing?”

According to Statista, the digital health market is expected to reach $233.3 billion by 2020, led by the mobile health segment.

Since the iPhone hit it big in 2007, consumers and physicians alike (52%) have used their smartphones to search for advice, drugs, therapies, and more, and 80% of physicians use smartphones and medical apps. With wearables, physicians can now collect long-term, specialized data that is far easier to obtain, and track patient health behaviors over longer periods of time. This has already changed our relationship with our health care providers, and their relationships with us.

“Profusa’s Lumee is a bold attempt at one of the holy grails of personalized medicine: continuous, real-time, non-invasive glucose and oxygen monitoring. Its applications are vast,” said Ryan Bethencourt, Program Director and Venture Partner at Indie.Bio, a biotech accelerator in San Francisco. “From Type 1 and Type 2 diabetes monitoring through to fitness and finding optimal training patterns for your body, with data that’s currently impossible to acquire continuously any other way. I’m rarely this optimistic about a new medical device, especially one that will require implantation approval from the FDA, but in this case I think the optical biosensor technology and device design warrant the optimism.”

This is why Profusa hopes its tiny (3–5 mm) bioengineered biosensors will enable real-time detection of the body’s unique chemistry, giving greater insight into a person’s overall health status. Dr. Hwang believes Lumee can be applied not only to consumer health and wellness but also to the management of chronic diseases like Peripheral Artery Disease (PAD), diabetes and Chronic Obstructive Pulmonary Disease (COPD).

http://www.forbes.com/sites/jenniferhicks/2016/01/07/beyond-fitness-trackers-at-ces-tiny-wearable-biosensor-continuously-monitors-your-body-chemistry/#67cc511a6019

HIV Genes Successfully Edited Out of Immune Cells

An electron micrograph of HIV particles infecting a human T cell. Image: National Institute of Allergy and Infectious Diseases

Researchers from Temple University have used the CRISPR/Cas9 gene editing tool to clear out the entire HIV-1 genome from a patient’s infected immune cells. It’s a remarkable achievement that could have profound implications for the treatment of AIDS and other retroviruses.

When we think about CRISPR/Cas9 we tend to think of it as a tool to eliminate heritable genetic diseases, or as a way to introduce new genes altogether. But as this new research shows, it also holds great promise as a means to eliminate viruses that have planted their nefarious genetic codes within host cells. This latest achievement now appears in Nature Scientific Reports.

Retroviruses, unlike regular run-of-the-mill viruses, insert copies of their genomes into host cells in order to replicate. Antiretroviral drugs have proven effective at controlling HIV after infection, but patients who stop taking these drugs suffer a quick relapse. Once treatment stops, the HIV reasserts itself, weakening the immune system, thus triggering the onset of acquired immune deficiency syndrome, or AIDS.

Over the years, scientists have struggled to remove HIV from infected CD4+ T-cells, a type of white blood cell that fights infection. Many of these “shock and kill” efforts have been unsuccessful. The recent introduction of CRISPR/Cas9 has now inspired a new approach.

Geneticist Kamel Khalili and colleagues from Temple University extracted infected T-cells from a patient. The team’s modified version of CRISPR/Cas9—which specifically targets HIV-1 DNA—did the rest. First, guide RNA methodically made its way across the entire T-cell genome searching for signs of the viral components. Once it recognized a match, a nuclease enzyme ripped out the offending strands from the T-cell DNA. Then the cell’s built-in DNA repair machinery patched up the loose ends.
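The search-cut-repair sequence described above can be sketched as a toy simulation. This is a deliberate simplification for illustration only: the function name and the DNA strings are invented, and real Cas9 cleaves double-stranded DNA at guide-matched sites rather than doing string matching. Still, it captures the logic of finding a target sequence, excising it, and rejoining the flanking host DNA.

```python
def excise(host_dna: str, target: str) -> str:
    """Remove every occurrence of `target` from `host_dna` and rejoin
    the flanking sequence, mimicking guide-RNA matching followed by
    nuclease cutting and end-joining repair."""
    edited = host_dna
    # Rescan after each "repair": joining the cut ends can itself
    # create a fresh match, which a single pass would miss.
    while target and target in edited:
        i = edited.index(target)            # guide RNA finds a match
        edited = edited[:i] + edited[i + len(target):]  # cut and rejoin
    return edited

genome = "ATGCCGTTAAGCGTACGATTAAGCGTC"   # hypothetical host DNA
viral = "TTAAGCG"                        # hypothetical proviral signature
print(excise(genome, viral))             # prints "ATGCCGTACGATC"
```

One design note: the loop rescans the edited string rather than walking it once, because removing a segment can bring two half-matches together; that mirrors why the Temple team had to verify that edited cells carried no residual viral fragments.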

Not only did this remove the viral DNA, it did so permanently. What’s more, because this microscopic genetic system remained within the cell, it staved off further infections when particles of HIV-1 tried to sneak their way back in from unedited cells.

The study was performed on T-cells in a petri dish, but the technique successfully lowered the viral load in the patient’s extracted cells, which strongly suggests it could be used as a treatment. However, it could be years before we see that happen. Encouragingly, the researchers ruled out off-target effects (i.e., unanticipated side effects of gene editing) and potential toxicity, and demonstrated that the HIV-1-eradicated cells were growing and functioning normally.

These findings “demonstrate the effectiveness of our gene editing system in eliminating HIV from the DNA of CD4 T-cells and, by introducing mutations into the viral genome, permanently inactivating its replication,” Khalili said in a statement. “Further, they show that the system can protect cells from reinfection and that the technology is safe for the cells, with no toxic effects.”

This technique for snipping out alien DNA could have implications for related research, including treatments for retroviruses that cause cancer and leukemia, and the suite of retroviruses currently affecting companion and farm animals. As noted by Excision BioTherapeutics’ CEO and President Thomas Malcolm, “These exciting results also reflect our ability to select viral gene targets for safe eradication of any viral genome in our current pipeline of gene editing therapeutics.”

And Malcolm has good reason to be excited: his company holds exclusive rights to commercialize this technology.

http://gizmodo.com/hiv-successfully-edited-out-of-immune-cells-1766413957?rev=1458672609052