Archiv der Kategorie: Security

Anyone Can Buy Data Tracking US Soldiers and Spies to Nuclear Vaults and Brothels in Germany

Source: https://www.wired.com/story/phone-data-us-soldiers-spies-nuclear-germany/

by Dhruv Mehrotra and Dell Cameron

Nearly every weekday morning, a device leaves a two-story home near Wiesbaden, Germany, and makes a 15-minute commute along a major autobahn. By around 7 am, it arrives at Lucius D. Clay Kaserne—the US Army’s European headquarters and a key hub for US intelligence operations.

The device stops near a restaurant before heading to an office near the base that belongs to a major government contractor responsible for outfitting and securing some of the nation’s most sensitive facilities.

For roughly two months in 2023, this device followed a predictable routine: stops at the contractor’s office, visits to a discreet hangar on base, and lunchtime trips to the base’s dining facility. Twice in November of last year, it made a 30-minute drive to the Dagger Complex, a former intelligence and NSA signals processing facility. On weekends, the device could be traced to restaurants and shops in Wiesbaden.

The individual carrying this device likely isn’t a spy or high-ranking intelligence official. Instead, experts believe, they’re a contractor who works on critical systems—HVAC, computing infrastructure, or possibly securing the newly built Consolidated Intelligence Center, a state-of-the-art facility suspected to be used by the National Security Agency.

Whoever they are, the device they’re carrying with them everywhere is putting US national security at risk.

A joint investigation by WIRED, Bayerischer Rundfunk (BR), and Netzpolitik.org reveals that US companies legally collecting digital advertising data are also providing the world a cheap and reliable way to track the movements of American military and intelligence personnel overseas, from their homes and their children’s schools to hardened aircraft shelters within an airbase where US nuclear weapons are believed to be stored.

A collaborative analysis of billions of location coordinates obtained from a US-based data broker provides extraordinary insight into the daily routines of US service members. The findings also provide a vivid example of the significant risks the unregulated sale of mobile location data poses to the integrity of the US military and the safety of its service members and their families overseas.

We tracked hundreds of thousands of signals from devices inside sensitive US installations in Germany. That includes scores of devices within suspected NSA monitoring or signals-analysis facilities, more than a thousand devices at a sprawling US compound where Ukrainian troops were being trained in 2023, and nearly 2,000 others at an air force base that has crucially supported American drone operations.

A device likely tied to an NSA or intelligence employee broadcast coordinates from inside a windowless building with a metal exterior known as the “Tin Can,” which is reportedly used for NSA surveillance, according to agency documents leaked by Edward Snowden. Another device transmitted signals from within a restricted weapons testing facility, revealing its zig-zagging movements across a high-security zone used for tank maneuvers and live munitions drills.

We traced these devices from barracks to work buildings, Italian restaurants, Aldi grocery stores, and bars. As many as four devices that regularly pinged from Ramstein Air Base were later tracked to nearby brothels off base, including a multistory facility called SexWorld.

Experts caution that foreign governments could use this data to identify individuals with access to sensitive areas; terrorists or criminals could decipher when US nuclear weapons are least guarded; or spies and other nefarious actors could leverage embarrassing information for blackmail.

“The unregulated data broker industry poses a clear threat to national security,” says Ron Wyden, a US senator from Oregon with more than 20 years overseeing intelligence work. “It is outrageous that American data brokers are selling location data collected from thousands of brave members of the armed forces who serve in harm’s way around the world.”

Wyden approached the US Defense Department in September after initial reporting by BR and Netzpolitik.org raised concerns about the tracking of potential US service members. The DOD failed to respond. Likewise, Wyden’s office has yet to hear back from members of US president Joe Biden’s National Security Council, despite repeated inquiries. The NSC did not immediately respond to a request for comment.

“There is ample blame to go around,” says Wyden, “but unless the incoming administration and Congress act, these kinds of abuses will keep happening, and they’ll cost service members’ lives.”

The Oregon senator also raised the issue earlier this year with the Federal Trade Commission, following an FTC order that imposed unprecedented restrictions against a US company it accused of gathering data around “sensitive locations.” Douglas Farrar, the FTC’s director of public affairs, declined a request to comment.

WIRED can now exclusively report, however, that the FTC is on the verge of fulfilling Wyden’s request. An FTC source, granted anonymity to discuss internal matters, says the agency is planning to file multiple lawsuits soon that will formally recognize US military installations as protected sites. The source adds that the lawsuits are in keeping with years’ worth of work by FTC Chair Lina Khan aimed at shielding US consumers—including service members—from harmful surveillance practices.

Before a targeted ad appears on an app or website, third-party software embedded in apps, known as software development kits (SDKs), transmits information about users to data brokers, real-time bidding platforms, and ad exchanges—often including location data. Data brokers then collect that data, analyze it, repackage it, and sell it.
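
To make the pipeline concrete, here is a minimal sketch of the kind of record an embedded SDK might emit during ad bidding. All field names and values are illustrative assumptions, not the schema of any specific SDK or exchange:

    # Hypothetical fragment of an ad-bidding record; no specific SDK's
    # schema is implied. Note that a precise location travels alongside
    # a persistent mobile advertising ID.
    bid_request_fragment = {
        "maid": "38400000-8cf0-11bd-b23e-10b96e40000d",  # mobile advertising ID
        "geo": {
            "lat": 50.0496,   # device latitude
            "lon": 8.3268,    # device longitude
            "accuracy_m": 5,  # reported accuracy in meters
            "ts": "2023-11-02T06:58:41Z",
        },
        "app_bundle": "com.example.weatherapp",  # app embedding the SDK
        "device": {"os": "Android", "model": "Pixel 7"},
    }

Records like this, emitted many times a day per device, are what brokers aggregate into the movement histories described below.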

In February of 2024, reporters from BR and Netzpolitik.org obtained a free sample of this kind of data from Datastream Group, a Florida-based data broker. The dataset contains 3.6 billion coordinates—some recorded at millisecond intervals—from up to 11 million mobile advertising IDs in Germany over what the company says is a 59-day span from October through December 2023.

Mobile advertising IDs are unique identifiers used by the advertising industry to serve personalized ads to smartphones. These strings of letters and numbers allow companies to track user behavior and target ads effectively. However, mobile advertising IDs can also reveal much more sensitive information, particularly when combined with precise geolocation data.

In total, our analysis revealed granular location data from up to 12,313 devices that appeared to spend time at or near at least 11 military and intelligence sites, potentially exposing crucial details like entry points, security practices, and guard schedules—information that, in the hands of hostile foreign governments or terrorists, could be deadly.
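
The core of such an analysis is a geofence query: testing each recorded ping against a site’s boundary and counting the distinct advertising IDs seen inside it. A minimal sketch in Python, assuming the third-party shapely library; the boundary and pings below are hypothetical placeholders, not the reporting collective’s actual pipeline:

    # Count distinct ad IDs whose pings fall inside a site boundary.
    # All coordinates are hypothetical placeholders.
    from shapely.geometry import Point, Polygon

    # Simplified site boundary as (longitude, latitude) corners.
    site = Polygon([(8.32, 50.04), (8.34, 50.04), (8.34, 50.06), (8.32, 50.06)])

    pings = [  # (mobile ad ID, longitude, latitude)
        ("maid-001", 8.3268, 50.0496),
        ("maid-002", 8.3900, 50.0500),
    ]

    devices_on_site = {maid for maid, lon, lat in pings
                       if site.contains(Point(lon, lat))}
    print(len(devices_on_site), "distinct device(s) inside the geofence")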

Our investigation uncovered 38,474 location signals from up to 189 devices inside Büchel Air Base, a high-security German installation where as many as 15 US nuclear weapons are reportedly stored in underground bunkers. At Grafenwöhr Training Area, where thousands of US troops are stationed and have trained Ukrainian soldiers on Abrams tanks, we tracked 191,415 signals from up to 1,257 devices.

In Wiesbaden, home to the US Army’s European headquarters at Lucius D. Clay Kaserne, 74,968 location signals from as many as 799 devices were detected—some originating from sensitive intelligence facilities like the European Technical Center, once the NSA’s communication hub in Europe, and newly built intelligence operations centers.

At Ramstein Air Base, which supports some US drone operations, 164,223 signals from nearly 2,000 devices were tracked. That included devices tracked to Ramstein Elementary and High School, base schools for the children of military personnel.

Of these devices, 1,326 appeared at more than one of these highly sensitive military sites, potentially mapping the movements of US service members across Europe’s most secure locations.

The data is not infallible. Mobile ad IDs can be reset, meaning multiple IDs can be assigned to the same device. Our analysis found that, in some instances, devices were assigned more than 10 mobile ad IDs.

The location data’s precision at the individual device level can also be inconsistent. By contacting several people whose movements were revealed in the dataset, the reporting collective confirmed that much of the data was highly accurate—identifying work commutes and dog walks of individuals contacted. However, this wasn’t always the case. One reporter whose ID appears in the dataset found that it often placed him a block away from his apartment and during times when he was out of town. A study from the NATO Strategic Communications Centre of Excellence found that “quantity overshadows quality” in the data broker industry and that, on average, at most 60 percent of the data surveyed can be considered precise.

According to its website, Datastream Group appears to offer “internet advertising data coupled with hashed emails, cookies, and mobile location data.” Its listed datasets include niche categories like boat owners, mortgage seekers, and cigarette smokers. The company, one of many in a multibillion-dollar location-data industry, did not respond to our request for comment about the data it provided on US military and intelligence personnel in Germany, where the US maintains a force of at least 35,000 troops, according to the most recent estimates.

Defense Department officials have known about the threat that commercial data brokers pose to national security since at least 2016, when Mike Yeagley, a government contractor and technologist, delivered a briefing to senior military officials at the Joint Special Operations Command compound in Fort Liberty (formerly Fort Bragg), North Carolina, about the issue. Yeagley’s presentation aimed to show how commercially available mobile data—already pervasive in conflict zones like Syria—could be weaponized for pattern of life analysis.

Midway through the presentation, Yeagley decided to raise the stakes. “Well, here’s the behavior of an ISIS operator,” he tells WIRED, recalling his presentation. “Let me turn the mirror around—let me show you how it works for your own personnel.” He then displayed data revealing phones as they moved from Fort Bragg in North Carolina and MacDill Air Force Base in Florida—critical hubs for elite US special operations units. The devices traveled through transit points like Turkey before clustering in northern Syria at a seemingly abandoned cement factory near Kobane, a known ISIS stronghold. The location he pinpointed was a covert forward operating base.

Yeagley says he was quickly escorted to a secured room to continue his presentation behind closed doors. There, officials questioned him on how he had obtained the data, concerned that his stunt had involved hacking personnel or unauthorized intercepts.

The data wasn’t sourced from espionage but from unregulated commercial brokers, he explained to the concerned DOD officials. “I didn’t hack, intercept, or engineer this data,” he told them. “I bought it.”

Now, years later, Yeagley remains deeply frustrated with the DOD’s inability to control the situation. What WIRED, BR, and Netzpolitik.org are now reporting is “very similar to the alarms we raised almost 10 years ago,” he says, shaking his head. “And it doesn’t seem like anything’s changed.”

US law requires the director of national intelligence to provide “protection support” for the personal devices of “at risk” intelligence personnel who are deemed susceptible to “hostile information collection activities.” But which personnel meet these criteria is unclear, as is the extent of the protections beyond periodic training and advice. The location data we acquired demonstrates, regardless, that commercial surveillance is far too pervasive and complex to be reduced to individual responsibility.

Biden’s outgoing director of national intelligence, Avril Haines, did not respond to a request for comment.

A report declassified by Haines last summer acknowledges that US intelligence agencies had purchased a “large amount” of “sensitive and intimate information” about US citizens from commercial data brokers, adding that “in the wrong hands,” the data could “facilitate blackmail, stalking, harassment, and public shaming.” The report, which contains numerous redactions, notes that, while the US government “would never have been permitted to compel billions of people to carry location tracking devices on their persons at all times,” smartphones, connected cars, and web tracking have all made this possible “without government participation.”

Mike Rogers, the Republican chair of the House Armed Services Committee, did not respond to multiple requests for comment. A spokesperson for Adam Smith, the committee’s ranking Democrat, said Smith was unavailable to discuss the matter, busy negotiating a must-pass bill to fund the Pentagon’s policy priorities next year.

Jack Reed and Roger Wicker, the leading Democrat and Republican on the Senate Armed Services Committee, respectively, did not respond to multiple requests for comment. Inquiries placed with House and Senate leaders and top lawmakers on both congressional intelligence committees have gone unanswered.

The DOD and the NSA declined to answer specific questions related to our investigation. However, DOD spokesperson Javan Rasnake says that the Pentagon is aware that geolocation services could put personnel at risk and urged service members to remember their training and adhere strictly to operational security protocols. “Within the USEUCOM region, members are reminded of the need to execute proper OPSEC when conducting mission activities inside operational areas,” Rasnake says, using the shorthand for operational security.

An internal Pentagon presentation obtained by the reporting collective, though, claims that not only is the domestic data collection likely capable of revealing military secrets, it is essentially unavoidable at the personal level, service members’ lives being simply too intertwined with the technology permitting it. This conclusion closely mirrors the observations of Chief Justice John Roberts of the US Supreme Court, who in landmark privacy cases within the past decade described cell phones as “a pervasive and insistent part of daily life” and wrote that owning one is “indispensable to participation in modern society.”

The presentation, which a source says was delivered to high-ranking general officers, including the US Army’s chief information officer, warns that despite promises from major ad tech companies, “de-anonymization” is all but trivial given the widespread availability of commercial data collected on Pentagon employees. The document emphasizes that the caches of location data on US individuals are a “force protection issue,” likely capable of revealing troop movements and other highly guarded military secrets.

While instances of blackmail inside the Pentagon have seen a sharp decline since the Cold War, many of the structural barriers to persistently surveilling Americans have also vanished. In recent decades, US courts have repeatedly found that new technologies pose a threat to privacy by enabling surveillance that, “in earlier times, would have been prohibitively expensive,” as the 7th Circuit Court of Appeals noted in 2007.

In an August 2024 ruling, another US appeals court rejected claims by tech companies that users who “opt in” to surveillance were actually “informed” and doing so “voluntarily,” declaring the opposite is clear to “anyone with a smartphone.” The internal presentation for military staff presses that adversarial nations can gain access to advertising data with ease, using it to exploit, manipulate, and coerce military personnel for purposes of espionage.

Patronizing sex workers, whether legal in a foreign country or not, is a violation of the Uniform Code of Military Justice. The penalties can be severe, including forfeiture of pay, dishonorable discharge, and up to one year of imprisonment. But the ban on solicitation is not merely imposed on principle alone, says Michael Waddington, a criminal defense attorney who specializes in court-martial cases. “There’s a genuine danger of encountering foreign agents in these establishments, which can lead to blackmail or exploitation,” he says.

“This issue is particularly concerning given the current geopolitical climate. Many US servicemembers in Europe are involved in supporting Ukraine in its defense against the Russian invasion,” Waddington says. “Any compromise of their integrity could have serious implications for our operations and national security.”

When it comes to jeopardizing national security, even data on low-level personnel can pose a risk, says Vivek Chilukuri, senior fellow and program director of the Technology and National Security Program at the Center for a New American Security (CNAS). Before joining CNAS, Chilukuri served in part as legislative director and tech policy advisor to US senator Michael Bennet on the Senate Intelligence Committee and previously worked at the US State Department, specializing in countering violent extremism.

“Low-value targets can lead to high-value compromises,” Chilukuri says. “Even if someone isn’t senior in an organization, they may have access to highly sensitive infrastructure. A system is only as secure as its weakest link.” He points out that if adversaries can target someone with access to a crucial server or database, they could exploit that vulnerability to cause serious damage. “It just takes one USB stick plugged into the right device to compromise an organization.”

It’s not just individual service members who are at risk—entire security protocols and operational routines can be exposed through location data. At Büchel Air Base, where the US is believed to have stored an estimated 10 to 15 B61 nuclear weapons, the data reveals the daily activity patterns of devices on the base, including when personnel are most active and, more concerningly, potentially when the base is least populated.

Overview of the Air Mobility Command ramp at Ramstein Air Base, Germany. Photograph: Timm Ziegenthaler/Stocktrek Images; Getty Images

Büchel has 11 protective aircraft shelters equipped with hardened vaults for nuclear weapons storage. Each vault, which is located in a so-called WS3, or Weapons Storage and Security System, can hold up to four warheads. Our investigation traced precise location data for as many as 40 cellular devices that were present in or near these bunkers.

The patterns we could observe from devices at Büchel go far beyond understanding the working hours of people on base. In aggregate, it’s possible to map key entry and exit points, pinpoint frequently visited areas, and even trace personnel to their off-base routines. For a terrorist, this information could be a gold mine—an opportunity to identify weak points, plan an attack, or target individuals with access to sensitive areas.

This month, German authorities arrested a former civilian contractor employed by the US military on allegations of offering to pass sensitive information about American military operations in Germany to Chinese intelligence agencies.

In April, German authorities arrested two German-Russian nationals accused of scouting US military sites for potential sabotage, including alleged arson. One of the targeted locations was the US Army’s Grafenwöhr Training Area in Bavaria, a critical hub for US military operations in Europe that spans 233 square kilometers.

At Grafenwöhr, WIRED, BR, and Netzpolitik.org could track the precise movements of up to 1,257 devices. Some devices could even be observed zigzagging through Range 301, an armored vehicle course, before returning to nearby barracks.

Our investigation found 38,474 location signals from up to 189 devices inside Büchel Air Base, where around a dozen US nuclear weapons are reportedly stored. Courtesy of OpenMapTiles

Justin Sherman, a senior fellow at Duke University’s Sanford School of Public Policy and head of its data brokerage research project, also leads Global Cyber Strategies, a firm specializing in cybersecurity and tech policy. In 2023, he and his coauthors at Duke secured $250,000 in funding from the United States Military Academy to investigate how easy it is to purchase sensitive data about military personnel from data brokers. The results were alarming: They were able to buy highly sensitive, nonpublic, individually identifiable health and financial data on active-duty service members, without any vetting.

“It shows you how bad the situation is,” Sherman says, explaining how they geofenced requests to specific special operations bases. “We didn’t pretend to be a marketing firm in LA. We just wanted to see what the data brokers would ask.” Most brokers didn’t question their requests, and one even offered to bypass an ID verification check if they paid by wire.

During the study, Sherman helped draft an amendment to the National Defense Authorization Act that requires the Defense Department to ensure that highly identifiable individual data shared with contractors cannot be resold. He found the overall impact of the study underwhelming, however. “The scope of the industry is the problem,” he says. “It’s great to pass focused controls on parts of the ecosystem, but if you don’t address the rest of the industry, you leave the door wide open for anyone wanting location data on intelligence officers.”

Efforts by the US Congress to pass comprehensive privacy legislation have been stalled for the better part of a decade. The latest effort, known as the American Privacy Rights Act, failed to advance in June after GOP leaders threatened to scuttle the bill, which was significantly weakened before being shelved.

Another current privacy bill, the Fourth Amendment Is Not For Sale Act, seeks to ban the US government from purchasing data on Americans that it would normally need a warrant to obtain. While the bill would not prohibit the sale of commercial location data altogether, it would bar federal agencies from using those purchases to circumvent constitutional protections upheld by the Supreme Court. Its fate rests in the hands of House and Senate leaders, whose negotiations are private.

“The government needs to stop subsidizing what is now for good reason one of the world’s least popular industries,” says Sean Vitka, policy director at the nonprofit Demand Progress. “There are a lot of members of Congress who take seriously the severe threats to privacy and national security posed by data brokers, but we’ve seen many actions by congressional leaders that only further the problem. There shouldn’t need to be a body count for these people to take action.”

A hack nearly gained access to millions of computers. Here’s what we should learn from this.

The internet is far less secure than it ought to be.

https://www.vox.com/future-perfect/24127433/linux-hack-cyberattack-computer-security-internet-open-source-software

One of the most fascinating and frightening incidents in computer security history started in 2022 with a few pushy emails to the mailing list for a small, one-person open source project.

A user had submitted a complex bit of code that was now waiting for the maintainer to review. But a different user with the name Jigar Kumar felt that this wasn’t happening fast enough. “Patches spend years on this mailing list,” he complained. “5.2.0 release was 7 years ago. There is no reason to think anything is coming soon.”

A month later, he followed up: “Over 1 month and no closer to being merged. Not a suprise.” [sic]

And a month after that: “Is there any progress on this?” Kumar stuck around for about four months complaining about the pace of updates and then was never heard from again.

A few weeks ago, the world learned a shocking twist. “Jigar Kumar” does not seem to exist at all. There are no records of any person by that name outside the pushy emails. He — along with a number of other accounts — was apparently part of a campaign to compromise nearly every Linux-running computer in the world. (Linux is an open source operating system — as opposed to closed systems from companies like Apple — that runs on tens of millions of devices.)

That campaign, experts believe, was likely the work of a well-resourced state actor, one who almost pulled off an attack that would have let them remotely access millions of computers, effectively logging in as anyone they wanted. The security ramifications would have been huge.

How to (almost) hack everything

Here’s how events played out: In 2005, software engineer Lasse Collin wrote a series of tools for compressing files more efficiently (similar to the process behind a .zip file). He made those tools available for free online, and lots of larger projects incorporated Collin’s work, which was eventually called XZ Utils.
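
XZ Utils’ compression is exposed to other programs through the liblzma library; Python’s standard lzma module, for instance, is built on it. A short illustration of the kind of routine functionality through which Collin’s code ended up inside countless larger projects:

    # Python's standard-library lzma module wraps liblzma, the library at
    # the heart of XZ Utils.
    import lzma

    data = b"all modern digital infrastructure " * 100
    compressed = lzma.compress(data)              # produces .xz-format bytes
    assert lzma.decompress(compressed) == data
    print(f"{len(data)} bytes compressed to {len(compressed)} bytes")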

Collin’s tool became one part of the vast open source ecosystem that powers much of the modern internet. We might think that something as central to modern life as the internet has a professionally maintained structure, but as an XKCD comic published well before the hack shows, it’s closer to the truth that “all modern digital infrastructure” rests on “a project some random person in Nebraska has been thanklessly maintaining since 2003.” XZ Utils was one such project — and yes, you should find it a little worrying that there are many of them.

Starting in 2021, a user going by the name “Jia Tan” — he, too, doesn’t seem to exist anywhere else — started making contributions to the XZ project. At first, they were small, harmless fixes. Then, Tan started submitting larger additions.

The way an open source project like this one works is that a maintainer — Collin, in this case — has to read and approve each such submission. Effectively, Tan was overloading Collin with homework.

That’s when “Kumar” showed up to complain that Collin was taking too long. Another account that doesn’t seem to exist joined the chorus. They argued that Collin clearly wasn’t up to the task of maintaining his project alone and pushed for him to add “Jia Tan” as another maintainer.

“It seems likely that they were fakes created to push Lasse to give Jia more control,” engineer Russ Cox writes in a detailed timeline of the incident. “It worked. Over the next few months, Jia started replying to threads on xz-devel authoritatively about the upcoming 5.4.0 release.” He’d become a trusted “maintainer” who could add code to XZ Utils himself.

Why does any of this matter? Because one of the many, many open source tools that happened to incorporate XZ Utils was OpenSSH, which is used to securely log in to remote computers and runs on millions of servers around the world.

“Tan” carefully added to XZ Utils some well-disguised code that compromised OpenSSH, effectively allowing the creators to log in remotely to any computer running OpenSSH. The files containing the (heavily disguised) code were accepted as part of the larger project.
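
Strictly speaking, the backdoor reached sshd indirectly: on many Linux distributions, the OpenSSH server is patched to link libsystemd, which in turn links liblzma. After the disclosure, a widely circulated first check was simply to see whether the local sshd links liblzma at all. A sketch, assuming a Linux host with ldd on the PATH and sshd at its usual location:

    # Check whether the local sshd links liblzma (directly or indirectly),
    # the precondition for the backdoor to activate. Paths vary by distro.
    import subprocess

    out = subprocess.run(["ldd", "/usr/sbin/sshd"],
                         capture_output=True, text=True)
    if "liblzma" in out.stdout:
        print("sshd links liblzma -- verify your xz version")
    else:
        print("sshd does not appear to link liblzma")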

Fortunately, almost all of the millions of potentially targeted computers were not affected because it’s routine for such a new update to first be released as “unstable” (meaning expected to have some bugs), and most administrators wait for a subsequent “stable” release.

Before that happened, “Jia Tan”’s work got caught. Andres Freund, a software engineer at Microsoft, was off work and doing some testing on a computer that had the “unstable” new release. Under most conditions the hack ran seamlessly, but in the setup he was testing, it slowed down SSH performance. He dug deeper and quickly unraveled the whole scheme.

Which means that, thanks to one Microsoft engineer doing some work off-hours, your computer remains secure — at least, as far as I know.

Can we do better than getting lucky?

There was nothing inevitable about this hack getting discovered. Lots of other people were running the unstable new build without noticing any problems. What made Freund suspicious in the first place wasn’t the suspicious code but a bug that had been accidentally introduced by “Jia Tan.”

If the “Jia Tan” team had avoided that error, they might well have pulled this off. Catching the suspicious code “really required a lot of coincidences,” Freund said later on Mastodon.

No one wants to believe that modern computer security essentially relies on “a lot of coincidences.” We’d much rather have reliable processes. But I hope this narrative makes it clear just how hard it is to reliably defend the jury-rigged internet we have against an attack like this.

The people behind “Jia Tan” spent more than two years building the access they needed for this attack. Some of the specifics have to do with the dynamics of open source software, where decades-old projects are often in a quiet maintenance stage from which, as we saw, an aggressive actor can seize control. But with the same resources and dedication that were behind “Jia Tan,” you could get hired at a software company to pull off the same thing on closed-source software too.

Most of all, it’s very hard to guess whether this attempted attack was unprecedented or unusual simply in that it got caught. Which means we have no idea whether there are other land mines lurking in the bowels of the internet.

Personally, as someone who doesn’t work in computer security, the main thing I took away from this was less a specific policy prescription and more a sense of awe and appreciation. Our world runs on unsung contributions by engineers like Collin and Freund, people who spend their free time building stuff, testing stuff, and sharing what they build for the benefit of everyone. This is inconvenient for security, but it’s also really cool.

I wasn’t able to reach Collin for comment. (His website said: “To media and reporters: I won’t reply for now because first I need to understand the situation thoroughly enough. It’s enough to reload this page once per 48 hours to check if this message has changed.”) But I hope he ultimately comes to think that being personally targeted by this fairly extraordinary effort to make his work on XZ Utils feel inadequate is, in fact, a remarkable vindication of its importance.

Motivations behind XZ Utils backdoor may extend beyond rogue maintainer

Security researchers are raising questions about whether the actor behind an attempted supply chain attack was engaged in a random, solo endeavor.

Source: https://www.cybersecuritydive.com/news/motivations-xz-utils-backdoor/712080/

The attempted supply chain attack against XZ Utils is raising troubling questions about the motivations of the suspected threat actor behind the incident as well as the overall security of the larger open source ecosystem. 

A Microsoft engineer accidentally found obfuscated malicious code installed in the xz library, which could have led to a major supply chain compromise.

Security researchers and other industry experts are pointing to the suspicion that a longtime contributor is behind what is now considered a multiyear effort to establish themselves as an insider, leading up to the attempted supply chain attack. 

XZ Utils, a data compression software utility found in most Linux distributions, has long been considered a widely trusted project, according to researchers. 

“The most unique and unsettling aspect of this attack is the significant effort and investment made by the attacker in gradually establishing themselves over several years as a credible open-source contributor and carefully advancing their position until they gained trust and the opportunity to maintain and add malicious code into a widely used package,” Jonathan Sar Shalom, director of threat research at JFrog, said via email. 

Researchers point to a GitHub account, @JiaT75, which has since been suspended, as the suspected original source of the backdoor.

GitHub confirmed that it “suspended user accounts and removed the content” in keeping with its acceptable use policies; however, after an investigation, the account belonging to @Larhzu was reinstated.

The @Larhzu account is linked to Lasse Collin, the original and legitimate maintainer of the project.

What followed was a multiyear effort to gain trust within the community, while at the same time allegedly testing the waters by making subtle changes that failed to raise any immediate alarm bells. 

“Now when we look back at the tale of the tape, what we see is Jia kind of surreptitiously inserted all these little changes over time,” Omkhar Arasaratnam, general manager at the Open Source Security Foundation, said in an interview. “None of them catastrophic, none of them very flashy. But you know, just to see if people were watching.”

Maintainers in focus

The open source community has seen previous cases of maintainers throwing tantrums or using the community as a platform to protest larger issues. But the patience and sophistication of this attack is raising questions for an increasing pool of experts about whether nation-state support is a factor.

“Our analysis suggests that the sophistication and operational security observed in this incident, including the strategic use of email addresses and IP addresses, point to a highly trained and sophisticated adversary,” said Brian Fox, co-founder and CTO of Sonatype, a supply chain management platform. “The lack of tangible evidence of the threat actor’s existence beyond their precise and limited engagements further distinguishes this from the actions of a rogue open source contributor.”

Red Hat on Friday warned that malicious code was present in the latest versions of xz tools and libraries. The vulnerability was assigned CVE-2024-3094 with a CVSS score of 10. 

Users were urged to immediately stop using Fedora Rawhide instances for work or personal use and the Cybersecurity and Infrastructure Security Agency warned developers and users to downgrade to an uncompromised version. 
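
The backdoored releases were xz/liblzma 5.6.0 and 5.6.1. A minimal sketch of the kind of version check administrators ran, assuming the xz binary is installed and on the PATH:

    # Flag xz versions known to contain the CVE-2024-3094 backdoor.
    import subprocess

    BACKDOORED = {"5.6.0", "5.6.1"}

    first_line = subprocess.run(["xz", "--version"],
                                capture_output=True,
                                text=True).stdout.splitlines()[0]
    version = first_line.split()[-1]  # e.g. "xz (XZ Utils) 5.4.6" -> "5.4.6"

    if version in BACKDOORED:
        print(f"xz {version} is compromised -- downgrade immediately")
    else:
        print(f"xz {version} is not one of the known-bad releases")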

Andres Freund, a principal software engineer at Microsoft, stumbled upon some anomalous activity last week and publicly disclosed the incident. Freund observed sshd processes using an unusual amount of CPU, even for failed logins with incorrect usernames.

“Recalled that I had seen an odd valgrind complaint in automated testing of postgres, a few weeks earlier after package updates,” Freund said in a post on Mastodon.

Microsoft confirmed his role in discovering the attack and released guidance on how to respond, with a list of impacted Linux distributions. 

Jake Williams, a faculty member at IANS Research, said the incident highlights the need for defense in depth, including the need to have properly staffed vulnerability intelligence teams and proper investments in tooling.

“Organizations with strict firewall rules preventing access to their SSH servers limited exploitation opportunities, even for vulnerable deployments,” Williams said via email. “Some [cloud security posture management systems] had scans for vulnerable instances released the same day this was detected.”

Real-World Use Cases for Apple’s Vision Pro + Version 2 – with the New ChatGPT Model “GPT-4o”

The integration of advanced AI like OpenAI’s GPT-4o into Apple’s Vision Pro + Version 2 can significantly enhance its vision understanding capabilities.
Here are ten possible use cases:

1. Augmented Reality (AR) Applications:
– Interactive AR Experiences: Enhance AR applications by providing real-time object recognition and interaction. For example, users can point the device at a historical landmark and receive detailed information and interactive visuals about it.
– AR Navigation: Offer real-time navigation assistance in complex environments like malls or airports, overlaying directions onto the user’s view.

2. Enhanced Photography and Videography:
– Intelligent Scene Recognition: Automatically adjust camera settings based on the scene being captured, such as landscapes, portraits, or low-light environments, ensuring optimal photo and video quality.
– Content Creation Assistance: Provide suggestions and enhancements for capturing creative content, such as framing tips, real-time filters, and effects.

3. Healthcare and Medical Diagnosis:
– Medical Imaging Analysis: Assist in analyzing medical images (e.g., X-rays, MRIs) to identify potential issues, providing preliminary diagnostic support to healthcare professionals.
– Remote Health Monitoring: Enable remote health monitoring by analyzing visual data from wearable devices to track health metrics and detect anomalies.

4. Retail and Shopping:
– Virtual Try-Ons: Allow users to virtually try on clothing, accessories, or cosmetics using the device’s camera, enhancing the online shopping experience.
– Product Recognition: Identify products in stores and provide information, reviews, and price comparisons, helping users make informed purchasing decisions.

5. Security and Surveillance:
– Facial Recognition: Enhance security systems with facial recognition capabilities for authorized access and threat detection.
– Anomaly Detection: Monitor and analyze security footage to detect unusual activities or potential security threats in real-time.

6. Education and Training:
– Interactive Learning: Use vision understanding to create interactive educational experiences, such as identifying objects or animals in educational content and providing detailed explanations.
– Skill Training: Offer real-time feedback and guidance for skills training, such as in sports or technical tasks, by analyzing movements and techniques.

7. Accessibility and Assistive Technology:
– Object Recognition for the Visually Impaired: Help visually impaired users navigate their surroundings by identifying objects and providing auditory descriptions.
– Sign Language Recognition: Recognize and translate sign language in real-time, facilitating communication for hearing-impaired individuals.

8. Home Automation and Smart Living:
– Smart Home Integration: Recognize household items and provide control over smart home devices. For instance, identifying a lamp and allowing users to turn it on or off via voice commands.
– Activity Monitoring: Monitor and analyze daily activities to provide insights and recommendations for improving household efficiency and safety.

9. Automotive and Driver Assistance:
– Driver Monitoring: Monitor driver attentiveness and detect signs of drowsiness or distraction, providing alerts to enhance safety.
– Object Detection: Enhance autonomous driving systems with better object detection and classification, improving vehicle navigation and safety.

10. Environmental Monitoring:
– Wildlife Tracking: Use vision understanding to monitor and track wildlife in natural habitats for research and conservation efforts.
– Pollution Detection: Identify and analyze environmental pollutants or changes in landscapes, aiding in environmental protection and management.

These use cases demonstrate the broad potential of integrating advanced vision understanding capabilities into Apple’s Vision Pro + Version 2, enhancing its functionality across various domains and providing significant value to users.
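
As a sense of what such an integration might look like at the API level, here is a minimal sketch that sends a single camera frame to GPT-4o for scene description using OpenAI’s Python SDK. Everything device-specific (frame capture on the headset, display of the answer) is omitted, and the image URL is a placeholder:

    # Ask GPT-4o to describe one camera frame via OpenAI's public API.
    # The image URL is a placeholder; headset-side capture is out of scope.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[{
            "role": "user",
            "content": [
                {"type": "text", "text": "What objects are in this scene?"},
                {"type": "image_url",
                 "image_url": {"url": "https://example.com/frame.jpg"}},
            ],
        }],
    )
    print(response.choices[0].message.content)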

Open letter on the feasibility of “Chat Control”: Assessments from a scientific point of view

Source: https://www.ins.jku.at/chatcontrol/

Update: A parallel initiative is aimed at the EU institutions and is available in English as the CSA Academia Open Letter. Since very similar arguments were formulated in parallel, the two letters support each other.

The EU Commission initiative discussed under the name “Chat Control” – the suspicionless monitoring of various communication channels to detect child pornography, terrorist, or other “undesirable” material, including attempts at early detection (e.g., “grooming” of minors through trust-building text messages), made mandatory for mobile devices and communication services – has recently been expanded to include the monitoring of direct audio communications. Some states, including Austria and Germany, have already publicly declared that they will not support this initiative for suspicionless surveillance. Civil protection and children’s rights organizations have likewise rejected the approach as excessive and at the same time ineffective. Recently, even the legal service of the EU Council of Ministers diagnosed an incompatibility with European fundamental rights. Irrespective of this, the draft is being tightened further and extended to additional channels – in the latest version, even to audio messages and conversations. The approach appears to be coordinated with corresponding attempts in the US (the “EARN IT” and “STOP CSAM” Acts) and the UK (the “Online Safety Bill”).

As scientists actively researching various areas of this topic, we therefore state with all clarity: this proposal cannot be implemented securely and effectively. No foreseeable development of the relevant technologies would make such an implementation technically possible. Moreover, according to our assessment, the hoped-for effects of these monitoring measures cannot be expected. This legislative initiative therefore misses its target, is socio-politically dangerous, and would permanently damage the security of our communication channels for the majority of the population.

The main reasons against the feasibility of “Chat Control” have already been stated several times. In the following, we discuss them specifically at the interdisciplinary intersection of artificial intelligence (AI), security (information security and technical data protection), and law.

Our concerns are:

  1. Security: a) Encryption is the best method for internet security; successful attacks are almost always due to faulty software. b) Systematic and automated monitoring (i.e., “scanning”) of encrypted content is technically possible only if the security achievable through encryption is massively undermined, which brings considerable additional risks. c) A legal obligation to integrate such scanners would make secure digital communication unavailable to the majority of the EU population, while having little impact on criminal communications.
  2. AI: a) Automated classification of content, including methods based on machine learning, is always subject to errors, which in this case will lead to high false-positive rates. b) Monitoring methods that run on end devices open up additional attack possibilities, up to and including the extraction of possibly illegal training material.
  3. Law: a) A sensible demarcation from explicitly permitted uses of specific content, for example in education or for criticism and parody, does not appear to be automatable. b) The massive encroachment on fundamental rights by such an instrument of mass surveillance is not proportionate and would cause great collateral damage in society.

In detail, these concerns are based on the following scientifically recognized facts:

  1. Security
    1. Encryption using modern methods is an indispensable basis for practically all technical mechanisms for maintaining security and data protection on the Internet. It currently protects communication on the Internet as the cornerstone of everyday services, right through to critical infrastructure such as telephone, electricity, and water networks and hospitals. Experts place significantly higher trust in good encryption methods than in other security mechanisms. The generally poor quality of software is, above all, the reason for the many publicly known security incidents; improving this situation therefore relies primarily on encryption.
    2. Automatic monitoring (“scanning”) of correctly encrypted content is not effectively possible according to the current state of knowledge. Procedures such as Fully Homomorphic Encryption (FHE) are currently not suitable for this application: the technique is not capable of it, nor is the necessary computing power realistically available. Rapid improvement is not foreseeable here either.
    3. For these reasons, earlier attempts to ban or restrict end-to-end encryption were mostly quickly abandoned internationally. The current Chat Control push instead aims to have monitoring functionality built into end devices in the form of scanning modules (“Client-Side Scanning”, CSS), which scan plain-text content before secure encryption or after secure decryption. Providers of communication services would have to be legally obliged to implement this for all content. Since doing so is not in the core interest of such organizations and requires implementation and operating effort as well as increased technical complexity, it cannot be assumed that such scanners would be introduced voluntarily – in contrast to scanning on the server side.
    4. Secure messengers such as Signal, Threema, and WhatsApp have already publicly announced that they will not implement such client scanners but will instead withdraw from the corresponding regions. This has different implications depending on the use case: (i) (Adult) criminals will simply communicate with each other via “non-compliant” messenger services to continue benefiting from secure encryption. The increased effort (for example, installing apps on Android via sideloading when they are not available in the usual app stores of the respective country) is not a significant hurdle for criminal elements. (ii) Criminals communicate with possible future victims via popular platforms, which would be the target of the mandatory surveillance measures discussed. In this case, it can be assumed that informed criminals will quickly lure their victims to alternative but still internationally recognized channels such as Signal, which are not covered by the monitoring. (iii) Participants exchange problematic material without being aware that they are committing a crime. This case would be reported automatically and could also lead to the criminalization of minors without intent. The restrictions would therefore primarily affect the broad – and irreproachable – mass of the population. It would be utterly delusional to think that secure encryption could still be rolled back: tools like Signal, Tor, Cwtch, Briar, and many others are widely available as open source and are beyond any central control. Knowledge of secure encryption is already common knowledge and can no longer be censored. There is no effective way to technically block the use of strong encryption without client-side scanning. If surveillance measures are prescribed in messengers, only criminals, whose actual crimes outweigh the violation of the surveillance obligation, will retain their privacy.
    5. Furthermore, the complex implementation forced by the proposed scanner modules creates additional security problems that do not currently exist. On the one hand, these are new software components, which will in turn be vulnerable. On the other hand, the Chat Control proposals consistently assume that the scanner modules themselves will remain confidential, since, built into the messenger app, they would be trained on content whose mere possession is punishable and could otherwise simply be used to test evasion methods. It is an illusion that such machine-learning models or other scanner modules, distributed to billions of devices under the control of end users, can ever be kept secret. A prominent example is Apple’s “NeuralHash” module for CSAM detection, which was extracted almost immediately from the corresponding iOS versions and is thus openly available. The assumption in the Chat Control proposals that these scanner modules could be kept confidential is therefore completely unfounded, and corresponding data leaks are almost unavoidable.
  2. Artificial intelligence
    1. We have to assume that machine-learning (ML) models on end devices cannot, in principle, be kept completely secret. This contrasts with server-side scanning, which is currently legally possible and actively practiced by various providers to scan content that has not been end-to-end encrypted. ML models on the server side can be reasonably protected from being read with the current state of the art and are less the focus of this consideration.
    2. A general problem with all ML-based filters is misclassification: known “undesirable” material is not recognized as such after small changes (a “false negative” or “false non-match”). For parts of the proposal, it is currently unknown how ML models could recognize complex, unfamiliar material in changing contexts (e.g., “grooming” in text chats) with even approximate accuracy, so high false-negative rates are likely. In terms of risk, however, it is significantly more serious when harmless material is classified as “undesirable” (a “false positive”, “false match”, or “collision”). Such errors can be reduced but in principle cannot be ruled out. Besides falsely accusing uninvolved persons, false positives also flood investigative authorities, which already have too few resources to follow up on reports, with (possibly very) many false reports; see the back-of-the-envelope sketch after this list.
    3. The assumed open availability of ML models also creates various new attack possibilities. Using the example of Apple’s NeuralHash, random collisions were found very quickly, and programs to generate arbitrary collisions between images were freely released. This method, also known as “malicious collisions”, uses so-called adversarial attacks against the neural network and thus enables attackers to deliberately craft harmless material that the ML model classifies as a “match” and thus as “undesirable”. In this way, innocent people can be harmed in a targeted manner by automatic false reports and brought under suspicion, without any illegal action on the part of the attacked or the attacker.
    4. The open availability of the models can also be used for so-called “training input recovery” to extract, at least partially, the content used for training from the ML model. In the case of prohibited content (e.g., child pornography), this poses another massive problem and can further increase the damage to those affected, because their sensitive data (e.g., images of abuse used for training) can continue to be disseminated. Because of these and other problems, Apple, for example, withdrew its proposal. We note that this latter danger does not occur with server-side scanning by ML models but is newly introduced by the Chat Control proposal with client-side scanners.
  3. Legal Aspects
    1. The right to privacy is a fundamental right that may only be interfered with under very strict conditions. Whoever exercises this fundamental right must not be suspected from the outset of wanting to hide something criminal. The often-used phrase “If you have nothing to hide, you have nothing to fear!” denies people the exercise of their fundamental rights and promotes totalitarian surveillance tendencies. The use of Chat Control would fuel this.
    2. The area of terrorism in particular overlaps, in its breadth, with political activity and freedom of expression. Against precisely this background, the “preliminary criminalization” that has increasingly taken place in recent years under the guise of fighting terrorism is viewed particularly critically. Chat Control measures go in the same direction: they can severely curtail this fundamental right and place politically critical people in the focus of criminal prosecution. The resulting severe curtailment of politically critical activity hinders the further development of democracy and harbors the danger of promoting radicalized underground movements.
    3. The legal and social sciences include researching criminal phenomena and questioning regulatory mechanisms. From this point of view, scientific discourse also runs the risk of being flagged as “suspicious” by Chat Control and thus indirectly restricted. The possible stigmatization of critical legal and social science is in tension with the freedom of science, whose further development also requires research independent of the mainstream.
    4. Education must raise young people to be critically aware, which includes passing on facts about terrorism. Under Chat Control, providing such teaching material could place teachers in a criminal focus. The same applies to addressing sexual abuse, so that control measures could make this sensitive subject more taboo, even though “self-empowerment mechanisms” are supposed to be promoted.
    5. Interventions in fundamental rights must always be appropriate and proportionate, even if they are made in the context of criminal prosecution. The technical considerations presented show that these requirements are not met with Chat Control. Such measures thus lack any legal or ethical legitimacy.
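
The base-rate problem behind point 2.2 can be made concrete with a back-of-the-envelope calculation; the figures below are illustrative assumptions, not numbers from the proposal:

    # Back-of-the-envelope illustration of the false-positive problem.
    # Both figures are illustrative assumptions.
    messages_per_day = 10_000_000_000  # assumed messages scanned EU-wide daily
    false_positive_rate = 0.001        # assumed 0.1%, optimistic for ML filters

    false_reports = messages_per_day * false_positive_rate
    print(f"{false_reports:,.0f} false reports per day")  # 10,000,000

Because almost all scanned content is innocent, even a seemingly accurate filter would bury investigators in millions of false reports every day.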

In summary, the current proposal for chat control legislation is not technically sound from either a security or AI point of view and is highly problematic and excessive from a legal point of view. The chat control push brings significantly greater dangers for the general public than a possible improvement for those affected and should therefore be rejected.

Instead, existing options for human-driven reporting of potentially problematic material by recipients, as is already possible with various messenger services, should be strengthened and made even more easily accessible. It should be considered whether anonymous reporting options for such illegal material could be created and made easily accessible from within messengers. Existing criminal prosecution options, such as the monitoring of social media or open chat groups by police officers, as well as the legally authorized analysis of suspects’ smartphones, can continue to be used.

For more detailed information and further details please contact:

Security issues:
Univ.-Prof. Dr.
René Mayrhofer

+43 732 2468-4121

rm@ins.jku.at

AI questions:
DI Dr.
Bernhard Nessler

+43 732 2468-4489

nessler@ml.jku.at

Questions of law:
Univ.-Prof. Dr.
Alois Birklbauer

+43 732 2468-7447

alois.birklbauer@jku.at

Signatories:

  • AI Austria,
    association for the promotion of artificial intelligence in Austria, Wollzeile 24/12, 1010 Vienna
  • Austrian Society for Artificial Intelligence (ASAI),
    association for the promotion of scientific research in the field of AI in Austria
  • Univ.-Prof. Dr. Alois Birklbauer, JKU Linz
    (Head of the practice department for criminal law and medical criminal law)
  • Ass.-Prof. Dr. Maria Eichlseder, Graz University of Technology
  • Univ.-Prof. Dr. Sepp Hochreiter, JKU Linz
    (Head of the Institute for Machine Learning, Head of the LIT AI Lab)
  • Dr. Tobias Höller, JKU Linz
    (post-doc at the Institute for Networks and Security)
  • FH-Prof. DI Peter Kieseberg, St. Pölten University of Applied Sciences
    (Head of the Institute for IT Security Research)
  • Dr. Brigitte Krenn, Austrian Research Institute for Artificial Intelligence
    (Board Member, Austrian Society for Artificial Intelligence)
  • Univ.-Prof. Dr. Matteo Maffei, TU Vienna
    (Head of the Security and Privacy Research Department, Co-Head of the TU Vienna Cyber Security Center)
  • Univ.-Prof. Dr. Stefan Mangard, TU Graz
    (Head of the Institute for Applied Information Processing and Communication Technology)
  • Univ.-Prof. Dr. René Mayrhofer, JKU Linz
    (Head of the Institute for Networks and Security, Co-Head of the LIT Secure and Correct Systems Lab)
  • DI Dr. Bernhard Nessler, JKU Linz/SCCH
    (Vice President of the Austrian Society for Artificial Intelligence)
  • Univ.-Prof. Dr. Christian Rechberger, Graz University of Technology
  • Dr. Michael Roland, JKU Linz
    (post-doc at the Institute for Networks and Security)
  • a.Univ.-Prof. Dr. Johannes Sametinger, JKU Linz
    (Institute for Business Informatics – Software Engineering, LIT Secure and Correct Systems Lab)
  • Univ.-Prof. DI Georg Weissenbacher, DPhil (Oxon), TU Vienna
    (Prof. Rigorous Systems Engineering)

Published on 07/04/2023

Facebook Knows It’s Losing The Battle Against TikTok

Meta and Mark Zuckerberg face a six-letter problem. Spell it out with me: T-i-k-T-o-k.

Yeah, TikTok, the short-form video app that has hoovered up a billion-plus users and become a Hot Thing in Tech, means trouble for Zuckerberg and his social networks. He admitted as much several times in a call with Wall Street analysts earlier this week about quarterly earnings, a briefing in which he sought to explain his apps’ plateauing growth—and an actual decline in Facebook’s daily users, the first such drop in the company’s 18-year history.

Zuckerberg has insisted a major part of his TikTok defense strategy is Reels, the TikTok clone—ahem, short-form video format—introduced on Instagram and Facebook and launched in August 2020.

If Zuckerberg believed in Reels’ long-term viability, he would take a real run at TikTok by pouring money into Reels and its creators. Lots and lots of money. Something approaching the kind spent by YouTube, which remains the most lucrative income source for social media celebrities. (Those creators produce content to draw in engaged users. The platforms sell ads to appear with the content—more creators, more content, more users, more potential ad revenue. It’s a virtuous cycle.)

Now, here’s as good a time as any for a crash course in creator economics. For this, there’s no better guide than Hank Green, whose YouTube video on the subject recently went viral. His fame is rooted most deeply on YouTube, where he runs nine channels from his Montana home. His most popular channel is Crash Course (13.1 million subscribers—an enviable YouTube base), to which he posts educational videos for kids about subjects like Black Americans in World War II and the Israeli-Palestinian conflict.

Like the savviest social media publishers, Green fully understands that YouTube offers the best avenue for making money. It shares 55% of all ad revenue earned on a video with its creator. “YouTube is good at selling advertisements: It’s been around a long time, and it’s getting better every year,” Green says. On YouTube, he earns around $2 per thousand views. (In all, YouTube distributed nearly $16 billion to creators last year.)

Green sports an expansive mindset, though, and he has accounts on TikTok, Instagram and Facebook, too. TikTok doesn’t come close to paying as well as YouTube: On TikTok, Green earns pennies per thousand views.

Meta is already beginning to offer some payouts for Reels. Over the last month, Reels has finally amassed enough of an audience for Green’s videos to accumulate 16 million views and earn around 60 cents per thousand views. That’s many times TikTok’s rate, but still not enough to get Green to divert any substantial share of his focus to Reels, which has never managed to replicate TikTok’s zeitgeisty place in pop culture. (TikTok “has deeper content, something fascinating and weird,” explains Green. Reels, however, is “very surface level. None of it is deeper,” he says.) Another factor weighing on Reels: Meta’s bad reputation. “Facebook has traditionally been the company that has been kind of worst at being a good partner to creators,” he says, citing in particular Facebook’s earlier pivot to long-form video that led to the demise of several promising media startups, like Mic and Mashable.

This is where Zuckerberg could use Meta’s thick profit margin (36%, better even than Alphabet’s) and fat cash pile ($48 billion) to shell out YouTube-style cash to users posting Reels, creating an obvious enticement to prioritize Reels over TikTok. Maybe even Reels over YouTube, which has launched its own TikTok competitor, Shorts.

Now, imagine how someone like Green might get more motivated to think about Meta if Reels’ number crept up to 80 cents or a dollar per thousand views. Or $1.50. Or a YouTube-worthy $2. Or higher still: YouTube earnings can climb over $5 per thousand views, and even double that for the most popular creators.

Meta has earmarked up to $1 billion for these checks to creators, which sounds big until you remember the amount of capital Meta has available to it. (And think about the sum YouTube disburses.) Moreover, Meta has set a timeframe for dispensing those funds, saying last July it would continue through December 2022. Setting a timetable indicates that Meta could (will likely?) turn off the financing come next Christmas.

Zuckerberg has demonstrated a willingness to plunk down Everest-size mountains of money over many years for projects he does fully believe in. The most obvious example is the metaverse, the latest Zuckerberg pivot. Meta ran up a $10.1 billion bill on it last year to develop new augmented and virtual reality software and headsets and to binge-hire engineers. Costs are expected to grow in 2022. And unlike Reels, metaverse spending has no semblance of a time schedule; Wall Street has been told the splurge will continue for the foreseeable future. Overall, Meta’s view on the metaverse seems to be, We’ll spend as much as possible—for as long as it takes—for this to happen.

The same freewheeling mindset doesn’t seem to apply to Reels. But Zuckerberg knows he can’t let TikTok take over the short-form video space unopposed. Meta needs to hang onto the advertising revenue generated by Instagram and Facebook until it can make the metaverse materialize. (Instagram and Facebook, for perspective, generated 98% of Meta’s $118 billion revenue last year; sales of Meta’s VR headset, the Quest 2, accounted for the remaining 2%.) And advertising dollars will increasingly move to short-form video, following users’ growing demand for this type of content over the last several years.

The reality is, Zuckerberg has already admitted he doesn’t see Reels as a long-term solution to his T-i-k-T-o-k problem. If he did, he’d be spending more on it and on creators like Green than the metaverse costs him over six weeks.

PenLink – A small Nebraska company is helping law enforcement around the world spy on users of Google, Facebook and other tech giants

A small Nebraska company is helping law enforcement around the world spy on users of Google, Facebook and other tech giants. A secretly recorded presentation to police reveals how deeply embedded in the U.S. surveillance machine PenLink has become.


PenLink might be the most pervasive wiretapper you’ve never heard of.

The Lincoln, Nebraska-based company is often the first choice of law enforcement looking to keep tabs on the communications of criminal suspects. It’s probably best known, if it’s known at all, for its work helping convict Scott Peterson, who murdered his wife Laci and their unborn son in a case that fomented a tabloid frenzy in the early 2000s. Nowadays the company helps cops monitor suspected wrongdoing by users of Google, Facebook and WhatsApp – whatever web tool law enforcement requests.

With $20 million in revenue every year from U.S. government customers such as the Drug Enforcement Administration, the FBI, Immigration and Customs Enforcement (ICE) and almost every other law enforcement agency in the federal directory, PenLink enjoys a steady stream of income. That doesn’t include its sales to local and state police, where it also does significant business but for which there are no available revenue figures. Forbes viewed contracts across the U.S., including with towns and cities in California, Florida, Illinois, Hawaii, North Carolina and Nevada.

“PenLink is proud to support law enforcement across the U.S. and internationally in their effort to fight wrongdoing,” the company said. “We do not publicly discuss how our solution is being utilized by our customers.”

Sometimes it takes a spy to get transparency from a surveillance company. Jack Poulson, founder of technology watchdog Tech Inquiry, went incognito at the National Sheriffs’ Association’s winter conference in Washington. He recorded a longtime PenLink employee showing off what the company could do for law enforcement and discussing the scale of its operations. Not only does the recording lift the lid on how deeply involved PenLink is in wiretapping operations across the U.S., it also reveals in granular detail just how tech providers such as Apple, Facebook and Google provide information to police when they’re confronted with a valid warrant or subpoena.

Scott Tuma, a 15-year PenLink veteran, told attendees at the conference that the business got off the ground in 1987 when a law enforcement agency had an abundance of call records that it needed help organizing. It was in 1998 that the company deployed its first wiretap system. “We’ve got those, generally, scattered all over the U.S. and all over the world,” Tuma said. Though he didn’t describe that tool in detail, the company calls it Lincoln.

Today, it’s social media rather than phones that’s proving to be fertile ground for PenLink and its law enforcement customers. Tuma described working with one Justice Department gang investigator in California, saying he was running as many as 50 social media “intercepts.” PenLink’s trade is in collecting and organizing that information for police as it streams in from the likes of Facebook and Google.

The PenLink rep said that tech companies can be ordered to provide near-live tracking of suspects free of charge. One downside is that, unlike phone taps, the social-media feeds don’t come in real time. There’s a delay – 15 minutes in the case of Facebook and its offshoot, Instagram. Snapchat, however, won’t give cops data much more than four times a day, he said. In some “exigent circumstances,” Tuma said, he’d seen companies providing intercepts in near real time.

Making matters trickier for the police, to get the intercept data from Facebook, they have to log in to a portal and download the files. If an investigator doesn’t log in every hour during an intercept, they get locked out. “This is how big of a pain in the ass Facebook is,” Tuma said. PenLink automates the process, however, so if law enforcement officers have to take a break or their working day ends, they’ll still have the intercept response when they return.

A spokesperson for Meta, Facebook’s owner, said: “Meta complies with valid legal processes submitted by law enforcement and only produces requested information directly to the requesting law enforcement official, including ensuring the type of legal process used permits the disclosure of the information.”

Jennifer Granick, surveillance and cybersecurity counsel at the American Civil Liberties Union, reviewed the comments made by Tuma. She raised concerns about the amount of information the government was collecting via PenLink. “The law requires police to minimize intercepted data, as well as give notice and show necessity,” she said. “It’s hard to imagine that wiretapping 50 social media accounts is regularly necessary, and I question whether the police are then going back to all the people who comment on Facebook posts or are members of groups to tell them that they’ve been eavesdropped upon.”

She suggested that Tuma’s claim that a “simple subpoena” to Facebook could yield granular information – such as when and where a photo was uploaded, or when a credit-card transaction took place on Facebook Marketplace – may be an overreach of the law.

There’s a lot of nuance involving where government actions might stray over the line, said Randy Milch, a New York University law professor and former general counsel at telecoms giant Verizon Communications. “While I’m sympathetic to the idea that the government is going to ask for more than it needs, simply saying ‘too much data must mean an overreach’ is the kind of arbitrary rule that isn’t workable,” he told Forbes. “The government doesn’t know the amount of the data it’s seeking” before the fact. Milch noted that the Stored Communications Act explicitly allows for subpoenas to collect records including names, addresses, means and source of payment, as well as information on session times and durations.

‘Google’s the best’

In his Washington talk, Tuma gushed over Google’s location-tracking data. Google “can get me within three feet of a precise location,” he said. “I cannot tell you how many cold cases I’ve helped work on where this is five, six, seven years old and people need to put [the suspect] at a hit-and-run or it was a sexual assault that took place.” If people are carrying their phones and have Gmail accounts, he said, law enforcement “can get really lucky. And it happens a lot.” Facebook, by comparison, will get a target within 60 to 90 feet, Tuma said, while Snapchat has started providing more accurate location information within 15 feet.

Snapchat didn’t respond to requests for comment.

Tuma also described having a lot of success in asking Google for search histories. “Multiple homicide investigations, I’ve seen it: ‘How to dispose of a human body,’ ‘best place to dump a body.’ Swear to God, that’s what they search for. It’s in their Google history. They cleared their browser and their cookies and things, they think it’s gone. Google’s the best.” A Google spokesperson said the company tries to balance privacy concerns with the needs of police. “As with all law enforcement requests, we have a rigorous process that is designed to protect the privacy of our users while supporting the important work of law enforcement,” the spokesperson said.

Tuma described Apple’s iCloud warrants as “phenomenal.” “If you did something bad, I bet you I could find it on that backup,” he said. (Apple didn’t respond to requests for comment.) It was also possible, Tuma said, to look at WhatsApp messages, despite the platform’s assurances of tight security. Users who back up messages effectively remove the protection provided by the app’s end-to-end encryption. Tuma said he was working on a case in New York where he was sitting on “about a thousand recordings from WhatsApp.” The Facebook-owned app may not be so susceptible to near real-time interception, however, as backups can only be done as frequently as once a day. Metadata showing how a WhatsApp account was used, and which numbers were contacting one another and when, can meanwhile be tracked with a surveillance technology known as a pen register. PenLink provides that tool as a service.

All messages on WhatsApp are end-to-end encrypted, said a company spokesperson, and it’s transparent about how it works with law enforcement. “We know that people want their messaging services to be reliable and safe – and that requires WhatsApp to have limited data,” the spokesperson said. “We carefully review, validate and respond to law enforcement requests based on applicable law and in accordance with our terms of service, and are clear about this on our website and in regular transparency reports. This work has helped us lead the industry in delivering private communications while keeping people safe, and has led to arrests in criminal cases.” The spokesperson pointed to the release last year of a feature that allows users to encrypt their backups in iCloud or Google Drive, while noting that when WhatsApp responds to a law enforcement request, it provides the data not to any private company like PenLink but directly to law enforcement.

Going dark or swimming in data?

In recent years, the FBI and various police agencies have raised concerns about end-to-end encryption from Google or Facebook cutting off valuable data sources. But Tuma said that Silicon Valley’s heavyweights aren’t likely to start hiding information from police because it would mean doing the same to advertisers. “I always call B.S. on it for this reason right here: Google’s ad revenue in 2020 was $182 billion,” Tuma said.

Granick of the ACLU said that such claims showed that the FBI, contrary to what the bureau claimed, wasn’t losing sight of suspects because of encrypted apps like WhatsApp. “The fact that backups and other data are not encrypted creates a treasure trove for police,” Granick said. “Far from going dark, they are swimming in data.” It’s noteworthy that Signal, an encrypted communications app that’s become hugely popular in recent years, does not have a feature that allows users to back up their data to the cloud.

Indeed, the amount of data being sent by the likes of Google and Facebook to police can be astonishing. Forbes recently reviewed a search warrant in which the police were sent 27,000 pages of information on a Facebook account of a man accused of giving illegal tours of the Grand Canyon. Tuma said he’d seen even bigger returns, the largest being around 340,000 pages.

Though its headcount is small – fewer than 100 employees, according to LinkedIn – PenLink’s ability to tap a wide range of telecoms and internet businesses at scale has made the company very attractive to police over the last two decades. Over the last month alone, the DEA ordered nearly $2 million in licenses and the FBI $750,000.

Through a Freedom of Information Act request, Forbes obtained information on a $16.5 million PenLink contract with ICE that was signed in 2017 and continued to 2021. It details a need for the company’s suite of telecommunications analysis and intercept software applications, including what it called its PLX tool. The contract requires PenLink, at a minimum, to help wiretap a large number of providers, including AT&T, Iridium Satellite, Sprint, Verizon, T-Mobile, Cricket, Cablevision, Comcast, Time Warner, Cox, Skype, Vonage, Virgin Mobile and what the government calls “social media and advertising websites” such as Facebook and WhatsApp.

PenLink’s work wouldn’t be possible without the compliance of tech providers, who, according to Granick, “are storing too much data for too long, and then turning too much over to investigators. Social media companies are able to filter by date, type of data, and even sender and recipient. Terabytes of data are almost never going to be responsive to probable cause, which is what the Fourth Amendment requires.”


A Guide to Apple’s New App-Tracking Controls (ATT) in iOS 14.5

It’s the biggest lie of our time: “I have read the terms and conditions and privacy policy.” Read a bajillion words of legalese before hitting “agree” to use an app? Surrre. Yet I have one request for you when iOS 14.5 arrives on your iPhone and the privacy pop-upalooza begins: Read them. Lucky for you, they’re short and crucial to understanding how your most personal info is used.

As for how you choose to answer these prompts, I have some advice on that, too.

On Monday, after many months of anticipation, Apple released iOS 14.5. The update isn’t as big as the full-digit release that typically arrives each September, but it does have a few useful upgrades. Siri has some new, more realistic voices. If you’re setting up a new device, the virtual assistant no longer defaults to a female voice—something I’ve long advocated for. Then, there’s the new mask-unlock trick. If you’re wearing a mask and want to unlock your iPhone without punching in a passcode, you can use your Apple Watch to confirm it’s you. Oh, and there’s a redesigned syringe emoji. No sore arm included.

But the most important and most controversial update? App Tracking Transparency—abbreviated to ATT. The privacy feature requires any app that wants to track your activity and share it with other apps or websites to ask for permission.

“We really just want to give users a choice,” Craig Federighi, Apple’s senior vice president of software engineering, told me in an exclusive video interview. “These devices are so intimately a part of our lives and contain so much of what we’re thinking and where we’ve been and who we’ve been with that users deserve and need control of that information.” He added, “The abuses can range from creepy to dangerous.”

Many apps on your phone will begin showing pop-ups like these.

PHOTO: JOANNA STERN/THE WALL STREET JOURNAL

App developers, advertisers and social networks dependent on ad revenue don’t see it as such a humanitarian decision. For years, they’ve relied on this sort of tracking and sharing your info with data brokers to build a dossier on your digital habits to serve you highly personalized ads. Facebook has been vocal about Apple’s move, calling it “harmful to small businesses,” “anticompetitive” and “hypocritical.”

“It’s people opting out without understanding the impact,” said Graham Mudd, Facebook’s vice president of Ads & Business Product Marketing. “If you look at Apple’s language and the lack of explanation, we’re concerned that people will opt out because of this discouraging prompt, and we will find ourselves in a world where the internet has more paywalls and where far fewer small businesses are able to reach their customers.”

“It wasn’t surprising to us to hear that some people were going to push back on this, but at the same time, we were completely confident that it’s the right thing,” Mr. Federighi said. While the feature’s rollout has been delayed, Mr. Federighi said that was caused not by backlash but because Apple had to make sure app developers could comply when a user opted out of tracking. Mr. Federighi said Apple worked hard on the clarity of the prompts and has created privacy-respecting ad tools for developers.

After years of writing about the need for more privacy control, I’m grateful for the choice. But this is much more than just some eeny-meeny-miny-moe decision. This is a choice about who you think deserves your personal information, and how targeted you want the marketing in your feeds to be. When presented with a pop-up, here’s what to consider.

Option 1: Ask App Not to Track

This is your hands-off-my-data choice. Tapping this tells the system not to share something you probably never knew you were sharing, called an IDFA—Identifier for Advertisers. For years all iPhones have had this invisible string of numbers used for tracking and identifying you and your activity in and across apps. (Android has something similar.)

Here’s an example of how it works: You download a free, ad-supported sleep app. A few hours later you start seeing ads for adult onesies in your Facebook feed. You also start seeing ads in the sleep app pertaining to other interests of yours—potentially as innocent as dish soap or as personal as fertility treatments. Behind the scenes, the sleep app and Facebook were communicating about you using that identifier. And since most apps use it, the data attached to yours can include the apps you’ve downloaded, your search history, your purchase history, your recent locations and more.

Tapping this option will restrict the app from accessing that tracking number (which your device no longer shares by default), but it also tells that app you don’t want to be tracked using sneakier means. That’s why it says “Ask App Not to Track” rather than “Do Not Track,” Mr. Federighi explained. Apps that might ignore the policy and continue to track through other means could be punished in the App Store, he added. “They might not be able to provide updates or their app could even be removed from the store.” Translation: Follow the rules or get out.

The appeal of this option doesn’t need my explanation: Stop the tracking and the “surveillance capitalism,” as some call it, that’s been happening behind the scenes all these years. Those who prioritize privacy—or just don’t like pop-ups—can opt out of tracking altogether with a universal setting that tells all apps, “No.” On your iPhone go to Settings > Privacy > Tracking. You’ll see “Allow Apps to Request to Track.” Turn it off and apps won’t ask—and they won’t have access to your identifier.
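For the technically curious, the pop-up is driven by a single system call made by the app. Below is a minimal sketch, assuming a plain iOS 14.5 app (my own illustration, not code from Apple or any app named in this column), of how an app requests tracking permission through Apple’s AppTrackingTransparency framework and what it can read back through AdSupport:

```swift
import AppTrackingTransparency
import AdSupport

// The line of small text shown in the ATT pop-up comes from the
// NSUserTrackingUsageDescription entry in the app's Info.plist.
func requestTrackingPermission() {
    ATTrackingManager.requestTrackingAuthorization { status in
        switch status {
        case .authorized:
            // User tapped "Allow": the IDFA is readable.
            let idfa = ASIdentifierManager.shared().advertisingIdentifier
            print("IDFA: \(idfa.uuidString)")
        case .denied, .restricted, .notDetermined:
            // User tapped "Ask App Not to Track", turned tracking off
            // globally, or hasn't answered: the IDFA reads as all zeros.
            print("Tracking not authorized")
        @unknown default:
            break
        }
    }
}
```

The message in the pop-up’s small text is whatever string the developer supplied in that Info.plist entry, which is another reason it’s worth actually reading.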

If you want to stop tracking across all apps, and prevent future pop-ups, go to Settings and turn off “Allow Apps to Request to Track.”

PHOTO: JOANNA STERN/THE WALL STREET JOURNAL

If an app doesn’t have a pop-up, it doesn’t have your identifier and it shouldn’t be tracking and sharing your info with other apps. Apple’s own apps won’t have pop-ups, Mr. Federighi said. Google has also announced that many of its iOS apps will no longer use the IDFA.

Option 2: Allow Tracking

Tap this option and your data flows like the Mississippi—at least among the apps that get your consent. App makers have two opportunities to explain how they will use the data and convince you they’re worthy.

When you get the pop-up, under the question “Allow [app] to track your activity across other companies’ apps and websites?” you’ll see a message from the app maker in small text. Most are short and tend to explain the need to track for “relevant” or “personalized” ads. Still, read them—you may be surprised by what’s said.

Others go a step further. Before you get to that official pop-up, some will show a full screen explaining the benefits of advertising and how they use personal data. Merriam-Webster sure got my attention: “The Collegiate Dictionary and Thesaurus with hundreds of thousands of entries are free, but we couldn’t do that without ads.” That’s one way to pull at the heartstrings of a professional writer. The McDonald’s app offering more ads for “food you love”? Not as compelling.

Before you see the official iOS prompt, apps may show a full screen encouraging you to opt into tracking.

PHOTO: JOANNA STERN/THE WALL STREET JOURNAL

When I asked business owners and execs in the ad industry and social media to explain why people should tap “Allow,” their answers boiled down to the following:

  • You want relevant ads. Many tracking pleas mentioned the days when our social-media feeds were full of pointless ads. “I don’t have a baby. I don’t even like babies! Why are you trying to sell me diapers?” But remember: declining to allow tracking won’t make all ads—or even all targeted ads—go away. There are still ways to deliver targeted ads without this sort of tracking.
  • You want to support small businesses. “As a consumer and mother, I get it. As a business owner, this sucks,” Erin LaCkore, a 35-year-old owner of LaCkore Couture, a small jewelry brand, told me. “There are so many more people I would be able to reach.” Facebook’s ad tools allow her and many other small businesses to carefully target people who would be interested in their products.

“When people go to make this decision, I want them to A) think of their safety but B) what you might have missed out on that you might have loved as a consumer,” she added. (My colleague Christopher Mims explored the impact on small businesses in a recent column.)

  • You want the internet to remain free. Facebook argues this move threatens the ability for apps to remain free and ad-supported. Mr. Federighi said that there was a similar response years back when Apple introduced privacy features in Safari, yet ads still appear on websites viewed in Safari.

Unsurprisingly, the vast majority of people will likely say no to tracking. AppsFlyer is a measurement firm that helps businesses evaluate ad-campaign performance. According to the company’s data, based on the early use of ATT in iOS, the opt-in rate was an average of 26% per app across nearly 550 apps. People are more likely to allow tracking with nongaming apps and brands that they trust.

Whatever you decide, you can always change your mind. In that Tracking section of your Privacy settings, you can adjust your choice for each app.

“People have their own sense of privacy and how important it is to them,” Mr. Federighi said. “So we will all make our personal decisions.” His personal decision? Oh, he’ll be opting out. I plan to do the same for many apps—especially ones that handle my most personal information—but I will consider it case by case, and read each pop-up with care.

Video: Apple vs. Facebook: Why iOS 14.5 Started a Big Tech Fight
A new privacy feature in Apple’s iOS 14.5 requires apps to request permission to track you. And Facebook isn’t happy about it. WSJ’s Joanna Stern put Facebook CEO Mark Zuckerberg and Apple CEO Tim Cook into the ring to explain why this software update has kicked off a tech slugfest. Photo illustration: Preston Jessee for The Wall Street Journal

How This Apple iOS Feature Will Change Your iPhone Forever

Apple’s biggest mid-cycle operating system update ever, iOS 14.5, is due to launch over the next few days, the iPhone maker has confirmed. The iOS 14.5 upgrade includes a barrage of cool new features, but the most outstanding by far is App Tracking Transparency (ATT)—and it will change your iPhone forever.

ATT has ruffled many feathers across the advertising industry because it effectively spells the end of the IDFA (identifier for advertisers), a unique device code that companies use to track your activity across iPhone apps and services. The iOS 14.5 privacy change hurts companies such as Facebook the most, and the social network has been protesting against ATT for months.

What exactly is ATT?

ATT is a feature that requires app makers to ask for your permission to track you across iPhone apps and services. In practice, that means after upgrading to iOS 14.5, you will see a pop-up box that reads: “Allow X to track your activity across other companies’ apps and websites?” You can then choose “Ask App not to Track” or “Allow.”

In iOS 14.5, if you ask the app not to track, it will lose access to the IDFA, the unique device code I mentioned earlier. Apple has also stipulated that app makers must not track iPhone users in other ways using data such as email addresses.
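As a rough sketch of what losing that access looks like from the app’s side (my own illustration under the assumption of a standard iOS 14.5 app, not code from Apple or this article), an ad SDK checking the ATT status would see something like this:

```swift
import AppTrackingTransparency
import AdSupport

// Sketch: what an app (or the ad SDK inside it) sees after your choice.
func currentAdvertisingID() -> String? {
    // Reflects the answer to the ATT pop-up, or the global
    // "Allow Apps to Request to Track" toggle in Settings.
    guard ATTrackingManager.trackingAuthorizationStatus == .authorized else {
        return nil // "Ask App not to Track": no usable IDFA.
    }
    // Without authorization, this identifier would read
    // 00000000-0000-0000-0000-000000000000.
    return ASIdentifierManager.shared().advertisingIdentifier.uuidString
}
```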

Why has Facebook kicked up such a fuss about ATT?

Facebook has been very vocal in its opposition to ATT since the feature was delayed from the initial launch of iOS 14 last year. The social network even took out full-page newspaper ads to criticize Apple’s privacy move, saying it would hurt small businesses the most.

It’s true the iOS 14.5 privacy change will impact small advertisers, but it is the likes of Facebook who will be impacted the most. Unlike Apple, whose business model is based around the hardware and services it sells, Facebook’s is based around advertising. Access to the IDFA has helped data-hungry Facebook demonstrate the effectiveness of ad campaigns. You might see an ad on Facebook, then Google the company’s website and make a purchase. If you allow iPhone IDFA tracking, this data can be collected and used to measure the success of ad campaigns and improve personalized ads.

Facebook says iOS 14.5’s ATT is being used by Apple to push its own business model for profit, at the expense of Facebook’s and others. Indeed, a recent Financial Times report detailed how the iPhone maker is due to dip its own toes back into mobile ads, via an expansion of its App Store ads business. There is also the argument that Apple is trying to force app developers to charge more for things such as in-app purchases and subscriptions, of which the iPhone maker of course takes a cut.

What does ATT mean for me and my iPhone? 

In reality, ATT is good for you and privacy on your iPhone. The reason? Transparency. Even if you choose to allow tracking, at least you have done so with the full knowledge that it is happening. Apple’s iOS 14.5 is game-changing for mobile advertising more widely, too. It’s thought Google’s Android will bring in something similar, which ultimately would see internet advertising changed, for the better, forever. So the implications of ATT are great for the privacy of iPhone users, and internet and smartphone users more broadly, too.

Privacy experts approve of Apple’s iOS 14.5 move. Sean Wright, SME application security lead at Immersive Labs, says ATT’s “a good move by Apple.” As well as making things more transparent to users, he hopes it will force app developers “to seriously consider all the data they are attempting to collect, and if they really require it.”

How do I use ATT?

Once you’ve downloaded iOS 14.5, which is coming at some point during the next week, using ATT is easy. You simply wait for the pop-up to appear in each app you use and allow, or don’t allow, tracking on a per-app basis.

Another useful tip: you can also go to your settings in iOS 14.5 and turn off tracking altogether. Just go to Settings > Privacy > Tracking > Allow Apps to Request to Track. This will be automatically toggled to “on,” but you can toggle off the ability to track altogether here. That will stop a potentially annoying pop-up appearing in each iOS app you open. You can also control the apps you have allowed to track here, if you want to turn them off, or enable them to track you.

Is there anything else I need to know?

The iOS 14.5 move is massive for iPhone privacy, but you need to be aware that apps do still collect your data. Apple’s privacy labels made that clear—they were a stark reminder that Facebook-owned WhatsApp collects vast amounts of information, way more than its rivals. There is a decision you make when you use free apps and services, and that’s whether to give them your data. If you are not paying for the product, you are the product, after all. At the same time, Apple does say ATT applies to its own apps, and we will hopefully see this in action in iOS 14.5.

Experts have pointed out that, like cookie notices, the pop-up to allow tracking may get annoying, so it’s important not to just tap “Allow” in a bid to speed things up. If you don’t want tracking at all, you can toggle it off in the settings as I described. Jake Moore, cybersecurity specialist at ESET, says: “ATT should not be ignored and viewed as yet another pop up which gently forces you to agree and accept it. This is a perfect time to allow people to reflect on their personal data and what the large corporations are doing with it. Companies such as Facebook heavily rely on iPhone users to consent to data sharing and such intrusion shouldn’t be taken lightly.”

Should I turn iPhone IDFA tracking off for all apps?

iOS 14.5’s ATT really is an outstanding new feature, and to track, or not to track, is the key question here. If you care about privacy on your iPhone, and you are uncomfortable about the data being collected about you online, ATT now gives you the means to turn that off. In iOS 14.5, the choice, as they say, is yours—and that’s the truly important thing.

Source: https://www.forbes.com/sites/kateoflahertyuk/2021/04/24/ios-145-how-this-outstanding-new-feature-will-change-your-iphone-forever/

Signal Founder May Have Been More Than a Tech Adviser to MobileCoin

  • Signal founder Moxie Marlinspike, whom MobileCoin previously described as a technical adviser, may have been more deeply involved in the cryptocurrency project.

  • An earlier, nearly identical white paper found online, which MobileCoin CEO Joshua Goldbard called “erroneous,” lists Marlinspike as the project’s original CTO.

Moxie Marlinspike, the founder and CEO of encrypted messaging app Signal, may have been the original CTO of MobileCoin, a cryptocurrency that Signal recently integrated for in-app payments, early versions of MobileCoin technical documents suggest.

MobileCoin CEO Joshua Goldbard told CoinDesk this 2017 white paper is “not something [he] or anyone at MobileCoin wrote,” though it is very nearly a verbatim precursor to MobileCoin’s current white paper. Additionally, snapshots of MobileCoin’s homepage from Dec. 18, 2017, until April 2018, list Marlinspike as one of three members of “The Team,” though his title is not given there. He is not listed as an adviser until May 2018.

The team for the self-described privacy coin has always acknowledged Marlinspike as an adviser to the project, but neither the team nor Marlinspike has ever disclosed direct involvement through an in-house role, much less one as involved as Chief Technical Officer.

If Marlinspike actually was involved as CTO in MobileCoin’s early days, the recent Signal integration raises questions about MobileCoin’s motivation for associating itself with the renowned cryptographer, along with his own motive for aligning with the project, given that the MOB team has historically downplayed this involvement.

“Signal sold out their user base by creating and marketing a cryptocurrency based solely on their ability to sell the future tokens to a captive audience,” said Bitcoin Core developer Matt Corallo, who also used to contribute to Signal’s open-source software.

A screenshot of MobileCoin’s website frontpage on Dec. 18, 2017. Marlinspike is listed as a team member until May 2018.
(Wayback Machine)

Goldbard shared another document dated Nov. 13, 2017 (the same date as the other white paper), which does not list a team for the project. He claimed that this white paper was the authentic one and the other was not.

“Moxie was never CTO. A white paper we never wrote was erroneously linked to in our new book, ‘The Mechanics of MobileCoin.’ That erroneous white paper listed Moxie as CTO and, again, we never wrote that paper and Moxie was never CTO,” Goldbard told CoinDesk.

This book is actually the most recent “comprehensive, conceptual (and technical) exploration of the cryptocurrency MobileCoin” posted on the MobileCoin Foundation GitHub; Goldbard describes it as the project’s “source of truth,” and it serves as the most up-to-date technical documentation for the project.

This “real” version of the paper is nearly identical to the “erroneous” white paper, except there is no mention of team members or MobileCoin’s pre-sale details. (Both white papers and current MobileCoin technical documents are embedded at the end of this article for reference.)

Goldbard said the “erroneous” white paper was accidentally added as a footnote to this latest collection of technical documents compiled by Koe, a pseudonymous cryptographer who recently joined MobileCoin’s team. That footnote also lists Marlinspike as a co-author of the paper along with Goldbard.

“He just googled it, like everyone on the internet seems to be doing today, and put [it in] as a footnote. It was an oversight. I did not notice it in my review of the book prior to publishing,” Goldbard told CoinDesk.

A metadata analysis of the papers conducted by CoinDesk shows that the “erroneous” paper was generated on Dec. 9, 2017, while the “real” paper was generated two days later.

A metadata analysis of MobileCoin’s disputed white paper.
(Colin Harper)
A metadata analysis of MobileCoin’s “real” white paper.
(Colin Harper)
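Such a check is easy to reproduce. As a minimal sketch of the general technique (my own illustration, not CoinDesk’s actual tooling, and the file name is hypothetical), Apple’s PDFKit framework can read the creation timestamp that PDF generators embed in a document:

```swift
import Foundation
import PDFKit

// Sketch: read the creation timestamp embedded in a PDF's metadata.
// "whitepaper.pdf" is a hypothetical file name.
let url = URL(fileURLWithPath: "whitepaper.pdf")
if let document = PDFDocument(url: url),
   let attributes = document.documentAttributes,
   let created = attributes[PDFDocumentAttribute.creationDateAttribute] as? Date {
    // PDF generators stamp this field when the file is produced, which is
    // how generation dates like Dec. 9 vs. Dec. 11, 2017 can be compared.
    print("Created: \(created)")
}
```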

Marlinspike declined to comment on the record about his professional relationship with MobileCoin.

A tale of two papers

In a December 2017 Wired article titled “The Creator of Signal Has a Plan to Fix Cryptocurrency,” Marlinspike went on the record as a “technical adviser,” a title CoinDesk has also used to describe his relationship with MobileCoin in the past.

“There are lots of potential applications for MobileCoin, but Goldbard and Marlinspike envision it first as an integration in chat apps like Signal or WhatsApp,” the article reads. 

It also states that “Marlinspike first experimented with [Software Guard Extensions (SGX)] for Signal.” These special (and expensive) Intel SGX chips create a “secure enclave” within a device to protect software, and MobileCoin validators require them to function (validators, as in other permissioned databases, are chosen by the foundation behind MobileCoin).

In the 2017 white paper that Goldbard disavows, Marlinspike is listed under the “team” section as CTO, with experience including being “the lead developer of Open Whisper Systems, [meaning] Moxie is responsible for the entirety of Signal,” which had just over 10 million users at the time. This same white paper describes MobileCoin’s Goldbard as a “high school dropout who thinks deeply about narratives and information systems.”

Signal’s code has historically been open source, though this changed about a year ago; code for the MobileCoin integration was added in Signal’s last beta. The nonprofit, which has five full-time employees, subsists largely on donations and has no clear revenue model, though WhatsApp co-founder Brian Acton injected $50 million into the app in 2018. A 2018 tax filing shows revenue of just over $600,000 for the fiscal year and over $100,000,000 in assets and $105,000,000 in liabilities.

MobileCoin supply and other details

The disavowed white paper also shows details of MobileCoin’s proposed distribution, which the paper says included selling 37.5 million MOB tokens (out of a 250 million supply) in a private presale at a price of $0.80 each for a total of $30 million. 

Indeed, in the spring of 2018, MOB raised $30 million from crypto exchange Binance and others in such a private presale, TechCrunch’s Taylor Hatmaker reported. Goldbard referred to the TechCrunch article when discussing MobileCoin’s financing with CoinDesk.

In a MobileCoin forum on Jan. 8, one user asked for details about MOB’s circulating supply.

“Supply: 250mill MOB; Circulating supply: impossible to know (‘circulating’ is pretty hard to define anyway),” Koe responded. MobileCoin does not currently have online tools such as a blockchain explorer to search the network for data.

One user chimed in to say that because all 250 million MOB were generated from a “premine,” or creation of maximum supply before launch, there’s no way for users to earn them through staking or mining.

“I suppose you could request donations,” Koe replied. 


MobileCoin’s consensus model copies Stellar’s, meaning only MobileCoin Foundation-approved nodes, which must run on a machine that uses the aforementioned Intel SGX chips, can partake in consensus. The white paper makes no references to rewards or payouts to validators from MOB supply.

MobileCoin Token Services, an affiliate of the MobileCoin Foundation, is currently selling MOB (presumably the remaining coins that did not sell in the presale) to non-U.S. investors by taking orders over email. 

MOB, for now, trades on FTX and Bitfinex, two popular crypto exchanges, and a few smaller venues.

When the coin began trading in January, it first listed for around $5. Now, it’s worth about $55 (which, assuming a supply of 250 million MOB, works out to a market cap of roughly $13.75 billion, about the same as Chainlink or Litecoin, the 10th and 9th most valuable cryptoassets by market cap). The coin clocked over $15 million in volume over the past 24 hours between FTX and Bitfinex, according to exchange data.

Speaking to the coin’s design, the founder of privacy coin monero (XMR), Riccardo Spagni, claimed that MobileCoin uses the privacy building blocks of his project’s source code for its own design without giving credit.

Who is Moxie Marlinspike?

Something of a legend in cryptography circles, Marlinspike began working on Signal in 2014 after founding Open Whisper Systems in 2013. Before this, he served as Twitter’s head of security after his 2010 startup, Whisper Systems, was acquired by the social network in 2011.

His only on-the-record professional relationship with MobileCoin comes from his technical advisory role, which he took on in late 2017 at the height of bitcoin’s last bull market and its accompanying initial coin offering bubble. 

Reporting on the project in 2019, the New York Times’ Nathaniel Popper and Mike Isaac originally wrote that “Signal … has its own coin in the works” before amending the article to clarify that “MobileCoin will work with Signal, but it is being developed independently of Signal.” The correction seems to typify the shifting narrative of Marlinspike’s and MOB’s relationship across various records. (Wired’s 2017 coverage, for example, says that “The Creator of Signal Has a Plan to Fix Cryptocurrency.”)

“I think usability is the biggest challenge with cryptocurrency today,” Marlinspike told Wired in the December 2017 article. “The innovations I want to see are ones that make cryptocurrency deployable in normal environments, without sacrificing the properties that distinguish cryptocurrency from existing payment mechanisms.”

Signal’s own users are less convinced.

The app’s Reddit page is plastered with submissions complaining about the decision to add MOB, with many confused as to why Signal would integrate a coin in the first place, let alone one that isn’t very well known (and which only went live this year).

“Using your messenger service to sit on the blockchain hype for no good reason, bloat a clean messenger app and introduce privacy concerns was more than unnecessary,” one post reads.

Perhaps summing up the sense of betrayal the Signal community feels, one post simply reads, “Et tu Signal?”

Speaking on Moxie’s involvement and the app’s decision to add MOB, Anderson Kill partner Stephen Palley said, “I can’t speak to the discrepancy between investor materials and what you’re being told, but I don’t necessarily judge them for wanting to make a buck after years of providing great open-source software basically for free.”

Signal first out the gate (but tripping)

Other messaging apps like Telegram and Kik have tried and failed to launch in-app cryptocurrency payments by rolling their own coins. Both attempts were promptly quashed by regulators. Encrypted messaging app Keybase was the first messaging app to add cryptocurrency payments when it integrated Stellar’s XLM in 2018.

Given Facebook’s ownership of WhatsApp, its involvement in the Libra coin project (now known as Diem) may be seen as a similar attempt.

Oddly, given those failures, Signal’s addition of MobileCoin is the first instance of a messaging app actually pulling off a crypto integration built around a new coin.

The question now is how many of Signal’s 50 million users, many of whom aren’t crypto enthusiasts, will use it.

Read the official and disputed MobileCoin white papers below:

https://www.scribd.com/embeds/502074292/content

https://www.scribd.com/embeds/502074632/content

https://www.scribd.com/embeds/502244393/content

Source: https://www.coindesk.com/signal-founder-may-have-been-more-than-tech-adviser-mobilecoin