Temu, the Chinese e-commerce platform, offers products at remarkably low prices, which raises concerns about its business practices. One significant issue is the undervaluation of parcels entering the EU. Estimates suggest that around 65% of parcels are deliberately undervalued in customs declarations to avoid tariffs, which undermines local businesses and creates an uneven playing field [1]. Additionally, Temu employs a direct-to-consumer model, sourcing products directly from manufacturers in China, which allows it to benefit from bulk discounts and reduced shipping costs [2].
Benefits for the Chinese State
The low pricing strategy of Temu serves multiple purposes for the Chinese state. Firstly, it helps expand China’s influence in global e-commerce by increasing the market share of Chinese companies abroad. This can lead to greater economic ties and dependency on Chinese goods. Secondly, by facilitating the export of low-cost products, Temu contributes to the Chinese economy by boosting manufacturing and logistics sectors. Lastly, the data collected from users can be leveraged for insights into consumer behavior, which may benefit Chinese businesses and potentially the state itself in terms of economic planning and strategy [1].
Overall, while Temu’s low prices attract consumers, they also raise significant regulatory and ethical concerns in Europe, prompting scrutiny from authorities regarding compliance with local laws and standards.
Deeper Analysis of Future Benefits for the Chinese State
Temu’s aggressive pricing strategy in Europe not only serves immediate commercial interests but also aligns with broader strategic goals of the Chinese state. Here are several potential future benefits for China:
Economic Expansion and Market Penetration: By establishing a strong foothold in European markets through low prices, Temu can facilitate the expansion of Chinese goods into new territories. This not only increases sales volume but also enhances brand recognition and loyalty among European consumers. As more consumers become accustomed to purchasing Chinese products, it could lead to a long-term shift in buying habits, favoring Chinese brands over local alternatives.
Strengthening Supply Chains: Temu’s model emphasizes direct sourcing from manufacturers, which can help streamline supply chains. This efficiency can be replicated across various sectors, allowing China to become a dominant player in global supply chains. By controlling more aspects of production and distribution, China can mitigate risks associated with international trade tensions and disruptions, ensuring a more resilient economic structure.
Data Collection and Consumer Insights: The platform’s operations will generate vast amounts of consumer data, which can be analyzed to gain insights into European consumer behavior. This data can inform not only marketing strategies but also product development, allowing Chinese manufacturers to tailor their offerings to meet the specific preferences of European consumers. Such insights can enhance competitiveness and drive innovation within Chinese industries.
Geopolitical Influence: By increasing its economic presence in Europe, China can leverage its commercial relationships to enhance its geopolitical influence. Economic ties often translate into political goodwill, which can be beneficial in negotiations on various fronts, including trade agreements and international policies. This strategy aligns with China’s broader goal of expanding its influence globally, as outlined in its recent political resolutions emphasizing the importance of state power and common prosperity.
Promotion of Technological Advancements: As Temu grows, it may invest in technology to improve logistics, customer service, and user experience. This could lead to advancements in e-commerce technologies that can be exported back to China, enhancing domestic capabilities. Moreover, the emphasis on technology aligns with China’s ambitions to become a leader in areas such as artificial intelligence and data analytics, as highlighted in its national strategies.
Cultural Exchange and Soft Power: By making Chinese products more accessible and appealing to European consumers, Temu can facilitate a form of cultural exchange. As consumers engage with Chinese brands, they may also become more receptive to Chinese culture and values, enhancing China’s soft power. This cultural integration can help counter negative perceptions and foster a more favorable view of China in the long term.
In conclusion, Temu’s low pricing strategy is not merely a tactic for market entry; it is a multifaceted approach that can yield significant long-term benefits for the Chinese state. By enhancing economic ties, gathering valuable consumer data, and promoting technological advancements, China positions itself to strengthen its global influence and economic resilience in an increasingly competitive landscape.
Arati Prabhakar has the ear of the US president and a massive mission: help manage AI, revive the semiconductor industry, and pull off a cancer moonshot.
One day in March 2023, Arati Prabhakar brought a laptop into the Oval Office and showed the future to Joe Biden. Six months later, the president issued a sweeping executive order that set a regulatory course for AI.
This all happened because ChatGPT had stunned the world. In an instant it became very, very obvious that the United States needed to speed up its efforts to regulate the AI industry—and adopt policies to take advantage of it. While the potential benefits were unlimited (Social Security customer service that works!), so were the potential downsides, like floods of disinformation or even, in the view of some, human extinction. Someone had to demonstrate that to the president.
The job fell to Prabhakar, because she is the director of the White House Office of Science and Technology Policy and holds cabinet status as the president’s chief science and technology adviser; she’d already been methodically educating top officials about the transformative power of AI. But she also has the experience and bureaucratic savvy to make an impact with the most powerful person in the world.
Born in India and raised in Texas, Prabhakar has a PhD in applied physics from Caltech and previously ran two US agencies: the National Institute of Standards and Technology and the Defense Advanced Research Projects Agency (Darpa). She also spent 15 years in Silicon Valley as a venture capitalist, including as president of Interval Research, Paul Allen’s legendary tech incubator, and has served as vice president or chief technology officer at several companies.
Prabhakar assumed her current job in October 2022—just in time to have AI dominate the agenda—and helped to push out that 20,000-word executive order, which mandates safety standards, boosts innovation, promotes AI in government and education, and even tries to mitigate job losses. She replaced biologist Eric Lander, who had resigned after an investigation concluded that he ran a toxic workplace. Prabhakar is the first person of color and first woman to be appointed director of the office.
We spoke at the kitchen table of Prabhakar’s Silicon Valley condo—a simply decorated space that, if my recollection is correct, is very unlike the OSTP offices in the ghostly, intimidating Eisenhower Executive Office Building in DC. Happily, the California vibes prevailed, and our conversation felt unintimidating, even relaxed. We talked about how Bruce Springsteen figured into Biden’s first ChatGPT demo, her hopes for a semiconductor renaissance in the US, and why Biden’s war on cancer is different from every other president’s war on cancer. I also asked her about the status of the unfilled role of chief technology officer for the nation—a single person, ideally kind of geeky, whose entire job revolves around the technology issues driving the 21st century.
Steven Levy: Why did you sign up for this job?
Arati Prabhakar: Because President Biden asked. He sees science and technology as enabling us to do big things, which is exactly how I think about their purpose.
What kinds of big things?
The mission of OSTP is to advance the entire science and technology ecosystem. We have a system that follows a set of priorities. We spend an enormous amount on R&D in health. But both public and corporate funding are largely focused on pharmaceuticals and medical devices, and very little on prevention or clinical care practices—the things that could change health as opposed to dealing with disease. We also have to meet the climate crisis. For technologies like clean energy, we don’t do a great job of getting things out of research and turning them into impact for Americans. It’s the unfinished business of this country.
It’s almost predestined that you’d be in this job. As soon as you got your physics degree at Caltech, you went to DC and got enmeshed in policy.
Yeah, I left the track I was supposed to be on. My family came here from India when I was 3, and I was raised in a household where my mom started sentences with, “When you get your PhD and become an academic …” It wasn’t a joke. Caltech, especially when I finished my degree in 1984, was extremely ivory tower, a place of worship for science. I learned a tremendous amount, but I also learned that my joy did not come from being in a lab at 2 in the morning and having that eureka moment. Just on a lark, I came to Washington for, quote-unquote, one year on a congressional fellowship. The big change was in 1986, when I went to Darpa as a young program manager. The mission of the organization was to use science and technology to change the arc of the future. I had found my home.
How did you wind up at Darpa?
I had written a study on microelectronics R&D. We were just starting to figure out that the semiconductor industry wasn’t always going to be dominated by the US. We worked on a bunch of stuff that didn’t pan out but also laid the groundwork for things that did. I was there for seven years, left for 19, and came back as director. Two decades later the portfolio was quite different, as it should be. I got to christen the first self-driving ship that could leave a port and navigate across open oceans without a single sailor on board. The other classic Darpa thing is to figure out what might be the foundation for new capabilities. I ended up starting a Biological Technologies Office. One of the many things that came out of that was the rapid development and distribution of mRNA vaccines, which never would have happened without the Darpa investment.
One difference today is that tech giants are doing a lot of their own R&D, though not necessarily for the big leaps Darpa was built for.
Every developed economy has this pattern. First there’s public investment in R&D. That’s part of how you germinate new industries and boost your economy. As those industries grow, so does their investment in R&D, and that ends up being dominant. There was a time when it was sort of 50-50 public-private. Now it’s much more private investment. For Darpa, of course, the mission is breakthrough technologies and capabilities for national security.
Are you worried about that shift?
It’s not a competition! Absolutely there’s been a huge shift. That private tech companies are building the leading-edge LLMs today has huge implications. It’s a tremendous American advantage, but it has implications for how the technology is developed and used. We have to make sure we get what we need for public purposes.
Is the US government investing enough to make that happen?
I don’t think we are. We need to increase the funding. One component of the AI executive order is a National AI Research Resource. Researchers don’t have the access to data and computation that companies have. An initiative that Congress is considering, that the administration is very supportive of, would place something like $3 billion of resources with the National Science Foundation.
That’s a tiny percentage of the funds going into a company like OpenAI.
It costs a lot to build these leading-edge models. The question is, how do we have governance of advanced AI and how do we make sure we can use it for public purposes? The government has got to do more. We need help from Congress. But we also have to chart a different kind of relationship with industry than we’ve had in the past.
What might that look like?
Look at semiconductor manufacturing and the CHIPS Act.
We’ll get to that later. First let’s talk about the president. How deep is his understanding of things like AI?
Some of the most fun I’ve gotten on the job was working with the president and helping him understand where the technology is, like when we got to do the chatbot demonstrations for the president in the Oval Office.
What was that like?
Using a laptop with ChatGPT, we picked a topic that was of particular interest. The president had just been at a ceremony where he gave Bruce Springsteen the National Medal of Arts. He had joked about how Springsteen was from New Jersey, just across the river from his state, Delaware, and then he made reference to a lawsuit between those two states. I had never heard of it. We thought it would be fun to make use of this legal case. For the first prompt, we asked ChatGPT to explain the case to a first grader. Immediately these words start coming out like, “OK, kiddo, let me tell you, if you had a fight with someone …” Then we asked the bot to write a legal brief for a Supreme Court case. And out comes this very formal legal analysis. Then we asked it to write a song in the style of Bruce Springsteen about the case. We also did image demonstrations. We generated one of his dog Commander sitting behind the Resolute desk in the Oval Office.
So what was the president’s reaction?
He was like, “Wow, I can’t believe it could do that.” It wasn’t the first time he was aware of AI, but it gave him direct experience. It allowed us to dive into what was really going on. It seems like a crazy magical thing, but you need to get under the hood and understand that these models are computer systems that people train on data and then use to make startlingly good statistical predictions.
There are a ton of issues covered in the executive order. Which are the ones that you sense engaged the president most after he saw the demo?
The main thing that changed in that period was his sense of urgency. The task that he put out for all of us was to manage the risks so that we can see the benefits. We deliberately took the approach of dealing with a broad set of categories. That’s why you saw an extremely broad, bulky, large executive order. The risks to the integrity of information from deception and fraud, risks to safety and security, risks to civil rights and civil liberties, discrimination and privacy issues, and then risks to workers and the economy and IP—they’re all going to manifest in different ways for different people over different timelines. Sometimes we have laws that already address those risks—turns out it’s illegal to commit fraud! But other things, like the IP questions, don’t have clean answers.
There are a lot of provisions in the order that must meet set deadlines. How are you doing on those?
They are being met. We just rolled out all the 90-day milestones that were met. One part of the order I’m really getting a kick out of is the AI Council, which includes cabinet secretaries and heads of various regulatory agencies. When they come together, it’s not like most senior meetings where all the work has been done. These are meetings with rich discussion, where people engage with enthusiasm, because they know that we’ve got to get AI right.
There’s a fear that the technology will be concentrated among a few big companies. Microsoft essentially subsumed one leading startup, Inflection. Are you concerned about this centralization?
Competition is absolutely part of this discussion. The executive order talks specifically about that. One of the many dimensions of this issue is the extent to which power will reside only with those who are able to build these massive models.
The order calls for AI technology to embody equity and not include biases. A lot of people in DC are devoted to fighting diversity mandates. Others are uncomfortable with the government determining what constitutes bias. How does the government legally and morally put its finger on the scale?
Here’s what we’re doing. The president signed the executive order at the end of October. A couple of days later, the Office of Management and Budget came out with a memo—a draft of guidance about how all of government will use AI. Now we’re in the deep, wonky part, but this is where the rubber meets the road. It’s that guidance that will build in processes to make sure that when the government uses AI tools it’s not embedding bias.
That’s the strategy? You won’t mandate rules for the private sector but will impose them on the government, and because the government is such a big customer, companies will adopt them for everyone?
That can be helpful for setting a way that things work broadly. But there are also laws and regulations in place that ban discrimination in employment and lending decisions. So you can feel free to use AI, but it doesn’t get you off the hook.
Some in Silicon Valley argue that if you’re slowing down the progress of AI, you are the equivalent of a murderer, because going forward without restraints will save lives.
That’s such an oversimplified view of the world. All of human history tells us that powerful technologies get used for good and for ill. The reason I love what I’ve gotten to do across four or five decades now is because I see over and over again that after a lot of work we end up making forward progress. That doesn’t happen automatically because of some cool new technology. It happens because of a lot of very human choices about how we use it, how we don’t use it, how we make sure people have access to it, and how we manage the downsides.
How are you encouraging the use of AI in government?
Right now AI is being used in government in more modest ways. Veterans Affairs is using it to get feedback from veterans to improve their services. The Social Security Administration is using it to accelerate the processing of disability claims.
Those are older programs. What’s next? Government bureaucrats spend a lot of time drafting documents. Will AI be part of that process?
That’s one place where you can see generative AI being used. Like in a corporation, we have to sort out how to use it responsibly, to make sure that sensitive data aren’t being leaked, and also that it’s not embedding bias. One of the things I’m really excited about in the executive order is an AI talent surge, saying to people who are experts in AI, “If you want to move the world, this is a great time to bring your skills to the government.” We published that on AI.gov.
How far along are you in that process?
We’re in the matchmaking process. We have great people coming in.
OK, let’s turn to the CHIPS Act, which is the Biden administration’s centerpiece for reviving the semiconductor industry in the US. The legislation provides more than $50 billion to grow the US-based chip industry, but it was designed to spur even more private investment, right?
That story starts decades ago with US dominance in semiconductor manufacturing. Over a few decades the industry got globalized, then it got very dangerously concentrated in one geopolitically fragile part of the world. A year and a half ago the president got Congress to act on a bipartisan basis, and we are crafting a completely different way to work with the semiconductor industry in the US.
Different in what sense?
It won’t work if the government goes off and builds its own fabs. So our partnership is one where companies decide which products are the right ones to build and where to build them, and government incentives come on the basis of that. It’s the first time the US has done that with this industry, but it’s how it was done elsewhere around the world.
Some people say it’s a fantasy to think we can return to the day when the US had a significant share of chip and electronics manufacturing. Obviously, you feel differently.
We’re not trying to turn the clock back to the 1980s and saying, “Bring everything to the US.” Our strategy is to make sure that we have the robustness we need for the US and to make sure we’re meeting our national security needs.
The biggest grant recipient was Intel, which got $8.5 billion. Its CEO, Pat Gelsinger, said that the CHIPS Act wasn’t enough to make the US competitive, and we’d need a CHIPS 2. Is he right?
I don’t think anyone knows the answer yet. There’s so many factors. The job right now is to build the fabs.
As the former head of Darpa, you were part of the military establishment. How do you view the sentiment among employees of some companies, like Google, that they should not take on military contracts?
It’s great for people in companies to be asking hard questions about how their work is used. I respect that. My personal view is that our national security is essential for all of us. Here in Silicon Valley, we completely take for granted that you get up every morning and try to build and fund businesses. That doesn’t happen by accident. It’s shaped by the work that we do in national security.
Your office is spearheading what the president calls a Cancer Moonshot. It seems every president in my lifetime had some project to cure cancer. I remember President Nixon talking about a war on cancer. Why should we believe this one?
We’ve made real progress. The president and the first lady set two goals. One is to cut the age-adjusted cancer death rate in half over 25 years. The other is to change the experience of people going through cancer. We’ve come to understand that cancer is a very complex disease with many different aspects. American health outcomes are not acceptable for the most wealthy country in the world. When I spoke to Danielle Carnival, who leads the Cancer Moonshot for us—she worked for the vice president in the Obama administration—I said to her, “I’m trying to figure out if you’re going to write a bunch of nice research papers or you’re gonna move the needle on cancer.” She talked about new therapies but also critically important work to expand access to early screening, because if you catch some of them early, it changes the whole story. When I heard that I said, “Good, we’re actually going to move the needle.”
Don’t you think there’s a hostility to science in much of the population?
People are more skeptical about everything. I do think that there has been a shift that is specific to some hot-button issues, like climate and vaccines or other infectious disease measures. Scientists want to explain more, but they should be humble. I don’t think it’s very effective to treat science as a religion. In year two of the pandemic, people kept saying that the guidance keeps changing, and all I could think was, “Of course the guidance is changing, our understanding is changing.” The moment called for a little humility from the research community rather than saying, “We’re the know-it-alls.”
Is it awkward to be in charge of science policy at a time when many people don’t believe in empiricism?
I don’t think it’s as extreme as that. People have always made choices not just based on hard facts but also on the factors in their lives and the network of thought that they are enmeshed in. We have to accept that people are complex.
Part of your job is to hire and oversee the nation’s chief technology officer. But we don’t have one. Why not?
That had already been a long endeavor when I came on board. That’s been a huge challenge. It’s very difficult to recruit, because those working in tech almost always have financial entanglements.
I find it hard to believe that in a country full of great talent there isn’t someone qualified for that job who doesn’t own stock or who couldn’t divest their holdings. Is this just a low priority for you?
We spent a lot of time working on that and haven’t succeeded.
Are we going to go through the whole term without a CTO?
I have no predictions. I’ve got nothing more than that.
There are only a few months left in the current term of this administration. President Biden has given your role cabinet status. Have science and technology found their appropriate influence in government?
Yes, I see it very clearly. Look at some of the biggest changes—for example, the first really meaningful advances on climate, deploying solutions at a scale that the climate actually notices. I see these changes in every area and I’m delighted.
Microsoft’s Recall technology, an AI feature that continuously captures snapshots of a user’s screen so that past activity can be searched and retrieved, bears resemblance to George Orwell’s “1984” dystopia in several key aspects:
1. Surveillance and Data Collection:
– 1984: The Party constantly monitors citizens through telescreens and other surveillance methods, ensuring that every action, word, and even thought aligns with the Party’s ideology.
– Recall Technology: While intended for productivity, Recall captures and indexes snapshots of everything displayed on screen, including personal data, emails, and other communications. This level of data collection raises concerns about privacy and the potential for misuse or unauthorized access to personal information.
2. Memory and Thought Control:
– 1984: The Party manipulates historical records and uses propaganda to control citizens’ memories and perceptions of reality, essentially rewriting history to fit its narrative.
– Recall Technology: By determining what information is surfaced and how past activity is presented, Recall could influence users’ focus and priorities. This selective emphasis on certain data could subtly shape users’ perceptions and decisions, akin to a form of soft memory control.
3. Dependence on Technology:
– 1984: The populace is heavily reliant on the Party’s technology for information, entertainment, and even personal relationships, which are monitored and controlled by the state.
– Recall Technology: Users might become increasingly dependent on Recall to retrieve and manage information, potentially diminishing their own capacity to remember and prioritize tasks independently. This dependence creates a vulnerability in which the technology holds significant sway over daily life.
4. Loss of Personal Autonomy:
– 1984: Individual autonomy is obliterated as the Party dictates all aspects of life, from public behavior to private thoughts.
– Recall Technology: Although far less extreme, the automation and AI-driven suggestions in Recall could erode personal decision-making over time. As users rely more on technology to dictate their actions and reminders, their sense of personal control and autonomy may diminish.
5. Potential for Abuse:
– 1984: The totalitarian regime abuses its power to maintain control over the population, using technology as a tool of oppression.
– Recall Technology: In a worst-case scenario, the data Recall collects could be exploited by malicious actors or for unethical purposes. If misused by corporations or governments, users’ personal information could be leveraged against them, echoing the coercive control seen in Orwell’s dystopia.
While Microsoft’s Recall technology is designed with productivity in mind, its potential implications for privacy, autonomy, and the influence over personal information draw unsettling parallels to the controlled and monitored society depicted in “1984.”
Natural Language Interaction: GPT-4o’s advanced natural language processing capabilities allow for seamless, conversational interaction between the driver and the vehicle. This makes controlling the vehicle and accessing information more intuitive and user-friendly.
Personalized Experience: The AI can learn from individual driver behaviors and preferences, offering tailored suggestions for routes, entertainment, climate settings, and more, enhancing overall user satisfaction and engagement.
Enhanced Autonomous Driving and Safety:
Superior Decision-Making: GPT-4o can significantly enhance Tesla’s autonomous driving capabilities by processing and analyzing vast amounts of real-time data to make better driving decisions. This improves the safety, reliability, and efficiency of the vehicle’s self-driving features.
Proactive Safety Features: The AI can provide real-time monitoring of the vehicle’s surroundings and driver behavior, offering proactive alerts and interventions to prevent accidents and ensure passenger safety.
Next-Level Infotainment and Connectivity:
Smart Infotainment System: With GPT-4o, the SUV’s infotainment system can offer highly intelligent and personalized content recommendations, including music, podcasts, audiobooks, and more, making long journeys more enjoyable.
Seamless Connectivity: The AI can integrate with a wide range of apps and services, enabling drivers to manage their schedules, communicate, and access information without distraction, thus enhancing productivity and convenience.
Continuous Improvement and Future-Proofing:
Self-Learning Capabilities: GPT-4o continuously learns and adapts from user interactions and external data, ensuring that the vehicle’s performance and features improve over time. This results in an ever-evolving user experience that keeps getting better.
Over-the-Air Updates: Regular over-the-air updates from OpenAI ensure that the SUV remains at the forefront of technology, with the latest features, security enhancements, and improvements being seamlessly integrated.
Market Differentiation and Brand Leadership:
Innovative Edge: Integrating GPT-4o positions Tesla’s new SUV as a cutting-edge vehicle, showcasing the latest in AI and automotive technology. This differentiates Tesla from competitors and strengthens its reputation as a leader in innovation.
Enhanced Customer Engagement: The unique AI-driven features and personalized experiences can drive stronger customer engagement and loyalty, attracting tech-savvy consumers and enhancing the overall brand image.
By leveraging these advantages, Tesla can create a groundbreaking SUV that not only meets but exceeds consumer expectations, setting new standards for the automotive industry and reinforcing Tesla’s position as a pioneer in automotive and AI technology.
The integration of advanced AI like OpenAI’s GPT-4o into Apple’s Vision Pro + Version 2 can significantly enhance its vision understanding capabilities. Here are ten possible use cases:
1. Augmented Reality (AR) Applications:
– Interactive AR Experiences: Enhance AR applications by providing real-time object recognition and interaction. For example, users can point the device at a historical landmark and receive detailed information and interactive visuals about it.
– AR Navigation: Offer real-time navigation assistance in complex environments like malls or airports, overlaying directions onto the user’s view.
2. Enhanced Photography and Videography:
– Intelligent Scene Recognition: Automatically adjust camera settings based on the scene being captured, such as landscapes, portraits, or low-light environments, ensuring optimal photo and video quality.
– Content Creation Assistance: Provide suggestions and enhancements for capturing creative content, such as framing tips, real-time filters, and effects.
3. Healthcare and Medical Diagnosis:
– Medical Imaging Analysis: Assist in analyzing medical images (e.g., X-rays, MRIs) to identify potential issues, providing preliminary diagnostic support to healthcare professionals.
– Remote Health Monitoring: Enable remote health monitoring by analyzing visual data from wearable devices to track health metrics and detect anomalies.
4. Retail and Shopping:
– Virtual Try-Ons: Allow users to virtually try on clothing, accessories, or cosmetics using the device’s camera, enhancing the online shopping experience.
– Product Recognition: Identify products in stores and provide information, reviews, and price comparisons, helping users make informed purchasing decisions.
5. Security and Surveillance:
– Facial Recognition: Enhance security systems with facial recognition capabilities for authorized access and threat detection.
– Anomaly Detection: Monitor and analyze security footage to detect unusual activities or potential security threats in real time.
6. Education and Training:
– Interactive Learning: Use vision understanding to create interactive educational experiences, such as identifying objects or animals in educational content and providing detailed explanations.
– Skill Training: Offer real-time feedback and guidance for skills training, such as in sports or technical tasks, by analyzing movements and techniques.
7. Accessibility and Assistive Technology:
– Object Recognition for the Visually Impaired: Help visually impaired users navigate their surroundings by identifying objects and providing auditory descriptions.
– Sign Language Recognition: Recognize and translate sign language in real time, facilitating communication for hearing-impaired individuals.
8. Home Automation and Smart Living:
– Smart Home Integration: Recognize household items and provide control over smart home devices. For instance, identifying a lamp and allowing users to turn it on or off via voice commands.
– Activity Monitoring: Monitor and analyze daily activities to provide insights and recommendations for improving household efficiency and safety.
9. Automotive and Driver Assistance:
– Driver Monitoring: Monitor driver attentiveness and detect signs of drowsiness or distraction, providing alerts to enhance safety.
– Object Detection: Enhance autonomous driving systems with better object detection and classification, improving vehicle navigation and safety.
10. Environmental Monitoring:
– Wildlife Tracking: Use vision understanding to monitor and track wildlife in natural habitats for research and conservation efforts.
– Pollution Detection: Identify and analyze environmental pollutants or changes in landscapes, aiding in environmental protection and management.
These use cases demonstrate the broad potential of integrating advanced vision understanding capabilities into Apple’s Vision Pro + Version 2, enhancing its functionality across various domains and providing significant value to users.
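Most of the use cases above share one pattern: a vision model emits a label for what it sees, and the application routes that label to the right handler (AR overlay, smart home toggle, accessibility narration, and so on). As a purely hypothetical sketch of that dispatch layer — no real Vision Pro or GPT-4o API is used, and all handler names are invented:

```python
from typing import Callable, Dict

# Hypothetical handlers; a real app would call platform APIs here.
def describe_landmark(label: str) -> str:
    return f"Showing history and AR overlay for {label}."

def toggle_smart_device(label: str) -> str:
    return f"Toggling smart home device: {label}."

# Map recognized-object categories to use-case handlers.
HANDLERS: Dict[str, Callable[[str], str]] = {
    "landmark": describe_landmark,
    "lamp": toggle_smart_device,
}

def route_detection(label: str, category: str) -> str:
    """Route a recognized object to its use-case handler; fall back to a
    generic description when no handler is registered."""
    handler = HANDLERS.get(category)
    return handler(label) if handler else f"Detected {label} (no action registered)."
```

The point of the sketch is the shape, not the specifics: each new use case is just another entry in the handler table, which is why one vision backbone can serve so many domains.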
Apple’s Vision Pro + Version 2, utilizing OpenAI’s GPT-4o as its operating system, offers several compelling marketing benefits. Here are the key advantages to highlight:
1. Revolutionary User Interface: – Conversational AI: GPT-4o’s advanced natural language processing capabilities allow for a conversational user interface, making interactions with Vision Pro + more intuitive and user-friendly. – Personalized Interactions: The AI can provide highly personalized responses and suggestions based on user behavior and preferences, enhancing user satisfaction and engagement.
2. Unmatched Productivity: – AI-Driven Multitasking: GPT-4o can manage and streamline multiple tasks simultaneously, significantly boosting productivity by handling scheduling, reminders, and real-time information retrieval seamlessly. – Voice-Activated Efficiency: Hands-free operation through advanced voice commands allows users to multitask efficiently, whether they are working, driving, or engaged in other activities.
3. Advanced Accessibility: – Inclusive Design: GPT-4o enhances accessibility with superior voice recognition, understanding diverse speech patterns, and offering multilingual support, making Vision Pro + more accessible to a broader audience. – Adaptive Assistance: The AI can provide context-aware assistance to users with disabilities, further promoting inclusivity and ease of use.
4. Superior Integration and Ecosystem: – Apple Ecosystem Synergy: GPT-4o integrates seamlessly with other Apple devices and services, offering a cohesive and interconnected user experience across the Apple ecosystem. – Unified User Experience: Users can enjoy a consistent and unified experience across all their Apple devices, enhancing brand loyalty and overall user satisfaction.
5. Enhanced Security and Privacy: – Secure Interactions: Emphasize GPT-4o’s robust security measures to ensure user data privacy and protection, leveraging OpenAI’s commitment to ethical AI practices. – Trustworthy AI: Highlight OpenAI’s dedication to ethical AI usage, reinforcing user trust in the AI-driven functionalities of Vision Pro +.
6. Market Differentiation: – Innovative Edge: Position Vision Pro + as a cutting-edge product that stands out in the market due to its integration with GPT-4o, setting it apart from competitors. – Leadership in AI: Showcase Apple’s leadership in technology innovation by leveraging OpenAI’s state-of-the-art advancements in AI.
7. Future-Proofing: – Continuous Innovation: Regular updates from OpenAI ensure that Vision Pro + remains at the forefront of AI technology, with continuous improvements and new features. – Scalable Solutions: The AI platform’s scalability allows for future enhancements, ensuring the product remains relevant and competitive over time.
8. Customer Engagement: – Proactive Support: GPT-4o can offer proactive customer support and real-time problem-solving, leading to higher customer satisfaction and loyalty. – Engaging Experiences: The AI can create engaging and interactive experiences, making the device more enjoyable and useful for daily activities.
9. Enhanced Creativity: – Creative Assistance: GPT-4o can assist users with creative tasks such as content creation, brainstorming, and project management, providing valuable support for both personal and professional use. – Innovative Features: Highlight the unique AI-driven features that empower users to explore new creative possibilities, enhancing the appeal of Vision Pro +.
10. Efficient Learning and Adaptation: – User Learning: GPT-4o continuously learns from user interactions, becoming more efficient and effective over time, offering a progressively improving user experience. – Adaptive Technology: The AI adapts to user needs and preferences, ensuring that the device remains relevant and useful in a variety of contexts.
By leveraging these benefits, Apple can market the Vision Pro + Version 2 as a pioneering product that offers unparalleled user experience, productivity, and innovation, driven by the advanced capabilities of OpenAI’s GPT-4o.
Google is rethinking its most iconic and lucrative product by adding new AI features to search. One expert tells WIRED it’s “a change in the world order.”
Google Search is about to fundamentally change—for better or worse. To align with Alphabet-owned Google’s grand vision of artificial intelligence, and prompted by competition from AI upstarts like ChatGPT, the company’s core product is getting reorganized, more personalized, and much more summarized by AI.
At Google’s annual I/O developer conference in Mountain View, California, today, Liz Reid showed off these changes, setting her stamp early on in her tenure as the new head of all things Google search. (Reid has been at Google a mere 20 years, where she has worked on a variety of search products.) Her AI-soaked demo was part of a broader theme throughout Google’s keynote, led primarily by CEO Sundar Pichai: AI is now underpinning nearly every product at Google, and the company only plans to accelerate that shift.
“In the era of Gemini we think we can make a dramatic amount of improvements to search,” Reid said in an interview with WIRED ahead of the event, referring to the flagship generative AI model launched late last year. “People’s time is valuable, right? They deal with hard things. If you have an opportunity with technology to help people get answers to their questions, to take more of the work out of it, why wouldn’t we want to go after that?”
It’s as though Google took the index cards for the screenplay it’s been writing for the past 25 years and tossed them into the air to see where the cards might fall. Also: The screenplay was written by AI.
These changes to Google Search have been long in the making. Last year the company carved out a section of its Search Labs, which lets users try experimental new features, for something called Search Generative Experience. The big question since has been whether, or when, those features would become a permanent part of Google Search. The answer is, well, now.
Google’s search overhaul comes at a time when critics are becoming increasingly vocal about what feels to some like a degraded search experience, and for the first time in a long time, the company is feeling the heat of competition, from the massive mashup between Microsoft and OpenAI. Smaller startups like Perplexity, You.com, and Brave have also been riding the generative AI wave and getting attention, if not significant mindshare yet, for the way they’ve rejiggered the whole concept of search.
Automatic Answers
Google says it has made a customized version of its Gemini AI model for these new Search features, though it declined to share any information about the size of this model, its speeds, or the guardrails it has put in place around the technology.
This search-specific spin on Gemini will power at least a few different elements of the new Google Search. AI Overviews, which Google has already been experimenting with in its labs, is likely the most significant. AI-generated summaries will now appear at the top of search results.
One example from WIRED’s testing: In response to the query “Where is the best place for me to see the northern lights?” Google will, instead of listing web pages, tell you in authoritative text that the best places to see the northern lights, aka the aurora borealis, are in the Arctic Circle in places with minimal light pollution. It will also offer a link to NordicVisitor.com. But then the AI continues yapping on below that, saying “Other places to see the northern lights include Russia and Canada’s northwest territories.”
Reid says that AI Overviews like this won’t show up for every search result, even if the feature is now becoming more prevalent. It’s reserved for more complex questions. Every time a person searches, Google is attempting to make an algorithmic value judgment behind the scenes as to whether it should serve up AI-generated answers or a conventional blue link to click. “If you search for Walmart.com, you really just want to go to Walmart.com,” Reid says. “But if you have an extremely customized question, that’s where we’re going to bring this.”
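Google has not disclosed how this routing decision is made. As a toy illustration only — every signal and threshold below is invented, and this is not Google's actual logic — a query router distinguishing navigational lookups from "complex" questions might look like:

```python
def should_show_ai_overview(query: str) -> bool:
    """Toy illustration of routing a query to an AI Overview or plain links.

    All signals and thresholds here are invented for illustration; Google
    has not disclosed how its real decision is made.
    """
    q = query.strip().lower()
    # Navigational queries (e.g. "walmart.com") or bare single words
    # should go straight to conventional links.
    if q.endswith((".com", ".org", ".net")) or q.split() == [q]:
        return False
    # Question-like, multi-constraint queries look "complex" enough
    # to warrant a generated summary.
    question_words = {"where", "how", "why", "what", "which", "best"}
    tokens = q.split()
    complexity = sum(1 for t in tokens if t in question_words) + len(tokens) / 10
    return complexity >= 1.0
```

Under this sketch, "walmart.com" gets plain links while the northern-lights question from WIRED's testing would trigger an overview — mirroring the split Reid describes, even if the real system surely uses learned models rather than hand-tuned rules.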
AI Overviews are rolling out this week to all Google search users in the US. The feature will come to more countries by the end of the year, Reid said, which means more than a billion people will see AI Overviews in their search results. They will appear across all platforms—the web, mobile, and as part of the search engine experience in browsers, such as when people search through Google on Safari.
Another update coming to search is a function for planning ahead. You can, for example, ask Google to meal-plan for you, or to find a pilates studio nearby that’s offering a class with an introductory discount. In the Googley-eyed future of search, an AI agent can round up a few studios nearby, summarize reviews of them, and plot out the time it would take someone to walk there. This is one of Google’s most obvious advantages over upstart search engines, which don’t have anything close to the troves of reviews, mapping data, or other knowledge that Google has, and may not be able to tap into APIs for real-time or local information so easily.
The most jarring change that Google has been exploring in its Search Labs is an “AI-organized” results page. This at first glance looks to eschew the blue-links search experience entirely.
One example provided by Reid: A search for where to go for an anniversary dinner in the greater Dallas area would return a page with a few “chips” or buttons at the top to refine the results. Those might include categories like Dine-In, Takeout, and Open Now. Below that might be a sponsored result (Google still has ads to sell), and then a grouping of what Google judges to be “anniversary-worthy restaurants” or “romantic steakhouses.” That might be followed by some suggested questions to tweak the search even more, like, “Is Dallas a romantic city?”
AI-organized search is still being rolled out, but it will start appearing in the US in English “in the coming weeks.” So will an enhanced video search option, like Google Lens on steroids, where you can point your phone’s camera at an object like a broken record player and ask how to fix it.
If all these new AI features sound confusing, you might have missed Google’s latest galaxy-brain ambitions for what was once a humble text box. Reid makes clear that she thinks most consumers assume Google Search is just one thing, where in fact it’s many things to different people, who all search in different ways.
“That’s one of the reasons why we’re excited about working on some of the AI-organized results pages,” she said. “Like, how do you make sense of space? The fact that you want lots of different content is great. But is it as easy as it can be yet in terms of browsing through and consuming the information?”
But by generating AI Overviews—and by determining when those overviews should appear—Google is essentially deciding what is a complex question and what is not, and then making a judgment on what kind of web content should inform its AI-generated summary. Sure, it’s a new era of search where search does the work for you; it’s also a search bot that has the potential to algorithmically favor one kind of result over others.
“One of the biggest changes to happen in search with these AI models is that the AI actually creates a kind of informed opinion,” says Jim Yu, the executive chairman of BrightEdge, a search engine optimization firm that has been closely monitoring web traffic for more than 17 years. “The paradigm of search for the last 20 years has been that the search engine pulls a lot of information and gives you the links. Now the search engine does all the searches for you and summarizes the results and gives you a formative opinion.”
Doing that raises the stakes for Google’s search results. When algorithms are deciding that what a person needs is one coagulated answer, instead of coughing up several links for them to then click through and read, errors are more consequential. Gemini has not been immune to hallucinations—instances where the AI shares blatantly wrong or made-up information.
Last year a writer for The Atlantic asked Google to name an African country beginning with the letter “K,” and the search engine responded with a snippet of text—originally generated by ChatGPT—that none of the countries in Africa begin with the letter K, clearly overlooking Kenya. Google’s AI image-generation tool was very publicly criticized earlier this year when it depicted some historical figures, such as George Washington, as Black. Google temporarily paused that tool.
New World Order
Google’s reimagined version of AI search shoves the famous “10 blue links” it used to provide on results pages further into the rearview. First ads and info boxes began to take priority at the top of Google’s pages; now, AI-generated overviews and categories will take up a good chunk of search real estate. And web publishers and content creators are nervous about these changes—rightfully.
The research firm Gartner predicted earlier this year that by 2026, traditional search engine volume will drop by 25 percent, as a more “agent”-led search approach, in which AI models retrieve and generate more direct answers, takes hold.
“Generative AI solutions are becoming substitute answer engines, replacing user queries that previously may have been executed in traditional search engines,” Alan Antin, a vice president analyst at Gartner, said in a statement that accompanied the report. “This will force companies to rethink their marketing channels strategy.”
What does that mean for the web? “It’s a change in the world order,” says Yu, of BrightEdge. “We’re at this moment where everything in search is starting to change with AI.”
Eight months ago BrightEdge developed something it calls a generative parser, which monitors what happens when searchers interact with AI-generated results on the web. Yu says over the past month the parser has detected that Google is less frequently asking people if they want an AI-generated answer, which was part of the experimental phase of generative search, and more frequently assuming they do. “We think it shows they have a lot more confidence that you’re going to want to interact with AI in search, rather than prompting you to opt in to an AI-generated result.”
Changes to search also have major implications for Google’s advertising business, which makes up the vast majority of the company’s revenue. In a recent quarterly earnings call, Pichai declined to share revenue from its generative AI experiments broadly. But as WIRED’s Paresh Dave pointed out, by offering more direct answers to searchers, “Google could end up with fewer opportunities to show search ads if people spend less time doing additional, more refined searches.” And the kinds of ads shown may have to evolve along with Google’s generative AI tools.
Google has said it will prioritize traffic to websites, creators, and merchants even as these changes roll out, but it hasn’t pulled back the curtain to reveal exactly how it plans to do this.
When asked in a press briefing ahead of I/O whether Google believes users will still click on links beyond the AI-generated web summary, Reid said that so far Google sees people “actually digging deeper, so they start with the AI overview and then click on additional websites.”
In the past, Reid continued, a searcher would have to poke around to eventually land on a website that gave them the info they wanted, but now Google will assemble an answer culled from various websites of its choosing. In the hive mind at the Googleplex, that will still spark exploration. “[People] will just use search more often, and that provides an additional opportunity to send valuable traffic to the web,” Reid said.
It’s a rosy vision for the future of search, one where being served bite-size AI-generated answers somehow prompts people to spend more time digging deeper into ideas. Google Search still promises to put the world’s information at our fingertips, but it’s less clear now who is actually tapping the keys.
New EU rules mean WhatsApp and Messenger must be interoperable with other chat apps. Here’s how that will work.
A frequent annoyance of contemporary life is having to shuffle through different messaging apps to reach the right person. Messenger, iMessage, WhatsApp, Signal—they all exist in their own silos of group chats and contacts. Soon, though, WhatsApp will do the previously unthinkable for its 2 billion users: allow people to message you from another app. At least, that’s the plan.
For about the past two years, WhatsApp has been building a way for other messaging apps to plug themselves into its service and let people chat across apps, all without breaking the end-to-end encryption it uses to protect the privacy and security of people’s messages. The move is the first time the chat app has opened itself up this way, potentially opening the door to greater competition.
It isn’t a shift entirely of WhatsApp’s own making. In September, European lawmakers designated WhatsApp parent Meta as one of six influential “gatekeeper” companies under the sweeping Digital Markets Act, giving it six months to open its walled garden to others. With just a few weeks to go before that time is up, WhatsApp is detailing how its interoperability with other apps may work.
“There’s real tension between offering an easy way to offer this interoperability to third parties whilst at the same time preserving the WhatsApp privacy, security, and integrity bar,” says Dick Brouwer, an engineering director at WhatsApp who worked on Meta’s rollout of encryption to its Messenger app. “I think we’re pretty happy with where we’ve landed.”
Interoperability in both WhatsApp and Messenger—as dictated by Europe’s rules—will initially focus on text messaging, sending images, voice messages, videos, and files between two people. Calls and group chats will come years down the line. Europe’s rules apply only to messaging services, not traditional SMS messaging. “One of the core requirements here, and this is really important, is for users for this to be opt-in,” says Brouwer. “I can choose whether or not I want to participate in being open to exchanging messages with third parties. This is important, because it could be a big source of spam and scams.”
WhatsApp users who opt in will see messages from other apps in a separate section at the top of their inbox. This “third-party chats” inbox has previously been spotted in development versions of the app. “The early thinking here is to put a separate inbox, given that these networks are very different,” Brouwer says. “We cannot offer the same level of privacy and security,” he says. If WhatsApp were to add SMS, it would use a separate inbox as well, although there are no plans to add it, he says.
Overall, the idea behind interoperability is simple. You shouldn’t need to know what messaging app your friends or family use to get in touch with them, and you should be able to communicate from one app to another without having to download both. In an ideal interoperable world, you could, for example, use Apple’s iMessage to chat with someone on Telegram. However, for apps with millions or billions of users, making this a reality isn’t straightforward—encrypted messaging apps use their own configurations and different protocols and have different standards when it comes to privacy.
Despite WhatsApp working on its interoperability plan for more than a year, it will still take some time for third-party chats to hit people’s apps. Messaging companies that want to interoperate with WhatsApp or Messenger will need to sign an agreement with the company and follow its terms. The full details of the plan will be published in March, Brouwer says; under EU laws, the company will have several months to implement it.
Brouwer says Meta would prefer if other apps use the Signal encryption protocol, which its systems are based upon. Other than its namesake app and the Meta-owned messengers, the Signal Protocol is publicly disclosed as being used in Google Messages and Skype. To send messages, third-party apps will need to encrypt content using the Signal Protocol and then package it into message stanzas in the eXtensible Markup Language (XML). When receiving messages, apps will need to connect to WhatsApp’s servers.
“We think that the best way to deliver this approach is through a solution that is built on WhatsApp’s existing client-server architecture,” Brouwer says, adding it has been working with other companies on the plans. “This effectively means that the approach that we’re trying to take is for WhatsApp to document our client-server protocol and letting third-party clients connect directly to our infrastructure and exchange messages with WhatsApp clients.”
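The article describes third parties encrypting with the Signal Protocol and then wrapping the ciphertext in XML message stanzas for transport to WhatsApp's servers. As a rough, hypothetical sketch of what that packaging step might look like — the element and attribute names are invented, since WhatsApp's real schema is documented only for signed-up partners, and the Signal encryption itself is assumed to have already happened upstream:

```python
import base64
import xml.etree.ElementTree as ET

def build_message_stanza(sender: str, recipient: str, ciphertext: bytes) -> str:
    """Wrap already-encrypted payload bytes in a hypothetical XML stanza.

    All element/attribute names here are invented for illustration; the real
    WhatsApp client-server schema is available only to partner developers.
    """
    stanza = ET.Element("message", attrib={"from": sender, "to": recipient, "type": "chat"})
    # Ciphertext is binary, so it is base64-encoded to ride inside XML text.
    enc = ET.SubElement(stanza, "enc", attrib={"proto": "signal", "v": "1"})
    enc.text = base64.b64encode(ciphertext).decode("ascii")
    return ET.tostring(stanza, encoding="unicode")

# Example: package a (pretend) ciphertext for transport.
xml_msg = build_message_stanza("alice@third.party", "bob@whatsapp", b"\x01\x02ciphertext")
```

The design point this illustrates is the separation of concerns Brouwer describes: encryption stays end-to-end between clients, while the stanza format is purely a transport envelope that WhatsApp's servers can route without reading the payload.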
There is some flexibility to WhatsApp interoperability. Meta’s app will also allow other apps to use different encryption protocols if they can “demonstrate” they reach the security standards that WhatsApp outlines in its guidance. There will also be the option, Brouwer says, for third-party developers to add a proxy between their apps and WhatsApp’s server. This, he says, could give developers more “flexibility” and remove the need for them to use WhatsApp’s client-server protocols, but it also “increases the potential attack vectors.”
So far, it is unclear which companies, if any, are planning to connect their services to WhatsApp. WIRED asked 10 owners of messaging or chat services—including Google, Telegram, Viber, and Signal—whether they intend to look at interoperability or had worked with WhatsApp on its plans. The majority of companies didn’t respond to the request for comment. Those that did, Snap and Discord, said they had nothing to add. (The European Commission is investigating whether Apple’s iMessage meets the thresholds to offer interoperability with other apps itself. The company did not respond to a request for comment. It has also faced recent challenges in the US about the closed nature of iMessage.)
Matthew Hodgson, the cofounder of Matrix, which is building an open source standard for encryption and operates the messaging app Element, confirms that his company has worked with WhatsApp on interoperability in an “experimental” way but that he cannot say any more due to signing a nondisclosure agreement. In a talk last weekend, Hodgson demonstrated “hypothetical” architectures for ways that Matrix could connect to the systems of two gatekeepers that don’t use the same encryption protocols.
Meanwhile, Julia Weis, a spokesperson for the Swiss messaging app Threema, says that while WhatsApp did approach it to discuss its interoperability plans, the proposed system didn’t meet Threema’s security and privacy standards. “WhatsApp specifies all the protocols, and we’d have no way of knowing what actually happens with the user data that gets transferred to WhatsApp—after all, WhatsApp is closed source,” Weis says. (WhatsApp’s privacy policy states how it uses people’s data.)
When the EU first announced that messaging apps may have to work together in early 2022, many leading cryptographers opposed the idea, saying it adds complexity and potentially introduces more security and privacy risks. Carmela Troncoso, an associate professor at the Swiss university École Polytechnique Fédérale de Lausanne, who focuses on security and privacy engineering, says interoperability moves could potentially lead to different power relationships between companies, depending on how they are implemented.
“This move for interoperability will, on the one hand, open the market, but also maybe close the market in the sense that now the bigger players are going to have more decisional power,” Troncoso says. “Now, if the big player makes a move and you want to continue being interoperable with this big player, because your users are hooked up to this, you’re going to have to follow.”
While the interoperability of encrypted messaging apps may be possible, there are some fundamental challenges about how the systems will work in the real world. How much of a problem spam and scamming will be across apps is largely unknown until people start using interoperable setups. There are also questions about how people will find each other across different apps. For instance, WhatsApp uses your phone number to interact and message other people, while Threema randomly generates eight-digit IDs for people’s accounts. Linking up with WhatsApp “could de-anonymize Threema users,” Weis, the Threema spokesperson says.
Meta’s Brouwer says the company is still working on the interoperability features and the level of support it will make available for companies wanting to integrate with it. “Nobody quite knows how this works,” Brouwer says. “We have no idea what the demand is.” However, he says, the decision was made to use WhatsApp’s existing architecture to run interoperability, as it means that it can more easily scale up the system for group chats in the future. It also reduces the potential for people’s data to be exposed to multiple servers, Brouwer says.
Ultimately, interoperability will evolve over time, and from Meta’s perspective, Brouwer says, it will be more challenging to add new features to it quickly. “We don’t believe interop chats and WhatsApp chats can evolve at the same pace,” he says, claiming it is “harder to evolve an open network” compared to a closed one. “The second you do something different—than what we know works really well—you open up a wormhole of security, privacy issues, and complexity that is always going to be much bigger than you think it is.”
Last year, scientists reported that the US Atlantic Coast is dropping by several millimeters annually, with some areas, like Delaware, notching figures several times that rate. So just as the seas are rising, the land along the eastern seaboard is sinking, greatly compounding the hazard for coastal communities.
In a follow-up study just published in the journal PNAS Nexus, the researchers tally up the mounting costs of subsidence—due to settling, groundwater extraction, and other factors—for those communities and their infrastructure. Using satellite measurements, they have found that up to 74,000 square kilometers (29,000 square miles) of the Atlantic Coast are exposed to subsidence of up to 2 millimeters (0.08 inches) a year, affecting up to 14 million people and 6 million properties. And over 3,700 square kilometers along the Atlantic Coast are sinking more than 5 millimeters annually. That’s an even faster change than sea level rise, currently at 4 millimeters a year. (In the map below, warmer colors represent more subsidence, up to 6 millimeters.)
Courtesy of Leonard O Ohenhen
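Using the study's headline numbers, the combined hazard is simple arithmetic: land sinking at up to 5 millimeters a year plus seas rising at about 4 millimeters a year means up to roughly 9 millimeters a year of relative sea-level rise. A back-of-envelope projection, assuming (as a simplification) that both rates stay constant:

```python
def relative_rise_mm(subsidence_mm_per_yr: float, sea_level_mm_per_yr: float, years: int) -> float:
    """Relative sea-level rise experienced at the coast: land subsidence
    adds directly to ocean rise. Constant rates are a simplification."""
    return (subsidence_mm_per_yr + sea_level_mm_per_yr) * years

# Figures from the study: up to 5 mm/yr subsidence, ~4 mm/yr sea-level rise.
print(relative_rise_mm(5, 4, 30))  # 270 mm (~27 cm) over three decades
```

That roughly 27 centimeters over 30 years in the fastest-sinking spots is why the authors treat subsidence as comparable in importance to sea-level rise itself.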
With each millimeter of subsidence, it gets easier for storm surges—essentially a wall of seawater, which hurricanes are particularly good at pushing onshore—to creep farther inland, destroying more and more infrastructure. “And it’s not just about sea levels,” says the study’s lead author, Leonard Ohenhen, an environmental security expert at Virginia Tech. “You also have potential to disrupt the topography of the land, for example, so you have areas that can get full of flooding when it rains.”
A few millimeters of annual subsidence may not sound like much, but these forces are relentless: Unless coastal areas stop extracting groundwater, the land will keep sinking deeper and deeper. The social forces are relentless, too, as more people around the world move to coastal cities, creating even more demand for groundwater. “There are processes that are sometimes even cyclic. For example, in summers you pump a lot more water, so land subsides rapidly in a short period of time,” says Manoochehr Shirzaei, an environmental security expert at Virginia Tech and coauthor of the paper. “That causes large areas to subside below a threshold that leads the water to flood a large area.” When it comes to flooding, falling elevation of land is a tipping element that has been largely ignored by research so far, Shirzaei says.
In Jakarta, Indonesia, for example, the land is sinking nearly a foot a year because of collapsing aquifers. Accordingly, within the next three decades, 95 percent of North Jakarta could be underwater. The city is planning a giant seawall to hold back the ocean, but it’ll be useless unless subsidence is stopped.
This new study warns that levees and other critical infrastructure along the Atlantic Coast are in similar danger. If the land were to sink uniformly, you might just need to keep raising the elevation of a levee to compensate. But the bigger problem is “differential subsidence,” in which different areas of land sink at different rates. “If you have a building or a runway or something that’s settling uniformly, it’s probably not that big a deal,” says Tom Parsons, a geophysicist with the United States Geological Survey who studies subsidence but wasn’t involved in the new paper. “But if you have one end that’s sinking faster than the other, then you start to distort things.”
The researchers selected 10 levees on the Atlantic Coast and found that all were impacted by subsidence of at least 1 millimeter a year. That puts at risk something like 46,000 people, 27,000 buildings, and $12 billion worth of property. But they note that the actual population and property at risk of exposure behind the 116 East Coast levees vulnerable to subsidence could be two to three times greater. “Levees are heavy, and when they’re set on land that’s already subsiding, it can accelerate that subsidence,” says independent scientist Natalie Snider, who studies coastal resilience but wasn’t involved in the new research. “It definitely can impact the integrity of the protection system and lead to failures that can be catastrophic.”
Courtesy of Leonard O Ohenhen
The same vulnerability affects other infrastructure that stretches across the landscape. The new analysis finds that along the Atlantic Coast, between 77 and 99 percent of interstate highways and between 76 and 99 percent of primary and secondary roads are exposed to subsidence. (In the map above, you can see roads sinking at different rates across Hampton and Norfolk, Virginia.) Between 81 and 99 percent of railway tracks and 42 percent of train stations are exposed on the East Coast.
Below is New York’s JFK Airport—notice the red hot spots of high subsidence against the teal of more mild elevation change. The airport’s average subsidence rate is 1.7 millimeters a year (similar to the LaGuardia and Newark airports), but across JFK that varies between 0.8 and 2.8 millimeters a year, depending on the exact spot.
Courtesy of Leonard O Ohenhen
This sort of differential subsidence can also bork much smaller structures, like buildings, where one side might drop faster than another. “Even if that is just a few millimeters per year, you can potentially cause cracks along structures,” says Ohenhen.
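The JFK rates quoted above translate into meaningful differential settlement over an infrastructure lifetime. A quick back-of-envelope in Python, using the article's 0.8–2.8 mm/yr spread (the 30-year horizon is an illustrative assumption, not a figure from the study):

```python
# Back-of-envelope differential settlement at JFK, using the subsidence
# rates quoted in the article (0.8-2.8 mm/yr across the airport).
# The 30-year horizon is an illustrative assumption, not a study figure.
low_rate, high_rate = 0.8, 2.8          # mm per year, slowest vs. fastest spot
years = 30

differential_rate = high_rate - low_rate           # mm/yr between the two spots
total_differential_mm = differential_rate * years  # relative settlement, ~60 mm

print(f"{total_differential_mm:.0f} mm of differential settlement over {years} years")
```

Roughly 6 centimeters of relative drop between two points on the same runway or foundation is exactly the kind of distortion Parsons describes.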
The study finds that subsidence is highly variable along the Atlantic Coast, both regionally and locally, as different stretches have different geology and topography, and different rates of groundwater extraction. It’s looking particularly problematic for several communities, like Virginia Beach, where 451,000 people and 177,000 properties are at risk. In Baltimore, Maryland, it’s 826,000 people and 335,000 properties, while in NYC—in Queens, Bronx, and Nassau—that leaps to 5 million people and 1.8 million properties.
So there are two components to addressing the problem of subsidence: getting high-resolution data like in this study, and then pairing that with groundwater data. “Subsidence is so spatially variable,” says Snider. “Having the details of where groundwater extraction is really having an impact, and being able to then demonstrate that we need to change our management of that water, that reduces subsidence in the future.”
The time to act is now, Shirzaei emphasizes. Facing down subsidence is like treating a disease: You spend less money by diagnosing and treating the problem now, saving money later by avoiding disaster. “This kind of data and the study could be an essential component of the health care system for infrastructure management,” he says. “Like cancers—if you diagnose it early on, it can be curable. But if you are late, you invest a lot of money, and the outcome is uncertain.”
Open letter on the feasibility of "Chat Control": Assessments from a scientific point of view
Update: A parallel initiative aimed at the EU institutions is available in English as the CSA Academia Open Letter. Since very similar arguments were formulated in parallel, the two letters support each other.
The EU Commission initiative discussed under the name "Chat Control" – the indiscriminate monitoring of various communication channels to detect child pornography, terrorist, or other "undesirable" material, including attempts at early detection (e.g. "grooming" of minors through trust-building text messages), made mandatory for mobile devices and communication services – has recently been expanded to include the monitoring of direct audio communications. Some states, including Austria and Germany, have already publicly declared that they will not support this initiative for surveillance without cause. Civil protection and children's rights organizations have likewise rejected the approach as excessive and at the same time ineffective. Recently, even the legal service of the EU Council of Ministers diagnosed an incompatibility with European fundamental rights. Regardless, the draft continues to be tightened and extended to further channels – in its latest version even to audio messages and conversations. The approach appears to be coordinated with corresponding attempts in the US ("EARN IT" and "STOP CSAM" Acts) and the UK ("Online Safety Bill").
As scientists actively researching various areas of this topic, we therefore state in all clarity: this proposal cannot be implemented safely and effectively. No foreseeable development of the relevant technologies would make such an implementation technically possible. Moreover, in our assessment, the hoped-for effects of these monitoring measures are not to be expected. This legislative initiative therefore misses its target, is socio-politically dangerous, and would permanently damage the security of our communication channels for the majority of the population.
The main arguments against the feasibility of "Chat Control" have already been made several times. In the following, we discuss them specifically at the interdisciplinary intersection of artificial intelligence (AI), security (information security / technical data protection), and law.
Our concerns are:
Security: a) Encryption is the best method for Internet security; successful attacks are almost always due to faulty software. b) Systematic, automated monitoring (i.e. "scanning") of encrypted content is technically possible only by massively undermining the security that encryption provides, which introduces considerable additional risks. c) A legal obligation to integrate such scanners would make secure digital communication in the EU unavailable to the majority of the population, while having little impact on criminal communications.
AI: a) Automated classification of content, including by machine-learning methods, is always subject to errors, which in this case will lead to high false-positive rates. b) Monitoring methods executed on end devices open up additional attack possibilities, up to and including the extraction of possibly illegal training material.
Law: a) A sensible demarcation from explicitly permitted uses of such content, for example in education or for criticism and parody, does not appear to be possible automatically. b) The massive encroachment on fundamental rights by such an instrument of mass surveillance is not proportionate and would cause great collateral damage to society.
In detail, these concerns are based on the following scientifically recognized facts:
Security
Encryption using modern methods is an indispensable basis for practically all technical mechanisms that maintain security and data protection on the Internet. It protects communication for today's services, right through to critical infrastructure such as telephone, electricity, and water networks and hospitals. Among experts, trust in good encryption methods is significantly higher than in other security mechanisms; it is above all the generally poor quality of software that accounts for the many publicly known security incidents. Improving this situation therefore relies primarily on encryption.
Automated monitoring ("scanning") of correctly encrypted content is not effectively possible according to the current state of knowledge. Techniques such as Fully Homomorphic Encryption (FHE) are currently unsuitable for this application: the technique itself is not capable of it, and the necessary computing power is not realistically available. No rapid improvement is foreseeable here either.
For these reasons, earlier international attempts to ban or restrict end-to-end encryption were mostly abandoned quickly. The current Chat Control push instead aims to build monitoring functionality into the end devices themselves, in the form of scanning modules ("client-side scanning", CSS), so that plaintext content is scanned before secure encryption or after secure decryption. Providers of communication services would be legally obliged to implement this for all content. Since doing so is not in the core interest of such organizations and entails implementation and operating effort as well as increased technical complexity, it cannot be assumed that such scanners would be introduced voluntarily – in contrast to scanning on the server side.
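The "scan before encryption" architecture can be made concrete with a deliberately simplified sketch. Everything here is a toy (the SHA-256 blocklist stands in for a real detection model, and the XOR "cipher" is not secure); only the ordering of the steps reflects the CSS design described above:

```python
import hashlib

# Toy stand-ins: a real deployment would embed a secret ML model or
# perceptual-hash list; a plain SHA-256 blocklist illustrates the flow.
BLOCKLIST = {hashlib.sha256(b"known-bad-content").hexdigest()}
reports = []  # stand-in for reports sent to an authority

def scan(plaintext: bytes) -> bool:
    # The client-side scanner necessarily operates on the *plaintext*.
    return hashlib.sha256(plaintext).hexdigest() in BLOCKLIST

def encrypt(plaintext: bytes, key: bytes) -> bytes:
    # Toy XOR "encryption" (NOT secure) standing in for real E2EE.
    keystream = (key * (len(plaintext) // len(key) + 1))[:len(plaintext)]
    return bytes(p ^ k for p, k in zip(plaintext, keystream))

def send_message(plaintext: bytes, key: bytes) -> bytes:
    if scan(plaintext):            # mandated client-side hook
        reports.append(plaintext)  # content leaves the E2E boundary here
    return encrypt(plaintext, key) # encryption happens only afterwards
```

The point of the sketch is structural: the inspection hook sits before encryption, so the "end-to-end" guarantee no longer covers what the scanner sees.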
Secure messengers such as Signal, Threema, and WhatsApp have already publicly announced that they will not implement such client-side scanners but will instead withdraw from the affected regions. Depending on the use case, this has different implications for communication: (i) (Adult) criminals will simply communicate with each other via "non-compliant" messenger services and continue to benefit from secure encryption. The added effort – for example, installing apps on Android via sideloading because they are unavailable in the country's usual app stores – is not a significant hurdle for criminal elements. (ii) Criminals communicate with potential future victims via popular platforms, which would be the target of the mandatory surveillance measures under discussion. In this case, informed criminals can be expected to quickly lure their victims onto alternative but still internationally accepted channels, such as Signal, that are not covered by the monitoring. (iii) Participants exchange problematic material without being aware that they are committing a crime. This case would be reported automatically and could also lead to the criminalization of minors acting without intent. The restrictions would therefore primarily affect the broad – and blameless – mass of the population.

It would be utterly illusory to think that secure encryption could be rolled back by anything short of built-in monitoring. Tools like Signal, Tor, Cwtch, Briar, and many others are widely available as open source and can easily be placed beyond central control. Knowledge of secure encryption is already common knowledge and can no longer be suppressed. There is no effective way to technically block the use of strong encryption without client-side scanning (CSS). If surveillance measures are mandated in messengers, only criminals – those whose actual crimes outweigh the violation of the surveillance obligation – will retain their privacy.
Furthermore, the complex implementation forced by the proposed scanner modules creates additional security problems that do not currently exist. On the one hand, these are new software components, which will in turn be vulnerable. On the other hand, the Chat Control proposals consistently assume that the scanner modules themselves (built into the messenger app) will remain confidential – partly because they would be trained on content whose mere possession is already punishable, and partly because they could otherwise simply be used to test evasion methods. It is an illusion that such machine-learning models or other scanner modules, distributed to billions of devices under the control of end users, can ever be kept secret. A prominent example is Apple's "NeuralHash" module for CSAM detection, which was extracted almost immediately from the corresponding iOS versions and is thus openly available. The assumption in the Chat Control proposals that these scanner modules could be kept confidential is therefore completely unfounded; corresponding data leaks are almost unavoidable.
Artificial Intelligence
We must assume that machine-learning (ML) models on end devices cannot, in principle, be kept completely secret. This contrasts with server-side scanning, which is currently legally possible and actively practiced by various providers on content that is not end-to-end encrypted. Server-side ML models can be reasonably protected from extraction with the current state of the art and are less the focus of this discussion.
A general problem with all ML-based filters is misclassification. Known "undesirable" material can, with small changes, escape recognition ("false negatives" or "false non-matches"). For parts of the proposal, it is entirely unclear how ML models are supposed to recognize complex, previously unseen material in shifting contexts (e.g. "grooming" in text chats) with even approximate accuracy; high false-negative rates are likely.

In terms of risk, however, it is significantly more serious when harmless material is classified as "undesirable" ("false positives", "false matches", or "collisions"). Such errors can be reduced but in principle cannot be ruled out. Beyond the false accusation of uninvolved persons, false positives also generate (potentially very) many spurious reports for investigative authorities, which already lack the resources to follow up on reports.
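The scale of the false-positive problem follows from simple base-rate arithmetic. The sketch below uses invented illustrative numbers (none of these figures appear in the letter) to show how even a seemingly accurate classifier produces mostly false reports when the targeted material is rare:

```python
# Base-rate arithmetic for an automated scanner.
# All numbers are illustrative assumptions, not figures from the letter.
messages_per_day = 1_000_000_000   # messages scanned per day
prevalence       = 1e-6            # fraction that is actually illegal
false_positive   = 0.001           # 0.1% false-positive rate (optimistic for ML)
true_positive    = 0.90            # 90% detection rate

actual_bad  = messages_per_day * prevalence
flagged_bad = actual_bad * true_positive                        # correct reports
flagged_ok  = (messages_per_day - actual_bad) * false_positive  # false reports

precision = flagged_bad / (flagged_bad + flagged_ok)
print(f"correct reports/day: {flagged_bad:,.0f}")
print(f"false reports/day:   {flagged_ok:,.0f}")
print(f"share of reports that are correct: {precision:.2%}")
```

Under these assumptions, false reports outnumber correct ones by roughly a thousand to one – the investigative-resources problem the letter points to.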
The assumed open availability of ML models also creates various new attack possibilities. In the case of Apple's NeuralHash, random collisions were found very quickly, and programs to generate arbitrary collisions between images were freely released. This method, also known as "malicious collisions", uses so-called adversarial attacks against the neural network, enabling attackers to deliberately make harmless material register as a "match" in the ML model and thus be classified as "undesirable". In this way, innocent people can be deliberately harmed and brought under suspicion through automatic false reports – without any illegal material on the part of either the attacked or the attacker.
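The collision problem can be made concrete with a toy perceptual hash. The "average hash" below is a standard, deliberately minimal technique; the specific pixel arrays are contrived for illustration, and the real attacks on NeuralHash used gradient-based adversarial optimization rather than hand-picked values:

```python
# Toy perceptual "average hash": one bit per pixel, set if the pixel is
# above the image mean. Real systems (e.g. NeuralHash) are far more
# complex, but share the property that distinct inputs can collide.
def average_hash(img):
    """img: flat list of grayscale values; returns a tuple of bits."""
    mean = sum(img) / len(img)
    return tuple(1 if p > mean else 0 for p in img)

# Two visibly different 2x2 "images": every pixel value differs...
img_a = [10, 200, 10, 200]
img_b = [90, 110, 90, 110]

# ...yet both reduce to the same above/below-mean bit pattern.
assert average_hash(img_a) == average_hash(img_b) == (0, 1, 0, 1)
```

An adversary who can read the model can search for such collisions deliberately, which is what was demonstrated against NeuralHash within days of its extraction.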
The open availability of the models can also be exploited for so-called "training input recovery": extracting, at least partially, the content used for training from the ML model. For prohibited content (e.g. child pornography), this poses another massive problem and can further increase the harm to those affected, since their sensitive data (e.g. abuse images used for training) can continue to circulate. Because of these and other problems, Apple, for example, withdrew its proposal.

We note that this last danger does not arise with server-side scanning by ML models but is newly introduced by the Chat Control proposal's client-side scanner.
Legal Aspects
The right to privacy is a fundamental right that may be interfered with only under very strict conditions. Whoever exercises this fundamental right must not be suspected from the outset of wanting to hide something criminal. The often-used phrase "If you have nothing to hide, you have nothing to fear!" denies people the exercise of their fundamental rights and promotes totalitarian surveillance tendencies. The use of Chat Control would fuel this.
The area of terrorism in particular overlaps, in its breadth, with political activity and freedom of expression. Precisely against this background, the "anticipatory criminalization" that has increasingly taken place in recent years under the guise of fighting terrorism is viewed particularly critically. Chat Control measures point in the same direction. They can severely curtail this fundamental right and put politically critical people in the focus of criminal prosecution. The resulting severe curtailment of politically critical activity hinders the further development of democracy and harbors the danger of promoting radicalized underground movements.
The field of law and the social sciences includes researching criminal phenomena and questioning regulatory mechanisms. From this point of view, scientific discourse itself runs the risk of being flagged as "suspicious" by Chat Control and thus indirectly restricted. The possible stigmatization of critical legal and social-science research is in tension with academic freedom, whose further development also requires research independent of the mainstream.
Education requires raising young people to be critically aware, which includes conveying facts about terrorism. Under Chat Control, providing such teaching material could put teachers in the focus of criminal prosecution. The same applies to addressing sexual abuse: control measures could further taboo this sensitive subject, even though "self-empowerment mechanisms" are supposed to be promoted.
Interventions in fundamental rights must always be appropriate and proportionate, even if they are made in the context of criminal prosecution. The technical considerations presented show that these requirements are not met with Chat Control. Such measures thus lack any legal or ethical legitimacy.
In summary, the current proposal for Chat Control legislation is not technically sound from either a security or an AI point of view, and it is highly problematic and excessive from a legal point of view. The Chat Control push brings significantly greater dangers for the general public than possible improvements for those affected, and it should therefore be rejected.
Instead, existing options for human-initiated reporting of potentially problematic material by its recipients, as various messenger services already offer, should be strengthened and made even more easily accessible. It should be considered whether anonymous reporting channels for such illegal material could be created and made easily reachable from within messengers. Existing law-enforcement options – such as the monitoring of social media or open chat groups by police officers, and the legally sanctioned analysis of suspects' smartphones – can continue to be used.
For more detailed information and further details please contact:
AI Austria , association for the promotion of artificial intelligence in Austria, Wollzeile 24/12, 1010 Vienna
Austrian Society for Artificial Intelligence (ASAI) , association for the promotion of scientific research in the field of AI in Austria
Univ.-Prof. Dr. Alois Birklbauer, JKU Linz (Head of the practice department for criminal law and medical criminal law)
Ass.-Prof. Dr. Maria Eichlseder, Graz University of Technology
Univ.-Prof. Dr. Sepp Hochreiter, JKU Linz (Board of Directors of the Institute for Machine Learning, Head of the LIT AI Lab)
Dr. Tobias Höller, JKU Linz (postdoc at the Institute for Networks and Security)
FH-Prof. TUE Peter Kieseberg, St. Pölten University of Applied Sciences (Head of the Institute for IT Security Research)
Dr. Brigitte Krenn, Austrian Research Institute for Artificial Intelligence (Board Member, Austrian Society for Artificial Intelligence)
Univ.-Prof. Dr. Matteo Maffei, TU Vienna (Head of the Security and Privacy Research Department, Co-Head of the TU Vienna Cyber Security Center)
Univ.-Prof. Dr. Stefan Mangard, TU Graz (Head of the Institute for Applied Information Processing and Communication Technology)
Univ.-Prof. Dr. René Mayrhofer, JKU Linz (Board of Directors of the Institute for Networks and Security, Co-Head of the LIT Secure and Correct System Lab)
DI Dr. Bernhard Nessler, JKU Linz/SCCH (Vice President of the Austrian Society for Artificial Intelligence)
Univ.-Prof. Dr. Christian Rechberger, Graz University of Technology
Dr. Michael Roland, JKU Linz (postdoc at the Institute for Networks and Security)
a.Univ.-Prof. Dr. Johannes Sametinger, JKU Linz (Institute for Business Informatics – Software Engineering, LIT Secure and Correct System Labs)
Univ.-Prof. DI Georg Weissenbacher, DPhil (Oxon), TU Vienna (Prof. Rigorous Systems Engineering)