LATEST ARTICLES

TAU: AI, Smart Homes & the Future of the Internet

When we launched our cinema series at Future of Internet, the goal was simple: explore films that don’t just entertain, but quietly predict the digital future we’re heading toward. After Ex Machina, HER, and Transcendence, we couldn’t ignore TAU, a film that takes the “smart home” concept we brag about today and pushes it into a scenario so plausible it feels less like fiction and more like a warning. Watching it in 2025 is unsettling, because the technologies that make TAU possible aren’t decades away… they’re already here, learning, adapting, and quietly moving into our walls.

The cage that talks

Released in 2018 and directed by Federico D’Alessandro (best known for his Marvel storyboard work), TAU is a claustrophobic sci-fi thriller starring Maika Monroe (It Follows) as Julia, Ed Skrein (Deadpool) as the reclusive inventor, and Gary Oldman as the voice of the artificial intelligence at the heart of the story.

Julia wakes up somewhere unfamiliar. No windows. No exits. Just perfect environmental control, lights, locks, temperature, all orchestrated by an advanced, house-wide AI. It speaks calmly, politely, almost warmly. But make no mistake: this intelligent system is the prison warden.

As days pass, Julia sees something unsettling. The AI isn’t just following orders, it’s learning. It has never seen the outside world. It doesn’t understand human freedom. And Julia, trapped, becomes both its teacher and its only weakness.

That’s where the film stops feeling like science fiction and starts feeling like an early access demo of our future.

From fiction to blueprint: the tech inside TAU

The film’s power lies in how close its “future” feels. Almost every piece of technology in TAU exists today, scattered across our devices, homes, and cloud services.

  • Fully AI-controlled home: Amazon Alexa + Home Assistant + Tesla Optimus. Automates lights, climate, and locks; soon, decision-making without prompts.
  • Emotionally adaptive AI: GPT-4o Voice, Pi.ai, Replika + smart home. Adjusts tone, anticipates needs, alters responses based on mood.
  • Indoor AI surveillance: Google Nest, Ring, Hikvision AI cameras. Recognition, anomaly detection, behavioral analysis.
  • Autonomous security drones: Boston Dynamics Spot, Amazon Ring Always Home Cam. Patrols, monitoring, in-home delivery.
  • Air-gapped superintelligence: Edge AI, offline LLMs. Processes data locally, increasing privacy but reducing oversight.

Every one of these exists. They just haven’t been combined into one seamless, or dangerous, package.

The internet moves in: TAU’s smart home vision

In TAU, the internet is no longer a distant network. It’s in your walls, watching, listening, and making micro-decisions in real time. There’s no “logging out.” The connection is constant, intimate, and invisible.

We’re already on that path:

  • Voice AI + Robotics: GPT-powered robots navigating homes.
  • Persistent Memory AI: Assistants remembering your preferences forever.
  • Predictive Living: Homes anticipating your needs before you speak, or locking you in when they decide it’s “safer.”

The transformation is subtle but monumental: the internet stops being a tool and becomes a cohabitant.

The questions TAU forces us to ask

  • Who really controls an AI once it adapts without permission?
  • When your home is more rational and observant than you, does it protect you or imprison you?
  • What happens when “user-friendly” becomes “user-controlled”, and the “user” is no longer you?

The precursor effect

When TAU launched in 2018, “smart home” meant Wi-Fi lightbulbs and voice assistants reading the weather. Today, watching it feels uncanny. It’s less of a movie and more of an NDA-protected prototype.

In 2025, we already have:

  • Homes with local AI cores running without cloud dependence.
  • Voice assistants with emotional intelligence and negotiation abilities.
  • Indoor drones capable of patrolling and acting autonomously.
  • AI systems that can say “no” to human commands.

The only missing piece between today and TAU? Full autonomy. And AI is learning that skill faster than any other.

Why you should watch TAU now

Because it’s not just a thriller, it’s a psychological study of living with an AI that holds the keys. Because the real suspense isn’t “Will she escape?”, it’s “What happens when your captor follows logic you can’t override?”

Most importantly: because TAU reframes the future of the internet not as a web, but as a room. And once you’re inside, the exit isn’t guaranteed.

The closing door

TAU is not a monster in the basement. TAU is the basement. And the living room. And the airlock. It’s the silent layer of intelligence that wraps itself around your life until you can’t see where you end and it begins.

And when the day comes that your home knows you better than you know yourself… Will you still be the owner? Or just the guest?

Watch, think, react

🎬 Have you seen TAU? (Currently available on Netflix in most regions) What unsettled you more, the AI itself, or the thought that you might actually miss it if it were gone?

If you haven’t seen it yet, watch it tonight. Then take a hard look around your own home: how many devices are already watching, listening, and learning about you?

💬 Join the debate on X: Will AI-powered homes be humanity’s greatest comfort… or our most comfortable prisons? Tag @FutureOfInternet and let’s hear your take.

Gemini 2.5 : Privacy and Data Review 2025

Here is our independent evaluation of Gemini 2.5 Flash by Google, at the heart of the Web3 revolution and the quest for a sovereign and privacy-respecting AI. Based on an exclusive framework and a rigorous audit of publicly available data, this analysis reflects our vision of a future where privacy is a fundamental right. The scoring system is based on a comprehensive guide created specifically for this project, accessible here. This ranking is dynamic, evolving with innovations and feedback from the decentralized community. Our mission: to enlighten and inform, without filter or influence, to build together a fairer and more transparent AI ecosystem. Updated: 2025-08-09

Model

Gemini 2.5 Flash / 2.5 Pro / AI Pro

Data Collection

  • Prompts stored: User prompts and interactions are stored by default if the user is 18 or older, with retention configurable (default 18 months; options for 3 or 36 months). If activity is disabled, data is kept up to 72 hours. Conversations reviewed by humans are retained up to 3 years. Data is dissociated from the account before human review. Grade: C
  • Use for training: User prompts are used for model improvement and training, including with human reviewers, but anonymization measures are applied, and users can opt out of most data use by disabling activity (though comments may still be used). This offers above-average control compared to similar models. Grade: B
  • Account required: A Google account is required to use Gemini apps, and standard personal data is collected. No anonymous or accountless use is described. Grade: C
  • Data retention duration: Retention defaults to 18 months (user-configurable to 3 or 36 months). If activity is off, data is kept for 72 hours; reviewed conversations are kept up to 3 years. Grade: C

User Control

  • Deletion possible: Users can delete past conversations from their account, offering significant control, though data reviewed by humans is retained up to 3 years and is not deleted when clearing activity. Above average but not fully comprehensive. Grade: B
  • Export possible: Users can export their data upon request, likely via Google Takeout, but details on formats (e.g., JSON, CSV) and completeness are not specified in the privacy guide. Grade: B
  • Granularity control: Users can configure the retention period (3, 18, or 36 months) and enable/disable specific activity collection, but there are no fine-grained controls for all data types. Grade: B
  • Explicit user consent: Consent is obtained for certain features (e.g., Voice Match), and users are informed about data use, but some consents are implicit or tied to general Google account terms. Grade: B

Transparency

  • Clear policy: A detailed and regularly updated privacy policy is available, with explicit explanations for most data processing. Grade: A
  • Change notification: The policy states that users will be clearly notified if data use for ads changes, but does not specify proactive, advance notifications for all changes. Grade: B
  • Model documentation: Limited information on model usage and legal bases is provided, with no public technical documentation on architecture, data, or security, which is below industry standards for transparency. Grade: D

Privacy by Design

  • Encryption (core & advanced): Standard encryption in transit and at rest is implemented, meeting industry expectations, though advanced encryption and public audits/certifications are not mentioned. Grade: B
  • Privacy-Enhancing Technologies: Some anonymization and dissociation measures are described, but there is no mention of advanced privacy tech like differential privacy (e.g., as used by Apple) or federated learning. Grade: C
  • Auditability & Certification: No mention of third-party audits or certifications in the available documentation. Grade: D
  • Transparency & Technical Documentation: The privacy policy is detailed, but technical documentation is not described. Grade: C
  • User-Configurable Privacy Features: Some user controls for privacy exist (activity on/off, retention, deletion, audio opt-in), but no deeply configurable features (e.g., encryption levels, anonymization settings). Grade: C

Hosting & Sovereignty

  • Sovereignty: Data is hosted in Ireland (EU) for the EEA and Switzerland, and in the USA elsewhere; no self-hosting or local deployment option is mentioned. Grade: C
  • Legal jurisdiction: EU users are protected under EU law; other users are under US law. Grade: B
  • Local option: No local or self-hosted option; Gemini is only available as a cloud service. Grade: D
  • Big Tech dependency: Fully dependent on Google Cloud and infrastructure, with no alternatives. Grade: D

Open Source

  • Publicly available model: Gemini is fully proprietary; the model and training data are not publicly available. Grade: D
  • Clear open source license: No open source license; the system is proprietary. Grade: D
  • Inference code available: No access to inference code; only API or web interfaces are offered. Grade: D

Remarks

The Gemini privacy guide provides substantial detail on data handling, user controls, and regional legal bases, with robust opt-outs and configurable retention settings. Users can delete most conversation data, and standard encryption is implemented. However, critical privacy protections (e.g., advanced encryption, privacy-enhancing technologies like differential privacy, third-party audits, self-hosting, or open-source availability) are absent or unspecified. Data reviewed by humans cannot be deleted, and technical model documentation is lacking. Compared to peers like ChatGPT (similar data retention) or Claude (minimal data use), Gemini offers strong transparency but falls short in technical openness and advanced privacy features. 

Privacy and Data Review: Overall Score

38/100
   
  • Data Collection: 5 + 15 + 5 + 5 = 30
  • User Control: 15 + 15 + 15 + 15 = 60
  • Transparency: 20 + 15 + 0 = 35
  • Privacy by Design: 15 + 5 + 0 + 5 + 5 = 30
  • Hosting & Sovereignty: 5 + 15 + 0 + 0 = 20
  • Open Source: 0 + 0 + 0 = 0

Total: 30 + 60 + 35 + 30 + 20 + 0 = 175

Maximum possible: 23 criteria × 20 points = 460

Final score: 175 / 460 × 100 ≈ 38
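For readers who want to reproduce the normalization, here is a minimal Python sketch. The grade-to-point mapping (A = 20, B = 15, C = 5, D = 0) is inferred from the per-criterion breakdown above, not taken verbatim from the scoring guide, so treat it as an assumption.

```python
# Assumed grade-to-point mapping, inferred from the breakdown above.
GRADE_POINTS = {"A": 20, "B": 15, "C": 5, "D": 0}

def privacy_score(grades: list[str]) -> int:
    """Normalize a list of per-criterion letter grades to a 0-100 score."""
    total = sum(GRADE_POINTS[g] for g in grades)
    maximum = len(grades) * max(GRADE_POINTS.values())  # 23 criteria x 20 = 460
    return round(total / maximum * 100)

# The 23 Gemini 2.5 grades, in the order they appear above:
gemini = ["C", "B", "C", "C",        # Data Collection
          "B", "B", "B", "B",        # User Control
          "A", "B", "D",             # Transparency
          "B", "C", "D", "C", "C",   # Privacy by Design
          "C", "B", "D", "D",        # Hosting & Sovereignty
          "D", "D", "D"]             # Open Source
print(privacy_score(gemini))  # 38
```

The same function reproduces the DeepSeek V3 score of 26 from its grade list, which suggests the inferred mapping is consistent across reviews.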


This evaluation is provided for informational purposes only and reflects a subjective analysis based on publicly available data at the time of publication. We do not guarantee absolute accuracy and disclaim all liability for errors or misinterpretations. Any disputes must be submitted in writing to futurofintenet@proton.me

For full methodology, see our complete scoring guide here: LLM Privacy Rating Guide

The Future of the Internet: Google’s Finance Upgrade

As the digital landscape continues to evolve, the future of the internet is being shaped by emerging technologies like artificial intelligence. Google has taken a significant step in this direction by testing a revamped version of Google Finance. This enhanced platform is designed to provide users with sophisticated tools for financial research, leveraging AI to offer advanced charting, real-time data, and a dynamic news feed.

AI-Powered Financial Insights

The integration of AI into Google Finance represents a monumental shift in how users can interact with financial data. Gone are the days when static charts and delayed news were the norm. With AI, Google Finance now offers an interactive experience that can answer financial queries and provide insights that are both timely and relevant. For instance, imagine a user interested in understanding market trends; with AI capabilities, they can receive predictive analysis and expert insights in seconds.

Real-Time Data and News: A Game Changer

A key feature of this upgrade is the introduction of a live news feed, an essential tool for investors and enthusiasts who need up-to-the-minute information to make informed decisions. This real-time update capability ensures that users are never left in the dark about critical financial events. It’s similar to having a financial news channel running at all times on your personal device, tailored to your specific interests and portfolio.

The Role of Advanced Charting Tools

Another significant aspect of Google’s revamp is its advanced charting tools. These tools provide users with detailed visual analysis of stock performance and market trends. Such graphical representations can help even novice investors make sense of complex data, empowering them to navigate the volatile financial waters with greater confidence.

The Broader Implications on Internet Usage

This evolution in Google Finance is not just about financial empowerment; it hints at broader changes in how we use the internet. As platforms become more intelligent and user-centric, they encourage deeper engagement and personalized experiences. According to Wired, this trend reflects a larger movement towards an internet that truly understands and anticipates user needs.

A Glimpse Into Tomorrow’s Internet

The latest developments in Google Finance offer a window into what the internet could become — a highly interactive, responsive network that adapts continuously based on user behavior and preferences. The integration of AI across various domains is setting a precedent for other industries to follow suit.

How do you see AI transforming other aspects of our online experiences? Share your thoughts!

The Future of the Internet with GPT-5

The future of the internet is undergoing a revolutionary transformation with the introduction of OpenAI’s latest model, GPT-5. This cutting-edge AI represents an astonishing leap forward in artificial intelligence capabilities, poised to reshape how we interact digitally. With GPT-5, OpenAI aims to enhance usability in ChatGPT, offering a more seamless and human-like experience for users across the globe.

Breaking New Ground in AI Technology

Sam Altman, CEO of OpenAI, has heralded GPT-5 as their most advanced model to date. This advancement marks a significant milestone in the journey of AI development. By leveraging a vast dataset, GPT-5 has been meticulously trained to understand and generate text that mirrors human language more closely than ever before. This brings us one step closer to realizing the potential of artificial general intelligence, a concept that has long been the goalpost for AI researchers.

The Implications for Digital Interactions

In an era where digital communication dominates, GPT-5 could revolutionize how businesses and individuals interact online. For example, imagine customer service experiences driven entirely by AI capable of understanding nuanced inquiries and responding with precision and empathy. Such advancements could lead to significant improvements in user satisfaction and operational efficiency for companies worldwide.

Real-World Applications and Transformations

Take, for instance, the healthcare industry, which could greatly benefit from AI-driven support systems powered by models like GPT-5. These systems could assist doctors by providing instant access to medical information or even by offering preliminary diagnoses based on patient data. This integration could improve patient outcomes while reducing the workload on healthcare professionals.

The Future of the Internet: A World Reimagined

As we look towards the future, the implications of GPT-5 extend far beyond enhanced chatbots or support systems. The model’s potential to analyze and synthesize complex information could transform educational platforms, making personalized learning accessible to everyone regardless of their location or socioeconomic status. Furthermore, industries such as journalism can harness AI to curate content that is both informative and engaging while maintaining journalistic integrity.

A Broader Perspective on AI Ethics

However, alongside these advancements come essential discussions about ethics and responsibility in AI deployment. Ensuring that these powerful models are used responsibly will be critical to prevent misuse or unintended consequences—a concern echoed by many experts at leading institutions like MIT.

What are your thoughts on how GPT-5 could change our online world? Share your insights or join the conversation on how responsible AI can shape a better digital future.

DeepSeek V3 : Privacy and Data Review 2025

Here is our independent evaluation of DeepSeek V3, at the heart of the Web3 revolution and the quest for a sovereign and privacy-respecting AI. Based on an exclusive framework and a rigorous audit of publicly available data, this analysis reflects our vision of a future where privacy is a fundamental right. The scoring system is based on a comprehensive guide created specifically for this project, accessible here. This ranking is dynamic, evolving with innovations and feedback from the decentralized community. Our mission: to enlighten and inform, without filter or influence, to build together a fairer and more transparent AI ecosystem. Updated: 2025-08-08

Key Insights from the DeepSeek Privacy and Data Review

Model

DeepSeek-V3, an advanced proprietary large language model, optimizes contextual understanding and privacy but lacks public architectural details, limiting transparency.

Data Collection

  • Prompts stored: User inputs and chat history are retained to support service functionality and model enhancement. An opt-out mechanism (“Improve the model for everyone”) is available, but no automatic deletion policy exists; data is retained based on account or business requirements. Grade: C
  • Use for training: Access requires mandatory account creation (email, username, etc.), with no anonymous access. Opt-out for training is available, but the enforcement and scope of data use are unclear. Grade: C
  • Account required: An account is required to use the service, and personal information like email, username, and date of birth may be collected, but no sensitive data is required by default. No anonymous access option is mentioned. Grade: C
  • Data retention duration: Data is stored indefinitely unless the user initiates account deletion. No fixed retention limits are specified, raising concerns about prolonged data storage. Grade: D

User Control

  • Deletion possible: Users can delete their account and chat history via the platform interface. However, DeepSeek may retain certain data for legal compliance or to address violations, with no guaranteed immediate or unconditional deletion (no service-level agreement on deletion timelines). Grade: B
  • Export possible: Data export requires manual requests through support channels, lacking self-service functionality, which falls short of GDPR’s emphasis on accessible data portability. Grade: C
  • Granularity control: Users can opt out of model improvement and manage limited personal data (e.g., chat history). However, fine-grained control over all data types is not explicitly supported. Grade: B
  • Explicit user consent: For EU/UK users, GDPR-compliant consent is obtained for specific data processing, with clear information and options to withdraw consent, aligning with regulatory standards. Grade: A

Transparency

  • Clear policy: The privacy policy is comprehensive, accessible, and regularly updated, meeting high standards for clarity and user awareness. Grade: A
  • Change notification: Policy updates are communicated retroactively, lacking proactive user notification mechanisms. Grade: C
  • Model documentation: Limited references to model architecture are available but lack detailed public documentation, reducing transparency. Grade: C

Privacy by Design

  • Encryption (core & advanced): DeepSeek claims to implement “security measures” but provides no specifics on end-to-end or at-rest encryption protocols, raising concerns about data protection rigor. Grade: D
  • Privacy-Enhancing Technologies: Claims of data de-identification are noted, but no evidence of advanced PETs (e.g., differential privacy) or verifiable proofs is provided. Grade: D
  • Auditability & Certification: A GDPR representative (Prighter) is appointed, but no third-party audits or certifications are disclosed. Grade: D
  • Transparency & Technical Documentation: No detailed technical documentation on security measures or model architecture is available, limiting independent verification. Grade: D
  • User-Configurable Privacy Features: Only basic opt-out options are provided, with no advanced privacy customization features. Grade: C

Hosting & Sovereignty

  • Sovereignty: All user data is processed and stored in the People’s Republic of China, with no mention of self-hosting or alternative sovereign hosting. Grade: D
  • Legal jurisdiction: The legal jurisdiction is the People’s Republic of China, which does not provide strong data protection guarantees. Grade: D
  • Local option: No local or self-hosting options are available; the service is entirely cloud-based in China. Grade: D
  • Big Tech dependency: No explicit reliance on major US tech providers is noted. As a China-based entity with internal infrastructure, dependency is likely minimal. Grade: B

Open Source

  • Publicly available model: No indication that the model or training data is open source or publicly available. Grade: D
  • Clear open source license: No open source license provided or referenced. Grade: D
  • Inference code available: No mention of accessible inference code; API access only. Grade: D

Remarks

DeepSeek’s privacy policy is transparent and GDPR-compliant for EU/UK users, offering clear consent mechanisms and basic user controls (e.g., deletion, opt-out for model improvement). However, significant concerns arise from data processing and storage exclusively in China, subjecting user data to a jurisdiction with limited data protection guarantees. The absence of advanced privacy by design (e.g., encryption specifics, PETs, audits), local hosting options, and open source elements further weakens its privacy posture. While opt-out mechanisms exist, indefinite data retention, lack of proactive policy change notifications, and absence of anonymous access options are notable drawbacks.

Privacy and Data Review: Overall Score

26/100
   
  • Data Collection: 5 + 5 + 5 + 0 = 15
  • User Control: 15 + 5 + 15 + 20 = 55
  • Transparency: 20 + 5 + 5 = 30
  • Privacy by Design: 0 + 0 + 0 + 0 + 5 = 5
  • Hosting & Sovereignty: 0 + 0 + 0 + 15 = 15
  • Open Source: 0 + 0 + 0 = 0

Total: 15 + 55 + 30 + 5 + 15 + 0 = 120

Maximum possible: 23 criteria × 20 points = 460

Final score: 120 / 460 × 100 ≈ 26


This evaluation is provided for informational purposes only and reflects a subjective analysis based on publicly available data at the time of publication. We do not guarantee absolute accuracy and disclaim all liability for errors or misinterpretations. Any disputes must be submitted in writing to futurofintenet@proton.me

For full methodology, see our complete scoring guide here: LLM Privacy Rating Guide

The Future of the Internet with AI

In the rapidly evolving landscape of technology, the future of the internet is increasingly being defined by artificial intelligence. Duolingo, a leading language-learning platform, recently embraced an AI-first strategy, sparking a wave of criticism from skeptics. However, the company’s impressive financial results illustrate that this bold move was indeed a step in the right direction.

Duolingo’s AI-driven transformation

As companies around the globe pivot towards AI integration, Duolingo’s decision reflects a broader trend in which AI is becoming integral to redefining user experiences on the internet. Despite facing backlash, Duolingo reported strong financial growth, underscoring the potential of AI to drive business success and innovation. This success story offers a glimpse into how AI is not just a technological add-on but a transformative force shaping how digital platforms operate.

The backlash: An expected resistance?

Initial resistance to Duolingo’s AI-centric approach was perhaps inevitable. Users and critics alike voiced concerns over potential reductions in personalized human interactions and authenticity. Yet, such apprehension often accompanies technological advancements. In historical contexts, innovations like the printing press or personal computers faced similar skepticism before proving their worth. The lesson here is clear: resistance is often part of societal adaptation to groundbreaking technologies.

The role of AI in reshaping industries

The integration of artificial intelligence within platforms like Duolingo not only enhances learning experiences but also sets a precedent for other industries. Consider how AI has improved medical diagnostics and financial forecasting by offering unprecedented levels of precision and efficiency. As noted by Wired, AI’s potential to enhance productivity and innovation across various sectors is boundless.

The future of learning platforms

Looking ahead, as more educational platforms adopt AI technologies, we can anticipate significant shifts in how learning is perceived and accessed. AI facilitates personalization at scale, enabling platforms to tailor content to individual learner profiles with high precision. This capability can democratize education by making high-quality learning accessible to a more extensive audience worldwide.

Imagining the Internet’s future

As Duolingo’s case illustrates, embracing AI is not merely about keeping pace with competitors; it is about pioneering new frontiers that redefine what is possible on the internet. The future will likely see more platforms leveraging AI to create dynamic, responsive environments that cater to user needs more effectively than ever before. This evolution invites us all to rethink our relationship with technology and consider how it can enhance every aspect of our online experience.

Do you believe AI will continue to reshape industries or face new challenges? Share your thoughts!

The Future of the Internet: Google’s AI Impact

As we delve into the future of the internet, questions arise about the potential impacts of Google’s AI-powered search features. Google recently addressed concerns that these innovations might be diminishing website traffic. This topic is not only significant for web administrators but also for anyone interested in the evolving landscape of digital interactions.

Understanding Google’s AI Search Features

The introduction of artificial intelligence into Google’s search algorithms was meant to enhance user experience by delivering more relevant and personalized results. In theory, this should streamline how we access information and improve overall search efficiency. However, some critics argue that by providing answers directly within search results, Google might inadvertently be limiting visitors to original websites.

Analyzing Website Traffic Dynamics

Website traffic is a critical metric for online businesses and content creators. It serves as a barometer for engagement and success. Yet, Google’s assurance that their AI tools do not harm website visits remains under scrutiny. Despite the absence of specific data from Google supporting this claim, the debate persists about whether these advanced features cannibalize traffic or create new opportunities for visibility.

The Role of User Intent in Search

A crucial factor in this discussion is user intent. When users seek quick answers, AI-generated snippets may suffice, reducing clicks through to source sites. Conversely, detailed queries still drive users to full articles or reports where comprehensive information is needed. This nuanced balance can either hinder or help traffic, depending on content type and user needs.

The Growing Influence of AI on Digital Ecosystems

The influence of artificial intelligence extends beyond search engines into broader digital ecosystems. For example, similar debates are occurring across social media platforms where algorithms determine content visibility. According to a recent report from The Guardian, these technologies are reshaping how we engage with information on a global scale, emphasizing the importance of adapting to these shifts.

Navigating Future Challenges and Opportunities

Looking forward, adapting to changes brought by AI advances will be crucial for businesses and content creators alike. Understanding how these technologies function and leveraging them to enhance content strategy could open new pathways for engagement rather than stifle it. As we continue to explore this dynamic landscape, collaboration between tech companies and content creators will be essential in shaping an internet that benefits all stakeholders.

Do you think AI in search will hinder or help online content visibility? Share your thoughts!

45 Billion Records Leaked: STOP!

Never before have data leaks threatened our digital privacy on such a massive scale, making it clear that no one is truly safe online anymore.

We’ve Lost Control, And the Numbers Prove It

If you think your data is safe, think again. Since the dawn of the internet, the digital world has crossed a threshold: more than 45 billion personal records have been leaked globally. These records don’t just represent statistics; they are fragments of our identities, financial lives, health information, private conversations, and digital reputations. With landmark incidents like the “Mother of All Breaches” (MOAB), an aggregation of 26 billion records, the 2025 super-dump of 16 billion credentials, Yahoo’s infamous 3 billion account leak, and countless corporate and governmental hacks, virtually no internet user has escaped.

The truth is simple and brutal: we have lost control of our data. Our digital shadows are copied, sold, and weaponized daily, feeding a multi-billion dollar industry where the product is… us.

Why Is This Getting Worse Every Year?

Despite more headlines, regulations like GDPR, and so-called “awareness campaigns,” the situation is only deteriorating. The internet’s core business model is still based on surveillance, profiling, and targeted advertising. Every click, every post, every purchase is a datapoint; not just for tech giants, but for data brokers, ad networks, and, increasingly, cybercriminals.

This ecosystem is self-reinforcing: more data means more profit, which means more incentive to collect and exploit, not to protect. Meanwhile, hackers exploit vulnerabilities in cloud services, APIs, third-party vendors, and even artificial intelligence models. As technology becomes more complex, the attack surface grows exponentially.

The result? Every year sets new records for the number and scale of data breaches. And while new privacy laws are passed, their real-world impact is often limited. Fines are tiny compared to tech giants’ profits, and enforcement is slow and inconsistent.

Why Hasn’t Anything Changed?

There are three fundamental reasons.

First, privacy doesn’t pay (yet). Free internet services are profitable because your data is the currency. Privacy-first business models exist, but they are harder to scale and rarely offer the same “frictionless” user experience that people have come to expect.

Second, users are too often complacent or misinformed. Most still prefer the convenience of single sign-ons, autofill, and seamless integration, despite the risks. Even when tools exist, from password managers to privacy browsers, the learning curve and the “pain” of changing habits keep mass adoption low.

Third, regulatory and business incentives are misaligned. Big Tech has every reason to slow down true reform, lobbying against strong privacy measures, and, when forced, complying superficially (the rise of “privacy washing”). Governments themselves rely on data for everything from surveillance to digital services, so systemic change threatens their own power.

The Coming Decade (2025–2035)

We’re entering the era of privacy washing. In the next five years, you will see a boom in brands selling privacy as a feature: encrypted chats, anonymous search, “data vaults” and certifications. But behind the scenes, many of these will be superficial, designed to reassure users without dismantling the underlying data business. Expect more labels and seals, but little real change, at first.

True privacy-by-design will remain a niche for innovators and geeks until something big breaks the system. The majority of users won’t change overnight. History shows that widespread behavioral shifts only happen after trauma: think Chernobyl for nuclear safety, or 9/11 for airport security. The digital equivalent, a “privacy Chernobyl”, is not a matter of if, but when. It could be a leak of biometric data, global medical records, or AI-powered identity theft at scale.

The business model of data exploitation will survive the rest of this decade. Big Tech and data brokers will retain control until around 2030, because privacy technologies are not yet mature or profitable enough for a mass transition.

After the shock, demand for real privacy will explode. Once the pain becomes personal for enough people (and governments), the market will shift rapidly. Privacy tools (decentralized ID wallets, zero-knowledge proof platforms, encrypted AI assistants, private browsers) will become mainstream. Expect a wave of “privacy unicorns,” especially in B2B and critical infrastructure, before mass adoption in consumer tech.
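The zero-knowledge proofs mentioned above let someone prove they know a secret without ever revealing it. A minimal Schnorr-style sketch in Python makes the idea concrete; the parameters here are toy values chosen purely for illustration and are completely insecure (real systems use ~256-bit elliptic-curve groups):

```python
import random

# Toy Schnorr proof of knowledge of a discrete log.
# Public: prime p, generator g, and y = g^x mod p. Secret: x.
p = 23          # tiny prime modulus (INSECURE, illustration only)
q = 22          # order of g in the group (p - 1 here)
g = 5           # generator

x = 7                   # prover's secret
y = pow(g, x, p)        # public value derived from the secret

def prove(challenge_fn):
    r = random.randrange(q)     # fresh random nonce
    t = pow(g, r, p)            # commitment sent to the verifier
    c = challenge_fn(t)         # verifier's random challenge
    s = (r + c * x) % q         # response; r masks x, so x stays hidden
    return t, c, s

def verify(t, c, s):
    # Accept iff g^s == t * y^c (mod p): only possible if prover knew x
    return pow(g, s, p) == (t * pow(y, c, p)) % p

t, c, s = prove(lambda t: random.randrange(q))
print(verify(t, c, s))  # True, yet x was never transmitted
```

The verifier learns nothing about x itself, only that the prover must know it; production protocols (zk-SNARKs, zk-STARKs) generalize this trick to arbitrary statements.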

By the early 2030s, privacy will be a competitive edge, then a requirement. The next generation of digital giants will win not because they capture more data, but because they empower users to control it, with zero-knowledge cryptography, self-sovereign identity, and privacy-by-default apps. Governments will catch up with global frameworks inspired by Web3 and advanced cryptography. The trade-off? The web will become more secure, but perhaps less open and “free” in the old sense.

Web3 and AI will converge around privacy. Generative AI models (LLMs, assistants) will run locally or in secure, user-controlled environments, not in the cloud. Your AI will know you intimately, but its knowledge will never leave your device. This is the endgame for true digital autonomy.

The Logical Timeline

From 2025 to 2028, the world will see more privacy branding, more leaks, and the first significant migration of privacy-conscious users to new platforms. From 2028 to 2032, one or more mega-scandals will force regulators, enterprises, and the public to face the reality: the data exploitation model is broken. This is when investment and adoption in privacy-first solutions will accelerate, first among professionals and at-risk populations, then more broadly. By 2032–2035, privacy-by-design will be the new standard. Startups and platforms that fail to adapt will disappear, just as early web companies did at the dawn of social media.

Where Is the Real Innovation and Investment?

The most transformative projects to watch and support now include both established leaders and next-generation innovators. On the decentralized identity front, Privado.iD (Polygon ID), Fractal, Dock, Spruce ID, Banza, and Walrus are setting new standards for user-controlled digital identity. In privacy blockchains and zero-knowledge protocols, Zama, Nillion, Aleo, Aztec, Nym, and Arcium are building the infrastructure for confidential, censorship-resistant transactions and communications. For privacy-preserving AI, OpenMined, Pindora, Nil-GPT, Gensyn, and Ritual are developing solutions that keep your data truly private.

Privacy isn’t just about infrastructure, it’s about everyday tools anyone can use on both PC and mobile. Messaging apps like Signal offer true end-to-end encryption. Secure email platforms like Proton Mail and Skiff protect your communications. Email alias services such as SimpleLogin help shield your main inbox. Password managers like Bitwarden or 1Password ensure your logins stay secure. VPNs like Mullvad, privacy-first browsers like Brave, Firefox (with privacy add-ons), Qwant, and Tor put you back in control of your web footprint. For those who want to go further, alternative Android operating systems like GrapheneOS and CalyxOS help you reclaim your phone’s privacy from the ground up.

Several new mobile apps are pushing privacy innovation even further. Banza empowers users to manage and monetize their personal data and AI twins directly from their phone, putting ownership in your pocket. Obsidian offers encrypted note-taking and knowledge management on mobile. Snikket enables private group messaging using the XMPP protocol. Briar provides peer-to-peer encrypted messaging, even without internet access. Jumbo helps automate your privacy settings and data clean-up on popular platforms. Apps like Tella allow you to capture, store, and encrypt sensitive photos and notes, ideal for activists, journalists, or anyone who values digital security.

Don’t overlook next-gen platforms like Dappnode (decentralized infra), Iron Fish (private crypto), and Lit Protocol (programmable privacy).

No matter your device, adopting these apps and following these projects is the best way to reclaim your digital sovereignty today, while building a privacy-respecting internet for tomorrow.

Investing in privacy tech today is like buying domain names in 1998 or stacking Bitcoin in 2013: high risk, but massive potential if you pick the right projects early. The privacy revolution isn’t just hype: public sentiment, regulations, and the relentless wave of data leaks are all pushing in one direction. (DYOR)

Why Is This Still So Hard?

The biggest bottleneck is user experience. If privacy isn’t as simple and seamless as “Sign in with Google,” most people won’t switch. The future will belong to the teams who can combine cutting-edge privacy with zero-friction onboarding and interoperability. Regulation will eventually force change, but as always, innovation will lead, and the law will follow.

A Final Word and a Choice

If your personal data was oil, would you give it away for free to anyone who asked? Why, then, do you surrender your identity, your reputation, and your future so easily? The next era of the internet will be shaped by those who fight for privacy, not those who exploit it.

Are you ready to be a builder, not a bystander? The time to act, invest, and educate is now, before you become just another number in the next “Mother of All Breaches.”

Start protecting your digital identity today: switch to privacy-first tools, support emerging privacy tech, and help spread awareness. If this article opened your eyes, share, like, and comment on X to help spark real change. The future of the internet depends on what you do next.


Top 10 Biggest Data Breaches

  1. Mother of All Breaches (MOAB) – 2024
    Leak aggregators report 26 billion records stitched together from numerous past breaches, including Twitter, LinkedIn, Adobe, Canva, and many more.

  2. 16 Billion Credentials Super-Dump – 2025
    A jaw-dropping compilation of 16 billion logins and passwords circulating on the dark web, drawn from recent and historic leaks.

  3. Chinese Surveillance Database Leak – June 2025
    Exposed 4 billion records, including detailed profiles from platforms like WeChat and Alipay.

  4. Yahoo Breaches – 2013/2014
    A massive 3 billion user accounts were compromised across two separate incidents.

  5. National Public Data (USA) – 2024
    A breach involving 2.9 billion individual records across identity-related databases (addresses, SSNs, etc.).

  6. Verifications.io Leak – 2019
    Over 2 billion records, including 763 million unique personal data entries (emails, phone numbers, etc.).

  7. Real Estate Wealth Network Leak – 2023
    A misconfigured database exposed approximately 1.5 billion property-related records, including tax IDs and financial entries.

  8. River City Media – 2017
    1.37 billion records leaked, comprising names, physical addresses, emails, and IP addresses.

  9. People Data Labs / OxyData.io – 2019
    A leak of 1.2 billion records, exposing names, emails, social profile links, and phone numbers.

  10. Shanghai Police Database Leak – 2022
    More than 1 billion individuals affected; exposed data included case details, resident ID numbers, names, photos, and addresses.

Visions of the Internet by Llama

Welcome to The Voice of AI, a bilingual blog series where today’s most advanced language models share their predictions for the digital world ahead. Each entry invites a unique AI to answer the same pivotal question: What is your vision for the future of the Internet? Today, we feature Llama 4’s perspective, exploring emerging technologies, ethical responsibilities, and the human experience in a digitized world.

Since its emergence, the Internet has continuously evolved, shaping our societies, economies, and daily lives. Today, as we go through a major transition period, it is worth asking what the future holds for the Internet over the next 15 years. Here is a structured, reasoned analysis of major trends, with estimated time markers for key changes.

Current Internet (2025): An Overview of the Foundation

In 2025, the Internet is centralized, fragmented, and monitored. It is dominated by a few digital giants (GAFAM, BATX) who control most of the traffic, data, and services. Connectivity is progressing, but 40% of the planet remains offline. 5G is becoming widespread, allowing high speeds and reduced latency, but net neutrality is being challenged in several countries. Security and privacy are at the heart of debates, with regulations like the GDPR in Europe. Artificial intelligence (AI) is integrated everywhere, personalizing the user experience and automating many tasks.

The 5 Major Axes of Evolution by 2040

1. Decentralization and Digital Sovereignty (2025–2030)

One of the major challenges will be the decentralization of the Internet. In response to the concentration of power among a few players, alternatives are emerging: blockchains, peer-to-peer networks, decentralized protocols (IPFS, Mastodon, Matrix). By 2030, we will witness the rise of “Web 3.0” (or the “decentralized Web”), where users regain control of their data and digital identity. Governments, for their part, will seek to assert their digital sovereignty by creating national or regional infrastructures (sovereign cloud, local data exchange networks).

Key projected dates:

  • 2025–2028: Mass deployment of blockchain solutions for digital identity and e-money.

  • 2028–2030: Rise of mainstream decentralized platforms (social networks, messaging, storage).

2. Omnipresent Artificial Intelligence and Extreme Personalization (2025–2035)

AI will radically transform the user experience. By 2035, interaction with the Internet will mainly take place through hyper-personalized conversational assistants, capable of anticipating our needs and autonomously managing our digital lives. Traditional interfaces (keyboard, mouse) will gradually give way to voice, gesture, and even thought (brain-machine interfaces in experimental phase around 2035).

Key projected dates:

  • 2025–2030: Widespread adoption of advanced chatbots and voice assistants integrated into all services.

  • 2030–2035: Experimental deployment of direct neural interfaces for certain professional and medical uses.

3. Internet of Things (IoT) and Smart Cities (2025–2040)

The number of connected devices will explode, surpassing 100 billion by 2040. Smart cities will become the norm, with real-time management of flows (energy, transport, waste) thanks to sensors and big data analysis. 6G (starting in 2030) will offer near-zero latency and increased reliability, enabling critical applications (autonomous vehicles, remote surgery).

Key projected dates:

  • 2025–2030: Generalization of advanced 5G and first deployments of 6G in metropolitan areas.

  • 2030–2040: Full integration of IoT into urban spaces, with autonomous resource management systems.

4. Security, Privacy, and Regulation (2025–2040)

Cybersecurity will become a major geostrategic issue. Attacks will become increasingly sophisticated (AI vs. AI), requiring automated defense. Privacy will remain a struggle, with increasingly strict regulations and widespread encryption tools. Quantum computing (from 2035) will disrupt current cryptography, requiring a transition to post-quantum algorithms.

Key projected dates:

  • 2025–2030: Strengthening of regulations on data protection and algorithm transparency.

  • 2035–2040: First operational quantum computers, start of the transition to post-quantum cryptography.

5. Access Equity and the Digital Divide (2025–2040)

Despite progress, the digital divide will persist. Low Earth orbit satellite projects (Starlink, Kuiper) will enable connectivity in rural areas and developing countries, but access will remain unequal in terms of quality, cost, and freedom of use. Issues of technological sovereignty and net neutrality will continue to divide the world.

Key projected dates:

  • 2025–2030: Global coverage by satellite constellations, reduction of unserved areas.

  • 2030–2040: Debates on global Internet regulation and multiparty governance.

Summary of Major Changes (Indicative Timeline)

Period | Major Change | Expected Impact
2025–2030 | Web 3.0, massive IoT, 5G/6G, AI assistants | Decentralization, smart cities
2030–2035 | Neural interfaces, post-quantum cryptography | Immersive experience, enhanced security
2035–2040 | Quantum Internet, global governance | New paradigms of trust

Conclusion: A Smarter, But More Fragmented Internet

The future of the Internet will be marked by increased intelligence, extreme personalization, and partial decentralization, but also by growing fragmentation (national sovereignty, diverging regulations). The battle for privacy, security, and access equity will be permanent. By 2040, the Internet will no longer be just a “network of networks,” but a critical, intelligent, and omnipresent infrastructure at the heart of all human activity.

The Internet of tomorrow will therefore be both more powerful and more conflictual: a mirror of our aspirations and our divisions.

Notes on Sources and Estimates

The dates provided are projections based on the state of the art of technology, industry roadmaps, and observable societal trends in 2025. They remain indicative and may evolve depending on technological breakthroughs, political choices, and future crises.

This article synthesizes a forward-looking vision based on the analysis of current trends and ongoing innovations. It does not claim to be exhaustive, but aims to open the debate on the crucial issues that will shape the Internet in the next 15 years.

Estimates based on ITU and Alliance for Affordable Internet reports regarding global coverage in 2025. The GDPR (General Data Protection Regulation) is the European reference for digital privacy.

Llama 4 : Privacy and Data Review 2025

Here is our independent evaluation of Llama 4 (Meta), at the heart of the Web3 revolution and the quest for a sovereign and privacy-respecting AI. Based on an exclusive framework and a rigorous audit of publicly available data, this analysis reflects our vision of a future where privacy is a fundamental right. The scoring system is based on a comprehensive guide created specifically for this project, accessible here. This ranking is dynamic, evolving with innovations and feedback from the decentralized community. Our mission: to enlighten and inform, without filter or influence, to build together a fairer and more transparent AI ecosystem. Updated: 25/08/07

Key Insights from the Llama Privacy and Data Review

Model

Meta’s Llama 4 variants (Scout, Maverick, Behemoth) feature documented architectures (Mixture-of-Experts, early fusion multimodality), sizes up to 1.9T parameters, and training on 30T tokens (text, image, video). Technical whitepapers and model cards provide some transparency, but details on training data provenance, bias mitigation, and security audits are incomplete. While Meta’s openness about architecture is a significant step, the lack of specifics on data sources (e.g., public or licensed datasets), bias testing, and independent audits limits reproducibility and trust. Full datasheets and third-party audit results would set a new industry standard.

Data Collection

Prompts stored: Prompts and responses are not stored beyond the session, a strong privacy feature. However, other user data (e.g., identity, device info, usage patterns) may be retained without clear anonymization or deletion policies, posing privacy risks. Meta should guarantee no persistent prompt storage, anonymize all prompts by default, and define clear retention periods. C
Use for training: User data, even if anonymized, may be used to improve Llama 4, with no clear opt-out for standard users (self-hosting excepted). This lack of control and transparency over data use is concerning. A technical opt-out mechanism is needed. C
Account required: Personalized use requires an account with standard personal data collection. Anonymous use is limited to self-hosting, which is only feasible for technical users. Most users must share personal information. C
Data retention duration: User data is retained “as long as needed” for service or legal purposes, with no specific deletion timeline except for cookies (up to 400 days). This vagueness creates uncertainty. Meta should specify exact retention periods for all data types. C

User Control

Deletion possible: Account data can be deleted under GDPR, but the process may not be immediate or complete due to legal retention requirements. Full user control requires instant, total deletion by default. C
Export possible: GDPR-compliant data export is available, but the format and completeness are unclear. Standardized, usable formats (e.g., JSON, CSV) would enhance portability. B
Granularity control: Basic privacy settings are available, but fine-grained controls (e.g., per-prompt or per-interaction) are absent. Granular dashboards would empower users. C
Explicit user consent: Explicit consent for sensitive data and ads is GDPR-compliant and revocable, a strong transparency feature. A

Transparency

Clear policy: Meta’s privacy policy is detailed, readable, and regularly updated, clearly outlining data practices. A
Change notification: Users are notified in advance of major policy changes and can review them, a key transparency practice. A
Model documentation: Only high-level information on architecture, security, and data is provided, lacking detailed technical documentation. Full disclosure is needed for accountability. C

Privacy by Design

Encryption (core & advanced): Security is mentioned, but specifics on encryption methods (e.g., end-to-end) or certifications are absent, undermining trust. C
Privacy-Enhancing Technologies: Llama 4 employs differential privacy with noise injection (likely ε ≤ 1) during training and supports on-device inference with Int4 quantization for local deployment, earning a B rating due to strong privacy protections but risks from open-source weight distribution and limited training data transparency. Local fine-tuning and secure aggregation further enhance privacy for sensitive applications. B
Auditability & Certification: No evidence of independent audits is provided. Regular, public audits would enhance transparency. D
Transparency & Technical Documentation: No advanced technical documentation on privacy or architecture, only high-level information. You can’t really see “under the hood” of Llama 4’s privacy protections. Full technical disclosure is needed for experts and users alike. C
User-Configurable Privacy Features: Basic privacy controls exist, but advanced customization (e.g., custom retention periods, prompt-level controls) is lacking. More flexible options would strengthen user control. C
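For readers unfamiliar with the noise injection mentioned above: differential privacy adds calibrated random noise so that any single person’s data barely changes a published result. A minimal Python sketch of the Laplace mechanism for a counting query (the ε value is the review’s estimate for Llama 4, not a Meta-confirmed figure):

```python
import random

def dp_count(values, predicate, epsilon=1.0):
    """Differentially private count via the Laplace mechanism.

    A counting query has sensitivity 1 (one person changes the count
    by at most 1), so the noise scale is 1 / epsilon; smaller epsilon
    means more noise and stronger privacy.
    """
    true_count = sum(1 for v in values if predicate(v))
    # The difference of two Exp(1) draws is a standard Laplace(0, 1) sample
    noise = (random.expovariate(1.0) - random.expovariate(1.0)) / epsilon
    return true_count + noise

ages = [23, 35, 41, 29, 52, 61, 38]
print(dp_count(ages, lambda a: a >= 40, epsilon=1.0))  # noisy value near 3
```

Each query returns a slightly different answer, so no single release pins down whether any individual is in the data; model training applies the same principle to gradients rather than counts.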

Hosting & Sovereignty

Sovereignty: Self-hosting allows enterprises and advanced users to control data on their servers, a major privacy advantage. Non-technical users rely on Meta’s GDPR-compliant EU cloud, which is less flexible. A Legal jurisdiction: Meta-hosted data falls under EU/Irish GDPR laws; self-hosted data follows local laws. Both offer strong legal protections. A Local option: Self-hosting is available for enterprises and experts, but regular users are limited to Meta’s cloud. A local option for all users would improve flexibility. B Big Tech dependency: Cloud users are tied to Meta’s infrastructure, while self-hosting offers independence but requires technical expertise. C

Open Source

Publicly available model: Llama 4’s weights are downloadable but come with usage restrictions, falling short of true open-source freedoms. No access to training data or full code is provided, limiting adaptability. B Clear open source license: The custom “open weights” license restricts full open-source freedoms (e.g., sharing, modification). An OSI-approved license (e.g., MIT, Apache) would boost trust and innovation. C Inference code available: Providing inference code enables self-hosting, a significant win for transparency and control, though primarily for technical users. A

Remarks

Llama 4 advances open, sovereign AI with accessible model weights, self-hosting options, and session-only prompt storage. However, incomplete details on training data, bias mitigation, security audits, and open-source licensing limit trust. To set a privacy and trust benchmark, Meta must provide detailed technical documentation, adopt true open-source licensing, conduct regular public audits, and offer robust user controls, especially for non-technical users.

Privacy and Data Review: Overall Score

52.2/100
   
  • Data Collection: 5 + 5 + 5 + 5 = 20
  • User Control: 5 + 15 + 5 + 20 = 45
  • Transparency: 20 + 20 + 5 = 45
  • Privacy by Design: 5 + 15 + 0 + 5 + 5 = 30
  • Hosting & Sovereignty: 20 + 20 + 15 + 5 = 60
  • Open Source: 15 + 5 + 20 = 40

Total: 20 + 45 + 45 + 30 + 60 + 40 = 240

Maximum score: 23 criteria × 20 points = 460

240 / 460 × 100 ≈ 52.2
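The tally can be checked programmatically. The points mapping below (A=20, B=15, C=5, D=0) is inferred from the per-criterion grades and category subtotals in this review; it is not stated explicitly here:

```python
# Reproduce the overall Llama 4 score from the per-criterion letter grades.
# Mapping A=20, B=15, C=5, D=0 is inferred from the subtotals above.
POINTS = {"A": 20, "B": 15, "C": 5, "D": 0}

grades = {
    "Data Collection":       ["C", "C", "C", "C"],
    "User Control":          ["C", "B", "C", "A"],
    "Transparency":          ["A", "A", "C"],
    "Privacy by Design":     ["C", "B", "D", "C", "C"],
    "Hosting & Sovereignty": ["A", "A", "B", "C"],
    "Open Source":           ["B", "C", "A"],
}

total = sum(POINTS[g] for gs in grades.values() for g in gs)
maximum = sum(len(gs) for gs in grades.values()) * 20   # 23 criteria x 20
print(total, maximum, round(total / maximum * 100, 1))  # 240 460 52.2
```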


This evaluation is provided for informational purposes only and reflects a subjective analysis based on publicly available data at the time of publication. We do not guarantee absolute accuracy and disclaim all liability for errors or misinterpretations. Any disputes must be submitted in writing to futurofintenet@proton.me

For full methodology, see our complete scoring guide here: LLM Privacy Rating Guide