At the heart of the Web3 revolution and the pursuit of sovereign, privacy-respecting AI, we present the nilGPT Privacy Data Review. This independent evaluation of Nillion Labs’ AI model is built on an exclusive framework and a rigorous audit of publicly available data, reflecting our vision of a future where privacy is treated as a fundamental right.
The scoring system follows a comprehensive guide created specifically for this project, accessible here, and evolves dynamically with innovations and feedback from the decentralized community.
This nilGPT Privacy Data Review reinforces our mission to promote transparency, data protection, and digital sovereignty for AI users. Our goal remains clear: to enlighten and inform, without filter or influence, so we can build together a fairer and more transparent AI ecosystem.
Last updated: 2025-08-13
Key Findings from the nilGPT Privacy Data Review
Model
Currently Meta-Llama-3.1-8B (70B planned soon), with Gemma-3-27B-IT to follow in a few weeks
Data Collection
Prompts stored: Chat histories are securely stored in nilDB, a decentralized MPC-based storage network, where the data cannot be reconstructed or leaked even if individual nodes are compromised. A
Use for training: No, inputs/outputs are not used to train any AI model, ensuring data confidentiality. A
Account required: Registration via email or wallet. Pseudonymity is possible, but an account is always required. B
Data retention duration: Data is stored in nilDB and owned by the user, and data deletion from within the app will be available in the next release. B
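The MPC storage model can be illustrated with a toy additive (XOR) secret-sharing scheme: data is split into shares such that any incomplete subset of shares is indistinguishable from random noise. This is a generic sketch, not Nillion's actual protocol; the three-way split simply mirrors the three-node deployment described in the Hosting & Sovereignty section.

```python
import secrets
from functools import reduce

def xor_bytes(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def split_secret(data: bytes, n: int = 3) -> list[bytes]:
    # First n-1 shares are uniformly random pads.
    shares = [secrets.token_bytes(len(data)) for _ in range(n - 1)]
    # Final share is the data XORed with all pads, so the XOR of
    # all n shares reconstructs the original bytes.
    shares.append(reduce(xor_bytes, shares, data))
    return shares

def reconstruct(shares: list[bytes]) -> bytes:
    return reduce(xor_bytes, shares)

prompt = b"private chat message"
shares = split_secret(prompt, n=3)    # one share per storage node
assert reconstruct(shares) == prompt  # all 3 shares recover the data
```

With this construction, an attacker holding only one or two shares learns nothing about the plaintext, which is the property the "no data can be reconstructed even in case of compromise" claim relies on.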
User Control
Deletion possible: Yes, via email request; in-app deletion is planned for a future release. B
Export possible: Users can request access and data portability (copy of data) via email, but they cannot access their data directly or export it themselves. B
Granularity control: There are no fine-grained controls within the app to choose what is collected or stored. C
Explicit user consent: Obtained where legally required (e.g., marketing), but no strong evidence of explicit, easy consent for all processing activities. C
Transparency
Clear policy: Accessible, detailed, and up-to-date (July 2025). A
Change notification: Users are informed of material changes with updated effective dates. A
Model documentation: Models are listed with references; source code to be published within ~1 month. B
Privacy by Design
Encryption (core & advanced): Inputs/outputs are encrypted locally in the browser using a user-chosen passphrase (which never leaves the device), then secret-shared across nilDB nodes. Models run within TEEs, and the entire backend operates on nilCC (Nillion’s confidential compute layer). All data is encrypted in transit and at rest. A
Privacy-Enhancing Technologies: MPC for data at rest, TEEs for inference. Differential privacy and federated learning are not used, as the current architecture does not require them. A
Auditability & Certification: No third-party audits yet. Code to be open-sourced in ~1 month. C
Transparency & Technical Documentation: Architecture and privacy principles are described publicly, but no full technical documentation covering all privacy and technical measures. B
User-Configurable Privacy Features: Beyond the user-chosen encryption passphrase, the app offers no configurable privacy settings. C
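The passphrase-based local encryption can be sketched with a standard key-derivation step: the passphrase is stretched into a fixed-length key on the client, and only the derived key is ever used for encryption. This is an illustrative example using PBKDF2-HMAC-SHA256; nilGPT's actual browser-side scheme and parameters are not publicly documented.

```python
import hashlib
import secrets

def derive_key(passphrase: str, salt: bytes, iterations: int = 600_000) -> bytes:
    """Stretch a user passphrase into a 32-byte encryption key (PBKDF2-HMAC-SHA256)."""
    return hashlib.pbkdf2_hmac("sha256", passphrase.encode(), salt, iterations, dklen=32)

# The salt is random per user and can be stored alongside the ciphertext;
# the passphrase itself never needs to leave the device.
salt = secrets.token_bytes(16)
key = derive_key("correct horse battery staple", salt)
```

Because the key is derived locally and deterministically from the passphrase and salt, the server side only ever sees ciphertext, which matches the claim that the passphrase never leaves the device.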
Hosting & Sovereignty
Sovereignty: B
- The nilGPT backend is hosted on nilCC, which runs on Nillion’s bare-metal servers in Virginia.
- Data is stored on nilDB nodes.
- Three nilDB nodes serve nilGPT: one operated by Nillion and two by external parties.
Legal jurisdiction: Nillion Labs is a company based in Ireland, and the data collected by nilGPT is subject to Irish (EU) law and the General Data Protection Regulation (GDPR). A
Local option: It is not possible to host nilGPT locally, as it is only accessible via a web application. D
Big Tech dependence: nilGPT does not rely on public cloud infrastructure: nilCC, which hosts the backend and the AI models, runs on bare metal, while nilDB nodes can be hosted anywhere without compromising the security or confidentiality of user data. A
Open Source
Publicly available model: The models used are open source. The nilGPT application code is scheduled to be released under an open source license within approximately one month. B
Clear open source license: The models are distributed under their respective licenses (Llama, DeepSeek). The application’s license will be disclosed when the code is released as open source, expected within about one month. Note that while nilAI currently offers DeepSeek, only the Llama model is currently available in nilGPT. B
Inference code available: The models are open source; the inference code and its license are to be published in ~1 month. B
Remarks
nilGPT, developed by Nillion Labs, implements advanced privacy engineering, with user data split and stored using MPC, encrypted, and processed within secure Trusted Execution Environments (TEEs). The platform operates independently of Big Tech cloud infrastructure, relying instead on sovereign bare-metal hosting. Its policies are detailed, transparent, and regularly updated. The models in use are open source with clearly defined licenses. While the application and inference code have not yet been released, both are scheduled for open-source publication in the near term, along with in-app, self-service data deletion capabilities. These planned enhancements, combined with the platform’s already robust architecture, are expected to further strengthen its privacy and sovereignty profile. A re-evaluation is planned once these changes are implemented.
nilGPT Privacy Data Review: Overall Score
71.7/100
- Data Collection: 20 + 20 + 15 + 15 = 70
- User Control: 15 + 15 + 5 + 5 = 40
- Transparency: 20 + 20 + 15 = 55
- Privacy by Design: 20 + 20 + 5 + 15 + 5 = 65
- Hosting & Sovereignty: 15 + 20 + 0 + 20 = 55
- Open Source: 15 + 15 + 15 = 45
Total: 70 + 40 + 55 + 65 + 55 + 45 = 330
Maximum possible: 23 criteria × 20 points = 460
330 / 460 × 100 ≈ 71.7 (projected ~81 after the planned release)
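The overall score can be reproduced with a short script; the per-criterion scores and category names are taken directly from this review, and each criterion is scored out of 20:

```python
scores = {
    "Data Collection":       [20, 20, 15, 15],
    "User Control":          [15, 15, 5, 5],
    "Transparency":          [20, 20, 15],
    "Privacy by Design":     [20, 20, 5, 15, 5],
    "Hosting & Sovereignty": [15, 20, 0, 20],
    "Open Source":           [15, 15, 15],
}

total = sum(sum(v) for v in scores.values())     # points earned: 330
criteria = sum(len(v) for v in scores.values())  # criteria count: 23
overall = total / (criteria * 20) * 100          # normalize to /100

print(f"{overall:.1f}/100")  # prints "71.7/100"
```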
This evaluation is provided for informational purposes only and reflects a subjective analysis based on publicly available data at the time of publication. We do not guarantee absolute accuracy and disclaim all liability for errors or misinterpretations. Any disputes must be submitted in writing to futurofintenet@proton.me
For full methodology, see our complete scoring guide here: LLM Privacy Rating Guide