OpenAI launches Privacy Filter, an open-source, on-device data sanitization model that removes personal information from enterprise datasets

Carl Franzen
11:01 am PT, April 22, 2026
Credit: VentureBeat made with OpenAI ChatGPT-Images-2.0

In a significant shift toward local-first privacy infrastructure, OpenAI has released Privacy Filter, a specialized open-source model designed to detect and redact personally identifiable information (PII) before it ever reaches a cloud-based server. Launched today on the AI code-sharing community Hugging Face under a permissive Apache 2.0 license, the tool addresses a growing industry bottleneck: the risk of sensitive data "leaking" into training sets or being exposed during high-throughput inference.

By providing a 1.5-billion-parameter model that can run on a standard laptop or directly in a web browser, the company is effectively handing developers a "privacy-by-design" toolkit that functions…
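The flow described above, sanitizing data on-device so that only redacted text ever leaves the machine, can be sketched with a simple rule-based stand-in. The actual Privacy Filter is a neural model; the regex patterns, placeholder labels, and function names below are illustrative assumptions, not its API:

```python
import re

# Hypothetical stand-in for an on-device PII sanitizer. The real tool
# is a 1.5B-parameter neural model; these regexes are illustrative only.
PII_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "PHONE": re.compile(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def redact(text: str) -> str:
    """Replace each detected PII span with a typed placeholder."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

def prepare_for_upload(text: str) -> str:
    # Privacy-by-design: redaction happens locally, before any
    # payload is handed to a cloud API or training pipeline.
    return redact(text)

record = "Contact Jane at jane.doe@example.com or 555-123-4567."
print(prepare_for_upload(record))
# → Contact Jane at [EMAIL] or [PHONE].
```

A neural detector would catch context-dependent identifiers (names, addresses) that fixed regexes miss, but the pipeline shape is the same: sanitize first, transmit second.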

Read more on VentureBeat