OpenAI wants to stop ChatGPT from validating users’ political views

“ChatGPT shouldn’t have political bias in any direction.” That’s OpenAI’s stated goal in a new research paper released Thursday about measuring and reducing political bias in its AI models. The company says that “people use ChatGPT as a tool to learn and explore ideas” and argues “that only works if they trust ChatGPT to be objective.” But a closer reading of OpenAI’s paper reveals something different from what the company’s framing of objectivity suggests. The company never actually defines what it means by “bias.” And its evaluation axes show that it’s focused on stopping ChatGPT from several…

Read more on Ars Technica