The Dangers and Consequences of AI Openwashing

Image source: Unsplash, Van Tay Media

The phenomenon of openwashing poses a serious threat to the AI industry, companies, and users alike, as a growing number of AI models are falsely presented as open source. Openwashing occurs when an AI model developer releases only selected components, such as the model architecture, while withholding critical elements like the full training dataset or documentation, or when the release carries restrictive licensing terms. In response, the Linux Foundation AI & Data Generative AI Commons recently created the Model Openness Framework (MOF), which defines three classes of openness to ensure genuine transparency.
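To make those classes concrete, here is a minimal sketch of MOF-style grading. The component names and per-class requirements below are simplified assumptions for illustration, not the official MOF checklist, which enumerates many more artifacts.

```python
# Minimal sketch of Model Openness Framework-style grading.
# Component names and per-class requirements are simplified
# assumptions, not the official MOF component list.

REQUIRED = {
    3: {"architecture", "weights", "technical_report"},         # Class III: Open Model
    2: {"training_code", "inference_code", "evaluation_code"},  # Class II adds Open Tooling
    1: {"training_dataset", "research_paper", "data_card"},     # Class I adds Open Science
}

def mof_class(open_components: set[str]) -> int | None:
    """Return the highest class satisfied (Class I is the most open)."""
    achieved = None
    for level in (3, 2, 1):  # each class also requires everything from the class below it
        if REQUIRED[level] <= open_components:
            achieved = level
        else:
            break
    return achieved

# A weights-only "openwashed" release satisfies no class at all:
print(mof_class({"weights"}))                                      # None
print(mof_class({"architecture", "weights", "technical_report"}))  # 3
```

Under this scheme, a release that ships only weights, the pattern critics attribute to openwashed models, does not even reach Class III.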

Using falsely open AI models carries significant legal, compliance, and financial risks for companies. According to Cloud Geometry, openwashed models often rely on legally questionable datasets or code in violation of licensing terms, which can lead to fines of up to €500 million, especially in regulated industries such as healthcare, where HIPAA and other data protection regulations are strict. These models may also contain security vulnerabilities, embed biases, and use or share customer data in non-transparent ways, severely damaging corporate reputation. As The Economist reported, Meta's Llama models exemplify the problem: the company releases model weights but withholds training data and imposes licensing restrictions.
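As a first, purely mechanical due-diligence step, a company can at least read a model's self-declared license before adopting it. The sketch below uses the huggingface_hub client and assumes the Hugging Face Hub's "license:<id>" tag convention; the value it returns is self-reported metadata, not a legal verification.

```python
# Sketch: fetch a model's self-declared license tag from the
# Hugging Face Hub as a first compliance check. Assumes the
# Hub's "license:<id>" tag convention; the value is supplied
# by the model uploader, not independently verified.
from huggingface_hub import model_info

def declared_license(repo_id: str) -> str | None:
    info = model_info(repo_id)
    for tag in info.tags or []:
        if tag.startswith("license:"):
            return tag.removeprefix("license:")
    return None

print(declared_license("mistralai/Mistral-7B-v0.1"))  # e.g. "apache-2.0"
```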

The long-term consequences of AI openwashing include slowed innovation, shrinking access to genuinely open models, and eroded trust in AI. Under draft standards recently issued by the Open Source Initiative (OSI), open-source AI requires developers to make available sufficient information about training data, source code, and model weights to enable replication, criteria that Meta's popular Llama models fail to meet. The Linux Foundation's Model Openness Tool, a diagnostic questionnaire, helps developers determine their model's openness rating; because that rating is currently self-reported, users must conduct their own due diligence before adopting a model.
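Because the rating is self-reported, a declared license string (such as the one fetched in the earlier sketch) still needs vetting. A minimal sketch follows; both license lists are illustrative assumptions, not authoritative registries.

```python
# Sketch: vet a self-declared license identifier against a
# (partial, illustrative) allowlist of OSI-approved licenses
# and a list of known restrictive "open-ish" licenses.

OSI_APPROVED = {"apache-2.0", "mit", "bsd-3-clause", "mpl-2.0"}  # partial list
KNOWN_RESTRICTIVE = {"llama2", "llama3", "openrail"}             # illustrative only

def vet_license(reported: str) -> str:
    tag = reported.strip().lower()
    if tag in OSI_APPROVED:
        return "OSI-approved open-source license"
    if tag in KNOWN_RESTRICTIVE:
        return "restrictive license: review the usage terms before adoption"
    return "unknown license: manual legal review required"

print(vet_license("Apache-2.0"))  # OSI-approved open-source license
print(vet_license("llama3"))     # restrictive license: review the usage terms...
```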

Sources:

Why You Need to Worry About Openwashing in Generative AI Models
Discover the hidden risks of openwashing in generative AI tools like ChatGPT and Dall-E. Learn about the Linux Foundation’s new Model Openness Framework and Tool, designed to ensure transparency and protect your business and customers.
Openwashing: a Closer Look at Transparency in Open Source AI
Explore the debate on transparency in open source AI, its challenges, and the impact of openwashing practices.
Meta accused of “open washing” AI models, clashing with open-source purists
Meta has been criticized by the open-source community for its approach to AI development, as the company seeks to define open-source AI on its own terms while potentially exploiting regulatory loopholes.