Openwashing, in which AI models are falsely presented as open source, poses a growing threat to the AI industry, companies, and users alike. It occurs when a model developer releases only selected components, such as the model architecture or weights, while withholding critical elements like the complete training dataset or documentation, or while applying restrictive licensing terms. To counter this, the Linux Foundation AI & Data Generative AI Commons recently created the Model Openness Framework (MOF), which defines three levels of openness to ensure genuine transparency.
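The MOF's three classes can be read as nested checklists over a release's published components. The Python sketch below illustrates that idea; the class names follow the MOF, but the component lists are abbreviated, simplified assumptions for illustration, not the official MOF inventory.

```python
# Sketch: classify a model release against MOF-style openness classes.
# Component lists are a simplified assumption, not the official MOF checklist.
COMPONENTS = {
    "model_architecture", "model_parameters", "model_card",
    "training_code", "inference_code", "evaluation_results",
    "training_dataset", "research_paper",
}

CLASS_REQUIREMENTS = [
    # Ordered from most open (Class I) to least open (Class III).
    ("Class I: Open Science", COMPONENTS),
    ("Class II: Open Tooling",
     COMPONENTS - {"training_dataset", "research_paper"}),
    ("Class III: Open Model",
     {"model_architecture", "model_parameters", "model_card"}),
]

def classify(released: set[str]) -> str:
    """Return the most open class whose required components are all released."""
    for name, required in CLASS_REQUIREMENTS:
        if required <= released:
            return name
    return "Unclassified: does not meet any class (possible openwashing)"

# Example: a release that ships weights and architecture but withholds the
# training data and code -- the pattern the article calls openwashing.
release = {"model_architecture", "model_parameters", "model_card"}
print(classify(release))  # -> "Class III: Open Model"
```

Note what the sketch makes concrete: a weights-plus-architecture release can never rise above the lowest class, and a release under restrictive licensing terms would not qualify as open at all, regardless of which components it ships.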
Using falsely open AI models carries significant legal, compliance, and financial risks for companies. According to Cloud Geometry, openwashed models often rely on legally questionable datasets or code in violation of licensing terms, which can lead to fines of up to €500 million, especially in regulated industries such as healthcare, where HIPAA and other data protection rules are strict. Such models may also contain security vulnerabilities, encode biases, and use or share customer data in non-transparent ways, severely damaging corporate reputation. As The Economist has reported, Meta's Llama models, championed by Mark Zuckerberg, exemplify the problem: Meta releases the weights but withholds the training data and imposes licensing restrictions.
The long-term consequences of AI openwashing include slower innovation, loss of access to genuinely open models, and eroded trust in AI. Under the Open Source Initiative's (OSI) recently issued draft Open Source AI Definition, an open-source AI system requires developers to make available sufficient information about the training data, source code, and model weights to enable replication; Meta's popular Llama models fail to meet these criteria. The Linux Foundation's Model Openness Tool (MOT), a diagnostic questionnaire, helps developers determine their model's openness rating. However, ratings are currently self-reported, so users must conduct their own due diligence before adopting a model.
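Because MOT ratings are self-reported, a consumer can cross-check a claimed rating against the artifacts that are actually downloadable and inspectable. Below is a minimal, hypothetical due-diligence sketch; the rating-to-component mapping is a simplified assumption, not the official MOT questionnaire.

```python
# Hypothetical due-diligence check: does the self-reported openness rating
# match the components a user can actually obtain? The mapping below is a
# simplified assumption, not the official MOT questionnaire.
REQUIRED_FOR_CLAIM = {
    "Class III: Open Model": {"model_architecture", "model_parameters",
                              "model_card"},
    "Class II: Open Tooling": {"model_architecture", "model_parameters",
                               "model_card", "training_code",
                               "inference_code"},
}

def rating_holds(claimed: str, verifiable: set[str]) -> bool:
    """True only if every component the claimed rating implies is
    actually available for independent inspection."""
    required = REQUIRED_FOR_CLAIM.get(claimed, set())
    return bool(required) and required <= verifiable

# A vendor self-reports "Class II" but ships only weights and a model card:
# the due-diligence check fails, flagging a possible openwashed release.
print(rating_holds("Class II: Open Tooling",
                   {"model_parameters", "model_card"}))  # -> False
```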