What is Sage’s AI Trust Label and When Will You Actually See It?

Sage recently announced its “AI Trust Label” initiative, a direct attempt to solve a problem it quantifies with its own research: while 85% of SMBs who trust AI actively use it, adoption drops to just 48% among those who don’t. To close this “trust gap,” the label aims to provide clear, non-technical information about how an AI feature works and what safeguards are in place. The immediate counterpoint, however, is that this is a form of self-certification: Sage is both creating the criteria and applying the label to its own products, meaning the label’s value is directly proportional to a customer’s existing trust in the Sage brand.

While Sage is applying the label itself, the company emphasizes that its framework is guided by established, external standards. It cites its adoption of the US NIST AI Risk Management Framework and the UK Government’s AI Cyber Security Code of Practice as foundational to its efforts. With this as its basis, Sage is now calling for collaboration with industry and government to create a unified, certified labeling system. This positions the initiative as both a practical tool and a strategic move to shape the future conversation around AI governance, using existing frameworks as a starting point to build a new standard.

The rollout of the AI Trust Label is scheduled to begin later this year, starting in the UK and the US. According to the announcement, the label will not appear on all products at once; it will initially be applied to selected AI-powered features. Customers will see the label directly within the product experience, with links to more detailed information about the underlying data usage and ethical framework on the company’s online Trust & Security Hub.