
By Chris Payne — October 10th, 2025

Behind the Certification: How Poppulo Built the Foundations for Responsible AI with ISO 42001
The headlines are already out there: Poppulo is the first in our industry to achieve ISO 42001, the world’s first internationally recognized standard for Artificial Intelligence Management Systems (AIMS).

This post is about what didn’t make the press release—how we actually did it. The decisions, the groundwork, the lessons, and the moments that shaped the process from idea to audit.

What matters most to me about this achievement is how we got there, not the certificate itself—though we’re obviously very proud of it.

Starting With Governance as a Strategic Advantage

When we started developing our agentic AI tools, we made one decision that shaped everything that followed: governance, security, trust, and scalability would be built in from day one—just as they’ve been part of Poppulo’s DNA since the very beginning.

That early commitment gave us something invaluable: time to experiment, iterate, and build controls that actually worked for how we operate, rather than retrofitting governance frameworks after the fact.

Building on Strong Foundations

One of our smartest early decisions was recognizing that we didn't need to start from scratch. Rather than treating ISO 42001 as an entirely separate effort, we used our ISO 27001 framework as the foundation.

With some inventiveness and strategic thinking, we expanded those existing controls into the AI-specific requirements. This approach significantly reduced the initial workload and allowed us to move faster while maintaining rigor.

Our security controls became the backbone for AI security. Our data handling protocols evolved to address AI-specific data considerations. Our risk management frameworks extended naturally into AI risk assessment. We were intelligently expanding what we'd already built and avoiding a parallel governance structure.

This also meant our teams were working with familiar frameworks rather than learning entirely new systems. It created continuity and made the process feel like a natural evolution rather than a disruptive overhaul.

Cross-Functional Innovation in Building Controls

One of the most inspiring outcomes was seeing the creativity that flourished when our teams rallied around a shared purpose and common goal.

We established cross-functional partnerships across engineering, product, legal, security, customer success, and leadership. But rather than treating this as a compliance exercise, we approached it as a design challenge: How do we build controls that are both rigorous and practical? How do we create evidence that demonstrates what we do, not just what we say we do?

Engineering teams worked closely with security and legal teams to build data handling protocols that protect privacy while still enabling the AI capabilities our customers need.

Leadership took an active interest in AI usage, culminating in the creation of an AI Governance Council to ensure departments across the organization have visibility and a voice.

Our customer success and product teams contributed real-world use cases that helped us stress test our governance frameworks. Their insights shaped how we think about incident response, user feedback loops, and continuous improvement.

Our teams helped shape it from the ground up, so what emerged was a living system they genuinely believed in, not a compliance checklist handed down from above.

Building Evidence That Tells the Story

Figuring out how to demonstrate our governance in action turned out to be one of the most interesting challenges. The standard doesn't just ask for policies on paper; it requires evidence that those policies are embedded in how you actually operate.

We got creative in how we captured and presented that evidence. We built automated monitoring systems that continuously track AI model performance, and set up live dashboards to track AI use case reviews and approvals.
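To make that concrete, here is a minimal, purely illustrative sketch of the kind of check an automated monitoring job might run and log for a dashboard to pick up. The model name, metric, and threshold are assumptions for illustration, not our actual implementation.

```python
# Illustrative sketch only: a minimal performance check of the kind an
# automated AI-monitoring job might run. Names and thresholds are hypothetical.
import json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone


@dataclass
class ModelMetricSnapshot:
    model_name: str
    metric: str          # e.g. "answer_acceptance_rate"
    value: float
    threshold: float     # minimum acceptable value for this metric
    captured_at: str

    @property
    def within_tolerance(self) -> bool:
        return self.value >= self.threshold


def record_snapshot(snapshot: ModelMetricSnapshot,
                    log_path: str = "model_metrics.jsonl") -> None:
    """Append the snapshot to a JSON Lines log that a dashboard can read."""
    entry = asdict(snapshot) | {"within_tolerance": snapshot.within_tolerance}
    with open(log_path, "a", encoding="utf-8") as fh:
        fh.write(json.dumps(entry) + "\n")


if __name__ == "__main__":
    snap = ModelMetricSnapshot(
        model_name="comms-assistant-v2",          # hypothetical model name
        metric="answer_acceptance_rate",
        value=0.91,
        threshold=0.85,
        captured_at=datetime.now(timezone.utc).isoformat(),
    )
    record_snapshot(snap)
    if not snap.within_tolerance:
        print(f"ALERT: {snap.model_name} {snap.metric} below threshold")
```

The point isn't the code itself; it's that every check leaves a timestamped record behind, so the evidence accumulates as a byproduct of normal operation.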

We stood up an internal AI Innovation Hub that provides central access to guidelines and processes written in everyday English, allowing any employee to participate in the journey with use cases and ideas.

We developed frameworks for categorizing AI use cases by risk level, which allowed us to apply the right level of governance to each application.
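As a purely illustrative sketch of the general idea: a handful of questions about a proposed use case map to a risk tier, and each tier carries a defined set of controls. The tiers, questions, and controls below are assumptions for illustration, not our actual framework.

```python
# Hypothetical sketch of a use-case risk-tiering helper; the tiers, inputs,
# and required controls are illustrative, not the actual framework.
from enum import Enum


class RiskTier(Enum):
    LOW = "low"           # e.g. internal drafting aids, human always in the loop
    MEDIUM = "medium"     # e.g. customer-facing content with human review
    HIGH = "high"         # e.g. automated decisions or sensitive personal data


def classify_use_case(handles_personal_data: bool,
                      customer_facing: bool,
                      automated_decision: bool) -> RiskTier:
    """Map a few yes/no questions about a proposed AI use case to a tier."""
    if automated_decision or handles_personal_data:
        return RiskTier.HIGH
    if customer_facing:
        return RiskTier.MEDIUM
    return RiskTier.LOW


# Each tier carries a defined set of governance controls (illustrative).
REQUIRED_CONTROLS = {
    RiskTier.LOW: ["governance council notification"],
    RiskTier.MEDIUM: ["council review", "model monitoring", "user feedback loop"],
    RiskTier.HIGH: ["council approval", "impact assessment",
                    "incident-response runbook", "ongoing audit"],
}

print(classify_use_case(handles_personal_data=False, customer_facing=True,
                        automated_decision=False))  # RiskTier.MEDIUM
```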

We created decision logs that document why we made specific architectural choices in our AI systems, preserving the reasoning behind our governance decisions for future reference and audit.
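For illustration only, a decision log entry can be as simple as a structured record appended to a file. The fields below are an assumed schema in the spirit of an architecture decision record, not our actual format.

```python
# Illustrative only: one way to capture a decision record for an AI system
# so the reasoning survives for future reference and audit. Field names and
# the example entry are assumptions, not Poppulo's actual schema.
import json
from dataclasses import dataclass, field, asdict


@dataclass
class AIDecisionRecord:
    decision_id: str
    title: str
    context: str                      # why the decision was needed
    decision: str                     # what was chosen
    alternatives: list[str] = field(default_factory=list)
    consequences: str = ""            # known trade-offs and follow-ups
    owners: list[str] = field(default_factory=list)
    date: str = ""


def append_record(record: AIDecisionRecord,
                  path: str = "ai_decision_log.jsonl") -> None:
    with open(path, "a", encoding="utf-8") as fh:
        fh.write(json.dumps(asdict(record)) + "\n")


# Example (hypothetical) entry:
append_record(AIDecisionRecord(
    decision_id="ADR-012",
    title="Keep a human approval step for customer-facing AI content",
    context="Customer-facing messaging carries reputational and compliance risk.",
    decision="Model drafts content; publishing requires human review.",
    alternatives=["Fully automated publishing", "Spot-check sampling"],
    consequences="Slower publishing path; clearer accountability.",
    owners=["product", "legal"],
    date="2025-03-01",
))
```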

We established regular governance reviews that bring together stakeholders from across the company to evaluate new AI initiatives, assess emerging risks, and adjust our frameworks as needed, with meeting records and action items tracked systematically.

We implemented feedback mechanisms that capture insights from customers, employees, and external partners, and that demonstrate how that feedback influences our AI development.

These were tools that made our governance more visible, more actionable, and more effective for our day-to-day operations.

The Collaborative Audit Process

When the accredited certification body conducted their rigorous audit, we were ready. Not because we'd created a perfect paper trail, but because we'd built governance into the fabric of how we work.

The auditors examined our processes, interviewed our teams, and validated that our governance, risk management, and accountability practices meet the strictest international benchmarks.

What could have felt adversarial became collaborative as our teams were genuinely excited to show how our systems work.

They verified our AI risk assessment frameworks, our accountability structures, our monitoring systems, our incident response processes, and our compliance with data privacy and security standards across global regulations. The comprehensive nature of the audit validated that we'd built something substantial.

What We Learned Along the Way

This journey reminded us that governance doesn't have to be bureaucratic or burdensome. When you involve the right people, give them creative freedom, and align everyone around protecting your customers and building trustworthy AI, governance becomes an accelerator rather than a barrier.

Evidence is most powerful when it's a natural byproduct of good processes, not something you create after the fact for auditors.

Cross-functional partnerships proved essential. The best governance ideas came from unexpected places: a privacy lawyer who suggested weekly technical roadmapping sessions, a customer success manager who identified gaps in incident reporting, a product manager who drove regular feature demos that made everything clearer and accelerated our progress.

Building controls collaboratively creates buy-in that makes those controls work in practice.

Why This Matters for Our Customers

This third-party validation provides our customers with confidence that Poppulo's AI features are built on the strongest possible foundation of responsible practices and resilient systems.

As organizations increasingly operate on a global stage, including in regions with some of the world's strictest AI regulations, this certification ensures consistent and responsible AI governance. It gives enterprises confidence in engaging employees and customers through our platform, knowing we provide a clear, global benchmark for AI readiness.

Looking Ahead

Achieving ISO 42001 certification is a milestone, but maintaining it requires continuous improvement, regular audits, and ongoing vigilance as new governance challenges emerge. The frameworks we've built give us the structure to adapt and evolve while maintaining trust.

We're proud to set a new standard for responsible AI in employee communications and digital signage. But more than that, we're excited about the collaborative, innovative culture we've built around governance, one that will serve us well as AI technology continues to evolve.

To every team member who brought creative solutions, who collaborated across silos, who built controls that actually work, thank you.

This achievement belongs to all of us.
