Do No Harm: Why AI Regulation Must Focus on Outcomes, Not the Technology Itself
By Shawnna Hoffman, President, Guardrail Technologies
The recent announcement that the UK will chart its own path on AI regulation highlights the global race to balance technological innovation with responsible oversight. While nations debate frameworks and carve out strategies, one thing remains clear: regulating outcomes, not technology, is the only way to ensure AI systems are safe, ethical, and beneficial for humanity.
As President of Guardrail Technologies, I often sound like a broken record, urging legislatures worldwide to adopt a principle-focused approach to AI regulation. Instead of fixating on the specifics of algorithms, data sets, or architectures, policymakers must focus on the real-world impacts these technologies have on individuals and societies. Why? Because technology advances so rapidly that by the time a regulation targeting a specific technology is drafted, debated, and implemented, that technology will have already evolved beyond its scope.
I recently had the honor of speaking at the Ethical Dragon Quest event hosted by the Worshipful Company of Information Technologists in London, where I discussed the critical need for guardrails in AI. At the heart of my message was a simple yet powerful statement that should guide us in this era of transformative innovation: “Do no harm.” This principle, deeply rooted in the ethos of medical and legal professionals, must become the cornerstone of AI governance.
Imagine if every AI deployment were held to the same standard of care as a surgeon in the operating room or a lawyer defending a client. This is not about stifling innovation; it is about ensuring that AI systems do not harm the people they are meant to serve. By focusing on outcomes, we can establish a universal benchmark for safety, fairness, and accountability, no matter how fast the underlying technology evolves.
The UK’s move to “do its own thing” on AI regulation could serve as a model for outcome-focused governance. By prioritizing principles over prescriptive rules, nations can create flexible regulatory frameworks that adapt to rapid innovation while protecting society from harm. As a global community, we must recognize that AI doesn’t respect borders, and a collaborative, principles-driven approach is essential to ensure safe and responsible AI use worldwide.
At Guardrail Technologies, we are committed to providing solutions that empower organizations to use AI responsibly, embedding ethical guardrails that align with this “do no harm” philosophy. The technology is here, the momentum is unstoppable, and the stakes are higher than ever. Let’s ensure that as AI reshapes our world, it does so responsibly, safely, and with humanity at its core.