Interview: Patrick Sullivan, A-LIGN
This is the second guest interview episode of Algorithm Integrity Matters.
Patrick Sullivan is Vice President of Strategy and Innovation at A-LIGN and an expert in cybersecurity and AI compliance with over 25 years of experience.
Patrick shares his career journey, discusses his passion for educating executives and directors on effective governance, and explains the critical role of management systems like ISO 42001 in AI compliance.
We discuss the complexities of AI governance, risk assessment, and the importance of clear organizational context.
Patrick also highlights the challenges and benefits of AI assurance and offers insights into the changing landscape of AI standards and regulations.
00:00 Introduction
00:23 Patrick's Career Journey
02:31 Focus on AI Governance
04:19 Importance of Education and Internal Training
08:08 Involvement in Industry Associations
14:13 AI Standards and Governance
20:06 Challenges with Preparing for AI Certification
28:04 Future of AI Assurance
Yusuf:
Today we have a special guest. Patrick Sullivan is Vice President of Strategy and Innovation at A-LIGN. Patrick is a seasoned expert in cybersecurity compliance and AI compliance with over 25 years of experience in IT security and assurance. Welcome, Patrick.
Patrick:
Yusuf, thank you so much for having me. Grateful to be here.
Yusuf:
Can we start by just talking a bit about your background and how you landed at A-LIGN?
Patrick:
Oh goodness, Yusuf. I'm not sure we have enough time. When I talk about this, I refer to a Joseph Campbell quote. I don't know if you're a Campbell fan or not, but Campbell has this quote: follow your bliss. If you follow your bliss, doors will open where before there were only walls. And Yusuf, honestly, that's the story of my career. I started out studying electronics of all things in the mid-90s. That experience opened my eyes to what was going on in IT at the time, which was not a new field, but considering where we are now, a relatively infant field, if you will. So I transitioned into IT. That led to transitions into network engineering, transitions into cybersecurity. Ultimately, about seven years ago, I recognized that my real passion wasn't necessarily working with the technical components anymore. I really love teaching, and there is no better way to teach than to position yourself between the decision makers in the business and the technical experts that are implementing the things that need to be implemented to actually offer a level of security or control to the business, to mitigate and manage risk in that business. And so, too many words to say, I transitioned to the role that I currently have at A-LIGN, which is really one of working to help the market think differently about what is and is not important. Really, my responsibility is to ensure that executives and corporate directors aren't wasting their time chasing things that are inconsequential, but are really focusing on those areas of leverage, where for the minimum input they can derive maximum output.
Yusuf:
And what would that encompass? So the scale of A-LIGN's activities is quite broad. Where do you find yourself playing the most?
Patrick:
Yusuf, today I would be lying if I did not say it mainly centers around AI, AI governance, and to some extent AI security. I think there's still a little bit of confusion in the overlap between AI security and AI governance, but 95 percent of my conversations center on governance, and then the structures that support building good governance infrastructures inside organizations.
Yusuf:
Okay. And when we talk AI security, we're talking about the security of AI as opposed to AI for security.
Patrick:
Correct, absolutely. Those two things are different perspectives, but yes, yes.
Yusuf:
What exactly is it that you do in your role at A-LIGN? I know you're very prolific at producing information and guidance on various standards, and you've got several roles that we'll talk about later, but what does that look like for you in terms of shifting that perspective that people might have?
Patrick:
Goodness. It really is about education, and I know I said that earlier, but I really want to come back to that. I think in order for executives and directors to make the best decisions, they have to have the best information and context of their organization at hand. As a knowledge leader, which is sometimes what we refer to my role as internally, it really is my responsibility, my obligation, to ensure that the market understands what the obstacles are in its way, so that they can make good decisions about how to either go over or go around those obstacles in a meaningful way while staying on purpose and continuing to work toward the mission of the organization, not becoming completely distracted, which we do see some businesses doing today, unfortunately.
Yusuf:
And would that translate into internal education as well? So, A-LIGN staff and what they need to do for your clients?
Patrick:
Without question, without question. A prime example: many of your listeners are likely aware of ISO's newer standard on AI management, ISO 42001. As an organization that performs certification assessments against management system standards, A-LIGN is a certification body; I'll just refer to us as a CB, that's the common vernacular. But to become a CB necessarily means that everyone in our organization has to understand our obligations as a certification body. And then, more importantly, our sales team has to understand what the products and services associated with this new standard look like, so that as they interface with the market, we're not misleading, misrepresenting, or in any way adding confusion to what's already a process that really is loaded with fear, uncertainty, and doubt. FUD is a very big part of any new standards change or any new regulatory change. And so in many ways, through education, we hope to minimize that FUD and maximize the confidence that directors and executives have in making decisions about how to proceed.
Yusuf:
Okay. And so reducing some of the anxiety that might go with trying to keep up with all of these standards.
Patrick:
Where do I start? There's so much. I feel like I have to boil the ocean. It's all confusing. These are all new words. I want to focus on my business and creating the products or services that we create. I don't want to deal with this, but I have to. That being said, how can we help those individuals approach this process in as direct and meaningful a way as possible? That really is the burden that I've taken on.
Yusuf:
A-LIGN being a certification body, do you often find yourself involved in pre-audit type consultation services to help people get to where they need to as well?
Patrick:
So yes, but no. Yes, but no. As a certification body, we have very strict guidelines around maintaining impartiality and independence. To maintain our status in good standing as an auditor, we cannot also advise. But we do have very, very strong partnerships with advisory firms that can step in and do the advisory, the pre-work that's necessary to help organizations prepare to withstand an audit delivered by us. That said, we can perform readiness assessments, which are essentially assessments outside the bounds of the certification assessment. The results of those readiness assessments, however, are simply our perspective on whether or not an AIMS, an ISMS, or a PIMS meets the intent of the standard. We can offer no guidance or recommendation on corrective action without violating our impartiality.
Yusuf:
And of course, A-LIGN is well known. You've got customers all over the world.
Patrick:
We do.
Yusuf:
I know here in Australia, we see your reports come up every now and then for our organizations who rely on various service providers that you provide certifications to.
Patrick:
Yes, we operate globally. We do, and in our pre-call, I think I mentioned this: we're headquartered here in the States, in Tampa, Florida. We have offices in Ireland, in India, in Panama, and to my corporate marketing team, I apologize for not remembering all of our locations. But we do have a global footprint, so we operate internationally.
Yusuf:
Excellent. All right. So I want to touch quickly on some of the associations that you're involved in. I know you and I met through the IAAA. So the...
Patrick:
International Association of Algorithmic Auditors.
Yusuf:
There we...
Patrick:
There we go.
Yusuf:
I couldn't quite get it for a second.
Patrick:
That's funny how that works. Yeah. And the harder you try to remember, the further away it creeps.
Yusuf:
That's right, yeah. Anyway, so you're involved in the IAAA, and then you're also involved, quite interestingly and importantly, in the ISO/IEC JTC 1/SC 42 initiative, creating standards.
Patrick:
Yes. Yes. And I've recently taken on a role with SC 27 as well, the information security subcommittee inside ISO.
Yusuf:
What inspired that sort of extracurricular activity? You've got a big day job as VP of Strategy and Innovation, you've got a growing family as we spoke about earlier, and these extra initiatives or activities really would take up, I imagine, a fair amount of time outside of that. So there must have been some pretty strong inspiration or drive to get involved in those. Can you talk a bit more about that?
Patrick:
Sure, and I think there's more of an indirect answer to that, Yusuf. Maybe 20 years ago now, I was at a place in my career... Actually, it was 2006, so not quite 20 years, but close. I was at a place in my career where I had taken on a really big job helping start a medical school. It was so cool. I had so much fun. I also failed miserably, because I did not engage in that job as my full self. And so, too many words to say, through that failure I found a book called The On-Purpose Person, where in the book you essentially walk through these exercises to be able to finish, for yourself, the statement: I exist to serve so that dot, dot, dot. And so I really took time to get my head right about what's most meaningful and important for me, what my purpose is. And Yusuf, I settled on, and I still to this day absolutely believe, that I exist to serve so that others can reach their full potential. That's it. And so everything I do, whether it's technical, non-technical, AI related, non-AI related, my family work, whatever, everything I do is about ensuring that I stay on purpose and helping others reach their full potential. Now, as I look at what we're doing with AI in our various markets today, there is a desperate need in our market for education. There are so many standards out here, so many organizations, some with really, really positive intent but very, very poor execution ability. There are so many data points now that I recognize in our field, in our market, there's a desperate need for help. And so I've taken on these extracurricular activities not necessarily for any other reason than I absolutely believe it's the right thing to do. Because we have a market, we have a community, full of not just experts, but people that are incredible human beings looking to create something better for humanity.
And so I see my role, though small, as being one of ensuring that people have the best data they can have, the best framing, the best perspective they can have to make good decisions about where to go next. So it is a burden, but I love it, Yusuf. I absolutely love it.
Yusuf:
And thank you for that. When I think about standards, and you mentioned it just there, there's a lot of work going on. One of the things that I struggle with, and I know that many people that I speak to struggle with, is how do these things all fit together? There are various standards, and I know that you do spend quite a bit of time thinking about that and putting together information or material about that. But at a high level, how do you think about how all the standards work together, and what that means for implementation for somebody that wants better AI governance, let's say?
Patrick:
Yeah, yeah. And I think first, and this may be the disappointing statement, but I don't think all the standards work together. I think some of the standards, some of the regulations, have a very specific view that can be complemented by other standards, but don't necessarily support each other, if that makes sense. And Yusuf, honestly, I don't know if you remember the Balanced Scorecard, the whole Norton and Kaplan approach to taking these very soft virtues or goals and slowly working through a process of connecting those virtues and goals. That's what's missing in our field, until you begin looking at what ISO has done with their management systems. In ISO management systems, whether it's information security or, since we're talking about AI, the AIMS, what ISO has offered is an extensible structure, a set of best practices that any organization can take on and bend and twist and extend and cut and really mold to form the foundation that allows that organization to then bolt on, in a very meaningful way, in a very practical way, the requirements that they need to meet external stakeholder obligations. So, too many words to say, though there is no Rosetta Stone today for the overlap of the various regulations and standards, in my mind, beginning with an ISO management system, whether it's the ISMS or the AIMS, puts organizations in the best position to reasonably and responsibly extend and grow to meet the needs of those external obligations. In terms of AI standards in particular, of course, 42001 is one of the major standards that we're working on. Everybody's talking about what exactly it is and why it's significant for organizations that are developing or using AI systems. So it, again, is a governance and management system. It's a framework. Let me take a step back very quickly on governance.
I really like the definition that ISACA has offered, which is that governance is a value creation process that centers around creating desired outcomes at optimized risk and cost. That said, we know the three variables that we really have to focus on if we're going to ensure creating value, or governing, inside of our organization: what do we really want, what risk are we prepared to take on, and what cost are we prepared to take on to do so. And so as we think about the management system, it is a structure that walks us through a process of thinking about, first of all, who are we? What is the context of our organization? Where do we sit in our ecosystem? Who are our internal stakeholders, external stakeholders? What is this thing that we say we're even doing? We have to, as an entity, define those things very clearly, very articulately, so that we can move on to the next step of the process, which is getting management buy-in. Once we understand who we are and what it is we claim to do, the next thing all management systems require is that management buy in and actually commit to ensuring not only that resources are available to execute against this vision, but also that appropriate planning and appropriate commitments are made to ensure that those resources can do what they need to do. So, what we see through the management system structure itself is a series of what are referred to as clauses. These are just commitments or steps. They're gate checks, if you will: I need to do this; once this is done, I can move to the next; once that's done, I'll move to the next. But we walk through this process of defining our context, of getting management and leadership buy-in, of really planning. It's one thing to say that we want to do something, but if we begin executing before we've planned any changes, we're not going to create what we want.
The management system recognizes this and forces us to operationalize planning so that we can then allocate appropriate resources, provide appropriate support, and operationalize. And then on the back end, the management system itself is based on ideas offered to us by W. Edwards Deming: this whole idea that if we have a system that we want to improve over time, we really need to do a few things over and over and over again. We've got to plan our efforts, we've got to execute, we've got to do those things we said we would do. Periodically, we have to check to be sure we're still on track to create the outcomes that we want. And if we're not, we have to remediate; we have to act in some way, shape, or form to course correct. And so we create this cycle of plan, do, check, act that's built into the management system itself. So again, in my mind, it becomes a very easy playbook. It's almost, in meditation, what do they call it, a mantra? The management system almost becomes a mantra for your organization to set the tone, to set the expectation for how we'll think about changes, how we'll execute those changes, and then how we'll ensure over time, in a consistent, meaningful way, that those changes are actually creating the outcomes that we intended for them to create. Now, with that as the backdrop, plug in Regulation A, plug in Regulation B, plug in Regulation C, and suddenly you have this operating system inside your business that allows you, not to ensure success, but to put yourself in the best position to actually create what it is you say you want to create.
Yusuf:
Okay. So it's translating intention into action specifically for AI.
Patrick:
You said that so much better than I did, Yusuf, in so many fewer words. Yes, yes.
Yusuf:
Okay. And why would an organization want to adopt 42001, and then certify against it, and particularly certify?
Patrick:
Yeah, so two different things. Oh, three different things here. So first of all, the why: hopefully we articulated just a moment ago all the positive reasons, all the benefits for an organization. That's why implement. As for why certify, we see two different things happening right now. We see market pressure on organizations, particularly here in the States, to offer assurance to their customers, to those in their value chain, that they are using AI, that they are developing AI, in a responsible way. We also see regulatory pressure, not so much here in the States, but overseas, in Europe particularly. I think many folks are familiar with the EU AI Act. Regardless, organizations are developing and using AI, and organizations need to have a way to offer assurance, either to regulators or to their individual market, that they are doing the right things in that development. The right things to protect against bias, to protect against harm, to protect against manipulation, all those things that we need to ensure we protect the community against. The AI management system, the governance system, allows us to have an external, independent entity audit us, audit our activities, and offer third-party validation that yes, in fact, we are doing those things correctly.
Yusuf:
In implementing those, and as you go through certification processes, what are the one, two, or three most common challenges that your clients face? You guys have been doing this for quite some time now, so you will have seen some patterns emerge, I imagine. What are those common challenges?
Patrick:
Honestly, one of the biggest, and challenge is probably the right way to frame this, one of the biggest challenges I've seen for organizations is that they just don't know who they are. I mentioned the importance of scoping and setting the context, understanding who you really are. That is the key to anything else that happens inside a management system. Many organizations have developed a habit of running so quickly and being so opportunistic that they struggle to really step back and articulate who it is they really are and what's most important to them as an organization. So that's challenge number one: scoping and context, which without doubt are the keystones, the most important part of building your management system. Another area of concern is risk. Believe it or not, we see organizations that are really, really good about risk from an enterprise perspective. But as we start thinking about our AI systems, we have to think about risk in the context specifically of those systems. What risks are we introducing? What vulnerabilities are we exposing to the world? What is the likelihood of someone actually compromising what it is we've put out there? That becomes a very difficult thing for a lot of organizations, because we're asking them to apply tools and processes that they're already comfortable with in a different context. And then the third biggest thing, I think, really ties back to scoping and context, and that is impact assessment. We see a lot of organizations that have built up really, really strong privacy programs, and therefore they're very familiar with data protection impact assessments and the concept. As we think about AI impact assessments, however, the impact to stakeholders absolutely hinges on our ability to articulate cleanly our context. Who are our internal and external stakeholders?
So we're asking organizations today that are struggling with the basics to really perform complicated maths in a way that they're just not prepared to be successful at. And so Yusuf, you didn't ask, but I will say, as we think about ways for organizations to better prepare, I always encourage, always advise organizations that are looking at starting this journey to partner with an advisory firm, with people that have walked this path before and already have an understanding of where the landmines are and where the obstacles are that need to be addressed head on, so that everything else can be easier.
Yusuf:
So, thinking about the various risk scenarios that might exist, particularly as AI systems are developed in different ways by different organizations, what are you seeing in terms of building your own versus buying something that somebody else has created? Is it more organizations building, which means that they have to do their own risk assessments from scratch, or is there more focus on buying, which means that they are able to leverage the developer's thinking around risk and then extend it for their own organization?
Patrick:
Not a direct answer here, Yusuf, but I will say, based on our market, because we work with so many entities that sit in the middle, SaaS providers, we'll use SaaS providers as an example. Those organizations are very, very nimble, and in being nimble, they recognize that you don't want to recreate the wheel. So many of our customer organizations are consuming upstream services: OpenAI, Anthropic, Google, you name it. They're consuming upstream services that they then apply their development to, to create the service that's sold to their customers. And so we're honestly seeing a little bit of both. It's not salt water, it's not fresh water; very much we're dealing with brackish water. And the cool thing is, in clarifying the context of your organization, the ISO management system actually has roles created to help you easily label your position, to make it very, very plain, not just for you but also for your customer base, how you're participating in the overall ecosystem. So I'm not creating the foundation models; I'm consuming a foundation model and then applying my own development, my own practices, to create something very specific that I then turn around and sell. But we can easily document that chain, and easily audit and then report on it. As we think about risk, to your point, it can be difficult to assess risk in those situations. But we do the best we can based on what we see and what we know is associated with the individual customer SDLC. That really is where the focus rests.
Yusuf:
Talking about some of those upstream providers, are you seeing more AI implementation nowadays along the lines of large language models and generative AI, or more traditional models, or is that shifting?
Patrick:
I think it is very much vertical specific. As an example, I know many of your listeners are in financial services. As I think about fraud detection, I'm not seeing a ton of LLM use there; we see more traditional machine learning models associated with some of those activities. But that doesn't mean that organizations haven't created new and interesting ways to interact with their customers using large language models.
Yusuf:
And I suspect we might see more and more of that.
Patrick:
Well, Yusuf, I'm not sure if you share this opinion, but with agentic AI, what we see happening with agents, that, without question, is the next frontier, and it's already here. So that is something that I think will catch many organizations off guard if they're not ready.
Yusuf:
What does that mean for risk, though? So with traditional models, it's hard enough to think about risk.
Patrick:
Yes,
Yusuf:
You introduce generative AI, and it becomes a bit harder to think about risk. And now we're going agentic. What's actually happening there in terms of the ability to identify those risk scenarios and just know what your risks are?
Patrick:
Well, I can't fully answer that question, because there are so many other, smarter people working on this. This is a recognized issue in the industry today. What I will say for those listening, to outline the problem as Yusuf laid it out: with LLMs, we kind of understand what's happening. We know a little bit of the data is always going to be obfuscated, so we might not necessarily have full understanding or full explainability, but we kind of have a concept of what risks are associated with LLM use. Now imagine your LLM: if somehow you were able to create a persona for that LLM, give that persona a goal, a target, a name, and then turn it loose, let it run wild. That is agentic AI, and so necessarily we're introducing the potential for so many emergent properties and so many emergent risks. We're still not sure how far in the future that's going to go, but I think we're going to have to figure out how best to proceed with this. I don't think that we're fully aware of the full extent of the risks that we're introducing into our systems, Yusuf. Nonetheless, again, a lot of really, really smart people are working on addressing this problem today.
Yusuf:
So looking ahead to the rest of 2025, because we're almost through the first quarter: what do you see coming over the next year in terms of, you know, the big things that you think will happen in AI assurance? The things that you're starting to prepare for now.
Patrick:
Agentic AI, first and foremost. That is something that I think is keeping a lot of people up at night right now. Honestly, another thing, and I don't know where this will land, and I don't know how much your listeners are involved in this process, but especially out of the EU, with harmonized standards, what we're seeing right now is a lot of positive intent to clean up standards and regulations and harmonize them across a common theme. What we're also seeing is that, because human systems are generally fraught with politics and ego and all those things, that process is broken. And so I do have a concern, and I think we all have a little bit of apprehension, about where we'll land with global regulation around AI governance in 2025. That's something that I'm very, very keen on tracking, because I think it's going to have an impact on many organizations. As an example, for your listeners that do have a presence in the EU: you're very likely bound to the requirements of the EU AI Act. But were you to ask me what the conformity standard was to show that you're in conformance, I would have to tell you there's not one, and I don't expect there to be one for you anytime soon, which is a very, very uncomfortable position to be in. And that's even more reason we're encouraging organizations to start with a management system now, so that organizations around the world will be positioned to pivot a little bit rather than recreate from scratch with no time.
Yusuf:
Yeah, that makes sense. I think whether regulation exists for you or not, something like the EU AI Act, as legislation, is quite extensive anyway. And so if you could find a way to interpret that and start preparing for conformance, then really any residual is just a gap that can be fixed, as opposed to starting from scratch.
Patrick:
Right. And because management systems are built to be extended, effectively all you're doing is executing the management system appropriately at that point. Yeah, so lots of wins. Lots of wins for organizations that choose to pursue it.
Yusuf:
Choose to do the right thing. And then finally, as we get close to the top of the hour, where can our listeners learn more about A-LIGN's services and your work in AI governance, and potentially connect with you?
Patrick:
Feel free to connect with me on LinkedIn. Yusuf, I'm not sure if we can provide links.
Yusuf:
Yes, absolutely.
Patrick:
Happy to provide a link. And then again, I serve with A-LIGN. If we could provide a link to A-LIGN as well: for those of you that are in need of third-party compliance, attestation, certification, or authorization, we're one of the few companies in the world that effectively handle it all. I will say we don't do a ton with regulation today, but in third-party certification and attestation we've developed an expertise that I'm really, really proud of, to be perfectly transparent.
Yusuf:
Excellent. Patrick, thank you for joining us on the show today.
Patrick:
Yusuf, thank you so much for having me. It's been great speaking with you in this venue.