
Interview: Ryan Carrier, ForHumanity

This is the first guest interview episode of Algorithm Integrity Matters.

Ryan Carrier is founder and executive director of ForHumanity, a non-profit focused on mitigating the risks associated with AI, autonomous, and algorithmic systems.

With 25 years of experience in financial services, Ryan discusses ForHumanity's mission to analyze and mitigate the downside risks of AI to benefit society.

The conversation includes insights on the foundation of ForHumanity, the role of independent AI audits, educational programs offered by the ForHumanity AI Education and Training Center, AI governance, and the development of audit certification schemes.

Ryan also highlights the importance of AI literacy, stakeholder management, and the future of AI governance and compliance.

00:00 Introduction to Ryan Carrier and ForHumanity

00:57 Ryan's Background and Journey to AI

02:10 Founding ForHumanity: Mission and Early Challenges

05:15 Developing Independent Audits for AI

08:02 ForHumanity's Role and Activities

17:26 Education Programs and Certifications

29:21 AI Literacy and Future of Independent Audits

42:06 Getting Involved with ForHumanity

Yusuf: 

Today we have a special guest, Ryan Carrier. Ryan is the founder and executive director of ForHumanity, a non-profit dedicated to mitigating the downside risks of artificial intelligence, autonomous, and algorithmic systems. ForHumanity's mission is to examine and analyze the downside risks associated with the ubiquitous advance of AI and automation, to engage in risk mitigation, and to ensure the optimal outcome for humanity. With over 25 years of experience in financial services, Ryan brings a unique perspective to AI risk management and governance that's directly relevant to financial services professionals. Welcome to the podcast, Ryan Carrier.

Ryan Carrier: 

Thank you, Yusuf. It's great to be here and talk to my former brethren. So, happy to do it.

Yusuf: 

Excellent. So, we've been working together for some time now, but for the benefit of listeners, can you tell us about ForHumanity, and how it came about?

Ryan Carrier: 

Sure, and my background in finance is very much a part of where that came from. So, as you mentioned, starting back in the 90s, I worked for the World Bank, Standard and Poor's, and I actually ran a commodity trading desk for Macquarie for about five years. And then when the financial crisis came along, our division was cut by Macquarie. And so I started my own hedge fund at that time, using a lot of that commodities knowledge and creating a quantitative trading strategy, and, notably, applying artificial intelligence to the portfolio management of that hedge fund. And so now you get this process coming into the 2010s, from my perspective, of, you know, sort of quantitative strategies, but also an awareness, understanding and usage of artificial intelligence. And so the hedge fund was something that I survived. And I think most people recognize what that means, but I say it to everybody so that they know, you know, I'm not sitting on a big pool of capital, right? I didn't, you know, have a successful hedge fund career, make myself super wealthy, and now I'm just kind of kicking back and giving back to society. I survived the hedge fund. So what that meant in 2016 was that I had to close my hedge fund. So I'm closing the hedge fund. As a lot of people might know, you know, it's not a full-time job at that point. So I've got some time on my hands. I know artificial intelligence, and anybody who was kind of paying attention to AI at that time, we had problems with Facebook influencing U.S. elections. Microsoft, you know, released this chatbot that turned racist within 24 hours of release. And we were having a lot of problems with AI, or with social media, or the use of these tools and how they were impacting society. And I don't mind telling your listeners, or anybody I run into, that I got scared, sufficiently scared when thinking about the future for my boys. I have two boys. Back in 2016, they were four and six years old, so I was kind of projecting their future out and thinking about all of these impacts from technology, and I got scared enough that I started a non-profit public charity with no funding and no plan. It was really just that mission statement that you read. And a lot of that is informed by finance, right? Specific terms like downside risk. Those aren't terms that are really used outside of finance, but they're really meaningful in finance in the sense that we recognize that reward comes with taking risk in finance, right? So what we want to do, or what we're going to focus on at ForHumanity, is really just look at those downside risks, those negative impacts, those detrimental impacts to society, and see if we can't mitigate them as best as possible. So that's why we examine and analyze those downside risks, and then we're going to engage in the maximum amount of risk mitigations, with the theory being that if we mitigate as much risk as possible in these tools, then we maximize the benefit for humanity, and that's where the overly ambitious name of the organization comes from. But not having a plan, now I had to figure out what the plan was. So starting that mission, I wrote a lot of words back in late 2016, early 2017 about the future of work, technological unemployment, rights and freedoms in the Fourth Industrial Revolution, data, should we own our own data? Is that part of the solution here? Transhumanism, which is a term your listeners might not be familiar with, but it really means taking machines and crossing the very critical barrier of our skin.
So putting machines into our body. And the reason I bring that up as an example is that that's actually an enormous challenge to our species, as we begin to deal with augmented humans and un-augmented humans and how we'll deal with each other in society, and unfortunately, I have to say, I don't think that's going to go very well. So in looking at all of those things, they all, as your listeners will hear, these are enormous challenges, societal challenges, generational types of challenges. And so in thinking about those, I realized, well, wait, I need something that we can do today. What can we get started on today? And I saw a lack of trust, a lack of transparency, a lack of what I would call an infrastructure of trust built into these tools. Really no risk management culture whatsoever, coming from finance where everything is risk management culture. And so what I realized or recognized is that independent audit of financial accounts and reporting is an amazing way, through a 50-plus year track record, of building up an enormous amount of public trust in opaque corporate behavior, translating the numbers into 10-Qs and 10-Ks so that the public can possibly understand what is happening with these companies, whether that happens through analysts or people writing reports or now podcasts and all sorts of other ways that people examine companies today. It starts often with understanding the numbers about these entities, and that leads to this infrastructure of trust around the goings-on of these companies. And so the idea is, well, we need that same understanding about how AI, how algorithmic systems, how autonomous systems operate. Are they built in a compliance-by-design kind of manner? And so the theory is very simple: replicate financial audit. The terminology that we use is independent audit of AI systems. Now, you're not doing, you know, balance sheets and cash flow statements and things like that. Instead, what we're looking at is risks associated with ethics, bias, privacy, trust, and cybersecurity. So it's different disciplines, but the principle of the audit function and what is being audited is very similar. And so for the last seven and a half years, I first wrote about that in 2017. Look, audits and technology have been around for decades, so I'm not pretending that that was anything new. But the idea of taking the financial audit and adapting and adopting it to artificial intelligence, I do think I was the first person in the world to make that sort of statement and claim, and that was in June of 2017. And ForHumanity has been developing that ever since, and so we've come a long way from just me sort of screaming in the wind back then to 2,500 volunteers or more from 98 countries around the world. We've drafted audit rules for more than 7,000 effective risk controls, treatments and mitigations, and we work with governments and regulators in many jurisdictions around the world. So hopefully that's some decent background.

Yusuf: 

Yeah, thank you. So, in terms of what ForHumanity now does, am I correct in saying that its primary activities are around education and drafting audit criteria for certification purposes?

Ryan Carrier: 

That's correct. So I'd start by saying we aim to support that infrastructure of trust called independent audit of AI systems. Our role is to replicate, and these are acronyms that your listeners might know, but most of the rest of the world doesn't, FASB and IFRS, the Financial Accounting Standards Board and International Financial Reporting Standards, the two independent industry bodies that established global accountancy rules, so GAAP and of course IFRS, and those are the standards by which 10-Q and 10-K audits are conducted. ForHumanity aims to play a similar role to FASB and IFRS with one major enhancement. The primary certification body for individuals in the financial audit world is the AICPA, right? Accrediting people as CPAs. I don't see a reason that those two groups should be separate, FASB and AICPA, except that AICPA existed before FASB, if I remember my history correctly. So they were already separate to begin with, whereas when ForHumanity started this process, we said, well, we can help to establish what the rules are in a consensus, crowdsourced, transparent process that we welcome all people to join, and all corporations to join. So we aim to establish the rules, submitting them to governments and regulators where appropriate and where they're interested in endorsing them. And then the other role is to train individuals on those criteria. And then further, we facilitate the business of this world by licensing all of our intellectual property for auditors, pre-audit service providers, technology providers, and other kinds of teachers. So that's the main role that ForHumanity aims to play. We do this as a nonprofit public charity, just trying to facilitate and foster what is eventually going to be a world of maybe as frequent as annual independent audits on AI systems.

Yusuf: 

As part of that, a big chunk of the effort that you're involved in, with the sizable volunteer pool that you now have, is developing those audit certification schemes and the criteria that go within them. Can you explain to us what that process looks like?

Ryan Carrier: 

Sure. So I'm going to use GDPR as an example, and I'm not sure how familiar certain members of your audience may be. GDPR is the General Data Protection Regulation. It is the primary driver of data privacy and protection around personal data, based out of the EU. It's kind of the first major law, maybe after certain cybersecurity laws, but kind of the first major law that really impacted the AI space, although back then it was more about personal data than it was about AI, even though AI was starting to generate a need for this data through profiling and systems like that. So in that particular case, that law allows for the creation and adoption by governments, by national data authorities, of certification schemes. A certification scheme defines a scope, what can be certified under that scheme, and then the set of rules that are applicable and appropriate. Now, there's no real rules governing how those certification schemes are built. However, there are standards. ISO provides some standards that many follow, but one of the key things that we would highlight is that standards work is often too general. It's not specific enough to allow a third-party, at-risk auditor to conduct an audit. So often what is conducted is assurance, which is a similar kind of process, but it has less structure and formality to it, assurance versus a full audit. What we find is that the audit creates the most robust assurance of compliance and conformity. And so what we aim to do is take the law, like GDPR, or take the law, like the EU AI Act, or take the law, like the new privacy law adapted to AI in Australia, and create binary, compliant-or-non-compliant rules. And what we mean by that is we work in a team, we work as a group, and we look at what the law says, and we translate that into, well, what criteria would prove that someone is able to uphold that aspect of the law. It could be a duty or an obligation, whatever it might be. So we use our words, we use language, to try to identify the specific detail that would allow someone to say, yes, I have complied with that aspect or that requirement under the law. So, in the case of GDPR, one example that we have is that they have to provide the right for any data subject to come and access the data. And so when they access that data, they put in a request, and then they might look to correct it, or rectify is the technical terminology, or they might want to erase it, or they have other rights as well. Well, that process is only defined as a process in the law. Those are rights granted to the data subject. What we do is get into the details: in granting those rights, the law has also said the process needs to be things like transparent and accessible. And so what we do is we actually define what that process looks like to meet the obligations under the law, and then we ensure that the rights are upheld through that process. And so what we're doing is getting into the details. And in that particular case of GDPR, it's 258 binary audit rules that are established, and then we've submitted that to the European Data Protection Board, which is the body that oversees all of the 27 national data authorities. And they have the right to approve or uphold that certification scheme or not. And we'll see, we're in the middle of that process and have been for the better part of 18 months.
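To make the binary, compliant-or-non-compliant idea concrete, here is a minimal illustrative sketch in Python. The structure, field names, and the example rule are hypothetical, not ForHumanity's actual criteria format or tooling, but they show how a legal obligation might be decomposed into yes/no audit criteria backed by evidence:

```python
from dataclasses import dataclass

@dataclass
class AuditCriterion:
    """One binary (compliant / non-compliant) rule derived from a legal obligation."""
    criterion_id: str        # hypothetical identifier
    source_obligation: str   # the article or duty the rule is derived from
    requirement: str         # the specific, checkable statement of compliance
    evidence: str            # what an auditor would inspect to answer yes or no

# Hypothetical example loosely based on the GDPR data-subject access discussion above.
dsar_process = AuditCriterion(
    criterion_id="DSAR-01",
    source_obligation="GDPR data subject access rights",
    requirement=("A documented access/rectification/erasure request process exists "
                 "and is published in a transparent, accessible location."),
    evidence="Published process documentation plus a sample of handled requests.",
)

def certify(findings: dict) -> str:
    """Binary outcome per criterion: either the evidence satisfies it or it does not."""
    return "compliant" if findings and all(findings.values()) else "non-compliant"

print(certify({dsar_process.criterion_id: True}))   # -> compliant
print(certify({dsar_process.criterion_id: False}))  # -> non-compliant
```

The only point of the sketch is that each rule resolves to a yes/no answer supported by inspectable evidence, which is what makes a third-party audit, rather than a looser assurance exercise, possible.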

Yusuf: 

Okay, and those audit criteria will be a combination of standardized criteria across the ForHumanity certification schemes, based on ensuring an infrastructure of trust generally, and then items from the individual regulations that need to specifically be audited. Is that right? So there's two sets of sources for the criteria: one being the standard ForHumanity expectations for all AAA systems, and the other being what the specific regulation requires?

Ryan Carrier: 

Yeah, and again, it's a great question, and it's a great area to specify on. A lot of this is built, or structured, from that financial history that I have. In finance, we have robust governance, oversight, accountability. So, I might be in charge of doing something, but there's going to be people who validate my models, or there's going to be people who oversee the risk that I might trade or use, or the risk budget. There might be compliance and attorneys who want to see my documentation, right? So everybody has their duty and their responsibility. So we start by establishing robust governance, oversight, accountability that starts at the level of what's known as top management and oversight bodies. Top management and oversight bodies: CEO, chief risk officer, chief technology officer, and the board of directors. They have duties in regards to the responsible oversight, governance and accountability of AI systems. So they will have some duties in our audit criteria, and those aren't specified, well, sorry, sometimes those are specified by the law. So sometimes the law will say, look, you know, I'll give you an example. In GDPR, the person who's in charge of data protection is called the data protection officer. And the law actually specifically states that that DPO, data protection officer, must report to the highest level of management. So that's an example of where they do specify a role for top management to play, which is a reporting structure to the DPO. Most times they do not do that, but sometimes they do, and we want to be prepared for that. So we do have, just as you kind of alluded, right, governance, oversight, accountability functions we need to build in, and then we have how we comply with the specific law. The other side of that coin would be a privacy policy, dictated by GDPR or dictated by the Privacy Act in Australia, and the idea that what goes into that privacy policy is specifically laid out in the regulation. At a minimum, you have to have, you know, your categories of data, who are the recipients of data, how can they contact the data protection officer, so on and so forth.

Yusuf: 

While we're talking about that, can we talk about the specific education programs that you have, which aim to make better auditors and certify auditors? That starts with the Foundations program that is required across the board. Can you tell us a little bit about that?

Ryan Carrier: 

Yes, so ForHumanity, almost three years ago to the day, this is January 15th and it started on January 3rd of 2022, so, yeah, a little over three years ago, we established what was originally called ForHumanity University. University is not a protected term in the United States, but it is in many other jurisdictions. I didn't realize that, so we just called it ForHumanity University, trying to educate people. And so we've now changed it to the ForHumanity AI Education and Training Center. So that's the official name. It is a fully online set of courses that people can access through a platform called Moodle. There are nine courses today. There will be more courses coming this, well, spring in the Northern Hemisphere, and those courses range from as short as three weeks to as long as nine weeks. They are 30 minutes per lecture, and they come with associated quizzes. The opening course, the course everyone should take first if they're interested in learning more about this infrastructure of trust, is called Foundations of Independent Audit of AI Systems. It starts with something as basic as understanding what an audit is. You know, what are the foundations of this theory of independent audit of AI, as it relates to this 50-year track record in finance of independent audit of financial accounts and reporting. So we spend a lot of time sort of teaching that background, but also teaching the lexicon, the language of audit. Not everybody's familiar with what audit looks like, right? Not everybody's familiar with the difference between an assessment, assurance, and audit. What are the roles? What are the liabilities? Who's responsible for what? Who does what to whom in this ecosystem? So we explain all that. It's a five-week course. All of our courses are free, I guess I should mention. So it doesn't cost anybody anything to just check it out, right? You can go to forhumanity.center, which is our website. You can register for a student account and you can check out the courses. They're YouTube lectures, roughly 25 minutes long, and you could do two or three and go, wow, this is interesting, or wow, this is boring, I have no interest in this, right? But it's easy to check out, right? Low barrier to entry. Foundations is a five-week course, as I mentioned. It gets people started in the space, and then every Friday we offer a certification exam. There's quizzes associated with each lecture, and anyone who pays attention to the lectures will pass the quizzes. It's a verification of knowledge. So then every Friday people can take an exam to say, I have earned this knowledge. Anybody who passes the exam receives a certificate from us that they are certified in Foundations. That does have a cost associated with it. So the knowledge is free. The course is free. If you want to sit the certification exam, it starts at $100, but then there's discounts applicable. So once someone has Foundations, then they can begin to get into the more specifics and the details. The next level up is kind of our equivalent to a CPA. So someone who's taken Foundations and who says, I want to get that GDPR knowledge. I'm involved in GDPR and I want to help my firm have compliance, or I recognize that there's going to be a growing need for audits of GDPR compliance and I want to provide audit services, or I work for PwC or KPMG and my firm has asked me to learn how to do audits of AI systems, right?
So they could go and then take our GDPR course, or our EU AI Act course, or any one of six courses built on the back of laws and regulations that are being developed and passed all around the world. Those six courses: GDPR; the EU AI Act; something called the Children's Code, which governs basically how kids interface with personal data online, and is based on a UK law; the Digital Services Act, which governs online platforms and online search engines; disability inclusion and accessibility, what organizations should do to meet a lot of their disability, anti-discrimination, or non-discrimination laws; and then the last one is based on a very specific law in the United States called the New York City Automated Employment Decision Tool Bias Audit Law, which requires anybody who's providing automated employment decision tools to bias audit some of their work. It's a very, very small law, and it actually shrunk since it was first proposed. But we have a course for that as well. Then also, on top of that, we have two expert certificates, one on risk management and one on algorithm ethics. Risk management is what it seems, which is how do you manage the risk, build processes, very similar to risk management in finance, around managing the risks, but they're multidisciplinary risks around the use, implementation, and delivery of AI, algorithmic, and autonomous systems. Algorithm ethics, however, is something that your listeners likely would not be familiar with, which is the idea that these tools are socio-technical tools. When I say socio-technical, what I mean is, like, when you use a calculator, you punch in the numbers and the machine produces the output, but nowhere in the process was the human involved in the process of that calculation, right? It was a tool that spits out a result. Well, with artificial intelligence, a lot of times humans are in the equation. Their personal data is drawn in for recommendation engines, for example, or for credit loan applications, or for biometrics, lots of ways, right? The human is part of the equation of the assessment process. So the result of those tools is that there are many, many instances of ethical choice built into the design, development, deployment, management, monitoring, even decommissioning of these tools. And as a result of that, we see all these instances of ethical choice, which we call algorithm ethics. And we recognize that there weren't people in the world who were trained to do this work. And so we've begun to do that training. That course has been around for two-plus years now, and it will grow and expand into a full certificate for ethics officers trained to engage in the managing of these instances of ethical choice on behalf of their organizations. So the sum of all of those programs, which is nine, is our body of coursework in the AI Education and Training Center. And people can sign up for any of those, or all of them, as they might like.

Yusuf: 

That Foundations course that we started with, and, say, Algorithm Ethics: while Foundations is called Foundations of Independent Audit of AI Systems, there's a lot else that goes into it beyond audit, to be able to audit. So it's not just about audit concepts; it goes into five core pillars, being bias, privacy, ethics, trust, and cybersecurity, plus the audit process. And so, whether you're interested in actually auditing or not, that course could almost be foundations of AI governance, governance of AI systems, or control of AI systems, if you like. It is really applicable to a broader audience than just people that want to do audits. And then when you go into things like Algorithm Ethics, that will be for auditors, yes, but also for those people that want to be involved in ensuring the ethics of algorithms in a deep way. And so, while the courses may have audit in their names, they're not necessarily restricted to people that have a desire to conduct audits.

Ryan Carrier: 

That's correct. I can describe that in another way as well, right? People ask me regularly, who should take these? And they also ask me, well, what does the business of auditing AI look like? And you'll see why it's important to make this distinction right now. There is no business for auditing AI today. Why? Because most organizations aren't ready for their audits yet. They have not built themselves to be compliant by design. They've not put in place the processes, the procedures, the infrastructure to create a compliance-by-design environment like we have through COSO with financial audits. So really what we have is a pre-audit world, but also, when you have a pre-audit world, you have pre-auditors, advisors, consultants, people who are knowledgeable, lawyers, right, plugging in, but also you have people inside the companies going, well, I want to know what our duties are, I want to understand what my responsibilities are. And they have just as much merit in taking these courses as somebody who wants to become an auditor. And so you're absolutely right. It isn't just for auditors. It is for anyone who wants to know and understand implementable, auditable criteria for achieving compliance-by-design solutions. And in addition to that, it could be people who are interested in teaching this as well. I often joke with groups who come to me and are interested in teaching. I'm like, well, my English is great, or maybe some people think it's great. Yeah, I do claim to be an English to English to English to English translator, so I think I have skill in such a thing, but my Portuguese and my Spanish and my French and my Italian and many other things are terrible, and I will never be able to teach them. Therefore, I need teachers, we need teachers, ForHumanity needs teachers who will eventually teach all of these courses in many, many different languages for people and translate our work into others. So we want to encourage other people, and we encourage everyone around the world, to commercialize all of our work through our licensing program. If you want to be an advisor, it's a higher risk for you to say, well, I'm telling you what my advice is. Instead, you might say, I'm telling you what ForHumanity, this non-profit public charity of 2,500 people from 98 countries with 7,000 risk mitigations already drafted, says, and I can advise you on how to implement that. It is a different sale, right? It is a different approach on how to provide advice to these sophisticated organizations in a field that is just beginning to grow. So, you know, there's a lot of ways that the knowledge of what it means to be compliant with laws, regulations, best practices and standards can be used, and a lot of people who can benefit from that knowledge.

Yusuf: 

Okay, excellent. And I'll let that English to English to English thing go, because I know yours is American English; it's just that you spent a few years in Sydney and were driving on the correct side of the road, rather than the right side of the road, at that point.

Ryan Carrier: 

I do know the difference between a footpath and a sidewalk. And a bin and a garbage can. So I have a decent start.

Yusuf: 

That's good. In terms of financial services leaders, we're talking about people that are interested in ensuring that they understand what needs to be done and then doing it, and that's important. What sort of skills should leaders be developing themselves, or helping their teams develop, to stay ahead, well, at least abreast, but ideally ahead, of AI governance expectations, today and coming?

Ryan Carrier: 

I have kind of a rigorous list. It's a little bit too long for me to recite, so I'm going to generalize, but let's assume that your question was focusing on, again, top management and oversight bodies. What are their duties when, in our organization, we're deploying AI? Number one, governance, oversight, accountability. Number two, proper resource allocation. It is not sufficient to talk and not do in terms of budget, time, and infrastructure, right? Putting in place the right resources to properly manage these risks. You have to have data integrity. You have to have good data. Garbage in, garbage out has great meaning for artificial intelligence. But not only do you have to have good data, so system integrity, but you also have to have the robust technical infrastructure to support that. You must have risk management. You must have quality management in place. You must think about all stakeholders, not just your organization. It was interesting when we first were getting involved with NIST, as NIST was going from their risk management framework, which they had had for years, and cybersecurity risk and things like that, and expanding into AI: we were one of the leading voices basically saying, look, the biggest risks here are to people impacted by these tools, where bias exists. So you need a 360-degree perspective of stakeholders, whereas NIST previously only thought of risk at the organizational level. So defining your stakeholders, figuring out what your duty to those stakeholders looks like, ensuring that you have diverse input and multi-stakeholder feedback, and taking care of vulnerable populations and how they may be impacted by these tools. We also require human oversight. So humans should always own, or be responsible and accountable for, these tools, depending on how involved they are in the process; that's human oversight, and that can be defined on a case-by-case basis. There should be transparency. There should be supply chain management. There should be training and education. And then finally, there should be processes for decommissioning. There's a list of 20 that we call top management and oversight bodies duties. I can get into more detail for anybody who's interested. I think we even have some of our PowerPoints on the website that walk people through this, but those are the kinds of things to think about. It's not just about strategy and the benefits of the tool, and here's what the cost, meaning the financial cost, is, and can we implement it? There are significant and meaningful risks to brand, to ethics, and harms to individuals from not thinking about a robust approach that starts with governance, oversight, accountability. Does that answer the question, Yusuf?

Yusuf: 

Yes, thank you. To extend that a little bit: so we've got top management and oversight bodies and the expectations that exist of them. And then there's the EU AI Act, which is the first that is broad across high-risk AI systems. There may be others here and there, but that's probably the largest, the one that everybody's talking about. And Article 4 of that Act talks about AI literacy. And so there's been a significant effort within ForHumanity to help enable AI literacy expectations to be met. What does that look like?

Ryan Carrier: 

Sure, so let me mention just a couple of quick things about the EU AI Act. Many of the provisions don't come into enforcement until August of 2026, which is a little bit more than a two-year runway from when the law was enacted. However, one of the provisions applies to all artificial intelligence, not just high-risk AI, most of the law is about high-risk AI, but this applies to all artificial intelligence. It's Article 4, and it's about AI literacy. Providers and deployers, those are the two main descriptions of what an organization is with respect to AI, so providers and deployers

Yusuf: 

Providers and deployers are basically people that build the stuff and people that use the stuff.

Ryan Carrier: 

That's correct. Yeah. Good translation. So providers and deployers of AI systems must engage in the training of AI literacy. Now, the rule under Article 4 says you must train your employees, the people who work directly on the AI system, and even your leaders, based on their expertise, based on the context of the system; you must train them on what AI literacy means. Elsewhere in the law, they affirm that AI literacy also has to include training the people who use the system. So as a result of that, what ForHumanity has done in support of AI literacy is we basically just finished defining five personas. The first persona is moms and pops, sort of a retail audience, the people impacted by the system, whose AI literacy will be very low. The second are all the employees of the organization. There is no distinction in what the law calls for, no carve-out that, you know, certain staff don't have to be trained. It basically says all your employees. So we treat them like moms and pops. We treat them like retail, effectively, because it could be, and I don't mean this pejoratively, it could be the cafeteria worker and the janitor, technically speaking. So those people are being trained at a very basic level. That's personas one and two. Persona three are the people who work with the AI system. So now we're going to train more on the context. We're going to train more on the risk controls, treatments, and mitigations, and processes of governance, oversight, and accountability for that tool. The fourth persona is top management and oversight bodies. Now, if your organization only has one tool, then what persona four looks like is going to be very similar to persona three. But if you're Google, where you might have a thousand AIs, the training that goes on for top management and oversight bodies is very different than the training of an employee who works on a single AI. So we allow for that difference, to basically say, you need to understand what's the main infrastructure that is put in place to govern and operate these risks, a little bit like what we talked about before. The fifth, or the highest level, are what we call AI leaders. These are the groups that are responsible for owning how these tools are designed, developed, and deployed. And here, what we find is we don't need to focus on the benefits of these tools. They're already sold on what the benefits are, often so much so that they've actually neglected the risks. And so really what we're focused on with them is: what are the risks, the harms, but most importantly, have you identified all of your stakeholders? Have you thought about who is impacted by your tool? And then do you have a process for assessing what your duty is? Are there vulnerable populations in those stakeholders, and what risk controls, treatments, and mitigations do we need to be putting in place to mitigate common risks, common problems, either associated directly with the tool or in general, that may not have been dealt with in the tool itself? In all of these five cases, what we're doing is establishing the learning objectives, and we're making those learning objectives available to groups who might want to create the training programs on behalf of a company, or two companies, or seven companies, or whatever it might be, because it all has to be context-driven. So ForHumanity could never do it. We simply want to support hopefully hundreds and thousands of people who will help advance AI literacy.
We, however, will play our role, which is to advance generic AI literacy and make that available freely for that retail kind of audience. And we will give that away. We're just beginning that process, and we're seeking funding to help us do that. And what we want to do is augment what all the corporations are required to do. We want to give them our resource to say, and by the way, if your users, if your employees, want to learn more, it's not your duty to train them, but ForHumanity is more than happy to give you our resources to do that. So that's the role we think we need to play in AI literacy.
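As a rough illustration of how that persona tiering might be applied in practice, here is a small hypothetical Python sketch. The persona names and training topics paraphrase the discussion above; they are not ForHumanity's published learning objectives or any real curriculum:

```python
# Hypothetical tiering of AI-literacy training by persona, paraphrasing the five
# personas described above. Topics are illustrative only, not an official curriculum.
PERSONA_TRAINING = {
    "impacted_public":    ["what AI is", "basic rights and where to get help"],
    "all_employees":      ["what AI is", "basic rights and where to get help"],  # treated like retail
    "works_on_ai_system": ["system context", "risk controls, treatments and mitigations",
                           "governance, oversight and accountability for the tool"],
    "top_management":     ["portfolio-level governance", "oversight and accountability infrastructure"],
    "ai_leaders":         ["risks and harms", "stakeholder identification",
                           "duties to vulnerable populations", "risk mitigation processes"],
}

def training_plan(persona: str) -> list:
    """Return the illustrative learning objectives for a persona (defaults to the basic tier)."""
    return PERSONA_TRAINING.get(persona, PERSONA_TRAINING["impacted_public"])

print(training_plan("works_on_ai_system"))
```

The design point is simply that the same obligation (Article 4 AI literacy) fans out into different depths of training depending on who the person is relative to the AI system.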

Yusuf: 

That's great. And so, that will hopefully be quite wide-reaching in terms of lifting the bar around what people know and understand about these systems and how they operate. How do you see the role of independent audits evolving over the next couple of years? I know there's been quite a bit happening in the last few, and a few regulations that have come out have started to accelerate the need for independent audits. Where do you see that going in the next couple of years?

Ryan Carrier: 

Well, in the very long run, I see it replicating, or looking very similar to, independent audit of financial accounts and reporting. So a very similar ecosystem to financial audit. But that's a process that's going to have to grow over a decade. And we actually see the recognition of that maturity in the EU as an example: when they first put out the law for the EU AI Act, they only called for a conformity assessment. So they asked the European standards body, and that's who's working on this now, to create standards, and then they required that their notified bodies, a lot of the testing and evaluation bodies in Europe, would be able to do a conformity assessment based on those standards. That was their only choice for verification. Why? Because there's no ecosystem for this verification otherwise, and there is a robust, product-centric ecosystem for these notified bodies and these conformity assessments. So it made a lot of sense to treat this as, you know, kind of a product liability type of approach, and they're doing that. It made no sense to call for an audit. Why? Because even by 2026, there's just not going to be enough people out there who could conduct these audits, enough firms with enough groups trained. It's estimated that there's 8 million AIs that would have to go through this process. Just no possible way, right? So I see it growing. I see the ecosystem evolving. I see procurement, and the process of deployers wanting to buy from providers, influencing the need, the demand, and the value of independent audit. There is nothing more valuable for a provider of a tool: if they're one of five providers, and they show up to a procurement contest, an RFP, let's say, and four of them show up with nothing, and the fifth one shows up with an independent, third-party, validated audit, with a reputable set of audit rules built in, conducted by reputable auditors who are trained and certified, the meaning of their ability to say, we are compliant with the EU AI Act, we are compliant with the Privacy Act in Australia, whatever it might be, they're able to deliver that information, whereas with the others, you know, you're taking their word for it. That is an enormous competitive advantage in that RFP process. And so I see deployers, I see the procurement process, I see voluntary compliance actually generating a lot of the early demand. If we think back to financial audit in 1972, 73, when GAAP was created, it was voluntary adoption by corporations that allowed the US SEC in 75 and 76 to pass a law that said you will follow GAAP if you're going to be publicly traded. It was because everybody was already following GAAP. It was kind of a no-brainer, right? That was through voluntary adoption. We see the same sort of process taking place. And in addition, we're about to launch the equivalent of regulatory sandboxes. We call it an AI policy accelerator, where we will begin to get voluntary adoption, for the reasons I already mentioned, from organizations bringing actual, you know, artificial intelligence into the sandbox, into the accelerator, to begin to prove their compliance with the audit rules. And I'm literally putting ink to that contract this week. So we're just starting that process.

Yusuf: 

Fantastic. As we wrap up here, where can listeners find out more about ForHumanity? Where would you point people to? How can people get involved? How can people get in touch with you if they need to? What does that all look like?

Ryan Carrier: 

Yep, LinkedIn is a great place to find me. If you type in Ryan Carrier and ForHumanity or executive director, I pop up pretty easily. I accept most requests and will be glad to do so on the back of this as well. Our website is pretty informative. That's forhumanity.center, C-E-N-T-E-R, the American spelling of center. That's the address of the website. That's also where you can register for our community, which is a Slack-based community, or you can register for a student account for those educational courses that we talked about. Slack is where we do most of our work. Everyone is welcome in Slack. There is no commitment beyond what I'm about to express, which is: you'll find a small form called Get Involved at the very top of the page, and that asks you to provide your name and email address. So we demand your email address, and we demand that you agree to the Code of Conduct. The Code of Conduct governs how we behave inside the Slack community. And so once you've provided us those two things, that is 100 percent of what I would ever demand of anyone. And then you receive access, full transparency, to our Slack community. And that's the point, to give you access to all of these tools. My whole life, Yusuf, is spent trying to take all these tools that we've built and are continuing to develop, and get them into the hands of people who can commercialize this work. That is the main thing that we're trying to do. We love volunteers that come and say, Ryan, you're doing a good thing, ForHumanity is doing a good thing, how do I help? Love that, right? But we also know that when people come with their commercial interests involved, we're going to get their attention. We're going to keep their focus. And so we love both, and we encourage people to come and commercialize the work. Register through that Get Involved link. You'll receive an invitation to Slack from our Slack community. And then you'll be off and running, whether it's through the courses or through Slack. Reach out to me on LinkedIn. These are all the best ways to get connected.

Yusuf: 

Excellent. Ryan, thank you so much for taking the time to talk to us today, and we'll probably see you again sometime soon.

Ryan Carrier: 

My pleasure. Thank you.