
Understanding Risk of AI Self-Assessments

Balancing Self-Assessments with External Audits

By Tommy Cooke

Nov 8, 2024

Key Points: 


  • AI self-assessments can uncover compliance gaps and build trust but should be paired with external expertise to ensure objectivity 

 

  • Collaborating with an external auditor helps create a comprehensive assessment that aligns AI adoption with an organization’s goals 

 

  • Maintaining a feedback loop with external auditors ensures ongoing improvements, keeping AI systems compliant and aligned with evolving organizational needs


The United Kingdom recently announced the launch of a new platform to promote AI adoption in the private sector. The platform is designed to help a business take an efficient look at its operations and organizational design so it can identify, assess, and mitigate risks associated with AI before getting too entrenched in adopting AI incorrectly.  


The announcement is timely and encouraging given that complex generative AI models are reportedly struggling with EU legal compliance benchmarks around AI bias and cybersecurity. Moreover, the UK's initiative appears to allow UK businesses to tackle the looming uncertainty around AI adoption head-on: despite AI's potential impact, 31% of British-based businesses are nervous about adopting AI, and 39% report that it would be safer to stick with technologies that they already know how to use.  


The Value of AI Self-Assessments 


A self-assessment can generate the kind of clarity and confidence an organization requires to adopt AI. Here are a few benefits: 


Identify Compliance Gaps Early: A self-assessment can be a quick way to identify potential compliance blind spots internally before they harden into substantive risks down the road. Scanning the regulatory landscape, both at home and abroad (especially for organizations with employees and stakeholders beyond their national borders), reveals what kinds of policy and procedure preparations are required prior to adopting AI. 


Foster Trust and Transparency: A self-assessment process can also play a crucial role in encouraging businesses to be transparent about their AI use. Per Cisco's 2024 Consumer Privacy Survey, 75% of respondents indicated that trustworthy and transparent data practices directly influence their buying choices. Just as importantly, trust and transparency protect and foster relationships not only with customers, but with regulators and insurers as well. 


Validate Internal Stakeholders from the Bottom Up: Enlisting a workforce to assist in implementing a self-assessment is a powerful way of building trust in AI. When employees understand AI's impact on their daily work and have an opportunity to assess a model or product, they are more likely to embrace AI rather than resist it. This bottom-up engagement strategy is proven to foster a culture that prioritizes communication, adaptation, and innovation. 


The Risks of Self-Assessments 


While there is value in conducting AI impact self-assessments, the process is not without risks. At voyAIge strategy, we encourage organizations to be mindful of the potential shortcomings of self-assessments, which include: 


A Lack of Objectivity: Conducting a self-assessment without external input and feedback can generate biases that may take years to discover. While a self-assessment tool can empower a workforce, it also signals to employees quite clearly that AI is coming, and internal stakeholders may overlook weaknesses or ethical concerns because the organization is already committed to adopting AI. These two quick examples reveal how a lack of objectivity can generate problems that may undermine the credibility of the assessment altogether. 


Limited Expertise: While self-assessment tools tend to be designed by AI experts, they are generally one-size-fits-all approaches to understanding an organization's needs, and they tend to view many types of AI through the same lens. Moreover, and as importantly, those conducting a self-assessment usually lack the depth of knowledge required to fully understand not only AI's implications but also the rationale behind the design of the self-assessment tool itself. AI systems and assessment criteria are complex, necessitating a deep understanding of technical and regulatory challenges on the one hand, and an organization's strengths and weaknesses on the other. There are many nuances at stake on either side of the equation, and failing to recognize and address them can result in superficial assessments that fail to uncover significant issues. 


Global Blind Spots: One of the biggest risks of a self-assessment is its frequent failure to account for the regulatory landscape of neighboring jurisdictions. The key here is recognizing that AI laws are created and updated very quickly, and at different rates and frequencies around the globe. A self-assessment might not fully capture these nuances, particularly if the evaluators behind the design of a self-assessment tool do not conduct adequate research or fail to regularly update the tool. 

 

Balancing the Benefits and Risks of AI Self-Assessments 


Successfully conducting a self-assessment requires striking a balance between internal accountability and external assistance. Here are three steps your organization can take to conduct a self-assessment responsibly and effectively: 


Step One - Enlist the Assistance of an External Auditor: Rather than leaving the entire responsibility for a self-assessment with your workforce, bring in an external auditor as a project manager. An auditor in this role can guide key stakeholders in identifying areas that might otherwise have been missed.  


Step Two - Engage Cross-Functional Teams with the Auditor: Involve the external auditor in a cross-functional team composed of AI- and tech-savvy individuals from as many business units as possible. Working alongside the external auditor, these teams can generate specialized insight in a comprehensive way that minimizes blind spots and ensures that AI adoption fits across multiple business lines without being disruptive.  


Step Three - Develop a Feedback Loop: After your self-assessment is conducted, maintain regular check-ins with the external auditor. Continuous monitoring and improvement are always highly recommended when onboarding an AI system of any shape or size, especially as an organization grows and changes along the way. As time passes, have the external auditor provide updates on regulatory changes and advice on refining an AI system's KPIs to ensure that the system remains compliant and aligned with your organization's goals. 


Self-assessments can be crucial for generating alignment and excitement in a workforce. They're important tools for uncovering hidden risks and also for generating trust and transparency. However, they have their limitations. Understanding those limitations and overcoming them through external assistance is important for ensuring the successful implementation of any AI system. 
