BSR | Recommendations

5. Ensure a high level of data protection

Many of the human rights risks related to AI stem from the use of personal data. While it can be tempting to focus on compliance with relevant privacy and data protection frameworks, many of these place the burden on the rightsholder to assert their right to privacy rather than requiring the integration of privacy and data protection by design. Companies should go beyond regulatory compliance and align their internal data protection and privacy commitments, policies, and practices with the highest international standards.

6. Test AI models for bias and externalities

AI models rely on data inputs, which can be biased and lead to adverse human rights impacts such as discrimination and the unfair distribution of goods and services. Companies should continually review the data inputs used by their AI models through data audits and assessments.

7. Undertake adversarial testing

AI solutions may lead to different impacts when used in different contexts or for different use cases. Companies should undertake adversarial testing for new risks as they arise, especially if the use of AI solutions expands to new functional areas or geographies. Adversarial testing refers to exercises in which the AI system is stress tested to discover the ways the system might be misused or lead to harmful outcomes. Methodologies might include futures thinking or red team/blue team testing (traditionally used in the cybersecurity field).10

8. Provide transparency about how the AI models work

Developers of AI models should communicate the details of the model to its users, including training data sources, the metrics that the model11 optimizes for, and key limitations of the model. Companies that are using AI solutions should also consider how AI models can be explained to end users or employees who engage with these models.

9. Integrate feedback

Establish a reporting channel through which potential misuse and abuse of AI solutions can be reported to the teams or third parties who developed them. Workers' voices should be central when making decisions on how to deploy a new technology. Ensure that the necessary mechanisms are in place to integrate employee feedback into the way AI solutions are used by the company.

10. Prepare for upcoming regulation

Ensure that your company is prepared for upcoming regulation (e.g., the EU Corporate Sustainability Due Diligence Directive (CSDDD), the proposed EU AI Act). As a first step, companies can either 1) ensure that AI is included in company-wide human rights due diligence processes and/or 2) conduct due diligence on specific AI use cases to identify human rights risks.
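The data audits called for in recommendation 6 can be made concrete with a simple fairness check. The following is a minimal sketch, not a method from the report: it compares a model's positive-prediction rates across demographic groups using the common "four-fifths" disparate-impact heuristic. The group labels, sample predictions, and 0.8 threshold are all illustrative assumptions.

```python
# Minimal bias-audit sketch (illustrative; group labels and the 0.8
# "four-fifths" threshold are assumptions, not taken from the report).

def selection_rates(predictions, groups):
    """Positive-prediction rate per demographic group."""
    rates = {}
    for g in set(groups):
        preds_g = [p for p, grp in zip(predictions, groups) if grp == g]
        rates[g] = sum(preds_g) / len(preds_g)
    return rates

def disparate_impact_ratio(predictions, groups):
    """Ratio of the lowest to the highest group selection rate.
    Values below ~0.8 are a common heuristic flag for human review."""
    rates = selection_rates(predictions, groups)
    return min(rates.values()) / max(rates.values())

# Hypothetical audit data: 1 = favorable outcome (e.g., service approved).
preds  = [1, 0, 1, 1, 0, 1, 0, 0]
groups = ["a", "a", "a", "a", "b", "b", "b", "b"]

ratio = disparate_impact_ratio(preds, groups)  # group a: 0.75, group b: 0.25
needs_review = ratio < 0.8
```

A check like this is only a starting point for the continual review the recommendation describes; thresholds and group definitions need to be set with domain and legal input.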
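The adversarial stress testing in recommendation 7 can be sketched as an automated sweep that feeds a system deliberately hostile and out-of-distribution inputs and records any behavior outside its expected range. Everything below is hypothetical: the `risk_score` function stands in for an AI component under test, and the input ranges are invented for illustration.

```python
# Sketch of an adversarial input sweep (red-team style). The scored
# function, input ranges, and bounds are all hypothetical examples.

import random

def risk_score(age, dosage_mg):
    """Stand-in for an AI component under test; clamps output to [0, 1]."""
    return max(0.0, min(1.0, 0.01 * age + 0.002 * dosage_mg))

def adversarial_sweep(fn, trials=1000, seed=0):
    """Probe fn with random and extreme inputs; collect out-of-bounds outputs."""
    rng = random.Random(seed)
    failures = []
    for _ in range(trials):
        # Mix plausible values with edge cases the system may never have
        # seen in training (negative ages, extreme dosages).
        age = rng.choice([rng.uniform(0, 120), -5, 999])
        dose = rng.choice([rng.uniform(0, 500), -1, 1e6])
        score = fn(age, dose)
        if not (0.0 <= score <= 1.0):
            failures.append((age, dose, score))
    return failures

failures = adversarial_sweep(risk_score)  # empty if the system holds its bounds
```

Real red-team/blue-team exercises go well beyond such property checks, but an automated sweep like this is a cheap first layer that can be rerun whenever the solution expands to a new functional area or geography.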
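One way to operationalize the transparency disclosures in recommendation 8 is a machine-readable "model card" that travels with the model and can be rendered for end users or employees. The sketch below is an assumption about how such a record might be structured; every field value (model name, data sources, metric) is hypothetical.

```python
# Sketch of a model-card record covering the disclosures named in
# recommendation 8. All field values are hypothetical placeholders.

model_card = {
    "model_name": "triage-classifier",                    # hypothetical
    "training_data_sources": [
        "de-identified EHR records, 2018-2022",           # hypothetical
    ],
    "optimization_metric": "recall at a fixed 5% false-positive rate",
    "known_limitations": [
        "Underrepresents patients under 18 in training data",
        "Not validated outside the original hospital network",
    ],
    "intended_use": "decision support only; a clinician makes the final call",
}

def render_card(card):
    """Plain-text summary suitable for non-technical readers."""
    lines = [f"Model: {card['model_name']}"]
    lines.append("Trained on: " + "; ".join(card["training_data_sources"]))
    lines.append("Optimizes for: " + card["optimization_metric"])
    for lim in card["known_limitations"]:
        lines.append("Limitation: " + lim)
    lines.append("Intended use: " + card["intended_use"])
    return "\n".join(lines)

summary = render_card(model_card)
```

Keeping the record structured (rather than prose-only) lets the same disclosures feed both user-facing explanations and internal due diligence reviews.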

AI and Human Rights in Healthcare - Page 14