We are continuously seeing how developments in AI are shaping the market, whether through job redundancies, child-safety debates or national-security concerns. AI is scaling faster than the guardrails around it, making Responsible AI a material issue for companies and investors alike. In this paper, our Senior ESG analyst Rita Wyshelesky outlines our research approach, our proprietary Responsible AI due diligence framework and our engagement priorities.
Artificial intelligence (“AI”) has rapidly evolved from a niche technology to a central driver of corporate transformation, accelerated by the rise of Generative AI (“GenAI”). Its impact spans sectors, from financial firms using predictive analytics to optimise portfolios1 to healthcare systems applying machine learning for early disease detection2. Yet the same technologies that create value also introduce risks, including bias, misinformation, and systemic disruption.
The 2025 Stanford AI Index reports a 56.4% increase in AI-related incidents in 20243, and deepfake content is projected to surge from 500,000 files in 2023 to more than 8 million in 20254. With 71% of companies now reportedly using GenAI regularly5, these trends underscore the urgency of adopting Responsible AI frameworks that ensure innovation is aligned with safety, trust, and long-term value creation.

The term “Responsible AI” (or sometimes “Ethical AI”) is widely used yet variably interpreted. At its core, it refers to the principled design, deployment and oversight of AI systems in ways that respect human rights, ensure safety and build trust. This concept is key to ensuring that technology scales in a measured way that avoids societal damage or attracts over-regulation.
While existing frameworks differ, five pillars appear consistently across regulatory and industry standards6,7:
These values are integral to minimising operational and reputational risk. Poorly governed AI can result in biased credit decisions, unsafe medical recommendations or flawed trading signals, translating into an increase in legal exposure, remediation costs and regulatory penalties for companies.
From a reputational point of view, AI incidents can undermine customer trust and brand equity far quicker than traditional operational failures. For example, Google’s Bard demonstration error in 2023 wiped roughly $100bn from Alphabet’s market value within days, underscoring how AI hallucinations can trigger outsized reputational consequences8. Microsoft suffered similar issues in 2023, when Bing’s chatbot sent users threatening or hostile messages, prompting a 4% drop in its share price9.
Although the potential negative consequences of AI are considerable, current regulatory measures have not kept pace with rapid technological progress.
The EU AI Act10, which stands out as one of the most comprehensive frameworks for AI governance, remains largely reactive, emphasising risk classification and compliance over systemic accountability. Even this framework faces possible dilution as a result of Big Tech lobbying11.
The U.S., UK, and Asia-Pacific markets, on the other hand, have largely adopted principle-based or voluntary frameworks, leading to inconsistent implementation. This fragmented landscape creates uncertainty for global companies and encourages “jurisdiction shopping” for laxer environments.
While regulations set minimum compliance standards, stakeholders increasingly expect organisations to uphold higher ethical benchmarks. This shift has led to growing demands for greater transparency, algorithmic fairness, and responsible environmental management.
For investors, Responsible AI translates into three main workstreams:
Integration into ESG analysis and research: Responsible AI can be integrated into existing ESG frameworks, rather than treated as a separate overlay. Environmental considerations are increasingly relevant as well, particularly the energy intensity and carbon footprint associated with training and operating advanced AI models.
Enhanced due diligence on AI governance and model transparency: Investors can engage with companies to drive further transparency on AI governance and evaluate how the company monitors potential adverse impacts.
The rise of AI governance metrics and reporting standards: While still nascent, we can expect the emergence of industry-specific AI governance KPIs.
Carmignac incorporates material ESG factors into its investment analysis and decision-making processes, acknowledging their potential to significantly impact risk and return. Therefore, when analysing companies where AI is strategically relevant, we would typically look for:
These factors guide our stewardship activities (voting and engagement), with the objective of enhancing risk-adjusted returns while promoting resilient, trustworthy AI adoption. In addition to engaging individually with companies, we also engage collaboratively through the World Benchmarking Alliance’s Collective Impact Coalition for Ethical AI. With regards to proxy voting, we review shareholder proposals on a case-by-case basis, and have historically supported shareholder resolutions at companies such as Meta, Alphabet and Amazon asking for increased transparency on GenAI risks.
Our Responsible AI due diligence framework translates principles into concrete scores that can be used to make and compare decisions. The framework assesses companies across seven pillars, from governance and risk management to fairness, transparency and environmental impact, using a structured, KPI-based checklist.
We have identified a lack of structured performance analysis for Responsible AI – one that investors can use to price this material issue. As a result, we have developed our own framework. These indicators are scored and normalised into an overall Responsible AI score, allowing us to distinguish leaders from laggards across AI developers, deployers and hybrid players.
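The mechanics of such a framework can be sketched in a few lines of code. The pillar names, equal weights and min-max normalisation below are illustrative assumptions for exposition only, not Carmignac's actual proprietary methodology:

```python
# Hypothetical sketch of a KPI-based Responsible AI score.
# Pillar names, weights and the normalisation scheme are illustrative
# assumptions, not the proprietary framework described in the text.

def normalise(raw: float, lo: float, hi: float) -> float:
    """Min-max normalise a raw KPI value onto the 0-1 range, clamped."""
    if hi == lo:
        return 0.0
    return max(0.0, min(1.0, (raw - lo) / (hi - lo)))

def responsible_ai_score(pillar_scores: dict[str, float],
                         weights: dict[str, float]) -> float:
    """Weighted average of normalised pillar scores, scaled to 0-100."""
    total_weight = sum(weights.values())
    weighted = sum(weights[p] * pillar_scores[p] for p in weights)
    return round(100 * weighted / total_weight, 1)

# Example: seven illustrative pillars, each already normalised to 0-1.
pillars = {
    "governance": 0.8, "risk_management": 0.7, "fairness": 0.5,
    "transparency": 0.6, "user_rights": 0.9, "content_integrity": 0.3,
    "environment": 0.4,
}
equal_weights = {p: 1.0 for p in pillars}
score = responsible_ai_score(pillars, equal_weights)
```

Normalising each KPI before aggregation is what makes scores comparable across AI developers, deployers and hybrid players, since raw indicators sit on very different scales.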

We find that the highest scores are in our User Rights, Feedback & Redress metrics (excluding energy, which is covered by only one KPI) while the lowest scores are in Content Integrity. This outcome is expected given the current stage of AI deployment. While organisations are establishing appropriate governance structures and policies, these measures have not yet been fully implemented across all product suites or consistently reflected in outcomes.
When we overlay these scores with real valuation multiples12, we do not find a direct correlation across all categories, implying that the market has yet to take Responsible AI into account. This is to be expected given the lack of standardised information for the market to absorb. While the top-scoring listed AI infrastructure platforms in the framework traded on higher P/E multiples, this was not the case for other sectors within the AI supply chain. Currently, growth, margins, market position and general governance remain the main drivers of valuation, but we anticipate this will change as the topic becomes increasingly material.
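The kind of overlay described above can be tested by computing the correlation between framework scores and valuation multiples within each sector bucket. A minimal sketch on invented data (the figures below are fabricated for illustration and are not our actual scores or multiples):

```python
import math

def pearson(xs: list[float], ys: list[float]) -> float:
    """Pearson correlation coefficient between two equal-length series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Invented example: Responsible AI scores vs forward P/E multiples
# for two hypothetical sector buckets.
infra_scores = [45, 55, 62, 70, 78]
infra_pe     = [22, 25, 28, 31, 35]   # higher score, higher multiple
other_scores = [40, 50, 60, 70, 80]
other_pe     = [30, 18, 27, 21, 25]   # no clear relationship

r_infra = pearson(infra_scores, infra_pe)   # strongly positive
r_other = pearson(other_scores, other_pe)   # weak / near zero
```

A strongly positive coefficient in one bucket alongside a near-zero coefficient in another would mirror the pattern noted above: partial pricing of Responsible AI in infrastructure platforms, little elsewhere.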
We believe that with time, companies that proactively manage AI ethics will trade at higher valuation multiples due to three reinforcing dynamics:
Reduced Risk Discount: Investors assign lower risk premiums to firms with strong governance. Transparent AI governance lowers exposure to litigation, regulatory fines, and brand crises.
Enhanced Innovation Capacity: Firms with mature Responsible AI frameworks often exhibit more disciplined data management and governance processes, which improve model efficiency and accelerate scalable innovation.
Talent and Brand Premium: Evidence shows that strong ethical governance improves employer attractiveness and customer trust13. Companies with visible Responsible AI commitments are therefore better placed to convert that governance into a durable competitive advantage.
Since its early days of adopting GenAI models, Microsoft has emerged as a leader for Responsible AI by embedding ethical principles into every stage of its AI lifecycle. Unlike many peers who mainly publish high-level AI ethics statements, Microsoft has built a dedicated Office of Responsible AI and a company-wide Responsible AI Council that review high-risk use cases, require impact assessments, and enforce guardrails before launch14.
What sets Microsoft apart is its proactive approach: publishing detailed transparency notes for AI systems, investing in fairness and bias-mitigation tools, and monitoring models on an ongoing basis. This combination of governance, tooling, and industry collaboration positions Microsoft as a leader in building trustworthy AI ecosystems.
Beyond internal controls, the most material tests of Responsible AI are playing out in high-stakes societal domains where harms scale faster than safeguards:
Child online safety has become an important topic of discussion, as the rapid evolution of technology makes it difficult to introduce appropriate controls in a timely manner. The agenda is being shaped not only by governments but also by civil society and investor stewardship. Countries are moving to tighten protections for minors online, including minimum age requirements and stronger age-verification expectations for social media platforms15.
Investors, meanwhile, are increasingly framing child safety as a governance issue; for example, shareholder proposals at companies such as Meta calling for clearer oversight, quantitative metrics, and greater transparency received 59% support among non‑management‑controlled votes in 202416. At the same time, GenAI is emerging as a distinct new catalyst for action beyond social media.
Child-rights groups warn that GenAI can scale new risks17, such as synthetic sexual abuse material and more persuasive misinformation, and are issuing guidance aimed directly at AI developers and governments, not only content-hosting platforms. Policymakers are also beginning to scrutinise AI chatbots and companion-style tools18, signalling a shift in focus from “social media safety” toward broader “youth safety in AI-mediated digital services.”
One of the most sensitive aspects of GenAI is its potential impact on jobs. For many companies, large-scale automation can unlock productivity and margin expansion. But abrupt headcount reductions, opaque restructuring decisions or the use of AI in hiring without proper safeguards could trigger employee backlash, union disputes and reputational damage for companies.
While media narratives often link GenAI to widespread unemployment, current macroeconomic data does not indicate a measurable increase in unemployment attributable to AI19. The World Economic Forum projects that by 2030 AI will help create 170 million jobs while displacing 92 million positions20, a net gain of around 78 million jobs. Historically, the diffusion of horizontal technologies has tended to reshape roles and reallocate work rather than reduce overall employment levels, though task restructuring is already visible in several industries, and the long-term distributional impact remains uncertain.
Radiology offers a useful illustration. In 2016, AI was predicted to replace radiologists within five years21. A decade later, the reality is quite different: radiologist numbers have increased, yet many healthcare systems still face shortages because imaging demand and clinical complexity have outpaced capacity22. Rather than replacing radiologists, AI has become a valuable complement, improving triage, quality, and workflow, while keeping expert oversight and accountability firmly in the loop.
Therefore, as investors we look for evidence on how companies manage AI’s impact on employment beyond short-term cost savings. Firms that treat AI as a complement to human capital, rather than a blunt cost-cutting tool, are more likely to preserve culture, attract talent and avoid social, regulatory or political pushback.
The question is no longer whether AI will reshape industries, but how responsibly that transformation will occur. Should large-scale unemployment emerge without effective economic reallocation, the introduction of an AI tax would become increasingly likely. Investors should now prioritise discussions on AI governance, transparency and long-term resilience, ensuring that innovation aligns with sustainable value creation.
1https://www.researchgate.net/publication/387448709_Predictive_Analytics_in_Enhancing_Investment_Portfolio_Performance.
2https://news.sky.com/story/ai-technology-can-detect-early-signs-of-over-1-000-diseases-say-researchers-13212855.
3https://hai.stanford.edu/news/ai-index-2025-state-of-ai-in-10-charts.
4https://www.europarl.europa.eu/RegData/etudes/BRIE/2025/775855/EPRS_BRI(2025)775855_EN.pdf.
5https://www.mckinsey.com/capabilities/quantumblack/our-insights/the-state-of-ai.
6AI principles | OECD.
7https://www.nist.gov/itl/ai-risk-management-framework.
8https://www.bbc.co.uk/news/business-64576225.
9https://futurism.com/the-byte/microsoft-stock-falling-as-bing-ai-descends-into-madness.
10EU AI Act.
11EU could water down AI Act amid pressure from Trump and big tech | European Commission | The Guardian.
12Bloomberg, January 2026.
13https://www.frontiersin.org/journals/psychology/articles/10.3389/fpsyg.2021.607108/full.
14https://www.microsoft.com/en-us/ai/responsible-ai.
157 countries with social media bans for children.
16https://www.unicef.org/innocenti/generative-ai-risks-and-opportunities-children.
17https://www.unicef.org/innocenti/generative-ai-risks-and-opportunities-children.
18Senators announce bill to ban AI chatbot companions for minors.
19AI and Jobs: The Final Word (Until the Next One) - Economic Innovation Group.
20The Future of Jobs Report 2025 | World Economic Forum.
21Geoff Hinton: On Radiology.
22https://www.rcr.ac.uk/media/4imb5jge/_rcr-2024-clinical-radiology-workforce-census-report.pdf.
This is a marketing communication. This document is intended for professional clients and is not intended for distribution to, or use by, any person in any jurisdiction where such distribution or use would be contrary to applicable laws or regulations. This material may not be reproduced, in whole or in part, without prior authorisation from the Management Company. This document was prepared by Carmignac Gestion, Carmignac Gestion Luxembourg or Carmignac UK Ltd and is being distributed in the UK by Carmignac Gestion Luxembourg. This material does not constitute a subscription offer, nor does it constitute a recommendation or investment advice. The information contained in this material may be partial information and may be modified without prior notice. The information is expressed in good faith as of the date of writing and is derived from proprietary and non-proprietary sources deemed by Carmignac to be reliable; it is not necessarily all-inclusive and is not guaranteed as to accuracy. As such, no warranty of accuracy or reliability is given, and no responsibility arising in any other way for errors and omissions (including responsibility to any person by reason of negligence) is accepted by Carmignac, its officers, employees or agents. Opinions expressed are subject to change without notice. Copyright: The data published herein is the exclusive property of its owners, as mentioned on each page. “Carmignac” is a registered trademark. “Investing in your Interest” is a slogan associated with the Carmignac trademark.
Carmignac Gestion - 24, place Vendôme - 75001 Paris. Tel: (+33) 01 42 86 53 35 – Investment management company approved by the AMF. Public limited company with share capital of € 13,500,000 - RCS Paris B 349 501 676.
Carmignac Gestion Luxembourg - City Link - 7, rue de la Chapelle - L-1325 Luxembourg. Tel: (+352) 46 70 60 1 – Subsidiary of Carmignac Gestion - Investment fund management company approved by the CSSF. Public limited company with share capital of € 23,000,000 - RCS Luxembourg B 67 549.