AI Adoption for Legacy Organizations: Driving Transformation
Introduction: AI Adoption - A Strategic Imperative for Legacy Organizations
AI has the potential to create step-change business opportunities while keeping your organization from falling behind. It’s not a niche capability of digital-native enterprises, but a mainstream driver of business transformation across all sectors. For executives leading legacy organizations, the question is no longer if AI should be adopted, but how to do so responsibly, effectively, and at scale.
Yet despite its promise, many traditional businesses struggle to extract real value from AI. Low digital maturity, siloed operations, and unclear implementation strategies often stall progress and erode confidence in return on investment.
I’ve shared a lot on Web3 for sustainability, but not yet on AI’s role. If your organization has a climate or sustainability mission and isn’t leveraging AI, you’re already falling behind. This guide isn’t just for green-focused companies; it’s for any legacy business facing cultural inertia, outdated systems, or low digital readiness.
In this 9-step executive playbook, I distill insights from leading publications, consultancies, and my own research into a practical roadmap for responsible AI adoption. If you’re a transformation lead, CDO, or CTO, this framework can help you align AI with strategy, build trust, and unlock competitive advantage.
1..Leadership
For AI adoption to take hold meaningfully across the organization, visible leadership engagement is essential. Leaders must actively role model the behaviors they want to see, embracing and incentivizing new ways of working, communicating a clear vision for how AI supports business goals, and setting expectations around innovation.
Just as importantly, you need to create psychological safety by acknowledging and addressing the concerns and uncertainties employees may have about AI. This could take the form of open conversations about its risks, limitations, and ethical use, which build trust and reduce resistance.
Celebrating experimentation, even when imperfect, and publicly recognizing teams that are responsibly using AI signals that learning and curiosity are valued. This combination of strategic clarity, empathy, and encouragement helps embed a culture where AI becomes a trusted tool for progress, not a source of anxiety.
2..Innovative Culture
In this age of generative AI, companies that are outpacing their peers in revenue and profit growth aren’t simply scaling existing products or services; they are engineering new growth curves through innovation. They do this by embedding a culture of innovation, underpinned by strategic investments in advanced digital technologies and reinforced by leadership behaviors that enable rapid experimentation.
To do this:
Lead with a "Test and Learn" Mindset
Executives need to set the tone for experimentation over perfection. Data shows that high-performing organizations embrace failure as a necessary input to innovation, not a risk to be avoided. They reframe failures as learning and discovery. They institutionalize "fail fast" thinking, with systems in place to quickly spot poor outputs, stop non-performing initiatives, and pivot without bureaucratic friction. In the context of generative AI, this mindset is critical: models can produce “confidently wrong” answers (hallucinations), and teams need both the skill and the authority to identify and respond swiftly.
Ask Better Questions, More Often
The most advanced AI adopters are organizations that are skilled not just at deploying tools, but at framing the right problems. They understand that generative AI is only as effective as the inputs and context it receives. These organizations have already honed their algorithmic thinking: recognizing both the promise and limitations of the technology. This discernment allows them to unlock value quickly and avoid the “garbage in, garbage out” trap.
Build Systems That Learn, Not Just Execute
Generative AI’s greatest potential lies in self-evolving workflows, particularly where human intervention adds little value. Organizations already ahead in this space have identified “no human touch” processes and invested in the data architecture, governance, and change management required to support them.
Compete on Proprietary Data
Ultimately, the edge in generative AI won’t come from access to models. It will come from access to context-rich, proprietary data. Organizations that are ‘top innovators’ are five times more likely to design internal processes, products, and customer interactions for continuous data capture. This allows them to extract deeper insights and produce more relevant, business-specific AI outputs; a critical advantage in a landscape where off-the-shelf models alone are quickly commoditized.
Learning Velocity is Your Competitive Edge
The most successful innovators are those that learn fastest. We know this already. They are eight times more likely to have enterprise-wide agile practices, not limited to IT. This organizational agility enables rapid adaptation as AI tools evolve, and ensures teams can act quickly when risks, errors, or new opportunities emerge.
3..Strategy: Evaluating Use Cases Through a Strategic and Responsible Lens
When implementing AI initiatives, the first consideration is whether the use case aligns with organizational objectives and stakeholder commitments. Generative AI should only be deployed when it is clearly the most effective technology to solve a defined problem, and where its impact can be clearly communicated across business units.
Here are mechanisms for working out how its use can be aligned with your strategy.
Three Impact Mechanisms of Generative AI
Leaders should evaluate use cases against the organization’s strategic objectives and stakeholder responsibilities. In practice, use cases tend to fall into one of three categories:
1. Scaling Human Capability
Enhancing productivity by accelerating existing workflows (e.g. instant content generation, creative iteration, summarizing research at speed).
2. Raising the Floor
Democratizing access to advanced capabilities, such as code generation or design tools, that previously required specialist expertise.
3. Raising the Ceiling
Solving previously intractable problems, such as accelerating R&D or uncovering novel insights through generative synthesis (e.g. new molecular structures in pharmaceuticals).
Each mechanism offers a different type of ROI (productivity, accessibility, or innovation) and should be measured accordingly.
Speed, Reputation, and Talent: Strategic Levers for AI Adoption
For executive leaders, generative AI presents not just a technological opportunity, but a strategic one, particularly in fast-paced sectors like marketing, consumer goods, and retail. In these industries, speed to market is a critical differentiator. Use cases that accelerate delivery, personalize customer experiences, or streamline decision-making can yield immediate competitive advantage. Inaction, by contrast, carries significant opportunity costs, especially as peers and competitors move fast to scale AI-driven capabilities.
Beyond speed, generative AI shapes brand reputation and workforce dynamics. Externally, its use signals innovation and forward thinking; internally, it can serve as a powerful tool for talent attraction and retention. Empowering teams with AI that automates repetitive tasks and frees time for higher-order work increases engagement, particularly among knowledge workers who value creativity, autonomy, and purpose. In a competitive talent market, offering access to advanced AI tools can position the organization as a destination for future-focused professionals.
Executives must treat these dynamics of speed, reputation, and talent as interconnected levers in the AI strategy. When aligned, they reinforce each other to drive accelerated growth, cultural momentum, and long-term value creation.
Generative AI use cases should align with strategic goals and stakeholder needs, delivering value across productivity, accessibility, or innovation. Leaders must assess where AI offers real advantage while also leveraging it to boost speed-to-market, strengthen brand reputation, and attract talent, treating AI as both a technology investment and a strategic growth enabler.
4..Governance Structure
As generative AI becomes embedded in core business functions, organizations will be held accountable for its outcomes: legally, ethically, and reputationally. This shift requires new governance structures and regulatory fluency. High-performing organizations are already responding by embedding cross-functional, stakeholder-driven models to manage risk and unlock responsible innovation.
Regulatory Complexity Demands New Capabilities
Navigating the global patchwork of AI regulations will require new roles and competencies in compliance, legal, risk, and ethics. AI governance must evolve from a technical oversight function to a strategic enabler of responsible transformation, ensuring that AI deployments align with corporate values, emerging regulations, and stakeholder expectations.
Executives must lead the charge in:
Setting up internal policies that reflect ethical principles like transparency, safety, and fairness
Defining ownership and accountability cross-functionally
Creating rapid feedback loops for monitoring, adapting, and improving AI models and deployments
From Guardrails to Ongoing Oversight
Generative AI’s evolutionary nature (its ability to learn, generate novel content, and produce downstream effects) means one-time risk assessments are no longer sufficient. Even with guardrails in place, continuous evaluation is essential to ensure integrity, accuracy, and alignment with intended use.
Organizations need to anticipate:
The potential for unintended consequences in AI outputs
The ethical implications of autonomous decision-making
Workforce and public skepticism around trust, control, and fairness
Multi-stakeholder Governance is Non-Negotiable
Crucially, human oversight is needed to ensure responsible and effective application, address potential risks, and maintain quality outcomes. Leading organizations are adopting distributed, multi-stakeholder governance models, with cross-functional representation spanning compliance, legal, risk, ethics, and the business functions deploying AI.
The Role of Ethics Councils and Governance Bodies
The positive and negative externalities of generative AI expand the conventional responsibilities in governance towards a more holistic, human-centred and values-driven approach. An AI ethics council, rooted in value-based principles, is becoming an essential governance layer. Larger enterprises may form councils that include stakeholder and shareholder representation, while smaller organizations can convene internal committees or work with external advisors.
These councils should guide:
Workplace policy and employee use (and even consider informal use on personal devices)
Use case prioritization and risk management
Strategic foresight across emerging tech intersections
Emerging and intersecting strategies for open technologies more broadly, beyond artificial intelligence alone, including 5G, Web3, blockchain, and quantum computing
Structuring for Scale: Hub, Spoke, or Hybrid?
Your organization's maturity and innovation culture should shape how you structure AI capabilities:
A hub-and-spoke model can flex either way. If you have a mature, relatively homogenous organization, centralise your AI function in a hub; centralized hubs drive scaling in homogenous, regulated environments. However, if you're a younger or consensus-based organization, and particularly if you're innovative, I'd recommend building more AI capability in the spokes, where it can be especially useful for scaling in less mature organizations.
I'd also establish:
A governance body with executive accountability, inclusive of your ethics council, to guide ethical AI use, ensure alignment with values (e.g., fairness, safety, privacy), and track progress
Below that, I’d add a cross-functional taskforce or community of practice to foster collaboration and continuous learning
5..Preparing People
What separates top performing organizations is not just technical investment - it’s organizational readiness. Leading innovators are four-to-five times more likely to have tech-savvy business leaders who can identify where AI delivers the most value. They are also significantly more likely to have agile, cross-functional teams that are comfortable with both code and ambiguity. In many of these firms, generative AI augments already mature workflows, accelerating deployment and experimentation across the enterprise.
We know that AI adoption is challenging for companies that were not “born digital.” For these businesses, the journey requires not just new systems, but a profound change in mindset. Leaders must recognize that introducing AI into established workflows can disrupt long-standing cultural norms and professional identities. Hiring 'translators' who connect ML engineers and data scientists with the wider business can do wonders: they provide the bridge between the technical teams and business needs. I’ve brought translators into the fold for other programs with diverse stakeholders, and the benefits to active engagement went through the roof.
A common underlying fear in the workforce is the belief that “a machine cannot know more than me.” I came across this whilst talking to a sales executive a few months ago. This concern, whether voiced openly or not, can foster quiet resistance, and without intentional communication and trust-building that resistance will grow. Combined with fears of job displacement, loss of prestige, or diminished control, especially in organizations where power is concentrated through gatekeeping or siloed knowledge, it can become a major barrier to adoption.
To counter this, leaders need to bring end users into the process early. Co-designing AI tools with the people who will ultimately use them not only enhances adoption but also builds understanding and trust. When users are part of the build process, they’re more likely to comprehend the assumptions, data limitations, and logic behind the system. This understanding empowers teams to make better decisions and treat AI as a collaborative partner, not an opaque black box. And ensuring human feedback loops are in place mitigates risk, because it means that more robust, valuable models are developed.
Another challenge executives must manage is expectation. Many organizations hope for plug-and-play solutions that deliver immediate returns. In reality, AI adoption is iterative. It requires time to test, refine, integrate with existing systems, and train people, not just machines. Leaders need to set realistic expectations around timelines and impact. Managing stakeholder patience is as important as managing technical risk.
Finally, it’s important to be strategic about where AI is applied. Not every workflow is a good candidate for automation or augmentation. High-performing organizations focus first on areas where the business case is strongest, where AI can provide speed, pattern recognition, or decision support without compromising trust or increasing complexity. At King's College London in May 2025, the advice was to go narrow in your first AI application project, and when you think it's narrow enough, look to go narrower still. Prioritizing the right use cases and showing ROI through early wins significantly reduces friction, which is essential for momentum.
In summary, AI adoption isn’t just a technological endeavor, it’s a people transformation challenge. The organizations that succeed will be those whose leaders invest just as much energy in mindset, culture, and trust as they do in data and infrastructure.
6..Managing Data and Process Workflows
Successful generative AI adoption hinges on more than experimentation, it requires operational and digital readiness. Before integrating GenAI into the business, organizations must ensure that their data is accurate, secure, representative, and accessible. Without this foundation, AI outputs risk being irrelevant at best, and harmful at worst.
At the core of readiness is a modern, adaptable digital infrastructure. GenAI intensifies demands on the digital core, requiring scalable data pipelines, agile platforms, and sufficient computing power. This makes a strong digital backbone non-negotiable for sustained transformation: not just cloud access, but well-curated, well-governed data.
Leaders should ask:
Do we have the right talent, tooling, and architecture to support AI at scale?
Can we track model lineage and data provenance to ensure trust and auditability? (A minimal sketch follows this list.)
Are our cybersecurity practices mature enough for AI-driven systems?
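To make the lineage and provenance question concrete, here is a minimal, hypothetical sketch in Python of what provenance logging could look like. The field names, model identifiers, and file format are illustrative assumptions rather than a reference implementation; most teams would lean on their existing ML platform's tracking features.

from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone
import json

@dataclass
class LineageRecord:
    # Minimal provenance entry linking an AI output to its inputs (all names are hypothetical)
    model_name: str                                         # internally registered model identifier
    model_version: str                                      # pinned version or checkpoint hash
    prompt_id: str                                          # reference to the prompt or template used
    data_sources: list[str] = field(default_factory=list)   # datasets or documents consulted
    output_id: str = ""                                     # identifier of the generated artefact
    created_at: str = field(default_factory=lambda: datetime.now(timezone.utc).isoformat())

def log_lineage(record: LineageRecord, path: str = "lineage_log.jsonl") -> None:
    # Append the record to a JSON Lines audit log for later review
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(asdict(record)) + "\n")

log_lineage(LineageRecord(
    model_name="summarisation-assistant",
    model_version="2024-11-snapshot",
    prompt_id="exec-brief-v3",
    data_sources=["crm_export_2024_q4", "policy_library_v12"],
    output_id="brief-000123",
))

Even a lightweight append-only log like this gives audit and risk teams a starting point for tracing which model version and which data produced a given output.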
Common Pitfall: Skipping Digitization
A frequent misstep is leaping into AI without digitizing key workflows or understanding where AI can create the most value. Instead, organizations should begin by mapping workflows to:
Identify high-value, high-impact areas where AI can enhance decisions or automate complexity.
Prioritize use cases that support both employee enablement and measurable business outcomes.
Ensure the underlying data is clean, classified for purpose, and integrated across systems.
It’s important to meet the business where it is, i.e. if you’re not yet digital, start there. A simple prioritization sketch follows below.
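Here is that sketch: a deliberately simple Python scoring of candidate use cases on business value and data readiness. The use cases, scores, and weights are invented for illustration; the point is the discipline of ranking before building, not this particular formula.

# Hypothetical candidate use cases scored 1-5 on business value and data readiness
candidates = [
    {"use_case": "Customer-query summarisation", "value": 4, "data_readiness": 5},
    {"use_case": "Contract clause extraction", "value": 5, "data_readiness": 2},
    {"use_case": "Marketing copy drafting", "value": 3, "data_readiness": 4},
]

# Weight value slightly above readiness; low readiness signals digitization work first
for c in candidates:
    c["priority"] = 0.6 * c["value"] + 0.4 * c["data_readiness"]

for c in sorted(candidates, key=lambda c: c["priority"], reverse=True):
    note = "" if c["data_readiness"] >= 3 else " (digitize and clean the data before piloting)"
    print(f"{c['use_case']}: {c['priority']:.1f}{note}")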
Infrastructure Imperatives
From peer and industry discussions, several themes are clear:
Centralized, secure, and fit-for-purpose systems are critical.
Automated data management and integration across platforms boosts agility and decision speed.
An open architecture enables flexibility and scalability, but requires thoughtful planning, as different systems mature at different rates.
Deploying scalable AI strategies depends on 1) establishing a strong digital core of AI applications and digital platforms, 2) a well-structured data and AI “backbone”, and 3) the right physical and digital infrastructure.
Build AI-ready infrastructure by becoming digital-first, with a strong understanding of your analytics, so that you know where to focus your AI efforts when you get there, rather than falling into the trap above.
A flexible, secure, and interoperable architecture is essential for scalable AI. Insights from industry leaders highlight the importance of centralized, purpose-built systems with seamless integration and automated data management. Open architecture enables agility across diverse use cases, but successful adoption requires a thoughtful approach to integration, as technology ecosystems mature at different speeds.
GenAI adoption doesn’t start with AI. It starts with data fluency, workflow clarity, and a digital-first mindset. Build the right foundations now, and AI will amplify your strategy. Skip these steps, and you risk investing in complexity without value.
7..Training and Communication
The successful adoption of generative AI depends as much on people readiness as on technical capability. Surveys consistently show that employees are concerned about job displacement, burnout, or role ambiguity as AI becomes embedded in their daily work. To build trust in AI-driven processes, organizations must invest in transparent communication and continuous learning.
Training should be tied directly to business strategy and introduced early. Not after deployment. Executives, particularly from HR, IT, and Finance, should collaborate to align on talent transformation plans that equip employees with the tools, skills, and support needed to adapt and thrive. This begins with a clearly articulated vision for AI use cases, including how they benefit both customers and employees, and what new professional development pathways will emerge.
Change management is not a side activity. It’s a strategic enabler. HR must be involved from the outset of any AI pilot to assess impact and co-design workforce interventions. Including employees in ideation for AI use cases and enabling them to shape their career evolution enhances engagement and trust.
Company-wide initiatives like hackathons and dedicated training days serve dual purposes: they accelerate upskilling while cultivating a culture of experimentation. The most successful organizations treat workforce readiness as an ongoing, organization-wide transformation effort, where communication, training, and employee empowerment are as dynamic and responsive as the technology itself.
8..Budget
Many organizations fail to realize the full value of generative AI not because of poor technology choices, but because of misaligned or incomplete investment strategies. A common pitfall is over-investing in models and tools, while underfunding the human and operational enablers needed to support adoption at scale.
Attempting to do too much too quickly, without aligning investments to the right time horizons, can stall momentum, exhaust teams, and sabotage future programs. Similarly, investing heavily in pilots or model development without dedicating budget to training, change management, and user empowerment leads to uneven uptake and diminished ROI.
A well-structured AI adoption budget must go beyond initial deployment and account for the full lifecycle of transformation.
This includes:
Ongoing training tailored by role, function, and digital readiness, ensuring teams evolve alongside the technology.
Tools and guardrails that empower users to interact with AI responsibly, confidently, and within defined parameters.
Resources to track adoption, performance, and impact, including feedback mechanisms, usage metrics, and model effectiveness reviews.
Change leadership investments, such as dedicated taskforces, communities of practice, and internal hubs that connect technical and business domains.
Critically, leaders must fund the infrastructure of culture change: mechanisms that support knowledge sharing, cross-functional experimentation, and safe-to-fail environments. These soft investments are often overlooked, yet they are what differentiate organizations that merely experiment with AI from those that scale it successfully and responsibly.
9..Metrics for Success
Metrics should be directly linked to the organization's overarching objectives. For instance, if the goal is to enhance customer satisfaction, relevant metrics might include Net Promoter Score (NPS) changes post-AI implementation, or reductions in customer service response times. Aligning AI metrics with business goals ensures that AI initiatives contribute meaningfully to the organization's success.
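As a worked illustration of tying a metric to a business goal, the short Python sketch below computes Net Promoter Score (the percentage of promoters scoring 9-10 minus the percentage of detractors scoring 0-6) and compares pre- and post-implementation values. The survey responses are invented for the example.

def nps(scores: list[int]) -> float:
    # Net Promoter Score: % promoters (9-10) minus % detractors (0-6)
    promoters = sum(1 for s in scores if s >= 9)
    detractors = sum(1 for s in scores if s <= 6)
    return 100 * (promoters - detractors) / len(scores)

# Illustrative (invented) survey responses before and after an AI-assisted service change
before = [9, 7, 6, 10, 8, 5, 9, 7]
after = [10, 9, 8, 10, 9, 7, 9, 8]
print(f"NPS change post-AI implementation: {nps(after) - nps(before):+.1f} points")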
Tailoring Metrics to AI Maturity and Organizational Size
Organizations at different stages of AI maturity and of varying sizes will require different metrics:
Early-stage or Smaller Organizations:
For early-stage or smaller organizations, focus on tracking the percentage of employees actively using AI tools, the completion rate of AI training programs, and how many pilot projects meet their success criteria. These metrics help assess engagement, readiness, and early value delivery.
Mature or Larger Organizations:
For more mature or larger organizations, key metrics should include the depth of AI integration into core business processes, measurable gains in efficiency or cost reduction, and the rate at which new AI-driven products or services are launched. Tailoring metrics in this way ensures they remain relevant and actionable, enabling more accurate assessments of progress and impact.
Recommended AI Adoption Metrics
Based on industry best practices, consider the following metrics:
User Engagement Metrics:
Understanding how teams interact with AI tools is critical for measuring adoption success. Metrics like active usage - tracking how often users engage with AI on a daily, weekly, or monthly basis - help identify usage trends and potential engagement gaps. Additionally, feature utilization indicates how comprehensively tools are being used, while user satisfaction scores from surveys or feedback loops provide qualitative insights into the perceived value and usability of AI systems.
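A simple way to operationalize these engagement metrics is to aggregate tool usage logs. The sketch below assumes a hypothetical event log of (user, feature, date) records and computes weekly active usage and feature utilization; the field names, window, and data are assumptions to adapt to your own telemetry.

from datetime import date

# Hypothetical usage events: (user_id, feature, day the feature was used)
events = [
    ("u1", "draft_email", date(2025, 6, 2)),
    ("u1", "summarise_doc", date(2025, 6, 3)),
    ("u2", "draft_email", date(2025, 6, 3)),
    ("u3", "draft_email", date(2025, 6, 5)),
]
licensed_users = {"u1", "u2", "u3", "u4", "u5"}                        # everyone given access
available_features = {"draft_email", "summarise_doc", "generate_code"}

week_start, week_end = date(2025, 6, 2), date(2025, 6, 8)

# Active usage: share of licensed users with at least one event in the window
active = {u for u, _, d in events if week_start <= d <= week_end}
print(f"Weekly active usage: {100 * len(active) / len(licensed_users):.0f}%")

# Feature utilization: share of available features actually used in the window
used = {f for _, f, d in events if week_start <= d <= week_end}
print(f"Feature utilization: {100 * len(used) / len(available_features):.0f}%")

Usage data tells you whether people are engaging, not why; user satisfaction still needs its own survey or feedback channel.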
Performance Metrics:
Performance metrics assess the technical effectiveness of AI systems. Accuracy rates determine how closely AI outputs match expected benchmarks, ensuring reliability. Response times measure how quickly the system delivers results, which is vital for maintaining user productivity. Meanwhile, system uptime reflects the availability and stability of AI services, minimizing disruptions and ensuring consistent performance across operations.
Business Impact Metrics:
To evaluate the real value AI delivers, organizations should track its direct business impact. Return on Investment quantifies financial return relative to AI spend, while cost savings capture operational efficiencies gained. Revenue growth linked to AI-driven initiatives offers a clear view of its role in enabling innovation, new offerings, or enhanced customer experiences that contribute to top-line growth.
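The financial arithmetic itself is simple; the sketch below uses invented figures purely to show how ROI, cost savings, and AI-linked revenue might be rolled up for a single initiative.

def roi(total_benefit: float, total_cost: float) -> float:
    # Return on investment expressed as a percentage of AI spend
    return 100 * (total_benefit - total_cost) / total_cost

# Illustrative (invented) first-year figures for one AI initiative
cost = 250_000          # licences, integration, training, change management
cost_savings = 180_000  # operational efficiencies attributed to the initiative
new_revenue = 150_000   # revenue growth linked to AI-driven offerings

print(f"First-year ROI: {roi(cost_savings + new_revenue, cost):.0f}%")  # 32%

The harder part is attribution: agree up front how savings and AI-linked revenue will be measured, so the numbers survive scrutiny.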
Governance and Compliance Metrics:
Ensuring AI is deployed ethically and in line with regulations requires robust governance metrics. Compliance rates measure adherence to legal and internal standards. Bias detection tracks the frequency and severity of algorithmic bias, promoting fairness and trust. And, audit trail completeness ensures transparent and traceable decision-making, enabling accountability and simplifying regulatory reporting.
Frequency of Metrics Review
Regularly reviewing AI adoption metrics is essential for sustaining momentum and ensuring alignment with business goals. Monthly reviews are valuable for monitoring user engagement and performance, allowing for timely course corrections. Quarterly reviews help assess business impact and compliance, syncing well with financial reporting cycles. Annual reviews provide a strategic lens, evaluating long-term outcomes such as ROI and innovation impact.
That being said, review frequency should be tailored to the organization’s pace of change and the complexity of its AI initiatives to maintain relevance and effectiveness.
Indicators of Successful AI Adoption
Successful AI adoption is indicated by high user adoption rates, where a large proportion of intended users consistently engage with AI tools. Organizations should also see measurable business improvements such as greater efficiency, cost reductions, or increased revenue, aligned with the three impact mechanisms above. Positive user feedback, in both satisfaction scores and qualitative input, signals that the tools are delivering real value. Scalability is another key marker, reflected in the organization’s ability to expand AI solutions across functions or departments. Finally, sustained compliance with ethical standards and regulatory requirements demonstrates responsible and trustworthy AI integration.
Tracking AI Adoption
To monitor AI adoption effectively, organizations should implement real-time dashboards that display key metrics for stakeholders, enabling transparent and timely insights. Regular reports should be generated to summarize progress, highlight challenges, and identify new opportunities. Establishing robust feedback mechanisms is also essential to capture user experiences and continually improve AI tools. Additionally, benchmarking against industry standards or historical performance helps contextualize progress and ensures the organization stays competitive and aligned with best practices.
By systematically tracking these elements, organizations can ensure that AI adoption aligns with strategic goals and delivers the intended value.
Implementing a structured approach to measuring AI adoption, tailored to the organization's maturity and size, and aligned with strategic objectives, is essential for realizing the full potential of AI initiatives.
Conclusion
A structured, goal-aligned approach to measuring AI adoption is not just a best practice; it’s a strategic necessity. As organizations evolve in their AI maturity, their ability to track usage, impact, and readiness will define the difference between experimentation and scalable value creation. By aligning metrics with business objectives, engaging stakeholders, and reviewing performance at the right cadence, leaders can identify what’s working, adapt where needed, and accelerate responsible adoption. Ultimately, successful AI integration depends on seeing metrics not as compliance tools, but as strategic levers that guide growth, governance, and long-term competitive advantage in an AI-driven world.