Generative AI

Why CIOs Must Embrace Eight Tough Lessons When Scaling Generative AI

April 25, 2025

IT Services

Author

Joshua (Josh) Santiago, Managing Partner of Santiago & Company


Key Takeaways

Generative AI promises transformative gains but requires CIOs to manage complexity, focus on impact, and scale responsibly. By following eight core lessons, from concentrating on key use cases to embedding ethical guardrails, leaders can harness gen AI for sustainable growth and true competitive advantage.

  • Meaningful outcomes depend on prioritizing a handful of high-potential use cases and removing distractions from underperforming pilots.
  • Effective orchestration, spanning data quality, cost management, and robust governance, is crucial for turning isolated proofs of concept into enterprise-wide capabilities.
  • Responsible design, including transparency, ethical guardrails, and strong user trust, ensures regulatory compliance and long-term viability as gen AI becomes more deeply embedded in operations.

Getting to scale with generative AI (gen AI) calls for CIOs to concentrate on fewer high-impact initiatives and execute them with greater discipline. The earliest phase of gen AI, a period filled with impressive yet isolated pilots, has given way to the more complex work of creating enterprise-wide capabilities. Although developing a proof of concept may look simple, turning that prototype into a large-scale solution remains a daunting challenge. According to recent Santiago & Company research on technology trends, just 9 percent of companies have successfully adopted gen AI at scale.

This transitional moment offers CIOs a valuable chance to transform the promise of gen AI into measurable business value. However, many technology leaders underestimate the extensive effort required to convert pilots into production-ready systems. Achieving the full potential of gen AI means rewiring many aspects of business operations, and establishing the right technology foundation is a critical part of that process.

Earlier discussions explored the initial technology concerns that companies often face. Here, we examine eight essential lessons for organizations taking the "Guardian" path, which connects large language models (LLMs) to an enterprise's internal data and applications. In this approach, companies shape bespoke gen AI capabilities on top of third-party foundation models, rather than using only off-the-shelf software or building their own large language models from scratch.

CIOs in every industry are discovering that scaling generative AI requires discipline and deliberate focus. The following eight lessons provide a practical roadmap to help organizations harness gen AI effectively while managing complexity and risk:

  1. Cut Through the Noise to Find Real Value. Concentrate on the few initiatives that truly matter and jettison underperforming pilots. This approach frees critical resources for solutions that offer the highest return.
  2. Focus on the Bigger Picture, Not Just the Parts. Orchestrating interactions between large language models, data, and applications is far more challenging than picking any single tool. End-to-end automation and a robust API gateway are pivotal for controlling rapidly expanding complexity.
  3. Manage Costs Before They Spiral Out of Control. Expenses for running gen AI often exceed initial build costs, and labor for model maintenance can be substantial. Leaders should continuously optimize usage and tie spending to clear business outcomes.
  4. Streamline Tools and Tech for Lasting Scale. Too many platforms and services can overwhelm teams and inflate costs. Standardizing on a small, flexible set of solutions enables faster deployment at scale.
  5. Assemble Impact-Focused Teams, Not Just Models. Integrate business, tech, and risk experts to ensure solutions align with strategic priorities. Cross-functional collaboration fosters a shared sense of ownership and accelerates impact.
  6. Prioritize the Right Data Over the Perfect Data. Identifying and refining your most relevant data sources yields faster, more reliable results than striving for absolute accuracy. A well-organized data foundation is the cornerstone of consistent gen AI performance.
  7. Reuse Smart Solutions Instead of Reinventing the Wheel. Building standard, reusable components across use cases saves time and resources. This modular approach simplifies maintenance and scaling efforts.
  8. Embed Responsible AI at Every Step. Embedding ethical considerations, robust privacy safeguards, and clear regulatory protocols builds trust and ensures compliance. Responsible design lays the groundwork for long-term viability and societal acceptance of gen AI.

1. Cut Through the Noise to Find Real Value

Even when business leaders publicly affirm their desire to move beyond pilot programs, day-to-day activity often remains experimental. Gen AI adoption is rising, but many organizations struggle to find compelling examples of bottom-line impact. In a Santiago & Company AI survey earlier this year, only around 11 percent of companies reported that gen AI meaningfully lifted earnings before interest and taxes (EBIT).

Several factors contribute to this shortfall, including misleading lessons drawn from pilots that, for instance, emphasize a mere chat interface over robust enterprise applications. Others may deem a pilot "successful" when it affects a negligible corner of the business. The biggest stumbling block emerges when a company spreads its resources and executive attention too thinly across a dozen or more gen AI projects. This mistake mirrors those made in earlier waves of technology adoption.

Therefore, a CIO's central responsibility is to cut nonperforming pilots and heavily invest in the few that address vital parts of the organization while keeping risks in check. This demands close collaboration with business unit leaders, whose insights can guide the CIO in setting priorities and tackling technical complexities. By concentrating scarce energy and budget on high-potential initiatives, an organization can progress from flashy pilots to large-scale transformation.

Figure 1: Example Use Cases for Practical Value

2. Focus on the Bigger Picture, Not Just the Parts

Much of the current dialogue around gen AI focuses on individual tools: LLMs, vector databases, and sophisticated prompts. The greatest challenges come from orchestrating how these components work at scale. Each use case might require several models or libraries, drawing on different data sources that could reside in multiple environments: on-premises servers, third-party clouds, or hybrid systems. Adding a single new feature creates a ripple effect across everything else.

To manage this complexity, organizations need a clear orchestration framework. Typically, this involves an API gateway that authenticates users, meets compliance requirements, and routes queries to the right models. The gateway also logs requests and tracks usage, helping leaders manage and optimize costs. A fully automated process that spans data preparation, model development, deployment, and monitoring is essential to moving quickly and safely. Santiago & Company research shows that companies considered gen AI high performers are more than four times as likely as others to embed testing and validation into every release.
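As a rough illustration of the gateway responsibilities described above (authentication, routing queries to the right model, and logging usage for cost tracking), consider the minimal sketch below. All model names, prices, and the token estimate are hypothetical assumptions for illustration, not a production gateway design.

```python
import time
from dataclasses import dataclass, field

@dataclass
class GatewayLog:
    """Accumulates one usage record per routed request."""
    entries: list = field(default_factory=list)

# Hypothetical registry: use-case name -> (model id, price per 1K tokens).
MODEL_ROUTES = {
    "customer_support": ("support-llm-v2", 0.002),
    "internal_search": ("search-llm-small", 0.0004),
}

def route_request(use_case: str, user_token: str, prompt: str, log: GatewayLog) -> dict:
    """Authenticate, route to the model registered for the use case, and log usage."""
    if not user_token:  # stand-in for real authentication/compliance checks
        raise PermissionError("missing credentials")
    if use_case not in MODEL_ROUTES:
        raise ValueError(f"no model registered for {use_case!r}")
    model_id, price_per_1k = MODEL_ROUTES[use_case]
    tokens = len(prompt.split())  # crude token estimate, for illustration only
    log.entries.append({
        "ts": time.time(),
        "use_case": use_case,
        "model": model_id,
        "tokens": tokens,
        "est_cost": tokens / 1000 * price_per_1k,
    })
    return {"model": model_id, "prompt": prompt}  # downstream model call would go here
```

Because every request passes through one choke point, the same log that supports compliance audits also feeds the cost-management work discussed in lesson 3.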

Figure 2: Gen AI Orchestration Framework Example

Because gen AI models rely on probabilistic methods, their outputs can change unexpectedly. Providers often update underlying models weekly, generating variable results. For this reason, tech leaders need "hyperattentive" monitoring systems that quickly flag performance anomalies and automatically adjust model parameters or update prompt templates. Companies that rely on manual checks or lack a modern MLOps platform risk letting errors linger.
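The core of such monitoring can be a simple statistical check on a stream of evaluation scores: flag any score that deviates sharply from its recent rolling baseline. The window size and threshold below are illustrative assumptions; a production system would sit on a dedicated MLOps platform rather than this sketch.

```python
from statistics import mean, stdev

def flag_anomalies(scores, window=20, z_thresh=2.5):
    """Return indices of scores that deviate sharply from the rolling baseline.

    A score is flagged when it sits more than z_thresh standard deviations
    from the mean of the preceding `window` scores.
    """
    flagged = []
    for i in range(window, len(scores)):
        baseline = scores[i - window:i]
        mu, sigma = mean(baseline), stdev(baseline)
        if sigma and abs(scores[i] - mu) / sigma > z_thresh:
            flagged.append(i)
    return flagged

# A sudden drop after a provider-side model update gets flagged immediately.
drift_points = flag_anomalies([0.90, 0.91] * 10 + [0.40])
```

Flagged indices can then trigger the automated responses described above, such as adjusting model parameters or rolling out an updated prompt template.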

3. Manage Costs Before They Spiral Out of Control

Gen AI deployments can devour budgets. The scale of data usage, frequent queries to foundation models, and substantial infrastructure outlays mean costs escalate rapidly if left unchecked. Crucially, the models often account for a fraction of the total investment, around 16 percent. Instead, labor costs for model maintenance, data pipeline upkeep, and rigorous risk oversight drive overall spending. CIOs should focus their energies on three critical areas:

  • Change management represents the largest cost category in many gen AI programs. Unlike previous digital initiatives, where the ratio of development costs to change-management spending was closer to 1:1, gen AI demands triple the outlay on user training, role modeling, and performance tracking for every dollar spent on model development. This need stems from gen AI's transformative potential, which can remake fundamental workflows rather than merely improve existing ones.
  • Operating expenses often surpass build expenses because real-time gen AI interactions with employees or customers generate large volumes of queries. Foundation model usage fees and the staff required for continuous monitoring can produce significant and ongoing overhead. CIOs can reduce these strains by matching each gen AI use case's performance requirements with its business value. The bar might be high for a customer-facing tool but lower for an internal application that can tolerate slower response times.
  • Cost optimization is a continuous effort. Innovative organizations experiment with architecture decisions, such as embedding data to speed up queries and iterating on design to find more efficient paths. Some companies have slashed usage costs from a dollar per query to a fraction of a cent. However, they did so by adopting a deliberate, phased approach that includes choosing cheaper infrastructure, rethinking data storage, and pruning usage where it yields minimal benefit.
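A back-of-the-envelope serving cost model makes these trade-offs concrete. The query volumes, token counts, and per-token prices below are invented for illustration; the point is how quickly a cheaper model plus response caching changes the monthly bill.

```python
def monthly_serving_cost(queries_per_day: int, tokens_per_query: int,
                         price_per_1k_tokens: float,
                         cache_hit_rate: float = 0.0) -> float:
    """Rough monthly serving cost; cached queries are treated as free here."""
    billed_queries = queries_per_day * 30 * (1 - cache_hit_rate)
    return billed_queries * tokens_per_query / 1000 * price_per_1k_tokens

# Illustrative only: a premium model vs. a cheaper model with a 50% cache hit rate.
baseline = monthly_serving_cost(10_000, 2_000, 0.06)                       # $36,000/month
optimized = monthly_serving_cost(10_000, 2_000, 0.002, cache_hit_rate=0.5)  # $600/month
```

Even with made-up numbers, the exercise shows why matching each use case's performance tier to its business value, as described above, dominates any one-time build saving.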

4. Streamline Tools and Tech for Lasting Scale

One reason scaling gen AI becomes difficult is the sheer variety of methods that different teams adopt in parallel. Small groups can fire up their own clouds in early phases, use proprietary models, or tap multiple vendor solutions. However, as the number of tools and platforms grows, complexity balloons, and so do costs. Eventually, standardizing and integrating these fractured pieces becomes unwieldy.

To avoid this "Wild West," companies must pare down infrastructures, libraries, and toolsets to a manageable few. In many cases, the logical choice is to leverage a primary cloud service provider (CSP), particularly if that CSP already stores most of the enterprise's data and has strong gen AI offerings. Endless reviews of the pros and cons of different LLMs or hosting services can lead to analysis paralysis. In truth, LLMs are quickly becoming commodities, so the ability to swap out or update a model may matter more than the initial selection.

Actively pursue flexibility but set limits. Adopting multiple CSPs or overengineering for every potential scenario will quickly increase costs. A good rule of thumb is to build enough adaptability to switch providers or integrate new features when necessary, but not so much that every new tool burdens the enterprise with another custom integration.

5. Assemble Impact-Focused Teams, Not Just Models

A common pitfall is viewing gen AI primarily as a technology initiative. The lessons from previous digital transformations are clear: technology alone rarely unlocks full impact. Instead, businesses must create multidisciplinary teams integrating technologists, domain experts, risk leads, and product managers. Agile development methods often help these teams move fast, but the real game changer is weaving together business, technical, and governance skills.

Many companies structure this support through a central team, often labeled a "center of excellence" or "gen AI hub." This group evaluates use cases, allocates resources, sets standards, and manages program performance. Others split these responsibilities between strategic and tactical pods. Regardless of the model, success hinges on forging close collaboration among technology, risk, and business leaders. Their shared mission should be to track initiatives against specific metrics, take corrective action quickly, and shut down endeavors that fail to deliver.

The oversight mechanism must also enforce risk protocols. Each potential use case requires an assessment of possible adverse outcomes, and companies need robust "human-in-the-loop" safeguards when certain thresholds are exceeded. Monitoring cannot stop at the pilot phase. Gen AI evolves constantly, so the teams and standards that shape it must also evolve.

In one financial services organization, the CIO and chief strategy officer established a high-level steering group that oversaw enterprise governance, resource allocation, and use-case approvals. Meanwhile, the CTO spearheaded a separate but coordinated enablement effort that standardized data architecture, data science practices, and engineering approaches. This dual structure allowed the company to scale from five to more than 50 active gen AI projects.

6. Prioritize the Right Data Over the Perfect Data

Although gen AI can appear adept at synthesizing and refining information, the underlying data must still be accurate and well-organized. Some businesses mistakenly assume gen AI can scrape data from myriad silos and automatically refine it, but this rarely holds. Instead, successful companies invest in data infrastructure to ensure high-quality input while focusing on what matters most.

For instance, retrieval-augmented generation (RAG) often benefits from targeted labeling of core data sets. Adding transparent "authority weighting" also helps the model determine which sources to trust over others, preventing confusion that arises from competing entries in multiple repositories. Companies must continually ensure that the gen AI models remain stable and relevant as they incorporate new data.
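One simple way to realize authority weighting is to blend each retrieved chunk's vector similarity score with a per-source trust weight before ranking. The source names, weights, and blend factor below are hypothetical assumptions, not a prescribed scheme.

```python
# Hypothetical per-source trust weights, maintained alongside the data catalog.
AUTHORITY = {"master_catalog": 1.0, "regional_wiki": 0.6, "archived_share": 0.3}

def rank_chunks(chunks, authority=AUTHORITY, alpha=0.7):
    """Rank (text, similarity, source) tuples by a blend of similarity and authority.

    alpha controls how much vector similarity outweighs source trust.
    """
    def score(chunk):
        _text, similarity, source = chunk
        return alpha * similarity + (1 - alpha) * authority.get(source, 0.0)
    return sorted(chunks, key=score, reverse=True)

# A slightly less similar chunk from the trusted repository outranks a stale copy.
ranked = rank_chunks([
    ("Old spec sheet", 0.92, "archived_share"),
    ("Current safety data sheet", 0.88, "master_catalog"),
])
```

The design choice is that trust is explicit and auditable: when two repositories disagree, the ranking explains why one source won.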

One materials science organization exemplifies this challenge. Each department stored a separate version of the same product details, whether for safety data sheets or customer support. Over time, these subtle differences led to conflicts that made it difficult for gen AI applications to interpret the data consistently. Centralizing updates in a single repository, a step that demanded agreement from multiple stakeholders, gradually solved the problem, enabling smoother adoption of gen AI in areas ranging from research to customer service.

7. Reuse Smart Solutions Instead of Reinventing the Wheel

Companies can accelerate gen AI development by 30 to 50 percent through reusable assets and standardized components. However, teams often dive into narrowly focused projects that reinvent the same solutions repeatedly. CIOs can shift the organizational mindset by encouraging a higher-level perspective.

Instead of equipping each individual project with its own pipeline and code, leaders can identify standard functions among three to five core use cases and then build dedicated modules. Examples include translators, synthesizers, sentiment analyzers, and structured data loaders. These assets increase development speed, improve consistency, and reduce maintenance headaches.
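A minimal version of such a shared library can be a registry of validated components that product teams pull from instead of rewriting. The component name and placeholder logic below are assumptions for illustration; a real module would wrap a vetted prompt template and model call.

```python
from typing import Callable, Dict

class ComponentRegistry:
    """Minimal registry of validated, reusable gen AI building blocks."""
    def __init__(self):
        self._components: Dict[str, Callable] = {}

    def register(self, name: str):
        """Decorator that adds a component to the registry under `name`."""
        def decorator(fn: Callable) -> Callable:
            self._components[name] = fn
            return fn
        return decorator

    def get(self, name: str) -> Callable:
        return self._components[name]

registry = ComponentRegistry()

@registry.register("synthesizer")
def synthesize(passages: list) -> str:
    # Placeholder: a production synthesizer would call an LLM with a
    # validated prompt template rather than concatenating text.
    return " ".join(p.strip() for p in passages)

# Any product team can now reuse the vetted component by name.
summary = registry.get("synthesizer")(["Quarterly revenue rose.", "Costs held flat."])
```

Because every team resolves components through the registry, a fix or security patch to one module reaches all use cases at once.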

Achieving this modular approach requires formal governance. A platform owner or cross-functional team might maintain a library of validated tools and code bases, which product teams can easily adapt. Such reuse fosters reliability, security, and speed, making it far easier to launch new gen AI applications without starting from scratch.


8. Embed Responsible AI at Every Step

For many companies, the race to deploy gen AI can overshadow the need for robust, ethically grounded practices. However, public trust hinges on whether AI systems handle data responsibly, remain free from bias, and respect essential privacy boundaries. In fast-moving innovation cycles, developers may be tempted to shortcut risk assessments or rely on prebuilt model guardrails that do not address the organization's unique standards and values. Ignoring these concerns invites reputational harm and can expose the company to regulatory challenges, especially as governments worldwide sharpen their focus on AI governance.

When leadership embeds responsible use principles from the outset, it clarifies system design choices, shapes stronger "human-in-the-loop" protocols, and fuels a feedback loop between risk managers and product teams. Most importantly, it fosters greater transparency around how the organization collects, processes, and deploys data. This openness can help employees, partners, and customers trust that the gen AI solutions they interact with will not jeopardize their privacy or produce harmful outcomes. Over time, a well-articulated responsible-use framework becomes a competitive differentiator, allowing teams to innovate confidently and scale gen AI in ways that generate both business value and societal benefit.

Leading businesses forward

Generative AI can transform how enterprises build products, interact with customers, and make decisions. Realizing that promise depends on achieving scale, which is significantly more complicated than crafting a smart demo or pilot. CIOs must tackle uncomfortable truths, such as eliminating dozens of mediocre experiments and investing deeply in a few strategic initiatives.

They must create flexible orchestration platforms and rewire how data flows through the organization. They must also assemble diverse teams with the mandate to integrate technology and business thinking. Those willing to act decisively can unify their technical foundations and governance mechanisms, sharpen their approach to data quality, and consciously manage the total cost of ownership. By addressing these eight core lessons, CIOs will help their organizations capture the full value of gen AI, ensuring that it becomes an exciting innovation and a scalable engine of competitive advantage.
