
4 Surprising Truths About Adopting and Building AI
Introduction: More Than a Magic Box
When most people think of artificial intelligence today, they picture a conversational tool like ChatGPT—a powerful, seemingly magical box that can write, code, and brainstorm on command. While these tools are impressive, they represent only the most visible crest of a much deeper wave of transformation. The real story of AI is far more nuanced, complex, and surprising, both in how organizations can successfully adopt it and where the technology itself is heading.
Beneath the surface of generative text and images, a new reality is taking shape. The most impactful lessons from the frontiers of AI strategy and research are often counter-intuitive. They reveal that the future of AI isn't just about bigger models and more data, but about architectures inspired by the human brain and organizational frameworks that prioritize people over platforms. The following points distill the essential, counter-intuitive truths that separate successful AI pioneers from those who merely follow the hype.
1. AI Success Is a People Problem, Not a Tech Problem
Successful AI implementation is fundamentally about organizational structure, culture, and governance—not just technology. Organizations often make the mistake of thinking the first step is choosing an AI tool or platform. However, the most effective strategies begin by building a human framework first—a framework that treats AI not as a static tool, but as a dynamic, learning capability that needs the same kind of guidance and acculturation as a new human team.
The backbone of this framework is an "AI Governance Group," a central body responsible for connecting the organization's strategic vision with its foundational capabilities and on-the-ground execution. This group ensures alignment between three critical functions:
* Executive: Leadership that sets the strategic direction, including roles like the Chief AI Officer and CTOs.
* Operational: Teams building the foundational capabilities, managing everything from IT/OT infrastructure and data management to Human Resources, Legal, and Procurement.
* Delivery: Teams responsible for on-the-ground implementation, from proofs-of-concept and pilots to scaled deployments.
This human-centric approach requires embedding principles like safety, transparency, and accountability into the organization's culture before scaling up the technology. By establishing clear roles, responsibilities, and ethical standards from the outset, organizations can build the trust necessary to ensure AI initiatives deliver real, measurable value instead of becoming expensive, disconnected experiments. Such governance is not just for managing today's static AI; it is essential for navigating the next generation of models that learn and evolve continuously, presenting entirely new challenges for oversight and control.
2. The Next AI Won't Just Be Smarter—It Will Evolve
The dominant AI models of today, including Large Language Models (LLMs), are static. Once trained on a massive dataset, their knowledge is frozen in time; updating them requires a complete and costly retraining process. A new wave of "post-Transformer" models is challenging this paradigm with brain-inspired architectures designed to learn continuously.
One such model is "Baby Dragon Hatchling (BDH)" from the AI startup Pathway. It is designed to mirror how the human brain learns through a principle known as Hebbian learning, often summarized as "neurons that fire together wire together." This allows the model's artificial neurons to form and strengthen connections based on experience, enabling it to evolve over time. As Pathway's CEO notes, the static nature of current models is a major limitation:
"Current LLMs are re-living Groundhog day (if you know the movie). They are trained once then wake up every day with the same state of memory (and potentially with access to a large library of notes), without having any consistent learning that could happen over time."
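The core Hebbian idea is simple enough to sketch in a few lines. The toy example below is purely illustrative (it is not the BDH architecture, whose details are not described here): a connection weight is strengthened whenever an input neuron and the output neuron fire together, so frequently co-active pairs end up strongly "wired."

```python
import numpy as np

rng = np.random.default_rng(0)

n_inputs = 4
weights = np.zeros(n_inputs)  # connection strengths, initially unwired
learning_rate = 0.1

# Toy "experience": inputs 0 and 1 usually fire together; 2 and 3 rarely fire.
for _ in range(100):
    x = (rng.random(n_inputs) < np.array([0.9, 0.9, 0.1, 0.1])).astype(float)
    y = float(x[0] and x[1])  # output neuron fires when inputs 0 and 1 co-fire

    # Hebbian update: delta_w = eta * x * y
    # ("neurons that fire together wire together")
    weights += learning_rate * x * y

print(weights)  # connections 0 and 1 grow much stronger than 2 and 3
```

Because the weights update with every experience rather than only during a one-off training phase, a system built on this principle keeps learning after deployment — which is exactly the contrast with today's frozen LLMs.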
Another example, as reported by The Economic Times, is the Hierarchical Reasoning Model (HRM) developed by the AI firm Sapient. Also inspired by the brain's layered structure, HRM has already outperformed models from OpenAI and Anthropic on complex reasoning benchmarks. This shift toward AI systems that can "get better 'on the job'" could make AI development cheaper, more sustainable, and ultimately far more powerful.
3. Some of the Most Advanced AI is Being Designed to Forget
While the race to build AI that can learn everything is well-publicized, an equally important frontier is emerging: designing AI that can selectively forget. This capability, known as "Machine Unlearning," is the process of surgically removing the influence of specific data points from a trained model without having to retrain it from scratch.
This concept is crucial for addressing modern privacy regulations like GDPR, which includes the "right to be forgotten." If a user requests their data be deleted, an organization must be able to prove that the data's influence has also been removed from its AI models.
The inspiration for this comes directly from human cognition: the brain constantly prunes unnecessary or redundant neural connections to maintain efficiency and optimize memory. In the same way, machine unlearning allows an AI to shed information that is outdated, incorrect, biased, or subject to a privacy request. This capability is more than a technical fix for privacy law; by letting models correct for flawed or harmful data, it addresses core ethical concerns around accountability and fairness. Designing AI that can surgically "forget" is proving just as important as designing AI that can learn, because it is essential for building trustworthy, compliant, and adaptable systems.
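One way to make forgetting cheap is to shard the training data — in the spirit of sharded-ensemble ("SISA"-style) approaches — so that deleting a record only requires retraining the one sub-model that saw it. The sketch below is a deliberately minimal illustration under that assumption; the per-shard "model" is just a mean, standing in for real training, and all names are hypothetical.

```python
from statistics import mean

def train_shard(shard):
    # Stand-in for real training: the "model" is just the shard's mean.
    return mean(shard) if shard else 0.0

data = [3.0, 5.0, 4.0, 100.0, 6.0, 2.0]
n_shards = 3
shards = [data[i::n_shards] for i in range(n_shards)]
models = [train_shard(s) for s in shards]

def predict():
    # Aggregate the per-shard models (here: average their outputs).
    return mean(models)

def forget(value):
    # Remove one record and retrain ONLY its shard; the other
    # sub-models are untouched, so deletion stays cheap.
    for i, shard in enumerate(shards):
        if value in shard:
            shard.remove(value)
            models[i] = train_shard(shard)
            return

before = predict()
forget(100.0)  # e.g. a "right to be forgotten" request for this record
after = predict()
print(before, after)  # the forgotten record no longer influences the output
```

The point of the design is provability: because the deleted record only ever touched one shard, the organization can demonstrate that its influence is fully gone without retraining the whole system from scratch.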
4. You Don't Flip an AI Switch; You Climb a Maturity Ladder
Organizations don't simply "adopt AI." Instead, they embark on an incremental journey through distinct stages of organizational maturity. Attempting to deploy advanced AI solutions without the proper foundations is a recipe for failure. A maturity model, like the one developed for transportation agencies, can serve as a universal roadmap for any organization looking to implement AI successfully.
This journey progresses through five distinct levels, each with its own focus and set of milestones:
* Level 1 (Aspirational): The organization begins to understand AI's potential and assesses its current gaps in technology, data, skills, and capabilities.
* Level 2 (Planning): A formal AI strategy is established, a governance structure is created, and the organization begins developing initial proofs of concept.
* Level 3 (Building Foundations): The focus shifts to enhancing infrastructure, processes, and skills while implementing pilot programs to test AI in real-world scenarios.
* Level 4 (Deployment): Data and technical capabilities are scaled up to support wider use of AI across the organization, and internal processes are refined.
* Level 5 (Optimization): The organization incorporates continuous improvements, refines its AI systems for sustained impact, and prepares for the next wave of innovation.
This staged approach is incredibly valuable because it prevents organizations from wasting resources on technology they aren't ready for. It provides a clear, practical roadmap that aligns investment with capability, ensuring progress is both manageable and sustainable. This ladder isn't just a project plan; it's a roadmap for evolving an organization's 'collective intelligence' to match the increasingly sophisticated, brain-like technologies it seeks to deploy, from systems that can forget on command (Truth #3) to those that learn on the job (Truth #2).
Conclusion: A More Human-like Future for AI
The future of artificial intelligence is becoming more "human," but not just in its ability to mimic conversation or create art. The true shift is deeper. It's happening in how we must organize ourselves to manage it, with a focus on governance, culture, and ethics. And it's happening in how the technology itself is being designed—with brain-inspired architectures that can learn continuously, adapt to new information, and even forget. These surprising truths reveal that our path forward with AI is not just a technical challenge, but a deeply human one.
The critical question for every leader is therefore no longer just about technology adoption. As AI begins to learn and adapt more like a human brain, how must we fundamentally re-architect the way we work with it, not just use it?
