After a few years of transformation in the field of artificial intelligence, the gap among engineers is not talent. It’s coordination: common standards and a common language for how AI fits into everyday engineering work. Some teams are getting real value: they have moved beyond individual experiments and begun building repeatable ways of working with AI. Others have not, even when the motivation was there. The reason is often simple: the cost of navigating the landscape has risen dramatically. The space is saturated with tools and tips, and it’s hard to know what matters, where to start, and what “good” looks like once you pay attention to the realities of production.
The missing map
What is missing is a common reference model. Not another tool, but a map. What engineering activities can AI responsibly support? What does quality mean for those outputs? What changes when part of the workflow becomes probabilistic? What guardrails keep integration secure, observable, and accountable? Without this map, it’s easy to get caught up in novelty, and it’s easy to confuse large-scale experimentation with reliable integration. Teams with the least time, budget, and local support pay the highest price, and the gap widens.
This gap is now evident at the organizational level. More organizations are trying to turn AI into business value, and the difference between hype and integration is showing in practice. It’s easy to give impressive presentations. It is very difficult to make AI-powered work reliable under real-world constraints: measurable quality, controllable failure modes, clear data boundaries, operational ownership, and predictable cost and latency. This is where engineering discipline matters most. Artificial intelligence does not eliminate the need for it; it magnifies the cost of losing it. The question is how to move from scattered experiments to integrated practice without burning cycles churning through tools. To do this at scale, we need common scaffolding: a shared model and shared language for what “good” looks like in AI-native architecture.
We’ve seen before why this kind of shared scaffolding matters. At the beginning of the Internet era, promises and hype moved faster than common standards and practices. What made the Internet durable was not a single vendor or methodology, but rather the cultural infrastructure: open knowledge sharing, global collaboration, and a common language that made practices comparable and teachable. AI engineering needs the same kind of cultural infrastructure, because integration only scales when the industry can coordinate around what “good” means. Artificial intelligence does not eliminate the need for careful engineering. On the contrary, it punishes its absence.
A common scaffold for AI-native architecture
In the second half of 2025, I began to notice increasing anxiety among the engineers I worked with and my friends in IT. There was a clear sense that AI would change our work in profound ways, but much less clarity about what that would actually mean for a person’s role, skills, and daily practice. There was no shortage of courses, guides, blogs, or tools, but the more resources emerged, the harder it became to judge what was relevant, what was useful, and where to start. It felt overwhelming. How do you know which topics really matter to you when suddenly everything is labeled as AI? How do you move from hype to useful integration?
I was feeling much of the same uncertainty myself. I’ve been trying to understand this shift as well, and for a while I was waiting for a clearer structure to emerge elsewhere. It was only when friends started reaching out to me for help and guidance that I realized I might have something useful to contribute. I don’t consider myself an AI expert. I’m finding my way through these changes just like many other engineers. But over the years, I have become known for my work in IT workforce development, skills and capabilities frameworks, and engineering excellence and empowerment. I know how to help people overcome complexity in a practical and sustainable way, and I enjoy bringing clarity to chaos.
This prompted me to start working on AI Flower as a hobby project in early October 2025, building on frameworks and methods I already had experience with.
When I started sharing it with friends in IT to gather feedback, I saw how much it resonated. It helped them understand the complexity surrounding AI, think more clearly about improving their skills, and begin shaping their own AI adoption strategies. That’s when I realized this informal experience had real value, and I decided to spread it so I could help empower other engineers and IT organizations in the same way I had helped my friends.
With AI Flower, I provide a common underpinning for AI-driven engineering work: a common reference model that helps engineers, teams, and organizations adopt and integrate AI sustainably and reliably. It aims to guide and organize the conversation around AI-assisted engineering, inviting targeted feedback about what’s broken, what’s missing, and what “good” should mean in real production contexts. It’s not meant to be perfect. It’s meant to be useful, freely available, open to contribution, and shaped by the most powerful resource our industry has: collective intelligence.
Knowledge sharing and open collaboration cannot be optional. If AI is to become part of how we design, build, operate, secure, and manage systems, we need more than just tools and enthusiasm. Many of us work on systems that people rely on every day. When these systems fail, the impact is real. That’s why we owe it to the people who rely on these systems to integrate AI carefully, and that’s why we won’t get there in isolation. We need the industry to converge, globally, around common standards of reliable practice.
About AI Flower
AI Flower identifies the core activities that make up engineering work across the major engineering disciplines. For each activity, it defines what good looks like, based on practices engineers should already be familiar with. It then helps people explore how AI can support those activities in practice, provides guidance on how to get started using AI in that work, shares links to useful learning resources, and identifies key risks, trade-offs, and mitigations.
But the AI landscape is changing rapidly. This activity-based approach helps engineers understand how AI can support core engineering tasks, where risks may arise, and how to begin building practical expertise. But this alone is not enough as a long-term model for AI adoption.
As AI capabilities evolve, many engineering activities will become more abstract, more automated, or absorbed into the infrastructure layer. This means that engineers will need to do more than just learn how to use AI in today’s activities. They will also need to work with emerging approaches like context engineering and agentic workflows, which are already reshaping what we consider core engineering work. A concept I call the skill petrification model captures this progression. It shows how both engineering and AI skills evolve over time, and how some become less visible as work moves to a higher level of abstraction. Together, AI Flower and the skill petrification model aim to help engineers remain adaptable as the field continues to change.
The main purpose of AI Flower is to help engineers find their way through these rapid changes and grow with them. While I provide content for each section and activity, the real value is in the framework and structure itself. To be truly valuable, it needs the vision, care, and contribution of engineers across disciplines, perspectives, and regions.
I really believe that AI Flower, as an open and freely available framework, can serve as a scaffold for this work. This is my contribution to a changing industry. But it won’t be useful, it won’t “flower,” unless the community tests it, challenges it, and improves it over time.
If any industry can turn open criticism and contribution into common standards on a global scale, then it’s ours, right?
Join me at AI Codecon to learn more
If AI Flower resonates with you and you want the full walkthrough, I’ll be giving one at O’Reilly’s upcoming AI Codecon. (Registration is free and open to everyone.)
If you’re worried about how quickly AI engineering patterns will evolve, you’re right to be. We’ve already seen the center of gravity shift from individual prompt work, to context engineering, to increasingly agentic workflows, and there’s more to come. AI Flower’s primary design goal is to remain stable across those transitions by focusing on core capabilities rather than specific technologies. I’ll be diving deeper into this stability principle, including the skill petrification model, at AI Codecon as well.