How should engineering teams be structured when AI is doing more of the building?
Engineering 2028: Leading Human + AI Teams Responsibly, our report produced in partnership with Damilah, finds that the remote vs. office debate is giving way to a different and perhaps more fundamental question about organisational shape.
The conversation about where engineers, and many other professionals, work has been running for years without producing a clean answer. The data on proximity and productivity is split, and the rise of AI tooling hasn’t resolved the question so much as reframed it. The more pressing question for 2028 isn’t how your team is located or distributed, but whether its structure is suited to the kind of work AI-augmented engineering now demands.
When Engineering 2028 asked leaders whether AI would make proximity more or less important by 2028, 47% expected no change, 28% expected it to become less important, and 15% expected it to become more important. That lack of consensus suggests AI is neither settling the question nor making it more acute.
But it may be changing what proximity is actually for.
Core and Edge
The organisations navigating this most deliberately are building what might best be called a dual-speed structure. At the centre sits a core team focused on stability and the high-stakes work that carries the most regulatory and architectural weight: the systems where failure is expensive, changes require careful governance, and the premium is on reliability over speed.
Running alongside it are smaller, faster edge teams built for high-speed experimentation with AI augmentation. These pods take on the work that benefits most from speed: exploring new markets, prototyping new capabilities, and pushing the boundaries of what the organisation can deliver. They’re designed to move quickly and learn fast, with the core team providing the stable infrastructure they build against.
The skills and working rhythms these two modes require are very different, and trying to run both at the same pace, within the same structure, is one of the more common reasons AI adoption produces friction.
Close Enough to Move Fast
What the data actually points toward is a more nuanced resolution than either side of the remote vs. office debate has wanted to allow.
Proximity may be becoming less important at the organisational level, but more important at the team level. As teams shrink and become more autonomous, the members of each small pod benefit from tight collaboration, while different pods can be distributed globally without the same cost to effectiveness.
In practice, this means a team of four might work in the same location and time zone, allowing the rapid iteration and communication that fast work demands, while other teams operate across different cities, countries, or even continents, sharing learnings through documentation and periodic synchronisation.
It’s a model that tries to capture both the efficiency of tight collaboration and the scale of global talent access, rather than treating them as a trade-off.
“AI can bridge a technical gap, but it cannot replace the cultural foundations that allow a small engineering team to communicate with high-bandwidth efficiency.”
– Filip Berlikowski, CTO, Payall
A cultural foundation is ultimately what makes a pod or team function, and it cannot be replicated by better documentation or AI-assisted communication tools. AI can, however, soften the absence of proximity across pods by surfacing institutional memory that would otherwise require a teammate to explain.
What AI Can and Can’t Bridge
Trust and transparency emerged as the dominant factor in distributed team effectiveness in the Engineering 2028 data, cited by 57% of respondents, well ahead of proximity and communication, which ranked lowest at 19%. Geographic closeness matters less than whether teams share work norms, communication styles, and a culture of proactive escalation.
“The future is not in just writing code or shipping product features as an engineer anymore. We need to be thinking about an infrastructure level, how do we enable AI agents to work in the way we want them to, but not to introduce critical vulnerabilities.”
– Anna McDougall, Field CTO, HashiCorp/IBM
That infrastructure layer is where the 2028 team structure gets much of its shape. Core teams own it, maintaining the guardrails, governance frameworks, and architectural standards that edge teams work within. Meanwhile, edge teams push against those boundaries, generating the learning that feeds back into how the infrastructure evolves. The two aren’t in tension, but they do need to be very deliberately connected, with clear channels established for what edge teams discover to inform how core teams adapt.
The Multi-Hat Engineer
The team structure question connects directly to what kind of engineer thrives within it. The narrow specialist who rarely crossed paths with product, operations, or design is giving way to something the report describes as the multi-hat engineer, someone who understands product and code, development and operations, and can work fluently alongside AI systems.
This isn’t a generalist who does everything adequately. It’s someone who can design systems, orchestrate AI agents, and still reason about products and customers. Curiosity becomes a hiring criterion in a way it hasn’t always been, because AI can provide answers, but only when a curious human knows which questions to ask.
Role data in Engineering 2028 reflects where that pressure is landing. Product management looks resilient, with more leaders expecting it to grow than shrink, reinforcing the value of problem-framing and prioritisation as AI lowers the cost of building. QA and testing face the strongest headwinds, with 60 respondents expecting the role to shrink and 9 expecting it to disappear entirely, as quality work gets absorbed into AI-driven tooling and newer, more specialised roles like AI Systems Engineer and AI Orchestration Engineer begin to take shape.
Architecting for What’s Actually Coming
The remote vs. office debate framed team structure as a question of individual preference and productivity. The more useful frame for 2028 is organisational design, specifically whether the shape of your team is suited to the work AI augmentation makes possible and the risks it introduces.
Leaders who are getting this right aren’t just adopting tools and adjusting working policies. They’re rethinking how their organisations are structured from the ground up: where stability lives, where speed is encouraged, and how the two interact. The proximity-within-pods model gives that rethinking a practical shape, but the underlying principle is the same throughout: intentional design for a specific context, rather than assuming AI solves the hard organisational questions just by existing.
Ultimately, the question isn’t whether your team is ready for AI, but whether your organisation is shaped to get the most out of it.
The full Engineering 2028: Leading Human + AI Teams Responsibly report goes further into governance frameworks, the trust gap in AI-generated code, and how leaders are thinking about quality at scale. You can download it here.
Want to go deeper? We have two upcoming Bytes sessions diving even further into the findings of Engineering 2028: Leading Human + AI Teams Responsibly.
Our online Bytes session, The Maturity Roadmap: From Early Adoption to AI-Enabled Leadership, takes place on 23rd April and tackles the messy gap between AI experimentation and genuinely AI-enabled leadership, while our in-person Bytes, Engineering 2028: A Leadership Masterclass, takes place on 7th May in London and gets concrete about what mature human and AI orchestration actually looks like when teams move beyond the hype.
Both are free to attend and offer the chance to explore the findings in more depth and connect with peers who are navigating the same challenges.
