After delivering a variety of data and AI projects for Fortune 500 companies, I’ve watched the same mistake play out dozens of times. Companies pour resources into designing perfect system architectures while all but neglecting the one thing that actually determines success: the team that will build and maintain it.
Here’s the pattern I see repeatedly, and why it matters for your next data initiative.
The Architecture Obsession#
When companies launch AI and data transformation projects, the focus lands squarely on technical excellence. Tech leaders want optimized architecture, scalable systems, and maximum ROI. They spend months aligning technical requirements with business objectives, hire expensive consultants for sophisticated designs, and create detailed project plans (100% compliant with PMO standards, of course) that look flawless on paper.
The architecture reviews are thorough. The documentation is comprehensive. The technology stack is cutting-edge.
And then implementation begins, and everything starts to unravel.
The Uncomfortable Truth About Perfect Plans#
Over the years I’ve tested both approaches extensively. We’ve worked with clients who started with comprehensive architecture plans, and others who started by assembling exceptional teams. The results?
Teams with strong problem-solving skills, high morale, and motivation consistently outperform mediocre or sporadically assembled teams with perfect plans.
Here’s why: skilled engineers don’t just follow blueprints. They adapt when requirements change, pivot when initial approaches fail, and innovate solutions to challenges nobody anticipated. They see problems coming and solve them before they escalate.
Hand that same perfect plan to an average team, and watch them struggle with implementation, get stuck on problems great engineers solve before lunch, and need hand-holding through decisions skilled professionals make instinctively.
Why “Both” Usually Means “Neither”#
The obvious question: why not have excellent architecture AND a top-tier team?
In theory, yes. In practice? I’ve never seen it work that way.
What I suspect happens is that comprehensive documentation and functional architecture create a false sense of security. Teams think, “If we document everything perfectly, even average engineers can execute.” But complex data systems don’t work that way. They require constant decision-making, troubleshooting, and adaptation: capabilities that live in people, not documents.
Where This Pattern Hurts Most#
This dynamic plays out in two scenarios:
In-house teams: You hire consultants for architecture, they deliver a beautiful design, then leave. Your team stares at the documentation, unsure how to implement it or adapt it when real-world conditions don’t match the assumptions.
Outsourced projects: This is where the pain hits hardest. Vendors deliver the system, collect payment, and disappear. Six months later, your team needs to extend functionality or fix a critical bug, and realizes they can’t. The system becomes a black box. Business impact stalls.
At A17, we’ve inherited a number of these “orphaned” systems. The original architecture was often sound, but the team left to maintain it couldn’t understand the decisions made, couldn’t debug the edge cases, and couldn’t evolve the system as business needs changed.
The Framework: How to Assess Team Capability and Help It Deliver#
Before your next data or AI initiative, run this assessment and follow these principles:
1. Can your team explain WHY, not just WHAT?#
Strong teams understand the reasoning behind architectural decisions. When you ask, “Why did we choose this approach?” they should articulate trade-offs, alternatives considered, and business context.
If they can only describe what was built—not why—that’s a warning sign.
2. How do they handle ambiguity?#
Give your team an underspecified problem: “We need to improve customer churn prediction.” Watch what happens.
- Weak teams wait for detailed requirements
- Strong teams ask clarifying questions, propose approaches, identify data needs, and start experimenting
3. What’s their problem-solving velocity?#
How quickly do they move from “we have a problem” to “we’ve tested three solutions”?
4. If using an outsourced workforce, engage in-house engineers early#
Resist the temptation to open data scientist and engineer vacancies only after delivery, based on what was built and which frameworks and approaches the solution used. Hire or rotate engineers in early, and have them witness—or better yet, participate in—decision-making and design. This helps them “own” the result, not just be aware of the building blocks.
5. Plan individual engineer development to align with your AI and data strategy#
Problem-solving as a skill is great, but hard skills are equally important. I’m convinced it’s essential to include platform training, stack upskilling, and even theoretical courses to ensure your squad is well-trained and ready.
Sometimes this isn’t easy: everyone has their own career track and vision, and it can mean “selling” the idea of further development in the right direction. Even if it’s hard to sell the idea of Azure Synapse to Redshift people, it pays off when engineers finally have a playground to practice their new skills and express their technical expertise while building your next efficient AI-based solution.
6. Prioritize communication loops over documentation volume#
We all know how hard it can be to sustain quick, informal feedback in the world of cross-audits, risk management policies, and strict PMO oversight—but it’s necessary. Frequent, short communication loops between engineers, analysts, and business stakeholders prevent the kind of misalignment that no documentation can fix later.
Instead of long silences between design approval and delivery, create continuous “build–discuss–adjust” cycles where decisions evolve alongside reality. For example, a five-minute Slack conversation about schema changes can save two weeks of rework in a data warehouse.
7. Build psychological safety and trust within the team#
Teams that feel safe to question assumptions or admit uncertainty identify risks earlier and innovate faster. When engineers trust that mistakes won’t be punished, they’re more likely to surface problems before they grow into incidents.
This is especially visible in cross-functional data projects where analysts, ML engineers, and backend developers must share ownership of complex pipelines.
8. Make architectural decisions reversible by design#
Empower your team to make decisions—and make mistakes. Especially in AI and data science, where technologies evolve every few months, no decision should be treated as permanent. Encourage engineers to record reasoning in lightweight ADRs (Architecture Decision Records) and revisit them when assumptions change.
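To make this concrete, here’s a minimal sketch of one common ADR shape. The project, date, and decision below are hypothetical, and most teams trim the format even further:

```
# ADR-012: Serve churn-model features from a managed feature store

## Status
Accepted 2024-03-01. Revisit at contract renewal, or sooner if an
open-source alternative covers our core serving paths.

## Context
Online scoring needs low-latency feature reads, and the team has no
capacity to operate storage infrastructure in-house right now.

## Decision
Use the managed offering, but only behind our own thin interface, so
a single adapter module is the only code that touches the vendor SDK.

## Consequences
Faster delivery now; known lock-in risk, mitigated by the interface
seam and the scheduled revisit date.
```

The point isn’t the template. It’s that the reasoning and the revisit trigger are written down, so reversing the decision later is a re-read, not an archaeology dig.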
A small example: a team once picked a “perfect” feature store after months of comparison, only to find six months later that open-source alternatives now offered 80% of that functionality with less vendor lock-in. Because they had designed reversibility from the start, switching took days, not months.
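I can’t share that team’s code, but the shape of the trick is simple: business logic depends on a narrow interface, never on a concrete store. A minimal Python sketch, where every name is a hypothetical placeholder rather than a real SDK:

```python
# Minimal sketch of a reversible architectural decision: scoring code
# depends on a small interface, never on a specific feature store.
# FeatureStore, VendorXStore, and OpenSourceStore are all hypothetical.
from abc import ABC, abstractmethod


class FeatureStore(ABC):
    """The narrow seam the rest of the pipeline is allowed to touch."""

    @abstractmethod
    def get_features(self, entity_id: str, names: list[str]) -> dict[str, float]:
        ...


class VendorXStore(FeatureStore):
    """Today's commercial choice, wrapped in one adapter class."""

    def get_features(self, entity_id: str, names: list[str]) -> dict[str, float]:
        # In real code this would call the vendor SDK; stubbed for the sketch.
        return {name: 0.0 for name in names}


class OpenSourceStore(FeatureStore):
    """Tomorrow's replacement: implement one class, swap in days."""

    def get_features(self, entity_id: str, names: list[str]) -> dict[str, float]:
        # In real code this would call the open-source client; stubbed here.
        return {name: 0.0 for name in names}


def churn_score(store: FeatureStore, customer_id: str) -> float:
    """Scoring logic never imports a vendor SDK directly."""
    f = store.get_features(customer_id, ["tenure_months", "monthly_spend"])
    return 0.3 * f["tenure_months"] + 0.7 * f["monthly_spend"]


if __name__ == "__main__":
    # Switching stores is a one-line change at the composition root.
    print(churn_score(VendorXStore(), "customer-42"))
```

Because only the adapter classes know about a specific store, a switch like the one above is one new class plus a one-line change: days, not months.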
Giving people the power—and psychological permission—to change direction is what keeps architectures alive instead of fossilized.
The Team-First Approach That Actually Works#
When we scaled A17 from startup to 30+ senior engineers across multiple countries, we learned something critical: retention and capability are linked. Our 90%+ retention rate isn’t accidental—it comes from hiring for problem-solving ability first, technical skills second, and creating an environment where strong engineers want to stay.
Here’s what this looks like in practice:
For in-house teams:
- Invest 60% of your budget in top-tier people, 40% in systems
- Hire generalists who can learn, not specialists who can’t adapt
- Build teams around problem-solvers, then let them choose architecture
For outsourced projects:
- Require knowledge transfer, not just delivery
- Embed your team in development from day one
- Evaluate vendors on their ability to upskill your people, not just deliver systems
The Real Cost of Getting This Wrong#
Let me share a real example. A fintech company implemented an ML-based scoring process for their clients as part of a credit pipeline. Their in-house R&D team didn’t have the capacity to take on the task quickly enough or guarantee results, so they ran a competitive procurement process and selected the most suitable contractor to deliver the module.
This module, integrated with the company’s BPMN and MDM systems, began operation in the third month and was completely handed off by month seven, including all documentation, training, and a roadmap. The IT executive fully understood the risk of not having the needed competence to develop this data science module further and improve it as the business evolved.
However, hiring two mid-level data scientists didn’t solve the problem. They didn’t feel they owned the solution, even though they were technically capable. After almost a year of blaming “those guys” for every possible inconvenience and struggling to make iterative improvements, the engineers convinced the IT executive that the scoring module needed a redesign.
This redesign project took another nine months and stalled during the validation phase—the scoring became less consistent on the new data model and with the new approach. Finally, the same R&D contractor was re-engaged to complete the project and deliver a second version of the scorer module.
Total estimated cost of this mistake: two years of two qualified data scientists’ salaries, plus management overhead, plus approximately 10% extra on the contractor’s invoice to analyze and refactor what had been done in-house. It’s hard to estimate the business impact of not having the module updated in a timely manner throughout the entire period.
I completely agree with the decision to switch to in-house development. But when you make that choice, building data engineering teams and developing their capabilities should be the focus—not roadmaps and making things look perfect on paper.
The most sophisticated AI solution is worthless if your team can’t operate, debug, or evolve it as business needs change. I’ve seen companies spend millions on perfect architectures that become shelfware within two years. I’ve also seen scrappy teams with mediocre tools deliver extraordinary business value because they could adapt, learn, and execute.
What This Means for Your Next Data Initiative#
Systems don’t solve problems. People do.
Before you invest another dollar in architectural consultants or enterprise platforms, honestly assess your team’s capability. Can they actually build, maintain, and evolve what you’re planning? Do they have the problem-solving skills to adapt when reality doesn’t match the plan?
If the answer is “no” or “I’m not sure,” pause. Shift your focus to data team building first. The architecture will follow.
The winning approach comes down to three principles:
Hire and engage early. Don’t wait until the system is built to bring your team in. Embed them from day one—whether you’re building in-house or working with vendors. Ownership begins with participation in decision-making, not just documentation handoffs.
Develop continuously with strategic alignment. Plan your team’s growth to match your AI and data adoption strategy. Invest in training, upskilling, and creating environments where engineers can practice new technologies on real problems. The best data teams aren’t just solving today’s challenges—they’re preparing for tomorrow’s.
Prioritize people over perfection. A 60/40 split favoring people over systems consistently outperforms the reverse. Strong teams with adequate tools will always outperform average teams with perfect architecture. Perfect designs don’t compensate for capability gaps—problem-solving velocity does.
The data and AI initiatives that succeed aren’t the ones with the most sophisticated architecture or comprehensive documentation. They’re the ones with teams capable of understanding why decisions were made, adapting when assumptions change, and owning the solution from design through evolution.
Invest in your people first. Everything else follows.