Quick Summary
- Your QA org chart was designed for scripted automation and manual testing. AI is changing what QA teams actually do, and the team structure has not kept up.
- 3 forces are converging: AI features shipping inside the ERPs you test, automation hitting diminishing returns, and QA data going to waste.
- The QA team of 2027 has 3 layers: Strategy and Intelligence at the top, Execution and Specialization in the middle, and Platform and Compliance as the foundation.
- 7 specific roles define this new structure, most of them filled by promoting people you already have.
- Each role connects directly to a measurable release outcome: cycle time, defect escape rate, maintenance cost, or audit preparation time.
- An 18-month transition roadmap lets you build these capabilities without disrupting current release cycles.
- Start with one role matched to your biggest pain point. Prove the value over a quarter. Then expand.
- Most teams have the right people already. What they do not have is the mandate, the structure, or the urgency to start building these capabilities before the gap gets too wide.
A QA director I know tried to hire someone to own the AI testing strategy for her enterprise QA team. She had the budget approved, the business case was clear, and her VP was on board. The problem was that she could not find a single resume that matched what she needed. Not because the candidates were weak, but because the role she was hiring for did not have a name yet. There was no standard title, no established career path, and no recruiter who had filled this position before.
She ended up promoting someone internal, which turned out to be the right call. But the experience stuck with her. “I’m building a team around roles that don’t have job descriptions yet,” she told me.
If you lead a QA team of any real size, you have probably felt something similar. The org chart was built for a world of manual testing and scripted automation. That world is changing fast, and most QA org charts have not caught up.
So I sat down and mapped out what the next version of the QA team actually looks like. I am talking about the actual team structure with reporting lines, how you get from today’s team to tomorrow’s over 18 months, and why each new role connects directly to the release outcomes your leadership cares about.
3 Pressures Your Current Team Was Not Designed For
Your existing roles are not broken; they were built for a different era. And right now, 3 pressures are converging, and together they create a gap that no amount of upskilling within the current structure can close.
1. AI Is Shipping Inside the Platforms You Test
SAP, Oracle, and Workday are embedding AI-driven recommendations, automated suggestions, and intelligent workflows into their core products. Your QA team now needs to validate what the AI does, and that is a fundamentally different skill than testing a deterministic business process.
When an AI feature suggests a vendor for a purchase order or auto-classifies a journal entry, there is no single right answer to test against anymore. The output changes depending on context, and most teams are still testing these features the same way they test a dropdown menu.
2. Your Automation Investment Is Hitting Diminishing Returns
Most automation teams I talk to tell me the same thing: they spend more time fixing yesterday’s scripts than building tomorrow’s coverage. They have thousands of automated tests, but those tests break every release, need constant maintenance, and still miss critical defects. Meanwhile, AI agents that can generate, execute, and self-heal tests are already here, and somebody on these teams needs to start learning how to direct them.
3. Your QA Data Is Going to Waste
Every test run, every defect, every pipeline produces signals about where quality risk actually lives. But most QA teams are not mining that data to predict where the next defect will show up. They test reactively when the information for predictive testing already exists in their own systems.
The World Quality Report 2025-26 found that 43% of organizations are experimenting with Gen AI in QA, but only 15% have scaled it enterprise-wide. The tools are there. What is missing is the team structure to use them properly.
You need new capabilities, and those capabilities need a home in your org chart. So what does that org chart actually look like?
What the 2027 QA Org Chart Actually Looks Like
Most QA org charts today still have 2 layers: test leads and testers, with maybe an automation team bolted on. That structure worked fine when QA was primarily about executing test cases and maintaining scripts. But now your QA team also needs to make strategic decisions about where AI fits. They need to mine data for risk prediction. They also need to build self-service infrastructure and validate AI-driven product features. The old 2-layer org chart was never designed to hold all of that.
The QA team of 2027 has 3 layers.
Layer 1: Strategy and Intelligence
This layer reports to the VP of Quality Engineering or directly to the CIO. It contains 2 roles:
- The AI Test Strategist, who owns the human-AI operating model for the entire QA organization.
- The Quality Intelligence Analyst, who owns the data that tells everyone else what to test and where risk lives.
These 2 roles make the decisions that shape how the rest of the team spends its time. In a team of 20 to 25 people, both of these can start as part-time responsibilities on existing senior leads. Once your team crosses 40 or 50, they probably need to be dedicated.
Layer 2: Execution and Specialization
This layer reports to the QA Manager or senior test lead. It is where the actual testing work happens, but the nature of that work has changed. This layer contains 3 roles:
- The Agentic Test Automation Architect, who designs and orchestrates AI testing agents instead of writing scripts by hand.
- The AI Output QA Analyst, who validates AI-driven features inside your enterprise applications.
- The Continuous Quality Engineer, who extends QA into production by monitoring live systems.
These roles typically need dedicated people because the work is too specialized to bolt onto an existing job.
Layer 3: Platform and Compliance
This layer can report to the QA Manager or cross-functionally to IT and DevOps. It contains 2 roles:
- The Quality Platform Engineer, who builds the self-service infrastructure that keeps QA from becoming a bottleneck.
- The Compliance Automation Lead, who embeds regulatory requirements directly into the pipeline so audits stop delaying releases.
In many enterprises, these roles start as shared responsibilities with DevOps before becoming dedicated QA positions.
The traditional roles do not vanish. Your functional testers, your domain experts, your manual regression testers still have work that AI cannot do, especially complex cross-module exploratory testing, business process validation, and stakeholder communication. The 3-layer structure does not replace the existing team. It adds the capabilities the existing team does not have.
With that structure in mind, let me walk through each of these 7 roles and what they actually do day to day.
A Closer Look at the 7 Roles Your QA Team Will Need by 2027
Not every enterprise needs all 7 of these roles. But every enterprise running complex ERP applications probably needs at least 3 or 4 of them within the next 18 months. Here is what each one does, who should fill it, and what release outcome it drives.
1. AI Test Strategist
You have adopted AI testing tools across your team, maybe even across multiple teams. But your release velocity has not changed. Different groups are using different tools for different things, and nobody owns the big picture of how AI actually fits into your QA process.
The AI Test Strategist owns this problem. They design the human-AI operating model for your QA organization. They decide which test types are handled by AI, like regression, smoke testing, and data validation, and which stay with humans, like exploratory testing, complex cross-module business processes, and usability.
They own the ROI story for AI investment in QA. And most importantly, they track whether AI is actually making releases faster, or just making individual tasks quicker. Those are 2 different things, and most teams are not measuring the difference.
Best candidate: Your most strategic senior QA lead, the one who already thinks in terms of process design rather than test execution. This is almost always a promote-from-within role because it demands deep knowledge of your specific systems and release cycles. An outsider will not have the context.
Start here: Pick one senior QA lead. Ask them to spend 20% of their time evaluating which test types AI handles well and which it does not in your environment. Have them document the findings. That person becomes your AI Test Strategist candidate.
What it moves: Releases get faster, not just individual tasks, because someone is finally tracking whether AI tools are improving cycle times or just speeding up isolated steps.
2. AI Output QA Analyst
Your ERP vendor just shipped AI-powered recommendations inside your procurement or payroll module. Who on your team is validating whether those recommendations are accurate? In a procure-to-pay workflow, a wrong AI suggestion means real financial exposure. In HR, biased AI decisions create compliance and legal risk. That is exactly what the AI Output QA Analyst handles.
They test AI-driven features embedded in enterprise applications, checking for hallucinated outputs, biased recommendations, inaccurate predictions, and prompt manipulation risks. And this requires a completely different skillset from testing a standard business process, because the outputs are probabilistic and context-dependent. You cannot write a simple pass/fail script for them.
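There is no established recipe for this yet, but one common pattern is to replace exact-match assertions with business-rule constraints checked over repeated samples. Here is a minimal sketch of that idea; `suggest_vendor` is a hypothetical stand-in for the ERP's AI recommendation call, and the vendor list and confidence threshold are illustrative:

```python
# Sketch: constraint-based checks for a probabilistic AI feature.
# `suggest_vendor` is a hypothetical stand-in for an ERP's AI
# recommendation call -- replace it with your real integration.

APPROVED_VENDORS = {"ACME Corp", "Globex", "Initech"}

def suggest_vendor(purchase_order: dict) -> dict:
    """Placeholder for the AI feature under test."""
    return {"vendor": "Globex", "confidence": 0.87}

def check_constraints(suggestion: dict) -> bool:
    # Business-rule constraints replace exact-match assertions: the
    # suggested vendor must be on the approved list, and the model
    # must report at least moderate confidence.
    return (suggestion["vendor"] in APPROVED_VENDORS
            and suggestion["confidence"] >= 0.7)

def pass_rate(po: dict, runs: int = 20) -> float:
    # Because the output is probabilistic, sample it repeatedly and
    # score the fraction of runs that satisfy every constraint.
    results = [check_constraints(suggest_vendor(po)) for _ in range(runs)]
    return sum(results) / runs

rate = pass_rate({"item": "laptops", "amount": 42_000})
print(f"constraint pass rate: {rate:.0%}")  # gate on a threshold, e.g. >= 90%
```

The hard part is not the loop; it is choosing the constraints and the pass-rate threshold. Both come from business knowledge, which is exactly why this role grows out of functional testing rather than ML engineering.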
Best candidate: A senior functional tester with deep domain knowledge, trained in AI evaluation techniques. Hiring a pure ML engineer will not work because they will not know your business rules. Growing a domain expert into this role is the better path.
Try this now: Identify which AI features your ERP vendor has released or announced. Assign a senior functional tester to test one of them for accuracy and edge cases. That exercise alone will reveal how different this testing is.
Impact: Post-release defects in AI-assisted processes go down, and compliance and financial risk drop with them.
3. Agentic Test Automation Architect
Your automation suite has thousands of scripts. A significant percentage break every release, and maintenance eats a large chunk of your automation team’s capacity. You keep adding scripts, but the maintenance burden grows faster than the coverage gains. The Malaysian Software Testing Board’s 2026-2030 report describes this shift as moving from “manual scribe to strategic orchestrator,” and that is exactly what this role does.
Instead of writing and maintaining scripts, this person designs and orchestrates AI testing agents that autonomously explore your applications, generate test cases, execute them, and self-heal when the UI changes. Their job is no longer “I write automation” but “I direct AI that does automation.”
Agentic testing tools are maturing fast, and someone on your team needs to be the expert on where they work, where they do not, and how to deploy them in complex enterprise environments with dozens of interconnected modules.
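To make the shift from "I write automation" to "I direct AI that does automation" concrete, here is a minimal sketch of an orchestration loop. `TestAgent` is a hypothetical interface, not any real tool's API; the point is that the architect's code configures agents and reviews what they self-healed, instead of encoding every test step by hand:

```python
# Sketch of directing agents rather than writing scripts.
# `TestAgent` is a hypothetical interface for illustration only.
from dataclasses import dataclass, field

@dataclass
class TestAgent:
    module: str
    healed_locators: list = field(default_factory=list)

    def generate_tests(self) -> list:
        # A real agent would explore the app; here we stub the output.
        return [f"{self.module}::create", f"{self.module}::approve"]

    def run(self, test: str) -> str:
        # A real agent retries with a repaired locator when the UI
        # changes; we record each heal so a human can review it later.
        if "approve" in test:
            self.healed_locators.append(test)
        return "passed"

def orchestrate(modules: list) -> dict:
    report = {"passed": 0, "healed": []}
    for module in modules:
        agent = TestAgent(module)
        for test in agent.generate_tests():
            if agent.run(test) == "passed":
                report["passed"] += 1
        report["healed"].extend(agent.healed_locators)
    return report

print(orchestrate(["procurement", "payroll"]))
```

Notice where the human effort goes in this model: deciding which modules to point agents at, and auditing the `healed` list so self-repairs do not silently mask real regressions.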
Who fits: Your best SDET or automation architect, ideally the one who is already frustrated with script maintenance. They need strong architectural thinking and a willingness to move beyond the “I code everything” mindset.
Run this pilot: Assign one SDET to test one agentic testing tool on a single module for 30 days. Have them document what the tool handles well, where it fails, and how much maintenance time it saves. That pilot becomes your business case.
What it changes: Test maintenance cost drops and automation ROI goes up, because your automation investment finally starts compounding instead of plateauing.
4. Quality Intelligence Analyst
Your QA team runs thousands of test cases every release, but you cannot answer a basic question: are we testing the right things? There is no data-driven way to predict where defects will cluster, identify which tests are redundant, or prove that your testing effort correlates with actual quality improvement. The Quality Intelligence Analyst solves this by treating QA as a data problem.
They mine defect trends, CI/CD history, production incident logs, and test execution data to predict risk areas, eliminate redundant tests, and focus effort where it matters most.
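As a flavor of what "treating QA as a data problem" looks like in practice, here is a minimal sketch that ranks modules by recency-weighted defect counts. The records and the weighting scheme are illustrative assumptions; real input would come from your defect tracker's export or API:

```python
# Sketch: rank modules by recency-weighted defect counts so test
# effort follows risk. Records below are illustrative; pull real
# ones from your defect tracker.
from collections import Counter
from datetime import date

defects = [
    {"module": "order-to-cash", "found": date(2026, 11, 3)},
    {"module": "order-to-cash", "found": date(2026, 10, 21)},
    {"module": "procure-to-pay", "found": date(2026, 6, 2)},
    {"module": "payroll", "found": date(2026, 11, 10)},
    {"module": "order-to-cash", "found": date(2026, 5, 14)},
]

def risk_ranking(defects, today=date(2026, 12, 1)):
    # Weight each defect by recency: a defect found last month says
    # more about current risk than one found six months ago.
    scores = Counter()
    for d in defects:
        age_days = (today - d["found"]).days
        scores[d["module"]] += 1.0 / (1 + age_days / 30)
    return scores.most_common()

for module, score in risk_ranking(defects):
    print(f"{module}: {score:.2f}")
```

Even a toy version like this answers a question most teams cannot answer today: which module deserves the next hour of testing effort. Swapping the decay function or adding severity weights is tuning; having any ranking at all is the step change.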
A recent TestRail survey found that 34% of QA professionals are already spending more time on high-value strategic work because of AI. But 36% reported no meaningful change to their role at all. The ones seeing results are actually analyzing the data. The rest are just collecting it.
Where to find this person: This one often comes from outside traditional QA. Look for someone with an analytics or data engineering background who can learn your domain. A senior QA engineer with strong analytical skills and SQL proficiency can also grow into this, but they will need support.
Release impact: You find more defects with fewer tests, because effort goes where risk actually lives instead of being spread evenly across everything.
5. Quality Platform Engineer (QAOps)
Ask around your organization: "we are waiting for the test environment" probably shows up in standups more than anyone wants to admit. Product teams need QA to set up environments, provision test data, and configure pipelines before they can test anything. And that makes QA a bottleneck, because the setup takes longer than the actual testing.
This role fixes that by building infrastructure that teams can use on their own. They create test environments that teams can spin up without filing a ticket, automate the test data pipelines so nobody waits days for data, and build dashboards that give everyone visibility into quality metrics. In large enterprises running multiple ERP modules, this eliminates one of the biggest hidden time-sinks in the release cycle.
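One concrete slice of that infrastructure is the test data pipeline. Here is a minimal sketch of deterministic masking, with illustrative field names; a real pipeline would read from a sanitized replica rather than inline records:

```python
# Sketch: deterministic masking for self-service test data.
# Field names and records are illustrative.
import hashlib

employees = [
    {"id": 1, "name": "Jane Doe", "salary": 82000, "dept": "Finance"},
    {"id": 2, "name": "John Roe", "salary": 91000, "dept": "IT"},
]

def mask(record: dict) -> dict:
    # Deterministic masking keeps referential integrity across runs
    # (the same input name always maps to the same token) while
    # removing personally identifiable values. Salaries are coarsened
    # to the nearest 10k so individuals cannot be re-identified.
    token = hashlib.sha256(record["name"].encode()).hexdigest()[:8]
    return {**record,
            "name": f"user-{token}",
            "salary": round(record["salary"], -4)}

test_data = [mask(e) for e in employees]
for row in test_data:
    print(row)
```

The determinism matters more than it looks: when two modules reference the same person, both get the same masked token, so cross-module test flows still join correctly without anyone handling real names.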
Who builds this: Someone at the intersection of QA and DevOps. Either a DevOps engineer who understands testing workflows, or a senior QA automation engineer with infrastructure skills. This can start as a shared responsibility with your DevOps team.
What it unlocks: Environment setup goes from days to minutes, and QA throughput goes up because teams stop waiting around for infrastructure.
6. Compliance Automation Lead
Every audit cycle, your team spends weeks manually gathering evidence that tests were run, requirements were traced, and defects were resolved. Regulatory requirements keep multiplying, and your compliance testing still runs as a separate manual cycle that delays every release. The Compliance Automation Lead changes that by embedding regulatory requirements directly into the pipeline.
Policy-as-code, automated traceability, and real-time compliance dashboards mean that compliance evidence gets generated automatically as testing happens, instead of becoming a last-minute scramble. In financial services or manufacturing with SOX obligations, the cost of not having this shows up directly in audit preparation weeks and delayed go-lives.
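Policy-as-code can start very small. Here is a minimal sketch of a pipeline gate that fails when any requirement lacks a passing, traced test; the IDs and record shapes are illustrative stand-ins for your requirements and results systems:

```python
# Sketch of a policy-as-code gate: fail the pipeline when any
# requirement lacks a passing, traced test. IDs are illustrative.
requirements = ["REQ-101", "REQ-102", "REQ-103"]
test_results = [
    {"test": "T-1", "covers": "REQ-101", "status": "passed"},
    {"test": "T-2", "covers": "REQ-102", "status": "passed"},
    {"test": "T-3", "covers": "REQ-102", "status": "failed"},
]

def coverage_gaps(requirements, test_results):
    # A requirement counts as covered only by a *passing* test, so
    # the gap list doubles as audit evidence of what is untraced.
    covered = {r["covers"] for r in test_results if r["status"] == "passed"}
    return [req for req in requirements if req not in covered]

gaps = coverage_gaps(requirements, test_results)
if gaps:
    print(f"compliance gate FAILED, untraced requirements: {gaps}")
else:
    print("compliance gate passed")
```

Run on every pipeline execution, a check like this inverts the audit problem: instead of reconstructing traceability evidence for three weeks before the audit, you accumulate it automatically on every build.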
Who already does this informally: A senior QA lead who owns compliance testing today, paired with automation skills. In regulated industries, this person probably already exists on your team. They just do not have the formal mandate or the automation resources to do it properly.
Bottom-line impact: Audit preparation shrinks from weeks to days, and compliance stops being the reason your releases get delayed.
7. Continuous Quality Engineer
Your QA team validates everything before release, and you still get production incidents. Users report issues that your test suite did not catch because pre-production testing simply cannot replicate real user behavior, real data volumes, and real infrastructure conditions at the same time. The Continuous Quality Engineer closes that gap by extending QA into the live environment.
They use production telemetry, synthetic monitoring, and error budgets to validate quality in live systems. For ERP environments, think about monitoring actual user journeys through procure-to-pay or order-to-cash flows in production, and catching degradation before users report it.
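Error budgets tend to be the least familiar piece for QA teams, and the math behind them is simple. A minimal sketch, with an illustrative 99.5% objective and inline results standing in for what a monitoring system would report:

```python
# Sketch: error-budget math over synthetic journey results. The
# 99.5% objective and the inline results are illustrative; real
# numbers would come from your monitoring system.
SLO = 0.995  # target success rate for, say, the order-to-cash journey

def error_budget_remaining(results: list, slo: float = SLO) -> float:
    # Budget = failures the SLO allows in this window; remaining =
    # the share of that budget not yet consumed.
    allowed_failures = len(results) * (1 - slo)
    actual_failures = results.count(False)
    if allowed_failures == 0:
        return 0.0
    return max(0.0, 1 - actual_failures / allowed_failures)

# 1,000 synthetic runs with 3 failures: the SLO allows 5 failures,
# so 40% of the budget remains.
runs = [True] * 997 + [False] * 3
print(f"error budget remaining: {error_budget_remaining(runs):.0%}")
```

The number is less important than the conversation it forces: when the remaining budget approaches zero, the team slows feature releases and fixes reliability, which is quality engineering operating on live systems rather than pre-release gates.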
How to grow this capability: Look for a senior QA engineer with an interest in production systems, or an SRE who wants to bring quality thinking into monitoring. Very few people today have both QA depth and SRE skills, so expect 12 to 18 months to develop this rather than hiring someone ready-made.
What it catches: Production issues get detected before users report them, closing the distance between “we tested everything” and “users are still finding problems.”
Give Your People the Time to Transition, Not Just the Title
You might now be wondering whether you need to go on a hiring spree. You probably do not. Most of these roles are best filled by people already on your team, and you have already seen who fits where in each role description above.
The harder challenge is giving your people the time and space to build these capabilities while they are still delivering on current release commitments. The World Quality Report 2025-26 found that 58% of enterprises are already upskilling QA teams in AI tools. But most of them are doing it as a side project on top of full workloads, and that is why it stalls. You cannot ask someone to become your AI Test Strategist while they are still running 100% of their current test execution responsibilities.
An old colleague of mine who leads QA at a manufacturing company transitioned their most experienced SAP functional tester into an AI output validation role. She had 12 years of deep domain knowledge in procure-to-pay and order-to-cash. When SAP started shipping AI-driven suggestions, she was the only person on the team who could tell whether the AI’s output made business sense.
The transition took about 2 months of focused training on AI evaluation techniques. Teaching someone AI skills takes weeks. Building the kind of domain expertise she had takes years, and you cannot shortcut that. That is why growing your own people into these roles works better than hiring from outside.
But the reason it actually worked is that her manager freed up 60% of her existing workload for those 2 months. Without that, she would still be stuck doing the same job with an AI course bookmarked in her browser.
An 18-Month Roadmap for Building the New Team
So you need to free up capacity, build new skills, and keep current releases running at the same time. That means you cannot do this all at once. You need a sequence. The organizations getting this right are adding one role at a time, proving the value, and then expanding. And the ones doing nothing? Their best QA people are already updating their LinkedIn profiles, because they can see the companies that are investing in these roles, and they want to work there.
Months 1 to 3: Audit Your Team and Pick Your First Role to Pilot
Start with a single focused session where you audit your existing team against the 7 roles. For each one, ask: do we have this capability today, even informally? If yes, who owns it and is it working? If no, what is that gap costing us? That session gives you the map.
Then pick one role to pilot. Match it to your biggest pain point using the decision framework in the next section. Assign a senior person to spend part of their time building the capability. Give them a clear question to answer: “Can agentic tools reduce our maintenance burden by 30%?” or “What defect patterns can we find in our last 4 releases?” Start one 30-day tool pilot alongside this. Real data beats theoretical planning every time.
Months 4 to 9: Make the First Roles Official and Set Your Baselines
Formalize the first role with a real title, a defined scope, and dedicated time allocation. Then add a second role from a different layer. If your first role was in the execution layer, pick something from strategy or platform next. This gives you coverage across the team, not just depth in one area.
During this phase, start upskilling the broader team. Internal training on AI testing concepts does not need to be elaborate. A monthly lunch-and-learn where someone demonstrates an AI testing tool or walks through a defect data analysis goes a long way. Also start measuring the things each role is supposed to improve. How long do releases take today? How much time goes into test maintenance? How long do teams wait for environments? You cannot prove improvement without knowing where you started.
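That baseline does not need a BI project. A small script over exported release records is enough to start; a minimal sketch with illustrative fields (real data would come from your CI/CD or project-tracking exports):

```python
# Sketch: compute a baseline from release records so later
# improvements can be proven. Records below are illustrative.
from datetime import date
from statistics import mean

releases = [
    {"start": date(2026, 1, 5), "ship": date(2026, 1, 26), "maintenance_hours": 120},
    {"start": date(2026, 2, 2), "ship": date(2026, 2, 20), "maintenance_hours": 135},
    {"start": date(2026, 3, 2), "ship": date(2026, 3, 27), "maintenance_hours": 150},
]

cycle_days = [(r["ship"] - r["start"]).days for r in releases]
baseline = {
    "avg_cycle_days": mean(cycle_days),
    "avg_maintenance_hours": mean(r["maintenance_hours"] for r in releases),
}
print(baseline)
```

Ten minutes of scripting now buys you the "before" number that every role's business case will need in months 10 to 18.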
Months 10 to 18: Expand the Roles, Formalize the Org Chart, and Build Career Paths
Third and fourth roles go live. Start measuring outcome improvements against baselines. Adjust your org chart formally by updating reporting lines and creating the 3-layer structure. Then evaluate which of the remaining roles are actually needed based on your specific pain points. Not all 7 are required for every enterprise.
Build career paths during this phase. Show existing team members the specific progression from their current role to an evolved role, with concrete skill milestones along the way. Gartner analysts have noted that across the industry, jobs are being redesigned and consolidated, and entry-level hiring is slowing down. QA professionals who adapt into these new roles will be more valuable than they are today. But they need a clear path to follow.
Which Role to Build First
Here is something most people get wrong: they start with the most technically exciting role instead of the one that solves their most expensive problem. The Agentic Automation Architect sounds impressive, but if your biggest bottleneck is audit preparation eating 3 weeks every quarter, the Compliance Automation Lead will deliver ROI faster than any other role on this list.
So ask yourself one question: what is costing us the most time or money right now in our release process?
If AI tools are adopted but releases are still slow, start with the AI Test Strategist. If your ERP vendor is shipping AI features that nobody tests properly, start with the AI Output QA Analyst. If script maintenance is consuming your automation team, start with the Agentic Test Automation Architect. If you cannot tell whether you are testing the right things, start with the Quality Intelligence Analyst.
If teams wait days for test environments, the Quality Platform Engineer is your first move. If audits delay every release, the Compliance Automation Lead unblocks you fastest. And if production incidents keep showing up despite thorough pre-release testing, the Continuous Quality Engineer closes that gap.
Pick one, build the capability over a quarter, and prove the value before you expand to the next.
Everything You Need Is Already on Your Team. Start Building.
The QA director from the beginning of this article ended up filling her first new role by promoting her most senior QA lead. She gave him 60 days to run pilots with 3 AI testing tools. His job was simple: recommend where AI should and should not be used in their release process. His report reshaped how the entire team works. “The role paid for itself in the first quarter,” she said. “I just wish I’d started sooner.”
You do not need all 7 roles tomorrow. But the ones you need, you will wish you had started sooner.