Why Most ERP Role Migrations Fail Their First Audit
Organizations spend hundreds of thousands of dollars on the security workstream of an ERP migration. They hire specialized consultants, run extensive mapping exercises, and invest weeks in user acceptance testing. Then, within 12 months of go-live, the external auditors arrive and issue findings. The role design that was supposed to be clean turns out to have problems that should have been caught during the migration itself.
This pattern repeats often enough that it warrants examination. The failures aren't typically caused by incompetent teams or insufficient budgets. They stem from structural problems in how most migration security workstreams are organized.
The Documentation Gap
Auditors evaluate access controls against a specific standard: can you demonstrate why each user has the access they have, and can you prove the assignments don't create unacceptable risks? Meeting this standard requires documentation that traces from business requirements to role design to individual user assignments.
Most migration teams produce some documentation, but it's rarely structured for audit consumption. The mapping work lives in spreadsheets that track source-to-target assignments without recording the rationale behind each decision. When an auditor asks "why does this user have access to both vendor master maintenance and payment processing?", the answer often comes down to "that's what the mapping team decided" rather than a documented justification tied to the user's job function.
The problem is compounded by team turnover. The consultants who made the mapping decisions during the migration may have rolled off the engagement by the time the first audit occurs. Their institutional knowledge leaves with them, and the remaining team is left defending decisions they didn't make and can't fully explain.
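One way to close this gap is to capture the rationale as structured data at the moment each mapping decision is made, rather than reconstructing it later. The sketch below is illustrative only: the field names, role names, and user IDs are hypothetical, not taken from any particular ERP or tool.

```python
from dataclasses import dataclass, asdict
from datetime import date

@dataclass
class MappingDecision:
    """One source-to-target role mapping, with the audit trail most spreadsheets omit."""
    user_id: str
    source_role: str      # role in the legacy system
    target_role: str      # role in the target ERP
    job_function: str     # business function the access supports
    rationale: str        # documented justification, tied to the job function
    approved_by: str      # accountable business reviewer, not just the mapping team
    decided_on: date

def audit_record(decision: MappingDecision) -> dict:
    """Flatten a decision into a dict suitable for export to audit workpapers."""
    rec = asdict(decision)
    rec["decided_on"] = decision.decided_on.isoformat()
    return rec

# Hypothetical example: the record answers the auditor's "why" question directly.
decision = MappingDecision(
    user_id="jsmith",
    source_role="ZAP_CLERK_LEGACY",
    target_role="AP_INVOICE_PROCESSOR",
    job_function="Accounts Payable Clerk",
    rationale="Processes vendor invoices; payment release excluded per SoD policy",
    approved_by="AP Manager",
    decided_on=date(2024, 3, 1),
)
```

Because each record names an accountable approver and a job-function-based rationale, the documentation survives consultant turnover: the answer to "why does this user have this access?" is in the export, not in someone's memory.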
Scope Gaps in SoD Analysis
Most migration teams run some form of segregation of duties analysis before go-live. But the scope of that analysis is often too narrow. Common gaps include cross-application SoD risks (where the conflict spans two different systems), custom transaction codes that aren't covered by the standard ruleset, and indirect access paths through authorization objects that the analysis tool doesn't evaluate.
These gaps don't surface during the migration because the team is focused on getting the obvious conflicts resolved. They surface during the audit because auditors take a broader view of what constitutes an SoD risk and may apply a different or more comprehensive ruleset than the migration team used.
The fix isn't simply to use a bigger ruleset. It's to ensure that the ruleset used during migration aligns with what the auditors will evaluate against, and to document the scope of the analysis so that any known limitations are transparent rather than discovered as surprises.
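The core of an SoD check is simple enough to sketch: a ruleset of conflicting function pairs, evaluated against each user's combined access across every in-scope system. The rules and function names below are made up for illustration; a real ruleset would be agreed with the auditors and would also cover custom transaction codes and authorization-object-level access paths.

```python
# Hypothetical ruleset: each rule pairs two conflicting business functions.
# Note that a pair may span two different applications, so the user's
# entitlements must be gathered from all in-scope systems before checking.
RULESET = [
    ("vendor_master_maintenance", "payment_processing"),
    ("create_purchase_order", "approve_purchase_order"),
]

def sod_conflicts(user_functions: set[str], ruleset=RULESET) -> list[tuple[str, str]]:
    """Return every rule where both conflicting functions appear in the user's access."""
    return [(a, b) for a, b in ruleset if a in user_functions and b in user_functions]

# Combined access for one user, aggregated across systems.
access = {"vendor_master_maintenance", "payment_processing", "display_reports"}
conflicts = sod_conflicts(access)
# → [("vendor_master_maintenance", "payment_processing")]
```

The analysis is only as good as the ruleset and the completeness of the `user_functions` input, which is exactly where the scope gaps described above arise: functions the ruleset doesn't name, and access paths the aggregation step doesn't see, are invisible to the check.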
The Over-Provisioning Problem
Under time pressure, migration teams consistently err on the side of giving users more access rather than less. The reasoning is understandable: an under-provisioned user can't do their job on day one, which creates visible business disruption. An over-provisioned user creates a security risk that won't be visible until the next access review or audit.
This asymmetric incentive structure means that the target environment typically has more access than it should. Users receive broad roles "to be cleaned up later," exception requests get approved without thorough review, and the principle of least privilege gets sacrificed to the principle of keeping the business running.
Auditors see this pattern clearly. When they sample user access and find systematic over-provisioning, the finding isn't about any individual user. It's about the control environment: the organization's migration process didn't produce a properly right-sized access model.
Testing That Doesn't Test Enough
User acceptance testing during a migration typically focuses on whether users can complete their core business processes. Can the AP clerk process an invoice? Can the procurement buyer create a purchase order? If the answer is yes, the test passes.
What UAT rarely tests is whether users can do things they shouldn't be able to do. Negative testing, where you verify that a user cannot access functions outside their role, is time-consuming and requires a clear definition of what each user shouldn't have access to. Most migration timelines don't allocate sufficient time for this type of testing, so it gets abbreviated or skipped.
The result is a go-live where positive access works correctly but negative access control hasn't been validated. The audit team will test both.
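A negative test can be stated almost as compactly as a positive one, provided each role's "should not have" list has been defined. The sketch below uses invented role and transaction names; the point is the shape of the check, not any specific ERP's authorization model.

```python
def can_execute(user_roles: set[str], transaction: str,
                role_grants: dict[str, set[str]]) -> bool:
    """True if any of the user's roles grants the transaction."""
    return any(transaction in role_grants.get(r, set()) for r in user_roles)

# Hypothetical role-to-transaction grants in the target system.
ROLE_GRANTS = {
    "AP_INVOICE_PROCESSOR": {"enter_invoice", "display_vendor"},
    "TREASURY_PAYMENTS": {"release_payment"},
}

def negative_test(user_roles: set[str], forbidden: list[str],
                  role_grants=ROLE_GRANTS) -> list[str]:
    """Return forbidden transactions the user can actually reach; pass means empty."""
    return [t for t in forbidden if can_execute(user_roles, t, role_grants)]

# The AP clerk must NOT be able to release payments or change bank details.
violations = negative_test({"AP_INVOICE_PROCESSOR"},
                           ["release_payment", "change_bank_details"])
# → [] (the test passes)
```

The hard part is not the check itself but the `forbidden` list: defining, per job function, what a user must not be able to do. That definition is the same artifact the SoD analysis and the auditors rely on, which is why skipping negative testing tends to mean the definition was never written down.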
Building for Audit Readiness
The migrations that survive their first audit share a few common characteristics. They start with a clear, documented methodology for deriving user access in the target system. They use a comprehensive SoD ruleset that's been validated against the auditors' expectations. They produce documentation as a byproduct of the mapping process rather than trying to create it retroactively. And they invest in negative testing before go-live.
None of these practices require exotic tools or unlimited budgets. They require a migration approach that treats audit readiness as a design constraint from the beginning rather than a box to check at the end.
The organizations that get this right tend to spend less on post-go-live remediation than those that don't. The upfront investment in documentation, comprehensive SoD analysis, and proper testing pays for itself through avoided audit findings and the remediation costs that follow them.
See Provisum in action
Automated persona mapping, real-time SoD analysis, and audit-ready documentation for your next ERP migration.
Request a demo