
Azure Governance Starter – Part 3: Mastering the Adopt Phase with Migrations, Modernizations, and Cloud-Native Solutions

The Adopt phase of the Microsoft Cloud Adoption Framework marks the transition from concept and planning to the actual use of the cloud. It includes migrating existing systems, modernizing applications, and developing cloud-native solutions.

Microsoft Azure
Cloud
Governance

From Strategy to Actual Cloud Usage

The phases covered so far, Strategy, Plan, and Ready, form the foundation for successful cloud adoption. But real value only emerges once workloads are actually operated in the cloud. This is where the Adopt phase of the Microsoft Cloud Adoption Framework (CAF) comes into play.

While the initial steps are often shaped by concepts, architectures, and platform setup, the Adopt phase is about practical implementation: migrating existing systems, modernizing applications, and building new cloud-native solutions.

Governance is no less important here than in the earlier phases. It ensures that the rapid introduction of workloads does not lead to security risks, cost explosions, or architectural sprawl.

Positioning the Adopt Phase

The previous phases of the Cloud Adoption Framework have laid the groundwork on which actual cloud usage can build. In the Strategy Phase, motivation, goals, and business value were defined. Building on that, the Plan Phase refined the organizational and technical framework and outlined the future operating model. With the Ready Phase, the basic Azure configuration was finally created through landing zones, governance policies, and core platform architecture.

With that, the prerequisites are in place to enter the Adopt Phase, in which real workloads are migrated, modernized, or newly developed in the cloud.

The Adopt Phase is divided into three main areas of action:

  • Migration of workloads: Step-by-step transfer of existing systems from data centers or other clouds to Azure.
  • Modernization of existing applications: Adaptation or redevelopment to increase agility, scalability, and efficiency.
  • Development of cloud-native solutions: Leveraging modern Azure services to deliver innovative products and features faster.

These three streams often run in parallel in practice. While one team migrates workloads, another modernizes applications, and new cloud-native solutions emerge simultaneously. The key is that these activities are not carried out in isolation but are coordinated and supported by governance, automation, and a Cloud Center of Excellence (CCoE).

The following diagram positions the Adopt Phase in the context of the entire Cloud Adoption Framework and shows which topics we will cover in this blog post:

Overview of the Adopt Phase in the Cloud Adoption Framework

With this understanding in mind, we can now turn to the first area of action in the Adopt Phase.

Migration of Workloads to Azure

The first area of action in the Adopt Phase is the migration of existing systems from the data center or from other clouds to Azure. The Microsoft documentation provides a very useful diagram of the individual steps of a workload migration, from which the steps of this area of action can be derived:

CAF Migrate Steps

Migration Planning

A migration plan defines the sequence, methods, and timing for transferring workloads to Azure. Strategic decisions from the Strategy, Plan, and Ready phases are translated into concrete deployment sequences.

Using the Results from the Strategy, Plan, and Ready Phases

A central basis for the Adopt Phase is the set of results from the previous steps. Does the team have the necessary Azure skills? What gaps still exist? Here, the insights and preliminary work from the Strategy, Plan, and Ready phases, such as the Cloud Strategy Paper, the Cloud Journey Planning Document, or the results of the Cloud Readiness Validation, are helpful.

Defining Methods and Technologies for Data Migration and Connectivity

It must then be determined how data will be transferred from the current location to the Azure environment. For this data migration path, there are different solutions to enable secure, fast, and cost-effective transfer.

  • ExpressRoute: Use when a private, dedicated connection to Azure exists or is planned. Advantages: highest security, high bandwidth, low latency. Limitations: requires setup and additional costs.
  • VPN: Use when ExpressRoute is not available but secure transfer is needed. Advantages: encrypted tunnel over the Internet, cheaper than ExpressRoute. Limitations: lower speed, requires VPN gateway setup.
  • Azure Data Box: Use for very large data volumes that need to be migrated offline. Advantages: offloads the network, suitable for bulk data. Limitations: shipping time and longer overall project duration.
  • Public Internet: Use for less sensitive data or when no other option is available. Advantages: available everywhere and usable immediately. Limitations: least secure, uses existing Internet bandwidth, slower.
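To support the choice between these paths, a back-of-the-envelope estimate helps: if transferring the data online would take longer than shipping a Data Box, offline migration is usually the better option. The following sketch makes that comparison concrete; all numbers, including the assumed two-week shipping round trip and 80% link utilization, are illustrative assumptions.

```python
def network_transfer_days(data_tb: float, bandwidth_mbps: float,
                          utilization: float = 0.8) -> float:
    """Estimate days needed to transfer data over a network link.

    data_tb: data volume in terabytes; bandwidth_mbps: link speed;
    utilization: assumed fraction of the link available for migration.
    """
    bits = data_tb * 8 * 10**12                      # TB -> bits (decimal units)
    seconds = bits / (bandwidth_mbps * 10**6 * utilization)
    return seconds / 86400


def prefer_data_box(data_tb: float, bandwidth_mbps: float,
                    shipping_days: float = 14) -> bool:
    """Rule of thumb: offline migration wins when the online transfer
    would take longer than the assumed Data Box shipping round trip."""
    return network_transfer_days(data_tb, bandwidth_mbps) > shipping_days
```

For example, 100 TB over a 100 Mbit/s link would take roughly four months online, clearly favoring a Data Box, while 1 TB over 1 Gbit/s finishes within hours.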

Defining the Order of Workloads to be Migrated

In the Plan Phase, we had already outlined initial dependencies, responsibilities, budget cycles, and migration waves in a cloud roadmap. Now it is necessary to consolidate this initial planning and define the sequence of workloads to be migrated. To do this, interconnected systems with strong coupling and dependencies are identified, and the system and application landscape from the Strategy Phase is extended with technical dependencies. The goal should be that dependencies can be explicitly named and categorized. In Microsoft’s documentation, dependency types are distinguished as direct and indirect dependencies, as well as business dependencies.

Tightly coupled systems must be migrated together, while loosely coupled workloads can follow in separate waves. This results in so-called Migration Waves, i.e., groups of interconnected applications and systems that are planned and implemented iteratively. Grouping workloads may, for example, involve the APIs used, databases, authentication services, or network connections. A proven approach is to start with less complex workloads and non-production environments to gain experience and practice processes before moving on to critical systems.
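The grouping rule above, tightly coupled systems migrate together while loosely coupled ones follow in separate waves, can be sketched as a connected-components problem over the dependency graph. This is a simplified model with hypothetical workload names; real wave planning also weighs criticality, business calendars, and team capacity.

```python
from collections import defaultdict


def migration_waves(dependencies: dict[str, set[str]]) -> list[set[str]]:
    """Group workloads into waves: systems connected through a dependency
    (in either direction) end up in the same wave."""
    # Build an undirected adjacency map: a dependency couples both sides.
    adjacency: dict[str, set[str]] = defaultdict(set)
    for workload, deps in dependencies.items():
        adjacency[workload]                    # ensure isolated workloads appear
        for dep in deps:
            adjacency[workload].add(dep)
            adjacency[dep].add(workload)

    waves: list[set[str]] = []
    seen: set[str] = set()
    for workload in adjacency:
        if workload in seen:
            continue
        # Depth-first search collects one connected component per wave.
        wave, stack = set(), [workload]
        while stack:
            node = stack.pop()
            if node in wave:
                continue
            wave.add(node)
            stack.extend(adjacency[node] - wave)
        seen |= wave
        waves.append(wave)
    # Smaller, less entangled waves first: practice before critical systems.
    return sorted(waves, key=len)
```

A standalone wiki would form its own early wave here, while a CRM system coupled to its database and web frontend would be scheduled together.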

Planning a Potential Parallel Operation

It is not always possible to move all systems to the cloud simultaneously. In many organizations, there are immovable dependencies, such as components that must remain in the source environment for regulatory, technical, or business reasons. The key is to consciously identify and transparently document them. Which systems remain, which interfaces exist, and which data flows need to be ensured during the transition period? Such split-environment scenarios place special demands on planning and governance. The goal should always be to keep the duration of a parallel operation as short as possible. If an immediate move is not feasible, clear transition plans with timelines, risk assessments, and dependencies are required. In some cases, it may even be advisable to deliberately postpone migration until additional systems can be moved together in a single wave.

Where a parallel operating phase is unavoidable, integration mechanisms such as API gateways, message queues, or data synchronization services can help. They ensure reliable communication, reduce latency, and at the same time enable compliance with security and regulatory requirements. In this way, continuous operations can be maintained while the remaining components are gradually migrated to Azure.

Defining and Prioritizing Workloads

The order in which workloads are later migrated has a decisive impact on speed, risk, and acceptance of cloud adoption. Proper prioritization ensures that resources are concentrated where they generate the greatest business value without taking unnecessary risks.

This requires business and technical details of each workload to be reviewed together with the responsible stakeholders. Aspects such as expected downtime, criticality, dependencies, and organizational responsibilities are factored into the evaluation. The basis is the overview documented in the migration acceptance plan, covering systems, owners, and business value from the Strategy, Plan, and Ready phases.

Workloads can be roughly divided into four categories:

  • Quick Wins: Systems with high business value but low effort. They are suitable as early candidates to make quick successes visible and build trust in the migration.
  • Strategic Investments: Critical applications with high business value and high effort. These require careful planning, extensive testing, and close coordination with business units.
  • Simple Candidates: Workloads with low business value and low effort. They can fill gaps between major migrations and help the team build additional routine.
  • Low-Priority Systems: Applications with low business value and high effort. These are either postponed or migrated only when external factors such as hardware or license cycles require it.

A sensible approach is to start with simple workloads and non-production environments to practice processes. After these initial experiences, business-critical systems should be migrated. In addition, individual complex applications can be tackled in early waves to make typical challenges visible in time. This creates a resilient roadmap that reduces risks and optimally leverages learning curves.
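The four categories can be expressed as a simple value/effort matrix. In practice both dimensions would be scored rather than reduced to "high"/"low"; this sketch only illustrates the mapping.

```python
def classify_workload(business_value: str, effort: str) -> str:
    """Map a workload onto the four prioritization categories.
    business_value and effort are 'high' or 'low' (an assumed
    simplification of what would normally be a scored assessment)."""
    matrix = {
        ("high", "low"): "Quick Win",
        ("high", "high"): "Strategic Investment",
        ("low", "low"): "Simple Candidate",
        ("low", "high"): "Low-Priority System",
    }
    return matrix[(business_value, effort)]
```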

Creating a Migration Schedule

In addition to content prioritization, every migration requires a concrete schedule that takes both technical and business requirements into account. A clearly structured timeline creates accountability, reduces risks, and enables effective resource management.

Key components of such a schedule include:

  • Defined start and end dates for each migration, supplemented with sufficient buffer time for testing, stabilization, and fixing unexpected issues. This helps absorb delays without jeopardizing the overall plan.
  • Alignment with business events, to avoid placing migrations during critical periods such as financial closings, product launches, or seasonal peaks. This alignment significantly increases acceptance and stakeholder trust.
  • Transparent progress tracking using project management and collaboration tools. This allows dependencies to be managed, milestones monitored, and necessary adjustments communicated early.

A detailed schedule not only makes the migration process more manageable but also transparent to all stakeholders. It strikes the necessary balance between ambitious goals and realistic feasibility, making it a central steering instrument in the Adopt Phase.
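The buffer and blackout rules above can be sketched as follows; the 30% buffer ratio and the example dates are assumptions for illustration, not fixed guidance.

```python
from datetime import date, timedelta


def plan_window(start: date, workdays: int,
                buffer_ratio: float = 0.3) -> tuple[date, date]:
    """Derive an end date that includes buffer time for testing,
    stabilization, and unexpected issues (buffer_ratio is an assumed
    planning margin)."""
    total = workdays + max(1, round(workdays * buffer_ratio))
    return start, start + timedelta(days=total)


def conflicts_with_blackout(window: tuple[date, date],
                            blackouts: list[tuple[date, date]]) -> bool:
    """True if the migration window overlaps a business-critical period
    such as a financial closing, product launch, or seasonal peak."""
    start, end = window
    return any(start <= b_end and b_start <= end for b_start, b_end in blackouts)
```

A wave planned against such blackout periods can then be shifted before it collides with, say, a quarter-end closing.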

Selecting Appropriate Migration Methods

In addition to timing, the choice of the right migration method is crucial for a smooth transition. The central question is how much downtime the system can tolerate and how critical it is to business operations. Basically, two approaches can be distinguished:

  1. Migration with planned downtime: This method is suitable for workloads that can withstand temporary interruption, such as development or test environments, or applications with defined maintenance windows. This approach is relatively simple since no continuous synchronization between source and target environments is required. It is important to document the acceptable duration of downtime in advance and to schedule migration deliberately during periods of low usage.

  2. Migration with near-zero downtime: For business-critical systems such as customer-facing applications, real-time transaction systems, or workloads with strict SLAs, a nearly interruption-free migration is required. This relies on continuous data replication and a carefully planned cutover process. A prerequisite is that the workload architecture supports replication and that network bandwidth allows real-time transfer. Before production, such procedures should be tested in a non-production environment to minimize risks.

The decision for the right method should therefore be made individually for each workload and always balance simplicity, speed, and business criticality. This allows risks to be controlled without unnecessarily investing resources in overly complex procedures.
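A minimal decision sketch for this per-workload choice might look as follows; the 30-minute downtime threshold is an assumed cut-off, not a fixed rule.

```python
def select_migration_method(max_downtime_minutes: int,
                            business_critical: bool,
                            supports_replication: bool) -> str:
    """Choose between the two approaches described above, based on the
    tolerated downtime and business criticality of the workload."""
    needs_near_zero = business_critical or max_downtime_minutes < 30
    if needs_near_zero and not supports_replication:
        # Architecture gap: replication is a prerequisite for near-zero downtime.
        return "remediate first: workload does not support replication"
    return ("near-zero downtime (continuous replication + cutover)"
            if needs_near_zero else "planned downtime window")
```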

Defining a Rollback Strategy for Problems

Even with the best planning, every migration is an intervention with risks. This makes a reliable rollback plan all the more important — one that allows a quick and controlled return to a stable baseline in the event of failure. Such a plan minimizes downtime, limits business impact, and strengthens trust in the overall migration process. A rollback plan typically includes the following core elements:

Clear definition of failure scenarios: Together with business units, workload owners, and operations teams, it should be defined what constitutes a failed migration. Typical criteria include failed health checks, unexpected performance drops, security issues, or failure to meet defined success metrics. Clear thresholds — such as CPU utilization, response times, or error rates — create transparency and consistency in decision-making.

Automated rollbacks: Rollbacks should not depend on manual intervention. Through CI/CD pipelines, rollbacks can be automated, such as redeploying a stable previous version if integrity tests fail.

Workload-specific instructions: Since not every environment is the same, precise instructions are needed for different scenarios. In Infrastructure-as-Code deployments, this may mean re-executing older templates; for applications, rolling back to a previous container image. Scripts, configuration snapshots, and IaC templates should always be part of the plan.

Regular testing: A rollback is only as good as its tested functionality. Rollback scenarios should be simulated in pre-production environments to identify gaps in permissions, automation, or dependencies. The goal is to reliably return the system to a known stable state.

Continuous improvement: After each use — whether migration or rollback — a short retrospective is recommended. Procedures, criteria, and automation are then adjusted so that the plan remains current and keeps pace with technical and organizational changes.
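The threshold-based failure criteria described above can be evaluated mechanically, for example as part of an automated post-cutover health check that feeds the rollback decision. Metric names and limits below are assumptions standing in for what the documented rollback plan would actually define.

```python
def should_roll_back(metrics: dict[str, float],
                     thresholds: dict[str, float]) -> tuple[bool, list[str]]:
    """Evaluate post-migration health metrics against the failure
    criteria agreed with workload owners. A metric missing from the
    measurements is treated as a violation (conservative default)."""
    violations = [name for name, limit in thresholds.items()
                  if metrics.get(name, float("inf")) > limit]
    return bool(violations), violations


# Illustrative thresholds: p95 response time, error rate, CPU utilization.
thresholds = {"p95_response_ms": 800, "error_rate_pct": 2.0, "cpu_pct": 90}
```

A CI/CD pipeline could call such a check after cutover and trigger the automated rollback when any criterion is violated.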

A well-thought-out rollback plan is therefore not just a safeguard but also a key governance artifact that demonstrates professionalism and significantly strengthens stakeholder confidence in cloud migration.

Preparing Workloads

Before workloads are actually migrated to Azure, they must be thoroughly prepared and checked for their cloud readiness. This preparation phase minimizes risks, increases the probability of success, and ensures that systems run stably, securely, and with high performance in the new environment. It builds directly on the findings from the Strategy, Plan, and Ready Phases and translates them into concrete technical measures.

Fixing Compatibility Issues

A common stumbling block in migrations is technical incompatibilities between the source environment and Azure. Not every operating system, driver, or configuration is supported in the cloud. If these problems are only discovered during migration, it leads to delays and, in the worst case, failures.

Ideally, during the Ready Phase, the Azure platform components were already provisioned — including subscriptions, management groups, policies, and core Azure resources such as App Service plans, virtual machines, VPN gateways, etc.

Together with the identified system and application landscape and the defined individual migration strategy for specific systems and applications, the known problems and Azure’s compatibility requirements can now be used to derive concrete steps for adjusting the systems. This is where the tasks captured during the first three phases (Strategy, Plan, and Ready) are put into practice.

Verifying Workload Functionality

Once the fundamental compatibility issues are resolved, the next step is the functionality check in Azure. Before the adjustments are approved, the following checkpoints should be considered:

  • Network connectivity: Communication between Azure services or external systems should be validated by reviewing Network Security Groups, routing tables, and DNS configurations, supported by tools such as Azure Network Watcher. In addition, connections to external APIs, databases, and external services should be monitored to detect problematic firewall rules.
  • Authentication and authorization: Test your application-specific login and authentication flows to ensure no unauthorized access to your applications is possible. This should be validated for both user and client-to-client authentication flows. Technically, app registrations, app roles, RBAC assignments, and group assignments may be relevant here.
  • Functional and integration tests: User acceptance tests (UAT) and regression tests verify whether workflows and business processes remain usable in Azure without changes.
  • Load and performance tests: With Azure Load Testing, workloads can be simulated under real load conditions. Results are compared with baselines from the source environment to identify bottlenecks early. Ideally, these baselines come from the results of the Strategy, Plan, and Ready Phases.
  • Stakeholder approval: Beyond technical testing, business units should be involved. Only if workloads deliver the expected business value are they truly ready for production.
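Parts of the connectivity checkpoint can be automated with a simple reachability probe run from the migrated workload's network position; the host names and ports below are placeholders. Such a probe complements, but does not replace, tools like Azure Network Watcher.

```python
import socket


def check_endpoint(host: str, port: int, timeout: float = 3.0) -> bool:
    """Verify that a TCP connection to a dependency (database, API,
    external service) can be established from this network position."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False


def connectivity_report(endpoints: list[tuple[str, int]]) -> dict[str, bool]:
    """Probe a list of (host, port) dependencies and return a pass/fail map,
    e.g. to surface blocked firewall rules or missing DNS entries early."""
    return {f"{host}:{port}": check_endpoint(host, port)
            for host, port in endpoints}
```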

Creating Reusable Infrastructure

After successful testing, the proven environment should be made reproducible. For this, the use of Infrastructure as Code (IaC) is perfectly suited. Instead of manually clicking resources in the Azure portal, the entire infrastructure is described with tools like Terraform, Bicep, or ARM templates and deployed automatically.

Advantages of this approach include:

  • Consistency: Every environment is based on the same templates, reducing errors.
  • Speed: New environments can be deployed in minutes instead of days.
  • Traceability: Code changes are versioned, reviewed, and documented.
  • Scalability: The same template can be reused for Dev, Test, and Prod.

IaC templates should be stored in Git repositories, versioned, and rolled out via CI/CD pipelines. This ensures that future changes are controlled, tested, and automated. Additionally, reusable infrastructure can already be developed in the Ready Phase, not only automating workload deployment but also all platform components of the Azure environment.
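The Dev/Test/Prod reuse idea can be illustrated with a shared base template plus per-environment overrides. The parameter names below are illustrative stand-ins for what a Bicep or Terraform module would actually expose.

```python
# Shared, hardened defaults that every environment starts from.
BASE_TEMPLATE = {
    "app_service_sku": "P1v3",
    "min_instances": 2,
    "diagnostics_enabled": True,
}

# Only the deltas per environment are maintained; prod stays on the defaults.
ENV_OVERRIDES = {
    "dev":  {"app_service_sku": "B1", "min_instances": 1},
    "test": {"app_service_sku": "S1", "min_instances": 1},
    "prod": {},
}


def render_environment(env: str) -> dict:
    """Specialize the shared template for one environment, keeping all
    environments consistent by construction."""
    if env not in ENV_OVERRIDES:
        raise ValueError(f"unknown environment: {env}")
    return {**BASE_TEMPLATE, **ENV_OVERRIDES[env]}
```

The same consistency argument applies to real IaC tooling: one parameterized module, several small parameter files, one pipeline.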

Deployment Documentation

Even though automation simplifies many tasks, structured documentation remains indispensable. It serves as a reference for operations teams, supports onboarding of new colleagues, and is crucial for quick responses in case of incidents.

A complete deployment documentation should include:

  • Configurations and dependencies (e.g., connection strings, service endpoints, security rules).
  • Step-by-step instructions for deployments, tests, and rollbacks.
  • Emergency and recovery plans, tested and updated regularly.

All documents should be made centrally available, for example in an internal wiki, SharePoint, or GitHub, and accessible to all relevant teams. This ensures that documentation remains up to date and relevant.

Migration Execution

After thorough preparation, the actual transfer of the workloads to the Azure Cloud follows. This step is complex because it involves not only technical aspects but also organizational, communication, and timing factors. The goal is to make the migration as smooth as possible with minimal disruption to business operations. To ensure this, the necessary steps were initiated during migration planning and workload preparation. In this chapter, we take a closer look at the tasks and areas of action during the migration.

Preparing Stakeholders for Migration

The success of a migration does not depend solely on tools and technologies, but above all on the clear coordination of the people and teams involved. Only when all stakeholders know their roles, resources are allocated, and communication channels are in place can migration be carried out with minimal disruption.

A structured preparation process includes three key elements:

  • Transparent communication: The detailed migration plan from preparation and previous CAF steps, which clearly documents the timeline, expected service impacts, responsibilities, and contingency plans, is now used. Share this plan early with all stakeholders and include the contact information of the responsible team members. This transparency prevents misunderstandings and ensures a consistent set of expectations across the organization.

  • Ensuring technical support: Qualified experts must be available at all times during the migration period. Assign responsible contacts for each critical workload and define binding escalation paths, including clear response times for critical issues. This ensures that problems can be addressed and resolved immediately.

  • Readiness check before starting: A joint preparation meeting with all support teams is essential to finalize roles, access rights, and monitoring procedures. This session should also review the rollback criteria and procedures to avoid uncertainty in case of an incident.

This stakeholder preparation creates the organizational framework for a smoother execution of the migration and significantly reduces the risk of delays or uncoordinated decisions.

Introducing a Change Freeze

One of the biggest risk factors in migrations is unplanned changes to source systems. Even small modifications — such as a patch, a new deployment, or an altered database table — can have serious consequences. The result can be inconsistent data states, faulty tests, or even the complete failure of the migration window. To prevent this, many organizations enforce a Change Freeze, a controlled freeze of all changes during the critical migration phase.

An effective Change Freeze includes several layers:

  • Automated control mechanisms: Instead of relying solely on organizational agreements, change freezes should be technically enforced. Deployment pipelines can be configured so that no builds or releases reach the source environment during the freeze period. Approval gates or manual approval steps in CI/CD systems ensure that even accidental deployments are blocked immediately. This technical safeguard creates confidence and relieves the migration teams.

  • Defined emergency exceptions: Completely rigid change freezes are not always possible, as security-critical patches or severe issues cannot always be postponed. Therefore, clear emergency processes and rules are required.

  • Transparency through monitoring and auditing: A freeze only works if it is consistently monitored. Configuration management tools or Azure-native monitoring solutions can report changes to files, deployments, or database schemas in real time. Alerts immediately notify responsible teams of violations. This ensures not only compliance but also traceability.

A change freeze is therefore much more than an organizational note. Properly implemented, it is a governance mechanism that guarantees stability, minimizes risks, and provides the necessary calm so that the migration can proceed without surprises.
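The automated control mechanism can be as simple as a gate evaluated by the pipeline before every deployment. A minimal sketch, assuming the freeze window and an emergency-approval flag are supplied by the pipeline configuration:

```python
from datetime import datetime


def deployment_allowed(now: datetime,
                       freeze_start: datetime,
                       freeze_end: datetime,
                       emergency_approved: bool = False) -> bool:
    """Gate for a CI/CD pipeline: block deployments to the source
    environment during the freeze window unless an emergency exception
    (e.g. a security-critical patch) has been explicitly approved."""
    in_freeze = freeze_start <= now < freeze_end
    return (not in_freeze) or emergency_approved
```

Real pipelines would typically implement this with built-in approval gates; the sketch only shows the decision logic such a gate encodes.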

Finalizing the Production Environment

Before a workload is fully migrated to Azure, the production environment must be completely prepared and secured. This step is crucial to ensure consistency, security, and operational reliability. A properly configured target environment reduces the risk of configuration deviations, provides a stable foundation for migration, and increases the confidence of business units and operations teams in the overall process.

A core principle for provisioning production resources, as previously mentioned, is the consistent use of Infrastructure as Code (IaC). Manual configurations through the Azure Portal are error-prone and difficult to reproduce.

In addition, the production environment must apply production-grade configurations, which are generally stricter than in development or test environments. This includes, above all, network security with restrictive rules in Network Security Groups, clear segmentation of traffic according to Zero Trust principles, and firewalls that allow only necessary traffic.

Equally important is identity and access management. The principle of least privilege should be applied, supplemented by the use of managed identities and role-based access control. Databases and storage services must also be provisioned in the correct version and secured with appropriate firewall rules, access controls, and a clear separation between service principals and user permissions. In addition, compliance requirements should be considered, and monitoring solutions such as Azure Policy and Azure Monitor enabled to enforce regulatory requirements and governance standards.

Beyond security and configuration aspects, it is important that the environment is technically validated for functionality. This includes verifying that all Azure resources have been created as planned, as well as workload-specific tests — for example, ensuring that databases are reachable, queues process messages, or applications start without errors. Network connectivity must also be validated early. Common problem areas such as faulty routing tables, insufficient DNS resolution, or blocked ports can be identified and fixed using tools like Azure Network Watcher before they cause outages in production.

Performing the Cutover

The cutover is the moment of migration when workloads are permanently moved from the source environment to Azure and production operations are switched to the new infrastructure. This phase reveals how thoroughly planning, testing, and coordination were done beforehand. To cover both business-critical systems with high availability requirements and less sensitive workloads, a distinction is made between approaches with near-zero downtime and traditional approaches with planned downtime windows.

For near-seamless migrations, continuous data replication is usually set up. Databases are synchronized with the target environment using native replication mechanisms until the source and target systems are in sync. A key metric here is monitoring replication latency. Only once the lag is completely caught up can the final switch occur. During this stable replication phase, it is advisable to pre-transfer unstructured data such as files or objects to Azure to minimize the volume at the actual cutover.

Next, write operations are paused or systems are put into read-only mode so that no transactions are lost. Then a final synchronization takes place, data integrity is verified with hash or checksum checks, and workloads are activated in Azure. Switching DNS entries and load balancers ensures that users are seamlessly redirected to the new environment. Particularly important afterward is intensive validation through functional tests, performance monitoring, and close involvement of application owners.

For systems that can tolerate planned downtime, the process is less complex but involves a clearly defined outage. Before starting, all write operations are stopped and it is ensured that no open transactions remain. Then databases, files, and objects are transferred to Azure with tools such as Azure Migrate, AzCopy, or the Database Migration Service (DMS). After successful import, data validation through row and metadata comparisons takes place before applications are started and tested in the Azure environment. Once all checks are passed, production traffic is switched to the new systems, and final functional tests confirm stable operation.

Safeguarding with a Fallback Scenario

Even if a migration is carefully planned and tested, there should always be a fallback plan that enables a quick return to the source environment. Especially in the critical initial phase after cutover, unexpected problems may arise — whether due to performance issues, integration errors, or unforeseen dependencies. By temporarily keeping the source environment available, organizations create a kind of insurance policy. If a serious issue occurs, systems can quickly be switched back to the previous state. This also includes maintaining the ability to roll back DNS entries and configurations. It is essential that this fallback mechanism is documented and tested so that there is no need to improvise in an emergency.

Validating Migration Success

Migration should never be declared complete prematurely. Only comprehensive validation ensures that all requirements are met and workloads are stable in Azure. This includes, in particular, verifying data integrity through checksums, hashes, or metadata comparisons, as well as confirming that user access and system performance meet expectations. Especially in the first hours and days after cutover, metrics such as response times, error rates, and utilization should be continuously monitored. Another important element is the formal approval by stakeholders such as business units, application owners, and testers. Only once this approval has been obtained should the migration be officially declared successful. This prevents overlooked issues or premature celebrations.

Stabilization and Extended Operational Support

The work does not end with a successful cutover. The first weeks are a particularly sensitive phase during which the reliability of the new environment must be proven. It is advisable to introduce a model of enhanced operational support. Experienced IT staff or external partners should be available during this time with increased responsiveness to resolve issues more quickly than in normal operations. In parallel, operational documentation and systems must also be updated. Only if the operational reality is accurately reflected can monitoring, incident management, and governance work effectively in the long term. Stabilization thus concludes the migration process and prepares the transition into regular cloud operations.

Optimization of Workloads

It is only in the optimization phase that it becomes clear whether workloads are not only functional but also efficient, secure, and cost-effective. This phase lays the foundation for long-term, mature cloud usage and connects migration activities with operational routines.

Fine-tuning workload configurations

After a migration, workloads often change their behavior, whether due to different infrastructures, scaling mechanisms, or service architectures. To stabilize performance and avoid unnecessary costs, short-term configuration adjustments are required.

Azure Advisor provides practical recommendations in the areas of cost, reliability, security, and performance. In addition, service-specific best practices from the Azure Well-Architected Framework should be applied. Security-related recommendations from Microsoft Defender for Cloud should also be implemented quickly to reduce misconfigurations and attack surfaces.

Validating critical configurations

Stable operations depend on central monitoring, cost, and backup mechanisms working reliably in the new environment.

  • Monitoring: Check that metrics and logs are fully collected and that alerts are properly adjusted to new thresholds. Dashboards should also reflect the current architecture and be relevant for operational decisions.
  • Cost control: With Azure Cost Management, current costs can be compared with pre-migration benchmarks. Deviations often indicate inefficient scaling policies or oversized resources.
  • Backups: Ensure that backups not only complete successfully but also pass recovery tests for defined RPOs/RTOs. Governance policies and audit trails should be updated to reflect new storage locations and retention rules.
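The comparison against pre-migration benchmarks can be automated as a simple deviation check per service. Service names and the 15% tolerance below are illustrative assumptions; real figures would come from Azure Cost Management exports.

```python
def cost_deviation(current: dict[str, float],
                   baseline: dict[str, float],
                   tolerance_pct: float = 15.0) -> dict[str, float]:
    """Compare monthly per-service costs against the pre-migration
    baseline and flag services whose increase exceeds the tolerance,
    returning the deviation in percent for each flagged service."""
    flagged = {}
    for service, cost in current.items():
        base = baseline.get(service)
        if base and (cost - base) / base * 100 > tolerance_pct:
            flagged[service] = round((cost - base) / base * 100, 1)
    return flagged
```

Flagged services are then candidates for right-sizing or a review of their scaling policies.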

Systematically incorporating user feedback

Technical metrics are important but do not capture all aspects of operational success. End-user feedback provides valuable insights into performance, usability, or stability that monitoring may not reveal. Feedback can be gathered systematically via surveys, support tickets, or interviews and documented in a central backlog. Prioritized issues should be assigned clear responsibilities, and their resolution should be tracked transparently. Visible improvements — such as reduced latency or increased stability — strengthen user confidence in the cloud transformation.

Conducting regular workload reviews

Optimization is not a one-time step but a continuous process. Regular reviews, embedded in governance and operational cycles, help identify opportunities for cost, security, reliability, or performance improvements at an early stage. The Azure Well-Architected Framework is a proven tool for systematically documenting improvement opportunities and guiding workload evolution.

Optimizing hybrid and multi-cloud dependencies

Many environments remain hybrid or multi-cloud after migration. These dependencies pose risks such as increased latency or security gaps. Proactive monitoring of cross-cloud and on-premises workloads helps detect bottlenecks early. Equally important is securing connections with ExpressRoute or VPN Gateway and restricting administrative access through Azure Bastion, complemented by continuous diagnostics and alerting. In the long term, a roadmap for reducing external dependencies should be developed, gradually replacing hybrid components with Azure-native services.


Making migration results transparent

Finally, the results of migration and optimization should be documented and communicated transparently. This includes metrics such as cost reductions, performance gains, or improved resilience, which can be derived from Azure Monitor, Cost Management, or incident reports. Concrete, business-relevant examples increase stakeholder acceptance and build trust for future cloud initiatives.

Decommissioning Workloads

The migration to Azure is only truly complete once the legacy systems have been properly decommissioned. Structured decommissioning reduces operational overhead, avoids unnecessary costs, ensures regulatory compliance, and prevents the risk of prematurely shutting down business-critical systems.

Obtaining stakeholder approval

Before a workload is permanently deactivated, both business and technical approval must be obtained. This ensures that the target system in Azure meets all requirements and that no dependencies have been overlooked. Approval should be given by workload owners, IT operations teams, and security officers. Success criteria should be documented — for example, a defined number of error-free operating weeks or meeting specific performance metrics. These criteria also serve as audit evidence and provide a reliable basis for the final shutdown date.

Reclaiming and optimizing software licenses

Decommissioning often frees up licenses, which can offer significant cost savings. In particular, Windows and SQL Server licenses can be checked for eligibility under the Azure Hybrid Benefit to reduce ongoing cloud costs. At the same time, license inventories should be updated, and compliance systems aligned with the new environment. Unused licenses can be reassigned to other systems within the organization or, depending on contract terms, even returned to vendors. This keeps the license landscape not only cost-efficient but also audit-ready.

Ensuring data retention for compliance and recovery

Even when systems are shut down, not all data may be deleted. Many industries are subject to strict data retention regulations. Therefore, it is crucial to carefully inventory and classify legacy system data and archive it in compliance-ready Azure storage solutions.

Features such as immutable storage policies (WORM), legal holds, or automated tiering strategies between hot, cool, and archive storage can be applied. It is equally important to document clear processes for retrieving archived data when needed — for example, for an audit or legal review. This ensures the right balance between cost optimization and regulatory security.
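The tiering decision itself can be made explicit as a small policy function. The thresholds below are purely illustrative; an actual mapping must follow the organization's retention policy and Azure Blob Storage pricing (note that archive-tier data incurs rehydration latency and early-deletion charges):

```python
def select_tier(retention_years: float, accesses_per_month: float) -> str:
    """Map retention duration and access frequency to a blob storage tier.

    Thresholds are illustrative examples, not official Azure guidance.
    """
    if accesses_per_month >= 1:
        return "hot"        # actively used data stays immediately readable
    if retention_years < 1 and accesses_per_month > 0:
        return "cool"       # short retention, occasional access
    return "archive"        # long-term, rarely (or never) read

assert select_tier(7, 0) == "archive"   # regulatory archive
assert select_tier(0.5, 0.2) == "cool"  # short-lived, sometimes read
assert select_tier(1, 5) == "hot"       # working data
```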

Updating documentation and operational processes

After decommissioning, all operational documentation must be updated to reflect the new reality. Architecture diagrams should show only the current Azure environment, and outdated references to on-premises systems must be removed. Standard operational processes such as incident response, maintenance, or escalation paths must be adapted to Azure operating models.

Another important step is cleaning up monitoring and alerting mechanisms. Obsolete dashboards or alerts for decommissioned systems must be removed to avoid false alarms and wasted effort. At the same time, monitoring baselines should be reestablished based on new Azure performance data.

Finally, legacy documentation should be clearly marked with deprecation notes, moved to archival systems, and access restricted. This preserves historical knowledge for traceability and audits without misleading operational teams with outdated information.

Modernizing Workloads for Azure

After examining the migration approach to Azure, this chapter focuses on the modernization of migrated systems and applications. Concretely, this means enhancing already migrated systems with cloud-specific capabilities to reduce costs or unlock other benefits.

Preparing for Modernization

A successful modernization of workloads does not begin with the first technical step but with organizational and strategic preparation. Before teams refactor code, rebuild platforms, or redesign architectures, unified definitions, competencies, priorities, and responsibilities must be clarified. This phase creates the foundation for modernization initiatives to progress purposefully without failing due to misunderstandings, skill gaps, or misguided investments.

It becomes clear that the organizational preparations and framework conditions are similar to those required during the initial workload migration phase.

Defining modernization for the organization

To ensure that everyone is aligned, a shared definition of modernization is essential. In this context, modernization means the improvement and adaptation of existing workloads to cloud best practices, without developing entirely new functionalities or building new systems from scratch. Typical measures include:

  • Replatforming (e.g., moving databases to managed cloud services),
  • Refactoring (e.g., cleaning up or restructuring code), and
  • Re-architecting (e.g., transitioning from monolithic applications to containerized or microservice-based applications).

This definition must be communicated transparently so that project managers, developers, operations teams, security specialists, and leadership share the same understanding of what qualifies as modernization. Only this common ground prevents different departments from working toward divergent objectives. At the same time, modernization should be established as a shared responsibility across all teams.

Development, operations, and architecture departments bring different expertise to the table and must work in close coordination to ensure integration, security, and stability. Insights and challenges identified during the initial migration into Azure can now be leveraged for specific modernization measures — potentially introducing Azure cloud-native Platform as a Service (PaaS) solutions.

Assessing modernization readiness and competencies

The next step is to analyze whether the organization is ready for modernization at all. Four areas of competency play a key role:

  • Cloud knowledge: Do the technical teams have sufficient expertise with the relevant Azure services to be used in modernization? A solid understanding of service feature sets is critical for making informed modernization decisions. If needed, external Azure expertise should be brought in.
  • DevOps & automation: Were CI/CD pipelines, automated tests, and infrastructure-as-code practices already implemented during migration? If these foundations are in place, they now provide the perfect basis for granular and reproducible changes in systems and infrastructure components.
  • Architectural understanding: Is there sufficient knowledge of modern design patterns and technologies such as microservices, containers, or serverless architectures? Mastery of these methods supports both technical and strategic modernization planning.
  • Monitoring & operations: Can existing monitoring and logging tools reliably cover expanded cloud scenarios? If Azure-native observability and logging solutions have not yet been introduced, now is the right time to discuss scalable, long-term approaches.

Where gaps are identified, an action plan should be developed, including training (e.g., Azure certifications, architecture workshops), targeted hiring, or the temporary involvement of external partners. A team that has internalized the principles of modern cloud architectures can flexibly adapt to new tools.

Prioritizing workloads

In environments with many systems, it may make sense to prioritize the order of workloads for modernization. A suitable decision basis is categorization by business value and technical risk.

For example, systems with low business value and high technical risk may only be addressed selectively, when clear benefits exist. Experience shows that triggers such as security vulnerabilities, expiring vendor support, or rapidly growing technical debt strongly influence prioritization.
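This value/risk categorization can be expressed as a simple quadrant rule. The sketch below uses made-up 1–5 scores and labels; it illustrates the prioritization logic described above, not an official CAF scoring model:

```python
def classify(business_value: int, technical_risk: int) -> str:
    """Place a workload in a value/risk quadrant (scores 1-5, illustrative)."""
    high_value = business_value >= 3
    high_risk = technical_risk >= 3
    if high_value and high_risk:
        return "modernize first"      # big upside, urgent to de-risk
    if high_value:
        return "modernize next"       # valuable and comparatively safe
    if high_risk:
        return "address selectively"  # only when clear benefits exist
    return "defer"

assert classify(5, 4) == "modernize first"
assert classify(2, 4) == "address selectively"
```

Triggers such as expiring vendor support or known vulnerabilities would simply raise a workload's technical-risk score in such a model.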

Creating a shared understanding of approaches and best practices

Before actual implementation begins, all workload teams should develop a common understanding of modernization approaches. The Azure Well-Architected Framework (WAF) is particularly useful here, with its five pillars: security, reliability, cost optimization, performance efficiency, and operational excellence.

Equally important is enabling workload teams themselves to make concrete decisions. As the experts who maintain the systems daily, they have the deepest insight into weaknesses. Leadership should provide context — such as growth goals, cost reduction requirements, or compliance mandates — while teams propose and implement solutions. Regular check-ins, along with clear parameters for budgets, timelines, and architectural standards, ensure that decisions remain aligned with business objectives.

Planning the Modernization

After the preparatory steps for modernization have been completed, the next task is to define the individual modernization measures and the concrete migration strategy for specific workloads.

Choosing the right modernization strategy

Every workload is different, and there is never just one right approach. During the workload migration phase, the focus was primarily on rehosting, the classic lift-and-shift approach, where applications were moved into Azure with minimal code changes.

After the initial migration, refactoring may be the next step. Here, code is restructured to make it more cloud-optimized, maintainable, and secure. This strategy is particularly suitable when significant technical debt has accumulated.

For more extensive transformations, re-architecting can be considered — such as moving from monolithic applications to microservices or serverless architectures. This approach opens up major innovation potential but also requires the most effort, longer testing cycles, and fundamental changes.

It is important to select the strategy based not on technological appeal but on business value, timelines, and available resources. Over-modernization — choosing the most complex approach without clear added value — is a common mistake. Every decision should be based on a realistic cost-benefit analysis.

Planning modernization in phases

Redesigning a complex workload all at once is a high-risk endeavor. A better approach is to break modernization into clearly defined phases. Each phase should deliver tangible value, remain manageable for teams, and allow lessons to be learned before progressing to the next stage.

Phases can be organized in different ways:

  • By components or layers (e.g., first the database, then the business logic, finally the user interface).
  • By priority and complexity (e.g., start with internal services, then critical business logic, lastly customer-facing systems).
  • By business processes (e.g., first user management, then payment processing, finally reporting).

Ideally, begin with a smaller adjustment that has low risk and high benefit, which can be completed within a few weeks. Such a proof of success builds stakeholder confidence and motivates the team for subsequent phases.

Each phase should have clear success criteria to prevent scope creep. Once a phase is completed, technical goals, quality benchmarks, and time/budget limits should be validated before starting the next round of improvements.
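Such a phase gate can be made mechanical so that "done" is not negotiable after the fact. The metric names and thresholds below are hypothetical examples of the validation described above:

```python
def phase_gate(results: dict, criteria: dict) -> list:
    """Return the list of unmet success criteria for a modernization phase.

    `criteria` maps metric names to required minimums; both dicts contain
    hypothetical example values.
    """
    return [name for name, required in criteria.items()
            if results.get(name, 0) < required]

criteria = {"availability_pct": 99.9, "p95_latency_budget_met_pct": 95,
            "test_pass_rate_pct": 100}
results = {"availability_pct": 99.95, "p95_latency_budget_met_pct": 97,
           "test_pass_rate_pct": 98}

unmet = phase_gate(results, criteria)
print("Phase complete" if not unmet else f"Blocked by: {unmet}")
# → Blocked by: ['test_pass_rate_pct']
```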

Establishing governance for modernization

Just like with the initial workload migration to Azure, change management processes, change freezes, and scope management methods must be in place to ensure that modernization initiatives are carried out consistently.

Strategically planning deployments

Another critical point is the rollout strategy for modernized components. Two main models exist:

  • In-place deployment: Changes are made directly in the existing production system. This saves infrastructure costs and works for smaller, reversible adjustments, but carries a higher risk of outages.
  • Parallel deployment: A new environment is built and kept in sync until the switchover point. This is necessary for complex or mission-critical systems since it ensures availability but incurs higher costs.

Approaches such as canary releases or blue-green deployments allow gradual rollout of new versions with immediate rollback options if issues arise. Each phase should include a documented rollback procedure, ideally automated via Infrastructure-as-Code, so teams can quickly revert to the last stable version.
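The control loop behind a canary release is simple to state: increase traffic in steps and abort on a bad reading. A minimal sketch, assuming illustrative step sizes and a 2% error-rate threshold (real deployments would read these metrics from Azure Monitor and execute the rollback via the pipeline):

```python
def run_canary(observed_error_rates, threshold=0.02, steps=(5, 25, 50, 100)):
    """Advance traffic through canary steps, rolling back on a bad reading.

    `observed_error_rates` holds one error-rate sample per step; the step
    sizes and threshold are illustrative, not prescribed values.
    """
    for pct, error_rate in zip(steps, observed_error_rates):
        if error_rate > threshold:
            return f"rollback at {pct}% (error rate {error_rate:.1%})"
    return "fully rolled out"

print(run_canary([0.001, 0.003, 0.002, 0.004]))  # healthy run
print(run_canary([0.001, 0.051]))                # fails at the 25% step
```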

Engaging stakeholders and securing approvals

Technical planning alone is not enough. Modernization, just like initial migration, requires active support from business and IT leadership. Clear communication of the expected benefits is essential.

While IT teams often focus on efficiency and stability, decision-makers are primarily interested in cost reduction, time-to-market, and customer experience.

Clear roadmaps with measurable milestones are helpful, e.g., “In six weeks, migration of component X with a 20% performance increase”. Pre-prepared before-and-after metrics, such as expected cost savings (20–40%), productivity gains (50–80%), or reduced risks from fewer outages, add credibility. Transparent risk assessments and regular communication routines (e.g., weekly status updates) further increase stakeholder acceptance.

Executing the Modernization

Now the actual modernization is implemented — from preparing project stakeholders to developing in secure test environments and finally deploying into production. The central goal of this phase is to deliver changes with maximum safety and minimal disruption to operations.

Development in secure test environments

The core principle of any modernization is: first develop and test outside of production, then introduce changes in a controlled manner into production. For this purpose, development, test, and staging environments are set up to mirror the production environment as closely as possible.

  • Apply Well-Architected principles: All implementations follow established frameworks such as the Azure Well-Architected Framework, ensuring best practices and compliance requirements are met.
  • Production-like environments: Even if smaller SKUs are used to save costs, the structure of test environments should remain identical to production to ensure realistic results.
  • CI/CD and version control: Every change is versioned via source control (e.g., Git) and tested in a CI/CD pipeline. Small, incremental changes increase transparency and reduce risk.

Comprehensive testing for assurance

Testing is the heart of this phase, since modernization usually does not bring new features but rather transforms existing systems. The focus is on stability, performance, and security. The scope of testing closely resembles the procedures used during the initial migration phase.

Deployment of modernization

The final deployment into production is the most critical moment of the entire project. Depending on the chosen strategy, this can be either in-place or via a parallel environment. In both scenarios, data consistency must be ensured through appropriate migration and replication strategies, and rollback mechanisms must be available at all times.

Validation and stabilization

After the cutover, stabilization becomes critical. This phase verifies that the workload functions as intended and that users can work without restrictions.

  • Success validation: User access, system metrics, and error rates are closely monitored. Only after positive validation by all stakeholders should the modernization be declared officially complete.
  • Enhanced support: In the first days or weeks after the switchover, additional support should be available to quickly address any issues.
  • Updated documentation: All guides, support processes, and onboarding materials must be updated to reflect the new reality.

Optimizing Workloads

With modernization successfully completed, the work is not finished. Instead, a new phase begins, ensuring that the full benefits of transformation are realized and the path for continuous improvement is established. A modernized system typically introduces new functions and tuning options — such as auto-scaling, advanced security features, or performance configurations — that only become relevant after go-live. This is where optimization begins: refining configurations, securing operations, collecting user feedback, and embedding a permanent improvement process.

Fine-tuning configurations

Modernization does not end with deployment. Only during live operation does it become apparent where additional cost, performance, or security optimizations are possible. As during the migration phase, tools such as Azure Advisor and Microsoft Defender for Cloud provide recommendations and alerts that should be acted upon, alongside service-specific best practices.

Ensuring operational readiness

An optimized cloud workload is not only about performance but also about reliability in operations. Three aspects are critical here: monitoring, cost control, and backup.

  • Monitoring: Validate that monitoring and alerting cover the entire architecture end-to-end. New components require their own log and metric configurations. Dashboards must be updated, and chaos tests scheduled to ensure alerts trigger reliably.
  • Cost control: With tools like Microsoft Cost Management, spending patterns can be analyzed, budget alerts set, and cost drivers identified early. Regular reviews help eliminate unused resources or resize over-provisioned components.
  • Backups: Test restores of databases or backups ensure that defined RTO/RPO targets are met. Newly added resources must immediately be integrated into the backup strategy to maintain long-term resilience.
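The backup check in particular lends itself to automation after every test restore. A sketch with hypothetical RTO/RPO targets; in practice the durations would come from Azure Backup job logs:

```python
from datetime import timedelta

def validate_restore(restore_duration, last_backup_age, rto, rpo):
    """Check a test restore against RTO/RPO targets (all timedeltas)."""
    return {
        "rto_met": restore_duration <= rto,  # restore finished in time?
        "rpo_met": last_backup_age <= rpo,   # acceptable data-loss window?
    }

result = validate_restore(
    restore_duration=timedelta(hours=2),
    last_backup_age=timedelta(minutes=45),
    rto=timedelta(hours=4),   # hypothetical: restore within 4 hours
    rpo=timedelta(hours=1),   # hypothetical: at most 1 hour of data loss
)
print(result)  # → {'rto_met': True, 'rpo_met': True}
```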

Establishing continuous modernization

Perhaps the most important lesson after modernization is: stagnation is regression. Without regular adjustments, new legacy structures will inevitably form. Therefore, optimization must be firmly embedded into the IT strategy.

This includes regular health checks and Well-Architected reviews, which incorporate new technologies, changing usage patterns, and identified weaknesses into continuous improvement. Where possible, optimization should be automated — for example through auto-scaling rules that dynamically adapt workloads, or anomaly detection for costs.
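To illustrate the cost-anomaly idea, a trailing-window heuristic is enough: flag any day whose spend deviates strongly from the recent baseline. This is a simplified sketch; Microsoft Cost Management offers a built-in anomaly detection feature that should be preferred in practice.

```python
from statistics import mean, stdev

def cost_anomalies(daily_costs, window=7, sigma=3.0):
    """Flag indices of days whose spend deviates strongly from the
    trailing window (rolling mean +/- sigma standard deviations).

    Window size and sigma are illustrative tuning parameters.
    """
    flagged = []
    for i in range(window, len(daily_costs)):
        baseline = daily_costs[i - window:i]
        mu, sd = mean(baseline), stdev(baseline)
        if sd and abs(daily_costs[i] - mu) > sigma * sd:
            flagged.append(i)
    return flagged

costs = [100, 102, 98, 101, 99, 103, 100, 310, 101]  # spike on day 7
print(cost_anomalies(costs))  # → [7]
```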

Finally, insights and best practices should be documented and shared. Internal knowledge bases and lessons learned help teams benefit from each other’s experiences, making future projects faster and more successful.

Developing Cloud-Native Solutions in Azure

So far, we have explored the migration and modernization of workloads. In this chapter, we turn to the development of cloud-native solutions in Azure.

Planning Cloud-Native Solutions

The development of cloud-native applications in Azure differs fundamentally from traditional software delivery. While classic projects often focus on developing applications in fixed cycles and releasing them afterward, cloud technologies rely on continuous delivery, rapid iterations, and resilient operating models.

To succeed, a carefully considered planning phase is essential. As in the approaches covered previously, this stage provides the foundation for automation, clear responsibilities, and robust contingency mechanisms.

DevOps as the foundation for automation

A central success factor is the adoption of consistent DevOps practices. Development and operations teams collaborate closely, using automation as the connecting element. Build, test, and deployment processes are orchestrated through CI/CD pipelines.

This automation reduces human error, accelerates release cycles, and ensures deployments run consistently across all environments. Version control and shared workflows foster close coordination between teams and provide transparency across the entire application lifecycle.

Operational readiness in focus

Technical delivery alone is not enough. Applications must be operationally ready from day one. This includes comprehensive monitoring and alerting concepts as well as prepared rollback scenarios, documented troubleshooting processes, and structured escalation paths.

All relevant scripts and instructions should be stored in a central, accessible location, enabling teams to respond quickly in emergencies and minimize service interruptions.

Development practices for reliable deployments

The quality of a cloud-native solution starts with the code. Clear coding guidelines, peer reviews, and automated tests form the basis for stable deployments. Within CI/CD pipelines, quality gates can be enforced so that only validated code reaches production-like environments.

Critical test categories include:

  • Unit and integration tests
  • Smoke tests
  • Load and performance tests

These measures ensure the system remains stable under realistic usage scenarios, increasing deployment reliability and reducing surprises in production environments.
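A smoke test in this sense is deliberately shallow: hit a few lightweight endpoints right after deployment and fail fast if anything is unreachable. A sketch using only the standard library; the `/health` and `/ready` paths are hypothetical and must match the workload's actual health-check routes:

```python
import urllib.request

def smoke_test(base_url, paths=("/health", "/ready"), timeout=5):
    """Probe lightweight endpoints and return a list of failures.

    Endpoint paths are made-up examples; a CI/CD pipeline would run this
    against the freshly deployed environment and fail the stage on any
    non-empty result.
    """
    failures = []
    for path in paths:
        try:
            with urllib.request.urlopen(base_url + path,
                                        timeout=timeout) as resp:
                if resp.status != 200:
                    failures.append((path, resp.status))
        except OSError as exc:  # covers URLError, timeouts, refused conns
            failures.append((path, str(exc)))
    return failures
```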

Gradual rollout of new workloads

When introducing new cloud-native applications, a progressive rollout is recommended. Instead of releasing a system to all users at once, it is first made available to a small pilot group.

This soft launch follows the principle of canary deployments, where the system is tested under real conditions without the risk of large-scale failures. Weaknesses can be identified and resolved early, before the full rollout to all users.

Only after stability and performance have been validated is the system broadly released.
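One common way to keep pilot-group membership stable while the rollout percentage grows is deterministic hash bucketing. The sketch below uses a made-up feature name as the salt; the property that matters is that a user enrolled at 10% stays enrolled at 50%:

```python
import hashlib

def in_rollout(user_id: str, rollout_pct: int, salt: str = "feature-x") -> bool:
    """Deterministically bucket a user into the rollout cohort.

    Hashing with a per-feature salt keeps each user's assignment stable
    as the percentage grows; `feature-x` is a hypothetical feature name.
    """
    digest = hashlib.sha256(f"{salt}:{user_id}".encode()).hexdigest()
    bucket = int(digest, 16) % 100  # uniform-ish bucket in [0, 100)
    return bucket < rollout_pct

# A user's answer never flips back and forth between rollout stages:
for pct in (5, 25, 50, 100):
    print(pct, in_rollout("alice", pct))
```

The same function, evaluated server-side per request, also serves as a basic feature-flag check for the soft launch.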

Documentation and handover to operations

Another key factor for success is clear documentation of operational and escalation processes. This includes restart procedures, log access, common error patterns, and defined escalation paths.

These resources must be easily accessible so that support teams can act quickly when issues arise.

Equally important is the timely handover to operations. Responsibilities must be clearly defined, and support models aligned — whether limited support hours or full 24/7 coverage. Addressing these topics in advance prevents responsibility gaps and operational friction.

Planning for new features

New functionality should always be introduced within a structured change management process. This involves documenting changes, defining rollback plans, and securing stakeholder approvals.

  • For minor, backward-compatible updates, in-place updates with feature flags or staged rollouts are effective.
  • For more complex, high-risk changes, blue-green deployments should be used, where old and new versions run in parallel. This ensures the ability to switch back instantly if issues arise.

Rollback as a safety net

No deployment is without risk. Therefore, a well-defined rollback plan is indispensable.

First, clear criteria must be defined for when a deployment is considered failed — for example, critical performance degradation, security incidents, or failed integrity checks.

Next, automated rollback steps should be integrated into the CI/CD pipeline, enabling teams to revert to a previous version without manual intervention.

In addition, workload-specific rollback instructions are required:

  • For infrastructure deployments: reapplying earlier IaC templates
  • For applications: deploying an older container image

Equally important is regular rollback testing in non-production environments. This ensures all steps will work smoothly in real scenarios.

Finally, after any real rollback or failed deployment, retrospectives should be conducted to improve processes and keep the rollback strategy up to date.
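The failure criteria named above can be encoded so that the pipeline, not a person under pressure, decides whether to revert. The metric names and the latency limit below are illustrative assumptions:

```python
def should_roll_back(metrics: dict, limits: dict) -> list:
    """Return which failure criteria a deployment has breached.

    Metric names and limits are illustrative; real criteria follow the
    team's definition of a failed deployment (performance degradation,
    security incidents, failed integrity checks).
    """
    breached = []
    if metrics["p95_latency_ms"] > limits["p95_latency_ms"]:
        breached.append("performance degradation")
    if metrics["security_incidents"] > 0:
        breached.append("security incident")
    if not metrics["integrity_check_passed"]:
        breached.append("failed integrity check")
    return breached

metrics = {"p95_latency_ms": 850, "security_incidents": 0,
           "integrity_check_passed": True}
limits = {"p95_latency_ms": 500}  # hypothetical latency budget
print(should_roll_back(metrics, limits))  # → ['performance degradation']
```

A non-empty result would trigger the automated rollback stage in the CI/CD pipeline described above.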

Building Cloud-Native Solutions

The goal of developing cloud-native solutions in Azure is to design applications that fully leverage the characteristics of the cloud. Unlike a simple migration or rehost strategy, systems are not merely operated in Azure, but are consistently developed for scalability, resilience, and agility. This approach delivers not only short-term technical benefits but also establishes a long-term foundation for a modern, innovation-ready IT landscape.

A defining feature of cloud-native development is the consistent use of managed services and platform capabilities provided by Azure. Organizations can therefore reduce their investment in infrastructure operations and maintenance, focusing instead on business-critical functionality. Services such as Azure App Service, Azure Functions, or Azure Container Apps make it possible to deploy applications flexibly, scale automatically, and operate with minimal overhead. For more complex architectures centered around microservices and containers, the Azure Kubernetes Service (AKS) offers a robust platform for orchestrating containerized applications, tightly integrated with Azure’s native ecosystem services.

On the data and persistence layer, the cloud-native paradigm becomes equally evident. Azure Cosmos DB provides a global, highly available NoSQL database with millisecond latency and multi-region replication. Complementing this, Azure SQL Database delivers a fully managed relational database with built-in high availability, security, and autoscaling. This frees organizations from traditional operating system and database administration tasks, enabling them to use data platforms strategically to drive innovation and business growth.

Another major advantage of cloud-native solutions is the ability to implement event-driven architectures. Azure offers powerful integration and messaging services such as Event Grid, Service Bus, and Logic Apps. These make it possible to orchestrate complex business processes across decoupled components, achieving both flexibility and stability. Combined with Azure Functions, this enables highly scalable, reactive systems that consume compute resources only when needed, aligning costs strictly with usage.

From a strategic perspective, cloud-native solutions in Azure allow organizations to position IT as a true driver of innovation. Autoscaling enables rapid responses to new market opportunities, while resilience and integrated security ensure stability and compliance. At the same time, the consistent use of Infrastructure as Code, CI/CD pipelines, and DevOps practices fosters a culture of automation and repeatability, reducing operating costs and improving release quality.

Equally important is the avoidance of new legacy systems. Applications that are designed from the start with cloud-native architectural principles — such as microservices, loose coupling, and API-centric integration — are far easier to modernize and evolve than monolithic systems. This minimizes the risk that today’s investments will become tomorrow’s technical debt.

Organizations that adopt this approach secure not only technological advantages but also create a foundation for agile business models and rapid innovation cycles. Azure provides a complete ecosystem covering compute, data, integration, security, and governance from a single source. The challenge is not simply in using individual services correctly but in developing a holistic strategy that aligns business objectives, operating models, and architectural decisions.

Cloud-native development in Azure is therefore more than a technical option; it is a strategic step to future-proof the IT organization and to consistently advance digital transformation.

Conclusion

With the Adopt phase, cloud transformation becomes tangible. Workloads are migrated, modernized, or newly developed. Organizations can now truly realize the benefits of the cloud.

To prevent newly introduced systems from slipping into security risks, cost overruns, or uncontrolled growth, clear governance rules, security concepts, and a well-designed operating model are essential. This is where the next phases of the Cloud Adoption Framework come into play.

In the upcoming blog posts, we will explore:

  • how organizations establish policies and standards in the Govern phase,
  • how security and compliance are anchored in the Secure phase,
  • and how the Manage phase ensures stable and efficient cloud operations.

Together, these phases provide a holistic view of cloud adoption — from strategy to real-world usage, and toward a sustainable, secure, and well-controlled operating model.
