
Special Features and Challenges of Azure Container Apps

In this blog post, you'll get a detailed overview of Azure Container Apps, its use cases, and special features. Additionally, the key differences to other container solutions in Azure are highlighted, enabling you to make an informed decision for your cloud architecture.


Compute Resources for Containerized Applications

Teams have various options available for deploying containerized applications, including Azure Container Apps, Azure Kubernetes Service (AKS), Azure App Service, Azure Container Instances, and other platforms. Each of these solutions brings specific advantages and use cases, so the choice of the appropriate technology depends heavily on the individual requirements of a project.

Azure Container Apps is a serverless platform that is ideal for microservices, event-driven applications, and flexible scaling requirements. In contrast, AKS offers comprehensive control over the Kubernetes infrastructure, while Azure App Service is specifically optimized for web applications. Azure Container Instances enable fast and isolated deployments, while Azure Functions focuses on serverless, short-lived processes. Other alternatives such as Azure Spring Apps and Azure Red Hat OpenShift are aimed at specific development approaches and enterprise requirements.

The following table summarizes the key differences between these services:

| Service | Advantages | Disadvantages | Best Use Cases |
| --- | --- | --- | --- |
| Azure Container Apps | Serverless, automatic scaling, optimized for microservices, DAPR support | No direct access to Kubernetes APIs | Microservices, event-driven applications |
| Azure Kubernetes Service (AKS) | Full control over the Kubernetes API, high scalability, flexibility | Requires Kubernetes know-how and cluster management | Complex containerized workloads |
| Azure App Service | Simple hosting for web applications, CI/CD integration, managed service | Little control over infrastructure, primarily for web applications | Web apps, APIs, simple container apps |
| Azure Container Instances (ACI) | Fast, simple deployment of individual containers, no overhead | No automatic scaling, no orchestration | Temporary or isolated container workloads |
| Azure Functions | Serverless, event-driven, automatic scaling | Short-lived functions, limited runtime | Event-driven applications, API backends |

To use Azure Container Apps, we need to understand two essential components: Azure Container App Environments and Azure Container Apps.

Azure Container App Environments

Azure Container App Environments are the top-level logical unit within Azure Container Apps. They serve as an overarching resource for managing multiple container apps and enable shared network, scaling, and configuration options.

Azure Container Apps offers two different workload plan types, which are suitable depending on requirements and usage scenarios:

  • Consumption Only: This plan provides exclusively serverless compute resources, so you are charged only for the resources actually consumed. This can be useful for applications with highly fluctuating loads. However, the "Workload Profiles" plan also supports serverless scaling, which is why "Consumption Only" is the better choice only in exceptional cases.

  • Workload Profiles: In addition to serverless scaling, this plan also allows the definition of fixed CPU and memory resources (SKUs) for your container environment. This is particularly suitable for applications with stable loads or when guaranteed performance is required. In contrast to the "Consumption Only" plan, this type offers more control over the underlying resources.

In a Container App Environment of the "Workload Profile" type, you can use multiple types of SKUs in parallel, so a distribution like the following would be possible within a single Container App Environment:

  • Workload Profile Dedicated 1 (SKU: General Purpose D-series with 4 vCPUs and 16 GB RAM): 2 apps
  • Workload Profile Dedicated 2 (SKU: General Purpose D-series with 4 vCPUs and 16 GB RAM): 1 app
  • Workload Profile Consumption (up to 4 vCPUs and 8 GB RAM): 15 apps

Another reason to choose the Workload Profiles type is the limitation of the Consumption Only type to 2 vCPUs and 4 GB RAM per app. With Workload Profiles, you stay more flexible in the long term and can respond better to changing resource requirements.
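To make the distribution above concrete, the following Python sketch shows how such an environment's workload profiles could be expressed as the resource body of a Microsoft.App/managedEnvironments deployment. The profile names and node counts are illustrative placeholders, not values from a real deployment:

```python
# Illustrative sketch: workload profiles of a Container Apps environment,
# expressed as the JSON body of a Microsoft.App/managedEnvironments resource.
# Profile names and scaling counts are placeholders.
managed_environment = {
    "location": "westeurope",
    "properties": {
        "workloadProfiles": [
            # Two dedicated D-series profiles, 4 vCPUs / 16 GB RAM per node
            {"name": "dedicated-d4-1", "workloadProfileType": "D4",
             "minimumCount": 1, "maximumCount": 3},
            {"name": "dedicated-d4-2", "workloadProfileType": "D4",
             "minimumCount": 1, "maximumCount": 3},
            # Serverless consumption profile (up to 4 vCPUs / 8 GB RAM per app)
            {"name": "Consumption", "workloadProfileType": "Consumption"},
        ]
    },
}
```

Each container app then selects a profile by name via its workloadProfileName property, which is how a 2/1/15 distribution like the one listed above comes about.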

The advantage of consumption-based compute, whether as the Consumption profile in a Workload Profiles environment or as a Consumption Only environment, is that scaling rules can be applied to the applications: you can define rules that scale the required resources on demand, as sketched below. For predictable and constant resource allocations, however, dedicated workload profiles can be more cost-effective.
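As a minimal sketch of such a scaling rule, the scale section of a container app could look like this; the replica bounds and the concurrency threshold are arbitrary example values:

```python
# Illustrative sketch: HTTP-based autoscaling in a container app's "template"
# section. All values are example values.
scale = {
    "minReplicas": 0,   # scale to zero when idle (pay only for what runs)
    "maxReplicas": 10,
    "rules": [
        {
            "name": "http-rule",
            # add replicas for roughly every 50 concurrent HTTP requests
            "http": {"metadata": {"concurrentRequests": "50"}},
        }
    ],
}
```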

Example with simultaneous use of Consumption and Workload Profile Container Apps Environments:

[Figure: Container Apps Environment Types]

Network Architecture of Container App Environments

In this section, I will show you all the important features and limitations you should consider when deploying your Container Apps.

Decision Basis 1: VNet Integration

Container App Environments can be created in your own VNet or in Microsoft's tenant. To have full control over your Container Apps Environment, I recommend creating a dedicated Container Apps Environment in your own VNet. If you can use your existing VNet, you can easily configure Network Security Groups for your Container Apps Environment subnet and ensure private communication to other resources in the same VNet.

Subnet Configuration for Azure Container Apps Environments:

  • Dedicated Subnet: Virtual network integration requires a dedicated subnet
  • Immutable Size: Subnet sizes cannot be changed after creation
  • Delegation: Your subnet must be delegated to Microsoft.App/environments
  • Infrastructure Reservation: Container Apps automatically reserves 12 IP addresses for infrastructure (regardless of scaling)

| Environment Type | Minimum Subnet Size | IP Address Allocation |
| --- | --- | --- |
| Workload Profile | /27 | Each node gets its own IP address; as the environment scales, the IP requirement grows proportionally to the number of nodes |
| Consumption Only | /23 | IP addresses are shared between multiple replicas; roughly 1 IP address per 10 replicas |
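Putting the subnet requirements together, a VNet-integrated, internal Container Apps environment could be sketched as follows; the subnet resource ID is a placeholder:

```python
# Illustrative sketch: VNet-integrated, internal Container Apps environment.
# The referenced subnet must be delegated to Microsoft.App/environments and,
# for a workload profile environment, be at least a /27.
vnet_environment = {
    "location": "westeurope",
    "properties": {
        "vnetConfiguration": {
            "infrastructureSubnetId": (
                "/subscriptions/<sub-id>/resourceGroups/<rg>"
                "/providers/Microsoft.Network/virtualNetworks/<vnet>"
                "/subnets/<aca-subnet>"  # placeholder subnet ID
            ),
            "internal": True,  # no public endpoint for the environment
        },
        "workloadProfiles": [
            {"name": "Consumption", "workloadProfileType": "Consumption"},
        ],
    },
}
```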

Another reason to choose the Workload Profile Container Apps Environment is the availability of User Defined Routes (UDR):

| Environment Type | Supported Plan Types | Description |
| --- | --- | --- |
| Workload Profile | Consumption, Dedicated | Supports user-defined routes (UDR), egress via NAT Gateway, and private endpoints in the Container Apps environment. The minimum required subnet size is /27. |
| Consumption Only | Consumption | Does not support user-defined routes (UDR), egress via NAT Gateway, peering via a remote gateway, or other custom egress. The minimum required subnet size is /23. |

With user-defined routes, you can route outbound traffic from Container Apps through an Azure Firewall (or another network virtual appliance) first, as sketched below.
[Figure: Container Apps Networking Traffic]
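A user-defined route that forces all outbound traffic from the Container Apps subnet through a firewall could be sketched like this; the firewall's private IP address is a placeholder:

```python
# Illustrative sketch: route table (Microsoft.Network/routeTables) forcing all
# egress from the Container Apps subnet through a firewall / network appliance.
route_table = {
    "location": "westeurope",
    "properties": {
        "routes": [
            {
                "name": "egress-via-firewall",
                "properties": {
                    "addressPrefix": "0.0.0.0/0",
                    "nextHopType": "VirtualAppliance",
                    "nextHopIpAddress": "10.0.1.4",  # placeholder firewall IP
                },
            }
        ]
    },
}
# The route table is then associated with the delegated Container Apps subnet.
```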

Decision Basis 2: Virtual IP Configuration

The configuration of ingress traffic is done in the ingress section, where various settings can be made:

  • Access Level: The container app can be configured to be either internally or externally accessible. The environment variable CONTAINER_APP_ENV_DNS_SUFFIX provides the default DNS suffix of the environment, which is used to resolve the fully qualified domain name (FQDN) of an app automatically. Within the same environment, applications can communicate with each other directly by name.
  • Traffic Splitting Rules: Rules can be defined to distribute incoming traffic to different revisions of the application. This allows, for example, gradual rollouts or A/B testing. Further details on configuring traffic distribution can be found in the Azure documentation.
| Type | Description | Recommendation |
| --- | --- | --- |
| Internal | No public endpoints; Container Apps are reachable only via internal IP addresses within the VNet | Deploy Container Apps as internal whenever possible, since the apps are usually exposed under a custom domain via a load balancer |
| External | Allows public requests via the default xyz.azurecontainerapps.io FQDN | Only advisable if the app should be reachable under the default FQDN |

The following figure shows the network traffic with an Application Gateway used for ingress, and then an example where the container app's ingress type is set to external and the default FQDN is reachable from the internet:
[Figure: Container Apps Public IP]
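A minimal sketch of the ingress section for an internally exposed container app; the port is an example value:

```python
# Illustrative sketch: ingress section in a container app's "configuration".
# external=False keeps the app reachable only inside the environment's VNet.
ingress = {
    "external": False,       # internal access level
    "targetPort": 8080,      # port the application container listens on
    "transport": "auto",     # HTTP/1.1 or HTTP/2, negotiated automatically
    "allowInsecure": False,  # redirect plain HTTP to HTTPS
}
```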

Use of Load Balancers

Azure Container Apps provide a flexible platform for running containerized applications in the Azure Cloud. But often the question arises: How can Container Apps be optimally integrated into existing architectures with load balancing, global distribution, and security requirements? In this section, we will examine various combinations of Azure Container Apps with Application Gateway and Front Door, as well as the corresponding network configuration requirements.

1. Application Gateway with Azure Container Apps

Scenario: Application Gateway is placed in front of the Container Apps as a reverse proxy and Web Application Firewall (WAF). This architecture is particularly suitable for internal and external web applications that require TLS termination, hostname-based routing, or Web Application Firewall protection.

Network configuration: Application Gateway is deployed in its own subnet within an Azure Virtual Network (VNet). For a private Azure Container Apps environment, the environment must also be integrated into a VNet to enable direct communication. If the Container Apps are publicly accessible, the Application Gateway can operate as a reverse proxy with a public IP.
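For the private scenario, the backend-relevant parts of the Application Gateway configuration could be sketched as follows. The container app's internal FQDN and all names are placeholders, and the gateway additionally needs a private DNS zone so it can resolve that FQDN:

```python
# Illustrative sketch: Application Gateway backend pointing at an internal
# container app via its FQDN. FQDN and names are placeholders.
backend_pool = {
    "name": "aca-backend",
    "properties": {
        "backendAddresses": [
            {"fqdn": "shop-api.internal.<environment-default-domain>"}
        ]
    },
}

backend_http_settings = {
    "name": "aca-https",
    "properties": {
        "protocol": "Https",
        "port": 443,
        # use the backend FQDN as Host header so the environment routes correctly
        "pickHostNameFromBackendAddress": True,
    },
}
```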

2. Azure Front Door with Azure Container Apps

Scenario: Azure Front Door is used as a global Content Delivery Network (CDN) and load balancer for geographically distributed applications. This is particularly suitable for scenarios with multi-region deployments.

Network configuration: Front Door does not support direct VNet integration, as it is a global service. To use Azure Container Apps with private access, an additional Azure Private Link is required. Alternatively, the Container Apps must be accessible via a public IP or a publicly accessible domain name.

3. Combination of Application Gateway and Front Door

Scenario: A combination of both services allows Front Door to be used as a global load balancer, while Application Gateway handles more detailed routing and security mechanisms within a region.

Network configuration: Here, Front Door is used as the global entry point, while Application Gateway operates in a VNet within the region. If Container Apps are hosted privately, they must communicate with Front Door via Private Link or be made accessible via the Application Gateway.

4. Additional Options

Azure Load Balancer (ALB)

Scenario: If Container Apps are operated in a VNet (e.g., with Workload Profiles), an internal or external Azure Load Balancer can be used to distribute traffic to different instances.

Advantages:

  • Low latency, as it is a Layer 4 load balancer (TCP/UDP).
  • Supports both public and private IPs.

Limitations:

  • No built-in SSL/TLS offloading or WAF functionality – usable in combination with Azure Application Gateway.

Azure Traffic Manager (ATM)

Scenario: Often used in multi-region or multi-cloud architectures to distribute traffic globally to different Azure Container Apps.

Advantages:

  • DNS-based load balancer for global high availability.
  • Support for various routing methods (Performance-based, Geographic, Failover, Weighting).

Limitations:

  • No direct integration with private VNets; it only controls traffic via DNS and not in real-time.

The Hierarchy of an Azure Container App in Detail

After looking at the special features of configuring Container Apps Environments, I would like to explain the relevant properties of Container Apps in this section.

Container App – The Top Level

An Azure Container App represents the entire application. It can consist of one or more containers that run together in an environment. The Container App defines global configurations such as autoscaling rules, network configuration, and identity management.

Revisions - Versioning the Application

Each Container App has one or more revisions. A revision is an instance of the application with a specific configuration. Changes to the Container App (e.g., updates to environment variables or container images) create a new revision. Revision-specific aspects include: deploying new versions without downtime, traffic distribution to multiple revisions, and rollbacks to previous revisions.

Replicas – Scaling within a Revision

Within a revision, replicas are created, which represent the actual container instances. Replicas enable horizontal scaling and ensure that enough instances are running to handle the load. They follow the defined scaling rules (e.g., CPU/memory usage or HTTP requests per second).

Containers – The Application Components

Each replica contains one or more containers that run together as a unit. Containers within a replica share resources and communicate via localhost. Typically, there are two types of containers:

  • Application container: The main container that provides the core functionality of the application.
  • Sidecar container: Additional containers for supporting tasks such as logging, monitoring, or service mesh functionality.

Init Containers - Preparatory Steps before Starting Application Components

Init containers are special containers that are started before the regular application containers. They serve to perform one-time tasks, such as:

  • Loading configuration files
  • Waiting for dependencies (e.g., a database connection)
  • Initializing resources

The following figure shows the differences and the hierarchy:
[Figure: Container Apps Hierarchy]
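The hierarchy described above maps directly onto the template section of a container app. The following sketch shows one init container, the application container, and a logging sidecar; all image names and values are placeholders:

```python
# Illustrative sketch: "template" section of a container app with an init
# container, the application container, and a sidecar. Images are placeholders.
template = {
    "initContainers": [
        {
            "name": "init-config",
            "image": "myregistry.azurecr.io/init-config:1.0",
            # one-time preparation, e.g. loading configuration files
        }
    ],
    "containers": [
        {
            "name": "app",  # application container with the core functionality
            "image": "myregistry.azurecr.io/shop-api:1.4.2",
            "resources": {"cpu": 0.5, "memory": "1Gi"},
            "env": [{"name": "ASPNETCORE_ENVIRONMENT", "value": "Production"}],
        },
        {
            "name": "log-sidecar",  # supporting container, e.g. log forwarding
            "image": "myregistry.azurecr.io/log-forwarder:2.1",
            "resources": {"cpu": 0.25, "memory": "0.5Gi"},
        },
    ],
    "scale": {"minReplicas": 1, "maxReplicas": 5},  # replicas per revision
}
```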

Overview of Additional Features

Microservices and Communication: DAPR

DAPR (Distributed Application Runtime) simplifies communication between microservices and enables applications to be developed with robust, portable APIs. It provides a standardized solution for challenges such as State Management, Service Invocation, and Event Streaming.
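As a hedged illustration of what this looks like from application code, the following sketch uses the Dapr Python SDK for service invocation and state management. It assumes Dapr is enabled on the container app and that an app with ID orders and a state store component named statestore exist; both names are placeholders:

```python
# Illustrative sketch using the Dapr Python SDK (pip install dapr).
# "orders" and "statestore" are placeholder app/component names.
from dapr.clients import DaprClient

with DaprClient() as client:
    # Service invocation: call the "status" method of the "orders" app
    response = client.invoke_method(app_id="orders", method_name="status", data=b"")
    print(response.text())

    # State management: persist and read back a small piece of state
    client.save_state(store_name="statestore", key="last-order", value="42")
    state = client.get_state(store_name="statestore", key="last-order")
    print(state.data)
```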

Traffic Splitting

With Traffic Splitting, different versions of an application can be run in parallel. This enables gradual rollouts of new versions, A/B testing, or Blue/Green deployments without impacting the user experience.
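A minimal sketch of such a split in the traffic section of the app's ingress configuration; revision names and weights are example values:

```python
# Illustrative sketch: 90/10 traffic split between two revisions of an app.
# Requires the container app to run in multiple-revision mode.
traffic = [
    {"revisionName": "shop-api--stable", "weight": 90},  # placeholder revision
    {"revisionName": "shop-api--canary", "weight": 10},  # placeholder revision
]
```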

Security: mTLS and Network Security Groups (NSG)

For secure communication within Container Apps, mTLS (Mutual Transport Layer Security) ensures that all connections between services are encrypted. In addition, NSGs (Network Security Groups) provide detailed control options to regulate and secure network access to containers and services.
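Environment-level mTLS is a single switch on the Container Apps environment. The following sketch shows the relevant fragment; the exact property path is an assumption based on the Microsoft.App schema:

```python
# Illustrative sketch: enabling mutual TLS for service-to-service traffic in a
# Container Apps environment (property path is an assumption; verify against
# the current Microsoft.App/managedEnvironments schema).
environment_update = {
    "properties": {
        "peerAuthentication": {"mtls": {"enabled": True}}
    }
}
```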

Performance: GPUs for Demanding Workloads

For compute-intensive tasks such as machine learning or graphical calculations, GPUs provide a significant performance boost. Container Apps support the use of GPUs to achieve optimal performance even in more demanding scenarios.

Management of Sensitive Data: Secrets Management

With Secrets Management, sensitive data such as API keys, passwords, or certificates can be securely stored and managed. Container Apps allow this data to be stored centrally and encrypted, for example in Azure Key Vault, and made accessible only to authorized services.
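A hedged sketch of a Key Vault-backed secret and its use as an environment variable; the vault URL and all names are placeholders, and the app's managed identity needs read access to the secret:

```python
# Illustrative sketch: Key Vault-backed secret in the app's "configuration"
# section, consumed by a container as an environment variable.
configuration = {
    "secrets": [
        {
            "name": "db-connection",
            # placeholder Key Vault secret URI, resolved via managed identity
            "keyVaultUrl": "https://my-vault.vault.azure.net/secrets/db-connection",
            "identity": "system",
        }
    ]
}

container_env = [
    # reference the secret by name instead of embedding the value
    {"name": "DB_CONNECTION_STRING", "secretRef": "db-connection"}
]
```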

Access Management: Integrated Authentication

Integrated authentication enables easy implementation of security protocols and management of user access. This ensures that only authorized users and services can access resources.

Scaling: Automatic Scaling

Thanks to automatic scaling, Container Apps dynamically adapt to requirements. Whether the load increases or decreases, the platform ensures that sufficient resources are always available without manual intervention.

Monitoring and Logging: Logging and Monitoring

Logging and monitoring are essential for keeping track of application performance. Container Apps provide detailed insights into the state of the application, so errors can be identified quickly and performance can be optimized.

Persistent Data Storage: Storage Mounts & Blob Storage

With Storage Mounts, containers can access persistent storage, so data is retained even after containers are restarted. This is particularly important for applications that require long-term storage and consistent data.
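A hedged sketch of an Azure Files mount: the share is first registered as a storage on the environment, then mounted into the app via a volume. Storage account, share, and mount path are placeholders:

```python
# Illustrative sketch: Azure Files storage mount for a container app.
# 1) Storage definition on the environment (child resource
#    Microsoft.App/managedEnvironments/storages, named e.g. "appdata-storage").
environment_storage = {
    "properties": {
        "azureFile": {
            "accountName": "mystorageaccount",      # placeholder
            "accountKey": "<storage-account-key>",  # placeholder
            "shareName": "appdata",                 # placeholder
            "accessMode": "ReadWrite",
        }
    }
}

# 2) Volume and mount in the container app's "template" section;
#    "storageName" must match the environment storage resource name.
template_with_volume = {
    "volumes": [
        {"name": "appdata-volume", "storageType": "AzureFile",
         "storageName": "appdata-storage"}
    ],
    "containers": [
        {
            "name": "app",
            "image": "myregistry.azurecr.io/shop-api:1.4.2",  # placeholder
            "volumeMounts": [
                {"volumeName": "appdata-volume", "mountPath": "/data"}
            ],
        }
    ],
}
```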

Serverless Components: Functions

The integration with Azure Functions makes it possible to bring serverless components into Container Apps. This provides an efficient way to run smaller, event-driven tasks without having to manage a full infrastructure.

Conclusion

The variety of features that Container Apps offer makes them one of the most versatile platforms for modern cloud applications. From secure communication and secrets management to powerful scaling mechanisms, Container Apps open up a world of possibilities for developers to build scalable, secure, and efficient applications.

