Lesson 9: Deployment View
Objective: This page defines the purpose and function of the deployment view.

Deployment View: Purpose, Function, and Infrastructure as Code

The deployment view addresses the software-to-hardware mapping problem: documenting which software components execute on which physical nodes, how nodes communicate via networks, and what protocols enable inter-node communication. While the logical view models conceptual design and the component view organizes software modules, the deployment view specifies physical architecture: servers, workstations, mobile devices, embedded processors, network topology, and runtime environment configuration. Understanding the deployment view's purpose and function requires examining both its role in distributed systems architecture and why Infrastructure as Code largely superseded static deployment diagrams for cloud-native applications. Even so, the deployment view's conceptual framework (thinking deliberately about physical distribution, capacity planning, and operational topology) remains essential for system reliability, performance, and cost optimization. Contemporary practice applies deployment diagrams selectively where physical architecture visualization serves a specific need, such as capacity planning discussions, disaster recovery documentation, or regulatory compliance showing data residency, while using Terraform, Kubernetes manifests, AWS CloudFormation, and architecture decision records as executable infrastructure definitions that stay synchronized with actual deployment rather than manual diagrams that diverge from production reality.

Purpose of the Deployment View

The deployment view's purpose centers on bridging the organizational and technical divide between software development and infrastructure operations. Software teams design applications thinking in classes, components, and services. Operations teams manage physical resources—servers, networks, storage, processors. These perspectives rarely align naturally. Deployment diagrams provide common language enabling both groups to discuss where software executes, what hardware requirements support it, how services communicate across networks, and what failure modes physical distribution introduces. This shared visualization facilitates conversations impossible when software and infrastructure teams work from incompatible mental models.

Distributed systems particularly benefit from explicit deployment modeling. When application components execute across multiple machines, physical topology directly impacts functionality. Network latency between data centers affects user experience. Firewall rules control service communication. Load balancer configuration determines request distribution. Database replication topology influences consistency guarantees. Geographic distribution satisfies data residency regulations. The deployment view makes these physical realities explicit, enabling architecture decisions considering both logical design elegance and operational feasibility. Ignoring deployment concerns during design frequently produces logically sound but operationally impractical architectures.
Figure 1: Deployment View showing nodes (physical or virtual machines), communication paths (networks), and deployed components mapped to execution environments

Function: What Deployment Diagrams Model

Deployment diagrams represent nodes—computational resources executing software. Physical nodes include servers, workstations, mobile devices, embedded processors, network routers, IoT devices. Virtual nodes represent virtualized execution environments—virtual machines, Docker containers, Kubernetes pods, serverless functions, cloud instances. Nodes have attributes: processing capacity (CPU cores, speed), memory (RAM, storage), network interfaces (bandwidth, protocols), operating system, runtime environment (JVM version, Node.js version, Python interpreter). Node stereotypes categorize purposes: «application server», «database server», «web server», «client workstation», «edge device». This node modeling enables capacity planning—do we have sufficient resources for expected load?—and failure analysis—what happens if this node fails?
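The closing questions above (capacity planning and failure analysis) can be answered from node data as well as from diagrams. A minimal Python sketch, assuming hypothetical node names and capacity figures:

```python
from dataclasses import dataclass

# Illustrative sketch: nodes modeled as data so capacity questions
# ("what survives if this node fails?") become simple computations.
# All names and sizes below are hypothetical.

@dataclass
class Node:
    name: str
    cpu_cores: int
    ram_gb: int
    stereotype: str  # e.g. "application server", "database server"

def surviving_capacity(nodes, failed_name):
    """Total CPU cores and RAM remaining if the named node fails."""
    survivors = [n for n in nodes if n.name != failed_name]
    return (sum(n.cpu_cores for n in survivors),
            sum(n.ram_gb for n in survivors))

cluster = [
    Node("app-1", cpu_cores=4, ram_gb=16, stereotype="application server"),
    Node("app-2", cpu_cores=4, ram_gb=16, stereotype="application server"),
    Node("db-1",  cpu_cores=8, ram_gb=32, stereotype="database server"),
]

# Failure analysis: capacity remaining if app-1 fails
cpu, ram = surviving_capacity(cluster, "app-1")
print(cpu, ram)  # app-2 and db-1 remain: 12 cores, 48 GB RAM
```

The same idea scales up in practice: inventory data from a CMDB or cloud API feeds capacity and failure-mode calculations instead of hand-drawn node boxes.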
Communication paths (associations between nodes) model network connectivity. Lines connecting nodes represent physical or logical network links. Stereotypes specify protocols: «HTTP», «HTTPS», «TCP», «JDBC», «gRPC», «WebSocket». Multiplicity indicates connection cardinality: a load balancer connects to multiple application servers; an application server connects to one database server (or several for high availability). Communication paths reveal network dependencies, enabling network architecture decisions: What bandwidth supports anticipated traffic? Where do firewalls enforce security boundaries? Which connections require encryption? How do we achieve redundancy that eliminates single points of failure?
Artifacts represent physical manifestations of software components—executables, libraries, configuration files, data files. Deployment diagrams show artifacts deployed onto nodes: `order-service.jar` deploys to Application Server node, `nginx.conf` deploys to Web Server node, database schema deploys to Database Server node. This artifact-to-node mapping documents what must exist where for system operation. Deployment automation scripts (Ansible playbooks, Chef recipes, Puppet manifests) historically referenced deployment diagrams determining which artifacts deploy to which nodes. Modern infrastructure-as-code approaches embed this mapping in executable definitions rather than separate diagrams.
// Traditional Deployment Diagram: Three-Tier Architecture

«client workstation»
┌──────────────────┐
│  User Browser    │
│  (Chrome/Edge)   │
└────────┬─────────┘
         │ HTTPS
         │ Port 443
         ▼
«web server»
┌──────────────────┐
│   Web Server     │
│   - Nginx        │
│   - static files │
│   - SSL cert     │
└────────┬─────────┘
         │ HTTP
         │ Port 8080
         ▼
«application server»
┌──────────────────┐
│  App Server      │
│  - JVM 17        │
│  - Tomcat 10     │
│  - app.war       │
└────────┬─────────┘
         │ JDBC
         │ Port 5432
         ▼
«database server»
┌──────────────────┐
│  DB Server       │
│  - PostgreSQL 15 │
│  - 16GB RAM      │
│  - SSD storage   │
└──────────────────┘

// Deployment view shows:
// - Physical/logical node organization
// - Protocol stack (HTTPS → HTTP → JDBC)
// - Port assignments
// - Technology versions
// - Capacity specs (RAM, storage)


Deployment View Evolution: Cloud and Containers

Cloud computing fundamentally transformed the deployment view's application. Traditional deployment diagrams modeled owned physical servers in data centers; purchasing decisions, capacity planning, and hardware procurement all required years-long planning horizons, making deployment diagrams valuable planning artifacts. Cloud infrastructure elasticity changed this calculus. Virtual machines provision in minutes, auto-scaling groups adjust capacity automatically, and serverless functions execute without explicit infrastructure management. The static deployment planning that deployment diagrams supported became less relevant once infrastructure became programmable and dynamic. However, the conceptual framework (thinking about geographic distribution, availability zones, network topology, data locality) remains essential even when implementation details shift from static diagrams to executable infrastructure code.
Container orchestration platforms (Kubernetes, Docker Swarm, AWS ECS) provide deployment abstraction treating containers as logical deployment units independent of specific physical nodes. Kubernetes manifests declare desired state—run 3 replicas of order-service container—while orchestrator handles node assignment, load balancing, health checking, and automatic recovery. This abstraction decouples logical deployment (how many instances, what configuration) from physical deployment (which servers, which data centers). Traditional deployment diagrams showing specific software-to-server mappings become obsolete when orchestrators continuously reassign containers across node pools. Modern deployment documentation focuses on logical topology (services, replicas, communication patterns) captured in declarative manifests rather than physical mapping diagrams.
// Modern Deployment: Kubernetes Manifest (Infrastructure as Code)

apiVersion: apps/v1
kind: Deployment
metadata:
  name: order-service
spec:
  replicas: 3                    # Desired instance count
  selector:
    matchLabels:
      app: order-service
  template:
    metadata:
      labels:
        app: order-service
    spec:
      containers:
      - name: order-service
        image: company/order-service:2.3.1
        ports:
        - containerPort: 8080
        resources:
          requests:             # Capacity requirements
            memory: "256Mi"
            cpu: "500m"
          limits:
            memory: "512Mi"
            cpu: "1000m"
        env:                    # Configuration
        - name: DATABASE_URL
          valueFrom:
            secretKeyRef:
              name: db-secret
              key: url

---
apiVersion: v1
kind: Service
metadata:
  name: order-service
spec:
  type: LoadBalancer         # Network exposure
  ports:
  - port: 80
    targetPort: 8080
  selector:
    app: order-service

// This executable manifest:
// - Specifies deployment topology (3 replicas)
// - Declares resource requirements (memory, CPU)
// - Defines network configuration (ports, load balancer)
// - Contains actual deployment specification (not diagram)
// - Stays synchronized with reality (source of truth)


What Modern Practice Replaced

Infrastructure as Code (IaC) largely replaced static deployment diagrams for defining and documenting infrastructure. Terraform defines infrastructure declaratively across cloud providers (AWS, Azure, GCP), version-controls specifications in Git, applies changes through automated pipelines, and maintains state matching actual deployed resources. AWS CloudFormation, Azure Resource Manager templates, and Google Cloud Deployment Manager provide cloud-specific IaC. Pulumi enables infrastructure definition in general-purpose languages (TypeScript, Python, Go). These tools not only document deployment topology but actually provision infrastructure, ensuring a documentation-reality synchronization impossible with manual UML diagrams. When infrastructure changes, IaC definitions update with it; manually maintained deployment diagrams frequently diverged from production reality.
Infrastructure as Code captures deployment view concerns (node types, communication paths, capacity specifications) in executable format. Terraform defines EC2 instances (nodes), security groups (communication rules), load balancers (distribution mechanisms), and RDS databases (data persistence). These definitions serve a dual purpose: deployment automation executes them to provision actual infrastructure, and readers learn system topology from the authoritative source. The shift from descriptive diagrams toward prescriptive code represents the deployment view's natural evolution, moving from documentation artifact toward operational reality.

// Terraform: Deployment as Executable Code

# Application Server Nodes (Auto-Scaling Group)
resource "aws_autoscaling_group" "app_servers" {
  min_size         = 2
  max_size         = 10
  desired_capacity = 3
  
  launch_template {
    id      = aws_launch_template.app_server.id
    version = "$Latest"
  }
  
  vpc_zone_identifier = [
    aws_subnet.private_a.id,
    aws_subnet.private_b.id
  ]
  
  tag {
    key                 = "Name"
    value               = "order-service"
    propagate_at_launch = true
  }
}

# Database Node
resource "aws_db_instance" "postgres" {
  identifier        = "orders-db"
  engine            = "postgres"
  engine_version    = "15.2"
  instance_class    = "db.t3.large"  # Capacity spec
  allocated_storage = 100
  
  vpc_security_group_ids = [aws_security_group.database.id]
  db_subnet_group_name   = aws_db_subnet_group.main.name
  
  backup_retention_period = 7
  multi_az               = true      # High availability
}

# Communication Path (Security Group)
resource "aws_security_group" "app_server" {
  name = "app-server-sg"
  
  ingress {
    from_port   = 8080
    to_port     = 8080
    protocol    = "tcp"
    cidr_blocks = ["10.0.0.0/16"]  # Only internal traffic
  }
  
  egress {
    from_port   = 5432               # Database communication
    to_port     = 5432
    protocol    = "tcp"
    cidr_blocks = ["10.0.0.0/16"]
  }
}

// IaC advantages over deployment diagrams:
// - Executable (actually provisions infrastructure)
// - Version controlled (track changes over time)
// - Testable (validate before applying)
// - Always accurate (source of truth)

The C4 model's Deployment diagram (a supplementary diagram type alongside the four core levels of Context, Container, Component, and Code) provides a modern alternative to UML deployment diagrams when visual topology documentation serves specific purposes. C4 Deployment diagrams show deployment nodes, containerized services, and infrastructure components using simple notation accessible to non-UML experts. Unlike traditional UML's focus on complete deployment specification, C4 Deployment diagrams emphasize high-level topology communication (multi-region distribution, availability zone placement, CDN edge locations), leaving detailed configuration to IaC definitions. This division of labor, high-level visual communication via C4 and detailed executable specification via IaC, reflects modern practice separating human-readable overview from machine-executable precision.
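A C4 Deployment sketch is often written with the community C4-PlantUML library (plantuml-stdlib), whose `C4_Deployment.puml` include supplies `Deployment_Node`, `Container`, `ContainerDb`, and `Rel` macros. One plausible sketch, with entirely illustrative topology and names:

```
@startuml
!include https://raw.githubusercontent.com/plantuml-stdlib/C4-PlantUML/master/C4_Deployment.puml

Deployment_Node(aws, "AWS us-east-1", "Cloud Region") {
  Deployment_Node(az_a, "Availability Zone a") {
    Container(app1, "Order Service", "Spring Boot", "Handles orders")
  }
  Deployment_Node(az_b, "Availability Zone b") {
    Container(app2, "Order Service", "Spring Boot", "Replica")
  }
  Deployment_Node(rds, "Amazon RDS", "PostgreSQL 15") {
    ContainerDb(db, "Orders DB", "PostgreSQL")
  }
}

Rel(app1, db, "Reads/writes", "JDBC")
Rel(app2, db, "Reads/writes", "JDBC")
@enduml
```

Note how the diagram stops at zone placement and service replication; instance sizes, security groups, and scaling policies stay in the IaC definitions.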

Where Deployment View Remains Valuable

Capacity planning discussions benefit from deployment view visualization. When planning system scaling—can we handle 10x traffic? what additional infrastructure supports Black Friday load?—deployment diagrams facilitate stakeholder conversations. Infrastructure costs, performance targets, and architectural tradeoffs become tangible when visualized. Whiteboard sketches showing current deployment, proposed scaled deployment, and cost/benefit analysis enable executive decision-making impossible from Terraform code alone. These diagrams serve as communication artifacts during planning, with IaC implementing decisions afterward.
Disaster recovery and business continuity planning rely on deployment topology understanding. What happens if AWS us-east-1 fails? How do we failover to backup region? What data replicates cross-region? Which services require geographic distribution for regulatory compliance? Deployment diagrams showing primary site, backup site, replication mechanisms, and failover procedures document disaster scenarios. Compliance auditors reviewing disaster recovery capabilities expect topology diagrams demonstrating redundancy strategies, not just Terraform configurations whose implications require deep technical expertise to extract.
Regulatory compliance and data residency requirements mandate deployment documentation showing data location and movement. GDPR requires understanding where European customer data resides and flows. HIPAA demands knowing which nodes process protected health information. Financial regulations specify data sovereignty requirements. Deployment diagrams annotated with data classifications, processing locations, and cross-border flows satisfy compliance documentation requirements. While IaC defines actual infrastructure, compliance stakeholders need visual representations communicating regulatory compliance that deployment diagrams provide more effectively than code.
Legacy system documentation reverse-engineers deployment topology from production environments before modernization. Network scanning tools map communication patterns, configuration management databases inventory servers and applications, application performance monitoring reveals runtime dependencies. These discoveries consolidate into deployment diagrams documenting current state—the baseline for modernization planning. Migration strategies leverage deployment understanding: which components migrate to cloud first? what dependencies prevent independent migration? how do we achieve incremental cloud adoption minimizing disruption? Deployment diagrams become artifacts showing transformation from current legacy architecture toward target cloud-native deployment.

Modern Deployment Documentation Tools

Automated diagram generation from infrastructure code provides a best-of-both-worlds solution: executable IaC as the source of truth, generated diagrams for visualization. Terraform's `terraform graph` command emits DOT graphs showing resource dependencies. Rover creates interactive HTML documentation from Terraform state. Inframap produces deployment diagrams from Terraform and CloudFormation definitions. Cloudcraft visualizes AWS architecture with live syncing from actual accounts. These tools keep deployment diagrams synchronized with infrastructure reality because they are generated from authoritative sources rather than maintained manually as separate documentation.
When manual deployment diagrams serve specific communication needs, modern tools provide lightweight alternatives to heavyweight UML tools. draw.io (diagrams.net) offers AWS/Azure/GCP shape libraries enabling architecture diagrams without UML formalism. Lucidchart provides collaborative cloud architecture diagramming with real-time editing. Cloudcraft creates AWS architecture diagrams with cost estimation integration. PlantUML supports deployment diagrams through text syntax enabling version control. These tools democratize deployment documentation—teams create architecture diagrams without expensive enterprise tools or deep UML knowledge.
// PlantUML: Deployment Diagram as Code

@startuml
node "Load Balancer" {
  artifact nginx
}

node "App Server 1" {
  artifact "order-service.jar" as app1
  database "Redis Cache" as cache1
}

node "App Server 2" {
  artifact "order-service.jar" as app2
  database "Redis Cache" as cache2
}

node "Database Server" {
  database PostgreSQL {
    artifact "orders schema"
  }
}

' Aliases disambiguate the identically named artifacts in each server
nginx --> app1 : HTTP/8080
nginx --> app2 : HTTP/8080
app1 --> PostgreSQL : JDBC/5432
app2 --> PostgreSQL : JDBC/5432
app1 --> cache1 : Redis/6379
app2 --> cache2 : Redis/6379
@enduml

// Advantages:
// - Text format (version controllable)
// - Generate images in CI/CD
// - No vendor lock-in
// - UML-compliant deployment diagram

Cloud-Native Deployment Patterns

Multi-region deployment addresses availability and latency through geographic distribution. Primary region serves majority traffic, secondary regions provide disaster recovery or serve geographically distant users. Deployment documentation shows region topology, inter-region data replication, DNS-based routing, and failover procedures. Global load balancers (AWS Route 53, Azure Traffic Manager, Cloudflare) distribute requests geographically. Cross-region database replication maintains data consistency. These patterns require explicit deployment modeling ensuring all stakeholders understand geographic architecture implications—latency, consistency tradeoffs, failover complexity, regulatory data residency.
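The multi-region pattern can be sketched in the same PlantUML deployment notation used earlier in this lesson; the region names, routing policies, and replication direction below are illustrative:

```
@startuml
node "Route 53 (DNS routing)" as dns

node "us-east-1 (primary)" {
  node "App Servers" as appE
  database "Primary DB" as dbE
}

node "eu-west-1 (secondary)" {
  node "App Servers" as appW
  database "Replica DB" as dbW
}

dns --> appE : latency-based routing
dns --> appW : failover routing
dbE --> dbW : async cross-region replication
@enduml
```

Even this small sketch surfaces the key tradeoffs: asynchronous replication implies possible data loss on failover, and DNS-based routing determines how quickly traffic shifts to the secondary region.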
Availability zones within single region provide fault isolation without geographic distribution. Cloud providers offer multiple isolated data centers (availability zones) within regions. Best practice distributes application across zones—load balancer in each zone, application servers across zones, database with multi-AZ replication. This zonal distribution survives single data center failures while maintaining low latency. Deployment documentation specifies zonal strategy communicating resilience approach to operations teams and auditors.
Edge computing and CDN architectures push computation toward users, reducing latency. Content delivery networks (CloudFront, Cloudflare, Fastly) cache static assets globally. Edge functions (Lambda@Edge, Cloudflare Workers) execute code at edge locations. Deployment diagrams showing central origin servers, global CDN distribution, and edge compute capabilities communicate the performance architecture in a form stakeholders can understand without deep distributed systems expertise.

Integration with Architecture Decision Records

Architecture Decision Records document deployment architecture decisions with context, decision, and consequences. ADR capturing "Deploy to three availability zones for 99.99% availability" includes deployment diagram showing zonal distribution, explains availability calculation, documents cost implications, and references IaC implementing decision. This combination—narrative ADR explaining reasoning, visual diagram communicating topology, executable IaC implementing architecture—provides comprehensive deployment documentation serving different stakeholder needs. Executives read ADRs, architects review diagrams, operators work from IaC.
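A hypothetical ADR in the widely used Michael Nygard format (Status, Context, Decision, Consequences) might read as follows; all specifics (numbers, names, availability target) are illustrative:

```markdown
<!-- Illustrative example; all specifics are hypothetical -->
# ADR-012: Deploy order-service across three availability zones

## Status
Accepted

## Context
A single-AZ deployment exposes the service to data-center-level
outages. The availability target for the order workflow is 99.99%.

## Decision
Distribute application servers across three availability zones behind
a zonal load balancer; enable multi-AZ replication for the orders
database.

## Consequences
- Survives the loss of any single availability zone
- Higher infrastructure cost (cross-AZ data transfer, spare capacity)
- Implemented in the production Terraform configuration (auto-scaling
  group and RDS multi-AZ settings)
```

The ADR carries the reasoning, a linked diagram carries the topology, and the referenced IaC carries the executable detail, matching the stakeholder split described above.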

Conclusion

The deployment view's purpose (mapping software to hardware, documenting physical topology, planning capacity and distribution) remains critically relevant despite dramatic evolution in how deployment gets specified and managed. Its function (modeling nodes, communication paths, artifacts, and runtime environments) addresses genuine architectural needs transcending specific technologies. Contemporary practice shifted from static UML deployment diagrams toward Infrastructure as Code providing executable deployment specifications, automated diagram generation ensuring documentation accuracy, and architecture decision records explaining deployment rationale. Deployment diagrams serve selectively where visual topology communication provides specific value: capacity planning discussions, disaster recovery documentation, regulatory compliance demonstration, and stakeholder communication about geographic distribution. Understanding deployment view concepts (deliberate thinking about physical architecture, capacity requirements, network topology, operational feasibility) enables informed infrastructure decisions regardless of whether documentation takes the form of UML diagrams, Terraform code, generated visualizations, or architecture decision records. The fundamental insight, that physical deployment profoundly impacts system reliability, performance, cost, and compliance, persists across all approaches, from traditional data center deployment diagrams through modern cloud-native infrastructure as code. The deployment view thus demonstrates enduring value even as specific techniques evolved from manual static documentation toward automated executable infrastructure, synchronized with production reality through continuous deployment pipelines and declarative infrastructure management.
In the next lesson, the purpose and function of packages will be discussed.