Azure Web App Logging
An Azure Web App can log in two broad ways: locally on the app host for quick troubleshooting, or externally through Azure Monitor diagnostic settings for longer-lived retention and downstream analytics. The best choice depends on what you are optimizing for: speed and simplicity, or durability, integration, and centralized operations.
Logging options
Local logging writes logs to the App Service file system, where you can download them or access them over FTPS. This is the lightest-weight option for development and short investigations, and Azure App Service supports an FTPS-only mode so you can avoid plain FTP. If you use file-system logging, a common optimization is to keep the retention period short and the size quota small (around the 35 MB default) so logs do not accumulate unnecessarily or compete with the app for storage on the app resource.
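A minimal sketch of this setup with the Azure CLI, assuming placeholder names `my-app` and `my-rg` for the web app and resource group:

```shell
# Turn on application and web-server logging to the local file system.
az webapp log config \
  --name my-app --resource-group my-rg \
  --application-logging filesystem \
  --web-server-logging filesystem \
  --level information

# Disallow plain FTP; only FTPS connections are accepted for log access.
az webapp config set \
  --name my-app --resource-group my-rg \
  --ftps-state FtpsOnly
```

The retention period and size quota for file-system logs can then be tightened in the portal (or via the site's `config/logs` resource) to keep the local footprint small.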
Diagnostic settings send logs to a storage account, an event hub, or a Log Analytics workspace. This is the better fit when you need centralized retention, querying, or forwarding to operational tools such as Splunk through Event Hubs or another ingestion pipeline, but it can generate meaningful storage and ingestion volume depending on how verbose the selected log categories are.
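As a sketch, a diagnostic setting routing two common App Service categories to a Log Analytics workspace might look like this; all resource IDs are placeholders:

```shell
# Route selected log categories out of the app to a Log Analytics workspace.
az monitor diagnostic-settings create \
  --name webapp-diag \
  --resource "/subscriptions/<sub-id>/resourceGroups/my-rg/providers/Microsoft.Web/sites/my-app" \
  --workspace "/subscriptions/<sub-id>/resourceGroups/my-rg/providers/Microsoft.OperationalInsights/workspaces/my-workspace" \
  --logs '[{"category": "AppServiceHTTPLogs", "enabled": true},
           {"category": "AppServiceAppLogs",  "enabled": true}]'
```

Enabling only the categories you need, rather than everything available, is the main lever for keeping ingestion volume down.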
Practical trade-offs
Local file-system logging is usually faster to access and easier for developers because the logs sit close to the app and can be pulled immediately. The downside is that it is not designed for long-term retention or enterprise-scale observability, and the footprint should be kept intentionally small so it does not compete with the app for space or create unnecessary overhead.
Diagnostic settings are better for compliance, analytics, and cross-team access because they move data out of the app into durable Azure services. The trade-off is cost and volume: app logs, HTTP logs, and platform logs can grow quickly, and sending all categories to Storage or Event Hub increases both ingestion and downstream processing costs, especially if a SIEM such as Splunk also charges for indexed volume.
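Before widening the pipeline, it can help to measure which tables actually drive ingestion. One hedged approach uses the workspace `Usage` table (the query command may require the `log-analytics` CLI extension depending on your `az` version; the workspace GUID is a placeholder):

```shell
# Rank data types by ingested volume over the last 30 days.
# Usage.Quantity is reported in MB, so divide by 1024 for GB.
az monitor log-analytics query \
  --workspace "<workspace-guid>" \
  --analytics-query "Usage
    | where TimeGenerated > ago(30d)
    | summarize TotalGB = sum(Quantity) / 1024 by DataType
    | order by TotalGB desc"
```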
Blob storage option
Sending logs to Azure Blob Storage is often the middle ground between local-only logs and a full streaming pipeline. Compared with keeping logs on the app host, blob storage gives you better retention, easier central access, and stronger separation of duties; compared with Event Hub, it is simpler and usually cheaper for archive-style retention, but less suitable for real-time operational forwarding.
From a security perspective, blob storage is preferable when you want to restrict access with managed identities, RBAC, and private networking rather than exposing the app host file system or broadly granting FTPS access. In general, the more external the log destination, the better your control plane story becomes, but the more important it is to secure identities, network paths, and storage permissions.
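A sketch of that least-privilege posture on the storage side, assuming a placeholder account `mylogstore` and an ops security group object ID:

```shell
# Deny public network access by default; selected networks or private
# endpoints can then be allowed explicitly.
az storage account update \
  --name mylogstore --resource-group my-rg \
  --default-action Deny

# Grant the ops group read-only data-plane access to the log blobs,
# instead of handing out account keys.
az role assignment create \
  --role "Storage Blob Data Reader" \
  --assignee "<ops-group-object-id>" \
  --scope "/subscriptions/<sub-id>/resourceGroups/my-rg/providers/Microsoft.Storage/storageAccounts/mylogstore"
```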
Cost impact
When logging is turned on for all log types, the monthly cost increases in two places: the App Service side and the destination side. On the app side, local logging consumes file-system quota and adds operational overhead, while external logging adds Azure Monitor, Storage, Event Hubs, and downstream SIEM costs; in practice, the biggest cost driver is usually log volume rather than the mere act of enabling logging.
A full “everything on” configuration can become expensive if verbose application logs, HTTP logs, and platform diagnostics are all emitted continuously. The right way to manage cost is to limit categories to what is actually needed, reduce verbosity in production, and set retention policies that match the business need instead of defaulting to indefinite collection.
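A back-of-envelope calculation makes the volume-driven nature of the cost concrete. The numbers below are purely illustrative: 5 GB/day of ingested logs at an assumed $2.30/GB pay-as-you-go rate (check current Azure Monitor pricing for your region):

```shell
# Hypothetical monthly ingestion cost: GB/day * 30 days * price per GB.
GB_PER_DAY=5
PRICE_PER_GB=2.30
awk -v g="$GB_PER_DAY" -v p="$PRICE_PER_GB" \
    'BEGIN { printf "Estimated monthly cost: $%.2f\n", g * 30 * p }'
```

Halving verbosity or dropping one noisy category scales the bill linearly, which is why category scoping matters more than the on/off switch.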
Premium tier considerations
If the App Service plan is upgraded to the lowest Premium tier, turning on logging through diagnostic settings is generally a better production pattern than relying only on local file logging. Premium gives more headroom for performance-sensitive workloads, but logging still adds CPU, I/O, and network overhead, especially if the destination is remote and every write must be exported out of the app path.
The main security concern is not the Premium tier itself but the expanded data flow: logs may contain request paths, headers, identifiers, or exception details, so access to the destination must be tightly limited. The main performance concern is bursty log generation, which can increase latency if the app spends too much time serializing and exporting log data rather than serving requests.
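The plan upgrade itself is a one-liner; `P1V2` is shown here as an example of a low-end Premium SKU, and names and availability vary by region:

```shell
# Move the existing plan to a Premium SKU.
az appservice plan update \
  --name my-plan --resource-group my-rg \
  --sku P1V2
```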
Dev and ops access
A good pattern is to serve both developer and operational needs by splitting access modes. Developers can use local logs or near-real-time streaming for low-latency troubleshooting and faster iteration, while operations teams consume the same data centrally with read-only access, least privilege, and controlled retention in Storage, Event Hubs, or a SIEM pipeline.
This reduces friction: developers get interactive access without waiting on a downstream pipeline, while operations gets governed, durable visibility with auditability and restricted permissions. In practice, that usually means keeping local logs small and temporary, and pushing only the logs needed for production observability into centralized destinations.
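The developer side of that split can be as simple as the CLI's streaming and download commands, with the same placeholder names as above:

```shell
# Stream logs live to the terminal while reproducing an issue.
az webapp log tail --name my-app --resource-group my-rg

# Pull the current file-system logs as a zip for offline inspection.
az webapp log download --name my-app --resource-group my-rg \
  --log-file webapp-logs.zip
```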
Recommendations
Azure’s general direction for App Service logging is to use local logs for short-lived troubleshooting, diagnostic settings for durable monitoring, and secure transport and access controls for anything beyond the app host. FTP access should be restricted to FTPS-only or disabled entirely when not needed, detailed error pages should not be exposed to clients in production, and logging categories should be scoped narrowly to reduce cost and noise.
A popular policy posture is:
• Keep local file-system logs small, temporary, and developer-focused.
• Use diagnostic settings for production retention and centralized monitoring.
• Route only necessary categories to Storage or Event Hub.
• Restrict destination access with least privilege and private connectivity where possible.
• Treat log content as sensitive operational data and control retention accordingly.