HTTP 504 Gateway Timeout

HTTP 504 Gateway Timeout indicates a gateway or proxy did not receive a timely response from an upstream server. The upstream didn't respond at all within the timeout period — contrast with 502 where the upstream responded but with invalid data. Common causes: upstream server is too slow (increase timeout or optimize), network partition between gateway and upstream, DNS resolution failure for the upstream host, or upstream server's connection pool is exhausted.


Try it (live endpoint)

Response includes the status code, standard headers (including Content-Type), and a small diagnostic JSON body describing the request and returned status.

Simulator URL (copy in the app after load — not a normal link):

https://httpstatus.com/api/status/504

Example request:

curl -i "https://httpstatus.com/api/status/504"

Meaning

The server was acting as a gateway or proxy and did not receive a timely response from the upstream server.

What it guarantees
  • The server (or an upstream) failed to fulfill a valid request.
What it does NOT guarantee
  • The failure is permanent.
  • Immediate retries are always safe or effective.

When to use this status

  • Gateway/proxy failures reaching an upstream service.
  • CDN reverse proxy errors when origin health is degraded.
  • Service mesh/proxy issues during upstream deploys or incidents.

When NOT to use this status (common misuses)

  • Returning 5xx for client validation errors: clients retry unnecessarily; traffic spikes and costs increase.
  • Returning 500 without stable error identifiers or correlation IDs: SRE triage slows down; alerting becomes noisy and hard to act on.
  • Returning 503/504 without retry guidance: clients hammer the service or give up too early; cascading failures worsen.

Critical headers that matter

Content-Type
Defines the error body format (JSON, text, or problem+json). If missing, clients can’t parse structured errors and observability loses fidelity.
Cache-Control
Prevents caching of transient errors unless caching is intended. If missing, CDNs may cache failures, prolonging user-visible outages.
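As an illustrative sketch, here is a 504 response assembled with those headers set defensively (the specific values are reasonable defaults, not mandates, and the request-ID plumbing is assumed):

```python
import json

def build_504_response(request_id: str) -> tuple[int, dict, bytes]:
    """Assemble status, headers, and body for a 504: a parseable error
    format, no caching of a transient failure, and a hint for
    well-behaved retry clients."""
    headers = {
        # RFC 9457 problem details so clients can parse the error
        "Content-Type": "application/problem+json",
        # Transient failure: make sure CDNs and browsers do not cache it
        "Cache-Control": "no-store",
        # Optional hint for clients that honor it: retry after 5 seconds
        "Retry-After": "5",
    }
    body = json.dumps({
        "title": "Gateway Timeout",
        "status": 504,
        "request_id": request_id,  # correlation ID, assumed from middleware
    }).encode()
    return 504, headers, body
```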

Tool interpretation

Browsers
Displays an error state; devtools exposes status and headers. Cache headers can accidentally cache error documents.
API clients
Classifies as failure; retry policy depends on idempotency and code class. Structured errors improve handling.
Crawlers / SEO tools
Persistent failures reduce crawl rate; soft-404 patterns cause indexing instability.
Uptime monitors
Typically alerts based on rate/threshold. Consistent classification reduces false positives.
CDNs / reverse proxies
May cache errors if misconfigured; respects Cache-Control and can serve stale on origin failure.

Inspector preview (read-only)

On this code, Inspector focuses on semantics, headers, and correctness warnings that commonly affect clients and caches.

Signals it will highlight
  • Status semantics vs method and body expectations
  • Header sanity (Content-Type, Cache-Control, Vary) and evidence completeness
  • Error cacheability and retry guidance signals
Correctness warnings
No common correctness warnings are specific to this code.

Guided Lab outcome

  • Reproduce HTTP 504 Gateway Timeout using a controlled endpoint and capture the full exchange.
  • Practice distinguishing status semantics from transport issues (redirects, caching, proxies).
  • Learn to attribute failures to origin vs upstream and apply safe retry/backoff decisions.

Technical deep dive

HTTP 504 Gateway Timeout represents a specific server-side condition that requires different handling than other 5xx errors: the gateway never received a response from the upstream within its timeout, as opposed to 500 (the application itself failed), 502 (the upstream answered, but with an invalid response), or 503 (the service is unavailable or shedding load). Pinpointing the precise cause helps operations teams diagnose and resolve issues faster, so monitoring systems should distinguish 504 from other 5xx codes for accurate alerting and diagnosis.

Real-world examples

Production 504 Gateway Timeout incident
A production system returns 504 Gateway Timeout. The operations team triages on the specific code: a 504 points at an upstream timeout, so they check gateway timeout settings, upstream latency, and recent deploys.
Load balancer returning 504
A load balancer returns 504 to clients. For 504 specifically, this typically indicates the backend did not respond within the configured timeout.
Client retry logic for 504
A client receives 504 Gateway Timeout. Retry with exponential backoff — the error may be transient.
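The retry advice above can be sketched as follows (the attempt count and backoff constants are illustrative choices, not a standard):

```python
import random
import time

def retry_with_backoff(call, max_attempts=4, base_delay=0.5, max_delay=8.0,
                       sleep=time.sleep):
    """Retry `call` while it reports a 504, using capped exponential
    backoff with full jitter to avoid a thundering herd. Only use this
    for idempotent requests such as GET."""
    for attempt in range(max_attempts):
        status, body = call()
        if status != 504:
            return status, body
        if attempt < max_attempts - 1:
            # Cap the delay, then pick a random point in [0, delay)
            delay = min(max_delay, base_delay * (2 ** attempt))
            sleep(random.uniform(0, delay))  # full jitter
    return status, body
```

`call` is any zero-argument function returning `(status, body)`; injecting `sleep` keeps the backoff testable.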

Framework behavior

Express.js (Node)
Express does not emit 504 on its own; a reverse proxy in front (e.g. nginx's proxy_read_timeout) usually generates it. When Express itself acts as a gateway to another service, map upstream timeout errors to 504 in error middleware rather than letting them fall through to the default 500.
Django / DRF (Python)
Django returns 500 for unhandled exceptions (customizable via handler500 = 'myapp.views.server_error'); a 504 normally originates outside Django, at the WSGI server or reverse proxy, when a worker exceeds its timeout.
Spring Boot (Java)
@ControllerAdvice can map upstream timeout exceptions (e.g. SocketTimeoutException from an outbound call) to 504; Spring Cloud Gateway returns 504 itself when a route's response timeout elapses.
FastAPI (Python)
Use custom exception handlers to turn upstream timeout exceptions into 504 responses with a structured error body.

Debugging guide

  1. Check the gateway/proxy logs for upstream communication details
  2. Verify the error is reproducible — transient 504 errors may indicate intermittent issues like memory pressure or connection pool exhaustion
  3. Check recent deployments — a new deploy is the most common cause of sudden 504 spikes
  4. Increase upstream timeout if the backend is legitimately slow
  5. Test with curl -v to see the full response including headers — some 504 responses include diagnostic headers

Code snippets

Node.js
// Map upstream timeout errors to 504; everything else stays 500
process.on('unhandledRejection', (reason) => {
  console.error('Unhandled rejection:', reason);
});

app.use((err, req, res, next) => {
  console.error(`${req.method} ${req.url}:`, err.stack);
  // Timeout codes raised by outbound HTTP calls to the upstream service
  const isTimeout = err.code === 'ETIMEDOUT' || err.code === 'ESOCKETTIMEDOUT';
  const status = isTimeout ? 504 : (err.status || 500);
  res.status(status).json({
    error: isTimeout
      ? 'Gateway Timeout'
      : (process.env.NODE_ENV === 'production'
          ? 'Internal Server Error'
          : err.message),
    requestId: req.id
  });
});
Python
import asyncio
import logging

from fastapi import FastAPI, Request
from fastapi.responses import JSONResponse

logger = logging.getLogger(__name__)
app = FastAPI()

# Return 504 only when an upstream call times out, not for every exception
@app.exception_handler(asyncio.TimeoutError)
async def gateway_timeout_handler(request: Request, exc: asyncio.TimeoutError):
    logger.error('%s %s: upstream timeout', request.method, request.url,
                 exc_info=True)
    return JSONResponse(
        status_code=504,
        # request.state.id is assumed to be set by correlation middleware
        content={'error': 'Gateway Timeout',
                 'request_id': getattr(request.state, 'id', None)},
    )
Java (Spring)
@ControllerAdvice
public class GlobalErrorHandler {
    private static final Logger log = LoggerFactory.getLogger(
        GlobalErrorHandler.class);

    // Map upstream timeouts to 504; other exceptions keep the default 500 path
    @ExceptionHandler({SocketTimeoutException.class,
                       ResourceAccessException.class})
    public ResponseEntity<ErrorResponse> handleUpstreamTimeout(
            Exception ex, HttpServletRequest req) {
        log.error("{} {}: upstream timeout: {}", req.getMethod(),
                  req.getRequestURI(), ex.getMessage(), ex);
        return ResponseEntity.status(504)
            .body(new ErrorResponse("Gateway Timeout",
                "The upstream service did not respond in time"));
    }
}
Go
// A gateway handler with a deadline on the upstream call: a timeout becomes
// 504, other upstream failures become 502, and panics keep the default 500
// path. 504 specifically means the upstream did not answer in time.
func proxyHandler(upstream string) http.HandlerFunc {
	return func(w http.ResponseWriter, r *http.Request) {
		ctx, cancel := context.WithTimeout(r.Context(), 5*time.Second)
		defer cancel()
		req, _ := http.NewRequestWithContext(ctx, r.Method, upstream, r.Body)
		resp, err := http.DefaultClient.Do(req)
		if err != nil {
			if errors.Is(err, context.DeadlineExceeded) {
				log.Printf("%s %s: upstream timeout: %v", r.Method, r.URL, err)
				w.Header().Set("Content-Type", "application/json")
				w.WriteHeader(http.StatusGatewayTimeout) // 504
				json.NewEncoder(w).Encode(map[string]string{
					"error": "Gateway Timeout",
				})
				return
			}
			http.Error(w, "Bad Gateway", http.StatusBadGateway) // 502
			return
		}
		defer resp.Body.Close()
		w.WriteHeader(resp.StatusCode)
		io.Copy(w, resp.Body)
	}
}

FAQ

What causes 504 Gateway Timeout errors?
Upstream server too slow to respond, network partitions, DNS failures, connection pool exhaustion.
Should clients retry on 504 Gateway Timeout?
Yes, with exponential backoff. Transient failures often resolve on retry. Use jitter to avoid thundering herd. Limit retries (3-5 attempts max).
How should 504 Gateway Timeout be monitored?
Track 504 error rate as a percentage of total requests. Alert on sustained rates above baseline (e.g., >1% for 5 minutes). Correlate with upstream service health metrics.
What information should a 504 response include?
In production: a generic error message, a request ID for correlation, and optionally a Retry-After header. Never include stack traces, internal paths, database errors, or configuration details. In development: full error details are acceptable. Always log the full error server-side with the request ID.

Client expectation contract

Client can assume
  • The server or an upstream failed to fulfill the request.
Client must NOT assume
  • Immediate retries are always safe or effective.
Retry behavior
Retry idempotent requests with backoff; avoid retries for non-idempotent writes unless you have idempotency keys.
Monitoring classification
Server error
Alert on rate and duration; ensure CDNs do not cache transient failures.

Related status codes

503 Service Unavailable
The server is currently unavailable (overloaded, down for maintenance, or rate limiting). Should include Retry-After header when possible.
505 HTTP Version Not Supported
The server does not support the HTTP protocol version used in the request.
502 Bad Gateway
The server was acting as a gateway or proxy and received an invalid response from the upstream server.
