HTTP 502 Bad Gateway

HTTP 502 Bad Gateway indicates that a server acting as a gateway or proxy received an invalid response from an upstream server. The problem is not with the client's request, and usually not with the gateway itself: the upstream returned something the gateway couldn't use. Common causes: the upstream crashed mid-response, an SSL/TLS mismatch between gateway and upstream, malformed HTTP from the upstream, a connection closed unexpectedly, or a gateway misconfiguration that points at the wrong upstream address or port.
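
To make the gateway mechanics concrete, here is a minimal reverse-proxy sketch in Python (standard library only; the upstream address localhost:9000 and listen port 8080 are made-up values for illustration). Well-formed upstream responses, including valid upstream errors, are passed through; connection failures and malformed HTTP become a 502:

from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib import error, request
import http.client
import json

UPSTREAM = "http://localhost:9000"  # hypothetical upstream address

class Proxy(BaseHTTPRequestHandler):
    def do_GET(self):
        try:
            with request.urlopen(UPSTREAM + self.path, timeout=5) as resp:
                status = resp.status
                ctype = resp.headers.get("Content-Type", "application/octet-stream")
                body = resp.read()
        except error.HTTPError as exc:
            # A well-formed upstream error (e.g. 500) is proxied as-is,
            # not rewritten to 502
            self.send_response(exc.code)
            self.end_headers()
            self.wfile.write(exc.read())
            return
        except (error.URLError, ConnectionError, http.client.HTTPException) as exc:
            # Upstream refused/reset the connection or sent malformed
            # HTTP: this is the 502 case
            payload = json.dumps({"error": "Bad Gateway",
                                  "detail": str(exc)}).encode()
            self.send_response(502)
            self.send_header("Content-Type", "application/json")
            self.send_header("Cache-Control", "no-store")
            self.end_headers()
            self.wfile.write(payload)
            return
        self.send_response(status)
        self.send_header("Content-Type", ctype)
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    HTTPServer(("", 8080), Proxy).serve_forever()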

Try it (live endpoint)

Response includes the status code, standard headers (including Content-Type), and a small diagnostic JSON body describing the request and returned status.

Simulator URL (copy it into the app once loaded; this is a simulated endpoint, not a normal link):

https://httpstatus.com/api/status/502

Example request:

curl -i "https://httpstatus.com/api/status/502"
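
The same request from Python, as a small sketch using only the standard library (urlopen raises HTTPError for 5xx responses, so the status, headers, and diagnostic body live on the exception):

import urllib.error
import urllib.request

try:
    urllib.request.urlopen("https://httpstatus.com/api/status/502")
except urllib.error.HTTPError as exc:
    print(exc.code, exc.reason)             # 502 Bad Gateway
    print(exc.headers.get("Content-Type"))  # e.g. application/json
    print(exc.read().decode())              # diagnostic JSON body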

Meaning

The server was acting as a gateway or proxy and received an invalid response from the upstream server.

What it guarantees
  • A gateway or proxy in the chain received an invalid response from an upstream while handling an otherwise valid request.
What it does NOT guarantee
  • The failure is permanent.
  • Immediate retries are always safe or effective.

When to use this status

  • Gateway/proxy failures reaching an upstream service.
  • CDN reverse proxy errors when origin health is degraded.
  • Service mesh/proxy issues during upstream deploys or incidents.

When NOT to use this status (common misuses)

  • Returning 5xx for client validation errors: clients retry unnecessarily; traffic spikes and costs increase.
  • Returning 500 without stable error identifiers/correlation: SRE triage slows down; alerting becomes noisy and hard to act on.
  • Returning 503/504 without retry guidance: clients hammer the service or give up too early; cascading failures worsen.

Critical headers that matter

Content-Type
Defines the error body format (JSON, plain text, problem+json). If it is missing or wrong, clients can't parse structured errors and observability loses fidelity.
Cache-Control
Prevents caching of transient errors unless caching is intended. If misconfigured, CDNs cache the failure and the outage stays visible to users long after the origin recovers.
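
As an illustrative sketch of both headers in practice (FastAPI, with a hypothetical /proxied route): a machine-parseable problem-details body plus a Cache-Control that keeps intermediaries from storing the failure:

from fastapi import FastAPI
from fastapi.responses import JSONResponse

app = FastAPI()

@app.get("/proxied")
async def proxied():
    # problem+json (RFC 9457) keeps the error machine-parseable;
    # no-store keeps caches from holding on to a transient failure
    return JSONResponse(
        status_code=502,
        content={"title": "Bad Gateway", "status": 502,
                 "detail": "Upstream returned an invalid response."},
        headers={"Cache-Control": "no-store"},
        media_type="application/problem+json",
    )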

Tool interpretation

Browsers
Display an error state; devtools expose the status code and headers. Cache headers can accidentally cause error documents to be cached.
API clients
Classify the response as a failure; retry policy depends on idempotency and code class. Structured errors improve handling.
Crawlers / SEO tools
Persistent failures reduce crawl rate; soft-404 patterns cause indexing instability.
Uptime monitors
Typically alert based on error rate or thresholds. Consistent classification reduces false positives.
CDNs / reverse proxies
May cache errors if misconfigured; respect Cache-Control and can serve stale content on origin failure.

Inspector preview (read-only)

On this code, Inspector focuses on semantics, headers, and correctness warnings that commonly affect clients and caches.

Signals it will highlight
  • Status semantics vs method and body expectations
  • Header sanity (Content-Type, Cache-Control, Vary) and evidence completeness
  • Error cacheability and retry guidance signals
Correctness warnings
No common correctness warnings are specific to this code.

Guided Lab outcome

  • Reproduce HTTP 502 Bad Gateway using a controlled endpoint and capture the full exchange.
  • Practice distinguishing status semantics from transport issues (redirects, caching, proxies).
  • Learn to attribute failures to origin vs upstream and apply safe retry/backoff decisions.

Technical deep dive

HTTP 502 Bad Gateway represents a specific server-side condition that requires different handling than other 5xx errors: unlike 500 (the origin itself failed), 503 (the service is deliberately unavailable or shedding load), and 504 (the upstream never answered in time), a 502 means the upstream did answer, but the answer was unusable. Pinning down that distinction helps operations teams diagnose and resolve issues faster, and monitoring systems should keep 502 separate from other 5xx codes for accurate alerting and diagnosis.
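
A trivial sketch of that distinction, mapping each common 5xx code to a first triage target (the hint wording is illustrative, not a standard taxonomy):

TRIAGE_HINTS = {
    500: "origin application raised an unhandled error",
    502: "upstream responded, but the response was invalid",
    503: "service is deliberately unavailable or shedding load",
    504: "upstream did not respond within the gateway timeout",
}

def triage_hint(status: int) -> str:
    # Default for less common 5xx codes: inspect gateway and origin logs
    return TRIAGE_HINTS.get(status, "generic 5xx: check gateway and origin logs")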

Real-world examples

Production 502 Bad Gateway incident
A production system returns 502 Bad Gateway. The status code itself narrows triage: the gateway reached an upstream, but the upstream's response was unusable, so the operations team starts with the upstream service rather than the gateway.
Load balancer returning 502
A load balancer returns 502 to clients. For 502 specifically, this typically means a backend refused the connection, reset it mid-response, or returned something the load balancer could not parse.
Client retry logic for 502
A client receives 502 Bad Gateway. Retry with exponential backoff — the error may be transient.
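
A minimal sketch of that retry policy in Python (standard library only; the attempt count and backoff caps are illustrative choices, not prescriptions):

import random
import time
import urllib.error
import urllib.request

def get_with_retry(url: str, attempts: int = 4) -> bytes:
    for attempt in range(attempts):
        try:
            with urllib.request.urlopen(url, timeout=10) as resp:
                return resp.read()
        except urllib.error.HTTPError as exc:
            # Retry only transient gateway-class errors, and only for
            # idempotent requests (a GET here)
            if exc.code not in (502, 503, 504) or attempt == attempts - 1:
                raise
            # Exponential backoff with full jitter: 0..2**attempt seconds
            time.sleep(random.uniform(0, 2 ** attempt))

# The simulator endpoint always returns 502, so this raises HTTPError
# after the final attempt:
# get_with_retry("https://httpstatus.com/api/status/502")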

Framework behavior

Express.js (Node)
Express: uncaught synchronous exceptions in route handlers result in 500 by default (rejected promises need Express 5 or an explicit next(err)). Use error middleware: app.use((err, req, res, next) => { res.status(err.status || 500).json({ error: 'Internal Server Error' }); });
Django / DRF (Python)
Django: unhandled exceptions return 500 by default; returning 502 requires custom middleware or exception handling. Custom error views: handler500 = 'myapp.views.server_error' (a minimal sketch of such a view follows this list).
Spring Boot (Java)
Spring Boot: @ControllerAdvice can map specific exceptions to 502.
FastAPI (Python)
FastAPI: Use custom exception handlers to return 502 with appropriate error messages.
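
To make the Django entry above concrete, a minimal sketch of the custom error view it references (myapp.views.server_error is the hypothetical dotted path from the text; upstream-failure detection is elided):

# myapp/views.py
from django.http import JsonResponse

def server_error(request):
    # Wired up in urls.py as: handler500 = 'myapp.views.server_error'
    # Django calls this for unhandled exceptions; override the default
    # 500 status only when the failure is genuinely an upstream one
    return JsonResponse({"error": "Bad Gateway"}, status=502)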

Debugging guide

  1. Check the gateway/proxy logs for upstream communication details
  2. Verify the error is reproducible — transient 502 errors may indicate intermittent issues like memory pressure or connection pool exhaustion
  3. Check recent deployments; a fresh deploy of the upstream is a frequent cause of sudden 502 spikes
  4. Check upstream server health and connectivity (a quick TCP-level check is sketched after this list)
  5. Test with curl -v to see the full response including headers — some 502 responses include diagnostic headers
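
For step 4, a quick TCP-level reachability check in Python (host and port are placeholders; a successful connect rules out DNS and connection-refused problems but says nothing about HTTP-level health):

import socket

def upstream_reachable(host: str, port: int, timeout: float = 3.0) -> bool:
    # Open and immediately close a TCP connection to the upstream
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Example with a hypothetical upstream address:
# upstream_reachable("10.0.0.5", 8080)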

Code snippets

Node.js
// Map upstream connection failures to 502 Bad Gateway instead of a blanket 500
process.on('unhandledRejection', (reason) => {
  console.error('Unhandled rejection:', reason);
});

app.use((err, req, res, next) => {
  console.error(`${req.method} ${req.url}:`, err.stack);
  // Connection-level upstream errors surface as 502; anything else is 500
  const upstream = ['ECONNREFUSED', 'ECONNRESET', 'EPIPE'].includes(err.code);
  const status = err.status || (upstream ? 502 : 500);
  res.status(status).json({
    error: process.env.NODE_ENV === 'production'
      ? (status === 502 ? 'Bad Gateway' : 'Internal Server Error')
      : err.message,
    requestId: req.id // assumes request-ID middleware populates req.id
  });
});
Python
from fastapi import FastAPI, Request
from fastapi.responses import JSONResponse
import logging

logger = logging.getLogger(__name__)
app = FastAPI()

@app.exception_handler(Exception)
async def server_error_handler(request: Request, exc: Exception):
    logger.error(f'{request.method} {request.url}: {exc}',
                 exc_info=True)
    # request.state.id is only set if correlation-ID middleware has run
    request_id = getattr(request.state, 'id', None)
    return JSONResponse(
        status_code=502,
        content={'error': 'Bad Gateway', 'request_id': request_id}
    )
Java (Spring)
@ControllerAdvice
public class GlobalErrorHandler {
    private static final Logger log = LoggerFactory.getLogger(
        GlobalErrorHandler.class);

    // Simple body DTO, defined here so the snippet is self-contained
    public record ErrorResponse(String error, String detail) {}

    @ExceptionHandler(Exception.class)
    public ResponseEntity<ErrorResponse> handleException(
            Exception ex, HttpServletRequest req) {
        log.error("{} {}: {}", req.getMethod(),
                  req.getRequestURI(), ex.getMessage(), ex);
        return ResponseEntity.status(502)
            .body(new ErrorResponse("Bad Gateway",
                "An unexpected error occurred"));
    }
}
Go
func errorMiddleware(next http.Handler) http.Handler {
	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		defer func() {
			if err := recover(); err != nil {
				log.Printf("%s %s: %v\n%s",
					r.Method, r.URL, err, debug.Stack())
				// Content-Type must be set before WriteHeader,
				// or the header is silently ignored
				w.Header().Set("Content-Type", "application/json")
				w.WriteHeader(http.StatusBadGateway) // 502
				json.NewEncoder(w).Encode(map[string]string{
					"error": "Bad Gateway",
				})
			}
		}()
		next.ServeHTTP(w, r)
	})
}

FAQ

What causes 502 Bad Gateway errors?
Invalid or unparseable responses from upstream servers, SSL/TLS issues between proxy and upstream, upstream server crashes.
Should clients retry on 502 Bad Gateway?
Yes, with exponential backoff. Transient failures often resolve on retry. Use jitter to avoid thundering herd. Limit retries (3-5 attempts max).
How should 502 Bad Gateway be monitored?
Track 502 error rate as a percentage of total requests. Alert on sustained rates above baseline (e.g., >1% for 5 minutes). Correlate with upstream service health metrics.
What information should a 502 response include?
In production: a generic error message, a request ID for correlation, and optionally a Retry-After header. Never include stack traces, internal paths, database errors, or configuration details. In development: full error details are acceptable. Always log the full error server-side with the request ID.

Client expectation contract

Client can assume
  • The server or an upstream failed to fulfill the request.
Client must NOT assume
  • Immediate retries are always safe or effective.
Retry behavior
Retry idempotent requests with backoff; avoid retries for non-idempotent writes unless you have idempotency keys.
Monitoring classification
Server error
Alert on rate and duration; ensure CDNs do not cache transient failures.
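
A toy sketch of that alerting rule, reusing the thresholds from the FAQ above (a 502 rate above 1% for 5 consecutive minutes; a real system would evaluate this in a metrics backend):

from collections import deque

WINDOW_MINUTES = 5
THRESHOLD = 0.01  # 1% 502 rate, per the FAQ above

samples = deque(maxlen=WINDOW_MINUTES)  # per-minute (errors_502, total) pairs

def record_minute(errors_502: int, total: int) -> bool:
    """Append one minute of counts; return True when the alert should fire."""
    samples.append((errors_502, total))
    if len(samples) < WINDOW_MINUTES:
        return False  # not enough history yet
    return all(t > 0 and e / t > THRESHOLD for e, t in samples)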

Related status codes

501 Not Implemented
The server either does not recognize the request method, or it lacks the ability to fulfill the request.
503 Service Unavailable
The server is currently unavailable (overloaded, down for maintenance, or rate limiting). Should include Retry-After header when possible.
504 Gateway Timeout
The server was acting as a gateway or proxy and did not receive a timely response from the upstream server.
500 Internal Server Error
A generic server-side error, returned when the server encountered an unexpected condition and no more specific 5xx code applies.
