HTTP 202 Accepted

HTTP 202 Accepted indicates that the request has been accepted for processing, but the processing has not been completed and may not have even started. It is intentionally non-committal — the server promises only that it received the request, not that it will succeed. This status is essential for asynchronous API patterns where operations take seconds to minutes (video transcoding, report generation, batch imports) and blocking the HTTP connection would be impractical.

Debug HTTP 202 live
Analyze real 202 behavior — headers, caching, CORS, redirects
Open Inspector →

Try it (live endpoint)

Response includes the status code, standard headers (including Content-Type), and a small diagnostic JSON body describing the request and returned status.

Simulator URL (copy in the app after load — not a normal link):

https://httpstatus.com/api/status/202

Example request:

curl -i "https://httpstatus.com/api/status/202"
Try in playground

Meaning

The request has been accepted for processing, but processing is not complete. Often used for asynchronous operations or batch processing.

What it guarantees
  • The server received the request and accepted it for asynchronous processing.
What it does NOT guarantee
  • The work has started, will complete, or will succeed across downstream systems.
  • The response is cacheable — 202 is not cacheable by default unless headers explicitly allow it.

When to use this status

  • POST triggers a long-running operation (transcoding, report generation, batch imports) that is queued rather than completed inline.
  • The server wants to release the HTTP connection immediately and let the client poll a status endpoint.
  • PUT/PATCH/DELETE requests that are queued for later application (e.g., account deletion pipelines) instead of being applied synchronously.

When NOT to use this status (common misuses)

Returning 202 when the work actually completed synchronously.
Clients poll a status endpoint for a result that was already available; use 200 or 201 instead.
Returning 202 without a job ID, Location header, or status URL.
Clients have no handle on the operation and cannot discover whether it succeeded or failed.
Returning 200 for async acceptance instead of 202.
Clients assume the work is complete and proceed incorrectly.
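The 200-vs-202 confusion shows up directly in client code. A minimal sketch (function and field names are illustrative, not from any particular SDK) of a dispatcher that treats 202 as "accepted, poll later" rather than "done":

```python
def classify_response(status: int, body: dict) -> dict:
    """Map an HTTP status onto what the client may safely assume."""
    if status in (200, 201):
        # Final result (or created resource) is in the body.
        return {"done": True, "result": body}
    if status == 202:
        # Work was only accepted; the client must poll before using any result.
        return {"done": False, "poll_url": body.get("statusUrl")}
    raise ValueError(f"unexpected status {status}")
```

If a server returned 200 for async acceptance, this client would take the first branch and treat a queued job as a finished result — exactly the failure mode described above.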

Critical headers that matter

Location
Points the client at the job status resource to poll.
Without it (or a status URL in the body), clients have no handle on the async operation.
Retry-After
Suggests when the client should check back.
Clients poll too aggressively (wasted load) or too slowly (perceived latency).
Content-Type
Defines how clients parse the diagnostic/job body.
Clients mis-parse payloads; SDKs and browsers apply wrong decoding.

Tool interpretation

Browsers
Treat 202 as a successful response; the body is a status/diagnostic payload, not the final resource.
API clients
Deserialize per Content-Type; well-behaved clients read the job ID or status URL and begin polling.
Crawlers / SEO tools
Generally do not index a 202 body as final content, since the resource is still being produced.
Uptime monitors
Typically mark success; the accepted work can still fail later, so monitor the status endpoint separately.
CDNs / reverse proxies
Do not cache 202 responses by default; they are passed through to the origin.

Inspector preview (read-only)

On this code, Inspector focuses on semantics, headers, and correctness warnings that commonly affect clients and caches.

Signals it will highlight
  • Status semantics vs method and body expectations
  • Header sanity (Content-Type, Cache-Control, Vary) and evidence completeness
Correctness warnings
No common correctness warnings are specific to this code.

Guided Lab outcome

  • Reproduce HTTP 202 Accepted using a controlled endpoint and capture the full exchange.
  • Practice distinguishing status semantics from transport issues (redirects, caching, proxies).

Technical deep dive

202 Accepted (RFC 7231 Section 6.3.3) exists specifically because HTTP is synchronous but many operations are not. The response body SHOULD include a current status of the request and either a pointer to a status monitor (URL) or an estimated completion time. Common patterns: (1) Return 202 with a Location header pointing to a status endpoint (GET /jobs/123). (2) Return 202 with a body containing a job ID and poll URL. (3) Use 202 with Retry-After header to suggest when to check back. The client then polls the status endpoint, which returns 200 with progress updates until the job completes. Some APIs use webhooks as a callback alternative to polling. 202 responses are NOT cacheable.
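Pattern (2) above can be sketched as two pure functions over an in-memory store (a toy model with illustrative names, not a production job queue):

```python
import uuid

jobs: dict = {}  # in-memory stand-in for a real job queue

def accept_job(params: dict):
    """Enqueue work and build the 202 response as (status, headers, body)."""
    job_id = str(uuid.uuid4())
    jobs[job_id] = {"status": "queued", "params": params}
    status_url = f"/api/jobs/{job_id}"
    headers = {"Location": status_url, "Retry-After": "10"}
    return 202, headers, {"jobId": job_id, "statusUrl": status_url}

def job_status(job_id: str):
    """Status endpoint: 200 with current state until a worker marks it terminal."""
    job = jobs.get(job_id)
    if job is None:
        return 404, {"error": "job not found"}
    return 200, {"status": job["status"]}
```

The client keeps only the status URL from the 202 response; everything after that is plain GET polling against `job_status`.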

Real-world examples

Video transcoding service
POST /api/videos with a 2GB video file. The server stores the file and returns 202 with { jobId: 'abc', statusUrl: '/api/jobs/abc', estimatedTime: '5m' }. The client polls the status URL every 10 seconds until the job status changes to 'complete' with a link to the transcoded video.
Bulk email sending
POST /api/campaigns/send triggers sending 100,000 emails. The server returns 202 immediately with a campaign status URL. The status endpoint shows real-time progress: { sent: 45000, failed: 12, total: 100000, status: 'in_progress' }.
Data export request
POST /api/reports/export requests a CSV export of 5 million rows. The server returns 202 with a download URL that initially returns 404. Once the export completes, the download URL returns the file with 200. The status endpoint includes a Retry-After header.

Framework behavior

Express.js (Node)
Express: res.status(202).json({ jobId, statusUrl: `/api/jobs/${jobId}`, estimatedTime: '30s' }). Then create a status endpoint: app.get('/api/jobs/:id', async (req, res) => { const job = await getJob(req.params.id); res.json(job); });
Django / DRF (Python)
Django: return JsonResponse({'job_id': job.id, 'status_url': f'/api/jobs/{job.id}'}, status=202). Use Celery for background tasks: task = process_data.delay(data); return 202 with the Celery task ID.
Spring Boot (Java)
Spring: return ResponseEntity.accepted().header("Location", "/api/jobs/" + jobId).body(new JobStatus(jobId, "queued")). Use @Async or a message queue (RabbitMQ, SQS) for background processing.
FastAPI (Python)
FastAPI: @app.post('/api/process', status_code=202). Use BackgroundTasks: background_tasks.add_task(heavy_process, data). Return the job tracking info immediately.

Debugging guide

  1. If the client treats 202 as success and stops, it's likely not polling the status endpoint — ensure clients implement the async poll loop
  2. Check the Retry-After header to avoid polling too frequently — respect the server's suggested interval
  3. Monitor job queue depth and processing time — 202 hides latency from users but the work still needs to complete
  4. If jobs silently fail after 202, ensure the status endpoint reflects failures and the client handles them
  5. Test timeout behavior: what happens if the job takes longer than expected? Does the status endpoint eventually return an error?
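The steps above combine into a single client-side loop. A sketch with a pluggable `fetch_status` callable so transport details stay out of scope; `max_wait` is an assumed client-side policy, not something the server dictates:

```python
import time

def poll_until_done(fetch_status, max_wait: float = 60.0, default_interval: float = 5.0):
    """Poll a 202 status endpoint until a terminal state or a client timeout.

    fetch_status() must return (body: dict, retry_after: float | None).
    """
    deadline = time.monotonic() + max_wait
    while True:
        body, retry_after = fetch_status()
        if body.get("status") in ("complete", "failed"):
            return body  # terminal state: the caller decides how to handle 'failed'
        if time.monotonic() >= deadline:
            raise TimeoutError("job did not reach a terminal state within max_wait")
        # Respect the server's Retry-After when present, per the debugging guide.
        time.sleep(retry_after if retry_after is not None else default_interval)
```

Returning `failed` bodies to the caller (rather than raising) keeps the distinction between "the job failed" and "the client gave up waiting".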

Code snippets

Node.js
// Async job pattern with Express
app.post('/api/reports', async (req, res) => {
  const jobId = crypto.randomUUID();
  await jobQueue.add({ id: jobId, params: req.body });
  res.status(202)
    .header('Retry-After', '10')
    .json({
      jobId,
      statusUrl: `/api/jobs/${jobId}`,
      message: 'Report generation started'
    });
});

app.get('/api/jobs/:id', async (req, res) => {
  const job = await jobQueue.getJob(req.params.id);
  if (!job) return res.status(404).json({ error: 'Job not found' });
  if (job.status === 'complete') {
    return res.json({ status: 'complete', resultUrl: job.resultUrl });
  }
  res.header('Retry-After', '5').json({ status: job.status, progress: job.progress });
});
Python
from fastapi import FastAPI, BackgroundTasks
from pydantic import BaseModel
import uuid

app = FastAPI()
jobs = {}

class ReportParams(BaseModel):  # minimal request model so the snippet runs
    rows: int = 0

@app.post('/api/reports', status_code=202)
async def create_report(params: ReportParams,
                        bg: BackgroundTasks):
    job_id = str(uuid.uuid4())
    jobs[job_id] = {'status': 'queued', 'progress': 0}
    bg.add_task(generate_report, job_id, params)
    return {
        'job_id': job_id,
        'status_url': f'/api/jobs/{job_id}'
    }

@app.get('/api/jobs/{job_id}')
async def get_job(job_id: str):
    return jobs.get(job_id, {'status': 'not_found'})
Java (Spring)
@PostMapping("/api/reports")
public ResponseEntity<JobStatus> createReport(
        @RequestBody ReportParams params) {
    String jobId = UUID.randomUUID().toString();
    asyncService.generateReport(jobId, params);
    return ResponseEntity.accepted()
        .header("Retry-After", "10")
        .header("Location", "/api/jobs/" + jobId)
        .body(new JobStatus(jobId, "queued"));
}
Go
func createReportHandler(w http.ResponseWriter, r *http.Request) {
	jobID := uuid.New().String()

	// Read the body before returning: r.Body is closed once the handler exits,
	// so it must not be consumed inside the goroutine.
	payload, _ := io.ReadAll(r.Body)
	go processReport(jobID, payload) // process asynchronously

	w.Header().Set("Location", "/api/jobs/"+jobID)
	w.Header().Set("Retry-After", "10")
	w.Header().Set("Content-Type", "application/json")
	w.WriteHeader(http.StatusAccepted)
	json.NewEncoder(w).Encode(map[string]string{
		"jobId":     jobID,
		"statusUrl": "/api/jobs/" + jobID,
	})
}

FAQ

How should a client poll a 202 status endpoint?
Use exponential backoff starting from the Retry-After header value. Poll interval: 1s, 2s, 4s, 8s up to a max (e.g., 30s). Check the status endpoint for terminal states ('complete', 'failed'). Set a client-side timeout after which you alert the user. Consider webhooks for long-running operations (minutes+) instead of polling.
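The schedule described above can be computed up front. A sketch — the doubling-from-Retry-After policy is a common convention, not a spec requirement:

```python
def backoff_schedule(retry_after: float = 1.0, max_delay: float = 30.0, attempts: int = 6):
    """Exponential backoff delays seeded by the server's Retry-After value."""
    delays, delay = [], retry_after
    for _ in range(attempts):
        delays.append(min(delay, max_delay))
        delay *= 2
    return delays

# backoff_schedule(1, 30, 6) -> [1, 2, 4, 8, 16, 30]
```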
What's the difference between 202 Accepted and 200 OK with async processing?
202 explicitly signals 'processing hasn't completed' — the client knows to check back. Returning 200 implies the operation finished successfully, which misleads clients into thinking the result is ready. The semantic distinction matters for client logic, error handling, and monitoring.
Should 202 include a Location header?
Yes, best practice is to include a Location header pointing to the job status resource. Some implementations use a custom header like X-Status-URL. The response body should also contain the URL since not all clients inspect headers. This URL becomes the client's handle on the async operation.
How do I handle failures in async operations after returning 202?
The status endpoint should reflect failure states: { status: 'failed', error: 'Insufficient storage', failedAt: '...' }. For critical operations, also send failure notifications via webhook, email, or push notification. Never silently swallow failures — the client needs to know the accepted request ultimately failed.

Client expectation contract

Client can assume
  • The request was received and queued; the job ID / status URL is the client's handle on the operation.
Client must NOT assume
  • The work has started, completed, or will succeed.
Retry behavior
Do not retry the original request; poll the status endpoint (honoring Retry-After) until a terminal state.
Monitoring classification
Success (accepted, not completed)
Pair the 202 check with status-endpoint checks so jobs that fail after acceptance still alert; 202 is not cacheable by default.

Related status codes

201 Created
The request has been fulfilled and resulted in a new resource being created.
203 Non-Authoritative Information
The request was successful, but the enclosed payload has been modified by a transforming proxy from the origin server's 200 (OK) response.
