Webhooks look simple until they become part of the revenue path. Stripe payment events, PayPal disputes, Telegram bot updates, CRM notifications, marketplace order changes, shipping callbacks, and subscription lifecycle events all arrive through the same weak point: an HTTP endpoint exposed to systems you do not control.
The central mistake is treating a webhook as a normal API request. A production webhook handler is an ingestion boundary. It must authenticate the sender, preserve the original payload, avoid duplicate side effects, tolerate retries, and finish quickly enough that the provider does not mark delivery as failed.
## What teams usually get wrong
Most webhook bugs are not caused by exotic attacks. They come from ordinary production behavior:
- the provider retries the same event after a timeout;
- the same business state arrives through multiple event types;
- JSON is parsed and re-encoded before signature verification;
- the endpoint performs slow business logic before returning 2xx;
- a database write succeeds, but the response fails;
- two identical deliveries are processed concurrently;
- local tests use clean payloads, while production payloads contain unexpected optional fields.
This is why webhook handling should be designed around failure from the beginning. A safe endpoint does not ask, “Can I process this request?” It asks, “Can I prove who sent it, store it once, and process it exactly as many times as the business can tolerate?”
A webhook endpoint should acknowledge delivery, not complete the whole business workflow.
## A production webhook flow
A resilient PHP webhook flow usually has five stages:
1. Read the raw request body.
2. Verify the provider signature before trusting the payload.
3. Extract a stable event identifier.
4. Store the event with a uniqueness constraint.
5. Dispatch asynchronous processing and return quickly.
The endpoint should do as little work as possible synchronously. Signature verification, basic validation, deduplication, persistence, and enqueueing are acceptable. Sending emails, calling CRMs, generating invoices, updating multiple aggregates, or contacting third-party APIs should happen later in a worker.
| Concern | Shortcut implementation | Production-oriented implementation |
|---|---|---|
| Signature verification | Trusts JSON fields or IP allowlists | Verifies raw body with provider signature |
| Request lifecycle | Performs full business workflow inline | Stores event and dispatches a job |
| Timeout behavior | Depends on controller completion | Returns 2xx after durable ingestion |
| Duplicate handling | Assumes each event arrives once | Uses event ID uniqueness and idempotent writes |
| Retry strategy | Lets providers retry blindly | Separates delivery retry from internal job retry |
| Failure isolation | One slow dependency blocks delivery | External calls run outside the webhook request |
| Debugging | Logs only parsed payload | Stores raw payload, headers, status, and attempts |
## Verify signatures against the raw body
For many providers, signature verification depends on the exact raw request body. If you parse JSON and then encode it again, whitespace, key order, escaping, and numeric formatting may change. The signature may fail even when the logical payload is the same.
The exact signature scheme differs across providers. Stripe, PayPal, Telegram integrations, CRMs, and marketplaces do not all sign requests in the same way. Some use HMAC headers, some include timestamps, some require asymmetric verification, and some support custom shared secrets. The safe architecture is the same: verify before trust.
A simplified HMAC-based verifier might look like this:
```php
<?php

final class WebhookSignatureVerifier
{
    public function verify(
        string $rawBody,
        string $signatureHeader,
        string $timestampHeader,
        string $secret
    ): bool {
        if ($signatureHeader === '' || $timestampHeader === '') {
            return false;
        }

        $timestamp = (int) $timestampHeader;
        $now = time();

        if (abs($now - $timestamp) > 300) {
            return false;
        }

        $signedPayload = $timestampHeader . '.' . $rawBody;
        $expected = hash_hmac('sha256', $signedPayload, $secret);

        return hash_equals($expected, $signatureHeader);
    }
}
```

The timestamp check reduces replay risk. The `hash_equals()` call avoids timing-sensitive string comparison. The exact header names and signed payload format should come from each provider integration, not from a shared assumption across all webhooks.
In a framework controller, preserve the raw body before decoding:
```php
<?php

public function __invoke(Request $request): JsonResponse
{
    $rawBody = $request->getContent();

    $isValid = $this->signatureVerifier->verify(
        rawBody: $rawBody,
        signatureHeader: (string) $request->headers->get('X-Webhook-Signature', ''),
        timestampHeader: (string) $request->headers->get('X-Webhook-Timestamp', ''),
        secret: $this->webhookSecret,
    );

    if (!$isValid) {
        return new JsonResponse(['error' => 'invalid signature'], 401);
    }

    $payload = json_decode($rawBody, true, flags: JSON_THROW_ON_ERROR);

    // Continue with event extraction and durable storage.
    return new JsonResponse(['received' => true], 202);
}
```

Do not log secrets, full signature headers, or sensitive customer fields. Log enough for diagnosis: provider, event ID, delivery ID if present, timestamp, verification result, and processing state.
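As a sketch of that logging discipline, a context builder can keep the diagnostic fields while never storing the secret or the full signature. The field names and fingerprint scheme below are illustrative, not tied to any specific provider or logging library:

```php
<?php

/**
 * Build a log-safe context array for one webhook delivery.
 * The raw signature is reduced to a short hash fingerprint so
 * deliveries can be correlated across log lines without the
 * logs ever containing replayable material.
 */
function webhookLogContext(
    string $provider,
    ?string $eventId,
    ?string $deliveryId,
    string $signatureHeader,
    bool $verified,
    string $state
): array {
    return [
        'provider' => $provider,
        'event_id' => $eventId ?? 'unknown',
        'delivery_id' => $deliveryId ?? 'unknown',
        // Fingerprint only: enough to correlate, useless to replay.
        'signature_fp' => substr(hash('sha256', $signatureHeader), 0, 12),
        'verified' => $verified,
        'state' => $state,
        'received_at' => date(DATE_ATOM),
    ];
}

$context = webhookLogContext('stripe', 'evt_123', 'dlv_456', 't=1&v1=abc', true, 'stored');
```

The same array can then be passed to whatever structured logger the application already uses.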
## Treat duplicates as normal behavior
Webhook providers retry. Networks fail. Load balancers close connections. Your application may commit a database transaction and then fail to return a response. From the provider’s point of view, the delivery failed. From your database’s point of view, the event already changed state.
That means duplicate protection is not an optimization. It is part of correctness.
Use the provider’s stable event ID when available. If the provider does not provide one, build a conservative deduplication key from fields that are stable enough for that integration, such as provider name, object ID, event type, and provider timestamp. Avoid using the full raw payload hash as the only dedupe key if the provider can resend semantically identical events with slightly different metadata.
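A fallback key of that shape can be sketched in a few lines. The payload field names here (`object_id`, `event_type`, `created`) are illustrative; each integration should pick its own stable set:

```php
<?php

/**
 * Derive a deterministic deduplication key from fields that stay
 * stable across redeliveries of the same logical event.
 * Volatile metadata (delivery attempt counters, delivery timestamps)
 * is deliberately excluded so every retry maps to the same key.
 */
function dedupeKey(string $provider, array $payload): string
{
    $parts = [
        $provider,
        (string) ($payload['object_id'] ?? ''),
        (string) ($payload['event_type'] ?? ''),
        (string) ($payload['created'] ?? ''), // the provider's own timestamp
    ];

    return hash('sha256', implode('|', $parts));
}

// Two deliveries of the same logical event produce the same key,
// even when retry metadata differs between them.
$first  = ['object_id' => 'ord_42', 'event_type' => 'order.paid', 'created' => 1700000000, 'attempt' => 1];
$second = ['object_id' => 'ord_42', 'event_type' => 'order.paid', 'created' => 1700000000, 'attempt' => 2];
```

The resulting hash fits the same `(provider, event_id)` uniqueness constraint as a native provider event ID.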
A minimal table might look like this:
```sql
CREATE TABLE webhook_events (
    id BIGSERIAL PRIMARY KEY,
    provider VARCHAR(64) NOT NULL,
    event_id VARCHAR(191) NOT NULL,
    event_type VARCHAR(191) NOT NULL,
    payload JSONB NOT NULL,
    status VARCHAR(32) NOT NULL DEFAULT 'pending',
    received_at TIMESTAMPTZ NOT NULL DEFAULT NOW(),
    processed_at TIMESTAMPTZ NULL,
    attempts INTEGER NOT NULL DEFAULT 0,
    last_error TEXT NULL,
    CONSTRAINT webhook_events_provider_event_unique
        UNIQUE (provider, event_id)
);
```

The uniqueness constraint matters more than the application-level check. Two deliveries can arrive at the same time. A SELECT followed by an INSERT is not enough unless protected by a transaction or database constraint.
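The constraint-first pattern can be demonstrated with plain PDO. The sketch below uses SQLite so it is self-contained; in PostgreSQL the same `INSERT ... ON CONFLICT DO NOTHING` works against the unique constraint shown above:

```php
<?php

// Self-contained demo: the database constraint, not application
// logic, is what rejects the second delivery of the same event.
$pdo = new PDO('sqlite::memory:');
$pdo->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);
$pdo->exec(
    'CREATE TABLE webhook_events (
        provider TEXT NOT NULL,
        event_id TEXT NOT NULL,
        payload  TEXT NOT NULL,
        UNIQUE (provider, event_id)
    )'
);

function ingest(PDO $pdo, string $provider, string $eventId, string $payload): bool
{
    // ON CONFLICT DO NOTHING makes the insert race-safe: of two
    // concurrent duplicate deliveries, at most one row is created.
    $stmt = $pdo->prepare(
        'INSERT INTO webhook_events (provider, event_id, payload)
         VALUES (:provider, :event_id, :payload)
         ON CONFLICT (provider, event_id) DO NOTHING'
    );
    $stmt->execute(['provider' => $provider, 'event_id' => $eventId, 'payload' => $payload]);

    return $stmt->rowCount() === 1; // true only for the first delivery
}

$stored    = ingest($pdo, 'stripe', 'evt_1', '{"type":"payment.captured"}');
$duplicate = ingest($pdo, 'stripe', 'evt_1', '{"type":"payment.captured"}');
```

The boolean return value is what the controller later maps to "stored" versus "duplicate".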
In PHP, the ingest step should be safe to call repeatedly:
```php
<?php

final class WebhookIngestor
{
    public function ingest(string $provider, array $payload, string $rawBody): WebhookIngestResult
    {
        $eventId = $this->extractEventId($provider, $payload);
        $eventType = $this->extractEventType($provider, $payload);

        try {
            $event = WebhookEvent::create([
                'provider' => $provider,
                'event_id' => $eventId,
                'event_type' => $eventType,
                // Store the already-decoded payload; keep $rawBody around if
                // you also persist the raw delivery for audit and re-verification.
                'payload' => $payload,
                'status' => 'pending',
            ]);

            ProcessWebhookEvent::dispatch($event->id);

            return WebhookIngestResult::stored($event->id);
        } catch (UniqueConstraintViolationException) {
            return WebhookIngestResult::duplicate($provider, $eventId);
        }
    }
}
```

For duplicate deliveries, returning 2xx is usually the right behavior after verification. The provider is asking whether you received the delivery, not whether you want to process the same business change again.
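The controller can then map the ingest outcome to an HTTP status. A minimal sketch, where the outcome strings and status choices follow this article's convention rather than any provider requirement:

```php
<?php

/**
 * Map an ingest outcome to an HTTP status code.
 * Both 'stored' and 'duplicate' acknowledge delivery with 2xx,
 * because the provider is asking "did you receive it?",
 * not "do you want to process it again?".
 */
function webhookResponseStatus(string $outcome): int
{
    return match ($outcome) {
        'stored'    => 202, // accepted for asynchronous processing
        'duplicate' => 200, // already received; stop the provider's retry loop
        'invalid'   => 400, // unverifiable or structurally broken delivery
        default     => 500, // let the provider retry later
    };
}
```

Keeping this mapping in one place also makes the acknowledgement contract easy to test.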
## Idempotency is deeper than deduplication
Deduplication prevents the same provider event from being inserted twice. Idempotency prevents the same business effect from happening twice.
For example:
- a marketplace may send both `order.paid` and `payment.captured`;
- a CRM may resend a contact update after a field sync;
- a payment provider may send separate events for authorization, capture, refund, and dispute;
- a Telegram bot update may be delivered again if the previous response timed out.
The worker must apply business transitions safely. Do not assume the event stream is a perfect command log. Check current state before changing it.
```php
<?php

use Illuminate\Support\Facades\DB;

final class MarkInvoicePaid
{
    public function __invoke(string $providerPaymentId): void
    {
        DB::transaction(function () use ($providerPaymentId): void {
            $payment = Payment::where('provider_payment_id', $providerPaymentId)
                ->lockForUpdate()
                ->firstOrFail();

            if ($payment->status === PaymentStatus::Paid) {
                return;
            }

            $payment->status = PaymentStatus::Paid;
            $payment->paid_at = now();
            $payment->save();

            Invoice::whereKey($payment->invoice_id)
                ->whereNull('paid_at')
                ->update(['paid_at' => now()]);
        });
    }
}
```

The important details are the row lock, the state check, and the conditional update. The event can be replayed without producing a second payment, second invoice email, or second fulfillment request.
## Timeouts: respond fast, process later
Webhook providers commonly expect a response within a limited window. The exact timeout varies, and it can change by provider or product. Your design should not depend on using the whole window.
A safe target is to keep the HTTP request path small and predictable:
- verify signature;
- validate minimal structure;
- write event record;
- enqueue job;
- return 2xx.
Everything else belongs in asynchronous processing.
This separation also gives you better retry control. Provider retries are designed for delivery. Internal job retries are designed for processing. Mixing the two creates noisy incidents: a temporary CRM outage can cause the payment provider to resend events that you already received successfully.
## Retry strategy without duplicate side effects
There are two retry loops to manage.
The first loop is external. The provider retries delivery when your endpoint times out or returns a non-success status. You do not fully control this loop.
The second loop is internal. Your queue retries the processing job when your own business logic fails. You control delay, maximum attempts, dead-letter handling, and alerting.
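Because the internal loop is fully yours, the delay schedule is a design choice. A common sketch is exponential backoff with a cap; the base delay and cap below are illustrative defaults, not provider requirements:

```php
<?php

/**
 * Compute the delay (in seconds) before internal retry attempt $attempt.
 * Exponential growth spreads load while a downstream dependency recovers;
 * the cap keeps late attempts from drifting hours apart.
 */
function retryDelaySeconds(int $attempt, int $base = 30, int $cap = 3600): int
{
    return (int) min($cap, $base * (2 ** max(0, $attempt - 1)));
}

// Attempt 1 -> 30s, 2 -> 60s, 3 -> 120s, ... capped at one hour.
$schedule = array_map('retryDelaySeconds', range(1, 8));
```

In a queue system, this function would feed the job's backoff configuration; after the maximum attempts, the event should land in a dead-letter state that triggers alerting rather than silent loss.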
A worker should mark state explicitly:
```php
<?php

use Illuminate\Support\Facades\DB;

final class ProcessWebhookEvent
{
    public function __construct(
        private readonly int $eventId,
    ) {
    }

    // WebhookEventRouter: the application's provider/event dispatcher,
    // injected into handle() by the queue worker.
    public function handle(WebhookEventRouter $router): void
    {
        // Claim the event in a short transaction so lockForUpdate()
        // actually holds a row lock; outside a transaction it is a no-op.
        $event = DB::transaction(function (): ?WebhookEvent {
            $event = WebhookEvent::query()
                ->whereKey($this->eventId)
                ->lockForUpdate()
                ->firstOrFail();

            if ($event->status === 'processed') {
                return null;
            }

            $event->increment('attempts');

            return $event;
        });

        if ($event === null) {
            return;
        }

        try {
            $router->handle(
                provider: $event->provider,
                eventType: $event->event_type,
                payload: $event->payload,
            );

            $event->status = 'processed';
            $event->processed_at = now();
            $event->last_error = null;
            $event->save();
        } catch (Throwable $e) {
            $event->status = 'failed';
            $event->last_error = mb_substr($e->getMessage(), 0, 2000);
            $event->save();

            throw $e;
        }
    }
}
```

This gives operations teams a clear view of what happened: received, duplicate, processing, processed, failed, retrying. Without explicit event state, debugging becomes a search through application logs under incident pressure.
## Provider-specific differences still matter
The architecture can be shared, but verification and event semantics should stay provider-specific.
Stripe-style payment events, PayPal webhook notifications, Telegram bot updates, CRM callbacks, and marketplace order events differ in practical ways:
| Provider category | Typical risk | Handler design implication |
|---|---|---|
| Payments | Duplicate financial side effects | Strong idempotency around payment, invoice, refund, and fulfillment state |
| Messaging bots | Repeated updates after slow responses | Fast acknowledgement and separate command execution |
| CRM systems | Partial updates and field drift | Merge logic, field-level validation, and source priority rules |
| Marketplaces | Event ordering differences | State-machine checks instead of assuming chronological delivery |
| Internal webhooks | Weak signing discipline | Same verification and dedupe rules as external providers |
Do not build one generic “webhook controller” that silently normalizes away important provider behavior. A shared ingestion layer is useful. A shared business handler for unrelated providers is usually where correctness starts to leak.
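One way to keep that split explicit is a small router: the shared ingestion layer only picks the handler, and each provider keeps its own event semantics behind an interface. The interface and class names here are illustrative:

```php
<?php

// Shared ingestion picks a handler; each provider owns its semantics.
interface WebhookHandler
{
    public function handle(string $eventType, array $payload): void;
}

final class WebhookRouter
{
    /** @var array<string, WebhookHandler> */
    private array $handlers = [];

    public function register(string $provider, WebhookHandler $handler): void
    {
        $this->handlers[$provider] = $handler;
    }

    public function route(string $provider, string $eventType, array $payload): void
    {
        $handler = $this->handlers[$provider]
            ?? throw new RuntimeException("No handler for provider: {$provider}");

        $handler->handle($eventType, $payload);
    }
}

// Usage sketch: a recording handler standing in for a real payments handler.
$payments = new class implements WebhookHandler {
    public array $seen = [];

    public function handle(string $eventType, array $payload): void
    {
        $this->seen[] = $eventType;
    }
};

$router = new WebhookRouter();
$router->register('stripe', $payments);
$router->route('stripe', 'payment.captured', ['id' => 'pay_1']);
```

An unknown provider fails loudly instead of being silently normalized, which is exactly the behavior a shared ingestion layer should have.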
## Testing webhooks like production traffic
Manual testing with copied JSON is not enough. Useful webhook tests cover the properties that break in production:
- invalid signature with valid JSON;
- valid signature with malformed event shape;
- replayed event inside and outside the timestamp tolerance;
- duplicate event delivered concurrently;
- slow downstream dependency;
- worker failure after partial database changes;
- unknown event type;
- event for an object that does not exist locally yet.
For PHP teams, this usually means testing the verifier with raw strings, the ingestor against a real database constraint, and the worker with transaction-aware integration tests. Mocking the whole flow at the controller level can hide the exact failures webhooks are known for.
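A self-contained sketch of the raw-string verifier checks, with the verify function mirroring the HMAC verifier shown earlier (header formats and tolerance window remain per-provider assumptions):

```php
<?php

// Mirrors the article's HMAC verifier so its properties can be
// exercised against raw strings without any framework involved.
function verifySignature(string $rawBody, string $signature, string $timestamp, string $secret): bool
{
    if ($signature === '' || $timestamp === '') {
        return false;
    }
    if (abs(time() - (int) $timestamp) > 300) {
        return false; // outside the replay tolerance window
    }

    $expected = hash_hmac('sha256', $timestamp . '.' . $rawBody, $secret);

    return hash_equals($expected, $signature);
}

$secret  = 'test-secret';
$rawBody = '{"id":"evt_1","amount":100}';
$ts      = (string) time();
$goodSig = hash_hmac('sha256', $ts . '.' . $rawBody, $secret);

// Property 1: a valid signature over the exact raw body verifies.
$valid = verifySignature($rawBody, $goodSig, $ts, $secret);

// Property 2: re-encoding the JSON (same logical payload, different
// whitespace) breaks verification, which is why the raw body matters.
$reEncoded = json_encode(json_decode($rawBody, true), JSON_PRETTY_PRINT);
$tampered  = verifySignature($reEncoded, $goodSig, $ts, $secret);

// Property 3: a stale timestamp fails even with a matching HMAC.
$oldTs    = (string) (time() - 3600);
$oldSig   = hash_hmac('sha256', $oldTs . '.' . $rawBody, $secret);
$replayed = verifySignature($rawBody, $oldSig, $oldTs, $secret);
```

The same three properties translate directly into PHPUnit test methods once the verifier is a real class in the codebase.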
## What to adopt first
If your current webhook implementation is a large controller that verifies, processes, calls external APIs, and returns a response at the end, start with three changes:
1. Move signature verification before JSON trust.
2. Store incoming events with a unique provider event key.
3. Process business logic asynchronously with idempotent state transitions.
These changes reduce the highest-risk failure modes without requiring a full architecture rewrite. After that, improve observability: event state, attempts, last error, provider, event type, processing duration, and duplicate count.
For engineers who work with backend PHP systems in production and want to validate practical knowledge around reliability, architecture boundaries, and application correctness, the Senior PHP Engineer certification is the closest fit to this kind of work.
## Conclusion
Secure webhook handling in PHP is less about a single signature function and more about designing the endpoint as a reliable ingestion boundary. Verify the raw request, store events durably, return quickly, process asynchronously, and make business operations idempotent.
That design works across Stripe, PayPal, Telegram, CRM platforms, marketplaces, and internal integrations because it matches how distributed systems actually fail. Webhooks are not guaranteed clean commands. They are external delivery attempts. Treat them that way, and your PHP application becomes easier to operate, safer to retry, and less likely to turn a network glitch into a business incident.