
Why your microservice will process the same message twice — and what to do about it

2024 · 6 min read

When I built MeshGate, I made an assumption that took me an embarrassingly long time to unlearn: I assumed that if I sent a message once, it would be processed exactly once. That assumption is wrong in any distributed system, and the consequences of not handling it range from duplicate billing records to double-charged users.

The network is not reliable

Here is the scenario. A user registers in MeshGate. The auth service persists the user record and publishes a UserRegisteredEvent to RabbitMQ. The billing service consumes that event and creates a subscription. The billing service sends an acknowledgment back to RabbitMQ — but the network drops the ACK. RabbitMQ never receives it. From RabbitMQ's perspective, the message was never processed. It redelivers.

Now the billing service has processed the same registration event twice. If you're not handling this, the user has two subscriptions.

This isn't a theoretical edge case. Network partitions, consumer restarts, broker failovers — redelivery is a normal operating condition in any message broker. RabbitMQ, Kafka, SQS — they all do it. Your consumers need to be ready.

Idempotency means the same input always produces the same result

An idempotent operation can be applied multiple times without changing the outcome beyond the first application. Deleting a record by ID is idempotent — delete it twice and the result is the same. Creating a record is not idempotent by default — do it twice and you have two records.

The goal is to make your event consumers idempotent: processing the same event ten times should have the same effect as processing it once.
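The contrast is easy to see with plain collections — a hypothetical sketch, not MeshGate code:

```java
import java.util.HashMap;
import java.util.Map;

public class IdempotencyDemo {
    public static void main(String[] args) {
        // Delete by ID is idempotent: the second call changes nothing.
        Map<String, String> records = new HashMap<>();
        records.put("user-1", "Alice");
        records.remove("user-1");
        records.remove("user-1");           // no-op: already gone
        System.out.println(records.size()); // 0

        // Blind creation is not: each call mints a new record.
        Map<Integer, String> subs = new HashMap<>();
        int nextId = 0;
        subs.put(nextId++, "subscription for user-1");
        subs.put(nextId++, "subscription for user-1"); // duplicate row
        System.out.println(subs.size());    // 2
    }
}
```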

The deduplication table implementation

In MeshGate I solved this with a simple deduplication table:

```java
@RabbitListener(queues = "billing.user.registered")
@Transactional
public void handleUserRegistered(UserRegisteredEvent event) {
    String deduplicationKey = "REG-" + event.getAuthId();

    if (processedEventRepository.existsByEventKey(deduplicationKey)) {
        log.info("Duplicate event detected, skipping: {}", deduplicationKey);
        return;
    }

    billingService.createFreeSubscription(event.getAuthId(), event.getEmail());

    processedEventRepository.save(new ProcessedEvent(deduplicationKey));
}
```

The key details: the check, the processing, and the deduplication record insertion all happen within a single database transaction. If anything fails mid-way, the transaction rolls back — the deduplication record is not saved, and the next redelivery will process the event correctly. If everything succeeds, the record is saved and any future redelivery is a no-op.
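That all-or-nothing behavior can be sketched with an in-memory stand-in for the database, where a thrown exception plays the role of a rollback (all names here are hypothetical, not the MeshGate code):

```java
import java.util.HashSet;
import java.util.Set;

public class RollbackDemo {
    static Set<String> dedupTable = new HashSet<>(); // stands in for processed_events
    static int subscriptions = 0;

    // One delivery: check, do the work, record the key.
    static void handle(String key, boolean failMidway) {
        if (dedupTable.contains(key)) return;              // duplicate: no-op
        if (failMidway) throw new RuntimeException("db down"); // work fails, nothing recorded
        subscriptions++;                                   // the real work
        dedupTable.add(key);                               // recorded only on success
    }

    public static void main(String[] args) {
        try { handle("REG-42", true); } catch (RuntimeException e) { /* "rolled back" */ }
        handle("REG-42", false); // redelivery succeeds: no dedup record was written
        handle("REG-42", false); // a further redelivery is a no-op
        System.out.println(subscriptions); // 1
    }
}
```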

The tradeoff

This approach adds a database read to every message consumption. For high-throughput systems you'd want Redis instead of a SQL table — one key per deduplication ID gives an O(1) lookup, with TTL-based expiry after the window in which redelivery is plausible. For MeshGate's scale, PostgreSQL is fine.
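In Redis this is typically a single `SET key value NX EX <ttl>` per deduplication key. The same idea can be approximated in plain Java with an in-memory map (the class and method names are my own invention):

```java
import java.time.Duration;
import java.time.Instant;
import java.util.concurrent.ConcurrentHashMap;

public class TtlDedup {
    private final ConcurrentHashMap<String, Instant> expiries = new ConcurrentHashMap<>();
    private final Duration window;

    public TtlDedup(Duration window) { this.window = window; }

    /** Returns true only the first time a key is seen within the window. */
    public boolean firstSeen(String key) {
        Instant now = Instant.now();
        expiries.values().removeIf(expiry -> expiry.isBefore(now)); // lazy eviction
        return expiries.putIfAbsent(key, now.plus(window)) == null; // atomic check-and-set
    }

    public static void main(String[] args) {
        TtlDedup dedup = new TtlDedup(Duration.ofMinutes(10));
        System.out.println(dedup.firstSeen("REG-42")); // true: first delivery
        System.out.println(dedup.firstSeen("REG-42")); // false: redelivery within window
    }
}
```

Unlike the SQL table, entries expire on their own, so the store stays bounded — at the cost of losing protection for a redelivery that arrives after the window closes.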

The broader principle

Design every consumer as if the message will arrive twice. In distributed systems, at-least-once delivery is the default. Exactly-once is expensive and often impossible to guarantee end-to-end. The practical answer is idempotent consumers — systems that don't care if a message arrives more than once because they handle it correctly either way.

If you're building microservices and not thinking about this, your system works until it doesn't. And when it breaks, it breaks in the most confusing way possible: silent duplication that only shows up in your data weeks later.

Written by Basit Tijani. Find me on GitHub or LinkedIn.