Logstash as a bridge between messaging platforms

I’m still impressed by how powerful Logstash is. It’s simple in theory - you set up an input, apply some transformations, and send the output. The trick is that you can use it with a wide variety of inputs and outputs, and Logstash handles them in a performant way.

The canonical use case is setting up logging in the ELK stack - take input from Filebeat, syslog, or Kafka and save it to Elasticsearch. We use it to log important events from HAProxy logs and to display Kafka messages in Kibana - for analysis or debugging.
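A minimal pipeline of that kind could look like the sketch below - the port, hosts, and index name are placeholders, not values from our setup:

    input {
      beats {
        port => 5044                             # Filebeat ships its logs here
      }
    }

    output {
      elasticsearch {
        hosts => ["http://localhost:9200"]       # placeholder address
        index => "logs-%{+YYYY.MM.dd}"           # daily indices for Kibana
      }
    }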

Bridging Kafka and RabbitMQ

The company I work for has three products - websites with quite similar business models which, for historical reasons (a company acquisition), run on different technology stacks. The product I’m involved in is a Rails app; the other two are on the .NET stack. The Rails app uses Kafka for broadcasting events, while the .NET ones use RabbitMQ. That was fine as long as the services didn’t need to communicate, but then a project came up to share part of the core business logic across our services. We didn’t want to give up on Kafka, the other team didn’t want to switch from RabbitMQ, so we decided to build a bridge between the two.

It sounds quite easy - you get a message from Kafka and transport it to RabbitMQ, or the other way around. The first idea was to build the bridging service on our own, but then we realized it’s something Logstash is good at.

Kafka and RabbitMQ each serve as both input and output, depending on the direction. You want the Kafka input to have decorate_events => true and the RabbitMQ one metadata_enabled => true, so you can use details about topics / queues / exchanges in the transformation logic.
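A minimal sketch of the Kafka → RabbitMQ direction is below; the broker addresses, topic, and exchange names are made up, and the [@metadata][kafka] fields are the ones decorate_events adds:

    input {
      kafka {
        bootstrap_servers => "kafka:9092"        # placeholder broker address
        topics => ["events"]                     # placeholder topic name
        decorate_events => true                  # adds [@metadata][kafka][...] fields
      }
    }

    output {
      rabbitmq {
        host => "rabbitmq"                       # placeholder host
        exchange => "bridge"                     # placeholder exchange name
        exchange_type => "topic"
        key => "%{[@metadata][kafka][topic]}"    # route by the original Kafka topic
      }
    }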

What’s in the message

From now on we think of our services as external to each other. We translate messages we get from the other service into our own commands - so we can still use our own messages to trigger those commands, and the identity translation happens near the bridge. That is, we have a separate Kafka consumer which translates IDs and other message concepts into our internal commands and sends them over Kafka to the handler.

It’s useful when we have to debug what happened - sometimes it’s the other service’s fault, sometimes the translation’s or the handler’s. Either way, we know more precisely where things went wrong.

To decide which command has to be sent, we use the original exchange name. The translation resolver delegates the message to the proper translator, which validates the format and correctness of the message and translates it into our command.

We also need a correlation ID and the message creation timestamp to deduplicate events - we once had a problem with duplicates on Kafka, probably because of a buggy version, but either way we want to be sure we won’t process an event twice.
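In the RabbitMQ → Kafka direction, the details the resolver needs can be copied out of the connection metadata before the event leaves Logstash, since [@metadata] fields are not serialized to outputs. A sketch, assuming the property names the rabbitmq input plugin documents - verify them against your plugin version:

    filter {
      # metadata_enabled => true exposes the AMQP properties under
      # [@metadata][rabbitmq_properties]; copy what the resolver needs
      # into the event itself so it survives the trip through Kafka.
      mutate {
        add_field => {
          "origin_exchange" => "%{[@metadata][rabbitmq_properties][exchange]}"
          "correlation_id"  => "%{[@metadata][rabbitmq_properties][correlation-id]}"
        }
      }
    }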

Debugging tips

The bridge itself is almost bulletproof, e.g. it can retry with exponential backoff. It’s something you can even forget you have. What you cannot forget is the coupling between your services - after some time the original message may be triggered by another microservice or may use different logic, and you just have to know what happened. With Kafka (and Logstash) it’s super easy to audit all commands - with the addition of Elasticsearch and Kibana, it’s just another thing you log to ES. You can use the correlation_id here too, to track down the consequences or causes of unexpected behaviour.
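Auditing is then just one more output in the same pipeline - a minimal sketch, with the host and index name made up:

    output {
      # the rabbitmq / kafka output stays as-is; this only adds an audit copy
      elasticsearch {
        hosts => ["http://elasticsearch:9200"]    # placeholder address
        index => "bridge-audit-%{+YYYY.MM.dd}"    # daily audit indices for Kibana
      }
    }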

Logstash is powerful yet simple

After architecting this solution I found other use cases where Logstash could help. You can use it to send notifications (email, Slack) based on a message field. You can use it to trigger webhooks, or even to expose one yourself (with the http input).
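For instance, a conditional output could mail out critical events - a sketch under the assumption that your messages carry a severity field; the field name and SMTP details are placeholders:

    output {
      if [severity] == "critical" {               # hypothetical message field
        email {
          address => "smtp.example.com"           # placeholder SMTP server
          to      => "oncall@example.com"
          subject => "Critical event on the bus"
          body    => "Correlation ID: %{correlation_id}"
        }
      }
    }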