- Receive the schedule request via an HTTP endpoint
- Validate the request body and required fields before responding
- If validation fails or the Kafka producer fails, return 400 Bad Request
- If validation succeeds and the event is produced to Kafka, return 202 Accepted
- The endpoint responds with a request ID that the client can later use to retrieve the processed request
- Produce a "ScheduleRequested" event to a Kafka topic
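The endpoint flow above can be sketched as a single handler function. This is a minimal, framework-free sketch: the required field names and the topic name `schedule-requested` are assumptions, and `send_to_kafka` stands in for a real Kafka producer.

```python
import json
import uuid

# Assumed required fields, inferred from the table key described later
# (dateTime, description, customer document number).
REQUIRED_FIELDS = {"dateTime", "description", "document"}

def handle_schedule_request(body: dict, send_to_kafka) -> tuple:
    """Validate the request and produce a ScheduleRequested event.

    Returns (http_status, response_body): 202 Accepted with a requestId
    the client can use to correlate the processed result, or 400 Bad
    Request if validation or the Kafka send fails.
    """
    missing = REQUIRED_FIELDS - body.keys()
    if missing:
        return 400, {"error": f"missing fields: {sorted(missing)}"}

    request_id = str(uuid.uuid4())
    event = {"requestId": request_id, "schedule": body}
    try:
        send_to_kafka("schedule-requested", json.dumps(event))
    except Exception as exc:
        return 400, {"error": f"could not produce event: {exc}"}

    return 202, {"requestId": request_id}
```

In a real service, `send_to_kafka` would be replaced by a producer client; the shape of the flow (validate, produce, return 202 with the request ID) stays the same.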
- Define Schedule structure
- Consume "ScheduleRequested" events from Kafka
- The Kafka consumer runs with auto-commit disabled (`enable.auto.commit=false`)
- Only non-empty fields are considered
- Messages are processed in batches, and offsets are committed only at the end of the batch
- If a message has an invalid schema, its offset is committed and the message is sent to a DLQ (TODO)
- If a schedule fails business validation, its offset is committed and the message is sent to a DLQ (TODO)
- If any unexpected error occurs, the batch's offsets are not committed
- If the unexpected error occurs on the last message, the earlier messages in the batch may be processed again when the batch is redelivered (at-least-once semantics)
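The commit semantics above can be illustrated with a small sketch. This is an assumption-laden simplification: `handle`, `send_to_dlq`, and `commit` stand in for the real consumer callbacks, and the exception names are illustrative, not taken from the project.

```python
class InvalidSchema(Exception):
    """Message does not match the expected schema."""

class BusinessRuleViolation(Exception):
    """Schedule fails a business validation rule."""

def process_batch(messages, handle, send_to_dlq, commit):
    """Process (offset, message) pairs; commit once at the end.

    Auto-commit is off, so any unexpected exception escapes before
    commit() runs and the whole batch is redelivered (at-least-once).
    Known poison messages go to the DLQ and do not block the commit.
    """
    last_offset = None
    for offset, msg in messages:
        try:
            handle(msg)
        except (InvalidSchema, BusinessRuleViolation):
            send_to_dlq(msg)  # park the bad message, keep the batch moving
        last_offset = offset
    if last_offset is not None:
        commit(last_offset)  # single commit covering the whole batch
```

Note the trade-off this encodes: a poison message (bad schema or business rule) is parked and committed past, while an unexpected error leaves offsets untouched, so earlier messages in the batch are reprocessed on the next delivery.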
- All schedules are persisted in Cassandra
- Schedule and customer data are persisted in a single table
- The table key is composed of dateTime, description, and the customer's document number
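A hypothetical CQL sketch of that single denormalized table follows; the table and column names are assumptions, only the composite key mirrors the bullet above.

```sql
-- Illustrative schema, not the project's actual DDL.
CREATE TABLE IF NOT EXISTS schedules (
    date_time          timestamp,
    description        text,
    customer_document  text,
    customer_name      text,
    PRIMARY KEY ((date_time, description, customer_document))
);
```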
- After a schedule is processed, follow-up events are produced
- A "ScheduleProcessed" event is produced carrying the requestId
- An event is also produced to the schedule-change topic
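The two follow-up events can be sketched as one publish step. The topic names and payload fields here are assumptions for illustration, and `send` stands in for a real Kafka producer.

```python
import json

def publish_results(send, request_id, schedule):
    """Emit both follow-up events after a schedule is persisted.

    The ScheduleProcessed event carries the requestId returned by the
    HTTP endpoint, so the caller can correlate the result; a second
    event notifies the schedule-change topic.
    """
    send("schedule-processed",
         json.dumps({"requestId": request_id, "schedule": schedule}))
    send("schedule-change", json.dumps(schedule))
```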
- Start the infrastructure (Kafka, Cassandra):

```shell
docker-compose up
```

- Create the topics:

```shell
./kafka-create-topics.sh
```

- Create the tables:

```shell
./cassandra-create-tables.sh
```

- Build the Docker images:

```shell
./build-app-docker-images.sh
```

- Start the applications:

```shell
./start-apps.sh
```
- Start a WebSocket client to receive the processing responses:

```shell
docker run -it --network=reactive-microservices solsson/websocat ws://schedule-connector:8080/schedules
```

- Run the test script to send schedules:

```shell
./test.sh
```