In this example, we'll read person records from a `user` topic and split them into `child` and `adult` topics based on age.
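The routing rule itself is a single predicate on the `age` field. Here is a minimal Python sketch of that logic, not the SDF implementation itself; it assumes, consistent with the sample output below, that ages under 18 count as `child`:

```python
import json

# Hypothetical stand-in for the dataflow's split step:
# pick a destination topic name based on the record's age field.
def route(record: str) -> str:
    person = json.loads(record)
    return "child" if person["age"] < 18 else "adult"

print(route('{"name":"Andrew","age":16}'))  # child
print(route('{"name":"Randy","age":32}'))   # adult
```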
- Check out the `dataflow.yaml`.
- Make sure to install SDF and start a Fluvio cluster.
With the `dataflow.yaml` file in the current directory, run the following command:

```bash
sdf run
```
The sample data file used in this example, `./sample-data/data.txt`, has the following records:

```json
{"name":"Andrew","age":16}
{"name":"Jackson","age":17}
{"name":"Randy","age":32}
{"name":"Alice","age":28}
{"name":"Linda","age":15}
```
Produce the data to the `user` topic:

```bash
fluvio produce user -f ./sample-data/data.txt
```
Check out the data in the `user` topic:

```bash
fluvio consume user -Bd
```
Consume from `child` and `adult` to see the result:

```bash
fluvio consume child -Bd
fluvio consume adult -Bd
```
```
# child
{"age":16,"name":"Andrew"}
{"age":17,"name":"Jackson"}
{"age":15,"name":"Linda"}

# adult
{"age":32,"name":"Randy"}
{"age":28,"name":"Alice"}
```
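As a quick sanity check, the consumed records can be verified against the under-18 rule outside of Fluvio. A small Python sketch, with the record lists copied from the output above:

```python
# Records as consumed from the child and adult topics above.
child = [
    {"age": 16, "name": "Andrew"},
    {"age": 17, "name": "Jackson"},
    {"age": 15, "name": "Linda"},
]
adult = [
    {"age": 32, "name": "Randy"},
    {"age": 28, "name": "Alice"},
]

# Every child record is under 18; every adult record is 18 or over.
assert all(p["age"] < 18 for p in child)
assert all(p["age"] >= 18 for p in adult)
print("split verified:", len(child), "children,", len(adult), "adults")
```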
Note: the data from the `user` topic is split into the `child` and `adult` topics based on age.
Display the stateful dataflow stats in the `sdf` runtime (`>>`) terminal:

```bash
show split-service/user/metrics
```

```
Key     Window   succeeded   failed
stats   *        5           0
```
Exit the `sdf` terminal and clean up. The `--force` flag removes the topics:

```bash
sdf clean --force
```