
Why doesn't it fork in dual mode (S1 and T1/C1 datagrams simultaneously)? #40

Open
jeky-- opened this issue Dec 30, 2022 · 5 comments

Comments

@jeky--

jeky-- commented Dec 30, 2022

Hi,
using rtl-wmbus on a Raspberry Pi 2, I've seen that the CPU (a single core) goes to 100% when decoding both S1 and T1/C1... Using "-a -p T" it stays around 65%.

Since the Raspberry Pi is multicore, why doesn't the software "fork" when both decoders are enabled?
I've already found a workaround (I hope): using tee and a FIFO, I start two instances of rtl_wmbus, one with "-s -a -p T" and one with "-s -a -p S". This seems to split the workload across two CPUs :)

Are there any side effects to my approach? Is there a good reason to keep everything in a single thread?
Thanks in advance
\Jeky

@weetmuts
Contributor

Can you post the command lines needed to do this?

@jeky--
Author

jeky-- commented Dec 30, 2022

Sure!

mkfifo fifo1 2> /dev/null
rtl_sdr -f 868.625M -s 1.6M - 2>/dev/null | tee fifo1 | ./rtl-wmbus/build/rtl_wmbus -s -a -p T >> contatori_1.txt &
cat fifo1 | ./rtl-wmbus/build/rtl_wmbus -s -a -p S >> contatori_2.txt &
tail -f contatori_*.txt

Here is the top output:

top - 23:25:35 up 1 min,  4 users,  load average: 0.70, 0.36, 0.14
Tasks: 172 total,   1 running, 171 sleeping,   0 stopped,   0 zombie
%Cpu(s): 31.0 us,  3.0 sy,  0.0 ni, 65.8 id,  0.0 wa,  0.0 hi,  0.2 si,  0.0 st
MiB Mem :    922.1 total,    410.7 free,    153.0 used,    358.5 buff/cache
MiB Swap:    100.0 total,    100.0 free,      0.0 used.    715.7 avail Mem

  PID USER      PR  NI    VIRT    RES    SHR S  %CPU  %MEM     TIME+ COMMAND
 1077 root      20   0    2344    396    332 S  68.3   0.0   0:11.67 rtl_wmbus
 1079 root      20   0    2344    412    348 S  57.1   0.0   0:09.72 rtl_wmbus
 1076 root      20   0    6492    388    336 S   4.6   0.0   0:00.74 tee
 1075 root      20   0   14960   1924   1720 S   3.6   0.2   0:00.90 rtl_sdr
 1078 root      20   0    6628    372    320 S   2.0   0.0   0:00.36 cat

@weetmuts
Contributor

Very nice!

I suppose the same could be done in rtlwmbus... but no one has thought about doing it before!

@xaelsouth
Owner

xaelsouth commented Jan 24, 2023

Hi.

Well, splitting one IQ stream in two, as proposed above, effectively doubles the amount of data you're dealing with. The second issue is that all data preprocessing has to be done twice. The preprocessing steps are those that run before process_t1_c1_chain() and process_s1_chain().

Ideally, in the sense of better balancing the CPU load, the data flow would look like this:

rtl_sdr -> rtl_wmbus | preprocessing | memory barrier | continue preprocessing
                                                      |-> thread1 for process_t1_c1_chain() using double buffering | memory barrier | continue to run thread1
                                                      |-> thread2 for process_s1_chain() using double buffering | memory barrier | continue to run thread2

rtl_wmbus would use a double-buffering technique: it fills a second buffer while the first buffer is being processed by thread1 for process_t1_c1_chain(), and switches buffers afterwards. The same applies to thread2 for process_s1_chain(). When both chains are done with their work, they wait at the memory barrier until they are triggered again by the main() function, which passes the same memory barrier. The trick is that both threads (each of them!) must run faster than the preprocessing (main) thread.
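
For illustration only, here is a minimal sketch of that scheme in plain C with POSIX threads. It is not rtl_wmbus code: fill_buffer() stands in for the preprocessing, a pthread barrier stands in for the memory barrier, the two chain functions are empty stubs, and BUF_LEN/ROUNDS are arbitrary placeholders. Compile with gcc -pthread.

/* Sketch only: main preprocesses into one half of a double buffer while
   two worker threads consume the other half; a pthread barrier is the
   synchronisation ("memory barrier") point mentioned above. */
#include <pthread.h>
#include <stdlib.h>

#define BUF_LEN 4096
#define ROUNDS  8                       /* demo: process a fixed number of blocks */

static float buffers[2][BUF_LEN];       /* double buffer shared by all threads    */
static int active;                      /* index of the buffer the workers read   */
static pthread_barrier_t barrier;       /* 3 parties: main, thread1, thread2      */

/* Stand-ins for process_t1_c1_chain() and process_s1_chain(). */
static void process_t1_c1_chain(const float *buf, size_t len) { (void)buf; (void)len; }
static void process_s1_chain(const float *buf, size_t len)    { (void)buf; (void)len; }

/* Stand-in for the preprocessing steps that run before the two chains. */
static void fill_buffer(float *buf)
{
    for (size_t n = 0; n < BUF_LEN; n++)
        buf[n] = (float)rand() / RAND_MAX;
}

static void *t1_c1_worker(void *arg)
{
    (void)arg;
    for (int i = 0; i < ROUNDS; i++) {
        pthread_barrier_wait(&barrier);                 /* wait for a filled buffer */
        process_t1_c1_chain(buffers[active], BUF_LEN);
        pthread_barrier_wait(&barrier);                 /* report "done"            */
    }
    return NULL;
}

static void *s1_worker(void *arg)
{
    (void)arg;
    for (int i = 0; i < ROUNDS; i++) {
        pthread_barrier_wait(&barrier);
        process_s1_chain(buffers[active], BUF_LEN);
        pthread_barrier_wait(&barrier);
    }
    return NULL;
}

int main(void)
{
    pthread_t t1, t2;
    pthread_barrier_init(&barrier, NULL, 3);
    pthread_create(&t1, NULL, t1_c1_worker, NULL);
    pthread_create(&t2, NULL, s1_worker, NULL);

    int fill = 0;
    fill_buffer(buffers[fill]);                         /* preload the first buffer  */

    for (int i = 0; i < ROUNDS; i++) {
        active = fill;
        pthread_barrier_wait(&barrier);                 /* hand the buffer to both workers        */
        fill ^= 1;
        fill_buffer(buffers[fill]);                     /* preprocess the next block in parallel  */
        pthread_barrier_wait(&barrier);                 /* wait until both chains finished        */
    }

    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    pthread_barrier_destroy(&barrier);
    return 0;
}

The three threads meet at the barrier twice per block: once to hand over a freshly preprocessed buffer, and once to confirm both chains have finished it, while main fills the other half of the double buffer in between.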

I'm going to implement this, but I need some time, and time is a very scarce resource which I (don't) have. :/

@jeky--
Author

jeky-- commented Jan 24, 2023

Hi xaelsouth, and thank you for your comment.
Since there is a working workaround, I guess this implementation should have a very low priority.
Sadly I'm not good enough at development, but the details you have provided may be helpful if someone else has the time (and commitment) to do this task.
I would like to take this opportunity to thank you for the whole project!
\Jeky
