Scale frequency to suppress RCU CPU stall warning #67

Open

Mes0903 wants to merge 5 commits into master from scale-frequency

Conversation

@Mes0903 (Collaborator) commented Jan 11, 2025

Since the emulator currently operates using sequential emulation, the execution time for the boot process is relatively long, which can result in the generation of RCU CPU stall warnings.

To address this issue, there are several potential solutions:

  1. Scale the frequency to slow down emulator time during the boot process, thereby eliminating RCU CPU stall warnings.
  2. During the boot process, avoid using clock_gettime to update ticks and instead manage the tick increment relationship manually.
  3. Implement multi-threaded emulation to accelerate the emulator's execution speed.

For the third point, while implementing multi-threaded emulation can significantly accelerate the emulator's execution speed, it cannot guarantee that this issue will not reappear as the number of cores increases in the future. Therefore, a better approach is to use methods 1 and 2 to allow the emulator to set an expected time for completing the boot process.

The advantages and disadvantages of the scale method are as follows:

Advantages:

  • Simple implementation
  • Effectively sets the expected boot process completion time
  • Results have strong interpretability
  • Emulator time can be easily mapped back to real time

Disadvantages:

  • Slower execution speed

The advantages and disadvantages of the increment ticks method are as follows:

Advantages:

  • Faster execution speed
  • Effectively sets the expected boot process completion time

Disadvantages:

  • More complex implementation
  • Some results are difficult to interpret
  • Emulator time is difficult to map back to real time

Based on practical tests, the second method provides limited acceleration but introduces some significant drawbacks, such as difficulty in interpreting results and the complexity of managing the increment relationship. Therefore, this commit opts for the scale frequency method to address this issue.
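
For reference, a minimal sketch of the rejected increment-ticks approach (method 2); TICKS_PER_CALL is a hypothetical hand-tuned constant, not something from this PR:

    #include <stdint.h>

    #define TICKS_PER_CALL 100 /* hypothetical hand-tuned increment */

    static uint64_t emu_ticks;

    /* Method 2: during boot, skip clock_gettime entirely and advance
     * emulator time by a fixed amount per call. Fast, but the increment
     * must be managed manually and is hard to map back to real time. */
    static uint64_t boot_clocksource_incremental(void)
    {
        emu_ticks += TICKS_PER_CALL;
        return emu_ticks;
    }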

This commit divides time into emulator time and real time. During the boot process, the timer uses scale frequency to slow down the growth of emulator time, eliminating RCU CPU stall warnings. After the boot process is complete, the growth of emulator time aligns with real time.
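
A minimal sketch of this split, assuming a boot_complete flag (it appears in the diff below) and a precomputed scale_factor; the actual utils.c code differs:

    #include <stdbool.h>
    #include <stdint.h>
    #include <time.h>

    extern bool boot_complete;  /* flipped at the first switch to U mode */
    extern double scale_factor; /* target boot time / estimated boot time */

    static uint64_t host_time_ns(void)
    {
        struct timespec ts;
        clock_gettime(CLOCK_MONOTONIC, &ts);
        return (uint64_t) ts.tv_sec * 1000000000ULL + ts.tv_nsec;
    }

    /* Emulator time: scaled while booting, advancing at the real-time
     * rate afterwards, offset so the clock never jumps. */
    uint64_t emu_time_ns(void)
    {
        static uint64_t boot_end_real, boot_end_emu;
        uint64_t real = host_time_ns();
        if (!boot_complete) {
            boot_end_real = real;
            boot_end_emu = (uint64_t) (real * scale_factor);
            return boot_end_emu;
        }
        return boot_end_emu + (real - boot_end_real);
    }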

To configure the scale frequency parameter, three pieces of information are required:

  1. The expected completion time of the boot process
  2. A reference point for estimating the boot process completion time
  3. The relationship between the reference point and the number of SMPs

According to the Linux kernel documentation:
https://docs.kernel.org/RCU/stallwarn.html#config-rcu-cpu-stall-timeout

The grace period for RCU CPU stalls is typically set to 21 seconds. Dividing this value by two and taking the result as the expected completion time provides a sufficient buffer to absorb estimation error and avoid RCU CPU stall warnings.

Using gprof for basic statistical analysis, it was found that semu_timer_clocksource accounts for approximately 10% of the boot process execution time. Since the logic within semu_timer_clocksource is relatively simple, its execution time can be assumed to be nearly equal to that of clock_gettime.

Furthermore, by adding a counter to semu_timer_clocksource, it was observed that each time the number of SMPs increases by 1, the execution count of semu_timer_clocksource increases by approximately $2 \times 10^8$ (see the table below).

With this information, we can estimate the boot process completion time as

$$ \text{SecPerCall} \times \text{SMPs} \times 2 \times 10^8 \times \frac{100\%}{10\%} $$

seconds, and thereby calculate the scale frequency parameter. For instance, if the estimated time is 200 seconds and the target time is 10 seconds, the scaling factor would be 10 / 200.
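
As a sketch of this arithmetic (SEMU_BOOT_TARGET_TIME appears in the diff below; CALLS_PER_SMP and CLOCKSOURCE_FRACTION encode the measurements described here):

    #define SEMU_BOOT_TARGET_TIME 10  /* seconds: half the 21 s grace period */
    #define CALLS_PER_SMP 2.0e8       /* semu_timer_clocksource calls per SMP */
    #define CLOCKSOURCE_FRACTION 0.10 /* ~10% of boot time, per gprof */

    /* Estimated boot time and resulting scale factor, per the formula
     * above. With values giving a 200 s estimate, this returns 0.05. */
    static double boot_scale_factor(int smp, double sec_per_call)
    {
        double estimated_boot_time =
            sec_per_call * smp * CALLS_PER_SMP / CLOCKSOURCE_FRACTION;
        return SEMU_BOOT_TARGET_TIME / estimated_boot_time;
    }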

To calculate the proportion of semu_timer_clocksource using gprof, simply add the -pg option during compilation. Then, terminate the emulator's operation immediately after the first switch to U mode (when the boot process is complete). This approach allows for a rough estimation of the proportion occupied by semu_timer_clocksource.
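
As a rough sketch of that workflow (the exact Makefile knobs are an assumption; -pg must reach both the compile and link steps):

    $ make clean && make CFLAGS=-pg LDFLAGS=-pg
    $ ./semu ...     # interrupt right after the first switch to U mode
    $ gprof ./semu   # reads the gmon.out produced by the instrumented run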

Below is the gprof testing output:

  • CPU: 13th Gen Intel(R) Core(TM) i7-13700

     mes@DESKTOP-HLQ9F6A:~/semu$ gprof ./semu
     Flat profile:
     
     Each sample counts as 0.01 seconds.
       %   cumulative   self              self     total           
      time   seconds   seconds    calls  Ts/call  Ts/call  name    
      39.91     55.19    55.19                             vm_step
      10.55     69.78    14.59                             semu_timer_clocksource
       9.27     82.60    12.82                             semu_start
       6.49     91.57     8.97                             aclint_mswi_update_interrupts
       6.19    100.13     8.56                             mmu_store
       5.73    108.05     7.92                             mmu_translate
       3.98    113.56     5.51                             semu_timer_get
       3.33    118.16     4.60                             op_rv32i
       2.92    122.20     4.04                             aclint_mtimer_update_interrupts
       2.76    126.02     3.82                             aclint_sswi_update_interrupts
       2.10    128.93     2.91                             ram_read
       1.59    131.13     2.20                             mmu_load
       1.27    132.89     1.76                             mem_store
       1.19    134.54     1.65                             _init
       0.62    135.40     0.86                             mem_load
       0.46    136.04     0.64                             u8250_check_ready
       0.41    136.61     0.57                             ram_write
       0.39    137.15     0.54                             csr_read
       0.37    137.66     0.51                             virtio_net_refresh_queue
       0.28    138.05     0.39                             mem_fetch
       0.17    138.28     0.23                             op_csr_cs
    
  • CPU: Intel(R) Core(TM) i7-1065G7 CPU @ 1.30GHz

     mes@Mes:~/semu$ gprof ./semu
     Flat profile:
     
     Each sample counts as 0.01 seconds.
       %   cumulative   self              self     total           
      time   seconds   seconds    calls  Ts/call  Ts/call  name    
      32.96     77.77    77.77                             vm_step
      15.96    115.43    37.66                             semu_timer_clocksource
      13.95    148.34    32.91                             semu_start
       6.10    162.74    14.40                             aclint_mswi_update_interrupts
       5.95    176.77    14.03                             mmu_store
       5.35    189.39    12.62                             mmu_translate
       3.85    198.47     9.08                             aclint_mtimer_update_interrupts
       2.73    204.91     6.44                             op_rv32i
       2.46    210.72     5.81                             ram_read
       2.11    215.70     4.98                             aclint_sswi_update_interrupts
       1.74    219.81     4.11                             mmu_load
       1.74    223.91     4.10                             semu_timer_get
       1.30    226.98     3.07                             mem_store
       0.92    229.14     2.16                             ram_write
       0.81    231.05     1.91                             _init
       0.61    232.48     1.43                             mem_load
       0.55    233.78     1.30                             csr_read
       0.35    234.60     0.82                             u8250_check_ready
       0.22    235.11     0.51                             op_csr_cs
       0.21    235.61     0.50                             virtio_net_refresh_queue
       0.14    235.95     0.34                             mem_fetch
       0.00    235.96     0.01                             semu_timer_init
    

By adding a counter to semu_timer_clocksource, it becomes possible to measure the relationship between the number of SMPs and the number of times semu_timer_clocksource is called.

Below is the testing output (13th Gen Intel(R) Core(TM) i7-13700):

  SMP    semu_timer_clocksource calls (semu_timer_total_ticks)
    1      239,937,385
    2      410,377,969
    3      600,190,253
    4      825,078,230
    5    1,007,098,511
    6    1,213,304,632
    7    1,419,857,500
    8    1,627,353,803
    9    1,835,991,887
   10    2,056,837,458
   11    2,269,651,319
   12    2,485,299,111
   13    2,703,809,852
   14    2,917,880,774
   15    3,134,960,385
   16    3,450,041,050
   17    3,587,880,492
   18    3,810,315,132
   19    4,052,935,937
   20    4,274,307,156
   21    4,503,423,577
   22    4,742,850,891
   23    4,973,145,039
   24    5,213,953,075
   25    5,597,276,263
   26    5,696,829,697
   27    5,935,553,504
   28    6,173,571,960
   29    6,620,649,951
   30    6,683,572,727
   31    6,942,616,048
   32    7,180,650,711
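
The instrumentation behind this table could look roughly like the following sketch (the counter name matches the table header; the hook placement is an assumption):

    #include <inttypes.h>
    #include <stdio.h>

    static uint64_t semu_timer_total_ticks; /* the counter in the table above */

    /* Called once per semu_timer_clocksource invocation. */
    static inline void count_clocksource_call(void)
    {
        ++semu_timer_total_ticks;
    }

    /* Dumped when the boot process completes (first switch to U mode). */
    static void report_clocksource_calls(void)
    {
        fprintf(stderr, "semu_timer_clocksource calls: %" PRIu64 "\n",
                semu_timer_total_ticks);
    }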

@jserv requested review from chiangkd and ranvd on January 11, 2025 16:51

    #define SEMU_BOOT_TARGET_TIME 10
    #endif

    bool boot_complete = false;

@ranvd (Collaborator) commented Jan 12, 2025

I suggest moving the boot_complete variable into vm_t for a more conceptually accurate design.
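
A sketch of that suggestion (the real vm_t has many more fields):

    #include <stdbool.h>

    typedef struct {
        /* ... existing VM state ... */
        bool boot_complete; /* per-VM flag instead of a file-scope global */
    } vm_t;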

@Mes0903 (Collaborator, Author) commented Jan 12, 2025

If we move boot_complete into vm_t, all existing functions for semu_timer_t would need an additional vm_t parameter. For example, semu_timer_get would change to:

semu_timer_get(vm_t *vm, semu_timer_t *timer)

This change would indirectly require every caller of this function to pass in a vm_t parameter as well. For instance, since semu_timer_get is called within aclint_mtimer_update_interrupts, the API of aclint_mtimer_update_interrupts would also need to be updated to take a vm_t.

As this pattern continues, the API changes would proliferate significantly. Perhaps we could instead introduce a static bool pointer pointing to boot_complete and assign its value during semu_timer_init, as sketched below. This way, we would only need to modify the parameters of semu_timer_init.
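
A minimal sketch of that alternative, assuming only semu_timer_init gains a parameter (the semu_timer_t stub and signatures are illustrative, not the exact utils.h declarations):

    #include <stdbool.h>
    #include <stdint.h>

    typedef struct {
        uint64_t begin; /* illustrative stub of semu_timer_t */
        uint64_t freq;
    } semu_timer_t;

    static bool *boot_complete_ptr; /* points at the emulator's boot flag */

    void semu_timer_init(semu_timer_t *timer, uint64_t freq, bool *boot_complete)
    {
        boot_complete_ptr = boot_complete; /* the only API change needed */
        timer->freq = freq;
        timer->begin = 0;
    }

    uint64_t semu_timer_get(semu_timer_t *timer)
    {
        if (boot_complete_ptr && !*boot_complete_ptr) {
            /* boot in progress: return scaled emulator time */
        }
        /* ... existing clock-based logic ... */
        return timer->begin;
    }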

@Mes0903 (Collaborator, Author) commented Jan 13, 2025

I noticed that the time during the boot process was far from the target time on macOS. I suspect this is because one of the hypotheses, "$SMP \times 2 \times 10^8$ calls of semu_timer_clocksource" or "semu_timer_clocksource is ~10% of boot overhead", does not hold on macOS. That said, the logic of the function itself is now implemented.

@Mes0903 force-pushed the scale-frequency branch 2 times, most recently from bdf4a62 to cb265ae on January 15, 2025 03:55

@Mes0903 (Collaborator, Author) commented Jan 15, 2025

Based on the output, it is clear that the assumption of "$SMP \times 2 \times 10^8$ calls of semu_timer_clocksource" or "semu_timer_clocksource is ~10% of boot overhead" does not hold on macOS. We need to re-evaluate the number of semu_timer_clocksource calls and its proportion of the boot overhead on macOS to accurately calculate the scale factor.

@jserv (Collaborator) left a comment

Refine the git commit messages.

Change from

According to the Linux kernel documentation:
https://docs.kernel.org/RCU/stallwarn.html#config-rcu-cpu-stall-timeout

to

According to Using RCU’s CPU Stall Detector [1]
...
[1] https://docs.kernel.org/RCU/stallwarn.html#config-rcu-cpu-stall-timeout

Append Close #51 at the end of git commit messages.

The updated commit message:

Since the emulator currently operates using sequential emulation, the
execution time for the boot process is relatively long, which can result
in the generation of RCU CPU stall warnings.

To address this issue, there are several potential solutions:

1. Scale the frequency to slow down emulator time during the boot
   process, thereby eliminating RCU CPU stall warnings.
2. During the boot process, avoid using 'clock_gettime' to update ticks
   and instead manage the tick increment relationship manually.
3. Implement multi-threaded emulation to accelerate the emulator's
   execution speed.

For the third point, while implementing multi-threaded emulation can
significantly accelerate the emulator's execution speed, it cannot
guarantee that this issue will not reappear as the number of cores
increases in the future. Therefore, a better approach is to use methods
1 and 2 to allow the emulator to set an expected time for completing the
boot process.

The advantages and disadvantages of the scale method are as follows:

Advantages:
- Simple implementation
- Effectively sets the expected boot process completion time
- Results have strong interpretability
- Emulator time can be easily mapped back to real time

Disadvantages:
- Slower execution speed

The advantages and disadvantages of the increment ticks method are as
follows:

Advantages:
- Faster execution speed
- Effectively sets the expected boot process completion time

Disadvantages:
- More complex implementation
- Some results are difficult to interpret
- Emulator time is difficult to map back to real time

Based on practical tests, the second method provides limited
acceleration but introduces some significant drawbacks, such as
difficulty in interpreting results and the complexity of managing the
increment relationship. Therefore, this commit opts for the scale
frequency method to address this issue.

This commit divides time into emulator time and real time. During the
boot process, the timer uses scale frequency to slow down the growth of
emulator time, eliminating RCU CPU stall warnings. After the boot
process is complete, the growth of emulator time aligns with real time.

To configure the scale frequency parameter, three pieces of information
are required:

1. The expected completion time of the boot process
2. A reference point for estimating the boot process completion time
3. The relationship between the reference point and the number of SMPs

According to Using RCU’s CPU Stall Detector [1], the grace period for
RCU CPU stalls is typically set to 21 seconds. By dividing this value
by two as the expected completion time, we can provide a sufficient
buffer to reduce the impact of errors and avoid RCU CPU stall
warnings.

Using 'gprof' for basic statistical analysis, it was found that
'semu_timer_clocksource' accounts for approximately 10% of the boot
process execution time. Since the logic within 'semu_timer_clocksource'
is relatively simple, its execution time can be assumed to be nearly
equal to 'clock_gettime'.

Furthermore, by adding a counter to 'semu_timer_clocksource', it was
observed that each time the number of SMPs increases by 1, the execution
count of 'semu_timer_clocksource' increases by approximately '2 * 10^8'.

With this information, we can estimate the boot process completion time
as 'sec_per_call * SMPs * 2 * 10^8 * (100% / 10%)' seconds, and thereby
calculate the scale frequency parameter. For instance, if the estimated
time is 200 seconds and the target time is 10 seconds, the scaling
factor would be '10 / 200'.

Close sysprog21#51

[1] https://docs.kernel.org/RCU/stallwarn.html#config-rcu-cpu-stall-timeout
@jserv (Collaborator) commented Jan 23, 2025

Running on macOS + Apple M1, I got the following:

$ make SMP=8 check
...
[    0.000000] rcu:     RCU restricting CPUs from NR_CPUS=32 to nr_cpu_ids=8.
[    0.000000] rcu: RCU calculated value of scheduler-enlistment delay is 25 jiffies.
[    0.000000] rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=8
...
[   31.988673] rcu: INFO: rcu_sched self-detected stall on CPU
[   31.993102] rcu:     2-....: (5247 ticks this GP) idle=2e0c/1/0x40000002 softirq=517/517 fqs=2128
[   32.002647]  (t=5253 jiffies g=-891 q=11 ncpus=8)
[   32.007524] CPU: 2 PID: 8 Comm: kworker/u16:0 Not tainted 6.1.99 #1
[   32.013259] Hardware name: semu (DT)
[   32.016691] Workqueue: events_unbound async_run_entry_fn
[   32.022636] epc : do_reset+0x30/0x6c
[   32.027939]  ra : do_reset+0x38/0x6c
[   32.033080] epc : c03816d0 ra : c03816d8 sp : c0893e50
[   32.038049]  gp : c04de840 tp : c085af40 t0 : c0bfa5d0
[   32.042925]  t1 : 00000008 t2 : 6d2e8da2 s0 : c0893e60
[   32.047778]  s1 : c03a48b8 a0 : 00000001 a1 : 00000000
[   32.052489]  a2 : 00000000 a3 : 005ac9b1 a4 : 0025364e
[   32.057287]  a5 : 00000000 a6 : 00000000 a7 : 0000000c
[   32.061854]  s2 : 004725a7 s3 : 0038da58 s4 : 00000000
[   32.066685]  s5 : 00000030 s6 : 00000007 s7 : c03a496c
[   32.071414]  s8 : c03a48b8 s9 : c0381488 s10: c03a4968
[   32.076499]  s11: c0381894 t3 : c0a7e400 t4 : 00000003
[   32.081382]  t5 : 46f17671 t6 : c0893e08
[   32.085321] status: 00000120 badaddr: dfcac9b1 cause: 80000005
[   32.090664] [<c03816d0>] do_reset+0x30/0x6c
[   32.096556] [<c0381870>] write_buffer+0x40/0x64
[   32.102722] [<c038207c>] unpack_to_rootfs+0x1c4/0x300
[   32.109299] [<c0382848>] do_populate_rootfs+0x70/0xd8
[   32.114948] [<c002bd8c>] async_run_entry_fn+0x30/0xd0
[   32.120732] [<c0021990>] process_one_work+0x198/0x224
[   32.126716] [<c0021f18>] worker_thread+0x238/0x294
[   32.132557] [<c00289c0>] kthread+0xd4/0xd8
[   32.137496] [<c0002514>] ret_from_exception+0x0/0x1c
[   55.544062] Freeing initrd memory: 8188K
