Performance tuning help needed when issuing reads on a fast NVMe device with the example code #29
Comments
It seems you were corrupting memory. You must ensure the ...
Got it. :) Do you have email or WeChat, so that we can connect offline? :)
Just use GitHub, please.
Hello, I read your comment and the code again. The buf is outside the for loop, so it will not be freed. I think you mean that multiple I/Os writing to the same memory will corrupt the data; actually that is fine for me. I just want to measure performance with this framework, so for now I do not need to worry about data consistency. I see around 2000 MB/s when I run it over one NVMe SSD that has 6 GB/s of bandwidth. Could you please shed some light on how to tune this? :)
This is an async operation, which returns immediately without waiting for the I/O to finish. That is to say, when ...
For example: READ (1) -> WRITE (2) -> READ (3) -> WRITE (4) -> FSYNC (5). 5 won't start before 4 finishes; 4 won't start before 3 finishes; ...; 2 won't start before 1 finishes. At the end, we wait for 5 to finish with ...
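To make that linking behaviour concrete, here is a minimal sketch using the plain liburing C API (not necessarily the wrapper used in this repo); the three-op chain, function name, and parameters are illustrative, and error checks are omitted:

```c
/* Minimal, illustrative chain of linked SQEs with the plain liburing C API.
 * Each op flagged with IOSQE_IO_LINK holds back the next op in the chain
 * until it has completed. */
#include <liburing.h>
#include <sys/types.h>

static void queue_linked_chain(struct io_uring *ring, int infd, int outfd,
                               void *buf, unsigned len, off_t off)
{
    struct io_uring_sqe *sqe;

    sqe = io_uring_get_sqe(ring);
    io_uring_prep_read(sqe, infd, buf, len, off);
    sqe->flags |= IOSQE_IO_LINK;          /* the WRITE waits for this READ */

    sqe = io_uring_get_sqe(ring);
    io_uring_prep_write(sqe, outfd, buf, len, off);
    sqe->flags |= IOSQE_IO_LINK;          /* the FSYNC waits for this WRITE */

    sqe = io_uring_get_sqe(ring);
    io_uring_prep_fsync(sqe, outfd, 0);   /* last op in the chain: no link flag */

    io_uring_submit(ring);                /* returns immediately; ops run async */
}
```

Because linked ops run in order, waiting for the completion of the last op in the chain (for example by tagging it with io_uring_sqe_set_data and matching its CQE) is enough to know the whole chain has finished.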
Don't talk about performance before you get things correct. |
Actually, I already put buf into a global variable, so it will not get freed while the process is running. Double-free and use-after-free bugs would crash the process; I feel they would not affect the performance.
OK, I will pre-allocate a memory buffer for this experiment, but I feel one global buf does not affect the performance result.
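For what it's worth, a minimal sketch of what pre-allocating one buffer per in-flight request could look like, so concurrent I/Os never share memory; QUEUE_DEPTH, BLOCK_SIZE, and buffers are made-up names for this illustration, not taken from the original code:

```c
/* Illustrative pre-allocation of one aligned buffer per in-flight request. */
#include <stdlib.h>

#define QUEUE_DEPTH 64
#define BLOCK_SIZE  (128 * 1024)

static void *buffers[QUEUE_DEPTH];

static int alloc_buffers(void)
{
    for (int i = 0; i < QUEUE_DEPTH; i++) {
        /* page-aligned buffers, in case the file is opened with O_DIRECT */
        if (posix_memalign(&buffers[i], 4096, BLOCK_SIZE) != 0)
            return -1;
    }
    return 0;
}
```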
Hello expert,
I changed the link_cp code a little to read from one fast NVMe device; its read capability is 6000 MB/s.
With the code below I can only reach 2400 MB/s. What could be the bottleneck?
Just run the above code with link_cp /dev/nvme0n1, then run iostat and you will see the bandwidth.
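As a side note, the reply above explains that linked operations execute strictly one after another, so a linked chain keeps only one request outstanding on the device at a time; a common way to drive more bandwidth is to submit independent (unlinked) reads so several requests are in flight at once. A hedged sketch of that idea follows; it is not the reporter's actual modification (which is not shown here), and it reuses the made-up QUEUE_DEPTH / BLOCK_SIZE / buffers names from the earlier sketch:

```c
/* Illustrative batch of independent (unlinked) reads: with no IOSQE_IO_LINK
 * flag set, all of them may be in flight on the device at the same time. */
#include <liburing.h>
#include <sys/types.h>

static int submit_read_batch(struct io_uring *ring, int fd, off_t base_off)
{
    for (int i = 0; i < QUEUE_DEPTH; i++) {
        struct io_uring_sqe *sqe = io_uring_get_sqe(ring);
        if (!sqe)
            break;                               /* submission queue is full */
        io_uring_prep_read(sqe, fd, buffers[i], BLOCK_SIZE,
                           base_off + (off_t)i * BLOCK_SIZE);
        io_uring_sqe_set_data(sqe, buffers[i]);  /* tag for matching the CQE */
        /* no IOSQE_IO_LINK: reads are independent and may run concurrently */
    }
    return io_uring_submit(ring);
}
```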