  1. implemented a recursive crawler that spawns other instances of itself to crawl a directory tree
  2. that was slow, so I copied ripgrep's approach of spawning a fixed number of tasks (akin to threads), which worked better but was still slow
  3. tried locking outside the push-loop (mention ripgrep's recent switch to a locked stack) and that was far worse, not sure why
  4. tried the exact same code but using std::thread instead of async_std::task; much, much better
  5. next step is to try buffering output, since I observed that eliminating the printing puts us on par with find
  6. amazingly, a single-threaded crawler is far more performant than a multi-threaded one, if the task is only to print the directory names!
    1. what if we don't print anything? is that still the case? or is locking stdout the expensive part?
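
Steps 2 and 4 together can be sketched as a fixed pool of OS threads sharing one locked stack of directories. This is my own illustrative reconstruction, not ripgrep's actual code; the function name `crawl_parallel` and the `busy` counter (which keeps idle workers from exiting while a directory is still being expanded) are assumptions.

```rust
use std::fs;
use std::path::PathBuf;
use std::sync::atomic::{AtomicUsize, Ordering};
use std::sync::{Arc, Mutex};
use std::thread;

// Sketch of the fixed-pool approach: N threads pop directories from a shared
// locked stack and push any subdirectories they discover back onto it.
fn crawl_parallel(root: PathBuf, workers: usize) -> Vec<PathBuf> {
    let stack = Arc::new(Mutex::new(vec![root]));
    let found = Arc::new(Mutex::new(Vec::new()));
    // Count of workers currently expanding a directory; incremented while the
    // stack lock is held so "stack empty + busy == 0" reliably means "done".
    let busy = Arc::new(AtomicUsize::new(0));

    let handles: Vec<_> = (0..workers)
        .map(|_| {
            let (stack, found, busy) = (stack.clone(), found.clone(), busy.clone());
            thread::spawn(move || loop {
                let dir = {
                    let mut s = stack.lock().unwrap();
                    match s.pop() {
                        Some(d) => {
                            busy.fetch_add(1, Ordering::SeqCst);
                            d
                        }
                        // Stack is empty and nobody is mid-directory: all done.
                        None if busy.load(Ordering::SeqCst) == 0 => break,
                        // Stack is empty but another worker may push more work.
                        None => {
                            drop(s);
                            thread::yield_now();
                            continue;
                        }
                    }
                };
                if let Ok(entries) = fs::read_dir(&dir) {
                    for entry in entries.flatten() {
                        let path = entry.path();
                        if path.is_dir() {
                            stack.lock().unwrap().push(path.clone());
                        }
                        found.lock().unwrap().push(path);
                    }
                }
                busy.fetch_sub(1, Ordering::SeqCst);
            })
        })
        .collect();

    for h in handles {
        h.join().unwrap();
    }
    Arc::try_unwrap(found).unwrap().into_inner().unwrap()
}

fn main() {
    let found = crawl_parallel(PathBuf::from("."), 4);
    println!("found {} entries", found.len());
}
```

Note the per-entry `stack.lock()` / `found.lock()` calls: under contention this is exactly the kind of locking cost the later notes are probing.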
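
Step 5 (and the stdout-locking question in 6.1) can be probed by locking stdout once for the whole run and wrapping the handle in a `BufWriter`, so each entry costs neither a lock acquisition nor a write syscall. A minimal sketch; `print_entries` is a hypothetical name, made generic over `Write` so it can be exercised against an in-memory buffer too:

```rust
use std::io::{self, BufWriter, Write};
use std::path::PathBuf;

// Write all entries through one buffered, already-locked handle instead of
// calling println! (which re-locks stdout for every line) once per entry.
fn print_entries<W: Write>(out: W, entries: &[PathBuf]) -> io::Result<()> {
    let mut buf = BufWriter::new(out);
    for entry in entries {
        writeln!(buf, "{}", entry.display())?;
    }
    buf.flush()
}

fn main() -> io::Result<()> {
    let stdout = io::stdout();
    let lock = stdout.lock(); // lock once for the whole run
    print_entries(lock, &[PathBuf::from("a"), PathBuf::from("a/b")])
}
```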
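
For comparison, the single-threaded crawler from step 6 needs no tasks and no locks at all: just an explicit stack and a loop. A sketch of what I assume that version looks like (`crawl` is my name for it):

```rust
use std::fs;
use std::io;
use std::path::PathBuf;

// Single-threaded walk with an explicit stack: no spawning, no Mutex.
fn crawl(root: PathBuf) -> io::Result<Vec<PathBuf>> {
    let mut stack = vec![root];
    let mut found = Vec::new();
    while let Some(dir) = stack.pop() {
        for entry in fs::read_dir(&dir)? {
            let path = entry?.path();
            if path.is_dir() {
                stack.push(path.clone());
            }
            found.push(path);
        }
    }
    Ok(found)
}

fn main() -> io::Result<()> {
    for path in crawl(PathBuf::from("."))? {
        println!("{}", path.display());
    }
    Ok(())
}
```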