Describe the bug
The ProbabilisticGeneratorGrammarCoverageFuzzer is over 30 times slower than GeneratorGrammarFuzzer when fuzzing 100 inputs from a CSV grammar (without generators or probabilities).
I do not know whether this is simply the inherent, unavoidable cost of coverage-based fuzzing, but it is a significant slowdown compared to the standard grammar-based fuzzer. Perhaps there is an avoidable bottleneck somewhere, or recent optimizations that could be integrated and then also presented to the readers of the fuzzing book.
To Reproduce
Execute the following code; it demonstrates the relative slowdown.
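The original reproduction snippet is not included here, so the following is only a minimal sketch of the comparison. It assumes the fuzzingbook package is installed; the grammar is a simplified stand-in for the CSV grammar used in the report (no generators, no probabilities), and `time_fuzzer` is a hypothetical helper for timing 100 generated inputs.

```python
import time

from fuzzingbook.GeneratorGrammarFuzzer import (
    GeneratorGrammarFuzzer,
    ProbabilisticGeneratorGrammarCoverageFuzzer,
)

# Simplified stand-in for the CSV grammar (no generators, no probabilities).
CSV_GRAMMAR = {
    "<start>": ["<csv-file>"],
    "<csv-file>": ["<csv-record>", "<csv-record><csv-file>"],
    "<csv-record>": ["<csv-string-list>\n"],
    "<csv-string-list>": ["<raw-field>", "<raw-field>,<csv-string-list>"],
    "<raw-field>": ["<letter>", "<letter><raw-field>"],
    "<letter>": ["a", "b", "c", "d"],
}

def time_fuzzer(fuzzer, trials=100):
    """Return the wall-clock seconds needed to produce `trials` inputs."""
    start = time.perf_counter()
    for _ in range(trials):
        fuzzer.fuzz()
    return time.perf_counter() - start

pggc_time = time_fuzzer(ProbabilisticGeneratorGrammarCoverageFuzzer(CSV_GRAMMAR))
ggf_time = time_fuzzer(GeneratorGrammarFuzzer(CSV_GRAMMAR))

print(f"Duration PGGC-Fuzzer: {pggc_time:.2f}s")
print(f"Duration GeneratorGrammarFuzzer: {ggf_time:.2f}s")
print(f"Ratio: GeneratorGrammarFuzzer takes {ggf_time / pggc_time:.1%} of the time of PGGC-Fuzzer")
print(f"Ratio: PGGC-Fuzzer {pggc_time / ggf_time:.1f} times slower than GeneratorGrammarFuzzer")
```

Example output: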
Duration PGGC-Fuzzer: 6.4s
Duration GeneratorGrammarFuzzer: 0.19s
Ratio: GeneratorGrammarFuzzer takes 3.0% of the time of PGGC-Fuzzer
Ratio: PGGC-Fuzzer 33.9 times slower than GeneratorGrammarFuzzer
Expected behavior
I expect the ProbabilisticGeneratorGrammarCoverageFuzzer to be slower than GeneratorGrammarFuzzer, but not by a factor of more than 30.
Desktop:
OS: macOS
Python version: 3.9
rindPHI changed the title on Mar 12, 2021:
"ProbabilisticGeneratorGrammarCoverageFuzzer really slow for CSV grammar (~40 times slower than GrammarFuzzer)" → "ProbabilisticGeneratorGrammarCoverageFuzzer really slow for CSV grammar (>30 times slower than GeneratorGrammarFuzzer)"