Replies: 2 comments
-
@BulzN Thanks for opening this discussion. I've tried multiple times to reproduce the issue and nail down the bottleneck, but I'm unable to observe any speed reduction; the speed remains more or less the same:

# version 2.3.3
$ time echo 192.168.179.0/24 | naabu
naabu 40.46s user 18.88s system 64% cpu 1:11.60 total

# latest
$ time echo 192.168.179.0/24 | naabu
naabu 4.23s user 11.34s system 26% cpu 58.862 total

Could you provide more context on how to reproduce the issue, including reproduction steps if possible?
-
@Mzack9999 Sure, with much pleasure – here’s an update on my Naabu tests. I've run two assessments using different deployment methods.

Environment
The local test was executed on my MacBook Pro (macOS 15.3, 24 GB RAM) with Naabu installed via Homebrew (version 2.3.4, Homebrew 4.4.20), and the Docker test used the projectdiscovery/naabu:v2.3.3 image.

1st Assessment – Local Installation
Command Ran:
time echo 1.1.1.1 | naabu
Output:

2nd Assessment – Docker Container
Command Ran:
time docker run -it --rm \
  --ulimit nofile=65535:65535 \
  -v "${PWD}:/tmp" \
  projectdiscovery/naabu:v2.3.3 -host 1.1.1.1
Output:
-
Naabu Stability and Performance Analysis
During my investigation of Naabu in a production environment analyzing over 300 hosts, I found that the most stable version so far is 2.3.3. While the tool is excellent, I have identified a few drawbacks:
Issues with Nmap CLI Parsing – When scanning a large number of hosts and ports, integrating Naabu with Nmap CLI sometimes causes the scanning process to break unexpectedly. Despite my efforts, I couldn't find a clear explanation for this behavior. To ensure a complete and accurate analysis of all hosts, I had to run Nmap separately after Naabu completed its scanning.
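The workaround I settled on looks roughly like the sketch below: let Naabu handle only port discovery, then drive Nmap myself from its output file. The file names, the awk grouping, and the Nmap flags are illustrative assumptions, not the exact commands from my pipeline:

# sketch: discover ports with naabu, then scan each host separately with nmap
naabu -list hosts.txt -silent -o naabu_results.txt

# naabu_results.txt contains host:port lines; group ports per host, then hand off to nmap
awk -F: '{ports[$1] = ports[$1] "," $2} END {for (h in ports) print h, substr(ports[h], 2)}' naabu_results.txt |
while read -r host ports; do
  nmap -sV -p "$ports" -oA "nmap_$host" "$host"
done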
Performance Regression in Version 2.3.4 – The latest version, 2.3.4, appears to have a significantly slower analysis rate compared to 2.3.3. While the older version utilized more CPU resources (50–70% usage) and completed scans faster, 2.3.4 shows a drastic drop in CPU utilization—falling below 1%—while memory usage remains unchanged. This issue was observed in a Docker environment with 4 allocated CPUs, where the newer version fails to utilize them efficiently, resulting in significantly longer scan times.
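For anyone who wants to try to confirm this, here is a minimal sketch of how the two versions can be compared under the same 4-CPU limit. The image tags and the target are illustrative assumptions, not my exact production scan:

# sketch: run the same scan against both image tags with 4 CPUs allocated
for tag in v2.3.3 v2.3.4; do
  echo "== naabu $tag =="
  time docker run --rm --cpus=4 --ulimit nofile=65535:65535 \
    projectdiscovery/naabu:"$tag" -host 1.1.1.1 -silent > "naabu_${tag}.txt"
done
# in a second terminal, `docker stats` shows the per-container CPU utilization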
Metrics Endpoint Format Improvements – Enhancing the metrics endpoint to support structured formats compatible with observability tools, such as Prometheus, would be a valuable addition. This would allow users to easily feed scan results—including IPs, ports, hosts, and Nmap findings—into monitoring and alerting systems.
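To make the request concrete, here is a purely hypothetical sketch of the kind of exposition a Prometheus-compatible endpoint could serve. The port, metric names, and labels are invented for illustration; this is not something Naabu outputs today:

# hypothetical example only: not an existing naabu endpoint or metric set
$ curl -s http://localhost:9090/metrics
# HELP naabu_hosts_scanned_total Hosts processed so far.
# TYPE naabu_hosts_scanned_total counter
naabu_hosts_scanned_total 300
# HELP naabu_open_ports_total Open ports discovered, labeled by host and port.
# TYPE naabu_open_ports_total counter
naabu_open_ports_total{host="10.0.0.5",port="443"} 1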
These are just my findings, and I could be mistaken regarding the first two points, but this is my analysis based on usage from December 2024 until now. I hope this serves as constructive feedback and a simple heads-up. Keep up the great work—Naabu is one of the best open-source tools for this purpose, and the entire ProjectDiscovery.io suite is outstanding. Looking ahead, it would be fantastic to see a self-hosted environment integrating all these tools, even if access to a self-hosted repository were available through a donation model.