Slow GitLab fetching? #276
Comments
Average speed seems to be ~1 commit per second, so importing a repo with 1000 commits will take around 15 minutes... Considering that you don't parse and load diffs, only commit messages, this seems very slow.
Also, it seems the function is not terminated after the full import finishes. If I open Mantis in another page, I can see that the imported repo was created and replaced the original repo, so it looks done, but the tab running the full import is still doing something / waiting for a reply...
I haven't had a deeper look, but did you measure whether the MantisBT / plugin part is the bottleneck?
Even if it's GitLab (which I'm really not sure about, because I can clone my repo in less than a minute; I assume Mantis uses the API to fetch commits only, which should be even faster), it would still be very nice to show progress and not give the impression of a hanging process.
Related to #92
There are a lot of issues and things that can be improved in the full import process. Performance is one of them, but not the main problem IMO. Ideally it should run as a background server process, and the GUI would simply query an API to report progress. As for the slowness, bear in mind that for each individual changeset, the script calls the GitLab webservice to retrieve the commit's data, processes it and then goes through all the parents.
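For context on the round-trip cost described above, here is a minimal sketch of fetching commits in pages of 100 through GitLab's standard REST endpoint (`GET /api/v4/projects/:id/repository/commits`) instead of issuing one request per changeset. This is only an illustration under that assumption; `fetch_all_commits` and its parameters are hypothetical names, not the plugin's actual code.

```php
<?php
// Illustrative sketch only (not the plugin's code): page through GitLab's
// commit list in batches of 100 rather than one API round-trip per commit.
function fetch_all_commits( $base_url, $project_id, $token ) {
	$commits = array();
	$page = 1;

	do {
		$url = rtrim( $base_url, '/' )
			. '/api/v4/projects/' . urlencode( $project_id )
			. '/repository/commits?per_page=100&page=' . $page;

		$ch = curl_init( $url );
		curl_setopt( $ch, CURLOPT_RETURNTRANSFER, true );
		curl_setopt( $ch, CURLOPT_HTTPHEADER, array( 'PRIVATE-TOKEN: ' . $token ) );
		$body = curl_exec( $ch );
		curl_close( $ch );

		$batch = json_decode( $body, true );
		if( !is_array( $batch ) || empty( $batch ) ) {
			break;
		}

		$commits = array_merge( $commits, $batch );
		$page++;
	} while( count( $batch ) === 100 );

	return $commits;
}
```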
I just tried to connect Mantis to my GitLab repo (~1400 commits), and when I clicked "Fetch latest changes" the page looked hung and used all my CPU for more than 5 minutes... Is that expected? Maybe you could add a progress bar to the page to show users how the import is going, because right now the page just looks dead and uses all the CPU available...
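On the progress-reporting idea raised above (a background import with the GUI polling for status), a rough sketch might look like the following. The file-based store, `import_progress_set`/`import_progress_get` names, and the `repo_id` parameter are all assumptions for illustration; a real implementation would more likely persist progress in MantisBT's database and expose it through a proper plugin page.

```php
<?php
// Illustrative sketch only: record and expose import progress so the page
// doesn't look hung. Progress is stored in a temp file per repository
// (an assumption; a DB table would be the more robust choice).
function import_progress_set( $repo_id, $done, $total ) {
	$data = array( 'done' => $done, 'total' => $total, 'updated' => time() );
	file_put_contents(
		sys_get_temp_dir() . '/import_progress_' . (int)$repo_id . '.json',
		json_encode( $data )
	);
}

// Called from the import loop, e.g. after every commit or every N commits:
//   import_progress_set( $repo_id, $processed, $total_commits );

function import_progress_get( $repo_id ) {
	$file = sys_get_temp_dir() . '/import_progress_' . (int)$repo_id . '.json';
	if( !file_exists( $file ) ) {
		return array( 'done' => 0, 'total' => 0 );
	}
	return json_decode( file_get_contents( $file ), true );
}

// A tiny JSON endpoint the import page could poll to drive a progress bar.
$repo_id = isset( $_GET['repo_id'] ) ? (int)$_GET['repo_id'] : 0;
header( 'Content-Type: application/json' );
echo json_encode( import_progress_get( $repo_id ) );
```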