- mentioning that some datasets are very large and that people might not want to run these commands (or that they should check the size before doing so, with instructions on how to do it)
- describing a procedure to download only part of the dataset (e.g. `git annex get sub-blabla*.*`)
- adding an "Update" section that explains how to update the repository, also explaining that it is possible to update only the non-imaging files, e.g. with `git pull` (am I correct @kousu?)
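To sanity-check the `git pull` suggestion above: a minimal sketch, assuming the usual layout where small text files (e.g. `participants.tsv`) live in plain git while imaging files are annexed. The repository name, branch handling, and file names here are made up, and the script builds a throwaway local "origin" so the commands can run anywhere:

```shell
#!/bin/sh
set -e

# --- demo setup: a stand-in "server" repo and a local clone (hypothetical) ---
tmp=$(mktemp -d)
git init -q --bare "$tmp/origin-dataset.git"
git clone -q "$tmp/origin-dataset.git" "$tmp/clone"
cd "$tmp/clone"
git config user.email "demo@example.com"
git config user.name "Demo"
echo "participant_id" > participants.tsv   # small, plain-git file
git add participants.tsv
git commit -qm "add participants.tsv"
git push -q origin HEAD

# --- the part users would actually run ---------------------------------------
# "git pull" updates branches and all plain-git (non-annexed) files, but does
# NOT download annexed image data; that only happens on "git annex get".
branch=$(git symbolic-ref --short HEAD)
git pull -q origin "$branch"

# To then fetch just one subject's imaging files (requires git-annex):
#   git annex get sub-blabla*.*

cat participants.tsv
```

So yes, as far as I can tell, `git pull` alone keeps the lightweight files current without pulling any imaging data.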
git tells you how large a download will be as it goes; git-annex, however, has no easy way to do the same, though it does keep size information around. We would have to write and package a small script that people could run after `git clone` but before `git annex get`. Alternatively, we could get an estimate by adding another custom chart to netdata, but it would only be an estimate, since it would count space used across all branches. In either case, there is no workable instruction right now.
I think if we go ahead with Considering plain git (#68), then we can just lean on the estimate git gives at clone time: to find out the size of a dataset, just start downloading it. You can stop it if it says it is going to be too many gigabytes.
Otherwise, we'll need to develop a bit of software.
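For the record, the "small script" would not need to be much. A rough, hypothetical sketch: git-annex encodes each file's size in its key (the `-s<bytes>-` field of symlink targets like `.git/annex/objects/.../SHA256E-s1024--<hash>.nii.gz`), so after a clone we can sum those without downloading anything. The `annex_size` helper and the demo symlinks below are made up for illustration, and repos using unlocked (pointer-file) annexed files would need extra handling:

```shell
#!/bin/sh
# Estimate how many bytes "git annex get" would download, by summing the
# size field git-annex stores in each key. Hypothetical helper, not a
# packaged tool; only handles locked (symlinked) annexed files.
annex_size() {
    find "$1" -type l 2>/dev/null |
    while read -r link; do
        target=$(readlink "$link")
        case $target in
            *annex/objects/*)
                # key format: BACKEND-s<size>--<hash>[.ext]
                echo "$target" | sed -n 's/.*-s\([0-9][0-9]*\)--.*/\1/p'
                ;;
        esac
    done |
    awk '{ total += $1 } END { printf "%d\n", total }'
}

# demo: two fake annexed files of 1000 and 2048 bytes
demo=$(mktemp -d)
ln -s "../.git/annex/objects/xx/yy/SHA256E-s1000--aaa.nii.gz" "$demo/a.nii.gz"
ln -s "../.git/annex/objects/xx/yy/SHA256E-s2048--bbb.nii.gz" "$demo/b.nii.gz"
annex_size "$demo"    # prints 3048 (1000 + 2048)
```

Packaging and distributing even something this small is still work, which is the reservation above.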
I have reservations here too, though. We should not be teaching people git; this should be, at best, a handbook to the parts of git customized for our use case.
Here, https://github.com/neuropoly/data-management/blob/master/internal-server.md#download, can we consider: