zsync2 does not support downloading large files. #31
Thanks @ghuls. Could you please send a PR that includes
Again, thank you very much.
@probonopd Adding this change is not enough. It seems that there are a lot of issues with files bigger than 2GiB:
It seems to request an invalid byte range. Tested with these changes: $ git diff
diff --git a/src/legacy_http.c b/src/legacy_http.c
index 41310da..ccf4f06 100644
--- a/src/legacy_http.c
+++ b/src/legacy_http.c
@@ -53,8 +53,8 @@ struct http_file
} handle;
char *buffer;
- size_t buffer_len;
- size_t buffer_pos;
+ off_t buffer_len;
+ off_t buffer_pos;
int still_running;
};
@@ -391,9 +391,9 @@ static int fill_buffer(HTTP_FILE *file, size_t want, CURLM* multi_handle)
*
* Removes `want` bytes from the front of the buffer.
*/
-static int use_buffer(HTTP_FILE *file, int want)
+static off_t use_buffer(HTTP_FILE *file, off_t want)
{
- if((file->buffer_pos - want) <= 0){
+ if(file->buffer_pos <= want){
/* trash the buffer */
if(file->buffer){
free(file->buffer);
@@ -416,7 +416,7 @@ static int use_buffer(HTTP_FILE *file, int want)
*/
size_t http_fread(void *ptr, size_t size, size_t nmemb, HTTP_FILE *file, struct range_fetch *rf)
{
- size_t want;
+ off_t want;
want = nmemb * size;
fill_buffer(file, want, rf->multi_handle);
@@ -560,14 +560,14 @@ static void buflwr(char *s) {
int range_fetch_read_http_headers(struct range_fetch *rf) {
char buf[512];
int status;
- int seen_location = 0;
+ uint seen_location = 0;
{ /* read status line */
char *p;
if (rfgets(buf, sizeof(buf), rf) == NULL){
/* most likely unexpected EOF from server */
- fprintf(stderr, "EOF from server");
+ fprintf(stderr, "EOF from server\n");
return -1;
}
if (buf[0] == 0)
@@ -622,7 +622,7 @@ int range_fetch_read_http_headers(struct range_fetch *rf) {
p += 2;
buflwr(buf);
{ /* Remove the trailing \r\n from the value */
- int len = strcspn(p, "\r\n");
+ uint len = strcspn(p, "\r\n");
p[len] = 0;
}
/* buf is the header name (lower-cased), p the value */
@@ -631,13 +631,14 @@ int range_fetch_read_http_headers(struct range_fetch *rf) {
if (status == 206 && !strcmp(buf, "content-range")) {
/* Okay, we're getting a non-MIME block from the remote. Get the
* range and set our state appropriately */
- int from, to;
+ off_t from, to;
sscanf(p, "bytes " OFF_T_PF "-" OFF_T_PF "/", &from, &to);
+ fprintf(stderr, "content-range from: %d to: %d\n", from, to);
if (from <= to) {
rf->block_left = to + 1 - from;
rf->offset = from;
} else {
- fprintf(stderr, "failed to parse content-range header");
+ fprintf(stderr, "failed to parse content-range header\n");
}
/* Can only have got one range. */
@@ -678,7 +679,7 @@ int range_fetch_read_http_headers(struct range_fetch *rf) {
*/
}
- fprintf(stderr, "Error while parsing headers");
+ fprintf(stderr, "Error while parsing headers\n");
return -1;
}
diff --git a/src/zsclient.cpp b/src/zsclient.cpp
index 06a993b..c5fd3f0 100644
--- a/src/zsclient.cpp
+++ b/src/zsclient.cpp
@@ -269,12 +269,14 @@ namespace zsync2 {
// if interested in headers only, download 1 kiB chunks until end of zsync header is found
if (headersOnly) {
- static const auto chunkSize = 1024;
- unsigned long currentChunk = 0;
+issueStatusMessage("headersOnly");
+ static const off_t chunkSize = 1024;
+ off_t currentChunk = 0;
// download a chunk at a time
while (true) {
std::ostringstream bytes;
+issueStatusMessage("headersOnly:" + std::to_string(currentChunk) + " " + std::to_string( chunkSize) + " " + std::to_string(currentChunk + chunkSize - 1) + "\n");
bytes << "bytes=" << currentChunk << "-" << currentChunk + chunkSize - 1;
session.SetHeader(cpr::Header{{"range", bytes.str()}});
It'll be much easier to review if you send a PR right away.
I think he is not sending a PR because, despite his changes, it is not working yet.
Hmm, applying this diff file (manually, thanks a lot git apply for never working) makes it work on the Garuda Linux ISO file I tested this on: https://builds.garudalinux.org/iso/garuda/dr460nized/210324/garuda-dr460nized-linux-zen-210324.iso.zsync
I've noticed while compiling this in Cygwin that this is sometimes wrong and uses 32-bit types instead: https://github.com/AppImage/zsync2/blob/86cfd3a1d6a27483ec40edd62c1a6bd409cbbe5d/src/format_string.h#L24-L36 Forcing it to use 64-bit types fixed any issues I had in the Cygwin-compiled version.
This patch goes in the right direction, but it actually doesn't solve the issue. See my comments in #59. A fix must use fixed 64-bit types. |
Apparently, only a few lines have to be changed in order to support large(r) files on 64-bit machines. This commit doesn't (yet) fix the issue on 32-bit machines (it also doesn't test this explicitly). In comparison to #59, however, it uses types that help get this to work on 32-bit machines as well, as it doesn't use compiler-dependent types, but types that are known to be large enough even there. Closes #59. CC #31.
(cherry picked from commit a8e2d68)
zsync2 does not support downloading large files.
failed to parse content-range header
Error while parsing headers
Other error? -1
I patched zsync2 so it shows the from and to values:
As you can see, `int` (signed int) is not big enough; `from` and `to` should be `uint` (unsigned int) (at least 32 bits).