
resume downloads




Does anyone know how I can resume a download from the point where it stopped? I have large files (100-200 MB) to download, and for many reasons they don't always make it to the end.

I wrote a program in AU3 using InetGet (http:), but when a download does not finish and I run the command again, the whole file is downloaded again from the start.

What I need is some method to continue the download from the point where it stopped, whether it is http: or ftp:.

Any help?

Edited on 8/10/06 - see below for the solution: the WGET program can continue broken downloads, among other options. It's a very complete download program.

Jose


You will have to initiate a connection with the server and learn how to negotiate such a continuation.
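For plain HTTP that negotiation boils down to a 'Range' request header: if the server supports it, it replies '206 Partial Content' with only the missing bytes. Below is a rough, untested AU3 sketch of the idea, using the WinHttp.WinHttpRequest.5.1 COM object; the URL and file names are placeholders.

; Rough sketch only: resume an HTTP download by requesting the missing byte range.
; Assumes a partial file is already on disk and that the server honours range requests.
$sUrl  = "http://www.example.com/bigfile.zip"   ; placeholder URL
$sFile = "bigfile.zip"                          ; the partially downloaded file
$iHave = FileGetSize($sFile)                    ; bytes we already have (0 if the file is missing)

$oHttp = ObjCreate("WinHttp.WinHttpRequest.5.1")
$oHttp.Open("GET", $sUrl, False)
$oHttp.SetRequestHeader("Range", "bytes=" & $iHave & "-")  ; ask only for the missing tail
$oHttp.Send()

If $oHttp.Status = 206 Then                 ; 206 = Partial Content, the range was accepted
    $hFile = FileOpen($sFile, 16 + 1)       ; binary + append
    FileWrite($hFile, $oHttp.ResponseBody)  ; note: the whole tail sits in memory here
    FileClose($hFile)
Else
    ; a 200 answer means the server ignored the range and sent the whole file again
    MsgBox(0, "Resume", "Server did not honour the range request (status " & $oHttp.Status & ")")
EndIf

For ftp: the equivalent mechanism is the REST (restart) command, which is what download managers and tools like WGET wrap for you.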

1- So, I can imagine there is no simple solution; if it is possible at all, will I have to write all the code for this negotiation myself?

2- And will there probably be a different negotiation for each kind of server?

3- So how do those FTP packages that resume, or split one large file into many parts, work regardless of the server? Or do they have a specific negotiation for each kind of server?

I know that not all servers even support resumed downloads, so you must make sure ahead of time that you will be able to.

4- Do you know if IIS (Win NT4, Service Pack 6) has this ability?

Thanks for your help.

Jose


Thanks, SmOke_N and Dabus, you are right, WGET is command-line software.

I found a download at: http://users.ugent.be/~bpuype/wget/

or at http://gnuwin32.sourceforge.net/packages/wget.htm

or, maybe better, from: http://sourceforge.net/project/showfiles.p...ackage_id=16430 or the official homepage: http://www.gnu.org/software/wget/wget.html (plenty of explanations also)

Jose



Thanks, SmOke_N and Dabus, you are right, WGET is command-line software.

Hmm, I didn't even reply to this thread :lmao: ... I think you meant Azkay :ph34r:

Common sense plays a role in the basics of understanding AutoIt... If you're lacking in that, do us all a favor, and step away from the computer.


Hmm, I didn't even reply to this thread :lmao: ... I think you meant Azkay :ph34r:

Oops, you are right, it was Azkay who gave the clue. Thanks again, Azkay. Anyway, this is an important topic for this forum: how to resume a download that was aborted for any reason. Don't you think FTP.au3 could handle that?

Jose


And here are some usage examples (from http://prdownloads.sourceforge.net/gnuwin3...oc.zip?download ):

7 Examples

The examples are divided into three sections loosely based on their complexity.

7.1 Simple Usage

Say you want to download a URL. Just type:

wget http://fly.srk.fer.hr/

But what will happen if the connection is slow, and the file is lengthy? The connection will probably fail before the whole file is retrieved, more than once. In this case, Wget will try getting the file until it either gets the whole of it, or exceeds the default number of retries (this being 20). It is easy to change the number of tries to 45, to insure that the whole file will arrive safely:

wget --tries=45 http://fly.srk.fer.hr/jpg/flyweb.jpg

Now let's leave Wget to work in the background, and write its progress to log file 'log'. It is tiring to type '--tries', so we shall use '-t'.

wget -t 45 -o log http://fly.srk.fer.hr/jpg/flyweb.jpg &

The ampersand at the end of the line makes sure that Wget works in the background. To unlimit the number of retries, use '-t inf'.

The usage of ftp is as simple. Wget will take care of login and password.

wget ftp://gnjilux.srk.fer.hr/welcome.msg

If you specify a directory, Wget will retrieve the directory listing, parse it and convert it to html. Try:

wget ftp://ftp.gnu.org/pub/gnu/
links index.html

7.2 Advanced Usage

You have a file that contains the URLs you want to download? Use the '-i' switch:

wget -i file

If you specify '-' as file name, the URLs will be read from standard input.

Create a five levels deep mirror image of the GNU web site, with the same directory structure the original has, with only one try per document, saving the log of the activities to 'gnulog':

wget -r http://www.gnu.org/ -o gnulog

The same as the above, but convert the links in the html files to point to local files, so you can view the documents off-line:

wget --convert-links -r http://www.gnu.org/ -o gnulog

Retrieve only one html page, but make sure that all the elements needed for the page to be displayed, such as inline images and external style sheets, are also downloaded. Also make sure the downloaded page references the downloaded links.

wget -p --convert-links http://www.server.com/dir/page.html

The HTML page will be saved to 'www.server.com/dir/page.html', and the images, stylesheets, etc., somewhere under 'www.server.com/', depending on where they were on the remote server.

The same as the above, but without the 'www.server.com/' directory. In fact, I don't want to have all those random server directories anyway; just save all those files under a 'download/' subdirectory of the current directory.

wget -p --convert-links -nH -nd -Pdownload \
     http://www.server.com/dir/page.html

Retrieve the index.html of 'www.lycos.com', showing the original server headers:

wget -S http://www.lycos.com/

Save the server headers with the file, perhaps for post-processing.

wget --save-headers http://www.lycos.com/
more index.html

Retrieve the first two levels of 'wuarchive.wustl.edu', saving them to '/tmp'.

wget -r -l2 -P/tmp ftp://wuarchive.wustl.edu/

You want to download all the GIFs from a directory on an HTTP server. You tried 'wget http://www.server.com/dir/*.gif', but that didn't work because HTTP retrieval does not support globbing. In that case, use:

wget -r -l1 --no-parent -A.gif http://www.server.com/dir/

More verbose, but the effect is the same. '-r -l1' means to retrieve recursively (see Chapter 3 [Recursive Download], page 23), with maximum depth of 1. '--no-parent' means that references to the parent directory are ignored (see Section 4.3 [Directory-Based Limits], page 25), and '-A.gif' means to download only the GIF files. '-A "*.gif"' would have worked too.

Suppose you were in the middle of downloading, when Wget was interrupted. Now you do not want to clobber the files already present. It would be:

wget -nc -r http://www.gnu.org/

If you want to encode your own username and password to HTTP or FTP, use the appropriate URL syntax (see Section 2.1 [URL Format], page 2).

wget ftp://hniksic:mypassword@unix.server.com/.emacs

Note, however, that this usage is not advisable on multi-user systems because it reveals your password to anyone who looks at the output of ps.

You would like the output documents to go to standard output instead of to files?

wget -O - http://jagor.srce.hr/ http://www.srce.hr/

You can also combine the two options and make pipelines to retrieve the documents from remote hotlists:

wget -O - http://cool.list.com/ | wget --force-html -i -

7.3 Very Advanced Usage

If you wish Wget to keep a mirror of a page (or ftp subdirectories), use '--mirror' ('-m'), which is the shorthand for '-r -l inf -N'. You can put Wget in the crontab file asking it to recheck a site each Sunday:

crontab

0 0 * * 0 wget --mirror http://www.gnu.org/ -o /home/me/weeklog

In addition to the above, you want the links to be converted for local viewing. But, after having read this manual, you know that link conversion doesn't play well with timestamping, so you also want Wget to back up the original html files before the conversion. Wget invocation would look like this:

wget --mirror --convert-links --backup-converted \
     http://www.gnu.org/ -o /home/me/weeklog

But you've also noticed that local viewing doesn't work all that well when html files are saved under extensions other than '.html', perhaps because they were served as 'index.cgi'. So you'd like Wget to rename all the files served with content-type 'text/html' or 'application/xhtml+xml' to 'name.html'.

wget --mirror --convert-links --backup-converted \
     --html-extension -o /home/me/weeklog \
     http://www.gnu.org/

Or, with less typing:

wget -m -k -K -E http://www.gnu.org/ -o /home/me/weeklog
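
One option the examples above don't show, and the one that matters most here, is '-c' ('--continue'): it makes Wget resume a partially downloaded file instead of starting from zero. A minimal, untested sketch of calling it from an AU3 script follows; the wget.exe location, URL and download folder are just placeholders.

; Sketch: let wget handle the resume logic instead of InetGet.
; Assumes wget.exe sits next to the script; the URL and target folder are made up.
$sWget = @ScriptDir & "\wget.exe"
$sUrl  = "http://www.example.com/bigfile.zip"

; -c     = continue a partially downloaded file
; -t inf = keep retrying until the whole file has arrived
; -P     = directory to save into
RunWait('"' & $sWget & '" -c -t inf -P "' & @ScriptDir & '\download" ' & $sUrl, @ScriptDir, @SW_HIDE)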

