10 Wget (Linux File Downloader) Command Examples in Linux



Narad Shrestha

He has over 10 years of rich IT experience, including various Linux distros, FOSS, and networking. Narad believes in sharing IT knowledge with others and adopts new technology with ease.


10 Responses

  1. Sivasreekanth says:

    I need help with the below: when I try to download a website with wget --mirror, my program gets stuck in the readLine() loop and never completes the request. Can anyone suggest a fix?

    private void downLoadReport() {
        Runtime rt = Runtime.getRuntime();
        try {
            Process p = rt.exec(wgetDir + "wget --mirror http://alex.smola.org/drafts/thebook.pdf");
            // wget writes its progress output to stderr, not stdout. If stderr
            // is never drained, its pipe buffer fills up and wget blocks, so a
            // readLine() loop on stdout appears to hang forever. Read stderr.
            BufferedReader reader = new BufferedReader(new InputStreamReader(p.getErrorStream()));
            String line;
            while ((line = reader.readLine()) != null) {
                System.out.println(line);
            }
            // Wait for wget to exit and check its status before declaring success.
            int exitCode = p.waitFor();
            if (exitCode == 0) {
                System.out.println("wget --mirror completed successfully");
            }
        } catch (IOException ioe) {
            ioe.printStackTrace();
        } catch (InterruptedException e) {
            e.printStackTrace();
        }
    }
  2. umesh says:

    Everything is OK, but how do I find the download link for a particular piece of software (Google Chrome, TeamViewer, etc.)?

  3. Thomas says:

    Great post! Have a nice day! :)

  4. Bhuvana says:

    We are seeing slowness when sending traffic using wget on Linux machines through Tcl.
    The transfer gets stuck partway through when the file size is more than 100 MB,
    so we are unable to measure the bandwidth during the download.
    Please suggest a way to address this issue.

  5. Scully says:

    You made my day. I tried without the ftp:// prefix and failed time after time.

  6. Predatux says:

    I want to make a script that checks a website, downloads the latest available version of a .deb file, and installs it.

    The problem I have is that the file name changes every time a new version is released, so I can't know the exact name to pass to wget.

    I wonder if there is any way to use wildcards in wget, or a similar option.

    As an example, suppose you want to periodically download the latest 64-bit "Dukto".

    Their website is:

    How can I tell wget to look in that directory and download dukto*.deb?

    Thanks in advance.
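    One common approach, sketched below: wget does not expand shell-style wildcards in HTTP URLs, but its recursive mode can filter what it saves with an accept pattern (-A). This assumes the page links the .deb files directly; the URL here is a placeholder, not Dukto's real download page.

    ```shell
    # Fetch only files matching dukto*.deb linked from a download page.
    # -r -l1 : recurse one level (follow links on that page only)
    # -nd    : don't recreate the server's directory tree locally
    # -A     : comma-separated accept list of file-name patterns
    wget -r -l1 -nd -A "dukto*.deb" "https://example.com/dukto/downloads/"
    ```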

  7. Hasanat says:

    What is meant by "downloads files recursively"?

    • John Lang says:

      That means it will go through all the links on the website. For example, if a page links to other pages, wget will download each of those, plus anything they link to in turn. You can set the number of levels, etc. (reference http://www.gnu.org/software/wget/manual/html_node/Recursive-Retrieval-Options.html ). This is actually how Google works, but for the whole internet: it follows every link on every website to every other one. Also, with a few more options you can download a whole site and make it suitable for local browsing, so if there's a multipage site you use often, you can retrieve it recursively and then open it even without an internet connection. I hope that makes sense (the tl;dr version is that it follows every link on that website to more links and more files, in a 'tree').
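      As a concrete illustration of the recursive and offline-browsing options described above (the URL is just an example, and the depth of 2 is an arbitrary choice):

      ```shell
      # Mirror a site two link-levels deep and rewrite links for offline browsing.
      # --recursive       : follow links on each retrieved page
      # --level=2         : limit recursion depth to two hops from the start page
      # --convert-links   : rewrite links in saved pages to point at local copies
      # --page-requisites : also fetch the images/CSS needed to render each page
      wget --recursive --level=2 --convert-links --page-requisites https://example.com/
      ```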
