Rsync (Remote Sync): 10 Practical Examples of Rsync Command in Linux

Tarunika Shrivastava

I am a Linux server admin and love to play with Linux and its many distributions. I am working as a System Engineer with a web hosting company.

132 Responses

  1. Fasih says:

    Why am I getting a 1-hour time difference from the original source file, when both the target and source machines have the same time?

    # rsync -a /redo/archive/*  [email protected]:/media/disk/archive
    
  2. Paul says:

    I’m using rsync, talking to an rsyncd, to avoid encryption through ssh, as the rsyncing machines are low-powered embedded boards. Consequently, the rsyncd provides modules to address the server directories, say, a module “backup”.

    A client machine now attempts to sync some local directories to a corresponding path under the directory specified by the module on the rsyncd machine, like:

    # rsync $options $path/$dir server::backup/$path
    

    But as long as $path doesn’t exist on the server, rsyncing $dir fails. I’m looking for a way to create $path on the server automatically when written to, akin to what mkdir -p does. Given that the canonical path on the server is hidden behind the module name, letting the client ssh to the server in order to create the directory prior to rsync is something I want to avoid.

    I haven’t found an rsync option allowing the creation of destination directories on an as-needed basis. Do you have any suggestion how to solve this case, short of manually creating destination directories prior to first use?

    I was thinking of some intricate scheme of iterating through parent directories, rsyncing each level of parent directories while excluding all siblings, until descended to the source directory. While this could possibly work (untested), I doubt that this effort is actually meant to be necessary for this presumably rather common use case.

    Thank you for any suggestion.

    • Paul says:

      I’m now considering a refinement of the method above: rather than iterating through directories and rsyncing each level separately to create a single directory level at a time, I’m now considering creating the whole path in a temporary directory, rsyncing all of it in one go, then removing the temporary directories again.

      As a consequence, the whole path will have been created on the server, ready to commence the actual data transfer to its deepest-level directory. Less effort already, but still rather Rube Goldbergish – more ideas are still welcome.

      • Paul says:

        I’ve now, at least for the time being, settled on the described approach:

        * creation of temporary directory (mktemp -d)
        * creation of wanted path in temp directory (mkdir -p)
        * rsync the top-level directory of the wanted path from the temporary directory (rsync – this creates the wanted directory hierarchy on the server)
        * removal of temporary directory (rm -r)
        * rsync $path dest::module/$path # as the destination directory already exists, rsync proceeds without failure.

        While probably not the most elegant method, the additional complexity is manageable: no iterating through path components or other path decomposition, just plain use of path components that already exist for scripting purposes. So unless a better (working) idea pops up, I’m happy for now. Thank you all for bearing with me.
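
        A minimal shell sketch of the above (untested as written here; the module name, $path and $TMP are only placeholders):

        # TMP=$(mktemp -d)                               # 1. temporary directory
        # mkdir -p "$TMP/$path"                          # 2. recreate the wanted path locally
        # rsync -a "$TMP/${path%%/*}" server::backup/    # 3. create the empty hierarchy on the server
        # rm -r "$TMP"                                   # 4. remove the temporary directory
        # rsync -a "$path" server::backup/"$path"        # 5. destination path now exists, transfer proceeds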

    • Br. Bill says:

      The trick is to put trailing slashes on the paths:

      # rsync $options /my/files/source/ server::backup/files/target/
      

      Directory “source” will be named “target” at the receiving system. Provide the exact name you want.
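
      For comparison (paths here are only illustrative): without the trailing slash on the source, rsync creates a directory named “source” inside “target” rather than copying its contents into it:

      # rsync $options /my/files/source  server::backup/files/target/   # result: target/source/...
      # rsync $options /my/files/source/ server::backup/files/target/   # result: contents land directly in target/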

      • Paul says:

        Actually, there is a trailing slash in the source path; it was just hidden behind the variable $dir. A copy and paste of an actual and complete call is here (well, not quite actual, I simplified the options a bit, but the result is the same):

        # rsync -av /home/l/read/tips/ buffalo::backup/vpn/home/l/read/
        sending incremental file list
        rsync: mkdir "vpn/home/l/read" (in backup) failed: No such file or directory (2)
        rsync error: error in file IO (code 11) at main.c(656) [Receiver=3.1.1]
        (11:46:52) [email protected] ~ # 
        

        Using or not using a trailing slash makes no difference here. rsync succeeds as soon as the directory vpn/home/l/read is manually created under the directory representing the module backup. This is the case regardless of whether the destination is specified by module name or by canonical name over ssh.

        Another thought came up which I may have to try out: exclude everything via option, then include the topmost directory of the wanted path. The current “solution” is still to create destination directories manually when necessary.

        • Paul says:

          Regarding my previous comment: “then include the topmost directory” won’t do, as this transfers the unwanted siblings too. The whole path, down to the deepest source directory, would be needed.

          • Br. Bill says:

            I don’t know why it differs for you. I have no trouble with this and I do it all the time (Linux). Here’s one I just used last weekend, worked like a charm.

            # rsync -avh /eng/data/perforce/nh_perforce/ rack4::p4storage/engp4_nh_backup/
            
  3. vignesh says:

    Hi,

    If I set up rsync, will the files in the folder get synced automatically, or do I need to set up a cron job for that? Please help me out.

    • Ravi Saive says:

      @Vignesh,

      To auto-sync files/folders you must set up a cron job for rsync.
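
      For example, a crontab entry along these lines (paths are only placeholders) would run the sync every hour:

      # crontab -e
      0 * * * * rsync -a /source/folder/ /destination/folder/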

    • Paul says:

      You could have a look at lsyncd, which watches for modifications of files through inotify, then transfers the modified files using rsync.

      An alternative could be incrontab to do essentially the same. The latter is a more general approach, as incrontab can be used for actions other than rsync. Both methods will allow you to monitor changes and rsync them automatically shortly after they take place.
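
      As an illustration only (the watched directory, event mask and destination are my assumptions), an incrontab entry has the form <path> <events> <command>:

      # incrontab -e
      /home/l/read/tips IN_CLOSE_WRITE,IN_MOVED_TO rsync -a /home/l/read/tips/ buffalo::backup/vpn/home/l/read/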

      • Paul says:

        It seems comments get edited prior to being published. I noticed earlier that words were edited, unfortunately introducing errors that way. Same here: I originally wrote “incrontab“, not “in crontab”. While cron merely executes by time specification, incrontab allows inotify event specification, which is what I was actually referring to: https://linux.die.net/man/1/incrontab

        • Ravi Saive says:

          @Paul,

          Yes, comments are edited only if needed, but sorry, I mistook incrontab as “in crontab”. Thanks for the clarification about incrontab, I had never heard of it before.

          • Paul says:

            Thank you for correcting my original message another time. Please also change:

            “An alternative could be crontab to do essentially the same. Later is a more general approach, as in crontab can be used for other actions than rsync”

            to

            “An alternative could be incrontab to do essentially the same. Latter is a more general approach, as incrontab can be used for other actions than rsync”

            In earlier messages, I was already wondering several times why I had overlooked evident mistakes when rereading prior to sending – I initially thought that some autocorrection had gone wrong, and I have the impression that editing introduces more mistakes than it fixes.

          • Ravi Saive says:

            @Paul,

            Sorry for the trouble, I have corrected the line in the last comment as suggested.

  4. Jeff Nanas says:

    Hello, is there any way to produce an exit code if it sees that the source and destination are not the same while using the option --dry-run? Thanks

  5. Br. Bill says:

    When rsyncing from local to local, compressing only slows it down. Skip the -z option if you’re backing a local disk up to a local disk, because it’s just reading the whole file, compressing it, decompressing it, and writing the whole file. No need for the compression middleman if the sync isn’t over a network.
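
    For example, for a purely local backup something like this is enough (paths are placeholders); note there is no -z:

    # rsync -avh /data/projects/ /mnt/backup/projects/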

    • Ravi Saive says:

      @Bill,

      Thanks for the tip, I didn’t know this. Actually, I have never used rsync for local backups; I have always used it to sync remote servers with local ones. Anyway, thanks, I hope this will help other users who back up files locally.

  6. Raja says:

    Hi,

    With the rsync command I want to copy/sync only changed files/folders to the destination folder. I had an issue with rsync: whenever I execute the rsync command, the timestamp gets copied along as well.

    For example, my destination folder ‘Linux’ was last updated on 30th June and in my source folder there is no update to the Linux folder, but when I perform the rsync command my destination Linux folder’s timestamp gets updated to the source folder’s timestamp. I don’t want to copy the timestamp as well. Please advise me on this and please glance at the command below which I have used.

    rsync -avh /source/Linux/ /destination/Linux/

    • Ravi Saive says:

      @Raja,

      You can use the following command to sync only new or changed files over rsync to the destination folder:

      # rsync -uan /source/Linux/ /destination/Linux/
      
      • Raja says:

        Thanks for your reply.

        As per your above command, it does not work because the ‘n’ argument means dry-run (it does not do any file transfers; instead, it just reports the actions it would have taken).
        For your info, I am doing this rsync between two directories on the same server. Please help me on this.

        Simply put, I need to copy only the latest changed files/folders from the source to the destination (I don’t want the timestamp copied for anything that has not changed in the source).

        • Ravi Saive says:

          @Raja,

          Yes, the -n option is used to check the files, and once you confirm that the files are listed correctly on the dry-run, remove the -n option and run:

          # rsync -ua /source/Linux/ /destination/Linux/
          
          • Raja says:

            Hi Ravi,

            I have tried as per your suggestion, but it’s not working as per my requirement; the timestamp present on the source still gets copied over.

      • Raja says:

        Thanks for your reply.

        As per your above command, it doesn’t meet my requirement. I guess the ‘n’ argument is dry-run (it does not do any file transfers; instead it just reports the actions it would have taken).
        Please help me on this; I need to copy only the changed folders in the source, and I don’t want the timestamp copied when nothing has changed on the destination.

  7. Gaurav Parashar says:

    Hi Ravi,
    You mentioned in one of your examples that you can transfer the contents from source to destination securely over SSH when you use the “-e” option and specify ssh. May I know what protocol the transfer is done over when not using the ssh option?

    • Ravi Saive says:

      @Gaurav,

      Yes, I did mention that you can transfer files from one source to another securely using SSH, but it still all depends on which protocol you have configured in the sshd_config file; I suggest using Protocol 2 for better security in the SSH configuration file.

    • Paul says:

      rsync can talk to a remote rsync daemon without tunneling through ssh.
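
      Roughly, the two transports look like this (host, module and paths are placeholders); the single-colon form with -e goes over SSH, while the double-colon form talks to an rsync daemon directly:

      # rsync -avh -e ssh /source/Linux/ user@server:/destination/Linux/
      # rsync -avh /source/Linux/ server::backup/Linux/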

  8. daniel raiche says:

    rsync works great, but how do I get it to keep using the entire network bandwidth for really large files, 1TB+? I’m hitting 200 MB/s over a 10Gb Ethernet connection in the beginning, but sometimes it slows to 20 MB/s for no apparent reason and stays there.

    Is there a way to have it check or do an ACK reset to re-negotiate the link speed through the switch?

    • Ravi Saive says:

      @Daniel,

      The slowness happens because of file encryption while transferring files over SSH. If you have that much large data, you can reduce the encryption level or use an alternative tool like parsync (an rsync wrapper for larger data transfers).
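
      As a sketch of the first suggestion (the cipher shown is only an example; what is available depends on your OpenSSH build), a lighter SSH cipher can be chosen for the transfer:

      # rsync -avh -e "ssh -c aes128-ctr" /source/data/ user@server:/backup/data/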

    • Paul says:

      @Daniel, for the same reason, I now run rsync as a daemon on the server side, launched through xinetd. Now rsync can talk to rsyncd without ssh, and throughput went up considerably, while CPU load dropped.

      Security is of no concern in my case, as transfers take place in a trusted LAN, or go through a VPN (in which case I merely avoided double encryption, by SSH in addition to the VPN).

      A new problem was introduced by this though, for which I currently have no proper solution – I wrote about it just a few minutes ago in another comment here, asking for suggestions.
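
      For reference, a minimal setup along those lines (module name and path are my assumptions) is an /etc/xinetd.d/rsync entry that launches the daemon, plus a module in /etc/rsyncd.conf:

      # /etc/xinetd.d/rsync
      service rsync
      {
          disable         = no
          socket_type     = stream
          wait            = no
          user            = root
          server          = /usr/bin/rsync
          server_args     = --daemon
          log_on_failure  += USERID
      }

      # /etc/rsyncd.conf
      [backup]
          path = /srv/backup
          read only = no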

    • Paul says:

      @Daniel, regrettably I avoided addressing your actual issue: bandwidth limiting. Well, that’s possible with rsync running as a daemon too, by use of the option --bwlimit=RATE. Through this setting you can control the maximum amount of data transferred per time unit.
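
      For example (the rate is arbitrary; a bare number is read as KBytes per second, and rsync 3.1+ also accepts suffixes such as M):

      # rsync -avh --bwlimit=100M /source/data/ server::backup/data/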
