How to Clear RAM Memory Cache, Buffer and Swap Space on Linux

37 Responses

  1. Peter P. says:

    Frankly I am amazed. Above you wrote up the one reason why anyone would do such a thing as cleaning the cache on Linux: testing – especially benchmarking. Then you go ahead and explain how to set up a cron job that cleans the cache every night.

    What is the point of that? Any newbie reading this will think that cleaning the cache (or even reconnecting the swap partition) is a good thing to do for administration purposes, like clearing the disk cache for Internet Explorer on a Windows machine.

    It isn’t. The explanation of why it is not is in your article, but the way it is mentioned, embedded in instructions on how to do it anyway, is likely to mislead newbies, so please allow me to explain.

    Yes, there are some applications around that hog memory so badly that system memory may be eaten up and the system starts migrating memory pages onto the swap partition. Firefox comes to mind, as it can become a problem when running with only 2 GB of system memory.

    Even if you close the tabs of especially memory-hungry web pages (eBay is a really bad offender here), not all of that memory will be released as it should be. Keep in mind that this is a problem of the application and not of Linux. This means you won’t get that memory back by fiddling with the OS, like dropping the cache; the intervention required is to do something about Firefox.

    The only way I know of to get the memory back is to terminate the offending process, i.e. Firefox. A notable exception are databases, which can seem to hog memory if they are not properly configured (as opposed to poor memory management within the application), but even then you’ll need to look at your database first (keeping in mind that ‘Database Administrator’ is a job description for a reason). Whatever you do, purging the cache won’t help.

    So yes, what I am saying is that the premise in the second sentence of this article is false. If you have a process that is eating up your memory, purging the cache won’t even touch it while the process is running.

    Terminating the process will release the memory. Sometimes you can even observe the kernel discarding most of the memory claimed by such a terminated process outright, i.e. it doesn’t even keep it in the cache.

    If the process claimed enough memory, it may have displaced a lot of essential pages from memory into swap space, causing the computer to run slower for a while until those pages are read back in. Now, if you are on your desktop at home, you may want to follow the instructions above, run ‘swapoff -a && swapon -a‘, get a cup of tea, and when you are back your computer will be fast again.

    If you don’t like tea, you may just want to continue what you have been doing without reconnecting your swap, as it probably won’t take long for the memory to migrate back anyway. NOT reconnecting swap has the advantage that only the pages that are actually needed will be placed back into memory (my preferred choice). So: reconnecting swap will consume more system resources overall than letting the kernel deal with it.

    Do not reconnect swap on a live production system unless you really think you know what you are doing. But then I shouldn’t have to say this, as you would find it out anyway while doing the research and testing you should be doing before touching a live production system.
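
    To put the swap part in concrete terms, the reconnect cycle being described is just the following, run as root. This is a sketch for a home desktop, not a recommendation for production, and it only makes sense if ‘free’ shows enough spare RAM to absorb everything currently sitting in swap:

    # check how much swap is in use and how much RAM is actually free
    free -m
    # disable every swap area (forcing swapped pages back into RAM), then re-enable them
    swapoff -a && swapon -a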

    Here is another thought. Maybe the cache-drop fallacy comes from the way memory usage is traditionally accounted for on Linux systems. For example, if you open ‘top‘ in a terminal and look at the row where it says ‘Mem‘, there are entries for ‘free‘ and ‘used‘ memory.

    Now the figure for used memory always includes the memory used for caching and buffering. The free memory is the memory that is not used at all. So if you want to know the memory used by the OS and applications, subtract the buffer and cache values from the used memory and you’ll get the resident footprint of the applications.

    If you don’t know that and only look at the amount of free memory, you may think you are actually running out of physical memory, but as long as there is plenty of memory used by the cache this is not true. If you drop the cache as described above, top will report all that memory as free, but this is really not what you wanted – unless you are testing or benchmarking (see Ole Tange’s post here for an example).
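
    If you want to check those numbers without doing the arithmetic in your head, something like this works (depending on your procps version, ‘used’ may or may not already exclude buffers and cache, so the raw /proc/meminfo counters are the unambiguous source):

    # summary as top and free report it
    free -m
    # the raw counters both tools are built from (values in kB)
    grep -E '^(MemTotal|MemFree|Buffers|Cached)' /proc/meminfo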

    Now the policy of the Linux kernel is to use as much of the memory as it can for something useful. First priority obviously goes to OS and application code. All the rest is used for buffer/cache (more on that here: http://stackoverflow.com/questions/6345020/linux-memory-buffer-vs-cache).

    It’s written above in the article but I’ll say it here again: the data in the cache are copies of files stored on your main drive. They are kept there just in case they are needed again, so they are available a lot quicker than if they had to be read from the drive again.

    • If you drop it and it is needed again, it will have to be read from the slow drive again. The only effect this has is that it makes your system slower by the amount of time it takes to replace the formerly cached pages.
    • If the memory space is needed by an application, the kernel will drop the required pages itself – but only as many as are required, and only those it thinks are least likely to be needed again. This takes only small fractions of a microsecond and obviously keeps the rest of the cache intact to be used for what it’s there for.

    This is good and you want to keep it that way on any kind of Linux installation. Unless you are testing (in a test environment). Or just playing around and learning something new, and for that your article is brilliant!

    P.S.: I noticed some people here trying to flush the cache while they are obviously having problems with limited memory. As I said before, this is something to look at at the application level rather than the OS level.

    A good first step in finding out what is causing the memory bottleneck is to use ‘top‘. Enter this at the command line and press Shift+m (or M if you like). This will sort the list of processes running on your system by their resident memory footprint. The column you need to look at is ‘RES‘ for resident memory (that is, the portion of a process’s memory that is actually held in physical RAM).
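
    If you prefer a one-shot listing over the interactive top view, roughly the same information can be pulled with ps (assuming a procps-style ps, as on most Linux distributions):

    # largest processes by resident set size (RSS, in kB)
    ps -eo pid,comm,rss --sort=-rss | head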

    You’ll soon see which process is causing the most problems. There is no single answer as to what to do next. A process like Firefox may be restarted. If the memory problem is caused by virtual machines, the host memory is probably overcommitted, i.e. the combined allocated memory of all VMs exceeds the total amount of physical memory on the host, or at least doesn’t leave it any space to run itself.

    If this is a real problem on your box, you could try to reduce the allocatable memory for each VM. If that is not an option, the cheapest and easiest way to solve this problem is usually to stick some more memory into the box (if running KVM, a good start is reading this: http://www.linux-kvm.org/page/Memory).

    It could be that the application you are running has a bug in memory management (try upgrading or downgrading, and consider filing a bug report). It could be that the application just requires more memory than you have installed (image/video editing apps come to mind, where the amount of required memory depends on the size of the files you are working on).

    Again, if this is an ongoing issue you won’t get around upgrading your memory. Enterprise-grade databases (Oracle etc.) are harder to advise on here. As they can do their own cache management, you won’t necessarily see what’s really going on with top, and just throwing ‘more tin’ at it, i.e. installing more memory, may make very little difference.

    Read an introduction to profiling for the specific database you are running. If you don’t have one already, set up a test machine (with hardware as similar as reasonably possible to the production one), copy the configuration and data set over from your production box, set up test scenarios that hopefully replicate some of the peak use cases, and take it from there. Distributed apps like Apache, enterprise-grade accounting software, you name it, have their own specific requirements.

    Whatever else you do, look at the documentation of the app.

    Once you understand what’s going on, there are a few (advanced) things that can be done at the OS level. One example is setting up cgroups to control the ‘swappiness‘ of certain sets of applications (read http://unix.stackexchange.com/questions/10214/how-to-set-per-process-swapiness-for-linux/10227#10227).
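
    As a rough illustration only (this assumes the cgroup v1 memory controller mounted at /sys/fs/cgroup/memory, which exposes memory.swappiness; cgroup v2 handles this differently, and 1234 is a made-up PID), the idea looks like this:

    # create a group with a low swappiness and move one process into it (as root)
    mkdir /sys/fs/cgroup/memory/lowswap
    echo 10 > /sys/fs/cgroup/memory/lowswap/memory.swappiness
    echo 1234 > /sys/fs/cgroup/memory/lowswap/cgroup.procs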

    If you consider setting this up on a production system you do want to set it up in a test environment first and make sure it does what you want it to do. You have been warned.

  2. Tri Akasah says:

    Hey there,

    Is it okay for an SSD server to clear only the pagecache via an hourly cron job?

    Thanks

  3. Emmanuel Babrieux says:

    Clearing the cache is definitely useless.

    The cache you see there is just the in-memory content of disk files (a basic speed enhancement of access for ext2/3/4). The goal is to speed up access to commonly used files WHEN memory is available (RAM not otherwise in use).

    So Linux automatically releases this “cache” when a process needs memory. What you do by “resetting” the cache content is remove those files’ content from RAM and ask your system to read from disk instead (disk accesses are slower, so your application’s performance will suffer; it is precisely for performance that the Linux EXT filesystems do this).

    So for me this is useless (it is automatically managed by the OS) and could even lead to performance issues for high-load applications…

  4. prashant says:

    Hey, I have working servers on VMs, but after some days they get slow, so I have to reboot every 7 days or whenever we face an error.

    There are no user logins after 11 am, so can I use echo 3, together with turning swap off and on, to clear RAM and swap?

    • Ravi Saive says:

      @Prashant,

      Yes, you can use the echo 3 command to clear your RAM cache and buffers to free up some space so the server functions properly.
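
      For reference, the sequence being discussed looks roughly like this when run as root (sync first so that dirty pages are written to disk before the caches are dropped):

      # flush dirty pages, then drop pagecache, dentries and inodes
      sync; echo 3 > /proc/sys/vm/drop_caches
      # optionally force swapped-out pages back into RAM as well
      swapoff -a && swapon -a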

      • prashant says:

        @Ravi
        I tried it and the echo worked fine, but some VMs started using swap memory, i.e. “swapoff -a && swapon -a” gave an error regarding the volume group.

        I will post it when I get the same error again. Anyway, thanks – mainly I use echo 1, but I still need to restart some VMs.

        Let’s see if echo 3 can fix that issue.

  5. mssupport says:

    This is not working for us. Could you provide an alternative?

  6. Ole Tange says:

    When you would use this:

    When measuring performance it can be important to do that in a reproducible way. Caches can often mess up these results.

    So one of the situations where you would drop all caches is when you have more than one way to do the same thing and are trying to figure out which is the fastest:

    echo 3 | sudo tee /proc/sys/vm/drop_caches
    time do_the_thing version1
    echo 3 | sudo tee /proc/sys/vm/drop_caches
    time do_the_thing version2
    
  7. Gonzalo Oviedo Lambert says:

    Very clear explanation. Thank you.

  8. Bee Kay says:

    Oh, that was fun. I like getting 10GB of RAM back in one command…!

  9. Pavel Pulec says:

    Are you sure that it can corrupt the database? I think the database might become pretty slow, but no file should be corrupted.

  10. Viril Calimlim says:

    Hi Avishek,

    Great article. Just a little correction maybe on the crontab entry. Is it really 2pm? Cheers! :)
