How to Recover Deleted Files in Linux Before They’re Gone

New to Linux commands? Our 100+ Essential Linux Commands course covers everything used in this guide and a lot more, with real examples and explanations.

You deleted a log file mid-session, a running process is still writing to it, and ls shows nothing. But the data is still there, and you can get it back before the process closes the file.

Most people assume rm is permanent, but all it actually does is remove the directory entry and decrement the file’s link count. If any process still has that file open, the kernel keeps the underlying inode alive until the last file descriptor pointing to it is closed.

That window between rm and process exit is your recovery window, and on a live production system, it’s often wide enough to save you.

How Linux File Deletion Actually Works

When you run rm on a file, the kernel unlinks it from the directory tree, which is why ls stops showing it. The disk blocks are not freed immediately, though, because the inode’s reference count stays above zero as long as any process holds an open file descriptor to it.

The moment the last process closes that descriptor or exits, the reference count drops to zero, and the kernel marks those blocks as free, which is when the data is truly gone.

So if your web server, database, or log aggregator is actively writing to a file you just deleted, the data is still intact on disk and accessible through a special path the kernel exposes in /proc.
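You can watch this behavior without root, using the shell’s own descriptor table as a stand-in for that server process. This is a minimal sketch; the scratch file comes from mktemp:

```shell
# Sketch of the unlink-vs-inode distinction, using the current shell's
# own file descriptor so no root or second process is needed.

tmp=$(mktemp)                  # throwaway scratch file
echo "still here" > "$tmp"

exec 3< "$tmp"                 # hold fd 3 open on the file
rm "$tmp"                      # directory entry removed; fd 3 keeps the inode alive

[ -e "$tmp" ] || echo "directory entry gone"
recovered=$(cat /proc/self/fd/3)   # the data is still readable through /proc
echo "recovered: $recovered"

exec 3<&-                      # closing the last descriptor frees the blocks
```

Running it prints "directory entry gone" followed by the original contents, which is exactly the window the rest of this guide exploits.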

If this is the first time you’ve heard that rm doesn’t immediately wipe data, share this with your team – it’s the kind of thing that saves a job at 2 am.

Find the Process Holding the File Open

The tool you need is the lsof command, which stands for “list open files” and shows every file descriptor currently held by running processes.

If it’s not installed, grab it first.

sudo apt install lsof         [On Debian, Ubuntu and Mint]
sudo dnf install lsof         [On RHEL/CentOS/Fedora and Rocky/AlmaLinux]

The sudo prefix runs the command with root privileges, which is required here because file descriptors belonging to other processes are not visible to unprivileged users.

Now search for the deleted file by name or by the string deleted in the output:

sudo lsof | grep deleted

Output:

nginx     1423  www-data   4w   REG  253,1  204800  131074 /var/log/nginx/access.log (deleted)
rsyslogd  1201      root   7w   REG  253,1  819200  131075 /var/log/syslog (deleted)

The columns you care about are the second column (PID), the fourth column (file descriptor number, here 4w and 7w), and the last column which confirms the file is marked (deleted).

If the file you deleted isn’t showing the word deleted, run lsof +L1 instead, which explicitly lists all files with a link count below 1.

sudo lsof +L1

Output:

COMMAND   PID     USER   FD   TYPE DEVICE SIZE/OFF NLINKS     NODE NAME
nginx    1423 www-data    4w   REG  253,1   204800      0   131074 /var/log/nginx/access.log (deleted)

The NLINKS column showing 0 confirms the directory entry is gone, but the kernel still holds the data.

Recover the File via /proc/fd

The kernel exposes every open file descriptor for every process under /proc/<PID>/fd/, where each descriptor appears as a symlink pointing to the original file path, even after deletion.

From the lsof output above, the nginx process has PID 1423 and file descriptor 4, so the path to the still-live data is /proc/1423/fd/4.
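Before copying, you can confirm what a descriptor points at with readlink. The sketch below inspects the shell’s own fd table, since the symlink format is the same for any process:

```shell
# Sketch: inspect a descriptor's symlink the way ls -l /proc/<PID>/fd would,
# using the current shell's own fd so it runs without root.

tmp=$(mktemp)
echo hi > "$tmp"
exec 7< "$tmp"
rm "$tmp"

target=$(readlink /proc/$$/fd/7)   # kernel appends " (deleted)" to the old path
echo "$target"

exec 7<&-
```

The " (deleted)" suffix in the symlink target is the same marker lsof reports in its NAME column.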

Copy it out with the cp command:

sudo cp /proc/1423/fd/4 /var/log/nginx/access.log.recovered

No output means the copy succeeded.

Verify the recovered file has content:

ls -lh /var/log/nginx/access.log.recovered

Output:

-rw-r--r-- 1 root root 200K May  6 03:14 /var/log/nginx/access.log.recovered

If you see a file size matching what you expected, the data is intact. You can now move it back to the original path or hand it off to whatever tool needs it.

If the running process is still writing to the deleted descriptor, the data behind /proc/<PID>/fd/4 keeps growing, but none of those new writes land in the file you just copied, because cp takes a snapshot at copy time.

So if the process is a log writer you care about, you’ll also want to restart it after restoring the file so it opens a proper linked file again.
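The snapshot behavior is easy to demonstrate in a shell; in this sketch, fd 4 stands in for the process’s write descriptor, and the byte counts show the copy stops growing while the live inode does not:

```shell
# Sketch: the copy is a snapshot; later writes reach the inode, not the copy.

tmp=$(mktemp); snap=$(mktemp)
printf 'line1\n' > "$tmp"          # 6 bytes

exec 4>> "$tmp"                    # hold an append-mode write descriptor
rm "$tmp"

cp /proc/$$/fd/4 "$snap"           # point-in-time copy of the deleted inode
printf 'line2\n' >&4               # the live inode keeps growing

snap_size=$(wc -c < "$snap")       # still 6
live_size=$(wc -c < /proc/$$/fd/4) # now 12
echo "snapshot=$snap_size live=$live_size"

exec 4>&-
```

This is why you restart the log writer after recovery: the snapshot and the live descriptor diverge from the moment you run cp.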

If you see Permission denied when accessing /proc/<PID>/fd/4, either run the command as root with sudo or check that the PID still exists with:

ps -p <PID>

Want to build confidence working with Linux file systems, inodes, and process internals at this level? The 100+ Essential Linux Commands course on Pro TecMint covers lsof, stat, find, and dozens of other commands with the kind of depth that makes these recoveries feel routine.

Recover Deleted Files Fast with Shell Script

When you’re staring at a broken production system at midnight, typing multiple commands in sequence is error-prone, so here’s a small shell function that wraps the whole lookup and copy into one step.

Give it the name of the deleted file, and it handles the rest.

recover_deleted() {
  local filename="$1"
  local output="${2:-/tmp/recovered_file}"
  local result

  # Find a process that still holds the deleted file open.
  result=$(sudo lsof +L1 2>/dev/null | grep "$filename")

  if [[ -z "$result" ]]; then
    echo "No process holds $filename open. Data may already be gone."
    return 1
  fi

  # Take the first match: column 2 is the PID, column 4 is the FD with its
  # access-mode suffix (e.g. 4w), which tr strips down to the number.
  local pid fd
  pid=$(echo "$result" | awk 'NR==1{print $2}')
  fd=$(echo "$result" | awk 'NR==1{print $4}' | tr -d 'rwu')

  echo "Found: PID=$pid FD=$fd"
  sudo cp /proc/"$pid"/fd/"$fd" "$output" && echo "Recovered to $output"
}

Drop that into your ~/.bashrc or a shared sysadmin dotfile and source it.

Then call it as:

recover_deleted /var/log/nginx/access.log /var/log/nginx/access.log.recovered

Output:

Found: PID=1423 FD=4
Recovered to /var/log/nginx/access.log.recovered

The awk NR==1 picks the first matching line in case multiple processes hold the same file open, and tr -d 'rwu' strips the access-mode suffix (r, w, or u) from the FD field, leaving a clean integer for the /proc path.

Tip: If you have multiple processes holding the same deleted file open, recover from the one with the largest file size, since it likely has the most complete data. Check sizes with the following command for each candidate.

sudo ls -lh /proc/<PID>/fd/<FD>
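If you want to script that comparison, a small helper can pick the best candidate automatically. pick_largest below is a hypothetical function, not part of lsof; it takes /proc/<PID>/fd/<FD> paths as arguments and prints the one backed by the most data:

```shell
# Hypothetical helper: print whichever candidate /proc path has the largest
# file behind it. stat -L follows the fd symlink to the (deleted) inode.
pick_largest() {
  local best="" best_size=-1 p size
  for p in "$@"; do
    size=$(stat -Lc %s "$p" 2>/dev/null) || continue
    if [ "$size" -gt "$best_size" ]; then
      best=$p
      best_size=$size
    fi
  done
  [ -n "$best" ] && printf '%s\n' "$best"
}
```

With the lsof output from earlier, pick_largest /proc/1423/fd/4 /proc/1201/fd/7 would print the path holding the bigger inode (run it with sudo if the descriptors belong to other users).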

If you’ve ever written a recovery script like this and saved a production incident, share this article with your team before they need it.

What If the Process Already Closed the File?

Once every process that had the file open exits or closes the descriptor, the kernel frees the disk blocks, and the data is gone from /proc as well.

At that point, you’re in forensic recovery territory, using tools like extundelete for ext3/ext4 filesystems or testdisk and photorec for broader filesystem support. Those tools work from raw disk blocks and offer no guarantee of success after heavy write activity.

The /proc method is fast and reliable precisely because the kernel is handing you back live data, while the forensic tools are reassembling it from fragments.

Warning: Never run extundelete or any recovery tool on a mounted read-write filesystem. Unmount the partition first, or boot from a live USB, or you risk the filesystem overwriting the very blocks you’re trying to recover.

Prevent Accidental Deletion With Hard Links or Bind Mounts

If you’re managing a system where a process writes logs to a single path and you need those logs to survive even an accidental rm, the cleanest safeguard is a hard link: an additional directory entry pointing to the same inode, so a single rm can never drop the link count to zero.

ln /var/log/nginx/access.log /var/log/nginx/access.log.hardlink

Confirm both entries point to the same inode:

ls -li /var/log/nginx/access.log /var/log/nginx/access.log.hardlink

Output:

131074 -rw-r--r-- 2 www-data www-data 204800 May  6 03:14 /var/log/nginx/access.log
131074 -rw-r--r-- 2 www-data www-data 204800 May  6 03:14 /var/log/nginx/access.log.hardlink

Both entries show the same inode number 131074 and a link count of 2, meaning rm /var/log/nginx/access.log would drop the count to 1 and leave the data fully intact at the hardlink path.

Hard links work only within the same filesystem, so if you need cross-filesystem protection, a bind mount with mount --bind achieves a similar effect.
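Here is the protection in action, as a self-contained sketch in a throwaway directory:

```shell
# Sketch: a second hard link keeps the data alive through an accidental rm.
dir=$(mktemp -d)
echo "important log line" > "$dir/app.log"

ln "$dir/app.log" "$dir/app.log.hardlink"   # same inode, link count now 2
rm "$dir/app.log"                           # link count drops to 1, not 0

links=$(stat -c %h "$dir/app.log.hardlink") # remaining link count
echo "links=$links"
cat "$dir/app.log.hardlink"                 # data fully intact
```

After the rm, the hardlink path still reads back every byte, and stat confirms the inode survived with one link remaining.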

If you want to go deeper on file system internals, inodes, and how Linux manages disk storage under the hood, the Learn Linux in 7 Days course on Pro TecMint covers the fundamentals in a way that makes this kind of recovery second nature.

Conclusion

The /proc/<PID>/fd/ path is one of those Linux mechanisms that feels like a cheat code the first time you use it, and once you’ve saved a production incident with it you’ll never think of rm the same way again.

You now know how to find the right PID and file descriptor with lsof +L1, copy the live data out before the process exits, and wrap the whole thing in a shell function so it’s one command when you’re running on adrenaline at 3 am.

Right now, open a terminal on your Linux system and try this yourself: create a test file, open it in tail -f in one terminal, delete it with rm in another, then run lsof +L1 | grep test and find the live descriptor.

Copy it out with the following command, and confirm the content matches what you put in the original file. Running it once in a non-critical environment means the first time you need it for real, your hands will already know the steps.

cp /proc/<PID>/fd/<FD> /tmp/recovered
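The whole drill can also be scripted end to end. This version skips lsof and finds the descriptor by scanning /proc directly, with tail -f standing in for the process that still holds the file open; all paths here are throwaway examples:

```shell
# Scripted recovery drill: delete a file that tail -f holds open, find the
# live descriptor under /proc, and copy the data back out.

f=$(mktemp)
echo "practice data" > "$f"

tail -f "$f" > /dev/null &     # the process still holding the file open
tpid=$!
sleep 1                        # give tail a moment to open the file
rm "$f"

# Scan tail's fd table for the symlink marked "(deleted)".
fd_path=""
for link in /proc/$tpid/fd/*; do
  case $(readlink "$link") in
    "$f (deleted)") fd_path=$link ;;
  esac
done

out=$(mktemp)
cp "$fd_path" "$out"
kill "$tpid"

cat "$out"                     # prints: practice data
```

Run it once in a non-critical environment and you have rehearsed every step of the real recovery, minus the adrenaline.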

How do you currently guard against accidental deletion on your production systems? cp backups, snapshots, hard links, something else entirely? Drop your setup in the comments – always curious what’s working in the wild.

Gabriel Cánepa
Gabriel Cánepa is a GNU/Linux sysadmin and web developer from Villa Mercedes, San Luis, Argentina. He works for a worldwide leading consumer product company and takes great pleasure in using FOSS tools to increase productivity in all areas of his daily work.



