The Linux Foundation announced the LFCS (Linux Foundation Certified Sysadmin) certification, a new program that aims to help individuals around the world become certified in basic to intermediate system administration tasks for Linux systems. These include supporting running systems and services, first-hand troubleshooting and analysis, and making smart decisions about when to escalate issues to engineering teams.
Please watch the following video, which introduces The Linux Foundation Certification Program.
The series will be titled Preparation for the LFCS (Linux Foundation Certified Sysadmin) Parts 1 through 10 and will cover the following topics for Ubuntu, CentOS, and openSUSE:
Important: Due to changes in the LFCS certification requirements effective Feb. 2, 2016, we are including the following necessary topics in the LFCS series published here. To prepare for this exam, you are highly encouraged to use the LFCE series as well.
This post is Part 1 of a 20-tutorial series, which will cover the necessary domains and competencies that are required for the LFCS certification exam. That being said, fire up your terminal, and let’s start.
Processing Text Streams in Linux
Linux treats the input to and the output from programs as streams (or sequences) of characters. To begin understanding redirection and pipes, we must first understand the three most important types of I/O (Input and Output) streams, which are in fact special files (by convention in UNIX and Linux, data streams and peripherals, or device files, are also treated as ordinary files).
The difference between > (redirection operator) and | (pipeline operator) is that while the former connects a command with a file, the latter connects the output of one command to the input of another command.
# command > file
# command1 | command2
Since the redirection operator creates or overwrites files silently, we must use it with extreme caution, and never mistake it for a pipeline. One advantage of pipes on Linux and UNIX systems is that there is no intermediate file involved with a pipe – the stdout of the first command is not written to a file and then read by the second command.
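The difference is easy to verify in a terminal. The following sketch (the file path is just an illustration) contrasts `>`, which creates or truncates a file, with `>>`, which appends, and shows a pipe passing data directly between commands:

```shell
# '>' creates or truncates the target file; '>>' appends instead.
echo "first line" > /tmp/stream_demo.txt
echo "second line" >> /tmp/stream_demo.txt
cat /tmp/stream_demo.txt
# prints:
# first line
# second line

# A pipe connects the stdout of one command to the stdin of the next;
# no intermediate file is written to disk.
echo "hello world" | wc -w
# word count: 2
```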
For the following practice exercises we will use the poem “A happy child” (anonymous author).
The name sed is short for stream editor. For those unfamiliar with the term, a stream editor is used to perform basic text transformations on an input stream (a file or input from a pipeline).
The most basic (and popular) usage of sed is the substitution of characters. We will begin by changing every occurrence of the lowercase y to UPPERCASE Y and redirecting the output to ahappychild2.txt. The g flag indicates that sed should perform the substitution for all instances of term on every line of file. If this flag is omitted, sed will replace only the first occurrence of term on each line.
# sed 's/term/replacement/flag' file
# sed 's/y/Y/g' ahappychild.txt > ahappychild2.txt
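If you don't have the poem file handy, the effect of the g flag can be tried on any sample line:

```shell
# Without the g flag, only the first 'y' on each line is replaced...
echo "happy day, merry day" | sed 's/y/Y/'
# → happY day, merry day

# ...with g, every 'y' on each line is replaced.
echo "happy day, merry day" | sed 's/y/Y/g'
# → happY daY, merrY daY
```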
Should you want to search for or replace a special character (such as /, \, &) you need to escape it, in the term or replacement strings, with a backslash.
For example, we will replace the word and with an ampersand. At the same time, we will replace the word I with You when it appears at the beginning of a line.
# sed 's/and/\&/g;s/^I/You/g' ahappychild.txt
In the above command, a ^ (caret sign) is a well-known regular expression that is used to represent the beginning of a line.
As you can see, we can combine two or more substitution commands (and use regular expressions inside them) by separating them with a semicolon and enclosing the set inside single quotes.
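The same combined script can be tested on a throwaway line (the sample text is just an illustration):

```shell
# Two substitutions in one script, separated by a semicolon:
# replace every 'and' with '&' (escaped, since & is special in
# the replacement), and a leading 'I' with 'You'.
printf 'I like tea and cake\n' | sed 's/and/\&/g;s/^I/You/g'
# → You like tea & cake
```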
Another use of sed is showing (or deleting) a chosen portion of a file. In the following example, we will display the first 5 lines of /var/log/messages from Jun 8.
# sed -n '/^Jun 8/ p' /var/log/messages | sed -n 1,5p
Note that by default, sed prints every line. We can override this behaviour with the -n option and then tell sed to print (indicated by p) only the part of the file (or the pipe) that matches the pattern (Jun 8 at the beginning of line in the first case and lines 1 through 5 inclusive in the second case).
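Both addressing styles can be tried on generated input, so you don't need a real /var/log/messages at hand:

```shell
# Print only lines 1 through 5 of a 10-line stream.
seq 10 | sed -n '1,5p'

# Print only the lines that match a pattern at the beginning of line.
printf 'Jun 8 start\nJun 9 other\nJun 8 stop\n' | sed -n '/^Jun 8/p'
# → Jun 8 start
# → Jun 8 stop
```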
Finally, when inspecting scripts or configuration files it can be useful to view the code itself and leave out comments and blank lines. The following sed one-liner deletes (d) blank lines or those starting with # (the | character indicates a boolean OR between the two regular expressions).
# sed '/^#\|^$/d' apache2.conf
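The same one-liner works on any config-style stream; here is a self-contained sketch with made-up content (note that the \| alternation is a GNU sed extension, so this form may not work on BSD/macOS sed):

```shell
# Delete comment lines (starting with #) and blank lines.
printf '# a comment\n\nListen 80\n# another\nServerName demo\n' | sed '/^#\|^$/d'
# → Listen 80
# → ServerName demo
```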
The uniq command allows us to report or remove duplicate lines in a file, writing to stdout by default. We must note that uniq does not detect repeated lines unless they are adjacent. Thus, uniq is commonly used along with a preceding sort (which is used to sort lines of text files). By default, sort takes the first field (separated by spaces) as key field. To specify a different key field, we need to use the -k option.
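The adjacency requirement and the -k option are easy to demonstrate with a few made-up lines:

```shell
# uniq only removes *adjacent* duplicates...
printf 'b\na\nb\n' | uniq          # still 3 lines
# ...so sort first to bring duplicates together.
printf 'b\na\nb\n' | sort | uniq   # → a, b

# sort -k2 sorts by the second whitespace-separated field.
printf 'x 3\ny 1\nz 2\n' | sort -k2
# → y 1
# → z 2
# → x 3
```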
The du -sch /path/to/directory/* command returns the disk space usage of the subdirectories and files within the specified directory in human-readable format (the -c flag also shows a grand total), but it does not order the output by size – it orders by subdirectory and file name. We can use the following command to sort by size.
# du -sch /var/* | sort -h
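Since du's output varies from system to system, the effect of sort -h (a GNU sort extension) can be shown on a fixed sample of human-readable sizes instead:

```shell
# sort -h orders human-readable size suffixes correctly
# (a plain lexical sort would put 1G before 512K).
printf '2M\n1G\n512K\n' | sort -h
# → 512K
# → 2M
# → 1G
```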
You can count the number of events in a log by date by telling uniq to perform the comparison using the first 6 characters (-w 6) of each line (where the date is specified), and prefixing each output line by the number of occurrences (-c) with the following command.
# cat /var/log/mail.log | uniq -c -w 6
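A self-contained sketch with made-up log lines shows what -c and -w do (the sample entries are illustrative, not real mail.log output):

```shell
# Count adjacent lines whose first 6 characters (the date field) match.
printf 'Jun 10 login ok\nJun 10 login failed\nJun 11 reboot\n' | uniq -c -w 6
# Two output lines: a count of 2 for Jun 10, and 1 for Jun 11.
```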
Finally, you can combine sort and uniq (as they usually are). Consider the following file with a list of donors, donation date, and amount. Suppose we want to know how many unique donors there are. We will use the following command to cut the first field (fields are delimited by a colon), sort by name, and remove duplicate lines.
# cat sortuniq.txt | cut -d: -f1 | sort | uniq
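If you want to reproduce the result without the original file, here is the same pipeline fed with hypothetical sortuniq.txt contents (donor:date:amount per line):

```shell
# Keep field 1 (the donor name), sort, and drop duplicates.
printf 'alice:2014-01-01:100\nbob:2014-02-15:50\nalice:2014-03-20:25\n' \
  | cut -d: -f1 | sort | uniq
# → alice
# → bob
```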
Read Also: 13 “cat” Command Examples
grep searches text files (or command output) for the occurrence of a specified regular expression and outputs any line containing a match to standard output.
Display the information from /etc/passwd for user gacanepa, ignoring case.
# grep -i gacanepa /etc/passwd
Show all the contents of /etc whose name begins with rc followed by any single number.
# ls -l /etc | grep 'rc[0-9]'
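The bracket expression can be tested against a fixed list of names (the file names below are illustrative). Quoting the pattern keeps the shell from expanding [0-9] as a filename glob:

```shell
# Match 'rc' followed by exactly one digit.
printf 'rc0.d\nrc6.d\nrcS.d\nresolv.conf\n' | grep 'rc[0-9]'
# → rc0.d
# → rc6.d
```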
Read Also: 12 “grep” Command Examples
tr Command Usage
The tr command can be used to translate (change) or delete characters from stdin, and write the result to stdout.
Change all lowercase characters to uppercase in the sortuniq.txt file.
# cat sortuniq.txt | tr '[:lower:]' '[:upper:]'
Squeeze the delimiter in the output of ls -l to only one space.
# ls -l | tr -s ' '
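Both tr usages can be tried on any sample string; the character classes are quoted so the shell leaves them alone:

```shell
# Translate every lowercase letter to its uppercase counterpart.
echo "make it loud" | tr '[:lower:]' '[:upper:]'
# → MAKE IT LOUD

# -s squeezes each run of the given character into a single occurrence.
echo "a    b  c" | tr -s ' '
# → a b c
```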
cut Command Usage
The cut command extracts portions of input lines (from stdin or files) and displays the result on standard output, based on number of bytes (-b option), characters (-c), or fields (-f). In this last case (based on fields), the default field separator is a tab, but a different delimiter can be specified by using the -d option.
Extract the user accounts and the default shells assigned to them from /etc/passwd (the -d option allows us to specify the field delimiter, and the -f switch indicates which field(s) will be extracted).
# cat /etc/passwd | cut -d: -f1,7
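Since /etc/passwd differs per system, the same extraction can be shown on a single sample record (the account below is illustrative):

```shell
# A /etc/passwd-style record; fields 1 and 7 are the account
# name and the login shell.
echo 'gacanepa:x:1000:1000:Gabriel:/home/gacanepa:/bin/bash' | cut -d: -f1,7
# → gacanepa:/bin/bash
```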
Summing up, we will create a text stream consisting of the first and third non-blank fields of the output of the last command. We will use grep as a first filter to check for sessions of user gacanepa, then squeeze delimiters to a single space (tr -s ' '). Next, we'll extract the first and third fields with cut, and finally sort by the second field (the IP addresses in this case), removing duplicates.
# last | grep gacanepa | tr -s ' ' | cut -d' ' -f1,3 | sort -k2 | uniq
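Because the output of last depends on your login history, here is a reproducible stand-in with made-up session lines (user, tty, source IP) fed through the same pipeline:

```shell
# Simulated `last` output; the third line duplicates the first
# except for extra whitespace, which tr -s normalizes away.
printf 'gacanepa pts/0 10.0.0.1\ngacanepa pts/1 10.0.0.2\ngacanepa   pts/0   10.0.0.1\n' \
  | grep gacanepa | tr -s ' ' | cut -d' ' -f1,3 | sort -k2 | uniq
# → gacanepa 10.0.0.1
# → gacanepa 10.0.0.2
```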
The above command shows how multiple commands and pipes can be combined to filter data exactly as we need. Feel free to also run it in parts, to see the output that is piped from one command to the next (this can be a great learning experience, by the way!).
Although these examples (along with the rest of the examples in the current tutorial) may not seem very useful at first sight, they are a nice starting point to begin experimenting with commands that are used to create, edit, and manipulate files from the Linux command line. Feel free to leave your questions and comments below – they will be much appreciated!
43 thoughts on “LFCS: How to use GNU ‘sed’ Command to Create, Edit, and Manipulate files in Linux – Part 1”
Hi, I am planning for the LFCS exam. Please advise what needs to be done to crack this exam?
I know there was a refresh for the LFCS 2018 exam.
I’m wondering if anyone has taken it or used tecmint to study for it via this guide? It looks pretty good to me overall, but I’m wondering if there are some updated topics that aren’t covered in the guide?
Yes, LFCS and LFCE have both been updated to new exam objectives effective from April 10, 2018. It’s not possible to update each article with the latest exam updates, so we have put everything together, with updates, in our book, which you can find here – https://www.tecmint.com/linux-foundation-lfcs-lfce-certification-exam-book/
Thank you for your help, and for this great series created for the LFCS exam! I just passed the exam and all of the information found here is valuable and extremely helpful. Great job !!
Thank you for your comment! Would you mind sharing your certificate id?
What is the nature of the exam. I do not want dumps, just asking about the type of questions?
Traian Sandu, can you please recommend any other free material for the LFCS exam? Congrats on your achievement!
First of all, thank you for such a wonderful guide on LFCS. I’m brushing up on my Linux and will probably go through your LFCE series as well, and perhaps go for the cert someday.
Question: Since the exam lets you choose the distribution you test with, can we simply focus on the specific ways of doing things? Let’s say I plan to test on CentOS – can I simply skip the Debian and openSUSE commands to cut down on reading time?
Thanks for your kind words about this guide! Yes, if you’re planning on taking the exam using a specific distro, you can skip the commands of the others to cut down on reading time. Please let us know if you decide to take the exam!
Hi Gabriel, is the information in these tutorials the same in the ebook that is offered here or does the ebook cover more information.
The PDF version contains several corrections that were pointed out by the readers of these tutorials. Other than that, it’s exactly the same material.
Today I passed my exam and I’d like to say thank you for such a great series! It was my main source for learning. Awesome work.
Congratulations for cracking the LFCS exam and thanks for finding the series helpful…
Congratulations! Please help us spread the word by sharing this series via your social network profiles!
Thank you for the great work.
thanks for this article series. Are the lab files available somewhere?
Thank you for your comment. No, the lab files are not available. To be honest, I didn’t think of saving them. Will keep the idea in mind for future tutorials, though.
A thank you from my side. I am doing the LFCS course and preparing the certification next year.
The tutorial is a good complement.
I am glad to hear that this series is a good complement to prepare for the certification. Did you already take the exam? If so, would you be as kind as to share your experience with us?
Thanks for this great job!!
I only used this material to prepare the LFCS exam and it helped me a lot. Exam passed!!
Only the LVM section is missing, which I learned from the Ubuntu page. But I will definitely recommend this training series!
I am so glad to hear that you used the LFCS series to prepare for the exam. Yes, LVM is not included because when I first wrote the series that topic was not in the domains and competencies required for the exam. Eventually, LVM replaced RAID, so when I review these articles soon I’ll make sure to make that change.
Please help us by spreading the word. That is the best thank-you you can give us :).