Necessary Linux / Unix Commands
There are plenty of articles on the web about Linux commands. Here I have listed some of my favorite ones. These commands are a little more advanced than those covered in general blog posts, but they are very useful.
- Check the top resource-consuming processes
ps -eo pid,user,ppid,cmd,%mem,%cpu --sort=-%cpu | head
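A minor variant sorts by memory instead of CPU, which is handy when hunting memory hogs:
ps -eo pid,user,ppid,cmd,%mem,%cpu --sort=-%mem | head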
- Start a service at startup in CentOS/Red Hat 7
systemctl enable httpd
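Note that enable only makes the service start at the next boot; to start it right away and check its state you can also run the standard subcommands:
systemctl start httpd
systemctl status httpd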
- Check the listening ports on Linux
netstat -atupn | grep LISTEN
netstat -atupn | grep LISTEN | grep ":22"
lsof -i:22
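On newer distributions where netstat is deprecated, ss from the iproute2 package reports the same information:
ss -tulpn
ss -tulpn | grep ":22"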
- Check the IO performance
iostat -x 5
The above command continuously prints I/O statistics on the screen at 5-second intervals.
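To limit the report to a single device and a fixed number of samples, iostat accepts both; sda and the count of 3 below are only example values:
iostat -xd sda 5 3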
- Check process performance vs. I/O and memory
# Detailed
pidstat -h -U -r -d -l 10
# Short
pidstat -h -U -r -d 10
The above commands continuously print the processes consuming I/O and memory on the screen at 10-second intervals.
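To watch a single process instead of all of them, pidstat takes a PID with -p; 1234 below is a hypothetical PID:
pidstat -h -r -d -p 1234 10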
- Check process performance vs. CPU
pidstat -h -U -l 10
The above command continuously prints the processes consuming CPU on the screen at 10-second intervals.
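The same per-PID form works for CPU, with -u requesting CPU statistics explicitly (1234 again a hypothetical PID):
pidstat -h -u -p 1234 10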
- Find all files matching a pattern
find /home/mdn2000/ms-h/bin -name "core*"
Description: The above command finds all files prefixed with core and shows them on the screen.
- Find all files matching a pattern and older than a specific time
find /home/mdn2000/ms-h/bin -name "core*" -ctime +7
Description: The above command finds all files prefixed with core and older than 7 days.
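If, after reviewing the list, you also want to remove those files, GNU find can do it in one pass with -delete (use with care, the matches are deleted immediately):
find /home/mdn2000/ms-h/bin -name "core*" -ctime +7 -delete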
- Find all files matching a pattern and older than a specific time, then move them to a specific directory
find /home/huawei/mdn2000/ms-h/bin -name "core.hmsserver*" -ctime +7 -exec mv {} /home/hms/data/c/core_backup/ \;
Description: The above command finds all the files prefixed with core and older than a specific time, then moves them to a specific directory.
- Find all files matching a pattern and rename them to another pattern
find ./ -name "*.repo.osms-backup" -exec sh -c 'mv "$1" "${1%.repo.osms-backup}.repo"' _ {} \;
Description: The above command finds all the files ending in ".repo.osms-backup" and renames them to "*.repo".
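To preview the renames without executing them, prefix the mv with echo so each command is only printed, a dry run of the same construct:
find ./ -name "*.repo.osms-backup" -exec sh -c 'echo mv "$1" "${1%.repo.osms-backup}.repo"' _ {} \;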
- Compress and archive all files matching a specific pattern
i) tar -czvf test.tar.gz *.unl
ii) tar -czvf test.tar.gz prm*
Description: The first command will compress and archive all files with the suffix/file extension ".unl". The second one will compress and archive all files with the prefix "prm". Here gzip compression is used, which is very efficient.
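To inspect what is inside an archive without extracting it, use the -t flag:
tar -tzvf test.tar.gz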
- Uncompress and Untar
tar -xzvf test.tar.gz
Description: The above command will untar and uncompress all files that were previously archived and compressed as test.tar.gz by the "tar -czvf" command.
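To extract into a directory other than the current one, tar accepts a target with -C; the /tmp/restore path below is only an example and must exist first:
mkdir -p /tmp/restore
tar -xzvf test.tar.gz -C /tmp/restore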
- Download all files recursively from a web URL
wget --recursive --no-parent --no-clobber --execute robots=off http://public-yum.oracle.com/repo/OracleLinux/OL5/latest/
- Check how many HTTP processes are running
# ps -ylC httpd | wc -l
Description: The above command shows how many httpd processes are running. When more requests come to the web server (Apache, Nginx, etc.), it spawns more processes, and each process consumes additional memory and CPU.
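Note that wc -l also counts the ps header line; if your ps supports --no-headers (GNU procps does), this variant counts only the processes:
ps --no-headers -ylC httpd | wc -l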
- Boot Red Hat / Oracle Linux in single-user/rescue mode
During boot, press e and add rd.break at the end of the linux kernel reference line. CTRL+X will boot you into single-user mode. After logging in, run the following commands:
mount -o remount,rw /sysroot
chroot /sysroot/
Alternatively, replace "ro" on the kernel reference line with "rw init=/sysroot/bin/sh" and press CTRL+X. After logging in, run:
chroot /sysroot/
- Rescan newly added space on an existing disk (especially applicable in cloud & virtual environments)
dd iflag=direct if=/dev/sdg of=/dev/null count=1
echo "1" | sudo tee /sys/class/block/sdg/device/rescan
or simply:
partprobe
Description: /dev/sdg is already present on the server and its space has been increased from the cloud/virtual management console; the above commands make the kernel notice the new size.
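After the kernel sees the new size, the partition and filesystem usually still need to be grown. A rough sketch, assuming partition 1 of /dev/sdg carries an ext4 filesystem and the growpart tool from cloud-utils is installed:
growpart /dev/sdg 1    # grow partition 1 into the new space
resize2fs /dev/sdg1    # grow the ext4 filesystem online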
- Rescan all disks (especially applicable in virtual environments)
for device in /sys/class/scsi_disk/*; do
  echo "Rescanning $device"
  echo 1 > "$device/device/rescan"
done
- Check total and average memory consumption of a process
For the httpd process:
# ps -ylC httpd | awk '{x += $8;y += 1} END {print "Apache Memory Usage (MB): "x/1024; print "Average Process Size (MB): "x/((y-1)*1024)}'
Apache Memory Usage (MB): 284.121
Average Process Size (MB): 10.523
For the mysqld process:
# ps -ylC mysqld | awk '{x += $8;y += 1} END {print "MySQL Memory Usage (MB): "x/1024; print "Average Process Size (MB): "x/((y-1)*1024)}'
MySQL Memory Usage (MB): 15840.1
Average Process Size (MB): 15840.1
Description: The above two commands output how much total memory is consumed by the Apache and MySQL servers, as well as the average memory consumption per process. For Apache many httpd processes are spawned, but for MySQL only one mysqld process is spawned.
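The same awk pipeline works for any process name; a small sketch that parameterizes it with a shell variable (nginx is just an example name) and uses --no-headers so the header row is not counted:
P=nginx   # hypothetical process name
ps --no-headers -ylC "$P" | awk -v p="$P" '{x += $8; y += 1} END {print p" Memory Usage (MB): "x/1024; print "Average Process Size (MB): "x/(y*1024)}'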
- Increase partition size in AIX:
chfs -a size=+500000 /var
The above command will increase the size of the /var filesystem by 500000 512-byte blocks (roughly 244 MB). /var must be part of an LV that can take free space from its VG.
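Recent AIX releases also accept explicit unit suffixes with chfs, which is less error-prone than counting 512-byte blocks; for example, to grow /var by 1 GB (assuming the VG has that much free space):
chfs -a size=+1G /var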
- Check which package needs to be installed to get a command, using yum on CentOS/Red Hat
[root@localhost ~]# yum whatprovides netstat
Loaded plugins: fastestmirror
Loading mirror speeds from cached hostfile
 * base: centosmirror.go4hosting.in
 * extras: centos.ustc.edu.cn
 * updates: centos.ustc.edu.cn
net-tools-2.0-0.17.20131004git.el7.x86_64 : Basic networking tools
Repo        : @base
Matched from:
Filename    : /usr/bin/netstat
Description: The above output shows that to get the netstat command (if it is not already installed on your Linux server), you need to install the net-tools package. So the next command will be:
#yum install net-tools
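yum whatprovides also accepts file-path globs, useful when you only know the command name; ifconfig below is simply another command shipped by net-tools:
yum whatprovides "*/ifconfig"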
- Open a TCP/UDP port in the firewall on Red Hat 7/CentOS 7
Allow the port through firewalld:
firewall-cmd --permanent --add-port=21/tcp
And reload the firewall:
firewall-cmd --reload
Description: Here we open FTP port 21 so that remote hosts can get FTP access.
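To confirm which ports are currently open, or to open the port by service name instead of number, firewalld offers:
firewall-cmd --list-ports
firewall-cmd --permanent --add-service=ftp
firewall-cmd --reload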
- Block an IP/network with the Linux firewall
firewall-cmd --permanent --add-rich-rule="rule family='ipv4' source address='' reject"
firewall-cmd --reload
Remove the previously blocked IP/network:
firewall-cmd --permanent --remove-rich-rule="rule family='ipv4' source address='' reject"
firewall-cmd --reload
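To see which rich rules are currently active before adding or removing one:
firewall-cmd --list-rich-rules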
- Split a big file into smaller ones by line count
split -l 200000 -d NID_formatted.txt nid
Description: The file NID_formatted.txt contains 2,000,000 lines. The split command will divide it into 10 files, each containing 200,000 lines. The -d option adds a numeric suffix (length 2) and nid is the prefix, so the split files will be named nid00, nid01, nid02 … nid09. If you do not use the -d option, the suffix will be alphabetic, which generates files like nidaa, nidab, and so on.
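To verify that no lines were lost, count the lines of the pieces and reassemble them; NID_rejoined.txt is just an example output name:
wc -l nid*
cat nid* > NID_rejoined.txt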
- Load an SSH key into the ssh-agent so its passphrase is cached for your session
eval "$(ssh-agent -s)"
ssh-add ~/.ssh/id_rsa
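To confirm the key was loaded into the agent:
ssh-add -l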