What is the difference between $@ and $* in shell script?

There is no difference between unquoted $* and $@, but there is a difference between "$@" and "$*".

$ cat 1.sh
mkdir "$*"

$ cat 2.sh
mkdir "$@"

$ sh 1.sh a "b c" d

$ ls -l
total 12
-rw-r--r-- 1 igor igor   11 mar 24 10:20 1.sh
-rw-r--r-- 1 igor igor   11 mar 24 10:20 2.sh
drwxr-xr-x 2 igor igor 4096 mar 24 10:21 a b c d

We passed three arguments to the script (a, b c, and d), but "$*" merged them all into one argument: a b c d.

$ sh 2.sh a "b c" d

$ ls -l
total 24
-rw-r--r-- 1 igor igor   11 mar 24 10:20 1.sh
-rw-r--r-- 1 igor igor   11 mar 24 10:20 2.sh
drwxr-xr-x 2 igor igor 4096 mar 24 10:21 a
drwxr-xr-x 2 igor igor 4096 mar 24 10:21 a b c d
drwxr-xr-x 2 igor igor 4096 mar 24 10:21 b c
drwxr-xr-x 2 igor igor 4096 mar 24 10:21 d

You can see here that "$*" always means a single argument, while "$@" expands to as many arguments as the script received. "$@" is a special token meaning "wrap each individual argument in its own quotes". So a "b c" d becomes (or rather stays) "a" "b c" "d", instead of "a b c d" ("$*") or "a" "b" "c" "d" (unquoted $@ or $*).
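The difference also shows up in argument counts; a minimal demo, using set -- to simulate the a "b c" d invocation above:

```shell
#!/bin/sh
# Count how many arguments each quoting form produces.
count() { echo "$#"; }

set -- a "b c" d   # simulate: script a "b c" d

count "$*"   # 1 - one merged word
count "$@"   # 3 - three words, spaces preserved
count $*     # 4 - unquoted: split on whitespace
```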

Also, I would recommend this excellent article on the topic:

http://tldp.org/LDP/abs/html/internalvariables.html#ARGLIST

Commands to Check SSL cert’s validity and other details

1. Get complete available details of an SSL certificate

openssl x509 -text -in ssl.cert 
 
2. Who issued the certificate?

openssl x509 -noout -in ssl.cert -issuer

3. To whom the certificate was issued?

openssl x509 -noout -in ssl.cert -subject

4. To check the expiry date of SSL certificate

openssl x509 -noout -in ssl.cert -dates

5. To get SSL cert’s hash value

openssl x509 -noout -in ssl.cert -hash

6. To get SSL cert’s MD5 fingerprint

openssl x509 -noout -in ssl.cert -fingerprint -md5

To check CSR: openssl req -noout -text -in new.csr 

To check key: openssl rsa -noout -text -in new.key 
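For scripting expiry checks, openssl x509 also supports a -checkend flag that exits non-zero if the certificate expires within a given number of seconds; a small sketch (the ssl.cert path follows the examples above):

```shell
#!/bin/sh
# Warn if a PEM certificate expires within 30 days.
CERT="${1:-ssl.cert}"
THIRTY_DAYS=$((30 * 24 * 3600))

if openssl x509 -noout -in "$CERT" -checkend "$THIRTY_DAYS"; then
    echo "OK: $CERT is valid for at least 30 more days"
else
    echo "WARNING: $CERT expires within 30 days (or is already expired)"
fi
```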

Master-Slave Replication

Treselle Engineering, June 6, 2014

Introduction

This blog post covers the basics of how replication really works at a high level, and the configuration of Master-Slave replication. With this replication we can share load between the Master and the Slave (read operations only) and take backups from the Slave server without affecting the Master server.

Use Case

This use case describes how replication works and how to configure a Master-Slave replication.

What we need to do:

Theoretical explanation of how replication works.
Configuration of Master Server.
Configuration of Slave Server.
Solution

Before solving our use case, let’s get some pre-requisites satisfied.

Pre-requisites:
At minimum, two Linux servers with the MySQL software installed.

Master ip: 192.168.0.1
Slave ip: 192.168.0.2
Theoretical explanation of how replication works:
Types of MySQL replication:
Replication is based on events written to the binary log, which are read from master and then processed on the slave.

Statement Based Replication:
Replication based on the propagation of SQL statements from master to slave is called statement-based replication, often abbreviated SBR. It corresponds to the standard statement-based binary logging format.

Row Based Replication:
Replication based on row-based logging, in which the binary log records changes to individual table rows, is known as row-based replication (RBR). In row-based replication, the master writes events to the binary log that indicate how individual table rows are changed.

Mixed Based Replication:
The server can change the binary logging format in real time according to the type of event using mixed-format logging. When the mixed format is in effect, statement-based logging is used by default, but automatically switches to row-based logging in particular cases. Replication using the mixed format is often referred to as mixed-based replication or mixed-format replication.

So now let’s start with what is happening on the master. For replication to work, first and foremost, the master needs to write replication events to a special log called the binary log. The binary log file stores data that the replication slave will be reading later. Whenever a replication slave connects to the master, the master creates a new thread for the connection.

Slaves that are up to date will mostly be reading events that are still cached in the OS cache on the master, so there will not be any physical disk reads on the master in order to feed binary log events to the slave(s). However, when you connect a replication slave that is a few hours or even days behind, it will initially start reading binary logs that were written hours or days ago. The master may no longer have these cached, so disk reads will occur. If the master does not have free IO resources, you may feel a bump at that point.

Now let’s see what is happening on the slave. When you start replication, two threads are started on the slave:

IO thread:
This thread, called the IO thread, connects to the master, reads binary log events from the master as they come in, and just copies them over to a local log file called the relay log. That’s all.

Even though there is only one thread reading the binary log from the master and one writing the relay log on the slave, copying replication events is very rarely the slow part of replication. There could be a network delay, causing a steady lag of a few hundred milliseconds, but that’s about it.

To see IO thread status, just type “show slave status\G” on slave.

Master_Log_File – the last file copied from the master (most of the time the same as the last binary log written by the master)
Read_Master_Log_Pos – the position in the master’s binary log up to which the IO thread has read and copied into the relay log on the slave.

SQL thread:
This is a single thread that reads events from the relay log stored locally on the replication slave and applies them as fast as possible.

To see SQL thread status, just type “show slave status\G” on slave.

Relay_Master_Log_File – The name of the master binary log file containing the most recent event executed by the SQL thread.
Exec_Master_Log_Pos – The position in the current master binary log file through which the SQL thread has read and executed.


Configuration of Master Server:
Take a backup of the database from the Master server. The command to take a consistent backup is given below:

# mysqldump -u$username -p$passwd DBname --single-transaction -R --triggers --quick --master-data=2 --flush-logs > /opt/mysqlbackup/MasterBackup.sql
Edit my.cnf file on the Master server to enable binary logging and set the server’s id.

#vi /etc/my.cnf
Add these lines under [mysqld] section:

log-bin=mysql-bin
server-id=1
Restart MySQL for the changes to take effect.

#/etc/init.d/mysqld restart
Login into MySQL as root user and create the slave user and grant privileges for replication.

mysql> GRANT REPLICATION SLAVE ON *.* TO 'slave_user'@'192.168.0.2' IDENTIFIED BY 'your_password';
mysql> FLUSH PRIVILEGES;
mysql> FLUSH TABLES WITH READ LOCK;
Now execute ‘SHOW MASTER STATUS’ command to get all the data we need.

[Screenshot: SHOW MASTER STATUS output]

Note the current binary log and position. In our example, the Master server is currently on the mysql-bin.000003 binary log at position 239. Binlog_Do_DB lists the databases whose changes are captured in the binary log, and Binlog_Ignore_DB lists the databases whose changes are not captured; both are empty here because we did not set these parameters in the my.cnf file.

Configuration of Slave Server:
Edit my.cnf file on the Slave server.

#vi /etc/my.cnf
Add these lines under the [mysqld] section:

server-id = 2
relay-log = mysql-relay-bin
log-bin = mysql-bin
Restart MySQL for the changes to take effect.

#/etc/init.d/mysqld restart
Now import the dump file that we exported from Master server.

# mysql -u root -p < /opt/mysqlbackup/MasterBackup.sql

Then log in to MySQL and point the slave at the master:

mysql> CHANGE MASTER TO
    -> MASTER_HOST='192.168.0.1',
    -> MASTER_USER='slave_user',
    -> MASTER_PASSWORD='your_password',
    -> MASTER_LOG_FILE='mysql-bin.000003',
    -> MASTER_LOG_POS=239;
Note the values for each field. The MASTER_HOST is the private IP of the Master server, MASTER_USER is the user we created for replication, MASTER_PASSWORD is the password for the replication user, MASTER_LOG_FILE is the binary log that we recorded from the Master server status earlier, and MASTER_LOG_POS is the position the Master was in that we recorded.

Now start the slave thread on the Slave server.

mysql> START SLAVE;
Let’s make sure that replication is working with the ‘SHOW SLAVE STATUS’ statement:

mysql> SHOW SLAVE STATUS\G
*************************** 1. row ***************************
Slave_IO_State: Waiting for master to send event
Master_Host: 192.168.0.1
Master_User: slave_user
Master_Port: 3306
Connect_Retry: 60
Master_Log_File: mysql-bin.000003
Read_Master_Log_Pos: 314
Relay_Log_File: mysqld-relay-bin.000003
Relay_Log_Pos: 235
Relay_Master_Log_File: mysql-bin.000003
Slave_IO_Running: Yes
Slave_SQL_Running: Yes
Replicate_Do_DB:
Replicate_Ignore_DB:
Replicate_Do_Table:
Replicate_Ignore_Table:
Replicate_Wild_Do_Table:
Replicate_Wild_Ignore_Table:
Last_Errno: 0
Last_Error:
Skip_Counter: 0
Exec_Master_Log_Pos: 314
Relay_Log_Space: 235
Until_Condition: None
Until_Log_File:
Until_Log_Pos: 0
Master_SSL_Allowed: No
Master_SSL_CA_File:
Master_SSL_CA_Path:
Master_SSL_Cert:
Master_SSL_Cipher:
Master_SSL_Key:
Seconds_Behind_Master: 0
If Slave_IO_Running and Slave_SQL_Running are both Yes, your replication is working fine.

Conclusion

One of the biggest advantages of a master-slave setup in MySQL is being able to use the master for all of the inserts and send some, if not all, SELECT queries to the slave. This will most likely speed up your application without your having to dive into optimizing all the queries or buying more hardware.
Take backups from the slave. That way the site is not affected at all while backups run. This becomes a big deal once your database has grown to multiple gigabytes: every time you do backups using mysqldump, the site lags while table locks are held, and for some sites that can mean going down for a few seconds to minutes. If we have a slave, we just take it out of rotation and run backups off the slave.
References

http://dev.mysql.com/doc/refman/5.5/en/replication-howto.html/
http://www.tecmint.com/how-to-setup-mysql-master-slave-replication-in-rhel-centos-fedora/

HOW TO INSTALL MYSQL ON UBUNTU/DEBIAN

12 December, 2007

It may seem easy for some, but for others, installing MySQL on Ubuntu or Debian Linux is not an easy task. This article explains to you how to install the MySQL Server and Client packages on a Ubuntu/Debian system.

First of all, make sure your package management tools are up-to-date. Also make sure you install all the latest software available.

sudo apt-get update
sudo apt-get dist-upgrade

After a few moments (or minutes, depending on the state of your system), you’re ready to install MySQL. By default, recent Ubuntu/Debian systems install a MySQL Server from the 5.x branch. This is a good thing, so don’t worry.

First, install the MySQL server and client packages:

sudo apt-get install mysql-server mysql-client

When done, you have a MySQL database ready to rock ’n roll. However, there’s more to do.

You need to set a root password, for starters. MySQL has its own user accounts, which are not related to the user accounts on your Linux machine. By default, the root account of the MySQL Server has an empty password; you need to set one. Replace ‘mypassword’ with your actual password and myhostname with your actual hostname.

sudo mysqladmin -u root -h localhost password 'mypassword'
sudo mysqladmin -u root -h myhostname password 'mypassword'

Now, you probably don’t want just the MySQL Server. Most likely you have Apache+PHP already installed, and want MySQL to go with that. Here are some libraries you need to install to make MySQL available to PHP:

sudo apt-get install php5-mysql

Or for Ruby:

sudo apt-get install libmysql-ruby

You can now access your MySQL server like this:

mysql -u root -p

Have fun using MySQL Server.

How to Install Oracle Java JRE on Ubuntu Linux

This tutorial covers the installation of the 32-bit and 64-bit Oracle Java 8 (currently version 1.8.0_25) JRE on 32-bit and 64-bit Ubuntu operating systems. The instructions also work on Debian and Linux Mint. This article is intended for those who only want to install the Oracle Java JRE on their Debian-based Linux systems. Using this method you will only be able to run and execute Java programs, not develop and compile in Java. This article was created due to many requests from users who wanted to know how to install only the Oracle Java JRE on their Ubuntu systems. I included a section on how to enable the Oracle Java JRE in your web browsers as well.

Steps

  1. Check whether your Ubuntu Linux operating system architecture is 32-bit or 64-bit: open a terminal and run the following command.

    • Type/Copy/Paste: file /sbin/init
      • Note the result: it will indicate whether your Ubuntu Linux architecture is 32-bit or 64-bit.
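The step-1 check with file /sbin/init can also be done with uname; a tiny alternative sketch:

```shell
#!/bin/sh
# Print whether this machine is 32-bit or 64-bit, based on uname -m.
case "$(uname -m)" in
    x86_64) echo "64-bit" ;;
    i?86)   echo "32-bit" ;;
    *)      echo "other: $(uname -m)" ;;
esac
```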
  2. Check whether Java is already installed on your system. To do this, run the Java version command from a terminal.

    • Open up a terminal and enter the following command:
      • Type/Copy/Paste: java -version
    • If you have OpenJDK installed on your system it may look like this:
      • java version “1.7.0_15”
        OpenJDK Runtime Environment (IcedTea6 1.10pre) (6b15~pre1-0lucid1)
        OpenJDK 64-Bit Server VM (build 19.0-b09, mixed mode)
    • If you have OpenJDK installed on your system, you have the wrong vendor version of Java installed for this exercise.
  3. Completely remove OpenJDK/JRE from your system and create a directory to hold your Oracle Java JRE binaries. This will prevent system conflicts and confusion between different vendor versions of Java. For example, if you have OpenJDK/JRE installed on your system, you can remove it by typing the following at the command line:

    • Type/Copy/Paste: sudo apt-get purge openjdk-\*
      • This command will completely remove OpenJDK/JRE from your system
    • Type/Copy/Paste: sudo mkdir -p /usr/local/java
      • This command will create a directory to hold your Oracle Java JDK and JRE binaries.
  4. Download the Oracle Java JRE for Linux. Make sure you select the correct compressed binaries for your system architecture, 32-bit or 64-bit (the files end in tar.gz).

    • For example, if you are on Ubuntu Linux 32-bit operating system download 32-bit Oracle Java binaries.
    • For example, if you are on Ubuntu Linux 64-bit operating system download 64-bit Oracle Java binaries.
    • Optional: Download the Oracle Java JDK/JRE documentation
      • Select jdk-7u40-apidocs.zip
    • Important: 64-bit Oracle Java binaries do not work on 32-bit Ubuntu Linux operating systems; you will receive multiple system error messages if you attempt to install 64-bit Oracle Java on 32-bit Ubuntu Linux.
  5. Copy the Oracle Java binaries into the /usr/local/java directory. In most cases, the Oracle Java binaries are downloaded to /home/"your_user_name"/Downloads.

    • 32-bit Oracle Java on 32-bit Ubuntu Linux installation instructions:
      • Type/Copy/Paste: cd /home/“your_user_name”/Downloads
      • Type/Copy/Paste: sudo cp -r jre-8u25-linux-i586.tar.gz /usr/local/java
      • Type/Copy/Paste: cd /usr/local/java
    • 64-bit Oracle Java on 64-bit Ubuntu Linux installation instructions:
      • Type/Copy/Paste: cd /home/“your_user_name”/Downloads
      • Type/Copy/Paste: sudo cp -r jre-8u25-linux-x64.tar.gz /usr/local/java
      • Type/Copy/Paste: cd /usr/local/java
  6. Run the following commands on the downloaded Oracle Java tar.gz files. Make sure to do this as root in order to make them executable for all users on your system. To open a root terminal, type sudo -s; you will be prompted for your logon password.

    • 32-bit Oracle Java on 32-bit Ubuntu Linux installation instructions:
      • Type/Copy/Paste: sudo chmod a+x jre-8u25-linux-i586.tar.gz
    • 64-bit Oracle Java on 64-bit Ubuntu Linux installation instructions:
      • Type/Copy/Paste: sudo chmod a+x jre-8u25-linux-x64.tar.gz
  7. Unpack the compressed Java binaries in the directory /usr/local/java.

    • 32-bit Oracle Java on 32-bit Ubuntu Linux installation instructions:
      • Type/Copy/Paste: sudo tar xvzf jre-8u25-linux-i586.tar.gz
    • 64-bit Oracle Java on 64-bit Ubuntu Linux installation instructions:
      • Type/Copy/Paste: sudo tar xvzf jre-8u25-linux-x64.tar.gz
  8. Double-check your directories. At this point, you should have an uncompressed binary directory in /usr/local/java for the Java JRE, listed as:

    • Type/Copy/Paste: ls -a
    • jre1.8.0_25
  9. Edit the system PATH file /etc/profile and add the following system variables to your system path. As root, open /etc/profile with nano, gedit, or any other text editor.

    • Type/Copy/Paste: sudo gedit /etc/profile
    • or
    • Type/Copy/Paste: sudo nano /etc/profile
  10. Scroll down to the end of the file using your arrow keys and add the following lines to the end of your /etc/profile file:

    • Type/Copy/Paste:

      JAVA_HOME=/usr/local/java/jre1.8.0_25
      PATH=$PATH:$HOME/bin:$JAVA_HOME/bin
      export JAVA_HOME
      export PATH

  11. Save the /etc/profile file and exit.
  12. Inform your Ubuntu Linux system where your Oracle Java JRE is located. This will tell the system that the new Oracle Java version is available for use.

    • Type/Copy/Paste: sudo update-alternatives --install "/usr/bin/java" "java" "/usr/local/java/jre1.8.0_25/bin/java" 1
      • This command notifies the system that the Oracle Java JRE is available for use.
    • Type/Copy/Paste: sudo update-alternatives --install "/usr/bin/javaws" "javaws" "/usr/local/java/jre1.8.0_25/bin/javaws" 1
      • This command notifies the system that Oracle Java Web Start is available for use.
  13. Inform your Ubuntu Linux system that the Oracle Java JRE must be the default Java.

    • Type/Copy/Paste: sudo update-alternatives --set java /usr/local/java/jre1.8.0_25/bin/java
      • This command sets the Java runtime environment for the system.
    • Type/Copy/Paste: sudo update-alternatives --set javaws /usr/local/java/jre1.8.0_25/bin/javaws
      • This command sets Java Web Start for the system.
  14. Reload your system-wide PATH file /etc/profile by typing the following command:

    • Type/Copy/Paste: . /etc/profile
    • Note: your system-wide PATH /etc/profile file is also reloaded after a reboot of your Ubuntu Linux system.
  15. Test whether Oracle Java was installed correctly on your system. Run the following command and note the version of Java:
  16. A successful installation of 32-bit Oracle Java will display:

    • Type/Copy/Paste: java -version
      • This command displays the version of Java running on your system
    • You should receive a message which displays:
      • java version "1.8.0_25"
        Java(TM) SE Runtime Environment (build 1.8.0_25-b17)
        Java HotSpot(TM) Server VM (build 25.25-b02, mixed mode)
  17. A successful installation of 64-bit Oracle Java will display:

    • Type/Copy/Paste: java -version
      • This command displays the version of Java running on your system
    • You should receive a message which displays:
      • java version "1.8.0_25"
        Java(TM) SE Runtime Environment (build 1.8.0_25-b17)
        Java HotSpot(TM) 64-Bit Server VM (build 25.25-b02, mixed mode)
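The PATH and alternatives setup from steps 10 through 14 can be confirmed in one short check; a sketch assuming the jre1.8.0_25 layout used throughout these steps:

```shell
#!/bin/sh
# Verify that the JRE unpacked in /usr/local/java is the one being picked up.
JAVA_HOME=/usr/local/java/jre1.8.0_25
PATH="$JAVA_HOME/bin:$PATH"
export JAVA_HOME PATH

command -v java                   # should resolve inside $JAVA_HOME
java -version 2>&1 | head -n 1    # should report 1.8.0_25
```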

How to set up and configure Oracle 11g R2 on RHEL 6.4

Hardware Requirements Check:

Memory Requirements:

Minimum: 1 GB of RAM

Recommended: 2 GB of RAM (for a production server, 8 GB)

Available RAM           Swap Space Required
Between 1 GB and 2 GB   1.5 times the size of RAM
Between 2 GB and 16 GB  Equal to the size of RAM
More than 16 GB         16 GB

Disk Space Requirements 

Installation Type    Software Files (GB)   Data Files (GB)
Enterprise Edition   4.35                  1.7
Standard Edition     4.22                  1.5

Package Requirements Check

yum install cloog-ppl libXxf86misc*

yum install  compat-libcap1 libXxf86vm*

yum install  compat-libstdc++-33 libaio-devel*

yum install cpp libdmx*

yum install gcc libstdc++-devel*

yum install gcc-c++ mpfr*

yum install  glibc-devel make*

yum install  glibc-headers ppl*

yum install  kernel-headers xorg-x11-utils*

yum install  libXmu xorg-x11-xauth*

yum install  libXt libXv*

yum install  ksh libXxf86dga*

yum install  unixODBC-devel-2.2.11*

yum install unixODBC-2.2.11*

Disable SELinux and Firewall

[root@localhost ~]# system-config-selinux

[root@localhost ~]# vim /etc/sysconfig/selinux

SELINUX=disabled

:wq

[root@localhost ~]# service iptables stop

Creation of Required O/S Users and Groups

[root@localhost ~]# groupadd -g 1001 oinstall

[root@localhost ~]# groupadd -g 1002 dba

[root@localhost ~]# groupadd -g 1003 oper

[root@localhost ~]# useradd -m -u 1001 -g oinstall -G dba,oper oracle

Configuration of Kernel Parameters

    #vim  /etc/sysctl.conf

kernel.shmmax = 68719476736

kernel.shmall = 4294967296

kernel.shmmni = 4096

kernel.sem = 250 32000 100 128

net.ipv4.ip_local_port_range = 9000 65500

net.core.rmem_default = 262144

net.core.rmem_max = 4194304

net.core.wmem_default = 262144

net.core.wmem_max = 1048576

fs.aio-max-nr = 1048576

fs.file-max = 6815744

:wq

[root@localhost ~]# sysctl -p

Setting Shell Limits for the Oracle User

#vim  /etc/security/limits.conf

oracle soft nproc 2047

oracle hard nproc 16384

oracle soft nofile 1024

oracle hard nofile 65536

oracle soft stack 10240

oracle hard stack 32768

:wq

Creation of Required Directories:

[root@localhost ~]# mkdir -p /u01/app/oracle

[root@localhost ~]# chown -R oracle:oinstall /u01/app/oracle

[root@localhost ~]# chmod -R 775 /u01/app/oracle

Configuring the oracle User’s Environment:

[root@localhost ~]# su - oracle

[oracle@localhost ~]$ vim ~/.bash_profile

umask 022

TMP=/tmp; export TMP

TMPDIR=$TMP; export TMPDIR

ORACLE_HOSTNAME=localhost.localdomain; export ORACLE_HOSTNAME

ORACLE_UNQNAME=orcl; export ORACLE_UNQNAME

ORACLE_BASE=/u01/app/oracle; export ORACLE_BASE

ORACLE_HOME=$ORACLE_BASE/product/11.2.0/dbhome_1; export ORACLE_HOME

ORACLE_SID=orcl; export ORACLE_SID

PATH=/usr/sbin:$PATH; export PATH

PATH=$ORACLE_HOME/bin:$PATH; export PATH

LD_LIBRARY_PATH=$ORACLE_HOME/lib:/lib:/usr/lib; export LD_LIBRARY_PATH

CLASSPATH=$ORACLE_HOME/jlib:$ORACLE_HOME/rdbms/jlib; export CLASSPATH
:wq

Installation of Oracle DB Software Using a Response File

Parameter Response
oracle.install.option INSTALL_DB_SWONLY
ORACLE_HOSTNAME localhost.localdomain
UNIX_GROUP_NAME oinstall
INVENTORY_LOCATION /u01/app/oraInventory
SELECTED_LANGUAGES en
ORACLE_HOME /u01/app/oracle/product/11.2.0/dbhome_1
ORACLE_BASE /u01/app/oracle
oracle.install.db.InstallEdition SE
oracle.install.db.EEOptionsSelection false
oracle.install.db.DBA_GROUP dba
oracle.install.db.OPER_GROUP oper
oracle.install.db.config.starterdb.type GENERAL_PURPOSE
oracle.install.db.config.starterdb.globalDBName
oracle.install.db.config.starterdb.SID
oracle.install.db.config.starterdb.characterSet AL32UTF8
oracle.install.db.config.starterdb.memoryOption true
oracle.install.db.config.starterdb.memoryLimit
oracle.install.db.config.starterdb.installExampleSchemas false
oracle.install.db.config.starterdb.enableSecuritySettings true
oracle.install.db.config.starterdb.password.ALL
oracle.install.db.config.starterdb.control DB_CONTROL
oracle.install.db.config.starterdb.automatedBackup.enable false
oracle.install.db.config.starterdb.storageType
oracle.install.db.config.starterdb.fileSystemStorage.dataLocation
oracle.install.db.config.starterdb.fileSystemStorage.recoveryLocation
SECURITY_UPDATES_VIA_MYORACLESUPPORT false
DECLINE_SECURITY_UPDATES true
oracle.installer.autoupdates.option SKIP_UPDATES

Install oracle software using response file:

#cd  /u01/database

#./runInstaller -silent -responseFile /home/oracle/db_install.rsp -ignoreSysPrereqs -ignorePrereq

Once the installation completes, execute the root.sh script as the root user.

#sh /u01/app/oracle/product/11.2.0/dbhome_1/root.sh

You will get output indicating that the Oracle software installation completed successfully.

DBCA (DB Create) in Silent Mode:

Edit all mandatory changes in dbca.rsp response file.

[GENERAL]

RESPONSEFILE_VERSION = "11.2.0"

OPERATION_TYPE = "createDatabase"

[CREATEDATABASE]

GDBNAME = "orcl"

SID = "orcl"

TEMPLATENAME = "General_Purpose.dbc"

SYSPASSWORD = "password"

SYSTEMPASSWORD = "password"

SYSMANPASSWORD = "password"

DBSNMPPASSWORD = "password"

CHARACTERSET = "US7ASCII"

NATIONALCHARACTERSET = "UTF8"

:wq

#cd /u01/database

[oracle@localhost database]$ dbca -silent -createDatabase -responseFile dbca.rsp

Copying database files

1% complete

3% complete

11% complete

18% complete

26% complete

37% complete

Creating and starting Oracle instance

40% complete

45% complete

50% complete

55% complete

56% complete

60% complete

62% complete

Completing Database Creation

66% complete

70% complete

73% complete

85% complete

96% complete

100% complete

Look at the log file “/u01/app/oracle/cfgtoollogs/dbca/orcl/orcl.log” for further details.

The database creation completed successfully.

Response files:

db_install.rsp – response file for installing the Oracle Database software

dbca.rsp – response file for creating the database with DBCA

How to install and configure the auditing tool: Sudosh

Auditing Tool: Sudosh

We lose auditing control when privileged users execute root commands without being recorded.
Sudosh was introduced to fill that gap. It is an auditing shell filter that can be used as a login shell.
It records all keystrokes and output, and sessions can be played back whenever necessary.

Download: SUDOSH2

1) Extract, compile and install:
ubuntu@ip-172-31-40-239:~/Downloads$ tar zxvf sudosh2-1.0.4.tgz
ubuntu@ip-172-31-40-239:~/Downloads$ cd sudosh2-1.0.4/
ubuntu@ip-172-31-40-239:~/Downloads$ sudo CFLAGS="-D_GNU_SOURCE" ./configure
ubuntu@ip-172-31-40-239:~/Downloads$ sudo make
ubuntu@ip-172-31-40-239:~/Downloads$ sudo make install

2)Configure sudoers via visudo:

User_Alias ADMINS=user1,user2
Cmnd_Alias SUDOSH=/usr/local/bin/sudosh

ADMINS  ALL=SUDOSH

3)Usage of Sudosh
ubuntu@ip-172-31-40-239:~/Downloads$ sudo sudosh
[sudo] password for ubuntu:

4)Sudosh Replay
Use the “sudosh-replay” command to replay previous root sessions.
root@ip-172-31-40-239:~# sudosh-replay
Date Duration From To ID
==== ======== ==== == ==
sudosh-replay ubuntu-root-1411695874-9eJnjQSeI4FCkIcW 1 2

#sudosh-replay ubuntu-root-1411695874-9eJnjQSeI4FCkIcW 1 2

You will see the session replayed.

How could I reset the Splunk admin password?

To reset the admin password you will need to have access to the file system:
– move the $SPLUNK_HOME/etc/passwd file to passwd.bak
– restart Splunk. After the restart you should be able to log in using the default credentials (admin/changeme).

If you created other user accounts, copy those entries from the backup file into the new passwd file and restart splunk.
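The procedure above as a sketch; SPLUNK_HOME and the splunk binary path are assumptions, so adjust them for your install:

```shell
#!/bin/sh
# Sketch of the admin-password reset described above.
SPLUNK_HOME="${SPLUNK_HOME:-/opt/splunk}"

# Move the password file aside so Splunk recreates it with defaults.
mv "$SPLUNK_HOME/etc/passwd" "$SPLUNK_HOME/etc/passwd.bak"

# Restart; afterwards admin/changeme should work again.
"$SPLUNK_HOME/bin/splunk" restart
```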

HTTP status codes

When a request is made to your server for a page on your site (for instance, when a user accesses your page in a browser or when Googlebot crawls the page), your server returns an HTTP status code in response to the request.

This status code provides information about the status of the request, and gives Googlebot information about your site and the requested page.

Some common status codes are:

  • 200 – the server successfully returned the page
  • 404 – the requested page doesn’t exist
  • 503 – the server is temporarily unavailable

A complete list of HTTP status codes is below. You can also visit the W3C page on HTTP status codes for more information.
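The classes listed below can be told apart from the first digit alone; a tiny shell classifier illustrating the grouping:

```shell
#!/bin/sh
# Map an HTTP status code to its class, mirroring the sections below.
classify() {
    case "$1" in
        1??) echo "provisional response" ;;
        2??) echo "successful" ;;
        3??) echo "redirected" ;;
        4??) echo "request error" ;;
        5??) echo "server error" ;;
        *)   echo "unknown" ;;
    esac
}

classify 200   # successful
classify 304   # redirected
classify 404   # request error
classify 503   # server error
```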

1xx (Provisional response)
Status codes that indicate a provisional response and require the requestor to take action to continue.

Code Description
100 (Continue) The requestor should continue with the request. The server returns this code to indicate that it has received the first part of a request and is waiting for the rest.
101 (Switching protocols) The requestor has asked the server to switch protocols and the server is acknowledging that it will do so.

2xx (Successful)

Status codes that indicate that the server successfully processed the request.

Code Description
200 (Successful) The server successfully processed the request. Generally, this means that the server provided the requested page. If you see this status for your robots.txt file, it means that Googlebot retrieved it successfully.
201 (Created) The request was successful and the server created a new resource.
202 (Accepted) The server has accepted the request, but hasn’t yet processed it.
203 (Non-authoritative information) The server successfully processed the request, but is returning information that may be from another source.
204 (No content) The server successfully processed the request, but isn’t returning any content.
205 (Reset content) The server successfully processed the request, but isn’t returning any content. Unlike a 204 response, this response requires that the requestor reset the document view (for instance, clear a form for new input).
206 (Partial content) The server successfully processed a partial GET request.

3xx (Redirected)
Further action is needed to fulfill the request. Often, these status codes are used for redirection. Google recommends that you use fewer than five redirects for each request. You can use Search Console to see if Googlebot is having trouble crawling your redirected pages. The Crawl Errors page under Crawl lists URLs that Googlebot was unable to crawl due to redirect errors.

Code Description
300 (Multiple choices) The server has several actions available based on the request. The server may choose an action based on the requestor (user agent) or the server may present a list so the requestor can choose an action.
301 (Moved permanently) The requested page has been permanently moved to a new location. When the server returns this response (as a response to a GET or HEAD request), it automatically forwards the requestor to the new location. You should use this code to let Googlebot know that a page or site has permanently moved to a new location.
302 (Moved temporarily) The server is currently responding to the request with a page from a different location, but the requestor should continue to use the original location for future requests. This code is similar to a 301 in that for a GET or HEAD request, it automatically forwards the requestor to a different location, but you shouldn’t use it to tell the Googlebot that a page or site has moved because Googlebot will continue to crawl and index the original location.
303 (See other location) The server returns this code when the requestor should make a separate GET request to a different location to retrieve the response. For all requests other than a HEAD request, the server automatically forwards to the other location.
304 (Not modified) The requested page hasn’t been modified since the last request. When the server returns this response, it doesn’t return the contents of the page.

You should configure your server to return this response when a page hasn’t changed since the date given in the request’s If-Modified-Since HTTP header. This saves you bandwidth and overhead because your server can tell Googlebot that a page hasn’t changed since the last time it was crawled.
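For illustration, a conditional GET and the resulting 304 exchange look roughly like this (hostname and dates are made up):

```http
GET /index.html HTTP/1.1
Host: www.example.com
If-Modified-Since: Mon, 24 Mar 2014 10:21:00 GMT

HTTP/1.1 304 Not Modified
Date: Tue, 25 Mar 2014 09:00:00 GMT
```

The 304 response carries no body; the client reuses its cached copy of the page.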

305 (Use proxy) The requestor can only access the requested page using a proxy. When the server returns this response, it also indicates the proxy that the requestor should use.
307 (Temporary redirect) The server is currently responding to the request with a page from a different location, but the requestor should continue to use the original location for future requests. This code is similar to a 301 in that for a GET or HEAD request, it automatically forwards the requestor to a different location, but you shouldn’t use it to tell the Googlebot that a page or site has moved because Googlebot will continue to crawl and index the original location.

4xx (Request error)
These status codes indicate that there was likely an error in the request which prevented the server from being able to process it.

Code Description
400 (Bad request) The server didn’t understand the syntax of the request.
401 (Not authorized) The request requires authentication. The server might return this response for a page behind a login.
403 (Forbidden) The server is refusing the request. If you see that Googlebot received this status code when trying to crawl valid pages of your site (you can see this on the Crawl Errors page under Health in Google Search Console), it’s possible that your server or host is blocking Googlebot’s access.
404 (Not found) The server can’t find the requested page. For instance, the server often returns this code if the request is for a page that doesn’t exist on the server.

If you don’t have a robots.txt file on your site and see this status on the Blocked URLs page in Google Search Console, this is the correct status. However, if you do have a robots.txt file and you see this status, then your robots.txt file may be named incorrectly or in the wrong location. (It should be at the top-level of the domain and named robots.txt.)

If you see this status for URLs that Googlebot tried to crawl, then Googlebot likely followed an invalid link from another page (either an old link or a mistyped one).

405 (Method not allowed) The method specified in the request is not allowed.
406 (Not acceptable) The requested page can’t respond with the content characteristics requested.
407 (Proxy authentication required) This status code is similar to 401 (Not authorized), but specifies that the requestor has to authenticate using a proxy. When the server returns this response, it also indicates the proxy that the requestor should use.
408 (Request timeout) The server timed out waiting for the request.
409 (Conflict) The server encountered a conflict fulfilling the request. The server must include information about the conflict in the response. The server might return this code in response to a PUT request that conflicts with an earlier request, along with a list of differences between the requests.
410 (Gone) The server returns this response when the requested resource has been permanently removed. It is similar to a 404 (Not found) code, but is sometimes used in the place of a 404 for resources that used to exist but no longer do. If the resource has permanently moved, you should use a 301 to specify the resource’s new location.
411 (Length required) The server won’t accept the request without a valid Content-Length header field.
412 (Precondition failed) The server doesn’t meet one of the preconditions that the requestor put on the request.
413 (Request entity too large) The server can’t process the request because it is too large for the server to handle.
414 (Requested URI is too long) The requested URI (typically, a URL) is too long for the server to process.
415 (Unsupported media type) The request is in a format not supported by the requested page.
416 (Requested range not satisfiable) The server returns this status code if the request is for a range not available for the page.
417 (Expectation failed) The server can’t meet the requirements of the Expect request-header field.

5xx (Server error)
These status codes indicate that the server had an internal error when trying to process the request. These errors tend to be with the server itself, not with the request.

Code Description
500 (Internal server error) The server encountered an error and can’t fulfill the request.
501 (Not implemented) The server doesn’t have the functionality to fulfill the request. For instance, the server might return this code when it doesn’t recognize the request method.
502 (Bad gateway) The server was acting as a gateway or proxy and received an invalid response from the upstream server.
503 (Service unavailable) The server is currently unavailable (because it is overloaded or down for maintenance). Generally, this is a temporary state.
504 (Gateway timeout) The server was acting as a gateway or proxy and didn’t receive a timely response from the upstream server.
505 (HTTP version not supported) The server doesn’t support the HTTP protocol version used in the request.
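When scripting health checks, the status code class is often all you need. A minimal shell sketch (the function name status_class is our own, not part of any tool):

```shell
# Classify an HTTP status code by its first digit.
# The code itself would typically come from something like:
#   curl -s -o /dev/null -w '%{http_code}' "$url"
status_class() {
  case "$1" in
    1??) echo "informational" ;;
    2??) echo "successful" ;;
    3??) echo "redirected" ;;
    4??) echo "request error" ;;
    5??) echo "server error" ;;
    *)   echo "unknown" ;;
  esac
}

status_class 304   # redirected
status_class 503   # server error
```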

Introduction to Logging in Tomcat 7

JULI

Previous versions of Tomcat (up to 5.x) used Apache Commons Logging for generating logs. A major disadvantage of this logging mechanism is that it handles only a single configuration per JVM, which makes it difficult to configure separate logging for each class loader of an independent application. To resolve this issue, the Tomcat developers introduced a separate API in Tomcat 6 that can capture each class loader's activity in the Tomcat logs. It is based on the java.util.logging framework.

By default, Tomcat 7 uses its own Java logging API, called JULI, to implement logging services. This API can be found in TOMCAT_HOME/bin of the Tomcat 7 directory structure (tomcat-juli.jar). JULI provides custom logging for each web application and supports private per-application logging configurations. With its enhanced feature of separate class loader logging, it also helps in detecting memory issues while unloading classes at runtime.

For more information on JULI and the class loading issue, please refer to http://tomcat.apache.org/tomcat-7.0-doc/logging.html and http://tomcat.apache.org/tomcat-7.0-doc/class-loader-howto.html respectively.

Loggers, appenders, and layouts

There are some important components of logging that we use when implementing a logging mechanism for applications. Each has its own role in tracking application events. Let’s discuss each term individually to find out its usage:

    • Loggers: A logger is the logical name for a log file. This logical name is referenced in the application code. We can configure an independent logger for each application.
    • Appenders: Appenders handle the actual generation of logs. There are many types of appenders, such as FileAppender, ConsoleAppender, SocketAppender, and so on, which are available in log4j. The following is an example appender configuration for log4j:

      log4j.appender.CATALINA=org.apache.log4j.DailyRollingFileAppender
      log4j.appender.CATALINA.File=${catalina.base}/logs/catalina.out
      log4j.appender.CATALINA.Append=true
      log4j.appender.CATALINA.Encoding=UTF-8

The previous four lines define a DailyRollingFileAppender in log4j whose log file is catalina.out; the logs are written with UTF-8 encoding.

If log4j.appender.CATALINA.Append=false, the log file is overwritten each time the appender opens it instead of being appended to.

# Roll over the log once per day
log4j.appender.CATALINA.DatePattern='.'dd-MM-yyyy'.log'
log4j.appender.CATALINA.layout=org.apache.log4j.PatternLayout
log4j.appender.CATALINA.layout.ConversionPattern=%d [%t] %-5p %c- %m%n

The first two lines configure the log to roll over once per day; the last two set the layout pattern used for each log entry.
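For the CATALINA appender defined above to receive any events, it must also be attached to a logger; a minimal sketch (choosing INFO as the root level is an assumption):

```properties
# attach the CATALINA appender to the root logger at INFO level
log4j.rootLogger=INFO, CATALINA
```

Any category that does not define its own logger inherits this root configuration.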

    • Layout: The layout defines the format in which log entries are written to the log file (also called the pattern). The appender uses the layout to format the log files; the ConversionPattern line in the previous snippet is an example of a layout.

Loggers, appenders, and layouts together help the developer to capture the log message for the application event.

Types of logging in Tomcat 7

We can enable logging in Tomcat 7 in different ways based on the requirement. There are a total of five types of logging that we can configure in Tomcat: application, server, console, access, and host manager logs. These methods are used in combination with each other based on environment needs. For example, if the Tomcat services fail to come up, the console logs are very helpful for identifying the issue, as we can follow the boot sequence in real time. Let’s discuss each logging method briefly.

Application log

These logs are used to capture application events while running application transactions. They are very useful for identifying application-level issues. For example, suppose your application is slow on a particular transaction; the details of that transaction can then only be traced in the application log. The biggest advantage of application logs is that we can configure separate log levels and log files for each application, which makes it very easy for administrators to troubleshoot the application.

Log4j is used in 90 percent of the cases for application log generation.

Server log

Server logs are identical in content to console logs. Their advantage is that they can be retrieved at any time, whereas console output is no longer available after we log out from the console.

Console log

This log gives you the complete information of Tomcat 7 startup and loader sequence. The log file is named as catalina.out and is found in TOMCAT_HOME/logs. This log file is very useful in checking the application deployment and server startup testing for any environment. This log is configured in the Tomcat file catalina.sh, which can be found in TOMCAT_HOME/bin.

By default, the console log is configured at the INFO level.

There are different levels of logging in Tomcat such as WARNING, INFORMATION, CONFIG, and FINE.

After the Tomcat services start, the log file appears in TOMCAT_HOME/logs, and the end of catalina.out reports how long startup took (in this example, the Tomcat services started in 1903 ms).

Access log

Access logs are customized logs, which give information about the following:

  • Who has accessed the application
  • What components of the application are accessed
  • Source IP and so on

These logs play a vital role in the traffic analysis of many applications, for example to estimate bandwidth requirements, and also help in troubleshooting the application under heavy load. They are configured in server.xml in TOMCAT_HOME/conf, and you can customize them according to the environment and your auditing requirements.

Let’s discuss the pattern format of the access logs and understand how we can customize the logging format:

  • Class Name: This parameter defines the class name used for generation of logs. By default, Apache Tomcat 7 uses the org.apache.catalina.valves.AccessLogValve class for access logs.
  • Directory: This parameter defines the directory location for the log file. All the log files are generated in the log directory—TOMCAT_HOME/logs—but we can customize the log location based on our environment setup and then update the directory path in the definition of access logs.
  • Prefix: This parameter defines the prefix of the access log filename; by default, access log files are generated with the name localhost_access_log.yyyy-mm-dd.txt.
  • Suffix: This parameter defines the file extension of the log file. Currently it is in .txt format.
  • Pattern: This parameter defines the format of the log file. The pattern is a combination of values defined by the administrator, for example, %h = remote host address. By default, the access log records the remote host address, date/time of the request, request method, URI mapping, and HTTP status code of the response.

In case you have installed the web traffic analysis tool for application, then you have to change the access logs to a different format.
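As a quick illustration of what the default pattern produces, here is a fabricated access log line in that format, and an awk one-liner pulling out the status code field:

```shell
# a sample line in the default AccessLogValve format (fabricated values)
line='127.0.0.1 - - [24/Mar/2014:10:21:00 +0000] "GET /index.html HTTP/1.1" 200 1043'

# the HTTP status code is the 9th whitespace-separated field
echo "$line" | awk '{ print $9 }'   # 200
```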

Host manager

These logs record the activity performed using Tomcat Manager, such as the various tasks performed, the status of applications, deployment of applications, and the Tomcat lifecycle. These configurations are done in logging.properties, which can be found in TOMCAT_HOME/conf.

The host, manager, and host-manager handler definitions each specify the log location, log level, and prefix of the filename.

In logging.properties, we are defining file handlers and appenders using JULI.

The log file for manager looks similar to the following:

28 Jun, 2011 3:36:23 AM org.apache.catalina.core.ApplicationContext log
INFO: HTMLManager: list: Listing contexts for virtual host 'localhost'
28 Jun, 2011 3:37:13 AM org.apache.catalina.core.ApplicationContext log
INFO: HTMLManager: list: Listing contexts for virtual host 'localhost'
28 Jun, 2011 3:37:42 AM org.apache.catalina.core.ApplicationContext log
INFO: HTMLManager: undeploy: Undeploying web application at '/sample'
28 Jun, 2011 3:37:43 AM org.apache.catalina.core.ApplicationContext log
INFO: HTMLManager: list: Listing contexts for virtual host 'localhost'
28 Jun, 2011 3:42:59 AM org.apache.catalina.core.ApplicationContext log
INFO: HTMLManager: list: Listing contexts for virtual host 'localhost'
28 Jun, 2011 3:43:01 AM org.apache.catalina.core.ApplicationContext log
INFO: HTMLManager: list: Listing contexts for virtual host 'localhost'
28 Jun, 2011 3:53:44 AM org.apache.catalina.core.ApplicationContext log
INFO: HTMLManager: list: Listing contexts for virtual host 'localhost'

Types of log levels in Tomcat 7

There are seven levels defined for the Tomcat logging services (JULI). They can be set based on the application requirement.

Every log level in JULI has its own functionality. The following table shows the functionality of each log level in JULI:

Log level          Description
SEVERE (highest)   Captures exceptions and errors
WARNING            Warning messages
INFO               Informational messages related to server activity
CONFIG             Configuration messages
FINE               Detailed activity of server transactions (similar to debug)
FINER              More detailed logs than FINE
FINEST (lowest)    Entire flow of events (similar to trace)

For example, let’s take an appender from logging.properties and find out the log level used; the first log appender for localhost is using FINE as the log level, as shown in the following code snippet:

localhost.org.apache.juli.FileHandler.level = FINE
localhost.org.apache.juli.FileHandler.directory = ${catalina.base}/logs
localhost.org.apache.juli.FileHandler.prefix = localhost.

The following code shows the default facility-specific file handler configuration for logging in Tomcat 7 using JULI; note the log level set on each logger:

############################################################
# Facility specific properties.
# Provides extra control for each logger.
############################################################
org.apache.catalina.core.ContainerBase.[Catalina].[localhost].level = INFO
org.apache.catalina.core.ContainerBase.[Catalina].[localhost].handlers = 2localhost.org.apache.juli.FileHandler
org.apache.catalina.core.ContainerBase.[Catalina].[localhost].[/manager].level = INFO
org.apache.catalina.core.ContainerBase.[Catalina].[localhost].[/manager].handlers = 3manager.org.apache.juli.FileHandler
org.apache.catalina.core.ContainerBase.[Catalina].[localhost].[/host-manager].level = INFO
org.apache.catalina.core.ContainerBase.[Catalina].[localhost].[/host-manager].handlers = 4host-manager.org.apache.juli.FileHandler


Log4j

Log4j is a project run by the Apache Software Foundation. It helps in enabling logs at various levels of the server and the application.

The major advantage of log4j is manageability. It gives the developer the freedom to change the log level in the configuration file; you can also enable or disable logging at the configuration level, so there is no need to change the code. We can also customize the log pattern separately for each application. Log4j defines the following log levels.

Log level for log4j

Every log level in log4j has its own functionality. The following table shows the functionality of each log level in log4j:

Log level  Description
OFF        Set when you want logging turned off entirely (stops logging).
FATAL      Prints the severe errors that cause premature termination.
ERROR      Captures runtime errors or unexpected conditions. Expect these to be immediately visible on a status console.
WARN       Almost-errors and other runtime situations that are undesirable or unexpected, but not necessarily wrong. Expect these to be immediately visible on a status console.
INFO       Interesting runtime events (startup/shutdown). It is best practice to keep logging at the INFO level.
DEBUG      Detailed information on the flow through the system.
TRACE      Captures all events in the system and application; in other words, everything.

How to use log4j

Following are the steps to be performed to use log4j:

    1. Download apache-log4j-1.2.X.tar.gz from its official URL http://logging.apache.org/log4j/1.2/download.html, where X is the minor version.
    2. Extract the archive, place log4j.jar in TOMCAT_HOME/lib, and delete juli*.jar from that lib directory.
    3. Delete logging.properties from TOMCAT_HOME/conf.
    4. Create a file log4j.properties in TOMCAT_HOME/conf and define the log appenders for the Tomcat instance in it, including an appender for catalina.out with a daily roll-over, as shown earlier.

You can customize the log rotation based on size, day, hour, and so on, using the log4j appender properties shown earlier.

    5. Restart Tomcat services.
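As a sketch of step 4 (values follow the appender fragments shown earlier in this article; your defaults may differ), a minimal log4j.properties could look like:

```properties
log4j.rootLogger=INFO, CATALINA

log4j.appender.CATALINA=org.apache.log4j.DailyRollingFileAppender
log4j.appender.CATALINA.File=${catalina.base}/logs/catalina.out
log4j.appender.CATALINA.Append=true
log4j.appender.CATALINA.Encoding=UTF-8

# Roll over the log once per day
log4j.appender.CATALINA.DatePattern='.'dd-MM-yyyy'.log'
log4j.appender.CATALINA.layout=org.apache.log4j.PatternLayout
log4j.appender.CATALINA.layout.ConversionPattern=%d [%t] %-5p %c- %m%n
```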

Important tip for production environment

DEBUG and TRACE modes should not be enabled in production. Even when troubleshooting, the ideal mode is INFO (DEBUG and TRACE generate heavy logging and also affect server performance).

Log files should be rolled over daily in a production environment. This helps the administrator perform log analysis very easily (each file stays small).

Log level mapping

So far, we have discussed the various log levels for JULI and log4j. Let us do a quick log level mapping between JULI and log4j. The following table shows the one-to-one mapping:

Log level in JULI Log level in log4j
SEVERE FATAL, ERROR
WARNING WARN
INFO INFO
CONFIG NA
FINE DEBUG
FINER DEBUG
FINEST TRACE

Values for Tomcat 7

Values are defined as identifiers that change the pattern of the string in the log. Suppose you want to know the IP address of the remote host that accessed the website; you then add a combination of the following values to the log appender. For example, let’s customize the access logs for Tomcat 7, whose default pattern is defined in server.xml.

We want to change the log pattern to show the time taken to process each request, so we add %T to the pattern.
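For illustration, an AccessLogValve definition in server.xml with %T appended to the common pattern might look like this (a sketch; attribute values are the usual defaults, not copied from any particular installation):

```xml
<Valve className="org.apache.catalina.valves.AccessLogValve"
       directory="logs"
       prefix="localhost_access_log." suffix=".txt"
       pattern='%h %l %u %t "%r" %s %b %T' />
```

Here %T adds the request processing time in seconds to the end of each log line.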

The following table shows the values used in Tomcat 7 for log pattern customization:

Values Description
%a Remote IP address
%A Local IP address
%b Bytes sent, excluding HTTP headers, or '-' if zero
%B Bytes sent, excluding HTTP headers
%h Remote hostname (or IP address if enableLookups for the connector is false)
%H Request protocol
%l Remote logical username from identd
%m Request method (GET, POST, and so on)
%p Local port on which this request was received
%q Query string (prepended with a ‘?’ if it exists)
%r First line of the request (method and request URI)
%s HTTP status code of the response
%S User session ID
%t Date and time, in Common Log format
%u Remote user that was authenticated (if any)
%U Requested URL path
%v Local server name
%D Time taken to process the request, in milliseconds
%T Time taken to process the request, in seconds
%I Current request thread name (can compare later with stack traces)

Log analysis

Log analysis is a very important and tricky task, which needs to be handled with a lot of care. If you overlook a few lines, you may never find the root cause of the issue. Some of the best practices to keep in mind while doing log analysis are as follows:

  • Check the logs for at least the hour preceding the issue
  • Always go back to the first exception in the logs, where the error started
  • Keep in mind that issues are not always caused by a malfunction of Tomcat; also check the other infrastructure resources

In Unix-like operating systems (Linux, Ubuntu, and so on), there are two utilities that are very useful in log analysis: grep and awk. Let’s discuss them briefly:

  • grep: This utility prints the lines that match the searched string. For example:

    grep ERROR catalina.log

    The previous command searches for the string "ERROR" in the file catalina.log and displays the lines that contain it (grep is case sensitive; add -i for a case-insensitive search).
  • awk: This command is used for pattern scanning and processing. Suppose we want to print only certain columns of the data; then this command is very useful. For example, the following command lists files larger than 10 MB under /opt, printing the filename and size columns of the ls output:

    find /opt -type f -size +10000k -exec ls -lh {} \; | awk '{ print $9 ": " $5 }'

Helpful commands for log analysis

Administrators often look for shortcut commands to do their work efficiently. The following are some useful commands that I have collected for log analysis.

The first group of commands is used for finding big log files. Sometimes in a production environment we get alerts that a disk is running out of space; the following commands can help:

    • Finding large files and directories in Linux:

find "location of directory" -type f -size +10000k -exec ls -lh {} \; | awk '{ print $9 ": " $5 }'

    • Finding directories with size over 100MB:

find / -type d -size +100000k

    • Sort directories as per size using du:

du --max-depth=1 -m | sort -n -r

    • Finding directory sizes:

du -sh folder_name
du -ch folder_name
du -csh folder_name

    • The following command is used for truncating huge log files on the live system (log rotation can be done without recycle of services):

cat /dev/null > file_name
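A quick sketch of why this works on a live system (using a throwaway file under /tmp):

```shell
printf 'old log data\n' > /tmp/demo_big.log

# truncate in place without removing the file
cat /dev/null > /tmp/demo_big.log

wc -c < /tmp/demo_big.log   # prints the byte count, now 0
```

Unlike rm followed by recreating the file, truncation keeps the same inode, so a process that already has the log open continues writing to it without a restart.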

The following mentioned commands are used for searching the string in different files:

    • Finding ERROR exception:

grep ERROR log_file

    • Last 200 lines in log file:

tail -200 log_file

    • Following the log file as it is updated:

tail -f log_file
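Putting grep and awk together, here is a sketch that counts ERROR lines per hour, assuming a log whose second field is an HH:MM:SS timestamp (the sample log below is fabricated):

```shell
# build a small fabricated log
printf '%s\n' \
  '2014-06-06 10:01:02 ERROR db timeout' \
  '2014-06-06 10:15:44 ERROR db timeout' \
  '2014-06-06 11:03:10 INFO startup complete' > /tmp/demo_app.log

# count ERROR lines per hour: grep filters, awk extracts the hour,
# sort + uniq -c aggregate, and a final awk normalizes the spacing
grep ERROR /tmp/demo_app.log \
  | awk '{ split($2, t, ":"); print t[1] }' \
  | sort | uniq -c \
  | awk '{ print $1, $2 }'   # 2 10
```

This prints one line per hour in the form "count hour", which is often enough to spot when a problem started.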