How to set up and configure Oracle 11g R2 on RHEL 6.4

Hardware Requirements Check:

Memory Requirements:

Minimum: 1 GB of RAM

Recommended: 2 GB of RAM (for a production server, 8 GB)

Available RAM            Swap Space Requirement
Between 1 GB and 2 GB    1.5 times the size of the RAM
Between 2 GB and 16 GB   Equal to the size of the RAM
More than 16 GB          16 GB
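The swap rule above can be computed on the target host with a short script (a sketch; it reads MemTotal from /proc/meminfo, which is standard on RHEL):

```shell
# Derive the recommended swap size from the RAM-to-swap table above.
ram_mb=$(awk '/^MemTotal/ {print int($2/1024)}' /proc/meminfo)
if [ "$ram_mb" -le 2048 ]; then
    swap_mb=$(( ram_mb * 3 / 2 ))   # 1 GB - 2 GB RAM: 1.5 times the RAM
elif [ "$ram_mb" -le 16384 ]; then
    swap_mb=$ram_mb                 # 2 GB - 16 GB RAM: equal to the RAM
else
    swap_mb=16384                   # more than 16 GB RAM: 16 GB
fi
echo "RAM: ${ram_mb} MB, recommended swap: ${swap_mb} MB"
```

Compare the result against `grep SwapTotal /proc/meminfo` to see whether the currently configured swap is sufficient.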

Disk Space Requirements 

Installation Type     Software Files (GB)    Data Files (GB)
Enterprise Edition    4.35                   1.7
Standard Edition      4.22                   1.5

Package Requirements Check

Install the required packages as root:

yum install cloog-ppl compat-libcap1 compat-libstdc++-33 cpp gcc gcc-c++ \
    glibc-devel glibc-headers kernel-headers ksh libXmu libXt \
    libaio-devel* libdmx* libstdc++-devel* libXv* libXxf86dga* \
    libXxf86misc* libXxf86vm* make* mpfr* ppl* xorg-x11-utils* \
    xorg-x11-xauth* unixODBC-2.2.11* unixODBC-devel-2.2.11*

Disable SELinux and Firewall

[root@localhost ~]# system-config-selinux

[root@localhost ~]# vim /etc/sysconfig/selinux

SELINUX=disabled

:wq

[root@localhost ~]# service iptables stop

[root@localhost ~]# chkconfig iptables off

Creation of Required O/S Users and Groups

[root@localhost ~]# groupadd -g 1001 oinstall

[root@localhost ~]# groupadd -g 1002 dba

[root@localhost ~]# groupadd -g 1003 oper

[root@localhost ~]# useradd -m -u 1001 -g oinstall -G dba,oper oracle

Configuration of Kernel Parameters

    #vim  /etc/sysctl.conf

kernel.shmmax = 68719476736

kernel.shmall = 4294967296

kernel.shmmni = 4096

kernel.sem = 250 32000 100 128

net.ipv4.ip_local_port_range = 9000 65500

net.core.rmem_default = 262144

net.core.rmem_max = 4194304

net.core.wmem_default = 262144

net.core.wmem_max = 1048576

fs.aio-max-nr = 1048576

fs.file-max = 6815744

:wq

[root@localhost ~]# sysctl -p
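After `sysctl -p`, the active values can be read back from /proc/sys to confirm they took effect (shown here for a few of the parameters above):

```shell
# Each file under /proc/sys mirrors one sysctl key (dots become slashes).
for p in kernel/shmmni kernel/sem fs/file-max; do
    printf '%-18s %s\n' "$p:" "$(cat /proc/sys/$p)"
done
```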

Setting Shell Limits for the Oracle User

#vim  /etc/security/limits.conf

oracle soft nproc 2047

oracle hard nproc 16384

oracle soft nofile 1024

oracle hard nofile 65536

oracle soft stack 10240

oracle hard stack 32768
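Once the oracle user logs in again, the effective limits can be checked with the shell's ulimit builtin; the values should match limits.conf:

```shell
# Soft and hard limits for open file descriptors (nofile).
echo "soft nofile: $(ulimit -Sn)"
echo "hard nofile: $(ulimit -Hn)"
```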

:wq

Creation of Required Directories:

[root@localhost ~]# mkdir -p /u01/app/oracle

[root@localhost ~]# chown -R oracle:oinstall /u01/app/oracle

[root@localhost ~]# chmod -R 775 /u01/app/oracle

Configuring the oracle User’s Environment:

[root@localhost ~]# su - oracle

[oracle@localhost ~]$ vim ~/.bash_profile

umask 022

TMP=/tmp; export TMP

TMPDIR=$TMP; export TMPDIR

ORACLE_HOSTNAME=localhost.localdomain; export ORACLE_HOSTNAME

ORACLE_UNQNAME=orcl; export ORACLE_UNQNAME

ORACLE_BASE=/u01/app/oracle; export ORACLE_BASE

ORACLE_HOME=$ORACLE_BASE/product/11.2.0/dbhome_1; export ORACLE_HOME

ORACLE_SID=orcl; export ORACLE_SID

PATH=/usr/sbin:$PATH; export PATH

PATH=$ORACLE_HOME/bin:$PATH; export PATH

LD_LIBRARY_PATH=$ORACLE_HOME/lib:/lib:/usr/lib; export LD_LIBRARY_PATH

CLASSPATH=$ORACLE_HOME/jlib:$ORACLE_HOME/rdbms/jlib; export CLASSPATH
:wq
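After re-logging in with su - oracle, a quick sanity check confirms the profile was sourced (a value of "unset" means it was not):

```shell
# Print the key Oracle environment variables, or "unset" if missing.
for v in ORACLE_BASE ORACLE_HOME ORACLE_SID; do
    eval "val=\${$v:-unset}"
    echo "$v=$val"
done
```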

Installation of Oracle DB software using a response file

Parameter Response
oracle.install.option INSTALL_DB_SWONLY
ORACLE_HOSTNAME localhost.localdomain
UNIX_GROUP_NAME oinstall
INVENTORY_LOCATION /u01/app/oraInventory
SELECTED_LANGUAGES en
ORACLE_HOME /u01/app/oracle/product/11.2.0/dbhome_1
ORACLE_BASE /u01/app/oracle
oracle.install.db.InstallEdition SE
oracle.install.db.EEOptionsSelection false
oracle.install.db.DBA_GROUP dba
oracle.install.db.OPER_GROUP oper
oracle.install.db.config.starterdb.type GENERAL_PURPOSE
oracle.install.db.config.starterdb.globalDBName
oracle.install.db.config.starterdb.SID
oracle.install.db.config.starterdb.characterSet AL32UTF8
oracle.install.db.config.starterdb.memoryOption true
oracle.install.db.config.starterdb.memoryLimit
oracle.install.db.config.starterdb.installExampleSchemas false
oracle.install.db.config.starterdb.enableSecuritySettings true
oracle.install.db.config.starterdb.password.ALL
oracle.install.db.config.starterdb.control DB_CONTROL
oracle.install.db.config.starterdb.automatedBackup.enable false
oracle.install.db.config.starterdb.storageType
oracle.install.db.config.starterdb.fileSystemStorage.dataLocation
oracle.install.db.config.starterdb.fileSystemStorage.recoveryLocation
SECURITY_UPDATES_VIA_MYORACLESUPPORT false
DECLINE_SECURITY_UPDATES true
oracle.installer.autoupdates.option SKIP_UPDATES
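The table above maps onto db_install.rsp entries like the following (a sketch showing only the populated parameters; the blank entries from the table are left to be filled in for your environment):

```properties
oracle.install.option=INSTALL_DB_SWONLY
ORACLE_HOSTNAME=localhost.localdomain
UNIX_GROUP_NAME=oinstall
INVENTORY_LOCATION=/u01/app/oraInventory
SELECTED_LANGUAGES=en
ORACLE_HOME=/u01/app/oracle/product/11.2.0/dbhome_1
ORACLE_BASE=/u01/app/oracle
oracle.install.db.InstallEdition=SE
oracle.install.db.DBA_GROUP=dba
oracle.install.db.OPER_GROUP=oper
DECLINE_SECURITY_UPDATES=true
oracle.installer.autoupdates.option=SKIP_UPDATES
```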

Install oracle software using response file:

# cd /u01/database

#./runInstaller -silent -responseFile /home/oracle/db_install.rsp -ignoreSysPrereqs -ignorePrereq

Once the installation completes, execute the root.sh script as the root user.

#sh /u01/app/oracle/product/11.2.0/dbhome_1/root.sh

The installer then reports that the Oracle software installation completed successfully.

DBCA (DB Create) in Silent Mode:

Edit all mandatory changes in dbca.rsp response file.

[GENERAL]

RESPONSEFILE_VERSION = "11.2.0"

OPERATION_TYPE = "createDatabase"

[CREATEDATABASE]

GDBNAME = "orcl"

SID = "orcl"

TEMPLATENAME = "General_Purpose.dbc"

SYSPASSWORD = "password"

SYSTEMPASSWORD = "password"

SYSMANPASSWORD = "password"

DBSNMPPASSWORD = "password"

CHARACTERSET = "US7ASCII"

NATIONALCHARACTERSET = "UTF8"

:wq

#cd /u01/database

[oracle@localhost database]$ dbca -silent -createDatabase -responseFile dbca.rsp

Copying database files

1% complete

3% complete

11% complete

18% complete

26% complete

37% complete

Creating and starting Oracle instance

40% complete

45% complete

50% complete

55% complete

56% complete

60% complete

62% complete

Completing Database Creation

66% complete

70% complete

73% complete

85% complete

96% complete

100% complete

Look at the log file "/u01/app/oracle/cfgtoollogs/dbca/orcl/orcl.log" for further details.

Database creation completed successfully.

Response files used:

  • db_install.rsp – installs the Oracle Database software
  • dbca.rsp – creates the database

How to install and configure the auditing tool: Sudosh

Auditing Tool: Sudosh

We are out of auditing control when privileged users are executing root command without being recorded.
Hence, sudosh command is introduced to fill the gap. Sudosh is an auditing shell filter and can be used as login shell.
It will record all keystrokes and output. The sessions can be played back whenever necessary.

Download: SUDOSH2

1) Extract, compile, and install:
ubuntu@ip-172-31-40-239:~/Downloads$ tar zxvf sudosh2-1.0.4.tgz
ubuntu@ip-172-31-40-239:~/Downloads$ cd sudosh2-1.0.4/
ubuntu@ip-172-31-40-239:~/Downloads/sudosh2-1.0.4$ sudo CFLAGS="-D_GNU_SOURCE" ./configure
ubuntu@ip-172-31-40-239:~/Downloads/sudosh2-1.0.4$ sudo make
ubuntu@ip-172-31-40-239:~/Downloads/sudosh2-1.0.4$ sudo make install

2) Configure sudoers via visudo:

User_Alias ADMINS=user1,user2
Cmnd_Alias SUDOSH=/usr/local/bin/sudosh

ADMINS  ALL=SUDOSH

3) Usage of sudosh
ubuntu@ip-172-31-40-239:~/Downloads$ sudo sudosh
[sudo] password for ubuntu:

4) Sudosh replay
Use the “sudosh-replay” command to replay previous root sessions.
root@ip-172-31-40-239:~# sudosh-replay
Date Duration From To ID
==== ======== ==== == ==
sudosh-replay ubuntu-root-1411695874-9eJnjQSeI4FCkIcW 1 2

#sudosh-replay ubuntu-root-1411695874-9eJnjQSeI4FCkIcW 1 2

You will see the session replayed.

How could I reset the Splunk admin password?

To reset the admin password you will need access to the file system:
– move the $SPLUNK_HOME/etc/passwd file to passwd.bak
– restart Splunk. After the restart you should be able to log in using the default credentials (admin/changeme).

If you created other user accounts, copy those entries from the backup file into the new passwd file and restart Splunk.
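The reset steps can be scripted as follows (a sketch; /opt/splunk is an assumed default install path, so adjust SPLUNK_HOME to your installation):

```shell
SPLUNK_HOME=${SPLUNK_HOME:-/opt/splunk}   # assumption: default install path
if [ -f "$SPLUNK_HOME/etc/passwd" ]; then
    # Back up the password file and restart so the default login is recreated.
    mv "$SPLUNK_HOME/etc/passwd" "$SPLUNK_HOME/etc/passwd.bak"
    "$SPLUNK_HOME/bin/splunk" restart
    echo "passwd backed up; log in as admin/changeme after the restart"
else
    echo "no Splunk passwd file found under $SPLUNK_HOME/etc"
fi
```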

HTTP status codes

When a request is made to your server for a page on your site (for instance, when a user accesses your page in a browser or when Googlebot crawls the page), your server returns an HTTP status code in response to the request.

The status code provides information about the state of the request, and it gives Googlebot information about your site and the requested page.

Some common status codes are:

  • 200 – the server successfully returned the page
  • 404 – the requested page doesn’t exist
  • 503 – the server is temporarily unavailable

A complete list of HTTP status codes is below. You can also visit the W3C page on HTTP status codes for more information.
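The status code is the second field of the first line of the HTTP response. A quick way to extract it from a raw status line (the sample line below is illustrative):

```shell
# Parse the numeric status code out of an HTTP status line.
status_line='HTTP/1.1 404 Not Found'
code=$(printf '%s' "$status_line" | awk '{print $2}')
echo "status code: $code"
```

In practice, `curl -s -o /dev/null -w '%{http_code}\n' URL` prints the same code directly for a live request.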

1xx (Provisional response)
Status codes that indicate a provisional response and require the requestor to take action to continue.

Code Description
100 (Continue) The requestor should continue with the request. The server returns this code to indicate that it has received the first part of a request and is waiting for the rest.
101 (Switching protocols) The requestor has asked the server to switch protocols and the server is acknowledging that it will do so.

2xx (Successful)

Status codes that indicate that the server successfully processed the request.

Code Description
200 (Successful) The server successfully processed the request. Generally, this means that the server provided the requested page. If you see this status for your robots.txt file, it means that Googlebot retrieved it successfully.
201 (Created) The request was successful and the server created a new resource.
202 (Accepted) The server has accepted the request, but hasn’t yet processed it.
203 (Non-authoritative information) The server successfully processed the request, but is returning information that may be from another source.
204 (No content) The server successfully processed the request, but isn’t returning any content.
205 (Reset content) The server successfully processed the request, but isn’t returning any content. Unlike a 204 response, this response requires that the requestor reset the document view (for instance, clear a form for new input).
206 (Partial content) The server successfully processed a partial GET request.

3xx (Redirected)
Further action is needed to fulfill the request. Often, these status codes are used for redirection. Google recommends that you use fewer than five redirects for each request. You can use Search Console to see if Googlebot is having trouble crawling your redirected pages. The Crawl Errors page under Crawl lists URLs that Googlebot was unable to crawl due to redirect errors.

Code Description
300 (Multiple choices) The server has several actions available based on the request. The server may choose an action based on the requestor (user agent) or the server may present a list so the requestor can choose an action.
301 (Moved permanently) The requested page has been permanently moved to a new location. When the server returns this response (as a response to a GET or HEAD request), it automatically forwards the requestor to the new location. You should use this code to let Googlebot know that a page or site has permanently moved to a new location.
302 (Moved temporarily) The server is currently responding to the request with a page from a different location, but the requestor should continue to use the original location for future requests. This code is similar to a 301 in that for a GET or HEAD request, it automatically forwards the requestor to a different location, but you shouldn’t use it to tell the Googlebot that a page or site has moved because Googlebot will continue to crawl and index the original location.
303 (See other location) The server returns this code when the requestor should make a separate GET request to a different location to retrieve the response. For all requests other than a HEAD request, the server automatically forwards to the other location.
304 (Not modified) The requested page hasn’t been modified since the last request. When the server returns this response, it doesn’t return the contents of the page.

You should configure your server to return this response when a page hasn’t changed since the date given in the request’s If-Modified-Since HTTP header. This saves you bandwidth and overhead because your server can tell Googlebot that a page hasn’t changed since the last time it was crawled.

305 (Use proxy) The requestor can only access the requested page using a proxy. When the server returns this response, it also indicates the proxy that the requestor should use.
307 (Temporary redirect) The server is currently responding to the request with a page from a different location, but the requestor should continue to use the original location for future requests. This code is similar to a 301 in that for a GET or HEAD request, it automatically forwards the requestor to a different location, but you shouldn’t use it to tell the Googlebot that a page or site has moved because Googlebot will continue to crawl and index the original location.

4xx (Request error)
These status codes indicate that there was likely an error in the request which prevented the server from being able to process it.

Code Description
400 (Bad request) The server didn’t understand the syntax of the request.
401 (Not authorized) The request requires authentication. The server might return this response for a page behind a login.
403 (Forbidden) The server is refusing the request. If you see that Googlebot received this status code when trying to crawl valid pages of your site (you can see this on the Crawl Errors page under Health in Google Search Console), it’s possible that your server or host is blocking Googlebot’s access.
404 (Not found) The server can’t find the requested page. For instance, the server often returns this code if the request is for a page that doesn’t exist on the server.

If you don’t have a robots.txt file on your site and see this status on the Blocked URLs page in Google Search Console, this is the correct status. However, if you do have a robots.txt file and you see this status, then your robots.txt file may be named incorrectly or in the wrong location. (It should be at the top-level of the domain and named robots.txt.)

If you see this status for URLs that Googlebot tried to crawl, then Googlebot likely followed an invalid link from another page (either an old link or a mistyped one).

405 (Method not allowed) The method specified in the request is not allowed.
406 (Not acceptable) The requested page can’t respond with the content characteristics requested.
407 (Proxy authentication required) This status code is similar to 401 (Not authorized), but specifies that the requestor has to authenticate using a proxy. When the server returns this response, it also indicates the proxy that the requestor should use.
408 (Request timeout) The server timed out waiting for the request.
409 (Conflict) The server encountered a conflict fulfilling the request. The server must include information about the conflict in the response. The server might return this code in response to a PUT request that conflicts with an earlier request, along with a list of differences between the requests.
410 (Gone) The server returns this response when the requested resource has been permanently removed. It is similar to a 404 (Not found) code, but is sometimes used in the place of a 404 for resources that used to exist but no longer do. If the resource has permanently moved, you should use a 301 to specify the resource’s new location.
411 (Length required) The server won’t accept the request without a valid Content-Length header field.
412 (Precondition failed) The server doesn’t meet one of the preconditions that the requestor put on the request.
413 (Request entity too large) The server can’t process the request because it is too large for the server to handle.
414 (Requested URI is too long) The requested URI (typically, a URL) is too long for the server to process.
415 (Unsupported media type) The request is in a format not supported by the requested page.
416 (Requested range not satisfiable) The server returns this status code if the request is for a range not available for the page.
417 (Expectation failed) The server can’t meet the requirements of the Expect request-header field.

5xx (Server error)
These status codes indicate that the server had an internal error when trying to process the request. These errors tend to be with the server itself, not with the request.

Code Description
500 (Internal server error) The server encountered an error and can’t fulfill the request.
501 (Not implemented) The server doesn’t have the functionality to fulfill the request. For instance, the server might return this code when it doesn’t recognize the request method.
502 (Bad gateway) The server was acting as a gateway or proxy and received an invalid response from the upstream server.
503 (Service unavailable) The server is currently unavailable (because it is overloaded or down for maintenance). Generally, this is a temporary state.
504 (Gateway timeout) The server was acting as a gateway or proxy and didn’t receive a timely response from the upstream server.
505 (HTTP version not supported) The server doesn’t support the HTTP protocol version used in the request.

Introduction to Logging in Tomcat 7

JULI

Previous versions of Tomcat (up to 5.x) used Apache Commons Logging for generating logs. A major disadvantage of this logging mechanism is that it can handle only a single configuration per JVM, which makes it difficult to configure separate logging for each class loader, that is, for each independent application. To resolve this issue, the Tomcat developers introduced a separate API in Tomcat 6 that can capture each class loader's activity in the Tomcat logs. It is based on the java.util.logging framework.

By default, Tomcat 7 uses its own Java logging API, known as JULI, to implement logging services. The API can be found in TOMCAT_HOME/bin of the Tomcat 7 directory structure (tomcat-juli.jar). JULI provides custom logging for each web application and supports private per-application logging configurations. With its separate class loader logging, it also helps in detecting memory issues while unloading classes at runtime.

For more information on JULI and the class loading issue, please refer to http://tomcat.apache.org/tomcat-7.0-doc/logging.html and http://tomcat.apache.org/tomcat-7.0-doc/class-loader-howto.html respectively.

Loggers, appenders, and layouts

Several components are involved when implementing a logging mechanism for an application. Each has its own role in tracking application events. Let's discuss each term individually to find out its usage:

    • Loggers: A logger is the logical name for the log file. This logical name is written in the application code. We can configure an independent logger for each application.
    • Appenders: The generation of logs is handled by appenders. There are many types of appenders, such as FileAppender, ConsoleAppender, SocketAppender, and so on, available in log4j. The following are some example appenders for log4j:

      log4j.appender.CATALINA=org.apache.log4j.DailyRollingFileAppender
      log4j.appender.CATALINA.File=${catalina.base}/logs/catalina.out
      log4j.appender.CATALINA.Append=true
      log4j.appender.CATALINA.Encoding=UTF-8

The previous four lines of appenders define the DailyRollingFileAppender in log4j, where the filename is catalina.out. These logs will have UTF-8 encoding enabled.

If log4j.appender.CATALINA.Append=false is set, the log file is overwritten at startup instead of being appended to.

# Roll over the log once per day
log4j.appender.CATALINA.DatePattern='.'dd-MM-yyyy'.log'
log4j.appender.CATALINA.layout=org.apache.log4j.PatternLayout
log4j.appender.CATALINA.layout.ConversionPattern=%d [%t] %-5p %c- %m%n

The previous lines configure the roll-over of the log once per day and the layout of each log entry.

    • Layout: The format of the logs displayed in the log file. The appender uses a layout (also called a pattern) to format the log entries.

Loggers, appenders, and layouts together help the developer to capture the log message for the application event.

Types of logging in Tomcat 7

We can enable logging in Tomcat 7 in different ways based on the requirement. There are a total of five types of logging that we can configure in Tomcat: application, server, console, access, and host-manager logs. These methods are used in combination with each other based on environment needs. For example, if you have issues where Tomcat services are not displayed, then console logs are very helpful for identifying the issue, as we can verify the real-time boot sequence. Let's discuss each logging method briefly.

Application log

These logs are used to capture application events while running application transactions. They are very useful for identifying application-level issues. For example, if your application's performance is slow on a particular transaction, the details of that transaction can only be traced in the application log. The biggest advantage of application logs is that we can configure separate log levels and log files for each application, which makes it very easy for administrators to troubleshoot the application.

Log4j is used for application log generation in the large majority of cases.

Server log

Server logs are identical to console logs. The only advantage of server logs is that they can be retrieved at any time, whereas console logs are not available after we log out from the console.

Console log

This log gives you complete information about the Tomcat 7 startup and loader sequence. The log file is named catalina.out and is found in TOMCAT_HOME/logs. This log file is very useful for checking application deployment and server startup in any environment. Console logging is configured in the Tomcat file catalina.sh, which can be found in TOMCAT_HOME/bin.

By default, the console logs are configured at the INFO level.

There are different levels of logging in Tomcat, such as WARNING, INFO, CONFIG, and FINE.

In the catalina.out output you can see, for example, the message reporting that the Tomcat services started in 1903 ms.

Access log

Access logs are customized logs, which give information about the following:

  • Who has accessed the application
  • What components of the application are accessed
  • Source IP and so on

These logs play a vital role in the traffic analysis of many applications, for example to analyze bandwidth requirements, and they also help in troubleshooting the application under heavy load. They are configured in server.xml, in TOMCAT_HOME/conf. You can customize the access log definition according to your environment and auditing requirements.

Let’s discuss the pattern format of the access logs and understand how we can customize the logging format:

  • Class Name: This parameter defines the class name used for generation of logs. By default, Apache Tomcat 7 uses the org.apache.catalina.valves.AccessLogValve class for access logs.
  • Directory: This parameter defines the directory location for the log file. All the log files are generated in the log directory—TOMCAT_HOME/logs—but we can customize the log location based on our environment setup and then update the directory path in the definition of access logs.
  • Prefix: This parameter defines the prefix of the access log filename; by default, access log files are named localhost_access_log.yyyy-mm-dd.txt.
  • Suffix: This parameter defines the file extension of the log file. Currently it is in .txt format.
  • Pattern: This parameter defines the format of the log file. The pattern is a combination of values defined by the administrator, for example, %h = remote host address. By default, the Tomcat 7 access log records the remote host address, date/time of the request, request method, URI, and HTTP status code of the response.
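Putting these parameters together, the access log definition in server.xml looks like the following (the pattern shown is the common Tomcat 7 default; adjust it to your auditing needs):

```xml
<Valve className="org.apache.catalina.valves.AccessLogValve"
       directory="logs"
       prefix="localhost_access_log." suffix=".txt"
       pattern="%h %l %u %t &quot;%r&quot; %s %b" />
```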

If you have installed a web traffic analysis tool for the application, you may have to change the access log to the format that the tool expects.

Host manager

These logs record the activity performed using the Tomcat Manager, such as the various tasks performed, application status, application deployments, and the Tomcat lifecycle. They are configured in logging.properties, which can be found in TOMCAT_HOME/conf.

The definitions for the host, manager, and host-manager logs specify the log location, the log level, and the prefix of the filename.

In logging.properties, we are defining file handlers and appenders using JULI.

The log file for manager looks similar to the following:

28 Jun, 2011 3:36:23 AM org.apache.catalina.core.ApplicationContext log
INFO: HTMLManager: list: Listing contexts for virtual host 'localhost'
28 Jun, 2011 3:37:13 AM org.apache.catalina.core.ApplicationContext log
INFO: HTMLManager: list: Listing contexts for virtual host 'localhost'
28 Jun, 2011 3:37:42 AM org.apache.catalina.core.ApplicationContext log
INFO: HTMLManager: undeploy: Undeploying web application at '/sample'
28 Jun, 2011 3:37:43 AM org.apache.catalina.core.ApplicationContext log
INFO: HTMLManager: list: Listing contexts for virtual host 'localhost'
28 Jun, 2011 3:42:59 AM org.apache.catalina.core.ApplicationContext log
INFO: HTMLManager: list: Listing contexts for virtual host 'localhost'
28 Jun, 2011 3:43:01 AM org.apache.catalina.core.ApplicationContext log
INFO: HTMLManager: list: Listing contexts for virtual host 'localhost'
28 Jun, 2011 3:53:44 AM org.apache.catalina.core.ApplicationContext log
INFO: HTMLManager: list: Listing contexts for virtual host 'localhost'

Types of log levels in Tomcat 7

There are seven levels defined for the Tomcat logging services (JULI). They can be set based on the application requirement. From highest to lowest, the sequence is SEVERE, WARNING, INFO, CONFIG, FINE, FINER, FINEST.

Every log level in JULI has its own function. The following table shows the functionality of each log level in JULI:

Log level         Description
SEVERE (highest)  Captures exceptions and errors
WARNING           Warning messages
INFO              Informational messages related to server activity
CONFIG            Configuration messages
FINE              Detailed activity of server transactions (similar to debug)
FINER             More detailed logs than FINE
FINEST (least)    Entire flow of events (similar to trace)

For example, let's take a handler from logging.properties and find out the log level used; the first file handler, for localhost, uses FINE as the log level, as shown in the following snippet:

localhost.org.apache.juli.FileHandler.level = FINE
localhost.org.apache.juli.FileHandler.directory = ${catalina.base}/logs
localhost.org.apache.juli.FileHandler.prefix = localhost.

The following code shows the default file handler configuration for logging in Tomcat 7 using JULI, with each logger's level set to INFO:

############################################################
# Facility specific properties.
# Provides extra control for each logger.
############################################################
org.apache.catalina.core.ContainerBase.[Catalina].[localhost].level = INFO
org.apache.catalina.core.ContainerBase.[Catalina].[localhost].handlers = 2localhost.org.apache.juli.FileHandler
org.apache.catalina.core.ContainerBase.[Catalina].[localhost].[/manager].level = INFO
org.apache.catalina.core.ContainerBase.[Catalina].[localhost].[/manager].handlers = 3manager.org.apache.juli.FileHandler
org.apache.catalina.core.ContainerBase.[Catalina].[localhost].[/host-manager].level = INFO
org.apache.catalina.core.ContainerBase.[Catalina].[localhost].[/host-manager].handlers = 4host-manager.org.apache.juli.FileHandler


Log4j

Log4j is a project of the Apache Software Foundation. It helps in enabling logs at various levels of both the server and the application.

The major advantage of log4j is manageability. It gives the developer the freedom to change the log level in a configuration file; logs can also be enabled or disabled at the configuration level, so there is no need to change code. The log pattern can be customized separately for each application. Log4j has six log levels, plus OFF to disable logging entirely.

Log level for log4j

Every log level in log4j has its own function. The following table shows the functionality of each log level in log4j:

Log level  Description
OFF        Set when you want logging turned off entirely (stops logging).
FATAL      Prints the severe errors that cause premature termination.
ERROR      Captures runtime errors or unexpected conditions. Expect these to be immediately visible on a status console.
WARN       Runtime situations that are undesirable or unexpected, but not necessarily wrong. Expect these to be immediately visible on a status console.
INFO       Interesting runtime events (startup/shutdown). It is best practice to log at the INFO level.
DEBUG      Detailed information on the flow through the system.
TRACE      Captures all events in the system and application; effectively everything.

How to use log4j

Following are the steps to be performed to use log4j:

    1. Download apache-log4j-1.2.X.tar.gz from its official URL http://logging.apache.org/log4j/1.2/download.html, where X is the minor version.
    2. Extract the archive, place log4j.jar in TOMCAT_HOME/lib, and delete juli*.jar from that lib directory.
    3. Delete logging.properties from TOMCAT_HOME/conf.
    4. Create a log4j.properties file in TOMCAT_HOME/conf and define the log appenders for the Tomcat instance, including the appenders for catalina.out and the daily roll-over of logs.

You can customize the log rotation based on size, day, hour, and so on, by adjusting these log4j appenders.

    5. Restart Tomcat services.
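Assembling the appender lines quoted earlier, a minimal log4j.properties for step 4 could look like this (a sketch; the rootLogger line is the one addition needed to activate the appender, the rest are the levels and patterns shown before):

```properties
log4j.rootLogger=INFO, CATALINA

log4j.appender.CATALINA=org.apache.log4j.DailyRollingFileAppender
log4j.appender.CATALINA.File=${catalina.base}/logs/catalina.out
log4j.appender.CATALINA.Append=true
log4j.appender.CATALINA.Encoding=UTF-8
# Roll over the log once per day
log4j.appender.CATALINA.DatePattern='.'dd-MM-yyyy'.log'
log4j.appender.CATALINA.layout=org.apache.log4j.PatternLayout
log4j.appender.CATALINA.layout.ConversionPattern=%d [%t] %-5p %c- %m%n
```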

Important tip for production environment

DEBUG and TRACE modes should not be enabled in production. Even for troubleshooting, the ideal mode is INFO (DEBUG and TRACE generate heavy logging and also affect server performance).

Daily log rotation should be enabled in a production environment. This helps the administrator perform log analysis very easily (file sizes stay small).

Log level mapping

So far, we have discussed the various log levels for JULI and log4j. Let us do a quick log level mapping between the two. The following table shows the one-to-one mapping for log4j and JULI:

Log level in JULI Log level in log4j
SEVERE FATAL, ERROR
WARNING WARN
INFO INFO
CONFIG NA
FINE DEBUG
FINER DEBUG
FINEST TRACE

Values for Tomcat 7

Values are identifiers that change the pattern of the string written to the log. Suppose you want to know the IP address of the remote host that accessed the website; you add a combination of the values listed below to the pattern in the access log definition. For example, let's customize the access logs for Tomcat 7, starting from the default access log definition in server.xml.

We want to change the log pattern to show the time taken to process each request, so we add %T to the pattern.
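As a sketch based on the default Valve definition, the changed pattern with %T appended looks like this; each access log entry will then end with the processing time in seconds:

```xml
<Valve className="org.apache.catalina.valves.AccessLogValve"
       directory="logs"
       prefix="localhost_access_log." suffix=".txt"
       pattern="%h %l %u %t &quot;%r&quot; %s %b %T" />
```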

The following table shows the values used in Tomcat 7 for log pattern customization:

Values Description
%a Remote IP address
%A Local IP address
%b Bytes sent, excluding HTTP headers, or '-' if zero
%B Bytes sent, excluding HTTP headers
%h Remote hostname (or IP address if enableLookups for the connector is false)
%H Request protocol
%l Remote logical username from identd
%m Request method (GET, POST, and so on)
%p Local port on which this request was received
%q Query string (prepended with a ‘?’ if it exists)
%r First line of the request (method and request URI)
%s HTTP status code of the response
%S User session ID
%t Date and time, in Common Log format
%u Remote user that was authenticated (if any)
%U Requested URL path
%v Local server name
%D Time taken to process the request, in milliseconds
%T Time taken to process the request, in seconds
%I Current request thread name (can compare later with stack traces)

Log analysis

Log analysis is a very important and tricky task, which needs to be handled with a lot of care. If you overlook a few lines, you may never find the root cause of the issue. Some of the best practices to keep in mind while doing log analysis are as follows:

  • Check the logs for the last hour before the issue occurred
  • Always go to the first exception in the logs, where the error started
  • Keep in mind that issues are not always caused by a Tomcat malfunction; also check the other infrastructure resources

In non-DOS operating systems (Linux, UNIX, Ubuntu, and so on), two utilities are very useful in log analysis: grep and awk. Let's discuss each briefly:

  • grep: This utility prints the lines that match the searched string:

grep ERROR catalina.log

The previous command searches for the string "ERROR" in the file catalina.log and displays the lines that contain it (grep is case-sensitive; add -i for a case-insensitive search).

  • awk: This command is used for pattern scanning and column processing; it is very useful when we want to print only particular columns of a data file. For example, the following command lists files larger than 10 MB under a directory, printing only the name and size columns (shown here for /opt):

find "location of directory" -type f -size +10000k -exec ls -lh {} \; | awk '{ print $9 ": " $5 }'
find /opt -type f -size +10000k -exec ls -lh {} \; | awk '{ print $9 ": " $5 }'

Helpful commands for log analysis

Administrators look for shortcut commands to do their work efficiently. The following are some useful commands that I have collected during log analysis.

The following commands are used for finding big log files; sometimes in a production environment we get disk-space alerts, and these commands help track down the culprits:

    • Finding large files and directories in Linux:

find "location of directory" -type f -size +10000k -exec ls -lh {} \; | awk '{ print $9 ": " $5 }'

    • Finding directories with size over 100MB:

find / -type d -size +100000k

    • Sort directories as per size using du:

du --max-depth=1 -m | sort -n -r

    • Finding directory sizes:

du -sh folder_name
du -ch folder_name
du -csh folder_name

    • The following command is used for truncating huge log files on the live system (log rotation can be done without recycle of services):

cat /dev/null > file_name

The following commands are used for searching strings in different files:

    • Finding ERROR exception:

grep ERROR log_file

    • Last 200 lines in log file:

tail -200 log_file

    • Follow the log as it is updated:

tail -f log_file
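The search commands above can be tried out on a generated sample log (the file name is illustrative):

```shell
# Build a tiny sample log, then apply the search patterns shown above.
printf 'INFO  start\nERROR disk full\nINFO  done\n' > sample.log
errs=$(grep -c ERROR sample.log)   # count of lines containing ERROR
echo "error lines: $errs"
tail -n 2 sample.log               # last two lines of the file
awk '{print $1}' sample.log        # first field (the log level) of each line
```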