Thread 1 cannot allocate new log [Checkpoint not complete] or [archival required] in alert.log

In alert.log

ORACLE Instance ORCL - Can not allocate log, archival required
Thread 1 cannot allocate new log, sequence xxx
Checkpoint not complete

These messages are written to alert.log when the database wants to reuse a redo log file but cannot. This happens for one of two reasons:

1- ) The checkpoint for that log file has not completed yet (DBWR), or
2- ) The ARCH process has not yet copied the redo log file to the archive destination.

Oracle wants to reuse a redo log file, but the checkpoint position is still in that log file (or the file has not been archived yet). The database halts until the checkpoint or the archiving activity completes, so that the redo log file can be reused safely.
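You can check which case you are in from V$LOG: a redo log group in ACTIVE status still holds the checkpoint, and ARCHIVED = NO means ARCH has not copied it yet. A minimal sketch, assuming sqlplus is on your PATH and you can connect as SYSDBA:

```shell
# Show redo log group states: ACTIVE means the checkpoint for that group
# is not complete yet; ARCHIVED = NO means the group is not archived yet.
sqlplus -s "/ as sysdba" <<EOF
set linesize 200 pagesize 100
select group#, sequence#, status, archived from v\$log;
EOF
```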

As you see, one reason is a slow DBWR. So how can we make DBWR faster?

  • You can use multiple DBWR processes (the DB_WRITER_PROCESSES parameter).
  • Enable ASYNC I/O: on RHEL, you can verify whether your Oracle binary is linked with libaio using the ldd and nm commands (see the “Verifying Asynchronous I/O Usage” section of the documentation).
  • You can use DBWR slaves if ASYNC I/O is not supported.
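The libaio check mentioned above can be done like this (a sketch; the paths assume a standard ORACLE_HOME layout):

```shell
# If the oracle binary is linked against libaio, async I/O is available to it.
ldd $ORACLE_HOME/bin/oracle | grep -i libaio

# io_getevents in the symbol table is another indicator of async I/O support.
nm $ORACLE_HOME/bin/oracle | grep io_getevents
```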

Adding more redo log groups can also solve this issue: it gives DBWR and ARCH more time to finish their work before a group has to be reused.

Increasing the size of the redo log files can also help. Like adding more redo log groups, larger files mean log switches happen less often, so there is more time before a file has to be reused.
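For example, adding a new redo log group could look like the following; the group number, file path and 512M size are illustrative values for this sketch, not recommendations:

```shell
# Add a new redo log group; adjust group number, path and size for your system.
sqlplus -s "/ as sysdba" <<EOF
alter database add logfile group 4
  ('/u01/app/oracle/oradata/ORCL/redo04.log') size 512m;
EOF
```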

Good luck.


Checking RMAN backup using Shell Script

Hi,

Here is a shell script for checking the status of your RMAN backup operations.

Usage :

[oracle@ora1 ~]$ chmod 755

[oracle@ora1 ~]$ ORA1 (This is your ORACLE_SID)

The “To query details about past and current RMAN jobs” section of the Oracle Database Backup and Recovery User’s Guide is here.

Good Luck 🙂

[oracle@ora1 ~]$ cat

#If you schedule this script from oracle's crontab, you do not need to export the SID and HOME variables
export ORACLE_SID=$1
export ORACLE_HOME=/u01/app/oracle/product/11.2.0/dbhome_1

#Formatting date
MYDATE=`date +%Y-%m-%d:%H:%M:%S`

#Query FAILED Backup Operations and spool the output to backup.alert
#(the SELECT below is a reconstruction -- the original query was truncated;
# it follows the "query details about past and current RMAN jobs" section)
$ORACLE_HOME/bin/sqlplus -s "/ as sysdba" <<EOF
set feed off
set linesize 400
set pagesize 200
spool backup.alert
select input_type, status, start_time, end_time
from v\$rman_backup_job_details
where start_time > sysdate - 1;
spool off
EOF

#Check if there is any FAILURE
ISFAIL=`cat backup.alert|grep FAILED |grep -v RMAN_BACKUP_JOB_DETAILS|wc -l`

if [ $ISFAIL -gt 0 ]
then
  #Fetch Backup Type if any failure occurred. For DB Full Backup BTYPE = DB , for ARCHIVELOG BTYPE = ARCHIVELOG etc.
  BTYPE=`cat backup.alert|grep FAILED |grep -v RMAN_BACKUP_JOB_DETAILS|awk '{print $1}'`
  echo $BTYPE " Backup ERROR" > backup_check_$MYDATE.log
fi
exit 0
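If you schedule the script from the oracle user's crontab, as the first comment in the script suggests, the entry might look like this; check_rman.sh is a hypothetical name for the script above, so adjust the path, time and SID for your system:

```
# Run the RMAN backup check every day at 07:00 with SID ORCL.
# "check_rman.sh" is a placeholder name for the script above.
0 7 * * * /home/oracle/check_rman.sh ORCL
```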

SCSI LUN Disk permission operation using udev rules on RedHat 5

If you are working with RedHat 5.x and SCSI LUNs, creating raw devices before adding your disks into your ASM diskgroups can be an easy way to manage permissions. Let's do an example:

Say your storage admin gave you 2 LUNs: one is /dev/sdb1 and the other one is /dev/sdc1.

1- ) Assigning these LUNs to RAWs

[root@ora1 ~]# raw /dev/raw/raw1 /dev/sdb1

[root@ora1 ~]# raw /dev/raw/raw2 /dev/sdc1

[root@ora2 ~]# raw /dev/raw/raw1 /dev/sdb1

[root@ora2 ~]# raw /dev/raw/raw2 /dev/sdc1

2- ) Add these two lines to /etc/sysconfig/rawdevices so the bindings survive a reboot:
/dev/raw/raw1 /dev/sdb1
/dev/raw/raw2 /dev/sdc1

3- ) Now we are creating a new file like 88-raw.rules under /etc/udev/rules.d/ and writing down this entry:

KERNEL=="raw[1-2]*", OWNER="oracle", GROUP="dba", MODE="640"

If you have 3 raw devices, the pattern becomes KERNEL=="raw[1-3]*", and so on.

That’s all. Now you can discover your disks to add them into your ASM.
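Before running ASM disk discovery, you can sanity-check the bindings and permissions (a quick sketch using the raw and ls commands):

```shell
# List all current raw device bindings.
raw -qa

# Ownership should be oracle:dba once the udev rule has been applied.
ls -l /dev/raw/
```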

Good Luck 🙂

Disk permission operation using udev rules on RedHat 6

Changing disk permissions can be performed using udev. Let's see an example:

1- ) Fetch the unique disk id with scsi_id.

[root@ora1 ~]# scsi_id -g -u -d /dev/sdb

This command returns a unique id like "20a0c2b147c3ae84b74a42b058e8a7c3ae84b74a42b0a393"

2- ) A new udev rules file will be used for setting the permissions (it really should be a new file, not an edit of an existing rule file)

[root@ora1 ~]# cd /etc/udev/rules.d/

[root@ora1 ~]# more 88-oracle-disks.rules

KERNEL=="sd*", SUBSYSTEM=="block", ENV{DEVTYPE}=="disk", ENV{ID_SERIAL}=="20a0c2b147c3ae84b74a42b058e8a7c3ae84b74a42b0a393", NAME="raw/raw1", OWNER="oracle", GROUP="dba", MODE="0660"

[root@ora1 ~]#

The device node is created under /dev; i.e., NAME="raw/raw1" means the disk's location will be /dev/raw/raw1.

As you see, this rule carries the disk's serial id (20a0c…393), the device name (raw/raw1), the mode (0660) and the owner-group information.

After creating this rule, reboot the host(s) and check whether you can discover your disk under "/dev/raw/*".
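A reboot is the safest way to be sure, but on RedHat 6 you can usually re-apply the rules without one (a sketch using udevadm):

```shell
# Re-read the rule files under /etc/udev/rules.d/ ...
udevadm control --reload-rules

# ... and replay the kernel events for block devices so the new rule fires.
udevadm trigger --subsystem-match=block

# Verify the result.
ls -l /dev/raw/
```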

You can find a very good article about this topic in Frits Hoogland’s blog. Here is the link.

Good Luck 🙂

“crsctl stop/start crs” vs “crsctl start/stop cluster”


crs and cluster start/stop operations are sometimes confused.

For that reason, I want to identify the difference between crs and cluster operations.

Let's clarify this using the "crsctl stop" command help:

-bash-3.2$ crsctl stop
.... (Some usage hints...)
(Here is important ..)
crsctl stop crs [-f]
Stop OHAS on this server. (This means you can ONLY stop the local host's CRS.)
-f  Force option (you can force the stop using "-f")

crsctl stop cluster [[-all]|[-n <server>[...]]] [-f]
Stop CRS stack
Default         Stop local server
-all            Stop all servers
-n              Stop named servers
server [...]    One or more blank-separated server names
-f              Force option

As you see above, "crsctl stop cluster" can stop the CRS stack on both the local node and remote nodes, as long as OHASD is running on them, while "crsctl stop crs" stops the whole stack, including OHASD, on the local node only. OHASD must be running to manage the CRS stack. (That is important.)
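Putting this together, a typical sequence (run as root) might look like the following sketch, based on the help text above:

```shell
# Stop the CRS stack on EVERY node from a single node.
# OHASD stays up on each node, so the stack can be started again remotely.
crsctl stop cluster -all

# Start the CRS stack back on all nodes.
crsctl start cluster -all

# Stop the ENTIRE stack, including OHASD, on the LOCAL node only.
# This has to be repeated on each node you want to fully stop.
crsctl stop crs
```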