TIP #14: Difference between full backup and level 0 incremental backup

I have been asked many times what the difference between a full backup and a level 0 incremental backup is, and whether or not they are the same.
A level 0 incremental backup, which is the base for subsequent incremental backups, copies all blocks containing data, exactly as a full backup does. (Same functionality.)
The only difference between these two backups is that a full backup is never considered as the base backup of an incremental strategy.
Therefore, if you include incremental backups in your backup strategy, you need at least one level 0 incremental backup.
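As a sketch, an incremental strategy therefore starts with a level 0 backup, after which level 1 backups copy only the blocks changed since the parent backup (assuming a configured default channel):

```
BACKUP INCREMENTAL LEVEL 0 DATABASE;
BACKUP INCREMENTAL LEVEL 1 DATABASE;
```

A full backup taken between these would still be usable for restore, but RMAN would not use it as the parent of the level 1.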

TIP #13: Ideal RMAN format for backup

Naming RMAN backups properly is vital.
A proper backup name gives you a better idea of which backup may be used during recovery.
My ideal RMAN backup format is backup_%d_set%s_piece%p_%T_%U for backup sets when duplexing is not used. For duplexed backups it would be backup_%d_set%s_piece%p_copy%c_%T_%U.
The following is a brief description of these tags:
  • %d : name of the database
  • %s : backup set number (unique for the lifetime of the controlfile)
  • %p : piece number within the backup set
  • %T : date in the format YYYYMMDD
  • %U : unique identifier composed of %u_%p_%c
  • %c : copy number of the backup piece, when duplexing is in use

Please note that these tags are case-sensitive.

If no format is specified, Oracle uses %U by default. For controlfile autobackup, I prefer to keep the default configuration, which is %F. This tag embeds the DBID in the filename, which is necessary for recovery when the RMAN catalog is not accessible. The %F tag cannot be used for backup sets.
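A minimal sketch of putting this format into effect (the /backup destination directory is an assumption; adjust it to your environment):

```
CONFIGURE CHANNEL DEVICE TYPE DISK FORMAT '/backup/backup_%d_set%s_piece%p_%T_%U';
```

For duplexed backups, use the variant with copy%c instead, so that each duplexed piece gets a distinct name.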

Enjoy reading ...

TIP #12: Cannot use TNS to log in to the database.

When I tried to connect to the database via TNS, I got the following error:

ERROR - DBPrereq DBConection error
ORA-01034: ORACLE not available
ORA-27101: shared memory realm does not exist
IBM AIX RISC System/6000 Error: 2: No such file or directory

However, the database was up and was accessible from sqlplus on the box without using tnsnames.
The TNS entry in tnsnames.ora was pingable (tnsping for this entry was OK).
Environment settings were fine.

The problem was a spare / in ORACLE_HOME in listener.ora.

Wrong listener.ora (note the trailing / in ORACLE_HOME):

(SID_DESC =
  (PROGRAM = extproc)
  (ORACLE_HOME = /o029/home/metaAS10g/product/)
)
(SID_DESC =
  (GLOBAL_DBNAME = dasdbm.isc.upenn.edu)
  (SID_NAME = dasdbm)
  (ORACLE_HOME = /o029/home/metaAS10g/product/)
)

Correct listener.ora:

(SID_DESC =
  (PROGRAM = extproc)
  (ORACLE_HOME = /o029/home/metaAS10g/product)
)
(SID_DESC =
  (GLOBAL_DBNAME = dasdbm.isc.upenn.edu)
  (SID_NAME = dasdbm)
  (ORACLE_HOME = /o029/home/metaAS10g/product)
)

TIP #11: Export fails with an ORA-07745 error

Export of any schema failed, regardless of whether exp or Data Pump was used. It also generated an ORA-07745 error in the alert log.
The export log reported the following:
EXP-00056: ORACLE error 24324 encountered
ORA-24324: service handle not initialized
EXP-00056: ORACLE error 24324 encountered
ORA-24324: service handle not initialized
EXP-00000: Export terminated unsuccessfully

The root of this problem is the MDSYS account. Unlocking this account resolves the issue.
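A sketch of the fix, run as a privileged user:

```sql
-- Check the account status first
SELECT username, account_status FROM dba_users WHERE username = 'MDSYS';

-- Unlock the account
ALTER USER mdsys ACCOUNT UNLOCK;
```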

TIP #10: How to find password of existing database link

I faced an odd situation. I'd like to copy some database links from source DBs to target DBs. In order to create a database link in the target database, I decided to run a "create database link" SQL statement. However, to create the database link in the target database, I need to know the username and password currently used by the existing database link in the source database.

sys.link$ holds the unencrypted password value for each database link.

The following is a sample query:

select 'create database link ' || name || ' connect to ' || userid || ' identified by ' || password || ' using ' || '''' || host || '''' || ';' from sys.link$;

TIP #9 : Track too many open cursors.

If a cursor in PL/SQL code gets opened but is never closed, Oracle may report a "maximum open cursors exceeded" error (ORA-01000).
I found the following query handy for tracking down the cause of the problem.

SELECT * FROM v$open_cursor
WHERE sid IN (SELECT sid FROM v$open_cursor GROUP BY sid HAVING COUNT(*) > &threshold);
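For context, the per-session limit these cursors are counted against can be checked with:

```sql
SELECT value FROM v$parameter WHERE name = 'open_cursors';
```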

TIP #8 : Index for null columns

A B-tree index does not store entries whose key columns are all NULL; consequently, if you have an IS NULL condition in a SQL statement, the related index is ignored by default.

create index scott.empidx_sal on scott.emp(sal);
select ename from scott.emp where sal is null;
---> Execution plan: full table scan.

Workarounds:

- Create a function-based index on nvl(sal,-1) and change the query to: select ename from scott.emp where nvl(sal,-1)=-1;

- I prefer this solution: create a composite index. In a composite index, as long as not all of the key columns are null, the index stores the entry.
create index scott.empidx_salename on scott.emp(ename,sal);
select ename from scott.emp where sal is null;
---> Execution plan: index scan
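A sketch of the function-based-index workaround for comparison (the index name empidx_salnvl is an assumption):

```sql
-- Function-based index: rows with a NULL sal are indexed as -1
create index scott.empidx_salnvl on scott.emp(nvl(sal,-1));

-- Rewritten query that can use the index
select ename from scott.emp where nvl(sal,-1) = -1;
```

Note that -1 must be a value that cannot occur as a real salary.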

TIP #7 : Listener crash on a Windows box.

I faced a problem yesterday. The Oracle listener crashed each time a connection was made. The OS was WINXP SP2.

The event log showed the following error in the SYSTEM and APPS logs.

Faulting application TNSLSNR.EXE, version, faulting module orantcp9.dll, version, fault address 0x00005732

The following items are proposed by Metalink to resolve the issue:

- Setting USE_SHARED_SOCKET=true in HKEY_LOCAL_MACHINE\software\oracle\home0
- Removing all 3rd-party Internet download software, such as download accelerators.
- Changing TNSNAMES.ORA to explicitly mention the ORACLE_HOME.

None of the above solutions worked for me. In addition, I could not find anything in the listener trace file or listener log that might lead me to the cause of the problem.

By the way, I verified that no other process was listening on the listener's default port. Changing the listener port did not change anything.

Finally, I found the following magic command, which fixed the issue after running it and restarting the box.

netsh winsock reset catalog

Simple explanation :

Most Internet connectivity problems are caused by corruption in the Winsock settings. Windows sockets settings may get corrupted by the installation of networking software or by a malware infestation. In this situation the user can connect to the Internet, but may randomly get "Page cannot be displayed". The above command resets the Winsock catalog to the default configuration. This can be useful if a malformed LSP is installed that results in loss of network connectivity.

TIP #6: Stop jobs in 10g

Before 10g, if you wanted to stop all jobs, you could simply set JOB_QUEUE_PROCESSES to zero.
In 10g, Oracle introduced the Scheduler (dbms_scheduler) with many advanced features.
In this post I do not want to deal with the different features of Scheduler jobs.
In 10g, it is possible to schedule a job either in dba_jobs or in dba_scheduler_jobs.
Setting JOB_QUEUE_PROCESSES to zero stops only the jobs in dba_jobs, which means that jobs in the Scheduler keep running.
In order to disable jobs in dba_scheduler_jobs, the following call should be run for each job in the Scheduler:

exec sys.dbms_scheduler.disable( '"schema?"."job name?"' );

FYI: to find all jobs using the 10g Scheduler, run: select owner, job_name, state from dba_scheduler_jobs;
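A sketch that disables every Scheduler job in one pass (assuming the caller has the MANAGE SCHEDULER privilege; narrow the cursor query if only some jobs should be stopped):

```sql
begin
  for j in (select owner, job_name from dba_scheduler_jobs) loop
    -- quote owner and job name to handle mixed-case identifiers
    sys.dbms_scheduler.disable('"' || j.owner || '"."' || j.job_name || '"');
  end loop;
end;
/
```

For a job that is currently running, dbms_scheduler.disable also accepts force => TRUE.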

TIP #5 : Oracle account locked after 10g migration

I migrated 9i DBs to 10g.
After migration, clients complained that some of their accounts were locked.
In 10g, FAILED_LOGIN_ATTEMPTS in the DEFAULT profile was changed from UNLIMITED to 10.
This is the cause of the problem.

I suggest creating a new profile before migration and assigning all users with the DEFAULT profile to the new profile.
The following query generates the required ALTER USER statements:

select 'ALTER USER ' || username || ' PROFILE NEW_DEFAULT;' from dba_users where profile='DEFAULT';
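A sketch of such a profile (the name NEW_DEFAULT and restoring the pre-10g behavior are assumptions; unspecified limits fall back to the DEFAULT profile):

```sql
CREATE PROFILE new_default LIMIT
  FAILED_LOGIN_ATTEMPTS UNLIMITED;
```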

TIP #4 : RMAN Recovery window

I'd like to clarify a common misunderstanding of the RMAN recovery window.
There is an incorrect assumption:

Setting the RMAN recovery window to 14 days ==> any backup older than 14 days becomes obsolete. Incorrect.

According to the Oracle documentation, a recovery window is a period of time that begins with the current time and extends backward in time to the point of recoverability. In other words, the recovery window determines the earliest point in time to which recovery must be possible. Consequently, RMAN may need backups older than recovery window = n days in order to recover the database to n days ago.

Let me give you an example:

Assumptions:
- Recovery window: 7 days.
- Scheduled DB backups: Nov 1st, Nov 14th, Nov 21st, Nov 28th.
- Current date: Nov 16th.

Backups:          Nov 1st -- Nov 14th -- Nov 21st -- Nov 28th
Recovery window:  Nov 9th -- Nov 16th

In this case, to be recoverable at any time between Nov 9th and Nov 16th (the recovery window), RMAN requires the Nov 1st backup for recovery at any point between Nov 9th and Nov 14th. In addition, RMAN uses the Nov 14th backup for recovery after that point.
This gives a better picture: having a recovery window of 7 days does not make the Nov 1st backup obsolete.
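The retention policy for this example is configured from the RMAN prompt:

```
CONFIGURE RETENTION POLICY TO RECOVERY WINDOW OF 7 DAYS;
REPORT OBSOLETE;
```

REPORT OBSOLETE then shows which backups RMAN actually considers obsolete under this policy.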

TIP #3 : _PGA_MAX_SIZE hidden parameter.

_pga_max_size is a hidden parameter which can play a key role in proper PGA usage.
For serial operations, the size of the PGA for each work area is limited to a maximum of 100MB once pga_aggregate_target >= 2GB.
_pga_max_size controls the maximum PGA that can be used by a single work area.
This Oracle default can be changed by assigning a higher value to the hidden parameter. In addition, it can reduce heavy temporary tablespace usage.
I highly recommend setting this parameter for big databases.
I did not encounter any problems after setting this parameter in 9i and 10g.
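A sketch of checking and raising the parameter (the 400M value is only an example; as with any underscore parameter, test it first and/or confirm with Oracle Support):

```sql
-- Current value; hidden parameters are visible via x$ksppi / x$ksppcv (query as SYS)
SELECT i.ksppinm name, v.ksppstvl value
FROM   x$ksppi i, x$ksppcv v
WHERE  i.indx = v.indx AND i.ksppinm = '_pga_max_size';

-- Underscore parameters must be double-quoted when set
ALTER SYSTEM SET "_pga_max_size" = 400M SCOPE = SPFILE;
```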

TIP #2 : TEMP tablespace cannot extend.

TEMP tablespace growth is a well-known issue for DBAs.
I am sure each of us has experienced it many times.
In this post, I'd like to outline the issue and explain the options to fix it.
Basically, this error is reported in the alert log with the following statement:

ORA-01652: unable to extend temp segment by %s in tablespace %s

A "temp segment" is not necessarily a SORT segment in a temporary tablespace. Actually it can be used in following conditions:

- In SORT statement
- In creating Index
- In enabling constraint
- In creating primary key/unique key constraint.

Consequently, it is not true to say that a TEMP segment is only used for SORT operations.
On the other hand, in practice the most probable use of a TEMP segment is sorting.

I suggest following the guideline below to resolve the issue:

- Find out whether the PGA
(pga_aggregate_target in 9i and later, or sort_area_size before 9i) has a proper size.
A bigger PGA will decrease TEMP usage.

- Make sure that TEMP datafile is autoextensible and maxbytes/maxblocks are not very low.

select * from dba_temp_files;

- Find the queries that allocate big chunks of the temp tablespace and try to tune them.
The following is a shell script which can be scheduled via crontab.
The result contains the SQL statements which allocate the TEMP tablespace.

In the database, two staging tables (monitor_sortseg and monitor_sortuse) must exist to hold the snapshots.

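A sketch of the staging tables, with columns inferred from the INSERT statements in the script (names and datatypes are assumptions):

```sql
create table monitor_sortseg (
  tablespace_name varchar2(30),
  used_extents    number,
  free_extents    number,
  sample_date     date
);

create table monitor_sortuse (
  username        varchar2(30),
  session_num     number,
  tablespace      varchar2(30),
  sample_date     date,
  sql_text        varchar2(1000),
  sorts           number
);
```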
In the OS (monitor_temp.sh):

. ~orfnprd/.profile
$ORACLE_HOME/bin/sqlplus -s "/ as sysdba" <<\EOF
set serveroutput on
begin
  insert into monitor_sortseg
    select tablespace_name, used_extents, free_extents, sysdate
    from v\$sort_segment
    where upper(tablespace_name) = 'TEMP' and free_extents <= 500;
  if sql%rowcount <> 0 then
    dbms_output.put_line('Start -------- Start Date: '||sysdate);
    dbms_output.put_line('TEMP free extents lower than 500 .... Finding SQL statement');
    insert into monitor_sortuse
      select user, session_num, sortu.tablespace, sysdate, sql_text, sorts
      from v\$sort_usage sortu, v\$sqlarea sqla
      where upper(sortu.tablespace) = 'TEMP' and sortu.sqlhash = sqla.hash_value;
    dbms_output.put_line('End ---------- End Date: '||sysdate);
  end if;
  commit;
end;
/
EOF

Sample crontab (runs every 5 minutes):

0,5,10,15,20,25,30,35,40,45,50,55 * * * * monitor_temp.sh > monitor_temp.log 2>&1

TIP #1: How to minimize rollback segments generation in bulk delete ?

One of my clients has a monthly job which deletes many records. This job generates a lot of rollback. The main idea of this post is to find options to minimize rollback generation in a bulk delete.

There are 2 options :

- Replace the delete statement with truncate. Truncate removes all records from the table and does not generate any rollback. This solution may not be applicable in all cases. Actually, this option is not applicable for my client, because the code must not be changed (company rule!!!!). Besides, the monthly job only deletes the table partially.

- Run the delete statement iteratively. The following shows sample code:

declare
  nCount number;
begin
  loop
    delete from table? where rownum < 1001 and condition?;
    nCount := sql%rowcount;
    commit; --- Cleans up rollback
    exit when nCount = 0;
  end loop;
end;

With this option, rollback segments are cleaned up after each deletion of 1000 records.
These rollback segments can then be reused.

First post

I created this new blog to share some Oracle DBA tips from my own experience.
I hope this blog will be a place to exchange ideas on Oracle DBA subjects.
I appreciate any technical post on this blog and will try my best to respond.
Currently, I am working as an Oracle DBA in Canada.