TIP 95#: Pending transactions and session exit

What happens if some DML changes are run and the session exits without a rollback or commit? What if you have a release to run with many SQL and PL/SQL pieces and it errors out in the middle?
In this post, I try to clarify these questions:

SQL*Plus session exit
If a SQL*Plus session exits, by default it commits all pending transactions. In SQL*Plus, EXITCOMMIT is ON by default, and this is what makes SQL*Plus commit any pending transaction on exit. If you want to disable this behaviour, set EXITCOMMIT to OFF.
Also be aware that AUTOCOMMIT can be set in SQL*Plus. It is OFF by default, so it should not be a concern in normal cases; however, if it is ON, every DML statement is committed as soon as it runs.

Example 1 :
--- Session 1 (Default sqlplus setting) ---
SQL>create table dummy (id number);
Table created.
SQL>insert into dummy values(100);
1 row created.
--- Session 2 (Default sqlplus setting) ---
SQL>select * from dummy;

When session 1 exits, all pending transactions are committed,
and as a result session 2 can see the change

Example 2 :
--- Session 1 (Disable exitcommit) ---
SQL>set exitcommit OFF
SQL>create table dummy (id number);
Table created.
SQL>insert into dummy values(100);
1 row created.
--- Session 2 ---
SQL>select * from dummy;
no rows selected
When session 1 exits with EXITCOMMIT OFF, all pending
transactions are rolled back

PLSQL blocks and transactions

Oracle treats a stored procedure or anonymous block call as an atomic operation. In other words, when a BEGIN ... END PL/SQL block is submitted, Oracle wraps it with an implicit savepoint, and if the block fails in any part, Oracle rolls back to the state before the whole PL/SQL call.

Example 3 :
--- Session 1 - PLSQL block ---
SQL>begin
2 insert into dummy values(1000);
3 insert into dummy values(1001,'aa');
4 end;
5 /

ERROR at line 3:
ORA-06550: line 3, column 13:
PL/SQL: ORA-00913: too many values
ORA-06550: line 3, column 1:
PL/SQL: SQL Statement ignored

------------------- Session 2 -----------------
SQL>select * from dummy;
no rows selected

The whole PL/SQL block is an atomic operation,
so no changes are committed

Release run with many PLSQL blocks

For a release that runs several PL/SQL blocks and SQL statements, I prefer to commit each change as long as there is no issue, and when an issue occurs, to stop the script from applying further changes.
The key is not to re-run the whole script from the top every time an issue is detected.
For this, WHENEVER SQLERROR EXIT SQL.SQLCODE; is used: it exits the session when an error occurs and, by default, commits any pending transaction on exit.

Example 4 :
--- Session 1 - PLSQL blocks ---
run.sql :

WHENEVER SQLERROR EXIT SQL.SQLCODE;
begin
  insert into dummy values(2000);
end;
/
begin
  insert into dummy values(3000,'aa');
end;
/


ERROR at line 8:
ORA-06550: line 8, column 13:
PL/SQL: ORA-00913: too many values
ORA-06550: line 8, column 1:
PL/SQL: SQL Statement ignored

------------------- Session 2 -----------------
SQL>select * from dummy;

        ID
----------
      2000
Each PL/SQL block is an atomic operation. When the second
block fails, the session exits, and as a result of the
exit all pending transactions are committed, including
the first block's insert

TIP 94#: Good to know/be reminded of

I was reading a couple of articles from Tom Kyte in Oracle Magazine and found it would be useful to mention some of them here for reminder/reference ...

Fast Full Index Scan vs Full Index Scan

A fast full index scan (FFS) reads the entire index, unsorted, using multiblock I/O; it effectively treats the index as a tiny version of the table. (If you want a tiny version of a table on some columns, an index can be an option, as long as at least one of the indexed columns is NOT NULL so every row appears in the index.) The scan reads all index blocks, root, branch and leaf, and simply ignores the root and branch blocks.
A full index scan, by contrast, uses single-block reads: it starts from the root, descends through the branch blocks to the leaf level, and then reads the leaf blocks in order along the bottom of the index.
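As a quick sketch of the two scan types (the table and index names below are made up for illustration), each one can be encouraged with a hint:

```sql
-- Hypothetical table; empno is NOT NULL so the index covers every row
create table emp_demo (empno number not null, ename varchar2(30));
create index emp_demo_ix on emp_demo (empno, ename);

-- Fast full index scan: multiblock reads, results unsorted
select /*+ index_ffs(e emp_demo_ix) */ empno, ename from emp_demo e;

-- Full index scan: single-block reads, rows return in index order
select /*+ index(e emp_demo_ix) */ empno, ename from emp_demo e order by empno;
```

Checking the execution plan (for example with DBMS_XPLAN.DISPLAY_CURSOR) shows INDEX FAST FULL SCAN for the first query and INDEX FULL SCAN for the second.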

Null values in bitmap indexes

Unlike B*Tree indexes, bitmap indexes always index NULL values. Every row in the table is indexed by a bitmap index, no matter whether the column value is NULL or NOT NULL.
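A small sketch of the consequence (demo objects are hypothetical): because the bitmap index covers the NULL rows, it can satisfy an IS NULL predicate, which a single-column B*Tree index cannot:

```sql
create table t_demo (status varchar2(10));
insert into t_demo values (null);
insert into t_demo values ('OPEN');
commit;

create bitmap index t_demo_bix on t_demo (status);

-- The bitmap index entry for NULL lets this query use the index
select count(*) from t_demo where status is null;
```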

Wide table

If you have a wide table but only a couple of its columns are actually in use, the following two options can be considered to improve the performance of fetching records:

Create an index on all the necessary columns; Oracle can then use a fast full index scan on it as a tiny version of the table.

Break the table down into two tables: one with the most frequently used columns and one with the less frequently used ones. Create a view joining the two tables and query the view; whenever only the frequently used columns are accessed, Oracle automatically eliminates the second table from the plan.

Basic sample :

maintab (col1, col2, col3, col4, col5)
tab1 (col1, col2, col3)
tab2 (col1, col4, col5)
view as select tab1.col1, col2, col3, col4, col5 from tab1, tab2 where tab1.col1=tab2.col1
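Fleshing out the basic sample as a sketch (names are illustrative; join elimination only kicks in when constraints guarantee every tab1 row has exactly one tab2 match, so the keys below matter):

```sql
create table tab2 (col1 number primary key, col4 varchar2(30), col5 varchar2(30));
create table tab1 (col1 number primary key, col2 varchar2(30), col3 varchar2(30));

-- Foreign key telling the optimizer each tab1 row has one tab2 match
alter table tab1 add constraint tab1_fk
  foreign key (col1) references tab2 (col1);

create or replace view maintab_v as
  select tab1.col1, col2, col3, col4, col5
    from tab1, tab2
   where tab1.col1 = tab2.col1;

-- Only tab1 columns referenced: the optimizer can drop tab2 from the plan
select col1, col2 from maintab_v;
```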

DDL in trigger

It is wrong to use any DDL in a trigger, as DDL performs an implicit commit. Even with PRAGMA AUTONOMOUS_TRANSACTION it is wrong to use DDL in a trigger. If there is no other choice, it is better to submit the DDL as a job from the trigger instead of executing it directly.
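A minimal sketch of the job approach (table and trigger names are made up): a job submitted with dbms_job runs in its own session only after the triggering transaction commits, so the implicit commit of the DDL cannot corrupt the trigger's transaction:

```sql
create or replace trigger trg_demo
after insert on some_table
declare
  l_job number;
begin
  -- The job runs the DDL in a separate session, after this
  -- transaction commits (job submission itself is transactional)
  dbms_job.submit(l_job,
    'execute immediate ''truncate table work_table'';');
end;
/
```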

Easy Connect

From 10g onwards, Oracle allows easy connect to connect to a database. For easy connect, sqlnet.ora should have EZCONNECT in NAMES.DIRECTORY_PATH. With this setting in place, easy connect can be used in the form //host:port/service_name
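For example (the user, host, port and service name below are placeholders):

```
sqlplus scott/tiger@//dbhost.example.com:1521/orcl
```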

TIP 93#: GoldenGate lag

I have a client with Oracle DBs using bi-directional GoldenGate replication in production and UAT environments. I noticed that the DBs were out of sync. Checking the GoldenGate lag with info all did not show any major lag, and nothing was reported in the GoldenGate log or the replication discard file. All processes
were up and running on the source and target boxes, per the following snapshots. (On node A: EXT01SA is the primary extract, PPSA01SB is the pump which pushes changes to node B, and RESB01SA is the replicat which receives changes from node B. The same set of processes exists on node B.)

------- Node A --------
GGSCI (########) 1> info all

Program Status Group Lag Time Since Chkpt

EXTRACT RUNNING EXT01SA 00:00:00 00:00:10 
EXTRACT RUNNING PPSA01SB 00:00:00 00:00:01 
REPLICAT RUNNING RESB01SA 00:00:00 00:00:02 

------ Node B ---------

Program Status Group Lag Time Since Chkpt

EXTRACT RUNNING EXT01SB 00:00:00 00:00:03 
EXTRACT RUNNING PPSB01SA 00:00:00 00:00:02 
REPLICAT RUNNING RESA01SB 00:00:00 00:00:01 

Checking further on the GG processes, I found that the pump was behind the primary extract, although this was not reported by info all. In more detail, the following shows that the primary extract is writing to sequence 349 while the pump is still reading sequence 340! When there is no lag, the pump should be reading almost the same sequence that the primary extract is writing.

GGSCI > send extract EXT01SA, status
Sending STATUS request to EXTRACT EXT01SA ...
Current status: Recovery complete: At EOF
Current read position:
Sequence #: 5297
RBA: 17509888
Timestamp: ############
Current write position:
 Sequence #: 349 
RBA: 921878
Timestamp: ##########
Extract Trail: ############

GGSCI > send extract PPSA01SB status
Sending STATUS request to EXTRACT PPSA01SB ...

Current status: Recovery complete: Processing data

Current read position:
Sequence #: 340 
RBA: 472796720
Timestamp: #########
Extract Trail: ############

Current write position:
Sequence #: 737
RBA: 2248444
Timestamp: ###########
Extract Trail: ##########

So now we know why the DBs are out of sync, but what is the cause? Checking with logdump, the position the pump stopped at is the end of file for trail 340 (interesting!).
Further checking in the GG log showed that the primary extract trail had been rolled over manually with alter extract EXT01SA, etrollover. In GoldenGate, when a trail file is rolled over manually, the pump and replicat stop without detecting the new trail file, and these processes need to be repositioned as well. Interestingly enough, info all does not show any lag, but the moment the pump or replicat is repositioned, the command starts showing the real lag.
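Under the process and trail names from the snapshots above, repositioning the downstream processes looks roughly like this (a sketch, not a definitive runbook; confirm the new sequence number on your own system first):

```
-- On node A: point the pump at the start of the new trail (349 here)
GGSCI > stop extract PPSA01SB
GGSCI > alter extract PPSA01SB, extseqno 349, extrba 0
GGSCI > start extract PPSA01SB

-- If the pump's own output trail was also rolled over, reposition
-- the replicat on node B the same way:
GGSCI > stop replicat RESA01SB
GGSCI > alter replicat RESA01SB, extseqno 738, extrba 0
GGSCI > start replicat RESA01SB
```

The extseqno/extrba values above are assumptions for illustration; always take them from the current write position of the upstream process.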

Lesson: the "info all" command in GoldenGate does not cover all circumstances when reporting lag. The best way is to find the read and write sequence of each GG process and determine the lag from those.

TIP 92#: Resolve ORA-01075

An SVC disk on a client's system crashed and as a result the Oracle database crashed. After resolving the disk issue, I was getting an ORA-01075 error in SQL*Plus when trying to start up the DB.

> sqlplus '/ as sysdba'
SQL*Plus: Release - Production on
Copyright (c) 1982, 2007, Oracle. All Rights Reserved.

ORA-01075: you are currently logged on

Checking the processes, nothing from the oracle user was running.
Eventually, it turned out that the issue was related to stuck shared memory segments and semaphores.
To check whether this is the case for you, follow the steps below:

1. Log in as oracle
(or the owner of the Oracle software)

2. Make sure no processes are running as oracle
(assuming oracle is the owner of the Oracle binaries)

3. Run ipcs
* Check for any shared memory segments owned by oracle
* Check for any semaphores owned by oracle

4. Remove any shared memory segment or semaphore
owned by oracle
* ipcrm -m shmid_from_step_3
(to clean up a shared memory segment)
* ipcrm -s semid_from_step_3
(to clean up a semaphore from step 3)

5. Retry sqlplus
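The ipcs/ipcrm steps above can be sketched as a dry-run script (hypothetical helper, not from the original tip; it only prints the ipcrm commands so you can review them before running anything):

```shell
#!/bin/sh
# Print the ipcrm commands that would remove IPC resources owned by a
# given user (default: oracle). Nothing is removed; review the output
# and run the printed commands yourself.
USER_NAME="${1:-oracle}"

# On Linux, "ipcs -m" / "ipcs -s" print the owner in the third column.
ipcs -m | awk -v u="$USER_NAME" '$3 == u { print "ipcrm -m " $2 }'
ipcs -s | awk -v u="$USER_NAME" '$3 == u { print "ipcrm -s " $2 }'
```

Note the column layout of ipcs varies between platforms, so check that the owner really is the third column on your system before trusting the output.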

TIP 91#: Refresh of materialized view errors with overflow

When I showed up at the office a couple of weeks ago, a client called and complained that a materialized view refresh had failed; this refresh had been working without errors for years. The client confirmed there was no code change in the materialized view or any underlying tables. Refreshing the materialized view manually also failed within a couple of seconds with the same error:

ERROR at line 1:
ORA-12008: error in materialized view refresh path
ORA-01426: numeric overflow
ORA-06512: at "SYS.DBMS_SNAPSHOT", line 2545
ORA-06512: at "SYS.DBMS_SNAPSHOT", line 2751
ORA-06512: at "SYS.DBMS_SNAPSHOT", line 2720
ORA-06512: at line 1

At first the error looked like it was caused by data in the underlying tables that could not fit into the columns defined for the materialized view. As a test, I took the materialized view's SELECT statement and populated a temporary table with it, to see if any column data type in the temporary table differed from the materialized view's.
Interestingly enough, all columns in the temporary table and the materialized view were identical in data type and length, so it could not be a data type overflow?!
Tracing the refresh (event 10046) did not reveal anything either.

Eventually I found that the issue was not the materialized view itself but an overflow in the data behind dba_tab_modifications, which keeps track of inserts/updates/deletes for each object.
Since the materialized view is refreshed on a regular basis, the counters Oracle keeps for its changes had grown very large, and during the refresh they overflowed. MOS note 882997.1 implicitly reports this issue.
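As a quick arithmetic check (values taken from the query output below; whether a 32-bit limit is exactly the mechanism is my assumption, not something the MOS note spells out), both counters sit just past the unsigned 32-bit maximum:

```shell
# inserts counter from sys.mon_mods_all$ vs. the unsigned 32-bit maximum
INSERTS=4295948847
UINT32_MAX=4294967295   # 2^32 - 1
if [ "$INSERTS" -gt "$UINT32_MAX" ]; then
  echo "counter exceeds 32 bits"
fi
```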

Oracle released a one-off patch for this issue, but in the interim, gathering stats on the materialized view resets the counters in the dictionary, after which the refresh runs successfully:

select inserts, deletes from sys.mon_mods_all$ where obj# = &object_id;

   INSERTS    DELETES
---------- ----------
4295948847 4283835810

exec dbms_stats.gather_table_stats('&owner','&mv_name',estimate_percent=>5,cascade=>false);

select inserts, deletes from sys.mon_mods_all$ where obj# = &object_id;

no rows selected

For your info, this issue was reported in 11gR1 (