TIP #77 : OC4J failed to start after applying Oct2008 security patch

After applying the Oct 2008 CPU patch for a client on Oracle Application Server 10gR2, OC4J_Security on the infrastructure tier failed to start. There was not much information in the logs; the only thing shown after a couple of minutes was:


--> Process (pid=0)
oid dependency failed
failed to start a managed process because a dependency check failed

Interestingly, OID itself started without any issue.
This issue can occur when OID runs on a port lower than 1024 and $ORACLE_HOME/bin/oidldapd does not have the right permissions. The file should be owned by root. To fix the permission issue, run $ORACLE_HOME/root.sh from the infrastructure home.

Happy CPU patching !

TIP 76#: How to find peeked value of bind variables

Oracle generally recommends using bind variables, but with care. Starting with Oracle 9i, Oracle introduced the concept of bind variable peeking: the first time a SQL statement containing bind predicates is parsed, the optimizer looks at the value of the bind variable and uses that value to create the execution plan for the query.
This plan is then stored and used for all future executions, regardless of changed bind values. If the initial bind value is not representative of the values provided in future executions of the query, then, although the access path is good for the initial value, future executions may perform poorly. This can result in suboptimal query performance for some iterations of the query.

As I said, bind variable peeking is TRUE by default in 9.0 and above. This feature can be disabled by setting _OPTIM_PEEK_USER_BINDS to FALSE.

In this post, I demonstrate how to find the peeked value of a bind variable. In cases of poor performance, this helps you see whether Oracle executed the query based on the right value. (Sample was run in

Test 1
In this test, a query with a bind variable was run against a test table.
When the bind_data column in v$sql is not null, the SQL has bind variables.
v$sql_bind_capture is used to find which bind value Oracle has peeked.
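The lookup described above can be sketched with a query along these lines (the &sql_id substitution variable is hypothetical; take the real value from v$sql for your own statement):

```sql
-- Show captured (peeked) bind values for a given cursor
select name, position, datatype_string, value_string
from   v$sql_bind_capture
where  sql_id = '&sql_id'
and    was_captured = 'YES';
```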

Test 2
In this test, a new value (100) was passed to the same query, but as shown, Oracle still has -1 as the peeked value.

Test 3
In this test, the cursor for the test table was flushed from the shared pool. This forced Oracle to peek the value again; this time it peeked 100.

TIP #75 : Most recent asked questions (2)

This is the second collection of readers' questions and my answers.
Thanks to all the readers who showed interest by asking questions.
I should apologize for my recent delay in responding (I was away for a while). I will try to catch up; wait for more posts and answers.

Question (Date : July 26th,2008)

How to reset the statistics in Oracle 10g?


You can see the history of stats operations by running the following :

select * from DBA_OPTSTAT_OPERATIONS;

Also, you can see the history of table statistics by running the following :
select * from DBA_TAB_STATS_HISTORY;

Old statistics can be restored with the following code :
exec dbms_stats.restore_table_stats({owner} ,{table name}, {timestamp});
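For example, to roll a table back to the statistics it had one day ago (the SCOTT.EMP names here are illustrative):

```sql
exec dbms_stats.restore_table_stats('SCOTT', 'EMP', systimestamp - interval '1' day);
```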

Question (Date : July 20th,2008)

In my database, open_cursors is set to 500 in the parameter file, but when I look at v$open_cursor while running DML operations it shows 2800. Any suggestions to increase performance?


v$open_cursor takes into account :
implicit cursors used
distinct explicit cursors opened
dynamic cursors PARSED and NOT CLOSED
As a result, it shows accumulated data for cursors that were PARSED but NOT CLOSED over time.

Also select value from v$sysstat where statistic# = 3 takes into account :

implicit cursors used
distinct explicit cursors opened
dynamic cursors OPENED.

To fix the situation :
Make sure explicit cursors are closed with CLOSE.
Make sure to use bind variables to reduce the number of required cursors.
If you cannot use bind variables, you may benefit from a proper setting of the cursor_sharing parameter.
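To see which sessions are holding the most cursors, a query along these lines may help (a sketch against v$open_cursor):

```sql
select sid, count(*) open_cursor_count
from   v$open_cursor
group  by sid
order  by open_cursor_count desc;
```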

Question (Date : July 20th,2008)

Can we restore the definition of the production database (without data) with RMAN?


As you know, the definition of the database is called metadata and is kept in the SYSTEM (and, in 10g and above, SYSAUX) tablespace.
What you need to do is restore/recover a tiny database from the production backup with only the SYSTEM/UNDO (and SYSAUX if 10g or above) tablespaces, and then use views like dba_source, dba_tables, dba_indexes, dba_objects, etc. to retrieve the object definitions.
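Once that tiny database is open, DBMS_METADATA is another way to pull full DDL out of the dictionary; for example (the object and schema names are illustrative):

```sql
set long 100000
select dbms_metadata.get_ddl('TABLE', 'EMP', 'SCOTT') from dual;
```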

Question (Date : July 20th,2008)

I have a DBA job which stopped last week without any reason. I can see the job in the dba_jobs table and its next date is set to the 9th of July, which is the last date it ran. I can't change the next date, and I can't manually run it. It's a stored procedure that creates emails and sends them to users. I tried: exec dbms_job.change(14006, NULL, NULL, 'SYSDATE + 1/24'); exec dbms_job.run(14006); but nothing happened. Any idea?


It could be that job_queue_processes is zero.
Please run the following to find its value:

select name,value from v$parameter where name like 'job_queue_processes';

If it is zero, you need to change it with the following command.

alter system set job_queue_processes=10;

To be safe across the next database bounce, also make sure the pfile/spfile contains this change.

Question (Date : July 20th,2008)

I want to put a condition on a column so that the data must be in the form of 'VA0001'. How do I do this? Please help me.


I assume you want a column to have the constant value 'VA0001'. Here are the options you may use:
1. Add check constraint.
Example :

drop table tbl1;
create table tbl1 (id1 number, id2 varchar2(20));
alter table tbl1 add constraint check_id2 check (id2 in ('VA0001'));
insert into tbl1 values(10,'VA0001');
insert into tbl1 values(20,'AAAA');
--- ORA-02290: check constraint (AVAIL.CHECK_ID2) violated

2. Define a default value for the column (doable if you have control over the insert/update SQL statements).

drop table tbl2;
create table tbl2 (id1 number, id2 varchar2(20) default 'VA0001');
insert into tbl2(id1) values(10);
select * from tbl2;

10 VA0001

Question (Date : July 3rd,2008)

My replication push is defined as sysdate + 1/(60*60*24), but users complain that it runs every 2 minutes. DBA_JOBS has LAST 2008-07-02 00:50:53, NEXT 2008-07-02 00:50:54, NOW 2008-07-02 00:52:33. Why a 2-minute gap?


The interval definition is right; it should run every second.
However, if the executed code takes more than a second, the current run overlaps the next scheduled run. To make sure the scheduler can run every second, you need to make sure the code completes, under any circumstances, in less than a second.

Question (Date : May 22nd, 2008)

How can I get a rough estimate of how much space (in gigabytes) could be saved by exporting and re-importing a database?
Also, if possible, please include something that gives (even rough) estimates of the downtime we would require to perform the task.


If the whole purpose of the export/import is to reorganize tables to save space, I would offer the following as a better option.

- Check metalink note 262353.1 for finding out a way to shrink eligible tables. In 10g this could be done by running a simple shrink command.
- Check http://oradbatips.blogspot.com/2007/01/tip-21-shrink-datafiles.html on how to shrink datafiles.

Question (Date : May 22nd, 2008)

I have Oracle 10g running on an Integrity box with two single-core CPUs. Oracle is only using one of them. Any ideas?


Distribution of processing across CPUs is handled by the OS, not Oracle. The parallel query option is not required for this distribution of load to occur. If you are running a SINGLE query, the OS cannot dispatch it to multiple processors without parallel execution, but that doesn't mean both processors won't be active, since operating system processing may be occurring on one of the CPUs. If you want a SINGLE query to take advantage of multiple CPUs, you need parallel execution, which is only licensed with Oracle Enterprise Edition. Parallel execution can also improve performance on hardware configurations without multiple processors, although multi-CPU systems are where the biggest gain is achieved.

Question (Date : April 18th,2008)

Is it necessary to change the default port in the listener file in 10g? Is changing it an app killer, and does changing it really increase security?


No, it is not necessary. However, be aware that in 10g you do not need to put the listener in password-protected mode, because by default only the Oracle software owner can bounce the listener.
Also be aware of a bug in the 10g listener which may chew up a whole CPU. To fix this, you need to put the following entry in the listener.ora file.


Question (Date : April 17th,2008)

I'm running expdp and impdp of a schema which is 170 GB; there is enough space in the target database. The export took 1:15 min, but the import has been hanging for several hours. The schemas have different tablespace names, so my first import failed because it was searching for the source's tablespace name. I then generated a SQLFILE for the creation of the tables and indexes, changed the tablespace name, and created the tables manually. When I ran impdp again, it did not import the data into the tables, and according to the log file it hangs when it is about to import the statistics. Please tell me a solution for this problem; because of it I'm not able to proceed with the job. Thank you in advance.


Two different things have been asked in this question.
To use a different tablespace in the target database than in the source database, use REMAP_TABLESPACE during the import.
On the performance issue, import is always slower than export. However, when you run an import, you may consider the following :

- Import the structure first with no indexes, then import the data (CONTENT parameter).
- During the import, check v$session_wait to find out which events the sessions are waiting on and where the bottleneck is. Sometimes PGA, sort_area_size, etc. may need to be tuned.
- If the tables are big, you are better off importing with DIRECT PATH. Use the ACCESS_METHOD parameter during import.
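A minimal sketch of such an import follows (the directory, dump file, schema, and tablespace names are hypothetical). Since the reported hang is at the statistics step, excluding statistics from the import is also worth trying:

```shell
impdp system/password@targetdb \
  directory=DP_DIR dumpfile=schema.dmp \
  schemas=APPUSER \
  remap_tablespace=SRC_TBS:TGT_TBS \
  exclude=statistics
```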


I have a question regarding the Oracle CBO. Our environment is Oracle 10g on Windows with Siebel on the application side. Statistics gathering was scheduled on a regular basis; after we faced some performance issues, stats gathering was turned off entirely. Since the stats gathering process was switched off, the data has grown considerably. Now the developers want to start the dbms_stats gathering package again. My contention is that if we analyze and gather statistics now, we will end up with skewed data, and performance might get worse than before (metalink doc id Note:44961.1). What is your thought on this? I would really appreciate your time & advice.


I would do the following :
- Export the stats for the existing schema.
- Save a statspack report from before the stats gathering.
- Gather statistics.
- If it is 9i, keep monitoring on for the tables.

If, after gathering, there is a performance issue for most queries, drop the new stats, import the stats from step 1, and tune the SQL.
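The first step above can be sketched as follows (the schema and stat-table names are illustrative):

```sql
-- Create a stat table and back up the current schema stats into it
exec dbms_stats.create_stat_table('SCOTT', 'STATS_BACKUP');
exec dbms_stats.export_schema_stats(ownname => 'SCOTT', stattab => 'STATS_BACKUP');
-- If the new stats cause trouble later, bring the old ones back:
-- exec dbms_stats.import_schema_stats(ownname => 'SCOTT', stattab => 'STATS_BACKUP');
```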


How do you copy an Oracle home from Windows, load it on a different server, and bring up the database? Is there any relinking on Windows, or do I need to update the Windows registry? Basically, what are the steps for cloning a database from Windows 2003 to another Windows 2003 server?


The following two options are feasible :
- Use Grid Control for the clone.
- Use the OUI for the clone.

Each option has its own pros and cons; however, I think the Grid Control clone feature is the easiest and the best.


After applying patch p6832736_101210_LINUX.zip on OAS Forms and Reports services, my application that uses Forms WebUtil no longer works. After clicking to load a WebUtil form, only a white screen appears and the application hangs. I must close all pages and log in again.
Webutil 1.06
OAS Forms and Report services version
DBS 10g v.


This is a known issue reported in metalink Note:377609.1. Fortunately, the workaround is simple.


I take a daily RMAN backup of a 360 GB database. Usually it takes 4-5 hours, but nowadays, every other day, it takes more than 24 hours. Please help me with this issue.


Please follow the steps below :

1. Make sure there has been no major change in the backup size. You can check this via rc_backupset.

2. Check the archivelog generation for the days on which RMAN takes long and compare it with other days; RMAN may simply have more data to back up.
You can use the following script :

select to_char(sysdate,'DAY') DAY, sum(blocks*block_size)/1024/1024/1024 GB, count(*) Archivelog_number from v$archived_log where FIRST_TIME >= trunc(sysdate,'DD') group by to_char(sysdate,'DAY');

3. To check RMAN performance, you can use V$BACKUP_ASYNC_IO or V$BACKUP_SYNC_IO. These views show performance information about ongoing and recently completed RMAN backups. The biggest benefit of these views is the EFFECTIVE_BYTES_PER_SECOND column in rows where the TYPE column is set to AGGREGATE. This column represents the rate at which the objects are being backed up or restored, in bytes per second.
(Refer to metalink 237083.1)


I forgot my job ID. I want to stop the job. How can I do that?


If the job is running, the best way to stop it is to remove it from dba_jobs. Here are the steps :

1. Find the job number in dba_jobs. (select job, what from dba_jobs;)
2. Remove the job, using the job value from step 1. (exec dbms_job.remove(&job);)
3. Finalize the removal with a commit. (commit;)


The developers keep telling me that my queries are getting slow. As a fresher Oracle DBA, what are the things I need to concentrate on?


You need to find the bottleneck. To keep it simple, here are the first steps that I recommend.

1. Take a SQL statement from the developer.
2. Create two connections to the database.
3. In the first session, run the following :

sqlplus userid??/password??@connect_string
select sid from v$mystat group by sid; --> Keep the result.
--- Run query

4. In the second session, run the following :

sqlplus '/ as sysdba'
select sid,
       'Event=' || event || '=>' || parameter1 || '=' || p1 || ',' ||
       parameter2 || '=' || p2 || ',' || parameter3 || '=' || p3 Event,
       wait_time/100 wait_in_second
from   v$session_wait ses,
       (select parameter1, parameter2, parameter3, name from v$event_name) evnt
where  ses.event = evnt.name
and    sid = &first_session_sid
order  by wait_time desc;   --- Replace &first_session_sid with the sid value from step 3.

-- The result shows what the SQL statement is waiting for.
-- Repeat the above SQL and save the result until session 1 is finished.
You can also check the execution plan by following the steps below.

sqlplus userid???/password ???
set pages 1000
set autotrace traceonly
--- Run sql statement.

With the above information you will be able to find the bottleneck and suggest a remedy accordingly. Please send me the results if you need my help.


I want to know: whenever we create an Oracle object (index, table), is it necessary to gather stats for the server to choose the best execution plan? As per my knowledge, if the object structure changes (say by 15%) we need to regather stats. Is this true or not?


It depends on which version of Oracle you are running.
From 10g onwards, Oracle gathers stats at object creation time.
For 9i, the best way is to enable monitoring on objects (alter table table_name monitoring;) and let Oracle track object modifications. For stats gathering, if you run dbms_stats with GATHER AUTO, Oracle knows which objects need statistics.
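A 9i sketch of that combination (the table and schema names are illustrative):

```sql
-- Track modifications on the table, then let GATHER AUTO decide what is stale
alter table scott.tbl1 monitoring;
exec dbms_stats.gather_schema_stats(ownname => 'SCOTT', options => 'GATHER AUTO');
```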


How can I get into the SYS account if I have forgotten my password?


As the installation owner (probably oracle), log in to the box and run the following. (Insert the desired password in place of password??.)

sqlplus /nolog
connect / as sysdba
alter user sys identified by password??;

You can also reset the password from any user who has the SYSDBA privilege.


I face an issue in Oracle 10g ( on Linux 64-bit). While executing a query using the CONNECT BY clause, I find that it executes faster if run directly from TOAD/sqlplus. However, when I put the same query in a function, it slows down considerably.
Typically, the direct execution takes less than a second while the function takes more than 30 seconds. The query is like the one below:
SELECT DISTINCT ner.neh_parent_id
  FROM nw_element_relations ner
 WHERE parent_ne_type = 'BTS'
   AND (job_dep_pkg.fn_job_last_ner_image(227987, ner.job_id, ner.ner_id, ner.active) = 'Y' OR ner.job_id = 227987)
   AND ner.action != 'R'
CONNECT BY neh_child_id = PRIOR neh_parent_id
   AND (job_dep_pkg.fn_job_last_ner_image(227987, ner.job_id, ner.ner_id, ner.active) = 'Y' OR ner.job_id = 227987)
   AND ner.action != 'R'
   AND child_ne_type != 'BTS';

I have even set _old_connect_by_enabled = TRUE to avoid the known issue of poor CONNECT BY performance in 10g.


Before going into whether or not the SQL statement itself needs tuning, the following are recommended.

1. Minimize switching between the SQL engine and the PL/SQL engine : try to execute most statements as plain SQL if you can. Switching between the SQL and PL/SQL engines is costly, and you will see a performance side effect if you switch between these two engines often. Calling SQL from inside PL/SQL is recommended only when you run out of options.

2. Make sure the PL/SQL optimizer level is set properly. Check the value of plsql_optimize_level.

3. Check http://oradbatips.blogspot.com/2007/09/tip57-how-to-use-plsql-profiler.html to find out how to troubleshoot PL/SQL code.

4. I see that a package is called in the START WITH clause. There is a published bug in Oracle where a CONNECT BY query containing a subquery in the START WITH clause may be slow (Bug 5211863). It is supposed to be fixed in

Please send the content of the PL/SQL function and of job_dep_pkg.fn_job_last_ner_image if you need more help.

TIP 74# : Can not run startup nomount

After an upgrade from to for a client, I encountered the following error :

SQL> startup nomount
ORA-00940: invalid ALTER command

Interestingly, after removing the spfile and creating a simple pfile, the issue was resolved.
It turned out to be caused by a hidden parameter which enables a bug fix that does not exist in the new release. The parameter was _fix_control=5385629:on, which turns on the fix for bug 5385629; since that bug does not exist in the new release, having this parameter in the spfile caused a problem, as Oracle cannot enable the fix there.

If you get this error, please make sure the spfile is good and healthy and does not contain this parameter.

TIP 73#: How to have execution plan with run-time

As you may know, the normal execution plan does not give you the run-time of each step; the Time column it reports is not the real elapsed time of the SQL statement.
There are two ways to find the run-time of a SQL statement, both of great use in SQL tuning :

Option 1 :

Trace the SQL statement with statistics set to ALL and extract the run-times from the trace file.

Obviously this is the traditional/old way; it uses resources and takes time to scan the trace file for the run-times.

Option 2 :

Use the gather_plan_statistics hint in the SQL statement, which lets Oracle gather most of the stats required for SQL tuning.

This option is simpler and faster.
I demonstrate it with the following example :

Step 1 : Enable sqlplus trace :

To prevent the SQL statement from printing its results to the screen, it is better to enable autotrace so the query just runs and reports statistics.

set autotrace traceonly stat;

Step 2 : Run query with hint:

My sample runs a simple query against test1 table.

SQL> desc test1
Name Null? Type
----------------------------------------- -------- ----------------------------

SQL> select /*+ gather_plan_statistics */ * from test1 where id=:myid;

1 recursive calls
0 db block gets
6 consistent gets
0 physical reads
0 redo size
648 bytes sent via SQL*Net to client
488 bytes received via SQL*Net from client
2 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)
2 rows processed

Step 3 : Find the SQL statement in the shared pool

SQL> select sql_id,executions from v$sql where sql_text='select /*+ gather_plan_statistics */ * from test1 where id=:myid';

SQL_ID        EXECUTIONS
------------- ----------
82u1sypmr2k1f 1

Step 4 : Find the execution plan with all run-times :

SQL> select * from table(dbms_xplan.display_cursor('82u1sypmr2k1f',null,'ALL IOSTATS LAST'));

As you can see, the A-Time column shows the actual elapsed time of each step in the plan, which is definitely helpful for finding the bottleneck and seeing where the SQL statement spends most of its time.

TIP 72# : Restore old statistics !

A client called me yesterday complaining that users were experiencing performance degradation even though there had been no major change on the database.
After some investigation, I found that the automatic 10g stats gathering was on, and from DBA_OPTSTAT_OPERATIONS and DBA_TAB_STATS_HISTORY I saw that the last stats gathering was close to the time users started complaining about performance. So my first guess was that the new stats were the culprit. To prove this guess, I was lucky that the database was 10g, so old versions of the stats are saved automatically.
I went with the idea that if the new stats had resulted in a suboptimal execution plan, the old stats could be restored to fix the issue.

To restore the statistics, I ran exec dbms_stats.restore_table_stats({owner}, {table name}, {timestamp}); which replaced the new stats with the old ones.
After restoring the old stats, the performance problem was resolved.
Also, to keep the stats on this table intact, I locked the stats on that specific table for a while.
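Locking the stats can be done as follows (the schema and table names are illustrative):

```sql
exec dbms_stats.lock_table_stats('SCOTT', 'EMP');
-- Later, to let stats gathering touch the table again:
-- exec dbms_stats.unlock_table_stats('SCOTT', 'EMP');
```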

TIP 71#: Number of concurrent users

One of my clients got a renewal invoice from Oracle whose price tag was based on 550 users. I got a call from the client, who believed that having 550 concurrent users was beyond imagination.
Is there any way to check the number of concurrent users?

There are two ways to check the maximum number of concurrent users.

1. sessions_highwater in v$license shows the maximum number of concurrent sessions since the last database startup.

2. If you are looking for history going back further than the last startup, the alert log shows the maximum number of concurrent users after each shutdown.
Here is a sample :

Shutting down instance (normal)
License high water mark = 3
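Option 1 boils down to a one-line query:

```sql
select sessions_current, sessions_highwater from v$license;
```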

TIP 70# : Performance issue on restoring database with Veritas netbackup

A client asked me to refresh a test database from a production including data and structure. Production database was 800GB and RMAN level 0 backup was taken every day.
As a first step, I tried to restore a 4 GB datafile from tape to get a better picture of how long the whole exercise would take. The result was surprising!
Restoring the 4 GB datafile took 15 minutes. Simple math showed that the whole restore would take more than 46 hours!
This time was obviously not acceptable to the client.
Searching further, I found that the whole database backup takes only 7 hours.
My first guess was to suspect the Veritas catalog and how it looks up backups.
I finally found metalink note 309891.1, which strongly recommends a specific backup format to improve RMAN restores with Veritas NetBackup.
Oracle recommends putting %t (lower case) at the end of the backup format (only at the end) to ensure uniqueness and optimal performance.
Interestingly, when the backup format was changed to 'datafile208__%d_%u_%s_%T_%t', the whole restore was done in 8 hours.
This workaround is applicable to to

TIP 69# : Estimate TEMP usage without running SQL

I had been meaning to post this tip for a while, but each time I got distracted and forgot. Yesterday a client called me with an issue with the TEMP tablespace. The client was running a big merge statement on a warehouse database, but the job kept failing because Oracle was unable to extend the TEMP tablespace. As a result, the client asked me how to determine the required TEMP usage without running the SQL statement.

My immediate answer was :
- Log in to the database as the same user.
- Get the execution plan.
- Watch the TempSpc column in the execution plan.

The client called me back and said "there is no TempSpc column in the execution plan." (By the way, the client runs the latest 10g version (

There are basically two reasons that TempSpc may not appear in execution plan.

Reason 1 : The SQL statement does not need the temporary tablespace, and the dedicated memory (PGA) is enough to run all operations.

Reason 2 : The plan table is old and has no column to hold temp space usage.

To update the plan table, run the following :
- drop table plan_table;
- @$ORACLE_HOME/rdbms/admin/utlxplan.sql

(If you are using autotrace for getting the execution plan, you need to run @$ORACLE_HOME/sqlplus/admin/plustrce.sql as SYS and then grant PLUSTRACE to the user.)

Here is an example to demonstrate the TempSpc column in an execution plan.

The execution plan for select * from dba_tables has no TempSpc column, because all operations fit in the available memory.

In the execution plan for select * from dba_tables order by table_name, the TempSpc column appears, since sorting needs more space than the available PGA.
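The second case can be reproduced with a plain explain plan (assuming the plan table is current):

```sql
explain plan for select * from dba_tables order by table_name;
select * from table(dbms_xplan.display);
-- The SORT ORDER BY line should carry a TempSpc value if the sort spills past the PGA
```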

TIP # 68 : Export hangs, Shutdown immediate hangs !!!

One of my clients runs Oracle 9i ( and Oracle 10g ( on Solaris 64-bit in the same box. Oracle 10g is mainly used for the recovery catalog and the Grid repository.
All my efforts to take an export of the recovery catalog schema failed. With the traditional export tool (exp), the export hung while exporting "Cluster Definitions". Data Pump export/import (expdp and impdp) ran forever.
The Data Pump sessions were waiting on "wait for unread message on broadcast channel", which is an idle wait. The export sessions were waiting on "cursor: pin S wait on X". To fix the issue, I turned off mutexes by setting a hidden parameter. (I will explain mutexes in 10g in my next post.)
To make this change effective, the database had to be bounced.
Surprisingly, shutdown immediate also hung; the alert log reported that Oracle was waiting for DBWx (Database Writer) to shut down!
I ended up doing a shutdown abort and starting the database again, though bouncing the database did not fix the situation.
The first export after bouncing the database failed with ORA-600 in the alert log, showing the following error lines while exporting cluster definitions.

EXP-00056: ORACLE error 24324 encountered
ORA-24324: service handle not initialized
EXP-00056: ORACLE error 24324 encountered
ORA-24324: service handle not initialized
EXP-00000: Export terminated unsuccessfully

All subsequent runs of exp and expdp hung the same as before the bounce.
Eventually, I found the problem was an improper setting of LD_LIBRARY_PATH and LIBPATH. In 10g, these variables must point to $ORACLE_HOME/lib as the first entry. Setting them properly fixed everything, including the shutdown problem and the exp/expdp problem. (For more info, please refer to metalink note 351650.1.)

TIP #67 : Most recent asked questions

As you may have noticed, a couple of weeks ago I opened a new section on my blog which allows readers to ask any technical question about Oracle database administration or Oracle application server administration. This opens a gate for readers to ask for what they want to see on this blog.
I should say I got a good reception from readers, which encourages me to keep this section. In this post I have gathered the 10 latest questions plus answers.
The answers were mostly prepared by myself and a friend of mine who is a senior Oracle DBA.
Feel free to send me questions or post comments on the questions/answers.

Question 10

How to reset report server password


1. Check httpd.conf in %ORACLE_HOME%\Apache\Apache\conf.
See if you have something like the following :

AuthUserFile {path}\pwfile
Require user {username}

2. Take a copy of the file referenced in AuthUserFile.
3. Change the password by running the following command :

%ORACLE_HOME%/apache/apache/bin/htpasswd -c pwfile username???
(username??? is the {username} value from step 1.)

4. Restart Apache. New password should be effective.

This works as long as the report server is not using OID for its authentication.

Question 9

How can I find out who (I mean which OS user) created a trigger in an Oracle database?


ctime in sys.obj$ shows the creation time of the trigger.
With the object creation time in hand, you have the following options to find the OS user :

1. Use the audit trail, if auditing has been enabled in the database.
2. Use listener.log to find out who connected to the database at that time. (This works if the user connected via the listener.)
3. Check the system log/messages (your SA may be able to help) to figure out who logged in during that period.
4. If the session is still active and the shared pool has not aged out the 'create trigger' statement, you can use v$sqlarea and v$session to find the OS user. (However, I rate this a low chance.)

I ran a comprehensive test of LogMiner, and while mining the archived log from the trigger creation time you can certainly see the changes in sys.source$, but you cannot get any changes for v$session or its fixed table (X$KSUSE), since changes to that table exist only in memory. As a result, I do not see any viable solution except the options above. Going forward, you may want to think about enabling auditing or having a logon trigger if you'd like to catch this kind of activity.

Question 8

How can I reset the HWM in 9i without truncating the table or exp/imp? I have HP OpenView message history tables that become quite large during a data surge but drop to very few records later.


I suggest using DBMS_REDEFINITION. Here is a simple sample demonstrating how to reset the HWM on scott.tbl1.

create table tbl2 as select * from tbl1 where 1=2;
EXEC DBMS_REDEFINITION.START_REDEF_TABLE('SCOTT', 'tbl1', 'tbl2', NULL, 2);
----- Copy dependencies
declare
  blk_out pls_integer;
begin
  blk_out := 0;
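For completeness, here is a fuller sketch of the flow. Note that COPY_TABLE_DEPENDENTS only exists from 10g; on 9i (as in this question) the indexes, triggers, constraints, and grants must be recreated on the interim table by hand before finishing. The table and schema names are illustrative.

```sql
create table scott.tbl2 as select * from scott.tbl1 where 1=2;

exec dbms_redefinition.start_redef_table('SCOTT', 'TBL1', 'TBL2', null, 2);

-- 10g and above: copy indexes, triggers, constraints, and grants automatically
declare
  num_errors pls_integer;
begin
  dbms_redefinition.copy_table_dependents(
    uname      => 'SCOTT',
    orig_table => 'TBL1',
    int_table  => 'TBL2',
    num_errors => num_errors);
end;
/

exec dbms_redefinition.finish_redef_table('SCOTT', 'TBL1', 'TBL2');
```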

Question 7

We have 1 master site and 2 replication sites. I tried to recreate the whole materialized view at one site and didn't touch the other site. Both sites were and are in FAST refresh mode. The one which was NOT recreated is now giving me the error ORA-12034: materialized view log on %s.%s younger than last refresh. I tried a complete refresh on the other site (the one that was not recreated); it was successful, but I could not perform a fast refresh afterwards.


I asked a friend who is a senior DBA with replication experience to answer this. Here is an excerpt of his thoughts :
"It seems that during the recreation of the first MV site (perhaps through the Replication Manager GUI), the materialized view log on the master site was recreated, and as a result its creation time is later than the mtime of the MVs on the second site. If this is the problem, it should be resolved after a complete refresh on the second site and should not persist any more. If you are still not able to do a fast refresh even after a complete refresh, please provide some more information so I can recreate the scenario."

Also, I would suggest making sure certain DDLs have not been run on the master site. Some DDLs (alter partition, drop partition) can modify master tables without being recorded in the materialized view log.

Question 6

sqlplus /nolog
conn / as sysdba
This gets me into the SYS account without knowing its password. Is there any way to force a password to always be required?


If your OS user is part of the Oracle OS group, it is allowed to log in to the database without any password. Two options to change this situation :

1. Change the OS group of your OS user to be different from the primary group of the Oracle software installation.
2. Change the $ORACLE_HOME/rdbms/lib/config.[cs] file and define a new DBA group which is different from your OS user's group (example : #define SS_DBA_GRP "mygroup"). You need to relink the Oracle executables for this change to take effect. (See metalink note 18089.1.)

Question 5

How can I see the parameters being passed to a procedure called from the application, using trace files?


You basically have several options. Here are some :

1. If the values are passed as literals, you can check v$sqlarea to find the SQL text; this way you can see the full SQL statement with the parameter values. (Example : select SQL_TEXT from v$sqlarea where upper(SQL_TEXT) like '%PROC1%'; -- replace PROC1 with the procedure name.)

2. If the values are passed as bind variables, you can enable the 10046 event at level 4 to trace the session and capture the bind values.

3. You could also write a wrapper on top of the procedure to print out/save the parameter values and then call the main procedure.
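Option 2 can be sketched like this (run in the session you want traced):

```sql
-- Level 4 adds bind values to the trace; level 12 adds binds and waits
alter session set events '10046 trace name context forever, level 4';
-- ... have the application call the procedure here ...
alter session set events '10046 trace name context off';
-- Then look for the BINDS sections in the trace file under user_dump_dest
```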

Question 4

How can I call an executable file (i.e., a *.exe) from an Oracle stored procedure? Is that possible?


Yes, it is possible. However, I should confess that it is not as easy as normal PL/SQL code.
You can achieve it either with an external procedure call and the C language, or with Java.
Long story short, you need Java code that executes the .exe (via Runtime exec), and a PL/SQL wrapper function for that Java code. After granting the appropriate permissions to the database user, the user will be able to execute the .exe (executable) from PL/SQL code.
Hope this answers your question.

Question 3

I was wondering, though, about matching SQL to the index used. I am not sure which tables to query. This would be helpful when examining index performance. Currently I find the current SQL in memory and check the WHERE clause.


You need to check the execution plan of the SQL statement to see whether or not it uses any index. If the database is 10.2 onwards, the simplest way is to just run the SQL statement and get the plan with the following :

SQL> set linesize 150
SQL> set pagesize 2000
SQL> select * from table(dbms_xplan.display_cursor(null,null, 'ALL'));

If the database is older than 10.2, the simplest way is to explain the SQL statement :

explain plan for {sql_statement};
select * from table(dbms_xplan.display);

Obviously, there are other ways to check this situation; please let me know if you are interested in more details.
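As a quick sketch (table t and its index on id are hypothetical):

```sql
explain plan for select * from t where id = 10;
select * from table(dbms_xplan.display);
-- An INDEX RANGE SCAN or INDEX UNIQUE SCAN line in the plan means the index is used
```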

Question 2

I need to display/fetch the whole of the data in one of my columns, which is a CLOB. I tried, but it displays only some of the characters, not the full data. What should I do?


Just run the following PL/SQL block. (text?? should be replaced with the CLOB column and table? with the table name.)

set serveroutput on

declare
  cursor mycur is select text?? from table?;
  myvar       mycur%ROWTYPE;
  buffer      varchar2(32000);
  buffer_size integer := 32000;
  offset      integer := 1;
  len         number(8);
begin
  open mycur;
  fetch mycur into myvar;
  len := dbms_lob.getlength(myvar.text??);
  while offset < len loop
    dbms_lob.read(myvar.text??, buffer_size, offset, buffer);
    dbms_output.put_line(buffer);  -- display each chunk as it is read
    offset := offset + buffer_size;
  end loop;
  close mycur;
end;
/

Question 1

I need query assistance: I want to fetch records date-wise (for example, from 15th Jun 2007 to 15th Nov 2007) from an Oracle 10g DB.


Assumptions :
table_name : tbl1
data_field : f1_date with DATE type

select * from tbl1 where trunc(f1_date,'DD') between to_date('15-JUN-2007','DD-MON-YYYY') and to_date('15-NOV-2007','DD-MON-YYYY');