ON CALL DBA SUPPORT

— Database blog


SQL TRACE FOR STREAMS PROCESSES

Posted by ssgottik on 11/10/2013

STEP 01. GET THE SID OF THE PROCESS RUNNING IN THE DATABASE USING ONE OF THE QUERIES BELOW (as per your requirement):

— For the capture process:
SQL >  select SID, CAPTURE_NAME from v$streams_capture;
— For the propagation:
SQL > select QNAME, DESTINATION, PROCESS_NAME, SESSION_ID from dba_queue_schedules;
— For the apply process:
SQL > select SID, APPLY_NAME from v$streams_apply_server;

STEP 02. GET THE PID FROM THE SID :

SQL > select PID, p.PROGRAM from v$process p, v$session s where s.paddr=p.addr and sid=<SID obtained from queries above>;

STEP 03. BEGIN TRACING USING THE BELOW COMMANDS:

SQL > oradebug setorapid <PID from previous step>
Oracle pid: 259, Unix process pid: 99878, image: oracle@vmlinux1 (AS0B)
SQL > oradebug unlimit
SQL > oradebug Event 10046 trace name context forever, level 12

A trace file with the above PID in its name will be generated. In my case the trace file looks like orcl_as01_99878.trc
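If you are not sure where the trace file was written, you can ask oradebug directly, or query the diagnostic views (the v$diag_info query assumes 11g or later; on 10g, check the user_dump_dest parameter instead):

```sql
-- Full path of the trace file for the process attached with oradebug setorapid
SQL > oradebug tracefile_name

-- Trace directory for the instance (11g+)
SQL > select value from v$diag_info where name = 'Diag Trace';
```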

STEP 04. STOP TRACING USING BELOW COMMAND:

SQL > oradebug Event 10046 trace name context off


Posted in Oracle STREAMS | Leave a Comment »

STREAMS PARAMETER CHANGING STEPS

Posted by ssgottik on 17/09/2013

Log in as the STREAMS ADMIN user and then execute:

FOR CAPTURE :

exec dbms_capture_adm.set_parameter('<CAPTURE_NAME>', '<PARAMETER_NAME>', '<VALUE>');

FOR APPLY:
exec dbms_apply_adm.set_parameter('<APPLY_NAME>', '<PARAMETER_NAME>', '<VALUE>');
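As a concrete example, to change apply parallelism (the apply name STREAM_APPLY_A1 here is just an illustration; substitute your own apply name):

```sql
-- Run the apply process with 4 apply servers
exec dbms_apply_adm.set_parameter('STREAM_APPLY_A1', 'parallelism', '4');

-- Passing NULL as the value resets a parameter to its default
exec dbms_apply_adm.set_parameter('STREAM_APPLY_A1', 'parallelism', NULL);
```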

Posted in Oracle STREAMS | Leave a Comment »

ORA-01403 NO DATA FOUND in ORACLE STREAMS

Posted by ssgottik on 09/05/2011


The ORA-01403 error occurs when an apply process tries to update an existing row in the target database and the OLD_VALUES in the row LCR do not match the current values at the destination database.
Typically, one of the following conditions causes this error:
1. Supplemental logging is not enabled for the columns that require it at the source database. In that case, LCRs from the source database may not contain values for the key columns.
2. There is a problem with the primary key on the destination table: either no primary key is defined, or the primary key on the target table differs from the one on the source table.
3. There is a data mismatch between the source table and the target table.
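Conditions 1 and 2 above can be checked quickly from the data dictionary. A sketch, using SCOTT.EMP as a stand-in for the table reported in the error message:

```sql
-- On the source: which supplemental log groups cover the table?
select log_group_name, always
from dba_log_groups
where owner = 'SCOTT' and table_name = 'EMP';

-- On the target: is there a primary key, and on which columns?
select cc.constraint_name, cc.column_name
from dba_constraints c, dba_cons_columns cc
where c.owner = cc.owner
and c.constraint_name = cc.constraint_name
and c.owner = 'SCOTT' and c.table_name = 'EMP'
and c.constraint_type = 'P';
```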

TROUBLESHOOTING ORA-01403:
STEP 1: Where do you find ORA-01403 errors in Streams?
Query DBA_APPLY_ERROR in the target database (the database where the apply process runs) to get the list of objects or records that are out of sync or failing with ORA-01403 in the ERROR_MESSAGE column.
Run the below query:
SQL >  SELECT APPLY_NAME,
SOURCE_DATABASE,
LOCAL_TRANSACTION_ID,
ERROR_NUMBER,
ERROR_MESSAGE,
MESSAGE_COUNT
FROM DBA_APPLY_ERROR;

Check each error message to see what the error is and which command caused it. To find the root cause, use the PRINT_LCR procedure provided by Oracle:

STEP 2: Run the below code as the STRMADMIN user (Streams admin user), only once:
CREATE OR REPLACE PROCEDURE print_any(data IN ANYDATA) IS
tn VARCHAR2(61);
str VARCHAR2(4000);
chr VARCHAR2(1000);
num NUMBER;
dat DATE;
rw RAW(4000);
res NUMBER;
BEGIN
IF data IS NULL THEN
DBMS_OUTPUT.PUT_LINE('NULL value');
RETURN;
END IF;
tn := data.GETTYPENAME();
IF tn = 'SYS.VARCHAR2' THEN
res := data.GETVARCHAR2(str);
DBMS_OUTPUT.PUT_LINE(SUBSTR(str,0,253));
ELSIF tn = 'SYS.CHAR' THEN
res := data.GETCHAR(chr);
DBMS_OUTPUT.PUT_LINE(SUBSTR(chr,0,253));
ELSIF tn = 'SYS.VARCHAR' THEN
res := data.GETVARCHAR(chr);
DBMS_OUTPUT.PUT_LINE(chr);
ELSIF tn = 'SYS.NUMBER' THEN
res := data.GETNUMBER(num);
DBMS_OUTPUT.PUT_LINE(num);
ELSIF tn = 'SYS.DATE' THEN
res := data.GETDATE(dat);
DBMS_OUTPUT.PUT_LINE(dat);
ELSIF tn = 'SYS.BLOB' THEN
DBMS_OUTPUT.PUT_LINE('BLOB Found');
ELSE
DBMS_OUTPUT.PUT_LINE('typename is ' || tn);
END IF;
END print_any;
/
CREATE OR REPLACE PROCEDURE print_lcr(lcr IN SYS.ANYDATA) IS
typenm VARCHAR2(61);
ddllcr SYS.LCR$_DDL_RECORD;
proclcr SYS.LCR$_PROCEDURE_RECORD;
rowlcr SYS.LCR$_ROW_RECORD;
res NUMBER;
newlist SYS.LCR$_ROW_LIST;
oldlist SYS.LCR$_ROW_LIST;
ddl_text CLOB;
BEGIN
typenm := lcr.GETTYPENAME();
DBMS_OUTPUT.PUT_LINE('type name: ' || typenm);
IF (typenm = 'SYS.LCR$_DDL_RECORD') THEN
res := lcr.GETOBJECT(ddllcr);
DBMS_OUTPUT.PUT_LINE('source database: ' ||
ddllcr.GET_SOURCE_DATABASE_NAME);
DBMS_OUTPUT.PUT_LINE('owner: ' || ddllcr.GET_OBJECT_OWNER);
DBMS_OUTPUT.PUT_LINE('object: ' || ddllcr.GET_OBJECT_NAME);
DBMS_OUTPUT.PUT_LINE('is tag null: ' || ddllcr.IS_NULL_TAG);
DBMS_LOB.CREATETEMPORARY(ddl_text, TRUE);
ddllcr.GET_DDL_TEXT(ddl_text);
DBMS_OUTPUT.PUT_LINE('ddl: ' || ddl_text);
DBMS_LOB.FREETEMPORARY(ddl_text);
ELSIF (typenm = 'SYS.LCR$_ROW_RECORD') THEN
res := lcr.GETOBJECT(rowlcr);
DBMS_OUTPUT.PUT_LINE('source database: ' ||
rowlcr.GET_SOURCE_DATABASE_NAME);
DBMS_OUTPUT.PUT_LINE('owner: ' || rowlcr.GET_OBJECT_OWNER);
DBMS_OUTPUT.PUT_LINE('object: ' || rowlcr.GET_OBJECT_NAME);
DBMS_OUTPUT.PUT_LINE('is tag null: ' || rowlcr.IS_NULL_TAG);
DBMS_OUTPUT.PUT_LINE('command_type: ' || rowlcr.GET_COMMAND_TYPE);
oldlist := rowlcr.GET_VALUES('OLD');
FOR i IN 1..oldlist.COUNT LOOP
IF oldlist(i) IS NOT NULL THEN
DBMS_OUTPUT.PUT_LINE('old(' || i || '): ' || oldlist(i).column_name);
print_any(oldlist(i).data);
END IF;
END LOOP;
newlist := rowlcr.GET_VALUES('NEW');
FOR i IN 1..newlist.COUNT LOOP
IF newlist(i) IS NOT NULL THEN
DBMS_OUTPUT.PUT_LINE('new(' || i || '): ' || newlist(i).column_name);
print_any(newlist(i).data);
END IF;
END LOOP;
ELSE
DBMS_OUTPUT.PUT_LINE('Non-LCR Message with type ' || typenm);
END IF;
END print_lcr;
/

CREATE OR REPLACE PROCEDURE print_errors IS
CURSOR c IS
SELECT LOCAL_TRANSACTION_ID,
SOURCE_DATABASE,
MESSAGE_COUNT,
ERROR_NUMBER,
ERROR_MESSAGE
FROM DBA_APPLY_ERROR
ORDER BY SOURCE_DATABASE, SOURCE_COMMIT_SCN;
i NUMBER;
txnid VARCHAR2(30);
source VARCHAR2(128);
msgcnt NUMBER;
errnum NUMBER := 0;
errno NUMBER;
errmsg VARCHAR2(128);
lcr SYS.ANYDATA;
r NUMBER;
BEGIN
FOR r IN c LOOP
errnum := errnum + 1;
msgcnt := r.MESSAGE_COUNT;
txnid := r.LOCAL_TRANSACTION_ID;
source := r.SOURCE_DATABASE;
errmsg := r.ERROR_MESSAGE;
errno := r.ERROR_NUMBER;
DBMS_OUTPUT.PUT_LINE('*************************************************');
DBMS_OUTPUT.PUT_LINE('----- ERROR #' || errnum);
DBMS_OUTPUT.PUT_LINE('----- Local Transaction ID: ' || txnid);
DBMS_OUTPUT.PUT_LINE('----- Source Database: ' || source);
DBMS_OUTPUT.PUT_LINE('----Error Number: '||errno);
DBMS_OUTPUT.PUT_LINE('----Message Text: '||errmsg);
FOR i IN 1..msgcnt LOOP
DBMS_OUTPUT.PUT_LINE('--message: ' || i);
lcr := DBMS_APPLY_ADM.GET_ERROR_MESSAGE(i, txnid);
print_lcr(lcr);
END LOOP;
END LOOP;
END print_errors;
/
CREATE OR REPLACE PROCEDURE print_transaction(ltxnid IN VARCHAR2) IS
i NUMBER;
txnid VARCHAR2(30);
source VARCHAR2(128);
msgcnt NUMBER;
errno NUMBER;
errmsg VARCHAR2(128);
lcr SYS.ANYDATA;
BEGIN
SELECT LOCAL_TRANSACTION_ID,
SOURCE_DATABASE,
MESSAGE_COUNT,
ERROR_NUMBER,
ERROR_MESSAGE
INTO txnid, source, msgcnt, errno, errmsg
FROM DBA_APPLY_ERROR
WHERE LOCAL_TRANSACTION_ID = ltxnid;
DBMS_OUTPUT.PUT_LINE('----- Local Transaction ID: ' || txnid);
DBMS_OUTPUT.PUT_LINE('----- Source Database: ' || source);
DBMS_OUTPUT.PUT_LINE('----Error Number: '||errno);
DBMS_OUTPUT.PUT_LINE('----Message Text: '||errmsg);
FOR i IN 1..msgcnt LOOP
DBMS_OUTPUT.PUT_LINE('--message: ' || i);
lcr := DBMS_APPLY_ADM.GET_ERROR_MESSAGE(i, txnid); -- gets the LCR
print_lcr(lcr);
END LOOP;
END print_transaction;
/

STEP 3: Use the below code to check what is present inside a given local transaction ID:

DECLARE
lcr SYS.ANYDATA;
BEGIN
lcr := DBMS_APPLY_ADM.GET_ERROR_MESSAGE
(1, '<LOCAL_TRANSACTION_ID>');
print_lcr(lcr);
END;
/

If more than one table is involved in the out-of-sync problem, execute the code below to get the list of tables involved. Run it as the STRMADMIN user:
STEP 4: Run the below procedure creation only once as the STRMADMIN user:


CREATE OR REPLACE PROCEDURE print_objectname(lcr IN SYS.ANYDATA) IS
typenm VARCHAR2(61);
ddllcr SYS.LCR$_DDL_RECORD;
proclcr SYS.LCR$_PROCEDURE_RECORD;
rowlcr SYS.LCR$_ROW_RECORD;
res NUMBER;
newlist SYS.LCR$_ROW_LIST;
oldlist SYS.LCR$_ROW_LIST;
ddl_text CLOB;
BEGIN
typenm := lcr.GETTYPENAME();
IF (typenm = 'SYS.LCR$_DDL_RECORD') THEN
res := lcr.GETOBJECT(ddllcr);
DBMS_OUTPUT.PUT_LINE('object: ' || ddllcr.GET_OBJECT_NAME);
ELSIF (typenm = 'SYS.LCR$_ROW_RECORD') THEN
res := lcr.GETOBJECT(rowlcr);
DBMS_OUTPUT.PUT_LINE('object: ' || rowlcr.GET_OBJECT_NAME);
-- DBMS_OUTPUT.PUT_LINE('command_type: ' || rowlcr.GET_COMMAND_TYPE);
END IF;
END print_objectname;
/
Run the below anonymous block whenever there is an out-of-sync issue. Spool its output and select the distinct table names in the spool file; that gives you the list of tables which are out of sync.
set serveroutput on;

DECLARE
lcr SYS.ANYDATA;
TYPE strarr IS TABLE OF VARCHAR2(100);
localtrnid strarr;
msgcount strarr;
BEGIN
select local_transaction_id, message_count bulk collect into localtrnid, msgcount
from DBA_APPLY_ERROR where apply_name = '<APPLY_NAME>';
for i in 1 .. localtrnid.count loop
 for j in 1 .. to_number(msgcount(i)) loop
  lcr := DBMS_APPLY_ADM.GET_ERROR_MESSAGE(j, localtrnid(i));
  print_objectname(lcr);
 end loop;
end loop;
-- print_lcr(lcr);
END;
/
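Extracting the distinct table names from the spool file can be done with a small shell pipeline (the file name streams_errors.lst below is just an example; use your own spool path):

```shell
# Sample spool content; print_objectname emits one "object: <name>" line per LCR
printf 'object: EMP\nobject: DEPT\nobject: EMP\n' > /tmp/streams_errors.lst

# Distinct table names that are out of sync
grep '^object:' /tmp/streams_errors.lst | awk '{print $2}' | sort -u
```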

Thanks and Regards,

Satish.G.S

Posted in Oracle STREAMS | 3 Comments »

STEPS FOR ADDING NEW OBJECTS TO THE EXISTING ORACLE TABLE LEVEL STREAMS CONFIGURATION

Posted by ssgottik on 09/05/2011

Here I am adding a table named NEW_STRM_TBL to an already up-and-running TABLE-level Streams configuration.

SOURCE :

SERVER IP     : 192.168.0.1
DATABASE NAME : SOURCEDB
SCHEMA NAME   : SCOTT
TABLE NAME    : NEW_STRM_TBL
TARGET :

SERVER IP     : 192.168.1.1
DATABASE NAME : TARGETDB
SCHEMA NAME   : SCOTT
TABLE_NAME    : NEW_STRM_TBL

STEP 1: STOP STREAMS

Stop APPLY, PROPAGATION and CAPTURE process

STEP 2: DEFINE THE RULE FOR APPLY
CONN STRMADMIN/STRMADMIN@TARGETDB

show user
–NEW_STRM_TBL

BEGIN
DBMS_STREAMS_ADM.ADD_TABLE_RULES(
table_name => 'SCOTT.NEW_STRM_TBL',
streams_type => 'APPLY',
streams_name => 'STREAM_APPLY_A1',
queue_name => 'STRMADMIN.STREAMS_QUEUE_Q',
include_dml => true,
include_ddl => true,
source_database => 'SOURCEDB');
END;
/
STEP 3: DEFINE THE RULE FOR PROPAGATION

CONN STRMADMIN/STRMADMIN@SOURCEDB

–NEW_STRM_TBL

BEGIN
DBMS_STREAMS_ADM.ADD_TABLE_PROPAGATION_RULES(
table_name => 'SCOTT.NEW_STRM_TBL',
streams_name => 'STREAM_PROPAGATE_P1',
source_queue_name => 'STRMADMIN.STREAMS_QUEUE_Q',
destination_queue_name => 'STRMADMIN.STREAMS_QUEUE_Q@TARGETDB',
include_dml => true,
include_ddl => true,
source_database => 'SOURCEDB');
END;
/
STEP 4: DEFINE THE RULE FOR CAPTURE

CONN strmadmin/STRMADMIN@SOURCEDB

show user

–NEW_STRM_TBL

BEGIN
DBMS_STREAMS_ADM.ADD_TABLE_RULES(
table_name => 'SCOTT.NEW_STRM_TBL',
streams_type => 'CAPTURE',
streams_name => 'STREAM_CAPTURE_C1',
queue_name => 'STRMADMIN.STREAMS_QUEUE_Q',
include_dml => true,
include_ddl => true,
source_database => 'SOURCEDB');
END;
/
STEP 5: OBJECT INSTANTIATION

Take the export of table from the source database:

exp SCOTT/TIGER file=exp_tbl.dmp log=exp_tbl.log object_consistent=y tables=SCOTT.new_strm_tbl

Transfer the dump file to the target server:
Import the dump into the target database:

imp SCOTT/TIGER fromuser=SCOTT touser=SCOTT file='<PATH>/exp_tbl.dmp' log='<PATH>/exp_tbl.log' STREAMS_INSTANTIATION=Y IGNORE=Y COMMIT=Y

STEP 6: START STREAMS

Start the APPLY, PROPAGATION and CAPTURE processes

Thanks and Regards,

Satish.G.S

Posted in Oracle STREAMS | Leave a Comment »

SUPPLEMENTAL LOGGING

Posted by ssgottik on 29/04/2011

What is supplemental logging?

Redo log files are generally used for instance recovery and media recovery. The data required for instance recovery and media recovery is automatically recorded in the redo log files. However, a redo-log-based application may require that additional columns be logged into the redo log files. The process of adding these additional columns into the redo log files is called supplemental logging.

Supplemental logging is not the default behavior of an Oracle database. It has to be enabled manually after the database is created. You can enable supplemental logging at two levels:

  1. DATABASE LEVEL
  2. TABLE LEVEL

What is the use of supplemental logging in replication?

Supplemental logging of certain columns at the source database is required to ensure that changes made to those columns are applied successfully at the target database. With the help of these additional columns, Oracle identifies the rows that need to be updated on the destination side. This is why supplemental logging is a critical requirement for replication.

What is the role or use of supplemental logging in oracle streams?

In Streams, the capture process captures the additional information logged into the redo log file by supplemental logging and places it in the LCR (LOGICAL CHANGE RECORD). Supplemental logging is configured at the source database side. The apply process at the target database side reads these LCRs to properly apply the DML and DDL changes replicated from the source database to the target database.

If the table has a primary key or unique key defined, only the columns involved in the primary key or unique key are recorded in the redo logs along with the actual columns that changed. If the table does not have any primary key or unique key defined, Oracle writes all columns of the changed row into the redo log file.
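To see exactly which columns a table's supplemental log groups record, you can join the dictionary views listed at the end of this post (SCOTT.EMP here is just an example):

```sql
-- Columns captured by each supplemental log group on SCOTT.EMP
select g.log_group_name, c.column_name, g.always
from dba_log_groups g, dba_log_group_columns c
where g.owner = c.owner
and g.log_group_name = c.log_group_name
and g.owner = 'SCOTT' and g.table_name = 'EMP';
```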

Depending on the set of additional columns logged there are two types of supplemental log groups:

  1. Unconditional supplemental log group
  2. Conditional supplemental log group

 1. UNCONDITIONAL SUPPLEMENTAL LOG GROUP:

 If you want the before image of a column to be logged into the redo log file even if no change happened to that column, use an UNCONDITIONAL supplemental log group. This is also called an ALWAYS LOG GROUP.

 2. CONDITIONAL SUPPLEMENTAL LOG GROUP:

 The before images of all the columns in the log group are logged into the redo log file only if at least one of the columns in the supplemental log group is updated.

 DATABASE LEVEL SUPPLEMENTAL LOGGING:

 How to check whether supplemental logging is enabled?

 SQL> SELECT supplemental_log_data_min FROM v$database;

 How to enable supplemental logging at database level?

 SQL> ALTER DATABASE ADD SUPPLEMENTAL LOG DATA;

 How to disable supplemental logging at database level?

 SQL> ALTER DATABASE DROP SUPPLEMENTAL LOG DATA;

 TABLE LEVEL SUPPLEMENTAL LOGGING:

 TABLE LEVEL UNCONDITIONAL SUPPLEMENTAL LOGGING: 

  • Primary Key columns
  • All columns
  • Selected columns

 To specify an unconditional supplemental log group for PRIMARY KEY column(s):

 SQL > ALTER TABLE SCOTT.EMP ADD SUPPLEMENTAL LOG DATA (PRIMARY KEY) COLUMNS;

 To specify an unconditional supplemental log group that includes ALL TABLE columns:

 SQL > ALTER TABLE SCOTT.EMP ADD SUPPLEMENTAL LOG DATA (ALL) COLUMNS;

 To specify an unconditional supplemental log group that includes SELECTED columns:

 SQL> ALTER TABLE SCOTT.EMP ADD SUPPLEMENTAL LOG GROUP t1_g1 (C1,C2) ALWAYS;

 TABLE LEVEL CONDITIONAL SUPPLEMENTAL LOGGING: 

  • Foreign  key
  • Unique
  • Any Columns

To specify a conditional supplemental log group that includes all FOREIGN KEY columns:

 SQL> ALTER TABLE SCOTT.DEPT ADD SUPPLEMENTAL LOG DATA (FOREIGN KEY) COLUMNS;

 To specify a conditional supplemental log group for UNIQUE column(s) and/or BITMAP index column(s):

 SQL > ALTER TABLE SCOTT.EMP ADD SUPPLEMENTAL LOG DATA (UNIQUE) COLUMNS;

 To specify a conditional supplemental log group that includes ANY columns:

 SQL>ALTER TABLE SCOTT.EMP  ADD SUPPLEMENTAL LOG GROUP t1_g1 (c1,c3);

 To drop supplemental logging:

 SQL> ALTER TABLE <TABLE_NAME> DROP SUPPLEMENTAL LOG DATA (ALL) COLUMNS;

 SQL> ALTER TABLE <TABLE_NAME> DROP SUPPLEMENTAL LOG DATA (PRIMARY KEY) COLUMNS;

 SQL> ALTER TABLE <TABLE_NAME> DROP SUPPLEMENTAL LOG DATA (UNIQUE) COLUMNS;

 SQL> ALTER TABLE <TABLE_NAME> DROP SUPPLEMENTAL LOG DATA (FOREIGN KEY) COLUMNS;

 VIEWS

 DBA_LOG_GROUPS

 DBA_LOG_GROUP_COLUMNS

Thanks  and  Regards,

Satish.G.S

 

 

Posted in Oracle STREAMS | 5 Comments »

STEPS TO REMOVE STREAMS FROM THE DATABASE

Posted by ssgottik on 20/04/2011

     COMPLETELY REMOVE STREAMS FROM THE DATABASE

Starting from Oracle Database 10g, Oracle provides a means by which you can remove an entire Streams environment from a database.

Stop streams on both source and target and then execute the below command as sys user

SQL> conn sys@DBSOURCE as sysdba

SQL> execute DBMS_STREAMS_ADM.REMOVE_STREAMS_CONFIGURATION();

SQL> conn sys@DBTARGET as sysdba

SQL> execute DBMS_STREAMS_ADM.REMOVE_STREAMS_CONFIGURATION();

ISSUES:

If you get a recycle-bin (BIN$) error when you execute the above command, purge the recycle bin and re-execute it.

SQL> purge dba_recyclebin;

Thanks and Regards,

Satish.G.S

Posted in Oracle STREAMS | 1 Comment »

STEPS TO IMPLEMENT SCHEMA LEVEL ORACLE STREAMS

Posted by ssgottik on 20/04/2011


Here I am replicating all the objects of the SCOTT schema in the DBSOURCE database to the SCOTT schema in the DBTARGET database.

 SOURCE DATABASE : DBSOURCE

TARGET DATABASE : DBTARGET

SOURCE SCHEMA NAME : SCOTT

TARGET SCHEMA NAME : SCOTT

Follow the steps in the same sequence.

STEP 0: Check for Streams-unsupported objects present in the schema

Query DBA_STREAMS_UNSUPPORTED to get the list of tables, and the reason why Streams won't support those tables in replication.

SQL > SELECT TABLE_NAME, REASON FROM DBA_STREAMS_UNSUPPORTED WHERE OWNER='SCOTT';

STEP 1 : ADD SUPPLEMENTAL LOGGING TO ALL THE TABLES WHICH ARE PART OF STREAMS REPLICATION

@STEP1_SYS_SOURCE_SUPPLEMENTAL_LOG_DATA.SQL

 Add supplemental logging for all the tables present in the SCOTT schema at the source side

—    CONTENTS OF .SQL FILES

spool c:\STREAMS_LOG\step1_sys_source_supplement_log_data.log

CONN SYS@DBSOURCE AS SYSDBA

set echo on

show user

alter database force logging;

alter database add supplemental log data;
alter table SCOTT.EMP  ADD SUPPLEMENTAL LOG DATA (ALL,PRIMARY KEY,UNIQUE,FOREIGN KEY) columns;                                       
alter table SCOTT.DEPT  ADD SUPPLEMENTAL LOG DATA (ALL,PRIMARY KEY,UNIQUE,FOREIGN KEY) columns;                            
alter table SCOTT.EMPLOYEES  ADD SUPPLEMENTAL LOG DATA (ALL,PRIMARY KEY,UNIQUE,FOREIGN KEY) columns;

spool off

STEP 2 : SETTING THE ENV VARIABLES AT SOURCE – DBSOURCE

— The database must run in archive log mode

@STEP2_SYS_SOURCE_GLOBALNAME.SQL

— CONTENTS OF .SQL FILES

set echo on

spool c:\STREAMS_LOG\step2_sys_source_globalname.log

CONN SYS@DBSOURCE AS SYSDBA

SHOW USER

select * from global_name; -- to see current global_name

alter system set global_names=true scope=both;

-- Restart DB & do the same changes on Target DB also

spool off

STEP 3 : SETTING THE ENV VARIABLES AT TARGET – DBTARGET

— the database must run in archive log mode

@STEP3_SYS_TARGET_GLOBALNAME.SQL

— CONTENTS OF .SQL FILES

set echo on

spool c:\STREAMS_LOG\step3_sys_target_globalname.log

CONN SYS@DBTARGET AS SYSDBA

SHOW USER

select * from global_name; -- to see current global_name

alter system set global_names=false scope=both;

-- Restart DB & do the same changes on Source DB also

spool off

STEP 4 : CREATING STREAMS ADMINISTRATOR USER AT SOURCE – DBSOURCE

—at the SOURCE:

SQL> create tablespace strepadm datafile '/oradata/DBSOURCE/strepadm01.dbf' size 1000m;

@STEP4_SYS_SOURCE_CREATE_USER.SQL

— CONTENTS OF .SQL FILES

set echo on

spool c:\STREAMS_LOG\step4_sys_source_create_user.log

CONN SYS@DBSOURCE AS SYSDBA

SHOW USER

PROMPT CREATING USERS

create user STRMADMIN identified by STRMADMIN default tablespace strepadm temporary tablespace temp;

GRANT CONNECT, RESOURCE, AQ_ADMINISTRATOR_ROLE, DBA to STRMADMIN;

execute DBMS_STREAMS_AUTH.GRANT_ADMIN_PRIVILEGE('STRMADMIN');

spool off

STEP 5: CREATING DB LINK AT THE SOURCE -DBSOURCE

@STEP5_STRMADMIN_SOURCE_DBLINK.SQL

— CONTENTS OF .SQL FILES

/* Connected as the Streams Administrator, create the streams queue and the database link that will be used for propagation at DBSOURCE*/
/* Add the TNS ENTRY details in the tnsnames.ora file */
set echo on

spool c:\STREAMS_LOG\STEP5_strmadmin_source_dblink.log

CONN STRMADMIN@DBSOURCE

show user

create database link DBTARGET connect to STRMADMIN identified by STRMADMIN using 'DBTARGET';

spool off

STEP 6 : CREATING STREAMS ADMINISTRATOR USER  AT TARGET – DBTARGET

—at the TARGET:

SQL> create tablespace strepadm datafile '/oradata/DBTARGET/strepadm01.dbf' size 1000m;

@STEP6_SYS_TARGET_CREATE_USER.SQL

— CONTENTS OF .SQL FILES

set echo on

spool c:\STREAMS_LOG\step6_sys_TARGET_create_user.log

CONN SYS@DBTARGET AS SYSDBA

show user

PROMPT CREATING USERS

create user STRMADMIN identified by STRMADMIN default tablespace strepadm temporary tablespace temp;

GRANT CONNECT, RESOURCE, AQ_ADMINISTRATOR_ROLE, DBA to STRMADMIN;

execute DBMS_STREAMS_AUTH.GRANT_ADMIN_PRIVILEGE('STRMADMIN');

spool off
-- If the SCOTT schema is not present in the target, please create it.

STEP 7 : CREATE QUEUE AND QUEUE TABLE AT THE SOURCE – DBSOURCE

@STEP7_STRMADMIN_SOURCE_QUEUE.SQL

— CONTENTS OF .SQL FILES

/* Connected as the Streams Administrator, create the streams queue and the database link that will be used for propagation at DBSOURCE */
set echo on
spool c:\STREAMS_LOG\step7_strmadmin_source_queue.log

connect STRMADMIN@DBSOURCE

show user

BEGIN
   DBMS_STREAMS_ADM.SET_UP_QUEUE(
     queue_table => 'STREAMS_QUEUE_TABLE',
     queue_name  => 'STREAMS_QUEUE_Q',
     queue_user  => 'STRMADMIN');
END;
/

spool off

 STEP 8: CREATE QUEUE AND QUEUE TABLE AT THE TARGET – DBTARGET

@STEP8_STRMADMIN_TARGET_QUEUE.SQL

— CONTENTS OF .SQL FILES

/* Connected as the Streams Administrator, create the streams queue and the database link that will be used for propagation at DBTARGET */

set echo on

spool c:\STREAMS_LOG\step8_strmadmin_target_queue.log

conn STRMADMIN@DBTARGET

show user

BEGIN
   DBMS_STREAMS_ADM.SET_UP_QUEUE (
     queue_table => 'STREAMS_QUEUE_TABLE',
     queue_name  => 'STREAMS_QUEUE_Q',
     queue_user  => 'STRMADMIN');
END;
/

spool off

 STEP 9: CREATE PROPAGATION PROCESS AT SOURCE – DBSOURCE

@STEP9_STRMADMIN_SOURCE_PROPOGATION.SQL

— CONTENTS OF .SQL FILES

set echo on

spool C:\STREAMS_LOG\step9_strmadmin_source_propogation.log

conn strmadmin@DBSOURCE

SHOW USER

BEGIN
   DBMS_STREAMS_ADM.ADD_SCHEMA_PROPAGATION_RULES(
     schema_name            => 'SCOTT',
     streams_name           => 'STREAM_PROPAGATE_P1',
     source_queue_name      => 'STRMADMIN.STREAMS_QUEUE_Q',
     destination_queue_name => 'STRMADMIN.STREAMS_QUEUE_Q@DBTARGET',
     include_dml            => true,
     include_ddl            => true,
     source_database        => 'DBSOURCE');
END;
/

spool off

STEP 10 : CREATE CAPTURE PROCESS AT SOURCE – DBSOURCE

@STEP10_STRMADMIN_SOURCE_CAPTURE.SQL

— CONTENTS OF .SQL FILES

set echo on

/*Step 10 -Connected to DBSOURCE , create CAPTURE */

spool C:\STREAMS_LOG\step10_strmadmin_source_capture.log

CONN strmadmin@DBSOURCE

show user

BEGIN
  DBMS_STREAMS_ADM.ADD_SCHEMA_RULES(
    schema_name     => 'SCOTT',
    streams_type    => 'CAPTURE',
    streams_name    => 'STREAM_CAPTURE_C1',
    queue_name      => 'STRMADMIN.STREAMS_QUEUE_Q',
    include_dml     => true,
    include_ddl     => true,
    source_database => 'DBSOURCE');
END;
/

SPOOL OFF

STEP 11 : CREATE APPLY PROCESS AT TARGET – DBTARGET

@STEP11_STRMADMIN_TARGET_APPLY.SQL

 — CONTENTS OF .SQL FILES

set echo on

spool c:\STREAMS_LOG\step11_strmadmin_target_apply_start.log

CONN STRMADMIN/STRMADMIN@DBTARGET

show user

BEGIN
   DBMS_STREAMS_ADM.ADD_SCHEMA_RULES(
     schema_name     => 'SCOTT',
     streams_type    => 'APPLY',
     streams_name    => 'STREAM_APPLY_A1',
     queue_name      => 'STRMADMIN.STREAMS_QUEUE_Q',
     include_dml     => true,
     include_ddl     => true,
     source_database => 'DBSOURCE');
END;
/

SPOOL OFF

STEP 12: CREATE NEGATIVE RULE AT SOURCE FOR UNSUPPORTED TABLES – DBSOURCE

 Set a negative rule for all the tables that are unsupported by Streams (the list you got from querying DBA_STREAMS_UNSUPPORTED).

— CONTENTS OF .SQL FILES

set echo on

spool c:\streams_source\step12_strmadmin_source_negative_rule.log

conn strmadmin@DBSOURCE

show user

BEGIN
  DBMS_STREAMS_ADM.ADD_TABLE_RULES(
    table_name      => 'SCOTT.<UNSUPPORTED TABLE NAME>',
    streams_type    => 'capture',
    streams_name    => 'STREAM_CAPTURE_C1',
    queue_name      => 'strmadmin.STREAMS_QUEUE_Q',
    include_dml     => true,
    include_ddl     => true,
    inclusion_rule  => false);
END;
/

SPOOL OFF

STEP 13: STREAMS OBJECT INSTANTIATION

@STEP10_EXP_IMP -- details are given below.

SOURCE :

$exp USERNAME/PASSWORD parfile=exp_streams.par

vi exp_streams.par

file=exp_streams.dmp
log=exp_streams.log
object_consistent=y
OWNER=SCOTT
STATISTICS=NONE

SCP THE .DMP FILE TO TARGET AND IMPORT IT:

TARGET:

imp USERNAME/PASSWORD FROMUSER=SCOTT TOUSER=SCOTT FILE=exp_streams.dmp log=exp_streams.log STREAMS_INSTANTIATION=Y IGNORE=Y COMMIT=Y

STEP 14: START THE APPLY PROCESS AT TARGET – DBTARGET

@STEP14_STRMADMIN_TARGET_START_APPLY.SQL

— CONTENTS OF .SQL FILES

SET ECHO ON

spool c:\STREAMS_LOG\step14_STRMADMIN_TARGET_APPLY_START.log

connect STRMADMIN@DBTARGET

show user

BEGIN
DBMS_APPLY_ADM.START_APPLY(
apply_name => 'STREAM_APPLY_A1');
END;
/

---- Set disable_on_error to 'n' so the apply process does not abort on every error; then start the apply process on the destination

BEGIN
  DBMS_APPLY_ADM.SET_PARAMETER(
    apply_name => 'STREAM_APPLY_A1',
    parameter  => 'disable_on_error',
    value      => 'n');
END;
/

-- Start Apply

BEGIN
DBMS_APPLY_ADM.START_APPLY(
apply_name => 'STREAM_APPLY_A1');
END;
/

spool off

STEP 15 : START THE CAPTURE PROCESS AT SOURCE – DBSOURCE

@STEP15_STRMADMIN_SOURCE_START_CAPTURE.SQL

— CONTENTS OF .SQL FILES

SET ECHO ON

spool c:\STREAMS_LOG\step15_STRMADMIN_SOURCE_CAPTURE_START.log

connect STRMADMIN@DBSOURCE

show user

BEGIN
  DBMS_CAPTURE_ADM.START_CAPTURE(
    capture_name => 'STREAM_CAPTURE_C1');
END;
/
spool off

YOUR COMMENTS ARE MOST WELCOME

 Thanks and Regards,

Satish.G.S

Posted in Oracle STREAMS | 19 Comments »

STEPS TO IMPLEMENT TABLE LEVEL ORACLE STREAMS

Posted by ssgottik on 14/04/2011

   STEPS TO SETUP TABLE LEVEL STREAMS
   ————————————————————-

Here I am replicating three tables from the source database to the target database. Details of the source database, target database and the tables involved in replication are mentioned below:

SOURCE DATABASE : DBSOURCE

TARGET DATABASE : DBTARGET

SOURCE TABLE OWNER NAME :  SCOTT

TARGET TABLE OWNER NAME :  SCOTT

TABLES WHICH ARE MEMBERS OF REPLICATION : EMP, DEPT, EMPLOYEES
Follow the steps in the same sequence.

STEP 0 : ADD SUPPLEMENTAL LOGGING TO ALL THE TABLES WHICH ARE PART OF STREAMS REPLICATION

@STEP0_SYS_SOURCE_SUPPLEMENTAL_LOG_DATA.SQL

— CONTENTS OF .SQL FILES

spool c:\STREAMS_LOG\STEP0_SYS_SOURCE_SUPPLEMENT_LOG_DATA.log

CONN SYS@DBSOURCE AS SYSDBA

set echo on

SHOW USER

alter database force logging;

alter database add supplemental log data;
alter table SCOTT.EMP  ADD SUPPLEMENTAL LOG DATA (ALL,PRIMARY KEY,UNIQUE,FOREIGN KEY) columns;                                       
alter table SCOTT.DEPT  ADD SUPPLEMENTAL LOG DATA (ALL,PRIMARY KEY,UNIQUE,FOREIGN KEY) columns;                            
alter table SCOTT.EMPLOYEES  ADD SUPPLEMENTAL LOG DATA (ALL,PRIMARY KEY,UNIQUE,FOREIGN KEY) columns;

SPOOL OFF;
STEP 1 : SETTING THE ENV VARIABLES AT SOURCE – DBSOURCE

— The database must run in archive log mode

@STEP1_SYS_SOURCE_GLOBALNAME.SQL

— CONTENTS OF .SQL FILES

set echo on

spool c:\STREAMS_LOG\step1_sys_source_globalname.log

CONN SYS@DBSOURCE AS SYSDBA

SHOW USER

select * from global_name; -- to see current global_name

alter system set global_names=true scope=both;

-- Restart DB & do the same changes on Target DB also

spool off

STEP 2 : SETTING THE ENV VARIABLES AT TARGET – DBTARGET

— the database must run in archive log mode

@STEP2_SYS_TARGET_GLOBALNAME.SQL

— CONTENTS OF .SQL FILES

set echo on

spool c:\STREAMS_LOG\step2_sys_target_globalname.log

CONN SYS@DBTARGET AS SYSDBA

SHOW USER

select * from global_name; -- to see current global_name

alter system set global_names=false scope=both;

-- Restart DB & do the same changes on Source DB also

spool off

STEP 3 : CREATING STREAMS ADMINISTRATOR USER AT SOURCE – DBSOURCE

—at the SOURCE:

SQL> create tablespace strepadm datafile '/oradata/DBSOURCE/strepadm01.dbf' size 1000m;

@STEP3_SYS_SOURCE_CREATE_USER.SQL

— CONTENTS OF .SQL FILES

set echo on

spool c:\STREAMS_LOG\step3_sys_source_create_user.log

CONN SYS@DBSOURCE AS SYSDBA

SHOW USER

PROMPT CREATING USERS

create user STRMADMIN identified by STRMADMIN default tablespace strepadm temporary tablespace temp;

GRANT CONNECT, RESOURCE, AQ_ADMINISTRATOR_ROLE, DBA to STRMADMIN;

execute DBMS_STREAMS_AUTH.GRANT_ADMIN_PRIVILEGE('STRMADMIN');

spool off
STEP 3a: CREATING DB LINK AT THE SOURCE -DBSOURCE

@STEP3a_STRMADMIN_SOURCE_DBLINK.SQL

— CONTENTS OF .SQL FILES

/* Connected as the Streams Administrator, create the streams queue and the database link that will be used for propagation at DBSOURCE*/
/* Add the TNS ENTRY details in the tnsnames.ora file */
set echo on

spool c:\STREAMS_LOG\STEP3a_strmadmin_source_dblink.log

CONN STRMADMIN@DBSOURCE

show user

create database link DBTARGET connect to STRMADMIN identified by STRMADMIN using 'DBTARGET';

spool off

STEP 4 : CREATING STREAMS ADMINISTRATOR USER AT TARGET – DBTARGET

—at the TARGET:

SQL> create tablespace strepadm datafile '/oradata/DBTARGET/strepadm01.dbf' size 1000m;

@STEP4_SYS_TARGET_CREATE_USER.SQL

— CONTENTS OF .SQL FILES

set echo on

spool c:\STREAMS_LOG\step4_sys_TARGET_create_user.log

CONN SYS@DBTARGET AS SYSDBA

show user

PROMPT CREATING USERS

create user STRMADMIN identified by STRMADMIN default tablespace strepadm temporary tablespace temp;

GRANT CONNECT, RESOURCE, AQ_ADMINISTRATOR_ROLE, DBA to STRMADMIN;

execute DBMS_STREAMS_AUTH.GRANT_ADMIN_PRIVILEGE('STRMADMIN');

spool off
-- If the SCOTT schema is not present in the target, please create it.

STEP 5 : CREATE QUEUE AND QUEUE TABLE AT THE SOURCE – DBSOURCE

@STEP5_STRMADMIN_SOURCE_QUEUE.SQL

— CONTENTS OF .SQL FILES

/* Connected as the Streams Administrator, create the streams queue and the database link that will be used for propagation at DBSOURCE */
set echo on
spool c:\STREAMS_LOG\step5_strmadmin_source_queue.log

connect STRMADMIN@DBSOURCE

show user

BEGIN
   DBMS_STREAMS_ADM.SET_UP_QUEUE(
     queue_table => 'STREAMS_QUEUE_TABLE',
     queue_name  => 'STREAMS_QUEUE_Q',
     queue_user  => 'STRMADMIN');
END;
/

spool off
STEP 6 : CREATE QUEUE AND QUEUE TABLE AT THE TARGET – DBTARGET

@STEP6_STRMADMIN_TARGET_QUEUE.SQL

— CONTENTS OF .SQL FILES

/* Connected as the Streams Administrator, create the streams queue and the database link that will be used for propagation at DBTARGET */
set echo on
spool c:\STREAMS_LOG\step6_strmadmin_target_queue.log

conn STRMADMIN@DBTARGET

show user

BEGIN
   DBMS_STREAMS_ADM.SET_UP_QUEUE(
     queue_table => 'STREAMS_QUEUE_TABLE',
     queue_name  => 'STREAMS_QUEUE_Q',
     queue_user  => 'STRMADMIN');
END;
/

spool off
STEP 7 : CREATE PROPAGATION PROCESS AT SOURCE – DBSOURCE

@STEP7_STRMADMIN_SOURCE_PROPOGATION.SQL

— CONTENTS OF .SQL FILES

set echo on

/*Step 7 -Connected to DBSOURCE, create PROPAGATION */

spool C:\STREAMS_LOG\step7_strmadmin_source_propogation.log

conn strmadmin@DBSOURCE

SHOW USER

--EMP

BEGIN
DBMS_STREAMS_ADM.ADD_TABLE_PROPAGATION_RULES(
table_name => 'SCOTT.EMP',
streams_name => 'STREAM_PROPAGATE_P1',
source_queue_name => 'STRMADMIN.STREAMS_QUEUE_Q',
destination_queue_name => 'STRMADMIN.STREAMS_QUEUE_Q@DBTARGET',
include_dml => true,
include_ddl => true,
source_database => 'DBSOURCE');
END;
/
--DEPT

BEGIN
DBMS_STREAMS_ADM.ADD_TABLE_PROPAGATION_RULES(
table_name => 'SCOTT.DEPT',
streams_name => 'STREAM_PROPAGATE_P1',
source_queue_name => 'STRMADMIN.STREAMS_QUEUE_Q',
destination_queue_name => 'STRMADMIN.STREAMS_QUEUE_Q@DBTARGET',
include_dml => true,
include_ddl => true,
source_database => 'DBSOURCE');
END;
/
--EMPLOYEES

BEGIN
DBMS_STREAMS_ADM.ADD_TABLE_PROPAGATION_RULES(
table_name => 'SCOTT.EMPLOYEES',
streams_name => 'STREAM_PROPAGATE_P1',
source_queue_name => 'STRMADMIN.STREAMS_QUEUE_Q',
destination_queue_name => 'STRMADMIN.STREAMS_QUEUE_Q@DBTARGET',
include_dml => true,
include_ddl => true,
source_database => 'DBSOURCE');
END;
/

SPOOL OFF
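Optional check, not in the original script: confirm the propagation was created and note its status (it is created enabled by default):

SQL > select PROPAGATION_NAME, SOURCE_QUEUE_NAME, DESTINATION_QUEUE_NAME, DESTINATION_DBLINK, STATUS from DBA_PROPAGATION;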

STEP 8 : CREATE CAPTURE PROCESS AT SOURCE – DBSOURCE

@STEP8_STRMADMIN_SOURCE_CAPTURE.SQL

-- CONTENTS OF .SQL FILES

set echo on

/*Step 8 -Connected to DBSOURCE , create CAPTURE */

spool C:\STREAMS_LOG\step8_strmadmin_source_capture.log

CONN strmadmin@DBSOURCE

show user

--EMP

BEGIN
DBMS_STREAMS_ADM.ADD_TABLE_RULES(
table_name => 'SCOTT.EMP',
streams_type => 'CAPTURE',
streams_name => 'STREAM_CAPTURE_C1',
queue_name => 'STRMADMIN.STREAMS_QUEUE_Q',
include_dml => true,
include_ddl => true,
source_database => 'DBSOURCE');
END;
/
--DEPT

BEGIN
DBMS_STREAMS_ADM.ADD_TABLE_RULES(
table_name => 'SCOTT.DEPT',
streams_type => 'CAPTURE',
streams_name => 'STREAM_CAPTURE_C1',
queue_name => 'STRMADMIN.STREAMS_QUEUE_Q',
include_dml => true,
include_ddl => true,
source_database => 'DBSOURCE');
END;
/
--EMPLOYEES

BEGIN
DBMS_STREAMS_ADM.ADD_TABLE_RULES(
table_name => 'SCOTT.EMPLOYEES',
streams_type => 'CAPTURE',
streams_name => 'STREAM_CAPTURE_C1',
queue_name => 'STRMADMIN.STREAMS_QUEUE_Q',
include_dml => true,
include_ddl => true,
source_database => 'DBSOURCE');
END;
/

SPOOL OFF
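Optional check, not part of the original script: list the table rules just created for the capture process:

SQL > select STREAMS_NAME, TABLE_OWNER, TABLE_NAME, RULE_TYPE from DBA_STREAMS_TABLE_RULES where STREAMS_TYPE = 'CAPTURE';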
STEP 9 : CREATE APPLY PROCESS AT TARGET – DBTARGET

@STEP9_STRMADMIN_TARGET_APPLY.SQL

-- CONTENTS OF .SQL FILES

set echo on

spool c:\STREAMS_LOG\step9_strmadmin_target_apply_start.log
/* STEP 9 - Specify an APPLY USER at the destination database.
This is the user who applies all DML and DDL statements.
The user specified in the APPLY_USER parameter must have the necessary privileges to perform DML and DDL changes on the apply objects. */

CONN STRMADMIN/STRMADMIN@DBTARGET

show user
--EMP

BEGIN
DBMS_STREAMS_ADM.ADD_TABLE_RULES(
table_name => 'SCOTT.EMP',
streams_type => 'APPLY',
streams_name => 'STREAM_APPLY_A1',
queue_name => 'STRMADMIN.STREAMS_QUEUE_Q',
include_dml => true,
include_ddl => true,
source_database => 'DBSOURCE');
END;
/
--DEPT

BEGIN
DBMS_STREAMS_ADM.ADD_TABLE_RULES(
table_name => 'SCOTT.DEPT',
streams_type => 'APPLY',
streams_name => 'STREAM_APPLY_A1',
queue_name => 'STRMADMIN.STREAMS_QUEUE_Q',
include_dml => true,
include_ddl => true,
source_database => 'DBSOURCE');
END;
/
--EMPLOYEES

BEGIN
DBMS_STREAMS_ADM.ADD_TABLE_RULES(
table_name => 'SCOTT.EMPLOYEES',
streams_type => 'APPLY',
streams_name => 'STREAM_APPLY_A1',
queue_name => 'STRMADMIN.STREAMS_QUEUE_Q',
include_dml => true,
include_ddl => true,
source_database => 'DBSOURCE');
END;
/
/
-- Change the apply user, and set disable_on_error to 'n' so the apply process does not abort on every error.

BEGIN
DBMS_APPLY_ADM.ALTER_APPLY(
apply_name => 'STREAM_APPLY_A1',
apply_user => 'SCOTT');
END;
/
BEGIN
  DBMS_APPLY_ADM.SET_PARAMETER(
    apply_name => 'STREAM_APPLY_A1',
    parameter  => 'disable_on_error',
    value      => 'n');
END;
/
spool off
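Optional checks, not in the original script: confirm the apply user was changed and the parameter took effect:

SQL > select APPLY_NAME, APPLY_USER, STATUS from DBA_APPLY;
SQL > select PARAMETER, VALUE from DBA_APPLY_PARAMETERS where APPLY_NAME = 'STREAM_APPLY_A1';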
STEP 10: STREAMS OBJECT INSTANTIATION

@STEP10_EXP_IMP -- details are given below.

SOURCE :

$exp USERNAME/PASSWORD parfile=exp_streams.par

vi exp_streams.par

file=exp_streams.dmp
log=exp_streams.log
object_consistent=y
tables='EMP','DEPT','EMPLOYEES'
STATISTICS=NONE

SCP THE .DMP FILE TO TARGET AND IMPORT IT:

TARGET:

imp USERNAME/PASSWORD FROMUSER=SCOTT TOUSER=SCOTT FILE=exp_streams.dmp LOG=imp_streams.log STREAMS_INSTANTIATION=Y IGNORE=Y COMMIT=Y

NOTE 1: Remove all the triggers that were imported and revoke the CREATE TRIGGER privilege from the schema involved in Streams.
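After the import, the instantiation SCNs recorded at the target can be checked (optional, not in the original text):

SQL > select SOURCE_OBJECT_OWNER, SOURCE_OBJECT_NAME, INSTANTIATION_SCN from DBA_APPLY_INSTANTIATED_OBJECTS;

Each of the three tables should appear with a non-null INSTANTIATION_SCN; the apply process discards changes committed at or below this SCN.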

STEP 11: START THE APPLY PROCESS AT TARGET – DBTARGET

@STEP11_STRMADMIN_TARGET_START_APPLY.SQL

-- CONTENTS OF .SQL FILES

SET ECHO ON

spool c:\STREAMS_LOG\step11_STRMADMIN_TARGET_APPLY_START.log

connect STRMADMIN@DBTARGET

show user

BEGIN
DBMS_APPLY_ADM.START_APPLY(
apply_name => 'STREAM_APPLY_A1');
END;
/

spool off
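Optional check, not part of the original script: confirm the apply process is now running:

SQL > select APPLY_NAME, STATUS from DBA_APPLY;

STATUS should show ENABLED.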

STEP 12 : START THE CAPTURE PROCESS AT SOURCE – DBSOURCE

@STEP12_STRMADMIN_SOURCE_START_CAPTURE.SQL

-- CONTENTS OF .SQL FILES

SET ECHO ON

spool c:\STREAMS_LOG\step12_STRMADMIN_SOURCE_CAPTURE_START.log

connect STRMADMIN@DBSOURCE

show user

BEGIN
  DBMS_CAPTURE_ADM.START_CAPTURE(
    capture_name => 'STREAM_CAPTURE_C1');
END;
/
spool off
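Optional check, not in the original script: confirm the capture process started and watch its state:

SQL > select CAPTURE_NAME, STATUS from DBA_CAPTURE;
SQL > select CAPTURE_NAME, STATE from V$STREAMS_CAPTURE;

The capture typically moves through DICTIONARY INITIALIZATION before reaching CAPTURING CHANGES.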

                                                                                                                                                   Thanks and Regards,

                                                                                                                                                   Satish.G.S

Posted in Oracle STREAMS | 1 Comment »

STEPS TO STOP STREAMS

Posted by ssgottik on 14/04/2011

STEPS TO STOP STREAMS:


APPLY_NAME = STREAM_APPLY_A1
CAPTURE_NAME = STREAM_CAPTURE_C1
PROPAGATION_NAME = STREAM_PROPAGATION_P1

Execute the steps below as the Streams administrator user only:

STEP 1. STOP THE APPLY PROCESS:
BEGIN
  DBMS_APPLY_ADM.STOP_APPLY(
    apply_name => 'STREAM_APPLY_A1');
END;
/
STEP 2. STOP THE PROPAGATION PROCESS:
BEGIN
  DBMS_PROPAGATION_ADM.STOP_PROPAGATION(
    propagation_name => 'STREAM_PROPAGATION_P1');
END;
/
STEP 3. STOP THE CAPTURE PROCESS:

BEGIN
  DBMS_CAPTURE_ADM.STOP_CAPTURE(
    capture_name => 'STREAM_CAPTURE_C1');
END;
/
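Optional verification after stopping, not part of the original post:

SQL > select APPLY_NAME, STATUS from DBA_APPLY;
SQL > select PROPAGATION_NAME, STATUS from DBA_PROPAGATION;
SQL > select CAPTURE_NAME, STATUS from DBA_CAPTURE;

All three should report DISABLED.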

Posted in Oracle STREAMS | Leave a Comment »

STEPS TO START ORACLE STREAMS

Posted by ssgottik on 14/04/2011

STEPS TO START STREAMS: Follow the steps in the same order.

APPLY_NAME = STREAM_APPLY_A1
CAPTURE_NAME = STREAM_CAPTURE_C1
PROPAGATION_NAME = STREAM_PROPAGATION_P1

STEP 1. START THE APPLY PROCESS:
BEGIN
  DBMS_APPLY_ADM.START_APPLY(
    apply_name => 'STREAM_APPLY_A1');
END;
/
STEP 2. START THE PROPAGATION PROCESS:
BEGIN
  DBMS_PROPAGATION_ADM.START_PROPAGATION(
    propagation_name => 'STREAM_PROPAGATION_P1');
END;
/
STEP 3. START THE CAPTURE PROCESS:
BEGIN
  DBMS_CAPTURE_ADM.START_CAPTURE(
    capture_name => 'STREAM_CAPTURE_C1');
END;
/

Posted in Oracle STREAMS | Leave a Comment »