ON CALL DBA SUPPORT

— Database blog

Archive for April, 2011

SUPPLEMENTAL LOGGING

Posted by ssgottik on 29/04/2011

What is supplemental logging?

Redo log files are generally used for instance recovery and media recovery; the data required for these operations is recorded in them automatically. However, a redo-log-based application may require that additional columns be logged into the redo log files. The process of logging these additional columns is called supplemental logging.

Supplemental logging is not the default behavior of an Oracle database; it has to be enabled manually after the database is created. You can enable supplemental logging at two levels:

  1. DATABASE LEVEL
  2. TABLE LEVEL

What is the use of supplemental logging in replication?

Supplemental logging of certain columns at the source database is required to ensure that changes made to those columns can be applied successfully at the target database. With the help of these additionally logged columns, Oracle identifies the rows that need to be updated on the destination side. This is why supplemental logging is a critical requirement for replication.

What is the role or use of supplemental logging in Oracle Streams?

In Streams, the capture process captures the additional information logged into the redo log file by supplemental logging and places it in LCRs (LOGICAL CHANGE RECORDS). Supplemental logging is configured at the source database. The apply process at the target database reads these LCRs to properly apply the DML and DDL changes replicated from the source database to the target database.

If the table has a primary key or unique key defined, only the columns involved in the primary key or unique key are registered in the redo logs, along with the actual columns that changed. If the table does not have a primary key or unique key defined, Oracle writes all columns of the changed row into the redo log file.

Depending on the set of additional columns logged there are two types of supplemental log groups:

  1. Unconditional supplemental log group
  2. Conditional supplemental log group

 1. UNCONDITIONAL SUPPLEMENTAL LOG GROUP:

If you want the before images of the columns in the log group to be logged to the redo log file whenever the row is updated, even when none of those columns themselves changed, use an UNCONDITIONAL SUPPLEMENTAL LOG GROUP. This is also called an ALWAYS log group.

 2. CONDITIONAL SUPPLEMENTAL LOG GROUP:

The before images of all the columns in the log group are logged into the redo log file only if at least one of the columns in the supplemental log group is updated.

 DATABASE LEVEL SUPPLEMENTAL LOGGING:

How to check whether supplemental logging is enabled?

 SQL> SELECT supplemental_log_data_min FROM v$database;
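You can also check the identification key logging settings with the related columns of the same view (a quick sketch; these extra columns exist in 10g and later):

SQL> SELECT supplemental_log_data_min, supplemental_log_data_pk, supplemental_log_data_ui FROM v$database;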

 How to enable supplemental logging at database level?

 SQL> ALTER DATABASE ADD SUPPLEMENTAL LOG DATA;
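At the database level you can also enable identification key logging, so that primary key and unique key columns are always logged for every table (a sketch; pick only the key types you need):

SQL> ALTER DATABASE ADD SUPPLEMENTAL LOG DATA (PRIMARY KEY, UNIQUE) COLUMNS;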

 How to disable supplemental logging at database level?

 SQL> ALTER DATABASE DROP SUPPLEMENTAL LOG DATA;

 TABLE LEVEL SUPPLEMENTAL LOGGING:

 TABLE LEVEL UNCONDITIONAL SUPPLEMENTAL LOGGING: 

  • Primary Key columns
  • All columns
  • Selected columns

 To specify an unconditional supplemental log group for PRIMARY KEY column(s):

SQL> ALTER TABLE SCOTT.EMP ADD SUPPLEMENTAL LOG DATA (PRIMARY KEY) COLUMNS;

 To specify an unconditional supplemental log group that includes ALL TABLE columns:

 SQL > ALTER TABLE SCOTT.EMP ADD SUPPLEMENTAL LOG DATA (ALL) COLUMNS;

 To specify an unconditional supplemental log group that includes SELECTED columns:

 SQL> ALTER TABLE SCOTT.EMP ADD SUPPLEMENTAL LOG GROUP t1_g1 (C1,C2) ALWAYS;

 TABLE LEVEL CONDITIONAL SUPPLEMENTAL LOGGING: 

  • Foreign  key
  • Unique
  • Any Columns

To specify a conditional supplemental log group that includes all FOREIGN KEY columns:

 SQL> ALTER TABLE SCOTT.DEPT ADD SUPPLEMENTAL LOG DATA (FOREIGN KEY) COLUMNS;

 To specify a conditional supplemental log group for UNIQUE column(s) and/or BITMAP index column(s):

SQL> ALTER TABLE SCOTT.EMP ADD SUPPLEMENTAL LOG DATA (UNIQUE) COLUMNS;

 To specify a conditional supplemental log group that includes ANY columns:

SQL> ALTER TABLE SCOTT.EMP ADD SUPPLEMENTAL LOG GROUP t1_g1 (c1,c3);

 To drop supplemental logging:

SQL> ALTER TABLE <TABLE NAME> DROP SUPPLEMENTAL LOG DATA (ALL) COLUMNS;

SQL> ALTER TABLE <TABLE NAME> DROP SUPPLEMENTAL LOG DATA (PRIMARY KEY) COLUMNS;

 SQL> ALTER TABLE <TABLE NAME> DROP SUPPLEMENTAL LOG DATA (UNIQUE) COLUMNS;

 SQL> ALTER TABLE <TABLE NAME> DROP SUPPLEMENTAL LOG DATA (FOREIGN KEY) COLUMNS;

 VIEWS

 DBA_LOG_GROUPS

 DBA_LOG_GROUP_COLUMNS
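For example, to list the supplemental log groups defined on SCOTT's tables (a simple sketch against the views above):

SQL> SELECT log_group_name, table_name, log_group_type, always FROM dba_log_groups WHERE owner = 'SCOTT';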

Thanks  and  Regards,

Satish.G.S

 

 

Posted in Oracle STREAMS | 5 Comments »

Oracle Golden Gate Tutorial part 1

Posted by ssgottik on 24/04/2011

ORACLE GOLDEN GATE CONCEPT AND ARCHITECTURE

 

Oracle GoldenGate is a tool provided by Oracle for transactional data replication among Oracle databases and other RDBMS platforms (SQL Server, DB2, etc.). Its modular architecture gives you the flexibility to easily decouple or combine its components to provide the best solution for the business requirements.

Because of this flexibility in the architecture, GoldenGate supports numerous business requirements:

  • High Availability
  • Data Integration
  • Zero downtime upgrade and migration
  • Live reporting database, etc.

 Oracle Golden Gate Architecture

The Oracle GoldenGate architecture is composed of the following components:

 ● Extract

 ● Data pump

 ● Replicat

 ● Trails or extract files

 ● Checkpoints

 ● Manager

 ● Collector

 Below is the architecture diagram of GG:

Figure gg.01: Oracle GoldenGate architecture diagram

Oracle GoldenGate runs on both the source and target servers. It is installed as an external component to the database and does not consume database resources, so it does not affect database performance. Oracle Streams, by contrast, uses built-in packages provided by Oracle; it consumes database resources, and there is a chance of a performance slowdown on both the source and target databases.

Let us first have a look at the architectural components of Oracle GoldenGate:

 EXTRACT:

Extract runs on the source system and is the extraction (capture) mechanism of Oracle GoldenGate: it captures the changes that happen at the source database.

The Extract process extracts the necessary data from the database transaction logs. For an Oracle database, the transaction logs are nothing but the redo log files. Unlike Streams, which runs inside the Oracle database itself and needs access to the database, Oracle GoldenGate does not need access to the database, and it extracts only committed transactions from the online redo log files.

Whenever a long-running transaction generates a large amount of redo, it forces redo log switches, and in turn more archive logs are generated. In these cases the Extract process needs to read the archive log files to get the data.

The Extract process captures all the changes made to objects that are configured for synchronization. Multiple Extract processes can operate on different objects at the same time. For example, one process could continuously extract transactional data changes and stream them to a decision-support database while another performs batch extracts for periodic reporting; or two Extract processes could extract and transmit in parallel to two Replicat processes (with two trails) to minimize target latency when the databases are large.

 DATAPUMP

Datapump is a secondary Extract process within the source Oracle GoldenGate configuration. You can configure the source without a Datapump process, but in that case the primary Extract process has to send the data to the trail file on the target. If a Datapump is configured, the primary Extract process writes the data to a source trail file, and the Datapump reads this trail and propagates the data over the network to the target trail. The Datapump adds storage flexibility and isolates the primary Extract process from TCP/IP activity.

You can configure the primary Extract process and the Datapump Extract to extract continuously (online) or during batch processing.
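For illustration, a primary Extract and a Datapump could be registered in GGSCI roughly as below; the process names and trail paths here are made up for this sketch:

GGSCI> ADD EXTRACT ext1, TRANLOG, BEGIN NOW
GGSCI> ADD EXTTRAIL ./dirdat/aa, EXTRACT ext1
GGSCI> ADD EXTRACT dpump1, EXTTRAILSOURCE ./dirdat/aa
GGSCI> ADD RMTTRAIL ./dirdat/bb, EXTRACT dpump1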

 REPLICAT

The Replicat process runs on the target system. Replicat reads the extracted transactional data changes and DDL changes (if configured) that are specified in the Replicat configuration, and then it replicates them to the target database.
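Continuing the sketch above, the matching Replicat on the target would read the remote trail (names and paths are again illustrative):

GGSCI> ADD REPLICAT rep1, EXTTRAIL ./dirdat/bb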

 TRAILS OR EXTRACTS

To support the continuous extraction and replication of source database changes, Oracle GoldenGate stores the captured changes temporarily on disk in a series of files called a TRAIL. A trail can exist on the source or target system, and even on an intermediate system, depending on how the configuration is done. On the local system it is known as an EXTRACT TRAIL and on the remote system as a REMOTE TRAIL.

The use of a trail also allows extraction and replication activities to occur independently of each other. Since the two (source trail and target trail) are independent, you have more choices for how data is delivered.

 CHECKPOINT

Checkpoints store the current read and write positions of a process on disk for recovery purposes. These checkpoints ensure that data changes marked for synchronization are extracted by Extract and replicated by Replicat.

Checkpoints work with inter-process acknowledgments to prevent messages from being lost in the network. Oracle GoldenGate has a proprietary guaranteed-message-delivery technology.

Checkpoint information is maintained in checkpoint files within the dirchk sub-directory of the Oracle GoldenGate directory. Optionally, Replicat checkpoints can also be maintained in a checkpoint table within the target database, in addition to the standard checkpoint file.
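For instance, an optional checkpoint table could be created from GGSCI as below; the ggadmin schema and table name are made-up examples:

GGSCI> DBLOGIN USERID ggadmin, PASSWORD ggadmin
GGSCI> ADD CHECKPOINTTABLE ggadmin.gg_checkpoint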

 MANAGER

The Manager process runs on both the source and target systems and is the heart, or control process, of Oracle GoldenGate. Manager must be up and running before you create an Extract or Replicat process. Manager performs monitoring, restarts Oracle GoldenGate processes, reports errors and events, maintains trail files and logs, and so on.
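A minimal Manager parameter file (dirprm/mgr.prm) might look like the sketch below; the port and retention settings are only examples:

PORT 7809
AUTORESTART EXTRACT *, RETRIES 3, WAITMINUTES 5
PURGEOLDEXTRACTS ./dirdat/*, USECHECKPOINTS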

 COLLECTOR

 Collector is a process that runs in the background on the target system. Collector receives extracted database changes that are sent across the TCP/IP network and it writes them to a trail or extract file.

Thanks and Regards,

Satish.G.S

Posted in ORACLE GOLDEN GATE | 22 Comments »

STEPS TO REMOVE STREAMS FROM THE DATABASE

Posted by ssgottik on 20/04/2011

     COMPLETELY REMOVE STREAMS FROM THE DATABASE

Starting from Oracle Database 10g, Oracle provides a means by which you can remove an entire Streams environment from a database.

Stop Streams on both the source and the target, and then execute the command below as the SYS user:

SQL> conn sys@DBSOURCE as sysdba

SQL> execute DBMS_STREAMS_ADM.REMOVE_STREAMS_CONFIGURATION();

SQL> conn sys@DBTARGET as sysdba

SQL> execute DBMS_STREAMS_ADM.REMOVE_STREAMS_CONFIGURATION();

ISSUES:

If you get an error referring to BIN$ (recycle bin) objects when you execute the above command, purge the recycle bin and re-execute the command.

SQL> purge dba_recyclebin;
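Afterwards you can verify that the configuration is really gone; as a quick sketch, run the checks below on both the source and the target (they should return no rows):

SQL> SELECT capture_name FROM dba_capture;
SQL> SELECT propagation_name FROM dba_propagation;
SQL> SELECT apply_name FROM dba_apply;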

Thanks and Regards,

Satish.G.S

Posted in Oracle STREAMS | 1 Comment »

STEPS TO IMPLEMENT SCHEMA LEVEL ORACLE STREAMS

Posted by ssgottik on 20/04/2011

 STEPS TO IMPLEMENT SCHEMA LEVEL ORACLE STREAMS

Here I am replicating all the objects of the SCOTT schema from the DBSOURCE database to the SCOTT schema in the DBTARGET database.

 SOURCE DATABASE : DBSOURCE

TARGET DATABASE : DBTARGET

SOURCE SCHEMA NAME : SCOTT

TARGET SCHEMA NAME : SCOTT

Follow the steps in the same sequence.

STEP 0: Check for Streams-unsupported objects present in the schema

Query DBA_STREAMS_UNSUPPORTED to get the list of tables, and the reason why Streams will not support those tables in replication.

SQL> SELECT TABLE_NAME, REASON FROM DBA_STREAMS_UNSUPPORTED WHERE OWNER='SCOTT';

STEP 1 : ADD SUPPLEMENTAL LOGGING TO ALL THE TABLES WHICH ARE PART OF STREAMS REPLICATION

@STEP1_SYS_SOURCE_SUPPLEMENTAL_LOG_DATA.SQL

Add supplemental logging for all the tables present in the SCOTT schema at the source side.

—    CONTENTS OF .SQL FILES

spool c:\STREAMS_LOG\step1_sys_source_supplement_log_data.log

CONN SYS@DBSOURCE AS SYSDBA

set echo on

show user

alter database force logging;

alter database add supplemental log data;
alter table SCOTT.EMP  ADD SUPPLEMENTAL LOG DATA (ALL,PRIMARY KEY,UNIQUE,FOREIGN KEY) columns;                                       
alter table SCOTT.DEPT  ADD SUPPLEMENTAL LOG DATA (ALL,PRIMARY KEY,UNIQUE,FOREIGN KEY) columns;                            
alter table SCOTT.EMPLOYEES  ADD SUPPLEMENTAL LOG DATA (ALL,PRIMARY KEY,UNIQUE,FOREIGN KEY) columns;

spool off

STEP 2 : SETTING THE ENV VARIABLES AT SOURCE – DBSOURCE

— The database must run in archive log mode

@STEP2_SYS_SOURCE_GLOBALNAME.SQL

— CONTENTS OF .SQL FILES

set echo on

spool c:\STREAMS_LOG\step2_sys_source_globalname.log

CONN SYS@DBSOURCE AS SYSDBA

SHOW USER

select * from global_name; –to see current global_name

alter system set global_names=true scope=both;

– Restart DB & do the same changes on Target DB also

spool off

STEP 3 : SETTING THE ENV VARIABLES AT TARGET – DBTARGET

— the database must run in archive log mode

@STEP3_SYS_TARGET_GLOBALNAME.SQL

— CONTENTS OF .SQL FILES

set echo on

spool c:\STREAMS_LOG\step3_sys_target_globalname.log

CONN SYS@DBTARGET AS SYSDBA

SHOW USER

select * from global_name; –to see current global_name

alter system set global_names=true scope=both;

– Restart DB & do the same changes on Source DB also

spool off

STEP 4 : CREATING STREAMS ADMINISTRATOR USER AT SOURCE – DBSOURCE

—at the SOURCE:

SQL> create tablespace strepadm datafile ‘/oradata/DBSOURCE/strepadm01.dbf’ size 1000m;

@STEP4_SYS_SOURCE_CREATE_USER.SQL

— CONTENTS OF .SQL FILES

set echo on

spool c:\STREAMS_LOG\step4_sys_source_create_user.log

CONN SYS@DBSOURCE AS SYSDBA

SHOW USER

PROMPT CREATING USERS

create user STRMADMIN identified by STRMADMIN default tablespace strepadm temporary tablespace temp;

GRANT CONNECT, RESOURCE, AQ_ADMINISTRATOR_ROLE,DBA to STRMADMIN;

execute DBMS_STREAMS_AUTH.GRANT_ADMIN_PRIVILEGE(‘STRMADMIN’);

spool off

STEP 5: CREATING DB LINK AT THE SOURCE -DBSOURCE

@STEP5_STRMADMIN_SOURCE_DBLINK.SQL

— CONTENTS OF .SQL FILES

/* Connected as the Streams Administrator, create the streams queue and the database link that will be used for propagation at DBSOURCE*/
/* Add the TNS ENTRY details in the tnsnames.ora file */
set echo on

spool c:\STREAMS_LOG\STEP5_strmadmin_source_dblink.log

CONN STRMADMIN@DBSOURCE

show user

create database link DBTARGET connect to STRMADMIN identified by STRMADMIN using ‘DBTARGET’;

spool off

STEP 6 : CREATING STREAMS ADMINISTRATOR USER  AT TARGET – DBTARGET

—at the TARGET:

SQL> create tablespace strepadm datafile ‘/oradata/DBTARGET/strepadm01.dbf’ size 1000m;

@STEP6_SYS_TARGET_CREATE_USER.SQL

— CONTENTS OF .SQL FILES

set echo on

spool c:\STREAMS_LOG\step6_sys_TARGET_create_user.log

CONN SYS@DBTARGET AS SYSDBA

show user

PROMPT CREATING USERS

create user STRMADMIN identified by STRMADMIN default tablespace strepadm temporary tablespace temp;

GRANT CONNECT, RESOURCE, AQ_ADMINISTRATOR_ROLE,DBA to STRMADMIN;

execute DBMS_STREAMS_AUTH.GRANT_ADMIN_PRIVILEGE(‘STRMADMIN’);

spool off
-- IF the SCOTT schema is not present in the target, please create it.

STEP 7 : CREATE QUEUE AND QUEUE TABLE AT THE SOURCE – DBSOURCE

@STEP7_STRMADMIN_SOURCE_QUEUE.SQL

— CONTENTS OF .SQL FILES

/* Connected as the Streams Administrator, create the streams queue and the database link that will be used for propagation at DBSOURCE */
set echo on
spool c:\STREAMS_LOG\step7_strmadmin_source_queue.log

connect STRMADMIN@DBSOURCE

show user

BEGIN
   DBMS_STREAMS_ADM.SET_UP_QUEUE(
     queue_table => ‘STREAMS_QUEUE_TABLE’,
     queue_name  => ‘STREAMS_QUEUE_Q’,
     queue_user  => ‘STRMADMIN’);
END;
/

spool off

 STEP 8: CREATE QUEUE AND QUEUE TABLE AT THE TARGET – DBTARGET

@STEP8_STRMADMIN_TARGET_QUEUE.SQL

— CONTENTS OF .SQL FILES

/* Connected as the Streams Administrator, create the streams queue and the database link that will be used for propagation at DBTARGET */

set echo on

spool c:\STREAMS_LOG\step8_strmadmin_target_queue.log

conn STRMADMIN@DBTARGET

show user

BEGIN
   DBMS_STREAMS_ADM.SET_UP_QUEUE (
     queue_table => ‘STREAMS_QUEUE_TABLE’,
     queue_name  => ‘STREAMS_QUEUE_Q’,
     queue_user  => ‘STRMADMIN’);
END;
/

spool off

 STEP 9: CREATE PROPAGATION PROCESS AT SOURCE – DBSOURCE

@STEP9_STRMADMIN_SOURCE_PROPOGATION.SQL

— CONTENTS OF .SQL FILES

set echo on

spool C:\STREAMS_LOG\step9_strmadmin_source_propogation.log

conn strmadmin@DBSOURCE

SHOW USER

BEGIN
   DBMS_STREAMS_ADM.ADD_SCHEMA_PROPAGATION_RULES(
     schema_name             => 'SCOTT',
     streams_name            => 'STREAM_PROPAGATE_P1',
     source_queue_name       => 'STRMADMIN.STREAMS_QUEUE_Q',
     destination_queue_name  => 'STRMADMIN.STREAMS_QUEUE_Q@DBTARGET',
     include_dml             => true,
     include_ddl             => true,
     source_database         => 'DBSOURCE');
END;
/

spool off

STEP 10 : CREATE CAPTURE PROCESS AT SOURCE – DBSOURCE

@STEP10_STRMADMIN_SOURCE_CAPTURE.SQL

— CONTENTS OF .SQL FILES

set echo on

/*Step 10 -Connected to DBSOURCE , create CAPTURE */

spool C:\STREAMS_LOG\step10_strmadmin_source_capture.log

CONN strmadmin@DBSOURCE

show user

BEGIN
  DBMS_STREAMS_ADM.ADD_SCHEMA_RULES(
    schema_name     => 'SCOTT',
    streams_type    => 'CAPTURE',
    streams_name    => 'STREAM_CAPTURE_C1',
    queue_name      => 'STRMADMIN.STREAMS_QUEUE_Q',
    include_dml     => true,
    include_ddl     => true,
    source_database => 'DBSOURCE');
END;
/

SPOOL OFF

STEP 11 : CREATE APPLY PROCESS AT TARGET – DBTARGET

@STEP11_STRMADMIN_TARGET_APPLY.SQL

 — CONTENTS OF .SQL FILES

set echo on

spool c:\STREAMS_LOG\step11_strmadmin_target_apply_start.log

CONN STRMADMIN/STRMADMIN@DBTARGET

show user

BEGIN
   DBMS_STREAMS_ADM.ADD_SCHEMA_RULES(
     schema_name     => 'SCOTT',
     streams_type    => 'APPLY',
     streams_name    => 'STREAM_APPLY_A1',
     queue_name      => 'STRMADMIN.STREAMS_QUEUE_Q',
     include_dml     => true,
     include_ddl     => true,
     source_database => 'DBSOURCE');
END;
/

SPOOL OFF

STEP 12: CREATE NEGATIVE RULE AT SOURCE FOR UNSUPPORTED TABLES – DBSOURCE

Set a negative rule for all the tables which are unsupported by Streams (the list you got from querying DBA_STREAMS_UNSUPPORTED).

— CONTENTS OF .SQL FILES

set echo on

spool c:\streams_source\step12_strmadmin_source_negative_rule.log

conn strmadmin@DBSOURCE

show user

BEGIN
  DBMS_STREAMS_ADM.ADD_TABLE_RULES(
    table_name      => 'SCOTT.<UNSUPPORTED TABLE NAME>',
    streams_type    => 'capture',
    streams_name    => 'STREAM_CAPTURE_C1',
    queue_name      => 'strmadmin.STREAMS_QUEUE_Q',
    include_dml     => true,
    include_ddl     => true,
    inclusion_rule  => false);
END;
/

SPOOL OFF

STEP 13: STREAMS OBJECT INSTANTIATION

@STEP13_EXP_IMP — details are given below.

SOURCE :

$exp USERNAME/PASSWORD parfile=exp_streams.par

vi exp_streams.par

file=exp_streams.dmp
log=exp_streams.log
object_consistent=y
OWNER=SCOTT
STATISTICS=NONE

SCP THE .DMP FILE TO TARGET AND IMPORT IT:

TARGET:

$imp USERNAME/PASSWORD FROMUSER=SCOTT TOUSER=SCOTT FILE=exp_streams.dmp log=imp_streams.log STREAMS_INSTANTIATION=Y IGNORE=Y COMMIT=Y
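As an alternative to export/import instantiation, you can set the instantiation SCN explicitly. A sketch: capture the current SCN at the source, then record it for the schema at the target (individual tables may additionally need DBMS_APPLY_ADM.SET_TABLE_INSTANTIATION_SCN):

-- At the source:
SQL> SELECT DBMS_FLASHBACK.GET_SYSTEM_CHANGE_NUMBER FROM dual;

-- At the target, using the SCN returned above:
BEGIN
  DBMS_APPLY_ADM.SET_SCHEMA_INSTANTIATION_SCN(
    source_schema_name   => 'SCOTT',
    source_database_name => 'DBSOURCE',
    instantiation_scn    => &scn);
END;
/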

STEP 14: START THE APPLY PROCESS AT TARGET – DBTARGET

@STEP14_STRMADMIN_TARGET_START_APPLY.SQL

— CONTENTS OF .SQL FILES

SET ECHO ON

spool c:\STREAMS_LOG\step14_STRMADMIN_TARGET_APPLY_START.log

connect STRMADMIN@DBTARGET

show user

-- Set disable_on_error to 'n' so the apply process does not abort on every error; then start the apply process on the destination.

BEGIN
  DBMS_APPLY_ADM.SET_PARAMETER(
    apply_name => 'STREAM_APPLY_A1',
    parameter  => 'disable_on_error',
    value      => 'n');
END;
/

-- Start Apply

BEGIN
  DBMS_APPLY_ADM.START_APPLY(
    apply_name => 'STREAM_APPLY_A1');
END;
/

spool off

STEP 15 : START THE CAPTURE PROCESS AT SOURCE – DBSOURCE

@STEP15_STRMADMIN_SOURCE_START_CAPTURE.SQL

— CONTENTS OF .SQL FILES

SET ECHO ON

spool c:\STREAMS_LOG\step15_STRMADMIN_SOURCE_CAPTURE_START.log

connect STRMADMIN@DBSOURCE

show user

BEGIN
  DBMS_CAPTURE_ADM.START_CAPTURE(
    capture_name => ‘STREAM_CAPTURE_C1′);
END;
/
spool off
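Once both sides are started, a quick sanity check (a sketch) is to query the Streams dictionary views (the capture and propagation checks at the source, the apply check at the target):

SQL> SELECT capture_name, status FROM dba_capture;
SQL> SELECT propagation_name, status FROM dba_propagation;
SQL> SELECT apply_name, status FROM dba_apply;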

YOUR COMMENTS ARE MOST WELCOME

 Thanks and Regards,

Satish.G.S

Posted in Oracle STREAMS | 19 Comments »

STEPS TO IMPLEMENT TABLE LEVEL ORACLE STREAMS

Posted by ssgottik on 14/04/2011

   STEPS TO SETUP TABLE LEVEL STREAMS
   ————————————————————-

Here I am replicating three tables from the source database to the target database. Details of the source database, target database and the tables involved in replication are mentioned below:

SOURCE DATABASE : DBSOURCE

TARGET DATABASE : DBTARGET

SOURCE TABLE OWNER NAME :  SCOTT

TARGET TABLE OWNER NAME :  SCOTT

TABLES WHICH ARE MEMBERS OF REPLICATION : EMP, DEPT, EMPLOYEES

Follow the steps in the same sequence.

STEP 0 : ADD SUPPLEMENTAL LOGGING TO ALL THE TABLES WHICH ARE PART OF STREAMS REPLICATION

@STEP0_SYS_SOURCE_SUPPLEMENTAL_LOG_DATA.SQL

— CONTENTS OF .SQL FILES

spool c:\STREAMS_LOG\STEP0_SYS_SOURCE_SUPPLEMENT_LOG_DATA.log

CONN SYS@DBSOURCE AS SYSDBA

set echo on

SHOW USER

alter database force logging;

alter database add supplemental log data;
alter table SCOTT.EMP  ADD SUPPLEMENTAL LOG DATA (ALL,PRIMARY KEY,UNIQUE,FOREIGN KEY) columns;                                       
alter table SCOTT.DEPT  ADD SUPPLEMENTAL LOG DATA (ALL,PRIMARY KEY,UNIQUE,FOREIGN KEY) columns;                            
alter table SCOTT.EMPLOYEES  ADD SUPPLEMENTAL LOG DATA (ALL,PRIMARY KEY,UNIQUE,FOREIGN KEY) columns;

SPOOL OFF;
STEP 1 : SETTING THE ENV VARIABLES AT SOURCE – DBSOURCE

— The database must run in archive log mode

@STEP1_SYS_SOURCE_GLOBALNAME.SQL

— CONTENTS OF .SQL FILES

set echo on

spool c:\STREAMS_LOG\step1_sys_source_globalname.log

CONN SYS@DBSOURCE AS SYSDBA

SHOW USER

select * from global_name; –to see current global_name

alter system set global_names=true scope=both;

— Restart DB & do the same changes on Target DB also

spool off

STEP 2 : SETTING THE ENV VARIABLES AT TARGET – DBTARGET

— the database must run in archive log mode

@STEP2_SYS_TARGET_GLOBALNAME.SQL

— CONTENTS OF .SQL FILES

set echo on

spool c:\STREAMS_LOG\step2_sys_target_globalname.log

CONN SYS@DBTARGET AS SYSDBA

SHOW USER

select * from global_name; –to see current global_name

alter system set global_names=true scope=both;

— Restart DB & do the same changes on Source DB also

spool off

STEP 3 : CREATING STREAMS ADMINISTRATOR USER AT SOURCE – DBSOURCE

—at the SOURCE:

SQL> create tablespace strepadm datafile ‘/oradata/DBSOURCE/strepadm01.dbf’ size 1000m;

@STEP3_SYS_SOURCE_CREATE_USER.SQL

— CONTENTS OF .SQL FILES

set echo on

spool c:\STREAMS_LOG\step3_sys_source_create_user.log

CONN SYS@DBSOURCE AS SYSDBA

SHOW USER

PROMPT CREATING USERS

create user STRMADMIN identified by STRMADMIN default tablespace strepadm temporary tablespace temp;

GRANT CONNECT, RESOURCE, AQ_ADMINISTRATOR_ROLE,DBA to STRMADMIN;

execute DBMS_STREAMS_AUTH.GRANT_ADMIN_PRIVILEGE(‘STRMADMIN’);

spool off
STEP 3a: CREATING DB LINK AT THE SOURCE -DBSOURCE

@STEP3a_STRMADMIN_SOURCE_DBLINK.SQL

— CONTENTS OF .SQL FILES

/* Connected as the Streams Administrator, create the streams queue and the database link that will be used for propagation at DBSOURCE*/
/* Add the TNS ENTRY details in the tnsnames.ora file */
set echo on

spool c:\STREAMS_LOG\STEP3a_strmadmin_source_dblink.log

CONN STRMADMIN@DBSOURCE

show user

create database link DBTARGET connect to STRMADMIN identified by STRMADMIN using ‘DBTARGET’;

spool off

STEP 4 : CREATING STREAMS ADMINISTRATOR USER AT TARGET – DBTARGET

—at the TARGET:

SQL> create tablespace strepadm datafile ‘/oradata/DBTARGET/strepadm01.dbf’ size 1000m;

@STEP4_SYS_TARGET_CREATE_USER.SQL

— CONTENTS OF .SQL FILES

set echo on

spool c:\STREAMS_LOG\step4_sys_TARGET_create_user.log

CONN SYS@DBTARGET AS SYSDBA

show user

PROMPT CREATING USERS

create user STRMADMIN identified by STRMADMIN default tablespace strepadm temporary tablespace temp;

GRANT CONNECT, RESOURCE, AQ_ADMINISTRATOR_ROLE,DBA to STRMADMIN;

execute DBMS_STREAMS_AUTH.GRANT_ADMIN_PRIVILEGE(‘STRMADMIN’);

spool off
— IF SCOTT schema is not present in the target please create the same.

STEP 5 : CREATE QUEUE AND QUEUE TABLE AT THE SOURCE – DBSOURCE

@STEP5_STRMADMIN_SOURCE_QUEUE.SQL

— CONTENTS OF .SQL FILES

/* Connected as the Streams Administrator, create the streams queue and the database link that will be used for propagation at DBSOURCE */
set echo on
spool c:\STREAMS_LOG\step5_strmadmin_source_queue.log

connect STRMADMIN@DBSOURCE

show user

BEGIN
   DBMS_STREAMS_ADM.SET_UP_QUEUE(
     queue_table => ‘STREAMS_QUEUE_TABLE’,
     queue_name  => ‘STREAMS_QUEUE_Q’,
     queue_user  => ‘STRMADMIN’);
END;
/

spool off
STEP 6 : CREATE QUEUE AND QUEUE TABLE AT THE TARGET – DBTARGET

@STEP6_STRMADMIN_TARGET_QUEUE.SQL

— CONTENTS OF .SQL FILES

/* Connected as the Streams Administrator, create the streams queue and the database link that will be used for propagation at DBTARGET */
set echo on
spool c:\STREAMS_LOG\step6_strmadmin_target_queue.log

conn STRMADMIN@DBTARGET

show user

BEGIN
   DBMS_STREAMS_ADM.SET_UP_QUEUE(
     queue_table => ‘STREAMS_QUEUE_TABLE’,
     queue_name  => ‘STREAMS_QUEUE_Q’,
     queue_user  => ‘STRMADMIN’);
END;
/

spool off
STEP 7 : CREATE PROPAGATION PROCESS AT SOURCE – DBSOURCE

@STEP7_STRMADMIN_SOURCE_PROPOGATION.SQL

— CONTENTS OF .SQL FILES

set echo on

/*Step 7 -Connected to DBSOURCE, create PROPAGATION */

spool C:\STREAMS_LOG\step7_strmadmin_source_propogation.log

conn strmadmin@DBSOURCE

SHOW USER

–EMP

BEGIN
DBMS_STREAMS_ADM.ADD_TABLE_PROPAGATION_RULES(
table_name => 'SCOTT.EMP',
streams_name => 'STREAM_PROPAGATE_P1',
source_queue_name => 'STRMADMIN.STREAMS_QUEUE_Q',
destination_queue_name => 'STRMADMIN.STREAMS_QUEUE_Q@DBTARGET',
include_dml => true,
include_ddl => true,
source_database => 'DBSOURCE');
END;
/
–DEPT

BEGIN
DBMS_STREAMS_ADM.ADD_TABLE_PROPAGATION_RULES(
table_name => ‘SCOTT.DEPT’,
streams_name => ‘STREAM_PROPAGATE_P1’,
source_queue_name => ‘STRMADMIN.STREAMS_QUEUE_Q’,
destination_queue_name => ‘STRMADMIN.STREAMS_QUEUE_Q@DBTARGET’,
include_dml => true,
include_ddl => true,
source_database => ‘DBSOURCE’);
END;
/
–EMPLOYEES

BEGIN
DBMS_STREAMS_ADM.ADD_TABLE_PROPAGATION_RULES(
table_name => 'SCOTT.EMPLOYEES',
streams_name => 'STREAM_PROPAGATE_P1',
source_queue_name => 'STRMADMIN.STREAMS_QUEUE_Q',
destination_queue_name => 'STRMADMIN.STREAMS_QUEUE_Q@DBTARGET',
include_dml => true,
include_ddl => true,
source_database => 'DBSOURCE');
END;
/

SPOOL OFF

STEP 8 : CREATE CAPTURE PROCESS AT SOURCE – DBSOURCE

@STEP8_STRMADMIN_SOURCE_CAPTURE.SQL

— CONTENTS OF .SQL FILES

set echo on

/*Step 8 -Connected to DBSOURCE , create CAPTURE */

spool C:\STREAMS_LOG\step8_strmadmin_source_capture.log

CONN strmadmin@DBSOURCE

show user

–EMP

BEGIN
DBMS_STREAMS_ADM.ADD_TABLE_RULES(
table_name => ‘SCOTT.EMP’,
streams_type => ‘CAPTURE’,
streams_name => ‘STREAM_CAPTURE_C1’,
queue_name => ‘STRMADMIN.STREAMS_QUEUE_Q’,
include_dml => true,
include_ddl => true,
source_database => ‘DBSOURCE’);
END;
/
–DEPT

BEGIN
DBMS_STREAMS_ADM.ADD_TABLE_RULES(
table_name => ‘SCOTT.DEPT’,
streams_type => ‘CAPTURE’,
streams_name => ‘STREAM_CAPTURE_C1’,
queue_name => ‘STRMADMIN.STREAMS_QUEUE_Q’,
include_dml => true,
include_ddl => true,
source_database => ‘DBSOURCE’);
END;
/
–EMPLOYEES

BEGIN
DBMS_STREAMS_ADM.ADD_TABLE_RULES(
table_name => ‘SCOTT.EMPLOYEES’,
streams_type => ‘CAPTURE’,
streams_name => ‘STREAM_CAPTURE_C1’,
queue_name => ‘STRMADMIN.STREAMS_QUEUE_Q’,
include_dml => true,
include_ddl => true,
source_database => ‘DBSOURCE’);
END;
/

SPOOL OFF
STEP 9 : CREATE APPLY PROCESS AT TARGET – DBTARGET

@STEP9_STRMADMIN_TARGET_APPLY.SQL

— CONTENTS OF .SQL FILES

set echo on

spool c:\STREAMS_LOG\step9_strmadmin_target_apply_start.log
/* STEP 9 - Specify an 'APPLY USER' at the destination database.
This is the user who will apply all DML and DDL statements.
The user specified in the APPLY_USER parameter must have the necessary privileges to perform DML and DDL changes on the apply objects. */

CONN STRMADMIN/STRMADMIN@DBTARGET

show user
–EMP

BEGIN
DBMS_STREAMS_ADM.ADD_TABLE_RULES(
table_name => ‘SCOTT.EMP’,
streams_type => ‘APPLY’,
streams_name => ‘STREAM_APPLY_A1’,
queue_name => ‘STRMADMIN.STREAMS_QUEUE_Q’,
include_dml => true,
include_ddl => true,
source_database => ‘DBSOURCE’);
END;
/
–DEPT

BEGIN
DBMS_STREAMS_ADM.ADD_TABLE_RULES(
table_name => ‘SCOTT.DEPT’,
streams_type => ‘APPLY’,
streams_name => ‘STREAM_APPLY_A1’,
queue_name => ‘STRMADMIN.STREAMS_QUEUE_Q’,
include_dml => true,
include_ddl => true,
source_database => ‘DBSOURCE’);
END;
/
–EMPLOYEES

BEGIN
DBMS_STREAMS_ADM.ADD_TABLE_RULES(
table_name => ‘SCOTT.EMPLOYEES’,
streams_type => ‘APPLY’,
streams_name => ‘STREAM_APPLY_A1’,
queue_name => ‘STRMADMIN.STREAMS_QUEUE_Q’,
include_dml => true,
include_ddl => true,
source_database => ‘DBSOURCE’);
END;
/
-- Change the apply user, and set disable_on_error to 'n' so apply does not abort on every error:

BEGIN
DBMS_APPLY_ADM.ALTER_APPLY(
apply_name => ‘STREAM_APPLY_A1’,
apply_user => ‘SCOTT’);
END;
/
BEGIN
  DBMS_APPLY_ADM.SET_PARAMETER(
    apply_name => ‘STREAM_APPLY_A1’,
    parameter  => ‘disable_on_error’,
    value      => ‘n’);
END;
/
spool off
STEP 10: STREAMS OBJECT INSTANTIATION

@STEP10_EXP_IMP — details are given below.

SOURCE :

$exp USERNAME/PASSWORD parfile=exp_streams.par

vi exp_streams.par

file=exp_streams.dmp
log=exp_streams.log
object_consistent=y
tables=’EMP’,’DEPT’,’EMPLOYEES’
STATISTICS=NONE

SCP THE .DMP FILE TO TARGET AND IMPORT IT:

TARGET:

$imp USERNAME/PASSWORD FROMUSER=SCOTT TOUSER=SCOTT FILE=exp_streams.dmp log=imp_streams.log STREAMS_INSTANTIATION=Y IGNORE=Y COMMIT=Y

NOTE 1: Remove all the triggers which got imported, and revoke the CREATE TRIGGER privilege from the schema which is involved in Streams.
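As with the schema-level setup, instantiation can also be done per table by SCN instead of export/import. A sketch, using an SCN captured at the source with DBMS_FLASHBACK.GET_SYSTEM_CHANGE_NUMBER (repeat for DEPT and EMPLOYEES with the same SCN):

-- At the target:
BEGIN
  DBMS_APPLY_ADM.SET_TABLE_INSTANTIATION_SCN(
    source_object_name   => 'SCOTT.EMP',
    source_database_name => 'DBSOURCE',
    instantiation_scn    => &scn);
END;
/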

STEP 11: START THE APPLY PROCESS AT TARGET – DBTARGET

@STEP11_STRMADMIN_TARGET_START_APPLY.SQL

— CONTENTS OF .SQL FILES

SET ECHO ON

spool c:\STREAMS_LOG\step11_STRMADMIN_TARGET_APPLY_START.log

connect STRMADMIN@DBTARGET

show user

BEGIN
DBMS_APPLY_ADM.START_APPLY(
apply_name => ‘STREAM_APPLY_A1’);
END;
/

spool off

STEP 12 : START THE CAPTURE PROCESS AT SOURCE – DBSOURCE

@STEP12_STRMADMIN_SOURCE_START_CAPTURE.SQL

— CONTENTS OF .SQL FILES

SET ECHO ON

spool c:\STREAMS_LOG\step12_STRMADMIN_SOURCE_CAPTURE_START.log

connect STRMADMIN@DBSOURCE

show user

BEGIN
  DBMS_CAPTURE_ADM.START_CAPTURE(
    capture_name => ‘STREAM_CAPTURE_C1’);
END;
/
spool off

Thanks and Regards,

Satish.G.S

Posted in Oracle STREAMS | 1 Comment »

STEPS TO STOP STREAMS

Posted by ssgottik on 14/04/2011

STEPS TO STOP STREAMS:


APPLY_NAME = STREAM_APPLY_A1
CAPTURE_NAME = STREAM_CAPTURE_C1
PROPAGATION_NAME = STREAM_PROPAGATION_P1

Execute the steps below as the Streams administrator user only:

STEP 1. STOP THE APPLY PROCESS:
BEGIN
  DBMS_APPLY_ADM.STOP_APPLY(
    apply_name => ‘STREAM_APPLY_A1’);
END;
/
STEP 2. STOP THE PROPAGATION PROCESS:
BEGIN
  DBMS_PROPAGATION_ADM.STOP_PROPAGATION(
    propagation_name => ‘STREAM_PROPAGATION_P1’);
END;
/
STEP 3. STOP THE CAPTURE PROCESS

BEGIN
  DBMS_CAPTURE_ADM.STOP_CAPTURE(
    capture_name => ‘STREAM_CAPTURE_C1’);
END;
/
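To confirm everything is stopped, a quick sketch of status checks (run the apply check at the target, the propagation and capture checks at the source):

SELECT apply_name, status FROM dba_apply;
SELECT propagation_name, status FROM dba_propagation;
SELECT capture_name, status FROM dba_capture;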

Posted in Oracle STREAMS | Leave a Comment »

STEPS TO START ORACLE STREAMS

Posted by ssgottik on 14/04/2011

STEPS TO START STREAMS: Follow the steps in the same order.

 APPLY_NAME = STREAM_APPLY_A1

CAPTURE_NAME = STREAM_CAPTURE_C1

PROPAGATION_NAME = STREAM_PROPAGATION_P1

 STEP 1. START THE APPLY PROCESS:

BEGIN
  DBMS_APPLY_ADM.START_APPLY(
    apply_name => 'STREAM_APPLY_A1');
END;
/

STEP 2. START THE PROPAGATION PROCESS:

BEGIN
  DBMS_PROPAGATION_ADM.START_PROPAGATION(
    propagation_name => 'STREAM_PROPAGATION_P1');
END;
/

STEP 3. START THE CAPTURE PROCESS

BEGIN
  DBMS_CAPTURE_ADM.START_CAPTURE(
    capture_name => 'STREAM_CAPTURE_C1');
END;
/
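After starting, you can verify the processes are ENABLED with a quick status check (a sketch; run the apply check at the target, the capture and propagation checks at the source):

SELECT apply_name, status FROM dba_apply;
SELECT propagation_name, status FROM dba_propagation;
SELECT capture_name, status FROM dba_capture;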

Posted in Oracle STREAMS | Leave a Comment »