Thursday, April 2, 2026

Zero Data Loss Database Migration with RMAN Active Duplication

 

Introduction

When migrating Oracle databases, achieving zero data loss while minimizing downtime is critical. This article demonstrates how to use RMAN's DUPLICATE FROM ACTIVE DATABASE feature with the NOOPEN clause to achieve zero data loss migration while keeping the source database online.

The Challenge

Traditional RMAN active database duplication opens the duplicate database immediately after completion, which means:

  • Any transactions committed after duplication starts may be lost
  • For large databases (TB+), duplication can take hours
  • Data loss window = time from duplication start to completion

Question: Can we recover all transactions up to a planned cutover time, even if duplication takes hours?

Answer: Yes, using the NOOPEN clause.

The Solution: NOOPEN Clause

The key is to prevent RMAN from opening the duplicate database immediately. Instead, keep it in MOUNT mode and open it at your planned cutover time.

How It Works

  1. Duplication Phase (source stays online): RMAN duplicates datafiles and applies archive logs up to duplication completion
  2. Mount Mode: Database remains in MOUNT mode, not yet opened
  3. Cutover: Stop application, force log switch
  4. Roll Forward: Catalog and apply all remaining archive logs generated during duplication
  5. Open with RESETLOGS: Open database to complete recovery
  6. Result: Zero data loss

Implementation

Prerequisites

  • Source database in ARCHIVELOG mode
  • Network connectivity between source and target
  • Sufficient disk space on target server
  • Password file with matching SYS password on both servers
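Parts of this checklist can be scripted. A minimal preflight sketch for the disk-space item (the mount point and size below are hypothetical; the ARCHIVELOG and password-file checks still have to be verified against the databases themselves):

```shell
#!/bin/bash
# Preflight sketch: verify a target filesystem has enough free space
# for the duplicate. Mount point and required size are hypothetical.
has_free_gb() {
  local dir=$1 needed_gb=$2
  local free_kb
  # Column 4 of POSIX df output is available space in KB
  free_kb=$(df -Pk "$dir" | awk 'NR==2 { print $4 }')
  [ "$((free_kb / 1024 / 1024))" -ge "$needed_gb" ]
}

# Example (hypothetical mount point for a 2 TB duplicate):
#   has_free_gb /u02 2048 && echo "enough space" || echo "insufficient space"
```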

Step 1: Prepare Auxiliary Instance

#!/bin/bash
# prepare_auxiliary.sh
export ORACLE_HOME=/u01/app/oracle/product/19.0.0/dbhome_1
export ORACLE_SID=dupdb
export PATH=$ORACLE_HOME/bin:$PATH

# Create directories
mkdir -p /u02/oradata/dupdb
mkdir -p /u02/fra/dupdb
mkdir -p /u01/app/oracle/admin/dupdb/adump

# Create password file (match source password)
$ORACLE_HOME/bin/orapwd file=$ORACLE_HOME/dbs/orapwdupdb password=<YOUR_PASSWORD> force=yes

# Create init parameter file
cat > $ORACLE_HOME/dbs/initdupdb.ora <<'EOF'
db_name=dupdb
db_unique_name=dupdb
control_files='/u02/oradata/dupdb/control01.ctl','/u02/oradata/dupdb/control02.ctl'
db_block_size=8192
sga_target=16G
pga_aggregate_target=8G
processes=1000
db_recovery_file_dest=/u02/fra/dupdb
db_recovery_file_dest_size=500G
diagnostic_dest=/u01/app/oracle
enable_pluggable_database=true
EOF

# Start auxiliary instance
sqlplus / as sysdba <<SQL
startup nomount pfile='$ORACLE_HOME/dbs/initdupdb.ora'
exit
SQL

Step 2: Configure TNS (if on different servers)

On both source and target servers, add TNS entries:

# tnsnames.ora
SOURCEDB =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = source-server)(PORT = 1521))
    (CONNECT_DATA = (SERVICE_NAME = sourcedb))
  )

DUPDB =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = target-server)(PORT = 1521))
    (CONNECT_DATA = (SID = dupdb))
  )

On target server, add static listener entry for auxiliary instance:

# listener.ora
SID_LIST_LISTENER =
  (SID_LIST =
    (SID_DESC =
      (GLOBAL_DBNAME = dupdb)
      (ORACLE_HOME = /u01/app/oracle/product/19.0.0/dbhome_1)
      (SID_NAME = dupdb)
    )
  )

Reload listener: lsnrctl reload

Step 3: Execute Duplication with NOOPEN

#!/bin/bash
# rman_duplicate_noopen.sh
export ORACLE_HOME=/u01/app/oracle/product/19.0.0/dbhome_1
export ORACLE_SID=sourcedb  # Run from source server
export PATH=$ORACLE_HOME/bin:$PATH

LOG_FILE="/backup/rman_duplicate_$(date +%Y%m%d_%H%M%S).log"
echo "Starting duplication at: $(date)"
echo "Log file: $LOG_FILE"

$ORACLE_HOME/bin/rman msglog=$LOG_FILE <<'EOF'
connect target sys/<PASSWORD>@sourcedb
connect auxiliary sys/<PASSWORD>@dupdb

run {
  allocate channel c1 device type disk connect 'sys/<PASSWORD>@sourcedb';
  allocate channel c2 device type disk connect 'sys/<PASSWORD>@sourcedb';
  allocate auxiliary channel aux1 device type disk;
  allocate auxiliary channel aux2 device type disk;
  
  duplicate target database to dupdb from active database
  spfile
    set db_unique_name='dupdb'
    set db_name='dupdb'
    set control_files='/u02/oradata/dupdb/control01.ctl','/u02/oradata/dupdb/control02.ctl'
    set db_recovery_file_dest='/u02/fra/dupdb'
    set db_recovery_file_dest_size='500G'
    set diagnostic_dest='/u01/app/oracle'
    set db_file_name_convert='/u01/oradata/sourcedb','/u02/oradata/dupdb'
    set log_file_name_convert='/u01/oradata/sourcedb','/u02/oradata/dupdb'
  nofilenamecheck
  noopen;    # KEY: Keep database in MOUNT mode
}

exit
EOF

echo "Duplication completed at: $(date)"
echo "Database is in MOUNT mode, ready for cutover"

Important: NOFILENAMECHECK tells RMAN to skip checking whether source and duplicate datafile names collide. Use it when duplicating to a different server, where identical paths are safe; omit it for same-server duplication so RMAN can stop you before overwriting source datafiles. With db_file_name_convert set, the paths differ anyway, but the safety check is still worth keeping on a shared host.

Step 4: Cutover Process

At your planned cutover time:

#!/bin/bash
# cutover.sh

# 1. Stop application
echo "Stopping application..."
# <your application stop commands>

# 2. Force log switch on source to generate final archive logs
export ORACLE_SID=sourcedb
sqlplus / as sysdba <<EOF
alter system switch logfile;
alter system archive log current;
exit
EOF

echo "Waiting for archive logs to be written..."
sleep 5

# 3. On target server: Catalog and apply remaining archive logs
#    (run ONE of the two options below)
export ORACLE_SID=dupdb

# Option A: Archive logs accessible via NFS or shared storage
rman target / <<EOF
catalog start with '/mnt/source_archive/' noprompt;
recover database;
exit
EOF

# Option B: Archive logs copied to the target first (uncomment to use instead)
# rman target / <<EOF
# catalog start with '/target/archive/' noprompt;
# recover database;
# exit
# EOF

# Note: RECOVER DATABASE typically ends with an "unknown archived log"
# (RMAN-06054) message once all available logs are applied -- expected here.

# 4. Open duplicate database with RESETLOGS
sqlplus / as sysdba <<EOF
alter database open resetlogs;
exit
EOF

# 5. Open PDBs
sqlplus / as sysdba <<EOF
alter pluggable database all open;
exit
EOF

echo "Cutover completed at: $(date)"

Key Steps Explained:

  1. Force log switch: Ensures all transactions are in archive logs
  2. Catalog archive logs: Makes RMAN aware of new archive logs from source
  3. Recover database: Applies all archive logs up to the latest available
  4. Open resetlogs: Creates new incarnation and completes recovery

Step 5: Verification

-- On duplicate database
ALTER SESSION SET CONTAINER = pdb1;
SELECT COUNT(*) FROM critical_table;

-- Compare with source
-- Counts should match exactly

Key Considerations

Archive Log Management

For large databases with long duplication times:

  • Archive log retention: Ensure sufficient space for archive logs during duplication
  • Archive log destination: Must be accessible from duplicate database
  • Monitoring: Watch archive log generation rate
-- Check archive log space usage
SELECT SUM(blocks * block_size)/1024/1024/1024 AS gb_used
FROM v$archived_log
WHERE first_time > SYSDATE - 1;
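To budget archive log space for the roll-forward window, the generation rate from the query above can be projected over the expected duplication time. A rough sketch with hypothetical numbers (the 50% pad is an assumption, not an Oracle guideline):

```shell
#!/bin/bash
# Rough sizing sketch: project archive log volume over the duplication
# window, padded 50% for bursts. All inputs are hypothetical.
estimate_archive_gb() {
  local rate_gb_per_hour=$1 dup_hours=$2
  awk -v r="$rate_gb_per_hour" -v h="$dup_hours" \
    'BEGIN { printf "%.0f", r * h * 1.5 }'
}

# Example: 20 GB/hour generation rate, 6-hour duplication
estimate_archive_gb 20 6   # prints 180
```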

Network Bandwidth

For different servers:

  • Bandwidth requirement: roughly 100-500 MB/s sustained for efficient transfer
  • Parallelism: consider SECTION SIZE to split large datafiles across channels
  • Compression: backup set-based active duplication (12c and later) can use USING COMPRESSED BACKUPSET to reduce network traffic
  • Monitoring: track network utilization during duplication
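As a planning aid, the transfer window can be estimated from database size and sustained throughput. A back-of-envelope sketch (the numbers are hypothetical and ignore protocol overhead):

```shell
#!/bin/bash
# Back-of-envelope transfer time for active duplication:
# database size in GB, sustained throughput in MB/s. Hypothetical inputs.
transfer_minutes() {
  local size_gb=$1 mbps=$2
  awk -v g="$size_gb" -v m="$mbps" \
    'BEGIN { printf "%.0f", g * 1024 / m / 60 }'
}

# Example: a 2 TB database over a link sustaining 200 MB/s
transfer_minutes 2048 200   # prints 175 (about 3 hours)
```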

Downtime Estimation

Actual downtime = Time to:

  1. Stop application (seconds)
  2. Force log switch (seconds)
  3. Roll forward: Copy, catalog, and apply remaining archive logs (60-300 seconds)
  4. Open database with RESETLOGS (1-3 minutes for TB databases)
  5. Verify data (1-2 minutes)

Typical downtime: 3-10 minutes regardless of database size

Data Loss Risk

Zero data loss IF:

  • Archive logs are retained and accessible
  • No archive log gaps
  • Duplicate database can access source archive log location

Verify before cutover:

-- List archive logs still available per thread; a break in the
-- sequence# progression indicates a gap
SELECT thread#, sequence#
FROM v$archived_log
WHERE deleted = 'NO'
ORDER BY thread#, sequence#;
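The listing from the query above can be spooled to a flat file and scanned for holes automatically. A small sketch, assuming lines of `<thread#> <sequence#>` (the file name is hypothetical):

```shell
#!/bin/bash
# Detect missing sequence numbers per thread in a spooled list of
# "<thread#> <sequence#>" lines. Prints each hole found.
find_gaps() {
  sort -n -k1,1 -k2,2 "$1" | awk '
    prev_thread == $1 && $2 > prev_seq + 1 {
      printf "thread %d: missing %d-%d\n", $1, prev_seq + 1, $2 - 1
    }
    { prev_thread = $1; prev_seq = $2 }'
}

# Example with a hypothetical spool file:
printf '1 100\n1 101\n1 104\n' > /tmp/arch_seq.txt
find_gaps /tmp/arch_seq.txt   # prints: thread 1: missing 102-103
```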

Different Server Considerations

Network Configuration

  1. Firewall rules: Open port 1521 between servers
  2. DNS/hosts: Ensure hostname resolution
  3. TNS connectivity: Test with tnsping before duplication

Archive Log Access

Option 1: NFS Mount (Recommended)

# On target server, mount source archive log location
mount source-server:/archive /mnt/source_archive

# In RMAN, Oracle will automatically find logs

Option 2: Copy Archive Logs

# After duplication, before opening
scp source-server:/archive/* /target/archive/

# Then open database

Option 3: Standby Redo Logs (Advanced)

Configure standby redo logs for real-time log shipping during duplication.

File Path Mapping

Ensure correct path conversion:

-- Check source paths
SELECT name FROM v$datafile;
SELECT member FROM v$logfile;

-- Use in duplicate command
set db_file_name_convert='/source/path','/target/path'
set log_file_name_convert='/source/path','/target/path'
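The convert parameters are simple leading-prefix substitutions, so the mapping can be previewed in the shell before running the duplicate. A sketch (paths below are hypothetical; requires bash for the pattern substitution):

```shell
#!/bin/bash
# Preview what db_file_name_convert would do: replace a leading
# source prefix with the target prefix. Paths are hypothetical.
convert_path() {
  local src_prefix=$1 dst_prefix=$2 path=$3
  # ${var/#pattern/replacement} substitutes only at the start of the string
  echo "${path/#$src_prefix/$dst_prefix}"
}

convert_path /u01/oradata/sourcedb /u02/oradata/dupdb \
  /u01/oradata/sourcedb/system01.dbf
# prints /u02/oradata/dupdb/system01.dbf
```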

Finding Recovery SCN

After duplication completes, verify the recovery point:

-- Checkpoint SCN (recovery point)
SELECT checkpoint_change#, 
       TO_CHAR(checkpoint_time, 'YYYY-MM-DD HH24:MI:SS') as checkpoint_time
FROM v$database;

-- Datafile checkpoint SCN
SELECT file#, name, checkpoint_change#,
       TO_CHAR(checkpoint_time, 'YYYY-MM-DD HH24:MI:SS') as checkpoint_time
FROM v$datafile 
ORDER BY file#;

-- Convert SCN to timestamp
SELECT current_scn,
       TO_CHAR(scn_to_timestamp(current_scn), 'YYYY-MM-DD HH24:MI:SS.FF6') as scn_timestamp
FROM v$database;

Monitoring Script

#!/bin/bash
# monitor_duplication.sh
# Usage: monitor_duplication.sh /backup/rman_duplicate_<timestamp>.log
LOG_FILE="$1"

while true; do
  echo "=== $(date) ==="
  
  # Check RMAN progress
  tail -20 $LOG_FILE | grep -E 'channel|datafile|archive'
  
  # Check archive log generation
  sqlplus -s / as sysdba <<EOF
  SELECT COUNT(*) as "Archive logs last hour"
  FROM v\$archived_log
  WHERE first_time > SYSDATE - 1/24;
  exit
EOF
  
  sleep 300  # Check every 5 minutes
done

Troubleshooting

Issue: ORA-01017 during duplication

Cause: Password file mismatch
Solution: Recreate password file on auxiliary with same password as source

Issue: Archive logs not found during OPEN RESETLOGS

Cause: Archive log location not accessible
Solution:

  • Verify archive log destination parameter
  • Check file permissions
  • For different servers, ensure NFS mount or copy logs

Issue: Database opens but PDB not recovered

Cause: PDB archive logs not applied
Solution:

ALTER PLUGGABLE DATABASE pdb1 OPEN;
-- If fails, check alert log for specific archive log needed

Comparison with Alternatives

Method                      Downtime    Data Loss  Complexity  Cost
--------------------------  ----------  ---------  ----------  -------
DUPLICATE + NOOPEN          3-10 min    0          Low         EE/SE2
Data Guard Switchover       <1 min      0          Medium      EE
Export/Import               Hours       N/A        Low         EE/SE2
Transportable Tablespaces   10-30 min   N/A        Medium      EE
GoldenGate                  <1 min      0          High        OGG

Conclusion

RMAN's DUPLICATE FROM ACTIVE DATABASE with NOOPEN provides a simple, effective solution for zero data loss migrations:

✓ Zero data loss - All committed transactions recovered
✓ Minimal downtime - 3-10 minutes regardless of database size
✓ Source stays online - No impact during duplication
✓ No additional licensing - Uses standard RMAN features
✓ Works across servers - With proper network configuration

The key is the NOOPEN clause, which keeps the database in MOUNT mode until your planned cutover time, allowing Oracle to automatically apply all remaining archive logs when you open it.

References

  • Oracle Database Backup and Recovery User's Guide
  • MOS Note 452868.1: How to Duplicate a Database Using RMAN
  • MOS Note 1526162.1: RMAN DUPLICATE FROM ACTIVE DATABASE

Tested on Oracle Database 19c and 26ai. Procedure applies to Oracle 11g and above.