Thursday, October 9, 2025

Oracle Database Auditing: Standard and Unified Auditing Working Together in Mixed Mode

 

Introduction

Oracle Database 19c supports both traditional standard auditing and the newer unified auditing framework. When running in mixed mode (the default configuration), both auditing mechanisms can operate simultaneously, providing comprehensive audit coverage during migration periods. This blog demonstrates how both auditing types work together and showcases the performance benefits of unified auditing's partitioned architecture.

Understanding Mixed Mode Auditing

In mixed mode, Oracle Database allows:

  • Standard Auditing: Traditional audit trail stored in SYS.AUD$ and accessed via DBA_AUDIT_TRAIL
  • Unified Auditing: Modern audit trail stored in AUDSYS.AUD$UNIFIED and accessed via UNIFIED_AUDIT_TRAIL
  • Simultaneous Operation: Both mechanisms can capture the same events
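
A quick way to confirm the auditing mode (an extra check, not part of the original output) is to query V$OPTION: the Unified Auditing option shows FALSE when the database runs in mixed mode and TRUE only when pure unified auditing has been enabled.

-- Check whether pure unified auditing is enabled (FALSE = mixed mode)
SELECT value FROM v$option WHERE parameter = 'Unified Auditing';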

Environment Setup

First, let's verify the audit configuration and create a test user:

-- Check audit trail parameter
SHOW PARAMETER audit_trail;

NAME        TYPE   VALUE        
----------- ------ ------------ 
audit_trail string DB, EXTENDED

The DB, EXTENDED value indicates that standard auditing is enabled and that extended information (the SQL text and bind values of audited statements) is written to the database audit trail.
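
If the parameter is not already set this way, it can be changed as follows; AUDIT_TRAIL is a static parameter, so the new value only takes effect after an instance restart:

-- Enable standard auditing with extended detail (requires restart)
ALTER SYSTEM SET audit_trail = DB, EXTENDED SCOPE = SPFILE;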

-- Create test user
CREATE USER donghua IDENTIFIED BY ComplexPassword;
GRANT DBA TO donghua;

Configuring Standard Auditing

Standard auditing uses the traditional AUDIT command to track database operations:

-- Enable standard auditing for table operations
AUDIT SELECT TABLE, INSERT TABLE, UPDATE TABLE, DELETE TABLE BY donghua;

This captures all SELECT, INSERT, UPDATE, and DELETE operations on tables by user DONGHUA.
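
The statement audit options now in effect for the user can be verified from the data dictionary (a verification step added here for completeness):

-- Confirm the standard audit options enabled for DONGHUA
SELECT user_name, audit_option, success, failure
FROM dba_stmt_audit_opts
WHERE user_name = 'DONGHUA';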

Configuring Unified Auditing

Unified auditing uses policy-based configuration for more flexible and centralized audit management:

-- Create unified audit policy
CREATE AUDIT POLICY donghua_table_audit
ACTIONS SELECT, INSERT, UPDATE, DELETE;

-- Apply policy to user
AUDIT POLICY donghua_table_audit BY donghua;
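
To confirm that the policy is active for the user, query the enabled-policies view (the column names below are those documented for 19c and may differ in older releases):

-- Confirm the unified audit policy is enabled for DONGHUA
SELECT policy_name, entity_name, entity_type, enabled_option
FROM audit_unified_enabled_policies
WHERE policy_name = 'DONGHUA_TABLE_AUDIT';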

Testing: Both Auditing Mechanisms in Action

Test 1: Operations on Table T1 (Before Unified Policy)

-- Connect as donghua user
CREATE TABLE t1(id INT);
INSERT INTO t1 VALUES(1);
SELECT * FROM t1;
DROP TABLE t1;

Standard Audit Results:

SELECT username, action_name, obj_name, timestamp
FROM dba_audit_trail
WHERE username = 'DONGHUA' AND obj_name = 'T1'
ORDER BY timestamp DESC;

USERNAME    ACTION_NAME    OBJ_NAME    TIMESTAMP 
----------- -------------- ----------- ------------ 
DONGHUA     SELECT         T1          09-OCT-25    
DONGHUA     INSERT         T1          09-OCT-25

At this point, only standard auditing captured the operations since the unified audit policy was not yet created.

Test 2: Operations on Table T2 (After Unified Policy)

-- Connect as donghua user
CREATE TABLE t2 (id INT);
INSERT INTO t2 VALUES (1);
COMMIT;
SELECT * FROM t2;
DROP TABLE t2;

Unified Audit Results:

SELECT unified_audit_policies, dbusername, action_name, object_name, event_timestamp
FROM unified_audit_trail
WHERE dbusername = 'DONGHUA'
AND event_timestamp_utc > TO_TIMESTAMP('2025-10-01','YYYY-MM-DD')
AND object_name = 'T2'
ORDER BY event_timestamp DESC;

UNIFIED_AUDIT_POLICIES    DBUSERNAME    ACTION_NAME    OBJECT_NAME    EVENT_TIMESTAMP 
------------------------- ------------- -------------- --------------- ---------------------------------- 
DONGHUA_TABLE_AUDIT       DONGHUA       SELECT         T2              09-OCT-25 01.37.44.549052000 PM    
DONGHUA_TABLE_AUDIT       DONGHUA       INSERT         T2              09-OCT-25 01.37.39.747061000 PM

Standard Audit Results (Same Operations):

SELECT username, action_name, obj_name, timestamp
FROM dba_audit_trail
WHERE username = 'DONGHUA' AND obj_name IN ('T1','T2')
ORDER BY timestamp DESC;

USERNAME    ACTION_NAME    OBJ_NAME    TIMESTAMP 
----------- -------------- ----------- ------------ 
DONGHUA     SELECT         T2          09-OCT-25    
DONGHUA     INSERT         T2          09-OCT-25    
DONGHUA     SELECT         T1          09-OCT-25    
DONGHUA     INSERT         T1          09-OCT-25

Key Observation: Dual Capture in Mixed Mode

Notice that operations on T2 appear in both audit trails:

  • Unified Audit Trail: Captured by the DONGHUA_TABLE_AUDIT policy
  • Standard Audit Trail: Captured by the traditional AUDIT statement

This demonstrates mixed mode operation where both mechanisms work simultaneously, ensuring no audit gaps during migration.
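
One way to see the dual capture side by side (an illustrative query, not part of the original test) is to union the two trails for the same object:

-- Show the same T2 operations as recorded by each audit trail
SELECT 'STANDARD' AS trail, username AS db_user, action_name, obj_name AS object_name
FROM dba_audit_trail
WHERE username = 'DONGHUA' AND obj_name = 'T2'
UNION ALL
SELECT 'UNIFIED', dbusername, action_name, object_name
FROM unified_audit_trail
WHERE dbusername = 'DONGHUA' AND object_name = 'T2'
ORDER BY trail, action_name;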

Unified Auditing Performance: Partitioning and Partition Pruning

One of the key advantages of unified auditing is its use of range partitioning on the EVENT_TIMESTAMP column, which enables efficient data lifecycle management and better query performance.

Partition Structure

The AUDSYS.AUD$UNIFIED table is partitioned by range on EVENT_TIMESTAMP with monthly intervals:

PARTITION BY RANGE ("EVENT_TIMESTAMP") INTERVAL (INTERVAL '1' MONTH)

This means:

  • Each month's audit data is stored in a separate partition
  • Old partitions can be easily archived or dropped
  • Queries with time-based predicates benefit from partition pruning

The EVENT_TIMESTAMP_UTC column of the AUDSYS.UNIFIED_AUDIT_TRAIL view maps to the EVENT_TIMESTAMP column of the underlying AUDSYS.AUD$UNIFIED table, which is why predicates on EVENT_TIMESTAMP_UTC can drive partition pruning.
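
The monthly partitions, and the partition positions that the execution plans below refer to, can be listed from the data dictionary; a simple check:

-- List the partitions of the unified audit base table
SELECT partition_name, partition_position, high_value
FROM dba_tab_partitions
WHERE table_owner = 'AUDSYS' AND table_name = 'AUD$UNIFIED'
ORDER BY partition_position;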

Demonstrating Partition Pruning

Let's examine the execution plan for a time-filtered query:

EXPLAIN PLAN FOR
SELECT unified_audit_policies, dbusername, action_name, object_name, event_timestamp
FROM unified_audit_trail
WHERE dbusername = 'DONGHUA'
AND event_timestamp_utc > TO_TIMESTAMP('2025-10-01','YYYY-MM-DD')
AND object_name IN ('T1','T2')
ORDER BY event_timestamp DESC;

SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);

Key Execution Plan Details:

-----------------------------------------------------------------------------------------------------------------------------    
| Id  | Operation                       | Name                      | Rows  | Bytes | Cost (%CPU)| Time     | Pstart| Pstop |    
-----------------------------------------------------------------------------------------------------------------------------    
...
|  17 |         PARTITION RANGE ITERATOR|                           |     1 |    82 | 45551   (1)| 00:00:02 |   137 |1048575|    
|* 18 |          TABLE ACCESS FULL      | AUD$UNIFIED               |     1 |    82 | 45551   (1)| 00:00:02 |   137 |1048575|    
...

Predicate Information (identified by operation id):
---------------------------------------------------
  18 - filter("USERID"='DONGHUA' AND ("OBJ_NAME"='T1' OR "OBJ_NAME"='T2') AND "EVENT_TIMESTAMP">TIMESTAMP'
              2025-10-01 00:00:00.000000000')

Partition Pruning in Action:

  • Operation: PARTITION RANGE ITERATOR (Id 17)
  • Pstart: 137 - Starting partition number, resolved at parse time
  • Pstop: 1048575 - The maximum partition number for an interval-partitioned table; only the relevant existing partitions are scanned

The optimizer uses the event_timestamp_utc > TO_TIMESTAMP('2025-10-01','YYYY-MM-DD') predicate to:

  1. Identify which partitions contain data from October 2025 onwards
  2. Skip all partitions before October 2025
  3. Only scan relevant monthly partitions (starting from partition 137)

This dramatically reduces I/O and improves query performance, especially in databases with years of audit history.

Performance Impact: Constant vs. Dynamic Predicates

The effectiveness of partition pruning depends heavily on whether the optimizer can determine partition boundaries at parse time. Let's compare two queries:

Query 1: Using Constant Timestamp (Optimal)

SET TIMING ON;

SELECT unified_audit_policies, dbusername, action_name, object_name, event_timestamp
FROM unified_audit_trail
WHERE dbusername = 'DONGHUA'
AND event_timestamp_utc > TO_TIMESTAMP('2025-10-01','YYYY-MM-DD')
AND object_name IN ('T1','T2')
ORDER BY event_timestamp DESC;

Elapsed: 00:00:00.293

Query 2: Using Dynamic Expression (Suboptimal)

SELECT unified_audit_policies, dbusername, action_name, object_name, event_timestamp
FROM unified_audit_trail
WHERE dbusername = 'DONGHUA'
AND event_timestamp_utc > SYSTIMESTAMP - 1
AND object_name IN ('T1','T2')
ORDER BY event_timestamp DESC;

Elapsed: 00:00:09.181

Performance Difference: 31x slower (9.181s vs 0.293s)

Execution Plan Comparison

Constant Predicate Plan:

| Id  | Operation                       | Pstart| Pstop |  Cost  |
------------------------------------------------------------------
|  17 | PARTITION RANGE ITERATOR        |   137 |1048575| 45551  |
|  18 | TABLE ACCESS FULL (AUD$UNIFIED) |   137 |1048575| 45551  |

  • Pstart: 137 - Specific starting partition identified at parse time
  • Cost: 45,551 - Lower cost due to fewer partitions scanned

Dynamic Predicate Plan:

| Id  | Operation                       | Pstart| Pstop |  Cost  |
------------------------------------------------------------------
|  17 | PARTITION RANGE ITERATOR        |   KEY |1048575| 492K   |
|  18 | TABLE ACCESS FULL (AUD$UNIFIED) |   KEY |1048575| 492K   |

  • Pstart: KEY - Partition determined at runtime (cannot prune at parse time)
  • Cost: 492,000 - 10.8x higher cost estimate
  • Must evaluate SYSTIMESTAMP - 1 for each execution

Key Takeaway

For optimal partition pruning performance:

  • ✅ Use constant values, e.g. TO_TIMESTAMP('2025-10-01','YYYY-MM-DD')
  • ❌ Avoid dynamic expressions such as SYSTIMESTAMP - 1 or SYSDATE - 7

When the optimizer sees a constant, it can:

  1. Determine exact partition boundaries at parse time
  2. Generate a more efficient execution plan
  3. Skip irrelevant partitions entirely

When using dynamic expressions, the optimizer must:

  1. Defer partition determination to runtime
  2. Potentially scan more partitions
  3. Re-evaluate the expression for each query execution
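
If the lower bound really must be relative to the current time, one workaround (a sketch using a SQL*Plus substitution variable, not taken from the original tests) is to resolve the cutoff to a literal first, so the optimizer still sees a constant at parse time:

-- Resolve the cutoff to a constant before running the audit query
COLUMN cutoff NEW_VALUE cutoff_ts
SELECT TO_CHAR(SYSTIMESTAMP - 1, 'YYYY-MM-DD HH24:MI:SS') AS cutoff FROM dual;

SELECT unified_audit_policies, dbusername, action_name, object_name, event_timestamp
FROM unified_audit_trail
WHERE dbusername = 'DONGHUA'
AND event_timestamp_utc > TO_TIMESTAMP('&cutoff_ts', 'YYYY-MM-DD HH24:MI:SS')
AND object_name IN ('T1','T2')
ORDER BY event_timestamp DESC;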

Conclusion

Oracle's mixed mode auditing provides a seamless transition path from standard to unified auditing. Both mechanisms can coexist, ensuring comprehensive audit coverage. The unified auditing framework's partitioned architecture delivers significant performance advantages through partition pruning, making it the recommended approach for modern Oracle databases.

When querying audit data with time-based filters using event_timestamp_utc, the optimizer automatically leverages partition pruning to scan only relevant monthly partitions, resulting in faster queries and reduced resource consumption—a critical feature for databases with extensive audit histories.

Sunday, September 14, 2025

Autovacuum does NOT support parallel index vacuuming

 

Overview

Autovacuum does NOT support parallel index vacuuming, unlike manual VACUUM operations. This is a fundamental architectural limitation that affects performance characteristics of background maintenance operations.

Key Limitation

Autovacuum workers always process indexes sequentially, regardless of:

  • Number of indexes on the table
  • Index size or complexity
  • Available system resources
  • max_parallel_workers or related settings
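
Raising the parallel-worker settings does not change this behavior; they only affect manual VACUUM (and other maintenance commands). The relevant settings can be inspected like this:

-- These settings do not make autovacuum's index phase parallel
SHOW max_parallel_maintenance_workers;  -- caps workers for manual VACUUM (PARALLEL) and CREATE INDEX
SHOW autovacuum_max_workers;            -- number of concurrent autovacuum workers, each handling one table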

Source Code Evidence

Autovacuum Parameter Setup

In src/backend/postmaster/autovacuum.c, the autovacuum_do_vac_analyze() function explicitly sets parallel workers to zero:

static void
autovacuum_do_vac_analyze(autovac_table *tab, BufferAccessStrategy bstrategy)
{
    VacuumParams params;
    
    /* Initialize vacuum parameters */
    memset(&params, 0, sizeof(params));
    
    /* Set various vacuum options */
    params.options = VACOPT_SKIPTOAST | 
                    (dovacuum ? VACOPT_VACUUM : 0) |
                    (doanalyze ? VACOPT_ANALYZE : 0);
    
    /* CRITICAL: Autovacuum never uses parallel workers */
    params.nworkers = 0;  /* No parallel workers for autovacuum */
    
    /* Set other parameters... */
    params.freeze_min_age = freeze_min_age;
    params.freeze_table_age = freeze_table_age;
    params.multixact_freeze_min_age = multixact_freeze_min_age;
    params.multixact_freeze_table_age = multixact_freeze_table_age;
    
    /* Call vacuum with sequential-only parameters */
    vacuum(NIL, &params, bstrategy, vac_context, true);
}

Manual VACUUM vs Autovacuum Comparison

Manual VACUUM (Supports Parallel)

/* In ExecVacuum() - src/backend/commands/vacuum.c */
void
ExecVacuum(ParseState *pstate, VacuumStmt *vacstmt, bool isTopLevel)
{
    VacuumParams params;
    
    /* Parse PARALLEL option from user command */
    if (vacstmt->options & VACOPT_PARALLEL)
    {
        /* User can specify: VACUUM (PARALLEL 4) table_name; */
        params.nworkers = vacstmt->parallel_workers;
    }
    else
    {
        params.nworkers = 0;  /* Default: no parallel */
    }
    
    /* Manual vacuum can use parallel workers */
    vacuum(vacstmt->rels, &params, bstrategy, vac_context, isTopLevel);
}

Autovacuum (Always Sequential)

/* In autovacuum_do_vac_analyze() - src/backend/postmaster/autovacuum.c */
static void
autovacuum_do_vac_analyze(autovac_table *tab, BufferAccessStrategy bstrategy)
{
    VacuumParams params;
    
    /* Autovacuum NEVER supports parallel workers */
    params.nworkers = 0;  /* Hardcoded to 0 - no user control */
    
    /* No way to override this in autovacuum */
    vacuum(NIL, &params, bstrategy, vac_context, true);
}

Index Vacuuming Process

Sequential Index Processing in Autovacuum

When autovacuum processes indexes, it uses the sequential path in lazy_vacuum_all_indexes():

/* In src/backend/access/heap/vacuumlazy.c */
static void
lazy_vacuum_all_indexes(LVRelState *vacrel)
{
    int nindexes = vacrel->nindexes;
    Relation *indrels = vacrel->indrels;
    
    /* Check if parallel vacuum is possible */
    if (vacrel->params->nworkers > 0 && nindexes > 1)
    {
        /* PARALLEL PATH - Only for manual VACUUM */
        lazy_parallel_vacuum_indexes(vacrel);
    }
    else
    {
        /* SEQUENTIAL PATH - Always used by autovacuum */
        for (int i = 0; i < nindexes; i++)
        {
            lazy_vacuum_one_index(indrels[i], vacrel->stats,
                                 vacrel->dead_items, vacrel->old_live_tuples);
        }
    }
}

Since vacrel->params->nworkers is always 0 for autovacuum, it always takes the sequential path.

Individual Index Vacuum Function

/* Sequential index vacuum - used by autovacuum */
static void
lazy_vacuum_one_index(Relation indrel, LVRelStats *stats,
                     TidStore *dead_items, double old_live_tuples)
{
    IndexBulkDeleteResult *stats_res;
    
    /* Single-threaded index cleanup */
    stats_res = index_bulk_delete(indrel, lazy_tid_reaped,
                                 (void *) dead_items,
                                 stats->num_dead_tuples,
                                 old_live_tuples);
    
    /* Update statistics */
    if (stats_res)
    {
        stats->pages_removed += stats_res->pages_removed;
        stats->tuples_removed += stats_res->tuples_removed;
        pfree(stats_res);
    }
}

Performance Implications

Tables with Many Indexes

For tables with multiple large indexes, autovacuum performance is significantly impacted:

-- Example: Table with 8 indexes
CREATE TABLE large_table (
    id BIGINT PRIMARY KEY,
    col1 INTEGER,
    col2 TEXT,
    col3 TIMESTAMP,
    col4 JSONB,
    col5 NUMERIC,
    col6 UUID,
    col7 INET
);

CREATE INDEX idx1 ON large_table (col1);
CREATE INDEX idx2 ON large_table (col2);
CREATE INDEX idx3 ON large_table (col3);
CREATE INDEX idx4 ON large_table USING GIN (col4);
CREATE INDEX idx5 ON large_table (col5);
CREATE INDEX idx6 ON large_table (col6);
CREATE INDEX idx7 ON large_table (col7);

Autovacuum behavior:

  • Processes all 8 indexes sequentially
  • Total time = sum of individual index vacuum times
  • Cannot utilize multiple CPU cores for index cleanup

Manual VACUUM behavior:

-- Can process indexes in parallel
VACUUM (PARALLEL 4) large_table;
  • Can process up to 4 indexes simultaneously
  • Total time ≈ max(individual index vacuum times)
  • Utilizes multiple CPU cores

Resource Utilization Differences

Autovacuum Resource Usage

/* Autovacuum characteristics */
- Single worker process per table
- Sequential index processing
- Lower CPU utilization
- Longer vacuum duration
- Designed for minimal impact on workload

Manual Parallel VACUUM Resource Usage

/* Manual parallel vacuum characteristics */
- Leader process + multiple worker processes
- Parallel index processing
- Higher CPU utilization
- Shorter vacuum duration
- Can impact concurrent workload more significantly
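
The difference is easy to observe in pg_stat_activity: autovacuum shows a single "autovacuum worker" backend per table, while a manual parallel vacuum shows the client backend plus additional "parallel worker" backends during the index phase. A query such as the following (an observation aid, not from the original text) can be run while a vacuum is in progress:

-- Watch vacuum-related backends while a vacuum is running
SELECT pid, backend_type, state, query
FROM pg_stat_activity
WHERE backend_type IN ('autovacuum worker', 'parallel worker')
   OR query ILIKE 'vacuum%';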

Why Autovacuum Doesn't Support Parallel Processing

1. Background Process Design Philosophy

/*
 * Autovacuum is designed to be minimally intrusive:
 * - Runs in background with low priority
 * - Uses cost-based delay to throttle I/O
 * - Avoids competing for resources with user queries
 * - Parallel workers would increase resource contention
 */

2. Complexity Management

/*
 * Parallel worker management adds complexity:
 * - Dynamic shared memory allocation
 * - Inter-process communication
 * - Error handling across multiple processes
 * - Resource cleanup on worker failure
 */

3. Cost-Based Delay Coordination

In src/backend/postmaster/autovacuum.c:

/*
 * Cost-based throttling is balanced across workers:
 * - Autovacuum distributes the effective cost limit (autovacuum_vacuum_cost_limit)
 *   among all active autovacuum workers, not the delay itself
 * - Parallel workers inside a single vacuum would complicate this accounting
 * - Current design: one worker per table, simple cost accounting
 */
static void
AutoVacuumUpdateDelay(void)
{
    /* Rebalance the cost budget across all autovacuum workers (simplified) */
    int nworkers_for_balance = pg_atomic_read_u32(&AutoVacuumShmem->av_nworkersForBalance);

    if (nworkers_for_balance > 0)
    {
        /* Give each worker an equal share of the configured cost limit */
        VacuumCostLimit = Max(autovacuum_vac_cost_limit / nworkers_for_balance, 1);
    }
}
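
The throttling parameters this balancing applies to are ordinary settings and can be checked directly; a value of -1 for autovacuum_vacuum_cost_limit means it falls back to vacuum_cost_limit:

-- Cost-based delay settings that autovacuum balances across its workers
SHOW autovacuum_vacuum_cost_delay;
SHOW autovacuum_vacuum_cost_limit;  -- -1 means: use vacuum_cost_limit
SHOW vacuum_cost_limit;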

4. Historical Design

/*
 * Timeline of features:
 * - Autovacuum: PostgreSQL 8.1 (2005)
 * - Parallel vacuum: PostgreSQL 13 (2020)
 * 
 * Autovacuum predates parallel vacuum by 15 years
 * Retrofitting parallel support would require significant changes
 */

Workarounds and Alternatives

Manual Parallel VACUUM for Large Tables

-- Identify tables that would benefit from parallel vacuum
SELECT 
    schemaname,
    relname,
    n_dead_tup,
    last_autovacuum,
    pg_size_pretty(pg_total_relation_size(relid)) AS size
FROM pg_stat_user_tables 
WHERE n_dead_tup > 10000
ORDER BY n_dead_tup DESC;

-- Run manual parallel vacuum during maintenance windows
VACUUM (PARALLEL 4, VERBOSE) large_table;