When using Oracle GoldenGate in Integrated Extract mode, one of the most overlooked but essential steps is ensuring that the LogMiner dictionary remains fresh and available in the redo stream. Although it is neither common practice nor generally advised, there are times when you must alter the Extract to start from a specific timestamp or System Change Number (SCN). In such cases, the LogMiner dictionary is indispensable.

“When using Integrated Extract, Oracle GoldenGate depends on the LogMiner dictionary for information about the database schema at the time of capture. If you alter the Extract to a specific SCN or timestamp, a LogMiner dictionary must be available at or before that SCN in the redo log stream. Without it, LogMiner cannot interpret the changes, and Extract will fail.”

What Is the LogMiner Dictionary?

The LogMiner dictionary is a snapshot of the data dictionary metadata that Oracle GoldenGate Integrated Extract uses to correctly interpret redo entries. It includes:

  • Table definitions
  • Column data types
  • Primary and unique keys
  • Supplemental logging information

GoldenGate’s Integrated Extract relies on this dictionary to translate raw redo entries into logical changes (INSERTs, UPDATEs, DELETEs) for replication.

What Happens If the Dictionary Is Missing or Outdated?

If the dictionary:

  • isn’t present in the redo stream,
  • was purged from archived logs,
  • or doesn’t include recent schema changes (e.g., table/column additions),

…then GoldenGate may fail to start, or worse, silently miss or misinterpret data changes.

OGG-01296: “Error retrieving dictionary from redo logs.”

“Occurs when the LogMiner dictionary does not exist or is incomplete for the SCN at which Extract is starting. You must rebuild the dictionary using DBMS_CAPTURE_ADM.BUILD and ensure it is written to the redo logs.”
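If you hit this error, a one-off rebuild from SQL*Plus resolves it. Here’s a minimal sketch, run as SYSDBA from CDB$ROOT; DBMS_LOGMNR_D.BUILD with STORE_IN_REDO_LOGS performs the same dictionary-to-redo build as the DBMS_CAPTURE_ADM.BUILD call referenced above:

sqlplus / as sysdba

-- Write a fresh copy of the data dictionary into the redo stream
EXEC DBMS_LOGMNR_D.BUILD(options => DBMS_LOGMNR_D.STORE_IN_REDO_LOGS);

You can confirm where the build landed using the v$archived_log query shown later under “Verifying It Works”.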

Solution: Periodically Rebuild the Dictionary

To maintain healthy and resilient replication, Oracle recommends periodically rebuilding the LogMiner dictionary into the redo stream, especially in environments with frequent schema changes.

Automating Daily Dictionary Builds

You can automate this process using Oracle Scheduler (DBMS_SCHEDULER).

Here’s a simple job that executes DBMS_LOGMNR_D.BUILD once a day from the CDB:

sqlplus / as sysdba

BEGIN
  DBMS_SCHEDULER.CREATE_JOB (
    job_name         => 'BUILD_LOGMINER_DICT_DAILY',
    job_type         => 'PLSQL_BLOCK',
    job_action       => '
      BEGIN
        DBMS_LOGMNR_D.BUILD(
          options => DBMS_LOGMNR_D.STORE_IN_REDO_LOGS
        );
      END;',
    start_date       => SYSTIMESTAMP,
    repeat_interval  => 'FREQ=DAILY; INTERVAL=1',
    enabled          => TRUE,
    comments         => 'Build LogMiner dictionary daily for Integrated Extract');
END;
/

Important:

  • The dictionary is written to the redo log stream, not the file system.
  • Adjust the interval to your needs, e.g., hourly (repeat_interval => 'FREQ=HOURLY; INTERVAL=1').
  • Run this from the CDB$ROOT container (not a PDB).
  • Ensure the executing user has EXECUTE privileges on DBMS_LOGMNR_D (see the sketch after this list).
  • Make sure supplemental logging is enabled: ALTER DATABASE ADD SUPPLEMENTAL LOG DATA;
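To cover the last two bullets, here’s a hedged sketch; the grantee C##GGADMIN is an assumed GoldenGate administrator user, so adjust to your environment:

-- Allow the job owner to run dictionary builds (C##GGADMIN is hypothetical)
GRANT EXECUTE ON SYS.DBMS_LOGMNR_D TO C##GGADMIN;

-- Enable minimal supplemental logging if it is not already on
ALTER DATABASE ADD SUPPLEMENTAL LOG DATA;

-- Verify: should return YES
SELECT supplemental_log_data_min FROM v$database;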

Verifying It Works

You can confirm the dictionary was written into redo logs:

SELECT
  sequence#,
  first_change#,
  TO_CHAR(completion_time, 'YYYY-MM-DD HH24:MI:SS') AS completion_time,
  dictionary_begin,
  dictionary_end
FROM
  v$archived_log
WHERE
  (dictionary_begin = 'YES' OR dictionary_end = 'YES')
  AND standby_dest = 'NO'
  AND name IS NOT NULL
  AND status = 'A'
ORDER BY
  sequence# DESC;

This shows redo log files where the dictionary build was recorded.
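Building on the same view, you can also pull the SCN of the most recent dictionary build; any restart SCN at or after this value has a dictionary available before it in the redo stream, provided the covering archived logs still exist:

SELECT MAX(first_change#) AS latest_dict_build_scn
FROM v$archived_log
WHERE dictionary_begin = 'YES'
  AND standby_dest = 'NO'
  AND name IS NOT NULL
  AND status = 'A';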

Why This Matters in Production

Maintaining an up-to-date LogMiner dictionary in the redo stream is essential for recoverability and flexibility in production replication environments.

Here’s why:

Enables SCN-Based Recovery or Rollback

If you ever need to:

  • Restart an Integrated Extract from a specific SCN or timestamp (e.g., due to a data issue, failover, or reprocessing requirement), or
  • Use the ALTER EXTRACT … SCN <scn> command,

…then a valid dictionary must exist in the redo log stream at or before that SCN.
Without it, GoldenGate cannot decode the redo entries, and the Extract will fail to start.
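For illustration, here is a hedged GGSCI sketch of an SCN-based restart; the group name EXT1 and the SCN value are placeholders:

GGSCI> STOP EXTRACT EXT1
GGSCI> ALTER EXTRACT EXT1, SCN 1234567890
GGSCI> START EXTRACT EXT1

Choose an SCN at or after the most recent dictionary build (see the verification query above) and confirm that the archived logs covering it are still available.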

By proactively writing the dictionary to redo every day, you ensure your Extract processes remain resilient and restartable, even after unexpected disruptions.

Clean Up

To delete the job:

BEGIN
  DBMS_SCHEDULER.DROP_JOB('BUILD_LOGMINER_DICT_DAILY', FORCE => TRUE);
END;
/

The FORCE => TRUE ensures the job is dropped even if it’s running.
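To check the job’s status before (or after) dropping it, DBA_SCHEDULER_JOBS has what you need:

SELECT job_name, enabled, state, last_start_date, next_run_date
FROM dba_scheduler_jobs
WHERE job_name = 'BUILD_LOGMINER_DICT_DAILY';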

If the dictionary is written to the redo log stream using DBMS_LOGMNR_D.BUILD with STORE_IN_REDO_LOGS (as above), then no manual cleanup is required.

  • The dictionary metadata is embedded in online redo logs (and then archived redo logs).
  • As redo logs are cycled and archived, the old dictionary entries naturally age out.
Scenario                                             Cleanup Needed?   Method
DBMS_LOGMNR_D.BUILD with STORE_IN_FLAT_FILE          Yes               Delete old .ora files manually
You created custom log/dictionary tables             Yes               Purge data from those tables
Dictionary info in redo is bloating archived logs    Indirect          Adjust redo size, retention, log switching frequency
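For reference, the flat-file variant in the first row looks like the hedged sketch below; the file name and directory are assumptions, and since the underlying UTL_FILE_DIR mechanism is deprecated in recent releases, STORE_IN_REDO_LOGS remains the preferred option:

BEGIN
  DBMS_LOGMNR_D.BUILD(
    dictionary_filename => 'dictionary.ora',          -- hypothetical file name
    dictionary_location => '/u01/app/oracle/logmnr',  -- hypothetical directory
    options             => DBMS_LOGMNR_D.STORE_IN_FLAT_FILE);
END;
/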

Additional References:

How to configure a new Integrated Extract to start at a historical old date when a dictionary build exists (Doc ID 2046831.1)

“An integrated Extract may fail with OGG-01296 if the LogMiner dictionary is missing or outdated. A dictionary is needed by LogMiner to interpret redo for table metadata. It is recommended to rebuild the LogMiner dictionary periodically to ensure any SCN-based restarts of Extract can be supported.”

