In Oracle replication environments, especially with GoldenGate capture, two factors frequently determine whether your pipeline runs smoothly or starts lagging:
- Long-running transactions
- Alignment with the nearest dictionary build
If either is overlooked, you can end up with capture delays, memory pressure, or even inconsistencies during schema changes. Let’s walk through why this matters and look at a ready-to-use script that spots both issues in real time.
Why Long-Running Transactions Matter
A transaction that stays open for too long:
- Forces redo (and the archived logs behind it) to be retained back to the transaction’s start SCN, preventing them from being purged.
- Can cause extract processes to stall while waiting for commit markers.
- Inflates memory and redo log usage, impacting both source DB and GoldenGate.
Identifying and breaking down these transactions early helps avoid latency spikes and replication backlogs.
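If you just want a quick look before running the full PL/SQL block below, the script’s core join works as a standalone query (no assumptions beyond the v$ views the script already uses):

SELECT s.sid,
       s.serial#,
       s.username,
       t.start_date,
       t.start_scn,
       ROUND((SYSDATE - t.start_date) * 24 * 60, 2) AS minutes_open  -- age of the transaction
FROM   v$transaction t
       JOIN v$session s ON s.saddr = t.ses_addr   -- open transactions and their sessions
WHERE  s.username IS NOT NULL                     -- skip background processes
ORDER  BY t.start_date;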
Why Dictionary Builds Matter
When Oracle performs a dictionary build, it captures the structure of tables and columns needed by replication tools. GoldenGate relies on these builds for schema mapping.
If a long-running transaction starts before a dictionary build completes, GoldenGate may not “see” the right metadata when it processes that transaction’s changes. That’s why pairing each open transaction with the nearest prior dictionary build is essential.
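You can see where dictionary builds landed directly in v$archived_log: the DICTIONARY_BEGIN flag marks the sequence in which a build starts, which is the same filter the script below relies on. The DBMS_LOGMNR_D.BUILD call is included only as a reminder of how a fresh build is typically created; run it deliberately, not as part of routine monitoring.

-- Recent dictionary builds recorded in the archived redo
SELECT sequence#,
       first_change#,
       TO_CHAR(completion_time, 'YYYY-MM-DD HH24:MI:SS') AS completed
FROM   v$archived_log
WHERE  dictionary_begin = 'YES'
  AND  standby_dest = 'NO'
  AND  name IS NOT NULL
ORDER  BY completion_time DESC;

-- If you need a fresh build (see the dictionary-rebuild post linked at the end)
EXECUTE DBMS_LOGMNR_D.BUILD(OPTIONS => DBMS_LOGMNR_D.STORE_IN_REDO_LOGS);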
The Script
Here’s a PL/SQL block you can run to check both at once:
SET SERVEROUTPUT ON SIZE 1000000

DECLARE
  -- Open user transactions, oldest first
  CURSOR tx_cursor IS
    SELECT s.sid,
           s.serial#,
           s.username,
           s.status,
           s.program,
           s.logon_time,
           t.start_date,
           t.start_scn,
           SYSDATE AS current_time,
           ROUND((SYSDATE - t.start_date) * 24 * 60, 2)            AS duration_minutes,
           FLOOR((SYSDATE - t.start_date) * 86400 / 3600)          AS dur_hours,
           FLOOR(MOD((SYSDATE - t.start_date) * 86400, 3600) / 60) AS dur_minutes,
           FLOOR(MOD((SYSDATE - t.start_date) * 86400, 60))        AS dur_seconds
    FROM   v$transaction t
           JOIN v$session s ON t.ses_addr = s.saddr
    WHERE  s.username IS NOT NULL
    ORDER  BY t.start_date;

  v_redo_seq    v$archived_log.sequence#%TYPE;
  v_dict_seq    v$archived_log.sequence#%TYPE;
  v_dict_change v$archived_log.first_change#%TYPE;
  v_dict_time   VARCHAR2(30);
  tx_found      BOOLEAN := FALSE;
BEGIN
  FOR tx IN tx_cursor LOOP
    tx_found := TRUE;

    -- Archived log sequence that contains the transaction's start SCN
    BEGIN
      SELECT sequence#
      INTO   v_redo_seq
      FROM   v$archived_log
      WHERE  first_change# <= tx.start_scn
      AND    tx.start_scn < next_change#
      AND    standby_dest = 'NO'
      AND    status = 'A'
      AND    name IS NOT NULL
      ORDER  BY sequence# DESC
      FETCH FIRST 1 ROWS ONLY;
    EXCEPTION
      WHEN NO_DATA_FOUND THEN
        v_redo_seq := NULL;
    END;

    -- Most recent dictionary build completed before the transaction started
    BEGIN
      SELECT sequence#, first_change#, TO_CHAR(completion_time, 'YYYY-MM-DD HH24:MI:SS')
      INTO   v_dict_seq, v_dict_change, v_dict_time
      FROM  (SELECT sequence#, first_change#, completion_time
             FROM   v$archived_log
             WHERE  dictionary_begin = 'YES'
             AND    standby_dest = 'NO'
             AND    name IS NOT NULL
             AND    status = 'A'
             AND    completion_time <= tx.start_date
             ORDER  BY completion_time DESC)
      WHERE  ROWNUM = 1;
    EXCEPTION
      WHEN NO_DATA_FOUND THEN
        v_dict_seq    := NULL;
        v_dict_change := NULL;
        v_dict_time   := 'N/A';
    END;

    -- Output
    DBMS_OUTPUT.PUT_LINE('------- Long Running Transaction ---------------------------------------------------------------');
    DBMS_OUTPUT.PUT_LINE('SID: ' || tx.sid || ', Serial#: ' || tx.serial#);
    DBMS_OUTPUT.PUT_LINE('Username: ' || tx.username || ', Program: ' || tx.program);
    DBMS_OUTPUT.PUT_LINE('Start Time: ' || TO_CHAR(tx.start_date, 'YYYY-MM-DD HH24:MI:SS'));
    DBMS_OUTPUT.PUT_LINE('Current Time: ' || TO_CHAR(tx.current_time, 'YYYY-MM-DD HH24:MI:SS'));
    DBMS_OUTPUT.PUT_LINE('Duration: ' || tx.duration_minutes || ' minutes (' ||
                         LPAD(tx.dur_hours, 2, '0') || ':' ||
                         LPAD(tx.dur_minutes, 2, '0') || ':' ||
                         LPAD(tx.dur_seconds, 2, '0') || ')');
    DBMS_OUTPUT.PUT_LINE('Start SCN: ' || tx.start_scn ||
                         ' (Redo Sequence#: ' || NVL(TO_CHAR(v_redo_seq), 'N/A') || ')');
    DBMS_OUTPUT.PUT_LINE('------- Nearest Dictionary Build ---------------------------------------------------------------');
    DBMS_OUTPUT.PUT_LINE('Dictionary Build Prior to Transaction Start: Seq# ' || NVL(TO_CHAR(v_dict_seq), 'N/A') ||
                         ', SCN ' || NVL(TO_CHAR(v_dict_change), 'N/A') ||
                         ', Time ' || v_dict_time);
  END LOOP;

  IF NOT tx_found THEN
    DBMS_OUTPUT.PUT_LINE('No open user transactions found.');
  END IF;
END;
/
Sample Output
------- Long Running Transaction ---------------------------------------------------------------
SID: 17, Serial#: 41557
Username: OGGADMIN, Program: Oracle SQL Developer for VS Code/25.1.1
Start Time: 2025-06-04 13:55:12
Current Time: 2025-06-04 14:05:55
Duration: 10.72 minutes (00:10:43)
Start SCN: 4081121 (Redo Sequence#: 125)
------- Nearest Dictionary Build ---------------------------------------------------------------
Dictionary Build Prior to Transaction Start: Seq# 120, SCN 4079370, Time 2025-06-04 13:55:05
Key Takeaways
- Run this before schema changes to confirm that no long-running transactions overlap dictionary builds.
- In GoldenGate environments, use it to explain replication lag or capture stalls.
- Cluster-wide checks: On RAC/Exadata, replace the v$ views with their gv$ equivalents (see the sketch after this list).
- Healthy practice: Educate developers/ETL teams on the impact of multi-hour open transactions.
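For a cluster-wide check, the script’s core join translates directly to the gv$ views; a minimal sketch, where the only addition is matching inst_id on both sides:

SELECT s.inst_id,
       s.sid,
       s.serial#,
       s.username,
       t.start_date,
       t.start_scn
FROM   gv$transaction t
       JOIN gv$session s
         ON  s.inst_id = t.inst_id     -- same instance
         AND s.saddr   = t.ses_addr    -- same join as the single-instance script
WHERE  s.username IS NOT NULL
ORDER  BY t.start_date;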
With one script, you can correlate open user transactions, their SCNs, and the dictionary metadata context. This ensures that replication is running against a stable foundation and that corrective actions can be taken before issues snowball.
Curious about what’s really happening behind a Dictionary Build? I covered that in detail in a previous blog: https://alexlima.com/2025/05/30/why-you-should-rebuild-the-logminer-dictionary-periodically-for-oracle-goldengate/
Call to Action
- Ensure flagged long-running transactions are truly part of your GoldenGate capture. Validating against your Extract’s configuration keeps you from chasing transactions that never enter the trail and don’t impact replication (see the sketch below).
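A minimal sketch of that validation, assuming an integrated Extract (which registers a logical capture visible in DBA_CAPTURE; a classic Extract will not appear there). Compare the capture’s checkpoint SCNs with the oldest open transaction, and cross-check the transaction’s schema against the TABLE clauses in your Extract parameter file:

-- Where the capture's checkpoint currently sits
SELECT capture_name,
       status,
       required_checkpoint_scn,
       captured_scn
FROM   dba_capture;

-- Oldest open transaction on the source
SELECT MIN(start_scn)  AS oldest_open_tx_scn,
       MIN(start_date) AS oldest_open_tx_time
FROM   v$transaction;

-- If REQUIRED_CHECKPOINT_SCN stops advancing while an old open transaction
-- sits just above it, that transaction is likely the one pinning the
-- checkpoint and the archived logs behind it.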
- Identify the exact INSERT, UPDATE, or DELETE statements tied to a given transaction. Knowing which DML sits behind an open transaction gives you clear visibility into what data is being modified and helps you decide whether to wait, exclude, or remediate (see the sketch below).
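A minimal sketch for that drill-down. The first query shows the statement each transaction’s session is running now (or ran last), which is often enough to identify the workload; the second assumes a LogMiner session has already been started with DBMS_LOGMNR.START_LOGMNR over the relevant log range, with the bind values taken from v$transaction (xidusn, xidslot, xidsqn) for the transaction you care about:

-- Current or most recent statement of each session with an open transaction
SELECT s.sid,
       s.serial#,
       t.start_scn,
       q.sql_id,
       q.sql_text
FROM   v$transaction t
       JOIN v$session s      ON s.saddr  = t.ses_addr
       LEFT JOIN v$sqlarea q ON q.sql_id = NVL(s.sql_id, s.prev_sql_id)
WHERE  s.username IS NOT NULL;

-- Full DML history of one transaction, mined from the redo
SELECT scn, operation, table_name, sql_redo
FROM   v$logmnr_contents
WHERE  xidusn = :usn
  AND  xidslt = :slot
  AND  xidsqn = :sqn;

Neither query is GoldenGate-specific, but together they usually answer the practical question quickly: is this transaction safe to wait out, or does it need intervention before it drags replication down with it?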
