Logminer not showing expected records

Former Member
0 Kudos

Hi. I am using LogMiner to look at inserts into the TST03 table. I know that the spool data in question is supposed to have a certain number of rows (e.g. 47), implying that I should find 47 inserts via LogMiner.

What usually happens is that a query against v$logmnr_contents lists inserts for rows 2, 4, 6, 8, ..., but not 1, 3, 5, 7, .... Oddly, both 46 and 47 show up.

I have another case where there are 48 rows in TST03 for the spool. There, LogMiner only shows inserts for the even-numbered rows (i.e., 2, 4, 6, ..., 44, 46 & 48).

I have another situation where I have 6 rows in TST03, but when looking through LogMiner, I only see an insert for the 6th row, not the previous 5. That is why I said "usually" above.

Does LogMiner sometimes not show all operations?

What could cause this behavior?

Is there something I need to ensure is set in order to see all data?

In this case, one of the columns of TST03 is a LONG RAW column (DCONTENTS). Could this be problematic?

James

Accepted Solutions (1)


Former Member
0 Kudos

I also think the LONG column might be the reason for this behaviour; there are a lot of exceptions regarding LONG columns and LogMiner. Check the documentation and search for LONG: [10g Logminer|http://download.oracle.com/docs/cd/B19306_01/server.102/b14215/logminer.htm]

As an alternative check, I suggest you look at table TST01 to verify whether you see all the inserted rows there.
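A sketch of that cross-check, assuming LogMiner has already been started against the relevant redo log (the SAPR3 schema owner here is an assumption; substitute your own SAP schema owner):

```sql
-- Hypothetical cross-check: list the mined inserts into TST01
-- (SAPR3 is an assumed owner; adjust to your system)
SELECT SCN, SQL_REDO
  FROM V$LOGMNR_CONTENTS
 WHERE SEG_OWNER = 'SAPR3'
   AND SEG_NAME  = 'TST01'
   AND OPERATION = 'INSERT';
```

If TST01 shows all its inserts while TST03 does not, that points at the LONG RAW column as the culprit.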

Regards

Michael

stefan_koehler
Active Contributor
0 Kudos

Hello,

just some additional information - Michael is right.

Please check the documentation about Supplemental Logging:

http://download.oracle.com/docs/cd/B19306_01/server.102/b14215/logminer.htm#i1032078

=> This option specifies that when a row is updated, all columns of that row (except for LOBs, LONGS, and ADTs) are placed in the redo log file.
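For reference, this is how the option referred to above can be enabled and verified (note that it only affects redo generated from that point on):

```sql
-- Enable supplemental logging of all columns database-wide
ALTER DATABASE ADD SUPPLEMENTAL LOG DATA (ALL) COLUMNS;

-- Verify the setting (should return YES)
SELECT SUPPLEMENTAL_LOG_DATA_ALL FROM V$DATABASE;

-- It can be switched off again with:
-- ALTER DATABASE DROP SUPPLEMENTAL LOG DATA (ALL) COLUMNS;
```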

Regards

Stefan

Former Member
0 Kudos

Hi, this is good information. Are you saying that if supplemental logging is turned on, we would then see all the expected information?

stefan_koehler
Active Contributor
0 Kudos

Hello James,

unfortunately you don't mention your Oracle version.
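If you are unsure of the exact patch level, a quick way to check it is:

```sql
-- Show the exact Oracle version and patch level
SELECT BANNER FROM V$VERSION;
```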

You can check Metalink note #249001.1 for the Oracle 10g features of LogMiner:

> 4) Support for Additional Data Types

> LogMiner now supports the mining of multi-byte CLOB and NCLOB data types and also extends the support for tables containing LONG and LONG RAW columns.

To be honest, I have never tested it with supplemental logging - but maybe I can run a test tomorrow and post the result.

Regards

Stefan

Former Member
0 Kudos

Thanks. I will have a look at the metalink note. Our version of Oracle is 10.2.0.2.

stefan_koehler
Active Contributor
0 Kudos

Hello James,

OK, now I got curious and tested it right away. My tests were run on a 10.2.0.4.0 database.


SQL> select SUPPLEMENTAL_LOG_DATA_ALL from v$database;
SUP
---
NO

SQL> select a.group#, a.status, b.member from v$log a, v$logfile b where a.group# = b.group#;
    GROUP# STATUS           MEMBER
---------- ---------------- -----------------------------------------
         1 ACTIVE           /oracle/TST/oradata/redolog/redo01.log
         2 CURRENT          /oracle/TST/oradata/redolog/redo02.log
         3 INACTIVE         /oracle/TST/oradata/redolog/redo03.log

SQL> create table ZTEST(a number, b number, c long);
SQL> insert into ZTEST values (1,2,'AAA');
SQL> insert into ZTEST values (3,4,'BBB');
SQL> commit;

SQL> update ZTEST SET c ='CCC' where a=1;
SQL> commit;

SQL> EXECUTE DBMS_LOGMNR.ADD_LOGFILE( LOGFILENAME => '/oracle/TST/oradata/redolog/redo02.log', OPTIONS => DBMS_LOGMNR.NEW);
SQL> EXECUTE DBMS_LOGMNR.START_LOGMNR(OPTIONS => DBMS_LOGMNR.DICT_FROM_ONLINE_CATALOG);
SQL> SELECT SQL_REDO from V$LOGMNR_CONTENTS WHERE SEG_OWNER = 'TEST' and SEG_NAME = 'ZTEST';
SQL_REDO
----------------------------------------------------------------
create table ZTEST(a number, b number, c long);
insert into "TEST"."ZTEST"("A","B","C") values ('1','2','AAA');
insert into "TEST"."ZTEST"("A","B","C") values ('3','4','BBB');

SQL> DROP TABLE ZTEST;
SQL> ALTER DATABASE ADD SUPPLEMENTAL LOG DATA (ALL) COLUMNS;

SQL> select SUPPLEMENTAL_LOG_DATA_ALL from v$database;
SUP
---
YES

SQL> select a.group#, a.status, b.member from v$log a, v$logfile b where a.group# = b.group#;
    GROUP# STATUS           MEMBER
---------- ---------------- -----------------------------------------
         1 INACTIVE         /oracle/TST/oradata/redolog/redo01.log
         2 ACTIVE           /oracle/TST/oradata/redolog/redo02.log
         3 CURRENT          /oracle/TST/oradata/redolog/redo03.log
         
SQL> create table ZTEST(a number, b number, c long);
SQL> insert into ZTEST values (1,2,'AAA');
SQL> insert into ZTEST values (3,4,'BBB');
SQL> commit;

SQL> update ZTEST SET c ='CCC' where a=1;
SQL> commit;

SQL> EXECUTE DBMS_LOGMNR.ADD_LOGFILE( LOGFILENAME => '/oracle/TST/oradata/redolog/redo03.log', OPTIONS => DBMS_LOGMNR.NEW);
SQL> EXECUTE DBMS_LOGMNR.START_LOGMNR(OPTIONS => DBMS_LOGMNR.DICT_FROM_ONLINE_CATALOG);
SQL> SELECT SQL_REDO from V$LOGMNR_CONTENTS WHERE SEG_OWNER = 'TEST' and SEG_NAME = 'ZTEST';

SQL_REDO
----------------------------------------------------------------
create table ZTEST(a number, b number, c long);
insert into "TEST"."ZTEST"("A","B","C") values ('1','2','AAA');
insert into "TEST"."ZTEST"("A","B","C") values ('3','4','BBB');
update "TEST"."ZTEST" set "C" = 'CCC' where "A" = '1' and "B" = '2' and "C" = 'AAA' and ROWID = 'AAANfsAAEAAAAAeAAA';

SQL> DROP TABLE ZTEST;
SQL> ALTER DATABASE DROP SUPPLEMENTAL LOG DATA (ALL) COLUMNS;

As you can see, with Oracle 10g and supplemental logging you get the LONG columns too.

Regards

Stefan

Answers (2)


Former Member
0 Kudos

See my previous posting.

Former Member
0 Kudos

Just a quick note to say that though the information provided is great, I was not able to fully resolve the situation we had encountered. The business was able to cope without recovering the lost information, and since the data would have been removed after a couple of weeks anyway, we no longer pursued a resolution to the missing LogMiner information.

So, I will mark this as answered, though it was not resolved. That way, anyone coming across this thread with the same situation will know that there may be more homework to do.