
Issues after upgrading to IQ 16.0 SP3

Former Member
0 Kudos

Hi,

I migrated from 15.2 ESD#2 to 16.0 SP3.  After the migration I'm encountering some issues; the most relevant are:

1.-  A LOAD TABLE (20M rows, around 60 columns) that used to take 30 min in 15.2 now takes over 1 hr.

2.- A TRUNCATE TABLE on the same table as in 1 takes around 30 minutes.  Shouldn't a truncate be almost instantaneous?

3.- An insert/update from SAS (72K rows, 10 columns) that used to take 1.5 hrs in 15.2 now causes the server to stop responding after around 1 hr.  It is necessary to kill the server process, because even the stop_iq command does not stop the server.

4.- Several indexes look to be corrupted, even though sp_iqcheckdb 'verify database' returns no errors.  After repairing all the indexes of the table with sp_iqrebuildindex, the queries started to work well.  Does this mean that I need to rebuild all the indexes in the database?

5.- After the upgrade, I had some proxy tables pointing to ASE 12.5.3 that connected via a remote server using JDBC, but when I tried querying the proxy tables the server crashed.  To solve this I had to drop the proxy tables, drop the remote servers, recreate the remote servers using ODBC, and then recreate the proxy tables.  Shouldn't the server return error messages instead of crashing?

Has anyone encountered these issues?  Can you share your experiences and solutions?

Thanks a lot,

Uvernes

Accepted Solutions (1)

c_baker
Employee
0 Kudos

Is memory properly configured?  Do you have enough -iqmc, -iqtc AND -iqlm configured?

Remember that IQ 16 also requires -iqlm for loading.  The split for a loader should start at 1/3 of cache memory for each of -iqmc, -iqtc, and -iqlm, as opposed to the 60:40 split (-iqtc:-iqmc) suggested for pre-16.0 loaders.  Without -iqlm set, it defaults to 2 GB.
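For example (purely illustrative numbers, assuming a machine with roughly 24 GB to devote to IQ caches; sizes are in MB), the relevant .cfg entries for an even three-way split would be:

-iqmc 8192
-iqtc 8192
-iqlm 8192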

Are you using RLV on any table?  (The symptoms you describe in 1, 2 and 3 point partially to this.)  RLV introduces additional requirements for bulk operations: the need to set snapshot_versioning to 'table-level' and to run sp_iqmergerlvstore before performing the bulk operation (this covers 1 and 2).

3 appears to be related to -iqrlvmem not being large enough.  The use of RLV requires a little more management.  Even though there are default merge settings on the RLV store to merge into the main store, if there are no occasional commits then the whole set of inserted or bulk-loaded rows (treated as row-by-row - see above) must fit into RLV memory until the commit.  (This depends on whether SAS is doing bulk inserts or row-by-row inserts, of course, and on whether you are using RLV at all.)

Did you migrate the database to a new IQ 16 installation, or upgrade in place?

If you upgraded:

  • Did you run 'alter database upgrade'?  Also, options used in IQ 15.x should be set back to their defaults in 16.
  • What is the setting of FP_NBIT_IQ15_COMPATIBILITY?

As far as 4 goes, you must set FP_NBIT_IQ15_COMPATIBILITY='OFF' at the global AND user (dba or loader) level for n-bit FP to be used, but you MUST also rebuild existing old FP indexes as n-bit FP (which you mention in 4).
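As a minimal sketch (the table and column names are placeholders), that looks like:

SET OPTION PUBLIC.FP_NBIT_IQ15_COMPATIBILITY = 'OFF';
SET OPTION "DBA".FP_NBIT_IQ15_COMPATIBILITY = 'OFF';
-- rebuild the default FP index on a column so it is recreated as n-bit FP
sp_iqrebuildindex 'DBA.my_table', 'column my_column';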

In order to use the new tieredHG indexes, you must also set CREATE_HG_WITH_EXACT_DISTINCTS = 'Off';

To convert an existing HG index to a tiered HG, you then need to run sp_iqrebuildindex with the 'retier' option.  This is not a long operation.  You can have a mix of tiered and non-tiered HG indexes in the database.  As the 'retier' option is only a toggle to convert an existing HG from 'exact' to 'tiered' and vice versa, you can check an index's current setting by running sp_iqindexmetadata and checking the value of 'Maintains Exact Distinct'.
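For example (index, table and owner names are placeholders, and I am assuming the 'retier' keyword goes in the index clause of sp_iqrebuildindex):

-- check the current state: 'Maintains Exact Distinct' = Y means exact (non-tiered)
sp_iqindexmetadata 'my_hg_idx', 'my_table', 'DBA';
-- toggle the existing HG index to tiered HG
sp_iqrebuildindex 'DBA.my_table', 'index my_hg_idx retier';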

Chris

c_baker
Employee
0 Kudos

As a follow-up, the following procedure parses the sp_iqindexmetadata output for all HG indexes, to help determine the tiered status of each HG index for feeding into sp_iqrebuildindex ... 'retier'.

create or replace procedure DBA.sp_iqshowtiered()
begin
  declare "tiered" char(1);

  -- collect all HG indexes into a temporary table, with a blank tiered_index column to fill in
  select rtrim("table_name") as 'table_name', rtrim("table_owner") as 'table_owner',
         rtrim("index_name") as 'index_name', rtrim("column_name") as 'column_name',
         "index_type", "unique_index", ' ' as "tiered_index"
    into #tier_temp
    from "sp_iqindex"()
    where "index_type" = 'HG';
  commit;

  --create indexes to avoid index advisor
  --create unique HG index tier_HG on #tier_temp(table_name, table_owner, index_name);
  create HG index index_name_HG on #tier_temp(index_name);
  create HG index table_name_HG on #tier_temp(table_name);
  create LF index table_owner_LF on #tier_temp(table_owner);

  -- for each HG index, read 'Maintains Exact Distinct' from sp_iqindexmetadata:
  -- a value starting with 'N' means the index is tiered, so tiered_index is set to 'Y'
  for "FORLOOP" as "FORCRSR" dynamic scroll cursor for
    select "table_name", "table_owner", "index_name"
      from #tier_temp
      order by "table_name" asc, "table_owner" asc, "index_name" asc
  do
    execute immediate 'select (case when substring(value2,1,1)=''N'' then ''Y'' else ''N'' end) into tiered from sp_iqindexmetadata (''' || "index_name" || ''',''' || "table_name" || ''',''' || "table_owner" || ''') where value1 = ''Maintains Exact Distinct'';';
    execute immediate 'update #tier_temp set tiered_index = tiered where table_name=''' || "table_name" || ''' and table_owner=''' || "table_owner" || ''' and index_name=''' || "index_name" || ''';'
  end for;

  select * from #tier_temp
end;
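You can then simply run:

call DBA.sp_iqshowtiered();

and read the Y/N flag in the tiered_index column of the result set.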

Former Member
0 Kudos

Hi Chris,

Thanks a lot for your reply.

This is the cfg file that I’m using to start the database:

-n MyDwIqSrvr
-x tcpip{port=4100}
-cl 256m
-ch 512m
-gc 20
-gd all
-gl all
-gm 100
-gn 150
-iqgovern 100
-gp 4096
-ti 4400
-iqrlvmem 2048
-iqlm 5120
-iqmc 5120
-iqtc 5120
-iqmsgsz 2047
-iqmsgnum 2
-xs http{port=8084}

For the slow-loading table, I only tested before configuring RLV (no IQ RLV store dbspace created), and the truncate and load were slow.  I have not tested with RLV enabled.

For the table being loaded from SAS, I first tested without doing any RLV configuration (no IQ RLV store dbspace created), and the server slowed down; then I set RLV on the table and tested both 'table-level' and 'row-level' for the SNAPSHOT_VERSIONING option.  In both cases the server slowed down after 1 hr and was not responding.

Question:  Does IQ 16.0 SP3 use RLV by default if the IQ RLV store is not configured?

Yes, I ran 'ALTER DATABASE UPGRADE SYSTEM PROCEDURE AS DEFINER OFF'.

And after migration I set the following options:

SET OPTION PUBLIC.FP_NBIT_IQ15_Compatibility = OFF
SET OPTION PUBLIC.CREATE_HG_WITH_EXACT_DISTINCTS = OFF
SET OPTION PUBLIC.REVERT_TO_V15_OPTIMIZER = OFF

Note that after migration both sp_iqcheckdb 'verify database' and sp_iqcheckdb 'allocation database' return no errors.

Thanks a lot for the sp_iqshowtiered() procedure.

I will be executing tests with your recommendations and will post the results.

Thanks again,

Uvernes

saroj_bagai
Contributor
0 Kudos

Can you post your sp_iqrowdensity and sp_iqindexmetadata output?

saroj_bagai
Contributor
0 Kudos


Also include the HTML query plan for the load.

c_baker
Employee
0 Kudos

In order to use RLV, you must create an RLV store (this is just like any normal IQ dbspace, so the usual performance caveats apply) e.g.:

create dbspace IQ_RLV using file IQ_RLV_1 '/home/sybase/IQ_LABS/IQ_RLV_1.iq' size 1000 IQ RLV store;

For a table to use RLV, you must 'attach' it to the RLV store, no matter which dbspace the table pages occupy; i.e. you do not create the table in the RLV store:

alter table lineitem enable RLV STORE;

Tables NOT in the RLV store will use normal page-level versioning (keep option base_tables_in_rlv_store = 'off' - be selective).  The RLV store should be sized to hold enough concurrent loads AND the maximum commit size of the batch - I do not have any guidelines yet.

Once a table is attached to the RLV store, you can treat it like an OLTP table for row-based operations.  Merges from the RLV store are done automatically (controlled by the various RV_merge... options) but a small -iqrlvmem setting or RLV store might negatively affect performance.
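If you want to inspect what those merge options are currently set to, one generic way is to query the SYSOPTIONS catalog view (the LIKE pattern is an assumption based on the RV_ prefix above):

select user_name, "option", setting from SYSOPTIONS where "option" like 'RV%';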

You do not want to perform a LOAD TABLE on an RLV-enabled table while it is in row-level versioning mode; this can slow the LOAD down far too much.  The same goes for TRUNCATE.  What you want to do instead is shown by the following examples.  For truncate:

--set temporary option disable_ri_check = 'on'; 
set temporary option snapshot_versioning = 'row-level';
sp_iqmergerlvstore 'BLOCKING','lineitem','dba';
set temporary option snapshot_versioning = 'table-level';

truncate table lineitem;
truncate table orders;
truncate table partsupp;
truncate table supplier;
truncate table part;
truncate table customer;
truncate table nation;
truncate table region;
commit;

Similarly for LOAD TABLE:

set temporary option snapshot_versioning = 'Table-level';
sp_iqmergerlvstore 'BLOCKING','lineitem','dba';
set temporary option query_name = 'lineitem';
message 'Loading lineitem' to client;

LOAD TABLE LINEITEM (
L_ORDERKEY '|',
L_PARTKEY '|',
L_SUPPKEY '|',
L_LINENUMBER '|',
L_QUANTITY '|',
L_EXTENDEDPRICE '|',
L_DISCOUNT '|',
L_TAX '|',
L_RETURNFLAG '|',
L_LINESTATUS '|',
L_SHIPDATE '|',
L_COMMITDATE '|',
L_RECEIPTDATE '|',
L_SHIPINSTRUCT '|',
L_SHIPMODE '|',
L_COMMENT '|'
)
USING FILE './raw_data/lineitem.tbl'
escapes off
quotes off
row delimited by '\x0d\x0a'
ignore constraint null 0, unique 0, data value 0, foreign key 0, check 0, all 100
MESSAGE LOG './raw_data/lineitem.err'
ROW LOG './raw_data/lineitem.log'
LOG DELIMITED BY '|'
Notify 100000
WITH CHECKPOINT ON;
commit;
checkpoint;
commit;

The critical part is to merge the remaining rows from the RLV store before performing the bulk operation on the affected table.  Once the bulk operation is complete, you can

set temporary option snapshot_versioning = 'row-level';

again.

I'm not sure why the non-RLV loads were slower in IQ 16, unless it has something to do with still trying to maintain the old FP indexes (you might want to rebuild them all as n-bit FP).  I would expect any row-by-row loading to be slow, regardless.

When you tested the RLV loading with SAS, you may have still run out of RLV memory; I hit a similar issue when I tested.  It relates to the number of rows being committed - the whole batch must fit in RLV memory.  I would suggest increasing RLV memory, or see if you can set a commit size in SAS to commit after every x rows.  Until the commit, the rows cannot be merged into the main store automatically (based on the RV_merge options) or by sp_iqmergerlvstore (non-blocking) if you were to run it in another session.
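For that last case, running the merge from another session would look something like this (assuming a non-blocking mode keyword analogous to the 'BLOCKING' used earlier; check the sp_iqmergerlvstore documentation for the exact value):

sp_iqmergerlvstore 'NONBLOCKING','lineitem','dba';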

HTH

Chris

Answers (1)

Former Member
0 Kudos

Hi,

Just want to give an update on this:

Following Chris's suggestions, I executed sp_iqmergerlvstore 'BLOCKING','myTable','dba', disabled RLV on the table referred to in 1 and 2 (the slow truncate and load), and rebuilt all its indexes, taking care to rebuild the HG indexes as tiered indexes.  This solved the performance problems for that table.  Thanks a lot Chris!

Since some other queries were returning erroneous results, I rebuilt all the indexes in the database; this seems to have solved some issues and reduced the number of database crashes.

I continued testing and got some additional database crashes.  Since RLV was not related to my load and truncate performance issues, I decided to disable RLV on the tables where I had configured it and remove the RLV dbspace completely.  While trying to execute the statement 'ALTER TABLE DBA.myTable DISABLE RLV STORE' the server crashed a few times; the only way I was able to disable RLV on those tables was to drop and recreate them.  After dropping and recreating the tables, I was able to drop the RLV dbspace.

I am still getting some database crashes.  I'm investigating the cause; I suspect they are related to the use of cursors and to calling, inside the cursor iteration, a stored procedure that uses temporary tables, but I'm not sure.  I will post as soon as I have a clearer idea about what is causing the database to crash.

Regarding RLV, I'll come back to test it after the database is stabilized.

If you have experienced similar database behaviors, please share your experiences and solutions.

Thanks a lot for your help.