
Sizing of Large Memory cache in IQ 16

Former Member

Hi

I'm in the middle of a migration project from IQ 12.7 to IQ 16 SP4.5. Currently, 96GB is allocated for IQ on the new RHEL server and I have configured the Main Cache, Temp Cache and Large Memory to 32GB each. However, I have problems loading relatively wide tables. When loads into two wide tables run concurrently, one of the LOADs fails and I receive the following messages in the IQ message log:

I. 11/04 12:23:29. 0000000070 Exception Thrown from slib/s_llhtable.cxx:187, Err# 13, tid 2 origtid 2

I. 11/04 12:23:29. 0000000070    O/S Err#: 0, ErrID: 517 (hos_memexception); SQLCode: -1013130, SQLState: 'QBC75', Severity: 14

I. 11/04 12:23:29. 0000000070 [22160]: All IQ large memory has been used, allocation canceled [size: 33554495]

-- (slib/s_llhtable.cxx 187)

The size "33554495" is always the same.

When I increase -iqlm to 48GB the problem doesn't occur. Monitoring via sp_iqsysmon (-section lma) reveals that when run separately, the first load allocates about 33GB of "Large Memory Inflexible" and the second load allocates about 10GB.

When run concurrently, the consumption of "Large Memory Inflexible" is about 42 GB. The first table has 766 columns and the second one 572, mostly NBit FP indexes. The number of rows inserted in each load is not big at all, fewer than a million. There are no conversions to flat FP during the loads. I use default values for FP_NBIT_Autosize_Limit and FP_NBIT_Lookup_MB.
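For what it's worth, the arithmetic is consistent with what I see. A quick sanity check (the figures are the sp_iqsysmon observations above; treating GB as GiB here is my own assumption):

```python
GIB = 1024 ** 3

iqlm = 32 * GIB                      # configured -iqlm
concurrent_inflexible = 42 * GIB     # "Large Memory Inflexible" during both loads

print(concurrent_inflexible > iqlm)        # True: exceeds 32 GB, so one load fails
print(concurrent_inflexible <= 48 * GIB)   # True: 48 GB leaves headroom

# The constant failed-allocation size from the log is exactly 32 MiB + 63 bytes:
print(33554495 == 32 * 1024 ** 2 + 63)     # True
```

So the combined inflexible demand simply does not fit in a 32 GB large memory cache, which matches the behavior I observe.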

Questions:

1. Do such massive memory requirements make sense at all? Does it sound like normal behavior or a bug?

2. I read the section about the large memory cache in the IQ Sizing Guide. Still, it is not clear to me how to calculate the requirements for loads of NBit indexes. Specifically, is there a way to calculate it precisely based on the output of sp_iqindexmetadata (TokenCount, CountSize, DictSize)?

3. In IQ 15.*, FP(3) indexes proved to be problematic. In many cases, FP(3) brought more harm than good, and in some cases I had to ban the use of such indexes entirely to avoid a significant drop in performance. Now we have NBit indexes, which can potentially grow even beyond the limits of FP(3). Specifically, I have tens of NBit indexes with >16 bits on the first table in my test case. Should this be considered good practice? Maybe it is worth converting such indexes to Flat FP, given that I prefer better performance over storage savings?
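To make questions 2 and 3 concrete, this is the naive model I would like to validate. Every formula and constant here is my own guess (including the assumed 8-byte per-token count overhead), not anything taken from the Sizing Guide:

```python
def nbit_dictionary_bytes(token_count: int, token_width: int,
                          count_bytes: int = 8) -> int:
    # Guess: the NBit lookup dictionary costs roughly one full-width value
    # plus a per-token count per distinct token. This is what I hope the
    # TokenCount/CountSize/DictSize values would let me compute precisely.
    return token_count * (token_width + count_bytes)

def flat_fp_bytes(rows: int, value_width: int) -> int:
    # Flat FP simply stores the full value width for every row.
    return rows * value_width

def nbit_fp_bytes(rows: int, n_bits: int, token_count: int,
                  value_width: int) -> int:
    # NBit(n): n bits per row plus the dictionary above (counts ignored here).
    return (rows * n_bits + 7) // 8 \
        + nbit_dictionary_bytes(token_count, value_width, 0)

# Under this model, a 20-bit NBit column over 1M rows of 8-byte values with
# 800k distinct tokens is already bigger than its flat equivalent, which is
# why I wonder whether such high-cardinality NBit indexes are worth keeping:
rows, width, tokens = 1_000_000, 8, 800_000
print(flat_fp_bytes(rows, width))               # 8000000
print(nbit_fp_bytes(rows, 20, tokens, width))   # 8900000
```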

Thanks in advance

Leonid Gvirtz

Accepted Solutions (0)

Answers (1)


c_baker
Employee

The LM is used to store the FP lookup tables.  The issue is with your varchars in the tables.

Even though IQ 16 stores only the used portion of a varchar on disk (unlike 15.x), in early 16 SPs varchars are expanded in LM memory to the full defined varchar length.

This is fixed in SP08, where the largest stored varchar on disk sets the column size for the lookup table in memory.
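To illustrate with made-up numbers (a rough model of mine that treats the lookup table as tokens times column width, not official sizing math):

```python
tokens = 500_000          # distinct values in one varchar column's lookup table
defined_width = 255       # VARCHAR(255) column definition
longest_stored = 40       # longest value actually stored on disk

pre_sp08 = tokens * defined_width    # expanded to the full definition length
sp08 = tokens * longest_stored       # sized by the largest stored value

print(pre_sp08)   # 127500000 bytes, ~122 MiB in LM for this one column
print(sp08)       # 20000000 bytes, ~19 MiB
```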

Recommend you upgrade to SP08PL01 or higher.

Chris

Former Member

Hi Chris

The first table in my example, one that requires 30+ GB of Large Memory for load, has only 7 VARCHAR columns out of 766. The second table requires 10 GB of Large Memory and it has only one VARCHAR column out of 572. So, the problem must be somewhere else.

Do you know in what units the TokenCount, CountSize and DictSize values are reported in the sp_iqindexmetadata output? Is it possible to base the sizing on these values somehow?

Thanks

Leonid Gvirtz