
An error while executing query.Error details: SAP DBTech JDBC: [423]: AFL error: [423] "SYSTEM"."AFL_WRAPPER_GENERATOR"...

Former Member

This is the error message text I am receiving (PA 1.14, build 870); the beginning of the message appears in the subject line of this post:

Error details: SAP DBTech JDBC: [423]: AFL error: [423] "SYSTEM"."AFL_WRAPPER_GENERATOR": line 32 col 1 (at pos 1198): [423] (range 3) AFL error exception: AFL error: registration finished with errors, see indexserver trace

Hi Experts!

Whenever I run an algorithm (even one of the most straightforward ones, such as HANA K-Means) on an object (a table or analytic view) that contains variables of INTEGER type, I receive the error below as soon as I add more than 2 or 3 such variables as inputs in the algorithm node:

An error occurred while executing the query. Error details: SAP DBTech JDBC: [423]: AFL error: [423] "SYSTEM"."AFL_WRAPPER_GENERATOR": line 32 col 1 (at pos 1198): [423] (range 3) AFL error exception: AFL error: registration finished with errors, see indexserver trace

If I change all of my variables to DOUBLE type, the error message goes away.

Does that mean PA requires the INTEGER type to be avoided? If so, what is the recommended way to build joins in HANA when creating the objects (views) consumed by PA? Using DOUBLE for this purpose would be cumbersome, if not a showstopper.
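For what it's worth, the cast-to-DOUBLE workaround does not have to touch the base tables; it can be done in the view that PA consumes. A minimal sketch, assuming a hypothetical source table SALES with INTEGER measures QTY and STORES (all names here are illustrative, not from this thread):

```sql
-- Cast the INTEGER measures to DOUBLE in the view consumed by PA,
-- leaving the base table and its join keys untouched.
CREATE VIEW SALES_FOR_PA AS
SELECT
    REGION,
    TO_DOUBLE(QTY)    AS QTY,
    TO_DOUBLE(STORES) AS STORES
FROM SALES;
```

Joins can still be built on the original INTEGER key columns; only the measure columns handed to the algorithm node need the cast.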

And above all, why is INTEGER supported in such a limited way (i.e. only a couple of variables with a rather low "collective Cartesian dimensionality", that is, a small number of distinct values per variable)?

Any hints or knowledge would be highly appreciated.

Thanks,

Sergey

Message was edited by Jason Lax: shortened title to something manageable.

Accepted Solutions (0)

Answers (3)


Former Member

Two and a half years after my original post, I noticed that some progress has been made: the INTEGER type is now supported by HANA PAL. But the DECIMAL type still cannot be used and has to be "approximated" by DOUBLE, with all the possible consequences for data representation, rounding, etc.


Hi,

If you still have this issue, have a look at the SAP Note below:

1930665 - Labor Demand Planning - determination of planned execution times unsuccessful

https://service.sap.com/sap/support/notes/1930665

The same error is also resolved in this thread:

http://scn.sap.com/thread/3657025

And it is also resolved here: https://scn.sap.com/thread/3311619

Try this call:

call SYSTEM.afl_wrapper_generator('PAL_ANOMALY_DETECTION1', 'AFLPAL', 'ANOMALYDETECTION', PDATA);

and check PDATA (it should contain only distinct values).
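To make the PDATA check concrete, here is a hedged sketch of what such a signature table typically looks like for the older `AFL_WRAPPER_GENERATOR` procedure; the schema and table-type names are illustrative assumptions, not taken from this thread:

```sql
-- PDATA maps each parameter position of the PAL function to a table type.
-- Every POSITION value must appear exactly once (distinct); duplicates
-- make the registration finish with errors in the indexserver trace.
CREATE COLUMN TABLE PDATA (
    "POSITION"       INT,
    "SCHEMA_NAME"    NVARCHAR(256),
    "TYPE_NAME"      NVARCHAR(256),
    "PARAMETER_TYPE" VARCHAR(7)
);
INSERT INTO PDATA VALUES (1, 'MYSCHEMA', 'PAL_DATA_T',    'in');
INSERT INTO PDATA VALUES (2, 'MYSCHEMA', 'PAL_CONTROL_T', 'in');
INSERT INTO PDATA VALUES (3, 'MYSCHEMA', 'PAL_RESULT_T',  'out');

CALL SYSTEM.afl_wrapper_generator(
    'PAL_ANOMALY_DETECTION1', 'AFLPAL', 'ANOMALYDETECTION', PDATA);
```

A quick `SELECT "POSITION", COUNT(*) FROM PDATA GROUP BY "POSITION" HAVING COUNT(*) > 1;` will surface any duplicate positions before you rerun the generator.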

Hope that helps

Cheers

Kingsley

Former Member

Hi Sergey,

I also faced a similar issue when invoking the PAL library, and I see it as well when executing some vector-based code in R. On further research I found it is the way numbers are treated in R: R treats numbers as "numeric" (a double in C and other languages). Numeric values are physically stored without any extra leading or trailing zeroes; thus the declared precision and scale of a column are maximums, not fixed allocations (in this sense the numeric type is more akin to varchar(n) than to char(n)). The actual storage requirement is two bytes for each group of four decimal digits, plus three to eight bytes of overhead. In addition to ordinary numeric values, the numeric type allows the special value NaN, meaning "not a number".

This changes the treatment of null values. Most classes in R expect numeric values rather than integers. If you have null values in a column declared as INTEGER, R does not know how to treat them and throws an error. You need to do some conversions, using functions like as.numeric or storage.mode, to declare your data set as numeric, and then run the algorithms. You can also change the column type to DOUBLE.

I faced this issue while building a stock simulator, where my default dataset was throwing an error similar to the one above. I found out that there were null values in my dataset that were treated as NA, since the data type was integer. After changing the storage mode of the dataset to numeric, it worked. I may not be 100% correct here, but this is what I could gather from the R forums I visited while solving my error. Hope it helps you a little.