
How to see what a specific load process is doing?

justin_molenaur2
Contributor
0 Kudos

Hi all, I'm having a bit of a problem: I have a load (not replication) job that seems to be hanging (not failing), but it is also not loading any data into HANA and never finishes.

- Attempted to load table STXL in full, 440k or so rows loaded into HANA

- Created an include for the BOR event to filter out all but 1 record; this worked fine in about 3 minutes, with 1 record present in HANA

- Added additional code to decompress a compressed column format, shown below. This was tested in a 'standalone' fashion as a program in an ECC system and then migrated to SLT as an include. In this case, the load job is just "hanging": it's not failing, but it's not finishing either. When I look in the monitor, I can see it running "In Process" for the last 15 minutes or so.

From here, no data ever makes it into HANA and the job in SLT doesn't seem to finish.

How can I diagnose where this is getting hung up, or where the problem is? Normally if the code is bad I would get a hard error in the application logs, which I don't get here.

Code for include

*&---------------------------------------------------------------------*
*&  Include           Z_TEST_IMPORT_TEXT
*&---------------------------------------------------------------------*

TYPES: BEGIN OF ty_stxl_raw,
         clustr TYPE stxl-clustr,
         clustd TYPE stxl-clustd,
       END OF ty_stxl_raw.

DATA: lt_stxl_raw TYPE STANDARD TABLE OF ty_stxl_raw,
      wa_stxl_raw TYPE ty_stxl_raw,
      lt_tline    TYPE STANDARD TABLE OF tline,
      wa_tline    TYPE tline.

* Restrict the test to a single text
IF <WA_S_STXL>-TDNAME <> '0020874233'.
  SKIP_RECORD.
ENDIF.

* Clear both work tables so nothing is carried over from a previous record
CLEAR: lt_stxl_raw, lt_tline.

* Put the compressed cluster fields into the internal table the IMPORT works on
wa_stxl_raw-clustr = <WA_S_STXL>-clustr.
wa_stxl_raw-clustd = <WA_S_STXL>-clustd.
APPEND wa_stxl_raw TO lt_stxl_raw.

* Decompress the text cluster into TLINE format
IMPORT tline = lt_tline FROM INTERNAL TABLE lt_stxl_raw.

* Take the first text line and map it to the target field
READ TABLE lt_tline INTO wa_tline INDEX 1.
<WA_R_STXL>-TEXT = wa_tline-tdline.

Accepted Solutions (0)

Answers (2)

Former Member
0 Kudos

Hi Justin,

thank you very much for your support. We managed to spot the cause of the dumps in the initial load process. Starting the initial load via the program "DMC Starter" in SE38 helped a lot.


We came across two major issues:

1. If the text in CLUSTD consists of more than one line in STXL, a dump occurs. We put a filter on column CLUSTR (CLUSTR < 7902) AND a filter on column SRTF2 (SRTF2 = 0) to get only the texts which consist of one row in STXL (see the sketch after this list).

2. We're getting another dump at the "READ TABLE lt_tline INTO..." line in cases where there are more than 345 lines in the internal table. We also put a filter on the number of lines to be processed.

3. Another issue we came across is that it is only possible to work with CHAR fields inside the include, so we are restricted to a maximum of 5,000 characters per text. If we want to use NCLOB fields for large texts, we get an error in the ABAP editor, so only CHAR fields are working. I guess it has something to do with a restriction of the internal ABAP variables, but I'm not an ABAP expert.
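For reference, the same restriction can also be expressed directly inside the include as a guard before the decompression starts. A rough sketch using the field names from the include above (we actually set the filter in the load/replication settings rather than in the code, so treat this only as an illustration):

* Only process single-row texts: first cluster row (SRTF2 = 0) that is not
* completely full (CLUSTR < 7902, i.e. no continuation row follows)
IF <WA_S_STXL>-SRTF2 <> 0 OR <WA_S_STXL>-CLUSTR >= 7902.
  SKIP_RECORD.
ENDIF.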

Bottom line is, we got the replication to work. But we still have a couple of restrictions which we have to tackle.

Best regards,

Frank

justin_molenaur2
Contributor
0 Kudos

Frank, awesome to hear you worked through the issues.

Regarding DMC_STARTER, can you share how you were able to find the issue with such a large dataset? Did you just step through until you hit a dump? When I referred to the documentation on debugging (which I had not yet created!), here is what I gathered just last week as a rough start. I'll probably repost this as a blog if you can confirm it is similar to your steps.

- Ensure that whatever scenario you would like to test has already been started

- For example if you want to test initial load, start the initial load and let it fail, and follow the steps below. When DMC_STARTER kicks up, it will start initial load again in debug.

- If you want to test replication phase (picking up logging table records to process), get a table to replication status, stop the master job, then make a change in the source to log new entries in the logging table and follow the steps below. When DMC_STARTER kicks up, it will start in replication phase, picking up logging table records for processing.

After the above is "set up" for testing, do the following; some of the function modules will have different names in your implementation.

- SE38, DMC_STARTER

- Use Migration Object Z_<TABLE_NAME>_<CONFIGURATION_ID>, Z_STXL_001 for example, access plan = 1, test mode = X

- /h to enable debug

- Find the OLC function module call (CALL FUNCTION olc_rto_ident), set a breakpoint, and continue to enter the FM

- Enter /1CADMC/OLC_100000000000*, find CALL FUNCTION '/1CADMC/IL_100000000000769', set a breakpoint, and continue to enter the FM

- Enter /1CADMC/IL_100000000000769, find PERFORM _BEGIN_OF_RECORD_0001_, set a breakpoint, and continue to enter it

- Enter PERFORM _RULE_BOR_C; the include code should be found here and you can step through it.

Regarding the issues you discovered:

1) When I mentioned the code changes I need to add to the other blog, I was referring to the following:

- Check for <WA_S_STXL>-SRTF2 = '0' to only grab the first line (as you mention)

- The CLUSTR size as you mention is a helpful check.

- You also need to put a check above the processing of the record to handle DELETEs, else you'll get a short dump. When a deletion occurs, only the key columns are passed through this logic, so when we are working (importing) on a non-key field it will of course be empty and cause a dump.

if <WA_S_STXL>-CLUSTD is not initial.

*Put Source fields into internal table for IMPORT statement to work on, main logic here

endif.
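Putting that together with the original include, a fuller sketch of what I mean would look roughly like this (same field names as above; not yet the revised blog version):

* Deletes pass only the key fields, so skip the decompression when the
* non-key cluster field is empty
IF <WA_S_STXL>-CLUSTD IS NOT INITIAL.
  CLEAR: lt_stxl_raw, lt_tline.
* Put the source fields into the internal table for the IMPORT statement to work on
  wa_stxl_raw-clustr = <WA_S_STXL>-clustr.
  wa_stxl_raw-clustd = <WA_S_STXL>-clustd.
  APPEND wa_stxl_raw TO lt_stxl_raw.
  IMPORT tline = lt_tline FROM INTERNAL TABLE lt_stxl_raw.
  READ TABLE lt_tline INTO wa_tline INDEX 1.
  <WA_R_STXL>-TEXT = wa_tline-tdline.
ENDIF.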

2) The specific texts we were targeting would never be this large; they only consist of a single line, so we never hit this issue. In cases where you exceed 345 lines, are you just truncating the remainder?

All in all, great stuff - good to see someone taking this concept further. If you want a deeper look at folks who are doing much heavier text processing with additional logic (and who are much stronger ABAPers than I), see Mass reading standard texts (STXH, STXL).

Happy HANA!

Justin

Former Member
0 Kudos

Hi Justin,

I'm having the same problem with SLT replication of the STXL table. I'm using your include to decipher the long text in the CLUSTD column.

In my case the replication status stays in "Initial Load". What we found out is that there's a dump in ST22 relating to the include script.

We are getting a "CONNE_IMPORT" error. We are currently assuming that it has something to do with some Japanese characters in the CLUSTD field.

We filtered the replication down to one single row, for which the initial load was successful and the replication worked fine after the load.

Did you happen to get to the cause of your issue?

justin_molenaur2
Contributor
0 Kudos

I have a document I need to share that will illustrate how to debug, but in the absence of that, let's get some basic information.

- Set up the same scenario that was causing the failure; I believe you mentioned starting replication

- I assume that the only way you know an error occurred is through ST22, and all other status mechanisms (LTRC, Studio, etc.) are green?

- Check the logging table to see if you have any records that have Operation = 'D'

If the last point is true, you may be capturing deletes, which the code as shown above doesn't handle; I need to revise it.
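If you want a quick check, something along these lines should do it; note that the generated logging table name and the operation flag column below are assumptions/placeholders on my part, so look up the actual generated names for your configuration in LTRC:

* Count captured deletes waiting in the SLT logging table.
* /1CADMC/00001234 and IUUC_OPERAT_FLAG are placeholders - check the
* generated logging table for your configuration for the real names.
DATA lv_deletes TYPE i.

SELECT COUNT(*) FROM /1cadmc/00001234 INTO lv_deletes
  WHERE iuuc_operat_flag = 'D'.

WRITE: / 'Delete records waiting in the logging table:', lv_deletes.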

To prove your Japanese-characters theory, you can take the very first record sitting in the logging table in ECC and run the READ_TEXT function module on it in the source. There you should see whether that is the case.
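Roughly like this; the ID/OBJECT/NAME/LANGUAGE values below are only placeholders (take the actual TDID, TDOBJECT, TDNAME and TDSPRAS from the STXL record in question):

DATA: ls_header TYPE thead,
      lt_lines  TYPE STANDARD TABLE OF tline.

CALL FUNCTION 'READ_TEXT'
  EXPORTING
    id        = 'ST'           " TDID from the STXL key (placeholder)
    language  = 'J'            " TDSPRAS, e.g. Japanese for the suspect record
    name      = '0020874233'   " TDNAME from the STXL key (placeholder)
    object    = 'TEXT'         " TDOBJECT from the STXL key (placeholder)
  IMPORTING
    header    = ls_header
  TABLES
    lines     = lt_lines
  EXCEPTIONS
    not_found = 1
    OTHERS    = 2.

IF sy-subrc = 0.
* Text was read - inspect lt_lines for the characters in question
ENDIF.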

Regards,

Justin

Former Member
0 Kudos

A better debugging procedure is definitely desirable, as we didn't find any way to identify the particular row that caused the dump. The dump itself even mentions more than one particular row.

Some more details about our scenario:

- Our SLT version is DMIS_2011_1_731

- The replication for several ECC tables in HANA is up and running

- Replication for table STXL is based on your tutorial "How To...Load and Convert SAP Long Text into HANA using SLT", except for the filter; we implemented the filter for the rows with entries in table DMC_ACSPL_SELECT

The initial load works fine if we deactivate the ABAP include for the long text decompression. After activating the ABAP include, we get the dump after ~15,000 rows. For the rows processed before the dump comes up, we do get the decompressed texts in HANA.

We tried to filter the initial load to one particular row. In that case, the initial load was successful and the replication also worked fine. We altered the long text in ECC and got the altered text replicated into HANA. Hence I think that the dump is caused by some characters in the long text which the script isn't able to handle. The first row mentioned in the dump contains some Japanese characters, which we can see when opening the underlying transaction in ECC.

Best regards,

Frank

justin_molenaur2
Contributor
0 Kudos

So this is definitely a different issue than I ran into.

What I can already tell you is that you need to add an IF statement to cover the cases where the text is deleted. Put a check for IF <WA_S_STXL>-CLUSTD is not initial before the conversion part starts.

The issue here is that for Operation = D (Delete), only the key fields are filled at the time the include executes. CLUSTD is not a key field and is therefore initial, so the decompression fails and causes the short dump. I need to update the original post when I get time.

Anyhow, the short dump may give you a clue as to which record it failed on; even the debug procedure I mention won't help you there. If you suspect that the Japanese characters are the issue, then once you locate a "problem" record, try the READ_TEXT FM on it in the source system, or even write a program that does the same conversion (see the sketch below) and see if you can simulate the failure.
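A minimal sketch of such a standalone test report, assuming you restrict yourself to single-row texts (SRTF2 = 0) as discussed above; the report name and selection screen are of course just illustrative:

REPORT z_test_stxl_convert.

* Standalone simulation of the SLT include logic for one STXL text,
* to try to reproduce the CONNE_IMPORT dump outside of SLT.
TYPES: BEGIN OF ty_stxl_raw,
         clustr TYPE stxl-clustr,
         clustd TYPE stxl-clustd,
       END OF ty_stxl_raw.

DATA: lt_stxl_raw TYPE STANDARD TABLE OF ty_stxl_raw,
      lt_tline    TYPE STANDARD TABLE OF tline,
      wa_tline    TYPE tline,
      lx_error    TYPE REF TO cx_root,
      lv_message  TYPE string.

PARAMETERS: p_object TYPE stxl-tdobject OBLIGATORY,
            p_name   TYPE stxl-tdname   OBLIGATORY,
            p_id     TYPE stxl-tdid     OBLIGATORY,
            p_spras  TYPE stxl-tdspras  OBLIGATORY.

START-OF-SELECTION.
* Only the first cluster row (SRTF2 = 0), i.e. single-row texts,
* matching the restriction used in the SLT scenario
  SELECT clustr clustd FROM stxl
    INTO TABLE lt_stxl_raw
    WHERE relid    = 'TX'
      AND tdobject = p_object
      AND tdname   = p_name
      AND tdid     = p_id
      AND tdspras  = p_spras
      AND srtf2    = 0.

  TRY.
*     Same decompression statement as in the include. Note that not every
*     IMPORT runtime error is class-based, so a short dump can still occur
*     here, just as it does in SLT.
      IMPORT tline = lt_tline FROM INTERNAL TABLE lt_stxl_raw.
    CATCH cx_root INTO lx_error.
      lv_message = lx_error->get_text( ).
      WRITE: / 'IMPORT failed:', lv_message.
      RETURN.
  ENDTRY.

  LOOP AT lt_tline INTO wa_tline.
    WRITE: / wa_tline-tdline.
  ENDLOOP.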

Not sure if you can use the language field in STXL to help you narrow down that search and prove your hypothesis.

Regards,

Justin