
Mapping Issue for IDoc to JDBC interface

former_member621323
Participant
0 Kudos

Hi All,

I am having a problem implementing the logic in an IDoc to JDBC interface where I have to filter out E1WBB07-KSCHL = VKP0.

The source IDoc structure looks like this:

E1WBB01 (occ 0-1000)
  |-> E1WBB03 (occ 0-100)
        |-> E1WBB07 (occ 0-1000)
              |-> KSCHL
                  DATAB
                  DATBI

Now, for each KSCHL = VKA0 there should be a duplicate VKP0 record. Of these two records, only the VKA0 should get processed and the VKP0 ignored.

Duplicates for VKP0 and VKA0 can be identified by identical DATAB and DATBI.

Suppose that in one E1WBB03 segment there are 4 E1WBB07 segments with the following values.

1: KSCHL=VKP0, DATAB=20102011, DATBI=25102011

2: KSCHL=VKP0, DATAB=26102011, DATBI=30102011

3: KSCHL=VKA0, DATAB=26102011, DATBI=30102011

4: KSCHL=VKP0, DATAB=01112011, DATBI=31129999

2 & 3 are duplicates. From these, 2 should get dropped.

As a result only 1, 3 and 4 should get processed.

How can I proceed with this? I have tried some workarounds but have not been able to do it successfully. Is a UDF required to compare DATAB and DATBI? If yes, how can it be written?

Thanks in advance,

Praveen.

Accepted Solutions (1)

PriyankaAnagani
Active Contributor
0 Kudos

Hi Praveen,

Concatenate the two fields DATAB & DATBI and apply the logic below...

sortByKey>collapseContext>targetField

SortByKey:

input1: concatenation of DATAB & DATBI --> removeContext

input2: KSCHL --> removeContext

Regards

Priyanka

former_member621323
Participant
0 Kudos

Hi Priyanka,

Your solution won't work, as collapseContext will keep only the first value, but I also need the other VKA0 values which are not duplicates.

Regards,

Praveen.

PriyankaAnagani
Active Contributor
0 Kudos

Hi Praveen,

Use splitByValueChange before collapseContext:

sortByKey --> splitByValueChange --> collapseContext --> target

Regards,

Priyanka

former_member621323
Participant
0 Kudos

Hi Priyanka,

Thanks for the response.

I have tried your logic, but the problem is it's not removing the duplicate VKP0 value.

Regards,

Praveen.

former_member621323
Participant
0 Kudos

Hi,

Can anyone tell me how to achieve this using a UDF and, if possible, provide the code for it? The inputs to the UDF will be KSCHL, DATAB and DATBI.

Regards,

Praveen.

prasanthi_chavala
Active Contributor
0 Kudos

Praveen,

You can achieve this using standard functions alone. Check out the sample given in the link below, where the output will have the values VKP0 & VKA0 as per the example you gave above with four segments.

[http://www.flickr.com/photos/69291190@N06/6299487406/]

Thanks,

Prasanthi.

Former Member
0 Kudos

Hi Praveen,

Please find the UDF below; the inputs will be the count of E1WBB07 segments [cnt], KSCHL [dt1], DATAB [dt2] and DATBI [dt3].

You only have to change the result.addValue according to the target field: KSCHL [dt1], DATAB [dt2], DATBI [dt3].

Please refer to this link for the mapping screenshot: [http://www.flickr.com/photos/69284901@N04/6299622189]

// cnt[0] holds the number of E1WBB07 segments; dt1 holds the KSCHL values
int len = Integer.parseInt(cnt[0]);
for (int i = 0; i < len; i++)
{
  if (i == 0)
  {
    if (dt1[i].equals("VKP0") || dt1[i].equals("VKA0"))
      result.addValue(dt1[i]);
  }
  else
  {
    // suppress the value when it repeats the previous KSCHL, otherwise pass it through
    if (dt1[i].equals(dt1[i - 1]))
      result.addValue(ResultList.SUPPRESS);
    else
      result.addValue(dt1[i]);
  }
}

- Muru

former_member621323
Participant
0 Kudos

Hi Prasanthi,

Many thanks for your reply.

Your solution works fine for a single segment, but I have multiple segments in the IDoc where an E1WBB07 segment has 3 or 4 values of VKP0 and VKA0 which are not duplicates; in that case collapseContext removes values which are not intended to be removed.

Regards,

Praveen.

former_member621323
Participant
0 Kudos

Hi Muru,

Thanks for your reply.

Your code is not working properly. Let me clarify a bit. In the IDoc there will be many E1WBB03 segments, and each E1WBB03 will contain many E1WBB07 segments. Among those E1WBB07 segments there might be a pair of duplicate values for VKP0 and VKA0 as explained in my first post, but other E1WBB07 segments in other E1WBB03 segments might not contain a duplicate pair, and there may be other KSCHL values like ZTIN etc. besides VKA0 and VKP0.

I want to write the UDF in such a way that it compares DATAB and DATBI for two E1WBB07 segments and, if they are equal (or the concatenated value is equal), adds VKA0 to the result list, else adds the values present in KSCHL to the result list.

Any input will be highly appreciated.

Regards,

Praveen.

Answers (4)

Former Member
0 Kudos

Check the mapping below.

Change the context of DATAB, DATBI and KSCHL to E1WBB03 (right click -> Context) in all the mappings shown below.

1)

DATAB & DATBI --> concat --> sort --> splitByValue (value change) --> collapseContext --> TargetNode

2)

DATAB & DATBI --> concat --> sortByKey (input 1)
KSCHL --> sortByKey (input 2)
sortByKey --> formatByExample (input 1)
DATAB & DATBI --> concat --> sort --> splitByValue (value change) --> formatByExample (input 2)
formatByExample --> sort --> UDF1 --> Target KSCHL

3)

DATAB & DATBI --> concat ( ; ) --> sort --> splitByValue (value change) --> collapseContext --> splitByValue (each value) --> UDF2 --> Target DATAB

4)

DATAB & DATBI --> concat ( ; ) --> sort --> splitByValue (value change) --> collapseContext --> splitByValue (each value) --> UDF3 --> Target DATBI

UDF1: execution type: all values of a context; input: var1


int a = var1.length;
int count = 0;
if (a >= 2)
{
    // count how many VKA0 values appear in this context
    for (int i = 0; i < a; i++)
    {
        if (var1[i].equals("VKA0"))
        {
            count = count + 1;
        }
    }
}
else
{
    // only one value in the context, pass it through
    result.addValue(var1[0]);
}

if (count > 1)
{
    // output one VKA0 per occurrence found above
    for (int i = 0; i < count; i++)
    {
        result.addValue("VKA0");
    }
}

UDF2: execution type: single value; input: var1


String[] temp = var1.split(";");
return temp[0];   // the part before ";" is DATAB

UDF3: execution type: single value; input: var1


String[] temp = var1.split(";");
return temp[1];   // the part after ";" is DATBI

former_member621323
Participant
0 Kudos

Hi Amit,

Thanks for your valuable input, but I guess it's becoming a bit complicated. My target structure contains 5 table structures of the database, so it becomes difficult to match all the data if I use sorting. Hence I want to use a filter mapping before the main mapping gets executed. This filter mapping will be between IDoc and IDoc, and it will filter out the E1WBB07 segments which are not required.

I will concatenate DATAB and DATBI and feed this input alongside KSCHL to a UDF. When the concatenated values are equal, VKA0 will be added to the result list; if not equal, then all values will be added.

Take this case: an E1WBB01-E1WBB03 which contains the 4 E1WBB07 segments below.

E1WBB07-1: KSCHL=VKP0 DATAB=20092011 DATBI=25092011 KWERT=3.50

E1WBB07-2: KSCHL=VKP0 DATAB=26092011 DATBI=30092011 KWERT=3.50

E1WBB07-3: KSCHL=VKA0 DATAB=26092011 DATBI=30092011 KWERT=2.99

E1WBB07-4: KSCHL=VKP0 DATAB=01102011 DATBI=31129999 KWERT=3.50

Now the concatenated values of 2 and 3 are the same, so these are duplicates. From these, only the VKA0 will be picked and the VKP0 filtered out; the rest will be picked as well. Finally, 1, 3 and 4 will be passed to the next mapping. If there is no duplicate value, then all 4 will be passed.
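Roughly, the UDF for this filter mapping would look something like the sketch below (not tested, just an outline; I am assuming the inputs are KSCHL and the concatenated DATAB+DATBI value, both with context set to E1WBB03 and execution type 'all values of a context', and the output mapped to the target E1WBB07 node so that a suppressed value drops the whole segment):

public void filterVKP0(String[] kschl, String[] key, ResultList result, Container container)
    throws StreamTransformationException
{
  // key[i] = DATAB + DATBI of the i-th E1WBB07 under the current E1WBB03
  for (int i = 0; i < kschl.length; i++)
  {
    boolean duplicateOfVKA0 = false;
    if (kschl[i].equals("VKP0"))
    {
      // drop this VKP0 only if a VKA0 with identical DATAB and DATBI exists
      for (int j = 0; j < kschl.length; j++)
      {
        if (j != i && kschl[j].equals("VKA0") && key[j].equals(key[i]))
        {
          duplicateOfVKA0 = true;
          break;
        }
      }
    }
    if (duplicateOfVKA0)
      result.addValue(ResultList.SUPPRESS);   // drops the duplicate VKP0 segment
    else
      result.addValue(kschl[i]);              // keeps VKA0, ZTIN and non-duplicate VKP0
  }
}

With the sample above this should suppress only segment 2 and let 1, 3 and 4 through, if I get the context handling right.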

With Regards,

Praveen.

former_member621323
Participant
0 Kudos

Hello All,

Any new inputs please? I am not able to resolve this yet.

Regards,

Praveen.

PriyankaAnagani
Active Contributor
0 Kudos

Hi Praveen,

Please try the logic below, and if it is not working, post your expected target structure for the sample data in your first post so that I can provide you with the mapping logic.

formatByExample>UDF>targetElement

formatByExample:

input1:sortByKey

input2: concatenation of DATAB & DATBI --> removeContext --> sort --> splitByValueChange

SortByKey:

input1: result of the concatenation --> removeContext

input2: KSCHL --> removeContext

public void udf(String[] var1, ResultList result, Container container) throws StreamTransformationException{

    // each context holds the KSCHL values that share the same DATAB+DATBI key
    int length = var1.length;
    if(length == 1)
        result.addValue(var1[0]);
    else{
        boolean vka0Added = false;
        for(int i = 0; i < length; i++){
            if(var1[i].equals("VKA0") || var1[i].equals("VKP0")){
                // keep a single VKA0 for the duplicate VKA0/VKP0 pair
                if(!vka0Added){
                    result.addValue("VKA0");
                    vka0Added = true;
                }
            }
            else
                result.addValue(var1[i]);
        }
    }
}

The execution type of the UDF is 'all values of a context'.
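For example, with the sample data from your first post (and assuming an ascending string sort), the concatenated keys sort to 0111201131129999, 2010201125102011, 2610201130102011, 2610201130102011, so formatByExample groups the KSCHL values into the contexts [VKP0], [VKP0], [VKP0, VKA0]. The UDF then passes the single values through and keeps a single VKA0 for the last context, i.e. records 4, 1 and 3 of your example are kept and the duplicate VKP0 (record 2) is dropped.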

Regards,

Priyanka

former_member621323
Participant
0 Kudos

Hi Priyanka,

Thanks for your reply. Actually I want to avoid sorting, as it's getting difficult to match the data with the other segments in the incoming IDocs. I am now thinking of a filter mapping before the main mapping, which will filter out the E1WBB07 segments which have duplicate VKP0 values and send the remaining values to the target structure in the main mapping.

Can you provide me the code to filter out these duplicate VKP0 values, like the ones in my first post?

With Regards,

Praveen.

Former Member
0 Kudos

Go for a UDF (queue function). The inputs would be the fields KSCHL, DATAB and DATBI.

For the same values of DATAB and DATBI, check the KSCHL values for VKA0 and VKP0.
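A rough outline of such a UDF (execution type 'all values of a context'; the input names are only assumptions and the code is untested):

public void dropDuplicateVKP0(String[] kschl, String[] datab, String[] datbi, ResultList result, Container container)
    throws StreamTransformationException
{
  for (int i = 0; i < kschl.length; i++)
  {
    boolean duplicate = false;
    if (kschl[i].equals("VKP0"))
    {
      // look for a VKA0 with identical DATAB and DATBI in the same context
      for (int j = 0; j < kschl.length; j++)
      {
        if (j != i && kschl[j].equals("VKA0") && datab[j].equals(datab[i]) && datbi[j].equals(datbi[i]))
        {
          duplicate = true;
          break;
        }
      }
    }
    if (duplicate)
      result.addValue(ResultList.SUPPRESS);   // drop the duplicate VKP0
    else
      result.addValue(kschl[i]);
  }
}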

Regards

former_member621323
Participant
0 Kudos

Hi,

Thanks for the reply. Could you provide the complete code for the UDF?

Regards,

Praveen.

Former Member
0 Kudos

Praveen

Concatenate DATAB and DATBI, followed by splitByValue (at value change), then use collapseContext to produce the target.

It will remove the duplicate, and you can use the suppressed context in formatByExample to generate the target fields DATAB and DATBI.

Use the same logic with formatByExample to produce the target mapped from KSCHL.

Regards

Raj