Split source message into multiple target messages before message mapping executes

Former Member

Hi,

I have a scenario where data is pulled from a database through a JDBC sender channel (pull). Each pull might contain around 100 records. We need to send each record to the target as an individual message, and a mapping error in one record should not stop the processing and sending of the other records. I have applied a 1:N mapping and set the occurrence of the target interface to unbounded in the interface determination. The problem is that in case of a mapping error the whole batch fails, even though the error concerns only a single record, because the split happens after the message mapping.

Is there any way to split the data pulled from the database into one message per record before the actual message mapping executes? I know BPM offers a ForEach option, but we want to avoid BPM because a large number of records is processed and the mapping also contains a JDBC lookup.
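To make the idea concrete, here is a minimal plain-Java DOM sketch of the split I am after (this is only an illustration, not actual PI mapping code: the Record element name stands in for my JDBC response structure, and the ns0:Messages/ns0:Message1 envelope is the usual multi-mapping wrapper). In practice this logic would go into a Java mapping step placed before the graphical mapping in the operation mapping:

```java
import java.io.InputStream;
import java.io.OutputStream;
import javax.xml.parsers.DocumentBuilder;
import javax.xml.parsers.DocumentBuilderFactory;
import javax.xml.transform.Transformer;
import javax.xml.transform.TransformerFactory;
import javax.xml.transform.dom.DOMSource;
import javax.xml.transform.stream.StreamResult;
import org.w3c.dom.Document;
import org.w3c.dom.Element;
import org.w3c.dom.Node;
import org.w3c.dom.NodeList;

public class RecordSplitter {

    // Wraps each <Record> of the JDBC response in the multi-mapping envelope,
    // so that the graphical mapping / interface determination afterwards sees
    // one record per target message. "Record" is a placeholder element name.
    public void split(InputStream in, OutputStream out) throws Exception {
        DocumentBuilderFactory dbf = DocumentBuilderFactory.newInstance();
        dbf.setNamespaceAware(true);
        DocumentBuilder db = dbf.newDocumentBuilder();
        Document source = db.parse(in);

        final String SPLIT_NS = "http://sap.com/xi/XI/SplitAndMerge";
        Document target = db.newDocument();
        Element messages = target.createElementNS(SPLIT_NS, "ns0:Messages");
        Element message1 = target.createElementNS(SPLIT_NS, "ns0:Message1");
        messages.appendChild(message1);
        target.appendChild(messages);

        NodeList records = source.getElementsByTagName("Record");
        for (int i = 0; i < records.getLength(); i++) {
            // Each record becomes its own occurrence under Message1; with the
            // interface occurrence set to unbounded, each occurrence is then
            // delivered as a separate message. Per-record validation or a
            // try/catch could also go here so one bad row is skipped.
            Node copy = target.importNode(records.item(i), true);
            message1.appendChild(copy);
        }

        Transformer t = TransformerFactory.newInstance().newTransformer();
        t.transform(new DOMSource(target), new StreamResult(out));
    }
}
```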

Accepted Solutions (0)

Answers (1)

rajasekhar_reddy14
Active Contributor

1) Why are you looking for this kind of design? Do you want to avoid any dependency between the records? If your mapping logic is really solid and covers all the validations, you can avoid most mapping failures (95%).

2) My suggestion would be to maintain correct data in the DB table before the JDBC adapter pulls it, to implement the validations in PI as well, and to use multi-mapping (a small sketch of the per-record validation idea follows below). Even if a message fails due to a mapping error, you always have the option to restart the complete message.

3) Write the JDBC select statement so that it pulls only one record at a time, but I consider that a very poor design.

I do not recommend BPM for this interface.
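To illustrate points 1) and 2), here is a minimal plain-Java sketch of the per-record validation idea: each pulled row is checked in its own try/catch, so a single bad row is reported and skipped instead of failing the whole multi-mapping. The field names ORDER_ID and AMOUNT are only placeholders; in PI this logic would sit inside the message mapping or a user-defined function:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;

public class PerRecordValidation {

    // Error type for a single record that cannot be mapped safely.
    static class RecordError extends Exception {
        RecordError(String msg) { super(msg); }
    }

    // Validate one pulled row; throw if a mandatory field is missing or malformed.
    static void validate(Map<String, String> row) throws RecordError {
        String orderId = row.get("ORDER_ID");
        if (orderId == null || orderId.isEmpty()) {
            throw new RecordError("missing ORDER_ID");
        }
        String amount = row.get("AMOUNT");
        if (amount == null) {
            throw new RecordError("missing AMOUNT");
        }
        try {
            Double.parseDouble(amount);
        } catch (NumberFormatException e) {
            throw new RecordError("AMOUNT is not numeric: " + amount);
        }
    }

    // Keep only records that pass validation; collect error texts instead of
    // letting a single bad row abort the whole batch.
    static List<Map<String, String>> filter(List<Map<String, String>> rows,
                                            List<String> errors) {
        List<Map<String, String>> good = new ArrayList<>();
        for (Map<String, String> row : rows) {
            try {
                validate(row);
                good.add(row);
            } catch (RecordError e) {
                errors.add(row.get("ORDER_ID") + ": " + e.getMessage());
            }
        }
        return good;
    }
}
```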

Former Member

Hi Raja,

Yes, they don't want any dependency between the records, and the data is critical. So they want that if one record fails due to any error (in the IE or the AE), the other records in that data pull still get processed. Is there any other option in PI, without using BPM, to split the pull into single records right after the data is fetched?