
Enhancing the capability of existing Module

Former Member
0 Kudos

Hi Experts,

We have a module which reads 70 files sequentially. Each file contains around 25,000 records (around 1.5 MB), and each file has three levels, e.g. Header, Content and Detail.

Step 1 >> The module reads each file at the header and content level and validates its correctness. If a content record is not correct, the corresponding detail-level records are deleted from the original source and added to the error log.

Step 2 >> After the file has been pre-scanned for validation and the correct data filtered out in the previous step, it is read again from top to bottom at the detail level. Depending on the values present in 4-5 fields (out of 13 in total), the data are segregated into various groups and dumped as different files in different locations. We term each group a Scenario. There are 12 such scenarios.

Step 3 >> Later, separate communication channels pick up the data from the various locations (one per scenario) and post it to the target system.

The overall process is slow, taking around 25 minutes per file. The business doesn't allow us to split the file, so we have been asked to redesign the entire process.
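For illustration only, the Step 2 segregation described above can be sketched as a single-pass grouping of detail records. The 13-field record layout and the positions of the key fields (here, fields 4-7) are assumptions, not the actual segregation rules:

```java
import java.util.*;
import java.util.stream.*;

public class ScenarioSegregator {

    // A detail record is modeled as a list of 13 fields. The scenario key
    // is derived from a few key fields; positions 3..6 are assumed here.
    static String scenarioKey(List<String> record) {
        return String.join("|", record.subList(3, 7));
    }

    // Groups all detail records by scenario key in one pass over the data.
    static Map<String, List<List<String>>> segregate(List<List<String>> records) {
        return records.stream()
                .collect(Collectors.groupingBy(ScenarioSegregator::scenarioKey));
    }
}
```

With 25,000 records per file, a single grouping pass like this is linear in the record count, so the cost should be dominated by I/O rather than by the segregation itself.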

Step 1 and Step 2 take more time than the others, as the program reads the file field by field, performing validation and applying the segregation rules.

We are thus thinking of dumping the data into a database table after Step 1 is completed, and reading the data from the database through the JDBC adapter. But we are not sure. We understand that dumping data on the XI server is not good practice, so we are looking for better alternatives.
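If the data were staged in a database table after Step 1, a JDBC batch insert would keep the round trips bounded. A minimal sketch, assuming a hypothetical DETAIL_STAGE table with three columns (the real schema would mirror the detail-level fields):

```java
import java.sql.*;
import java.util.*;

public class StagingLoader {

    // Hypothetical staging table and columns; the real DDL would come
    // from the actual detail-record structure.
    static final String INSERT_SQL =
        "INSERT INTO DETAIL_STAGE (FIELD1, FIELD2, FIELD3) VALUES (?, ?, ?)";

    // Splits records into fixed-size chunks so executeBatch() is called
    // per chunk, keeping memory use and database round-trips bounded.
    static <T> List<List<T>> partition(List<T> records, int batchSize) {
        List<List<T>> batches = new ArrayList<>();
        for (int i = 0; i < records.size(); i += batchSize) {
            batches.add(records.subList(i, Math.min(i + batchSize, records.size())));
        }
        return batches;
    }

    // Writes one chunk of records (each assumed to have 3 fields here)
    // through a JDBC connection using batched statements.
    static void insertBatch(Connection conn, List<String[]> batch) throws SQLException {
        try (PreparedStatement ps = conn.prepareStatement(INSERT_SQL)) {
            for (String[] rec : batch) {
                for (int i = 0; i < rec.length; i++) {
                    ps.setString(i + 1, rec[i]);
                }
                ps.addBatch();
            }
            ps.executeBatch();
        }
    }
}
```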

Could you please advise on the best possible approach for this enhancement?

Regards,

SS

Accepted Solutions (1)


rajasekhar_reddy14
Active Contributor
0 Kudos

Is there any way to implement this logic outside PI?

You mentioned that it is taking 25 minutes to complete the process. I am confused here: is the interface execution taking a long time? Are you expecting to fix the existing Java code, or do you want to go for a new design?

My thought here: if it is possible to implement Step 1 and Step 2 outside PI, then that would really be the right way to go.

Regards,

raj

Former Member
0 Kudos

Unfortunately, not outside PI :).

A new design within PI is possible. We are allowed to change the Java code and the way it is carried out.

We may not use file I/O any more. In the extreme case, as I said, we may dump the data into database tables and apply queries to fetch the data for the different scenarios, instead of doing it at the code level.

Any thought?

Regards,

SS

Edited by: Subhendu Sahu on Aug 16, 2011 1:50 PM

former_member854360
Active Contributor
0 Kudos

Yes

Create a project in NWDS, copy and paste the same source code, then modify the code according to your requirement.

Ryan-Crosby
Active Contributor
0 Kudos

Hi,

Without seeing the details of your requirements, I envision something like loading the data into a database as you say... BW would be perfect for this, I think. Load the file content into tables (drop the tables first if the data needs to be refreshed for each set of files), using whatever transformation rules you need to split the data into your 12 separate scenarios. Then send the data to the different targets for loading.

Regards,

Ryan Crosby

Answers (1)


ravi_raman2
Active Contributor
0 Kudos

Hi,

Doing heavy, complex processing inside an adapter module is NOT SAP best practice, especially so much java.io...

This is what I would do:

1) Read the entire file and pass it to the mapping as a single payload source with different target mappings, putting the segregation logic into the Java mapping.

2) Write the different files to their locations...

Processing as usual.
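A minimal sketch of point 1, assuming delimited detail lines whose first field identifies the scenario (the real segregation would use the 4-5 key fields from the requirement, and the `;` delimiter is an assumption):

```java
import java.util.*;

public class SplitMapping {

    // One pass over the whole payload: each line is routed to the buffer
    // for its scenario. The scenario detection here is a placeholder
    // (first delimited field of each line).
    static Map<String, StringBuilder> split(List<String> detailLines) {
        Map<String, StringBuilder> targets = new LinkedHashMap<>();
        for (String line : detailLines) {
            String scenario = line.split(";", 2)[0];   // assumed delimiter
            targets.computeIfAbsent(scenario, k -> new StringBuilder())
                   .append(line).append('\n');
        }
        return targets;   // one entry per scenario
    }
}
```

Each map entry would then become one output file per scenario in step 2.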

Regards

Ravi Raman

Former Member
0 Kudos

Thanks for the reply.

As I said, Step 2 and Step 3 involve reading each line of the file and applying rules. The log says that a lot of time is spent there. That's the reason we want to dump the data into a database and apply queries to fetch scenario-specific data (which I believe will save a lot of time).

Updating the database through mapping is also not recommended; EJBs are a better means for database operations. Can we split the whole functionality into two modules? The first module just validates the data and dumps it into the database. The second module calls different Java proxies (beans), which in turn fetch scenario-specific data and send it to the Integration Engine for further processing?
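If the second module fetches per scenario from a staging table, each proxy could issue one parameterized query instead of re-scanning the file. A sketch that only builds the SQL text; DETAIL_STAGE, PAYLOAD and the key-column names are assumptions:

```java
import java.util.*;

public class ScenarioQueryBuilder {

    // Builds a parameterized SELECT over the key columns that define a
    // scenario. Each "?" would be bound to one scenario key value on a
    // JDBC PreparedStatement.
    static String buildQuery(List<String> keyColumns) {
        StringJoiner where = new StringJoiner(" AND ");
        for (String col : keyColumns) {
            where.add(col + " = ?");
        }
        return "SELECT PAYLOAD FROM DETAIL_STAGE WHERE " + where;
    }
}
```

This keeps the segregation rules in one place (the WHERE clause) so the 12 scenarios differ only in the bound parameter values.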

Please share your views.

Regards,

SS