Buffering data using hashed tables - is this wise?
I'm required (for a forecasting Z-development) to fill a huge table with data from sales orders, deliveries, and the material master. Since the table is an infostructure and the users want to be able to search/report on quite a few material master fields, the resulting table has a long key, and therefore I need to take extra care that I fill it correctly.
For performance reasons, is it a good idea to use hashed tables (one for each relevant material master table, e.g. MARA and MVKE) as a data buffer in the function module I use to create 'initial' (= key-only) lines for this purpose? Or is there a better way to get consistency and performance at the same time?
(The hashed tables are global to the function group, and they are never refreshed by the function module. The function module is called about 20.000 times by the main program; on each call it first looks in the hashed table, and only when it doesn't find the info there does it fetch it from the database.)
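To make the question concrete, here is a minimal sketch of the read-through buffer pattern I mean, for the MARA case. All names (the buffer type, the form, the fields read) are illustrative, not my actual Z-development, and I'm assuming that a database hit is inserted into the buffer so the next call finds it there:

```abap
* Illustrative only - types and names are assumptions, not the real code.
TYPES: BEGIN OF ty_mara_buf,
         matnr TYPE mara-matnr,
         mtart TYPE mara-mtart,
       END OF ty_mara_buf.

* Global to the function group, never refreshed during the run.
DATA gt_mara_buf TYPE HASHED TABLE OF ty_mara_buf
                 WITH UNIQUE KEY matnr.

FORM get_mara USING    iv_matnr TYPE mara-matnr
              CHANGING cs_mara  TYPE ty_mara_buf.
* First try the buffer - READ on a hashed table with the full
* table key is a constant-time lookup.
  READ TABLE gt_mara_buf INTO cs_mara
       WITH TABLE KEY matnr = iv_matnr.
  IF sy-subrc <> 0.
*   Miss: fetch from the database once...
    SELECT SINGLE matnr mtart
           FROM mara
           INTO CORRESPONDING FIELDS OF cs_mara
           WHERE matnr = iv_matnr.
    IF sy-subrc = 0.
*     ...and remember it, so the next of the ~20.000 calls
*     for this material is served from memory.
      INSERT cs_mara INTO TABLE gt_mara_buf.
    ENDIF.
  ENDIF.
ENDFORM.
```

The point of the hashed table (rather than a standard or sorted table) is that the lookup cost stays flat no matter how many materials end up buffered over the 20.000 calls.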