
11.5: Time-Consuming to concatenate columns in a repeater loop?

Former Member

Hi,

I am using a repeater loop and an assignment block to concatenate column values into a single string, which is passed to a Query Template as part of a WhereClause.

This seems to be extremely time-consuming, so I was wondering if I am doing something wrong or if there is a faster way to do the concatenation. Building the WhereClause took ~5 sec with 353 rows in the repeater and ~4.5 min with 2,483 rows.

Repeat on TempXML
{
    If (first)
        Assign tempString = expression
    Else
        Assign tempString = tempString & expression
}

The expression does not seem that complex; below is the expression from the Else branch.

Local.tempString = Local.tempString & " OR (a.ID=" & Repeat.Output{/Row//ID} & " AND TO_DATE('" & Repeat.Output{/Row/DATE} & "','" & Transaction.strDateFormatORA & "')" & " between b.START_DATE AND b.END_DATE ) "

tempXML was earlier assigned as the result of the following branch:

StringListToXMLParser(ID)
StringListToXMLParser(DATE)
ColumnAliasID(ID)
ColumnAliasDate(Date)
Union(ColumnAliasID.Result and ColumnAliasDate.Result)
Normalize the Union
tempXML = Normalize.Output
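For readers unfamiliar with those actions: the net effect of the parse/alias/union/normalize chain is simply to pair the i-th ID with the i-th DATE in a single rowset the repeater can walk. A rough Python sketch of that pairing (the comma delimiter and element names here are assumptions, not the actual MII output format):

import xml.etree.ElementTree as ET

id_list = "1001,1002,1003"                       # fed to StringListToXMLParser(ID)
date_list = "2008-01-15,2008-02-03,2008-02-07"   # fed to StringListToXMLParser(DATE)

rowset = ET.Element("Rowset")
for id_value, date_value in zip(id_list.split(","), date_list.split(",")):
    row = ET.SubElement(rowset, "Row")
    ET.SubElement(row, "ID").text = id_value
    ET.SubElement(row, "DATE").text = date_value

# Each <Row> now carries the paired values the repeater expression reads via
# Repeat.Output{/Row/ID} and Repeat.Output{/Row/DATE}.
print(ET.tostring(rowset, encoding="unicode"))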

Any help, ideas, thoughts, or even just confirmation that this is expected behavior would be appreciated.

Thanks a lot,

Kerby

Accepted Solutions (0)

Answers (2)

agentry_src
Active Contributor

Hi Kerby,

Pardon my curiosity, but why in the world would you need 2,483 rows added to a where clause? It does not sound like a very efficient approach to retrieving your data. But on the other hand, I don't know your circumstances.

Have you saved the aggregated where clause using a tracer? Save it and then paste it into the query template and see how long the query template takes to execute. That would let you check to make sure that it is the where clause aggregation and not the actual execution of the query that is taking so long.

It looks like the ID and Date need to be paired up. If that is not the case or if the dates fall into common ranges, you could use ID IN () instead of ID=. That would make the query more efficient on the database side.
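To illustrate the IN () idea: if many rows share the same date, the pairs can be grouped by date so the clause carries one IN list per distinct date instead of one term per row. A rough sketch in Python rather than BLS (the rows and date format are made up):

from collections import defaultdict

rows = [
    {"ID": "1001", "DATE": "2008-01-15"},
    {"ID": "1002", "DATE": "2008-01-15"},
    {"ID": "1003", "DATE": "2008-02-07"},
]
date_fmt = "YYYY-MM-DD"  # stand-in for Transaction.strDateFormatORA

# Collect the IDs that share each date.
ids_by_date = defaultdict(list)
for row in rows:
    ids_by_date[row["DATE"]].append(row["ID"])

# One term per distinct date, with an IN list of the IDs for that date.
terms = []
for date_value, ids in ids_by_date.items():
    terms.append("(a.ID IN (" + ",".join(ids) + ")"
                 + " AND TO_DATE('" + date_value + "','" + date_fmt + "')"
                 + " between b.START_DATE AND b.END_DATE)")

where_clause = " OR ".join(terms)
print(where_clause)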

And what Service Release of 11.5 are you using?

Mike

Former Member

Hi Michael,

We are using Service Release 11.5.1.b57.

At a high level, the need for 2000+ rows in the where clause comes from having 2000+ possible rows of data from different source systems; we need to look up those discrete parameters in a configuration database to find out which source system is needed for which parameters and which query templates to use. Also, some of the 2000+ rows that could have valid data will not actually have data.

Basically, it's a two-step process: get "ALL" the available rows of procedure-parameter pairings, then pass the paramIDs, paramDates, and procedures from those rows to the BLS I am working on to get the actual "Results" for those rows. To get the results, we look up information about each parameter in a config database to find out the source system, the query template, the date range the parameter is valid for, etc. (this is where the where clause is used), and then query the source systems needed for the results. So there could easily be 2k-5k+ rows to look up in the configuration DB. Hopefully this makes sense at a high level.

The problem is that the IDs and dates do need to be paired up, so the string becomes exceedingly long. I placed event loggers around the creation of the where clause and around the query to the DB: creating the where clause was taking ~4.5 min, while querying the DB took only ~1 min.

Thanks.

Kerby

Former Member

Kerby:

Why not simply use an outer join in the database and get all of the combined data in a single query?

Rick

Former Member

Hi Rick,

Different source systems are used for the "ALL" rows versus the "RESULTS" rows. And because the application is up and working (although sometimes slowly), I need to limit the amount of reconstructive surgery I do and mainly just provide improvements to the structure already in place.

For example: switch from querying the respective source systems for each distinct procedure to doing a batch query, so each source system is only hit once. However, the gain from querying only once is eaten up by the time it takes to concatenate the whereClause together. Thus, my present problem... 😃
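Concretely, the batching amounts to grouping the looked-up rows by source system and query template and firing one query per group. A rough sketch with hypothetical field and function names, since the config-DB columns are not shown in this thread:

from collections import defaultdict

# Hypothetical shape of the config-DB lookup results; the field names,
# system names, and template names are illustrative only.
lookup_rows = [
    {"param_id": "1001", "date": "2008-01-15", "system": "WERUM",  "template": "QT_Results_Werum"},
    {"param_id": "1002", "date": "2008-01-15", "system": "WERUM",  "template": "QT_Results_Werum"},
    {"param_id": "1003", "date": "1999-06-01", "system": "LEGACY", "template": "QT_Results_Legacy"},
]

# Group by (source system, query template) so each system is queried once.
batches = defaultdict(list)
for row in lookup_rows:
    batches[(row["system"], row["template"])].append(row)

for (system, template), rows in batches.items():
    # One batch query per source system/template instead of one per parameter.
    # results = run_query_template(system, template, rows)   # hypothetical call
    print(template, "on", system, "covers", len(rows), "parameters")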

Thanks,

Kerby

Former Member

What are the two source systems? Is either of them (or both) MS SQL Server? If so, an alternative would be to use a linked server and do the outer join...

Former Member

Hi Rick,

They are all Oracle (either 8g or 10g, depending on the system). Part of the problem with doing a single outer join is that the system that needs to be queried depends on the parameter ID and the date range of the data desired, i.e., if the date range is X we query Werum, and if it is before X we need to go to the previous (obsolete) systems to get the data. So in the BLS we look up, in an Oracle configuration DB, each parameter's ID and date range plus the system and corresponding query template we need to use to get the data.

Hope this helps. I don't really think I'll be able to drastically simplify the structure; I'm just looking for a way to make the lookups more efficient than the current method.

Thanks.

Kerby

Former Member

While not ideal, another option would be to use a stored procedure: iterate through the records and append the string in PL/SQL, then return the finished string.

Former Member

Thanks Rick, I'll look into doing it that way. It does seem like that could work.

Former Member

You could try unwinding the loop with a for-each and processing 10 or so rows each time, or you might try formatting it using an XSL transform.
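A rough sketch of the chunking idea, in Python rather than BLS (chunk size and row values are arbitrary):

# Build and run the clause for a small batch of rows at a time instead of
# concatenating all rows into one giant string.  Illustration only.
rows = [{"ID": str(i), "DATE": "2008-01-15"} for i in range(1, 26)]
CHUNK = 10  # "10 or so rows each time"

def fragment(row):
    return ("(a.ID=" + row["ID"] + " AND TO_DATE('" + row["DATE"]
            + "','YYYY-MM-DD') between b.START_DATE AND b.END_DATE)")

for start in range(0, len(rows), CHUNK):
    chunk = rows[start:start + CHUNK]
    where_clause = " OR ".join(fragment(r) for r in chunk)
    # run the query template once per chunk with this shorter clause
    print(len(chunk), "rows,", len(where_clause), "characters")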

Former Member

Hi Christian,

That is roughly how it is currently being done (unwound, querying 0-100 rows at a time). I was hoping to speed it up by making it a single query, so that the later queries to the source systems would also only be run once.

Thanks,

Kerby