
Expert suggestion on Delta Merge of delta data in HANA

chandan_praharaj
Contributor

Hi Experts,

I am using the sequence of steps below, inside a stored procedure scheduled via XSJOBS, to refresh a Z table in my schema from a calc view: I delete all existing data and load it again in full.

Is there a better approach where I update only the delta records, so that the run time of the data load goes down and the XSJOBS job completes faster?


create local temporary table #ZZ_TAB like "CHANDAN_SCHEMA"."ZZ_TAB"; -- same structure as CALC_VIEW
insert into #ZZ_TAB
select * from CALC_VIEW;
delete from "CHANDAN_SCHEMA"."ZZ_TAB";
insert into "CHANDAN_SCHEMA"."ZZ_TAB"
select * from #ZZ_TAB;
merge delta of "CHANDAN_SCHEMA"."ZZ_TAB" WITH PARAMETERS ('FORCED_MERGE' = 'ON');
drop table #ZZ_TAB;
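One common incremental pattern, if the target table has a primary key and the source rows carry a reliable change timestamp, is to upsert only the rows changed since the last run. The sketch below uses HANA's UPSERT ... WITH PRIMARY KEY; note that the CHANGED_AT column and its maintenance are assumptions, not part of the original code:

```sql
-- Incremental refresh: copy only rows changed since the last load.
-- Assumes ZZ_TAB has a primary key and CALC_VIEW exposes a CHANGED_AT timestamp.
upsert "CHANDAN_SCHEMA"."ZZ_TAB"
select * from CALC_VIEW
 where CHANGED_AT > (select ifnull(max(CHANGED_AT), '1900-01-01')
                       from "CHANDAN_SCHEMA"."ZZ_TAB")
with primary key;
```

Caveat: an upsert alone never removes rows that were deleted in the source, so this only works if source rows are inserted or updated, never deleted.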

Accepted Solutions (0)

Answers (1)


lbreddemann
Active Contributor

Looks like you want to micro-manage a lot here.

It's very likely, though, that your performance problems are not caused by the increased run time of queries that read data from the delta store.

Instead, I highly recommend reviewing the query logic and trying to get rid of the temporary-table data-copy steps.
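If the temporary table is only there to stage the calc view result, the whole procedure can be collapsed into a single pass. A sketch, reusing the table and view names from the thread and assuming their column lists match:

```sql
-- Single-pass refresh: no temporary table, no forced merge.
truncate table "CHANDAN_SCHEMA"."ZZ_TAB";
insert into "CHANDAN_SCHEMA"."ZZ_TAB"
select * from CALC_VIEW;
-- Let HANA's automatic merge (mergedog) decide when to merge the delta store.
```

This halves the data movement: the calc view result is materialised once instead of twice, and the forced merge is left to the system.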

chandan_praharaj
Contributor

Hi Lars,

Yes, regarding the select query on the calc view, I am already trying to optimise it. But here, in a single stored procedure, I am loading several of these Z tables in my schema from different calc views.

So I need your expert advice on whether I can optimise further (by updating only the delta records), as I have a huge data volume (payment data at item level).

Since I have to load a big chunk of data each run, how can I update only the changed records?

Many thanks for your input.

Regards,

Chandan

lbreddemann
Active Contributor

As mentioned before, you should not try to optimise relatively unimportant pieces of a procedure that is broken in the first place.

You very likely won't get tremendous benefits out of having the data in the main store instead of the delta store. So focusing on this is not the key to this problem.

Look into why you need to copy the data all over the place at all.

Why can't you build the queries on the original data?

The key to making things faster is to not do unnecessary work in the first place - not to do the same amount of work slightly quicker.
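Before forcing merges at all, it is worth verifying whether the delta store is even a factor. A sketch querying the M_CS_TABLES monitoring view (column names as per the HANA monitoring-view documentation; verify against your release):

```sql
-- How big is the delta store of ZZ_TAB really?
select table_name,
       raw_record_count_in_delta,  -- rows still sitting in the delta store
       memory_size_in_delta,       -- bytes consumed by the delta store
       last_merge_time
  from M_CS_TABLES
 where schema_name = 'CHANDAN_SCHEMA'
   and table_name  = 'ZZ_TAB';
```

If the delta row count stays small between automatic merges, a forced merge buys essentially nothing.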

chandan_praharaj
Contributor

Look into why you need to copy the data all over the place at all.

Why can't you build the queries on the original data?

Because I am creating another calc view on top of this table, which is consumed in Fiori via OData. To make the UI respond faster, I came up with this approach.

So I am running the stored procedure every 5 minutes to replicate the data into the Z table, so that the topmost calc view, which the UI consumes, responds faster.

Hope this clarifies my architectural approach.

Regards,

Chandan

lbreddemann
Active Contributor

OK, so your data is at most 5 minutes old, but you need to optimise in the sub-second range to eliminate delta-store read effects?

That doesn't seem to make sense at all.

How much data do you try to expose in the UI?

How long does your calc view take to execute the query? And how many records are returned?

What is the maximum response time for the UI and why?
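Those questions can be answered with data rather than guesses. A sketch using EXPLAIN PLAN and the M_EXPENSIVE_STATEMENTS monitoring view (the expensive-statements trace must be enabled, and the exact column set is release-dependent):

```sql
-- Inspect how the calc view query is executed.
explain plan for
select * from CALC_VIEW;
select operator_name, operator_details, output_size
  from explain_plan_table;

-- Runtimes of recently traced expensive statements.
select statement_string, duration_microsec, records
  from M_EXPENSIVE_STATEMENTS
 order by start_time desc;
```

Only once the actual runtime and row counts are known does it make sense to decide whether any caching layer is needed at all.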

What you are trying to implement here is really a cache - and a cache should be the very last resort to go to.