Application Development Discussions

Number Range Object vs. concurrent access, need sequence

Former Member

Hi there,

-- short version --

1. There's a number range object.

2. Many users access it while working in the system.

3. I need to ensure that, for a single database update consisting of many single-record updates in a loop, the returned number range values are in sequence (assumption: NUMBER_GET_NEXT-QUANTITY = 1).

4. Will a mutex suffice (the performance impact is negligible)?

4.b. If a mutex is to be used, do I have to turn number range buffering off?

-- long version --

An external company designed an export interface which logs changes in the HCM module on the fly.

The internal table with the changes is processed on a per-row basis.

Let's assume the table looks as follows:


IT_CHANGES-INFTY - infotype
IT_CHANGES-MODE  - [I]nsert / [M]odify / [D]elete
IT_CHANGES-X     - some data

IT_CHANGES[1]: 0001/D/some_data1
IT_CHANGES[2]: 0001/I/some_data2
IT_CHANGES[3]: 0001/M/some_data3
IT_CHANGES[4]: 0002/D/some_data4

Now the interface works more or less like this:

LOOP AT IT_CHANGES.
  PERFORM prepare_log_entry.
  PERFORM generate_unique_number.
  PERFORM insert_to_db.
ENDLOOP.

So basically, the interface takes row 1, prepares the log entry, generates a unique record number (using the number range), inserts it into a Z* table, and then rinses and repeats.
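
For reference, a minimal sketch of what the number fetch inside generate_unique_number might look like, assuming a custom number range object ZLOGNR with interval 01 (both names are assumptions, not taken from the actual interface):

DATA lv_number TYPE n LENGTH 10.

* Fetch the next single number from the (assumed) number range object ZLOGNR.
CALL FUNCTION 'NUMBER_GET_NEXT'
  EXPORTING
    nr_range_nr             = '01'
    object                  = 'ZLOGNR'
    quantity                = '1'
  IMPORTING
    number                  = lv_number
  EXCEPTIONS
    interval_not_found      = 1
    number_range_not_intern = 2
    object_not_found        = 3
    quantity_is_0           = 4
    quantity_is_not_1       = 5
    interval_overflow       = 6
    buffer_overflow         = 7
    OTHERS                  = 8.
IF sy-subrc <> 0.
  " Handle the error, e.g. raise a message or abort the logging run.
ENDIF.

QUANTITY = '1' matches the assumption in the short version above: one number per logged record.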

Now there's a need to ensure that all the entries with INFTY = 0001 have their record numbers in sequence.

Given per-row internal table processing and concurrency (many users can perform actions logged using the same number range object), it may happen - and in fact it did happen - that the numbers were not in sequence.

The problem is that I do not want to modify the interface to work with tables instead of rows, as that would require rewriting it almost from scratch (basically it's composed of many sub-interfaces, logging various activities).

I've come up with an idea which I need a hand with - will the following ensure the numbers are generated in sequence?

LOCK_MUTEX.
LOOP AT IT_CHANGES.
  PERFORM prepare_log_entry.
  PERFORM generate_unique_number.
  PERFORM insert_to_db.
ENDLOOP.
UNLOCK_MUTEX.

I know a mutex is not an elegant way to solve the issue (performance-wise), but assuming there are about 100 log entries a day, I think it would not have a significant performance impact.
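
For illustration, one way the mutex could be realised is with an enqueue lock held around the whole loop - here the generic table lock ENQUEUE_E_TABLE / DEQUEUE_E_TABLE on the log table (the table name ZHCM_CHANGELOG is an assumption); _WAIT = 'X' makes concurrent processes queue instead of failing immediately:

* Acquire the "mutex": exclusive lock on the (assumed) log table ZHCM_CHANGELOG.
CALL FUNCTION 'ENQUEUE_E_TABLE'
  EXPORTING
    mode_rstable   = 'E'              " exclusive lock
    tabname        = 'ZHCM_CHANGELOG'
    _wait          = 'X'              " wait for a foreign lock instead of failing
  EXCEPTIONS
    foreign_lock   = 1
    system_failure = 2
    OTHERS         = 3.
IF sy-subrc <> 0.
  " Lock could not be obtained - abort or retry here.
  RETURN.
ENDIF.

LOOP AT IT_CHANGES.
  PERFORM prepare_log_entry.
  PERFORM generate_unique_number.
  PERFORM insert_to_db.
ENDLOOP.

* Release the "mutex".
CALL FUNCTION 'DEQUEUE_E_TABLE'
  EXPORTING
    mode_rstable = 'E'
    tabname      = 'ZHCM_CHANGELOG'.

The _WAIT flag serialises the concurrent processes; with roughly 100 log entries a day, the resulting wait times should be negligible.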

Cheers,

Bart

4 REPLIES

matt
Active Contributor

The requirement for an uninterrupted sequence is also one found in many finance applications. For these, it is sufficient - and required - to turn number range buffering off.

I've not looked at whether your mutex idea will work or not - if it did work but required number range buffering to be off, then surely it doesn't add any value?

matt

Former Member

Oh well, I'll have to try the mutex. Hope it'll work.

matt
Active Contributor

Here's a possibility you might like to explore. You can have gaps, but each record within a set needs to be sequential. You know that there won't be more than, e.g., 100 records in each set. Get the next number from the NRO and multiply it by 100. Then, within the processing of the set, just add 1 for each record.

So user 1 gets the number 1 from the NRO, and therefore generates records 100, 101, 102, 103 ... 165.

And user 2, a split second later, gets number 2, and therefore generates records 200, 201, 202, 203, 204 ... 243.

This would work regardless of whether the NRO is buffered or not. The only limitations are the number of records in each set and the size of the sequential field.
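
A minimal sketch of how this block-allocation idea could look inside the existing loop, assuming a block size of 100 and the same hypothetical number range object ZLOGNR as above:

CONSTANTS lc_block_size TYPE i VALUE 100.

DATA: lv_base   TYPE n LENGTH 10,
      lv_offset TYPE i VALUE 0,
      lv_recno  TYPE n LENGTH 12.

* One NUMBER_GET_NEXT call per set instead of one per record.
CALL FUNCTION 'NUMBER_GET_NEXT'
  EXPORTING
    nr_range_nr = '01'
    object      = 'ZLOGNR'
  IMPORTING
    number      = lv_base
  EXCEPTIONS
    OTHERS      = 1.
IF sy-subrc <> 0.
  RETURN.
ENDIF.

LOOP AT IT_CHANGES.
  PERFORM prepare_log_entry.
  " Record number = base * block size + position within the set.
  lv_recno  = lv_base * lc_block_size + lv_offset.
  lv_offset = lv_offset + 1.
  PERFORM insert_to_db.
ENDLOOP.

With user 1's base number 1 this yields 100, 101, 102, ...; with user 2's base number 2 it yields 200, 201, 202, ... as described above.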

Former Member

Hi,

I thought about this approach for a while, because in fact the number of records to be processed in a loop >= the number of get_next_number calls (some records are filtered). Solving it this way requires some modifications (modifying code, interfaces, classes). Anyway, I'll have to keep it in mind in case the performance hit becomes an issue.

Cheers,

Bart