MobiLink is deleting my data

Former Member

I'm using SQL Anywhere 16, with MobiLink. Both consolidated and remote are SQL Anywhere. I'm using a timestamp-based download technique.
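
For reference, the download side is plain timestamp-based filtering; a minimal download_cursor sketch, with a made-up table and last_modified column, looks something like this:

    -- download_cursor: send rows modified since the last successful download;
    -- table and column names here are only illustrative
    select pk_col, data_col
      from MyTable
     where last_modified >= {ml s.last_table_download}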

The problem:

1. Users want to change the case of text in Primary Key columns

2. prevent_article_pkey_update setting won't allow it

3. So users delete, then reinsert with changed case

4. MobiLink cannot handle this scenario

     - Change is uploaded as a delete and an insert

     - Upload_Insert is processed before Upload_Delete

     - Insert fails with PK violation

     - Delete succeeds

     - Delete is downloaded

     - Both copies of row are now gone

If the user does the insert with identical case in the primary key column, the delete/insert get uploaded as an update. Since no data actually changes, there is no error. But if the primary key column is changed from lowercase to uppercase, the above problem occurs.

I know that the best practice is to use global autoincrement primary keys, but short of changing my table structure in a production database, what can I do to get around this problem?

TIA,

Eric

Accepted Solutions (1)

regdomaratzki
Advisor

Is the case sensitivity of the consolidated and remote database the same? I suspect not, and I think that might be the real source of your problem.  If you delete pkey "X" and then insert pkey "X" at the remote and then run dbmlsync, it should be sent as an update.  The fact that it is sent as an insert and delete tells me that the remote database is case sensitive.  The fact that the insert fails with a PK violation at the consolidated database tells me that the consolidated database is NOT case sensitive.
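
If you want to check from dbisql rather than through Sybase Central, querying the CaseSensitive database property against each database should tell you; this is just a quick sketch:

    -- run against both the consolidated and the remote database;
    -- returns 'On' for a case sensitive database, 'Off' otherwise
    select db_property( 'CaseSensitive' );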

Reg

Former Member

Reg,

I so wanted that to be it! Unfortunately both are case insensitive. To be precise, I'm looking at Database Properties/Extended Information/Case Sensitive. Both are off.

(BTW, long time no speak. Nice to see you're still around.)

Eric Murchie-Beyma

regdomaratzki
Advisor

I was wrong.  Delete "x" and then insert "X" will be an insert and delete, even on a case insensitive database.   

However, the following statements you made confuse me:

- insert fails with PK violation

- delete succeeds

The upload should either completely succeed or roll back.  How exactly does an upload with the insert and delete end up with half the transaction applied?  Are you using transactional uploads at the remote database?  Any chance you can post an ML log showing the failure/success?

Reg

Former Member

Reg,

Turns out we have a handle_error procedure that ignores PK not unique among other things, then keeps going (returns 1000). Long story, but we'd be at a standstill if we didn't do this. Then we have someone review the error logs to fix whatever it was. Okay, so that explains the lost data. Maybe it's time to review our handle_error procedure.
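
To give a sense of it, the handle_error is a connection script wired to a stored procedure roughly like the sketch below. This is simplified and not our exact code; the ml_error_log table is made up, and I'm assuming -193 is the SQL Anywhere SQLCODE that MobiLink hands the script for the primary key violation:

    -- simplified sketch of a handle_error connection script
    call ml_add_connection_script( 'v1', 'handle_error',
      'call my_handle_error( {ml s.action_code}, {ml s.error_code}, {ml s.error_message}, {ml s.user}, {ml s.table} )' );

    create procedure my_handle_error(
      inout action_code integer,
      in    err_code    integer,
      in    err_msg     long varchar,
      in    sync_user   varchar(128),
      in    sync_table  varchar(128) )
    begin
      -- log everything so someone can review the errors later
      -- (ml_error_log is a simple logging table we created ourselves)
      insert into ml_error_log( error_code, error_message, user_name, table_name, logged_at )
        values( err_code, err_msg, sync_user, sync_table, current timestamp );
      if err_code = -193 then    -- primary key not unique
        set action_code = 1000;  -- skip the row and keep going
      else
        set action_code = 3000;  -- default: roll back and cancel the synchronization
      end if;
    end;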

However, I think I see a double-bind in the logic when prevent_article_pkey_update is combined with a user who wants to change the case of a PK value. At the remote, it of course has to be done as a delete/insert. A perfectly legal action. Ordinarily, that would upload as an update, but it CAN'T when prevent_article_pkey_update is on, because the consolidated won't be allowed to do an update. So the other choice is to upload the delete and the insert. But MobiLink processes them in the wrong order.

And I DID get an update when the inserted row had the same case as the deleted row. It's only when I changed the case that the upload became a separate delete and insert. But again, whichever way the upload is done, it can't work.

It would be nice if prevent_article_pkey_update were able to recognize that a case change doesn't change the identity of the row in a case-insensitive database, and allow the update.

Short of that, can you see any way to get MobiLink to handle my user's delete/insert?

Thanks,

-Eric

regdomaratzki
Advisor

Interesting problem, and I’m surprised that it hasn’t come up at any point in the last 15 years.

We’re only concerned about the issue if the case sensitivity of the remote database and consolidated database is the same.  If they are different and something odd is happening, having different case sensitivity between the consolidated and remote was a poor design decision, and odd behavior is not unexpected.  Furthermore, if both remote and consolidated are case sensitive, there’s no problem, since there isn’t a primary key violation during the upload phase.  The only issue is when the remote and consolidated are both case insensitive (your situation), so we’re going to assume that to be true going forward.

The situation you’re running into is unique, since you’ve also coded a handle_error procedure that is ignoring primary key violations.  If that had not been coded, the upload would have failed, and the remote database would have had to ‘fix’ the issue (likely deleting the newly inserted row that had a different case).  We’re not overly happy about the behavior without the custom handle_error procedure either, although it does prevent loss of data.

We weren’t initially sure whether this behavior was something we’d done on purpose or not, because of how [insert random RDBMS here that we support] reacted to the situation.  We now don’t think it’s something we did on purpose; a change in case of the primary key on a case insensitive database is simply something we did not consider during the initial implementation of the code that coalesces operations on the same row when scanning the transaction log.  That led us to think about how we could address the issue with a code change.  Our initial thought was to send the two operations as an update, but the MobiLink stream is not set up to send different values (differing only in case or otherwise) for the primary key in the pre and post images of the row.  There are implications for conflict resolution and for filtering of rows in the download stream that would need to be considered, in addition to the fact that the upload_update synchronization script would somehow need to change the primary key as well.  This is a scary, non-trivial change that would touch a majority of the coalescing code on the remote side, likely require a change to the MobiLink stream so we can tell MobiLink whether the remote is case sensitive or not, and force some major changes to how conflict resolution and row filtering occur in the MobiLink Server.  A simpler fix could involve recognizing the delete/insert with a different case as an update of the “same” primary key value, but then the update sent to the consolidated would include the changes to non-pkey columns, but not the primary key.  Also not a great solution.

We started to consider solutions that don’t involve a code change, and we’re not super excited about any of them, but we think they’re better than the code change.

  1. At the remote side, if you can guarantee that the delete of ‘X’ and the insert of ‘x’ take place in separate transactions, you could tell dbmlsync to use transactional uploads (-tu), so that a commit will also take place in the consolidated database between the delete and the insert.  We’re really trying to ensure that instead of DELETE – INSERT – SYNCH, the order is DELETE – SYNCH – INSERT, which will solve the issue.
  2. At the remote side, if you can’t guarantee that the delete of ‘X’ and the insert of ‘x’ take place in separate transactions, you could keep a shadow table on the remote side that tracks which pkey values for the table have been deleted, and not allow a different mixed-case primary key to be re-inserted until there is a successful synchronization.  Sample code:
    create table Admin (
      admin_id varchar(64) primary key,
      data     varchar(64)
    );
    
    -- shadow table that remembers which primary key values have been
    -- deleted since the last successful synchronization
    create table admin_delete_shadow (
      pkey bigint default autoincrement primary key,
      admin_id varchar(64)
    );
    
    -- record every delete from Admin in the shadow table
    create trigger ad_admin after delete on Admin 
    referencing old as pr for each row
    begin
      insert into admin_delete_shadow(admin_id) values (pr.admin_id);
    end;
    
    -- block the re-insert of a previously deleted pkey value when only the
    -- case differs; hash() is case sensitive even though the database is not
    create trigger bi_admin before insert on Admin
    referencing new as nr for each row
    begin
      declare @deleted_hash varchar(40);
      if exists ( select 1 from admin_delete_shadow where admin_id = nr.admin_id ) then
        select hash(admin_id, 'SHA1') into @deleted_hash from admin_delete_shadow where admin_id = nr.admin_id;
        if ( @deleted_hash != hash( nr.admin_id, 'SHA1' ) ) then
          raiserror 28033 'Insert of primary key ''%1!'' on table Admin cannot take place until a synchronization occurs.' , nr.admin_id;
        end if;
      end if;
    end;
    
    -- clear the shadow table once a synchronization has succeeded
    create procedure sp_hook_dbmlsync_end()
    begin
      if exists ( select 1 from #hook_dict where name = 'exit code' and value = 0 ) then
        delete from admin_delete_shadow;
      end if;
    end;
    
  3. Combine ideas (1) and (2): track inserts and deletes on the tables in question between synchronizations, and only do transactional uploads (which are typically more expensive) if you determine that the case of a primary key has changed between synchronizations.  Unfortunately, whether an upload is transactional or not is not an extended option, so you can’t change it in the sp_hook_dbmlsync_set_extended_options stored procedure, which would have been super convenient.  You need to actually change the way you invoke dbmlsync based on the rows in the shadow tables for the tables in question.
  4. Something slightly more complicated, but this time at the server side, so you’d only need to modify the synchronization scripts in the consolidated database, and not the schema at every remote database.  Make use of the handle_upload event, where you have access to the contents of the upload stream.  In this event, check the rows that are being sent as inserts and the rows that are being sent as deletes and see whether you are running into the issue.  If you are, then “do the correct thing” with the row in question in the handle_upload event, and also track the table name and primary key value of the row you handled in a temporary table.  You’ll then need to modify your upload_insert and upload_delete scripts to be stored procedures that skip rows already handled in the handle_upload event (see the sketch after this list).
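
To make idea (4) a little more concrete, here is a rough sketch of the consolidated-side pieces.  The handle_upload logic itself is direct row handling (Java or .NET), so it is only described in a comment; what is shown is the tracking table it would populate and the upload scripts that skip rows it has already dealt with.  All of the names (ml_handled_pk, ml_admin_upload_insert, and so on) are made up for the example:

    -- tracking table populated by the handle_upload logic for delete/insert
    -- pairs it has already applied itself (names are illustrative)
    create table ml_handled_pk (
      table_name varchar(128),
      pk_value   varchar(64),
      primary key( table_name, pk_value )
    );
    
    -- upload_insert for Admin: skip rows already handled in handle_upload
    create procedure ml_admin_upload_insert( in new_admin_id varchar(64), in new_data varchar(64) )
    begin
      if not exists ( select 1 from ml_handled_pk
                       where table_name = 'Admin' and pk_value = new_admin_id ) then
        insert into Admin( admin_id, data ) values( new_admin_id, new_data );
      end if;
    end;
    
    -- upload_delete for Admin: same idea
    create procedure ml_admin_upload_delete( in old_admin_id varchar(64) )
    begin
      if not exists ( select 1 from ml_handled_pk
                       where table_name = 'Admin' and pk_value = old_admin_id ) then
        delete from Admin where admin_id = old_admin_id;
      end if;
    end;
    
    -- hook the procedures up as the upload scripts for the Admin table
    call ml_add_table_script( 'v1', 'Admin', 'upload_insert',
      'call ml_admin_upload_insert( {ml r.admin_id}, {ml r.data} )' );
    call ml_add_table_script( 'v1', 'Admin', 'upload_delete',
      'call ml_admin_upload_delete( {ml r.admin_id} )' );

You would also want to clear ml_handled_pk for the current synchronization, in the end_upload connection event for example, so the skip list does not grow between synchronizations.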

Reg

Former Member

Reg,

Thanks very much for the thorough answer. The client-side ideas aren't viable for us because they'll result in a missing row in between sync cycles, and also because we don't have enough control at the remote sites to do some of those steps.

The server-side solution looks like it can work. I'm going to add one wrinkle though. The handle_upload event will save the data for rows that need special handling, but not change the data yet. Then the upload scripts will be responsible for reversing the order of the insert and delete. That way, if the insert depends on any operations that happened earlier in the upload stream, the data will be in place when it's needed.

Thanks again.

Eric Murchie-Beyma

regdomaratzki
Advisor

Your wrinkle sounds fine.  While you're coding things in a handle_upload event, now is the perfect time to also review your handle_error procedure.  Since you have access to the upload stream in the event, you may be able to better handle the other database errors you've mentioned.

Reg Domaratzki

PS : Nice to talk to you again too.  Feel free to email me at firstname.lastname@sap.com if you feel like chatting

PPS : No, it wasn't because I knew Eric that he got a well-researched answer.  He had a really interesting problem that caught my interest.
