Experiences with JPA persistence providers other than SAP's standard one

former_member190457
Contributor
0 Kudos

Hi developers,

in this thread I would like to gather experiences from all those who have tried using JPA persistence providers other than SAP's standard implementation in productive environments.

It would be very interesting if you could share your experiences, for instance with regard to:

- advantages of a specific implementation

- disadvantages (if any)

- new features available (e.g. lazy loading support for LOBs and single-valued relationships)

- ease of transport (did you manage to somehow use the Java Dictionary DCs, or what?)

- ease of build (e.g. EclipseLink requires additional build plugins to be developed)

- ease of overall setup: how long did it take to set the DI stuff up (SLD SC creation/track creation/check-in/check-out/activate/transport/...)?

Thank you so much for sharing your experiences.

Regards

Vincenzo

Accepted Solutions (0)

Answers (1)

rolf_paulsen
Active Participant
0 Kudos

Hi Vincenzo,

unfortunately no responses up to now. Since our project is a few months away from productive use, I did not answer yet - and besides, you already know that our experiences with EclipseLink have been good so far.

I just want to correct one misleading point in your list.

You wrote:

- ease of build (e.g. EclipseLink requires additional build plugins to be developed)

This is not correct. EclipseLink works great without any build plugins. You just do not get lazy loading of @ManyToOne and @OneToOne relations. SAP JPA does not support lazy loading in these cases either (before 7.30).

But unlike SAP JPA (before 7.30), EclipseLink offers a way to get lazy loading in these cases. Because the SAP NetWeaver JEE implementation is functionally incomplete, you have to use static bytecode weaving to get it. But this is an issue of SAP NetWeaver, not of EclipseLink.
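For illustration, a rough sketch of what this looks like (entity and path names are made up; the weaver is EclipseLink's documented StaticWeave tool):

import javax.persistence.Entity;
import javax.persistence.FetchType;
import javax.persistence.Id;
import javax.persistence.ManyToOne;

// Hypothetical entity: the single-valued relation is declared lazy as usual.
// Without weaving, EclipseLink silently falls back to eager loading here.
@Entity
public class Invoice {
	@Id
	private long id;

	@ManyToOne(fetch = FetchType.LAZY)
	private Customer customer;
}

// Static weaving then runs as an additional build step, roughly:
//   java -cp eclipselink.jar:classes org.eclipse.persistence.tools.weaving.jpa.StaticWeave
//        -persistenceinfo conf classes classes-woven
// where "conf" is the directory containing META-INF/persistence.xml.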

Why do you get no feedback on your interesting question?

Possible reasons:

- In many projects, SAP JPA obviously does a very good job.

- Java'ers just use Hibernate and EclipseLink (and maybe others) and it simply works as expected. Remember: SAP introduced heavy class loading to support Hibernate, and SAP contributed to EclipseLink to support the NetWeaver platform. So where is the risk?

- In the days of JPA, Java'ers do not need tools like the Data Dictionary, and Java developers would never populate a system data source with their application tables.

- Maybe SAP NetWeaver is not too widely used as a "full" JEE application server.

Regards

- Rolf

former_member190457
Contributor
0 Kudos

Hi Rolf, thank you for your interesting post.

Actually, I found a blog where it was explained that an NWDI build plugin had to be developed in order to build EclipseLink-enabled DCs. I cannot find it at the moment, but if we can do without a build plugin, so much the better!

What I really find odd in SAP JPA 1.0, as stated in other posts, is that field-level lazy loading is not supported.

This is a huge pain point, in particular when you manipulate large LOBs.
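For reference, the standard JPA way to request it, which is only a hint a provider may legally ignore (a minimal sketch, entity name made up):

import javax.persistence.Basic;
import javax.persistence.Entity;
import javax.persistence.FetchType;
import javax.persistence.Id;
import javax.persistence.Lob;

// Hypothetical entity: the LOB should only be fetched on first access.
// FetchType.LAZY on @Basic is just a hint per the JPA spec, so a provider
// that loads it eagerly is still spec-compliant - just painful.
@Entity
public class Attachment {
	@Id
	private long id;

	@Lob
	@Basic(fetch = FetchType.LAZY)
	private byte[] content;
}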

- In days of JPA, Java'ers do not need tools like Data Dictionary, and Java developers would never populate a system data source with their application tables.

How do you handle DB tables using NWDI without Data Dictionary DCs? It would be interesting to know how transports can be done as well.

Thanks, regards

Vincenzo

rolf_paulsen
Active Participant
0 Kudos

Hi Vincenzo,

How do you handle db tables using NWDI without Data Dictionary DCs? It would be interesting to know how transports can be done as well.

There is no (easy) way to deploy table structures without Data Dictionary DCs. We deploy tables simply using DCs of type "External Library" (something like "Text" would also do) and put the DB scripts for table generation inside. These scripts can be created mostly automatically by EclipseLink based on the entities, including DB sequences, foreign key constraints, etc.

The DB admin unzips and runs them with some batch scripts we also deploy.

Automatic execution of SQL scripts on startup may also be possible, see e.g. [here|http://onpersistence.blogspot.com/2010/03/running-sql-script-on-startup-in.html], but our DB admins will never accept this.
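A sketch of one way to have EclipseLink write such scripts (the property keys are EclipseLink's documented ones; the unit name and paths are placeholders):

import java.util.HashMap;
import java.util.Map;
import javax.persistence.EntityManagerFactory;
import javax.persistence.Persistence;

public class GenerateDdlScripts {
	public static void main(String[] args) {
		Map<String, String> props = new HashMap<String, String>();
		// Write the DDL to script files instead of touching the database,
		// so the DB admin can review and run them manually.
		props.put("eclipselink.ddl-generation", "create-tables");
		props.put("eclipselink.ddl-generation.output-mode", "sql-script");
		props.put("eclipselink.application-location", "sql-out"); // output directory (example)
		props.put("eclipselink.create-ddl-jdbc-file-name", "create.sql");
		props.put("eclipselink.drop-ddl-jdbc-file-name", "drop.sql");

		EntityManagerFactory emf = Persistence.createEntityManagerFactory("myUnit", props);
		emf.createEntityManager().close(); // login triggers the DDL generation
		emf.close();
	}
}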

We decided not to use Data Dictionary DCs at the beginning of our project for the following reasons:

1. Deployment only works for the system datasource schema, which has restrictions (see [SAP help|http://help.sap.com/saphelp_nwce72/helpdata/en/d1/10d085b61e4fb1bcf893a675aadd0d/content.htm]) and was not desired.

2. Generation of DDL via the Data Dictionary is incomplete anyway, e.g. no sequences can be deployed/generated, so the DB admin must do manual work in any case.

3. Undeployment of Data Dictionary DCs may be dangerous (loss of table content).

4. Data Dictionary DCs are mostly useful only for the initial setup, not for (complex) DDL changes in future releases.

5. Our DB admins do not accept DDL they cannot look at prior to execution.

One of the greatest advantages of EclipseLink compared to Data Dictionary DCs is the easier development process with the "eclipselink.ddl-generation=drop-and-create-tables" feature. (Hibernate has a similar feature.) Add/rename a field/column in your entity, relations, etc., run a Java app or JUnit test that references an EntityManager, and your DB schema is fresh. You cannot achieve this agility using Data Dictionary DCs.
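As a sketch of that inner loop (e.g. in a JUnit setup method; the persistence unit name "devUnit" is a placeholder):

// Dev-only: rebuild the schema from the entities on every run.
Map<String, String> props = new HashMap<String, String>();
props.put("eclipselink.ddl-generation", "drop-and-create-tables");
EntityManagerFactory emf = Persistence.createEntityManagerFactory("devUnit", props);
emf.createEntityManager().close(); // login drops and recreates the tables
emf.close();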

Regards

-Rolf

former_member190457
Contributor
0 Kudos

Hi Rolf,

With regard to primary key generation, I have found several examples in the literature where technical keys are used rather than semantic keys.

I followed this path and used integer key fields and automatic JPA key value generation.

What I'm planning to do is to widen the field from int (32 bit) to long (64 bit) to future-proof the application.

Moreover, I am also considering using Oracle sequences to generate IDs rather than auto-generation, since the latter often causes lost values in ID generation because of block-allocation policies (default allocationSize = 50).
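Sketched out, that would look something like this (sequence and generator names are made up):

import javax.persistence.Entity;
import javax.persistence.GeneratedValue;
import javax.persistence.GenerationType;
import javax.persistence.Id;
import javax.persistence.SequenceGenerator;

@Entity
public class Customer {
	// 64-bit surrogate key, values drawn from an Oracle sequence
	@Id
	@SequenceGenerator(name = "customerSeq", sequenceName = "CUSTOMER_SEQ", allocationSize = 1)
	@GeneratedValue(strategy = GenerationType.SEQUENCE, generator = "customerSeq")
	private long id;
}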

Do you have any experience in this regard?

Thanks and regards

Vincenzo

rolf_paulsen
Active Participant
0 Kudos

Hi Vincenzo,

yes, semantic IDs do not have a place in JPA. Semantic keys are needed, but not as IDs.

Both SAP JPA and EclipseLink use GenerationType.TABLE if you define GenerationType.AUTO. ("AUTO" just means that the persistence provider "should pick an appropriate strategy for the particular database", according to the Javadoc.)

Both TABLE and SEQUENCE are somewhat automatic.

I guess the lost values occur because the fetching of IDs is done in another transaction (probably for performance reasons, so that the sequence table does not become a bottleneck).

On Oracle, the combination of allocationSize on the JPA side and INCREMENT BY / CACHE on the database side works as follows: allocationSize must be equal to INCREMENT BY, but JPA uses the intermediate numbers (which is not the case in normal (PL/)SQL programming). There is no JPA annotation equivalent to CACHE. But the fact that JPA uses the intermediate numbers from memory may be considered a JPA way of sequence caching (which may be further improved by a sequence CACHE for really big mass inserts in one transaction).

CACHE > 1 will give you lost values even with allocationSize = INCREMENT BY = 1.

On the other hand, allocationSize=1 may give bad performance on mass inserts because the JPA provider must ask the database for every instance. allocationSize>1 (e.g. 20) is better but will again yield lost values. (But who cares with "long"?)
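A sketch of a matching pair (sequence and generator names are made up):

import javax.persistence.Entity;
import javax.persistence.GeneratedValue;
import javax.persistence.GenerationType;
import javax.persistence.Id;
import javax.persistence.SequenceGenerator;

// Database side, for context:
//   CREATE SEQUENCE ORDER_SEQ START WITH 1 INCREMENT BY 20 NOCACHE;
@Entity
public class PurchaseOrder {
	// allocationSize must equal INCREMENT BY: the provider fetches one
	// sequence value and hands out the 19 intermediate numbers from memory.
	@Id
	@SequenceGenerator(name = "orderSeq", sequenceName = "ORDER_SEQ", allocationSize = 20)
	@GeneratedValue(strategy = GenerationType.SEQUENCE, generator = "orderSeq")
	private long id;
}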

There is one important issue with both automatic value generation strategies - GenerationType.TABLE and GenerationType.SEQUENCE: the ID cannot be set by yourself on instantiation of an entity object. The JPA spec defines that the ID is set at the latest on EntityManager.flush or on transaction commit, which is sometimes too late if you have container-managed transaction boundaries.

But both SAP JPA and EclipseLink ensure that with TABLE and SEQUENCE the ID is already set after the call to EntityManager.persist(newObject). This helps a lot, but may not be enough.

Example:


@Entity(name="T_ORDER")
public class Order {
...
	@OneToMany(targetEntity=OrderItem.class, mappedBy="order", cascade=CascadeType.ALL)
	private List<OrderItem> items = new ArrayList<OrderItem>();
...
	public void addItem (OrderItem item) {
		this.items.add(item);
		item.setOrder(this);
	}
}

@Entity(name="T_ORDERITEM")
public class OrderItem {
	@ManyToOne(targetEntity = Order.class)
	private Order order;
...
}


...
EntityManager em = emf.createEntityManager();
em.getTransaction().begin();

Order o = new Order();
OrderItem i1 = new OrderItem();
o.addItem(i1);
em.persist(o);           // assigns IDs to o and i1 (persist cascades to items)
OrderItem i2 = new OrderItem();
o.addItem(i2);           // i2 is added after persist, not yet known to the EntityManager

At the end of this snippet, o and i1 have ID != null, but i2 has ID == null. There is no way to "auto-persist" an object that gets into a relation with an already persisted object; i2 gets an ID != null only after flush or commit.

This may be tricky if the business logic that adds items is "POJO" code without access to the EntityManager, or if you do not want to mess up your business logic with flushes.

How to "broadcast" the unique IDs of just inserted order items to the User Interface if they are not yet set in the last line of your SLSB?

We switched to simple UUIDs that are generated on instantiation. Long VARCHAR2s, but it works fine and is very common.
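A sketch of what that looks like (the column length of 36 is a choice fitting the canonical UUID string):

import java.util.UUID;
import javax.persistence.Column;
import javax.persistence.Entity;
import javax.persistence.Id;

@Entity
public class OrderItem {
	// ID assigned at instantiation time, available immediately,
	// independent of persist/flush/commit.
	@Id
	@Column(length = 36) // e.g. VARCHAR2(36) on Oracle
	private String id = UUID.randomUUID().toString();
}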

Regards

-Rolf