
The truth about shared PBDs Please

Former Member

I’m currently working on simplifying and unifying the build and deployment strategy for an 81-application suite.  Here’s a brief description:

The main application sports the UI and a massive set of supportive logic.  Another 80 applications implement various I/O features and perform specialized calculation and housekeeping tasks. All client installations have the main application and some subset of auxiliary applications.  Each application is deployed to its own folder.

The main application contains all common code as well as UI code. The other 80 applications have unique code in addition to all common code.  Unique code is in an application-specific set of libraries. Every application has its own copy of all its required PBLs.  Development rules require that common code be modified only by the main application.  Common code is currently shared by copying and pasting all shared libraries into each application’s folder. Every application has its own copy of 17 massive libraries.

It currently takes about 20 hrs to compile the entire suite.  Since each application has its own code, parallel compiling in separate VMs is possible.

I’m currently part way through pulling all source code into a unified SCC repository, thereby eliminating massive source code redundancy that is prone to confusion and defective deployments.

I have complete control of the build process using custom code based on PBorcapi, a TopWizProgramming product.

My question revolves around the possibility of eliminating duplicate PBD compiles. Assume the main application is built first and the 17 massive PBDs are then copied to each auxiliary application and added to its library list as PBDs (or the libraries are left as PBLs on the library list, the freshly built PBDs are simply copied into each auxiliary application’s folder after the main application compile, and the EXE build is told they are already built).

Can the PBDs be shared?  This would eliminate (17 * 80) = 1,360 PBD compiles - a significant time saving.
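
To make the idea concrete, here is roughly what one auxiliary target's library search path would look like after the main build - the file names are hypothetical, the "..." just stands for the remaining shared libraries, and the PBDs are the ones produced by the main application's build:

    aux01_app.pbl;aux01_io.pbl;shared01.pbd;shared02.pbd; ... ;shared17.pbd

The auxiliary build would then regenerate and compile only the first two PBLs and link the shared PBDs as-is.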

Here’s the key point from Boris Gasin, written in 1998:

http://nntp-archive.sybase.com/nntp-archive/action/article/%3C367_3524EB48.198D2AE@earthling.net%3E

when a PB object is compiled, Powerbuilder will save offsets
into function tables of all the related objects. The relationship may
be through inheritance or association (instance variables). This is
why you get a warning whenever you change an existing function
definition. Something like "Changing the function arguments requires
that you regenerate all objects that call this function".

An example of this would be the relationship between pfc_u_dw (in PFC
layer) and n_tr (in PFE layer) if one application changes a function
definition in n_tr and rebuilds, an offset to that function is saved
in the compiled version of pfc_u_dw. If the second application tries
to call their version of this function PB will look for that function
in the wrong place, which will probably cause a GPF. Doing a full
rebuild will synchronize the references for the second application but
may break them for the first, and so on...

It seems like the linchpin is the function declaration.  If a signature changes, all references to that call must be changed.  I’m guessing the full rebuild is what rebuilds the function tables.

So doesn’t the key question become: how do we force a signature change down to all referencing applications?

Thanks for responding

Yakov

Accepted Solutions (0)

Answers (4)


Former Member

That's exactly how we had it set up - but we did all our builds with PowerGen, so it was extremely easy to set up.  Trying to do this with OrcaScript would have been a disaster.

Target #1 is a "dummy" target that just references the shared framework.  It's also got a simple app-level PBL with an app object and a main window (because you need those), but the "app" doesn't do anything.

PowerGen does a full regen/compile of this target first, creating PBDs (which we keep) and an EXE (which we throw away).

For each subsequent app (the real apps now), we do NOT regen or compile the framework PBLs.  We only regen and compile the PBLs that are app-specific.  These create PBDs and an EXE for each application, saving the time of the full rebuild, and not introducing any reference errors into the shared PBDs.

We also NEVER EVER did a regen from within the PB IDE.  Some guy working on app #19 would kick off a regen, and the guy working on app #7 in the next cube would start getting runtime PB errors...  Don't do it.

Former Member

We also use PowerGen for this purpose.  We don't have a 'dummy' target, just the main one that builds out the PBDs for the shared PBLs.   In the secondary apps, select Exclude for the regenerate option, and don't check the Create Now option for the shared PBDs.

It sounds like Paul's method would be a cleaner approach for sharing out to a large number of related apps (ours only shares with 3 or 4).

Using/having a function in one EXE and not having it in another EXE that is sharing the PBDs does not mess anything up.  It's possible that this may have been an issue in earlier versions of PB (I kind of remember 6.5 having issues) and that it was fixed in later versions (7 or 8 maybe).

To add to what Bruce said regarding the public interface of the shared objects, we had a discussion about these issues in the ISV session (more in the context of sending out patch updates) in Atlanta two ISUGs ago.   Someone stated that you can in fact add instance variables to an object without regenerating everything if you add them at the END (as the last variables in the instance list).  This doesn't mess up the offsets for everything else, since the new ones are at the end.  I think he mentioned you can do the same with functions, but you have to edit the source to ensure the new function is at the end.
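
For illustration, a minimal sketch of what that looks like in exported PowerScript source (the variable names are made up, not something from that session):

    // Instance variable block of a shared ancestor as it appears in a .sru export.
    // The patch appends the new member LAST, so the offsets of the existing
    // members - and of everything compiled against them - are unchanged.
    type variables
    string is_name
    long il_id
    date id_last_update   // new variable added by the patch, appended at the end
    end variables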

Former Member

Hi Yakov;

  Boris was 100% correct. I never trust handing out just one PBD unless it only contains concrete-level objects. Deploying modified base or abstract-level ancestors in a changed PBD is basically a potential "death sentence" for your PB application (as Bruce also points out very well).

  Personally, I would never eliminate the duplicate compiles for the common class source code, and many of my clients' applications take the same approach. That way, each application or sub-system can be at a different version of the common code at any time due to roll-out, deployment, upgrade, etc. restrictions in the various production environments.

Good luck!

Regards ... Chris

Former Member

There might be some myths about it.

As far as I understood it, once you have compiled the objects it doesn't use the ancestors anymore; it would only be a waste of time.

Furthermore, duplicate objects with the same name in the class library are problematic, because you might have a loading problem where you end up with the wrong object; and you can only have one of them loaded.

If you want to swap in a single PBD from a different compile cycle, then you need to design your application for that intention and adapt your scripts for it. When you want to call a function (or get an attribute) on an object sitting in a different PBD, you are required to use the DYNAMIC keyword, because that is what the keyword is designed for; leaving it out is a scripting error. And use TRY...CATCH and all that. I know that it might not sound popular or pushy, but that is the way it is.
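
A rough sketch of what I mean (n_patch_svc and of_do_work are made-up names):

    // The object lives in a PBD that may come from a different compile cycle,
    // so the call is resolved at run time with DYNAMIC rather than at compile time.
    n_patch_svc lnv_svc
    lnv_svc = CREATE n_patch_svc
    TRY
        lnv_svc.DYNAMIC of_do_work()
    CATCH (RuntimeError lre_error)
        // The function is missing from, or changed in, the PBD that was deployed
        MessageBox("Patch service", lre_error.GetMessage())
    FINALLY
        DESTROY lnv_svc
    END TRY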

Ben

Former Member

No, PB uses dynamic inheritance. That means the ancestor(s) are utilized at run time - that is, every time a method or event is called.

You are probably thinking of static inheritance, like C++ uses.

Former Member

Hmm, odd. I remember it working differently from some old conference presentation years ago; maybe a very old version. I might not have that long-forgotten PB 6 anymore.
Once you compile, you might as well put all ancestors in the descendant for efficiency, since inheritance is for development; my impression was that it was also done that way, but apparently not.


Former Member

   Nope ... it's been that way since PB 1.0. That is the way Dave Litwack & Kim Sheffield designed PB from day #1.   

  FYI: It makes no difference where you put your ancestors in the library list since the PB 4/5 days, as all frequently used ancestor classes are cached automatically by the PB class loader.  

PS: A significant redesign was done in PB 8 to address dynamic inheritance and class loader performance & caching. 

former_member190719
Active Contributor
Furthermore, duplicate objects with the same name in the class library are problematic, because you might have a loading problem where you end up with the wrong object; and you can only have one of them loaded.

No, not really.  The library search path is a search path.  PB starts at the top and works down.  If you have two objects of the same type with the same name, PB will always find the first one in the search path and use it.  The second one will never be referenced.
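
For example (library and object names here are hypothetical): with a library list of "patch.pbd;app.pbl;framework.pbd", an n_tr in patch.pbd shadows the n_tr in framework.pbd - the copy further down the path is simply never loaded. If I remember right, GetLibraryList() shows the path the runtime is actually using:

    // Show the library search path of the running application.
    // GetLibraryList() returns the semicolon-delimited list, left to right,
    // in the order PB searches it.
    string ls_liblist
    ls_liblist = GetLibraryList()
    MessageBox("Library list", ls_liblist)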

Former Member

Hi Bruce;

  That is not what I was saying ...

  Once the ancestors are cached, the library path will not be used again unless the class definition needs to be reloaded - which is most unlikely after the internal changes introduced in the PB loader in the v8 time frame.  

Regards ... Chris.

former_member190719
Active Contributor
 That is not what I was saying ...

I wasn't responding to you.

Former Member

My apologies, Bruce!


former_member190719
Active Contributor
It seems like the linchpin is the function declaration.  If a signature changes, all references to that call must be changed.  I’m guessing the full rebuild is what rebuilds the function tables.

Think in broader terms.  Any change to the public interface of the object in question:

  1.  Add or remove a public instance variable

  2.  Add or remove a public method, or modify the signature of a public method

  3.  Add or remove an event or change an existing event signature

  4.  Add or remove a public RPCFUNC declaration or modify the signature of an existing RPCFUNC declaration.

Any of those will cause the pointers to change and require a recompile of all objects that reference that object.
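
For example (hypothetical object and method names), take an ancestor in the shared framework PBD and a caller compiled in an app-specific PBL:

    // Ancestor in the shared PBD declares (hypothetically):
    //   public function integer of_update ()
    // An object in an app-specific PBL, compiled against that signature:
    n_cst_service lnv_service
    integer li_rc
    lnv_service = CREATE n_cst_service
    li_rc = lnv_service.of_update()   // the function-table offset is baked in here
    DESTROY lnv_service
    // If the shared layer later changes that signature to, say,
    //   public function integer of_update (boolean ab_log)
    // this calling object must be regenerated too, or its saved offset points
    // at the wrong slot - exactly the GPF scenario Boris described.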

We use a shared framework layer, in particular PFC (including an extensive extension layer).  So long as we don't make those kinds of changes in the framework layer, we can use PBDs and do incremental builds of the applications.  If we do end up modifying the framework layer, then we're doing more extensive regeneration of the applications.  It could still be an incremental build, but it will take longer.

We also do "EBFs" by just deploying modified objects in a special PBD that we add into the library list at the top at runtime.  Whenever we want to do a new EBF, we just add the additional objects to the PBD.  But we're real careful not to do something like a major code refactoring between EBFs, or we might as well just send out a whole new application.  It does mean, however, that any time we modify an ancestor object or an object that's referenced in many locations, we have to deploy the objects that descend from or reference that object, even though those objects haven't been modified by us.  That's because the regen process modifies those pointers, and we need to deploy the objects that have been updated with the updated pointers.

Case in point: we use custom transaction objects in our applications that have RPCFUNC declarations for the stored procedures we use.  We declare that custom transaction object to be the type that SQLCA is created from.  The issue is that this makes it a global variable, and any addition of RPCFUNC declarations to it between EBFs would pretty much require a complete redeploy, since all objects reference the global variables.  So in our branch code line we add new stored procedure calls through embedded SQL within object methods, but in our trunk (the next major release) we add them as RPCFUNC declarations.
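
For anyone following along, a minimal sketch of the pattern (procedure and function names are hypothetical):

    // In the custom transaction object's Local External Functions
    // (n_tr_custom, inherited from transaction; SQLCA is created from this type):
    //   FUNCTION long uf_get_balance (long al_account) RPCFUNC ALIAS FOR "dbo.usp_get_balance"
    //
    // Called anywhere in the application after CONNECT:
    long ll_balance
    ll_balance = SQLCA.uf_get_balance(12345)
    // Adding another RPCFUNC to n_tr_custom changes SQLCA's public interface,
    // and since SQLCA is global, every object that touches it is affected.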

Former Member

Thinking out loud here.  Assume the main app (the place controlling the shared PBLs) is compiled first.  Then the library lists of the satellite apps point to the shared PBDs (they can't change the code).  Now assume the main app only calls public function01 and a satellite app calls public function02.  Can I expect this to work?