The truth about shared PBDs, please
I’m currently working on simplifying and unifying the build and deployment strategy for an 81-application suite. Here’s a brief description:
The main application sports the UI and a massive set of supportive logic. Another 80 applications implement various I/O features and perform specialized calculation and housekeeping tasks. All client installations have the main application and some subset of auxiliary applications. Each application is deployed to its own folder.
The main application contains all common code as well as UI code. The other 80 applications have unique code in addition to all common code. Unique code lives in an application-unique set of libraries, and every application has its own copy of all its required PBLs. Development rules require that common code be modified only through the main application. Common code is currently shared by copying and pasting all shared libraries into each application’s folder, so every application carries its own copy of 17 massive libraries.
It currently takes about 20 hrs to compile the entire suite. Since each application has its own code, parallel compiling in separate VMs is possible.
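Since the 80 auxiliary builds have no ordering dependency on one another once the shared code is settled, the fan-out can be sketched with a worker pool. This is only an illustration: `compile_app` here is a hypothetical stand-in for whatever actually drives a PBorcapi-based build of one application (in practice it might launch a VM or a build agent), and the names are mine, not from any real build tool.

```python
from concurrent.futures import ThreadPoolExecutor

def compile_app(app_name: str) -> str:
    """Hypothetical placeholder for driving one application's build
    (e.g. invoking the PBorcapi-based compiler for that app)."""
    return f"{app_name}: ok"

def build_suite(app_names, max_parallel=4):
    """Compile independent applications concurrently. Each application
    owns its own code, so the builds can run in any order; only the
    degree of parallelism (VMs / agents available) limits throughput."""
    with ThreadPoolExecutor(max_workers=max_parallel) as pool:
        # pool.map preserves input order in its results
        return list(pool.map(compile_app, app_names))
```

With real per-app compile times on the order of 15 minutes each, four parallel workers would cut the 20-hour wall-clock figure roughly fourfold, ignoring the shared-library portion.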
I’m currently partway through pulling all source code into a unified SCC repository, thereby eliminating massive source-code redundancy that is prone to confusion and defective deployments.
I have complete control of the build process using custom code based on PBorcapi, a TopWizProgramming product.
My question revolves around the possibility of eliminating duplicate PBD compiles. Assume the main application is built first and the 17 massive PBDs are then either added as PBDs to each auxiliary application’s library list, or left as PBLs on the library list, copied into each auxiliary application’s folder after the main application compiles, and the EXE build told they are already current.
Can the PBDs be shared? This would eliminate 17 × 80 = 1,360 PBD compiles — a significant time saving.
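The copy step itself is mechanical. A minimal sketch, assuming a layout where the main application’s build output and the per-application deployment folders sit under a common build root (all paths and file names below are hypothetical, chosen only for illustration):

```python
import shutil
from pathlib import Path

# Hypothetical layout for illustration only.
MAIN_BUILD_DIR = Path("build/main_app")
AUX_APP_DIRS = [Path("build") / f"aux_app_{n:02d}" for n in range(1, 81)]
SHARED_PBDS = [f"shared{n:02d}.pbd" for n in range(1, 18)]  # the 17 common libraries

def deploy_shared_pbds() -> int:
    """Copy the already-compiled shared PBDs from the main application's
    folder into every auxiliary application's folder, replacing whatever
    stale copy is there. Returns the number of files copied."""
    copied = 0
    for app_dir in AUX_APP_DIRS:
        app_dir.mkdir(parents=True, exist_ok=True)
        for pbd in SHARED_PBDS:
            src = MAIN_BUILD_DIR / pbd
            if src.exists():  # skip any PBD the main build did not produce
                shutil.copy2(src, app_dir / pbd)  # copy2 preserves timestamps
                copied += 1
    return copied
```

Preserving timestamps matters if the build tooling uses them to decide whether a library is current; whether the EXE build will actually accept the copied PBDs as-is is exactly the question being asked here.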
Here’s the key point from Boris Gasin, written in 1998:
“When a PB object is compiled, Powerbuilder will save offsets into function tables of all the related objects. The relationship may be through inheritance or association (instance variables). This is why you get a warning whenever you change an existing function definition. Something like ‘Changing the function arguments requires that you regenerate all objects that call this function.’

An example of this would be the relationship between pfc_u_dw (in the PFC layer) and n_tr (in the PFE layer): if one application changes a function definition in n_tr and rebuilds, an offset to that function is saved in the compiled version of pfc_u_dw. If the second application tries to call their version of this function, PB will look for that function in the wrong place, which will probably cause a GPF. Doing a full rebuild will synchronize the references for the second application but may break them for the first, and so on...”
It seems the linchpin is the function declaration: if a signature changes, every reference to that call must be regenerated, and I’m guessing the full rebuild is what rebuilds the function tables.
So doesn’t the key question become: how do we force a signature change down to all referencing applications?
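One build-side approach to that question is to make signature changes detectable: export the shared objects’ source at each main-application build, hash the exports, and treat any change as a trigger for a full regenerate of every referencing application. A minimal sketch under those assumptions — the manifest name, export directory, and `.sr*` extensions stand in for whatever an ORCA-style source export actually produces, and this detects any object change, not signature changes specifically:

```python
import hashlib
import json
from pathlib import Path

MANIFEST = Path("shared_signatures.json")  # hypothetical manifest location

def hash_exports(export_dir: Path) -> dict:
    """Hash each exported object source file (.sru, .srw, etc.). A changed
    hash means the object -- possibly including a function signature --
    changed since the manifest was last written."""
    return {
        p.name: hashlib.sha256(p.read_bytes()).hexdigest()
        for p in sorted(export_dir.glob("*.sr*"))
    }

def objects_needing_full_rebuild(export_dir: Path) -> list:
    """Compare current hashes against the stored manifest and return the
    changed objects. Any non-empty result means the auxiliary applications
    referencing the shared libraries must be fully regenerated before the
    shared PBDs can be safely reused."""
    current = hash_exports(export_dir)
    previous = json.loads(MANIFEST.read_text()) if MANIFEST.exists() else {}
    changed = [name for name, h in current.items() if previous.get(name) != h]
    MANIFEST.write_text(json.dumps(current))
    return changed
```

Erring on the side of regenerating too much is the safe direction here, given the stale-offset GPF scenario described in the quote above.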
Thanks for responding