
Solving Performance Problems in Transactional Fiori Applications

This blog covers some points that can be considered to improve the performance of complex Fiori applications. Complex here refers to a UI with a very large number of fields, which indirectly means an OData service with more than 20 entities, each entity having more than 10 attributes; in other words, services with more than 300 attributes overall. The concepts mentioned here are based on the experience we gained developing such complex applications.

Master Detail Screen: One of the templates available in Fiori is the master-detail template. An application built using this template contains a master list on the left-hand side and a detail screen on the right.

Master List

Any Fiori application is launched via a Launchpad tile, so the initial load time is the time the application takes to render its initial page once we click on the tile. In a master-detail screen this means the time taken to fetch the master list plus the time taken to fetch the details of the lead selected record.

  • Evaluate whether fetching the list is what takes most of the time. If it is, consider paginating the list: instead of fetching 100 records, fetch 10 records at a time, which is supported using the $top and $skip options of the OData service (see the pagination example coding below).
  • If even fetching 10 records takes too long, fetch only the first record and execute two queries in parallel: the fetch of the detail data on the right-hand side and the fetch of the remaining records of the first 10. Parallel execution is possible using jQuery deferred objects.

        Example coding:

        // create a deferred object
        var deferLoad = jQuery.Deferred();

        // tell the deferred object what to do once a task, which might be asynchronous, is done
        deferLoad.then(function (s2Controller) {
            s2Controller.loadInitialAppData(s2Controller.aSorter, s2Controller.aFilter);
        });

        // resolving the deferred object triggers the "then" handler
        deferLoad.resolve(s2Controller);
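
        Example coding (pagination): a minimal sketch of the $top/$skip option from the first bullet, assuming a v2 ODataModel and a hypothetical ProductSet entity set; all names here are illustrative only.

        var oModel = this.getView().getModel();

        // fetch one page of the master list instead of all records
        function loadPage(iPage, iPageSize) {
            oModel.read("/ProductSet", {
                urlParameters: {
                    "$skip": iPage * iPageSize, // skip the records of the previous pages
                    "$top": iPageSize           // fetch only one page worth of records
                },
                success: function (oData) {
                    // bind oData.results to the master list here
                }
            });
        }

        // initial load: the first 10 records instead of all 100
        loadPage(0, 10);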

Detail Screen

OData services are not always modelled from scratch; we could as well take an existing model (a GenIL model or any other model) and build the OData model from it. The hierarchy and the relationships are then cascaded from these models.

    • Re-implement $expand: you will have read in several blogs that the default implementation of $expand is not performance friendly in most cases, especially when you have a complex OData model with hundreds of attributes. Hence the first option to consider is to re-implement $expand as per the application's needs.

          Re-implementation of $expand generally has two aspects:

                  1. Building the static structure in line with the entities that are queried

                  2. Providing data to this deep or nested structure
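
          Example coding: the re-implementation itself lives in the backend service implementation; the sketch below only shows, assuming a v2 ODataModel and hypothetical names (SalesOrderSet, ToItems, ToPartner), the client-side query that the re-implemented $expand has to serve.

          var oModel = this.getView().getModel();

          oModel.read("/SalesOrderSet('0001')", {
              urlParameters: {
                  // one round trip returns the header together with its items and partner;
                  // the backend $expand implementation fills this nested structure
                  "$expand": "ToItems,ToPartner"
              },
              success: function (oData) {
                  // oData carries the deep structure: header plus nested ToItems and ToPartner
              }
          });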

    • When we talk about 300 attributes, it is important to consider how this data is distributed or segmented across the different pages of the application. You would not want to load all the information on the first click; that would effectively mean a flat service with one entity holding all 300 attributes, which is never the case. Hence bring in the lazy-load concept in the UI, so that only the information that is needed is fetched from the backend at any time (see the example coding on lazy loading after this list).
    • The UI design should balance the number of clicks the end user has to perform to read the complete data. This usability goal should take precedence over designing the UI merely to improve performance implicitly.
    • Modelling of description fields: two options come to mind when modelling the description fields.

                            1. Add another attribute for every description field within the entity

                            2. Add a common entity which has navigations from every entity that needs a description. This entity could have key and text attributes

                            The first option fits display applications, as here we always have only a single description associated with a key, and it reduces the number of queries. The second option can be better for create or change applications, where the same entity can also be queried for the value helps. Based on the application's needs we can choose one of the options or use both (the example coding on description fields below contrasts the two).
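
        Example coding (lazy load): a minimal sketch, assuming an IconTabBar with one tab per data segment and hypothetical entity and navigation names; only the data of the selected tab is read, and only once.

        var oModel = this.getView().getModel();
        var mLoaded = {}; // remember which tabs have already been fetched

        this.byId("idIconTabBar").attachSelect(function (oEvent) {
            var sKey = oEvent.getParameter("key"); // e.g. "ToNotes", "ToAttachments"
            if (mLoaded[sKey]) {
                return; // data for this tab is already on the client
            }
            oModel.read("/OrderSet('0001')/" + sKey, {
                success: function () {
                    mLoaded[sKey] = true; // fetched once, reused on later clicks
                }
            });
        });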
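
        Example coding (description fields): a minimal sketch contrasting the two modelling options from the client side; entity and navigation names (ProductSet, ToDescriptions) are illustrative only.

        var oModel = this.getView().getModel();

        // Option 1: the description travels inline with the entity - one query,
        // which fits display-only applications
        oModel.read("/ProductSet('HT-1000')", {
            success: function (oData) {
                // oData.CategoryDescription is already part of the payload
            }
        });

        // Option 2: descriptions live in a common text entity reached via navigation -
        // an extra query, but the same entity can also serve the value helps
        oModel.read("/ProductSet('HT-1000')/ToDescriptions", {
            success: function (oData) {
                // oData.results holds key/text pairs, reusable for value helps
            }
        });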

  • Asynchronous calls help in improving performance, but if not used in an apt manner they can consume more time. Let us assume that the detail screen has 5 tabs; once the initial tab is loaded, we have the option of firing the calls for the remaining 4 tabs asynchronously. The advantage is that while the user takes some time to go through the data in the initial tab, the remaining tabs are populated with data in the background, so a subsequent click on any tab is faster. However, when we load the backend with 4 queries in parallel, we have to be careful about how many requests the backend can handle. Sometimes loading data on click is better than making such asynchronous calls, so a thorough analysis needs to be done before using them. Used wisely, they help in bringing the load time down (see the example coding after this list).

  • Trade-off between asynchronous calls and batch calls: the advantage of a batch call is that the number of round trips is reduced, but the result arrives only once all queries in the batch have executed, so the time taken is the maximum time taken by any single query within the batch. With asynchronous calls the round trips are more, but the results arrive as and when the individual queries finish execution. Based on the UI design we can use whichever option is more apt (both are sketched in the example coding below).
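
      Example coding (asynchronous calls): a minimal sketch, reusing the jQuery deferred idea from above to prefetch the remaining tabs in parallel once the initial tab is rendered; entity and navigation names are illustrative only.

      var oModel = this.getView().getModel();

      // wrap an OData read into a jQuery promise
      function readAsDeferred(sPath) {
          var oDeferred = jQuery.Deferred();
          oModel.read(sPath, {
              success: oDeferred.resolve,
              error: oDeferred.reject
          });
          return oDeferred.promise();
      }

      // fire the queries for the remaining 4 tabs in parallel; keep an eye on how
      // many parallel requests the backend can actually handle
      jQuery.when(
          readAsDeferred("/OrderSet('0001')/ToItems"),
          readAsDeferred("/OrderSet('0001')/ToNotes"),
          readAsDeferred("/OrderSet('0001')/ToAttachments"),
          readAsDeferred("/OrderSet('0001')/ToStatus")
      ).done(function () {
          // all tabs are now populated; subsequent tab clicks are instant
      });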
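
      Example coding (batch call): a minimal sketch of the batch alternative, assuming a v2 ODataModel; the four reads are collected into a single $batch round trip, and the results arrive together once the slowest query in the batch has finished.

      var oModel = this.getView().getModel();
      oModel.setDeferredGroups(["tabData"]); // collect reads instead of sending them immediately

      ["ToItems", "ToNotes", "ToAttachments", "ToStatus"].forEach(function (sNav) {
          oModel.read("/OrderSet('0001')/" + sNav, { groupId: "tabData" });
      });

      // one round trip for all four queries
      oModel.submitChanges({
          groupId: "tabData",
          success: function () {
              // results for all tabs arrive at once
          }
      });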