This article from SAP Insider discusses the demands on supply chain systems, which must analyze huge amounts of data in near real time. mySAP™ Supply Chain Management (mySAP SCM) addresses both sides of this data-processing challenge — running the logic where the data is, and processing in main memory for performance — with SAP® APO (SAP Advanced Planner and Optimizer) and SAP liveCache. SAP APO increases the speed of transactions for supply chain planning many times over. Now, with the newest version of SAP liveCache and SAP APO, SAP customers will continue to see high performance and availability, but they will also find simpler and more powerful tools for full point-of-failure recovery of planning data.
19 Dec 2003
Planning and decision-making for a company's supply chain - from determining when your supplier should send the next widget to creating a detailed, long-term production schedule - involves analyzing huge amounts of data from your own business processes and from the partners along your supply chain. In contrast to typical ERP data models, the demands on such planning systems require models designed to process vast amounts of data in near real time.
Thus companies often find themselves in a data-processing dilemma: for any heavy-duty planning solution to achieve acceptable performance under these circumstances, processing must be done where the data is. For a further performance boost, processing must also be done in main memory. The first paradigm eliminates extra network roundtrips; the second avoids unnecessary disk access. And then, of course, efficient backup and recovery of this critical data is key.
mySAP Supply Chain Management (mySAP SCM) offers a way out of this dilemma: SAP APO (SAP Advanced Planner and Optimizer) with SAP liveCache. SAP APO increases the speed of transactions for supply chain planning many times over. Now, with the newest version of SAP liveCache and SAP APO, SAP customers will continue to see high performance and availability, but they will also find simpler and more powerful tools for full point-of-failure recovery of planning data.
High Performance and Unique Point-of-Failure Recovery
SAP APO offers planning functionality for strategic, tactical, and operational planning of supply chains. Combined with SAP liveCache, it helps SAP customers respond to the data-processing challenges of supply chain planning.
Designed to improve the flow of information, SAP APO offers real-time and collaborative decision processes, advanced planning, and optimization as part of the SAP system to cover long- and short-term planning issues, such as supply network planning, demand planning, and production planning. (For more information, see the article "Supply Chain Planning with mySAP SCM" in SAP Insider.)
SAP APO pulls data out of your mySAP SCM solution and other applications, transforming and storing it in its own object- and network-based data model, with its own representation on its own database server. The system can work off either the SAP APO database server for more basic planning data, or SAP liveCache for high-performance, memory-intensive planning tasks. Application servers can also handle multiple connections to the different databases (see Figure 1), meaning that multiple processes and applications can connect simultaneously with different systems during the same session, and can work on data in either SAP liveCache or the SAP APO database server.
Figure 1. Architecture of SAP APO Systems with SAP liveCache
SAP liveCache is based on a memory-centric offshoot of the SAP DB technology (1) shipped with SAP APO since Release 2.0. For the most resource-intensive planning questions, SAP APO pushes performance-critical application logic to SAP liveCache. The data required for those processes is also pushed to SAP liveCache, where it is kept persistent. The persistence of both the data and the application logic is a real benefit, since it allows different processes to work on the same data and avoids bottlenecks by following the paradigm "run the logic where the data is."
SAP liveCache and SAP APO's New Approach to Backup and Recovery
With such critical business data, backup and recovery is always a concern. SAP liveCache's self-sufficiency in backup and point-of-failure recovery is one of the unique features that separate SAP APO from its competitors.
Until recently, SAP APO handled recovery through logging on the application level, together with switching log areas and SAP liveCache checkpointing. This required complex synchronization of processes during any restart of SAP APO and SAP liveCache, and involved special treatment to recognize the point where users could return to their work. Furthermore, there was potential for slow-downs at the SAP liveCache checkpoints during normal operation.
In the latest release of SAP liveCache (7.4) delivered with SAP APO 3.1, administrators now have a simpler solution: new, encapsulated logging and recovery capabilities for the persistent data in SAP liveCache. In fact, the applications no longer have to deal with logging and recovery procedures at all.
Although the SAP APO and SAP liveCache databases are independent, they are designed to ensure the consistency of data and transactions. Any open SAP APO transaction whose modifications were rolled back during an SAP liveCache restart receives a return code, and its changes are then undone by the application in the SAP APO database as well.
Likewise, SAP APO cannot execute procedures on SAP liveCache data while SAP liveCache is unavailable, which ensures transactional consistency between the SAP APO core system and SAP liveCache. Unlike earlier implementations, users can now resume their work as soon as SAP liveCache is up and running again, and recovery time is significantly decreased.
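This coordination rule can be sketched in a few lines of C++. This is purely an illustrative model, not the actual SAP interface: the return codes, types, and function names here are all invented.

```cpp
#include <cassert>

// Hypothetical return codes; the real SAP liveCache interface differs.
enum class LcResult { Ok, RolledBackAfterRestart, Unavailable };

// Simulated application-side bookkeeping for one open SAP APO transaction.
struct ApoTransaction {
    bool committed = false;
    bool undone = false;
    void commit() { committed = true; }
    void undo() { undone = true; }       // roll back the APO-side modifications
};

// Sketch of the rule described above: if liveCache reports that its
// modifications were rolled back during a restart, the application undoes
// the matching changes in the SAP APO database, too.
bool apply_planning_step(ApoTransaction& tx, LcResult lcResult) {
    switch (lcResult) {
    case LcResult::Ok:
        tx.commit();
        return true;
    case LcResult::RolledBackAfterRestart:
        tx.undo();                       // keep APO and liveCache consistent
        return false;
    case LcResult::Unavailable:
        // No procedure ran on liveCache data, so there is nothing to undo.
        return false;
    }
    return false;
}
```

Calling `apply_planning_step` with `LcResult::RolledBackAfterRestart` leaves the transaction undone and uncommitted, which is the invariant that keeps the two databases in step.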
With backup and recovery moved away from the applications, administration functions for SAP liveCache are available right within SAP liveCache's own Database Manager tool. Figure 2 shows the backup process displayed in the Database Manager, which provides support for backup and user guidance during recovery, and maintains media definitions, protocols, and backup history. (2) (Note that external backup tools are also an option, but be sure to check any third-party tool's documentation regarding compatibility with SAP liveCache's administration functions.)
Figure 2. New Recovery Functions in the Database Manager
In Figure 2, the administrator sees SAP liveCache's default recovery route, recommended for optimal performance: a complete data backup is restored (DAT_00001), followed by an additional incremental backup (PAG_00002) for recovery, followed by a log backup (LOG_00008). During log recovery, SAP liveCache will switch from the log backup to its own log volume as soon as it reaches a log page in the backup that is also available in the log volume.
Now, administrators can customize the backup process in the Database Manager. For example, the administrator could choose an alternative recovery strategy, based on log backups only, by checking "LOG_00003." The tool would then mark all logs for recovery and omit the incremental backup.
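The two recovery routes can be illustrated with a small C++ sketch that builds the ordered restore list from a backup catalog. The labels mimic the Database Manager display from Figure 2, but the types and the function are invented for illustration; the real tool drives this selection through its own interface.

```cpp
#include <cassert>
#include <string>
#include <vector>

// Illustrative backup catalog; labels mimic the Database Manager display
// (DAT = complete data backup, PAG = incremental backup, LOG = log backup).
struct Catalog {
    std::string fullBackup;               // e.g. "DAT_00001"
    std::string incrementalBackup;        // e.g. "PAG_00002" (may be empty)
    std::vector<std::string> logBackups;  // e.g. {"LOG_00003", ..., "LOG_00008"}
};

// Build the ordered list of media to restore. With useIncremental = true this
// mirrors the default route (full + incremental + newest log backup); with
// false it mirrors the log-only alternative (full + every log backup).
std::vector<std::string> recovery_plan(const Catalog& c, bool useIncremental) {
    std::vector<std::string> plan{c.fullBackup};
    if (useIncremental && !c.incrementalBackup.empty()) {
        plan.push_back(c.incrementalBackup);
        if (!c.logBackups.empty())
            plan.push_back(c.logBackups.back());  // only the log tail is needed
    } else {
        for (const auto& log : c.logBackups)      // replay all log backups instead
            plan.push_back(log);
    }
    return plan;
}
```

For the catalog shown in Figure 2, `recovery_plan(catalog, true)` yields the default route DAT_00001 → PAG_00002 → LOG_00008, while `recovery_plan(catalog, false)` replays every log backup after the full data backup and omits the incremental.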
SAP liveCache 7.4 Also Planned for SAP APO 3.0
SAP liveCache 7.4 is currently delivered with SAP APO 3.1, but a release is also planned for those customers using SAP APO 3.0. In this scenario, the application logging that customers currently have in place in APO 3.0 will be switched off; logging will be performed by SAP liveCache 7.4 only. To migrate to 7.4, the current SAP liveCache data will be saved back into the SAP APO database. After SAP liveCache is upgraded, that data will be reloaded.
For more information on release and availability, SAP users can log on to http://service.sap.com/scm and go to mySAP SCM Technology -> Backup and Recovery.
SAP liveCache - The Technology
As the price of memory chips decreases while performance increases, the capabilities that drive SAP liveCache - high availability, high performance, and persistence - have become even more accessible to SAP customers. With these hardware innovations and SAP liveCache technology, data can primarily be processed in main memory, and normally does not have to be moved between the disk and memory during transactional processing. As a result, supply chain management and planning applications can hit even higher performance targets.
For example, moving a typical set of data from SAP liveCache to the application server for processing (see Figure 3) would take at least 1 millisecond (A), compared with the microseconds it would take if processed inside SAP liveCache (B).
Figure 3. Processing of Data with the SAP liveCache Approach
SAP liveCache depends on the enhanced use of stored procedures for persistent storage of C++ class instances. It is designed to speed up processing by dynamically linking application code (in C++) directly to the database kernel code at runtime. As a result, stored procedures are executed directly in the SAP liveCache address space without switching back and forth to the application server. The application data is completely available in main memory and is written to disk only to meet the requirements for recovery, and thus persistence. This has no impact on performance because SAP liveCache is equipped with a highly efficient asynchronous I/O system.
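The write-behind idea - modify data in memory and hand the disk write to an asynchronous I/O component, so transactions never wait for the disk - can be sketched with a minimal queue-and-worker pattern in C++. This is a generic illustration of asynchronous write-behind, not SAP liveCache's actual I/O system.

```cpp
#include <cassert>
#include <condition_variable>
#include <mutex>
#include <queue>
#include <thread>
#include <vector>

// Minimal write-behind queue: transaction code modifies objects in memory and
// only enqueues a page number for the I/O thread; it never waits for the disk.
class AsyncWriter {
    std::queue<int> pending_;           // page numbers waiting to be flushed
    std::vector<int> flushed_;          // stands in for the disk
    std::mutex m_;
    std::condition_variable cv_;
    bool done_ = false;
    std::thread io_;                    // declared last: started after the rest
public:
    AsyncWriter() : io_([this] {
        std::unique_lock<std::mutex> lk(m_);
        for (;;) {
            cv_.wait(lk, [this] { return done_ || !pending_.empty(); });
            if (pending_.empty()) return;          // done_ set and queue drained
            int page = pending_.front(); pending_.pop();
            flushed_.push_back(page);              // "write" off the hot path
        }
    }) {}
    void enqueue(int page) {            // called by transaction code; O(1)
        { std::lock_guard<std::mutex> lk(m_); pending_.push(page); }
        cv_.notify_one();
    }
    std::vector<int> shutdown() {       // drain the queue and stop the thread
        { std::lock_guard<std::mutex> lk(m_); done_ = true; }
        cv_.notify_one();
        io_.join();
        return flushed_;
    }
};
```

The calling thread returns from `enqueue` as soon as the page is queued; the background thread performs the writes in order, which is the property that lets in-memory processing proceed at full speed while persistence is still guaranteed.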
Figure 4. Call to SAP liveCache from the Application Server
Consider a stored procedure call to SAP liveCache (see Figure 4). In this case, SAP APO is looking to schedule an order via SAP liveCache. The call is made from the ABAP code layer of an SAP APO application. The procedure "schedule_order" is identified in the code by name; a stored C++ procedure with the same name was registered with SAP liveCache during startup. An ABAP call is bound to an SAP liveCache session; when the call occurs, that session runs as a task within a thread. Within SAP liveCache, the Object Management System (OMS) locates the "schedule_order" procedure code in the external C++ library and adapts its parameters to match liveCache's own object and network structures. The C++ procedure is then executed directly inside the SAP liveCache kernel address space, using the OMS class interface of SAP liveCache. From there, processing returns to the application server.
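The lookup-by-name dispatch can be sketched as a small in-process procedure registry in C++. All names and signatures here are invented, and the real OMS interface is far richer, but the point carries over: the procedure runs in the same address space as the data, with no roundtrip to the application server.

```cpp
#include <cassert>
#include <functional>
#include <map>
#include <stdexcept>
#include <string>

// Toy order store standing in for liveCache-resident planning objects.
struct OrderStore {
    std::map<std::string, int> scheduledAt;  // order id -> scheduled time slot
};

// Registry of named procedures, loosely mimicking how a C++ routine named
// "schedule_order" is registered at startup and later looked up by name when
// the ABAP layer calls it. Names and signatures are invented for this sketch.
class ProcedureRegistry {
    std::map<std::string,
             std::function<void(OrderStore&, const std::string&, int)>> procs_;
public:
    void register_proc(const std::string& name,
                       std::function<void(OrderStore&, const std::string&, int)> p) {
        procs_[name] = std::move(p);
    }
    // Dispatch runs in the same address space as the data: no roundtrip.
    void call(const std::string& name, OrderStore& store,
              const std::string& order, int slot) {
        auto it = procs_.find(name);
        if (it == procs_.end())
            throw std::runtime_error("unknown procedure: " + name);
        it->second(store, order, slot);
    }
};
```

A caller would register a `"schedule_order"` lambda that mutates the store in place and then invoke it by name, mirroring the startup registration and by-name call described above.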
By providing a powerful interface to achieve persistence together with a stored procedure model, SAP liveCache solves the data-processing dilemma. For your most resource-intensive planning problems, the application can "run where the data is" - within SAP liveCache - with a minimized number of roundtrips and disk I/O, and a powerful backup and recovery concept to safeguard your supply chain planning data and keep your planning processes running smoothly.
SAP APO is unique in its use of this SAP liveCache technology, which means that users and administrators reap the benefits: high performance, high availability, and simplified, reliable backup and recovery.
SAP liveCache is currently available on Tru64; on the 64-bit versions of Solaris, HP-UX, and AIX; and on Windows Server. SAP liveCache on Windows 2000 also supports AWE. A general release for .NET Server (synonymous with Windows IA64) is planned for SAP liveCache 7.4 in 2003.
For more information on SAP APO and SAP liveCache 7.4, registered SAP customers can visit the mySAP SCM Technology section at http://service.sap.com/scm. For more on SAP DB technology, on which liveCache is based, visit www.mysql.com/maxdb.
(1) For more on SAP DB, see "SAP DB, SAP's Open Source RDBMS: Free, Scalable Database Management" in SAP Insider (October-December 2001).
(2) The Database Manager is designed as a client/server application and provides a batch interface as well. The user interface is a Windows interface or, for non-Windows platforms, it is a browser-based interface with nearly the same functionality. All interfaces are clients that communicate with the Database Manager server, which runs as an independent program on the same server as the SAP liveCache kernel.
Jörg Hoffmeister is a product manager for SAP DB at SAP Labs Berlin. He is also in charge of software production and development support. He has been working on the SAP database since 1985 and has extensive experience in database integration and database development.