
If data is in memory, what happens in case of a power outage?

Former Member

Before a system starts, all data is on a disk. Operational data will be loaded into memory, and additional data can be loaded on demand while the application is running. If data gets changed, delta logs will be written to the disk as well, so that in case of a power failure or another crash the valid state of data can be recreated from the log entries.
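A minimal sketch of that idea, assuming a toy key-value store and an illustrative log file (nothing HANA-specific): every change is appended to a log on disk, and after a crash the in-memory state is rebuilt by replaying that log.

import json
import os

LOG_PATH = "delta.log"  # illustrative file name, not an actual HANA log format

class InMemoryStore:
    """Toy key-value store: data lives in a dict, every change is also appended to a log on disk."""

    def __init__(self):
        self.data = {}

    def put(self, key, value):
        # Append the change to the log before applying it in memory (simplified: a real
        # system buffers log entries rather than forcing every single one to disk).
        with open(LOG_PATH, "a") as log:
            log.write(json.dumps({"key": key, "value": value}) + "\n")
            log.flush()
            os.fsync(log.fileno())
        self.data[key] = value

    def recover(self):
        # After a power failure the in-memory dict is gone; rebuild the valid state
        # by replaying the log entries from disk.
        self.data = {}
        if os.path.exists(LOG_PATH):
            with open(LOG_PATH) as log:
                for line in log:
                    entry = json.loads(line)
                    self.data[entry["key"]] = entry["value"]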

marius_movila
Member

After a power failure, the database can be restarted like a disk-based database:

--> The system is normally restarted ("lazy" reloading of tables to keep the restart time short)

--> The system returns to its last consistent state (by replaying the redo log since the last savepoint; see the sketch below)
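A rough illustration of that restart sequence, with hypothetical savepoint and redo-log files and a simple log sequence number (LSN) standing in for the real mechanism:

import json
import os

SAVEPOINT_PATH = "savepoint.json"  # hypothetical snapshot of the last consistent state
REDO_LOG_PATH = "redo.log"         # hypothetical append-only redo log with an LSN per entry

def restart():
    """Rebuild the last consistent state: load the last savepoint, then replay newer redo entries."""
    data = {}
    savepoint_lsn = 0

    # 1. Load the last savepoint (a snapshot written periodically during normal operation).
    if os.path.exists(SAVEPOINT_PATH):
        with open(SAVEPOINT_PATH) as f:
            snapshot = json.load(f)
            data = snapshot["data"]
            savepoint_lsn = snapshot["lsn"]

    # 2. Replay only the redo log entries written after that savepoint.
    if os.path.exists(REDO_LOG_PATH):
        with open(REDO_LOG_PATH) as log:
            for line in log:
                entry = json.loads(line)
                if entry["lsn"] > savepoint_lsn:
                    data[entry["key"]] = entry["value"]

    # Table contents themselves can be reloaded lazily on first access to keep the restart short;
    # only the replay above is needed to reach the last consistent state.
    return data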

Accepted Solutions (0)

Answers (2)

Former Member

@Karin

Simple, basic, and a very good question.

I have been reading a lot on HANA, but this question had not come to my mind.

This is a common man's question.

By the way, I have read all the above comments, but I still don't understand what happens to the data in memory (HANA) when the power goes off, assuming there is no power backup.

Could someone help me understand this, and HANA in general?

Regards

Former Member

I am not an expert, but I think it will depend on what is in memory (the DB itself, indexes, ...) and the technology supporting it:

Found this on wiki:

ACID support

In their simplest form, main memory databases store data on volatile memory devices. These devices lose all stored information when the device loses power or is reset. In this case, MMDBs can be said to lack support for the durability portion of the ACID properties. Volatile memory-based MMDBs can, and often do, support the other three ACID properties of atomicity, consistency and isolation.

Many MMDBs add durability via the following mechanisms:

* Snapshot files, which record the state of the database at a given moment in time. These are normally generated when the MMDB does a controlled shut-down, or on request, and thus while they give a measure of persistence to the data (in that not everything is lost in the case of a system crash) they only offer partial durability (as 'recent' changes will be lost). For full durability, they will need to be supplemented by...

* Transaction Logging, which records changes to the database in a journal file and facilitates automatic recovery of an in-memory database.

* Non-volatile RAM, usually in the form of static RAM backed up with battery power (battery RAM), or an electrically erasable programmable ROM (EEPROM). With this storage, the MMDB system can recover the data store from its last consistent state upon reboot.

* High Availability implementations that rely on database replication, with automatic failover to an identical standby database in the event of primary database failure. To protect against loss of data in the case of a complete system crash, replication of a MMDB is normally used in conjunction with one or more of the mechanisms listed above.

Some MMDBs allow the database schema to specify different durability requirements for selected areas of the database - thus, faster-changing data that can easily be regenerated or that has no meaning after a system shut-down would not need to be journalled for durability (though it would have to be replicated for high availability), whereas configuration information would be flagged as needing preservation.
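As a rough sketch of that last point (the class, flag, and file names here are made up for illustration), a schema-level durability flag might look like this:

import json

class Table:
    """Toy in-memory table whose schema flags whether its changes must be journaled for durability."""

    def __init__(self, name, journaled, log_file=None):
        self.name = name
        self.journaled = journaled   # durability flag taken from the "schema"
        self.log_file = log_file     # only journaled tables get a journal file
        self.rows = {}

    def put(self, key, value):
        if self.journaled:
            # Data flagged as needing preservation: record every change in the journal.
            self.log_file.write(json.dumps({"table": self.name, "key": key, "value": value}) + "\n")
            self.log_file.flush()
        # Fast-changing, regenerable data skips the journal and stays memory-only.
        self.rows[key] = value

# Hypothetical schema: configuration must survive a restart, a session cache need not.
journal = open("journal.log", "a")
config = Table("configuration", journaled=True, log_file=journal)
session_cache = Table("session_cache", journaled=False)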

Former Member

"Transaction Logging, which records changes to the database in a journal file and facilitates automatic recovery of an in-memory database."

This is the only persistence layer required... in a way, we are going back to the way accounting has always been done: journaling, and then closing the account balances into financial reports at predetermined intervals (mostly monthly).

esjewett
Active Contributor

I'm kind of nit-picky on this topic, but I can't help but point out that while transaction logging is the normal way to get guaranteed write-persistence out of an in-memory database, it does to a large extent defeat the purpose of "in-memory" to require that every transaction be written to disk before closing the transaction.

This is just one more way that these things are not as simple as migrating an application into a memory-based datastore. A memory-based datastore in no way guarantees a significant speedup of a business process or user interaction.

Ethan

Former Member

No, not every transaction is written to the disk-based database at that very moment. In a simple scenario, consider a retail store that has hundreds of thousands of transactions in a day. During the day, let's say the transaction system performs inserts only to the in-memory database, and at the end of the day these transactions are rolled up to the main database. In this case the transaction application can be close to real-time, and the failover strategy needs to be designed only for a day's worth of transactions. In fact, this is the approach most banks also use, where the daily transaction log is a separate file and is applied to the main database at the end of the day.
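A minimal sketch of that end-of-day rollup, with an in-memory Python list as the staging area and SQLite standing in for the disk-based main database (both are stand-ins, not any actual product architecture):

import sqlite3

# In-memory staging area that absorbs the day's inserts at memory speed.
daily_transactions = []

def record_sale(item_id, amount):
    """During the day, inserts go only to the in-memory list."""
    daily_transactions.append((item_id, amount))

def end_of_day_rollup(db_path="main.db"):
    """At close of business, roll the day's transactions up into the disk-based main database."""
    conn = sqlite3.connect(db_path)
    conn.execute("CREATE TABLE IF NOT EXISTS sales (item_id TEXT, amount REAL)")
    conn.executemany("INSERT INTO sales VALUES (?, ?)", daily_transactions)
    conn.commit()   # only at this point is the data durable on disk
    conn.close()
    daily_transactions.clear()

Note that until the rollup runs, everything in the list is volatile, which is exactly the failover exposure described above: at most a day's worth of transactions.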

esjewett
Active Contributor

If we spend a day inserting records into a memory-only database and then we lose power and our UPS fails, then what happens to the data?

I am simply saying that total transaction time is constrained by the slowest aspect of the transaction. If one of our business requirements is to write to a storage medium that is less volatile (the recoverable transaction log) but slower, then our transaction times will be constrained by the write speed of the slower medium.
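A small sketch of that constraint (the file name and records are arbitrary): the same append either returns as soon as the operating system has buffered it, or waits for the device via fsync, and the durable variant sets the floor on transaction time.

import os
import time

def timed_append(path, record, durable):
    """Append one log record; optionally force it to stable storage before returning."""
    start = time.perf_counter()
    with open(path, "a") as log:
        log.write(record + "\n")
        log.flush()
        if durable:
            os.fsync(log.fileno())   # waiting for the device is the slow part of the transaction
    return time.perf_counter() - start

# The buffered append returns at memory/OS-cache speed; the durable one waits for the storage device.
print("buffered:", timed_append("demo.log", "txn-1", durable=False))
print("durable: ", timed_append("demo.log", "txn-2", durable=True))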

Banks do indeed take the approach of writing to a non-volatile transaction log during the day, and their write speeds are constrained by this factor. The write speed constraint is manageable because a transaction log is a very simple data structure that supports sequential, append-only writes, which are quite fast either on disk or in memory.

My point is just that there is a lot of complexity around these types of systems. It is not as simple as just moving an application to memory. Moving to memory involves a trade-off, and the downsides of this trade-off may require application and data model redesign. Further redesign may also be necessary to realize the full upside of memory-based datastores.

Former Member

Of course your point is very valid, but I am positive that good strategies will soon be out there in the market.

But for data-warehousing/reporting applications this is not a constraint, and I suppose that is the reason why most in-memory applications in the market today are in this area.

Former Member

Backup is expected in all these critical scenarios.

Regards,

Rajesh.

sanjay_ram
Participant

There is a misunderstanding here. Each time a new record arrives, it is written both to memory and to the physical disk in parallel, without one slowing down the other, so if the memory part crashes, the data can be loaded back into memory from the physical disk.

lbreddemann
Active Contributor

This again is a misunderstanding.

We don't write every single new record to disk immediately.

Instead, the logging goes into the log buffer first. Only COMMITs need to be synchronously written to the log.
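A toy sketch of that pattern (not HANA's actual implementation): log records accumulate in an in-memory buffer, and only a COMMIT forces a synchronous write to the log on disk.

import os

class LogBuffer:
    """Toy write-ahead logging: changes accumulate in a memory buffer, only COMMIT forces a disk write."""

    def __init__(self, path="redo.log"):
        self.path = path
        self.buffer = []

    def log_change(self, record):
        # New records go into the in-memory log buffer only; no disk I/O here.
        self.buffer.append(record)

    def commit(self):
        # On COMMIT the buffered entries are written and synced, making the transaction durable.
        if not self.buffer:
            return
        with open(self.path, "a") as log:
            log.write("\n".join(self.buffer) + "\n")
            log.flush()
            os.fsync(log.fileno())
        self.buffer.clear()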