on 11-05-2012 4:26 AM
Hi,
I have the following question about HANA and I cannot find an answer on any of the blogs:
1) If the crux of HANA's speed is that it runs in RAM instead of on a physical hard disk, why can't all the other competitors do the same and build their databases in RAM? What is stopping them from doing so? I have to ask this because HANA's speed is not about some secret logic or anything like that; it comes purely from the fact that RAM is used to hold the entire database.
Couple of points:
1. There are a number of things that differentiate HANA versus other database platforms. True, one of them is the fact that HANA runs fully in-memory, but there is also the compression, the combination of row-store and column-store, the parallelism across nodes, etc. Not to mention that the HANA development teams work very closely with Intel on processor specifications so as to optimize for, and influence development of, the Intel CPU platform.
2. It is not as simple as Oracle or IBM dedicating teams of programmers to convert their disk-based databases to an in-memory database. This is a major architectural shift, and cannot be done simply. Virtually *all* of their existing database code would have to be modified to allow for this change. In addition, as mature database platforms with many customers, they must maintain backwards compatibility or they risk breaking support for existing applications and interfaces. SAP have many skilled programmers as well, and have been working on in-memory database technology for over 15 years. It is not a trivial task.
3. Oracle (TimesTen) and IBM (solidDB and Cognos/TM1) both have fully in-memory database products that they have acquired. Oracle has built their latest analytical solution (Exalytics) on TimesTen, and IBM is aggressively pushing Cognos planning solutions based on TM1, as well as solidDB as an in-memory cache for DB2 and other relational databases (similar to Oracle's use of TimesTen). I would invite you to read here for more on this topic: http://www.dbms2.com/category/memory-centric-data-management/
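To make point 1 a little more concrete, here is a toy Python sketch (my own illustration, not HANA code) of why the column store and dictionary compression mentioned above go hand in hand: a low-cardinality column stored contiguously can be encoded as a small dictionary plus integer codes, and an aggregate only has to touch the columns it needs.

```python
# Toy illustration (not HANA internals): row store vs. column store,
# plus dictionary encoding of a low-cardinality column.

rows = [("DE", 100), ("DE", 200), ("US", 150), ("DE", 120), ("US", 300)]

# Row store: each row kept together -- good for single-record (OLTP) access.
row_store = rows

# Column store: each attribute kept contiguously -- good for scans/aggregates.
countries = [r[0] for r in rows]
amounts = [r[1] for r in rows]

# Dictionary encoding: store each distinct value once, plus an integer
# code per row. Few distinct values => high compression.
dictionary = sorted(set(countries))               # ['DE', 'US']
codes = [dictionary.index(c) for c in countries]  # [0, 0, 1, 0, 1]

# An aggregate scans only the columns it needs, comparing cheap integer
# codes instead of strings:
de_code = dictionary.index("DE")
total_de = sum(a for c, a in zip(codes, amounts) if c == de_code)
print(total_de)  # 420
```

In a real column store the codes would additionally be bit-packed and the scan vectorized, but the layout idea is the same.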
Cheers,
David.
Guys:
Adding to the above points: in addition to being in-memory, HANA also exploits MPP (massively parallel processing) hardware, thereby boosting the processing power. Together, in-memory storage and MPP make HANA a good high-performance platform for high-volume, process-intensive applications.
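A minimal sketch of the MPP idea (a toy illustration, not how HANA is implemented): partition the data, let each worker aggregate its own partition in parallel, then combine the partial results.

```python
# Toy sketch of partitioned parallel aggregation (the MPP pattern):
# split -> scan partitions in parallel -> combine partial results.

from concurrent.futures import ProcessPoolExecutor

def partial_sum(partition):
    """Each worker scans only its partition, in parallel with the others."""
    return sum(partition)

def parallel_total(values, workers=4):
    # Range-partition the data: one chunk per worker.
    chunk = (len(values) + workers - 1) // workers
    partitions = [values[i:i + chunk] for i in range(0, len(values), chunk)]
    with ProcessPoolExecutor(max_workers=workers) as pool:
        # Combine the partial aggregates into the final result.
        return sum(pool.map(partial_sum, partitions))

if __name__ == "__main__":
    data = list(range(1_000_000))
    print(parallel_total(data))  # same result as sum(data)
```

The same split/scan/combine pattern applies whether the "workers" are cores in one box or nodes in a scale-out landscape.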
Regards,
Rama
Exactly. HANA can go as deep as parallelizing the work across every core available in the system, e.g. effectively running a query in 40 or 80+ parallel threads (or multiples of that in the case of a scale-out deployment). In that sense, HANA is indeed a full-fledged appliance, whose software was built to take specific advantage of that hardware configuration (i.e. the Intel processors).
In addition to that, the optimizations in the OLAP engine (which runs the analytical views) to read blocks of memory when querying column stores, instead of running full scans on the queried tables, are part of what makes HANA what it is.
Finally, the persistence via log files, the immediate in-memory merge, and the subsequent recurring merge into the data files are another great advantage. Of course, that is leveraged by using FusionIO technology for the log disks.
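The write path described above can be sketched roughly like this (a toy model of my own, not HANA's actual implementation): every change is appended to a redo log for durability, inserts land in a small in-memory delta, and a recurring merge folds the delta into the read-optimized main store.

```python
# Toy sketch (not HANA's implementation): log persistence, an in-memory
# delta for writes, and a recurring merge into the main store.

class TinyStore:
    def __init__(self):
        self.log = []    # append-only redo log (on fast disk in a real system)
        self.delta = []  # write-optimized in-memory buffer
        self.main = []   # read-optimized (e.g. sorted, compressed) main store

    def insert(self, row):
        self.log.append(("INSERT", row))  # durable first ...
        self.delta.append(row)            # ... then visible in memory

    def merge(self):
        # The recurring "delta merge": rebuild main including the delta.
        self.main = sorted(self.main + self.delta)
        self.delta = []

    def scan(self):
        # Queries must read the main store *and* the not-yet-merged delta.
        return self.main + self.delta

store = TinyStore()
store.insert(3); store.insert(1)
store.merge()
store.insert(2)
print(store.scan())  # [1, 3, 2] -- merged main plus unmerged delta
```

Fast log devices matter precisely because the log append sits on the commit path, while the data-file merge can happen asynchronously.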
Best regards,
Henrique.
Hi
They do! Storing data in memory is not a new technology. Open-source databases do it; for example, Facebook uses that technology, and there is commercial support available for it. I am not sure, but MS-SQL may also support storing data in memory. Every company and every product has a different approach.