There have been major changes in MySQL and PostgreSQL over the last couple of years. MySQL has been focusing on improving and optimizing InnoDB. PostgreSQL, on the other hand, has been focusing on database replication and hot standby.
Recently PostgreSQL came out with version 9.0, which has built-in replication and hot standby - the two most requested features in PostgreSQL. Earlier, people used to shy away from PostgreSQL because there was no proper "easily deployable" solution available for replication. With this release, PostgreSQL has taken a major step forward. Here http://www.postgresql.org/docs/9.0/static/release-9-0 is a list of features that have been introduced in PostgreSQL 9.0.
MySQL has released the RC version of MySQL 5.5, which has a bunch of improvements over the previous version. Support for multi-core CPUs, changes in InnoDB for effective use of available I/O capacity, and semisynchronous replication are some of the features that MySQL 5.5 promises. Here is a list of all the new features in MySQL 5.5: http://dev.mysql.com/doc/refman/5.5/en/mysql-nutshell.html
It has been a long time since I posted my last benchmark http://jayant7k.blogspot.com/2008/06/mysql-versus-postgresql.html, and I believe it is time I did some rough benchmarks and posted the results. The scope is to check out InnoDB tables in MySQL 5.5.6 versus the tables in PostgreSQL 9.0, benchmarking pure inserts and selects only. This post focuses on inserts; I will cover selects in the next one. I am running these tests on my laptop, which has an Intel Core 2 Duo T5870 @ 2.00 GHz and 3 GB of RAM.
I have created a simple PHP script to perform the benchmark. It spawns multiple PHP scripts that run in the background. Let me know if you need the scripts and I will share them.
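For readers who want the gist without the PHP sources, here is a minimal Python sketch of the same idea (SQLite stands in for the real database, and the helper names are illustrative; the real scripts talk to MySQL/PostgreSQL from independent PHP processes):

```python
import sqlite3
import time
from concurrent.futures import ThreadPoolExecutor

def insert_worker(n_rows):
    # Stand-in for a real MySQL/PostgreSQL connection: each worker
    # opens its own connection, mirroring the independent PHP scripts.
    conn = sqlite3.connect(":memory:")
    conn.execute(
        "CREATE TABLE data (id INTEGER PRIMARY KEY AUTOINCREMENT, "
        "val INTEGER NOT NULL, txt VARCHAR(20))"
    )
    start = time.time()
    for i in range(n_rows):
        conn.execute("INSERT INTO data (val, txt) VALUES (?, ?)",
                     (i, "row%d" % i))
        conn.commit()  # one commit per insert, as in the benchmark
    conn.close()
    return time.time() - start

def run_benchmark(concurrency, rows_per_worker):
    # Average per-instance time, matching how the results below are reported.
    with ThreadPoolExecutor(max_workers=concurrency) as pool:
        times = list(pool.map(insert_worker, [rows_per_worker] * concurrency))
    return sum(times) / len(times)
```

Calling `run_benchmark(2, 10000)` corresponds to the "20000 inserts with concurrency of 2" runs reported below.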
Mysql:
Innodb Settings in my.cnf :
innodb_buffer_pool_size = 256M
innodb_additional_mem_pool_size = 8M
innodb_log_file_size = 64M
innodb_log_buffer_size = 8M
innodb_flush_log_at_trx_commit = 1
innodb_lock_wait_timeout = 50
innodb_file_per_table
Table structure :
CREATE TABLE `data` (
`id` int(11) NOT NULL AUTO_INCREMENT,
`val` int(11) NOT NULL,
`txt` varchar(20) DEFAULT NULL,
PRIMARY KEY (`id`),
KEY `idx_val` (`val`)
) ENGINE=InnoDB DEFAULT CHARSET=latin1
Avg Time for 20000 inserts with concurrency of 2 : 1367 seconds
Load on machine : 3.4
Size on disk : 9 MB
Avg Time for 50000 inserts with concurrency of 5 : 1537.2 seconds
Load on machine : 4.6
Size on disk : 13 MB
Avg Time for 100000 inserts with concurrency of 10 : 1255 seconds
Load on machine : 3.5
Size on disk : 17 MB
Avg Time for 200000 inserts with concurrency of 20 : 1403 seconds
Load on machine : 4.2
Size on disk : 26 MB
Avg Time for 400000 inserts with concurrency of 40 : 1730 seconds
Load on machine : 6.6
Size on disk : 52 MB
Postgresql :
Settings in postgresql.conf:
shared_buffers = 64MB
work_mem = 1MB
synchronous_commit = on
Table structure :
Column | Type | Modifiers | Storage | Description
--------+-----------------------+---------------------------------------------------+----------+-------------
id | integer | not null default nextval('data_id_seq'::regclass) | plain |
val | integer | not null | plain |
txt | character varying(20) | | extended |
Indexes:
"data_pkey" PRIMARY KEY, btree (id)
"idx_val" btree (val)
Has OIDs: no
Avg Time for 20000 inserts with concurrency of 2 : 221.3 seconds
Load on machine : 2.0
Size on disk : 2 MB
Avg Time for 50000 inserts with concurrency of 5 : 232 seconds
Load on machine : 2.0
Size on disk : 4.8 MB
Avg Time for 100000 inserts with concurrency of 10 : 248.75 seconds
Load on machine : 4.0
Size on disk : 9.8 MB
Avg Time for 200000 inserts with concurrency of 20 : 276.34 seconds
Load on machine : 3.6
Size on disk : 19 MB
Avg Time for 400000 inserts with concurrency of 40 : 350.11 seconds
Load on machine : 5.8
Size on disk : 38 MB
The graph shows that MySQL is heavy compared to PostgreSQL. The base timings are almost 5 times higher for MySQL, and as the concurrency goes up, the time required for inserts in MySQL spikes up more steeply than it does for PostgreSQL.
I did a sample run on MySQL with innodb_flush_log_at_trx_commit = 2, and the benefit was dramatic:
Avg Time for 20000 inserts with concurrency of 2 (innodb_flush_log_at_trx_commit=2) : 5.2 seconds
Avg Time for 100000 inserts with concurrency of 10 (innodb_flush_log_at_trx_commit=2) : 18.69 seconds
Similarly, I disabled synchronous_commit on PostgreSQL and did a sample run:
Avg Time for 20000 inserts with concurrency of 2 (synchronous_commit = off) : 2.95 seconds
Avg Time for 100000 inserts with concurrency of 10 (synchronous_commit = off) : 15.06 seconds
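For reference, these are the two durability settings that were relaxed for the sample runs above - one in my.cnf, one in postgresql.conf:

```ini
# my.cnf : write the InnoDB log buffer to the log file on commit,
# but fsync the log file to disk only about once per second
innodb_flush_log_at_trx_commit = 2

# postgresql.conf : report commits as done before the WAL is flushed to disk
synchronous_commit = off
```

Both trade a small window of potential data loss after a crash for a large gain in commit throughput.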
PS : The average time is the per-instance time, i.e. the time each instance took for its 10000 inserts.
Let's see what the selects tell us - in the next post.
35 comments:
Were these tests done with XA disabled and sync_binlog=0? What about the new Barracuda format with compressed tables for InnoDB?
Interesting and surprising to see given MySQL has been touting that performance is much better with 5.5. Thanks for doing this.
There's no analysis as to *why* mysql is slower...
Is it flushing too aggressively? 64M is a puny innodb_log_file_size. It's recommended that your log file be 3x innodb_buffer_pool_size, with a combined size less than 4G.
You're not demonstrating any of the 5.5 features that do give it a performance boost in this benchmark.
From the numbers, it is hard to believe that PostgreSQL is really fsync'ing properly there. I doubt a laptop HDD can do 1000 writes per second. Even assuming we have perfect group commit going on, with two threads it is still 500 per second for the first test.
InnoDB's numbers are about right on for writes per second for a cheap commodity hdd.
I am curious to see your PHP code. The relative numbers are not anywhere near the results I get - however, I'm not testing on a laptop.
@Matthew : I was just following the recommendation that innodb_log_file_size should be 25% of innodb_buffer_pool_size.
Check out http://dev.mysql.com/doc/refman/5.5/en/innodb-configuration.html
@Harrison : You may be right. But postgres is a more advanced and reliable database as compared to mysql.
@Andrew : I would be glad to share the code, and I would request you to send me the numbers that you have got.
@Matthew : The reason why MySQL is slower is not in scope right now. I will focus on it once I am through with selects and random read/write benchmarks. But MySQL does some aggressive I/O - I could hear my HDD crunching while I was running inserts on MySQL.
Certainly, on commodity hardware, the drive can't crank out that many fsyncs per second.
For example, 7200 RPM = 120 RPS = max of 120 fsyncs per second. So if either database is fsyncing once per insert, then the minimum time to do 400,000 inserts is 400,000 / 120 = 3333.33 seconds. Both databases finished faster than that, so either:
1. The test was run on a 15K drive.
2. One or both databases weren't configured to fsync properly.
-or-
3. The fsyncs piggybacked on each other.
There's not enough information here to draw firm conclusions, but #3 seems plausible. Write-ahead log entries are pretty small, so there's probably no reason that each drive revolution can't write out all the queued-up WAL.
Of course, we'd need a lot more details to be sure.
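The arithmetic in the comment above is easy to verify (a quick sketch, assuming at most one fsync per platter revolution):

```python
# A 7200 RPM drive revolves 120 times per second, bounding fsyncs/sec.
rpm = 7200
max_fsyncs_per_sec = rpm / 60.0
# If every insert is individually fsync'ed, 400,000 inserts need at least:
min_seconds = 400000 / max_fsyncs_per_sec
print(max_fsyncs_per_sec, round(min_seconds, 2))  # 120.0 3333.33
```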
Ok so it looks like we're all wrong...
Sensible values range from 1MB to 1/N-th of the size of the buffer pool, where N is the number of log files in the group (default 2)
http://dev.mysql.com/doc/refman/5.5/en/innodb-parameters.html#sysvar_innodb_log_file_size
Giving 1:1 ratio of total log and innodb_buffer_pool_size
This conversation saved my day:
- I believe postgres is not fsyncing properly.
- You may be right. But postgres is a more advanced and reliable database as compared to mysql.
@Jayant, how is it more reliable? If it isn't properly syncing data to disk, whereas MySQL is, then that isn't true. What OS is this on?
@Robert, I had assumed group commit to allow for combining multiple fsyncs. However, in the case with 2 threads, there can be at most 2 being combined, which is still ~500 per second.
Hi,
Did you have binlog enabled for MySQL when you ran the benchmark?
I suspect the interesting detail here is that this laptop is running OS X. If you change Postgres' wal_sync_method to fsync_writethrough you get much more reasonable numbers under OS X. On my MBP, for instance, I get about 1544s for a concurrency of 5 which is very similar to the MySQL numbers here.
"But postgres is a more advanced and reliable database as compared to mysql. "
and this proves what? For all we know you could have invented those numbers, for MySQL and Postgres. Why is the benchmark code not attached to the post? There is not a single detail besides "it is an insert".
@Matthew : point taken. innodb_log_file_size should have been 128 MB instead of 64 MB - since innodb_buffer_pool_size was 256 MB.
@Andrew : I am running Linux - Ubuntu 10.04 - kernel 2.6.32-24. Linux does not support fsync_writethrough http://www.westnet.com/~gsmith/content/postgresql/TuningPGWAL.htm
@Andy : I had disabled the binary log to save on disk I/O.
@Harrison : Here is a comparison of reliability of mysql and postgres http://wiki.postgresql.org/wiki/Why_PostgreSQL_Instead_of_MySQL:_Comparing_Reliability_and_Speed_in_2007 This is on linux.
@Anonymous : Send me your email address and I will share the code with you. This is an independent benchmark - I am not biased towards any database. I have been using MySQL for a long time now.
@Daniël van Eeden :
innodb_file_format = Barracuda
sync_binlog = 0
innodb_support_xa = ON
fsync is not going to work on a laptop hard drive unless you turn off the hard drive's cache. These kinds of drives lie to you; that's also why server grade drives look slower: they don't lie.
Better settings for wal_buffers, checkpoint_segments and checkpoint_completion_target might speed up write actions.
Current setting for shared_buffers is very small, 64MB is next to nothing. It isn't a problem in this test, but will be in others.
@Anonymous : I agree, I have not configured Postgres properly. Could you suggest proper values for wal_buffers, checkpoint_segments, checkpoint_completion_target and shared_buffers?
@Harrison - In the 2 thread case, PG apparently did 20,000 inserts in 221.3 seconds. That's only 20,000/221.3 = 90.3 commits/second, which doesn't require any fsync piggybacking at all. Is my math off?
Of course, as others have said, it is very possible that the drive cache is lying to PG or that the wal_sync_method needs to be changed. It would be interesting to know whether this is the case but I don't see how to verify based on the provided details.
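Robert's commit-rate figure can be reproduced from the numbers in the post (a quick check, not part of the original benchmark code):

```python
# 2-thread PostgreSQL run: 20,000 inserts finished in 221.3 seconds.
inserts, seconds = 20000, 221.3
commits_per_sec = inserts / seconds   # ~90.4
# That is below the ~120 fsyncs/sec ceiling of a 7200 RPM drive,
# so this result alone does not require any fsync piggybacking.
print(round(commits_per_sec, 1))
```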
> @Harrison : Here is a comparison of
> reliability of mysql and postgres
> http://wiki.postgresql.org/wiki/Why_PostgreSQL_Instead_of_MySQL:_Comparing_Reliability_and_Speed_in_2007
> This is on linux.
* this is from 2007, so about three years old, and compares against MySQL 5.0 (which is already beyond end of life), several things mentioned in that article are no longer valid for 5.1 or the upcoming 5.5
* part of it focuses on MyISAM; when comparing PostgreSQL to MySQL you need to compare against InnoDB though, as both are pretty close to each other conceptually. Comparing to MyISAM (or the ndbcluster, ARCHIVE, or BLACKHOLE storage engines) makes less sense as they aim for different needs (InnoDB is going to be the default engine starting with 5.5 btw.)
* the article doesn't really say anything about InnoDB data integrity, and as far as I can tell InnoDB is head to head with PostgreSQL in that respect
* most important InnoDB/PG differences: InnoDB does not need VACUUM (neither manual nor in its automated form) as its purge thread cleans up old MVCC entries as soon as they get out of scope, and InnoDB has allowed direct upgrades between major versions ever since without a need for dump/restore (unless the MySQL server or SQL layer above it has screwed up, e.g. on changed charset collations)
* transactional DDL is still something where PostgreSQL shines and MySQL ... well ... simply doesn't have it
* CHECK constraints silently being ignored is indeed a still unresolved WTF ... there were reasons for implementing, or actually faking, them this way, but they are not compelling reasons IMHO. CHECK constraints have nothing to do with foreign keys though (both are CONSTRAINTS, but that's about it), so seeing half of the small foreign key section actually talking about CHECK ... well ...
* MySQL does indeed accept some invalid input like 0000-00-00 dates by default, or picks the default storage engine on CREATE if the requested one is not available, it *does* raise warnings on this though. It's 'just' that most applications don't bother to check for warnings unless you turn on the "treat warnings as errors" SQL_MODE. And the 'unsafe' options still being the default settings is simply the curse of backwards compatibility I'm afraid ...
* The "Transaction Locking and Scalability" section even confirms that PostgreSQL and InnoDB are more or less on par, with the exception of massive parallel workloads ... an area where MySQL has substantially caught up in recent years (esp. in the upcoming 5.5)
* the PG optimizer/executor is probably still better (lacking the hands on experience here to comment on this)
But still: a 3 year old paper that is now 3 major releases behind for PG and almost 2 for MySQL and that focuses a lot on the shortcomings of MyISAM where you actually want to compare against InnoDB is not a good reference for deciding which of the two is faster or more reliable ...
PS: your results really seem to have "fsync() not properly syncing to disk" written all over them ...
@Hartmut : Sorry, old link - check this link from 2009: http://wiki.postgresql.org/wiki/Why_PostgreSQL_Instead_of_MySQL_2009
It clearly states that though you store data in InnoDB, the MySQL metadata tables are still in MyISAM format, which is vulnerable to the traditional data corruption issues associated with that storage engine. Postgres also supports full-text searches on its tables, which is not possible on InnoDB tables - they work only on MyISAM tables.
Direct upgrades are definitely one point where InnoDB and MySQL score over PostgreSQL.
Here are some links which give an idea about fsync()
http://www.westnet.com/~gsmith/content/postgresql/TuningPGWAL.htm
http://www.postgresql.org/docs/8.3/static/runtime-config-wal.html
I have turned on synchronous_commit - which is supposed to guarantee data durability irrespective of a system crash.
If my laptop HDD lies about fsync - why would it lie only for PostgreSQL? It should then also lie for InnoDB. And in that case both of them should be at par...
Also, MySQL has the binlog disabled, so I won't be able to create slaves without enabling it. Whereas Postgres does replication from the WAL - so I would be able to replicate Postgres at its present I/O, whereas for MySQL I will have to increase I/O by enabling binary logs to do replication.
PS : I have been using MySQL for a long time now and am pretty new to Postgres - so I am not very familiar with Postgres tuning.
Postgres 9.0 now supports in-place upgrades, so there are not many cases where the dump/load procedure is needed anymore to upgrade to a new major version.
Yay! Aren't we having fun today.
Of course I agree with everything regarding the need to not accept bogus data or use MyISAM... that's why we removed those in Drizzle.
On the other hand, there's still something a little misleading above. You mention the metadata tables being in MyISAM format. While true, this might imply a little more serious of a case than it actually is. (for the record, we've also removed those in Drizzle) The primary use most people have for those tables is user accounts. You might also store timezone information. It's not like you're storing table definitions or the like. So although the fact part of the statement is correct, the existence of those tables making Postgres somehow better at reliably storing data is a little off.
Now, if you wanted to pick on something, the FRM file is a much better choice (also removed in Drizzle) A crash during a table alter can certainly have bad effects on FRM - but then, we've already covered that Postgres has transactional DDL and MySQL does not.
MySQL can do fulltext search on Innodb tables via Sphinx and the Sphinx Storage Engine plugin for MySQL. It's external, sure - but up until a few weeks ago the most common methods for doing replication in Postgres were all external add ons, so I think both camps recognize the need to have features not directly in the core. I'd love to see Postgres' internal full-text search compared against MySQL+SphinxSE.
Thanks for working on benchmarking and taking the abuse that comes with it! As is pretty obvious, doing a convincing benchmark on two different pieces of software is tricky and fraught with peril.
Can you just set innodb_flush_log_at_trx_commit=2 and let us know the new result?
Also, re:
@Harrison : You may be right. But postgres is a more advanced and reliable database as compared to mysql.
Statements like that make me think of everyone's favorite assessment of MongoDB's superiority.
Each has pros and cons, and each application and situation has its own needs. Trying to label either with a blanket "more reliable" or "more advanced" is a bit simplistic and a disservice to folks in general.
Benchmarks with code, data and methodology, on the other hand == WIN.
Hi Jayant,
"If my laptop HDD lies about fsync - why would it lie only for postgresql? It should then also lie for innodb as well. And in that case both of them should be at par..."
If your laptop is a Mac, then fsync() definitely does lie about writing to disk. The Mac fsync() does not write everything to disk, and my experience with database engines is that it leads to corrupted data after a crash.
This is why InnoDB does not use fsync() on the Mac!
It uses: fcntl(fh, F_FULLFSYNC, 0)
This call really writes data to disk, and it is dog slow! At least 5 to 10 times slower than fsync().
Which would explain the difference in speed that you are measuring.
@Rene : I have published the results for innodb_flush_log_at_trx_commit=2 for InnoDB and similarly for synchronous_commit=off for pgsql. Check it out and let me know your thoughts.
Jayant,
Thanks for running this. However, you really should run tests on something more realistic than on a Mac laptop. Might I suggest utilizing an Amazon or GoGrid account in order to run them cheaply on a server (using local storage, of course)? Shouldn't cost more than a few $ in Amazon fees.
Also, for your laptop, I'd recommend the following basic PostgreSQL settings:
shared_buffers = 512MB
maintenance_work_mem = 256MB
effective_cache_size = 1GB
checkpoint_segments = 16
Other settings are unlikely to make much difference on a laptop.
If you are running Linux kernel 2.6.32, and you are using ext4, the type of fsync calls that PostgreSQL does by default on Linux will use write barriers to make sure data is flushed out of the drive's cache properly. If you tested with ext3, the write barrier implementation does not work properly in this area, and your PostgreSQL numbers are accordingly inflated. I wrote a discussion of this topic at
http://archives.postgresql.org/pgsql-performance/2010-01/msg00370.php that references http://www.phoronix.com/scan.php?page=article&item=linux_perf_regressions&num=1, where you can clearly see performance tank in the default config once proper fsync support was added to ext4 in 2.6.32
As pointed out already, it's impossible to get more than about 100 true commits per client out of a regular 7200 RPM hard drive unless there's some sort of cheating going on. However, as also pointed out, the 2-thread results work out to only 90 commits/second for PostgreSQL. Completely possible that proper fsync is happening here. But if this is on ext3, PostgreSQL is definitely cheating, and MySQL might be using an additional Linux API to force data out that puts it at a disadvantage in comparison.
As I don't believe that PostgreSQL performance is actually 5X faster than InnoDB at low client counts in particular, I am suspicious that something is wrong with the MySQL config here too. Since the test code isn't public, and not nearly enough details about the test environment have been presented, even as a PostgreSQL advocate I would have to recommend considering this whole test suspect for now. I wouldn't be surprised to find, for example, that there's some sort of difference in how autocommit is being handled in the two cases, due to how the PHP code is written and the defaults of the respective drivers. There's a number of ways this type of benchmark comparison might have gone wrong, and it looks like one of them happened. I'm just not sure which yet.
I would highly recommend that any update to the graphs here report their results in transactions/second rather than any other way, because it makes the fsync type of error (more commits than the drive is physically capable of) easier to spot when it happens. Also, always include the results with a single client; also helpful for the same reason because it removes group commit from the picture.
Finally, I'm the author of both the TuningPGWAL.htm document and main one for the "Why PostgreSQL Instead of MySQL" document referenced here. Neither have been updated completely since 2007. The 2009 version of the PG/MySQL one referenced above is clearly labeled at its top as a work in progress, and unfortunately that update has yet to complete. It does not reflect the performance/reliability trade-offs of either database as they stand here in 2010. That's what I felt was a fair comparison of the state of things in 2007, but both MySQL and PostgreSQL have changed in many ways since then. As one major example, PostgreSQL 9.0 that was just released this month now includes fully integrated support for version upgrades from 8.3 and 8.4 without doing a dump and reload. That code has been available for 8.3->8.4 upgrades for about a year now too. That particular issue with PostgreSQL, one area MySQL did have a major lead on, has been resolved.
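Following Greg's suggestion, the existing 2-client averages can be restated as transactions/second in a few lines (an illustrative sketch; the per-instance interpretation follows the PS in the post):

```python
# Per the post's PS, each reported time covers 10,000 inserts per instance,
# so for 2 clients: 20,000 total inserts over the per-instance wall time.
results = {
    "InnoDB, 2 clients": (20000, 1367.0),
    "PostgreSQL, 2 clients": (20000, 221.3),
}
for label, (inserts, secs) in results.items():
    print("%s: %.1f commits/sec" % (label, inserts / secs))
# Rates far above ~120 commits/sec on a 7200 RPM drive would suggest
# that fsync is not actually reaching the platter.
```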
I have published the code here : http://jayant7k.blogspot.com/2010/10/database-speed-tests-mysql-and.html
I am using ext4 file system.
ext4 had a regression in Linux where fsync basically did not work. It's long since been fixed, but rest assured, Postgres is not 'cheating,' although it was a victim to a broken contract with Linux at that time.