1. Big downtime gets a lot of attention in the MySQL world. There will be some downtime when you replace a failed master, and with GTID in MariaDB and MySQL that time will soon be much smaller. There might be lost transactions if you use asynchronous replication. You can also lose transactions with synchronous replication, depending on how you define "lose". I don't think this gets sufficient appreciation in the database community. If the higher commit latency from synchronous replication prevents your data service from keeping up with demand, then update requests will time out and the requested changes will not be done. This is one form of small downtime. Whether or not you consider this a lost transaction, it is definitely an example of lousy quality of service.

    My future project, MarkDB, might have a mode where it never loses a transaction. This is really easy to implement. Just return an error on calls to COMMIT.


  2. I looked at the release notes for 5.6.14 and then my bzr tree that has been upgraded to 5.6.14. I was able to find changes in bzr based on bug numbers. However, for the 5 changes I checked, I did not see any regression tests. For the record, I checked the diffs in bzr for these bugs: 1731508, 1476798, 1731673, 1731284, 1730289.

    I think this is where the MySQL Community team can step up and help the community understand this. Has something changed? Or did the tests move over here?
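
    Something like the commands below is enough to do that check in a bzr tree -- find the revision that mentions a bug number, then look at its diff for anything under mysql-test. The bug number is one from the list above and the revision number is only a placeholder.

    # find revisions whose commit message mentions the bug number
    bzr log -n0 -m "1731508"
    # show the diff for one of those revisions and look for test changes
    bzr diff -c 1234 | grep "mysql-test"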

  3. Google search results for "mariadb trademark" are interesting. I forgot so much had been written about this in the past. Did the trademark policy ever get resolved? The discussion started in 2010.


  4. There aren't many new files under mysql-test for 5.6.14. Is this compression or something else? Many bugs were fixed per the release notes.

    diff --recursive --brief mysql-5.6.13 mysql-5.6.14 | grep  "Only in"
    Only in mysql-5.6.14/man: ndb_blob_tool.1
    Only in mysql-5.6.14/mysql-test/include: have_valgrind.inc
    Only in mysql-5.6.14/support-files: mysql.5.6.14.spec
    Only in mysql-5.6.14/unittest/gunit: log_throttle-t.cc
    Only in mysql-5.6.14/unittest/gunit: strtoll-t.cc
    Only in mysql-5.6.13/packaging/rpm-uln: mysql-5.6-stack-guard.patch
    Only in mysql-5.6.13/support-files: mysql.5.6.13.spec


  5. I have been wondering what the Foundation has been up to. I had high hopes for it and even contributed money, but it has been very quiet. Fortunately I learned that it has been busy making decisions, maybe not in public, but somewhere. And at Percona London we will be told why MariaDB forked from MySQL prior to 5.6 and reimplemented a lot of features.

    In other news the Percona London lineup looks great and I appreciate that Oracle is part of it.

  6. Someone I know used to make jokes about their plans to run MySQL 4.0 forever. It wasn't a horrible idea as 4.0 was very efficient and the Google patch added monitoring and crash-proof replication slaves. I spent time this week comparing MySQL 5.7.2 with 5.6.12 and 5.1.63. To finish the results I now have numbers for 4.1.22. I wanted to include 4.0 but I don't think it works well when compiled with a modern version of gcc, and I didn't want to debug the problem. The result summary is that 4.1.22 is much faster at low concurrency and much slower at high concurrency. Of course we want the best of both worlds -- 4.1.22 performance at low concurrency and 5.7.2 performance at high. Can we get that?

    I used sysbench for single-threaded and high concurrency workloads. The database is cached by InnoDB. All of the QPS numbers are in previous posts except for the 4.1.22 results. I only include the charts and graphs here as the differences between 4.1.22 and modern MySQL stand out.
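
    For anyone who wants to run something similar, a sysbench 0.4-style invocation for the read-only, single-client case looks roughly like the sketch below. The host name, table size and option values are assumptions, not the exact commands I used.

    # load one 8M-row table, then run point selects by primary key from one client
    sysbench --test=oltp --db-driver=mysql --mysql-host=db-host --mysql-user=root \
        --oltp-table-size=8000000 prepare
    sysbench --test=oltp --db-driver=mysql --mysql-host=db-host --mysql-user=root \
        --oltp-table-size=8000000 --oltp-test-mode=simple --oltp-read-only=on \
        --num-threads=1 --max-time=60 --max-requests=0 run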

    Single thread results

    For all of the charts the results for 4.1.22 are at the top. The first result is for a workload that fetches 1 row by primary key via SELECT. MySQL 4.1.22 is much better than modern MySQL.


    The next result is for a workload that fetches 1 row by primary key via HANDLER. MySQL 4.1.22 is still the best but the difference is smaller than for SELECT.
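
    For reference, the HANDLER version of the fetch looks roughly like the statement below; the table name and key value are made up.

    mysql -h db-host -D test -e 'HANDLER sbtest OPEN; HANDLER sbtest READ `PRIMARY` = (12345); HANDLER sbtest CLOSE'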


    The last result is for a workload that updates 1 row by primary key. The database uses reduced durability (no binlog, no fsync on commit). Modern MySQL has gotten much slower. Bulk loading a database with MySQL might be a lot slower than it was in 4.1.22.
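
    For context, reduced durability here means starting mysqld with the binlog off and without an fsync on each commit, roughly like the sketch below. This is one way to get that configuration, not necessarily the exact options used.

    # binlog off (no --log-bin); redo log is written at commit but not synced
    mysqld_safe --innodb_flush_log_at_trx_commit=2 &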

    Concurrent results

    MySQL 4.1.22 looked much better than modern MySQL on the single-threaded results. It looks much worse on the high-concurrency workloads and displays a pattern that was well known back in the day -- QPS collapses once there are too many concurrent requests.

    Here is an example of that pattern for SELECT by primary key.


    This is an example of the collapse for fetch 1 row by primary key via HANDLER.


    The final example of the collapse is for UPDATE 1 row by primary key. Note that 5.1.63 with and without the Facebook patch also collapses.


  7. Many of my write-intensive benchmarks use reduced durability mode (fsync off, binlog off) because that was required to understand whether other parts of the server might be a bottleneck. Fortunately real group commit exists in 5.6 and 5.7 and it works great. Results here compare performance for official 5.1, 5.6 and 5.7 and for Facebook 5.1. I included FB 5.1 because it was the first to have group commit and the first to use that in production. But the official version of real group commit is much better, as is the MariaDB version. Performance for the same workload without group commit is here.
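
    The durable configuration in these tests is the opposite of reduced durability: the binlog is enabled and InnoDB does an fsync on commit, which is where group commit matters because one binlog fsync can cover many concurrent transactions. A rough sketch of that configuration, with option values that are assumptions:

    mysqld_safe --log-bin=binlog --sync_binlog=1 --innodb_flush_log_at_trx_commit=1 &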

    I compared 5 binaries:
    • orig5612.gc - MySQL 5.6.12 with group commit
    • orig572.gc - MySQL 5.7.2 with group commit
    • fb5163.gc - MySQL 5.1.63 and the FB patch with group commit
    • fb5163.nogc - MySQL 5.1.63 and the FB patch without group commit
    • orig5163 - MySQL 5.1.63 without group commit

    This graph displays performance for an update-only workload. The client is 8 sysbench processes that run on one host with mysqld on another host. The total number of clients tested was 8, 16, 32, 64, 128 and 256, with the clients evenly divided between the sysbench processes. The test database was cached by InnoDB and 8 tables with 8M rows each were used. The clients were evenly divided between the 8 tables. Each transaction is an auto-commit UPDATE that changes the non-indexed column for 1 row found by primary key lookup. The binlog was enabled and InnoDB did fsync on commit. The test server has 24 CPUs with HT enabled and storage is fast flash.
    This table has all of the results from the test.

    binary         8      16     32     64     128    256
    orig5612.gc    11108  17728  28789  39533  47708  51110
    orig572.gc     11021  17583  27133  37145  43736  46717
    fb5163.gc      8051   14709  22486  28882  31743  31888
    fb5163.nogc    7457   6784   6722   6689   6810   6537
    orig5163       6942   7125   6957   6598   6648   6568
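
    Each transaction counted above is an auto-commit UPDATE of roughly the form below; the table and column names follow sysbench conventions and the key value is made up.

    mysql -h db-host -D test -e "UPDATE sbtest1 SET c = 'abc' WHERE id = 12345"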

    For comparison I include these results from tests that disabled the binlog and fsync on commit. In these tests the performance of 5.1.63 collapses under concurrency. I did not debug the cause, but using the binlog and fsync on commit improved performance. Note also that 5.6 and 5.7 can do ~70k TPS in the reduced durability configuration and ~50k in the durable configuration, so durability costs about 2/7 (~29%) of the peak.

    binary           8      16     32     64     128
    fb5163.noahi     28543  41978  31758  14480  10208
    orig5163.noahi   27714  47582  35231  14936  10308
    fb5612.noahi     25935  47862  69679  75730  71983
    orig5612.noahi   26920  51842  73288  78757  72966
    orig5612.psdis   27138  50902  71711  77208  71674
    orig5612.psen    26576  48000  69451  75729  70466
    orig572.noahi    26089  48382  73190  83373  75456
    orig572.psdis    25090  48368  71795  82348  75060
    orig572.psen     25375  45751  69154  79023  70585

  8. These are results for sysbench with a cached database and concurrent workload. All data is in the InnoDB buffer pool and I used three workloads (select only, handler only, update only) as described here. The summary is that MySQL sustains much higher update rates starting with 5.6 and that improves again in 5.7. Read-only performance also improves but to get a huge increase over 5.1 or 5.6 you need a workload with extremely high concurrency.

    The tests used one server for clients and another for mysqld. Ping between the hosts takes ~250 microseconds. The mysqld host has 24 CPUs with HT enabled. Durability was reduced for the update test -- binlog off, no fsync on commit -- and host storage was fast for the writes/fsyncs that had to be done. The names used for the binaries are explained here. Each test was repeated for 8, 16, 32, 64 and 128 concurrent clients.

    You might also notice there is a performance regression in the FB patches for MySQL 5.6. I am still trying to figure that out. The regression is less than the one in 5.6/5.7 when the PS is enabled but I hope we can get per-table and per-user resource monitoring with less overhead.

    SELECT by PK

    binary           8      16     32      64      128
    fb5163.noahi     31228  59099  128500  184677  192537
    orig5163.noahi   32450  67192  126999  183784  193457
    fb5612.noahi     28996  58856  118444  168934  175622
    orig5612.noahi   33207  59713  124882  176286  184590
    orig5612.psdis   27963  63654  123344  174108  180259
    orig5612.psen    29192  57937  116613  160917  164578
    orig572.noahi    30053  62649  121101  171441  180280
    orig572.psdis    30835  62925  117282  165293  171528
    orig572.psen     31869  58030  117433  156647  160074

    HANDLER by PK

    binary           8      16     32      64      128
    fb5163.noahi     34613  73636  156946  239452  207223
    orig5163.noahi   38014  83000  152202  223349  133286
    fb5612.noahi     34552  83458  152313  243776  266989
    orig5612.noahi   36064  84524  158341  246033  276491
    orig5612.psdis   38537  71292  159299  242109  272497
    orig5612.psen    34997  82608  151322  228510  249636
    orig572.noahi    33260  73790  161773  242909  280488
    orig572.psdis    34244  71770  151687  239340  272236
    orig572.psen     37723  72841  153221  226125  248008

    UPDATE by PK

    binary           8      16     32     64     128
    fb5163.noahi     28543  41978  31758  14480  10208
    orig5163.noahi   27714  47582  35231  14936  10308
    fb5612.noahi     25935  47862  69679  75730  71983
    orig5612.noahi   26920  51842  73288  78757  72966
    orig5612.psdis   27138  50902  71711  77208  71674
    orig5612.psen    26576  48000  69451  75729  70466
    orig572.noahi    26089  48382  73190  83373  75456
    orig572.psdis    25090  48368  71795  82348  75060
    orig572.psen     25375  45751  69154  79023  70585

  9. I used sysbench to measure the performance for concurrent clients connecting and then running a query. Each transaction in this case is one new connection followed by a HANDLER statement to fetch 1 row by primary key.  Connection create is getting faster in 5.6 and even more so in 5.7. But enabling the performance schema with default options significantly reduces performance. See bug 70018 if you care about that.

    There are more details on my test setup in previous posts. For this test clients and server ran on separate hosts and ping takes ~250 usecs between them today. Eight sysbench processes were run on the client host and each process created between 1 and 16 connections to mysqld. The database is cached by InnoDB and the clients were divided evenly between the tables.  Each table has 8M rows.

    These are results in TPS for 8, 16, 32, 64 and 128 concurrent clients. Each transaction is connect followed by a HANDLER fetch. The binaries orig572.psen and orig5612.psen use the performance schema with default options for MySQL 5.7.2 and 5.6.12. Throughput is much worse compared to the same code without the PS. All binary names are explained here.
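
    To picture one transaction from this test: a brand-new connection runs a single HANDLER fetch and then disconnects, roughly like each iteration of the sketch below (host, table and key value are placeholders).

    # every mysql invocation opens a new connection, fetches one row, then disconnects
    for i in $(seq 1 1000); do
        mysql -h db-host -D test -e 'HANDLER sbtest OPEN; HANDLER sbtest READ `PRIMARY` = (1); HANDLER sbtest CLOSE'
    done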

    binary           8     16    32     64     128
    fb5163.noahi     4041  8084  16323  16380  16066
    orig5163.noahi   4026  7848  15587  15912  15741
    fb5612.noahi     4004  7425  23125  24570  24688
    orig5612.noahi   4027  7601  26155  28021  28091
    orig5612.psdis   4008  7643  25640  27517  27631
    orig5612.psen    4205  9366  21197  21456  21592
    orig572.noahi    4172  9248  28613  39451  39721
    orig572.psdis    4025  7612  27600  37963  38044
    orig572.psen     4001  7870  18437  22982  23240

    And this chart has data for some of the binaries.


  10. I used sysbench to understand the changes in connection create performance between MySQL versions 5.1, 5.6 and 5.7. The test used single-threaded sysbench where each query created a new connection and then selected one row by PK via HANDLER. The database was cached by InnoDB and both the single client thread and mysqld ran on the same host. The tests were otherwise the same as described in a previous post.

    The summary is that connection create has gotten faster in MySQL 5.6 and 5.7 but enabling the performance schema with default options reduces that by about 10% for a single threaded workload. Bug 70018 is open to reduce this overhead. The memory consumed per increment of max_connections by the PS might also be interesting to you.
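
    The difference between the psen and psdis binaries should just be whether the performance schema is enabled at server startup, something like the options below.

    # performance schema enabled with default options ("psen")
    mysqld_safe --performance_schema=ON &
    # versus disabled ("psdis")
    # mysqld_safe --performance_schema=OFF &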

    binary           QPS
    fb5163.noahi     2087
    orig5163.noahi   2122
    fb5612.noahi     2656
    orig5612.noahi   2775
    orig5612.psdis   2706
    orig5612.psen    2468
    orig572.noahi    2687
    orig572.psdis    2611
    orig572.psen     2427
