- The v4 Google patch makes things faster.
- Only the v4 Google patch enforced innodb_max_dirty_pages_pct=20. InnoDB is not good at enforcing this limit on write-intensive workloads, and the problem has not received much press. If you run a critical OLTP server and want it to recover quickly after a crash, this limit must be enforced. The v4 Google patch adds code that delays user sessions when the limit has been exceeded: before a session can dirty more pages, it must first flush dirty pages. This is enabled by the my.cnf parameter innodb_check_max_dirty_foreground. Nobody has reviewed this code (hint, hint).
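The throttling policy described above can be sketched as a toy model. This is not the patch's actual code; the `BufferPool` class and `flush_one_page` helper are invented for illustration, and only the policy (flush in the foreground before dirtying more pages when the limit is exceeded) reflects the description.

```python
# Toy model of foreground dirty-page throttling. All names here are
# illustrative; the real change lives inside InnoDB's buffer pool code.

class BufferPool:
    def __init__(self, total_pages, max_dirty_pct, check_foreground=True):
        self.total_pages = total_pages
        self.dirty = 0
        self.max_dirty_pct = max_dirty_pct          # innodb_max_dirty_pages_pct
        self.check_foreground = check_foreground    # innodb_check_max_dirty_foreground

    def dirty_pct(self):
        return 100.0 * self.dirty / self.total_pages

    def flush_one_page(self):
        # Stand-in for writing one dirty page back to disk.
        if self.dirty > 0:
            self.dirty -= 1

    def dirty_page(self):
        # A user session is about to modify a page. With the foreground
        # check enabled, it must flush until the pool is below the limit
        # before it is allowed to dirty another page.
        if self.check_foreground:
            while self.dirty_pct() >= self.max_dirty_pct:
                self.flush_one_page()
        self.dirty += 1

pool = BufferPool(total_pages=1000, max_dirty_pct=20)
for _ in range(5000):       # simulate a write-heavy workload
    pool.dirty_page()
print(pool.dirty_pct())     # held at the 20% limit
```

Without the foreground check, nothing in the write path pushes back on user sessions, which is why the limit is not enforced under sustained write load.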
tpcc-mysql was run with 20 warehouses and 8 users with a 600 second warmup and 3600 second measurement period.
The 5077 binaries were faster than the v4 Google patch at innodb_max_dirty_pages_pct=20 because they did not enforce that limit. I have included a result for the v4 Google patch at innodb_max_dirty_pages_pct=50 to provide an additional reference point.
|Binary|TpmC|innodb_max_dirty_pages_pct|Avg %dirty pages|
|---|---|---|---|
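A dirty-page percentage like the "Avg %dirty pages" column can be derived from two counters reported by SHOW GLOBAL STATUS, Innodb_buffer_pool_pages_dirty and Innodb_buffer_pool_pages_total. The helper below is my own measurement sketch, not necessarily how the numbers in the table were collected, and the sample counter values are made up.

```python
# Derive the dirty-page percentage from SHOW GLOBAL STATUS counters.

def dirty_page_pct(status):
    """status: dict mapping SHOW GLOBAL STATUS variable name -> value."""
    dirty = int(status["Innodb_buffer_pool_pages_dirty"])
    total = int(status["Innodb_buffer_pool_pages_total"])
    return 100.0 * dirty / total

def average_dirty_pct(samples):
    # samples: status snapshots taken over the measurement period
    return sum(dirty_page_pct(s) for s in samples) / len(samples)

# Example with made-up counter values:
sample = {"Innodb_buffer_pool_pages_dirty": "15360",
          "Innodb_buffer_pool_pages_total": "76800"}
print(dirty_page_pct(sample))  # 20.0
```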
Throughput over time for v4-5037:
- tests were run twice for each binary with innodb_max_dirty_pages_pct set to 20 and 80
- the Percona highperf b13 5.0.77 binary uses more memory for the same value of innodb_buffer_pool_size, so I had to reduce that value to 1G
The my.cnf settings for the v4 Google patch:
innodb_buffer_pool_size=1200M

The my.cnf settings for Percona highperf b13 5.0.77:
innodb_max_dirty_pages_pct=80 or 20
innodb_log_file_size=1900M

The my.cnf settings for 5.0.77: