- These results do not include contention on rw-locks used by MySQL. That requires more hacking.
- The fast mutexes used by this test spin for 4 microseconds using pthread_mutex_trylock before blocking with pthread_mutex_lock (see the sketch after this list).
- The fast mutex version of the server is no faster than the version that always uses pthread_mutex_lock. However, I am more interested in using fast mutexes to get mutex contention stats. We can then use the stats to find bottlenecks in the server and make it faster.
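The following is a minimal sketch of the spin-then-block fast mutex described above, along with a per-mutex counter of the kind used to produce the contention stats below. The names (my_fast_mutex_t, SPIN_USECS, my_fast_mutex_lock) are hypothetical and not the actual MySQL implementation.

```c
/* Hypothetical sketch of a spin-then-block "fast mutex" with a per-mutex
 * counter of probable sleeps. Names are illustrative, not MySQL's. */
#include <pthread.h>
#include <stdatomic.h>
#include <time.h>

#define SPIN_USECS 4  /* spin budget before blocking, as in the test */

typedef struct {
    pthread_mutex_t mutex;
    atomic_long sleeps;   /* times a caller probably went to sleep */
} my_fast_mutex_t;

static long now_usecs(void) {
    struct timespec ts;
    clock_gettime(CLOCK_MONOTONIC, &ts);
    return ts.tv_sec * 1000000L + ts.tv_nsec / 1000L;
}

void my_fast_mutex_lock(my_fast_mutex_t *m) {
    long start = now_usecs();
    /* Spin with trylock for up to SPIN_USECS microseconds. */
    do {
        if (pthread_mutex_trylock(&m->mutex) == 0)
            return;  /* got the lock without blocking */
    } while (now_usecs() - start < SPIN_USECS);

    /* Spin budget exhausted: count this as a probable sleep and block. */
    atomic_fetch_add(&m->sleeps, 1);
    pthread_mutex_lock(&m->mutex);
}
```

Dumping the sleeps counter per named mutex yields lists like the ones that follow.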
Results from the 8-core server for the sysbench oltp test with 64 threads and a CPU-bound workload. This lists the number of times that the caller probably went to sleep waiting for the lock.
- 18474212 --> LOCK_open
- 1882317 --> prepare_commit_mutex
- 1662725 --> LOCK_global_client_user_stats
- 523034 --> THR_LOCK_heap
- 446372 --> LOCK_alarm
Results from the 16-core server for the sysbench oltp test with 64 threads and a CPU-bound workload. This lists the number of times that the caller probably went to sleep waiting for the lock.
- 48687529 --> LOCK_open
- 5759089 --> LOCK_alarm
- 2922732 --> prepare_commit_mutex
- 1958603 --> THR_LOCK_heap
- 1808405 --> LOCK_global_client_user_stats
Notes on the hotspots:
- LOCK_open is by far the worst hotspot. Antony Curtis has proposed a change that reduces contention on it, and I will look at it soon.
- LOCK_alarm is a known problem. It is used to implement timeouts for network IO, which is not needed on most platforms because there are system calls that do network read/write with a timeout (see the sketch after this list). Drizzle may have already changed this.
- prepare_commit_mutex is locked in innobase_xa_prepare. XA is used internally when the binlog is enabled to guarantee that the binlog and InnoDB agree during crash recovery. If group commit is fixed for InnoDB, contention on this mutex might be reduced.
- LOCK_global_client_user_stats is used for data produced by SHOW USER_STATISTICS. This is only in the Google patch. We think we can reduce contention on it.
- THR_LOCK_heap is used for allocation by the HEAP storage engine.
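As a minimal sketch of the alternative mentioned for LOCK_alarm: instead of using an alarm mechanism, per-socket timeouts can be set so the kernel enforces the read/write deadline. This is generic POSIX socket code, not the actual MySQL or Drizzle change; the function name set_net_timeouts is hypothetical.

```c
/* Sketch: kernel-enforced network IO timeouts via setsockopt, as an
 * alternative to an alarm-based timeout mechanism. */
#include <sys/socket.h>
#include <sys/time.h>

/* Make recv()/send() on fd return (with EAGAIN/EWOULDBLOCK) after
 * timeout_sec seconds instead of blocking forever. */
int set_net_timeouts(int fd, int timeout_sec) {
    struct timeval tv = { .tv_sec = timeout_sec, .tv_usec = 0 };
    if (setsockopt(fd, SOL_SOCKET, SO_RCVTIMEO, &tv, sizeof(tv)) != 0)
        return -1;
    return setsockopt(fd, SOL_SOCKET, SO_SNDTIMEO, &tv, sizeof(tv));
}
```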