In day-to-day business development, most interaction with the database goes through a connection pool. When sizing the pool, people often make the seemingly obvious but wrong assumption that a bigger pool means higher concurrency, higher database throughput, and lower latency or wait time.

So how large should a database connection pool actually be?
Start with a formula:

pool size = ((core count * 2) + effective spindle count)

The formula is explained as follows:

A formula which has held up pretty well across a lot of benchmarks for years is
that for optimal throughput the number of active connections should be somewhere
near ((core_count * 2) + effective_spindle_count). Core count should not include
HT threads, even if hyperthreading is enabled. Effective spindle count is zero if
the active data set is fully cached, and approaches the actual number of spindles
as the cache hit rate falls. … There hasn’t been any analysis so far regarding
how well the formula works with SSDs.
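The formula above can be sketched as a small helper. This is an illustration only; `recommendedPoolSize` is a hypothetical name, and the core count passed in should be physical cores, not hyper-threaded ones:

```java
public class PoolSizing {
    // Sketch of the sizing formula quoted above:
    //   pool size = (physical core count * 2) + effective spindle count
    // physicalCores must exclude hyper-threading; effectiveSpindles is 0
    // when the active data set is fully cached, and approaches the real
    // spindle count as the cache hit rate falls.
    static int recommendedPoolSize(int physicalCores, int effectiveSpindles) {
        return (physicalCores * 2) + effectiveSpindles;
    }

    public static void main(String[] args) {
        // e.g. a 4-core server with one spinning disk whose data set
        // is not fully cached:
        System.out.println(recommendedPoolSize(4, 1)); // prints 9
    }
}
```

Note that `Runtime.getRuntime().availableProcessors()` usually reports logical (hyper-threaded) processors, so it cannot be plugged in directly without halving on HT-enabled hardware.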

For the full reasoning, see the discussion of pool sizing in the wiki of the HikariCP connection pool project:

https://github.com/brettwooldridge/HikariCP/wiki/About-Pool-Sizing
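As a concrete illustration, a HikariCP pool on a hypothetical 4-core host with one effective spindle might be configured like this. The JDBC URL, credentials, and database name are placeholders, not values from the article:

```java
import com.zaxxer.hikari.HikariConfig;
import com.zaxxer.hikari.HikariDataSource;

public class PoolConfigExample {
    public static void main(String[] args) {
        HikariConfig config = new HikariConfig();
        // Placeholder connection details for illustration only.
        config.setJdbcUrl("jdbc:postgresql://localhost:5432/appdb");
        config.setUsername("app");
        config.setPassword("secret");
        // Assumed hardware: 4 physical cores, 1 effective spindle,
        // so (4 * 2) + 1 = 9 per the formula above.
        config.setMaximumPoolSize(9);
        try (HikariDataSource ds = new HikariDataSource(config)) {
            // Hand ds to the DAO layer or ORM as a javax.sql.DataSource.
        }
    }
}
```

Keeping `maximumPoolSize` small like this is the point of the article: a modest, correctly sized pool usually outperforms a large one.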

A Chinese translation of that wiki page:

https://mp.weixin.qq.com/s?__biz=MzU4MDUyMDQyNQ==&mid=2247483806&idx=1&sn=0f6134e651cadbf03693992a642d5098