Based on a few tutorials that I saw, I created the following MariaDB configuration for my server. The server has 32 GB RAM, an SSD, 8 cores and is dedicated to running MySQL. The biggest table has about 1.5 million rows, and we are running updates every second, so I am trying to create an optimal configuration for writing. A simple record update sometimes needs 2-3 seconds. Should I change something to increase the speed of update queries?
[mysqld]

# MyISAM
key_buffer_size = 32M
myisam_recover = FORCE,BACKUP

# SAFETY
innodb = FORCE
innodb_strict_mode = 1
max_allowed_packet = 16M
max_connect_errors = 1000000
skip_name_resolve

# BINARY LOGGING
expire_logs_days = 14
sync_binlog = 1

# CACHES AND LIMITS
max_connections = 500
max_heap_table_size = 32M
open_files_limit = 65535
query_cache_type = 0
query_cache_size = 0
table_definition_cache = 4096
table_open_cache = 10240
thread_cache_size = 50
tmp_table_size = 32M

# INNODB
innodb_buffer_pool_size = 12G
innodb_buffer_pool_instances = 12
innodb_flush_method = O_DIRECT
innodb_flush_log_at_trx_commit = 1
innodb_file_per_table = 1
innodb_log_files_in_group = 2
innodb_log_file_size = 1G

# LOGGING
log_error = /var/lib/mysql/srv.defrop.com.err
slow_query_log = 1
slow_query_log_file = /var/lib/mysql/mysql-slow.log
long_query_time = 2

# Disabling symbolic-links is recommended to prevent assorted security risks
symbolic-links = 0

# custom
performance_schema = on
Example slow query log entry:
# Time: 190415 10:36:59
# User@Host: defrop_defdb[defrop_defdb] @ localhost
# Thread_id: 2420  Schema: defrop_defrop  QC_hit: No
# Query_time: 2.404021  Lock_time: 0.000053  Rows_sent: 0  Rows_examined: 2
# Rows_affected: 1
SET timestamp=1555317419;
UPDATE `backlinks` SET `backlinks`.`crawler_id` = '10.0.0.28', `backlinks`.`used_time`=NOW() WHERE `backlinks`.`campaign_id`=710 AND `backlinks`.`googlebot_id` IS NULL AND `backlinks`.`used_time` IS NULL LIMIT 1;
If the server is used solely by MariaDB, you can dedicate 16-24 GB of RAM to the database service (it is currently limited to 12 GB). That allows a larger InnoDB buffer pool. If your tables and indexes fit into RAM, the server needs no expensive additional disk I/O.
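To check whether your data actually fits, you can estimate the total size of data plus indexes from `information_schema` and compare it against `innodb_buffer_pool_size` (the schema name `defrop_defrop` below is taken from the slow log entry and may differ in your setup):

```sql
-- Rough total size of data + indexes per schema, in GB.
-- Compare this against innodb_buffer_pool_size.
SELECT table_schema,
       ROUND(SUM(data_length + index_length) / 1024 / 1024 / 1024, 2) AS total_gb
FROM information_schema.tables
WHERE table_schema = 'defrop_defrop'
GROUP BY table_schema;
```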
You can reduce the max_connections variable to a reasonable value, but that requires clients to release their connections immediately after use. This decreases buffer consumption and the overall peak RAM requirements.
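To pick a reasonable limit, compare the configured maximum with the peak number of connections actually reached since the last restart:

```sql
-- Max_used_connections is the high-water mark since the last restart;
-- if it is far below max_connections (500 here), the limit can be lowered.
SHOW GLOBAL STATUS LIKE 'Max_used_connections';
SHOW GLOBAL STATUS LIKE 'Threads_connected';
```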
You also have to set per-session buffers such as the
read_rnd_buffer_size variable to appropriate values to ensure your queries do not use on-disk temporary tables for intermediate datasets. But those values shouldn't be too big, since they are allocated for each connection, and cumulative RAM consumption can reach
(max_connections * all_buffers) + innodb_buffer_pool_size and even more.
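As an illustration of that formula (the per-connection buffer sizes here are hypothetical, not recommendations), the worst case adds up quickly:

```sql
-- Hypothetical per-connection buffers: 2M sort + 2M join + 1M read + 1M read_rnd = 6M.
-- With max_connections = 500 and a 12G buffer pool:
SELECT 500 * 6 / 1024 + 12 AS worst_case_gb;  -- roughly 15 GB before any other caches
```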
A good starting point is:
SET @@global.long_query_time = 0.5, @@global.slow_query_log = 1;
This statement starts logging queries that run for more than 0.5 seconds, for further analysis. If the global variable
log_queries_not_using_indexes is set to 1, queries that use no indexes are logged there as well. Since your database isn't too big, it is more likely that the slowdown is caused by poor indexing than by a bad server configuration.
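For the UPDATE from the slow log above, a composite index matching the WHERE clause would let the server locate a row without scanning. The index below is a sketch based on that single query (the index name is made up; verify the benefit with EXPLAIN against your real workload before adding it):

```sql
-- Hypothetical index for:
-- WHERE campaign_id = ? AND googlebot_id IS NULL AND used_time IS NULL
-- IS NULL lookups can use a normal secondary index in InnoDB.
ALTER TABLE `backlinks`
  ADD INDEX `idx_campaign_bot_used` (`campaign_id`, `googlebot_id`, `used_time`);
```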