
Tips on benchmarking Go + MySQL


We just released, as open source, our new percona-agent (https://github.com/percona/percona-agent), the agent that works with Percona Cloud Tools. This agent is written in Go.

I will give a webinar titled “Monitoring All MySQL Metrics with Percona Cloud Tools” on June 25 that will cover the new features in percona-agent and Percona Cloud Tools, where I will also explain how it works. You are welcome to register now and join me.

There will be more posts about percona-agent, but in the meantime I want to dedicate this one to Go, Go with MySQL and some performance topics.

I have had an interest in the Go programming language for a long time, but in its initial versions I did not quite like the performance of the goroutine scheduler. See my report from more than two years ago: “runtime: reduce scheduling contention for large $GOMAXPROCS.”

Supposedly this performance issue was fixed in Go 1.1, so this is a good time to revisit my benchmark experiment.

A simple run of prime or Fibonacci number calculations in N threads is quite boring, so I am going to run queries against Percona Server. Of course this adds some complication, as there are more moving parts (i.e. the Go scheduler, the Go SQL driver, MySQL itself), but it also makes the experiment more interesting.

The source code of my benchmark is available here: Go-pk-bench.
This is probably not the best example of how to code in Go, but that was not the point of this exercise. This post is really about some tips to take into account when writing an application in Go against a MySQL (Percona Server) database.

So, first, we will need a MySQL driver for Go. The one I used two years ago (https://github.com/Philio/GoMySQL) is quite outdated. It seems the most popular choice today is Go-MySQL-Driver, and this is the one we use for internal development. This driver is based on the standard Go “database/sql” package, which provides a standard Go way of dealing with SQL-like databases. “database/sql” seems to work out OK, though with some design decisions that are questionable for my taste. So using “database/sql” and Go-MySQL-Driver you will need to deal with some quirks, like an almost unmanageable connection pool.
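For reference, here is a minimal sketch of how Go-MySQL-Driver plugs into “database/sql” (the DSN – user, password, host and schema – is a placeholder, not the one from my benchmark):

package main

import (
	"database/sql"
	"log"

	_ "github.com/go-sql-driver/mysql" // blank import registers the "mysql" driver with database/sql
)

func main() {
	// DSN format for Go-MySQL-Driver: user:password@tcp(host:port)/dbname
	db, err := sql.Open("mysql", "sbtest:sbtest@tcp(127.0.0.1:3306)/sbtest")
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()

	// sql.Open does not connect right away; Ping forces a round trip to the server.
	if err := db.Ping(); err != nil {
		log.Fatal(err)
	}
}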

The first thing you should take into account is a proper setting of runtime.GOMAXPROCS().

If you do not do that, the Go scheduler will use the default, which is 1, and the binary will use one and only one CPU (so much for a modern concurrent language).

The call runtime.GOMAXPROCS(runtime.NumCPU()) tells the runtime to use all available CPUs. Always remember to use this if you care about multi-threaded performance.
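A minimal sketch of what I mean, placed at the very top of main():

package main

import (
	"fmt"
	"runtime"
)

func main() {
	// Let the Go scheduler use every core the machine has;
	// without this the whole benchmark runs on a single CPU.
	runtime.GOMAXPROCS(runtime.NumCPU())

	// Calling GOMAXPROCS with 0 only queries the current setting.
	fmt.Println("GOMAXPROCS =", runtime.GOMAXPROCS(0))
}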

The next problem I faced in the benchmark is that when I ran queries in a loop, i.e. repeated them as many times as possible…

rows, err := db.Query("select k from sbtest" + strconv.Itoa(tab+1) + " where id = " + strconv.Itoa(i))

… very soon we ran out of TCP ports. Apparently “database/sql” with Go-MySQL-Driver and its smart connection pool creates a NEW CONNECTION for each query. I can’t explain why this happens, but using the following statement:

db.SetMaxIdleConns(10000)

helps (I hope somebody with “database/sql” knowledge will explain what it is doing).
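My understanding (take it with a grain of salt) is that “database/sql” keeps only a small number of idle connections by default, so any connection returned to the pool above that limit is closed immediately; in a tight query loop that means a fresh TCP connection per query and a pile of sockets stuck in TIME_WAIT until the client runs out of ephemeral ports. Raising the idle limit keeps the connections around for reuse. A sketch of the setup, with the DSN again a placeholder and SetMaxOpenConns added purely for illustration (it was not part of the original run):

package main

import (
	"database/sql"
	"log"

	_ "github.com/go-sql-driver/mysql"
)

func main() {
	db, err := sql.Open("mysql", "sbtest:sbtest@tcp(127.0.0.1:3306)/sbtest")
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()

	// Keep up to 10,000 connections idle in the pool instead of
	// closing them as soon as a query finishes.
	db.SetMaxIdleConns(10000)

	// Optionally cap the total number of open connections as well
	// (illustrative addition, available since Go 1.2).
	db.SetMaxOpenConns(10000)
}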

So after these adjustments we can now run the benchmark, which, as you can see from the query, is quite simple: run primary-key lookups against Percona Server, which we know scales perfectly well in this scenario (I used sysbench to create 64 tables of 1 million rows each; all of this fits into memory). I am going to run this benchmark with 1, 2, 4, 8, 16, 24, 32, 48 and 64 user threads.
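To make the shape of the benchmark concrete, here is a rough sketch of the worker loop under the assumptions above (table count, row count, per-worker query count and the DSN are placeholders; the actual code lives in the Go-pk-bench repository linked earlier):

package main

import (
	"database/sql"
	"log"
	"math/rand"
	"runtime"
	"strconv"
	"sync"

	_ "github.com/go-sql-driver/mysql"
)

func main() {
	runtime.GOMAXPROCS(runtime.NumCPU())

	db, err := sql.Open("mysql", "sbtest:sbtest@tcp(127.0.0.1:3306)/sbtest")
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()
	db.SetMaxIdleConns(10000)

	const (
		threads = 64      // user threads; 1, 2, 4 ... 64 in the runs above
		tables  = 64      // sysbench tables
		rowsPer = 1000000 // rows per table
		queries = 100000  // queries per worker, arbitrary for this sketch
	)

	var wg sync.WaitGroup
	for t := 0; t < threads; t++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			for i := 0; i < queries; i++ {
				tab := rand.Intn(tables) + 1
				id := rand.Intn(rowsPer) + 1
				rows, err := db.Query("select k from sbtest" + strconv.Itoa(tab) + " where id = " + strconv.Itoa(id))
				if err != nil {
					log.Fatal(err)
				}
				for rows.Next() {
					// drain the single-row result set
				}
				// Close returns the connection to the pool; without it
				// every query keeps its connection checked out.
				rows.Close()
			}
		}()
	}
	wg.Wait()
}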

Below you can see graphs for MySQL throughput and CPU usage (both graphs are built using the new metrics graphing in Percona Cloud Tools).

MySQL Throughput (user threads increasing from 1 to 64)
[graph: mysql-go]

CPU Usage (user threads increasing from 1 to 64)
[graph: cpu-go]

I would say the result scales quite nicely; at least it is much better than it was two years ago. It is interesting to have something to compare with, so here is a graph from an identical run, but now using sysbench + Lua as the main workload driver.

MySQL Throughput (sysbench, user threads increasing from 1 to 64)
[graph: mysql-sysbench]

CPU Usage (sysbench, user threads increasing from 1 to 64)
[graph: cpu-sysbench]

From the graphs (this is what I like them for), we can clearly see the increase in user CPU utilization (and we are actually able to push the CPUs to 100% in user+system usage), and it clearly corresponds to the increased throughput.

And if you are a fan of raw numbers:

MySQL Throughput, q/s (more is better)

Threads | Go-MySQL | sysbench
      1 |   13,189 |   16,765
      2 |   26,837 |   33,534
      4 |   52,629 |   65,943
      8 |   95,553 |  116,953
     16 |  146,979 |  182,781
     24 |  169,739 |  231,895
     32 |  181,334 |  245,939
     48 |  198,238 |  250,497
     64 |  207,732 |  251,972

(Anyone familiar with the Universal Scalability Law can draw a prediction up to 1,000 threads; I leave it as homework.)
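For those who want to take up that homework: the Universal Scalability Law models relative capacity at N threads as C(N) = N / (1 + σ(N − 1) + κ·N·(N − 1)), where σ is the contention coefficient and κ the coherency (crosstalk) coefficient. Fitting σ and κ to the throughput numbers above gives the extrapolation to 1,000 threads.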

So, in conclusion, I can say that Go + MySQL is able to show decent results, but it is still not as efficient as plain raw C (sysbench), as it seems to spend some extra CPU time in system calls.

If you want to try these new graphs in Percona Cloud Tools and see how it works with your system – join the free beta!
