Q&A: Even More Deadly Mistakes of MySQL Development

On Wednesday I gave a presentation on “How to Avoid Even More Common (but Deadly) MySQL Development Mistakes” for Percona MySQL Webinars. If you missed it, you can still register to view the recording and my slides.

Thanks to everyone who attended, and especially to the folks who asked such great questions. I answered as many as we had time for during the session, but here are all the questions with my complete answers:

Q: Disk bandwidth also not infinite ;-)

Indeed, you’re right!

We discussed in the webinar the impact on network bandwidth from using column wildcards in queries like SELECT *, but it’s also possible that using SELECT * can impact disk operations. Varchar, Blob, or Text columns can be stored on extra pages in the database, and if you include those columns in your query needlessly, it can cause the storage engine to do a lot of seeks and page reads unnecessarily.
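As a minimal sketch (the table and column names here are hypothetical), naming only the columns the page actually needs lets the storage engine skip the off-page BLOB/TEXT data entirely:

-- Pulls every column, including a large TEXT body that may live on overflow pages:
SELECT * FROM articles WHERE article_id = 1234;

-- Reads only what this screen needs; the off-page body column is never touched:
SELECT article_id, title, published_at FROM articles WHERE article_id = 1234;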

For more details on string storage in InnoDB, see Peter Zaitsev’s blog post on Blob Storage in InnoDB.

Q: How many tables can be joined in a single query? What is the optimal number of joins?

MySQL has a limit of 63 table references in a given query. This limits how many JOIN operations you can do, and it also limits the number of UNIONs. Actually, you can exceed this limit if your JOINs or UNIONs don’t reference any tables, that is, if they create a derived table of one row of expressions.

If you do join a lot of tables (or even self-join the same table many times), you’re likely to hit a practical scaling limit long before you reach 63 table references. The practical limit in your case depends on many factors, including the size of the tables, the data types, the type of join expressions in your queries, and your physical server’s capabilities. It’s not a fixed limit I can cite for you.

If you think you need dozens of table references in a single query, you should probably step back and reconsider your database design or your query design.

I often see this type of question (“what is the limit on the number of joins?”) when people try to use key/value tables, also called Entity-Attribute-Value, and they’re trying to pivot attributes from rows into columns, as if the table were stored in a conventional way with one column per attribute. This is a broken design for many reasons, and the scalability of many-way joins is just one problem with it.
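To illustrate the pattern (the table and attribute names below are hypothetical), pivoting an EAV table into one-column-per-attribute form takes one self-join per attribute, so the number of table references grows with every attribute you add:

-- One LEFT JOIN per attribute; with dozens of attributes this quickly becomes
-- unmanageable, long before you hit the hard limit on table references.
SELECT e.entity_id,
       c.attr_value AS color,
       s.attr_value AS size,
       w.attr_value AS weight
FROM entities e
LEFT JOIN entity_attributes c ON c.entity_id = e.entity_id AND c.attr_name = 'color'
LEFT JOIN entity_attributes s ON s.entity_id = e.entity_id AND s.attr_name = 'size'
LEFT JOIN entity_attributes w ON w.entity_id = e.entity_id AND w.attr_name = 'weight';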

Q: How many indexes can be created in a single table? Any limitation? What is the optimal number of indexes?

All MySQL storage engines support at least 16 indexes per table.

As far as the optimal number of indexes, I don’t pay attention to the count of indexes (as long as it stays within the storage engine’s limit). I try to make sure I have the right indexes for my queries. If you put an arbitrary cap of, say, 8 or 10 indexes on a given table, then you might be running queries that lack a needed index, and the extra cost of running those queries without it is probably greater than the cost of maintaining the one extra index they need.
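For example (a hypothetical table and query), I start from the query and add the index it needs, rather than counting how many indexes already exist:

-- A query the application actually runs:
SELECT order_id, total
FROM orders
WHERE customer_id = 42
ORDER BY created_at DESC;

-- A compound index tailored to it: the equality column first, then the sort column.
ALTER TABLE orders ADD INDEX idx_customer_created (customer_id, created_at);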

That said, there are cases where you have such variation in query types that there’s no way to have optimal indexes to cover every possible case. Given that you can have multi-column indexes, and multi-column indexes with columns in different orders, there are n-factorial possible indexes on a table with n columns.

Q: There is a table with 3 columns: id (int), user_id (int), day (date). There is a high chance the same user_id will ‘exist’ for every day. I read data by “WHERE user_id = some_id” (very high throughput) and delete all entries once a day by cron using “WHERE sent_date = '2014-01-01'”. With approx 6M rows per day, deletion is pretty painful. Will partitioning by column ‘day’ help me delete those bulks faster? If yes, how much faster? How much will it slow down SELECTs? Not all entries are deleted, only entries for some specific old day, e.g. “WHERE day = '1 week ago'”.

Range partitioning by date would give you the opportunity to ALTER TABLE…DROP PARTITION, so you could remove all data for a given date very quickly, much faster than deleting millions of rows. The performance of DROP PARTITION is like that of DROP TABLE, because each partition is physically stored like a separate table.

Searching for “where user_id = ?” would not be able to take advantage of partition pruning, but it would still be able to use an index on user_id. And if you drop old partitions, the benefit of searching a smaller table could be a good tradeoff.
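Here is a rough sketch of what that could look like for the table described in the question (the names and partition boundaries are illustrative):

CREATE TABLE user_events (
  id      INT UNSIGNED NOT NULL,
  user_id INT UNSIGNED NOT NULL,
  day     DATE NOT NULL,
  PRIMARY KEY (id, day),          -- the partitioning column must be part of every unique key
  KEY idx_user (user_id)
)
PARTITION BY RANGE (TO_DAYS(day)) (
  PARTITION p20140101 VALUES LESS THAN (TO_DAYS('2014-01-02')),
  PARTITION p20140102 VALUES LESS THAN (TO_DAYS('2014-01-03')),
  PARTITION pmax      VALUES LESS THAN MAXVALUE
);

-- Removing a day's ~6M rows becomes a quick metadata operation instead of a huge DELETE:
ALTER TABLE user_events DROP PARTITION p20140101;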

Q: Regarding 20% selectivity as a threshold for the optimizer preferring a table-scan to an index lookup – is that a tunable?

No, it’s not tunable, it’s a fixed behavior of the query optimizer. If you search for a value and the optimizer estimates that > 20% of rows contain the value you search for, it will bypass the index and just do a table-scan.

It’s for the same reason that the index of a book doesn’t list very common words: the list of pages such a word appears on would be too long, and flipping back and forth between the back of the book and each listed page would actually be more work than just reading the book straight through.

Also keep in mind my figure of 20% is approximate. Your results may vary. This is not a magic threshold in the source code, it’s just a tendency I have observed.
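You can check which plan the optimizer chose with EXPLAIN (hypothetical table; the exact cutover point varies with your data and version):

-- If 'active' matches a large fraction of rows, EXPLAIN will typically show
-- type: ALL (a table scan) even though an index on status exists.
EXPLAIN SELECT * FROM orders WHERE status = 'active';

-- For a rare value, the same query usually shows type: ref with key: idx_status.
EXPLAIN SELECT * FROM orders WHERE status = 'cancelled';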

Q: Regarding generating synthetic test data, it sounds like a pretty easy perl script to write.

Yes, it might be easy to do that for one given table. But every table is different, and you might have hundreds of tables in dozens of applications to generate test data for. You might also want to vary the distribution of data values from one test to another.

Writing a test-data generator for one particular case is easy, so you might reasonably do it as a one-off task. Writing a general-purpose test-data generator that you can use for many cases is more work.
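For the one-off case you don’t even need Perl; a rough SQL-only sketch (hypothetical target table, arbitrary distributions) can cross-join a small digits table to multiply rows:

CREATE TABLE digits (d INT PRIMARY KEY);
INSERT INTO digits VALUES (0),(1),(2),(3),(4),(5),(6),(7),(8),(9);

CREATE TABLE test_events (id INT PRIMARY KEY, user_id INT, day DATE);

-- 10 x 10 x 10 = 1,000 synthetic rows with made-up users and dates.
INSERT INTO test_events (id, user_id, day)
SELECT a.d * 100 + b.d * 10 + c.d + 1,
       FLOOR(1 + RAND() * 500),
       CURRENT_DATE - INTERVAL FLOOR(RAND() * 30) DAY
FROM digits a CROSS JOIN digits b CROSS JOIN digits c;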

Q: Would love to have the set of URLs cited in the presentation without having to go back and mine them out of the presentation.

Open source message queues:

  • Redis (Resque)
  • OpenMQ
  • ActiveMQ
  • RabbitMQ
  • Gearman
  • Beanstalk
  • Kafka
  • Kestrel

MySQL Performance Blog articles:

  • Exploring Message Brokers
  • Kiss Kiss Kiss (keep it simple)
  • Why you don’t want to shard

Open source test-data generator:

  • Databene Benerator

Load-testing tools for web applications:

  • JMeter
  • Siege

Load-testing tools to replay query logs:

  • Percona Playback
  • pt-log-player

Further reading for implementing business rules:

  • Real-World Rules Engines
  • The Enterprise Rules Engine (a warning)
  • Drools, the business logic integration platform

Q: How to best use mysql query cache?

Any cache is best used if you read from it many times for each time you write to it. So we’d like to estimate the average ratio of query cache reads to writes, to estimate how much leverage it’s giving us.

mysql> SHOW GLOBAL STATUS LIKE 'Qcache%';

Check the value of Qcache_hits (which counts cases when a query result was read from the query cache) against Qcache_inserts (which counts cases when the desired query result was not in the cache, so the query had to be run and its result stored in the cache). I like to see a ratio of 1000% or more (i.e. 10:1 hits to inserts).
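A worked example with made-up numbers: if SHOW GLOBAL STATUS reports Qcache_hits = 1,200,000 and Qcache_inserts = 90,000, the ratio is roughly 13:1 (about 1300%), comfortably above the 10:1 guideline, so the query cache is probably paying for itself on that workload.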

If you have a poor ratio, for example less than 1:1 or less than 100%, then consider disabling the query cache, because it may be costing more to maintain it than the performance benefit it’s giving you.

Keep in mind that this is only a guideline, because the calculation I described is only an average. It could be that the queries served by the query cache are very expensive, so using the cached result is a great benefit even if it accounts for a small number of hits. The only way to be certain is to load-test your application under your load, and compare overall performance results with the query cache enabled or disabled, and at different sizes.

Q: How to detect when too many indexes start to affect performance?

Some people are reluctant to create indexes because they have been warned that indexes require synchronous updates when you INSERT, UPDATE, or DELETE rows. Some people also make the generalization that indexes harm writes but benefit reads. Neither of these is quite true.

Your DML operations aren’t really updating indexes in real time. InnoDB includes a feature called change buffering, which defers updates to secondary indexes. The change buffer is gradually merged into the indexes over time. That way, InnoDB can absorb a big spike in traffic without hurting throughput as much. You can monitor how much content in the change buffer remains to be merged:

mysql> SHOW GLOBAL STATUS LIKE 'Innodb_ibuf_size';

It’s also not accurate that indexes hurt writes. UPDATE and DELETE statements usually have a WHERE clause, to apply the changes to particular rows. These conditions use indexes to reduce the examined rows, just like in SELECT statements. But in UPDATE and DELETE statements, it’s even more important to use indexes, because otherwise the statement has to lock a lot of rows to ensure it locks the rows you’re changing.
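A quick illustration, assuming the hypothetical orders table from above, which has an index whose leading column is customer_id:

-- With an index on customer_id, only the matching rows are examined and locked;
-- without it, the UPDATE scans the whole table and locks far more rows than it changes.
UPDATE orders
SET status = 'archived'
WHERE customer_id = 1234;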

So I generally say: don’t decide whether to add an index based only on how many indexes the table already has. Just make sure your indexes are being used by the queries you run, and drop indexes that aren’t used. Here are a couple of past blog posts that show how to do this:

  • Find unused indexes
  • Quickly finding unused indexes (and estimating their size)

Thanks again for attending my webinar! Here are some more tips:

  • Check out upcoming Percona Training classes in North America and Europe.
  • Join Percona and the MySQL community at the Percona Live conference.
  • Watch more webinars from Percona in the future!