Category Archives: Databases

Lesson 07: Advanced MySQL Querying

Notes/errata/updates for Chapter 7:
See the official book errata at http://tahaghoghi.com/LearningMySQL/errata.php – Chapter 7 includes pages 223 – 275.

Supplemental blog post – ORDER BY NULL – read the blog post and the comments!

GROUP BY and HAVING examples – Supplemental blog post. The example of HAVING in the text shows a use case where HAVING performs the same function as WHERE. This blog post shows examples of HAVING that you cannot do any other way.
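
For instance, filtering on an aggregate is something only HAVING can do. A minimal sketch, assuming the book's music database (a track table with an artist_id column):
SELECT artist_id, COUNT(*) AS num_tracks
FROM track
GROUP BY artist_id
HAVING COUNT(*) > 10;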

In the section called “The GROUP BY clause”, on pages 231-232, the book says:
“you can count any column in a group, and you’ll get the same answer, so COUNT(artist_name) is the same as COUNT(*) or COUNT(artist_id).” This is not 100% true; COUNT does not count NULL values, so if you had 10 rows and 1 artist_name was NULL, COUNT(artist_name) would return 9 instead of 10. COUNT(*) counts the number of rows and would always return 10, so COUNT(*) is preferable when you intend to count the number of rows.
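
A quick illustration with hypothetical data – an artist table with 10 rows, one of which has a NULL artist_name:
SELECT COUNT(*), COUNT(artist_id), COUNT(artist_name) FROM artist;
-- returns 10, 10, 9: COUNT(*) counts rows, COUNT(column) skips NULLs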

Also in that section, on page 233 when they show you the example:
SELECT * FROM track GROUP BY artist_id;
– Note that they explain the result is not meaningful. Most other database systems do not allow this query, and neither does MySQL 5.7 and later with the default ONLY_FULL_GROUP_BY SQL mode.

In the “Advanced Joins” section, specifically on page 238 at the bottom where they say “There’s no real advantage or disadvantage in using an ON or a WHERE clause; it’s just a matter of taste.” While that’s true for the MySQL parser, queries are much easier for humans to read – and a missing join condition is much easier to spot – if you put the join conditions in an ON clause.
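
For example, compare a comma join with the condition buried in the WHERE clause to the same query with an explicit ON clause (a sketch using the book's music database; column names may differ slightly):
SELECT track_name FROM artist, track
WHERE artist.artist_id = track.artist_id AND artist.artist_name = 'New Order';

SELECT track_name FROM artist INNER JOIN track ON artist.artist_id = track.artist_id
WHERE artist.artist_name = 'New Order';
Both return the same rows, but a forgotten join condition is much easier to spot in the second form.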

In the section on Nested Queries, on page 251, it says “nested queries are hard to optimize, and so they’re almost always slower to run than the unnested alternative.” MySQL has gotten better and better at optimizing nested queries, so this statement isn’t necessarily true any more.

A “derived table” is a nested query in the FROM clause, as described in the section heading with that name (p. 262).

In the “Table Types” subsection (p. 267), it says that MyISAM is a good choice for storage engines, and that “you very rarely need to make any other choice in small-to medium-size applications”. However, it’s recommended to use InnoDB instead, for better concurrency, transaction support, and greater safety from data corruption if the server crashes. Indeed, the default storage engine in more recent versions of MySQL is InnoDB.

In addition, the lingo has changed since the book was written; we now say “storage engine” instead of “table type”. The examples that use CREATE TABLE or ALTER TABLE with TYPE need to be changed to use ENGINE instead of TYPE.
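
For example (table definition abbreviated; only the table option changes):
-- Book-era syntax, no longer accepted:
--   CREATE TABLE artist (artist_id INT) TYPE = MyISAM;
-- Current syntax:
CREATE TABLE artist (artist_id INT) ENGINE = InnoDB;
ALTER TABLE artist ENGINE = InnoDB;   -- converting an existing table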

Finally, you can skip the section on BDB since it has been deprecated (p. 274-5).

Topics covered:
Aliases
Join style
Joins (JOIN, INNER, COMMA, STRAIGHT, RIGHT, LEFT, NATURAL)
UNION and UNION ALL
Data aggregation (DISTINCT, GROUP BY, HAVING)
Subqueries and Nested Queries (including ANY, SOME, ALL, IN, NOT IN, EXISTS, NOT EXISTS, correlated subqueries, derived tables, row subqueries)
User Variables
Transactions/locking
Table Types/Storage engines

Reference/Quick Links for MySQL Marinate

Lesson 06: Working with Database Structures

Notes/errata/updates for Chapter 6:
See the official book errata at http://tahaghoghi.com/LearningMySQL/errata.php – Chapter 6 includes pages 179 – 222.

Other notes:
At the end of the “Creating Tables” section (p.183-4), it says “We like using the underscore character to separate words, but that’s just a matter of style and taste; you could use underscores or dashes, or omit the word-separating formatting altogether.” While this is true, beware of using a dash: MySQL will try to interpret “two-words” as “two minus words”, treating the dash as a minus sign. I recommend avoiding dashes for this reason (even though the book does this on page 215).
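
For example, with a hypothetical column actually named two-words:
SELECT two-words FROM my_table;     -- error or wrong result: parsed as the expression two minus words
SELECT `two-words` FROM my_table;   -- works, but you must backtick-quote it every time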

At the end of the “Collation and Character Sets” section (p.186), it says “When you’re creating a database, you can set the default character set and sort order for the database and its tables.” Note that the defaults cascade: the server’s default character set is used for any new database created without an explicit character set, but existing databases are not changed. In turn, the database’s default character set applies to any new tables created in it (existing tables are not changed), and the table’s default character set applies to each column, which can be overridden by specifying a character set when defining a column.
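
A sketch of how the defaults cascade (the names here are just examples):
CREATE DATABASE music DEFAULT CHARACTER SET utf8;   -- new tables in music default to utf8
CREATE TABLE music.artist (
  artist_name CHAR(128),                            -- inherits utf8 from the table default
  notes CHAR(128) CHARACTER SET latin1              -- explicit column-level override
) DEFAULT CHARACTER SET utf8;                       -- explicit table default; otherwise inherited from the database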

Under the “Other Features” section it references a section called “Table Types”. This section is in chapter 7, p. 267.

Under the “Other Features” section it shows the SHOW CREATE TABLE example (p. 187). By default, MySQL shows you output in horizontal format – that is, one table row is shown as one row in the output. However, you can have MySQL show you output in vertical format, where each column is shown as one row in the output. Instead of using ; to end a query, use \G

Try it with:
SHOW CREATE TABLE artist;
vs
SHOW CREATE TABLE artist\G

And see the difference.

In the “Column Types” section on page 194, it says that “Only one TIMESTAMP column per table can be automatically set to the current date and time on insert or update.” This is not true as of MySQL version 5.6.5 and higher. As per the documentation at https://dev.mysql.com/doc/refman/5.6/en/timestamp-initialization.html: “For any TIMESTAMP or DATETIME column in a table, you can assign the current timestamp as the default value, the auto-update value, or both.”
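
So on 5.6.5 and higher, a definition like this hypothetical one is allowed, where it would have been rejected under the old one-automatic-TIMESTAMP rule:
CREATE TABLE example (
  created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
  updated_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP
);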

In the section called “The AUTO_INCREMENT Feature”, on page 211, it says “If, however, we delete all the data in the table, the counter is reset to 1.” The example shows the use of TRUNCATE TABLE. Note that if you deleted all the data in the table with DELETE, such as “DELETE FROM count WHERE 1=1;”, the counter is NOT reset.
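
A quick sketch (assuming the book's count table has a single AUTO_INCREMENT column):
INSERT INTO count VALUES (), (), ();  -- rows get counter values 1, 2, 3
DELETE FROM count WHERE 1=1;          -- removes all rows...
INSERT INTO count VALUES ();          -- ...but the next value is 4, not 1
TRUNCATE TABLE count;                 -- resets the counter
INSERT INTO count VALUES ();          -- value is 1 again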

Supplemental material:
Data types:
Podcast on Strings
Podcast on Numeric data types
Podcast on ENUM, SET and different SQL modes
Podcast on Times and time zones

Topics covered:
How to CREATE, DROP and ALTER databases, tables, columns and indexes
Collations and character sets
Data types
AUTO_INCREMENT

Reference/Quick Links for MySQL Marinate

MySQL Marinate – So you want to learn MySQL! – START HERE

Want to learn or refresh yourself on MySQL? MySQL Marinate, the FREE virtual self-study group, is for you!

MySQL Marinate quick links if you know what it is all about.

This is for beginners – If you have no experience with MySQL, or if you are a developer that wants to learn how to administer MySQL, or an administrator that wants to learn how to query MySQL, this course is what you want. If you are not a beginner, you will likely still learn some nuances, and it will be easy and fast to do. If you have absolutely zero experience with MySQL, this is perfect for you. The first few chapters walk you through getting and installing MySQL, so all you need is a computer and the book.

The format of a virtual self-study group is as follows:
Each participant acquires the same textbook (Learning MySQL, the “butterfly O’Reilly book”, published 2007). You can acquire the textbook however you want (e.g. from the library or from a friend, hard copy or online). Yes, the book is old, but SQL dates back to at least the 1970s and the basics haven’t changed! There are notes and errata for each chapter so you will have updated information. The book looks like this:

O'Reilly Butterfly book picture

Each participant commits to reading each chapter (we suggest one chapter per week as a good deadline), completing the exercises, and posting a link to the completed work.

Each participant obtains assistance by posting questions to the comments on a particular chapter.

Note: There is no classroom instruction.

How do I get started?

– Watch sheeri.com each week for the chapters to be posted.

– Get Learning MySQL
Acquire a book (the only item that may cost money). Simply acquire Learning MySQL – see if your local library has it, if someone is selling their copy, or buy it new.

– Start!
When your book arrives, start your virtual learning by reading one chapter per week. Complete the exercises; if you have any questions, comments or want to learn more in-depth, that’s what the comments are for!

FAQs:
Q: Does this cover the Percona patch set or MariaDB forks?

A: This covers the basics of MySQL, which are applicable to Percona’s patched MySQL or MariaDB builds, as well as newer versions of MySQL.

Q: What do I need in order to complete the course?

A: All you need is the book and access to a computer, preferably one that you have control over. Windows, Mac OS X or Unix/Linux will work. A Chromebook or tablet is not recommended for this course.

Q: Where can I put completed assignments?

A: Completed assignments get uploaded to github. See How to Submit Homework

Q: The book was published in 2007. Isn’t that a bit old?

A: Yes! The basics are still accurate, and we will let you know what in the book is outdated. I have contacted O’Reilly, offering to produce a new edition, and they are not interested in updating the book. We will also have optional supplemental material (blog posts, videos, slides) for those who want to learn more right away. We are confident that this self-study course will make you ready to dive into other, more advanced material.

Soak it in!

Reference/Quick Links for MySQL Marinate

Cost/Benefit Analysis of a MySQL Index

We all know that if we add a MySQL index to speed up a read, we end up making writes slower. How often do we do the analysis to look at how much more work is done?

Recently, a developer came to me and wanted to add an index to a very large table (hundreds of gigabytes) to speed up a query. We did some testing on a moderately used server:

Set long_query_time to 0 and turn slow query logging on
Turn slow query logging off after 30 minutes.

Add the index (was on a single field)

Repeat the slow query logging for 30 minutes at a similar time frame (in our case, we did middle-of-the-day usage on a Tuesday and Wednesday, when the database is heavily used). A rough SQL sketch of these steps follows.
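
In SQL terms the test looked roughly like this (the table, column, and index names here are hypothetical):
SET GLOBAL long_query_time = 0;
SET GLOBAL slow_query_log = ON;
-- ... capture traffic for 30 minutes, then:
SET GLOBAL slow_query_log = OFF;
ALTER TABLE big_table ADD INDEX idx_field (field);
-- ... then repeat the 30-minute capture at a comparable time of day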

Then I looked at the write analysis – there were no DELETEs, no UPDATEs that updated the indexed field, and no UPDATEs that used the indexed field in the filtering. There were only INSERTs, and with the help of pt-query-digest, here’s what I found:

INSERT analysis:
Query hash 0xFD7…..
Count: 2627 before, 2093 after
Exec time:
– avg – 299us before, 369us after (70us slower)
– 95% – 445 us before, 596us after
– median – 273us before, 301us after

I extrapolated the average per query to 2400 queries, and got:
**Total, based on 2400 queries – 71.76ms before, 88.56ms after, 16.8ms longer**

There was only one read query that used the indexed field for ORDER BY (or anywhere at all!), so the read analysis was also simple:

Read analysis:
Query hash 0xF94……
Count:187 before, 131 after
Exec time:
– avg – 9ms before, 8ms after. 1 ms saved
– 95% – 20ms before, 16 ms after
– median – 9ms before, 8 ms after

Again, extrapolating to average for 150 queries:
**Total, based on 150 queries: 150ms saved**

So we can see in this case, the index created a delay of 16.8 ms in a half-hour timeframe, but saved 150 ms in reads.

It is also impressive that the index added very little write time – 70 microseconds per query – while saving so much read time – 1 millisecond per query. Even though there were 16 times as many writes as reads, we still saw a big improvement overall, especially given the small cost.

I cannot make a blanket statement that this kind of index will always have this kind of profile – a very tiny write cost for a very large read savings – but I am glad I did this analysis and would love to do more of them in the future, to see what the real costs and savings are.

Query Reviews (part 2): pt-query-digest

Query reviews (part 1): Overview

The 1st post in the series gave an overview of what a query review is and the value it can bring you. So now let’s talk about how one is done – specifically, how to do a query review using pt-query-digest.

The point of a query review is that it is a comprehensive review of queries. Imagine if you could get a list of all queries that run on your system, and then you systematically looked at each query to determine if it is optimized. That is the basic concept behind a query review.

So, how do you get a list of queries?

pt-query-digest can use a slow query log, binary log, general log or tcpdump. I usually use a slow query log with long_query_time set to 0, so I can capture all the successful queries and their timings. If this is too much overhead, consider using Percona Server’s log_slow_rate_limit and log_slow_rate_type parameters to only log every nth session/query. This means that if you have 5000 queries per second, you can set the slow logging rate to every 100th query, and reduce the write overhead for the slow query log to 50 queries per second (instead of all 5000 queries).
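
For example, on Percona Server the logging setup might look like this (sampling every 100th query; the exact values depend on your workload):
SET GLOBAL long_query_time = 0;
SET GLOBAL log_slow_rate_type = 'query';
SET GLOBAL log_slow_rate_limit = 100;
SET GLOBAL slow_query_log = ON;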

So you have your log, now what? Well, we need to process it. The --type option is where you set what your log type is (binlog, genlog, slowlog, tcpdump). Default is slowlog.

By default, pt-query-digest will give you a report of the top 95% worst queries. You can change that with the --limit parameter – note that --limit just limits the output; pt-query-digest still processes all the queries in the log file. If --limit is followed by an integer, it will limit the output to the top X queries; if it’s followed by a percentage (e.g. 10%) it will output the top percentage of queries.

As this is a query review of all queries, we will want to set the limit to 100%.

There are a lot of other options that pt-query-digest has, but many of them are there so we can distill and get queries that meet a certain criteria. The point of a query review is to look at ALL queries, so we do not need to use those options.

In fact, the only other options we need are related to the review itself. Because a review is systematic, we need a place to store information related to the review. How about a database for that? Conveniently, pt-query-digest has a --review option that takes parameters to store the information into a table.

Here is the command I recently used to start a query review. It was run from the shell commandline, and I used --no-report because I did not want anything other than the table and its rows created:
[sheeri.cabral@localhost]$ pt-query-digest --no-report --type slowlog --limit 100% --review h=localhost,u=sheeri.cabral,D=test,t=query_review --create-review-table --ask-pass mysql_slow.log

You can see that --review has a number of arguments, comma-separated, to identify a table on a host to put the queries into. I used the --create-review-table flag to create the table, since it did not already exist, and --ask-pass because I do not type in passwords in a shell command.

pt-query-digest then spends some time analyzing the file then creating and populating the table. Here’s a sample row in the table:

*************************** 1. row ***************************
checksum: 11038208160389475830
fingerprint: show global status like ?
sample: show global status like 'innodb_deadlocks'
first_seen: 2017-06-03 11:20:59
last_seen: 2017-06-03 11:32:15
reviewed_by: NULL
reviewed_on: NULL
comments: NULL

The checksum and fingerprint are ways to make the query portable, no matter what values are used. The fingerprint takes out all the differences among iterations of the query, and puts ? in its place. So if you have a query that’s used over and over, like
SELECT first_name FROM customers WHERE id in (1,2,3)
the fingerprint would look like
SELECT first_name FROM customers WHERE id in (?+)

The sample provides a way for us to copy and paste into an EXPLAIN (or my favorite, EXPLAIN FORMAT=JSON) statement, so that we can assess the query.
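
For example, for the hypothetical customers query above, you would run something like:
mysql> EXPLAIN FORMAT=JSON SELECT first_name FROM customers WHERE id IN (1,2,3)\G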

So then we can go through the process of optimizing the query. In the end, this query has nothing to tweak to optimize, so I update the reviewed_on date, the reviewed_by person, and the comments:

mysql> UPDATE test.query_review set reviewed_on=NOW(), reviewed_by='sheeri.cabral', comments='no mechanism to optimize' WHERE checksum=11038208160389475830;
Query OK, 1 row affected (0.00 sec)
Rows matched: 1 Changed: 1 Warnings: 0

On to the next query – we shall get the next query that has not yet been reviewed:
mysql> select * from test.query_review where reviewed_on is null limit 1\G

If you have already done some query reviews, your WHERE clause may look something like where reviewed_on is null OR reviewed_on < NOW()-interval 6 month.

And then look at that query for optimization. Lather, rinse, repeat. This is a GREAT way to get familiar with how developers (and ORMs) are writing queries.

Some tricks and tips: first, take a look at all the queries that are less than 50 characters or so – you can easily set those all to reviewed, with whatever comment you want (see the example after the output below).
mysql> select fingerprint from query_review where length(sample)<50;
+----------------------------------------------+
| fingerprint |
+----------------------------------------------+
| administrator command: Ping |
| set session `wait_timeout` = ? |
| show tables |
| rollback |
| select * from information_schema.processlist |
| select @@session.tx_isolation |
| show status |
| start transaction |
| select user() |
| show query_response_time |
| set autocommit=? |
| show full processlist |
| show databases |
| administrator command: Statistics |
| show plugins |
| show slave status |
| commit |
| set names ? |
| show global status like ? |
| administrator command: Quit |
| show /*!? global */ status |
| set names utf? |
| select @@version_comment limit ? |
| show engine innodb status |
| select database() |
+----------------------------------------------+
25 rows in set (0.00 sec)
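
As mentioned above, those can all be marked as reviewed in one pass – something like this (the comment text is just an example):
mysql> UPDATE test.query_review SET reviewed_on=NOW(), reviewed_by='sheeri.cabral', comments='administrative query, nothing to optimize' WHERE LENGTH(sample)<50 AND reviewed_on IS NULL;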

One great feature is that you can add columns to the table. For example, maybe you want to add an “indexes” column to the table, and list the index or indexes used. Then after the query review is complete, you can look at all the indexes in use, and see if there is an index defined in a table that is NOT in use.
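
That addition might look like this (the column name and size are just an example):
mysql> ALTER TABLE test.query_review ADD COLUMN indexes_used VARCHAR(255) DEFAULT NULL;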

You can review all the queries and run a query review every 6 months or every year, to look at any new queries that have popped up, or queries that have been removed (note first_seen and last_seen in the table).

You can also see how the query performance changed over time using the --history flag to pt-query-digest, which can populate a table with statistics about each query. But that is a topic for another post!

Query reviews are excellent ways to look comprehensively at your queries, instead of just the “top 10” slow, locking, most frequent, etc. queries. The EXPLAINing is long and slow work but the results are worth it!

Why does the MySQL optimizer not do what I think it should?

In May, I presented two talks – one called “Are you getting the best out of your indexes?” and the other called “Optimizing Queries Using EXPLAIN”. I now have slides and video for both of them.

The first talk about indexing should probably be titled “Why is MySQL doing this?!!?!!?” It gives insight into why the MySQL optimizer chooses indexes that you do not expect, especially when it does not use an index you expect it to.

The talk has something for everyone – for beginners it explains B-trees and how they work, and for the more seasoned DBA it explains concepts like average value group size, and how the optimizer uses those concepts applied to metadata to make decisions.

Slides are at http://technocation.org/files/doc/2017_05_MySQLindexes.pdf.
Click the slide image below to go to the video at https://www.youtube.com/watch?v=e39-UfxQCCs
Slide from MySQL indexing talk

The EXPLAIN talk goes through everything in EXPLAIN – both the regular and JSON formats – and describes what the fields mean, and how you can use them to figure out how to best optimize your query. There are examples that show where you can find red flags, so that when you EXPLAIN your own queries, you can be better prepared for gotchas. The EXPLAIN talk references the indexing talk in a few places (both talks were given to the same audience, about a week apart), so I highly recommend you watch that one first.

Slides are at http://technocation.org/files/doc/2017_05_EXPLAIN.pdf.
Click the slide image below to go to the video at https://www.youtube.com/watch?v=OlclCoWXplg
Slide image from the EXPLAIN talk

Upgrading from MySQL 5.1 to MariaDB 5.5

In my last post, a tale of two MySQL upgrades, a few folks asked if I would outline the process we used to upgrade, and what kind of downtime we had.

Well, the processes were different for each upgrade, so I will tackle them in separate blog posts. The first step was to upgrade all our MySQL 5.1 machines to MariaDB 5.5. As mentioned in the previous post, MariaDB’s superior performance for subqueries is why we switched, and we switched back to MySQL at 5.6 to take full advantage of the performance_schema.

It is not difficult to blog about our procedure, as we have documentation on each process. My first tip would be to do that in your own environment. This also enables other folks to help, even if they are sysadmins and not normally DBAs. You may notice the steps contain items that might be obvious to someone who has done maintenance before; we try to write them in enough detail that if you were doing this at 3 am and a bit sleep-deprived, you could follow the checklist and not miss anything. This also helps junior and aspiring DBAs not miss any steps.

The major difference between MySQL 5.1 and MySQL 5.5 (and its forks, like MariaDB) is that FLOAT columns are handled differently. On MySQL 5.1, a float value could be displayed in scientific notation (e.g. 9.58084e-05) and in 5.5, it’s not (e.g. 0.0000958084). This makes checksumming difficult, as all FLOAT values will show differences even when they are the same number. There is a workaround for this, devised by Shlomi Noach.

We have an n+1 architecture for databases at Mozilla; this means that we have an extra server. If we need 1 master and 3 slaves, then n+1 is 1 master and 4 slaves. Because of this, there are 2 different upgrade procedures: one for the first slave we upgrade, and one for subsequent slaves and masters.

These steps are copied and pasted from our notes, with minor changes (for example, item #2 is “send out maintenance notices”, but in our document we have the e-mail addresses to send to).

Assumptions: Throughout these notes we use /var/lib/mysql, as that is our standard place for MySQL. You may need to change this to suit your environment. We are also using Red Hat Enterprise Linux for our operating system, so this procedure is tailored to it (e.g. yum install/yum remove). We control packages using the freely available puppet mysql module we created.

For the first slave
The overall procedure is to perform a logical backup of the database, create a new empty installation of the new server version, and import the backup. Replication does work from MySQL 5.1 to MariaDB 5.5 and back (at least, on the 25 or so clusters we have, replication worked in both directions; your mileage may vary).

1. Make sure the slave has the same data as the master with checksums (the previous checksum is fine, they should be running every 12 hours).

2. Send out maintenance notices.

3. Take the machine out of any load balanced services, if appropriate

4. Set appropriate downtimes in Nagios

5. Start a screen session on the server

6. Do a SHOW PROCESSLIST to see if there are any slaves of the machine. If so, move them to another master if they are needed. [we have a different checklist for this]

7. Do a SHOW SLAVE STATUS to see if this machine is a slave.
a. If this machine is a slave, ensure that its master will not delete its binlogs while the upgrade is occurring.

b. If this machine is a slave, do a SLAVE STOP; and copy the master.info file somewhere safe [or the slave_master_info table if using that]

8. Stop access to the machine from anyone other than root (assuming you are connecting from root):

UPDATE mysql.user SET password=REVERSE(password) WHERE user!='root'; FLUSH PRIVILEGES;

9. See what the default character set is for the server and databases:
SHOW VARIABLES LIKE 'character_set_server'; SHOW VARIABLES LIKE 'character_set_database';
SELECT SCHEMA_NAME FROM INFORMATION_SCHEMA.SCHEMATA WHERE DEFAULT_CHARACTER_SET_NAME!='utf8' AND SCHEMA_NAME NOT IN ('mysql');

If applicable, change the server defaults to UTF8 and change databases to utf8 with ALTER DATABASE dbname DEFAULT CHARACTER SET utf8;

10. Stop access to the machine from anyone other than root (assuming you are connecting from root): UPDATE mysql.user SET password=REVERSE(password) WHERE user!='root'; FLUSH PRIVILEGES;

11. Check to see how big the data is:
mysql> SELECT SUM(DATA_LENGTH)/1024/1024/1024 AS sizeGb FROM INFORMATION_SCHEMA.TABLES WHERE TABLE_SCHEMA!='information_schema';

12. Determine how you can export the data, given the size. You may be able to export without compression, or you may need to do a mysqldump | gzip -c > file.sql, then compress the old data files instead of just moving them aside.

13. Do a du -sh * of the datadir and save for later, if you want to compare the size of the database to see how much space is returned after defragmenting

14. Export the data from all databases, preserving character set, routines and triggers. Record the time for documentation’s sake. I’m assuming the character set from step 9 is utf8 (if it’s something like latin1, you’ll need to put --default-character-set=latin1 in the command). If the machine has slaves, make sure to use --master-data=1. If you need to compress, change the shell command accordingly:
time mysqldump --all-databases --routines --triggers --events > `date +%Y-%m-%d`_backup.sql

15. Stop MySQL

16. Copy the config file (usually /etc/my.cnf) to a safe place (like /etc/my.cnf.51)

17. Do a rpm -qa | egrep -i "percona|mysql". Do a yum remove for the mysql/percona packages. It’s OK if it also removes related packages, like perl-DBD, but make a note of them, because you will want to reinstall them later. Sample:
yum remove Percona-Server-client Percona-Server-shared-compat Percona-XtraDB-Cluster-devel Percona-Server-server

18. Move the /var/lib/mysql directory to /var/lib/mysql-old. Compress any files that need compression (for example, if you need the space to decompress the sql file). If you absolutely cannot keep the files, see if you can copy them somewhere. We really want to preserve the old data directory just in case we need to revert.

19. Decompress the sql file, if applicable.

20. Install the proper packages by changing puppet to use mariadb55 instead of mysql51 or percona51. Verify with rpm -qa | egrep -i "percona|mysql|maria"

[this may be different in your environment; we use the freely available puppet mysql module we created.]

21. Run mysql_install_db

22. Make any changes to /etc/my.cnf (e.g. run puppet). When going from MySQL 5.1 to 5.5, there are no particular global changes Mozilla made.

– when we went from MySQL 5.0 to MySQL 5.1, we did a global change to reflect the new slow query log options.

23. chown -R mysql:mysql /var/lib/mysql/

24. chmod 775 /var/lib/mysql

25. Start MySQL and check the error logs for any warnings. Get rid of any warnings/errors, and make sure MySQL is running.

26. Turn off binary logging. Import the export, timing how long it takes, for reference:

time mysql < YYYY_MM_DD_backup.sql

27. Restart MySQL and look for errors; you may need to run mysql_upgrade.

28. Turn on binary logging, if applicable.

29. Test.

30. If this machine was a slave, re-slave it. Let it catch up, making sure there are no data integrity errors, and no replication errors.

31. Reinstate permissions on the users:
UPDATE mysql.user SET password=REVERSE(password) WHERE user!='root'; FLUSH PRIVILEGES;

32. Re-slave any slaves of this machine, if needed.

33. Turn back on Nagios, making sure all the checks are green first.

34. Run a checksum on the master to propagate to this slave, and double-check data integrity on the slave. Note that you will want to use --ignore-columns with the output of the command below in the checksum, to avoid false positives from the scientific notation change (see http://www.sheeri.com/mysql-5-1-vs-mysql-5-5-floats-doubles-and-scientific-notation/)

Find FLOAT/DOUBLE fields to ignore in checksum: SELECT GROUP_CONCAT(DISTINCT COLUMN_NAME) FROM INFORMATION_SCHEMA.COLUMNS WHERE DATA_TYPE IN ('float','double') AND TABLE_SCHEMA NOT IN ('mysql','information_schema','performance_schema');

35. Put the machine back into the load balancer, if applicable.

36. Inform folks the upgrade is over

On the first upgrade, we did what is usually recommended – do a logical export with mysqldump, and then an import. With other upgrades in the same replication hierarchy, we can take advantage of Xtrabackup to stream the new version directly to the machine to be upgraded.

The general procedure here is similar to the above, except that a logical export is not taken. After preparation steps are taken, a new empty MariaDB 5.5 server is installed. Then we use xtrabackup to backup and restore the existing MariaDB 5.5 server to the machine we are upgrading.

For subsequent slaves, and the master

  1. Coordinate with affected parties ahead of time
  2. Send out any notices for downtime
  3. Take the machine out of any load balanced services, if appropriate. If the machine is a master, this means failing over the master first, so that this machine becomes a regular slave. [we have a different checklist for how to failover]
  4. Set appropriate downtimes in Nagios, including for any slaves
  5. Start a screen session on the server
  6. Do a SHOW PROCESSLIST to see if there are any slaves of the machine. If so, move them to another master if they are needed.
  7. Do a SHOW SLAVE STATUS to see if this machine is a slave.
    1. If this machine is a slave, ensure that the master will not delete its binlogs while the upgrade is occurring.
    2. If this machine is a slave, do a SLAVE STOP; and copy the master.info file somewhere safe
  8. Save a list of grants from pt-show-grants, just in case there are users/permissions that need to be preserved.  [this is done because sometimes masters and slaves have different users, though we try to keep everything consistent]
  9. Figure out how big the backup will be by doing a du -sh on the datadir of the already-upgraded machine to be backed up, and make sure the new machine has enough space to keep the old version and have the new version as well.
  10. Stop MySQL on the machine to be upgraded.
  11. Copy the config file (usually /etc/my.cnf) to a safe place (like /etc/my.cnf.51)
  12. Do a rpm -qa | egrep -i "mysql|percona". Do a yum remove for the mysql packages (at least mysql-server, mysql). It’s OK if it also removes related packages, like perl-DBD, but make a note of them, because you will want to reinstall them later.
  13. Move the /var/lib/mysql directory to /var/lib/mysql-old. Compress any files that need compression. If you absolutely cannot keep the files, see if you can copy them somewhere. We really want to preserve the old data directory just in case we need to revert.
  14. Install the proper packages by changing puppet to use mariadb55 instead of mysql51 or percona51, running puppet manually. Verify with rpm -qa | egrep -i "percona|mysql|maria"
  15. Run mysql_install_db
  16. Make any changes to /etc/my.cnf (or run puppet). When going from MySQL 5.1 to 5.5, there are no particular changes.
  17. chown -R mysql:mysql /var/lib/mysql/
  18. chmod 775 /var/lib/mysql
  19. Start MySQL and check the error logs for any warnings. Get rid of any warnings/errors, and make sure MySQL is started.
  20. Stop MySQL, and move or delete the datadir that was created on upgrade.
  21. If you are directly streaming the backup to the machine to be upgraded, do this on the machine to be upgraded:
    cd $DATADIR
    nc -l 9999 | tar xfi -
  22. On the machine to be backed up (that is already upgraded), in a screen session, making sure you get any slave info:
    time innobackupex --slave-info --stream=tar $DATADIR | nc (IP/hostname) 9999
  23. Once xtrabackup is complete, fix permissions on the datadir:
    chown -R mysql:mysql /var/lib/mysql/
    chmod 775 /var/lib/mysql
  24. Prepare the backup:
    time innobackupex --apply-log /var/lib/mysql
  25. Fix permissions on the datadir again:
    chown -R mysql:mysql /var/lib/mysql/
    chmod 775 /var/lib/mysql
  26. Restart MySQL and look for errors
  27. Test.
  28. If this machine was a slave, re-slave it. Let it catch up, making sure there are no data integrity errors, and no replication errors.
  29. Re-slave any slaves of this machine, if needed.
  30. Turn back on Nagios, making sure all checks are green first.
  31. Put the machine back into the load balancer, if applicable.
  32. Inform folks the upgrade is over

It’s long and detailed, but not particularly difficult.

A Tale of Two MySQL Upgrades

At the beginning of 2013, Mozilla’s MySQL databases were a mix of MySQL 5.0, Percona’s patched MySQL 5.1, Percona’s patched MySQL 5.5 and MariaDB 5.5. MySQL 5.1 was released in November 2008, so at the beginning of the year we still had databases with no new major features in 4 years. Currently we have almost all our databases at Oracle’s MySQL 5.6 – the only stragglers are our cluster running TokuDB and a few machines that are no longer in use. Here’s a graph showing the state of our machines; you can see that in the first half of the year we concentrated on upgrading our 5.0 and 5.1 servers to 5.5, and then in the second half of the year we upgraded everything to MySQL 5.6 (click on the image to get the .png file):

After running some tests, we determined that MariaDB 5.5 was the best option for us and our particular workload. For most of our servers, it did not matter whether we used Percona, MariaDB or Oracle’s MySQL, but our Bugzilla servers really benefited from MariaDB’s better subquery optimization, so we went with that. We had set up some Percona 5.5 servers over the spring/summer of 2012, when we moved some of our infrastructure to a new data center.

We upgraded to MySQL 5.5 to be on a recent version of MySQL. In the middle of the year, we had a choice: should we stay where we were, or should we upgrade? We had no particular directive from developers to upgrade for the new MySQL 5.6 features. However, we have been doing more and more digging into our systems, and we really wanted the performance_schema features so we could dig even more. We want to be able to parse queries in real time, perhaps with Anemometer, without having to take an offline log file and run pt-query-digest on it.

So, we chose to upgrade to MySQL 5.6. Unfortunately, there were no other GA products to test against; by mid-2013, neither MariaDB nor Percona had a GA 5.6 product, so our bake-off was functional only, not performance-related. Oracle’s MySQL 5.6 passed with flying colors, and so we proceeded to upgrade.

Now, we have a recent and consistent version of MySQL installed, that we can work with to gain insights into our systems. A pretty great goal to have been met for 2013!

MySQL Workbench “Clean up SQL” Feature

I was playing around with MySQL Workbench earlier in the week, and ran across the “clean up SQL” feature, which I thought was neat. Here’s a picture-based demonstration – you can click on the pictures to make them bigger, so …

Learn MySQL for Free with MySQL Marinate, Season 3!

The 3rd season of MySQL Marinate begins October 1st! Join the meetup group and RSVP at season 3! You can do the work on MySQL, or if you prefer, MariaDB or Percona.

If you do not have the book yet, you can still do the first week by using the online material from Browse Contents on the O’Reilly book page for Learning MySQL. There is homework for week 1; see the master list for all the information.

If you would like to learn MySQL from the ground up, consider joining us. This is for beginners. If you have no experience with MySQL, or if you are a developer that wants to learn how to administer MySQL, or an administrator that wants to learn how to query MySQL, this course is what you want.

If you are not a beginner, you are welcome to join too – maybe you need a refresher, or maybe you just want to test your knowledge or earn badges. That’s OK too!

The format of a virtual self-study group is as follows:

Each participant acquires the same textbook (Learning MySQL, the “butterfly O’Reilly book”, published 2007). You can acquire the textbook however you want (e.g. from the library or from a friend), but if you buy the book, we ask that you buy it from our Amazon Store, to help pay for meetup fees.

Each participant commits to reading one chapter per week, completing the exercises, and posting a link to the completed work. Tweet using the hashtag #mysqlmarinate.

Each participant obtains assistance by posting questions to a discussion area set up on the Virtual Tech Self Study Message Board for each chapter.

Each participant receives a badge upon finishing each chapter and all assignments.

Note: There is no classroom or video instruction.

How do I get started?

Become a member of the Virtual Tech Self Study Meetup Group.

Register for MySQL Marinate. RSVP to this event: Yes

Acquire the book (the only item that may cost money). Get your hands on Learning MySQL – see if your local library has it, if someone is selling their copy, or buy it from our Amazon Store (this helps pay for meetup fees).

When your book arrives, start your virtual learning by reading one chapter per week. Complete the exercises; if you have any questions, comments or want to learn more in-depth, that’s what the forums are for!

Learning MySQL

FAQs:

Q: How long will the course last?

A: We will cover 12 lessons (chapters) in the book, so 12 (twelve) weeks starting October 1st, though we will have one break week so that you can catch up if you need to, or take a week off if you don’t. Refer to the MySQL Marinate Season 3 Master Discussion List for specific dates.

By January 1st, 2014, you will know MySQL!!

Q: Can I get ahead?

A: Sure! This is go-at-your-own-pace. To prevent spoilers, please put comments in the appropriate chapter threads.

Q: Does this cover the Percona patch set or MariaDB forks?

A: This covers the basics of MySQL, which are immediately transferable to Percona’s patched MySQL or MariaDB builds.

Q: What do I need in order to start the course?

A: All you need is the book and access to a computer, preferably one that you have control over. Installing MySQL is chapter 2, so really, all you need to start is the book and a computer; you don’t have to worry about any prerequisites. If you do not have the book yet, you can still do the first week by using the online material from Browse Contents at the O’Reilly Learning MySQL page.

Q: Where can I put completed assignments?

A: You will be able to put completed assignments on github. Instructions are in week 1.

Q: The book was published in 2007. Isn’t that a bit old?

A: Yes! O’Reilly is working on new material, but it is not ready yet. The basics are still accurate, and we will let you know what in the book is outdated. We will also have optional supplemental/extra credit material for those who want to learn more right away. We are confident that this self-study course will make you ready to dive into other, more advanced material.

Soak it in!

*either tomorrow or today, depending on when and where you read this. Or it was in the past, if you have to catch up on your blog posts. It’s OK, you can join us late, too. You can go at your own pace.