Percona Live: MySQL Conference and Expo Call for Papers Extended!

Sheeri’s feed from the Mozilla.com IT blog

I’m not a wizard with infographics, but there are almost 400 databases at Mozilla, in 11 different categories. Here is how each category fares in number of databases:

Mozilla DBs in 2012

Here is how each category measures up with regards to database size – clearly, our crash-stats database (which is on Postgres, not MySQL) is the largest:

2012 size of all Mozilla databases

So here is another pie chart with the relative sizes of the MySQL databases:
2012 size of MySQL databases at Mozilla

I’m sure I’ve miscategorized some things (for instance, are metrics on AMO classified under AMO/Marketplace or “internal tools”?) but here are the categories I used:

Categories:
air.m.o – air.mozilla.org
AMO/Marketplace – addons/marketplace
blog/web page – it’s a db behind a blog or mostly static webpage
bugzilla – Bugzilla
Crash-stats – Socorro, crash-stats.mozilla.com – where apps like Firefox send crash details.
Internal tool – If the db behind this is down, moco/mofo people may not be able to do their work. This covers applications from graphs.mozilla.org to inventory.mozilla.org to the PTO app.
release tool – If this db is down, releases cannot happen (but this db is not a tree-closing db).
SUMO – support.mozilla.org
Tree-closing – If this db is down, the tree closes (and releases can’t happen).
World-facing – If this db is down, people outside moco/mofo will notice. These are specifically tools that folks interact with, including the Mozilla Developer Network and sites like gameon.mozilla.org.
World-interfacing – This db is critical to tools we use to interface with the world, though not necessarily world-visible: basket.mozilla.org, Mozillians, etc.

The count of databases includes all production/dev/stage servers. The size is the size of the database on one of the production/dev/stage machines. For example, Bugzilla has 6 servers in use – 4 in production and 2 in stage. The size is the size of the master in production and the master in stage, combined. This way we have not grossly inflated the size of the database, even though technically speaking we do have to manage the data on each of the servers.

For next year, I hope to be able to gather this kind of information automatically, and have easily accessible comprehensive numbers for bandwidth, number of queries per day on each server, and more.

You may have noticed that I stopped posting the “weekly news” from the Mozilla DB Team. After going through the Operations Report Card and applying it to DBAs in OurSQL Podcast episodes 111, 112, 114, 115 and 116, I started thinking that the updates were really more like metrics, and that it would serve my purposes better to do the updates monthly.

The purposes of doing this type of blog post are:
0) Answering “So what does a DBA do, anyway?”
1) Answering “DBA? At Mozilla? Does Firefox have a database? Why does Mozilla have databases, and what for?”
2) Showing what the DB team does for Mozilla, so that folks will understand that “just keeping things working” is actually a lot of work. It also helps compile yearly reviews of accomplishments.

We are also starting to get some metrics information. This month we started easy – number of MySQL and Postgres machines, number of unique databases (mysql, information_schema, performance_schema and test are ignored, and duplicates, like the same database on a master and 2 slaves, are ignored), and version information.
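That counting rule (skip the system schemas; count a schema once per cluster even when a master and two slaves each carry a copy) can be sketched in Python. This is purely illustrative: the function name and the sample cluster layout are invented here, not our actual tooling.

```python
# Hypothetical sketch: count unique databases across servers, ignoring
# system schemas and copies replicated within a cluster.
SYSTEM_SCHEMAS = {"mysql", "information_schema", "performance_schema", "test"}

def count_unique_databases(clusters):
    """clusters maps a cluster name to a list of per-server schema lists.

    A schema that appears on several servers in the same cluster
    (e.g. a master and its slaves) is counted once.
    """
    unique = set()
    for cluster, servers in clusters.items():
        for schemas in servers:
            for schema in schemas:
                if schema.lower() not in SYSTEM_SCHEMAS:
                    # Deduplicate within a cluster via (cluster, schema) pairs
                    unique.add((cluster, schema))
    return len(unique)

# Example: one cluster with a master and 2 slaves carrying the same db
clusters = {
    "bugzilla": [
        ["bugs", "mysql", "information_schema"],  # master
        ["bugs", "mysql", "information_schema"],  # slave 1
        ["bugs", "mysql", "information_schema"],  # slave 2
    ],
    "sumo": [["kitsune", "test", "mysql"]],
}
print(count_unique_databases(clusters))  # → 2
```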

As of today, we have 9 unique databases across 8 Postgres servers in 4 clusters, with 6 being Postgres 9.0 and 2 being Postgres 9.2 – we are currently upgrading all our Postgres databases to 9.2 and expect that by the end of December all servers will be using 9.2.

We have 427 unique databases across 98 MySQL DB machines in 20 clusters, with 3 being MySQL 5.0, 71 being MySQL 5.1 (mostly Percona’s patched 5.1), and 24 being MariaDB 5.5.

And in the last week of October and the month of November, we have:

  • Documented 4 more of our Nagios checks.
  • Started to upgrade Postgres databases to Postgres 9.2.
  • Decommissioned a legacy database cluster for Firefox themes.
  • Built a new database cluster (complete with monitoring and backups) for a new Sentry implementation.
  • Upgraded 14 machines for general operating system updating purposes and to ensure that be2net drivers are up-to-date; out-of-date drivers can (and have!) caused servers to crash.
  • Upgraded MySQL 5.0 to MySQL 5.1 across 3 clusters and 7 machines.
  • Did a quarterly purge of Crash Stats data.
  • Had to re-sync 6 slaves after a transaction rolled back on the master, but some of the tables it modified were MyISAM (which is not transactional), so the master ended up with data in some tables that was out of sync with the slaves.
  • Assisted in converting to use UTC timestamps in the Elmo database behind the Mozilla localization portal and the Bugzilla Anthropology Project, prompting a blog post on converting timezone-specific times in MySQL.
  • Decommissioned a legacy “production generic” database cluster that had over 60 databases on it.
  • Built a 5th database backup instance due to growing backup needs.
  • Changed binary log format to MIXED on our JIRA installation due to JIRA requirements and an upgrade to MySQL 5.1 issuing warnings that MySQL 5.0 had not.
  • Added checksums to the database cluster that runs Input and Firefox about:home snippets.
  • Archived and dropped the database behind Rock Your Firefox.
  • Exported Bugzilla data for a research project. Did you know if you are doing academic research, you can get a copy of Mozilla’s public Bugzilla data?
  • Gave read-only database access to a developer behind the Datazilla project.
  • Updated the email list for vouched Mozillians.
  • Backfilled missing crash-stats data after some failed cron scripts.
  • Cleared some junk data from the Datazilla databases.
  • Added new custom fields to our implementation of Bugzilla for upcoming release versions: Firefox 20, Thunderbird 20, Thunderbird ESR 20 and SeaMonkey 2.17.
  • Added 10 new machines to the Graphs database, and added new sets of machines for Mozilla ESR 17 and Thunderbird ESR 17.
  • Gave read-only database access to the two main leads of the Air Mozilla project.
  • Debugged and turned off a 10-second timeout in our load balancing pool that was causing Postgres monitors and processors to lose connection to their databases.
  • Discovered that the plugins database actually does better with the query_cache turned on, and tuned its size.
  • Tweaked tokudb_cache_size and innodb_buffer_pool_size on our Datazilla databases so that less swap would be used.
  • Created 2 read/write accounts for 2 people to access the development database for Mozillians.
  • Gave access to Datazilla databases for staging.

Wednesday, Nov 28th was my 1-year anniversary at Mozilla. Tomorrow is December! 2012 went by very quickly.

So, I’ve started a new job as a Senior Database Engineer at Salesforce, and one of the services I help provide is adding users to MySQL. We have some nice Chef recipes, so all I have to do is update a few files, including adding in the MySQL password hash.

Now, when I added myself, I just logged into MySQL and generated a password hash. But when my SRE (systems reliability engineer) colleague needed to generate one, he did not have a MySQL system he could log in to.

The good news is that it’s easy to generate a MySQL password hash. The MySQL password hash is simply a SHA1 hash of the binary SHA1 hash of the password, with a * prepended. That means you do not need a MySQL database to create a MySQL password hash – all you need is a programming language that has a SHA1 function (well, and a concatenate function).

And I found it, of course, on this post at StackExchange. So you don’t have to click through, here is what it says – and I have tested all these methods and I get the same password hash. I have changed their example of “right” to “PASSWORD HERE” so it’s more readable and obvious where the password goes, in case you copy and paste from here.

Some one-liners:

MySQL (may require you to add -u<user> -p):

mysql -NBe "select password('PASSWORD HERE')"

Python:

python -c 'from hashlib import sha1; print "*" + sha1(sha1("PASSWORD HERE").digest()).hexdigest().upper()'

Perl:

perl -MDigest::SHA1=sha1_hex -MDigest::SHA1=sha1 -le 'print "*". uc sha1_hex(sha1("PASSWORD HERE"))'

PHP:

php -r 'echo "*" . strtoupper(sha1(sha1("PASSWORD HERE", TRUE))) . "\n";'

Hopefully these help you – they have enabled my colleagues to easily generate what’s needed without having to find (or create) a MySQL instance that they can already login to.
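One note on the Python one-liner above: it is Python 2 (byte strings and the print statement). Here is a rough Python 3 equivalent of the same double-SHA1 construction; the function name is mine, chosen for illustration.

```python
# Python 3 sketch of the old-style MySQL (mysql_native_password) hash:
# "*" followed by the uppercase hex of SHA1(SHA1(password)), where the
# inner hash is fed in as raw binary, not hex.
from hashlib import sha1

def mysql_password_hash(password: str) -> str:
    """Return the MySQL password hash for the given plaintext password."""
    return "*" + sha1(sha1(password.encode()).digest()).hexdigest().upper()

print(mysql_password_hash("password"))
# → *2470C0C06DEE42FD1618BB99005ADCA2EC9D1E19
```

The result matches what `SELECT PASSWORD('password')` returns on the server, so colleagues without a MySQL login can generate the hash locally.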

 

 
Sheeri’s feed from the Mozilla.com IT blog

I’m not a wizard with infographics, web there are almost 400 databases at Mozilla, patient in 11 different categories. Here is how each category fares in number of databases:

Mozilla DBs in 2012

Here is how each category measures up with regards to database size – clearly, our crash-stats database (which is on Postgres, not MySQL) is the largest:

2012 size of all Mozilla databases

So here is another pie chart with the relative sizes of the MySQL databases:
2012 size of MySQL databases at Mozilla

I’m sure I’ve miscategorized some things (for instance, are metrics on AMO classified under AMO/Marketplace or “internal tools”?) but here are the categories I used:

Categories:
air.m.o – air.mozilla.org
AMO/Marketplace – addons/marketplace
blog/web page – it’s a db behind a blog or mostly static webpage
bugzilla – Bugzilla
Crash-stats – Socorro, crash-stats.mozilla.com – Where apps like Firefox send crash details.
Internal tool – If the db behind this is down, moco/mofo people may not be able to do their work. This covers applications from graphs.mozilla.org to inventory.mozilla.org to the PTO app.
release tool – If this db is down, releases can not happen (but this db is not a tree-closing db).
SUMO – support.mozilla.org
Tree-closing – if this db is down, the tree closes (and releases can’t happen)
World-facing – if this db is down, non moco/mofo ppl will notice. These are specifically tools that folks interact with, including the Mozilla Developer Network and sites like gameon.mozilla.org
World-interfacing – This db is critical to tools we use to interface with the world, though not necessarily world visible. basket.mozilla.org, Mozillians, etc.

The count of databases includes all production/dev/stage servers. The size is the size of the database on one of the production/dev/stage machines. For example, Bugzilla has 6 servers in use – 4 in production and 2 in stage. The size is the size of the master in production and the master in stage, combined. This way we have not grossly inflated the size of the database, even though technically speaking we do have to manage the data on each of the servers.

For next year, I hope to be able to gather this kind of information automatically, and have easily accessible comprehensive numbers for bandwidth, number of queries per day on each server, and more.

I’m not a wizard with infographics, drugs there are almost 400 databases at Mozilla, geriatrician in 11 different categories. Here is how each category fares in number of databases:

Mozilla DBs in 2012

Here is how each category measures up with regards to database size – clearly, our crash-stats database (which is on Postgres, not MySQL) is the largest:

2012 size of all Mozilla databases

So here is another pie chart with the relative sizes of the MySQL databases:
2012 size of MySQL databases at Mozilla

I’m sure I’ve miscategorized some things (for instance, are metrics on AMO classified under AMO/Marketplace or “internal tools”?) but here are the categories I used:

Categories:
air.m.o – air.mozilla.org
AMO/Marketplace – addons/marketplace
blog/web page – it’s a db behind a blog or mostly static webpage
bugzilla – Bugzilla
Crash-stats – Socorro, crash-stats.mozilla.com – Where apps like Firefox send crash details.
Internal tool – If the db behind this is down, moco/mofo people may not be able to do their work. This covers applications from graphs.mozilla.org to inventory.mozilla.org to the PTO app.
release tool – If this db is down, releases can not happen (but this db is not a tree-closing db).
SUMO – support.mozilla.org
Tree-closing – if this db is down, the tree closes (and releases can’t happen)
World-facing – if this db is down, non moco/mofo ppl will notice. These are specifically tools that folks interact with, including the Mozilla Developer Network and sites like gameon.mozilla.org
World-interfacing – This db is critical to tools we use to interface with the world, though not necessarily world visible. basket.mozilla.org, Mozillians, etc.

The count of databases includes all production/dev/stage servers. The size is the size of the database on one of the production/dev/stage machines. For example, Bugzilla has 6 servers in use – 4 in production and 2 in stage. The size is the size of the master in production and the master in stage, combined. This way we have not grossly inflated the size of the database, even though technically speaking we do have to manage the data on each of the servers.

For next year, I hope to be able to gather this kind of information automatically, and have easily accessible comprehensive numbers for bandwidth, number of queries per day on each server, and more.

You may have noticed that I stopped posting the “weekly news” from the Mozilla DB Team. After going through the Operations Report Card and applying it to DBAs in OurSQL Podcast numbers 111, 112, 114, 115 and 116, I started thinking that the updates were really more like metrics, and that it would better serve my purposes to do the updates monthly.

The purposes of doing this type of blog post are:
0) Answering “So what does a DBA do, anyway?”
1) Answering “DBA? At Mozilla? Does Firefox have a database? Why does Mozilla have databases, and what for?”
2) Showing what the DB team does for Mozilla, so that folks will understand that “just keeping things working” is actually a lot of work. It also helps compile yearly reviews of accomplishments.

We are also starting to get some metrics information. This month we started easy – number of MySQL and Postgres machines, number of unique databases (mysql, information_schema, performance_schema and test are ignored, and duplicates, like the same database on a master and 2 slaves, are ignored), and version information.
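The dedup rule above – count each database once no matter how many servers hold a copy, and skip the system/test schemas – can be sketched in a few lines. This is my own illustrative sketch with made-up server names, not Mozilla's actual inventory tooling:

```python
# Hypothetical per-server schema listings; real data would come from running
# something like `mysql -NBe "SHOW DATABASES"` against each host.
SYSTEM_SCHEMAS = {"mysql", "information_schema", "performance_schema", "test"}

servers = {
    "db1-master": ["bugzilla", "sumo", "mysql", "information_schema"],
    "db1-slave1": ["bugzilla", "sumo", "mysql"],  # replica of the master
    "db2-master": ["addons", "test"],
}

# A set collapses duplicates across masters and slaves automatically.
unique = set()
for host, schemas in servers.items():
    unique.update(s for s in schemas if s not in SYSTEM_SCHEMAS)

print(len(unique), sorted(unique))  # 3 ['addons', 'bugzilla', 'sumo']
```
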

As of today, we have 9 unique databases across 8 Postgres servers in 4 clusters, with 6 being Postgres 9.0 and 2 being Postgres 9.2 – we are currently upgrading all our Postgres databases to 9.2 and expect that by the end of December all servers will be using 9.2.

We have 427 unique databases across 98 MySQL DB machines in 20 clusters, with 3 being MySQL 5.0, 71 being MySQL 5.1 (mostly Percona’s patched 5.1), and 24 being MariaDB 5.5.

And in the last week of October and the month of November, we have:

  • Documented 4 more of our Nagios checks.
  • Started to upgrade Postgres databases to Postgres 9.2.
  • Decommissioned a legacy database cluster for Firefox themes.
  • Built a new database cluster (complete with monitoring and backups) for a new Sentry implementation.
  • Upgraded 14 machines for general operating system updating purposes and to ensure that be2net drivers are up-to-date; out-of-date drivers can (and have!) caused servers to crash.
  • Upgraded MySQL 5.0 to MySQL 5.1 across 3 clusters and 7 machines.
  • Did a quarterly purge of Crash Stats data.
  • Had to re-sync 6 slaves after a transaction was rolled back on the master while some of the modified tables were MyISAM. MyISAM tables cannot roll back, so the master ended up with data in some tables that was out of sync with the slaves.
  • Assisted in converting to use UTC timestamps in the Elmo database behind the Mozilla localization portal and the Bugzilla Anthropology Project, prompting a blog post on converting timezone-specific times in MySQL.
  • Decommissioned a legacy “production generic” database cluster that had over 60 databases on it.
  • Built a 5th database backup instance due to growing backup needs.
  • Changed binary log format to MIXED on our JIRA installation due to JIRA requirements and an upgrade to MySQL 5.1 issuing warnings that MySQL 5.0 had not.
  • Added checksums to the database cluster that runs Input and Firefox about:home snippets.
  • Archived and dropped the database behind Rock Your Firefox.
  • Exported Bugzilla data for a research project. Did you know if you are doing academic research, you can get a copy of Mozilla’s public Bugzilla data?
  • Gave read-only database access to a developer behind the Datazilla project.
  • Updated the email list for vouched Mozillians.
  • Backfilled missing crash-stats data after some failed cron scripts.
  • Cleared some junk data from the Datazilla databases.
  • Added new custom fields to our implementation of Bugzilla for upcoming release versions: Firefox 20, Thunderbird 20, Thunderbird ESR 20 and SeaMonkey 2.17.
  • Added 10 new machines to the Graphs database, and added new sets of machines for Mozilla ESR 17 and Thunderbird ESR 17.
  • Gave read-only database access to the two main leads of the Air Mozilla project.
  • Debugged and turned off a 10-second timeout in our load balancing pool that was causing Postgres monitors and processors to lose connection to their databases.
  • Discovered that the plugins database actually does better with the query_cache turned on, and tuned its size.
  • Tweaked tokudb_cache_size and innodb_buffer_pool_size on our Datazilla databases so that less swap would be used.
  • Created 2 read/write accounts for 2 people to access the development database for Mozillians.
  • Gave access to Datazilla databases for staging.

Wednesday, Nov 28th was my 1-year anniversary at Mozilla. Tomorrow is December! 2012 went by very quickly.

So, I’ve started a new job as a Senior Database Engineer at Salesforce, and one of the services I help provide is adding users to MySQL. We have some nice Chef recipes, so all I have to do is update a few files, including adding in the MySQL password hash.

Now, when I added myself, I just logged into MySQL and generated a password hash. But when my SRE (systems reliability engineer) colleague needed to generate a password, he did not have a MySQL system he could log in to.

The good news is that it’s easy to generate a MySQL password hash. The MySQL password hash is simply a SHA1 hash of a SHA1 hash, with a * prepended. This means you do not need a MySQL database to create a MySQL password hash – all you need is a programming language that has a SHA1 function (well, and a concatenate function).

And I found the answer, of course, in this post at StackExchange. So that you don’t have to click through, here is what it says – I have tested all these methods and I get the same password hash from each. I have changed their example of “right” to “PASSWORD HERE” so it’s more readable and obvious where the password goes, in case you copy and paste from here.

Some one-liners:

MySQL (may require you to add -u(user) -p):

mysql -NBe "select password('PASSWORD HERE')"

Python:

python -c 'from hashlib import sha1; print "*" + sha1(sha1("PASSWORD HERE").digest()).hexdigest().upper()'

Perl:

perl -MDigest::SHA1=sha1_hex -MDigest::SHA1=sha1 -le 'print "*". uc sha1_hex(sha1("PASSWORD HERE"))'

PHP:

php -r 'echo "*" . strtoupper(sha1(sha1("PASSWORD HERE", TRUE))) . "\n";'

Hopefully these help you – they have enabled my colleagues to easily generate what’s needed without having to find (or create) a MySQL instance that they can already login to.
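The Python one-liner above uses Python 2 syntax (a print statement, and sha1 applied directly to a str). Under Python 3, where sha1 requires bytes, the same double-SHA1 construction can be written as a small function – this is my own sketch, not part of the StackExchange post:

```python
import hashlib

def mysql_password_hash(password: str) -> str:
    # Old-style (pre-MySQL 8.0) hash: '*' followed by the uppercase hex
    # of SHA1 applied to the raw SHA1 digest of the password.
    inner = hashlib.sha1(password.encode("utf-8")).digest()
    return "*" + hashlib.sha1(inner).hexdigest().upper()

# PASSWORD('password') in MySQL yields this well-known value:
print(mysql_password_hash("password"))  # *2470C0C06DEE42FD1618BB99005ADCA2EC9D1E19
```
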


The call for papers for Percona Live: MySQL Conference and Expo 2013 has been extended through October 31st. The conference will be held in Santa Clara, Monday, April 22nd through Thursday, April 25th (and this year it’s not during Passover!).

Why You Should Submit
Percona Live is a meeting of the minds – not just the April Santa Clara conference, but all the Percona Live conferences. If you get a proposal accepted, you get free admission to the conference!

There is no cost to submit, and you do not have to tell anyone you submitted. I have submitted to conferences and been rejected – it stinks. But there is no reason not to submit. Submit a few presentations on different topics, because the presentation you have in mind might be submitted by other people too. If you have a presentation about backups rejected, it might be that someone else is doing a presentation on backups. Try to find something unique that nobody else is doing – I have not seen anything close to my talk on White-Hat Google Hacking, for example.

Submitting Your Project
I am not on the conference content committee, but I have served on content committees for the former O’Reilly MySQL Conference, OSCon, and several other conferences. If you are submitting a proposal about a project you have, or a product your company makes, it is not too hard to make a presentation that is worthy of acceptance. All you need to do is give a talk where the audience learns something, even if they have no interest in your product.

Let me go into this a bit more: you have this project/product related to MySQL. It solves a problem – usually that problem is a missing feature in MySQL. An introductory presentation about your project/product should talk about traditional ways to solve the problem within MySQL, and then spend the last 10-15 minutes on your project/product. This way, even someone who will never use your product/project will learn something.

Let’s say you have a NoSQL solution. If you have a key-value store, talk about the problems of using unstructured or semi-structured data in MySQL, including how it can be done with MySQL (having big tables with 2 fields, keys and values). Or if you have a document or graph storage solution, talk about the problems of storing blobs of text or navigating through graphs while trying to use MySQL. That should take up 30 of the 45 minutes in a session, and the last 10 minutes can be explaining how your solution makes those problems easier to solve (with 5 minutes at the end for questions).

Why I Am Not Submitting
The main reason I am not submitting to Percona Live: MySQL Conference and Expo is to protest the conference itself. This specific Percona Live conference was born out of Percona being unwilling to work with members of the IOUG MySQL Council and every other major vendor in the MySQL space (Oracle, Monty Program, SkySQL, Continuent, Tokutek, and more) to make a conference that would be useful for everyone. Percona refused to work with us (the core team being myself, Giuseppe Maxia and Sarah Novotny) and made their own conference. By calling it “Percona Live” they ensured that Oracle would not be able to send representatives, because Percona is a direct competitor to Oracle (it can be argued that they called it “Percona Live” because it’s a continuation of their conference series; however, the fact remains that the name ensures Oracle cannot send employees). Percona refused to change the branding of the event, which was the one block Oracle had against sending their employees.

Percona Live is not the place to see MySQL engineers within Oracle, which has more software engineers working on MySQL than Percona and Monty Program combined. Roadmap discussions are rare, and only happen when a community member decides to do the best they can and figure out the roadmap. Therefore, Percona Live: MySQL Conference and Expo is a deliberate move by Percona to fracture the community.

But there are plenty of other very big reasons I am not submitting to Percona Live:


  • I do not need to go. For several years I worked in consulting firms, and Percona Live is a great place to meet potential new customers – but now that I am no longer consulting, that benefit no longer applies.

  • A big reason I speak is to help people learn. I am doing that in many ways – blogging, publishing the weekly OurSQL podcast, and next year from January through the end of March, teaching MySQL beginners through the free MySQL Marinate program.

  • I am also submitting to speak at conferences I have never spoken at (or have not spoken at in several years), such as SCALE (the Southern California Linux Expo), ConFoo and some Oracle conferences, such as the annual one held by the New Zealand Oracle User Group. And those conferences are just in the first quarter of 2013.

  • Percona Live is full of fantastic speakers. Other conferences, like the ones listed in the previous bullet point, do not traditionally have the same level of MySQL expertise at them. By speaking at those conferences, I can bring in something that’s missing. Speaking at Percona Live, I am one of many MySQL experts. I feel I have more value to give the other conferences.

  • All these projects are a lot of work. By not speaking at Percona Live: MySQL Conference and Expo, I can hopefully restore some of the energy that I use for the free MySQL Marinate program, the weekly OurSQL podcast, and the other conferences I am (or hope to be) speaking at.

At the end of the day, my personal mission is to help people, and not speaking at Percona Live: MySQL Conference and Expo does not change that mission. Hopefully this post explains my reasons for not submitting, so that 1) the community does not think my proposals were rejected when my name does not appear on the speaker list and 2) if the conference committee was wondering why they had not seen proposals from me, now they know why – and know that they will not be seeing any.

Comments are closed.