Install MariaDB via Yum, Apt, etc.

To get the latest MariaDB for your distribution (Yum, Apt, etc.), use:

https://downloads.mariadb.org/mariadb/repositories/

It will generate a config entry like:

[mariadb]
name = MariaDB
baseurl = http://yum.mariadb.org/10.1/centos7-amd64
gpgkey=https://yum.mariadb.org/RPM-GPG-KEY-MariaDB
gpgcheck=1

which you can install by saving it to /etc/yum.repos.d/mariadb.repo and then running: yum install MariaDB-server MariaDB-client
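The same repository page can also generate Apt sources for Debian/Ubuntu. A typical entry looks like the fragment below; the mirror URL and release name are placeholders, not exact values, so use whatever the generator gives you:

```
# /etc/apt/sources.list.d/mariadb.list
deb http://<your-mirror>/mariadb/repo/10.1/ubuntu <release> main
deb-src http://<your-mirror>/mariadb/repo/10.1/ubuntu <release> main
```

then run apt-get update followed by apt-get install mariadb-server mariadb-client.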

Rackspace API: Trove install to enable root user for cloud DB

Rackspace Trove set-up:

Mac OS X:

sudo easy_install pip
sudo pip install python-troveclient

# trove:
# http://docs.rackspace.com/cdb/api/v1.0/cdb-getting-started/content/Install_Trove_Client.html

trove list
trove root-enable

Set sort order on GROUP BY in MySQL

Problem: When using GROUP BY in MySQL you don’t really have control over which row’s value ends up in the result set (and using ORDER BY in your main query won’t help, due to the processing order). You are also limited to a single value.

Answer: Use MIN or MAX in your SELECT statement. For example, in a case where you had a bunch of days grouped by some attribute and you wanted to find both the first and last date, you can use something like:

SELECT...
MIN(my_days.date) AS start_date, 
MAX(my_days.date) AS end_date,
...
GROUP BY...
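To make the pattern concrete, here is a runnable sketch using SQLite via Python’s sqlite3 module (SQLite supports the same MIN/MAX aggregates; the table and column names my_days, attr, date are illustrative only):

```python
import sqlite3

# In-memory demo: MIN/MAX pick the first and last date per group,
# regardless of which physical row GROUP BY would otherwise surface.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE my_days (attr TEXT, date TEXT)")
conn.executemany(
    "INSERT INTO my_days VALUES (?, ?)",
    [("a", "2015-01-03"), ("a", "2015-01-01"), ("a", "2015-01-02"),
     ("b", "2015-02-10"), ("b", "2015-02-05")],
)
rows = conn.execute(
    "SELECT attr, MIN(date) AS start_date, MAX(date) AS end_date "
    "FROM my_days GROUP BY attr ORDER BY attr"
).fetchall()
for row in rows:
    print(row)
# ('a', '2015-01-01', '2015-01-03')
# ('b', '2015-02-05', '2015-02-10')
```

ISO-formatted dates sort correctly as strings, which is why MIN/MAX work here even on a TEXT column.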

MySQL 5.1 master-slave replication

Finally did this, at least a simple Master -> Slave arrangement, but still pretty cool and great for any case where your production server is (mostly) read-only.

dev.mysql.com/5.1/replication

No real issues to add. The MySQL site (see link above) has pretty clear instructions.

Still would like to try a full replication cluster someday.


quick fix if replication stops:
mysql -u root --password="xxx" -Bse "stop slave"
mysql -u root --password="xxx" -Bse "reset slave"
mysql -u root --password="xxx" -Bse "start slave"

Script to check and log/email replication errors:
mysql_replication_check_script.txt
It will also try the above quick-fix procedure. Enter MySQL user details, email address, and log location prior to use.
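The linked script isn’t reproduced here, but the core check it performs can be sketched in Python. Everything below is a hypothetical sketch: the function names, credentials, and the exact -Bse output format are assumptions, not the real script:

```python
import subprocess

def parse_slave_status(output):
    """Parse the vertical (\\G) output of SHOW SLAVE STATUS into a dict.

    Each line looks like 'Slave_IO_Running: Yes'; anything without a
    colon is ignored.
    """
    status = {}
    for line in output.splitlines():
        if ":" in line:
            key, _, value = line.partition(":")
            status[key.strip()] = value.strip()
    return status

def replication_ok(status):
    # Both replication threads must report "Yes", as in the note above.
    return (status.get("Slave_IO_Running") == "Yes"
            and status.get("Slave_SQL_Running") == "Yes")

def check_slave(user="root", password="xxx"):
    # Shells out to the mysql client the same way the quick-fix commands do.
    out = subprocess.run(
        ["mysql", "-u", user, "--password=" + password,
         "-Bse", "show slave status\\G"],
        capture_output=True, text=True).stdout
    return replication_ok(parse_slave_status(out))
```

From here the real script goes on to log/email the failure and run the stop/reset/start quick fix; that part is omitted.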


Fixing replication via full reset:

If the above quick-fix doesn’t work, here is a step-by-step procedure to resync a master-slave replication from scratch:

At the master:

RESET MASTER;
FLUSH TABLES WITH READ LOCK;
SHOW MASTER STATUS;
And copy the values of the result of the last command somewhere.

Without closing the connection to the client (because it would release the read lock), issue the command to get a dump of the master:

mysqldump -uroot -p --all-databases > /a/path/mysqldump.sql
Now you can release the lock, even if the dump hasn’t finished yet. To do so, run the following command in the mysql client:

UNLOCK TABLES;
Now copy the dump file to the slave using scp or your preferred tool.

At the slave:

Open a connection to mysql and type:

STOP SLAVE;
Load the master’s data dump with this console command:

mysql -uroot -p < mysqldump.sql
Sync slave and master logs:

RESET SLAVE;
CHANGE MASTER TO MASTER_LOG_FILE='mysql-bin.000001', MASTER_LOG_POS=98;
Where the values of the above fields are the ones you copied before.

Finally type

START SLAVE;
And to check that everything is working again, if you type

SHOW SLAVE STATUS;
you should see:

Slave_IO_Running: Yes
Slave_SQL_Running: Yes
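The file and position you copied from SHOW MASTER STATUS are exactly what goes into CHANGE MASTER TO. A tiny (hypothetical) helper to format that statement from the tab-separated output of mysql -Bse "SHOW MASTER STATUS":

```python
def change_master_stmt(status_line):
    """Build the CHANGE MASTER TO statement from one -Bse status line.

    With -Bse the columns come back tab-separated with no header:
    File, Position, Binlog_Do_DB, Binlog_Ignore_DB.
    """
    fields = status_line.split("\t")
    log_file, log_pos = fields[0], fields[1]
    return ("CHANGE MASTER TO MASTER_LOG_FILE='%s', MASTER_LOG_POS=%s;"
            % (log_file, log_pos))

print(change_master_stmt("mysql-bin.000001\t98"))
# CHANGE MASTER TO MASTER_LOG_FILE='mysql-bin.000001', MASTER_LOG_POS=98;
```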

(thanks david-espart @SO)

Export script for MySQL export over the web

For auto-update scripts on a different server, etc., it’s nice to have a web-accessible database export. Make sure you set up adequate security (both for access and for allowing PHP to exec)! You may not want to store the db password in this file, and you should use a read-only MySQL account. You can pass additional parameters to the mysqldump command as needed.

<?php
header("Content-type: text/plain; charset=utf-8");
header('Content-Description: data download');
header('Content-Disposition: attachment; filename=' . date('Ymd-') . 'dbname_data.sql');
header('Content-Transfer-Encoding: 8bit');
header('Expires: 0');
header('Cache-Control: must-revalidate, post-check=0, pre-check=0');
header('Pragma: public');
system("mysqldump dbname -u dbuser -pdbpass");
exit;

Example shell script using the above:

echo "update db"
curl http://host.domain.com/_util/exportdb.php | \
	/usr/local/bin/mysql dbname -u mysqluser
echo
echo "svn update svnrepo reponame"
svn update /path/to/local/svnrepo

You could also compress and decompress the data if it’s a large file:

to compress on the server, modify these lines (the Content-Encoding header must be sent before system() produces any output):

<?php
header('Content-Encoding: gzip');
system("mysqldump dbname -u dbuser -pdbpass | gzip");

and change the script to decode:

curl --header 'accept-encoding: gzip' http://host.domain.com/_util/exportdb.php | \
	gunzip | /usr/local/bin/mysql -u mysqluser dbname

I found this method more compatible than trying to use PHP’s built-in compression.