Installing MySQL on CentOS 6.2 with PaceMaker, MHA and more
When I work with High Availability software, I’m reminded of the maze in the original computer adventure game: “You are in a maze of twisty little passages, all alike…”.
If you search the web for HA programs you will find many well-maintained projects, all related, that refer to each other. The goal of this document is to give you a step-by-step guide to a production-worthy MySQL system. It should provide at least 99.999% access to your data and be able to scale read requests as you grow.
I have chosen these programs and utilities because they are free (as in beer) and each has enterprise support available (for when you make the money to pay for it). If you start with this MySQL platform you will avoid many common problems. Just write your application to read and write data from different servers.
Here is what we’ll be using:
- CentOS (Redhat) 6.2
- MySQL 5.5.xx (Percona) or the MySQL of your choice
- Pacemaker - to monitor MySQL’s health
- MySQL HA (MHA) – to fail over when something happens to the master
- DRBD Management Console – to manage the cluster
- Percona Toolkit – to make life easy
- OpenArk Kit – a set of utilities for MySQL
- xtrabackup for database backups
- mytop and innotop – for real time monitoring
If you want to see all this work check out my Youtube Video at : http://tinyurl.com/7yqj5gz
I’ve worked hard to make these instructions cut and paste. The GREEN stuff gets cut and pasted into the Linux command line. The BLUE is copied into an application (vi, crm, etc.), but the RED needs to be edited to fit your environment (passwords, IPs). I enjoyed the work. I hope you do too.
Getting started
Every good system starts with good hardware. Two things database servers hunger for are disk space and memory. You should supply yours with as much of each as you can afford. My rule of thumb is three times the size of one copy of the data. A production system with a RAID-10 disk array is good, and two or more network ports are recommended.
To test this installation I’m building on a VMware server. If you’d like to know more about my hardware read my “Building a Home VMware server” story.
The Hardware
The operating system I’m using is CentOS 6.2 64 bit. Don’t use the 32bit versions.
How you define disk space is an important part. Choose the type, size and partitions carefully. You don’t want daily activity filling the root file system (/) and taking down the database. The common places this can happen are the MySQL data directory, the /tmp directory and log space. Keep these in separate partitions. I’ve found the OS system logs in /var/log take care of themselves. MySQL logs should be kept in the MySQL /data directory.
My test hardware (VMware ESXi 4) has four SAS hard disks. For each MySQL server (db1 and db2) I create four (4) 15Gig virtual hard disks, one on each physical SAS disk. I split each virtual disk into four partitions. The /boot partition is RAID-1 and the / (root), /data and /tmp partitions are RAID-5. The sizes of these partitions depend on your needs, but /boot needs to be about 1G and / needs to be at least 12G.
This table shows my four virtual disks and how they are RAIDed and partitioned.

|              | md0 – /boot | md1 – /  | md2 – /tmp | md3 – /data |
|--------------|-------------|----------|------------|-------------|
| Disk 0 – 15G | 256M        | 4G       | 512M       | 11G         |
| Disk 1 – 15G | 256M        | 4G       | 512M       | 11G         |
| Disk 2 – 15G | 256M        | 4G       | 512M       | 11G         |
| Disk 3 – 15G | 256M        | Checksum | 512M       | Checksum    |
| Total        | 1G          | 12G      | 2G         | 33G         |
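The totals follow from the RAID arithmetic: a RAID-5 array across n member partitions gives (n − 1) × partition-size of usable space, which is where the 12G and 33G figures come from. A quick sketch of that arithmetic (the helper name is mine, not part of any tool):

```shell
# usable space of a RAID-5 array: (disks - 1) * size of each member partition
raid5_usable() {
    echo $(( ($1 - 1) * $2 ))
}

raid5_usable 4 4    # md1 (/):     3 x 4G  = 12G
raid5_usable 4 11   # md3 (/data): 3 x 11G = 33G
```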
I created a KickStart file for this install to make it simpler. http://www.mysqlfanboy.com/mysql.ks
I left the /data partition out of the kickstart because of a bug. I then build it by hand with this command.
mdadm -Cv /dev/md3 -l5 -n4 /dev/sda6 /dev/sdb6 /dev/sdc6 /dev/sdd6
Install the OS
Start with a minimum installation so you’ll have as few applications installed as possible. No desktop or server applications are needed.
After the install you should update the installed packages and include any packages you know you will want or need.
yum -y update
yum -y install openssh-clients rsync wget perl-DBI perl-TermReadKey
Remove any supplied MySQL.
Even with a minimum install there is a little clean up. I remove packages that are un-needed. For better security you should remove anything that connects to remote systems, like bluetooth and printing.
rpm -e mysql
rpm -e mysql-libs --nodeps
Security
For the install process I turn off the firewall. In secure environments I recommend you leave it in place and deal with restrictions as they come up.
service iptables stop
chkconfig iptables off
service ip6tables stop
chkconfig ip6tables off
Because we are moving the MySQL data directory, you will need to disable SELinux or update it. To disable it, edit /etc/selinux/config and change the SELINUX line to SELINUX=disabled. Then, so you don’t have to reboot Linux for this to take effect, just write a 0 to the SELinux control file.
vi /etc/selinux/config
SELINUX=disabled
echo 0 >/selinux/enforce
If you don’t want to disable SELinux, you can update it for the new data directory instead. You will need to have the SELinux tools installed.
yum -y install policycoreutils-python
semanage fcontext -a -t mysqld_db_t "/data/mysql(/.*)?"
restorecon -Rv /data/mysql
Swap System
Serving queries from memory is much better than feeding them from disk. MySQL works hard to store data in a way that improves disk access. Feeding data from swapped-out memory is a bad thing: you not only go through extra system code, the data is also not stored in a way optimized for the database.
Turning swap off altogether is a bad idea. Some system utilities and applications expect there to be some swap available. Turning down how much swap is used is the way to go.
Edit the system control file and turn ‘swappiness’ down to zero.
vi /etc/sysctl.conf
and add the line
vm.swappiness=0
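The edit to /etc/sysctl.conf only takes effect at boot. To apply it to the running kernel and confirm it took, something like this works (the sysctl call needs root, the read does not):

```shell
# apply the new value to the running kernel (needs root):
# sysctl vm.swappiness=0
# confirm the value the kernel is actually using
cat /proc/sys/vm/swappiness
```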
Syncing Time
Time sync is very important to maintaining accurate data. You may want to edit the /etc/ntp.conf file to point to your primary NTP time server. CentOS and Redhat provide time servers for your use. I recommend using pool.ntp.org.
yum -y install ntp
chkconfig ntpd on
ntpdate 0.pool.ntp.org
service ntpd start
Setup SSH
Now is a good time to make sure DNS works on all servers and each server knows its name.
MHA needs ssh access to each server. You need to create ssh keys and copy them to the MySQL servers.
Skip this if you already have ssh keys installed.
ssh-keygen -t dsa -f ~/.ssh/id_dsa -N ""
cp ~/.ssh/id_dsa.pub ~/.ssh/authorized_keys
scp -r ~/.ssh db2:.
Network configuration
You can edit all the network settings with the network system configuration utility.
system-config-network
You also need to ensure the hosts can ALWAYS resolve each other.
vi /etc/hosts
192.168.2.160 db.grennan.com db
192.168.2.161 db1.grennan.com db1
192.168.2.162 db2.grennan.com db2
192.168.2.163 db3.grennan.com db3
scp /etc/hosts db2:/etc
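Before rebooting, it’s worth verifying that every name actually resolves from every box. A small helper like this (the function is my own, not part of any package) does it with getent, which consults /etc/hosts the same way the resolver does:

```shell
# check that each name resolves; print FAIL for any that don't
check_hosts() {
    local rc=0 h
    for h in "$@"; do
        if getent hosts "$h" > /dev/null; then
            echo "$h OK"
        else
            echo "$h FAIL"
            rc=1
        fi
    done
    return $rc
}

check_hosts localhost
```

On each server you would run it against the cluster names: `check_hosts db db1 db2 db3`.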
Now is a good time to reboot each system and check the install.
init 6
MySQL Setup
Install the MySQL of your choice. I’m installing Percona’s version 5.5.
rpm -e mysql-libs-5.1.52 --nodeps
rpm -Uhv http://www.percona.com/downloads/percona-release/percona-release-0.0-1.x86_64.rpm
yum -y install Percona-Server-shared-compat
yum -y install percona-toolkit
yum -y install Percona-Server-server-55.x86_64
yum -y install Percona-Server-client-55
The default location for MySQL data is /var/lib/mysql. MySQL benefits from being in its own partition. I’ll be putting the MySQL data in /data/mysql.
service mysql stop
mkdir /data
mkdir /data/mysql
cp -prv /var/lib/mysql/* /data/mysql
mv /var/lib/mysql /var/lib/mysql-empty
ln -s /data/mysql /var/lib/mysql
chown -R mysql.mysql /data/mysql
Configure MySQL
You need to change the server-id number on each server. The minimum settings you will need are:
vi /etc/my.cnf
[mysqld]
log-bin=mysql-bin
server-id=1 # Each system needs a unique id number
innodb_flush_log_at_trx_commit=2
sync_binlog=1
# relay_log_purge=0 # uncomment on slaves
# read_only=1 # uncomment on slaves
Percona has a service to help you configure MySQL for your hardware. MySQL Configuration Wizard
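A duplicate server-id silently breaks replication, so it’s worth double-checking the value each box ends up with. This little sketch (the helper name is mine) pulls server-id out of a my.cnf-style file, ignoring any trailing comment:

```shell
# print the server-id value from a my.cnf style file
server_id() {
    awk -F= '/^[ \t]*server-id[ \t]*=/ { sub(/#.*/, "", $2); gsub(/[ \t]/, "", $2); print $2 }' "$1"
}

# usage on each server:
#   server_id /etc/my.cnf
# or ask the running server directly:
#   mysql -e "SELECT @@server_id"
```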
Now we need to add a couple of users before we install the replicator. We also need to set a password for the root user. The root password is blank, so just hit return.
If you have skip-name-resolve set you will need to substitute the host names for IP addresses.
service mysql start
mysql -h localhost -u root -p
The next set of commands is entered into the mysql client. You should have a ‘mysql >’ prompt.
DROP USER ''@'localhost';
DROP USER ''@'db1.grennan.com';
GRANT ALL ON *.* TO 'root'@'192.168.2.%' IDENTIFIED BY 'P@ssw0rd' WITH GRANT OPTION;
GRANT ALL ON *.* TO 'root'@'localhost' IDENTIFIED BY 'P@ssw0rd' WITH GRANT OPTION;
CREATE USER 'repl'@'192.168.2.%' IDENTIFIED BY 'RepP@ssw0rd';
GRANT REPLICATION SLAVE ON *.* TO 'repl'@'192.168.2.%';
GRANT ALL ON *.* TO 'hauser'@'db1.grennan.com' IDENTIFIED BY 'P@ssw0rd';
GRANT ALL ON *.* TO 'hauser'@'db2.grennan.com' IDENTIFIED BY 'P@ssw0rd';
FLUSH PRIVILEGES;
QUIT;
Now, stop MySQL on all servers and copy all the MySQL data files to the slaves.
From the master (db1) at a system prompt ‘#’ :
service mysql stop
ssh db2 'service mysql stop'
rsync -rog --delete /data/mysql root@db2:/data
Rather than type the password for MySQL each time we connect, we can set up a user .my.cnf to prevent this. This should connect you to the master (RW) server from each host.
vi ~/.my.cnf
[client]
user=root
password=P@ssw0rd
host=localhost
socket=/data/mysql/mysql.sock
scp .my.cnf db2:.
Start Replication
On each system start MySQL and connect the two slaves to the master.
On the master we need to know the bin-log file and the position. Note the numbers in purple.
service mysql start
mysql -e 'reset master; reset slave;'
mysql -e 'show master status\G'
*************************** 1. row ***************************
File: mysql-bin.000001
Position: 107
Binlog_Do_DB:
Binlog_Ignore_DB:
1 row in set (0.00 sec)
On all slaves, use the master’s bin log position to set up replication. Start the slave and check that the IO and SQL threads are running (Yes).
service mysql start
mysql
stop slave;
CHANGE MASTER TO
  MASTER_HOST='192.168.2.201',
  MASTER_USER='repl',
  MASTER_PASSWORD='RepP@ssw0rd',
  MASTER_LOG_FILE='mysql-bin.000001',
  MASTER_LOG_POS=107;
start slave;
show slave status\G
Slave_IO_Running: Yes
Slave_SQL_Running: Yes
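When scripting this check, grepping the two thread flags out of the status output saves squinting at the full listing. A sketch (the function name is mine) that reads the `show slave status\G` text on stdin and succeeds only when both threads report Yes:

```shell
# exit 0 only if both replication threads report Yes;
# expects the output of: mysql -e 'show slave status\G'
slave_ok() {
    awk '/Slave_IO_Running:/  { io  = $2 }
         /Slave_SQL_Running:/ { sql = $2 }
         END { exit !(io == "Yes" && sql == "Yes") }'
}

# usage: mysql -e 'show slave status\G' | slave_ok && echo "replication healthy"
```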
Installing MHA
Every MySQL server needs to have a copy of the MHA Node and Manager installed. Download the latest version of MHA from the Google Code server.
http://code.google.com/p/mysql-master-ha/downloads/list
There are a number of required Perl modules. I chose to use the Fedora EPEL packages to resolve these.
- DBD::mysql
- Config::Tiny
- Log::Dispatch
- Parallel::ForkManager
- Time::HiRes
Here is my install with the Fedora packages.
wget http://download.fedora.redhat.com/pub/epel/6/x86_64/epel-release-6-5.noarch.rpm
rpm -i epel-release-6-5.noarch.rpm
yum -y install perl-DBD-MySQL
yum -y install perl-Config-Tiny
yum -y install perl-Log-Dispatch
yum -y install perl-Parallel-ForkManager
yum -y install perl-Time-HiRes
wget http://mysql-master-ha.googlecode.com/files/mha4mysql-node-0.53-0.el6.noarch.rpm
rpm -i mha4mysql-node-0.53-0.el6.noarch.rpm
wget http://mysql-master-ha.googlecode.com/files/mha4mysql-manager-0.53-0.el6.noarch.rpm
rpm -i mha4mysql-manager-0.53-0.el6.noarch.rpm
MHA only needs one configuration file on each server. You can name this file anything you want. Change the settings and make sure the directories exist and are writable.
vi /etc/MHA.cnf
Insert this configuration data into the MHA.cnf file.
[server default]
user=root
password=P@ssw0rd
manager_workdir=/var/log/masterha
manager_log=/var/log/masterha/MHA.log
remote_workdir=/var/log/masterha
[server1]
hostname=db1
[server2]
hostname=db2
You should have already installed SSH keys (see above) and created the ‘hauser’ user in MySQL.
masterha_check_ssh is used to check that you have SSH working. Look for the ‘ok.’ after each connect test.
masterha_check_ssh --conf=/etc/MHA.cnf
Thu Dec 1 10:10:52 2011 – [info] Starting SSH connection tests..
Thu Dec 1 10:10:53 2011 – [debug]
Thu Dec 1 10:10:52 2011 – [debug] Connecting via SSH from root@db1(192.168.0.11) to root@db2(192.168.2.12)..
Thu Dec 1 10:10:53 2011 – [debug] ok.
masterha_check_repl is used to check MySQL replication. This is chatty. You are looking for the ‘MySQL Replication Health is OK’ at the end. You should pay attention to any warnings.
masterha_check_repl --conf=/etc/MHA.cnf
—
Thu Dec 1 10:07:22 2011 – [info] Checking slave configurations..
Thu Dec 1 10:07:22 2011 – [warning] read_only=1 is not set on slave db2(192.168.0.11:3306).
Thu Dec 1 10:07:22 2011 – [warning] relay_log_purge=0 is not set on slave db2(192.168.0.12:3306).
—
MySQL Replication Health is OK.
You now have MHA installed. If you configure it wrong, MHA will give some errors that are not helpful and die. Check the system logs (/var/log/messages) and your MHA settings and try again.
We’ll be using Pacemaker and a command like this to fail over to a slave when the master dies.
masterha_master_switch --master_state=dead --dead_master_host=db1 --conf=/etc/MHA.cnf
Putting the H in High Availability (HA)
There are many ways a system can fail, even with MySQL running. How do you know when your master is dead? How do you make the new system ready to accept data? Are your servers even on the same network? I’ve chosen PaceMaker because it is the most complete and flexible.
Install Pacemaker / CoroSync
Neither RedHat nor CentOS supplies PaceMaker packages. RedHat supports their own proprietary clustering suite. CentOS does supply heartbeat. Thankfully the PaceMaker project provides packages in an EPEL repository for Redhat 5.
Note we are installing PaceMaker for RedHat 5, not 6.2. At the time of this writing a version for 6.2 was not available. This creates some dependency problems we have to work around.
wget -O /etc/yum.repos.d/pacemaker.repo http://clusterlabs.org/rpm/epel-5/clusterlabs.repo
yum -y install libtool-ltdl
ln -s /usr/lib64/libltdl.so.7.2.1 /usr/lib64/libltdl.so.3
yum -y install net-snmp
wget http://www.clusterlabs.org/rpm/epel-5/x86_64/cluster-glue-1.0.6-1.6.el5.x86_64.rpm
wget http://www.clusterlabs.org/rpm/epel-5/x86_64/cluster-glue-libs-1.0.6-1.6.el5.x86_64.rpm
rpm -i cluster-glue-1.0.6-1.6.el5.x86_64.rpm --nodeps
rpm -i cluster-glue-libs-1.0.6-1.6.el5.x86_64.rpm --nodeps
yum -y install pacemaker corosync heartbeat
Again, make sure all the name resolution is in place on both DB1 and DB2. Corosync and Pacemaker provide the group communications and resource management respectively. The nodes must be able to communicate with each other.
Configure CoroSync
On both DB1 and DB2.
corosync-keygen
chown root:root /etc/corosync/authkey
chmod 400 /etc/corosync/authkey
vi /etc/corosync/corosync.conf
totem {
version: 2
token: 5000 # How long before declaring a token lost (ms)
token_retransmits_before_loss_const: 20 # How many token retransmits
# before forming a new configuration
join: 1000 # How long to wait for join messages in the membership protocol (ms)
consensus: 7500 # How long to wait for consensus to be achieved before
# starting a new round of membership configuration (ms)
vsftype: none # Turn off the virtual synchrony filter
max_messages: 20 # Number of messages that may be sent by
#one processor on receipt of the token
secauth: off # Disable encryption
threads: 0 # How many threads to use for encryption/decryption
clear_node_high_bit: yes # Limit generated nodeids to 31-bits (positive signed integers)
# Optionally assign a fixed node id (integer)
# nodeid: 1234
interface {
ringnumber: 0
# The following three values need to be set based on your environment
bindnetaddr: 192.168.2.201
mcastaddr: 239.255.42.0
mcastport: 5405
}
}
logging {
fileline: off
to_syslog: yes
to_stderr: no
syslog_facility: daemon
debug: on
timestamp: on
}
amf {
mode: disabled
}
vi /etc/corosync/service.d/pcmk
service {
# Load the Pacemaker Cluster Resource Manager
name: pacemaker
ver: 0
}
If you didn’t edit this on both servers, copy it to the second server.
rsync -rop /etc/corosync db2:/etc
Now start corosync on both servers.
chkconfig corosync on
/etc/init.d/corosync start
Check CoroSync health. Look for ‘no faults’ and make sure both nodes show up as members.
corosync-cfgtool -s
Printing ring status.
Local node ID -922572608
RING ID 0
id = 192.168.2.201
status = ring 0 active with no faults
corosync-objctl | grep members
totem.join=1000 # How long to wait for join messages in the membership protocol (ms)
runtime.totem.pg.mrp.srp.members.-922572608.ip=r(0) ip(192.168.2.201)
runtime.totem.pg.mrp.srp.members.-922572608.join_count=1
runtime.totem.pg.mrp.srp.members.-922572608.status=joined
runtime.totem.pg.mrp.srp.members.-905795392.ip=r(0) ip(192.168.2.202)
runtime.totem.pg.mrp.srp.members.-905795392.join_count=1
runtime.totem.pg.mrp.srp.members.-905795392.status=joined
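On a healthy two-node cluster that output should show exactly two joined members. You can count them mechanically; this one-liner (a helper of my own) just counts the status=joined lines:

```shell
# count joined members from `corosync-objctl` output on stdin
joined_members() {
    grep -c '\.status=joined$'
}

# usage: corosync-objctl | joined_members   # expect 2 on this cluster
```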
Patch for Pacemaker
Yves Trudeau has created a newer version of the mysql resource agent for Pacemaker. He keeps it at
https://github.com/y-trudeau/. Just download it and copy it into the ocf heartbeat resource directory.
wget https://raw.github.com/y-trudeau/resource-agents/master/heartbeat/mysql
cp mysql /usr/lib/ocf/resource.d/heartbeat/mysql
Configure Pacemaker
Fencing is normally used, is enabled by default, and should be! We’ll start by turning Shoot The Other Node In The Head (stonith) off. TURN THIS BACK ON BEFORE YOU GO INTO PRODUCTION! You disable it with:
crm configure property stonith-enabled="false"
If your cluster consists of just two nodes, switch the quorum feature off with:
crm configure property no-quorum-policy=ignore
This is a bit of magic; I’ll explain after we’re done. Set the node attribute below, then enter the crm configuration utility.
crm configure node db2.grennan.com attributes IP="192.168.2.202"
crm configure
Now you’re in the CRM configuration utility. Type in the blue part of this text and edit the red parts to fit your configuration.
crm(live)configure# primitive failover-ip ocf:heartbeat:IPaddr \
  params ip="192.168.2.200" \
  operations $id="failover-ip-operations" \
  op monitor start-delay="0" interval="2"
crm(live)configure# primitive p_mysql ocf:heartbeat:mysql \
  params pid="/var/run/mysqld/mysqld.pid" socket="/data/mysql/mysql.sock" \
  test_passwd="P@ssw0rd" enable_creation="false" replication_user="root" \
  replication_passwd="P@ssw0rd" MHASupport="true" \
  operations $id="p_mysql-operations" \
  op monitor interval="20" timeout="30" depth="0" \
  op monitor interval="10" role="Master" timeout="30" depth="0" \
  op monitor interval="30" role="Slave" timeout="30" depth="0"
crm(live)configure# ms ms_mysql p_mysql meta clone-max="2" notify="true"
crm(live)configure# colocation col_ms_mysql_failover-ip inf: failover-ip ms_mysql:Master
crm(live)configure# order ord_ms_mysql_failover-ip inf: ms_mysql:promote failover-ip:start
crm(live)configure# commit
crm(live)configure# quit
Monitor Pacemaker
Back at the system prompt you can monitor the health of Pacemaker from the command line.
crm_mon -Arf
============
Last updated: Fri Feb 10 14:28:53 2012
Last change: Fri Feb 10 14:26:15 2012 via crm_attribute on db1.grennan.com
Stack: openais
Current DC: db1.grennan.com – partition with quorum
Version: 1.1.6-3.el6-a02c0f19a00c1eb2527ad38f146ebc0834814558
2 Nodes configured, 2 expected votes
3 Resources configured.
============
Online: [ db1.grennan.com db2.grennan.com ]
Full list of resources:
failover-ip (ocf::heartbeat:IPaddr): Started db1.grennan.com
Master/Slave Set: ms_mysql [p_mysql]
Masters: [ db1.grennan.com ]
Slaves: [ db2.grennan.com ]
Node Attributes:
* Node db1.grennan.com:
+ IP : 192.168.2.201
+ master-p_mysql:0 : 3601
+ readerOK : 1
+ writerOK : 1
* Node db2.grennan.com:
+ IP : 192.168.2.202
+ master-p_mysql:1 : 1
+ readerOK : 0
+ writerOK : 0
Migration summary:
* Node db1.grennan.com:
* Node db2.grennan.com:
Install DRBD Management Console
The DRBD Management Console is a Java application that eases the burden of managing your DRBD and Pacemaker/Corosync or Heartbeat based cluster systems.
Download this Jar file to your workstation.
When you run it you will need to add the server logins and the cluster. You don’t need to edit anything else. Skip any installations or configurations.
http://sourceforge.net/projects/lcmc/files/LCMC-1.2.3.jar/download
[I'll post a video of configuring the Management Console here.]
But wait there is MORE
Percona Tool kit
You installed the Percona Toolkit when you installed MySQL. :-) Percona Toolkit is a collection of advanced command-line tools used by Percona (http://www.percona.com/) support staff to perform a variety of MySQL and system tasks that are too difficult or complex to perform manually. As a DBA you will find them very useful. Here is a small sample:
- pt-duplicate-key-checker – Find duplicate indexes and foreign keys on MySQL tables.
- pt-heartbeat – Monitor MySQL replication delay.
- pt-index-usage – Read queries from a log and analyze how they use indexes.
- pt-query-advisor – Analyze queries and advise on possible problems.
- pt-query-digest – Analyze query execution logs and generate a query report, filter, replay, or transform queries for MySQL, PostgreSQL, memcached, and more.
- pt-query-profiler – Execute SQL statements and print statistics, or measure activity caused by other processes.
- pt-show-grants – Print MySQL grants so you can effectively replicate, compare and version-control them.
- pt-table-checksum – Perform an online replication consistency check, or checksum MySQL tables efficiently on one or many servers.
- pt-table-sync – Synchronize MySQL table data efficiently.
- pt-visual-explain – Format EXPLAIN output as a tree.
openark kit
Shlomi Noach maintains another great set of utilities.
- oak-apply-ri: apply referential integrity on two columns with parent-child relationship.
- oak-block-account: block or release MySQL users accounts, disabling them or enabling them to login.
- oak-chunk-update: perform long, non-blocking UPDATE/DELETE operation in auto managed small chunks.
- oak-get-slave-lag: print slave replication lag and terminate with respective exit code.
- oak-hook-general-log: hook up and filter general log entries based on entry type or execution plan criteria.
- oak-kill-slow-queries: terminate long running queries.
- oak-modify-charset: change the character set (and collation) of a textual column.
- oak-online-alter-table: perform a non-blocking ALTER TABLE operation.
- oak-prepare-shutdown: make for a fast and safe MySQL shutdown.
- oak-purge-master-logs: purge master logs, depending on the state of replicating slaves.
- oak-repeat-query: repeat query execution until some condition holds.
- oak-security-audit: audit accounts, passwords, privileges and other security settings.
- oak-show-limits: show AUTO_INCREMENT “free space”.
- oak-show-replication-status: show how far behind are replicating slaves on a given master.
You can find them here: http://code.openark.org/forge/openark-kit
MyTOP
Mytop is a console-based (non-GUI) tool for monitoring the threads and overall performance of MySQL. The original development was done by Jeremy D. Zawodny.
I would be amiss if I didn’t say something about innotop. Some consider it a successor to mytop. It is, if you are using InnoDB. I still find mytop useful. More than half of my tables are MyISAM.
wget http://www.mysqlfanboy.com/mytop/mytop-1.9.tar.gz
tar zxf mytop-1.9.tar.gz
cd mytop-1.9
perl Makefile.PL
make install
wget https://innotop.googlecode.com/files/innotop-1.8.0.tar.gz
tar zxf innotop-1.8.0.tar.gz
cd innotop-1.8.0
perl Makefile.PL
make install
Backup
Percona XtraBackup is a hot backup utility that doesn’t lock your database during the backup!
It’s best to run this from one of your slaves. I often create a local copy and then rsync the files off the backup system to a remote (off site) server using a cron script.
yum -y install xtrabackup
innobackupex --user=root --password='P@ssw0rd' /tmp/Backup
Here is an example script.
export HOST=`/bin/hostname -a`
mkdir /root/Backup
/usr/bin/innobackupex --user=root --password='P@ssw0rd' /root/Backup
/usr/bin/xtrabackup --prepare --target-dir=/root/Backup
/bin/find /root/Backup/* -mtime +1 -exec rm {} -Rf \;
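The find line is the retention policy: anything older than a day gets removed. Wrapped as a function (the name and arguments are mine, for illustration) it’s easier to test and reuse with a different backup directory or retention window:

```shell
# remove everything under directory $1 that is older than $2 days
prune_backups() {
    find "$1" -mindepth 1 -mtime +"$2" -exec rm -rf {} +
}

# usage: prune_backups /root/Backup 1
```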
Last word
If you’ve made it this far, now is a good time to restart everything and do some testing.
init 6
Baron Schwartz wrote:
“I would be amiss if I didn’t say something about innotop. Some consider this a predecessors to mytop. It is of you are using InnoDB. I still find mytop useful. More than half of my tables are MyISAM.”
I still don’t understand how you think it’s a predecessor :) It was created years afterwards, not before, and has all of the same MyISAM functionality, plus about 20x more functionality for other things. Well, to each his own :)
Link | February 14th, 2012 at 5:02 pm
admin wrote:
Yes. Wow, I read and re-read and still I miss these things.
With the work I’ve been doing on mytop I think they are more separate tools. I’ve been working to make mytop more focused on myisam and replication. We use a lot of myisam in my office.
Link | February 15th, 2012 at 9:39 am
Sov1et wrote:
Hi.
I don’t see where is the link between pacemaker and masterha_master_switch ?
Link | March 2nd, 2012 at 6:46 am
admin wrote:
It is written into (or being written into) the Pacemaker handler code (patch) written by Yves Trudeau. (search for MHA) There is a check box in the handler details for MHA support but it doesn’t seem to call any code at this time.
https://raw.github.com/y-trudeau/resource-agents/master/heartbeat/mysql
I also see Yves has moved about a month ago to:
https://github.com/ClusterLabs/resource-agents
and I don’t see this support there. Maybe the work will fall to me.
Link | March 13th, 2012 at 10:44 am
Heschel.Special wrote:
Great post for looking into MHA. It looks like Yves ripped out the code as stated above for the ocf agent and was wondering if you found an alternative for your config. Obviously were things stop on the above example. I was also wondering if you could post you CIB file for corosync. Was wondering how you handled split-brain scenarios.
Link | February 11th, 2013 at 2:22 pm