I'm going to write it step by step, with almost no explanations, just steps and tips:
First of all, here is what I've tried to create:
- Active/Passive MySQL cluster
- Two physical nodes
- Non-global zone on each node
- MySQL is running in the zone
- Private interconnect – Two interfaces on each server connected through two Ethernet switches
- One central storage, with one LUN for MySQL data and one LUN for quorum
- Hosts names are: host1 & host2
- Zones names are: zone1 & zone2
- IPMP is used for public interfaces on physical nodes
- MPxIO is used for HBA multipathing
- Using ZFS as MySQL data filesystem
Resources I've used:
- http://docs.sun.com/app/docs/doc/820-2555
- http://docs.sun.com/app/docs/doc/819-2993
- http://wikis.sun.com/display/SunCluster/Deployment+Example+-+Installing+MySQL+in+a+Non-Global+Zone
- http://www.sun.com/software/solaris/howtoguides/twonodecluster.jsp
So, here we go:
1. Preparations:
a. On the Ethernet switches - Disable spanning tree on the private interconnect ports
b. No need to configure IP's for the private interconnect interfaces
c. It is encouraged to create two separate VLANs for the private interconnect interfaces
d. Default interconnect network configuration is: 172.16.0.0 with netmask 255.255.248.0, the default can be changed when configuring the cluster
e. If using Jumbo frames for the public network, also configure Jumbo frames for the private interconnect
f. Disable Solaris power management
* Edit /etc/power.conf file and change the line autopm default to autopm disable
* Run: pmconfig
g. The /globaldevices file system is a local filesystem; it can be stored on local disks, with a size of 512MB
h. Packages needed for the cluster are: SUNWrsm, SUNWrsmo
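Tip: you can quickly check that these packages are present on both nodes with pkginfo:
pkginfo SUNWrsm SUNWrsmo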
2. Make sure that RPC is open for public network:
# svccfg
svc:> select network/rpc/bind
svc:/network/rpc/bind> setprop config/local_only=false
svc:/network/rpc/bind> quit
# svcadm refresh network/rpc/bind:default
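To confirm the change took effect, query the property with svcprop (it should print false):
# svcprop -p config/local_only network/rpc/bind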
If TCP Wrappers are used, also add rpcbind: ALL to the /etc/hosts.allow file on both servers. If you don't wish to allow RPC for everyone, you can restrict it to both servers' public and private IPs.
Tip: run tail -f on the messages file of both servers in separate sessions; this will help you find problems in no time
3. Create the zones on both servers
a. Define public IP for both zones
b. Configure /etc/hosts to include the public IP name
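For reference, a minimal sketch of creating zone1 on host1 (the zone path, NIC name e1000g0 and IP address below are placeholders, adjust them to your environment; repeat on host2 for zone2):
# zonecfg -z zone1
zonecfg:zone1> create
zonecfg:zone1> set zonepath=/zones/zone1
zonecfg:zone1> set autoboot=true
zonecfg:zone1> add net
zonecfg:zone1:net> set physical=e1000g0
zonecfg:zone1:net> set address=192.168.10.11
zonecfg:zone1:net> end
zonecfg:zone1> commit
zonecfg:zone1> exit
# zoneadm -z zone1 install
# zoneadm -z zone1 boot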
4. Install the cluster on host1 and host2 (can be done simultaneously)
a. Mount CD
b. Define DISPLAY environment variable to your X machine
c. Run ./installer
i. Install Cluster Core, Cluster Manager (not required) and MySQL agent
ii. Select to configure cluster after install
5. Change root's .profile (or whatever you are using) to include /usr/sbin:/usr/cluster/bin in the PATH environment variable and /usr/cluster/man in the MANPATH environment variable
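For example, in root's .profile (ksh/sh syntax):
PATH=$PATH:/usr/sbin:/usr/cluster/bin
MANPATH=$MANPATH:/usr/cluster/man
export PATH MANPATH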
6. Create SSH key based authentication between the two physical servers for the root account (use RSA keys)
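Something like this should do it (run on host1, then repeat from host2 in the other direction; note that root logins must be permitted in /etc/ssh/sshd_config, PermitRootLogin yes):
# ssh-keygen -t rsa
# cat ~/.ssh/id_rsa.pub | ssh root@host2 'mkdir -p ~/.ssh; cat >> ~/.ssh/authorized_keys'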
7. After the installation has completed, you need to run scinstall to configure the cluster
a. If TCP wrapper is enabled, add this line to /etc/hosts.allow:
sccheckd: localhost
This is required for the sccheck utility to work properly.
b. Don't use the automatic quorum configuration; we will add the quorum device later
c. NOTE: both servers will be rebooted at the end of scinstall, without confirmation!!!
d. If you want a different subnet/IP range for the private interconnect, you can change it in this step
8. After both servers rebooted, check the cluster status by running:
clnode status
9. Adding a quorum disk device
a. Use format to label the device
b. The cluster uses DID pseudo names for devices; to find a device's DID name, run:
cldevice list -v
c. Run clsetup and add a disk quorum device
d. If the quorum device was added successfully, answer yes to reset installmode
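If you prefer the command line over clsetup, the quorum device can also be added directly; d4 below is just an example, use the DID name you found with cldevice list:
clquorum add d4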
10. Check that the quorum is defined by running:
clquorum list
11. Check that the cluster installmode is disabled:
cluster show -t global | grep installmode
12. Enable automatic node reboot if all monitored disk paths fail:
clnode set -p reboot_on_path_failure=enabled +
a. Check that it has changed:
clnode show
13. Registering the resource types we need (SUNW.HAStoragePlus for the storage, and SUNW.gds, the generic data service used by the MySQL agent):
clresourcetype register SUNW.gds SUNW.HAStoragePlus
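You can verify that both resource types were registered with:
clresourcetype list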
14. Create a new Resource Group that includes both zones
clresourcegroup create -n host1:zone1,host2:zone2 RG-MYSQL
15. Check that the resource group was added:
clresourcegroup status
16. Create the ZFS pool – zMysql
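For example (the device name is a placeholder for your MySQL data LUN's MPxIO name; I'm assuming the pool is mounted on /data, which is the path used for the MySQL data later in this post):
zpool create -m /data zMysql c4t<WWN-of-the-LUN>d0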
17. Before we can add the zMysql pool as a cluster resource, we need to export it, because the cluster will re-import the pool devices from their DID paths:
zpool export zMysql
18. Add the zMysql as a resource in RG-MYSQL:
clresource create -g RG-MYSQL -t SUNW.HAStoragePlus -p AffinityOn=TRUE -p \
Zpools=zMysql -p ZpoolsSearchDir=/dev/did/dsk RS-MYSQL-HAS
19. Check that zMysql was added as a resource to RG-MYSQL:
clresource list
20. Now we can import it back
zpool import zMysql
21. Add a new Virtual IP resource to our RG-MYSQL resource group:
a. Add the VIP to /etc/hosts on both physical servers and zones (we used vip-mysql)
b. Add the VIP to the resource group:
clreslogicalhostname create -g RG-MYSQL -h vip-mysql -N \
private@host1,private@host2 RS-VIP
c. private is our IPMP group name on each server; if you are not using IPMP, replace it with the NIC name, for example e1000g1
d. Check that the VIP was added as a resource:
clresource list
22. You can move the resource group to the other node by running:
clresourcegroup switch -n host1:zone1 RG-MYSQL
23. You can move the resource group back to its preferred node by running:
clresourcegroup remaster RG-MYSQL
24. Install MySQL on zone1 and create the databases on zMysql filesystem
25. Install MySQL software on zone2
26. Create the mysql account and group on both zones with the same uid and gid
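For example (run in both zone1 and zone2; the uid/gid of 1000 is just an arbitrary example, any value will do as long as it is identical on both zones):
groupadd -g 1000 mysql
useradd -u 1000 -g mysql mysql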
27. Start the MySQL instance on zone1
28. Copy my.cnf to the zMysql filesystem (/data)
29. Edit the my.cnf file and add this line to the [mysqld] section, set to the IP address that vip-mysql resolves to:
bind-address =
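As an example only (the address is whatever vip-mysql resolves to in your /etc/hosts, the one below is made up; the socket and datadir lines are my assumption, matching the paths used later in this post):
[mysqld]
bind-address = 192.168.10.50
socket = /tmp/vip-mysql.sock
datadir = /data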
30. Create root@vip-mysql account and grant permissions:
# mysql
mysql> GRANT ALL ON *.* TO 'root'@'vip-mysql' IDENTIFIED BY 'mypassword';
mysql> UPDATE mysql.user SET grant_priv='Y' WHERE user='root' AND host='vip-mysql';
mysql> FLUSH PRIVILEGES;
31. Now, the next step is to create the cluster database in our MySQL instance; this database is used by the cluster for its MySQL checks
a. cp /opt/SUNWscmys/util/mysql_config /data/
b. chmod 400 /data/mysql_config
c. Edit the file /data/mysql_config to look like this:
MYSQL_BASE=/opt/mysql
MYSQL_USER=root
MYSQL_PASSWD=mypassword
MYSQL_HOST=vip-mysql
FMUSER=fmuser
FMPASS=fmuserNewPassword
MYSQL_SOCK=/tmp/vip-mysql.sock
MYSQL_NIC_HOSTNAME=vip-mysql
MYSQL_DATADIR=/data
d. Note: Don't use $ signs in the passwords; the cluster scripts will fail (I tried escaping it as \$, but that still failed for the fmuser account)
e. Select an appropriate password for fmuser account
f. Create the database:
ksh /opt/SUNWscmys/util/mysql_register -f /data/mysql_config
g. If the script fails, look at /tmp for more information: cat /tmp/.mysql.error
32. The next step is to register MySQL as a resource in the cluster; for that we need to edit one more file:
a. cp /opt/SUNWscmys/util/ha_mysql_config /data/
b. chmod 400 /data/ha_mysql_config
c. Edit the file /data/ha_mysql_config to look like this:
RS=RS-MYSQL-DB
RG=RG-MYSQL
#PORT=3306
LH=RS-VIP
HAS_RS=RS-MYSQL-HAS
# local zone specific options
ZONE=
ZONE_BT=
PROJECT=
# mysql specifications
BASEDIR=/opt/mysql
DATADIR=/data
MYSQLUSER=mysql
MYSQLHOST=vip-mysql
FMUSER=fmuser
FMPASS=fmuserNewPassword
LOGDIR=/data
CHECK=YES
d. Register MySQL as a resource:
ksh /opt/SUNWscmys/util/ha_mysql_register -f /data/ha_mysql_config
e. If MySQL is registered as a Solaris Service, disable it:
svcadm disable mysql
f. Enable RS-MYSQL-DB resource:
clresource enable RS-MYSQL-DB
g. The cluster MySQL resource writes its output to /tmp; check the results/errors there
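At this point a quick status check should show everything online:
clresourcegroup status RG-MYSQL
clresource status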