ClusterControl Module for Puppet
July 7, 2014
By Severalnines
If you are automating your infrastructure using Puppet, then this blog is for you. We are glad to announce the availability of a Puppet module for ClusterControl. For those using Chef, we already published Chef cookbooks for Galera Cluster and ClusterControl some time back.
The initial release of the ClusterControl module is available on Puppet Forge; installing the module is as easy as:
$ puppet module install severalnines-clustercontrol
If you haven’t changed the default module path, this module will be installed under /etc/puppet/modules/clustercontrol on your Puppet master host (you can verify this as shown below). ClusterControl supports the following database clusters:
- Galera Cluster (including Percona XtraDB Cluster)
- MySQL Cluster (NDB)
- MySQL Replication
- MongoDB Replica Set
- MongoDB Sharded Cluster
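To confirm where the module landed, you can print the master’s module path and list the installed modules:
$ puppet config print modulepath
$ puppet module list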
This module makes use of the Severalnines repository for yum and apt packages. This repository hosts the latest stable release of ClusterControl and all of its components.
ClusterControl and all of its components require post-installation procedures, such as setting up MySQL, granting users, and configuring Apache. This module automates most of these.
If you look up the Severalnines package repository, you will find packages for ClusterControl and all of its components. Installation instructions for the repository are available at http://repo.severalnines.com.
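For illustration, a yum repository definition would look something like the sketch below; treat the exact URLs as assumptions and follow the instructions on http://repo.severalnines.com for your platform (apt instructions are also provided there):
# /etc/yum.repos.d/s9s-repo.repo -- illustrative sketch, verify against repo.severalnines.com
[s9s-repo]
name = Severalnines Repository
baseurl = http://repo.severalnines.com/rpm/os/x86_64
enabled = 1
gpgkey = http://repo.severalnines.com/severalnines-repos.asc
gpgcheck = 1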
We’ll now show you how to deploy ClusterControl on top of an existing database cluster using the ClusterControl Puppet module.
This module requires a number of criteria to be met on the managed hosts.
** Please review the module’s requirements, available on its Puppet Forge page, for more details.
Now we should have the Puppet module installed. The first thing that we need to do is to generate an SSH key. ClusterControl requires passwordless SSH to be properly configured using an SSH key. It also needs an API token. The following are two pre-deployment steps that you need to complete:
1. Generate an SSH key:
$ bash /etc/puppet/modules/clustercontrol/files/s9s_helper.sh --generate-key
** This step is compulsory. The above command will generate an RSA key (if one does not already exist) to be used by the module, and the key must exist in the Puppet master module's directory before the deployment begins.
2. Generate an API token:
$ bash /etc/puppet/modules/clustercontrol/files/s9s_helper.sh --generate-token
b7e515255db703c659677a66c4a17952515dbaf5
** Copy the generated token and specify it in the node definition under api_token .
Both steps described above need to be executed only once (unless you intentionally want to regenerate them). Now, we can configure the database nodes to be managed, as per the example architectures below:
As illustrated in the above figure, we have a three-node Percona XtraDB Cluster running on CentOS 6.5 64bit. The SSH user is root and the MySQL datadir is the default /var/lib/mysql .
Therefore, the node definition in Puppet master would be as simple as:
# ClusterControl host
node "clustercontrol.local" {
  class { 'clustercontrol':
    is_controller          => true,
    email_address          => 'admin@localhost.xyz',
    mysql_server_addresses => '192.168.1.11,192.168.1.12,192.168.1.13',
    api_token              => 'b7e515255db703c659677a66c4a17952515dbaf5'
  }
}

# Monitored DB hosts
node "galera1.local", "galera2.local", "galera3.local" {
  class { 'clustercontrol':
    is_controller       => false,
    mysql_root_password => 'r00tpassword',
    clustercontrol_host => '192.168.1.10'
  }
}
Once done, you can either instruct the agent to pull the configuration from the Puppet master and apply it immediately:
$ puppet agent -t
Or, wait for the Puppet agent service to apply the catalog automatically (depending on the runinterval value; the default is 30 minutes). Once completed, open the ClusterControl UI page at http://[ClusterControl IP address]/clustercontrol and log in using the specified email address and the default password ‘admin’.
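The pull interval is controlled by the runinterval setting in the agent’s puppet.conf; the file path below assumes a stock Puppet 3 layout:
# /etc/puppet/puppet.conf on the agent node
[agent]
runinterval = 1800    # 30 minutes, in seconds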
You should see something similar to below:
Take note that this module installs the RSA key at $HOME/.ssh/id_rsa_s9s . Details of this are in the readme on the Puppet Forge page.
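Since the key ends up at $HOME/.ssh/id_rsa_s9s on the controller, you can manually verify that passwordless SSH works; the root user and IP address below are taken from the earlier Galera example:
$ ssh -i $HOME/.ssh/id_rsa_s9s root@192.168.1.11 "hostname"
** The command should print the node’s hostname without prompting for a password.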
For MySQL Cluster, extra options are needed so that ClusterControl can manage your management and data nodes. You may also need to add the NDB data directory (e.g. /mysql/data ) to the datadir list so ClusterControl knows which partitions to monitor. In the following example, /var/lib/mysql is the MySQL API nodes’ datadir and /mysql/data is the NDB datadir.
The following figure shows our MySQL Cluster architecture running on Debian 7 (Wheezy) 64bit:
The node definition would be:
# ClusterControl host
node "clustercontrol.local" {
  class { 'clustercontrol':
    is_controller          => true,
    email_address          => 'admin@localhost.xyz',
    cluster_type           => 'mysqlcluster',
    mysql_server_addresses => '192.168.1.11,192.168.1.12',
    mgmnode_addresses      => '192.168.1.11,192.168.1.12',
    datanode_addresses     => '192.168.1.13,192.168.1.14',
    datadir                => '/var/lib/mysql,/mysql/data',
    api_token              => 'b7e515255db703c659677a66c4a17952515dbaf5'
  }
}

# Monitored DB hosts
node "mysql1.local", "mysql2.local", "data1.local", "data2.local" {
  class { 'clustercontrol':
    is_controller       => false,
    mysql_root_password => 'dpassword',
    clustercontrol_host => '192.168.1.10'
  }
}
The MySQL Replication node definition is similar to the Galera cluster’s. In the following example, we have a three-node MySQL Replication setup running on RHEL 6.5 64bit on Amazon AWS. The SSH user is ec2-user with passwordless sudo:
The node definition would be:
# ClusterControl host
node "clustercontrol.local" {
  class { 'clustercontrol':
    is_controller          => true,
    email_address          => 'admin@localhost.xyz',
    ssh_user               => 'ec2-user',
    cluster_type           => 'replication',
    mysql_server_addresses => 'mysql-master.aws,mysql-slave1.aws,mysql-slave2.aws',
    api_token              => 'b7e515255db703c659677a66c4a17952515dbaf5'
  }
}

# Monitored DB hosts
node "mysql-master.aws", "mysql-slave1.aws", "mysql-slave2.aws" {
  class { 'clustercontrol':
    is_controller       => false,
    mysql_root_password => 'dpassword',
    clustercontrol_host => 'clustercontrol.aws'
  }
}
The MongoDB Replica Set runs on Ubuntu 12.04 LTS 64bit with sudo user ubuntu and password 'mySuDOpassXXX'. There is also an arbiter node running on mongo3.local . For MongoDB, the module does not require mysql_cmon_password and mysql_root_password , which are specific to MySQL grants.
The node definition would be:
# Monitored MongoDB hosts
node 'mongo1.local', 'mongo2.local', 'mongo3.local' {
  class { 'clustercontrol':
    is_controller       => false,
    ssh_user            => 'ubuntu',
    clustercontrol_host => '192.168.1.40'
  }
}

# ClusterControl host
node 'clustercontrol.local' {
  class { 'clustercontrol':
    is_controller                 => true,
    ssh_user                      => 'ubuntu',
    sudo_password                 => 'mySuDOpassXXX',
    email_address                 => 'admin@localhost.xyz',
    cluster_type                  => 'mongodb',
    mongodb_server_addresses      => 'mongo1.local:27017,mongo2.local:27017',
    mongoarbiter_server_addresses => 'mongo3.local:30000',
    datadir                       => '/var/lib/mongodb',
    api_token                     => 'b7e515255db703c659677a66c4a17952515dbaf5'
  }
}
A MongoDB Sharded Cluster needs the mongocfg_server_addresses and mongos_server_addresses options specified. The mongodb_server_addresses value should be set to the list of shard servers in the cluster. In the example below, we have a MongoDB Sharded Cluster running on three CentOS 5.6 64bit hosts, with 2 mongos nodes, 3 shard servers and 3 config servers:
The node definition would be:
# Monitored MongoDB hosts
node 'mongo1.local', 'mongo2.local', 'mongo3.local' {
  class { 'clustercontrol':
    is_controller       => false,
    clustercontrol_host => '192.168.1.40'
  }
}

# ClusterControl host
node 'clustercontrol.local' {
  class { 'clustercontrol':
    is_controller             => true,
    email_address             => 'admin@localhost.xyz',
    cluster_type              => 'mongodb',
    mongodb_server_addresses  => '192.168.1.41:27018,192.168.1.42:27018,192.168.1.43:27018',
    mongocfg_server_addresses => '192.168.1.41:27019,192.168.1.42:27019,192.168.1.43:27019',
    mongos_server_addresses   => '192.168.1.41:27017,192.168.1.42:27017',
    datadir                   => '/var/lib/mongodb',
    api_token                 => 'b7e515255db703c659677a66c4a17952515dbaf5'
  }
}
Please have a look at the documentation on the ClusterControl Puppet Forge page for more details. In an upcoming post, we will elaborate on how to deploy new database clusters with ClusterControl using existing modules available on Puppet Forge.