Wednesday, October 19, 2016

Updating Default PHP 5.5.9 to PHP 5.6 on Ubuntu 14.04

Presuming that we have the default PHP 5.5.9 which comes with Ubuntu 14.04.5 (Trusty):

root@indra:/# php -v
PHP 5.5.9-1ubuntu4.20 (cli) (built: Oct  3 2016 13:00:37)
Copyright (c) 1997-2014 The PHP Group
Zend Engine v2.5.0, Copyright (c) 1998-2014 Zend Technologies
    with Zend OPcache v7.0.3, Copyright (c) 1999-2014, by Zend Technologies

and we want to upgrade to PHP 5.6, here are the steps:

1. Patch the system first.

apt-get update
apt-get upgrade -y

2. Restart the server.

reboot

3. Add the repository and install PHP 5.6:

apt-get install software-properties-common
add-apt-repository ppa:ondrej/php
apt-get update
apt-get install php5.6 php5.6-mcrypt php5.6-mbstring php5.6-curl php5.6-cli php5.6-mysql php5.6-gd php5.6-intl php5.6-xsl
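
If Apache's PHP 5.6 module doesn't get pulled in automatically by the packages above, it may need to be installed explicitly; the package below comes from the same ondrej/php PPA:

apt-get install libapache2-mod-php5.6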


4. Re-configure Apache to use PHP 5.6:

a2dismod php5
a2enmod php5.6
service apache2 restart
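
To double-check which PHP module Apache ended up loading after the restart, listing the loaded modules helps, e.g.:

apache2ctl -M | grep php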

5. Verify that PHP 5.6 is running. On CLI:

root@indra:/etc/apache2# php -v
PHP 5.6.27-1+deb.sury.org~trusty+1 (cli)
Copyright (c) 1997-2016 The PHP Group
Zend Engine v2.6.0, Copyright (c) 1998-2016 Zend Technologies
    with Zend OPcache v7.0.6-dev, Copyright (c) 1999-2016, by Zend Technologies

On the web server side, access a phpinfo() PHP script using a browser and verify that PHP 5.6 is shown.
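
A quick way to create such a script, presuming the default Apache document root of /var/www/html on Ubuntu 14.04 (adjust the path if yours differs):

echo '<?php phpinfo();' > /var/www/html/info.php

Remember to delete the file afterwards, since it exposes details about the server.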

Tuesday, June 07, 2016

Moving MySQL Data Folder on cPanel Environment

According to advice from a cPanel engineer, the best way to move the MySQL data folder to a different location (e.g. to a partition with more available disk space) on a cPanel / CentOS environment is to create a symbolic link rather than to modify the my.cnf file.

Presuming that the original MySQL data folder is located at /var/lib/mysql and the partition with more available disk space is mounted as /home, here are the steps to move the MySQL data folder from /var/lib/mysql to /home/var_mysql/mysql.

1. Back up all the MySQL databases, just in case.

mkdir -p /home/backup
mysqldump --all-databases | gzip > /home/backup/alldatabases.sql.gz
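
Before moving anything, it doesn't hurt to verify that the dump is at least a readable gzip archive, for example:

gzip -t /home/backup/alldatabases.sql.gz && echo "backup OK"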

2. Stop the MySQL service and verify that it's stopped.

/etc/init.d/mysql stop
/etc/init.d/mysql status
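
If in doubt, also confirm that no mysqld process is still running before moving the files (the [m] trick keeps grep from matching itself):

ps aux | grep '[m]ysqld'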

3. Create the destination folder, move the existing data folder (with all its files and subfolders) to the new location, fix the ownership, and create the symbolic link.

mkdir /home/var_mysql
mv /var/lib/mysql /home/var_mysql
chown -R mysql:mysql /home/var_mysql/mysql
ln -s /home/var_mysql/mysql /var/lib/mysql
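
A quick way to confirm that the link points to the new location and that the data now sits on the bigger partition:

ls -ld /var/lib/mysql
df -h /home/var_mysql/mysql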


4. Start the MySQL service again, and verify that it's running.

/etc/init.d/mysql start
/etc/init.d/mysql status
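
As a final sanity check, a simple query confirms MySQL is serving data from the new location (presuming root credentials are set up in ~/.my.cnf, as is typical on cPanel):

mysql -e "SHOW DATABASES;"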

That's all. :)

Wednesday, June 01, 2016

Ceph - Crush Map has Legacy Tunables

I upgraded Ceph from the old Dumpling version to the latest Jewel version. In addition to the OSDs not being able to start up due to the permission settings on /var/lib/ceph (the ownership needs to be changed recursively to ceph:ceph; see the note after the status output below), I am also getting these HEALTH_WARN messages:

indra@sc-test-nfs-01:~$ ceph status
    cluster d3dc01a3-c38d-4a85-b040-3015455246e6
     health HEALTH_WARN
            too many PGs per OSD (512 > max 300)
            crush map has legacy tunables (require bobtail, min is firefly)
            crush map has straw_calc_version=0

     monmap e3: 3 mons at {sc-test-ceph-01=192.168.3.3:6789/0,sc-test-ceph-02=192.168.3.4:6789/0,sc-test-nfs-01=192.168.3.2:6789/0}
            election epoch 50, quorum 0,1,2 sc-test-nfs-01,sc-test-ceph-01,sc-test-ceph-02
     osdmap e100: 3 osds: 3 up, 3 in
      pgmap v965721: 704 pgs, 6 pools, 188 MB data, 59 objects
            61475 MB used, 1221 GB / 1350 GB avail
                 704 active+clean
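
As a side note, the OSD start-up issue mentioned above was fixed along these lines, assuming the default /var/lib/ceph location:

chown -R ceph:ceph /var/lib/ceph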

Resolving the problem is very simple; just use the command below:

ceph osd crush tunables optimal

indra@sc-test-nfs-01:~$ ceph osd crush tunables optimal
adjusted tunables profile to optimal
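
The tunables profile now in effect can also be inspected directly:

ceph osd crush show-tunables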

Ceph status after the adjustment:

indra@sc-test-nfs-01:~$ ceph status
    cluster d3dc01a3-c38d-4a85-b040-3015455246e6
     health HEALTH_WARN
            too many PGs per OSD (512 > max 300)
     monmap e3: 3 mons at {sc-test-ceph-01=192.168.3.3:6789/0,sc-test-ceph-02=192.168.3.4:6789/0,sc-test-nfs-01=192.168.3.2:6789/0}
            election epoch 50, quorum 0,1,2 sc-test-nfs-01,sc-test-ceph-01,sc-test-ceph-02
     osdmap e101: 3 osds: 3 up, 3 in
      pgmap v965764: 704 pgs, 6 pools, 188 MB data, 59 objects
            61481 MB used, 1221 GB / 1350 GB avail
                 704 active+clean

The warning messages related to the crush map are gone. Yay!

PS. Ignore the "too many PGs per OSD" warning; it appears because I have a limited number of OSDs and too many pools and PGs in my test environment.
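
For reference, the pg_num configured for each pool can be listed with something like:

ceph osd dump | grep pg_num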

Source: https://www.mail-archive.com/ceph-users@lists.ceph.com/msg10225.html
Reference: http://docs.ceph.com/docs/master/rados/operations/crush-map/