Saturday, August 5, 2017

Breaking Through

Because I know what I long for, I pray.
Therefore I will never let myself sink into quiet desperation.

Thursday, August 3, 2017

How to Encrypt a Directory in Ubuntu

encryptfolder

Encrypt your data directory /srv in Ubuntu

Install package

apt install ecryptfs-utils

Adding encryption information

root@encdir:~# cat .ecryptfsrc
key=passphrase:passphrase_passwd_file=/opt/passwd.txt
ecryptfs_sig=5826dd62cf81c615
ecryptfs_cipher=aes
ecryptfs_key_bytes=16
ecryptfs_passthrough=n
ecryptfs_enable_filename_crypto=n
root@encdir:~# cat /opt/passwd.txt
passphrase_passwd=inquartikinquartik

Mount the /srv Directory as an Encrypted Directory

root@encdir:~# mount -t ecryptfs /srv /srv


Attempting to mount with the following options:
  ecryptfs_unlink_sigs
  ecryptfs_key_bytes=16
  ecryptfs_cipher=aes
  ecryptfs_sig=024f9aaf95b0eaf8
WARNING: Based on the contents of [/root/.ecryptfs/sig-cache.txt],
it looks like you have never mounted with this key
before. This could mean that you have typed your
passphrase wrong.

Would you like to proceed with the mount (yes/no)? : yes
Would you like to append sig [024f9aaf95b0eaf8] to
[/root/.ecryptfs/sig-cache.txt]
in order to avoid this warning in the future (yes/no)? : yes
Successfully appended new sig to user sig cache file
Mounted eCryptfs

Check result

root@encdir:~# mount

/srv on /srv type ecryptfs (rw,relatime,ecryptfs_sig=024f9aaf95b0eaf8,ecryptfs_cipher=aes,ecryptfs_key_bytes=16,ecryptfs_unlink_sigs)

Automatically Mount After Reboot

root@encdir:~# cat /etc/fstab
/dev/fd0        /media/floppy0  auto    rw,user,noauto,exec,utf8 0       0
.
.
.
/srv /srv ecryptfs defaults 0 0

Performance test

With Encryption

root@encdir:~# mount -t ecryptfs /srv /srv
root@encdir:~# dd if=/dev/zero of=/srv/testp bs=10k count=100000 oflag=sync
100000+0 records in
100000+0 records out
1024000000 bytes (1.0 GB, 977 MiB) copied, 26.8949 s, 38.1 MB/s

Without Encryption

root@encdir:~# umount /srv
root@encdir:~# dd if=/dev/zero of=/srv/testpp bs=10k count=100000 oflag=sync
100000+0 records in
100000+0 records out
1024000000 bytes (1.0 GB, 977 MiB) copied, 17.158 s, 59.7 MB/s

Overhead

Encrypting the directory drops sequential write throughput from 59.7 MB/s to 38.1 MB/s, roughly a 35% reduction; the same 1 GB write takes about 1.6x as long (26.9 s vs 17.2 s).
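Just to sanity-check that figure, here is a quick sketch (plain Python, using only the numbers from the two dd runs above) that recomputes the overhead:

# Recompute the eCryptfs overhead from the dd results above.
plain_mb_s, enc_mb_s = 59.7, 38.1
plain_s, enc_s = 17.158, 26.8949

print("throughput drop: %.0f%%" % ((1 - enc_mb_s / plain_mb_s) * 100))  # ~36%
print("extra write time: %.0f%%" % ((enc_s / plain_s - 1) * 100))       # ~57%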

OpenStack and Sqlalchemy

sqlalchemy

Many thanks to this article:

http://www.dangtrinh.com/2013/06/sqlalchemy-python-module-with-mysql.html

SQLAlchemy in OpenStack

If we want to use Python, we should definitely look at what kind of help OpenStack provides. First of all, the DB schemas that OpenStack defines are all placed in this file:

nova/nova/db/sqlalchemy/models.py

Let's take a look at how they are defined: each table definition is placed in a class.

class InstanceTypes(BASE, NovaBase):
    """Represents possible flavors for instances.

    Note: instance_type and flavor are synonyms and the term instance_type is
    deprecated and in the process of being removed.
    """
    __tablename__ = "instance_types"

    __table_args__ = (
        schema.UniqueConstraint("flavorid", "deleted",
                                name="uniq_instance_types0flavorid0deleted"),
        schema.UniqueConstraint("name", "deleted",
                                name="uniq_instance_types0name0deleted")
    )

    # Internal only primary key/id
    id = Column(Integer, primary_key=True)
    name = Column(String(255))
    memory_mb = Column(Integer, nullable=False)
    vcpus = Column(Integer, nullable=False)
    root_gb = Column(Integer)
    ephemeral_gb = Column(Integer)
    # Public facing id will be renamed public_id
    flavorid = Column(String(255))
    swap = Column(Integer, nullable=False, default=0)
    rxtx_factor = Column(Float, default=1)
    vcpu_weight = Column(Integer)
    disabled = Column(Boolean, default=False)
    is_public = Column(Boolean, default=True)

Let's also look at what OpenStack imports:

from sqlalchemy import (Column, Index, Integer, BigInteger, Enum, String,
                        schema, Unicode)
from sqlalchemy.dialects.mysql import MEDIUMTEXT
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy import orm
from sqlalchemy import ForeignKey, DateTime, Boolean, Text, Float

from nova.db.sqlalchemy import types
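To see how this declarative style is used end to end, here is a minimal, self-contained sketch. It is not Nova code: the Flavor class, the in-memory SQLite URL and the sample row are my own illustration, but the pattern (declarative_base, Column definitions, create_all, Session queries) is the same one models.py relies on.

from sqlalchemy import Column, Integer, String, create_engine
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy.orm import sessionmaker

BASE = declarative_base()

class Flavor(BASE):
    # Hypothetical table for illustration, loosely modeled on InstanceTypes.
    __tablename__ = "flavors"
    id = Column(Integer, primary_key=True)
    name = Column(String(255))
    memory_mb = Column(Integer, nullable=False)

engine = create_engine("sqlite:///:memory:")   # assumption: SQLite instead of MySQL
BASE.metadata.create_all(engine)               # CREATE TABLE from the class definition

Session = sessionmaker(bind=engine)
session = Session()
session.add(Flavor(name="m1.small", memory_mb=2048))
session.commit()
print(session.query(Flavor).filter_by(name="m1.small").one().memory_mb)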

RabbitMQ and Linux Keepalive Settings

rabbitmq

RabbitMQ Keepalive System Setting

The system's tcp keepalive parameters need to be configured to reduce the keepalive time and the number of probes.

tcp keepalive (time=7200, intvl=5, probes=9)


net.ipv4.tcp_keepalive_time = 60
net.ipv4.tcp_keepalive_intvl = 30
net.ipv4.tcp_keepalive_probes = 5
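These sysctls change the kernel-wide defaults (put them in /etc/sysctl.conf or apply them with sysctl -w). An application can also opt in per socket; a minimal Python sketch with the same values, using the Linux-only TCP_KEEP* socket options (the broker address is a placeholder):

import socket

s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.setsockopt(socket.SOL_SOCKET, socket.SO_KEEPALIVE, 1)     # turn keepalive on
s.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPIDLE, 60)   # idle time before the first probe
s.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPINTVL, 30)  # interval between probes
s.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPCNT, 5)     # probes before the peer is declared dead
# s.connect(("rabbitmq-host", 5672))                        # hypothetical broker address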

OpenStack Trove

trove

Trove Installation

wget http://tarballs.openstack.org/trove/images/ubuntu/mysql.qcow2
mv mysql.qcow2 trove-mysql.qcow2

glance image-create --name "mysql-5.6" --file trove-mysql.qcow2 --disk-format qcow2 --container-format bare --visibility public --progress

sudo trove-manage datastore_update mysql ''
Glance_Image_ID=$(glance image-list | awk '/ mysql-5.6 / { print $2 }')
sudo trove-manage datastore_version_update mysql mysql-5.6 mysql ${Glance_Image_ID} '' 1
FLAVOR_ID=$(openstack flavor list | awk '/ m1.small / { print $2 }')
trove create mysql-instance ${FLAVOR_ID} --size 5 --databases myDB --users user:r00tme --datastore_version mysql-5.6 --datastore mysql
trove list

Error

Error Message

2016-08-22 16:15:16.712 7651 DEBUG trove.taskmanager.models [-] Successfully created security group for instance: d87625a2-17ac-4bb0-9c50-19ca1fe92084 create_instance /opt/stack/trove/trove/taskmanager/models.py:393
2016-08-22 16:15:16.712 7651 DEBUG trove.taskmanager.models [-] Begin _create_server_volume_individually for id: d87625a2-17ac-4bb0-9c50-19ca1fe92084 _create_server_volume_individually /opt/stack/trove/trove/taskmanager/models.py:783
2016-08-22 16:15:16.713 7651 DEBUG trove.taskmanager.models [-] trove volume support = True _build_volume_info /opt/stack/trove/trove/taskmanager/models.py:811
2016-08-22 16:15:16.713 7651 DEBUG trove.taskmanager.models [-] Begin _create_volume for id: d87625a2-17ac-4bb0-9c50-19ca1fe92084 _create_volume /opt/stack/trove/trove/taskmanager/models.py:844
2016-08-22 16:15:16.713 7651 ERROR trove.taskmanager.models [-] Failed to create volume for instance d87625a2-17ac-4bb0-9c50-19ca1fe92084
Endpoint not found for service_type=volumev2, endpoint_type=publicURL, endpoint_region=RegionOne.
Traceback (most recent call last):
  File "/opt/stack/trove/trove/taskmanager/models.py", line 815, in _build_volume_info
    volume_size, volume_type, datastore_manager)
  File "/opt/stack/trove/trove/taskmanager/models.py", line 845, in _create_volume
    volume_client = create_cinder_client(self.context)
  File "/opt/stack/trove/trove/common/remote.py", line 128, in cinder_client
    endpoint_type=CONF.cinder_endpoint_type)
  File "/opt/stack/trove/trove/common/remote.py", line 71, in get_endpoint
    endpoint_type=endpoint_type)
NoServiceEndpoint: Endpoint not found for service_type=volumev2, endpoint_type=publicURL, endpoint_region=RegionOne.

So Trove needs the Cinder volumev2 endpoint to create a data volume for the database instance. The relevant client code in Trove:

def cinder_client(context):
    if CONF.cinder_url:
        url = '%(cinder_url)s%(tenant)s' % {
            'cinder_url': normalize_url(CONF.cinder_url),
            'tenant': context.tenant}
    else:
        url = get_endpoint(context.service_catalog,
                           service_type=CONF.cinder_service_type,
                           endpoint_region=CONF.os_region_name,
                           endpoint_type=CONF.cinder_endpoint_type)
stack@trove:/etc/trove$ openstack service list
+----------------------------------+-------------+----------------+
| ID                               | Name        | Type           |
+----------------------------------+-------------+----------------+
| 7ebcc121e88c427a81b509334dd839e4 | trove       | database       |
| 90125dfe6a434ef3b0174cb7248c69f2 | nova_legacy | compute_legacy |
| 9a07a66686fa4e0a89201d98f137a898 | neutron     | network        |
| 9a8a8b2da8104b8c8422d134b2dff319 | nova        | compute        |
| b506135021f64a98899c378cbd47bf5f | keystone    | identity       |
| e0cb6a6687b043db869e5c0e06683d33 | glance      | image          |
+----------------------------------+-------------+----------------+

Hence, we add cinder to local.conf

CINDER_BRANCH=stable/mitaka
# Enable Cinder - Block Storage service for OpenStack
VOLUME_GROUP="cinder-volumes"
enable_service cinder c-api c-vol c-sch c-bak

After that, we can see the volumev2 endpoint:

stack@ubuntu:~/devstack$ openstack service list
+----------------------------------+-------------+----------------+
| ID                               | Name        | Type           |
+----------------------------------+-------------+----------------+
| 23058a3ea403442fb92f602fd4ebb777 | cinderv2    | volumev2       |
| 297f61ee0df84e4f8b49657af3b816cf | nova        | compute        |
| 674ab4b086c64dc8aa51afabc7a8f203 | neutron     | network        |
| 6e506e2ae0c14ca6a605cbf7828f0a1d | cinder      | volume         |
| b961bd89072e4abeabdf7088854f4e55 | glance      | image          |
| ddd741dae5904cd49d26badc8d17e7ef | keystone    | identity       |
| f6ade7c1e3564fa28e5c5c73a181c3a3 | nova_legacy | compute_legacy |
+----------------------------------+-------------+----------------+
The complete local.conf used:

[[local|localrc]]
DEST=/opt/stack

ADMIN_PASSWORD=password
MYSQL_PASSWORD=password
RABBIT_PASSWORD=password
SERVICE_PASSWORD=password
SERVICE_TOKEN=tokentoken
HOST_IP=192.168.140.20

ENABLED_SERVICES=key,rabbit,mysql,horizon
ENABLED_SERVICES+=,n-api,n-crt,n-cpu,n-net,n-cond,n-sch,n-novnc,n-cauth
ENABLED_SERVICES+=,g-api,g-reg

# Enable Cinder - Block Storage service for OpenStack
CINDER_BRANCH=stable/mitaka
VOLUME_GROUP="cinder-volumes"
enable_service cinder c-api c-vol c-sch c-bak

# Enabling trove
TROVE_BRANCH=stable/mitaka
enable_plugin trove git://git.openstack.org/openstack/trove stable/mitaka stable/mitaka
enable_plugin trove-dashboard git://git.openstack.org/openstack/trove-dashboard stable/mitaka


# Enabling Neutron (network) Service
disable_service n-net
enable_service q-svc
enable_service q-agt
enable_service q-dhcp
enable_service q-l3
enable_service q-meta
enable_service q-metering
enable_service neutron

Q_PLUGIN=ml2
#Q_USE_DEBUG_COMMAND=True
if [ "$Q_PLUGIN" = "ml2" ]; then
  #Q_ML2_TENANT_NETWORK_TYPE=gre
  Q_ML2_TENANT_NETWORK_TYPE=vxlan
  :
fi
## Neutron options
FIXED_RANGE=10.0.0.0/24
FIXED_NETWORK_SIZE=256
NETWORK_GATEWAY=10.0.0.1
PRIVATE_SUBNET_NAME=privateA

PUBLIC_SUBNET_NAME=public-subnet
FLOATING_RANGE=192.168.140.0/24
PUBLIC_NETWORK_GATEWAY=192.168.140.254
##Q_FLOATING_ALLOCATION_POOL=start=192.168.27.102,end=192.168.27.110
PUBLIC_INTERFACE=eth0
Q_USE_PROVIDERNET_FOR_PUBLIC=True
OVS_PHYSICAL_BRIDGE=br-ex
PUBLIC_BRIDGE=br-ex
OVS_BRIDGE_MAPPINGS=public:br-ex

LIBVIRT_TYPE=qemu

## Enable Trove

ENABLED_SERVICES+=,trove,tr-api,tr-tmgr,tr-cond


IMAGE_URLS="http://download.cirros-cloud.net/0.3.3/cirros-0.3.3-x86_64-uec.tar.gz"

SCREEN_LOGDIR=/opt/stack/screen-logs
SYSLOG=True
LOGFILE=~/devstack/stack.sh.log


Q_USE_DEBUG_COMMAND=True

# RECLONE=No
RECLONE=yes
OFFLINE=False

After installing Cinder, we still got the following error:

No valid host was found. There are not enough hosts available.
Code: 500
Details:
File "/opt/stack/nova/nova/conductor/manager.py", line 392, in build_instances
    context, request_spec, filter_properties)
File "/opt/stack/nova/nova/conductor/manager.py", line 436, in _schedule_instances
    hosts = self.scheduler_client.select_destinations(context, spec_obj)
File "/opt/stack/nova/nova/scheduler/utils.py", line 372, in wrapped
    return func(*args, **kwargs)
File "/opt/stack/nova/nova/scheduler/client/__init__.py", line 51, in select_destinations
    return self.queryclient.select_destinations(context, spec_obj)
File "/opt/stack/nova/nova/scheduler/client/__init__.py", line 37, in __run_method
    return getattr(self.instance, __name)(*args, **kwargs)
File "/opt/stack/nova/nova/scheduler/client/query.py", line 32, in select_destinations
    return self.scheduler_rpcapi.select_destinations(context, spec_obj)
File "/opt/stack/nova/nova/scheduler/rpcapi.py", line 121, in select_destinations
    return cctxt.call(ctxt, 'select_destinations', **msg_args)
File "/usr/local/lib/python2.7/dist-packages/oslo_messaging/rpc/client.py", line 158, in call
    retry=self.retry)
File "/usr/local/lib/python2.7/dist-packages/oslo_messaging/transport.py", line 90, in _send
    timeout=timeout, retry=retry)
File "/usr/local/lib/python2.7/dist-packages/oslo_messaging/_drivers/amqpdriver.py", line 470, in send
    retry=retry)
File "/usr/local/lib/python2.7/dist-packages/oslo_messaging/_drivers/amqpdriver.py", line 461, in _send
    raise result
Created: Aug. 23, 2016, 5:48 a.m.

After we switch to the m1.small flavor, it works. We can see the status in Horizon:

mysql-instance  mysql-5.6    10.0.0.4 fd1d:6b4e:634a:0:f816:3eff:fea4:f2c2 m1.small  Active nova    None    Running 1 minute    
stack@trove2:~/trove-test$ trove list
+--------------------------------------+----------------+-----------+-------------------+--------+-----------+------+
| ID                                   | Name           | Datastore | Datastore Version | Status | Flavor ID | Size |
+--------------------------------------+----------------+-----------+-------------------+--------+-----------+------+
| 0d1cf949-2db9-4d73-8843-fc7a7d279a11 | mysql-instance | mysql     | mysql-5.6         | ERROR  | 3         |    5 |
| f86da618-0d7f-464b-b051-769f1864095e | mysql-instance | mysql     | mysql-5.6         | BUILD  | 2         |    5 |
+--------------------------------------+----------------+-----------+-------------------+--------+-----------+------+

Monday, July 24, 2017

Architecture of OpenStack L3 Router HA

l3routerha

On both network nodes we can see the same virtual router, qrouter-f8d376d2-6d8a-4ec1-9939-4c6ec25f8cec:

[root@openstackcontroller13 ~]# ip netns list|grep f8d376d2-6d8a-4ec1-9939-4c6ec25f8cec
qrouter-f8d376d2-6d8a-4ec1-9939-4c6ec25f8cec
[root@openstackcontroller13 ~]# ip netns exec qrouter-f8d376d2-6d8a-4ec1-9939-4c6ec25f8cec bash
[root@openstackcontroller13 ~]# ifconfig
ha-880fa0e2-8d: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1450
        inet 169.254.192.4  netmask 255.255.192.0  broadcast 169.254.255.255
        inet6 fe80::f816:3eff:fe90:adf0  prefixlen 64  scopeid 0x20<link>
        ether fa:16:3e:90:ad:f0  txqueuelen 0  (Ethernet)
        RX packets 609493  bytes 32927812 (31.4 MiB)
        RX errors 0  dropped 43  overruns 0  frame 0
        TX packets 304608  bytes 16449072 (15.6 MiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
        inet 127.0.0.1  netmask 255.0.0.0
        inet6 ::1  prefixlen 128  scopeid 0x10<host>
        loop  txqueuelen 0  (Local Loopback)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

qg-3674d949-4c: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1450
        inet 10.89.151.168  netmask 255.255.0.0  broadcast 0.0.0.0
        inet6 fe80::f816:3eff:fe8e:b815  prefixlen 64  scopeid 0x20<link>
        ether fa:16:3e:8e:b8:15  txqueuelen 0  (Ethernet)
        RX packets 68441559  bytes 19571436387 (18.2 GiB)
        RX errors 0  dropped 2251  overruns 0  frame 0
        TX packets 55319  bytes 5194356 (4.9 MiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

qr-352424b9-3e: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1450
        inet 192.168.20.1  netmask 255.255.255.0  broadcast 0.0.0.0
        inet6 fe80::f816:3eff:fe12:5526  prefixlen 64  scopeid 0x20<link>
        ether fa:16:3e:12:55:26  txqueuelen 0  (Ethernet)
        RX packets 3675  bytes 366823 (358.2 KiB)
        RX errors 0  dropped 13  overruns 0  frame 0
        TX packets 1394  bytes 132232 (129.1 KiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

[root@openstackcontroller12 ~]# ip netns exec qrouter-f8d376d2-6d8a-4ec1-9939-4c6ec25f8cec bash
[root@openstackcontroller12 ~]# ifconfig
ha-71d6264d-9d: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1450
        inet 169.254.192.3  netmask 255.255.192.0  broadcast 169.254.255.255
        inet6 fe80::f816:3eff:fee7:4c03  prefixlen 64  scopeid 0x20<link>
        ether fa:16:3e:e7:4c:03  txqueuelen 0  (Ethernet)
        RX packets 800714  bytes 43265351 (41.2 MiB)
        RX errors 0  dropped 31  overruns 0  frame 0
        TX packets 12  bytes 1008 (1008.0 B)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
        inet 127.0.0.1  netmask 255.0.0.0
        inet6 ::1  prefixlen 128  scopeid 0x10<host>
        loop  txqueuelen 0  (Local Loopback)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

qg-3674d949-4c: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1450
        ether fa:16:3e:8e:b8:15  txqueuelen 0  (Ethernet)
        RX packets 58884872  bytes 18031883270 (16.7 GiB)
        RX errors 0  dropped 2002  overruns 0  frame 0
        TX packets 1  bytes 110 (110.0 B)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

qr-352424b9-3e: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1450
        ether fa:16:3e:12:55:26  txqueuelen 0  (Ethernet)
        RX packets 500  bytes 57320 (55.9 KiB)
        RX errors 0  dropped 17  overruns 0  frame 0
        TX packets 1  bytes 110 (110.0 B)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

Let's check keepalived:

ps aux|grep keepalived |grep f8d376d2-6d8a-4ec1-9939-4c6ec25f8cec

root     39958  0.0  0.0 111636  1364 ?        Ss   Oct26   0:23 keepalived -P -f /var/lib/neutron/ha_confs/f8d376d2-6d8a-4ec1-9939-4c6ec25f8cec/keepalived.conf -p /var/lib/neutron/ha_confs/f8d376d2-6d8a-4ec1-9939-4c6ec25f8cec.pid -r /var/lib/neutron/ha_confs/f8d376d2-6d8a-4ec1-9939-4c6ec25f8cec.pid-vrrp

The process above runs inside the network namespace. However, a network namespace does not isolate processes, so all processes are visible from anywhere on the host. For reference, here is my earlier study of network namespaces:

http://gogosatellite.blogspot.tw/2016/06/playing-openvswitch-and-namespace-veth.html
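To double-check which namespace that keepalived PID really lives in, `ip netns identify` can be wrapped in a small sketch (the PID 39958 comes from the ps output above; run as root):

import subprocess

# Network namespaces do not isolate processes, so ps on the host still sees
# keepalived; `ip netns identify <pid>` maps the PID back to its namespace.
pid = "39958"
ns = subprocess.check_output(["ip", "netns", "identify", pid], universal_newlines=True)
print(ns.strip())   # expected: qrouter-f8d376d2-6d8a-4ec1-9939-4c6ec25f8cec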

/var/lib/neutron/ha_confs/f8d376d2-6d8a-4ec1-9939-4c6ec25f8cec/keepalived.conf

vrrp_instance VR_2 {
    state BACKUP
    interface ha-71d6264d-9d
    virtual_router_id 2
    priority 50
    garp_master_delay 60
    nopreempt
    advert_int 2
    track_interface {
        ha-71d6264d-9d
    }
    virtual_ipaddress {
        169.254.0.2/24 dev ha-71d6264d-9d
    }
    virtual_ipaddress_excluded {
        10.89.151.168/16 dev qg-3674d949-4c
        192.168.20.1/24 dev qr-352424b9-3e
        fe80::f816:3eff:fe12:5526/64 dev qr-352424b9-3e scope link
        fe80::f816:3eff:fe8e:b815/64 dev qg-3674d949-4c scope link
    }
    virtual_routes {
        0.0.0.0/0 via 10.89.1.254 dev qg-3674d949-4c
    }
}

The explanation from the official wiki:

https://wiki.openstack.org/wiki/Neutron/L3HighAvailability_VRRP

global_defs {
    router_id ${VR_ID}
}
vrrp_sync_group VG${VR_GROUP_ID} {
    group {
        VI_HA
    }
    % if NOTIFY_SCRIPT:
    notify_master ${NOTIFY_SCRIPT}
    % endif
}

vrrp_instance VI_HA {
    % if TYPE == 'MASTER':
    state MASTER
    % else:
    state SLAVE
    % endif
    interface ${L3_AGENT.get_ha_device_name(TRACK_PORT_ID)}
    virtual_router_id ${VR_ID}
    priority ${PRIORITY}
    track_interface {
        ${L3_AGENT.get_ha_device_name(TRACK_PORT_ID)}
    }
    virtual_ipaddress {
        % if EXTERNAL_PORT:
        ${EXTERNAL_PORT['ip_cidr']} dev ${L3_AGENT.get_external_device_name(EXTERNAL_PORT['id'])}
        % if FLOATING_IPS:
        ${FLOATING_IPS[0]['floating_ip_address']}/32 dev ${L3_AGENT.get_external_device_name(EXTERNAL_PORT['id'])}
        % endif
        % endif

        % if INTERNAL_PORTS:
        ${INTERNAL_PORTS[0]['ip_cidr']} dev ${L3_AGENT.get_internal_device_name(INTERNAL_PORTS[0]['id'])}
        % endif
    }
    virtual_ipaddress_excluded {
        % if EXTERNAL_PORT:
        % for FLOATING_IP in FLOATING_IPS[1:]:
        ${FLOATING_IP['floating_ip_address']}/32 dev ${L3_AGENT.get_external_device_name(EXTERNAL_PORT['id'])}
        % endfor
        % endif

        % for INTERNAL_PORT in INTERNAL_PORTS[1:]:
        ${INTERNAL_PORT['ip_cidr']} dev ${L3_AGENT.get_internal_device_name(INTERNAL_PORT['id'])}
        % endfor
    }

    % if EXTERNAL_PORT:
    virtual_routes {
        0.0.0.0/0 via ${EXTERNAL_PORT['ip_cidr'].split('/')[0]} dev ${L3_AGENT.get_external_device_name(EXTERNAL_PORT['id'])}
    }
    % endif
}

virtual_ipaddress configures the VIP, and virtual_ipaddress_excluded configures the IPs of the network devices inside the namespace. The standby node has none of these addresses configured until a failover happens. The network configuration also includes the MAC addresses, which are identical on both nodes.
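As the two ifconfig dumps above show, only the active node carries addresses on the qg-/qr- devices. A quick sketch that uses this to tell MASTER from BACKUP on a given network node (namespace and device names taken from this router; run as root):

import subprocess

ns = "qrouter-f8d376d2-6d8a-4ec1-9939-4c6ec25f8cec"
dev = "qg-3674d949-4c"
out = subprocess.check_output(
    ["ip", "netns", "exec", ns, "ip", "-4", "-o", "addr", "show", "dev", dev],
    universal_newlines=True)
# Only the active L3 agent has IPv4 addresses on the qg-/qr- interfaces.
print("MASTER" if out.strip() else "BACKUP")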

Friday, July 21, 2017

How to Setup High Availability Kubernetes

hakubernetes

Set up Kubernetes HA

Environment

Kubernetes Version 1.6.

We have two servers. Master1: 172.16.155.158, Master2: 172.16.155.165. Etcd: 172.16.155.158 (co-located with Master1).

Here we have only one etcd and focus on how to set up two Kubernetes masters; etcd clustering is not considered.

There are two ways to reach the two API servers:
1. Containers connect to the Kubernetes API server through the kubernetes.default service.
2. Minion nodes connect to the API server through a load balancer in front of the host IPs.

We first discuss how to set up the kubernetes.default service so that every container can reach the API server, then how a minion connects to the API server through a real load balancer, and finally how this works when launching a container.
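For the first path, a minimal sketch of what a container does: it resolves kubernetes.default to the service ClusterIP and lets kube-proxy balance the request across both API servers. This assumes the requests library is available inside the container; the token and CA paths are the standard service-account mount.

import requests

token = open("/var/run/secrets/kubernetes.io/serviceaccount/token").read().strip()
ca = "/var/run/secrets/kubernetes.io/serviceaccount/ca.crt"

# kubernetes.default resolves to the service ClusterIP (172.18.0.1 below);
# kube-proxy then forwards to one of the two apiserver endpoints on port 6443.
resp = requests.get("https://kubernetes.default/api/v1",
                    headers={"Authorization": "Bearer " + token},
                    verify=ca)
print(resp.status_code)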

Master1 Setting

root@kuberm:~/kube1.6config/deploy/webscale# cat /lib/systemd/system/kube-apiserver.service
[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/kubernetes/kubernetes
After=network.target

[Service]
User=root
ExecStart=/opt/bin/kube-apiserver \
 --insecure-bind-address=0.0.0.0 \
 --insecure-port=8080 \
 --etcd-servers=http://172.16.155.158:2379\
 --logtostderr=false \
 --allow-privileged=false \
 --service-cluster-ip-range=172.18.0.0/16 \
 --admission-control=NamespaceLifecycle,ServiceAccount,LimitRanger,SecurityContextDeny,ResourceQuota \
 --service-node-port-range=30000-32767 \
 --advertise-address=172.16.155.158 \
 --v=6 \
 --storage-backend="etcd2" \
 --log-dir="/var/log/kubernetes" \
 --client-ca-file=/srv/kubernetes/ca.crt \
 --tls-private-key-file=/srv/kubernetes/server.key \
 --tls-cert-file=/srv/kubernetes/server.cert \
 --service_account_key_file=/srv/kubernetes/server.key \
 --secure-port=6443 \
 --apiserver-count=2

Restart=on-failure
Type=notify
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target

Note that --advertise-address=172.16.155.158 is quite critical: it is the only host IP the API server advertises for itself, and it is why kubernetes.default routes to the host IPs to reach the API servers.

The kubernetes service itself lives at a cluster (service) IP reachable from containers and routes to the host IP addresses declared in the config files to connect to the API servers.

root@kuberm:~/kube1.6config/deploy/webscale# kubectl get endpoints kubernetes
NAME         ENDPOINTS                                 AGE
kubernetes   172.16.155.158:6443,172.16.155.165:6443   34d
root@kuberm:~/kube1.6config/deploy/webscale# cat /lib/systemd/system/kube-controller-manager.service
[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/kubernetes/kubernetes

[Service]
User=root
ExecStart=/opt/bin/kube-controller-manager \
  --master=172.16.155.158:8080 \
  --root-ca-file=/srv/kubernetes/ca.crt \
  --service-account-private-key-file=/srv/kubernetes/server.key \
  --logtostderr=false \
  --log-dir="/var/log/kubernetes" \
  --v=3 \
  --leader-elect=true
Restart=on-failure
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
root@kuberm:~/kube1.6config/deploy/webscale# cat /lib/systemd/system/kube-scheduler.service
[Unit]
Description=Kubernetes Scheduler
Documentation=https://github.com/kubernetes/kubernetes

[Service]
User=root
ExecStart=/opt/bin/kube-scheduler \
  --logtostderr=true \
  --master=172.16.155.158:8080 \
  --leader-elect=true
Restart=on-failure
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
root@kuberm:~/kube1.6config/deploy/webscale# cat /lib/systemd/system/kube-proxy.service
[Unit]
Description=Kubernetes Proxy
After=network.target

[Service]
ExecStart=/opt/bin/kube-proxy  \
  --hostname-override=172.16.155.158 \
  --master=http://172.16.155.158:8080 \
  --logtostderr=true
Restart=on-failure

[Install]
WantedBy=multi-user.target
root@kuberm:~/kube1.6config/deploy/webscale# cat /lib/systemd/system/kubelet.service
[Unit]
Description=Kubernetes Kubelet
After=docker.service
Requires=docker.service

[Service]
ExecStart=/opt/bin/kubelet \
  --hostname-override=172.16.155.158 \
  --api-servers=http://172.16.155.158:8080 \
  --register-node=true \
  --logtostderr=false \
  --log-dir="/var/log/kubernetes" \
  --v=3 \
  --cluster_dns=172.18.0.5 \
  --cluster_domain=cluster.local
Restart=on-failure
KillMode=process

[Install]
WantedBy=multi-user.target

Master2 Setting

root@kuberm2:~# cat /lib/systemd/system/kube-apiserver.service
[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/kubernetes/kubernetes
After=network.target

[Service]
User=root
ExecStart=/opt/bin/kube-apiserver \
 --insecure-bind-address=0.0.0.0 \
 --insecure-port=8080 \
 --etcd-servers=http://172.16.155.158:2379\
 --logtostderr=false \
 --allow-privileged=true \
 --service-cluster-ip-range=172.18.0.0/16 \
 --admission-control=NamespaceLifecycle,ServiceAccount,LimitRanger,SecurityContextDeny,ResourceQuota \
 --service-node-port-range=30000-32767 \
 --advertise-address=172.16.155.165 \
 --v=6 \
 --storage-backend="etcd2" \
 --log-dir="/var/log/kubernetes" \
 --client-ca-file=/srv/kubernetes/ca.crt \
 --tls-private-key-file=/srv/kubernetes/server.key \
 --tls-cert-file=/srv/kubernetes/server.cert \
 --service_account_key_file=/srv/kubernetes/server.key \
 --secure-port=6443 \
 --apiserver-count=2

Restart=on-failure
Type=notify
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
root@kuberm2:~# cat /lib/systemd/system/kube-controller-manager.service
[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/kubernetes/kubernetes

[Service]
User=root
ExecStart=/opt/bin/kube-controller-manager \
  --master=172.16.155.165:8080 \
  --root-ca-file=/srv/kubernetes/ca.crt \
  --service-account-private-key-file=/srv/kubernetes/server.key \
  --logtostderr=false \
  --log-dir="/var/log/kubernetes" \
  --v=3 \
  --leader-elect=true

Restart=on-failure
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
root@kuberm2:~# cat /lib/systemd/system/kube-scheduler.service
[Unit]
Description=Kubernetes Scheduler
Documentation=https://github.com/kubernetes/kubernetes

[Service]
User=root
ExecStart=/opt/bin/kube-scheduler \
  --logtostderr=true \
  --master=172.16.155.165:8080 \
  --leader-elect=true
Restart=on-failure
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target

Check Result

Let's check the API server endpoint that containers connect to, i.e. kubernetes.default.
It should expose both API servers behind the kube-proxy load balancer.

root@kuberm:~/kube1.6config/deploy/webscale# kubectl get svc kubernetes
NAME         CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
kubernetes   172.18.0.1   <none>        443/TCP   34d
root@kuberm:~/kube1.6config/deploy/webscale# kubectl get endpoints kubernetes
NAME         ENDPOINTS                                 AGE
kubernetes   172.16.155.158:6443,172.16.155.165:6443   34d
root@kuberm:~/kube1.6config/deploy/webscale# kubectl get svc kubernetes -o yaml
apiVersion: v1
kind: Service
metadata:
  creationTimestamp: 2017-05-03T05:08:44Z
  labels:
    component: apiserver
    provider: kubernetes
  name: kubernetes
  namespace: default
  resourceVersion: "397592"
  selfLink: /api/v1/namespaces/default/services/kubernetes
  uid: 94c098f6-2fbe-11e7-9a3a-000c295cb5bb
spec:
  clusterIP: 172.18.0.1
  ports:
  - name: https
    port: 443
    protocol: TCP
    targetPort: 6443
  sessionAffinity: ClientIP
  type: ClusterIP
status:
  loadBalancer: {}
root@kuberm:~/kube1.6config/deploy/webscale# /opt/bin/etcdctl get /registry/services/endpoints/default/kubernetes
{"kind":"Endpoints","apiVersion":"v1","metadata":{"name":"kubernetes","namespace":"default","selfLink":"/api/v1/namespaces/default/endpoints/kubernetes","uid":"94c24aba-2fbe-11e7-9a3a-000c295cb5bb","creationTimestamp":"2017-05-03T05:08:44Z"},"subsets":[{"addresses":[{"ip":"172.16.155.158"},{"ip":"172.16.155.165"}],"ports":[{"name":"https","port":6443,"protocol":"TCP"}]}]}

Check multiple kube-scheduler and kube-controller

With --leader-elect=true set on both kube-scheduler and kube-controller-manager, both processes keep running, but only one of them is actually doing work. You can check the logs to confirm this: only the elected leader is active, even though both processes exist.
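A quick way to see who currently holds the lock is to read the Endpoints object that the logs below keep GETting and PUTting; in this release the lock record normally sits in the control-plane.alpha.kubernetes.io/leader annotation. A sketch against the insecure port configured above, assuming the requests library:

import requests

url = ("http://172.16.155.158:8080/api/v1/namespaces/kube-system"
       "/endpoints/kube-scheduler")
annotations = requests.get(url).json()["metadata"].get("annotations", {})
# The value records holderIdentity plus the lease duration and renew time.
print(annotations.get("control-plane.alpha.kubernetes.io/leader"))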

Slave Of Kube-Scheduler

ube-scheduler 200 OK in 1 milliseconds
I0607 11:00:05.179156   17298 leaderelection.go:248] lock is held by kuberm and has not yet expired
I0607 11:00:05.179162   17298 leaderelection.go:185] failed to acquire lease kube-system/kube-scheduler
I0607 11:00:09.334061   17298 round_trippers.go:417] GET http://172.16.155.165:8080/api/v1/namespaces/kube-system/endpoints/kube-scheduler 200 OK in 2 milliseconds
I0607 11:00:09.334184   17298 leaderelection.go:248] lock is held by kuberm and has not yet expired
I0607 11:00:09.334191   17298 leaderelection.go:185] failed to acquire lease kube-system/kube-scheduler
I0607 11:00:12.110069   17298 round_trippers.go:417] GET http://172.16.155.165:8080/api/v1/namespaces/kube-system/endpoints/kube-scheduler 200 OK in 2 milliseconds
I0607 11:00:12.110194   17298 leaderelection.go:248] lock is held by kuberm and has not yet expired
I0607 11:00:12.110201   17298 leaderelection.go:185] failed to acquire lease kube-system/kube-scheduler

Master of Kube-Scheduler

I0607 11:00:53.679760   18257 round_trippers.go:417] GET http://172.16.155.158:8080/api/v1/namespaces/kube-system/endpoints/kube-scheduler 200 OK in 1 milliseconds
I0607 11:00:53.684782   18257 round_trippers.go:417] PUT http://172.16.155.158:8080/api/v1/namespaces/kube-system/endpoints/kube-scheduler 200 OK in 4 milliseconds
I0607 11:00:53.684914   18257 leaderelection.go:204] succesfully renewed lease kube-system/kube-scheduler
I0607 11:00:55.686845   18257 round_trippers.go:417] GET http://172.16.155.158:8080/api/v1/namespaces/kube-system/endpoints/kube-scheduler 200 OK in 1 milliseconds
I0607 11:00:55.693945   18257 round_trippers.go:417] PUT http://172.16.155.158:8080/api/v1/namespaces/kube-system/endpoints/kube-scheduler 200 OK in 6 milliseconds
I0607 11:00:55.694060   18257 leaderelection.go:204] succesfully renewed lease kube-system/kube-scheduler

Master of Kube-controller

607 11:03:56.630699   17442 nodecontroller.go:1044] node kubermnode2 hasn't been updated for 8m30.112359775s. Last OutOfDisk is: &NodeCondition{Type:OutOfDisk,Status:Unknown,LastHeartbeatTime:2017-05-12 18:15:25 +0800 CST,LastTransitionTime:2017-06-06 14:35:21 +0800 CST,Reason:NodeStatusUnknown,Message:Kubelet stopped posting node status.,}
I0607 11:03:56.630725   17442 nodecontroller.go:1044] node kubermnode2 hasn't been updated for 8m30.112386319s. Last MemoryPressure is: &NodeCondition{Type:MemoryPressure,Status:Unknown,LastHeartbeatTime:2017-05-12 18:15:25 +0800 CST,LastTransitionTime:2017-06-06 14:35:21 +0800 CST,Reason:NodeStatusUnknown,Message:Kubelet stopped posting node status.,}
I0607 11:03:56.630739   17442 nodecontroller.go:1044] node kubermnode2 hasn't been updated for 8m30.112399821s. Last DiskPressure is: &NodeCondition{Type:DiskPressure,Status:Unknown,LastHeartbeatTime:2017-05-12 18:15:25 +0800 CST,LastTransitionTime:2017-06-06 14:35:21 +0800 CST,Reason:NodeStatusUnknown,Message:Kubelet stopped posting node status.,}

Slave of kube-controller

ube-controller-manager 200 OK in 1 milliseconds
I0607 11:04:32.485502   17291 leaderelection.go:248] lock is held by kuberm and has not yet expired
I0607 11:04:32.485506   17291 leaderelection.go:185] failed to acquire lease kube-system/kube-controller-manager
I0607 11:04:36.263032   17291 round_trippers.go:417] GET http://172.16.155.165:8080/api/v1/namespaces/kube-system/endpoints/kube-controller-manager 200 OK in 1 milliseconds
I0607 11:04:36.263122   17291 leaderelection.go:248] lock is held by kuberm and has not yet expired
I0607 11:04:36.263125   17291 leaderelection.go:185] failed to acquire lease kube-system/kube-controller-manager

Failover of Kube-Scheduler

Stop the kube-scheduler on the master; on the slave you will see the failover after about 10 seconds. You can try the same with kube-controller-manager; the two failovers are independent.

I0607 11:27:38.747508    1487 leaderelection.go:248] lock is held by kuberm and has not yet expired
I0607 11:27:38.747513    1487 leaderelection.go:185] failed to acquire lease kube-system/kube-scheduler
I0607 11:27:41.168987    1487 round_trippers.go:417] GET http://172.16.155.165:8080/api/v1/namespaces/kube-system/endpoints/kube-scheduler 200 OK in 3 milliseconds
I0607 11:27:41.176275    1487 round_trippers.go:417] PUT http://172.16.155.165:8080/api/v1/namespaces/kube-system/endpoints/kube-scheduler 200 OK in 4 milliseconds
I0607 11:27:41.176565    1487 leaderelection.go:189] successfully acquired lease kube-system/kube-scheduler
I0607 11:27:41.179086    1487 event.go:217] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"kube-scheduler", UID:"758d2812-4b2d-11e7-9e10-000c295cb5bb", APIVersion:"v1", ResourceVersion:"558042", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' kuberm2 became leader
I0607 11:27:41.181071    1487 round_trippers.go:417] POST http://172.16.155.165:8080/api/v1/namespaces/kube-system/events 201 Created in 2 milliseconds
I0607 11:27:43.178612    1487 round_trippers.go:417] GET http://172.16.155.165:8080/api/v1/namespaces/kube-system/endpoints/kube-scheduler 200 OK in 1 milliseconds
I0607 11:27:43.182060    1487 round_trippers.go:417] PUT http://172.16.155.165:8080/api/v1/namespaces/kube-system/endpoints/kube-scheduler 200 OK in 3 milliseconds
I0607 11:27:43.182111    1487 leaderelection.go:204] succesfully renewed lease kube-system/kube-scheduler

Friday, June 30, 2017

MariaDB 10.2.6 Installation and Playing

mariadb

Installation

apt-get install software-properties-common
apt-key adv --recv-keys --keyserver hkp://keyserver.ubuntu.com:80 0xF1656F24C74CD1D8

root@mariadbn:~# cat /etc/apt/sources.list.d/mariadb.list
deb [arch=amd64,i386] http://mirror.jmu.edu/pub/mariadb/repo/10.2/ubuntu trusty main
deb-src http://mirror.jmu.edu/pub/mariadb/repo/10.2/ubuntu trusty main


apt-get update
apt install mariadb-server

Edit Config

/etc/mysql/my.cnf

[mysqld]
.
bind-address            = 0.0.0.0
.
.

and

[galera]
binlog_format=ROW
default-storage-engine=innodb
innodb_autoinc_lock_mode=2
bind-address=0.0.0.0
wsrep_on=ON
wsrep_provider=/usr/lib/galera/libgalera_smm.so

# Galera Cluster Configuration
wsrep_cluster_name="gggggcluster"
wsrep_cluster_address="gcomm://172.16.155.182,172.16.155.183?pc.wait_prim=no"

# Galera Synchronization Configuration
wsrep_sst_method=rsync

# Galera Node Configuration
wsrep_node_address="172.16.155.182"
wsrep_node_name="mariadbn"
[mysqldump]
quick
quote-names
max_allowed_packet      = 16M

On each node, adjust these attributes to match that node:

wsrep_node_address="host ip"
wsrep_node_name="hostname"

Modify remote access

GRANT ALL ON *.* TO 'root'@'%' IDENTIFIED BY 'password';

Modify debian-sys-maint

root@mariadbn:~# cat /etc/mysql/debian.cnf
# Automatically generated for Debian scripts. DO NOT TOUCH!
[client]
host     = localhost
user     = root
password = password
socket   = /var/run/mysqld/mysqld.sock
[mysql_upgrade]
host     = localhost
user     = root
password = password
socket   = /var/run/mysqld/mysqld.sock
basedir  = /usr

Start Clustering

On the first node:

service mysql start --wsrep-new-cluster

On the second node:

service mysql start

Check result

mysql -uroot -ppassword

MariaDB [(none)]> SHOW STATUS LIKE 'wsrep_cluster_size';
+--------------------+-------+
| Variable_name      | Value |
+--------------------+-------+
| wsrep_cluster_size | 2     |
+--------------------+-------+
1 row in set (0.00 sec)
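The same check can be run against every node at once. A small sketch assuming the PyMySQL driver and the root/password remote access granted above:

import pymysql

for host in ("172.16.155.182", "172.16.155.183"):
    conn = pymysql.connect(host=host, user="root", password="password")
    with conn.cursor() as cur:
        cur.execute("SHOW STATUS LIKE 'wsrep_cluster_size'")
        print(host, cur.fetchone())   # expect ('wsrep_cluster_size', '2') on both nodes
    conn.close()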

Add database

create database aa;

You can then check that the newly created database appears on the other node.

Set up Bootstrap

It seems MariaDB has temporarily solved the boot-sequence problem.

Add ?pc.wait_prim=no to wsrep_cluster_address on each node.

wsrep_cluster_address="gcomm://172.16.155.182,172.16.155.183?pc.wait_prim=no"

Then connect to one of the nodes, log in to MariaDB, and run the following command:

set global wsrep_provider_options="pc.bootstrap=true";

With pc.bootstrap=true set on node1, this works well across system reboots.

Reboot Problem

There are still some problems in version 10.2.6, but it is better than older versions. Sometimes the reboot works, and sometimes it fails.

If you cannot restart MariaDB, then on any node:

 root@mariadbnn:~# cat /var/lib/mysql/grastate.dat
# GALERA saved state
version: 2.1
uuid:    e1b56403-5d49-11e7-8b80-06dac255d854
seqno:   -1
safe_to_bootstrap: 0

change safe_to_bootstrap to 1 and restart it with:

service mysql start --wsrep-new-cluster

Reboot test

Troubleshooting

Access denied for user 'debian-sys-maint'@'localhost' (using password: YES)

Method1

/etc/mysql/debian.cnf

[client]
host     = localhost
user     = debian-sys-maint
password = 5aULtSThxY8iKQf2
socket   = /var/run/mysqld/mysqld.sock
[mysql_upgrade]
host     = localhost
user     = debian-sys-maint
password = 5aULtSThxY8iKQf2
socket   = /var/run/mysqld/mysqld.sock
basedir  = /usr
Then grant the matching privileges inside MySQL:

mysql> GRANT ALL PRIVILEGES ON *.* TO 'debian-sys-maint'@'localhost' IDENTIFIED BY '[debian-sys-maint password]';

Method2

replace /etc/mysql/debian.cnf

[client]
host     = localhost
user     = root
password = [password]
socket   = /var/run/mysqld/mysqld.sock
[mysql_upgrade]
host     = localhost
user     = root
password = [password]
socket   = /var/run/mysqld/mysqld.sock
basedir  = /usr

Replace [password] with the root password used to access the database.

Master Wrong Sequence

Nov  5 16:12:04 mariadb3 mysqld: } joined {
Nov  5 16:12:04 mariadb3 mysqld: } left {
Nov  5 16:12:04 mariadb3 mysqld: } partitioned {
Nov  5 16:12:04 mariadb3 mysqld: })
Nov  5 16:12:04 mariadb3 mysqld: 161105 16:12:04 [Note] WSREP: New COMPONENT: primary = no, bootstrap = no, my_idx = 0, memb_num = 3
Nov  5 16:12:04 mariadb3 mysqld: 161105 16:12:04 [Note] WSREP: Flow-control interval: [28, 28]
Nov  5 16:12:04 mariadb3 mysqld: 161105 16:12:04 [Note] WSREP: Received NON-PRIMARY.
Nov  5 16:12:04 mariadb3 mysqld: 161105 16:12:04 [Note] WSREP: New cluster view: global state: c3069ad9-a23e-11e6-a5ff-cbf732092ea0:6, view# -1: non-Primary, number of nodes: 3, my index: 0, protocol version -1
Nov  5 16:12:04 mariadb3 mysqld: 161105 16:12:04 [Note] WSREP: wsrep_notify_cmd is not defined, skipping notification.
Nov  5 16:12:06 mariadb3 mysqld: 161105 16:12:06 [Note] WSREP: (30fec36f, 'tcp://0.0.0.0:4567') turning message relay requesting off
Nov  5 16:43:24 mariadb3 ntpdate[840]: step time server 91.189.94.4 offset 1877.225237 sec
Nov  5 16:43:39 mariadb3 ntpdate[1735]: adjust time server 91.189.94.4 offset -0.046589 sec
Nov  5 16:43:48 mariadb3 /etc/init.d/mysql[1912]: 0 processes alive and '/usr/bin/mysqladmin --defaults-file=/etc/mysql/debian.cnf ping' resulted in
Nov  5 16:43:48 mariadb3 /etc/init.d/mysql[1912]: #007/usr/bin/mysqladmin: connect to server at 'localhost' failed
Nov  5 16:43:48 mariadb3 /etc/init.d/mysql[1912]: error: 'Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (111 "Connection refused")'
Nov  5 16:43:48 mariadb3 /etc/init.d/mysql[1912]: Check that mysqld is running and that the socket: '/var/run/mysqld/mysqld.sock' exists!
Nov  5 16:43:48 mariadb3 /etc/init.d/mysql[1912]:
Nov  5 16:43:49 mariadb3 kernel: [   38.395295] init: plymouth-upstart-bridge main process ended, respawning

Slave Wrong Sequence

ov  5 16:43:21 mariadb2 mysqld: } joined {
Nov  5 16:43:21 mariadb2 mysqld: } left {
Nov  5 16:43:21 mariadb2 mysqld: } partitioned {
Nov  5 16:43:21 mariadb2 mysqld: })
Nov  5 16:43:21 mariadb2 mysqld: 161105 16:43:21 [Note] WSREP: New COMPONENT: primary = no, bootstrap = no, my_idx = 1, memb_num = 3
Nov  5 16:43:21 mariadb2 mysqld: 161105 16:43:21 [Note] WSREP: Flow-control interval: [28, 28]
Nov  5 16:43:21 mariadb2 mysqld: 161105 16:43:21 [Note] WSREP: Received NON-PRIMARY.
Nov  5 16:43:21 mariadb2 mysqld: 161105 16:43:21 [Note] WSREP: New cluster view: global state: c3069ad9-a23e-11e6-a5ff-cbf732092ea0:6, view# -1: non-Primary, number of nodes: 3, my index: 1, protocol version -1
Nov  5 16:43:21 mariadb2 mysqld: 161105 16:43:21 [Note] WSREP: wsrep_notify_cmd is not defined, skipping notification.
Nov  5 16:43:23 mariadb2 mysqld: 161105 16:43:23 [Note] WSREP: (52092468, 'tcp://0.0.0.0:4567') turning message relay requesting off

Slave Wrong Sequence

Nov  5 16:43:21 mariadb1 mysqld: } joined {
Nov  5 16:43:21 mariadb1 mysqld: } left {
Nov  5 16:43:21 mariadb1 mysqld: } partitioned {
Nov  5 16:43:21 mariadb1 mysqld: })
Nov  5 16:43:21 mariadb1 mysqld: 161105 16:43:21 [Note] WSREP: New COMPONENT: primary = no, bootstrap = no, my_idx = 2, memb_num = 3
Nov  5 16:43:21 mariadb1 mysqld: 161105 16:43:21 [Note] WSREP: Flow-control interval: [28, 28]
Nov  5 16:43:21 mariadb1 mysqld: 161105 16:43:21 [Note] WSREP: Received NON-PRIMARY.
Nov  5 16:43:21 mariadb1 mysqld: 161105 16:43:21 [Note] WSREP: New cluster view: global state: c3069ad9-a23e-11e6-a5ff-cbf732092ea0:6, view# -1: non-Primary, number of nodes: 3, my index: 2, protocol version -1
Nov  5 16:43:21 mariadb1 mysqld: 161105 16:43:21 [Note] WSREP: wsrep_notify_cmd is not defined, skipping notification.
Nov  5 16:43:23 mariadb1 mysqld: 161105 16:43:23 [Note] WSREP: (666a866f, 'tcp://0.0.0.0:4567') turning message relay requesting off