Thursday, September 22, 2016

Explaining Keystone Domain, Project, Group, User and Role in a Single Image

[Image: keystoneUserandRole]

  1. Fig 1
    • Each Domain is unique in OpenStack
    • Multiple Projects can be mapped to one Domain, but a Project cannot be mapped to more than one Domain
  2. Fig 2
    • One User can be mapped to multiple Projects
    • One User can have a different Role in each Project
  3. Fig 3
    • Multiple Users can be mapped to one Group
    • One Group can be mapped to multiple Projects
    • One Group can have a different Role in each Project
  4. Fig 4
    • Domains have admins
    • Projects have admins
    • Groups have admins
    • A Group can have a different admin in each Project
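
These relationships can also be reproduced on the command line. Below is a minimal sketch using the Keystone v3 API via the openstack client; the names (domain1, project1, user1, group1) are placeholders, and the exact role names (admin, _member_) depend on your deployment.

# create a domain, a project inside it, a user and a group
openstack domain create domain1
openstack project create --domain domain1 project1
openstack user create --domain domain1 user1
openstack group create --domain domain1 group1
# map the user into the group
openstack group add user group1 user1
# give the user the admin role on the domain and a role on the project
openstack role add --domain domain1 --user user1 admin
openstack role add --project project1 --user user1 admin
# the group can hold a different role on each project it is mapped to
openstack role add --project project1 --group group1 _member_

Repeating the last two commands with other projects and roles reproduces the "different Role in each Project" relationships from Fig 2 and Fig 3.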

Playing with HAProxy at Layer 7: Redirecting Traffic to Different Backends Based on the HTTP Request Header


HAProxy 1.6 Installation

Follow these steps to install HAProxy 1.6:

sudo add-apt-repository ppa:vbernat/haproxy-1.6
sudo apt-get update
sudo apt-get dist-upgrade
sudo apt-get install haproxy
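
Once installed, confirm that you actually got the 1.6 branch:

haproxy -v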

Simple testing

Run a quick test of the HAProxy we installed to make sure it works correctly.

Add the following to /etc/haproxy/haproxy.cfg:

global
  log         127.0.0.1 syslog
  maxconn     1000
  user        haproxy
  group       haproxy
  daemon

defaults
  log  global
  mode  http
  option  httplog
  option  dontlognull
  option  http-server-close
  option  forwardfor except 127.0.0.0/8
  option  redispatch
  option  contstats
  retries  3
  timeout  http-request 10s
  timeout  queue 1m
  timeout  connect 10s
  timeout  client 1m
  timeout  server 1m
  timeout  check 10s

listen  http_proxy
  bind        127.0.0.1:80
  balance     roundrobin
  server      server1 127.0.0.1:6000 maxconn 100
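
Before going further, you can validate the configuration and restart the service; the -c flag checks the config file without starting the proxy:

haproxy -c -f /etc/haproxy/haproxy.cfg
sudo service haproxy restart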

Run the Python SimpleHTTPServer as an HAProxy backend server.

root@haproxy:~# python -m SimpleHTTPServer 6000
Serving HTTP on 0.0.0.0 port 6000 ...

Use curl to test it. You should get the following result, which shows the proxy is forwarding correctly. Now we can go to the next step.

root@haproxy:~# curl 127.0.0.1:80
<!DOCTYPE html PUBLIC "-//W3C//DTD HTML 3.2 Final//EN"><html>
<title>Directory listing for /</title>
<body>
<h2>Directory listing for /</h2>
<hr>
<ul>
<li><a href=".bash_history">.bash_history</a>
<li><a href=".bashrc">.bashrc</a>
<li><a href=".cache/">.cache/</a>
<li><a href=".profile">.profile</a>
<li><a href=".ssh/">.ssh/</a>
<li><a href=".viminfo">.viminfo</a>
<li><a href="haproxy.cfg">haproxy.cfg</a>
<li><a href="sources.list">sources.list</a>
</ul>
<hr>
</body>

Redirecting by a Layer 7 Header

Now we are going to redirect traffic according to the request header information.

A Simple Case: Capturing Headers

A simple test to make sure HAProxy can see the request headers.

Change /etc/haproxy/haproxy.cfg to:

global
  # variables memory consumption, in bytes
  tune.vars.global-max-size 1048576
  tune.vars.reqres-max-size     512
  tune.vars.sess-max-size      2048
  tune.vars.txn-max-size        256
  log         127.0.0.1 syslog
  maxconn     1000
  user        haproxy
  group       haproxy
  daemon

defaults
  mode http

frontend f_myapp
  bind 127.0.0.1:80
  http-request set-var(txn.host) req.hdr(Host)
  http-request set-var(txn.ua) req.hdr(User-Agent)
  default_backend b_myapp

backend b_myapp
  http-response set-header Your-Host %[var(txn.host)]
  http-response set-header Your-User-Agent %[var(txn.ua)]
  server s1 127.0.0.1:6000 check

Test it, and you will see the response headers contain the request header information (Host and User-Agent).

curl -H "host":"hhh" -H "User-Agent":"hhh" http://127.0.0.1:80 -sv
.
.
> Accept: */*
> host:hhh
> User-Agent:hhh
.
.

Try this instead, and you will NOT see your custom values reflected in the response headers:

curl http://127.0.0.1:80 -sv
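
If you want to look only at the response headers, where Your-Host and Your-User-Agent are added, you can dump them with curl's -D flag:

curl -s -o /dev/null -D - -H "Host: hhh" http://127.0.0.1:80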

Redirecting Traffic by Header

This is the key part.
With the following configuration, we can redirect traffic to different backends based on the request header.

/etc/haproxy/haproxy.cfg

global
  # variables memory consumption, in bytes
  tune.vars.global-max-size 1048576
  tune.vars.reqres-max-size     512
  tune.vars.sess-max-size      2048
  tune.vars.txn-max-size        256
  log         127.0.0.1 syslog
  maxconn     1000
  user        haproxy
  group       haproxy
  daemon

defaults
  mode http

frontend f_myapp
  bind 127.0.0.1:80
  http-request set-var(txn.host) req.hdr(Host)
  http-request set-var(txn.ua) req.hdr(User-Agent)
  use_backend bk_app1 if { hdr(Host) -i app1.domain1.com app1.domain2.com region2 }
  default_backend b_myapp

backend bk_app1
  http-response set-header Your-Host %[var(txn.host)]
  http-response set-header Your-User-Agent %[var(txn.ua)]
  server s1 127.0.0.1:7000 check

backend b_myapp
  http-response set-header Your-Host %[var(txn.host)]
  http-response set-header Your-User-Agent %[var(txn.ua)]
  server s1 127.0.0.1:6000 check

Run another SimpleHTTPServer on port 7000 to act as the backend server for bk_app1:

root@haproxy:~# python -m SimpleHTTPServer 7000
Serving HTTP on 0.0.0.0 port 7000 ...

When hdr(Host) matches any of app1.domain1.com, app1.domain2.com, or region2, the traffic is redirected to bk_app1, which is the SimpleHTTPServer listening on port 7000.
When hdr(Host) does not match, traffic goes to the default_backend b_myapp.
For simplicity, we shut down the backend server on port 6000.
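
The same use_backend pattern works with other layer 7 criteria. For example, here is a sketch that routes by URL path instead of Host (bk_static is a hypothetical backend, not part of this setup):

frontend f_myapp
  bind 127.0.0.1:80
  # requests whose path begins with /static go to a hypothetical static backend
  use_backend bk_static if { path_beg /static }
  default_backend b_myapp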

You can test app1.domain1.com, app1.domain2.com, and region2 as the request Host header.

root@haproxy:~# curl -H "Host:"region2" " http://127.0.0.1:80 -sv
* Rebuilt URL to: http://127.0.0.1:80/
* Hostname was NOT found in DNS cache
*   Trying 127.0.0.1...
* Connected to 127.0.0.1 (127.0.0.1) port 80 (#0)
> GET / HTTP/1.1
> User-Agent: curl/7.35.0
> Accept: */*
> Host:region2
>
* HTTP 1.0, assume close after body
< HTTP/1.0 200 OK
.
.

Try region3, and it should return 503, since we did not launch the backend on port 6000, so no server is available.

root@haproxy:~# curl -H "Host:"region3" " http://127.0.0.1:80 -sv
* Rebuilt URL to: http://127.0.0.1:80/
* Hostname was NOT found in DNS cache
*   Trying 127.0.0.1...
* Connected to 127.0.0.1 (127.0.0.1) port 80 (#0)
> GET / HTTP/1.1
> User-Agent: curl/7.35.0
> Accept: */*
> Host:region3
>
* HTTP 1.0, assume close after body
< HTTP/1.0 503 Service Unavailable
< Cache-Control: no-cache
< Connection: close
< Content-Type: text/html
<
<html><body><h1>503 Service Unavailable</h1>
No server is available to handle this request.
</body></html>
* Closing connection 0

Some Warnings

When we restart HAProxy, we see the following warnings.
They do not affect this experiment, so we ignore them.

root@haproxy:~# service haproxy restart
 * Restarting haproxy haproxy                                                                                                                                                        [WARNING] 265/160929 (3164) : config : missing timeouts for frontend 'f_myapp'.
   | While not properly invalid, you will certainly encounter various problems
   | with such a configuration. To fix this, please ensure that all following
   | timeouts are set to a non-zero value: 'client', 'connect', 'server'.
[WARNING] 265/160929 (3164) : config : missing timeouts for backend 'bk_app1'.
   | While not properly invalid, you will certainly encounter various problems
   | with such a configuration. To fix this, please ensure that all following
   | timeouts are set to a non-zero value: 'client', 'connect', 'server'.
[WARNING] 265/160929 (3164) : config : missing timeouts for backend 'b_myapp'.
   | While not properly invalid, you will certainly encounter various problems
   | with such a configuration. To fix this, please ensure that all following
   | timeouts are set to a non-zero value: 'client', 'connect', 'server'.
[WARNING] 265/160929 (3165) : config : missing timeouts for frontend 'f_myapp'.
   | While not properly invalid, you will certainly encounter various problems
   | with such a configuration. To fix this, please ensure that all following
   | timeouts are set to a non-zero value: 'client', 'connect', 'server'.
[WARNING] 265/160929 (3165) : config : missing timeouts for backend 'bk_app1'.
   | While not properly invalid, you will certainly encounter various problems
   | with such a configuration. To fix this, please ensure that all following
   | timeouts are set to a non-zero value: 'client', 'connect', 'server'.
[WARNING] 265/160929 (3165) : config : missing timeouts for backend 'b_myapp'.
   | While not properly invalid, you will certainly encounter various problems
   | with such a configuration. To fix this, please ensure that all following
   | timeouts are set to a non-zero value: 'client', 'connect', 'server'.
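
If you want to silence these warnings, add the timeout settings from the first configuration back into the defaults section, for example:

defaults
  mode http
  timeout connect 10s
  timeout client 1m
  timeout server 1m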

Friday, September 9, 2016

Minimal Installation Steps for Ceph Jewel and Playing with It


Our goal is to reduce the installation procedure so that people can start playing with Ceph Jewel immediately.
Hence, we use a minimal installation with a single VM and one disk drive to achieve this goal.
I do not recommend using an OS directory as the OSD drive, since that is not a typical setup.
So attach one extra disk to this VM first.

Let's go through the following topics:

  • Environment
  • Installation
  • Result
  • Playing with Ceph Jewel

Environment

OS

  • Ubuntu 14.04
  • Kernel 3.16.0-30-generic
  • VMware VM

/etc/hosts Setting

root@cephserver:~/mycephfiles# cat /etc/hosts
127.0.0.1   localhost

# The following lines are desirable for IPv6 capable hosts
::1     localhost ip6-localhost ip6-loopback
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
192.168.2.141 cephserver

Data Drive Setting

root@cephserver:~/mycephfiles# ls /dev/sd*
sda   sda1  sda2  sda5  sdb

The sdb drive will be the only OSD in our Ceph test bed.

Ceph Jewel Installation

We install Ceph Jewel using the following commands.

# add the Ceph release key and the Jewel apt repository
wget -q -O- 'https://ceph.com/git/?p=ceph.git;a=blob_plain;f=keys/release.asc' | sudo apt-key add -
echo deb http://download.ceph.com/debian-jewel/ $(lsb_release -sc) main | sudo tee /etc/apt/sources.list.d/ceph.list
sudo apt update
sudo apt-get install ceph-deploy
# create a working directory for the cluster files
mkdir mycephfiles
cd mycephfiles
ceph-deploy new cephserver
# single-node settings: spread replicas across OSDs instead of hosts, keep one replica
echo 'osd crush chooseleaf type = 0' >> ceph.conf
echo 'osd pool default size = 1' >> ceph.conf
ceph-deploy install cephserver
ceph-deploy mon create-initial
# note: ceph-deploy will repartition and reformat sdb itself (as XFS, as the mount output below shows)
mkfs.ext4 /dev/sdb
ceph-deploy osd create cephserver:sdb
ceph-deploy osd activate cephserver:/dev/sdb1
ceph-deploy admin cephserver
chmod +r /etc/ceph/ceph.client.admin.keyring
ceph -s
ceph osd tree

Result

After installation, we check whether Ceph is healthy. First of all, check the status of the cluster.

root@cephserver:~# ceph -s
    cluster af0ac66e-5020-4218-926a-66d57895fafd
     health HEALTH_WARN
            too many PGs per OSD (448 > max 300)
     monmap e1: 1 mons at {cephserver=192.168.2.141:6789/0}
            election epoch 5, quorum 0 cephserver
     osdmap e24: 1 osds: 1 up, 1 in
            flags sortbitwise
      pgmap v230: 448 pgs, 4 pools, 3291 MB data, 947 objects
            3333 MB used, 1990 GB / 1994 GB avail
                 448 active+clean

Check the status of all the OSDs; each must be in the "up" state.

root@cephserver:~# ceph osd tree
ID WEIGHT  TYPE NAME           UP/DOWN REWEIGHT PRIMARY-AFFINITY
-1 1.94730 root default
-2 1.94730     host cephserver
 0 1.94730         osd.0            up  1.00000          1.00000

Since we activated /dev/sdb1 directly earlier, we can now see which directory it is mounted on.

root@cephserver:~# mount 
.
.
/dev/sdb1 on /var/lib/ceph/osd/ceph-0 type xfs (rw,noatime,inode64)

Playing with Ceph Jewel

A Big Difference from Other Ceph Versions

Ceph Jewel's default RBD image features work properly only with kernel 4.4 or newer. So if your kernel is older than 4.4, you will get the following error when you map an image.
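
You can check the kernel version first; ours is 3.16 (see the environment section above), which is why the mapping below fails:

root@cephserver:~# uname -r
3.16.0-30-generic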

root@cephserver:~# rbd map pool101/realimage1

rbd: sysfs write failed
RBD image feature set mismatch. You can disable features unsupported by the kernel with "rbd feature disable".
In some cases useful info is found in syslog - try "dmesg | tail" or so.
rbd: map failed: (6) No such device or address

The solution is to disable the unsupported features with the commands below.

rbd feature disable pool101/realimage1 deep-flatten fast-diff object-map exclusive-lock
rbd map pool101/realimage1
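
Alternatively, you can create new images with only the layering feature enabled from the start, so nothing needs to be disabled afterwards. A sketch (realimage2 is a hypothetical image name):

rbd create -p pool101 realimage2 --size 102400 --image-format 2 --image-feature layering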

Playing with Pools: Create, Map and Mount

Now we can add some data to Ceph.

root@cephserver:~# ceph osd pool create pool101 128
root@cephserver:~# rbd create -p pool101 realimage1 --size 102400 --image-format 2
root@cephserver:~# rbd feature disable pool101/realimage1 deep-flatten fast-diff object-map exclusive-lock
root@cephserver:~# rbd map pool101/realimage1
/dev/rbd0
root@cephserver:~# mkfs.ext4 /dev/rbd0
root@cephserver:~# mkdir -p /mnt/cephtest
root@cephserver:~# mount /dev/rbd0 /mnt/cephtest/
root@cephserver:~# touch /mnt/cephtest/aa && echo "v1" > /mnt/cephtest/aa

Playing with Snapshots

You can take a snapshot of the data; the snapshot is read-only.

root@cephserver:~# rbd snap create pool101/realimage1@snap1
root@cephserver:~# rbd snap protect pool101/realimage1@snap1
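
You can list the snapshots of an image to confirm it was created:

root@cephserver:~# rbd snap ls pool101/realimage1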

Playing with Clone and Mount

If you want to write to a snapshot, you have to clone it. Then you can read and write data in the clone.

root@cephserver:~# rbd clone pool101/realimage1@snap1 realimage1snap1clone1
root@cephserver:~# rbd feature disable realimage1snap1clone1 deep-flatten fast-diff object-map exclusive-lock
root@cephserver:~# rbd map realimage1snap1clone1
/dev/rbd1
root@cephserver:~# mkdir -p /mnt/cephclone
root@cephserver:~# mount /dev/rbd1 /mnt/cephclone
root@cephserver:~# cat /mnt/cephclone/aa
v1
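
A note on cleanup: a clone stays attached to its parent snapshot, and the snapshot cannot be removed while the clone depends on it. If needed, flatten the clone first, then unprotect and remove the snapshot:

rbd flatten realimage1snap1clone1
rbd snap unprotect pool101/realimage1@snap1
rbd snap rm pool101/realimage1@snap1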