Sunday, June 26, 2016

DevStack Multiple Node: Adding Compute Node by using Devstack Mitaka


Adding Compute Node by using Devstack Mitaka

Install Control Node

Initial

Because my laptop's RAM is too small, I have to kill some processes to keep the VM running smoothly.

pkill -9 ceilometer
pkill -9 nova-compute
service libvirt-bin stop

Adding Compute Node

Installation from local.conf

Here is the local.conf:

[[local|localrc]]
HOST_IP=172.16.235.170
SERVICE_HOST=172.16.235.128
MYSQL_HOST=172.16.235.128
RABBIT_HOST=172.16.235.128
GLANCE_HOSTPORT=172.16.235.128:9292
ADMIN_PASSWORD=password
MYSQL_PASSWORD=password
RABBIT_PASSWORD=password
SERVICE_PASSWORD=password

## Neutron options
PUBLIC_INTERFACE=eth0
ENABLED_SERVICES=n-cpu,rabbit,q-agt


LIBVIRT_TYPE=qemu
SCREEN_LOGDIR=/opt/stack/screen-logs
SYSLOG=True
LOGFILE=~/devstack/stack.sh.log

RECLONE=No
#RECLONE=yes
OFFLINE=True

After a while, you will see the result.

========================
DevStack Components Timed
========================

run_process - 6 secs



This is your host IP address: 172.16.235.170
This is your host IPv6 address: ::1

Result

To check whether the new compute node, mitakacompute, has joined the system:

stack@mitakagnocchi:~/devstack$ nova hypervisor-list
+----+---------------------+-------+---------+
| ID | Hypervisor hostname | State | Status  |
+----+---------------------+-------+---------+
| 1  | mitakagnocchi       | down  | enabled |
| 2  | mitakacompute       | up    | enabled |
+----+---------------------+-------+---------+
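If you want to script this check instead of eyeballing the table, here is a minimal sketch that parses the CLI's ASCII table in pure Python (a real script would call the OpenStack API instead):

```python
def parse_hypervisor_table(text):
    """Parse the ASCII table printed by `nova hypervisor-list` into dicts."""
    header, rows = None, []
    for line in text.splitlines():
        if not line.startswith("|"):
            continue  # skip the +----+ border lines
        cells = [c.strip() for c in line.strip("|").split("|")]
        if cells[0] == "ID":
            header = cells  # column names from the header row
            continue
        rows.append(dict(zip(header, cells)))
    return rows

table = """\
+----+---------------------+-------+---------+
| ID | Hypervisor hostname | State | Status  |
+----+---------------------+-------+---------+
| 1  | mitakagnocchi       | down  | enabled |
| 2  | mitakacompute       | up    | enabled |
+----+---------------------+-------+---------+"""

hosts = parse_hypervisor_table(table)
# The new node counts as joined when it is listed and its State is "up".
joined = any(h["Hypervisor hostname"] == "mitakacompute" and h["State"] == "up"
             for h in hosts)
```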

Now you can play with multi-node OpenStack. Launching an instance takes only a few simple commands:

http://gogosatellite.blogspot.tw/2016/06/minimum-steps-to-vm.html

Deep Dive

Check Networking

We check the compute node to make sure VXLAN networking is enabled as specified in local.conf.

    Bridge br-tun
        fail_mode: secure
        Port patch-int
            Interface patch-int
                type: patch
                options: {peer=patch-tun}
        Port br-tun
            Interface br-tun
                type: internal
        Port "vxlan-ac10eb80"
            Interface "vxlan-ac10eb80"
                type: vxlan
                options: {df_default="true", in_key=flow, local_ip="172.16.235.170", out_key=flow, remote_ip="172.16.235.128"}
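Note that the tunnel port name encodes the remote endpoint: the suffix after `vxlan-` is the remote IP address in hexadecimal. A quick sketch to confirm this (it matches the naming seen in these dumps; treat it as an observation about this deployment, not a guaranteed interface):

```python
import ipaddress

def vxlan_port_name(remote_ip):
    # The agent appears to name the tunnel port "vxlan-" plus the
    # remote IPv4 address rendered as 8 hex digits.
    return "vxlan-%08x" % int(ipaddress.IPv4Address(remote_ip))

print(vxlan_port_name("172.16.235.128"))  # the control node's tunnel endpoint
```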

On the control node, we can see the VXLAN tunnel enabled as well.

root@mitakagnocchi:~# ovs-vsctl show
4d075257-2e3b-4287-8bbf-d06f33910752
    Bridge br-ex
    .
    .
        Port "vxlan-ac10ebaa"
            Interface "vxlan-ac10ebaa"
                type: vxlan
                options: {df_default="true", in_key=flow, local_ip="172.16.235.128", out_key=flow, remote_ip="172.16.235.170"}

During VM testing we ran out of memory, so I stopped the compute role, libvirt, and ceilometer on the control node. The VM runs on the new node.


stack@mitakagnocchi:~/devstack$ nova list
+--------------------------------------+------+--------+------------+-------------+--------------------------------------------------------+
| ID                                   | Name | Status | Task State | Power State | Networks                                               |
+--------------------------------------+------+--------+------------+-------------+--------------------------------------------------------+
| 817d75be-764f-4ccb-bd83-438b4397c091 | 123  | ACTIVE | -          | Running     | private=10.0.0.7, fd03:64f3:a8ad:0:f816:3eff:feed:30f6 |
+--------------------------------------+------+--------+------------+-------------+--------------------------------------------------------+

Adding floating IP

You can read my blog post to learn how to operate a VM with the nova command line:

http://gogosatellite.blogspot.tw/2016/06/minimum-steps-to-vm.html

After launching a VM, we add the floating IP 172.16.235.6 to it.

root@mitakagnocchi:~# ping 172.16.235.6
PING 172.16.235.6 (172.16.235.6) 56(84) bytes of data.
64 bytes from 172.16.235.6: icmp_seq=1 ttl=64 time=5.17 ms
64 bytes from 172.16.235.6: icmp_seq=10 ttl=64 time=1000 ms
64 bytes from 172.16.235.6: icmp_seq=11 ttl=64 time=1.31 ms

To check the iptables rules in the qrouter namespace:

ip netns exec qrouter-6801884e-0b91-4b4d-bb93-e1fe9ada91bd bash
root@mitakagnocchi:~# iptables -t nat -S
-P PREROUTING ACCEPT
.
.
-A neutron-l3-agent-OUTPUT -d 172.16.235.6/32 -j DNAT --to-destination 10.0.0.5
-A neutron-l3-agent-POSTROUTING ! -i qg-49a0878f-4d ! -o qg-49a0878f-4d -m conntrack ! --ctstate DNAT -j ACCEPT
-A neutron-l3-agent-PREROUTING -d 169.254.169.254/32 -i qr-+ -p tcp -m tcp --dport 80 -j REDIRECT --to-ports 9697
-A neutron-l3-agent-PREROUTING -d 172.16.235.6/32 -j DNAT --to-destination 10.0.0.5
-A neutron-l3-agent-float-snat -s 10.0.0.5/32 -j SNAT --to-source 172.16.235.6
-A neutron-l3-agent-snat -j neutron-l3-agent-float-snat
-A neutron-l3-agent-snat -o qg-49a0878f-4d -j SNAT --to-source 172.16.235.3
.
.
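The DNAT and SNAT rules above implement the 1:1 floating-IP mapping. As a simplified sketch of how such rules could be generated (rule text condensed from the dump above; this is illustrative, not the l3-agent's actual code):

```python
def floating_ip_rules(float_ip, fixed_ip):
    """Sketch of the per-floating-IP NAT rules neutron-l3-agent installs."""
    return [
        # Inbound: rewrite the floating IP to the instance's fixed IP.
        "-A neutron-l3-agent-PREROUTING -d %s/32 -j DNAT --to-destination %s"
        % (float_ip, fixed_ip),
        "-A neutron-l3-agent-OUTPUT -d %s/32 -j DNAT --to-destination %s"
        % (float_ip, fixed_ip),
        # Outbound: rewrite the instance's fixed IP to the floating IP.
        "-A neutron-l3-agent-float-snat -s %s/32 -j SNAT --to-source %s"
        % (fixed_ip, float_ip),
    ]

for rule in floating_ip_rules("172.16.235.6", "10.0.0.5"):
    print(rule)
```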

How It Works Between the Compute Node and the Network Node (Control Node)

stack@mitakagnocchi:~/devstack$ sudo ovs-vsctl show
   Bridge br-tun
        .
        Port "vxlan-ac10ebaa"
            Interface "vxlan-ac10ebaa"
                type: vxlan
                options: {df_default="true", in_key=flow, local_ip="172.16.235.128", out_key=flow, remote_ip="172.16.235.170"}

Checking the network node, we find the VXLAN tunnel running there as well; traffic reaches the floating IP on the network node, then travels to the VM on the compute node via VXLAN (br-tun).

Vlan Tag

root@mitakacompute:~# ovs-vsctl show
dc2ae015-4882-4a0e-8ecb-7d4739f088cc
    Bridge br-int
        fail_mode: secure
        Port patch-tun
            Interface patch-tun
                type: patch
                options: {peer=patch-int}
        Port "qvo5956ce9e-3a"
            tag: 1
            Interface "qvo5956ce9e-3a"
        Port "qvoff132ae7-4b"
            tag: 2
            Interface "qvoff132ae7-4b"
        Port br-int
            Interface br-int
                type: internal

On the compute node: qvo5956ce9e-3a was launched by the demo user in the admin project, and qvoff132ae7-4b by the tester user in the tester project.

On the network node (control node), check the DHCP port of the tester tenant:

stack@mitakagnocchi:~$ sudo ovs-vsctl show
Bridge br-int
    fail_mode: secure
.
.
    Port "tapa657602c-b7"
        tag: 3
        Interface "tapa657602c-b7"
            type: internal

Here tapa657602c-b7 belongs to the tester subnet and carries VLAN tag 3, which differs from the tag on the compute node. In other words, the same tenant network gets different VLAN tags on different nodes; the tags are only locally significant, which is quite reasonable.

Flow Table on the Compute Node

ovs-ofctl dump-flows br-tun
.
.
cookie=0x9a2f3857b1f3b3d4, duration=12614.642s, table=4, n_packets=214, n_bytes=24699, idle_age=33, priority=1,tun_id=0x41b actions=mod_vlan_vid:1,resubmit(,10)
 cookie=0x9a2f3857b1f3b3d4, duration=2468.361s, table=4, n_packets=15, n_bytes=1838, idle_age=2397, priority=1,tun_id=0x87 actions=mod_vlan_vid:2,resubmit(,10)

Flow Table on the Control Node

ovs-ofctl dump-flows br-tun
.
.
 cookie=0x896af9c2b96ed0d4, duration=3054.077s, table=22, n_packets=14, n_bytes=1132, idle_age=2550, dl_vlan=3 actions=strip_vlan,set_tunnel:0x87,output:2
 cookie=0x896af9c2b96ed0d4, duration=14680.049s, table=22, n_packets=341, n_bytes=32522, idle_age=37, dl_vlan=1 actions=strip_vlan,set_tunnel:0x41b,output:2
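Putting the two flow tables together: each node translates between its node-local VLAN tag and the tenant's global tunnel ID (VNI). A toy model of that translation, with the tags and VNIs taken from the dumps above:

```python
# Per-node mapping between the node-local VLAN tag and the global VXLAN ID (VNI),
# read off the flow dumps: compute table 4 (decap) and control table 22 (encap).
compute_vlan_to_vni = {1: 0x41b, 2: 0x87}
control_vlan_to_vni = {1: 0x41b, 3: 0x87}

def encapsulate(local_vlan, vlan_to_vni):
    """strip_vlan + set_tunnel: replace the local tag with the global VNI."""
    return vlan_to_vni[local_vlan]

def decapsulate(vni, vlan_to_vni):
    """mod_vlan_vid: map the incoming VNI back to this node's local tag."""
    vni_to_vlan = {v: k for k, v in vlan_to_vni.items()}
    return vni_to_vlan[vni]

# Tester tenant traffic: control node tag 3 -> VNI 0x87 -> compute node tag 2.
vni = encapsulate(3, control_vlan_to_vni)
tag_on_compute = decapsulate(vni, compute_vlan_to_vni)
```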

How to Rejoin the Compute Node

Same as the all-in-one setup.

Shutdown/Reboot -> ./unstack -> ./stack.sh

And add the following lines to local.conf:

RECLONE=No
#RECLONE=yes
OFFLINE=True

Friday, June 24, 2016

Minimum Steps To VM


Minimum Steps To VM After DevStack

Sometimes you just want to launch a VM and dive deep into OpenStack, and you don't really need Horizon for that.
This post provides the minimum steps to launch a VM and log in to it via SSH.

DevStack Installation

Based on my previous blog post:

http://gogosatellite.blogspot.tw/2016/06/install-openstack-ceilometer-and.html

Minimum Steps

Launch An Instance

Create Environment

source openrc demo admin

Get Image ID

stack@mitakagnocchi:~/devstack$ glance image-list
+--------------------------------------+---------------------------------+
| ID                                   | Name                            |
+--------------------------------------+---------------------------------+
| 75d4ca92-7f36-49ab-ae70-6aeb3a2ec03f | cirros-0.3.3-x86_64-uec         |
| 65e2060c-518e-474b-b288-cc2b5211f749 | cirros-0.3.3-x86_64-uec-kernel  |
| c77fe924-5051-4939-8081-9bace82a0148 | cirros-0.3.3-x86_64-uec-ramdisk |

Get Network ID

stack@mitakagnocchi:~/devstack$ nova net-list
+--------------------------------------+---------+------+
| ID                                   | Label   | CIDR |
+--------------------------------------+---------+------+
| 4fd6554f-b3bb-4cea-b58b-45ca966652ab | private | None |
| a10c3555-bcd7-42ea-98c5-89412acdae36 | public  | None |
+--------------------------------------+---------+------+

Get Flavor ID

stack@mitakagnocchi:~/devstack$ nova flavor-list
+----+-----------+-----------+------+-----------+------+-------+-------------+-----------+
| ID | Name      | Memory_MB | Disk | Ephemeral | Swap | VCPUs | RXTX_Factor | Is_Public |
+----+-----------+-----------+------+-----------+------+-------+-------------+-----------+
| 1  | m1.tiny   | 512       | 1    | 0         |      | 1     | 1.0         | True      |
| 2  | m1.small  | 2048      | 20   | 0         |      | 1     | 1.0         | True      |
| 3  | m1.medium | 4096      | 40   | 0         |      | 2     | 1.0         | True      |
| 4  | m1.large  | 8192      | 80   | 0         |      | 4     | 1.0         | True      |
| 5  | m1.xlarge | 16384     | 160  | 0         |      | 8     | 1.0         | True      |
+----+-----------+-----------+------+-----------+------+-------+-------------+-----------+

Launch An Instance

stack@mitakagnocchi:~/devstack$ nova boot --image 75d4ca92-7f36-49ab-ae70-6aeb3a2ec03f --flavor 1  --nic net-id=4fd6554f-b3bb-4cea-b58b-45ca966652ab  lala1
+--------------------------------------+----------------------------------------------------------------+
| Property                             | Value                                                          |
+--------------------------------------+----------------------------------------------------------------+
| OS-DCF:diskConfig                    | MANUAL                                                         |
| OS-EXT-AZ:availability_zone          |                                                                |
| OS-EXT-SRV-ATTR:host                 | -                                                              |
| OS-EXT-SRV-ATTR:hostname             | lala1                                                          |
| OS-EXT-SRV-ATTR:hypervisor_hostname  | -                                                              |
| OS-EXT-SRV-ATTR:instance_name        | instance-00000002                                              |
| OS-EXT-SRV-ATTR:kernel_id            | 65e2060c-518e-474b-b288-cc2b5211f749                           |
| OS-EXT-SRV-ATTR:launch_index         | 0                                                              |
| OS-EXT-SRV-ATTR:ramdisk_id           | c77fe924-5051-4939-8081-9bace82a0148                           |
| OS-EXT-SRV-ATTR:reservation_id       | r-xg7gtpwc                                                     |
.
.

The generic form of the command is:

nova boot --image <img-id> --flavor <flavor-id> --nic net-id=<private net-id> <instance_name>

To Check VM Status

stack@mitakagnocchi:~/devstack$ nova list
+--------------------------------------+-------+--------+------------+-------------+--------------------------------------------------------+
| ID                                   | Name  | Status | Task State | Power State | Networks                                               |
+--------------------------------------+-------+--------+------------+-------------+--------------------------------------------------------+
| 1db0600f-c30b-49a0-9d46-b501002ad8b6 | lala1 | ACTIVE | -          | Running     | private=fd1d:8cc0:1aa4:0:f816:3eff:fec2:d782, 10.0.0.4 |
+--------------------------------------+-------+--------+------------+-------------+--------------------------------------------------------+
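If you script against this output, note that the Networks column packs the network name and all of its addresses into one field. A small sketch of pulling out the fixed IPv4 address (parsing the CLI text; a real script would query the API instead):

```python
def parse_networks(field):
    """Split nova's Networks column, e.g. 'private=addr1, addr2', into a dict."""
    nets, current = {}, None
    for part in field.split(", "):
        if "=" in part:
            # Start of a new network: "name=first_address".
            current, addr = part.split("=", 1)
            nets[current] = [addr]
        else:
            # Continuation: another address on the current network.
            nets[current].append(part)
    return nets

nets = parse_networks("private=fd1d:8cc0:1aa4:0:f816:3eff:fec2:d782, 10.0.0.4")
# The IPv4 fixed address is the one without colons.
fixed_v4 = [a for a in nets["private"] if ":" not in a][0]
```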

Adding Floating IP

Get a floating IP first.


stack@mitakagnocchi:~/devstack$ nova floating-ip-create
+--------------------------------------+--------------+-----------+----------+--------+
| Id                                   | IP           | Server Id | Fixed IP | Pool   |
+--------------------------------------+--------------+-----------+----------+--------+
| 956031a6-00d5-4f51-9046-3c90ec6df64e | 172.16.235.5 | -         | -        | public |
+--------------------------------------+--------------+-----------+----------+--------+

Associate the floating IP.

stack@mitakagnocchi:~/devstack$ nova add-floating-ip --fixed-address 10.0.0.4 lala1 172.16.235.5

To Check floating IP

stack@mitakagnocchi:~/devstack$ nova list
+--------------------------------------+-------+--------+------------+-------------+----------------------------------------------------------------------+
| ID                                   | Name  | Status | Task State | Power State | Networks                                                             |
+--------------------------------------+-------+--------+------------+-------------+----------------------------------------------------------------------+
| 1db0600f-c30b-49a0-9d46-b501002ad8b6 | lala1 | ACTIVE | -          | Running     | private=fd1d:8cc0:1aa4:0:f816:3eff:fec2:d782, 10.0.0.4, 172.16.235.5 |
+--------------------------------------+-------+--------+------------+-------------+----------------------------------------------------------------------+

Relax the Security Group

Check Rules of Default Security Group.

stack@mitakagnocchi:~/devstack$ nova secgroup-list-rules default
+-------------+-----------+---------+----------+--------------+
| IP Protocol | From Port | To Port | IP Range | Source Group |
+-------------+-----------+---------+----------+--------------+
|             |           |         |          | default      |
|             |           |         |          | default      |
+-------------+-----------+---------+----------+--------------+

Allow all ICMP.

stack@mitakagnocchi:~/devstack$ nova secgroup-add-rule default icmp -1 -1 0.0.0.0/0
+-------------+-----------+---------+-----------+--------------+
| IP Protocol | From Port | To Port | IP Range  | Source Group |
+-------------+-----------+---------+-----------+--------------+
| icmp        | -1        | -1      | 0.0.0.0/0 |              |
+-------------+-----------+---------+-----------+--------------+

Allow all TCP.

stack@mitakagnocchi:~/devstack$ nova secgroup-add-rule default tcp 1 65535 0.0.0.0/0
+-------------+-----------+---------+-----------+--------------+
| IP Protocol | From Port | To Port | IP Range  | Source Group |
+-------------+-----------+---------+-----------+--------------+
| tcp         | 1         | 65535   | 0.0.0.0/0 |              |
+-------------+-----------+---------+-----------+--------------+

Allow all UDP.

stack@mitakagnocchi:~/devstack$ nova secgroup-add-rule default udp 1 65535 0.0.0.0/0
+-------------+-----------+---------+-----------+--------------+
| IP Protocol | From Port | To Port | IP Range  | Source Group |
+-------------+-----------+---------+-----------+--------------+
| udp         | 1         | 65535   | 0.0.0.0/0 |              |
+-------------+-----------+---------+-----------+--------------+

Connect to VM via Floating IP

Now we can log in to the VM.

stack@mitakagnocchi:~/devstack$ rm ../.ssh/known_hosts
stack@mitakagnocchi:~/devstack$ ssh cirros@172.16.235.5
The authenticity of host '172.16.235.5 (172.16.235.5)' can't be established.
RSA key fingerprint is 7f:ac:8d:c0:48:12:fb:57:68:9d:ec:10:02:55:fb:ce.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added '172.16.235.5' (RSA) to the list of known hosts.
cirros@172.16.235.5's password:
$

Furthermore

To Internet

Modify the nameserver in the VM; you can then reach the Internet.

Delete a VM

nova force-delete <instance-id>

Install OpenStack Ceilometer and Gnocchi By Using DevStack Mitaka


Devstack Mitaka with Ceilometer and Gnocchi

Installed under VMware, the first attempt failed, stopping at the neutron part. I think this was because I connected via console and the session timed out, so I ran ./stack.sh from the VMware GUI instead, and it succeeded.

Based on my previous Mitaka setup:

http://gogosatellite.blogspot.tw/2016/04/using-devstack-to-install-openstack.html

Now the main goal is to add ceilometer and gnocchi to DevStack Mitaka. The local.conf is as follows:

[[local|localrc]]
DEST=/opt/stack

ADMIN_PASSWORD=password
MYSQL_PASSWORD=password
RABBIT_PASSWORD=password
SERVICE_PASSWORD=password
SERVICE_TOKEN=tokentoken
HOST_IP=172.16.235.128

ENABLED_SERVICES=key,rabbit,mysql,horizon
ENABLED_SERVICES+=,n-api,n-crt,n-cpu,n-net,n-cond,n-sch,n-novnc,n-cauth
ENABLED_SERVICES+=,g-api,g-reg

enable_plugin ceilometer https://git.openstack.org/openstack/ceilometer
enable_service ceilometer


enable_plugin gnocchi https://github.com/openstack/gnocchi master
enable_service gnocchi-api,gnocchi-metricd
enable_service gnocchi-grafana


# Enabling Neutron (network) Service
disable_service n-net
enable_service q-svc
enable_service q-agt
enable_service q-dhcp
enable_service q-l3
enable_service q-meta
enable_service q-metering
enable_service neutron

## Neutron options
FIXED_RANGE=10.0.0.0/24
FIXED_NETWORK_SIZE=256
NETWORK_GATEWAY=10.0.0.1
PRIVATE_SUBNET_NAME=privateA

PUBLIC_SUBNET_NAME=public-subnet
FLOATING_RANGE=172.16.235.0/24
PUBLIC_NETWORK_GATEWAY=172.16.235.2
##Q_FLOATING_ALLOCATION_POOL=start=192.168.27.102,end=192.168.27.110
PUBLIC_INTERFACE=eth0
Q_USE_PROVIDERNET_FOR_PUBLIC=True
OVS_PHYSICAL_BRIDGE=br-ex
PUBLIC_BRIDGE=br-ex
OVS_BRIDGE_MAPPINGS=public:br-ex

LIBVIRT_TYPE=qemu

IMAGE_URLS="http://download.cirros-cloud.net/0.3.3/cirros-0.3.3-x86_64-uec.tar.gz"

SCREEN_LOGDIR=/opt/stack/screen-logs
SYSLOG=True
LOGFILE=~/devstack/stack.sh.log


Q_USE_DEBUG_COMMAND=True

# RECLONE=No
RECLONE=yes
OFFLINE=False

Here is the result: we successfully deployed everything, including ceilometer and gnocchi. Compared with the previous release (Kilo), ceilometer needs much less configuration.

========================
DevStack Components Timed
========================

run_process - 84 secs
test_with_retry - 4 secs
apt-get-update - 70 secs
pip_install - 503 secs
restart_apache_server - 22 secs
wait_for_service - 15 secs
git_timed - 273 secs
apt-get - 538 secs



This is your host IP address: 172.16.235.128
This is your host IPv6 address: ::1
Horizon is now available at http://172.16.235.128/dashboard
Keystone is serving at http://172.16.235.128:5000/
The default users are: admin and demo
The password: password

Let's check the ceilometer daemons.

stack@mitakagnocchi:~/devstack$ ps aux|grep ceilo
stack    107554  1.1  1.1 204500 47908 pts/21   Sl+  12:23   0:02 /usr/bin/python /usr/local/bin/ceilometer-polling --polling-namespaces central --config-file /etc/ceilometer/ceilometer.conf
stack    108038  2.0  1.0 107848 41780 pts/22   S+   12:23   0:03 /usr/bin/python /usr/local/bin/ceilometer-agent-notification --config-file /etc/ceilometer/ceilometer.conf
stack    108444  7.1  1.4 1328268 57436 pts/22  Sl+  12:23   0:12 /usr/bin/python /usr/local/bin/ceilometer-agent-notification --config-file /etc/ceilometer/ceilometer.conf
stack    108445  7.1  1.5 1811836 61684 pts/22  Sl+  12:23   0:12 /usr/bin/python /usr/local/bin/ceilometer-agent-notification --config-file /etc/ceilometer/ceilometer.conf
root     109136  0.0  0.0  62100  3552 pts/23   S+   12:23   0:00 sudo tail -f /var/log/apache2/ceilometer.log
root     109143  0.0  0.0   4388   732 pts/23   S+   12:23   0:00 tail -f /var/log/apache2/ceilometer.log
root     109419  0.0  0.0  62100  3552 pts/24   S+   12:23   0:00 sudo tail -f /var/log/apache2/ceilometer_access.log
root     109423  0.0  0.0   4388   700 pts/24   S+   12:23   0:00 tail -f /var/log/apache2/ceilometer_access.log
stack    109972  2.1  1.0 117364 42836 pts/25   S+   12:23   0:03 /usr/bin/python /usr/local/bin/ceilometer-collector --config-file /etc/ceilometer/ceilometer.conf
stack    110356  0.7  1.5 1676168 63200 pts/25  Sl+  12:23   0:01 /usr/bin/python /usr/local/bin/ceilometer-collector --config-file /etc/ceilometer/ceilometer.conf
stack    110357  0.4  1.4 1298976 59712 pts/25  Sl+  12:23   0:00 /usr/bin/python /usr/local/bin/ceilometer-collector --config-file /etc/ceilometer/ceilometer.conf
root     110588  0.0  0.0  15060  1964 pts/26   S+   12:23   0:00 sg libvirtd /usr/local/bin/ceilometer-polling --polling-namespaces compute --config-file /etc/ceilometer/ceilometer.conf
stack    110590  1.1  1.4 282380 56812 pts/26   Sl+  12:23   0:01 /usr/bin/python /usr/local/bin/ceilometer-polling --polling-namespaces compute --config-file /etc/ceilometer/ceilometer.conf

Let's check the gnocchi daemons.

stack@mitakagnocchi:~/devstack$ ps aux|grep gno
rabbitmq  31793  0.5  1.3 1305776 54568 ?       Sl   11:53   0:10 /usr/lib/erlang/erts-5.10.4/bin/beam.smp -W w -K true -A30 -P 1048576 -- -root /usr/lib/erlang -progname erl -- -home /var/lib/rabbitmq -- -pa /usr/lib/rabbitmq/lib/rabbitmq_server-3.2.4/sbin/../ebin -noshell -noinput -s rabbit boot -sname rabbit@mitakagnocchi -boot start_sasl -kernel inet_default_connect_options [{nodelay,true}] -sasl errlog_type error -sasl sasl_error_logger false -rabbit error_logger {file,"/var/log/rabbitmq/rabbit@mitakagnocchi.log"} -rabbit sasl_error_logger {file,"/var/log/rabbitmq/rabbit@mitakagnocchi-sasl.log"} -rabbit enabled_plugins_file "/etc/rabbitmq/enabled_plugins" -rabbit plugins_dir "/usr/lib/rabbitmq/lib/rabbitmq_server-3.2.4/sbin/../plugins" -rabbit plugins_expand_dir "/var/lib/rabbitmq/mnesia/rabbit@mitakagnocchi-plugins-expand" -os_mon start_cpu_sup false -os_mon start_disksup false -os_mon start_memsup false -mnesia dir "/var/lib/rabbitmq/mnesia/rabbit@mitakagnocchi"
stack    111567  0.0  0.2 432280  8808 ?        Sl   12:23   0:00 (wsgi:gnocchi)    -k start
stack    111568  1.4  2.3 871876 92856 ?        Sl   12:23   0:02 (wsgi:gnocchi)    -k start
root     112189  0.0  0.0  62100  3648 pts/27   S+   12:24   0:00 sudo tail -f /var/log/apache2/gnocchi.log
root     112191  0.0  0.0   4388   716 pts/27   S+   12:24   0:00 tail -f /var/log/apache2/gnocchi.log
root     112475  0.0  0.0  62100  3632 pts/28   S+   12:24   0:00 sudo tail -f /var/log/apache2/gnocchi-access.log
root     112476  0.0  0.0   4388   676 pts/28   S+   12:24   0:00 tail -f /var/log/apache2/gnocchi-access.log
stack    112967  0.7  1.7 267628 72268 pts/29   S+   12:24   0:01 /usr/bin/python /usr/local/bin/gnocchi-metricd -d -v --config-file /etc/gnocchi/gnocchi.conf
stack    113342  1.2  2.0 424076 81092 pts/29   Sl+  12:24   0:02 /usr/bin/python /usr/local/bin/gnocchi-metricd -d -v --config-file /etc/gnocchi/gnocchi.conf
stack    113343  1.4  2.0 424072 80916 pts/29   Sl+  12:24   0:02 /usr/bin/python /usr/local/bin/gnocchi-metricd -d -v --config-file /etc/gnocchi/gnocchi.conf
stack    113345  1.1  1.9 569424 78988 pts/29   Sl+  12:24   0:01 /usr/bin/python /usr/local/bin/gnocchi-metricd -d -v --config-file /etc/gnocchi/gnocchi.conf
stack    113462  2.2  2.2 499660 90612 pts/30   Sl+  12:24   0:03 /usr/bin/python /usr/local/bin/gnocchi-statsd -d -v --config-file /etc/gnocchi/gnocchi.conf

gnocchi-api does not show up here because it runs inside Apache2. Let's check:

stack@mitakagnocchi:/etc/apache2$ grep -R gnocchi *
sites-available/gnocchi.conf:    WSGIDaemonProcess gnocchi lang='en_US.UTF-8' locale='en_US.UTF-8' user=stack display-name=%{GROUP} processes=2 threads=32
sites-available/gnocchi.conf:    WSGIProcessGroup gnocchi
sites-available/gnocchi.conf:    WSGIScriptAlias / /var/www/gnocchi/app.wsgi
sites-available/gnocchi.conf:    ErrorLog /var/log/apache2/gnocchi.log
sites-available/gnocchi.conf:    CustomLog /var/log/apache2/gnocchi-access.log combined
sites-enabled/gnocchi.conf:    WSGIDaemonProcess gnocchi lang='en_US.UTF-8' locale='en_US.UTF-8' user=stack display-name=%{GROUP} processes=2 threads=32
sites-enabled/gnocchi.conf:    WSGIProcessGroup gnocchi
sites-enabled/gnocchi.conf:    WSGIScriptAlias / /var/www/gnocchi/app.wsgi
sites-enabled/gnocchi.conf:    ErrorLog /var/log/apache2/gnocchi.log
sites-enabled/gnocchi.conf:    CustomLog /var/log/apache2/gnocchi-access.log combined

Ceilometer Configuration in DevStack: Mitaka vs. Kilo

This part was confusing at first; only after reading the code did I realize it had changed a lot, so I reconfigured local.conf by following the code.

Kilo

stack@mitakagnocchi:~/test/devstack$ grep -R ceilometer *
clean.sh:source $TOP_DIR/lib/ceilometer
doc/source/index.rst:* `lib/ceilometer <lib/ceilometer.html>`__
files/apache-ceilometer.template:    WSGIDaemonProcess ceilometer-api processes=2 threads=10 user=%USER% display-name=%{GROUP}
files/apache-ceilometer.template:    WSGIProcessGroup ceilometer-api
files/apache-ceilometer.template:    ErrorLog /var/log/%APACHE_NAME%/ceilometer.log
files/apache-ceilometer.template:    CustomLog /var/log/%APACHE_NAME%/ceilometer_access.log combined
functions-common:        elif [[ $service == ceilometer-* ]]; then
functions-common:            if [[ ! $file_to_parse =~ $package_dir/ceilometer ]]; then
functions-common:                file_to_parse="${file_to_parse} ${package_dir}/ceilometer"
functions-common:#   **ceilometer** returns true if any service enabled start with **ceilometer**
functions-common:        [[ ${service} == "ceilometer" && ${ENABLED_SERVICES} =~ "ceilometer-" ]] && enabled=0
lib/cinder:    if is_service_enabled ceilometer; then
lib/neutron-legacy:    if is_service_enabled ceilometer; then
lib/neutron-legacy:        _configure_neutron_ceilometer_notifications
lib/neutron-legacy:function _configure_neutron_ceilometer_notifications {
lib/ceilometer:# lib/ceilometer
lib/ceilometer:#   enable_service ceilometer-acompute ceilometer-acentral ceilometer-anotification ceilometer-collector ceilometer-api
lib/ceilometer:#   enable_service ceilometer-alarm-notifier ceilometer-alarm-evaluator
lib/ceilometer:#   enable_service ceilometer-aipmi
lib/ceilometer:# ceilometer-anotification service.

Mitaka

stack@mitakagnocchi:~/devstack$ grep -R ceilometer *
doc/source/plugin-registry.rst:|ceilometer                  |git://git.openstack.org/openstack/ceilometer                             |
doc/source/plugin-registry.rst:|ceilometer-powervm          |git://git.openstack.org/openstack/ceilometer-powervm                     |
doc/source/plugin-registry.rst:|collectd-ceilometer-plugin  |git://git.openstack.org/openstack/collectd-ceilometer-plugin             |
lib/cinder:    if is_service_enabled ceilometer; then
lib/neutron-legacy:    if is_service_enabled ceilometer; then
lib/neutron-legacy:        _configure_neutron_ceilometer_notifications
lib/neutron-legacy:function _configure_neutron_ceilometer_notifications {
lib/nova:    if is_service_enabled ceilometer; then
lib/tempest:    local tempest_services="key,glance,nova,neutron,cinder,swift,heat,ceilometer,horizon,sahara,ironic,trove"
lib/swift:    if is_service_enabled ceilometer; then
lib/swift:        iniset ${SWIFT_CONFIG_PROXY_SERVER} filter:ceilometer "set log_level" "WARN"
lib/swift:        iniset ${SWIFT_CONFIG_PROXY_SERVER} filter:ceilometer paste.filter_factory "ceilometermiddleware.swift:filter_factory"
lib/swift:        iniset ${SWIFT_CONFIG_PROXY_SERVER} filter:ceilometer control_exchange "swift"
lib/swift:        iniset ${SWIFT_CONFIG_PROXY_SERVER} filter:ceilometer url $(get_transport_url)
lib/swift:        iniset ${SWIFT_CONFIG_PROXY_SERVER} filter:ceilometer driver "messaging"
lib/swift:        iniset ${SWIFT_CONFIG_PROXY_SERVER} filter:ceilometer topic "notifications"
lib/swift:        SWIFT_EXTRAS_MIDDLEWARE_LAST="${SWIFT_EXTRAS_MIDDLEWARE_LAST} ceilometer

The ceilometer daemons also changed considerably in Mitaka.

stack@mitakagnocchi:/etc/apache2$ cat /etc/gnocchi/gnocchi.conf

[DEFAULT]
use_syslog = True
debug = True
.
.

[storage]
file_basepath = /opt/stack/data/gnocchi/
driver = file
.
.
.
[indexer]
url = mysql+pymysql://root:password@127.0.0.1/gnocchi?charset=utf8
stack@mitakagnocchi:/etc/apache2$

The regular update interval is set here to 600 seconds, so we need some patience to see the results.

stack@mitakagnocchi:/etc/apache2$ cat /etc/ceilometer/pipeline.yaml
---
sources:
    - name: meter_source
      interval: 600
      meters:
          - "*"
      sinks:
          - meter_sink
    - name: cpu_source
      interval: 600

Trouble

However, we ran into some trouble:

2016-06-03 13:21:33.078 113343 DEBUG gnocchi.storage [-] Processing new and to delete measures process_background_tasks /opt/stack/gnocchi/gnocchi/storage/__init__.py:183
2016-06-03 13:21:33.078 113343 DEBUG gnocchi.storage [-] Expunging deleted metrics process_background_tasks /opt/stack/gnocchi/gnocchi/storage/__init__.py:191

Nothing was being written to storage; the storage path is set in /etc/gnocchi/gnocchi.conf:

ls /opt/stack/data/gnocchi/measure/

Try

enable_plugin gnocchi https://git.openstack.org/openstack/gnocchi

But that did not work.

It must be some dispatcher setting in ceilometer.conf that connects ceilometer to gnocchi.

Solution

Thanks to this blog post:

https://blog.sileht.net/autoscaling-with-heat-ceilometer-and-gnocchi.html

Kill the process

kill -9 xxx (ceilometer-collector)

Add the following to /etc/ceilometer/ceilometer.conf:

[DEFAULT]
.
.
dispatcher = gnocchi

[dispatcher_gnocchi]
filter_project = gnocchi
filter_service_activity = True
archive_policy = low
url = http://172.16.235.128:8041
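If you prefer to script this edit instead of editing by hand, here is a minimal sketch using Python's configparser (illustrative only; `enable_gnocchi_dispatcher` is a hypothetical helper, and a real script would read and write the file under /etc/ceilometer):

```python
import configparser
import io

def enable_gnocchi_dispatcher(conf_text, gnocchi_url):
    """Hypothetical helper: apply the gnocchi dispatcher settings to a
    ceilometer.conf already loaded as a string."""
    cfg = configparser.ConfigParser()
    cfg.read_string(conf_text)
    # Route samples to gnocchi instead of the default database dispatcher.
    cfg["DEFAULT"]["dispatcher"] = "gnocchi"
    cfg["dispatcher_gnocchi"] = {
        "filter_project": "gnocchi",
        "filter_service_activity": "True",
        "archive_policy": "low",
        "url": gnocchi_url,
    }
    out = io.StringIO()
    cfg.write(out)
    return out.getvalue()

new_conf = enable_gnocchi_dispatcher("[DEFAULT]\n", "http://172.16.235.128:8041")
```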

and run

/usr/bin/python /usr/local/bin/ceilometer-collector --config-file /etc/ceilometer/ceilometer.conf

After a while, despite some failure messages, it works:

/stack/ceilometer/ceilometer/dispatcher/__init__.py:59
2016-06-03 15:36:08.944 121820 WARNING ceilometer.dispatcher [-] Failed to load any dispatchers for ceilometer.dispatcher.event


2016-06-03 15:39:51.337 121820 DEBUG oslo_messaging._drivers.amqpdriver [-] received message msg_id: None reply to None __call__ /usr/local/lib/python2.7/dist-packages/oslo_messaging/_drivers/amqpdriver.py:201
2016-06-03 15:39:51.363 121821 DEBUG oslo_messaging._drivers.amqpdriver [-] received message msg_id: None reply to None __call__ /usr/local/lib/python2.7/dist-packages/oslo_messaging/_drivers/amqpdriver.py:201
2016-06-03 15:39:51.405 121820 DEBUG oslo_messaging._drivers.amqpdriver [-] received message msg_id: None reply to None __call__ /usr/local/lib/python2.7/dist-packages/oslo_messaging/_drivers/amqpdriver.py:201
2016-06-03 15:39:51.433 121821 DEBUG oslo_messaging._drivers.amqpdriver [-] received message msg_id: None reply to None __call__ /usr/local/lib/python2.7/dist-packages/oslo_messaging/_drivers/amqpdriver.py:201
2016-06-03 15:39:51.459 121820 DEBUG oslo_messaging._dri

2016-06-03 15:39:52.217 121820 DEBUG ceilometer.dispatcher.gnocchi [-] 1 measures posted against 1 metrics through 1 resources batch_measures /opt/stack/ceilometer/ceilometer/dispatcher/gnocchi.py:418
2016-06-03 15:39:52.219 121820 DEBUG ceilometer.dispatcher.gnocchi [-] Resource cache hit for update 87b402ed-7be2-492d-8174-9679916df672 _if_not_cached /opt/stack/ceilometer/ceilometer/dispatcher/gnocchi.py:453
2016-06-03 15:39:52.221 121821 DEBUG ceilometer.dispatcher.gnocchi [-] 1 measures posted against 1 metrics through 1 resources batch_measures /opt/stack/ceilometer/ceilometer/dispatcher/gnocchi.py:418
2016-06-03 15:39:52.225 121821 DEBUG ceilometer.dispatcher.gnocchi [-] Resource cache hit for update 2664af34-1cc4-4fe9-94e3-b8b2b00677e7 _if_not_cached /opt/stack/ceilometer/ceilometer/dispatcher/gnocchi.py:453
2016-06-03 15:39:52.241 121821 DEBUG ceilometer.dispatcher.gnocchi [-] 1 measures posted against 1 metrics through 1 resources batch_measures /opt/stack/ceilometer/ceilometer/dispatcher/gnocchi.py:418
2016-06-03 15:39:52.243 121821 DEBUG ceilometer.dispatcher.gnocchi [-] Resource cache hit for update 17f048d0-c659-4ac7-889a-1c30405c5998 _if_not_cached /opt/stack/ceilometer/ceilometer/dispatcher/gnocchi.py:453
2016-06-03 15:39:52.261 121821 WARNING ceilometer.dispatcher.gnocchi [-] gnocchi project not found in keystone, ignoring the filter_service_activity option
2016-06-03 15:39:52.336 121820 WARNING ceilometer.dispatcher.gnocchi [-] gnocchi project not found in keystone, ignoring the filter_service_activity option
2016-06-03 15:39:52.392 121820 DEBUG ceilometer.dispatcher.gnocchi [-] 1 measures posted against 1 metrics through 1 resources batch_measures /opt/stack/ceilometer/ceilometer/dispatcher/gnocchi.py:418

It really works! Check the result:

stack@mitakagnocchi:~/devstack$ ls /opt/stack/data/gnocchi/|wc -l
31
stack@mitakagnocchi:~/devstack$ ls /opt/stack/data/gnocchi/|wc -l
60

Restart

There is no rejoin script in the DevStack Mitaka version.

However, we can still rejoin using the following procedure:

reboot/shutdown -> edit local.conf -> ./unstack.sh -> ./stack.sh

where local.conf is edited to:

# RECLONE=No
RECLONE=yes
OFFLINE=True

If it fails, do it again starting from the reboot.
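The local.conf edit can be scripted; this is a minimal sketch using a throwaway copy of the file (the path /tmp/local.conf.sample is only for demonstration):

```shell
# Demonstrate the RECLONE toggle on a throwaway copy of local.conf.
cat > /tmp/local.conf.sample <<'EOF'
RECLONE=No
OFFLINE=True
EOF

# Flip RECLONE=No to RECLONE=yes in place, as the rejoin procedure requires.
sed -i 's/^RECLONE=No/RECLONE=yes/' /tmp/local.conf.sample

grep '^RECLONE' /tmp/local.conf.sample
```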

The result:

========================
DevStack Components Timed
========================

run_process - 82 secs
test_with_retry - 8 secs
restart_apache_server - 22 secs
wait_for_service - 18 secs



This is your host IP address: 172.16.235.128
This is your host IPv6 address: ::1
Horizon is now available at http://172.16.235.128/dashboard
Keystone is serving at http://172.16.235.128:5000/
The default users are: admin and demo
The password: password

After several tests with the network disconnected, I found that the rejoin failed. However, if we disable the Grafana installation, the rejoin works fine even without network access.

In local.conf

#enable_service gnocchi-grafana

Modify security group

Via Horizon, we delete all rules of the default security group and add rules allowing all traffic, both Ingress and Egress, for All ICMP and All TCP.
The ingress rules let outside packets reach the VM, and the egress rules let the VM reach the gateway, so that you can log in to the VM and reach the public network from it.
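For reference, the same rules can be created from the CLI instead of Horizon; a sketch using the Mitaka-era neutron client (the group name default and the unrestricted 0.0.0.0/0 prefix are assumptions matching the Horizon steps above):

```shell
# Allow all ICMP and all TCP, both directions, on the default security group.
for dir in ingress egress; do
  neutron security-group-rule-create --direction $dir \
      --protocol icmp --remote-ip-prefix 0.0.0.0/0 default
  neutron security-group-rule-create --direction $dir \
      --protocol tcp --port-range-min 1 --port-range-max 65535 \
      --remote-ip-prefix 0.0.0.0/0 default
done
```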

Adding floating IP

We add the floating IP 172.16.235.5 to the VM via Horizon.

Enter the qrouter namespace to check that the floating IP 172.16.235.5 exists.

stack@mitakagnocchi:~/devstack$ sudo ip netns exec qrouter-6801884e-0b91-4b4d-bb93-e1fe9ada91bd bash
root@mitakagnocchi:~/devstack# ip a
.
.

10: qg-49a0878f-4d: <BROADCAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN group default
    link/ether fa:16:3e:92:32:c4 brd ff:ff:ff:ff:ff:ff
    inet 172.16.235.3/24 brd 172.16.235.255 scope global qg-49a0878f-4d
       valid_lft forever preferred_lft forever
    inet 172.16.235.5/32 brd 172.16.235.5 scope global qg-49a0878f-4d

Checking the NAT iptables rules, we can see that 172.16.235.5/32 is DNATed to destination 10.0.0.4.

root@mitakagnocchi:~/devstack# iptables -t nat -S
-P PREROUTING ACCEPT
-P INPUT ACCEPT
-P OUTPUT ACCEPT
-P POSTROUTING ACCEPT
-N neutron-l3-agent-OUTPUT
-N neutron-l3-agent-POSTROUTING
-N neutron-l3-agent-PREROUTING
-N neutron-l3-agent-float-snat
-N neutron-l3-agent-snat
-N neutron-postrouting-bottom
-A PREROUTING -j neutron-l3-agent-PREROUTING
-A OUTPUT -j neutron-l3-agent-OUTPUT
-A POSTROUTING -j neutron-l3-agent-POSTROUTING
-A POSTROUTING -j neutron-postrouting-bottom
-A neutron-l3-agent-OUTPUT -d 172.16.235.5/32 -j DNAT --to-destination 10.0.0.4
-A neutron-l3-agent-POSTROUTING ! -i qg-49a0878f-4d ! -o qg-49a0878f-4d -m conntrack ! --ctstate DNAT -j ACCEPT
-A neutron-l3-agent-PREROUTING -d 169.254.169.254/32 -i qr-+ -p tcp -m tcp --dport 80 -j REDIRECT --to-ports 9697
-A neutron-l3-agent-PREROUTING -d 172.16.235.5/32 -j DNAT --to-destination 10.0.0.4
-A neutron-l3-agent-float-snat -s 10.0.0.4/32 -j SNAT --to-source 172.16.235.5
-A neutron-l3-agent-snat -j neutron-l3-agent-float-snat
-A neutron-l3-agent-snat -o qg-49a0878f-4d -j SNAT --to-source 172.16.235.3
-A neutron-l3-agent-snat -m mark ! --mark 0x2/0xffff -m conntrack --ctstate DNAT -j SNAT --to-source 172.16.235.3
-A neutron-postrouting-bottom -m comment --comment "Perform source NAT on outgoing traffic." -j neutron-l3-agent-snat

Login to VM

Go to the VM (Ingress setting in the security group)

stack@mitakagnocchi:~/devstack$ ssh cirros@172.16.235.5
The authenticity of host '172.16.235.5 (172.16.235.5)' can't be established.
RSA key fingerprint is be:8a:07:03:55:a2:43:3e:5f:4f:b1:0f:d4:7d:cf:ad.
Are you sure you want to continue connecting (yes/no)?

Go to the public network (Egress setting in the security group)

PING 8.8.8.8 (8.8.8.8): 56 data bytes
64 bytes from 8.8.8.8: seq=0 ttl=127 time=125.356 ms
64 bytes from 8.8.8.8: seq=1 ttl=127 time=1076.524 ms
64 bytes from 8.8.8.8: seq=2 ttl=127 time=77.237 ms

Saturday, June 18, 2016

Playing Openvswitch And Namespace: Floating IP, Iperf testing, QOS, Security Group, and Firewall


How to Build Floating IP

This is based on the knowledge of L3 routing from a previous post:

http://gogosatellite.blogspot.tw/2016/06/playing-openvswitch-and-namespace-veth.html

We skip some operations, such as how to create the namespace and how to set up two NICs, one for the private network and one for the public network.

Enter the L3 routing namespace, qrouter-f8cfe243-5908-4003-a776-59ce891e5d80:

ip netns exec qrouter-f8cfe243-5908-4003-a776-59ce891e5d80 bash

Add the floating IP you want, 172.16.235.15:

ip addr add 172.16.235.15/24 dev qg-0155e3e5-11

where qg-0155e3e5-11 is bound to br-ex, which reaches the outside world, as shown in the output below.

Show the result for floating IP 172.16.235.15:

root@kiloceilo:~# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
7: qg-0155e3e5-11: <BROADCAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN group default
    link/ether fa:16:3e:be:57:b1 brd ff:ff:ff:ff:ff:ff
    inet 172.16.235.3/24 brd 172.16.235.255 scope global qg-0155e3e5-11
       valid_lft forever preferred_lft forever
    inet 172.16.235.13/32 brd 172.16.235.13 scope global qg-0155e3e5-11
       valid_lft forever preferred_lft forever
    inet 172.16.235.15/24 scope global secondary qg-0155e3e5-11
       valid_lft forever preferred_lft forever
    inet6 fe80::f816:3eff:febe:57b1/64 scope link
       valid_lft forever preferred_lft forever
8: qr-19bae3c4-0b: <BROADCAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN group default
    link/ether fa:16:3e:fe:cc:77 brd ff:ff:ff:ff:ff:ff
    inet 10.0.0.1/24 brd 10.0.0.255 scope global qr-19bae3c4-0b
       valid_lft forever preferred_lft forever
    inet6 fe80::f816:3eff:fefe:cc77/64 scope link
       valid_lft forever preferred_lft forever

where qr-19bae3c4-0b is the interface that connects to the private VM.

Route 172.16.235.15 to 10.0.0.146, which is the VM's private IP:

iptables -t nat -A PREROUTING -i qg-0155e3e5-11  -p tcp -d 172.16.235.15  -j DNAT --to-destination 10.0.0.146
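Note that this PREROUTING rule handles inbound traffic only. For the VM to originate connections that appear to come from the floating IP, a matching SNAT rule is also needed; a sketch mirroring what neutron-l3-agent would install (addresses taken from this example):

```shell
# Outbound: rewrite the VM's private source address to the floating IP.
iptables -t nat -A POSTROUTING -s 10.0.0.146/32 -o qg-0155e3e5-11 \
    -j SNAT --to-source 172.16.235.15
```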

Of course, we can now ping 172.16.235.15:

root@kiloceilo:~# ping 172.16.235.15
PING 172.16.235.15 (172.16.235.15) 56(84) bytes of data.
64 bytes from 172.16.235.15: icmp_seq=1 ttl=64 time=0.566 ms

Now you can connect to VM via the floating IP.

root@kiloceilo:~# ssh cirros@172.16.235.15
cirros@172.16.235.15's password:

If you are using OpenStack and try to build the above test in an OpenStack environment, watch out for the security group. Clean the security group first, then rebuild it to allow ICMP and TCP on all ports.

Security group

Prepare environment

The environment is based on my previous blog:

http://gogosatellite.blogspot.tw/2016/06/playing-openvswitch-and-namespace-veth.html

Create vlandropnet for testing

ip netns add vlandropnet
ovs-vsctl add-port vlanbr vlandrop tag=100 -- set interface vlandrop type=internal
ip link set vlandrop netns vlandropnet
ip netns exec vlandropnet ip address add 10.0.0.5/24 dev vlandrop
ip netns exec vlandropnet ip link set dev vlandrop up
ip netns exec vlandropnet ip link set dev lo up

Create vlanconnet for testing

ip netns add vlanconnet
ovs-vsctl add-port vlanbr vlancon tag=100 -- set interface vlancon type=internal
ip link set vlancon netns vlanconnet
ip netns exec vlanconnet ip address add 10.0.0.6/24 dev vlancon
ip netns exec vlanconnet ip link set dev vlancon up
ip netns exec vlanconnet ip link set dev lo up
root@ovsvxlan1:~# ip netns exec vlanconnet ping 10.0.0.5
PING 10.0.0.5 (10.0.0.5) 56(84) bytes of data.
64 bytes from 10.0.0.5: icmp_seq=1 ttl=64 time=0.924 ms
64 bytes from 10.0.0.5: icmp_seq=2 ttl=64 time=0.273 ms
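The two namespace setups above follow an identical pattern, so they can be factored into a helper; a sketch (requires root and the vlanbr bridge from the previous steps):

```shell
# add_ns_port NAMESPACE PORT CIDR: create a namespace and attach an
# OVS internal port on vlanbr (VLAN tag 100) with the given address.
add_ns_port() {
  ns=$1; port=$2; cidr=$3
  ip netns add "$ns"
  ovs-vsctl add-port vlanbr "$port" tag=100 -- set interface "$port" type=internal
  ip link set "$port" netns "$ns"
  ip netns exec "$ns" ip address add "$cidr" dev "$port"
  ip netns exec "$ns" ip link set dev "$port" up
  ip netns exec "$ns" ip link set dev lo up
}

add_ns_port vlandropnet vlandrop 10.0.0.5/24
add_ns_port vlanconnet  vlancon  10.0.0.6/24
```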

Iperf testing

In vlandropnet (the iperf server, 10.0.0.5):

root@ovsvxlan1:~# iperf -s
------------------------------------------------------------
Server listening on TCP port 5001
TCP window size: 85.3 KByte (default)
------------------------------------------------------------
[  4] local 10.0.0.5 port 5001 connected with 10.0.0.6 port 51749
[ ID] Interval       Transfer     Bandwidth
[  4]  0.0-10.0 sec  26.2 GBytes  22.4 Gbits/sec

In vlanconnet (the iperf client, 10.0.0.6):

root@ovsvxlan1:~# iperf -c 10.0.0.5
------------------------------------------------------------
Client connecting to 10.0.0.5, TCP port 5001
TCP window size: 85.0 KByte (default)
------------------------------------------------------------
[  3] local 10.0.0.6 port 51750 connected with 10.0.0.5 port 5001
[ ID] Interval       Transfer     Bandwidth
[  3]  0.0-10.0 sec  27.8 GBytes  23.9 Gbits/sec

Drop Method

We introduce two methods. I prefer Method 2, but it hits some bugs in OpenFlow with Open vSwitch 2.0.2, so we have to use Method 1.

Method 1: Working Drop by MAC/IP

Any flow to 10.0.0.5 will be dropped:

 ovs-ofctl add-flow vlanbr "priority=100,ip,nw_dst=10.0.0.5,actions=drop"

or by MAC

ovs-ofctl add-flow vlanbr "priority=100,dl_type=0x0800,nw_proto=1,dl_dst=3a:69:76:f3:76:29,actions=drop"

Result

root@ovsvxlan1:~# ip netns exec vlanconnet ping 10.0.0.5
PING 10.0.0.5 (10.0.0.5) 56(84) bytes of data.
^C

Check the flow table of vlanbr

root@ovsvxlan1:~# ovs-ofctl dump-flows vlanbr
NXST_FLOW reply (xid=0x4):
 cookie=0x0, duration=45.519s, table=0, n_packets=6, n_bytes=588, idle_age=6, priority=100,ip,nw_dst=10.0.0.5 actions=drop
 cookie=0x0, duration=4264.462s, table=0, n_packets=101, n_bytes=7810, idle_age=4, priority=0 actions=NORMAL

Method 2: Working Drop by Port

Matching on the port should be the best choice, since we are working on a switch.

Get the port number of the vlandrop device on the vlanbr bridge:

ovs-vsctl get Interface vlandrop ofport
6

Configure a flow on port 6 to drop ICMP:

ovs-ofctl add-flow vlanbr "in_port=6,icmp,actions=drop"

Check the vlanbr flow table:

root@ovsvxlan1:~# ovs-ofctl dump-flows vlanbr
NXST_FLOW reply (xid=0x4):
 cookie=0x0, duration=251.62s, table=0, n_packets=5, n_bytes=490, idle_age=225, icmp,in_port=6 actions=drop
 cookie=0x0, duration=7476.585s, table=0, n_packets=262, n_bytes=20092, idle_age=225, priority=0 actions=NORMAL

Result: Cannot ping 10.0.0.5.

root@ovsvxlan1:~# ip netns exec vlanconnet ping 10.0.0.5
PING 10.0.0.5 (10.0.0.5) 56(84) bytes of data.
^C

Clean up the flow table by deleting the flow. With --strict, specify the exact match fields; statistics such as idle_age in the dump output are not match fields and must not be included.

root@ovsvxlan1:~# ovs-ofctl --strict del-flows vlanbr "icmp,in_port=6"

After the flow is removed, the ping of course works again. For more testing, we can try dropping ARP instead:

root@ovsvxlan1:~# ovs-ofctl add-flow vlanbr "in_port=6,arp,actions=drop"

Example

One can drop all packets through in_port=3 (whether port 3 is the source or the destination side of the conversation) toward destination 10.0.0.5:

ovs-ofctl add-flow vlanbr 'in_port=3,tcp,nw_dst=10.0.0.5,actions=drop'

Note that in_port matches packets both FROM and TO the host behind that port, which confused me for a long while. The rule drops any packet that enters the switch on port 3 with destination 10.0.0.5, whether that packet is a request or a response. With port1=10.0.0.5, port2=10.0.0.6, and port3=10.0.0.7:

* 5->6 (yes)
* 5->7 (drop), since the response is dropped even though the request passes
* 6->5 (yes)
* 6->7 (yes)
* 7->5 (drop), since the request is dropped
* 7->6 (yes)
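The matrix above can be checked mechanically without OVS; this plain-shell sketch applies the rule "drop a packet that enters on port 3 with nw_dst=10.0.0.5" to both directions of each conversation:

```shell
# Port-to-IP mapping from the example: port1=10.0.0.5, port2=10.0.0.6, port3=10.0.0.7
port_of() {
  case $1 in
    10.0.0.5) echo 1 ;;
    10.0.0.6) echo 2 ;;
    10.0.0.7) echo 3 ;;
  esac
}

# A single packet is dropped when it enters on port 3 and its nw_dst is 10.0.0.5.
dropped() {  # $1=src ip  $2=dst ip
  [ "$(port_of "$1")" = 3 ] && [ "$2" = 10.0.0.5 ]
}

# A conversation fails if either the request or the response packet is dropped.
ok() {  # can $1 talk to $2?
  if dropped "$1" "$2" || dropped "$2" "$1"; then echo drop; else echo yes; fi
}

for pair in "10.0.0.5 10.0.0.6" "10.0.0.5 10.0.0.7" "10.0.0.6 10.0.0.5" \
            "10.0.0.6 10.0.0.7" "10.0.0.7 10.0.0.5" "10.0.0.7 10.0.0.6"; do
  set -- $pair
  echo "$1 -> $2: $(ok "$1" "$2")"
done
```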

ovs-ofctl add-flow vlanbr 'in_port=3,tcp,actions=drop'

With the command above, any request or response entering on in_port=3 will be dropped, so port 3 is cut off entirely.

Drop a specific TCP port

We can try dropping the iperf port, 5001.

The most important thing is that you must specify dl_dst, the destination MAC address, or the match will fail. However, if you only match on a protocol, you don't need dl_dst.

A good habit is to always include the destination MAC address.

ovs-ofctl add-flow vlanbr "dl_dst=3a:69:76:f3:76:29,tcp,tp_dst=5001,actions=drop"

or

ovs-ofctl add-flow vlanbr "dl_dst=3a:69:76:f3:76:29,dl_type=0x0800,nw_proto=6,tp_dst=5001,actions=drop"

where dl_type=0x0800,nw_proto=6 is the same as tcp; you can type man ovs-ofctl to check it. Ranges of ports are also supported by using a mask; again, see man ovs-ofctl.
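As an illustration of the masked form, this hypothetical rule matches a contiguous block of ports (0x1388 is 5000 in hex, and the mask 0xfff8 covers 5000-5007):

```shell
# Drop TCP to destination ports 5000-5007 in one flow via a bitmask.
ovs-ofctl add-flow vlanbr "tcp,dl_dst=3a:69:76:f3:76:29,tp_dst=0x1388/0xfff8,actions=drop"
```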

In vlanconnet:

root@ovsvxlan1:~# nc -v 10.0.0.5 5001
nc: connect to 10.0.0.5 port 5001 (tcp) failed: No route to host

It cannot connect to port 5001 of 10.0.0.5, which is vlandropnet.

Firewall

To drop by source MAC address:

ovs-ofctl add-flow vlanbr "dl_src=aa:8f:df:0c:74:27,dl_dst=3a:69:76:f3:76:29,tcp,tp_dst=5001,actions=drop"

Other MAC addresses can still access port 5001.

The result for the blocked source is dropped:

root@ovsvxlan1:~# nc -v 10.0.0.5 5001
^C

To drop by source IP (the tcp keyword is required as a prerequisite for matching tp_dst):

ovs-ofctl add-flow vlanbr "tcp,nw_src=10.0.0.6,dl_dst=3a:69:76:f3:76:29,tp_dst=5003,actions=drop"

Drop all connections within this subnet:

ovs-ofctl add-flow vlanbr priority=500,dl_type=0x800,nw_src=10.0.0.0/24,nw_dst=10.0.0.0/24,actions=drop

Some useful commands

root@ovsvxlan1:~# ovs-ofctl dump-flows vlanbr
NXST_FLOW reply (xid=0x4):
 cookie=0x0, duration=637.232s, table=0, n_packets=0, n_bytes=0, idle_age=637, priority=100,ip,nw_dst=10.0.0.1 actions=drop
 cookie=0x0, duration=3920.83s, table=0, n_packets=85, n_bytes=6690, idle_age=610, priority=0 actions=NORMAL

Delete a flow; with --strict, provide the unique match parameters:

ovs-ofctl --strict del-flows vlanbr "priority=100,ip,nw_dst=10.0.0.1"

root@ovsvxlan1:~# ovs-ofctl dump-flows vlanbr
NXST_FLOW reply (xid=0x4):
 cookie=0x0, duration=3954.638s, table=0, n_packets=85, n_bytes=6690, idle_age=644, priority=0 actions=NORMAL

Short notation

ip     Same as dl_type=0x0800.

icmp   Same as dl_type=0x0800,nw_proto=1.

tcp    Same as dl_type=0x0800,nw_proto=6.

udp    Same as dl_type=0x0800,nw_proto=17.

sctp   Same as dl_type=0x0800,nw_proto=132.

arp    Same as dl_type=0x0806.

rarp   Same as dl_type=0x8035.

Commands that did not work

QoS

These ingress policing settings (rate in kbps) did not work in my setup:

ovs-vsctl set Interface vlandrop ingress_policing_rate=100
ovs-vsctl set Interface vlandrop ingress_policing_burst=10

Install the latest version of Open vSwitch:

http://www.docoreos.com/?p=79

To run a server on an arbitrary port in a namespace for testing

You can change 80 to any port number you wish:

python -m SimpleHTTPServer 80
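On Python 3 systems, SimpleHTTPServer was renamed to http.server; a quick smoke test on an unprivileged port (8080 is an arbitrary choice, and curl is assumed to be available):

```shell
# Serve /tmp on port 8080 in the background, then fetch it once.
cd /tmp
python3 -m http.server 8080 >/dev/null 2>&1 &
SERVER_PID=$!
sleep 1

# Expect HTTP 200 from the directory listing.
CODE=$(curl -s -o /dev/null -w '%{http_code}' http://127.0.0.1:8080/)
echo "$CODE"

kill $SERVER_PID
```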