To host your application within your own container, use the following steps.
-
Navigate to the lxc-app-topo-bootstrap directory and ensure that the Vagrant instances are running. If they are not, launch them.
annseque@ANNSEQUE-WS02 MINGW64 ~/vagrant-xrdocs/lxc-app-topo-bootstrap (master)
$ vagrant status
Current machine states:
rtr aborted (virtualbox)
devbox aborted (virtualbox)
This environment represents multiple VMs. The VMs are all listed
above with their current state. For more information about a specific
VM, run `vagrant status NAME`.
annseque@ANNSEQUE-WS02 MINGW64 ~/vagrant-xrdocs/lxc-app-topo-bootstrap (master)
$ vagrant up
Bringing machine 'rtr' up with 'virtualbox' provider...
Bringing machine 'devbox' up with 'virtualbox' provider...
==> rtr: Clearing any previously set forwarded ports...
==> rtr: Clearing any previously set network interfaces...
==> rtr: Preparing network interfaces based on configuration...
rtr: Adapter 1: nat
rtr: Adapter 2: intnet
==> rtr: Forwarding ports...
rtr: 57722 (guest) => 2222 (host) (adapter 1)
rtr: 22 (guest) => 2223 (host) (adapter 1)
==> rtr: Running 'pre-boot' VM customizations...
==> rtr: Booting VM...
==> rtr: Waiting for machine to boot. This may take a few minutes...
rtr: SSH address: 127.0.0.1:2222
rtr: SSH username: vagrant
rtr: SSH auth method: private key
rtr: Warning: Remote connection disconnect. Retrying...
...
==> rtr: Machine booted and ready!
...
==> rtr: Machine already provisioned. Run `vagrant provision` or use the `--provision`
==> rtr: flag to force provisioning. Provisioners marked to run always will still run.
==> devbox: Checking if box 'ubuntu/trusty64' is up to date...
==> devbox: A newer version of the box 'ubuntu/trusty64' is available! You currently
==> devbox: have version '20160801.0.0'. The latest is version '20160826.0.1'. Run
==> devbox: `vagrant box update` to update.
==> devbox: Clearing any previously set forwarded ports...
==> devbox: Fixed port collision for 22 => 2222. Now on port 2200.
==> devbox: Clearing any previously set network interfaces...
==> devbox: Preparing network interfaces based on configuration...
devbox: Adapter 1: nat
devbox: Adapter 2: intnet
==> devbox: Forwarding ports...
devbox: 22 (guest) => 2200 (host) (adapter 1)
==> devbox: Booting VM...
==> devbox: Waiting for machine to boot. This may take a few minutes...
devbox: SSH address: 127.0.0.1:2200
devbox: SSH username: vagrant
devbox: SSH auth method: private key
devbox: Warning: Remote connection disconnect. Retrying...
devbox: Warning: Remote connection disconnect. Retrying...
==> devbox: Machine booted and ready!
...
devbox: Guest Additions Version: 4.3.36
devbox: VirtualBox Version: 5.0
==> devbox: Configuring and enabling network interfaces...
==> devbox: Mounting shared folders...
devbox: /vagrant => C:/Users/annseque/vagrant-xrdocs/lxc-app-topo-bootstrap
==> devbox: Machine already provisioned. Run `vagrant provision` or use the `--provision`
==> devbox: flag to force provisioning. Provisioners marked to run always will still run.
==> rtr: Machine 'rtr' has a post `vagrant up` message. This is a message
==> rtr: from the creator of the Vagrantfile, and not from Vagrant itself:
==> rtr:
==> rtr:
==> rtr: Welcome to the IOS XRv (64-bit) Virtualbox.
==> rtr: To connect to the XR Linux shell, use: 'vagrant ssh'.
==> rtr: To ssh to the XR Console, use: 'vagrant port' (vagrant version > 1.8)
==> rtr: to determine the port that maps to guestport 22,
==> rtr: then: 'ssh vagrant@localhost -p <forwarded port>'
...
annseque@ANNSEQUE-WS02 MINGW64 ~/vagrant-xrdocs/lxc-app-topo-bootstrap (master)
$ vagrant status
Current machine states:
rtr running (virtualbox)
devbox running (virtualbox)
This environment represents multiple VMs. The VMs are all listed
above with their current state. For more information about a specific
VM, run `vagrant status NAME`.
-
Access the devbox through SSH and install the LXC tools. To create and launch an LXC container, you need the LXC tools, which are installed as follows:
annseque@ANNSEQUE-WS02 MINGW64 ~/vagrant-xrdocs/lxc-app-topo-bootstrap (master)
$ vagrant ssh devbox
Welcome to Ubuntu 14.04.4 LTS (GNU/Linux 3.13.0-87-generic x86_64)
* Documentation: https://help.ubuntu.com/
System information as of Thu Sep 1 03:55:29 UTC 2016
System load: 0.99 Processes: 94
Usage of /: 3.9% of 39.34GB Users logged in: 0
Memory usage: 14% IP address for eth0: 10.0.2.15
Swap usage: 0% IP address for eth1: 11.1.1.20
Graph this data and manage this system at:
https://landscape.canonical.com/
Get cloud support with Ubuntu Advantage Cloud Guest:
http://www.ubuntu.com/business/services/cloud
25 packages can be updated.
12 updates are security updates.
New release '16.04.1 LTS' available.
Run 'do-release-upgrade' to upgrade to it.
------------------------------------------------------------------------------------------------
Last login: Wed Aug 31 04:02:20 2016 from 10.0.2.2
vagrant@vagrant-ubuntu-trusty-64:~$ sudo apt-get update
Ign http://archive.ubuntu.com trusty InRelease
Get:1 http://security.ubuntu.com trusty-security InRelease [65.9 kB]
...
Get:33 http://archive.ubuntu.com trusty-backports/universe Translation-en [36.8 kB]
Hit http://archive.ubuntu.com trusty Release
...
Hit http://archive.ubuntu.com trusty/universe Translation-en
Ign http://archive.ubuntu.com trusty/main Translation-en_US
Ign http://archive.ubuntu.com trusty/multiverse Translation-en_US
Ign http://archive.ubuntu.com trusty/restricted Translation-en_US
Ign http://archive.ubuntu.com trusty/universe Translation-en_US
Fetched 4,022 kB in 16s (246 kB/s)
Reading package lists... Done
----------------------------------------------------------------------------------------------
vagrant@vagrant-ubuntu-trusty-64:~$ sudo apt-get -y install lxc
Reading package lists... Done
Building dependency tree
Reading state information... Done
The following extra packages will be installed:
bridge-utils cgmanager cloud-image-utils debootstrap dnsmasq-base euca2ools
genisoimage libaio1 libboost-system1.54.0 libboost-thread1.54.0 liblxc1
libmnl0 libnetfilter-conntrack3 libnspr4 libnss3 libnss3-nssdb librados2
librbd1 libseccomp2 libxslt1.1 lxc-templates python-distro-info python-lxml
python-requestbuilder python-setuptools python3-lxc qemu-utils sharutils
uidmap
Suggested packages:
cgmanager-utils wodim cdrkit-doc btrfs-tools lvm2 lxctl qemu-user-static
python-lxml-dbg bsd-mailx mailx
The following NEW packages will be installed:
bridge-utils cgmanager cloud-image-utils debootstrap dnsmasq-base euca2ools
genisoimage libaio1 libboost-system1.54.0 libboost-thread1.54.0 liblxc1
libmnl0 libnetfilter-conntrack3 libnspr4 libnss3 libnss3-nssdb librados2
librbd1 libseccomp2 libxslt1.1 lxc lxc-templates python-distro-info
python-lxml python-requestbuilder python-setuptools python3-lxc qemu-utils
sharutils uidmap
0 upgraded, 30 newly installed, 0 to remove and 52 not upgraded.
Need to get 6,469 kB of archives.
After this operation, 25.5 MB of additional disk space will be used.
Get:1 http://archive.ubuntu.com/ubuntu/ trusty/main libaio1 amd64 0.3.109-4 [6,364 B]
...
Get:30 http://archive.ubuntu.com/ubuntu/ trusty-updates/main debootstrap all 1.0.59ubuntu0.5 [29.6 kB]
Fetched 6,469 kB in 22s (289 kB/s)
Selecting previously unselected package libaio1:amd64.
(Reading database ... 62989 files and directories currently installed.)
Preparing to unpack .../libaio1_0.3.109-4_amd64.deb ...
...
Setting up lxc (1.0.8-0ubuntu0.3) ...
lxc start/running
Setting up lxc dnsmasq configuration.
Processing triggers for ureadahead (0.100.0-16) ...
Setting up lxc-templates (1.0.8-0ubuntu0.3) ...
Setting up libnss3-nssdb (2:3.23-0ubuntu0.14.04.1) ...
Setting up libnss3:amd64 (2:3.23-0ubuntu0.14.04.1) ...
Setting up librados2 (0.80.11-0ubuntu1.14.04.1) ...
Setting up librbd1 (0.80.11-0ubuntu1.14.04.1) ...
Setting up qemu-utils (2.0.0+dfsg-2ubuntu1.27) ...
Setting up cloud-image-utils (0.27-0ubuntu9.2) ...
Processing triggers for libc-bin (2.19-0ubuntu6.9) ...
-
Verify that LXC was installed properly.
vagrant@vagrant-ubuntu-trusty-64:~$ sudo lxc-start --version
1.0.8
-
Create the LXC container with a standard Ubuntu base template and launch it in the devbox.
vagrant@vagrant-ubuntu-trusty-64:~$ sudo lxc-create -t ubuntu --name xr-lxc-app
Checking cache download in /var/cache/lxc/trusty/rootfs-amd64 ...
Installing packages in template: ssh,vim,language-pack-en
Downloading ubuntu trusty minimal ...
I: Retrieving Release
I: Retrieving Release.gpg
...
Generation complete.
Setting up perl-modules (5.18.2-2ubuntu1.1) ...
Setting up perl (5.18.2-2ubuntu1.1) ...
Processing triggers for libc-bin (2.19-0ubuntu6.9) ...
Processing triggers for initramfs-tools (0.103ubuntu4.4) ...
Download complete
Copy /var/cache/lxc/trusty/rootfs-amd64 to /var/lib/lxc/xr-lxc-app/rootfs ...
Copying rootfs to /var/lib/lxc/xr-lxc-app/rootfs ...
Generating locales...
en_US.UTF-8... up-to-date
Generation complete.
Creating SSH2 RSA key; this may take some time ...
Creating SSH2 DSA key; this may take some time ...
Creating SSH2 ECDSA key; this may take some time ...
Creating SSH2 ED25519 key; this may take some time ...
update-rc.d: warning: default stop runlevel arguments (0 1 6) do not match ssh Default-Stop values (none)
invoke-rc.d: policy-rc.d denied execution of start.
Current default time zone: 'Etc/UTC'
Local time is now: Thu Sep 1 04:46:22 UTC 2016.
Universal Time is now: Thu Sep 1 04:46:22 UTC 2016.
##
# The default user is 'ubuntu' with password 'ubuntu'!
# Use the 'sudo' command to run tasks as root in the container.
##
-
Verify that the LXC container has been created successfully.
vagrant@vagrant-ubuntu-trusty-64:~$ sudo lxc-ls --fancy
NAME STATE IPV4 IPV6 AUTOSTART
------------------------------------------
xr-lxc-app STOPPED - - NO
-
Start the LXC container. You will be prompted to log in to the LXC container. The login credentials are ubuntu/ubuntu.
vagrant@vagrant-ubuntu-trusty-64:~$ sudo lxc-start --name xr-lxc-app
<4>init: plymouth-upstart-bridge main process (5) terminated with status 1
...
xr-lxc-app login: ubuntu
Password:
Welcome to Ubuntu 14.04.5 LTS (GNU/Linux 3.13.0-87-generic x86_64)
* Documentation: https://help.ubuntu.com/
The programs included with the Ubuntu system are free software;
the exact distribution terms for each program are described in the
individual files in /usr/share/doc/*/copyright.
Ubuntu comes with ABSOLUTELY NO WARRANTY, to the extent permitted by
applicable law.
ubuntu@xr-lxc-app:~$
-
Install your application within the LXC container. For the sake of illustration, this example installs the iPerf application.
ubuntu@xr-lxc-app:~$ sudo apt-get -y install iperf
[sudo] password for ubuntu:
Reading package lists... Done
Building dependency tree
Reading state information... Done
The following NEW packages will be installed:
iperf
0 upgraded, 1 newly installed, 0 to remove and 0 not upgraded.
Need to get 56.3 kB of archives.
After this operation, 174 kB of additional disk space will be used.
Get:1 http://archive.ubuntu.com/ubuntu/ trusty/universe iperf amd64 2.0.5-3 [56.3 kB]
Fetched 56.3 kB in 16s (3,460 B/s)
Selecting previously unselected package iperf.
(Reading database ... 14648 files and directories currently installed.)
Preparing to unpack .../iperf_2.0.5-3_amd64.deb ...
Unpacking iperf (2.0.5-3) ...
Setting up iperf (2.0.5-3) ...
ubuntu@xr-lxc-app:~$
-
Change the SSH port inside the container and verify that it has been assigned correctly. When you deploy your container to IOS XR, it shares the network namespace with XR. Because IOS XR already uses ports 22 and 57722 for other purposes, you must pick another port number for your container.
ubuntu@xr-lxc-app:~$ sudo sed -i s/Port\ 22/Port\ 58822/ /etc/ssh/sshd_config
[sudo] password for ubuntu:
ubuntu@xr-lxc-app:~$ cat /etc/ssh/sshd_config | grep Port
Port 58822
ubuntu@xr-lxc-app:~$
-
Shut down the container.
ubuntu@xr-lxc-app:~$ sudo shutdown -h now
ubuntu@xr-lxc-app:~$
Broadcast message from ubuntu@xr-lxc-app
(/dev/lxc/console) at 5:17 ...
The system is going down for halt NOW!
<4>init: tty4 main process (369) killed by TERM signal
...
wait-for-state stop/waiting
* Asking all remaining processes to terminate...
...done.
* All processes ended within 1 seconds...
...done.
* Deactivating swap...
...done.
mount: cannot mount block device /dev/sda1 read-only
* Will now halt
-
Assume the root user role.
vagrant@vagrant-ubuntu-trusty-64:~$ sudo -s
root@vagrant-ubuntu-trusty-64:~# whoami
root
-
Navigate to the /var/lib/lxc/xr-lxc-app/ directory and package the rootfs into a tar ball.
root@vagrant-ubuntu-trusty-64:~# cd /var/lib/lxc/xr-lxc-app/
root@vagrant-ubuntu-trusty-64:/var/lib/lxc/xr-lxc-app# ls
config fstab rootfs
root@vagrant-ubuntu-trusty-64:/var/lib/lxc/xr-lxc-app# cd rootfs
root@vagrant-ubuntu-trusty-64:/var/lib/lxc/xr-lxc-app/rootfs# tar -czvf xr-lxc-app-rootfs.tar.gz *
tar: dev/log: socket ignored
root@vagrant-ubuntu-trusty-64:/var/lib/lxc/xr-lxc-app/rootfs#
-
Transfer the rootfs tar ball to the home directory (~/ or /home/vagrant) and verify that the transfer was successful.
root@vagrant-ubuntu-trusty-64:/var/lib/lxc/xr-lxc-app/rootfs# mv *.tar.gz /home/vagrant
root@vagrant-ubuntu-trusty-64:/var/lib/lxc/xr-lxc-app/rootfs# ls -l /home/vagrant
total 120516
-rw-r--r-- 1 root root 123404860 Sep 1 05:22 xr-lxc-app-rootfs.tar.gz
root@vagrant-ubuntu-trusty-64:/var/lib/lxc/xr-lxc-app/rootfs#
-
Create an LXC spec XML file that specifies the attributes required to launch the LXC container with the application. Navigate to the /home/vagrant directory on the devbox and use a text editor such as vi to create the XML file. Save the file as xr-lxc-app.xml. A sample LXC spec file that launches the application within the container is shown below.
root@vagrant-ubuntu-trusty-64:/var/lib/lxc/xr-lxc-app/rootfs# exit
exit
vagrant@vagrant-ubuntu-trusty-64:~$ pwd
/home/vagrant
vagrant@vagrant-ubuntu-trusty-64:~$ vi xr-lxc-app.xml
-------------------------------------------------------------------------------------
<domain type='lxc' xmlns:lxc='http://libvirt.org/schemas/domain/lxc/1.0' >
  <name>xr-lxc-app</name>
  <memory>327680</memory>
  <os>
    <type>exe</type>
    <init>/sbin/init</init>
  </os>
  <lxc:namespace>
    <sharenet type='netns' value='global-vrf'/>
  </lxc:namespace>
  <vcpu>1</vcpu>
  <clock offset='utc'/>
  <on_poweroff>destroy</on_poweroff>
  <on_reboot>restart</on_reboot>
  <on_crash>destroy</on_crash>
  <devices>
    <emulator>/usr/lib64/libvirt/libvirt_lxc</emulator>
    <filesystem type='mount'>
      <source dir='/misc/app_host/xr-lxc-app/'/>
      <target dir='/'/>
    </filesystem>
    <console type='pty'/>
  </devices>
</domain>
In IOS XR, the global-vrf network namespace contains all the XR GigE and management interfaces. The sharenet configuration in the XML file ensures that the container, when launched, has native access to all XR interfaces.
/misc/app_host/ on IOS XR is a special mount volume that is designed to provide nearly 3.9 GB of disk space. This mount volume can be used to host custom container rootfs and other large files without occupying disk space on XR. In this example, we untar the rootfs into the /misc/app_host/xr-lxc-app/ directory.
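If you want to confirm how much space is available under this mount before copying the rootfs across, you can check it from the XR Linux shell. This is only a quick sanity check; the exact size reported depends on the IOS XRv image:
xr-vm_node0_RP0_CPU0:~$ df -h /misc/app_host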
-
Verify that the rootfs tar ball and the LXC XML spec file are present in the home directory.
root@vagrant-ubuntu-trusty-64:~# pwd
/home/vagrant
root@vagrant-ubuntu-trusty-64:~# ls -l
total 119988
-rw-r--r-- 1 root root 122863332 Jun 16 19:41 xr-lxc-app-rootfs.tar.gz
-rw-r--r-- 1 root root 590 Jun 16 23:29 xr-lxc-app.xml
root@vagrant-ubuntu-trusty-64:~#
-
Transfer the rootfs tar ball and the XML spec file to XR. There are two ways of transferring the files: through the GigE interface (a little slower) or through the management interface. Use the method that works best for you; the GigE approach is sketched below.
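For example, you can copy both files from the devbox over the GigE interface by using scp. This is a minimal sketch: it assumes the rtr GigE address 11.1.1.10 and the XR Linux shell SSH port 57722 seen earlier in this topology, the default vagrant credentials on the XR Linux shell, and that the /misc/app_host/scratch/ directory referenced in the next steps is writable:
vagrant@vagrant-ubuntu-trusty-64:~$ scp -P 57722 /home/vagrant/xr-lxc-app-rootfs.tar.gz /home/vagrant/xr-lxc-app.xml vagrant@11.1.1.10:/misc/app_host/scratch/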
-
Create a directory (/misc/app_host/xr-lxc-app/) on XR (rtr) in which to untar the rootfs tar ball.
vagrant@vagrant-ubuntu-trusty-64:~$ exit
logout
Connection to 127.0.0.1 closed.
annseque@ANNSEQUE-WS02 MINGW64 ~/vagrant-xrdocs/lxc-app-topo-bootstrap (master)
$ vagrant ssh rtr
Last login: Fri Sep 2 05:49:01 2016 from 10.0.2.2
xr-vm_node0_RP0_CPU0:~$ sudo mkdir /misc/app_host/xr-lxc-app/
-
Navigate to the /misc/app_host/xr-lxc-app/ directory and untar the tar ball.
xr-vm_node0_RP0_CPU0:~$ cd /misc/app_host/xr-lxc-app/
xr-vm_node0_RP0_CPU0:/misc/app_host/xr-lxc-app$ sudo tar -zxf ../scratch/xr-lxc-app-rootfs.tar.gz
tar: dev/audio3: Cannot mknod: Operation not permitted
...
-
Use the XML spec file to launch the container, and then verify that it is running on XR.
xr-vm_node0_RP0_CPU0:/misc/app_host/xr-lxc-app$ virsh create /misc/app_host/scratch/xr-lxc-app.xml
Domain xr-lxc-app created from /misc/app_host/scratch/xr-lxc-app.xml
xr-vm_node0_RP0_CPU0:/misc/app_host/xr-lxc-app$ virsh list
Id Name State
----------------------------------------------------
2095 xr-lxc-app running
4932 sysadmin running
12086 default-sdr--1 running
-
Log in to the container. The default login credentials are ubuntu/ubuntu. There are two ways of logging in to the container. Use the method that works best for you:
-
Logging in to the container by using the virsh command:
xr-vm_node0_RP0_CPU0:/misc/app_host/xr-lxc-app$ virsh console xr-lxc-app
Connected to domain xr-lxc-app
Escape character is ^]
init: Unable to create device: /dev/kmsg
* Stopping Send an event to indicate plymouth is up [ OK ]
* Starting Mount filesystems on boot [ OK ]
* Starting Signal sysvinit that the rootfs is mounted [ OK ]
* Starting Fix-up sensitive /proc filesystem entries [ OK ]
xr-lxc-app login: * Starting OpenSSH server [ OK ]
Ubuntu 14.04.5 LTS xr-lxc-app tty1
xr-lxc-app login: ubuntu
Password:
Last login: Fri Sep 2 05:40:11 UTC 2016 on lxc/console
Welcome to Ubuntu 14.04.5 LTS (GNU/Linux 3.14.23-WR7.0.0.2_standard x86_64)
* Documentation: https://help.ubuntu.com/
ubuntu@xr-lxc-app:~$
-
Logging in to the container by using SSH: Use the SSH port number you configured (58822) and any of the XR interface IP addresses to log in.
xr-vm_node0_RP0_CPU0:/misc/app_host/xr-lxc-app$ ssh -p 58822 ubuntu@11.1.1.10
Warning: Permanently added '[11.1.1.10]:58822' (ECDSA) to the list of known hosts.
ubuntu@11.1.1.10's password:
Welcome to Ubuntu 14.04.5 LTS (GNU/Linux 3.14.23-WR7.0.0.2_standard x86_64)
* Documentation: https://help.ubuntu.com/
Last login: Fri Sep 2 07:42:37 2016
ubuntu@xr-lxc-app:~$
Note:
-
To exit the container, press the CTRL and ] keys simultaneously.
-
To access the container directly from your host machine, ensure that you forward the intended port (in this example, 58822) to a port of your choice on your laptop, in the Vagrantfile:
node.vm.network "forwarded_port", guest: 58822, host: 58822
You can then SSH to the LXC container by using the following command:
ssh -p 58822 ubuntu@localhost
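A minimal sketch of where this port-forwarding line fits in the lxc-app-topo-bootstrap Vagrantfile; the block variable name is an assumption, so match it to the existing rtr definition in your copy:
config.vm.define "rtr" do |node|
  node.vm.network "forwarded_port", guest: 58822, host: 58822
end
After editing the Vagrantfile, run vagrant reload rtr so that the new forwarding takes effect.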
-
Verify that the interfaces on XR are available inside the LXC container. The LXC container operates as your own Linux server on XR. Because the network namespace is shared between the LXC container and XR, all XR interfaces (GigE, management, and so on) are available for your applications to bind to.
ubuntu@xr-lxc-app:~$ ifconfig
Gi0_0_0_0 Link encap:Ethernet HWaddr 08:00:27:5a:29:77
inet addr:11.1.1.10 Mask:255.255.255.0
inet6 addr: fe80::a00:27ff:fe5a:2977/64 Scope:Link
UP RUNNING NOARP MULTICAST MTU:1514 Metric:1
RX packets:186070 errors:0 dropped:0 overruns:0 frame:0
TX packets:155519 errors:0 dropped:3 overruns:0 carrier:1
collisions:0 txqueuelen:1000
RX bytes:301968784 (301.9 MB) TX bytes:10762900 (10.7 MB)
Mg0_RP0_CPU0_0 Link encap:Ethernet HWaddr 08:00:27:13:ad:eb
inet addr:10.0.2.15 Mask:255.255.255.0
inet6 addr: fe80::a00:27ff:fe13:adeb/64 Scope:Link
UP RUNNING NOARP MULTICAST MTU:1514 Metric:1
RX packets:170562 errors:0 dropped:0 overruns:0 frame:0
TX packets:70309 errors:0 dropped:0 overruns:0 carrier:1
collisions:0 txqueuelen:1000
RX bytes:254586763 (254.5 MB) TX bytes:3886846 (3.8 MB)
fwd_ew Link encap:Ethernet HWaddr 00:00:00:00:00:0b
inet6 addr: fe80::200:ff:fe00:b/64 Scope:Link
UP RUNNING NOARP MULTICAST MTU:1500 Metric:1
RX packets:0 errors:0 dropped:0 overruns:0 frame:0
TX packets:2 errors:0 dropped:1 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:0 (0.0 B) TX bytes:140 (140.0 B)
fwdintf Link encap:Ethernet HWaddr 00:00:00:00:00:0a
inet6 addr: fe80::200:ff:fe00:a/64 Scope:Link
UP RUNNING NOARP MULTICAST MTU:1496 Metric:1
RX packets:0 errors:0 dropped:0 overruns:0 frame:0
TX packets:155549 errors:0 dropped:1 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:0 (0.0 B) TX bytes:10765764 (10.7 MB)
lo Link encap:Local Loopback
inet addr:127.0.0.1 Mask:255.0.0.0
inet6 addr: ::1/128 Scope:Host
UP LOOPBACK RUNNING MTU:65536 Metric:1
RX packets:64 errors:0 dropped:0 overruns:0 frame:0
TX packets:64 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:9400 (9.4 KB) TX bytes:9400 (9.4 KB)
-
Configure the container to communicate outside XR with other nodes in the network. By default, the IOS XRv Vagrant box is set up to talk to the internet using a default route through the management port. If you want the router to use its routing table to talk to other nodes in the network, you must configure the tpa-address. This becomes the source hint (src-hint) for all Linux application traffic.
In this example, we use Loopback 0 as the tpa-address to ensure that traffic originating from applications on XR uses an IP address that is reachable across your topology.
ubuntu@xr-lxc-app:~$ exit
logout
Connection to 11.1.1.10 closed.
xr-vm_node0_RP0_CPU0:/misc/app_host/xr-lxc-app$ exit
logout
Connection to 127.0.0.1 closed.
annseque@ANNSEQUE-WS02 MINGW64 ~/vagrant-xrdocs/lxc-app-topo-bootstrap (master)
$ vagrant port rtr | grep 22
22 (guest) => 2223 (host)
57722 (guest) => 2222 (host)
annseque@ANNSEQUE-WS02 MINGW64 ~/vagrant-xrdocs/lxc-app-topo-bootstrap (master)
$ ssh -p 2223 vagrant@localhost
vagrant@localhost's password:
RP/0/RP0/CPU0:ios# configure
Fri Sep 2 08:03:05.094 UTC
RP/0/RP0/CPU0:ios(config)# interface loopback 0
RP/0/RP0/CPU0:ios(config-if)# ip address 1.1.1.1/32
RP/0/RP0/CPU0:ios(config-if)# exit
RP/0/RP0/CPU0:ios(config)# tpa address-family ipv4 update-source loopback 0
RP/0/RP0/CPU0:ios(config)# commit
Fri Sep 2 08:03:39.602 UTC
RP/0/RP0/CPU0:ios(config)# exit
RP/0/RP0/CPU0:ios# bash
Fri Sep 2 08:03:58.232 UTC
[xr-vm_node0_RP0_CPU0:~]$ ip route
default dev fwdintf scope link src 1.1.1.1
10.0.2.0/24 dev Mg0_RP0_CPU0_0 proto kernel scope link src 10.0.2.15
In the routing table, you can see the configured Loopback 0 IP address (1.1.1.1) as the source for the default route.
-
Test your application within the launched container. We installed iPerf in our container. We will run the iPerf server within the container and the iPerf client on the devbox, and check whether they can communicate. In other words, the application hosted within the container on rtr should be able to talk to a client application on the devbox.
-
Run the iPerf server within the LXC container on XR and check that it is listening.
[xr-vm_node0_RP0_CPU0:~]$ssh -p 58822 ubuntu@11.1.1.10
Warning: Permanently added '[11.1.1.10]:58822' (ECDSA) to the list of known hosts.
ubuntu@11.1.1.10's password:
Welcome to Ubuntu 14.04.5 LTS (GNU/Linux 3.14.23-WR7.0.0.2_standard x86_64)
* Documentation: https://help.ubuntu.com/
Last login: Fri Sep 2 07:47:28 2016 from 11.1.1.10
ubuntu@xr-lxc-app:~$ iperf -s -u
------------------------------------------------------------
Server listening on UDP port 5001
Receiving 1470 byte datagrams
UDP buffer size: 64.0 MByte (default)
------------------------------------------------------------
-
Check that the XR Loopback interface is reachable from the devbox. (Open a new Git Bash window for this step.)
annseque@ANNSEQUE-WS02 MINGW64 ~
$ cd vagrant-xrdocs
annseque@ANNSEQUE-WS02 MINGW64 ~/vagrant-xrdocs (master)
$ cd lxc-app-topo-bootstrap/
annseque@ANNSEQUE-WS02 MINGW64 ~/vagrant-xrdocs/lxc-app-topo-bootstrap (master)
$ vagrant ssh devbox
Welcome to Ubuntu 14.04.4 LTS (GNU/Linux 3.13.0-87-generic x86_64)
* Documentation: https://help.ubuntu.com/
System information as of Fri Sep 2 05:51:19 UTC 2016
System load: 0.08 Users logged in: 0
Usage of /: 6.4% of 39.34GB IP address for eth0: 10.0.2.15
Memory usage: 28% IP address for eth1: 11.1.1.20
Swap usage: 0% IP address for lxcbr0: 10.0.3.1
Processes: 77
Graph this data and manage this system at:
https://landscape.canonical.com/
Get cloud support with Ubuntu Advantage Cloud Guest:
http://www.ubuntu.com/business/services/cloud
53 packages can be updated.
26 updates are security updates.
New release '16.04.1 LTS' available.
Run 'do-release-upgrade' to upgrade to it.
Last login: Fri Sep 2 05:51:21 2016 from 10.0.2.2
vagrant@vagrant-ubuntu-trusty-64:~$ sudo ip route add 1.1.1.1/32 via 11.1.1.10
vagrant@vagrant-ubuntu-trusty-64:~$ ping 1.1.1.1
PING 1.1.1.1 (1.1.1.1) 56(84) bytes of data.
64 bytes from 1.1.1.1: icmp_seq=1 ttl=255 time=1.87 ms
64 bytes from 1.1.1.1: icmp_seq=2 ttl=255 time=10.5 ms
64 bytes from 1.1.1.1: icmp_seq=3 ttl=255 time=4.13 ms
^C
--- 1.1.1.1 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2007ms
rtt min/avg/max/mdev = 1.876/5.510/10.520/3.661 ms
-
Install the iPerf client on the devbox.
vagrant@vagrant-ubuntu-trusty-64:~$ sudo apt-get install iperf
Reading package lists... Done
Building dependency tree
Reading state information... Done
The following NEW packages will be installed:
iperf
0 upgraded, 1 newly installed, 0 to remove and 52 not upgraded.
Need to get 56.3 kB of archives.
After this operation, 174 kB of additional disk space will be used.
Get:1 http://archive.ubuntu.com/ubuntu/ trusty/universe iperf amd64 2.0.5-3 [56.3 kB]
Fetched 56.3 kB in 10s (5,520 B/s)
Selecting previously unselected package iperf.
(Reading database ... 64313 files and directories currently installed.)
Preparing to unpack .../iperf_2.0.5-3_amd64.deb ...
Unpacking iperf (2.0.5-3) ...
Processing triggers for man-db (2.6.7.1-1ubuntu1) ...
Setting up iperf (2.0.5-3) ...
-
Launch the iPerf client on the devbox and verify that it is communicating with the iPerf server within the LXC container on XR.
vagrant@vagrant-ubuntu-trusty-64:~$ iperf -u -c 1.1.1.1
------------------------------------------------------------
Client connecting to 1.1.1.1, UDP port 5001
Sending 1470 byte datagrams
UDP buffer size: 208 KByte (default)
------------------------------------------------------------
[ 3] local 11.1.1.20 port 37800 connected with 1.1.1.1 port 5001
[ ID] Interval Transfer Bandwidth
[ 3] 0.0-10.0 sec 1.25 MBytes 1.05 Mbits/sec
[ 3] Sent 893 datagrams
[ 3] Server Report:
[ 3] 0.0-10.0 sec 1.25 MBytes 1.05 Mbits/sec 1.791 ms 0/ 893 (0%)
You have successfully hosted an application within a Linux container by using Vagrant.