Docker Container Application Hosting
You can create your own container on IOS XR and host applications within it. The applications can be developed using any Linux distribution. Docker container application hosting is best suited for applications that depend on system libraries different from those provided by the IOS XR root file system.
With Docker container application hosting, you can manage the amount of resources (memory and CPU) that the hosted applications consume.
Docker Container Application Hosting Architecture
This section describes the docker container application hosting architecture.
The Docker client, run from the bash shell, interacts with the Docker containers (docker 1 and docker 2) by using Docker commands. The Docker client sends the commands to the Docker daemon, which then executes them. The Docker daemon uses the docker.sock Unix socket to communicate with the containers.
When the docker run command is executed, a container is created and started from the Docker image. Containers can run either in the global-vrf network namespace or in any other defined namespace (for example, VRF-blue).
Docker uses OverlayFS under the /var/lib/docker folder to manage these directories.
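As a minimal sketch of this workflow, a container can be launched from the bash shell with standard Docker commands; the image and container names here are illustrative, not defaults:

```shell
# From the IOS XR bash shell: pull an image and start a container from it.
# The client sends each command to the Docker daemon over docker.sock.
docker pull alpine
docker run -d --name demo_app alpine sleep 3600

# Confirm the container was created and started.
docker ps --filter name=demo_app
```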
To host an application in docker containers, see Hosting an Application in Docker Containers.
App Hosting Components on IOS XR
The following are the components of app hosting:
- Docker on IOS XR: The Docker daemon is included with the IOS XR software on the base Linux OS. This inclusion provides native support for running applications inside Docker containers on IOS XR. Docker is the preferred method for running TPAs on IOS XR.
- Appmgr: While the Docker daemon comes packaged with IOS XR, Docker applications can only be managed using appmgr. Appmgr allows users to install applications packaged as RPMs and then manage their lifecycle using the IOS XR CLI and programmable models.
- PacketIO: This is the router infrastructure that implements the packet path between TPAs and IOS XR running on the same router. It enables TPAs to leverage XR forwarding for sending and receiving traffic.
TPA Security
IOS XR is equipped with inherent safeguards that prevent third-party applications (TPAs) from interfering with its role as a network OS.
- Although IOS XR doesn't impose a limit on the number of TPAs that can run concurrently, it does constrain the resources allocated to the Docker daemon, based on the following parameters:
  - CPU: ¼ of the CPU cores available on the platform.
  - RAM: Maximum of 1 GB.
  - Disk space: Restricted by the partition size, which varies by platform; check it by executing "run df -h" and examining the size of the /misc/app_host or /var/lib/docker mounts.
- All traffic to and from the application is monitored by the XR control-plane protection mechanism, LPTS.
- Signed applications are supported on IOS XR. Users can sign their own applications by onboarding an Owner Certificate (OC) through Ownership Voucher-based workflows, as described in RFC 8366. Once an Owner Certificate is onboarded, users can sign applications with GPG keys based on the Owner Certificate; these signatures are then authenticated during application installation on the router.
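As a quick check of the disk constraint described above, you can inspect the partition backing container storage from the XR CLI (the mount point varies by platform, so both candidates are shown):

```shell
# From XR exec mode, drop into the underlying shell and report the
# size of the partitions that back container storage.
run df -h /misc/app_host /var/lib/docker
```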
The functions performed by appmgr fall into three areas: package management, lifecycle management, and monitoring and debugging.
Customize Docker Run Options Using Application Manager
Feature Name | Release Information | Description
---|---|---
Customize Docker Run Options Using Application Manager | Release 24.1.1 | You can now leverage Application Manager to overwrite default Docker runtime configurations, tailoring them to specific parameters like CPU usage, security settings, and health checks. You can thus optimize application performance, maintain fair resource allocation among multiple containers, and establish non-default network security settings to meet specific security requirements. Additionally, you can accurately monitor and reflect the health of individual applications. This feature modifies the docker-run-opts option command.
With this feature, runtime options for Docker containerized applications on IOS XR can be configured at launch by using the appmgr activate command. AppMgr, which manages Docker containerized applications, ensures that these runtime options override the default configurations, covering aspects like CPU, security, and health checks during container launch.
This feature introduces multiple runtime options that let you customize different parameters of Docker containers. You can configure these options through either the CLI or NETCONF; in both cases, the runtime options are added to docker-run-opts as needed.
The following Docker run options are introduced in IOS XR software release 24.1.1.
Docker Run Option | Description
---|---
--cpus | Number of CPUs
--cpuset-cpus | CPUs in which to allow execution (0-3, 0,1)
--cap-drop | Drop Linux capabilities
--user, -u | Username or UID to run as
--group-add | Additional groups to run as
--health-cmd | Command to run to check health
--health-interval | Time between running the check
--health-retries | Consecutive failures needed to report unhealthy
--health-start-period | Start period for the container to initialize before starting the health-retries countdown
--health-timeout | Maximum time to allow one check to run
--no-healthcheck | Disable any container-specified HEALTHCHECK
--add-host | Add a custom host-to-IP mapping (host:ip)
--dns | Set custom DNS servers
--dns-opt | Set DNS options
--dns-search | Set custom DNS search domains
--domainname | Container NIS domain name
--oom-score-adj | Tune the host's OOM preferences (-1000 to 1000)
--shm-size | Set the size of /dev/shm
--init | Run an init inside the container that forwards signals and reaps processes
--label, -l | Set metadata on a container
--label-file | Read in a line-delimited file of labels
--pids-limit | Tune the container pids limit (set -1 for unlimited)
--work-dir | Working directory inside the container
--ulimit | Ulimit options
--read-only | Mount the container's root filesystem as read only
--volumes-from | Mount volumes from the specified container(s)
--stop-signal | Signal to stop the container
--stop-timeout | Timeout (in seconds) to stop a container
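Several of these options can be combined in a single activation. The following is a hedged sketch following the appmgr activate syntax shown later in this section; the application name, image name, and health-check command are hypothetical:

```shell
# Limit the container to 1 CPU and 100 process IDs, and poll an assumed
# HTTP endpoint inside the container to report health.
Router# appmgr application web_app activate type docker source web_image docker-run-opts "--cpus 1 --pids-limit 100 --health-cmd 'wget -q -O /dev/null http://localhost:8080/' --health-interval 30s --health-retries 3"
```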
Prior to IOS XR software release 24.1.1, only the following Docker run options were supported.
Docker Run Option | Description
---|---
--publish | Publish a container's port(s) to the host
--entrypoint | Overwrite the default ENTRYPOINT of the image
--expose | Expose a port or a range of ports
--link | Add a link to another container
--env | Set environment variables
--env-file | Read in a file of environment variables
--network | Connect a container to a network
--hostname | Container host name
--interactive | Keep STDIN open even if not attached
--tty | Allocate a pseudo-TTY
--publish-all | Publish all exposed ports to random ports
--volume | Bind mount a volume
--mount | Attach a filesystem mount to the container
--restart | Restart policy to apply when a container exits
--cap-add | Add Linux capabilities
--log-driver | Logging driver for the container
--log-opt | Log driver options
--detach | Run the container in the background and print the container ID
--memory | Memory limit
--memory-reservation | Memory soft limit
--cpu-shares | CPU shares (relative weight)
--sysctl | Sysctl options
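For example, the previously supported options could be combined in one activation as follows; the application name, image, port mapping, and environment variable are illustrative:

```shell
# Start an interactive container that restarts on exit, maps host port
# 8080 to container port 80, and passes in an environment variable.
Router# appmgr application alpine_app activate type docker source alpine docker-run-opts "-it --restart always --publish 8080:80 --env MODE=test" docker-run-cmd "sh"
```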
Guidelines and Limitations
- For the options --mount and --volume, only the following values can be configured:
  - "/var/run/netns"
  - "/var/lib/docker"
  - "/misc/disk1"
  - "/disk0"
- The maximum allowed size for the --shm-size option is 64 MB.
- Prior to Release 24.4.1, all container logs were recorded with an info severity level (sev-6), regardless of the Docker runtime options used. From Release 24.4.1, if the Docker runtime option -it is used, the container logs are generated with an info severity level (sev-6); if the -it option is not included, the logs are produced with an error severity level (sev-3).
- From Release 24.4.1, you can use the rsyslog daemon to forward syslog messages to remote syslog servers. For more information, see Support for logging functionality on third-party applications.
Configuration
This section describes how to configure Docker runtime options.
The following example uses appmgr to configure the --pids-limit runtime option, which limits the number of process IDs in the container.
Router#appmgr application alpine_app activate type docker source alpine docker-run-opts "-it --pids-limit 90" docker-run-cmd "sh"
Router#
The following example uses NETCONF to configure the same --pids-limit runtime option.
<rpc xmlns="urn:ietf:params:xml:ns:netconf:base:1.0" message-id="101">
<edit-config>
<target>
<candidate/>
</target>
<config>
<appmgr xmlns="http://cisco.com/ns/yang/Cisco-IOS-XR-um-appmgr-cfg">
<applications>
<application>
<application-name>alpine_app</application-name>
<activate>
<type>docker</type>
<source-name>alpine</source-name>
<docker-run-cmd>/bin/sh</docker-run-cmd>
<docker-run-opts>-it --pids-limit=90</docker-run-opts>
</activate>
</application>
</applications>
</appmgr>
</config>
</edit-config>
</rpc>
Verification
This example shows how to verify the runtime option configuration.
Router# show running-config appmgr
Thu Mar 23 08:22:47.014 UTC
appmgr
application alpine_app
activate type docker source alpine docker-run-opts "-it --pids-limit 90" docker-run-cmd "sh"
!
!
You can also use the docker inspect <container-id> command to verify the runtime option configuration.
Router# docker inspect 25f3c30eb424
[
    {
        ...
        "PidsLimit": 90,
        ...
    }
]
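If only a single field is of interest, Docker's --format option can extract it directly with a Go template; the container ID below matches the example above:

```shell
# Print just the pids limit from the container's HostConfig.
docker inspect --format '{{.HostConfig.PidsLimit}}' 25f3c30eb424
```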
Prioritize Traffic for TPAs in Sandbox Environments
Feature Name | Release Information | Description
---|---|---
Prioritize Traffic for TPAs in Sandbox Environments | Release 24.1.1 | You can now optimize network performance, implement traffic segregation, and prevent packet drops due to congestion for Third Party Applications (TPAs) within the sandbox environment, improving reliability and efficiency. This is achieved through enhanced LPTS-based traffic prioritization for TPAs hosted within a sandbox container.
With this enhancement, you can categorize traffic flows from TPAs hosted in a sandbox based on priority levels, giving you more granular control over traffic handling. Prior to this release, traffic from TPAs hosted in a sandbox flowed through a single queue, leading to policer overload and subsequent packet drops.
Configuring Traffic Prioritization for TPA in a Sandbox
During the configuration of a TPA port, you can now set the priority for the port as High, Medium, or Low.
Configuring high priority traffic port
This example shows how to configure TPA traffic in port 2018 to high LPTS flow priority.
Router(config)# sandbox flow TPA-APPMGR-HIGH ports 2018
Configuring medium priority traffic port
This example shows how to configure TPA traffic in port 6666 to medium LPTS flow priority.
Router(config)# sandbox flow TPA-APPMGR-MEDIUM ports 6666
Configuring low priority traffic port
This example shows how to configure TPA traffic in port 60100 to low LPTS flow priority.
Router(config)# sandbox flow TPA-APPMGR-LOW ports 60100
Verification
This example shows how to verify TPA traffic prioritization.
Router# show lpts pifib hardware police location
TPA-APPMGR-HIGH 103 np NPU 1940 1000 0 0 0
TPA-APPMGR-HIGH 103 np NPU 1940 1000 1456 0 1
TPA-APPMGR-MED 104 np NPU 1940 1000 0 0 0
TPA-APPMGR-MED 104 np NPU 1940 1000 1455 0 1
TPA-APPMGR-LOW 105 np NPU 1940 1000 0 0 0
TPA-APPMGR-LOW 105 np NPU 1940 1000 1456 0 1
Docker Application Management using IPv6 Address
Feature Name | Release Information | Description
---|---|---
Docker Application Management using IPv6 Address | Release 24.4.1 | Introduced in this release on: Fixed Systems (8200, 8700); Modular Systems (8800 [LC ASIC: P100]) (select variants only*). *This feature is now supported on:
Docker Application Management using IPv6 Address | Release 7.11.1 | In this release, you gain the ability to manage Docker applications within containers using IPv6 addresses via the router's management interface. Leveraging IPv6 addresses provides expanded addressing options, enhances network scalability, and enables better segmentation and isolation of applications within the network. Prior to this update, only IPv4 addresses could be used to manage Docker applications.
The Application Manager in IOS XR software release 7.3.15 introduces support for an application networking feature that facilitates traffic forwarding across Virtual Routing and Forwarding (VRF) instances. This feature is implemented by deploying a relay agent in an independent Docker container.
The relay agent acts as a bridge, connecting two network namespaces within the host system and actively transferring traffic between them. Configurations can be made to establish forwarding between either a single pair of ports or multiple pairs, based on your network requirements.
One of the main uses of this feature is to allow the management of Linux-based Docker applications that are running in the default VRF through a management interface. This management interface can be located in a separate VRF. This feature ensures that Docker applications can be managed seamlessly across different VRFs.
In the IOS-XR software release 7.11.1, enhanced management capabilities are offered for docker applications. Now, you can leverage IPv6 addresses to manage applications within docker containers via the management interface of the Cisco 8000 router. This update provides improved accessibility and control over your Docker applications using IPv6 addressing. Prior to the IOS-XR software release 7.11.1, application management for docker containers could only be conducted using IPv4 addresses.
Restrictions and Limitations
In configuring your setup, consider the following restrictions and limitations:
- VRF Forwarding Limitation: Virtual Routing and Forwarding (VRF) is supported only for Docker apps with host networking.
- Relay Agent Availability and Management: The relay agent container is designed to be highly available and is managed by the Application Manager (appmgr).
- Relay Agent Creation: One relay agent container is created for each pair of forwarded ports.
- Port Limitation per Application: The total effective number of ports for each application is limited to a maximum of 10.
Configure VRF Forwarding
To manage a Docker application using the Application Manager through the Management Interface, follow these steps:
Procedure
Step 1: Configure the application manager. The application manager is configured to access the Docker application. Use the appmgr application application-name command to enable and specify the configuration parameters for VRF forwarding.
Step 2: Enable basic forwarding between two ports. To enable traffic forwarding between two ports in different VRFs, configure a VRF-forwarding entry in the application's docker-run-opts. Such a configuration can, for example, enable traffic on port 5000 at all addresses in vrf-mgmt to be forwarded to the destination veth device in vrf-default on port 8000. To enable VRF forwarding between multiple ports, repeat the configuration for each port pair.
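A sketch of such an activation, using the --vrf-forward option with the ports described above; the application and image names are hypothetical:

```shell
# Forward port 5000 in vrf-mgmt to port 8000 in vrf-default.
# Per the restrictions above, each forwarded pair spawns one relay
# agent container.
Router# appmgr application mgmt_app activate type docker source mgmt_image docker-run-opts "--vrf-forward vrf-mgmt:5000 vrf-default:8000"
```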
Verifying VRF Forwarding for Application Manager
Use the show appmgr application name application-name info detail command to verify VRF forwarding. A typical output looks like this:
RP/0/RP0/CPU0:ios#show appmgr application name swan info detail
Thu Oct 26 11:59:32.798 UTC
Application: swan
Type: Docker
Source: swanagent
Config State: Activated
Docker Information:
Container ID: f230a2396b85f6b3eeb01a8a4450a47e5bd8499fe5cfdb141c2d0fba905b63ec
Container name: swan
Labels: com.azure.dev.image.build.buildnumber=2.3.2-dev-ricabrah-partho-xr-dev.1+28,com.azure.dev.image.build.definitionname=swanagentXR,com.azure.dev.image.build.repository.uri=https://1Wan@dev.azure.com/1Wan/SWAN/_git/swanagentXR,com.azure.dev.image.system.teamfoundationcollectionuri=https://dev.azure.com/1Wan/,com.azure.dev.image.build.builduri=vstfs:///Build/Build/8518,com.azure.dev.image.build.repository.name=swanagentXR,com.azure.dev.image.build.sourcebranchname=partho-xr-dev,com.azure.dev.image.build.sourceversion=0ebd43521870844688660c131b0921ea7e2dcb27,com.azure.dev.image.system.teamproject=SWAN,image.base.ref.name=mcr.microsoft.com/mirror/docker/library/alpine:3.15
Image: swancr.azurecr.io/swanagentxr-iosxr:2.4.0-0ebd435
Command: "./agentxr"
Created at: 2023-10-26 11:58:45 +0000 UTC
Running for: 48 seconds ago
Status: Up 47 seconds
Size: 0B (virtual 29.3MB)
Ports:
Mounts: /var/lib/docker/appmgr/config/swanagent/hostname,/var/lib/docker/appmgr/config/swanagent,/var/lib/docker/ems/grpc.sock,/var/run/netns
Networks: host
LocalVolumes: 0
Vrf Relays:
Vrf Relay: vrf_relay.swan.6a98f0ed060bffa
Source VRF: vrf-management
Source Port: 11111
Destination VRF: vrf-default
Destination Port: 10000
IP Address Range: 172.16.0.0/12
Status: Up 45 seconds
Use the show running-config appmgr command to check the running configuration.
Router#show running-config appmgr
Thu Oct 26 12:04:06.063 UTC
appmgr
application swan
activate type docker source swanagent docker-run-opts "--vrf-forward vrf-management:11111 vrf-default:10000 -it --restart always --cap-add=SYS_ADMIN --net=host --log-opt max-size=20m --log-opt max-file=3 -e HOSTNAME=$HOSTNAME -v /var/run/netns:/var/run/netns -v {app_install_root}/config/swanagent:/root/config -v {app_install_root}/config/swanagent/hostname:/etc/hostname -v /var/lib/docker/ems/grpc.sock:/root/grpc.sock"
!
!