Configuring External Control Plane

In Cisco routers, the route processor (RP) that runs the control plane is hosted in the router chassis along with the line cards; the control and data planes are therefore tightly coupled. The Cisco NCS 6000 supports running up to three secure domain routers (SDRs) on the same RP. With the external control plane (ECP) feature, two external servers are connected to the NCS 6000 chassis through the control Ethernet. This extends control plane reachability across the NCS 6000 chassis and both servers, and provides the additional system resources required to scale beyond the limit of three named SDRs.

Table 1. SDR Scale Information

Cisco IOS XR Release    Supported Scale                 Supported Dell Server Model
Release 6.4.1           Up to 4 named SDRs              Dell R630
Release 6.6.1           Up to 6 named SDRs with ISSU    Dell R630
Release 7.0.1           Up to 6 named SDRs with ISSU    Dell R640


Note

The hw-module location <card> bootmedia CLI command is not supported for CC cards. Use the integrated Dell Remote Access Controller (iDRAC) instead.


Prerequisites

This section provides information about the hardware and software requirements for configuring the external control plane feature.

Hardware Requirements for the NCS 6000 Router

  • Two Dell R630 or R640 PowerEdge Servers with enterprise version of integrated Dell Remote Access Controller (iDRAC)

  • NCS 6008 Line Card Chassis (2T or 1T). For 2T line cards, ensure that the following hardware components are present:

    • Two FAN Trays (NC6-FANTRAY)

    • Universal Fabric Cards (NC6-FC2-U)

  • Two Route Processor cards for NCS6008 (NC6-RP)

  • Power trays with six power entry modules (PEMs) in each power tray (NCS-AC-PWRTRAY)

Hardware Requirements for Dell R630 Server

  • Dell PERC H730 Integrated RAID Controller with 1GB cache for Dell R630 Server

Hardware Requirements for Dell R640 Server

  • Dell PERC H740P Mini (Embedded) with 8GB cache for Dell R640 Server

Pluggables Common to the Pair of Dell Servers

  • Four Intel SFP optics (FTLX8571D3BCVIT1) for each Dell server: two optics for management connectivity and two optics for control Ethernet connectivity.


    Note

    Dell servers do not support Cisco copper or SFP optics.


  • Four LC-to-LC OM3 multimode cables for control Ethernet connectivity: two cables from each RP to the two server NIC ports (used for expansion Ethernet).

  • Four Cisco SFP+ optics (SFP-10G-SR) for control Ethernet connectivity at the RPs.

  • Two Cisco 1000BASE-T copper SFP optics for XR management port connectivity at the RP Sysadmin VM port.

  • Four LC-to-LC OM3 multimode cables for management connectivity.

Software Requirements for Dell R630 Server

  • The latest version of Cisco IOS XR Release 6.4.1 with the required packages.

  • Dell iDRAC Enterprise edition version 2.41.40.40 or later on the Dell servers.

  • Dell server BIOS version 2.4.3 or later.

  • A supported browser with the latest Java or HTML5 plug-in installed, for accessing the Dell iDRAC virtual console.

Software Requirements for Dell R640 Server

  • The latest version of Cisco IOS XR Release 7.0.1 with the required packages.

  • Dell iDRAC Enterprise edition version 3.21.26.22 or later on the Dell servers.

  • Dell server BIOS version 1.6.12 or later.

  • NIC firmware version 18.8.9.

  • A supported browser with the latest Java or HTML5 plug-in installed, for accessing the Dell iDRAC virtual console.

Pre-configuration tasks

You should complete the following pre-configuration tasks before configuring the ECP feature:

Setting up the Connection Between the Router and the Servers

Before configuring ECP, you must establish control Ethernet connectivity in mesh mode between the control Ethernet ports of the RPs in the NCS 6000 and the NIC ports of the Dell servers. This meshing provides control Ethernet failover between the NCS 6000 router and the Dell servers and keeps the minimal loop avoidance protocol (MLAP) available; MLAP prevents control Ethernet loops. Use the LC-to-LC OM3 multimode cables to establish the connectivity.

The NCS 6000 router supports:

  • Dell R630 in Cisco IOS XR Release 6.4.x

  • Dell R630 in Cisco IOS XR Release 6.6.x

  • Dell R640 in Cisco IOS XR Release 7.0.x


    Note

    Do not use a combination of the Dell R630 and the R640 servers. Both servers must be of the same model (either R630 or R640).


The control Ethernet connections between the NCS 6000 line card chassis and the Dell servers must be established as shown in the following figures:

Figure 1. Control Ethernet and Management Connections for Dell R630
Figure 2. Control Ethernet and Management Connections for Dell R640

Note

On the Dell R640, the ports connecting to the XR VM and the System Admin VM are in different locations than on the Dell R630.


You can use one of the following options to connect to the console port on the Dell server:

  • RJ-45 to DB-9 Female cable

  • RJ-45-to-DB-9 Adapter

Once the Dell servers are powered on, ensure that the power supply LEDs at the cable end of both AC power supplies show solid green, which indicates that the power supplies are working and in a good state.

Configuring Dell iDRAC

This section provides information about how to configure the Integrated Dell Remote Access Controller (iDRAC) for Dell servers. The Dell iDRAC allows system administrators to manage the Dell servers remotely.

  1. Configure an IP address for iDRAC on the Dell server. (A CLI alternative using the racadm utility is sketched after this procedure.)

  2. Access the iDRAC web interface by specifying the IP address (https://iDRAC-IP-address) in the browser address bar.

  3. Log in using the default credentials and then create your own login credentials when prompted.

    • Default user name : root

    • Default password : calvin


    Note

    The root user should change the default iDRAC password to ensure secure login to iDRAC.
  4. Note down the service tag information for the Dell server from the system summary page. The service tag is required to update the chassis serial number during rack configuration.

  5. Ensure that the iDRAC and BIOS firmware versions on both servers are the same by selecting System Inventory > Firmware Inventory. Update the firmware versions if required.


    Note

    For more information about firmware update, see http://www.support.dell.com.
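
If you prefer the command line, the Dell racadm utility that ships with iDRAC can cover steps 1, 4, and 5; this is a minimal sketch, assuming racadm is run locally on the server and using example addresses:

    racadm setniccfg -s 192.0.2.10 255.255.255.0 192.0.2.1
    racadm getsvctag
    racadm getversion

The first command sets a static iDRAC IP address, netmask, and gateway (example values), the second prints the service tag needed for rack configuration, and the third lists the iDRAC and BIOS firmware versions for comparison across both servers.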

Setting Up Dell Servers for Enabling ECP

This section provides information about setting up the Dell servers for enabling ECP. The configuration tasks include:

  • Configuring the Dell LifeCycle Controller.

  • Using the Dell LifeCycle Controller to configure the following:

    • Serial communication settings in BIOS.

    • Single root I/O virtualization (SR-IOV) mode for NIC ports.

    • Hard disk drive configuration.

    • NIC configuration.

Perform the following steps to set up the Dell servers for enabling ECP.

Configuring Dell LifeCycle Controller

Perform the following steps to configure Dell LifeCycle Controller.

  1. Launch the iDRAC web interface of the Dell server and log in to iDRAC.

  2. In the System Summary, locate Virtual Console Preview and click Launch to launch the virtual console.

  3. In the iDRAC Virtual Console, select Next Boot > LifeCycle Controller and then click OK.

  4. Select Power > Power Cycle System (cold boot) and then click Yes in the confirmation window to complete the power cycle.

  5. On the LifeCycle Controller page, select the required Language and Keyboard type and then click Next.

  6. On the Product Overview page, click Next.

  7. Enter the network settings on the LifeCycle Controller Network Settings page and then click Next.

  8. Once the settings are applied, click Finish. The LifeCycle Controller page is displayed.

Configuring Serial Communication and NIC Port Virtualization Settings Using the Dell LifeCycle Controller

Once you configure the Dell LifeCycle Controller, perform the following steps to set the serial communication and NIC port virtualization settings. You need to set the virtualization mode to single root I/O virtualization (SR-IOV) on both NIC ports.

  1. Select System Setup and then click Advanced Hardware Configuration.

  2. In the System BIOS page, click Serial Communication and ensure that the following settings are correct.

    • Serial Communication: Auto.

    • Serial Port Address: Serial Device1=COM2, Serial Device2=COM1

    • External Serial Connector: Serial Device1

    • Failsafe Baud Rate: 115200

    • Remote Terminal Type: VT100/VT220

    • Redirection After Boot: Enabled

  3. In the System BIOS page, select Integrated Devices and ensure that the SR-IOV Global Enable setting is Enabled.

  4. Click Back to return to the System Setup page.

  5. On the Main Configuration page, click Finish.

  6. Click Yes to confirm the changes. A success window appears, confirming that the settings are applied.

  7. Click OK to return to the Device Settings page.

Converting Physical Disks to RAID Capable Mode

Once you complete the NIC port virtualization settings, you need to prepare the disk storage for the NCS 6008 software image installation. First, convert the physical disk drives to RAID capable mode; then create the virtual hard disks. Perform the following steps to convert the physical disk drives to RAID capable mode.


Note

To configure RAID, a minimum of four physical disks is required in the system; these can be grouped into two virtual disks.

  1. On the Device Settings page, select:

    • Integrated RAID Controller Dell PERC <PERC H730 Mini> Configuration Utility for Dell R630 server.

    • Integrated RAID Controller Dell PERC <PERC H740 Mini> Configuration Utility for Dell R640 server.

  2. For Dell R630 server:

    • Select Advanced Controller Management in the Integrated RAID Controller Dell PERC <PERC H730 Mini> Configuration Utility page

    For Dell R640 server:

    • Select Advanced Controller Management in the Integrated RAID Controller Dell PERC <PERC H740 Mini> Configuration Utility page

  3. On the Advanced Controller Management page, ensure that the controller mode is RAID.


    Note

    If the controller mode is host bus adapter (HBA), change it to RAID by selecting Switch to RAID mode and then rebooting the server.

  4. Return to the Integrated RAID Controller Dell PERC Configuration Utility page and then select Configuration Management.

  5. Select Convert to RAID Capable to convert the drives to RAID capable mode. The eligible non-RAID disks are listed.

  6. From the list of physical disks, select all the physical disks and then click OK.

  7. Select Confirm and then click Yes to confirm the changes.

  8. Click Back to return to Configuration Management.

Creating Virtual Disks

Perform the steps in this procedure to create virtual disks from the RAID capable disks. In this task, two virtual disks are created. The first, a RAID 1 virtual disk, is hosted on the solid-state drive (SSD) units in the Dell servers. A second virtual disk is created from the remaining physical drives so that the first virtual disk appears first when the Cisco software loads.

  1. On the Configuration Management page, select Create Virtual Disk.

  2. Select the RAID level as RAID 1, select physical disks from Unconfigured Capacity, and then click Select Physical Disks.

  3. Select the following configuration settings for the physical disks:

    • Select Media Type: SSD

    • Select Interface Type: Both

    • Logical Sector Size: Both
  4. Select the two SSD drives and then click Apply Changes.

  5. Set the virtual disk size to 200 GB and then click Create Virtual Disk.

  6. Select Confirm and then click Yes to confirm the changes. The first virtual disk is created.

  7. Click OK and return to Configuration Management to create the second virtual disk.

  8. On the Configuration Management page, select Create Virtual Disk.

  9. Select the RAID level as RAID 1, select physical disks from Free Capacity, and then click Select Disk Groups.

  10. Click Apply Changes and then click OK.

  11. Use the free space on the disk group for the second virtual disk and then click Create Virtual Disk.

  12. Select Confirm and then click Yes to confirm the changes. The second virtual disk is created.

  13. Return to the Dashboard View > Configuration Management page.

    On the Dashboard View page, under the Properties area, the Virtual Disks value should be 2.

  14. Return to the Device Settings page.

  15. Click Finish to return to the System Setup Main Menu page.

  16. Click Finish to return to the System Setup page.

  17. Click Exit. The Virtual Console window opens.

    This completes the storage settings required for the ECP feature configuration on the Dell server.

    Repeat the above steps on the second Dell server to complete the configuration in the BIOS and Device Settings pages.
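
You can optionally confirm from the iDRAC command line that both virtual disks were created; a minimal sketch, assuming the racadm utility is available (the virtual disk names shown are illustrative and vary by controller):

    racadm storage get vdisks

Expect two virtual disk entries in the output, for example Disk.Virtual.0:RAID.Integrated.1-1 and Disk.Virtual.1:RAID.Integrated.1-1.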

Changing the BIOS Boot Sequence

Perform the following steps to change the BIOS boot sequence.

  1. Select System BIOS > Boot Settings > BIOS Boot Settings.

  2. In BIOS Boot Settings, click Boot Sequence.

  3. Move Hard drive C: to the top of the boot order and then click OK.

  4. Under Boot Option Enable/Disable, uncheck all options except Hard drive C: and then click Back to return to the System BIOS settings.

  5. On the System BIOS Settings page, click Finish, click Yes to save the changes, and then click OK in the save successful message.

  6. On the System Setup page, click Finish and then click Yes in the confirmation message to reboot the server so that the changed BIOS settings take effect.
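
To double-check the resulting boot order without re-entering BIOS setup, you can query it through iDRAC; a minimal sketch, assuming racadm and that the attribute path matches your BIOS version:

    racadm get BIOS.BiosBootSettings.BootSeq

Expect the hard disk (Hard drive C:) entry listed first in the returned boot sequence.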

Configuring NIC Ports

Perform the following steps to configure the NIC port settings in the Dell LifeCycle Controller for ECP.

  1. Launch the iDRAC web interface of the Dell server and log in to iDRAC.

  2. In the System Summary, locate Virtual Console Preview and click Launch to launch the virtual console.

  3. In the iDRAC Virtual Console, select Next Boot > LifeCycle Controller and then click OK.

  4. Select System Settings and then click Device Settings.

  5. On the Device Settings page, click Integrated NIC 1 Port 1: Intel® Ethernet 10G 4P X710/i350 rNDC.

  6. Click Device Level Configuration and set the Virtualization Mode to SR-IOV.

  7. Return to the Device Settings page.

  8. On the Device Settings page, click NIC in Slot 1 Port 1: Intel® Ethernet Converged Network Adapter X710.

  9. Click Device Level Configuration and set the Virtualization Mode to SR-IOV.

  10. Return to the Device Settings page.

  11. Repeat Step 8 and Step 9 for the following two NIC ports participating in the ECP feature:

    • NIC in Slot 2 Port 1: Intel® Ethernet Converged Network Adapter X710

    • NIC in Slot 2 Port 2: Intel® Ethernet Converged Network Adapter X710

    This ensures that the Virtualization Mode is set to SR-IOV for all NIC ports relevant to the ECP feature.


Note

  • Integrated NIC 1 Port 1: Intel® Ethernet 10G 4P X710/i350 rNDC is used for control Ethernet connectivity.

  • NIC in Slot 1 Port 1: Intel® Ethernet Converged Network Adapter X710 is used for control Ethernet connectivity.

  • NIC in Slot 2 Port 1: Intel® Ethernet Converged Network Adapter X710 is used for management connectivity.

  • NIC in Slot 2 Port 2: Intel® Ethernet Converged Network Adapter X710 is used for management connectivity.


Bringing Up the NCS 6008 Router and Dell Servers for Enabling ECP

Perform the steps in this task to bring up the NCS 6008 router chassis and Dell servers for enabling ECP.


Note

You need to complete the pre-configuration tasks before performing these steps.


  1. Power down both the Dell servers.

  2. Ensure that the IOS XR image running on the chassis supports the ECP feature and that any relevant fixes are installed.

  3. After the NCS 6008 chassis comes up, verify that the correct version is installed by issuing the show version command (see the sketch after this procedure).

  4. Use the show install active command to confirm that the same version is active on all nodes.

  5. Update the chassis serial numbers of NCS 6008 and Dell servers in the Sysadmin VM running configuration.

    
    sysadmin-vm:0_RP0(config)# chassis serial FMP12020050 
    sysadmin-vm:0_RP0(config-serial-FMP12020050)# rack 0
    sysadmin-vm:0_RP0(config)# chassis serial 2YC4JH2
    sysadmin-vm:0_RP0(config-serial-2YC4JH2)# rack B0
    sysadmin-vm:0_RP0(config)# chassis serial 2YB5JH2
    sysadmin-vm:0_RP0(config-serial-2YB5JH2)#  rack B1
    sysadmin-vm:0_RP0(config)# commit
    
    

    Note

    To identify the serial numbers of the remotely managed Dell servers, see Configuring Dell iDRAC.


  6. Power on the Dell servers and load the NCS 6008 image using the virtual media option by performing the following steps:

    1. Launch the Dell iDRAC Virtual Console and then click Virtual Media > Connect Virtual Media.

    2. Click Virtual Media > Map Removable Disk.

    3. In the Virtual Media - Map Removable Disk dialog box, browse the client system, select the NCS 6008 bootable IMG image, and then click Map Device.

    4. In the Virtual Console, click Next Boot > Virtual Floppy and then click OK to set the virtual floppy as the next boot device.

    5. In the Virtual Console, click Power > Power on System to reboot the server.

      After the server is powered on, NCS 6008 software image installation starts from the mapped software image.
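
A minimal console sketch of the version checks in steps 3 and 4 (the version string and package name are illustrative, and the output is abbreviated):

    sysadmin-vm:0_RP0# show version
    Cisco IOS XR Admin Software, Version 6.4.1
    ...
    sysadmin-vm:0_RP0# show install active
    Node 0/RP0 [RP]
      Active Packages: 1
        ncs6k-sysadmin-6.4.1 [Boot image]
    ...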

Connecting to Sysadmin VM on the Dell Server through iDRAC SSH

To access the terminal console of the Sysadmin VM running on the Dell servers, the root user should establish an SSH connection to the iDRAC IP address from a Windows, Linux, or Mac OS client that can reach the network in which the servers are present. Once the SSH connection is established, execute the console com2 command at the SSH prompt to connect to the server console.


Note

After executing the console com2 command, you may have to press the Enter key multiple times before the redirected console output of the server appears at the terminal console.
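
A minimal session sketch (the IP address is a placeholder, and the /admin1-> prompt can vary with the iDRAC firmware version):

    $ ssh root@192.0.2.10
    root@192.0.2.10's password:
    /admin1-> console com2

Press Enter a few times after the last command until the Sysadmin VM console output appears.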


Creating Named SDRs for the ECP Enabled System

Perform the steps in this task to configure a named-SDR and allocate inventory in an ECP enabled system with Dell PowerEdge servers. The inventory includes RP resources (memory and CPU) and line cards. You can repeat the steps to create multiple named-SDRs.

Procedure


Step 1

config

Example:

sysadmin-vm:0_RP0# config

Enters system administration configuration mode.

Step 2

sdr sdr-name

Example:

sysadmin-vm:0_RP0(config)#  sdr sdr1

Creates a named-SDR and enters SDR configuration mode.

Step 3

pairing mode inter-rack

Example:

sysadmin-vm:0_RP0(config-sdr-sdr1)#  pairing mode inter-rack

Sets the RP pairing to inter-rack mode.

Step 4

resources card-type cc

Example:

sysadmin-vm:0_RP0(config-sdr-sdr1)#  resources card-type cc

Enters RP resources allocation mode for Dell servers. Dell servers are modeled as a Compute Container (CC) card in the ECP-enabled system.

Step 5

location node-id

Example:

sysadmin-vm:0_RP0(config-sdr-sdr1)#  location B0/CB0

Allocates the first RP to the named-SDR based on the specified RP location. Here, B0 is the first Dell server.

Step 6

location node-id

Example:

sysadmin-vm:0_RP0(config-location-B0/CB0)#  location B1/CB0

Allocates the second RP to the named-SDR, to be used for redundancy. Here, B1 is the second Dell server.

Step 7

exit

Example:

sysadmin-vm:0_RP0(config-location-B1/CB0)#  exit

Exits the RP configuration mode and returns to named-SDR configuration mode.

Step 8

location node-id

Example:

sysadmin-vm:0_RP0(config-sdr-sdr1)#  location 0/0

Allocates a line card to the named-SDR based on the specified line card location.

Step 9

commit

Example:

sysadmin-vm:0_RP0(config-location-0/0)# commit

Commits the configuration.

Example: Creating Named SDRs for ECP

This example shows how to configure named SDRs in an ECP-enabled system with external servers. In this example, six SDRs are created.


sysadmin-vm:0_RP0# config
sysadmin-vm:0_RP0(config)# no sdr default-sdr
sysadmin-vm:0_RP0(config)# commit
sysadmin-vm:0_RP0(config)# sdr sdr1
sysadmin-vm:0_RP0(config-sdr-sdr1)# pairing mode inter-rack
sysadmin-vm:0_RP0(config-sdr-sdr1)# resources card-type cc 
sysadmin-vm:0_RP0(config-sdr-sdr1)# location B0/CB0
sysadmin-vm:0_RP0(config-location-B0/CB0)# location B1/CB0
sysadmin-vm:0_RP0(config-location-B1/CB0)# exit
sysadmin-vm:0_RP0(config-sdr-sdr1)# location 0/0
sysadmin-vm:0_RP0(config-location-0/0)# commit

sysadmin-vm:0_RP0(config)# sdr sdr2
sysadmin-vm:0_RP0(config-sdr-sdr2)# pairing mode inter-rack
sysadmin-vm:0_RP0(config-sdr-sdr2)# resources card-type cc 
sysadmin-vm:0_RP0(config-sdr-sdr2)# location B0/CB0
sysadmin-vm:0_RP0(config-location-B0/CB0)# location B1/CB0
sysadmin-vm:0_RP0(config-location-B1/CB0)# exit
sysadmin-vm:0_RP0(config-sdr-sdr2)# location 0/3
sysadmin-vm:0_RP0(config-location-0/3)# commit

sysadmin-vm:0_RP0(config)# sdr sdr3
sysadmin-vm:0_RP0(config-sdr-sdr3)# pairing mode inter-rack
sysadmin-vm:0_RP0(config-sdr-sdr3)# resources card-type cc 
sysadmin-vm:0_RP0(config-sdr-sdr3)# location B0/CB0
sysadmin-vm:0_RP0(config-location-B0/CB0)# location B1/CB0
sysadmin-vm:0_RP0(config-location-B1/CB0)# exit
sysadmin-vm:0_RP0(config-sdr-sdr3)# location 0/4
sysadmin-vm:0_RP0(config-location-0/4)# commit

sysadmin-vm:0_RP0(config)# sdr sdr4
sysadmin-vm:0_RP0(config-sdr-sdr4)# pairing mode inter-rack
sysadmin-vm:0_RP0(config-sdr-sdr4)# resources card-type cc 
sysadmin-vm:0_RP0(config-sdr-sdr4)# location B0/CB0
sysadmin-vm:0_RP0(config-location-B0/CB0)# location B1/CB0
sysadmin-vm:0_RP0(config-location-B1/CB0)# exit
sysadmin-vm:0_RP0(config-sdr-sdr4)# location 0/5
sysadmin-vm:0_RP0(config-location-0/5)# commit

sysadmin-vm:0_RP0(config)# sdr sdr5
sysadmin-vm:0_RP0(config-sdr-sdr5)# pairing mode inter-rack
sysadmin-vm:0_RP0(config-sdr-sdr5)# resources card-type cc 
sysadmin-vm:0_RP0(config-sdr-sdr5)# location B0/CB0
sysadmin-vm:0_RP0(config-location-B0/CB0)# location B1/CB0
sysadmin-vm:0_RP0(config-location-B1/CB0)# exit
sysadmin-vm:0_RP0(config-sdr-sdr5)# location 0/6
sysadmin-vm:0_RP0(config-location-0/6)# commit

sysadmin-vm:0_RP0(config)# sdr sdr6
sysadmin-vm:0_RP0(config-sdr-sdr6)# pairing mode inter-rack
sysadmin-vm:0_RP0(config-sdr-sdr6)# resources card-type cc 
sysadmin-vm:0_RP0(config-sdr-sdr6)# location B0/CB0
sysadmin-vm:0_RP0(config-location-B0/CB0)# location B1/CB0
sysadmin-vm:0_RP0(config-location-B1/CB0)# exit
sysadmin-vm:0_RP0(config-sdr-sdr6)# location 0/7
sysadmin-vm:0_RP0(config-location-0/7)# commit

What to do next

After the named SDRs are created, verify the VM state for each SDR.

Execute the show sdr command to check that the Status is "RUNNING" for all VMs in each SDR.

sysadmin-vm:0_RP0# show sdr

Wed Nov  08 16:01:06.626 UTC

SDR: SDR1
Location     IP Address      Status           Boot Count  Time Started
-----------------------------------------------------------------------------
B0/CB0/VM1   192.0.0.4       RUNNING          1           08/11/2017 00:33:12
B1/CB0/VM1   192.0.4.4       RUNNING          1           08/11/2017 00:33:01
0/1/VM1      192.0.88.3      RUNNING          1           08/11/2017 00:32:48

SDR: SDR2
Location     IP Address      Status           Boot Count  Time Started
-----------------------------------------------------------------------------
B0/CB0/VM2   192.0.0.6       RUNNING          2           08/11/2017 03:24:43
B1/CB0/VM2   192.0.4.6       RUNNING          2           08/11/2017 03:24:32
0/3/VM2      192.0.68.3      RUNNING          2           08/11/2017 03:25:26

SDR: SDR3
Location     IP Address      Status           Boot Count  Time Started
-----------------------------------------------------------------------------
B0/CB0/VM3   192.0.0.8       RUNNING          2           08/11/2017 02:32:15
B1/CB0/VM3   192.0.4.8       RUNNING          2           08/11/2017 02:32:23
0/4/VM3      192.0.64.3      RUNNING          2           08/11/2017 02:32:40

SDR: SDR4
Location     IP Address      Status           Boot Count  Time Started
-----------------------------------------------------------------------------
B0/CB0/VM4   192.0.0.10      RUNNING          2           08/11/2017 04:32:23
B1/CB0/VM4   192.0.4.10      RUNNING          2           08/11/2017 04:32:32
0/5/VM4      192.0.60.3      RUNNING          2           08/11/2017 04:33:40

SDR: SDR5
Location     IP Address      Status           Boot Count  Time Started
-----------------------------------------------------------------------------
B0/CB0/VM5   192.0.0.12      RUNNING          2           08/11/2017 02:32:17
B1/CB0/VM5   192.0.4.10      RUNNING          2           08/11/2017 02:32:25
0/6/VM5      192.0.66.3      RUNNING          2           08/11/2017 02:32:49

SDR: SDR6
Location     IP Address      Status           Boot Count  Time Started
-----------------------------------------------------------------------------
B0/CB0/VM6   192.0.0.14      RUNNING          2           08/11/2017 04:32:25
B1/CB0/VM6   192.0.4.12      RUNNING          2           08/11/2017 04:32:37
0/7/VM6      192.0.68.3      RUNNING          2           08/11/2017 04:33:50