Cloud Native BNG Release Change Reference

Feature Defaults Quick Reference

The following table indicates what features are enabled or disabled by default.

Feature                                      Default
End-to-End Flow Control                      Disabled – Configuration Required
High Availability and CP Reconciliation      Disabled – Configuration Required
L2TP Subscriber Management                   Disabled – Configuration Required
NSO Integration Support                      Enabled – Always-on
Rolling Software Update                      Disabled – Configuration Required

End-to-End Flow Control

Feature Summary and Revision History

Summary Data

Table 1. Summary Data

Applicable Product(s) or Functional Area    cnBNG
Applicable Platform(s)                      SMI
Feature Default Setting                     Disabled – Configuration Required
Related Documentation                       Not Applicable

Revision History

Table 2. Revision History

Revision Details      Release
First introduced.     2021.04.0

Feature Description


Note

This feature is Network Services Orchestrator (NSO) integrated.

The Cloud Native Broadband Network Gateway (cnBNG) manages residential subscribers from different access planes in a centralized way. It accepts and identifies subscriber Control Plane (CP) traffic coming from multiple User Planes (UPs) associated with the CP. When the number of UPs scales, the amount of CP traffic coming from the UPs multiplies.

There are various scenarios in which the traffic flow between the CP and UP must be regulated to ensure that the CP attends to all service requests without interruption. The scenarios that create bursty or higher-rate traffic flows are as follows:

  • Power outage in a residential area

  • Access network outage for a specific period

  • Catastrophic UP events such as process crashes, route processor reboots, and chassis reloads

These scenarios generate a sudden spike in traffic going to the CP. To handle these spikes, it is necessary to flow control and rate limit the CP ingress so that service applications are not overwhelmed by the bursts. The End-to-End Flow Control feature optimizes flow control and rate limiting of the traffic toward the CP ingress.
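The effect of rate limiting the CP ingress can be pictured as a token bucket placed in front of the service applications. The following Python sketch is illustrative only; the names (TokenBucket, rate_per_sec, burst_size) are hypothetical and do not correspond to cnBNG code or configuration. It shows how a limiter admits a burst up to a configured size and paces the remainder so that downstream services are not overwhelmed.

import time

class TokenBucket:
    """Illustrative token-bucket rate limiter (hypothetical; not cnBNG code)."""

    def __init__(self, rate_per_sec, burst_size):
        self.rate = rate_per_sec        # tokens added per second
        self.capacity = burst_size      # maximum burst admitted at once
        self.tokens = burst_size
        self.last_refill = time.monotonic()

    def allow(self):
        now = time.monotonic()
        # Refill tokens in proportion to elapsed time, capped at capacity.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last_refill) * self.rate)
        self.last_refill = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True          # forward the message toward the service pods
        return False             # queue or drop: burst exceeds the configured rate

# Example: a burst of 1000 messages arriving at once is throttled to roughly
# the configured burst size instead of hitting the service pods all at once.
bucket = TokenBucket(rate_per_sec=200, burst_size=50)
admitted = sum(1 for _ in range(1000) if bucket.allow())
print(f"{admitted} messages admitted immediately; the rest are paced")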

For more information, see the Cloud Native BNG Control Plane Configuration Guide > End-to-End Flow Control chapter.

High Availability and CP Reconciliation

Feature Summary and Revision History

Summary Data

Table 3. Summary Data

Applicable Product(s) or Functional Area    cnBNG
Applicable Platform(s)                      SMI
Feature Default Setting                     Disabled – Configuration Required
Related Documentation                       Not Applicable

Revision History

Table 4. Revision History

Revision Details      Release
First introduced.     2021.04.0

Feature Description

The High Availability (HA) and Control Plane (CP) Reconciliation feature is supported on all cnBNG-specific service pods. HA has the following impact on pod services:

  • A CPU or memory spike can occur if there is a churn of sessions during a pod restart. For example, if the SM has two replicas, instance 1 and instance 2, and instance 1 restarts, there is a spike in CPU or memory usage on instance 2.

  • Service pod IPCs can time out if the destination service pod restarts before responding to the ongoing IPCs.

  • CDL updates of ongoing sessions can fail and may result in desynchronization of the sessions between the pods.

  • Subscriber sessions can desynchronize across all the pods in the CP due to a mismatch in session count or session data. The solution is to run reconciliation (that is, CP audit) for sessions across pods; a simplified illustration follows this list.

    • Reconciliation between SM and DHCP for IPoE sessions

    • Reconciliation between SM, DHCP, and PPP for PTA and LNS sessions

    • Reconciliation between SM and PPP for LAC sessions

    • Reconciliation between Node Manager (NM) and FSOLs for all session types


    Note

    High availability and CP reconciliation is available only for IPoE and PTA session types. LAC and LNS session types are not supported.
  • Subscriber sessions can desynchronize between the CP and UP. The solution for this is to run CP to UP reconciliation for sessions between the CP and UP.

  • IP address leaks can occur in IPAM. To address this, run the IPAM reconciliation CLI command, reconcile ipam.

  • ID leaks (CP Subscriber ID and PPPoE Session ID) can occur in the NM.

  • Grafana metrics can reset for the restarted pods.
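Conceptually, a CP audit compares the session views held by different pods and flags identifiers that exist on one side but not the other. The following Python sketch is a simplified illustration under an assumed data model (plain sets of session IDs) and hypothetical names (audit_sessions, sm_sessions, dhcp_sessions); it is not the cnBNG reconciliation implementation.

def audit_sessions(sm_sessions, dhcp_sessions):
    """Compare session IDs held by two pods (illustrative only).

    Returns the identifiers that each side must clean up or rebuild so that
    the two views converge, which is the essence of a CP audit.
    """
    stale_in_sm = sm_sessions - dhcp_sessions     # SM has them, DHCP does not
    stale_in_dhcp = dhcp_sessions - sm_sessions   # DHCP has them, SM does not
    return stale_in_sm, stale_in_dhcp

# Example: after an SM pod restart, the two views have drifted.
sm = {"sub-1001", "sub-1002", "sub-1003"}
dhcp = {"sub-1002", "sub-1003", "sub-1004"}
stale_sm, stale_dhcp = audit_sessions(sm, dhcp)
print("reconcile in SM:", stale_sm)      # {'sub-1001'}
print("reconcile in DHCP:", stale_dhcp)  # {'sub-1004'}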

For more information, see the Cloud Native BNG Control Plane Configuration Guide > High Availability and Control Plane Reconciliation chapter.

L2TP Subscriber Management

Feature Summary and Revision History

Summary Data

Table 5. Summary Data

Applicable Product(s) or Functional Area    cnBNG
Applicable Platform(s)                      SMI
Feature Default Setting                     Disabled – Configuration Required
Related Documentation                       Not Applicable

Revision History

Table 6. Revision History

Revision Details      Release
First introduced.     2021.04.0

Feature Description

The majority of digital subscriber line (DSL) broadband deployments use PPPoE sessions to provide subscriber services. These sessions terminate the PPP link and provide all the features, services, and billing on the same node. These sessions are called PPP Termination and Aggregation (PTA) sessions.

Some wireline subscriber deployments follow a wholesale-retail model in which some ISPs work with others to provide the access and core services separately. In such cases, the subscribers are tunneled between the wholesale and retail ISPs using the Layer 2 Tunneling Protocol (L2TP).

In cnBNG, L2TP performs the hand-off of subscriber traffic to the Internet service provider (ISP). To do this, L2TP uses two network components:

  • L2TP Access Concentrator (LAC)—L2TP enables subscribers to dial into the LAC, which extends the PPP session to the LNS. cnBNG provides the LAC.

  • L2TP Network Server (LNS)—L2TP extends PPP sessions over an arbitrary network to a remote network server, that is, the L2TP Network Server (LNS). The ISP provides the LNS.

This overall network deployment architecture is also known as a Virtual Private Dialup Network (VPDN). The overall topology of LAC and LNS is depicted as follows:



The CPs for LAC and LNS depend on L2TP session termination. Developing these two control planes in a single cnBNG microservice has the following benefits:

Simplified Single L2TP Control Plane

  • Enables a single Control Plane (CP) to handle both LAC and LNS services

  • Reduces the configuration complexity of the current XR L2TP

    vpdn-groups, vpdn-templates, l2tp-class and so on are simplified.

  • Supports LC subscriber (not supported on the physical BNG)

  • Avoids Ns/Nr checkpointing issues of pBNG to support RPFO

Collocated LAC and LNS

  • Supports LAC and LNS in the same cnBNG CP, with different User Planes (UPs)

  • Enables sharing of the same AAA and Policy Plane

  • Simplifies management and troubleshooting

Flexible Deployment Options

The integration of LAC and LNS into a centralized cnBNG CP provides highly flexible deployment options to suit different customer use cases and needs. For example, the cnBNG CP can host the CP functionality for either a LAC or an LNS UP. Also, the same CP cluster can act as the CP for both LAC and LNS UPs from different data centers (or even from the same user plane, if the user plane supports it), although not for the same session at the same time.
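To make the LAC and LNS roles described above concrete, the following Python sketch shows the kind of per-subscriber decision a LAC-style CP makes: terminate the PPPoE session locally (PTA) or hand the PPP session off to a wholesale ISP's LNS over L2TP, based on the subscriber's domain. The domains, addresses, and function name (classify_session) are hypothetical examples, not cnBNG configuration or internals.

# Hypothetical domain-to-LNS mapping; real deployments configure this on the CP.
VPDN_DOMAINS = {
    "wholesale-isp.example": "198.51.100.20",   # domain -> LNS address
}

def classify_session(username):
    """Return (session_type, lns_address) for a PPPoE subscriber (illustrative)."""
    domain = username.split("@")[-1] if "@" in username else None
    if domain in VPDN_DOMAINS:
        return "LAC (tunnel to LNS over L2TP)", VPDN_DOMAINS[domain]
    return "PTA (terminate locally)", None

print(classify_session("alice@wholesale-isp.example"))
print(classify_session("bob@retail.example"))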

For more information, see the Cloud Native BNG Control Plane Configuration Guide > PPPoE and L2TP Subscriber Management chapter.

L2TP Features

The cnBNG supports the following Layer 2 Tunneling Protocol (L2TP) features:

  • Tunnel authentication

  • L2TP congestion control

  • AVP encryption

  • Tunnel Hello interval

  • IP ToS value for tunneled traffic

  • IPv4 don't fragment bit

  • DSL line information attributes

  • IPv4 tunnel source address

  • IPv4 tunnel destination address

  • IPv4 destination load balancing

  • Tunnel mode

  • MTU for LCP negotiation

  • TCP maximum support

  • Start-Control-Connection-Request (SCCRQ) timeout

  • SCCRQ retries

  • Control packet retransmission

  • Control packet retransmission retries

  • Receive window size for control channel

  • Rx and Tx connect speed

  • Tunnel VRF

  • Tunnel session limit

  • Weighted and equal load balancing (see the sketch after this list)

  • Tunnel password for authentication

  • Domain name and tunnel assignment

  • LCP and Authentication renegotiation

  • LAC hostnames for tunneling requests
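To illustrate the weighted and equal load balancing entry above, the following Python sketch distributes new tunnels across candidate LNS destination addresses in proportion to configured weights. The addresses, weights, and function name (pick_lns) are hypothetical examples and not cnBNG configuration syntax.

import random

# Illustrative weights for candidate LNS destinations (hypothetical values).
LNS_DESTINATIONS = {
    "203.0.113.10": 3,   # weight 3: receives roughly 3x the tunnels
    "203.0.113.11": 1,   # weight 1
}

def pick_lns(destinations):
    """Pick an LNS destination in proportion to its configured weight."""
    addresses = list(destinations)
    weights = [destinations[a] for a in addresses]
    return random.choices(addresses, weights=weights, k=1)[0]

# Equal load balancing is the special case where all weights are identical.
counts = {addr: 0 for addr in LNS_DESTINATIONS}
for _ in range(10000):
    counts[pick_lns(LNS_DESTINATIONS)] += 1
print(counts)   # roughly a 3:1 split between the two destinations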

NSO Integration Support

Feature Summary and Revision History

Summary Data

Table 7. Summary Data

Applicable Product(s) or Functional Area    cnBNG
Applicable Platform(s)                      SMI
Feature Default Setting                     Enabled – Always-on
Related Documentation                       Not Applicable

Revision History

Table 8. Revision History

Revision Details      Release
First introduced.     2021.04.0

Feature Description

The following features are Network Services Orchestrator (NSO) integrated:

  • Cisco Common Data Layer

  • Authentication, Authorization, and Accounting

  • IPoE (DHCP)

  • IPAM

  • Monitor Subscriber and Protocol

  • Multiple Replica Support for cnBNG Services

  • PPPoE and L2TP

  • Subscriber Manager

For more information, see the respective chapters in the Cloud Native BNG Control Plane Configuration Guide.

Rolling Software Update

Feature Description

The cnBNG Rolling Software Update feature enables incremental update of pod instances with minimal downtime. In Kubernetes, this implementation is possible only with rolling updates.

The Subscriber Microservices Infrastructure (SMI) platform supports rolling software upgrade for cnBNG pods. The “Pod Restart and Reconciliation” and “Multiple Replica Support for cnBNG Services” features depend on this feature. For more information, see Multiple Replica Support for cnBNG Services and High Availability and CP Reconciliation.

The cnBNG has a three-tier architecture consisting of Protocol, Service, and Session tiers. Each tier includes a set of microservices (pods) for a specific functionality. These tiers run on a Kubernetes cluster comprising Kubernetes (K8s) master and worker nodes (including Operation and Management nodes).

For high availability (HA) and fault tolerance, a minimum of two K8s worker nodes are required for each tier. Each worker node can host multiple pod replicas. Kubernetes orchestrates the pods using the StatefulSets controller. The pods require a minimum of two replicas for fault tolerance.
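As an illustration of how a rolling update behaves with StatefulSets, the following sketch uses the open-source Kubernetes Python client to patch a pod template, which triggers the controller's RollingUpdate strategy so that pods are replaced one at a time while the surviving replica keeps serving. The namespace, StatefulSet name, and image tag are hypothetical placeholders and do not represent the actual cnBNG upgrade procedure.

from kubernetes import client, config

# Minimal sketch, assuming kubeconfig access and a StatefulSet named "sm" in a
# namespace "cnbng" (both names are hypothetical placeholders).
config.load_kube_config()
apps = client.AppsV1Api()

# Patching the pod template image triggers the RollingUpdate strategy: pods
# are replaced one at a time, so the remaining replica continues serving.
patch = {
    "spec": {
        "template": {
            "spec": {
                "containers": [
                    {"name": "sm", "image": "example.registry/cnbng-sm:new-tag"}
                ]
            }
        }
    }
}
apps.patch_namespaced_stateful_set(name="sm", namespace="cnbng", body=patch)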

The following figure depicts the cnBNG K8s Cluster with 12 nodes – three Master nodes, three Operations and Management (OAM) worker nodes, two Protocol worker nodes, two Service worker nodes, and two Session (data store) worker nodes.

Figure 1. cnBNG Kubernetes Cluster



Note

  • OAM worker nodes: These nodes host the Ops Center pods for configuration management and metrics pods for statistics and Key Performance Indicators (KPIs).

  • Protocol worker nodes: These nodes host the cnBNG protocol-related pods for UDP-based interfaces such as N4, RADIUS, and GTP.

  • Service worker nodes: These nodes host the cnBNG application-related pods that perform session and FSOL management.

  • Session worker nodes: These nodes host the database-related pods that store subscriber session data.


For more information, see the Cloud Native BNG Control Plane Configuration Guide > Rolling Software Update chapter.