Carrier Grade NAT Overview and Benefits
To implement Carrier Grade NAT, you should understand the following concepts:
Carrier Grade NAT Overview
Carrier Grade Network Address Translation (CGN) is a large scale NAT that is capable of providing private IPv4 to public IPv4 address translation in the order of millions of translations to support a large number of subscribers, and at least 10 Gbps full-duplex bandwidth throughput.
CGN is a workable solution to the IPv4 address depletion problem, and offers a way for service provider subscribers and content providers to implement a seamless transition to IPv6. CGN employs network address and port translation (NAPT) methods to aggregate many private IP addresses into fewer public IPv4 addresses. For example, a single public IPv4 address with a pool of 32 K port numbers supports about 320 individual private IP subscribers, assuming that each subscriber requires 100 ports (each TCP connection, for example, needs one port number).
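The port arithmetic above can be sketched as follows (an illustrative Python calculation, not part of any Cisco software; the constants come from the example in the text):

```python
# Illustrative sketch: how many subscribers one public IPv4 address can
# serve under NAPT, given a fixed port budget. The figures (32 K ports,
# 100 ports per subscriber) are taken from the text above.

PORT_POOL = 32 * 1024        # ports available on one public IPv4 address
PORTS_PER_SUBSCRIBER = 100   # assumed per-subscriber demand (e.g., one
                             # port per TCP connection)

subscribers_per_public_ip = PORT_POOL // PORTS_PER_SUBSCRIBER
print(subscribers_per_public_ip)  # close to the ~320 cited in the text
```

Interpreting "32 K" as exactly 32,000 ports gives the round figure of 320 subscribers cited in the text.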
A CGN requires IPv6 to assist with the transition from IPv4 to IPv6.
Benefits of Carrier Grade NAT
CGN offers these benefits:
-
Enables service providers to execute orderly transitions to IPv6 through mixed IPv4 and IPv6 networks.
-
Provides translation across address families, not just translation within a single address family.
-
Delivers a comprehensive solution suite for IP address management and IPv6 transition.
IPv4 Address Shortage
The 32-bit public IPv4 address space is a fixed-size resource that is nearing exhaustion. The IPv4 address shortage therefore presents a significant challenge to all service providers who depend on large blocks of public or private IPv4 addresses for provisioning and managing their customers.
Service providers cannot easily allocate sufficient public IPv4 address space to support new customers that need to access the public IPv4 Internet.
NAT and NAPT Overview
A Network Address Translation (NAT) box is positioned between private and public IP networks, which are addressed with non-global private addresses and public IP addresses, respectively. A NAT performs the task of mapping one or many private (or internal) IP addresses into one public IP address by employing both network address and port translation (NAPT) techniques. The mappings, otherwise referred to as bindings, are typically created when a private IPv4 host located behind the NAT initiates a connection (for example, TCP SYN) with a public IPv4 host. The NAT intercepts the packet to perform these functions:
-
Rewrites the private IP host source address and port values with its own IP source address and port values
-
Stores the private-to-public binding information in a table and sends the packet. When the public IP host returns a packet, it is addressed to the NAT. The stored binding information is used to replace the IP destination address and port values with the private IP host address and port values.
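The two functions above can be sketched as a minimal binding table (an illustrative Python model, not the CGN implementation; the class and addresses are hypothetical):

```python
# Minimal sketch of a NAPT binding table. Outbound packets get the NAT's
# own source (IP, port); the stored binding restores the private
# destination on the return path.

class Napt:
    def __init__(self, public_ip, first_port=1024):
        self.public_ip = public_ip
        self.next_port = first_port
        self.bindings = {}          # (priv_ip, priv_port) -> public_port
        self.reverse = {}           # public_port -> (priv_ip, priv_port)

    def outbound(self, priv_ip, priv_port):
        """Rewrite the private source with the NAT's own address/port."""
        key = (priv_ip, priv_port)
        if key not in self.bindings:          # create binding on first use
            self.bindings[key] = self.next_port
            self.reverse[self.next_port] = key
            self.next_port += 1
        return self.public_ip, self.bindings[key]

    def inbound(self, public_port):
        """Restore the private destination from the stored binding."""
        return self.reverse.get(public_port)  # None if no binding exists

nat = Napt("192.0.2.1")
src = nat.outbound("10.0.0.5", 40000)   # ('192.0.2.1', 1024)
back = nat.inbound(1024)                # ('10.0.0.5', 40000)
```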
Traditionally, NAT boxes are deployed in the residential home gateway (HGW) to translate the multiple private IP addresses of devices inside the home to a single public IP address, which is configured and provisioned on the HGW by the service provider. In enterprise scenarios, you can use the NAT functions combined with the firewall to offer security protection for corporate resources and allow for provider-independent IPv4 addresses. NATs have made it easier for private IP home networks to flourish independently from service provider IP address provisioning. Enterprises can permanently employ private IP addressing for Intranet connectivity while relying on a few NAT boxes and public IPv4 addresses for external public Internet connectivity. NAT boxes, in conjunction with classic methods such as Classless Inter-Domain Routing (CIDR), have slowed public IPv4 address consumption.
Network Address and Port Mapping
-
Endpoint-independent mapping—Reuses the port mapping for subsequent packets that are sent from the same internal IP address and port to any external IP address and port.
-
Address-dependent mapping—Reuses the port mapping for subsequent packets that are sent from the same internal IP address and port to the same external IP address, regardless of the external port.
Note
CGN on ISM implements Endpoint-Independent Mapping.
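The difference between the two mapping behaviors comes down to what keys the port mapping (a sketch for illustration only; the function and addresses are hypothetical, not CGN code):

```python
# Illustrative contrast of the two RFC 4787 mapping behaviors. The
# mapping key decides whether an existing port mapping is reused when
# the internal host contacts a new external destination.

def mapping_key(behavior, internal, external):
    if behavior == "endpoint-independent":
        return internal                   # same mapping for any destination
    if behavior == "address-dependent":
        return internal, external[0]      # new mapping per external IP
    raise ValueError(behavior)

internal = ("10.0.0.5", 40000)
dest_a = ("203.0.113.7", 80)
dest_b = ("198.51.100.9", 443)

# Endpoint-independent: both destinations share one mapping.
assert mapping_key("endpoint-independent", internal, dest_a) == \
       mapping_key("endpoint-independent", internal, dest_b)

# Address-dependent: a different external IP yields a different mapping.
assert mapping_key("address-dependent", internal, dest_a) != \
       mapping_key("address-dependent", internal, dest_b)
```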
Translation Filtering
RFC 4787 defines translation filtering behaviors for NATs. A NAT uses one of these options to filter packets originating from specific external endpoints:
-
Endpoint-independent filtering—Filters out only packets that are not destined to the internal address and port regardless of the external IP address and port source.
-
Address-dependent filtering—Filters out packets that are not destined to the internal address and port. In addition, the NAT filters out packets that are destined for the internal endpoint if the internal endpoint has not previously sent packets to the external IP address from which the packets originate.
-
Address and port-dependent filtering—Filters out packets that are not destined to the internal address and port. In addition, the NAT filters out packets that are destined for the internal endpoint if the internal endpoint has not previously sent packets to the external IP address and port from which the packets originate.
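The three filtering behaviors can be sketched as follows (an illustrative Python model, not CGN code; the function name, `seen` set, and addresses are hypothetical):

```python
# Sketch of the three RFC 4787 filtering behaviors. `seen` records the
# (external_ip, external_port) endpoints the internal host has already
# sent packets to through this mapping.

def allow_inbound(behavior, seen, ext_ip, ext_port):
    if behavior == "endpoint-independent":
        return True                                  # any external source
    if behavior == "address-dependent":
        return any(ip == ext_ip for ip, _ in seen)   # known external IP
    if behavior == "address-and-port-dependent":
        return (ext_ip, ext_port) in seen            # known IP and port
    raise ValueError(behavior)

seen = {("203.0.113.7", 80)}

# A brand-new external endpoint passes only endpoint-independent filtering.
assert allow_inbound("endpoint-independent", seen, "198.51.100.9", 443)
assert not allow_inbound("address-dependent", seen, "198.51.100.9", 443)

# A known IP on a new port passes address-dependent filtering,
# but not address-and-port-dependent filtering.
assert allow_inbound("address-dependent", seen, "203.0.113.7", 8080)
assert not allow_inbound("address-and-port-dependent", seen, "203.0.113.7", 8080)
```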
Prerequisites for Implementing the Carrier Grade NAT
The following prerequisites are required to implement Carrier Grade NAT:
-
You must be running Cisco IOS XR Software Release 3.9.1 or later.
-
You must have installed the CGN service package (pie) hfr-services-p.pie-x.x.x or hfr-services-px.pie-x.x.x, where x.x.x specifies the release number of the Cisco IOS XR software.
Note
The CGN service package was named hfr-cgn-p.pie or hfr-cgn-px.pie in releases prior to Cisco IOS XR Software Release 4.2.0. It is named hfr-services-p.pie or hfr-services-px.pie in Cisco IOS XR Software Release 4.2.0 and later.
-
You must be in a user group associated with a task group that includes the proper task IDs. The command reference guides include the task IDs required for each command.
-
In the case of intra-chassis redundancy, enable CGSE data and control path monitoring in configuration mode, where R/S/CPU0 is the CGSE location:
-
service-plim-ha location R/S/CPU0 datapath-test
-
service-plim-ha location R/S/CPU0 core-to-core-test
-
service-plim-ha location R/S/CPU0 pci-test
-
service-plim-ha location R/S/CPU0 coredump-extraction
-
service-plim-ha location R/S/CPU0 linux-timeout 500
-
service-plim-ha location R/S/CPU0 msc-timeout 500
Note
All the error conditions result in a card reload that triggers switchover to the standby CGSE. Revertive switchover (which is disabled by default) and forced switchover options are also available and can be used if required. Contact Cisco Technical Support with show tech-support cgn information.
-
In the case of a standalone CGSE (without intra-chassis redundancy), enable CGSE data and control path monitoring in configuration mode with auto reload disabled, where R/S/CPU0 is the CGSE location:
service-plim-ha location R/S/CPU0 datapath-test
-
service-plim-ha location R/S/CPU0 core-to-core-test
-
service-plim-ha location R/S/CPU0 pci-test
-
service-plim-ha location R/S/CPU0 coredump-extraction
-
service-plim-ha location R/S/CPU0 linux-timeout 500
-
service-plim-ha location R/S/CPU0 msc-timeout 500
-
(admin-config) hw-module reset auto disable location R/S/CPU0
Note
All the error conditions result in a syslog message. If you observe heartbeat failures or any HA test failure messages, contact Cisco Technical Support with show tech-support cgn information.
Note
If you suspect user group assignment is preventing you from using a command, contact your AAA administrator for assistance.
CGSE PLIM
A Carrier-Grade Services Engine (CGSE) is a physical line interface module (PLIM) for the Cisco CRS-1 Router. When the CGSE is attached to a single CRS modular service card (forwarding engine), it provides the hardware system running applications such as NAT44, XLAT, Stateful NAT64 and DS-Lite. An individual application module consumes one CRS linecard slot. Multiple modules can be placed inside a single CRS chassis to add capacity, scale, and redundancy.
Only one ServiceInfra SVI is allowed per CGSE slot. This interface is used for the management plane and is required to bring up the CGSE; it is of local significance within the chassis.
ServiceApp SVI is used to forward the data traffic to the CGSE applications. You can scale up to 256 ServiceApp interfaces for each CGSE. These interfaces can be advertised in IGP/EGP.
CGSE Multi-Chassis Support
The CGSE line card is supported in a multi-chassis configuration. 16 CGSE line cards are supported on each Cisco CRS Router chassis. A maximum of 32 CGN instances are supported.
For CGN applications, such as NAT44, NAT64, XLAT, DS-Lite and 6RD, a maximum of 20 million sessions are supported by each CGSE line card.
CGSE Plus PLIM
CGSE Plus is a multi-service PLIM for the Cisco CRS-3 Router. The module has a maximum packet processing speed of 40 Gbps full-duplex with a reduced boot time and latency.
Note
The actual throughput of the application depends on the software logic and the CPU cycles consumed by the software.
It also supports services redundancy and QoS for service applications.
Note
On a Cisco CRS router, you cannot enable MPLS or LDP along with a CGSE Plus card. Disable MPLS and LDP before bringing up a CGSE Plus card. Ensure that an MPLS or LDP label is not assigned on a NAT-associated IP address.
CGSE Plus is brought up in two modes:
-
CGN mode — The Cisco IOS XR and Linux software are tuned to host CGN applications such as NAT44 and 6RD.
-
SESH mode — The Cisco IOS XR and Linux software are tuned to host future applications such as Arbor DDoS services.
For more information on CGSE Plus PLIM, see Cisco CRS Carrier Grade Services Engine Physical Layer Interface Module Installation Note.
Note
The heartbeat messages must always be kept alive. If heartbeat messages are being dropped, you must specify an ACL rule that permits heartbeat traffic from the ServiceInfra subnet to the ServiceInfra IP address.
RP/0/RP0/CPU0:CRS-B#show running-config interface serviceinfra *
Sat Dec 16 07:28:41.088 EDT
interface ServiceInfra3
ipv4 address 3.3.3.1 255.255.255.252
service-location 0/3/CPU0
flow ipv4 monitor snfv4 sampler samp ingress
ipv4 access-group ABF-CGNAT ingress
!
interface ServiceInfra6
ipv4 address 6.6.6.1 255.255.255.252
service-location 0/6/CPU0
ipv4 access-group ABF-CGNAT ingress
RP/0/RP0/CPU0:CRS-B#sh ipv4 access-list ABF-CGNAT
Sat Dec 16 07:29:02.433 EDT
ipv4 access-list ABF-CGNAT
10 permit ipv4 172.18.12.0 0.0.0.3 host 172.23.6.1 nexthop1 vrf CGNAT-COLLECTOR
11 permit ipv4 172.18.13.0 0.0.0.3 host 172.23.6.1 nexthop1 vrf CGNAT-COLLECTOR
20 permit ipv4 172.18.12.0 0.0.0.3 host 172.23.6.2 nexthop1 vrf CGNAT-COLLECTOR
21 permit ipv4 172.18.13.0 0.0.0.3 host 172.23.6.2 nexthop1 vrf CGNAT-COLLECTOR
30 permit udp host 3.3.3.2 host 3.3.3.1
40 permit udp host 6.6.6.2 host 6.6.6.1
RP/0/RP0/CPU0:CRS-B#
In the example above, heartbeat messages are sent from 3.3.3.2 to 3.3.3.1 and from 6.6.6.2 to 6.6.6.1, permitted by ACL entries 30 and 40.