Feature Description

When you perform GR setup activities with cnSGW-C and SMF, GTPC message handling can be optimized between the two racks, as in the following scenarios:

  • The IPC messages from cnSGW-C to SMF service pods flow over the gtpc-ep pods twice, leading to message encoding and decoding overhead.

  • Within a GR pair, these IPC messages can avoid one extra processing step if the service pods (cnSGW-C and SMF) route messages directly to the corresponding peer GTPC nodes, as the sketch after this list illustrates.
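The following minimal Go sketch contrasts the two paths by counting marshalling operations. The message layout and all names (gtpcMsg, encode, decode) are illustrative assumptions, not cnSGW-C or SMF APIs; real GTPv2-C marshalling is far more involved.

  // Sketch: marshalling cost of the relayed IPC path vs the direct path.
  // All names (gtpcMsg, encode, decode) are illustrative, not product APIs.
  package main

  import "fmt"

  type gtpcMsg struct {
      teid    uint32
      payload string
  }

  // encode and decode stand in for real GTPv2-C marshalling; the ops
  // counter records how many marshalling steps a path performs.
  func encode(m gtpcMsg, ops *int) []byte {
      *ops++
      return []byte(fmt.Sprintf("%d|%s", m.teid, m.payload))
  }

  func decode(b []byte, ops *int) gtpcMsg {
      *ops++
      var m gtpcMsg
      fmt.Sscanf(string(b), "%d|%s", &m.teid, &m.payload)
      return m
  }

  func main() {
      msg := gtpcMsg{teid: 0x1001, payload: "CreateSessionRequest"}

      // Relayed path: the service pod encodes, the local gtpc-ep pod decodes
      // and re-encodes, and the peer gtpc-ep pod decodes: four operations.
      relayed := 0
      wire := encode(msg, &relayed)
      hop := decode(wire, &relayed) // local gtpc-ep pod
      wire = encode(hop, &relayed)  // re-encode toward the peer rack
      _ = decode(wire, &relayed)    // peer gtpc-ep pod

      // Direct path: the service pod encodes once and the peer GTPC node
      // decodes once; the intermediate decode/re-encode step is skipped.
      direct := 0
      wire = encode(msg, &direct)
      _ = decode(wire, &direct)

      fmt.Printf("relayed: %d marshalling ops, direct: %d\n", relayed, direct)
  }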

Note

Before you apply the configuration to enable GTPC IPC on the cnSGW-C or SMF interfaces, apply the inter-rack routing networks using cluster sync. Additional configuration is required to add BGP routes that support the new routable networks across the rack servers.
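As an illustration of what the new routable networks imply, the Go sketch below models per-rack GTPI/GTPE prefixes and their next hops as a small lookup table. The prefixes and next-hop addresses are documentation-range placeholders, not values from a real deployment; in practice these routes are distributed through BGP rather than hard-coded.

  // Hypothetical route table for the new cross-rack GTPI/GTPE networks.
  // Prefixes and next hops are documentation-range placeholders.
  package main

  import (
      "fmt"
      "net"
  )

  type route struct {
      dst     *net.IPNet // a rack's GTPI or GTPE subnet
      nextHop net.IP     // router address that BGP would advertise
  }

  func mustCIDR(s string) *net.IPNet {
      _, n, err := net.ParseCIDR(s)
      if err != nil {
          panic(err)
      }
      return n
  }

  func main() {
      // Each rack's GTPI/GTPE subnets must be reachable from the peer rack;
      // this reachability is what the BGP route configuration provides.
      routes := []route{
          {mustCIDR("209.165.200.224/27"), net.ParseIP("209.165.202.129")}, // Rack-1 GTPI
          {mustCIDR("209.165.201.0/27"), net.ParseIP("209.165.202.129")},   // Rack-2 GTPE
      }

      dst := net.ParseIP("209.165.201.10") // e.g., a peer gtpc-ep address
      for _, r := range routes {
          if r.dst.Contains(dst) {
              fmt.Printf("route %s via %s\n", r.dst, r.nextHop)
          }
      }
  }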

The following figure shows the design of the new network layout that is required to support the feature, the core setup activities, and their interconnections.

SGW-GTPC Inter-rack IPC

The following steps describe how GTPC message handling is optimized between the two racks and how the cross-rack endpoints are deployed:

  • The cnSGW-C in IMS-1 Rack-1 routes the IPC request internally to the PGW GTPC-EP in DATA-2 Rack-2, passing through the cross-rack GTPI network to the router.

  • The router then uses the GTPE network as the next hop to forward the request to the gtpc-ep pod.

  • The GTPI and GTPE networks are new networks added to the racks during deployment.

  • The feature also requires that internal GTPC IPC messages are received on the active gtpc-ep pod.

  • In this process, the IPC messages from cnSGW-C to SMF service pods flow over the gtpc-ep pods twice, leading to message encoding and decoding overhead.

  • Within a GR pair, such IPC messages avoid one extra hop of processing when these service pods (cnSGW-C and SMF) route messages directly to the corresponding peer GTPC nodes. A sketch tracing the resulting request path follows the note below.

    Note

    The configured protocol nodes must be in the same VIP group in which the S5 and S5e VIP groups are deployed.
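The following Go sketch traces the request path described in the steps above as a simple next-hop walk. The node and network labels mirror the figure; the next-hop map itself is an invented illustration, not a forwarding table from the product.

  // Hypothetical trace of the cross-rack IPC request path; the labels
  // mirror the figure, but the next-hop map itself is invented.
  package main

  import "fmt"

  // nextHop follows the flow: cnSGW-C -> GTPI -> router -> GTPE -> gtpc-ep.
  var nextHop = map[string]string{
      "cnSGW-C service pod (IMS-1 Rack-1)": "GTPI network (Rack-1)",
      "GTPI network (Rack-1)":              "router",
      "router":                             "GTPE network (Rack-2)",
      "GTPE network (Rack-2)":              "PGW gtpc-ep pod (DATA-2 Rack-2, active)",
  }

  func main() {
      at := "cnSGW-C service pod (IMS-1 Rack-1)"
      fmt.Println("IPC request path:")
      for at != "" {
          fmt.Println("  " + at)
          at = nextHop[at] // missing key returns "", ending the walk at the gtpc-ep pod
      }
  }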