HDD Storage

This chapter describes the mechanism implemented in the ASR 5500 platform for short-term storage of charging records (CDRs) in the event of loss of communication with an external Charging Gateway Function (CGF).

Overview

The hard disk was introduced in the ASR 5500 platform to add storage capability. The first application, used in CDMA environments, increases I/O buffering between the gateway and the L-ESS, relaxing the tight linkage that would otherwise be required to avoid record loss when the ASR 5500 PSC buffers overrun.

The External Storage System (ESS) is a high-availability, fault-tolerant, redundant solution for short-term storage of files containing detail records (UDRs/EDRs/FDRs (xDRs)). To avoid loss of xDRs on the chassis due to overwriting, deletion, or unforeseen events such as power or network failure or unplanned chassis switchover, xDRs are off-loaded to the ESS for storage and analysis, preserving the charging and network analysis information they contain.

The xDR files can be pulled from the chassis by the external storage, or the chassis can push the xDR files to the external storage using the SFTP protocol. In Push mode, the external storage URL to which the xDR files need to be transferred is specified. The configuration allows a primary and a secondary server to be configured; configuring the secondary server is optional. Whenever a file transfer to the primary server fails four consecutive times, the files are transferred to the secondary server instead. A system running with ECS stores xDRs on external storage, and the billing system collects the xDRs from the external storage and correlates them with the AAA accounting messages using 3GPP2-Correlation-IDs (for PDSN) or Charging IDs (for GGSN).

This release now supports purging/deleting xDR records based on a time or volume limit to restrict hard-disk space usage for charging records. When configured, old records are deleted based on the specified storage or time limits.

The second application is intended for the UMTS environment. Records generated on the ASR 5500 are sent via UDP to an external storage application running on (possibly clustered) Sun servers utilizing shared storage. In parallel, records are sent over GTPP to a CGF. In addition to (e)GCDRs, the hard disk supports SCDRs and MCDRs generated by the SGSN.


Important

The hard disk is not designed to support all features of the external storage application, nor is it intended to replace that application in all situations.


The hard disk is useful for other applications:

  • Storing the Content Filtering static ratings database instead of using flash memory; this is valuable in other cases as well, including recovery scenarios.

  • IPMS

  • Large volumes of firewall and other DPI information, such as applications per user and users per base station, that previously could not be stored on the embedded system.

The hard drive serves a number of uses by providing storage for various records generated by the mobile gateway that formerly required buffering or processing outside the gateway, necessitating the purchase and operation of auxiliary servers. For 3GPP2 accounts, the hard disk is an enhancement to service, not a replacement; the hard drive is required to provide non-volatile storage in the ASR 5500. For 3GPP accounts, the hard disk can be used instead of external storage in networks where storage and record-formatting needs can be met by the hard disk, removing the communication link between the ASR 5500 and external storage. GTPP continues to be supported. A given file can be accessed by either GTPP (streaming) or SFTP (file I/O), but not both; different files, however, can be accessed by GTPP and SFTP at the same time.

Benefits

The HDD functionality provides an additional level of protection to the wireless operator by ensuring that charging records are preserved if the Charging Gateway Function (CGF) goes down or loses connectivity with the ASR 5500 gateway. At the same time, the functionality is implemented in a way that requires no additions or modifications to the existing mediation/billing systems.

Supported Records on HDD

This section describes the various records supported on the HDD:

Accounting Request Records (ACR)

Accounting Request Records are a type of CDR that contain offline charging information generated over the Diameter Rf interface. If all the Diameter servers configured in a group are down, ACRs are written to files in formats supported by the external node and stored on the HDD; these files are created when the chassis has no connection to the CDF. From the HDD, ACR files can be pushed/pulled using the FTP/SFTP protocols.


Important

ACRs are supported in 10.0 and later releases.


In StarOS releases prior to 12.3, if HDD is configured on the standby chassis and the HDD profile status is unavailable, pending Diameter accounting requests are removed. From Release 12.3 onwards, when HDD is configured on the standby chassis, Diameter accounting requests are retried irrespective of the HDD profile status until they are successfully written to the HDD. Any misconfiguration of the HDD can therefore result in Diameter accounting requests piling up in the accounting archive list on the standby chassis; the only ways to clean up the list are to unconfigure the HDD or to configure the active HDD properly.

Directory Structure: By default, the ACR records are placed in the following directory paths:
  • RAM-disk: /records/acr/<policy_name>/

  • HDD: /hd-raid/data/records/acr/<policy_name>/
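
For example, a billing system could pull all ACR files for a given policy over SFTP, following the retrieval convention shown later in this chapter (the policy name acrpolicy1, user name billadmin, context name billing, and IP address 192.168.1.10 are illustrative assumptions):

sftp -oUser=billadmin@billing 192.168.1.10:/records/acr/acrpolicy1/*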

File Formats: File formats format1 through format10 are currently supported.

Supported Products: HSGW, P-GW, S-GW

Charging Data Records (CDR)

A Charging Data Record is a formatted collection of information about a chargeable event. The CDRs generated by GGSN/SGSN are sent to an external node for storage. CDRs are written to files in formats supported by the external node and stored on the HDD. From the HDD, CDR files can be pushed/pulled using FTP/SFTP protocols.

For information on how to configure push functionality, refer to the Configuring CDR Push section in this chapter.

Directory Structure: In releases prior to 16.0, by default, the CDRs are placed in the following directory paths for local mode:
  • RAM-disk: /records/cdr/<gtpp_group_name><vpn_id>/

  • HDD: /hd-raid/data/records/cdr/<gtpp_group_name><vpn_id>/

In releases prior to 16.0, the CDRs are stored by default in the following directory paths for GTPP Streaming mode:
  • RAM-disk: /records/cdr/hdd_sec_stor_<gtpp-group-name><vpn-id>/

  • HDD: /hd-raid/data/records/cdr/hdd_sec_stor_<gtpp-group-name><vpn-id>/

In 16.0 and later releases, by default, the CDRs are placed in the following directory paths for local mode:
  • RAM-disk: /records/cdr/<gtpp-group-name>_<context-name>/

  • HDD: /hd-raid/records/cdr/<gtpp-group-name>_<context-name>/

In 16.0 and later releases, the CDRs are stored by default in the following directory paths for GTPP Streaming mode:
  • RAM-disk: /records/cdr/hdd_sec_stor_<gtpp-group-name>_<context-name>/

  • HDD: /hd-raid/records/cdr/hdd_sec_stor_<gtpp-group-name>_<context-name>/
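
For example, for a hypothetical GTPP group named group1 configured in a context named billing, the 16.0 default HDD locations would be /hd-raid/records/cdr/group1_billing/ in local mode and /hd-raid/records/cdr/hdd_sec_stor_group1_billing/ in GTPP Streaming mode.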

File Formats: The GSS file formats Custom1 through Custom8 are supported.

Supported Products: ePDG, GGSN, SGSN, P-GW, S-GW

Diameter Records

The Diameter Records contain CCR messages generated by the Diameter Gy interface. In cases where Assume-Positive interim quota is allocated and the CCR-T is not reported/answered, the CCR-T message is written to a local file and saved on the HDD. This local file and directory information can be fetched and parsed to account for the lost bytes/usage. The file can be retrieved using the PULL mechanism.


Important

This feature requires a valid license to be installed prior to configuring this feature. Contact your Cisco account representative for more information on the licensing requirements.



Important

Diameter records are supported in 19 and later releases.


When the diameter hdd CLI command is configured in the Credit Control Group Configuration mode, the Gy application sends failed CCR-T messages to the CDR module for storage on the HDD. For more information, see the Command Line Interface Reference guide.
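
The following is a minimal configuration sketch; the service name ACS and credit control group name CC-GY are illustrative assumptions, and the exact hierarchy should be verified against the Command Line Interface Reference:

configure
   active-charging service ACS
      credit-control group CC-GY
         diameter hdd
         end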

Directory Structure: By default, the Diameter records are placed in the following directory paths:
  • RAM-disk: /records/diameter/

  • HDD: /hd-raid/data/records/diameter/

Supported Products: HA, P-GW

Event Data Records (EDR)

The EDR subsystem is responsible for the definition, generation, and offloading of Event Data Records, generated in the system as a result of the occurrence of an event, to the external billing system. EDRs are primarily used for content billing, where a different charging unit must be employed for different types of content, e.g. HTTP, SMTP, MMS, and so on. EDRs are a type of usage record whose fields are configurable by the operator. EDRs are generated per flow, subject to the available configuration.

Directory Structure: By default, the EDRs are placed in the following directory paths:
  • RAM-disk: /records/edr/

  • HDD: /hd-raid/data/records/edr/

File Formats: In this release, EDRs are supported in the Comma Separated Values (CSV) format.

Supported Products: ECS and other products/features using ECS

Event Records

Event reporting is a mechanism by which subscriber activities such as session creation/deletion and bearer creation/modification/update/deletion are reported to an external server (RTT server). The event report logs assist network operators in maintaining and troubleshooting the network. The event records are stored as files on the HDD, and these files are later transferred via SFTP to the external RTT server. The event record files are collected, stored, and compressed by the Call Detail Records Module (CDRMOD), which provides support for these operations.


Important

Event Records are supported in 12.2 and later releases.


Directory Structure: By default, the Event records are placed in the following directory paths:
  • RAM-disk: /records/event/

  • HDD: /hd-raid/data/records/event/

File Formats: In this release, Event Records are supported in the Comma Separated Values (CSV) format.

Supported Products: SGSN, S-GW

Reporting Event Data Records (REDR)

Reporting Event Data Records are a type of CDR containing EDRs generated on flow-end conditions, that is, flow-end EDRs and HTTP transaction EDRs. REDR records are written to files in formats supported by the external node and stored on the HDD. From the HDD, REDR records can be pushed/pulled using the FTP/SFTP protocols.


Important

REDRs are supported in 12.2 and later releases.


Directory Structure: By default, the REDRs are placed in the following directory paths:
  • RAM-disk: /records/redr/

  • HDD: /hd-raid/data/records/redr/

File Formats: In this release, REDRs are supported in the Comma Separated Values (CSV) format.

Supported Products: ECS and other products/features using ECS

Usage Data Records (UDR)

Usage Data Records contain accounting information related to a specific mobile subscriber. UDRs are generated and stored on the system as records in CSV (comma-separated values) format. The CDR subsystem, in conjunction with the External Storage Server (ESS), is responsible for offloading the UDRs. UDRs are generated per content type. The fields required as part of usage data records are configurable and stored in the System Configuration Task (SCT). UDRs are generated at the end of a call, i.e. on call termination, time threshold, volume threshold, and handoff.

Directory Structure: By default, the UDRs are placed in the following directory paths:
  • RAM-disk: /records/udr/

  • HDD: /hd-raid/data/records/udr/

File Formats: In this release, UDRs are supported in the Comma Separated Values (CSV) format.

Supported Products: GGSN, HA, PDSN

Hardware Overview

This section provides information on the hardware components that comprise the HDD feature in the ASR 5500.

The HDD functionality takes advantage of the hard disk available on the System Management Card (SMC) of the ASR 5500. The SMC serves as the primary controller and is responsible for initializing the entire system and loading the software configuration image into the other cards in the chassis as applicable. SMCs are installed in chassis slots 8 and 9. During normal operation, the SMC in slot 8 serves as the primary (Active), while the SMC in slot 9 serves as the secondary (Standby).

Each SMC contains an enterprise-class Serial Attached SCSI (SAS) hard disk used to load and store configuration data, software updates, buffered accounting information, and diagnostic or troubleshooting information. Space for CDR storage on the internal hard disk is 100 gigabytes (GB). Redundant control mechanisms allow data to be written to the hard disks on both the active and standby SMCs.


Important

No hardware changes (PSC, SMC, chassis, etc.) are required to enable CDR Storage and Retransmission. However, an appropriate software version must be loaded on the ASR 5500.


How HDD Works

This section describes the working of the HDD functionality.

The CDR Storage and Retransmission functionality works without requiring external storage. In normal operating mode, when the CGF is up and reachable, the ASR 5500 streams CDRs to the CGF. If the CGF becomes unreachable, the ASR 5500 starts temporarily storing CDRs on the internal hard disk. Once the CGF is up again, the ASR 5500 streams the records stored on its hard disk to the external CGF via the GTPP protocol. This is called the streaming mode of operation.

When CDR Internal Storage and Retransmission is configured, the ASR 5500 continuously checks the reachability of the configured CGFs. When there is no reply to Echo Requests or response to signaling messages from the CGF, the ASR 5500 assumes that the CGF is down and starts storing CDRs on its internal hard disk.


Important

Only one CGF server per GTPP group is supported.


This function in the ASR 5500 incorporates partial external storage functionality inside the ASR 5500 gateway. The following diagram depicts the mechanism using external storage (no hard disk configured in the ASR 5500) and using the hard disk.

Figure 1. HDD Mechanism


The following example shows the amount of time that CDRs can be stored on the internal hard disk, and therefore the coverage in case the CGF goes down. Assuming a CDR size of 350 bytes, approximately 285 million CDRs can be stored in 100 GB of hard disk. Based on information from deployed systems, assuming 2M sessions per gateway (say, GGSN) at peak busy hour and approximately 2 GCDRs generated per session per hour, a peak rate of 4M (million) records/hour is the worst-case scenario for the busy hour; sustained around the clock, that is 96M CDRs per day, or about 2.9 days of storage. Assuming an average load of 75% of that busy hour, 0.75 x 96M = 72M CDRs per day, which at 350 bytes per CDR yields approximately 4 days of storage.
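
The arithmetic behind these figures is as follows:

100 GB / 350 bytes per CDR ≈ 285 million CDRs of storage
4M CDRs/hour x 24 hours = 96M CDRs/day; 285M / 96M ≈ 2.9 days
0.75 x 96M CDRs/day = 72M CDRs/day; 285M / 72M ≈ 4 days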

CDR Streaming via GTPP

The HDD is used to store CDRs when the CGF fails; the CDRs are then streamed to the CGF once it is up again. Streaming can be done in First-In-First-Out (FIFO) or parallel mode. The parallel mode of operation was introduced in release 16.0. In FIFO mode, newly generated CDRs are routed to the CGF via the HDD. In parallel mode, newly generated CDRs from the AAA Managers are sent directly to the CGF server, alongside the CDRs streamed from the hard disk.

With streaming mode enabled, CDRs are written to the HDD when the OCG connectivity is down. Once the OCG connectivity is up, the HDD contents are sent in FIFO order. New records continue to be written to disk as long as the disk contents have not been fully flushed; once the disk is completely flushed, records can again be sent to the OCG directly.

In the FIFO implementation, CDR streaming works, but the process of sending newly generated CDRs to the HDD and then to the CGF is inefficient. Also, if the CDR generation rate is high and the streaming rate is slow, the files on the HDD keep increasing until eventually no space is left on the hard disk, leading to CDR purging. This can be avoided by sending newly generated CDRs directly to the CGF while streaming from the HDD is in progress.

With parallel mode, this behavior is changed to send new CDRs directly to CGF when streaming from HDD is in progress.

The existing CLI command gtpp storage-server mode streaming, which sets the streaming mode, is extended: the parallel keyword used with this command enables the new parallel streaming mode.
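
For example, in the GTPP Group Configuration Mode (shown here as a sketch; verify the exact syntax against the Command Line Interface Reference):

gtpp storage-server mode streaming parallel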

In the current implementation, during streaming, GTPP requests are read from the HDD and sent to the CGF server until the configured maximum number of outstanding requests is reached.

In parallel mode, the rate of streaming from the HDD is slower. It is assumed that the billing domain is capable of handling out-of-order CDRs in parallel streaming mode.

CDR Streaming Workflow

This section describes the streaming behavior in the streaming (FIFO) and parallel modes.

  • CGF is reachable, No files in HDD:

    FIFO: newly generated CDRs will be sent to CGF server.

    Parallel: newly generated CDRs will be sent to CGF server.

  • CGF server is not reachable:

    FIFO: newly generated CDRs will be stored to HDD.

    Parallel: newly generated CDRs will be stored to HDD

  • CGF is not reachable, HDD has less than 3 files, Now CGF becomes active:

    FIFO: AAAMgr is blocked from sending new CDRs. GTPP requests in the HDD are first streamed out to the CGF server. After all requests in the HDD are flushed, new CDRs are again sent to the CGF.

    Parallel: Same behavior as in FIFO mode.

  • CGF is not reachable, HDD has 3 or more CDR files, Now CGF becomes active:

    FIFO: GTPP requests in HDD are streamed to CGF server. Newly generated CDRs will be stored in HDD and then sent to CGF.

    Parallel: GTPP requests in HDD are streamed to CGF server at a slower pace. Newly generated CDRs will be sent directly to CGF server.

Assumptions / Limitations

  • The rate of streaming from HDD would be slower in parallel mode.
  • Billing domain should be capable of handling Out-Of-Order CDRs in parallel streaming mode.

Deployment Scenarios

The HDD functionality is enabled in the ASR 5500 gateway in the following deployment scenarios:

  • CGF configured but not reachable: The ASR 5500 attempts to stream the CDRs to the configured CGF. If the CGF does not respond to queries from the ASR 5500 or to GTP messages, CDRs are stored on the internal HDD for future retransmission when the CGF becomes reachable again.

  • CGF configured and active, then goes down: The ASR 5500 was sending CDRs to the CGF (via GTPP) normally. Upon loss of reachability of the CGF, the ASR 5500 determines that the CGF is down and starts storing CDRs on its internal HDD.

  • CGF configured, goes down and later becomes available: CDRs are sent (streamed) to the CGF until it becomes unreachable. After the ASR 5500 determines the CGF is down/unreachable, it starts storing CDRs on the internal HDD. When the CGF becomes available again, the stored CDRs are streamed to the CGF, starting with the oldest CDR first.

HDD Configuration

This section describes how to configure the HDD.


Configuring HDD

This section describes how to configure the HDD feature.


Important

This feature is disabled by default in the ASR 5500.


In GTPP group mode, an option is added to the existing gtpp storage-server mode command in the ASR 5500 to enable this functionality with local fallback (a configuration sketch follows the notes below):

default gtpp storage-server mode { local | remote | streaming } 

Notes:

  • default: Returns the GTPP group configuration to the default value, 'remote' (the ASR 5500 streams CDRs to the configured external CGF), for the GTPP group.

  • If remote is configured, the ASR 5500 sends CDRs to the external CGF. If the CGF is down or unreachable, CDRs will be lost.

  • If local is configured, records are stored on the ASR 5500's internal hard disk. The mediation/billing system can retrieve the records through Secure FTP (SFTP).

  • If streaming is configured, the CDRs are sent to the CGF by default. If the CGF is down or unreachable, CDRs are temporarily stored on the internal hard disk and streamed to the CGF once it becomes available again.
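
The following is a minimal configuration sketch that enables the streaming mode; the context name billing and GTPP group name group1 are illustrative assumptions:

configure
   context billing
      gtpp group group1
         gtpp storage-server mode streaming
         end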

Configuring EDR/UDR Parameters

This section provides an example configuration to configure EDR/UDR file transfer and file properties parameters, including configuring hard disk support on the SMC card on the ASR 5500, transfer modes, transfer interval, etc.

To configure EDR/UDR file parameters:

configure 
     context <context_name> 
          edr-module active-charging-service 
               cdr { purge { storage-limit storage_limit | time-limit time_limit } [ max-files max_records_to_purge ] | push-interval push_interval | push-trigger space-usage-percent trigger_percentage | remove-file-after-transfer | transfer-mode { pull [ module-only ] | push primary { encrypted-url encrypted_url | url url } [ [ max-files max_records ] [ module-only ] [ secondary { encrypted-secondary-url encrypted_secondary_url | secondary-url secondary_url } ] [ via local-context ] + ] | use-harddisk } 
               file [ charging-service-name { include | omit } ] [ compression { gzip | none } ] [ current-prefix string ] [ delete-timeout seconds ] [ directory directory_name ] [ edr-format-name ] [ exclude-checksum-record ] [ field-separator { hyphen | omit | underscore } ] [ file-sequence-number rulebase-seq-num ] [ headers ] [ name file_name ] [ reset-indicator ] [ rotation [ num-records number | time seconds | volume bytes ] ] [ sequence-number { length length | omit | padded | padded-six-length | unpadded } ] [ storage-limit limit ] [ single-edr-format ] [ time-stamp { expanded-format | rotated-format | unix-format } ] [ trailing-text string ] [ trap-on-file-delete ] [ xor-final-record ] + 
               exit 
          udr-module active-charging-service 
               cdr { purge { storage-limit storage_limit | time-limit time_limit } [ max-files max_records_to_purge ] | push-interval push_interval | push-trigger space-usage-percent trigger_percentage | remove-file-after-transfer | transfer-mode { pull [ module-only ] | push primary { encrypted-url encrypted_url | url url } [ [ max-files max_records ] [ module-only ] [ secondary { encrypted-secondary-url encrypted_secondary_url | secondary-url secondary_url } ] [ via local-context ] + ] | use-harddisk } 
               file [ charging-service-name { include | omit } ] [ compression { gzip | none } ] [ current-prefix string ] [ delete-timeout seconds ] [ directory directory_name ] [ exclude-checksum-record ] [ field-separator { hyphen | omit | underscore } ] [ file-sequence-number rulebase-seq-num ] [ headers ] [ name file_name ] [ reset-indicator ] [ rotation [ num-records number | time seconds | volume bytes ] ] [ sequence-number { length length | omit | padded | padded-six-length | unpadded } ] [ storage-limit limit ] [ time-stamp { expanded-format | rotated-format | unix-format } ] [ trailing-text string ] [ trap-on-file-delete ] [ udr-seq-num ] [ xor-final-record ] + 
               end 

Notes:

  • The cdr command can be configured in either the EDR or the UDR Configuration Mode. Configuring it in one mode prevents the configuration from being applied in the other mode.

  • The use-harddisk keyword is only available on the ASR 5500.

  • For the storage-limit parameter, on the ST16 platform, the total storage limit is 256 MB (268435456 bytes).

    On the ASR 5500, the total storage limit is 512 MB (536870912 bytes). This limit is for both UDR and EDR files combined.

  • The push keyword is used to send the EDR/UDR files to the configured L-ESS or any other external server.

  • The purge keyword is used to purge or delete the EDR/UDR records based on a time or volume limit. By default, no purge operation is performed by the VPNMGR module.

    When the configured threshold limit is reached on the hard disk drive, the records that are created dynamically in the /mnt/hd-raid/data/records/ directory are automatically deleted. Files that are manually created should be deleted manually.

  • The max-files keyword allows the operator to configure the maximum number of files sent per iteration based on the configured file size.

For more information on this command, refer to the Command Line Interface Reference.
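
As an illustration only, the following sketch configures the EDR module to use the hard disk and push files to a primary server; the context name, URL, and parameter values are assumptions, not recommendations:

configure
   context billing
      edr-module active-charging-service
         cdr use-harddisk
         cdr transfer-mode push primary url sftp://ecpadmin@192.168.1.10/edr-push max-files 20
         cdr push-interval 300
         cdr remove-file-after-transfer
         end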

Viewing Statistics

To view EDR-UDR file statistics, in the Exec Mode, enter the following command:

show cdr statistics

Pushing EDR/UDR Files Manually

To manually push EDR/UDR files to the configured L-ESS, in the Exec mode, use the following command:

cdr-push { all | local-filename file_name }

Notes:

  • Before you can use this command, the EDR/UDR transfer mode must be set to push, with the file locations configured, in the EDR/UDR Module Configuration Mode.

  • The cdr-push command is available in the Exec Mode.

  • file_name must be the absolute path of the local file to push.
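
For example, to push a single EDR file (the file path shown is illustrative):

cdr-push local-filename /hd-raid/data/records/edr/edr_file1.csv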

Retrieving EDR and UDR Files

To retrieve UDR or EDR files you must SFTP into the context that was configured for EDR or UDR file generation.

This was done with the FTP-enabled account that you configured in the Enabling Charging Record Retrieval section.

The following commands use SFTP to log on to a context named ECP as a user named ecpadmin, through an interface configured in the ECS context that has the IP address 192.168.1.10, and retrieve all EDR or UDR files from the default locations:

sftp -oUser=ecpadmin@ECP 192.168.1.10:/records/edr/* 
sftp -oUser=ecpadmin@ECP 192.168.1.10:/records/udr/* 

Configuring CDR Push

This section provides an example configuration to configure CDR file transfer and file properties parameters, including configuring hard disk support on the SMC card on the ASR 5500, transfer modes, transfer interval, and so on.


Important

This CDR push feature is applicable to all types of CDRs, for example GCDRs, eGCDRs, PGW/SGW CDRs, and SGSN CDRs.

To configure CDR push feature:

configure 
   context context_name 
      gtpp group group_name 
         gtpp storage-server local file { compression { gzip | none } | format { custom1 | custom2 | custom3 | custom4 | custom5 | custom6 | custom7 | custom8 } | name { format string [ max-file-seq-num seq_number ] | prefix prefix } | purge-processed-files [ file-namepattern name_pattern | purge-interval purge_interval ] | push { encrypted-url encrypted_url | url url } [ encrypted-secondary-url encrypted_url | secondary-url url ] [ via-local-context ] | rotation { cdr-count count | time-interval time [ force-filerotation ] | volume mb size } | start-file-seq-num seq_num [ recover-file-seq-num ] } 
         end 

Notes:

  • The gtpp storage-server local file push command enables the push mode. This configuration allows a primary and a secondary server to be configured. When a file transfer to the primary server fails four times, the transfer of CDR files automatically fails over to the secondary server. The transfer switches back to the original primary server after 30 minutes, or if there are four transfer failures to the secondary server.


    Important

    After you configure the gtpp storage-server local file push command, you must save the configuration and then reload the chassis for the command to take effect. For information on saving the configuration file and reloading the chassis, refer to the System Administration Guide for your deployment.


  • The via-local-context keyword specifies whether to use the GTPP group's context or the local context to push files to and reach the remote server.


    Important

    If the push is done through the GTPP group's context, the push rate is lower than via the local context, because the HDD is attached to the local context.


    For more information on this command, refer to the Command Line Interface Reference.


Important

After you configure the gtpp storage-server local file compression { gzip | none } CLI command, you must save the configuration and then reload the chassis for the command to take effect. For information on saving the configuration file and reloading the chassis, refer to the System Administration Guide for your deployment.
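
Putting these options together, the following sketch enables CDR push to a primary and a secondary server via the local context; the context name, GTPP group name, and URLs are illustrative assumptions:

configure
   context billing
      gtpp group group1
         gtpp storage-server local file push url sftp://cgfadmin@192.168.1.20/cdr-push secondary-url sftp://cgfadmin@192.168.1.21/cdr-push via-local-context
         end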


Viewing Statistics

To view CDR file statistics, in the Exec Mode, enter the following command:

show cdr statistics

Pushing CDR Files Manually

To manually push CDR files to the configured remote server, in the Exec mode, use the following command:

cdr-push { all | local-filename file_name }

Notes:

  • Before you can use this command, the CDR transfer mode must be set to push, with the file locations configured, in the GTPP Group Configuration Mode.

  • The cdr-push command is available in the Exec Mode.

  • file_name must be the absolute path of the local file to push.
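
For example, to push all pending CDR files at once:

cdr-push all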

Retrieving CDR Files

To retrieve CDR files you must SFTP into the context that was configured for CDR file generation.

This was done with the FTP-enabled account that you configured in the Enabling Charging Record Retrieval section.

The following commands use SFTP to log on to a context named ECP as a user named ecpadmin, through an interface configured in the ECS context that has the IP address 192.168.1.10, and retrieve all CDR files from the default locations:

sftp -oUser=ecpadmin@ECP 192.168.1.10:/records/cdr/<gtpp-group>-<vpnid>/*