AWS introduces Data Transfer Terminals for high-speed cloud uploads

AWS has announced Data Transfer Terminal locations where customers can bring their own storage devices and upload the data on them to the AWS cloud.

The Data Transfer Terminal (DTT) scheme was announced at AWS’s re:Invent event. Customers carry in their own storage device and upload its data to Amazon S3, Amazon EFS (Elastic File System), and other services. There are just two initial DTT locations: Los Angeles and New York. AWS says its DTTs “are ideal for customer scenarios that create or collect large amounts of data that need to be transferred to the AWS cloud quickly and securely on an as-needed basis.”
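For data bound for S3, large objects are normally sent as multipart uploads. As a rough sketch, not tied to any DTT-specific tooling, planning the part layout for a large file might look like this, using S3’s documented limits of 10,000 parts per upload and a 5 MiB minimum part size:

```python
import math

# S3 multipart uploads allow at most 10,000 parts, and every part except
# the last must be at least 5 MiB. Given a file size, pick a part size
# that satisfies both limits. (Illustrative sketch, stdlib only.)
MAX_PARTS = 10_000
MIN_PART = 5 * 1024 * 1024

def plan_multipart(size_bytes, part_size=128 * 1024 * 1024):
    # Grow the part size if the file would otherwise exceed 10,000 parts.
    part_size = max(part_size, MIN_PART, math.ceil(size_bytes / MAX_PARTS))
    return math.ceil(size_bytes / part_size), part_size

parts, psize = plan_multipart(10 * 1024**4)  # a 10 TiB file
# -> 10,000 parts of roughly 1.02 GiB each
```

Larger parts mean fewer round trips per object, which matters more on a very fast uplink than on a home connection.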

Suggested upload scenarios include video production data for the media and entertainment industry, training data for Advanced Driver Assistance Systems (ADAS), legacy data migration in the financial services industry, and equipment sensor data for the industrial and agricultural sectors. The uplink is claimed to be high speed, and customers reserve an allotted visit time via the AWS Console.

The cloud giant says: “You can schedule an upload window using the Data Transfer Terminal console for your preferred location, date, time, and duration, and then confirm your reservation. You can identify the individuals who are authorized to access the space during the reservation. When the reservation is confirmed, each authorized individual will receive instructions regarding where and how to access the Data Transfer Terminal facility.”

Once uploaded, data can be processed using Amazon Athena for large-dataset analysis, SageMaker for training and deploying machine learning models, or general-purpose applications running on EC2 (Elastic Compute Cloud).

AWS suggests customers can “significantly reduce the time it takes to upload large amounts of data, enabling [them] to process ingested data within minutes, as opposed to days or weeks,” compared with shipping storage devices to AWS. However, AWS has not specified the upload speed or bandwidth. We’re told each DTT facility is equipped with at least two 100 Gbps optical fiber links to the AWS network.
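As a back-of-envelope illustration, and assuming the full 2 × 100 Gbps were usable, which protocol overhead and disk speeds will prevent in practice, the stated link capacity implies transfer times like these:

```python
# Rough transfer-time arithmetic; assumes the link's full nominal
# capacity is usable, which real-world overhead will reduce.
def transfer_hours(terabytes, gbps=200.0):
    bits = terabytes * 1e12 * 8          # decimal terabytes to bits
    return bits / (gbps * 1e9) / 3600.0  # seconds to hours

print(round(transfer_hours(100), 2))          # 100 TB at 2 x 100 Gbps -> 1.11 hours
print(round(transfer_hours(100, 1) / 24, 1))  # same 100 TB at 1 Gbps -> 9.3 days
```

Even at a fraction of nominal capacity, that is the gap between an afternoon visit and a week of uploading.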

Customers are told that “to prepare for using the Data Transfer Terminal facility and connecting to the network, you need to ensure your uploading device is prepared to connect to the network. You should have the following for an optimal data upload experience:

  • A transceiver type 100G LR4 QSFP
  • An active IP auto configuration (DHCP)
  • Up-to-date software/transceiver drivers.”
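Once DHCP has assigned an address, a quick sanity check that the target service endpoint is reachable can save a wasted reservation slot. A minimal stdlib sketch (the endpoint hostname is illustrative):

```python
import socket

def endpoint_reachable(host="s3.amazonaws.com", port=443, timeout=5.0):
    """Return True if the host resolves and accepts a TCP connection."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:  # covers DNS failures, timeouts, refused connections
        return False
```

A check like this only confirms basic connectivity; actual upload throughput still depends on the transceiver, drivers, and link negotiation listed above.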

The DTT scheme is an alternative to AWS’s Snowball service, in which AWS ships a ruggedized storage device to a customer’s location, the customer loads data onto it, and the device is then shipped back to an Amazon location for upload to the AWS cloud.

AWS also offers Direct Connect, a dedicated, high-bandwidth, low-latency network link between an on-premises site, such as a colocation provider’s cloud-adjacent data center, and AWS. Direct Connect suits ongoing data transfer and real-time data access, whereas DTT visits are for as-needed bulk uploads.

It is likely that most major AWS regions will eventually feature DTT locations.

Read a DTT FAQ and learn about other features here. Check out somewhat sparse DTT documentation here.

AWS Data Transfer Terminal reservations are billed in port-hour increments, and pricing may vary based on network utilization. A pricing page should be live soon, though it was unavailable at the time of writing.