Data migrations often represent the largest technical hurdle many of our clients face on their journey to the AWS cloud. However, thanks to Amazon’s multiple offerings, our clients’ challenge comes down to choosing the best data migration options for their environment. To choose the right solution, they first need to understand their data and which enterprise AWS data migration options best align with it.
Data Migration Begins with a Data Assessment
Cloud Shift takes each client through our AWS Enterprise SureStart© program. As part of this program, Cloud Shift assesses each client’s applications and data usage. At a minimum, this assessment determines the:
- Amount of data to migrate
- Data change rates
- Frequency of data access
- Time available to move the data
- Compliance requirements for the data
- Available Internet bandwidth
Though we collect other data, this baseline set of information largely informs Cloud Shift as to the best AWS solution to recommend.
In the simplest data migration scenarios, we advise a client to use the free AWS command line tool to sync local directories with S3 buckets. This approach suits clients with minimal amounts of data to move. In these cases, the data is static and infrequently accessed, and the client has ample time to move it. Best of all, s3 sync is simple and free.
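For example, a one-time sync of a local directory to an S3 bucket might look like the following sketch. It assumes the AWS CLI is installed and configured with credentials; the local path and bucket name are placeholders.

```shell
# Sync a local directory to an S3 bucket (path and bucket name are hypothetical).
# s3 sync only uploads files that are new or changed since the last run.
aws s3 sync /data/reports s3://example-client-bucket/reports
```

Re-running the same command later picks up only new and changed files, which is why it works well for static, slowly changing data.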
Enterprise Challenges Demand Enterprise Solutions
However, many times our enterprise clients face more daunting data migration challenges. They have large amounts of data to migrate, perhaps hundreds of terabytes or even petabytes. Their applications frequently access and change the data. They may be under a time crunch to migrate the data in a few weeks or month. They may have little or no downtime to switch application processing from on-premises to the cloud. These challenges and constraints demand a more robust data migration solution.
To address these various data migration challenges AWS offers two solutions: AWS Snowball and AWS File Gateway.
Cloud Shift frequently recommends AWS Snowball to our enterprise clients that must move large amounts of static data quickly. AWS Snowball is a disk storage device that the client rents from Amazon, which ships the device to our client’s site. Our client then copies its data onto the Snowball and, once the copy completes, ships the device back to AWS. After AWS receives it, AWS uploads the data from the Snowball into the client’s S3 bucket(s).
Snowball bears a striking resemblance to the s3 sync method. It primarily differs in that a client does not move the data over its existing Internet connection. Rather, one copies data directly onto the Snowball. In this way, the data migration does not consume any network bandwidth.
A Snowball often accelerates a data migration since it can move up to 80 terabytes at a time. Moving that same amount of data over a WAN connection can take weeks or even months. In these circumstances, Snowball works very well: our client can move terabytes or even petabytes of data within a limited time frame while bypassing its existing Internet connection.
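To illustrate why, here is a rough back-of-the-envelope calculation of how long 80 TB takes over common link speeds. It assumes full, sustained link utilization, which real networks rarely achieve, so actual transfers run longer.

```shell
# Estimate transfer time for 80 TB at several link speeds.
# bits to move = TB * 8e12; seconds = bits / (Mbps * 1e6)
DATA_TB=80
for MBPS in 100 1000 10000; do
  DAYS=$(awk -v tb="$DATA_TB" -v mbps="$MBPS" \
    'BEGIN { printf "%.1f", tb * 8e12 / (mbps * 1e6) / 86400 }')
  echo "${MBPS} Mbps: ~${DAYS} days"
done
# → 100 Mbps: ~74.1 days
# → 1000 Mbps: ~7.4 days
# → 10000 Mbps: ~0.7 days
```

Even a dedicated gigabit link needs over a week of uninterrupted transfer for 80 TB, which is why shipping a Snowball is often faster.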
The downside of Snowball is that it doesn’t keep the customer’s on-premises data in sync with the uploaded data in the cloud. If the on-premises data changes after the copy, we must send the updated files from on premises to S3 ourselves. If bandwidth allows, the AWS CLI can sync the changed files after the Snowball has shipped.
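Such a catch-up pass might look like the following sketch (paths and bucket name are placeholders). Because s3 sync compares local files against what is already in the bucket, only files changed since the Snowball copy are re-uploaded.

```shell
# Preview the delta first: list what would transfer without uploading anything.
aws s3 sync /data/reports s3://example-client-bucket/reports --dryrun

# Then run the actual catch-up transfer of changed and new files.
# Add --delete only if files removed on premises should also be removed in S3.
aws s3 sync /data/reports s3://example-client-bucket/reports
```

The --dryrun pass also gives a quick estimate of how much data the catch-up will consume, which helps confirm the client’s bandwidth can absorb it.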
AWS File Gateway
In situations where the data is changing frequently and bandwidth is adequate, Cloud Shift recommends AWS File Gateway, Amazon’s other enterprise data migration solution. AWS File Gateway is a virtual appliance that runs in the client’s on-premises VMware farm and functions as a file server replicating to the cloud.
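Once a file share is created on the gateway, applications access it like any file server. On Linux, for instance, an NFS share might be mounted as in the sketch below; the gateway address and share name are placeholders for the client’s actual values.

```shell
# Mount an NFS file share exposed by the File Gateway appliance.
# 192.0.2.10 and /example-share are placeholders for the real gateway IP and share name.
sudo mkdir -p /mnt/filegateway
sudo mount -t nfs -o nolock,hard 192.0.2.10:/example-share /mnt/filegateway

# Files written here are served locally by the gateway and replicated to S3 in the background.
cp -r /data/reports /mnt/filegateway/
```

Because applications keep reading and writing through the mount point, they stay online while the gateway moves the data to S3 behind the scenes.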
For instance, Cloud Shift verifies that the client’s application regularly accesses, uses, and changes much of the data within a short time. If that holds true, using Snowball makes little sense: too much of the client’s data would change between the time the Snowball copy is made and the time AWS makes that data available in the cloud.
Using AWS File Gateway, our clients can maintain an active copy of the data on premises for their applications’ requirements. In the background, File Gateway copies the data, and any changes made to it, to the AWS cloud.
Once the data resides in the AWS cloud, our clients can make copies of it there and use those copies to simulate their production environment. Once validated, they can switch their production application to the AWS cloud, often in under an hour and sometimes in minutes, and continue processing.
Our clients must have a high-bandwidth Internet connection to migrate large amounts of data quickly. Alternatively, they may choose to let AWS File Gateway perform the migration over a lower-bandwidth existing Internet connection, provided they understand that during the migration period their other applications may experience degraded Internet connectivity.
In instances where we have determined that a client has ample Internet bandwidth and we know the data migration will not impact its other production applications, Cloud Shift often recommends AWS File Gateway.
A Successful Data Migration Is a Precursor to a Successful Cloud Experience
Cloud Shift understands and respects its clients’ concerns about migrating data to the AWS cloud. These concerns motivate Cloud Shift to find the best solution or solutions to migrate their data to the cloud.
To do so, Cloud Shift uses its AWS Enterprise SureStart program to identify each client’s specific requirements. Only after Cloud Shift understands those needs does it recommend a data migration plan that aligns with them. Whether that recommendation includes one or multiple data migration tools, our end game is always the same: to provide our clients with the best data migration possible so they can have the best cloud experience possible.