In this post, we will copy files from one Azure storage account to another in order to migrate a Terraform state file into a different storage account container.
Introduction to the Terraform state file
Terraform is an infrastructure-as-code tool that allows users to define, manage, and version infrastructure declaratively. The Terraform state file is a JSON-formatted file that records the current state of the resources managed by Terraform.
The state file is a critical component of Terraform, as it stores the mapping between the configuration and the actual resources created in the cloud provider. Terraform uses the state file to plan, apply, and destroy changes to the infrastructure, and it also relies on it for operations such as inspecting the current state of the infrastructure, detecting drift, and updating resources.
When you run terraform apply, Terraform reads the configuration files, creates or updates the resources, and records the new state of the resources in the state file. When you run terraform destroy, Terraform reads the state file, identifies the resources to destroy, and removes them from the cloud provider.
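As a quick reference, these are the everyday commands that read and update the state file (shown as they would be run from a PowerShell prompt):

terraform plan          # compares the configuration against the recorded state
terraform apply         # creates or updates resources and writes the new state
terraform state list    # shows the resources currently tracked in the state file
terraform destroy       # removes the tracked resources and updates the state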
The state file is saved locally by default, but it can also be stored remotely, such as in a shared storage backend or remote state storage service. Remote state storage is recommended for teams working collaboratively on infrastructure, as it allows multiple users to access and modify the same state file.
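As an illustration, when the state is kept in an Azure storage account, Terraform's azurerm backend can be pointed at that account during initialization. This sketch assumes the configuration declares an azurerm backend block; the resource group, account, container name, and key below are placeholders:

terraform init `
    -backend-config="resource_group_name=rg-resourcegroup-name" `
    -backend-config="storage_account_name=storageaccount2" `
    -backend-config="container_name=tfstate" `
    -backend-config="key=terraform.tfstate"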
It’s important to note that the state file should not be edited manually, and because it contains sensitive information such as resource IDs, keys, and passwords, it should also be stored and handled securely. If the state file is lost or corrupted, it can be rebuilt from the configuration files and the cloud provider APIs, but doing so is time-consuming and error-prone.
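Rebuilding a lost state generally means re-importing each resource into a fresh state file. For example (the resource address and Azure resource ID below are placeholders):

terraform import azurerm_resource_group.example /subscriptions/<subscription-id>/resourceGroups/rg-resourcegroup-name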
In summary, the Terraform state file is a JSON-formatted file that records the current state of the resources managed by Terraform, and it is used to plan, apply, and destroy changes to the infrastructure. The state file should be treated as a critical component of Terraform, and it should be stored securely and not edited manually.
What is AzCopy?
AzCopy is a command-line tool developed by Microsoft that provides fast and reliable data transfer between on-premises systems and Azure Blob Storage, Azure Files, and Azure Data Lake Storage. AzCopy can be used for copying data to, from, or between Azure storage accounts, and it supports a wide range of scenarios such as migration, backup, and disaster recovery.
AzCopy provides several key features that make it a powerful tool for managing data in Azure. For example, it provides high-performance data transfer that can be optimized for a wide range of network conditions. It also supports both recursive and incremental copy operations, allowing you to transfer only the data that has changed since the last transfer. Additionally, AzCopy provides features such as parallelism, network optimization, and checkpoint-resume, which make it suitable for large-scale data transfer scenarios.
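As a rough sketch of what those features look like in practice (the account names, container names, and SAS tokens below are placeholders), a recursive copy between two containers and resuming an interrupted job from its checkpoint would be:

azcopy copy 'https://sourceaccount.blob.core.windows.net/container?<sas-token>' 'https://destaccount.blob.core.windows.net/container?<sas-token>' --recursive
azcopy jobs list               # show previous transfer jobs and their status
azcopy jobs resume <job-id>    # resume an interrupted job from its last checkpoint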
AzCopy is available for Windows, Linux, and macOS, and it can be used with PowerShell and other scripting languages. It also supports integration with Azure Data Factory, allowing you to orchestrate and manage data transfer workflows across multiple data sources and destinations.
In summary, AzCopy is a powerful command-line tool that provides fast and reliable data transfer between on-premises systems and Azure storage services. It provides a wide range of features and is suitable for a variety of scenarios such as migration, backup, and disaster recovery.
The script
The PowerShell script below uses the Azure CLI to look up the access keys for both storage accounts, generates a SAS token for each, downloads AzCopy, and then copies every container from the source account to the destination account.
# Resource group and the source (account1) and destination (account2) storage accounts
$resourcegroupname = 'rg-resourcegroup-name'
$account1name = 'storageaccount1'
$account2name = 'storageaccount2'
# Confirm the accounts and their containers exist (uses your logged-in Azure CLI context)
az storage account list --resource-group $resourcegroupname
az storage container list --resource-group $resourcegroupname --account-name $account1name
az storage container list --resource-group $resourcegroupname --account-name $account2name
# Retrieve the first access key for each storage account
$account1key = $(az storage account keys list --account-name $account1name --resource-group $resourcegroupname --query [0].value --output tsv)
$account2key = $(az storage account keys list --account-name $account2name --resource-group $resourcegroupname --query [0].value --output tsv)

# Echo the keys for verification (note: this prints secrets to the console)
Write-Host 'Account Key for account1:' $account1key
Write-Host 'Account Key for account2:' $account2key
# Confirm the keys work by listing the containers again, this time authenticating with the keys
az storage container list --account-key $account1key --account-name $account1name
az storage container list --account-key $account2key --account-name $account2name
# Generate an account-level SAS token for each account; the expiry must be a date in the future
$expirydate = (Get-Date).AddDays(1).ToString('yyyy-MM-dd')
$account1sas = $(az storage account generate-sas --account-key $account1key --account-name $account1name --expiry $expirydate --https-only --permissions acdlrw --resource-types sco --services bfqt --output tsv)
$account2sas = $(az storage account generate-sas --account-key $account2key --account-name $account2name --expiry $expirydate --https-only --permissions acdlrw --resource-types sco --services bfqt --output tsv)
Write-Host "SAS Token for account1: $account2sas"
Write-Host "SAS Token for account2: $account2sas"
Write-Host "Source url: https://$account1name.blob.core.windows.net?$account1sas"
Write-Host "Destination url: https://$account2name.blob.core.windows.net?$account2sas"
# Download AzCopy v10 for Windows, extract it, and locate azcopy.exe
Invoke-WebRequest -Uri 'https://azcopyvnext.azureedge.net/release20220315/azcopy_windows_amd64_10.14.1.zip' -OutFile 'azcopyv10.zip'
Expand-Archive -Path '.\azcopyv10.zip' -DestinationPath '.\'
$AzCopy = (Get-ChildItem -Path '.\' -Recurse -File -Filter 'azcopy.exe').FullName
# Invoke AzCopy to copy every container and blob from account1 to account2
& $AzCopy copy "https://$account1name.blob.core.windows.net?$account1sas" "https://$account2name.blob.core.windows.net?$account2sas" --recursive