As far as migration solutions go, this one is very straightforward: Redshift runs in AWS, so the UNLOAD command can unload table data directly to an S3 bucket.
Unload data from the emp table to a private S3 bucket and path named mybucket and mypath, respectively:
unload ('select * from emp')
to 's3://mybucket/mypath/emp-'
credentials 'aws_access_key_id=XXX;aws_secret_access_key=XXX'
delimiter '\001'
null '\\N'
escape
allowoverwrite  -- optional: overwrite any files already at the target path
gzip;           -- optional: compress the unloaded files
Notes:
• By default, the UNLOAD command unloads files in parallel from Redshift, creating multiple files. To unload to a single file, use the PARALLEL OFF (or PARALLEL FALSE) option, as in the sketch below.
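For example, here is a minimal sketch of the statement above adapted to produce a single compressed file (the emp-single- prefix is a hypothetical choice for this variant; credentials placeholders as before):
unload ('select * from emp')
to 's3://mybucket/mypath/emp-single-'  -- hypothetical prefix for the single-file variant
credentials 'aws_access_key_id=XXX;aws_secret_access_key=XXX'
delimiter '\001'
null '\\N'
escape
parallel off  -- write a single output file rather than one per slice
gzip;
Note that even with PARALLEL OFF, Redshift caps each output file at 6.2 GB by default, so very large tables will still produce multiple files.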
Load the emp table data from the S3 bucket into the corresponding emp table in Snowflake:
copy into emp
from 's3://mybucket/mypath/'
credentials = (aws_key_id = 'XXX' aws_secret_key = 'XXX')
file_format = (
type = csv
field_delimiter = '\001'
null_if = ('\\N')
);
Notes:
• The FIELD_DELIMITER option specifies the CTRL+A (\001) character as the field delimiter. If your data uses a different field delimiter, specify that character instead. Verify that the field delimiter character is not present in your text data, and replace any instances before loading the data into Snowflake (see the sketch after these notes).
• The NULL_IF option differentiates between NULL (\\N) and empty strings. This option is necessary because the UNLOAD command example does not quote text fields.
• The FIELD_DELIMITER and NULL_IF values were chosen for this example because they match the default text formats for Hive and PostgreSQL COPY for unquoted strings.
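If a text column might contain the CTRL+A delimiter, one option is to strip it in the UNLOAD query itself, before the data ever lands in S3. Here is a minimal sketch, assuming a hypothetical text column named comments on emp (it uses Redshift's replace and chr functions; note the doubled single quotes required inside the UNLOAD string literal):
unload ('select empno, ename, replace(comments, chr(1), '''') from emp')  -- empno, ename, comments are hypothetical columns
to 's3://mybucket/mypath/emp-'
credentials 'aws_access_key_id=XXX;aws_secret_access_key=XXX'
delimiter '\001'
null '\\N'
escape;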
Further reading:
• http://docs.aws.amazon.com/redshift/latest/dg/r_UNLOAD.html (Redshift UNLOAD documentation)
• https://docs.snowflake.net/manuals/user-guide/data-load.html (Snowflake data loading guide)
• https://docs.snowflake.net/manuals/sql-reference/sql/copy-into-table.html (Snowflake COPY INTO reference)