A financial company uses Amazon S3 as its data lake and has set up a data warehouse using a multi-node Amazon Redshift cluster. The data files in the data lake are organized in folders based on the data source of each file. All the data files are loaded into one table in the Amazon Redshift cluster using a separate COPY command for each data file location. With this approach, loading all the data files into Amazon Redshift takes a long time to complete. Users want a faster solution with little or no increase in cost while maintaining the segregation of the data files in the S3 data lake.
Which solution meets these requirements?
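One commonly discussed way to speed up this kind of load is to replace the many per-folder COPY commands with a single COPY that reads a manifest file listing every object, so Redshift can load all the files in parallel across the cluster's slices while the folder-per-source layout in S3 stays untouched. The sketch below builds such a manifest; the bucket, prefixes, and file names are hypothetical, and in practice the object keys would come from an S3 listing rather than a hard-coded map.

```python
import json

# Hypothetical per-data-source folders in the S3 data lake (assumed names).
source_prefixes = [
    "s3://example-datalake/source_a/",
    "s3://example-datalake/source_b/",
    "s3://example-datalake/source_c/",
]

def build_manifest(prefixes, files_per_prefix):
    """Build a Redshift COPY manifest covering every data-source folder.

    `files_per_prefix` maps each prefix to the object keys under it;
    "mandatory": True makes COPY fail if any listed file is missing.
    """
    entries = [
        {"url": prefix + name, "mandatory": True}
        for prefix in prefixes
        for name in files_per_prefix[prefix]
    ]
    return {"entries": entries}

# Illustrative file names only; a real run would list the bucket instead.
manifest = build_manifest(
    source_prefixes,
    {p: ["part-0000.csv", "part-0001.csv"] for p in source_prefixes},
)
print(json.dumps(manifest, indent=2))
```

The resulting JSON would then be uploaded to S3 and referenced by a single `COPY ... MANIFEST` statement, so one command loads all sources in parallel instead of one sequential COPY per folder.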