A financial company uses Amazon S3 as its data lake and has set up a data warehouse using a multi-node Amazon Redshift cluster. The data files in the data lake are organized in folders based on each file's data source. All the data files are loaded into one table in the Amazon Redshift cluster using a separate
COPY command for each data file location. With this approach, loading all the data files into Amazon Redshift takes a long time to complete. Users want a faster solution with little or no increase in cost, while maintaining the segregation of the data files in the S3 data lake.
Which solution meets these requirements?
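The loading pattern described above can be sketched as follows. This is a minimal illustration, not the official answer: the table, bucket, folder, and IAM role names are hypothetical, and the manifest-driven single COPY shown alongside is one commonly cited way to let Redshift load many S3 files in parallel without extra cost (the MANIFEST option is a documented part of the Redshift COPY command).

```python
# Hypothetical data-source folders in the S3 data lake.
SOURCES = ["source_a", "source_b", "source_c"]

def per_folder_copies(table, bucket, iam_role):
    """Current approach: one COPY statement per data-source folder.
    The statements run one after another, so total load time grows
    with the number of folders."""
    return [
        f"COPY {table} FROM 's3://{bucket}/{src}/' "
        f"IAM_ROLE '{iam_role}' FORMAT AS CSV;"
        for src in SOURCES
    ]

def manifest_copy(table, bucket, iam_role):
    """Alternative sketch: a single COPY driven by a manifest file that
    lists every folder's files, letting Redshift distribute the load
    across cluster slices in parallel while the S3 folder layout stays
    unchanged."""
    return (
        f"COPY {table} FROM 's3://{bucket}/load-manifest.json' "
        f"IAM_ROLE '{iam_role}' MANIFEST FORMAT AS CSV;"
    )
```

The key trade-off: the per-folder approach preserves segregation but serializes the load, while a manifest keeps the same folder structure in S3 and collapses the work into one parallelized COPY.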