A data engineer must build an extract, transform, and load (ETL) pipeline to process and load data from 10 source systems into 10 tables that are in an Amazon Redshift database. All the source systems generate .csv, JSON, or Apache Parquet files every 15 minutes. The source systems all deliver files into one Amazon S3 bucket. The file sizes range from 10 MB to 20 GB. The ETL pipeline must function correctly despite changes to the data schema.
Which data pipeline solutions will meet these requirements? (Choose two.)
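One pattern commonly cited for this kind of requirement is an AWS Glue job fed by an AWS Glue crawler: the crawler refreshes the Data Catalog as incoming file schemas change, and Glue DynamicFrames tolerate schema drift when loading into Amazon Redshift. The sketch below assumes that Glue-based approach purely for illustration; the catalog database, table, connection name, and S3 paths are placeholders, and this is not presented as the question's official answer.

```python
import sys

from awsglue.context import GlueContext
from awsglue.job import Job
from awsglue.utils import getResolvedOptions
from pyspark.context import SparkContext

# Standard Glue job bootstrap.
args = getResolvedOptions(sys.argv, ["JOB_NAME"])
sc = SparkContext()
glue_context = GlueContext(sc)
job = Job(glue_context)
job.init(args["JOB_NAME"], args)

# Read one source's files (.csv, JSON, or Parquet) from the shared S3 bucket via
# the Data Catalog. A crawler keeps the catalog table current, so schema changes
# in the incoming files do not require editing this job.
dyf = glue_context.create_dynamic_frame.from_catalog(
    database="source_catalog_db",   # placeholder catalog database
    table_name="source_system_1",   # placeholder catalog table
)

# DynamicFrames tolerate schema drift; resolve any ambiguous column types
# before writing to Redshift.
dyf = dyf.resolveChoice(choice="make_struct")

# Load into the target Redshift table through a catalog JDBC connection,
# staging intermediate data in S3 as Glue requires.
glue_context.write_dynamic_frame.from_jdbc_conf(
    frame=dyf,
    catalog_connection="redshift-connection",       # placeholder connection name
    connection_options={"dbtable": "target_table_1", "database": "analytics"},
    redshift_tmp_dir="s3://example-bucket/temp/",    # placeholder staging path
)

job.commit()
```

In practice this job would be scheduled (for example, on the same 15-minute cadence as the file deliveries) and repeated per source system or parameterized across the 10 tables.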