HOTSPOT
You are building an Azure Data Factory solution to process data that is received from Azure Event Hubs and then ingested into an Azure Data Lake Storage Gen2 container.
The data will be ingested from devices into JSON files every five minutes. The files use the following naming pattern:
/{deviceType}/in/{YYYY}/{MM}/{DD}/{HH}/{deviceID}_{YYYY}{MM}{DD}{HH}{mm}.json
You need to prepare the data for batch data processing so that there is one dataset per hour per deviceType. The solution must minimize read times.
How should you configure the sink for the copy activity? To answer, select the appropriate options in the answer area.
NOTE: Each correct selection is worth one point.
Hot Area:
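For reference, one way to produce one dataset per hour per deviceType is to set the copy activity sink's copy behavior to Merge files and point the sink dataset at an hourly, deviceType-partitioned path such as {deviceType}/out/{YYYY}/{MM}/{DD}/{HH}.json. The sketch below is a minimal illustration of that configuration, not the graded answer area; the activity and dataset names are assumptions, and the source and sink dataset definitions (including the parameterized paths) would be defined separately.

{
    "name": "MergeDeviceFilesPerHour",
    "type": "Copy",
    "inputs": [ { "referenceName": "DeviceJsonFiles", "type": "DatasetReference" } ],
    "outputs": [ { "referenceName": "HourlyMergedJson", "type": "DatasetReference" } ],
    "typeProperties": {
        "source": {
            "type": "JsonSource",
            "storeSettings": {
                "type": "AzureBlobFSReadSettings",
                "recursive": true,
                "wildcardFileName": "*.json"
            }
        },
        "sink": {
            "type": "JsonSink",
            "storeSettings": {
                "type": "AzureBlobFSWriteSettings",
                "copyBehavior": "MergeFiles"
            }
        }
    }
}

Merging the twelve five-minute files for a given hour into a single output file means the downstream batch job opens one file per hour per deviceType instead of twelve, which is what minimizes read times.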