Exam DP-201 topic 2 question 49 discussion

Actual exam question from Microsoft's DP-201
Question #: 49
Topic #: 2

DRAG DROP -
You have a CSV file in Azure Blob storage. The file does NOT have a header row.
You need to use Azure Data Factory to copy the file to an Azure SQL database. The solution must minimize how long it takes to copy the file.
How should you configure the copy process? To answer, drag the appropriate components to the correct locations. Each component may be used once, more than once, or not at all. You may need to drag the split bar between panes or scroll to view content.
NOTE: Each correct selection is worth one point.
Select and Place:

Suggested Answer:
Input: A delimited text dataset that has a comma as the column delimiter.
columnDelimiter: the character(s) used to separate columns in a file. The default value is a comma (,). When the column delimiter is defined as an empty string, meaning no delimiter, the whole line is taken as a single column.
Pipeline: A data flow activity that has a general-purpose compute type.
When you're transforming data in mapping data flows, you can read and write files in Azure Blob storage.
Output: A copy activity that has an explicit schema mapping.
Use the Copy activity in Azure Data Factory to copy data from and to Azure SQL Database, and use Data Flow to transform data in Azure SQL Database.
Reference:
https://docs.microsoft.com/en-us/azure/data-factory/format-delimited-text
https://docs.microsoft.com/en-us/azure/data-factory/connector-azure-sql-database
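
As a rough illustration of the input side of this answer, a delimited text dataset with a comma delimiter and no header row could be defined roughly as follows (shown as a Python dict that mirrors the ADF JSON; the dataset name, linked service name, container, and file name are illustrative assumptions, not part of the question):

# Sketch of a DelimitedText dataset for a headerless CSV in Blob storage.
# All names below (InputCsv, BlobLinkedService, container, file) are placeholders.
input_dataset = {
    "name": "InputCsv",
    "properties": {
        "type": "DelimitedText",
        "linkedServiceName": {
            "referenceName": "BlobLinkedService",
            "type": "LinkedServiceReference"
        },
        "typeProperties": {
            "location": {
                "type": "AzureBlobStorageLocation",
                "container": "input",
                "fileName": "data.csv"
            },
            "columnDelimiter": ",",       # comma as the column delimiter
            "firstRowAsHeader": False     # the file has no header row
        }
    }
}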

Comments

suman13
Highly Voted 4 years, 2 months ago
2nd box: copy activity; 3rd box: Azure SQL DB dataset
upvoted 62 times
...
AngelRio
Highly Voted 4 years ago
Input: A delimited text dataset... Output: Azure SQL DB dataset... Pipeline: Copy Activity ....
upvoted 20 times
...
BitchNigga
Most Recent 4 years ago
I have performed this activity during my course. I am 200% sure that the second one is the copy activity and the third one is the SQL DB dataset with a fixed schema.
upvoted 9 times
...
cadio30
4 years ago
The first layer corresponds to the linked services, the second layer is for the datasets, and the last is the pipeline level. Therefore the second layer is the CSV dataset and the Azure SQL Database dataset, followed by the copy activity (the linked services behind them are sketched after this comment).
upvoted 1 times
...
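
As a rough illustration of the layering described in the comment above, the two linked services could look like the following (expressed as Python dicts that mirror the ADF JSON; the names and connection strings are placeholder assumptions, not part of the question):

# Hypothetical linked service definitions; names and connection strings are placeholders.
blob_linked_service = {
    "name": "BlobLinkedService",
    "properties": {
        "type": "AzureBlobStorage",
        "typeProperties": {"connectionString": "<storage-account-connection-string>"}
    }
}

sql_linked_service = {
    "name": "SqlLinkedService",
    "properties": {
        "type": "AzureSqlDatabase",
        "typeProperties": {"connectionString": "<azure-sql-connection-string>"}
    }
}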
maciejt
4 years, 1 month ago
This is completely wrong. Both middle boxes are an abstraction layer, so if the left box is the input dataset, the right box is the output dataset. There is no requirement to transform the data, only to copy it, so the pipeline consists of just a copy activity. If we were using a data flow to copy, we would not need a copy activity, because the data flow could write directly to the SQL database; but the requirement is performance, and a data flow needs to start up the cluster it runs on, while a copy activity starts immediately (see the sketch after this comment).
upvoted 7 times
...
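Following the reasoning in the comment above, a plain copy activity with an explicit (ordinal-based) schema mapping could be sketched as below. This is only an illustration under assumed names (CopyCsvToSql, InputCsv, OutputSqlTable, Id, Name), not the graded answer:

# Sketch of a Copy activity with explicit schema mapping for a headerless CSV.
# Activity, dataset, and column names are hypothetical placeholders.
copy_activity = {
    "name": "CopyCsvToSql",
    "type": "Copy",
    "inputs": [{"referenceName": "InputCsv", "type": "DatasetReference"}],
    "outputs": [{"referenceName": "OutputSqlTable", "type": "DatasetReference"}],
    "typeProperties": {
        "source": {"type": "DelimitedTextSource"},
        "sink": {"type": "AzureSqlSink"},
        "translator": {
            "type": "TabularTranslator",
            # Explicit mapping: headerless source columns referenced by ordinal
            # position, mapped to named columns in the SQL table.
            "mappings": [
                {"source": {"ordinal": 1}, "sink": {"name": "Id"}},
                {"source": {"ordinal": 2}, "sink": {"name": "Name"}}
            ]
        }
    }
}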
Community vote distribution: A (35%), C (25%), B (20%), Other