Exam DP-700 topic 2 question 26 discussion

Actual exam question from Microsoft's DP-700
Question #: 26
Topic #: 2

HOTSPOT

You plan to process the following three datasets by using Fabric:

Dataset1: This dataset will be added to Fabric and will have a unique primary key between the source and the destination. The unique primary key will be an integer and will start from 1 and have an increment of 1.
Dataset2: This dataset contains semi-structured data that uses bulk data transfer. The dataset must be handled in one process between the source and the destination. The data transformation process will include the use of custom visuals to understand and work with the dataset in development mode.
Dataset3: This dataset is in a lakehouse. The data will be bulk loaded. The data transformation process will include row-based windowing functions during the loading process.

You need to identify which type of item to use for the datasets. The solution must minimize development effort and use built-in functionality, when possible.

What should you identify for each dataset? To answer, select the appropriate options in the answer area.

NOTE: Each correct selection is worth one point.
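As a hypothetical sketch (not Fabric code), the two requirements that drive the answer choices — an integer key starting at 1 with an increment of 1 (Dataset1) and row-based windowing functions applied during a bulk load (Dataset3) — can be illustrated with SQLite's T-SQL-like window syntax. All table and column names below are made up for the example.

```python
# Illustration only: what an auto-incrementing integer key and a
# "row-based windowing function during the loading process" look like.
# Uses SQLite (window functions require SQLite >= 3.25).
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# Staged source rows, as if bulk-loaded into a staging table.
cur.execute("CREATE TABLE staging (region TEXT, amount INTEGER)")
cur.executemany(
    "INSERT INTO staging VALUES (?, ?)",
    [("east", 10), ("east", 30), ("west", 20), ("west", 40)],
)

# Destination table: an integer key starting at 1 with increment 1,
# plus a row-based running total computed while loading.
cur.execute(
    "CREATE TABLE dest (id INTEGER PRIMARY KEY AUTOINCREMENT, "
    "region TEXT, amount INTEGER, running_total INTEGER)"
)
cur.execute(
    """
    INSERT INTO dest (region, amount, running_total)
    SELECT region, amount,
           SUM(amount) OVER (
               PARTITION BY region ORDER BY amount
               ROWS BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW
           )
    FROM staging
    """
)

for row in sorted(cur.execute(
        "SELECT region, amount, running_total FROM dest").fetchall()):
    print(row)
```

The `ROWS BETWEEN ... AND CURRENT ROW` frame is the row-based windowing in question; a Dataflow Gen2 has no built-in equivalent, which is why several commenters steer Dataset3 toward T-SQL.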

Suggested Answer:

Comments

zxc01
Highly Voted 2 months, 1 week ago
Dataset1 -> Dataflow Gen2
Dataset2 -> Notebook
Dataset3 -> Dataflow Gen2 (the target is a lakehouse, so we cannot use T-SQL)
upvoted 10 times
DarioReymago
2 weeks, 6 days ago
I agree with you, but I would change the first option so that all options are used:
Dataset1 -> T-SQL
Dataset2 -> Notebook
Dataset3 -> Dataflow Gen2 (the target is a lakehouse, so we cannot use T-SQL)
upvoted 2 times
PBridge
Highly Voted 3 weeks, 5 days ago
Dataset1: Will be added to Fabric with a unique primary key (auto-incrementing). Answer: a Dataflow Gen2 dataflow
Dataset2: Contains semi-structured data, uses bulk data transfer, and requires custom visuals during transformation in development mode. Answer: a notebook
Dataset3: Stored in a lakehouse, bulk loaded, and uses row-based windowing functions during transformation. Answer: a T-SQL statement
upvoted 7 times
Rull
1 week, 3 days ago
Agree. For Dataset3 you need to use T-SQL, which is definitely supported in a lakehouse. You cannot use a Dataflow Gen2, as it does not support advanced T-SQL features such as windowing functions.
upvoted 1 times
407a475
Most Recent 1 week, 2 days ago
Dataset1 -> Dataflow Gen2 (the primary key simply transfers from source to destination)
Dataset2 -> Notebook
Dataset3 -> T-SQL (the lakehouse is a source, not a destination, and only T-SQL can handle row-based windowing functions here. KQL has row-based windowing functions as well, but it can't work with a lakehouse)
upvoted 2 times
