Exam DP-203 topic 2 question 12 discussion

Actual exam question from Microsoft's DP-203
Question #: 12
Topic #: 2

HOTSPOT -
The following code segment is used to create an Azure Databricks cluster.

For each of the following statements, select Yes if the statement is true. Otherwise, select No.
NOTE: Each correct selection is worth one point.
Hot Area:
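The cluster-definition code shown in the exam image is not reproduced on this page. As a rough, hypothetical reconstruction based only on details cited in the discussion below (the Standard_DS13_v2 worker type and the "serverless" cluster profile), a request of roughly this shape could be sent to the Databricks Clusters REST API; every value is illustrative, not the exam's exact figure.

# Hypothetical sketch of a cluster-create call against the Databricks
# Clusters REST API (POST /api/2.0/clusters/create). Values are illustrative,
# taken only from details quoted in the discussion, not from the exam image.
import requests

DATABRICKS_HOST = "https://adb-1234567890123456.7.azuredatabricks.net"  # placeholder workspace URL
TOKEN = "<personal-access-token>"  # placeholder credential

cluster_spec = {
    "cluster_name": "exam-question-cluster",  # hypothetical name
    "spark_version": "6.0.x-scala2.11",       # illustrative runtime
    "node_type_id": "Standard_DS13_v2",       # worker type cited in the suggested answer
    "num_workers": 8,                         # illustrative worker count
    "autotermination_minutes": 90,            # illustrative
    "spark_conf": {
        # The discussion below says this key marks a High Concurrency cluster.
        "spark.databricks.cluster.profile": "serverless",
    },
}

resp = requests.post(
    f"{DATABRICKS_HOST}/api/2.0/clusters/create",
    headers={"Authorization": f"Bearer {TOKEN}"},
    json=cluster_spec,
)
resp.raise_for_status()
print(resp.json()["cluster_id"])  # ID of the newly created cluster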

Suggested Answer:
Box 1: Yes -
A cluster mode of 'High Concurrency' is selected, unlike all the others, which are 'Standard'. The worker type used is Standard_DS13_v2.

Box 2: No -
When you run a job on a new cluster, the job is treated as a data engineering (job) workload subject to the job workload pricing. When you run a job on an existing cluster, the job is treated as a data analytics (all-purpose) workload subject to all-purpose workload pricing.
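For illustration, a scheduled job that runs on a new job cluster (and is therefore billed at job workload rates) could be defined roughly as follows through the Databricks Jobs API; all names, paths, and sizes are hypothetical, not from the exam.

# Hypothetical sketch of creating a scheduled job on a new job cluster
# (Jobs API 2.1, POST /api/2.1/jobs/create). All names and values are illustrative.
import requests

DATABRICKS_HOST = "https://adb-1234567890123456.7.azuredatabricks.net"  # placeholder
TOKEN = "<personal-access-token>"  # placeholder

job_spec = {
    "name": "nightly-etl",  # hypothetical job name
    "tasks": [
        {
            "task_key": "etl",
            "notebook_task": {"notebook_path": "/Jobs/nightly_etl"},  # hypothetical notebook
            # A new (job) cluster is created for the run and terminated afterwards,
            # so the run is treated as a data engineering (job) workload.
            "new_cluster": {
                "spark_version": "13.3.x-scala2.12",
                "node_type_id": "Standard_DS13_v2",
                "num_workers": 4,
            },
        }
    ],
    "schedule": {
        "quartz_cron_expression": "0 0 2 * * ?",  # run daily at 02:00
        "timezone_id": "UTC",
    },
}

resp = requests.post(
    f"{DATABRICKS_HOST}/api/2.1/jobs/create",
    headers={"Authorization": f"Bearer {TOKEN}"},
    json=job_spec,
)
resp.raise_for_status()
print(resp.json()["job_id"])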

Box 3: Yes -
Delta Lake on Databricks allows you to configure Delta Lake based on your workload patterns.
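As one hedged example of tuning Delta Lake to a workload pattern, a write-heavy table can be configured for optimized writes and auto compaction; the table name and the choice of properties below are illustrative, not part of the exam question.

# Minimal sketch, assuming a Databricks notebook where `spark` is predefined.
# The table name and properties illustrate tuning Delta Lake for a
# write-heavy workload; they are not taken from the exam question.
spark.sql("""
    ALTER TABLE sales_events
    SET TBLPROPERTIES (
        'delta.autoOptimize.optimizeWrite' = 'true',
        'delta.autoOptimize.autoCompact' = 'true'
    )
""")

# Session-level equivalents apply the same behavior to all writes in this session.
spark.conf.set("spark.databricks.delta.optimizeWrite.enabled", "true")
spark.conf.set("spark.databricks.delta.autoCompact.enabled", "true")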
Reference:
https://adatis.co.uk/databricks-cluster-sizing/
https://docs.microsoft.com/en-us/azure/databricks/jobs
https://docs.databricks.com/administration-guide/capacity-planning/cmbp.html
https://docs.databricks.com/delta/index.html

Comments

GameLift
Highly Voted 3 years, 1 month ago
FROM DP-201, thanks to rmk4ever:
1. Yes - A cluster mode of 'High Concurrency' is selected, unlike all the others, which are 'Standard'. This results in a worker type of Standard_DS13_v2. ref: https://adatis.co.uk/databricks-cluster-sizing/
2. No - Recommended: New Job Cluster. When you run a job on a new cluster, the job is treated as a data engineering (job) workload subject to the job workload pricing. When you run a job on an existing cluster, the job is treated as a data analytics (all-purpose) workload subject to all-purpose workload pricing. ref: https://docs.microsoft.com/en-us/azure/databricks/jobs
Scheduled batch workload: launch a new cluster via a job. ref: https://docs.databricks.com/administration-guide/capacity-planning/cmbp.html#plan-capacity-and-control-cost
3. Yes - Delta Lake on Databricks allows you to configure Delta Lake based on your workload patterns. ref: https://docs.databricks.com/delta/index.html
upvoted 56 times
semauni
1 year, 4 months ago
For 1, where do you see high concurrency?
upvoted 3 times
semauni
1 year, 4 months ago
I hadn't read the other comments yet, apparently it's in 'serverless' :)
upvoted 1 times
...
...
Egocentric
2 years, 7 months ago
agree on this one
upvoted 3 times
...
...
Canary_2021
Highly Voted 2 years, 11 months ago
Answer is correct.
Box 1: Yes. "spark.databricks.cluster.profile": "serverless" means that the cluster is a High Concurrency cluster, which supports multiple users.
Box 2: No. Scheduled jobs should run on a standard cluster. High Concurrency clusters are intended for multiple users and won't benefit a cluster running a single job.
Box 3: Yes.
upvoted 39 times
...
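If, as the comment above says, the profile is set through the cluster's Spark conf, it should be readable from an attached notebook. A tiny hedged check, assuming the Databricks notebook's predefined spark session:

# Hypothetical check from a notebook attached to the cluster in question;
# `spark` is predefined in Databricks notebooks. If the cluster spec sets
# "spark.databricks.cluster.profile": "serverless", it should surface here.
profile = spark.conf.get("spark.databricks.cluster.profile", "standard")
print("High Concurrency cluster" if profile == "serverless" else "Standard cluster")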
Software_One
Most Recent 8 months, 3 weeks ago
N, N, Y. 1: No, because that's for a single user, and shared does not support R.
upvoted 1 times
...
kkk5566
1 year, 3 months ago
the answer is Yes, No, Yes.
upvoted 1 times
...
hiyoww
1 year, 3 months ago
The naming of the clusters has changed in the recent UI: https://docs.databricks.com/en/archive/compute/cluster-ui-preview.html
upvoted 1 times
...
mamahani
1 year, 6 months ago
yes / no / yes
upvoted 3 times
...
Igor85
2 years, 1 month ago
I guess this question won't be relevant anymore, since the cluster creation UI has changed.
upvoted 3 times
WieIK
1 year, 7 months ago
They still use this question; I had it on my exam this week.
upvoted 3 times
...
...
US007
2 years, 4 months ago
1 should be 'No'. It's a standard cluster, and it also has Scala, which is not supported on a High Concurrency cluster.
upvoted 4 times
...
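Related to the comment above: High Concurrency clusters that enable table access control are documented to restrict notebook languages through a Spark conf entry that omits Scala. A hypothetical spark_conf fragment (not the exam's) could look like this:

# Hypothetical spark_conf fragment for a High Concurrency cluster; Scala is
# absent from the allowed-languages list, consistent with the comment above.
spark_conf = {
    "spark.databricks.cluster.profile": "serverless",
    "spark.databricks.repl.allowedLanguages": "sql,python,r",
}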
Deeksha1234
2 years, 4 months ago
Yes, No, Yes
upvoted 2 times
...
PallaviPatel
2 years, 10 months ago
Correct Answer. I agree with Canary_2021
upvoted 4 times
...
edba
2 years, 12 months ago
I would say the answer is Yes, No, Yes. Delta Lake has been supported since Azure Databricks Runtime 6.0 with Scala 2.11.12. https://docs.microsoft.com/en-us/azure/databricks/release-notes/runtime/6.0#system-environment
upvoted 3 times
...
thuggie300
3 years, 1 month ago
what is the answer lol
upvoted 3 times
...
aarthy2
3 years, 2 months ago
the same question is in DP-201 with the same answer. https://www.examtopics.com/discussions/microsoft/view/16875-exam-dp-201-topic-2-question-11-discussion/
upvoted 1 times
...
rav009
3 years, 2 months ago
IMO NO, YES, YES
upvoted 1 times
rav009
3 years, 2 months ago
Sorry, it should be NO, NO, YES. For Box 2, the cheapest way is to create the cluster when it's time to execute the job and terminate it immediately after the task completes. These are called New Job Clusters. https://docs.microsoft.com/en-us/azure/databricks/jobs
upvoted 3 times
...
...
parwa
3 years, 2 months ago
What is the correct answer here, please?
upvoted 2 times
amma
3 years, 2 months ago
Yes No No
upvoted 9 times
...
...
Amyqwertyu
3 years, 3 months ago
High Concurrency clusters are intended for use by multiple users, hence the correct answer.
upvoted 2 times
...
Amalbenrebai
3 years, 3 months ago
NO, NO, YES
upvoted 3 times
...