Exam DP-203 topic 2 question 50 discussion

Actual exam question from Microsoft's DP-203
Question #: 50
Topic #: 2

Note: This question is part of a series of questions that present the same scenario. Each question in the series contains a unique solution that might meet the stated goals. Some question sets might have more than one correct solution, while others might not have a correct solution.
After you answer a question in this section, you will NOT be able to return to it. As a result, these questions will not appear in the review screen.
You plan to create an Azure Databricks workspace that has a tiered structure. The workspace will contain the following three workloads:
✑ A workload for data engineers who will use Python and SQL.
✑ A workload for jobs that will run notebooks that use Python, Scala, and SQL.
✑ A workload that data scientists will use to perform ad hoc analysis in Scala and R.
The enterprise architecture team at your company identifies the following standards for Databricks environments:
✑ The data engineers must share a cluster.
✑ The job cluster will be managed by using a request process whereby data scientists and data engineers provide packaged notebooks for deployment to the cluster.
✑ All the data scientists must be assigned their own cluster that terminates automatically after 120 minutes of inactivity. Currently, there are three data scientists.
You need to create the Databricks clusters for the workloads.
Solution: You create a High Concurrency cluster for each data scientist, a High Concurrency cluster for the data engineers, and a Standard cluster for the jobs.
Does this meet the goal?

  • A. Yes
  • B. No
Suggested Answer: B 🗳️

Comments

djincheg
Highly Voted 3 years, 3 months ago
The data scientists need Scala, so Standard; the jobs need Scala, so Standard. So the answer is B, but for different reasons.
upvoted 45 times
Gina8008
2 years, 10 months ago
The engineers have to share the cluster, so High Concurrency is correct. The answer should be A.
upvoted 2 times
Aditya0891
2 years, 6 months ago
gina8008, you are missing the point that the data scientists use Scala per the question, and Scala is not supported on a High Concurrency cluster. So the answer is No.
upvoted 8 times
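To make the Scala restriction concrete, here is a minimal sketch of how a legacy High Concurrency cluster is defined through the Clusters 2.0 REST API. The workspace URL, token, runtime version, and node size are placeholder values, not from the question, and the spark_conf keys follow the legacy configuration docs linked elsewhere in this thread. The `spark.databricks.repl.allowedLanguages` value omits Scala, which is why this cluster type cannot host the data scientists' Scala work.

```python
import requests

# Placeholder workspace URL and personal access token (illustrative only).
HOST = "https://adb-1234567890123456.7.azuredatabricks.net"
TOKEN = "<personal-access-token>"

# Legacy High Concurrency cluster spec: the profile, allowedLanguages, and
# ResourceClass tag are what the legacy docs describe for this mode.
# Note that "scala" is absent from allowedLanguages.
high_concurrency_cluster = {
    "cluster_name": "shared-data-engineering",
    "spark_version": "13.3.x-scala2.12",   # example runtime version
    "node_type_id": "Standard_DS3_v2",      # example Azure VM size
    "num_workers": 4,
    "spark_conf": {
        "spark.databricks.cluster.profile": "serverless",
        "spark.databricks.repl.allowedLanguages": "sql,python,r",
    },
    "custom_tags": {"ResourceClass": "Serverless"},
}

resp = requests.post(
    f"{HOST}/api/2.0/clusters/create",
    headers={"Authorization": f"Bearer {TOKEN}"},
    json=high_concurrency_cluster,
)
resp.raise_for_status()
print("Created cluster:", resp.json()["cluster_id"])
```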
111222333
Highly Voted 3 years, 6 months ago
The correct answer is A.
upvoted 17 times
dfdsfdsfsd
3 years, 6 months ago
Agree. Jobs cannot use a high-concurrency cluster because it does not support Scala.
upvoted 5 times
Aditya0891
2 years, 6 months ago
And what about the data scientists' requirement? Read the question properly and don't mislead people looking for answers. Scala is not supported on High Concurrency clusters, and the data scientists are using Scala per the question, so the answer is No.
upvoted 8 times
Chemmangat
Most Recent 1 year, 2 months ago
Selected Answer: B
Answer : B "High Concurrency clusters can run workloads developed in SQL, Python, and R." https://learn.microsoft.com/en-us/azure/databricks/archive/compute/configure
upvoted 1 times
lola_mary5
9 months ago
This is an old link (taken from /archive/) and says: "Standard mode clusters are now called No Isolation Shared access mode clusters. High Concurrency with Tables ACLs are now called Shared access mode clusters." New link: https://learn.microsoft.com/en-us/azure/databricks/compute/configure
upvoted 3 times
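For anyone reading this against the current docs, a rough sketch of how those renamed access modes surface as the `data_security_mode` field on a cluster spec; the mapping below is an assumption based on the renaming described in the comment above and should be checked against the new link.

```python
# Assumed mapping from the legacy cluster modes discussed in this thread to
# access modes in the current Clusters API (verify against the linked page).
LEGACY_TO_ACCESS_MODE = {
    "Standard": "NONE",                      # now "No Isolation Shared"
    "High Concurrency (table ACLs)": "USER_ISOLATION",  # now "Shared"
    "Single user": "SINGLE_USER",
}

# Example: a per-user cluster spec using the newer field instead of the
# legacy spark.databricks.cluster.profile setting (values are placeholders).
single_user_cluster = {
    "cluster_name": "ds-adhoc-user1",
    "spark_version": "13.3.x-scala2.12",   # example runtime version
    "node_type_id": "Standard_DS3_v2",      # example Azure VM size
    "num_workers": 2,
    "data_security_mode": "SINGLE_USER",
    "autotermination_minutes": 120,
}
```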
kkk5566
1 year, 3 months ago
Selected Answer: B
B should be correct
upvoted 1 times
akhil5432
1 year, 4 months ago
Selected Answer: B
No is the correct answer.
upvoted 2 times
Hanse
2 years, 9 months ago
As per the link https://docs.azuredatabricks.net/clusters/configure.html:
- Standard and Single Node clusters terminate automatically after 120 minutes by default; High Concurrency clusters do not terminate automatically by default. A Standard cluster is recommended for a single user. --> Standard for the data scientists and High Concurrency for the data engineers.
- Standard clusters can run workloads developed in any language: Python, SQL, R, and Scala. High Concurrency clusters can run workloads developed in SQL, Python, and R. The performance and security of High Concurrency clusters is provided by running user code in separate processes, which is not possible in Scala. --> The jobs need Scala, hence: Standard.
upvoted 3 times
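Putting those doc quotes together, a minimal sketch of per-data-scientist Standard clusters with the required 120-minute auto-termination (cluster names, runtime version, and node size are illustrative placeholders, not from the question); the key field is `autotermination_minutes`, which High Concurrency clusters do not set by default.

```python
# Standard cluster per data scientist: all languages (including Scala and R)
# are available, and the cluster shuts itself down after 120 idle minutes.
def standard_cluster_for(scientist: str) -> dict:
    return {
        "cluster_name": f"ds-{scientist}",
        "spark_version": "13.3.x-scala2.12",   # example runtime version
        "node_type_id": "Standard_DS3_v2",      # example Azure VM size
        "num_workers": 2,
        "autotermination_minutes": 120,         # required inactivity timeout
    }

# Currently three data scientists, so three individual clusters.
data_scientist_clusters = [
    standard_cluster_for(name) for name in ("ds1", "ds2", "ds3")
]
```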
avijitd
2 years, 11 months ago
Selected Answer: B
No, as High Concurrency does not support Scala.
upvoted 6 times
rashjan
3 years ago
Selected Answer: B
correct: no
upvoted 5 times
arjunbhai
3 years ago
Like djincheg said, data scientists need Scala, so B. https://docs.microsoft.com/en-us/azure/databricks/clusters/configure
upvoted 2 times
Julius7000
3 years, 2 months ago
- Data engineers: correct, they are working together, so they need a High Concurrency cluster.
- Jobs: correct, a Standard cluster, since it supports Scala.
HOWEVER:
- Data scientists need a cluster that terminates automatically after 120 minutes: only Standard and Single Node clusters do that by default.
Since this is a holistic question, the answer is NO.
upvoted 15 times
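For the jobs part of that breakdown, a hedged sketch of deploying a packaged notebook as a job with its own job cluster via the Jobs 2.1 API; the host, token, notebook path, runtime version, and node size are placeholders. A job cluster defined this way has no High Concurrency language restriction, so Scala notebooks can run.

```python
import requests

# Placeholder workspace URL and personal access token (illustrative only).
HOST = "https://adb-1234567890123456.7.azuredatabricks.net"
TOKEN = "<personal-access-token>"

job_spec = {
    "name": "packaged-notebook-job",
    "tasks": [
        {
            "task_key": "run_notebook",
            # Hypothetical path to the packaged notebook submitted via the
            # request process described in the question.
            "notebook_task": {"notebook_path": "/Shared/deployed/etl_notebook"},
            "new_cluster": {                      # job cluster, created per run
                "spark_version": "13.3.x-scala2.12",   # example runtime version
                "node_type_id": "Standard_DS3_v2",      # example Azure VM size
                "num_workers": 4,
            },
        }
    ],
}

resp = requests.post(
    f"{HOST}/api/2.1/jobs/create",
    headers={"Authorization": f"Bearer {TOKEN}"},
    json=job_spec,
)
resp.raise_for_status()
print("Created job:", resp.json()["job_id"])
```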
Julius7000
3 years, 2 months ago
All the data scientists must be assigned their own cluster that terminates automatically after 120 minutes of inactivity. That means they need a Standard cluster, not a High Concurrency cluster. A Standard cluster terminates automatically after 120 minutes: "Standard and Single Node clusters terminate automatically after 120 minutes by default." IMO the answer is No, since all three parts of the solution have to be correct.
upvoted 2 times
michalS
3 years, 3 months ago
It's correct that a Standard cluster is used for the job workload, but the solution assigns High Concurrency clusters to the data scientists, who want to use Scala too, so it's false.
upvoted 4 times
damaldon
3 years, 5 months ago
Answer: A
- Data scientists should have their own cluster, and it should terminate after 120 mins - STANDARD
- The cluster for jobs should support Scala - STANDARD
https://docs.microsoft.com/en-us/azure/databricks/clusters/configure
upvoted 2 times
kimalto452
3 years, 2 months ago
The solution says "You create a High Concurrency cluster for each data scientist." Does this meet the goal? A. Yes. And then: "Answer: A - Data scientists should have their own cluster that terminates after 120 mins - STANDARD." GENIUSSSSSSSSSS
upvoted 1 times
Sunnyb
3 years, 6 months ago
A is the right answer because the Standard cluster supports Scala.
upvoted 2 times
Wisenut
3 years, 6 months ago
I also agree with the comment by 111222333. Per the requirement "A workload for jobs that will run notebooks that use Python, Scala, and SQL", Scala is only supported by Standard.
upvoted 6 times
Community vote distribution: A (35%), C (25%), B (20%), Other