Exam Certified Data Engineer Associate topic 1 question 31 discussion

Actual exam question from Databricks's Certified Data Engineer Associate
Question #: 31
Topic #: 1

A data engineer has configured a Structured Streaming job to read from a table, manipulate the data, and then perform a streaming write into a new table.
The code block used by the data engineer is below:
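(The code block image is not shown here; a representative sketch, with the table names, transformation, and checkpoint path assumed, would look like the following, where the commented blank is the line in question.)

    # Sketch of the streaming job (source/target table names and the
    # transformation are assumed; the blank is the line to fill in)
    (spark.readStream
        .table("source_table")
        .selectExpr("*")                                  # placeholder for the data manipulation
        .writeStream
        .option("checkpointLocation", "/tmp/checkpoint")
        # ____________________                            <-- blank
        .toTable("new_table"))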

If the data engineer only wants the query to execute a micro-batch to process data every 5 seconds, which of the following lines of code should the data engineer use to fill in the blank?

  • A. trigger("5 seconds")
  • B. trigger()
  • C. trigger(once="5 seconds")
  • D. trigger(processingTime="5 seconds")
  • E. trigger(continuous="5 seconds")
Suggested Answer: D
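For reference, a minimal PySpark sketch of the completed job, assuming the same placeholder names as in the sketch above:

    (spark.readStream
        .table("source_table")
        .selectExpr("*")                                  # assumed transformation
        .writeStream
        .option("checkpointLocation", "/tmp/checkpoint")  # assumed checkpoint path
        .trigger(processingTime="5 seconds")              # run a micro-batch every 5 seconds
        .toTable("new_table"))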

Comments

XiltroX
Highly Voted 1 year, 7 months ago
D is the correct answer
upvoted 5 times
...
4be8126
Highly Voted 1 year, 7 months ago
Selected Answer: D
The correct line of code to fill in the blank and execute a micro-batch every 5 seconds is D: trigger(processingTime="5 seconds").
Option A, trigger("5 seconds"), would not work because it does not specify a processing-time trigger, which is what is needed to trigger micro-batch processing at regular intervals.
Option B, trigger(), would not work because it uses the default trigger, which is not a processing-time trigger.
Option C, trigger(once="5 seconds"), would not work because it would only trigger the query once, not at regular intervals.
Option E, trigger(continuous="5 seconds"), would not work because it would run the query continuously, without pauses in between, which is not what the data engineer wants.
upvoted 5 times
...
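For comparison, the trigger variants referenced in the options above, sketched in PySpark (the source table and console sink are assumed for illustration only):

    # Assumed source for illustration
    df = spark.readStream.table("source_table")

    # Option D: a micro-batch every 5 seconds (processing-time trigger)
    df.writeStream.format("console").trigger(processingTime="5 seconds").start()

    # A single micro-batch, then stop (note: once takes a boolean, so option C's
    # once="5 seconds" is not a valid argument)
    df.writeStream.format("console").trigger(once=True).start()

    # Continuous processing with a 1-second checkpoint interval (option E's mode;
    # continuous mode supports only a limited set of sources and sinks)
    df.writeStream.format("console").trigger(continuous="1 second").start()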
Raghu_Dasara
Most Recent 1 month ago
D is the correct answer: processingTime. See the triggers documentation, which also covers continuous processing: https://learn.microsoft.com/en-us/azure/databricks/structured-streaming/triggers
upvoted 1 times
...
benni_ale
6 months, 1 week ago
Selected Answer: D
correct syntax is D
upvoted 1 times
...
awofalus
12 months ago
Selected Answer: D
Correct: D
upvoted 1 times
...
vctrhugo
1 year, 2 months ago
Selected Answer: D
    # ProcessingTime trigger with two-seconds micro-batch interval
    df.writeStream \
        .format("console") \
        .trigger(processingTime='2 seconds') \
        .start()
https://spark.apache.org/docs/latest/structured-streaming-programming-guide.html#triggers
upvoted 2 times
...
AndreFR
1 year, 2 months ago
Selected Answer: D
https://spark.apache.org/docs/latest/structured-streaming-programming-guide.html#triggers
upvoted 1 times
...
Atnafu
1 year, 4 months ago
D
    val query = sourceTable
      .writeStream
      .format("delta")
      .outputMode("append")
      .trigger(Trigger.ProcessingTime("5 seconds"))
      .start(destinationTable)
upvoted 1 times
vctrhugo
1 year, 2 months ago
This is a Scala example. The exam should be 100% Python.
upvoted 3 times
...
...
rafahb
1 year, 7 months ago
Selected Answer: D
D is correct
upvoted 2 times
...
surrabhi_4
1 year, 7 months ago
Selected Answer: D
Option D
upvoted 3 times
...
Community vote distribution: A (35%), C (25%), B (20%), Other
