Your organization uses Dataflow pipelines to process real-time financial transactions. You discover that one of your Dataflow jobs has failed. You need to troubleshoot the issue as quickly as possible. What should you do?
A.
Set up a Cloud Monitoring dashboard to track key Dataflow metrics, such as data throughput, error rates, and resource utilization.
B.
Create a custom script to periodically poll the Dataflow API for job status updates, and send email alerts if any errors are identified.
C.
Navigate to the Dataflow Jobs page in the Google Cloud console. Use the job logs and worker logs to identify the error.
D.
Use the gcloud CLI tool to retrieve job metrics and logs, and analyze them for errors and performance bottlenecks.
The quickest way to troubleshoot a failed Dataflow job is C: open the Dataflow Jobs page in the Google Cloud console and examine the job logs and worker logs. The console gives immediate, direct access to the job's status, execution graph, and detailed logs, so errors can be identified in minutes without any setup. Option A (a Cloud Monitoring dashboard) is for proactive, ongoing monitoring, not diagnosing a failure that has already occurred. Option B (a custom polling script) provides future alerting, not current troubleshooting, and takes time to build. Option D (the gcloud CLI) can retrieve the same information but is slower and less convenient than the console for initial log browsing and error identification. Option C therefore offers the most direct and efficient path to diagnosing the failure.
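For completeness, if you did reach for the gcloud CLI as in option D, a minimal sketch of the equivalent workflow might look like the following. The job ID and region here are placeholders, and running these commands assumes an authenticated gcloud installation with access to the project:

```shell
# List recent Dataflow jobs in the region to find the failed one
# (placeholder region; substitute your own).
gcloud dataflow jobs list --region=us-central1

# Inspect the job's configuration and current state
# (JOB_ID is a placeholder for the failed job's ID).
gcloud dataflow jobs describe JOB_ID --region=us-central1

# Pull recent error-level log entries for that job from Cloud Logging.
gcloud logging read \
  'resource.type="dataflow_step" AND resource.labels.job_id="JOB_ID" AND severity>=ERROR' \
  --limit=50
```

This reproduces what the console's Jobs page surfaces in one view, which is why the console remains the faster option for an initial diagnosis.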