Certified Generative AI Engineer Associate: Topic 1, Question 60 Discussion

Actual exam question from Databricks' Certified Generative AI Engineer Associate exam
Question #: 60
Topic #: 1

A Generative AI Engineer is building a production-ready LLM system that replies directly to customers. The solution uses the Foundation Model API via provisioned throughput. They are concerned that the LLM could respond in a toxic or otherwise unsafe way, and they want to mitigate that risk with the least amount of effort.

Which approach will do this?

  • A. Ask users to report unsafe responses.
  • B. Host Llama Guard on Foundation Model API and use it to detect unsafe responses.
  • C. Add some LLM calls to their chain to detect unsafe content before returning text.
  • D. Add a regex on inputs and outputs to detect unsafe responses.
Suggested Answer: C

Comments

Mogit
1 day, 22 hours ago
Selected Answer: C
A: User reporting is reactive, risks customer dissatisfaction, and doesn't prevent unsafe outputs, making it unsuitable for production. B: Llama Guard is a strong candidate, but its availability on Databricks' Foundation Model API is uncertain, and hosting it requires additional setup (e.g., model serving, endpoint configuration), which increases effort compared to C. D: Regex is quick to implement but unreliable for complex toxicity detection, leading to poor safety outcomes in a customer-facing system. (A minimal sketch of the option C pattern follows this comment.)
upvoted 1 time
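For reference, a minimal sketch of the option C pattern described above: a second, cheap LLM call in the chain screens the drafted reply before it is returned. The judge endpoint name, moderation prompt, fallback message, and response parsing below are illustrative assumptions, not a prescribed Databricks recipe.

```python
# Sketch only: option C as a post-generation safety check inside the chain.
# Endpoint name, prompt, and fallback text are placeholders.
from mlflow.deployments import get_deploy_client

client = get_deploy_client("databricks")

MODERATION_PROMPT = (
    "You are a content-safety classifier. Answer with exactly one word, "
    "SAFE or UNSAFE, for the following assistant reply:\n\n{reply}"
)
FALLBACK = "I'm sorry, I can't help with that request."

def screen_reply(draft_reply: str,
                 judge_endpoint: str = "databricks-meta-llama-3-3-70b-instruct") -> str:
    """Return the draft reply only if a judge LLM labels it SAFE."""
    verdict = client.predict(
        endpoint=judge_endpoint,
        inputs={
            "messages": [
                {"role": "user",
                 "content": MODERATION_PROMPT.format(reply=draft_reply)}
            ],
            "max_tokens": 5,
            "temperature": 0.0,
        },
    )
    # Chat endpoints return an OpenAI-style payload with a "choices" list.
    label = verdict["choices"][0]["message"]["content"].strip().upper()
    return draft_reply if label.startswith("SAFE") else FALLBACK
```

In a chain, this would simply be the last step before the response is surfaced to the customer.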
Hifly_AA
1 month, 1 week ago
Selected Answer: B
B. Host Llama Guard on Foundation Model API and use it to detect unsafe responses. By enabling Databricks' built-in Llama Guard directly on your Foundation Model API endpoint, you get out-of-the-box toxicity and safety checks with zero changes to your application code. The guard runs before responses are returned, blocking or redacting unsafe content according to its policy. This approach requires the least effort compared to adding custom detection calls or regex rules, and is far more proactive than relying on user reports. (A sketch of this check follows this comment.)
upvoted 2 times
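For comparison, a minimal sketch of the option B pattern: a served Llama Guard model is queried over the standard serving-endpoint REST route before the reply is returned. The workspace URL, endpoint name, request payload, and token budget are assumptions that depend on how the model was registered and served in your workspace.

```python
# Sketch only: option B, screening a candidate reply with a Llama Guard model
# served on a Databricks model serving endpoint. Names and shapes are placeholders.
import os
import requests

WORKSPACE_URL = os.environ["DATABRICKS_HOST"]   # e.g. https://<workspace>.cloud.databricks.com
TOKEN = os.environ["DATABRICKS_TOKEN"]
GUARD_ENDPOINT = "llama_guard"                  # hypothetical serving endpoint name

def is_safe(candidate_reply: str) -> bool:
    """Ask Llama Guard whether the reply is safe; it conventionally answers
    'safe' or 'unsafe' followed by the violated category codes."""
    response = requests.post(
        f"{WORKSPACE_URL}/serving-endpoints/{GUARD_ENDPOINT}/invocations",
        headers={"Authorization": f"Bearer {TOKEN}"},
        json={
            "messages": [{"role": "user", "content": candidate_reply}],
            "max_tokens": 20,
        },
        timeout=30,
    )
    response.raise_for_status()
    verdict = response.json()["choices"][0]["message"]["content"].strip().lower()
    return verdict.startswith("safe")

# Usage: return the reply only when the guard clears it.
# final_text = draft if is_safe(draft) else "I'm sorry, I can't help with that."
```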
Community vote distribution: A (35%), C (25%), B (20%), other