Exam AWS Certified AI Practitioner AIF-C01 topic 1 question 217 discussion

A bank is fine-tuning a large language model (LLM) on Amazon Bedrock to assist customers with questions about their loans. The bank wants to ensure that the model does not reveal any private customer data.

Which solution meets these requirements?

  • A. Use Amazon Bedrock Guardrails.
  • B. Remove personally identifiable information (PII) from the customer data before fine-tuning the LLM.
  • C. Increase the Top-K parameter of the LLM.
  • D. Store customer data in Amazon S3. Encrypt the data before fine-tuning the LLM.
Suggested Answer: B

Comments

nand2804
2 weeks, 2 days ago
Selected Answer: B
When fine-tuning a large language model (LLM) with customer data, it is essential to ensure data privacy and compliance with regulations such as GDPR or HIPAA. The most effective and direct solution to prevent the model from learning or exposing sensitive customer information is to remove personally identifiable information (PII) from the dataset before fine-tuning. This helps:
  • Prevent the model from memorizing or leaking private data
  • Reduce privacy and compliance risks
  • Follow best practices for data minimization
(See the PII-redaction sketch after this comment.)
upvoted 1 time
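As an illustration of option B, below is a minimal sketch of stripping PII from a training dataset before fine-tuning on Amazon Bedrock. It assumes boto3 is configured and uses Amazon Comprehend's detect_pii_entities API for detection; the JSONL file names and the "text" field are hypothetical placeholders, not part of the original question.

```python
# Minimal sketch: redact PII from loan-support records before Bedrock fine-tuning.
# Assumptions: boto3 credentials are configured, records live in a local JSONL file
# with a "text" field, and Amazon Comprehend performs PII detection.
import json
import boto3

comprehend = boto3.client("comprehend")

def redact_pii(text: str, language_code: str = "en") -> str:
    """Replace each detected PII span with a placeholder such as [NAME] or [SSN]."""
    response = comprehend.detect_pii_entities(Text=text, LanguageCode=language_code)
    # Redact from the end of the string so earlier offsets stay valid.
    for entity in sorted(response["Entities"], key=lambda e: e["BeginOffset"], reverse=True):
        start, end = entity["BeginOffset"], entity["EndOffset"]
        text = text[:start] + f"[{entity['Type']}]" + text[end:]
    return text

# Hypothetical input/output files for the cleaned fine-tuning dataset.
with open("loan_conversations.jsonl") as src, open("training_data_clean.jsonl", "w") as dst:
    for line in src:
        record = json.loads(line)
        record["text"] = redact_pii(record["text"])
        dst.write(json.dumps(record) + "\n")
```

Guardrails (option A) can block sensitive content at inference time, but only removing PII before fine-tuning keeps it out of the model weights in the first place.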
Community vote distribution: A (35%), C (25%), B (20%), Other