A bank is fine-tuning a large language model (LLM) on Amazon Bedrock to assist customers with questions about their loans. The bank wants to ensure that the model does not reveal any private customer data.
Which solution meets these requirements?
A. Use Amazon Bedrock Guardrails.
B. Remove personally identifiable information (PII) from the customer data before fine-tuning the LLM.
C. Increase the Top-K parameter of the LLM.
D. Store customer data in Amazon S3. Encrypt the data before fine-tuning the LLM.
When fine-tuning a large language model (LLM) with customer data, it is essential to protect data privacy and comply with regulations such as GDPR or HIPAA. The most direct way to prevent the model from learning or exposing sensitive customer information is to:
✅ Remove personally identifiable information (PII) from the dataset before fine-tuning (option B); a redaction sketch follows the list below.
This helps:
Prevent the model from memorizing or leaking private data
Reduce privacy and compliance risks
Follow best practices for data minimization
Guardrails (option A) filter content at inference time but do not stop the model from memorizing PII during training, Top-K (option C) only affects token sampling, and encryption in Amazon S3 (option D) protects data at rest, not what the model learns.
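As a rough illustration only (not part of the question), the sketch below shows one way a PII-scrubbing step might look before assembling the fine-tuning dataset, using Amazon Comprehend's detect_pii_entities API. The region, the sample record, and the redact_pii helper are assumptions for the example.

```python
# Minimal sketch, assuming boto3 credentials and an English-language dataset:
# redact PII from each record with Amazon Comprehend before building the
# JSONL file used for fine-tuning on Amazon Bedrock.
import json
import boto3

comprehend = boto3.client("comprehend", region_name="us-east-1")  # region is an assumption

def redact_pii(text: str) -> str:
    """Replace each detected PII span with a placeholder such as [NAME] or [SSN]."""
    response = comprehend.detect_pii_entities(Text=text, LanguageCode="en")
    # Redact from the end of the string so earlier offsets stay valid.
    for entity in sorted(response["Entities"], key=lambda e: e["BeginOffset"], reverse=True):
        text = text[:entity["BeginOffset"]] + f"[{entity['Type']}]" + text[entity["EndOffset"]:]
    return text

# Hypothetical prompt/completion pair cleaned before it is written to the training file.
record = {
    "prompt": "Customer John Doe (SSN 123-45-6789) asked about his loan balance.",
    "completion": "Provide the current balance and the next payment date.",
}
clean_record = {key: redact_pii(value) for key, value in record.items()}
print(json.dumps(clean_record))
```

Redacting before fine-tuning means the PII never reaches the model weights, which is why this approach is preferred over inference-time filtering alone.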