Exam AI-102 topic 7 question 9 discussion

Actual exam question from Microsoft's AI-102
Question #: 9
Topic #: 7

HOTSPOT -

You have an Azure subscription that contains an Azure OpenAI resource.

You configure a model that has the following settings:

• Temperature: 1
• Top probabilities: 0.5
• Max response tokens: 100

You ask the model a question and receive the following response.

[Response not shown in this transcript; per the discussion below, it reports prompt_tokens: 37, completion_tokens: 86, total_tokens: 123, and finish_reason: "stop".]

For each of the following statements, select Yes if the statement is true. Otherwise, select No.

• The subscription will be charged 86 tokens for the execution of the session.
• The text completion was truncated because the Max response tokens value was exceeded.
• The prompt_tokens value will be included in the calculation of the Max response tokens value.

NOTE: Each correct selection is worth one point.
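For readers reconstructing the scenario, a minimal sketch of how the three configured settings map onto a chat completions request body. The message text is a placeholder and nothing is sent to the service; this only shows which request fields the question's settings correspond to.

```python
# Build (but do not send) a chat completions request body using the
# question's settings. The message content is a placeholder.
request_body = {
    "messages": [{"role": "user", "content": "<your question here>"}],
    "temperature": 1,   # "Temperature" setting from the question
    "top_p": 0.5,       # "Top probabilities" setting
    "max_tokens": 100,  # "Max response tokens" setting
}

print(sorted(request_body))  # ['max_tokens', 'messages', 'temperature', 'top_p']
```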

Comments

HaraTadahisa
Highly Voted 10 months, 2 weeks ago
My answer is No, No, No.
upvoted 15 times
takaimomoGcup
Highly Voted 10 months, 3 weeks ago
No No No
upvoted 10 times
syupwsh
Most Recent 2 months, 3 weeks ago
All No.
1) The subscription is charged for the total number of tokens used in the session, which includes both the prompt tokens and the completion tokens. In the given response, the total is 123 tokens (37 prompt tokens + 86 completion tokens), not 86.
2) The response contains a finish_reason of "stop", which indicates that the completion ended naturally rather than being truncated at the Max response tokens limit. Max response tokens is set to 100, but the completion used only 86 tokens, below the limit. If the completion had been truncated, the finish_reason would have been "length".
3) The prompt_tokens value is not included in the calculation of the Max response tokens value. That setting limits only the number of tokens in the generated response, not the prompt plus the response.
upvoted 3 times
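The three checks above can be sketched against the token counts quoted in the thread (37 prompt + 86 completion, finish_reason "stop"). The payload below is hard-coded to mirror those values rather than fetched from the API.

```python
MAX_RESPONSE_TOKENS = 100  # the "Max response tokens" setting in the question

# Hard-coded stand-in for the response discussed in the thread.
response = {
    "choices": [{"finish_reason": "stop"}],
    "usage": {"prompt_tokens": 37, "completion_tokens": 86, "total_tokens": 123},
}
usage = response["usage"]

# 1) Billing covers prompt AND completion tokens: 123, not 86.
billed_tokens = usage["prompt_tokens"] + usage["completion_tokens"]

# 2) A completion cut off at the max_tokens limit reports finish_reason
#    "length"; "stop" means it ended naturally.
truncated = response["choices"][0]["finish_reason"] == "length"

# 3) Max response tokens bounds only the completion, not the prompt.
within_limit = usage["completion_tokens"] <= MAX_RESPONSE_TOKENS

print(billed_tokens, truncated, within_limit)  # 123 False True
```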
testmaillo020
8 months, 1 week ago
1. No: The total tokens used for the session include both the prompt tokens (37) and the completion tokens (86), totaling 123 tokens. Therefore, the subscription will be charged for 123 tokens, not just 86.
2. Yes: The Max response tokens were set to 100, and the completion used 86 tokens. The text completion was not truncated because the response did not exceed the maximum allowed tokens.
3. No: The prompt_tokens are not included in the Max response tokens value. Max response tokens only refers to the tokens in the model's response, not the tokens in the input prompt.
upvoted 3 times
mrg998
7 months, 2 weeks ago
For Q2, shouldn't this be No, since you yourself said "The text completion was not truncated because the response did not exceed the maximum allowed tokens"?
upvoted 4 times
cloudrain
8 months, 3 weeks ago
The answer is correct. The 3rd should be No because "Token costs are for both input and output. For example, suppose you have a 1,000 token JavaScript code sample that you ask an Azure OpenAI model to convert to Python. You would be charged approximately 1,000 tokens for the initial input request sent, and 1,000 more tokens for the output that is received in response for a total of 2,000 tokens." Source: https://learn.microsoft.com/en-us/azure/ai-services/openai/how-to/manage-costs#understand-the-azure-openai-full-billing-model
upvoted 1 times
cloudrain
8 months, 3 weeks ago
meant to say 3rd should be Yes
upvoted 2 times
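The billing model quoted from the docs (input and output tokens are both charged) can be sketched as below. The per-1,000-token price is a made-up placeholder, not a real Azure OpenAI rate.

```python
PRICE_PER_1K_TOKENS = 0.002  # hypothetical rate, USD per 1,000 tokens


def session_cost(prompt_tokens: int, completion_tokens: int) -> float:
    """Charge for prompt + completion tokens at a flat per-1K rate."""
    total = prompt_tokens + completion_tokens
    return total / 1000 * PRICE_PER_1K_TOKENS


# The docs' example: ~1,000 tokens in, ~1,000 tokens out
# -> 2,000 billable tokens, not 1,000.
print(session_cost(1000, 1000))  # 0.004
```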
etellez
10 months, 2 weeks ago
Copilot says:
The subscription will be charged 86 tokens for the execution of the session: Yes.
The text completion was truncated because the Max response tokens value was exceeded: No.
The prompt_tokens value will be included in the calculation of the Max response tokens value: Yes.
upvoted 2 times
rookiee1111
10 months, 2 weeks ago
N/N/N
A - Billing takes the prompt tokens into account, hence, per the calculation, 123 tokens should be charged.
B - The text completion was not truncated, because the response token count is 86 < 100.
C - prompt_tokens is not included in the calculation of the Max response tokens value.
upvoted 4 times
michaelmorar
1 year ago
N - The subscription is NOT charged for 86 tokens; the charge is not just for the response. For reference, each token is roughly four characters of typical English text.
N - The completion is clearly under the 100-token limit and the sentence is not truncated. The finish_reason here is "stop"; if the completion had been cut off, the finish_reason would have been "length".
N - max_tokens is the maximum number of tokens to generate in the COMPLETION. The token count of your prompt plus max_tokens can't exceed the model's context length. Most models have a context length of 2048 tokens (except for the newest models, which support 4096).
upvoted 5 times
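The context-length constraint described above (prompt tokens plus max_tokens must fit within the model's context window) can be sketched as follows; the 2048-token window matches the older models the comment cites.

```python
CONTEXT_LENGTH = 2048  # context window of the older models cited above


def fits_context(prompt_tokens: int, requested_max_tokens: int) -> bool:
    """True if the prompt plus the requested completion budget fit the window."""
    return prompt_tokens + requested_max_tokens <= CONTEXT_LENGTH


print(fits_context(37, 100))    # True: 137 <= 2048
print(fits_context(2000, 100))  # False: 2100 > 2048
```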
tk1828
1 year ago
N/N/N. Subscriptions are charged for both the prompt and completion tokens. The completion token count is less than Max response tokens. Max response tokens limits the response only, not the prompt plus the response. https://learn.microsoft.com/en-us/azure/ai-services/openai/how-to/manage-costs#understand-the-azure-openai-full-billing-model
upvoted 4 times
Murtuza
1 year, 1 month ago
The subscription will be charged 86 tokens for the execution of the session. Yes, that's correct. The completion_tokens value represents the number of tokens in the model's response, and this is what you're billed for.
The text completion was truncated because the Max response tokens value was exceeded. No, that's not correct. The response in this case wasn't truncated. The max_tokens parameter sets a limit on the length of the generated response. If the model's response had exceeded this limit, it would have been cut off, but in this case the response is only 86 tokens long, which is less than the max_tokens value of 100.
The prompt_tokens value will be included in the calculation of the max_tokens value. Yes, that's correct. The max_tokens parameter includes both the prompt tokens and the completion tokens. So if your prompt is very long, it could limit the length of the model's response.
upvoted 1 times
AzureGeek79
7 months, 3 weeks ago
that's correct. The answer is N, Y, Y.
upvoted 1 times
Murtuza
1 year, 1 month ago
The session execution consumed 86 tokens: No, it should be a total of 123 tokens, which includes the prompt tokens. The text completion was truncated due to exceeding the Max response tokens value: Yes. The prompt_tokens value is included in the calculation of the Max response tokens value: Yes.
upvoted 3 times
GHill1982
1 year, 1 month ago
I think it should be N/N/N.
upvoted 3 times
GHill1982
1 year ago
Changing my mind to Y/N/N.
The subscription will be charged 86 tokens for the execution of the session. Yes, the subscription will be charged for the completion_tokens used during the execution, which in this case is 86 tokens.
The text completion was truncated because the Max response tokens value was exceeded. No, the text completion was not truncated due to exceeding the Max response tokens value. The finish_reason is listed as "stop", which indicates that the model stopped generating additional content because it reached a natural stopping point in the text, not because it hit the token limit.
The prompt_tokens value will be included in the calculation of the Max response tokens value. No, the prompt_tokens value is not included in the calculation of the Max response tokens value. The Max response tokens setting only limits the length of the new content generated by the model in response to the prompt.
upvoted 2 times
sergbs
1 year ago
You are wrong. First No. Azure OpenAI base series and Codex series models are charged per 1,000 tokens. https://learn.microsoft.com/en-us/azure/ai-services/openai/how-to/manage-costs#understand-the-azure-openai-full-billing-model
upvoted 3 times