Note: This question is part of a series of questions that present the same scenario. Each question in the series contains a unique solution that might meet the stated goals. Some question sets might have more than one correct solution, while others might not have a correct solution.
After you answer a question in this section, you will NOT be able to return to it. As a result, these questions will not appear in the review screen.
You are using an Azure Synapse Analytics serverless SQL pool to query a collection of Apache Parquet files by using automatic schema inference. The files contain more than 40 million rows of UTF-8-encoded business names, survey names, and participant counts. The database is configured to use the default collation.
The queries use OPENROWSET and infer the schema shown in the following table.
You need to recommend changes to the queries to reduce I/O reads and tempdb usage.
Solution: You recommend using OPENROWSET WITH to explicitly define the collation for businessName and surveyName as Latin1_General_100_BIN2_UTF8.
Does this meet the goal?
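
For reference, a minimal sketch of how the recommended query might look, assuming a hypothetical storage path and illustrative column lengths (the actual inferred schema table is not reproduced here). Explicitly listing the columns in a WITH clause with a BIN2 UTF-8 collation avoids the wide VARCHAR(8000) types produced by automatic inference and lets the serverless SQL pool push string predicates down to the Parquet files:

    -- Sketch only: storage URL and column widths are assumptions.
    SELECT
        businessName,
        surveyName,
        participantCount
    FROM OPENROWSET(
        BULK 'https://mystorageaccount.dfs.core.windows.net/surveys/*.parquet',
        FORMAT = 'PARQUET'
    ) WITH (
        businessName     VARCHAR(200) COLLATE Latin1_General_100_BIN2_UTF8,
        surveyName       VARCHAR(200) COLLATE Latin1_General_100_BIN2_UTF8,
        participantCount INT
    ) AS [result];
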