Note: This question is part of a series of questions that present the same scenario. Each question in the series contains a unique solution that might meet the stated goals. Some question sets might have more than one correct solution, while others might not have a correct solution.
After you answer a question in this section, you will NOT be able to return to it. As a result, these questions will not appear in the review screen.
You have a Fabric tenant that contains a new semantic model in OneLake.
You use a Fabric notebook to read the data into a Spark DataFrame.
You need to evaluate the data to calculate the min, max, mean, and standard deviation values for all the string and numeric columns.
Solution: You use the following PySpark expression:
df.summary()
Does this meet the goal?
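For context, a minimal sketch of how the expression behaves on a small DataFrame (the column names and values below are illustrative only, not from the scenario):

from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# Illustrative DataFrame with one string and one numeric column
# (hypothetical data, not part of the original question).
df = spark.createDataFrame(
    [("A", 10.0), ("B", 20.0), ("C", 30.0)],
    ["category", "amount"],
)

# summary() returns count, mean, stddev, min, max and the 25%/50%/75%
# percentiles for numeric columns; string columns report count, min, and max,
# with mean and stddev shown as null.
df.summary().show()

# Specific statistics can also be requested explicitly:
df.summary("min", "max", "mean", "stddev").show()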