Note: This question is part of a series of questions that present the same scenario. Each question in the series contains a unique solution that might meet the stated goals. Some question sets might have more than one correct solution, while others might not have a correct solution.
After you answer a question in this section, you will NOT be able to return to it. As a result, these questions will not appear in the review screen.
You have a Fabric tenant that contains a new semantic model in OneLake.
You use a Fabric notebook to read the data into a Spark DataFrame.
You need to evaluate the data to calculate the min, max, mean, and standard deviation values for all the string and numeric columns.
Solution: You use the following PySpark expression:
df.summary()
Does this meet the goal?
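
For reference, a minimal sketch of how summary() behaves in a notebook. It assumes a Spark session is available (Fabric notebooks expose one as spark); the sample DataFrame, its column names, and its values below are purely hypothetical illustrations:

# Minimal sketch: hypothetical DataFrame with one string and two numeric columns.
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()  # already provided as `spark` in a Fabric notebook

df = spark.createDataFrame(
    [("Alice", 34, 120.5), ("Bob", 45, 98.0), ("Carol", 29, 150.25)],
    ["name", "age", "amount"],
)

# With no arguments, summary() returns count, mean, stddev, min, the 25%/50%/75%
# percentiles, and max for every column. For string columns, count, min, and max
# are computed, while mean, stddev, and the percentiles are null.
df.summary().show()

# Specific statistics can also be requested by name:
df.summary("min", "max", "mean", "stddev").show()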