Exam DP-700 topic 1 question 2 discussion

Actual exam question from Microsoft's DP-700
Question #: 2
Topic #: 1

You have a Fabric workspace.
You have semi-structured data.
You need to read the data by using T-SQL, KQL, and Apache Spark. The data will only be written by using Spark.
What should you use to store the data?

  • A. a lakehouse
  • B. an eventhouse
  • C. a datamart
  • D. a warehouse
Suggested Answer: B

Comments

38578c4
Highly Voted 4 months, 3 weeks ago
Selected Answer: B
KQL is available only in an eventhouse: https://learn.microsoft.com/en-us/fabric/fundamentals/decision-guide-data-store
upvoted 22 times
Jayjay5
3 weeks, 2 days ago
A is still correct because it says "read".
upvoted 1 times
...
shmmini
3 months ago
I think both A and B could be correct answers here. If my data is in an eventhouse, I can query it using T-SQL, KQL, and PySpark (see the PySpark read sketch below). If my data is in a lakehouse, I can query it using SQL and PySpark, and I can create shortcuts to it in an eventhouse and then query it using KQL. I guess this question needs more precision for there to be only one possible correct answer.
upvoted 4 times
...
...
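To make the eventhouse-from-Spark read mentioned above concrete: a minimal PySpark sketch, assuming the Kusto Spark connector that ships with Fabric runtimes. The connector format string, option names, cluster URI, database, and table are illustrative assumptions, not a verified recipe.

    # Minimal sketch: reading an eventhouse KQL database from Spark.
    # Cluster URI, database, and table names are placeholders; auth is
    # assumed to come from the Fabric notebook's own identity.
    from pyspark.sql import SparkSession

    spark = SparkSession.builder.getOrCreate()

    df = (spark.read
          .format("com.microsoft.kusto.spark.synapse.datasource")  # assumed connector id
          .option("kustoCluster", "https://<your-eventhouse>.kusto.fabric.microsoft.com")
          .option("kustoDatabase", "EventDb")
          .option("kustoQuery", "Events | where Level == 'Error' | take 100")  # KQL pushed to the eventhouse
          .load())

    df.show()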
fffsssggg
Highly Voted 5 months, 1 week ago
Selected Answer: B
Eventhouse read operations: KQL, Spark, and T-SQL. Write operations: KQL and Spark. https://learn.microsoft.com/en-us/fabric/get-started/decision-guide-data-store
upvoted 8 times
...
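For the KQL read path cited above, a minimal sketch from plain Python, assuming the azure-kusto-data package; the endpoint URI, database, and table names are illustrative assumptions.

    # Minimal sketch: reading an eventhouse with KQL from Python.
    # Endpoint URI, database, and table are placeholders.
    from azure.kusto.data import KustoClient, KustoConnectionStringBuilder

    kcsb = KustoConnectionStringBuilder.with_interactive_login(
        "https://<your-eventhouse>.kusto.fabric.microsoft.com")
    client = KustoClient(kcsb)

    # Run a KQL query against the eventhouse database.
    result = client.execute("EventDb", "Events | summarize count() by Level")
    for row in result.primary_results[0]:
        print(row)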
LuisPBI25
Most Recent 4 days, 13 hours ago
Selected Answer: B
Lakehouse read operations are only T-SQL and Spark, whereas eventhouse read operations are T-SQL, Spark, and also KQL, as the question requires.
upvoted 1 times
...
Shw7
1 week ago
Selected Answer: A
A lakehouse is the correct answer.
upvoted 1 times
...
VenkataPhaniPavanKumar
1 week, 5 days ago
Selected Answer: A
Here's why a lakehouse fits this scenario:
  • It supports semi-structured data, such as JSON or Parquet.
  • It can be accessed using T-SQL, KQL, and Apache Spark, offering maximum flexibility across personas.
  • The data is only written using Spark, and lakehouses are optimized for Spark-based ingestion and processing (see the write sketch below).
  • It stores data in Delta Lake format in OneLake, making it ACID-compliant and performant across engines.
Source: Perplexity.ai and Microsoft Copilot (both say lakehouse). Quite surprised by the gap between Microsoft Learn and the copilots. https://learn.microsoft.com/en-us/fabric/fundamentals/decision-guide-data-store
upvoted 1 times
...
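To ground the Spark write path that both camps agree on, a minimal PySpark sketch of writing semi-structured JSON into a lakehouse Delta table; the file path and table name are illustrative assumptions.

    # Minimal sketch: writing semi-structured JSON into a lakehouse Delta table.
    # File path and table name are placeholders.
    from pyspark.sql import SparkSession

    spark = SparkSession.builder.getOrCreate()

    raw = spark.read.json("Files/landing/events/*.json")  # semi-structured input

    (raw.write
        .format("delta")   # lakehouse tables are stored as Delta Lake
        .mode("append")
        .saveAsTable("events"))

    # Read back from Spark; the same table is also exposed to T-SQL
    # through the lakehouse's SQL analytics endpoint.
    spark.sql("SELECT COUNT(*) AS n FROM events").show()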
Manish0427
2 weeks ago
Selected Answer: A
Although the data is semi-structured and only Apache Spark is used to write it, which both a lakehouse and an eventhouse satisfy, there is no mention of streaming data, so I would prefer a lakehouse for storage. An eventhouse is primarily designed for streaming data and KQL analytics; writing data with Apache Spark is not the ideal use case for an eventhouse. A lakehouse, on the other hand, can store semi-structured data, supports native write access via Apache Spark, and can be read using T-SQL (via the SQL analytics endpoint) and KQL (via shortcuts to Kusto); see the T-SQL read sketch below. In a real-world scenario, a lakehouse is the ideal choice in such a case.
upvoted 1 times
...
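And for the T-SQL read over the SQL analytics endpoint just mentioned, a minimal sketch from outside Fabric, assuming pyodbc with ODBC Driver 18; the server name, database, table, and auth mode are illustrative assumptions.

    # Minimal sketch: reading a lakehouse table over T-SQL via its
    # SQL analytics endpoint. Server, database, and auth are placeholders.
    import pyodbc

    conn = pyodbc.connect(
        "Driver={ODBC Driver 18 for SQL Server};"
        "Server=<your-endpoint>.datawarehouse.fabric.microsoft.com;"
        "Database=MyLakehouse;"
        "Authentication=ActiveDirectoryInteractive;"
        "Encrypt=yes;"
    )
    for row in conn.execute("SELECT TOP 5 * FROM dbo.events"):
        print(row)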
ogbenisho
2 weeks ago
Selected Answer: B
Obviously, the answer is B:
  • T-SQL, KQL, and Spark can be used on an eventhouse.
  • T-SQL and Spark can be used on a lakehouse.
upvoted 1 times
...
TayaC
2 weeks, 1 day ago
Selected Answer: A
A. a lakehouse is correct as the constraint is "only written by using Spark"
upvoted 1 times
...
Wipag
2 weeks, 1 day ago
Selected Answer: A
A. a lakehouse:
  • Designed to handle structured, semi-structured, and unstructured data.
  • Stored in Delta Lake format, making it accessible from Spark (native), T-SQL via the SQL analytics endpoint, and KQL via OneLake integration.
  • Supports multi-engine access.
  • Ideal for big data and analytics scenarios, especially with semi-structured data like JSON, Parquet, etc.
  • Spark is commonly used to write to lakehouses.
upvoted 1 times
...
smanzana
2 weeks, 2 days ago
Selected Answer: A
The correct answer is A.
upvoted 1 times
...
vikramkumar
2 weeks, 4 days ago
Selected Answer: A
A is the correct answer
upvoted 1 times
...
NNPRN
3 weeks, 2 days ago
Selected Answer: B
Eventhouse is the right answer because KQL is available only in an eventhouse: https://learn.microsoft.com/en-us/fabric/fundamentals/decision-guide-data-store
upvoted 1 times
...
DarioReymago
3 weeks, 3 days ago
Selected Answer: A
I select A. The question doesn't talk about streaming data.
upvoted 1 times
...
malik777
3 weeks, 4 days ago
Selected Answer: A
An eventhouse is meant for time-series data.
upvoted 1 times
...
MohanNaidu08
3 weeks, 6 days ago
Selected Answer: A
You need a storage solution that supports:
  • semi-structured data
  • read access via T-SQL, KQL, and Apache Spark
  • write access via Spark
A lakehouse in Microsoft Fabric is designed exactly for this:
  • Supports semi-structured and structured data (e.g., JSON, Parquet, CSV).
  • Can be queried using T-SQL (via the SQL analytics endpoint), KQL (via a Kusto query endpoint), and Apache Spark (via notebooks or jobs).
  • Optimized for big data processing and analytics.
  • Allows Spark-based writes and multi-engine reads.
upvoted 2 times
...
Ahmadpbi
4 weeks ago
Selected Answer: A
Why option B (an eventhouse) is not correct: an eventhouse in Microsoft Fabric is designed for large-scale, time-series, and event data, such as:
  • logs
  • telemetry
  • IoT device data
It is optimized for append-only workloads, and queries are executed using KQL (Kusto Query Language). It is not built to be written to using Apache Spark, and it does not natively support T-SQL or Spark-based analytics.
upvoted 1 times
...
akmorsh
4 weeks, 1 day ago
Selected Answer: A
Explanation by ChatGPT: A lakehouse in Microsoft Fabric is designed to store semi-structured data and supports multiple query languages, including T-SQL, KQL, and Apache Spark. The question specifies that the data will be written only by Spark and needs to be read by T-SQL, KQL, and Spark, which fits perfectly with the lakehouse architecture. An eventhouse is designed mainly for event streaming and ingestion. A datamart is typically used for relational, structured data with a semantic model and T-SQL querying, but does not natively support Spark or KQL. A warehouse is optimized for structured, relational data and T-SQL querying, but does not support KQL or Spark.
upvoted 2 times
...
Community vote distribution: A (35%), C (25%), B (20%), Other