DP-300 Actual Exam Questions

Last updated on Dec. 4, 2024.
Vendor: Microsoft
Exam Code: DP-300
Exam Name: Administering Relational Databases on Microsoft Azure
Exam Questions: 373
 

Topic 1 - Question Set 1

Question #1 Topic 1

You have 20 Azure SQL databases provisioned by using the vCore purchasing model.
You plan to create an Azure SQL Database elastic pool and add the 20 databases.
Which three metrics should you use to size the elastic pool to meet the demands of your workload? Each correct answer presents part of the solution.
NOTE: Each correct selection is worth one point.

  • A. total size of all the databases
  • B. geo-replication support
  • C. number of concurrently peaking databases * peak CPU utilization per database
  • D. maximum number of concurrent sessions for all the databases
  • E. total number of databases * average CPU utilization per database

Correct Answer: ACE
To size the pool, estimate the aggregate compute needed as the larger of (number of concurrently peaking databases × peak CPU utilization per database) and (total number of databases × average CPU utilization per database), and make sure the pool provides enough storage for the total size of all the databases.

Question #2 Topic 1

DRAG DROP
You have SQL Server 2019 on an Azure virtual machine that contains an SSISDB database.
A recent failure causes the master database to be lost.
You discover that all Microsoft SQL Server Integration Services (SSIS) packages fail to run on the virtual machine.
Which four actions should you perform in sequence to resolve the issue? To answer, move the appropriate actions from the list of actions to the answer area and arrange them in the correct order.
Select and Place:


Correct Answer:
Step 1: Attach the SSISDB database
Step 2: Turn on the TRUSTWORTHY property and the CLR property
If you are restoring the SSISDB database to a SQL Server instance where the SSISDB catalog was never created, enable the common language runtime (CLR).
Step 3: Open the master key for the SSISDB database
Restore the master key by this method if you have the original password that was used to create SSISDB:
OPEN MASTER KEY DECRYPTION BY PASSWORD = 'LS1Setup!'; -- password used when creating SSISDB
Step 4: Encrypt a copy of the master key by using the service master key
ALTER MASTER KEY ADD ENCRYPTION BY SERVICE MASTER KEY;
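Taken together, a minimal T-SQL sketch of the four steps; the file paths are illustrative, and the password is assumed to be the one used when SSISDB was originally created:
CREATE DATABASE SSISDB
    ON (FILENAME = 'C:\Data\SSISDB.mdf'),
       (FILENAME = 'C:\Data\SSISDB.ldf')
    FOR ATTACH;                                            -- Step 1: attach SSISDB
GO
ALTER DATABASE SSISDB SET TRUSTWORTHY ON;                  -- Step 2: TRUSTWORTHY property
EXEC sp_configure 'clr enabled', 1;                        -- Step 2: CLR property
RECONFIGURE;
GO
USE SSISDB;
OPEN MASTER KEY DECRYPTION BY PASSWORD = 'LS1Setup!';      -- Step 3: open the master key
ALTER MASTER KEY ADD ENCRYPTION BY SERVICE MASTER KEY;     -- Step 4: re-encrypt with the service master key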
Reference:
https://docs.microsoft.com/en-us/sql/integration-services/catalog/ssis-catalog

Question #3 Topic 1

You have an Azure SQL database that contains a table named factSales. FactSales contains the columns shown in the following table.

FactSales has 6 billion rows and is loaded nightly by using a batch process. You must provide the greatest reduction in space for the database and maximize performance.
Which type of compression provides the greatest space reduction for the database?

  • A. page compression
  • B. row compression
  • C. columnstore compression
  • D. columnstore archival compression

Correct Answer: D
Columnstore archival compression compresses the data further than standard columnstore compression, providing the greatest space reduction at the cost of extra CPU on reads, which is acceptable for a fact table loaded by a nightly batch process.
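For reference, a minimal sketch of applying archival compression, assuming factSales already has a clustered columnstore index; the index name is illustrative:
ALTER INDEX cci_factSales ON dbo.factSales
    REBUILD PARTITION = ALL
    WITH (DATA_COMPRESSION = COLUMNSTORE_ARCHIVE);  -- archival compression for maximum space savings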

Question #4 Topic 1

You have a Microsoft SQL Server 2019 database named DB1 that uses the following database-level and instance-level features.
✑ Clustered columnstore indexes
✑ Automatic tuning
✑ Change tracking
✑ PolyBase
You plan to migrate DB1 to an Azure SQL database.
What feature should be removed or replaced before DB1 can be migrated?

  • A. Clustered columnstore indexes
  • B. PolyBase
  • C. Change tracking
  • D. Automatic tuning

Correct Answer: B
PolyBase is not supported in Azure SQL Database, so it must be removed or replaced before the migration. Clustered columnstore indexes, change tracking, and automatic tuning are all supported.

Question #5 Topic 1

You have a Microsoft SQL Server 2019 instance in an on-premises datacenter. The instance contains a 4-TB database named DB1.
You plan to migrate DB1 to an Azure SQL Database managed instance.
What should you use to minimize downtime and data loss during the migration?

  • A. distributed availability groups
  • B. database mirroring
  • C. Always On Availability Group
  • D. Azure Database Migration Service

Correct Answer: D
Azure Database Migration Service supports online migrations to SQL Managed Instance, which minimize both downtime and data loss.
Reference:
https://docs.microsoft.com/en-us/azure/dms/dms-overview

Question #6 Topic 1

HOTSPOT
You have an on-premises Microsoft SQL Server 2016 server named Server1 that contains a database named DB1.
You need to perform an online migration of DB1 to an Azure SQL Database managed instance by using Azure Database Migration Service.
How should you configure the backup of DB1? To answer, select the appropriate options in the answer area.
NOTE: Each correct selection is worth one point.
Hot Area:


Correct Answer:
Box 1: Full and log backups only
Make sure to take every backup to a separate backup file. Azure Database Migration Service doesn't support backups that are appended to a single backup file, so take the full backup and the log backups to separate backup files.

Box 2: WITH CHECKSUM
Azure Database Migration Service uses the backup and restore method to migrate your on-premises databases to SQL Managed Instance, and it only supports backups created by using the WITH CHECKSUM option.
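A minimal sketch of backups that satisfy both answers; the file paths are illustrative:
BACKUP DATABASE DB1
    TO DISK = 'D:\Backups\DB1_full.bak'
    WITH CHECKSUM, INIT;   -- full backup, written to its own file
BACKUP LOG DB1
    TO DISK = 'D:\Backups\DB1_log_001.trn'
    WITH CHECKSUM, INIT;   -- each log backup goes to a separate file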
Incorrect Answers:
NOINIT: Indicates that the backup set is appended to the specified media set, preserving existing backup sets. If a media password is defined for the media set, the password must be supplied. NOINIT is the default.

UNLOAD: Specifies that the tape is automatically rewound and unloaded when the backup is finished. UNLOAD is the default when a session begins.
Reference:
https://docs.microsoft.com/en-us/azure/dms/known-issues-azure-sql-db-managed-instance-online

Question #7 Topic 1

DRAG DROP
You have a resource group named App1Dev that contains an Azure SQL Database server named DevServer1. DevServer1 contains an Azure SQL database named DB1. The schema and permissions for DB1 are saved in a Microsoft SQL Server Data Tools (SSDT) database project.
You need to populate a new resource group named App1Test with the DB1 database and an Azure SQL Server named TestServer1. The resources in App1Test must have the same configurations as the resources in App1Dev.
Which four actions should you perform in sequence? To answer, move the appropriate actions from the list of actions to the answer area and arrange them in the correct order.
Select and Place:


Correct Answer:

Question #8 Topic 1

HOTSPOT
You have an Azure Synapse Analytics dedicated SQL pool named Pool1 and an Azure Data Lake Storage Gen2 account named Account1.
You plan to access the files in Account1 by using an external table.
You need to create a data source in Pool1 that you can reference when you create the external table.
How should you complete the Transact-SQL statement? To answer, select the appropriate options in the answer area.
NOTE: Each correct selection is worth one point.
Hot Area:


Correct Answer:
Box 1: dfs
For Azure Data Lake Storage Gen2, use the following syntax:
http[s]://<storage_account>.dfs.core.windows.net/<container>/<subfolders>
Incorrect:
Not blob: blob is used for Azure Blob Storage. Syntax:
http[s]://<storage_account>.blob.core.windows.net/<container>/<subfolders>

Box 2: TYPE = HADOOP
External data sources with TYPE = HADOOP are available only in dedicated SQL pools. Syntax for CREATE EXTERNAL DATA SOURCE:
CREATE EXTERNAL DATA SOURCE <data_source_name>
WITH (
    LOCATION = '<prefix>://<path>'
    [, CREDENTIAL = <database_scoped_credential> ]
    , TYPE = HADOOP
)
[;]
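A concrete sketch for Account1, assuming a hypothetical container named files and a previously created database scoped credential named Account1Credential:
CREATE EXTERNAL DATA SOURCE Account1Source
WITH (
    LOCATION = 'abfss://files@account1.dfs.core.windows.net',  -- ADLS Gen2 endpoint
    CREDENTIAL = Account1Credential,                           -- assumed to exist
    TYPE = HADOOP                                              -- required for dedicated SQL pools
);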
Reference:
https://docs.microsoft.com/en-us/azure/synapse-analytics/sql/develop-tables-external-tables

Question #9 Topic 1

HOTSPOT
You plan to develop a dataset named Purchases by using Azure Databricks. Purchases will contain the following columns:
✑ ProductID
✑ ItemPrice
✑ LineTotal
✑ Quantity
✑ StoreID
✑ Minute
✑ Month
✑ Hour
✑ Year
✑ Day
You need to store the data to support hourly incremental load pipelines that will vary for each StoreID. The solution must minimize storage costs.
How should you complete the code? To answer, select the appropriate options in the answer area.
NOTE: Each correct selection is worth one point.
Hot Area:


Correct Answer:
Box 1: .partitionBy
Example:
df.write.partitionBy("y", "m", "d")
  .mode(SaveMode.Append)
  .parquet("/data/hive/warehouse/db_name.db/" + tableName)
Box 2: ("Year","Month","Day","Hour","StoreID")
Box 3: .parquet("/Purchases")
Reference:
https://intellipaat.com/community/11744/how-to-partition-and-write-dataframe-in-spark-without-deleting-partitions-with-no-new-data

Question #10 Topic 1

You are designing a streaming data solution that will ingest variable volumes of data.
You need to ensure that you can change the partition count after creation.
Which service should you use to ingest the data?

  • A. Azure Event Hubs Standard
  • B. Azure Stream Analytics
  • C. Azure Data Factory
  • D. Azure Event Hubs Dedicated

Correct Answer: D
Of the options listed, only Azure Event Hubs Dedicated lets you change the partition count after an event hub is created; in the Standard tier, the partition count is fixed at creation. Azure Stream Analytics and Azure Data Factory are not ingestion services.
