
Google Professional-Cloud-Database-Engineer: Google Cloud Certified - Professional Cloud Database Engineer Exam Practice Test

Google Cloud Certified - Professional Cloud Database Engineer Questions and Answers

Question 1

You are developing a new application on a VM that is on your corporate network. The application will use Java Database Connectivity (JDBC) to connect to Cloud SQL for PostgreSQL. Your Cloud SQL instance is configured with IP address 192.168.3.48, and SSL is disabled. You want to ensure that your application can access your database instance without requiring configuration changes to your database. What should you do?

Options:

A.

Define a connection string using your Google username and password to point to the external (public) IP address of your Cloud SQL instance.

B.

Define a connection string using a database username and password to point to the internal (private) IP address of your Cloud SQL instance.

C.

Define a connection string using Cloud SQL Auth proxy configured with a service account to point to the internal (private) IP address of your Cloud SQL instance.

D.

Define a connection string using Cloud SQL Auth proxy configured with a service account to point to the external (public) IP address of your Cloud SQL instance.
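
For context, here is what a direct connection to the instance's IP address with database credentials might look like. This is a minimal sketch in Python rather than the Java/JDBC setup the question assumes; the database name, user, and password are hypothetical, and only the IP address comes from the question.

```python
import os

import psycopg2  # PostgreSQL driver; pip install psycopg2-binary

# Connect to the Cloud SQL for PostgreSQL instance over its IP address.
# SSL is disabled on the instance, so sslmode is set accordingly.
conn = psycopg2.connect(
    host="192.168.3.48",             # IP address from the question
    port=5432,
    dbname="appdb",                  # hypothetical database name
    user="app_user",                 # hypothetical database user
    password=os.environ["DB_PASSWORD"],
    sslmode="disable",
)

with conn.cursor() as cur:
    cur.execute("SELECT version();")
    print(cur.fetchone())
conn.close()
```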

Question 2

You need to perform a one-time migration of data from a running Cloud SQL for MySQL instance in the us-central1 region to a new Cloud SQL for MySQL instance in the us-east1 region. You want to follow Google-recommended practices to minimize performance impact on the currently running instance. What should you do?

Options:

A.

Create and run a Dataflow job that uses JdbcIO to copy data from one Cloud SQL instance to another.

B.

Create two Datastream connection profiles, and use them to create a stream from one Cloud SQL instance to another.

C.

Create a SQL dump file in Cloud Storage using a temporary instance, and then use that file to import into a new instance.

D.

Create a CSV file by running the SQL statement SELECT...INTO OUTFILE, copy the file to a Cloud Storage bucket, and import it into a new instance.
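
For reference, the dump-and-import flow that the options revolve around can be driven with the Cloud SQL gcloud commands, wrapped here in Python for consistency with the other sketches. All instance, database, and bucket names are hypothetical, and an installed, authenticated gcloud CLI is assumed.

```python
import subprocess

SOURCE = "mysql-us-central1"                      # hypothetical source instance
TARGET = "mysql-us-east1"                         # hypothetical new instance
BUCKET_URI = "gs://my-migration-bucket/dump.sql.gz"

# Export a SQL dump of one database to Cloud Storage...
subprocess.run(
    ["gcloud", "sql", "export", "sql", SOURCE, BUCKET_URI, "--database=appdb"],
    check=True,
)

# ...then import it into the new instance in the other region.
subprocess.run(
    ["gcloud", "sql", "import", "sql", TARGET, BUCKET_URI, "--database=appdb"],
    check=True,
)
```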

Question 3

You are designing a database strategy for a new web application. You plan to start with a small pilot in one country and eventually expand to millions of users in a global audience. You need to ensure that the application can run 24/7 with minimal downtime for maintenance. What should you do?

Options:

A.

Use Cloud Spanner in a regional configuration.

B.

Use Cloud Spanner in a multi-region configuration.

C.

Use Cloud SQL with cross-region replicas.

D.

Use highly available Cloud SQL with multiple zones.

Question 4

You are working on a new centralized inventory management system to track items available in 200 stores, which each have 500 GB of data. You are planning a gradual rollout of the system to a few stores each week. You need to design an SQL database architecture that minimizes costs and user disruption during each regional rollout and can scale up or down on nights and holidays. What should you do?

Options:

A.

Use Oracle Real Application Cluster (RAC) databases on Bare Metal Solution for Oracle.

B.

Use sharded Cloud SQL instances with one or more stores per database instance.

C.

Use a Bigtable cluster with autoscaling.

D.

Use Cloud Spanner with a custom autoscaling solution.

Question 5

You are responsible for designing a new database for an airline ticketing application in Google Cloud. This application must be able to:

Work with transactions and offer strong consistency.

Work with structured and semi-structured (JSON) data.

Scale transparently to multiple regions globally as the operation grows.

You need a Google Cloud database that meets all the requirements of the application. What should you do?

Options:

A.

Use Cloud SQL for PostgreSQL with cross-region read replicas.

B.

Use Cloud Spanner in a multi-region configuration.

C.

Use Firestore in Datastore mode.

D.

Use a Bigtable instance with clusters in multiple regions.
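
As background on the structured-plus-JSON requirement, Cloud Spanner's GoogleSQL dialect includes a JSON column type that can be queried alongside relational columns. A minimal, hypothetical sketch with the Spanner Python client (the project, instance, database, table, and column names are made up):

```python
from google.cloud import spanner  # pip install google-cloud-spanner

client = spanner.Client(project="my-project")                     # hypothetical project
database = client.instance("tickets-instance").database("tickets-db")

# Read relational columns and a JSON column in one strongly consistent query.
with database.snapshot() as snapshot:
    rows = snapshot.execute_sql(
        "SELECT TicketId, PassengerName, FareDetails "   # FareDetails is a JSON column
        "FROM Tickets LIMIT 10"
    )
    for ticket_id, name, fare_details in rows:
        print(ticket_id, name, fare_details)
```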

Question 6

You use Python scripts to generate weekly SQL reports to assess the state of your databases and determine whether you need to reorganize tables or run statistics. You want to automate this report but need to minimize operational costs and overhead. What should you do?

Options:

A.

Create a VM in Compute Engine, and run a cron job.

B.

Create a Cloud Composer instance, and create a directed acyclic graph (DAG).

C.

Create a Cloud Function, and call the Cloud Function using Cloud Scheduler.

D.

Create a Cloud Function, and call the Cloud Function from a Cloud Tasks queue.
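
To make the Cloud Function pattern concrete, here is a hypothetical HTTP-triggered function in Python (using functions-framework and the Cloud SQL Python Connector) that could run a report query when its URL is called; all connection details are placeholders. Cloud Scheduler or Cloud Tasks would then be configured to invoke the function's trigger URL.

```python
import os

import functions_framework                         # pip install functions-framework
from google.cloud.sql.connector import Connector   # pip install "cloud-sql-python-connector[pg8000]"

connector = Connector()

@functions_framework.http
def weekly_report(request):
    """HTTP entry point; a scheduler hits this URL once a week."""
    conn = connector.connect(
        "my-project:us-central1:reports-db",       # hypothetical instance connection name
        "pg8000",
        user="report_user",
        password=os.environ["DB_PASSWORD"],
        db="appdb",
    )
    cur = conn.cursor()
    # Hypothetical health query; replace with the real report SQL.
    cur.execute(
        "SELECT relname, n_dead_tup FROM pg_stat_user_tables ORDER BY n_dead_tup DESC;"
    )
    report = cur.fetchall()
    conn.close()
    return {"rows": len(report)}, 200
```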

Question 7

Your team recently released a new version of a highly consumed application to accommodate additional user traffic. Shortly after the release, you received an alert from your production monitoring team that there is consistently high replication lag between your primary instance and the read replicas of your Cloud SQL for MySQL instances. You need to resolve the replication lag. What should you do?

Options:

A.

Identify and optimize slow running queries, or set parallel replication flags.

B.

Stop all running queries, and re-create the replicas.

C.

Edit the primary instance to upgrade to a larger disk, and increase vCPU count.

D.

Edit the primary instance to add additional memory.

Question 8

You are running an instance of Cloud Spanner as the backend of your ecommerce website. You learn that the quality assurance (QA) team has doubled the number of their test cases. You need to create a copy of your Cloud Spanner database in a new test environment to accommodate the additional test cases. You want to follow Google-recommended practices. What should you do?

Options:

A.

Use Cloud Functions to run the export in Avro format.

B.

Use Cloud Functions to run the export in text format.

C.

Use Dataflow to run the export in Avro format.

D.

Use Dataflow to run the export in text format.

Question 9

You need to redesign the architecture of an application that currently uses Cloud SQL for PostgreSQL. The users of the application complain about slow query response times. You want to enhance your application architecture to offer sub-millisecond query latency. What should you do?

Options:

A.

Configure Firestore, and modify your application to offload queries.

B.

Configure Bigtable, and modify your application to offload queries.

C.

Configure Cloud SQL for PostgreSQL read replicas to offload queries.

D.

Configure Memorystore, and modify your application to offload queries.
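
For background, "offloading queries" to a cache is typically a cache-aside pattern: check a fast in-memory store first and only fall back to the database on a miss. A hypothetical Python sketch using Redis (the host IP, key names, table, and query are placeholders):

```python
import json

import redis  # pip install redis

cache = redis.Redis(host="10.0.0.3", port=6379)     # hypothetical cache endpoint

def get_customer(customer_id, db_conn):
    key = f"customer:{customer_id}"
    cached = cache.get(key)
    if cached is not None:
        return json.loads(cached)                    # fast in-memory path

    # Cache miss: fall back to the relational database.
    with db_conn.cursor() as cur:
        cur.execute("SELECT id, name, tier FROM customers WHERE id = %s", (customer_id,))
        row = cur.fetchone()

    record = {"id": row[0], "name": row[1], "tier": row[2]}
    cache.setex(key, 300, json.dumps(record))        # cache for 5 minutes
    return record
```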

Question 10

You are configuring a new application that has access to an existing Cloud Spanner database. The new application reads from this database to gather statistics for a dashboard. You want to follow Google-recommended practices when granting Identity and Access Management (IAM) permissions. What should you do?

Options:

A.

Reuse the existing service account that populates this database.

B.

Create a new service account, and grant it the Cloud Spanner Database Admin role.

C.

Create a new service account, and grant it the Cloud Spanner Database Reader role.

D.

Create a new service account, and grant it the spanner.databases.select permission.
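
For reference, creating a dedicated service account and binding a predefined Cloud Spanner role to it at the database level can be scripted with gcloud, wrapped here in Python. The project, instance, database, and account names are hypothetical, and the role shown is only one of the roles the options mention; substitute whichever you decide is appropriate.

```python
import subprocess

PROJECT = "my-project"                                   # hypothetical project
SA_NAME = "dashboard-reader"                             # hypothetical service account
SA_EMAIL = f"{SA_NAME}@{PROJECT}.iam.gserviceaccount.com"

# Create a new, dedicated service account (assumes an authenticated gcloud CLI).
subprocess.run(
    ["gcloud", "iam", "service-accounts", "create", SA_NAME, f"--project={PROJECT}"],
    check=True,
)

# Bind a predefined Cloud Spanner role to it on the target database.
subprocess.run(
    [
        "gcloud", "spanner", "databases", "add-iam-policy-binding", "stats-db",
        "--instance=prod-instance",
        f"--member=serviceAccount:{SA_EMAIL}",
        "--role=roles/spanner.databaseReader",           # substitute the role you choose
    ],
    check=True,
)
```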

Question 11

Your team is building an application that stores and analyzes streaming time series financial data. You need a database solution that can perform time series-based scans with sub-second latency. The solution must scale into the hundreds of terabytes and be able to write up to 10k records per second and read up to 200 MB per second. What should you do?

Options:

A.

Use Firestore.

B.

Use Bigtable.

C.

Use BigQuery.

D.

Use Cloud Spanner.
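
As background on time series-oriented design in a wide-column store, rows are usually keyed so that all records for one series sort together by time. A hypothetical Python sketch using the Bigtable client (the project, instance, table, and column-family names are made up):

```python
import datetime

from google.cloud import bigtable  # pip install google-cloud-bigtable

client = bigtable.Client(project="my-project")          # hypothetical project
table = client.instance("ts-instance").table("ticks")   # hypothetical instance/table

def write_tick(symbol: str, ts: datetime.datetime, price: float):
    # Row key groups each symbol's ticks together and sorts them by time.
    row_key = f"{symbol}#{ts.strftime('%Y%m%d%H%M%S%f')}".encode()
    row = table.direct_row(row_key)
    row.set_cell(
        "quotes", b"price", str(price).encode(),
        timestamp=ts.replace(tzinfo=datetime.timezone.utc),
    )
    row.commit()

write_tick("GOOG", datetime.datetime.utcnow(), 172.31)
```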

Question 12

You are running a large, highly transactional application on Oracle Real Application Cluster (RAC) that is multi-tenant and uses shared storage. You need a solution that ensures high-performance throughput and a low-latency connection between applications and databases. The solution must also support existing Oracle features and provide ease of migration to Google Cloud. What should you do?

Options:

A.

Migrate to Compute Engine.

B.

Migrate to Bare Metal Solution for Oracle.

C.

Migrate to Google Kubernetes Engine (GKE).

D.

Migrate to Google Cloud VMware Engine.

Question 13

Your organization has a ticketing system that needs an online marketing analytics and reporting application. You need to select a relational database that can manage hundreds of terabytes of data to support this new application. Which database should you use?

Options:

A.

Cloud SQL

B.

BigQuery

C.

Cloud Spanner

D.

Bigtable

Question 14

You are building an application that allows users to customize their website and mobile experiences. The application will capture user information and preferences. User profiles have a dynamic schema, and users can add or delete information from their profile. You need to ensure that user changes automatically trigger updates to your downstream BigQuery data warehouse. What should you do?

Options:

A.

Store your data in Bigtable, and use the user identifier as the key. Use one column family to store user profile data, and use another column family to store user preferences.

B.

Use Cloud SQL, and create different tables for user profile data and user preferences from your recommendations model. Use SQL to join the user profile data and preferences.

C.

Use Firestore in Native mode, and store user profile data as a document. Update the user profile with preferences specific to that user and use the user identifier to query.

D.

Use Firestore in Datastore mode, and store user profile data as a document. Update the user profile with preferences specific to that user and use the user identifier to query.
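
To illustrate the dynamic-schema idea that several options revolve around, a document store lets each user profile carry whatever fields it needs. A hypothetical sketch with the Firestore Python client (the project, collection name, and fields are made up):

```python
from google.cloud import firestore  # pip install google-cloud-firestore

db = firestore.Client(project="my-project")               # hypothetical project

user_id = "user-123"
profile = db.collection("user_profiles").document(user_id)

# Each profile can have a different shape; merge=True adds or updates
# fields without overwriting the rest of the document.
profile.set({"display_name": "Ada", "preferences": {"theme": "dark"}}, merge=True)
profile.update({"preferences.notifications": False})       # dotted path updates a nested field

print(profile.get().to_dict())
```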

Question 15

Your organization works with sensitive data that requires you to manage your own encryption keys. You are working on a project that stores that data in a Cloud SQL database. You need to ensure that stored data is encrypted with your keys. What should you do?

Options:

A.

Export data periodically to a Cloud Storage bucket protected by Customer-Supplied Encryption Keys.

B.

Use Cloud SQL Auth proxy.

C.

Connect to Cloud SQL using a connection that has SSL encryption.

D.

Use customer-managed encryption keys with Cloud SQL.

Question 16

You are writing an application that will run on Cloud Run and require a database running in the Cloud SQL managed service. You want to secure this instance so that it only receives connections from applications running in your VPC environment in Google Cloud. What should you do?

Options:

A.

1. Create your instance with a specified external (public) IP address.

2. Choose the VPC and create firewall rules to allow only connections from Cloud Run into your instance.

3. Use Cloud SQL Auth proxy to connect to the instance.

B.

1. Create your instance with a specified external (public) IP address.

2. Choose the VPC and create firewall rules to allow only connections from Cloud Run into your instance.

3. Connect to the instance using a connection pool to best manage connections to the instance.

C.

1. Create your instance with a specified internal (private) IP address.

2. Choose the VPC with private service connection configured.

3. Configure the Serverless VPC Access connector in the same VPC network as your Cloud SQL instance.

4. Use Cloud SQL Auth proxy to connect to the instance.

D.

1. Create your instance with a specified internal (private) IP address.

2. Choose the VPC with private service connection configured.

3. Configure the Serverless VPC Access connector in the same VPC network as your Cloud SQL instance.

4. Connect to the instance using a connection pool to best manage connections to the instance.

Question 17

Your retail organization is preparing for the holiday season. Use of catalog services is increasing, and your DevOps team is supporting the Cloud SQL databases that power a microservices-based application. The DevOps team has added instrumentation through Sqlcommenter. You need to identify the root cause of why certain microservice calls are failing. What should you do?

Options:

A.

Watch Query Insights for long running queries.

B.

Watch the Cloud SQL instance monitor for CPU utilization metrics.

C.

Watch the Cloud SQL recommenders for overprovisioned instances.

D.

Watch Cloud Trace for application requests that are failing.

Question 18

Your company uses Bigtable for a user-facing application that displays a low-latency real-time dashboard. You need to recommend the optimal storage type for this read-intensive database. What should you do?

Options:

A.

Recommend solid-state drives (SSD).

B.

Recommend splitting the Bigtable instance into two instances in order to load balance the concurrent reads.

C.

Recommend hard disk drives (HDD).

D.

Recommend mixed storage types.

Question 19

An analytics team needs to read data out of Cloud SQL for SQL Server and update a table in Cloud Spanner. You need to create a service account and grant least privilege access using predefined roles. What roles should you assign to the service account?

Options:

A.

roles/cloudsql.viewer and roles/spanner.databaseUser

B.

roles/cloudsql.editor and roles/spanner.admin

C.

roles/cloudsql.client and roles/spanner.databaseReader

D.

roles/cloudsql.instanceUser and roles/spanner.databaseUser

Question 20

You are managing a mission-critical Cloud SQL for PostgreSQL instance. Your application team is running important transactions on the database when another DBA starts an on-demand backup. You want to verify the status of the backup. What should you do?

Options:

A.

Check the cloudsql.googleapis.com/postgres.log instance log.

B.

Perform the gcloud sql operations list command.

C.

Use Cloud Audit Logs to verify the status.

D.

Use the Google Cloud Console.
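
For reference, the operations listing mentioned in the options takes the instance name as a flag; a hypothetical wrapper in Python (the instance name is made up, and an authenticated gcloud CLI is assumed):

```python
import subprocess

# List recent operations (including backups) for a Cloud SQL instance.
subprocess.run(
    ["gcloud", "sql", "operations", "list",
     "--instance=prod-postgres",        # hypothetical instance name
     "--limit=10"],
    check=True,
)
```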

Question 21

You are designing a highly available (HA) Cloud SQL for PostgreSQL instance that will be used by 100 databases. Each database contains 80 tables that were migrated from your on-premises environment to Google Cloud. The applications that use these databases are located in multiple regions in the US, and you need to ensure that read and write operations have low latency. What should you do?

Options:

A.

Deploy 2 Cloud SQL instances in the us-central1 region with HA enabled, and create read replicas in us-east1 and us-west1.

B.

Deploy 2 Cloud SQL instances in the us-central1 region, and create read replicas in us-east1 and us-west1.

C.

Deploy 4 Cloud SQL instances in the us-central1 region with HA enabled, and create read replicas in us-central1, us-east1, and us-west1.

D.

Deploy 4 Cloud SQL instances in the us-central1 region, and create read replicas in us-central1, us-east1 and us-west1.

Question 22

You plan to use Database Migration Service to migrate data from a PostgreSQL on-premises instance to Cloud SQL. You need to identify the prerequisites for creating and automating the task. What should you do? (Choose two.)

Options:

A.

Drop or disable all users except database administration users.

B.

Disable all foreign key constraints on the source PostgreSQL database.

C.

Ensure that all PostgreSQL tables have a primary key.

D.

Shut down the database before the Database Migration Service task is started.

E.

Ensure that pglogical is installed on the source PostgreSQL database.

Question 23

You are migrating an on-premises application to Google Cloud. The application requires a high availability (HA) PostgreSQL database to support business-critical functions. Your company's disaster recovery strategy requires a recovery time objective (RTO) and recovery point objective (RPO) within 30 minutes of failure. You plan to use a Google Cloud managed service. What should you do to maximize uptime for your application?

Options:

A.

Deploy Cloud SQL for PostgreSQL in a regional configuration. Create a read replica in a different zone in the same region and a read replica in another region for disaster recovery.

B.

Deploy Cloud SQL for PostgreSQL in a regional configuration with HA enabled. Take periodic backups, and use this backup to restore to a new Cloud SQL for PostgreSQL instance in another region during a disaster recovery event.

C.

Deploy Cloud SQL for PostgreSQL in a regional configuration with HA enabled. Create a cross-region read replica, and promote the read replica as the primary node for disaster recovery.

D.

Migrate the PostgreSQL database to multi-regional Cloud Spanner so that a single region outage will not affect your application. Update the schema to support Cloud Spanner data types, and refactor the application.

Question 24

You are configuring a brand new PostgreSQL database instance in Cloud SQL. Your application team wants to have an optimal and highly available environment with automatic failover to avoid any unplanned outage. What should you do?

Options:

A.

Create one regional Cloud SQL instance with a read replica in another region.

B.

Create one regional Cloud SQL instance in one zone with a standby instance in another zone in the same region.

C.

Create two read-write Cloud SQL instances in two different zones with a standby instance in another region.

D.

Create two read-write Cloud SQL instances in two different regions with a standby instance in another zone.

Question 25

You are building an Android game that needs to store data on a Google Cloud serverless database. The database will log user activity, store user preferences, and receive in-game updates. The target audience resides in developing countries that have intermittent internet connectivity. You need to ensure that the game can synchronize game data to the backend database whenever an internet network is available. What should you do?

Options:

A.

Use Firestore.

B.

Use Cloud SQL with an external (public) IP address.

C.

Use an in-app embedded database.

D.

Use Cloud Spanner.

Question 26

You are the database administrator of a Cloud SQL for PostgreSQL instance that has pgaudit disabled. Users are complaining that their queries are taking longer to execute and performance has degraded over the past few months. You need to collect and analyze query performance data to help identify slow-running queries. What should you do?

Options:

A.

View Cloud SQL operations to view historical query information.

B.

Write a Logs Explorer query to identify database queries with high execution times.

C.

Review application logs to identify database calls.

D.

Use the Query Insights dashboard to identify high execution times.

Question 27

Your company's mission-critical, globally available application is supported by a Cloud Spanner database. Experienced users of the application have read and write access to the database, but new users are assigned read-only access to the database. You need to assign the appropriate Cloud Spanner Identity and Access Management (IAM) role to new users being onboarded soon. What roles should you set up?

Options:

A.

roles/spanner.databaseReader

B.

roles/spanner.databaseUser

C.

roles/spanner.viewer

D.

roles/spanner.backupWriter

Question 28

Your DevOps team is using Terraform to deploy applications and Cloud SQL databases. After every new application change is rolled out, the environment is torn down and recreated, and the persistent database layer is lost. You need to prevent the database from being dropped. What should you do?

Options:

A.

Set Terraform deletion_protection to true.

B.

Rerun terraform apply.

C.

Create a read replica.

D.

Use point-in-time recovery (PITR) to recover the database.

Question 29

Your company wants you to migrate their Oracle, MySQL, Microsoft SQL Server, and PostgreSQL relational databases to Google Cloud. You need a fully managed, flexible database solution when possible. What should you do?

Options:

A.

Migrate all the databases to Cloud SQL.

B.

Migrate the Oracle, MySQL, and Microsoft SQL Server databases to Cloud SQL, and migrate the PostgreSQL databases to Compute Engine.

C.

Migrate the MySQL, Microsoft SQL Server, and PostgreSQL databases to Compute Engine, and migrate the Oracle databases to Bare Metal Solution for Oracle.

D.

Migrate the MySQL, Microsoft SQL Server, and PostgreSQL databases to Cloud SQL, and migrate the Oracle databases to Bare Metal Solution for Oracle.

Question 30

You want to migrate your on-premises PostgreSQL database to Compute Engine. You need to migrate this database with the minimum downtime possible. What should you do?

Options:

A.

Perform a full backup of your on-premises PostgreSQL, and then, in the migration window, perform an incremental backup.

B.

Create a read replica on Cloud SQL, and then promote it to a read/write standalone instance.

C.

Use Database Migration Service to migrate your database.

D.

Create a hot standby on Compute Engine, and use PgBouncer to switch over the connections.

Question 31

Your organization has a busy transactional Cloud SQL for MySQL instance. Your analytics team needs access to the data so they can build monthly sales reports. You need to provide data access to the analytics team without adversely affecting performance. What should you do?

Options:

A.

Create a read replica of the database, provide the database IP address, username, and password to the analytics team, and grant read access to required tables to the team.

B.

Create a read replica of the database, enable the cloudsql.iam_authentication flag on the replica, and grant read access to required tables to the analytics team.

C.

Enable the cloudsql.iam_authentication flag on the primary database instance, and grant read access to required tables to the analytics team.

D.

Provide the database IP address, username, and password of the primary database instance to the analytics team, and grant read access to required tables to the team.

Question 32

Your organization has strict policies on tracking rollouts to production and periodically shares this information with external auditors to meet compliance requirements. You need to enable auditing on several Cloud Spanner databases. What should you do?

Options:

A.

Use replication to roll out changes to higher environments.

B.

Use backup and restore to roll out changes to higher environments.

C.

Use Liquibase to roll out changes to higher environments.

D.

Manually capture detailed DBA audit logs when changes are rolled out to higher environments.

Question 33

You are configuring the networking of a Cloud SQL instance. The only application that connects to this database resides on a Compute Engine VM in the same project as the Cloud SQL instance. The VM and the Cloud SQL instance both use the same VPC network, and both have an external (public) IP address and an internal (private) IP address. You want to improve network security. What should you do?

Options:

A.

Disable and remove the internal IP address assignment.

B.

Disable both the external IP address and the internal IP address, and instead rely on Private Google Access.

C.

Specify an authorized network with the CIDR range of the VM.

D.

Disable and remove the external IP address assignment.

Question 34

Your organization deployed a new version of a critical application that uses Cloud SQL for MySQL with high availability (HA) and binary logging enabled to store transactional information. The latest release of the application had an error that caused massive data corruption in your Cloud SQL for MySQL database. You need to minimize data loss. What should you do?

Options:

A.

Open the Google Cloud Console, navigate to SQL > Backups, and select the last version of the automated backup before the corruption.

B.

Reload the Cloud SQL for MySQL database using the LOAD DATA command to load data from CSV files that were used to initialize the instance.

C.

Perform a point-in-time recovery of your Cloud SQL for MySQL database, selecting a date and time before the data was corrupted.

D.

Fail over to the Cloud SQL for MySQL HA instance. Use that instance to recover the transactions that occurred before the corruption.
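
For background, one of the recovery paths the options describe is to clone the instance to its state at a timestamp before the corruption, which relies on the binary logging the question says is enabled. A hypothetical sketch (instance names and timestamp are placeholders; an authenticated gcloud CLI is assumed):

```python
import subprocess

# Clone the corrupted instance to its state at a chosen point in time.
subprocess.run(
    ["gcloud", "sql", "instances", "clone",
     "orders-mysql",                        # hypothetical source instance
     "orders-mysql-recovered",              # hypothetical new target instance
     "--point-in-time=2024-03-01T08:30:00Z"],
    check=True,
)
```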

Question 35

Your organization is currently updating an existing corporate application that is running in another public cloud to access managed database services in Google Cloud. The application will remain in the other public cloud while the database is migrated to Google Cloud. You want to follow Google-recommended practices for authentication. You need to minimize user disruption during the migration. What should you do?

Options:

A.

Use workload identity federation to impersonate a service account.

B.

Ask existing users to set their Google password to match their corporate password.

C.

Migrate the application to Google Cloud, and use Identity and Access Management (IAM).

D.

Use Google Workspace Password Sync to replicate passwords into Google Cloud.

Question 36

Your company is evaluating Google Cloud database options for a mission-critical global payments gateway application. The application must be available 24/7 to users worldwide, horizontally scalable, and support open source databases. You need to select an automatically shardable, fully managed database with 99.999% availability and strong transactional consistency. What should you do?

Options:

A.

Select Bare Metal Solution for Oracle.

B.

Select Cloud SQL.

C.

Select Bigtable.

D.

Select Cloud Spanner.

Question 37

You have an application that sends banking events to Bigtable cluster-a in us-east. You decide to add cluster-b in us-central1. Cluster-a replicates data to cluster-b. You need to ensure that Bigtable continues to accept read and write requests if one of the clusters becomes unavailable and that requests are routed automatically to the other cluster. What deployment strategy should you use?

Options:

A.

Use the default app profile with single-cluster routing.

B.

Use the default app profile with multi-cluster routing.

C.

Create a custom app profile with multi-cluster routing.

D.

Create a custom app profile with single-cluster routing.
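
For reference, Bigtable request routing is controlled by an app profile; a multi-cluster routing profile can be created with gcloud, sketched here in Python (the profile and instance names are hypothetical, and an authenticated gcloud CLI is assumed):

```python
import subprocess

# Create an app profile that routes requests to any available cluster,
# so traffic fails over automatically if one cluster is unreachable.
subprocess.run(
    ["gcloud", "bigtable", "app-profiles", "create", "banking-events",
     "--instance=payments-instance",        # hypothetical instance name
     "--route-any",                          # multi-cluster routing
     "--description=Multi-cluster routing for automatic failover"],
    check=True,
)
```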

Question 38

You finished migrating an on-premises MySQL database to Cloud SQL. You want to ensure that the daily export of a table, which was previously a cron job running on the database server, continues. You want the solution to minimize cost and operations overhead. What should you do?

Options:

A.

Use Cloud Scheduler and Cloud Functions to run the daily export.

B.

Create a streaming Dataflow job to export the table.

C.

Set up Cloud Composer, and create a task to export the table daily.

D.

Run the cron job on a Compute Engine instance to continue the export.
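
To make the scheduled-export idea concrete, an HTTP-triggered Cloud Function invoked by Cloud Scheduler can call the Cloud SQL Admin API's export method. A hypothetical sketch in Python; the project, instance, bucket, database, and query are placeholders, and the request body follows the Admin API's exportContext shape as best understood here.

```python
import functions_framework                  # pip install functions-framework
from googleapiclient import discovery       # pip install google-api-python-client

@functions_framework.http
def daily_export(request):
    """HTTP entry point; Cloud Scheduler calls this once a day."""
    sqladmin = discovery.build("sqladmin", "v1beta4")
    body = {
        "exportContext": {
            "fileType": "CSV",
            "uri": "gs://my-export-bucket/orders/export.csv",     # hypothetical bucket
            "databases": ["appdb"],                                # hypothetical database
            "csvExportOptions": {"selectQuery": "SELECT * FROM orders"},
        }
    }
    op = sqladmin.instances().export(
        project="my-project", instance="orders-mysql", body=body
    ).execute()
    return {"operation": op.get("name", "")}, 200
```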

Question 39

You are managing a Cloud SQL for MySQL environment in Google Cloud. You have deployed a primary instance in Zone A and a read replica instance in Zone B, both in the same region. You are notified that the replica instance in Zone B was unavailable for 10 minutes. You need to ensure that the read replica instance is still working. What should you do?

Options:

A.

Use the Google Cloud Console or gcloud CLI to manually create a new clone database.

B.

Use the Google Cloud Console or gcloud CLI to manually create a new failover replica from backup.

C.

Verify that the new replica is created automatically.

D.

Start the original primary instance and resume replication.