GCP Associate Cloud Engineer Certification Dumps

This article gives you a brief overview of the GCP Associate Cloud Engineer Certification, including exam insights, preparation strategies, and sample exam questions.

1. What is the GCP Associate Cloud Engineer Certification?

The Google Associate Cloud Engineer Certification is your gateway to high-impact cloud roles and career growth. It is a well-regarded credential for professionals who implement and manage solutions on Google Cloud.
It demonstrates your ability to deploy, monitor, and maintain cloud solutions.

2. Why Pursue the Google Associate Cloud Engineer Certification?

  1. Industry Recognition: Google Cloud certifications are globally respected, signaling hands-on proficiency with GCP tools and services. Employers like Deloitte, Spotify, and Twitter prioritize certified professionals for cloud roles.
  2. Skill Validation: The exam tests real-world skills, including deploying applications, managing IAM policies, and configuring network services. This ensures you’re job-ready for cloud engineering tasks.
  3. Career Growth: Certified professionals earn 15–20% higher salaries on average (Payscale, 2023). Roles like Cloud Engineer, DevOps Specialist, and Solutions Architect become accessible.
  4. Organizational Impact: Teams with certified engineers report faster cloud adoption and improved operational efficiency, according to a 2023 Gartner study.

3. Exam Overview: What’s Tested?

The Google Associate Cloud Engineer exam evaluates your ability to:

  • Set up cloud environments using the Console, CLI, or Terraform.
  • Deploy and manage Kubernetes clusters, Compute Engine VMs, and serverless workloads.
  • Monitor operations via Cloud Monitoring, Logging, and Error Reporting.
  • Secure resources using IAM roles, VPC Service Controls, and encryption.

Exam Format:

  • 50–60 multiple-choice questions (2 hours).
  • Available in English, Japanese, Spanish, and Portuguese.
  • Cost: USD 125 (discounts available via Google Cloud events).

4. Step-by-Step Preparation Strategy

  1. Build Hands-On Experience
    • Use the Google Cloud Free Tier to explore core services like Compute Engine, Cloud Storage, and BigQuery.
    • Complete labs on platforms like Qwiklabs to simulate real-world scenarios.
  2. Leverage Official Resources
    • Review the Google Cloud Exam Guide.
    • Enroll in Coursera’s Preparing for Google Cloud Certification specialization.
  3. Practice with Mock Exams
    • Platforms offer realistic practice questions with detailed explanations.
  4. Join Study Communities
    • Engage with peers on forums like r/googlecloud (Reddit) or LinkedIn groups for tips and mentorship.

5. Key Study Tips for Success

  • Master Core Services: Focus on Compute Engine, Kubernetes Engine, Cloud SQL, and VPC networks.
  • Practice CLI & SDKs: Learn the gcloud, gsutil, and bq commands for efficient resource management (see the sample commands after this list).
  • Time Management: Simulate exam conditions to improve speed and accuracy.
  • Review Weak Areas: Use Google Cloud’s documentation to clarify concepts like billing accounts or autoscaling.
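
For example, here are a few everyday commands worth practicing; the project, bucket, and zone names below are placeholders:

# List Compute Engine VMs in a specific zone
gcloud compute instances list --filter="zone:(us-central1-a)"
# List the objects in a Cloud Storage bucket
gsutil ls gs://my-example-bucket
# List the BigQuery datasets in a project
bq ls --project_id=my-example-project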

6. Career Opportunities After Certification

  1. Cloud Engineer ($90K–$140K): Deploy and optimize GCP infrastructure.
  2. DevOps Engineer ($100K–$160K): Automate CI/CD pipelines and ensure scalability.
  3. Solutions Architect ($120K–$180K): Design cloud-native applications aligned with business goals.

7. Sample GCP Associate Cloud Engineer Certification Exam Questions

Q1. Your company runs one batch process on an on-premises server that takes around 30 hours to complete. The task runs monthly, can be performed offline, and must be restarted if interrupted. You want to migrate this workload to the cloud while minimizing cost. What should you do?

  1. Migrate the workload to a Compute Engine Preemptible VM.
  2. Migrate the workload to a Google Kubernetes Engine cluster with Preemptible nodes.
  3. Migrate the workload to a Compute Engine VM. Start and stop the instance as needed.✔️
  4. Create an Instance Template with Preemptible VMs On. Create a Managed Instance Group from the template and adjust Target CPU Utilization. Migrate the workload.

Reference

Q2. You are developing a new application and are looking for a Jenkins installation to build and deploy your source code. You want to automate the installation as quickly and easily as possible. What should you do?

  1. Deploy Jenkins through the Google Cloud Marketplace.✔️
  2. Create a new Compute Engine instance. Run the Jenkins executable.
  3. Create a new Kubernetes Engine cluster. Create a deployment for the Jenkins image.
  4. Create an instance template with the Jenkins executable. Create a managed instance group with this template.

Reference

Q3. You have downloaded and installed the gcloud command-line interface (CLI) and have authenticated with your Google Account. Most of your Compute Engine instances in your project run in the europe-west1-d zone. You want to avoid having to specify this zone with each CLI command when managing these instances. What should you do?

  1. Set the europe-west1-d zone as the default zone using the gcloud config subcommand.✔️
  2. In the Settings page for Compute Engine under Default location, set the zone to europe-west1-d.
  3. In the CLI installation directory, create a file called default.conf containing zone=europe-west1-d.
  4. Create a Metadata entry on the Compute Engine page with key compute/zone and value europe-west1-d.
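
A one-line sketch of the correct option above, assuming the gcloud CLI is installed and authenticated:

# Set europe-west1-d as the default zone for subsequent gcloud commands
gcloud config set compute/zone europe-west1-d
# Verify the active configuration
gcloud config list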

Reference

Q4. The core business of your company is to rent out construction equipment at a large scale. All the equipment that is being rented out has been equipped with multiple sensors that send event information every few seconds. These signals range from engine status and distance traveled to fuel level and more. Customers are billed based on the consumption monitored by these sensors. You expect high throughput, up to thousands of events per hour per device, and need to retrieve consistent data based on the time of the event. Storing and retrieving individual signals should be atomic. What should you do?

  1. Create a file in Cloud Storage per device and append new data to that file.
  2. Create a file in Cloud Filestore per device and append new data to that file.
  3. Ingest the data into Datastore. Store data in an entity group based on the device.
  4. Ingest the data into Cloud Bigtable. Create a row key based on the event timestamp.✔️

Reference

Q5. You are asked to set up application performance monitoring on Google Cloud projects A, B, and C as a single pane of glass. You want to monitor CPU, memory, and disk. What should you do?

  1. Enable API and then share charts from projects A, B, and C.
  2. Enable the API and then give the metrics.reader role to projects A, B, and C.
  3. Enable the API and then use the default dashboards to view all projects in sequence.
  4. Enable API, create a workspace under project A, and then add projects B and C.✔️

Reference

Q6. You created several resources in multiple Google Cloud projects. All projects are linked to different billing accounts. To better estimate future charges, you want to have a single visual representation of all costs incurred. You want to include new cost data as soon as possible. What should you do?

  1. Configure Billing Data Export to BigQuery and visualize the data in Data Studio.✔️
  2. Visit the Cost Table page to get a CSV export and visualize it using Data Studio.
  3. Fill in all resources in the Pricing Calculator to get an estimate of the monthly cost.
  4. Use the Reports view in the Cloud Billing Console to view the desired cost information.

Reference

Q7. Your company has workloads running on Compute Engine and on-premises. The Google Cloud Virtual Private Cloud (VPC) is connected to your WAN over a Virtual Private Network (VPN). You need to deploy a new Compute Engine instance and ensure that no public Internet traffic can be routed to it. What should you do?

  1. Create the instance without a public IP address.✔️
  2. Create the instance with Private Google Access enabled.
  3. Create a deny-all egress firewall rule on the VPC network.
  4. Create a route on the VPC to route all traffic to the instance over the VPN tunnel.

Reference

  1. Use Deployment Manager templates to describe the proposed changes and store them in a Cloud Storage bucket.
  2. Use Deployment Manager templates to describe the proposed changes and store them in Cloud Source Repositories.✔️
  3. Apply the changes in a development environment, run gcloud compute instances list, and then save the output in a shared Storage bucket.
  4. Apply the changes in a development environment, run gcloud compute instances list, and then save the output in Cloud Source Repositories.

Reference

  1. Update your instances’ metadata to add the following value: snapshot-schedule: 0 1 * * *
    Update your instances’ metadata to add the following value: snapshot-retention: 30
  2. In the Cloud Console, go to the Compute Engine Disks page and select your instance’s disk.
    In the Snapshot Schedule section, select Create Schedule and configure the following parameters: Schedule frequency: Daily; Start time: 1:00 AM – 2:00 AM; Autodelete snapshots after: 30 days.✔️
  3. Create a Cloud Function that creates a snapshot of your instance’s disk.
    Create a Cloud Function that deletes snapshots that are older than 30 days.
    Use Cloud Scheduler to trigger both Cloud Functions daily at 1:00 AM.
  4. Create a bash script in the instance that copies the content of the disk to Cloud Storage.
    Create a bash script in the instance that deletes data older than 30 days in the backup Cloud Storage bucket.
    Configure the instance’s crontab to execute these scripts daily at 1:00 AM.
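
The Console steps in the correct option above can also be scripted; a hedged gcloud equivalent (policy, disk, region, and zone names are placeholders, and --start-time is interpreted in UTC):

# Create a snapshot schedule: daily at 01:00, keep snapshots for 30 days
gcloud compute resource-policies create snapshot-schedule daily-backup \
    --region=us-central1 --start-time=01:00 --daily-schedule --max-retention-days=30
# Attach the schedule to the instance's disk
gcloud compute disks add-resource-policies my-instance-disk \
    --resource-policies=daily-backup --zone=us-central1-a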

Reference

Q10. Your existing application running in Google Kubernetes Engine (GKE) consists of multiple pods running on four GKE n1-standard-2 nodes. You need to deploy additional pods requiring n2-highmem-16 nodes without any downtime. What should you do?

  1. Use gcloud container clusters upgrade. Deploy the new services.
  2. Create a new node pool and specify machine type n2-highmem-16. Deploy the new pods.✔️
  3. Create a new cluster with n2-highmem-16 nodes. Redeploy the pods and delete the old cluster.
  4. Create a new cluster with both n1-standard-2 and n2-highmem-16 nodes. Redeploy the pods and delete the old cluster.
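
A minimal sketch of the correct option (cluster, pool, and zone names are placeholders):

# Add a second node pool with the larger machine type; the existing pool keeps serving traffic
gcloud container node-pools create high-mem-pool \
    --cluster=my-cluster --zone=us-central1-a --machine-type=n2-highmem-16 --num-nodes=1

The new pods can then be scheduled onto this pool, for example with a nodeSelector on the cloud.google.com/gke-nodepool label.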

Reference

Q11. You have an application that uses Cloud Spanner as a database backend to keep current state information about users. Cloud Bigtable logs all events triggered by users. You export Cloud Spanner data to Cloud Storage during daily backups. One of your analysts asks you to join data from Cloud Spanner and Cloud Bigtable for specific users. You want to complete this ad hoc request as efficiently as possible. What should you do?

  1. Create a dataflow job that copies data from Cloud Bigtable and Cloud Storage for specific users.
  2. Create a dataflow job that copies data from Cloud Bigtable and Cloud Spanner for specific users.
  3. Create a Cloud Dataproc cluster that runs a Spark job to extract data from Cloud Bigtable and Cloud Storage for specific users.
  4. Create two separate BigQuery external tables on Cloud Storage and Cloud Bigtable. Use the BigQuery console to join these tables through user fields, and apply appropriate filters.✔️

Reference

Q12. You are hosting an application from Compute Engine virtual machines (VMs) in us-central1-a. You want to adjust your design to support the failure of a single Compute Engine zone, eliminate downtime, and minimize cost. What should you do?

  1. Create Compute Engine resources in us-central1-b.
    Balance the load across both us-central1-a and us-central1-b.✔️
  2. Create a Managed Instance Group and specify us-central1-a as the zone.
    Configure the Health Check with a short Health Interval.
  3. Create an HTTP(S) Load Balancer.
    Create one or more global forwarding rules to direct traffic to your VMs.
  4. Perform regular backups of your application.
    Create a Cloud Monitoring Alert and be notified if your application becomes unavailable.
    Restore from backups when notified.

Q13. A colleague handed over a Google Cloud Platform project for you to maintain. As part of a security checkup, you want to review who has been granted the Project Owner role. What should you do?

  1. In the console, validate which SSH keys have been stored as project-wide keys.
  2. Navigate to Identity-Aware Proxy and check the permissions for these resources.
  3. Enable Audit Logs on the IAM & admin page for all resources, and validate the results.
  4. Use the command “gcloud projects get-iam-policy” to view the current role assignments.✔️
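
A sketch of the correct option, filtering the policy down to Project Owners (the project ID is a placeholder):

# List every member bound to roles/owner in the project
gcloud projects get-iam-policy my-project \
    --flatten="bindings[].members" \
    --filter="bindings.role:roles/owner" \
    --format="value(bindings.members)"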

Reference

Q14. You are running multiple VPC-native Google Kubernetes Engine clusters in the same subnet. The IPs available for the nodes are exhausted, and you want to ensure that the clusters can grow in nodes when needed. What should you do?

  1. Create a new subnet in the same region as the subnet being used.
  2. Add an alias IP range to the subnet used by the GKE clusters.
  3. Create a new VPC, and set up VPC peering with the existing VPC.
  4. Expand the CIDR range of the relevant subnet for the cluster.✔️
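
A sketch of the correct option (subnet, region, and the new prefix length are placeholders; a subnet range can only be expanded, never shrunk):

# Widen the subnet's primary CIDR range so new nodes can get IP addresses
gcloud compute networks subnets expand-ip-range my-gke-subnet \
    --region=europe-west1 --prefix-length=20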

Reference

Q15. You have a batch workload that runs every night and uses a large number of virtual machines (VMs). It is fault-tolerant and can tolerate some of the VMs being terminated. The current cost of the VMs is too high. What should you do?

  1. Run a test using simulated maintenance events. If the test is successful, use preemptible N1 Standard VMs when running future jobs.✔️
  2. Run a test using simulated maintenance events. If the test is successful, use N1 Standard VMs when running future jobs.
  3. Run a test using a managed instance group. If the test is successful, use N1 Standard VMs in the managed instance group when running future jobs.
  4. Run a test using N1 standard VMs instead of N2. If the test is successful, use N1 Standard VMs when running future jobs.

Reference

Q16. You are working with a user to set up an application in a new VPC behind a firewall. The user is concerned about data egress. You want to configure the fewest open egress ports. What should you do?

  1. Set up a low-priority (65534) rule that blocks all egress and a high-priority rule (1000) that allows only the appropriate ports.✔️
  2. Set up a high-priority (1000) rule that pairs both ingress and egress ports.
  3. Set up a high-priority (1000) rule that blocks all egress and a low-priority (65534) rule that allows only the appropriate ports.
  4. Set up a high-priority (1000) rule to allow the appropriate ports.
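
A hedged sketch of the correct option, assuming HTTPS (tcp:443) is the only egress the application needs; network and rule names are placeholders:

# Low-priority rule that denies all egress by default
gcloud compute firewall-rules create deny-all-egress \
    --network=my-vpc --direction=EGRESS --action=DENY \
    --rules=all --destination-ranges=0.0.0.0/0 --priority=65534
# Higher-priority rule that opens only the required port
gcloud compute firewall-rules create allow-egress-https \
    --network=my-vpc --direction=EGRESS --action=ALLOW \
    --rules=tcp:443 --destination-ranges=0.0.0.0/0 --priority=1000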

Reference

Q17. Your company runs its Linux workloads on Compute Engine instances. Your company will be working with a new operations partner that does not use Google Accounts. You need to grant access to the instances to your operations partner so they can maintain the installed tooling. What should you do?

  1. Enable Cloud IAP for the Compute Engine instances, and add the operations partner as a Cloud IAP Tunnel User.
  2. Tag all the instances with the same network tag. Create a firewall rule in the VPC to grant TCP access on port 22 for traffic from the operations partner to instances with the network tag.
  3. Set up Cloud VPN between your Google Cloud VPC and the internal network of the operations partner.
  4. Ask the operations partner to generate SSH key pairs and add the public keys to the VM instances.✔️

Reference

Q18. You have created a code snippet that should be triggered whenever a new file is uploaded to a Cloud Storage bucket. You want to deploy this code snippet. What should you do?

  1. Use App Engine and configure Cloud Scheduler to trigger the application using Pub/Sub.
  2. Use Cloud Functions and configure the bucket as a trigger resource.✔️
  3. Use Google Kubernetes Engine and configure a CronJob to trigger the application using Pub/Sub.
  4. Use Dataflow as a batch job, and configure the bucket as a data source.
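
A sketch of the correct option for a 1st-gen Cloud Function (function name, runtime, entry point, and bucket are placeholders):

# Deploy the snippet so it runs whenever a new object is written to the bucket
gcloud functions deploy process-upload \
    --runtime=python310 --trigger-bucket=my-upload-bucket --entry-point=process_file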

Reference

Q19. You have been asked to set up Object Lifecycle Management for objects stored in storage buckets. The objects are written once and accessed frequently for 30 days. After 30 days, the objects are not read again unless there is a special need. The objects should be kept for three years, and you need to minimize costs. What should you do?

  1. Set up a policy that uses Nearline storage for 30 days and then moves to Archive storage for three years.
  2. Set up a policy that uses Standard storage for 30 days and then moves to Archive storage for three years.✔️
  3. Set up a policy that uses Nearline storage for 30 days, then moves the Coldline for one year, and then moves to Archive storage for two years.
  4. Set up a policy that uses Standard storage for 30 days, then moves to Coldline for one year, and then moves to Archive storage for two years.
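
A sketch of the correct option as an Object Lifecycle Management policy; the bucket name is a placeholder, and the delete rule assumes the objects may be removed once the three-year retention ends:

# Move objects to Archive after 30 days and delete them after ~3 years (1095 days)
cat > lifecycle.json <<'EOF'
{
  "rule": [
    {"action": {"type": "SetStorageClass", "storageClass": "ARCHIVE"}, "condition": {"age": 30}},
    {"action": {"type": "Delete"}, "condition": {"age": 1095}}
  ]
}
EOF
gsutil lifecycle set lifecycle.json gs://my-audit-bucket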

Reference

  1. Enable the Identity Aware Proxy API on the project.
  2. Scan the bucket using the Data Loss Prevention API.
  3. Allow only a single Service Account access to read the data.
  4. Enable Data Access audit logs for the Cloud Storage API.✔️

Reference

Q21. You are the team lead of a group of 10 developers. You provided each developer with an individual Google Cloud Project that they can use as their sandbox to experiment with different Google Cloud solutions. You want to be notified if any of the developers are spending above $500 per month on their sandbox environment. What should you do?

  1. Create a single budget for all projects and configure budget alerts on this budget.
  2. Create a separate billing account per sandbox project and enable BigQuery billing exports. Create a Data Studio dashboard to plot the spending per billing account.
  3. Create a budget per project and configure budget alerts on all of these budgets.✔️
  4. Create a single billing account for all sandbox projects and enable BigQuery billing exports. Create a Data Studio dashboard to plot the spending per project.

Reference

Q22. You are deploying a production application on Compute Engine. You want to prevent anyone from accidentally destroying the instance by clicking the wrong button. What should you do?

  1. Disable the flag ‘Delete boot disk when instance is deleted.’
  2. Enable delete protection on the instance.✔️
  3. Disable Automatic restart on the instance.
  4. Enable Preemptibility on the instance.
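
A one-line sketch of the correct option (instance name and zone are placeholders):

# Enable deletion protection on the existing production instance
gcloud compute instances update my-prod-instance --zone=us-central1-a --deletion-protection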

Reference

Q23. You are working with a Cloud SQL MySQL database at your company. You need to retain a month-end copy of the database for three years for audit purposes. What should you do?

  1. Set up an export job for the first of the month. Write the export file to an Archive class Cloud Storage bucket.
  2. Save the automatic first-of-the-month backup for three years. Store the backup file in an Archive class Cloud Storage bucket.✔️
  3. Set up an on-demand backup for the first of the month. Write the backup to an Archive class Cloud Storage bucket.
  4. Convert the automatic first-of-the-month backup to an export file. Write the export file to a Coldline class Cloud Storage bucket.

Reference

  1. Grant all members of the DevOps team the role of Project Editor on the organization level.
  2. Grant all members of the DevOps team the role of Project Editor on the production project.
  3. Create a custom role that combines the required permissions. Grant the DevOps team the custom role on the production project.✔️
  4. Create a custom role that combines the required permissions. Grant the DevOps team the custom role on the organization level.

Reference

Q25. You are building an application that processes data files uploaded from thousands of suppliers. Your primary goals for the application are data security and the expiration of aged data. You need to design the application to:
  • Restrict access so that suppliers can access only their data.
  • Give suppliers write access to data only for 30 minutes.
  • Delete data that is over 45 days old.
You have a very short development cycle, and you need to make sure that the application requires minimal maintenance. Which two strategies should you use?

  1. Build a lifecycle policy to delete Cloud Storage objects after 45 days.✔️
  2. Use signed URLs to allow suppliers limited time access to store their objects.✔️
  3. Set up an SFTP server for your application, and create a separate user for each supplier.
  4. Build a Cloud function that triggers a timer of 45 days to delete objects that have expired.
  5. Develop a script that loops through all Cloud Storage buckets and deletes any buckets that are older than 45 days.
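
A hedged sketch of the two correct strategies; the bucket, key file, and object paths are placeholders:

# Lifecycle rule that deletes supplier objects after 45 days
cat > lifecycle.json <<'EOF'
{"rule": [{"action": {"type": "Delete"}, "condition": {"age": 45}}]}
EOF
gsutil lifecycle set lifecycle.json gs://supplier-uploads

# Signed URL that lets one supplier upload a single object for 30 minutes
gsutil signurl -m PUT -d 30m /path/to/service-account-key.json \
    gs://supplier-uploads/supplier-123/data.csv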

Reference

Q26. Your company wants to standardize the creation and management of multiple Google Cloud resources using Infrastructure as Code. You want to minimize the amount of repetitive code needed to manage the environment. What should you do?

  1. Develop templates for the environment using Cloud Deployment Manager.✔️
  2. Use curl in a terminal to send a REST request to the relevant Google API for each resource.
  3. Use the Cloud Console interface to provision and manage all related resources.
  4. Create a bash script that contains all the required steps as gcloud commands.

Reference

Q27. You are performing a monthly security check of your Google Cloud environment and want to know who has access to view data stored in your Google Cloud Project. What should you do?

  1. Enable Audit Logs for all APIs that are related to data storage.
  2. Review the IAM permissions for any role that allows for data access.✔️
  3. Review the Identity-Aware Proxy settings for each resource.
  4. Create a Data Loss Prevention job.

Reference

Q28. Your company has embraced a hybrid cloud strategy where some of the applications are deployed on Google Cloud. A Virtual Private Network (VPN) tunnel connects your Virtual Private Cloud (VPC) in Google Cloud with your company’s on-premises network. Multiple applications in Google Cloud need to connect to an on-premises database server, and you want to avoid having to change the IP configuration in all of your applications when the IP of the database changes. What should you do?

  1. Configure Cloud NAT for all subnets of your VPC to be used when egressing from the VM instances.
  2. Create a private zone on Cloud DNS, and configure the applications with the DNS name.✔️
  3. Configure the IP of the database as custom metadata for each instance, and query the metadata server.
  4. Query the Compute Engine internal DNS from the applications to retrieve the IP of the database.
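
A sketch of the correct option, assuming a recent gcloud release that supports gcloud dns record-sets create; zone, DNS name, network, and IP address are placeholders:

# Private zone visible only to the VPC
gcloud dns managed-zones create corp-internal --dns-name="corp.internal." \
    --visibility=private --networks=my-vpc --description="On-premises service names"
# A record the applications resolve instead of hard-coding the database IP
gcloud dns record-sets create db.corp.internal. --zone=corp-internal \
    --type=A --ttl=300 --rrdatas=10.10.0.5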

Reference

Q29. You have developed a containerized web application that will serve internal colleagues during business hours. You want to ensure that no costs are incurred outside of the hours the application is used. You have just created a new Google Cloud project and want to deploy the application. What should you do?

  1. Deploy the container on Cloud Run for Anthos, and set the minimum number of instances to zero.
  2. Deploy the container on Cloud Run (fully managed), and set the minimum number of instances to zero.✔️
  3. Deploy the container on App Engine flexible environment with autoscaling, and set the value min_instances to zero in the app.yaml.
  4. Deploy the container on App Engine flexible environment with manual scaling, and set the value instances to zero in the app.yaml.
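
A sketch of the correct option; service, image, and region are placeholders, and the authentication flag reflects the internal-only audience. Fully managed Cloud Run scales to zero when there are no requests, so nothing is billed outside business hours:

# Deploy the container on Cloud Run (fully managed) with scale-to-zero
gcloud run deploy internal-app --image=gcr.io/my-project/internal-app \
    --platform=managed --region=us-central1 --min-instances=0 --no-allow-unauthenticated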

Reference

Q30. You have experimented with Google Cloud using your credit card and expensed the costs to your company. Your company wants to streamline the billing process and charge the costs of your projects to their monthly invoice. What should you do?

  1. Grant the financial team the IAM role of ‘Billing Account User’ on the billing account linked to your credit card.
  2. Set up BigQuery billing export and grant your financial department IAM access to query the data.
  3. Create a ticket with Google Billing Support to ask them to send the invoice to your company.
  4. Change the billing account of your projects to the billing account of your company.✔️

Reference

Q31. You are running a data warehouse on BigQuery. A partner company is offering a recommendation engine based on the data in your data warehouse. The partner company is also running its application on Google Cloud. They manage the resources in their own project, but they need access to the BigQuery dataset in your project. You want to provide the partner company with access to the dataset. What should you do?

  1. Create a Service Account in your own project, and grant this Service Account access to BigQuery in your project.
  2. Create a Service Account in your own project, and ask the partner to grant this Service Account access to BigQuery in their project.
  3. Ask the partner to create a Service Account in their project, and have them give the Service Account access to BigQuery in their project.
  4. Ask the partner to create a Service Account in their project and grant their Service Account access to the BigQuery dataset in your project.✔️

Reference

Q32. Your web application has been running successfully on Cloud Run for Anthos. You want to evaluate an updated version of the application with a specific percentage of your production users (canary deployment). What should you do?

  1. Create a new service with the new version of the application. Split traffic between this version and the version that is currently running.
  2. Create a new revision with the new version of the application. Split traffic between this version and the version that is currently running.✔️
  3. Create a new service with the new version of the application. Add an HTTP Load Balancer in front of both services.
  4. Create a new revision with the new version of the application. Add an HTTP Load Balancer in front of both revisions.

Reference

Q33. Your company developed a mobile game that is deployed on Google Cloud. Gamers are connecting to the game with their personal phones over the Internet. The game sends UDP packets to update the servers about the gamers’ actions while they are playing in multiplayer mode. Your game backend can scale over multiple virtual machines (VMs), and you want to expose the VMs over a single IP address. What should you do?

  1. Configure an SSL Proxy load balancer in front of the application servers.
  2. Configure an Internal UDP load balancer in front of the application servers.
  3. Configure an External HTTP(s) load balancer in front of the application servers.
  4. Configure an External Network load balancer in front of the application servers.✔️

Reference

Q34. You are working for a hospital that stores its medical images in an on-premises data room. The hospital wants to use Cloud Storage for archival storage of these images. The hospital wants an automated process to upload any new medical images to Cloud Storage. You need to design and implement a solution. What should you do?

  1. Create a Pub/Sub topic, and enable a Cloud Storage trigger for the Pub/Sub topic. Create an application that sends all medical images to the Pub/Sub topic.
  2. Deploy a Dataflow job from the batch template, ‘Datastore to Cloud Storage.’ Schedule the batch job on the desired interval.
  3. Create a script that uses the gsutil command line interface to synchronize the on-premises storage with Cloud Storage. Schedule the script as a cron job.✔️
  4. In the Cloud Console, go to Cloud Storage. Upload the relevant images to the appropriate bucket.
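
A hedged sketch of the correct option; the local path, bucket name, script location, and schedule are placeholders:

# sync_images.sh: mirror new on-premises images into the archive bucket
gsutil -m rsync -r /mnt/medical-images gs://hospital-image-archive
# Example crontab entry (runs the script nightly at 02:00): 0 2 * * * /usr/local/bin/sync_images.sh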

Reference

Q35. Your auditor wants to view your organization’s use of data in Google Cloud. The auditor is most interested in auditing who accessed data in Cloud Storage buckets. You need to help the auditor access the data they need. What should you do?

  1. Turn on Data Access Logs for the buckets they want to audit, and then build a query in the log viewer that filters on Cloud Storage.✔️
  2. Assign the appropriate permissions, and then create a Data Studio report on Admin Activity Audit Logs.
  3. Assign the appropriate permissions, and use Cloud Monitoring to review metrics.
  4. Use the export logs API to provide the Admin Activity Audit Logs in the format they want.

Reference

Q36. You received a JSON file that contained a private key of a Service Account in order to get access to several resources in a Google Cloud project. You downloaded and installed the Cloud SDK and want to use this private key for authentication and authorization when performing gcloud commands. What should you do?

  1. Use the command gcloud auth login and point it to the private key.
  2. Use the command gcloud auth activate-service-account and point it to the private key.✔️
  3. Place the private key file in the installation directory of the Cloud SDK and rename it to ‘credentials.json’.
  4. Place the private key file in your home directory and rename it to ‘GOOGLE_APPLICATION_CREDENTIALS’.
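
A one-line sketch of the correct option (the key file path is a placeholder):

# Authenticate gcloud as the service account using the downloaded JSON key
gcloud auth activate-service-account --key-file=/path/to/service-account-key.json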

Reference

Q37. You are monitoring an application and receive user feedback that a specific error is spiking. You notice that the error is caused by a ServiceAccount having insufficient permissions. You are able to solve the problem, but want to be notified if the problem recurs. What should you do?

  1. In the Log Viewer, filter the logs on severity ‘Error’ and the name of the Service Account.
  2. Create a sink to BigQuery to export all the logs. Create a Data Studio dashboard on the exported logs.
  3. Create a custom log-based metric for the specific error to be used in an Alerting Policy.✔️
  4. Grant Project Owner access to the Service Account.
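
A hedged sketch of the first half of the correct option; the metric name, filter, and service account email are placeholders, and the metric would then be referenced from an alerting policy in Cloud Monitoring:

# Log-based metric counting errors caused by the service account
gcloud logging metrics create sa-permission-errors \
    --description="Permission errors from the app service account" \
    --log-filter='severity>=ERROR AND protoPayload.authenticationInfo.principalEmail="my-app@my-project.iam.gserviceaccount.com"'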

Reference

Q38. You are developing a financial trading application that will be used globally. Data is stored and queried using a relational structure, and clients from all over the world should get the exact same state of the data. The application will be deployed in multiple regions to provide the lowest latency to end users. You need to select a storage option for the application data while minimizing latency. What should you do?

  1. Use Cloud Bigtable for data storage.
  2. Use Cloud SQL for data storage.
  3. Use Cloud Spanner for data storage.✔️
  4. Use Firestore for data storage.

Reference

Q39. You are about to deploy a new Enterprise Resource Planning (ERP) system on Google Cloud. The application holds the full database in memory for fast data access, and you need to configure the most appropriate resources on Google Cloud for this application. What should you do?

  1. Provision preemptible Compute Engine instances.
  2. Provision Compute Engine instances with GPUs attached.
  3. Provision Compute Engine instances with local SSDs attached.
  4. Provision Compute Engine instances with the M1 machine type.✔️

Reference

Q40. You have developed an application that consists of multiple microservices, with each microservice packaged in its own Docker container image. You want to deploy the entire application on Google Kubernetes Engine so that each microservice can be scaled individually. What should you do?

  1. Create and deploy a Custom Resource Definition per microservice.
  2. Create and deploy a Docker Compose File.
  3. Create and deploy a Job per microservice.
  4. Create and deploy a Deployment per microservice.✔️

Reference

Q41. You will have several applications running on different Compute Engine instances in the same project. You want to specify at a more granular level the service account each instance uses when calling Google Cloud APIs. What should you do?

  1. When creating the instances, specify a Service Account for each instance.✔️
  2. When creating the instances, assign the name of each Service Account as instance metadata.
  3. After starting the instances, use gcloud compute instances update to specify a Service Account for each instance.
  4. After starting the instances, use gcloud compute instances update to assign the name of the relevant Service Account as instance metadata.

Reference

Q42. You are creating an application that will run on Google Kubernetes Engine. You have identified MongoDB as the most suitable database system for your application and want to deploy a managed MongoDB environment that provides a support SLA. What should you do?

  1. Create a Cloud Bigtable cluster and use the HBase API.
  2. Deploy MongoDB Atlas from the Google Cloud Marketplace.✔️
  3. Download a MongoDB installation package, and run it on Compute Engine instances.
  4. Download a MongoDB installation package, and run it on a Managed Instance Group.

Reference

Q43. You are managing a project for the Business Intelligence (BI) department in your company. A data pipeline ingests data into BigQuery via streaming. You want the users in the BI department to be able to run custom SQL queries against the latest data in BigQuery. What should you do?

  1. Create a Data Studio dashboard that uses the related BigQuery tables as a source, and give the BI team view access to the Data Studio dashboard.
  2. Create a Service Account for the BI team and distribute a new private key to each member of the BI team.
  3. Use Cloud Scheduler to schedule a batch Dataflow job to copy the data from BigQuery to the BI team’s internal data warehouse.
  4. Assign the IAM role of BigQuery User to a Google Group that contains the members of the BI team.✔️
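
A one-line sketch of the correct option (project ID and group address are placeholders):

# Grant the BI team's Google Group the BigQuery User role on the project
gcloud projects add-iam-policy-binding my-bi-project \
    --member="group:bi-team@example.com" --role="roles/bigquery.user"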

Reference

Q44. Your company is moving its entire workload to Compute Engine. Some servers should be accessible through the Internet, and others should only be accessible over the internal network. All servers need to be able to talk to each other over specific ports and protocols. The current on-premises network relies on a demilitarized zone (DMZ) for the public servers and a Local Area Network (LAN) for the private servers. You need to design the networking infrastructure on Google Cloud to match these requirements. What should you do?

  1. Create a single VPC with a subnet for the DMZ and a subnet for the LAN.
    Set up firewall rules to open up relevant traffic between the DMZ and the LAN subnets, and another firewall rule to allow public ingress traffic for the DMZ.✔️
  2. Create a single VPC with a subnet for the DMZ and a subnet for the LAN.
    Set up firewall rules to open up relevant traffic between the DMZ and the LAN subnets, and another firewall rule to allow public egress traffic for the DMZ.
  3. Create a VPC with a subnet for the DMZ and another VPC with a subnet for the LAN.
    Set up firewall rules to open up relevant traffic between the DMZ and the LAN subnets, and another firewall rule to allow public ingress traffic for the DMZ.
  4. Create a VPC with a subnet for the DMZ and another VPC with a subnet for the LAN.
    Set up firewall rules to open up relevant traffic between the DMZ and the LAN subnets, and another firewall rule to allow public egress traffic for the DMZ.

Reference

Q45. You need to define an address plan for a future new GKE cluster in your VPC. This will be a VPC native cluster, and the default Pod IP range allocation will be used. You must pre-provision all the needed VPC subnets and their respective IP address ranges before cluster creation. The cluster will initially have a single node, but it will be scaled to a maximum of three nodes if necessary. You want to allocate the minimum number of Pod IP addresses. Which subnet mask should you use for the Pod IP address range?

  1. /21
  2. /22✔️
  3. /23
  4. /25

Reference

  1. Project Editor
  2. App Engine Service Admin
  3. App Engine Deployer✔️
  4. App Engine Code Viewer

Reference

Q47. Your company has reserved a monthly budget for your project. You want to be informed automatically of your project spend so that you can take action when you approach the limit. What should you do?

  1. Link a credit card with a monthly limit equal to your budget.
  2. Create a budget alert for 50%, 90%, and 100% of your total monthly budget.✔️
  3. In App Engine Settings, set a daily budget at the rate of 1/30 of your monthly budget.
  4. In the GCP Console, configure billing export to BigQuery. Create a saved view that queries your total spend.
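
A hedged sketch of the correct option, assuming a recent gcloud release that includes the billing budgets command group; the billing account ID and amount are placeholders:

# Budget with alerts at 50%, 90%, and 100% of the monthly amount
gcloud billing budgets create --billing-account=000000-AAAAAA-BBBBBB \
    --display-name="monthly-project-budget" --budget-amount=1000USD \
    --threshold-rule=percent=0.5 --threshold-rule=percent=0.9 --threshold-rule=percent=1.0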

Reference

Q48. You have a project using BigQuery. You want to list all BigQuery jobs for that project. You want to set this project as the default for the bq command-line tool. What should you do?

  1. Use “gcloud config set project” to set the default project.✔️
  2. Use “bq config set project” to set the default project.
  3. Use “gcloud generate config-url” to generate a URL to the Google Cloud Platform Console to set the default project.
  4. Use “bq generate config-url” to generate a URL to the Google Cloud Platform Console to set the default project.
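
A sketch of the correct option; the project ID is a placeholder, and bq picks up the default project from the shared gcloud configuration:

# Set the default project used by both gcloud and the bq tool
gcloud config set project my-bq-project
# List recent BigQuery jobs in that project
bq ls -j -a -n 100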

Reference

Q49. You have a Kubernetes cluster with 1 node pool. The cluster receives a lot of traffic and needs to grow. You decide to add a node. What should you do?

  1. Use “gcloud container clusters resize” with the desired number of nodes.✔️
  2. Use “kubectl container clusters resize” with the desired number of nodes.
  3. Edit the managed instance group of the cluster and increase the number of VMs by 1.
  4. Edit the managed instance group of the cluster and enable autoscaling.
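
A one-line sketch of the correct option; cluster, node pool, and zone are placeholders, and the count is the desired total, not an increment:

# Grow the existing node pool from 3 to 4 nodes
gcloud container clusters resize my-cluster --node-pool=default-pool \
    --num-nodes=4 --zone=us-central1-a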

Q50. You want to select and configure a solution for storing and archiving data on the Google Cloud Platform. You need to support compliance objectives for data from one geographic location. This data is archived after 30 days and needs to be accessed annually. What should you do?

  1. Select Multi-Regional Storage. Add a bucket lifecycle rule that archives data after 30 days to Coldline Storage.
  2. Select Multi-Regional Storage. Add a bucket lifecycle rule that archives data after 30 days to Nearline Storage.
  3. Select Regional Storage. Add a bucket lifecycle rule that archives data after 30 days to Nearline Storage.
  4. Select Regional Storage. Add a bucket lifecycle rule that archives data after 30 days to Coldline Storage.✔️

Reference
