This article provides an overview of the Google Cloud Architect certification, including exam insights, preparation strategies, and sample Google Professional Cloud Architect exam questions.
1. What is the Google Cloud Architect Certification?
The Google Professional Cloud Architect certification is a prestigious credential designed for professionals who design and implement GCP solutions. It demonstrates your ability to balance business objectives with technical feasibility while ensuring security, compliance, and scalability.
This certification is ideal for:
- Cloud architects, developers, and DevOps engineers
- IT managers overseeing cloud migration
- Professionals targeting roles in cloud infrastructure design
2. Why Pursue the GCP Architect Certification?
- Career Advancement: On average, certified architects earn 25% higher salaries than non-certified peers (Source: Global Knowledge).
- Industry Recognition: Google Cloud certifications are respected globally, boosting your credibility.
- Skill Validation: Master core GCP services like Compute Engine, Kubernetes Engine, BigQuery, and Cloud IAM.
- Networking Opportunities: Join an elite community of cloud experts and access exclusive Google events.
3. Google Professional Cloud Architect Exam: Key Details
- Exam Code: None (the exam is referred to simply as the “Professional Cloud Architect” exam).
- Format: Multiple-choice, multiple-select, and scenario-based questions.
- Duration: 2 hours.
- Domains Covered:
  - Designing and planning cloud solutions (25%)
  - Managing and provisioning infrastructure (20%)
  - Ensuring security and compliance (15%)
  - Analyzing technical processes (15%)
  - Managing implementation (25%)
4. How to Prepare for the Google Cloud Architect Exam
- Study the Exam Guide: Google’s official exam guide outlines topics, sample questions, and best practices.
- Hands-On Practice: Use GCP’s Free Tier to experiment with services like Cloud Storage, VPC networks, and Cloud Functions.
- Leverage Training Resources:
  - Coursera: “Preparing for the Google Cloud Professional Cloud Architect Exam” specialization.
  - Qwiklabs: Guided labs for real-world scenarios.
- Join Study Groups: Engage with communities on Reddit (r/googlecloud) or LinkedIn.
- Take Mock Exams: Use realistic practice tests to gauge your readiness and get comfortable with the exam’s question formats.

5. Google Cloud Architect Certification Dumps
Q1. You are using Cloud CDN to deliver static HTTP(S) website content hosted on a Compute Engine instance group. You want to improve the cache hit ratio. What should you do?
- Customize the cache keys to omit the protocol from the key.✔️
- Shorten the expiration time of the cached objects.
- Make sure the HTTP(S) header ‘Cache-Region’ points to the closest region of your users.
- Replicate the static content in a Cloud Storage bucket. Point Cloud CDN toward a load balancer on that bucket.
Q2. Your architecture calls for the centralized collection of all admin activity and VM system logs within your project. How should you collect these logs from both VMs and services?
- All admin and VM system logs are automatically collected by Stackdriver.
- Stackdriver automatically collects admin activity logs for most services. The Stackdriver Logging agent must be installed on each instance to collect system logs.✔️
- Launch a custom syslogd compute instance and configure your GCP project and VMs to forward all logs to it.
- Install the Stackdriver Logging agent on a single compute instance and let it collect all audit and access logs for your environment.
Q3. You have an App Engine application that needs to be updated. You want to test the update with production traffic before replacing the current application version. What should you do?
- Deploy the update using the Instance Group Updater to create a partial rollout, which allows for canary testing.
- Deploy the update as a new version in the App Engine application, and split traffic between the new and current versions.✔️
- Deploy the update in a new VPC, and use Google’s global HTTP load balancing to split traffic between the update and current applications.
- Deploy the update as a new App Engine application, and use Google’s global HTTP load balancing to split traffic between the new and current applications.
Q4. All Compute Engine instances in your VPC should be able to connect to an Active Directory server on specific ports. Any other traffic emerging from your instances is not allowed. You want to enforce this using VPC firewall rules. How should you configure the firewall rules?
- Create an egress rule with priority 1000 to deny all traffic for all instances. Create another egress rule with priority 100 to allow the Active Directory traffic for all instances.✔️
- Create an egress rule with priority 100 to deny all traffic for all instances. Create another egress rule with priority 1000 to allow the Active Directory traffic for all instances.
- Create an egress rule with priority 1000 to allow the Active Directory traffic. Rely on the implied deny egress rule with priority 100 to block all traffic for all instances.
- Create an egress rule with priority 100 to allow the Active Directory traffic. Rely on the implied deny egress rule with priority 1000 to block all traffic for all instances.
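For context, the two egress rules in the correct answer could look like the following Compute Engine Firewalls API request bodies. This is a minimal sketch: the rule names, ports, and Active Directory server address are hypothetical placeholders.

```python
# Illustrative request bodies for the Compute Engine Firewalls API
# (POST .../compute/v1/projects/PROJECT/global/firewalls).
# Rule names, ports, and the AD server address are placeholders.

deny_all_egress = {
    "name": "deny-all-egress",
    "network": "global/networks/default",
    "direction": "EGRESS",
    "priority": 1000,  # higher number = lower precedence
    "denied": [{"IPProtocol": "all"}],
    "destinationRanges": ["0.0.0.0/0"],
}

allow_ad_egress = {
    "name": "allow-ad-egress",
    "network": "global/networks/default",
    "direction": "EGRESS",
    "priority": 100,  # evaluated before the deny rule above
    "allowed": [
        {"IPProtocol": "tcp", "ports": ["88", "389", "445", "636"]},
        {"IPProtocol": "udp", "ports": ["88", "389"]},
    ],
    "destinationRanges": ["10.10.0.5/32"],  # hypothetical AD server
}
```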
Q5. Your customer runs a web service used by e-commerce sites to offer product recommendations to users. The company has begun experimenting with a machine learning model on Google Cloud Platform to improve the quality of results. What should the customer do to improve their model’s results over time?
- Export Cloud Machine Learning Engine performance metrics from Stackdriver to BigQuery, to be used to analyze the efficiency of the model.
- Build a roadmap to move the machine learning model training from Cloud GPUs to Cloud TPUs, which offer better results.
- Monitor Compute Engine announcements for availability of newer CPU architectures, and deploy the model to them as soon as they are available for additional performance.
- Save a history of recommendations and results of the recommendations in BigQuery, to be used as training data.✔️
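As a rough illustration of the correct answer, the service could stream each served recommendation and its observed outcome into BigQuery for later use as training data. The table and field names below are hypothetical; this assumes the google-cloud-bigquery client library.

```python
# Minimal sketch: record each served recommendation and its outcome in
# BigQuery so the table can later feed model training.
from google.cloud import bigquery

client = bigquery.Client()
table_id = "my-project.recommendations.serving_history"  # hypothetical table

rows = [{
    "user_id": "u-123",
    "recommended_items": ["sku-1", "sku-9"],
    "clicked_item": "sku-9",  # the observed result of the recommendation
    "served_at": "2024-01-01T12:00:00Z",
}]

errors = client.insert_rows_json(table_id, rows)  # streaming insert
if errors:
    raise RuntimeError(f"BigQuery insert failed: {errors}")
```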
Q6. A development team at your company has created a Dockerized HTTPS web application. You need to deploy the application on Google Kubernetes Engine (GKE) and make sure that the application scales automatically. How should you deploy to GKE?
- Use the Horizontal Pod Autoscaler and enable cluster autoscaling. Use an Ingress resource to load-balance the HTTPS traffic.
- Use the Horizontal Pod Autoscaler and enable cluster autoscaling on the Kubernetes cluster. Use a Service resource of type LoadBalancer to load-balance the HTTPS traffic.✔️
- Enable autoscaling on the Compute Engine instance group. Use an Ingress resource to load-balance the HTTPS traffic.
- Enable autoscaling on the Compute Engine instance group. Use a Service resource of type LoadBalancer to load-balance the HTTPS traffic.
Q7. You need to design a solution for global load balancing based on the URL path being requested. You need to ensure operational reliability and end-to-end in-transit encryption based on Google best practices. What should you do?
- Create a cross-region load balancer with URL Maps.
- Create an HTTPS load balancer with URL Maps.✔️
- Create appropriate instance groups and instances. Configure SSL proxy load balancing.
- Create a global forwarding rule. Configure SSL proxy load balancing.
Q8. You have an application that makes HTTP requests to Cloud Storage. Occasionally, the requests fail with HTTP status codes of 5xx and 429. How should you handle these types of errors?
- Use gRPC instead of HTTP for better performance.
- Implement retry logic using a truncated exponential backoff strategy.✔️
- Make sure the Cloud Storage bucket is multi-regional for geo-redundancy.
- Monitor https://status.cloud.google.com/feed.atom and only make requests if Cloud Storage is not reporting an incident.
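A minimal Python sketch of the truncated exponential backoff strategy from the correct answer, using the requests library purely for illustration:

```python
# Retry Cloud Storage requests that fail with 429 or 5xx using truncated
# exponential backoff with jitter.
import random
import time

import requests

RETRYABLE = {429, 500, 502, 503, 504}

def get_with_backoff(url, max_retries=5, max_backoff=32.0):
    for attempt in range(max_retries + 1):
        response = requests.get(url)
        if response.status_code not in RETRYABLE:
            return response
        if attempt == max_retries:
            break
        # Wait min(2^attempt + random jitter, max_backoff) seconds:
        # the "truncated" part caps the exponential growth.
        time.sleep(min(2 ** attempt + random.random(), max_backoff))
    response.raise_for_status()
```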
Q9. You need to develop procedures to test a disaster plan for a mission-critical application. You want to use Google-recommended practices and native capabilities within GCP. What should you do?
- Use Deployment Manager to automate service provisioning. Use Activity Logs to monitor and debug your tests.
- Use Deployment Manager to automate service provisioning. Use Stackdriver to monitor and debug your tests.✔️
- Use gcloud scripts to automate service provisioning. Use Activity Logs to monitor and debug your tests.
- Use gcloud scripts to automate service provisioning. Use Stackdriver to monitor and debug your tests.
Q10. Your company creates rendering software that users can download from the company website. Your company has customers all over the world. You want to minimize latency for all your customers. You want to follow Google-recommended practices. How should you store the files?
- Save the files in a Multi-Regional Cloud Storage bucket.✔️
- Save the files in a Regional Cloud Storage bucket, one bucket per zone of the region.
- Save the files in multiple Regional Cloud Storage buckets, one bucket per zone per region.
- Save the files in multiple Multi-Regional Cloud Storage buckets, one bucket per multi-region.
Q11. Your company acquired a healthcare startup and must retain its customers’ medical information for up to 4 more years, depending on when it was created. Your corporate policy is to securely retain this data and then delete it as soon as regulations allow. Which approach should you take?
- Store the data in Google Drive and manually delete records as they expire.
- Anonymize the data using the Cloud Data Loss Prevention API and store it indefinitely.
- Store the data in Cloud Storage and use lifecycle management to delete files when they expire.✔️
- Store the data in Cloud Storage and run a nightly batch script that deletes all expired data.
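A minimal sketch of the lifecycle-based approach, assuming the google-cloud-storage client library; the bucket name and the age threshold are illustrative:

```python
# Configure Object Lifecycle Management to delete objects once they pass
# the retention window, with no manual or scripted cleanup needed.
from google.cloud import storage

client = storage.Client()
bucket = client.get_bucket("acquired-startup-medical-records")  # hypothetical
bucket.add_lifecycle_delete_rule(age=4 * 365)  # age is measured in days
bucket.patch()  # persist the updated lifecycle configuration
```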
Q12. You are deploying a PHP App Engine Standard service with Cloud SQL as the backend. You want to minimize the number of queries to the database. What should you do?
- Set the memcache service level to dedicated. Create a key from the hash of the query, and return database values from memcache before issuing a query to Cloud SQL.✔️
- Set the memcache service level to dedicated. Create a cron task that runs every minute to populate the cache with keys containing query results.
- Set the memcache service level to shared. Create a cron task that runs every minute to save all expected queries to a key called ‘cached_queries’.
- Set the memcache service level to shared. Create a key called ‘cached_queries’, and return database values from the key before using a query to Cloud SQL.
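The cache-aside pattern behind the correct answer can be sketched as follows. Here `cache` stands in for the App Engine memcache client and `run_query` for a Cloud SQL query helper; both are hypothetical placeholders.

```python
# Cache-aside: key on a hash of the query, read memcache first, and only
# fall back to Cloud SQL on a miss.
import hashlib
import json

def cached_query(cache, run_query, sql, params, ttl_seconds=300):
    key = hashlib.sha256(json.dumps([sql, params]).encode()).hexdigest()
    result = cache.get(key)
    if result is None:                      # cache miss: hit Cloud SQL once
        result = run_query(sql, params)
        cache.set(key, result, ttl_seconds)  # serve later reads from cache
    return result
```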
Q13. You need to ensure reliability for your application and operations by supporting reliable task scheduling for compute on GCP. Leveraging Google best practices, what should you do?
- Using the Cron service provided by App Engine, publish messages directly to a message-processing utility service running on Compute Engine instances.
- Using the Cron service provided by App Engine, publish messages to a Cloud Pub/Sub topic. Subscribe to that topic using a message-processing utility service running on Compute Engine instances.✔️
- Using the Cron service provided by Google Kubernetes Engine (GKE), publish messages directly to a message-processing utility service running on Compute Engine instances.
- Using the Cron service provided by GKE, publish messages to a Cloud Pub/Sub topic. Subscribe to that topic using a message-processing utility service running on Compute Engine instances.
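A minimal sketch of the recommended pattern: the App Engine Cron service invokes a handler that publishes to a Pub/Sub topic, and the worker instances subscribe to that topic. The project and topic names are hypothetical; this assumes the google-cloud-pubsub client library.

```python
# Handler behind an App Engine Cron URL: each scheduled request publishes
# a task message that Compute Engine workers consume via a subscription.
from google.cloud import pubsub_v1

publisher = pubsub_v1.PublisherClient()
topic_path = publisher.topic_path("my-project", "scheduled-tasks")  # hypothetical

def handle_cron_request():
    future = publisher.publish(topic_path, b"run-nightly-report")
    return future.result()  # message ID once Pub/Sub has accepted it
```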
Q14. Your company is building a new architecture to support its data-centric business focus. You are responsible for setting up the network. Your company’s mobile and web-facing applications will be deployed on-premises, and all data analysis will be conducted in GCP. The plan is to process and load 7 years of archived .csv files totaling 900 TB of data and then continue loading 10 TB of data daily. You currently have an existing 100 Mbps internet connection. What actions will meet your company’s needs?
- Compress and upload both archived files and files uploaded daily using the gsutil -m option.
- Lease a Transfer Appliance, upload archived files to it, and send it to Google to transfer archived data to Cloud Storage. Establish a connection with Google using a Dedicated Interconnect or Direct Peering connection and use it to upload files daily.✔️
- Lease a Transfer Appliance, upload archived files to it, and send it to Google to transfer archived data to Cloud Storage. Establish one Cloud VPN Tunnel to VPC networks over the public internet, and compress and upload files daily using the gsutil -m option.
- Lease a Transfer Appliance, upload archived files to it, and send it to Google to transfer archived data to Cloud Storage. Establish a Cloud VPN Tunnel to VPC networks over the public internet, and compress and upload files daily.
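A quick back-of-the-envelope calculation shows why the existing 100 Mbps link can handle neither the archive nor the daily load, which is what drives the Transfer Appliance plus Dedicated Interconnect answer:

```python
# Transfer-time arithmetic, ignoring protocol overhead and compression.
archive_bytes = 900e12   # 900 TB archive
link_bps = 100e6         # 100 Mbps connection

days = archive_bytes * 8 / link_bps / 86400
print(f"900 TB over 100 Mbps: {days:,.0f} days")  # ~833 days

daily_bps = 10e12 * 8 / 86400  # 10 TB uploaded every day
print(f"10 TB/day needs ~{daily_bps / 1e6:,.0f} Mbps sustained")  # ~926 Mbps
```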
Q15. You are developing a globally scaled frontend for a legacy streaming backend data API. This API expects events in strict chronological order with no repeated data for proper processing. Which products should you deploy to ensure guaranteed-once FIFO (first-in, first-out) delivery of data?
- Cloud Pub/Sub alone
- Cloud Pub/Sub to Cloud Dataflow✔️
- Cloud Pub/Sub to Stackdriver
- Cloud Pub/Sub to Cloud SQL
Q16. Your company is planning to perform a lift and shift migration of its Linux RHEL 6.5+ virtual machines. The virtual machines are running in an on-premises VMware environment. You want to migrate them to Compute Engine following Google-recommended practices. What should you do?
- Define a migration plan based on the list of applications and their dependencies. Migrate all virtual machines into Compute Engine individually with Migrate for Compute Engine.
- Perform an assessment of virtual machines running in the current VMware environment. Create images of all disks. Import disks on Compute Engine. Create standard virtual machines where the boot disks are the ones you have imported.
- Perform an assessment of virtual machines running in the current VMware environment. Define a migration plan, prepare a Migrate for Compute Engine migration RunBook, and execute the migration.✔️
- Perform an assessment of virtual machines running in the current VMware environment. Install a third-party agent on all selected virtual machines. Migrate all virtual machines into Compute Engine.
Q17. You need to deploy an application to Google Cloud. The application receives traffic via TCP and reads and writes data to the filesystem. The application does not support horizontal scaling. The application process requires full control over the data on the file system because concurrent access causes corruption. The business is willing to accept downtime when an incident occurs, but the application must be available 24/7 to support their business operations. You need to design the architecture of this application on Google Cloud. What should you do?
- Use a managed instance group with instances in multiple zones, use Cloud Filestore, and use an HTTP load balancer in front of the instances.
- Use a managed instance group with instances in multiple zones, use Cloud Filestore, and use a network load balancer in front of the instances.
- Use an unmanaged instance group with an active and standby instance in different zones, use a regional persistent disk, and use an HTTP load balancer in front of the instances.
- Use an unmanaged instance group with an active and standby instance in different zones, use a regional persistent disk, and use a network load balancer in front of the instances.✔️
Q18. Your company has an application running on multiple Compute Engine instances. You need to ensure that the application can communicate with an on-premises service that requires high throughput via internal IPs while minimizing latency. What should you do?
- Use OpenVPN to configure a VPN tunnel between the on-premises environment and Google Cloud.
- Configure a direct peering connection between the on-premises environment and Google Cloud.
- Use Cloud VPN to configure a VPN tunnel between the on-premises environment and Google Cloud.
- Configure a Cloud Dedicated Interconnect connection between the on-premises environment and Google Cloud.✔️
Q19. You are managing an application deployed on Cloud Run for Anthos, and you need to define a strategy for deploying new versions of the application. You want to evaluate the new code with a subset of production traffic to decide whether to proceed with the rollout. What should you do?
- Deploy a new revision to Cloud Run with the new version. Configure the traffic percentage between revisions.✔️
- Deploy a new service to Cloud Run with the new version. Add a Cloud Load Balancing instance in front of both services.
- In the Google Cloud Console page for Cloud Run, set up continuous deployment using Cloud Build for the development branch. As part of the Cloud Build trigger, configure the substitution variable TRAFFIC_PERCENTAGE with the percentage of traffic you want directed to a new version.
- In the Google Cloud Console, configure Traffic Director with a new Service that points to the new version of the application on Cloud Run. Configure Traffic Director to send a small percentage of traffic to the new version of the application.
Q20. You are monitoring Google Kubernetes Engine (GKE) clusters in a Cloud Monitoring workspace. As a Site Reliability Engineer (SRE), you need to triage incidents quickly. What should you do?
- Navigate the predefined dashboards in the Cloud Monitoring workspace, and then add metrics and create alert policies.✔️
- Navigate the predefined dashboards in the Cloud Monitoring workspace, create custom metrics, and install alerting software on a Compute Engine instance.
- Write a shell script that gathers metrics from GKE nodes, publish these metrics to a Pub/Sub topic, export the data to BigQuery, and make a Data Studio dashboard.
- Create a custom dashboard in the Cloud Monitoring workspace for each incident, and then add metrics and create alert policies.
Q21. You are implementing a single Cloud SQL MySQL second-generation database that contains business-critical transaction data. You want to ensure that the minimum amount of data is lost in case of catastrophic failure. Which two features should you implement?
- Sharding
- Read replicas
- Binary logging✔️
- Automated backups✔️
- Semisynchronous replication
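For context, both winning features map to a single backup configuration block in the Cloud SQL Admin API. This illustrative patch body mirrors the documented REST fields; the backup window is a hypothetical choice.

```python
# Automated backups plus binary logging together enable point-in-time
# recovery, minimizing data loss after a catastrophic failure.
instance_patch = {
    "settings": {
        "backupConfiguration": {
            "enabled": True,           # automated backups
            "startTime": "03:00",      # hypothetical backup window (UTC)
            "binaryLogEnabled": True,  # binary logging
        }
    }
}
```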
Q22. You are working at a sports association whose members range in age from 8 to 30. The association collects a large amount of health data, such as sustained injuries. You are storing this data in BigQuery. Current legislation requires you to delete such information upon request of the subject. You want to design a solution that can accommodate such a request. What should you do?
- Use a unique identifier for each individual. Upon a deletion request, delete all rows from BigQuery with this identifier.✔️
- When ingesting new data in BigQuery, run the data through the Data Loss Prevention (DLP) API to identify any personal information. As part of the DLP scan, save the result to the Data Catalog. Upon a deletion request, query the Data Catalog to find the column with personal information.
- Create a BigQuery view over the table that contains all the data. Upon a deletion request, exclude the rows that affect the subject’s data from this view. Use this view instead of the source table for all analysis tasks.
- Use a unique identifier for each individual. Upon a deletion request, overwrite the column with the unique identifier with a salted SHA256 of its value.
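A minimal sketch of the per-subject deletion from the correct answer, assuming the google-cloud-bigquery client library; the dataset, table, and column names are hypothetical:

```python
# Parameterized DML DELETE keyed on the subject's unique identifier.
from google.cloud import bigquery

client = bigquery.Client()

def delete_subject(member_id: str):
    job = client.query(
        "DELETE FROM `my-project.health.injuries` WHERE member_id = @member_id",
        job_config=bigquery.QueryJobConfig(
            query_parameters=[
                bigquery.ScalarQueryParameter("member_id", "STRING", member_id)
            ]
        ),
    )
    job.result()  # wait for the DML statement to finish
```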
Q23. Your company has announced that it will be outsourcing its operations functions. You want to allow developers to easily stage new versions of a cloud-based application in the production environment and allow the outsourced operations team to autonomously promote staged versions to production. You want to minimize the operational overhead of the solution. Which Google Cloud product should you migrate to?
- App Engine✔️
- GKE On-Prem
- Compute Engine
- Google Kubernetes Engine
Q24. Your company is running its application workloads on Compute Engine. The applications have been deployed in production, acceptance, and development environments. The production environment is business-critical and is used 24/7, while the acceptance and development environments are only critical during office hours. Your CFO has asked you to optimize these environments to achieve cost savings during idle times. What should you do?
- Create a shell script that uses the gcloud command to change the machine type of the development and acceptance instances to a smaller machine type outside of office hours. Schedule the shell script on one of the production instances to automate the task.
- Use Cloud Scheduler to trigger a Cloud Function that will stop the development and acceptance environments after office hours and start them just before office hours.✔️
- Deploy the development and acceptance applications on a managed instance group and enable autoscaling.
- Use regular Compute Engine instances for the production environment, and use preemptible VMs for the acceptance and development environments.
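A minimal sketch of the Cloud Function body that Cloud Scheduler would trigger after office hours, assuming the google-cloud-compute client library; the project, zone, and `env` label are hypothetical placeholders:

```python
# Stop every instance labeled as acceptance or development; a mirror-image
# function started by a morning schedule would call instances.start().
from google.cloud import compute_v1

def stop_non_prod(project="my-project", zone="europe-west1-b"):
    instances = compute_v1.InstancesClient()
    for instance in instances.list(project=project, zone=zone):
        if instance.labels.get("env") in ("acceptance", "development"):
            instances.stop(project=project, zone=zone, instance=instance.name)
```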
Q25. You are moving an application that uses MySQL from on-premises to Google Cloud. The application will run on Compute Engine and will use Cloud SQL. You want to cut over to the Compute Engine deployment of the application with minimal downtime and no data loss to your customers. You want to migrate the application with minimal modification. You also need to determine the cutover strategy. What should you do?
- Set up Cloud VPN to provide private network connectivity between the Compute Engine application and the on-premises MySQL server. Stop the on-premises application. Create a mysqldump of the on-premises MySQL server. Upload the dump to a Cloud Storage bucket. Import the dump into Cloud SQL. Modify the source code of the application to write queries to both databases and read from its local database. Start the Compute Engine application. Stop the on-premises application.
- Set up Cloud SQL proxy and MySQL proxy. Create a mysqldump of the on-premises MySQL server. Upload the dump to a Cloud Storage bucket. Import the dump into Cloud SQL. Stop the on-premises application. Start the Compute Engine application.
- Set up Cloud VPN to provide private network connectivity between the Compute Engine application and the on-premises MySQL server. Stop the on-premises application. Start the Compute Engine application, configured to read and write to the on-premises MySQL server. Create a replication configuration in Cloud SQL. Configure the source database server to accept connections from the Cloud SQL replica. Finalize the Cloud SQL replica configuration. Stop the Compute Engine application. Promote the Cloud SQL replica to a standalone instance. Restart the Compute Engine application.✔️
- Stop the on-premises application. Create a mysqldump of the on-premises MySQL server. Upload the dump to a Cloud Storage bucket. Import the dump into Cloud SQL. Start the application on Compute Engine.
Q26. Your organization has decided to restrict the use of external IP addresses on instances to only approved instances. You want to enforce this requirement across all of your Virtual Private Clouds (VPCs). What should you do?
- Remove the default route on all VPCs. Move all approved instances into a new subnet that has a default route to an internet gateway.
- Create a new VPC in custom mode. Create a new subnet for the approved instances, and set a default route to the internet gateway on this new subnet.
- Implement a Cloud NAT solution to remove the need for external IP addresses entirely.
- Set an Organization Policy with a constraint on constraints/compute.vmExternalIpAccess. List the approved instances in the allowedValues list.✔️
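For context, the resulting policy payload could look like the following, mirroring the listPolicy shape used by the Resource Manager setOrgPolicy API; the instance paths are placeholders:

```python
# Organization Policy restricting external IPs to an approved allowlist.
org_policy = {
    "constraint": "constraints/compute.vmExternalIpAccess",
    "listPolicy": {
        "allowedValues": [
            # Fully qualified paths of the approved instances (hypothetical).
            "projects/my-project/zones/us-central1-a/instances/approved-vm-1",
            "projects/my-project/zones/us-central1-a/instances/approved-vm-2",
        ]
    },
}
```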
Q27. Your company uses the Firewall Insights feature in the Google Network Intelligence Center. You have several firewall rules applied to Compute Engine instances. You need to evaluate the efficiency of the applied firewall ruleset. When you bring up the Firewall Insights page in the Google Cloud Console, you notice that there are no log rows to display. What should you do to troubleshoot the issue?
- Enable Virtual Private Cloud (VPC) flow logging.
- Enable Firewall Rules Logging for the firewall rules you want to monitor.✔️
- Verify that your user account is assigned to the compute.networkAdmin Identity and Access Management (IAM) role.
- Install the Google Cloud SDK, and verify that there are no Firewall logs in the command line output.
Q28. Your company has sensitive data in Cloud Storage buckets. Data analysts have Identity Access Management (IAM) permissions to read the buckets. You want to prevent data analysts from retrieving the data in the buckets from outside the office network. What should you do?
- Create a VPC Service Controls perimeter that includes the projects with the buckets. Create an access level with the CIDR of the office network.✔️
- Create a firewall rule for all instances in the Virtual Private Cloud (VPC) network for the source range. Use the Classless Inter-Domain Routing (CIDR) of the office network.
- Create a Cloud Function to remove IAM permissions from the buckets, and another Cloud Function to add IAM permissions to the buckets. Schedule the Cloud Functions with Cloud Scheduler to add permissions at the start of business and remove permissions at the end of business.
- Create a Cloud VPN to the office network. Configure Private Google Access for on-premises hosts.
Q29. You have developed a non-critical update to your application that is running in a managed instance group, and have created a new instance template with the update that you want to release. To prevent any possible impact on the application, you don’t want to update any running instances. You want any new instances that are created by the managed instance group to contain the new update. What should you do?
- Start a new rolling restart operation.
- Start a new rolling replacement operation.
- Start a new rolling update. Select the Proactive update mode.
- Start a new rolling update. Select the Opportunistic update mode.✔️
Q30. Your company is designing its application landscape on Compute Engine. Whenever a zonal outage occurs, the application should be restored in another zone as quickly as possible with the latest application data. You need to design the solution to meet this requirement. What should you do?
- Create a snapshot schedule for the disk containing the application data. Whenever a zonal outage occurs, use the latest snapshot to restore the disk in the same zone.
- Configure the Compute Engine instances with an instance template for the application, and use a regional persistent disk for the application data. Whenever a zonal outage occurs, use the instance template to spin up the application in another zone in the same region. Use the regional persistent disk for the application data.✔️
- Create a snapshot schedule for the disk containing the application data. Whenever a zonal outage occurs, use the latest snapshot to restore the disk in another zone within the same region.
- Configure the Compute Engine instances with an instance template for the application, and use a regional persistent disk for the application data. Whenever a zonal outage occurs, use the instance template to spin up the application in another region. Use the regional persistent disk for the application data.
Q31. Your company has just acquired another company, and you have been asked to integrate their existing Google Cloud environment into your company’s data center. Upon investigation, you discover that some of the RFC 1918 IP ranges being used in the new company’s Virtual Private Cloud (VPC) overlap with your data center IP space. What should you do to enable connectivity and make sure that there are no routing conflicts when connectivity is established?
- Create a Cloud VPN connection from the new VPC to the data center, create a Cloud Router, and apply new IP addresses so there is no overlapping IP space.
- Create a Cloud VPN connection from the new VPC to the data center, and create a Cloud NAT instance to perform NAT on the overlapping IP space.
- Create a Cloud VPN connection from the new VPC to the data center, create a Cloud Router, and apply a custom route advertisement to block the overlapping IP space.✔️
- Create a Cloud VPN connection from the new VPC to the data center, and apply a firewall rule that blocks the overlapping IP space.
Q32. You need to migrate Hadoop jobs for your company’s Data Science team without modifying the underlying infrastructure. You want to minimize costs and infrastructure management effort. What should you do?
- Create a Dataproc cluster using standard worker instances.
- Create a Dataproc cluster using preemptible worker instances.✔️
- Manually deploy a Hadoop cluster on Compute Engine using standard instances.
- Manually deploy a Hadoop cluster on Compute Engine using preemptible instances.
Q33. Your company has a project in Google Cloud with three Virtual Private Clouds (VPCs). There is a Compute Engine instance on each VPC. Network subnets do not overlap and must remain separated. The network configuration is shown below. Instance #1 is an exception and must communicate directly with both Instance #2 and Instance #3 via internal IPs. How should you accomplish this?
- Create a cloud router to advertise subnet #2 and subnet #3 to subnet #1.
- Add two additional NICs to Instance #1 with the following configuration: NIC1 -> VPC: VPC #2, SUBNETWORK: subnet #2; NIC2 -> VPC: VPC #3, SUBNETWORK: subnet #3. Update firewall rules to enable traffic between instances.✔️
- Create two VPN tunnels via Cloud VPN: one between VPC #1 and VPC #2, and one between VPC #2 and VPC #3. Update firewall rules to enable traffic between the instances.
- Peer all three VPCs: peer VPC #1 with VPC #2, and peer VPC #2 with VPC #3. Update firewall rules to enable traffic between the instances.
Q34. You need to deploy an application on Google Cloud that must run on a Debian Linux environment. The application requires extensive configuration to operate correctly. You want to ensure that you can install Debian distribution updates with minimal manual intervention whenever they become available. What should you do?
- Create a Compute Engine instance template using the most recent Debian image. Create an instance from this template, and install and configure the application as part of the startup script. Repeat this process whenever a new Google-managed Debian image becomes available.
- Create a Debian-based Compute Engine instance, install and configure the application, and use OS patch management to install available updates.✔️
- Create an instance with the latest available Debian image. Connect to the instance via SSH, and install and configure the application on the instance. Repeat this process whenever a new Google-managed Debian image becomes available.
- Create a Docker container with Debian as the base image. Install and configure the application as part of the Docker image creation process. Host the container on Google Kubernetes Engine and restart the container whenever a new update is available.
Q35. You have an application that runs in Google Kubernetes Engine (GKE). Over the last 2 weeks, customers have reported that a specific part of the application returns errors very frequently. You currently have no logging or monitoring solution enabled on your GKE cluster. You want to diagnose the problem, but you have not been able to replicate the issue. You want to cause minimal disruption to the application. What should you do?
- Update your GKE cluster to use Cloud Operations for GKE. Use the GKE Monitoring dashboard to investigate logs from affected Pods.
- Create a new GKE cluster with Cloud Operations for GKE enabled. Migrate the affected Pods to the new cluster, and redirect traffic for those Pods to the new cluster. Use the GKE Monitoring dashboard to investigate logs from affected Pods.
- Update your GKE cluster to use Cloud Operations for GKE, and deploy Prometheus. Set an alert to trigger whenever the application returns an error.✔️
- Create a new GKE cluster with Cloud Operations for GKE enabled, and deploy Prometheus. Migrate the affected Pods to the new cluster, and redirect traffic for those Pods to the new cluster. Set an alert to trigger whenever the application returns an error.
Q36. You need to deploy a stateful workload on Google Cloud. The workload can scale horizontally, but each instance needs to read and write to the same POSIX filesystem. At a high load, the stateful workload needs to support up to 100 MB/s of writes. What should you do?
- Use a persistent disk for each instance.
- Use a regional persistent disk for each instance.
- Create a Cloud Filestore instance and mount it in each instance.✔️
- Create a Cloud Storage bucket and mount it in each instance using gcsfuse.
Q37. Your company has an application deployed on Anthos clusters (formerly Anthos GKE) that are running multiple microservices. The cluster has both Anthos Service Mesh and Anthos Config Management configured. End users inform you that the application is responding very slowly. You want to identify the microservice that is causing the delay. What should you do?
- Use the Service Mesh visualization in the Cloud Console to inspect the telemetry between the microservices.✔️
- Use Anthos Config Management to create a ClusterSelector selecting the relevant cluster. On the Google Cloud Console page for Google Kubernetes Engine, view the Workloads and filter on the cluster. Inspect the configurations of the filtered workloads.
- Use Anthos Config Management to create a namespaceSelector selecting the relevant cluster namespace. On the Google Cloud Console page for Google Kubernetes Engine, visit the workloads and filter on the namespace. Inspect the configurations of the filtered workloads.
- Reinstall istio using the default istio profile in order to collect request latency. Evaluate the telemetry between the microservices in the Cloud Console.
Q38. You are working at a financial institution that stores mortgage loan approval documents on Cloud Storage. Any change to these approval documents must be uploaded as a separate approval file, so you want to ensure that these documents cannot be deleted or overwritten for the next 5 years. What should you do?
- Create a retention policy on the bucket for 5 years. Create a lock on the retention policy.✔️
- Create the bucket with uniform bucket-level access, and grant a service account the role of Object Writer. Use the service account to upload new files.
- Use a customer-managed key for the encryption of the bucket. Rotate the key after 5 years.
- Create the bucket with fine-grained access control, and grant a service account the role of Object Writer. Use the service account to upload new files.
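A minimal sketch of setting and locking the retention policy from the correct answer, assuming the google-cloud-storage client library; the bucket name is hypothetical:

```python
# Set a 5-year retention period, then lock it so it cannot be reduced
# or removed.
from google.cloud import storage

client = storage.Client()
bucket = client.get_bucket("mortgage-approval-docs")  # hypothetical

bucket.retention_period = 5 * 365 * 24 * 60 * 60  # seconds
bucket.patch()

# Locking is irreversible: objects cannot be deleted or overwritten until
# they are older than the retention period.
bucket.lock_retention_policy()
```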
Q39. Your team will start developing a new application using microservices architecture on Kubernetes Engine. As part of the development lifecycle, any code change that has been pushed to the remote development branch on your GitHub repository should be built and tested automatically. When the build and test are successful, the relevant microservice will be deployed automatically in the development environment. You want to ensure that all code deployed in the development environment follows this process. What should you do?
- Have each developer install a pre-commit hook on their workstation that tests the code and builds the container when committing on the development branch. After a successful commit, have the developer deploy the newly built container image on the development cluster.
- Install a post-commit hook on the remote git repository that tests the code and builds the container when code is pushed to the development branch. After a successful commit, have the developer deploy the newly built container image on the development cluster.
- Create a Cloud Build trigger based on the development branch that tests the code, builds the container, and stores it in Container Registry. Create a deployment pipeline that watches for new images and deploys the new image on the development cluster. Ensure only the deployment tool has access to deploy new versions.✔️
- Create a Cloud Build trigger based on the development branch to build a new container image and store it in Container Registry. Rely on Vulnerability Scanning to ensure the code tests succeed. As the final step of the Cloud Build process, deploy the new container image on the development cluster. Ensure only Cloud Build has access to deploy new versions.
Q40. Your operations team has asked you to help diagnose a performance issue in a production application that runs on Compute Engine. The application is dropping requests that reach it when under heavy load. The process list for affected instances shows a single application process that is consuming all available CPU, and autoscaling has reached the upper limit of instances. There is no abnormal load on any other related systems, including the database. You want to allow production traffic to be served again as quickly as possible. Which action should you recommend?
- Change the autoscaling metric to agent.googleapis.com/memory/percent_used.
- Restart the affected instances on a staggered schedule.
- SSH to each instance and restart the application process.
- Increase the maximum number of instances in the autoscaling group.✔️
Q41. You are implementing the infrastructure for a web service on Google Cloud. The web service needs to receive and store the data from 500,000 requests per second. The data will be queried later in real time, based on exact matches of a known set of attributes. There will be periods where the web service will not receive any requests. The business wants to keep costs low. Which web service platform and database should you use for the application?
- Cloud Run and BigQuery
- Cloud Run and Cloud Bigtable✔️
- A Compute Engine autoscaling managed instance group and BigQuery
- A Compute Engine autoscaling managed instance group and Cloud Bigtable
Q42. You are developing an application using different microservices that should remain internal to the cluster. You want to be able to configure each microservice with a specific number of replicas. You also want to be able to address a specific microservice from any other microservice in a uniform way, regardless of the number of replicas the microservice scales to. You need to implement this solution on Google Kubernetes Engine. What should you do?
- Deploy each microservice as a Deployment. Expose the Deployment in the cluster using a Service, and use the Service DNS name to address it from other microservices within the cluster.✔️
- Deploy each microservice as a Deployment. Expose the Deployment in the cluster using an Ingress, and use the Ingress IP address to address the Deployment from other microservices within the cluster.
- Deploy each microservice as a Pod. Expose the Pod in the cluster using a Service, and use the Service DNS name to address the microservice from other microservices within the cluster.
- Deploy each microservice as a Pod. Expose the Pod in the cluster using an Ingress, and use the Ingress IP address name to address the Pod from other microservices within the cluster.
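For context, once a Deployment is exposed through a Service, any Pod in the cluster can reach it by the Service's stable DNS name regardless of replica count. The service name, namespace, and path below are hypothetical, and the snippet must run inside a Pod on the same cluster:

```python
# Address a microservice by its cluster-internal Service DNS name.
import requests

resp = requests.get("http://cart-service.default.svc.cluster.local/items")
resp.raise_for_status()
print(resp.json())
```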
Q43. Your company has a networking team and a development team. The development team runs applications on Compute Engine instances that contain sensitive data. The development team requires administrative permissions for Compute Engine. Your company requires all network resources to be managed by the networking team. The development team does not want the networking team to have access to the sensitive data on the instances. What should you do?
- Create a project with a standalone VPC and assign the Network Admin role to the networking team. Create a second project with a standalone VPC and assign the Compute Admin role to the development team. Use Cloud VPN to join the two VPCs.
- Create a project with a standalone Virtual Private Cloud (VPC), assign the Network Admin role to the networking team, and assign the Compute Admin role to the development team.
- Create a project with a Shared VPC and assign the Network Admin role to the networking team. Create a second project without a VPC, configure it as a Shared VPC service project, and assign the Compute Admin role to the development team.✔️
- Create a project with a standalone VPC and assign the Network Admin role to the networking team. Create a second project with a standalone VPC and assign the Compute Admin role to the development team. Use VPC Peering to join the two VPCs.
Q44. Your company wants you to build a highly reliable web application with a few public APIs as the backend. You don’t expect a lot of user traffic, but traffic could spike occasionally. You want to leverage Cloud Load Balancing, and the solution must be cost-effective for users. What should you do?
- Store static content such as HTML and images in Cloud CDN. Host the APIs on App Engine and store the user data in Cloud SQL.
- Store static content such as HTML and images in a Cloud Storage bucket. Host the APIs on a zonal Google Kubernetes Engine cluster with worker nodes in multiple zones, and save the user data in Cloud Spanner.
- Store static content such as HTML and images in Cloud CDN. Use Cloud Run to host the APIs and save the user data in Cloud SQL.
- Store static content such as HTML and images in a Cloud Storage bucket. Use Cloud Functions to host the APIs and save the user data in Firestore.✔️
Q45. Your company sends all Google Cloud logs to Cloud Logging. Your security team wants to monitor the logs. You want to ensure that the security team can react quickly if an anomaly, such as an unwanted firewall change or server breach, is detected. You want to follow Google’s recommended practices. What should you do?
- Schedule a cron job with Cloud Scheduler. The scheduled job queries the logs every minute for the relevant events.
- Export logs to BigQuery, and trigger a query in BigQuery to process the log data for the relevant events.
- Export logs to a Pub/Sub topic, and trigger a Cloud Function with the relevant log events.✔️
- Export logs to a Cloud Storage bucket, and trigger Cloud Run with the relevant log events.
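A minimal sketch of the Cloud Function behind the recommended option: a Cloud Logging sink routes the relevant entries to a Pub/Sub topic, and the function fires on each message. The anomaly check and notification helper are hypothetical placeholders.

```python
# First-generation Cloud Functions signature for a Pub/Sub trigger.
import base64
import json

def on_log_event(event, context):
    entry = json.loads(base64.b64decode(event["data"]).decode("utf-8"))
    method = entry.get("protoPayload", {}).get("methodName", "")
    if "firewalls" in method:  # e.g., an unwanted firewall change
        notify_security_team(entry)

def notify_security_team(entry):
    # Placeholder: page the security team via your alerting tool of choice.
    print("ALERT:", entry.get("protoPayload", {}).get("methodName"))
```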
Q46. You have deployed several instances on Compute Engine. As a security requirement, instances cannot have a public IP address. There is no VPN connection between Google Cloud and your office, and you need to connect via SSH into a specific machine without violating the security requirements. What should you do?
- Configure Cloud NAT on the subnet where the instance is hosted. Create an SSH connection to the Cloud NAT IP address to reach the instance.
- Add all instances to an unmanaged instance group. Configure TCP Proxy Load Balancing with the instance group as a backend. Connect to the instance using the TCP Proxy IP.
- Configure Identity-Aware Proxy (IAP) for the instance and ensure that you have the role of IAP-secured Tunnel User. Use the gcloud command line tool to ssh into the instance.✔️
- Create a bastion host in the network to SSH into the bastion host from your office location. From the bastion host, SSH into the desired instance.
Q47. Your company is using Google Cloud. You have two folders under the Organization: Finance and Shopping. The members of the development team are in a Google Group. The development team group has been assigned the Project Owner role in the Organization. You want to prevent the development team from creating resources in projects in the Finance folder. What should you do?
- Assign the development team group the Project Viewer role on the Finance folder, and assign the development team group the Project Owner role on the Shopping folder.
- Assign the development team group only the Project Viewer role on the Finance folder.
- Assign the development team group the Project Owner role on the Shopping folder, and remove the development team group’s Project Owner role from the Organization.✔️
- Assign the development team group only the Project Owner role on the Shopping folder.
Q48. You are developing your microservices application on Google Kubernetes Engine. During testing, you want to validate the behavior of your application in case a specific microservice suddenly crashes. What should you do?
- Add a taint to one of the nodes of the Kubernetes cluster. For the specific microservice, configure a pod anti-affinity label that has the name of the tainted node as a value.
- Use Istio’s fault injection on the particular microservice whose faulty behavior you want to simulate.✔️
- Destroy one of the nodes of the Kubernetes cluster to observe the behavior.
- Configure Istio’s traffic management features to steer the traffic away from a crashing microservice.
Q49. Your company is developing a new application that will allow globally distributed users to upload pictures and share them with other selected users. The application will support millions of concurrent users. You want to allow developers to focus on just building code without having to create and maintain the underlying infrastructure. Which service should you use to deploy the application?
- App Engine✔️
- Cloud Endpoints
- Compute Engine
- Google Kubernetes Engine
Q50. Your company provides a recommendation engine for retail customers. You are providing retail customers with an API where they can submit a user ID, and the API returns a list of recommendations for that user. You are responsible for the API lifecycle and want to ensure stability for your customers in case the API makes backward-incompatible changes. You want to follow Google-recommended practices. What should you do?
- Create a distribution list of all customers to inform them of an upcoming backward-incompatible change at least one month before replacing the old API with the new API.
- Create an automated process to generate API documentation, and update the public API documentation as part of the CI/CD process when deploying an update to the API.
- Use a versioning strategy for the APIs that increases the version number on every backward-incompatible change.✔️
- Use a versioning strategy for the APIs that adds the suffix ‘DEPRECATED’ to the current API version number on every backward-incompatible change. Use the current version number for the new API.
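A minimal sketch of path-based versioning: a backward-incompatible change ships under a new version prefix while the old version keeps serving existing customers. Flask and the route shapes are illustrative choices, not part of the exam answer.

```python
# v1 stays stable; the breaking change lives under /v2/.
from flask import Flask, jsonify

app = Flask(__name__)

@app.route("/v1/recommendations/<user_id>")
def recommendations_v1(user_id):
    return jsonify({"items": ["sku-1", "sku-9"]})  # original contract

@app.route("/v2/recommendations/<user_id>")
def recommendations_v2(user_id):
    # v2 changes the response shape, a backward-incompatible change,
    # so it gets its own version prefix instead of replacing v1.
    return jsonify({"items": [{"sku": "sku-1", "score": 0.97}]})

if __name__ == "__main__":
    app.run()
```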
Q51. Your company has developed a monolithic, 3-tier application to allow external users to upload and share files. The solution cannot be easily enhanced and lacks reliability. The development team would like to re-architect the application to adopt microservices and a fully managed service approach, but they need to convince their leadership that the effort is worthwhile. Which advantage(s) should they highlight to leadership?
- The new approach will be significantly less costly, make it easier to manage the underlying infrastructure, and automatically manage the CI/CD pipelines.
- The monolithic solution can be converted to a container with Docker. The generated container can then be deployed into a Kubernetes cluster.
- The new approach will make it easier to decouple infrastructure from application, develop and release new features, manage the underlying infrastructure, manage CI/CD pipelines, perform A/B testing, and scale the solution if necessary.✔️
- The process can be automated with Migrate for Compute Engine.
Q52. Your team is developing a web application that will be deployed on Google Kubernetes Engine (GKE). Your CTO expects a successful launch, and you need to ensure your application can handle the expected load of tens of thousands of users. You want to test the current deployment to ensure the latency of your application stays below a certain threshold. What should you do?
- Use a load testing tool to simulate the expected number of concurrent users and total requests to your application, and inspect the results.✔️
- Enable autoscaling on the GKE cluster and enable horizontal pod autoscaling on your application deployments. Send curl requests to your application, and validate that autoscaling works.
- Replicate the application over multiple GKE clusters in every Google Cloud region. Configure a global HTTP(S) load balancer to expose the different clusters over a single global IP address.
- Use Cloud Debugger in the development environment to understand the latency between the different microservices.
Q53. Your company wants to deploy several microservices to help its system handle elastic loads. Each microservice uses a different version of software libraries. You want to enable their developers to keep their development environment in sync with the various production services. Which technology should you choose?
- RPM/DEB
- Containers✔️
- Chef/Puppet
- Virtual machines
Q54. Your company wants to track whether someone is present in a meeting room reserved for a scheduled meeting. There are 1000 meeting rooms across 5 offices on 3 continents. Each room is equipped with a motion sensor that reports its status every second. You want to support the data ingestion needs of this sensor network. The receiving infrastructure needs to account for the possibility that the devices may have inconsistent connectivity. Which solution should you design?
- Have each device create a persistent connection to a Compute Engine instance and write messages to a custom application.
- Have devices poll for connectivity to Cloud SQL and insert the latest messages at a regular interval into a device-specific table.
- Have devices poll for connectivity to Cloud Pub/Sub and publish the latest messages on a regular interval to a shared topic for all devices.✔️
- Have devices create a persistent connection to an App Engine application fronted by Cloud Endpoints, which ingests messages and writes them to Cloud Datastore.
Q55. Your organization has a 3-tier web application deployed in the same Google Cloud Virtual Private Cloud (VPC). Each tier (web, API, and database) scales independently of the others. Network traffic should flow through the web to the API tier, and then on to the database tier. Traffic should not flow between the web and the database tier. How should you configure the network with minimal steps?
- Add each tier to a different subnetwork.
- Set up software-based firewalls on individual VMs.
- Add tags to each tier and set up routes to allow the desired traffic flow.
- Add tags to each tier and set up firewall rules to allow the desired traffic flow.✔️