
Professional Cloud Developer

🌸 Passed: February 27, 2025

Passing Memo

Impressions:

  • Number of questions: 50
  • Confident on ~35 questions / Reviewed ~15 questions
  • Completed the first pass at about the 70-minute mark (of 120), then spent about 30 minutes on review, finishing the exam at around 100 of the 120 minutes.

My impression was that there were many questions I saw for the first time. I also noticed many questions from the Cloud Architect and Network exams, not just the Developer practice exam. There was a lot of overlap with Cloud Architect in particular, so I recommend studying for both at the same time. I felt many questions had answer choices that were difficult to narrow down unless you knew the exact correct answer.

Question Trends:

  • Questions about connecting to Cloud SQL using Private Google Access.
  • Questions about network configuration for instances that need to use an external load balancer.
  • Questions about authentication and authorization during deployment.
  • Questions about Cloud Build code and configuration (finding a solution for a connection error from code).
  • Complex questions involving Pub/Sub triggers during deployment.
  • Questions recommending Cloud Run specifications.
  • Questions asking for the deployment procedure for vulnerability scanning using Artifact Registry.
  • Questions asking for the configuration steps for Binary Authorization.

Exam Overview:

Exam Information - February 27, 2025

Exam Name: Google Cloud Certified - Professional Cloud Developer
Remote Proctor: Home
Date: February 27, 2025 (Thursday) 12:15 PM (UTC+09:00)

Warning

Because of my strained back, I might not be able to set up my room, so I should consider rescheduling early.

Exam Day
  • Log in early and wait on the login page so I'm ready when the exam starts.
    • Have my ID ready.
  • Review official documentation and weak areas until the exam begins.
Post-Exam TODO
Emergency
The relevant Udemy course was deleted.
  • Request a refund for each course after the exam.
  • Write down my reflections on the exam.

🔥Strategy for the Exam🔥

Study Strategy:
  • Reference Materials

  • Go through practice question sets repeatedly

    • Practice Exams | Google Cloud Database Engineer (GCP)

    • Study only the free questions from Whizlabs

    • 1st pass: 2025/02/08 ~50%

      • Practice Test 1
      • Practice Test 2
      • Practice Test 3
      • Practice Test 4
      • Practice Test 5
    • 2nd pass: 2025/02/25

      • Practice Test 1
      • Practice Test 2
      • Practice Test 3
      • Practice Test 4
      • Practice Test 5
    • Review (incorrect answers)

      • Practice Test 1
      • Practice Test 2
      • Practice Test 3
      • Practice Test 4
      • Practice Test 5
    • Review (questions marked for review)

      • Practice Test 1
      • Practice Test 2
      • Practice Test 3
      • Practice Test 4
      • Practice Test 5
    • Try the Official Practice Exam

      • → Regarding the practice exam: It's covered by the other question sets, so no longer needed.
        • 76.92%: 2025/02/25
      • Review: 2025/02/25
    • Review weak areas

      • 2025/02/26
  • Read GCE articles in the Architecture Center


Weak Areas

Cursors, limits, and offsets:Datastore

Query cursors let an application retrieve a query's results in convenient batches without incurring the overhead of a query offset.

While Datastore mode databases support integer offsets, you should avoid using them. Use cursors instead.

  • ❌SELECT * FROM books LIMIT 10 OFFSET 20;
  • ⭕️SELECT * FROM books WHERE id > last_id ORDER BY id LIMIT 10;
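The difference between the two query shapes can be seen in a small, self-contained sketch. This uses plain Python over an in-memory list rather than the Datastore client, and the books data is made up; it only illustrates keyset (cursor-style) pagination, where the caller remembers the last id seen instead of paying for an offset scan:

```python
# Keyset pagination sketch: filter on id > last_id instead of using OFFSET.
# The "books" table here is an illustrative in-memory stand-in.
books = [{"id": i, "title": f"Book {i}"} for i in range(1, 101)]

def fetch_page(last_id=0, limit=10):
    """Return the next page of books after last_id, plus a cursor for the next call."""
    ordered = sorted(books, key=lambda b: b["id"])
    page = [b for b in ordered if b["id"] > last_id][:limit]
    next_cursor = page[-1]["id"] if page else None
    return page, next_cursor

page1, cursor = fetch_page()        # first 10 books
page2, cursor = fetch_page(cursor)  # next 10, no offset scan needed
```

With a real Datastore-mode database, the same idea is exposed as opaque query cursors rather than raw ids, but the access pattern is identical.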

Debug issues with the serial console:Compute Engine

We recommend that you check the serial console logs for connection errors. You can access the serial console as the root user from your local workstation using a browser. This approach is helpful if you cannot log in using SSH, or if the instance doesn't have a network connection.

Accessing a private cluster with Cloud Shell:GKE・Cloud Shell

The private cluster created in the Using auto-generated subnets section, private-cluster-1, has a public endpoint and has authorized networks enabled. To access the cluster by using Cloud Shell, you must add the external IP address of Cloud Shell to the cluster's list of authorized networks.

View the image vulnerabilities:Artifact Registry

Artifact Analysis scans new images uploaded to Artifact Registry. This scan extracts information about system packages in the container. You can view the vulnerability occurrences for images in a registry using the Google Cloud console, the Google Cloud CLI, or the Container Analysis API. If vulnerabilities are present in an image, you can view the details.

GKE Ingress for Application Load Balancers

This page provides an overview of how External Application Load Balancers (HTTPS) work. Google Kubernetes Engine (GKE) provides a built-in and managed Ingress controller, called GKE Ingress. This controller implements Ingress resources as Google Cloud load balancers for HTTP(S) workloads in GKE.
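As a sketch, a minimal Ingress manifest that GKE Ingress would implement as an external Application Load Balancer might look like this (the Service name `web-service` and its port are hypothetical):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: web-ingress                        # hypothetical name
  annotations:
    kubernetes.io/ingress.class: "gce"     # GKE's built-in external Ingress controller
spec:
  defaultBackend:
    service:
      name: web-service                    # hypothetical backing Service
      port:
        number: 80
```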

Setting up Cloud KMS in a separate project:Separation of duties (SoD)

To enable separation of duties, you can run Cloud KMS in its own project (for example, your-key-project). Depending on how strict your separation requirements are, you can either:

  • (Recommended) Create your-key-project without an owner at the project level, and designate an Organization Admin, granted at the organization level. Unlike an owner, an Organization Admin cannot directly manage or use keys. They are limited to setting IAM policies that limit who can manage and use keys. Using an organization-level node can help further limit permissions on projects in the organization.

Traffic management (Traffic Director):Cloud Service Mesh

Cloud Service Mesh maintains a service registry of all services in the mesh, by name and their respective endpoints (IP addresses of Kubernetes Pods, IP addresses of Compute Engine VMs in managed instance groups, and so on). The mesh uses this registry to route traffic to the appropriate endpoints via a proxy that runs alongside your services. Proxyless gRPC workloads can also run in parallel with workloads that use Envoy proxies.

Instance metadata server:Cloud Run

A Cloud Run instance exposes a metadata server that you can use to get details about your container, such as the project ID, region, instance ID, or service accounts. You can also use the metadata server to generate identity tokens for the service. To access the metadata server's data, send an HTTP request to the http://metadata.google.internal/ endpoint with the Metadata-Flavor: Google header.
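A minimal sketch of that request using only the standard library. The request is only constructed here, not sent, since `metadata.google.internal` resolves only inside Google Cloud:

```python
import urllib.request

# Sketch: query the metadata server for the project ID.
# Inside Cloud Run you would send this with urllib.request.urlopen(req).
METADATA_URL = "http://metadata.google.internal/computeMetadata/v1/project/project-id"

req = urllib.request.Request(
    METADATA_URL,
    headers={"Metadata-Flavor": "Google"},  # required header; requests without it are rejected
)
```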

BigQuery Job User

BigQuery Job User (roles/bigquery.jobUser)

  • Provides permissions to run jobs, including queries, within the project.

Change compute capacity:Cloud Spanner

You can increase the compute capacity of an instance after you create it. In most cases, the request will finish in a few minutes. In rare cases, the scale-up can take up to an hour to complete....

When you remove compute capacity, monitor CPU utilization and request latency in Cloud Monitoring to make sure that CPU utilization stays below 65% for regional instances and below 45% for each region in multi-region instances. During the removal of compute capacity, you may see a temporary increase in request latency.
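Those utilization ceilings can be codified in a tiny helper (a sketch; the function name is made up, and utilization is expressed as a fraction of 1.0):

```python
def safe_to_downsize(cpu_utilization: float, multi_region: bool = False) -> bool:
    """Return True if CPU utilization is below the recommended ceiling
    before removing Spanner compute capacity:
    65% for regional instances, 45% per region for multi-region instances."""
    limit = 0.45 if multi_region else 0.65
    return cpu_utilization < limit
```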

Using Cloud Trace with Zipkin

A Zipkin server is useful if your applications are instrumented with Zipkin, but you don't want to run your own trace backend, or you want to take advantage of Cloud Trace's advanced analysis tools.

GKE: Best Practices: Recommendations

Regional clusters consist of a quorum of three Kubernetes control-plane replicas and offer higher availability for your cluster's control-plane API than zonal clusters. While existing workloads running on nodes are not affected if the control plane is unavailable, some applications are very sensitive to cluster API availability. For those workloads, we recommend using a regional cluster topology.

Initialization period (formerly cool-down period):GCE

The initialization period is the amount of time that it takes for your application to initialize on a VM instance. While an application is initializing on an instance, the instance's usage data might not reflect its normal circumstances.

If you set an initialization period value that is significantly longer than the time it takes for an instance to initialize, the autoscaler might ignore legitimate utilization data. This might cause the autoscaler to underestimate the required size of your group, which can delay a scale out.

Gateway API resources:GKE networking

The diagram is helpful.

What is a Kubernetes Service?

The purpose of a Service is to group a set of Pod endpoints into a single resource. You can configure various ways to access this group. By default, you get a stable cluster IP address, which clients inside the cluster can use to contact Pods in the Service.
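A minimal Service manifest for that default behavior (the name, label, and ports are illustrative):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: web-service        # hypothetical name
spec:
  type: ClusterIP          # default: stable in-cluster virtual IP
  selector:
    app: web               # groups all Pods labeled app=web behind this Service
  ports:
    - port: 80             # port clients inside the cluster connect to
      targetPort: 8080     # port the Pod's container listens on
```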

Cluster availability types: GKE

Best practice:

  • For production workloads, use regional clusters, which are more available than zonal clusters. For development environments, use regional clusters with zonal node pools. The cost of a cluster with a regional control plane and zonal node pools is the same as a zonal cluster.

Network isolation options: GKE

Best practice:

  • Use Cloud NAT to allow GKE Pods to access resources with public IP addresses. With Cloud NAT, Pods are not directly exposed to the internet, but can access resources that are on the internet, which improves your cluster's overall security posture.

GKE cluster architecture

The configuration diagram is helpful.

Writing container logs:Cloud Run logs

When you write logs from your service or job, they are picked up automatically by Cloud Logging, as long as the logs are written to any of these locations:

  • Standard output (stdout) or standard error (stderr) streams
  • Any file under the /var/log directory
  • syslog (/dev/log)
  • Logs written with Cloud Logging client libraries, which are available for many popular languages.

Most developers are expected to write logs using standard output and standard error.
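A common pattern on top of stdout is to write one JSON object per line, which Cloud Logging parses into structured log fields such as severity. A minimal sketch (the helper name and the example fields are made up):

```python
import json
import sys

def log(severity, message, **fields):
    """Write one structured log line to stdout.
    Cloud Logging parses JSON lines and maps "severity" and "message"
    to the corresponding log entry fields."""
    entry = {"severity": severity, "message": message, **fields}
    print(json.dumps(entry), file=sys.stdout)
    return entry

entry = log("INFO", "request handled", path="/healthz")  # hypothetical request field
```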

Flame graphs:Cloud Profiler

Cloud Profiler displays the profiling data in a flame graph. A flame graph uses screen space more efficiently than a tree or other graph and presents a large amount of information in a compact, readable format. The frames are named by function, and their width is relative to the measurement of total CPU time for that function.

Ramp up the request rate gradually:Cloud Storage

To ensure that Cloud Storage autoscaling always performs optimally, gradually ramp up the request rate for any bucket that hasn't had a high request rate for several days, or for any bucket that has a new range of object keys. No ramp-up is needed below 1,000 write requests per second or 5,000 read requests per second. Above those thresholds, start at or just below them and increase gradually, never doubling the rate faster than every 20 minutes. If you see problems such as increased latency or error rates, pause the ramp-up or reduce the request rate to give Cloud Storage more time to scale the bucket.

You should use exponential backoff to retry requests when:

  • You receive errors with 408 and 429 response codes.
  • You receive errors with 5xx response codes.
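The retry guidance above can be sketched as a small exponential-backoff helper. This is plain Python with no real HTTP; the simulated response sequence is made up, and the sleep function is injectable so the sketch runs instantly:

```python
import random

def is_retryable(status: int) -> bool:
    """408, 429, and all 5xx responses should be retried with backoff."""
    return status in (408, 429) or 500 <= status <= 599

def call_with_backoff(request_fn, max_attempts=5, base_delay=0.5, sleep=lambda s: None):
    """Call request_fn, retrying retryable statuses with exponential backoff + jitter."""
    for attempt in range(max_attempts):
        status, body = request_fn()
        if not is_retryable(status):
            return status, body
        # Delay doubles each attempt; jitter avoids synchronized retries.
        sleep(base_delay * (2 ** attempt) + random.uniform(0, 0.1))
    return status, body

# Simulated server: fails with 429, then 503, then succeeds.
responses = iter([(429, None), (503, None), (200, "ok")])
status, body = call_with_backoff(lambda: next(responses))
```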

Uptime checks:Cloud Monitoring

For HTTP and HTTPS, all URL redirects are followed, and the uptime check uses the final response it receives to evaluate the success criteria. For HTTPS checks, the SSL certificate expiration is computed based on the server certificate received on the final response.

Disaster recovery architecture:Cloud SQL

Check the diagram.

Two instances of Cloud SQL, a primary instance and a standby instance, reside in two separate zones within a single region (the primary region). The instances are kept in sync using regional persistent disks.

One instance of Cloud SQL, a cross-region read replica, resides in a second region (the secondary region). For DR, the cross-region read replica is configured to be kept in sync with the primary instance using read replica settings (using asynchronous replication).