Google Cloud Certified Fellow

Today the Google Cloud Certified Fellow program was launched. I am happy to announce that I am recognised as Fellow #5.

It is a certification outside of the standard Professional program and is directed at technical leaders working with Anthos. This is how Google describes it:

The Google Cloud Certified Fellow program is for elite cloud architects and technical leaders who are experts in designing enterprise solutions. This certification program recognizes individuals with deep technical expertise who can translate business requirements into technical solutions using Anthos and Google Cloud.

The Hybrid Multi-cloud Certification is the first certification in this program and assesses both technical skills and business expertise. Achieving this certification demonstrates your leadership, business impact, and technical acumen, as well as your ability to:

• Design hybrid and multi-cloud solution architectures with Anthos
• Design for security and compliance
• Provision a solution infrastructure
• Optimize technical and business processes
• Ensure solution and operations reliability

I will post more information on the program soon. In the meantime you can get more info here:

Anthos 1.2 Time Sync issues on GKE On-Prem nodes

When you log in to your GKE-OP nodes you might find that the time is synced with your ESXi host rather than with the time server configured in your DHCP options or static IP files used for GKE-OP cluster provisioning.

This issue is actually related to Ubuntu 18 and is connected with the settings of the systemd-timesyncd service.

To see if you are experiencing the issue, run:

sudo SYSTEMD_LOG_LEVEL=debug /lib/systemd/systemd-timesyncd

ubuntu@gke-03-user0103:~$ sudo SYSTEMD_LOG_LEVEL=debug /lib/systemd/systemd-timesyncd
Failed to create state directory: Permission denied
ubuntu@gke-03-user0103:~$ sudo SYSTEMD_LOG_LEVEL=debug /lib/systemd/systemd-timesyncd
Added new server
Added new server
Selected server
Resolved address for
Selected address of server
Connecting to time server (
Sent NTP request to (
Server has too large root distance. Disconnecting.
Waiting after exhausting servers.

Root cause: network delay causes the NTP response to exceed the configured maximum root distance, so timesyncd rejects the server.

Solution: There is no permanent fix for this issue, as the NTP settings are created when the nodes are deployed using the DHCP or static IP files. You can only apply the fix after your nodes are deployed, and the settings will be lost when you redeploy.

To work around this issue, edit the timesyncd.conf file and set RootDistanceMaxSec=20 (you might need to experiment to find the sweet spot):

sudo cat /etc/systemd/timesyncd.conf
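The relevant section of the file should look roughly like this (a sketch; the NTP server value is a placeholder for whatever your DHCP options or static IP files configure):

```
[Time]
NTP=<your time server>
RootDistanceMaxSec=20
```

After changing the file, restart the service with sudo systemctl restart systemd-timesyncd so the new setting takes effect.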






Now check that the connection works fine:

ubuntu@gke-03-user0103:~$ sudo SYSTEMD_LOG_LEVEL=debug /lib/systemd/systemd-timesyncd
Added new server

freq offset : +0 (0 ppm)
interval/delta/delay/jitter/drift 64s/+0.033s/0.001s/0.000s/+0ppm
Synchronized to time server (

Summary of 2019

It has been a great year for me, with some major goals achieved. I am very thankful to all who have made this come true!

Book – special thanks to my co-author @Brian Gerrad. Only you know what the true cost of writing this book was 🙂

Certifications

  • Professional Cloud Architect ’18
  • Professional Data Engineer ’18
  • Associate Cloud Engineer 
  • Professional Cloud Developer 
  • Professional Cloud Network Engineer
  • Professional Cloud Security Engineer 
  • Professional DevOps Engineer (Beta results pending)
  • There is one more that will be announced in January… cannot wait!

Events

  • BitConf Speaker – link
  • vBrownBags Speaker – link
  • Google Next San Francisco ’19
  • Google Next London ’19
  • Google Developer Group Leads Lisbon
  • GSI Champions Conference in Sunnyvale

Google Developer Group Cloud Bydgoszcz

5 Meetups this year with around 50 participants each!

  • 4 Onsite Meetups
  • 1 Online Meetup

Other achievements

  • GSI Champion
  • Google Cloud Platform Learning Ambassador
  • Started developing Anthos on DPC/DHC
  • Decided to stay with the company despite having an offer from one of my top 5 companies to work for.

Missed goals

  • GCP Certified Trainer – lack of time
  • Google Developer Expert – building portfolio
  • Cloud Guru Instructor – lack of time

Goals 2020

Installing Istio on GKE-OP for Anthos


GKE-OP 1.1.2 supports open source Istio version 1.1.13. To perform the installation you need a user cluster installed and validated. The installation procedure can be found here:

In this article we will show how to install Istio and a simple microservice application. We will generate some traffic to that application and visualise the flows with Kiali.

The high level steps are as follows:

  • install Helm
  • deploy Istio CRDs
  • deploy Istio
  • expose Telemetry services
  • install BookInfo application

All the steps are performed from the admin workstation.

Installing Helm

Download Helm by running:

curl https://get.helm.sh/helm-v2.16.1-linux-amd64.tar.gz --output helm-v2.16.1-linux-amd64.tar.gz

Unzip it, move the binary to the bin folder, and check the version:

tar -zxvf helm-v2.16.1-linux-amd64.tar.gz

sudo mv linux-amd64/helm /usr/local/bin/helm

helm version
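The istio-init and istio chart paths used below assume you are working from the root of an extracted Istio release. A sketch of fetching it (the GitHub release URL pattern is an assumption based on how pre-1.5 Istio releases were published):

```shell
# Fetch and extract the Istio release matching the version supported by GKE-OP 1.1.2
ISTIO_VERSION=1.1.13
curl -L "https://github.com/istio/istio/releases/download/${ISTIO_VERSION}/istio-${ISTIO_VERSION}-linux.tar.gz" \
  -o "istio-${ISTIO_VERSION}-linux.tar.gz"
tar -zxvf "istio-${ISTIO_VERSION}-linux.tar.gz"
cd "istio-${ISTIO_VERSION}"
```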

Install CRDs

helm template install/kubernetes/helm/istio-init --name istio-init --namespace istio-system | kubectl apply -f -

Set up the Kiali password

KIALI_USERNAME=$(read -p 'Kiali Username: ' uval && echo -n $uval | base64)

KIALI_PASSPHRASE=$(read -sp 'Kiali Passphrase: ' pval && echo -n $pval | base64)

When prompted, enter the username and passphrase.
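The one-liners above base64-encode the values, because Kubernetes Secret data fields must be base64-encoded strings. For example (with a hypothetical username):

```shell
# Secret data fields must be base64-encoded strings
echo -n 'admin' | base64   # prints YWRtaW4=
```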

cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Secret
metadata:
  name: kiali
  namespace: $NAMESPACE
  labels:
    app: kiali
type: Opaque
data:
  username: $KIALI_USERNAME
  passphrase: $KIALI_PASSPHRASE
EOF


Install Istio using the demo profile – this includes Kiali, Grafana and Jaeger.

helm template install/kubernetes/helm/istio --name istio --namespace istio-system --values install/kubernetes/helm/istio/values-istio-demo.yaml | kubectl apply -f -

Check that services are running

kubectl get service -n istio-system

kubectl get pods -n istio-system

Edit the Istio ingress gateway service to assign an IP address to the Istio gateway.

kubectl edit svc -n istio-system istio-ingressgateway

Under spec, set:

spec:
  loadBalancerIP: <IP_Address>
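If you prefer a non-interactive change, the same field can be set with a patch (the address below is a hypothetical placeholder for a free IP in your load balancer range):

```shell
# Set loadBalancerIP on the ingress gateway without opening an editor
kubectl patch svc istio-ingressgateway -n istio-system \
  --type merge -p '{"spec":{"loadBalancerIP":"10.0.0.100"}}'
```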

Check that the IP is assigned:

kubectl get service -n istio-system

Expose Kiali service

For reference you can use:

cat <<EOF | kubectl apply -f -
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: kiali-gateway
  namespace: istio-system
spec:
  selector:
    istio: ingressgateway
  servers:
  - port:
      number: 15029
      name: http-kiali
      protocol: HTTP
    hosts:
    - "*"
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: kiali-vs
  namespace: istio-system
spec:
  hosts:
  - "*"
  gateways:
  - kiali-gateway
  http:
  - match:
    - port: 15029
    route:
    - destination:
        host: kiali
        port:
          number: 20001
---
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: kiali
  namespace: istio-system
spec:
  host: kiali
  trafficPolicy:
    tls:
      mode: DISABLE
EOF
Connect to Kiali using the ingress gateway IP on port 15029 and log in with the credentials you created earlier.

Deploy the BookInfo application

kubectl apply -f <(istioctl kube-inject -f samples/bookinfo/platform/kube/bookinfo.yaml)

watch kubectl get pods

kubectl apply -f samples/bookinfo/networking/bookinfo-gateway.yaml
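To verify BookInfo is reachable through the gateway, you can fetch the product page. This follows the standard BookInfo verification; the http2 port name is an assumption based on the default ingress gateway service definition:

```shell
# Build the gateway URL from the ingress gateway service and fetch the product page
INGRESS_HOST=$(kubectl -n istio-system get service istio-ingressgateway \
  -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
INGRESS_PORT=$(kubectl -n istio-system get service istio-ingressgateway \
  -o jsonpath='{.spec.ports[?(@.name=="http2")].port}')
GATEWAY_URL=${INGRESS_HOST}:${INGRESS_PORT}
curl -s "http://${GATEWAY_URL}/productpage" | grep -o "<title>.*</title>"
```

Sending a few requests to this URL will make the traffic flows appear in the Kiali service graph.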

Issues with F5 BIG-IP load balancer in GKE-OP Anthos 1.x – K8s APIs not responding

When using the F5 BIG-IP load balancer with GKE On-Prem you might want to use an evaluation license. Keep in mind that this license has a restriction of 2 MBps bandwidth in total. GKE-OP, even with one user cluster, can cause saturation and slowness of K8s API responses. With multiple clusters and Istio installed the API can stop responding entirely. Note that F5 might not show that the bandwidth is saturated when you use the CLI tools.

Resolution: use a full license or request a 10 GBps evaluation license.

Problems creating pre-check VM in Anthos 1.2 GKE-OP

With Anthos 1.2 there is a new feature that creates a test VM to check connectivity before you deploy your GKE-OP clusters. It helps to avoid issues during the installation.

When installing your GKE On-Prem clusters using the following documentation: you perform the checks with the following command:

gkectl check-config --config [PATH_TO_CONFIG]

you will get an error as below:

  • Validation Category: F5 BIG-IP
    • [FAILURE] Admin Cluster VIP and NodeIP: Failed to create VM: failed to create VM (not retriable): failed to find VM template "gke-on-prem-osimage-1.14.7-gke.24-20191120-f71f9a709b" not found
    • [FAILURE] User Cluster VIP and NodeIP: Failed to create VM: failed to create VM (not retriable): failed to find VM template "gke-on-prem-osimage-1.14.7-gke.24-20191120-f71f9a709b" not found

Root cause: this is caused by the OS image template not being present on the datastore. The installation steps in the GCP docs are in the wrong sequence.

Solution: run

gkectl prepare --config [CONFIG_FILE] --validate-attestations

After that the VMs get created and the connectivity checks can be performed.