From April 2019

How to ssh to OKE (k8s) Private Node (worker compute node) via Jump Box (Bastion Server)

In OKE you typically create, for redundancy and high availability reasons, a k8s cluster spanning 5 or more subnets:

  • 2 are public; that is where the public load balancer is deployed, for example one in AD1 and the other in AD3
  • 3 or more are private; that is where the worker compute nodes are deployed, for example one subnet in AD1, another in AD2, another in AD3, and so on

If you need to reach one or more worker compute nodes for some reason, you can create a bastion server (jump box) with a public IP and then do the following.

First, open a tunnel that forwards local port 2222 to port 22 of the worker node through the jump box:

ssh -i privateKey -N -L localhost:2222:k8scomputenode:22 opc@jumpboxpublicip

Then, in another terminal, connect to the worker node through the tunnel:

ssh -i privateKey -p 2222 opc@localhost
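The two commands above can also be collapsed into one with OpenSSH's ProxyJump (-J) option. This sketch uses the same placeholder names as above and composes the command with echo so you can inspect it first (drop the echo to actually run it):

```shell
# Same placeholders as above; substitute your own key, jump box and node.
JUMP=opc@jumpboxpublicip
NODE=opc@k8scomputenode
# -J routes the connection through the jump box in a single command:
echo "ssh -i privateKey -J $JUMP $NODE"
```

This avoids keeping a separate tunnel process open, at the cost of requiring a reasonably recent OpenSSH client.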

Hope it helps! 🙂



Connecting to OCI DB System with SQLDeveloper via Bastion Box

Recipe for creating a secure connection between SQL Developer on our local machine and an Oracle Cloud Infrastructure DB System created in a private subnet of a Virtual Cloud Network that is not open to the internet.


  • Create a new DB System and grab the private IP of the database system node
  • Create a compute VM with a public IP exposed
  • Open an ssh tunnel this way:
ssh -i privatekeyfile -N -L localhost:1521:dbnodeprivateip:1521 opc@jumpboxpublicip
  • Grab the database connection details
  • Create a connection in SQL Developer
  • Test the connection
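With the tunnel in place, the SQL Developer connection simply points at localhost. A sketch of the resulting JDBC connect string, using a hypothetical service name (take the real one from your DB System connection details):

```shell
# The tunnel maps localhost:1521 to the DB node's private port 1521,
# so SQL Developer connects to localhost, not to the private IP.
HOST=localhost
PORT=1521
SERVICE=pdb1.sub.vcn.oraclevcn.com   # hypothetical; use your actual service name
echo "jdbc:oracle:thin:@//$HOST:$PORT/$SERVICE"
```

In the SQL Developer connection dialog these map to Hostname = localhost, Port = 1521, Service name = your service.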


Hope it helps! 🙂



Creating Route Rule for Oracle OCI VCN Remote Peering: InvalidParameter – routeRules[0].networkEntityId may not be null


When creating a route rule for remote peering between 2 Virtual Cloud Networks in different Oracle OCI regions using the web console, the UI does not provide a way to select the DRG:



Create the route rule with the CLI as follows:

oci network route-table create --compartment-id xxx --vcn-id yyy --route-rules '[{"cidrBlock":"","networkEntityId":"zzz"}]'

xxx is the OCID of the compartment in which you want to create the route rule
yyy is the OCID of the VCN in which you are creating the route rule for peering
zzz is the OCID of the DRG

oci network route-table create --compartment-id ocid1.compartment.oc1..aaaaaaaa3sz43qrfhsjmbibsrc6e7c2ftlt53gfnzifvlow2yoz7hk3ni2jq --vcn-id  --route-rules '[{"cidrBlock":"","networkEntityId":""}]'
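The --route-rules JSON is easy to get wrong under shell quoting; this sketch (hypothetical CIDR and DRG OCID) builds the payload in a variable and validates it locally before handing it to the CLI:

```shell
# Hypothetical values; substitute the remote VCN's CIDR and your DRG OCID.
REMOTE_CIDR="10.1.0.0/16"
DRG_OCID="ocid1.drg.oc1..exampleuniqueid"
RULES="[{\"cidrBlock\":\"$REMOTE_CIDR\",\"networkEntityId\":\"$DRG_OCID\"}]"
# Fail fast if the JSON is malformed before invoking the CLI:
python3 -c 'import json,sys; json.loads(sys.argv[1])' "$RULES" && echo "route rules OK"
# Then: oci network route-table create --compartment-id ... --vcn-id ... --route-rules "$RULES"
```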

Hope it helps 🙂

Shared Disk for your Pods: PersistentVolumes for Oracle Kubernetes Engine (OKE) Implemented as NFS File Storage in Oracle Cloud Infrastructure (OCI)

When you deploy in k8s a pod that depends on a persistent volume attached to block storage (for example this post), the volume created is mounted on a specific node. If that node fails or is stopped, the pods running on it cannot be recreated on another node according to their replication policies, because the other nodes do not have the disk mounted.

Oracle Cloud Infrastructure (OCI) File Systems are shared storage that you can easily expose/attach to your pods for those use cases where shared persistent data is needed.

Of course, you could still mount the disk on every node, but that is not a good approach, because there is a better way to achieve it.

So, let’s get started

Go to the OCI dashboard, create a new File System and a new Export called /myexport. Click on the mount target link and take note of the File System IP address.



Download and deploy the following yaml; it creates:

  • a persistentVolume
  • a persistentVolumeClaim
  • a deployment with 3 replicas of a container image with nginx in it
  • a service with public IP and LoadBalancer with round-robin policy
kubectl apply -f k8spod2nfs.yaml
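For reference, a minimal sketch of what the PersistentVolume and PersistentVolumeClaim in k8spod2nfs.yaml might look like. The names, size and mount target IP (10.0.0.5) are illustrative; the export path is the /myexport created above:

```shell
# Write a sketch of the PV/PVC manifest (hypothetical names, IP and size):
cat > pv-sketch.yaml <<'EOF'
apiVersion: v1
kind: PersistentVolume
metadata:
  name: k8spod2nfs-pv
spec:
  capacity:
    storage: 50Gi
  accessModes:
    - ReadWriteMany          # shared read-write across nodes, unlike block storage
  nfs:
    server: 10.0.0.5         # your File System mount target IP
    path: /myexport
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: k8spod2nfs-pvc
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: ""       # bind to the pre-created PV, no dynamic provisioning
  resources:
    requests:
      storage: 50Gi
EOF
grep -q 'path: /myexport' pv-sketch.yaml && echo "manifest sketch written"
```

The deployment then mounts the claim at /usr/share/nginx/html, which is why every replica sees the same index.html.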

Get a list of the pods and “ssh” to one of them

kubectl get pods
k8spod2nfs-xxx 1/1 Running 0 99s
k8spod2nfs-yyy 1/1 Running 0 99s
k8spod2nfs-zzz 1/1 Running 0 99s
kubectl exec -it k8spod2nfs-xxx bash

Go to the /usr/share/nginx/html directory and create or edit a file called index.html:

cd /usr/share/nginx/html/

echo hola > index.html

You can also “ssh” to another pod and verify that you see the same file.

Now get the list of services and grab the public IP of the k8spod2nfs service

kubectl get services
k8spod2nfs LoadBalancer x.y.z.t 80:31014/TCP 10m
kubernetes ClusterIP <none> 443/TCP 7d4h

Go to http://yourserviceip in your browser:


Change the content of the index.html file:

echo adios >> index.html

Go to http://yourserviceip in your browser again, the data changes


Delete all pods of the k8spod2nfs deployment and wait until at least one of them is recreated:

kubectl delete pod -l app=k8spod2nfs

Go to http://yourserviceip in your browser again, the data is still in there!

Delete the deployment and create it again:

kubectl delete -f k8spod2nfs.yaml

kubectl apply -f k8spod2nfs.yaml

Wait until at least one of the pods is ready and go to http://yourserviceip in your browser again, the data is still in there!

As you can see, the data is shared across all pods and is persistent! (unless you delete it or destroy the OCI File System)

Hope it helps! 🙂




An exception occurred during task execution. To see the full traceback, use -vvv. The error was: ImportError: No module named idna | Ansible for Oracle Cloud Infrastructure (OCI)

This happened to me when testing Oracle Ansible for OCI. The fix is to install the missing Python module:

pip install idna

Hope it helps!


Using Ansible in Oracle Cloud Infrastructure OCI

Today a short recipe for Ansible in Oracle Cloud (OCI)

Install it:

brew update
brew install openssl
brew upgrade openssl
brew install python
pip install virtualenv
virtualenv oci_sdk_env
source oci_sdk_env/bin/activate
pip install oci
pip install oci==2.1.3
pip install --upgrade pip
pip install ansible
git clone https://github.com/oracle/oci-ansible-modules.git
cd oci-ansible-modules
pip install idna

Prepare it:

Create a test.yml file and put in your own cloud account values (namespace and compartment OCID) as follows:

- name: List summary of existing buckets in OCI object storage
  connection: local
  hosts: localhost
  tasks:
    - name: List bucket facts
      oci_bucket_facts:
        namespace_name: 'mxlxxhxtxlsxntxrnxtxxnxl'
        compartment_id: 'ocid1.compartment.oc1..aaaaaaaa3sz43qrfhsjmbibsrc6e7c2ftlt53gfnzifvlow2yoz7hk3ni2jq'
      register: result
    - name: Dump result
      debug:
        msg: '{{result}}'

Test it:

ansible-playbook test.yml
(oci_sdk_env) MacBook-Pro:oci-ansible-modules javiermugueta$ ansible-playbook test.yml 
 [WARNING]: Unable to parse /etc/ansible/hosts as an inventory source
 [WARNING]: No inventory was parsed, only implicit localhost is available
 [WARNING]: provided hosts list is empty, only localhost is available. Note that the implicit localhost does not match 'all'
PLAY [List summary of existing buckets in OCI object storage] ******
TASK [Gathering Facts] *********************************************
ok: [localhost]
TASK [List bucket facts] *******************************************
ok: [localhost]
TASK [Dump result] *************************************************
ok: [localhost] => {
    "msg": {
        "buckets": [
            {
                "compartment_id": "ocid1.compartment.oc1..aaaaaaaa3sz43qrfhsjmbibsrc6e7c2ftlt53gfnzifvlow2yoz7hk3ni2jq",
                "created_by": "ocid1.saml2idp.oc1..aaaaaaaavw5k65pd5empwbumrd7q6puushecinoot5whdnuvjzgf2x2cjy7q/",
                "defined_tags": null,
                "etag": "b8376915-101d-4edc-9f16-c7c5f88b50c2",
                "freeform_tags": null,
                "name": "xxx",
                "namespace": "xxx",
                "time_created": "2019-04-05T22:05:18.974000+00:00"
            },
            {
                "compartment_id": "ocid1.compartment.oc1..aaaaaaaa3sz43qrfhsjmbibsrc6e7c2ftlt53gfnzifvlow2yoz7hk3ni2jq",
                "created_by": "ocid1.saml2idp.oc1..aaaaaaaavw5k65pd5empwbumrd7q6puushecinoot5whdnuvjzgf2x2cjy7q/",
                "defined_tags": null,
                "etag": "cf309d1c-eb85-4247-90e6-222a87933a90",
                "freeform_tags": null,
                "name": "prueba",
                "namespace": "xxx",
                "time_created": "2019-04-08T20:37:03.937000+00:00"
            }
        ],
        "changed": false,
        "failed": false
    }
}
PLAY RECAP *********************************************************
localhost                  : ok=3    changed=0    unreachable=0    failed=0

Hope it helps! 🙂


BucketNotEmpty – Bucket named ‘xxxx’ is not empty. Delete all objects first


Oracle OCI object storage buckets can’t be deleted from the OCI dashboard unless they are empty… and no “empty bucket” menu option exists at all (at least at the time of this post).

Anyway, you can do it using the CLI… Follow these steps:

If you don’t have the OCI CLI installed, follow this post.

oci os object bulk-delete -ns <namespacename> -bn <bucketname>
oci os bucket delete --bucket-name <bucketname>
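Expanded with concrete (hypothetical) values, the two commands look like this sketch; they are composed with echo so you can review them before running (both delete commands prompt for confirmation unless you add --force):

```shell
# Hypothetical namespace and bucket name; substitute your own values.
NS=mytenancynamespace
BUCKET=mybucket
echo "oci os object bulk-delete -ns $NS -bn $BUCKET --force"
echo "oci os bucket delete --bucket-name $BUCKET --force"
```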

Hope it helps! 😉