Troubleshooting Kubernetes to GCP Mongo Atlas connectivity – so where do you start?

**Update – 03/09/2018**

Since I wrote the original post, we have seen further connectivity issues getting Istio working with MongoDB replica sets. The error looks something like this:

pymongo.errors.ServerSelectionTimeoutError: No replica set members match selector "Primary()"

This has been raised as an issue with Istio 1.0.1 – https://github.com/istio/istio/issues/8410

We believe that Istio is not correctly routing traffic when it comes to replica set hosts, and is using the host list in the service entry for load balancing rather than simply as a list of nodes that can be talked to externally. This is a big problem for a clustering-based data store such as MongoDB, where connection flow is incredibly important. This is a working theory rather than something we have been able to confirm.

As a result of this issue we are no longer using the outbound port whitelisting feature in Istio; it turned out this feature had been the cause of a few unrelated issues in our environment. Although the post below is still valid in terms of troubleshooting possible connectivity issues, we are no longer using Service Entries for outbound traffic interception.

************************************

Original Post

In the last few weeks, we have started to prove out running a latency-sensitive application in a k8s container against a database running on MongoDB Atlas, which is also in GCP.

Scenario

We currently have a service running in Kubernetes in europe-west4, but the metrics show that it runs around 5x slower on average when it connects to an on-premise MongoDB database running 3.2.

We wanted to get the database closer to the application, and since GCP became generally available as a region in MongoDB Atlas, this became a lot easier to try out. MongoDB kindly gave us some Atlas credits, and within 10-15 minutes I had the following running in Atlas:

  • 3 node replica set running v3.6 with WiredTiger
    • 2 nodes in Europe-West4 (Netherlands)
    • 1 node in Europe-West1 (Belgium)
  • M30 Production Cluster
    • 7.5GB RAM
    • 2 vCPUs
    • Storage
      • 100GB Storage
      • 6000 IOPS
      • Auto Expand Storage
      • Encrypted Disk

“Only Available in AWS”

Get used to hearing and seeing this both in the documentation and when discussing with Support or Account Managers. As expected, not all features are available in GCP yet; the main ones frustrating us (as of August 2018) are VPC Peering and Encryption at Rest.

VPC Peering

Atlas supports VPC peering with other AWS VPCs in the same region. MongoDB does not support cross-region VPC peering. For multi-region clusters, you must create VPC peering connections per-region.

Depending on the volume we push to and from our databases, the lack of support for VPC Peering may be cost prohibitive, as you end up paying egress costs twice: once as data leaves your K8s application project on writes, and again as it leaves the GCP Mongo Atlas project when data is retrieved. MongoDB generally give a cost estimate of 7-10% of the cluster cost for egress.
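
As a rough, back-of-the-envelope illustration of the double-egress billing, here is a sketch in Python. The transfer volumes and price per GB are made-up numbers, not our actual figures or GCP's published pricing.

# Hypothetical illustration of paying egress in both directions without VPC peering.
# All numbers below are made up for the example - check current GCP pricing.
writes_gb_per_month = 200    # data sent from the K8s application project to Atlas
reads_gb_per_month = 300     # data returned from Atlas to the application
egress_price_per_gb = 0.12   # assumed $/GB egress rate

# Writes are billed as egress leaving the K8s project,
# reads are billed as egress leaving the Atlas project.
total = (writes_gb_per_month + reads_gb_per_month) * egress_price_per_gb
print("Estimated monthly egress: $%.2f" % total)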

Encryption at Rest

The following restrictions apply to Encryption at Rest on an Atlas cluster:

  • Continuous Backups are not supported. When enabling backup for a cluster using Encryption at Rest, you must use Cloud Provider Snapshots from AWS or Azure to encrypt your backup snapshots.
  • You cannot enable Encryption at Rest for clusters running on GCP.

So, currently we cannot take advantage of this option because we would eventually want to run backups via Atlas.

Administrators who deploy clusters on GCP and want to enable backup should keep those clusters in a separate project from deployments that use Encryption at Rest or Cloud Provider Snapshots.

Problem

We changed the URI connection string for our test application running in k8s to point towards the MongoDB Atlas cluster. Something like this:

uri = mongodb://node-shard-00-00.gcp.mongodb.net:27017,node-shard-00-01.gcp.mongodb.net:27017,node-shard-00-02.gcp.mongodb.net:27017/myDB?ssl=true&replicaSet=node-shard-0&authSource=admin&compressors=snappy

However, the application was failing its readiness check (healthcheck on pod startup could not connect to the mongo database).
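
For context, here is a minimal sketch of the kind of connectivity check the readiness probe performs. The URI, credentials and timeout are placeholders rather than the application's actual probe code.

# Minimal sketch of a readiness-style Mongo connectivity check.
# URI, credentials and timeout are placeholders, not the real probe.
from pymongo import MongoClient
from pymongo.errors import PyMongoError

uri = "mongodb://myuser:mypassword@node-shard-00-00.gcp.mongodb.net:27017,node-shard-00-01.gcp.mongodb.net:27017,node-shard-00-02.gcp.mongodb.net:27017/myDB?ssl=true&replicaSet=node-shard-0&authSource=admin"

def mongo_ready(uri, timeout_ms=5000):
    try:
        client = MongoClient(uri, serverSelectionTimeoutMS=timeout_ms)
        client.admin.command("ping")  # forces server selection and a round trip
        return True
    except PyMongoError:
        return False

print(mongo_ready(uri))  # returns False when the pod cannot reach the cluster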

Troubleshooting Avenues

Kubernetes is not easy to diagnose issues in – especially for novices like me!

Using kubectl logs on the pod was not a great help in this instance, but that might be down to application configuration and what it sends to stdout and stderr (which is the basis of kubectl logs).

This doesn't tell me much in terms of error logging:

kubectl logs my-k8s-service-5f9b978fc8-d6ddc -n my-k8s-service -c master

Using config environment: PREPROD
[24/Aug/2018:13:29:17] ENGINE Bus STARTING
[24/Aug/2018:13:29:17] ENGINE Started monitor thread 'Autoreloader'.
[24/Aug/2018:13:29:17] ENGINE Started monitor thread '_TimeoutMonitor'.
[24/Aug/2018:13:29:17] ENGINE Serving on http://0.0.0.0:8080
[24/Aug/2018:13:29:17] ENGINE Bus STARTED

Replicate the connectivity issue locally on the pod

We knew our readiness check was failing on connectivity, so running an interactive bash terminal inside the pod and trying to make a connection to the Atlas replica set without the application was a fair shout.

kubectl exec -it my-k8s-service-5f9b978fc8-d6ddc -c master bash -n my-k8s-service

Then I generated a manual mongo URI connection using python (the application runs in python) and tried to retrieve the database collections.

# start a Python shell first (the image provides Python 3.5 via Software Collections)
/usr/bin/scl enable rh-python35 python

# then connect using the uri and list the collections
from pymongo import MongoClient
uri = "mongodb://myuser:mypassword@node-shard-00-00.gcp.mongodb.net:27017,node-shard-00-01.gcp.mongodb.net:27017,node-shard-00-02.gcp.mongodb.net:27017/myDB?ssl=true&replicaSet=node-shard-0&authSource=admin&retryWrites=true&compressors=snappy"
client = MongoClient(uri)
db = client.myDB
collection = db.myCollection
db.collection_names(include_system_collections=False)

and this is what happened:

Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/home/atcloud/.local/lib/python3.5/site-packages/pymongo/database.py", line 715, in collection_names
    nameOnly=True, **kws)]
  File "/home/atcloud/.local/lib/python3.5/site-packages/pymongo/database.py", line 674, in list_collections
    read_pref) as (sock_info, slave_okay):
  File "/opt/rh/rh-python35/root/usr/lib64/python3.5/contextlib.py", line 59, in __enter__
    return next(self.gen)
  File "/home/.local/lib/python3.5/site-packages/pymongo/mongo_client.py", line 1099, in _socket_for_reads
    server = topology.select_server(read_preference)
  File "/home/.local/lib/python3.5/site-packages/pymongo/topology.py", line 224, in select_server
    address))
  File "/home/.local/lib/python3.5/site-packages/pymongo/topology.py", line 183, in select_servers
    selector, server_timeout, address)
  File "/home/.local/lib/python3.5/site-packages/pymongo/topology.py", line 199, in _select_servers_loop
    self._error_message(selector))
pymongo.errors.ServerSelectionTimeoutError: SSL handshake failed: node-shard-00-01.gcp.mongodb.net:27017: EOF occurred in violation of protocol (_ssl.c:645),SSL handshake failed: node-shard-00-02.gcp.mongodb.net:27017: EOF occurred in violation of protocol (_ssl.c:645),SSL handshake failed: node-shard-00-00.gcp.mongodb.net:27017: EOF occurred in violation of protocol (_ssl.c:645)

Failing without the application – that's a good sign, as it now allows us to do some quick fail-fast testing with the URI connection code above!
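
To make that fail-fast testing even quicker, a sketch like the one below (reusing the placeholder hostnames and credentials from above) probes each Atlas node individually with a short server selection timeout, so a broken node shows up in a few seconds instead of after pymongo's default 30 second wait.

# Probe each Atlas node individually with a short timeout.
# Hostnames and credentials are the same placeholders used earlier in the post.
from pymongo import MongoClient
from pymongo.errors import PyMongoError

hosts = [
    "node-shard-00-00.gcp.mongodb.net:27017",
    "node-shard-00-01.gcp.mongodb.net:27017",
    "node-shard-00-02.gcp.mongodb.net:27017",
]

for host in hosts:
    uri = "mongodb://myuser:mypassword@%s/?ssl=true&authSource=admin" % host
    try:
        client = MongoClient(uri, serverSelectionTimeoutMS=3000)
        client.admin.command("ping")  # fails fast if TLS, auth or routing is broken
        print(host, "OK")
    except PyMongoError as exc:
        print(host, "FAILED:", exc)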

Let's copy the pod and start it in a testing environment – to start testing changes more aggressively

Create a mytestpod.yaml file with a basic pod configuration with the same base image as the application:

apiVersion: v1
kind: Pod
metadata:
  annotations:
    sidecar.istio.io/inject: "true"
  name: dbamohsin-istio-test-pod
spec:
  containers:
  - args:
    - "10000"
    image: eu.gcr.io/somewhere/my-k8s-service
    imagePullPolicy: Always
    name: master
    command: ["/bin/sleep"]

and create it in kubernetes:

kubectl apply -f mytestpod.yaml

Then go into an interactive terminal with bash:

kubectl exec -it dbamohsin-istio-test-pod bash

and run the same mongo URI connection as before. Doing this gave the same python connectivity error shown earlier in the post.

Again, this is a good sign as now we have moved our issue to a less strict environment where we can test changes better.

Can we test the same thing without the Istio sidecar… sure!

Delete the pod:

# There are 2 ways of doing this:
kubectl delete -f mytestpod.yaml
kubectl delete pod dbamohsin-istio-test-pod

Modify mytestpod.yaml and comment out the Istio sidecar annotation:

...
metadata:
  annotations:
#    sidecar.istio.io/inject: "true"
...

Then re-create the pod and go into an interactive terminal with bash.

Retry the mongo URI… and it worked!

Python 3.5.1 (default, Oct 21 2016, 21:37:19) [GCC 4.8.5 20150623 (Red Hat 4.8.5-4)] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> from pymongo import MongoClient
>>> uri = "mongodb://myuser:mypassword@node-shard-00-00.gcp.mongodb.net:27017,node-shard-00-01.gcp.mongodb.net:27017,node-shard-00-02.gcp.mongodb.net:27017/myDB?ssl=true&replicaSet=node-shard-0&authSource=admin&retryWrites=true"
>>> client = MongoClient(uri)
>>> db = client.myDB
>>> collection = db.myCollection
>>> db.collection_names(include_system_collections=False)
['col1', 'col2', 'col3', 'col4']

So this narrowed it down to something we potentially hadn't configured in Istio.

Connectivity from GCP K8s to GCP Mongo

We use Istio for service discovery/unifying traffic flow management (Auto Trader Case Study on Computer World UK). I didn't know beforehand, but we restrict egress traffic out of our GCP projects, and as a result we needed a service entry in Kubernetes to allow 27017 outbound. Some MESH_EXTERNAL services documentation – https://istio.io/blog/2018/egress-tcp/

The good thing is that there is a specific mongo protocol available in Istio. We added a service entry to our testing environment as shown below:

apiVersion: networking.istio.io/v1alpha3
kind: ServiceEntry
metadata:
  name: egress-mongo-all
spec:
  hosts:
  - node-shard-00-00.gcp.mongodb.net
  - node-shard-00-01.gcp.mongodb.net
  - node-shard-00-02.gcp.mongodb.net
  ports:
  - number: 27017
    name: mongo
    protocol: MONGO
  resolution: DNS

and deployed:

kubectl apply -f egress-mongo-all.yaml

Changes from service entries are applied to pods immediately.

Check it's deployed:

kubectl get serviceentry -n core-namespace

and double check the config:

kubectl get serviceentry egress-mongo-all -n core-namespace -o yaml

All good.

Re-enable Istio and retry

We re-added the istio sidecar annotation into our mytestpod.yaml file and redeployed. This time, even with Istio, the mongo connection worked fine and retrieved the data.

Other connectivity areas to check

Connectivity into GCP Mongo…Whitelists!

Defining a whitelist can be very tricky, as providing a Google IP range for our K8s applications is more or less impossible – or would cover most of the internet! The reason for this is that only the DNS is static for K8s services, so the IP for an application pod can change multiple times a day, meaning it's more or less impossible to assign a strict IP whitelist bound to applications unless an automated check is implemented.

We have several different viewpoints on this internally, ranging from "we should have a structured IP whitelist" to "why do we even need a whitelist? Our authentication and encryption should be at a level where we can run securely with a 0.0.0.0/0 IP whitelist".

Don't forget client connectivity…

Bear in mind that you may have an on-premise firewall between your laptop and MongoDB Atlas, and may need your network administrator to enable outbound access on 27017 to your Atlas nodes.
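
A quick way to prove whether outbound 27017 is open from your machine, before involving the network team, is a plain TCP and TLS probe. A minimal sketch, with a placeholder Atlas hostname:

# Quick outbound connectivity and TLS check to an Atlas node on 27017.
# The hostname is a placeholder - substitute one of your own cluster's nodes.
import socket
import ssl

host, port = "node-shard-00-00.gcp.mongodb.net", 27017

try:
    with socket.create_connection((host, port), timeout=5) as sock:
        # Atlas requires TLS, so complete a handshake as well as the TCP connect.
        context = ssl.create_default_context()
        with context.wrap_socket(sock, server_hostname=host) as tls:
            print("Connected, negotiated", tls.version())
except OSError as exc:
    print("Connection failed:", exc)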

Takeaway points

  • Isolate the issue quickly
    • We took the application out of the equation first
    • Then proved that it wasn't that specific Google project by moving environments
    • and finally isolated a networking sidecar (Istio)
  • Find an easy way to reproduce the issue
    • We created a simple bit of code to quickly test connectivity using the same language as the application (Python)
    • Created a dummy pod that we could quickly trash, recreate and test against, BUT ensured the fundamentals of the application remained (we used the same image as the application for the dummy pod)
  • Have an environment where you can potentially break stuff. An infrastructure testing area of some kind.

Count Distinct Values via aggregation framework

Q: Is it possible to count distinct values of a field in mongodb?

A: Yes! This can be done via the aggregation framework in mongo. It takes two $group stages: the first groups by all the distinct values, and the second counts them.

pipeline = [
    { $group: { _id: "$myNonUniqueFieldId" } },  // one document per distinct value
    { $group: { _id: 1, count: { $sum: 1 } } }   // count those documents
];

db.runCommand(
    {
    "aggregate": "collection",
    "pipeline": pipeline,
    "cursor": {}  // required by the aggregate command from MongoDB 3.6 onwards
    }
);
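
If it is easier to run this from application code, a rough pymongo equivalent of the same pipeline is sketched below. The connection string, database, collection and field names are placeholders.

# Count distinct values of a field via the aggregation framework.
# Connection string, database, collection and field names are placeholders.
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")
db = client.myDB

pipeline = [
    {"$group": {"_id": "$myNonUniqueFieldId"}},    # one document per distinct value
    {"$group": {"_id": 1, "count": {"$sum": 1}}},  # count those documents
]

print(list(db.myCollection.aggregate(pipeline)))   # e.g. [{'_id': 1, 'count': 42}]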

Passed my M202: MongoDB Advanced Deployment and Operations Course!

The course is a 7-week online programme followed by a final exam. It comes highly recommended – https://university.mongodb.com/courses/10gen/M202/2014_September/about


error: { "$err" : "not master and slaveOk=false", "code" : 13435 } at src/mongo/shell/query.js:128

When attempting to read data on a secondary, the following error is received:

error: { "$err" : "not master and slaveOk=false", "code" : 13435 } at src/mongo/shell/query.js:128

This happens because you are connected to a secondary and slaveOk is set to false. When connecting via a shell, use the following to enable reads:

rs.slaveOk()

The one caveat with this is that on a system with many secondaries, there can sometimes be replication lag between the master and the secondaries, which may result in stale data. slaveOk works on the proviso that the user is happy reading data based on eventual consistency.

When connecting via an application, reading from a secondary is achieved via a "read preference".
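
For example, with pymongo the read preference can be set either in the connection string (readPreference=secondaryPreferred) or in code. A minimal sketch, with placeholder hosts, database and collection names:

# Reading from a secondary by setting a read preference.
# Hosts, replica set name, database and collection names are placeholders.
from pymongo import MongoClient, ReadPreference

client = MongoClient("mongodb://host1:27017,host2:27017,host3:27017/?replicaSet=rs0")

# Ask for secondary reads on this collection (falls back to the primary if none available)
coll = client.myDB.get_collection(
    "myCollection", read_preference=ReadPreference.SECONDARY_PREFERRED)

print(coll.find_one())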

error: { "$err" : "not master or secondary; cannot currently read from this replSet member", "code" : 13436 }

When attempting to connect from an application server to a standalone mongo server that is not part of a replica set, the following error is received:

error: { "$err" : "not master or secondary; cannot currently read from this replSet member", "code" : 13436 }

On investigation it was found that /etc/mongo.conf had the following parameter set:

replSet        = repqamongo

which means that the mongod process was started as an uninitialised replica set member.

To resolve, either remove the flag and restart the mongod process, or run rs.initiate() to turn it into a single-node replica set (a bit pointless though).