Adding a new project into kubectl/kubectx cli

I use two main command line tools when it comes to Kubernetes – kubectx and kubectl.

Kubectx is a great little utility for switching between Kubernetes clusters, and kubectl is the official utility to manage and deploy Kubernetes applications.

Adding a context into kubectx/kubectl

Go to the Google Cloud console, navigate to Kubernetes Engine, and click the Connect button on the cluster. This provides the gcloud command you need to run on your machine to configure access to the cluster locally.
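For reference, the generated command looks roughly like the sketch below – the zone and project here are placeholders for whatever your own cluster uses:

# fetch credentials for the cluster and merge them into ~/.kube/config
gcloud container clusters get-credentials gke-dbemohsin-cluster-sandbox \
    --zone europe-west4-a \
    --project my-sandbox-project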

As an example, let's say we have just added gke-dbemohsin-cluster-sandbox:

MYMAC:cloud-architecture dbamohsin$ kubectx
DEV
PREPROD
PROD
TESTING
gke-dbemohsin-cluster-sandbox

To rename a context in kubectx:

kubectx SANDBOX=gke-dbemohsin-cluster-sandbox
Context "gke-dbemohsin-cluster-sandbox" renamed to "SANDBOX".

To add a wrapper alias around context switching, especially if some contexts run under different gcloud accounts, you can add the commands to your .bash_profile:

alias k8-sandbox='gcloud config configurations activate dbemohsin; kubectx SANDBOX'

and to call the alias:

MYMAC:~ dbamohsin$ k8-sandbox
Switched to context "SANDBOX".

 


Troubleshooting Kubernetes to GCP Mongo Atlas connectivity – so where do you start?

Update – 03/09/2018

Since I wrote the original post, we have seen further connectivity issues getting Istio working with MongoDB replica sets. The error looks something like this:

pymongo.errors.ServerSelectionTimeoutError: No replica set members match selector "Primary()"

This has been raised as an issue with Istio 1.0.1 – https://github.com/istio/istio/issues/8410

We believe that Istio is not correctly routing traffic when it comes to replica set hosts, and is using the host list in the service entry for load balancing rather than simply as a list of nodes that can be reached externally. This is a big problem for a clustering-based data store such as MongoDB, where connection flow is incredibly important. This is a working hypothesis rather than something that has been conclusively proven.

As a result of this issue we are no longer using the outbound port whitelisting feature in Istio, as it turned out this feature had been the cause of a few unrelated issues in our environment. Although the post below is still valid in terms of troubleshooting possible connectivity issues, we are no longer using Service Entries for outbound traffic interception.

************************************

Original Post

In the last few weeks, we have started to prove out running a latency-sensitive application in a k8s container against a database running on MongoDB Atlas, which is also in GCP.

Scenario

We currently have a service running in Kubernetes in europe-west4, but the metrics show that it runs around 5x slower on average when it connects to an on-premises MongoDB database running 3.2.

We wanted to get the database closer to the application, and since GCP became generally available as a region in MongoDB Atlas, this became a lot easier to try out. MongoDB kindly gave us some Atlas credits to try this out, and within 10-15 minutes I had the following running in Mongo Atlas:

  • 3 node replica set running v3.6 with WiredTiger
    • 2 nodes in Europe-West4 (Netherlands)
    • 1 node in Europe-West1 (Belgium)
  • M30 Production Cluster
    • 7.5GB RAM
    • 2 vCPUs
    • Storage
      • 100GB Storage
      • 6000 IOPS
      • Auto Expand Storage
      • Encrypted Disk

“Only Available in AWS”

Get used to hearing and seeing this both in the documentation and when discussing with Support or Account Managers. As expected, not all features are available in GCP yet; the main ones frustrating us (as of August 2018) are VPC Peering and Encryption at Rest.

VPC Peering

Atlas supports VPC peering with other AWS VPCs in the same region. MongoDB does not support cross-region VPC peering. For multi-region clusters, you must create VPC peering connections per-region.

Depending on the volume we push to and from our databases, the lack of support for VPC peering may be cost prohibitive, as you end up paying egress costs twice: once as data leaves your K8s application project on the way in, and again as it leaves GCP Mongo Atlas on retrieval. Mongo generally gives a cost estimate of 7-10% of cluster size for egress costs.

Encryption at Rest

The following restrictions apply to Encryption at Rest on an Atlas cluster:

  • Continuous Backups are not supported. When enabling backup for a cluster using Encryption at Rest, you must use Cloud Provider Snapshots from AWS or Azure to encrypt your backup snapshots.
  • You cannot enable Encryption at Rest for clusters running on GCP.

So, currently we cannot take advantage of this option because we would eventually want to run backups via Atlas.

Administrators who deploy clusters on GCP and want to enable backup should keep those clusters in a separate project from deployments that use Encryption at Rest or Cloud Provider Snapshots.

Problem

We changed the uri connection string for our test application running in k8s to point towards the mongo atlas cluster. Something like this:

uri = mongodb://node-shard-00-00.gcp.mongodb.net:27017,node-shard-00-01.gcp.mongodb.net:27017,node-shard-00-02.gcp.mongodb.net:27017/myDB?ssl=true&replicaSet=node-shard-0&authSource=admin&compressors=snappy

However, the application was failing its readiness check (healthcheck on pod startup could not connect to the mongo database).
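Before anything else, the readiness probe failures themselves are visible in the pod's events. A minimal sketch, reusing the pod and namespace names from this environment:

# describe the pod; the Events section at the bottom records the readiness probe failures
kubectl describe pod my-k8s-service-5f9b978fc8-d6ddc -n my-k8s-service

# or just pull the warning events for the namespace
kubectl get events -n my-k8s-service --field-selector type=Warning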

Troubleshooting Avenues

Kubernetes is not easy to diagnose issues in – especially for novices like me!

Using kubectl logs on the pod was not a great help in this instance, but that might be down to application configuration and what it sends to stdout and stderr (which are what kubectl logs reads).

This doesn't tell me much in terms of error logging:

kubectl logs my-k8s-service-5f9b978fc8-d6ddc -n my-k8s-service -c master
Using config environment: PREPROD
[24/Aug/2018:13:29:17] ENGINE Bus STARTING
[24/Aug/2018:13:29:17] ENGINE Started monitor thread 'Autoreloader'.
[24/Aug/2018:13:29:17] ENGINE Started monitor thread '_TimeoutMonitor'.
[24/Aug/2018:13:29:17] ENGINE Serving on http://0.0.0.0:8080
[24/Aug/2018:13:29:17] ENGINE Bus STARTED
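A couple of kubectl logs flags worth knowing while poking around here – they only change what is pulled from stdout/stderr, they won't conjure up logging the application never wrote:

# follow the log stream, limited to the last 100 lines
kubectl logs my-k8s-service-5f9b978fc8-d6ddc -n my-k8s-service -c master -f --tail=100

# logs from the previous container instance, useful if the pod is crash-looping
kubectl logs my-k8s-service-5f9b978fc8-d6ddc -n my-k8s-service -c master --previous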

Replicate the connectivity issue locally on the pod

We knew our readiness check was failing on connectivity, so running an interactive bash terminal inside the pod and trying to make a connection to the Atlas replica set without the application was a fair shout.

kubectl exec -it my-k8s-service-5f9b978fc8-d6ddc -c master bash -n my-k8s-service

Then I generated a manual mongo URI connection using Python (the application runs in Python) and tried to retrieve the database collections.

# start a Python 3.5 shell inside the pod
/usr/bin/scl enable rh-python35 python

# connect using the uri and list the collections
from pymongo import MongoClient
uri = "mongodb://myuser:mypassword@node-shard-00-00.gcp.mongodb.net:27017,node-shard-00-01.gcp.mongodb.net:27017,node-shard-00-02.gcp.mongodb.net:27017/myDB?ssl=true&replicaSet=node-shard-0&authSource=admin&retryWrites=true&compressors=snappy"
client = MongoClient(uri)
db = client.myDB
collection = db.myCollection
db.collection_names(include_system_collections=False)

and this is what happened:

Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/home/atcloud/.local/lib/python3.5/site-packages/pymongo/database.py", line 715, in collection_names
    nameOnly=True, **kws)]
  File "/home/atcloud/.local/lib/python3.5/site-packages/pymongo/database.py", line 674, in list_collections
    read_pref) as (sock_info, slave_okay):
  File "/opt/rh/rh-python35/root/usr/lib64/python3.5/contextlib.py", line 59, in __enter__
    return next(self.gen)
  File "/home/.local/lib/python3.5/site-packages/pymongo/mongo_client.py", line 1099, in _socket_for_reads
    server = topology.select_server(read_preference)
  File "/home/.local/lib/python3.5/site-packages/pymongo/topology.py", line 224, in select_server
    address))
  File "/home/.local/lib/python3.5/site-packages/pymongo/topology.py", line 183, in select_servers
    selector, server_timeout, address)
  File "/home/.local/lib/python3.5/site-packages/pymongo/topology.py", line 199, in _select_servers_loop
    self._error_message(selector))
pymongo.errors.ServerSelectionTimeoutError: SSL handshake failed: node-shard-00-01.gcp.mongodb.net:27017: EOF occurred in violation of protocol (_ssl.c:645),SSL handshake failed: node-shard-00-02.gcp.mongodb.net:27017: EOF occurred in violation of protocol (_ssl.c:645),SSL handshake failed: node-shard-00-00.gcp.mongodb.net:27017: EOF occurred in violation of protocol (_ssl.c:645)

Failing without the application – that's a good sign, as it now allows us to do some quick fail-fast testing with the URI connection code above!
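As another fail-fast check, you can take pymongo out of the picture entirely and test the raw TLS handshake from inside the pod – a minimal sketch, assuming openssl is available in the image:

# from inside the pod (kubectl exec ... bash), attempt a TLS handshake against an Atlas node;
# if this also fails, the problem sits at the network/proxy layer rather than in the driver
openssl s_client -connect node-shard-00-00.gcp.mongodb.net:27017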

Let's copy the pod and start it in a testing environment – to start testing changes more aggressively.

Create a mytestpod.yaml file with a basic pod configuration with the same base image as the application:

apiVersion: v1
kind: Pod
metadata:
  annotations:
    sidecar.istio.io/inject: "true"
  name: dbamohsin-istio-test-pod
spec:
  containers:
  - args:
    - "10000"
    image: eu.gcr.io/somewhere/my-k8s-service
    imagePullPolicy: Always
    name: master
    command: ["/bin/sleep"]

and create it in kubernetes:

kubectl apply -f mytestpod.yaml

Then go into an interactive terminal with bash

kubectl exec -it dbamohsin-istio-test-pod bash

and run the same mongo URI connection as before. Doing this gave the same Python connectivity error shown earlier in the post.

Again, this is a good sign as now we have moved our issue to a less strict environment where we can test changes better.

Can we test the same thing without the Istio sidecar…sure!

Delete the pod:

# There are 2 ways of doing this:
kubectl delete -f mytestpod.yaml
kubectl delete pod dbamohsin-istio-test-pod

Modify mytestpod.yaml and comment out the Istio sidecar annotation:

...
metadata:
  annotations:
#    sidecar.istio.io/inject: "true"
...

Then re-create the pod and go into an interactive terminal with bash.

Retry the mongo URI…and it worked!

Python 3.5.1 (default, Oct 21 2016, 21:37:19)
[GCC 4.8.5 20150623 (Red Hat 4.8.5-4)] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> from pymongo import MongoClient
>>> uri = "mongodb://valuations:red9898@node-shard-00-00.gcp.mongodb.net:27017,node-shard-00-01.gcp.mongodb.net:27017,node-shard-00-02.gcp.mongodb.net:27017/myDB?ssl=true&replicaSet=node-shard-0&authSource=admin&retryWrites=true"
>>> client = MongoClient(uri)
>>> db = client.myDB
>>> collection = db.myCollection
>>> db.collection_names(include_system_collections=False)
['col1', 'col2', 'col3', 'col4']

So this narrowed it down to something we potentially hadn't configured in Istio.

Connectivity from GCP K8s to GCP Mongo

We use Istio for service discovery and unifying traffic flow management (Auto Trader Case Study on Computer World UK). I didn't know beforehand, but we restrict egress traffic out of our GCP projects, and as a result we needed a service entry in Kubernetes to allow port 27017. Some MESH_EXTERNAL services documentation – https://istio.io/blog/2018/egress-tcp/

The good thing is that there is a specific mongo protocol available in Istio. We added a service entry to our testing environment as shown below:

apiVersion: networking.istio.io/v1alpha3
kind: ServiceEntry
metadata:
  name: egress-mongo-all
spec:
  hosts:
  - node-shard-00-00.gcp.mongodb.net
  - node-shard-00-01.gcp.mongodb.net
  - node-shard-00-02.gcp.mongodb.net
  ports:
  - number: 27017
    name: mongo
    protocol: MONGO
  resolution: DNS

and deployed:

kubectl apply -f egress-mongo-all.yaml

Changes from service entries are picked up by pods immediately.

Check it's deployed:

kubectl get serviceentry -n core-namespace

and double check the config:

kubectl get serviceentry egress-mongo-all -n core-namespace -o yaml

All good.

Re-enable istio and retry

We re-added the Istio sidecar annotation into our mytestpod.yaml file and redeployed. This time, even with Istio, the mongo connection worked fine and retrieved the data.
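To confirm the sidecar really was injected after redeploying, listing the containers in the pod is a quick sanity check – a sketch using jsonpath; the sidecar shows up as istio-proxy alongside the application container:

# list the container names in the test pod; expect something like "master istio-proxy"
kubectl get pod dbamohsin-istio-test-pod -o jsonpath='{.spec.containers[*].name}'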

Other connectivity areas to check

Connectivity into GCP Mongo…Whitelists!

Defining a whitelist can be very tricky, as providing a Google IP range for our K8s applications is more or less impossible – it would cover most of the internet! Only the DNS is static for K8s services, so the IP for an application pod can change multiple times a day, meaning it's very difficult to bind a tight IP whitelist to applications unless an automated check is implemented.
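One rough way to see which public IP your pods currently egress from (and therefore what you would actually have to whitelist) is to ask an external service from inside a pod – a sketch, assuming curl is present in the image and outbound HTTPS is permitted:

# print the public IP that this pod's outbound traffic appears to come from
kubectl exec -it dbamohsin-istio-test-pod -c master -- curl -s https://ifconfig.me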

We have several different viewpoints internally, ranging from "we should have a structured IP whitelist" to "why do we even need a whitelist? Our authentication and encryption should be strong enough to run securely with a 0.0.0.0/0 IP whitelist".

Don't forget client connectivity…

Bear in mind that you may have an on-premises firewall between your laptop and Mongo Atlas, and may need your network administrator to enable outbound access on 27017 to your Atlas nodes.
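A quick way to check this from your own machine before involving the network team – a sketch, assuming netcat is installed locally:

# test outbound TCP connectivity to an Atlas node on 27017 from the laptop
nc -zv node-shard-00-00.gcp.mongodb.net 27017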

Takeaway points

  • Isolate the issue quickly
    • We took the application out of the equation first
    • Then proved that it wasn't that specific Google project by moving environments
    • and finally isolated a networking sidecar (Istio)
  • Find an easy way to reproduce the issue
    • We created a simple bit of code to quickly test connectivity using the same language as the application (Python)
    • Created a dummy pod that we could quickly trash, recreate and test against, BUT ensured the fundamentals of the application remained (we used the same image as the application for the dummy pod)
  • Have an environment where you can potentially break stuff. An infrastructure testing area of some kind.

Setting up SQL Server vNext CTP 2.0 on Docker

Recently I moved into the brave new world of having a MacBook Pro as my main laptop, and one of my first frustrations was not being able to run SQL Server Management Studio locally. Unfortunately Microsoft haven't come up with a solution for that one, so I'm having to make do with a mix of SQLPro Studio and running SSMS inside a VDI machine on the Mac.

One thing the Mac has made easier for me is the ability to start testing SQL Server vNext locally without too much messing around. So when Microsoft released vNext CTP 2.0 on the 19th April 2017, it seemed like a good opportunity to give it a try. (What's new in SQL Server on Linux)

This post should cover what you need to do to get a Docker image of SQL Server running on macOS Sierra. The Microsoft documentation is pretty good on this subject and not convoluted, but I always find it good to blog exactly how I've done things, as there are always little quirks.

Install Docker for Mac

I downloaded the stable version just to be a bit safer and so that I wouldn’t introduce any unnecessary pinch points.

https://docs.docker.com/docker-for-mac/install/

Once installed, start the Docker application – it may take a few minutes to initialise, but eventually you will end up with the Docker ship icon in the top bar.

There are a few minimum requirements for running docker for the SQL Server image:

  • Docker Engine 1.8+ on any supported Linux distribution or Docker for Mac/Windows.
  • Minimum of 4 GB of disk space
  • Minimum of 4 GB of RAM

If you're downloading a recent release of Docker then the engine version shouldn't be an issue.

At the time of writing, my version is:

$ docker version
Client:
 Version: 17.03.1-ce
 API Version: 1.27
 Go Version: go1.7.5

Server:
 Version: 17.03.1-ce
 API Version: 1.27 (minimum version 1.12)
 OS/Arch: linux/amd64

Docker default starts with 2GB of RAM, but this can be easily changed to 4GB from the docker preferences in the advanced section.
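Once you've bumped it, you can confirm the allocation from the terminal – docker info reports the memory available to the Docker VM:

# check the memory allocated to Docker for Mac
docker info | grep -i "total memory"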

Pull down and run the latest SQL Server Docker image – step-by-step details here – https://docs.microsoft.com/en-gb/sql/linux/sql-server-linux-setup-docker

Quick Steps here:

sudo docker pull microsoft/mssql-server-linux

Note that I have added a --name switch to the docker command below. This is to simplify things once we have everything up and running, as the container name is quite critical for more or less every command in docker.

sudo docker run --name SQL2017 -e 'ACCEPT_EULA=Y' -e 'SA_PASSWORD=' -p 1433:1433 -d microsoft/mssql-server-linux

To make sure we have successfully created our container, we can run:

docker ps -a

To familiarise yourself with other docker commands, you can run:

docker --help

Connect to the SQL Server

sudo docker exec -it SQL2017 /bin/bash

The flags are for the following:

-i, --interactive    Keep STDIN open even if not attached
    --privileged     Give extended privileges to the command
-t, --tty            Allocate a pseudo-TTY

This then takes you into the interactive command line for the docker image.

By default, the database files and error logs live under /var/opt/mssql, and sqlcmd is in /opt/mssql-tools/bin/.
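For example, still inside the container, you can have a look around the default locations – the data subdirectory below is where the system database files normally sit (adjust if your build differs):

# list the default SQL Server directories inside the container
ls -l /var/opt/mssql
ls -l /var/opt/mssql/data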

Reading the error log

While in exec mode for the SQL2017 container, run the following cat command:

cat /var/opt/mssql/errorlog

There is a very good guide by Microsoft on how to troubleshoot SQL Server on Linux, which can be found here – https://docs.microsoft.com/en-us/sql/linux/sql-server-linux-troubleshooting-guide

Using sqlcmd

Starting with SQL Server vNext CTP 2.0, the SQL Server command-line tools are included in the Docker image. If you attach to the image with an interactive command-prompt (as I have done above), you can run the tools locally.

The first step is to add the sqlcmd path to the $PATH environment variable. This step isn't critical, but it makes sqlcmd accessible from any location on the docker image.

PATH=$PATH:/opt/mssql-tools/bin
echo 'export PATH="$PATH:/opt/mssql-tools/bin"' >> ~/.bash_profile

root@5214e1df3c86:/opt/mssql-tools/bin# echo $PATH
/opt/mssql-tools/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin

sqlcmd -S . -U sa -P
1> select name from sys.databases
2> go
name
-----------------------------------------------------------
master
tempdb
model
msdb

Installing SQLCMD on macOS

/usr/bin/ruby -e "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/master/install)"
brew tap microsoft/mssql-preview https://github.com/Microsoft/homebrew-mssql-preview
brew update
brew install mssql-tools
# for a silent install: ACCEPT_EULA=y brew install mssql-tools

 

Can be quickly tested locally by opening a terminal window and running:

sqlcmd -S my-remote-server -U test -P test
1> select @@SERVERNAME
2> go
-----------------------------------------------------------------
SQLSERVER/MYNAMEDINSTANCE

(1 rows affected)

Connecting into the docker image from host machine

Get the IP and port mapping of the container:

MCR-AL33450:/mohsin.alipatel$ docker inspect --format "{{ .NetworkSettings.Ports}}" SQL2017
map[1433/tcp:[{0.0.0.0 1433}]]
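If you only care about the published ports, docker port gives the same information in a shorter form:

# show the port mappings for the SQL2017 container
docker port SQL2017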

Then make a sqlcmd connection using the IP and port above

MCR-AL33450:/mohsin.alipatel$ sqlcmd -S 0.0.0.0,1433 -U sa -P
1> select @@servername
2> go
-----------------------------------------------------------------------------------------------
5212e1df3c86

(1 rows affected)

This should provide a basic introduction to both Docker and to SQL Server on Linux.