Setting up SOPS (Secrets OPerationS)

We use docker-compose to generate Terraform plans locally before committing changes and triggering pipelines. The code requires some encrypted variables, which can be decrypted locally using sops.

SOPS (Secrets OPerationS) is an editor of encrypted files that supports YAML, JSON, ENV, INI and BINARY formats and encrypts with AWS KMS, GCP KMS, Azure Key Vault and PGP.

GitHub – mozilla/sops: https://github.com/mozilla/sops

brew install sops

Before you can use sops, you need to authenticate so that Application Default Credentials are available.

If you are not authenticated, this is what happens when you try to source a sops file:

sops -d testing.config.sops

Failed to get the data key required to decrypt the SOPS file.

Group 0: FAILED
projects/project-name-here/locations/global/path/terraform: FAILED
– | Cannot create GCP KMS service: google: could not find
| default credentials. See
| https://developers.google.com/accounts/docs/application-default-credentials
| for more information.

Recovery failed because no master key was able to decrypt the file. In
order for SOPS to recover the file, at least one key has to be successful,
but none were.

To authorise a login:

gcloud auth application-default login

This then allows you to source the decrypted keys locally:

#Bash
eval $(sops -d testing.config.sops)

#zsh
source <(sops -d testing.config.sops)

Essentially, all you are doing with sops is setting up your local machine with the correct decrypted keys.

More about Application Default Credentials

https://cloud.google.com/sdk/gcloud/reference/auth/application-default/

Application Default Credentials (ADC) provide a method to get credentials used in calling Google APIs. The gcloud auth application-default command group allows you to manage active credentials on your machine that are used for local application development.

These credentials are only used by Google client libraries in your own application.
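
As an illustration of that last point, here is a minimal sketch of how a Google client library resolves ADC. It is purely illustrative and assumes the google-auth-library-java dependency, which is not part of the setup above:

import com.google.auth.oauth2.GoogleCredentials;

public class AdcCheck {
    public static void main(String[] args) throws Exception {
        // Resolution order: the GOOGLE_APPLICATION_CREDENTIALS environment variable,
        // then the file written by `gcloud auth application-default login`,
        // then the metadata server when running on GCP.
        GoogleCredentials credentials = GoogleCredentials.getApplicationDefault();
        System.out.println("Resolved ADC: " + credentials);
    }
}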

 


Efficiently retrieving Oracle CLOBs in high-latency environments

The Problem

We have recently come across an interesting scenario whereby a Java application living in a container in Kubernetes on GCP needs to talk back to an Oracle 12.1 database in an on-premises DC.

There are a few givens in these situations: the latency of the chatter between a cloud environment and an on-premises environment will be higher than between two services talking in the same DC, and, depending on how you look at it, that higher latency will highlight poor code, poor queries, or inefficiencies that may well have been hidden for years on premises thanks to the sub-2ms latency that developers and ops were lucky enough to have…

So, although there is short-term pain, there is definitely a lot to gain from spending the effort to make the application-to-database conversations more efficient.

Investigation

So how do you start investigating when an application developer complains that database queries which used to take 2-3 seconds are taking 70-80 seconds since the application moved to the cloud? Mainly by breaking the problem down and finding little nuggets (or red herrings!) to concentrate on.

We are lucky to have a multi-disciplined squad, so we were able to quickly troubleshoot from various angles, including the database, networking, and our Kubernetes architecture.

What metrics did we have?

Although we like to trust our developers, we also like to have data or logs to back up slowness claims. Our developer provided us with Kibana logs and Grafana dashboards which showed that in some cases a particularly heavy query was taking a significant amount of time (greater than 30 seconds and up to 80 seconds). The application had a timeout of 30 seconds, so in these cases it was user-impacting.

We also had the query that was causing the problem, and from the Kibana logs we could see that the issue was only happening on big datasets, or at least was far more pronounced on them.

Creating a safe testing environment

We knew we had a problem with the query in production, but I didn't want to affect production performance any further, so we copied the poorly performing dataset back into our QA database environment.

For example, if we had continued to use production to test the SELECT query, we might have inadvertently flushed hot data out of the buffer pool and replaced it with our test data.

Creating a repeatable test and finding patterns

Generally, when I am investigating a problem, my approach is scientific: I make sure I am running both a fair test and a controlled experiment. A fair test is important because you should only change one variable at a time, and a control is important so that you have one test that follows the normal, expected behaviour.

The query itself always returned 3727 rows with around 35MB of data from the database.

I embedded the query into a script and set some sqlplus options:

spool qadb21-local-5000.log
SET AUTOTRACE TRACEONLY
set ARRAYSIZE 5000
set TIMING on
set TERMOUT off
SET TRIMSPOOL ON
SET TRIMOUT ON
SET WRAP OFF
SELECT /*+ ALL_ROWS */ V.ID as V1,A.*, V.*,M.*
FROM A JOIN V ON A.ID = V.A_ID
LEFT JOIN M ON M.V_ID = V.ID
WHERE A.ID = '2dfdda64-e15c-4cdd-9d56-2db1d013c6a0'
ORDER BY V.DISPLAY_ORDER;
spool off

AUTOTRACE TRACEONLY – I didn't want to write any results to the spool, only the trace. The reason is that writing the result set to a file could itself add seconds to the query completion time.

ARRAYSIZE – to test the fetch size in sqlplus. I tested array sizes of 50, 125, 250, 1000, 2500 and 5000.

ALL_ROWS hint – just to force the same optimizer behaviour on every run.

I tested 4 scenarios in total:

  • Running the query directly on the Oracle DB [<1 second runtime]
  • Running the query from my laptop to the Oracle DB [25-30 seconds runtime]
  • Running the query from a container in our GCP testing project to the Oracle DB [1:50 – 2:00 minutes runtime]
  • Running the query from a container in our GCP PreProd project to the Oracle DB [1:50 – 2:00 minutes runtime]

By testing two different GCP projects against our database, we could see whether the problem was environment-specific or more general. The query was returning in similar durations in both our testing project and our preprod project, so we deduced that it was environment-independent – but we continued to double-check our networking config.

Below is an extract of the kind of output I used to make sure the test stayed fair. I only took results if the recursive calls were 0 – i.e. no hard parses.

Statistics
----------------------------------------------------------
0 recursive calls
0 db block gets
32406 consistent gets
28457 physical reads
0 redo size
3065681 bytes sent via SQL*Net to client
522366 bytes received via SQL*Net from client
6008 SQL*Net roundtrips to/from client
1 sorts (memory)
0 sorts (disk)
3727 rows processed

As the example output above shows, we were seeing over 6,000 roundtrips for 3,727 rows of data. This rang some alarm bells: in a high-latency environment the cost of every roundtrip is magnified, so this became a key area of focus.

SQLPlus has a default fetch size of 15, so even at its worst we should be seeing no more than approximately 500 roundtrips (rows processed / fetch size × 2, i.e. 3727 / 15 × 2 ≈ 500).

I set up a SQL trace against my connection, and this also showed a high level of chatter between the database and the client in all test cases (https://oracle-base.com/articles/misc/sql-trace-10046-trcsess-and-tkprof).

For example:

...
FETCH #139847718960616:c=0,e=4,p=0,cr=0,cu=0,mis=0,r=0,dep=4,og=4,plh=2542797530,tim=15317741724313
CLOSE #139847718960616:c=0,e=1,dep=4,type=3,tim=15317741724339
EXEC #139847718275456:c=0,e=22,p=0,cr=0,cu=0,mis=0,r=0,dep=4,og=4,plh=3765558045,tim=15317741724381
FETCH #139847718275456:c=0,e=19,p=0,cr=3,cu=0,mis=0,r=1,dep=4,og=4,plh=3765558045,tim=15317741724409
FETCH #139847718275456:c=0,e=3,p=0,cr=0,cu=0,mis=0,r=1,dep=4,og=4,plh=3765558045,tim=15317741724424
FETCH #139847718275456:c=0,e=3,p=0,cr=0,cu=0,mis=0,r=0,dep=4,og=4,plh=3765558045,tim=15317741724435
CLOSE #139847718275456:c=0,e=0,dep=4,type=3,tim=15317741724446
...

Networking

Our network engineers tested various parts of our cloud-to-on-prem architecture, including transfer speeds between different environments, how many hops it was taking to get to and from the database, and packet loss.

We found that the network route the query was taking included one extra hop, so we did some remediation work to correct this – but we didn't see much impact on query times [a reduction of only 1-2 seconds].

We also did a tcpdump of the packets between my laptop and the database server, to see what was actually happening when the query ran…

sudo tcpdump -i en0 host 172.1.2.34 -w query.pcap
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on en0, link-type EN10MB (Ethernet), capture size 262144 bytes
08:58:05.142087 IP 10.10.1.1 > 172.1.2.34: ICMP echo request, id 35751, seq 0, length 64
08:58:05.146186 IP 172.1.2.34 > 10.10.1.1: ICMP echo reply, id 35751, seq 0, length 64
2 packets captured
804 packets received by filter
0 packets dropped by kernel

We ran the pcap file through Wireshark and could see straight away that the query was constantly going back and forth to the database – thousands of times in a 2-minute period. This matched up well with the database statistics showing a high number of roundtrips, and also with the SQL trace output.

Digging Deeper into the fetch size

I broke up the query and looked at all the data types in the result set, and found that there was a CLOB column storing JSON data. After investigating, I found that sqlplus does individual row retrieval for queries with CLOBs – SQLPlus has been developed so that it doesn't use a fetch array when retrieving CLOBs (https://asktom.oracle.com/pls/apex/asktom.search?tag=performance-issue-with-clob).

To test that the CLOBs were the culprit, I converted the CLOB data to VARCHAR2 and reran the query, and found that the data was returned in under 1 second with 2 roundtrips.

ALTER TABLE M ADD METRICS_DATA_VAR VARCHAR2(4000);
UPDATE M SET METRICS_DATA_VAR = dbms_lob.substr( METRICS_DATA, 4000, 1 );

How does the Java Driver CLOB retrieval work?

It seems to work in a similar fashion to the sqlplus retrieval. Our CLOBs ranged between 4,200 and 5,000 characters. The default fetch size for CLOBs is 4,000 characters, so potentially you could need 2 fetches for 1 CLOB, depending on its size.

As of the Oracle 11.2 JDBC driver you can use a LOB prefetch.

statement1.setFetchSize(1000);    // standard JDBC row fetch size
if (statement1 instanceof OracleStatement) {
    // Oracle-specific: prefetch LOB data with the row rather than via a separate locator roundtrip
    ((OracleStatement) statement1).setLobPrefetchSize(50000);
}

or it can be done globally for the application:

System.setProperty(OracleConnection.CONNECTION_PROPERTY_DEFAULT_LOB_PREFETCH_SIZE, "50000");
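
For context, here is a rough sketch of how these pieces might fit together in application code. It is only illustrative: the JDBC URL, credentials and bind value are placeholders, and the query simply reuses the M table and METRICS_DATA column from the test above.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.util.Properties;

public class LobPrefetchSketch {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.setProperty("user", "app_user");          // placeholder
        props.setProperty("password", "app_password");  // placeholder
        // The same setting that CONNECTION_PROPERTY_DEFAULT_LOB_PREFETCH_SIZE refers to,
        // passed here as a plain connection property.
        props.setProperty("oracle.jdbc.defaultLobPrefetchSize", "50000");

        try (Connection conn = DriverManager.getConnection(
                "jdbc:oracle:thin:@//dbhost:1521/SERVICE", props)) {   // placeholder URL
            String sql = "SELECT ID, METRICS_DATA FROM M WHERE V_ID = ?";
            try (PreparedStatement ps = conn.prepareStatement(sql)) {
                ps.setFetchSize(1000);            // rows fetched per roundtrip
                ps.setString(1, "some-v-id");     // placeholder bind value
                try (ResultSet rs = ps.executeQuery()) {
                    while (rs.next()) {
                        // With the prefetch in place, CLOBs of this size arrive with the
                        // row data rather than via an extra locator roundtrip per row.
                        String json = rs.getString("METRICS_DATA");
                        // ... process the JSON payload ...
                    }
                }
            }
        }
    }
}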

Solution

Setting CONNECTION_PROPERTY_DEFAULT_LOB_PREFETCH_SIZE to 50000 has made a huge difference to our application performance. Why 50000? I worked it out based on being able to fit around 10 rows' worth of CLOB data per fetch (roughly 5,000 characters per CLOB × 10 rows ≈ 50,000) without overloading the application's Java heap.

We have seen queries that were taking 70 seconds on big datasets now taking less than 3 seconds. Roundtrip latency is the major factor at play here, and reducing the number of fetch trips the query makes to retrieve the data significantly boosts performance.

Using the Cloud SQL Proxy to initiate a connection to MySQL from the shell

The Cloud SQL Proxy is a really useful way of managing connectivity to Cloud SQL instances. The proxy provides secure access to your Cloud SQL Second Generation instances without having to whitelist IP addresses or configure SSL.

Accessing Cloud SQL instances using the Cloud SQL Proxy offers these advantages:

  • Secure connections: The proxy automatically encrypts traffic to and from the database using TLS 1.2 with a 128-bit AES cipher; SSL certificates are used to verify client and server identities.
  • Easier connection management: The proxy handles authentication with Cloud SQL, removing the need to provide static IP addresses.

There are a few prerequisites before this will work:

  • Only Second Generation instances can benefit from this feature.
  • The Cloud SQL Admin API must be enabled.
  • The instance must either have a public IPv4 address or be configured to use a private IP.
  • The proxy needs to be given GCP authentication credentials, either for an individual account or a service account.

Full details of exactly what to do are here – https://cloud.google.com/sql/docs/mysql/sql-proxy 

Some notes on what I did on macOS Sierra:

  1. Created a Service Account, gave it the Cloud SQL Client role and then generated a new private key (JSON key).
  2. Downloaded the proxy binary:

curl -o cloud_sql_proxy https://dl.google.com/cloudsql/cloud_sql_proxy.darwin.amd64

Then gave execute permissions on the proxy file:

chmod +x cloud_sql_proxy

Then start the proxy:

./cloud_sql_proxy -instances=project:region:instance=tcp:3306 \
-credential_file=/Users/dbamohsin/sslkeys/cloudsqlproxy-env.json

The proxy will now start listening on your local machine for connection attempts on port 3306:

2018/11/09 11:53:10 Listening on 127.0.0.1:3306 for project:region:instance
2018/11/09 11:53:10 Ready for new connections

Then, in another terminal window, connect as normal to the MySQL instance using the localhost IP:

mysqlsh -udbamohsin -p -h127.0.0.1

This should make a connection to the Cloud SQL instance via the proxy, and the connection will show in the proxy log:

2018/11/09 11:53:42 New connection for "project:region:instance"
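
The examples above connect from the shell, but the same localhost endpoint works from application code because the proxy is transparent to clients. Here is a minimal sketch, assuming the MySQL Connector/J driver is on the classpath and using a placeholder database name and password:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class ProxyConnectionCheck {
    public static void main(String[] args) throws Exception {
        // The proxy listens locally, so the JDBC URL points at 127.0.0.1:3306.
        try (Connection conn = DriverManager.getConnection(
                "jdbc:mysql://127.0.0.1:3306/mydb", "dbamohsin", "secret");
             Statement st = conn.createStatement();
             ResultSet rs = st.executeQuery("SELECT 1")) {
            while (rs.next()) {
                System.out.println("Connected via the proxy: " + rs.getInt(1));
            }
        }
    }
}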