Using DISM to enable features

From Windows Server 2012 onwards, pkgmgr has been deprecated and can no longer be used from the command line to install features on Windows Server.

Use the following instead to install a feature with a one-line command:

dism /online /Enable-Feature /FeatureName:TelnetClient

I've previously blogged about pkgmgr here.

Duplicate Key Error on local.slaves

We have been getting user assertion errors showing up on our 5 node replica set for a while.


Initially these assertion errors were not showing up in the mongo logs, so we enabled increased logging (details can be found here).

The assertion errors turned out to be related to the local.slaves collection:

[slaveTracking] User Assertion: 11000:E11000 duplicate key error index: local.slaves.$id dup key: { : ObjectId('4def89b415e7ee0aa29fd64b') }
[slaveTracking] update local.slaves query: { _id: ObjectId('4def89b415e7ee0aa29fd64b'), host: "", ns: "" }
update: { $set: { syncedTo: Timestamp 1323652648000|784 } }
exception 11000 E11000 duplicate key error index: local.slaves.$id dup key: { : ObjectId('4def89b415e7ee0aa29fd64b') } 0ms

Taken from Mongo Docs:

The duplicate key on local.slaves error occurs when a secondary or slave changes its hostname and the primary or master tries to update its local.slaves collection with the new name. The update fails because it contains the same _id value as the document containing the previous hostname. The error itself will resemble the example above.

This is a benign error and does not affect replication operations on the secondary or slave.

To prevent the error from appearing, drop the local.slaves collection from the primary or master, with the following sequence of operations in the mongo shell:

use local
db.slaves.drop()

This should resolve the assertion errors, and the collection will be repopulated with the new config the next time the secondaries sync.

This topic is also discussed in Jira.

db.currentOp queries in MongoDB

Return active operations running for more than x seconds:

db.currentOp().inprog.forEach(
  function(op) {
    if(op.secs_running > 5) printjson(op);
  }
)

Waiting for a lock and not a read:

db.currentOp().inprog.forEach(
  function(d) {
    if(d.waitingForLock && d.lockType != "read") printjson(d);
  }
)

Finding active writes:

db.currentOp().inprog.forEach(
  function(d) {
    if(d.active && d.lockType == "write") printjson(d);
  }
)

Finding active reads:

db.currentOp().inprog.forEach(
  function(d) {
    if(d.active && d.lockType == "read") printjson(d);
  }
)
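These filters are plain JavaScript predicates, so they can be sanity-checked outside the shell against sample documents shaped like db.currentOp().inprog entries. The data below is illustrative, not taken from a real server:

```javascript
// Sample documents shaped like entries from db.currentOp().inprog.
// Field values here are made up for illustration.
const inprog = [
  { opid: 1, active: true,  secs_running: 12, lockType: "write", waitingForLock: false },
  { opid: 2, active: true,  secs_running: 2,  lockType: "write", waitingForLock: true  },
  { opid: 3, active: false, secs_running: 0,  lockType: "read",  waitingForLock: false }
];

// The same predicates as in the shell snippets above, applied as filters:
const longRunning    = inprog.filter(op => op.secs_running > 5);
const waitingWriters = inprog.filter(op => op.waitingForLock && op.lockType != "read");
const activeWrites   = inprog.filter(op => op.active && op.lockType == "write");
const activeReads    = inprog.filter(op => op.active && op.lockType == "read");

console.log(longRunning.map(op => op.opid));    // [ 1 ]
console.log(waitingWriters.map(op => op.opid)); // [ 2 ]
console.log(activeWrites.map(op => op.opid));   // [ 1, 2 ]
console.log(activeReads.map(op => op.opid));    // []
```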

Set additional logging and tracing in MongoDB

logLevel Parameter

To increase logging on an ad hoc basis, the logLevel parameter can be set in the admin database:

// Current log level:
use admin
db.runCommand({ getParameter: 1, logLevel: 1 })

Logging can be set between 0 and 5, with 5 being the most verbose logging:

// Set log level to 3
use admin
db.runCommand({ setParameter: 1, logLevel: 3 })

logLevel can also be set at instance startup in the mongod.conf under the systemLog.verbosity parameter.
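As a sketch, assuming the YAML config format (MongoDB 2.6 and later), the equivalent mongod.conf entry would look like this; the destination and path values are placeholders:

```yaml
systemLog:
  destination: file
  path: /var/log/mongodb/mongod.log
  # 0 is the default; 1-5 increase verbosity
  verbosity: 3
```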

For more details, see the MongoDB documentation.

Database Profiling

The database profiler collects fine-grained data about MongoDB write operations, cursors, and database commands on a running mongod instance. You can enable profiling on a per-database or per-instance basis, and the slow-operation threshold is also configurable when enabling profiling.

// get the profiling level and current slow ops threshold
db.getProfilingStatus()

// set the profiling level to 2 (profile all operations)
db.setProfilingLevel(2)

See here for full profiling details.

// last few entries
show profile

// sort by natural order (time in)
db.system.profile.find().sort({$natural:-1})

// sort by slow queries first
db.system.profile.find().sort({millis:-1})

// anything > 20ms
db.system.profile.find({millis:{$gt:20}})

// single coll order by response time ('test.foo' is an example namespace)
db.system.profile.find({ns:"test.foo"}).sort({millis:-1})

// regular expression on namespace (example pattern)
db.system.profile.find({ns:/test.foo/}).sort({millis:-1,ts:-1})

// anything thats moved
db.system.profile.find({moved:true})

// large scans
db.system.profile.find({nscanned:{$gt:10000}})

// anything doing range or full scans
db.system.profile.find({nreturned:{$gt:1000}})

Aggregation framework queries:

// response time by operation type
db.system.profile.aggregate([
  { $group : {
     _id : "$op",
     "max response time" : { $max : "$millis" },
     "avg response time" : { $avg : "$millis" }
  }}
])

// slowest by namespace
db.system.profile.aggregate([
  { $group : {
     _id : "$ns",
     "max response time" : { $max : "$millis" },
     "avg response time" : { $avg : "$millis" }
  }},
  { $sort : { "max response time" : -1 } }
])

// slowest by client
db.system.profile.aggregate([
  { $group : {
     _id : "$client",
     "max response time" : { $max : "$millis" },
     "avg response time" : { $avg : "$millis" }
  }},
  { $sort : { "max response time" : -1 } }
])

Count Distinct Values via aggregation framework

Q: Is it possible to count distinct values of a field in MongoDB?

A: Yes! This can be done via the aggregation framework. It takes two $group stages; the first groups by all the distinct values, and the second counts the groups.

pipeline = [
    { $group: { _id: "$myNonUniqueFieldId" } },
    { $group: { _id: 1, count: { $sum: 1 } } }
];

db.runCommand({
    "aggregate": "collection",
    "pipeline": pipeline
})
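The two-stage grouping can be sketched in plain JavaScript to see why it yields a distinct count. The docs array below is illustrative sample data, not from a real collection:

```javascript
// Plain-JS sketch of the two-stage count-distinct pipeline.
const docs = [
  { myNonUniqueFieldId: "a" },
  { myNonUniqueFieldId: "b" },
  { myNonUniqueFieldId: "a" },
  { myNonUniqueFieldId: "c" }
];

// Stage 1: group by the field -> one bucket per distinct value
const buckets = new Set(docs.map(d => d.myNonUniqueFieldId));

// Stage 2: count the buckets
const count = buckets.size;
console.log(count); // 3
```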

2014 in review

The stats helper monkeys prepared a 2014 annual report for this blog.

Here’s an excerpt:

The Louvre Museum has 8.5 million visitors per year. This blog was viewed about 88,000 times in 2014. If it were an exhibit at the Louvre Museum, it would take about 4 days for that many people to see it.

Click here to see the complete report.

Delete a job from multiple servers using SSMS Server Groups

For more detail on Server Groups, see here.

The issue is that Job IDs differ on each server, so a standard sp_delete_job call with a hard-coded ID wouldn't work; we first need to look up the Job ID by name on each server.

The below sample is to delete the same named job across multiple servers:

USE [msdb]
GO

DECLARE @jobid uniqueidentifier

SELECT @jobid = job_id FROM dbo.sysjobs WHERE name = 'Job Name'

-- only attempt the delete if the job exists on this server
IF @jobid IS NOT NULL
    EXEC msdb.dbo.sp_delete_job @job_id = @jobid, @delete_unused_schedule = 1

