Working with Filegroups

Adding a file and filegroup to a database

USE [master]
GO
ALTER DATABASE [DataArchive] 
ADD FILEGROUP [Archive_2015]
GO
ALTER DATABASE [DataArchive] 
ADD FILE ( NAME = N'DataArchive_2015', 
FILENAME = N'G:\DataArchive_2015.ndf' , SIZE = 104448KB , FILEGROWTH = 1048576KB ) 
TO FILEGROUP [Archive_2015]
GO

Making a filegroup default for new data

--Make a filegroup the default
ALTER DATABASE [DataArchive]
MODIFY FILEGROUP [Archive_2015] DEFAULT

Checking which tables are located in which filegroups

--Return which filegroup a table is located in
SELECT object_name(i.[object_id]) as Name_of_Object,
i.name as Index_Name,
i.type_desc as Index_Type,
f.name as Name_of_Filegroup,
a.type as Object_Type,
f.type,
f.type_desc
FROM sys.filegroups as f 
INNER JOIN sys.indexes as i 
 ON f.data_space_id = i.data_space_id
INNER JOIN sys.all_objects as a 
 ON i.object_id = a.object_id
WHERE a.type ='U' -- User defined tables only
--AND object_name(i.[object_id]) ='employee' -- Specific object
GO

ADRCI – Automatic Diagnostic Repository Command Interpreter

Notes on using ADRCI to package incidents for Oracle support.

From the OS command line, export the Oracle SID and run adrci to launch the command-line tool.

export ORACLE_SID=<SID>

adrci

Some useful commands once inside the tool:

help
show home
show incident
show problem
ips pack problem <problem_number>

Some example output:

ADRCI: Release 11.2.0.3.0 - Production on Fri Dec 4 11:46:46 2015

Copyright (c) 1982, 2011, Oracle and/or its affiliates.  All rights reserved.

ADR base = "/u01/app/oracle"

adrci>
adrci> show home

ADR Homes:
diag/rdbms/mydb/INST22

adrci>
adrci> show incident

ADR Home = /u01/app/oracle/diag/rdbms/mydb/INST22:

*************************************************************************
INCIDENT_ID          PROBLEM_KEY                                                                            CREATE_TIME
-------------------- -----------------------------------------------------------                            ----------------------------------------

327230               ORA 7445 [qmcxdAddNmspcAttrFromPfxId()+216]                 2015-12-03 09:33:35.465000 +00:00
330918               ORA 7445 [qmcxdAddNmspcAttrFromPfxId()+216]                 2015-12-04 10:38:12.696000 +00:00

2 rows fetched

adrci>
adrci> show problem

ADR Home = /u01/app/oracle/diag/rdbms/mydb/INST22:

*************************************************************************
PROBLEM_ID           PROBLEM_KEY                                                 LAST_INCIDENT        LASTINC_TIME
-------------------- ----------------------------------------------------------- -------------------- ----------------------------------------

1                    ORA 7445 [kolarsAssign()+1408]                              35914                2013-07-03 02:21:55.866000 +01:00

1 rows fetched

adrci>
adrci> ips pack problem 1

Generated package 3 in file /u01/app/oracle/diag/rdbms/mydb/INST22/trace/ORA7445qm_20151204120758_COM_1.zip, mode complete

adrci>

Full credit to a colleague who wrote this up!

winhttp.winhttprequest.5.1 the server name or address could not be resolved

Problem: When using POST or GET with sp_OAMethod in SQL Server, you receive the following error:

Error        Source                    Description
----------   -----------------------   ------------------------------------------------
0x80072EE7   WinHttp.WinHttpRequest    The server name or address could not be resolved

An example of an HTTP GET function is provided below:

CREATE FUNCTION GetHttp
(
    @url varchar(8000)
)
RETURNS varchar(8000)
AS
BEGIN
    DECLARE @win int 
    DECLARE @hr  int 
    DECLARE @text varchar(8000)

    EXEC @hr=sp_OACreate 'WinHttp.WinHttpRequest.5.1',@win OUT 
    IF @hr <> 0 EXEC sp_OAGetErrorInfo @win

    EXEC @hr=sp_OAMethod @win, 'Open',NULL,'GET',@url,'false'
    IF @hr <> 0 EXEC sp_OAGetErrorInfo @win

    EXEC @hr=sp_OAMethod @win,'SEND'
    IF @hr <> 0 EXEC sp_OAGetErrorInfo @win

    EXEC @hr=sp_OAGetProperty @win,'ResponseText',@text OUTPUT
    IF @hr <> 0 EXEC sp_OAGetErrorInfo @win

    EXEC @hr=sp_OADestroy @win 
    IF @hr <> 0 EXEC sp_OAGetErrorInfo @win 
    
    RETURN @text
END

The function is then called with a URL variable (note that scalar functions must be invoked with a schema-qualified name):

DECLARE @url VARCHAR(500)
SET @url = 'http://google.co.uk'
SELECT dbo.GetHttp(@url)

Although there are many reasons for this error, if the simpler solutions have not worked (e.g. making sure the URL is correct, that DNS resolves, and that the firewall is correctly configured), then the issue may be a hidden proxy.

WinHTTP uses its own proxy settings, and depending on how your servers are configured, the proxy may need to be set or unset.

For me, the proxy was enabled and I needed direct access.

Open an elevated PowerShell window:

> netsh winhttp show proxy

Current WinHTTP proxy settings:

    Proxy Server(s) :  proxy.xxxxxxxxx.co.uk:8080

    Bypass List     :  (none)

> netsh winhttp reset proxy

Current WinHTTP proxy settings:

    Direct access (no proxy server).

> netsh winhttp show proxy

Current WinHTTP proxy settings:

    Direct access (no proxy server).

Should I be using COM objects post-SQL 2000?!

We did this for an old system, but my recommendation would be to use SQL Server CLR to access REST services.

Great post here on MSDN on how to do this – http://blogs.msdn.com/b/sqllive/archive/2008/06/18/accessing-rest-based-web-services-using-sql-clr.aspx

Using DISM to enable features

From Windows Server 2012, pkgmgr is deprecated and can no longer be used from the command line to install features on Windows Server.

Use the following instead to install a feature with a one-line command:

dism /online /Enable-Feature /FeatureName:TelnetClient

I've previously blogged about pkgmgr here – https://dbamohsin.wordpress.com/2013/07/05/pkgmgr-utility-to-install-windows-features-including-telnet/

Duplicate Key Error on local.slaves

We had been getting user assertion errors on our five-node replica set for a while.


Initially these assertion errors were not showing up in the mongo logs, so we enabled increased logging – details can be found here – https://dbamohsin.wordpress.com/2015/03/31/set-additional-logging-and-tracing/

The assertion errors turned out to be related to the local.slaves collection:

[slaveTracking] User Assertion: 11000:E11000 duplicate key error index: local.slaves.$id dup key: { : ObjectId('4def89b415e7ee0aa29fd64b') }
[slaveTracking] update local.slaves query: { _id: ObjectId('4def89b415e7ee0aa29fd64b'), host: "10.90.47.183", ns: "local.oplog.rs" }
update: { $set: { syncedTo: Timestamp 1323652648000|784 } }
exception 11000 E11000 duplicate key error index: local.slaves.$id dup key: { : ObjectId('4def89b415e7ee0aa29fd64b') } 0ms

Taken from the MongoDB docs:

The duplicate key on local.slaves error occurs when a secondary or slave changes its hostname and the primary or master tries to update its local.slaves collection with the new name. The update fails because it contains the same _id value as the document containing the previous hostname, producing an error like the one shown above.

This is a benign error and does not affect replication operations on the secondary or slave.

To prevent the error from appearing, drop the local.slaves collection from the primary or master, with the following sequence of operations in the mongo shell:

use local
db.slaves.drop()

This should resolve the assertion errors, and the new configuration will be picked up the next time the replica set syncs:

use local
db.slaves.find()

This topic is also discussed in Jira – https://jira.mongodb.org/browse/SERVER-4473

db.currentOp queries in MongoDB

Return active sessions running for more than x seconds:

db.currentOp().inprog.forEach(
  function(op) {
    if(op.secs_running > 5) printjson(op);
  }
)
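The filter logic above can be exercised outside the mongo shell as well; here is a minimal plain-JavaScript sketch, with a mocked inprog array standing in for the real output of db.currentOp():

```javascript
// Mocked db.currentOp() result - the real inprog array comes from the server
// and carries many more fields per operation.
const currentOp = {
  inprog: [
    { opid: 1, secs_running: 2,  op: "query"   },
    { opid: 2, secs_running: 12, op: "update"  },
    { opid: 3, secs_running: 7,  op: "getmore" }
  ]
};

// Same predicate as the shell snippet: operations running longer than 5 seconds.
const longRunning = currentOp.inprog.filter(op => op.secs_running > 5);

longRunning.forEach(op => console.log(JSON.stringify(op)));
```

The shell version simply inlines this predicate in a forEach and printjson call; the set of matching operations is identical.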

Waiting for a lock and not a read:

db.currentOp().inprog.forEach(
   function(d){
     if(d.waitingForLock && d.lockType != "read") 
       printjson(d)
     })

Finding active writes:

db.currentOp().inprog.forEach(
   function(d){
     if(d.active && d.lockType == "write") 
       printjson(d)
     })

Finding active reads:

db.currentOp().inprog.forEach(
   function(d){
     if(d.active && d.lockType == "read") 
       printjson(d)
     })

Set additional logging and tracing in MongoDB

logLevel Parameter

To set logging on an ad hoc basis, the parameter can be set in the admin database:

--Current log level:
use admin;
db.runCommand({ getParameter: 1, logLevel: 1 })


Logging can be set between 0 and 5, with 5 being the most verbose logging:

--Set log level to 3
use admin;
db.runCommand( { setParameter: 1, logLevel: 3 } )

LogLevel can also be set at instance startup in the mongod.conf under the systemLog.verbosity parameter:

For more details – http://docs.mongodb.org/manual/reference/configuration-options/#systemLog.verbosity
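For example, in the YAML configuration format (MongoDB 2.6 and later), the equivalent of logLevel 3 at startup would be:

```yaml
systemLog:
  verbosity: 3
```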

Database Profiling

The database profiler collects fine-grained data about write operations, cursors, and database commands on a running mongod instance. You can enable profiling on a per-database or per-instance basis, and the slow-operation threshold is also configurable when enabling profiling.

--get the tracing level and current slow ops threshold
db.getProfilingStatus()

--set the profiling level to 2
db.setProfilingLevel(2)


See here for full profiling details – http://docs.mongodb.org/manual/tutorial/manage-the-database-profiler/

// last few entries
show profile                                                     
 
// sort by natural order (time in)
db.system.profile.find({}).sort({$natural:-1})
 
// sort by slow queries first
db.system.profile.find({}).sort({millis:-1}).limit(10);
 
// anything > 20ms                  
db.system.profile.find({"millis":{$gt:20}})
 
// single coll order by response time                      
db.system.profile.find({"ns":"test.foo"}).sort({"millis":-1})
 
// regular expression on namespace
db.system.profile.find( { "ns": /test.foo/ } ).sort({millis:-1, ts:-1})
 
// anything thats moved    
db.system.profile.find({"moved":true})
 
// large scans                           
db.system.profile.find({"nscanned":{$gt:10000}})
 
// anything doing range or full scans                 
db.system.profile.find({"nreturned":{$gt:1}})

Aggregation framework queries:

--response time by operation type
db.system.profile.aggregate([
{ $group : { 
   _id :"$op", 
   count:{$sum:1},
   "max response time":{$max:"$millis"},
   "avg response time":{$avg:"$millis"}
}}]);
 
--slowest by namespace
db.system.profile.aggregate([
{ $group : {
  _id :"$ns",
  count:{$sum:1}, 
  "max response time":{$max:"$millis"}, 
  "avg response time":{$avg:"$millis"}  
}},
{$sort: {
 "max response time":-1}
}]);
 
--slowest by client
db.system.profile.aggregate([
{$group : { 
  _id :"$client", 
  count:{$sum:1}, 
  "max response time":{$max:"$millis"}, 
  "avg response time":{$avg:"$millis"}  
}},
{$sort: { 
  "max response time":-1} 
}]);
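To make the grouping semantics concrete, here is a plain-JavaScript sketch of the first pipeline (group by op, with a count plus max and avg of millis), run over a small mocked extract of system.profile documents:

```javascript
// Mocked system.profile documents - real ones come from the profiler
// and include many more fields (ns, ts, client, nscanned, ...).
const profile = [
  { op: "query",  millis: 40 },
  { op: "query",  millis: 10 },
  { op: "update", millis: 25 }
];

// Equivalent of: $group by "$op" with count:{$sum:1},
// {$max:"$millis"} and {$avg:"$millis"}.
const groups = {};
for (const doc of profile) {
  const g = groups[doc.op] ||
    (groups[doc.op] = { _id: doc.op, count: 0, max: -Infinity, sum: 0 });
  g.count += 1;
  g.max = Math.max(g.max, doc.millis);
  g.sum += doc.millis;
}

const results = Object.values(groups).map(g => ({
  _id: g._id,
  count: g.count,
  "max response time": g.max,
  "avg response time": g.sum / g.count
}));

results.forEach(r => console.log(JSON.stringify(r)));
```

The "slowest by namespace" and "slowest by client" pipelines are the same computation keyed on ns and client respectively, with a descending sort on the max added at the end.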
