Replication – Execution of filter stored procedure 678449641 failed.


Error messages:
The process could not execute 'sp_replcmds' on 'TCY-SQL01\SQLDEV'. (Source: MSSQL_REPL, Error number: MSSQL_REPL20011)
Execution of filter stored procedure 678449641 failed. See the SQL Server errorlog for more information. (Source: MSSQLServer, Error number: 18764)
The Log Reader Agent failed to construct a replicated command from log sequence number (LSN) {000ff0ac:00000520:003e}. Back up the publication database and contact Customer Support Services. (Source: MSSQLServer, Error number: 18805)
The process could not execute 'sp_replcmds' on 'TCY-SQL01\SQLDEV'. (Source: MSSQL_REPL, Error number: MSSQL_REPL22037)
Get help: http://help/MSSQL_REPL22037


The above error indicates the Log Reader Agent is stuck at LSN {0001d651:00002a49:0006} and is unable to read and construct a command for that LSN.
Because Transactional Replication uses a single Log Reader Agent per published database, shared by all of its publications, a stuck Log Reader Agent stalls everything: every publication shows the error in Replication Monitor and none of them moves ahead.
We checked the transaction at LSN {0001d651:00002a49:0006}.

From the above error we see "Execution of filter stored procedure 2048634987 failed".
We looked up the filter stored procedure ID in sysarticles to find which table's filter stored procedure it was:

Select * from sysarticles where filter = 2048634987

However, there was no entry for that object ID in sys.objects, which indicates the filter stored procedure had been deleted for some reason. To confirm this we used the following query:

select * from (select as article_name
, art.pubid as publication_id
, art.artid as article_id
, art.filter as filter_proc_id
, isnull(,‘## MISSING!! ##’) as filter_proc_name
from sysarticles art left outer join sys.objects obj
on art.filter = obj.object_id
where art.filter > 0)as results — and is null (use when only the mismatched articles where required).
where filter_proc_name=‘## MISSING!! ##’


I deleted the publication, recreated it, and then created a subscription without initialisation; this prevented a full snapshot.


So to resolve this we identified the affected publication in the syspublications table (pubid = 750).
We then removed the filtering from the above 3 articles and re-added it, which recreated the filter stored procedures.
We then recreated the snapshot for that publication and marked its subscriber for re-initialization.
The Snapshot Agent recreated the snapshot, the Log Reader Agent started running without error, and the error disappeared from Replication Monitor.


Useful Links

Find Site holding up mlog & Register Orphaned Mviews

To find which site is holding up an Mlog purge, run the following on the MASTER site, changing criteria where necessary:

select sl.*, rs.snapshot_site, rs.owner, rs.refresh_method
from sys.slog$ sl
left join dba_registered_snapshots rs on rs.SNAPSHOT_ID = sl.SNAPID
where mowner = 'schema_owner'
and master = 'table'

To register an orphaned Mview destination at the master, run the DBMS_MVIEW.REGISTER_MVIEW procedure there, changing the arguments where relevant.

The variables the Register_Mview procedure is based on:

   mviewowner   IN   VARCHAR2,
   mviewname    IN   VARCHAR2,
   mviewsite    IN   VARCHAR2,
   mview_id     IN   DATE | BINARY_INTEGER,
   flag         IN   BINARY_INTEGER,
   qry_txt      IN   VARCHAR2

The flag variable above is built from a combination of bit values.

The possible values for the flag parameter are:
                   1 – Fast refreshable (Can_use_log)
                   2 – updatable mview
                  16 – rowid mview
                  32 – primary key mview
           536870912 – object id mview

A materialized view can have more than one of these properties. In this case, sum the values to specify more than one property. (Use the query below to get the values for the properties)
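The flag arithmetic can be sketched in a few lines of Python (bit values taken from the table above; this just mirrors the bitand/decode logic, nothing Oracle-specific):

```python
# Mview property flag bits, from the table above
CAN_USE_LOG = 1           # fast refreshable
UPDATABLE = 2             # updatable mview
ROWID_MV = 16             # rowid mview
PK_MV = 32                # primary key mview
OBJECT_ID_MV = 536870912  # object id mview

def refresh_method(flag):
    """Mirror the nested decode(bitand(...)) query: first matching bit wins."""
    if flag & ROWID_MV:
        return "ROWID"
    if flag & PK_MV:
        return "PRIMARY KEY"
    if flag & OBJECT_ID_MV:
        return "OBJECT ID"
    return None

# A fast-refreshable primary-key mview combines two properties by summing them
flag = CAN_USE_LOG + PK_MV
print(flag)                  # 33
print(refresh_method(flag))  # PRIMARY KEY
```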

To get the information needed to fill out the variables for the Register_Mview procedure, run the following at the Mview Destination site, changing criteria where necessary:

select s.sowner OWNER, s.vname NAME, s.snapid SNAPID,
       decode(bitand(s.flag, 1), 0, 'NO', 'YES') CAN_USE_LOG,
       decode(bitand(s.flag, 2), 0, 'NO', 'YES') UPDATABLE,
       decode(bitand(s.flag, 16), 16, 'ROWID',
         decode(bitand(s.flag, 32), 32, 'PRIMARY KEY',
           decode(bitand(s.flag, 536870912), 536870912, 'OBJECT ID'))) REFRESH_METHOD
from sys.snap$ s
where s.sowner = 'Mview_SCHEMA';

OWNER             NAME                    SNAPID      CAN_USE_LOG UPDATABLE   REFRESH_METHOD

Mview_SCHEMA     MVTABLE                525         YES         NO          PRIMARY KEY

The flag for the record above would be Can_Use_Log (1) + primary key mview (32) = 33.


Replication – Log reader stuck on Initializing – 'The process could not execute sp_repldone/sp_replcounters on Server\Instance'

Scenario – a database schema being upgraded on a replicated SQL-to-SQL database. The subscription is removed and both the publisher and subscriber schemas are upgraded (DDL and DML changes). After completion of the upgrade the subscription is added back in without initialisation, as consistency was maintained while upgrading the schema. However, on startup the Log Reader begins scanning through a 10 GB log on the publisher database looking for pending replicated transactions.


Because the publication is still active, any articles in the publication which have had DML run against them will be marked in the transaction log as pending replication. This causes two problems:

1. The log cannot be backed up until the pending transactions have been pushed to the subscribers.

2. Constraints will probably be violated and replication will break, as the upgrade DML has already been run on the subscriber and these transactions should not be run again.

What to do?

As the database is in a stable state, with both the subscriber and publisher at the same point in time, and because all user connectivity was cut before the work, all the data in the transaction log needs to be ignored for the purposes of replication.

If you try to backup the transaction log with pending transactions still in the log, you will see the following error:

The log was not truncated because records at the beginning of the log are pending replication. Ensure the Log Reader Agent is running or use sp_repldone to mark transactions as distributed.

To clear the pending transactions, run the following against the published database, after stopping the logreader and distribution agent:

exec sp_repldone @xactid = NULL, @xact_seqno = NULL, @numtrans = 0, @time = 0, @reset = 1

If you try to start the log reader now you will see the following error:

2009-11-25 22:40:00.995 Status: 0, code: 18752, text: 'Only one Log Reader Agent or log-related procedure (sp_repldone, sp_replcmds, and sp_replshowcmds) can connect to a database at a time. If you executed a log-related procedure, drop the connection over which the procedure was executed or execute sp_replflush over that connection before starting the Log Reader Agent or executing another log-related procedure.'.
2009-11-25 22:40:00.995 Status: 0, code: 22017, text: 'The process could not set the last distributed transaction.'.
2009-11-25 22:40:00.995 Status: 0, code: 22037, text: 'The process could not execute sp_repldone/sp_replcounters on Server\Instance.'.

To clean this up, run the following on the publisher database (over the same connection that ran sp_repldone, so the log-reader lock it holds is released, as the 18752 error text suggests):

exec sp_replflush

This gives replication a clean slate: the log can now be backed up, and the Log Reader no longer has to trundle through 10 GB of pending transactions.



crontab -e     Edit your crontab file, or create one if it doesn’t already exist.
crontab -l      Display your crontab file.
crontab -r      Remove your crontab file.
crontab -v      Display the last time you edited your crontab file. (This option is only available on a few systems.)


# Oracle broken jobs

0 9 * * *  /path/ > /path/failingjobs.out  2>&1

Date time config for scheduling jobs

.---------------- minute (0 - 59) 
|  .------------- hour (0 - 23)
|  |  .---------- day of month (1 - 31)
|  |  |  .------- month (1 - 12) OR jan,feb,mar,apr ... 
|  |  |  |  .---- day of week (0 - 6) (Sunday=0 or 7)  OR sun,mon,tue,wed,thu,fri,sat 
|  |  |  |  |
*  *  *  *  *  command to be executed

There are several ways of specifying multiple date/time values in a field:

  • The comma (',') operator specifies a list of values, for example: "1,3,4,7,8" (spaces must not be used inside the list)
  • The dash ('-') operator specifies a range of values, for example: "1-6", which is equivalent to "1,2,3,4,5,6"
  • The asterisk ('*') operator specifies all possible values for a field. For example, an asterisk in the hour field is equivalent to 'every hour' (subject to matching the other specified fields).

There is also an operator which some extended versions of cron support, the slash (‘/’) operator (called "step"), which can be used to skip a given number of values. For example, "*/3" in the hour time field is equivalent to "0,3,6,9,12,15,18,21".

So "*" specifies 'every hour' but "*/3" means only those hours divisible by 3. Note that '/' really means "when the value modulo the step is zero" rather than "every N": for example, "*/61" in the minute field only matches minute 0, so the job will in fact run hourly, not every 61 minutes.
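The "modulo is zero" behaviour is easy to verify with a short sketch (illustration only — this is not how cron itself is implemented):

```python
def step_matches(value, step):
    """A "*/step" cron field matches a value when value % step == 0."""
    return value % step == 0

# "*/3" in the hour field: hours divisible by 3
hours = [h for h in range(24) if step_matches(h, 3)]
print(hours)  # [0, 3, 6, 9, 12, 15, 18, 21]

# "*/61" in the minute field: only minute 0 matches, so the job runs hourly
minutes = [m for m in range(60) if step_matches(m, 61)]
print(minutes)  # [0]
```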

Example: the following will clear the Apache error log at one minute past midnight ( 00:01 of every day of the month, of every day of the week ).

1 0 * * *  echo -n "" > /www/apache/logs/error_log

Slash example: the following will run the script /home/user/ every 5 minutes.

*/5 * * * *  /home/user/

SQL Server Default Languages and Misinterpretation

Problem: local settings affect how SQL Server interprets incoming dates. For example, country A's server may read dates in the format MM/DD/YYYY while country B's reads them as DD/MM/YYYY by default. This causes problems: when both the day and the month are 12 or lower, dates are easily mixed up.


Each SQL Server has a default language.  You can see what the default language is by executing these commands (in Query Analyzer).

sp_configure 'default language'

This will tell you what the default language is (sort of): it actually returns a config_value with an integer that represents the language id. To see the languages themselves, you can then run:

exec sp_helplanguage

You will see a list of languages that SQL Server supports. 

The odd thing here is that the server’s language setting will not solve your problem.  This setting configures the default language for NEW users.  By changing this setting, existing users will continue to have their original language.  This is where it gets interesting because it’s the login’s language setting that determines SQL Server’s date format.

For example, if user A has a default language of us_english, then a date of 4/6/2006 will be interpreted as April 6, 2006.  If user B has a default language of British, then the same date will be interpreted as June 4, 2006.
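The ambiguity is easy to reproduce outside SQL Server: parsing the same string month-first and day-first yields two different dates. A minimal Python illustration:

```python
from datetime import datetime

s = "4/6/2006"
as_mdy = datetime.strptime(s, "%m/%d/%Y")  # us_english-style, month first
as_dmy = datetime.strptime(s, "%d/%m/%Y")  # British-style, day first
print(as_mdy.date())  # 2006-04-06 (April 6)
print(as_dmy.date())  # 2006-06-04 (June 4)
```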

The good news is that you can change the default language for a user so that subsequent logins will exhibit the correct interpretation of dates.  Here’s how:
You can set the default language for a user by issuing the following command.

sp_defaultlanguage @loginame = 'LoginName', @language = 'Language'

After running this command, the user will need to log out and back in to the database in order for the change to take effect.  The good news is that the language setting only needs to be set once (for each user in the database).

There is an alternative method, but it only works for the current session.  You can set the language in your query (much the same way the Set DateFormat works).  When you disconnect from the database, the language setting is NOT saved.  Set Language differs from Set DateFormat regarding weekday names and month names, for example:


set language 'us_english'
Select Convert(DateTime, '4/6/2006'),
       DateName(weekday, '4/6/2006'),
       DateName(Month, '4/6/2006')

set language 'Italian'
Select Convert(DateTime, '4/6/2006'),
       DateName(Weekday, '4/6/2006'),
       DateName(Month, '4/6/2006')


You can set the default language for new logins by configuring the server’s default language by using sp_configure.
You can change a user’s default language by using sp_defaultlanguage.
You can temporarily change the language for a query by using Set Language.

NOVALIDATE constraints – Primary Key violated. Duplications…

A NOVALIDATE constraint is basically a constraint which can be enabled but for which Oracle will not check the existing data to determine whether there might be data that currently violates the constraint.

This is useful if we know there’s data that violates the constraint but we want to quickly put on a constraint to prevent further violations, with the intention to clean up any possible violations at some future point in time.

It’s also potentially useful if we know the data is clean and so want to prevent the potentially significant overheads of Oracle having to check all the data to ensure there are indeed no violations.

There are a few little traps one can easily fall into if these two very important fundamentals are not understood:

  1. By default, Oracle will attempt to create a Unique Index to police a PK or UK constraint
  2. A NOVALIDATE constraint requires a Non-Unique Index for the constraint to really be “Novalidated”

If we try to create a NOVALIDATE PK constraint on data that already contains duplicates, Oracle can appear to ignore the NOVALIDATE option: it validates the constraint regardless and generates an error because of the duplicate entries.

Why ?

Because by default Oracle will attempt to create a unique index when creating a PK constraint. A unique index MUST always contain unique values, so it complains when it stumbles across the duplicate ID value. The constraint is effectively validated, because the unique index will only be created provided there are indeed no duplicate values.
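That unique-index build behaviour can be mimicked with a short sketch (hypothetical data; the duplicate value 42 is just an example):

```python
def build_unique_index(keys):
    """Mimic building a unique index: abort on the first duplicate key,
    much as Oracle raises ORA-00001 during the index build."""
    index = {}
    for rowid, key in enumerate(keys):
        if key in index:
            raise ValueError(f"duplicate key {key!r} in rows {index[key]} and {rowid}")
        index[key] = rowid
    return index

build_unique_index([1, 2, 3])        # all keys unique: builds fine

try:
    build_unique_index([1, 42, 42])  # the duplicate 42 aborts the build
except ValueError as e:
    print(e)
```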

It’s easy to introduce duplicate rows of data into Oracle tables by running a data load twice without the primary key or unique indexes created or enabled.
Here’s how you remove the duplicate rows before the primary key or unique indexes can be created:

DELETE FROM our_table
WHERE rowid NOT IN
  (SELECT MIN(rowid)
   FROM our_table
   GROUP BY column1, column2, column3);

Here column1, column2, column3 constitute the identifying key for each record.
The GROUP BY is on the columns that make up the primary key for the table, and the script deletes every row in each group after the first.
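The keep-the-first-row-per-group logic can be sketched outside the database (hypothetical (rowid, key) pairs, purely to illustrate the MIN(rowid) idea):

```python
# Hypothetical rows as (rowid, key-column tuple); two key groups have duplicates
rows = [
    (101, ("a", 1)),
    (102, ("a", 1)),  # duplicate of rowid 101
    (103, ("b", 2)),
    (104, ("b", 2)),  # duplicate of rowid 103
    (105, ("c", 3)),
]

# MIN(rowid) per key group = the one row in each group that survives the DELETE
min_rowid = {}
for rowid, key in rows:
    if key not in min_rowid or rowid < min_rowid[key]:
        min_rowid[key] = rowid

survivors = [rowid for rowid, key in rows if min_rowid[key] == rowid]
print(survivors)  # [101, 103, 105]
```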