Tag Archives: Reporting

How to fix the 500 error when using Favorite Reports in the SCOM web console

22 May

First, a disclaimer: this is an unofficial solution.

The Problem

You have installed SCOM 2012 SP1 UR2 and have configured the SCOM web console and Reporting Services to run in HTTPS mode. Using the native SCOM console you have created a favorite report, and when you try to open that favorite report in the SCOM web console you get an error 500.

[screenshots: the 500 error shown in the web console]

Analyzing

To see the real error we have to make some web.config changes. Open the web.config file at this location: C:\Program Files\System Center 2012\Operations Manager\WebConsole\MonitoringView

Now we enable the SCOM error logging:

[screenshot: web.config change enabling error logging]

And to get the error displayed on the user's page:

[screenshot: web.config change showing the real error to the user]
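The screenshots above are lost, but the change that makes the error visible on the page is the standard ASP.NET customErrors switch. A minimal sketch, assuming the default ASP.NET settings (the SCOM-specific logging key from the first screenshot is not reproduced here):

<!-- Sketch only: inside <system.web> of the MonitoringView web.config. -->
<!-- Turning customErrors off makes the web console return the underlying -->
<!-- exception instead of the generic 500 page. -->
<system.web>
  <customErrors mode="Off" />
</system.web>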

Now, when you run the favorite report again, the web console shows us the real error:

[screenshot: the underlying exception about Microsoft.ReportViewer.WebForms]

Okay, it looks like the ReportViewer web component DLL can't be found. But wait, wasn't this a prerequisite at installation time? So I checked whether the 2010 ReportViewer components were installed: yes they were, and the DLLs were also present in the assembly cache (GAC). It looks like the web console has trouble finding the correct version of Microsoft.ReportViewer.WebForms.dll in the GAC.

The Quick, Unofficial Solution

Copying the missing DLLs to the right directory forces the web runtime to look in that directory first, before going to the GAC. So that's what I did.

Copy the Microsoft.ReportViewer.WebForms.dll file from the GAC to this path: C:\Program Files\System Center 2012\Operations Manager\WebConsole\MonitoringView\bin

Come on, give me a script to do that! Okay, open PowerShell as administrator and run:

Copy-Item "C:\Windows\assembly\GAC_MSIL\Microsoft.ReportViewer.WebForms\10.0.0.0*\*.dll" "C:\Program Files\System Center 2012\Operations Manager\WebConsole\MonitoringView\bin"

And now try running the favorite report again in the web console…

[screenshot: the favorite report rendering correctly]

… and yes, it's working!

The End.

To me this looks like a bug, and I will raise it with Microsoft.

Happy Scomming!

Michel Kamp


Touchdown: ScomExcelWorkbook V2 is released

28 Jan

Hi Community,

After some delays I have finished the V2 version of the ScomExcelWorkbook.
See the old V1 post here: https://michelkamp.wordpress.com/2012/11/25/scom-and-excel-a-perfect-couple/

New features

1) More types to query

The types below can now be used in the Type column:
Events
Objects
Alerts
Performance
TaskResults
Discoveries
Rules
Overrides
Monitors
ManagementPacks

2) Combine data on same sheet

When you build a pivot table from a SCOM data sheet, you will notice that combining the data of two data sheets into one pivot table is a challenge. So I have added a feature that lets you append data to a single SCOM data sheet. The only thing you have to do is use the same sheet name and type in the query rows. See below for an example.

[screenshot: query rows sharing the same sheet name and type]

3) Extended Properties

Since I was too lazy to hand-type every property name in a column, I made it dynamic. It cost me some time to get the object type casting generic, but again I learned a lot more about C# reflection. And that's why I do it: “learning on the job”.

4) Optimization

As always, some speed-ups and code simplification were done.

 

Okay, nice, but how do I get it…

For the V1 version I decided to only give the download to people who sent me a PM. I got a lot of PMs and good responses. That was great, but it took a lot of time to process. So now I am publishing it on my public SkyDrive; see the link below. You are still free to leave me a comment; I would really appreciate that.

https://skydrive.live.com/redir?resid=2FFA0FC5B0B89EED!1363&authkey=!AOoo44wQmagQyOI

Download it and give it a try. Please, please let me know if you like it or have suggestions or problems.

Oh yes, before I forget:

The sheet contains sample queries; you can delete every line after row 4 and put in your own. Some help can be found here:

https://michelkamp.wordpress.com/2012/12/12/your-scom-sdk-query-cheat-sheet/

 

Happy Scomming,

Michel Kamp

No, Mr. SCOM, I told you: not an availability state report but a performance state report is what I want!

16 Jan

Sometimes you wonder why not all reports are the way they should be. Take the availability report, which you surely know: just pick a target and a period and you get a nice report telling you when a target went unhealthy.

[screenshot: the standard availability report]

The challenge.

Okay, nice… but I want a report based not on the availability data but on the performance, configuration, or security data. But wait, this is built into the availability report, isn't it?

Looking at the report description:

Description:

“For every managed object within System Center Operations Manager, monitors configured in each of the disciplines below determine an objects time in state and then roll-up to an objects overall health. The availability report by default shows an objects time in state as per the monitors that roll-up within the availability discipline.

Entity health

Availability   <= this you get

Configuration <= this you want

Performance <= ..

Security <= ..”

Oh no, it looks like it isn't. So yes, it's a real challenge. That's the way we like it.

Solution

The availability report was intended to be used for this, but in the end it looks like the SCOM program team decided to lock it to ‘availability’ only. I know this because when you look into the report definition you will see:

[screenshot: the report definition locked to System.Health.AvailabilityState]

So the report uses only the availability rollup as its state calculation data. AND this parameter is hidden, even for gurus like us. How dare they 😉

So we can solve this in several ways. The root of the solution is to change the value ‘System.Health.AvailabilityState’ to ‘System.Health.PerformanceState’, ‘System.Health.ConfigurationState’, or ‘System.Health.SecurityState’ to get the report state type we want.

1) Export the report from Reporting Services and edit the hidden parameter's value to false. Import the report again, open it in the SCOM console, and change the MonitorName value to, for example, System.Health.PerformanceState. Run the report and you are done.

2) Do a normal report run using the unmodified availability report and save it to a management pack. Then export the MP, open it in Notepad, and edit the MP.

3) Do a normal report run using the unmodified availability report and save it as a favorite. Then open SQL Server Management Studio and look up the report in the dbo.FavoriteReport table. Change the ReportParameterValues column to the changed parameters (see the sketch below).
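For option 3, the update would look something along these lines. This is a sketch only and totally unsupported: the WHERE clause is hypothetical, so identify your own favorite report row first.

-- Sketch for option 3 (unsupported!): swap the state type inside the saved parameter values.
-- The filter on Name is hypothetical; look up your favorite report row first.
UPDATE dbo.FavoriteReport
SET ReportParameterValues = REPLACE(CAST(ReportParameterValues AS nvarchar(max)),
        'System.Health.AvailabilityState',
        'System.Health.PerformanceState')
WHERE Name = 'My availability favorite' -- hypothetical favorite report name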

I know what you are thinking right now… what would you do, Michel?

I would go for option 1, because I would also change the report definition to have a correct name, such as ‘Performance availability’ etc., and save it under a different name. You must be aware that if you only change the parameter to hidden = false and don't change the report file name, the next time you import a new service pack or MP version your report may be overwritten… So, having said that, go for the safer one and choose option 2.

Let’s go!

1) So, make the normal availability report in the SCOM console

2) Save it to an MP

[screenshot: saving the report run to a management pack]

3) Export the MP

4) Edit the MP with notepad

[screenshot: the MonitorName value edited in the exported MP XML]
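The screenshot showed the actual edit. Roughly, you search the exported XML for the MonitorName report parameter and change its value. A hypothetical fragment; the exact elements surrounding the parameter in your MP may differ:

<!-- Hypothetical fragment of the exported MP -->
<Parameter>
  <Name>MonitorName</Name>
  <!-- was: System.Health.AvailabilityState -->
  <Value>System.Health.PerformanceState</Value>
</Parameter>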

5) Import it into SCOM (leave the MP version number unchanged)

6) Wait a few minutes and you will see the report in the console

Below is the end result. Also notice that you can still click through to the sub-reports, and that these are also of the state type you wanted!

[screenshot: the resulting performance state report]

Yes, I know you will have to do this for each of the 3 report types, because you can't change the monitor type at runtime. In the end the decision is yours to use option 1, 2, or 3.

The End

Every time I tell myself: make a short blog post! And every time I notice that I am failing… But who cares… (yes, okay… my wife) 😉

Happy scomming!

Michel Kamp

Authoring SCOM Reports in VS 2010

14 Jan

Hi,

A short post on how to get your dev environment ready for authoring SCOM reports.

Challenge:

You have installed SCOM 2012 on SQL 2008. You want to author a custom report using Visual Studio 2010, but when you open Visual Studio you notice that NO BI project template is shown. Normally you would select this project template and create a new report project to build your custom report. How to continue?

Solved:

Grab a SQL 2012 ISO (yes, 2012) and start the setup.

1) Select installation:

[screenshot: SQL Server setup, Installation page]

2) Choose a new SQL installation or add features to an existing installation

[screenshot: installation type selection]

3) Select the SQL Server feature installation

[screenshot: the feature installation option]

4) Now the important step: select the 3 options shown here. The most important is “SQL Server Data Tools”; this feature contains the VS BI project template.

[screenshot: feature selection with “SQL Server Data Tools” checked]

5) Step through the remaining install windows.

And now open Visual Studio 2010 and create a new project. And what do we see?

Yes, the BI template 😉

[screenshot: the Business Intelligence project template in Visual Studio 2010]

 

Now you can create the new SCOM reports. Notice also the NEW chart types!!!

[screenshot: the new chart types in the report designer]

 

Remember that if you use custom report code components, you must copy the correct .dll assembly to this directory:

C:\Program Files (x86)\Microsoft Visual Studio 10.0\Common7\IDE\PrivateAssemblies
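For example, in PowerShell (the assembly name and source path below are placeholders for your own code component):

# Placeholder paths: replace MyReportCode.dll with your own code component assembly.
Copy-Item "C:\Dev\MyReportCode.dll" "C:\Program Files (x86)\Microsoft Visual Studio 10.0\Common7\IDE\PrivateAssemblies"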

The End.

Happy Scomming

Michel Kamp

Don’t let the data warehouse write action fool you!

26 Sep

Yes, I know, it has been a long time since I posted. Vacation and work pressure were, and still are, the reason. But nevertheless I will share a problem I ran into that looks small but can have a big impact.

The problem.

You have a workflow with a PowerShell/VBS script that outputs a property bag with performance data, and the performance data contains multiple counters. The performance data is written to the OpsDB and the DWH DB. All works okay: you see the performance counters in the native console. So you assume the DWH write action is writing the same counters to the DWH… but when you look in the DWH you see that only one counter is stored. And you are sure the workflow outputs multiple counters…

Below are the performance counters in the native console. All 4 counters are there (yellow):

[screenshot: the four performance counters in the operations console]

Below, the DWH. You see only one rule (yellow); this was the first counter in the property bag.

[screenshot: the DWH showing only one stored rule/counter]

What could be wrong???

Analyze

The workflow looks like this:

<Rule ID="TransferFile.ReadSec" Enabled="true" Target="FileTransferClient" ConfirmDelivery="true" Remotable="true" Priority="Normal" DiscardLevel="100">
  <Category>Custom</Category>
  <DataSources>
    <DataSource ID="SMBFileTransfer" TypeID="FileTransfer">
      <Debug>false</Debug>
      <IntervalSeconds>300</IntervalSeconds>
    </DataSource>
  </DataSources>
  <WriteActions>
    <WriteAction ID="ToOps" TypeID="SystemCenter!Microsoft.SystemCenter.CollectPerformanceData" />
    <WriteAction ID="ToDWH" TypeID="SCDW!Microsoft.SystemCenter.DataWarehouse.PublishPerformanceData" />
  </WriteActions>
</Rule>

1. First you check what the property bag output from the data source SMBFileTransfer contains:

<Collection>
  <DataItem type="System.PropertyBagData" time="2012-09-20T19:55:28.0638791+02:00" sourceHealthServiceId="0F6B7345-4C8E-CFAF-BD7A-454E6C94B77F">
    <Property Name="Instance" VariantType="8">c:\destionation</Property>
    <Property Name="Counter" VariantType="8">Read Transfer Kbyte Sec</Property>
    <Property Name="Value" VariantType="5">14450.625</Property>
  </DataItem>
  <DataItem type="System.PropertyBagData" time="2012-09-20T19:55:28.1079971+02:00" sourceHealthServiceId="0F6B7345-4C8E-CFAF-BD7A-454E6C94B77F">
    <Property Name="Instance" VariantType="8">c:\destionation</Property>
    <Property Name="Counter" VariantType="8">Read Transfer Total Sec</Property>
    <Property Name="Value" VariantType="5">0.3</Property>
  </DataItem>
  <DataItem type="System.PropertyBagData" time="2012-09-20T19:55:28.1079971+02:00" sourceHealthServiceId="0F6B7345-4C8E-CFAF-BD7A-454E6C94B77F">
    <Property Name="Instance" VariantType="8">c:\destionation</Property>
    <Property Name="Counter" VariantType="8">Write Transfer Kbyte Sec</Property>
    <Property Name="Value" VariantType="5">14450.625</Property>
  </DataItem>
  <DataItem type="System.PropertyBagData" time="2012-09-20T19:55:28.1079971+02:00" sourceHealthServiceId="0F6B7345-4C8E-CFAF-BD7A-454E6C94B77F">
    <Property Name="Instance" VariantType="8">c:\destionation</Property>
    <Property Name="Counter" VariantType="8">Write Transfer Total Sec</Property>
    <Property Name="Value" VariantType="5">0.3</Property>
  </DataItem>
</Collection>

You see multiple counter values that have to be converted to performance data.

2. Now we check the converted performance data using the WFAnalyzer; see below. It looks okay.

Received DataItem <DataItem type="System.Performance.Data" time="2012-09-20T19:55:28.1109383+02:00" sourceHealthServiceId="0F6B7345-4C8E-CFAF-BD7A-454E6C94B77F"><TimeSampled>2012-09-20T19:55:28.0638791+02:00</TimeSampled><ObjectName>SMB File Transfer</ObjectName><CounterName>Read Transfer Kbyte Sec</CounterName><InstanceName>c:\destionation</InstanceName><IsNull Type="Boolean">false</IsNull><Value>14450.625</Value></DataItem>

Received DataItem <DataItem type="System.Performance.Data" time="2012-09-20T19:55:28.1109383+02:00" sourceHealthServiceId="0F6B7345-4C8E-CFAF-BD7A-454E6C94B77F"><TimeSampled>2012-09-20T19:55:28.1079971+02:00</TimeSampled><ObjectName>SMB File Transfer</ObjectName><CounterName>Read Transfer Total Sec</CounterName><InstanceName>c:\destionation</InstanceName><IsNull Type="Boolean">false</IsNull><Value>0.3</Value></DataItem>

Received DataItem <DataItem type="System.Performance.Data" time="2012-09-20T19:55:28.1109383+02:00" sourceHealthServiceId="0F6B7345-4C8E-CFAF-BD7A-454E6C94B77F"><TimeSampled>2012-09-20T19:55:28.1079971+02:00</TimeSampled><ObjectName>SMB File Transfer</ObjectName><CounterName>Write Transfer Kbyte Sec</CounterName><InstanceName>c:\destionation</InstanceName><IsNull Type="Boolean">false</IsNull><Value>14450.625</Value></DataItem>

Received DataItem <DataItem type="System.Performance.Data" time="2012-09-20T19:55:28.1109383+02:00" sourceHealthServiceId="0F6B7345-4C8E-CFAF-BD7A-454E6C94B77F"><TimeSampled>2012-09-20T19:55:28.1079971+02:00</TimeSampled><ObjectName>SMB File Transfer</ObjectName><CounterName>Write Transfer Total Sec</CounterName><InstanceName>c:\destionation</InstanceName><IsNull Type="Boolean">false</IsNull><Value>0.3</Value></DataItem>

3. The next step is to check the write actions. This also looks okay; the “ToDWH” write action should write the data to the DWH.

<WriteActions>
  <WriteAction ID="ToOps" TypeID="SystemCenter!Microsoft.SystemCenter.CollectPerformanceData" />
  <WriteAction ID="ToDWH" TypeID="SCDW!Microsoft.SystemCenter.DataWarehouse.PublishPerformanceData" />
</WriteActions>

All looks okay….

Solution

After some mailing with the OM development team the answer was found: writing multiple counters to the DWH from one property bag output is NOT supported! The DWH write module has a one-to-one reference map, meaning one rule can contain only one counter. Be aware that no error is reported when this happens.

The only way to solve this is to create one rule for every performance counter you want to store in the DWH. Use a condition detection in the rule to filter out the correct performance counter. See below for an example.

<Rule ID="TransferFile.ReadSec" Enabled="true" Target="FileTransferClient" ConfirmDelivery="true" Remotable="true" Priority="Normal" DiscardLevel="100">
  <Category>Custom</Category>
  <DataSources>
    <DataSource ID="SMBFileTransfer" TypeID="OPS.SMB.Performance.FileTransfer">
      <Debug>false</Debug>
      <IntervalSeconds>300</IntervalSeconds>
    </DataSource>
  </DataSources>
  <ConditionDetection ID="Filter" TypeID="System!System.ExpressionFilter">
    <Expression>
      <SimpleExpression>
        <ValueExpression>
          <XPathQuery Type="String">CounterName</XPathQuery>
        </ValueExpression>
        <Operator>Equal</Operator>
        <ValueExpression>
          <Value Type="String">Read Transfer Total Sec</Value>
        </ValueExpression>
      </SimpleExpression>
    </Expression>
  </ConditionDetection>
  <WriteActions>
    <WriteAction ID="ToOps" TypeID="SystemCenter!Microsoft.SystemCenter.CollectPerformanceData" />
    <WriteAction ID="ToDWH" TypeID="SCDW!Microsoft.SystemCenter.DataWarehouse.PublishPerformanceData" />
  </WriteActions>
</Rule>

THE END

Maybe this will help you. Till next time.

Happy SCOMMING

Michel Kamp

SCOM DWH aggregation data loss: tips and tricks

10 Apr

 

This ‘short’ post will be about the DWH aggregations again. It contains some tips on how not to lose any data.

!!! Everything I suggest and do here is at your own risk and totally unsupported unless instructed by Microsoft support. !!!

The problem:

You run a performance report over one month and notice that you are missing some days of aggregated hourly/daily data. As far as you knew, you weren't having any trouble… till now.

[screenshot: performance report with a gap of missing days]

Analyze:

First we are going to check whether there are any aggregations that have not completed yet.
Run the SQL query below against the DWH database:

-- Check the aggregations still to be processed
SELECT COUNT(*) AS Aggr_behind, Dataset.DatasetDefaultName
FROM StandardDatasetAggregationHistory
INNER JOIN Dataset ON StandardDatasetAggregationHistory.DatasetId = Dataset.DatasetId
WHERE StandardDatasetAggregationHistory.DirtyInd = 1
GROUP BY Dataset.DatasetDefaultName

The result could look as shown below. The Aggr_behind number shows you how many aggregations have not completed yet.

[screenshot: query result with a high Aggr_behind count]

In this case, with such a high number, we have a serious problem. In that situation, just follow my previous blog post on how to solve it; it covers missing state data but can also be applied to performance data. Look at the FIX part to kick off the aggregation processing:

(https://michelkamp.wordpress.com/2012/03/23/dude-where-my-availability-report-data-from-the-scom-dwh/)

But if you see a number around 2 for the performance data set (see picture below), it means only 2 aggregations still have to be processed. This is what we want to see, so everything seems okay. But why then are we missing the date period 01-02-2012 till 20-01-2012?

[screenshot: performance data set with an Aggr_behind count around 2]

We could have 2 scenarios here:

1. The data was simply not provided to the DWH.

2. The data was provided but, due to staging/aggregation problems, not processed.

For case 1 we would have to look at the agents to see what went wrong; that is out of scope for this post.

For case 2 we have some solutions; see below.

Case 2

First let me explain how the aggregation process works at helicopter view. I am sure I am missing some details (so feel free to add to / correct me on this!).

[diagram: helicopter view of the DWH staging and aggregation flow]

Looking at the picture above (click on it to expand):

1. The SCOM management server's DWH writer data source writes the performance data to a RAW staging table.

2. The DWH staging process processes this data by copying the RAW rows into a process table. Sometimes the table is simply renamed and recreated, when the new RAW row count is below a configured number. With a big number of new RAW rows, the rows are copied in batches to minimize the transaction log impact. Finally, the RAW data is copied into the RAW data partition tables.

3. The standard maintenance process generates the aggregation sets that have to be processed in step 4. During this process, aggregation rows are created in the aggregation history table with a dirty indication (DirtyInd) of 1.

4. The RAW staged partition data is aggregated into hourly and daily data. When an aggregation is complete, its dirty indication is set to 0.

5. The report's stored procedure reads the freshly aggregated data.

6. The data from step 5 is used to render the report for the end user.
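Knowing the flow, a quick way to see whether step 2 is keeping up is to count the rows still sitting in the staging tables. A sketch; the staging table names below are the usual ones in OperationsManagerDW, but verify them in your own DWH first:

-- Rows still waiting in the staging tables (step 2); a steadily growing count is a bad sign.
SELECT 'Perf' AS DataSet, COUNT(*) AS StagedRows FROM Perf.PerformanceStage WITH (NOLOCK)
UNION ALL
SELECT 'State', COUNT(*) FROM State.StateStage WITH (NOLOCK)
UNION ALL
SELECT 'Event', COUNT(*) FROM Event.EventStage WITH (NOLOCK)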

 

So, now that we know the data flow, what could be wrong?

The answer lies with the grooming process (?). Yes, the grooming process. The data in the RAW partition tables from step 2 has a grooming/retention period, which is 10 days by default. So if your aggregation is broken for more than 10 days (and you didn't detect it), you will LOSE your RAW data, and as a result the aggregation process has nothing to aggregate. So no performance data, resulting in our root problem: the date gap in the report.
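Before changing anything, you can check the current retention settings per dataset and aggregation type. A small sketch against the same tables the fix below uses:

-- Current grooming/retention settings per dataset.
-- AggregationTypeId: 0 = raw, 20 = hourly, 30 = daily.
SELECT ds.SchemaName, sda.AggregationTypeId, sda.MaxDataAgeDays, sda.GroomStoredProcedureName
FROM StandardDatasetAggregation sda
INNER JOIN StandardDataset ds ON ds.DatasetId = sda.DatasetId
ORDER BY ds.SchemaName, sda.AggregationTypeId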

Solution:

Pfff… nice, all this theory stuff, but how do I fix it?

Simply by: 😉

1. Manually inserting the missing RAW data and kicking off the aggregation process. I will blog about how to do this later (after MMS).

2. Preventing this from happening again.

To prevent this, you can increase the retention/grooming period from 10 days to, let's say, 30 days. Check that you have enough DB space first. Execute the query below:

UPDATE StandardDatasetAggregation
SET MaxDataAgeDays = 30
WHERE GroomStoredProcedureName = 'PerformanceGroom' AND AggregationTypeId = 0

Now you have 30 days to solve your aggregation problems. Of course this is a workaround that gives you more air to breathe while you fix the aggregation problems.

The best way is to monitor it proactively. Since we can monitor everything, we create a monitor that checks the outstanding aggregations every 60 minutes and alerts when a threshold is hit. You can use the query from the Analyze part of this post for that; see the sketch below. I would set the threshold at 10, so you are notified when the aggregation process is about 10 datasets (roughly 10 hours) behind. If I have time before I leave for MMS, I will blog about this extra monitor, because you can't build this one with the normal DB watcher. And of course I will use the VS Authoring Extensions for it.
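A sketch of the check such a monitor could run, reusing the query from the Analyze part with the suggested threshold applied (tune it to your environment):

-- Alert condition sketch: returns a row per dataset whose aggregation backlog exceeds the threshold.
SELECT Dataset.DatasetDefaultName, COUNT(*) AS Aggr_behind
FROM StandardDatasetAggregationHistory
INNER JOIN Dataset ON StandardDatasetAggregationHistory.DatasetId = Dataset.DatasetId
WHERE StandardDatasetAggregationHistory.DirtyInd = 1
GROUP BY Dataset.DatasetDefaultName
HAVING COUNT(*) > 10 -- the suggested threshold of 10 datasets (about 10 hours behind)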

Happy scomming.

Michel

Dude, where is my Availability Report data from the SCOM DWH??

23 Mar

“Houston, we have a problem!” When I run an availability report the data isn't complete: I'm missing a huge number of days. The graph shows UP (monitoring unavailable), but I am really sure the server was up and monitored!!

[screenshot: availability report with ‘monitoring unavailable’ gaps]

 

Analyze:

Don't panic, we are going to solve this (I hope…). First we look up the days we are missing: simply click on the white bar and the detail report will be rendered.

[screenshot: the availability detail report]

Okay, it looks like we are missing most of the data from 4-3-2012 onwards, and we see strange gaps in the data that is present.

Okay, that's what the report says, but I am a core-stuff guy, so I check it this way:

Open a SQL session and connect to the DWH DB. Run the query below; the last aggregated data will be on the first row, so you know the date of the most recent data you have. Change the DateTime range to the same period used in the report.

SELECT ManagedEntity.FullName, vStateHourly.*
FROM ManagedEntityMonitor
INNER JOIN ManagedEntity ON ManagedEntityMonitor.ManagedEntityRowId = ManagedEntity.ManagedEntityRowId
INNER JOIN State.vStateHourly ON ManagedEntityMonitor.ManagedEntityMonitorRowId = State.vStateHourly.ManagedEntityMonitorRowId
WHERE ManagedEntity.FullName LIKE 'Microsoft.Windows.Computer:opsrms01%'
  AND vStateHourly.DateTime BETWEEN '20120301' AND '20120401'
ORDER BY vStateHourly.DateTime DESC

The output will be:

[screenshot: query output; the last hourly aggregation is 02-03-2012]

So the last successful hourly aggregation was 02-03-2012 (dd-mm-yyyy). Hmmm, but when I look at the rendered report I see periods of data after this date??? I must confess I really have no idea why 😉

Now we have to find the root cause and fix the missing aggregations. Luckily we can enable debug information for the aggregation process, so we can see more of what is going wrong.

Open SQL and run the query below to enable debugging for the State aggregation.

UPDATE [OperationsManagerDW].[dbo].[StandardDataset]
SET [DebugLevel] = 5
WHERE [SchemaName] = 'State'
GO

Now we can see the debug data with this query:

SELECT TOP (100) DATEADD(hh, 1, DebugMessage.MessageDateTime) AS CET_datetime,
       StandardDataset.SchemaName, DebugMessage.MessageLevel, DebugMessage.MessageText
FROM DebugMessage WITH (NOLOCK)
INNER JOIN StandardDataset ON DebugMessage.DatasetId = StandardDataset.DatasetId
WHERE StandardDataset.SchemaName = N'State'
ORDER BY MessageDateTime DESC

The output will be as below:

[screenshot: debug messages showing the aggregation backlog]

It looks like my aggregation process is way behind!

Since the DWH maintenance runs as a sequence, when a procedure earlier in the chain fails (let's say the event staging), the later ones won't be run. So I looked in the debug table for other messages containing ‘failed’.

Notice that we are going a little off track now; our main problem was the incomplete state report, but we are now looking at the events. Just follow me.

SELECT TOP (100) DATEADD(hh, 1, DebugMessage.MessageDateTime) AS CET_datetime,
       StandardDataset.SchemaName, DebugMessage.MessageLevel, DebugMessage.MessageText
FROM DebugMessage WITH (NOLOCK)
INNER JOIN StandardDataset ON DebugMessage.DatasetId = StandardDataset.DatasetId
WHERE MessageText LIKE '%Failed%'
ORDER BY MessageDateTime DESC

Oh no, this is not good:

[screenshot: ‘Failed to process stagingarea’ debug messages]

It looks like the event staging is broken. The error is:

Failed to process stagingarea for data set. Error 777971002, Procedure EventProcessStaging, Line 398, Message: Sql execution failed. Error 515, Level 16, State 2, Procedure DebugMessageInsert, Line 15, Message: Cannot insert the value NULL into column 'MessageText', table 'OperationsManagerDW.dbo.DebugMessage'; column does not allow nulls. INSERT fails.

Hmmm. When I look at the error I see that the debug procedure that writes to the debug log itself has a problem writing a debug message. A strange error… so we have to find the real one. I opened the stored procedure EventProcessStaging, and there I found a BUG… brrrr. The variable @InsertTableName is not set to a value before it is used as part of the debug message variable @MessageText. Concatenating NULL to a string yields NULL, so the insert of the debug message fails. I fixed this by moving the SQL that assigns @InsertTableName above the first use of that variable (this applies to SCOM 2007 and 2012!). I have already raised a bug request with Microsoft through the TAP program. This only occurs when you set the debug level above 3.
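To illustrate the failure mode, here is a minimal reproduction of the pattern; this is not the actual stored procedure code:

-- Minimal reproduction: the variable is used before it is assigned.
DECLARE @InsertTableName sysname, @MessageText nvarchar(max)

SET @MessageText = N'Inserting rows into ' + @InsertTableName -- @InsertTableName is still NULL, so @MessageText becomes NULL
SET @InsertTableName = N'Event.EventStage' -- the assignment arrives too late (table name is illustrative)

-- DebugMessageInsert then tries to insert the NULL @MessageText into a NOT NULL column and fails.
-- The fix: move the assignment above the first use.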

For you this simply means: don't set the debug level above 3, or fix this stored procedure at your own risk (as I have done ;-)). In our case the debug level had already been above 3 for the State dataset for the last month. And because the event processing was breaking the whole maintenance run (bad architecture, sorry), all my state staging was stopped, which caused my empty reports.

So now that we know this, we can go back to the real issue: fixing the missing states.

Fix:

This part is now very simple, and if you Bing it you can find plenty of info. One I found very helpful in the beginning was this troubleshooter from the Microsoft SCOM team ( http://blogs.technet.com/b/operationsmgr/archive/2011/09/06/standard-dataset-maintenance-troubleshooter-for-system-center-operations-manager-2007.aspx )

What I do most of the time is:

Set an Enabled = false override on the rule “Standard Data Warehouse Data Set maintenance rule” for all instances of “Standard Data Set”.

[screenshot: the override on the Standard Data Set maintenance rule]

Now I am really sure no maintenance process is running.

And I run my own maintenance process every minute. Because I know catching up the state data aggregation will take some time, and I don't want to create problems for the other datasets (performance, events, …), I also run the important ones in the same script.

Open a query to the DWH and run:

USE [OperationsManagerDW]
DECLARE @DataSet uniqueidentifier

PRINT 'Starting loop of StandardDatasetMaintenance jobs'

WHILE (1 = 1)
BEGIN
    PRINT GETDATE()
    PRINT 'Start StandardDatasetMaintenance'

    SET @DataSet = (SELECT DatasetId FROM StandardDataset WHERE SchemaName = 'Perf')
    EXEC StandardDatasetMaintenance @DataSet

    SET @DataSet = (SELECT DatasetId FROM StandardDataset WHERE SchemaName = 'Exchange2010')
    EXEC StandardDatasetMaintenance @DataSet

    SET @DataSet = (SELECT DatasetId FROM StandardDataset WHERE SchemaName = 'State')
    EXEC StandardDatasetMaintenance @DataSet

    SET @DataSet = (SELECT DatasetId FROM StandardDataset WHERE SchemaName = 'Event')
    EXEC StandardDatasetMaintenance @DataSet
    --EXEC StandardDatasetProcessStaging @DataSet

    PRINT GETDATE()
    PRINT 'End StandardDatasetMaintenance'
    WAITFOR DELAY '00:01'
END

Now check the debug log regularly to see whether the state aggregation has caught up.

You can also use the query below:

-- Check the first and last aggregation time of the data still to be processed;
-- the First and Last dates must be equal.
DECLARE @DataSet uniqueidentifier
SET @DataSet = (SELECT DatasetId FROM StandardDataset WHERE SchemaName = 'State')

SELECT AggregationTypeId, COUNT(*) AS 'Count',
       MIN(AggregationDateTime) AS 'First', MAX(AggregationDateTime) AS 'Last'
FROM StandardDatasetAggregationHistory
WHERE DatasetId = @DataSet AND LastAggregationDurationSeconds IS NULL
GROUP BY AggregationTypeId

So let's check whether the process is running okay. Simply rerun the report. The output will be:

[screenshot: the report filling in again]

Looks like it's all going to be alright. Just be patient.

DO NOT FORGET: after the states are complete, remove the overrides; otherwise you will for sure have the same problem, and more, again.

Not to be continued:

I really hope not, because in my case the DWH size is pushing 1 TB, and at that size it can be very complex and tricky to solve these sorts of problems. So if Mr. Murphy is reading this: skip my place, please…

Happy SCOMMING,

Michel Kamp