Tag Archives: Authoring

Extending HP network devices with CPU and Memory counters

25 May

Hi,

I prepared this blog post a long time ago but never published it, so now it's time to do it. Since we all know that SCOM 2012 has built-in network device monitoring, we of course want to use it. This works perfectly, except if you are using devices that are not certified by SCOM. In that case you will not get the CPU and memory counters or the fan/PSU states. For example, most of the ProCurve network devices from HP are left in the dark. So…

The problem

We also want to monitor CPU and memory usage on HP network devices.

The solution

I will keep it simple and clear. I will demonstrate the steps using VSAE and give you the MP as a bonus at the end. I will use the VSAE management pack templates just to illustrate their power.

The steps will be:

1. [Classes] create the memory and CPU classes

We simply make use of the memory and processor classes that are already present in the System Center network library, and create two classes based on them:

[screenshot: the two new CPU and memory class definitions in VSAE]

 

2. [Datasources] create the memory and CPU discovery datasources

Now we have to create the datasources for the discovery of the CPU and memory targets.

[screenshot: the CPU and memory discovery datasources in VSAE]

The most important thing is to set the device key correctly to the parent node. Otherwise the relationship between the new CPU/memory class and the network device node will not be set, and you will not see any CPU/memory targets. This can sometimes be a 'trial and error' process.
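
To check whether the relationship did get set, you can list the objects related to a discovered node from PowerShell. A minimal sketch, assuming the standard network node class System.NetworkManagement.Node and a hypothetical device display name:

# Hypothetical device display name filter; adjust it to your own switch
$node = Get-SCOMClass -Name "System.NetworkManagement.Node" |
        Get-SCOMClassInstance |
        Where-Object { $_.DisplayName -like "*procurve*" } |
        Select-Object -First 1

# The new CPU and memory instances should show up among the node's related objects
$node.GetRelatedMonitoringObjects() | Select-Object DisplayName, FullName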

3. [Discovery’s] create discovery’s for the CPU and Memory targets

Now we have the discovery datasource we can create the 2 discovery rules for the CPU and Memory targets. We will use the VSAE discovery templates for this. Just simple add new ‘Discovery’ template and you add the 2 rules below.

[screenshot: adding the Discovery template items in VSAE]

Create the two new discoveries and specify the correct datasources.

[screenshot: the two discovery rules]

Assign the datasources we created above to the correct discovery rule, and fill in the OIDs that match the HP processor and memory OIDs.

[screenshot: the discovery rule configuration with the OIDs]

If you now import the MP, you will see the CPU and memory targets created under an HP node. So continue to the next step.
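
A quick way to verify this from PowerShell (the class display names below are placeholders; use whatever names you gave your new classes):

Get-SCOMClass -DisplayName "HP ProCurve Processor", "HP ProCurve Memory" |
    Get-SCOMClassInstance |
    Select-Object DisplayName, Path, HealthState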

4. [Rules] create the memory and CPU collection rules

So now we can build the part we actually wanted to see: the memory and CPU counters. Simply use the 'Add New Item' > 'Rule (Performance Collection)' template and create the two new rules. We are going to build on the performance collection rules that are already built in for the network nodes.

[screenshots: the two new performance collection rules in VSAE]

The most important part is to specify the correct OIDs for the memory and CPU counters in the datasource configuration. See below:

CPU: .1.3.6.1.4.1.11.2.14.11.5.1.9.6.1.0
Memory: .1.3.6.1.4.1.11.2.14.11.5.1.1.2.2.1.1.7 and .1.3.6.1.4.1.11.2.14.11.5.1.1.2.2.1.1.5
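
A quick sanity check before (or after) authoring is to confirm the device actually answers on these OIDs, for example with the Net-SNMP command-line tools called from PowerShell. This is only a sketch: it assumes snmpget/snmpwalk are on the PATH, and the switch address and community string are placeholders.

$device    = "10.1.2.3"
$community = "public"

# The CPU counter is a scalar OID, so a plain get works:
snmpget  -v 2c -c $community $device ".1.3.6.1.4.1.11.2.14.11.5.1.9.6.1.0"

# The memory OIDs look like table columns, so walk them to see all instances:
snmpwalk -v 2c -c $community $device ".1.3.6.1.4.1.11.2.14.11.5.1.1.2.2.1.1.7"
snmpwalk -v 2c -c $community $device ".1.3.6.1.4.1.11.2.14.11.5.1.1.2.2.1.1.5"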

I am not going to do a deep dive on this; you can simply refer to the downloadable project source at the end of this post to check out the details.

The results

Now import the MP and you will see your most-wanted performance counters! Open the Network Summary Dashboard and the Node dashboard to check out the CPU usage.

[screenshot: CPU usage on the Node dashboard]

And the ‘free Memory (Percent)’ performance view for the memory usage.

[screenshot: the 'free Memory (Percent)' performance view]

And below, the relationship diagram.

[screenshot: the relationship diagram for the HP node]

The End

The next step could be to add monitors that alert on high memory or CPU usage. I am not going to give you this bonus, because it's better to get some VSAE practice yourself… You can do this the same way you did the collection rules; there is a template for monitors as well.

Download VSAE project (for the diehards): http://sdrv.ms/11mkq9j

Download Management Pack example: http://sdrv.ms/11mku8S

Happy Scomming

Michel Kamp


Discoveries at your demand, yes sir!

27 Apr

Hi,

This time a short post, but I think it could be useful for SCOM admins.

The challenge.

We all know that one of the big strengths of SCOM is the self-maintenance of the monitoring targets. SCOM uses discoveries for this, which run at regular intervals. Let's say you install a new SQL database instance on a server that already has a SCOM agent on it. Normally you have to wait up to 4 hours before the new database instance is discovered. Yes, you can speed this up by restarting the SCOM agent, but now we have a better way.

Analyze

First, all the credits go to the SCOM product team itself. It seems the feature was already built in, but making it widely known was somehow left behind. There is an agent task called 'Trigger On Demand Discovery' that can help you out, but operating this task by hand can be painful.

The solution

I have written a PowerShell script that does the hard work for you. Running this script with the correct discovery and target will result in an immediate run of that discovery, so you no longer have to wait for the 4-hour discovery interval.

How it works:

1) Fill in $OMserver with the SCOM SDK server FQDN.

2) Fill in $discoveryname with the display name of the discovery rule you want to trigger. Just copy and paste the display name from the Authoring pane in the SCOM console. See the picture below.

[screenshot: the discovery rule display name in the Authoring pane]

3) Fill in $TargetDisplayName with the display name of the instance this discovery should run against. You can find the target class by looking at the target of the discovery rule from step 2.

[screenshot: the target class of the discovery rule]

Then look up the instance in the Discovered Inventory view.

[screenshot: the Discovered Inventory view for the target class]

The name “servicemanager.systemcenter.local” is the target display name to use.

By the way, of course you can also use PowerShell to do this for you…
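
For example, something along these lines lists the candidate display names for the discovery's target class (it is the same lookup the script below does):

$discovery = Get-SCOMDiscovery -DisplayName "Service Manager Management Server Properties Discovery"
Get-SCOMClass -Id $discovery.Target.Id |
    Get-SCOMClassInstance |
    Select-Object DisplayName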

How the script works: it triggers the discovery task, waits for the result, and displays it. Be sure to look at the Output property of the task result, because the run is only okay when it contains:

[screenshot: the expected task output]

The script.

## =======================================================
## Trigger SCOM discovery for a discovery rule and target
## =======================================================
## Michel Kamp

Import-Module OperationsManager
## OM SDK server
$Omserver = "scom01.systemcenter.local"
## discovery display name
$discoveryname = "Service Manager Management Server Properties Discovery"
## target display name
$TargetDisplayName = "servicemanager.systemcenter.local"

## ----------------------------------------------------
## MAIN
## ----------------------------------------------------
# connect to the OM server
$credentials = Get-Credential
New-SCOMManagementGroupConnection -ComputerName $Omserver -Credential $credentials

# get the task to execute
$task = Get-SCOMTask -Name Microsoft.SystemCenter.TriggerOnDemandDiscovery
# build the override parameters
$discovery = Get-SCOMDiscovery -DisplayName $discoveryname
$TargetInstanceId = (Get-SCOMClass -Id $discovery.Target.Id | Get-SCOMClassInstance | ?{ $_.DisplayName -eq $TargetDisplayName }).Id.ToString()
$DiscoveryId = $discovery.Id.ToString()
$override = @{ DiscoveryId = $DiscoveryId; TargetInstanceId = $TargetInstanceId }
$instance = Get-SCOMClass -Name Microsoft.SystemCenter.ManagementServer | Get-SCOMClassInstance | ?{ $_.DisplayName -eq $Omserver }
# run the task
$task_run = Start-SCOMTask -Task $task -Instance $instance -Override $override

# wait for the result (poll the task result status, not the task definition)
while ( @("Scheduled", "Started") -contains (Get-SCOMTaskResult -BatchID $task_run.BatchId).Status )
{
    Write-Output "Waiting..."
    Start-Sleep -Seconds 2
}
# show the task output
Get-SCOMTaskResult -BatchID $task_run.BatchId

## ----------------------------------------------------
## end script
## ----------------------------------------------------

The End.

I have already done some more investigation on this topic, because I think that if you can do it for a discovery, you can also do it for every workflow that contains a timed interval trigger module. Imagine being able to trigger every rule or monitor on demand… so cool, and so handy while debugging. When I have it working I will of course share it with you, the community.

Happy SCOMMING

Michel Kamp

Get a grip on the DWH aggregations

24 Mar

 

The problem

If you run availability or performance reports with an aggregation type of daily or hourly, the reports are empty. This problem is described a lot on the web, and I have also written a couple of blog posts on how to fix it. But as you know, we use SCOM to monitor stuff, so why not monitor this aggregation processing and alert when a processing delay occurs? That's our mission today…

Analyze

Using SQL Server Management Studio and a SQL query on the data warehouse DB, we can read out the aggregation backlog. The query looks like this:

SELECT AggregationTypeId, DatasetId,
       (SELECT SchemaName FROM StandardDataSet WHERE DatasetId = StandardDataSetAggregationHistory.DatasetId),
       COUNT(*) AS 'Count', MIN(AggregationDateTime) AS 'First', MAX(AggregationDateTime) AS 'Last'
FROM StandardDataSetAggregationHistory
WHERE LastAggregationDurationSeconds IS NULL
GROUP BY AggregationTypeId, DatasetId
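
If you prefer PowerShell over Management Studio, a minimal sketch (assuming Invoke-Sqlcmd from the SQL module is available; the server and database names are placeholders):

$query = @"
SELECT AggregationTypeId, DatasetId,
       (SELECT SchemaName FROM StandardDataSet WHERE DatasetId = StandardDataSetAggregationHistory.DatasetId),
       COUNT(*) AS 'Count', MIN(AggregationDateTime) AS 'First', MAX(AggregationDateTime) AS 'Last'
FROM StandardDataSetAggregationHistory
WHERE LastAggregationDurationSeconds IS NULL
GROUP BY AggregationTypeId, DatasetId
"@
# Run the backlog query against the data warehouse and show the result
Invoke-Sqlcmd -ServerInstance "SQLDW01" -Database "OperationsManagerDW" -Query $query | Format-Table -AutoSize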

The output shows how many aggregations still have to be processed per aggregation type (20 = hourly, 30 = daily).

[screenshot: the query output]

So in this case we have no problem. But I have seen SCOM environments where the state aggregations were so far behind that it was almost impossible to fix. This brings up a point: the state aggregations in particular are the tricky ones. If you have many 'flipping' monitors there will be a lot of state changes, and therefore a lot of aggregation data to process. This processing takes a lot of SQL CPU power and also disk space. In most of these cases, free tempdb data space or the transaction log was the root cause of the failure.

Solution

In SCOM we have a target for every dataset that gets aggregated. This target is named 'Standard Data Set'. You can find it here:

[screenshot: the Standard Data Set targets in the console]

If you compare the screenshot with the results in your SCOM console, you will notice that you don't have the green healthy state… and that's why you are reading this post. So let's add this state.

I wanted to give every dataset a health state based on how many aggregations it still has to process. So we make a monitor that executes the query above for every dataset and changes the health state when a threshold is hit. We will also add a rule so that this aggregation backlog count is collected for a trend graph.

I have used VSAE for this, and I will not share the code but only the idea. Why not? I believe you have to know what you are doing, and by copying and pasting you don't learn anything if you haven't done it once yourself from start to end.

The real work

Open a new VSAE project and add an empty MP fragment and a PowerShell script fragment.

[screenshot: the VSAE project items]

Then you make a datasource that reads the aggregation backlog count. This is done using PowerShell and the SQL snap-in.

[screenshot: the datasource definition]

The PowerShell script takes the GUID of the dataset (a property of the target) as input and outputs a property bag with the aggregation backlog counts (daily and hourly). I made the script somewhat intelligent by reading the data warehouse location from the registry.
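
Since the original code is deliberately not shared, here is only a minimal sketch of what such a script could look like. The registry value names, the use of Invoke-Sqlcmd and the property bag property names are my assumptions, not the original implementation.

param($DatasetId)   # GUID of the dataset, passed in by the datasource module

$api = New-Object -ComObject "MOM.ScriptAPI"

# Assumption: these setup values are available in the registry on a management server
$setup    = Get-ItemProperty "HKLM:\SOFTWARE\Microsoft\Microsoft Operations Manager\3.0\Setup"
$dwServer = $setup.DataWarehouseDBServerName
$dwDb     = $setup.DataWarehouseDBName

$query = @"
SELECT AggregationTypeId, COUNT(*) AS Behind
FROM StandardDataSetAggregationHistory
WHERE LastAggregationDurationSeconds IS NULL AND DatasetId = '$DatasetId'
GROUP BY AggregationTypeId
"@
$rows = Invoke-Sqlcmd -ServerInstance $dwServer -Database $dwDb -Query $query

# One property bag with the hourly (20) and daily (30) backlog counts
$hourly = ($rows | Where-Object { $_.AggregationTypeId -eq 20 }).Behind
$daily  = ($rows | Where-Object { $_.AggregationTypeId -eq 30 }).Behind

$bag = $api.CreatePropertyBag()
$bag.AddValue("HourlyBehind", [int]$hourly)
$bag.AddValue("DailyBehind",  [int]$daily)
$api.Return($bag)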

Now we use this datasource in a monitor type to create a three-state monitor. And since we have created a datasource module, we can also create a rule that collects the aggregation backlog for the trend graph. Yes, I know, this is easier to type than to do…

Below, a snapshot of the datasource module:

[screenshot: the datasource module]

And below, a snapshot of the monitor type:

[screenshot: the monitor type]

And the monitor. Create one for hourly (not shown) and one for daily.

[screenshot: the daily monitor definition]

Finally, for trending we have to create a collection rule.

[screenshot: the collection rule]

Notice that the monitor and the collection rule both target "Microsoft.SystemCenter.DataWarehouse.DataSet", alias 'Standard Data Set', and notice the Run As profile.

The result

When you have constructed the MP and built/deployed it, you will see two extra monitors on the Standard Data Set targets, as shown above. Open Health Explorer to see if all is okay.

[screenshot: Health Explorer for a dataset]

The dataset above has had a problem. To see some details, open the performance view and you will see the aggregation backlog trend.

[screenshot: the aggregation backlog performance trend]

In this case the hourly state aggregations were way behind, so I followed one of my own blog posts to solve it: manually executing the state aggregation process in a loop to speed up the processing.

The End.

Yes, I know this post is a bit 'cloudy' and not something you can just download and import. But I hope that by sharing the idea I have triggered you to try it yourself.

Happy SCOMMING!

Michel Kamp

No Mr. SCOM, I told you it's not an availability state report but a performance state report I want!

16 Jan

Sometimes you wonder why not all reports are as they should be. For example, you are of course familiar with the availability report: just pick a target and a period and you will get a nice report telling you when a target went unhealthy.

[screenshot: a standard availability report]

The challenge.

Okay, nice… but I want a report based not on the availability data but on the performance, configuration, or security data. But wait, isn't this built into the availability report?

Looking at the report description:

Description:

“For every managed object within System Center Operations Manager, monitors configured in each of the disciplines below determine an objects time in state and then roll-up to an objects overall health. The availability report by default shows an objects time in state as per the monitors that roll-up within the availability discipline.

Entity health

Availability   <= this you get

Configuration <= this you want

Performance <= ..

Security <= ..

Oh no, it looks like it is not. So yes, it's a real challenge. That's the way we like it.

Solution

The availability report seems to have been intended for this, but in the end it looks like the SCOM program team decided to lock it to 'availability' only. I know this because when you look into the report definition you will see:

[screenshot: the hidden MonitorName parameter in the report definition]

So the report uses only the availability rollup as state calculation data, AND this parameter is hidden, even for gurus like us. How dare they ;-)

So we can solve it in several ways. The root of the solution is that we want to change the value 'System.Health.AvailabilityState' to 'System.Health.PerformanceState', 'System.Health.ConfigurationState', or 'System.Health.SecurityState' to get the report state type we want.
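
Whichever route you take below, the edit itself boils down to a text replacement of that value. For the exported-MP route (option 2) a quick sketch could be (the file name is a placeholder):

# Replace the state type in the exported, unsealed report MP and write it back
(Get-Content ".\MyFavoriteReports.xml") -replace "System.Health.AvailabilityState", "System.Health.PerformanceState" |
    Set-Content ".\MyFavoriteReports.xml"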

1) Export the report from Reporting Services and change the hidden value to false. Import the report, open it in the SCOM console, and edit the MonitorName value to, for example, System.Health.PerformanceState. Run the report and you are done.

2) Make a normal report run using the unmodified availability report and save it to a management pack. Now export the MP, open it in Notepad, and edit the MP.

3) Make a normal report run using the unmodified availability report and save it as a favorite. Now open SQL Server Management Studio, look up the report in the dbo.favoritereport table, and change the ReportParameterValues with the modified parameters.

I know you are thinking right now… what would you do Michel…

I would go for option 1, because I would also change the report definition to have a proper name such as 'Performance availability' etc., and save it under a different name. Be aware that if you only change the report parameter to hidden = false and don't change the report file name, the next time you import a new service pack or MP version your report could be overwritten… So, having said that, go for the safer route and choose option 2.

Let’s go!

1) So make the normal availability report in the SCOM console

2) Save it to an MP

[screenshot: saving the report to a management pack]

3) Export the MP

4) Edit the MP with Notepad

[screenshot: the edited report parameter in the MP XML]

5) Import it into SCOM (leave the MP version number unchanged)

6) Wait a few minutes and you will see the report in the console

Below, the end result. Also notice that you can still click through to the sub-reports, and that these are also of the state type you wanted!

[screenshot: the resulting performance state report]

Yes, I know you will have to do this for each of the three other report types, because you can't change the monitor type at runtime. In the end the decision is yours whether to use option 1, 2, or 3.

The End

Every time I tell myself: make a short blog post! And every time I notice that I am failing… But who cares… (yes okay… my wife) ;-)

Happy scomming!

Michel Kamp

Your SCOM SDK Query cheat Sheet

12 Dec

While reading the title, didn't you get the feeling that you were back in the lecture hall… This time I will post a small cheat sheet that you can use with the SCOM Excel workbook I posted last time.

So let’s start.

Using the SDK, you can retrieve SCOM-related data using a sort of SQL-like query language. Defining a query can be tricky if you don't know what all the possibilities are. And definitely remember that the property names are case sensitive!

And please don't think I found this all out myself; all the credits go to the MS product team at http://msdn.microsoft.com/en-us/library/hh328943.aspx. I only wanted to put it all on one page.

The query syntax can be found here: http://msdn.microsoft.com/en-us/library/bb437603.aspx (this is a 2007 page, but it is also valid for 2012).
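
As a quick example of the criteria syntax in practice, Get-SCOMAlert accepts a -Criteria string built from exactly these (case-sensitive) Alerts properties:

# All open critical alerts, newest first
Get-SCOMAlert -Criteria "ResolutionState = 0 AND Severity = 2" |
    Sort-Object TimeRaised -Descending |
    Select-Object TimeRaised, Name, MonitoringObjectDisplayName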

Below you will see the types of data you can get. I can already tell you that the vNext SCOMExcelWorkbook will be extended with most of the data types shown below.

Events

  • Id
  • OriginalId
  • MonitoringObjectId
  • MonitoringClassId
  • MonitoringObjectName
  • MonitoringObjectDisplayName
  • MonitoringObjectPath
  • MonitoringObjectFullName
  • MonitoringRuleId
  • PublisherName
  • Number
  • CategoryId
  • User
  • Channel
  • LevelId
  • LoggingComputer
  • TimeGenerated
  • TimeAdded
  • EventData
  • EventParameters

 

Alerts

  • Id
  • Name
  • Description
  • MonitoringObjectId
  • MonitoringClassId
  • MonitoringObjectName
  • MonitoringObjectDisplayName
  • MonitoringObjectPath
  • MonitoringObjectFullName
  • IsMonitorAlert
  • ProblemId
  • MonitoringRuleId
  • ResolutionState
  • Priority
  • Severity
  • Category
  • Owner
  • ResolvedBy
  • TimeRaised
  • TimeAdded
  • LastModified
  • LastModifiedBy
  • TimeResolved
  • TimeResolutionStateLastModified
  • CustomField1
  • CustomField2
  • CustomField3
  • CustomField4
  • CustomField5
  • CustomField6
  • CustomField7
  • CustomField8
  • CustomField9
  • CustomField10
  • TicketId
  • Context
  • ConnectorId
  • LastModifiedByNonConnector
  • MonitoringObjectInMaintenanceMode
  • MonitoringObjectHealthState
  • ConnectorStatus
  • NetbiosComputerName
  • NetbiosDomainName
  • PrincipalName
  • AlertParams
  • SiteName
  • MaintenanceModeLastModified
  • StateLastModified

Management Packs

  • Id
  • Sealed
  • Name
  • FriendlyName
  • Version
  • KeyToken
  • LastModified
  • TimeCreated
  • DisplayName
  • Description
  • VersionId

Performance

  • Id
  • MonitoringObjectId
  • MonitoringClassId
  • MonitoringObjectName
  • MonitoringObjectDisplayName
  • MonitoringObjectPath
  • MonitoringObjectFullName
  • MonitoringRuleId
  • InstanceName
  • ObjectName
  • CounterName
  • HasSignature
  • LearningMonitoringRuleId
  • LastSampledValue

Diagnostics

  • Id
  • Name
  • Accessibility
  • ManagementPackId
  • Enabled
  • TargetMonitoringClassId
  • MonitorId
  • ExecuteOnState
  • Remotable
  • Category
  • Timeout
  • TimeAdded
  • LastModified
  • DisplayName
  • Description
  • HasNonCategoryOverride

Discoveries

  • Id
  • Name
  • Accessibility
  • ManagementPackId
  • Enabled
  • TargetMonitoringClassId
  • ConfirmDelivery
  • Remotable
  • Category
  • Priority
  • TimeAdded
  • LastModified
  • DisplayName
  • Description
  • HasNonCategoryOverride

Rules

  • Id
  • Name
  • ManagementPackId
  • TargetMonitoringClassId
  • Enabled
  • Category
  • DisplayName
  • Description
  • ConfirmDelivery
  • TimeAdded
  • LastModified
  • Remotable
  • Priority
  • DiscardLevel
  • HasNonCategoryOverride

 

Monitors

  • Id
  • Name
  • ManagementPackId
  • Accessibility
  • DisplayName
  • Description
  • TargetMonitoringClassId
  • Algorithm
  • AlgorithmParameter
  • MonitoringRelationshipClassId
  • Category
  • MemberMonitorId
  • ParentMonitorId
  • IsUnitMonitor
  • IsInternalRollupMonitor
  • IsExternalRollupMonitor
  • AlertOnState
  • AlertAutoResolve
  • AlertPriority
  • AlertMessage
  • HasNonCategoryOverride

Recoveries

  • Id
  • Name
  • Accessibility
  • ManagementPackId
  • Enabled
  • TargetMonitoringClassId
  • MonitorId
  • ResetMonitor
  • ExecuteOnState
  • MonitoringDiagnosticId
  • Remotable
  • Category
  • Timeout
  • TimeAdded
  • LastModified
  • DisplayName
  • Description
  • HasNonCategoryOverride

Tasks

  • Id
  • Name
  • ManagementPackId
  • TargetMonitoringClassId
  • Enabled
  • Category
  • DisplayName
  • Description
  • Accessibility
  • Remotable
  • Timeout
  • TimeAdded
  • LastModified

TaskResults

  • BatchId
  • ErrorCode
  • ErrorMessage
  • Id
  • LastModified
  • LocationId
  • ManagementGroup
  • ManagementGroupId
  • Output
  • ProgressData
  • ProgressLastModified
  • ProgressMessage
  • ProgressValue
  • RunningAs
  • Status
  • StatusLastModified
  • SubmittedBy
  • TargetClassId
  • TargetObjectId
  • TaskId
  • TimeFinished
  • TimeScheduled
  • TimeStarted

Overrides

  • Id
  • Name
  • ManagementPackId
  • TargetId
  • ContextId
  • ContextObjectId
  • Value
  • Enforced
  • DisplayName
  • Description
  • TimeAdded
  • LastModified

 

Happy SCOMMING

Michel Kamp

Mr. SCOM, don't play hide and seek with views and MPs!

7 Oct

Hi, a short post on how to find the management pack in which a view is stored.

Problem:

You see a view in the Operations console and you want to know in which MP this view is stored.

Solutions:

1) You could export all MPs and do a text search for the view display name. This will give you the language pack display element, and the file containing it is the MP you are looking for. But you will probably get multiple matches, because view display names aren't unique.

2) We could simply open the native console and use the search feature. Type in the view name you are looking for and voilà… look at the Management Pack field and you have the answer. Two things about this: (1) it's a flat list, so it's hard to get an overview, and (2) it's way too easy a solution for me ;-)

[screenshot: the native console search results with the Management Pack column]

3) So… use a mix of the SDK and PowerShell to solve this. Since we don't have an OM12 PowerShell cmdlet to get views, we have to be creative, and that's the way I like it… Open PowerShell on the OM2012 server and copy and paste the script below. Change the parameter $viewdisplayname to the view display name you are looking for (you can use PowerShell wildcards) and run it. You will then see a grid view with all the matches found. It also returns the folder that contains the view. Using the filter option of the grid view you can quickly find the correct view.

$viewdisplayname = "*Alert*"
$rms = "localhost"
# Load the SDK assemblies
[System.Reflection.Assembly]::LoadWithPartialName("Microsoft.EnterpriseManagement.OperationsManager")
[System.Reflection.Assembly]::LoadWithPartialName("Microsoft.EnterpriseManagement.Core")

$Script:MG = New-Object Microsoft.EnterpriseManagement.ManagementGroup($rms)
$views = $MG.Presentation.GetViews() |
    Where-Object { $_.DisplayName -like $viewdisplayname } |
    Select-Object @{Name = 'ViewName'; Expression = { $_.DisplayName }},
                  @{Name = 'Folder';   Expression = { $_.GetFolders()[0].DisplayName }},
                  @{Name = 'Mp_Name';  Expression = { $_.ManagementPackName }}
$views | Out-GridView

The End.

Next week I will try to do a post on how to extend SCOM locations so you can display any target on the overview world map instead of only a web availability target, and even integrate it with an interactive Bing map in a SCOM 2012 widget…

Happy SCOMMING.

Don’t let the data warehouse write action fool you!

26 Sep

Yes, I know, it's been a long time since I posted. Vacation and work pressure were, and still are, the reason. Nevertheless, I will share a problem I ran into that looks small but can have a big impact.

The problem.

You have a workflow with a PowerShell/VBS script that outputs a property bag with performance data. The performance data contains multiple counters, and it is written to both the OpsDB and the DWH DB. All seems to work: you see the performance counters in the native console, so you assume the DWH write action is also writing the same counters to the DWH… but when you look in the DWH, you see that only one counter is stored, even though you are sure the workflow output contained multiple counters…

Below, the performance counters in the native console. All 4 perf counters are there (yellow) in the ops console.

[screenshot: the four counters in the operations console performance view]

Below, the DWH.

You see only one rule (yellow); this was the first counter in the property bag.

[screenshot: the DWH query result showing only one counter]

What could be wrong ???

Analyze

The workflow looks like this:

<Rule ID="TransferFile.ReadSec" Enabled="true" Target="FileTransferClient" ConfirmDelivery="true" Remotable="true" Priority="Normal" DiscardLevel="100">
  <Category>Custom</Category>
  <DataSources>
    <DataSource ID="SMBFileTransfer" TypeID="FileTransfer">
      <Debug>false</Debug>
      <IntervalSeconds>300</IntervalSeconds>
    </DataSource>
  </DataSources>
  <WriteActions>
    <WriteAction ID="ToOps" TypeID="SystemCenter!Microsoft.SystemCenter.CollectPerformanceData" />
    <WriteAction ID="ToDWH" TypeID="SCDW!Microsoft.SystemCenter.DataWarehouse.PublishPerformanceData" />
  </WriteActions>
</Rule>

1. First, check what the property bag output from the SMBFileTransfer datasource contains:

<Collection>
<DataItem type="System.PropertyBagData" time="2012-09-20T19:55:28.0638791+02:00" sourceHealthServiceId="0F6B7345-4C8E-CFAF-BD7A-454E6C94B77F"><Property Name="Instance" VariantType="8">c:\destionation</Property><Property Name="Counter" VariantType="8">Read Transfer Kbyte Sec</Property><Property Name="Value" VariantType="5">14450.625</Property></DataItem>
<DataItem type="System.PropertyBagData" time="2012-09-20T19:55:28.1079971+02:00" sourceHealthServiceId="0F6B7345-4C8E-CFAF-BD7A-454E6C94B77F"><Property Name="Instance" VariantType="8">c:\destionation</Property><Property Name="Counter" VariantType="8">Read Transfer Total Sec</Property><Property Name="Value" VariantType="5">0.3</Property></DataItem>
<DataItem type="System.PropertyBagData" time="2012-09-20T19:55:28.1079971+02:00" sourceHealthServiceId="0F6B7345-4C8E-CFAF-BD7A-454E6C94B77F"><Property Name="Instance" VariantType="8">c:\destionation</Property><Property Name="Counter" VariantType="8">Write Transfer Kbyte Sec</Property><Property Name="Value" VariantType="5">14450.625</Property></DataItem>
<DataItem type="System.PropertyBagData" time="2012-09-20T19:55:28.1079971+02:00" sourceHealthServiceId="0F6B7345-4C8E-CFAF-BD7A-454E6C94B77F"><Property Name="Instance" VariantType="8">c:\destionation</Property><Property Name="Counter" VariantType="8">Write Transfer Total Sec</Property><Property Name="Value" VariantType="5">0.3</Property></DataItem>
</Collection>

You can see multiple counter values that have to be converted to performance data.
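
For reference, a script that produces multiple property bags like the ones above usually looks something like the following sketch (this is not the original FileTransfer datasource script; the values are taken from the output above):

# Emit one property bag per counter; a performance mapper later turns each bag
# into a System.Performance.Data item using the Instance/Counter/Value properties.
$api = New-Object -ComObject "MOM.ScriptAPI"

$counters = @{
    "Read Transfer Kbyte Sec"  = 14450.625
    "Read Transfer Total Sec"  = 0.3
    "Write Transfer Kbyte Sec" = 14450.625
    "Write Transfer Total Sec" = 0.3
}

foreach ($name in $counters.Keys) {
    $bag = $api.CreatePropertyBag()
    $bag.AddValue("Instance", "c:\destionation")
    $bag.AddValue("Counter",  $name)
    $bag.AddValue("Value",    $counters[$name])
    $api.AddItem($bag)
}
$api.ReturnItems()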

2. Now we check the converted performance data using the Workflow Analyzer. See below; it looks okay.

Recieved DataItem <DataItem type="System.Performance.Data" time="2012-09-20T19:55:28.1109383+02:00" sourceHealthServiceId="0F6B7345-4C8E-CFAF-BD7A-454E6C94B77F"><TimeSampled>2012-09-20T19:55:28.0638791+02:00</TimeSampled><ObjectName>SMB File Transfer</ObjectName><CounterName>Read Transfer Kbyte Sec</CounterName><InstanceName>c:\destionation</InstanceName><IsNull Type="Boolean">false</IsNull><Value>14450.625</Value></DataItem>

Recieved DataItem <DataItem type="System.Performance.Data" time="2012-09-20T19:55:28.1109383+02:00" sourceHealthServiceId="0F6B7345-4C8E-CFAF-BD7A-454E6C94B77F"><TimeSampled>2012-09-20T19:55:28.1079971+02:00</TimeSampled><ObjectName>SMB File Transfer</ObjectName><CounterName>Read Transfer Total Sec</CounterName><InstanceName>c:\destionation</InstanceName><IsNull Type="Boolean">false</IsNull><Value>0.3</Value></DataItem>

Recieved DataItem <DataItem type="System.Performance.Data" time="2012-09-20T19:55:28.1109383+02:00" sourceHealthServiceId="0F6B7345-4C8E-CFAF-BD7A-454E6C94B77F"><TimeSampled>2012-09-20T19:55:28.1079971+02:00</TimeSampled><ObjectName>SMB File Transfer</ObjectName><CounterName>Write Transfer Kbyte Sec</CounterName><InstanceName>c:\destionation</InstanceName><IsNull Type="Boolean">false</IsNull><Value>14450.625</Value></DataItem>

Recieved DataItem <DataItem type="System.Performance.Data" time="2012-09-20T19:55:28.1109383+02:00" sourceHealthServiceId="0F6B7345-4C8E-CFAF-BD7A-454E6C94B77F"><TimeSampled>2012-09-20T19:55:28.1079971+02:00</TimeSampled><ObjectName>SMB File Transfer</ObjectName><CounterName>Write Transfer Total Sec</CounterName><InstanceName>c:\destionation</InstanceName><IsNull Type="Boolean">false</IsNull><Value>0.3</Value></DataItem>

3. The next step is to check the write actions. This also looks okay; the "ToDWH" write action should write the data to the DWH.

<WriteActions>
  <WriteAction ID="ToOps" TypeID="SystemCenter!Microsoft.SystemCenter.CollectPerformanceData" />
  <WriteAction ID="ToDWH" TypeID="SCDW!Microsoft.SystemCenter.DataWarehouse.PublishPerformanceData" />
</WriteActions>

All looks okay….

Solution

After some mailing with the OM development team the answer was found: writing multiple counters to the DWH from one property bag output is NOT supported! The DWH write module has a one-to-one reference map, which means one rule can contain only one counter. Be aware that no error is reported when this happens.

The only way to solve this is to create one rule for every performance counter you want to store in the DWH. Use a condition detection in the rule to filter for the correct performance counter. See below for an example.

<Rule ID="TransferFile.ReadSec" Enabled="true" Target="FileTransferClient" ConfirmDelivery="true" Remotable="true" Priority="Normal" DiscardLevel="100">
  <Category>Custom</Category>
  <DataSources>
    <DataSource ID="SMBFileTransfer" TypeID="OPS.SMB.Performance.FileTransfer">
      <Debug>false</Debug>
      <IntervalSeconds>300</IntervalSeconds>
    </DataSource>
  </DataSources>
  <ConditionDetection ID="Filter" TypeID="System!System.ExpressionFilter">
    <Expression>
      <SimpleExpression>
        <ValueExpression>
          <XPathQuery Type="String">CounterName</XPathQuery>
        </ValueExpression>
        <Operator>Equal</Operator>
        <ValueExpression>
          <Value Type="String">Read Transfer Total Sec</Value>
        </ValueExpression>
      </SimpleExpression>
    </Expression>
  </ConditionDetection>
  <WriteActions>
    <WriteAction ID="ToOps" TypeID="SystemCenter!Microsoft.SystemCenter.CollectPerformanceData" />
    <WriteAction ID="ToDWH" TypeID="SCDW!Microsoft.SystemCenter.DataWarehouse.PublishPerformanceData" />
  </WriteActions>
</Rule>

THE END

Maybe this will help you. Till next Time.

Happy SCOMMING

Michel Kamp