Archive | January, 2012

Drag & Drop SCOM Authoring Meets Visio [Quick Guide]

20 Jan


I was one of the lucky ones who, a couple of months ago, got to test the “System Center 2012 Visio Management Pack Designer” announced yesterday, also known as “Visio MP Designer” or “VMPD”. It was difficult for me NOT to talk about this before, because it was strictly under NDA. Now that the word is out publicly (thanks to Baelson Duque), here it is…

First things first: the screenshots may differ from the RTM version. And since I am a bit in a hurry, this will be a quick, non-detailed post. I will do a deep dive later if you want me to.

What is it?

With the SCOM Visio MP Designer you can drag and drop your MP together. Picture it: just open a Visio template and drag your monitoring targets into it, then drag your monitors or rules onto the sheet and connect them. Generate the MP and import it into SCOM. That is really all there is to it, no lie.

What you need, at a minimum:

  1. Visio 2010 Professional if you only want to drag and drop the MP;
    Visio 2010 Premium if you also want to generate the MP for SCOM import (this is due to the schema validation features in Premium).
  2. An installed SCOM console
  3. A SCOM (2007/2012) environment

… the other requirements can be found in the documentation.

Hands on:

Open Visio and create a new document



You will now see an empty MP document


Now we can start designing. In this example I will create an MP that monitors a 2-tier application: a web server as front end and a database as backend. The backend will be monitored with a DB watcher, and the front end will get a website watcher plus a performance monitor with a threshold.

Go to the SCOM MP Shapes region and select the Quick Shapes tab.


Now we select the 2-tier Application Model and drag & drop it into the document.


And see what happens, wow: we have almost completed our management pack…



Now we have to customize the MP. Be sure to pin the Properties window on the right side of the screen. It is used to specify the parameter values for the monitors/rules you create.

First we select the MP shape on the left corner. Here we specify the MP name and version number.


Attentive readers will notice that this template isn’t correct: the front-end role contains the DB watcher, which should be connected to the backend role instead.

So we simply delete the connector line


and move the shape and create a new connector line.


The next step is to add a new website monitor to the front-end role. This is done by selecting the My Front End Role shape and pressing the blue arrow, then selecting the web monitor icon.


The web monitor will be added and connected to the front-end role target.
The next step is to specify which website to monitor. We select the just-created website monitor shape and again change the values in the parameter window.


We do the same for the Database Monitor.


The result so far is an MP that monitors website and database availability. Of course we still have to specify the targets for the monitors; this is done with a discovery. Just select the My Front End Role and My Backend Role shapes and change the parameters. In this case the discovery is based on the presence of a registry key. Of course you must create these registry keys on the servers so the discovery can do its work.




The almost-final situation will be…


But I am not done yet. I also want to monitor one performance counter on the front-end role. We simply drag and drop a performance monitor onto the sheet… and yes, “simply” again; normally I hate this word because I like the deep guru stuff… ;-)




We select the counter shape and change the properties.



The last step is to connect the shape to the correct front-end role target.

This is done with the MP roll-up Monitor Connector shape. Drag and drop it onto the sheet and connect its ends to the front-end role and the counter shape.



Tatadaaaa . The end result will be:



The last, yes really the last, step is to simply (again) press “Generate Management Pack”.


The Check Diagram step runs first and reports any compilation errors. Fix the errors and press “Check Diagram” to check again.



If all errors are gone, you can press “Generate Management Pack” again, and you will be asked where to save the MP.


Supply it and press OK.


Quickly go to this directory and you will see 2 files.


And you are done! Import it into your (lab) SCOM environment.

Wait a moment, not so fast. We want proof!

Okay, okay… let’s open the MP in the MP Authoring console. We will see all the generated target classes, views, discoveries and monitors. Really, really simple:






Now Open your SCOM console and import the MP.


Go to the Monitoring Pane and look at the generated views.


If you have configured the correct discovery registry keys and the correct performance counter, you will see monitoring data coming in and alerts coming out…

The End:

That was it for now …

Please let me know if you liked this blog and if you want more….

Happy scomming.

Michel Kamp

MMS 2012

11 Jan

Got the green light for MMS! Today I booked the yearly trip to Las Vegas for MMS 2012. This year will be full of new System Center news!
For sure I will be attending the TAP/MVP meetings and everything related to SCOM.
Viva las vegas….

The law of Murphy rules….again

6 Jan

Today Mr. Murphy visited my SCOM test lab environment. This resulted in a broken storage box, which in turn resulted in losing all my SCOM test virtual servers… Please, Mr. Murphy, skip my place next time… 😥

How To: Configure the Phone 7 Native SCOM Console

5 Jan

Since the beginning of March there has been a Phone 7 SCOM console available in the Phone 7 Marketplace. This Phone 7 app makes it possible to view/manage alerts, states and performance data from your existing SCOM management group, all without any additional installation on your site. Great, isn’t it? How is this done? Since the native SDK client isn’t supported on the Phone 7 platform, I used a proxy translator service. This proxy is placed in the (Azure) cloud, so the Phone 7 app accesses the cloud proxy, and the proxy accesses your SCOM SDK service to get the requested data.

BTW: There is a trial version of this app with full functionality, but all data is limited to 5 records, it shows ‘buy me’ popup messages, and the app works for 20 days.


Now that this is explained, here is a short walkthrough:

1) Preparing your SCOM management group:

    1. Publish the SDK service port to the internet. This can be done in ISA or any other firewall by creating a rule that publishes port 5724 of your RMS server.
    2. Test whether step 1 succeeded by opening a CMD prompt and typing:
      Telnet <published external IP> 5724
      This should give you an empty screen. If you get a time-out, check step 1 again.
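If the telnet client isn’t installed (it is an optional feature and off by default on newer Windows versions), the same reachability check can be sketched in a few lines of Python; the host below is a placeholder, not a real endpoint:

```python
import socket

def sdk_port_open(host, port=5724, timeout=5):
    """Return True when a TCP connection to the published SCOM SDK port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# sdk_port_open("198.51.100.10")  # substitute your published external IP
```

A successful connection is equivalent to the empty telnet screen; `False` corresponds to the time-out case.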

2) Configuring the Phone 7 app:

    1. Download the SCOM console from the Marketplace:
      1. On your phone, open the Marketplace app and search for “opsmgr scom console”.
      2. Tap the app and select Try (or Buy).
      3. Wait until the app is downloaded and installed.
    2. Open the app by pressing the icon below.

Small mobile app tile

3. Once started, it will bring you to the settings menu.

The first page shown is the “SCOM SDK” page.

This is also the only page you have to change; all other settings are not needed.


Now change the following:

Address: the <published external IP> that you configured and tested in step 1 of “Preparing your SCOM management group”.

UserName: the SCOM user that has rights to log on to SCOM. This is normally the same user you use for the original native SCOM console logon.

Domain: the domain this user lives in.

Password: self-explanatory…

Now press “Test SDK LOGON” and wait until a popup message shows “Logon to SDK SUCCESS”.

And you are ready to rumble….

3) Using the SCOM console app:

On the main screen, press the “logon” sign in the menu (you can do this automatically by configuring ‘auto login’ on the settings page).
Wait until you get a popup.
Now you are in the main menu.




3.1 Showing Alerts:

Now press ‘Alert view’ to view alerts.
The alert page will load.
Now press one of the colored radio buttons at the top. For example, if you want to see all alerts of the configured period (see the settings page), press the gray ‘all’ one.
In the status bar at the bottom, a ‘loading’ message appears, followed by ‘ready’ and ‘count’ messages when the alerts are loaded.

See an example below:


Now you can scroll through the alerts and press one. Once selected, you can scroll/swipe to the next screen to see the properties and health model.

TIP: You can see whether an alert was generated by a rule or a monitor by looking for the words “monitor / rule” on the overview or properties page.

TIP: You can also go directly to all related target performance counters by scrolling/swiping to the “actions” screen and selecting “show performance counters”.

TIP: You can close the alert on the actions page.





3.2 Showing target states:

Go back to the main menu to look at the states.
Press the “State View” button.
The same workflow as above applies:
press one of the colored radio buttons to get the targets with the selected health state.




Now you can select one and scroll/swipe to the health explorer etc…


3.3 Showing performance:

Go back to the main menu to look at the performance.
Press on the “Performance View”  button.
Type part of the counter name you want to display, and below it part of the target name.

For example, type in “type Counter Name here”: processor time
and in “Type target name here”: Server1 (or leave it empty)

Check one or more checkboxes in the results shown and a performance graph will be rendered.
Scroll/swipe right or left to see this graph.




3.4 Showing the Dashboard:

Go back to the main menu to look at the dashboard feature.
Press the “Dashboard” button.
Select one (or more) widgets to display.
Scroll/swipe right or left to see the widget output.





4. That’s it.


Enjoy your SCOM Phone 7 app, and please let me know if you need more assistance; any feedback is welcome!


Michel Kamp

[HOWTO] Failed to store data in the Data Warehouse : Arithmetic overflow error converting expression to data type float.

5 Jan


This post describes how to fix the error below:


Date and Time: 6-10-2011 10:25:45
Log Name: Operations Manager
Source: Health Service Modules
Event Number:
Logging Computer:

Description:
Failed to store data in the Data Warehouse. Exception ‘SqlException’: Sql execution failed. Error 777971002, Level 16, State 1, Procedure StandardDatasetAggregate, Line 424, Message: Sql execution failed. Error 777971002, Level 16, State 1, Procedure PerformanceAggregate, Line 149, Message: Sql execution failed. Error 8115, Level 16, State 2, Procedure -, Line 1, Message: Arithmetic overflow error converting expression to data type float. One or more workflows were affected by this. Workflow name: Microsoft.SystemCenter.DataWarehouse.StandardDataSetMaintenance Instance name: Performance data set Instance ID: {7547DA11-6328-54C6-00D6-C0729CD41CD8} Management group: SCOM01




It seems the aggregation of the hourly performance tables went wrong. But which table are we talking about?

Okay, looking at the error message, the stored procedure that caused the error is PerformanceAggregate. Looking into this procedure, you will see the SQL code below that is causing the problem.


SET @Statement =
        'INSERT ' + QUOTENAME(@SchemaName) + '.' + QUOTENAME(@InsertTableName) + ' ('
      + '  [DateTime]'
      + ' ,PerformanceRuleInstanceRowId'
      + ' ,ManagedEntityRowId'
      + ' ,SampleCount'
      + ' ,AverageValue'
      + ' ,MinValue'
      + ' ,MaxValue'
      + ' ,StandardDeviation'
      + ')'
      + ' SELECT'
      + '    CONVERT(datetime, ''' + CONVERT(varchar(50), @IntervalStartDateTime, 120) + ''', 120)'
      + '   ,PerformanceRuleInstanceRowId'
      + '   ,ManagedEntityRowId'
      + '   ,COUNT(*)'
      + '   ,AVG(SampleValue)'
      + '   ,MIN(SampleValue)'
      + '   ,MAX(SampleValue)'
      + '   ,ISNULL(STDEV(SampleValue), 0)'
      + ' FROM ' + QUOTENAME(@SchemaName) + '.' + QUOTENAME(@CoverViewName)
      + ' WHERE ([DateTime] >= CONVERT(datetime, ''' + CONVERT(varchar(50), @IntervalStartDateTime, 120) + ''', 120))'
      + '   AND ([DateTime] < CONVERT(datetime, ''' + CONVERT(varchar(50), @IntervalEndDateTime, 120) + ''', 120))'
      + ' GROUP BY PerformanceRuleInstanceRowId, ManagedEntityRowId'


Since we are investigating a performance issue, @SchemaName and @CoverViewName resolve to ‘Perf.vPerfRaw’. Now we have to determine the correct values for @IntervalStartDateTime and @IntervalEndDateTime. This can be done by looking at the StandardDatasetAggregationHistory table with the query below: we know it’s a performance issue, so we look up the performance aggregate dataset and then find the last good aggregation for that dataset in the history table.


declare @DataSetId as uniqueidentifier

select top 1 @DataSetId = SDS.DataSetId
from dbo.StandardDatasetAggregation SDA
inner join StandardDataSet SDS on SDS.DataSetId = SDA.DataSetId
where SDA.BuildAggregationStoredprocedureName like '%PerformanceAggregate%'

select *
from dbo.StandardDatasetAggregationHistory SDA
inner join dbo.StandardDataset SD on SD.DatasetId = SDA.DatasetId
where DirtyInd = 1 and SDA.DataSetId = @DataSetId
order by AggregationDateTime ASC


And voilà, the first record below gives me the data period that caused my error:




So I set @IntervalStartDateTime = 2011-09-28 22:00:00 and @IntervalEndDateTime = 2011-09-30 04:01:28, and the query to execute is born:


SELECT CONVERT(datetime, '2011-09-29 22:00:00', 120)
      ,PerformanceRuleInstanceRowId
      ,ManagedEntityRowId
      ,COUNT(*)
      ,AVG(SampleValue)
      ,MIN(SampleValue)
      ,MAX(SampleValue)
      ,ISNULL(STDEV(SampleValue), 0)
FROM Perf.vPerfRaw
WHERE ([DateTime] >= CONVERT(datetime, '2011-09-29 21:00:00', 120))
  AND ([DateTime] < CONVERT(datetime, '2011-09-30 04:01:00', 120))
GROUP BY PerformanceRuleInstanceRowId, ManagedEntityRowId

Hmm, but this query gives me:




Yes, this is exactly what we want. Now we are going to move the end date earlier so we can isolate the record giving the overflow. Doing this, I find the error period is ‘2011-09-29 21:05:45’.
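Manually shrinking the end date is really a binary search over the time window. A rough Python sketch of that procedure (the `interval_fails` callback is hypothetical; in practice it would run the aggregation SELECT for each sub-interval and report whether the overflow is thrown):

```python
from datetime import datetime, timedelta

def find_bad_interval(start, end, interval_fails, step=timedelta(minutes=5)):
    """Binary-search the aggregation window until the failing sub-interval
    is no wider than `step`. `interval_fails(s, e)` is a hypothetical helper
    that runs the aggregation SELECT for [s, e) and reports whether the
    arithmetic overflow occurs."""
    while end - start > step:
        mid = start + (end - start) / 2
        if interval_fails(start, mid):
            end = mid           # the bad sample is in the first half
        else:
            start = mid         # otherwise it must be in the second half
    return start, end

# Toy stand-in: pretend the bad sample sits at 2011-09-29 21:05:45.
bad = datetime(2011, 9, 29, 21, 5, 45)
lo, hi = find_bad_interval(datetime(2011, 9, 28, 22, 0, 0),
                           datetime(2011, 9, 30, 4, 1, 28),
                           lambda s, e: s <= bad < e)
# lo and hi now bracket the bad sample within a 5-minute window
```

This assumes a single bad interval; with several bad samples you would repeat the hunt after fixing each one.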

So next is to hunt down this bad record:


SELECT PerformanceRuleInstanceRowId
      ,ManagedEntityRowId
      ,SampleValue
FROM Perf.vPerfRaw
WHERE ([DateTime] = CONVERT(datetime, '2011-09-29 21:05:45', 120))
ORDER BY SampleValue


Wooaaw found it:




Hmm, STDEV doesn’t like this extremely large negative number.

Let’s look what this function does:

Returns the statistical standard deviation of all values in the specified expression. May be followed by the OVER clause.


Okay, we could investigate what value we should change it to, but I am not willing to spend too much time. Since the value is soooo large, I assume the measurement was false. So I will change it to 0.

If you still wanted to investigate, you could use the query below and lower the E+217 exponent until the query runs okay:


declare @float as Float
set @float = -1.1031304526204E+217
select @float
select STDEV(@float)

P.S. E+154 is the maximum you can apply ;-)
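That E+154 ceiling is no accident: SQL Server’s float is an IEEE-754 double, and STDEV has to square the samples, so any value whose square exceeds the double maximum (about 1.8E+308) overflows. Python floats are the same doubles, so the arithmetic can be illustrated outside SQL:

```python
import math
import sys

bad_sample = -1.1031304526204e217

# Squaring the bad sample overflows a double -- this is what happens
# inside STDEV's sum-of-squares computation.
print(bad_sample * bad_sample)        # inf

# The largest magnitude whose square still fits is sqrt(DBL_MAX),
# roughly 1.34e154 -- matching the E+154 limit observed above.
print(math.sqrt(sys.float_info.max))  # ~1.34e154
```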


As I said, I am going to change these bad records to 0. Since we are looking at a view, and this view isn’t updatable, we first have to find the underlying table containing the data. This isn’t so hard.

The query below gives you the raw performance table containing the records.

The datasetid is the same @DataSetId you got back from the first query.


SELECT [StandardDatasetTableMapRowId]
      ,[TableGuid]
      ,[StartDateTime]
      ,[EndDateTime]
FROM [OperationsManagerDW].[dbo].[StandardDatasetTableMap]
where datasetid = '1B1F0F44-A208-4145-8E59-9121357D78F2'
and [AggregationTypeId] = 0
and '2011-09-29 21:05:45' between [StartDateTime] and [EndDateTime]


Running this query gives you the table we have to change:




Yes yes finally we are there. Now we are going to update the records. The table to use is : Perf.PerfRaw_E721608C35A44620AE3E0DE028C3C5A2

So the update query is:


update Perf.PerfRaw_E721608C35A44620AE3E0DE028C3C5A2
set SampleValue = 0
WHERE ([DateTime] = CONVERT(datetime, '2011-09-29 21:05:45', 120))
  and SampleValue = -1.1031304526204E+217


The result is , as expected:




Let’s check whether it’s fixed now:


SELECT CONVERT(datetime, '2011-09-29 22:00:00', 120)
      ,PerformanceRuleInstanceRowId
      ,ManagedEntityRowId
      ,COUNT(*)
      ,AVG(SampleValue)
      ,MIN(SampleValue)
      ,MAX(SampleValue)
      ,ISNULL(STDEV(SampleValue), 0)
FROM Perf.vPerfRaw
WHERE ([DateTime] >= CONVERT(datetime, '2011-09-29 21:05:45', 120))
  AND ([DateTime] < CONVERT(datetime, '2011-09-29 21:07:50', 120))
GROUP BY PerformanceRuleInstanceRowId, ManagedEntityRowId


Gives me back:






But which targets and workflows caused this bad data? Take the ManagedEntityRowId and PerformanceRuleInstanceRowId values from the bad records.

Below is the query for the guilty targets:

select * from dbo.ManagedEntity
where ManagedEntityRowId in (103425,103424,103426)


And below is the query for the related workflows:

SELECT PerformanceRule.ObjectName, PerformanceRule.CounterName, PerformanceRuleInstance.InstanceName
FROM PerformanceRule
INNER JOIN PerformanceRuleInstance ON PerformanceRule.RuleRowId = PerformanceRuleInstance.RuleRowId
WHERE (PerformanceRuleInstance.PerformanceRuleInstanceRowId = 346638)




Michel Kamp


Solving the Gateway 20071 event

5 Jan

After installing a gateway or agent using a certificate, you keep getting the 20071 event, saying: “The OpsMgr Connector connected to opstapms01, but the connection was closed immediately without authentication taking place. The most likely cause of this error is a failure to authenticate either this agent or the server. Check the event log on the server and on the agent for events which indicate a failure to authenticate.”

You have double-checked every usual solution: the certificate chain, network connection, ports, setup, etc. But what is causing this, and how do you solve it?

A very important step is to check the registry. Go to the OpsMgr registry hive and check whether the FQDN is supplied for NetworkName and AuthenticationName. If these don’t match your certificate’s common name, you will get the 20071 event.

Just change it and restart the OpsMgr service.

Happy SCOM’ing

Michel Kamp


Doing the magic on the SNMP tables the right way 1/2

5 Jan

The Problem:

Every time I had to make a special management pack that required data from SNMP, I bumped against the SNMP table format. In my opinion the SNMP table structure isn’t very easy to read or to process in a management pack. And don’t even get me started on SNMP table indexing with a ‘foreign key’ into another SNMP table based on the OID value rather than the OID number! This can become a nightmare for a SCOM MP developer.

So one possible solution:

First of all, the solution I describe below was one of my first versions, part of the prototype. I have since updated it into more advanced/paid/production versions. Because of this, I can only share the prototype code with you. This means I changed some names and the code may be incomplete, but I am sure most of you can use it as a starting point.

I wanted to get the SNMP table data in a generic way so I could process it in a generic way. For example, normally when you want to discover an application component based on SNMP, you do an SNMP probe. But that only lets you generate the new component class; you have to fire up a second SNMP probe to fill in the other properties you want. A better way is to get all the property information in one SNMP probe. Yes, of course you use an SNMP walk probe for this, but be aware of the data it returns: it can be tricky to map the SNMP result to rows. See the picture below:


So my solution was to make this simpler by reading the SNMP table walk and mapping it to a real table in PowerShell. Then I could manipulate this PowerShell table very easily and build up the object discovery properties, or use it in monitors/rules.

The module will look like this:

Do the SNMP walk –> convert OID values it to arrays –> map the arrays to row records –> output the row records as property bag data

A very simple approach, isn’t it?
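Before the MP plumbing, it helps to see the core trick in isolation. Here is a hedged Python sketch of the same walk-to-rows mapping (all OIDs and values below are invented for illustration; the real module does this in PowerShell over the probe’s XML output):

```python
# Hypothetical flat walk result under an assumed root table OID: in a real
# SNMP table, column OIDs that share a row suffix belong to the same row.
root = "1.3.6.1.4.1.9999.1.1"

walk = [
    (root + ".1.1", "1"),     # column 1 (index),  row 1
    (root + ".1.2", "2"),     # column 1,          row 2
    (root + ".2.1", "gw-a"),  # column 2 (name),   row 1
    (root + ".2.2", "gw-b"),  # column 2,          row 2
    (root + ".3.1", "up"),    # column 3 (status), row 1
    (root + ".3.2", "down"),  # column 3,          row 2
]

def snmp_walk_to_rows(walk, root, columns):
    """Group a flat SNMP walk into row records keyed by the row suffix."""
    rows = {}
    for oid, value in walk:
        suffix = oid[len(root) + 1:]           # e.g. "2.1" -> column 2, row 1
        col, _, row_key = suffix.partition(".")
        if int(col) in columns:
            rows.setdefault(row_key, {})[int(col)] = value
    return [rows[key] for key in sorted(rows)]

rows = snmp_walk_to_rows(walk, root, columns={1, 2, 3})
print(rows[0])  # {1: '1', 2: 'gw-a', 3: 'up'}
```

Each resulting dict corresponds to one property bag in the module below, with the column number as the key.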

So the Datamodule would look like this:

 <DataSourceModuleType ID="DataSource.SNMP2Table" Accessibility="Internal" Batching="false"> 
 <xsd:element minOccurs="1" name="IP" type="xsd:string" /> 
 <xsd:element minOccurs="1" name="CommunityString" type="xsd:string" /> 
 <xsd:element minOccurs="1" name="Version" type="xsd:integer" /> 
 <xsd:element minOccurs="1" name="OID" type="xsd:string" /> 
 <xsd:element minOccurs="1" name="Walk" type="xsd:boolean" /> 
 <xsd:element minOccurs="1" name="Root_Table_OID" type="xsd:string" /> 
 <xsd:element minOccurs="1" name="Fields_To_Show" type="xsd:string" /> 
 <xsd:element minOccurs="1" name="Debug" type="xsd:string" /> 
 <xsd:element minOccurs="0" name="IntervalSeconds" type="xsd:integer" /> 
 <OverrideableParameter ID="Debug" Selector="$Config/Debug$" ParameterType="string" /> 
 <OverrideableParameter ID="IntervalSeconds" Selector="$Config/IntervalSeconds$" ParameterType="string" /> 
 <ModuleImplementation Isolation="Any"> 
 <DataSource ID="get" TypeID="DataSource.SnmpGet"> 
 <Value VariantType="8" /> 
 <ProbeAction ID="maptotable" TypeID="Windows!Microsoft.Windows.PowerShellPropertyBagProbe"> 
                  ## convert snmp table to property bags
                  ## Michel Kamp
                  ## v1.0.2

                  param ([string] $SNMP_RESULTS , [string] $Root_Table_OID , [string] $Fields_To_Show , [string] $Debug)

                  try {
                      $guid = [guid]::NewGuid()
                      $Debug_File = $env:systemroot + "\temp\" + $guid + "_" + $Root_Table_OID
                      #$Debug_File = "c:\temp\" + $guid + "_" + $Root_Table_OID
                      If ($Debug -eq $true) { $SNMP_RESULTS | Out-File -FilePath ($Debug_File + "_RESULTSxml.xml") }

                      [xml] $data = $SNMP_RESULTS
                      $Root_OID = $Root_Table_OID
                      $OID_Fields = $Fields_To_Show.Split(",")

                      If ($Debug -eq $true) { $data.InnerXml | Out-File -FilePath ($Debug_File + "_Dataxml.xml") }

                      $api = New-Object -comObject 'Mom.ScriptAPI'

                      ## fill the array with the SNMP values, one entry per requested column
                      $ValueArray = @{}
                      foreach ($ID in $OID_Fields) {
                          $ValueArray.Add($ID, ($data.DataItem.SnmpVarBinds.SnmpVarBind | Where-Object { $_.OID -like ($Root_OID + '.' + $ID + '.*') }))
                      }

                      # clear the debug output file
                      If ($Debug -eq $true) { Write-Output "PropertyBag" | Out-File -FilePath ($Debug_File + "_outputPropertyBag.xml") }

                      ## create the property bags
                      ## loop over the SNMP records; the first field in $OID_Fields gives the record count
                      for ($x = 0; $x -le ($ValueArray[$OID_Fields[0]].Count - 1); $x++) {
                          $bag = $api.CreatePropertyBag()
                          foreach ($y in $OID_Fields) {
                              If ($Debug -eq $true) {
                                  ("OID" + $y + ":" + $ValueArray[$y][$x].OID) | Out-File -Append -FilePath ($Debug_File + "_outputPropertyBag.xml")
                                  $ValueArray[$y][$x].Value.InnerXml | Out-File -Append -FilePath ($Debug_File + "_outputPropertyBag.xml")
                              }
                              $bag.AddValue("OID" + $y, $ValueArray[$y][$x].Value.InnerXml)
                          }
                          # return the bag values to the workflow
                          $bag
                      }
                  }
                  # catch all exceptions thrown by one of those commands
                  catch {
                      If ($Debug -eq $true) { $Error | Out-File -FilePath ($Debug_File + "catch.xml") }
                  }
                  ## end
 <Node ID="maptotable"> 
 <Node ID="get" /> 


The output data will be a property bag collection, with one SNMP row record per property bag.

So the next part is mapping it to discovery data, giving this workflow:

SNMP row record property bags –> map to target properties –> output target discovery data

And again, a very straightforward approach.

This datamodule will look like this:

 <DataSourceModuleType ID="DataSource.SNMP2Table.DiscoveryData" Accessibility="Internal" Batching="false"> 
 <xsd:element minOccurs="1" name="IP" type="xsd:string" /> 
 <xsd:element minOccurs="1" name="CommunityString" type="xsd:string" /> 
 <xsd:element minOccurs="1" name="Version" type="xsd:integer" /> 
 <xsd:element minOccurs="1" name="OID" type="xsd:string" /> 
 <xsd:element minOccurs="1" name="Root_Table_OID" type="xsd:string" /> 
 <xsd:element minOccurs="1" name="Fields_To_Show" type="xsd:string" /> 
 <xsd:element minOccurs="1" name="InstanceSettings" type="SettingsType" /> 
 <xsd:element minOccurs="1" name="FilterExpression" type="ExpressionType" /> 
 <xsd:element minOccurs="1" name="FilterString" type="xsd:string" /> 
 <xsd:element minOccurs="1" name="ClassId" type="xsd:string" /> 
 <xsd:element minOccurs="1" name="Debug" type="xsd:string" /> 
 <xsd:element minOccurs="0" name="IntervalSeconds" type="xsd:integer" /> 
 <OverrideableParameter ID="Debug" Selector="$Config/Debug$" ParameterType="string" /> 
 <OverrideableParameter ID="IntervalSeconds" Selector="$Config/IntervalSeconds$" ParameterType="int" /> 
 <ModuleImplementation Isolation="Any"> 
 <DataSource ID="table" TypeID="DataSource.SNMP2Table"> 
 <ConditionDetection ID="mapfiltered" TypeID="System!System.Discovery.FilteredClassSnapshotDataMapper"> 
 <Node ID="mapfiltered"> 
 <Node ID="table" /> 


At this point we have all we need:

1. a module to get the SNMP row record table

2. a module to convert these records to discovery data

What are we missing? … yeah, the top discovery rule ;-)

It will look like:

Schedule every x minutes –> get the SNMP table as discovery data –> create the targets with filled-in property values

And the discovery rule will look like below (remember, it’s prototype code):


 <Discovery ID="Discovery.Gateway" Enabled="true" Target="Server" ConfirmDelivery="true" Remotable="true" Priority="Normal"> 
 <DiscoveryClass TypeID="Gateway" /> 
 <DiscoveryRelationship TypeID="Server2Gateway" /> 
 <DataSource ID="getdata" TypeID="DataSource.SNMP2Table.DiscoveryData"> 
 <FilterString /> 

Hmm, but… yes, the only values you have to specify are:

<OID></OID>: the top OID of the SNMP walk. Most of the time this will be the top OID of the SNMP table.
<Root_Table_OID></Root_Table_OID>: the table you want to convert. Most of the time this will be the <OID> value.
<Fields_To_Show>1,2,3,4,5</Fields_To_Show>: the SNMP columns you want to get the values of.

You see, very simple again. The power is in the <Fields_To_Show> parameter: you specify the column index numbers of the values you want to use.



The value of each field number specified will be returned and can be read out with $Data/Property[@Name=’OIDX‘]$, where X is the field number. So OID1 is the g3gatewayNumber field. Use these values for the <InstanceSettings> elements.

At this point you will have a generic way to read out SNMP tables.


This was the end of part 1. Reading out one SNMP table is not so hard, as you have noticed. But what about reading two or more SNMP tables and joining them with a ‘foreign key’ based on the OID value (so NOT the OID key)? Now it becomes more interesting, doesn’t it? Part 2 will be all about that…


Happy Scom’ing..

Michel Kamp