Archive | VisualStudio Authoring Console Extensions

[BUG] VSAE with a PowerShell $Data parameter

27 May

Hi,

This time a short post on a possible bug I detected in VSAE.

Problem:

You create a PowerShell script and want to include it using the $IncludeFileContent/&lt;script_name&gt;$ tag.

For example

<ProbeAction
  ID="Probe"
  TypeID="Windows!Microsoft.Windows.PowerShellProbe"
  RunAs="SystemCenter!Microsoft.SystemCenter.DatabaseWriteActionAccount">

  <ScriptName>FixTopNQuery.ps1</ScriptName>

  <ScriptBody>$IncludeFileContent/FixTopNQuery.ps1$</ScriptBody>

  <TimeoutSeconds>300</TimeoutSeconds>

  <StrictErrorHandling>true</StrictErrorHandling>

</ProbeAction>

(Added a screenshot below)

The FixTopNQuery.ps1 is a PS script added to the project as “Embedded Resource”.

Now you compile the project and you get a compile error:

Error 1176: The configuration specified for Module Probe is not valid.

: Incorrect expression specified: $DataSet=New-Object System.Data.DataSet

. Unable to resolve this expression. Check the expression for errors. (Hints: Check for correct character casing (upper case/lower case), mismatched "$" signs, double quotes ("), square brackets "[" or "]"). Here is a sample expression: $Data/EventNumber$

(Path = OpsLogix.IMP.Oracle.Dashboards.Task.FixTopNQuery/Probe) C:\Program Files (x86)\MSBuild\Microsoft\VSAC\Microsoft.SystemCenter.OperationsManager.targets 255 6 Dashboards

(Added a screenshot below)


Hmmm… why?

 

Analyze

After a lot of trial and error I found out that the problem is in the included PowerShell script, specifically in this line:

$DataSet=New-Object System.Data.DataSet

Hmm, I hear you thinking: what's wrong with this statement? That's exactly what I was thinking… But when I changed the parameter name, the compile was successful…

 

Solution

Do not start a variable name with $Data in the PowerShell script. It looks like a reserved prefix in VSAE: most likely the MP compiler interprets anything of the form $Data…$ as a runtime configuration expression (like the sample $Data/EventNumber$) and then fails to resolve it.
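To illustrate (the replacement variable name below is my own choice), simply renaming the variable so it no longer starts with $Data makes the build pass:

```powershell
# Before (breaks the VSAE build - the MP compiler tries to resolve
# "$DataSet...$" as a configuration expression like $Data/EventNumber$):
#   $DataSet = New-Object System.Data.DataSet

# After (compiles fine) - any name not starting with $Data will do:
$resultSet = New-Object System.Data.DataSet
```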

 

I will also share this issue with the VSAE product team.

Happy Scomming

Michel Kamp

Extending HP network devices with CPU and Memory counters

25 May

Hi,

This blog post I had prepared a long time ago but never published, so now it's time to do it. Since we all know that SCOM 2012 has built-in network device monitoring, we of course want to use it. This works perfectly, except if you are using devices that are not certified by SCOM. In that case you will not get the CPU and memory counters and the fan/PSU states. For example, most of the ProCurve network devices from HP are left in the dark. So…

The problem

We also want to monitor CPU and memory usage on HP network devices.

The solution

I will keep it simple and clear. I will demonstrate the steps using VSAE and give you the MP as a bonus at the end. I will use the VSAE management pack templates just to illustrate their power.

The steps will be:

1. [Classes] create the memory and CPU classes

We simply make use of the memory and processor classes that are already present in the System Center network library. So we create two classes based on them:

[screenshot]
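In MP XML this boils down to something like the fragment below. The class IDs and the Network alias are my own invention for illustration; the base classes are the processor and memory classes from the System Center network management library:

```xml
<ClassTypes>
  <!-- CPU class, derived from the network library processor class -->
  <ClassType ID="HP.Device.Processor" Accessibility="Public" Abstract="false"
             Base="Network!System.NetworkManagement.Processor" Hosted="true" Singleton="false" />
  <!-- Memory class, derived from the network library memory class -->
  <ClassType ID="HP.Device.Memory" Accessibility="Public" Abstract="false"
             Base="Network!System.NetworkManagement.Memory" Hosted="true" Singleton="false" />
</ClassTypes>
```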

 

2. [Datasources] create the memory and CPU discovery datasources

Now we have to create the datasources for the discovery of the CPU and memory targets.

[screenshot]

The most important thing is that you set the device key correctly to the parent node. Otherwise the relationship between the new CPU/memory class and the network device node will not be set, and you will not see any CPU/memory targets. This can sometimes be a trial-and-error process.

3. [Discoveries] create discoveries for the CPU and Memory targets

Now that we have the discovery datasource we can create the two discovery rules for the CPU and Memory targets. We will use the VSAE discovery templates for this. Simply add a new 'Discovery' template item and add the two rules below.

[screenshot]

Create the two new discoveries and specify the correct datasources.

[screenshot]

Assign the datasources we created above to the correct discovery rule, and fill in the correct OIDs to match the HP processor and memory OIDs.

[screenshot]

If you now import the MP you will see the CPU and Memory instances created under an HP node. So continue to the next step.

4. [Rules] create the memory and CPU collection rules

Now we can make the stuff we actually wanted to see: the memory and CPU counters. We simply use the 'Rule (Performance Collection)' item template and create the two new rules, building on the performance collection datasources already built in for network nodes.

[screenshot]

[screenshot]

The most important thing is to specify the correct OIDs for the memory and CPU counters in the datasource configuration. See below:

CPU: .1.3.6.1.4.1.11.2.14.11.5.1.9.6.1.0
Memory: .1.3.6.1.4.1.11.2.14.11.5.1.1.2.2.1.1.7 and .1.3.6.1.4.1.11.2.14.11.5.1.1.2.2.1.1.5
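Inside the rule's datasource configuration, such an OID ends up in an SNMP variable binding. A sketch with the CPU OID from above (only the OID value comes from this post; the surrounding layout follows the standard SnmpVarBinds shape and may differ slightly in your module):

```xml
<SnmpVarBinds>
  <SnmpVarBind>
    <!-- HP ProCurve CPU usage counter -->
    <OID>.1.3.6.1.4.1.11.2.14.11.5.1.9.6.1.0</OID>
    <Syntax>0</Syntax>
    <Value VariantType="8" />
  </SnmpVarBind>
</SnmpVarBinds>
```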

I am not going to do a deep dive on this; you can simply refer to the downloadable project source at the end of this post to check out the details.

The results

Now you import the MP and you will see your most wanted performance counters! Open the Network Summary dashboard and the Node dashboard for CPU usage to check it out.

[screenshot]

And the 'free Memory (Percent)' performance view for the memory usage.

[screenshot]

And below the relationship diagram.

[screenshot]

The End

The next step could be to add monitors to alert on high memory or CPU usage. I am not going to give you this bonus, because it's better to get some VSAE practice yourself… You can do this the same way you did the collection rules; there is a template for monitors as well.

Download VSAE project (for the diehards): http://sdrv.ms/11mkq9j

Download Management Pack example: http://sdrv.ms/11mku8S

Happy Scomming

Michel Kamp

Let SCOM check for Updated Management Packs

21 Apr

The challenge

Using the native SCOM console, the import from the Microsoft Management Pack Catalog is a nice feature. I also like the feature to check for and import updated versions of MPs that you have already imported in your management group. But what I really miss and don't understand: why did the product team remove the monitor that gives us an alert when a new MP version is in the MP catalog? This monitor was built into MOM 2005 but removed at the beginning of SCOM 2007.

The solution

So since we are SCOM authoring diehards we are going to build our own MP update monitor. I am going to use VSAE to build it all. But wait: even if you aren't a SCOM authoring diehard it is still worth reading this post, because this time I will share the VSAE project and even the MP with you at the end!

Analyze

So I used my good old friend Fiddler to reverse-engineer what the SCOM console is doing when I press the 'check for updated management packs' button. It turns out it sends a SOAP request to a web service. The SOAP request contains a list of the MPs that you have already imported. The result of this request is a list with the updated MP versions, or an empty list if there aren't any updates for you.
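For the curious, the captured request looks roughly like the sketch below. The element names are inferred from the FindManagementPacks method and from the response parsing in the script further down; treat the details (and the omitted namespace) as illustrative:

```xml
<soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">
  <soap:Body>
    <!-- method namespace omitted here; take it from your own Fiddler capture -->
    <FindManagementPacks>
      <!-- one entry per MP already imported in the management group,
           with its identity, version and public key token -->
    </FindManagementPacks>
  </soap:Body>
</soap:Envelope>
```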

Building time

Below I'm going to give you an overview of what I have done. You can look in the VSAE project for the details. If you have any questions just let me know and I will help you out.

1) The datasource

So now we are going to make a datasource that runs a PowerShell script. This PowerShell script simulates the web service request.

Below is a snippet of the code (the full code is in the VSAE project). What I am doing here are three steps:

1) Build a SOAP request message that contains the version metadata of all MPs that I have already imported in my management group.

2) Call the "ManagementPackCatalogWebService.asmx" web service and execute the "FindManagementPacks" method.

3) As a last step, check if any MPs are returned, set the $Status flag according to the result, and return the SCOM property bag.

# step 1: build the SOAP request message from the imported MP list

$MPSoap = get_MP_List

# step 2: call the catalog web service

$ret = Do-SOAPRequest -SOAPRequest $MPSoap -URL $MPCatalogURL -SOAPAction $SOAPAction

## select the MPs that have an update
$MpList = $ret.Envelope.Body.FindManagementPacksResponse.FindManagementPacksResult.CatalogItem | Where-Object { $_.IsManagementPack -eq $true } | Select-Object DisplayName

# step 3: check whether any MP was returned and set the status flag

if ( $MpList.Count -eq 0 )
{
    $Status = "UPTODATE"
}
else
{
    $Status = "NOTUPTODATE"
}

# create and return the property bag
$oAPI = New-Object -ComObject "MOM.ScriptAPI"   # MOM script API object (created here for completeness)
$pb = $oAPI.CreatePropertyBag()
$pb.AddValue("Status", $Status)
$pb.AddValue("MpList", ($MpList | Out-String))
$pb

We are going to use the script above in the datasource below.

[screenshot]

2) The Monitor

Now we are going to compose a two-state UnitMonitorType that uses this datasource. The health state check is done against the "Status" value in the returned property bag.
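Schematically the UnitMonitorType looks like the fragment below (the IDs and detection details are illustrative; the real definitions are in the shared VSAE project):

```xml
<UnitMonitorType ID="MPCatalog.UpdateCheck.MonitorType" Accessibility="Internal">
  <MonitorTypeStates>
    <MonitorTypeState ID="UpToDate" NoDetection="false" />
    <MonitorTypeState ID="NotUpToDate" NoDetection="false" />
  </MonitorTypeStates>
  <!-- member modules: the PowerShell datasource plus two condition detection
       filters on Property[@Name='Status'] (UPTODATE / NOTUPTODATE) -->
  <RegularDetections>
    <RegularDetection MonitorTypeStateID="UpToDate">
      <Node ID="FilterUpToDate" />
    </RegularDetection>
    <RegularDetection MonitorTypeStateID="NotUpToDate">
      <Node ID="FilterNotUpToDate" />
    </RegularDetection>
  </RegularDetections>
</UnitMonitorType>
```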

[screenshot]

Having composed this UnitMonitorType we can now use it in the real monitor. See below; the target is the management server. I chose this target because I have only one management server in my test lab, but if you have more it's better to choose the RMS emulator target.

[screenshot]

Now when the monitor is unhealthy it will generate an alert message constructed as below:

[screenshot]

 

The result

Building and importing the MP into your SCOM management group will show you the result below:

[screenshot]

And of course a nice alert message as well:

[screenshot]

 

So now the part you are waiting for..

As promised I will share the VSAE project and the MP itself. Please note that it is a showcase, alias prototype, MP and thus far from complete. For example, not all display strings are applied and no knowledge is supplied. But that's up to you to complete… In my production version I have even built in a recovery/console task that automatically imports the updated MPs. Just an idea for you to work out…

MP download: http://sdrv.ms/XPl38e

VSAE project download: http://sdrv.ms/YDUn7T

The End

Feel free to comment or contact me if you have any questions.

Happy SCOMMING

Michel Kamp

Get a grip on the DWH aggregations

24 Mar

 

The problem

If you run availability or performance reports with an aggregation type of daily or hourly, the reports are empty. This problem is described a lot on the web, and I have also written a couple of blog posts on how to fix it. But as you know we use SCOM to monitor stuff, so why not monitor this aggregation processing and alert if a processing delay occurs? That's our mission today…

Analyze

Using SQL Server Management Studio and a SQL query on the data warehouse DB we can read out the aggregation backlog. The query looks like this:

Select AggregationTypeId, DatasetId,
       (Select SchemaName From StandardDataSet Where DatasetId = StandardDataSetAggregationHistory.DatasetId) as 'SchemaName',
       COUNT(*) as 'Count', MIN(AggregationDateTime) as 'First', MAX(AggregationDateTime) as 'Last'
From StandardDataSetAggregationHistory
Where LastAggregationDurationSeconds IS NULL
Group by AggregationTypeId, DatasetId

The output shows us how many aggregations still have to be processed per aggregation type (20 = hourly, 30 = daily).

[screenshot]

So in this case we have no problem. But I have seen SCOM environments where the state aggregations were so far behind that it was almost impossible to fix. This brings up a point: the state aggregations in particular are the tricky ones. If you have many 'flipping' monitors there will be a lot of state changes and thus a lot of aggregation data to process. This process takes a lot of SQL CPU power and also disk space; in most of these cases it was the free tempdb data space or the transaction log that was the root cause of the failure.

Solution

In SCOM we have a target for every aggregation. This target is named 'Standard Data Set'. You can find it here:

[screenshot]

If you compare the screenshot with the results in your SCOM console you will notice that you don't have the green healthy state… and that's why you are reading this post. So let's add this state.

I wanted to give every dataset a health state based on how many aggregations it still has to process. So we make a monitor that executes the query above for every dataset and changes the health state if a threshold is hit. We will also add a rule so that this backlog count is put in a trend graph.

I have used VSAE for this, and I will share only the idea, not the code. Why not? I believe you have to know what you are doing, and by copy-pasting you don't learn anything if you haven't done it once from start to end.

The real work

Open a new VSAE project and add an empty MP fragment and a PowerShell fragment.

[screenshot]

Then you make a datasource that reads the aggregation count. This is done using PowerShell and the SQL snap-in.

[screenshot]

The PowerShell script takes the GUID of the dataset as input (a property of the target) and outputs a property bag with the aggregation counts (daily and hourly). I made the script somewhat intelligent by reading from the registry where the data warehouse is located.
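The registry readout can be sketched as below. The path and value names are what I found on my SCOM 2012 management servers; verify them on yours before relying on this:

```powershell
# Locate the OpsMgr data warehouse from the local registry (sketch).
$regPath  = 'HKLM:\SOFTWARE\Microsoft\Microsoft Operations Manager\3.0\Setup'
$setup    = Get-ItemProperty -Path $regPath
$dwServer = $setup.DataWarehouseDBServerName   # e.g. 'SQL01\INST1'
$dwDb     = $setup.DataWarehouseDBName         # usually 'OperationsManagerDW'
"$dwServer -> $dwDb"
```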

Now we use this datasource in a monitor type to create a three-state monitor. And since we have created a datasource module, we can also create a rule that collects the aggregation backlog for the trend graph. Yes, I know, this is easier to type than to do…

Below a snap of the datasource module:

[screenshot]

And below a snap of the monitor module type:

[screenshot]

And the monitor. Create one for hourly (not shown) and one for daily.

[screenshot]

Finally, for trending we have to create a collection rule.

[screenshot]

Notice that the monitor and collection rule have as target "Microsoft.SystemCenter.DataWarehouse.DataSet", alias "standard dataset", and notice the Run As profile.

The result

When you have built and deployed the MP you will see two extra monitors on the standard dataset targets, as shown above. Open the Health Explorer to see if all is OK.

[screenshot]

The dataset above has had a problem. To see some details, view the performance counters and you will see the aggregation trend.

[screenshot]

In this case the hourly state aggregations were way behind, so I followed one of my own blog posts to solve it, manually executing the state aggregation process in a loop to speed up the processing.

The End.

Yes, I know this post is a bit 'cloudy' and not something you can download and import. But I hope that by sharing the idea I have triggered you to try it yourself.

Happy SCOMMING!

Michel Kamp

Authoring SCOM Reports in VS 2010

14 Jan

Hi,

A short post on how to get your dev environment ready for authoring SCOM reports.

Challenge:

You have installed SCOM 2012 on SQL 2008. You want to author a custom report using Visual Studio 2010. When you open Visual Studio you notice that NO BI project template is shown. Normally you would select this project template and create a new report project to make your custom report. How to continue?

Solved:

Grab a SQL 2012 ISO (YES, 2012) and start the setup.

1) Select installation:

[screenshot]

2) New SQL installation or add features:

[screenshot]

3) Select SQL feature installation:

[screenshot]

4) Now the important step: select the three options here. The most important is "SQL Server Data Tools"; this feature contains the VS BI project template.

[screenshot]

5) Step through the install windows.

And now open Visual Studio 2010 and create a new project. And what do we see?

Yes, the BI template! ;-)

[screenshot]

 

Now you can create the new SCOM reports. Notice also the NEW chart types!

[screenshot]

 

Remember that if you use custom report code components you must copy the correct .dll assembly to this directory:

C:\Program Files (x86)\Microsoft Visual Studio 10.0\Common7\IDE\PrivateAssemblies

The End.

Happy Scomming

Michel Kamp

VSAE Adding Binary Resource Features

6 Nov

 

Hi, as I'm getting on in age my memory sometimes lets me down. So this post is mainly for my own reference, but since I am sharing it, it's also for you. Not saying you are aged too… ;-)

So this post is short and fast..

Challenge

One of the greatest new features of the 2.0 schema in combination with VSAE is binary resource distribution. Using this feature you can include a binary file in your OM12 schema 2.0 MP and have it available on the agent where the workflow runs. So do the steps below.

1) Add your resource to the project:

Drag your binary into the MP project. For this example I used a .ZIP file.

[screenshot]

Most important: do NOT forget to set it to "Embedded Resource"!

2) Add your resource to the MP:

<DeployableResource
  ID="AgentResource.Binary"
  FileName="xxxxxxxx.zip"
  Accessibility="Public"
  HasNullStream="false"
/>

3) How to reference it in your workflows

Now you want to know where this binary resource is placed on the agent, correct? Use the $FileResource tag. The example below fills the Value element with the runtime path of the binary.

<Value>$FileResource[Name="AgentResource.Binary"]/Path$</Value>

This FileResource\Path parser is very handy, because you must know that the resource path changes every time the MP is updated or the agent is restarted.

In runtime this Value element will be filled with for example: (running on the MS server)

C:\Program Files\System Center 2012\Operations Manager\Server\Health Service State\Monitoring Host Temporary Files 4249\194\xxxxxxxx.zip
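As a sketch of how a workflow script could consume that path (the parameter name and target folder are my own; assumes .NET 4.5+ on the agent):

```powershell
param($ResourcePath)  # filled at runtime by $FileResource[Name="AgentResource.Binary"]/Path$

# Extract the shipped ZIP to a stable working folder, because the
# resource path itself changes on every MP update or agent restart.
Add-Type -AssemblyName System.IO.Compression.FileSystem
$target = Join-Path $env:ProgramData 'MyMp\AgentResource'
if (Test-Path $target) { Remove-Item $target -Recurse -Force }
[System.IO.Compression.ZipFile]::ExtractToDirectory($ResourcePath, $target)
```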

 

Happy scomming.

Michel Kamp

VSAE the download is there!!!!

30 Jun

No words can say what's on this link. Hit me!

Happy Scomming!!
Michel
