How To Export the Configuration and Create a Debug Log File

Article created 2017-11-15 by Pascal Withopf

This article describes how to export the configuration of your product and how to create a debug log file. Both are needed for troubleshooting.
The article is applicable to EventReporter, MonitorWare Agent and WinSyslog.

How to Export the Configuration

Open the MonitorWare Agent whose configuration you want to export. Click on “File” in the upper left corner and then on “Export Configuration”.

Now you can select the format in which you want to export your configuration. The preferred option is always “Adiscon Config Format”; use it whenever possible.

It is always helpful to give your config file a descriptive name. A name like just “config” will lead to confusion later.

Creating a Debug File

To create a debug file, click on “Debug” in the left-hand tree view. It can be found under “General”.

There you can check “Enable Debug output into file” and specify the path and file name. The next time you start MonitorWare Agent, it will automatically create a debug file.

Parsing log messages

Created 2006-03-24 by Michael Meckelein.

This article describes how to parse log messages via “Post-Process”. It illustrates the logic behind the Post-Process action.

Get relevant information from logs

Log files contain a lot of information. In most cases, only a small part of the log message is of actual interest. Extracting the relevant information is often difficult. Due to the variety of different log formats, a generic parser covering them all is not available.

Firewalls are a good example. Cisco PIX and Fortigate firewalls both use syslog for logging, but the content of their respective log messages is very different. Therefore, a method is needed to parse the logs in a generic way. This is where the Post-Process action of Adiscon’s MonitorWare comes into play.

Tool kit for parsing

The Post-Process action provides an editor for creating a log format template. A template consists of as many rules as necessary to parse out the relevant information.

Determine necessary information

In order to parse out information, it is vital to know the exact structure of the message. Identifying the position of each relevant item is essential. Assume that, for auditing purposes, the following items are needed:

Timestamp | Source IP-Address | SyslogTag | MessageID | Username | Status | Additional Information

A sample message looks like:

Mar 29 08:30:00 172.16.0.1 %Access-User: 12345: rule=monitor-user-login user=Bob status=denied msg=User does not exist

In order to extract the information, let us examine each item within the message. Splitting the message up makes it easier to explain. So here we go.

Pos = Position of the character.
*p  = Points to the position the parser stands at after parsing the rule.
Log = Message subdivided into its characters.
Pro = Property. In Adiscon terminology, a property is the name of the item which is parsed out.

Note that at the beginning of the parse process, the parser’s pointer points to the first character. Each parse type starts parsing at the current position of the pointer.

Parsing out a Timestamp

The first identified item is a so-called Unix timestamp. It always has a length of 15 characters. The ‘UNIX/LINUX-like Timestamp’ parse type exactly covers the requirement to parse this item. Therefore, insert a rule and select the ‘UNIX/LINUX-like Timestamp’ type. This rule parses out the timestamp and moves the pointer to the next character after the timestamp. Name the property ‘u-timestamp’ [1].


Post-Process Editor: Inserted a ‘UNIX/LINUX like timestamp’ rule

Get the IP-Address

The next item is the IP address. Note that a space follows the timestamp, and then comes the IP address. Therefore, insert a ‘Character Match’ rule with a space as its value. Select the ‘Filler’ [2] property for this rule. ‘Character Match’ requires a user-defined value. This parse type compares the given value with the character at the current position of the message. The character has to be identical to the given value, otherwise the parse process will fail. After applying this parse type, the parse pointer is moved to the position immediately after the given value. In our sample, this is the start position of the IP address (Pos 17).

After that, the address can be obtained. Insert an ‘IP V4 Address’ type. This type parses out a valid IP address regardless of its length; there is no need to care about the individual characters. Select the ‘Source’ property, or name it whatever you prefer. The parser will automatically move the pointer to the position next to the address.


Note that the value of the ‘Character Match’ rule is a space.

Obtain the syslogtag

Behind the IP address there is a blank followed by a percent sign. The percent sign indicates that the syslogtag follows. To move the pointer to the syslogtag position, once again a ‘Character Match’ rule is necessary. It has to match the space (the current position of the pointer) and the percent sign. This content is not needed, therefore assign it to the ‘Filler’ property.

A colon is immediately behind the syslogtag, so all characters between the percent sign and the colon are needed. The ‘UpTo’ type can do this job. Insert an ‘UpTo’ rule. As value, enter ‘:’ (without the quotes) and select the syslogtag property. Note that after parsing, the pointer stands on the first character of the ‘UpTo’ value.


Important: the pointer points to the colon, not to the blank.

Take the MessageID

The next interesting item is the MessageID. Move the pointer to the start position of the MessageID part. Again, do this by using a ‘Character Match’ rule. Keep in mind that the pointer points to the colon. Behind the colon is a space, and then the MessageID starts. Thus, the value of the rule has to be ‘: ’.

The MessageID consists of numbers only. For numeric parsing, the ‘Integer’ parse type exists. This type captures all characters until a non-numeric character appears. The pointer is then moved behind the number. Note that numeric values with decimal dots cannot be parsed with this type (because they are not integers). This means trying to parse 1.1 results in 1, because the dot is a non-numeric character.

Find the username and status

Looking at the remainder of the message shows that the username does not immediately follow the syslogtag. Thankfully, though, the username always starts with ‘user=’. Consequently, the ‘UpTo’ type can be used to locate the username. To get to the start position of the username, we have to use ‘UpTo’ together with ‘Character Match’: remember that ‘UpTo’ leaves the pointer on the first character of the given value, so a ‘Character Match’ rule is necessary to skip over it.

After locating the start position of the username, the ‘Word’ parse type can be used. ‘Word’ parses until a space character is found. Enter ‘u-username’ as the property.


Notice: after parsing a word, the pointer stands on the space behind the parsed word.

The steps to get the status are very similar to the previous ones.

The last rule – Additional Information

One item of interest is left. The last part of the message contains additional information that starts after ‘msg=’. So the combination of ‘UpTo’ and ‘Character Match’ is used to move to the right position. All characters after ‘msg=’ until the end of the message are of interest. For this purpose, the ‘Rest of Message’ parse type is available. It stores all characters from the current position until the end of the message. This also means that this rule can only be used once in a template and is always the last rule.


Complete parse template.
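To make the template’s logic tangible outside of the Post-Process editor, here is a minimal Python sketch that mimics the rules described above. It is an illustration of the pointer logic only, not Adiscon’s implementation; in particular, the ‘IP V4 Address’ type is approximated by capturing up to the next space.

 # Sketch of the parse template, applied to the sample message from above.
 msg = ("Mar 29 08:30:00 172.16.0.1 %Access-User: 12345: "
        "rule=monitor-user-login user=Bob status=denied msg=User does not exist")

 pos = 0        # the parser's pointer (*p)
 props = {}     # parsed-out properties

 def char_match(value):
     # 'Character Match': the literal must stand at the pointer; move behind it
     global pos
     if not msg.startswith(value, pos):
         raise ValueError("rule did not match at position %d" % pos)
     pos += len(value)

 def up_to(value):
     # 'UpTo': capture until value; the pointer stops ON the value's first char
     global pos
     end = msg.index(value, pos)
     captured = msg[pos:end]
     pos = end
     return captured

 def integer():
     # 'Integer': capture digits until a non-numeric character appears
     global pos
     start = pos
     while pos < len(msg) and msg[pos].isdigit():
         pos += 1
     return msg[start:pos]

 def word():
     # 'Word': capture until a space; the pointer stops on the space
     global pos
     end = msg.find(" ", pos)
     end = len(msg) if end == -1 else end
     captured = msg[pos:end]
     pos = end
     return captured

 props["u-timestamp"] = msg[pos:pos + 15]; pos += 15   # fixed 15 characters
 char_match(" ")                           # Filler
 props["Source"] = up_to(" ")              # simplified 'IP V4 Address'
 char_match(" %")                          # Filler
 props["syslogtag"] = up_to(":")
 char_match(": ")                          # Filler
 props["u-messageid"] = integer()
 up_to("user=");   char_match("user=")     # locate and skip 'user='
 props["u-username"] = word()
 up_to("status="); char_match("status=")
 props["u-status"] = word()
 up_to("msg=");    char_match("msg=")
 props["u-msg"] = msg[pos:]                # 'Rest of Message'

 print(props)

Running the sketch prints all seven properties parsed from the sample message, with the pointer moving exactly as described for the individual rules.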

What happens if the parser fails?

If a rule does not match, processing stops at this point. This means that only the properties of the rules which were processed successfully before the non-matching rule are available.

Let’s assume the fourth rule of the sample template does not match.

The first three rules were processed successfully; therefore, u-timestamp and Source are available. But syslogtag and u-messageid remain empty, because the parser never processes their rules.

The Post-Process template which was created in this article is available for download. If you have further questions on Post-Process, please contact our support.

[1] Using the “u-” prefix is recommended to differentiate between MonitorWare-defined properties and user-defined ones. It is not required, but often of great aid. A common trap is that future versions of MonitorWare may use property names that a user has also used. MonitorWare will never use any name starting with “u-”, so the prefix also guards against such a scenario.

[2] Filler is a predefined property which acts as a bin for unwanted characters. Essentially, the data is simply discarded.

Please note: there is also a step-by-step guide available which describes how the Post-Process action works; you can find it here.

IIS Workflow Described

By Rainer Gerhards
Article Date: 2003-03-20

Abstract

This paper describes the IIS workflow (aka “order of operations”) as far as the author understands it. I have tried hard to make the information as complete and accurate as possible, but it might obviously be wrong in places, as I haven’t coded IIS. All information in this paper is taken from official Microsoft documentation as well as from working with IIS in the lab.

This paper will become part of a larger IIS-logging focused paper. In fact, I excerpted it from a later release, because I see increasing interest in the IIS workflow caused by some new malware on the web. So this paper is not fully proof-read, and there will most probably be some typos and grammatical errors as well as inconsistencies. Especially if you spot an inconsistency, I would appreciate it if you let me know at rgerhards@adiscon.com, because I would obviously like to have it as easily understandable as possible.

The above mentioned IIS logging paper can be found at http://www.monitorware.com/common/en/SecurityReference/monitoring-iis-logs.pdf

Additional information and papers may also be available at http://www.monitorware.com/common/en/SecurityReference/.

Please note that this document is correct for IIS up to version 5.1. Microsoft has announced considerable change for IIS 6.0, and the information in this document might not be valid for that version.

IIS Order of Operations

Conceptually, IIS has three processing stages:

  1. Decoding the target (web page) of the request
  2. Serving the request
  3. Finishing the request, which includes logging

Before we dig into the details, be sure to fully understand the implications of that overall order of processing: no data is logged by IIS before the request is finished. The reason for this is that IIS log files contain data like the total number of bytes sent or the processing time required. This data is only available when the request is finished. This also means that when an attacker either crashes IIS or exploits a vulnerability in a way that does not allow the request to reach stage 3, no log data is written. As a side note, the same is true for Apache.

The problem of missing log data from web servers failing in stages one or two can only be addressed by writing two log lines – one at the beginning of the request and one at the end. This is currently not supported by IIS (nor by any other web server I know of). Adiscon is developing a tool that allows doing this. Please email info@adiscon.com if you are interested and would like to receive a copy as soon as it is available.
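As an illustration of the two-line idea, here is a minimal Python sketch; the request id and the work callback are hypothetical stand-ins, and this is of course not the announced Adiscon tool:

 # Conceptual begin/end logging: one line when the request starts and one
 # when it finishes, so a crash in between still leaves a trace.
 import logging

 logging.basicConfig(filename="requests.log", level=logging.INFO)

 def handle(request_id, work):
     logging.info("BEGIN %s", request_id)
     work()                             # stages 1 and 2 would happen here
     logging.info("END %s", request_id)

 handle("GET /index.html", lambda: None)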

Now let us dig into the details of the IIS order of operations. A word of warning first: we have compiled this information from what is published in the Microsoft IIS SDK as well as from our own testing. We made every attempt to do it correctly. However, there is a slight chance that we are failing in one regard or another, so please be careful when basing your work on the information contained here. Depending on what you try to do, it can be a good idea to re-confirm the information in question with somebody at Microsoft.

First of all, it is important to remember that HTTP requests come in via TCP. TCP is a stream-oriented protocol, not a packet-oriented one. This means that any TCP application (like IIS) needs to expect that multiple reads of the TCP stream can be necessary to read a piece of information – even if it is a small one. A practical sample: the URL of the requested page is contained in the very first bytes of the HTTP header. So typically, this information is readily available when IIS receives the first TCP packet and begins processing it. However, it is possible that someone manually connects to IIS via telnet to port 80. As he then types the HTTP header on the keyboard, each character will be delivered in its own packet. As such, the initial packet will hold just one character and not the complete URL that is typically in there. We need to keep this in mind when talking about the IIS order of operations. Some of the stages described here will typically be involved only once, but there might be situations where multiple executions of a stage’s code base are necessary. This is more an implementor’s issue, but it can be a challenge to log analysis and to securing IIS, as vulnerabilities, too, might be caused by implementors not appropriately handling such situations (this especially applies to ISAPI filters, which require a deep understanding of what IIS does).

Stage 1: Decoding the Request

This initial phase is used to read the HTTP headers and identify the page (or script) to be called by the web server. This phase also involves authentication for non-anonymous access.

IIS reads the TCP stream until sufficient information has been gathered. “Sufficient” means at least the actual request type (line one of the HTTP header) as well as the host headers and such. I did not find a clear indication of what “sufficient” is. From my point of view, I think it is the complete HTTP header, but I cannot prove this to be 100% exact.

Please note that when “NT Challenge / Response” authentication is used, IIS will even initiate the challenge / response process with the client. This involves sending the challenge and receiving the response, two network packets, as part of the initial phase.

As soon as IIS has read enough of the request, it begins to process the headers. At this point, the unprocessed headers are already in a memory buffer.

Though not documented by Microsoft, I think the first thing done is to extract the HTTP host header, which is then used to obtain configuration information from IIS’ metabase. The host header is vitally important, as all processing and analysis – even the further decoding – depends on the virtual host being requested. The information from the host header will be used to obtain configuration information during all stages of IIS processing, not just decoding. If there is no (valid) host header, the default web site is used. If there is no default web site and also no web site configured specifically for requests without a (valid) host header, the request is simply discarded at this stage. Please note that this also means no log entry will be written, as that is done in stage 3.

Then, the final URL is decoded. Depending on the ISAPI filters installed, this can be a lengthy process. In its most basic form, the URL decoder looks into the supplied HTTP host header first and obtains the path mapping from the IIS metabase. Please note that not only the configured home directory plays a role in this calculation, but also any configured virtual directories. Lastly, ISAPI filters have the ability to change the URL decoding in any way they like. As multiple ISAPI filters are loaded, multiple interim decodes may happen. In any case, at the end of this process the URL is fully decoded and points to an actual file system path of a file to execute (or deliver, in the case of a static page).

Then, authentication is performed. The result of this operation is the user context in which the request should be served. That context is later used for all permission checks. Please note that in IIS, unlike in Apache, every request is executed in the security context of a specific Windows user account. This also applies to anonymous requests. In MMC, you can configure the account used for anonymous requests. By default, this is called IUSR_<Machinename>. There may be other default users, depending on the exact technology and version used (for example, ASP.NET defines another anonymous user account).

Once the user is known, IIS checks the NTFS ACLs (if any) of the previously decoded file to be executed. If the authenticated user does not have the proper privileges, IIS denies access to it. Please note that it looks like IIS does this check by itself; it does not rely on the operating system to check if the authenticated user has proper permissions. A possible way to do so would have been to impersonate the authenticated user and try to access the file. However, IIS at this stage is not yet impersonated as the authenticated user, so this mode of checking the file permissions seems not to be doable. On the bottom line, this means that if there is a bug in IIS’ permission checking, the operating system itself cannot detect that. Former vulnerabilities (especially the Unicode Path Mapping vulnerability) prove this observation and also show the damage potential this way of processing has.

As written above, authentication can be quite complex, especially if Microsoft proprietary methods are used. If the user cannot properly be authenticated, a “Request denied” page is served back to the requestor. In this case, the request is simply not served, which means stage 2 below is not executed. However, stage 3 will still be used, and as such, logging of invalidly authenticated requests will happen.

Please note the fine difference: if authentication fails, IIS continues to work and just does not execute the request. If, however, a vulnerability is exploited during this stage, IIS will probably not continue normally and the request will most probably never be logged.

Once this is done, IIS seems to immediately begin reading further incoming data (for example, in a POST stream). It looks like this happens asynchronously to the actual request execution (but I have not verified this).

Stage 2: Serving the Request

Serving the request is easy once the incoming parameters are set.

IIS is a multithreaded application. It has a pool of so-called worker threads, and one of these threads is assigned to an incoming web request for processing. If all worker threads are currently serving incoming (unfinished) requests, IIS typically just creates a new worker thread to serve the new request. In other cases, IIS does not create a new worker thread but suspends processing of the incoming request until a worker thread becomes available. Which method IIS chooses depends on the IIS configuration, machine utilization and machine resources (and maybe some other factors I don’t know about). I am also not sure if the worker thread is assigned beginning in stage 2 or right at the start of the processing sequence, at the start of stage 1 above. In any case, the actual processing is carried out in the worker thread, and this is important for our description.

Before actually serving the request, IIS checks what kind of processing it must do. There are three potential cases:

  • Static files
  • Script files
  • Executable files

Static files are served by IIS itself. No other software is involved in serving static files. However, ISAPI filters are still involved while static files are being served, so there is a potential for failure here as well as in core IIS. I deem the potential in core IIS to be very small. Please note that no external process is started when serving static files.

Script and executable files are processed in more or less the same way, thus I describe them together. An executable file is a file that is executable by itself, like a (CGI) exe file. A script file needs the help of a script processor. Examples of the latter are ASP or PHP scripts – and also perl.

A new process needs to be created for all files where the script processor is not available as an in-memory ISAPI DLL (ASP and ASP.NET are such DLLs). Typical examples of non-resident script processors are PHP and perl. In their case, an external process needs to be created. From IIS’ point of view, it does not really make a difference whether the new process is created for a script processor or for a “real” executable. A script processor is just an executable that needs to be passed some parameters in a specific way.

To create a new process, IIS first creates a new process space. This is still done in the security context of IIS itself, most often the SYSTEM account. Then, IIS loads the image file to be executed into that still-empty process space. Again, this is done by IIS itself. Consequently, file system audit events 560 of source “Security” in the Windows Security Event Log show the account IIS is running under as the process initiator. This can be misleading when doing analysis of the event logs. When the image is loaded, IIS impersonates the authenticated user and changes the new process’s access token to be that of the authenticated user. If you are analysing this in the Windows Security Event Log, there should be an event ID 600, source “Security”, that holds this information. IIS then runs the newly created process.

The new process communicates via defined interfaces (e.g. CGI) with IIS and the web client. Depending on the actual interface used, IIS asynchronously reads and writes web client data.

There is at least one difference between scripts and “plain” executables: for scripts, there is a timeout in the IIS configuration. If scripts don’t finish within a configured amount of time, they are aborted (at least ASP scripts; I am not sure this holds for all potential scripting processors). “Real” executables do not have any runtime restriction. In the lab, we have run ISAPI DLLs (a form of executable) for more than one day in a single IIS thread. IIS neither aborted that executable nor logged any error or warning message.

This behaviour can be used in an attack: an attacker exploits a vulnerability that allows him to inject code into one of the worker threads. This code will do malicious things and run in a tight loop (much as SQL Slammer [5] did). As such, it will never finish, and thus the worker thread will also never finish. What does this mean? Well, we are stuck at stage 2 and will never reach stage 3, so no logging will happen.

When the executable (or script processor) is done with the request, it terminates and control returns to the calling worker thread. Please note that the process termination will be logged in the correct user context in the Windows Security Event Log. The event ID logged is 593 from source “Security”.

Stage 3: Finishing the Request

Once the request has been served, IIS does some cleanup work. Part of that cleanup work is logging. Now IIS has available all the information that it potentially needs to write to the log file – things like the return status of the process executed, the number of bytes sent or the total execution time. It is for this reason that IIS does not log data at any earlier stage.

Please note that ISAPI filters can modify IIS’ logging behaviour. If requested by a filter, IIS will call the filter right before the log entry is written. The filter can then decide to modify or even drop the log data.

When the filter returns, IIS finally writes the line to the log file. This also implies that the sequence of events in the log file is based on the time requests are finished, not initiated. An example: if request A starts at 9:15:01 and ends at 9:15:05, and request B starts at 9:15:02 and ends at 9:15:03, the sequence in the log file will be B first, then A. The reason is that A finished after B, and the log file sequence is based on the time the request finished. This fact is worth noting, as the actual processing sequence cannot accurately be guessed from web server logs when doing forensic analysis. This most probably applies to all web server logs, not just IIS logs.
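A tiny illustration of this ordering effect, using the names and times from the example above:

 # Log lines are written at completion time, so the file order follows end times.
 requests = [("A", "9:15:01", "9:15:05"),
             ("B", "9:15:02", "9:15:03")]
 log_order = sorted(requests, key=lambda r: r[2])   # sort by end time
 print([r[0] for r in log_order])                   # prints ['B', 'A']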

Copyright

This document is copyrighted © 2003 by Adiscon GmbH and Rainer Gerhards. Anybody is free to distribute it without paying a fee as long as it is distributed unaltered and there is only a reasonable fee charged for it (e.g. a copying fee for a printout handed out). Please note that “unaltered” means either the PDF file you (hopefully) acquired or a printout of the same on paper. Any other use requires previous written authorization by Adiscon GmbH and Rainer Gerhards.

If you place the document on a web site or otherwise distribute it to a broader audience, I would appreciate it if you let me know. This serves two needs: number one, I am able to notify you when there is an update available (that is no promise!), and number two, I am a creature of curiosity and simply interested in where the paper pops up.

Credits

Many thanks to Tina Bird of www.loganalysis.org, who had the idea for this paper and provided numerous pieces of feedback and tough questions. Without that, it wouldn’t exist at all.

Author’s Address

Rainer Gerhards
rgerhards@adiscon.com

Adiscon GmbH
Mozartstrasse 21
97950 Grossrinderfeld
Germany

Disclaimer

The information within this paper may change without notice. Use of this information constitutes acceptance for use in an AS IS condition. There are NO warranties with regard to this information. In no event shall the authors be liable for any damages whatsoever arising out of or in connection with the use or spread of this information. Any use of this information is at the user’s own risk.

Other Formats

This paper is also available in PDF format for easy printing and offline reading.

Building a redundant Syslog Server

Article created 2006-02-01 by Rainer Gerhards.

For many organizations, syslog data is of vital importance. Some may even be required by law or other regulations to make sure that no log data is lost. The question is now, how can this be accomplished?

Let’s first look at the easy part: once the data has been received by the syslog server and stored to files (or a database), you can use “just the normal tools” to ensure that you have a good backup of it. This is pretty straightforward technology. If you need to archive the data unaltered, you will probably write it to a write-once medium like CD or DVD and archive that. If you need to keep your archive for a long time, you will probably make sure that the media persists long enough. If in doubt, I recommend copying the data to new media every now and then (e.g. every five years). Of course, it is always a good idea to keep vital data in at least two different locations and on two different media sets. But again, all of this is pretty easily manageable.

We get onto somewhat slippery ground when it comes to short-term failure. Your backup will not protect you from a hard disk failure. OK, we can use RAID 5 (or any other fault-tolerance level) to guard against that. You can eventually even write an audit trail (this comes for free with data written to a database, but needs to be configured and needs resources).

But what about a server failure? By the nature of syslog, any data that is not received is probably lost. Especially with UDP-based (standard) syslog, the sender does not even know that the receiver has died. Even with TCP-based syslog, many senders prefer to drop messages rather than stall processing (the only other option left – think about it).

There are several ways to guard against such failures. The common denominator is that they all have some pros and cons and none is absolutely trouble-free. So plan ahead.

A very straightforward option is to have two separate syslog servers and make every device send messages to both of them. It is quite unlikely that both servers will go down at the same instant. Of course, you should make sure they are connected via separate network links, different switches, and use differently fused power connections. If your organization is large, placing them at physically different places (different buildings) can also be beneficial, if the network bandwidth allows. This configuration is probably the safest to use. It can even guard you against the notorious UDP packet losses that standard syslog is prone to (and which happen unnoticed most of the time). The drawback of this configuration is that you have almost all messages at both locations. Only if a server fails (or a message is discarded by the network) do you have just a single copy. So if you combine both event repositories into a central one, you will have lots of duplicates. The art of handling this is to find a good merge process, which correctly (correctly is a key word!) identifies duplicate lines and drops them. Identifying duplicates can be much harder than it initially sounds, but in most cases there is a good solution. Sometimes a bit of sender tweaking is required, but after all, that’s what makes an admin happy…
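As an illustration of such a merge, here is a minimal Python sketch. It assumes both servers write one message per line and that byte-identical lines indicate duplicates – real-world merging usually needs smarter matching (e.g. tolerating differing receive timestamps). The file names are made up.

 # Merge two syslog archives, dropping exact duplicate lines.
 seen = set()
 with open("serverA.log") as a, open("serverB.log") as b, \
      open("merged.log", "w") as out:
     for line in list(a) + list(b):
         if line not in seen:
             seen.add(line)
             out.write(line)

Note that this keeps the first copy seen; for a chronologically ordered repository, you would additionally sort the merged lines by timestamp.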

The next solution is to use some clustering technology. For example, you can use the Windows cluster service to define two machines which act as a single virtual syslog server. The OS (Windows in this sample case) itself keeps track of which machine is up and which one is not. For syslog, an active-passive clustering scheme is probably best, that is, one where one machine is always in hot standby (aka: not used ;)). This machine only takes over processing when the primary one (the one usually active) fails. The OS handles the task of virtualizing the IP address and the storage system. It also controls the handover from one syslog server software to the next. So this is very little hassle from the application point of view. Senders also send messages only once, resulting in half the network traffic. You also do not have to think about how to consolidate messages into a single repository. Of course, this luxury comes at a price: most importantly, you will not be guarded against dropped UDP packets (because there is only one receiver at any time). Probably more importantly, every “failover” logic has a little delay. So there will be a few seconds (up to maybe a minute or two) until the syslog server functionality has been carried over to the hot standby machine. During this period, messages will be lost. Finally, clustering is typically relatively expensive and hard to set up.

The third possible solution is to look at the syslog server application itself. My company offers WinSyslog and MonitorWare Agent, which can be configured to work in a failover-like configuration. There, the syslog server detects failures and transfers control, just like in the clustering scenario. However, the operating system does not handle the failover itself, and so the OS does not need to be anything special. This approach offers basically the same pros and cons as the OS clustering approach described above. However, it is somewhat less expensive and probably easier to configure. If the two syslog server machines need not be dedicated, it can be greatly less expensive than clustering – because no additional hardware for the backup machine would be required. One drawback, however, is that the senders again need to be configured to send messages to both machines, thus doubling the network traffic compared to “real” clustering. However, syslog traffic bandwidth usage is typically no problem, so that should not be too much of a disadvantage.

The question now is: how does it work? It’s quite easy! First of all, all senders are configured to send to both receivers simultaneously. The solution then depends on the receiver’s ability to see if its peer is still operational. You define one active and one passive peer. The passive peer checks at short intervals whether the other one is alive. If the passive peer detects that the primary one has failed, it enables message recording. Once it detects that the primary is up again, it disables message recording. With this approach, both syslog servers receive the messages, but only one actually records them. The message files can then be merged for a nearly complete picture. Why nearly? Well, as with OS clustering, there is a time frame in which the backup has not yet taken over control, so some messages may be lost. Furthermore, when the primary node comes up again, there is another small window where both machines record messages, resulting in duplicates (this latter problem is not seen with typical OS clustering). So this is not a perfect world, but pretty close to it – depending on your needs. If you are interested in how this is actually done, you can follow our step-by-step instructions for our product line. Similar methodologies might apply to other products, but for obvious reasons I have not researched that. Have a look yourself after you are inspired by the Adiscon sample.
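Conceptually, the passive peer’s loop looks like the following sketch. The TCP probe, host name and port are stand-ins for the real liveness check (the MonitorWare products use their Echo Request / Echo Reply services for this):

 import socket
 import time

 def primary_alive(host="primary.example.net", port=514):
     # hypothetical liveness probe of the primary receiver
     try:
         with socket.create_connection((host, port), timeout=2):
             return True
     except OSError:
         return False

 recording = False
 while True:
     alive = primary_alive()
     if not alive and not recording:
         recording = True    # primary failed: enable message recording
     elif alive and recording:
         recording = False   # primary is back: disable message recording
     time.sleep(5)           # check in short periods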

What is the conclusion? There is no perfect way to handle syslog server failure. Probably the best solution is to run two syslog servers in parallel, the first solution I described. But depending on your needs, one of the others might be a better choice for what you try to accomplish. I have given pros and cons for each of them; hopefully this will help you judge what works best for you.

A complete step-by-step guide on setting up an SETP action

How To setup an SETP Action

Article created 2005-05-05 by Hamid Ali Raja.

  1. Start the application.

  2. Select your language – in this example, I use English, so it might be a good idea to choose English even if that is not your preference. You can change it at any time later, but using English makes it much easier to follow this guide.

  3. Then define a new rule set: right-click "Rules". A pop-up menu will appear. Select "Add Rule Set" from this menu.

  4. A wizard starts. Change the name of the rule to whatever name you like. We will use "Forward SETP" in this example. Click "Next". A new wizard page appears.

  5. Select only "Forward by SETP". Do not select any other options for this sample. Also, leave the "Create a Rule for each of the following actions" setting selected. Click "Next". You will see a confirmation page. Click "Finish" to create the rule set.

  6. As you can see, the new rule set "Forward SETP" is present. Please expand it in the tree view until you reach the action level of the "Forward SETP" rule, and select the "Forward by SETP" action to configure it.

  7. Now, type the IP address or host name of our central hub server in the "Servername" field.

  8. Make sure you press the "Save" button – otherwise your changes will not be applied.

Support for Mass Rollouts

A major update to this article was done on 2005-05-04 by Rainer Gerhards.

A mass rollout in the scope of this topic is any case where the product is rolled out to more than 5 to 10 machines and this rollout is to be automated. This is described first in this article. A special case may also be where remote offices shall receive exactly the same copies of the product (and configuration settings) but where some minimal operator intervention is acceptable. This is described in the second half of this article.

The common thing among mass rollouts is that the effort required to set up the files for unattended distribution of the configuration file and product executable is less than doing the tasks manually. For fewer than 5 systems, it is often more economical to repeat the configuration on each machine – but this depends on the number of rules and their complexity. Please note that you can also export and re-import configuration settings, so a hybrid solution may be best when a lower number of machines is to be installed (normal interactive setup plus import of pre-created configuration settings).

Before considering a mass rollout, be sure to read “The MonitorWare Agent Service”. It covers necessary background information and, most importantly, the command line switches.

Automated Rollout

The basic idea behind a mass rollout is to create the intended configuration on a master (or baseline) system. This system holds the complete configuration that is later to be applied to all other systems. Once that system is fully configured, the configuration will be transferred to all others.

The actual transfer is done with simple operating system tools. The complete configuration is stored in the registry. Thus, it can be exported to a file. This can be done with the configuration client. In the menu, select “Computer”, then select “Export Settings to Registry File”. A new dialog comes up where the file name can be specified. Once this is done, the specified file contains an exact snapshot of that machine’s configuration.

This snapshot can then be copied to all other machines and put into their registries with the help of regedit.exe.

An example batch file to install the product and configuration on the “other” servers might be:

 copy \\server\share\mwagent.exe c:\some-local-dir
 copy \\server\share\libeay32.dll c:\some-local-dir
 copy \\server\share\ssleay32.dll c:\some-local-dir
 copy \\server\share\mwagent.pem c:\some-local-dir
 cd \some-local-dir
 mwagent -i
 regedit \\server\share\configParams.reg

The file “configParams.reg” is the registry file that was exported with the configuration client.

Of course, the batch file could also operate off a CD – a good example for DMZ systems which might not have Windows networking connectivity to a home server.

Please note that the above batch file fully installs the product – there is no need to run the setup program at all. All that is needed to distribute the service is mwagent.exe and its two helper DLLs, which form the core service. For a locked-down environment, this also means there is no need to allow incoming connections over Windows RPC or NETBIOS for an engine-only install.

Please also note that, in the example above, “c:\some-local-dir” actually is the directory where the product is being installed. “mwagent -i” does not copy any files – it assumes they are already at their final location. All “mwagent -i” does is create the necessary entries in the system registry so that the MonitorWare Agent is a registered system service.

Subsidiary Rollout with consistent Configuration

You can use an engine-only install also if you would like to distribute a standardized installation to subsidiary administrators. Here, the goal is not to have everything done fully automatically, but to ensure that each local administrator can set up a consistent environment with minimal effort.

You can use the following procedure to do this:

  1. Do a complete install on one machine.
  2. Configure that installation the way you want it.
  3. Create a .reg file of this configuration (via the client program)
  4. Copy mwagent.exe, mwagent.pem, libeay32.dll, ssleay32.dll and the .reg file that you created to a CD (for example). Take the three executable files from the install directory of the complete install done in step 1 (there is no specific engine-only download available).
  5. Distribute the CD.
  6. Have the users create a directory where they copy all the files from the CD. This directory is where the product is installed in – it may be advisable to require a consistent name (from an admin point of view – the product does not require this).
  7. Have the users run “mwagent -i” from that directory. It will create the necessary registry entries so that the product becomes a registered service.
  8. Have the users double-click on the .reg file to install the pre-configured parameters (step 3).
  9. Either reboot the machine (neither required nor recommended) or start the service (via the Windows “Services” manager or the “net start” command).

Important: The directory created in step 6 actually is the program directory. Do not delete this directory or the files contained in it once you are finished. If you do, the product will be disabled (no program files would be left on the system).

If you need to update an engine-only installation, you will probably just upgrade the master installation and then distribute the new exe files and configuration in the same way you distributed the original version. Please note that it is not necessary to uninstall the application first for an upgrade – at least not as long as the local install directory remains the same. It is, however, vital to stop the service, as otherwise the files cannot be overwritten.

Discussion on MonitorWare SystemID and CustomerID

Created 2004-12-06 by Hamid Ali Raja

SystemID

SystemID is a user-configurable numerical value that has been added for grouping systems and improving filtering. It is simply a numerical code to which you can assign a value and which you can query afterwards.

CustomerID

CustomerID is similar to SystemID. It is up to the user how to use these values.

Let us consider the following scenarios to better understand the functionality of these two:

Scenario 1

A service provider has 2 customers, customer A with 2 subsidiaries and customer B with 3 subsidiaries. How can he use SystemID and CustomerID to configure all systems in different subsidiaries to monitor his customers’ networks?

Solution

His configurations for this scenario will be:

  • For all systems in subsidiary 1 for customer A, CustomerID = 1 and SystemID = 1
  • For all systems in subsidiary 2 for customer A, CustomerID = 1 and SystemID = 2
  • For all systems in subsidiary 1 for customer B, CustomerID = 2 and SystemID = 1
  • For all systems in subsidiary 2 for customer B, CustomerID = 2 and SystemID = 2
  • For all systems in subsidiary 3 for customer B, CustomerID = 2 and SystemID = 3
Scenario 2

A service provider has 2 customers. Customer A has 5 servers and customer B has 2 servers. Both A and B happen to have a server named “SERVER”. How can the service provider use the CustomerID to monitor his customers’ servers and differentiate between them?

Solution

To monitor the customers’ servers, you can put a different CustomerID into each of the agents:

  • For all systems of customer A, CustomerID = 1
  • For all systems of customer B, CustomerID = 2

Now, with the help of the CustomerID, these machines are uniquely identifiable.

You can also use the “Set Property” feature to rename the server.

Scenario 3

A single user has two subsidiaries (A & B) and also wants to group machines by department (marketing, engineering and production). How can he do this using both CustomerID and SystemID?

Solution

He can address his problem by assigning a unique CustomerID to each subsidiary and a unique SystemID to each department.

Subsidiaries:

  • A will be assigned CustomerID = 1
  • B will be assigned CustomerID = 2

Departments:

  • The marketing department will be assigned SystemID = 1
  • The engineering department will be assigned SystemID = 2
  • The production department will be assigned SystemID = 3

If he wants to view all marketing department machines, he queries for SystemID = 1; to view all machines in subsidiary A, he queries for CustomerID = 1. He can also get the machines which belong to both the production department and subsidiary A by querying for CustomerID = 1 and SystemID = 3.

Scenario 4

I have three subsidiaries A, B and C with 200, 2000 and 5000 machines respectively. If I can use “FromHost” to get the system information, then why do I need “SystemID”?

Solution

Querying all subsidiary C machines using “FromHost” is a lengthy task, as the query has 5000 elements, and you also need to update the queries each time a new machine is installed in a subsidiary.

If you instead query the SystemID, you have a single query element, PLUS you do not need to modify the queries when you install a new machine and configure it correctly for the subsidiary.
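To see the difference, consider this small sketch. The event dictionaries and the host list are made up for illustration; in practice, the query would run against the log database.

 # Querying by host name needs a list of all machine names...
 subsidiary_c_hosts = {"srv0001", "srv0002"}   # ...imagine 5000 entries here
 events = [{"FromHost": "srv0001", "SystemID": 3, "CustomerID": 1}]

 by_host = [e for e in events if e["FromHost"] in subsidiary_c_hosts]

 # ...while querying by SystemID is a single, stable query element:
 by_id = [e for e in events if e["SystemID"] == 3]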

Which Product Should I Purchase?

Created 2003-02-16 by Wajih-ur-Rehman.
Updated 2004-09-09 by Tamsila-Q-Siddique.

1. Overview

This article gives an overview of the MonitorWare line of products and provides a guideline for selecting the right product. This article discusses EventReporter, MonitorWare Agent, WinSyslog, MonitorWare Console, Monilog and AliveMon.

MonitorWare Agent, WinSyslog and EventReporter work on common concepts but target different needs. They also come in different editions and versions. Click on MonitorWare Agent, EventReporter and WinSyslog respectively to see the available editions of each product set.

If you want a product matched to your needs, our product positioning chart helps you make the decision.

2. MonitorWare Line of Products

2.1) MonitorWare Agent

MonitorWare Agent is a superset of EventReporter and WinSyslog. Since it can perform all tasks of EventReporter and WinSyslog, it can be used on the sending as well as on the receiving side. It also incorporates some special services of its own. The MonitorWare services are listed below:

  No.     Name of the Service        Purpose of the Service
  2.1.1   Syslog Server              Receives Syslog messages
  2.1.2   SETP Server                Receives SETP messages
  2.1.3   Event Log Monitor          Monitors the Windows Event Log
  2.1.4   File Monitor               Monitors text/log files
  2.1.5   Heart Beat                 Sends periodic messages
  2.1.6   Ping Probe                 Pings remote servers
  2.1.7   Port Probe                 Checks the specified TCP port on the specified machine
  2.1.8   NT Service Monitor         Monitors NT services
  2.1.9   Disk Space Monitor         Monitors disk space
  2.1.10  SNMP Trap Receiver         Receives SNMP messages
  2.1.11  Database Monitor           Monitors database tables
  2.1.12  Serial Port Monitor        Monitors devices attached to the local communication ports
  2.1.13  CPU / Memory Monitor *     Monitors CPU and memory
  2.1.14  MonitorWare Echo Reply *   Responds to MonitorWare Echo Requests, indicating whether the MonitorWare Agent is working
  2.1.15  MonitorWare Echo Request * Checks the availability of / detects the failure of a MonitorWare Agent; works together with MonitorWare Echo Reply

You can click here to view more information about MonitorWare Agent.

2.2) EventReporter

EventReporter is meant for monitoring Windows Event Logs. If you are looking for a product that should only pick up the Windows event logs and forward them to a Syslog server, then EventReporter is the right choice. EventReporter provides the following services:

  No.     Name of the Service   Purpose of the Service
  2.2.1   Event Log Monitor     Monitors the Windows Event Log
  2.2.2   Heart Beat            Sends periodic messages

You can click here to view more information about EventReporter.

2.3) WinSyslog

WinSyslog is a typical Syslog server. It is basically used for receiving Syslog or SETP messages. WinSyslog provides the following services:

  No.     Name of the Service   Purpose of the Service
  2.3.1   Syslog Server         Receives Syslog messages
  2.3.2   Heart Beat            Sends periodic messages
  2.3.3   SNMP Trap Receiver    Receives SNMP messages
  2.3.4   SETP Server           Receives SETP messages

You can click here to view more information about WinSyslog.

2.4) MonitorWare Console

MonitorWare Console is an analytical tool that is used to analyze the data that has been gathered by other Adiscon products. It is a modular application that offers the modules listed below:

  • Base Product (this has to be purchased in order to use the other modules)
  • Network Scanning Tools
  • Windows Reporting Module
  • PIX Reporting Module
  • Knowledge Base Module
  • Devices Module
  • Views Module

You can click here to view more information about MonitorWare Console.

2.5) Monilog

Monilog is also an analytical tool, but it only generates one report.

You can click here to view more information about Monilog.

2.6) AliveMon

AliveMon is a network monitor that lets you know when servers or routers fail. Configurable alarms enable you to quickly solve problems before they turn into a real headache. You can even take corrective actions automatically by auto-starting programs.

You can click here to view more information about AliveMon.

3. Comparison

MonitorWare Agent can act as both WinSyslog and EventReporter, whereas MonitorWare Console and Monilog both act as analytical tools. In this section, we give the following comparisons to best guide you in your product selection decision:

  3.1) MonitorWare Agent (Sender) with EventReporter
  3.2) MonitorWare Agent (Receiver) with WinSyslog
  3.3) MonitorWare Console with Monilog

3.1) Comparison of MonitorWare Agent (Sender) with EventReporter

For monitoring any system, you have 2 options: you can either go for EventReporter or for MonitorWare Agent. The choice really depends on your requirements. If you are only interested in monitoring the Windows Event Log, then EventReporter is the right choice for you. If, on the other hand, you want to perform any of the functions 2.1.4, 2.1.6, 2.1.7, 2.1.8 or 2.1.9 on the client to be monitored, then you have to go for MonitorWare Agent, since these features are not present in EventReporter.

3.2) Comparison of MonitorWare Agent (Receiver) with WinSyslog

If you only want to receive data sent from various clients, you again have 2 options: you can either go for WinSyslog or for MonitorWare Agent. The choice again depends on your requirements. If you are only interested in receiving Syslog messages, SNMP traps or SETP messages, then WinSyslog is the right choice as a Syslog server. If, on the other hand, you also want to monitor the system on which the Syslog server is running, then you either have to use EventReporter together with WinSyslog on that machine, or you can use MonitorWare Agent alone, since it can act both as a Syslog server and as the monitoring system.

3.3) Comparison of MonitorWare Console and Monilog

There is actually a lot of difference between these two products, and again, the selection really depends on the requirements at hand. If you just want to see one report on the logs, then you can go for Monilog. Additionally, Monilog is easy and quick to set up. If you are interested in an in-depth analysis which includes not only the Windows event logs but also PIX records, then you can opt for MonitorWare Console, which offers about 15 reports in its current version. Hopefully these reports will keep on growing with client feedback. MonitorWare Console does not only offer reports. There are a lot of other interesting and valuable modules in it which give you great power in analyzing your data. These modules include Views, which can be auto-refreshed at a specified interval and hence display the current state of the data as it enters your system; network tools like Port Scan, Trace Route and Ping; the Devices module, in which you can keep track of your devices; the Knowledge Base module, in which you can keep track of information; and the Job Manager, in which you can schedule the automatic generation of reports.

4. Price

All the above mentioned products come in different flavors and editions. For your convenience, we have listed all prices in one place.

5. Conclusion

MonitorWare Agent is a high-end solution and fulfills all of your requirements, but its somewhat higher price is the drawback. Adiscon does not want to make you spend on something you do not even need. You can opt for a combination of different products to come up with a cost-effective solution for your enterprise. This is a primary driver behind the decision of which product to use. If you are in doubt, please contact us and let us know your requirements. We will gladly help you find not only the best technical solution but also the most cost-effective one. If you have any queries, please feel free to contact support@adiscon.com.

Actively Monitoring Disk Free Space

By Rainer Gerhards
Article Date: 2004-07-22

Why care about disk free space?

The obvious answer is that low free space means upcoming problems, like the inability to receive mail (for mail servers) or the inability to store new files (for file servers). There are numerous obvious reasons why free space is an operations management priority. But there are also less obvious reasons: a disk space shortage may be caused by a process running wild. Sometimes space consumption is the only warning indicator in such a case. Also, intruders may be the cause of low disk space conditions. For example, movie pirates often break into public servers and misuse them as FTP servers for pirated videos. As videos are large, this can cause a sharp decrease in free disk space. In this article, I primarily address the operations management needs. Obviously, the security benefits come as a side effect. But don’t rely purely on what I am presenting here if you would like to tackle the security side of disk free space. In the article, I will first convey the idea of what can be done, and then I will provide a potential solution using Adiscon’s MonitorWare Agent software.

The Idea

A shortage of disk space does not (necessarily) come in an instant. Typically, free space decreases by a little every day. If left undetected, some day no space may be left at all. This is where we start. In my point of view, a good disk space monitoring solution must work with at least two thresholds:

  • disk space is low, but still acceptable
  • disk space is too low, problems will occur very soon (or already exist)

The first level is a warning level, the second level is a real error level. In a typical setup, the warning level may not cause any big action. Typically, a notification email is sent to the administrator, and that’s it. The error level, in a typical sample, eventually causes more serious action. Now the warning message may be sent to a pager email address. But a good disk space monitoring solution might also initiate some corrective action. For example, on a file server, many temporary files may fill up the disk. It may be agreed policy that such files (and eventually .bak backup files) can be automatically deleted – without asking each user. If so, a script can be started that tries to delete as many temporary files as possible, thereby freeing up disk space. In an optimal case, such a script may even free enough space to recover from the very-low disk space condition. Ideally, it would even recover from the warning level, too.

Now let’s consider that the very-low space condition triggered a pager alarm to the administrator. Poor John Admin is at the beach when his pager beeps. Too bad… Now consider he jumps off the beach and drives into his data center… just to see that the configured auto-action has already solved the issue. How would you feel in John’s place? I bet you’d be really happy and go back to the beach, wouldn’t you? I also guess you would have been even happier if the system had notified you that the low space condition was solved. So this is one more thing that we need to do within our free space monitoring: not only send an alert when things go worse, but also send an alert when the system has recovered from such a condition. Please note that the recovery case may even happen if no corrective action has been configured – just imagine a file server: a user may copy a huge file set just to try something out. Later, he himself deletes it. Again, the low space condition is solved. Finally, a monitoring solution should only notify you once when the problem occurs and not continuously (yes, I have seen solutions which do it over and over again…). The same goes for the “recovered” message, which obviously should only be sent once and only after a problem message has been sent first.

So to sum up, a good disk free space monitoring solution must provide:

  • at least two thresholds for disk space shortages
  • notifications that occur only once these thresholds are crossed
  • optionally automatically-triggered corrective actions
  • notifications when the shortage conditions have cleared

Of course, the system should be able to send different types of notifications. For example, you may want to send some of these via email while others are forwarded to a pager or sent as simple “net send” type notifications.
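To make the requirements concrete, here is a minimal Python sketch of the two-threshold, notify-once logic. The thresholds, the drive to check and the notify() helper are assumptions for illustration; in the solution below, MonitorWare Agent’s rule set and status variables play this role.

 import shutil
 import time

 WARN, ERROR = 20, 10            # free-space thresholds in percent (assumed)

 def notify(text):
     print(text)                 # stand-in for email/pager/"net send" actions

 state = "ok"                    # plays the role of the status variables
 while True:
     usage = shutil.disk_usage("C:\\")
     free_pct = usage.free * 100 / usage.total
     if free_pct < ERROR and state != "error":
         state = "error"
         notify("ERROR: disk space too low")   # a cleanup script could start here
     elif ERROR <= free_pct < WARN and state != "warn":
         state = "warn"
         notify("WARNING: disk space low")
     elif free_pct >= WARN and state != "ok":
         state = "ok"
         notify("OK: disk space recovered")
     time.sleep(3600)            # check once per hour

Each threshold crossing triggers exactly one notification, and a recovery message is sent only after a problem message has been sent first – matching the requirements above.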

A potential Solution

As always in life, there are many ways to implement the disk space monitor. I am using a solution based on Adiscon’s MonitorWare Agent here, because it is a good fit for the requested functionality and it is also easy to set up and run. MonitorWare Agent is a multi-monitoring solution. It can monitor Windows event logs, syslog devices, databases, files… and disk space. With MonitorWare, we create a so-called disk space monitor, which is then bound to a “rule set”. The disk space monitor is the part actually checking free disk space. It does this in intervals. Each time, it creates an event which includes the free space information. That event is then passed to the rule set, where the actual processing takes place. This is where we implement our requirements.

Inside the rule set, we just need a few rules to create our scenario. Basically, we utilize MonitorWare Agent’s status variables to keep track of whether we have a low or a very-low space condition. With this knowledge, we check the disk space report. If it is below the thresholds and the status variable is not yet set, we create an alert (and potentially an action) and set the status variable. Similarly, when free space goes up again, we check if we had one of the low conditions and, if so, create another alert. We utilize MonitorWare Agent’s other action types to start the low space recovery script.

Of course, I could provide you with detailed setup instructions here and also include numerous screen shots. But this article should not become a product manual… For your convenience, though, I have created the configuration with MonitorWare Agent. You can simply download it and try it yourself. I’ve placed plenty of comments inside the rule set in that configuration. If you review the comments, you will know pretty well what I have been doing.

    Related Software

    The MonitorWare Agent web site and free eval download


    Revision History

    2004-07-22 Initial version created.
    2004-10-19 Updated sample and added hyperlink to it.

    Author’s Address

    Rainer Gerhards
    Adiscon GmbH
    rgerhards @ adiscon.com
    www.adiscon.com

    Disclaimer

    The information within this paper may change without notice. Use of this information constitutes acceptance for use in an AS IS condition. There are NO warranties with regard to this information. In no event shall the author be liable for any damages whatsoever arising out of or in connection with the use or spread of this information. Any use of this information is at the user’s own risk.

    How To Set Up MonitorWare Console 2.0

    How To Set Up MonitorWare Console 2.0

    Article created 2004-04-22 by Tamsila-Q-Siddique.

    After installation, once MonitorWare Console 2.0 is started, a dialog box similar to the one shown below is displayed.



    Figure 1: MonitorWare Console: Startup Dialog Box

    The default user name is “admin” and the password is nothing (as shown
    above). Please note that the password is not the word “nothing”; it is
    simply left empty. Once a user has logged into the application, this
    password can be changed.

    At the bottom left corner of this dialog box there are two links, “Edit
    Settings” and “License Options”. The latter is self-explanatory: if you
    click on it, a license dialog appears where you can view or change your
    license key and license name. There is also a link to order the product
    directly via our online ordering system. Please note that MonitorWare
    Console now uses Modular Licensing. For more details on licensing, please
    see License Options.



    Figure 2: License Options Dialog Box

    The other link in the login dialog, “Edit Settings”, is used if the user
    wants to change the database connection or other settings. MonitorWare
    Console currently supports Microsoft Access, SQL Server and MySQL. Once
    this link is clicked, a dialog box, as shown in Figure 3, pops up. Using
    this dialog box, the user can change the underlying database or other
    settings.



    Figure 3: Dialog Box to change the underlying database or log file

    Display Login Dialog at Startup

    If checked, the login dialog box shown in Figure 1 appears every time
    MonitorWare Console starts. If unchecked, the application starts directly
    into the MonitorWare Console main window without displaying the login
    dialog.

    DSN

    This field is mandatory. It points to the DSN of the database that stores
    all settings related to MonitorWare Console; this database later serves as
    the underlying database to which MonitorWare Console connects.

    Edit

    This option opens a dialog box for creating the DSN. A dialog similar to
    the one shown in Figure 4 opens, where you can configure the settings
    according to your environment.



    Figure 4: Dialog Box to create a DSN

    Once the provider and the connection have been selected, the Test
    Connection button can be used to test whether a connection to the
    specified database can be established.

    If the dialog box shown in Figure 5 is displayed, the connection to the
    specified database has been set up properly and the user can proceed by
    pressing the OK button.



    Figure 5: Success dialog

    On the other hand, if a dialog box as shown in Figure 6 is displayed,
    something is wrong and the connection to the specified database could not
    be established.



    Figure 6: Connection Failure Dialog Box

    User Name

    This option allows you to configure the User Name for connecting to the
    database.

    Password

    This option allows you to configure the Password for connecting to the
    database.

    Note: If you created the DSN with Windows Integrated Security, you do not
    need to provide a user name or password.
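
    The same connection test can also be scripted outside the Console. Below is a minimal sketch using Python’s third-party pyodbc module; the DSN name and credentials are placeholders, not values shipped with MonitorWare Console.

    import pyodbc  # third-party module: pip install pyodbc

    def test_connection(dsn, user="", password=""):
        conn_str = "DSN=%s;UID=%s;PWD=%s" % (dsn, user, password)
        # for a DSN created with Windows Integrated Security, the
        # credentials can be omitted: "DSN=%s;Trusted_Connection=yes"
        try:
            pyodbc.connect(conn_str, timeout=5).close()
            return True   # corresponds to the success dialog in Figure 5
        except pyodbc.Error as err:
            print(err)    # corresponds to the failure dialog in Figure 6
            return False

    print(test_connection("MonitorWareDB", "admin", ""))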

    Generate Reports on data coming from database

    If this option is checked, the Windows Reporting Module and PIX Reporting
    Module generate their reports from the underlying database. This option is
    provided so that, if the data you want to report on resides in some other
    database, you can supply its DSN here.

    Generate Reports on data coming from the following file

    If this option is checked, the Windows Reporting Module and PIX Reporting
    Module generate their reports from the configured log files rather than
    from a database.

    Log File Prefix

    This option allows you to enter the prefix of the log files that have been
    generated by our other products. MonitorWare Console searches the
    specified path for files starting with this prefix (a sketch of this
    lookup follows the naming options below).

    Log File Path

    This option allows you to enter the path of the folder that contains the
    log files.

    Browse

    This option will open a dialog box from where you can select the path of the
    log files. A dialog similar to the one below opens up.



    Figure 7: Browse – Select Folder Form

    Log File Naming

    This option allows you to select the naming convention for your log files.
    Options available are:

    1). Adiscon (LogPrefix-yyyy-mm-dd.log)

    2). Single
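
    To illustrate how the prefix, path and Adiscon naming convention combine, here is a minimal Python sketch of the file lookup. The folder and prefix are placeholders; this is not MonitorWare Console code.

    import re
    from pathlib import Path

    def find_adiscon_logs(folder, prefix):
        # matches e.g. "MonitorWare-2004-10-19.log" for prefix "MonitorWare"
        name = re.compile(re.escape(prefix) + r"-\d{4}-\d{2}-\d{2}\.log$")
        return sorted(p for p in Path(folder).iterdir() if name.match(p.name))

    for log in find_adiscon_logs(r"C:\Logs", "MonitorWare"):
        print(log)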

    Type of Parser

    This option allows you to select the type of the parser used for parsing the
    log files. Options available are:

    1). Adiscon Parser for PIX

    2). Adiscon Parser for XML

    Note: If you are interested in PIX reports, choose the Adiscon Parser for
    PIX. If you are interested in Windows reports, choose the Adiscon Parser
    for XML.

    OK

    Saves the settings and quits the form.

    Cancel

    Quits the form without saving the settings.

    Note: The settings in this dialog box are global. Whenever you open a
    report, it opens with these settings; you can override them for each
    report on an individual basis.

    After saving the settings, click OK. This takes you back to the login
    dialog shown in Figure 1. Once the database or the log file has been set
    up, the OK button in that dialog takes the user into the MonitorWare
    Console application. The following six cases can occur when starting
    MonitorWare Console.

    Case 1: Your login and password are valid and no update is required for
    the underlying database you configured in Figure 3. In this case, you
    enter MonitorWare Console successfully and see a form similar to the one
    shown below:



    Figure 8: Main Form of MonitorWare Console

    Case 2: Your login fails because you entered a wrong user name or
    password. In this case, you stay on the login dialog and are asked for the
    correct credentials again. The following message box is displayed:



    Figure 9: Login Fail Dialog

    Case 3: The DSN configured in Figure 3 does not point to a valid database.
    By valid we mean a database that contains the SystemEvents table. In this
    case, you get the following message box:



    Figure 10: Invalid Database

    Case 4: The database the DSN in Figure 3 points to is valid, but you do
    not have sufficient permissions to query it. In this case, a dialog box
    similar to the one shown in Figure 10 is displayed again.

    Case 5: You do not have sufficient permissions to write to the registry.
    In this case, a dialog box stating that you lack sufficient permissions is
    displayed.

    Case 6: Your login and password are valid and your DSN points to the
    correct MonitorWare database, but the database is outdated. MonitorWare
    Console displays the following message:



    Figure 11: Database Update Required Dialog

    If you click Yes, the database is updated (the Console needs some
    additional tables for housekeeping). If you click No or Cancel, the dialog
    box disappears and you are returned to the login dialog shown in Figure 1.