3 Approaches for Using Events in MicroStrategy for Report and Document Distribution

MicroStrategy Report and Document Distribution

One of the primary responsibilities of a MicroStrategy Administrator is the creation of schedules and triggers for on-time and efficient delivery of BI reports and documents to end users. The subscriptions can either be time-triggered (i.e., sent to the subscribers at a particular time every day) or event-triggered (i.e., delivered only at the completion of a specific event). Both types are useful depending on the environment and the requirements.

At one of my clients, time-triggered subscriptions were used to deliver reports and documents to the business users. The ETL jobs were scheduled to run daily from 2 AM to 5 AM and they regularly completed before 5 AM. It was therefore decided that triggering the subscriptions at 6 AM would be the best way to schedule the “push” reports.

To prevent all reports from getting triggered at the same time, the reports were grouped into batches and were triggered in 15-minute intervals from 6 AM onwards. This was done to avoid high loads on the MicroStrategy Intelligence Server as well as the database server.

The arrangement worked perfectly until one day when the reports were delivered to all business users and senior executives with missing data and incomplete sales information. The initial suspicion fell on the POS files and ETL job. After further research, the root cause was identified as a timing issue.

On that particular day, processing of POS files was delayed for some stores which resulted in the ETL load running past 6 AM. Since the MicroStrategy Intelligence Server scheduling the delivery of reports had no way of knowing the status of the ETL load, it triggered the reports to run at 6 AM with an incomplete data set.

The incident influenced management to change all reporting schedules to be event-triggered so the report distribution occurs only after the completion of the ETL load event. I researched several ways to implement this. In this blog post I discuss three approaches for leveraging Events in MicroStrategy to handle timely report and document distribution.

Enterprise Scheduler-controlled Command Manager Scripts

This approach controls the execution of MicroStrategy reports using Command Manager scripts, which are in turn triggered by the Enterprise Scheduler. The following steps control report execution from the Enterprise Scheduler:

  1. Create Events in MicroStrategy that represent actual events in the BI environment, such as a data warehouse load.
  2. Change all time-triggered Schedules to event-triggered Schedules.
  3. Associate the required Events with their respective Schedules. The Events control the Schedules: to run a Schedule, the corresponding Event must be triggered first, which is done using scripts (run manually or automatically). Every scheduled item linked to that Event then executes.
  4. Automate the triggering of an Event using the MicroStrategy Command Manager component, which can be easily integrated with external systems such as an Enterprise Scheduler.
  5. Set the dependency in the Enterprise Scheduler between the ETL load job and MicroStrategy Report trigger Events.
  6. When the ETL job finishes, it satisfies a dependency in the Scheduler, which then runs the EVENT TRIGGER Command Manager script.
  7. The Command Manager script in turn triggers the event in MicroStrategy.
  8. The MicroStrategy Event triggers the Schedules associated with the event.
  9. Reports in the Schedule are executed and delivered to the end users.
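As a sketch, the Command Manager script run by the scheduler can be a single statement; the event name "DW_Load_Complete" below is hypothetical, standing in for whatever Event was created in step 1:

```
TRIGGER EVENT "DW_Load_Complete";
```

The enterprise scheduler would execute this script through the Command Manager command line, along the lines of `cmdmgr -n "ProjectSource" -u Administrator -p <password> -f trigger_dw_event.scp -o trigger.log`, where the project source name, credentials, and file names are placeholders for your environment.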

MicroStrategy Pre and Post SQL Execution

The second alternative uses a control table together with MicroStrategy's PRE and POST SQL execution feature, which runs SQL statements before and after service execution to control the report schedules.

PRESQL: runs before service execution, checking the flag at timed intervals.

If Flag = 1, the service completes execution.

If Flag = 0, service execution is cancelled.

POSTSQL: runs after service execution, updating the flag to prevent the next execution.

The following are the steps required to implement this model:

  1. Create a table (e.g., DW_LOAD) with three fields: (1) Service ID (the Intelligence Server service), (2) Subscription Set ID (the ID of each subscription set in a service), and (3) Flag (0 = event has not happened, 1 = event has happened).
  2. Create time-based schedules for the reports.
  3. When the Scheduler triggers the service, the Intelligence Server checks the value of the Flag:
    1. If Flag = 1 → the Server executes the service and updates the Flag to 0.
    2. If Flag = 0 → no new data has loaded; the Server cancels service execution.
  4. When the ETL load is complete, the Scheduler triggers a stored procedure to update the Flag back to 1.
  5. When the Scheduler kicks off again, the service checks the flag and the cycle continues.
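The flag logic above can be sketched as follows. This is a minimal simulation in Python using an in-memory SQLite table; the table and column names mirror the hypothetical DW_LOAD table, and `try_run_service` stands in for the Intelligence Server's PRESQL check, service execution, and POSTSQL reset:

```python
import sqlite3

# In-memory stand-in for the DW_LOAD control table described above.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE DW_LOAD (service_id TEXT, subscription_set_id TEXT, flag INTEGER)"
)
conn.execute("INSERT INTO DW_LOAD VALUES ('SVC1', 'SUB1', 0)")

def try_run_service(service_id):
    """PRESQL: check the flag; run the service only if the ETL event has happened."""
    (flag,) = conn.execute(
        "SELECT flag FROM DW_LOAD WHERE service_id = ?", (service_id,)
    ).fetchone()
    if flag == 0:
        return "cancelled"  # event has not happened yet; cancel execution
    # ... report delivery would execute here ...
    # POSTSQL: reset the flag so the service does not re-run before the next load.
    conn.execute("UPDATE DW_LOAD SET flag = 0 WHERE service_id = ?", (service_id,))
    return "executed"

def etl_load_complete(service_id):
    """Stored procedure the ETL scheduler calls when the load finishes."""
    conn.execute("UPDATE DW_LOAD SET flag = 1 WHERE service_id = ?", (service_id,))

print(try_run_service("SVC1"))  # cancelled: the load has not completed
etl_load_complete("SVC1")
print(try_run_service("SVC1"))  # executed, and the flag is reset to 0
print(try_run_service("SVC1"))  # cancelled again until the next load
```

Note how the POSTSQL reset is what prevents a double delivery when the time-based schedule fires again before the next ETL load.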

Batch Job Check for Load Complete

This approach requires a schedule-control table with a Load_Complete_Flag field that indicates DW load completion. The field is updated by SQL scripts to reflect the current status of the DW load.

  1. Create a table (e.g., DW_LOAD_2) with two fields: (1) Table Name, and (2) Load_Complete_Flag.
  2. Execute a batch job that runs a SQL query checking the Load_Complete_Flag in DW_LOAD_2, which gets set when the data warehouse tables have finished loading (0 = not loaded, 1 = loaded).
  3. Run a query to check how many tables are not yet loaded:

SELECT COUNT(*) FROM DW_LOAD_2 WHERE Load_Complete_Flag = '0';

If n is the number of tables to be loaded:

  • Result set = 0 → no tables are pending (i.e., all n tables are loaded).
  • Result set = 1 → one table is not loaded (i.e., n-1 tables are loaded).

  4. The batch file looks for a result set equal to 0, meaning all tables have been loaded. When it finds a 0, it spools the results to a text file.
  5. Schedule a batch job to run every 5 minutes to check for this 'results' text file. If the file is found (i.e., all tables are loaded), the batch job continues on to trigger an Event on the Intelligence Server.
  6. The Event triggers the event-based Schedules, so any daily reports scheduled for delivery on those Schedules are then sent out.
  7. Schedule several batch jobs at incremental times to control the load on the Intelligence Server.
  8. Create the batch jobs with a 'SLEEP' utility built in, so each job sleeps before the next one kicks in.
  9. Update Load_Complete_Flag back to 0 after the reports are triggered.
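The polling loop above can be sketched as follows, again as a minimal Python simulation against an in-memory DW_LOAD_2 table. The table names in the sample rows are hypothetical, and the event trigger is represented by resetting the flags, standing in for the Command Manager call that would fire the Intelligence Server Event:

```python
import sqlite3
import time

# In-memory stand-in for the DW_LOAD_2 schedule-control table.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE DW_LOAD_2 (table_name TEXT, load_complete_flag INTEGER)")
conn.executemany(
    "INSERT INTO DW_LOAD_2 VALUES (?, ?)",
    [("SALES_FACT", 1), ("STORE_DIM", 1), ("PRODUCT_DIM", 0)],  # one table pending
)

def pending_tables():
    """The check query: how many tables are not yet loaded."""
    (count,) = conn.execute(
        "SELECT COUNT(*) FROM DW_LOAD_2 WHERE load_complete_flag = 0"
    ).fetchone()
    return count

def poll_and_trigger(max_checks, interval_seconds=0):
    """Re-check periodically (5 minutes in the article; 0 seconds for this demo)."""
    for _ in range(max_checks):
        if pending_tables() == 0:
            # All tables loaded: trigger the MicroStrategy Event,
            # then reset the flags for the next load cycle (step 9).
            conn.execute("UPDATE DW_LOAD_2 SET load_complete_flag = 0")
            return "event_triggered"
        time.sleep(interval_seconds)
    return "timed_out"

print(poll_and_trigger(max_checks=1))  # timed_out: PRODUCT_DIM is still loading
conn.execute(
    "UPDATE DW_LOAD_2 SET load_complete_flag = 1 WHERE table_name = 'PRODUCT_DIM'"
)
print(poll_and_trigger(max_checks=1))  # event_triggered: all tables loaded
```

In a real deployment the polling and sleep would live in the scheduled batch job, and the "event_triggered" branch would invoke Command Manager rather than a local function.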

Benefits and Drawbacks

All three options have advantages and are suitable for specific situations. You need to consider which method best meets your needs.

To help with that, I have listed some of the benefits and drawbacks of each method.

Enterprise Scheduler-controlled Command Manager Scripts


Benefits:

  1. An external table to maintain load status is not required; the dependency between the ETL load and MicroStrategy is controlled by the scheduler itself.
  2. It is possible to set up dependencies for triggering reports based on a series of events instead of a single load complete event.
  3. On-demand report execution can be achieved from the scheduler by triggering the Command Manager script.
  4. This method offers good scalability. The enterprise scheduler can schedule any number of reports. We only need to set the dependencies between the reports in the scheduler.


Drawbacks:

  1. The Command Manager script does not wait until job execution is complete. It returns success as soon as the trigger is submitted, so a dependency on report completion cannot be enforced.

MicroStrategy Pre and Post SQL Execution


Benefits:

  1. A report can be triggered as soon as the specific tables it requires are loaded; there is no need to wait until all tables finish loading.
  2. Historical ETL load times can be tracked, if required, by adding an extra field to the table. This is useful for auditing.


Drawbacks:

  1. An additional table to track data warehouse load information is needed.
  2. Resource-aware scheduling is not possible. Several reports can be running at the same time, thus increasing the load on the server and thereby decreasing performance.
  3. Report schedules based on a series of events are not possible.

Batch Job Check for Load Complete


Benefits:

  1. The load-status table is simple, since it captures only loaded/not-loaded information.
  2. Batch jobs can be scheduled at incremental times to control the load on the Intelligence Server.


Drawbacks:

  1. Reports can be triggered only after all the tables in the batch have finished loading.
  2. Resource-aware scheduling is not possible. Several reports can be running at the same time increasing the load on the server and thereby decreasing performance.

In my client’s organization, it was decided that the new architecture for report triggering and scheduling had to support several essential features, keeping the bigger picture and long-term plans in mind:

  • Event-based triggering
  • Resource-aware scheduling
  • Ability to manage and schedule jobs externally
  • Centralized management of alerts and notifications
  • Scalability

We therefore decided to go with enterprise scheduling of MicroStrategy Reports through our enterprise batch scheduling utility.

The organization greatly benefited from this implementation. Apart from preventing partial and incorrect information from being sent to management, we gained better visibility and control over report triggering. The arrangement was also scalable: adding new tables required only a new dependency condition in the scheduler.
