Gautham KAMATH

BI Developer specializing in SSAS and MDX


CoordinatorExecutionMode and Process MaxThreads


Just recently I was running a processing performance test against my Analysis Services database and noticed that only a little over 50% of the available system resources were being used. On investigation I found that the counters relevant to processing, namely busy (Processing pool busy threads), idle (Processing pool idle threads) and queue (Processing pool job queue length), were way below what the SSAS defaults allow.

This did not seem right. On closer investigation I found that, during the processing cycle, small amounts of data were being processed (ranging from 1,000 to 100,000 records per batch) and ProcessAdd was being used to process the partition data.

I changed the processing cycle to process all the records using ProcessFull and the data processed significantly faster. Yes, I know ProcessFull should have been used instead of ProcessAdd in the first place, but our earlier requirements did not allow it, and besides, that is not the topic of discussion today.

As I mentioned, the processing time did reduce, but a significant number of jobs were still queuing. This, as a matter of fact, was not surprising, as I was using the SSAS default settings for CoordinatorExecutionMode and Process MaxThreads.

Just for information, let me define the two Analysis Services server properties and three Performance Monitor counters that I will be referring to in this post.

Analysis Services server properties

CoordinatorExecutionMode: Processing of dimensions and partitions, and querying of partitions, are handled by job operations. A job operation is a unit of execution of a command fired against the SSAS server. CoordinatorExecutionMode is the property that controls how many of these job operations run in parallel. It can have the following values:

  • A negative number indicates the absolute maximum number of parallel jobs per core. CoordinatorExecutionMode defaults to -4, meaning 4 × the number of cores; if you have 8 cores on the server, the absolute maximum number of parallel jobs would be 32.
  • A positive number indicates the absolute maximum number of parallel jobs per server. If you set it to 10, the absolute maximum number of parallel jobs for the server would be 10.
  • Zero tells the server to determine the maximum number of parallel jobs itself, based on workload and available system resources.

Process MaxThreads: This server property determines the maximum number of threads available to SSAS during processing (and for accessing the I/O system during querying). It defaults to 64 or 10 times the number of cores, whichever is greater; on the 4-core test server used here that works out to max(64, 40) = 64.
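Both properties can be changed in SSMS (server Properties, with Show Advanced (All) Properties ticked) or in msmdsrv.ini. In the ini file they appear roughly as follows; the values shown are the defaults discussed here, and the surrounding elements are abbreviated:

        <ConfigurationSettings>
          ...
          <CoordinatorExecutionMode>-4</CoordinatorExecutionMode>
          <ThreadPool>
            <Process>
              <MaxThreads>64</MaxThreads>
            </Process>
          </ThreadPool>
          ...
        </ConfigurationSettings>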

Performance Monitor (PerfMon) counters

Processing pool busy threads: Indicates the number of processing pool threads in use.
Processing pool idle threads: Indicates the number of processing pool threads that are idle.
Processing pool job queue length: Indicates the number of jobs waiting for a processing pool thread to become available.

Now coming back to the issue, as a test case, I changed the CoordinatorExecutionMode to zero.

Changing the CoordinatorExecutionMode to zero meant that I was forcing the SSAS engine to use all the available system resources. Ironically, it slowed processing down significantly and increased the processing time by 50%.

This was because all the available threads (in excess of 300) were now either fighting over or queuing up for the same portion of processing work. The system resources were being hammered, and the Process MaxThreads setting, still at its default value of 64, was effectively being ignored.

I changed the CoordinatorExecutionMode back to -4 and the system processed as before. But I still saw a significant job queue, so I increased Process MaxThreads from its default value of 64 to 128. The queue disappeared and the processing time improved marginally; the important point is that raising Process MaxThreads to 128 is what brought the job queue length down to zero.

I then went slightly adventurous and changed the CoordinatorExecutionMode to -1. The processing pool idle threads reduced significantly and the processing pool busy threads increased, while the processing pool job queue length remained at 0. But for me, the default CoordinatorExecutionMode of -4 together with Process MaxThreads at 128 worked best.

To summarize:

  • DO NOT change CoordinatorExecutionMode to 0.
  • Try to get the queue length to a minimum. Tuning Process MaxThreads is a good idea, but do not be over-generous as it may backfire. Please note that CoordinatorExecutionMode and Process MaxThreads complement each other; if you change CoordinatorExecutionMode, make sure you have enough threads to play with.
  • CPU is underutilized when:
    • Processing pool job queue length is greater than 0. This means that SSAS wants to process more but does not have threads available.
    • Processing pool idle threads = 0. This means every thread is busy and there is potential to use more, so increase Process MaxThreads. Increase gradually and check; ideally the job queue length should not go above 0, and a gradual increase will give you a clear indication of the system's capacity.
    • Processing pool idle threads is greater than 0. Increase the CoordinatorExecutionMode from the default value of -4 to -5 or -6. (Yes, this is an increase and not a decrease; the negative sign just indicates that it is a per-core setting, whereas a positive value is a per-server setting.) Idle threads greater than 0 does not mean the server does not have enough threads; it means it does not have enough jobs.

Please note this test was carried out against:

  • SQL Server Analysis Services (SSAS) 2008 R2, 64-bit
  • Windows Server 2008 R2, 64-bit
  • 10 GB RAM
  • 4 cores

This behaviour improves in Analysis Services 2012 (Multidimensional model) with changes to the way the Multidimensional engine uses Non-Uniform Memory Access (NUMA).


Processing using Snapshot Isolation in Analysis Services (SSAS)


Just a few months ago I was hit with a dimension processing failure. The error read:

Errors in the OLAP storage engine: The attribute key cannot be found when processing: Table: ‘<<TableName>>’, Column: ‘<<ColumnName>>’, Value: ‘XXXXXXX’. The attribute is ‘<<AttributeName>>’.

On investigation I found that constant updates to the dimension's source data were the main cause. One of my dimensions had as many as 15 attributes, and processing failed whenever an update occurred during dimension processing. Generally speaking, the first attribute's SELECT DISTINCT statement would conflict with the SELECT DISTINCTs of the other attributes because the data kept changing between queries.
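For illustration only (the view and column names below are hypothetical, and the SQL that SSAS actually generates is aliased more heavily), a ByAttribute dimension process issues one distinct-values query per attribute, roughly like this:

        -- One query per attribute, issued at slightly different times;
        -- concurrent updates to the source can make the member sets disagree
        SELECT DISTINCT [ProductKey] FROM [dbo].[vwDimProduct]
        SELECT DISTINCT [ProductKey], [ProductName] FROM [dbo].[vwDimProduct]
        SELECT DISTINCT [SubcategoryKey], [SubcategoryName] FROM [dbo].[vwDimProduct]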

One way to get around this was to change the ProcessingGroup property of the dimension to ByTable (instead of the default value of ByAttribute). As the ByTable option pulls the entire table data for the dimension into memory in one go, every attribute is processed from the same data, without having to worry about the constant updates. The new updates would then get incorporated in the next processing cycle.

This option did not work for me, as ByTable only helps when you have a solitary table to draw the data from. Multiple tables in a star or snowflake schema, or an RDBMS view that encapsulates multiple tables, will still issue multiple queries against the data source.

P.S. Try processing the Product and Geography dimensions using the ByAttribute and ByTable options. The Profiler trace for ByTable ProcessingGroup mode on the Product dimension will show multiple SELECT DISTINCT statements being fired, whereas for the Geography dimension, because of its single-table data source, only one SELECT DISTINCT statement is issued.

As I am using RDBMS views that join multiple tables, this option also went to pits for me. The only feasible solution was to use Snapshot Isolation.

Snapshot Isolation

Snapshot Isolation works on SQL Server's row versioning framework.

Row versioning-based isolation levels reduce the number of locks acquired by a transaction by eliminating the use of shared locks on read operations.

For Analysis Services processing to work in Snapshot Isolation mode, the source RDBMS database needs to have the READ_COMMITTED_SNAPSHOT and ALLOW_SNAPSHOT_ISOLATION options set to ON.

  • When READ_COMMITTED_SNAPSHOT is ON, READ COMMITTED transactions provide statement-level read consistency using row versioning.
  • When ALLOW_SNAPSHOT_ISOLATION is ON, SNAPSHOT transactions provide transaction-level read consistency using row versioning.

The following SQL commands enable the mentioned options in your RDBMS database (note that setting READ_COMMITTED_SNAPSHOT requires that there be no other active connections in the database while the statement runs):

        USE <<DatabaseName>>
        GO
        -- Statement-level read consistency for READ COMMITTED transactions
        ALTER DATABASE <<DatabaseName>>
        SET READ_COMMITTED_SNAPSHOT ON
        GO
        -- Transaction-level read consistency for SNAPSHOT transactions
        ALTER DATABASE <<DatabaseName>>
        SET ALLOW_SNAPSHOT_ISOLATION ON
        GO
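To confirm both options took effect, you can check sys.databases (the database name is the same placeholder as above):

        SELECT name,
               is_read_committed_snapshot_on,   -- expect 1
               snapshot_isolation_state_desc    -- expect ON
        FROM sys.databases
        WHERE name = '<<DatabaseName>>';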

Enabling these options causes all data modifications in the database to be versioned: a copy of each row, as it was before the update (or add), is stored in the version store in tempdb. Reads issued during processing are thereafter served from these row versions. After the dimension processing operation (under Snapshot Isolation) finishes, tempdb frees the space occupied by the versioned rows.
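As a minimal sketch of what this buys the processing queries (the table name here is hypothetical), any reads inside a SNAPSHOT transaction see a consistent set of row versions, regardless of concurrent updates:

        -- Requires ALLOW_SNAPSHOT_ISOLATION ON in the source database
        SET TRANSACTION ISOLATION LEVEL SNAPSHOT;
        BEGIN TRANSACTION;

        -- The transaction's snapshot is established by the first read;
        -- both queries therefore see the same row versions, even if
        -- another session updates dbo.DimProduct in between
        SELECT DISTINCT [ProductKey] FROM [dbo].[DimProduct];
        SELECT DISTINCT [ProductKey], [ProductName] FROM [dbo].[DimProduct];

        COMMIT TRANSACTION;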

Please note that tempdb must have enough space to hold the row-versioned data generated for the source RDBMS database.
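A rough way to keep an eye on that space is the following query against a standard DMV (it reports 8 KB page counts, converted here to MB):

        -- Space currently reserved by the version store in tempdb
        SELECT SUM(version_store_reserved_page_count) * 8 / 1024.0 AS version_store_mb
        FROM tempdb.sys.dm_db_file_space_usage;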

Another important thing is to enable the Multiple Active Result Sets (MARS) property on the data source connection in the Analysis Services database.

Not enabling MARS will give you the following error:

OLE DB error: OLE DB or ODBC error: Cannot create new connection because in manual or distributed transaction mode.
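For reference, a data source connection string with the MARS property enabled looks roughly like the following (the server and database names are placeholders, the provider version will vary, and I believe the property is rendered as MARS Connection when set from the data source dialog):

        Provider=SQLNCLI10.1;Data Source=<<ServerName>>;
        Integrated Security=SSPI;Initial Catalog=<<DatabaseName>>;
        MARS Connection=True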

Since enabling Snapshot Isolation for my Analysis Services database's processing, dimension processing has not had any errors due to constant updates at the source. The tempdb size has also stayed well within acceptable limits.

Written by Gautham KAMATH

24/02/2012 at 16:34