Gautham KAMATH

BI Developer specializing in SSAS and MDX

Archive for November 2011

Populate “All Caption” property of Analysis Services 2000 Dimension using DSO


My Analysis Services 2000 post came up again just yesterday. In that post I had proposed a solution for populating the “All Caption” property of an Analysis Services 2000 dimension using DSO and C#, as I could not find an answer in Books Online (BOL).

The following is that solution:

Using just the MemberKeyColumn property solves the problem.

// dsoDimDepartment is an existing DSO.Dimension object.
DSO.Level allLevelDepartment = (DSO.Level)dsoDimDepartment.Levels.AddNew("(All)", DSO.SubClassTypes.sbclsRegular);
allLevelDepartment.LevelType = DSO.LevelTypes.levAll;
allLevelDepartment.EstimatedSize = 1;
// The MemberKeyColumn of the (All) level is what drives the All caption.
allLevelDepartment.MemberKeyColumn = "All Department";

This C# snippet is enough to populate the “All Caption” property of an Analysis Services 2000 dimension using DSO.
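For context, the snippet assumes an existing DSO.Dimension object (dsoDimDepartment). The sketch below shows one way the surrounding DSO plumbing might look, from connecting to the server to committing the change. The server, database and dimension names are made up, and the interop calls are written from memory against the DSO object model, so treat it as a rough sketch rather than a tested sample.

// Hedged sketch only: "localhost", "FoodMart 2000" and "Department" are hypothetical names.
DSO.Server dsoServer = new DSO.ServerClass();
dsoServer.Connect("localhost");

// Databases are exposed through the server's MDStores collection.
DSO.MDStore dsoDb = (DSO.MDStore)dsoServer.MDStores.Item("FoodMart 2000");
DSO.Dimension dsoDimDepartment = (DSO.Dimension)dsoDb.Dimensions.Item("Department");

// Add the (All) level and set its caption, exactly as in the snippet above.
DSO.Level allLevelDepartment = (DSO.Level)dsoDimDepartment.Levels.AddNew("(All)", DSO.SubClassTypes.sbclsRegular);
allLevelDepartment.LevelType = DSO.LevelTypes.levAll;
allLevelDepartment.EstimatedSize = 1;
allLevelDepartment.MemberKeyColumn = "All Department";

// Commit the metadata change and clean up.
dsoDimDepartment.Update();
dsoServer.CloseServer();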


Written by Gautham KAMATH

15/11/2011 at 21:56

An open letter to Sachin


Dear Sachin

This is my first letter to you in the 22 years that I have known you, and needless to say, you keep fascinating me. It's not only because you are a great player, but also because you are a great athlete. It's hard for me to imagine any athlete staying on top (or at the very least near the top) for so long; 22 years is by far a very long time. To me, those 22 years hold far more value than the 33,207 runs (as of 15 November 2011) that you have scored, which very nicely brings me to the main subject of this letter: your hundredth century.

Your hundredth century will come, and there is nothing stopping it. There is a lot of media chatter, or, politely speaking, bitching, going on about it, with topics like “the hundredth 100 is getting the better of Sachin” and “Do tell Tendulkar how it is done”, etc. While I am sure that you are not affected by this, because you are still a mortal human being, a part of me thinks that it might be true.

Let me divert you from this topic. A good few years ago I wanted to pass my driving test here in the UK. The UK driving test is possibly the most difficult to pass and very expensive. I failed the first time (it was bad luck, as I sincerely thought I did nothing wrong). The desperation to pass on my second attempt, and the possibility of losing another good chunk of money, made me so nervous that I made a mess of my test. Even before the driving instructor told me that I had not passed, I had failed myself: I had allowed myself to be nervous. This was a lesson to me. On my third attempt I went in with a clear mind and told myself that if I failed again, I would pay again and go for it again. The pain of losing money was no longer part of the feeling. On my third attempt I passed.

The reason I am telling you this is self-explanatory. Dear Sachin, you will get your hundred. Go and bat with the attitude of “Aaj nahi, kal sahi” (if not today, then tomorrow).

Eagerly looking forward to your 100.

Warm regards
Gautham

Written by Gautham KAMATH

15/11/2011 at 21:39

Always use {} and ()


When SSAS 2005 was released, MDX gained a very nice change (at least it looked nice at the time): it was no longer necessary to be strict about curly braces {} and parentheses () in MDX syntax. This leniency sounded fun back then, but after years of relying on the implicit conversion between members, tuples and sets, it does not seem all that good any more. Let me explain.

Consider the following simple query

     SELECT {[Measures].[Internet Sales Amount]} ON 0,
     {[Customer].[Customer Geography].[City].&[Berkshire]&[ENG]
     * [Product].[Product Categories].[Category].Members
     * [Date].[Calendar].[Calendar Year].[CY 2008]} ON 1
     FROM [Adventure Works]

The results are good.

But if we swap two of the operands ([Date] and [Product]) on the ROWS axis, the query breaks.

     SELECT {[Measures].[Internet Sales Amount]} ON 0,
     {[Customer].[Customer Geography].[City].&[Berkshire]&[ENG]
     * [Date].[Calendar].[Calendar Year].[CY 2008]
     * [Product].[Product Categories].[Category].Members} ON 1
     FROM [Adventure Works]

The error reads…

Executing the query …
Query (2, 2) The * function expects a tuple set expression for the 1 argument. A string or numeric expression was used.
Execution complete

There is so much internal conversion going on between members, tuples, and sets that the interpretation and construction of MDX statements has become more complex.
The row and column axes require sets. The CrossJoin operator also requires sets, and neither of the above queries declares them properly.
The lesson: always declare your sets with {} and your tuples with ().

So if the above query is written as

     SELECT {[Measures].[Internet Sales Amount]} ON 0,
     {[Customer].[Customer Geography].[City].&[Berkshire]&[ENG]}
     * {[Date].[Calendar].[Calendar Year].[CY 2008]}
     * {[Product].[Product Categories].[Category].Members} ON 1
     FROM [Adventure Works]

…everyone is happy.

Please note that, by contrast, the WHERE clause is not as lenient as the rest of the query. A member from a single hierarchy in the WHERE clause without round brackets is acceptable, as the member is implicitly converted to a tuple. Modify it to reference a second hierarchy and add round brackets, however, and it will complain, because it expects curly braces {}.

Written by Gautham KAMATH

03/11/2011 at 18:36

Posted in MDX


CommitTimeout and ForceCommitTimeout in SSAS


There has been some confusion over CommitTimeout and ForceCommitTimeout in the past. Hopefully this post will be of some use.

SSAS processing requires a write lock on the cube in order to commit the processing transaction. To acquire the write lock, there must be no read locks on the cube, and read locks are acquired whenever an MDX query is fired against the cube. The COMMITTIMEOUT property controls how long the commit waits for all read locks to be released. For example, if the commit timeout is 30 seconds and the read locks are not released in that time, the processing transaction is rolled back.

The FORCECOMMITTIMEOUT property is used to avoid rolling back processing transactions. It waits the specified time for long-running queries (which hold the read locks) to complete their work before cancelling them, so that the write lock for processing can be acquired.

The locking behaviour can be summarized as follows:

COMMITTIMEOUT – cancels processing (after the specified time) if the write lock cannot be obtained.
FORCECOMMITTIMEOUT – cancels long-running queries (after the specified time) so that the write lock for processing can be acquired. When these queries are cancelled, the user gets an error (“Server action is cancelled”).

A COMMITTIMEOUT of 0 will never let the processing operation time out.

A COMMITTIMEOUT of 0 combined with a low FORCECOMMITTIMEOUT means user queries get cancelled more often. A high FORCECOMMITTIMEOUT means user queries (even badly performing ones) get more time to finish, while processing operations are delayed because they have to wait until all the read locks (held by the querying operations) are released, giving a perception of bad processing performance.

If we set both COMMITTIMEOUT and FORCECOMMITTIMEOUT to zero, the write lock on the cube will wait indefinitely until all the read locks are released. There will be no errors or timeouts, but the system will generally appear to perform badly. Please note that if users query the cube during the commit phase of processing, those queries get queued up.

If you want processing to take priority, set COMMITTIMEOUT to 0 and FORCECOMMITTIMEOUT to a low value such as 5000 (the default of 30000 is also OK). Both values are in milliseconds, so a FORCECOMMITTIMEOUT of 5000 forces user queries to be cancelled after 5 seconds.

If querying has to take precedence, then set COMMITTIMEOUT to a low value such as 5000, which forces the processing operation to be cancelled (and rolled back) after it has waited 5 seconds for the write lock. Setting both COMMITTIMEOUT and FORCECOMMITTIMEOUT to 0 also gives precedence to queries, but without any timeouts on processing operations; this will generally give the impression of a badly performing server.
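Both properties are server-level properties, so they can be changed through the Analysis Services server properties dialog in SSMS or directly in msmdsrv.ini. As a rough illustration, here is a hedged C# sketch that uses AMO (Microsoft.AnalysisServices) to read the two properties and apply the “processing takes priority” settings described above. The instance name is hypothetical and the AMO calls are written from memory, so verify against your own server before relying on it.

// Hedged sketch using AMO (Microsoft.AnalysisServices.dll); values assumed to be in milliseconds.
using Microsoft.AnalysisServices;

class TimeoutSettings
{
    static void Main()
    {
        Server server = new Server();
        server.Connect("Data Source=localhost");   // hypothetical instance name

        // Read the current values.
        ServerProperty commitTimeout = server.ServerProperties["CommitTimeout"];
        ServerProperty forceCommitTimeout = server.ServerProperties["ForceCommitTimeout"];
        System.Console.WriteLine("CommitTimeout={0}, ForceCommitTimeout={1}",
            commitTimeout.Value, forceCommitTimeout.Value);

        // Processing takes priority: never time the commit out,
        // cancel blocking queries after 5 seconds.
        commitTimeout.Value = "0";
        forceCommitTimeout.Value = "5000";
        server.Update();        // push the changed properties back to the server

        server.Disconnect();
    }
}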

To avoid such unwanted situations, it is best to scale out Analysis Services by separating the processing and querying operations onto different servers.

Written by Gautham KAMATH

02/11/2011 at 09:42