Analysis Services and Excel Tips and Tricks

I was speaking with a Business Analyst the other day and I couldn’t remember exactly how to do a cool trick I’d learned years ago working with Excel and Analysis Services.  It got me thinking about some other cool techniques that I suspect many do not know about.  If you’ve got a cool trick to share, please let us know in the comments.

Trick #1 – Use OLAP PivotTable Extensions to Add Calculated Measures

I want to demonstrate how to add a new calculated measure to a pivot table using MDX.

The Setup

  1. Download and install the add-in.
  2. Open Excel and connect to the AdventureWorks SSAS database.  (Data, From Other Sources, Analysis Services, etc.)
  3. In the PivotTable Field List, use the “Show fields related to:” drop-down at the top to select “Internet Sales.”
  4. Add the Internet Sales Amount to the Values area.
  5. Add the Product Category user hierarchy to the rows and expand.
  6. Add the Date.Fiscal Weeks User hierarchy to the Report Filter and select Week 4 FY 2008.
  7. Right click a cell within the pivot table and select OLAP PivotTable Extensions.
  8. Enter the following in the Calculation Name drop-down field:  [SPPY Internet Sales Amt]  (SPPY = Same Period Prior Year)
  9. Enter the following in the formula text box: (ParallelPeriod([Date].[Fiscal].[Fiscal Year],1,[Date].[Fiscal].currentmember),[Measures].[Internet Sales Amount])
  10. Click the “Add to PivotTable” button.
  11. Place a check mark in the PivotTable Field list under the newly created “Values” measure group next to SPPY Internet Sales Amt.
  12. Verify that those numbers match Week 4 FY 2007.
  13. Remove the Date.Fiscal Weeks user hierarchy from the report filter.
  14. Add the Date.Fiscal user hierarchy to the Report Filter and select July 2007.
  15. Verify that those numbers match July 2006.
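If you want to sanity-check the calculation outside Excel, the same member can be defined query-scoped in an MDX window in SSMS. This is only a sketch: the cube, hierarchy, and member names assume the standard AdventureWorks sample, so adjust them to match your environment.

```mdx
-- Sketch only: SPPY as a query-scoped calculated member.
-- Cube, hierarchy, and member names assume AdventureWorks.
WITH MEMBER [Measures].[SPPY Internet Sales Amt] AS
    ( ParallelPeriod( [Date].[Fiscal].[Fiscal Year],
                      1,
                      [Date].[Fiscal].CurrentMember ),
      [Measures].[Internet Sales Amount] )
SELECT
    { [Measures].[Internet Sales Amount],
      [Measures].[SPPY Internet Sales Amt] } ON COLUMNS,
    [Product].[Product Categories].[Category].Members ON ROWS
FROM [Adventure Works]
WHERE ( [Date].[Fiscal].[Month].[July 2007] )
```

If the SPPY column for July 2007 lines up with the raw measure for July 2006, the calculation is working.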

Hungry for more?

There are lots of examples of useful MDX you can use in your spreadsheets!  A great graphical tool to help you build (and learn) complex MDX queries is Cube Player.

Trick #2 – Filtering, the Easy Way!

Have you ever needed to analyze a particular set of products or customers based on a long list of customer names or USNs?  Going through the filter drop-down and individually clicking each one can be a tedious, time-consuming, and frustrating process.  If you’ve ever had to do this, I am about to become your best friend.

The Setup

  1. Open Excel and connect to the AdventureWorks SSAS database.  (Data, From Other Sources, Analysis Services, etc.)
  2. In the PivotTable Field List, use the “Show fields related to:” drop-down at the top to select “Internet Sales.”
  3. Add the Internet Sales Amount to the Values area.
  4. Add Customer to the row labels.
  5. Highlight all the customer names that start with the first name “Michael” and copy them to your clipboard.
  6. Right click a cell in the PivotTable and select OLAP PivotTable Extensions.
  7. Click the Filter list tab and select [Customer].[Customer] from the filter hierarchy drop-down.
  8. Paste in your Michaels.
  9. Click Filter PivotTable.
  10. Add a comment to this blog post about how much you appreciate the time I spent writing it. :-)
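Under the covers, the add-in turns your pasted list into an MDX set filter. Here's a rough sketch of the kind of query it produces; the customer names below are made up for illustration, not taken from a real customer list.

```mdx
-- Sketch only: the pasted list becomes a set of members on rows.
-- These member names are illustrative; yours will differ.
SELECT [Measures].[Internet Sales Amount] ON COLUMNS,
       { [Customer].[Customer].[Michael Allen],
         [Customer].[Customer].[Michael Blythe] } ON ROWS
FROM [Adventure Works]
```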

Trick #3 – Using SSAS Time Intelligence in Excel

For this example, I added time intelligence to the AdventureWorks cube by following the wizard.  If you don’t know if the cube at your company has time intelligence, look in the date dimension for an attribute named something like “Fiscal Date Calculations” or just ask somebody.  If your cube doesn’t have time intelligence, maybe you can convince them to add it.

The Setup

  1. Open Excel and connect to the AdventureWorks SSAS database.  (Data, From Other Sources, Analysis Services, etc.)
  2. In the PivotTable Field List, use the “Show fields related to:” drop-down at the top to select “Internet Sales.”
  3. Add the Internet Order Count measure from the Internet Orders measure group to the values.
  4. Add the Date.Fiscal user hierarchy to the report filter.
  5. Add Fiscal Date Calculations to the rows.
  6. Select November 2003 in the Date.Fiscal report filter in cell B1.
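If your cube doesn’t have the wizard-generated attribute, you can approximate one of its calculations with query-scoped MDX. Here’s a hedged sketch of a year-to-date member similar to what the time intelligence wizard generates; the names assume AdventureWorks, so your hierarchies may differ.

```mdx
-- Sketch only: a year-to-date member similar to what the
-- time intelligence wizard generates. Names assume AdventureWorks.
WITH MEMBER [Measures].[Internet Order Count YTD] AS
    Aggregate(
        PeriodsToDate( [Date].[Fiscal].[Fiscal Year],
                       [Date].[Fiscal].CurrentMember ),
        [Measures].[Internet Order Count] )
SELECT [Measures].[Internet Order Count YTD] ON COLUMNS
FROM [Adventure Works]
WHERE ( [Date].[Fiscal].[Month].[November 2003] )
```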



Trick #4 – Formatting, As You Like It!

For this example, let’s create a nicely formatted re-usable Balance Sheet for Adventure Works.

The Setup

  1. Open Excel and connect to the AdventureWorks SSAS database.  (Data, From Other Sources, Analysis Services, etc.)
  2. In the PivotTable Field List, use the “Show fields related to:” drop-down at the top to select “Financial Reporting.”
  3. Check the box next to the Amount metric in the Financial Reporting measure group.
  4. Check the box next to Accounts in the Account dimension.
  5. Click the plus sign next to the Fiscal display folder in the Date dimension and check the box next to the “Date.Fiscal” user hierarchy.
  6. Click the drop down arrow in cell A2 and uncheck everything but Balance Sheet.
  7. Click the drop down arrow in cell B1 and uncheck everything except FY 2006, FY 2007, and FY 2008.
  8. Click the plus sign next to “Balance Sheet” in cell A3.  Click the plus sign on every plus you see until there are no longer any plus signs on the sheet.
  9. (Optional sub-tip)  The CFO wants to see the GL account next to the lowest leaf level rollups.  Right click “Cash” and select “Show Properties in Report” and choose “Account Number.”


Your screen should look like this.

What’s Wrong With This Picture?

This looks okay, but it could certainly be better.  If you look at balance sheet examples on the internet, you’ll notice that the totals are normally put underneath each category.  They also typically wouldn’t have a grand total column combining all of the years.

First let’s do something about those totals.

  1. Click on a cell in the pivot table.  This action adds the PivotTable Tools section on your Excel ribbon.
  2. Click on the Design tab and click the Report Layout button third from the left on the ribbon.  Select “Show in Tabular Form.”
  3. Click the Grand Totals button and select “Off for Rows and Columns.”

It is starting to look better, but now if I want to insert a row underneath the receivables total I get a very big error message!


This is the trick.

  1. Click the Options tab of the PivotTable Tools ribbon and click the OLAP Tools button fourth from the right.  Select “Convert to Formulas.”
  2. You’ll get an error.


To resolve the error:

  1. Click the filter icon in cell A2 and select all.
  2. Click the filter icon in cell G1 and select all.
  3. Save a copy of your Balance Sheet with a file name such as “Dynamic Balance Sheet.XLSX” (I’ll explain why shortly.)
  4. Click the OLAP Tools button and select “Convert to Formulas” again.
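After the conversion, each number in the sheet is no longer part of a PivotTable; it’s an individual cube formula. A cell might contain something like the sketch below. The connection name “AdventureWorks” and the member names here are assumptions; check a converted cell in your own workbook for the real ones.

```
=CUBEVALUE("AdventureWorks",
           CUBEMEMBER("AdventureWorks","[Account].[Accounts].[Cash]"),
           CUBEMEMBER("AdventureWorks","[Date].[Fiscal].[Fiscal Year].[FY 2007]"),
           CUBEMEMBER("AdventureWorks","[Measures].[Amount]"))
```

Because each cell stands alone, you can insert rows and columns freely, which is exactly what the rest of this trick relies on.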

Continuing our beautification…

  1. Insert a row underneath “Receivables Total.”
  2. Insert rows underneath every total.
  3. Insert a column between FY 2007 and FY 2008.
  4. Label the column “YOY Var %” in cell I2
  5. Add the formula: =(H6-G6)/G6 and copy it down the balance sheet
  6. Format the column I as a percent
  7. Hide the rows 49 – 51
  8. Make the totals bold
  9. Hide column A
  10. Hide Row 1, Bold and Center Row 2.
  11. Highlight cells B2-F2 and press delete
  12. Highlight columns B-F and select Format,  Auto Fit Column Width and make bold.
  13. Copy column I to (the new) Column M
  14. Modify the formula in M3 so that it reads: =(J3-H3)/H3
  15. Hide columns I and M.  I just wanted to show you how easy it was to add a formula.
  16. Add top and bottom borders for each of the sub category total amounts, top and thick bottom borders to Current Assets, Property, Plant, Equipment, Liabilities, and Owners equity totals, and finally top and double bottom borders for Assets and Liabilities and Owners Equity Totals.
  17. Select columns G-J, right click a cell, select Format cells and decrease the precision so that there are no cents displayed.
  18. From the view tab uncheck the Gridlines checkbox.
  19. Print.
  20. Get some coffee on your way to the printer.

Your spreadsheet should look like this:


There is one caveat you need to know.  If accounting adds a new GL rollup, it will not automatically appear on the balance sheet.  You can manually add it if you’re careful, or open the “Dynamic Balance Sheet” version you saved earlier and re-apply the formatting.


That’s all for today, folks!  Please add to the comments if you would like to share a great Analysis Services in Excel tip or trick.


SSAS MSMDSRV.INI Settings for Multi-User Environments

Here are excerpts from the SSAS Operations Guide for SQL Server 2008 R2 regarding the recommended settings for a multi-user environment.  If you want more information, read the full whitepaper.

——————————————————————————-
Heap Settings, DataStorePageSize, and DataStoreHashPageSize
During query execution, Analysis Services generally touches a lot of data and performs many memory allocations. Analysis Services has historically relied on its own heap implementation to achieve the best performance. However, since Windows Server 2003, advances in the Windows Server operating system mean that memory can now be managed more efficiently by the operating system. This turns out to be especially true for multi-user scenarios. Because of this, servers that have many users should generally apply the following changes to the msmdsrv.ini file.

Setting                 Default   Multi-user, faster heap
<MemoryHeapType>        1         2
<HeapTypeForObjects>    1         0

It is also possible to increase the page size that is used for managing the hash tables Analysis Services uses to look up values. Especially on modern hardware, we have seen the following change yield a significant increase in throughput during query execution.

Setting                   Default   Bigger pages
<DataStorePageSize>       8192      65536
<DataStoreHashPageSize>   8192      65536
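Putting the two recommendations together, the edited msmdsrv.ini might contain something like the fragment below. This is a sketch only: the surrounding element nesting is omitted and the exact location of each setting can vary by version, so back up the file and search for each element rather than pasting this in wholesale.

```xml
<!-- Sketch only: recommended multi-user values. Verify each element's
     actual location in your instance's msmdsrv.ini before editing. -->
<MemoryHeapType>2</MemoryHeapType>
<HeapTypeForObjects>0</HeapTypeForObjects>
<DataStorePageSize>65536</DataStorePageSize>
<DataStoreHashPageSize>65536</DataStoreHashPageSize>
```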

——————————————————————————-
Clean Up Idle Sessions
Client tools may not always clean up sessions opened on the server. The memory consumed by active sessions is non-shrinkable, and hence is not returned to the Analysis Services process or the operating system for other purposes. After a session has been idle for some time, Analysis Services considers the session expired and moves the memory used to the shrinkable memory space. But the session still consumes memory until it is cleaned up.
Because idle sessions consume valuable memory, you may want to configure the server to be more aggressive about cleaning up idle sessions. There are some msmdsrv.ini settings that control how Analysis Services behaves with respect to idle sessions. Note that a value of zero for any of the following settings means that the session or connection is kept alive indefinitely.

Setting Description
<MinIdleSessionTimeOut> This is the minimum amount of time a session is allowed to be idle before Analysis Services considers it ready to destroy. Sessions are destroyed only if there is memory pressure.
<MaxIdleSessionTimeout> This is the time after which the server forcibly destroys the session, regardless of memory pressure.
<IdleOrphanSessionTimeout> This is the timeout that controls sessions that no longer have a connection associated with them. Examples of these are users running a query and then disconnecting from the server.
<IdleConnectionTimeout> This timeout controls how long a connection can be kept open and idle until Analysis Services destroys it. Note that if the connection has no active sessions, MaxIdleSessionTimeout eventually marks the session for cleaning and the connection is cleaned with it.

If your server is under memory pressure and has many users connected, but few executing queries, you should consider lowering MinIdleSessionTimeOut and IdleOrphanSessionTimeout to clean up idle sessions faster.
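As a sketch, a more aggressive cleanup configuration might look like this in msmdsrv.ini. The values below are illustrative (in seconds), not recommendations from the whitepaper; pick numbers appropriate for your workload and test before deploying.

```xml
<!-- Sketch only: illustrative idle-session timeouts in seconds,
     not recommended values. Back up msmdsrv.ini before editing. -->
<MinIdleSessionTimeOut>900</MinIdleSessionTimeOut>
<IdleOrphanSessionTimeout>120</IdleOrphanSessionTimeout>
```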


——————————————————————————-
Multi-User Process Pool Settings
In multi-user scenarios, long-running queries can starve other queries; specifically they can consume all available threads, blocking execution of other queries until the longer-running queries complete.
You can reduce the aggressiveness of how each coordinator job allocates threads per segment by modifying CoordinatorQueryBalancingFactor and CoordinatorQueryBoostPriorityLevel as follows.

Setting                              Default   Multi-user nonblocking settings
CoordinatorQueryBalancingFactor      -1        1
CoordinatorQueryBoostPriorityLevel   3         0


If your SSAS instance shares a server with anything else you may also want to read my good buddy Jorge’s blog post: Setting Memory Limits


Did I miss any?  Do you have any additional tips to share?  If so, please sound off in the comments.

SSAS Gotcha: The Operation Has Been Cancelled Due To Memory Pressure

My team and I recently had an issue with our SSAS instance in which users were receiving the following message from Excel:

Data Could not be retrieved from the external data source.  Error message returned by the external data source:

Server: The operation has been cancelled due to memory pressure. 



Another user reported a different error from Excel when trying to create a new connection to the Analysis Services database:

An error has occurred in the transport layer.

Errors in the OLE DB provider.  An error occurred while loading the connection dialog box components for prompting.



The following blog posts helped us identify and kill the MDX query and the user was executed:

Finding and killing SSAS Sessions

Kill a Session, SPID or Connection to Analysis Service Cube

The problem seemed to be caused by poorly constructed Excel pivot tables.  Users can do some strange things.  Things like dragging product numbers to the columns and product descriptions to the rows.  Regardless, a single user shouldn’t be able to bring down an entire server making it useless to anyone else in the company, right?

For the short term we had to find a way to kill queries like these automatically.  Otherwise, our team could never go to lunch together; someone would always have to stay behind to log the locked query and restart the SSAS service if the server spun out of control.

Luckily, we were able to reproduce the condition by simply executing one of the queries we had captured on our QA server.  We tried an SSIS package that Chris Webb wrote to automatically kill locked sessions.  Unfortunately, when the issue occurred, the SSIS package couldn’t get a connection to check for locks, let alone kill them.  I also played with the Server Timeout advanced property of the SSAS instance, but that would only work reliably if we set it to a very small time period, such as 30 seconds.  Otherwise, queries would blow right past the timeout and never get cancelled.  I also got the PowerShell version of Chris Webb’s method to work, but again, it would only work reliably for short time periods.

*Make sure the user executing the Agent job is a SERVER administrator of the SSAS instance to get the PowerShell solution to work.  Only a server administrator can issue XMLA Cancel requests.
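For reference, the cancel itself is a small XMLA command that you can also issue by hand from an XMLA query window in SSMS. A sketch (the SPID below is a placeholder; find real ones by querying the $SYSTEM.DISCOVER_SESSIONS DMV):

```xml
<!-- Sketch only: cancel a session by SPID. 12345 is a placeholder;
     look up real SPIDs via: SELECT * FROM $SYSTEM.DISCOVER_SESSIONS -->
<Cancel xmlns="http://schemas.microsoft.com/analysisservices/2003/engine">
  <SPID>12345</SPID>
</Cancel>
```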

Killing sessions automatically with SSIS

Dealing with Long Running SSAS Queries using Powershell

After more research and testing, I found this snippet:

Check the CancelCoordinatorCount property; you can decrease it a bit from its default value, which will force the engine to check the IsCancelled property more often. As a result, the queries will be stopped faster. We had big problems with that, but after some post-SP2 updates and using CancelCoordinatorCount, things got much better.

I changed the CancelCoordinatorCount property from 1000 to 100, and the PowerShell solution would then work with a slightly larger threshold.  Coupled with a larger failsafe threshold on the Server Timeout property (which also started working), I was getting closer to what I wanted.

Something about that post triggered a memory and a promise to myself I had made years ago.  The part about applying a service pack.  Many years ago, I worked as a BI Developer for a company that specialized in software for the casino industry.  From time to time, I would be asked to help troubleshoot a problem with a client’s BI solution.  The first thing I would always do would be to reboot the server.  I can’t tell you how many times this would fix the immediate problem.  Second, I would make sure all of their Windows updates and service packs were applied.  When they weren’t up to date, many times this fixed the problem.  As a consultant for Pragmatic Works, one of the first things I would always do when starting a new BI project or troubleshooting an issue would be to make sure all updates and service packs were applied.

To make a long story short… applying SQL Server 2008 R2 Service Pack 1 completely fixed our problem.  Two queries that would run for almost two hours on our QA server before finally bombing with a memory pressure error ran in less than five minutes and produced results.

Old lessons learned again.

Near Real Time OLAP using SSAS Proactive Caching

If I had a nickel for every time I’ve heard from a business user that the OLAP solution must support current, real-time data from the source system, I’d have at least half a dollar.  The fact is that in reality there is usually some acceptable degree of latency.  Even though Microsoft SQL Server Analysis Services has a ROLAP storage mode option, I’ve typically tried to steer people toward the MOLAP option due to its superior query performance.

This past week I’ve had the opportunity to explore some of the proactive caching options available in SSAS.  I thought this would be a great topic for a blog post.  I ran into a couple of stumbling blocks and perhaps someone else can benefit from my experience.

Proactive Caching is a feature in SSAS that allows you to specify when to process a measure group partition or dimension as the data in the relational data source changes.  A proactive caching partition or dimension is commonly referred to as a cache, even though it is still a partition or dimension.  When Proactive Caching is implemented, SSAS keeps the cube up to date on its own, per the parameters you specify.  The alternative is to develop an SSIS package that processes the dimensions and measure group partitions, and execute that package periodically.

Proactive Caching goes hand in hand with partitioning.  So I started my research by revisiting that topic.  I discovered an excellent blog post that included a very easy to follow tutorial on setting up dynamic partitioning using SSIS.

Dynamic Cube Partitioning in SSAS 2008

If you’d like to follow along, first go through that tutorial.  One important note: step 10 is missing a step.  You need to open the second script task in the editor, right-click the References folder in the project explorer, choose Add Reference, and select Analysis Management Objects.  Save it and then click Build from the menu to compile the script.

In case that site goes dark or you just want to have a look at my completed solution I’ve uploaded a zip file with everything you need.

Still with me?  Great.  Our first task is to test out SQL Server Notification.

SQL Server notification can only be used if the relational data source is a Microsoft SQL Server 2000 or later database. SQL Server raises trace events when data changes. In order to catch these trace events, SSAS must connect to the SQL Server with administrator rights. With SQL Server notifications, measure group partitions are always processed using Full Process which discards the contents of the partition and rebuilds it; dimensions are processed using Process Update which picks up inserts, updates, and deletes in the relational data source.

Open up SSMS and connect to your analysis services database.  Click the plus signs until you get to the partitions created during that tutorial.  Select the last one, right click and click Properties.


Select the Proactive Caching page and click the Options button.  The General settings tab should look like the screenshot below.


Click the notifications tab, select the SQL Server notifications radio button, check the box next to “Specify tracking tables” and pick FactSales by clicking the ellipses and placing a check in the box next to that table.  Your screen should look like the screenshot below.


Click two OK buttons to complete the operation.

Now, let’s get a baseline.  Create a simple slice using your cube browser and note the Fact Sales Count for the month of March.


Now, execute the following SQL statement to insert some more data into the table.

INSERT INTO FactSales
SELECT D.DateKey, P.ProductKey, ISNULL(C.CustomerKey + 2, P.ProductKey), P.Cost
FROM DimProduct P
LEFT JOIN DimCustomer C ON P.ProductKey - 2 = C.CustomerKey + 2
CROSS JOIN DimDate D
WHERE CalendarMonth = 3;
GO

Go back to your cube browser and click the refresh icon.  You should see that the count of records for March has doubled.


Let’s try the next notification option.

The Client Initiated notification is performed by sending a NotifyTableChange XMLA command to the SSAS server. For example, an SSIS package that updates the data warehouse could use the Analysis Services Execute DDL Task to send the NotifyTableChange XMLA command to the SSAS server every time the data warehouse update processing is completed.

This is the option that gave me the most grief during my testing.  Perhaps some of the XMLA code that I see proliferated around the web works for SSAS 2005, but not SSAS 2008.  I could not get the XMLA code referenced in blogs like this one to work.

I came across another example which ended up working, but at first I was confused by the data source element and mistakenly thought it meant the one in my SSIS package, when in fact it needs the data source name from your cube.  I know, silly mistake.  I’m blaming it on the cold medicine.  Great for colds; bad for BI.

<NotifyTableChange xmlns="http://schemas.microsoft.com/analysisservices/2003/engine">
  <TableNotifications>
    <TableNotification>
      <DbSchemaName>dbo</DbSchemaName>
      <DbTableName>FactSales</DbTableName>
    </TableNotification>
  </TableNotifications>
</NotifyTableChange>

To test this, right click on the partition again and get back to the notifications tab of the storage options dialog.  Select the ‘Client initiated’ radio button, check the box next to ‘Specify tracking tables’ and select the FactSales table as before.


Create an SSIS package with a connection manager for your SSAS database and an Analysis Services Execute DDL task.  Set the connection in the task to your SSAS connection manager and paste the XMLA code from above into the SourceDirect property.  Run the SQL statement from above to insert more data into the fact table.  Execute the package (by firing squad, preferably.)

Once all is green, jump back to your cube browser and refresh.  If all went as expected you should see the count for month three increase by a third.


Finally, let’s try my personal favorite and the last notification option.

The Scheduled polling mechanism provides the most flexibility by allowing you to either rebuild or incrementally update the cache.  Scheduled polling simply queries the relational data source periodically to determine if the data has changed.  Incremental updates improve proactive caching performance by reducing the amount of data that needs to be processed.  For proactive caching partitions, incremental updates use a ProcessAdd to append new data to the cache. For proactive caching dimensions, a ProcessUpdate is performed.  If you use Scheduled polling without incremental updates, the cache is always completely rebuilt.

The first thing we need to do is alter our FactSales table by adding a column to store the date the row was appended to the table.
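Here’s a sketch of that change in T-SQL.  The LoadedDate column name matches the polling query used in the next step, and the INSERT statement later in this post supplies the value explicitly, so no default is strictly required.

```sql
-- Add a load-timestamp column to the fact table.
-- The name must match the polling query: MAX(LoadedDate).
ALTER TABLE dbo.FactSales
ADD LoadedDate DATETIME NULL;
GO
```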


Get back to the Notifications tab in the storage options dialog and select the ‘Scheduled polling’ radio button.  Check the box next to ‘Enable incremental updates’ and click the ellipses in the textbox under the ‘Polling Query’ column in the mini-table.  Paste the following SQL statement into the ‘Create Polling Query’ dialog:

SELECT MAX(LoadedDate) LoadedDate FROM dbo.FactSales;

We could stop there, but if we did, SSAS would always do a full process of the partition whenever it detected a change.  We can do better.  If we enter a processing query, SSAS will perform a ProcessAdd (incremental process) on the partition, thereby reducing the amount of data that needs to be processed each time a change is detected.  Click the ellipses in the text box under the ‘Processing Query’ column in the mini-table.  Paste the following SQL statement into the ‘Create Processing Query’ dialog:

SELECT DateKey, ProductKey, CustomerKey, SalesAmount
FROM dbo.FactSales
WHERE LoadedDate > ? AND LoadedDate <= ?

(SSAS binds the two parameters to the previous and current results of the polling query, so only the newly loaded rows are processed.)

Click OK twice to commit the changes, process your partition manually, and then run the SQL statement below to insert some more data into our table.  (Note the additional GETDATE() column I’ve added to the end of the SELECT list.)

INSERT INTO FactSales
SELECT D.DateKey, P.ProductKey, ISNULL(C.CustomerKey + 2, P.ProductKey), P.Cost, GETDATE()
FROM DimProduct P
LEFT JOIN DimCustomer C ON P.ProductKey - 2 = C.CustomerKey + 2
CROSS JOIN DimDate D
WHERE CalendarMonth = 3;
GO

Finally, jump back to your cube browser to refresh your view.  Your count should increase for the third month.


Now I’d like to revisit the General tab of the storage options dialog and discuss what each of those options means.


The Silence interval is the amount of time to wait after an update notification is received before processing the partition. This allows for other updates to occur before starting to process the partition.

The Silence override interval is the maximum time to wait after an update notification is received before processing the partition. Without the silence override interval you could conceivably have changes that keep occurring and the partition would never get processed.

Latency is the maximum amount of time that you want to use the MOLAP cache after changes have occurred in the relational data source and before/while the partition is processed. In other words you may continue to use the existing MOLAP cache for a period of time while it is being rebuilt. Once the latency is reached the MOLAP cache is dropped and queries can be satisfied by going back to the relational data source.

Bring online immediately (when checked) will satisfy queries from the relational data source while the MOLAP cache is being rebuilt.

Enable ROLAP aggregations (when checked) will create materialized views for aggregations in the relational data source.

The following diagram that I copied from the Analysis Services 2005 Performance Guide does a great job of explaining these settings with an example.


Still with me?  Congratulations!  You deserve a joke.

Q:  Why does the OLAP developer hang his wallet above the fireplace?

A:  He is warming the cache.

LOL.  He could have just used SSIS.