Dan English's BI Blog

Welcome to my random thoughts in regards to Business Intelligence, databases, and other technologies

Archive for October, 2009

What’s new for PerformancePoint Services (SharePoint Server 2010)

Posted by denglishbi on October 29, 2009

More and more information is being made public now that the SharePoint Conference has wrapped up.  Check out the online documentation and be on the lookout for more information and blog postings coming out.  The national PASS conference is next week!

What’s new for PerformancePoint Services (SharePoint Server 2010)

[This article is pre-release documentation and is subject to change in future releases.]


Just to name a few of the items that have been announced so far:

  • Integrated security with SharePoint
  • ‘Secure Store’ authorization (single sign-on) option
  • Decomposition Tree
  • Setup as a SharePoint Service (similar to Excel) with trusted locations
  • PPS items will now be content types
  • KPI detail option to get at some of the underlying metadata
  • New chart types and conditional formatting capabilities (using SSAS information)
  • and much more

Check out the link above for the current list of items, the improvements, and support for security within SharePoint.

Posted in PerformancePoint | Tagged: | Leave a Comment »

Getting started with BI in SharePoint Server 2010

Posted by denglishbi on October 29, 2009

Saw this in the MSDN Downloads and wanted to pass it along.  It is a poster, so feel free to blow it up and put it up on your wall ;)


Discusses the business intelligence tools available in SharePoint Server 2010


Are you getting excited for the 2010 products yet?

Interested in what the upgrade path to SharePoint 2010 will look like?  Take a look at this Visio diagram – Microsoft SharePoint Server 2010 Upgrade Planning.

Don’t have Visio?  No worries, I saved it as PDF and uploaded the document to my SkyDrive.


(Click on the image above, which contains the link to download. You should be able to do a right-click and choose Save Target As.)

Posted in Business Intelligence | Tagged: | Leave a Comment »

IASA IT Architect Regional Conference – Minneapolis Nov 13, 2009

Posted by denglishbi on October 29, 2009

IASA IT Architect Regional Conference Only $75 for all attendees!!!


We are kicking things off right in our newest Twin Cities chapter. A full one-day conference, the IT Architect Regional Conference is created by architects and hosts seminars with two tracks addressing the core subjects: Enterprise and Software Architecture. All for just $75.

Featured Speakers:

  • Paul Preiss, President and Founder, IASA
  • Glen Leifheit, Fair Isaac
  • Dave Hendrickson, Thomson Reuters
  • John Andrew, Oracle
    and more…

Check Out the Agenda

What you can expect from the Minneapolis ITARC:

  • A strong regional focus. Receive case studies, best practices and individual training from senior architects in your region
  • Internationally acclaimed Keynote speakers
  • Affordable enough for your entire team to attend without sacrificing quality
  • Proceeds from the event will be used to support the local IASA Minneapolis chapter through new services and programs
  • Relevant topics for IT architects of all types and all levels

13 November 2009 | Minneapolis


Posted in Training | Leave a Comment »

Analysis Services (SSAS) Processing and Aggregations

Posted by denglishbi on October 29, 2009

I get a lot of questions about this and see quite a few postings about Analysis Services processing in the MSDN forums.  So you created an SSAS database solution and have deployed it to a production environment.  Now the data has accumulated and you need to look at moving to an incremental processing approach, because you are either loading the data more regularly or the volume of data and processing no longer fits within your maintenance window.  So what do you do?

The solution is to move to a more incremental processing approach instead of just doing a Process Full on the database.  You have a few options here, and I am going to discuss what happens during a dimension update process and also go over an SSIS solution that I put together.

In order to update and refresh the data in your dimension you can perform a Process Update on the dimension (or a Process Add if you are only adding new members and not updating any records, but that typically is not the case).  So what happens to the cube data when you update a dimension?  Well, if nothing changed then you are okay, but if you have flexible attribute relationships and the data within those relationships changed, then the aggregations that reference them are dropped from the partitions.  Here is a walkthrough showing this:


  1. Aggregation Advanced View display in SSAS 2008
  2. Query the partition information to view the aggregation information
  3. Process Update with no changes – everything will stay the same
  4. Process Update with data changes to dimension attributes that had aggregation designs
  5. Aggregations that referenced those dimension attributes are dropped (A0 and A2 were dropped).  How do you get them back?
  6. Run a Process Default on the affected partitions (you could also choose the option to process affected objects during your dimension processing; this will do the same thing)
  7. Aggregation designs are now back

So that was just a quick overview of what happens when you do a refresh of a dimension in your SSAS database.  You can run this, just be aware of what is going on behind the scenes.  If there were changes and relationships were updated, then you will also need to rebuild the aggregations and indexes if those attributes were included in any aggregation designs.
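For reference, the Process Update followed by Process Default sequence shown in the walkthrough can also be issued directly as an XMLA batch.  A minimal sketch is below; the database, dimension, cube, measure group, and partition IDs are placeholders, so substitute the IDs from your own solution:

```xml
<Batch xmlns="http://schemas.microsoft.com/analysisservices/2003/engine">
  <!-- Refresh the dimension data; flexible aggregations may get dropped -->
  <Process>
    <Object>
      <DatabaseID>MyOlapDb</DatabaseID>
      <DimensionID>Dim Product</DimensionID>
    </Object>
    <Type>ProcessUpdate</Type>
  </Process>
  <!-- Rebuild only what is missing (aggregations and indexes) on an affected partition -->
  <Process>
    <Object>
      <DatabaseID>MyOlapDb</DatabaseID>
      <CubeID>Sales</CubeID>
      <MeasureGroupID>Internet Sales</MeasureGroupID>
      <PartitionID>Internet Sales 200909</PartitionID>
    </Object>
    <Type>ProcessDefault</Type>
  </Process>
</Batch>
```

You can run a batch like this from SQL Server Management Studio (XMLA query window) or schedule it with a SQL Server Agent job to reproduce the behavior shown in the screenshots.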

So if you want to incorporate this logic along with an incremental update of your partitions, what can you do?  Well, for this I leveraged SSIS and set up a package that allows for either a full or incremental update process.  In my example here I actually do this for a specific cube in an SSAS database, so I only process the dimensions that are associated with a particular cube.


I have to give credit to Vidas Matelis here for providing some of the AMO code for the processing in his SSIS package that processes all partitions/measure groups/cubes in one database.  I modified it a bit to be able to limit the scope to a particular cube and added a Try Catch block in the code, but it is what I used as a starting point.  Using AMO allows the process to be more flexible and dynamic.  As with all things, as you expand on concepts and build them out they can start to get a bit complex, so I will try to explain this as best I can in a brief way for this posting.

  1. I added a script task as a starting point so that I could evaluate the ProcessFull variable; I also use this to set the number of months that will be processed (or this could be years depending on your volume).  I use a naming convention in the partition names like partition_name YYYY or partition_name YYYYMM.  In this process it will be the latter YYYYMM format.  So if IncrementalMonthsProcess is set to 2, then at the time of this posting the value would be set to 200909, and this will be used later in the data flow task for evaluation, where the names of the partitions will be parsed.
  2. Now at this point you move on to either the Full or Incremental processing.  The Full is fairly self-explanatory; I have it broken into a full dimension process script task and then another script task for the partition processing.  I broke it out into two tasks so that I could get some additional duration information for logging and history.  The dimension processing only processes the dimensions for the specified cube, and this is the same for the incremental approach as well.  So if you have multiple cubes with more dimensions, or some unused dimensions, then these would not be touched.
  3. The incremental process starts out with a script task that does a process update on all of the dimensions in the specified cube.  Same script task code as the full with the exception of the type of processing that is performed.
  4. The data flow task is used to create three recordsets of data that will be used for the partition processing – a full, data, and default processing list.  I set up a script task as a source to create a recordset of values and to parse out the date information in the partition names if it exists.
  5. The first list that is created is the full list of partitions and these get placed in the default processing list (object variable) that will be used in the last step of the package.
  6. The split transform is used to determine which partitions will have a full process performed on them (no date information included in the partition names, so yearinfo was set to zero) along with the partitions that will be part of the incremental update.  These partitions are typically part of a large measure group where the data has been partitioned by year or month (in this case month), and we are only going to process the last 2 months based on step 1.  We just do a Process Data on these partitions (this reduces the impact on the server, and once it is done the data is accessible to end-users – the final step in the package will build out the aggregations and indexes).  The remaining partitions will be left untouched (at least in regards to processing the data; aggregations might be rebuilt if they were affected).
  7. Now back to the control flow: to finish off the process we first process the full partition list, then move on to the process data partition list, and finally the default processing list.  As shown above in the screenshots, the last step will do nothing to the partitions that were fully processed, but it will build the aggregations and indexes for the process data partitions and also for any additional partitions that might have been affected by the dimension updates.
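The partition-name parsing and list-building described in steps 1, 5, and 6 can be sketched outside of SSIS as well.  This is just an illustration of the logic, assuming the partition_name YYYYMM / YYYY naming convention from step 1 (the function and variable names here are my own, not from the package, and the real package does this with AMO inside script tasks):

```python
from datetime import date

def month_cutoff(today, months_back):
    """Return the YYYYMM integer for the oldest month in the incremental
    window, counting back `months_back` months from `today` (inclusive)."""
    total = today.year * 12 + (today.month - 1) - (months_back - 1)
    return (total // 12) * 100 + (total % 12) + 1

def classify(partition_names, today, incremental_months):
    """Split partitions into the three processing lists used by the package:
    ProcessFull (no date suffix), ProcessData (recent YYYYMM partitions),
    and ProcessDefault (every partition, to rebuild aggregations/indexes)."""
    cutoff = month_cutoff(today, incremental_months)
    full_list, data_list, default_list = [], [], []
    for name in partition_names:
        suffix = name.rsplit(" ", 1)[-1]
        dated = suffix.isdigit() and len(suffix) in (4, 6)
        if not dated:
            full_list.append(name)        # no date info -> ProcessFull
        elif len(suffix) == 6 and int(suffix) >= cutoff:
            data_list.append(name)        # within the window -> ProcessData
        # older monthly and yearly partitions are left untouched
        default_list.append(name)         # all get the final ProcessDefault
    return full_list, data_list, default_list
```

With IncrementalMonthsProcess set to 2 at the time of this posting, `month_cutoff(date(2009, 10, 29), 2)` comes out to 200909, matching the value described in step 1; yearly-named partitions are treated as dated so they are not swept into the full-process list.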

So that was a quick overview of the process.  There are definitely other ways you could go about this; this is simply one solution that I came up with, and it has worked well.  I have made a few custom modifications as needed, such as doing incremental processing by year or month along with limiting it to a particular cube, but these can quickly be modified and put to use in pretty much any environment.  Just set the variable values (ASServer, ASDatabase, ASCube, ProcessFull, IncrementalMonthsProcess) and you are off to the races.  If you want to turn on SSIS logging, that is up to you.  I have used the SQL Server table logging and then leveraged the SSIS report pack for reporting purposes, but that is entirely up to you.

I have uploaded the package to my SkyDrive account if you want to check it out.  Both SSIS 2005 and 2008 versions are available in the download, and I have annotations included in the packages.

Hope you find this useful.


(Click on the image above, which contains the link to download. You should be able to do a right-click and choose Save Target As.)

UPDATE (6/2/2010): Package download has been updated to reflect a fix to handle the partitions with a yearly naming convention so that they do not get included in the full processing.  This was discovered during my May 2010 presentation preparation – PASSMN May 2010 Follow-up.

Posted in Analysis Services | Tagged: , | 2 Comments »

PerformancePoint Server (PPS) 2007 SP3 now available

Posted by denglishbi on October 16, 2009

Yesterday afternoon Service Pack (SP) 3 for PerformancePoint Server was finally released.  I had been posting about some hotfixes released since SP2 that fixed issues with support for SSRS 2008 Native Mode, along with some other minor fixes for exporting items to Excel and for interacting with analytical items when you open them in new windows – More PerformancePoint Server 2007 Hotfixes Available, PerformancePoint Server 2007 Hotfixes Available and Build List, and PerformancePoint Dashboard Designer ‘Unable to connect to server’ with SSRS 2008.

The upgrade instructions for going from SQL Server 2005 to 2008, or simply going with SQL Server 2008, still mention the CU9 SQL Server 2005 components; I am not sure why.  I think it is mainly because SP2 for PPS came out just before SP3 did for SQL Server 2005.  At a minimum you need CU9, but I would simply go with the SP3 components, which are available on the SQL Server 2005 December 2008 Feature Pack site.  They have also included the information about installing the Microsoft Report Viewer Redistributable 2008 pack, which was needed after upgrading to PPS SP2 (this wasn’t needed if you already had the full Visual Studio 2008 installed).  The version that Microsoft links to in the documentation is also old; in April 2009 there was an SP1 release that I would recommend instead, but at a minimum you need to have this installed – Microsoft Report Viewer 2008 SP1 Redistributable.  I had mentioned this in the forums after SP2 first became available.

Here are the links to the downloads

And here are the links for the upgrade instructions

Here is the link for the updated SP3 online documentation – Service Pack 3 for Microsoft Office PerformancePoint Server 2007

Here is a list of what is new since SP2 for the Monitoring and Analytics portion that I am aware of from the hotfix links; currently the SP3 readme file and KB articles are not available online yet to get any additional information:

  • Fix for the disappearing Analytical Chart toolbar when you click the Filter buttons to remove empty items (it still disappears when you click the drop-down arrow and select Reset View, though)
  • Fix for exporting an Analytical Chart with more than 14 items in the legend.  Now all of the items will be exported to Excel if there are more than 14
  • Another fix for exporting scorecards to Excel, resolving the issue where values from 9.5 to 9.9 were multiplied by 100 instead of maintaining their format
  • Fix: you cannot browse the list of Web parts when you are editing a dashboard that is deployed to a SharePoint site
  • Fix: Time Intelligence Post Formula filters display the next day’s date
  • Fix: member property values may not be displayed correctly in reports or in charts
  • Fix: the connection to SQL Server 2008 Reporting Services fails in PerformancePoint Dashboard Designer
  • Fix: decimal values are displayed inconsistently in SQL Server Business Intelligence Development Studio and in PerformancePoint Dashboard Designer
  • Fix: you create a line chart in PerformancePoint Dashboard Designer that shows only the total percentage value.  When you pause your mouse over the chart lines, two values appear in the tooltip; the first value is correct, but the second value is displayed incorrectly
  • Fix: you add a Time Intelligence filter to a dashboard in PerformancePoint Dashboard Designer, and the dashboard displays a ProClarity Analytics Server (PAS) page.  When you try to view the dashboard by specifying a time period that is defined in the filter, you receive the following error message on the PAS page: The webpage could not be displayed
  • Fix: you create a data source that has a custom connection string in PerformancePoint Dashboard Designer, and the data source is used in a filter on a dashboard.  In this scenario, only users who have the Admin permission in PerformancePoint can open the dashboard; when other users try to open the dashboard, they receive an error message that states that they do not have the READ permission to the data source

And according to the PerformancePoint Team’s blog posting yesterday – Microsoft Office PerformancePoint Server 2007 Service Pack 3 is released!

  • For Monitoring & Analytics customers, SP3 offers improved usability, with better handling of zone adjustments, scrolling, and filtering and improved messaging. SP3 also provides better integration with Excel Services, SharePoint Server, and ProClarity.

They also mentioned that SP3 is available for ProClarity Analytics Server, and the documentation states that it is supported on Windows Server 2008 R2 and includes instructions on configuring this.  I am wondering if this is the same for PPS SP3.  They don’t make mention of this from what I have seen, but maybe that will be included in the ReadMe once it is online – PerformancePoint Server SP3 Readme, along with the KB article that isn’t available yet, KB975250.

ProClarity 6.3 SP3 was released on October 9, 2009

I am interested to see what changes have been made to the usability with PPS SP3.  The posting by the PPS Team was really vague, but I guess they are more focused on PerformancePoint Services right now.  Guess we will just have to install it and check it out.

So for now, download and enjoy SP3, which has finally been made available.  SP3 will bring you up to build version 3.0.4414.00 (which is a big jump from 3.0.4318.00, the latest hotfix that I had posted about and am running).

UPDATE (10/16/2009): I just installed SP3 for PPS M&A and it brings you up to build version 3.0.4417.00, not 3.0.4414.00 as stated in the online documentation – http://technet.microsoft.com/en-us/library/cc656688.aspx.  I will probably be doing another posting today or this weekend in regards to SP3 for PPS M&A.

Posted in PerformancePoint | Tagged: | 2 Comments »

PASSMN Executive Board 2010 Nominations are now open

Posted by denglishbi on October 13, 2009

The Minnesota SQL Server User Group is accepting nominations for the 2010 Executive Board. If you are interested in running for a position on the board, where you can help steer the direction of the Minnesota chapter of PASS, then go ahead and submit your information online. The information submitted to the survey will be used during the election process in November.  Nominations are due by October 31, 2009 (end of day).

The Executive Board is comprised of five elected members and one appointed member of PASSMN. Once the board is elected, the new board members determine who will take on the various positions and duties of the board. Executive Board members generally devote 5-10 hours per month to their various duties.

Elections will be held via the www.mnssug.org website from November 1 through November 15. Only members of PASSMN are eligible to vote.

Posted in News and politics | Tagged: | Leave a Comment »

PASSMN October 20, 2009 Monthly Meeting

Posted by denglishbi on October 9, 2009

The next PASSMN Minnesota SQL Server User Group meeting is coming up.  The meeting is on Tuesday, October 20 from 3:00 PM to 5:15 PM.  You can visit the local site to get registered at http://www.mnssug.org or click the link to Register Here.

This month the topics are:

Getting To Know Your Indexes (Speaker: Jason Strate, Digineer) – Without proper indexing, SQL Server can be hard pressed to create efficient and performant execution plans.  Dynamic Management Views (DMVs) and system views provide a slew of information about indexes that can be used to analyze indexes within SQL Server.  In this session we’ll go under the hood of SQL Server to look at DMVs and system views to know what indexes you have, which ones you should have, and how they feel about the way applications are treating them.


Prerequisites:

  • Understanding of performance tuning needs
  • Experience with database design

Session goals:

  1. Identify methods to analyze current and potential indexes
  2. Learn how to alleviate stress found on indexes
  3. Demonstrate methods for tuning indexes

Jason Strate, Digineer Inc., has been a Database Architect for over ten years and is a SQL Server MVP. His experiences include designing and implementing both OLAP and OLTP solutions involving clustering servers, database monitoring and tuning, analysis services, DTS/Integration Services, Reporting Services, and Notification Services. Jason is actively involved with the local PASS chapter (SQL Server User Group); he is currently serving on the Executive Board and has presented at monthly meetings. Jason contributed to Microsoft’s published white paper Empowering Enterprise Solutions with SQL Server 2008 Enterprise Edition and actively blogs about SQL Server and related technologies.  Jason participated in the development of Microsoft Certification exams for SQL Server 2008.  Over the past year, he has presented at the SSWUG SQL Server Launch Event and the past two SSWUG Virtual Conferences.  He is also scheduled to present at this year’s PASS Summit in Seattle, WA.

Enriching the Design of an Analysis Services Cube (Speaker: Peter Myers, Solid Quality Mentors) – In the most basic way, an Analysis Services cube combines the best aspects of traditional OLAP analysis and relational reporting. In doing so it presents a "single version of the truth" that can satisfy both multidimensional analysis and relational reporting. But that’s just the tip of the iceberg. The cube can also define server-side calculations, Key Performance Indicators (KPIs), actions, perspectives, and translations to provide a richer experience than just querying for results from a database. So beyond storing your data, the cube can store information about how your data is represented and how it should be delivered and presented to the business user. In this presentation, discover through theory, demonstration, and best-practices discussion how to enrich the design of the cube beyond the data itself.

This presentation is suitable for those with a background developing cubes with Analysis Services, and for those unfamiliar with the product yet curious to understand what can be achieved. Introductory theory and numerous compelling demonstrations will make this presentation relevant for all.

Peter Myers has 12 years of solid experience working in OLTP database design and development, DBA support, and data warehousing/business intelligence with SQL Server. Specializing in developing custom data-focused applications and reporting solutions, he provides architecting, development, implementation, and support services at all tiers of a multi-tier application.

He has a broad business background supported by a bachelor’s degree in applied economics, and he extends this with MCITP, MCSD, MCDBA, MCSE, and MCT certifications. Currently, he provides mentoring and technical training internationally.

In April 2009, Peter was re-awarded Microsoft MVP (Most Valuable Professional) for the third time.

Peter enjoys sharing his enthusiasm for Microsoft technologies by presenting at SQL Server user group meetings and technical events, including TechEd (US and Australia) and VSLive.

Posted in Training | Tagged: | Leave a Comment »