Most of us who welcomed the inclusion of Essbase into the OBIEE product set were soon disappointed by the lack of functionality it provided when compared to using a separate Essbase implementation as a data source for OBIEE. There is no ability to build your own cubes or customise the calculations in Essbase in any way. Essbase was included in OBIEE for one reason only – to provide fast aggregations on large data sets.
If you are already using Essbase separately from OBIEE, or have any of the other technologies that improve report performance (in-memory databases, Exalytics etc.), then there may not be much use for the built-in Essbase component of OBIEE. But if you don’t, and you don’t have the cash required to buy these other technologies, then the Essbase component is certainly worth a try.
But there’s a catch. The way Essbase has been implemented within the OBIEE product set is very good – a wizard does most of the work for you by building cubes you design based on your subject areas in the Repository. However, Oracle have only done half of the job. You can build cubes manually and the wizard will load the data there and then, but there is no way to automate the reloading of data – to me this is a critical flaw. If the source of your BI data (usually a data warehouse) is loaded regularly, the Essbase cube must also be reloaded just as regularly or it will almost instantly become out of date and useless.
There is an option to kick off a load process manually, but you can’t have someone coming in early every day to run this after your main load processes are complete but before the users get in. It also seems to be a bit flaky – it doesn’t always work, and it doesn’t necessarily tell you when it hasn’t.
So, is it usable? Out of the box, probably not. However, with some careful configuration and workarounds to avoid some rather silly issues, data loads can be automated and Essbase can be made to work well.
This is the first of two blogs showing how to do this, and I will try to explain what some of the issues are and why this is more complex than it should be. If you are already familiar with Essbase, especially with rules files and the MAXL scripting language, then most of this should seem straightforward.
I’ve split this into two blogs to cover the two sides of using Essbase with BI – building cubes and loading cubes. In this blog I will go through using the Essbase BI Acceleration Wizard – do not skip this section if you’ve played with the Wizard before, as there are some important differences in this approach. In the second blog I will go through creating and running the load scripts.
So, let’s look at the Essbase Acceleration Wizard. What exactly does this do?
Well, it does four things. Firstly, a handful of wizard screens let you choose what data you want loaded into the Essbase cube and for which dimensions – you do not need to aggregate all of the data in a subject area; you can just concentrate on the data most important to the users or the data with the biggest performance issues.
Secondly, it builds an Essbase cube. This is done almost entirely behind the scenes – you have no control over it, nor access to what’s done – but you can see a standard Essbase ASO cube build in the operating system directories.
Thirdly, it loads data into the cube. Strictly, it performs a dimension build followed by a data load, but both can be generically labelled ‘loading the cube’. Again, you have little control over this, but the core files, such as the Essbase rules files, can be seen (but not edited) in the operating system.
Lastly, it imports the Essbase cube into the OBIEE repository and links it into the subject area in the BMM layer. The measures you selected for loading in the Wizard screens get the Essbase cube as a second data source. When running reports, the BI Server will interrogate the data sources for the measure, and if it determines that the data requested can be retrieved from Essbase then it will use that rather than going to the standard relational source. Hopefully it will be faster!
These last two steps hide an important point about how the Essbase cubes are loaded, and to my mind lead to the fatal flaw in how Essbase has been implemented in OBIEE, making the automated loading of data difficult.
Data is loaded into Essbase via the BI Server and Repository. OBIEE is acting as the data source for Essbase. So rather than Essbase connecting to the original source of the data – be that a data warehouse, production system or whatever, Essbase simply requests data from OBIEE. The BI Server interprets the request just like any analysis written in OBIEE. The BI server goes to the repository, determines the relevant code to extract the data, extracts it, performs any calculations required, then passes the data to the requester. In this case the data is then stored within the Essbase cube.
But once the Essbase cube has been built, the second (and future) loads would repeat this – the BI Server will accept the request, determine the source and extract the data. But wait: what if the BI Server decides all of the data requested can be sourced from the Essbase cube? It’s a valid data source in the subject area model for the data being requested, so why wouldn’t it? Essbase becomes the data source to load itself. The load would run fine, but it would just reload the data it already has. It would not go to the original source systems looking for new data.
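The self-load loop can be sketched conceptually. This is illustrative Python only – the BI Server’s real source-selection logic is internal and far more sophisticated, and all names here are invented for the sketch:

```python
# Toy model of the BI Server picking a physical source for a request.
# Illustration only -- names and selection logic are invented.

def pick_source(requested_columns, sources):
    """Prefer an aggregate source that can satisfy every requested column."""
    for source in sources:
        if source["aggregate"] and requested_columns <= source["columns"]:
            return source["name"]
    # Fall back to the detailed relational source.
    return next(s["name"] for s in sources if not s["aggregate"])

warehouse = {"name": "DW", "aggregate": False,
             "columns": {"Revenue", "Year", "Region"}}
essbase = {"name": "Essbase_Sales", "aggregate": True,
           "columns": {"Revenue", "Year", "Region"}}

# Before the cube exists, a load request has to hit the warehouse...
print(pick_source({"Revenue", "Year", "Region"}, [warehouse]))
# -> DW

# ...but once Essbase is registered as a source in the same model, the
# *reload* request is satisfied from Essbase itself: the cube reloads
# its own data and never sees anything new from the warehouse.
print(pick_source({"Revenue", "Year", "Region"}, [warehouse, essbase]))
# -> Essbase_Sales
```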
In reality there is no way of knowing if this would happen in every scenario, it depends on many factors, including the complexity of the Essbase cube (over which you have little control). So whilst it may not happen in every situation, I suspect it would happen in many – it certainly did during my testing. New data was not loaded from the data warehouse. Even when I manually deleted all data from the Essbase cube, the load process still only went to Essbase to load from, so I still ended up with an empty cube.
The only way to be absolutely sure that the load extracts data from the original source rather than from Essbase is for Essbase not to be a source of the data. This comes back to how the Wizard works – it pushes you towards deleting and redeploying the cube each time, which is unacceptable in a working production system.
However, there is a way to force it to work properly. It’s a bit clumsy, but it does work: have two identical subject areas in the repository – one for reporting, which includes Essbase as a data source, and one for loading data into Essbase, which does not.
The catch? You have to set them up manually. But it’s not that big a deal, since subject areas can be duplicated in the repository. So we create an Essbase cube against a subject area and let it create its additional data sources in the BMM layer. Then we duplicate the subject area and, in the duplicate, remove Essbase as a source. The users report against the original, Essbase loads against the duplicate. Simple!
Well, almost… you can’t change the subject area used by Essbase to load data, so we have to set up the subject areas in the opposite order. The original subject area must be the one used to load Essbase, and the duplicate must be the one used by the users.
The way to achieve this is to rename the subject area (suffix _Essbase on the subject area name works for me). Create the Essbase cube against this. Duplicate the subject area, rename the duplicate back to the original subject area name, then remove Essbase as a data source from the xxx_Essbase subject area.
Confused? Follow these steps and all should become clear!
For the record, I’m using OBIEE 126.96.36.199 on an Oracle Linux 7 server.
We start by opening the Admin tool and connecting on-line to the OBIEE server. It’s best to work on-line, otherwise you’ll be constantly saving and re-uploading off-line files. Take a backup copy of the repository before starting if you don’t already have backups (File -> Copy As -> Repository from the menu).
I’ve got a simple data model already setup, based on the Oracle sample app. One table of facts, four dimensions, three of which have multi-level dimensions over which data needs to be aggregated for reporting:
Shortly I’ll be creating an Essbase cube against my Sales Data subject area, so my first step needs to be to rename this in the Presentation and BMM layers:
Quick note: if you can’t rename the Presentation layer, go to Tools -> Options in the menu and tick the ‘Edit presentation names’ option:
Now Check-in and Save the repository.
We can now launch the Essbase BI Acceleration Wizard. This is browser based, the URL will be http://servername:9502/cds/view/ (9502 is the default port, you may have a different port number based on your installation)
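If you want to script a quick reachability check before opening a browser, the URL is easy to assemble. A minimal sketch (the helper is my own, not part of OBIEE; only the default port 9502 and the `/cds/view/` path come from the text above):

```python
# Build the BI Acceleration Wizard URL for a given installation.
# 9502 is the default port; substitute your own if it differs.
from urllib.parse import urlunsplit


def wizard_url(host, port=9502):
    """Return the Essbase BI Acceleration Wizard URL for this server."""
    return urlunsplit(("http", f"{host}:{port}", "/cds/view/", "", ""))


print(wizard_url("servername"))
# -> http://servername:9502/cds/view/
```

From here you could probe the URL with `urllib.request.urlopen` (wrapped in a try/except with a short timeout) to confirm the service is up before logging in.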
Once logged in click on the ‘Launch Essbase BI Acceleration Wizard’ link:
On the first screen of the Wizard click on the ‘Populate Business Models’ button then select your subject area name from the drop down. Over to the right leave the ‘Select existing connection from RPD’ unticked and enter a name for the application, e.g. Essbase_Sales. Then click Next.
In the next screen you choose which measures are to be loaded into Essbase. You do not have to pick all measures, just those you need the performance of Essbase to aggregate. Select the table which contains the measures (Revenue Facts in my example), then pick the required measures below. Click Next when done.
In the third screen, you choose which dimensions are to be included in Essbase. In my example this is easy – all of them – but if you have a complex model with many dimensions, it could well be that you only need fast aggregations over a handful of them, or that some dimensions do not apply to the measures, or that users report over them so infrequently that performance isn’t an issue. Do not just pick all dimensions “just because you can”: storing more dimensions in Essbase than you need will just slow down Essbase’s aggregations.
The next screen lets you choose which levels of the dimension hierarchies are to be included in Essbase. Again my example is very simple, but again do not just include all levels because you can – remove any that won’t be reported against, especially on dimensions with multiple hierarchies.
Finally, you get the summary screen where you can check your selections and run the deployment by clicking the Deploy button. Unless you know what you are doing I’d recommend just using the default settings (further settings are available by ticking the ‘show advanced options’ option).
First click on Save, then click on Deploy and wait for the cube to be created. To see when your cube is ready use the Browse window over on the left side of the screen. Click on Actions and choose Refresh:
When your cube is ready it will be visible under the ‘local’ heading:
Go back to the BI Admin tool, close it and re-open on-line. You’ll see that you now have a new Physical data source, Essbase on http…. If you expand the subject area and fact table in the BMM layer you can see the fact table now has two sources:
The dimensions chosen for the Cube will also have two sources now as well.
The next step is to Duplicate the Subject Area to get back the correct original names as used by reports in OBIEE.
Firstly, we need to do a bit of admin on the Presentation layer. Right-click on the subject area and click Properties.
In the Alias tab, click on the original subject area name and click the red cross to remove it. The Admin tool will not let us create a new subject area with the same name as an alias on another subject area. Click OK to save.
Next Right-click on the subject area name in the BMM layer and click ‘Duplicate with Subject Area…’:
Enter the original subject area name for both the business model and subject area names:
Let that process for a short while, then you will have two sets of subject areas:
Now we need to remove the Essbase link from the “Essbase” version of the subject area. Expand the fact table in the “Essbase” subject area, then expand Sources. Click on the Essbase acceleration source and press delete. Confirm when asked.
You need to repeat this on the fact tables and all dimension tables, so that nothing in the “Essbase” subject area has an Essbase data source.
Then check-in and save the repository.
The consistency checker will probably throw up two errors. These can be ignored for now; they won’t affect Essbase being used as a source, but should be addressed at some point.
Back in OBIEE, reload the server metadata (from the Admin tab or in the subject area drop down in an analysis) to pick up the latest changes to the repository.
To test that Essbase is being used, either edit an existing report which uses these fields or create a new one. If creating a new report, be sure to pick the original subject area name, not the ‘Essbase’ subject area (this can be hidden from users using permissions in the repository or Manage Privileges in the Admin screen).
In order to check Essbase is being used properly, we need to ensure there is no caching and logging is switched on to an appropriate level. The simplest way to do this is on the advanced tab of the report.
In the Advanced SQL Clauses section, enter the following into the Prefix field and click Apply SQL:
SET VARIABLE LOGLEVEL=3,DISABLE_CACHE_HIT=1;
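If you prefer testing from the command line rather than the Answers UI, the same prefix can be sent to the BI Server through the nqcmd utility. A hedged sketch – the DSN, credentials and the example logical SQL below are placeholders, and you should substitute your own:

```python
# Run logical SQL (with the logging/cache prefix) through nqcmd rather
# than the Answers UI. DSN, user, password and the query are placeholders.
import subprocess
import tempfile

PREFIX = "SET VARIABLE LOGLEVEL=3,DISABLE_CACHE_HIT=1;"
QUERY = 'SELECT "Sales Data"."Revenue Facts"."Revenue" FROM "Sales Data";'


def nqcmd_args(dsn, user, password, script_path):
    """Assemble the nqcmd invocation; -s runs a logical SQL script file."""
    return ["nqcmd", "-d", dsn, "-u", user, "-p", password, "-s", script_path]


# Write the prefixed query to a temporary script file.
with tempfile.NamedTemporaryFile("w", suffix=".sql", delete=False) as f:
    f.write(PREFIX + "\n" + QUERY + "\n")
    script = f.name

# Uncomment to actually run it against your BI Server DSN:
# subprocess.run(nqcmd_args("AnalyticsWeb", "weblogic", "password", script))
```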
Now return to criteria or results and run your report, for example:
Next go to the Manage Sessions screen from the Administration page. Locate the log for your report (it should be at the bottom of the screen):
Then click the ‘View Log’ link.
Scroll down this screen until you find the select command run to extract the data; you should see it refer to the Essbase_xxxx Physical Catalog (from the Repository). Also, the select command will be in MDX format, not relational SQL format!
You can also look in the operating system to see the cube by looking in the directory:
You can also see the size of the cube by looking in the ‘default’ sub-directory: (my example is pathetically small!):
Looking in this directory provides a useful check – if there is no file here then no data has been loaded into your cube.
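That check can be scripted too. Directory paths vary by installation, so the example path below is an assumption you will need to adjust; the logic is simply “does the database’s default directory contain any data (.dat) files”:

```python
# Quick sanity check that an Essbase load actually stored data:
# an ASO database with data has .dat files in its storage directory.
import glob
import os


def cube_has_data(db_dir):
    """Return True if the directory contains any .dat files."""
    return len(glob.glob(os.path.join(db_dir, "*.dat"))) > 0


# Hypothetical path -- adjust for your own environment:
# print(cube_has_data("/u01/essbase/app/Essbase_Sales/default"))
```

Run against your cube’s directory after a load; a False result means the load left you with an empty cube.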
So, we have successfully created our Essbase cube and can create reports that use it, but done it in a way that will allow scripted data loading. I’ll cover this in Part 2 of this blog: http://www.redstk.com/8736/