My Blog

Saving you 15 minutes: Recovery session…my picks for the post SQL Dev run Sunday OOW15

by Philip Brown on 7th October 2015

Open World is not far away and I'm already starting to pick out some sessions.  This year will be a bit different for me, as I will hopefully be up early to run across the Golden Gate Bridge, jet lag and hangover permitting.  Assuming that goes well, these are some of my picks for the day:

Worst Practices in Data Warehouse Design UGF9985
I've worked on a lot of data warehouses but, to be honest, I've never had to build one from the ground up; I've got an idea of how it might be done, but I'm sure I could fall into some common mistakes.  Any session that is backed up by real-life stories is great, so this one looks right up my street.

Oracle Database In-Memory Option: Challenges and Possibilities UGF10319
I'm working on my own In-Memory presentation at the moment and I'm keen to hear about other people's experiences.  I've hit a couple of challenges, shall we say; I'm not sure whether that's down to user error or the feature not working quite as expected!  I can't say whether this session will include real-life use cases or just Christian's own experiences; the former is always best, but hearing from somebody who has worked through it will still be interesting.

Improving Analytics: Experiences Testing Oracle Database In-Memory UGF9936
What can I say, I hope to be an expert on In-Memory by the end of the day!  Practical hints are always good, and I'm also keen to hear stats on how long it takes to load data into memory; so far I have seen that it is time-consuming and massively CPU-intensive.

Rapid Provisioning of Real Application Clusters and Database Deep Dive UGF10275
All my database provisioning is pushed via OEM and I wouldn't use anything else.  This session doesn't mention OEM, so I'm keen to see what tooling is being used.  If it is OEM then I'm sure I could pick up some tips; if it's not, then I'm interested to compare it with my own experience.

Managing the Changes in Database Structures UGF6317
Heli has been nominated for an Oracle Database Developer Award around database design, and this session is on database structures with SQL Developer Data Modeler.  I don't work in this space on a day-to-day basis, but that doesn't mean it won't be interesting.  Part of the OOW experience for me is sitting in on sessions about technology not directly related to what I do, and if it is based on experiences and insight, why not!


Database system target in pending status for standby database in OEM 12c

by Svetoslav Gyurov on 6th October 2015

This isn't really a problem, but it is an annoying issue I had with OEM 12c. Once a standby database is promoted, its database system target shows a metric collection error or a status of Pending.

The standby database doesn’t need its own system since it will join the primary database system. The solution is to associate the standby database with the primary system and then remove the standby database system.

For example, we've got primary and standby databases TESTDB_LON and TESTDB_RDG. Once they are promoted, the following system targets are also created in OEM: TESTDB_LON_sys and TESTDB_RDG_sys.

The second one will always have status Pending:
Status Pending (Post Blackout)

The way to resolve this is to associate the standby database with the primary database system. I usually rename the primary database system as well, to omit the location suffix (LON or RDG):
– Go to the Targets -> Systems and choose the system you want to edit
– Then go to Database System -> Target Setup -> Edit system
– Rename the system name from TESTDB_LON_sys to TESTDB_sys
– Save changes
– Go to Database System again, Target Setup -> Edit system
– Click next to go to Step 2
– Add the standby database to the Standby Database Associations table
– Save changes

At this point we’ve got one system TESTDB_sys with two database members TESTDB_LON and TESTDB_RDG.

Next step is to remove the database system for the standby using emcli:

[oracle@oem12c ~]$ /opt/app/oracle/em12cr4/middleware/oms/bin/emcli login -username=sysman
Enter password :
Login successful

[oracle@oem12c ~]$ /opt/app/oracle/em12cr4/middleware/oms/bin/emcli delete_target -name="TESTDB_RDG_sys" -type="oracle_dbsys"
Target "TESTDB_RDG_sys:oracle_dbsys" deleted successfully

Now it’s all sorted and hopefully all targets are “green”.



by Elizabeth Spencer on 6th October 2015

The Red Stack Tech Academy will provide local talent with an unrivalled opportunity to develop sought-after technical skills and secure employment at Red Stack Tech's prestigious Krakow office.

Oracle Platinum Partner, Red Stack Tech, has announced the official launch of its Red Stack Tech Academy in Poland. Following a rigorous pre-selection process, 17 talented individuals have secured their place on the fully-funded 12-month database training course which will be delivered by leading international Red Stack Tech experts.

Graduates that demonstrate an exceptional level of technical skill and expertise on completion of the course will be offered full-time positions at Red Stack Tech’s prestigious Krakow office, joining a dedicated team responsible for handling the operations of large database systems, on a global scale.

Red Stack Tech is a leading Oracle technology brand specialising in the provision of Oracle Software, Hardware, Managed and Professional services. Headquartered in the UK, the company entered the Polish market in January 2015, following the acquisition of leading Krakow-based Oracle provider, Gridwise. With additional offices strategically located in the U.S. and Australia, the company provides Oracle technology solutions to a worldwide client base and is widely recognised as a global Oracle specialist.

The Academy will feature a 12-month teaching curriculum in which participating students will learn all the skills required to become a database professional, including: the architecture of relational databases, Oracle database maintenance, troubleshooting, backup and archiving, database migration, consolidation and Oracle Engineered Systems. The Academy will also take a dynamic approach to learning, giving students the opportunity to apply their theoretical classroom-based learning to real-life commercial challenges. Participating students will receive a monthly stipend and will attend the Academy while continuing their daytime education.

In order to attract the highest calibre of technical talent, the company held a series of open lectures and workshops at several local Universities in May and June 2015, after which 52 applicants took part in an interview and technical assessment, with applicants that demonstrated excellent technical ability and aptitude securing their place at the Academy.

Alex Louth, CEO at Red Stack Tech, said, “After receiving so much interest in the Academy from local talent, these 17 students stood out as having a high level of technical knowledge and long-term potential so we are very pleased to offer them this unrivalled opportunity to increase their qualifications, develop sought after skills and secure desirable employment.”

“For us, the launch of the Academy is only the beginning of our long-term investment in Poland. Our vision is to make Poland the center of our core continental Europe operations and to service the majority of our global clients from Krakow, so building a strong, committed local workforce is of paramount importance and we will continue to invest heavily to achieve this.”


Saving you 15 minutes: Sharing credentials for the software library for database provisioning in em12c

by Philip Brown on 2nd October 2015

When you deploy software from OEM you need to stage the binaries in the OEM software library.

The method I use is an agent-referenced location. To set this up, go to:

Setup > Provisioning and Patching > Software Library > Reference File Location

In the drop-down box, select Agent.


When you add a location, you define the name, select the host and enter the relevant path.  When you log in to select the location, you're actually determining the host credentials that will be saved against this referenced location.  Once you have selected the location you will see the following:


If you click on the change credential option you will see which credentials have been saved against this location.  It is quite likely that the person who sets up the initial location will not be the person deploying the software at some point in the future.  Therefore, whoever sets up this agent location needs to ensure that the other OEM administrators have VIEW access to the credential.  If you don't do this, then when it comes to deploying the software you will get an error in the provisioning tool, potentially around a credential that doesn't look familiar or wasn't specified during provisioning; that's because it was set up some time ago.  Not a biggy, I know, but hopefully it will save you 15 minutes!


Yum Yum

by James Gibbon on 30th September 2015

Broadly speaking there are two branches of the GNU / Linux faith; two kinds of popular distribution. These can be categorised by the method they use to install and maintain installed software – their package manager.

Some use the Red Hat package management system and are often referred to as "RPM-based". Others, derived from Debian Linux, use that distro's own system to maintain packages. Examples of the former include SUSE, Mandriva and Oracle Linux. Ubuntu is the best-known Debian-based distro, probably even better known than Debian itself.

Other versions of Linux that aren't derived from either Debian or RedHat don't use either of those package management systems (Slackware and Gentoo, for example), but it is rare to find one of these being used in a commercial environment, at least for a server build.

As you might expect, Debian-based and RPM-based systems use roughly equivalent methods for maintaining packages. Each handles packages represented in a single file (suffixed by .deb and .rpm respectively) which contains the package’s various binaries and base config files, in addition to package metadata. Each provides a basic command-line tool to perform basic operations on package files and installed packages – to install, upgrade, uninstall, query the package database and so on. For RPM this is rpm; for Debian and its descendants, dpkg.
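To make that equivalence concrete, here is a rough side-by-side of the everyday operations (the package names are illustrative). The guarded command at the end simply prints a small sample of installed packages using whichever tool is present, so the sketch is safe to run on either family of distro:

```shell
# Rough rpm <-> dpkg equivalents for the basic operations:
#   install a package file:    rpm -ivh pkg.rpm   |  dpkg -i pkg.deb
#   remove a package:          rpm -e pkg         |  dpkg -r pkg
#   list installed packages:   rpm -qa            |  dpkg -l
#   which package owns a file: rpm -qf /bin/ls    |  dpkg -S /bin/ls

# Print a small sample of installed packages with whichever tool exists here.
if command -v rpm >/dev/null 2>&1; then
    sample=$(rpm -qa | head -5)
elif command -v dpkg >/dev/null 2>&1; then
    sample=$(dpkg -l | head -5)
else
    sample="neither rpm nor dpkg found on this system"
fi
echo "$sample"
```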

However, both also provide a powerful, higher-level tool to manage packages, capable of downloading package files from repositories, checking dependencies and recursively installing dependency packages automatically. For Debian-derived flavours of Linux this is the Advanced Packaging Tool, APT, which offers a command-line utility called apt-get. RPM-based distros have YUM (which actually stands for the Yellowdog Updater, Modified, but that's perhaps an obscure historical reference that I'll leave to the reader to look up if interested).

For the purpose of this brief piece I’ll outline a few of YUM’s capabilities with command line examples using package names plucked out of the air. Obviously these operations would all be performed as root.

To install a package:

# yum install mysql


To upgrade a package already installed to the most recent available version:

# yum update bash


To search for a package using a keyword:

# yum search php5


To reinstall a package at the same version:

# yum reinstall zfs


To find out what package is needed for a particular utility:

# yum provides "*bin/top"


To list all available packages:

# yum list


To patch your entire system up to date:

# yum update

These are just a few of the capabilities; an excellent cheat sheet for YUM can be found at RedHat’s site here, or of course you can type “man yum” from a terminal.

It's an indispensable and extremely powerful tool, but as always that power is a double-edged sword: be careful! Fortunately yum will prompt you to confirm before doing anything destructive, which means, for example, that if you're brave you can find out how many packages are out of date on your system by typing yum update, then answering N to the confirmation prompt. Like this:

Transaction Summary
Install   1 Package  (+12 Dependent packages)
Upgrade  99 Packages
Remove    1 Package

Total download size: 170 M
Is this ok [y/d/N]: N

An entire system update performed using yum in the manner described (and copped out of) above is usually safe, but of course paranoia is the system administrator's friend, and I would always take the opportunity to make a backup first, especially on a cloud-based system or VM where the entire host can be restored from a snapshot.
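If you would rather not rely on answering N at the prompt, yum's check-update sub-command gives the same information read-only: it lists the packages with newer versions available without installing anything, and its exit code tells you the result. A guarded sketch (the command-existence check just makes it safe to run on non-RPM systems):

```shell
# yum check-update queries the repositories and lists packages that have
# newer versions available, installing nothing. Exit codes: 0 = up to date,
# 100 = updates available, anything else = an error occurred.
if command -v yum >/dev/null 2>&1; then
    rc=0
    yum -q check-update || rc=$?
    case "$rc" in
        0)   msg="System is fully up to date." ;;
        100) msg="Updates are available." ;;
        *)   msg="yum returned an error (rc=$rc)." ;;
    esac
else
    msg="yum not found; this is not an RPM-based system."
fi
echo "$msg"
```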

Debian’s APT tool is no less capable; perhaps I’ll visit that in a later piece.


HCM Bank Account details

by Simon Parsons on 30th September 2015

If you've been tearing your hair out recently trying to work out how to find Bank Account details in HCM, then I have the answer here. (I've got no hair left, and before this I looked like Robert Plant..)

You can’t do it in an Analysis. It’s far too complicated and none of the Subject Areas contain the columns you require.

So you have to use BI Publisher. This SQL will allow you to connect the columns you require to a person_id. You should now be able to include this in your BI Publisher data model…

Have fun…

select distinct p.person_id   -- the published post lost the column list; add the columns you require here
from per_persons p
, hz_parties h
, hz_orig_sys_references hosp
, pay_pay_relationships_dn rel
, iby_account_owners ao       -- assumption: the table behind the "ao" alias, missing from the published SQL
where hosp.owner_table_id = h.party_id
and to_number(hosp.orig_system_reference) = p.person_id
and hosp.owner_table_name = 'HZ_PARTIES'
and hosp.orig_system = 'FUSION_HCM'
and rel.person_id = p.person_id
and ao.account_owner_party_id = h.party_id



HCM – Build a BI Publisher Report – Part 2

by Simon Parsons on 30th September 2015

To create your BI Publisher report in HCM, click on 'Create Report'.


Make sure 'Use Data Model' is selected, select 'Guide Me' too, then click 'Next'.


We're going to use the wizard initially, but as you get used to editing your reports you may want to go straight to 'Use Report Editor'.


Initially we’re just going to create a Landscape Report with a Table Layout. Click Next.


Now you can select which columns you want in your Report. Drag over the columns you want.


When you're ready for a first look at the report, click 'Next'. Also un-tick the Grand Totals Row for this report.


Click ‘Finish’.


Give your Report a Name…


We now have the start of our HR Report. We can edit the Layout. More of that in the next Blog.

Have fun…


OBIEE “Failed to upload repository” error

by Paul Cannon on 29th September 2015

I came across the error "Failed to upload repository" today whilst uploading a new RPD file via Enterprise Manager into an installation of OBIEE to which I'd uploaded repositories many times before:


The error crops up immediately after browsing for the new RPD file, before even entering the repository password.

Viewing the log messages didn't return anything meaningful, other than reams of Java errors. Googling didn't help much either. After a bit of digging around I discovered that the disk volume where my /tmp directory is located was full:


When you upload a new repository, Enterprise Manager copies it to the /tmp directory initially, then moves it to the /repository directory under the OBIEE home once you hit apply.

So I cleared some old files to open up some disk space and the repository loaded perfectly afterwards.
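For reference, checking the space and hunting for stale files is a one-minute job from the shell; the seven-day age threshold below is just an illustrative choice:

```shell
# Show how full the volume holding /tmp is (-P keeps the output in
# portable one-line-per-filesystem format).
df -P /tmp

# List regular files directly under /tmp that haven't been modified for
# more than 7 days. Using -print rather than -delete so you can review
# the candidates before removing anything.
find /tmp -maxdepth 1 -type f -mtime +7 -print
```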

We hit this error on our version of OBIEE, but I just checked and the same error happens in other versions when the /tmp disk is full.

Hope it helps!

OBIEE Variables Prompts and Aliases

by Pete Bunning on 28th September 2015

I've never been the biggest fan of presentation column aliases in OBIEE and, probably because of this, I'd never found any need to use them.

That was until recently, when I was tidying up a new dashboard and finding ways to make it more user-friendly.  The dashboard in question had a few analyses with the option to show revenue in different currencies.  Initially we had each analysis with three different views showing US dollars, Euros, and US dollars using a different exchange rate.  The problem was that the user had to switch views independently on each analysis, which was a bit painful.

So to solve this I decided to use a variable prompt which would allow the user to choose the currency.  The variable could then be used in the analysis column formula to choose which currency is displayed, by using the variable name instead of a presentation column name.

The first step was to add a prompt which displays the three currencies and assign the prompted value to a presentation variable (KPICURRENCY).


I used custom prompt values that looked better on the dashboard instead of using the exact column name (in this case ‘USD Amount’, ‘Euro Amount’ etc).


Then I went to the analysis and edited the 'Amount' column formula.  Instead of the column name I used a variable expression, changing:

-("Finance Transactions"."Euro Amount")

to:

-("Finance Transactions"."@{KPICURRENCY}{EURO}")


If I left it like this the formula would throw an error at runtime as the values from the prompt don’t match any known presentation column.


So the last step is to edit the RPD to make use of the alias.  In each of the three currency columns I added an alias which matched the custom prompt values above:

(for example, aliases alias_USD and alias_USD_inv, matching the corresponding prompt values)

Now, the prompt values will be translated into the proper column name and the report will switch columns at runtime based on the user input.

Until next time…


Keep your BICS Server Tidy…deleting uploaded data files

by Paul Cannon on 25th September 2015

In the BI Cloud service it is possible to load data on-the-fly directly into new Visual Analyzer or standard Analysis reports – I’ve already written some blogs on doing this.

What isn't particularly clear is how to delete this data if it's no longer required. Once you've loaded it, the data sits on the cloud server taking up space and shows up as a usable data set every time you create a new analysis. Clearly there needs to be a way of deleting these files and reclaiming the space they take up. There is, but it's not exactly obvious!

In my BI Cloud server I’ve loaded three data files for various analyses I was working on and these can be seen whenever I create a new Analysis or Visual Analyzer project:

From new Analysis:


For a new Visual Analyzer project:


These files can be deleted from within the Visual Analyzer editor window. Even if you loaded the files for use in a standard Analysis, you can still use Visual Analyzer to delete them.

If you've loaded a data file into a standard analysis you can delete the file whilst still editing that analysis – for example if you loaded the wrong file – but once you've saved and left the analysis you can only go back and delete it if it's not the main (i.e. first) data source in the report.  More on this below.

So from the Home page select 'Create a Visual Analyzer Project'.  Then, when the 'Add Data Source' window appears, select the file you want to delete.  Click OK to create the new report.


The new report will open as normal:


Now select the Data Source tab in the top left of the window to just see the selected file:


Now right-click on the file and select 'Delete'.


You’ll get a confirmation window:


Then the file will be deleted. You will still be in the Analyzer screen, but with no data source selected:


At this point you can either close down the report screen or use the ‘Add Data Source’ link at the bottom of the Data Source tab and bring in another saved file to delete. If you do this you’ll notice the file you’ve just deleted is no longer available:


When you close down the Visual Analyzer project screen there is a slight issue with the home page. Since Visual Analyzer opens in a new tab, when you close it and return to the home page, the home page is still there on its own tab and hasn't been refreshed, so it doesn't know about the files you've just deleted. If you try to create a new Analysis you'll see the deleted file(s) are still visible:


However if you select the deleted file you will then get the following ‘Access Prohibited’ screen:


If you return back to the home page it gets refreshed and the deleted file(s) are no longer visible.

You can avoid this either by refreshing the home page (the browser refresh button will do the job) or by simply navigating away from the home page and back again (e.g. go to a dashboard and back):


Be warned that any Analyses or Visual Analyzer reports you’ve created with the deleted files now won’t work. Attempting to open a saved Analysis results in the following error:


And opening a saved Visual Analyzer project, essentially the same message:


It is, however, worth noting that if you reload the same file, either via a new Analysis or a new Visual Analyzer project, then any reports that use the file will work again – so you can delete and then reload data.

When using Visual Analyzer to load data it is possible to overwrite an existing file – you simply reload the file and it will prompt you about replacing it. However, when using a standard Analysis report to load a data file you cannot overwrite the file – you are merely told to give it a different name before loading.

In order to delete files via a standard Analysis, the best option is to create a new report using a database subject area and then add the loaded files as additional data sources – you can then delete them individually by right-clicking on each data source.



