My Blog

Saving you 15 minutes: Virtualisation, Licensing, Pools, AIX in em12c Part 1

by Philip Brown on 28th August 2015

If you're looking to understand CPU-related information from em12c then there are a couple of ways you can do it. Through the GUI, go to:

Search for a Host > Configuration > Last Collected > Hardware

From here you can look at two different areas: CPU and Virtualisation. The CPU stats will only make sense in a non-virtualised environment; in a virtualised environment, the section you need to look at is Virtualisation.

[Screenshot: lpar1]

The great thing is that for AIX LPARs this information appears to be taken from the OS command lparstat, and it includes really important configuration details such as the Shared Pool ID. AIX LPARs support hard partitioning, which allows you to carve up your estate into the relevant CPU pools and license them appropriately. Having this information in OEM is valuable because you can then tie it up with all the other licensing information as well.

Now we can look at where this information comes from in the repository. Next to Virtualisation, click Actions and then Search, which takes you to a configuration search page; then click 'Search Using SQL' on the right-hand side.

[Screenshot: lpar2]

SELECT
s1.target_name ,
s2.NAME        ,
s2.VALUE
FROM
   CM$MGMT_ECM_HW_VIRTUAL s2        ,
   MGMT$ECM_CURRENT_SNAPSHOTS s1gen1,
   MGMT$TARGET s1
WHERE
(
   s1gen1.TARGET_GUID     = s1.TARGET_GUID
   AND s1gen1.ECM_SNAPSHOT_ID = s2.ECM_SNAPSHOT_ID (+)
   AND s1.TARGET_TYPE         = 'host'
   AND s1gen1.SNAPSHOT_TYPE   = 'll_host_config'
)

The query will initially have your host embedded in it, but you can take that filter out and start to look at the information for all hosts. The idea is that you can check that the environments you think should be in a particular shared pool really are in that pool; a sketch of such a query is below. In the next blog I will expand on this and look at how else you might want to manipulate this information.
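If all you care about is pool placement, a minimal sketch of such a query is below. It assumes the attribute is stored under the name 'Shared Pool ID' – run the unfiltered query first to confirm the exact NAME values your hosts return.

-- Sketch only: list the shared-pool attribute for every host
SELECT
   s1.target_name ,
   s2.NAME        ,
   s2.VALUE
FROM
   CM$MGMT_ECM_HW_VIRTUAL s2        ,
   MGMT$ECM_CURRENT_SNAPSHOTS s1gen1,
   MGMT$TARGET s1
WHERE
(
   s1gen1.TARGET_GUID         = s1.TARGET_GUID
   AND s1gen1.ECM_SNAPSHOT_ID = s2.ECM_SNAPSHOT_ID (+)
   AND s1.TARGET_TYPE         = 'host'
   AND s1gen1.SNAPSHOT_TYPE   = 'll_host_config'
   AND s2.NAME                = 'Shared Pool ID'   -- assumed attribute name
)
ORDER BY s2.VALUE, s1.target_name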


A Tiny BASH Script

by James Gibbon on 28th August 2015

Here’s a script that I hope will illustrate that a very few lines of BASH can perform a useful, time-saving and relatively complicated task.

As a system administrator responsible for maintaining services across a number of different companies, I'm naturally called upon to log in remotely, using SSH, to a large number of machines. To assist me in this very frequent undertaking, I use a config file in my home directory – $HOME/.ssh/config – which keeps a record of hostnames, network ports, usernames and so on to be used for accessing particular machines. Here's an excerpt from one of them, with the details changed to protect the innocent:

host blah
        hostname 77.66.55.44
        user cloud-user
        port 2222
        IdentityFile ~/.ssh/mykey.pem

host web2
        hostname webserver2.somecompany.net
        user root
        ProxyCommand ssh blah -W %h:%p

I work from a number of computers, each of which has its own version of the config file. I naturally want to share additions to the file between all of them – my laptop, my home desktop and my office desktop – without performing three edits.

Unfortunately it’s not appropriate to simply copy the file around the various computers – for example the home desktop version of $HOME/.ssh/config has local entries for my network at home which aren’t needed or useful on the laptop, or on the office desktop.

So I've written a brief script, sshpull, to help me distribute between the three machines only those SSH config entries which are intended to be shared. It works like this: I maintain the master copy of the shared SSH config on my home machine. A particular comment line in the file, containing only the text "# shared", is used as a divider between the local, machine-specific area and the shared area, on all three versions of the file.

The script is run on the laptop, and the office desktop. It preserves the local area of $HOME/.ssh/config, discards the shared area, and overwrites it with the master copy shared area, obtained from the home machine over SSH. It uses the classic Unix stream editor sed to carve out the necessary lines from the remote master copy and to preserve the local part of the local file, and the basic file copy utility dd to append to it the new version of the shared area.

And here it is. Four lines of brutally terse code, and one of those is only there to take a backup copy of the file:

#!/bin/bash
# sshpull

# (c) 2013 James Gibbon

# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.

# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
# GNU General Public License at <http://www.gnu.org/licenses/> for
# more details.

# the local config file
LCONF=$HOME/.ssh/config

# take a backup
cp $LCONF $LCONF.prev

# store the local part of the file in a variable to overwrite it
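# (sed prints up to and including the first line matching '# shared' and then quits,
#  so the divider line itself is preserved at the end of the local section)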
pt1=$(sed "/# shared/q" $LCONF) && echo "$pt1" > $LCONF

# get the shared part from the master and append it
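# ('1,/# shared/d' deletes everything up to and including the divider on the master copy,
#  leaving only the shared entries; dd then appends them without truncating the local file)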
ssh home "sed '1,/# shared/d' ~/.ssh/config" |dd of=$LCONF conv=notrunc oflag=append


Happy coding.


Using BICS DataSync to load data – Part 2

by Paul Cannon on 28th August 2015

Following on from part 1 of this blog – http://www.redstk.com/using-bics-datasync-to-load-data-part-1/ – where I covered installing and initial configuration of the DataSync utility, I'm now going to run through the steps to set up a simple data transfer to move some data into my BI Cloud database.

The first step in the DataSync client is to click on the Projects button in the top menu bar. This opens the projects window which at present is empty.

[Screenshot: BICS_DataSync_Part2-1]

An important point in this screen is to check which project you are working on. A project is a suite of tables/files to upload in a single session. You may well need multiple projects to populate a BI Cloud database – either because you are loading data from multiple sources or because you need to schedule different tables at different times – maybe some tables are updated daily and others weekly, for example.

In the menu bar at the top of the screen is a drop-down list of the projects that have been created.

[Screenshot: BICS_DataSync_Part2-2]

At present I only have one – ‘Pauls BICS DataSync Project’ – you can create more from the menu under File -> Projects.

For this project I have a set of tables in a local Oracle database which I need to transfer to my BI Cloud database; in part 1 of this blog I had set up a connection to this database – 'BI Datawarehouse'.

So in the tabs across the top of the upper window I need the ‘Relational Data’ tab.

I can now click on 'Data From Table' to import the table definitions. The following window pops up, allowing us to search for and then select tables. I've only got the one source connection, so it defaults to that. In the table name field I could leave the asterisk and view all tables, or filter them down.

[Screenshot: BICS_DataSync_Part2-3]

For this example I’m using the Oracle SampleApp database, so I’m going to filter on tables starting with SAMP.  After clicking on ‘Search Tables’ I get the list of available tables to choose from.

[Screenshot: BICS_DataSync_Part2-4]

It’s a little annoying that you can’t resize this window to cope with a long list of tables, but I can now select the tables I want – just nine of them for now.

Next click ‘Import Tables’. After a few seconds the tables appear in the project window:

[Screenshot: BICS_DataSync_Part2-5]

There are a few settings we can apply to each table at this stage.

Firstly we can specify the exact query to use against the table – the default will be a simple 'select * from…', but in the query you can add where clauses or any other valid select syntax. A common use of this is to restrict the data to the most recently updated records to enable incremental uploading of data – a simple example might be "select * from table_name where update_date > sysdate-1", to only transfer data loaded since yesterday.
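As a hedged illustration only – the table and column names below are hypothetical, not something DataSync provides – an incremental query of that sort might look like this:

-- Hypothetical incremental extract: only send rows changed since yesterday's load
SELECT *
FROM   samp_revenue_f
WHERE  update_date > TRUNC(SYSDATE) - 1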

Next you can specify the load strategy – we’ll look at this below. You can also specify the maximum number of permitted errors before the transfer will be aborted and you can temporarily disable individual tables from the transfer.

Clicking on the Load Strategy field opens up the following window providing options for loading data into each table in the project.

[Screenshot: BICS_DataSync_Part2-6]

The default ‘Replace data in table’ will perform a truncate and resend all data from the table (or from the query if one was entered).

Append will insert records without deleting or checking for existing records – this will generate errors if keys prevent duplicate records, which you can allow for with the maximum number of errors field if you desire.

The update table option uses keys on the table to either add new records or update existing records, and you can specify whether to only add new records, only update existing records, or both.

If you are using the update table option you then need to specify which fields make up the keys for identifying records to update. To do this click on the table in the upper project window. In the lower detail area there are two tabs. The first tab, 'Edit', just displays the same fields as the upper project window. The second tab, 'Table Column Mapping', lists the fields on the table, and the 'Update Rows on Match' column lets you choose which fields to use:

[Screenshot: BICS_DataSync_Part2-7]

Once you’ve finished setting the details for each table in the project click the ‘Save’ button in the bar across the top of the project window.

So now we are ready to run our project – to transfer the data in these tables between the source database and the BI Cloud database.

We do this via “Jobs”. A job is an instance of running a particular project and can be associated with a schedule to regularly run it. Clicking on the Jobs button in the top menu bar opens the Jobs window:

[Screenshot: BICS_DataSync_Part2-8]

Each project will have one Job automatically created – with the suffix ‘Job1’, although this can be renamed. To run the job immediately click the ‘Run Job’ button.

You get an ‘are you sure’ window and then a message stating that the job has been submitted.

Click on the ‘Current Jobs’ tab to see the job progress. The bottom tab provides more detail.

[Screenshot: BICS_DataSync_Part2-9]

Click on the Tasks tab to see its progress through each table:

[Screenshot: BICS_DataSync_Part2-10]

Each table goes through phases of being queued (yellow), running (blue) and finally completed (green). There is a fourth phase – failed (red) – but that hasn't happened here. The DataSync utility can multi-task, which is why four tasks are running at the same time above.

Once each table is complete the window displays the number of successful and failed rows.

The Job History tab will display a list of all the times each project has been run and right now will just show the one successful record:

[Screenshot: BICS_DataSync_Part2-11]

You can manually run a job as often as you like, simply return to the Jobs tab and click Run Job again.

Alternatively you can schedule the job to run automatically using the Job Schedules tab.

Clicking on the New button opens up a new line. In the bottom window you enter the details of the schedule – a name for it, the job to run and then the frequency details. Here I've set up my job to run daily at 10pm from today onwards:

[Screenshot: BICS_DataSync_Part2-12]

Obviously the client does not need to be open for the schedule to run – the DataSync agent running in the background will do all the work. The DataSync agent must of course be running – make sure it gets restarted whenever the server is rebooted!

So there we are – a quick overview of using DataSync to load data into a BI Cloud database. Enjoy!




Using BICS DataSync to load data – Part 1

by Paul Cannon on 24th August 2015

Today I’m taking a look at installing BICS DataSync  – a utility for loading data into an Oracle BICS (Business Intelligence Cloud Service) database.  In a second blog – to follow –  I’ll use it to transfer up some data.

Version 1.2 of this utility has recently been released and can be downloaded from:

http://www.oracle.com/technetwork/middleware/bicloud/downloads/index.html

The download is a 60MB zip file, BICSDataSync_V1_2.zip. Simply unzip this to a new directory – I chose c:\app\BICSDataSync.

The only pre-requisite is having Java 1.7 or later installed and having the JAVA_HOME environment variable set. Both Windows and Linux versions are included in the zip. I'm using a Windows environment for this blog, but simply substituting .sh for .bat in the commands below will run the Linux equivalent.

You can cheat on the JAVA_HOME setting by editing the config.bat file in the directory you unzipped it to above – at the top of this file is a line which sets the JAVA_HOME:

set JAVA_HOME=@JAVA_HOME

Change this to your Java home directory, in my case:

set JAVA_HOME=C:\Progra~1\Java\jdk1.7.0_40

There are two main components: the DataSync service, which runs in the background and processes requests, and the DataSync client, which you use to manage your requests.

The DataSync service is started with the datasync.bat command in the install directory:

C:\app\BICSDataSync\datasync.bat

The first time this is run you are taken through a configuration wizard:

[Screenshot: BICS_DataSync_Part1_1]


You can configure a new environment or copy one from a previous installation of DataSync (e.g. a previous version).


[Screenshot: BICS_DataSync_Part1_2]

Enter a name for your repository – “Default Repository” is just the default name it provides:

[Screenshot: BICS_DataSync_Part1_3]

Enter a password for the Data Sync Agent. This password is stored in an encrypted file underneath the install directory.

[Screenshot: BICS_DataSync_Part1_4]

Click finish to end the wizard.

[Screenshot: BICS_DataSync_Part1_5]

Once it’s saved its details the Data Sync service will start and will prompt for the password again. You will then be prompted to create a new Project:

[Screenshot: BICS_DataSync_Part1-new1]

Then the main client screen will open. Users of older versions of BI Applications may recognise this screen…it’s the old DAC!

[Screenshot: BICS_DataSync_Part1_8]


Note that when you close down the DataSync client, the DataSync service itself keeps running in the background. To restart the client whilst the DataSync service is running you need to run the datasyncClient.bat file from the installation directory. You will be prompted for the Agent password entered earlier if you haven't ticked 'remember password'. You can then select the project to connect to, i.e. the one already created above, or you can create another new one.

[Screenshot: BICS_DataSync_Part1-new12]

The first thing you need to do in the client is create a connection to your BICS database. Click on the ‘TARGET’ row in the Sources/Targets window above and enter the details in the bottom window:

[Screenshot: BICS_DataSync_Part1_10]

All you need to enter is your user name and password for the BICS service and its URL, but without the /analytics on the end, for example:

https://demobics-gbsomecompany54321.analytics.us2.oraclecloud.com


Click Save then hit Test Connection. If all is well you should get the success window:

[Screenshot: BICS_DataSync_Part1_11]


The next step is to set up a connection to your local database – where the data you want to copy up to the BICS database is stored.

Click on the 'New' button in the Sources/Targets window, then enter the details in the window below. There are a number of connection types, depending on how you connect to your Oracle database, or even if you want to connect to other supported databases, e.g. MySQL, DB2, Teradata and TimesTen. You can also load data directly from external files.

[Screenshot: BICS_DataSync_Part1_12]

Again click Save then Test Connection and all being well you’ll get the success window:

[Screenshot: BICS_DataSync_Part1_13]

So at this point we've installed the DataSync service and set up connections so it can see both my source local database and the target BICS database.

The next step is to create a project to identify the data to transfer and schedule a job to perform the transfer… that'll be in part 2 at http://www.redstk.com/using-bics-datasync-to-load-data-part-2/

Enjoy!


Saving you 15 minutes: Customising Standard Compliance Rules in OEM 12c

by Philip Brown on 21st August 2015

In 12c OEM you have Compliance Frameworks, Compliance Standards and Compliance Rules.  Just a brief summary:

Compliance Framework – This is a collection of standards
Compliance Standards – This is a collection of compliance rules by target type (Host, DB, RAC, PDB)
Compliance Rules – These are the individual rules which make a standard

When you are creating a compliance standard you can bring together two different sets of rules: standard or custom. A standard rule is a rule that has been pre-defined and written by Oracle; there are 1700 of these in OEM already, covering different target types. Custom rules are ones written by you. The benefit of custom rules is that they can include and exclude exactly what you want based on your environment. Custom rules are based on configuration extensions (SQL or OS commands executed on a target which bring data into the OEM repository).

Now, standard compliance rules can also be customised, which gives you the ability to filter out certain data if it isn't defined as a violation in your security standard. Here is an example:

There is a standard rule in OEM which checks which users have access to DBA_TAB_PRIVS. In the rule itself you can see that it already has an exclusion list.

[Screenshot: R1]
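As a rough illustration of the kind of check this rule performs (this is not the rule's actual definition, just a hand-written equivalent), you could look at the grants yourself:

-- Who has been granted privileges directly on DBA_TAB_PRIVS?
SELECT grantee, privilege, grantable
FROM   dba_tab_privs
WHERE  table_name = 'DBA_TAB_PRIVS'
ORDER  BY grantee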

However, if you want to add some more users to this exclusion list, how do you do that? The way to do it is through editing your Compliance Standard (not the rule itself!). You may think this is strange, but actually it's quite cool, as it allows you to maintain the integrity of the standard rule, use it again and again, and just tweak it for different standards. Therefore, to edit it:

Enterprise > Compliance > Compliance Library > Compliance Standard > Override Target Type Settings

Now go to the specific rule in question; here you have an exclude objects list, and all you do is add in the additional users.

[Screenshot: R2]

There you go; simple and easy to modify. Standardisation with simple customisation; it's a win-win.


Saving you 15 minutes: TNS soup, how to confuse yourself and others

by Philip Brown on 11th August 2015

I was asked to help out on a strange 'TNS no listener' issue and found a perfect storm of incorrect configuration and specific UNIX settings which made this a tricky one at first glance. The issue occurred after an ASM migration, which is when the listener was moved from the Oracle home to the Grid home.

With this particular client there are two different network addresses, an ADMIN address and a SERVICE address. This means that any host can be connected to on two different addresses; however, the ADMIN address isn't to be used, only the SERVICE address.

The ADMIN address is DBLDN01_ADM and the SERVICE address is DBLDN01_SRV. When you are on the OS and run hostname it will always return the ADMIN address.

So when it came to the listener configuration there were the following errors:

  • LOCAL_LISTENER was using a hardcoded connection string and not an alias
  • The TNSNAMES.ORA which had an alias was pointing to the ADMIN address
  • The LISTENER.ORA file in the Grid home didn’t specify a host (it did for IPC but that is EXTPROC) and therefore was picking up the ADMIN address not the SERVICE address for TCP

It shouldn't look like this (notice the mismatches between what is defined as an alias and what is then specified as the HOST, which adds to the confusion):

LOCAL_LISTENER - Database parameter uses a hard-coded string
*.local_listener='(DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=DBLDN01_ADM)(PORT=1527)))'

TNSNAMES.ORA - This is ignored as LOCAL_LISTENER isn't using an alias, and if it did use it, it would be wrong
DBLDN01_SRV = (DESCRIPTION = (ADDRESS = (PROTOCOL = TCP)(HOST = DBLDN01_ADM)(PORT = 1527)))

LISTENER.ORA – HOST not specified for TCP, only IPC, therefore it will use the default hostname, which is an ADMIN address
DBLDN01_SRV=(DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=IPC)(KEY=DBLDN01_SRV))))                # line added by Agent
ENABLE_GLOBAL_DYNAMIC_ENDPOINT_DBLDN01_SRV=ON            # line added by Agent


It should look like this:

LOCAL_LISTENER
*.local_listener=DBLDN01_SRV

TNSNAMES.ORA
DBLDN01_SRV = (DESCRIPTION =  (ADDRESS = (PROTOCOL = TCP)(HOST = DBLDN01_SRV)(PORT = 1527)))

LISTENER.ORA
DBLDN01_SRV =
  (DESCRIPTION_LIST =
    (DESCRIPTION =
      (ADDRESS = (PROTOCOL = IPC)(KEY = EXTPROC1521))
      (ADDRESS = (PROTOCOL = TCP)(HOST = DBLDN01_SRV)(PORT = 1527))
    )
  )

ADR_BASE_LISTENER = /oracle/GRID
ENABLE_GLOBAL_DYNAMIC_ENDPOINT_LISTENER=ON              # line added by Agent
ENABLE_GLOBAL_DYNAMIC_ENDPOINT_DBLDN01_SRV=ON            # line added by Agent
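For completeness, a sketch of how the corrected LOCAL_LISTENER setting might be applied – this assumes an spfile and that the DBLDN01_SRV alias resolves from the database's tnsnames.ora:

-- Point the instance at the TNS alias rather than a hard-coded address
ALTER SYSTEM SET LOCAL_LISTENER = 'DBLDN01_SRV' SCOPE = BOTH;

-- Force the instance to re-register its services with the listener
ALTER SYSTEM REGISTER;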

Hostnames have been changed for the obvious reasons of security… ;-)


OWB and 12c Pluggable Databases

by Pete Bunning on 5th August 2015

Just a quick one this month about OWB and installing it with Oracle 12c.

I did this a couple of months ago on my laptop and it worked perfectly, but when we tried again on a client site every step threw up new and exciting errors. We had trouble with the OWB install, registering users and installing workflow. Eventually we worked out the only difference was that we were trying to install into a pluggable 12c database instead of a normal one.

We searched for problems in My Oracle Support and found nothing, but while I was hunting I stumbled across a note: How To Use OWB With The 12.1 Pluggable RDBMS Feature (Doc ID 1532674.1). Although it didn't help with our problems, the note did point out that using OWB with a pluggable database requires a full ODI licence. This was news to me and I still can't find any other mention of this particular restriction.

Anyway, we had our DBA rebuild the database the old-fashioned way and tried again. This time the install worked perfectly.

Until next time…



Saving you 15 minutes: Why do we focus on CPU cores when comparing SE and EE??

by Philip Brown on 4th August 2015

I was discussing the merits of SE and EE the other day and got into a bit of a conversation around CPU cores and SE. SE has a maximum socket count of 4; that translates generally into a 16-core count for any database running SE. From memory I knew that SE and EE are potentially worlds apart in terms of functionality, and I had a quick look on the web to verify that. This is the link that I came across:

http://www.oracle.com/us/products/database/enterprise-edition/comparisons/index.html

Taken at face value you could argue that, based on the information there, there is very little difference between SE and EE; however, from memory I knew there were some key differences, like bitmap indexes (though maybe that is no longer the case). A bit more digging led me to this link:

http://docs.oracle.com/database/121/DBLIC/editions.htm

OK, so now I can see the true extent of the differences between SE and EE. Some of the things I would say you would take for granted which are not in SE are the following:

  • Online Index Rebuilds
  • Online Table Redefinition
  • Block Change Tracking
  • Various Flashback Options
  • Fine-Grained Auditing
  • Any of the Manageability Packs (that’s no AWRs!!!)
  • Basic Compression
  • Bitmap Indexes
  • Anything in Parallel
  • Transportable Tablespaces

The list in the link contains a lot more, but the items above are some of the key aspects of EE which people would potentially take for granted. When you're looking at SE vs EE I think it is easy to assume that it will be 'ok' because of the number of CPU cores you could potentially run. Running on SE is fine with 16 cores, but unless your application has been designed to run in parallel (remember, Oracle won't be doing that for you now) you could just be hitting performance bottlenecks and digging out notes on Statspack to try and fix it! Don't get me wrong, SE definitely has its place, but if you're currently running EE and are thinking of switching to SE then you need to look way beyond CPU cores; a quick sanity check is sketched below.
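Before any EE-to-SE move it is also worth checking which EE features a database has actually been using. A quick, imperfect starting point (treat it as indicative rather than a licensing tool) is the feature usage view:

-- Which features has this database recorded as used?
SELECT name, detected_usages, currently_used, last_usage_date
FROM   dba_feature_usage_statistics
WHERE  detected_usages > 0
ORDER  BY name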


Saving you 15 minutes: In-Memory Priority Changes Compression Level

by Philip Brown on 3rd August 2015

I have been evaluating In-Memory compression for a large BI system and found a bit of an anomaly involving the different compression levels and changing the In-Memory priority. The test case below is repeatable; I'm not sure if this is an issue with In-Memory in general or just on AIX.

So, in summary: if you change the compression level of a table that is currently In-Memory (from LOW to HIGH) and then change the In-Memory priority, Oracle will change the compression level back to the original. This happens even if you disable In-Memory on the table between changing the compression levels.

So to re-create this:

ALTER TABLE STAR.SALES_ITEM INMEMORY MEMCOMPRESS FOR QUERY LOW;
In-Memory store has been populated for the table

ALTER TABLE STAR.SALES_ITEM NO INMEMORY;
In-Memory store has been disabled on the table

ALTER TABLE STAR.SALES_ITEM INMEMORY MEMCOMPRESS FOR QUERY HIGH;
In-Memory store has been enabled but not populated
V$IM_SEGMENTS shows INMEMORY_COMPRESSION = QUERY HIGH

ALTER TABLE STAR.SALES_ITEM INMEMORY PRIORITY HIGH;
Note – now V$IM_SEGMENTS shows INMEMORY_COMPRESSION = QUERY LOW

While writing this I did another test where I waited until the In-Memory store had started to be populated and then changed the priority; the result was that the In-Memory store was purged and the compression level set back to QUERY LOW.

The query which I use to track the In-Memory store population is this:

select count(*) from v$im_segments where populate_status='COMPLETED'
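A slightly fuller version of that check, which is where the compression change shows itself, is to bring back the priority and compression columns as well (a sketch along the same lines as the query above):

select segment_name,
       inmemory_priority,
       inmemory_compression,
       populate_status
from   v$im_segments
where  segment_name = 'SALES_ITEM'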

This will need some further investigation… I'd be interested to hear about any other In-Memory anomalies out there.


HCM Compensation – Salary Details SA

by Simon Parsons on 31st July 2015

To access the Salary Details Real Time Subject Area in OBIEE for HCM you need to add the following Roles:

either Compensation Analyst or Compensation Manager

AND you also need to add the BI Duty Role

Compensation Transaction Analysis Duty

see:

http://www.oracle.com/technetwork/middleware/bi/documentation/hcm-comp-sal-dets-rt-2226976.pdf

However, recently I had a problem with this. It worked fine in Test but didn’t work in Prod after I added the roles. It behaved as if I still had no access and showed no Salary details. Hmmmm…

[Screenshot: Snap1]

Here is the solution.

Go to Setup and Maintenance and search for ‘Security’.

[Screenshot: Snap2]

Select ‘Manage Data Role and Security Profiles’.

[Screenshot: Snap3]

Select the Role you have used when you allocated the previous Roles. Mine is ‘Red Stack HR Specialist’.

[Screenshot: Snap4]

Click 'Next'.

[Screenshot: Snap5]

In the Security Criteria screen make sure you have selected the Person Security Profile 'View all People' and NOT 'View all Workers'. This should fix the problem. Click Next and repeat until the 'Submit' button is available. Click Submit and you should now be able to run your report and see Salary Details.

[Screenshot: Snap6]

Have Fun! 
