One of the highlights of the recent UKOUG RAC SIG for me was a talk from Jamie Wallis from Oracle on TFA, the Trace File Analyzer. It is silently installed during an 11.2.0.4 upgrade, but was available long before that as a standalone download. It answers those all too common requests from Oracle Support to gather a selection of logs from all nodes in a cluster, but it is also much more than that.
A few highlights before diving down into a little more detail –
– Collect all relevant logs from all nodes of a cluster, and collate on a single node ready for sending to Oracle
– Cluster aware (many commands including installation run across cluster)
– On Exadata will recognise storage cells and collect logs from them too
– Installed by default on ODA
– Can be configured to perform automatic collection upon incident
– Self manages repository (default size 10GB, won’t run if filesystem has less than 1GB free)
– Run as root, oracle or any other user configured within tool to have access
– Part of the support tools roadmap. Other support tools being integrated into TFA
– Can be patched manually but also patched by PSUs
– “Zero configuration”. Many options available, but by default will run discovery for all relevant log locations
Now a little more detail.
Command line utility
Very similar in syntax design to other Oracle tools. Get help with “help”, and get help for a specific command with “help <command>”. This feels very familiar.
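For illustration, the help syntax looks roughly like this (exact output and available subcommands vary by TFA version):

```shell
# Top-level help for the tfactl command line utility
tfactl help

# Help for one specific command, e.g. the diagnostic collection command
tfactl help diagcollect
```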
Status / Start / Stop
TFA runs a lightweight daemon process which is automatically started by init. Manual control is also possible; note that starting and stopping happen on the local node only.
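A sketch of the status and start/stop commands, assuming a Linux install where the init script is the usual /etc/init.d/init.tfa (the path and subcommands may differ on your platform and version):

```shell
# Check whether the TFA daemon is running, on all nodes
tfactl print status

# Stop and start the daemon -- this affects the local node only
/etc/init.d/init.tfa stop
/etc/init.d/init.tfa start
```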
Lots of configuration possible.
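For example, you can inspect and change settings with the print and set subcommands; the repository size value below is purely illustrative:

```shell
# Show the current TFA configuration (repository size, trace levels, etc.)
tfactl print config

# Example: raise the maximum repository size (value in MB)
tfactl set reposizeMB=20480
```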
Take a look at some of the directories picked up by default on my cluster. As you can see, the chances of Oracle Support having to assign an SR back to you in order to request yet another log file are extremely remote!
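To see what discovery found, and to add anything it missed, something like the following works (the directory path here is a hypothetical example):

```shell
# List the trace/log directories TFA discovered automatically
tfactl print directories

# Manually add a directory that discovery did not pick up
tfactl directory add /u01/app/oracle/custom_logs
```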
Added a host to your cluster? No problem.
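Registering the new node is a one-liner; the hostname below is made up for illustration:

```shell
# Tell TFA about a newly added cluster node
tfactl host add racnode3

# Confirm the list of hosts TFA knows about
tfactl print hosts
```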
We can trigger a manual collection. As you can imagine, there are a lot of command line options for this but by default it’ll collect 4 hours of logs from all locations.
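A minimal sketch of a manual collection, assuming the -since flag available in the versions I have used (flag names can vary between releases):

```shell
# Default collection: all components, all nodes, recent logs only
tfactl diagcollect

# Restrict the collection window, e.g. to the last 4 hours
tfactl diagcollect -since 4h
```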
We actually get a lot of feedback about what is going on while this collection is taking place.
As you can see, it has collected everything to one node and even reminds us where the repository is located.
We can query the collections which have taken place –
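In the releases I have seen this is another print subcommand, along the lines of (availability of this subcommand varies by version):

```shell
# List the diagnostic collections that have been run
tfactl print collections
```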
TFA can also be configured to trigger a collection upon certain events (such as cluster events, database ORA-600s and other configured errors).
Let’s switch it on across the cluster –
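In the version I was using, the setting and its cluster-wide flag looked like this:

```shell
# Enable automatic diagnostic collection; the -c flag applies the
# setting across the whole cluster rather than the local node only
tfactl set autodiagcollect=ON -c
```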
And then raise an ORA-600 in our ASM instance –
Five minutes later we can see that the collection took place –
Different collections take place depending on the event, and these are clearly documented. However, in this case an ORA-600 on the ASM instance will trigger a collection for the ASM instance only. In this case it will also only include log data for 10 minutes either side of the event. It has produced a ZIP file 84KB in size, so you can see that there is plenty of space in my 10GB repository for many collections!
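You can keep an eye on how much of that repository is in use with:

```shell
# Show repository location, configured maximum size and current usage
tfactl print repository
```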
TFA will automatically detect Exadata storage servers, and offers collection from those too!
We can see it is configured and working –
A really great tool in my opinion, and definitely something that will come in handy next time we have a Sev 1 running and Oracle Support give us their log file shopping list!