In this session, the VP of AIOps for the Autonomous Database discusses using EXAchk to manage Exadata environments effectively. EXAchk can run automatically and monitor for changes, providing proactive warnings before issues impact users. It checks for compliance with best practices and reduces risk through regular email reports. EXAchk is becoming part of the Autonomous Health Framework (AHF) to provide a single interface for issue detection, diagnosis, and support tools.
VP AIOps for the Autonomous Database: How to use EXAchk effectively to manage Exadata environments
1. VP AIOps for the Autonomous Database
Sandesh Rao
How to use EXAchk effectively to manage Exadata environments
@sandeshr
https://www.linkedin.com/in/raosandesh/
https://www.slideshare.net/SandeshRao4
2. EXAchk: Reduce Your Risk
• Automatic proactive warnings before you're impacted
• Results viewable in the tool of your choice
• Regular emails with check results
• Compliance checks for the most impactful recurring problems
• No need to send anything to Oracle
3. Building an Engineered System with best practices
Development methodology:
1. Idea: reports from development, testing, support, etc.
2. Expert review: weekly meetings to review and test
3. MOS Note 757552.1: published Exadata best practices
4. Default deployment: bake best practices back into the default deployment
5. EXAchk check: generation of a new Exadata EXAchk check
4. One combined product
EXAchk as part of AHF (Autonomous Health Framework):
1. EXAchk: automatic compliance checking, warnings when drifting away from best practices, and pre- and post-upgrade advice
2. TFA: automatic issue detection, diagnostic collection and analysis, along with a single interface for Database support tools
3. AHF Service (coming soon): a single web dashboard using data from EXAchk and TFA to provide issue triage, compliance checking, solution recommendation, anomaly timelines, issue diagnosis and more
5. Benefits of having EXAchk in AHF
1. User-defined input and output locations
2. Lifecycle management
3. Scheduling of automated tasks
4. Secure socket connection to remote nodes
5. Configuration for result uploads
6. Task delegation from root to low-privilege users
7. Executables available from any location
6. Upgrading to AHF
• LOCATION: TFA will be moved from GI_HOME to AHF_LOC
• CONFIGURATION: from Dec 2019 onwards AHF is contained in OEDA; OEDA configuration will be shared between TFA and EXAchk
• RESOURCES: EXAchk will use the TFA scheduler and TFA socket
• COLLECTION: TFA will use EXAchk to collect cell diagnostics
• MOS: from Nov 2019 onwards AHF is available from My Oracle Support
• DB AND GI: from 20c onwards AHF is contained in the base Grid Infrastructure and Database
7. EXAchk use cases
• AUTOMATED (recommended): run automatically and monitor the diffs; in virtualized Exadata, autoruns only on domU
• ON-DEMAND: run once a month; in virtualized Exadata, run on dom0, cells and switches
• CONFIGURATION: run before and after configuration changes
• MAINTENANCE: run before and after any planned software and hardware maintenance
13. Architecture Options
Health checks run on many instances; each run produces HTML and email output, plus SQL, XML and JSON results that can be uploaded to a single central instance:
• Collection Manager: upload results to an Oracle Database and view enterprise-wide results via the Collection Manager interface
• Enterprise Manager: upload results to the Enterprise Manager Oracle Database and view enterprise-wide results via the Enterprise Manager interface
• ELK Stack: upload results to Elastic Search and view enterprise-wide results via Kibana dashboards
• AHF Service: upload results to an Object Store and view enterprise-wide results via the AHF Service UI
14. Oracle Stack Coverage
• Oracle Database: Autonomous Database; standalone Database; Grid Infrastructure & RAC; Maximum Availability Architecture (MAA) Scorecard; Upgrade Readiness Validation; GoldenGate; Application Continuity
• Enterprise Manager Cloud Control: Repository; Agent; OMS
• Middleware: Oracle Identity and Access Management Suite (Oracle IAM)
• Oracle CRM: Oracle Project Billing
• Siebel: database best practices
• PeopleSoft: database best practices
• SAP: Exadata best practices
• ASR
• Engineered Systems: Oracle Exadata Database Machine; Oracle SuperCluster; Oracle Private Cloud Appliance; Oracle Database Appliance; Oracle Big Data Appliance; Oracle Exalogic Elastic Cloud; Oracle Exalytics In-Memory Machine; Oracle Zero Data Loss Recovery Appliance; Oracle ZFS Storage Appliance
• Systems: Oracle Solaris; cross-stack checks; Solaris Cluster; OVN
15. Ways to run compliance checks
• Limit checks: -profile (one or more of 40+ different component-focused check categories)
• Upgrade readiness: -preupgrade, -postupgrade
• Limit targets: -cells, -clusternodes, -ibswitches, -dbnames
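The flags above combine naturally in wrapper scripts. A minimal sketch, assuming a few made-up scenario names and placeholder cell/switch lists (only the flags themselves come from the slide; nothing here is an exachk concept):

```shell
#!/bin/sh
# Illustrative sketch: map a maintenance scenario to exachk flags.
# Scenario names and target lists are placeholders, not exachk features.
scenario="upgrade_prep"
case "$scenario" in
  upgrade_prep)  flags="-preupgrade" ;;
  upgrade_done)  flags="-postupgrade" ;;
  storage_only)  flags="-profile storage -cells cel01,cel02" ;;
  switches_only) flags="-ibswitches sw-ib1,sw-ib2" ;;
  *)             flags="" ;;
esac
cmd="exachk $flags"
echo "$cmd"
```

This only assembles the command string; running it requires an AHF install on the node.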
16. How to run EXAchk in virtualized Exadata
1. As root, install AHF into the management domain (also referred to as dom0)
2. Run the exachk command as root. When run from dom0, it discovers all compute nodes, storage servers, and InfiniBand switches in the entire InfiniBand fabric
3. Optionally, to run on a subset of nodes, use:
• -clusternodes to designate database servers
• -cells to designate storage servers
• -ibswitches to designate InfiniBand switches
17. Uploads via tfactl setupload
Upload to Database:
tfactl setupload -name mysqlnetconfig -type sqlnet
Enter mysqlnetconfig.sqlnet.user: orachkcm
Enter mysqlnetconfig.sqlnet.password: ########
Enter mysqlnetconfig.sqlnet.connectstring: (DESCRIPTION = (ADDRESS = (PROTOCOL = TCP)(HOST = scam02-scan1)(PORT = 1521))(CONNECT_DATA = (SERVER = DEDICATED)(SERVICE_NAME = CDB11_PDB1_svc)))
Enter mysqlnetconfig.sqlnet.uploadtable: RCA13_DOCS
Upload to MOS:
tfactl setupload -name mos_config -type https
Enter mos_config.https.user : mos portal user id
Enter mos_config.https.password : ********
Enter mos_config.https.url : https://transport.oracle.com/upload/issue
18. Which User to Run as
Run as root (recommended):
• ORAchk/EXAchk will su to the lower-privileged owners of the RDBMS or Grid homes
• To specify a user other than root for these situations, export the environment variables below
Run as RDBMS or Grid home owner:
• The user must be able to switch to root for root-level checks; several options:
1. Provide the root password at the prompts, or
2. Set up sudo, or
3. Pre-configure passwordless SSH connectivity, or
4. Allow ORAchk/EXAchk to configure private keys for remote nodes
Default users for connecting via SSH and running checks (change the user by exporting the corresponding environment variable):
• Exadata Storage Server: default user root; override with RAT_CELL_SSH_USER
• InfiniBand switches: default user root (when run as root) or nm2user (when run as another user); override with RAT_IBSWITCH_USER
Note: you may only choose from the provided lower-privileged accounts.
Notes:
• On SuperCluster you can use Role Based Access Control (RBAC) to execute root-privileged checks; no root user is required. root checks must be run as a user with a root-equivalent access role.
• On Exalogic it is only supported to run as root.
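The user mapping above is driven by environment variables, so overriding it is a matter of exporting them before the run. A sketch, assuming a placeholder account name `celladmin` for the storage servers:

```shell
#!/bin/sh
# Override the default SSH users from the table above before running exachk.
# 'celladmin' is a placeholder; use an account that exists on your cells.
export RAT_CELL_SSH_USER=celladmin
export RAT_IBSWITCH_USER=nm2user   # the documented non-root switch user
echo "cells will connect as: $RAT_CELL_SSH_USER"
echo "switches will connect as: $RAT_IBSWITCH_USER"
```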
19. Remote node connection without passwordless SSH
ORAchk/EXAchk will:
1. Prompt for the remote node password
2. Log in to the remote node and generate a private and public key pair on the remote node
3. Copy the contents of the public key into the .ssh/authorized_keys file of the remote node and delete the public key from the remote node
4. Copy the private key of the remote node onto the local node and use it as the identity file to make future connections
Alternatively, you can provide the private key file yourself. Run:
ssh-keygen -f $HOME/.ssh/id_dsa.host.user -N ''
E.g.:
ssh-keygen -f $HOME/.ssh/id_dsa.myhost67.root -N ''
This will generate the following key pair in the $HOME/.ssh/ directory:
• id_dsa.myhost67.root (private key / identity file)
• id_dsa.myhost67.root.pub (public key)
Confidential – Oracle Internal/Restricted/Highly Restricted
20. Email Notification
Subsequent emails compare results to the previous run:
• Easily see if something has changed
• The email attachment has:
  o Latest report
  o Previous report
  o Diff report
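Conceptually, the attached diff report is a line-level comparison of the two result sets. A toy illustration of that idea with plain diff (the check names and statuses are invented for the demonstration):

```shell
#!/bin/sh
# Toy illustration of what the attached diff report conveys: which
# check results changed between the previous and the latest run.
workdir=$(mktemp -d)
printf 'check_a PASS\ncheck_b FAIL\n' > "$workdir/previous.txt"
printf 'check_a PASS\ncheck_b PASS\n' > "$workdir/latest.txt"
# diff exits non-zero when files differ, so guard it for 'set -e' callers
changes=$(diff "$workdir/previous.txt" "$workdir/latest.txt" || true)
echo "$changes"
```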
23. Install as root
Installing as root provides the richest capabilities, with:
• Automated diagnostic collections
• Collections from remote hosts
• Collection of files that are not readable by the Oracle home owner, for example /var/log/messages, or certain Oracle Grid Infrastructure logs

[root@myserver1]# ./ahf_setup
AHF Installation Log : /tmp/ahf_install_26252_2019_10_31-08_27_59.log
Starting Autonomous Health Framework (AHF) Installation
AHF Version: 193000 Build Date: 201910181720
Default AHF Location : /opt/oracle.ahf
Do you want to change AHF Location (/opt/oracle.ahf) ? Y|[N] :
AHF Location : /opt/oracle.ahf
AHF Data Directory stores diagnostic collections and metadata.
AHF Data Directory requires at least 5GB (Recommended 10GB) of free space.
Choose Data Directory from below options :
1. /opt/oracle.ahf [Free Space : 2069 MB]
2. /u01/app [Free Space : 2290 MB]
3. Enter a different Location
Choose Option [1 - 3] : 1
AHF Data Directory : /opt/oracle.ahf/data
24. Install as root
Do you want to add AHF Notification Email IDs ? [Y]|N : y
Enter Email IDs separated by space : john.doe@acme.com
Extracting AHF to /opt/oracle.ahf
Configuring TFA Services
Discovering Nodes and Oracle Resources
Not generating certificates as GI discovered
Starting TFA Services
Created symlink from /etc/systemd/system/multi-user.target.wants/oracle-tfa.service to
/etc/systemd/system/oracle-tfa.service.
Created symlink from /etc/systemd/system/graphical.target.wants/oracle-tfa.service to
/etc/systemd/system/oracle-tfa.service.
25. Install as root
.-------------------------------------------------------------------------------------.
| Host | Status of TFA | PID | Port | Version | Build ID |
+------------------+---------------+-------+------+------------+----------------------+
| myserver1 | RUNNING | 27582 | 5000 | 19.3.0.0.0 | 19300020191018172057 |
'------------------+---------------+-------+------+------------+----------------------'
Running TFA Inventory...
Adding default users to TFA Access list...
.--------------------------------------------------------------.
| Summary of AHF Configuration |
+-----------------+--------------------------------------------+
| Parameter | Value |
+-----------------+--------------------------------------------+
| AHF Location | /opt/oracle.ahf |
| TFA Location | /opt/oracle.ahf/tfa |
| Orachk Location | /opt/oracle.ahf/orachk |
| Data Directory | /opt/oracle.ahf/data |
| Repository | /opt/oracle.ahf/data/repository |
| Diag Directory | /opt/oracle.ahf/data/myserver1/diag |
'-----------------+--------------------------------------------'
26. Install as root
AHF install completed on myserver1
AHF will also be installed/upgraded on these Cluster Nodes :
1. myserver2
The AHF Location and AHF Data Directory must exist on the above nodes
AHF Location : /opt/oracle.ahf
AHF Data Directory : /opt/oracle.ahf/data
Do you want to install/upgrade AHF on Cluster Nodes ? [Y]|N : y
Installing AHF on Remote Nodes :
AHF will be installed on myserver2, Please wait.
Installing AHF on myserver2 :
[myserver2] Copying AHF Installer
[myserver2] Running AHF Installer
AHF binaries (orachk/exachk, tfactl, etc.) are available in /opt/oracle.ahf/bin
AHF is successfully installed
Moving /tmp/ahf_install_26252_2019_10_31-08_27_59.log to /opt/oracle.ahf/data/myserver1/diag/ahf/
27. Install as non-root
If it is not possible to install as root, then you can install as a non-root user. This will not include:
• Automated diagnostic collections
• Collections from remote hosts
• Collection of files that are not readable by the Oracle home owner, for example /var/log/messages, or certain Oracle Grid Infrastructure logs
28. Install as non-root
[oracle@myserver1]$ mkdir -p $ORACLE_HOME/ahf
[oracle@myserver1]$ ./ahf_setup -ahf_loc $ORACLE_HOME/ahf
AHF Installation Log : /tmp/ahf_install_512_2019_10_31-09_22_14.log
Starting Autonomous Health Framework (AHF) Installation
AHF Version: 193000 Build Date: 201910181720
AHF Location : /u01/app/oracle/product/19.0.0/dbhome_1/ahf/oracle.ahf
AHF Data Directory stores diagnostic collections and metadata.
AHF Data Directory requires at least 5GB (Recommended 10GB) of free space.
Choose Data Directory from below options :
1. /u01/app/oracle/product/19.0.0/dbhome_1/ahf/oracle.ahf [Free Space : 76493 MB]
2. Enter a different Location
Choose Option [1 - 2] : 1
29. Install as non-root
AHF Data Directory : /u01/app/oracle/product/19.0.0/dbhome_1/ahf/oracle.ahf/data
Do you want to add AHF Notification Email IDs ? [Y]|N : john.doe@acme.com
Extracting AHF to /u01/app/oracle/product/19.0.0/dbhome_1/ahf/oracle.ahf
Configuring TFA in Standalone Mode...
Build Version : 193000 Build Date : 201910181720
Discovering Nodes and Oracle Resources
.---------------------------------------------------------------------------------------------------------.
| Summary of TFA Configuration |
+----------------+----------------------------------------------------------------------------------------+
| Parameter | Value |
+----------------+----------------------------------------------------------------------------------------+
| TFA Location | /u01/app/oracle/product/19.0.0/dbhome_1/ahf/oracle.ahf/tfa |
| Data Directory | /u01/app/oracle/product/19.0.0/dbhome_1/ahf/oracle.ahf/data/myserver1/tfa |
| Repository | /u01/app/oracle/product/19.0.0/dbhome_1/ahf/oracle.ahf/data/repository |
| Diag Directory | /u01/app/oracle/product/19.0.0/dbhome_1/ahf/oracle.ahf/data/myserver1/diag/tfa |
| Java Home | /u01/app/oracle/product/19.0.0/dbhome_1/ahf/oracle.ahf/jre |
'----------------+----------------------------------------------------------------------------------------'
30. Install as non-root
.---------------------------------------------------------------------------------------------------------.
| Host | Status of TFA | PID | Port | Version | Build ID | Inventory Status |
+------------------+---------------+-----+---------+------------+----------------------+------------------+
| myserver1 | RUNNING | - | OFFLINE | 19.3.0.0.0 | 19300020191018172057 | COMPLETED |
'------------------+---------------+-----+---------+------------+----------------------+------------------'
AHF is deployed at /u01/app/oracle/product/19.0.0/dbhome_1/ahf/oracle.ahf
AHF binaries are available in /u01/app/oracle/product/19.0.0/dbhome_1/ahf/oracle.ahf/bin
AHF is successfully installed
Moving /tmp/ahf_install_512_2019_10_31-09_22_14.log to
/u01/app/oracle/product/19.0.0/dbhome_1/ahf/oracle.ahf/data/myserver1/diag/ahf/
31. Set Daemon Options: When, What & Who to Tell
Set daemon options with:
-set "<option_1>=<option_1_value>;<option_2>=<option_2_value>;<option_n>=<option_n_value>"
AUTORUN_SCHEDULE
• Schedule when orachk will be run
• Fields: minute, hour, day of month, month of year & day of week
  o Minute (0 - 59): optional, defaults to 0 if unset
  o Hour (0 - 23)
  o Day of month (1 - 31)
  o Month (1 - 12)
  o Day of week (0 - 6, Sunday to Saturday)
• Comma-separate multiple values for the same field
• * acts as a wildcard
-set "AUTORUN_SCHEDULE=2 * * 1,3,5"
AUTORUN_FLAGS
• Command-line options to be passed through to the orachk run
-set "AUTORUN_FLAGS=-profile dba -tag dba"
NOTIFICATION_EMAIL
• Comma-separated list of emails to send daemon notifications to
-set "NOTIFICATION_EMAIL=some.person@acompany.com,another.person@acompany.com"
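The AUTORUN_SCHEDULE value is a cron-like field list. A sketch that splits a value into its fields and applies the default minute of 0 when only four fields are given (this parsing is illustrative; it is not how the daemon parses the value internally):

```shell
#!/bin/sh
# Split an AUTORUN_SCHEDULE value into its fields.
# With four fields the minute is omitted and defaults to 0.
sched="2 * * 1,3,5"
set -f            # stop '*' from glob-expanding
set -- $sched
if [ $# -eq 4 ]; then
  minute=0; hour=$1; dom=$2; month=$3; dow=$4
else
  minute=$1; hour=$2; dom=$3; month=$4; dow=$5
fi
set +f
echo "minute=$minute hour=$hour day-of-month=$dom month=$month day-of-week=$dow"
```

For the example above this reports hour 2, every day of every month, on Monday, Wednesday and Friday.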
32. Set Daemon Options: Maintenance
-set "<option_1>=<option_1_value>;<option_2>=<option_2_value>;<option_n>=<option_n_value>"
COLLECTION_RETENTION
• Number of days to keep files created by scheduled runs; files older than this will be deleted
-set "COLLECTION_RETENTION=30"
PASSWORD_CHECK_INTERVAL
• Frequency in hours of password validation
• When the password is found invalid, the daemon stops & notifies via log & email
-set "PASSWORD_CHECK_INTERVAL=48"
33. Health Check Catalog
• ORAchk_Health_Check_Catalog.html / EXAchk_Health_Check_Catalog.html
• Contains all published checks
• Filterable & searchable by:
  o Product Area / Engineered System
  o Profiles
  o Alert Level
  o Release Check Authored
  o Platforms
  o Privileged User
• Look up a check id without running a report
34. Database Checks
Checks run against all database nodes in the cluster by default.
• To specify only a subset of nodes: -clusternodes <node_1>,<node_2>
• Only the local node: -localonly
Automatically discovers all databases and prompts for which should be checked.
• Do not prompt but run all checks on all discovered databases: -dball
• Do not prompt and skip all database-related checks: -dbnone
• Only run checks against a subset of databases: -dbnames <db_1>,<db_2>
• Only run checks against a subset of PDBs: -pdbnames <pdb_1>,<pdb_2>
35. Integration with CVU
CVU checks are run by default for ORAchk and optionally for EXAchk.
CVU checks are only run when a CVU version of 11.2.0.4 or greater is found.
• To use a specific CVU home: -cvuhome <cvu_home>
• To only run CVU checks: -cvuonly
• To include CVU checks (only required for EXAchk): -includecvu
36. Output
EXAchk & ORAchk output the collection results to the directory they are run from.
• Unless run from $ORACLE_HOME/suptools/orachk, in which case output goes to $ORACLE_BASE/orachk
• Output can be directed to a different directory with: -output <OUTPUT_DIR>
• Output will be a directory and a zip of the same name
Output descriptions:
• log: various log files
• outfiles: collection results the checks are based on
• reports: subreports used to build the main report
• scripts: scripts used during collection
• upload: files for upload of the collection into a database or integration into other tools
• orachk_*.html / exachk_*.html: main HTML report output
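The subdirectory names above can be sketched as an empty collection skeleton (the collection directory name is a made-up example following the naming pattern described in slide 39):

```shell
#!/bin/sh
# Recreate the documented layout of a collection output directory.
# The directory name itself is a placeholder in the usual pattern.
out=$(mktemp -d)/exachk_myhost_db1_20191031_082759
mkdir -p "$out/log" "$out/outfiles" "$out/reports" "$out/scripts" "$out/upload"
ls "$out" | sort
```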
37. Temporary Working Directory
• Temporary files will be created during execution; the default location is $HOME
• The location can be changed by setting RAT_TMPDIR:
export RAT_TMPDIR=<TEMP_DIR>
• If using sudo access to root from a lower-privileged user id, the temporary directory must be reflected in the /etc/sudoers file:
<user> ALL=(root) NOPASSWD:<TEMPDIR>/root_[orachk|exachk].sh
oracle ALL=(root) NOPASSWD:/mylocation/root_exachk.sh
oracle ALL=(root) NOPASSWD:/tmp/root_orachk.sh
• Root-privilege checks run from root_orachk.sh or root_exachk.sh
• If you want the root script in a different directory to RAT_TMPDIR, use RAT_ROOT_SH_DIR:
export RAT_ROOT_SH_DIR=/mylocation
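Keeping the sudoers entry in sync with a customised RAT_TMPDIR is mechanical, so it can be scripted. A sketch that only assembles and prints the entry for review (it deliberately does not touch /etc/sudoers; installing the line via visudo is left to the administrator):

```shell
#!/bin/sh
# Assemble the sudoers entry matching a customised RAT_TMPDIR.
# Prints the line only; editing /etc/sudoers should go through visudo.
RAT_TMPDIR=/mylocation
run_user=oracle
tool=exachk          # or orachk
entry="$run_user ALL=(root) NOPASSWD:$RAT_TMPDIR/root_$tool.sh"
echo "$entry"
```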
38. Parallel Execution
• Database collections are executed in parallel
• The default number of slave processes is calculated automatically
• The default can be changed with: -dbparallel <# slave processes> or -dbparallelmax
• Parallel execution can be disabled altogether if required with: -dbserial
39. Tagging, Merging & Comparing Reports
Collections are typically of the format:
[orachk|exachk]_<dbserver>_<database>_<date>_<timestamp>.html
• Tag collections so the output name contains another word to help differentiate it: -tag <tag_name>
[orachk|exachk]_<dbserver>_<database>_<date>_<timestamp>_<tag_name>.html
• Merge multiple reports into one with -merge and a list of collection directories or zip files:
-merge <collection_1>,<collection_2>
• Compare collections with -diff:
-diff <collection_1>,<collection_2>
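The effect of -tag on the collection name follows directly from the naming pattern above. A sketch with placeholder host, database and timestamp values:

```shell
#!/bin/sh
# Show how -tag extends the collection naming pattern.
# All of the component values here are placeholders.
dbserver=mysrv21; database=db1; stamp=20191031_082759; tag=dba
untagged="exachk_${dbserver}_${database}_${stamp}.html"
tagged="exachk_${dbserver}_${database}_${stamp}_${tag}.html"
echo "$untagged"
echo "$tagged"
```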
40. Profiles
Profiles provide a logical grouping of checks which are about similar topics.
• Run only checks in a specific profile: -profile <profile>
• Run everything except checks in a specific profile: -excludeprofile <profile>
41. User defined profiles
• Create user-defined profiles by providing a comma-separated list of check ids:
-createprofile <profile_name> <check_ids>
• Once a user-defined profile has been created, it can be modified:
-modifyprofile <profile_name> <check_ids>
  o The list of check_ids can contain both new checks to be added and existing checks to be removed; ORAchk/EXAchk will add/remove as necessary
• Delete a user-defined profile:
-deleteprofile <profile_name>
42. Run or exclude individual checks
Granular control to execute or exclude a single check; ideal for testing new checks or troubleshooting.
• Run only specific check(s): -check <check_id_1>,<check_id_2>
• Exclude specific check(s): -excludecheck <check_id_1>,<check_id_2>
• Find the check id either from a report or from the Health Check Catalog
43. Only Run Checks that Previously Failed
1. Generate a health check report
2. Fix the issues identified
3. Generate another health check report verifying only the issues that failed before:
-failedchecks <previous_result>
44. Keep Track of Changes to the Attributes of Important Files
• Track changes to the attributes of important files with -fileattr
• Looks at all files & directories within Grid Infrastructure and Database homes by default
• The list of monitored directories and their contents can be configured to your specific requirements
• Use -fileattr start to take the first snapshot:
$ ./orachk -fileattr start
CRS stack is running and CRS_HOME is not set. Do you want to set CRS_HOME to /u01/app/11.2.0.4/grid?[y/n][y]
Checking ssh user equivalency settings on all nodes in cluster
Node mysrv22 is configured for ssh user equivalency for oradb user
Node mysrv23 is configured for ssh user equivalency for oradb user
List of directories(recursive) for checking file attributes:
/u01/app/oradb/product/11.2.0/dbhome_11203
/u01/app/oradb/product/11.2.0/dbhome_11204
orachk has taken snapshot of file attributes for above directories at:
/orahome/oradb/orachk/orachk_mysrv21_20170504_041214
45. Keep Track of Changes to the Attributes of Important Files
• Include other directories with -includedir <directories>, using a comma-separated list of directories:
./orachk -fileattr start -includedir "/home/oradb,/etc/oatab"
• Exclude the default discovered directories with -excludediscovery:
./orachk -fileattr start -includedir "/home/oradb,/etc/oatab" -excludediscovery
46. Keep Track of Changes to the Attributes of Important Files
• Compare current attributes against the first snapshot using -fileattr check:
./orachk -fileattr check
Note:
• Use the same arguments with check that you used with start
• Will proceed to perform standard health checks after attribute checking
• Results of the snapshot comparison will also be shown in the HTML report output

$ ./orachk -fileattr check -includedir "/root/myapp/config" -excludediscovery
CRS stack is running and CRS_HOME is not set. Do you want to set CRS_HOME to /u01/app/12.2.0/grid?[y/n][y]
Checking for prompts on myserver18 for oragrid user...
Checking ssh user equivalency settings on all nodes in cluster
Node myserver17 is configured for ssh user equivalency for root user
List of directories(recursive) for checking file attributes:
/root/myapp/config
Checking file attribute changes...
.
"/root/myapp/config/myappconfig.xml" is different:
Baseline : 0644 oracle root /root/myapp/config/myappconfig.xml
Current : 0644 root root /root/myapp/config/myappconfig.xml
...etc
47. Keep Track of Changes to the Attributes of Important Files
• To prevent standard health checking after attribute checking, add -fileattronly:
-fileattr check -fileattronly
• To use a different snapshot baseline, use -baseline:
-fileattr check -baseline <snapshot>
• To remove all snapshots, use:
-fileattr remove
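What -fileattr automates is a baseline-and-compare over file attributes. A toy sketch of that idea using ls (the config file and the permission change are invented for the demonstration; the real feature covers whole directory trees and many attributes):

```shell
#!/bin/sh
# Toy baseline/compare over file attributes: the idea behind -fileattr.
# The file and the permission change are invented for the demonstration.
dir=$(mktemp -d)
f="$dir/myappconfig.xml"
: > "$f"; chmod 0644 "$f"
baseline=$(ls -l "$f" | awk '{print $1}')   # permission string only
chmod 0600 "$f"                             # simulate attribute drift
current=$(ls -l "$f" | awk '{print $1}')
if [ "$baseline" != "$current" ]; then
  result="\"$f\" is different: baseline=$baseline current=$current"
else
  result="no change"
fi
echo "$result"
```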
48. Encrypted resulting zip file
ORAchk and EXAchk can encrypt the resulting collection zip file.
• To use encryption, add the -encryptzip option, e.g.:
-profile dba -encryptzip
• This will prompt for the password
• Once the zip file is encrypted, the original zip and directory will be deleted
• To decrypt a zip, use:
-decryptzip <zip_filename>
Note: the encrypt/decrypt feature is only supported on Linux and Solaris platforms.
49. REST Interface
ORAchk and EXAchk include full REST support, allowing invocation & query over HTTPS. Oracle REST Data Services (ORDS) is included within the install.
To enable REST:
1. Start ORDS: -ordssetup
2. Start the daemon, using the -ords option: -d start -ords
• Start a full health check run by accessing: https://<host>:7080/ords/tfaml/orachk/start_client
• Run specific profiles: https://<host>:7080/ords/tfaml/orachk/profile/<profile1>,<profile2>
• Run specific checks: https://<host>:7080/ords/tfaml/orachk/check/<check_id>,<check_id>
Any request will return a job id, which can then be used to query:
• Status: https://<host>:7080/ords/tfaml/orachk/status/<job_id>
• Download result: https://<host>:7080/ords/tfaml/orachk/download/<job_id>
Note: the standalone ORDS setup feature uses file-based user authentication and is provided solely for use in test and development environments. For production use, the included orachk.jar and ords.war should be deployed and configured.
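The endpoints share a common base URL, so client scripts typically build them from it. A sketch that only constructs the URL strings (host, port and profile names are placeholders; no request is made here):

```shell
#!/bin/sh
# Build the documented REST endpoint URLs from their common base.
# Host, port and profile names are placeholders; strings only.
host=myserver1; port=7080
base="https://$host:$port/ords/tfaml/orachk"
start_url="$base/start_client"
profile_url="$base/profile/dba,sysadmin"
status_url="$base/status/<job_id>"     # substitute the returned job id
echo "$start_url"
echo "$profile_url"
```

A client would then GET the start URL, capture the job id from the response, and poll the status URL with it.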
55. Recent Collections
The Recent Collections view shows, for each collection, the health score, warning count, fail count, info count, pass count and ignore count, plus a flag indicating regression status:
• No difference, or no regression failed in the current collection
• At least one regression from non-WARNING to WARNING, or a WARNING regression found in the current collection
• At least one regression from non-FAIL to FAIL, or a FAIL regression found in the current collection
• Non-clickable green flag: preceding collection not found
58. User Defined Checks
• Use as a health checking platform: you write your own business-specific user defined checks
• The Collection Manager authoring UI is very similar to Oracle's internal authoring tool
• OS or SQL logic
• Generates user_defined_checks.xml; a sample is in the install directory
• Utilizes framework features such as result output, email notification, CM storage, etc.
59. User Defined Checks
• Have their own profile: user_defined_checks
• Can be run on their own: -profile user_defined_checks
• Can be excluded: -excludeprofile user_defined_checks
• Have their own section of the report
• Can have customized check names, pass and fail messages:
<existing_check_code>
echo "CUSTOM_CHECK_NAME=<customized_check_name>" >> CUSTOMIZE_CHECK_PARAMS
echo "CUSTOM_PASS_MSG=<customized_pass_message>" >> CUSTOMIZE_CHECK_PARAMS
echo "CUSTOM_FAIL_MSG=<customized_fail_message>" >> CUSTOMIZE_CHECK_PARAMS
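A runnable version of those three append lines, with invented placeholder values for the check name and messages, might look like this:

```shell
#!/bin/sh
# Write customised name/pass/fail messages for a user defined check.
# The check name and messages are invented placeholders.
cd "$(mktemp -d)"
echo "CUSTOM_CHECK_NAME=verify_app_backup" >> CUSTOMIZE_CHECK_PARAMS
echo "CUSTOM_PASS_MSG=Application backup configured" >> CUSTOMIZE_CHECK_PARAMS
echo "CUSTOM_FAIL_MSG=Application backup missing" >> CUSTOMIZE_CHECK_PARAMS
cat CUSTOMIZE_CHECK_PARAMS
```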
60. Setup
First-time installation is done via the APEX workspace (either APEX 4.2 or 5.x):
1. Use the SQL script applicable for your APEX version:
• APEX 4.2: CollectionManager_App.sql
• APEX 5.x: Apex5_CollectionManager_App.sql
2. Follow the Health Check Collection Manager installation steps in the User Guide
3. Log in to the Collection Manager application via a URL like one of the following:
http://hostname:port/apex/f?p=ApplicationID
http://hostname:port/pls/apex/f?p=ApplicationID
Collection Manager upgrades are done from orachk / exachk with -cmupgrade:
• Determines the APEX version you have and installs the latest applicable Collection Manager app
• If the Collection Manager schema changes in the future, then ORAchk will prompt for auto-upgrade
61. Collection Storage Table
• Collection zip files are stored in the RCA13_DOCS table, already created during Collection Manager installation
• Provide ORAchk details of where to upload collection results with -setdbupload all and complete the prompts:
-setdbupload all
• Get current values with: -getdbupload
• Unset values with -unsetdbupload <parameter>, e.g.:
-unsetdbupload RAT_UPLOAD_PASSWORD
62. Store DB Upload Variables in Wallet
• Set all with: -setdbupload all
• Set specific variables by specifying a comma-separated list:
-setdbupload RAT_UPLOAD_CONNECT_STRING,RAT_UPLOAD_PASSWORD
• Unset all with: -unsetdbupload all
• Check if variables are set correctly: -checkdbupload
Other upload parameters (not set by default):
• RAT_UPLOAD_USER: the user to connect as (default is ORACHKCM)
• RAT_UPLOAD_TABLE: the table name to store non-zipped collection results
• RAT_PATCH_UPLOAD_TABLE: the table name to store non-zipped patch results
• RAT_UPLOAD_ORACLE_HOME: the ORACLE_HOME used when establishing the connection and uploading (uses the GI home discovered by ORAchk by default)
RAT_UPLOAD_TABLE & RAT_PATCH_UPLOAD_TABLE are only needed if you are using your own custom application to view collection results, rather than Collection Manager.
63. Enterprise Manager Integration
• Check results are integrated into the EM compliance framework via a plugin
• View results in native EM compliance dashboards
• Related checks are grouped into compliance standards
• View targets checked, violations & average score
• Drill down into a compliance standard to see individual check results
• View the breakdown by target
64. Setting Up Enterprise Manager Plugin
The plugin is already installed by default with Enterprise Manager 13.1+.
1. Deploy the plugin using the Enterprise Manager Plugin Deployment feature
2. Provision the plugin to set up the daemon
65. Provision
• Use the Enterprise Manager provisioning feature and select ORAchk/EXAchk
• Once selected, this will launch the provisioning wizard; choose the system type
66. Provision
• Provide new or select existing credentials
• Specify the install location
• Select when the daemon should be run
68. View Results by Compliance Standard
• Filter by "Exachk%"
• Drill into the applicable standard and view individual checks & target status
• Click individual checks for recommendation details
69. JSON Output to Integrate with Kibana, Elastic Search etc
• Kibana can be used to view health check compliance across your data center
• The JSON provides many tags to allow dashboard filtering based on facts such as:
  o Engineered System type
  o Engineered System version
  o Hardware type
  o Node name
  o OS version
  o Rack identifier
  o Rack type
  o Database version
  o And more...
• Results can also be filtered based on any combination of exposed system attributes
70. JSON Result Output
Results are also output in JSON format in the upload directory of the collection.
Writing JSON results with syslog:
1. JSON output results can be sent to the syslogd daemon with the -syslog option, e.g.:
-set "AUTORUN_FLAGS=-syslog"
2. The message levels used are "crit", "err", "warn" and "info"
3. You can verify the syslog configuration by writing a test message at each level
4. Then verify in your configured message location (e.g. /var/adm/messages) that each test message was written
72. Configure Details for Upload of Collection Results
If you don't use Collection Manager and have your own application which consumes the results:
1. Create the tables: auditcheck_result, auditcheck_patch_result & RCA13_DOCS
2. Set the default parameters with -setdbupload all
• This will prompt you for and set RAT_UPLOAD_CONNECT_STRING & RAT_UPLOAD_PASSWORD
3. Set the optional parameters:
-setdbupload RAT_UPLOAD_TABLE,RAT_PATCH_UPLOAD_TABLE
Other upload parameters (not set by default):
• RAT_UPLOAD_USER: the user to connect as (default is ORACHKCM)
• RAT_UPLOAD_TABLE: the table name to store non-zipped collection results
• RAT_PATCH_UPLOAD_TABLE: the table name to store non-zipped patch results
• RAT_UPLOAD_ORACLE_HOME: the ORACLE_HOME used when establishing the connection and uploading (uses the GI home discovered by ORAchk by default)
RAT_UPLOAD_TABLE & RAT_PATCH_UPLOAD_TABLE are only needed if you are using your own custom application to view collection results, rather than Collection Manager.
80. Understand what the repair command does
Understand what the repair command will do with:
tfactl orachk -showrepair 8300E0A2FFE48253E053D298EB0A76CC
TFA using ORAchk : /opt/oracle.ahf/orachk/orachk
Repair Command:
currentUserName=$(whoami)
if [ "$currentUserName" = "root" ]
then
  repair_report=$(rpm -e stix-fonts 2>&1)
else
  repair_report="$currentUserName does not have privileges to run $CRS_HOME/bin/crsctl set resource use 1"
fi
echo -e "$repair_report"
81. Run the repair command
• Run the checks again and repair everything that fails:
tfactl orachk -repaircheck all
• Run the checks again and repair only the specified checks:
tfactl orachk -repaircheck <check_id_1>,<check_id_2>
• Run the checks again and repair all checks listed in a file:
tfactl orachk -repaircheck <file>