Compare commits


83 Commits

Author SHA1 Message Date
d8752eb721 feat(hanatool): Add path discovery for hdbsql 2025-11-20 13:46:03 +01:00
668f56e56c feat(keymanager): Add alternative path detection for the tools needed to manage users 2025-11-20 13:25:40 +01:00
9acf30da12 func(b1conf): Update config generator to include AD integration 2025-10-16 14:12:34 +02:00
46673d88d2 feat(hanatool): Add NO DEPENDENCIES to export function 2025-10-15 15:03:11 +02:00
557cb807dd fix(backup): Fix an issue where backup script would try to create an obsolete dir for the systemdb backup 2025-10-09 10:15:34 +02:00
4274f4d01d chore: Add signature to scripts 2025-10-08 21:45:11 +02:00
66da934be2 fix(backup): Fixed obsolete config option. 2025-10-08 21:18:47 +02:00
3355727a9b refactor(backup): Rewrite backup tool to use hanatool instead of its own implementation. 2025-10-08 21:14:02 +02:00
6a94e19a94 fix(hanatool): Improve backup notifications and bump version 2025-10-08 20:09:38 +02:00
c35f2528cf fix(keymanager): Fixed SYSTEMDB user creation 2025-10-08 20:03:40 +02:00
8a5f76bbe4 feat(hanatool): Include HANA tenant name in backup file names 2025-10-08 19:56:24 +02:00
d428def0a2 feat(hanatool): Improve compressed tenant backup logic
Refactored the compressed tenant backup process to use a temporary directory within the specified TARGET_PATH, aligning its behavior with schema exports. This change avoids the use of the /tmp directory, which can be too small for large backups.

- Create a temporary directory in TARGET_PATH for the backup.
- Perform the tenant backup into this temporary directory.
- Compress the backup files to a .tar.gz archive in TARGET_PATH.
- Clean up the temporary backup directory after compression.
- Bumped hanatool.sh version to 1.5.2.
- Updated hanatool version in packages.conf to 1.5.2.
2025-10-08 19:26:39 +02:00
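The temp-dir-in-`TARGET_PATH` flow from this commit can be sketched roughly as follows. `TARGET_PATH`, the tenant name, and the stand-in backup step are illustrative assumptions, not hanatool internals (the real tool would run an `hdbsql` `BACKUP DATA` statement in step 2).

```shell
set -eu
TARGET_PATH="./demo_target"   # assumed; hanatool takes this from its config/CLI
TENANT="NDB"                  # illustrative tenant name
STAMP=$(date +%Y%m%d_%H%M%S)

mkdir -p "$TARGET_PATH"
# 1. Create the temporary directory inside TARGET_PATH (not /tmp, which may be too small).
TMP_DIR=$(mktemp -d "${TARGET_PATH}/${TENANT}_backup.XXXXXX")

# 2. Stand-in for the real tenant backup files.
echo "backup payload" > "${TMP_DIR}/COMPLETE_DATA_BACKUP_0"

# 3. Compress the backup files into a .tar.gz archive in TARGET_PATH.
ARCHIVE="${TARGET_PATH}/${TENANT}_${STAMP}.tar.gz"
tar -czf "$ARCHIVE" -C "$TMP_DIR" .

# 4. Clean up the temporary backup directory.
rm -rf "$TMP_DIR"
echo "created ${ARCHIVE}"
```

Keeping the temp directory on the same filesystem as the final archive also makes the cleanup step safe to re-run.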
2fe4ba0fd2 feat(b1.gen): Add B1 config generator 2025-10-08 17:55:29 +02:00
b801c2c002 feat(backup): Introduce backup hook script and adjust configuration 2025-10-06 11:22:59 +02:00
80fd12f0f9 fix(monitor): Remove incorrect folder from path 2025-10-06 11:21:53 +02:00
f597ae09aa feat(monitor): Implement conditional configuration in monitor.hook.sh 2025-10-06 10:55:10 +02:00
bd35ddbab6 fix(monitor): Resolve sed delimiter issue in monitor.hook.sh
Updated the sed commands in monitor.hook.sh to use # as a delimiter instead of / when updating SAPCONTROL_PATH and HDBSQL_PATH. This resolves "unknown option to s" errors caused by path slashes conflicting with the sed syntax.
2025-10-06 10:13:57 +02:00
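The failure mode is easy to reproduce: a replacement value containing `/` breaks `s/…/…/`, while any other delimiter (here `#`) lets the path pass through untouched. The placeholder file below is a simplified stand-in for `monitor.conf`.

```shell
HDBSQL_PATH="/usr/sap/NDB/HDB00/exe/hdbsql"
printf 'HDBSQL_PATH="__HDBSQL_PATH__"\n' > monitor.conf.demo

# With '/' as the delimiter, every slash in the path would need escaping
# (or sed fails with "unknown option to `s'"); '#' avoids the conflict.
sed -i "s#__HDBSQL_PATH__#${HDBSQL_PATH}#" monitor.conf.demo
cat monitor.conf.demo
```

Any character not present in the pattern or replacement works as a delimiter; `#` and `|` are common choices for file paths.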
1bd67d3613 chore(monitor): Update monitor.conf paths to placeholders 2025-10-06 10:09:37 +02:00
1c254115c4 fix(monitor): Adjust monitor.hook.sh to correctly update monitor.conf 2025-10-06 10:07:32 +02:00
b0553c5826 refactor(install): Use curl for install script execution and revert monitor hook URL 2025-10-06 10:04:45 +02:00
56e781996a fix(packages): Simplify monitor hook URL in packages.conf 2025-10-06 09:59:50 +02:00
4e98731bd1 fix(packages): Correct quoting for monitor hook curl execution 2025-10-06 09:54:59 +02:00
a2579ab3d5 feat(packages): Update monitor hook to use direct curl execution 2025-10-06 09:53:19 +02:00
b983b9e953 feat(monitor): Introduce monitor.hook.sh and update version to 1.3.1 2025-10-06 09:49:43 +02:00
1c4c7ebcc6 feat(install): Add optional install script execution for packages
This commit introduces the ability for packages to define an optional install script
within the `packages.conf` file. The `install.sh` script will now parse this
new field and execute the provided script after all associated package files
have been downloaded.

The `packages.conf` documentation has been updated to reflect the new format,
including the optional fifth field for the install script.
2025-10-06 09:41:12 +02:00
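A rough sketch of how such an optional trailing field might be parsed. The pipe-separated line format and field order here are assumptions for illustration only, not the actual `packages.conf` layout.

```shell
# Hypothetical packages.conf entry: key|name|description|version|install_script
line='monitor|Monitor Suite|HANA monitoring scripts|1.3.1|monitor.hook.sh'

# cut yields an empty string when the fifth field is absent,
# so the same code path handles packages without an install script.
install_script=$(printf '%s' "$line" | cut -d'|' -f5)
if [ -n "$install_script" ]; then
    echo "would run install script: $install_script"
else
    echo "no install script defined"
fi
```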
52bc1ed352 fix(hanatool): Initialize import_options and bump version to 1.5.1 2025-10-03 16:08:04 +02:00
ec0c686a3c chore(packages): Update Monitor Suite version to 1.3.0
This commit updates the version of the Monitor Suite in packages.conf to 1.3.0, aligning with the recent feature additions and refactoring in the monitoring scripts.
2025-10-01 13:11:34 +02:00
bb0531aeea feat(monitor): Add HANA statement queue monitoring
This commit introduces a new feature to monitor the HANA statement queue.

Added STATEMENT_QUEUE_THRESHOLD and STATEMENT_QUEUE_CONSECUTIVE_RUNS to monitor/monitor.conf.
Implemented logic in monitor/monitor.sh to query the statement queue length, track consecutive breaches of the defined threshold, and send notifications.
Updated the script version to 1.3.0.
Refactored log segment checks to only run when segments are found.
2025-10-01 13:10:57 +02:00
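The consecutive-breach logic described above can be sketched like this. The two threshold names come from the commit message; the queue-length query itself is stubbed out with a constant, and the state-file path is an assumption.

```shell
STATEMENT_QUEUE_THRESHOLD=10        # config names per the commit message
STATEMENT_QUEUE_CONSECUTIVE_RUNS=3
STATE_FILE="./queue_breach_count"
rm -f "$STATE_FILE"                 # start clean for this demo

queue_len=15   # in monitor.sh this would come from an hdbsql query

# Increment the breach counter while the threshold is exceeded; reset otherwise.
count=$(cat "$STATE_FILE" 2>/dev/null || echo 0)
if [ "$queue_len" -gt "$STATEMENT_QUEUE_THRESHOLD" ]; then
    count=$((count + 1))
else
    count=0
fi
echo "$count" > "$STATE_FILE"

# Alert only after N consecutive breaches, filtering out momentary spikes.
if [ "$count" -ge "$STATEMENT_QUEUE_CONSECUTIVE_RUNS" ]; then
    echo "ALERT: queue above threshold for ${count} consecutive runs"
fi
```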
92a2b963c4 refactor(readme): Improved README, make it way more readable 2025-09-26 15:19:31 +02:00
a8fc2c07e8 refactor(readme): Improve README.md structure, update tool descriptions, and remove non-HANA related scripts. 2025-09-26 15:05:41 +02:00
6b2132a7ab feat(monitor): Bump version to 1.2.3 and refactor notification logic
Updated monitor/monitor.sh to version 1.2.3.
Removed the redundant else block in the send_notification_if_changed() function in monitor/monitor.sh as it provided no functional change.
Updated the "Monitor Suite" version in packages.conf to 1.2.3 to reflect the script update.
2025-09-25 18:44:59 +02:00
2549ccf250 feat: Remove verbose "state unchanged" messages and bump version to 1.2.2 2025-09-25 18:42:43 +02:00
e083c5b749 fix(monitor): Remove 'local' from global variables and bump version to 1.2.1
Removed incorrect 'local' keyword from global variable declarations in monitor/monitor.sh as it's only valid within functions. Updated version to 1.2.1 in monitor/monitor.sh and packages.conf.
2025-09-25 18:38:41 +02:00
eeb5b2eb7b feat(monitor): Implement state-based notifications to prevent alert spam
Introduces state management to 'monitor.sh' to send notifications only when a monitored status changes (e.g., from healthy to alert, or alert to resolved). This prevents repetitive alerts for persistent issues. Creates a 'monitor_state' directory for tracking. Updates script version to 1.2.0.
2025-09-25 18:34:40 +02:00
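A minimal sketch of the notify-on-change pattern: one state file per check under `monitor_state`, and a notification only when the status differs from the stored one. The function name matches the one mentioned in these commits, but its signature and the state-file layout are assumptions.

```shell
STATE_DIR="./monitor_state"
mkdir -p "$STATE_DIR"
rm -f "${STATE_DIR}/disk"           # start clean for this demo

# Notify only when the stored status for a check differs from the new one.
send_notification_if_changed() {
    check="$1"; status="$2"
    state_file="${STATE_DIR}/${check}"
    prev=$(cat "$state_file" 2>/dev/null || echo "")
    if [ "$status" != "$prev" ]; then
        echo "NOTIFY: ${check}: ${prev:-none} -> ${status}"
        echo "$status" > "$state_file"
    fi
}

send_notification_if_changed disk ALERT   # first breach: notifies
send_notification_if_changed disk ALERT   # still breached: stays silent
send_notification_if_changed disk OK      # resolved: notifies again
```

This is what turns a cron job that fires every few minutes from an alert-spam source into something you can actually subscribe to.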
a6150467e5 fix(monitor): Correct hdbsql command in backup monitoring
Fixes an issue in 'monitor.sh' where the HANA backup monitoring used incorrect variable names ('HANA_USERKEY' instead of 'HANA_USER_KEY') and did not explicitly use the configured 'HDBSQL_PATH' for the 'hdbsql' command. Updates script version to 1.1.1.
2025-09-25 18:33:58 +02:00
2424d55426 feat(monitor): Add HANA data backup age monitoring
Introduces a new monitoring check in 'monitor.sh' to verify the age of the last successful SAP HANA data backup. Alerts are sent if the backup age exceeds a configurable 'BACKUP_THRESHOLD_HOURS'. Updates script version to 1.1.0.
2025-09-25 18:33:49 +02:00
408f2396da refactor: Rename cleaner.sh and keymanager.sh scripts
Renames 'clean.sh' to 'cleaner.sh' and 'hdb_keymanager.sh' to 'keymanager.sh' for consistency. Updates corresponding file paths in 'packages.conf'.
2025-09-25 18:33:37 +02:00
a16b8aa42b feat(installer): Rework install.sh for non-interactive mode and improved UX
Performs a major refactoring of 'install.sh' to introduce non-interactive installation via command-line arguments (e.g., '--overwrite-config'). Enhances the interactive menu with detailed package information (display name, description, version, update status) and improves config file handling with diff previews. Updates 'packages.conf' format to support new package metadata and uses short, lowercase keys.
2025-09-25 18:33:24 +02:00
d9760b9072 docs(readme): Update Aurora script usage to reflect automation
Updates 'README.md' to reflect the new, automated execution model of 'aurora.sh', removing references to manual 'new', 'complete', and 'info' arguments. The script is now configured via 'aurora.conf' and typically run via cron.
2025-09-25 18:33:09 +02:00
229683dfa5 feat(aurora): Dynamically fetch and update company name in refreshed schema
Enhances 'aurora.sh' to fetch the original company name from the source schema and use it to construct a descriptive company name for the new Aurora schema (e.g., "AURORA - [Original Name] - [Date]"). Updates script version to 2.1.0.
2025-09-25 18:31:51 +02:00
2d5d2dfa9c refactor(aurora): Complete rewrite of Aurora refresh script to v2.0.0
Performs a major refactoring of 'aurora.sh' and 'aurora.conf'. The script is rewritten for improved clarity, error handling, and a streamlined, single-purpose execution flow for automated HANA schema refreshes. Configuration variables are renamed for better understanding. Updates script version to 2.0.0.
2025-09-25 18:31:35 +02:00
61e44106e5 feat(hanatool): Add --replace flag for schema imports
Introduces a new '--replace' option to 'hanatool.sh' to allow replacing existing objects during schema import operations. Updates script version to 1.5.0.
2025-09-25 18:30:45 +02:00
62d5df4c65 update version aurroa 2025-09-24 21:07:57 +02:00
24da8eb6e8 fix aurora, ignore existing 2025-09-24 21:07:40 +02:00
03beb02956 update aurora messages 2025-09-24 20:59:35 +02:00
293281f732 update aurora, add replace to query 2025-09-24 20:55:42 +02:00
c7c2f30f0d fix aurora quotes 2025-09-24 20:36:59 +02:00
23eded7de3 fix2 hanatool 2025-09-24 20:26:42 +02:00
fc84cb0750 fix hanatool 2025-09-24 20:25:55 +02:00
0ca4c703fa hana tmpp2 2025-09-24 20:22:45 +02:00
20ca109b50 fix hanatool 2025-09-24 20:22:28 +02:00
3ca3e0cd86 hanatool tmp 2025-09-24 20:21:33 +02:00
681b44b8f7 fix hanatool quote 2025-09-24 20:20:14 +02:00
52b63645ac fix hanatool, quotes were fucked 2025-09-24 20:19:14 +02:00
b265af02b2 update hanatool, escape sql queries 2025-09-24 20:16:20 +02:00
4a49ef92e2 update aurora, escape sql queries 2025-09-24 20:13:55 +02:00
7ba2f3565e fix hanatool, local at wrong spot 2025-09-24 20:02:49 +02:00
69ccad02e2 fix aurora, version 2025-09-24 19:58:38 +02:00
0dc18265ad update aurora, add disk cleanup 2025-09-24 19:57:01 +02:00
b018908f64 update aurora, make it more compact 2025-09-24 19:48:10 +02:00
f0a9d2d75a update readme 2025-09-24 18:49:26 +02:00
bb4b4ab5d5 update monitor, echo errors to stderr 2025-09-24 18:04:14 +02:00
c800c20f1b update monitor, sapcontrol as configurable path 2025-09-24 18:01:59 +02:00
db354c6441 update installer, select multiple packages at the same time 2025-09-24 17:53:55 +02:00
01c1c6e2f6 update monitor, hide curl output 2025-09-24 17:31:02 +02:00
817fc83763 fix monitor 2025-09-24 17:14:48 +02:00
781c4654e5 fix monitor 2025-09-24 17:06:33 +02:00
32eb49f890 add monitoring 2025-09-24 15:18:26 +02:00
c691a87d7d fix hanatool 2025-09-22 23:28:48 +02:00
57ad14302b fix hanatool 2025-09-22 14:56:35 +02:00
c42fbf482c update hanatool, add tenant backup 2025-09-22 14:55:24 +02:00
b81915190b update hanatool, add notification and dry-run 2025-09-22 14:36:08 +02:00
95e86f3e60 update backup, add systemdb backup 2025-09-22 14:20:07 +02:00
aa7dfd7fe0 update hanatool fix 2025-09-22 12:23:45 +02:00
b01f17c59a update hanatool 2025-09-22 12:21:51 +02:00
85004b817d add hanatool 2025-09-22 12:17:50 +02:00
177cce7326 fix installer 2025-09-10 12:15:37 +02:00
7af6a851a0 fix installer menu 2025-09-10 12:13:59 +02:00
30ae23d75a add versioning 2025-09-10 12:11:33 +02:00
54d8dd0dff fix install.sh 2025-09-10 12:01:45 +02:00
66b516ad2d rename packages 2025-09-10 11:59:14 +02:00
1f5d919a9c Merge branch 'main' of git.technopunk.space:tomi/Scripts
retarded stuff
2025-09-10 11:57:32 +02:00
33ee5f56af update installer 2025-09-10 11:57:06 +02:00
15 changed files with 1651 additions and 372 deletions

File: README.md

````diff
@@ -1,17 +1,82 @@
-# SAP HANA cron tools
+# 🚀 SAP HANA Automation Scripts
-Run the installer:
+A collection of powerful Bash scripts designed to automate and simplify SAP HANA administration, monitoring, and management tasks.
+
+## ✨ Key Features
+
+* **Automate Everything**: Schedule routine backups, file cleanups, and schema refreshes.
+* **Monitor Proactively**: Keep an eye on system health, disk space, and backup status with automated alerts.
+* **Simplify Management**: Use powerful command-line tools and interactive menus for common tasks.
+* **Secure**: Integrates with SAP's secure user store (`hdbuserstore`) for credential management.
+* **Get Notified**: Receive completion and failure alerts via `ntfy.sh`.
+
+## ⚙️ Quick Install
+
+Get started in seconds. The interactive installer will guide you through selecting the tools you need.
 ```sh
 bash -c "$(curl -sSL https://install.technopunk.space)"
 ```
-## Tools
+## 🛠️ Tools Overview
-### Aurora generator script
+The following scripts and suites are included. Suites are configured via a `.conf` file in their respective directories.
-Configure the `aurora.conf`, then run the script with `./arurora.sh`.
+| Tool | Purpose & Core Function |
+| :------------- | :------------------------------------------------------------------- |
+| **`cleaner`** 🧹 | **File Cleaner**: Deletes files older than a specified retention period. Ideal for managing logs and temporary files. |
+| **`hanatool`** 🗄️ | **HANA Management**: A powerful CLI tool to export/import schemas, perform full tenant backups, and compress artifacts. |
+| **`keymanager`** 🔑 | **Key Manager**: An interactive menu to easily create, delete, and test `hdbuserstore` keys with an automatic rollback safety feature. |
+| **`aurora`** 🌅 | **Schema Refresh Suite**: Automates refreshing a non-production schema from a production source. |
+| **`backup`** 💾 | **Backup Suite**: A complete, cron-friendly solution for scheduling schema exports and/or full tenant backups with configurable compression. |
+| **`monitor`** 📊 | **Monitoring Suite**: Continuously checks HANA process status, disk usage, log segments, and backup age, sending alerts when thresholds are breached. |
-### Backup script
+## 📖 Tool Details
-Configure the `backup.conf`, then run the script with `./backup.sh`.
+### 1\. `cleaner.sh` (File Cleaner) 🧹
+
+* **Purpose**: Deletes files older than a specified retention period from given directories to help manage disk space.
+
+### 2\. `hanatool.sh` (SAP HANA Schema & Tenant Management) 🗄️
+
+* **Purpose**: A versatile command-line utility for SAP HANA, enabling quick exports and imports of schemas, as well as full tenant backups.
+* **Features**:
+  * Export/Import schemas (with optional renaming).
+  * Perform full tenant backups.
+  * Dry-run mode to preview commands.
+  * `ntfy.sh` notifications for task completion/failure.
+* **Options**: `-t, --threads N`, `-c, --compress`, `-n, --dry-run`, `--ntfy <token>`, `--replace`, `--hdbsql <path>`, `-h, --help`
+
+### 3\. `keymanager.sh` (Secure User Store Key Manager) 🔑
+
+* **Purpose**: An interactive script to simplify the creation, deletion, and testing of SAP HANA `hdbuserstore` keys.
+* **Features**:
+  * Interactive menu for easy key management.
+  * Connection testing for existing keys.
+  * Automatic rollback of a newly created key if its connection test fails.
+
+### 4\. `aurora.sh` (HANA Aurora Refresh Suite) 🌅
+
+* **Purpose**: Automates the refresh of a "copy" schema from a production source, ensuring non-production environments stay up-to-date.
+* **Process**:
+  1. Drops the existing target schema (optional).
+  2. Exports the source schema from production.
+  3. Imports and renames the data to the target schema.
+  4. Runs post-import configurations and grants privileges.
+
+### 5\. `backup.sh` (SAP HANA Automated Backup Suite) 💾
+
+* **Purpose**: Provides automated, scheduled backups for SAP HANA databases.
+* **Features**:
+  * Supports schema exports, full tenant data backups, or both.
+  * Configurable compression to save disk space.
+  * Uses secure `hdbuserstore` keys for connections.
+
+### 6\. `monitor.sh` (SAP HANA Monitoring Suite) 📊
+
+* **Purpose**: Continuously monitors critical aspects of SAP HANA and sends proactive alerts via `ntfy.sh` when predefined thresholds are exceeded.
+* **Checks Performed**:
+  * Verifies all HANA processes have a 'GREEN' status.
+  * Monitors disk usage against a set threshold.
+  * Analyzes log segment state.
+  * Checks the age of the last successful data backup.
````

File: aurora.conf

```diff
@@ -1,31 +1,40 @@
-# Configuration for the HANA Aurora Refresh Script
+# Configuration for the Aurora Refresh Script (aurora_refresh.sh)
-# Place this file in the same directory as the aurora.sh script.
+# Place this file in the same directory as the script.
+# Author: Tomi Eckert
 
 # --- Main Settings ---
 
 # The source production schema to be copied.
-SCHEMA="SBO_DEMO"
+# Example: "SBO_COMPANY_PROD"
+SOURCE_SCHEMA="SBODEMOHU"
 
-# The user who will be granted privileges on the new Aurora schema.
-AURORA_SCHEMA_USER="B1_53424F5F4348494D5045585F4155524F5241_RW"
+# The HANA user that will be granted read/write access to the new Aurora schema.
+# This is typically a technical user for the application.
+# Example: "B1_..._RW"
+AURORA_USER="B1_XXXXXXXXX_RW"
 
-# The database user for performing backup and administrative tasks.
-BACKOP_USER="CRONKEY"
+# The secure user store key for the HANA database user with privileges to
+# perform EXPORT, IMPORT, DROP SCHEMA, and GRANT commands (e.g., SYSTEM).
+# Using a key (hdbuserstore) is more secure than hardcoding a password.
+# Example: "CRONKEY"
+DB_ADMIN_KEY="CRONKEY"
 
-# --- Paths and Files ---
+# --- Paths ---
 
-# The base directory for storing the temporary schema export.
-BACKUP_DIR="/hana/shared/backup/schema"
+# The base directory where the temporary schema export folder will be created.
+# Ensure the <sid>adm user has write permissions here.
+BACKUP_BASE_DIR="/hana/shared/backup/schema"
 
 # The full path to the HANA hdbsql executable.
 HDBSQL="/usr/sap/NDB/HDB00/exe/hdbsql"
 
+# The root directory where post-import SQL scripts are located.
+SQL_SCRIPTS_ROOT="/usr/sap/NDB/home/tools/sql"
+
-# --- Post-Import Scripts ---
+# --- Post-Import Scripts (Optional) ---
 
-# The root directory where the SQL script and its associated files are located.
-SQL_ROOT="/usr/sap/NDB/home/tools"
-
-# A space-separated list of SQL script files to run after the import is complete.
-# These scripts should be located in the SCRIPT_ROOT directory.
-POST_SQL=""
+# A space-separated list of SQL script filenames to run after the import is complete.
+# The script will look for these files inside the SQL_SCRIPTS_ROOT directory.
+# Leave empty ("") if no scripts are needed.
+# Example: "update_user_emails.sql cleanup_tables.sql"
+POST_IMPORT_SQL=""
```

File: aurora.sh

```diff
@@ -1,123 +1,120 @@
 #!/bin/sh
+# Version: 2.1.0
+# Author: Tomi Eckert
+#
+# Purpose: Performs an automated refresh of a SAP HANA schema. It exports a
+#          production schema and re-imports it under a new name ("Aurora")
+#          to create an up-to-date, non-production environment for testing.
+#          Designed to be run via cron, typically in the early morning.
+#
+# -----------------------------------------------------------------------------
 
-# Exit immediately if a command exits with a non-zero status.
-set -e
+# --- Basic Setup ---
+# Exit immediately if any command fails or if an unset variable is used.
+set -eu
 
-# === SETUP ===
-# Determine script's directory and source the configuration file.
+# --- Configuration ---
+# Load the configuration file located in the same directory as the script.
 SCRIPT_DIR=$(dirname "$0")
 CONFIG_FILE="${SCRIPT_DIR}/aurora.conf"
 
 if [ ! -f "$CONFIG_FILE" ]; then
-    echo "Error: Configuration file not found at ${CONFIG_FILE}"
+    echo "❌ FATAL: Configuration file not found at '${CONFIG_FILE}'" >&2
     exit 1
 fi
 
 # shellcheck source=aurora.conf
 . "$CONFIG_FILE"
 
-# === DERIVED VARIABLES ===
-TIMESTAMP=$(date "+%Y-%m-%d %H:%M:%S")
-AURORA="${SCHEMA}_AURORA"
-AURORA_TEMP_DIR="${BACKUP_DIR}/${AURORA}"
-LOGFILE="${SCRIPT_ROOT}/aurora.log"
-temp_compnyname=${SCHEMA#SBO_}       # Remove SBO_ prefix
-COMPNYNAME=${temp_compnyname%_PROD}  # Remove _PROD suffix if it exists
-
-# === FUNCTIONS ===
-log() { echo "$(date +"%Y-%m-%d %H:%M:%S") - $1" | tee -a "$LOGFILE"; }
-
-run_sql() {
-    log "Executing: $1"
-    "$HDBSQL" -U "${BACKOP_USER}" "$1" >/dev/null
-}
-
-show_info() {
-    echo "Source Schema:      ${SCHEMA}"
-    echo "Target Schema:      ${AURORA}"
-    echo "Target Schema User: ${AURORA_SCHEMA_USER}"
-    echo "Company Name:       ${COMPNYNAME}"
-    echo "Export Directory:   ${AURORA_TEMP_DIR}"
-    echo "Log File:           ${LOGFILE}"
-}
-
-usage() {
-    echo "Usage: $0 [new | complete | info]"
-    echo "  new      : Export, import, and rename. (No privileges or post-scripts)"
-    echo "  complete : Drop, export, import, grant privileges, and run post-scripts."
-    echo "  info     : Show configuration information."
-}
-
-export_schema() {
-    log "Starting schema export for '${SCHEMA}'."
-    mkdir -p "$AURORA_TEMP_DIR"
-    run_sql "EXPORT \"${SCHEMA}\".\"*\" AS BINARY INTO '$AURORA_TEMP_DIR' WITH REPLACE;"
-    log "Schema export completed."
-}
-
-import_and_rename() {
-    log "Starting import and rename to '${AURORA}'."
-    run_sql "IMPORT \"${SCHEMA}\".\"*\" FROM '$AURORA_TEMP_DIR' WITH RENAME SCHEMA \"${SCHEMA}\" TO \"${AURORA}\";"
-    log "Updating company name fields."
-    local update_sql="
-        UPDATE \"${AURORA}\".CINF SET \"CompnyName\"='AURORA ${COMPNYNAME} ${TIMESTAMP}';
-        UPDATE \"${AURORA}\".OADM SET \"CompnyName\"='AURORA ${COMPNYNAME} ${TIMESTAMP}';
-        UPDATE \"${AURORA}\".OADM SET \"PrintHeadr\"='AURORA ${COMPNYNAME} ${TIMESTAMP}';"
-    "$HDBSQL" -U "${BACKOP_USER}" -c ";" -I - <<EOF
-${update_sql}
-EOF
-    log "Import and rename completed."
-}
-
-grant_privileges() {
-    log "Granting privileges on '${AURORA}' to '${AURORA_SCHEMA_USER}'."
-    run_sql "GRANT ALL PRIVILEGES ON SCHEMA \"${AURORA}\" TO \"${AURORA_SCHEMA_USER}\";"
-    log "Privileges granted."
-}
-
-drop_aurora_schema() {
-    log "Dropping existing '${AURORA}' schema."
-    "$HDBSQL" -U "${BACKOP_USER}" "DROP SCHEMA \"${AURORA}\" CASCADE;" >/dev/null 2>&1 || log "Could not drop schema '${AURORA}'. It might not exist."
-    log "Old schema dropped."
-}
-
-run_post_scripts() {
-    log "Running post-import SQL scripts: ${POST_SQL}"
-    for sql_file in $POST_SQL; do
-        log "Running script: ${sql_file}"
-        "$HDBSQL" -U "${BACKOP_USER}" -I "${SCRIPT_ROOT}/${sql_file}"
-    done
-    log "All post-import scripts completed."
-}
-
-# === SCRIPT EXECUTION ===
-if [ $# -eq 0 ]; then
-    usage
+# --- Validate Configuration ---
+if [ ! -x "$HDBSQL" ]; then
+    echo "❌ FATAL: hdbsql is not found or not executable at '${HDBSQL}'" >&2
     exit 1
 fi
 
-case "$1" in
-    new)
-        log "=== Starting 'new' operation ==="
-        export_schema
-        import_and_rename
-        log "=== 'New' operation finished successfully ==="
-        ;;
-    complete)
-        log "=== Starting 'complete' operation ==="
-        drop_aurora_schema
-        export_schema
-        import_and_rename
-        grant_privileges
-        run_post_scripts
-        log "=== 'Complete' operation finished successfully ==="
-        ;;
-    info)
-        show_info
-        ;;
-    *)
-        echo "Error: Invalid argument '$1'."
-        usage
-        exit 1
-        ;;
-esac
+# --- Derived Variables (Do Not Edit) ---
+TIMESTAMP=$(date "+%Y-%m-%d %H:%M:%S")
+AURORA_SCHEMA="${SOURCE_SCHEMA}_AURORA"
+EXPORT_DIR="${BACKUP_BASE_DIR}/${AURORA_SCHEMA}_TEMP_EXPORT"
+COMPANY_NAME_BASE=$(echo "${SOURCE_SCHEMA}" | sed 's/^SBO_//' | sed 's/_PROD$//')
+
+# --- Main Execution ---
+echo
+echo "🚀 [$(date "+%T")] Starting Aurora Refresh for '${SOURCE_SCHEMA}'"
+echo "--------------------------------------------------------"
+echo "  Source Schema:        ${SOURCE_SCHEMA}"
+echo "  Target Aurora Schema: ${AURORA_SCHEMA}"
+echo "  Temp Export Path:     ${EXPORT_DIR}"
+echo "--------------------------------------------------------"
+
+# 1. Drop the old Aurora schema if it exists.
+echo "🗑️  Dropping old schema '${AURORA_SCHEMA}' (if it exists)..."
+"$HDBSQL" -U "$DB_ADMIN_KEY" "DROP SCHEMA \"${AURORA_SCHEMA}\" CASCADE" >/dev/null 2>&1 || echo "   -> Schema did not exist. Continuing."
+
+# 2. Prepare the temporary export directory.
+echo "📁 Preparing temporary export directory..."
+rm -rf "$EXPORT_DIR"
+mkdir -p "$EXPORT_DIR"
+
+# 3. Export the source schema.
+echo "⬇️  Exporting source schema '${SOURCE_SCHEMA}' to binary files..."
+"$HDBSQL" -U "$DB_ADMIN_KEY" "EXPORT \"${SOURCE_SCHEMA}\".\"*\" AS BINARY INTO '${EXPORT_DIR}' WITH REPLACE;" >/dev/null
+echo "   -> Export complete."
+
+# 4. Import the data into the new Aurora schema.
+echo "⬆️  Importing data and renaming schema to '${AURORA_SCHEMA}'..."
+"$HDBSQL" -U "$DB_ADMIN_KEY" "IMPORT \"${SOURCE_SCHEMA}\".\"*\" FROM '${EXPORT_DIR}' WITH IGNORE EXISTING RENAME SCHEMA \"${SOURCE_SCHEMA}\" TO \"${AURORA_SCHEMA}\";" >/dev/null
+echo "   -> Import complete."
+
+# 5. Update company name in CINF and OADM tables.
+echo "✍️  Updating company name fields in the new schema..."
+# First, get the original company name from the source schema.
+# The query returns a header and the name in quotes. sed gets the second line, tr removes the quotes, xargs trims whitespace.
+echo "   -> Fetching original company name from '${SOURCE_SCHEMA}'..."
+ORIGINAL_COMPNY_NAME=$("$HDBSQL" -U "$DB_ADMIN_KEY" "SELECT \"CompnyName\" FROM \"${SOURCE_SCHEMA}\".\"CINF\"" | sed -n '2p' | tr -d '"' | xargs)
+
+# Construct the new name in the desired format.
+DATE_STAMP=$(date "+%Y-%m-%d")
+NEW_COMPNY_NAME="AURORA - ${ORIGINAL_COMPNY_NAME} - ${DATE_STAMP}"
+echo "   -> New company name set to: '${NEW_COMPNY_NAME}'"
+
+echo "   -> Updating CINF table..."
+"$HDBSQL" -U "$DB_ADMIN_KEY" "UPDATE \"${AURORA_SCHEMA}\".CINF SET \"CompnyName\" = '${NEW_COMPNY_NAME}';" >/dev/null
+echo "   -> Updating OADM table..."
+"$HDBSQL" -U "$DB_ADMIN_KEY" "UPDATE \"${AURORA_SCHEMA}\".OADM SET \"CompnyName\" = '${NEW_COMPNY_NAME}', \"PrintHeadr\" = '${NEW_COMPNY_NAME}';" >/dev/null
+echo "   -> Company info updated."
+
+# 6. Grant privileges to the read/write user.
+echo "🔑 Granting ALL privileges on '${AURORA_SCHEMA}' to '${AURORA_USER}'..."
+"$HDBSQL" -U "$DB_ADMIN_KEY" "GRANT ALL PRIVILEGES ON SCHEMA \"${AURORA_SCHEMA}\" TO \"${AURORA_USER}\";" >/dev/null
+echo "   -> Privileges granted."
+
+# 7. Run post-import SQL scripts, if any are defined.
+if [ -n "$POST_IMPORT_SQL" ]; then
+    echo "⚙️  Running post-import SQL scripts..."
+    # Use word splitting intentionally here
+    # shellcheck disable=SC2086
+    for sql_file in $POST_IMPORT_SQL; do
+        full_path="${SQL_SCRIPTS_ROOT}/${sql_file}"
+        if [ -f "$full_path" ]; then
+            echo "   -> Executing: ${sql_file}"
+            "$HDBSQL" -U "$DB_ADMIN_KEY" -I "$full_path"
+        else
+            echo "   -> ⚠️ WARNING: Script not found: ${full_path}" >&2
+        fi
+    done
+else
+    echo "   No post-import SQL scripts to run."
+fi
+
+# 8. Clean up the temporary export files.
+echo "🧹 Cleaning up temporary directory '${EXPORT_DIR}'..."
+rm -rf "$EXPORT_DIR"
+echo "   -> Cleanup complete."
+
+echo "--------------------------------------------------------"
+echo "✅ [$(date "+%T")] Aurora Refresh finished successfully!"
+echo
+
+exit 0
```

File: b1.gen.sh (new file, 256 lines)

@@ -0,0 +1,256 @@
#!/bin/bash
# Author: Tomi Eckert
# ==============================================================================
# SAP Business One for HANA Silent Installation Configurator
# ==============================================================================
# This script interactively collects necessary details to customize the
# silent installation properties file for SAP Business One on HANA.
# It provides sensible defaults and generates the final 'install.properties'.
# ==============================================================================
# --- Function to display a welcome header ---
print_header() {
echo "======================================================"
echo " SAP Business One for HANA Installation Configurator "
echo "======================================================"
echo "Please provide the following details. Defaults are in [brackets]."
echo ""
}
# --- Function to read password securely (single entry) ---
read_password() {
local prompt_text=$1
local -n pass_var=$2 # Use a nameref to pass the variable name
# Loop until the entered password is not empty
while true; do
read -s -p "$prompt_text: " pass_var
echo
if [ -z "$pass_var" ]; then
echo "Password cannot be empty. Please try again."
else
break
fi
done
}
# --- Function to read and verify password securely ---
read_password_verify() {
local prompt_text=$1
local -n pass_var=$2 # Use a nameref to pass the variable name
local pass_verify
# Loop until the entered passwords match and are not empty
while true; do
read -s -p "$prompt_text: " pass_var
echo
if [ -z "$pass_var" ]; then
echo "Password cannot be empty. Please try again."
continue
fi
read -s -p "Confirm password: " pass_verify
echo
if [ "$pass_var" == "$pass_verify" ]; then
break
else
echo "Passwords do not match. Please try again."
echo ""
fi
done
}
# --- Main configuration logic ---
print_header
# --- Installation Type ---
echo "--- Installation Type ---"
read -p "Is this a new installation or are you reconfiguring an existing instance? (new/reconfigure) [new]: " install_type
install_type=${install_type:-new}
if [[ "$install_type" == "reconfigure" ]]; then
LANDSCAPE_INSTALL_ACTION="connect"
B1S_SHARED_FOLDER_OVERWRITE="false"
else
LANDSCAPE_INSTALL_ACTION="create"
B1S_SHARED_FOLDER_OVERWRITE="true"
fi
echo ""
# 1. Get Hostname/IP Details
# Default to the current machine's hostname.
DEFAULT_HOSTNAME=$(hostname)
read -p "Enter HANA Database Server Hostname or IP [${DEFAULT_HOSTNAME}]: " HANA_DATABASE_SERVERS
HANA_DATABASE_SERVERS=${HANA_DATABASE_SERVERS:-$DEFAULT_HOSTNAME}
# 2. Get HANA Instance Details
read -p "Enter HANA Database Instance Number [00]: " HANA_DATABASE_INSTANCE
HANA_DATABASE_INSTANCE=${HANA_DATABASE_INSTANCE:-00}
# 3. Get HANA SID to construct the admin user
read -p "Enter HANA SID (Tenant Name) [NDB]: " HANA_SID
HANA_SID=${HANA_SID:-NDB}
# Convert SID to lowercase and append 'adm'
HANA_DATABASE_ADMIN_ID=$(echo "${HANA_SID}" | tr '[:upper:]' '[:lower:]')adm
# 4. Get Passwords
echo ""
echo "--- Secure Password Entry ---"
read_password "Enter password for HANA Admin ('${HANA_DATABASE_ADMIN_ID}')" HANA_DATABASE_ADMIN_PASSWD
# 5. Get HANA Database User
read -p "Enter HANA Database User ID [SYSTEM]: " HANA_DATABASE_USER_ID
HANA_DATABASE_USER_ID=${HANA_DATABASE_USER_ID:-SYSTEM}
# 6. Get HANA User Password
read_password "Enter password for HANA User ('${HANA_DATABASE_USER_ID}')" HANA_DATABASE_USER_PASSWORD
# 7. Get SLD and Site User Details
echo ""
echo "--- System Landscape Directory (SLD) ---"
read -p "Enter SLD Service Port [40000]: " SERVICE_PORT
SERVICE_PORT=${SERVICE_PORT:-40000}
read -p "Enter SLD Site User ID [B1SiteUser]: " SITE_USER_ID
SITE_USER_ID=${SITE_USER_ID:-B1SiteUser}
read_password_verify "Enter password for Site User ('${SITE_USER_ID}')" SITE_USER_PASSWORD
# --- SLD Single Sign-On (SSO) Settings ---
echo ""
echo "--- SLD Single Sign-On (SSO) Settings ---"
read -p "Do you want to configure Active Directory SSO? [y/N]: " configure_sso
if [[ "$configure_sso" =~ ^[yY]$ ]]; then
SLD_WINDOWS_DOMAIN_ACTION="use"
read -p "Enter AD Domain Controller: " SLD_WINDOWS_DOMAIN_CONTROLLER
read -p "Enter AD Domain Name: " SLD_WINDOWS_DOMAIN_NAME
read -p "Enter AD Domain User ID: " SLD_WINDOWS_DOMAIN_USER_ID
read_password "Enter password for AD Domain User ('${SLD_WINDOWS_DOMAIN_USER_ID}')" SLD_WINDOWS_DOMAIN_USER_PASSWORD
else
SLD_WINDOWS_DOMAIN_ACTION="skip"
SLD_WINDOWS_DOMAIN_CONTROLLER=""
SLD_WINDOWS_DOMAIN_NAME=""
SLD_WINDOWS_DOMAIN_USER_ID=""
SLD_WINDOWS_DOMAIN_USER_PASSWORD=""
fi
# 10. & 11. Get Service Layer Load Balancer Details
echo ""
echo "--- Service Layer ---"
read -p "Enter Service Layer Load Balancer Port [50000]: " SL_LB_PORT
SL_LB_PORT=${SL_LB_PORT:-50000}
read -p "How many Service Layer member nodes should be configured? [2]: " SL_MEMBER_COUNT
SL_MEMBER_COUNT=${SL_MEMBER_COUNT:-2}
# Generate the SL_LB_MEMBERS string
SL_LB_MEMBERS=""
for (( i=1; i<=SL_MEMBER_COUNT; i++ )); do
port=$((50000 + i))
member="${HANA_DATABASE_SERVERS}:${port}"
if [ -z "$SL_LB_MEMBERS" ]; then
SL_LB_MEMBERS="$member"
else
SL_LB_MEMBERS="$SL_LB_MEMBERS,$member"
fi
done
# 12. Display Summary and Ask for Confirmation
clear
echo "======================================================"
echo " Configuration Summary"
echo "======================================================"
echo ""
echo " --- Installation & System Details ---"
echo " INSTALLATION_FOLDER=/usr/sap/SAPBusinessOne"
echo " LANDSCAPE_INSTALL_ACTION=${LANDSCAPE_INSTALL_ACTION}"
echo " B1S_SHARED_FOLDER_OVERWRITE=${B1S_SHARED_FOLDER_OVERWRITE}"
echo ""
echo " --- SAP HANA Database Server Details ---"
echo " HANA_DATABASE_SERVERS=${HANA_DATABASE_SERVERS}"
echo " HANA_DATABASE_INSTANCE=${HANA_DATABASE_INSTANCE}"
echo " HANA_DATABASE_ADMIN_ID=${HANA_DATABASE_ADMIN_ID}"
echo " HANA_DATABASE_ADMIN_PASSWD=[hidden]"
echo ""
echo " --- SAP HANA Database User ---"
echo " HANA_DATABASE_USER_ID=${HANA_DATABASE_USER_ID}"
echo " HANA_DATABASE_USER_PASSWORD=[hidden]"
echo ""
echo " --- System Landscape Directory (SLD) Details ---"
echo " SERVICE_PORT=${SERVICE_PORT}"
echo " SITE_USER_ID=${SITE_USER_ID}"
echo " SITE_USER_PASSWORD=[hidden]"
echo ""
echo " --- SLD Single Sign-On (SSO) ---"
echo " SLD_WINDOWS_DOMAIN_ACTION=${SLD_WINDOWS_DOMAIN_ACTION}"
if [ "$SLD_WINDOWS_DOMAIN_ACTION" == "use" ]; then
echo " SLD_WINDOWS_DOMAIN_CONTROLLER=${SLD_WINDOWS_DOMAIN_CONTROLLER}"
echo " SLD_WINDOWS_DOMAIN_NAME=${SLD_WINDOWS_DOMAIN_NAME}"
echo " SLD_WINDOWS_DOMAIN_USER_ID=${SLD_WINDOWS_DOMAIN_USER_ID}"
echo " SLD_WINDOWS_DOMAIN_USER_PASSWORD=[hidden]"
fi
echo ""
echo " --- Service Layer ---"
echo " SL_LB_PORT=${SL_LB_PORT}"
echo " SL_LB_MEMBERS=${SL_LB_MEMBERS}"
echo ""
echo "======================================================"
read -p "Save this configuration to 'install.properties'? [y/N]: " confirm
echo ""
if [[ ! "$confirm" =~ ^[yY]$ ]]; then
echo "Configuration cancelled by user."
exit 1
fi
# --- Write the final install.properties file ---
# Using a HEREDOC to write the configuration file with the variables collected.
cat > install.properties << EOL
# SAP Business One for HANA Silent Installation Properties
# Generated by configuration script on $(date)
INSTALLATION_FOLDER=/usr/sap/SAPBusinessOne
HANA_DATABASE_SERVERS=${HANA_DATABASE_SERVERS}
HANA_DATABASE_INSTANCE=${HANA_DATABASE_INSTANCE}
HANA_DATABASE_ADMIN_ID=${HANA_DATABASE_ADMIN_ID}
HANA_DATABASE_ADMIN_PASSWD=${HANA_DATABASE_ADMIN_PASSWD}
HANA_DATABASE_USER_ID=${HANA_DATABASE_USER_ID}
HANA_DATABASE_USER_PASSWORD=${HANA_DATABASE_USER_PASSWORD}
SERVICE_PORT=${SERVICE_PORT}
SLD_DATABASE_NAME=SLDDATA
SLD_CERTIFICATE_ACTION=self
CONNECTION_SSL_CERTIFICATE_VERIFICATION=false
SLD_DATABASE_ACTION=create
SLD_SERVER_PROTOCOL=https
SITE_USER_ID=${SITE_USER_ID}
SITE_USER_PASSWORD=${SITE_USER_PASSWORD}
# --- SLD Single Sign-On (SSO) Settings ---
SLD_WINDOWS_DOMAIN_ACTION=${SLD_WINDOWS_DOMAIN_ACTION}
SLD_WINDOWS_DOMAIN_CONTROLLER=${SLD_WINDOWS_DOMAIN_CONTROLLER}
SLD_WINDOWS_DOMAIN_NAME=${SLD_WINDOWS_DOMAIN_NAME}
SLD_WINDOWS_DOMAIN_USER_ID=${SLD_WINDOWS_DOMAIN_USER_ID}
SLD_WINDOWS_DOMAIN_USER_PASSWORD=${SLD_WINDOWS_DOMAIN_USER_PASSWORD}
SL_LB_MEMBER_ONLY=false
SL_LB_PORT=${SL_LB_PORT}
SL_LB_MEMBERS=${SL_LB_MEMBERS}
SL_THREAD_PER_SERVER=10
SELECTED_FEATURES=B1ServerTools,B1ServerToolsLandscape,B1ServerToolsSLD,B1ServerToolsLicense,B1ServerToolsJobService,B1ServerToolsXApp,B1SLDAgent,B1BackupService,B1Server,B1ServerSHR,B1ServerHelp,B1AnalyticsPlatform,B1ServerCommonDB,B1ServiceLayerComponent
B1S_SAMBA_AUTOSTART=true
B1S_SHARED_FOLDER_OVERWRITE=${B1S_SHARED_FOLDER_OVERWRITE}
LANDSCAPE_INSTALL_ACTION=${LANDSCAPE_INSTALL_ACTION}
EOL
echo "Success! The configuration file 'install.properties' has been created in the current directory."
exit 0
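As a worked example of the `SL_LB_MEMBERS` loop in the generator above: with `HANA_DATABASE_SERVERS=hanahost` and two member nodes, the loop yields `hanahost:50001,hanahost:50002` — note the member ports are offset from the fixed base 50000, not from the chosen `SL_LB_PORT`. A self-contained sketch (the host name is illustrative):

```shell
# Reproduces the SL_LB_MEMBERS generation from the script above.
HANA_DATABASE_SERVERS="hanahost"   # illustrative value
SL_MEMBER_COUNT=2
SL_LB_MEMBERS=""
for (( i=1; i<=SL_MEMBER_COUNT; i++ )); do
    port=$((50000 + i))                     # member ports start at 50001
    member="${HANA_DATABASE_SERVERS}:${port}"
    if [ -z "$SL_LB_MEMBERS" ]; then
        SL_LB_MEMBERS="$member"             # first member: no leading comma
    else
        SL_LB_MEMBERS="$SL_LB_MEMBERS,$member"
    fi
done
echo "$SL_LB_MEMBERS"
```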



@@ -1,29 +1,33 @@
# ==============================================================================
# Configuration for HANA Backup Script (backup.sh)
# ==============================================================================
# Author: Tomi Eckert
# --- Connection Settings ---
# Full path to the SAP HANA hdbsql executable.
HDBSQL_PATH="/usr/sap/hdbclient/hdbsql"
# User key name from the hdbuserstore.
# This key should be configured to connect to the target tenant database.
USER_KEY="CRONKEY"
# hdbuserstore key for the SYSTEMDB user.
SYSTEMDB_USER_KEY="SYSTEMKEY"
# --- Backup Settings ---
# The base directory where all backup files and directories will be stored.
# Ensure this directory exists and that the OS user running the script has
# write permissions to it.
BACKUP_BASE_DIR="/hana/shared/backup"
# Specify the type of backup to perform on script execution.
# Options are:
#   'schema' - Performs only the schema export.
#   'tenant' - Performs only the tenant data backup.
#   'all'    - Performs both the schema export and the tenant backup.
BACKUP_TYPE="tenant"
# Set to 'true' to also perform a backup of the SYSTEMDB.
BACKUP_SYSTEMDB=true
# Schema exports can be compressed, decreasing their size.
COMPRESS_SCHEMA=true
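Since backup.sh is designed to run from cron, a schedule entry pairing it with the retention cleaner might look like the following. The times, paths, and retention periods are illustrative, not taken from the repository:

```cron
# Nightly HANA backup at 01:30, weekly cleanup of backups older than 14 days
30 1 * * * /hana/shared/scripts/backup/backup.sh >> /var/log/hana_backup.log 2>&1
0 5 * * 0 /hana/shared/scripts/cleaner.sh 14:/hana/shared/backup/tenant 14:/hana/shared/backup/schema
```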

backup/backup.hook.sh Normal file

@@ -0,0 +1,17 @@
#!/bin/bash
# Author: Tomi Eckert
# This script helps to configure backup.conf
# Source the backup.conf to get current values (expects to be run from the backup directory)
source backup.conf
HDBSQL_PATH_INPUT=$(which hdbsql)
# Default values if not found
HDBSQL_PATH_INPUT=${HDBSQL_PATH_INPUT:-"/usr/sap/hdbclient/hdbsql"}
# Update backup.conf
sed -i "s#^HDBSQL_PATH=\".*\"#HDBSQL_PATH=\"$HDBSQL_PATH_INPUT\"#" backup.conf
echo "backup.conf updated successfully!"
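The `sed` expression above replaces the entire `HDBSQL_PATH` line in place, using `#` as the delimiter so the slashes in the path need no escaping. A self-contained sketch of the same substitution against a throwaway file (the hook edits the real backup.conf; the discovered path here is hypothetical):

```shell
# Demonstrate the HDBSQL_PATH substitution used by backup.hook.sh.
conf="$(mktemp)"
printf 'HDBSQL_PATH="/usr/sap/hdbclient/hdbsql"\n' > "$conf"
new_path="/opt/hana/client/hdbsql"   # hypothetical discovered path
# '#' as delimiter avoids escaping the '/' characters in the path (GNU sed -i)
sed -i "s#^HDBSQL_PATH=\".*\"#HDBSQL_PATH=\"$new_path\"#" "$conf"
updated=$(cat "$conf")
echo "$updated"
rm -f "$conf"
```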


@@ -1,18 +1,20 @@
#!/bin/bash
# Version: 1.0.8
# Author: Tomi Eckert
# ==============================================================================
# SAP HANA Backup Script
#
# Performs schema exports for one or more schemas and/or tenant backups for a
# SAP HANA database using hanatool.sh. Designed to be executed via a cronjob.
# Reads all settings from the backup.conf file in the same directory.
# ==============================================================================
# --- Configuration and Setup ---
# Find the script's own directory to locate the config file and hanatool.sh
SCRIPT_DIR=$(cd -- "$(dirname -- "${BASH_SOURCE[0]}")" &> /dev/null && pwd)
CONFIG_FILE="${SCRIPT_DIR}/backup.conf"
HANATOOL_PATH="${SCRIPT_DIR}/hanatool.sh" # hanatool.sh is expected in the same directory
# Check for config file and source it
if [[ -f "$CONFIG_FILE" ]]; then
@@ -22,162 +24,104 @@ else
exit 1
fi
# Check if the hanatool.sh executable exists
if [[ ! -x "$HANATOOL_PATH" ]]; then
echo "❌ Error: hanatool.sh not found or not executable at '${HANATOOL_PATH}'"
exit 1
fi
# Calculate threads to use (half of the available cores, but at least 1)
TOTAL_THREADS=$(nproc --all)
THREADS=$((TOTAL_THREADS / 2))
if [[ "$THREADS" -eq 0 ]]; then
THREADS=1
fi
# --- Functions ---
# Performs a binary export of a specific schema.
# Accepts the schema name as its first argument.
perform_schema_export() {
local schema_name="$1"
if [[ -z "$schema_name" ]]; then
echo " ❌ Error: No schema name provided to perform_schema_export function."
return 1
fi
echo "⬇️ Starting schema export for '${schema_name}'..."
local timestamp
timestamp=$(date +%Y%m%d_%H%M%S)
local export_base_dir="${BACKUP_BASE_DIR}/schema"
local export_path="${export_base_dir}/${schema_name}_${timestamp}"
local query_export_path="$export_path" # Default path for the EXPORT query
if [[ "$COMPRESS_SCHEMA" == "true" ]]; then
export_path="${export_base_dir}/tmp/${schema_name}_${timestamp}"
query_export_path="$export_path"
echo " Compression enabled. Using temporary export path: ${export_path}"
fi
local archive_file="${export_base_dir}/${schema_name}_${timestamp}.tar.gz"
mkdir -p "$(dirname "$export_path")"
local query="EXPORT \"${schema_name}\".\"*\" AS BINARY INTO '${query_export_path}' WITH REPLACE THREADS ${THREADS};"
"$HDBSQL_PATH" -U "$USER_KEY" "$query" > /dev/null 2>&1
local exit_code=$?
if [[ "$exit_code" -eq 0 ]]; then
echo " ✅ Successfully exported schema '${schema_name}'."
if [[ "$COMPRESS_SCHEMA" == "true" ]]; then
echo " 🗜️ Compressing exported files..."
tar -czf "$archive_file" -C "$(dirname "$export_path")" "$(basename "$export_path")"
local tar_exit_code=$?
if [[ "$tar_exit_code" -eq 0 ]]; then
echo " ✅ Successfully created archive '${archive_file}'."
echo " 🧹 Cleaning up temporary directory..."
rm -rf "$export_path"
rmdir --ignore-fail-on-non-empty "$(dirname "$export_path")"
echo " ✨ Cleanup complete."
else
echo " ❌ Error: Failed to compress '${export_path}'."
fi
else
echo " Compression disabled. Raw export files are located at '${export_path}'."
fi
else
echo " ❌ Error: Failed to export schema '${schema_name}' (hdbsql exit code: ${exit_code})."
fi
}
# NEW: Loops through the schemas in the config file and runs an export for each.
run_all_schema_exports() {
if [[ -z "$SCHEMA_NAMES" ]]; then
echo " ⚠️ Warning: SCHEMA_NAMES variable is not set in config. Skipping schema export."
return
fi
echo "🔎 Found schemas to export: ${SCHEMA_NAMES}"
for schema in $SCHEMA_NAMES; do
perform_schema_export "$schema"
echo "--------------------------------------------------"
done
}
# Performs a full backup of the tenant database.
perform_tenant_backup() {
echo "⬇️ Starting tenant backup..."
local timestamp
timestamp=$(date +%Y%m%d_%H%M%S)
local backup_base_dir="${BACKUP_BASE_DIR}/tenant"
local backup_path_prefix
local backup_target_dir
if [[ "$COMPRESS_TENANT" == "true" ]]; then
backup_target_dir="${backup_base_dir}/tmp"
backup_path_prefix="${backup_target_dir}/backup_${timestamp}"
echo " Compression enabled. Using temporary backup path: ${backup_path_prefix}"
else
backup_target_dir="$backup_base_dir"
backup_path_prefix="${backup_target_dir}/backup_${timestamp}"
fi
mkdir -p "$backup_target_dir"
local query="BACKUP DATA USING FILE ('${backup_path_prefix}')"
"$HDBSQL_PATH" -U "$USER_KEY" "$query" > /dev/null 2>&1
local exit_code=$?
if [[ "$exit_code" -eq 0 ]]; then
echo " ✅ Successfully initiated tenant backup with prefix '${backup_path_prefix}'."
if [[ "$COMPRESS_TENANT" == "true" ]]; then
local archive_file="${backup_base_dir}/backup_${timestamp}.tar.gz"
echo " 🗜️ Compressing backup files..."
tar -czf "$archive_file" -C "$backup_target_dir" .
local tar_exit_code=$?
if [[ "$tar_exit_code" -eq 0 ]]; then
echo " ✅ Successfully created archive '${archive_file}'."
echo " 🧹 Cleaning up temporary directory..."
rm -rf "$backup_target_dir"
echo " ✨ Cleanup complete."
else
echo " ❌ Error: Failed to compress backup files in '${backup_target_dir}'."
fi
fi
else
echo " ❌ Error: Failed to initiate tenant backup (hdbsql exit code: ${exit_code})."
fi
}
# --- Main Execution ---
echo "⚙️ Starting HANA backup process using hanatool.sh..."
mkdir -p "$BACKUP_BASE_DIR"
SCHEMA_EXPORT_OPTIONS=""
case "$BACKUP_TYPE" in
schema)
if [[ -z "$SCHEMA_NAMES" ]]; then
echo " ⚠️ Warning: SCHEMA_NAMES variable is not set in config. Skipping schema export."
else
echo "🔎 Found schemas to export: ${SCHEMA_NAMES}"
for schema in $SCHEMA_NAMES; do
echo "⬇️ Starting schema export for '${schema}'..."
SCHEMA_EXPORT_OPTIONS="$COMMON_OPTIONS"
if [[ "$COMPRESS_SCHEMA" == "true" ]]; then
SCHEMA_EXPORT_OPTIONS+=" --compress"
fi
"$HANATOOL_PATH" "$USER_KEY" export "$schema" "${BACKUP_BASE_DIR}/schema" $SCHEMA_EXPORT_OPTIONS
if [[ $? -ne 0 ]]; then
echo "❌ Error: Schema export for '${schema}' failed."
fi
echo "--------------------------------------------------"
done
fi
;;
tenant)
echo "⬇️ Starting Tenant backup..."
TENANT_BACKUP_OPTIONS="$COMMON_OPTIONS"
if [[ "$COMPRESS_TENANT" == "true" ]]; then
TENANT_BACKUP_OPTIONS+=" --compress"
fi
"$HANATOOL_PATH" "$USER_KEY" backup "${BACKUP_BASE_DIR}/tenant" $TENANT_BACKUP_OPTIONS
if [[ $? -ne 0 ]]; then
echo "❌ Error: Tenant backup failed."
fi
;;
all)
if [[ -z "$SCHEMA_NAMES" ]]; then
echo " ⚠️ Warning: SCHEMA_NAMES variable is not set in config. Skipping schema export."
else
echo "🔎 Found schemas to export: ${SCHEMA_NAMES}"
for schema in $SCHEMA_NAMES; do
echo "⬇️ Starting schema export for '${schema}'..."
SCHEMA_EXPORT_OPTIONS="$COMMON_OPTIONS"
if [[ "$COMPRESS_SCHEMA" == "true" ]]; then
SCHEMA_EXPORT_OPTIONS+=" --compress"
fi
"$HANATOOL_PATH" "$USER_KEY" export "$schema" "${BACKUP_BASE_DIR}/schema" $SCHEMA_EXPORT_OPTIONS
if [[ $? -ne 0 ]]; then
echo "❌ Error: Schema export for '${schema}' failed."
fi
echo "--------------------------------------------------"
done
fi
echo "⬇️ Starting Tenant backup..."
TENANT_BACKUP_OPTIONS="$COMMON_OPTIONS"
if [[ "$COMPRESS_TENANT" == "true" ]]; then
TENANT_BACKUP_OPTIONS+=" --compress"
fi
"$HANATOOL_PATH" "$USER_KEY" backup "${BACKUP_BASE_DIR}/tenant" $TENANT_BACKUP_OPTIONS
if [[ $? -ne 0 ]]; then
echo "❌ Error: Tenant backup failed."
fi
;;
*)
echo " ❌ Error: Invalid BACKUP_TYPE '${BACKUP_TYPE}' in config. Use 'schema', 'tenant', or 'all'."
;;
esac
# Run the SYSTEMDB backup when enabled; it applies to the 'tenant' and 'all' backup types, not 'schema'-only runs.
if [[ "$BACKUP_TYPE" == "tenant" || "$BACKUP_TYPE" == "all" ]]; then
if [[ "$BACKUP_SYSTEMDB" == "true" ]]; then
echo "--------------------------------------------------"
if [[ -z "$SYSTEMDB_USER_KEY" ]]; then
echo " ❌ Error: BACKUP_SYSTEMDB is true, but SYSTEMDB_USER_KEY is not set in config."
else
echo "⬇️ Starting SYSTEMDB backup..."
SYSTEMDB_BACKUP_OPTIONS="$COMMON_OPTIONS"
if [[ "$COMPRESS_TENANT" == "true" ]]; then # SYSTEMDB compression uses COMPRESS_TENANT setting
SYSTEMDB_BACKUP_OPTIONS+=" --compress"
fi
"$HANATOOL_PATH" "$SYSTEMDB_USER_KEY" backup "${BACKUP_BASE_DIR}/tenant" $SYSTEMDB_BACKUP_OPTIONS
if [[ $? -ne 0 ]]; then
echo "❌ Error: SYSTEMDB backup failed."
fi
fi
fi
fi
echo "📦 Backup process complete."
echo "👋 Exiting."
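backup.sh forwards `$COMMON_OPTIONS` (referenced but not defined in the backup.conf shown above; presumably set there, e.g. with an `--ntfy` token, or left empty) to hanatool.sh unquoted, deliberately relying on word splitting to turn the option string into separate arguments. A sketch of that behavior:

```shell
# Unquoted expansion splits an option string into separate arguments,
# which is what backup.sh relies on when forwarding COMMON_OPTIONS.
COMMON_OPTIONS="--dry-run --compress"   # illustrative option string
count_args() { echo "$#"; }
split=$(count_args $COMMON_OPTIONS)      # unquoted: word-split into 2 args
unsplit=$(count_args "$COMMON_OPTIONS")  # quoted: passed as 1 arg
echo "$split $unsplit"
```

The trade-off is that this breaks for option values containing spaces; a bash array would be the robust alternative.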

cleaner.sh Normal file

@@ -0,0 +1,31 @@
#!/bin/bash
# Version: 1.1.0
# Author: Tomi Eckert
# Check if any arguments were provided
if [ "$#" -eq 0 ]; then
echo "Usage: $0 <retention_days>:<path> [<retention_days>:<path> ...]"
exit 1
fi
# Loop through each argument provided
for ARG in "$@"; do
# Split the argument at the first colon
IFS=':' read -r RETENTION_DAYS TARGET_DIR <<< "$ARG"
# Validate that both a retention period and a path were provided
if [ -z "$RETENTION_DAYS" ] || [ -z "$TARGET_DIR" ]; then
echo "Invalid format for argument: $ARG. Please use the format <retention_days>:<path>"
continue
fi
echo "Starting cleanup of files older than $RETENTION_DAYS days in $TARGET_DIR..."
# Use find to locate and delete files, handling potential errors
find "$TARGET_DIR" -type f -mtime +"$RETENTION_DAYS" -delete -print || echo "Could not process $TARGET_DIR. Check permissions."
echo "Cleanup complete for $TARGET_DIR."
echo "--------------------------------------------------"
done
echo "All cleanup tasks finished."
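The `IFS=':' read -r RETENTION_DAYS TARGET_DIR <<< "$ARG"` idiom splits only at the first colon, because `read` assigns the remainder of the line to the last variable. A quick sketch:

```shell
# Split "<retention_days>:<path>" at the first colon only.
arg="14:/hana/shared/backup/tenant"
IFS=':' read -r days dir <<< "$arg"
echo "$days -> $dir"
```

Because only the last variable absorbs the remainder, a path that itself contains a colon would still be parsed correctly.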

hanatool.sh Normal file

@@ -0,0 +1,448 @@
#!/bin/bash
# Version: 1.5.6
# Author: Tomi Eckert
# ==============================================================================
# SAP HANA Schema and Tenant Management Tool (hanatool.sh)
#
# A command-line utility to quickly export/import schemas or backup a tenant.
# ==============================================================================
# --- Default Settings ---
# Define potential HDB client paths
HDB_CLIENT_PATH_1="/usr/sap/hdbclient"
HDB_CLIENT_PATH_2="/usr/sap/NDB/HDB00/exe"
# Determine the correct HDB_CLIENT_PATH
if [ -d "$HDB_CLIENT_PATH_1" ]; then
HDB_CLIENT_PATH="$HDB_CLIENT_PATH_1"
elif [ -d "$HDB_CLIENT_PATH_2" ]; then
HDB_CLIENT_PATH="$HDB_CLIENT_PATH_2"
else
echo "❌ Error: Neither '$HDB_CLIENT_PATH_1' nor '$HDB_CLIENT_PATH_2' found."
echo "Please install the SAP HANA client or adjust the paths in the script."
exit 1
fi
HDBSQL_PATH="${HDB_CLIENT_PATH}/hdbsql"
COMPRESS=false
THREADS=0 # 0 means auto-calculate later
DRY_RUN=false
NTFY_TOKEN=""
IMPORT_REPLACE=false
# --- Help/Usage Function ---
usage() {
echo "SAP HANA Schema and Tenant Management Tool"
echo ""
echo "Usage (Schema): $0 <USER_KEY> export|import <SCHEMA_NAME> <PATH> [OPTIONS]"
echo "      (Schema): $0 <USER_KEY> import-rename <SCHEMA_NAME> <NEW_SCHEMA_NAME> <PATH> [OPTIONS]"
echo "      (Tenant): $0 <USER_KEY> backup <PATH> [OPTIONS]"
echo ""
echo "Actions:"
echo " export Export a schema to a specified path."
echo " import Import a schema from a specified path."
echo " import-rename Import a schema from a path to a new schema name."
echo " backup Perform a full backup of the tenant."
echo ""
echo "Arguments:"
echo " USER_KEY The user key from hdbuserstore for DB connection."
echo " SCHEMA_NAME The name of the source schema."
echo " NEW_SCHEMA_NAME (Required for import-rename only) The target schema name."
echo " PATH The file system path for the export/import/backup data."
echo ""
echo "Options:"
echo " -t, --threads N Specify the number of threads (not used for 'backup')."
echo " -c, --compress Enable tar.gz compression for exports and backups."
echo " -n, --dry-run Show what commands would be executed without running them."
echo " --ntfy <token> Send a notification via ntfy.sh upon completion/failure."
echo " --replace Use the 'REPLACE' option for imports instead of 'IGNORE EXISTING'."
echo " --hdbsql <path> Specify a custom path for the hdbsql executable."
echo " -h, --help Show this help message."
echo ""
echo "Examples:"
echo " # Backup the tenant determined by MY_TENANT_KEY and compress the result"
echo " $0 MY_TENANT_KEY backup /hana/backups -c --ntfy tk_xxxxxxxxxxxx"
echo ""
echo " # Import MYSCHEMA from a compressed archive"
echo " $0 MY_SCHEMA_KEY import MYSCHEMA /hana/backups/MYSCHEMA_20240101.tar.gz -c"
echo ""
echo " # Import MYSCHEMA as MYSCHEMA_TEST, replacing any existing objects"
echo " $0 MY_SCHEMA_KEY import-rename MYSCHEMA MYSCHEMA_TEST /hana/backups/temp_export --replace"
}
# --- Notification Function ---
send_notification() {
local message="$1"
if [[ -n "$NTFY_TOKEN" && "$DRY_RUN" == "false" ]]; then
echo " Sending notification..."
curl -s -H "Authorization: Bearer $NTFY_TOKEN" -d "$message" https://ntfy.technopunk.space/sap > /dev/null
elif [[ -n "$NTFY_TOKEN" && "$DRY_RUN" == "true" ]]; then
echo "[DRY RUN] Would send notification: curl -H \"Authorization: Bearer ...\" -d \"$message\" https://ntfy.technopunk.space/sap"
fi
}
# --- Function to get HANA tenant name ---
get_hana_tenant_name() {
local user_key="$1"
local hdbsql_path="$2"
local dry_run="$3"
local query="SELECT DATABASE_NAME FROM SYS.M_DATABASES;"
local tenant_name=""
if [[ "$dry_run" == "true" ]]; then
# Send the dry-run note to stderr so it is not captured by the caller's $(...)
echo "[DRY RUN] Would execute hdbsql to get tenant name: \"$hdbsql_path\" -U \"$user_key\" \"$query\"" >&2
tenant_name="DRYRUN_TENANT"
else
tenant_name=$("$hdbsql_path" -U "$user_key" "$query" | tail -n +2 | head -n 1 | tr -d '[:space:]' | tr -d '"')
if [[ -z "$tenant_name" ]]; then
# Note: this function runs inside a command substitution, so 'exit' only
# leaves the subshell; the error goes to stderr and the caller sees an empty result.
echo "❌ Error: Could not retrieve HANA tenant name using user key '${user_key}'." >&2
exit 1
fi
fi
fi
echo "$tenant_name"
}
# --- Argument Parsing ---
POSITIONAL_ARGS=()
while [[ $# -gt 0 ]]; do
case $1 in
-t|--threads)
THREADS="$2"
shift 2
;;
-c|--compress)
COMPRESS=true
shift
;;
-n|--dry-run)
DRY_RUN=true
shift
;;
--ntfy)
NTFY_TOKEN="$2"
shift 2
;;
--replace)
IMPORT_REPLACE=true
shift
;;
--hdbsql)
HDBSQL_PATH="$2"
shift 2
;;
-h|--help)
usage
exit 0
;;
*)
POSITIONAL_ARGS+=("$1") # save positional arg
shift
;;
esac
done
set -- "${POSITIONAL_ARGS[@]}" # restore positional parameters
# Assign common positional arguments
USER_KEY="$1"
ACTION="$2"
# --- Main Logic ---
if [[ "$DRY_RUN" == "true" ]]; then
echo "⚠️ --- DRY RUN MODE ENABLED --- ⚠️"
echo "No actual commands will be executed."
echo "-------------------------------------"
fi
# Check for hdbsql executable
if [[ ! -x "$HDBSQL_PATH" ]]; then
echo "❌ Error: hdbsql not found or not executable at '${HDBSQL_PATH}'"
exit 1
fi
# Calculate default threads if not specified and action is not backup
if [[ "$THREADS" -eq 0 && "$ACTION" != "backup" ]]; then
TOTAL_THREADS=$(nproc --all)
THREADS=$((TOTAL_THREADS / 2))
if [[ "$THREADS" -eq 0 ]]; then
THREADS=1
fi
echo " Auto-detected threads to use: ${THREADS}"
fi
# Execute action based on user input
case "$ACTION" in
backup)
TARGET_PATH="$3"
if [[ -z "$USER_KEY" || -z "$TARGET_PATH" ]]; then
echo "❌ Error: Missing arguments for 'backup' action."
usage
exit 1
fi
echo "⬇️ Starting tenant backup..."
echo " - User Key: ${USER_KEY}"
echo " - Path: ${TARGET_PATH}"
echo " - Compress: ${COMPRESS}"
TENANT_NAME=$(get_hana_tenant_name "$USER_KEY" "$HDBSQL_PATH" "$DRY_RUN")
echo " - Tenant Name: ${TENANT_NAME}"
timestamp=$(date +%Y%m%d_%H%M%S)
backup_target_dir="$TARGET_PATH" # Initialize with TARGET_PATH
backup_path_prefix=""
if [[ "$COMPRESS" == "true" ]]; then
if [[ "$DRY_RUN" == "true" ]]; then
backup_target_dir="${TARGET_PATH}/${TENANT_NAME}_backup_DRYRUN_TEMP" # Use TARGET_PATH
else
backup_target_dir=$(mktemp -d "${TARGET_PATH}/${TENANT_NAME}_backup_${timestamp}_XXXXXXXX") # Use TARGET_PATH
fi
echo " Using temporary backup directory: ${backup_target_dir}"
fi
if [[ "$DRY_RUN" == "true" ]]; then
echo "[DRY RUN] Would create directory: mkdir -p \"$backup_target_dir\""
else
mkdir -p "$backup_target_dir"
fi
backup_path_prefix="${backup_target_dir}/backup_${TENANT_NAME}_${timestamp}"
QUERY="BACKUP DATA USING FILE ('${backup_path_prefix}')"
EXIT_CODE=0
if [[ "$DRY_RUN" == "true" ]]; then
echo "[DRY RUN] Would execute hdbsql: \"$HDBSQL_PATH\" -U \"$USER_KEY\" \"$QUERY\""
else
"$HDBSQL_PATH" -U "$USER_KEY" "$QUERY" > /dev/null 2>&1
EXIT_CODE=$?
fi
if [[ "$EXIT_CODE" -eq 0 ]]; then
echo "✅ Successfully initiated tenant backup with prefix '${backup_path_prefix}'."
if [[ "$COMPRESS" == "true" ]]; then
ARCHIVE_FILE="${TARGET_PATH}/${TENANT_NAME}_backup_${timestamp}.tar.gz"
echo "🗜️ Compressing backup files to '${ARCHIVE_FILE}'..."
TAR_EXIT_CODE=0
if [[ "$DRY_RUN" == "true" ]]; then
echo "[DRY RUN] Would execute tar: tar -czf \"$ARCHIVE_FILE\" -C \"$backup_target_dir\" ."
else
tar -czf "$ARCHIVE_FILE" -C "$backup_target_dir" .
TAR_EXIT_CODE=$?
fi
if [[ "$TAR_EXIT_CODE" -eq 0 ]]; then
echo "✅ Successfully created archive."
echo "🧹 Cleaning up temporary directory..."
if [[ "$DRY_RUN" == "true" ]]; then
echo "[DRY RUN] Would remove temp directory: rm -rf \"$backup_target_dir\""
else
rm -rf "$backup_target_dir"
fi
else
echo "❌ Error: Failed to create archive from '${backup_target_dir}'."
fi
fi
send_notification "✅ HANA tenant '${TENANT_NAME}' backup completed successfully."
else
echo "❌ Error: Failed to initiate tenant backup (hdbsql exit code: ${EXIT_CODE})."
send_notification "❌ HANA tenant '${TENANT_NAME}' backup FAILED."
if [[ "$COMPRESS" == "true" && "$DRY_RUN" == "false" ]]; then rm -rf "$backup_target_dir"; fi
fi
;;
export)
SCHEMA_NAME="$3"
TARGET_PATH="$4"
if [[ -z "$USER_KEY" || -z "$SCHEMA_NAME" || -z "$TARGET_PATH" ]]; then
echo "❌ Error: Missing arguments for 'export' action."
usage
exit 1
fi
echo "⬇️ Starting schema export..."
echo " - User Key: ${USER_KEY}"
echo " - Schema: ${SCHEMA_NAME}"
echo " - Path: ${TARGET_PATH}"
echo " - Compress: ${COMPRESS}"
echo " - Threads: ${THREADS}"
EXPORT_DIR="$TARGET_PATH"
if [[ "$COMPRESS" == "true" ]]; then
if [[ "$DRY_RUN" == "true" ]]; then
EXPORT_DIR="${TARGET_PATH}/export_${SCHEMA_NAME}_DRYRUN_TEMP"
else
EXPORT_DIR=$(mktemp -d "${TARGET_PATH}/export_${SCHEMA_NAME}_XXXXXXXX")
fi
echo " Using temporary export directory: ${EXPORT_DIR}"
fi
if [[ "$DRY_RUN" == "true" ]]; then
echo "[DRY RUN] Would create directory: mkdir -p \"$EXPORT_DIR\""
else
mkdir -p "$EXPORT_DIR"
fi
QUERY="EXPORT \"${SCHEMA_NAME}\".\"*\" AS BINARY INTO '${EXPORT_DIR}' WITH REPLACE THREADS ${THREADS} NO DEPENDENCIES;"
EXIT_CODE=0
if [[ "$DRY_RUN" == "true" ]]; then
echo "[DRY RUN] Would execute hdbsql: \"$HDBSQL_PATH\" -U \"$USER_KEY\" \"$QUERY\""
else
"$HDBSQL_PATH" -U "$USER_KEY" "$QUERY" > /dev/null 2>&1
EXIT_CODE=$?
fi
if [[ "$EXIT_CODE" -eq 0 ]]; then
echo "✅ Successfully exported schema '${SCHEMA_NAME}' to '${EXPORT_DIR}'."
if [[ "$COMPRESS" == "true" ]]; then
ARCHIVE_FILE="${TARGET_PATH}/${SCHEMA_NAME}_$(date +%Y%m%d_%H%M%S).tar.gz"
echo "🗜️ Compressing files to '${ARCHIVE_FILE}'..."
TAR_EXIT_CODE=0
if [[ "$DRY_RUN" == "true" ]]; then
echo "[DRY RUN] Would execute tar: tar -czf \"$ARCHIVE_FILE\" -C \"$(dirname "$EXPORT_DIR")\" \"$(basename "$EXPORT_DIR")\""
else
tar -czf "$ARCHIVE_FILE" -C "$(dirname "$EXPORT_DIR")" "$(basename "$EXPORT_DIR")"
TAR_EXIT_CODE=$?
fi
if [[ "$TAR_EXIT_CODE" -eq 0 ]]; then
echo "✅ Successfully created archive."
echo "🧹 Cleaning up temporary directory..."
if [[ "$DRY_RUN" == "true" ]]; then
echo "[DRY RUN] Would remove temp directory: rm -rf \"$EXPORT_DIR\""
else
rm -rf "$EXPORT_DIR"
fi
else
echo "❌ Error: Failed to create archive from '${EXPORT_DIR}'."
fi
fi
send_notification "✅ Export of schema '${SCHEMA_NAME}' completed successfully."
else
echo "❌ Error: Failed to export schema '${SCHEMA_NAME}' (hdbsql exit code: ${EXIT_CODE})."
send_notification "❌ Export of schema '${SCHEMA_NAME}' FAILED."
if [[ "$COMPRESS" == "true" && "$DRY_RUN" == "false" ]]; then rm -rf "$EXPORT_DIR"; fi
fi
;;
import|import-rename)
SCHEMA_NAME="$3"
if [[ "$ACTION" == "import" ]]; then
SOURCE_PATH="$4"
NEW_SCHEMA_NAME=""
if [[ -z "$USER_KEY" || -z "$SCHEMA_NAME" || -z "$SOURCE_PATH" ]]; then
echo "❌ Error: Missing arguments for 'import' action."
usage
exit 1
fi
else # import-rename
NEW_SCHEMA_NAME="$4"
SOURCE_PATH="$5"
if [[ -z "$USER_KEY" || -z "$SCHEMA_NAME" || -z "$NEW_SCHEMA_NAME" || -z "$SOURCE_PATH" ]]; then
echo "❌ Error: Missing arguments for 'import-rename' action."
usage
exit 1
fi
fi
echo "⬆️ Starting schema import..."
echo " - User Key: ${USER_KEY}"
echo " - Source Schema: ${SCHEMA_NAME}"
if [[ -n "$NEW_SCHEMA_NAME" ]]; then
echo " - Target Schema: ${NEW_SCHEMA_NAME}"
fi
echo " - Path: ${SOURCE_PATH}"
echo " - Compress: ${COMPRESS}"
echo " - Threads: ${THREADS}"
IMPORT_DIR="$SOURCE_PATH"
if [[ "$COMPRESS" == "true" ]]; then
if [[ ! -f "$SOURCE_PATH" && "$DRY_RUN" == "false" ]]; then
echo "❌ Error: Source path '${SOURCE_PATH}' is not a valid file for compressed import."
exit 1
fi
if [[ "$DRY_RUN" == "true" ]]; then
IMPORT_DIR="/tmp/import_${SCHEMA_NAME}_DRYRUN_TEMP"
else
IMPORT_DIR=$(mktemp -d "/tmp/import_${SCHEMA_NAME}_XXXXXXXX")
fi
echo " Decompressing to temporary directory: ${IMPORT_DIR}"
TAR_EXIT_CODE=0
if [[ "$DRY_RUN" == "true" ]]; then
echo "[DRY RUN] Would decompress archive: tar -xzf \"$SOURCE_PATH\" -C \"$IMPORT_DIR\" --strip-components=1"
else
tar -xzf "$SOURCE_PATH" -C "$IMPORT_DIR" --strip-components=1
TAR_EXIT_CODE=$?
fi
if [[ "$TAR_EXIT_CODE" -ne 0 ]]; then
echo "❌ Error: Failed to decompress '${SOURCE_PATH}'."
if [[ "$DRY_RUN" == "false" ]]; then rm -rf "$IMPORT_DIR"; fi
exit 1
fi
fi
if [[ ! -d "$IMPORT_DIR" && "$DRY_RUN" == "false" ]]; then
echo "❌ Error: Import directory '${IMPORT_DIR}' does not exist."
exit 1
fi
import_options=""
if [[ "$IMPORT_REPLACE" == "true" ]]; then
import_options="REPLACE"
echo " - Mode: REPLACE"
else
import_options="IGNORE EXISTING"
echo " - Mode: IGNORE EXISTING (default)"
fi
if [[ "$ACTION" == "import-rename" ]]; then
import_options="${import_options} RENAME SCHEMA \"${SCHEMA_NAME}\" TO \"${NEW_SCHEMA_NAME}\""
fi
QUERY="IMPORT \"${SCHEMA_NAME}\".\"*\" AS BINARY FROM '${IMPORT_DIR}' WITH ${import_options} THREADS ${THREADS};"
EXIT_CODE=0
if [[ "$DRY_RUN" == "true" ]]; then
echo "[DRY RUN] Would execute hdbsql: \"$HDBSQL_PATH\" -U \"$USER_KEY\" \"$QUERY\""
else
"$HDBSQL_PATH" -U "$USER_KEY" "$QUERY" > /dev/null 2>&1
EXIT_CODE=$?
fi
target_schema_name="${NEW_SCHEMA_NAME:-$SCHEMA_NAME}"
if [[ "$EXIT_CODE" -eq 0 ]]; then
echo "✅ Successfully imported schema."
send_notification "${ACTION} of schema '${SCHEMA_NAME}' to '${target_schema_name}' completed successfully."
else
echo "❌ Error: Failed to import schema (hdbsql exit code: ${EXIT_CODE})."
send_notification "${ACTION} of schema '${SCHEMA_NAME}' to '${target_schema_name}' FAILED."
fi
if [[ "$COMPRESS" == "true" ]]; then
echo "🧹 Cleaning up temporary directory..."
if [[ "$DRY_RUN" == "true" ]]; then
echo "[DRY RUN] Would remove temp directory: rm -rf \"$IMPORT_DIR\""
else
rm -rf "$IMPORT_DIR"
fi
fi
;;
*)
echo "❌ Error: Invalid action '${ACTION}'."
usage
exit 1
;;
esac
echo "✅ Process complete."
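`get_hana_tenant_name` strips the header row, whitespace, and quoting from the hdbsql result set. A sketch of that pipeline against sample output (the sample is illustrative; real hdbsql output formatting may vary with client version and flags):

```shell
# Simulated hdbsql result set: header row, then one quoted value.
sample='DATABASE_NAME
"NDB"'
# Same pipeline as get_hana_tenant_name: skip header, take first row,
# drop whitespace and surrounding quotes.
tenant=$(printf '%s\n' "$sample" | tail -n +2 | head -n 1 | tr -d '[:space:]' | tr -d '"')
echo "$tenant"
```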


@@ -1,106 +1,241 @@
#!/bin/bash
# Author: Tomi Eckert
# --- Configuration ---
# Define script packages. The key is the name that will appear in the menu.
# The value is a space-separated string of all the URLs to download for that package.
declare -A SCRIPT_PACKAGES
SCRIPT_PACKAGES["Aurora Suite"]="https://git.technopunk.space/tomi/Scripts/raw/branch/main/aurora/aurora.sh https://git.technopunk.space/tomi/Scripts/raw/branch/main/aurora/aurora.conf"
SCRIPT_PACKAGES["Backup Suite"]="https://git.technopunk.space/tomi/Scripts/raw/branch/main/backup/backup.sh https://git.technopunk.space/tomi/Scripts/raw/branch/main/backup/backup.conf"
SCRIPT_PACKAGES["Key Manager"]="https://git.technopunk.space/tomi/Scripts/raw/branch/main/hdb_keymanager.sh"
# Example: To add another single script later, just add a new line:
# SCRIPT_PACKAGES["My Other Script"]="https://path/to/my-other-script.sh"
# --- Main Script --- # --- Main Script ---
# Welcome message # This script presents a menu of software packages, or installs them
echo "-------------------------------------" # non-interactively via command-line arguments. It downloads files from a
echo " Script Downloader " # remote configuration, shows a diff for config updates, and checks versions.
echo "-------------------------------------"
# Create an array of options from the package names (the keys of our map) # --- Functions ---
options=("${!SCRIPT_PACKAGES[@]}")
options+=("Quit") # Add a Quit option
# Set the prompt for the select menu # Get the version from a local script file.
PS3="Please enter the number of the script/package you want to download: " get_local_version() {
local file_path="$1"
if [[ -f "${file_path}" ]]; then
head -n 5 "${file_path}" | grep -m 1 "^# Version:" | awk '{print $NF}'
else
echo "0.0.0" # Return a base version if file doesn't exist.
fi
}
# Compare two version strings. Returns 0 if v1 is newer.
is_version_greater() {
local v1=$1
local v2=$2
if [[ "$(printf '%s\n' "$v1" "$v2" | sort -V | head -n 1)" != "$v1" ]]; then
return 0 # v1 is greater
else
return 1 # v1 is not greater (equal or less)
fi
}
# Process a single selected package.
process_package() {
local choice_key="$1"
local force_overwrite="$2" # Expects "true" or "false"
if [[ -z "${SCRIPT_PACKAGES[$choice_key]}" ]]; then
echo "[❌] Invalid package name provided: '${choice_key}'"
return
fi
echo
echo "[⬇️] Processing package: '${choice_key}'..."
# Parse the pipe-separated config format
config_value="${SCRIPT_PACKAGES[$choice_key]}"
display_name=$(echo "${config_value}" | cut -d'|' -f1)
remote_version=$(echo "${config_value}" | cut -d'|' -f2)
description=$(echo "${config_value}" | cut -d'|' -f3)
urls_to_download=$(echo "${config_value}" | cut -d'|' -f4)
install_script=$(echo "${config_value}" | cut -d'|' -f5) # Optional install script
read -r -a urls_to_download_array <<< "$urls_to_download"
for url in "${urls_to_download_array[@]}"; do
filename=$(basename "${url}")
# Handle config file overwrites
if [[ "${filename}" == *.conf && -f "${filename}" ]]; then
if [[ "$force_overwrite" == "true" ]]; then
echo "[⚠️] Overwriting '${filename}' due to --overwrite-config flag."
if ! curl -fsSL -o "${filename}" "${url}"; then
echo "[❌] Error: Failed to download '${filename}'."
fi
continue
fi
echo "[->] Found existing config file: '${filename}'."
# Download the new version to a temporary file for comparison
tmp_file=$(mktemp)
if curl -fsSL -o "${tmp_file}" "${url}"; then
echo "[🔎] Comparing versions..."
echo "-------------------- DIFF START --------------------"
# Prefer colordiff; otherwise probe --color support against /dev/null so
# a real difference (exit 1) isn't mistaken for a missing option
if command -v colordiff &> /dev/null; then
colordiff -u "${filename}" "${tmp_file}"
elif diff --color=always /dev/null /dev/null &> /dev/null; then
diff --color=always -u "${filename}" "${tmp_file}"
else
diff -u "${filename}" "${tmp_file}"
fi
echo "--------------------- DIFF END ---------------------"
read -p "Do you want to overwrite '${filename}'? (y/N) " -n 1 -r REPLY
echo
if [[ $REPLY =~ ^[Yy]$ ]]; then
mv "${tmp_file}" "${filename}"
echo "[✅] Updated '${filename}'."
else
rm "${tmp_file}"
echo "[🤷] Kept existing version of '${filename}'."
fi
else
echo "[❌] Error downloading new version of '${filename}' for comparison."
rm -f "${tmp_file}"
fi
else
# Original download logic for all other files
echo "[->] Downloading '${filename}'..."
if curl -fsSL -o "${filename}" "${url}"; then
echo "[✅] Successfully downloaded '${filename}'."
# If the downloaded file is a shell script, make it executable
if [[ "${filename}" == *.sh || "${filename}" == *.bash ]]; then
chmod +x "${filename}"
echo "[🤖] Made '${filename}' executable."
fi
else
echo "[❌] Error: Failed to download '${filename}'."
fi
fi
done
if [[ -n "${install_script}" ]]; then
echo "[⚙️] Running install script for '${choice_key}'..."
# Test the pipeline directly so $? in the else branch reflects the
# install script itself, not an intermediate test command
if bash -c "$(curl -fsSL "${install_script}")"; then
echo "[✅] Install script completed successfully."
else
echo "[❌] Install script failed with exit code $?."
fi
fi
echo "[📦] Package processing complete for '${choice_key}'."
}
# --- Main Logic ---
conf_file="packages.conf.$(date +%Y%m%d%H%M%S)"
trap 'rm -f "${conf_file}"' EXIT
echo "[🔄] Downloading configuration file..."
if ! curl -fsSL -o "${conf_file}" "https://git.technopunk.space/tomi/Scripts/raw/branch/main/packages.conf"; then
echo "[❌] Error: Failed to download packages.conf. Exiting."
exit 1
fi
echo "[✅] Configuration file downloaded successfully."
source "${conf_file}"
# --- Argument Parsing for Non-Interactive Mode ---
if [ "$#" -gt 0 ]; then
declare -a packages_to_install
overwrite_configs=false
for arg in "$@"; do
case $arg in
--overwrite-config)
overwrite_configs=true
;;
-*)
echo "[❌] Unknown flag: $arg" >&2
exit 1
;;
*)
packages_to_install+=("$arg")
;;
esac
done
if [ ${#packages_to_install[@]} -eq 0 ]; then
echo "[❌] Flag provided with no package names. Exiting."
exit 1
fi
echo "[🚀] Running in non-interactive mode."
for pkg_key in "${packages_to_install[@]}"; do
if [[ -n "${SCRIPT_PACKAGES[$pkg_key]}" ]]; then
process_package "$pkg_key" "$overwrite_configs"
else
echo "[⚠️] Unknown package: '$pkg_key'. Skipping."
fi
done
echo "[🏁] Non-interactive run complete."
exit 0
fi
# --- Interactive Mode ---
declare -a ordered_keys
# Sort package keys for a stable menu order (mapfile avoids word splitting)
mapfile -t package_keys_sorted < <(printf '%s\n' "${!SCRIPT_PACKAGES[@]}" | sort)
ordered_keys=("${package_keys_sorted[@]}")
# --- Display Menu ---
echo
echo "-------------------------------------"
echo " Script Downloader "
echo "-------------------------------------"
echo "[🔎] Checking for updates..."
echo
for i in "${!ordered_keys[@]}"; do
key="${ordered_keys[$i]}"
config_value="${SCRIPT_PACKAGES[$key]}"
display_name=$(echo "${config_value}" | cut -d'|' -f1)
remote_version=$(echo "${config_value}" | cut -d'|' -f2)
description=$(echo "${config_value}" | cut -d'|' -f3)
urls=$(echo "${config_value}" | cut -d'|' -f4)
# install_script=$(echo "${config_value}" | cut -d'|' -f5) # Not used for display in menu
read -r -a url_array <<< "$urls"
main_script_filename=$(basename "${url_array[0]}")
local_version=$(get_local_version "${main_script_filename}")
# Print main package line
echo -e "\033[1m$((i+1))) $key - $display_name (v$remote_version)\033[0m"
# Print description
echo " $description"
# Print status
if [[ -f "${main_script_filename}" ]]; then
if is_version_greater "$remote_version" "$local_version"; then
echo -e " \033[33m[Update available: v${local_version} -> v${remote_version}]\033[0m"
else
echo -e " \033[32m[Installed: v${local_version}]\033[0m"
fi
fi
echo
done
quit_num=$((${#ordered_keys[@]} + 1))
echo -e "\033[1m${quit_num}) Quit\033[0m"
echo
# --- Handle User Input ---
read -p "Please enter your choice(s) (e.g., 1 3 4), or press Enter to quit: " -r -a user_choices
if [ ${#user_choices[@]} -eq 0 ]; then
echo "[👋] No selection made. Exiting."
exit 0
fi
for choice_num in "${user_choices[@]}"; do
if ! [[ "$choice_num" =~ ^[0-9]+$ ]]; then
echo "[⚠️] Skipping invalid input: '${choice_num}'. Not a number."
continue
fi
if [ "$choice_num" -eq "$quit_num" ]; then
echo "[👋] Quit selected. Exiting."
exit 0
fi
index=$((choice_num - 1))
if [[ -z "${ordered_keys[$index]}" ]]; then
echo "[⚠️] Skipping invalid choice: '${choice_num}'. Out of range."
continue
fi
choice_key="${ordered_keys[$index]}"
process_package "$choice_key" "false" # Never force overwrite in interactive mode
done
echo
echo "[🏁] All selected packages have been processed."


@@ -1,4 +1,6 @@
#!/bin/bash
# Version: 1.2.3
# Author: Tomi Eckert
# A script to interactively manage SAP HANA hdbuserstore keys, with testing.
@@ -11,7 +13,20 @@ COLOR_NC='\033[0m' # No Color
# --- Configuration ---
# Adjust these paths if your HANA client is installed elsewhere.
# Define potential HDB client paths
HDB_CLIENT_PATH_1="/usr/sap/hdbclient"
HDB_CLIENT_PATH_2="/usr/sap/NDB/HDB00/exe"
# Check which path exists and set HDB_CLIENT_PATH accordingly
if [ -d "$HDB_CLIENT_PATH_1" ]; then
HDB_CLIENT_PATH="$HDB_CLIENT_PATH_1"
elif [ -d "$HDB_CLIENT_PATH_2" ]; then
HDB_CLIENT_PATH="$HDB_CLIENT_PATH_2"
else
echo -e "${COLOR_RED}❌ Error: Neither '$HDB_CLIENT_PATH_1' nor '$HDB_CLIENT_PATH_2' found.${COLOR_NC}"
echo -e "${COLOR_RED}Please install the SAP HANA client or adjust the paths in the script.${COLOR_NC}"
exit 1
fi
HDB_USERSTORE_EXEC="${HDB_CLIENT_PATH}/hdbuserstore"
HDB_SQL_EXEC="${HDB_CLIENT_PATH}/hdbsql"
@@ -64,7 +79,7 @@ create_new_key() {
# Conditionally build the connection string
if [[ "$is_systemdb" =~ ^[Yy]$ ]]; then
CONNECTION_STRING="${hdb_host}:3${hdb_instance}13"
echo -e "${COLOR_YELLOW}💡 Connecting to SYSTEMDB. Tenant name will be omitted from the connection string.${COLOR_NC}"
else
read -p "Enter the Tenant DB [NDB]: " hdb_tenant
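The port change in this hunk (suffix 15 → 13) matches SAP HANA's port numbering in multi-tenant systems: SQL access to SYSTEMDB goes through the nameserver on port 3&lt;instance&gt;13, while 3&lt;instance&gt;15 is the SQL port of the first tenant's indexserver. A minimal sketch of the arithmetic, using the default instance number from the script's prompts:

```shell
#!/bin/bash
hdb_instance="00"  # two-digit instance number, as prompted by the script

systemdb_port="3${hdb_instance}13"  # SYSTEMDB SQL port (nameserver)
tenant_port="3${hdb_instance}15"    # first tenant's indexserver SQL port

echo "$systemdb_port"  # 30013
echo "$tenant_port"    # 30015
```

So with instance 00, the old code pointed SYSTEMDB keys at the tenant port 30015; the fix targets 30013.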

monitor/monitor.conf Normal file

@@ -0,0 +1,40 @@
# Configuration for SAP HANA Monitoring Script
# Author: Tomi Eckert
# --- Company Information ---
# Used to identify which company the alert is for.
COMPANY_NAME="Company"
# --- Notification Settings ---
# Your ntfy.sh topic URL
NTFY_TOPIC_URL="https://ntfy.technopunk.space/sap"
# Your ntfy.sh bearer token (if required)
NTFY_TOKEN="tk_xxxxx"
# --- HANA Connection Settings ---
# Full path to the sapcontrol executable
SAPCONTROL_PATH="<sapcontrol_path>"
# Full path to the hdbsql executable
HDBSQL_PATH="<hdbsql_path>"
# HANA user key for authentication
HANA_USER_KEY="CRONKEY"
# HANA Instance Number for sapcontrol
HANA_INSTANCE_NR="00"
# --- Monitoring Thresholds ---
# Disk usage percentage that triggers an alert
DISK_USAGE_THRESHOLD=80
# Percentage of 'Truncated' log segments that triggers an alert
TRUNCATED_PERCENTAGE_THRESHOLD=50
# Percentage of 'Free' log segments below which an alert is triggered
FREE_PERCENTAGE_THRESHOLD=25
# Maximum age of the last successful full data backup in hours.
BACKUP_THRESHOLD_HOURS=25
# Statement queue length that triggers a check
STATEMENT_QUEUE_THRESHOLD=100
# Number of consecutive runs the queue must be over threshold to trigger an alert
STATEMENT_QUEUE_CONSECUTIVE_RUNS=3
# --- Monitored Directories ---
# List of directories to check for disk usage (bash array)
DIRECTORIES_TO_MONITOR=("/hana/log" "/hana/shared" "/hana/data" "/usr/sap")

monitor/monitor.hook.sh Normal file

@@ -0,0 +1,56 @@
#!/bin/bash
# Author: Tomi Eckert
# This script helps to configure monitor.conf
# Source the monitor.conf to get current values
source monitor.conf
# Check if COMPANY_NAME or NTFY_TOKEN are still default
if [ "$COMPANY_NAME" = "Company" ] || [ "$NTFY_TOKEN" = "tk_xxxxx" ]; then
echo "Default COMPANY_NAME or NTFY_TOKEN detected. Running configuration..."
else
echo "COMPANY_NAME and NTFY_TOKEN are already configured. Exiting."
exit 0
fi
# Prompt for COMPANY_NAME
read -p "Enter Company Name (e.g., MyCompany): " COMPANY_NAME_INPUT
COMPANY_NAME_INPUT=${COMPANY_NAME_INPUT:-"$COMPANY_NAME"} # Default to current value if not provided
# Prompt for NTFY_TOKEN
read -p "Enter ntfy.sh token (e.g., tk_xxxxx): " NTFY_TOKEN_INPUT
NTFY_TOKEN_INPUT=${NTFY_TOKEN_INPUT:-"$NTFY_TOKEN"} # Default to current value if not provided
# Define HANA client paths
HDB_CLIENT_PATH="/usr/sap/hdbclient"
HDB_USERSTORE_EXEC="${HDB_CLIENT_PATH}/hdbuserstore"
# List HANA user keys and prompt for selection
echo "Available HANA User Keys:"
HANA_KEYS=$("$HDB_USERSTORE_EXEC" list 2>/dev/null | tail -n +3 | grep '^KEY ' | awk '{print $2}')
if [ -z "$HANA_KEYS" ]; then
echo "No HANA user keys found. Please create one using keymanager.sh or enter manually."
read -p "Enter HANA User Key (e.g., CRONKEY): " HANA_USER_KEY_INPUT
else
echo "$HANA_KEYS"
read -p "Enter HANA User Key from the list above (e.g., CRONKEY): " HANA_USER_KEY_INPUT
fi
HANA_USER_KEY_INPUT=${HANA_USER_KEY_INPUT:-"CRONKEY"} # Default value
# Find paths for sapcontrol and hdbsql
SAPCONTROL_PATH_INPUT=$(which sapcontrol)
HDBSQL_PATH_INPUT=$(which hdbsql)
# Default values if not found
SAPCONTROL_PATH_INPUT=${SAPCONTROL_PATH_INPUT:-"/usr/sap/NDB/HDB00/exe/sapcontrol"}
HDBSQL_PATH_INPUT=${HDBSQL_PATH_INPUT:-"/usr/sap/hdbclient/hdbsql"}
# Update monitor.conf
sed -i "s/^COMPANY_NAME=\".*\"/COMPANY_NAME=\"$COMPANY_NAME_INPUT\"/" monitor.conf
sed -i "s/^NTFY_TOKEN=\".*\"/NTFY_TOKEN=\"$NTFY_TOKEN_INPUT\"/" monitor.conf
sed -i "s#^SAPCONTROL_PATH=\".*\"#SAPCONTROL_PATH=\"$SAPCONTROL_PATH_INPUT\"#" monitor.conf
sed -i "s#^HDBSQL_PATH=\".*\"#HDBSQL_PATH=\"$HDBSQL_PATH_INPUT\"#" monitor.conf
sed -i "s/^HANA_USER_KEY=\".*\"/HANA_USER_KEY=\"$HANA_USER_KEY_INPUT\"/" monitor.conf
echo "monitor.conf updated successfully!"

monitor/monitor.sh Normal file

@@ -0,0 +1,244 @@
#!/bin/bash
# Version: 1.3.1
# Author: Tomi Eckert
# =============================================================================
# SAP HANA Monitoring Script
#
# Checks HANA processes, disk usage, log segments, and statement queue.
# Sends ntfy.sh notifications if thresholds are exceeded.
# =============================================================================
# --- Lock File Implementation ---
LOCK_FILE="/tmp/hana_monitor.lock"
if [ -e "$LOCK_FILE" ]; then
echo "▶️ Script is already running. Exiting."
exit 1
fi
touch "$LOCK_FILE"
# Ensure lock file is removed on script exit
trap 'rm -f "$LOCK_FILE"' EXIT
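The `-e` test followed by `touch` leaves a small race: two instances started at the same moment can both pass the check before either creates the file. A hedged sketch of an atomic variant using `mkdir`, which creates the lock or fails in a single step (the `$$` suffix is only so this sketch can run alongside the real script; the actual lock path would stay fixed):

```shell
#!/bin/bash
# Illustrative lock directory; mkdir either creates it or fails atomically.
LOCK_DIR="/tmp/hana_monitor.lock.$$"

if ! mkdir "$LOCK_DIR" 2>/dev/null; then
  echo "▶️ Script is already running. Exiting."
  exit 1
fi
# Remove the lock directory on any exit path.
trap 'rmdir "$LOCK_DIR"' EXIT
```

The same shape works with `flock` on systems that have it; `mkdir` is simply the most portable atomic primitive.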
# --- Configuration and Setup ---
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" &>/dev/null && pwd)"
CONFIG_FILE="${SCRIPT_DIR}/monitor.conf"
if [ ! -f "$CONFIG_FILE" ]; then
echo "❌ Error: Configuration file not found at ${CONFIG_FILE}" >&2
rm -f "$LOCK_FILE"
exit 1
fi
source "$CONFIG_FILE"
STATE_DIR="${SCRIPT_DIR}/monitor_state"
mkdir -p "${STATE_DIR}"
# Helper functions for state management
get_state() {
local key="$1"
if [ -f "${STATE_DIR}/${key}.state" ]; then
cat "${STATE_DIR}/${key}.state"
else
echo ""
fi
}
set_state() {
local key="$1"
local value="$2"
echo "$value" > "${STATE_DIR}/${key}.state"
}
HOSTNAME=$(hostname)
SQL_QUERY="SELECT b.host, b.service_name, a.state, count(*) FROM PUBLIC.M_LOG_SEGMENTS a JOIN PUBLIC.M_SERVICES b ON (a.host = b.host AND a.port = b.port) GROUP BY b.host, b.service_name, a.state;"
send_notification_if_changed() {
local alert_key="$1"
local title_prefix="$2" # e.g., "HANA Process"
local current_message="$3"
local is_alert_condition="$4" # "true" or "false"
local current_value="$5" # The value to store as state (e.g., "85%", "GREEN", "ALERT")
local previous_value=$(get_state "${alert_key}")
if [ "$current_value" != "$previous_value" ]; then
local full_title=""
local full_message=""
if [ "$is_alert_condition" == "true" ]; then
full_title="${title_prefix} Alert"
full_message="🚨 Critical: ${current_message}"
else
# Check if it was previously an alert (i.e., previous_value was not "OK")
if [ -n "$previous_value" ] && [ "$previous_value" != "OK" ]; then
full_title="${title_prefix} Resolved"
full_message="✅ Resolved: ${current_message}"
else
# No alert, and no previous alert to resolve, so just update state silently
set_state "${alert_key}" "$current_value"
return
fi
fi
local final_message="[${COMPANY_NAME} | ${HOSTNAME}] ${full_message}"
curl -H "Authorization: Bearer ${NTFY_TOKEN}" -H "Title: ${full_title}" -d "${final_message}" "${NTFY_TOPIC_URL}" > /dev/null 2>&1
set_state "${alert_key}" "$current_value"
echo "🔔 Notification sent for ${alert_key}: ${full_message}"
fi
}
# --- HANA Process Status ---
echo "⚙️ Checking HANA process status..."
if [ ! -x "$SAPCONTROL_PATH" ]; then
echo "❌ Error: sapcontrol not found or not executable at ${SAPCONTROL_PATH}" >&2
send_notification_if_changed "hana_sapcontrol_path" "HANA Monitor Error" "sapcontrol not found or not executable at ${SAPCONTROL_PATH}" "true" "SAPCONTROL_ERROR"
exit 1
fi
non_green_processes=$("${SAPCONTROL_PATH}" -nr "${HANA_INSTANCE_NR}" -function GetProcessList | tail -n +6 | grep -v 'GREEN')
if [ -n "$non_green_processes" ]; then
echo "🚨 Alert: One or more HANA processes are not running!" >&2
echo "$non_green_processes" >&2
send_notification_if_changed "hana_processes" "HANA Process" "One or more HANA processes are not GREEN. Problem processes: ${non_green_processes}" "true" "PROCESS_ALERT:${non_green_processes}"
exit 1 # Exit early as other checks might fail
else
send_notification_if_changed "hana_processes" "HANA Process" "All HANA processes are GREEN." "false" "OK"
echo "✅ Success! All HANA processes are GREEN."
fi
# --- Disk Space Monitoring ---
echo "⚙️ Checking disk usage..."
for dir in "${DIRECTORIES_TO_MONITOR[@]}"; do
if [ ! -d "$dir" ]; then
echo "⚠️ Warning: Directory '$dir' not found. Skipping." >&2
send_notification_if_changed "disk_dir_not_found_${dir//\//_}" "HANA Disk Warning" "Directory '$dir' not found." "true" "DIR_NOT_FOUND"
continue
fi
usage=$(df -h "$dir" | awk 'NR==2 {print $5}' | sed 's/%//')
echo " - ${dir} is at ${usage}%"
if (( $(echo "$usage > $DISK_USAGE_THRESHOLD" | bc -l) )); then
echo "🚨 Alert: ${dir} usage is at ${usage}% which is above the ${DISK_USAGE_THRESHOLD}% threshold." >&2
send_notification_if_changed "disk_usage_${dir//\//_}" "HANA Disk" "Disk usage for ${dir} is at ${usage}%." "true" "${usage}%"
else
send_notification_if_changed "disk_usage_${dir//\//_}" "HANA Disk" "Disk usage for ${dir} is at ${usage}% (below threshold)." "false" "OK"
fi
done
# --- HANA Log Segment Monitoring ---
echo "⚙️ Executing HANA SQL query..."
if [ ! -x "$HDBSQL_PATH" ]; then
echo "❌ Error: hdbsql not found or not executable at ${HDBSQL_PATH}" >&2
send_notification_if_changed "hana_hdbsql_path" "HANA Monitor Error" "hdbsql not found or not executable at ${HDBSQL_PATH}" "true" "HDBSQL_ERROR"
exit 1
fi
# Capture output and exit status explicitly: with readarray and process
# substitution, $? would reflect readarray, not hdbsql.
sql_output_raw=$("$HDBSQL_PATH" -U "$HANA_USER_KEY" -c ";" "$SQL_QUERY" 2>&1)
hdbsql_rc=$?
readarray -t sql_output <<< "$sql_output_raw"
if [ $hdbsql_rc -ne 0 ]; then
echo "❌ Failure! The hdbsql command failed. Please check logs." >&2
error_message=$(printf '%s\n' "${sql_output[@]}")
send_notification_if_changed "hana_hdbsql_command" "HANA Monitor Error" "The hdbsql command failed. Details: ${error_message}" "true" "HDBSQL_COMMAND_FAILED"
exit 1
fi
total_segments=0
truncated_segments=0
free_segments=0
for line in "${sql_output[@]}"; do
if [[ -z "$line" || "$line" == *"STATE"* ]]; then continue; fi
cleaned_line=$(echo "$line" | tr -d '"')
state=$(echo "$cleaned_line" | awk -F',' '{print $3}')
count=$(echo "$cleaned_line" | awk -F',' '{print $4}')
total_segments=$((total_segments + count))
if [[ "$state" == "Truncated" ]]; then
truncated_segments=$((truncated_segments + count))
elif [[ "$state" == "Free" ]]; then
free_segments=$((free_segments + count))
fi
done
echo " Total Segments: ${total_segments}"
echo " Truncated Segments: ${truncated_segments}"
echo " Free Segments: ${free_segments}"
if [ $total_segments -eq 0 ]; then
echo "⚠️ Warning: No log segments found. Skipping percentage checks." >&2
send_notification_if_changed "hana_log_segments_total" "HANA Log Segment Warning" "No log segments found. Skipping percentage checks." "true" "NO_LOG_SEGMENTS"
else
send_notification_if_changed "hana_log_segments_total" "HANA Log Segment" "Log segments found." "false" "OK"
truncated_percentage=$((truncated_segments * 100 / total_segments))
if (( $(echo "$truncated_percentage > $TRUNCATED_PERCENTAGE_THRESHOLD" | bc -l) )); then
echo "🚨 Alert: ${truncated_percentage}% of log segments are 'Truncated'." >&2
send_notification_if_changed "hana_log_truncated" "HANA Log Segment" "${truncated_percentage}% of HANA log segments are in 'Truncated' state." "true" "${truncated_percentage}%"
else
send_notification_if_changed "hana_log_truncated" "HANA Log Segment" "${truncated_percentage}% of HANA log segments are in 'Truncated' state (below threshold)." "false" "OK"
fi
free_percentage=$((free_segments * 100 / total_segments))
if (( $(echo "$free_percentage < $FREE_PERCENTAGE_THRESHOLD" | bc -l) )); then
echo "🚨 Alert: Only ${free_percentage}% of log segments are 'Free'." >&2
send_notification_if_changed "hana_log_free" "HANA Log Segment" "Only ${free_percentage}% of HANA log segments are in 'Free' state." "true" "${free_percentage}%"
else
send_notification_if_changed "hana_log_free" "HANA Log Segment" "Only ${free_percentage}% of HANA log segments are in 'Free' state (above threshold)." "false" "OK"
fi
fi
# --- HANA Statement Queue Monitoring ---
echo "⚙️ Checking HANA statement queue..."
STATEMENT_QUEUE_SQL="SELECT COUNT(*) FROM M_SERVICE_THREADS WHERE THREAD_TYPE = 'SqlExecutor' AND THREAD_STATE = 'Queueing';"
queue_count=$("$HDBSQL_PATH" -U "$HANA_USER_KEY" -j -a -x "$STATEMENT_QUEUE_SQL" 2>/dev/null | tr -d '"')
if ! [[ "$queue_count" =~ ^[0-9]+$ ]]; then
echo "⚠️ Warning: Could not retrieve HANA statement queue count. Skipping check." >&2
send_notification_if_changed "hana_statement_queue_check_fail" "HANA Monitor Warning" "Could not retrieve statement queue count." "true" "QUEUE_CHECK_FAIL"
else
send_notification_if_changed "hana_statement_queue_check_fail" "HANA Monitor Warning" "Statement queue check is working." "false" "OK"
echo " Current statement queue length: ${queue_count}"
breach_count=$(get_state "statement_queue_breach_count")
breach_count=${breach_count:-0}
if (( queue_count > STATEMENT_QUEUE_THRESHOLD )); then
breach_count=$((breach_count + 1))
echo "📈 Statement queue is above threshold. Consecutive breach count: ${breach_count}/${STATEMENT_QUEUE_CONSECUTIVE_RUNS}."
else
breach_count=0
fi
set_state "statement_queue_breach_count" "$breach_count"
if (( breach_count >= STATEMENT_QUEUE_CONSECUTIVE_RUNS )); then
message="Statement queue has been over ${STATEMENT_QUEUE_THRESHOLD} for ${breach_count} checks. Current count: ${queue_count}."
send_notification_if_changed "hana_statement_queue_status" "HANA Statement Queue" "${message}" "true" "ALERT:${queue_count}"
else
message="Statement queue is normal. Current count: ${queue_count}."
send_notification_if_changed "hana_statement_queue_status" "HANA Statement Queue" "${message}" "false" "OK"
fi
fi
# --- HANA Backup Status Monitoring ---
echo "⚙️ Checking last successful data backup status..."
last_backup_date=$("$HDBSQL_PATH" -U "$HANA_USER_KEY" -j -a -x \
"SELECT TOP 1 SYS_START_TIME FROM M_BACKUP_CATALOG WHERE ENTRY_TYPE_NAME = 'complete data backup' AND STATE_NAME = 'successful' ORDER BY SYS_START_TIME DESC" 2>/dev/null | tr -d "\"" | sed 's/\..*//')
if [[ -z "$last_backup_date" ]]; then
message="No successful complete data backup found for ${COMPANY_NAME} HANA."
echo "🚨 Critical: ${message}"
send_notification_if_changed "hana_backup_status" "HANA Backup" "${message}" "true" "NO_BACKUP"
else
last_backup_epoch=$(date -d "$last_backup_date" +%s)
current_epoch=$(date +%s)
threshold_seconds=$((BACKUP_THRESHOLD_HOURS * 3600))
age_seconds=$((current_epoch - last_backup_epoch))
age_hours=$((age_seconds / 3600))
if (( age_seconds > threshold_seconds )); then
message="Last successful HANA backup for ${COMPANY_NAME} is ${age_hours} hours old, which exceeds the threshold of ${BACKUP_THRESHOLD_HOURS} hours. Last backup was on: ${last_backup_date}."
echo "🚨 Critical: ${message}"
send_notification_if_changed "hana_backup_status" "HANA Backup" "${message}" "true" "${age_hours}h"
else
message="Last successful backup is ${age_hours} hours old (Threshold: ${BACKUP_THRESHOLD_HOURS} hours)."
echo "✅ Success! ${message}"
send_notification_if_changed "hana_backup_status" "HANA Backup" "${message}" "false" "OK"
fi
fi
echo "✅ Success! HANA monitoring check complete."

packages.conf Normal file

@@ -0,0 +1,18 @@
#!/bin/bash
# Author: Tomi Eckert
#
# This file contains the configuration for the script downloader.
# The `SCRIPT_PACKAGES` associative array maps a short package name
# to a pipe-separated string with the following format:
# "<Display Name>|<Version>|<Description>|<Space-separated list of URLs>|[Install Script (optional)]"
# The Install Script will be executed after all files for the package are downloaded.
declare -A SCRIPT_PACKAGES
# Format: short_name="Display Name|Version|Description|URL1 URL2..."
SCRIPT_PACKAGES["aurora"]="Aurora Suite|2.1.0|A collection of scripts for managing Aurora database instances.|https://git.technopunk.space/tomi/Scripts/raw/branch/main/aurora/aurora.sh https://git.technopunk.space/tomi/Scripts/raw/branch/main/aurora/aurora.conf"
SCRIPT_PACKAGES["backup"]="Backup Suite|1.0.8|A comprehensive script for backing up system files and databases.|https://git.technopunk.space/tomi/Scripts/raw/branch/main/backup/backup.sh https://git.technopunk.space/tomi/Scripts/raw/branch/main/backup/backup.conf"
SCRIPT_PACKAGES["monitor"]="Monitor Suite|1.3.1|Scripts for monitoring system health and performance metrics.|https://git.technopunk.space/tomi/Scripts/raw/branch/main/monitor/monitor.sh https://git.technopunk.space/tomi/Scripts/raw/branch/main/monitor/monitor.conf|https://git.technopunk.space/tomi/Scripts/raw/branch/main/monitor/monitor.hook.sh"
SCRIPT_PACKAGES["keymanager"]="Key Manager|1.2.3|A utility for managing HDB user keys for SAP HANA.|https://git.technopunk.space/tomi/Scripts/raw/branch/main/keymanager.sh"
SCRIPT_PACKAGES["cleaner"]="File Cleaner|1.1.0|A simple script to clean up temporary files and logs.|https://git.technopunk.space/tomi/Scripts/raw/branch/main/cleaner.sh"
SCRIPT_PACKAGES["hanatool"]="HANA Tool|1.5.6|A command-line tool for various SAP HANA administration tasks.|https://git.technopunk.space/tomi/Scripts/raw/branch/main/hanatool.sh"
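The downloader splits these pipe-separated entries with five `cut` calls; the same split can be done in one `read` with a custom `IFS`. A minimal sketch, using the shape of the hanatool entry with a placeholder URL:

```shell
#!/bin/bash
# Hypothetical entry in the packages.conf format described above.
entry="HANA Tool|1.5.6|A command-line tool for various SAP HANA administration tasks.|https://example.invalid/hanatool.sh"

# One read splits all five fields; install_script stays empty when the
# optional fifth field is absent.
IFS='|' read -r display_name remote_version description urls install_script <<< "$entry"

echo "$display_name"    # HANA Tool
echo "$remote_version"  # 1.5.6
```

Because the URL field itself holds a space-separated list, it still needs a second split (`read -r -a`) before downloading, exactly as `process_package` does.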