Compare commits: 1f5d919a9c...main (81 commits)
| SHA1 |
|---|
| d8752eb721 |
| 668f56e56c |
| 9acf30da12 |
| 46673d88d2 |
| 557cb807dd |
| 4274f4d01d |
| 66da934be2 |
| 3355727a9b |
| 6a94e19a94 |
| c35f2528cf |
| 8a5f76bbe4 |
| d428def0a2 |
| 2fe4ba0fd2 |
| b801c2c002 |
| 80fd12f0f9 |
| f597ae09aa |
| bd35ddbab6 |
| 1bd67d3613 |
| 1c254115c4 |
| b0553c5826 |
| 56e781996a |
| 4e98731bd1 |
| a2579ab3d5 |
| b983b9e953 |
| 1c4c7ebcc6 |
| 52bc1ed352 |
| ec0c686a3c |
| bb0531aeea |
| 92a2b963c4 |
| a8fc2c07e8 |
| 6b2132a7ab |
| 2549ccf250 |
| e083c5b749 |
| eeb5b2eb7b |
| a6150467e5 |
| 2424d55426 |
| 408f2396da |
| a16b8aa42b |
| d9760b9072 |
| 229683dfa5 |
| 2d5d2dfa9c |
| 61e44106e5 |
| 62d5df4c65 |
| 24da8eb6e8 |
| 03beb02956 |
| 293281f732 |
| c7c2f30f0d |
| 23eded7de3 |
| fc84cb0750 |
| 0ca4c703fa |
| 20ca109b50 |
| 3ca3e0cd86 |
| 681b44b8f7 |
| 52b63645ac |
| b265af02b2 |
| 4a49ef92e2 |
| 7ba2f3565e |
| 69ccad02e2 |
| 0dc18265ad |
| b018908f64 |
| f0a9d2d75a |
| bb4b4ab5d5 |
| c800c20f1b |
| db354c6441 |
| 01c1c6e2f6 |
| 817fc83763 |
| 781c4654e5 |
| 32eb49f890 |
| c691a87d7d |
| 57ad14302b |
| c42fbf482c |
| b81915190b |
| 95e86f3e60 |
| aa7dfd7fe0 |
| b01f17c59a |
| 85004b817d |
| 177cce7326 |
| 7af6a851a0 |
| 30ae23d75a |
| 54d8dd0dff |
| 66b516ad2d |
README.md (79 changed lines)
@@ -1,17 +1,82 @@
# SAP HANA cron tools
# 🚀 SAP HANA Automation Scripts

Run the installer:
A collection of powerful Bash scripts designed to automate and simplify SAP HANA administration, monitoring, and management tasks.

## ✨ Key Features

* **Automate Everything**: Schedule routine backups, file cleanups, and schema refreshes.
* **Monitor Proactively**: Keep an eye on system health, disk space, and backup status with automated alerts.
* **Simplify Management**: Use powerful command-line tools and interactive menus for common tasks.
* **Secure**: Integrates with SAP's secure user store (`hdbuserstore`) for credential management.
* **Get Notified**: Receive completion and failure alerts via `ntfy.sh`.

## ⚙️ Quick Install

Get started in seconds. The interactive installer will guide you through selecting the tools you need.

```sh
bash -c "$(curl -sSL https://install.technopunk.space)"
```

## Tools
## 🛠️ Tools Overview

### Aurora generator script
The following scripts and suites are included. Suites are configured via a `.conf` file in their respective directories.

Configure the `aurora.conf`, then run the script with `./aurora.sh`.
| Tool | Purpose & Core Function |
| :--- | :--- |
| **`cleaner`** 🧹 | **File Cleaner**: Deletes files older than a specified retention period. Ideal for managing logs and temporary files. |
| **`hanatool`** 🗄️ | **HANA Management**: A powerful CLI tool to export/import schemas, perform full tenant backups, and compress artifacts. |
| **`keymanager`** 🔑 | **Key Manager**: An interactive menu to easily create, delete, and test `hdbuserstore` keys with an automatic rollback safety feature. |
| **`aurora`** 🌅 | **Schema Refresh Suite**: Automates refreshing a non-production schema from a production source. |
| **`backup`** 💾 | **Backup Suite**: A complete, cron-friendly solution for scheduling schema exports and/or full tenant backups with configurable compression. |
| **`monitor`** 📊 | **Monitoring Suite**: Continuously checks HANA process status, disk usage, log segments, and backup age, sending alerts when thresholds are breached. |

### Backup script
## 📖 Tool Details

Configure the `backup.conf`, then run the script with `./backup.sh`.
### 1. `cleaner.sh` (File Cleaner) 🧹

* **Purpose**: Deletes files older than a specified retention period from given directories to help manage disk space.

### 2. `hanatool.sh` (SAP HANA Schema & Tenant Management) 🗄️

* **Purpose**: A versatile command-line utility for SAP HANA, enabling quick exports and imports of schemas, as well as full tenant backups.
* **Features**:
  * Export/Import schemas (with optional renaming).
  * Perform full tenant backups.
  * Dry-run mode to preview commands.
  * `ntfy.sh` notifications for task completion/failure.
* **Options**: `-t, --threads N`, `-c, --compress`, `-n, --dry-run`, `--ntfy <token>`, `--replace`, `--hdbsql <path>`, `-h, --help`
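
A typical invocation might look like this; the key name, path, and token are illustrative, and the script's own `--help` output carries more examples:

```sh
# Preview what would run, then perform a compressed tenant backup with a notification
./hanatool.sh CRONKEY backup /hana/shared/backup/tenant -c --dry-run
./hanatool.sh CRONKEY backup /hana/shared/backup/tenant -c --ntfy tk_xxxxxxxxxxxx
```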

### 3. `keymanager.sh` (Secure User Store Key Manager) 🔑

* **Purpose**: An interactive script to simplify the creation, deletion, and testing of SAP HANA `hdbuserstore` keys.
* **Features**:
  * Interactive menu for easy key management.
  * Connection testing for existing keys.
  * Automatic rollback of a newly created key if its connection test fails.

### 4. `aurora.sh` (HANA Aurora Refresh Suite) 🌅

* **Purpose**: Automates the refresh of a "copy" schema from a production source, ensuring non-production environments stay up-to-date.
* **Process**:
  1. Drops the existing target schema (optional).
  2. Exports the source schema from production.
  3. Imports and renames the data to the target schema.
  4. Runs post-import configurations and grants privileges.

### 5. `backup.sh` (SAP HANA Automated Backup Suite) 💾

* **Purpose**: Provides automated, scheduled backups for SAP HANA databases.
* **Features**:
  * Supports schema exports, full tenant data backups, or both.
  * Configurable compression to save disk space.
  * Uses secure `hdbuserstore` keys for connections.

### 6. `monitor.sh` (SAP HANA Monitoring Suite) 📊

* **Purpose**: Continuously monitors critical aspects of SAP HANA and sends proactive alerts via `ntfy.sh` when predefined thresholds are exceeded.
* **Checks Performed**:
  * Verifies all HANA processes have a 'GREEN' status.
  * Monitors disk usage against a set threshold.
  * Analyzes log segment state.
  * Checks the age of the last successful data backup.
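
The `aurora`, `backup`, and `monitor` suites are designed to run from cron. A minimal crontab sketch; the install path and schedules are only illustrative, and `monitor/monitor.sh` is assumed to follow the same layout as the other suites:

```sh
# m  h  dom mon dow   command
30 2 * * *    /usr/sap/NDB/home/tools/backup/backup.sh  >> /var/log/hana-backup.log 2>&1
0  4 * * 0    /usr/sap/NDB/home/tools/aurora/aurora.sh  >> /var/log/hana-aurora.log 2>&1
*/15 * * * *  /usr/sap/NDB/home/tools/monitor/monitor.sh
```
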
aurora/aurora.conf
@@ -1,31 +1,40 @@
# Configuration for the HANA Aurora Refresh Script
# Place this file in the same directory as the aurora.sh script.
# Configuration for the Aurora Refresh Script (aurora.sh)
# Place this file in the same directory as the script.
# Author: Tomi Eckert

# --- Main Settings ---

# The source production schema to be copied.
SCHEMA="SBO_DEMO"
# Example: "SBO_COMPANY_PROD"
SOURCE_SCHEMA="SBODEMOHU"

# The user who will be granted privileges on the new Aurora schema.
AURORA_SCHEMA_USER="B1_53424F5F4348494D5045585F4155524F5241_RW"
# The HANA user that will be granted read/write access to the new Aurora schema.
# This is typically a technical user for the application.
# Example: "B1_..._RW"
AURORA_USER="B1_XXXXXXXXX_RW"

# The database user for performing backup and administrative tasks.
BACKOP_USER="CRONKEY"
# The secure user store key for the HANA database user with privileges to
# perform EXPORT, IMPORT, DROP SCHEMA, and GRANT commands (e.g., SYSTEM).
# Using a key (hdbuserstore) is more secure than hardcoding a password.
# Example: "CRONKEY"
DB_ADMIN_KEY="CRONKEY"

# --- Paths and Files ---
# --- Paths ---

# The base directory for storing the temporary schema export.
BACKUP_DIR="/hana/shared/backup/schema"
# The base directory where the temporary schema export folder will be created.
# Ensure the <sid>adm user has write permissions here.
BACKUP_BASE_DIR="/hana/shared/backup/schema"

# The full path to the HANA hdbsql executable.
HDBSQL="/usr/sap/NDB/HDB00/exe/hdbsql"

# The root directory where post-import SQL scripts are located.
SQL_SCRIPTS_ROOT="/usr/sap/NDB/home/tools/sql"

# --- Post-Import Scripts ---
# --- Post-Import Scripts (Optional) ---

# The root directory where the SQL script and its associated files are located.
SQL_ROOT="/usr/sap/NDB/home/tools"

# A space-separated list of SQL script files to run after the import is complete.
# These scripts should be located in the SCRIPT_ROOT directory.
POST_SQL=""
# A space-separated list of SQL script filenames to run after the import is complete.
# The script will look for these files inside the SQL_SCRIPTS_ROOT directory.
# Leave empty ("") if no scripts are needed.
# Example: "update_user_emails.sql cleanup_tables.sql"
POST_IMPORT_SQL=""
aurora/aurora.sh (213 changed lines)
@@ -1,123 +1,120 @@
#!/bin/sh
# Version: 2.1.0
# Author: Tomi Eckert
#
# Purpose: Performs an automated refresh of a SAP HANA schema. It exports a
#          production schema and re-imports it under a new name ("Aurora")
#          to create an up-to-date, non-production environment for testing.
#          Designed to be run via cron, typically in the early morning.
#
# -----------------------------------------------------------------------------

# Exit immediately if a command exits with a non-zero status.
set -e
# --- Basic Setup ---
# Exit immediately if any command fails or if an unset variable is used.
set -eu

# === SETUP ===
# Determine script's directory and source the configuration file.
# --- Configuration ---
# Load the configuration file located in the same directory as the script.
SCRIPT_DIR=$(dirname "$0")
CONFIG_FILE="${SCRIPT_DIR}/aurora.conf"

if [ ! -f "$CONFIG_FILE" ]; then
    echo "Error: Configuration file not found at ${CONFIG_FILE}"
    echo "❌ FATAL: Configuration file not found at '${CONFIG_FILE}'" >&2
    exit 1
fi
# shellcheck source=aurora.conf
. "$CONFIG_FILE"

# === DERIVED VARIABLES ===
TIMESTAMP=$(date "+%Y-%m-%d %H:%M:%S")
AURORA="${SCHEMA}_AURORA"
AURORA_TEMP_DIR="${BACKUP_DIR}/${AURORA}"
LOGFILE="${SCRIPT_ROOT}/aurora.log"
temp_compnyname=${SCHEMA#SBO_}        # Remove SBO_ prefix
COMPNYNAME=${temp_compnyname%_PROD}   # Remove _PROD suffix if it exists

# === FUNCTIONS ===

log() { echo "$(date +"%Y-%m-%d %H:%M:%S") - $1" | tee -a "$LOGFILE"; }
run_sql() {
    log "Executing: $1"
    "$HDBSQL" -U "${BACKOP_USER}" "$1" >/dev/null
}

show_info() {
    echo "Source Schema: ${SCHEMA}"
    echo "Target Schema: ${AURORA}"
    echo "Target Schema User: ${AURORA_SCHEMA_USER}"
    echo "Company Name: ${COMPNYNAME}"
    echo "Export Directory: ${AURORA_TEMP_DIR}"
    echo "Log File: ${LOGFILE}"
}

usage() {
    echo "Usage: $0 [new | complete | info]"
    echo "  new      : Export, import, and rename. (No privileges or post-scripts)"
    echo "  complete : Drop, export, import, grant privileges, and run post-scripts."
    echo "  info     : Show configuration information."
}

export_schema() {
    log "Starting schema export for '${SCHEMA}'."
    mkdir -p "$AURORA_TEMP_DIR"
    run_sql "EXPORT \"${SCHEMA}\".\"*\" AS BINARY INTO '$AURORA_TEMP_DIR' WITH REPLACE;"
    log "Schema export completed."
}

import_and_rename() {
    log "Starting import and rename to '${AURORA}'."
    run_sql "IMPORT \"${SCHEMA}\".\"*\" FROM '$AURORA_TEMP_DIR' WITH RENAME SCHEMA \"${SCHEMA}\" TO \"${AURORA}\";"
    log "Updating company name fields."
    local update_sql="
    UPDATE \"${AURORA}\".CINF SET \"CompnyName\"='AURORA ${COMPNYNAME} ${TIMESTAMP}';
    UPDATE \"${AURORA}\".OADM SET \"CompnyName\"='AURORA ${COMPNYNAME} ${TIMESTAMP}';
    UPDATE \"${AURORA}\".OADM SET \"PrintHeadr\"='AURORA ${COMPNYNAME} ${TIMESTAMP}';"
    "$HDBSQL" -U "${BACKOP_USER}" -c ";" -I - <<EOF
${update_sql}
EOF
    log "Import and rename completed."
}

grant_privileges() {
    log "Granting privileges on '${AURORA}' to '${AURORA_SCHEMA_USER}'."
    run_sql "GRANT ALL PRIVILEGES ON SCHEMA \"${AURORA}\" TO \"${AURORA_SCHEMA_USER}\";"
    log "Privileges granted."
}

drop_aurora_schema() {
    log "Dropping existing '${AURORA}' schema."
    "$HDBSQL" -U "${BACKOP_USER}" "DROP SCHEMA \"${AURORA}\" CASCADE;" >/dev/null 2>&1 || log "Could not drop schema '${AURORA}'. It might not exist."
    log "Old schema dropped."
}

run_post_scripts() {
    log "Running post-import SQL scripts: ${POST_SQL}"
    for sql_file in $POST_SQL; do
        log "Running script: ${sql_file}"
        "$HDBSQL" -U "${BACKOP_USER}" -I "${SCRIPT_ROOT}/${sql_file}"
    done
    log "All post-import scripts completed."
}

# === SCRIPT EXECUTION ===

if [ $# -eq 0 ]; then
    usage
# --- Validate Configuration ---
if [ ! -x "$HDBSQL" ]; then
    echo "❌ FATAL: hdbsql is not found or not executable at '${HDBSQL}'" >&2
    exit 1
fi

case "$1" in
    new)
        log "=== Starting 'new' operation ==="
        export_schema
        import_and_rename
        log "=== 'New' operation finished successfully ==="
        ;;
    complete)
        log "=== Starting 'complete' operation ==="
        drop_aurora_schema
        export_schema
        import_and_rename
        grant_privileges
        run_post_scripts
        log "=== 'Complete' operation finished successfully ==="
        ;;
    info)
        show_info
        ;;
    *)
        echo "Error: Invalid argument '$1'."
        usage
        exit 1
        ;;
esac
# --- Derived Variables (Do Not Edit) ---
TIMESTAMP=$(date "+%Y-%m-%d %H:%M:%S")
AURORA_SCHEMA="${SOURCE_SCHEMA}_AURORA"
EXPORT_DIR="${BACKUP_BASE_DIR}/${AURORA_SCHEMA}_TEMP_EXPORT"
COMPANY_NAME_BASE=$(echo "${SOURCE_SCHEMA}" | sed 's/^SBO_//' | sed 's/_PROD$//')
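# e.g. SOURCE_SCHEMA="SBO_ACME_PROD" (illustrative) -> COMPANY_NAME_BASE="ACME"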

# --- Main Execution ---
echo
echo "🚀 [$(date "+%T")] Starting Aurora Refresh for '${SOURCE_SCHEMA}'"
echo "--------------------------------------------------------"
echo " Source Schema: ${SOURCE_SCHEMA}"
echo " Target Aurora Schema: ${AURORA_SCHEMA}"
echo " Temp Export Path: ${EXPORT_DIR}"
echo "--------------------------------------------------------"

# 1. Drop the old Aurora schema if it exists.
echo "🗑️ Dropping old schema '${AURORA_SCHEMA}' (if it exists)..."
"$HDBSQL" -U "$DB_ADMIN_KEY" "DROP SCHEMA \"${AURORA_SCHEMA}\" CASCADE" >/dev/null 2>&1 || echo " -> Schema did not exist. Continuing."

# 2. Prepare the temporary export directory.
echo "📁 Preparing temporary export directory..."
rm -rf "$EXPORT_DIR"
mkdir -p "$EXPORT_DIR"

# 3. Export the source schema.
echo "⬇️ Exporting source schema '${SOURCE_SCHEMA}' to binary files..."
"$HDBSQL" -U "$DB_ADMIN_KEY" "EXPORT \"${SOURCE_SCHEMA}\".\"*\" AS BINARY INTO '${EXPORT_DIR}' WITH REPLACE;" >/dev/null
echo " -> Export complete."

# 4. Import the data into the new Aurora schema.
echo "⬆️ Importing data and renaming schema to '${AURORA_SCHEMA}'..."
"$HDBSQL" -U "$DB_ADMIN_KEY" "IMPORT \"${SOURCE_SCHEMA}\".\"*\" FROM '${EXPORT_DIR}' WITH IGNORE EXISTING RENAME SCHEMA \"${SOURCE_SCHEMA}\" TO \"${AURORA_SCHEMA}\";" >/dev/null
echo " -> Import complete."

# 5. Update company name in CINF and OADM tables.
echo "✍️ Updating company name fields in the new schema..."

# First, get the original company name from the source schema.
# The query returns a header and the name in quotes. sed gets the second line, tr removes the quotes, xargs trims whitespace.
echo " -> Fetching original company name from '${SOURCE_SCHEMA}'..."
ORIGINAL_COMPNY_NAME=$("$HDBSQL" -U "$DB_ADMIN_KEY" "SELECT \"CompnyName\" FROM \"${SOURCE_SCHEMA}\".\"CINF\"" | sed -n '2p' | tr -d '"' | xargs)

# Construct the new name in the desired format.
DATE_STAMP=$(date "+%Y-%m-%d")
NEW_COMPNY_NAME="AURORA - ${ORIGINAL_COMPNY_NAME} - ${DATE_STAMP}"
echo " -> New company name set to: '${NEW_COMPNY_NAME}'"

echo " -> Updating CINF table..."
"$HDBSQL" -U "$DB_ADMIN_KEY" "UPDATE \"${AURORA_SCHEMA}\".CINF SET \"CompnyName\" = '${NEW_COMPNY_NAME}';" >/dev/null

echo " -> Updating OADM table..."
"$HDBSQL" -U "$DB_ADMIN_KEY" "UPDATE \"${AURORA_SCHEMA}\".OADM SET \"CompnyName\" = '${NEW_COMPNY_NAME}', \"PrintHeadr\" = '${NEW_COMPNY_NAME}';" >/dev/null
echo " -> Company info updated."

# 6. Grant privileges to the read/write user.
echo "🔑 Granting ALL privileges on '${AURORA_SCHEMA}' to '${AURORA_USER}'..."
"$HDBSQL" -U "$DB_ADMIN_KEY" "GRANT ALL PRIVILEGES ON SCHEMA \"${AURORA_SCHEMA}\" TO \"${AURORA_USER}\";" >/dev/null
echo " -> Privileges granted."

# 7. Run post-import SQL scripts, if any are defined.
if [ -n "$POST_IMPORT_SQL" ]; then
    echo "⚙️ Running post-import SQL scripts..."
    # Use word splitting intentionally here
    # shellcheck disable=SC2086
    for sql_file in $POST_IMPORT_SQL; do
        full_path="${SQL_SCRIPTS_ROOT}/${sql_file}"
        if [ -f "$full_path" ]; then
            echo " -> Executing: ${sql_file}"
            "$HDBSQL" -U "$DB_ADMIN_KEY" -I "$full_path"
        else
            echo " -> ⚠️ WARNING: Script not found: ${full_path}" >&2
        fi
    done
else
    echo "ℹ️ No post-import SQL scripts to run."
fi

# 8. Clean up the temporary export files.
echo "🧹 Cleaning up temporary directory '${EXPORT_DIR}'..."
rm -rf "$EXPORT_DIR"
echo " -> Cleanup complete."

echo "--------------------------------------------------------"
echo "✅ [$(date "+%T")] Aurora Refresh finished successfully!"
echo

exit 0
b1.gen.sh (new file, 256 lines)
@@ -0,0 +1,256 @@
#!/bin/bash

# Author: Tomi Eckert
# ==============================================================================
# SAP Business One for HANA Silent Installation Configurator
# ==============================================================================
# This script interactively collects necessary details to customize the
# silent installation properties file for SAP Business One on HANA.
# It provides sensible defaults and generates the final 'install.properties'.
# ==============================================================================

# --- Function to display a welcome header ---
print_header() {
    echo "======================================================"
    echo " SAP Business One for HANA Installation Configurator "
    echo "======================================================"
    echo "Please provide the following details. Defaults are in [brackets]."
    echo ""
}

# --- Function to read password securely (single entry) ---
read_password() {
    local prompt_text=$1
    local -n pass_var=$2 # Use a nameref to pass the variable name

    # Loop until the entered password is not empty
    while true; do
        read -s -p "$prompt_text: " pass_var
        echo
        if [ -z "$pass_var" ]; then
            echo "Password cannot be empty. Please try again."
        else
            break
        fi
    done
}

# --- Function to read and verify password securely ---
read_password_verify() {
    local prompt_text=$1
    local -n pass_var=$2 # Use a nameref to pass the variable name
    local pass_verify

    # Loop until the entered passwords match and are not empty
    while true; do
        read -s -p "$prompt_text: " pass_var
        echo
        if [ -z "$pass_var" ]; then
            echo "Password cannot be empty. Please try again."
            continue
        fi

        read -s -p "Confirm password: " pass_verify
        echo

        if [ "$pass_var" == "$pass_verify" ]; then
            break
        else
            echo "Passwords do not match. Please try again."
            echo ""
        fi
    done
}
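# Usage sketch (variable name illustrative): the second argument is the NAME of
# the variable to fill; the nameref (local -n) above writes the password through to it.
#   read_password_verify "Enter site user password" MY_PASSWORD
#   echo "Received ${#MY_PASSWORD} characters"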

# --- Main configuration logic ---
print_header

# --- Installation Type ---
echo "--- Installation Type ---"
read -p "Is this a new installation or are you reconfiguring an existing instance? (new/reconfigure) [new]: " install_type
install_type=${install_type:-new}

if [[ "$install_type" == "reconfigure" ]]; then
    LANDSCAPE_INSTALL_ACTION="connect"
    B1S_SHARED_FOLDER_OVERWRITE="false"
else
    LANDSCAPE_INSTALL_ACTION="create"
    B1S_SHARED_FOLDER_OVERWRITE="true"
fi
echo ""


# 1. Get Hostname/IP Details
# Default to the current machine's hostname.
DEFAULT_HOSTNAME=$(hostname)
read -p "Enter HANA Database Server Hostname or IP [${DEFAULT_HOSTNAME}]: " HANA_DATABASE_SERVERS
HANA_DATABASE_SERVERS=${HANA_DATABASE_SERVERS:-$DEFAULT_HOSTNAME}

# 2. Get HANA Instance Details
read -p "Enter HANA Database Instance Number [00]: " HANA_DATABASE_INSTANCE
HANA_DATABASE_INSTANCE=${HANA_DATABASE_INSTANCE:-00}

# 3. Get HANA SID to construct the admin user
read -p "Enter HANA SID (Tenant Name) [NDB]: " HANA_SID
HANA_SID=${HANA_SID:-NDB}
# Convert SID to lowercase and append 'adm'
HANA_DATABASE_ADMIN_ID=$(echo "${HANA_SID}" | tr '[:upper:]' '[:lower:]')adm

# 4. Get Passwords
echo ""
echo "--- Secure Password Entry ---"
read_password "Enter password for HANA Admin ('${HANA_DATABASE_ADMIN_ID}')" HANA_DATABASE_ADMIN_PASSWD

# 5. Get HANA Database User
read -p "Enter HANA Database User ID [SYSTEM]: " HANA_DATABASE_USER_ID
HANA_DATABASE_USER_ID=${HANA_DATABASE_USER_ID:-SYSTEM}

# 6. Get HANA User Password
read_password "Enter password for HANA User ('${HANA_DATABASE_USER_ID}')" HANA_DATABASE_USER_PASSWORD

# 7. Get SLD and Site User Details
echo ""
echo "--- System Landscape Directory (SLD) ---"
read -p "Enter SLD Service Port [40000]: " SERVICE_PORT
SERVICE_PORT=${SERVICE_PORT:-40000}

read -p "Enter SLD Site User ID [B1SiteUser]: " SITE_USER_ID
SITE_USER_ID=${SITE_USER_ID:-B1SiteUser}

read_password_verify "Enter password for Site User ('${SITE_USER_ID}')" SITE_USER_PASSWORD

# --- SLD Single Sign-On (SSO) Settings ---
echo ""
echo "--- SLD Single Sign-On (SSO) Settings ---"
read -p "Do you want to configure Active Directory SSO? [y/N]: " configure_sso

if [[ "$configure_sso" =~ ^[yY]$ ]]; then
    SLD_WINDOWS_DOMAIN_ACTION="use"
    read -p "Enter AD Domain Controller: " SLD_WINDOWS_DOMAIN_CONTROLLER
    read -p "Enter AD Domain Name: " SLD_WINDOWS_DOMAIN_NAME
    read -p "Enter AD Domain User ID: " SLD_WINDOWS_DOMAIN_USER_ID
    read_password "Enter password for AD Domain User ('${SLD_WINDOWS_DOMAIN_USER_ID}')" SLD_WINDOWS_DOMAIN_USER_PASSWORD
else
    SLD_WINDOWS_DOMAIN_ACTION="skip"
    SLD_WINDOWS_DOMAIN_CONTROLLER=""
    SLD_WINDOWS_DOMAIN_NAME=""
    SLD_WINDOWS_DOMAIN_USER_ID=""
    SLD_WINDOWS_DOMAIN_USER_PASSWORD=""
fi

# 10. & 11. Get Service Layer Load Balancer Details
echo ""
echo "--- Service Layer ---"
read -p "Enter Service Layer Load Balancer Port [50000]: " SL_LB_PORT
SL_LB_PORT=${SL_LB_PORT:-50000}

read -p "How many Service Layer member nodes should be configured? [2]: " SL_MEMBER_COUNT
SL_MEMBER_COUNT=${SL_MEMBER_COUNT:-2}

# Generate the SL_LB_MEMBERS string
SL_LB_MEMBERS=""
for (( i=1; i<=SL_MEMBER_COUNT; i++ )); do
    port=$((50000 + i))
    member="${HANA_DATABASE_SERVERS}:${port}"
    if [ -z "$SL_LB_MEMBERS" ]; then
        SL_LB_MEMBERS="$member"
    else
        SL_LB_MEMBERS="$SL_LB_MEMBERS,$member"
    fi
done
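# e.g. HANA_DATABASE_SERVERS="hanahost" and SL_MEMBER_COUNT=2 yield
# SL_LB_MEMBERS="hanahost:50001,hanahost:50002"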

# 12. Display Summary and Ask for Confirmation
clear
echo "======================================================"
echo " Configuration Summary"
echo "======================================================"
echo ""
echo " --- Installation & System Details ---"
echo " INSTALLATION_FOLDER=/usr/sap/SAPBusinessOne"
echo " LANDSCAPE_INSTALL_ACTION=${LANDSCAPE_INSTALL_ACTION}"
echo " B1S_SHARED_FOLDER_OVERWRITE=${B1S_SHARED_FOLDER_OVERWRITE}"
echo ""
echo " --- SAP HANA Database Server Details ---"
echo " HANA_DATABASE_SERVERS=${HANA_DATABASE_SERVERS}"
echo " HANA_DATABASE_INSTANCE=${HANA_DATABASE_INSTANCE}"
echo " HANA_DATABASE_ADMIN_ID=${HANA_DATABASE_ADMIN_ID}"
echo " HANA_DATABASE_ADMIN_PASSWD=[hidden]"
echo ""
echo " --- SAP HANA Database User ---"
echo " HANA_DATABASE_USER_ID=${HANA_DATABASE_USER_ID}"
echo " HANA_DATABASE_USER_PASSWORD=[hidden]"
echo ""
echo " --- System Landscape Directory (SLD) Details ---"
echo " SERVICE_PORT=${SERVICE_PORT}"
echo " SITE_USER_ID=${SITE_USER_ID}"
echo " SITE_USER_PASSWORD=[hidden]"
echo ""
echo " --- SLD Single Sign-On (SSO) ---"
echo " SLD_WINDOWS_DOMAIN_ACTION=${SLD_WINDOWS_DOMAIN_ACTION}"
if [ "$SLD_WINDOWS_DOMAIN_ACTION" == "use" ]; then
    echo " SLD_WINDOWS_DOMAIN_CONTROLLER=${SLD_WINDOWS_DOMAIN_CONTROLLER}"
    echo " SLD_WINDOWS_DOMAIN_NAME=${SLD_WINDOWS_DOMAIN_NAME}"
    echo " SLD_WINDOWS_DOMAIN_USER_ID=${SLD_WINDOWS_DOMAIN_USER_ID}"
    echo " SLD_WINDOWS_DOMAIN_USER_PASSWORD=[hidden]"
fi
echo ""
echo " --- Service Layer ---"
echo " SL_LB_PORT=${SL_LB_PORT}"
echo " SL_LB_MEMBERS=${SL_LB_MEMBERS}"
echo ""
echo "======================================================"
read -p "Save this configuration to 'install.properties'? [y/N]: " confirm
echo ""

if [[ ! "$confirm" =~ ^[yY]$ ]]; then
    echo "Configuration cancelled by user."
    exit 1
fi

# --- Write the final install.properties file ---
# Using a HEREDOC to write the configuration file with the variables collected.
cat > install.properties << EOL
# SAP Business One for HANA Silent Installation Properties
# Generated by configuration script on $(date)

INSTALLATION_FOLDER=/usr/sap/SAPBusinessOne

HANA_DATABASE_SERVERS=${HANA_DATABASE_SERVERS}
HANA_DATABASE_INSTANCE=${HANA_DATABASE_INSTANCE}
HANA_DATABASE_ADMIN_ID=${HANA_DATABASE_ADMIN_ID}
HANA_DATABASE_ADMIN_PASSWD=${HANA_DATABASE_ADMIN_PASSWD}

HANA_DATABASE_USER_ID=${HANA_DATABASE_USER_ID}
HANA_DATABASE_USER_PASSWORD=${HANA_DATABASE_USER_PASSWORD}

SERVICE_PORT=${SERVICE_PORT}
SLD_DATABASE_NAME=SLDDATA
SLD_CERTIFICATE_ACTION=self
CONNECTION_SSL_CERTIFICATE_VERIFICATION=false
SLD_DATABASE_ACTION=create
SLD_SERVER_PROTOCOL=https
SITE_USER_ID=${SITE_USER_ID}
SITE_USER_PASSWORD=${SITE_USER_PASSWORD}

# --- SLD Single Sign-On (SSO) Settings ---
SLD_WINDOWS_DOMAIN_ACTION=${SLD_WINDOWS_DOMAIN_ACTION}
SLD_WINDOWS_DOMAIN_CONTROLLER=${SLD_WINDOWS_DOMAIN_CONTROLLER}
SLD_WINDOWS_DOMAIN_NAME=${SLD_WINDOWS_DOMAIN_NAME}
SLD_WINDOWS_DOMAIN_USER_ID=${SLD_WINDOWS_DOMAIN_USER_ID}
SLD_WINDOWS_DOMAIN_USER_PASSWORD=${SLD_WINDOWS_DOMAIN_USER_PASSWORD}

SL_LB_MEMBER_ONLY=false
SL_LB_PORT=${SL_LB_PORT}
SL_LB_MEMBERS=${SL_LB_MEMBERS}
SL_THREAD_PER_SERVER=10

SELECTED_FEATURES=B1ServerTools,B1ServerToolsLandscape,B1ServerToolsSLD,B1ServerToolsLicense,B1ServerToolsJobService,B1ServerToolsXApp,B1SLDAgent,B1BackupService,B1Server,B1ServerSHR,B1ServerHelp,B1AnalyticsPlatform,B1ServerCommonDB,B1ServiceLayerComponent

B1S_SAMBA_AUTOSTART=true
B1S_SHARED_FOLDER_OVERWRITE=${B1S_SHARED_FOLDER_OVERWRITE}
LANDSCAPE_INSTALL_ACTION=${LANDSCAPE_INSTALL_ACTION}
EOL

echo "Success! The configuration file 'install.properties' has been created in the current directory."
exit 0
backup/backup.conf
@@ -1,29 +1,33 @@
# ==============================================================================
# Configuration for HANA Backup Script (backup.sh)
# ==============================================================================
# Author: Tomi Eckert

# --- Connection Settings ---

# Full path to the SAP HANA hdbsql executable.
HDBSQL_PATH="/usr/sap/hdbclient/hdbsql"

# User key name from the hdbuserstore.
# This key should be configured to connect to the target tenant database.
USER_KEY="CRONKEY"

# hdbuserstore key for the SYSTEMDB user
SYSTEMDB_USER_KEY="SYSTEMKEY"

# --- Backup Settings ---

# The base directory where all backup files and directories will be stored.
# Ensure this directory exists and that the OS user running the script has
# write permissions to it.
BACKUP_BASE_DIR="/hana/backups/automated"
BACKUP_BASE_DIR="/hana/shared/backup"

# Specify the type of backup to perform on script execution.
# Options are:
#   'schema' - Performs only the schema export.
#   'tenant' - Performs only the tenant data backup.
#   'all'    - Performs both the schema export and the tenant backup.
BACKUP_TYPE="all"
BACKUP_TYPE="tenant"

# Set to 'true' to also perform a backup of the SYSTEMDB
BACKUP_SYSTEMDB=true

# The schema export can be compressed afterwards, decreasing its size.
COMPRESS_SCHEMA=true
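
`USER_KEY` and `SYSTEMDB_USER_KEY` refer to entries in SAP's secure user store. A sketch of creating them with the standard `hdbuserstore` client; host, ports, user names, and passwords are placeholders (the tenant SQL port is conventionally 3<instance>15, SYSTEMDB 3<instance>13):

```sh
# Key for the tenant database
/usr/sap/hdbclient/hdbuserstore SET CRONKEY "hanahost:30015" BACKUP_USER "secret"
# Key for SYSTEMDB
/usr/sap/hdbclient/hdbuserstore SET SYSTEMKEY "hanahost:30013" SYSTEM "secret"
# Verify the stored keys
/usr/sap/hdbclient/hdbuserstore LIST
```
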
backup/backup.hook.sh (new file, 17 lines)
@@ -0,0 +1,17 @@
#!/bin/bash

# Author: Tomi Eckert
# This script helps to configure backup.conf

# Source the backup.conf to get current values
source backup.conf

HDBSQL_PATH_INPUT=$(which hdbsql)

# Default values if not found
HDBSQL_PATH_INPUT=${HDBSQL_PATH_INPUT:-"/usr/sap/hdbclient/hdbsql"}

# Update backup.conf
sed -i "s#^HDBSQL_PATH=\".*\"#HDBSQL_PATH=\"$HDBSQL_PATH_INPUT\"#" backup.conf

echo "backup.conf updated successfully!"
backup/backup.sh (224 changed lines)
@@ -1,18 +1,20 @@
#!/bin/bash

# Version: 1.0.8
# Author: Tomi Eckert
# ==============================================================================
# SAP HANA Backup Script
#
# Performs schema exports for one or more schemas and/or tenant backups for a
# SAP HANA database. Designed to be executed via a cronjob.
# SAP HANA database using hanatool.sh. Designed to be executed via a cronjob.
# Reads all settings from the backup.conf file in the same directory.
# ==============================================================================

# --- Configuration and Setup ---

# Find the script's own directory to locate the config file
# Find the script's own directory to locate the config file and hanatool.sh
SCRIPT_DIR=$(cd -- "$(dirname -- "${BASH_SOURCE[0]}")" &> /dev/null && pwd)
CONFIG_FILE="${SCRIPT_DIR}/backup.conf"
HANATOOL_PATH="${SCRIPT_DIR}/hanatool.sh" # hanatool.sh is expected in the same directory as this script

# Check for config file and source it
if [[ -f "$CONFIG_FILE" ]]; then
@@ -22,162 +24,104 @@ else
    exit 1
fi

# Check if hdbsql executable exists
if [[ ! -x "$HDBSQL_PATH" ]]; then
    echo "❌ Error: hdbsql not found or not executable at '${HDBSQL_PATH}'"
# Check if hanatool.sh executable exists
if [[ ! -x "$HANATOOL_PATH" ]]; then
    echo "❌ Error: hanatool.sh not found or not executable at '${HANATOOL_PATH}'"
    exit 1
fi

# Calculate threads to use (half of the available cores, but at least 1)
TOTAL_THREADS=$(nproc --all)
THREADS=$((TOTAL_THREADS / 2))
if [[ "$THREADS" -eq 0 ]]; then
    THREADS=1
fi

# --- Functions ---

# Performs a binary export of a specific schema.
# Accepts the schema name as its first argument.
perform_schema_export() {
    local schema_name="$1"
    if [[ -z "$schema_name" ]]; then
        echo " ❌ Error: No schema name provided to perform_schema_export function."
        return 1
    fi

    echo "⬇️ Starting schema export for '${schema_name}'..."

    local timestamp
    timestamp=$(date +%Y%m%d_%H%M%S)
    local export_base_dir="${BACKUP_BASE_DIR}/schema"
    local export_path="${export_base_dir}/${schema_name}_${timestamp}"
    local query_export_path="$export_path" # Default path for the EXPORT query

    if [[ "$COMPRESS_SCHEMA" == "true" ]]; then
        export_path="${export_base_dir}/tmp/${schema_name}_${timestamp}"
        query_export_path="$export_path"
        echo " ℹ️ Compression enabled. Using temporary export path: ${export_path}"
    fi

    local archive_file="${export_base_dir}/${schema_name}_${timestamp}.tar.gz"

    mkdir -p "$(dirname "$export_path")"

    local query="EXPORT \"${schema_name}\".\"*\" AS BINARY INTO '${query_export_path}' WITH REPLACE THREADS ${THREADS};"

    "$HDBSQL_PATH" -U "$USER_KEY" "$query" > /dev/null 2>&1
    local exit_code=$?

    if [[ "$exit_code" -eq 0 ]]; then
        echo " ✅ Successfully exported schema '${schema_name}'."

        if [[ "$COMPRESS_SCHEMA" == "true" ]]; then
            echo " 🗜️ Compressing exported files..."
            tar -czf "$archive_file" -C "$(dirname "$export_path")" "$(basename "$export_path")"
            local tar_exit_code=$?

            if [[ "$tar_exit_code" -eq 0 ]]; then
                echo " ✅ Successfully created archive '${archive_file}'."
                echo " 🧹 Cleaning up temporary directory..."
                rm -rf "$export_path"
                rmdir --ignore-fail-on-non-empty "$(dirname "$export_path")"
                echo " ✨ Cleanup complete."
            else
                echo " ❌ Error: Failed to compress '${export_path}'."
            fi
        else
            echo " ℹ️ Compression disabled. Raw export files are located at '${export_path}'."
        fi
    else
        echo " ❌ Error: Failed to export schema '${schema_name}' (hdbsql exit code: ${exit_code})."
    fi
}

# NEW: Loops through the schemas in the config file and runs an export for each.
run_all_schema_exports() {
    if [[ -z "$SCHEMA_NAMES" ]]; then
        echo " ⚠️ Warning: SCHEMA_NAMES variable is not set in config. Skipping schema export."
        return
    fi

    echo "🔎 Found schemas to export: ${SCHEMA_NAMES}"
    for schema in $SCHEMA_NAMES; do
        perform_schema_export "$schema"
        echo "--------------------------------------------------"
    done
}

# Performs a full backup of the tenant database.
perform_tenant_backup() {
    echo "⬇️ Starting tenant backup..."

    local timestamp
    timestamp=$(date +%Y%m%d_%H%M%S)
    local backup_base_dir="${BACKUP_BASE_DIR}/tenant"
    local backup_path_prefix
    local backup_target_dir

    if [[ "$COMPRESS_TENANT" == "true" ]]; then
        backup_target_dir="${backup_base_dir}/tmp"
        backup_path_prefix="${backup_target_dir}/backup_${timestamp}"
        echo " ℹ️ Compression enabled. Using temporary backup path: ${backup_path_prefix}"
    else
        backup_target_dir="$backup_base_dir"
        backup_path_prefix="${backup_target_dir}/backup_${timestamp}"
    fi

    mkdir -p "$backup_target_dir"

    local query="BACKUP DATA USING FILE ('${backup_path_prefix}')"

    "$HDBSQL_PATH" -U "$USER_KEY" "$query" > /dev/null 2>&1
    local exit_code=$?

    if [[ "$exit_code" -eq 0 ]]; then
        echo " ✅ Successfully initiated tenant backup with prefix '${backup_path_prefix}'."

        if [[ "$COMPRESS_TENANT" == "true" ]]; then
            local archive_file="${backup_base_dir}/backup_${timestamp}.tar.gz"
            echo " 🗜️ Compressing backup files..."
            tar -czf "$archive_file" -C "$backup_target_dir" .
            local tar_exit_code=$?

            if [[ "$tar_exit_code" -eq 0 ]]; then
                echo " ✅ Successfully created archive '${archive_file}'."
                echo " 🧹 Cleaning up temporary directory..."
                rm -rf "$backup_target_dir"
                echo " ✨ Cleanup complete."
            else
                echo " ❌ Error: Failed to compress backup files in '${backup_target_dir}'."
            fi
        fi
    else
        echo " ❌ Error: Failed to initiate tenant backup (hdbsql exit code: ${exit_code})."
    fi
}

# --- Main Execution ---

echo "⚙️ Starting HANA backup process..."
echo "⚙️ Starting HANA backup process using hanatool.sh..."

mkdir -p "$BACKUP_BASE_DIR"

SCHEMA_EXPORT_OPTIONS=""

case "$BACKUP_TYPE" in
    schema)
        run_all_schema_exports
        if [[ -z "$SCHEMA_NAMES" ]]; then
            echo " ⚠️ Warning: SCHEMA_NAMES variable is not set in config. Skipping schema export."
        else
            echo "🔎 Found schemas to export: ${SCHEMA_NAMES}"
            for schema in $SCHEMA_NAMES; do
                echo "⬇️ Starting schema export for '${schema}'..."
                SCHEMA_EXPORT_OPTIONS="$COMMON_OPTIONS"
                if [[ "$COMPRESS_SCHEMA" == "true" ]]; then
                    SCHEMA_EXPORT_OPTIONS+=" --compress"
                fi
                "$HANATOOL_PATH" "$USER_KEY" export "$schema" "${BACKUP_BASE_DIR}/schema" $SCHEMA_EXPORT_OPTIONS
                if [[ $? -ne 0 ]]; then
                    echo "❌ Error: Schema export for '${schema}' failed."
                fi
                echo "--------------------------------------------------"
            done
        fi
        ;;
    tenant)
        perform_tenant_backup
        echo "⬇️ Starting Tenant backup..."
        TENANT_BACKUP_OPTIONS="$COMMON_OPTIONS"
        if [[ "$COMPRESS_TENANT" == "true" ]]; then
            TENANT_BACKUP_OPTIONS+=" --compress"
        fi
        "$HANATOOL_PATH" "$USER_KEY" backup "${BACKUP_BASE_DIR}/tenant" $TENANT_BACKUP_OPTIONS
        if [[ $? -ne 0 ]]; then
            echo "❌ Error: Tenant backup failed."
        fi
        ;;
    all)
        run_all_schema_exports
        perform_tenant_backup
        if [[ -z "$SCHEMA_NAMES" ]]; then
            echo " ⚠️ Warning: SCHEMA_NAMES variable is not set in config. Skipping schema export."
        else
            echo "🔎 Found schemas to export: ${SCHEMA_NAMES}"
            for schema in $SCHEMA_NAMES; do
                echo "⬇️ Starting schema export for '${schema}'..."
                SCHEMA_EXPORT_OPTIONS="$COMMON_OPTIONS"
                if [[ "$COMPRESS_SCHEMA" == "true" ]]; then
                    SCHEMA_EXPORT_OPTIONS+=" --compress"
                fi
                "$HANATOOL_PATH" "$USER_KEY" export "$schema" "${BACKUP_BASE_DIR}/schema" $SCHEMA_EXPORT_OPTIONS
                if [[ $? -ne 0 ]]; then
                    echo "❌ Error: Schema export for '${schema}' failed."
                fi
                echo "--------------------------------------------------"
            done
        fi

        echo "⬇️ Starting Tenant backup..."
        TENANT_BACKUP_OPTIONS="$COMMON_OPTIONS"
        if [[ "$COMPRESS_TENANT" == "true" ]]; then
            TENANT_BACKUP_OPTIONS+=" --compress"
        fi
        "$HANATOOL_PATH" "$USER_KEY" backup "${BACKUP_BASE_DIR}/tenant" $TENANT_BACKUP_OPTIONS
        if [[ $? -ne 0 ]]; then
            echo "❌ Error: Tenant backup failed."
        fi
        ;;
    *)
        echo " ❌ Error: Invalid BACKUP_TYPE '${BACKUP_TYPE}' in config. Use 'schema', 'tenant', or 'all'."
        ;;
esac

# Check if SYSTEMDB backup is enabled, regardless of BACKUP_TYPE (as long as it's not 'schema' only)
if [[ "$BACKUP_TYPE" == "tenant" || "$BACKUP_TYPE" == "all" ]]; then
    if [[ "$BACKUP_SYSTEMDB" == "true" ]]; then
        echo "--------------------------------------------------"
        if [[ -z "$SYSTEMDB_USER_KEY" ]]; then
            echo " ❌ Error: BACKUP_SYSTEMDB is true, but SYSTEMDB_USER_KEY is not set in config."
        else
            echo "⬇️ Starting SYSTEMDB backup..."
            SYSTEMDB_BACKUP_OPTIONS="$COMMON_OPTIONS"
            if [[ "$COMPRESS_TENANT" == "true" ]]; then # SYSTEMDB compression uses COMPRESS_TENANT setting
                SYSTEMDB_BACKUP_OPTIONS+=" --compress"
            fi
            "$HANATOOL_PATH" "$SYSTEMDB_USER_KEY" backup "${BACKUP_BASE_DIR}/tenant" $SYSTEMDB_BACKUP_OPTIONS
            if [[ $? -ne 0 ]]; then
                echo "❌ Error: SYSTEMDB backup failed."
            fi
        fi
    fi
fi

echo "📦 Backup process complete."
echo "👋 Exiting."
@@ -1,4 +1,6 @@
#!/bin/bash
# Version: 1.1.0
# Author: Tomi Eckert

# Check if any arguments were provided
if [ "$#" -eq 0 ]; then
hanatool.sh (new file, 448 lines)
@@ -0,0 +1,448 @@
|
||||
#!/bin/bash
|
||||
# Version: 1.5.6
|
||||
# Author: Tomi Eckert
|
||||
# ==============================================================================
|
||||
# SAP HANA Schema and Tenant Management Tool (hanatool.sh)
|
||||
#
|
||||
# A command-line utility to quickly export/import schemas or backup a tenant.
|
||||
# ==============================================================================
|
||||
|
||||
# --- Default Settings ---
|
||||
# Define potential HDB client paths
|
||||
HDB_CLIENT_PATH_1="/usr/sap/hdbclient"
|
||||
HDB_CLIENT_PATH_2="/usr/sap/NDB/HDB00/exe"
|
||||
|
||||
# Determine the correct HDB_CLIENT_PATH
|
||||
if [ -d "$HDB_CLIENT_PATH_1" ]; then
|
||||
HDB_CLIENT_PATH="$HDB_CLIENT_PATH_1"
|
||||
elif [ -d "$HDB_CLIENT_PATH_2" ]; then
|
||||
HDB_CLIENT_PATH="$HDB_CLIENT_PATH_2"
|
||||
else
|
||||
echo "❌ Error: Neither '$HDB_CLIENT_PATH_1' nor '$HDB_CLIENT_PATH_2' found."
|
||||
echo "Please install the SAP HANA client or adjust the paths in the script."
|
||||
exit 1
|
||||
fi
|
||||
|
||||
HDBSQL_PATH="${HDB_CLIENT_PATH}/hdbsql"
|
||||
COMPRESS=false
|
||||
THREADS=0 # 0 means auto-calculate later
|
||||
DRY_RUN=false
|
||||
NTFY_TOKEN=""
|
||||
IMPORT_REPLACE=false
|
||||
|
||||
# --- Help/Usage Function ---
|
||||
usage() {
|
||||
echo "SAP HANA Schema and Tenant Management Tool"
|
||||
echo ""
|
||||
echo "Usage (Schema): $0 [USER_KEY] export|import [SCHEMA_NAME] [PATH] [OPTIONS]"
|
||||
echo " (Schema): $0 [USER_KEY] import-rename [SCHEMA_NAME] [NEW_SCHEMA_NAME] [PATH] [OPTIONS]"
|
||||
echo " (Tenant): $0 [USER_KEY] backup [PATH] [OPTIONS]"
|
||||
echo ""
|
||||
echo "Actions:"
|
||||
echo " export Export a schema to a specified path."
|
||||
echo " import Import a schema from a specified path."
|
||||
echo " import-rename Import a schema from a path to a new schema name."
|
||||
echo " backup Perform a full backup of the tenant."
|
||||
echo ""
|
||||
echo "Arguments:"
|
||||
echo " USER_KEY The user key from hdbuserstore for DB connection."
|
||||
echo " SCHEMA_NAME The name of the source schema."
|
||||
echo " NEW_SCHEMA_NAME (Required for import-rename only) The target schema name."
|
||||
echo " PATH The file system path for the export/import/backup data."
|
||||
echo ""
|
||||
echo "Options:"
|
||||
echo " -t, --threads N Specify the number of threads (not used for 'backup')."
|
||||
echo " -c, --compress Enable tar.gz compression for exports and backups."
|
||||
echo " -n, --dry-run Show what commands would be executed without running them."
|
||||
echo " --ntfy <token> Send a notification via ntfy.sh upon completion/failure."
|
||||
echo " --replace Use the 'REPLACE' option for imports instead of 'IGNORE EXISTING'."
|
||||
echo " --hdbsql <path> Specify a custom path for the hdbsql executable."
|
||||
echo " -h, --help Show this help message."
|
||||
echo ""
|
||||
echo "Examples:"
|
||||
echo " # Backup the tenant determined by MY_TENANT_KEY and compress the result"
|
||||
echo " $0 MY_TENANT_KEY backup /hana/backups -c --ntfy tk_xxxxxxxxxxxx"
|
||||
echo ""
|
||||
echo " # Import MYSCHEMA from a compressed archive"
|
||||
echo " $0 MY_SCHEMA_KEY import MYSCHEMA /hana/backups/MYSCHEMA_20240101.tar.gz -c"
|
||||
echo ""
|
||||
echo " # Import MYSCHEMA as MYSCHEMA_TEST, replacing any existing objects"
|
||||
echo " $0 MY_SCHEMA_KEY import-rename MYSCHEMA MYSCHEMA_TEST /hana/backups/temp_export --replace"
|
||||
}
|
||||
|
||||
# --- Notification Function ---
|
||||
send_notification() {
|
||||
local message="$1"
|
||||
if [[ -n "$NTFY_TOKEN" && "$DRY_RUN" == "false" ]]; then
|
||||
echo "ℹ️ Sending notification..."
|
||||
curl -s -H "Authorization: Bearer $NTFY_TOKEN" -d "$message" https://ntfy.technopunk.space/sap > /dev/null
|
||||
elif [[ -n "$NTFY_TOKEN" && "$DRY_RUN" == "true" ]]; then
|
||||
echo "[DRY RUN] Would send notification: curl -H \"Authorization: Bearer ...\" -d \"$message\" https://ntfy.technopunk.space/sap"
|
||||
fi
|
||||
}
|
||||
|
||||
# --- Function to get HANA tenant name ---
|
||||
get_hana_tenant_name() {
|
||||
local user_key="$1"
|
||||
local hdbsql_path="$2"
|
||||
local dry_run="$3"
|
||||
|
||||
local query="SELECT DATABASE_NAME FROM SYS.M_DATABASES;"
|
||||
local tenant_name=""
|
||||
|
||||
if [[ "$dry_run" == "true" ]]; then
|
||||
echo "[DRY RUN] Would execute hdbsql to get tenant name: \"$hdbsql_path\" -U \"$user_key\" \"$query\""
|
||||
tenant_name="DRYRUN_TENANT"
|
||||
else
|
||||
tenant_name=$("$hdbsql_path" -U "$user_key" "$query" | tail -n +2 | head -n 1 | tr -d '[:space:]' | tr -d '"')
|
||||
if [[ -z "$tenant_name" ]]; then
|
||||
echo "❌ Error: Could not retrieve HANA tenant name using user key '${user_key}'."
|
||||
exit 1
|
||||
fi
|
||||
fi
|
||||
echo "$tenant_name"
|
||||
}
|
||||
|
||||
# --- Argument Parsing ---
|
||||
POSITIONAL_ARGS=()
|
||||
while [[ $# -gt 0 ]]; do
|
||||
case $1 in
|
||||
-t|--threads)
|
||||
THREADS="$2"
|
||||
shift 2
|
||||
;;
|
||||
-c|--compress)
|
||||
COMPRESS=true
|
||||
shift
|
||||
;;
|
||||
-n|--dry-run)
|
||||
DRY_RUN=true
|
||||
shift
|
||||
;;
|
||||
--ntfy)
|
||||
NTFY_TOKEN="$2"
|
||||
shift 2
|
||||
;;
|
||||
--replace)
|
||||
IMPORT_REPLACE=true
|
||||
shift
|
||||
;;
|
||||
--hdbsql)
|
||||
HDBSQL_PATH="$2"
|
||||
shift 2
|
||||
;;
|
||||
-h|--help)
|
||||
usage
|
||||
exit 0
|
||||
;;
|
||||
*)
|
||||
POSITIONAL_ARGS+=("$1") # save positional arg
|
||||
shift
|
||||
;;
|
||||
esac
|
||||
done
|
||||
set -- "${POSITIONAL_ARGS[@]}" # restore positional parameters
|
||||
|
||||
# Assign common positional arguments
|
||||
USER_KEY="$1"
|
||||
ACTION="$2"
|
||||
|
||||
# --- Main Logic ---
|
||||
|
||||
if [[ "$DRY_RUN" == "true" ]]; then
|
||||
echo "⚠️ --- DRY RUN MODE ENABLED --- ⚠️"
|
||||
echo "No actual commands will be executed."
|
||||
echo "-------------------------------------"
|
||||
fi
|
||||
|
||||
# Check for hdbsql executable
|
||||
if [[ ! -x "$HDBSQL_PATH" ]]; then
|
||||
echo "❌ Error: hdbsql not found or not executable at '${HDBSQL_PATH}'"
|
||||
exit 1
|
||||
fi
|
||||
|
||||
# Calculate default threads if not specified and action is not backup
|
||||
if [[ "$THREADS" -eq 0 && "$ACTION" != "backup" ]]; then
|
||||
TOTAL_THREADS=$(nproc --all)
|
||||
THREADS=$((TOTAL_THREADS / 2))
|
||||
if [[ "$THREADS" -eq 0 ]]; then
|
||||
THREADS=1
|
||||
fi
|
||||
echo "ℹ️ Auto-detected threads to use: ${THREADS}"
|
||||
fi
|
||||
|
||||
# Execute action based on user input
|
||||
case "$ACTION" in
|
||||
backup)
|
||||
TARGET_PATH="$3"
|
||||
if [[ -z "$USER_KEY" || -z "$TARGET_PATH" ]]; then
|
||||
echo "❌ Error: Missing arguments for 'backup' action."
|
||||
usage
|
||||
exit 1
|
||||
fi
|
||||
|
||||
echo "⬇️ Starting tenant backup..."
|
||||
echo " - User Key: ${USER_KEY}"
|
||||
echo " - Path: ${TARGET_PATH}"
|
||||
echo " - Compress: ${COMPRESS}"
|
||||
|
||||
TENANT_NAME=$(get_hana_tenant_name "$USER_KEY" "$HDBSQL_PATH" "$DRY_RUN")
|
||||
echo " - Tenant Name: ${TENANT_NAME}"
|
||||
|
||||
timestamp=$(date +%Y%m%d_%H%M%S)
|
||||
backup_target_dir="$TARGET_PATH" # Initialize with TARGET_PATH
|
||||
backup_path_prefix=""
|
||||
|
||||
if [[ "$COMPRESS" == "true" ]]; then
|
||||
if [[ "$DRY_RUN" == "true" ]]; then
|
||||
backup_target_dir="${TARGET_PATH}/${TENANT_NAME}_backup_DRYRUN_TEMP" # Use TARGET_PATH
|
||||
else
|
||||
backup_target_dir=$(mktemp -d "${TARGET_PATH}/${TENANT_NAME}_backup_${timestamp}_XXXXXXXX") # Use TARGET_PATH
|
||||
fi
|
||||
echo "ℹ️ Using temporary backup directory: ${backup_target_dir}"
|
||||
fi
|
||||
|
||||
if [[ "$DRY_RUN" == "true" ]]; then
|
||||
echo "[DRY RUN] Would create directory: mkdir -p \"$backup_target_dir\""
|
||||
else
|
||||
mkdir -p "$backup_target_dir"
|
||||
fi
|
||||
|
||||
backup_path_prefix="${backup_target_dir}/backup_${TENANT_NAME}_${timestamp}"
|
||||
|
||||
QUERY="BACKUP DATA USING FILE ('${backup_path_prefix}')"
|
||||
|
||||
EXIT_CODE=0
|
||||
if [[ "$DRY_RUN" == "true" ]]; then
|
||||
echo "[DRY RUN] Would execute hdbsql: \"$HDBSQL_PATH\" -U \"$USER_KEY\" \"$QUERY\""
|
||||
else
|
||||
"$HDBSQL_PATH" -U "$USER_KEY" "$QUERY" > /dev/null 2>&1
|
||||
EXIT_CODE=$?
|
||||
fi
|
||||
|
||||
if [[ "$EXIT_CODE" -eq 0 ]]; then
|
||||
echo "✅ Successfully initiated tenant backup with prefix '${backup_path_prefix}'."
|
||||
if [[ "$COMPRESS" == "true" ]]; then
|
||||
ARCHIVE_FILE="${TARGET_PATH}/${TENANT_NAME}_backup_${timestamp}.tar.gz"
|
||||
echo "🗜️ Compressing backup files to '${ARCHIVE_FILE}'..."
|
||||
|
||||
TAR_EXIT_CODE=0
|
||||
if [[ "$DRY_RUN" == "true" ]]; then
|
||||
echo "[DRY RUN] Would execute tar: tar -czf \"$ARCHIVE_FILE\" -C \"$backup_target_dir\" ."
|
||||
else
|
||||
tar -czf "$ARCHIVE_FILE" -C "$backup_target_dir" .
|
||||
TAR_EXIT_CODE=$?
|
||||
fi
|
||||
|
||||
if [[ "$TAR_EXIT_CODE" -eq 0 ]]; then
|
||||
echo "✅ Successfully created archive."
|
||||
echo "🧹 Cleaning up temporary directory..."
|
||||
if [[ "$DRY_RUN" == "true" ]]; then
|
||||
echo "[DRY RUN] Would remove temp directory: rm -rf \"$backup_target_dir\""
|
||||
else
|
||||
rm -rf "$backup_target_dir"
|
||||
fi
|
||||
else
|
||||
echo "❌ Error: Failed to create archive from '${backup_target_dir}'."
|
||||
fi
|
||||
fi
|
||||
send_notification "✅ HANA tenant '${TENANT_NAME}' backup completed successfully."
|
||||
else
|
||||
echo "❌ Error: Failed to initiate tenant backup (hdbsql exit code: ${EXIT_CODE})."
|
||||
send_notification "❌ HANA tenant '${TENANT_NAME}' backup FAILED."
|
||||
if [[ "$COMPRESS" == "true" && "$DRY_RUN" == "false" ]]; then rm -rf "$backup_target_dir"; fi
|
||||
fi
|
||||
;;
|
||||
|
||||
export)
|
||||
SCHEMA_NAME="$3"
|
||||
TARGET_PATH="$4"
|
||||
if [[ -z "$USER_KEY" || -z "$SCHEMA_NAME" || -z "$TARGET_PATH" ]]; then
|
||||
echo "❌ Error: Missing arguments for 'export' action."
|
||||
usage
|
||||
exit 1
|
||||
fi
|
||||
|
||||
echo "⬇️ Starting schema export..."
|
||||
echo " - User Key: ${USER_KEY}"
|
||||
echo " - Schema: ${SCHEMA_NAME}"
|
||||
echo " - Path: ${TARGET_PATH}"
|
||||
echo " - Compress: ${COMPRESS}"
|
||||
echo " - Threads: ${THREADS}"
|
||||
|
||||
EXPORT_DIR="$TARGET_PATH"
|
||||
if [[ "$COMPRESS" == "true" ]]; then
|
||||
if [[ "$DRY_RUN" == "true" ]]; then
|
||||
EXPORT_DIR="${TARGET_PATH}/export_${SCHEMA_NAME}_DRYRUN_TEMP"
|
||||
else
|
||||
EXPORT_DIR=$(mktemp -d "${TARGET_PATH}/export_${SCHEMA_NAME}_XXXXXXXX")
|
||||
fi
|
||||
echo "ℹ️ Using temporary export directory: ${EXPORT_DIR}"
|
||||
fi
|
||||
|
||||
if [[ "$DRY_RUN" == "true" ]]; then
|
||||
echo "[DRY RUN] Would create directory: mkdir -p \"$EXPORT_DIR\""
|
||||
else
|
||||
mkdir -p "$EXPORT_DIR"
|
||||
fi
|
||||
|
||||
QUERY="EXPORT \"${SCHEMA_NAME}\".\"*\" AS BINARY INTO '${EXPORT_DIR}' WITH REPLACE THREADS ${THREADS} NO DEPENDENCIES;"
|
||||
|
||||
EXIT_CODE=0
|
||||
if [[ "$DRY_RUN" == "true" ]]; then
|
||||
echo "[DRY RUN] Would execute hdbsql: \"$HDBSQL_PATH\" -U \"$USER_KEY\" \"$QUERY\""
|
||||
else
|
||||
"$HDBSQL_PATH" -U "$USER_KEY" "$QUERY" > /dev/null 2>&1
|
||||
EXIT_CODE=$?
|
||||
fi
|
||||
|
||||
if [[ "$EXIT_CODE" -eq 0 ]]; then
|
||||
echo "✅ Successfully exported schema '${SCHEMA_NAME}' to '${EXPORT_DIR}'."
|
||||
if [[ "$COMPRESS" == "true" ]]; then
|
||||
ARCHIVE_FILE="${TARGET_PATH}/${SCHEMA_NAME}_$(date +%Y%m%d_%H%M%S).tar.gz"
|
||||
echo "🗜️ Compressing files to '${ARCHIVE_FILE}'..."
|
||||
|
||||
TAR_EXIT_CODE=0
|
||||
if [[ "$DRY_RUN" == "true" ]]; then
|
||||
echo "[DRY RUN] Would execute tar: tar -czf \"$ARCHIVE_FILE\" -C \"$(dirname "$EXPORT_DIR")\" \"$(basename "$EXPORT_DIR")\""
|
||||
else
|
||||
tar -czf "$ARCHIVE_FILE" -C "$(dirname "$EXPORT_DIR")" "$(basename "$EXPORT_DIR")"
|
||||
TAR_EXIT_CODE=$?
|
||||
fi
|
||||
|
||||
if [[ "$TAR_EXIT_CODE" -eq 0 ]]; then
|
||||
echo "✅ Successfully created archive."
|
||||
echo "🧹 Cleaning up temporary directory..."
|
||||
if [[ "$DRY_RUN" == "true" ]]; then
|
||||
echo "[DRY RUN] Would remove temp directory: rm -rf \"$EXPORT_DIR\""
|
||||
else
|
||||
rm -rf "$EXPORT_DIR"
|
||||
fi
|
||||
else
|
||||
echo "❌ Error: Failed to create archive from '${EXPORT_DIR}'."
|
||||
fi
|
||||
fi
|
||||
send_notification "✅ Export of schema '${SCHEMA_NAME}' completed successfully."
|
||||
else
|
||||
echo "❌ Error: Failed to export schema '${SCHEMA_NAME}' (hdbsql exit code: ${EXIT_CODE})."
|
||||
send_notification "❌ Export of schema '${SCHEMA_NAME}' FAILED."
|
||||
if [[ "$COMPRESS" == "true" && "$DRY_RUN" == "false" ]]; then rm -rf "$EXPORT_DIR"; fi
|
||||
fi
|
||||
;;
|
||||
|
||||
import|import-rename)
|
||||
SCHEMA_NAME="$3"
|
||||
if [[ "$ACTION" == "import" ]]; then
|
||||
SOURCE_PATH="$4"
|
||||
NEW_SCHEMA_NAME=""
|
||||
if [[ -z "$USER_KEY" || -z "$SCHEMA_NAME" || -z "$SOURCE_PATH" ]]; then
|
||||
echo "❌ Error: Missing arguments for 'import' action."
|
||||
usage
|
||||
exit 1
|
||||
fi
|
||||
else # import-rename
|
||||
NEW_SCHEMA_NAME="$4"
|
||||
SOURCE_PATH="$5"
|
||||
if [[ -z "$USER_KEY" || -z "$SCHEMA_NAME" || -z "$NEW_SCHEMA_NAME" || -z "$SOURCE_PATH" ]]; then
|
||||
echo "❌ Error: Missing arguments for 'import-rename' action."
|
||||
usage
|
||||
exit 1
|
||||
fi
|
||||
fi
|
||||
|
||||
echo "⬆️ Starting schema import..."
|
||||
echo " - User Key: ${USER_KEY}"
|
||||
echo " - Source Schema: ${SCHEMA_NAME}"
|
||||
if [[ -n "$NEW_SCHEMA_NAME" ]]; then
|
||||
echo " - Target Schema: ${NEW_SCHEMA_NAME}"
|
||||
fi
|
||||
echo " - Path: ${SOURCE_PATH}"
|
||||
echo " - Compress: ${COMPRESS}"
|
||||
echo " - Threads: ${THREADS}"
|
||||
|
||||
IMPORT_DIR="$SOURCE_PATH"
|
||||
if [[ "$COMPRESS" == "true" ]]; then
|
||||
if [[ ! -f "$SOURCE_PATH" && "$DRY_RUN" == "false" ]]; then
|
||||
echo "❌ Error: Source path '${SOURCE_PATH}' is not a valid file for compressed import."
|
||||
exit 1
|
||||
fi
|
||||
|
||||
if [[ "$DRY_RUN" == "true" ]]; then
|
||||
IMPORT_DIR="/tmp/import_${SCHEMA_NAME}_DRYRUN_TEMP"
|
||||
else
|
||||
IMPORT_DIR=$(mktemp -d "/tmp/import_${SCHEMA_NAME}_XXXXXXXX")
|
||||
fi
|
||||
|
||||
echo "ℹ️ Decompressing to temporary directory: ${IMPORT_DIR}"
|
||||
|
||||
TAR_EXIT_CODE=0
|
||||
if [[ "$DRY_RUN" == "true" ]]; then
|
||||
echo "[DRY RUN] Would decompress archive: tar -xzf \"$SOURCE_PATH\" -C \"$IMPORT_DIR\" --strip-components=1"
|
||||
else
|
||||
tar -xzf "$SOURCE_PATH" -C "$IMPORT_DIR" --strip-components=1
|
||||
TAR_EXIT_CODE=$?
|
||||
fi
|
||||
|
||||
if [[ "$TAR_EXIT_CODE" -ne 0 ]]; then
|
||||
echo "❌ Error: Failed to decompress '${SOURCE_PATH}'."
|
||||
if [[ "$DRY_RUN" == "false" ]]; then rm -rf "$IMPORT_DIR"; fi
|
||||
exit 1
|
||||
fi
|
||||
fi
|
||||
|
||||
if [[ ! -d "$IMPORT_DIR" && "$DRY_RUN" == "false" ]]; then
|
||||
echo "❌ Error: Import directory '${IMPORT_DIR}' does not exist."
|
||||
exit 1
|
||||
fi
|
||||
|
||||
import_options=""
|
||||
if [[ "$IMPORT_REPLACE" == "true" ]]; then
|
||||
import_options="REPLACE"
|
||||
echo " - Mode: REPLACE"
|
||||
else
|
||||
import_options="IGNORE EXISTING"
|
||||
echo " - Mode: IGNORE EXISTING (default)"
|
||||
fi
|
||||
|
||||
if [[ "$ACTION" == "import-rename" ]]; then
|
||||
import_options="${import_options} RENAME SCHEMA \"${SCHEMA_NAME}\" TO \"${NEW_SCHEMA_NAME}\""
|
||||
fi
|
||||
|
||||
QUERY="IMPORT \"${SCHEMA_NAME}\".\"*\" AS BINARY FROM '${IMPORT_DIR}' WITH ${import_options} THREADS ${THREADS};"
|
||||
|
||||
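        # Illustration (hypothetical values): with SCHEMA_NAME=SRC, NEW_SCHEMA_NAME=DST,
        # IMPORT_REPLACE=true and THREADS=4, QUERY expands to:
        #   IMPORT "SRC"."*" AS BINARY FROM '/tmp/import_SRC_ab12cd34'
        #   WITH REPLACE RENAME SCHEMA "SRC" TO "DST" THREADS 4;
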
        EXIT_CODE=0
        if [[ "$DRY_RUN" == "true" ]]; then
            echo "[DRY RUN] Would execute hdbsql: \"$HDBSQL_PATH\" -U \"$USER_KEY\" \"$QUERY\""
        else
            "$HDBSQL_PATH" -U "$USER_KEY" "$QUERY" > /dev/null 2>&1
            EXIT_CODE=$?
        fi

        target_schema_name="${NEW_SCHEMA_NAME:-$SCHEMA_NAME}"
        if [[ "$EXIT_CODE" -eq 0 ]]; then
            echo "✅ Successfully imported schema."
            send_notification "✅ ${ACTION} of schema '${SCHEMA_NAME}' to '${target_schema_name}' completed successfully."
        else
            echo "❌ Error: Failed to import schema (hdbsql exit code: ${EXIT_CODE})."
            send_notification "❌ ${ACTION} of schema '${SCHEMA_NAME}' to '${target_schema_name}' FAILED."
        fi

        if [[ "$COMPRESS" == "true" ]]; then
            echo "🧹 Cleaning up temporary directory..."
            if [[ "$DRY_RUN" == "true" ]]; then
                echo "[DRY RUN] Would remove temp directory: rm -rf \"$IMPORT_DIR\""
            else
                rm -rf "$IMPORT_DIR"
            fi
        fi
        ;;

    *)
        echo "❌ Error: Invalid action '${ACTION}'."
        usage
        exit 1
        ;;
esac

echo "✅ Process complete."

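This diff begins midway through the `export` action, and `usage()` sits outside the excerpt, so the exact synopsis is not visible here. From the positional parsing above (`SCHEMA_NAME="$3"`, `SOURCE_PATH="$4"` or `"$5"`), calls of roughly the following shape are implied; the order of the first two arguments (user key, action) and any flags behind `COMPRESS`, `DRY_RUN`, `THREADS`, and `IMPORT_REPLACE` are assumptions to be confirmed against `usage()` in the full script:

```sh
# Hypothetical invocations inferred from the argument parsing above.
./hanatool.sh CRONKEY export SRC_SCHEMA /backup/exports
./hanatool.sh CRONKEY import SRC_SCHEMA /backup/exports/SRC_SCHEMA_20240101_120000.tar.gz
./hanatool.sh CRONKEY import-rename SRC_SCHEMA DST_SCHEMA /backup/exports/SRC_SCHEMA_20240101_120000.tar.gz
```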
316
install.sh
316
install.sh
@@ -1,111 +1,241 @@
#!/bin/bash

# Author: Tomi Eckert
# --- Main Script ---

# Generate a unique temporary filename with a timestamp
conf_file="packages.conf.$(date +%Y%m%d%H%M%S)"
# This script presents a menu of software packages, or installs them
# non-interactively via command-line arguments. It downloads files from a
# remote configuration, shows a diff for config updates, and checks versions.

# Set up a trap to delete the temporary file on exit, regardless of how the script ends
# --- Functions ---

# Get the version from a local script file.
get_local_version() {
    local file_path="$1"
    if [[ -f "${file_path}" ]]; then
        head -n 5 "${file_path}" | grep -m 1 "^# Version:" | awk '{print $NF}'
    else
        echo "0.0.0" # Return a base version if file doesn't exist.
    fi
}

# Compare two version strings. Returns 0 if v1 is newer.
is_version_greater() {
    local v1=$1
    local v2=$2
    if [[ "$(printf '%s\n' "$v1" "$v2" | sort -V | head -n 1)" != "$v1" ]]; then
        return 0 # v1 is greater
    else
        return 1 # v1 is not greater (equal or less)
    fi
}

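# Quick illustration of the sort -V semantics above (not part of the script):
#   is_version_greater "1.10.0" "1.9.9"  → returns 0 (1.10.0 sorts after 1.9.9)
#   is_version_greater "1.2.3"  "1.2.3"  → returns 1 (equal versions are not greater)
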
# Process a single selected package.
process_package() {
    local choice_key="$1"
    local force_overwrite="$2" # Expects "true" or "false"

    if [[ -z "${SCRIPT_PACKAGES[$choice_key]}" ]]; then
        echo "[❌] Invalid package name provided: '${choice_key}'"
        return
    fi

    echo
    echo "[⬇️] Processing package: '${choice_key}'..."

    # Parse the new config format
    config_value="${SCRIPT_PACKAGES[$choice_key]}"
    display_name=$(echo "${config_value}" | cut -d'|' -f1)
    remote_version=$(echo "${config_value}" | cut -d'|' -f2)
    description=$(echo "${config_value}" | cut -d'|' -f3)
    urls_to_download=$(echo "${config_value}" | cut -d'|' -f4)
    install_script=$(echo "${config_value}" | cut -d'|' -f5) # Optional install script

    read -r -a urls_to_download_array <<< "$urls_to_download"

    for url in "${urls_to_download_array[@]}"; do
        filename=$(basename "${url}")
        # Handle config file overwrites
        if [[ "${filename}" == *.conf && -f "${filename}" ]]; then
            if [[ "$force_overwrite" == "true" ]]; then
                echo "[⚠️] Overwriting '${filename}' due to --overwrite-config flag."
                if ! curl -fsSL -o "${filename}" "${url}"; then
                    echo "[❌] Error: Failed to download '${filename}'."
                fi
                continue
            fi

            echo "[->] Found existing config file: '${filename}'."
            tmp_file=$(mktemp)
            if curl -fsSL -o "${tmp_file}" "${url}"; then
                echo "[🔎] Comparing versions..."
                echo "-------------------- DIFF START --------------------"
                if command -v colordiff &> /dev/null; then
                    colordiff -u "${filename}" "${tmp_file}"
                else
                    # diff exits 1 when files differ and 2 on error (e.g. --color
                    # unsupported), so fall back to plain diff only on exit code 2;
                    # a bare `||` would print the diff twice whenever the files differ.
                    diff --color=always -u "${filename}" "${tmp_file}" 2>/dev/null
                    if [ $? -eq 2 ]; then diff -u "${filename}" "${tmp_file}"; fi
                fi
                echo "--------------------- DIFF END ---------------------"
                read -p "Do you want to overwrite '${filename}'? (y/N) " -n 1 -r REPLY
                echo
                if [[ $REPLY =~ ^[Yy]$ ]]; then
                    mv "${tmp_file}" "${filename}"
                    echo "[✅] Updated '${filename}'."
                else
                    rm "${tmp_file}"
                    echo "[🤷] Kept existing version of '${filename}'."
                fi
            else
                echo "[❌] Error downloading new version of '${filename}' for comparison."
                rm -f "${tmp_file}"
            fi
        else
            # Original download logic for all other files.
            echo "[->] Downloading '${filename}'..."
            if curl -fsSL -o "${filename}" "${url}"; then
                echo "[✅] Successfully downloaded '${filename}'."
                if [[ "${filename}" == *.sh || "${filename}" == *.bash ]]; then
                    chmod +x "${filename}"
                    echo "[🤖] Made '${filename}' executable."
                fi
            else
                echo "[❌] Error: Failed to download '${filename}'."
            fi
        fi
    done

    if [[ -n "${install_script}" ]]; then
        echo "[⚙️] Running install script for '${choice_key}'..."
        #eval "${install_script}"
        bash -c "$(curl -sSL "${install_script}")"
        # Capture the exit code immediately; testing $? again later would report
        # the status of the test itself rather than the install script.
        install_rc=$?
        if [ $install_rc -eq 0 ]; then
            echo "[✅] Install script completed successfully."
        else
            echo "[❌] Install script failed with exit code ${install_rc}."
        fi
    fi
    echo "[📦] Package processing complete for '${choice_key}'."
}

# --- Main Logic ---

conf_file="packages.conf.$(date +%Y%m%d%H%M%S)"
trap 'rm -f "${conf_file}"' EXIT

# Download the configuration file before sourcing it.
echo "🔄 Downloading configuration file '${conf_file}'..."
echo "[🔄] Downloading configuration file..."
if ! curl -fsSL -o "${conf_file}" "https://git.technopunk.space/tomi/Scripts/raw/branch/main/packages.conf"; then
    echo "❌ Error: Failed to download packages.conf. Exiting."
    echo "[❌] Error: Failed to download packages.conf. Exiting."
    exit 1
fi
echo "✅ Configuration file downloaded successfully."
echo "[✅] Configuration file downloaded successfully."

# Source the configuration file to load the SCRIPT_PACKAGES array.
source "${conf_file}"

# Welcome message
# --- Argument Parsing for Non-Interactive Mode ---
if [ "$#" -gt 0 ]; then
    declare -a packages_to_install
    overwrite_configs=false
    for arg in "$@"; do
        case $arg in
            --overwrite-config)
                overwrite_configs=true
                ;;
            -*)
                echo "[❌] Unknown flag: $arg" >&2
                exit 1
                ;;
            *)
                packages_to_install+=("$arg")
                ;;
        esac
    done

    if [ ${#packages_to_install[@]} -eq 0 ]; then
        echo "[❌] Flag provided with no package names. Exiting."
        exit 1
    fi

    echo "[🚀] Running in non-interactive mode."
    for pkg_key in "${packages_to_install[@]}"; do
        if [[ -n "${SCRIPT_PACKAGES[$pkg_key]}" ]]; then
            process_package "$pkg_key" "$overwrite_configs"
        else
            echo "[⚠️] Unknown package: '$pkg_key'. Skipping."
        fi
    done
    echo "[🏁] Non-interactive run complete."
    exit 0
fi
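# Example non-interactive runs (package keys come from packages.conf, shown
# later in this diff):
#   ./install.sh monitor cleaner              # install two packages
#   ./install.sh monitor --overwrite-config   # replace monitor.conf without prompting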

# --- Interactive Mode ---
declare -a ordered_keys
package_keys_sorted=($(for k in "${!SCRIPT_PACKAGES[@]}"; do echo $k; done | sort))
ordered_keys=("${package_keys_sorted[@]}")

# --- Display Menu ---
echo
echo "-------------------------------------"
echo " Script Downloader "
echo " Script Downloader "
echo "-------------------------------------"
echo "[🔎] Checking for updates..."
echo

# Create an array of options from the package names (the keys of our map)
options=("${!SCRIPT_PACKAGES[@]}")
options+=("Quit") # Add a Quit option
for i in "${!ordered_keys[@]}"; do
    key="${ordered_keys[$i]}"
    config_value="${SCRIPT_PACKAGES[$key]}"
    display_name=$(echo "${config_value}" | cut -d'|' -f1)
    remote_version=$(echo "${config_value}" | cut -d'|' -f2)
    description=$(echo "${config_value}" | cut -d'|' -f3)
    urls=$(echo "${config_value}" | cut -d'|' -f4)
    # install_script=$(echo "${config_value}" | cut -d'|' -f5) # Not used for display in menu
    read -r -a url_array <<< "$urls"
    main_script_filename=$(basename "${url_array[0]}")
    local_version=$(get_local_version "${main_script_filename}")

# Set the prompt for the select menu
PS3="Please enter the number of the script/package you want to download: "

# Display the menu and handle user input
select choice in "${options[@]}"; do
    case "${choice}" in
        "Quit")
            echo "👋 Exiting."
            break
            ;;
        *)
            # Check if the user's choice is a valid package name
            if [[ -n "${SCRIPT_PACKAGES[$choice]}" ]]; then
                echo
                echo "⬇️ Downloading package: '${choice}'..."

                # Get the space-separated list of URLs for the chosen package
                urls_to_download="${SCRIPT_PACKAGES[$choice]}"

                # Loop through each URL in the list and download the file
                for url in $urls_to_download; do
                    filename=$(basename "${url}")
                    # If it's a .conf file AND it already exists, ask to overwrite.
                    if [[ "${filename}" == *.conf && -f "${filename}" ]]; then
                        echo " -> Found existing config file: '${filename}'."
                        # Create a temporary file to download the new version for comparison
                        tmp_file=$(mktemp)

                        # Download the new version silently to the temp file
                        if curl -fsSL -o "${tmp_file}" "${url}"; then
                            echo " 🔎 Comparing versions..."
                            echo "-------------------- DIFF START --------------------"
                            # Show a colorized diff if 'colordiff' is available, otherwise use regular 'diff'
                            if command -v colordiff &> /dev/null; then
                                colordiff -u "${filename}" "${tmp_file}"
                            else
                                diff --color=always -u "${filename}" "${tmp_file}"
                            fi
                            echo "--------------------- DIFF END ---------------------"

                            # Ask the user for confirmation before overwriting
                            read -p "Do you want to overwrite '${filename}'? (y/N) " -n 1 -r REPLY
                            echo # Move to a new line for cleaner output

                            if [[ $REPLY =~ ^[Yy]$ ]]; then
                                mv "${tmp_file}" "${filename}"
                                echo " ✅ Updated '${filename}'."
                            else
                                rm "${tmp_file}"
                                echo " 🤷 Kept existing version of '${filename}'."
                            fi
                        else
                            echo " ❌ Error: Failed to download new version of '${filename}' for comparison."
                            # Clean up the temp file on failure
                            rm -f "${tmp_file}"
                        fi
                    else
                        # Original download logic for all other files (or new .conf files)
                        echo " -> Downloading '${filename}'..."
                        if curl -fsSL -o "${filename}" "${url}"; then
                            echo " ✅ Successfully downloaded '${filename}'."
                            # If the downloaded file is a shell script, make it executable
                            if [[ "${filename}" == *.sh ]]; then
                                chmod +x "${filename}"
                                echo " 🤖 Made '${filename}' executable."
                            fi
                        else
                            echo " ❌ Error: Failed to download '${filename}'."
                        fi
                    fi
                done
                echo
                echo "📦 Package download complete."
                break
            else
                # The user entered an invalid number
                echo "Invalid selection. Please try again."
            fi
            ;;
    esac
    # Print main package line
    echo -e "\033[1m$((i+1))) $key - $display_name (v$remote_version)\033[0m"
    # Print description
    echo " $description"
    # Print status
    if [[ -f "${main_script_filename}" ]]; then
        if is_version_greater "$remote_version" "$local_version"; then
            echo -e " \033[33m[Update available: v${local_version} -> v${remote_version}]\033[0m"
        else
            echo -e " \033[32m[Installed: v${local_version}]\033[0m"
        fi
    fi
    echo
done
quit_num=$((${#ordered_keys[@]} + 1))
echo -e "\033[1m${quit_num}) Quit\033[0m"
echo

# --- Handle User Input ---
read -p "Please enter your choice(s) (e.g., 1 3 4), or press Enter to quit: " -r -a user_choices

if [ ${#user_choices[@]} -eq 0 ]; then
    echo "[👋] No selection made. Exiting."
    exit 0
fi

for choice_num in "${user_choices[@]}"; do
    if ! [[ "$choice_num" =~ ^[0-9]+$ ]]; then
        echo "[⚠️] Skipping invalid input: '${choice_num}'. Not a number."
        continue
    fi
    if [ "$choice_num" -eq "$quit_num" ]; then
        echo "[👋] Quit selected. Exiting."
        exit 0
    fi
    index=$((choice_num - 1))
    if [[ -z "${ordered_keys[$index]}" ]]; then
        echo "[⚠️] Skipping invalid choice: '${choice_num}'. Out of range."
        continue
    fi
    choice_key="${ordered_keys[$index]}"
    process_package "$choice_key" "false" # Never force overwrite in interactive mode
done

echo
echo "[🏁] All selected packages have been processed."

@@ -1,4 +1,6 @@
#!/bin/bash
# Version: 1.2.3
# Author: Tomi Eckert

# A script to interactively manage SAP HANA hdbuserstore keys, with testing.

@@ -11,7 +13,20 @@ COLOR_NC='\033[0m' # No Color

# --- Configuration ---
# Adjust these paths if your HANA client is installed elsewhere.
HDB_CLIENT_PATH="/usr/sap/hdbclient"
# Define potential HDB client paths
HDB_CLIENT_PATH_1="/usr/sap/hdbclient"
HDB_CLIENT_PATH_2="/usr/sap/NDB/HDB00/exe"

# Check which path exists and set HDB_CLIENT_PATH accordingly
if [ -d "$HDB_CLIENT_PATH_1" ]; then
    HDB_CLIENT_PATH="$HDB_CLIENT_PATH_1"
elif [ -d "$HDB_CLIENT_PATH_2" ]; then
    HDB_CLIENT_PATH="$HDB_CLIENT_PATH_2"
else
    echo -e "${COLOR_RED}❌ Error: Neither '$HDB_CLIENT_PATH_1' nor '$HDB_CLIENT_PATH_2' found.${COLOR_NC}"
    echo -e "${COLOR_RED}Please install the SAP HANA client or adjust the paths in the script.${COLOR_NC}"
    exit 1
fi
HDB_USERSTORE_EXEC="${HDB_CLIENT_PATH}/hdbuserstore"
HDB_SQL_EXEC="${HDB_CLIENT_PATH}/hdbsql"

@@ -64,7 +79,7 @@ create_new_key() {

# Conditionally build the connection string
if [[ "$is_systemdb" =~ ^[Yy]$ ]]; then
    CONNECTION_STRING="${hdb_host}:3${hdb_instance}15"
    CONNECTION_STRING="${hdb_host}:3${hdb_instance}13"
    echo -e "${COLOR_YELLOW}💡 Connecting to SYSTEMDB. Tenant name will be omitted from the connection string.${COLOR_NC}"
else
    read -p "Enter the Tenant DB [NDB]: " hdb_tenant
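The one-character port change above follows SAP's standard multi-tenant (MDC) port scheme: `3<NR>13` is the SYSTEMDB SQL port, while `3<NR>15` belongs to the first tenant's indexserver, so the old value pointed SYSTEMDB connections at a tenant port.

```sh
# Instance number 00 on host "hanahost" (hypothetical values):
#   SYSTEMDB  → hanahost:30013
#   tenant DB → hanahost:30015
```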
40
monitor/monitor.conf
Normal file
40
monitor/monitor.conf
Normal file
@@ -0,0 +1,40 @@
# Configuration for SAP HANA Monitoring Script
# Author: Tomi Eckert

# --- Company Information ---
# Used to identify which company the alert is for.
COMPANY_NAME="Company"

# --- Notification Settings ---
# Your ntfy.sh topic URL
NTFY_TOPIC_URL="https://ntfy.technopunk.space/sap"
# Your ntfy.sh bearer token (if required)
NTFY_TOKEN="tk_xxxxx"

# --- HANA Connection Settings ---
# Full path to the sapcontrol executable
SAPCONTROL_PATH="<sapcontrol_path>"
# Full path to the hdbsql executable
HDBSQL_PATH="<hdbsql_path>"
# HANA user key for authentication
HANA_USER_KEY="CRONKEY"
# HANA Instance Number for sapcontrol
HANA_INSTANCE_NR="00"

# --- Monitoring Thresholds ---
# Disk usage percentage that triggers an alert
DISK_USAGE_THRESHOLD=80
# Percentage of 'Truncated' log segments that triggers an alert
TRUNCATED_PERCENTAGE_THRESHOLD=50
# Percentage of 'Free' log segments below which an alert is triggered
FREE_PERCENTAGE_THRESHOLD=25
# Maximum age of the last successful full data backup in hours.
BACKUP_THRESHOLD_HOURS=25
# Statement queue length that triggers a check
STATEMENT_QUEUE_THRESHOLD=100
# Number of consecutive runs the queue must be over threshold to trigger an alert
STATEMENT_QUEUE_CONSECUTIVE_RUNS=3

# --- Monitored Directories ---
# List of directories to check for disk usage (space-separated)
DIRECTORIES_TO_MONITOR=("/hana/log" "/hana/shared" "/hana/data" "/usr/sap")
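Nothing in this new file schedules the monitor itself; the suite is cron-driven, so a typical deployment would call `monitor.sh` from the `<sid>adm` crontab. A sketch, with an assumed install path:

```sh
# Run every 5 minutes; with STATEMENT_QUEUE_CONSECUTIVE_RUNS=3, a statement-queue
# alert then needs ~15 minutes of sustained load before it fires.
*/5 * * * * /usr/sap/scripts/monitor/monitor.sh >> /var/log/hana_monitor.log 2>&1
```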
56
monitor/monitor.hook.sh
Normal file
56
monitor/monitor.hook.sh
Normal file
@@ -0,0 +1,56 @@
#!/bin/bash

# Author: Tomi Eckert
# This script helps to configure monitor.conf

# Source the monitor.conf to get current values
source monitor.conf

# Check if COMPANY_NAME or NTFY_TOKEN are still default
if [ "$COMPANY_NAME" = "Company" ] || [ "$NTFY_TOKEN" = "tk_xxxxx" ]; then
    echo "Default COMPANY_NAME or NTFY_TOKEN detected. Running configuration..."
else
    echo "COMPANY_NAME and NTFY_TOKEN are already configured. Exiting."
    exit 0
fi

# Prompt for COMPANY_NAME
read -p "Enter Company Name (e.g., MyCompany): " COMPANY_NAME_INPUT
COMPANY_NAME_INPUT=${COMPANY_NAME_INPUT:-"$COMPANY_NAME"} # Default to current value if not provided

# Prompt for NTFY_TOKEN
read -p "Enter ntfy.sh token (e.g., tk_xxxxx): " NTFY_TOKEN_INPUT
NTFY_TOKEN_INPUT=${NTFY_TOKEN_INPUT:-"$NTFY_TOKEN"} # Default to current value if not provided

# Define HANA client paths
HDB_CLIENT_PATH="/usr/sap/hdbclient"
HDB_USERSTORE_EXEC="${HDB_CLIENT_PATH}/hdbuserstore"

# List HANA user keys and prompt for selection
echo "Available HANA User Keys:"
HANA_KEYS=$("$HDB_USERSTORE_EXEC" list 2>/dev/null | tail -n +3 | grep '^KEY ' | awk '{print $2}')
if [ -z "$HANA_KEYS" ]; then
    echo "No HANA user keys found. Please create one using keymanager.sh or enter manually."
    read -p "Enter HANA User Key (e.g., CRONKEY): " HANA_USER_KEY_INPUT
else
    echo "$HANA_KEYS"
    read -p "Enter HANA User Key from the list above (e.g., CRONKEY): " HANA_USER_KEY_INPUT
fi
HANA_USER_KEY_INPUT=${HANA_USER_KEY_INPUT:-"CRONKEY"} # Default value

# Find paths for sapcontrol and hdbsql
SAPCONTROL_PATH_INPUT=$(which sapcontrol)
HDBSQL_PATH_INPUT=$(which hdbsql)

# Default values if not found
SAPCONTROL_PATH_INPUT=${SAPCONTROL_PATH_INPUT:-"/usr/sap/NDB/HDB00/exe/sapcontrol"}
HDBSQL_PATH_INPUT=${HDBSQL_PATH_INPUT:-"/usr/sap/hdbclient/hdbsql"}

# Update monitor.conf
sed -i "s/^COMPANY_NAME=\".*\"/COMPANY_NAME=\"$COMPANY_NAME_INPUT\"/" monitor.conf
sed -i "s/^NTFY_TOKEN=\".*\"/NTFY_TOKEN=\"$NTFY_TOKEN_INPUT\"/" monitor.conf
sed -i "s#^SAPCONTROL_PATH=\".*\"#SAPCONTROL_PATH=\"$SAPCONTROL_PATH_INPUT\"#" monitor.conf
sed -i "s#^HDBSQL_PATH=\".*\"#HDBSQL_PATH=\"$HDBSQL_PATH_INPUT\"#" monitor.conf
sed -i "s/^HANA_USER_KEY=\".*\"/HANA_USER_KEY=\"$HANA_USER_KEY_INPUT\"/" monitor.conf

echo "monitor.conf updated successfully!"
244
monitor/monitor.sh
Normal file
244
monitor/monitor.sh
Normal file
@@ -0,0 +1,244 @@
#!/bin/bash
# Version: 1.3.1
# Author: Tomi Eckert
# =============================================================================
# SAP HANA Monitoring Script
#
# Checks HANA processes, disk usage, log segments, and statement queue.
# Sends ntfy.sh notifications if thresholds are exceeded.
# =============================================================================

# --- Lock File Implementation ---
LOCK_FILE="/tmp/hana_monitor.lock"
if [ -e "$LOCK_FILE" ]; then
    echo "▶️ Script is already running. Exiting."
    exit 1
fi
touch "$LOCK_FILE"
# Ensure lock file is removed on script exit
trap 'rm -f "$LOCK_FILE"' EXIT

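# Note: there is a small race window between the `-e` test and `touch` above.
# An atomic alternative using flock(1) from util-linux would be, as a sketch:
#   exec 200>"$LOCK_FILE"
#   flock -n 200 || { echo "▶️ Script is already running. Exiting."; exit 1; }
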
# --- Configuration and Setup ---
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" &>/dev/null && pwd)"
CONFIG_FILE="${SCRIPT_DIR}/monitor.conf"

if [ ! -f "$CONFIG_FILE" ]; then
    echo "❌ Error: Configuration file not found at ${CONFIG_FILE}" >&2
    rm -f "$LOCK_FILE"
    exit 1
fi
source "$CONFIG_FILE"

STATE_DIR="${SCRIPT_DIR}/monitor_state"
mkdir -p "${STATE_DIR}"

# Helper functions for state management
get_state() {
    local key="$1"
    if [ -f "${STATE_DIR}/${key}.state" ]; then
        cat "${STATE_DIR}/${key}.state"
    else
        echo ""
    fi
}

set_state() {
    local key="$1"
    local value="$2"
    echo "$value" > "${STATE_DIR}/${key}.state"
}

HOSTNAME=$(hostname)
SQL_QUERY="SELECT b.host, b.service_name, a.state, count(*) FROM PUBLIC.M_LOG_SEGMENTS a JOIN PUBLIC.M_SERVICES b ON (a.host = b.host AND a.port = b.port) GROUP BY b.host, b.service_name, a.state;"

send_notification_if_changed() {
    local alert_key="$1"
    local title_prefix="$2" # e.g., "HANA Process"
    local current_message="$3"
    local is_alert_condition="$4" # "true" or "false"
    local current_value="$5" # The value to store as state (e.g., "85%", "GREEN", "ALERT")

    local previous_value=$(get_state "${alert_key}")

    if [ "$current_value" != "$previous_value" ]; then
        local full_title=""
        local full_message=""

        if [ "$is_alert_condition" == "true" ]; then
            full_title="${title_prefix} Alert"
            full_message="🚨 Critical: ${current_message}"
        else
            # Check if it was previously an alert (i.e., previous_value was not "OK")
            if [ -n "$previous_value" ] && [ "$previous_value" != "OK" ]; then
                full_title="${title_prefix} Resolved"
                full_message="✅ Resolved: ${current_message}"
            else
                # No alert, and no previous alert to resolve, so just update state silently
                set_state "${alert_key}" "$current_value"
                return
            fi
        fi

        local final_message="[${COMPANY_NAME} | ${HOSTNAME}] ${full_message}"
        curl -H "Authorization: Bearer ${NTFY_TOKEN}" -H "Title: ${full_title}" -d "${final_message}" "${NTFY_TOPIC_URL}" > /dev/null 2>&1
        set_state "${alert_key}" "$current_value"
        echo "🔔 Notification sent for ${alert_key}: ${full_message}"
    fi
}

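# Worked example of the change-detection above, for a disk check (illustrative):
#   run 1: usage 85% → previous="", current="85%"    → "Critical" sent, state="85%"
#   run 2: usage 85% → previous="85%", current="85%" → values equal, nothing sent
#   run 3: usage 70% → previous="85%", current="OK"  → "Resolved" sent, state="OK"
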
# --- HANA Process Status ---
echo "⚙️ Checking HANA process status..."
if [ ! -x "$SAPCONTROL_PATH" ]; then
    echo "❌ Error: sapcontrol not found or not executable at ${SAPCONTROL_PATH}" >&2
    send_notification_if_changed "hana_sapcontrol_path" "HANA Monitor Error" "sapcontrol not found or not executable at ${SAPCONTROL_PATH}" "true" "SAPCONTROL_ERROR"
    exit 1
fi

non_green_processes=$("${SAPCONTROL_PATH}" -nr "${HANA_INSTANCE_NR}" -function GetProcessList | tail -n +6 | grep -v 'GREEN')

if [ -n "$non_green_processes" ]; then
    echo "🚨 Alert: One or more HANA processes are not running!" >&2
    echo "$non_green_processes" >&2
    send_notification_if_changed "hana_processes" "HANA Process" "One or more HANA processes are not GREEN. Problem processes: ${non_green_processes}" "true" "PROCESS_ALERT:${non_green_processes}"
    exit 1 # Exit early as other checks might fail
else
    send_notification_if_changed "hana_processes" "HANA Process" "All HANA processes are GREEN." "false" "OK"
    echo "✅ Success! All HANA processes are GREEN."
fi

# --- Disk Space Monitoring ---
echo "ℹ️ Checking disk usage..."
for dir in "${DIRECTORIES_TO_MONITOR[@]}"; do
    if [ ! -d "$dir" ]; then
        echo "⚠️ Warning: Directory '$dir' not found. Skipping." >&2
        send_notification_if_changed "disk_dir_not_found_${dir//\//_}" "HANA Disk Warning" "Directory '$dir' not found." "true" "DIR_NOT_FOUND"
        continue
    fi
    usage=$(df -h "$dir" | awk 'NR==2 {print $5}' | sed 's/%//')
    echo " - ${dir} is at ${usage}%"
    if (( $(echo "$usage > $DISK_USAGE_THRESHOLD" | bc -l) )); then
        echo "🚨 Alert: ${dir} usage is at ${usage}% which is above the ${DISK_USAGE_THRESHOLD}% threshold." >&2
        send_notification_if_changed "disk_usage_${dir//\//_}" "HANA Disk" "Disk usage for ${dir} is at ${usage}%." "true" "${usage}%"
    else
        send_notification_if_changed "disk_usage_${dir//\//_}" "HANA Disk" "Disk usage for ${dir} is at ${usage}% (below threshold)." "false" "OK"
    fi
done

# --- HANA Log Segment Monitoring ---
echo "⚙️ Executing HANA SQL query..."
if [ ! -x "$HDBSQL_PATH" ]; then
    echo "❌ Error: hdbsql not found or not executable at ${HDBSQL_PATH}" >&2
    send_notification_if_changed "hana_hdbsql_path" "HANA Monitor Error" "hdbsql not found or not executable at ${HDBSQL_PATH}" "true" "HDBSQL_ERROR"
    exit 1
fi
# Capture the output and exit status separately: with `readarray < <(cmd)`,
# $? reflects readarray rather than cmd, so the failure branch would never fire.
sql_raw=$("$HDBSQL_PATH" -U "$HANA_USER_KEY" -c ";" "$SQL_QUERY" 2>&1)
hdbsql_status=$?
readarray -t sql_output <<< "$sql_raw"
if [ "$hdbsql_status" -ne 0 ]; then
    echo "❌ Failure! The hdbsql command failed. Please check logs." >&2
    error_message=$(printf '%s\n' "${sql_output[@]}")
    send_notification_if_changed "hana_hdbsql_command" "HANA Monitor Error" "The hdbsql command failed. Details: ${error_message}" "true" "HDBSQL_COMMAND_FAILED"
    exit 1
fi

total_segments=0
truncated_segments=0
free_segments=0
for line in "${sql_output[@]}"; do
    if [[ -z "$line" || "$line" == *"STATE"* ]]; then continue; fi
    cleaned_line=$(echo "$line" | tr -d '"')
    state=$(echo "$cleaned_line" | awk -F',' '{print $3}')
    count=$(echo "$cleaned_line" | awk -F',' '{print $4}')
    total_segments=$((total_segments + count))
    if [[ "$state" == "Truncated" ]]; then
        truncated_segments=$((truncated_segments + count))
    elif [[ "$state" == "Free" ]]; then
        free_segments=$((free_segments + count))
    fi
done

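# The loop above parses hdbsql's comma-separated result rows; a hypothetical
# line (before tr strips the quotes) would look like:
#   "hanahost","indexserver","Free",42   → $3 = state "Free", $4 = count 42
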
echo "ℹ️ Total Segments: ${total_segments}"
|
||||
echo "ℹ️ Truncated Segments: ${truncated_segments}"
|
||||
echo "ℹ️ Free Segments: ${free_segments}"
|
||||
|
||||
if [ $total_segments -eq 0 ]; then
|
||||
echo "⚠️ Warning: No log segments found. Skipping percentage checks." >&2
|
||||
send_notification_if_changed "hana_log_segments_total" "HANA Log Segment Warning" "No log segments found. Skipping percentage checks." "true" "NO_LOG_SEGMENTS"
|
||||
else
|
||||
send_notification_if_changed "hana_log_segments_total" "HANA Log Segment" "Log segments found." "false" "OK"
|
||||
truncated_percentage=$((truncated_segments * 100 / total_segments))
|
||||
if (( $(echo "$truncated_percentage > $TRUNCATED_PERCENTAGE_THRESHOLD" | bc -l) )); then
|
||||
echo "🚨 Alert: ${truncated_percentage}% of log segments are 'Truncated'." >&2
|
||||
send_notification_if_changed "hana_log_truncated" "HANA Log Segment" "${truncated_percentage}% of HANA log segments are in 'Truncated' state." "true" "${truncated_percentage}%"
|
||||
else
|
||||
send_notification_if_changed "hana_log_truncated" "HANA Log Segment" "${truncated_percentage}% of HANA log segments are in 'Truncated' state (below threshold)." "false" "OK"
|
||||
fi
|
||||
|
||||
free_percentage=$((free_segments * 100 / total_segments))
|
||||
if (( $(echo "$free_percentage < $FREE_PERCENTAGE_THRESHOLD" | bc -l) )); then
|
||||
echo "🚨 Alert: Only ${free_percentage}% of log segments are 'Free'." >&2
|
||||
send_notification_if_changed "hana_log_free" "HANA Log Segment" "Only ${free_percentage}% of HANA log segments are in 'Free' state." "true" "${free_percentage}%"
|
||||
else
|
||||
send_notification_if_changed "hana_log_free" "HANA Log Segment" "Only ${free_percentage}% of HANA log segments are in 'Free' state (above threshold)." "false" "OK"
|
||||
fi
|
||||
fi
|
||||
|
||||
# --- HANA Statement Queue Monitoring ---
echo "⚙️ Checking HANA statement queue..."
STATEMENT_QUEUE_SQL="SELECT COUNT(*) FROM M_SERVICE_THREADS WHERE THREAD_TYPE = 'SqlExecutor' AND THREAD_STATE = 'Queueing';"
queue_count=$("$HDBSQL_PATH" -U "$HANA_USER_KEY" -j -a -x "$STATEMENT_QUEUE_SQL" 2>/dev/null | tr -d '"')

if ! [[ "$queue_count" =~ ^[0-9]+$ ]]; then
    echo "⚠️ Warning: Could not retrieve HANA statement queue count. Skipping check." >&2
    send_notification_if_changed "hana_statement_queue_check_fail" "HANA Monitor Warning" "Could not retrieve statement queue count." "true" "QUEUE_CHECK_FAIL"
else
    send_notification_if_changed "hana_statement_queue_check_fail" "HANA Monitor Warning" "Statement queue check is working." "false" "OK"
    echo "ℹ️ Current statement queue length: ${queue_count}"

    breach_count=$(get_state "statement_queue_breach_count")
    breach_count=${breach_count:-0}

    if (( queue_count > STATEMENT_QUEUE_THRESHOLD )); then
        breach_count=$((breach_count + 1))
        echo "📈 Statement queue is above threshold. Consecutive breach count: ${breach_count}/${STATEMENT_QUEUE_CONSECUTIVE_RUNS}."
    else
        breach_count=0
    fi
    set_state "statement_queue_breach_count" "$breach_count"

    if (( breach_count >= STATEMENT_QUEUE_CONSECUTIVE_RUNS )); then
        message="Statement queue has been over ${STATEMENT_QUEUE_THRESHOLD} for ${breach_count} checks. Current count: ${queue_count}."
        send_notification_if_changed "hana_statement_queue_status" "HANA Statement Queue" "${message}" "true" "ALERT:${queue_count}"
    else
        message="Statement queue is normal. Current count: ${queue_count}."
        send_notification_if_changed "hana_statement_queue_status" "HANA Statement Queue" "${message}" "false" "OK"
    fi
fi

# --- HANA Backup Status Monitoring ---
echo "ℹ️ Checking last successful data backup status..."
last_backup_date=$("$HDBSQL_PATH" -U "$HANA_USER_KEY" -j -a -x \
    "SELECT TOP 1 SYS_START_TIME FROM M_BACKUP_CATALOG WHERE ENTRY_TYPE_NAME = 'complete data backup' AND STATE_NAME = 'successful' ORDER BY SYS_START_TIME DESC" 2>/dev/null | tr -d "\"" | sed 's/\..*//')

if [[ -z "$last_backup_date" ]]; then
    message="No successful complete data backup found for ${COMPANY_NAME} HANA."
    echo "🚨 Critical: ${message}"
    send_notification_if_changed "hana_backup_status" "HANA Backup" "${message}" "true" "NO_BACKUP"
else
    last_backup_epoch=$(date -d "$last_backup_date" +%s)
    current_epoch=$(date +%s)
    threshold_seconds=$((BACKUP_THRESHOLD_HOURS * 3600))
    age_seconds=$((current_epoch - last_backup_epoch))
    age_hours=$((age_seconds / 3600))

    if (( age_seconds > threshold_seconds )); then
        message="Last successful HANA backup for ${COMPANY_NAME} is ${age_hours} hours old, which exceeds the threshold of ${BACKUP_THRESHOLD_HOURS} hours. Last backup was on: ${last_backup_date}."
        echo "🚨 Critical: ${message}"
        send_notification_if_changed "hana_backup_status" "HANA Backup" "${message}" "true" "${age_hours}h"
    else
        message="Last successful backup is ${age_hours} hours old (Threshold: ${BACKUP_THRESHOLD_HOURS} hours)."
        echo "✅ Success! ${message}"
        send_notification_if_changed "hana_backup_status" "HANA Backup" "${message}" "false" "OK"
    fi
fi

echo "✅ Success! HANA monitoring check complete."
18
packages.conf
Normal file
18
packages.conf
Normal file
@@ -0,0 +1,18 @@
#!/bin/bash
# Author: Tomi Eckert
#
# This file contains the configuration for the script downloader.
# The `SCRIPT_PACKAGES` associative array maps a short package name
# to a pipe-separated string with the following format:
# "<Display Name>|<Version>|<Description>|<Space-separated list of URLs>|[Install Script (optional)]"
# The Install Script will be executed after all files for the package are downloaded.

declare -A SCRIPT_PACKAGES

# Format: short_name="Display Name|Version|Description|URL1 URL2..."
SCRIPT_PACKAGES["aurora"]="Aurora Suite|2.1.0|A collection of scripts for managing Aurora database instances.|https://git.technopunk.space/tomi/Scripts/raw/branch/main/aurora/aurora.sh https://git.technopunk.space/tomi/Scripts/raw/branch/main/aurora/aurora.conf"
SCRIPT_PACKAGES["backup"]="Backup Suite|1.0.8|A comprehensive script for backing up system files and databases.|https://git.technopunk.space/tomi/Scripts/raw/branch/main/backup/backup.sh https://git.technopunk.space/tomi/Scripts/raw/branch/main/backup/backup.conf"
SCRIPT_PACKAGES["monitor"]="Monitor Suite|1.3.1|Scripts for monitoring system health and performance metrics.|https://git.technopunk.space/tomi/Scripts/raw/branch/main/monitor/monitor.sh https://git.technopunk.space/tomi/Scripts/raw/branch/main/monitor/monitor.conf|https://git.technopunk.space/tomi/Scripts/raw/branch/main/monitor/monitor.hook.sh"
SCRIPT_PACKAGES["keymanager"]="Key Manager|1.2.3|A utility for managing HDB user keys for SAP HANA.|https://git.technopunk.space/tomi/Scripts/raw/branch/main/keymanager.sh"
SCRIPT_PACKAGES["cleaner"]="File Cleaner|1.1.0|A simple script to clean up temporary files and logs.|https://git.technopunk.space/tomi/Scripts/raw/branch/main/cleaner.sh"
SCRIPT_PACKAGES["hanatool"]="HANA Tool|1.5.6|A command-line tool for various SAP HANA administration tasks.|https://git.technopunk.space/tomi/Scripts/raw/branch/main/hanatool.sh"
14
packages.sh
14
packages.sh
@@ -1,14 +0,0 @@
#!/bin/bash
#
# This file contains the configuration for the script downloader.
# The `SCRIPT_PACKAGES` associative array maps a package name to a
# space-separated list of URLs to download.

declare -A SCRIPT_PACKAGES

SCRIPT_PACKAGES["Aurora Suite"]="https://git.technopunk.space/tomi/Scripts/raw/branch/main/aurora/aurora.sh https://git.technopunk.space/tomi/Scripts/raw/branch/main/aurora/aurora.conf"
SCRIPT_PACKAGES["Backup Suite"]="https://git.technopunk.space/tomi/Scripts/raw/branch/main/backup/backup.sh https://git.technopunk.space/tomi/Scripts/raw/branch/main/backup/backup.conf"
SCRIPT_PACKAGES["Key Manager"]="https://git.technopunk.space/tomi/Scripts/raw/branch/main/hdb_keymanager.sh"
SCRIPT_PACKAGES["File Cleaner"]="https://git.technopunk.space/tomi/Scripts/raw/branch/main/clean.sh"
# Example: To add another single script later, just add a new line:
# SCRIPT_PACKAGES["My Other Script"]="https://path/to/my-other-script.sh"