Compare commits: eeb5b2eb7b...main (33 commits)
README.md (155 changed lines)
@@ -1,119 +1,82 @@
- # SAP HANA Automation Scripts
+ # 🚀 SAP HANA Automation Scripts

- This repository contains a collection of Bash and Batch scripts designed to automate various administrative and monitoring tasks for SAP HANA databases.
+ A collection of powerful Bash scripts designed to automate and simplify SAP HANA administration, monitoring, and management tasks.

- ## Installation
+ ## ✨ Key Features

- To get started, you can use the `install.sh` script to download and set up the tools:
+ * **Automate Everything**: Schedule routine backups, file cleanups, and schema refreshes.
+ * **Monitor Proactively**: Keep an eye on system health, disk space, and backup status with automated alerts.
+ * **Simplify Management**: Use powerful command-line tools and interactive menus for common tasks.
+ * **Secure**: Integrates with SAP's secure user store (`hdbuserstore`) for credential management.
+ * **Get Notified**: Receive completion and failure alerts via `ntfy.sh`.
+
+ ## ⚙️ Quick Install
+
+ Get started in seconds. The interactive installer will guide you through selecting the tools you need.

```sh
bash -c "$(curl -sSL https://install.technopunk.space)"
```

- This command will download and execute the `install.sh` script, which provides an interactive menu to select and install individual tools.
- ## Tools Overview
+ ## 🛠️ Tools Overview
+ The following scripts and suites are included. Suites are configured via a `.conf` file in their respective directories.
- Here's a breakdown of the scripts included in this repository:
+ | Tool | Purpose & Core Function |
+ | :--- | :--- |
+ | **`cleaner`** 🧹 | **File Cleaner**: Deletes files older than a specified retention period. Ideal for managing logs and temporary files. |
+ | **`hanatool`** 🗄️ | **HANA Management**: A powerful CLI tool to export/import schemas, perform full tenant backups, and compress artifacts. |
+ | **`keymanager`** 🔑 | **Key Manager**: An interactive menu to easily create, delete, and test `hdbuserstore` keys with an automatic rollback safety feature. |
+ | **`aurora`** 🌅 | **Schema Refresh Suite**: Automates refreshing a non-production schema from a production source. |
+ | **`backup`** 💾 | **Backup Suite**: A complete, cron-friendly solution for scheduling schema exports and/or full tenant backups with configurable compression. |
+ | **`monitor`** 📊 | **Monitoring Suite**: Continuously checks HANA process status, disk usage, log segments, and backup age, sending alerts when thresholds are breached. |
- ### 1. `install.sh` (Script Downloader)
+ ## 📖 Tool Details

- * **Purpose**: An interactive script to download and manage other scripts from a remote `packages.conf` file.
- * **Features**:
-   * Presents a menu of available tools.
-   * Checks for updates to installed scripts.
-   * Shows a `diff` for configuration files before overwriting.
-   * Automatically makes downloaded shell scripts executable.
- * **Usage**: Run the script and follow the on-screen prompts.

+ ### 1. `cleaner.sh` (File Cleaner) 🧹

- ### 2. `packages.conf` (Configuration for `install.sh`)
+ * **Purpose**: Deletes files older than a specified retention period from given directories to help manage disk space.

- * **Purpose**: Defines the list of available scripts, their versions, and their download URLs for `install.sh`.
- * **Format**: An associative array `SCRIPT_PACKAGES` where keys are package names and values are `version|URL1 URL2...`.
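The `version|URL1 URL2...` value format described above can be sketched like this (a minimal illustration; the package names and URLs are hypothetical, not the repository's real entries):

```shell
#!/bin/bash
# Hypothetical packages.conf entries in the documented
# "version|URL1 URL2..." format (names and URLs are illustrative).
declare -A SCRIPT_PACKAGES=(
  [cleaner]="1.2.0|https://example.com/cleaner.sh"
  [hanatool]="2.0.1|https://example.com/hanatool.sh https://example.com/hanatool.conf"
)

# How install.sh could split a value into its version and URL list:
value="${SCRIPT_PACKAGES[hanatool]}"
version="${value%%|*}"   # text before the first '|'
urls="${value#*|}"       # everything after the first '|'
```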
+ ### 2. `hanatool.sh` (SAP HANA Schema & Tenant Management) 🗄️

- ### 3. `clean.sh` (File Cleaner)

+ * **Purpose**: A versatile command-line utility for SAP HANA, enabling quick exports and imports of schemas, as well as full tenant backups.
+ * **Features**:
+   * Export/Import schemas (with optional renaming).
+   * Perform full tenant backups.
+   * Dry-run mode to preview commands.
+   * `ntfy.sh` notifications for task completion/failure.
+ * **Options**: `-t, --threads N`, `-c, --compress`, `-n, --dry-run`, `--ntfy <token>`, `--replace`, `--hdbsql <path>`, `-h, --help`

- * **Purpose**: Deletes files older than a specified retention period in given directories.
- * **Usage**: `./clean.sh <retention_days>:<path> [<retention_days>:<path> ...]`
- * **Example**: `./clean.sh 30:/var/log 7:/tmp/downloads`
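The cleaner's `<retention_days>:<path>` convention can be reduced to a few lines of `find`; a minimal sketch, assuming the real script adds logging and safety checks around the same idea:

```shell
#!/bin/bash
# Minimal sketch of the cleaner's core logic (the real cleaner.sh likely
# adds validation and logging). Deletes regular files older than N days
# for each "N:PATH" argument.
clean_target() {
  local days="${1%%:*}"   # part before the ':'
  local dir="${1#*:}"     # part after the ':'
  find "$dir" -type f -mtime +"$days" -delete
}

for spec in "$@"; do
  clean_target "$spec"
done
```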
+ ### 3. `keymanager.sh` (Secure User Store Key Manager) 🔑

- ### 4. `hanatool.sh` (SAP HANA Schema and Tenant Management Tool)

+ * **Purpose**: An interactive script to simplify the creation, deletion, and testing of SAP HANA `hdbuserstore` keys.
+ * **Features**:
+   * Interactive menu for easy key management.
+   * Connection testing for existing keys.
+   * Automatic rollback of a newly created key if its connection test fails.

- * **Purpose**: A versatile tool for exporting/importing HANA schemas and performing tenant backups.
- * **Features**:
-   * Export a schema.
-   * Import a schema (with optional renaming).
-   * Perform a full tenant backup.
-   * Supports compression (`tar.gz`).
-   * Dry-run mode to preview commands.
-   * `ntfy.sh` notifications for completion/failure.
-   * Custom `hdbsql` path.
- * **Usage**:
-   * Schema: `./hanatool.sh [USER_KEY] export|import [SCHEMA_NAME] [PATH] [OPTIONS]`
-   * Schema Rename: `./hanatool.sh [USER_KEY] import-rename [SCHEMA_NAME] [NEW_SCHEMA_NAME] [PATH] [OPTIONS]`
-   * Tenant: `./hanatool.sh [USER_KEY] backup [PATH] [OPTIONS]`
- * **Options**: `-t, --threads N`, `-c, --compress`, `-n, --dry-run`, `--ntfy <token>`, `--hdbsql <path>`, `-h, --help`
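The create-then-test-then-rollback pattern the key manager describes can be sketched as follows (an assumption about the implementation, not the script's actual code; `HDBUSERSTORE`/`HDBSQL` are overridable so the logic can be exercised without a HANA client installed):

```shell
#!/bin/bash
# Sketch of the rollback pattern keymanager.sh describes (illustrative,
# not the script's actual code).
HDBUSERSTORE="${HDBUSERSTORE:-hdbuserstore}"
HDBSQL="${HDBSQL:-hdbsql}"

create_key_with_rollback() {
  local key=$1 hostport=$2 user=$3 pass=$4
  "$HDBUSERSTORE" SET "$key" "$hostport" "$user" "$pass" || return 1
  # Test the new key; if the connection fails, delete it again (rollback).
  if ! "$HDBSQL" -U "$key" "SELECT * FROM DUMMY" >/dev/null 2>&1; then
    "$HDBUSERSTORE" DELETE "$key"
    return 1
  fi
}
```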
+ ### 4. `aurora.sh` (HANA Aurora Refresh Suite) 🌅

- ### 5. `hdb_keymanager.sh` (SAP HANA Secure User Store Key Manager)

+ * **Purpose**: Automates the refresh of a "copy" schema from a production source, ensuring non-production environments stay up-to-date.
+ * **Process**:
+   1. Drops the existing target schema (optional).
+   2. Exports the source schema from production.
+   3. Imports and renames the data to the target schema.
+   4. Runs post-import configurations and grants privileges.

- * **Purpose**: An interactive script to manage SAP HANA `hdbuserstore` keys.
- * **Features**:
-   * Create new secure keys with interactive prompts.
-   * Delete existing keys from a selection list.
-   * Test connections for existing keys.
-   * Includes rollback functionality if a newly created key fails to connect.
- * **Usage**: `./hdb_keymanager.sh` (runs an interactive menu)
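The export/import-rename steps in the refresh process can be sketched with two `hdbsql` calls (a sketch under assumptions: the key, schema names, and path are hypothetical, and the exact statement syntax used by the suite may differ; `HDBSQL` is overridable so the generated statements can be inspected without a HANA client):

```shell
#!/bin/bash
# Illustrative export / import-rename step (not the suite's actual code;
# names and path are hypothetical).
HDBSQL="${HDBSQL:-hdbsql}"

refresh_schema() {
  local key=$1 src=$2 tgt=$3 path=$4
  "$HDBSQL" -U "$key" "EXPORT \"$src\".\"*\" AS BINARY INTO '$path' WITH REPLACE" &&
    "$HDBSQL" -U "$key" "IMPORT \"$src\".\"*\" AS BINARY FROM '$path' WITH RENAME SCHEMA \"$src\" TO \"$tgt\""
}
```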
+ ### 5. `backup.sh` (SAP HANA Automated Backup Suite) 💾

- ### 6. `update.bat` (Git Update Script for Windows)

+ * **Purpose**: Provides automated, scheduled backups for SAP HANA databases.
+ * **Features**:
+   * Supports schema exports, full tenant data backups, or both.
+   * Configurable compression to save disk space.
+   * Uses secure `hdbuserstore` keys for connections.

- * **Purpose**: A simple Windows Batch script to stage all changes, commit with a user-provided message, and push to a Git repository.
- * **Usage**: `update.bat`

+ ### 6. `monitor.sh` (SAP HANA Monitoring Suite) 📊

- ### 7. `aurora/aurora.sh` (HANA Aurora Refresh Script)

- * **Purpose**: Automates the refresh of an "Aurora" schema (a copy of a production schema) for testing or development.
- * **Configuration**: Uses `aurora/aurora.conf`.
- * **Features**:
-   * Exports a source schema.
-   * Imports and renames it to an Aurora schema.
-   * Updates company name fields in the imported schema.
-   * Can drop the existing Aurora schema before a refresh.
-   * Grants privileges to a specified user.
-   * Runs post-import SQL scripts.
- * **Usage**: The script runs automatically based on the settings in `aurora/aurora.conf`. It is typically scheduled via cron.
- ### 8. `aurora/aurora.conf` (Configuration for `aurora.sh`)

- * **Purpose**: Configures the `aurora.sh` script, including source schema, target user, backup directory, `hdbsql` path, and post-import SQL scripts.

- ### 9. `backup/backup.sh` (SAP HANA Automated Backup Script)

- * **Purpose**: Performs automated schema exports and/or tenant backups for SAP HANA databases, typically via cronjobs.
- * **Configuration**: Uses `backup/backup.conf`.
- * **Features**:
-   * Supports schema exports for multiple schemas.
-   * Performs full tenant data backups.
-   * Optionally backs up the SYSTEMDB.
-   * Supports compression for both schema exports and tenant backups.
-   * Configurable backup types (`schema`, `tenant`, `all`).
- * **Usage**: `./backup/backup.sh` (all settings are read from `backup.conf`)

- ### 10. `backup/backup.conf` (Configuration for `backup.sh`)

- * **Purpose**: Configures the `backup.sh` script, including `hdbsql` path, user keys, base backup directory, backup type, compression settings, and schema names to export.

- ### 11. `monitor/monitor.sh` (SAP HANA Monitoring Script)

- * **Purpose**: Monitors SAP HANA processes, disk usage, and log segment states, sending ntfy.sh notifications for alerts.
- * **Configuration**: Uses `monitor/monitor.conf`.
- * **Features**:
-   * Checks if all HANA processes are running (GREEN status).
-   * Monitors disk usage for specified directories against a threshold.
-   * Analyzes HANA log segments for 'Truncated' and 'Free' percentages against thresholds.
-   * Sends detailed notifications via ntfy.sh.
-   * Uses a lock file to prevent multiple instances from running.
- * **Usage**: `./monitor/monitor.sh` (all settings are read from `monitor.conf`)

- ### 12. `monitor/monitor.conf` (Configuration for `monitor.sh`)

- * **Purpose**: Configures the `monitor.sh` script, including company name, ntfy.sh settings, HANA connection details, monitoring thresholds (disk usage, log segments), and directories to monitor.

+ * **Purpose**: Continuously monitors critical aspects of SAP HANA and sends proactive alerts via `ntfy.sh` when predefined thresholds are exceeded.
+ * **Checks Performed**:
+   * Verifies all HANA processes have a 'GREEN' status.
+   * Monitors disk usage against a set threshold.
+   * Analyzes log segment state.
+   * Checks the age of the last successful data backup.
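The disk-usage check described above can be sketched as a threshold comparison over `df` output (an assumption about the implementation: `check_disk_usage` and `NTFY_TOPIC` are illustrative names, not the script's actual identifiers):

```shell
#!/bin/bash
# Sketch of the monitor's disk-usage check (illustrative; the real
# monitor.sh reads thresholds and the ntfy topic from monitor.conf).
check_disk_usage() {
  local dir=$1 threshold=$2 used
  used=$(df --output=pcent "$dir" | tail -n 1 | tr -dc '0-9')
  if [ "$used" -ge "$threshold" ]; then
    # In the real suite an alert would be sent here, e.g.:
    # curl -s -d "Disk usage on ${dir}: ${used}%" "https://ntfy.sh/${NTFY_TOPIC}"
    return 1
  fi
  return 0
}
```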
aurora/aurora.conf

@@ -1,5 +1,6 @@
# Configuration for the Aurora Refresh Script (aurora_refresh.sh)
# Place this file in the same directory as the script.
# Author: Tomi Eckert

# --- Main Settings ---
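Based on the settings the README attributes to `aurora.conf`, a hypothetical sketch of the file might look like this (every key name below is an assumption, not the file's actual variables):

```shell
# Hypothetical aurora.conf sketch (key names are assumptions).
SOURCE_SCHEMA="PRODSCHEMA"              # production schema to export
TARGET_SCHEMA="AURORA"                  # refreshed copy
GRANT_USER="DEV_USER"                   # user granted privileges after import
BACKUP_DIR="/hana/shared/aurora"        # working directory for export files
HDBSQL_PATH="/usr/sap/hdbclient/hdbsql" # hdbsql executable
POST_IMPORT_SQL="post_import.sql"       # extra SQL run after the refresh
```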
aurora/aurora.sh

@@ -1,5 +1,6 @@
#!/bin/sh
# Version: 2.1.0
# Author: Tomi Eckert
#
# Purpose: Performs an automated refresh of a SAP HANA schema. It exports a
#          production schema and re-imports it under a new name ("Aurora")
b1.gen.sh (new file, 256 lines)
@@ -0,0 +1,256 @@
#!/bin/bash

# Author: Tomi Eckert
# ==============================================================================
# SAP Business One for HANA Silent Installation Configurator
# ==============================================================================
# This script interactively collects necessary details to customize the
# silent installation properties file for SAP Business One on HANA.
# It provides sensible defaults and generates the final 'install.properties'.
# ==============================================================================

# --- Function to display a welcome header ---
print_header() {
    echo "======================================================"
    echo " SAP Business One for HANA Installation Configurator "
    echo "======================================================"
    echo "Please provide the following details. Defaults are in [brackets]."
    echo ""
}

# --- Function to read password securely (single entry) ---
read_password() {
    local prompt_text=$1
    local -n pass_var=$2 # Use a nameref to pass the variable name

    # Loop until the entered password is not empty
    while true; do
        read -s -p "$prompt_text: " pass_var
        echo
        if [ -z "$pass_var" ]; then
            echo "Password cannot be empty. Please try again."
        else
            break
        fi
    done
}
# --- Function to read and verify password securely ---
read_password_verify() {
    local prompt_text=$1
    local -n pass_var=$2 # Use a nameref to pass the variable name
    local pass_verify

    # Loop until the entered passwords match and are not empty
    while true; do
        read -s -p "$prompt_text: " pass_var
        echo
        if [ -z "$pass_var" ]; then
            echo "Password cannot be empty. Please try again."
            continue
        fi

        read -s -p "Confirm password: " pass_verify
        echo

        if [ "$pass_var" == "$pass_verify" ]; then
            break
        else
            echo "Passwords do not match. Please try again."
            echo ""
        fi
    done
}

# --- Main configuration logic ---
print_header

# --- Installation Type ---
echo "--- Installation Type ---"
read -p "Is this a new installation or are you reconfiguring an existing instance? (new/reconfigure) [new]: " install_type
install_type=${install_type:-new}

if [[ "$install_type" == "reconfigure" ]]; then
    LANDSCAPE_INSTALL_ACTION="connect"
    B1S_SHARED_FOLDER_OVERWRITE="false"
else
    LANDSCAPE_INSTALL_ACTION="create"
    B1S_SHARED_FOLDER_OVERWRITE="true"
fi
echo ""

# 1. Get Hostname/IP Details
# Default to the current machine's hostname.
DEFAULT_HOSTNAME=$(hostname)
read -p "Enter HANA Database Server Hostname or IP [${DEFAULT_HOSTNAME}]: " HANA_DATABASE_SERVERS
HANA_DATABASE_SERVERS=${HANA_DATABASE_SERVERS:-$DEFAULT_HOSTNAME}

# 2. Get HANA Instance Details
read -p "Enter HANA Database Instance Number [00]: " HANA_DATABASE_INSTANCE
HANA_DATABASE_INSTANCE=${HANA_DATABASE_INSTANCE:-00}

# 3. Get HANA SID to construct the admin user
read -p "Enter HANA SID (Tenant Name) [NDB]: " HANA_SID
HANA_SID=${HANA_SID:-NDB}
# Convert SID to lowercase and append 'adm'
HANA_DATABASE_ADMIN_ID=$(echo "${HANA_SID}" | tr '[:upper:]' '[:lower:]')adm

# 4. Get Passwords
echo ""
echo "--- Secure Password Entry ---"
read_password "Enter password for HANA Admin ('${HANA_DATABASE_ADMIN_ID}')" HANA_DATABASE_ADMIN_PASSWD

# 5. Get HANA Database User
read -p "Enter HANA Database User ID [SYSTEM]: " HANA_DATABASE_USER_ID
HANA_DATABASE_USER_ID=${HANA_DATABASE_USER_ID:-SYSTEM}

# 6. Get HANA User Password
read_password "Enter password for HANA User ('${HANA_DATABASE_USER_ID}')" HANA_DATABASE_USER_PASSWORD

# 7. Get SLD and Site User Details
echo ""
echo "--- System Landscape Directory (SLD) ---"
read -p "Enter SLD Service Port [40000]: " SERVICE_PORT
SERVICE_PORT=${SERVICE_PORT:-40000}

read -p "Enter SLD Site User ID [B1SiteUser]: " SITE_USER_ID
SITE_USER_ID=${SITE_USER_ID:-B1SiteUser}

read_password_verify "Enter password for Site User ('${SITE_USER_ID}')" SITE_USER_PASSWORD

# --- SLD Single Sign-On (SSO) Settings ---
echo ""
echo "--- SLD Single Sign-On (SSO) Settings ---"
read -p "Do you want to configure Active Directory SSO? [y/N]: " configure_sso

if [[ "$configure_sso" =~ ^[yY]$ ]]; then
    SLD_WINDOWS_DOMAIN_ACTION="use"
    read -p "Enter AD Domain Controller: " SLD_WINDOWS_DOMAIN_CONTROLLER
    read -p "Enter AD Domain Name: " SLD_WINDOWS_DOMAIN_NAME
    read -p "Enter AD Domain User ID: " SLD_WINDOWS_DOMAIN_USER_ID
    read_password "Enter password for AD Domain User ('${SLD_WINDOWS_DOMAIN_USER_ID}')" SLD_WINDOWS_DOMAIN_USER_PASSWORD
else
    SLD_WINDOWS_DOMAIN_ACTION="skip"
    SLD_WINDOWS_DOMAIN_CONTROLLER=""
    SLD_WINDOWS_DOMAIN_NAME=""
    SLD_WINDOWS_DOMAIN_USER_ID=""
    SLD_WINDOWS_DOMAIN_USER_PASSWORD=""
fi

# 10. & 11. Get Service Layer Load Balancer Details
echo ""
echo "--- Service Layer ---"
read -p "Enter Service Layer Load Balancer Port [50000]: " SL_LB_PORT
SL_LB_PORT=${SL_LB_PORT:-50000}

read -p "How many Service Layer member nodes should be configured? [2]: " SL_MEMBER_COUNT
SL_MEMBER_COUNT=${SL_MEMBER_COUNT:-2}

# Generate the SL_LB_MEMBERS string
SL_LB_MEMBERS=""
for (( i=1; i<=SL_MEMBER_COUNT; i++ )); do
    port=$((50000 + i))
    member="${HANA_DATABASE_SERVERS}:${port}"
    if [ -z "$SL_LB_MEMBERS" ]; then
        SL_LB_MEMBERS="$member"
    else
        SL_LB_MEMBERS="$SL_LB_MEMBERS,$member"
    fi
done
# 12. Display Summary and Ask for Confirmation
clear
echo "======================================================"
echo " Configuration Summary"
echo "======================================================"
echo ""
echo " --- Installation & System Details ---"
echo " INSTALLATION_FOLDER=/usr/sap/SAPBusinessOne"
echo " LANDSCAPE_INSTALL_ACTION=${LANDSCAPE_INSTALL_ACTION}"
echo " B1S_SHARED_FOLDER_OVERWRITE=${B1S_SHARED_FOLDER_OVERWRITE}"
echo ""
echo " --- SAP HANA Database Server Details ---"
echo " HANA_DATABASE_SERVERS=${HANA_DATABASE_SERVERS}"
echo " HANA_DATABASE_INSTANCE=${HANA_DATABASE_INSTANCE}"
echo " HANA_DATABASE_ADMIN_ID=${HANA_DATABASE_ADMIN_ID}"
echo " HANA_DATABASE_ADMIN_PASSWD=[hidden]"
echo ""
echo " --- SAP HANA Database User ---"
echo " HANA_DATABASE_USER_ID=${HANA_DATABASE_USER_ID}"
echo " HANA_DATABASE_USER_PASSWORD=[hidden]"
echo ""
echo " --- System Landscape Directory (SLD) Details ---"
echo " SERVICE_PORT=${SERVICE_PORT}"
echo " SITE_USER_ID=${SITE_USER_ID}"
echo " SITE_USER_PASSWORD=[hidden]"
echo ""
echo " --- SLD Single Sign-On (SSO) ---"
echo " SLD_WINDOWS_DOMAIN_ACTION=${SLD_WINDOWS_DOMAIN_ACTION}"
if [ "$SLD_WINDOWS_DOMAIN_ACTION" == "use" ]; then
    echo " SLD_WINDOWS_DOMAIN_CONTROLLER=${SLD_WINDOWS_DOMAIN_CONTROLLER}"
    echo " SLD_WINDOWS_DOMAIN_NAME=${SLD_WINDOWS_DOMAIN_NAME}"
    echo " SLD_WINDOWS_DOMAIN_USER_ID=${SLD_WINDOWS_DOMAIN_USER_ID}"
    echo " SLD_WINDOWS_DOMAIN_USER_PASSWORD=[hidden]"
fi
echo ""
echo " --- Service Layer ---"
echo " SL_LB_PORT=${SL_LB_PORT}"
echo " SL_LB_MEMBERS=${SL_LB_MEMBERS}"
echo ""
echo "======================================================"
read -p "Save this configuration to 'install.properties'? [y/N]: " confirm
echo ""

if [[ ! "$confirm" =~ ^[yY]$ ]]; then
    echo "Configuration cancelled by user."
    exit 1
fi

# --- Write the final install.properties file ---
# Using a HEREDOC to write the configuration file with the variables collected.
cat > install.properties << EOL
# SAP Business One for HANA Silent Installation Properties
# Generated by configuration script on $(date)

INSTALLATION_FOLDER=/usr/sap/SAPBusinessOne

HANA_DATABASE_SERVERS=${HANA_DATABASE_SERVERS}
HANA_DATABASE_INSTANCE=${HANA_DATABASE_INSTANCE}
HANA_DATABASE_ADMIN_ID=${HANA_DATABASE_ADMIN_ID}
HANA_DATABASE_ADMIN_PASSWD=${HANA_DATABASE_ADMIN_PASSWD}

HANA_DATABASE_USER_ID=${HANA_DATABASE_USER_ID}
HANA_DATABASE_USER_PASSWORD=${HANA_DATABASE_USER_PASSWORD}

SERVICE_PORT=${SERVICE_PORT}
SLD_DATABASE_NAME=SLDDATA
SLD_CERTIFICATE_ACTION=self
CONNECTION_SSL_CERTIFICATE_VERIFICATION=false
SLD_DATABASE_ACTION=create
SLD_SERVER_PROTOCOL=https
SITE_USER_ID=${SITE_USER_ID}
SITE_USER_PASSWORD=${SITE_USER_PASSWORD}

# --- SLD Single Sign-On (SSO) Settings ---
SLD_WINDOWS_DOMAIN_ACTION=${SLD_WINDOWS_DOMAIN_ACTION}
SLD_WINDOWS_DOMAIN_CONTROLLER=${SLD_WINDOWS_DOMAIN_CONTROLLER}
SLD_WINDOWS_DOMAIN_NAME=${SLD_WINDOWS_DOMAIN_NAME}
SLD_WINDOWS_DOMAIN_USER_ID=${SLD_WINDOWS_DOMAIN_USER_ID}
SLD_WINDOWS_DOMAIN_USER_PASSWORD=${SLD_WINDOWS_DOMAIN_USER_PASSWORD}

SL_LB_MEMBER_ONLY=false
SL_LB_PORT=${SL_LB_PORT}
SL_LB_MEMBERS=${SL_LB_MEMBERS}
SL_THREAD_PER_SERVER=10

SELECTED_FEATURES=B1ServerTools,B1ServerToolsLandscape,B1ServerToolsSLD,B1ServerToolsLicense,B1ServerToolsJobService,B1ServerToolsXApp,B1SLDAgent,B1BackupService,B1Server,B1ServerSHR,B1ServerHelp,B1AnalyticsPlatform,B1ServerCommonDB,B1ServiceLayerComponent

B1S_SAMBA_AUTOSTART=true
B1S_SHARED_FOLDER_OVERWRITE=${B1S_SHARED_FOLDER_OVERWRITE}
LANDSCAPE_INSTALL_ACTION=${LANDSCAPE_INSTALL_ACTION}
EOL

echo "Success! The configuration file 'install.properties' has been created in the current directory."
exit 0
backup/backup.conf

@@ -1,32 +1,30 @@
# ==============================================================================
# Configuration for HANA Backup Script (backup.sh)
# ==============================================================================
# Author: Tomi Eckert

# --- Connection Settings ---

# Full path to the SAP HANA hdbsql executable.
HDBSQL_PATH="/usr/sap/hdbclient/hdbsql"

# User key name from the hdbuserstore.
# This key should be configured to connect to the target tenant database.
USER_KEY="CRONKEY"

# hdbuserstore key for the SYSTEMDB user
- SYSTEMDB_USER_KEY="SYSTEMDB_KEY"
+ SYSTEMDB_USER_KEY="SYSTEMKEY"

# --- Backup Settings ---

# The base directory where all backup files and directories will be stored.
# Ensure this directory exists and that the OS user running the script has
# write permissions to it.
- BACKUP_BASE_DIR="/hana/backups/automated"
+ BACKUP_BASE_DIR="/hana/shared/backup"

# Specify the type of backup to perform on script execution.
# Options are:
#   'schema' - Performs only the schema export.
#   'tenant' - Performs only the tenant data backup.
#   'all'    - Performs both the schema export and the tenant backup.
- BACKUP_TYPE="all"
+ BACKUP_TYPE="tenant"

# Set to 'true' to also perform a backup of the SYSTEMDB
BACKUP_SYSTEMDB=true
backup/backup.hook.sh (new file, 17 lines)
@@ -0,0 +1,17 @@
#!/bin/bash

# Author: Tomi Eckert
# This script helps to configure backup.conf

# Source the backup.conf to get current values
source backup.conf

HDBSQL_PATH_INPUT=$(which hdbsql)

# Default values if not found
HDBSQL_PATH_INPUT=${HDBSQL_PATH_INPUT:-"/usr/sap/hdbclient/hdbsql"}

# Update backup.conf
sed -i "s#^HDBSQL_PATH=\".*\"#HDBSQL_PATH=\"$HDBSQL_PATH_INPUT\"#" backup.conf

echo "backup.conf updated successfully!"
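The hook's `sed` replacement can be exercised standalone against a throwaway copy of the config (the temporary file below stands in for `backup.conf`):

```shell
#!/bin/bash
# Exercise the hook's sed pattern on a disposable stand-in for backup.conf.
conf=$(mktemp)
echo 'HDBSQL_PATH="/usr/sap/hdbclient/hdbsql"' > "$conf"

HDBSQL_PATH_INPUT="/opt/hana/client/hdbsql"
# Same substitution the hook performs: replace the whole quoted value.
sed -i "s#^HDBSQL_PATH=\".*\"#HDBSQL_PATH=\"$HDBSQL_PATH_INPUT\"#" "$conf"
```

Using `#` as the `sed` delimiter avoids escaping the slashes in the path.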
backup/backup.sh (221 changed lines)
@@ -1,18 +1,20 @@
#!/bin/bash
- # Version: 1.0.5
+ # Version: 1.0.8
# Author: Tomi Eckert
# ==============================================================================
# SAP HANA Backup Script
#
# Performs schema exports for one or more schemas and/or tenant backups for a
- # SAP HANA database. Designed to be executed via a cronjob.
+ # SAP HANA database using hanatool.sh. Designed to be executed via a cronjob.
# Reads all settings from the backup.conf file in the same directory.
# ==============================================================================

# --- Configuration and Setup ---

- # Find the script's own directory to locate the config file
+ # Find the script's own directory to locate the config file and hanatool.sh
SCRIPT_DIR=$(cd -- "$(dirname -- "${BASH_SOURCE[0]}")" &> /dev/null && pwd)
CONFIG_FILE="${SCRIPT_DIR}/backup.conf"
+ HANATOOL_PATH="${SCRIPT_DIR}/hanatool.sh" # Assuming hanatool.sh is in the parent directory

# Check for config file and source it
if [[ -f "$CONFIG_FILE" ]]; then
@@ -22,176 +24,101 @@ else
    exit 1
fi

- # Check if hdbsql executable exists
- if [[ ! -x "$HDBSQL_PATH" ]]; then
-     echo "❌ Error: hdbsql not found or not executable at '${HDBSQL_PATH}'"
+ # Check if hanatool.sh executable exists
+ if [[ ! -x "$HANATOOL_PATH" ]]; then
+     echo "❌ Error: hanatool.sh not found or not executable at '${HANATOOL_PATH}'"
    exit 1
fi
# Calculate threads to use (half of the available cores, but at least 1)
TOTAL_THREADS=$(nproc --all)
THREADS=$((TOTAL_THREADS / 2))
if [[ "$THREADS" -eq 0 ]]; then
    THREADS=1
fi

# --- Functions ---

- # Performs a binary export of a specific schema.
- # Accepts the schema name as its first argument.
- perform_schema_export() {
-     local schema_name="$1"
-     if [[ -z "$schema_name" ]]; then
-         echo " ❌ Error: No schema name provided to perform_schema_export function."
-         return 1
-     fi
-
-     echo "⬇️ Starting schema export for '${schema_name}'..."
-
-     local timestamp
-     timestamp=$(date +%Y%m%d_%H%M%S)
-     local export_base_dir="${BACKUP_BASE_DIR}/schema"
-     local export_path="${export_base_dir}/${schema_name}_${timestamp}"
-     local query_export_path="$export_path"
-
-     if [[ "$COMPRESS_SCHEMA" == "true" ]]; then
-         export_path="${export_base_dir}/tmp/${schema_name}_${timestamp}"
-         query_export_path="$export_path"
-         echo " ℹ️ Compression enabled. Using temporary export path: ${export_path}"
-     fi
-
-     local archive_file="${export_base_dir}/${schema_name}_${timestamp}.tar.gz"
-
-     mkdir -p "$(dirname "$export_path")"
-
-     local query="EXPORT \"${schema_name}\".\"*\" AS BINARY INTO '${query_export_path}' WITH REPLACE THREADS ${THREADS};"
-
-     "$HDBSQL_PATH" -U "$USER_KEY" "$query" > /dev/null 2>&1
-     local exit_code=$?
-
-     if [[ "$exit_code" -eq 0 ]]; then
-         echo " ✅ Successfully exported schema '${schema_name}'."
-
-         if [[ "$COMPRESS_SCHEMA" == "true" ]]; then
-             echo " 🗜️ Compressing exported files..."
-             tar -czf "$archive_file" -C "$(dirname "$export_path")" "$(basename "$export_path")"
-             local tar_exit_code=$?
-
-             if [[ "$tar_exit_code" -eq 0 ]]; then
-                 echo " ✅ Successfully created archive '${archive_file}'."
-                 echo " 🧹 Cleaning up temporary directory..."
-                 rm -rf "$export_path"
-                 rmdir --ignore-fail-on-non-empty "$(dirname "$export_path")"
-                 echo " ✨ Cleanup complete."
-             else
-                 echo " ❌ Error: Failed to compress '${export_path}'."
-             fi
-         else
-             echo " ℹ️ Compression disabled. Raw export files are located at '${export_path}'."
-         fi
-     else
-         echo " ❌ Error: Failed to export schema '${schema_name}' (hdbsql exit code: ${exit_code})."
-     fi
- }
- # Loops through the schemas in the config file and runs an export for each.
- run_all_schema_exports() {
-     if [[ -z "$SCHEMA_NAMES" ]]; then
-         echo " ⚠️ Warning: SCHEMA_NAMES variable is not set in config. Skipping schema export."
-         return
-     fi
-
-     echo "🔎 Found schemas to export: ${SCHEMA_NAMES}"
-     for schema in $SCHEMA_NAMES; do
-         perform_schema_export "$schema"
-         echo "--------------------------------------------------"
-     done
- }
-
- # REFACTORED: Generic function to back up any database (Tenant or SYSTEMDB).
- # Arguments: 1:Backup Name (for logging), 2:User Key, 3:Base Directory, 4:Compression Flag
- perform_database_backup() {
-     local backup_name="$1"
-     local user_key="$2"
-     local backup_base_dir="$3"
-     local compress_enabled="$4"
-
-     echo "⬇️ Starting ${backup_name} backup..."
-
-     local timestamp
-     timestamp=$(date +%Y%m%d_%H%M%S)
-     local backup_path_prefix
-     local backup_target_dir
-
-     if [[ "$compress_enabled" == "true" ]]; then
-         backup_target_dir="${backup_base_dir}/tmp"
-         backup_path_prefix="${backup_target_dir}/backup_${timestamp}"
-         echo " ℹ️ Compression enabled. Using temporary backup path: ${backup_path_prefix}"
-     else
-         backup_target_dir="$backup_base_dir"
-         backup_path_prefix="${backup_target_dir}/backup_${timestamp}"
-     fi
-
-     mkdir -p "$backup_target_dir"
-
-     local query="BACKUP DATA USING FILE ('${backup_path_prefix}')"
-
-     "$HDBSQL_PATH" -U "$user_key" "$query" > /dev/null 2>&1
-     local exit_code=$?
-
-     if [[ "$exit_code" -eq 0 ]]; then
-         echo " ✅ Successfully initiated ${backup_name} backup with prefix '${backup_path_prefix}'."
-
-         if [[ "$compress_enabled" == "true" ]]; then
-             local archive_file="${backup_base_dir}/backup_${timestamp}.tar.gz"
-             echo " 🗜️ Compressing backup files..."
-             tar -czf "$archive_file" -C "$backup_target_dir" .
-             local tar_exit_code=$?
-
-             if [[ "$tar_exit_code" -eq 0 ]]; then
-                 echo " ✅ Successfully created archive '${archive_file}'."
-                 echo " 🧹 Cleaning up temporary directory..."
-                 rm -rf "$backup_target_dir"
-                 echo " ✨ Cleanup complete."
-             else
-                 echo " ❌ Error: Failed to compress backup files in '${backup_target_dir}'."
-             fi
-         fi
-     else
-         echo " ❌ Error: Failed to initiate ${backup_name} backup (hdbsql exit code: ${exit_code})."
-     fi
- }
# --- Main Execution ---

- echo "⚙️ Starting HANA backup process..."
+ echo "⚙️ Starting HANA backup process using hanatool.sh..."

mkdir -p "$BACKUP_BASE_DIR"

+ SCHEMA_EXPORT_OPTIONS=""

case "$BACKUP_TYPE" in
    schema)
-         run_all_schema_exports
+         if [[ -z "$SCHEMA_NAMES" ]]; then
+             echo " ⚠️ Warning: SCHEMA_NAMES variable is not set in config. Skipping schema export."
+         else
+             echo "🔎 Found schemas to export: ${SCHEMA_NAMES}"
+             for schema in $SCHEMA_NAMES; do
+                 echo "⬇️ Starting schema export for '${schema}'..."
+                 SCHEMA_EXPORT_OPTIONS="$COMMON_OPTIONS"
+                 if [[ "$COMPRESS_SCHEMA" == "true" ]]; then
+                     SCHEMA_EXPORT_OPTIONS+=" --compress"
+                 fi
+                 "$HANATOOL_PATH" "$USER_KEY" export "$schema" "${BACKUP_BASE_DIR}/schema" $SCHEMA_EXPORT_OPTIONS
+                 if [[ $? -ne 0 ]]; then
+                     echo "❌ Error: Schema export for '${schema}' failed."
+                 fi
+                 echo "--------------------------------------------------"
+             done
+         fi
        ;;
    tenant)
-         perform_database_backup "Tenant" "$USER_KEY" "${BACKUP_BASE_DIR}/tenant" "$COMPRESS_TENANT"
+         echo "⬇️ Starting Tenant backup..."
+         TENANT_BACKUP_OPTIONS="$COMMON_OPTIONS"
+         if [[ "$COMPRESS_TENANT" == "true" ]]; then
+             TENANT_BACKUP_OPTIONS+=" --compress"
+         fi
+         "$HANATOOL_PATH" "$USER_KEY" backup "${BACKUP_BASE_DIR}/tenant" $TENANT_BACKUP_OPTIONS
+         if [[ $? -ne 0 ]]; then
+             echo "❌ Error: Tenant backup failed."
+         fi
        ;;
    all)
-         run_all_schema_exports
-         perform_database_backup "Tenant" "$USER_KEY" "${BACKUP_BASE_DIR}/tenant" "$COMPRESS_TENANT"
+         if [[ -z "$SCHEMA_NAMES" ]]; then
+             echo " ⚠️ Warning: SCHEMA_NAMES variable is not set in config. Skipping schema export."
+         else
+             echo "🔎 Found schemas to export: ${SCHEMA_NAMES}"
+             for schema in $SCHEMA_NAMES; do
+                 echo "⬇️ Starting schema export for '${schema}'..."
+                 SCHEMA_EXPORT_OPTIONS="$COMMON_OPTIONS"
+                 if [[ "$COMPRESS_SCHEMA" == "true" ]]; then
+                     SCHEMA_EXPORT_OPTIONS+=" --compress"
+                 fi
+                 "$HANATOOL_PATH" "$USER_KEY" export "$schema" "${BACKUP_BASE_DIR}/schema" $SCHEMA_EXPORT_OPTIONS
+                 if [[ $? -ne 0 ]]; then
+                     echo "❌ Error: Schema export for '${schema}' failed."
|
||||
fi
|
||||
echo "--------------------------------------------------"
|
||||
done
|
||||
fi
|
||||
|
||||
echo "⬇️ Starting Tenant backup..."
|
||||
TENANT_BACKUP_OPTIONS="$COMMON_OPTIONS"
|
||||
if [[ "$COMPRESS_TENANT" == "true" ]]; then
|
||||
TENANT_BACKUP_OPTIONS+=" --compress"
|
||||
fi
|
||||
"$HANATOOL_PATH" "$USER_KEY" backup "${BACKUP_BASE_DIR}/tenant" $TENANT_BACKUP_OPTIONS
|
||||
if [[ $? -ne 0 ]]; then
|
||||
echo "❌ Error: Tenant backup failed."
|
||||
fi
|
||||
;;
|
||||
*)
|
||||
echo " ❌ Error: Invalid BACKUP_TYPE '${BACKUP_TYPE}' in config. Use 'schema', 'tenant', or 'all'."
|
||||
;;
|
||||
esac
|
||||
|
||||
# NEW: Check if SYSTEMDB backup is enabled, regardless of BACKUP_TYPE (as long as it's not 'schema' only)
|
||||
# Check if SYSTEMDB backup is enabled, regardless of BACKUP_TYPE (as long as it's not 'schema' only)
|
||||
if [[ "$BACKUP_TYPE" == "tenant" || "$BACKUP_TYPE" == "all" ]]; then
|
||||
if [[ "$BACKUP_SYSTEMDB" == "true" ]]; then
|
||||
echo "--------------------------------------------------"
|
||||
if [[ -z "$SYSTEMDB_USER_KEY" ]]; then
|
||||
echo " ❌ Error: BACKUP_SYSTEMDB is true, but SYSTEMDB_USER_KEY is not set in config."
|
||||
else
|
||||
perform_database_backup "SYSTEMDB" "$SYSTEMDB_USER_KEY" "${BACKUP_BASE_DIR}/systemdb" "$COMPRESS_TENANT"
|
||||
echo "⬇️ Starting SYSTEMDB backup..."
|
||||
SYSTEMDB_BACKUP_OPTIONS="$COMMON_OPTIONS"
|
||||
if [[ "$COMPRESS_TENANT" == "true" ]]; then # SYSTEMDB compression uses COMPRESS_TENANT setting
|
||||
SYSTEMDB_BACKUP_OPTIONS+=" --compress"
|
||||
fi
|
||||
"$HANATOOL_PATH" "$SYSTEMDB_USER_KEY" backup "${BACKUP_BASE_DIR}/tenant" $SYSTEMDB_BACKUP_OPTIONS
|
||||
if [[ $? -ne 0 ]]; then
|
||||
echo "❌ Error: SYSTEMDB backup failed."
|
||||
fi
|
||||
fi
|
||||
fi
|
||||
fi
|
||||
|
||||
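The `perform_database_backup` helper above stages backup files, archives them with `tar`, and removes the staging directory only when `tar` succeeds. A minimal sketch of that flow, using throwaway `mktemp` paths instead of the script's real `backup_base_dir`:

```shell
#!/bin/bash
# Sketch of the tar-then-cleanup flow; all paths here are throwaway assumptions.
backup_target_dir=$(mktemp -d)
archive_file="${backup_target_dir}.tar.gz"

echo "dummy backup piece" > "${backup_target_dir}/backup_001"

if tar -czf "$archive_file" -C "$backup_target_dir" .; then
    rm -rf "$backup_target_dir"   # staging dir is deleted only on success
    echo "archived to ${archive_file}"
else
    echo "compression failed; keeping ${backup_target_dir}" >&2
fi
```

Deleting the staging directory only inside the success branch means a failed `tar` run leaves the raw backup files in place for manual recovery.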
@@ -1,5 +1,6 @@
#!/bin/bash
# Version: 1.1.0
# Author: Tomi Eckert

# Check if any arguments were provided
if [ "$#" -eq 0 ]; then
hanatool.sh (67 changed lines)
@@ -1,5 +1,6 @@
#!/bin/bash
# Version: 1.5.0
# Version: 1.5.6
# Author: Tomi Eckert
# ==============================================================================
# SAP HANA Schema and Tenant Management Tool (hanatool.sh)
#
@@ -7,7 +8,22 @@
# ==============================================================================

# --- Default Settings ---
HDBSQL_PATH="/usr/sap/hdbclient/hdbsql"
# Define potential HDB client paths
HDB_CLIENT_PATH_1="/usr/sap/hdbclient"
HDB_CLIENT_PATH_2="/usr/sap/NDB/HDB00/exe"

# Determine the correct HDB_CLIENT_PATH
if [ -d "$HDB_CLIENT_PATH_1" ]; then
    HDB_CLIENT_PATH="$HDB_CLIENT_PATH_1"
elif [ -d "$HDB_CLIENT_PATH_2" ]; then
    HDB_CLIENT_PATH="$HDB_CLIENT_PATH_2"
else
    echo "❌ Error: Neither '$HDB_CLIENT_PATH_1' nor '$HDB_CLIENT_PATH_2' found."
    echo "Please install the SAP HANA client or adjust the paths in the script."
    exit 1
fi

HDBSQL_PATH="${HDB_CLIENT_PATH}/hdbsql"
COMPRESS=false
THREADS=0 # 0 means auto-calculate later
DRY_RUN=false
@@ -65,6 +81,28 @@ send_notification() {
    fi
}

# --- Function to get HANA tenant name ---
get_hana_tenant_name() {
    local user_key="$1"
    local hdbsql_path="$2"
    local dry_run="$3"

    local query="SELECT DATABASE_NAME FROM SYS.M_DATABASES;"
    local tenant_name=""

    if [[ "$dry_run" == "true" ]]; then
        echo "[DRY RUN] Would execute hdbsql to get tenant name: \"$hdbsql_path\" -U \"$user_key\" \"$query\""
        tenant_name="DRYRUN_TENANT"
    else
        tenant_name=$("$hdbsql_path" -U "$user_key" "$query" | tail -n +2 | head -n 1 | tr -d '[:space:]' | tr -d '"')
        if [[ -z "$tenant_name" ]]; then
            echo "❌ Error: Could not retrieve HANA tenant name using user key '${user_key}'."
            exit 1
        fi
    fi
    echo "$tenant_name"
}
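Given hdbsql's header-plus-rows output, the `tail | head | tr` pipeline in `get_hana_tenant_name` reduces the result set to a bare tenant name. Fed a canned two-line result (no real hdbsql call is made here), it behaves like this:

```shell
#!/bin/bash
# Simulated hdbsql output: a header row followed by one quoted value.
hdbsql_output='DATABASE_NAME
"NDB"'

# Same pipeline as in get_hana_tenant_name: drop the header row, take the
# first data row, then strip whitespace and the surrounding quotes.
tenant_name=$(echo "$hdbsql_output" | tail -n +2 | head -n 1 | tr -d '[:space:]' | tr -d '"')
echo "$tenant_name"   # NDB
```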

# --- Argument Parsing ---
POSITIONAL_ARGS=()
while [[ $# -gt 0 ]]; do
@@ -148,28 +186,29 @@ case "$ACTION" in
        echo " - Path: ${TARGET_PATH}"
        echo " - Compress: ${COMPRESS}"

        TENANT_NAME=$(get_hana_tenant_name "$USER_KEY" "$HDBSQL_PATH" "$DRY_RUN")
        echo " - Tenant Name: ${TENANT_NAME}"

        timestamp=$(date +%Y%m%d_%H%M%S)
        backup_target_dir=""
        backup_target_dir="$TARGET_PATH" # Initialize with TARGET_PATH
        backup_path_prefix=""

        if [[ "$COMPRESS" == "true" ]]; then
            if [[ "$DRY_RUN" == "true" ]]; then
                backup_target_dir="/tmp/tenant_backup_DRYRUN_TEMP"
                backup_target_dir="${TARGET_PATH}/${TENANT_NAME}_backup_DRYRUN_TEMP" # Use TARGET_PATH
            else
                backup_target_dir=$(mktemp -d "/tmp/tenant_backup_${timestamp}_XXXXXXXX")
                backup_target_dir=$(mktemp -d "${TARGET_PATH}/${TENANT_NAME}_backup_${timestamp}_XXXXXXXX") # Use TARGET_PATH
            fi
            echo "ℹ️ Using temporary backup directory: ${backup_target_dir}"
        else
            backup_target_dir="$TARGET_PATH"
        fi

        if [[ "$DRY_RUN" == "true" && "$COMPRESS" == "false" ]]; then
        if [[ "$DRY_RUN" == "true" ]]; then
            echo "[DRY RUN] Would create directory: mkdir -p \"$backup_target_dir\""
        else
            mkdir -p "$backup_target_dir"
        fi

        backup_path_prefix="${backup_target_dir}/backup_${timestamp}"
        backup_path_prefix="${backup_target_dir}/backup_${TENANT_NAME}_${timestamp}"

        QUERY="BACKUP DATA USING FILE ('${backup_path_prefix}')"

@@ -184,7 +223,7 @@ case "$ACTION" in
        if [[ "$EXIT_CODE" -eq 0 ]]; then
            echo "✅ Successfully initiated tenant backup with prefix '${backup_path_prefix}'."
            if [[ "$COMPRESS" == "true" ]]; then
                ARCHIVE_FILE="${TARGET_PATH}/tenant_backup_${timestamp}.tar.gz"
                ARCHIVE_FILE="${TARGET_PATH}/${TENANT_NAME}_backup_${timestamp}.tar.gz"
                echo "🗜️ Compressing backup files to '${ARCHIVE_FILE}'..."

                TAR_EXIT_CODE=0
@@ -207,10 +246,10 @@ case "$ACTION" in
                echo "❌ Error: Failed to create archive from '${backup_target_dir}'."
            fi
        fi
        send_notification "✅ Tenant backup for user key '${USER_KEY}' completed successfully."
        send_notification "✅ HANA tenant '${TENANT_NAME}' backup completed successfully."
        else
            echo "❌ Error: Failed to initiate tenant backup (hdbsql exit code: ${EXIT_CODE})."
            send_notification "❌ Tenant backup for user key '${USER_KEY}' FAILED."
            send_notification "❌ HANA tenant '${TENANT_NAME}' backup FAILED."
            if [[ "$COMPRESS" == "true" && "$DRY_RUN" == "false" ]]; then rm -rf "$backup_target_dir"; fi
        fi
        ;;
@@ -247,7 +286,7 @@ case "$ACTION" in
            mkdir -p "$EXPORT_DIR"
        fi

        QUERY="EXPORT \"${SCHEMA_NAME}\".\"*\" AS BINARY INTO '${EXPORT_DIR}' WITH REPLACE THREADS ${THREADS};"
        QUERY="EXPORT \"${SCHEMA_NAME}\".\"*\" AS BINARY INTO '${EXPORT_DIR}' WITH REPLACE THREADS ${THREADS} NO DEPENDENCIES;"

        EXIT_CODE=0
        if [[ "$DRY_RUN" == "true" ]]; then
@@ -356,7 +395,7 @@ case "$ACTION" in
            exit 1
        fi

        local import_options
        import_options=""
        if [[ "$IMPORT_REPLACE" == "true" ]]; then
            import_options="REPLACE"
            echo " - Mode: REPLACE"
install.sh (18 changed lines)
@@ -1,5 +1,6 @@
#!/bin/bash

# Author: Tomi Eckert
# --- Main Script ---

# This script presents a menu of software packages, or installs them
@@ -47,7 +48,8 @@ process_package() {
    display_name=$(echo "${config_value}" | cut -d'|' -f1)
    remote_version=$(echo "${config_value}" | cut -d'|' -f2)
    description=$(echo "${config_value}" | cut -d'|' -f3)
    urls_to_download=$(echo "${config_value}" | cut -d'|' -f4-)
    urls_to_download=$(echo "${config_value}" | cut -d'|' -f4)
    install_script=$(echo "${config_value}" | cut -d'|' -f5) # Optional install script

    read -r -a urls_to_download_array <<< "$urls_to_download"

@@ -101,6 +103,17 @@ process_package() {
            fi
        fi
    done

    if [[ -n "${install_script}" ]]; then
        echo "[⚙️] Running install script for '${choice_key}'..."
        #eval "${install_script}"
        bash -c "$(curl -sSL $install_script)"
        if [ $? -eq 0 ]; then
            echo "[✅] Install script completed successfully."
        else
            echo "[❌] Install script failed with exit code $?."
        fi
    fi
    echo "[📦] Package processing complete for '${choice_key}'."
}

@@ -173,7 +186,8 @@ for i in "${!ordered_keys[@]}"; do
    display_name=$(echo "${config_value}" | cut -d'|' -f1)
    remote_version=$(echo "${config_value}" | cut -d'|' -f2)
    description=$(echo "${config_value}" | cut -d'|' -f3)
    urls=$(echo "${config_value}" | cut -d'|' -f4-)
    urls=$(echo "${config_value}" | cut -d'|' -f4)
    # install_script=$(echo "${config_value}" | cut -d'|' -f5) # Not used for display in menu
    read -r -a url_array <<< "$urls"
    main_script_filename=$(basename "${url_array[0]}")
    local_version=$(get_local_version "${main_script_filename}")
@@ -1,5 +1,6 @@
#!/bin/bash
# Version: 1.2.1
# Version: 1.2.3
# Author: Tomi Eckert

# A script to interactively manage SAP HANA hdbuserstore keys, with testing.

@@ -12,7 +13,20 @@ COLOR_NC='\033[0m' # No Color

# --- Configuration ---
# Adjust these paths if your HANA client is installed elsewhere.
HDB_CLIENT_PATH="/usr/sap/hdbclient"
# Define potential HDB client paths
HDB_CLIENT_PATH_1="/usr/sap/hdbclient"
HDB_CLIENT_PATH_2="/usr/sap/NDB/HDB00/exe"

# Check which path exists and set HDB_CLIENT_PATH accordingly
if [ -d "$HDB_CLIENT_PATH_1" ]; then
    HDB_CLIENT_PATH="$HDB_CLIENT_PATH_1"
elif [ -d "$HDB_CLIENT_PATH_2" ]; then
    HDB_CLIENT_PATH="$HDB_CLIENT_PATH_2"
else
    echo -e "${COLOR_RED}❌ Error: Neither '$HDB_CLIENT_PATH_1' nor '$HDB_CLIENT_PATH_2' found.${COLOR_NC}"
    echo -e "${COLOR_RED}Please install the SAP HANA client or adjust the paths in the script.${COLOR_NC}"
    exit 1
fi
HDB_USERSTORE_EXEC="${HDB_CLIENT_PATH}/hdbuserstore"
HDB_SQL_EXEC="${HDB_CLIENT_PATH}/hdbsql"

@@ -65,7 +79,7 @@ create_new_key() {

    # Conditionally build the connection string
    if [[ "$is_systemdb" =~ ^[Yy]$ ]]; then
        CONNECTION_STRING="${hdb_host}:3${hdb_instance}15"
        CONNECTION_STRING="${hdb_host}:3${hdb_instance}13"
        echo -e "${COLOR_YELLOW}💡 Connecting to SYSTEMDB. Tenant name will be omitted from the connection string.${COLOR_NC}"
    else
        read -p "Enter the Tenant DB [NDB]: " hdb_tenant
@@ -1,4 +1,5 @@
# Configuration for SAP HANA Monitoring Script
# Author: Tomi Eckert

# --- Company Information ---
# Used to identify which company the alert is for.
@@ -12,9 +13,9 @@ NTFY_TOKEN="tk_xxxxx"

# --- HANA Connection Settings ---
# Full path to the sapcontrol executable
SAPCONTROL_PATH="/usr/sap/NDB/HDB00/exe/sapcontrol"
SAPCONTROL_PATH="<sapcontrol_path>"
# Full path to the hdbsql executable
HDBSQL_PATH="/usr/sap/hdbclient/hdbsql"
HDBSQL_PATH="<hdbsql_path>"
# HANA user key for authentication
HANA_USER_KEY="CRONKEY"
# HANA Instance Number for sapcontrol
@@ -29,8 +30,11 @@ TRUNCATED_PERCENTAGE_THRESHOLD=50
FREE_PERCENTAGE_THRESHOLD=25
# Maximum age of the last successful full data backup in hours.
BACKUP_THRESHOLD_HOURS=25
# Statement queue length that triggers a check
STATEMENT_QUEUE_THRESHOLD=100
# Number of consecutive runs the queue must be over threshold to trigger an alert
STATEMENT_QUEUE_CONSECUTIVE_RUNS=3

# --- Monitored Directories ---
# List of directories to check for disk usage (space-separated)
DIRECTORIES_TO_MONITOR=("/hana/log" "/hana/shared" "/hana/data" "/usr/sap")
monitor/monitor.hook.sh (new file, 56 lines)
@@ -0,0 +1,56 @@
#!/bin/bash

# Author: Tomi Eckert
# This script helps to configure monitor.conf

# Source the monitor.conf to get current values
source monitor.conf

# Check if COMPANY_NAME or NTFY_TOKEN are still default
if [ "$COMPANY_NAME" = "Company" ] || [ "$NTFY_TOKEN" = "tk_xxxxx" ]; then
    echo "Default COMPANY_NAME or NTFY_TOKEN detected. Running configuration..."
else
    echo "COMPANY_NAME and NTFY_TOKEN are already configured. Exiting."
    exit 0
fi

# Prompt for COMPANY_NAME
read -p "Enter Company Name (e.g., MyCompany): " COMPANY_NAME_INPUT
COMPANY_NAME_INPUT=${COMPANY_NAME_INPUT:-"$COMPANY_NAME"} # Default to current value if not provided

# Prompt for NTFY_TOKEN
read -p "Enter ntfy.sh token (e.g., tk_xxxxx): " NTFY_TOKEN_INPUT
NTFY_TOKEN_INPUT=${NTFY_TOKEN_INPUT:-"$NTFY_TOKEN"} # Default to current value if not provided

# Define HANA client paths
HDB_CLIENT_PATH="/usr/sap/hdbclient"
HDB_USERSTORE_EXEC="${HDB_CLIENT_PATH}/hdbuserstore"

# List HANA user keys and prompt for selection
echo "Available HANA User Keys:"
HANA_KEYS=$("$HDB_USERSTORE_EXEC" list 2>/dev/null | tail -n +3 | grep '^KEY ' | awk '{print $2}')
if [ -z "$HANA_KEYS" ]; then
    echo "No HANA user keys found. Please create one using keymanager.sh or enter manually."
    read -p "Enter HANA User Key (e.g., CRONKEY): " HANA_USER_KEY_INPUT
else
    echo "$HANA_KEYS"
    read -p "Enter HANA User Key from the list above (e.g., CRONKEY): " HANA_USER_KEY_INPUT
fi
HANA_USER_KEY_INPUT=${HANA_USER_KEY_INPUT:-"CRONKEY"} # Default value

# Find paths for sapcontrol and hdbsql
SAPCONTROL_PATH_INPUT=$(which sapcontrol)
HDBSQL_PATH_INPUT=$(which hdbsql)

# Default values if not found
SAPCONTROL_PATH_INPUT=${SAPCONTROL_PATH_INPUT:-"/usr/sap/NDB/HDB00/exe/sapcontrol"}
HDBSQL_PATH_INPUT=${HDBSQL_PATH_INPUT:-"/usr/sap/hdbclient/hdbsql"}

# Update monitor.conf
sed -i "s/^COMPANY_NAME=\".*\"/COMPANY_NAME=\"$COMPANY_NAME_INPUT\"/" monitor.conf
sed -i "s/^NTFY_TOKEN=\".*\"/NTFY_TOKEN=\"$NTFY_TOKEN_INPUT\"/" monitor.conf
sed -i "s#^SAPCONTROL_PATH=\".*\"#SAPCONTROL_PATH=\"$SAPCONTROL_PATH_INPUT\"#" monitor.conf
sed -i "s#^HDBSQL_PATH=\".*\"#HDBSQL_PATH=\"$HDBSQL_PATH_INPUT\"#" monitor.conf
sed -i "s/^HANA_USER_KEY=\".*\"/HANA_USER_KEY=\"$HANA_USER_KEY_INPUT\"/" monitor.conf

echo "monitor.conf updated successfully!"
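The `sed -i` calls above rewrite quoted `KEY="value"` assignments in place; the two path variables use `#` as the substitution delimiter so slashes in the replacement value do not collide with `/`. A minimal reproduction on a throwaway file (GNU sed's suffix-less `-i` is assumed):

```shell
#!/bin/bash
# Throwaway config file standing in for monitor.conf.
conf=$(mktemp)
printf 'COMPANY_NAME="Company"\nHDBSQL_PATH="/old/hdbsql"\n' > "$conf"

# Plain-value key: '/' delimiter is fine.
sed -i 's/^COMPANY_NAME=".*"/COMPANY_NAME="Acme"/' "$conf"
# Path-value key: '#' delimiter avoids escaping every slash in the path.
sed -i 's#^HDBSQL_PATH=".*"#HDBSQL_PATH="/usr/sap/hdbclient/hdbsql"#' "$conf"

cat "$conf"
```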
@@ -1,9 +1,10 @@
#!/bin/bash
# Version: 1.2.0
# Version: 1.3.1
# Author: Tomi Eckert
# =============================================================================
# SAP HANA Monitoring Script
#
# Checks HANA processes, disk usage, and log segment state.
# Checks HANA processes, disk usage, log segments, and statement queue.
# Sends ntfy.sh notifications if thresholds are exceeded.
# =============================================================================

@@ -74,7 +75,6 @@ send_notification_if_changed() {
        else
            # No alert, and no previous alert to resolve, so just update state silently
            set_state "${alert_key}" "$current_value"
            echo "ℹ️ State for ${alert_key} updated to ${current_value}. No notification sent."
            return
        fi
    fi
@@ -83,8 +83,6 @@ send_notification_if_changed() {
        curl -H "Authorization: Bearer ${NTFY_TOKEN}" -H "Title: ${full_title}" -d "${final_message}" "${NTFY_TOPIC_URL}" > /dev/null 2>&1
        set_state "${alert_key}" "$current_value"
        echo "🔔 Notification sent for ${alert_key}: ${full_message}"
    else
        echo "ℹ️ State for ${alert_key} unchanged. No notification sent."
    fi
}

@@ -117,7 +115,7 @@ for dir in "${DIRECTORIES_TO_MONITOR[@]}"; do
        continue
    fi
    usage=$(df -h "$dir" | awk 'NR==2 {print $5}' | sed 's/%//')
    echo " - ${dir} is at ${usage}%"
    echo " - ${dir} is at ${usage}%"
    if (( $(echo "$usage > $DISK_USAGE_THRESHOLD" | bc -l) )); then
        echo "🚨 Alert: ${dir} usage is at ${usage}% which is above the ${DISK_USAGE_THRESHOLD}% threshold." >&2
        send_notification_if_changed "disk_usage_${dir//\//_}" "HANA Disk" "Disk usage for ${dir} is at ${usage}%." "true" "${usage}%"
@@ -164,58 +162,83 @@ echo "ℹ️ Free Segments: ${free_segments}"
if [ $total_segments -eq 0 ]; then
    echo "⚠️ Warning: No log segments found. Skipping percentage checks." >&2
    send_notification_if_changed "hana_log_segments_total" "HANA Log Segment Warning" "No log segments found. Skipping percentage checks." "true" "NO_LOG_SEGMENTS"
    exit 0
else
    send_notification_if_changed "hana_log_segments_total" "HANA Log Segment" "Log segments found." "false" "OK"
    truncated_percentage=$((truncated_segments * 100 / total_segments))
    if (( $(echo "$truncated_percentage > $TRUNCATED_PERCENTAGE_THRESHOLD" | bc -l) )); then
        echo "🚨 Alert: ${truncated_percentage}% of log segments are 'Truncated'." >&2
        send_notification_if_changed "hana_log_truncated" "HANA Log Segment" "${truncated_percentage}% of HANA log segments are in 'Truncated' state." "true" "${truncated_percentage}%"
    else
        send_notification_if_changed "hana_log_truncated" "HANA Log Segment" "${truncated_percentage}% of HANA log segments are in 'Truncated' state (below threshold)." "false" "OK"
    fi

    free_percentage=$((free_segments * 100 / total_segments))
    if (( $(echo "$free_percentage < $FREE_PERCENTAGE_THRESHOLD" | bc -l) )); then
        echo "🚨 Alert: Only ${free_percentage}% of log segments are 'Free'." >&2
        send_notification_if_changed "hana_log_free" "HANA Log Segment" "Only ${free_percentage}% of HANA log segments are in 'Free' state." "true" "${free_percentage}%"
    else
        send_notification_if_changed "hana_log_free" "HANA Log Segment" "Only ${free_percentage}% of HANA log segments are in 'Free' state (above threshold)." "false" "OK"
    fi
fi

truncated_percentage=$((truncated_segments * 100 / total_segments))
if (( $(echo "$truncated_percentage > $TRUNCATED_PERCENTAGE_THRESHOLD" | bc -l) )); then
    echo "🚨 Alert: ${truncated_percentage}% of log segments are 'Truncated'." >&2
    send_notification_if_changed "hana_log_truncated" "HANA Log Segment" "${truncated_percentage}% of HANA log segments are in 'Truncated' state." "true" "${truncated_percentage}%"
# --- HANA Statement Queue Monitoring ---
echo "⚙️ Checking HANA statement queue..."
STATEMENT_QUEUE_SQL="SELECT COUNT(*) FROM M_SERVICE_THREADS WHERE THREAD_TYPE = 'SqlExecutor' AND THREAD_STATE = 'Queueing';"
queue_count=$("$HDBSQL_PATH" -U "$HANA_USER_KEY" -j -a -x "$STATEMENT_QUEUE_SQL" 2>/dev/null | tr -d '"')

if ! [[ "$queue_count" =~ ^[0-9]+$ ]]; then
    echo "⚠️ Warning: Could not retrieve HANA statement queue count. Skipping check." >&2
    send_notification_if_changed "hana_statement_queue_check_fail" "HANA Monitor Warning" "Could not retrieve statement queue count." "true" "QUEUE_CHECK_FAIL"
else
    send_notification_if_changed "hana_log_truncated" "HANA Log Segment" "${truncated_percentage}% of HANA log segments are in 'Truncated' state (below threshold)." "false" "OK"
    send_notification_if_changed "hana_statement_queue_check_fail" "HANA Monitor Warning" "Statement queue check is working." "false" "OK"
    echo "ℹ️ Current statement queue length: ${queue_count}"

    breach_count=$(get_state "statement_queue_breach_count")
    breach_count=${breach_count:-0}

    if (( queue_count > STATEMENT_QUEUE_THRESHOLD )); then
        breach_count=$((breach_count + 1))
        echo "📈 Statement queue is above threshold. Consecutive breach count: ${breach_count}/${STATEMENT_QUEUE_CONSECUTIVE_RUNS}."
    else
        breach_count=0
    fi
    set_state "statement_queue_breach_count" "$breach_count"

    if (( breach_count >= STATEMENT_QUEUE_CONSECUTIVE_RUNS )); then
        message="Statement queue has been over ${STATEMENT_QUEUE_THRESHOLD} for ${breach_count} checks. Current count: ${queue_count}."
        send_notification_if_changed "hana_statement_queue_status" "HANA Statement Queue" "${message}" "true" "ALERT:${queue_count}"
    else
        message="Statement queue is normal. Current count: ${queue_count}."
        send_notification_if_changed "hana_statement_queue_status" "HANA Statement Queue" "${message}" "false" "OK"
    fi
fi
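The breach counter above only raises an alert once the queue has exceeded the threshold for `STATEMENT_QUEUE_CONSECUTIVE_RUNS` runs in a row, so one-off spikes are ignored. The core logic, with the script's `get_state`/`set_state` store replaced by a plain variable for illustration:

```shell
#!/bin/bash
STATEMENT_QUEUE_THRESHOLD=100
STATEMENT_QUEUE_CONSECUTIVE_RUNS=3
breach_count=0
alerted=false

# Simulated queue lengths from five successive monitoring runs.
for queue_count in 150 20 120 130 140; do
    if (( queue_count > STATEMENT_QUEUE_THRESHOLD )); then
        breach_count=$((breach_count + 1))
    else
        breach_count=0   # any normal reading resets the streak
    fi
    if (( breach_count >= STATEMENT_QUEUE_CONSECUTIVE_RUNS )); then
        alerted=true
    fi
done
echo "breach_count=${breach_count} alerted=${alerted}"   # breach_count=3 alerted=true
```

The isolated `150` spike never alerts; only the final three consecutive breaches trip the alarm.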

free_percentage=$((free_segments * 100 / total_segments))
if (( $(echo "$free_percentage < $FREE_PERCENTAGE_THRESHOLD" | bc -l) )); then
    echo "🚨 Alert: Only ${free_percentage}% of log segments are 'Free'." >&2
    send_notification_if_changed "hana_log_free" "HANA Log Segment" "Only ${free_percentage}% of HANA log segments are in 'Free' state." "true" "${free_percentage}%"
else
    send_notification_if_changed "hana_log_free" "HANA Log Segment" "Only ${free_percentage}% of HANA log segments are in 'Free' state (above threshold)." "false" "OK"
fi

echo "ℹ️ Checking last successful data backup status..."

# Query to get the start time of the most recent successful complete data backup
# --- HANA Backup Status Monitoring ---
echo "ℹ️ Checking last successful data backup status..."
last_backup_date=$("$HDBSQL_PATH" -U "$HANA_USER_KEY" -j -a -x \
    "SELECT TOP 1 SYS_START_TIME FROM M_BACKUP_CATALOG WHERE ENTRY_TYPE_NAME = 'complete data backup' AND STATE_NAME = 'successful' ORDER BY SYS_START_TIME DESC" 2>/dev/null | tr -d "\"" | sed 's/\..*//') # sed removes fractional seconds
    "SELECT TOP 1 SYS_START_TIME FROM M_BACKUP_CATALOG WHERE ENTRY_TYPE_NAME = 'complete data backup' AND STATE_NAME = 'successful' ORDER BY SYS_START_TIME DESC" 2>/dev/null | tr -d "\"" | sed 's/\..*//')

if [[ -z "$last_backup_date" ]]; then
    # No successful backup found at all
    local message="No successful complete data backup found for ${COMPANY_NAME} HANA."
    message="No successful complete data backup found for ${COMPANY_NAME} HANA."
    echo "🚨 Critical: ${message}"
    send_notification_if_changed "hana_backup_status" "HANA Backup" "${message}" "true" "NO_BACKUP"
    return
fi

# Convert dates to epoch seconds for comparison
last_backup_epoch=$(date -d "$last_backup_date" +%s)
current_epoch=$(date +%s)
threshold_seconds=$((BACKUP_THRESHOLD_HOURS * 3600))

age_seconds=$((current_epoch - last_backup_epoch))
age_hours=$((age_seconds / 3600))

if (( age_seconds > threshold_seconds )); then
    local message="Last successful HANA backup for ${COMPANY_NAME} is ${age_hours} hours old, which exceeds the threshold of ${BACKUP_THRESHOLD_HOURS} hours. Last backup was on: ${last_backup_date}."
    echo "🚨 Critical: ${message}"
    send_notification_if_changed "hana_backup_status" "HANA Backup" "${message}" "true" "${age_hours}h"
else
    local message="Last successful backup is ${age_hours} hours old (Threshold: ${BACKUP_THRESHOLD_HOURS} hours)."
    echo "✅ Success! ${message}"
    send_notification_if_changed "hana_backup_status" "HANA Backup" "${message}" "false" "OK"
    last_backup_epoch=$(date -d "$last_backup_date" +%s)
    current_epoch=$(date +%s)
    threshold_seconds=$((BACKUP_THRESHOLD_HOURS * 3600))
    age_seconds=$((current_epoch - last_backup_epoch))
    age_hours=$((age_seconds / 3600))

    if (( age_seconds > threshold_seconds )); then
        message="Last successful HANA backup for ${COMPANY_NAME} is ${age_hours} hours old, which exceeds the threshold of ${BACKUP_THRESHOLD_HOURS} hours. Last backup was on: ${last_backup_date}."
        echo "🚨 Critical: ${message}"
        send_notification_if_changed "hana_backup_status" "HANA Backup" "${message}" "true" "${age_hours}h"
    else
        message="Last successful backup is ${age_hours} hours old (Threshold: ${BACKUP_THRESHOLD_HOURS} hours)."
        echo "✅ Success! ${message}"
        send_notification_if_changed "hana_backup_status" "HANA Backup" "${message}" "false" "OK"
    fi
fi

echo "✅ Success! HANA monitoring check complete."

echo "✅ Success! HANA monitoring check complete."
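The backup-age check above converts the catalog timestamp to epoch seconds with GNU `date -d` and compares the difference against the configured threshold. A self-contained version with a fixed "now" (both timestamps are made up for the example):

```shell
#!/bin/bash
BACKUP_THRESHOLD_HOURS=25
last_backup_date="2024-01-01 00:00:00"                 # made-up catalog timestamp

last_backup_epoch=$(date -d "$last_backup_date" +%s)   # GNU date required for -d
current_epoch=$(date -d "2024-01-02 06:00:00" +%s)     # fixed "now" for the example

threshold_seconds=$((BACKUP_THRESHOLD_HOURS * 3600))
age_seconds=$((current_epoch - last_backup_epoch))
age_hours=$((age_seconds / 3600))

if (( age_seconds > threshold_seconds )); then
    echo "ALERT: last backup is ${age_hours}h old (threshold ${BACKUP_THRESHOLD_HOURS}h)"
else
    echo "OK: last backup is ${age_hours}h old"
fi
```

With these values the backup is 30 hours old, which exceeds the 25-hour threshold and takes the alert branch.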
@@ -1,16 +1,18 @@
#!/bin/bash
# Author: Tomi Eckert
#
# This file contains the configuration for the script downloader.
# The `SCRIPT_PACKAGES` associative array maps a short package name
# to a pipe-separated string with the following format:
# "<Display Name>|<Version>|<Description>|<Space-separated list of URLs>"
# "<Display Name>|<Version>|<Description>|<Space-separated list of URLs>|[Install Script (optional)]"
# The Install Script will be executed after all files for the package are downloaded.

declare -A SCRIPT_PACKAGES

# Format: short_name="Display Name|Version|Description|URL1 URL2..."
SCRIPT_PACKAGES["aurora"]="Aurora Suite|2.1.0|A collection of scripts for managing Aurora database instances.|https://git.technopunk.space/tomi/Scripts/raw/branch/main/aurora/aurora.sh https://git.technopunk.space/tomi/Scripts/raw/branch/main/aurora/aurora.conf"
SCRIPT_PACKAGES["backup"]="Backup Suite|1.0.5|A comprehensive script for backing up system files and databases.|https://git.technopunk.space/tomi/Scripts/raw/branch/main/backup/backup.sh https://git.technopunk.space/tomi/Scripts/raw/branch/main/backup/backup.conf"
SCRIPT_PACKAGES["monitor"]="Monitor Suite|1.2.0|Scripts for monitoring system health and performance metrics.|https://git.technopunk.space/tomi/Scripts/raw/branch/main/monitor/monitor.sh https://git.technopunk.space/tomi/Scripts/raw/branch/main/monitor/monitor.conf"
SCRIPT_PACKAGES["keymanager"]="Key Manager|1.2.1|A utility for managing HDB user keys for SAP HANA.|https://git.technopunk.space/tomi/Scripts/raw/branch/main/keymanager.sh"
SCRIPT_PACKAGES["backup"]="Backup Suite|1.0.8|A comprehensive script for backing up system files and databases.|https://git.technopunk.space/tomi/Scripts/raw/branch/main/backup/backup.sh https://git.technopunk.space/tomi/Scripts/raw/branch/main/backup/backup.conf"
SCRIPT_PACKAGES["monitor"]="Monitor Suite|1.3.1|Scripts for monitoring system health and performance metrics.|https://git.technopunk.space/tomi/Scripts/raw/branch/main/monitor/monitor.sh https://git.technopunk.space/tomi/Scripts/raw/branch/main/monitor/monitor.conf|https://git.technopunk.space/tomi/Scripts/raw/branch/main/monitor/monitor.hook.sh"
SCRIPT_PACKAGES["keymanager"]="Key Manager|1.2.3|A utility for managing HDB user keys for SAP HANA.|https://git.technopunk.space/tomi/Scripts/raw/branch/main/keymanager.sh"
SCRIPT_PACKAGES["cleaner"]="File Cleaner|1.1.0|A simple script to clean up temporary files and logs.|https://git.technopunk.space/tomi/Scripts/raw/branch/main/cleaner.sh"
SCRIPT_PACKAGES["hanatool"]="HANA Tool|1.5.0|A command-line tool for various SAP HANA administration tasks.|https://git.technopunk.space/tomi/Scripts/raw/branch/main/hanatool.sh"
SCRIPT_PACKAGES["hanatool"]="HANA Tool|1.5.6|A command-line tool for various SAP HANA administration tasks.|https://git.technopunk.space/tomi/Scripts/raw/branch/main/hanatool.sh"