System hardening: Reduce attack surfaces, keep operations stable
Systems rarely become insecure because of sophisticated attacks, but because of too much openness, too many exceptions, and configurations that no one can explain with certainty anymore. System hardening addresses precisely these issues: it reduces attack surfaces, makes settings traceable, and creates verifiable results, without forcing operations into a fragile straitjacket. Because attack patterns (from credential theft to ransomware spreading via lateral movement) have become more frequent and patch cycles have shortened noticeably in recent years, "security by default" is often no longer sufficient in practice.
The key is a level of security that matches the role, risk, and operational reality – documented, versioned, and continuously verifiable. This creates a basis on which updates, audits, and team changes do not become a security lottery every time.

Why system hardening is an operational factor today
A hardened system is not primarily a security project but an operational standard. Consistently reducing attack surfaces and maintaining clean configurations lowers the likelihood of incidents, minimizes downtime caused by surprise changes, and shortens the time to a robust root-cause analysis when something does happen. This has a direct impact on costs, stability, and delivery capability, especially where services run 24/7 or many systems must be operated uniformly.
At the same time, system hardening is the bridge between technology and verifiability: In many organizations, there is growing pressure not only to “do” security measures, but also to provide verifiable evidence of them – depending on the industry, e.g., in the context of NIS2-oriented programs, BSI-related requirements, or ISO-based management systems. System hardening provides tangible artifacts for this purpose: baselines, deviation justifications, measured values, and re-audit routines.
From a technical perspective, hardening is not a single tool but a bundle of measures: kernel/sysctl parameters, service minimization, SSH and privilege concepts, firewall rules (nftables/iptables), logging/auditing and, where appropriate, mandatory access control (SELinux/AppArmor). The value arises when these building blocks work together as a consistent operating model rather than as a loose list of measures.
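Making such a bundle verifiable starts with comparing the actual state against a versioned baseline. The following is a minimal sketch of that idea; the kernel parameters and target values shown are illustrative examples, not a vetted hardening baseline:

```python
# Sketch: compare observed kernel parameters against a versioned baseline.
# The parameters and target values below are illustrative examples only.

BASELINE = {
    "net.ipv4.ip_forward": "0",
    "kernel.kptr_restrict": "2",
    "fs.protected_symlinks": "1",
}

def find_deviations(baseline: dict, observed: dict) -> dict:
    """Return {parameter: (expected, actual)} for every mismatch."""
    return {
        key: (want, observed.get(key))
        for key, want in baseline.items()
        if observed.get(key) != want
    }

# On a live system, 'observed' would come from sysctl(8) or /proc/sys;
# here a hard-coded sample keeps the sketch self-contained.
observed = {
    "net.ipv4.ip_forward": "1",   # deviates from the baseline
    "kernel.kptr_restrict": "2",
    "fs.protected_symlinks": "1",
}
print(find_deviations(BASELINE, observed))
```

Because the result is a plain data structure, it can feed a report, a ticket, or a CI gate, which is exactly the "consistent operating model" the paragraph above describes.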
Operating model & ownership

Who owns baselines, exceptions, and re-audits? Without clear responsibility, hardening measures become isolated actions. In practice, a model has proven successful in which baselines are versioned (e.g., as code or policy), deviations carry an expiration date or a review routine, and the operations/platform team not only implements standards but actively develops them. Teams change and skills are not always available, so decisions must be documented and transferable.
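The "deviations with an expiration date" idea can be sketched as a small exception registry. The record layout and field names here are assumptions for illustration, not a prescribed schema:

```python
# Sketch: baseline exceptions carry an owner, a justification, and an
# expiry date, so they get reviewed instead of accumulating silently.
# The record layout and sample entries are illustrative assumptions.
from dataclasses import dataclass
from datetime import date

@dataclass(frozen=True)
class Deviation:
    control: str    # the baseline rule being waived
    owner: str      # who is accountable for the exception
    reason: str     # documented justification
    expires: date   # forces a review instead of a permanent silent waiver

def due_for_review(deviations, today):
    """Return all deviations whose expiry date has passed."""
    return [d for d in deviations if d.expires <= today]

registry = [
    Deviation("ssh.PermitRootLogin", "platform-team",
              "break-glass access", date(2024, 1, 31)),
    Deviation("sysctl.net.ipv4.ip_forward", "network-team",
              "router role", date(2030, 12, 31)),
]
print([d.control for d in due_for_review(registry, date(2024, 6, 1))])
```

Keeping this registry in version control alongside the baseline makes both the decision and its owner transferable when the team changes.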
Update & security capability

Hardening must survive updates. Kernel parameters, SSH settings, cipher suites, and service defaults change across releases, and security vulnerabilities often force quick responses. A good criterion is therefore: which measures can be checked, and deviations followed up, automatically, without manual intervention each time? Modern tooling (CI, infrastructure as code, policy as code, central scans) can accelerate hardening, provided it is cleanly embedded in change and rollout processes.
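One concrete form of such an automated check is validating a configuration file against required directives in CI. The sketch below checks an sshd_config fragment; the required values are illustrative, not an authoritative SSH hardening policy:

```python
# Sketch: a policy-as-code style check that an sshd_config fragment
# satisfies required directives. The required values are illustrative,
# not an authoritative SSH policy.

REQUIRED = {
    "permitrootlogin": "no",
    "passwordauthentication": "no",
    "x11forwarding": "no",
}

def check_sshd_config(text: str, required: dict) -> list:
    """Return (directive, expected, found) for every violation."""
    found = {}
    for line in text.splitlines():
        line = line.split("#", 1)[0].strip()   # drop comments
        if not line:
            continue
        parts = line.split(None, 1)
        if len(parts) == 2:
            found[parts[0].lower()] = parts[1].strip().lower()
    return [
        (key, want, found.get(key))
        for key, want in required.items()
        if found.get(key) != want
    ]

sample = """
PermitRootLogin no
PasswordAuthentication yes
"""
# Reports the wrong value and the missing directive.
print(check_sshd_config(sample, REQUIRED))
```

Run as a pipeline step, a non-empty result fails the build, so an OS update that reintroduces a default cannot slip through unnoticed.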
Integration, data & lifecycle

Many security problems arise at transitions: log pipelines, identity integrations, backup and recovery, monitoring, secrets, network paths. Hardening must take these integrations into account; otherwise it looks solid locally but remains full of holes systemically. Then there is the lifecycle: LTS versions, end of support, provider requirements, cryptographic deprecations. Anyone defining cipher suites, authentication methods, or audit stacks today should already take the next platform changes and compliance requirements into account.
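Cryptographic deprecations in particular lend themselves to an automated lifecycle check: compare what is configured against a deny-list derived from current guidance. The lists below are illustrative; a real deny-list would follow current cryptographic recommendations (e.g., from BSI or NIST):

```python
# Sketch: flag configured cipher/MAC algorithms that appear on a
# deprecation list. Both lists below are illustrative examples.

DEPRECATED = {"3des-cbc", "arcfour", "hmac-md5"}

def deprecated_in_use(configured) -> list:
    """Return the sorted intersection of configured and deprecated algorithms."""
    return sorted(set(configured) & DEPRECATED)

configured = ["aes256-gcm@openssh.com", "3des-cbc", "hmac-md5"]
print(deprecated_in_use(configured))  # ['3des-cbc', 'hmac-md5']
```

Keeping the deny-list in one versioned place means a new deprecation is a one-line change that immediately flags every affected system.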

Training
Specific training courses and current topics can be found in the Comelio GmbH course catalog.
Whether in-house at your company, as a webinar, or as an open event, the formats are flexibly tailored to different requirements.
Typical misconceptions that slow down system hardening
“Hardening means restricting everything to the maximum.”
Maximum restriction sounds good but often produces side effects: unstable deployments, unexplained failures, workarounds that bypass the process. Good hardening works with conscious decisions: what is necessary for this system role, and what is not? What is a must-have, and what is merely nice to have? Especially in environments with frequent patch cycles, maintainability is part of the security effect.
“A score equals security.”
Tools such as Lynis are valuable because they perform structured checks and enable comparability. However, a score is a measurement point, not a seal of approval. The decisive factor is why something deviates: a consciously accepted risk (with a documented justification) is usually better than a "green" system that no one can operate. Objective metrics help make configuration drift visible, but they do not replace context and prioritization.
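Treating the score as a trend metric means extracting it, together with the warnings, into something a dashboard or pipeline can track. The sketch below assumes the key=value style of Lynis's report file; the exact key names and sample content are assumptions and may differ between Lynis versions:

```python
# Sketch: read a Lynis-style report (key=value lines) and extract the
# hardening index plus warnings, to track the trend rather than treat
# the score as a verdict. Key names and sample content are assumptions
# about the report format and may differ between versions.

def parse_report(text: str):
    index, warnings = None, []
    for line in text.splitlines():
        if "=" not in line:
            continue
        key, value = line.split("=", 1)
        if key == "hardening_index":
            index = int(value)
        elif key == "warning[]":
            warnings.append(value)
    return index, warnings

sample = """\
lynis_version=3.0.9
hardening_index=72
warning[]=SSH-7408|PermitRootLogin should be set to no|
"""
index, warnings = parse_report(sample)
print(index, len(warnings))
```

Stored per host and per run, these numbers show drift over time, which is the comparison the paragraph above argues matters more than any single value.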
“We harden once – then it’s peace and quiet.”
System hardening is not a one-time sprint. New packages, changed defaults, new services, new team members: all of these change the status quo. Without re-audits, baseline management, and change discipline, hardening is slowly softened again. This is particularly relevant today because supply chain risks (e.g., compromised dependencies or unexpected provider changes) increasingly have operational consequences: what is installed, and how it is configured, must be traceable in a repeatable way.
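A simple re-audit building block is detecting drift in configuration files by comparing current content hashes against recorded ones. The paths and file contents below are illustrative; on a real system the recorded hashes would live in the versioned baseline:

```python
# Sketch: a re-audit step that detects drift in configuration files by
# comparing current SHA-256 hashes against recorded ones. Paths and
# contents are illustrative examples.
import hashlib

def sha256_hex(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

def drifted(recorded: dict, current_contents: dict) -> list:
    """Return paths whose current hash no longer matches the record."""
    return [
        path for path, digest in recorded.items()
        if sha256_hex(current_contents.get(path, b"")) != digest
    ]

baseline_contents = {"/etc/ssh/sshd_config": b"PermitRootLogin no\n"}
recorded = {p: sha256_hex(c) for p, c in baseline_contents.items()}

# Simulated later state: the file was changed outside the process.
current = {"/etc/ssh/sshd_config": b"PermitRootLogin yes\n"}
print(drifted(recorded, current))
```

Hashes only say that something changed, not what; the point of the sketch is to trigger the review, after which the diff against the versioned baseline explains the change.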
“Enable MAC = done.”
SELinux and AppArmor can be very effective, but only if policies are understood, maintained, and mastered in operation. Activating them blindly generates false positives, frustration, and often, in the end, a fallback to permissive mode or broad exceptions. MAC makes sense where protection boundaries are clear, logs are evaluated systematically, and the team can integrate policy work into its daily routine.
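"Logs are evaluated systematically" can start very small: summarizing denial messages so policy work begins from data instead of guesses. The sketch below matches SELinux AVC denial lines; the sample line mimics the common audit format, but field order and details can vary between systems:

```python
# Sketch: summarize SELinux AVC denials from audit-log lines, grouped
# by process and permission. The sample lines mimic the common audit
# format; real logs may vary in field order and content.
import re
from collections import Counter

AVC = re.compile(r'avc:\s+denied\s+\{ (?P<perm>[^}]+) \}.*?comm="(?P<comm>[^"]+)"')

def summarize(lines) -> Counter:
    """Count denials per (process name, permission) pair."""
    counts = Counter()
    for line in lines:
        match = AVC.search(line)
        if match:
            counts[(match.group("comm"), match.group("perm").strip())] += 1
    return counts

log = [
    'type=AVC msg=audit(1700000000.123:42): avc:  denied  { read } for  '
    'pid=1234 comm="nginx" name="cert.pem" scontext=system_u:system_r:httpd_t:s0 tclass=file',
    'type=AVC msg=audit(1700000000.456:43): avc:  denied  { read } for  '
    'pid=1234 comm="nginx" name="key.pem" scontext=system_u:system_r:httpd_t:s0 tclass=file',
]
print(summarize(log))
```

A recurring (process, permission) pair is a candidate for a deliberate policy decision; a one-off is often noise, and that distinction is exactly what keeps a team out of blanket permissive exceptions.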
Initial consultation / project initiation
If you want to set up baselines, measurement points (e.g., Lynis Audit as a reference), review routines, and integration into deployment/operations, a short initial consultation will clarify the approach, scope, and appropriate artifacts.
Frequently asked questions about system hardening
In this FAQ, you will find the topics that come up most frequently in consulting and training. Each answer is kept short and refers to further content if necessary. Is your question missing? We are happy to help you personally.

What is the difference between system hardening and “security by default”?
Security by default refers to the manufacturer’s secure default settings. System hardening goes further: it specifically reduces attack surfaces, adapts configurations to system roles and threat assumptions, and makes results verifiable. The core is context instead of defaults.
Does a system always have to be more restricted for greater security?
Not necessarily. Too many restrictions can damage operation and maintainability—and then create new risks through workarounds. Good hardening is a controlled balance with documented decisions.
How meaningful is a Lynis score?
It is very useful as a comparison and trend metric, especially against drift and for prioritization. However, it is not an absolute quality feature: Deviations can be deliberate and correct if they are justified and regularly reviewed.
What role do SELinux or AppArmor play in practice?
MAC can significantly strengthen security boundaries, but it is demanding. It makes sense when policies are mastered, logs are evaluated, and exception processes are clearly regulated. “Simply activating” often leads to frustration and subsequent deactivation.
