IT infrastructure: Stable operation & clear segmentation
IT infrastructure is “good” when it remains stable in everyday use: changes can be planned, disruptions can be isolated quickly, access is traceable, and recovery does not depend on manual effort. This quality comes not from individual tools but from architecture and operating models: clear zones, defined data paths, consistent identities, and repeatable automation. At the same time, many companies are under growing pressure to justify security and operational decisions – whether through internal controls, customer requirements, or programs aligned with NIS2 or BSI-oriented approaches. Those who design their infrastructure so that network, storage, authentication, and security interlock reap pragmatic benefits: less ad hoc work, less shadow IT, faster changes, and markedly better fault tolerance – without operations and security working against each other.

Hardware

Hardware determines whether operations remain predictable: redundancy, lifecycle management, sound component selection, and expandability without later retrofits. The focus is on technical evaluation, architecture, and commissioning – with a view to Linux, virtualization, and everyday monitoring and recovery requirements.
Network planning

Networks must remain maintainable even as requirements grow: clear zones (internal/external/management), defined transitions, traceable access paths, and resilient telemetry. Segmentation, remote access, and HA mechanisms are planned in such a way that changes do not become manual work and can be clearly documented and automated.
IT security

Security is part of the system design: hardening, identities, rights, and monitoring are interlinked so that fewer exceptions arise and decisions remain traceable. The focus is on clear access concepts, auditable logging/alerting, and standards that work in everyday life – often based on transparent open-source tools.
Why infrastructure determines speed and risk today
Infrastructure is the foundation of any platform and application strategy: if segmentation is unclear, storage latencies fluctuate, or identities are inconsistent, any modernization becomes slower and more expensive. This is particularly noticeable in recurring issues such as patch windows, site connections, onboarding new systems, or “minor” changes that suddenly affect routing, firewall rules, and certificates. In practice, the discussion is therefore shifting away from individual components toward operability: Who operates what, how are changes approved, and how can you see whether the system is still in the expected state?
There is also an external factor: many organizations are now expanding their security and control programs based on established references (depending on the environment, e.g., BSI-related procedures or NIS2-oriented requirements). This does not mean “more bureaucracy”; above all, it means that decisions must be explainable. A cleanly structured infrastructure reduces friction in audits as well as in incident reviews – and puts capacity and cost discussions on a firmer footing.

Training
Specific training courses and current topics can be found in the Comelio GmbH course catalog.
Whether in-house at your company, as a webinar, or as an open event – the formats are flexibly tailored to different requirements.
Typical misunderstandings
“A few VLANs are enough for segmentation.”
VLANs are a mechanism, not a security architecture. Without clear zone logic (including the management plane, east-west traffic, and defined transitions), rules quickly become inconsistent. Complexity then explodes once remote access, external services, or multiple tenants are added – often unnoticed until the first incident reveals that paths are not actually controlled. A structured approach yields solid network planning instead of rules that have simply grown over time.
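The idea of a zone logic with explicit transitions can be sketched in a few lines. The zone names and example transitions below are hypothetical illustrations, not a recommendation for a concrete layout:

```python
# Minimal sketch: model zones and defined transitions, then check whether a
# traffic path is explicitly approved. Anything not listed is implicitly
# denied - including east-west traffic between zones that were never meant
# to talk to each other. All names here are made up for illustration.

ALLOWED_TRANSITIONS = {
    ("internal", "dmz"),         # e.g. app servers reaching a reverse proxy
    ("dmz", "external"),         # outbound traffic from the DMZ
    ("management", "internal"),  # admin access into the internal zone
}

def path_is_defined(src_zone: str, dst_zone: str) -> bool:
    """Return True only if this zone transition was explicitly approved."""
    return (src_zone, dst_zone) in ALLOWED_TRANSITIONS

print(path_is_defined("management", "internal"))  # True
print(path_is_defined("external", "management"))  # False
```

The point is not the code itself but the default-deny stance: every transition that is not written down is a finding, not an implicit feature.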
“Firewall = security”
A firewall is only as good as its identities, logging, and processes. Modern attacks use legitimate access, token-based sessions, or supply chain effects. That is why network protection must always work in conjunction with central authentication, hardening, and telemetry – otherwise it creates a false sense of security. Ransomware patterns in particular show how much lateral movement and privileged access matter, not just the perimeter rule.
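The two warning signs named here – access not bound to identities and rules without telemetry – can be linted mechanically. The rule fields and example entries below are hypothetical:

```python
# Minimal sketch: audit a rule set for the patterns the text warns about -
# rules that allow "any" source instead of an identity group, and rules
# without logging. Field names and example rules are invented for illustration.

RULES = [
    {"name": "admin-ssh", "src": "mgmt-admins", "dst": "servers", "port": 22, "log": True},
    {"name": "legacy-any", "src": "any", "dst": "db", "port": 5432, "log": False},
]

def audit(rules):
    """Collect findings for rules that bypass identity or telemetry."""
    findings = []
    for rule in rules:
        if rule["src"] == "any":
            findings.append((rule["name"], "source not restricted to an identity group"))
        if not rule["log"]:
            findings.append((rule["name"], "rule is not logged - no audit trail"))
    return findings

for name, issue in audit(RULES):
    print(f"{name}: {issue}")
```

A check like this is no substitute for hardening or central authentication, but it makes the exceptions visible that otherwise accumulate silently.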
“Storage is capacity, not architecture”
In virtualization and Kubernetes, storage is part of the failure design. Snapshot policies, replication paths, quotas, and recovery tests determine whether a problem remains isolated or becomes a domino effect. Those who simply make storage “bigger” instead of clarifying data paths and ownership are buying risk.
“Central authentication comes later”
If identities, groups, and privileges are only consolidated after the fact, shadow accounts, exceptions, and admin knowledge tied to individuals remain. In regulated or audit-heavy environments, this surfaces as soon as evidence of authorizations, admin changes, and access histories is required.
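Surfacing shadow accounts comes down to diffing the live directory against the defined role model. The group and account names below are hypothetical:

```python
# Minimal sketch: compare actual group memberships against a defined role
# model and report members that exist in the live directory but not in the
# model - typical shadow accounts or undocumented exceptions.
# All group and account names are invented for illustration.

ROLE_MODEL = {
    "linux-admins": {"alice", "bob"},
    "db-operators": {"carol"},
}

ACTUAL = {
    "linux-admins": {"alice", "bob", "old-svc-account"},  # leftover account
    "db-operators": {"carol"},
}

def find_deviations(model: dict, actual: dict) -> dict:
    """Return members present in the live state but absent from the model."""
    deviations = {}
    for group, members in actual.items():
        extra = members - model.get(group, set())
        if extra:
            deviations[group] = sorted(extra)
    return deviations

print(find_deviations(ROLE_MODEL, ACTUAL))  # {'linux-admins': ['old-svc-account']}
```

Run regularly, a diff like this turns the audit question “who can do what, and why?” from an archaeology exercise into a routine report.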
Frequently asked questions about IT infrastructure
In this FAQ, you will find the topics that come up most frequently in consulting and training. Each answer is kept short and refers to further content if necessary. Can’t find your question? We are happy to help you personally.

How large does a network have to be for segmentation to make sense?
Segmentation pays off not only in large networks but as soon as different protection requirements, locations, or operating roles exist. What matters is keeping the zone logic simple and defining transitions clearly.
OPNsense or a proprietary firewall—how do you make a sound decision?
Not based on feature sheets, but on the operating model: update process, logging/integration, rule discipline, HA concept, and skill availability. In many environments, operational transparency is a stronger argument than a “vendor feature.”
When does Ceph make sense – and when does it not?
Ceph shows its strengths when distributed systems, replication, and scaling are real requirements. If you only need “a little shared storage,” a leaner design can be more stable and cheaper to operate – recovery logic is often more important here than pure capacity.
How do you connect Linux and Windows domains cleanly without authorization chaos?
With clear identity sources, consistent group/role models, and defined privilege paths (including logging). Technically, many things are possible—quality comes from standards and review routines.
