Network planning
Today, networks are more than mere “connectivity.” They determine whether platforms run reliably, whether changes land without incident, and whether security controls work in practice. Now that segmentation, remote access, and cloud components are intertwined in almost every environment, network planning has become an architectural question: Who is allowed to go where, via which routes, and with what credentials? In regulated environments, there is additional pressure to document security and operational decisions in a traceable manner (e.g., in the context of NIS2-oriented programs or BSI-related procedures).
Good network planning not only reduces risk, but also friction: fewer special rules, clearer ownership, more stable maintenance windows, and an infrastructure that can be operated automatically. The goal is a network that combines performance, access, and availability—and is structured in such a way that teams can still change it securely even after years.

Firewall & VPN

How segmentation, rules, and site networking are designed so that security and operations do not work against each other. The focus is on a clean zone model, transparent policy decisions, logging/monitoring, and a VPN setup that takes identities, DNS, and change processes into account – so that “the tunnel is up” actually stands for a resilient access path that remains maintainable even after team changes.
Block Storage

Why storage over the network has its own requirements for latency, redundancy, and failure domains – and how Ceph/ZFS fits neatly into KVM or Kubernetes. It’s a question of which model suits your workload and operations: data paths, replication, recovery, maintenance windows, and observability. Clarifying these points early on avoids performance surprises later and builds a platform that can be operated automatically and secured consistently.
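The capacity and failure-domain trade-offs mentioned above can be put into numbers early. A back-of-envelope sketch, using assumed figures (120 TB raw, 3-way replication across distinct hosts, as in a typical Ceph replicated pool):

```python
# Illustrative capacity planning for replicated network storage.
# All numbers are assumptions for the example, not recommendations.
raw_tb = 120             # total raw capacity across all storage hosts
replicas = 3             # each object stored on 3 distinct hosts

usable_tb = raw_tb / replicas            # capacity before filesystem overhead
tolerated_host_failures = replicas - 1   # data remains available after 2 host losses

print(f"usable: {usable_tb} TB, survives {tolerated_host_failures} host failures")
```

Running through this kind of arithmetic per pool makes explicit what “redundancy” costs in usable capacity, and how many simultaneous failures the failure domain actually absorbs.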
Availability

How redundancy paths, failover mechanisms, and observability work together so that “HA” doesn’t become a surprise in everyday life. The focus is on real-world operational scenarios: link flaps, asymmetric paths, state sync, planned maintenance, and recovery under stress. The goal is an architecture that can be tested, has clear switchover logic, and whose behavior in an incident does not depend on the implicit knowledge of individuals.
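“Clear switchover logic” that does not depend on individuals means the failover decision can be written down and tested. A minimal sketch (the threshold and node names are hypothetical) of hysteresis-based failover, which tolerates a single missed health check instead of flapping:

```python
# Hypothetical sketch: fail over only after N consecutive failed health
# checks, so a single link flap does not trigger a switchover.
FAIL_THRESHOLD = 3

def next_state(active: str, standby: str, failures: int, healthy: bool):
    """Process one health-check result; return (active_node, consecutive_failures)."""
    if healthy:
        return active, 0          # any success resets the counter
    failures += 1
    if failures >= FAIL_THRESHOLD:
        return standby, 0         # explicit, testable switchover point
    return active, failures
```

Because the function is pure, the switchover behavior can be exercised in a unit test or a scheduled failover drill, rather than discovered during an incident.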
Why network planning is back at the top of the agenda today
In many companies, the focus is shifting from “we have a network” to “we can develop it in a controlled manner.” The reason is rarely a single project, but rather the reality of operations: more east-west traffic due to container platforms, more distributed teams, more external integrations—and at the same time, shorter response windows for security issues. Ransomware patterns and lateral movement in flat networks are less “news” and more everyday experience from incident postmortems.
From an operational perspective, the benefits of good network architecture are very concrete: changes become more predictable because dependencies are visible; disruptions are isolated more quickly because segmentation and telemetry are not “bolted on” after the fact; and costs remain manageable because the solution is not tied to proprietary workarounds. Especially if you want to operate firewalls and VPNs consistently later on, or if you want to measure availability rather than just promise it, the real work begins with a clean network structure—not with the next tool.

Trainings
Specific training courses and current topics can be found in the Comelio GmbH course catalog.
Whether in-house at your company, as a webinar, or as an open event – the formats are flexibly tailored to different requirements.
Typical misunderstandings
“The network is just transport—security is provided by the firewall.”
In practice, this leads to overloaded rule sets and a network that remains too flat internally. Modern environments need security zones, unambiguous paths, and explicit default denies – not just at the edge, but between workloads.
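A default deny between workloads can be stated as a small, reviewable policy model. A minimal sketch (zone names, ports, and rules are illustrative, not from the source): anything not explicitly allowed is denied.

```python
# Hypothetical zone policy: explicit allow rules, default deny for
# everything else. Zones and ports are illustrative examples.
ALLOWED = {
    ("dmz", "app"): {443},   # reverse proxy -> application tier
    ("app", "db"):  {5432},  # application tier -> PostgreSQL
}

def is_allowed(src_zone: str, dst_zone: str, port: int) -> bool:
    """Return True only for an explicitly allowed (src, dst, port) tuple."""
    return port in ALLOWED.get((src_zone, dst_zone), set())
```

Keeping the allow list this explicit is what makes policy decisions transparent: a reviewer can see every permitted path, and the deny behavior needs no enumeration.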
“Segmentation equals VLANs”
VLANs are a tool, but not yet a security model. Without routing and policy design, logging, and ownership, segmentation exists only on paper. At the latest when SDN/overlay components or cloud networks are added, the model must cover those as well.
“VPN is solved when the tunnel is up”
A tunnel is only the beginning. What matters are identities, access paths, split tunneling strategies, DNS/service discovery, and the question of how to integrate policies and logs in such a way that audits or security reviews get reliable answers.
“High availability is a feature, not a design”
HA rarely fails because of the protocol, but rather because of a lack of failover paths in operation: What happens in the event of link flaps, asymmetric routing, state sync, maintenance windows, or provider disruptions? If you don’t model this in advance, availability becomes a gamble—especially under patch pressure and lifecycle changes.
Frequently asked questions about network planning
In this FAQ, you will find the topics that come up most frequently in consulting and training. Each answer is kept short and refers to further content if necessary. Can’t find your question? We are happy to help you personally.

How much segmentation makes sense without overloading operations?
Enough to ensure that zones and paths are unambiguous – but not so much that ownership and changes become unmanageable. A tiered model often helps: a few hard zones, clear default policies, then targeted refinement.
When does WireGuard make sense, and when is IPsec more appropriate?
WireGuard is often strong in terms of simplicity and performance, while IPsec is strong in terms of broad interoperability in heterogeneous environments. The decisive factors are operation (rollout/keys), observability, and integration into your access model—not just the protocol.
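To illustrate the simplicity side of that trade-off, a client-side wg-quick configuration might look like this (all keys, names, and addresses are placeholders, not values from the source); note that the split-tunneling decision lives in a single line:

```ini
# Illustrative client config (e.g. /etc/wireguard/wg0.conf).
# Keys, hostnames, and prefixes are placeholders.
[Interface]
PrivateKey = <client-private-key>
Address = 10.8.0.10/32
DNS = 10.8.0.1

[Peer]
PublicKey = <gateway-public-key>
Endpoint = vpn.example.com:51820
# Split tunnel: only internal prefixes are routed through the tunnel.
# AllowedIPs = 0.0.0.0/0 would instead force all traffic through it.
AllowedIPs = 10.8.0.0/24, 10.20.0.0/16
PersistentKeepalive = 25
```

The brevity is the point and the risk at once: the config is easy to roll out, but key distribution, revocation, and logging still have to come from your surrounding access model.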
Do I absolutely need “next-gen” firewall functions such as IDS/IPS?
Not necessarily everywhere. In many environments, clean segmentation plus logging/alerting is already the greater lever. IDS/IPS can be useful if you can actually operate it day to day (tuning, false positives, updates).
What is the most common reason why HA designs fail?
Unclear failure domains and a lack of testing in operation: failover paths are not rehearsed regularly, state sync and monitoring are inconsistent, and maintenance windows are not factored into the design.
