
CORENET-6410: Add EgressIP disruption monitor test for upgrade validation#31011

Open
bpickard22 wants to merge 1 commit into openshift:main from bpickard22:EIP-UPGRADE

Conversation

@bpickard22

@bpickard22 bpickard22 commented Apr 14, 2026

We don't currently have any test coverage for EgressIP traffic continuity during cluster upgrades. When nodes roll, OVN controllers restart, and leader elections happen, there are several windows where EgressIP SNAT can briefly stop working -- but nothing catches that today.

This adds a new MonitorTest that continuously validates EgressIP SNAT throughout the upgrade process. It sets up a target pod on the host network, creates an EgressIP CR for a test namespace, and runs a poller that checks the source IP seen by the target matches the expected EgressIP. Any SNAT mismatches or connection failures are recorded as disruption intervals.

The test skips gracefully on non-OVN clusters, single-node topologies, MicroShift, and clusters without enough annotated worker nodes. It uses Deployments (not bare Pods) so the poller survives node drains, patches node labels to avoid update races, and includes CloudPrivateIPConfig reservations when allocating IPs to prevent cloud-level conflicts.

For now, we set a generous 120s disruption threshold to catch catastrophic regressions while we collect baseline data from CI runs.

Assisted by Claude Opus 4.6

Summary by CodeRabbit

  • New Features
    • Added EgressIP availability monitoring to network disruption tests to detect and report source-IP (egress) disruptions and total disruption duration.
    • Tests now deploy isolated resources and validate SNAT behavior, producing structured logs and JUnit results for disruption intervals.

@openshift-merge-bot

Pipeline controller notification
This repo is configured to use the pipeline controller. Second-stage tests will be triggered either automatically or after lgtm label is added, depending on the repository configuration. The pipeline controller will automatically detect which contexts are required and will utilize /test Prow commands to trigger the second stage.

For optional jobs, comment /test ? to see a list of all defined jobs. To manually trigger all second-stage jobs, use the /pipeline required command.

This repository is configured in: automatic mode

@openshift-ci openshift-ci bot added the needs-rebase Indicates a PR cannot be merged because it has merge conflicts with HEAD. label Apr 14, 2026
@coderabbitai

coderabbitai bot commented Apr 14, 2026

No actionable comments were generated in the recent review. 🎉

ℹ️ Recent review info
⚙️ Run configuration

Configuration used: Repository: openshift/coderabbit/.coderabbit.yaml

Review profile: CHILL

Plan: Pro Plus

Run ID: be8b4d43-8098-47ea-b74f-a6975a149d98

📥 Commits

Reviewing files that changed from the base of the PR and between fa52b3e and cddb03c.

📒 Files selected for processing (3)
  • pkg/defaultmonitortests/types.go
  • pkg/monitortests/network/disruptionegressip/monitortest.go
  • pkg/monitortests/network/disruptionegressip/namespace.yaml
✅ Files skipped from review due to trivial changes (2)
  • pkg/monitortests/network/disruptionegressip/namespace.yaml
  • pkg/monitortests/network/disruptionegressip/monitortest.go
🚧 Files skipped from review as they are similar to previous changes (1)
  • pkg/defaultmonitortests/types.go

Walkthrough

Adds a new EgressIP availability monitor test: registers it, implements the monitor that allocates an EgressIP, deploys target and poller workloads to detect SNAT disruptions, and provides a dedicated test namespace manifest.

Changes

Cohort / File(s): Summary

  • Monitor Test Registration (pkg/defaultmonitortests/types.go): Registers the new egressip-availability monitor under "Network / ovn-kubernetes" and adds an import for the new monitor package.
  • EgressIP Disruption Monitor Implementation (pkg/monitortests/network/disruptionegressip/monitortest.go): New monitor test implementation (NewAvailabilityInvariant) that validates platform/network, selects and labels an egress node, allocates an IPv4 EgressIP (excluding reserved ranges and existing addresses), creates the EgressIP CR, deploys a host-networked target and a poller, parses poller JSON logs into disruption intervals, evaluates availability against the threshold, and performs cleanup.
  • Test Namespace Manifest (pkg/monitortests/network/disruptionegressip/namespace.yaml): New Namespace manifest with generateName e2e-egressip-disruption-test- and pod-security / OpenShift security annotations for privileged workloads.
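A manifest along these lines would match that description; the generateName is from the walkthrough, while the specific pod-security labels are an assumption based on what privileged OpenShift e2e namespaces commonly carry, so the PR's exact keys may differ.

```yaml
apiVersion: v1
kind: Namespace
metadata:
  generateName: e2e-egressip-disruption-test-
  labels:
    # Assumed: privileged pod-security enforcement is typical for
    # host-network e2e workloads; the PR's actual labels/annotations
    # may differ.
    pod-security.kubernetes.io/enforce: privileged
    security.openshift.io/scc.podSecurityLabelSync: "false"
```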

Sequence Diagram

sequenceDiagram
    actor Controller as EgressIP Monitor
    participant K8sAPI as Kubernetes API
    participant Node as Worker Node
    participant Target as Target Pod (host-network)
    participant Poller as Poller Pod
    participant Logs as Pod Logs

    Controller->>K8sAPI: Create Namespace
    Controller->>K8sAPI: Label selected Node (egress-assignable)
    Controller->>K8sAPI: Create EgressIP CR
    activate K8sAPI
    K8sAPI->>Node: Assign EgressIP to node
    deactivate K8sAPI

    Controller->>K8sAPI: Deploy Target (host-network) on Node
    Controller->>K8sAPI: Deploy Poller
    activate Target
    activate Poller

    loop Continuous monitoring
        Poller->>Target: HTTP request (check source IP)
        Target-->>Poller: Response (source IP observed)
        Poller->>Logs: Emit JSON line {ok: true|false, ts, msg}
    end

    deactivate Target
    deactivate Poller

    Controller->>Logs: Read Poller logs
    Controller->>Controller: Parse JSON lines -> construct disruption intervals
    Controller->>K8sAPI: Delete EgressIP CR
    Controller->>K8sAPI: Remove Node label
    Controller->>K8sAPI: Delete Namespace

Estimated code review effort

🎯 3 (Moderate) | ⏱️ ~25 minutes


Important

Pre-merge checks failed

Please resolve all errors before merging. Addressing warnings is optional.

❌ Failed checks (2 warnings, 1 inconclusive)

  • Docstring Coverage (⚠️ Warning): Docstring coverage is 25.00%, which is below the required threshold of 80.00%. Resolution: write docstrings for the functions missing them to satisfy the coverage threshold.
  • Single Node Openshift (Sno) Test Compatibility (⚠️ Warning): The monitor test lacks explicit SNO topology checks despite claiming to skip such deployments in its objectives. Resolution: add an explicit Single Node topology check in StartCollection using infrastructure.Status.ControlPlaneTopology == configv1.SingleReplicaTopologyMode.
  • Ote Binary Stdout Contract (❓ Inconclusive): The specified PR files could not be located in the repository, so OTE Binary Stdout Contract compliance cannot be assessed without access to the actual code changes. Resolution: fetch the PR branch (e.g., git fetch origin refs/pull/31011/head:pr-31011) and examine the files for process-level stdout writes that could corrupt JSON test output.
✅ Passed checks (7 passed)

  • Description Check: Passed (check skipped; CodeRabbit's high-level summary is enabled).
  • Title Check: Passed. The title clearly and concisely describes the main change: adding an EgressIP disruption monitor test for upgrade validation, which is the primary focus of the changeset.
  • Stable And Deterministic Test Names: Passed. The EgressIP disruption monitor test uses a stable, deterministic test name, 'egressip-availability', which is a static string constant with no dynamic values.
  • Test Structure And Quality: Passed. The PR adds a MonitorTest implementation without any Ginkgo test code, making this check not applicable.
  • Microshift Test Compatibility: Passed. The PR adds a MonitorTest framework implementation, not Ginkgo e2e tests. The code already includes explicit MicroShift protection via runtime platform checks that skip execution on MicroShift clusters.
  • Topology-Aware Scheduling Compatibility: Passed. The PR explicitly checks for SingleReplica (SNO) topology and implicitly validates for dedicated worker nodes with egress-ipconfig annotations, correctly rejecting unsupported topologies. Deployments use single replicas with no problematic anti-affinity or topology spread constraints.
  • Ipv6 And Disconnected Network Test Compatibility: Passed. This PR adds a MonitorTest, not Ginkgo e2e tests; the custom check targets Ginkgo e2e tests (It(), Describe(), Context(), When()), which are not present here.
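The SNO warning above suggests a specific guard. It can be sketched as follows; the types here are local stand-ins so the example is self-contained, while in the real monitor they come from github.com/openshift/api/config/v1 (configv1.SingleReplicaTopologyMode on Infrastructure.Status.ControlPlaneTopology).

```go
package main

import "fmt"

// TopologyMode and SingleReplicaTopologyMode are local stand-ins for the
// openshift/api configv1 types; "SingleReplica" matches the real constant's
// string value.
type TopologyMode string

const SingleReplicaTopologyMode TopologyMode = "SingleReplica"

type InfrastructureStatus struct {
	ControlPlaneTopology TopologyMode
}

// shouldSkipOnSNO reports whether the cluster is single-node, the check the
// pre-merge warning asks StartCollection to perform explicitly.
func shouldSkipOnSNO(status InfrastructureStatus) bool {
	return status.ControlPlaneTopology == SingleReplicaTopologyMode
}

func main() {
	fmt.Println(shouldSkipOnSNO(InfrastructureStatus{ControlPlaneTopology: SingleReplicaTopologyMode}))
	fmt.Println(shouldSkipOnSNO(InfrastructureStatus{ControlPlaneTopology: "HighlyAvailable"}))
}
```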

Comment @coderabbitai help to get the list of available commands and usage tips.

@openshift-ci openshift-ci bot requested review from p0lyn0mial and sjenning April 14, 2026 18:16
@openshift-ci

openshift-ci bot commented Apr 14, 2026

[APPROVALNOTIFIER] This PR is NOT APPROVED

This pull-request has been approved by: bpickard22
Once this PR has been reviewed and has the lgtm label, please assign smg247 for approval. For more information see the Code Review Process.

The full list of commands accepted by this bot can be found here.

Details: Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

Signed-off-by: Benjamin Pickard <bpickard@redhat.com>
@openshift-ci openshift-ci bot removed the needs-rebase Indicates a PR cannot be merged because it has merge conflicts with HEAD. label Apr 14, 2026
@bpickard22 bpickard22 changed the title from "Add EgressIP disruption monitor test for upgrade validation" to "CORENET-6410: Add EgressIP disruption monitor test for upgrade validation" Apr 14, 2026
@openshift-ci-robot openshift-ci-robot added the jira/valid-reference Indicates that this PR references a valid Jira ticket of any type. label Apr 14, 2026
@openshift-ci-robot

openshift-ci-robot commented Apr 14, 2026

@bpickard22: This pull request references CORENET-6410 which is a valid jira issue.

Warning: The referenced jira issue has an invalid target version for the target branch this PR targets: expected the story to target the "5.0.0" version, but no target version was set.

Details

In response to this:

We don't currently have any test coverage for EgressIP traffic continuity during cluster upgrades. When nodes roll, OVN controllers restart, and leader elections happen, there are several windows where EgressIP SNAT can briefly stop working -- but nothing catches that today.

This adds a new MonitorTest that continuously validates EgressIP SNAT throughout the upgrade process. It sets up a target pod on the host network, creates an EgressIP CR for a test namespace, and runs a poller that checks the source IP seen by the target matches the expected EgressIP. Any SNAT mismatches or connection failures are recorded as disruption intervals.

The test skips gracefully on non-OVN clusters, single-node topologies, MicroShift, and clusters without enough annotated worker nodes. It uses Deployments (not bare Pods) so the poller survives node drains, patches node labels to avoid update races, and includes CloudPrivateIPConfig reservations when allocating IPs to prevent cloud-level conflicts.

For now, we set a generous 120s disruption threshold to catch catastrophic regressions while we collect baseline data from CI runs.

Assisted by Claude Opus 4.6

Summary by CodeRabbit

Release Notes

  • New Features
  • Added EgressIP availability monitoring to network disruption tests, enabling better visibility into how EgressIP egress network functionality performs during disruption scenarios.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the openshift-eng/jira-lifecycle-plugin repository.


@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 2

🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@pkg/monitortests/network/disruptionegressip/monitortest.go`:
- Around line 493-495: The test currently pins the target pod with NodeSelector
{"kubernetes.io/hostname": targetNode.Name} and the poller assumes a single
hardcoded node IP, which causes false failures during node drains; remove the
hard NodeSelector pin (stop using targetNode.Name for scheduling) and instead
deploy multiple hostNetwork backends behind a Service (or use a movable/stable
endpoint) so backends can float across nodes, then change the poller logic (the
code that currently hardcodes the node IP in the poller block) to query the
Service/endpoint DNS or iterate over multiple backend endpoints and validate the
SNAT/source IP from whichever backend answers rather than expecting a single
node IP.
- Around line 384-399: The test currently only reads stdout from still-running
poller pods (LabelSelector "app=egressip-disruption-poller" and function
w.collectPodLogs), which loses evidence from pods deleted during restarts;
change the design so probe results are persisted outside pod lifetime and read
from that persistent store instead of relying on live pod logs. Concretely:
modify the poller to write its intervals/results to durable storage (e.g., a
shared PVC/log file, a ConfigMap/CRD, or an external aggregator/metrics
endpoint) and update the monitor code that currently calls w.collectPodLogs and
lists pods to read from that persistent interface (or Kubernetes events/metrics)
so deleted pods’ data are preserved; keep the LabelSelector and pod-listing
logic only as a fallback, and ensure functions like collectPodLogs are adapted
or replaced to query the new persistent source.

ℹ️ Review info
⚙️ Run configuration

Configuration used: Repository: openshift/coderabbit/.coderabbit.yaml

Review profile: CHILL

Plan: Pro Plus

Run ID: b0859dac-896f-4eb6-9bd1-6205581a28ed

📥 Commits

Reviewing files that changed from the base of the PR and between 9df27cd and fa52b3e.

📒 Files selected for processing (3)
  • pkg/defaultmonitortests/types.go
  • pkg/monitortests/network/disruptionegressip/monitortest.go
  • pkg/monitortests/network/disruptionegressip/namespace.yaml

Comment on lines +384 to +399
	pods, err := w.kubeClient.CoreV1().Pods(w.namespaceName).List(ctx, metav1.ListOptions{
		LabelSelector: "app=egressip-disruption-poller",
	})
	if err != nil {
		return nil, nil, []error{fmt.Errorf("failed to list poller pods: %w", err)}
	}
	if len(pods.Items) == 0 {
		return nil, nil, []error{fmt.Errorf("no poller pods found")}
	}

	var allIntervals monitorapi.Intervals
	var allErrors []error
	for _, pod := range pods.Items {
		intervals, errs := w.collectPodLogs(ctx, pod.Name)
		allIntervals = append(allIntervals, intervals...)
		allErrors = append(allErrors, errs...)


⚠️ Potential issue | 🟠 Major

Only scraping surviving poller pods loses earlier disruption evidence.

Lines 384-399 list the poller pods that still exist at collection time, and Lines 415-416 stream only the current container logs from each. Any poller pod deleted during a drain/restart takes its stdout history with it, so outages that happened before the final replica came up are silently dropped. Persist probe results outside pod lifetime, or emit intervals continuously instead of reconstructing them only from final pod logs.

Also applies to: 415-416

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@pkg/monitortests/network/disruptionegressip/monitortest.go` around lines 384
- 399, The test currently only reads stdout from still-running poller pods
(LabelSelector "app=egressip-disruption-poller" and function w.collectPodLogs),
which loses evidence from pods deleted during restarts; change the design so
probe results are persisted outside pod lifetime and read from that persistent
store instead of relying on live pod logs. Concretely: modify the poller to
write its intervals/results to durable storage (e.g., a shared PVC/log file, a
ConfigMap/CRD, or an external aggregator/metrics endpoint) and update the
monitor code that currently calls w.collectPodLogs and lists pods to read from
that persistent interface (or Kubernetes events/metrics) so deleted pods’ data
are preserved; keep the LabelSelector and pod-listing logic only as a fallback,
and ensure functions like collectPodLogs are adapted or replaced to query the
new persistent source.

Comment on lines +493 to +495
			NodeSelector: map[string]string{
				"kubernetes.io/hostname": targetNode.Name,
			},


⚠️ Potential issue | 🟠 Major

Pinning the target to one worker turns node drains into false egress-ip failures.

Lines 493-495 force the target onto a single node, and Lines 523-542 hardcode that node IP into the poller. If that worker is drained or rebooted during the upgrade, the backend disappears until the same node returns, so the monitor measures target-node availability rather than SNAT availability. Use a stable endpoint that can move with the target, or keep multiple host-network backends behind a Service and validate the source IP from whichever backend answers.

Also applies to: 523-542

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@pkg/monitortests/network/disruptionegressip/monitortest.go` around lines 493
- 495, The test currently pins the target pod with NodeSelector
{"kubernetes.io/hostname": targetNode.Name} and the poller assumes a single
hardcoded node IP, which causes false failures during node drains; remove the
hard NodeSelector pin (stop using targetNode.Name for scheduling) and instead
deploy multiple hostNetwork backends behind a Service (or use a movable/stable
endpoint) so backends can float across nodes, then change the poller logic (the
code that currently hardcodes the node IP in the poller block) to query the
Service/endpoint DNS or iterate over multiple backend endpoints and validate the
SNAT/source IP from whichever backend answers rather than expecting a single
node IP.
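One possible shape for the suggested fix is multiple host-network backends behind a Service instead of a hostname pin, so the poller can query a stable endpoint. The fragment below is purely illustrative: the names, image, and port are not from the PR.

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: egressip-target   # hypothetical name
spec:
  replicas: 2             # backends can float across worker nodes
  selector:
    matchLabels:
      app: egressip-target
  template:
    metadata:
      labels:
        app: egressip-target
    spec:
      hostNetwork: true
      containers:
      - name: target
        image: registry.example.com/agnhost   # placeholder image
        args: ["netexec", "--http-port=8080"]
---
apiVersion: v1
kind: Service
metadata:
  name: egressip-target   # poller queries this stable DNS name
spec:
  selector:
    app: egressip-target
  ports:
  - port: 8080
```

The poller would then validate the observed source IP from whichever backend answers, rather than expecting one specific node to stay up through the upgrade.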


@openshift-ci

openshift-ci bot commented Apr 14, 2026

@bpickard22: The following tests failed, say /retest to rerun all failed tests or /retest-required to rerun all mandatory failed tests:

  • ci/prow/unit (cddb03c, required): /test unit
  • ci/prow/okd-scos-images (cddb03c, required): /test okd-scos-images
  • ci/prow/verify (cddb03c, required): /test verify
  • ci/prow/images (cddb03c, required): /test images

Full PR test history. Your PR dashboard.

Details

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository. I understand the commands that are listed here.
