
4.12.5


Changes from 4.11.59

Note: this page shows the Feature-Based Change Log for a release

Complete Features

These features were completed when this image was assembled

1. Proposed title of this feature request
Add runbook_url to alerts in the OCP UI

2. What is the nature and description of the request?
If an alert includes a runbook_url label, then it should appear in the UI for the alert as a link.
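
For reference, a minimal sketch of an alerting rule carrying a runbook_url (hypothetical rule name, expression, and URL; shown here as an annotation, which is where OpenShift alerting rules commonly carry the runbook link, while this card refers to it as a label):

```yaml
# Hypothetical alerting rule whose runbook_url the console could render as a link.
apiVersion: monitoring.coreos.com/v1
kind: PrometheusRule
metadata:
  name: example-runbook-alert
  namespace: openshift-monitoring
spec:
  groups:
  - name: example.rules
    rules:
    - alert: ExampleAlert
      expr: vector(1)            # placeholder expression, always firing
      labels:
        severity: warning
      annotations:
        summary: Example alert with a runbook link
        runbook_url: https://example.com/runbooks/ExampleAlert.md
```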

3. Why does the customer need this? (List the business requirements here)
Customers can easily reach the alert runbook and address their issues.

4. List any affected packages or components.

Epic Goal

  • Make it possible to disable the console operator at install time, while still having a supported+upgradeable cluster.

Why is this important?

  • It's possible to disable console itself using spec.managementState in the console operator config. There is no way to remove the console operator, though. For clusters where an admin wants to completely remove console, we should give the option to disable the console operator as well.
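
For context, the existing mechanism mentioned above looks roughly like this (a sketch; it disables the console workload but leaves the console operator itself running):

```yaml
# Turning off the console via the console operator configuration.
apiVersion: operator.openshift.io/v1
kind: Console
metadata:
  name: cluster
spec:
  managementState: Removed
```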

Scenarios

  1. I'm an administrator who wants to minimize my OpenShift cluster footprint and who does not want the console installed on my cluster

Acceptance Criteria

  • It is possible at install time to opt-out of having the console operator installed. Once the cluster comes up, the console operator is not running.
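
A hedged sketch of what the install-time opt-out could look like in install-config.yaml, assuming the console operator is exposed as a named cluster capability (the capability names and the exact shape of this stanza are illustrative, not a committed interface):

```yaml
# Hypothetical install-config.yaml fragment: enable only an explicit capability list
# and leave the console capability out of it, so the console operator is never installed.
capabilities:
  baselineCapabilitySet: None
  additionalEnabledCapabilities:
  - baremetal
  - marketplace
  - openshift-samples
```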

Dependencies (internal and external)

  1. Composable cluster installation

Previous Work (Optional):

  1. https://docs.google.com/document/d/1srswUYYHIbKT5PAC5ZuVos9T2rBnf7k0F1WV2zKUTrA/edit#heading=h.mduog8qznwz
  2. https://docs.google.com/presentation/d/1U2zYAyrNGBooGBuyQME8Xn905RvOPbVv3XFw3stddZw/edit#slide=id.g10555cc0639_0_7

Open questions:

  1. The console operator manages the downloads deployment as well. Do we disable the downloads deployment? Long term we want to move to CLI manager: https://github.com/openshift/enhancements/blob/6ae78842d4a87593c63274e02ac7a33cc7f296c3/enhancements/oc/cli-manager.md

Done Checklist

  • CI - CI is running, tests are automated and merged.
  • Release Enablement <link to Feature Enablement Presentation>
  • DEV - Upstream code and tests merged: <link to meaningful PR or GitHub Issue>
  • DEV - Upstream documentation merged: <link to meaningful PR or GitHub Issue>
  • DEV - Downstream build attached to advisory: <link to errata>
  • QE - Test plans in Polarion: <link or reference to Polarion>
  • QE - Automated tests merged: <link or reference to automated tests>
  • DOC - Downstream documentation merged: <link to meaningful PR>

In the console-operator repo we need to add the `capability.openshift.io/console` annotation to all the manifests that the operator either contains or creates on the fly.

 

Manifests are currently present in /bindata and /manifest directories.
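
A hedged sketch of what annotating one of those manifests might look like, using the annotation key named in this card (the value and exact convention are illustrative; the real scheme is whatever the capabilities enhancement settles on):

```yaml
# Hypothetical console-operator manifest carrying the capability annotation.
apiVersion: v1
kind: ConfigMap
metadata:
  name: console-config
  namespace: openshift-console
  annotations:
    capability.openshift.io/console: ""   # placeholder value; see the enhancement doc
```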

 

Here is an example of the insights-operator change.

Here is the overall enhancement doc.

 

Feature Overview
Provide CSI drivers to replace all the in-tree cloud provider drivers we currently have. These drivers will probably be released as Tech Preview versions first before being promoted to GA.

Goals

  • Framework for rapid creation of CSI drivers for our cloud providers
  • CSI driver for AWS EBS
  • CSI driver for AWS EFS
  • CSI driver for GCP
  • CSI driver for Azure
  • CSI driver for VMware vSphere
  • CSI Driver for Azure Stack
  • CSI Driver for Alicloud
  • CSI Driver for IBM Cloud

Requirements

| Requirement | Notes | isMvp? |
| --- | --- | --- |
| Framework for CSI driver | TBD | Yes |
| Drivers should be available to install both in disconnected and connected mode | | Yes |
| Drivers should upgrade from release to release without any impact | | Yes |
| Drivers should be installable via CVO (when in-tree plugin exists) | | |

Out of Scope

This work will only cover the drivers themselves; it will not include:

  • enhancements to the CSI API framework
  • the migration to said drivers from the in-tree drivers
  • work for non-cloud provider storage drivers (FC-SAN, iSCSI) being converted to CSI drivers

Background, and strategic fit
In a future Kubernetes release (currently 1.21), in-tree cloud provider drivers will be deprecated and replaced with CSI equivalents. We need the drivers created so that we can continue to support the ecosystems in an appropriate way.

Assumptions

  • Storage SIG won't push the changeover out to a later Kubernetes release

Customer Considerations
Customers will need to be able to use the storage they want.

Documentation Considerations

  • Target audience: cluster admins
  • Updated content: update storage docs to show how to use these drivers (also better expose the capabilities)

This Epic is to track the GA of this feature

Goal

  • Make the Google Cloud Filestore service available via a CSI driver; it is desirable that this implementation supports dynamic provisioning
  • Without GCP filestore support, we are limited to block / RWO only (GCP PD 4.8 GA)
  • Align with what we support on other major public cloud providers.

Why is this important?

  • There is a known storage gap with Google Cloud where only block storage is supported
  • More customers are deploying on GCE and asking for file / RWX storage.

Scenarios

  1. Install the CSI driver
  2. Remove the CSI Driver
  3. Dynamically provision a CSI Google File PV*
  4. Utilise a Google File PV
  5. Assess optional features such as resize & snapshot

Acceptance Criteria

  • CI - MUST be running successfully with tests automated
  • ...

Dependencies (internal and external)

  1. ...

Previous Work (Optional):

Open questions:

Customers:

  • Telefonica Spain
  • Deutsche Bank

Done Checklist

  • CI - CI is running, tests are automated and merged.
  • DEV - Upstream code and tests merged: <link to meaningful PR or GitHub Issue>
  • DEV - Upstream documentation merged: <link to meaningful PR or GitHub Issue>
  • DEV - Downstream build attached to advisory: <link to errata>
  • QE - Test plans in Polarion: <link or reference to Polarion>
  • QE - Automated tests merged: <link or reference to automated tests>
  • DOC - Downstream documentation merged: <link to meaningful PR>

As an OCP user, I want images for GCP Filestore CSI Driver and Operator, so that I can install them on my cluster and utilize GCP Filestore shares.

Epic Goal

  • Enable the migration from an in-tree storage driver to a CSI-based driver with minimal impact to the end user, applications and cluster
  • These migrations would include, but are not limited to:
    • CSI driver for AWS EBS
    • CSI driver for GCP
    • CSI driver for Azure (file and disk)
    • CSI driver for VMware vSphere

Why is this important?

  • OpenShift needs to maintain its ability to enable PVCs and PVs of the main storage types
  • CSI Migration is getting close to GA, we need to have the feature fully tested and enabled in OpenShift
  • Upstream in-tree drivers are being deprecated to make way for the CSI drivers prior to in-tree driver removal

Scenarios

  1. User-initiated move from in-tree to CSI driver
  2. Upgrade-initiated move from in-tree to CSI driver
  3. Upgrade from EUS to EUS

Acceptance Criteria

  • CI - MUST be running successfully with tests automated
  • Release Technical Enablement - Provide necessary release enablement details and documents.
  • ...

Dependencies (internal and external)

  1. ...

Previous Work (Optional):

Open questions:

Done Checklist

  • CI - CI is running, tests are automated and merged.
  • Release Enablement <link to Feature Enablement Presentation>
  • DEV - Upstream code and tests merged: <link to meaningful PR or GitHub Issue>
  • DEV - Upstream documentation merged: <link to meaningful PR or GitHub Issue>
  • DEV - Downstream build attached to advisory: <link to errata>
  • QE - Test plans in Polarion: <link or reference to Polarion>
  • QE - Automated tests merged: <link or reference to automated tests>
  • DOC - Downstream documentation merged: <link to meaningful PR>
The details of this Jira Card are restricted (Red Hat Employee and Contractors only)

On new installations, we should make the StorageClass created by the CSI operator the default one. 

However, we shouldn't do that in an upgrade scenario. The main reason is that users might have set a different quota on the CSI driver StorageClass.

Exit criteria:

  • New clusters get the CSI Storage Class as the default one.
  • Existing clusters don't get their default Storage Classes changed.
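
For reference, marking the operator-created class as the default uses the standard Kubernetes annotation; a minimal sketch (class name and provisioner are illustrative):

```yaml
# On fresh installs the CSI operator's class would carry this annotation;
# on upgraded clusters the pre-existing default class is left untouched.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: example-csi
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"
provisioner: csi.example.vendor.com
```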

This Epic tracks the GA of this feature

Epic Goal

Why is this important?

  • OpenShift needs to maintain its ability to enable PVCs and PVs of the main storage types
  • CSI Migration is getting close to GA, we need to have the feature fully tested and enabled in OpenShift
  • Upstream in-tree drivers are being deprecated to make way for the CSI drivers prior to in-tree driver removal

Scenarios

  1. User-initiated move from in-tree to CSI driver
  2. Upgrade-initiated move from in-tree to CSI driver
  3. Upgrade from EUS to EUS

Acceptance Criteria

  • CI - MUST be running successfully with tests automated
  • Release Technical Enablement - Provide necessary release enablement details and documents.
  • ...

Dependencies (internal and external)

  1. ...

Previous Work (Optional):

Open questions:

Done Checklist

  • CI - CI is running, tests are automated and merged.
  • Release Enablement <link to Feature Enablement Presentation>
  • DEV - Upstream code and tests merged: <link to meaningful PR or GitHub Issue>
  • DEV - Upstream documentation merged: <link to meaningful PR or GitHub Issue>
  • DEV - Downstream build attached to advisory: <link to errata>
  • QE - Test plans in Polarion: <link or reference to Polarion>
  • QE - Automated tests merged: <link or reference to automated tests>
  • DOC - Downstream documentation merged: <link to meaningful PR>

On new installations, we should make the StorageClass created by the CSI operator the default one. 

However, we shouldn't do that in an upgrade scenario. The main reason is that users might have set a different quota on the CSI driver StorageClass.

Exit criteria:

  • New clusters get the CSI Storage Class as the default one.
  • Existing clusters don't get their default Storage Classes changed.

OCP/Telco Definition of Done
Epic Template descriptions and documentation.


Epic Goal

  • Rebase OpenShift components to k8s v1.24

Why is this important?

  • Rebasing ensures components work with the upcoming release of Kubernetes
  • Address tech debt related to upstream deprecations and removals.

Scenarios

  1. ...

Acceptance Criteria

  • CI - MUST be running successfully with tests automated
  • Release Technical Enablement - Provide necessary release enablement details and documents.
  • ...

Dependencies (internal and external)

  1. k8s 1.24 release

Previous Work (Optional):

Open questions:

Done Checklist

  • CI - CI is running, tests are automated and merged.
  • Release Enablement <link to Feature Enablement Presentation>
  • DEV - Upstream code and tests merged: <link to meaningful PR or GitHub Issue>
  • DEV - Upstream documentation merged: <link to meaningful PR or GitHub Issue>
  • DEV - Downstream build attached to advisory: <link to errata>
  • QE - Test plans in Polarion: <link or reference to Polarion>
  • QE - Automated tests merged: <link or reference to automated tests>
  • DOC - Downstream documentation merged: <link to meaningful PR>

Epic Goal

  • Rebase cluster autoscaler on top of Kubernetes 1.25

Why is this important?

  • Need to pick up latest upstream changes

Scenarios

  1. ...

Acceptance Criteria

  • CI - MUST be running successfully with tests automated
  • Release Technical Enablement - Provide necessary release enablement details and documents.
  • ...

Dependencies (internal and external)

  1. ...

Previous Work (Optional):

Open questions:

Done Checklist

  • CI - CI is running, tests are automated and merged.
  • Release Enablement <link to Feature Enablement Presentation>
  • DEV - Upstream code and tests merged: <link to meaningful PR or GitHub Issue>
  • DEV - Upstream documentation merged: <link to meaningful PR or GitHub Issue>
  • DEV - Downstream build attached to advisory: <link to errata>
  • QE - Test plans in Polarion: <link or reference to Polarion>
  • QE - Automated tests merged: <link or reference to automated tests>
  • DOC - Downstream documentation merged: <link to meaningful PR>

User Story

As a user I would like to see all the events that the autoscaler creates, even duplicates. Having the CAO set this flag will allow me to continue to see these events.

Background

We have carried a patch for the autoscaler that would enable the duplication of events. This patch can now be dropped because the upstream added a flag for this behavior in https://github.com/kubernetes/autoscaler/pull/4921

Steps

  • add the --record-duplicated-events flag to all autoscaler deployments from the CAO
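
A sketch of the resulting change, assuming the CAO renders the autoscaler Deployment's container args roughly as below (the image name and other flags are illustrative; only the added flag matters here):

```yaml
# Fragment of a cluster-autoscaler Deployment managed by the cluster-autoscaler-operator.
spec:
  template:
    spec:
      containers:
      - name: cluster-autoscaler
        image: registry.example.com/cluster-autoscaler:latest
        args:
        - --logtostderr
        - --record-duplicated-events   # flag added upstream in kubernetes/autoscaler#4921
```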

Stakeholders

  • openshift eng

Definition of Done

  • autoscaler continues to work as expected and produces events for everything
  • Docs
  • this does not require documentation as it preserves existing behavior and provides no interface for user interaction
  • Testing
  • current tests should continue to pass

Feature Overview

Add GA support for deploying OpenShift to IBM Public Cloud

Goals

Complete the existing gaps to make OpenShift on IBM Cloud VPC (Next Gen2) Generally Available

Requirements

Optional requirements

  • OpenShift can be deployed using Mint mode and STS for cloud provider credentials (future release, tbd)
  • OpenShift can be deployed in disconnected mode (https://issues.redhat.com/browse/SPLAT-737)
  • OpenShift on IBM Cloud supports User Provisioned Infrastructure (UPI) deployment method (future release, 4.14?)

Epic Goal

  • Enable installation of private clusters on IBM Cloud. This epic will track associated work.

Why is this important?

  • This is required MVP functionality to achieve GA.

Scenarios

  1. Install a private cluster on IBM Cloud.

Acceptance Criteria

  • CI - MUST be running successfully with tests automated
  • Release Technical Enablement - Provide necessary release enablement details and documents.

Dependencies (internal and external)

  1. ...

Previous Work (Optional):

Open questions:

Done Checklist

  • CI - CI is running, tests are automated and merged.
  • Release Enablement <link to Feature Enablement Presentation>
  • DEV - Upstream code and tests merged: <link to meaningful PR or GitHub Issue>
  • DEV - Upstream documentation merged: <link to meaningful PR or GitHub Issue>
  • DEV - Downstream build attached to advisory: <link to errata>
  • QE - Test plans in Polarion: <link or reference to Polarion>
  • QE - Automated tests merged: <link or reference to automated tests>
  • DOC - Downstream documentation merged: <link to meaningful PR>

Background and Goal

Currently in OpenShift we do not support distributing hotfix packages to cluster nodes. In time-sensitive situations, a RHEL hotfix package can be the quickest route to resolving an issue. 

Acceptance Criteria

  1. Under guidance from Red Hat CEE, customers can deploy RHEL hotfix packages to MachineConfigPools.
  2. Customers can easily remove the hotfix when the underlying RHCOS image incorporates the fix.

Before we ship OCP CoreOS layering in https://issues.redhat.com/browse/MCO-165 we need to switch the format of what is currently `machine-os-content` to be the new base image.

The overall plan is:

  • Publish the new base image as `rhel-coreos-8` in the release image
  • Also publish the new extensions container (https://github.com/openshift/os/pull/763) as `rhel-coreos-8-extensions`
  • Teach the MCO to use this without also involving layering/build controller
  • Delete old `machine-os-content`

As an OCP CoreOS layering developer, having telemetry data about the number of clusters using osImageURL will help us understand how broadly this feature is getting used and improve it accordingly.

Acceptance Criteria:

  • Clusters using a custom osImageURL are identifiable via telemetry

After https://github.com/openshift/os/pull/763 is in the release image, teach the MCO how to use it. This is basically:

  • Schedule the extensions container as a kubernetes service (just serves a yum repo via http)
  • Change the MCD to write a file into `/etc/yum.repos.d/machine-config-extensions.repo` that consumes it instead of what it does now in pulling RPMs from the mounted container filesystem

 

Why?

  • Decouple control and data plane. 
    • Customers do not pay Red Hat more to run HyperShift control planes and supporting infrastructure than Standalone control planes and supporting infrastructure.
  • Improve security
    • Shift credentials out of cluster that support the operation of core platform vs workload
  • Improve cost
    • Allow a user to toggle what they don’t need.
    • Ensure a smooth path to scale to 0 workers and upgrade with 0 workers.

 

Assumption

  • A customer will be able to associate a cluster as “Infrastructure only”
  • E.g. one option: management cluster has role=master, and role=infra nodes only, control planes are packed on role=infra nodes
  • OR the entire cluster is labeled infrastructure , and node roles are ignored.
  • Anything that runs on a master node by default in Standalone that is present in HyperShift MUST be hosted and not run on a customer worker node.

 

 

Doc: https://docs.google.com/document/d/1sXCaRt3PE0iFmq7ei0Yb1svqzY9bygR5IprjgioRkjc/edit 

Epic Goal

  • To improve debuggability of ovn-k in HyperShift
  • To verify the stability of ovn-k in HyperShift
  • To introduce an EgressIP reachability check that will work in HyperShift

Why is this important?

  • ovn-k is supposed to be GA in 4.12. We need to make sure it is stable, that we know its limitations, and that we are able to debug it similarly to a self-hosted cluster.

Acceptance Criteria

  • CI - MUST be running successfully with tests automated

Dependencies (internal and external)

  1. This will need consultation with the people working on HyperShift

Previous Work (Optional):

  1. https://issues.redhat.com/browse/SDN-2589

Done Checklist

  • CI - CI is running, tests are automated and merged.
  • Release Enablement <link to Feature Enablement Presentation>
  • DEV - Upstream code and tests merged: <link to meaningful PR or GitHub Issue>
  • DEV - Upstream documentation merged: <link to meaningful PR or GitHub Issue>
  • DEV - Downstream build attached to advisory: <link to errata>
  • QE - Test plans in Polarion: <link or reference to Polarion>
  • QE - Automated tests merged: <link or reference to automated tests>
  • DOC - Downstream documentation merged: <link to meaningful PR>

Overview 

Customers do not pay Red Hat more to run HyperShift control planes and supporting infrastructure than Standalone control planes and supporting infrastructure.

Assumption

  • A customer will be able to associate a cluster as “Infrastructure only”
  • E.g. one option: management cluster has role=master, and role=infra nodes only, control planes are packed on role=infra nodes
  • OR the entire cluster is labeled infrastructure, and node roles are ignored.
  • Anything that runs on a master node by default in Standalone that is present in HyperShift MUST be hosted and not run on a customer worker node.

DoD 

Run cluster-storage-operator (CSO) + AWS EBS CSI driver operator + AWS EBS CSI driver control-plane Pods in the management cluster, and run the driver DaemonSet in the hosted cluster.

More information here: https://docs.google.com/document/d/1sXCaRt3PE0iFmq7ei0Yb1svqzY9bygR5IprjgioRkjc/edit 

 

As an OCP support engineer, I want the same guest cluster storage-related objects in the output of "hypershift dump cluster --dump-guest-cluster" as in "oc adm must-gather", so I can debug storage issues easily.

 

must-gather collects: storageclasses, persistentvolumes, volumeattachments, csidrivers, csinodes, volumesnapshotclasses, volumesnapshotcontents.

hypershift collects none of these; the relevant code is here: https://github.com/openshift/hypershift/blob/bcfade6676f3c344b48144de9e7a36f9b40d3330/cmd/cluster/core/dump.go#L276

 

Exit criteria:

  • verify that hypershift dump cluster --dump-guest-cluster has storage objects from the guest cluster.

As HyperShift Cluster Instance Admin, I want to run AWS EBS CSI driver operator + control plane of the CSI driver in the management cluster, so the guest cluster runs just my applications.

  • Add a new cmdline option for the guest cluster kubeconfig file location
  • Parse both kubeconfigs:
    • One from projected service account, which leads to the management cluster.
    • Second from the new cmdline option introduced above. This one leads to the guest cluster.
  • Only on HyperShift:
    • When interacting with Kubernetes API, carefully choose the right kubeconfig to watch / create / update objects in the right cluster.
    • Replace namespaces in all Deployments and other objects that are created in the management cluster. They must be created in the same namespace as the operator.
    • Pass only the guest kubeconfig to the operand (control-plane Deployment of the CSI driver).

Exit criteria:

  • Control plane Deployment of AWS EBS CSI driver runs in the management cluster in HyperShift.
  • Storage works in the guest cluster.
  • No regressions in standalone OCP.

As HyperShift Cluster Instance Admin, I want to run cluster-storage-operator (CSO) in the management cluster, so the guest cluster runs just my applications.

  • Add a new cmdline option for the guest cluster kubeconfig file location
  • Parse both kubeconfigs:
    • One from projected service account, which leads to the management cluster.
    • Second from the new cmdline option introduced above. This one leads to the guest cluster.
  • Tag manifests of objects that should not be deployed by CVO in HyperShift
  • Only on HyperShift:
    • When interacting with Kubernetes API, carefully choose the right kubeconfig to watch / create / update objects in the right cluster.
    • Replace namespaces in all Deployments and other objects that are created in the management cluster. They must be created in the same namespace as the operator.
    • Pass only the guest kubeconfig to the operands (AWS EBS CSI driver operator).

Exit criteria:

  • CSO and AWS EBS CSI driver operator runs in the management cluster in HyperShift
  • Storage works in the guest cluster.
  • No regressions in standalone OCP.

Overview 

Customers do not pay Red Hat more to run HyperShift control planes and supporting infrastructure than Standalone control planes and supporting infrastructure.

Assumption

  • A customer will be able to associate a cluster as “Infrastructure only”
  • E.g. one option: management cluster has role=master, and role=infra nodes only, control planes are packed on role=infra nodes
  • OR the entire cluster is labeled infrastructure, and node roles are ignored.
  • Anything that runs on a master node by default in Standalone that is present in HyperShift MUST be hosted and not run on a customer worker node.

DoD 

cluster-snapshot-controller-operator is running on the CP. 

More information here: https://docs.google.com/document/d/1sXCaRt3PE0iFmq7ei0Yb1svqzY9bygR5IprjgioRkjc/edit 

As HyperShift Cluster Instance Admin, I want to run cluster-csi-snapshot-controller-operator in the management cluster, so the guest cluster runs just my applications.

  • Add a new cmdline option for the guest cluster kubeconfig file location
  • Parse both kubeconfigs:
    • One from projected service account, which leads to the management cluster.
    • Second from the new cmdline option introduced above. This one leads to the guest cluster.
  • Move creation of manifests/08_webhook_service.yaml from CVO to the operator - it needs to be created in the management cluster.
  • Tag manifests of objects that should not be deployed by CVO in HyperShift by
  • Only on HyperShift:
    • When interacting with Kubernetes API, carefully choose the right kubeconfig to watch / create / update objects in the right cluster.
    • Replace namespaces in all Deployments and other objects that are created in the management cluster. They must be created in the same namespace as the operator.
    • Don’t create operand’s PodDisruptionBudget?
    • Update ValidationWebhookConfiguration to point directly to URL exposed by manifests/08_webhook_service.yaml instead of a Service. The Service is not available in the guest cluster.
    • Pass only the guest kubeconfig to the operands (both the webhook and csi-snapshot-controller).
    • Update unit tests to handle two kube clients.

Exit criteria:

  • cluster-csi-snapshot-controller-operator runs in the management cluster in HyperShift
  • csi-snapshot-controller runs in the management cluster in HyperShift
  • It is possible to take & restore volume snapshot in the guest cluster.
  • No regressions in standalone OCP.

As an OpenShift developer, I want cluster-csi-snapshot-controller-operator to use existing controllers in library-go, so I don't need to maintain yet another piece of code that does the same thing as library-go.

  • Check and remove manifests/03_configmap.yaml, it does not seem to be useful.
  • Check and remove manifests/03_service.yaml, it does not seem to be useful (at least now).
  • Use DeploymentController from library-go to sync Deployments.
  • Get rid of common/ package? It does not seem to be useful.
  • Use StaticResourceController for static content, including the snapshot CRDs.

Note: if this refactoring introduces any new conditions, we must make sure that 4.11 snapshot controller clears them to support downgrade! This will need 4.11 BZ + z-stream update!

Similarly, if some conditions become obsolete / not managed by any controller, they must be cleared by 4.12 operator.

Exit criteria:

  • The operator code is smaller.
  • No regressions in standalone OCP.
  • Upgrade/downgrade from/to standalone OCP 4.11 works.
The details of this Jira Card are restricted (Red Hat Employee and Contractors only)
The details of this Jira Card are restricted (Red Hat Employee and Contractors only)

Incomplete Features

When this image was assembled, these features were not yet completed. Therefore, only the Jira Cards included here are part of this release

Epic Goal

  • Enabling integration of a single hub cluster to install both ARM and x86 spoke clusters
  • Enabling support for heterogeneous OCP clusters
  • Document requirements and deployment flows
  • Support in disconnected environments

Why is this important?

  • Client requests

Scenarios

  1. Users manage both ARM and x86 machines; we should not require them to have two different hub clusters
  2. Users manage mixed-architecture clusters without requiring all the nodes to be of the same architecture

Acceptance Criteria

  • Process is well documented
  • we are able to install in a disconnected environment

We have a set of images

  • quay.io/edge-infrastructure/assisted-installer-agent:latest
  • quay.io/edge-infrastructure/assisted-installer-controller:latest
  • quay.io/edge-infrastructure/assisted-installer:latest

that should become multiarch images. This should be done both in upstream and downstream.

As a reference, we have built internally those images as multiarch and made them available as

  • registry.redhat.io/rhai-tech-preview/assisted-installer-agent-rhel8:latest
  • registry.redhat.io/rhai-tech-preview/assisted-installer-reporter-rhel8:latest
  • registry.redhat.io/rhai-tech-preview/assisted-installer-rhel8:latest

They can be consumed by the Assisted Service pod via the following environment variables:

    - name: AGENT_DOCKER_IMAGE
      value: registry.redhat.io/rhai-tech-preview/assisted-installer-agent-rhel8:latest
    - name: CONTROLLER_IMAGE
      value: registry.redhat.io/rhai-tech-preview/assisted-installer-reporter-rhel8:latest
    - name: INSTALLER_IMAGE
      value: registry.redhat.io/rhai-tech-preview/assisted-installer-rhel8:latest

OLM would have to support a mechanism like podAffinity that allows multiple architecture values to be specified, enabling it to pin operators to worker nodes of a matching architecture.

Ref: https://github.com/openshift/enhancements/pull/1014
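
For illustration, the kind of scheduling constraint being asked for — pinning an operator's pods to nodes whose architecture matches one of several supported values — could be expressed with standard node affinity (a sketch, not OLM's eventual interface):

```yaml
# Hypothetical operator pod spec fragment: schedulable on either amd64 or arm64 workers.
affinity:
  nodeAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
      nodeSelectorTerms:
      - matchExpressions:
        - key: kubernetes.io/arch
          operator: In
          values:
          - amd64
          - arm64
```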

 

Cut a new release of the OLM API and update the OLM API dependency version (go.mod) in the OLM package; then
bring the upstream changes from OLM-2674 to the downstream olm repo.

A/C:

 - New OLM API version release
 - OLM API dependency updated in OLM Project
 - OLM Subscription API changes  downstreamed
 - OLM Controller changes  downstreamed
 - Changes manually tested on Cluster Bot

Feature Overview

We drive OpenShift cross-market customer success and new customer adoption with constant improvements and feature additions to the existing capabilities of our OpenShift Core Networking (SDN and Network Edge). This feature captures that natural progression of the product.

Goals

  • Feature enhancements (performance, scale, configuration, UX, ...)
  • Modernization (incorporation and productization of new technologies)

Requirements

  • Core Networking Stability
  • Core Networking Performance and Scale
  • Core Networking Extensibility (Multus CNIs)
  • Core Networking UX (Observability)
  • Core Networking Security and Compliance

In Scope

  • Network Edge (ingress, DNS, LB)
  • SDN (CNI plugins, openshift-sdn, OVN, network policy, egressIP, egress Router, ...)
  • Networking Observability

Out of Scope

There are definitely grey areas, but in general:

  • CNV
  • Service Mesh
  • CNF

Documentation Considerations

Questions to be addressed:

  • What educational or reference material (docs) is required to support this product feature? For users/admins? Other functions (security officers, etc)?
  • Does this feature have doc impact?
  • New Content, Updates to existing content, Release Note, or No Doc Impact
  • If unsure and no Technical Writer is available, please contact Content Strategy.
  • What concepts do customers need to understand to be successful in [action]?
  • How do we expect customers will use the feature? For what purpose(s)?
  • What reference material might a customer want/need to complete [action]?
  • Is there source material that can be used as reference for the Technical Writer in writing the content? If yes, please link if available.
  • What is the doc impact (New Content, Updates to existing content, or Release Note)?

Goal: Provide queryable metrics and telemetry for cluster routes and sharding in an OpenShift cluster.

Problem: Today we test OpenShift performance and scale with best-guess or anecdotal evidence for the number of routes that our customers use. Best practice for a large number of routes in a cluster is to shard; however, we have no visibility into whether and how customers are using sharding.

Why is this important? These metrics will inform our performance and scale testing, documented cluster limits, and how customers are using sharding for best practice deployments.

Dependencies (internal and external):

Prioritized epics + deliverables (in scope / not in scope):

Not in scope:

Estimate (XS, S, M, L, XL, XXL):

Previous Work:

Open questions:

Acceptance criteria:

Epic Done Checklist:

  • CI - CI Job & Automated tests: <link to CI Job & automated tests>
  • Release Enablement: <link to Feature Enablement Presentation> 
  • DEV - Upstream code and tests merged: <link to meaningful PR or GitHub Issue>
  • DEV - Upstream documentation merged: <link to meaningful PR or GitHub Issue>
  • DEV - Downstream build attached to advisory: <link to errata>
  • QE - Test plans in Polarion: <link or reference to Polarion>
  • QE - Automated tests merged: <link or reference to automated tests>
  • DOC - Downstream documentation merged: <link to meaningful PR>
  • Notes for Done Checklist
    • Adding links to the above checklist with multiple teams contributing; select a meaningful reference for this Epic.
    • Checklist added to each Epic in the description, to be filled out as phases are completed - tracking progress towards “Done” for the Epic.

Description:

As described in the Metrics to be sent via telemetry section of the Design Doc, the following metrics need to be sent from the OpenShift cluster to Red Hat premises (a sketch of the corresponding recording rules follows the list):

  • Minimum Routes per Shard
    • Recording Rule – cluster:route_metrics_controller_routes_per_shard:min  : min(route_metrics_controller_routes_per_shard)
    • Gives the minimum value of Routes per Shard.
  • Maximum Routes per Shard
    • Recording Rule – cluster:route_metrics_controller_routes_per_shard:max  : max(route_metrics_controller_routes_per_shard)
    • Gives the maximum value of Routes per Shard.
  • Average Routes per Shard
    • Recording Rule – cluster:route_metrics_controller_routes_per_shard:avg  : avg(route_metrics_controller_routes_per_shard)
    • Gives the average value of Routes per Shard.
  • Median Routes per Shard
    • Recording Rule – cluster:route_metrics_controller_routes_per_shard:median  : quantile(0.5, route_metrics_controller_routes_per_shard)
    • Gives the median value of Routes per Shard.
  • Number of Routes summed by TLS Termination type
    • Recording Rule – cluster:openshift_route_info:tls_termination:sum : sum (openshift_route_info) by (tls_termination)
    • Gives the number of Routes for each tls_termination value. The possible values for tls_termination are edge, passthrough and reencrypt. 
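
Expressed as Prometheus recording rules, the list above might look like this (the rule names and expressions are exactly those listed; the object name and namespace are illustrative):

```yaml
# Sketch of a PrometheusRule carrying the telemetry recording rules described above.
apiVersion: monitoring.coreos.com/v1
kind: PrometheusRule
metadata:
  name: route-metrics-telemetry
  namespace: openshift-ingress-operator
spec:
  groups:
  - name: openshift-route-metrics.rules
    rules:
    - record: cluster:route_metrics_controller_routes_per_shard:min
      expr: min(route_metrics_controller_routes_per_shard)
    - record: cluster:route_metrics_controller_routes_per_shard:max
      expr: max(route_metrics_controller_routes_per_shard)
    - record: cluster:route_metrics_controller_routes_per_shard:avg
      expr: avg(route_metrics_controller_routes_per_shard)
    - record: cluster:route_metrics_controller_routes_per_shard:median
      expr: quantile(0.5, route_metrics_controller_routes_per_shard)
    - record: cluster:openshift_route_info:tls_termination:sum
      expr: sum(openshift_route_info) by (tls_termination)
```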

The metrics should be allowlisted on the cluster side.

The steps described in Sending metrics via telemetry need to be followed, specifically step 5.

Depends on CFE-478.

Acceptance Criteria:

  • Support for sending the above mentioned metrics from OpenShift clusters to the Red Hat premises by allowlisting metrics on the cluster side

Description:

As described in the Design Doc, the following information needs to be exported from the Cluster Ingress Operator:

  • Number of routes/shard

Design 2 will be implemented as part of this story.

 

Acceptance Criteria:

  • Support for exporting the above mentioned metrics by Cluster Ingress Operator

This is an epic bucket for all activities surrounding the creation of a declarative approach to releasing and maintaining OLM catalogs.

Epic Goal

  • Allow Operator Authors to easily change the layout of the update graph in a single location so they can version/maintain/release it via git and have more approachable controls about graph vertices than today's replaces, skips and/or skipRange taxonomy
  • Allow Operators authors to have control over channel and bundle channel membership

Why is this important?

  • The imperative catalog maintenance approach so far with opm is being moved to a declarative format (OLM-2127 and OLM-1780), moving away from bundle-level controls, but the update graph properties are still attached to a bundle
  • We've received feedback from the RHT internal developer community that maintaining and reasoning about the graph in the context of a single channel is still too hard, even with visualization tools
  • making the update graph easily changeable is important to deliver on some of the promises of declarative index configuration
  • The current interface for declarative index configuration still relies on skips, skipRange and replaces to shape the graph on a per-bundle level - this becomes too complex at a certain point with a lot of bundles in channels; we need something at the package level

Scenarios

  1. An Operator author wants to release a new version replacing the latest version published previously
  2. After additional post-GA testing an Operator author wants to establish a new update path to an existing released version from an older, released version
  3. After finding a bug post-GA an Operator author wants to temporarily remove a known to be problematic update path
  4. An automated system wants to push a bundle in between an existing update path as a result of an Operator (base) image rebuild (Freshmaker use case)
  5. A user wants to take a declarative graph definition and turn it into a graphical image for visually ensuring the graph looks like they want
  6. An Operator author wants to promote a certain bundle to an additional / different channel to indicate progress in maturity of the operator.

Acceptance Criteria

  • The declarative format has to be user readable and terse enough to make quick modifications
  • The declarative format should be machine writeable (Freshmaker)
  • The update graph is declared and modified in a text based format aligned with the declarative config
  • it has to be possible to add / remove edges at the leaves of the graph (releasing/unpublishing a new version)
  • it has to be possible to add/remove new vertices between existing edges (releasing/retracting a new update path)
  • it has to be possible to add/remove new edges in between existing vertices (releasing/unpublishing a version in between, Freshmaker use case)
  • it has to be possible to change the channel membership of a bundle after it's published (channel promotion)
  • CI - MUST be running successfully with tests automated
  • it has to be possible to add additional metadata later to implement OLM-2087 and OLM-259 if required

Dependencies (internal and external)

  1. Declarative Index Config (OLM-2127)

Previous Work:

  1. Declarative Index Config (OLM-1780)

Related work

Open questions:

  1. What other manipulation scenarios are required?
    1. Answer: deprecation of content in the spirit of OLM-2087
    2. Answer: cross-channel update hints as described in OLM-2059 if that implementation requires it

 

When working on this Epic, it's important to keep in mind this other potentially related Epic: https://issues.redhat.com/browse/OLM-2276

 

Jira Description

As an OPM maintainer, I want to downstream the PR for (OCP 4.12) and backport it to OCP 4.11 so that IIB will NOT be impacted by the changes when it upgrades the OPM version to use the next/future opm upstream release (v1.25.0).

Summary / Background

IIB (the downstream service that manages the indexes) uses the upstream version, and if they bump the OPM version to the next/future (v1.25.0) release with this change before having the downstream images updated, then the process to manage the indexes downstream will face issues and it will impact the distributions.

Acceptance Criteria

  • The changes in the PR are available for the releases which use FBC -> OCP 4.11, 4.12

Definition of Ready

  • PRs merged into downstream OCP repos branches 4.11/4.12

Definition of Done

  • We checked that the downstream images have the changes applied (i.e., we can verify in the same way that we checked whether the changes were in the downstream for the OLM-2639 fix)

Enhance the veneer rendering to be able to read the input veneer data from stdin, via a pipe, in a manner similar to https://dev.to/napicella/linux-pipes-in-golang-2e8j

Then the command could be used in a manner similar to many k8s examples, like

```shell
opm alpha render-veneer semver -o yaml < infile > outfile
```

Upstream issue link: https://github.com/operator-framework/operator-registry/issues/1011

We need to continue to maintain specific areas within storage; this epic captures that effort and tracks it across releases.

Goals

  • To allow OCP users and cluster admins to detect problems early and with as little interaction with Red Hat as possible.
  • When Red Hat is involved, make sure we have all the information we need from the customer, i.e. in metrics / telemetry / must-gather.
  • Reduce storage test flakiness so we can spot real bugs in our CI.

Requirements

| Requirement | Notes | isMvp? |
| --- | --- | --- |
| Telemetry | | No |
| Certification | | No |
| API metrics | | No |

Out of Scope

n/a

Background, and strategic fit
With the expected scale of our customer base, we want to keep the load of customer tickets / BZs low

Assumptions

Customer Considerations

Documentation Considerations

  • Target audience: internal
  • Updated content: none at this time.

Notes

In progress:

  • CI flakes:
    • Configurable timeouts for e2e tests
      • Azure is slow and times out often
      • Cinder times out formatting volumes
      • AWS resize test times out

 

High prio:

  • Env. check tool for VMware - users often mis-configure permissions there and blame OpenShift. If we had a tool they could run, it might report better errors.
    • Should it be part of the installer?
    • Spike exists
  • Add / use cloud API call metrics
    • Helps customers to understand why things are slow
    • Helps build cop to understand a flake
      • With a post-install step that filters data from Prometheus that’s still running in the CI job.
    • Ideas:
      • Cloud is throttling X% of API calls longer than Y seconds
      • Attach / detach / provisioning / deletion / mount / unmount / resize takes longer than X seconds?
    • Capture metrics of operations that are stuck and won’t finish.
      • Sweep operation map from executioner???
      • Report operation metric into the highest bucket after the bucket threshold (i.e. if 10minutes is the last bucket, report an operation into this bucket after 10 minutes and don’t wait for its completion)?
      • Ask the monitoring team?
    • Include in CSI drivers too.
      • With alerts too

Unsorted

  • As the number of storage operators grows, it would be useful to have a Grafana board for storage operators
    • CSI driver metrics (from CSI sidecars + the driver itself  + its operator?)
    • CSI migration?
  • Get aggregated logs in cluster
    • They're rotated too soon
    • No logs from dead / restarted pods
    • No tools to combine logs from multiple pods (e.g. 3 controller managers)
  • What storage issues do customers have? It was 22% of all issues.
    • Insufficient docs?
    • Probably garbage
  • Document basic storage troubleshooting for our support teams
    • What logs are useful when, what log level to use
    • This has been discussed during the GSS weekly team meeting; however, it would be beneficial to have this documented.
  • Common vSphere errors, their debugging and fixing. 
  • Document sig-storage flake handling - not all failed [sig-storage] tests are ours
The details of this Jira Card are restricted (Red Hat Employee and Contractors only)

The end of general support for vSphere 6.7 will be on October 15, 2022. So, vSphere 6.7 will be deprecated in 4.11.

We want to encourage vSphere customers to upgrade to vSphere 7 in OCP 4.11, since VMware is ending general support for vSphere 6.7 in October 2022.

We want to set the cluster to Upgradeable=false and have a strong alert pointing to our docs / requirements.

related slack: https://coreos.slack.com/archives/CH06KMDRV/p1647541493096729

Epic Goal

  • Update all images that we ship with OpenShift to the latest upstream releases and libraries.
  • Exact content of what needs to be updated will be determined as new images are released upstream, which is not known at the beginning of OCP development work. We don't know what new features will be included and should be tested and documented. Especially new CSI drivers releases may bring new, currently unknown features. We expect that the amount of work will be roughly the same as in the previous releases. Of course, QE or docs can reject an update if it's too close to deadline and/or looks too big.

Traditionally we did these updates as bugfixes, because we did them after the feature freeze (FF). We are trying a no-feature-freeze approach in 4.12. We will try to do as much as we can before FF, but we're quite sure something will slip past FF as usual.

Why is this important?

  • We want to ship the latest software that contains new features and bugfixes.

Acceptance Criteria

  • CI - MUST be running successfully with tests automated
  • Release Technical Enablement - Provide necessary release enablement details and documents.

Update the driver to the latest upstream release. Notify QE and docs with any new features and important bugfixes that need testing or documentation.

(Using separate cards for each driver because these updates can be more complicated)

Update the driver to the latest upstream release. Notify QE and docs with any new features and important bugfixes that need testing or documentation.

(Using separate cards for each driver because these updates can be more complicated)

Update the driver to the latest upstream release. Notify QE and docs with any new features and important bugfixes that need testing or documentation.

(Using separate cards for each driver because these updates can be more complicated)

Update all OCP and kubernetes libraries in storage operators to the appropriate version for OCP release.

This includes (but is not limited to):

  • Kubernetes:
    • client-go
    • controller-runtime
  • OCP:
    • library-go
    • openshift/api
    • openshift/client-go
    • operator-sdk

Operators:

  • aws-ebs-csi-driver-operator 
  • aws-efs-csi-driver-operator
  • azure-disk-csi-driver-operator
  • azure-file-csi-driver-operator
  • openstack-cinder-csi-driver-operator
  • gcp-pd-csi-driver-operator
  • gcp-filestore-csi-driver-operator
  • manila-csi-driver-operator
  • ovirt-csi-driver-operator
  • vmware-vsphere-csi-driver-operator
  • alibaba-disk-csi-driver-operator
  • ibm-vpc-block-csi-driver-operator
  • csi-driver-shared-resource-operator

 

  • cluster-storage-operator
  • csi-snapshot-controller-operator
  • local-storage-operator
  • vsphere-problem-detector

Update the driver to the latest upstream release. Notify QE and docs with any new features and important bugfixes that need testing or documentation.

(Using separate cards for each driver because these updates can be more complicated)

There is a new driver release 5.0.0 since the last rebase that includes snapshot support:

https://github.com/kubernetes-sigs/ibm-vpc-block-csi-driver/releases/tag/v5.0.0

Rebase the driver on v5.0.0 and update the deployments in ibm-vpc-block-csi-driver-operator.
There are no corresponding changes in ibm-vpc-node-label-updater since the last rebase.

Update the driver to the latest upstream release. Notify QE and docs with any new features and important bugfixes that need testing or documentation.

This includes ibm-vpc-node-label-updater!

(Using separate cards for each driver because these updates can be more complicated)

Update the driver to the latest upstream release. Notify QE and docs with any new features and important bugfixes that need testing or documentation.

(Using separate cards for each driver because these updates can be more complicated)

TL;DR: three basic claims, the rest is explanation and one example

  1. We cannot improve long term maintainability solely by fixing bugs.
  2. Teams should be asked to produce designs for improving maintainability/debugability.
  3. Specific maintenance items (or investigation of maintenance items), should be placed into planning as peer to PM requests and explicitly prioritized against them.

While bugs are an important metric, fixing bugs is different than investing in maintainability and debugability. Investing in fixing bugs will help alleviate immediate problems, but doesn't improve the ability to address future problems. You (may) get a code base with fewer bugs, but when you add a new feature, it will still be hard to debug problems and interactions. This pushes a code base towards stagnation where it gets harder and harder to add features.

One alternative is to ask teams to produce ideas for how they would improve future maintainability and debugability instead of focusing on immediate bugs. This would produce designs that make problem determination, bug resolution, and future feature additions faster over time.

I have a concrete example of one such outcome of focusing on bugs vs quality. We have resolved many bugs about communication failures with ingress by finding problems with point-to-point network communication. We have fixed the individual bugs, but have not improved the code for future debugging. In so doing, we chase many hard-to-diagnose problems across the stack. The alternative is to create a point-to-point network connectivity capability. This would immediately improve bug resolution and stability (detection) for kuryr, ovs, legacy sdn, network-edge, kube-apiserver, openshift-apiserver, authentication, and console. Bug fixing does not produce the same impact.

We need more investment in our future selves. Saying, "teams should reserve this" doesn't seem to be universally effective. Perhaps an approach that directly asks for designs and impacts and then follows up by placing the items directly in planning and prioritizing against PM feature requests would give teams the confidence to invest in these areas and give broad exposure to systemic problems.


Relevant links:

Epic Goal

  • Change the default value for the spec.tuningOptions.maxConnections field in the IngressController API, which configures the HAProxy maxconn setting, to 50000 (fifty thousand).
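
For reference, the field is already settable per IngressController; with the new default, an IngressController that does not set it would behave as if configured like this (a sketch):

```yaml
apiVersion: operator.openshift.io/v1
kind: IngressController
metadata:
  name: default
  namespace: openshift-ingress-operator
spec:
  tuningOptions:
    maxConnections: 50000   # proposed default; the long-standing default was 20000
```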

Why is this important?

  • The maxconn setting constrains the number of simultaneous connections that HAProxy accepts. Beyond this limit, the kernel queues incoming connections. 
  • Increasing maxconn enables HAProxy to queue incoming connections intelligently.  In particular, this enables HAProxy to respond to health probes promptly while queueing other connections as needed.
  • The default setting of 20000 has been in place since OpenShift 3.5 was released in April 2017 (see BZ#1405440, commit, RHBA-2017:0884). 
  • Hardware capabilities have increased over time, and the current default is too low for typical modern machine sizes. 
  • Increasing the default setting improves HAProxy's performance at an acceptable cost in the common case. 

Scenarios

  1. As a cluster administrator who is installing OpenShift on typical hardware, I want OpenShift router to be tuned appropriately to take advantage of my hardware's capabilities.

Acceptance Criteria

  • CI is passing. 
  • The new default setting is clearly documented. 
  • A release note informs cluster administrators of the change to the default setting. 

Dependencies (internal and external)

  1. None.

Previous Work (Optional):

  1. The  haproxy-max-connections-tuning enhancement made maxconn configurable without changing the default.  The enhancement document details the tradeoffs in terms of memory for various settings of nbthreads and maxconn with various numbers of routes. 

Open questions:

  1. ...

Done Checklist

  • CI - CI is running, tests are automated and merged.
  • Release Enablement <link to Feature Enablement Presentation>
  • DEV - Upstream code and tests merged: <link to meaningful PR or GitHub Issue>
  • DEV - Upstream documentation merged: <link to meaningful PR or GitHub Issue>
  • DEV - Downstream build attached to advisory: <link to errata>
  • QE - Test plans in Polarion: <link or reference to Polarion>
  • QE - Automated tests merged: <link or reference to automated tests>
  • DOC - Downstream documentation merged: <link to meaningful PR>

 

OCP/Telco Definition of Done

Epic Template descriptions and documentation.

Epic Goal

Why is this important?

  • This regression is a major performance and stability issue and it has happened once before.

Drawbacks

  • The E2E test may be complex due to trying to determine what DNS pods are responding to DNS requests. This is straightforward using the chaos plugin.

Scenarios

  • CI Testing

Acceptance Criteria

  • CI - MUST be running successfully with tests automated

Dependencies (internal and external)

  1. SDN Team

Previous Work (Optional):

  1. N/A

Open questions:

  1. Where do these E2E test go? SDN Repo? DNS Repo?

Done Checklist

  • CI - CI is running, tests are automated and merged.
  • Release Enablement <link to Feature Enablement Presentation>
  • DEV - Upstream code and tests merged: <link to meaningful PR or GitHub
    Issue>
  • DEV - Upstream documentation merged: <link to meaningful PR or GitHub
    Issue>
  • DEV - Downstream build attached to advisory: <link to errata>
  • QE - Test plans in Polarion: <link or reference to Polarion>
  • QE - Automated tests merged: <link or reference to automated tests>
  • DOC - Downstream documentation merged: <link to meaningful PR>

Enable the chaos plugin https://coredns.io/plugins/chaos/ in our CoreDNS configuration so that we can use a DNS query to easily identify what DNS pods are responding to our requests.
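
A hedged sketch of what this could look like in the Corefile rendered into the openshift-dns ConfigMap (the version/author strings are illustrative; with the plugin enabled, a CH-class query such as `dig +short CH TXT hostname.bind @<dns-pod-ip>` should identify the responding pod):

```yaml
# Hypothetical fragment of the dns-default ConfigMap with the chaos plugin added.
apiVersion: v1
kind: ConfigMap
metadata:
  name: dns-default
  namespace: openshift-dns
data:
  Corefile: |
    .:5353 {
        errors
        chaos CoreDNS-openshift openshift-dns-team
        health
        kubernetes cluster.local in-addr.arpa ip6.arpa {
            pods insecure
            fallthrough in-addr.arpa ip6.arpa
        }
        forward . /etc/resolv.conf
        cache 30
    }
```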

Feature Overview

  • This Section: High-level description of the feature, i.e., an executive summary
  • Note: A Feature is a capability or a well defined set of functionality that delivers business value. Features can include additions or changes to existing functionality. Features can easily span multiple teams, and multiple releases.

 

Goals

  • This Section: Provide a high-level goal statement, providing user context and expected user outcome(s) for this feature

 

Requirements

  • This Section: A list of specific needs or objectives that a Feature must deliver to satisfy the Feature. Some requirements will be flagged as MVP. If an MVP gets shifted, the feature shifts. If a non-MVP requirement slips, it does not shift the feature.

 

| Requirement | Notes | isMvp? |
| --- | --- | --- |
| CI - MUST be running successfully with test automation | This is a requirement for ALL features. | YES |
| Release Technical Enablement | Provide necessary release enablement details and documents. | YES |

 

(Optional) Use Cases

This Section: 

  • Main success scenarios - high-level user stories
  • Alternate flow/scenarios - high-level user stories
  • ...

 

Questions to answer…

  • ...

 

Out of Scope

 

Background, and strategic fit

This Section: What does the person writing code, testing, documenting need to know? What context can be provided to frame this feature.

 

Assumptions

  • ...

 

Customer Considerations

  • ...

 

Documentation Considerations

Questions to be addressed:

  • What educational or reference material (docs) is required to support this product feature? For users/admins? Other functions (security officers, etc)?
  • Does this feature have doc impact?  
  • New Content, Updates to existing content,  Release Note, or No Doc Impact
  • If unsure and no Technical Writer is available, please contact Content Strategy.
  • What concepts do customers need to understand to be successful in [action]?
  • How do we expect customers will use the feature? For what purpose(s)?
  • What reference material might a customer want/need to complete [action]?
  • Is there source material that can be used as reference for the Technical Writer in writing the content? If yes, please link if available.
  • What is the doc impact (New Content, Updates to existing content, or Release Note)?
The details of this Jira Card are restricted (Red Hat Employee and Contractors only)

When OCP is performing a cluster upgrade, the user should be notified about this fact.

There are a few possibilities for how to surface the cluster upgrade to users:

  • Display a console notification throughout OCP web UI saying that the cluster is currently under upgrade.
  • Global notification throughout OCP web UI saying that the cluster is currently under upgrade.
  • Have an alert firing for all the users of OCP stating the cluster is undergoing an upgrade. 

 

AC:

  • Console-operator will create a ConsoleNotification CR when the cluster is being upgraded. Once the upgrade is done, console-operator will remove that CR. These are the three statuses based on which we are determining if the cluster is being upgraded.
  • Add unit tests
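
A sketch of the ConsoleNotification CR the console-operator could create while the upgrade is in progress (field names come from the ConsoleNotification API; the object name, text and colors are illustrative):

```yaml
# Hypothetical notification created at upgrade start and removed once the upgrade finishes.
apiVersion: console.openshift.io/v1
kind: ConsoleNotification
metadata:
  name: cluster-upgrade-in-progress
spec:
  text: This cluster is currently being upgraded.
  location: BannerTop
  color: '#000000'
  backgroundColor: '#F0AB00'
```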

 

Note: We need to decide if we want to distinguish this particular notification by a different color? ccing Ali Mobrem 

 

Created from: https://issues.redhat.com/browse/RFE-3024

As a console user I want to have the option to:

  • Restart Deployment
  • Retry latest DeploymentConfig if it failed

 

For Deployments we will add the 'Restart rollout' action button. This action will PATCH the Deployment object's 'spec.template.metadata.annotations' block by adding the 'openshift.io/restartedAt: <actual-timestamp>' annotation. This will restart the deployment by creating a new ReplicaSet.

  • action is disabled if:
    • Deployment is paused
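
A sketch of the patch body the 'Restart rollout' action would send, using the annotation named above (the timestamp is simply the time the action is triggered):

```yaml
# Hypothetical strategic-merge patch applied to the Deployment.
spec:
  template:
    metadata:
      annotations:
        openshift.io/restartedAt: "2022-10-20T14:30:00Z"
```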

 

For DeploymentConfig we will add 'Retry rollout' action button.  This action will PATCH the latest revision of ReplicationController object's 'metadata.annotations' block by setting 'openshift.io/deployment/phase: "New"' and removing openshift.io/deployment.cancelled and openshift.io/deployment.status-reason.

  • action is enabled if:
    • latest revision of the ReplicationController resource is in Failed phase
  • action is disabled if:
    • latest revision of the ReplicationController resource is in Complete phase
    • DeploymentConfig does not have any rollouts
    • DeploymentConfig is paused
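A rough sketch of the merge patch body such an action would send for the latest ReplicationController; setting an annotation to null removes it:

```
metadata:
  annotations:
    openshift.io/deployment.phase: "New"
    openshift.io/deployment.cancelled: null        # clear the cancelled marker
    openshift.io/deployment.status-reason: null    # clear the recorded failure reason
```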

 

Acceptance Criteria:

  • Add the 'Restart rollout' action button for the Deployment resource to both action menu and kebab menu
  • Add the 'Retry rollout' action button for the DeploymentConfig resource to both action menu and kebab menu

 

BACKGROUND:

OpenShift console will be updated to allow rollout restart deployment from the console itself.

Currently, from the OpenShift console, for the resource “deploymentconfigs” we can only start and pause the rollout, and for the resource “deployment” we can only resume the rollout. Neither resource (Deployment or DeploymentConfig) has an option to restart the rollout. That is why the customer wants to be able to perform the same action from the OpenShift console as from the CLI.

The customer wants developers who are not fluent with the oc tool and terminal utilities to be able to use the console instead of the terminal to restart a deployment, just as they do today through the CLI with the command “oc rollout restart deploy/<deployment-name>“.
Usually when developers change the ConfigMap that a deployment uses, they have to restart the pods. Currently, developers have to use the oc rollout restart deployment command. The customer wants a button/menu in the console to perform the same action.

Design
Doc: https://docs.google.com/document/d/1i-jGtQGaA0OI4CYh8DH5BBIVbocIu_dxNt3vwWmPZdw/edit

As a developer, I want to make status.HostIP for Pods visible in the Pod details page of the OCP Web Console. Currently there is no way to view the node IP for a Pod in the OpenShift Web Console.  When viewing a Pod in the console, the field status.HostIP is not visible.

 

Acceptance criteria:

  • Make pod's HostIP field visible in the pod details page, similarly to PodIP field

Feature Overview

  • As an infrastructure owner, I want a repeatable method to quickly deploy the initial OpenShift cluster.
  • As an infrastructure owner, I want to install the first (management, hub, “cluster 0”) cluster to manage other (standalone, hub, spoke, hub of hubs) clusters.

Goals

  • Enable customers and partners to successfully deploy a single “first” cluster in disconnected, on-premises settings

Requirements

4.11 MVP Requirements

  • Customers and partners need to be able to download the installer
  • Enable customers and partners to deploy a single “first” cluster (cluster 0) using single node, compact, or highly available topologies in disconnected, on-premises settings
  • Installer must support advanced network settings such as static IP assignments, VLANs and NIC bonding for on-premises metal use cases, as well as DHCP and PXE provisioning environments.
  • Installer needs to support automation, including integration with third-party deployment tools, as well as user-driven deployments.
  • In the MVP automation has higher priority than interactive, user-driven deployments.
  • For bare metal deployments, we cannot assume that users will provide us the credentials to manage hosts via their BMCs.
  • Installer should prioritize support for platforms None, baremetal, and VMware.
  • The installer will focus on a single version of OpenShift, and a different build artifact will be produced for each different version.
  • The installer must not depend on a connected registry; however, the installer can optionally use a previously mirrored registry within the disconnected environment.

Use Cases

  • As a Telco partner engineer (Site Engineer, Specialist, Field Engineer), I want to deploy an OpenShift cluster in production with limited or no additional hardware and don’t intend to deploy more OpenShift clusters [Isolated edge experience].
  • As an Enterprise infrastructure owner, I want to manage the lifecycle of multiple clusters in 1 or more sites by first installing the first (management, hub, “cluster 0”) cluster to manage other (standalone, hub, spoke, hub of hubs) clusters [Cluster before your cluster].
  • As a Partner, I want to package OpenShift for large scale and/or distributed topology with my own software and/or hardware solution.
  • As a large enterprise customer or Service Provider, I want to install a “HyperShift Tugboat” OpenShift cluster in order to offer a hosted OpenShift control plane at scale to my consumers (DevOps Engineers, tenants) that allows for fleet-level provisioning for low CAPEX and OPEX, much like AKS or GKE [Hypershift].
  • As a new, novice to intermediate user (Enterprise Admin/Consumer, Telco Partner integrator, RH Solution Architect), I want to quickly deploy a small OpenShift cluster for Poc/Demo/Research purposes.

Questions to answer…

  •  

Out of Scope

Out of scope use cases (that are part of the Kubeframe/factory project):

  • As a Partner (OEMs, ISVs), I want to install and pre-configure OpenShift with my hardware/software in my disconnected factory, while allowing further (minimal) reconfiguration of a subset of capabilities later at a different site by different set of users (end customer) [Embedded OpenShift].
  • As an Infrastructure Admin at an Enterprise customer with multiple remote sites, I want to pre-provision OpenShift centrally prior to shipping and activating the clusters in remote sites.

Background, and strategic fit

  • This Section: What does the person writing code, testing, documenting need to know? What context can be provided to frame this feature.

Assumptions

  1. The user only has access to the target nodes that will form the cluster and will boot them with the image presented locally via a USB stick. This scenario is common in sites with restricted access, such as government infrastructure, where only users with security clearance can interact with the installation, where software is allowed to enter the premises (on a USB, DVD, SD card, etc.) but never allowed to come back out, and where users can't bring in supporting devices such as laptops or phones.
  2. The user has remote access to the BMCs (e.g. iDRAC, iLO) of the target nodes and can map an image as virtual media from their computer. This scenario is common in data centers where the customer provides network access to the BMCs of the target nodes.
  3. We cannot assume that we will have access to a computer to run an installer or installer helper software.

Customer Considerations

  • ...

Documentation Considerations

Questions to be addressed:

  • What educational or reference material (docs) is required to support this product feature? For users/admins? Other functions (security officers, etc)?
  • Does this feature have doc impact?
  • New Content, Updates to existing content, Release Note, or No Doc Impact
  • If unsure and no Technical Writer is available, please contact Content Strategy.
  • What concepts do customers need to understand to be successful in [action]?
  • How do we expect customers will use the feature? For what purpose(s)?
  • What reference material might a customer want/need to complete [action]?
  • Is there source material that can be used as reference for the Technical Writer in writing the content? If yes, please link if available.
  • What is the doc impact (New Content, Updates to existing content, or Release Note)?

 

References

 

 

Set the ClusterDeployment CRD to deploy OpenShift in FIPS mode and make sure that after deployment the cluster is set in that mode

In order to install FIPS-compliant clusters, we need to make sure that install-config + agent-config based deployments take into account the FIPS setting in the install-config.

This task is about passing the config to AgentClusterInstall so it makes it into the ISO. Once there, AGENT-374 will give it to assisted-service.
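A minimal sketch of the install-config.yaml input whose FIPS setting needs to be propagated; the cluster name is illustrative:

```
apiVersion: v1
metadata:
  name: fips-cluster   # illustrative
fips: true             # must be carried into the generated AgentClusterInstall and the ISO
```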

Epic Goal

As an OpenShift infrastructure owner, I want to deploy OpenShift clusters with dual-stack IPv4/IPv6

As an OpenShift infrastructure owner, I want to deploy OpenShift clusters with single-stack IPv6

Why is this important?

IPv6 and dual-stack clusters are requested often by customers, especially Telco customers. Working with dual-stack clusters is a requirement for many, and it is also a transition step toward single-stack IPv6 clusters, which for some of our users is the final destination.

Acceptance Criteria

  • Agent-based installer can deploy IPv6 clusters
  • Agent-based installer can deploy dual-stack clusters
  • CI - MUST be running successfully with tests automated
  • Release Technical Enablement - Provide necessary release enablement details and documents.

Previous Work

Karim's work proving how the agent-based installer can deploy IPv6: IPv6 deploy with agent based installer

Done Checklist

  • CI - CI is running, tests are automated and merged.

  • Release Enablement <link to Feature Enablement Presentation>
  • DEV - Upstream code and tests merged: <link to meaningful PR or GitHub Issue>
  • DEV - Upstream documentation merged: <link to meaningful PR or GitHub Issue>
  • DEV - Downstream build attached to advisory: <link to errata>
  • QE - Test plans in Polarion: <link or reference to Polarion>
  • QE - Automated tests merged: <link or reference to automated tests>
  • DOC - Downstream documentation merged: <link to meaningful PR>

For dual-stack installations the agent-cluster-install.yaml must have both an IPv4 and an IPv6 subnet in networking.MachineNetwork, or assisted-service will throw an error. This field is in InstallConfig but it must be added to agent-cluster-install in its Generate().

For single-stack IPv4 and IPv6 installs, setting the MachineNetwork is not needed, but it also does not cause problems if it is set, so it should be fine to set it at all times. A sketch of the dual-stack entries is shown below.
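A rough sketch of how the dual-stack entries would appear in the generated agent-cluster-install.yaml; the CIDRs are illustrative:

```
spec:
  networking:
    machineNetwork:
    - cidr: 192.168.111.0/24           # IPv4 subnet (illustrative)
    - cidr: fd2e:6f44:5dd8:c956::/120  # IPv6 subnet (illustrative)
```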

Epic Goal

As an OpenShift infrastructure owner, I want to deploy a cluster zero with RHACM or MCE and have the required components installed when the installation is completed

Why is this important?

BILLI makes it easier to deploy a cluster zero. BILLI users know at installation time what the purpose of their cluster is when they plan the installation. Day-2 steps are currently necessary to install operators, and users, especially when automating installations, want to finish the installation flow with their required components already installed.

Acceptance Criteria

  • A user can provide MCE manifests and have MCE installed without additional manual steps after the installation is completed
  • CI - MUST be running successfully with tests automated
  • Release Technical Enablement - Provide necessary release enablement details and documents.

Done Checklist

  • CI - CI is running, tests are automated and merged.
  • Release Enablement <link to Feature Enablement Presentation>
  • DEV - Upstream code and tests merged: <link to meaningful PR or GitHub Issue>
  • DEV - Upstream documentation merged: <link to meaningful PR or GitHub Issue>
  • DEV - Downstream build attached to advisory: <link to errata>
  • QE - Test plans in Polarion: <link or reference to Polarion>
  • QE - Automated tests merged: <link or reference to automated tests>
  • DOC - Downstream documentation merged: <link to meaningful PR>

User Story:

As a customer, I want to be able to:

  • Install MCE with the agent-installer

so that I can achieve

  • create an MCE hub with my openshift install

Acceptance Criteria:

Description of criteria:

  • Upstream documentation including examples of the extra manifests needed
  • Unit tests that include MCE extra manifests
  • Ability to install MCE using agent-installer is tested
  • Point 3

(optional) Out of Scope:

We are only allowing the user to provide extra manifests to install MCE at this time. We are not adding an "install mce" option on the command line (or UI). A sketch of such extra manifests is shown below.
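A rough sketch of the kind of extra manifests a user might supply, assuming MCE is installed through standard OLM objects; the namespace, channel, and catalog source are illustrative:

```
apiVersion: v1
kind: Namespace
metadata:
  name: multicluster-engine
---
apiVersion: operators.coreos.com/v1
kind: OperatorGroup
metadata:
  name: multicluster-engine
  namespace: multicluster-engine
spec:
  targetNamespaces:
  - multicluster-engine
---
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: multicluster-engine
  namespace: multicluster-engine
spec:
  channel: stable-2.1               # illustrative channel
  name: multicluster-engine
  source: redhat-operators          # illustrative catalog source
  sourceNamespace: openshift-marketplace
```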

Engineering Details:

This requires/does not require a design proposal.
This requires/does not require a feature gate.

oc-mirror is a GA product as of OpenShift 4.11.

The goal of this feature is to address future customer requests for new features or capabilities in oc-mirror.

Epic Goal

  • Mirror to mirror operations and custom mirroring flows required by IBM CloudPak catalog management

Why is this important?

  • IBM needs additional customization around the actual mirroring of images to enable CloudPaks to fully adopt OLM-style operator packaging and catalog management
  • IBM CloudPaks introduce additional compute architectures, increasing the download volume by 2/3rds today; we need the ability to effectively filter out non-required image versions of OLM operator catalogs during mirroring for customers that only require a single or a subset of the available image architectures
  • IBM CloudPaks regularly run on older OCP versions like 4.8 which require additional work to be able to read the mirrored catalog produced by oc mirror

Scenarios

  1. Customers can use the oc utility and delegate the actual image mirror step to another tool
  2. Customers can mirror between disconnected registries using the oc utility
  3. The oc utility supports filtering manifest lists in the context of multi-arch images according to the sparse manifest list proposal in the distribution spec

Acceptance Criteria

  • Customers can use the oc utility to mirror between two different air-gapped environments
  • Customers can specify the desired compute architectures and oc mirror will create sparse manifest lists in the target registry as a result

Dependencies (internal and external)

Previous Work:

  1. WRKLDS-369
  2. Disconnected Mirroring Improvement Proposal

Related Work:

  1. https://github.com/opencontainers/distribution-spec/pull/310
  2. https://github.com/distribution/distribution/pull/3536
  3. https://docs.google.com/document/d/10ozLoV7sVPLB8msLx4LYamooQDSW-CAnLiNiJ9SER2k/edit?usp=sharing

Pre-Work Objectives

Since some of our requirements from the ACM team will not be available for the 4.12 timeframe, the team should work on anything we can get done in the scope of the console repo so that when the required items are available in 4.13, we can be more nimble in delivering GA content for the Unified Console Epic.

Overall GA Key Objective
Providing our customers with a single simplified User Experience (Hybrid Cloud Console) that is extensible, can run locally or in the cloud, and is capable of managing the fleet as well as deep diving into a single cluster.
Why do customers want this?

  1. Single interface to accomplish their tasks
  2. Consistent UX and patterns
  3. Easily accessible: One URL, one set of credentials

Why do we want this?

  • Shared code -  improve the velocity of both teams and most importantly ensure consistency of the experience at the code level
  • Pre-built PF4 components
  • Accessibility & i18n
  • Remove barriers for enabling ACM

Phase 2 Goal: Productization of the unified Console

  1. Enable user to quickly change context from fleet view to single cluster view
    1. Add Cluster selector with “All Cluster” Option. “All Cluster” = ACM
    2. Shared SSO across the fleet
    3. Hub OCP Console can connect to remote clusters API
    4. When ACM Installed the user starts from the fleet overview aka “All Clusters”
  2. Share UX between views
    1. ACM Search —> resource list across fleet -> resource details that are consistent with single cluster details view
    2. Add Cluster List to OCP —> Create Cluster

As a developer I would like to disable clusters like *KS that we can't support for multi-cluster (for instance because we can't authenticate). The ManagedCluster resource has a vendor label that we can use to know if the cluster is supported.

cc Ali Mobrem Sho Weimer Jakub Hadvig 

UPDATE: 9/20/22 : we want an allow-list with OpenShift, ROSA, ARO, ROKS, and  OpenShiftDedicated

Acceptance criteria:

  • Investigate if console-operator should pass info about which clusters are supported and unsupported to the frontend
  • Unsupported clusters should not appear in the cluster dropdown
  • Unsupported clusters are determined based on:
    • the defined vendor label
    • non-4.x OCP clusters

Feature Overview

RHEL CoreOS should be updated to RHEL 9.2 sources to take advantage of newer features, hardware support, and performance improvements.

 

Requirements

  • RHEL 9.x sources for RHCOS builds starting with OCP 4.13 and RHEL 9.2.

 

Requirement | Notes | isMvp?
CI - MUST be running successfully with test automation | This is a requirement for ALL features. | YES
Release Technical Enablement | Provide necessary release enablement details and documents. | YES

(Optional) Use Cases

  • 9.2 Preview via Layering: no longer necessary assuming we stay the course of going all-in on 9.2

Assumptions

  • ...

Customer Considerations

  • ...

Documentation Considerations

Questions to be addressed:

  • What educational or reference material (docs) is required to support this product feature? For users/admins? Other functions (security officers, etc)?
  • Does this feature have doc impact?
  • New Content, Updates to existing content, Release Note, or No Doc Impact
  • If unsure and no Technical Writer is available, please contact Content Strategy.
  • What concepts do customers need to understand to be successful in [action]?
  • How do we expect customers will use the feature? For what purpose(s)?
  • What reference material might a customer want/need to complete [action]?
  • Is there source material that can be used as reference for the Technical Writer in writing the content? If yes, please link if available.
  • What is the doc impact (New Content, Updates to existing content, or Release Note)?

PROBLEM

We would like to improve our signal for RHEL9 readiness by increasing internal engineering engagement and external partner engagement on our community OpenShift offering, OKD.

PROPOSAL

Adding OKD to run on SCOS (a CentOS stream for CoreOS) brings the community offering closer to what a partner or an internal engineering team might expect on OCP.

ACCEPTANCE CRITERIA

Image has been switched/included: 

DEPENDENCIES

The SCOS build payload.

RELATED RESOURCES

OKD+SCOS proposal: https://docs.google.com/presentation/d/1_Xa9Z4tSqB7U2No7WA0KXb3lDIngNaQpS504ZLrCmg8/edit#slide=id.p

OKD+SCOS work draft: https://docs.google.com/document/d/1cuWOXhATexNLWGKLjaOcVF4V95JJjP1E3UmQ2kDVzsA/edit

 

Acceptance Criteria

A stable OKD on SCOS is built and available to the community on a sprint cadence.

 

This comes up when installing ipi-on-aws on arm64 with the custom payload build at quay.io/aleskandrox/okd-release:4.12.0-0.okd-centos9-full-rebuild-arm64 that is using SCOS as the machine-os-content image

 

```

[root@ip-10-0-135-176 core]# crictl logs c483c92e118d8
2022-08-11T12:19:39+00:00 [cnibincopy] FATAL ERROR: Unsupported OS ID=scos
```

 

The probable fix has to land on https://github.com/openshift/cluster-network-operator/blob/master/bindata/network/multus/multus.yaml#L41-L53

Overview 

HyperShift came to life to serve multiple goals; some are the main near-term goals, and some are secondary goals that serve us well long-term.

Main Goals for hosted control planes (HyperShift)

  • Optimize OpenShift for cost/footprint, which improves our competitive stance against the *KSes
  • Establish separation of concerns which makes it more resilient for SRE to manage their workload clusters (be it security, configuration management, etc).
  • Simplify and enhance multi-cluster management experience especially since multi-cluster is becoming an industry need nowadays. 

Secondary Goals

HyperShift opens up doors to penetrate the market. HyperShift enables true hybrid (CP and Workers decoupled, mixed IaaS, mixed Arch,...). An architecture that opens up more options to target new opportunities in the cloud space. For more details on this one check: Hosted Control Planes (aka HyperShift) Strategy [Live Document]

 

Hosted Control Planes (HyperShift) Map 

To bring hosted control planes to our customers, we need the means to ship it. Today MCE is how HyperShift is shipped and installed so that customers can use it. There are two main customers for hosted control planes:

 

  • Self-managed: In that case, Red Hat would provide hosted control planes as a service that is managed and SREed by the customer for their tenants (hence “self”-managed). In this management model, our external customers are the direct consumers of the multi-cluster control plane as a service. Once MCE is installed, they can start to self-service dedicated control planes.

 

  • Managed: This is OpenShift as a managed service; today we only “manage” the CP and share the responsibility for other system components (more info here). The goals are to reduce management costs incurred by service delivery organizations, which translates to operating profit (by reducing variable costs per control plane), as well as to improve user experience, lower platform overhead (allow customers to focus mostly on writing applications and not concern themselves with infrastructure artifacts), and improve the cluster provisioning experience. HyperShift is shipped via MCE and delivered to Red Hat managed SREs (same consumption route). However, for managed services, additional tooling needs to be refactored to support the new provisioning path. Furthermore, unlike self-managed, where customers are free to bring their own observability stack, Red Hat managed SREs need to observe the managed fleet to ensure compliance with SLOs/SLIs/…

 

As noted, MCE is the delivery mechanism for both management models. The difference between managed and self-managed is the consumer persona: for self-managed it's the customer SRE, for managed it's the RH SRE.

High-level Requirements

For us to ship HyperShift in the product (as hosted control planes) in either management model, there is a necessary readiness checklist that we need to satisfy. Below are the high-level requirements needed before GA: 

 

  • Hosted control planes fits well with our multi-cluster story (with MCE)
  • Hosted control planes APIs are stable for consumption  
  • Customers are not paying for control planes/infra components.  
  • Hosted control planes has an HA and a DR story
  • Hosted control planes is in parity with top-level add-on operators 
  • Hosted control planes reports metrics on usage/adoption
  • Hosted control planes is observable  
  • HyperShift as a backend to managed services is fully unblocked.

 

Please also have a look at our What are we missing in Core HyperShift for GA Readiness? doc. 

Hosted control planes fits well with our multi-cluster story

Multi-cluster is becoming an industry need today, not because this is where the trend is going but because it’s the only viable path today to solve many of our customers' use-cases. Below is some reasoning why multi-cluster is a NEED:

 

 

As a result, multi-cluster management is a defining category in the market where Red Hat plays a key role. Today Red Hat solves for multi-cluster via RHACM and MCE. The goal is to simplify fleet management complexity by providing a single pane of glass to observe, secure, police, govern, configure a fleet. I.e., the operand is no longer one cluster but a set, a fleet of clusters. 

HyperShift's logically centralized architecture, as well as its native separation of concerns and superior cluster lifecycle management experience, makes it a great fit as the foundation of our multi-cluster management story.

Thus the following stories are important for HyperShift: 

  • When lifecycling OpenShift clusters (for any OpenShift form factor) on any of the supported providers from MCE/ACM/OCM/CLI as a Cluster Service Consumer  (RH managed SRE, or self-manage SRE/admin):
  • I want to be able to use a consistent UI so I can manage and operate (observe, govern,...) a fleet of clusters.
  • I want to specify HA constraints (e.g., deploy my clusters in different regions) while ensuring acceptable QoS (e.g., latency boundaries) to ensure/reduce any potential downtime for my workloads. 
  • When operating OpenShift clusters (for any OpenShift form factor) on any of the supported provider from MCE/ACM/OCM/CLI as a Cluster Service Consumer  (RH managed SRE, or self-manage SRE/admin):
  • I want to be able to backup any critical data so I am able to restore them in case of hosting service cluster (management cluster) failure. 

Refs:

Hosted control planes APIs are stable for consumption.

 

HyperShift is the core engine that will be used to provide hosted control-planes for consumption in managed and self-managed. 

 

Main user story:  When life cycling clusters as a cluster service consumer via HyperShift core APIs, I want to use a stable/backward compatible API that is less susceptible to future changes so I can provide availability guarantees. 

 

Ref: What are we missing in Core HyperShift for GA Readiness?

Customers are not paying for control planes/infra components. 

 

Customers do not pay Red Hat more to run HyperShift control planes and supporting infrastructure than Standalone control planes and supporting infrastructure.

Assumptions

  • A customer will be able to associate a cluster as “Infrastructure only”
  • E.g. one option: management cluster has role=master, and role=infra nodes only, control planes are packed on role=infra nodes
  • OR the entire cluster is labeled infrastructure , and node roles are ignored.
  • Anything that runs on a master node by default in Standalone that is present in HyperShift MUST be hosted and not run on a customer worker node.

HyperShift - proposed cuts from data plane

HyperShift has an HA and a DR story

When operating OpenShift clusters (for any OpenShift form factor) from MCE/ACM/OCM/CLI as a Cluster Service Consumer  (RH managed SRE, or self-manage SRE/admin) I want to be able to migrate CPs from one hosting service cluster to another:

  • as means for disaster recovery in the case of total failure
  • so that scaling pressures on a management cluster can be mitigated or a management cluster can be decommissioned.

More information: 

 

Hosted control planes reports metrics on usage/adoption

To understand usage patterns and inform our decision making for the product. We need to be able to measure adoption and assess usage.

See Hosted Control Planes (aka HyperShift) Strategy [Live Document]

Hosted control plane is observable  

Whether it's managed or self-managed, it’s pertinent to report health metrics to be able to create meaningful Service Level Objectives (SLOs), alert of failure to meet our availability guarantees. This is especially important for our managed services path. 

HyperShift is in parity with top-level add-on operators

https://issues.redhat.com/browse/OCPPLAN-8901 

Unblock HyperShift as a backend to managed services

HyperShift for managed services is a strategic company goal as it improves usability, feature, and cost competitiveness against other managed solutions, and because managed services/consumption-based cloud services is where we see the market growing (customers are looking to delegate platform overhead). 

 

We should make sure our SD milestones are unblocked by the core team. 

 

Note 

This feature reflects HyperShift core readiness to be consumed. When all related EPICs and stories in this EPIC are complete HyperShift can be considered ready to be consumed in GA form. This does not describe a date but rather the readiness of core HyperShift to be consumed in GA form NOT the GA itself.

- GA date for self-managed will be factoring in other inputs such as adoption, customer interest/commitment, and other factors. 
- GA dates for ROSA-HyperShift are on track, tracked in milestones M1-7 (have a look at https://issues.redhat.com/browse/OCPPLAN-5771)

Epic Goal*

The goal is to split client certificate trust chains from the global Hypershift root CA.

 
Why is this important? (mandatory)

This is important to:

  • assure a workload can be run on any kind of OCP flavor
  • reduce the blast radius in case of a sensitive material leak
  • separate trust to allow more granular control over client certificate authentication

 
Scenarios (mandatory) 

Provide details for user scenarios including actions to be performed, platform specifications, and user personas.  

  1. I would like to be able to run my workloads on any OpenShift-like platform.
    My workloads allow components to authenticate using client certificates based
    on a trust bundle that I am able to retrieve from the cluster.
  2. I don't want my users to have access to any CA bundle that would allow them
    to trust a random certificate from the cluster for client certificate authentication.

 
Dependencies (internal and external) (mandatory)

Hypershift team needs to provide us with code reviews and merge the changes we are to deliver

Contributing Teams(and contacts) (mandatory) 

  • Development - OpenShift Auth, Hypershift
  • Documentation -OpenShift Auth Docs team
  • QE - OpenShift Auth QE
  • PX - I have no idea what PX is
  • Others - others

Acceptance Criteria (optional)

The serviceaccount CA bundle automatically injected to all pods cannot be used to authenticate any client certificate generated by the control-plane.

Drawbacks or Risk (optional)

Risk: there is significant time pressure, as this should be delivered before the first stable Hypershift release

Done - Checklist (mandatory)

  • CI Testing - Basic e2e automation tests are merged and completing successfully
  • Documentation - Content development is complete.
  • QE - Test scenarios are written and executed successfully.
  • Technical Enablement - Slides are complete (if requested by PLM)
  • Engineering Stories Merged
  • All associated work items with the Epic are closed
  • Epic status should be “Release Pending” 

Feature Overview (aka. Goal Summary)  

The MCO should properly report its state in a way that's consistent and able to be understood by customers, troubleshooters, and maintainers alike. 

Some customer cases have revealed scenarios where the MCO state reporting is misleading and therefore could be unreliable to base decisions and automation on.

In addition to correcting some incorrect states, the MCO will be enhanced for a more granular view of update rollouts across machines.

The MCO should properly report its state in a way that's consistent and able to be understood by customers, troubleshooters, and maintainers alike. 

For this epic, "state" means "what is the MCO doing?" – so the goal here is to try to make sure that it's always known what the MCO is doing. 

This includes: 

  • Conditions
  • Some Logging 
  • Possibly Some Events 

While this probably crosses a little bit into the "status" portion of certain MCO objects, as some state is definitely recorded there, this probably shouldn't turn into a "better status reporting" epic.  I'm interpreting "status" to mean "how is it going" so status is maybe a "detail attached to a state". 

 

Exploration here: https://docs.google.com/document/d/1j6Qea98aVP12kzmPbR_3Y-3-meJQBf0_K6HxZOkzbNk/edit?usp=sharing

 

https://docs.google.com/document/d/17qYml7CETIaDmcEO-6OGQGNO0d7HtfyU7W4OMA6kTeM/edit?usp=sharing

 

The current property description is:

configuration represents the current MachineConfig object for the machine config pool.

But in a 4.12.0-ec.4 cluster, the actual semantics seem to be something closer to "the most recent rendered config that we completely leveled on". We should at least update the godocs to be more specific about the intended semantics. And perhaps consider adjusting the semantics?

Complete Epics

This section includes Jira cards that are linked to an Epic, but the Epic itself is not linked to any Feature. These epics were completed when this image was assembled

Epic Goal

  • Update OpenShift components that are owned by the Builds + Jenkins Team to use Kubernetes 1.25

Why is this important?

  • Our components need to be updated to ensure that they are using the latest bug/CVE fixes, features, and that they are API compatible with other OpenShift components.

Acceptance Criteria

  • Existing CI/CD tests must be passing

This epic tracks "business as usual" requirements / enhancements / bug fixing of the Insights Operator.

Today the links point at a rule-scoped page, but that page lacks information about recommended resolution.  You can click through by cluster ID to your specific cluster and get that recommendation advice, but it would be more convenient and less confusing for customers if we linked directly to the cluster-scoped recommendation page.

We can implement by updating the template here to be:

fmt.Sprintf("https://console.redhat.com/openshift/insights/advisor/clusters/%s?first=%s%%7C%s", clusterID, ruleIDStr, rec.ErrorKey)

or something like that.

 

Unknowns

The request is clear; the solution/implementation is to be further clarified.

This epic contains all the Dynamic Plugins related stories for OCP release-4.11 

Epic Goal

  • Track all the stories under a single epic

Acceptance Criteria

  •  

This story only covers API components. We will create a separate story for other utility functions.

Today we are generating documentation for Console's Dynamic Plugin SDK in
frontend/packages/dynamic-plugin-sdk. We are missing ts-doc for a set of hooks and components.

We are generating the markdown from the dynamic-plugin-sdk using

yarn generate-doc

Here is the list of the API that the dynamic-plugin-sdk is exposing:

https://gist.github.com/spadgett/0ddefd7ab575940334429200f4f7219a

Acceptance Criteria:

  • Add missing jsdocs for the API that dynamic-plugin-sdk exposes

Out of Scope:

  • This does not include work for integrating the API docs into the OpenShift docs
  • This does not cover other public utilities, only components.

This epic contains all the Dynamic Plugins related stories for OCP release-4.12

Epic Goal

  • Track all the stories under a single epic

Acceptance Criteria

Based on API review CONSOLE-3145, we have decided to deprecate the following APIs:

  • useAccessReviewAllowed (use useAccessReview instead)
  • useSafetyFirst

cc Andrew Ballantyne Bryan Florkiewicz 

Currently our `api.md` does not generate docs with "tags" (aka `@deprecated`) – we'll need to add that functionality to the `generate-doc.ts` script. See the code that works for `console-extensions.md`

The console has good error boundary components that are useful for dynamic plugin.
Exposing them will enable the plugins to get the same look and feel of handling react errors as console
The minimum requirement right now is to expose the ErrorBoundaryFallbackPage component from
https://github.com/openshift/console/blob/master/frontend/packages/console-shared/src/components/error/fallbacks/ErrorBoundaryFallbackPage.tsx

To align with https://github.com/openshift/dynamic-plugin-sdk, plugin metadata field dependencies as well as the @console/pluginAPI entry contained within should be made optional.

If a plugin doesn't declare the @console/pluginAPI dependency, the Console release version check should be skipped for that plugin.

Move `frontend/public/components/nav` to `packages/console-app/src/components/nav` and address any issues resulting from the move.

There will be some expected lint errors relating to cyclical imports. These will require some refactoring to address.

Following https://coreos.slack.com/archives/C011BL0FEKZ/p1650640804532309, it would be useful for us (network observability team) to have access to ResourceIcon in dynamic-plugin-sdk.

Currently ResourceLink is exported but not ResourceIcon

 

AC:

  • Export the ResourceIcon from public to dynamic-plugin-sdk
  • Add the component to the dynamic-demo-plugin
  • Add a CI test to check for the ResourceIcon component

 

When defining two proxy endpoints,

apiVersion: console.openshift.io/v1alpha1
kind: ConsolePlugin
metadata:
  ...
  name: forklift-console-plugin
spec:
  displayName: Console Plugin Template
  proxy:
  - alias: forklift-inventory
    authorize: true
    service:
      name: forklift-inventory
      namespace: konveyor-forklift
      port: 8443
    type: Service
  - alias: forklift-must-gather-api
    authorize: true
    service:
      name: forklift-must-gather-api
      namespace: konveyor-forklift
      port: 8443
    type: Service
  service:
    basePath: /

I get two proxy endpoints
/api/proxy/plugin/forklift-console-plugin/forklift-inventory
and
/api/proxy/plugin/forklift-console-plugin/forklift-must-gather-api

but both proxy to the `forklift-must-gather-api` service

e.g.
curl to:
[server url]/api/proxy/plugin/forklift-console-plugin/forklift-inventory
will point to the `forklift-must-gather-api` service, instead of the `forklift-inventory` service

Currently the ConsolePlugins API version is v1alpha1. Since we are going GA with dynamic plugins we should be creating a v1 version.

This would require updates in following repositories:

  1. openshift/api (add the v1 version and generate a new CRD)
  2. openshift/client-go (pick up the changes in the openshift/api repo and generate clients & informers for the new v1 version)
  3. openshift/console-operator repository will use both the new v1 version and v1alpha1 in the code and manifests folder.

AC:

  • both v1 and v1alpha1 ConsolePlugins should be passed to the console-config.yaml when the plugins are enabled and present on the cluster.

 

NOTE: This story does not include the conversion webhook change which will be created as a follow on story

During the development of https://issues.redhat.com/browse/CONSOLE-3062, it was determined additional information is needed in order to assist a user when troubleshooting a Failed plugin (see https://github.com/openshift/console/pull/11664#issuecomment-1159024959). As it stands today, there is no data available to the console to relay to the user regarding why the plugin Failed. Presumably, a message should be added to NotLoadedDynamicPlugin to address this gap.

 

AC: Add `message` property to NotLoadedDynamicPluginInfo type.

We should have a global notification or the `Console plugins` page (e.g., k8s/cluster/operator.openshift.io~v1~Console/cluster/console-plugins) should alert users when console operator `spec.managementState` is `Unmanaged` as changes to `enabled` for plugins will have no effect.

We neither use nor support static plugin nav extensions anymore so we should remove the API in the static plugin SDK and get rid of related cruft in our current nav components.

 

AC: Remove static plugin nav extensions code. Check the navigation code for any references to the old API.

The extension `console.dashboards/overview/detail/item` doesn't constrain the content to fit the card.

The details-card has an expectation that a <dd> item will be the last item (for spacing between items). Our static details-card items use a component called 'OverviewDetailItem'. This isn't enforced in the extension and can cause undesired padding issues if they just do whatever they want.

I feel our approach here should be making the extension take the props of 'OverviewDetailItem' where 'children' is the new 'component'.

Acceptance Criteria:

  • Deprecate the old extension (in docs, with date/stamp)
  • Make a new extension that applies a stricter type
  • Include this new extension next to the old one (with the error boundary around it)

`@openshift-console/plugin-shared` (NPM) is a package that will contain shared components that can be upversioned separately by the Plugins so they can keep core compatibility low but upversion and support more shared components as we need them.

This isn't documented today. We need to do that.

Acceptance Criteria

  • Add a note in the "SDK packages" section of the README about the existence of this package and its purpose
    • The purpose of being a static utility delivery library intended not to be tied to OpenShift Console versions and compatible with multiple version of OpenShift Console

This epic contains all the OLM related stories for OCP release-4.12

Epic Goal

  • Track all the stories under a single epic

This enhancement Introduces support for provisioning and upgrading heterogenous architecture clusters in phases.

 

We need to scan through the compute nodes and build a set of supported architectures from those. Each node on the cluster has a label for architecture, e.g. `kubernetes.io/arch=arm64`, `kubernetes.io/arch=amd64`, etc. Based on the set of supported architectures, the console will need to surface only those operators in the Operator Hub which are supported on our Nodes. Each operator's PackageManifest contains labels that indicate the operator's supported architectures, e.g. `operatorframework.io/arch.s390x: supported`. An operator can be supported on multiple architectures.

AC:

  1. Implement logic in the console's backend to read the set of architecture types from console-config.yaml and set it as a SERVER_FLAG.nodeArchitectures (Change similar to https://github.com/openshift/console/commit/39aabe171a2e89ed3757ac2146d252d087fdfd33)
  2. In Operator Hub, render only operators that are supported on any given node, based on the SERVER_FLAG.nodeArchitectures field implemented in CONSOLE-3242.

 

OS and arch filtering: https://github.com/openshift/console/blob/2ad4e17d76acbe72171407fc1c66ca4596c8aac4/frontend/packages/operator-lifecycle-manager/src/components/operator-hub/operator-hub-items.tsx#L49-L86

 

@jpoulin is good to ask about heterogeneous clusters.

This enhancement Introduces support for provisioning and upgrading heterogenous architecture clusters in phases.

 

We need to scan through the compute nodes and build a set of supported architectures from those. Each node on the cluster has a label for architecture: e.g. kubernetes.io/arch=arm64, kubernetes.io/arch=amd64 etc. Based on the set of supported architectures console will need to surface only those operators in the Operator Hub, which are supported on our Nodes.

 

AC: 

  1. Implement logic in the console-operator that will scan through all the nodes, build a set of all the architecture types that the cluster nodes run on, and pass it to the console-config.yaml
  2. Add unit and e2e test cases in the console-operator repository.

 

@jpoulin is good to ask about heterogeneous clusters.

An epic we can duplicate for each release to ensure we have a place to catch things we ought to be doing regularly but can tend to fall by the wayside.

As a developer, I want to be able to clean up the css markup after making the css / scss changes required for dark mode and remove any old unused css / scss content. 

 

Acceptance criteria:

  • Remove any unused scss / css content after revamping for dark mode

Epic Goal

  • Enable OpenShift IPI Installer to deploy OCP to a shared VPC in GCP.
  • The host project is where the VPC and subnets are defined. Those networks are shared to one or more service projects.
  • Objects created by the installer are created in the service project where possible. Firewall rules may be the only exception.
  • Documentation outlines the needed minimal IAM for both the host and service project.

Why is this important?

  • Shared VPCs are a feature of GCP to enable granular separation of duties for organizations that centrally manage networking but delegate other functions and separation of billing. This is used more often in larger organizations where separate teams manage subsets of the cloud infrastructure. Enterprises that use this model would also like to create IPI clusters so that they can leverage the features of IPI. Currently, organizations that use Shared VPCs must use UPI and implement the features of IPI themselves. This is repetitive engineering of little value to the customer and an increased risk of drift from upstream IPI over time. As new features are built into IPI, organizations must become aware of those changes and implement them themselves instead of getting them "for free" during upgrades.

Scenarios

  1. Deploy cluster(s) into service project(s) on network(s) shared from a host project.

Acceptance Criteria

  • CI - MUST be running successfully with tests automated
  • Release Technical Enablement - Provide necessary release enablement details and documents.
  • ...

Done Checklist

  • CI - CI is running, tests are automated and merged.
  • Release Enablement <link to Feature Enablement Presentation>
  • DEV - Upstream code and tests merged: <link to meaningful PR or GitHub Issue>
  • DEV - Upstream documentation merged: <link to meaningful PR or GitHub Issue>
  • DEV - Downstream build attached to advisory: <link to errata>
  • QE - Test plans in Polarion: <link or reference to Polarion>
  • QE - Automated tests merged: <link or reference to automated tests>
  • DOC - Downstream documentation merged: <link to meaningful PR>

User Story:

As a user, I want to be able to:

  • skip creating service accounts in Terraform when using passthrough credentialsMode.
  • pass the installer service account to Terraform to be used as the service account for instances when using passthrough credentialsMode.

so that I can achieve

  • creating an IPI cluster using Shared VPC networks using a pre-created service account with the necessary permissions in the Host Project.

Acceptance Criteria:

Description of criteria:

  • Upstream documentation
  • Point 1
  • Point 2
  • Point 3

(optional) Out of Scope:

Detail about what is specifically not being delivered in the story

Engineering Details:

1. Proposed title of this feature request
Basic authentication for Helm Chart repository in helmchartrepositories.helm.openshift.io CRD.

2. What is the nature and description of the request?
As of v4.6.9, the HelmChartRepository CRD only supports client TLS authentication through spec.connectionConfig.tlsClientConfig.

3. Why do you need this? (List the business requirements here)
Basic authentication is widely used by many chart repositories managers (Nexus OSS, Artifactory, etc.)
Helm CLI also supports them with the helm repo add command.
https://helm.sh/docs/helm/helm_repo_add/

4. How would you like to achieve this? (List the functional requirements here)
Probably by extending the CRD:

spec:
  connectionConfig:
    username: username
    password:
      secretName: secret-name

The secret namespace should be openshift-config to align with the tlsClientConfig behavior.

5. For each functional requirement listed in question 4, specify how Red Hat and the customer can test to confirm the requirement is successfully implemented.
Trying to pull helm charts from remote private chart repositories that have disabled anonymous access and offer basic authentication.
E.g.: https://github.com/sonatype/docker-nexus

Owner: Architect:

Story (Required)

As an OCP user, I would like to be able to install helm charts from repos added to ODC with basic authentication fields populated

Background (Required)

We need to support helm installs for Repos that have the basic authentication secret name and namespace.

Glossary

Out of scope

Updating the ProjectHelmChartRepository CRD is already done in a different story.
Supporting the HelmChartRepository CR is out of scope; this feature will be scoped first to project/namespace-scoped repos.

In Scope

<Defines what is included in this story>

Approach(Required)

If the new fields for basic auth are set in the repo CR, then use those credentials when making API calls to Helm to install/upgrade charts. We will error out if the logged-in user does not have access to the secret referenced by the repo CR. If basic auth fields are not present, we assume it is not an authenticated repo. A sketch of such a repo CR is shown below.
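A minimal sketch of a namespace-scoped repo CR, following the field shape proposed in the RFE above; the final API may differ, and the URL, user, and secret name are illustrative:

```
apiVersion: helm.openshift.io/v1beta1
kind: ProjectHelmChartRepository
metadata:
  name: private-charts
  namespace: my-project
spec:
  connectionConfig:
    url: https://charts.example.com          # illustrative repository URL
    username: helm-user                      # proposed basic-auth fields per the RFE
    password:
      secretName: private-charts-credentials # Secret holding the password
```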

Dependencies

None

Edge Case

NA

Acceptance Criteria

I can list, install and update charts on authenticated repos from ODC
Needs Documentation both upstream and downstream
Needs new unit test covering repo auth

INVEST Checklist

Dependencies identified
Blockers noted and expected delivery timelines set
Design is implementable
Acceptance criteria agreed upon
Story estimated

Legend

Unknown
Verified
Unsatisfied

Epic Goal

  • Support manifest lists by image streams and the integrated registry. Clients should be able to pull/push manifest lists from/into the integrated registry. They also should be able to import images via `oc import-image` and then pull them from the internal registry.

Why is this important?

  • Manifest lists are becoming more and more popular. Customers want to mirror manifest lists into the registry and be able to pull them by digest.

Scenarios

  1. Manifest lists can be pushed into the integrated registry
  2. Imported manifests list can be pulled from the integrated registry
  3. Image triggers work with manifest lists

Acceptance Criteria

  • CI - MUST be running successfully with tests automated
  • Release Technical Enablement - Provide necessary release enablement details and documents.
  • Existing functionality shouldn't change its behavior

Dependencies (internal and external)

  1. ...

Previous Work (Optional)

  1. https://github.com/openshift/enhancements/blob/master/enhancements/manifestlist/manifestlist-support.md

Open questions

  1. Can we merge creation of images without having the pruner?

Done Checklist

  • CI - CI is running, tests are automated and merged.
  • Release Enablement <link to Feature Enablement Presentation>
  • DEV - Upstream code and tests merged: <link to meaningful PR or GitHub Issue>
  • DEV - Upstream documentation merged: <link to meaningful PR or GitHub Issue>
  • DEV - Downstream build attached to advisory: <link to errata>
  • QE - Test plans in Polarion: <link or reference to Polarion>
  • QE - Automated tests merged: <link or reference to automated tests>
  • DOC - Downstream documentation merged: <link to meaningful PR>

ACCEPTANCE CRITERIA

  • The ImageStream object should contain a new flag indicating that it refers to a manifest list
  • openshift-controller-manager uses new openshift/api code to import image streams
  • changing `importMode` of an image stream tag triggers a new import (i.e. updates generation in the tag spec); see the sketch below
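A minimal sketch of an image stream tag that asks the importer to preserve the manifest list, assuming the importPolicy.importMode field described in the enhancement; the image reference is illustrative:

```
apiVersion: image.openshift.io/v1
kind: ImageStream
metadata:
  name: multiarch-app
spec:
  tags:
  - name: latest
    from:
      kind: DockerImage
      name: quay.io/example/multiarch-app:latest   # illustrative multi-arch image
    importPolicy:
      importMode: PreserveOriginal                 # keep the manifest list instead of a single sub-manifest
```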

NOTES

This is a follow up Epic to https://issues.redhat.com/browse/MCO-144, which aimed to get in-place upgrades for Hypershift. This epic aims to capture additional work to focus on using CoreOS/OCP layering into Hypershift, which has benefits such as:

 

 - removing or reducing the need for ignition

 - maintaining feature parity between self-driving and managed OCP models

 - adding additional functionality such as hotfixes

Right now in https://github.com/openshift/hypershift/pull/1258 you can only perform one upgrade at a time. Multiple upgrades will break due to controller logic

 

Properly create logic to handle manifest creation/updates and deletion, so the logic is more bulletproof

Currently not implemented, and will require the MCD hypershift mode to be adjusted to handle disruptionless upgrades like regular MCD

We plan to build Ironic Container Images using RHEL9 as base image in OCP 4.12

This is required because the ironic components have abandoned support for CentOS Stream 8 and Python 3.6/3.7 upstream during the most recent development cycle that will produce the stable Zed release, in favor of CentOS Stream 9 and Python 3.8/3.9

More info on RHEL8 to RHEL9 transition in OCP can be found at https://docs.google.com/document/d/1N8KyDY7KmgUYA9EOtDDQolebz0qi3nhT20IOn4D-xS4

Epic Goal

  • We need the installer to accept an LB type from the user, and then we can set the type of LB in the following object:
    oc get ingress.config.openshift.io/cluster -o yaml
    Then we can fetch info from this object and reconcile the operator to have the NLB changes reflected (see the sketch below).
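A rough sketch of how the user could express the LB type at install time, assuming a platform.aws.lbType field in install-config.yaml; the cluster name, region, and value are illustrative:

```
apiVersion: v1
metadata:
  name: example-cluster    # illustrative
platform:
  aws:
    region: us-east-1      # illustrative
    lbType: NLB            # user-selected load balancer type, reflected in ingress.config.openshift.io/cluster
```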

 

This is an API change and we will consider this as a feature request.

Why is this important?

https://issues.redhat.com/browse/NE-799 Please check this for more details

 

Scenarios

https://issues.redhat.com/browse/NE-799 Please check this for more details

Acceptance Criteria

  • CI - MUST be running successfully with tests automated
  • Release Technical Enablement - Provide necessary release enablement details and documents.
  • ...

Dependencies (internal and external)

  1. installer
  2. ingress operator

Previous Work (Optional):

 No

Open questions::

N/A

Done Checklist

  • CI - CI is running, tests are automated and merged.
  • Release Enablement <link to Feature Enablement Presentation>
  • DEV - Upstream code and tests merged: <link to meaningful PR or GitHub Issue>
  • DEV - Upstream documentation merged: <link to meaningful PR or GitHub Issue>
  • DEV - Downstream build attached to advisory: <link to errata>
  • QE - Test plans in Polarion: <link or reference to Polarion>
  • QE - Automated tests merged: <link or reference to automated tests>
  • DOC - Downstream documentation merged: <link to meaningful PR>

We need tests for the ovirt-csi-driver and the cluster-api-provider-ovirt. These tests help us to

  • minimize bugs,
  • reproduce and fix them faster and
  • pin down current behavior of the driver

Also, having dedicated tests on lower levels with a smaller scope (unit, integration, ...) has the following benefits:

  • fast feedback cycle (local test execution)
  • developer in-code documentation
  • easier onboarding for new contributors
  • lower resource consumption
The details of this Jira Card are restricted (Red Hat Employee and Contractors only)

Description

As a user, I would like to be informed in an intuitive way when quotas have been reached in a namespace

Acceptance Criteria

  1. Show an alert banner on the Topology and add page for this project/namespace when there is a RQ (Resource Quota) / ACRQ (Applied Cluster Resource Quota) issue
    PF guideline: https://www.patternfly.org/v4/components/alert/design-guidelines#using-alerts 
  2. The above alert should have a CTA link to the search page with all RQ, ACRQ and if there is just one show the details page for the same
  3. For RQ, ACRQ list view show one more column called status with details as shown in the project view.

Additional Details:

 

Refer below for more details 

Description

As a user, in the topology view, I would like to be updated intuitively if any of the deployments have reached quota limits

Acceptance Criteria

  1. Show a yellow border around deployments if any of the deployments have reached the quota limit
  2. For deployments, if there are any errors associated with resource limits or quotas, include a warning alert in the side panel.
    1. If we know resource limits are the cause, include link to Edit resource limits
    2. If we know pod count is the cause, include a link to Edit pod count

Additional Details:

 

Refer below for more details 

Goal

Provide a form driven experience to allow cluster admins to manage the perspectives to meet the ACs below.

Problem:

We have heard the following requests from customers and developer advocates:

  • Some admins do not want to provide access to the Developer Perspective from the console
  • Some admins do not want to provide non-priv users access to the Admin Perspective from the console

Acceptance criteria:

  1. Cluster administrator is able to "hide" the admin perspective for non-priv users
  2. Cluster administrator is able to "hide" the developer perspective for all users
  3. Be sure that User Preferences for individual users behave appropriately. If only one perspective is available, the perspective switcher is not needed.

Dependencies (External/Internal):

Design Artifacts:

Exploration:

Note:

Description

As an admin, I want to hide user perspective(s) based on the customization.

Acceptance Criteria

  1. Hide perspective(s) based on the customization
    1. When the admin perspective is disabled -> we hide the admin perspective for all unprivileged users
    2. When the dev perspective is disabled -> we hide the dev perspective for all users
  2. When all the perspectives are hidden from a user or for all users, show the Admin perspective by default

Additional Details:

Description

As an admin, I want to hide the admin perspective for non-privileged users or hide the developer perspective for all users

Based on the https://issues.redhat.com/browse/ODC-6730 enhancement proposal, it is required to extend the console configuration CRD to enable the cluster admins to configure this data in the console resource

Acceptance Criteria

  1. Extend the "customization" spec type definition for the CRD in the openshift/api project

Additional Details:

Previous customization work:

  1. https://issues.redhat.com/browse/ODC-5416
  2. https://issues.redhat.com/browse/ODC-5020
  3. https://issues.redhat.com/browse/ODC-5447

Description

As an admin, I want to be able to use a form-driven experience to hide user perspective(s)

Acceptance Criteria

  1. Add checkboxes with the options
    1. Hide "Administrator" perspective for non-privileged users
    2.  Hide "Developer" perspective for all users
  2. The console configuration CR should be updated as per the selected option

Additional Details:

Description

As an admin, I should be able to see a code snippet that shows how to add user perspectives

Based on the https://issues.redhat.com/browse/ODC-6732 enhancement proposal, the cluster admin can add user perspectives

To support the cluster-admin in configuring the perspectives correctly, the developer console should provide a code snippet for customizing the YAML resource (the Console CRD).

Customize Perspective Enhancement PR: https://github.com/openshift/enhancements/pull/1205

Acceptance Criteria

  1. When the admin opens the Console CRD there is a snippet in the sidebar which provides a default YAML that helps the admin add user perspectives
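
For illustration, a minimal sketch of what such a default YAML could look like, assuming the perspectives customization field proposed in the enhancement PR above (the exact field names in the final openshift/api definition may differ):

apiVersion: operator.openshift.io/v1
kind: Console
metadata:
  name: cluster
spec:
  customization:
    perspectives:
    - id: dev
      visibility:
        state: Disabled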

Additional Details:

Previous work:

  1. https://issues.redhat.com/browse/ODC-5080
  2. https://issues.redhat.com/browse/ODC-5449

Problem:

Customers don't want their users to have access to some/all of the items which are available in the Developer Catalog.  The request is to change access for the cluster, not per user or persona.

Goal:

Provide a form-driven experience that allows cluster admins to easily disable the Developer Catalog, or one or more of the sub-catalogs in the Developer Catalog.

Why is it important?

Multiple customer requests.

Acceptance criteria:

  1. As a cluster admin, I can hide/disable access to the developer catalog for all users across all namespaces.
  2. As a cluster admin, I can hide/disable access to a specific sub-catalog in the developer catalog for all users across all namespaces.
    1. Builder Images
    2. Templates
    3. Helm Charts
    4. Devfiles
    5. Operator Backed

Notes

We need to consider how this will work with subcatalogs which are installed by operators: VMs, Event Sources, Event Catalogs, Managed Services, Cloud based services

Dependencies (External/Internal):

Design Artifacts:

Exploration:

Note:

Description

As an admin, I want to hide sub-catalogs in the developer catalog or hide the developer catalog completely based on the customization.

Acceptance Criteria

  1. Hide all links to the sub-catalog(s) from the add page, topology actions, empty states, quick search, and the catalog itself
  2. The sub-catalog should show Not found if the user opens the sub-catalog directly
  3. The feature should not be hidden if a sub-catalog option is disabled

Additional Details:

Description

As a cluster-admin, I should be able to see a code snippet that shows how to enable sub-catalogs or the entire dev catalog.

Based on the https://issues.redhat.com/browse/ODC-6732 enhancement proposal, the cluster admin can add sub-catalog(s)  from the Developer Catalog or the Dev catalog as a whole.

To support the cluster-admin in configuring the sub-catalog list correctly, the developer console should provide a code snippet for customizing the YAML resource (the Console CRD).

Acceptance Criteria

  1. When the admin opens the Console CRD there is a snippet in the sidebar which provides a default YAML that helps the admin add sub-catalogs or the whole Developer Catalog
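
For illustration, a minimal sketch of what such a default YAML could look like, assuming the developerCatalog types customization described in the enhancement (exact field names and sub-catalog identifiers may differ in the final API):

apiVersion: operator.openshift.io/v1
kind: Console
metadata:
  name: cluster
spec:
  customization:
    developerCatalog:
      types:
        state: Disabled
        disabled:
        - Devfile
        - HelmChart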

Additional Details:

Previous work:

  1. https://issues.redhat.com/browse/ODC-5080
  2. https://issues.redhat.com/browse/ODC-5449

Description

As an admin, I want to hide/disable access to specific sub-catalogs in the developer catalog or the complete dev catalog for all users across all namespaces.

Based on the https://issues.redhat.com/browse/ODC-6732 enhancement proposal, it is required to extend the console configuration CRD to enable the cluster admins to configure this data in the console resource

Acceptance Criteria

Extend the "customization" spec type definition for the CRD in the openshift/api project

Additional Details:

Previous customization work:

  1. https://issues.redhat.com/browse/ODC-5416
  2. https://issues.redhat.com/browse/ODC-5020
  3. https://issues.redhat.com/browse/ODC-5447

Epic Goal

  • Facilitate the transition of OLM and its content to PSA enforcing the `restricted` security profile
  • Use the label synch'er to enforce the required security profile
  • Current content should work out-of-the-box as is
  • Upgrades should not be blocked

Why is this important?

  • PSA helps secure the cluster by enforcing certain security restrictions that the pod must meet to be scheduled
  • 4.12 will enforce the `restricted` profile, which will affect the deployment of operators in `openshift-*` namespaces 

Scenarios

  1. Admin installs operator in an `openshift-*` namespace that is not managed by the label syncher -> label should be applied
  2. Admin installs operator in an `openshift-*` namespace that has a label asking the label syncher to not reconcile it -> nothing changes

Acceptance Criteria

  • CI - MUST be running successfully with tests automated
  • Release Technical Enablement - Provide necessary release enablement details and documents.
  • Done only downstream
  • Transition documentation written and reviewed

Dependencies (internal and external)

  1. label syncher (still searching for the link)

Open questions::

  1. Is this only for openshift-* namespaces?

Resources

Stakeholders

  • Daniel S...?

Done Checklist

  • CI - CI is running, tests are automated and merged.
  • Release Enablement <link to Feature Enablement Presentation>
  • DEV - Upstream code and tests merged: <link to meaningful PR or GitHub Issue>
  • DEV - Upstream documentation merged: <link to meaningful PR or GitHub Issue>
  • DEV - Downstream build attached to advisory: <link to errata>
  • QE - Test plans in Polarion: <link or reference to Polarion>
  • QE - Automated tests merged: <link or reference to automated tests>
  • DOC - Downstream documentation merged: <link to meaningful PR>

As an admin, I would like openshift-* namespaces with an operator to be labeled with security.openshift.io/scc.podSecurityLabelSync=true to ensure the continual functioning of operators without manual intervention. The label should only be applied to openshift-* namespaces with an operator (the presence of a ClusterServiceVersion resource) IF the label is not already present. This automation will help smooth functioning of the cluster and avoid frivolous operational events.

Context: As part of the PSA migration period, OpenShift will ship with the "label sync'er" - a controller that will automatically adjust PSA security profiles in response to the workloads present in the namespace. We can assume that not all operators (produced by Red Hat, the community or ISVs) will have successfully migrated their deployments in response to upstream PSA changes. By default, the label sync'er will sync any namespace not prefixed with "openshift-"; for "openshift-" namespaces an explicit label (security.openshift.io/scc.podSecurityLabelSync=true) is required for the sync.

A/C:
 - OLM operator has been modified (downstream only) to label any unlabelled "openshift-" namespace in which a CSV has been created
 - If a labeled namespace containing at least one non-copied csv becomes unlabelled, it should be relabelled 
 - The implementation should be done in a way to eliminate or minimize subsequent downstream sync work (it is ok to make slight architectural changes to the OLM operator in the upstream to enable this)
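
For illustration, the effect of this automation is equivalent to applying the label manually to an operator namespace (the namespace name here is just an example):

oc label namespace openshift-example-operator security.openshift.io/scc.podSecurityLabelSync=true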

This epic tracks network tooling improvements for 4.12

A new framework and process should be developed to make sharing network tools with devs, support, and customers convenient. We are going to add some tools for OVN troubleshooting before OVN-K becomes the default, some tools that came out of customer cases, and more to help analyze and debug collected logs, based on the stable must-gather/sosreport format we now get thanks to the 4.11 Epic.

Our estimation for this Epic is 1 engineer * 2 Sprints

WHY:
This epic is important to help reduce the time it takes our customers and our team to understand an issue within the cluster.
A focus of this epic is to develop tools to quickly allow debugging of a problematic cluster. This is crucial for the engineering team to help us scale. We want to provide a tool to our customers to help lower the cognitive burden of getting to the root cause of an issue.

 

Alert if any of the OVN controllers has been disconnected from the southbound database for a period of time, using the metric ovn_controller_southbound_database_connected.

The metric updates every 2 minutes so please be mindful of this when creating the alert.

If the controller is disconnected for 10 minutes, fire an alert.
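
A minimal sketch of what the rule could look like, assuming a PrometheusRule shipped by CNO; the rule and alert names are illustrative, and max_over_time is used because the metric only updates every 2 minutes:

apiVersion: monitoring.coreos.com/v1
kind: PrometheusRule
metadata:
  name: ovn-controller-alerts          # illustrative name
  namespace: openshift-ovn-kubernetes
spec:
  groups:
  - name: ovn-controller
    rules:
    - alert: OVNControllerSouthboundDatabaseDisconnected   # illustrative name
      # the metric refreshes roughly every 2 minutes, so look over a 5m window
      expr: max_over_time(ovn_controller_southbound_database_connected[5m]) == 0
      for: 10m
      labels:
        severity: warning
      annotations:
        summary: ovn-controller has been disconnected from the southbound database for more than 10 minutes.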

DoD: Merged to CNO and tested by QE


Epic Goal

  • Come up with a consistent way to detect node down on OCP and HyperShift. The current mechanism for OCP (probe port 9) does not work for HyperShift, meaning HyperShift node down detection will take longer (~40 secs). We should aim to have a common mechanism for both. We should also consider alternatives to probing port 9, perhaps BFD or another detection method.
  • Get clarification on node down detection times. Some customers have (apparently) asked for detection on the order of 100ms; the recommendation is to use multiple Egress IPs, so this may not be a hard requirement. Need clarification from PM/Customers.

Why is this important?

Scenarios

  1. ...

Acceptance Criteria

  • CI - MUST be running successfully with tests automated
  • Release Technical Enablement - Provide necessary release enablement details and documents.
  • ...

Dependencies (internal and external)

  1. ...

Previous Work (Optional):

Open questions::

Done Checklist

  • CI - CI is running, tests are automated and merged.
  • Release Enablement <link to Feature Enablement Presentation>
  • DEV - Upstream code and tests merged: <link to meaningful PR or GitHub Issue>
  • DEV - Upstream documentation merged: <link to meaningful PR or GitHub Issue>
  • DEV - Downstream build attached to advisory: <link to errata>
  • QE - Test plans in Polarion: <link or reference to Polarion>
  • QE - Automated tests merged: <link or reference to automated tests>
  • DOC - Downstream documentation merged: <link to meaningful PR>

Add a SOCKS proxy to cluster-network-operator so EgressIP can use gRPC to reach worker nodes.

With the introduction of gRPC as the means for determining the state of a given egress node, HyperShift should be able to leverage the SOCKS proxy and know the state of each egress node.
 
References relevant to this work:
1281-network-proxy
https://coreos.slack.com/archives/C01C8502FMM/p1658427627751939
https://github.com/openshift/hypershift/pull/1131/commits/28546dc587dc028dc8bded715847346ff99d65ea

This Epic is here to track the rebase we need to do when kube 1.25 is GA https://www.kubernetes.dev/resources/release/

Keeping this in mind can help us plan our time better. ATTOW GA is planned for August 23

https://docs.google.com/document/d/1h1XsEt1Iug-W9JRheQas7YRsUJ_NQ8ghEMVmOZ4X-0s/edit --> this is the link for rebase help

Incomplete Epics

This section includes Jira cards that are linked to an Epic, but the Epic itself is not linked to any Feature. These epics were not completed when this image was assembled

Changes made in METAL-1 open up opportunities to improve our handling of images by cleaning up redundant code that generates extra work for the user and extra load for the cluster.

We only need to run the image cache DaemonSet if there is a QCOW URL to be mirrored (effectively this means a cluster installed with 4.9 or earlier). We can stop deploying it for new clusters installed with 4.10 or later.

Currently, the image-customization-controller relies on the image cache running on every master to provide the shared hostpath volume containing the ISO and initramfs. The first step is to replace this with a regular volume and an init container in the i-c-c pod that extracts the images from machine-os-images. We can use the copy-metal -image-build flag (instead of -all used in the shared volume) to provide only the required images.

Once i-c-c has its own volume, we can switch the image extraction in the metal3 Pod's init container to use the -pxe flag instead of -all.

The machine-os-images init container for the image cache (not the metal3 Pod) can be removed. The whole image cache deployment is now optional and need only be started if provisioningOSDownloadURL is set (and in fact should be deleted if it is not).

Epic Goal

  • To improve the reliability of disk cleaning before installation and to provide the user with sufficient warning regarding the consequences of the cleaning

Why is this important?

  • Insufficient cleaning can lead to installation failure
  • Insufficient warning can lead to complaints of unexpected data loss

Scenarios

  1.  

Acceptance Criteria

  • CI - MUST be running successfully with tests automated
  • Release Technical Enablement - Provide necessary release enablement details and documents.
  • ...

Dependencies (internal and external)

  1. ...

Previous Work (Optional):

Open questions::

Done Checklist

  • CI - CI is running, tests are automated and merged.
  • Release Enablement <link to Feature Enablement Presentation>
  • DEV - Upstream code and tests merged: <link to meaningful PR or GitHub Issue>
  • DEV - Upstream documentation merged: <link to meaningful PR or GitHub Issue>
  • DEV - Downstream build attached to advisory: <link to errata>
  • QE - Test plans in Polarion: <link or reference to Polarion>
  • QE - Automated tests merged: <link or reference to automated tests>
  • DOC - Downstream documentation merged: <link to meaningful PR>

Description of the problem:
When running assisted-installer on a machine where there is more than one volume group per physical volume, only the first volume group will be cleaned up. This leads to problems later and will lead to errors such as

Failed - failed executing nsenter [--target 1 --cgroup --mount --ipc --pid -- pvremove /dev/sda -y -ff], Error exit status 5, LastOutput "Can't open /dev/sda exclusively. Mounted filesystem? 

How reproducible:

Set up a VM with more than one volume group per physical volume. As an example, look at the following sample from a customer cluster.

List block devices
/usr/bin/lsblk -o NAME,MAJ:MIN,SIZE,TYPE,FSTYPE,KNAME,MODEL,UUID,WWN,HCTL,VENDOR,STATE,TRAN,PKNAME
NAME              MAJ:MIN   SIZE TYPE FSTYPE      KNAME MODEL            UUID                                   WWN                HCTL       VENDOR   STATE   TRAN PKNAME
loop0               7:0   125.9G loop xfs         loop0                  c080b47b-2291-495c-8cc0-2009ebc39839                                                       
loop1               7:1   885.5M loop squashfs    loop1                                                                                                             
sda                 8:0   894.3G disk             sda   INTEL SSDSC2KG96                                        0x55cd2e415235b2db 1:0:0:0    ATA      running sas  
|-sda1              8:1     250M part             sda1                                                          0x55cd2e415235b2db                                  sda
|-sda2              8:2     750M part ext2        sda2                   3aa73c72-e342-4a07-908c-a8a49767469d   0x55cd2e415235b2db                                  sda
|-sda3              8:3      49G part xfs         sda3                   ffc3ccfe-f150-4361-8ae5-f87b17c13ac2   0x55cd2e415235b2db                                  sda
|-sda4              8:4   394.2G part LVM2_member sda4                   Ua3HOc-Olm4-1rma-q0Ug-PtzI-ZOWg-RJ63uY 0x55cd2e415235b2db                                  sda
`-sda5              8:5     450G part LVM2_member sda5                   W8JqrD-ZvaC-uNK9-Y03D-uarc-Tl4O-wkDdhS 0x55cd2e415235b2db                                  sda
  `-nova-instance 253:0     3.1T lvm  ext4        dm-0                   d15e2de6-2b97-4241-9451-639f7b14594e                                          running      sda5
sdb                 8:16  894.3G disk             sdb   INTEL SSDSC2KG96                                        0x55cd2e415235b31b 1:0:1:0    ATA      running sas  
`-sdb1              8:17  894.3G part LVM2_member sdb1                   6ETObl-EzTd-jLGw-zVNc-lJ5O-QxgH-5wLAqD 0x55cd2e415235b31b                                  sdb
  `-nova-instance 253:0     3.1T lvm  ext4        dm-0                   d15e2de6-2b97-4241-9451-639f7b14594e                                          running      sdb1
sdc                 8:32  894.3G disk             sdc   INTEL SSDSC2KG96                                        0x55cd2e415235b652 1:0:2:0    ATA      running sas  
`-sdc1              8:33  894.3G part LVM2_member sdc1                   pBuktx-XlCg-6Mxs-lddC-qogB-ahXa-Nd9y2p 0x55cd2e415235b652                                  sdc
  `-nova-instance 253:0     3.1T lvm  ext4        dm-0                   d15e2de6-2b97-4241-9451-639f7b14594e                                          running      sdc1
sdd                 8:48  894.3G disk             sdd   INTEL SSDSC2KG96                                        0x55cd2e41521679b7 1:0:3:0    ATA      running sas  
`-sdd1              8:49  894.3G part LVM2_member sdd1                   exVSwU-Pe07-XJ6r-Sfxe-CQcK-tu28-Hxdnqo 0x55cd2e41521679b7                                  sdd
  `-nova-instance 253:0     3.1T lvm  ext4        dm-0                   d15e2de6-2b97-4241-9451-639f7b14594e                                          running      sdd1
sr0                11:0     989M rom  iso9660     sr0   Virtual CDROM0   2022-06-17-18-18-33-00                                    0:0:0:0    AMI      running usb  

Now run the assisted installer and try to install an SNO node on this machine; you will find that the installation fails with a message indicating that it could not exclusively access /dev/sda

Actual results:

 The installation will fail with a message that indicates that it could not exclusively access /dev/sda

Expected results:

The installation should proceed and the cluster should start to install.
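
For context, a minimal shell sketch of the cleanup order the fix implies: remove every volume group that has a physical volume on the installation disk before wiping the physical volumes (this is not the actual agent code, and /dev/sda is just an example device):

disk=/dev/sda   # example installation disk
# remove every volume group backed by this disk, not just the first one
pvs --noheadings -o pv_name,vg_name |
  awk -v d="$disk" 'index($1, d) == 1 && $2 != "" {print $2}' | sort -u |
  while read -r vg; do vgremove -y "$vg"; done
# then wipe the physical volume signatures on the disk
pvs --noheadings -o pv_name |
  awk -v d="$disk" 'index($1, d) == 1 {print $1}' |
  while read -r pv; do pvremove -y -ff "$pv"; done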

Suspected Cases
https://issues.redhat.com/browse/AITRIAGE-3809
https://issues.redhat.com/browse/AITRIAGE-3802
https://issues.redhat.com/browse/AITRIAGE-3810

Description of the problem:

Cluster installation fails if the installation disk has LVM on RAID:

Host: test-infra-cluster-3cc862c9-master-0, reached installation stage Failed: failed executing nsenter [--target 1 --cgroup --mount --ipc --pid -- mdadm --stop /dev/md0], Error exit status 1, LastOutput "mdadm: Cannot get exclusive access to /dev/md0:Perhaps a running process, mounted filesystem or active volume group?" 

How reproducible:

100%

Steps to reproduce:

1. Install a cluster while master nodes has disk with LVM on RAID (reproduces using test: https://gitlab.cee.redhat.com/ocp-edge-qe/kni-assisted-installer-auto/-/blob/master/api_tests/test_disk_cleanup.py#L97)

Actual results:

Installation failed

Expected results:

Installation success

Epic Goal

  • Increase the success rate of our CI jobs
  • Improve debuggability / visibility of tests 

Why is this important?

  • Failed presubmit jobs (required or optional) can prevent an already tested+approved PR from getting in
  • Failed periodic jobs interfere with our visibility into the stability of features

Description of problem:

check_pkt_length cannot be offloaded without
1) sFlow offload patches in Openvswitch
2) Hardware driver support.

Since 1) will not be done anytime soon, we need a workaround for the check_pkt_length issue.

Version-Release number of selected component (if applicable):

4.11/4.12

How reproducible:

Always

Steps to Reproduce:

1. Any flow that has check_pkt_len()
  5-b: Pod -> NodePort Service traffic (Pod Backend - Different Node)
  6-b: Pod -> NodePort Service traffic (Host Backend - Different Node)
  4-b: Pod -> Cluster IP Service traffic (Host Backend - Different Node)
  10-b: Host Pod -> Cluster IP Service traffic (Host Backend - Different Node)
  11-b: Host Pod -> NodePort Service traffic (Pod Backend - Different Node)
  12-b: Host Pod -> NodePort Service traffic (Host Backend - Different Node)   

Actual results:

Poor performance due to upcalls when check_pkt_len() is not supported.

Expected results:

Good performance.

Additional info:

https://docs.google.com/spreadsheets/d/1LHY-Af-2kQHVwtW4aVdHnmwZLTiatiyf-ySffC8O5NM/edit#gid=670206692


Epic Goal

  • Run OpenShift builds that do not execute as the "root" user on the host node.

Why is this important?

  • OpenShift builds require an elevated set of capabilities to build a container image
  • Builds currently run as root to maintain adequate performance
  • Container workloads should run as non-root from the host's perspective. Containers running as root are a known security risk.
  • Builds currently run as root and require a privileged container. See BUILD-225 for removing the privileged container requirement.

Scenarios

  1. Run BuildConfigs in a multi-tenant environment
  2. Run BuildConfigs in a heightened security environment/deployment

Acceptance Criteria

  • Developers can opt into running builds in a cri-o user namespace by providing an environment variable with a specific value.
  • When the correct environment variable is provided, builds run in a cri-o user namespace, and the build pod does not require the "privileged: true" security context.
  • User namespace builds can pass basic test scenarios for the Docker and Source strategy build.
  • Steps to run unprivileged builds are documented.

Dependencies (internal and external)

  1. Buildah supports running inside a non-privileged container
  2. CRI-O allows workloads to opt into running containers in user namespaces.
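
For illustration, CRI-O exposes the user-namespace opt-in through a pod annotation; a build pod using it might look roughly like the sketch below, assuming the cluster's runtime configuration allows the annotation (pod name, image, and annotation value are illustrative):

apiVersion: v1
kind: Pod
metadata:
  name: example-userns-build                  # illustrative
  annotations:
    io.kubernetes.cri-o.userns-mode: "auto"   # ask CRI-O to run the pod in a user namespace
spec:
  containers:
  - name: build
    image: example.com/builder:latest         # illustrative image
    securityContext:
      privileged: false                       # no privileged container required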

Previous Work (Optional):

  1. BUILD-225 - remove privileged requirement for builds.

Open questions::

Done Checklist

  • CI - CI is running, tests are automated and merged.
  • Release Enablement <link to Feature Enablement Presentation>
  • DEV - Upstream code and tests merged: <link to meaningful PR or GitHub Issue>
  • DEV - Upstream documentation merged: <link to meaningful PR or GitHub Issue>
  • DEV - Downstream build attached to advisory: <link to errata>
  • QE - Test plans in Polarion: <link or reference to Polarion>
  • QE - Automated tests merged: <link or reference to automated tests>
  • DOC - Downstream documentation merged: <link to meaningful PR>

User Story

As a developer building container images on OpenShift
I want to specify that my build should run without elevated privileges
So that builds do not run as root from the host's perspective with elevated privileges

Acceptance Criteria

  • Developers can provide an environment variable to indicate the build should not use privileged containers
  • When the correct env var + value is specified, builds run in a user namespace (non-root on the host)

QE Impact

No QE required for Dev Preview. OpenShift regression testing will verify that existing behavior is not impacted.

Docs Impact

We will need to document how to enable this feature, with sufficient warnings regarding Dev Preview.

PX Impact

This likely warrants an OpenShift blog post, potentially?

Notes


Epic Goal

  • ...

Why is this important?

Scenarios

  1. ...

Acceptance Criteria

  • CI - MUST be running successfully with tests automated
  • Release Technical Enablement - Provide necessary release enablement details and documents.
  • ...

Dependencies (internal and external)

  1. ...

Previous Work (Optional):

Open questions::

Done Checklist

  • CI - CI is running, tests are automated and merged.
  • Release Enablement <link to Feature Enablement Presentation>
  • DEV - Upstream code and tests merged: <link to meaningful PR or GitHub Issue>
  • DEV - Upstream documentation merged: <link to meaningful PR or GitHub Issue>
  • DEV - Downstream build attached to advisory: <link to errata>
  • QE - Test plans in Polarion: <link or reference to Polarion>
  • QE - Automated tests merged: <link or reference to automated tests>
  • DOC - Downstream documentation merged: <link to meaningful PR>

We have been running into a number of problems with configure-ovs and nodeip-configuration selecting different interfaces in OVNK deployments. This causes connectivity issues, so we need some way to ensure that everything uses the same interface/IP.

Currently configure-ovs runs before nodeip-configuration, but since nodeip-configuration is the source of truth for IP selection regardless of CNI plugin, I think we need to look at swapping that order. That way configure-ovs could look at what nodeip-configuration chose and not have to implement its own interface selection logic.

I'm targeting this at 4.12 because even though there's probably still time to get it in for 4.11, changing the order of boot services is always a little risky and I'd prefer to do it earlier in the cycle so we have time to tease out any issues that arise. We may need to consider backporting the change though since this has been an issue at least back to 4.10.

Goal
Provide an indication that advanced features are used

Problem

Today, customers and RH don't have the information on the actual usage of advanced features.

Why is this important?

  1. Better focus upsell efforts
  2. Compliance information for customers that are not aware they are not using the right subscription

 

Prioritized Scenarios

In Scope
1. Add a boolean variable in our telemetry to mark if the customer is using advanced features (PV encryption, encryption with KMS, external mode). 

Not in Scope

Integrate with subscription watch - will be done by the subscription watch team with our help.

Customers

All

Customer Facing Story
As a compliance manager, I should be able to easily see if all my clusters are using the right amount of subscriptions

What does success look like?

A clear indication in subscription watch for ODF usage (either essential or advanced). 

1. Proposed title of this feature request

  • Request to add a bool variable into telemetry which indicates the usage of any of the advanced feature, like PV encryption or KMS encryption or external mode etc.

2. What is the nature and description of the request?

  • Today, customers and RH don't have the information on the actual usage of advanced features. This feature will help RH to have a better indication on the statistics of customers using the advanced features and focus better on upsell efforts.

3. Why does the customer need this? (List the business requirements here)

  • As a compliance manager, I should be able to easily see if all my clusters are using the right amount of subscriptions.

4. List any affected packages or components.

  • Telemetry

_____________________

Link to main epic: https://issues.redhat.com/browse/RHSTOR-3173

 

Other Complete

This section includes Jira cards that are not linked to either an Epic or a Feature. These tickets were completed when this image was assembled

This is a clone of issue OCPBUGS-3114. The following is the description of the original issue:

Description of problem:

When running a Hosted Cluster on Hypershift the cluster-networking-operator never progressed to Available despite all the components being up and running

Version-Release number of selected component (if applicable):

quay.io/openshift-release-dev/ocp-release:4.11.11-x86_64 for the hosted clusters
hypershift operator is quay.io/hypershift/hypershift-operator:4.11
4.11.9 management cluster

How reproducible:

Happened once

Steps to Reproduce:

1.
2.
3.

Actual results:

oc get co network reports False availability

Expected results:

oc get co network reports True availability

Additional info:

 

This is a clone of issue OCPBUGS-2873. The following is the description of the original issue:

Description of problem:

Prometheus fails to scrape metrics from the storage operator after some time.

Version-Release number of selected component (if applicable):

4.11

How reproducible:

Always

Steps to Reproduce:

1. Install storage operator.
2. Wait for 24h (time for the certificate to be recycled).
3.

Actual results:

Targets being down because Prometheus didn't reload the CA certificate.

Expected results:

Prometheus reloads its client TLS certificate and scrapes the target successfully.

Additional info:


This is a clone of issue OCPBUGS-4656. The following is the description of the original issue:

Description of problem:

`/etc/hostname` may exist, but be empty. `vsphere-hostname` service should check that the file is not empty instead of just that it exists.

OKD's machine-os-content starting from F37 has an empty /etc/hostname file, which breaks joining workers in vsphere IPI
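
A minimal sketch of the kind of check intended, assuming a shell test in the service script ("-s" is true only for a file that exists and is not empty):

if [ -s /etc/hostname ]; then
  echo "static hostname already set in /etc/hostname; nothing to do"
else
  echo "/etc/hostname missing or empty; resolving hostname via vmtoolsd"
fi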

Version-Release number of selected component (if applicable):


How reproducible:

Always

Steps to Reproduce:

1. Install OKD w/ workers on vsphere
2.
3.

Actual results:


Workers get hostname resolved using NM

Expected results:


Workers get hostname resolved using vmtoolsd

Additional info:


This is a clone of issue OCPBUGS-3405. The following is the description of the original issue:

In case it should be used for publishing artifacts in CI jobs.

Look into whether the following things are leaked:

  • pull secret
  • ssh key
  • potentially values in journal logs

Description of problem:

TestEditUnmanagedPodDisruptionBudget flakes in the console-operator e2e

Version-Release number of selected component (if applicable):

4.12

How reproducible:

Flake

Steps to Reproduce:
1. Check https://prow.ci.openshift.org/view/gs/origin-ci-test/pr-logs/pull/openshift_console-operator/665/pull-ci-openshift-console-operator-master-e2e-aws-operator/1562005782164148224
2.
3.

Actual results:

Expected results:

Additional info:

There is a chance that the PDB instance is not present, since prior to the Unmanaged* TCs the RemoveTest runs and removes all the console resources (Pods, Services, PDBs, ...).

 

Description of problem:

console.openshift.io/use-i18n false in the v1alpha API is converted to "" in the v1 API, which is not a valid value for the enum type declared in the code. 

Version-Release number of selected component (if applicable):

 4.12.0-0.nightly-2022-09-25-071630

How reproducible:

Always

Steps to Reproduce:

1. Load a dynamic plugin with v1alpha API console.openshift.io/use-i18n set to 'false'
2. In the v1 API the {"spec":{"i18n":{"loadType":""}}} loadType is set to empty string, which is not a valid value defined here: https://github.com/jhadvig/api/blob/22d69793277ffeb618d642724515f249262959a5/console/v1/types_console_plugin.go#L46
https://github.com/openshift/api/pull/1186/files# 

Actual results:

{"spec":{"i18n":{"loadType":""}}}

Expected results:

{"spec":{"i18n":{"loadType":"Lazy"}}}

Additional info:

 

Description of problem:

i18n translation missing in "Remove component node from application" modal

Version-Release number of selected component (if applicable):

 

How reproducible:

 

Steps to Reproduce:

1. Navigate to dev console and create a workload under an Application group
2. On the Toplogy remove the workload from the Application group
3. See the i18n error in the console

Actual results:

Missing i18n key "Remove component node from application" in namespace "topology" and language "en." in console

Expected results:

No i18n error should be shown in the console.

Additional info:

 

This is a clone of issue OCPBUGS-4913. The following is the description of the original issue:

Description of problem:

Currently the Terraform code waits for 45 seconds, but anecdotal data suggest we should actually wait for 3 minutes in order to avoid "failures" due to occasional slow boots of a new VM in PowerVS.

Version-Release number of selected component (if applicable):

 

How reproducible:

often enough

Steps to Reproduce:

1. run IPI installer against PowerVS
2. look for "empty tuple" in the error message when it fails to reach `bootstrap-complete`
3.

Actual results:

 

Expected results:

VMs to always have IP address assigned by DHCP after a certain wait

Additional info:

The change has already been merged into master/4.13, but 4.12 also needs this for planned PowerVS IPI GA on the z-stream.

This is a clone of issue OCPBUGS-3018. The following is the description of the original issue:

Description of problem:

When running an overnight run in dev-scripts (COMPACT_IPV4) with repeated installs I saw this panic in WaitForBootstrapComplete occur once.

level=debug msg=Agent Rest API Initialized
E1101 05:19:09.733309 1802865 runtime.go:79] Observed a panic: "invalid memory address or nil pointer dereference" (runtime error: invalid memory address or nil pointer dereference)
goroutine 1 [running]:
k8s.io/apimachinery/pkg/util/runtime.logPanic({0x4086520?, 0x1d875810})
    /home/stack/go/src/github.com/openshift/installer/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:75 +0x99
k8s.io/apimachinery/pkg/util/runtime.HandleCrash({0x0, 0x0, 0xc00056fb00?})
    /home/stack/go/src/github.com/openshift/installer/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:49 +0x75
panic({0x4086520, 0x1d875810})
    /usr/local/go/src/runtime/panic.go:838 +0x207
github.com/openshift/installer/pkg/agent.(*NodeZeroRestClient).getClusterID(0xc0001341e0)
    /home/stack/go/src/github.com/openshift/installer/pkg/agent/rest.go:121 +0x53
github.com/openshift/installer/pkg/agent.(*Cluster).IsBootstrapComplete(0xc000134190)
    /home/stack/go/src/github.com/openshift/installer/pkg/agent/cluster.go:183 +0x4fc
github.com/openshift/installer/pkg/agent.WaitForBootstrapComplete.func1()
    /home/stack/go/src/github.com/openshift/installer/pkg/agent/waitfor.go:31 +0x77
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x1d8fa901?)
    /home/stack/go/src/github.com/openshift/installer/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:157 +0x3e
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc0001958c0?, {0x1a53c7a0, 0xc0011d4a50}, 0x1, 0xc0001958c0)
    /home/stack/go/src/github.com/openshift/installer/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:158 +0xb6
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc0009ab860?, 0x77359400, 0x0, 0xa?, 0x8?)
    /home/stack/go/src/github.com/openshift/installer/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:135 +0x89
k8s.io/apimachinery/pkg/util/wait.Until(...)
    /home/stack/go/src/github.com/openshift/installer/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:92
github.com/openshift/installer/pkg/agent.WaitForBootstrapComplete({0x7ffd7fccb4e3?, 0x40d7e7?})
    /home/stack/go/src/github.com/openshift/installer/pkg/agent/waitfor.go:30 +0x1bc
github.com/openshift/installer/pkg/agent.WaitForInstallComplete({0x7ffd7fccb4e3?, 0x5?})
    /home/stack/go/src/github.com/openshift/installer/pkg/agent/waitfor.go:73 +0x56
github.com/openshift/installer/cmd/openshift-install/agent.newWaitForInstallCompleteCmd.func1(0xc0003b6c80?, {0xc0004d67c0?, 0x2?, 0x2?})
    /home/stack/go/src/github.com/openshift/installer/cmd/openshift-install/agent/waitfor.go:73 +0x126
github.com/spf13/cobra.(*Command).execute(0xc0003b6c80, {0xc0004d6780, 0x2, 0x2})
    /home/stack/go/src/github.com/openshift/installer/vendor/github.com/spf13/cobra/command.go:876 +0x67b
github.com/spf13/cobra.(*Command).ExecuteC(0xc0013b0a00)
    /home/stack/go/src/github.com/openshift/installer/vendor/github.com/spf13/cobra/command.go:990 +0x3b4
github.com/spf13/cobra.(*Command).Execute(...)
    /home/stack/go/src/github.com/openshift/installer/vendor/github.com/spf13/cobra/command.go:918
main.installerMain()
    /home/stack/go/src/github.com/openshift/installer/cmd/openshift-install/main.go:61 +0x2b0
main.main()
    /home/stack/go/src/github.com/openshift/installer/cmd/openshift-install/main.go:38 +0xff
panic: runtime error: invalid memory address or nil pointer dereference [recovered]
    panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1 addr=0x8 pc=0x33d3cd3]

Version-Release number of selected component (if applicable):

4.12.0-0.nightly-2022-10-25-210451

How reproducible:

Occurred on the 12th run; all previous installs were successful

Steps to Reproduce:

1. Set up dev-scripts for AGENT_E2E_TEST_SCENARIO=COMPACT_IPV4, no mirroring
2. Run 'make clean; make agent' in a loop
3. After repeated installs got the failure

Actual results:

Panic in WaitForBootstrapComplete

Expected results:

No failure

Additional info:

It looks like clusterResult is used here even on failure, which causes the dereference - https://github.com/openshift/installer/blob/master/pkg/agent/rest.go#L121

 

This is a clone of issue OCPBUGS-4190. The following is the description of the original issue:

Description of problem:

Two tests are perma failing in metal-ipi upgrade tests
[sig-imageregistry] Image registry remains available using new connections (39m27s)
[sig-imageregistry] Image registry remains available using reused connections (39m27s)

Version-Release number of selected component (if applicable):

4.12 / 4.13

How reproducible:

all ci runs

Steps to Reproduce:

1.
2.
3.

Actual results:

Nov 24 02:58:26.998: INFO: "[sig-imageregistry] Image registry remains available using reused connections": panic: runtime error: invalid memory address or nil pointer dereference

Expected results:

pass

Additional info:

 

In the Known Issues section of the OpenStack-specific Installer docs, there is a point about control plane anti-affinity.

The known issue has several problems:

  • it is in the UPI section, when it is not a UPI-specific issue
  • it mentions Control plane scale-out, when OCP only supports exactly 3 masters
  • it is now possible to set anti-affinity from the install-config.yaml, and that should be the recommended solution when VM distribution across hosts is required.
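
For illustration, a minimal install-config.yaml sketch of the anti-affinity setting referred to in the last point, assuming the OpenStack serverGroupPolicy field:

controlPlane:
  name: master
  replicas: 3
  platform:
    openstack:
      serverGroupPolicy: soft-anti-affinity   # or anti-affinity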

Description of problem:

co/storage is not available because the CSI driver does not have the proxy setting on IBM Cloud

Version-Release number of selected component (if applicable):

4.12.0-0.ci-2022-10-13-233744

How reproducible:

Always

Steps to Reproduce:

1. Install an OCP cluster on an IBM Cloud disconnected env with an HTTP proxy
Template: private-templates/functionality-testing/aos-4_12/ipi-on-ibmcloud/versioned-installer-customer_vpc-http_proxy
2.Check co/storage
oc get co/storage
NAME      VERSION                         AVAILABLE   PROGRESSING   DEGRADED   SINCE   MESSAGE
storage   4.12.0-0.ci-2022-10-13-233744   False       True          False      6h55m   IBMVPCBlockCSIDriverOperatorCRAvailable: IBMBlockDriverControllerServiceControllerAvailable: Waiting for Deployment...
3.oc get pods
NAME                                                 READY   STATUS                  RESTARTS         AGE
ibm-vpc-block-csi-controller-6c4bfc9fc-6dmz7         4/5     CrashLoopBackOff        83 (113s ago)    6h55m
ibm-vpc-block-csi-driver-operator-7bd6fb5cdc-rktk2   1/1     Running                 1 (6h44m ago)    6h55m
ibm-vpc-block-csi-node-8s6dj                         0/3     Init:0/1                77 (5m34s ago)   6h52m
ibm-vpc-block-csi-node-9msld                         0/3     Init:Error              76 (5m49s ago)   6h47m
ibm-vpc-block-csi-node-fgs76                         0/3     Init:CrashLoopBackOff   76 (5m ago)      6h52m
ibm-vpc-block-csi-node-jd9fl                         0/3     Init:CrashLoopBackOff   75 (4m16s ago)   6h47m
ibm-vpc-block-csi-node-qkjxs                         0/3     Init:CrashLoopBackOff   77 (2m53s ago)   6h52m
ibm-vpc-block-csi-node-xbzm8                         0/3     Init:0/1                76 (5m13s ago)   6h47m
4.oc -n openshift-cluster-csi-drivers logs -c vpc-node-label-updater ibm-vpc-block-csi-node-xbzm8
{"level":"info","timestamp":"2022-10-14T09:18:32.436Z","caller":"nodeupdater/utils.go:57","msg":"Fetching secret configuration.","watcher-name":"vpc-node-label-updater"}
{"level":"info","timestamp":"2022-10-14T09:18:32.436Z","caller":"nodeupdater/utils.go:158","msg":"parsing conf file","watcher-name":"vpc-node-label-updater","confpath":"/etc/storage_ibmc/slclient.toml"}
{"level":"error","timestamp":"2022-10-14T09:19:02.437Z","caller":"nodeupdater/utils.go:96","msg":"Failed to Get IAM access token","watcher-name":"vpc-node-label-updater","error":"Post \"https://iam.cloud.ibm.com/oidc/token\": dial tcp 23.203.93.6:443: i/o timeout"}
{"level":"fatal","timestamp":"2022-10-14T09:19:02.437Z","caller":"cmd/main.go:140","msg":"Failed to read secret configuration from storage secret present in the cluster ","watcher-name":"vpc-node-label-updater","error":"Post \"https://iam.cloud.ibm.com/oidc/token\": dial tcp 23.203.93.6:443: i/o timeout"}

5.oc -n openshift-cluster-csi-drivers describe pod ibm-vpc-block-csi-node-xbzm8
Environment:
   ADDRESS:          /csi/csi.sock
   DRIVER_REGISTRATION_SOCK: /var/lib/kubelet/plugins/vpc.block.csi.ibm.io/csi.sock
   KUBE_NODE_NAME:       (v1:spec.nodeName)
Actual results:

Expected results:

 

Additional info:

 

Description of problem
`oc-mirror` will hit an error when using a docker destination without a namespace for an OCI format mirror

How reproducible:
always

Steps to Reproduce:
1. Copy the operator image with OCI format to localhost:
cat copy.yaml
apiVersion: mirror.openshift.io/v1alpha2
kind: ImageSetConfiguration
mirror:
  operators:
  - catalog: registry.redhat.io/redhat/redhat-operator-index:v4.11
    packages:
    - name: multicluster-engine
      minVersion: '2.1.1'
      maxVersion: '2.1.2'

`oc-mirror --config copy.yaml oci:///home/ocmirrortest/noo --use-oci-feature --oci-feature-action=copy --continue-on-error`

2. Mirror the operator image with OCI format to a registry without a namespace:
cat mirror.yaml
apiVersion: mirror.openshift.io/v1alpha2
kind: ImageSetConfiguration
mirror:
  operators:
  - catalog: oci:///home/ocmirrortest/noo/redhat-operator-index
    packages:
    - name: multicluster-engine
      minVersion: '2.1.1'
      maxVersion: '2.1.2'

`oc-mirror --config mirror.yaml --use-oci-feature --oci-feature-action=mirror --dest-skip-tls docker://localhost:5000`

Actual results:
2. Hit error:
`oc-mirror --config mirror.yaml --use-oci-feature --oci-feature-action=mirror --dest-skip-tls docker://localhost:5000`
……
info: Mirroring completed in 30ms (0B/s)
error: mirroring images "localhost:5000//multicluster-engine/mce-operator-bundle@sha256:e7519948bbcd521390d871ccd1489a49aa01d4de4c93c0b6972dfc61c92e0ca2" is not a valid image reference: invalid reference format

Expected results:
2. No error

Additional info:
`oc-mirror --config mirror.yaml --use-oci-feature --oci-feature-action=mirror --dest-skip-tls docker://localhost:5000/ocmir` works well.

Description of problem:

For OVNK to become CNCF compliant, we need to support the session affinity timeout feature and enable the e2e's on the OpenShift side. This bug tracks the efforts to get this into 4.12 OCP.

Version-Release number of selected component (if applicable):

 

How reproducible:

 

Steps to Reproduce:

1.
2.
3.

Actual results:

 

Expected results:

 

Additional info:

 

Description of problem:

The service project and the host project both have a private DNS zone named "ipi-xpn-private-zone". Although platform.gcp.privateDNSZone.project is set to the host project, the installer checks the zone in the service project and complains that the DNS name does not match. 

Version-Release number of selected component (if applicable):

$ openshift-install version
openshift-install 4.12.0-0.nightly-2022-10-25-210451
built from commit 14d496fdaec571fa97604a487f5df6a0433c0c68
release image registry.ci.openshift.org/ocp/release@sha256:d6cc07402fee12197ca1a8592b5b781f9f9a84b55883f126d60a3896a36a9b74
release architecture amd64

How reproducible:

Always, if both the service project and the host project have a private DNS zone with the same name.

Steps to Reproduce:

1. try IPI installation to a shared VPC, using "privateDNSZone" of the host project

Actual results:

$ openshift-install create cluster --dir test7
INFO Credentials loaded from file "/home/fedora/.gcp/osServiceAccount.json" 
ERROR failed to fetch Metadata: failed to load asset "Install Config": failed to create install config: platform.gcp.privateManagedZone: Invalid value: "ipi-xpn-private-zone": dns zone jiwei-1026a.qe1.gcp.devcluster.openshift.com. did not match expected jiwei-1027a.qe-shared-vpc.qe.gcp.devcluster.openshift.com 
$ 

Expected results:

The installer should check the private zone in the specified project (i.e. the host project).

Additional info:

$ yq-3.3.0 r test7/install-config.yaml platform
gcp:
  projectID: openshift-qe
  region: us-central1
  computeSubnet: installer-shared-vpc-subnet-2
  controlPlaneSubnet: installer-shared-vpc-subnet-1
  createFirewallRules: Disabled
  publicDNSZone:
    id: qe-shared-vpc
    project: openshift-qe-shared-vpc
  privateDNSZone:
    id: ipi-xpn-private-zone
    project: openshift-qe-shared-vpc
  network: installer-shared-vpc
  networkProjectID: openshift-qe-shared-vpc
$ yq-3.3.0 r test7/install-config.yaml baseDomain
qe-shared-vpc.qe.gcp.devcluster.openshift.com
$ yq-3.3.0 r test7/install-config.yaml metadata
creationTimestamp: null
name: jiwei-1027a
$ 
$ openshift-install create cluster --dir test7
INFO Credentials loaded from file "/home/fedora/.gcp/osServiceAccount.json" 
ERROR failed to fetch Metadata: failed to load asset "Install Config": failed to create install config: platform.gcp.privateManagedZone: Invalid value: "ipi-xpn-private-zone": dns zone jiwei-1026a.qe1.gcp.devcluster.openshift.com. did not match expected jiwei-1027a.qe-shared-vpc.qe.gcp.devcluster.openshift.com 
$ 
$ gcloud --project openshift-qe-shared-vpc dns managed-zones list --filter='name=qe-shared-vpc'
NAME           DNS_NAME                                        DESCRIPTION  VISIBILITY
qe-shared-vpc  qe-shared-vpc.qe.gcp.devcluster.openshift.com.               public
$ gcloud --project openshift-qe-shared-vpc dns managed-zones list --filter='name=ipi-xpn-private-zone'
NAME                  DNS_NAME                                                    DESCRIPTION                         VISIBILITY
ipi-xpn-private-zone  jiwei-1027a.qe-shared-vpc.qe.gcp.devcluster.openshift.com.  Preserved private zone for IPI XPN  private
$ gcloud dns managed-zones list --filter='name=ipi-xpn-private-zone'
NAME                  DNS_NAME                                       DESCRIPTION                         VISIBILITY
ipi-xpn-private-zone  jiwei-1026a.qe1.gcp.devcluster.openshift.com.  Preserved private zone for IPI XPN  private
$ 
$ gcloud --project openshift-qe-shared-vpc dns managed-zones describe qe-shared-vpc
cloudLoggingConfig:
  kind: dns#managedZoneCloudLoggingConfig
creationTime: '2020-04-26T02:50:25.172Z'
description: ''
dnsName: qe-shared-vpc.qe.gcp.devcluster.openshift.com.
id: '7036327024919173373'
kind: dns#managedZone
name: qe-shared-vpc
nameServers:
- ns-cloud-b1.googledomains.com.
- ns-cloud-b2.googledomains.com.
- ns-cloud-b3.googledomains.com.
- ns-cloud-b4.googledomains.com.
visibility: public
$ 
$ gcloud --project openshift-qe-shared-vpc dns managed-zones describe ipi-xpn-private-zone         
cloudLoggingConfig:
  kind: dns#managedZoneCloudLoggingConfig
creationTime: '2022-10-27T08:05:18.332Z'
description: Preserved private zone for IPI XPN
dnsName: jiwei-1027a.qe-shared-vpc.qe.gcp.devcluster.openshift.com.
id: '5506116785330943369'
kind: dns#managedZone
name: ipi-xpn-private-zone
nameServers:
- ns-gcp-private.googledomains.com.
privateVisibilityConfig:
  kind: dns#managedZonePrivateVisibilityConfig
  networks:
  - kind: dns#managedZonePrivateVisibilityConfigNetwork
    networkUrl: https://www.googleapis.com/compute/v1/projects/openshift-qe-shared-vpc/global/networks/installer-shared-vpc
visibility: private
$ 
$ gcloud dns managed-zones describe ipi-xpn-private-zone
cloudLoggingConfig:
  kind: dns#managedZoneCloudLoggingConfig
creationTime: '2022-10-26T06:42:52.268Z'
description: Preserved private zone for IPI XPN
dnsName: jiwei-1026a.qe1.gcp.devcluster.openshift.com.
id: '7663537481778983285'
kind: dns#managedZone
name: ipi-xpn-private-zone
nameServers:
- ns-gcp-private.googledomains.com.
privateVisibilityConfig:
  kind: dns#managedZonePrivateVisibilityConfig
  networks:
  - kind: dns#managedZonePrivateVisibilityConfigNetwork
    networkUrl: https://www.googleapis.com/compute/v1/projects/openshift-qe-shared-vpc/global/networks/installer-shared-vpc
visibility: private
$ 

 

 

Description of problem:

AWS tagging - when applying user defined tags you cannot add more than 10

Version-Release number of selected component (if applicable):

 

How reproducible:

 

Steps to Reproduce:

1. Configure userTags for aws platform with more than 8 tags.
2. Installer fails to add the tags while AWS supports up to 50 tags.

Actual results:

Installer validation fails.

Expected results:

Installer should be able to add more than 8 tags.

Additional info:

 

There is a bug where creating OLM subscription manifests early in the installation process results in those OLM operators not being installed.

This is because the OLM installation Jobs fail when they are tried early in the installation process, and OLM does not retry those jobs sufficiently and eventually gives up on them.

This should be solved starting OCP 4.12, but until then, we should solve this using Assisted.

A way to solve this is to delay the installation of OLM operators to only occur after the cluster is up and healthy. 

This can be done by creating the subscriptions with "installPlanApproval" set to "Manual" instead of "Automatic". Then once the cluster is up and healthy, the assisted-controller should approve the InstallPlans that OLM will create for the operators. This will then trigger the installation which is more likely to succeed since the cluster is up and healthy at this point
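
For illustration, a Subscription created this way might look like the sketch below (operator name and namespace are examples); once the cluster is healthy, the assisted-controller approves the InstallPlan that OLM generates (spec.approved: true), which triggers the installation:

apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: example-operator            # example name
  namespace: openshift-operators
spec:
  channel: stable
  name: example-operator
  source: redhat-operators
  sourceNamespace: openshift-marketplace
  installPlanApproval: Manual       # assisted-controller approves the InstallPlan later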

Description of problem:

When trying to enable hardware-backed management ports (e.g. virtual functions) on BF2 in NIC mode or any other MLX NICs (CX-6, CX-5) by setting the node_mgmt_port_netdev_flags flag to a VF in the CNO, the OVN-K node will crash.

Version-Release number of selected component (if applicable):

4.12.0

How reproducible:

Always

Steps to Reproduce:

Start by enabling OvS HWOL and setting sriovnetworknodepolicy
https://docs.openshift.com/container-platform/4.11/networking/hardware_networks/configuring-hardware-offloading.html
1. Scale down CNO: oc scale --replicas=0 deploy/network-operator -n openshift-network-operator
2. Make changes to OVN-K node: oc edit daemonsets ovnkube-node -n openshift-ovn-kubernetes
    a. Find "node_mgmt_port_netdev_flags=" and replace it with something like this:
          node_mgmt_port_netdev_flags=
          if [[ ${K8S_NODE} != *"master"* ]]; then
                node_mgmt_port_netdev_flags="--ovnkube-node-mgmt-port-netdev=ens1f0v0"
          fi
     b. Additionally you have to add the "node_mgmt_port_netdev_flags"  to the " exec /usr/bin/ovnkube --init-node "${K8S_NODE}"" call in the same script. Since this is missing.
3. Save the edit.
4. Observe OVN-K node on baremetal worker nodes.

Actual results:

I0822 14:21:56.250285  496356 ovs.go:204] Exec(3): stderr: ""
I0822 14:21:56.250290  496356 node.go:310] Detected support for port binding with external IDs
I0822 14:21:56.250516  496356 management-port-dpu.go:181] Setup management port dpu host: ens1f0v0
F0822 14:21:56.250568  496356 ovnkube.go:133] failed to set management port name. file exists

Workaround is to go to the node and run this command: sudo ovs-vsctl del-port br-int ovn-k8s-mp0

Expected results:

There should not be any errors when changing node_mgmt_port_netdev_flags to a valid value.

Additional info:

Reported here: https://github.com/ovn-org/ovn-kubernetes/pull/3160
Discussed briefly here: https://issues.redhat.com/browse/OCPBUGS-4098
Fixed Upstream here: https://github.com/ovn-org/ovn-kubernetes/pull/3251

In order to support 4.12 there needs to be an entry for OS_IMAGES in images.env.template.

 

Note that the actual url isn't important, just that there is an entry for 4.12.

In order to have more info to debug router issues in SNO, we want to see if the router is healthy from the node network point of view, and enable router access logs.

Let's revert when https://bugzilla.redhat.com/show_bug.cgi?id=2097041 is resolved

Description of problem:
Users on a disconnected cluster with a proxy could not import a Devfile (from GitHub).

The API call /api/devfile/ takes 30 seconds until it fails with 504 Gateway timeout.

Version-Release number of selected component (if applicable):
This might happen since 4.8

Tested this yet only on 4.12.0-0.nightly-2022-09-07-112008

How reproducible:
Always

Steps to Reproduce:

  1. Start a disconnected cluster with a proxy
  2. Open the browser network inspector and filter for /api/devfile
  3. Switch to Developer perspective
  4. Navigate to Add > Developer Catalog (All Services) > Devfiles
  5. Select a Devfile like Basic Go (https://github.com/devfile-samples/devfile-sample-go-basic.git)
  6. Press Create

Actual results:

  • Network call fails after 30 seconds
  • Import doesn't work

Expected results:

  • Import should create a Deployment and switch to topology view

Additional info:
The console Pod log contains this error:

E0909 10:28:18.448680 1 devfile-handler.go:74] Failed to parse devfile: failed to populateAndParseDevfile: Get "https://registry.devfile.io/devfiles/go": context deadline exceeded (Client.Timeout exceeded while awaiting headers)

Description of problem:

node_exporter collects network metrics for "virtual" interfaces like br-*. When OVN is used, it also reports metrics for ovs-*, ovn, and genev_sys_* interfaces.

Version-Release number of selected component (if applicable):

4.12 (and before)

How reproducible:

Always

Steps to Reproduce:

1. Launch a 4.12 cluster.
2. Run the following PromQL query: "group by(device) (node_network_info)"
3.

Actual results:

Expected results:

Only real host interfaces should be present.

Additional info:
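
For illustration, node_exporter can be told to skip such interfaces via its collector exclude flags; a minimal sketch, assuming a recent node_exporter release (flag names vary between versions):

node_exporter \
  --collector.netclass.ignored-devices='^(br-.*|ovs-.*|ovn|genev_sys_.*|veth.*)$' \
  --collector.netdev.device-exclude='^(br-.*|ovs-.*|ovn|genev_sys_.*|veth.*)$'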


This is a clone of issue OCPBUGS-4089. The following is the description of the original issue:

The kube-state-metrics pod inside the openshift-monitoring namespace is not running as expected.

On checking the logs I am able to see that there is a memory panic

~~~
2022-11-22T09:57:17.901790234Z I1122 09:57:17.901768 1 main.go:199] Starting kube-state-metrics self metrics server: 127.0.0.1:8082
2022-11-22T09:57:17.901975837Z I1122 09:57:17.901951 1 main.go:66] levelinfomsgTLS is disabled.http2false
2022-11-22T09:57:17.902389844Z I1122 09:57:17.902291 1 main.go:210] Starting metrics server: 127.0.0.1:8081
2022-11-22T09:57:17.903191857Z I1122 09:57:17.903133 1 main.go:66] levelinfomsgTLS is disabled.http2false
2022-11-22T09:57:17.906272505Z I1122 09:57:17.906224 1 builder.go:191] Active resources: certificatesigningrequests,configmaps,cronjobs,daemonsets,deployments,endpoints,horizontalpodautoscalers,ingresses,jobs,leases,limitranges,mutatingwebhookconfigurations,namespaces,networkpolicies,nodes,persistentvolumeclaims,persistentvolumes,poddisruptionbudgets,pods,replicasets,replicationcontrollers,resourcequotas,secrets,services,statefulsets,storageclasses,validatingwebhookconfigurations,volumeattachments
2022-11-22T09:57:17.917758187Z E1122 09:57:17.917560 1 runtime.go:78] Observed a panic: "invalid memory address or nil pointer dereference" (runtime error: invalid memory address or nil pointer dereference)
2022-11-22T09:57:17.917758187Z goroutine 24 [running]:
2022-11-22T09:57:17.917758187Z k8s.io/apimachinery/pkg/util/runtime.logPanic({0x1635600, 0x2696e10})
2022-11-22T09:57:17.917758187Z /go/src/k8s.io/kube-state-metrics/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:74 +0x7d
2022-11-22T09:57:17.917758187Z k8s.io/apimachinery/pkg/util/runtime.HandleCrash({0x0, 0x0, 0xfffffffe})
2022-11-22T09:57:17.917758187Z /go/src/k8s.io/kube-state-metrics/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:48 +0x75
2022-11-22T09:57:17.917758187Z panic({0x1635600, 0x2696e10})
2022-11-22T09:57:17.917758187Z /usr/lib/golang/src/runtime/panic.go:1038 +0x215
2022-11-22T09:57:17.917758187Z k8s.io/kube-state-metrics/v2/internal/store.ingressMetricFamilies.func6(0x40)
2022-11-22T09:57:17.917758187Z /go/src/k8s.io/kube-state-metrics/internal/store/ingress.go:136 +0x189
2022-11-22T09:57:17.917758187Z k8s.io/kube-state-metrics/v2/internal/store.wrapIngressFunc.func1({0x17fe520, 0xc00063b590})
2022-11-22T09:57:17.917758187Z /go/src/k8s.io/kube-state-metrics/internal/store/ingress.go:175 +0x49
2022-11-22T09:57:17.917758187Z k8s.io/kube-state-metrics/v2/pkg/metric_generator.(*FamilyGenerator).Generate(...)
2022-11-22T09:57:17.917758187Z /go/src/k8s.io/kube-state-metrics/pkg/metric_generator/generator.go:67
2022-11-22T09:57:17.917758187Z k8s.io/kube-state-metrics/v2/pkg/metric_generator.ComposeMetricGenFuncs.func1({0x17fe520, 0xc00063b590})
2022-11-22T09:57:17.917758187Z /go/src/k8s.io/kube-state-metrics/pkg/metric_generator/generator.go:107 +0xd8
~~~

Logs are attached to the support case
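The trace points at a nil-pointer dereference while generating Ingress metrics. As a generic illustration only (which field is actually involved in kube-state-metrics is an assumption here), the defensive pattern that avoids this class of panic is to guard optional pointer fields before dereferencing them:

~~~
import (
	networkingv1 "k8s.io/api/networking/v1"
)

// ingressClass reads the optional IngressClassName field; returning a
// default instead of dereferencing a nil pointer prevents the kind of
// runtime panic shown in the log above.
func ingressClass(ing *networkingv1.Ingress) string {
	if ing.Spec.IngressClassName == nil {
		return ""
	}
	return *ing.Spec.IngressClassName
}
~~~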

Description of problem:

Large OpenShift Container Platform 4.10.24 - Cluster is failing to update router-certs secret in openshift-config-managed namespace as the given secret is too big.

2022-09-01T06:24:15.157333294Z 2022-09-01T06:24:15.157Z ERROR operator.init.controller.certificate_publisher_controller controller/controller.go:266  Reconciler error  {"name": "foo-bar", "namespace": "openshift-ingress-operator", "error": "failed to ensure global secret: failed to update published router certificates secret: Secret \"router-certs\" is invalid: data: Too long: must have at most 1048576 bytes"}

The OpenShift Container Platform 4 - Cluster has 180 IngressController configured with endpointPublishingStrategy set to private.

Now the default certificate needs to be replaced but is not properly replicated to openshift-authentication namespace and potentially other location because of the problem mentioned (since the required secret can not be updated)

Version-Release number of selected component (if applicable):

OpenShift Container Platform 4.10.24

How reproducible:

Always

Steps to Reproduce:

1. Install OpenShift Container Platform 4.10
2. Create 180 IngressController with specific certificates
3. Check openshift-ingress-operator logs to see how it fails to update/create the necessary secret in openshift-config-managed

Actual results:

2022-09-01T06:24:15.157333294Z 2022-09-01T06:24:15.157Z ERROR operator.init.controller.certificate_publisher_controller controller/controller.go:266  Reconciler error  {"name": "foo-bar", "namespace": "openshift-ingress-operator", "error": "failed to ensure global secret: failed to update published router certificates secret: Secret \"router-certs\" is invalid: data: Too long: must have at most 1048576 bytes"}

Expected results:

No matter how many IngressControllers are created, the secret management handled by the operators needs to keep working, even if the data exceeds the 1 MiB size limit. In that case an approach is needed to split the data across multiple secrets or handle it otherwise (see the sketch below).
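A minimal sketch of that kind of splitting (not the ingress operator's actual code): pack the per-IngressController certificate entries into as many chunks as needed so each resulting secret stays under the size limit.

~~~
// maxSecretBytes is a budget safely below the 1 MiB limit enforced on Secret data.
const maxSecretBytes = 900 * 1024

// splitRouterCerts packs certificate entries (key -> PEM bundle) into chunks,
// each intended to become its own Secret whose total data size stays under
// maxSecretBytes.
func splitRouterCerts(certs map[string][]byte) []map[string][]byte {
	var chunks []map[string][]byte
	current := map[string][]byte{}
	size := 0
	for key, pem := range certs {
		if size+len(pem) > maxSecretBytes && len(current) > 0 {
			chunks = append(chunks, current)
			current = map[string][]byte{}
			size = 0
		}
		current[key] = pem
		size += len(pem)
	}
	if len(current) > 0 {
		chunks = append(chunks, current)
	}
	return chunks
}
~~~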

Additional info:

 

Description of problem:
When all Helm chart repositories are disabled, the Helm navigation item is disabled.

To re-enable the Helm charts again, the user can search for the HCR or PHCR resources, but the action menu doesn't work if no other Helm chart repo is enabled.

Version-Release number of selected component (if applicable):
Only 4.12 (4.11 is fine)

How reproducible:
Always

Steps to Reproduce:
1. Switch to developer perspective
2. Navigate to Helm > Repos > Edit the default repo and disable it
3. Helm Navigation should disappear and the content area maybe switch to 404, that's fine.
4. Navigate to Search and select HelmChartRepository as resource
5. Click on the action menu (kebab icon) to edit the HCR

Actual results:
The action menu is not shown

Expected results:
The action menu should be shown so that the user can edit or delete the HCR.

Additional info:
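As a possible CLI workaround (hedged: this assumes the HelmChartRepository spec exposes the disabled field that the console edit form toggles, and that the default repository is named openshift-helm-charts), the repository can be re-enabled without the action menu:

oc patch helmchartrepository openshift-helm-charts --type merge -p '{"spec":{"disabled":false}}'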

Description of problem:

The API Explorer page layout is incorrect,  please check the attachment for more details

Version-Release number of selected component (if applicable):

4.12.0-0.nightly-2022-08-15-150248

How reproducible:

Always

Steps to Reproduce:
1. Login OCP, Go to Home -> API Explorer page

2. Check if there is an extra blank line between the dropdown filter and the list 

Actual results:

There is an extra blank line between the dropdown filter and the list 

Expected results:

Use the right PatternFly package and remove the extra blank line

Additional info:

104.0.5112.79 (Official Build) (64-bit)

Description of problem:

We discovered an issue before code freeze that caused many CI issues. This is resolved with this PR: https://github.com/openshift/cluster-network-operator/pull/1579

Version-Release number of selected component (if applicable):

4.12

How reproducible:

NA

Steps to Reproduce:

1.NA
2.
3.

Actual results:

Severity is set too low for various OVN-K alerts

Expected results:

Alerts work as expected at the correct severity level and CI runs are clear including for hypershift clusters.

Additional info:

This is resolved with this PR: https://github.com/openshift/cluster-network-operator/pull/1579 Here is my testing with `e2e-all` and `e2e-serial` and there are no issues after 10 runs each: https://docs.google.com/spreadsheets/d/1FZON8-d3m7D_2-z3XetODA-ucbXKJzCioC-zRMArHlY/edit?usp=sharing

Description of problem:
This is a follow up on OCPBUGSM-47202 (https://bugzilla.redhat.com/show_bug.cgi?id=2110570)

While OCPBUGSM-47202 fixes the issue specifically for Set Pod Count, many other actions aren't fixed. When the user updates a Deployment with one of these options and selects the action again, the old values are still shown.

Version-Release number of selected component (if applicable)
4.8-4.12 as well as master with the changes of OCPBUGSM-47202

How reproducible:
Always

Steps to Reproduce:

  1. Import a deployment
  2. Select the deployment to open the topology sidebar
  3. Click on actions and one of the 4 options to update the deployment with a modal
    1. Edit labels
    2. Edit annotations
    3. Edit update strategy
    4. Edit resource limits
  4. Click on the action again and check if the data in the modal reflects the changes from step 3

Actual results:
Old data (labels, annotations, etc.) was shown.

Expected results:
Latest data should be shown

Additional info:

Description:

I was testing the DHCP scenario where only rendezvousIP is specified in the agent-config.yaml and no NMStateConfig is embedded. create-cluster-and-infraenv.service fails on node0 when networkConfig is missing from agent-config.yaml. /etc/assisted/manifests/nmstateconfig.yaml is an empty file.

agent-config.yaml used:

metadata:
  name: ostest
  namespace: cluster0
spec:
  rendezvousIP: 192.168.122.2

Steps to reproduce:

1. Create agent.iso using install-config.yaml and agent-config.yaml
2. Deploy cluster using agent.iso
3. Log into node0 and create-cluster-and-infraenv.service will be displayed as a failed unit.

Expected:

create-cluster-and-infraenv.service in success state

Actual:

create-cluster-and-infraenv.service in failed state

Aug 05 08:27:59 control1 podman[2681]: time="2022-08-05T08:27:59Z" level=info msg="releaseImage version 4.11.0-0.okd-2022-08-04-074610 cpuarch x86_64"
Aug 05 08:27:59 control1 create-cluster-and-infraenv[2693]: time="2022-08-05T08:27:59Z" level=info msg="Registered cluster with id: 1cc3ea1a-5bbc-4c4d-ad66-6e052800fb0c"
Aug 05 08:27:59 control1 create-cluster-and-infraenv[2693]: time="2022-08-05T08:27:59Z" level=info msg="Registering infraenv"
Aug 05 08:27:59 control1 podman[2681]: time="2022-08-05T08:27:59Z" level=info msg="Registered cluster with id: 1cc3ea1a-5bbc-4c4d-ad66-6e052800fb0c"
Aug 05 08:27:59 control1 podman[2681]: time="2022-08-05T08:27:59Z" level=info msg="Registering infraenv"
Aug 05 08:27:59 control1 create-cluster-and-infraenv[2693]: time="2022-08-05T08:27:59Z" level=fatal msg="Failed to register infraenv with assisted-service: nmstateconfig should have at least one label set matching the infra-env label selector"
Aug 05 08:27:59 control1 podman[2681]: time="2022-08-05T08:27:59Z" level=fatal msg="Failed to register infraenv with assisted-service: nmstateconfig should have at least one label set matching the infra-env label selector"
Aug 05 08:27:59 control1 systemd[1]: create-cluster-and-infraenv.service: Main process exited, code=exited, status=1/FAILURE
Aug 05 08:27:59 control1 systemd[1]: create-cluster-and-infraenv.service: Failed with result 'exit-code'.
Aug 05 08:27:59 control1 systemd[1]: Failed to start Service that creates initial cluster and infraenv.

/etc/assisted/manifests/nmstateconfig.yaml is an empty file.

[core@control1 ~]$ sudo cat /etc/assisted/manifests/nmstateconfig.yaml
[core@control1 ~]$

This is a clone of issue OCPBUGS-4941. The following is the description of the original issue:

Description of problem: This is a follow-up to OCPBUGS-3933.

The installer fails to destroy the cluster when the OpenStack object storage omits 'content-type' from responses, and a container is empty.

Version-Release number of selected component (if applicable):

4.8.z

How reproducible:

Likely not happening in customer environments where Swift is exposed directly. We're seeing the issue in our CI where we're using a non-RHOSP managed cloud.

Steps to Reproduce:

1.
2.
3.

Actual results:

 

Expected results:

 

Additional info:

 

This is a clone of issue OCPBUGS-4411. The following is the description of the original issue:

Description of problem:

Manually configuring IPv6 addresses and a route on an IPv4 OCP cluster to create a dual-stack cluster leaves newly created pods stuck in 'ContainerCreating' status.

Version-Release number of selected component (if applicable):

4.12

How reproducible:

Steps to Reproduce:

1. enable ipv6 in network.
# more patch_dual.yaml 
- op: add
  path: /spec/clusterNetwork/-
  value:
    cidr: fd01::/48
    hostPrefix: 64
- op: add
  path: /spec/serviceNetwork/-
  value: fd02::/112
# oc patch network.config.openshift.io cluster --type='json' --patch-file patch_dual.yaml
 
2. Configure ipv6 addresses and routes

PODS=$(oc get pods -n openshift-cluster-node-tuning-operator -l openshift-app=tuned --field-selector=status.phase=Running --no-headers -o name)
i=10
for pod in $PODS; do
  oc exec -n openshift-cluster-node-tuning-operator $pod -- ip -6 addr add fd00:172:22::${i}/64 dev br-ex
  oc exec -n openshift-cluster-node-tuning-operator $pod -- ip -6 route add default via fd00:172:22::1 dev br-ex
  ((i=i+1))
done 

3. Create pods; they will stay in ContainerCreating status.

4. If the IPv6 configuration is removed from the network config, newly created pods become ready.


Actual results:

Pod can not be running

Expected results:

Pod should be ready with both ipv4 and ipv6 address.

Additional info:

version:
# oc version
Client Version: 4.12.0-0.nightly-2022-11-30-182550
Kustomize Version: v4.5.7
Server Version: 4.12.0-0.nightly-2022-11-30-182550
Kubernetes Version: v1.25.2+5533733

Describe pods:
# oc describe pod iperf-rc-normal-qg6zd 
Name:             iperf-rc-normal-qg6zd
Namespace:        offload-testing
Priority:         0
Service Account:  default
Node:             openshift-qe-025.lab.eng.rdu2.redhat.com/192.168.111.54
Start Time:       Thu, 01 Dec 2022 21:35:28 -0500
Labels:           name=iperf-pods-normal
Annotations:      k8s.ovn.org/pod-networks:
                    {"default":{"ip_addresses":["10.129.2.7/23","fd01:0:0:6::3/64"],"mac_address":"0a:58:0a:81:02:07","gateway_ips":["10.129.2.1","fd01:0:0:6:...
                  openshift.io/scc: restricted-v2
                  seccomp.security.alpha.kubernetes.io/pod: runtime/default
Status:           Pending
IP:               
IPs:              <none>
Controlled By:    ReplicationController/iperf-rc-normal
Containers:
  iperf:
    Container ID:   
    Image:          quay.io/openshifttest/iperf3@sha256:440c59251338e9fcf0a00d822878862038d3b2e2403c67c940c7781297953614
    Image ID:       
    Port:           <none>
    Host Port:      <none>
    State:          Waiting
      Reason:       ContainerCreating
    Ready:          False
    Restart Count:  0
    Limits:
      memory:  340Mi
    Requests:
      memory:     340Mi
    Environment:  <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-4266b (ro)
Conditions:
  Type              Status
  Initialized       True 
  Ready             False 
  ContainersReady   False 
  PodScheduled      True 
Volumes:
  kube-api-access-4266b:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
    ConfigMapName:           openshift-service-ca.crt
    ConfigMapOptional:       <nil>
QoS Class:                   Burstable
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/memory-pressure:NoSchedule op=Exists
                             node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason                  Age                     From     Message
  ----     ------                  ----                    ----     -------
  Warning  FailedCreatePodSandBox  3m4s (x173 over 5h50m)  kubelet  (combined from similar events): Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_iperf-rc-normal-qg6zd_offload-testing_18673f13-37b4-40ea-aa5d-85654dfa5c85_0(4899f7150492fa4cd895c62d0ec25ac5c1507016037c31b6019849083b42cdb5): error adding pod offload-testing_iperf-rc-normal-qg6zd to CNI network "multus-cni-network": plugin type="multus" name="multus-cni-network" failed (add): [offload-testing/iperf-rc-normal-qg6zd/18673f13-37b4-40ea-aa5d-85654dfa5c85:ovn-kubernetes]: error adding container to network "ovn-kubernetes": CNI request failed with status 400: '[offload-testing/iperf-rc-normal-qg6zd 4899f7150492fa4cd895c62d0ec25ac5c1507016037c31b6019849083b42cdb5] [offload-testing/iperf-rc-normal-qg6zd 4899f7150492fa4cd895c62d0ec25ac5c1507016037c31b6019849083b42cdb5] failed to configure pod interface: timed out waiting for OVS port binding (ovn-installed) for 0a:58:0a:81:02:07 [10.129.2.7/23 fd01:0:0:6::3/64]
'

 

Description of problem:

catsrc is not ready due to "compute digest: compute hash: write tar: open /tmp/cache/cache: permission denied"

Version-Release number of selected component (if applicable):

zhaoxia@xzha-mac test % ../bin/opm version  
Version: version.Version{OpmVersion:"b94e073b5", GitCommit:"b94e073b5187ecaa687c322beccf76f1d1f26d54", BuildDate:"2022-08-29T06:30:05Z", GoOs:"darwin", GoArch:"amd64"}
zhaoxia@xzha-mac test % oc exec catalog-operator-79d885b755-6cnbp  -- olm --version
OLM version: 0.19.0
git commit: dfa7f0e70578432117e63867706630cda5366fb7

How reproducible:

always

Steps to Reproduce:

1. generate index image
zhaoxia@xzha-mac test % mkdir catalog
zhaoxia@xzha-mac test % ../bin/opm generate dockerfile catalog
zhaoxia@xzha-mac test % cat catalog.Dockerfile 
# The base image is expected to contain
# /bin/opm (with a serve subcommand) and /bin/grpc_health_probe
FROM quay.io/operator-framework/opm:latest


# Configure the entrypoint and command
ENTRYPOINT ["/bin/opm"]
CMD ["serve", "/configs", "--cache-dir=/tmp/cache"]


# Copy declarative config root into image at /configs and pre-populate serve cache
ADD catalog /configs
RUN ["/bin/opm", "serve", "/configs", "--cache-dir=/tmp/cache", "--cache-only"]


# Set DC-specific label for the location of the DC root directory
# in the image
LABEL operators.operatorframework.io.index.configs.v1=/configs

zhaoxia@xzha-mac test % docker build . -f catalog.Dockerfile -t quay.io/olmqe/nginxolm-operator-index:2726 
zhaoxia@xzha-mac test % docker push quay.io/olmqe/nginxolm-operator-index:2726

2. create catsrc
zhaoxia@xzha-mac test % cat catsrc.yaml 
apiVersion: operators.coreos.com/v1alpha1
kind: CatalogSource
metadata:
  name: test-index
  namespace: test-1
spec:
  displayName: Test
  publisher: OLM-QE
  sourceType: grpc
  image: quay.io/olmqe/nginxolm-operator-index:2726
  updateStrategy:
    registryPoll:
      interval: 10m

oc new-project test-1
oc apply -f catsrc.yaml 
 3. check pod status
zhaoxia@xzha-mac test % oc get pod
NAME               READY   STATUS             RESTARTS        AGE
test-index-hbqlv   0/1     Error              8 (5m13s ago)   16m
test-index-l6mzq   0/1     CrashLoopBackOff   10 (59s ago)    27m

zhaoxia@xzha-mac test % oc get pod test-index-hbqlv -o yaml
apiVersion: v1
kind: Pod
metadata:
  annotations:
    cluster-autoscaler.kubernetes.io/safe-to-evict: "true"
    k8s.v1.cni.cncf.io/network-status: |-
      [{
          "name": "openshift-sdn",
          "interface": "eth0",
          "ips": [
              "10.131.0.84"
          ],
          "default": true,
          "dns": {}
      }]
    k8s.v1.cni.cncf.io/networks-status: |-
      [{
          "name": "openshift-sdn",
          "interface": "eth0",
          "ips": [
              "10.131.0.84"
          ],
          "default": true,
          "dns": {}
      }]
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"operators.coreos.com/v1alpha1","kind":"CatalogSource","metadata":{"annotations":{},"name":"test-index","namespace":"test-1"},"spec":{"displayName":"Test","image":"quay.io/olmqe/nginxolm-operator-index:2726","publisher":"OLM-QE","sourceType":"grpc","updateStrategy":{"registryPoll":{"interval":"10m"}}}}
    openshift.io/scc: restricted-v2
    seccomp.security.alpha.kubernetes.io/pod: runtime/default
  creationTimestamp: "2022-08-29T06:57:55Z"
  generateName: test-index-
  labels:
    catalogsource.operators.coreos.com/update: test-index
    olm.catalogSource: ""
    olm.pod-spec-hash: 777849c67c
  name: test-index-hbqlv
  namespace: test-1
  ownerReferences:
  - apiVersion: operators.coreos.com/v1alpha1
    blockOwnerDeletion: false
    controller: false
    kind: CatalogSource
    name: test-index
    uid: 5ef60ce9-6ade-43e1-bae4-7d69f6c9d5e0
  resourceVersion: "218774"
  uid: 7606a54a-6a7d-4979-833a-97c2f87a88b8
spec:
  containers:
  - image: quay.io/olmqe/nginxolm-operator-index:2726
    imagePullPolicy: Always
    livenessProbe:
      exec:
        command:
        - grpc_health_probe
        - -addr=:50051
      failureThreshold: 3
      initialDelaySeconds: 10
      periodSeconds: 10
      successThreshold: 1
      timeoutSeconds: 5
    name: registry-server
    ports:
    - containerPort: 50051
      name: grpc
      protocol: TCP
    readinessProbe:
      exec:
        command:
        - grpc_health_probe
        - -addr=:50051
      failureThreshold: 3
      initialDelaySeconds: 5
      periodSeconds: 10
      successThreshold: 1
      timeoutSeconds: 5
    resources:
      requests:
        cpu: 10m
        memory: 50Mi
    securityContext:
      allowPrivilegeEscalation: false
      capabilities:
        drop:
        - ALL
      readOnlyRootFilesystem: false
      runAsNonRoot: true
      runAsUser: 1001130000
    startupProbe:
      exec:
        command:
        - grpc_health_probe
        - -addr=:50051
      failureThreshold: 15
      periodSeconds: 10
      successThreshold: 1
      timeoutSeconds: 1
    terminationMessagePath: /dev/termination-log
    terminationMessagePolicy: FallbackToLogsOnError
    volumeMounts:
    - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
      name: kube-api-access-bfzvh
      readOnly: true
  dnsPolicy: ClusterFirst
  enableServiceLinks: true
  imagePullSecrets:
  - name: test-index-dockercfg-wp8s4
  nodeName: qe-daily-412-0829-qf9lx-worker-1-djpwq
  nodeSelector:
    kubernetes.io/os: linux
  preemptionPolicy: PreemptLowerPriority
  priority: 0
  restartPolicy: Always
  schedulerName: default-scheduler
  securityContext:
    fsGroup: 1001130000
    seLinuxOptions:
      level: s0:c34,c4
    seccompProfile:
      type: RuntimeDefault
  serviceAccount: test-index
  serviceAccountName: test-index
  terminationGracePeriodSeconds: 30
  tolerations:
  - effect: NoExecute
    key: node.kubernetes.io/not-ready
    operator: Exists
    tolerationSeconds: 300
  - effect: NoExecute
    key: node.kubernetes.io/unreachable
    operator: Exists
    tolerationSeconds: 300
  - effect: NoSchedule
    key: node.kubernetes.io/memory-pressure
    operator: Exists
  volumes:
  - name: kube-api-access-bfzvh
    projected:
      defaultMode: 420
      sources:
      - serviceAccountToken:
          expirationSeconds: 3607
          path: token
      - configMap:
          items:
          - key: ca.crt
            path: ca.crt
          name: kube-root-ca.crt
      - downwardAPI:
          items:
          - fieldRef:
              apiVersion: v1
              fieldPath: metadata.namespace
            path: namespace
      - configMap:
          items:
          - key: service-ca.crt
            path: service-ca.crt
          name: openshift-service-ca.crt
status:
  conditions:
  - lastProbeTime: null
    lastTransitionTime: "2022-08-29T06:57:55Z"
    status: "True"
    type: Initialized
  - lastProbeTime: null
    lastTransitionTime: "2022-08-29T06:57:55Z"
    message: 'containers with unready status: [registry-server]'
    reason: ContainersNotReady
    status: "False"
    type: Ready
  - lastProbeTime: null
    lastTransitionTime: "2022-08-29T06:57:55Z"
    message: 'containers with unready status: [registry-server]'
    reason: ContainersNotReady
    status: "False"
    type: ContainersReady
  - lastProbeTime: null
    lastTransitionTime: "2022-08-29T06:57:55Z"
    status: "True"
    type: PodScheduled
  containerStatuses:
  - containerID: cri-o://54d7a5ba94c061fb86ad056ad964dbda2824c864c6fdcd2d7d5a7ada515bc70e
    image: quay.io/olmqe/nginxolm-operator-index:2726
    imageID: quay.io/olmqe/nginxolm-operator-index@sha256:d70f38fa773ea5030b5b80bfe34d9168aabff5039ead44b7f7e7cd76f8705eb1
    lastState:
      terminated:
        containerID: cri-o://54d7a5ba94c061fb86ad056ad964dbda2824c864c6fdcd2d7d5a7ada515bc70e
        exitCode: 1
        finishedAt: "2022-08-29T07:14:23Z"
        message: |+
          Error: compute digest: compute hash: write tar: open /tmp/cache/cache: permission denied
          Usage:
            opm serve <source_path> [flags]


          Flags:
                --cache-dir string         if set, sync and persist server cache directory
                --cache-only               sync the serve cache and exit without serving
                --debug                    enable debug logging
            -h, --help                     help for serve
            -p, --port string              port number to serve on (default "50051")
                --pprof-addr string        address of startup profiling endpoint (addr:port format)
            -t, --termination-log string   path to a container termination log file (default "/dev/termination-log")


          Global Flags:
                --skip-tls-verify   skip TLS certificate verification for container image registries while pulling bundles
                --use-http          use plain HTTP for container image registries while pulling bundles


        reason: Error
        startedAt: "2022-08-29T07:14:23Z"
    name: registry-server
    ready: false
    restartCount: 8
    started: false
    state:
      waiting:
        message: back-off 5m0s restarting failed container=registry-server pod=test-index-hbqlv_test-1(7606a54a-6a7d-4979-833a-97c2f87a88b8)
        reason: CrashLoopBackOff
  hostIP: 10.242.0.4
  phase: Running
  podIP: 10.131.0.84
  podIPs:
  - ip: 10.131.0.84
  qosClass: Burstable
  startTime: "2022-08-29T06:57:55Z" 

Actual results:

The pod for the catalog source is not running (it ends up in CrashLoopBackOff).

Expected results:

The pod for the catalog source is running.

Additional info:

When using project openshift-marketplace, the same error will be raised.

Error: compute digest: compute hash: write tar: open /tmp/cache/cache: permission denied

Grafana has been removed in 4.11 and we can safely remove any logic in CMO that deals with Grafana (except dashboards since they are used by OCP console).

Another point to clarify is to communicate to ProdSec and ART that Grafana isn't part of OCP anymore.

Description of problem:

When installing a private cluster, the first attempt fails. The following security group rule then has to be added manually:

ibmcloud is security-group-rule-add "${infra}-sg-kube-api-lb" inbound tcp --port-min 6443 --port-max 6443 --remote $sg

and openshift-install wait-for has to be run again.

Version-Release number of selected component (if applicable):

 

How reproducible:

always

 

Steps to Reproduce:

1. Try to create a cluster with BYON and publish: Internal in install-config.yaml; the install fails.

Actual results:

The first install attempt fails.

Expected results:

The install should only need to be run once; the security-group-rule-add should not have to be performed manually.

Additional info:

https://coreos.slack.com/archives/C01U40AM37F/p1664439142279079?thread_ts=1663769891.358229&cid=C01U40AM37F

This issue blocks setting up private clusters automatically.

Derrick got an "old and new refs are equal" error on rebase; this is similar to OCPBUGS-1899 but I think it has a different root cause. In this case, when a manual rollback is performed via the bootloader, we've computed that there's an osimageurl diff between the expected and desired state, but actually the desired state is already set.

We just need to skip doing the rebase if we're already in the target state.

(A real root of this problem again is that the whole "current/desired config" thing is trying to track state independently of the bootloader...if we made node state == container image, all of that goes away. The MCO would understand that it got booted into a previous state)
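A minimal sketch of that skip (names are hypothetical, not the MCD's actual code):

~~~
// updateOS skips the rpm-ostree rebase when the node is already booted into
// the target OS image, which avoids the "old and new refs are equal" failure
// seen after a manual bootloader rollback.
func updateOS(bootedOSImageURL, desiredOSImageURL string) error {
	if bootedOSImageURL == desiredOSImageURL {
		// Nothing to do: current/desired bookkeeping disagreed, but the
		// bootloader is already in the target state.
		return nil
	}
	return rebaseToOSImage(desiredOSImageURL) // hypothetical helper
}
~~~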

This is a clone of issue OCPBUGS-3283. The following is the description of the original issue:

Description of problem:

We discovered that we are shipping unnecesary RBAC in https://coreos.slack.com/archives/CC3CZCQHM/p1667571136730989 .

This RBAC was only used 4.2 and 4.3 for

  • for making a switch from configMaps to leases in leader election

and we should remove it

Version-Release number of selected component (if applicable):

How reproducible:

 

Steps to Reproduce:

1.
2.
3.

Actual results:

 

Expected results:

 

Additional info:

 

Clone of https://issues.redhat.com/browse/OCPBUGSM-44162.

Cannot use the original as the bot won't accept a security bug:

When the change merges, the Bugzilla associated with the CVE must be set to MODIFIED. Since the DPTP bugzilla bot is not permitted to scan bugs with the SECURITY group in Bugzilla, The REP will not be able to use the bot's public functionality of moving their bug to MODIFIED.

https://docs.google.com/document/d/1KuenDafC3Ukw19jY55tkVeH8nNVVAi8TEAfqynoVfzY/edit#heading=h.ikdk6suc575k

Description of problem:

Automatic ART PRs to update the build config are failing. Needs manual intervention.

Description of problem:

When alert raised for vSphere privilege check which is reported by vsphere-problem-detector, we could only get the very simple info as below:

 

=======================================

Description

The vsphere-problem-detector monitors the health and configuration of OpenShift on VSphere. If problems are found which may prevent machine scaling, storage provisioning, and safe upgrades, the vsphere-problem-detector will raise alerts.

 

Summary

VSphere cluster health checks are failing

 

Message

VSphere cluster health checks are failing with CheckAccountPermissions

=======================================

 

  1. Please mention the vSphere privilege check in the Description; currently it only mentions "prevent machine scaling, storage provisioning, and safe upgrades".
  2. Could we at least add something like "Check the vsphere-problem-detector pod log in the openshift-cluster-storage-operator namespace for the details" if we cannot list which privilege is missing.

(We could get the namespace/pod info from metric, but I think adding it in alert Description or Message should be more clear)
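Roughly what the request amounts to, expressed as an alerting-rule annotation (the alert and metric names below are illustrative assumptions, not the operator's actual rule):

- alert: VSphereClusterHealthCheckFailing   # illustrative name
  expr: vsphere_cluster_check_errors > 0    # illustrative metric
  labels:
    severity: warning
  annotations:
    summary: vSphere cluster health checks are failing
    description: >-
      The {{ $labels.check }} check is failing; this includes the vSphere
      privilege checks. Check the vsphere-problem-detector pod log in the
      openshift-cluster-storage-operator namespace for the detailed list of
      missing privileges.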

Version-Release number of selected component (if applicable):

4.12.0-0.nightly-2022-09-12-152748

 

How reproducible:

Always

 

Steps to Reproduce:

See description

Actual results:

Alert info is not so clear

 

Expected results:

Add more Alert info

This is a clone of issue OCPBUGS-3761. The following is the description of the original issue:

Description of problem:

Events.Events: event view displays created pod
https://search.ci.openshift.org/?search=event+view+displays+created+pod&maxAge=168h&context=1&type=junit&name=pull-ci-openshift-console-master-e2e-gcp-console&excludeName=&maxMatches=5&maxBytes=20971520&groupBy=job

Version-Release number of selected component (if applicable):

 

How reproducible:

 

Steps to Reproduce:

1.Run event scenario tests and note below results: 

Actual results:

{Expected '' to equal 'test-vjxfx-event-test-pod'. toEqual Error: Failed expectation
    at /go/src/github.com/openshift/console/frontend/integration-tests/tests/event.scenario.ts:65:72
    at Generator.next (<anonymous>:null:null)
    at fulfilled (/go/src/github.com/openshift/console/frontend/integration-tests/tests/event.scenario.ts:5:58)
    at runMicrotasks (<anonymous>:null:null)
    at processTicksAndRejections (internal/process/task_queues.js:93:5)
   }

Expected results:

 

Additional info:

 

Description of problem:

When a pod runs to a completed state, we typically rely on the update event that will indicate to us that this pod is completed. At that point the pod IP is released and the port configuration is removed in OVN. The subsequent delete event for this pod will be ignored because it should have been cleaned up in the previous update.

However, there can be cases where the update event is missed with pod completed. In this case we will only receive a delete with pod completed event, and ignore tearing down the pod. The end result is the pod is not cleaned up in OVN and the IP address remains allocated, reducing the amount of address range available to launch another pod. This can lead to exhausting all IP addresses available for pod allocation on a node.
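A minimal sketch of the behaviour being asked for (function names are hypothetical, not the actual ovn-kubernetes code): on delete, tear the pod down unless its logical port is already confirmed gone, instead of skipping teardown just because the pod is completed.

~~~
import (
	corev1 "k8s.io/api/core/v1"
)

// handlePodDelete releases the pod's OVN resources on a delete event. Even if
// the pod reached a completed phase, its logical switch port (and IP) may
// still exist when the "completed" update event was missed, so teardown is
// only skipped once the port is confirmed to be gone.
func (c *Controller) handlePodDelete(pod *corev1.Pod) error {
	if podCompleted(pod) && !c.logicalPortExists(pod) {
		return nil // already cleaned up by the earlier update event
	}
	return c.tearDownPod(pod) // removes the port and returns the IP to the pool
}
~~~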

Version-Release number of selected component (if applicable):

4.10.24

How reproducible:

Not sure how to reproduce this. I'm guessing some lag in kapi updates can cause the completed update event and the final delete event to be combined into a single event.

Steps to Reproduce:

1.
2.
3.

Actual results:

Port still exists in OVN, IP remains allocated for a deleted pod.

Expected results:

IP should be freed, port should be removed from OVN.

Additional info:

 

Description of problem:
ovnkube-trace fails on hypershift deployments:
https://bugzilla.redhat.com/show_bug.cgi?id=2066891#c8

getDatabaseURIs looks for pods with container ovnkube-master, and those don't exist in hypershift.

https://github.com/ovn-org/ovn-kubernetes/blob/6b8acf05cb6043ebdc42d9d36e700390baabea4a/go-controller/cmd/ovnkube-trace/ovnkube-trace.go#L540
~~~
// Returns nbAddress, sbAddress, protocol == "ssl", nil
func getDatabaseURIs(coreclient *corev1client.CoreV1Client, restconfig *rest.Config, ovnNamespace string) (string, string, bool, error) {
	containerName := "ovnkube-master"
	var err error

	found := false
	var podName string

	listOptions := metav1.ListOptions{}
	pods, err := coreclient.Pods(ovnNamespace).List(context.TODO(), listOptions)
	if err != nil {
		return "", "", false, err
	}
	for _, pod := range pods.Items {
		for _, container := range pod.Spec.Containers {
			if container.Name == containerName {
				found = true
				podName = pod.Name
				break
			}
		}
	}
	if !found {
		klog.V(5).Infof("Cannot find ovnkube pods with container %s", containerName)
		return "", "", false, fmt.Errorf("cannot find ovnkube pods with container: %s", containerName)
	}
~~~

Version-Release number of selected component (if applicable):

How reproducible:

Steps to Reproduce:
1.
2.
3.

Actual results:

Expected results:

Additional info:

This is a clone of issue OCPBUGS-2290. The following is the description of the original issue:

Description of problem:

If you try to deploy with the Internal publishing strategy, and you either already have a public gateway or have already permitted the VPC subnet to the DNS service, the deploy will always fail.

Version-Release number of selected component (if applicable):

 

How reproducible:

Easily

Steps to Reproduce:

1. Add a public gateway to VPC network and/or add VPC subnet to permitted DNS networks
2. Set publish strategy to Internal
3. Deploy

Actual results:

Deploy fails

Expected results:

If the resources exist simply skip trying to create them.

Additional info:

Fix here https://github.com/openshift/installer/pull/6481

This is a clone of issue OCPBUGS-2851. The following is the description of the original issue:

Description of problem:

The current implementation of registries.conf support is not working as expected. This bug report will outline the expectations of how we believe this should work.

Background

The containers/image project defines a configuration file called registries.conf, which controls how image pulls can be redirected to another registry. Effectively the pull request for a given registry is redirected to another registry which can satisfy the image pull request instead. The specification for the registries.conf file is located here. For tools such as podman and skopeo, this configuration file allows those tools to indicate where images should be pulled from, and the containers/image project rewrites the image reference on the fly and tries to get the image from the first location it can, preferring these "alternate locations" and then falling back to the original location if one of the alternate locations can't satisfy the image request.

An important aspect of this redirection mechanism is it allows the "host:port" and "namespace" portions of the image reference to be redirected. To be clear on the nomenclature used in the registries.conf specification, a namespace refers to zero or more slash separated sections leading up to the image name (which is called repo in the specification and has the tag or digest after it. See repo(:_tag|@digest) below) and the host[:port] refers to the domain where the image registry is being hosted.

Example:

host[:port]/namespace[/namespace…]/repo(:_tag|@digest)

For example, if we have an image called myimage@sha:1234 and the image normally resides in quay.io/foo/myimage@sha:1234, you could redirect the image pull request to myregistry.com/bar/baz/myimage@sha:1234. Note that in this example the alternate registry location is on a different host, and the namespace "path" is different too.

Use Case

In a typical development scenario, image references within an OLM catalog should always point to a production location where the image is intended to be pulled from when a catalog is published publicly. Doing this prevents publishing a catalog which contains image references to internal repositories, which would never be accessible by a customer. By using the registries.conf redirection mechanism, we can perform testing even before the images are officially published to public locations, and we can redirect the image reference from a production location to an internal repository for testing purposes. Below is a simple example of a registries.conf file that redirects image pull requests away from prodlocation.io to preprodlocation.com:

[[registry]]
 location = "prodlocation.io/xx"
 insecure = false
 blocked = false
 mirror-by-digest-only = true
 prefix = ""
 [[registry.mirror]]
  location = "preprodlocation.com/xx"
  insecure = false

Other Considerations

  • We only care about redirection of images during image pull. Image redirection on push is out of scope.
  • We would like to see as much support for the fields and TOML tables defined in the spec as possible. That being said, there are some items we don't really care about.
    • supported:
      • support multiple [[registry]] TOML tables
      • support multiple [[registry.mirror]] TOML tables for a given [[registry]] TOML table
      • if all entries of [[registry.mirror]] for a given [[registry]] TOML table do not resolve an image, the original [[registry]] TOML locations should be used as the final fallback (this is consistent with how the specification is written, but we want to make this point clear; see the specification example which describes how things should work)
      • prefix and location
        • These fields work together, so refer to the specification for how this works. If necessary, we could simplify this to only use location since we are unlikely to use the prefix option.
      • insecure
        • this should be supported for the [[registry]] and [[registry.mirror]] TOML tables so you know how to access registries. If this is not needed by oc mirror then we can forgo this field.
    • fields that require discussion:
      • we assume that digests and tags can be supplied for an image reference, but in the end digests are required for oc mirror to keep track of the image in the workspace. It's not clear if we need to support these configuration options or not:
        • mirror-by-digest-only
          • we assume this is always false since we don't need to prevent an image from being pulled if it is using a tag
        • pull-from-mirror
          • we assume this is always all since we don't need to prevent an image from being pulled if it is using a tag
    • does not need to be supported:
      • unqualified-search-registries
      • credential-helpers
      • blocked
      • aliases
  • we are not interested in supporting version 1 of registries.conf since it is deprecated

Version-Release number of selected component (if applicable):

4.12

How reproducible:

Always

Steps to Reproduce:

oc mirror -c ImageSetConfiguration.yaml --use-oci-feature --oci-feature-action mirror --oci-insecure-signature-policy --oci-registries-config registries.conf --dest-skip-tls docker://localhost:5000/example/test

Example registries.conf

[[registry]]
  prefix = ""
  insecure = false
  blocked = false
  location = "prod.com/abc"
  mirror-by-digest-only = true
  [[registry.mirror]]
    location = "internal.exmaple.io/cp"
    insecure = false
[[registry]]
  prefix = ""
  insecure = false
  blocked = false
  location = "quay.io"
  mirror-by-digest-only = true
  [[registry.mirror]]
    location = "internal.exmaple.io/abcd"
    insecure = false

 

Actual results:

images are not pulled from "internal" registry

Expected results:

images should be pulled from "internal" registry

Additional info:

The current implementation in oc mirror creates its own structs to approximate the ones provided by the containers/image project, but it might not be necessary to do that. Since the oc mirror project already uses containers/image as a dependency, it could leverage the FindRegistry function, which takes an image reference, loads the registries.conf information and returns the most appropriate [[registry]] reference (in the form of a Registry struct) or nil if no match was found. Obviously custom processing will be necessary to do something useful with the Registry instance. Using this code is not a requirement, just a suggestion of another possible path to load the configuration.
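A minimal sketch of that suggestion, assuming the sysregistriesv2 package from containers/image v5 (this is not oc mirror's current code):

~~~
import (
	"fmt"

	"github.com/containers/image/v5/pkg/sysregistriesv2"
	"github.com/containers/image/v5/types"
)

// mirrorLocations loads registries.conf and returns the mirror locations (if
// any) configured for the given image reference, with the original location
// last, mirroring the pull-time fallback behaviour described above.
func mirrorLocations(registriesConfPath, imageRef string) ([]string, error) {
	sys := &types.SystemContext{SystemRegistriesConfPath: registriesConfPath}
	reg, err := sysregistriesv2.FindRegistry(sys, imageRef)
	if err != nil {
		return nil, fmt.Errorf("loading registries.conf: %w", err)
	}
	if reg == nil {
		return []string{imageRef}, nil // no [[registry]] entry matched
	}
	var locations []string
	for _, m := range reg.Mirrors {
		locations = append(locations, m.Location)
	}
	return append(locations, reg.Location), nil
}
~~~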

Description of problem:
OpenShift installer hits error when missing a topology section inside of a failureDomain like this in install-config.yaml:

    - name: us-east-1
      region: us-east
      zone: us-east-1a
    - name: us-east-2
      region: us-east
      zone: us-east-2a
      topology:
        computeCluster: /IBMCloud/host/vcs-mdcnc-workload-2
        networks:
        - ci-segment-154
        datastore: workload_share_vcsmdcncworkload2_vyC6a

Version-Release number of selected component (if applicable):

Build from latest master (4.12)

How reproducible:

Each time

Steps to Reproduce:

1. Create install-config.yaml for vsphere multi-zone
2. Leave out a topology section (under failureDomains)
3. Attempt to create cluster

Actual results:

FATAL failed to fetch Terraform Variables: failed to fetch dependency of "Terraform Variables": failed to generate asset "Platform Provisioning Check": platform.vsphere.failureDomains.topology.resourcePool: Invalid value: "//Resources": resource pool '//Resources' not found 

Expected results:

Validation of topology before attempting to create any resources

This is a clone of issue OCPBUGS-6270. The following is the description of the original issue:

Similar to how the baremetal platform previously required a bunch of fields in the install-config that are actually ignored (OCPBUGS-3278) due to the install-config validation, we require values for the following fields in the platform.vsphere section:

  • vCenter
  • username
  • password
  • datacenter
  • defaultDatastore

None of these values are actually used in the agent-based installer at present, and they should not be required.

Users can work around this by specifying dummy values in the platform config (note that the VIP values are required and must be genuine):

platform:
  vsphere:
    apiVIP: 192.168.111.1
    ingressVIP: 192.168.111.2
    vCenter: a
    username: b
    password: c
    datacenter: d
    defaultDatastore: e

And possibly other alerts.  Declaring namespace labels on alerts makes it easy to find the source or affected resource, as described here. But because Insights alerts are based on metrics exported by the cluster-version operator, they inherit source information from the CVO, and end up looking like:

ALERTS{alertname="SimpleContentAccessNotAvailable", alertstate="firing", condition="SCAAvailable", endpoint="metrics", instance="10.58.57.116:9099", job="cluster-version-operator", name="insights", namespace="openshift-cluster-version", pod="cluster-version-operator-5d8579fb58-p5hfn", prometheus="openshift-monitoring/k8s", reason="NotFound", receive="true", service="cluster-version-operator", severity="info"}

Adding namespace: openshift-insights to the labels block for InsightsDisabled and SimpleContentAccessNotAvailable would avoid this confusion.

You might also want to clear the job and service labels as irrelevant source information. And you might want to clear the pod label to avoid churning alerts when the CVO rolls out a new pod. You can get the label clearing by wrapping the expr with max without (job, pod, service) (...) or similar.
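Putting both suggestions together, a sketch of what one of the rules could look like (the expression is abbreviated and illustrative, not the exact shipped rule):

- alert: SimpleContentAccessNotAvailable
  expr: |
    max without (job, pod, service)
      (cluster_operator_conditions{name="insights", condition="SCAAvailable", reason="NotFound"} == 0)
  labels:
    namespace: openshift-insights
    severity: info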

Description of problem:

See: https://issues.redhat.com/browse/CPSYN-143

tldr:  Based on the previous direction that 4.12 was going to enforce PSA restricted by default, OLM had to make a few changes because the way we run catalog pods (and we have to run them that way because of how the opm binary worked) was incompatible w/ running restricted.

1) We set openshift-marketplace to enforce restricted (this was our choice, we didn't have to do it, but we did)
2) we updated the opm binary so catalog images using a newer opm binary don't have to run privileged
3) we added a field to catalogsource that allows you to choose whether to run the pod privileged(legacy mode) or restricted.  The default is restricted.  We made that the default so that users running their own catalogs in their own NSes (which would be default PSA enforcing) would be able to be successful w/o needing their NS upgraded to privileged.

Unfortunately this means:
1) legacy catalog images (i.e. those using older opm binaries) won't run on 4.12 by default (the catalogsource needs to be modified to specify legacy mode).
2) legacy catalog images cannot be run in the openshift-marketplace NS since that NS does not allow privileged pods.  This means legacy catalogs can't contribute to the global catalog (since catalogs must be in that NS to be in the global catalog).

Before 4.12 ships we need to:
1) remove the PSA restricted label on the openshift-marketplace NS
2) change the catalogsource securitycontextconfig mode default to use "legacy" as the default, not restricted.

This gives catalog authors another release to update to using a newer opm binary that can run restricted, or get their NSes explicitly labeled as privileged (4.12 will not enforce restricted, so in 4.12 using the legacy mode will continue to work)

In 4.13 we will need to revisit what we want the default to be, since at that point catalogs will start breaking if they try to run in legacy mode in most NSes.
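For reference, a sketch of the CatalogSource field in question, explicitly selecting legacy mode for an older catalog image (the image name is a placeholder):

apiVersion: operators.coreos.com/v1alpha1
kind: CatalogSource
metadata:
  name: my-legacy-catalog
  namespace: openshift-marketplace
spec:
  sourceType: grpc
  image: <catalog image built with an older opm>
  grpcPodConfig:
    securityContextConfig: legacy   # "restricted" works for images built with a newer opm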


Version-Release number of selected component (if applicable):


How reproducible:


Steps to Reproduce:

1.
2.
3.

Actual results:


Expected results:


Additional info:


Description of problem:

The ovn-kubernetes ovnkube-master containers are continuously crashlooping since we updated to 4.11.0-0.okd-2022-10-15-073651.

Log Excerpt:

] [] []  [{kubectl-client-side-apply Update networking.k8s.io/v1 2022-09-12 12:25:06 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{}}},"f:spec":{"f:ingress":{},"f:policyTypes":{}}} }]},Spec:NetworkPolicySpec{PodSelector:{map[] []},Ingress:[]NetworkPolicyIngressRule{NetworkPolicyIngressRule{Ports:[]NetworkPolicyPort{},From:[]NetworkPolicyPeer{NetworkPolicyPeer{PodSelector:&v1.LabelSelector{MatchLabels:map[string]string{access: true,},MatchExpressions:[]LabelSelectorRequirement{},},NamespaceSelector:nil,IPBlock:nil,},},},},Egress:[]NetworkPolicyEgressRule{},PolicyTypes:[Ingress],},} &NetworkPolicy{ObjectMeta:{allow-from-openshift-ingress  compsci-gradcentral  a405f843-c250-40d7-8dd4-a759f764f091 217304038 1 2022-09-22 14:36:38 +0000 UTC <nil> <nil> map[] map[] [] []  [{openshift-apiserver Update networking.k8s.io/v1 2022-09-22 14:36:38 +0000 UTC FieldsV1 {"f:spec":{"f:ingress":{},"f:policyTypes":{}}} }]},Spec:NetworkPolicySpec{PodSelector:{map[] []},Ingress:[]NetworkPolicyIngressRule{NetworkPolicyIngressRule{Ports:[]NetworkPolicyPort{},From:[]NetworkPolicyPeer{NetworkPolicyPeer{PodSelector:nil,NamespaceSelector:&v1.LabelSelector{MatchLabels:map[string]string{policy-group.network.openshift.io/ingress: ,},MatchExpressions:[]LabelSelectorRequirement{},},IPBlock:nil,},},},},Egress:[]NetworkPolicyEgressRule{},PolicyTypes:[Ingress],},}]: cannot clean up egress default deny ACL name: error in transact with ops [{Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:acls Mutator:delete Value:{GoSet:[{GoUUID:60cb946a-46e9-4623-9ba4-3cb35f018ed6}]}}] Timeout:<nil> Where:[where column _uuid == {ccdd01bf-3009-42fb-9672-e1df38190cd7}] Until: Durable:<nil> Comment:<nil> Lock:<nil> UUIDName:} {Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:acls Mutator:delete Value:{GoSet:[{GoUUID:60cb946a-46e9-4623-9ba4-3cb35f018ed6}]}}] Timeout:<nil> Where:[where column _uuid == {10bbf229-8c1b-4c62-b36e-4ba0097722db}] Until: Durable:<nil> Comment:<nil> Lock:<nil> UUIDName:} {Op:delete Table:ACL Row:map[] Rows:[] Columns:[] Mutations:[] Timeout:<nil> Where:[where column _uuid == {7b55ba0c-150f-4a63-9601-cfde25f29408}] Until: Durable:<nil> Comment:<nil> Lock:<nil> UUIDName:} {Op:delete Table:ACL Row:map[] Rows:[] Columns:[] Mutations:[] Timeout:<nil> Where:[where column _uuid == {60cb946a-46e9-4623-9ba4-3cb35f018ed6}] Until: Durable:<nil> Comment:<nil> Lock:<nil> UUIDName:}] results [{Count:1 Error: Details: UUID:{GoUUID:} Rows:[]} {Count:1 Error: Details: UUID:{GoUUID:} Rows:[]} {Count:1 Error: Details: UUID:{GoUUID:} Rows:[]} {Count:1 Error: Details: UUID:{GoUUID:} Rows:[]} {Count:0 Error:referential integrity violation Details:cannot delete ACL row 7b55ba0c-150f-4a63-9601-cfde25f29408 because of 1 remaining reference(s) UUID:{GoUUID:} Rows:[]}] and errors []: referential integrity violation: cannot delete ACL row 7b55ba0c-150f-4a63-9601-cfde25f29408 because of 1 remaining reference(s)

Additional info:

https://github.com/okd-project/okd/issues/1372

Issue persisted through update to 4.11.0-0.okd-2022-10-28-153352

must-gather: https://nbc9-snips.cloud.duke.edu/snips/must-gather.local.2859117512952590880.zip

This is a clone of issue OCPBUGS-4874. The following is the description of the original issue:

OCPBUGS-3278 is supposed to fix the issue where the user was required to provide data about the baremetal hosts (including MAC addresses) in the install-config, even though this data is ignored.

However, we determine whether we should disable the validation by checking the second CLI arg to see if it is agent.

This works when the command is:

openshift-install agent create image --dir=whatever

But fails when the argument is e.g., as in dev-scripts:

openshift-install --log-level=debug --dir=whatever agent create image
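A minimal sketch of a more robust check (not the installer's actual code): find the first non-flag argument instead of assuming the subcommand is always the second element of os.Args.

~~~
import "strings"

// isAgentCommand reports whether the invocation targets the agent subcommand,
// skipping over --flag=value style global options such as --log-level=debug
// or --dir=whatever that may appear before it.
func isAgentCommand(args []string) bool {
	for _, arg := range args[1:] {
		if strings.HasPrefix(arg, "-") {
			continue
		}
		return arg == "agent" // first non-flag argument is the subcommand
	}
	return false
}
~~~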

We cache images by filename, which works when downloading from the Internet as the filename always includes the CoreOS version.

However, when extracting an image from the release payload, it always has the same name. Therefore, we will never update it to a newer image even when running different versions of the installer.

A possible solution:

  1. Check that the cached ISO's checksum matches the RHCOS metadata.
  2. If it doesn't, extract the expected checksum from the release payload and compare that to the cached ISO's checksum.
  3. If it still doesn't match, extract the ISO from the release payload.

An alternative might be to set the name of the cache file to something different. It's not clear how we'd guarantee a match between the release payload we've been given and the ISO unless the name was based on the release payload (which eliminates some of the point of the cache, since ordinarily most release payloads will point to a small number of images).
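A minimal sketch of the checksum comparison from step 1 of the possible solution (the wiring and names are assumptions, not the installer's actual cache code):

~~~
import (
	"crypto/sha256"
	"encoding/hex"
	"io"
	"os"
)

// cachedISOMatches reports whether the ISO at path has the expected sha256
// sum, e.g. the one recorded in the RHCOS metadata or the release payload.
func cachedISOMatches(path, expectedSHA256 string) (bool, error) {
	f, err := os.Open(path)
	if err != nil {
		return false, err
	}
	defer f.Close()
	h := sha256.New()
	if _, err := io.Copy(h, f); err != nil {
		return false, err
	}
	return hex.EncodeToString(h.Sum(nil)) == expectedSHA256, nil
}
~~~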

Description of problem:

Deployed a hypershift cluster with a recent multi-arch build.
The storage cluster operator has become available but shows the warning message below.


PowerVSBlockCSIDriverOperatorCRDegraded: PowerVSBlockCSIDriverStaticResourcesControllerDegraded: "rbac/attacher_role.yaml" (string): clusterroles.rbac.authorization.k8s.io "ibm-powervs-block-external-attacher-role" is forbidden: user "system:serviceaccount:openshift-cluster-csi-drivers:powervs-block-csi-driver-operator" (groups=["system:serviceaccounts" "system:serviceaccounts:openshift-cluster-csi-drivers" "system:authenticated"]) is attempting to grant RBAC permissions not currently held:
PowerVSBlockCSIDriverOperatorCRDegraded: PowerVSBlockCSIDriverStaticResourcesControllerDegraded: {APIGroups:["csi.storage.k8s.io"], Resources:["csinodeinfos"], Verbs:["get" "list" "watch"]}
PowerVSBlockCSIDriverOperatorCRDegraded: PowerVSBlockCSIDriverStaticResourcesControllerDegraded: "rbac/attacher_binding.yaml" (string): clusterroles.rbac.authorization.k8s.io "ibm-powervs-block-external-attacher-role" not found

Version-Release number of selected component (if applicable):

 

How reproducible:

 

Steps to Reproduce:

1.Deploy 4.12.0-0.nightly-multi-2022-09-01-220105 nightly build

Actual results:

 

Expected results:

 

Additional info:

 

We need to rebase openshift-sdn to kube 1.25's kube-proxy.

In particular, we need this to get https://github.com/kubernetes/kubernetes/pull/110334 into master because we will probably get asked to backport it.

This is a clone of issue OCPBUGS-3358. The following is the description of the original issue:

Description of problem:
Due to changes in BUILD-407 which merged into release-4.12, we have a permafailing test `e2e-aws-csi-driver-no-refreshresource` and are unable to merge subsequent pull requests.

Version-Release number of selected component (if applicable):


How reproducible: Always

Steps to Reproduce:

1. Bring up cluster using release-4.12 or release-4.13 or master branch
2. Run `e2e-aws-csi-driver-no-refreshresource` test
3.

Actual results:
I1107 05:18:31.131666 1 mount_linux.go:174] Cannot run systemd-run, assuming non-systemd OS
I1107 05:18:31.131685 1 mount_linux.go:175] systemd-run failed with: exit status 1
I1107 05:18:31.131702 1 mount_linux.go:176] systemd-run output: System has not been booted with systemd as init system (PID 1). Can't operate.
Failed to create bus connection: Host is down

Expected results:
Test should pass

Additional info:


Description of problem:


Version-Release number of selected component (if applicable):

4.12.0-0.nightly-2022-08-21-135326
How reproducible:

Steps to Reproduce:

See https://bugzilla.redhat.com/show_bug.cgi?id=2118563#c5.
The following messages are "normal" on startup, but logging them at error level is misleading; suggest suppressing them or rewording them so it is clear they are part of the normal startup process.

E0818 02:18:53.709223       1 controller.go:165] error syncing 'br709bt-b5564-6jgdx-worker-0-c955q': error retrieving the private IP configuration for node: br709bt-b5564-6jgdx-worker-0-c955q, err: cannot parse valid nova server ID from providerId '', requeuing in node workqueue
E0818 02:18:53.715530       1 controller.go:165] error syncing 'br709bt-b5564-6jgdx-worker-0-sl9jn': error retrieving the private IP configuration for node: br709bt-b5564-6jgdx-worker-0-sl9jn, err: cannot parse valid nova server ID from providerId '', requeuing in node workqueue
E0818 02:18:53.735885       1 controller.go:165] error syncing 'br709bt-b5564-6jgdx-worker-0-sl9jn': error retrieving the private IP configuration for node: br709bt-b5564-6jgdx-worker-0-sl9jn, err: cannot parse valid nova server ID from providerId '', requeuing in node workqueue
E0818 02:18:53.775984       1 controller.go:165] error syncing 'br709bt-b5564-6jgdx-worker-0-sl9jn': error retrieving the private IP configuration for node: br709bt-b5564-6jgdx-worker-0-sl9jn, err: cannot parse valid nova server ID from providerId '', requeuing in node workqueue
E0818 02:18:53.790449       1 controller.go:165] error syncing 'br709bt-b5564-6jgdx-worker-0-c955q': error retrieving the private IP configuration for node: br709bt-b5564-6jgdx-worker-0-c955q, err: cannot parse valid nova server ID from providerId '', requeuing in node workqueue
E0818 02:18:53.856911       1 controller.go:165] error syncing 'br709bt-b5564-6jgdx-worker-0-sl9jn': error retrieving the private IP configuration for node: br709bt-b5564-6jgdx-worker-0-sl9jn, err: cannot parse valid nova server ID from providerId '', requeuing in node workqueue
E0818 02:18:53.950782       1 controller.go:165] error syncing 'br709bt-b5564-6jgdx-worker-0-c955q': error retrieving the private IP configuration for node: br709bt-b5564-6jgdx-worker-0-c955q, err: cannot parse valid nova server ID from providerId '', requeuing in node workqueue
E0818 02:18:54.017583       1 controller.go:165] error syncing 'br709bt-b5564-6jgdx-worker-0-sl9jn': error retrieving the private IP configuration for node: br709bt-b5564-6jgdx-worker-0-sl9jn, err: cannot parse valid nova server ID from providerId '', requeuing in node workqueue
E0818 02:18:54.271967       1 controller.go:165] error syncing 'br709bt-b5564-6jgdx-worker-0-c955q': error retrieving the private IP configuration for node: br709bt-b5564-6jgdx-worker-0-c955q, err: cannot parse valid nova server ID from providerId '', requeuing in node workqueue
E0818 02:18:54.338944       1 controller.go:165] error syncing 'br709bt-b5564-6jgdx-worker-0-sl9jn': error retrieving the private IP configuration for node: br709bt-b5564-6jgdx-worker-0-sl9jn, err: cannot parse valid nova server ID from providerId '', requeuing in node workqueue
E0818 02:18:54.916988       1 controller.go:165] error syncing 'br709bt-b5564-6jgdx-worker-0-c955q': error retrieving the private IP configuration for node: br709bt-b5564-6jgdx-worker-0-c955q, err: cannot parse valid nova server ID from providerId '', requeuing in node workqueue
E0818 02:18:54.982211       1 controller.go:165] error syncing 'br709bt-b5564-6jgdx-worker-0-sl9jn': error retrieving the private IP configuration for node: br709bt-b5564-6jgdx-worker-0-sl9jn, err: cannot parse valid nova server ID from providerId '', requeuing in node workqueue


Actual results:


Expected results:


Additional info:


Description of problem: Knative tests were disabled due to https://issues.redhat.com/browse/OCPBUGS-190 to unblock the queue and should now be re-enabled

https://coreos.slack.com/archives/C6A3NV5J9/p1660659719046909 

https://github.com/openshift/console/pull/11956#discussion_r948075848 

Version-Release number of selected component (if applicable):

How reproducible:

Steps to Reproduce:
1.
2.
3.

Actual results:

Expected results:

Additional info:

Currently, the AWS actuator has a static list of instance types embedded in it. This means that as new instance types are added, we have to continually update this list.

Ideally, we could fetch this information from the AWS API as we do in GCP.

DoD:

  • Investigate availability of instance memory and CPU capacity as an API on AWS
  • Determine if we can use this for the autoscaling scale from zero annotations
  • If possible, implement the change.
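As a rough illustration of the investigation item above, here is a minimal sketch of querying instance capacity from the EC2 API (using the aws-sdk-go v1 client; the region, instance type, and error handling are illustrative and not the actuator's actual code):

package main

import (
	"fmt"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/ec2"
)

func main() {
	// Build an EC2 client from the default credentials chain.
	sess := session.Must(session.NewSession(aws.NewConfig().WithRegion("us-east-1")))
	client := ec2.New(sess)

	// Ask AWS for the capacity of a specific instance type instead of
	// relying on a static, hand-maintained list.
	out, err := client.DescribeInstanceTypes(&ec2.DescribeInstanceTypesInput{
		InstanceTypes: []*string{aws.String("m5.large")},
	})
	if err != nil {
		panic(err)
	}

	for _, it := range out.InstanceTypes {
		fmt.Printf("type=%s vcpus=%d memMiB=%d\n",
			aws.StringValue(it.InstanceType),
			aws.Int64Value(it.VCpuInfo.DefaultVCpus),
			aws.Int64Value(it.MemoryInfo.SizeInMiB))
	}
}

The vCPU and memory values returned this way are the kind of data the scale-from-zero annotations would need.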

Description of problem:

According to https://issues.redhat.com/browse/OCPBUGS-705 (thanks to Junyun for sharing the test env/result for the install part), we need the fix in vsphere-problem-detector; it currently reports the following privileges as missing when using a pre-existing folder and/or resource pool with ReadOnly permission:
  
1. vcenter cluster set ReadOnly permission: 
I0902 10:07:50.324782       1 vsphere_check.go:244] CheckComputeClusterPermissions:jima-permission-q84s8-worker-86gd4 failed: missing privileges for compute cluster workloads: Resource.AssignVMToPool, VApp.AssignResourcePool, VApp.Import, VirtualMachine.Config.AddNewDisk


2. datacenter set ReadOnly permission:
I0902 08:09:19.462001       1 vsphere_check.go:225] CheckAccountPermissions failed: missing privileges for datacenter OCP-DC: Resource.AssignVMToPool, VApp.Import, VirtualMachine.Config.AddExistingDisk, VirtualMachine.Config.AddNewDisk, VirtualMachine.Config.AddRemoveDevice, VirtualMachine.Config.AdvancedConfig, VirtualMachine.Config.Annotation, VirtualMachine.Config.CPUCount, VirtualMachine.Config.DiskExtend, VirtualMachine.Config.DiskLease, VirtualMachine.Config.EditDevice, VirtualMachine.Config.Memory, VirtualMachine.Config.RemoveDisk, VirtualMachine.Config.Rename, VirtualMachine.Config.ResetGuestInfo, VirtualMachine.Config.Resource, VirtualMachine.Config.Settings, VirtualMachine.Config.UpgradeVirtualHardware, VirtualMachine.Interact.GuestControl, VirtualMachine.Interact.PowerOff, VirtualMachine.Interact.PowerOn, VirtualMachine.Interact.Reset, VirtualMachine.Inventory.Create, VirtualMachine.Inventory.CreateFromExisting, VirtualMachine.Inventory.Delete, VirtualMachine.Provisioning.Clone, VirtualMachine.Provisioning.DeployTemplate, VirtualMachine.Provisioning.MarkAsTemplate, Folder.Create, Folder.Delete 

Version-Release number of selected component (if applicable):

4.12.0-0.nightly-2022-09-02-194931

How reproducible:

Always 

Steps to Reproduce:

See Description of problem

Actual results:

The vsphere-problem-detector operator reports missing privileges when using a pre-existing folder and/or resource pool with ReadOnly permission

Expected results:

The vsphere-problem-detector operator should not report missing privileges in that case.

Additional info:

 

Description of problem:

Network policy code has some problems; most of them are races, so they can be difficult to reproduce and verify. Here is the list:

1. all kinds of add/delete port to/from default deny port group failures, possible symptoms:
  - port should’ve been added to default deny port group, but wasn’t: connections that should’ve been dropped are allowed
  - port should’ve been deleted from default deny port group, but wasn’t: connections that should be allowed are dropped
  - db ops failures when an attempt to add/delete port to/from default deny port group fails, e.g. because this operation already was done
2. default deny port group was overwritten when 2 network policies are created in a namespace at the same time. Can lead to ports not being added to the default deny port group => denied connections will be allowed
3. handle error when getting local pod from the cache fails, possible symptoms
  - "Failed to get LSP after multiple retries for pod %s/%s for networkPolicy" log message
  - pod is not added to netpol port groups, network policy is not applied
4. creating deleted namespace via ensureNamespaceLocked, symptoms:
  - namespace was deleted, but address set is present in the db
5. policy acl loglevel update wasn’t applied, possible symptoms:
  - netpol acl log level isn’t set/updated to namespace loglevel
6. netpol cleanup failures, symptoms:
  - network policy failed to be deleted, something is still left in the db, error messages like
  - "failed to destroy network policy"
  - "Rollback of default port groups and acls for policy: %s/%s failed, Unable to ensure namespace for network policy"
7. concurrent write to sets.String - this will panic, so you won't miss it
8. retry for network policy handler after network policy was deleted, you should see failures saying that some network policy related object is nil or doesn’t exist, e.g.
  - "peer AddressSet is nil, cannot add <object>"
9. host network and completed pods selected by network policy can produce error logs, no real harm
  - "Failed to get LSP for pod <namespace>/<name> for networkPolicy %s refetching err"
10. namespace pod handlers are never stopped, can affect memory usage and look like a memory leak
11. add local pod failure, since netpol port group is not committed to db yet, error looks like
  - "Failed to create *factory.localPodSelector <name>, error: object not found"

Version-Release number of selected component (if applicable):

 

How reproducible:

 

Steps to Reproduce:

Example 1
1. Create network policy with [in/e]gress selector that applies to a namespace labeled project: myproject
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: test-network-policy
  namespace: test
spec:
  podSelector: {}
  policyTypes:
    - Ingress
  ingress:
    - from:
        - namespaceSelector:
            matchLabels:
              project: myproject

2. Use oc apply to delete the network policy and create a pod in a namespace labeled project: myproject at the same time
3. check ovnkube-master logs for "peer AddressSet is nil, cannot add peer pod(s)"; this should retry with the same error 15 times
4. This may not work on the first try, since we need to hit a specific order of network policy delete and pod add handling
5. With the new version no error messages should be present

Example 2
1. create network policy that applies to a namespace test
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: test-network-policy
  namespace: test
spec:
  podSelector: {}
  policyTypes:
    - Ingress
  ingress:
2. Create host network pod in namespace test
3. Check 15 logs saying "Failed to get LSP for pod %s/%s for networkPolicy %s refetching err: "
4. check final log "Failed to get LSP after multiple retries for pod %s/%s for networkPolicy"
5. With the new version no error message should be present

All the other cases are difficult to reproduce; running some standard network policy tests and making sure everything works should be a good verification.

Actual results:

 

Expected results:

 

Additional info:

 

This is a clone of issue OCPBUGS-6647. The following is the description of the original issue:

Description of problem:

Resource type drop-down menu item 'Last used' is in English

Version-Release number of selected component (if applicable):

4.12

How reproducible:

 

Steps to Reproduce:

1. Navigate to kube:admin -> User Preferences -> Applications
2. Click on the Resource type drop-down

Actual results:

Content is in English

Expected results:

Content should be in the target language

Additional info:

Screenshot reference provided

Description of problem:

During an upgrade from 4.12.0 to 4.12.1 a customer has observed crashlooping ovn-master pods with the following error message

$ oc logs -n openshift-ovn-kubernetes ovnkube-master-bx99r -c ovnkube-master --tail=20 -p
:Transaction causes multiple rows in "IGMP_Group" table to have identical values (mrouters, 038b16fa-6aba-4244-9d4f-00a1e2cbf9a2, and []) for index on columns "address", "datapath", and "chassis".
First row, with UUID 7e9a18fa-e58c-4547-a7cb-afa934b6cdc9, had the following index values before the transaction: mrouters, 038b16fa-6aba-4244-9d4f-00a1e2cbf9a2, and d9755997-e909-4d0c-8770-82a902d69a90.
Second row, with UUID 84da3622-3ac7-41f0-a6b5-536a2d5f9137, had the following index values before the transaction: mrouters, 038b16fa-6aba-4244-9d4f-00a1e2cbf9a2, and 578d4dd9-cc02-4bcc-8a9c-08dcc3a94190. UUID:{GoUUID:} Rows:[]}] and errors []: constraint violation:
Transaction causes multiple rows in "IGMP_Group" table to have identical values (mrouters, 038b16fa-6aba-4244-9d4f-00a1e2cbf9a2, and []) for index on columns "address", "datapath", and "chassis".
First row, with UUID 7e9a18fa-e58c-4547-a7cb-afa934b6cdc9, had the following index values before the transaction: mrouters, 038b16fa-6aba-4244-9d4f-00a1e2cbf9a2, and d9755997-e909-4d0c-8770-82a902d69a90.
Second row, with UUID 84da3622-3ac7-41f0-a6b5-536a2d5f9137, had the following index values before the transaction: mrouters, 038b16fa-6aba-4244-9d4f-00a1e2cbf9a2, and 578d4dd9-cc02-4bcc-8a9c-08dcc3a94190.

Version-Release number of selected component (if applicable):

4.12.0

How reproducible:

Unknown

Steps to Reproduce:

1. Upgrade from 4.12.0 to 4.12.1
2.
3.

Actual results:

crashlooping ovnkube-master pods

Expected results:

functional ovnkube-master pods

Additional info:

This cluster was upgraded from 4.11 to 4.12.0 then to 4.12.1.
The attached case has a must-gather.

Description of problem:

When providing the openshift-install agent create command with installconfig + agentconfig manifests that contain the InstallConfig Proxy section, the Proxy configuration does not get applied.

Version-Release number of selected component (if applicable):

4.12

How reproducible:

100%

Steps to Reproduce:

1. Define InstallConfig with Proxy section
2. openshift-install agent create image
3. Boot ISO
4. Check /etc/assisted/manifests for InfraEnv to contain its Proxy section

Actual results:

Missing proxy

Expected results:

Proxy present and matching InstallConfig's

Additional info:
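For illustration, here is a sketch of the kind of copy the agent installer is expected to perform. The types below are simplified stand-ins for the real install-config Proxy section and InfraEnv spec, not the installer's actual code:

package main

import "fmt"

// Proxy is a simplified stand-in for the install-config Proxy section.
type Proxy struct {
	HTTPProxy  string
	HTTPSProxy string
	NoProxy    string
}

// InfraEnvSpec is a simplified stand-in for the generated InfraEnv spec.
type InfraEnvSpec struct {
	Proxy *Proxy
}

// applyProxy is what this bug says is missing: the InstallConfig proxy
// settings should be carried over into the generated InfraEnv manifest.
func applyProxy(installConfigProxy *Proxy, spec *InfraEnvSpec) {
	if installConfigProxy != nil {
		spec.Proxy = installConfigProxy
	}
}

func main() {
	icProxy := &Proxy{HTTPProxy: "http://proxy.example.com:3128", NoProxy: ".cluster.local"}
	var spec InfraEnvSpec
	applyProxy(icProxy, &spec)
	fmt.Printf("InfraEnv proxy: %+v\n", *spec.Proxy)
}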

 

Searching recent 4.12 CI, there are a number of failures in the clusteroperator/machine-config should not change condition/Available test case:

$ w3m -dump -cols 200 'https://search.ci.openshift.org/?search=clusteroperator%2Fmachine-config+should+not+change+condition%2FAvailable&maxAge=48h&type=junit' | grep '4[.]12.*failures match' | sort
periodic-ci-openshift-release-master-ci-4.12-e2e-aws-ovn-upgrade (all) - 129 runs, 53% failed, 6% of failures match = 3% impact
periodic-ci-openshift-release-master-ci-4.12-e2e-aws-sdn-techpreview-serial (all) - 6 runs, 50% failed, 67% of failures match = 33% impact
periodic-ci-openshift-release-master-ci-4.12-e2e-azure-ovn-upgrade (all) - 60 runs, 50% failed, 3% of failures match = 2% impact
periodic-ci-openshift-release-master-ci-4.12-upgrade-from-stable-4.11-e2e-aws-ovn-upgrade (all) - 129 runs, 56% failed, 8% of failures match = 5% impact
periodic-ci-openshift-release-master-ci-4.12-upgrade-from-stable-4.11-e2e-azure-sdn-upgrade (all) - 129 runs, 69% failed, 12% of failures match = 9% impact
periodic-ci-openshift-release-master-ci-4.12-upgrade-from-stable-4.11-e2e-gcp-ovn-rt-upgrade (all) - 8 runs, 38% failed, 67% of failures match = 25% impact
periodic-ci-openshift-release-master-ci-4.12-upgrade-from-stable-4.11-e2e-gcp-ovn-upgrade (all) - 60 runs, 57% failed, 6% of failures match = 3% impact
periodic-ci-openshift-release-master-ci-4.12-upgrade-from-stable-4.11-e2e-gcp-sdn-upgrade (all) - 12 runs, 42% failed, 20% of failures match = 8% impact
periodic-ci-openshift-release-master-nightly-4.12-e2e-aws-sdn-upgrade (all) - 60 runs, 40% failed, 4% of failures match = 2% impact
periodic-ci-openshift-release-master-nightly-4.12-e2e-metal-ipi-sdn-serial-virtualmedia (all) - 6 runs, 100% failed, 17% of failures match = 17% impact
periodic-ci-openshift-release-master-nightly-4.12-e2e-metal-ipi-sdn-upgrade (all) - 6 runs, 67% failed, 25% of failures match = 17% impact
periodic-ci-openshift-release-master-nightly-4.12-e2e-metal-ipi-serial-ovn-dualstack (all) - 6 runs, 67% failed, 25% of failures match = 17% impact
periodic-ci-openshift-release-master-nightly-4.12-e2e-vsphere-ovn-techpreview-serial (all) - 9 runs, 56% failed, 20% of failures match = 11% impact
periodic-ci-openshift-release-master-nightly-4.12-upgrade-from-stable-4.11-e2e-metal-ipi-upgrade (all) - 6 runs, 100% failed, 17% of failures match = 17% impact
periodic-ci-openshift-release-master-nightly-4.12-upgrade-from-stable-4.11-e2e-metal-ipi-upgrade-ovn-ipv6 (all) - 6 runs, 83% failed, 20% of failures match = 17% impact
periodic-ci-openshift-release-master-okd-4.12-e2e-vsphere (all) - 25 runs, 100% failed, 4% of failures match = 4% impact
release-openshift-ocp-installer-e2e-gcp-serial-4.12 (all) - 6 runs, 83% failed, 20% of failures match = 17% impact

It doesn't seem like a reason is getting set:

$ curl -s 'https://search.ci.openshift.org/search?name=periodic-ci-openshift-release-master-ci-4.12-e2e-aws-ovn-upgrade&search=clusteroperator%2Fmachine-config+should+not+change+condition%2FAvailable&maxAge=48h&type=junit&context=15' | jq -r 'to_entries[].value | to_entries[].value[].context[]' | grep 'clusteroperator/machine-config condition/Available status/False reason'
Aug 31 01:13:56.724 - 698s  E clusteroperator/machine-config condition/Available status/False reason/Cluster not available for [{operator 4.12.0-0.ci-2022-08-30-194744}]
Aug 31 09:09:15.460 - 1078s E clusteroperator/machine-config condition/Available status/False reason/Cluster not available for [{operator 4.12.0-0.ci-2022-08-30-194744}]
Sep 01 03:31:24.808 - 1131s E clusteroperator/machine-config condition/Available status/False reason/Cluster not available for [{operator 4.12.0-0.ci-2022-08-31-111359}]
Sep 01 07:15:58.029 - 1085s E clusteroperator/machine-config condition/Available status/False reason/Cluster not available for [{operator 4.12.0-0.ci-2022-08-31-111359}]

Example runs in the job I've randomly selected to drill into:

$ curl -s 'https://search.ci.openshift.org/search?name=periodic-ci-openshift-release-master-ci-4.12-e2e-aws-ovn-upgrade&search=clusteroperator%2Fmachine-config+should+not+change+condition%2FAvailable&maxAge=48h&type=junit' | jq -r 'keys[]'
https://prow.ci.openshift.org/view/gs/origin-ci-test/logs/periodic-ci-openshift-release-master-ci-4.12-e2e-aws-ovn-upgrade/1564757706458271744
https://prow.ci.openshift.org/view/gs/origin-ci-test/logs/periodic-ci-openshift-release-master-ci-4.12-e2e-aws-ovn-upgrade/1564879945233076224
https://prow.ci.openshift.org/view/gs/origin-ci-test/logs/periodic-ci-openshift-release-master-ci-4.12-e2e-aws-ovn-upgrade/1565158084484009984
https://prow.ci.openshift.org/view/gs/origin-ci-test/logs/periodic-ci-openshift-release-master-ci-4.12-e2e-aws-ovn-upgrade/1565212566194491392

Drilling into that last run, the Available=False window spanned the whole pool-update phase:

And details from the origin's monitor:

$ curl -s https://gcsweb-ci.apps.ci.l2s4.p1.openshiftapps.com/gcs/origin-ci-test/logs/periodic-ci-openshift-release-master-ci-4.12-e2e-aws-ovn-upgrade/1565212566194491392/artifacts/e2e-aws-ovn-upgrade/openshift-e2e-test/build-log.txt | grep clusteroperator/machine-config
Sep 01 07:15:57.629 E clusteroperator/machine-config condition/Degraded status/True reason/RenderConfigFailed changed: Failed to resync 4.12.0-0.ci-2022-08-31-111359 because: refusing to read osImageURL version "4.12.0-0.ci-2022-09-01-053740", operator version "4.12.0-0.ci-2022-08-31-111359"
Sep 01 07:15:57.629 - 49s   E clusteroperator/machine-config condition/Degraded status/True reason/Failed to resync 4.12.0-0.ci-2022-08-31-111359 because: refusing to read osImageURL version "4.12.0-0.ci-2022-09-01-053740", operator version "4.12.0-0.ci-2022-08-31-111359"
Sep 01 07:15:58.029 E clusteroperator/machine-config condition/Available status/False changed: Cluster not available for [{operator 4.12.0-0.ci-2022-08-31-111359}]
Sep 01 07:15:58.029 - 1085s E clusteroperator/machine-config condition/Available status/False reason/Cluster not available for [{operator 4.12.0-0.ci-2022-08-31-111359}]
Sep 01 07:16:47.000 I /machine-config reason/OperatorVersionChanged clusteroperator/machine-config-operator started a version change from [{operator 4.12.0-0.ci-2022-08-31-111359}] to [{operator 4.12.0-0.ci-2022-09-01-053740}]
Sep 01 07:16:47.377 W clusteroperator/machine-config condition/Progressing status/True changed: Working towards 4.12.0-0.ci-2022-09-01-053740
Sep 01 07:16:47.377 - 1037s W clusteroperator/machine-config condition/Progressing status/True reason/Working towards 4.12.0-0.ci-2022-09-01-053740
Sep 01 07:16:47.405 W clusteroperator/machine-config condition/Degraded status/False changed: 
Sep 01 07:18:02.614 W clusteroperator/machine-config condition/Upgradeable status/False reason/PoolUpdating changed: One or more machine config pools are updating, please see `oc get mcp` for further details
Sep 01 07:34:03.000 I /machine-config reason/OperatorVersionChanged clusteroperator/machine-config-operator version changed from [{operator 4.12.0-0.ci-2022-08-31-111359}] to [{operator 4.12.0-0.ci-2022-09-01-053740}]
Sep 01 07:34:03.699 W clusteroperator/machine-config condition/Available status/True changed: Cluster has deployed [{operator 4.12.0-0.ci-2022-08-31-111359}]
Sep 01 07:34:03.715 W clusteroperator/machine-config condition/Upgradeable status/True changed: 
Sep 01 07:34:04.065 I clusteroperator/machine-config versions: operator 4.12.0-0.ci-2022-08-31-111359 -> 4.12.0-0.ci-2022-09-01-053740
Sep 01 07:34:04.663 W clusteroperator/machine-config condition/Progressing status/False changed: Cluster version is 4.12.0-0.ci-2022-09-01-053740
[bz-Machine Config Operator] clusteroperator/machine-config should not change condition/Available
[bz-Machine Config Operator] clusteroperator/machine-config should not change condition/Degraded

No idea if whatever was happening there is the same thing that was happening in other runs, and I haven't checked 4.11 and earlier either. The test-case is non-fatal, so it doesn't break CI, but it can cause noise like ClusterOperatorDown if it continues for 10 or more minutes. Which PromeCIeus says actually fired in this run, although apparently the origin monitors didn't notice to complain:

So parallel asks (and I'm happy to shard into separate bugs, if that's helpful):

  • Set a reason when you go Available=False (see the sketch after this list), so Telemetry can collect information to aggregate and hunt for frequent reasons to prioritize improvements.
  • Figure out at least one reason why we're going Available=False in apparently healthy CI runs. If we find and fix one reason, we can circle back later to see if there are more that remain unfixed.
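A hedged sketch of what the first ask amounts to, using the openshift/api config/v1 condition types; the reason string here is made up for illustration, not the MCO's actual reason:

package main

import (
	"fmt"

	configv1 "github.com/openshift/api/config/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// When the operator reports Available=False it should also set a
	// machine-readable Reason so Telemetry can aggregate the failure modes.
	cond := configv1.ClusterOperatorStatusCondition{
		Type:               configv1.OperatorAvailable,
		Status:             configv1.ConditionFalse,
		Reason:             "PoolUpdating", // illustrative value only
		Message:            "Cluster not available for [{operator 4.12.0-0.ci-2022-08-31-111359}]",
		LastTransitionTime: metav1.Now(),
	}
	fmt.Printf("%s=%s reason=%s\n", cond.Type, cond.Status, cond.Reason)
}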

Description of problem:
When opening the Devfile sample developer catalog, switching the project in another browser tab, and then opening a Devfile sample link in a new tab, the current project context gets lost.

Version-Release number of selected component (if applicable):
4.12, expecting that this happen also in older versions

How reproducible:
Always

Steps to Reproduce:
1. Switch to the developer perspective, navigate to Add > Samples
2. Open a new browser tab and create a new project
3. Ctrl+click a sample in the first tab.

Actual results:
The project has also changed in the "Import sample" page

Expected results:
The project should be used also for the new "Import sample" page

Additional info:
We had this issue earlier for other catalog entries. Other samples already work fine; only the Devfile sample links don't contain the current namespace.

Description of problem:

To address: 'Static Pod is managed but errored" err="managed container xxx does not have Resource.Requests'

Version-Release number of selected component (if applicable):

4.12

How reproducible:

 

Steps to Reproduce:

1.
2.
3.

Actual results:

 

Expected results:

 

Additional info:
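For context on the message above, a minimal sketch (container name, image, and values are illustrative) of a container spec that does declare Resource.Requests, which is what the check is complaining is missing:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
)

func main() {
	// A managed static-pod container should carry CPU and memory requests;
	// the error is reported when Requests is empty.
	c := corev1.Container{
		Name:  "example-container", // illustrative name
		Image: "registry.example.com/example:latest",
		Resources: corev1.ResourceRequirements{
			Requests: corev1.ResourceList{
				corev1.ResourceCPU:    resource.MustParse("10m"),
				corev1.ResourceMemory: resource.MustParse("50Mi"),
			},
		},
	}
	fmt.Println(c.Name, "requests:", c.Resources.Requests)
}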

 

This is a clone of issue OCPBUGS-4350. The following is the description of the original issue:

Steps to reproduce:
Release: 4.13.0-0.nightly-2022-11-30-183109 (latest 4.12 nightly as well)
Create a HyperShift cluster on AWS and wait until it has completed rolling out
Upgrade the HostedCluster by updating its release image to a newer one
Observe the 'network' clusteroperator resource in the guest cluster as well as the 'version' clusterversion resource in the guest cluster.
When the clusteroperator resource reports the upgraded release and the clusterversion resource reports the new release as applied, take a look at the ovnkube-master statefulset in the control plane namespace of the management cluster. It is still not finished rolling out.

Expected: that the network clusteroperator reports the new version only when all components have finished rolling out.

Description of problem:

The Insights operator gathers clusteroperators' related objects from the operators.openshift.io group. IngressControllers are now missing, because it is a namespaced resource and the "default" name is not provided in the related objects of the ingress clusteroperator.

Version-Release number of selected component (if applicable):

 

How reproducible:

 

Steps to Reproduce:

1.
2.
3.

Actual results:

 

Expected results:

 

Additional info:
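For illustration, a minimal sketch of the relatedObjects entry the ingress clusteroperator would need so the default IngressController can be gathered. It uses the openshift/api config/v1 ObjectReference type; the API group and namespace strings are my assumptions, not taken from the bug:

package main

import (
	"fmt"

	configv1 "github.com/openshift/api/config/v1"
)

func main() {
	// A namespaced related object needs both Namespace and Name so the
	// Insights operator can resolve it; the "default" ingresscontroller is
	// what is currently missing.
	ref := configv1.ObjectReference{
		Group:     "operator.openshift.io",          // assumed API group of IngressController
		Resource:  "ingresscontrollers",
		Namespace: "openshift-ingress-operator",     // assumed namespace of the default ingresscontroller
		Name:      "default",
	}
	fmt.Printf("%s/%s %s/%s\n", ref.Group, ref.Resource, ref.Namespace, ref.Name)
}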

 

Description of problem:
If the cluster install failed and no tag is attached to the VMs, running ./openshift-install destroy cluster gets stuck; for details please see openshift-install.log
...
time="2022-09-28T08:19:14-04:00" level=debug msg="Delete Folder"
time="2022-09-28T08:19:14-04:00" level=debug msg="Find attached Folder on tag"
time="2022-09-28T08:19:15-04:00" level=debug msg="Folder: Expected Folder sgao-rtf6v to be empty"
time="2022-09-28T08:19:25-04:00" level=debug msg="Power Off Virtual Machines"
time="2022-09-28T08:19:25-04:00" level=debug msg="Find attached VirtualMachine on tag"
time="2022-09-28T08:19:25-04:00" level=debug msg="Delete Virtual Machines"
time="2022-09-28T08:19:25-04:00" level=debug msg="Find attached VirtualMachine on tag"
time="2022-09-28T08:19:25-04:00" level=debug msg="Delete Folder"
time="2022-09-28T08:19:25-04:00" level=debug msg="Find attached Folder on tag"
time="2022-09-28T08:19:25-04:00" level=debug msg="Folder: Expected Folder sgao-rtf6v to be empty"
time="2022-09-28T08:19:35-04:00" level=debug msg="Power Off Virtual Machines"
time="2022-09-28T08:19:35-04:00" level=debug msg="Find attached VirtualMachine on tag"
time="2022-09-28T08:19:35-04:00" level=debug msg="Delete Virtual Machines"
time="2022-09-28T08:19:35-04:00" level=debug msg="Find attached VirtualMachine on tag"
time="2022-09-28T08:19:35-04:00" level=debug msg="Delete Folder"

Version-Release number of selected component (if applicable):
4.12.0-0.nightly-2022-09-25-071630

How reproducible:
Always, when the cluster install failed and no tag is attached to the VMs

Steps to Reproduce:
1. cluster install failed and no tag attached to vm
2. run ./openshift-install destroy cluster
3.

Actual results:
installer destroy gets stuck

Expected results:
installer destroy should set a timeout and be able to quit in such a situation

Additional info:
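As a rough illustration of the expected behaviour, here is a sketch of bounding the retry loop with a context timeout so the destroyer can give up instead of looping forever. This is purely illustrative and not the installer's actual destroy code:

package main

import (
	"context"
	"errors"
	"fmt"
	"time"
)

// deleteTaggedVMs stands in for one destroy step; here it never succeeds,
// mimicking the case where no tag was ever attached to the VMs.
func deleteTaggedVMs(ctx context.Context) error {
	return errors.New("no tagged virtual machines found")
}

func main() {
	// Bound the whole destroy loop so it cannot get stuck indefinitely.
	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()

	for {
		if err := deleteTaggedVMs(ctx); err == nil {
			fmt.Println("destroy finished")
			return
		}
		select {
		case <-ctx.Done():
			fmt.Println("destroy timed out:", ctx.Err())
			return
		case <-time.After(1 * time.Second): // retry interval
		}
	}
}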

This is a clone of issue OCPBUGS-1627. The following is the description of the original issue:

Description of problem:
Two issues when setting a user-defined folder in failureDomain.
1. The installer gets an error when setting folder to the path of a user-defined folder in failureDomain.

failureDomains setting in install-config.yaml:

    failureDomains:
    - name: us-east-1
      region: us-east
      zone: us-east-1a
      server: xxx
      topology:
        datacenter: IBMCloud
        computeCluster: /IBMCloud/host/vcs-mdcnc-workload-1
        networks:
        - multi-zone-qe-dev-1
        datastore: multi-zone-ds-1
        folder: /IBMCloud/vm/qe-jima
    - name: us-east-2
      region: us-east
      zone: us-east-2a
      server: xxx
      topology:
        datacenter: IBMCloud
        computeCluster: /IBMCloud/host/vcs-mdcnc-workload-2
        networks:
        - multi-zone-qe-dev-1
        datastore: multi-zone-ds-2
        folder: /IBMCloud/vm/qe-jima
    - name: us-east-3
      region: us-east
      zone: us-east-3a
      server: xxx
      topology:
        datacenter: IBMCloud
        computeCluster: /IBMCloud/host/vcs-mdcnc-workload-3
        networks:
        - multi-zone-qe-dev-1
        datastore: workload_share_vcsmdcncworkload3_joYiR
        folder: /IBMCloud/vm/qe-jima
    - name: us-west-1
      region: us-west
      zone: us-west-1a
      server: ibmvcenter.vmc-ci.devcluster.openshift.com
      topology:
        datacenter: datacenter-2
        computeCluster: /datacenter-2/host/vcs-mdcnc-workload-4
        networks:
        - multi-zone-qe-dev-1
        datastore: workload_share_vcsmdcncworkload3_joYiR

Error message in terraform after completing ova image import:

DEBUG vsphereprivate_import_ova.import[0]: Still creating... [1m40s elapsed] 
DEBUG vsphereprivate_import_ova.import[3]: Creation complete after 1m40s [id=vm-367860] 
DEBUG vsphereprivate_import_ova.import[1]: Creation complete after 1m49s [id=vm-367863] 
DEBUG vsphereprivate_import_ova.import[0]: Still creating... [1m50s elapsed] 
DEBUG vsphereprivate_import_ova.import[2]: Still creating... [1m50s elapsed] 
DEBUG vsphereprivate_import_ova.import[2]: Still creating... [2m0s elapsed] 
DEBUG vsphereprivate_import_ova.import[0]: Still creating... [2m0s elapsed] 
DEBUG vsphereprivate_import_ova.import[2]: Creation complete after 2m2s [id=vm-367862] 
DEBUG vsphereprivate_import_ova.import[0]: Still creating... [2m10s elapsed] 
DEBUG vsphereprivate_import_ova.import[0]: Creation complete after 2m20s [id=vm-367861] 
DEBUG data.vsphere_virtual_machine.template[0]: Reading... 
DEBUG data.vsphere_virtual_machine.template[3]: Reading... 
DEBUG data.vsphere_virtual_machine.template[1]: Reading... 
DEBUG data.vsphere_virtual_machine.template[2]: Reading... 
DEBUG data.vsphere_virtual_machine.template[3]: Read complete after 1s [id=42054e33-85d6-e310-7f4f-4c52a73f8338] 
DEBUG data.vsphere_virtual_machine.template[1]: Read complete after 2s [id=42053e17-cc74-7c89-f5d1-059c9030ecc7] 
DEBUG data.vsphere_virtual_machine.template[2]: Read complete after 2s [id=4205019f-26d8-f9b4-ac0c-2c073fd70b35] 
DEBUG data.vsphere_virtual_machine.template[0]: Read complete after 2s [id=4205eaf2-c727-c647-ad44-bd9ad7023c56] 
ERROR                                              
ERROR Error: error trying to determine parent targetFolder: folder '/IBMCloud/vm//IBMCloud/vm' not found 
ERROR                                              
ERROR   with vsphere_folder.folder["IBMCloud-/IBMCloud/vm/qe-jima"], 
ERROR   on main.tf line 61, in resource "vsphere_folder" "folder": 
ERROR   61: resource "vsphere_folder" "folder" {   
ERROR                                              
ERROR failed to fetch Cluster: failed to generate asset "Cluster": failure applying terraform for "pre-bootstrap" stage: failed to create cluster: failed to apply Terraform: exit status 1 
ERROR                                              
ERROR Error: error trying to determine parent targetFolder: folder '/IBMCloud/vm//IBMCloud/vm' not found 
ERROR                                              
ERROR   with vsphere_folder.folder["IBMCloud-/IBMCloud/vm/qe-jima"], 
ERROR   on main.tf line 61, in resource "vsphere_folder" "folder": 
ERROR   61: resource "vsphere_folder" "folder" {   
ERROR                                              
ERROR   

2. The installer panics when setting folder to a user-defined folder name in failure domains.

failure domain in install-config.yaml

    failureDomains:
    - name: us-east-1
      region: us-east
      zone: us-east-1a
      server: xxx
      topology:
        datacenter: IBMCloud
        computeCluster: /IBMCloud/host/vcs-mdcnc-workload-1
        networks:
        - multi-zone-qe-dev-1
        datastore: multi-zone-ds-1
        folder: qe-jima
    - name: us-east-2
      region: us-east
      zone: us-east-2a
      server: xxx
      topology:
        datacenter: IBMCloud
        computeCluster: /IBMCloud/host/vcs-mdcnc-workload-2
        networks:
        - multi-zone-qe-dev-1
        datastore: multi-zone-ds-2
        folder: qe-jima
    - name: us-east-3
      region: us-east
      zone: us-east-3a
      server: xxx
      topology:
        datacenter: IBMCloud
        computeCluster: /IBMCloud/host/vcs-mdcnc-workload-3
        networks:
        - multi-zone-qe-dev-1
        datastore: workload_share_vcsmdcncworkload3_joYiR
        folder: qe-jima
    - name: us-west-1
      region: us-west
      zone: us-west-1a
      server: xxx
      topology:
        datacenter: datacenter-2
        computeCluster: /datacenter-2/host/vcs-mdcnc-workload-4
        networks:
        - multi-zone-qe-dev-1
        datastore: workload_share_vcsmdcncworkload3_joYiR                                  

panic error message in installer:

INFO Obtaining RHCOS image file from 'https://rhcos.mirror.openshift.com/art/storage/releases/rhcos-4.12/412.86.202208101039-0/x86_64/rhcos-412.86.202208101039-0-vmware.x86_64.ova?sha256=' 
INFO The file was found in cache: /home/user/.cache/openshift-installer/image_cache/rhcos-412.86.202208101039-0-vmware.x86_64.ova. Reusing... 
panic: runtime error: index out of range [1] with length 1goroutine 1 [running]:
github.com/openshift/installer/pkg/tfvars/vsphere.TFVars({{0xc0013bd068, 0x3, 0x3}, {0xc000b11dd0, 0x12}, {0xc000b11db8, 0x14}, {0xc000b11d28, 0x14}, {0xc000fe8fc0, ...}, ...})
    /go/src/github.com/openshift/installer/pkg/tfvars/vsphere/vsphere.go:79 +0x61b
github.com/openshift/installer/pkg/asset/cluster.(*TerraformVariables).Generate(0x1d1ed360, 0x5?)
    /go/src/github.com/openshift/installer/pkg/asset/cluster/tfvars.go:847 +0x4798
 

Based on the explanation of the folder field, it looks like a plain folder name should be ok. If using a folder name is not allowed, the installer needs to validate the folder and the explain output should be updated.

 

sh-4.4$ ./openshift-install explain installconfig.platform.vsphere.failureDomains.topology.folder
KIND:     InstallConfig
VERSION:  v1

RESOURCE: <string>
  folder is the name or inventory path of the folder in which the virtual machine is created/located.
 

 

 

Version-Release number of selected component (if applicable):

4.12.0-0.nightly-2022-09-20-095559

How reproducible:

always

Steps to Reproduce:

see description

Actual results:

installation has errors when a user-defined folder is set

Expected results:

installation is successful when a user-defined folder is set

Additional info:

 

Description of problem:

This PR: https://github.com/openshift/cluster-network-operator/pull/1612/files removed the fallback logic of checking for the host's kubeconfig file when apiserver-url.env was not populated on the machine. In IBM Cloud ROKS (both public cloud + Satellite (Hypershift)) this file is not populated. This means that any upgrade to 4.12 will result in the cluster network operator failing and will impact the cluster.

I am proposing the following plan: First, this PR is held till 4.13. Second, the IBM Cloud ROKS team will ensure from the initial release of 4.12 that this file is populated across its entire fleet of workers (4.12 and beyond). Holding this to 4.13 will allow a seamless upgrade experience when the user upgrades the control plane to 4.12 but the workers are still 4.11. Then, when the user goes to upgrade to 4.13, their workers will all be at 4.12, which is guaranteed to have this file, and the check for the host kubeconfig can be removed.

For full disclosure, it was brought up that we could push a daemonset across our entire fleet of 16,000+ ROKS clusters that just lays down the file, but that still introduces race conditions with the network operator and results in a significant resource increase of cluster workload across our entire fleet, which the plan proposed above would avoid.

Example on a ROKS on Satellite worker showing that this file does not exist (yet): 
[root@tyler-test-24 ~]# ls /etc/kubernetes/apiserver-url.env
ls: cannot access '/etc/kubernetes/apiserver-url.env': No such file or directory
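For reference, a minimal sketch of the kind of fallback that the PR removed: prefer apiserver-url.env when it exists, otherwise fall back to the on-host kubeconfig. The file paths and behaviour here are assumptions based on the description above, not the cluster-network-operator's actual code:

package main

import (
	"fmt"
	"os"
)

func apiServerConfigSource() string {
	// Preferred source; on IBM Cloud ROKS workers this file is not
	// populated today, as shown above.
	if _, err := os.Stat("/etc/kubernetes/apiserver-url.env"); err == nil {
		return "/etc/kubernetes/apiserver-url.env"
	}
	// Fallback that the PR removed: use the host kubeconfig instead.
	// The exact path is an assumption for illustration.
	return "/etc/kubernetes/kubeconfig"
}

func main() {
	fmt.Println("using API server config from:", apiServerConfigSource())
}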

Version-Release number of selected component (if applicable):

 

How reproducible:

 

Steps to Reproduce:

1.
2.
3.

Actual results:

 

Expected results:

 

Additional info:

 

Originally reported by lance5890 in issue https://github.com/openshift/cluster-etcd-operator/issues/1000

Under some circumstances the static pod machinery fails to populate the node status in time to generate the correct env variables for ETCD_URL_HOST, ETCD_NAME etc. The pods that come up will fail to accept those variables.

This is particularly pronounced in SNO topologies, leading to installation failures. 

The fix is to fail fast in the targetconfig/envvar controller to ensure the CEO goes degraded instead of silently failing on the rollout of an invalid static pod.
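A rough sketch of that fail-fast behaviour: return an error when the node status needed for ETCD_URL_HOST/ETCD_NAME is not yet populated, instead of rendering a static pod with empty variables. Names and structure are hypothetical, not the cluster-etcd-operator's actual code:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

// etcdEnvVars fails fast if the node's InternalIP is not available yet,
// so the operator can go Degraded instead of rolling out an invalid static pod.
func etcdEnvVars(node *corev1.Node) (map[string]string, error) {
	var ip string
	for _, addr := range node.Status.Addresses {
		if addr.Type == corev1.NodeInternalIP {
			ip = addr.Address
			break
		}
	}
	if ip == "" {
		return nil, fmt.Errorf("node %s has no InternalIP in status yet", node.Name)
	}
	return map[string]string{
		"ETCD_NAME":     node.Name,
		"ETCD_URL_HOST": ip,
	}, nil
}

func main() {
	node := &corev1.Node{} // empty status, as in the race described above
	if _, err := etcdEnvVars(node); err != nil {
		fmt.Println("going degraded:", err)
	}
}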

I saw the following while trying to debug an "unexpectedly found multiple equivalent ACLs" error.

Add a generic networkpolicy:

kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: allow-same-namespace
  namespace: nbc9-demo-project
spec:
  podSelector: {}
  ingress:
  - from:
    - podSelector: {}
  policyTypes:
  - Ingress

$ kubectl get pod ovnkube-master-pk89w -o jsonpath='{range .spec.containers[]} {@.image}'
quay.io/openshift/okd-content@sha256:79ee71e045a7b224a132f6c75b4220ec35b9a06049061a6bd9ca9fc976c412e5

[root@dev-nkjpp-master-2 ~]# ovnkube -v
I0609 17:33:34.930787 58 ovs.go:93] Maximum command line arguments set to: 191102
Version: 0.3.0
Git commit: 7bf36eea28fe66365d0dfdf8c39e3311ea14d19b
Git branch: release-4.10
Go version: go1.16.6
Build date: 2022-05-27
OS/Arch: linux amd64

Which then fails to apply, retries, and when the networkpolicy is deleted, the ovnkube-master pod segfaults:

I0609 17:00:26.653710 1 policy.go:1092] Adding network policy allow-same-namespace in namespace nbc9-demo-project
E0609 17:00:26.656858 1 ovn.go:753] Failed to create network policy nbc9-demo-project/allow-same-namespace, error: failed to create default port groups and acls for policy: nbc9-demo-project/allow-same-namespace, error: unexpectedly found multiple equivalent ACLs: [

{UUID:7b55ba0c-150f-4a63-9601-cfde25f29408 Action:drop Direction:from-lport ExternalIDs:map[default-deny-policy-type:Egress] Label:0 Log:false Match:inport == @a7830797310894963783_egressDefaultDeny Meter:0xc0010df310 Name:0xc0010df320 Options:map[apply-after-lb:true] Priority:1000 Severity:0xc0010df330}

{UUID:60cb946a-46e9-4623-9ba4-3cb35f018ed6 Action:drop Direction:from-lport ExternalIDs:map[default-deny-policy-type:Egress] Label:0 Log:false Match:inport == @a7830797310894963783_egressDefaultDeny Meter:0xc0010df390 Name:0xc0010df3d0 Options:map[apply-after-lb:true] Priority:1000 Severity:0xc0010df3e0}

]
I0609 17:00:51.437895 1 policy_retry.go:46] Network Policy Retry: nbc9-demo-project/allow-same-namespace retry network policy setup
I0609 17:00:51.437935 1 policy_retry.go:63] Network Policy Retry: Creating new policy for nbc9-demo-project/allow-same-namespace
I0609 17:00:51.437941 1 policy.go:1092] Adding network policy allow-same-namespace in namespace nbc9-demo-project
I0609 17:00:51.438174 1 policy_retry.go:65] Network Policy Retry create failed for nbc9-demo-project/allow-same-namespace, will try again later: failed to create default port groups and acls for policy: nbc9-demo-project/allow-same-namespace, error: unexpectedly found multiple equivalent ACLs: [

{UUID:60cb946a-46e9-4623-9ba4-3cb35f018ed6 Action:drop Direction:from-lport ExternalIDs:map[default-deny-policy-type:Egress] Label:0 Log:false Match:inport == @a7830797310894963783_egressDefaultDeny Meter:0xc002215e00 Name:0xc002215e70 Options:map[apply-after-lb:true] Priority:1000 Severity:0xc002215e80}

{UUID:7b55ba0c-150f-4a63-9601-cfde25f29408 Action:drop Direction:from-lport ExternalIDs:map[default-deny-policy-type:Egress] Label:0 Log:false Match:inport == @a7830797310894963783_egressDefaultDeny Meter:0xc0022b0310 Name:0xc0022b03a0 Options:map[apply-after-lb:true] Priority:1000 Severity:0xc000070ab0}

]
I0609 17:01:02.679219 1 policy.go:1174] Deleting network policy allow-same-namespace in namespace nbc9-demo-project

E0609 17:01:02.679407 1 runtime.go:78] Observed a panic: "invalid memory address or nil pointer dereference" (runtime error: invalid memory address or nil pointer dereference)
goroutine 249 [running]:
k8s.io/apimachinery/pkg/util/runtime.logPanic(0x1c19c80, 0x2e9a810)
/go/src/github.com/openshift/ovn-kubernetes/go-controller/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:74 +0x95
k8s.io/apimachinery/pkg/util/runtime.HandleCrash(0x0, 0x0, 0x0)
/go/src/github.com/openshift/ovn-kubernetes/go-controller/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:48 +0x86
panic(0x1c19c80, 0x2e9a810)
/usr/lib/golang/src/runtime/panic.go:965 +0x1b9
github.com/ovn-org/ovn-kubernetes/go-controller/pkg/ovn.(*Controller).destroyNetworkPolicy(0xc0022c2000, 0x0, 0xc000bb9000, 0x0, 0x0)
/go/src/github.com/openshift/ovn-kubernetes/go-controller/pkg/ovn/policy.go:1210 +0x55
github.com/ovn-org/ovn-kubernetes/go-controller/pkg/ovn.(*Controller).deleteNetworkPolicy(0xc0022c2000, 0xc002544f00, 0x0, 0x0, 0x0)
/go/src/github.com/openshift/ovn-kubernetes/go-controller/pkg/ovn/policy.go:1198 +0x43f
github.com/ovn-org/ovn-kubernetes/go-controller/pkg/ovn.(*Controller).WatchNetworkPolicy.func4(0x1e7e840, 0xc002544f00)
/go/src/github.com/openshift/ovn-kubernetes/go-controller/pkg/ovn/ovn.go:800 +0xae
k8s.io/client-go/tools/cache.ResourceEventHandlerFuncs.OnDelete(...)
/go/src/github.com/openshift/ovn-kubernetes/go-controller/vendor/k8s.io/client-go/tools/cache/controller.go:245
k8s.io/client-go/tools/cache.FilteringResourceEventHandler.OnDelete(0xc000f4c4c0, 0x2160f10, 0xc002f498c0, 0x1e7e840, 0xc002544f00)
/go/src/github.com/openshift/ovn-kubernetes/go-controller/vendor/k8s.io/client-go/tools/cache/controller.go:288 +0x6a
github.com/ovn-org/ovn-kubernetes/go-controller/pkg/factory.(*Handler).OnDelete(...)
/go/src/github.com/openshift/ovn-kubernetes/go-controller/pkg/factory/handler.go:52
github.com/ovn-org/ovn-kubernetes/go-controller/pkg/factory.(*informer).newFederatedHandler.func3.1(0xc00463dbf0)
/go/src/github.com/openshift/ovn-kubernetes/go-controller/pkg/factory/handler.go:340 +0x65
github.com/ovn-org/ovn-kubernetes/go-controller/pkg/factory.(*informer).forEachHandler(0xc0002c61b0, 0x1e7e840, 0xc002544f00, 0xc003dc9d60)
/go/src/github.com/openshift/ovn-kubernetes/go-controller/pkg/factory/handler.go:114 +0x156
github.com/ovn-org/ovn-kubernetes/go-controller/pkg/factory.(*informer).newFederatedHandler.func3(0x1e7e840, 0xc002544f00)
/go/src/github.com/openshift/ovn-kubernetes/go-controller/pkg/factory/handler.go:339 +0x1b2
k8s.io/client-go/tools/cache.ResourceEventHandlerFuncs.OnDelete(...)
/go/src/github.com/openshift/ovn-kubernetes/go-controller/vendor/k8s.io/client-go/tools/cache/controller.go:245
k8s.io/client-go/tools/cache.(*processorListener).run.func1()
/go/src/github.com/openshift/ovn-kubernetes/go-controller/vendor/k8s.io/client-go/tools/cache/shared_informer.go:779 +0x166
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0xc002367760)
/go/src/github.com/openshift/ovn-kubernetes/go-controller/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:155 +0x5f
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc003dc9f60, 0x2127a00, 0xc000229a70, 0x1bd5d01, 0xc000039740)
/go/src/github.com/openshift/ovn-kubernetes/go-controller/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:156 +0x9b
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc002367760, 0x3b9aca00, 0x0, 0x1, 0xc000039740)
/go/src/github.com/openshift/ovn-kubernetes/go-controller/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:133 +0x98
k8s.io/apimachinery/pkg/util/wait.Until(...)
/go/src/github.com/openshift/ovn-kubernetes/go-controller/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:90
k8s.io/client-go/tools/cache.(*processorListener).run(0xc0004f3180)
/go/src/github.com/openshift/ovn-kubernetes/go-controller/vendor/k8s.io/client-go/tools/cache/shared_informer.go:771 +0x95
k8s.io/apimachinery/pkg/util/wait.(*Group).Start.func1(0xc0002bed80, 0xc000ed5850)
/go/src/github.com/openshift/ovn-kubernetes/go-controller/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:73 +0x51
created by k8s.io/apimachinery/pkg/util/wait.(*Group).Start
/go/src/github.com/openshift/ovn-kubernetes/go-controller/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:71 +0x65
panic: runtime error: invalid memory address or nil pointer dereference [recovered]
panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1 addr=0x0 pc=0x1a021d5]

goroutine 249 [running]:
k8s.io/apimachinery/pkg/util/runtime.HandleCrash(0x0, 0x0, 0x0)
/go/src/github.com/openshift/ovn-kubernetes/go-controller/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:55 +0x109
panic(0x1c19c80, 0x2e9a810)
/usr/lib/golang/src/runtime/panic.go:965 +0x1b9
github.com/ovn-org/ovn-kubernetes/go-controller/pkg/ovn.(*Controller).destroyNetworkPolicy(0xc0022c2000, 0x0, 0xc000bb9000, 0x0, 0x0)
/go/src/github.com/openshift/ovn-kubernetes/go-controller/pkg/ovn/policy.go:1210 +0x55
github.com/ovn-org/ovn-kubernetes/go-controller/pkg/ovn.(*Controller).deleteNetworkPolicy(0xc0022c2000, 0xc002544f00, 0x0, 0x0, 0x0)
/go/src/github.com/openshift/ovn-kubernetes/go-controller/pkg/ovn/policy.go:1198 +0x43f
github.com/ovn-org/ovn-kubernetes/go-controller/pkg/ovn.(*Controller).WatchNetworkPolicy.func4(0x1e7e840, 0xc002544f00)
/go/src/github.com/openshift/ovn-kubernetes/go-controller/pkg/ovn/ovn.go:800 +0xae
k8s.io/client-go/tools/cache.ResourceEventHandlerFuncs.OnDelete(...)
/go/src/github.com/openshift/ovn-kubernetes/go-controller/vendor/k8s.io/client-go/tools/cache/controller.go:245
k8s.io/client-go/tools/cache.FilteringResourceEventHandler.OnDelete(0xc000f4c4c0, 0x2160f10, 0xc002f498c0, 0x1e7e840, 0xc002544f00)
/go/src/github.com/openshift/ovn-kubernetes/go-controller/vendor/k8s.io/client-go/tools/cache/controller.go:288 +0x6a
github.com/ovn-org/ovn-kubernetes/go-controller/pkg/factory.(*Handler).OnDelete(...)
/go/src/github.com/openshift/ovn-kubernetes/go-controller/pkg/factory/handler.go:52
github.com/ovn-org/ovn-kubernetes/go-controller/pkg/factory.(*informer).newFederatedHandler.func3.1(0xc00463dbf0)
/go/src/github.com/openshift/ovn-kubernetes/go-controller/pkg/factory/handler.go:340 +0x65
github.com/ovn-org/ovn-kubernetes/go-controller/pkg/factory.(*informer).forEachHandler(0xc0002c61b0, 0x1e7e840, 0xc002544f00, 0xc003dc9d60)
/go/src/github.com/openshift/ovn-kubernetes/go-controller/pkg/factory/handler.go:114 +0x156
github.com/ovn-org/ovn-kubernetes/go-controller/pkg/factory.(*informer).newFederatedHandler.func3(0x1e7e840, 0xc002544f00)
/go/src/github.com/openshift/ovn-kubernetes/go-controller/pkg/factory/handler.go:339 +0x1b2
k8s.io/client-go/tools/cache.ResourceEventHandlerFuncs.OnDelete(...)
/go/src/github.com/openshift/ovn-kubernetes/go-controller/vendor/k8s.io/client-go/tools/cache/controller.go:245
k8s.io/client-go/tools/cache.(*processorListener).run.func1()
/go/src/github.com/openshift/ovn-kubernetes/go-controller/vendor/k8s.io/client-go/tools/cache/shared_informer.go:779 +0x166
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0xc002367760)
/go/src/github.com/openshift/ovn-kubernetes/go-controller/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:155 +0x5f
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc003dc9f60, 0x2127a00, 0xc000229a70, 0x1bd5d01, 0xc000039740)
/go/src/github.com/openshift/ovn-kubernetes/go-controller/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:156 +0x9b
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc002367760, 0x3b9aca00, 0x0, 0x1, 0xc000039740)
/go/src/github.com/openshift/ovn-kubernetes/go-controller/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:133 +0x98
k8s.io/apimachinery/pkg/util/wait.Until(...)
/go/src/github.com/openshift/ovn-kubernetes/go-controller/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:90
k8s.io/client-go/tools/cache.(*processorListener).run(0xc0004f3180)
/go/src/github.com/openshift/ovn-kubernetes/go-controller/vendor/k8s.io/client-go/tools/cache/shared_informer.go:771 +0x95
k8s.io/apimachinery/pkg/util/wait.(*Group).Start.func1(0xc0002bed80, 0xc000ed5850)
/go/src/github.com/openshift/ovn-kubernetes/go-controller/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:73 +0x51
created by k8s.io/apimachinery/pkg/util/wait.(*Group).Start
/go/src/github.com/openshift/ovn-kubernetes/go-controller/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:71 +0x65

Please let me know if any further information is required. I have a must-gather for this cluster but the file attachment tool in bugzilla won't let me attach anything larger than 19.5MB (the must-gather is 212.1MB)
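For reference, the panic above is a nil-pointer dereference when deleteNetworkPolicy hands a nil policy object to destroyNetworkPolicy after the earlier create failed. A minimal sketch of the kind of guard that avoids it (types and names are simplified stand-ins, not the ovn-kubernetes code):

package main

import "fmt"

// networkPolicy is a simplified stand-in for the controller's internal
// network policy object.
type networkPolicy struct {
	name string
}

// destroyNetworkPolicy dereferences np; guarding against nil avoids the
// segfault seen when the policy was never successfully created.
func destroyNetworkPolicy(np *networkPolicy) error {
	if np == nil {
		return fmt.Errorf("no cached network policy to destroy, nothing to do")
	}
	fmt.Println("destroying", np.name)
	return nil
}

func main() {
	if err := destroyNetworkPolicy(nil); err != nil {
		fmt.Println("skipping:", err)
	}
}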

Description of problem:

NPE on Topology for a namespace which was just deleted; see the screenshot below

Version-Release number of selected component (if applicable):

 

How reproducible:

 

Steps to Reproduce:

1. Login as regular user
2. Create a ns and delete the ns
3. visit the deleted ns in topology

Actual results:

console breaks due to NPE

Expected results:

console shouldn't break

Additional info: