Note: this page shows the Feature-Based Change Log for a release
These features were completed when this image was assembled
1. Proposed title of this feature request
Add runbook_url to alerts in the OCP UI
2. What is the nature and description of the request?
If an alert includes a runbook_url label, then it should appear in the UI for the alert as a link.
3. Why does the customer need this? (List the business requirements here)
Customers can easily reach the alert runbook and address their issues.
4. List any affected packages or components.
As a user, I should be able to configure the CSI driver to have a storage topology.
In the console-operator repo we need to add the `capability.openshift.io/console` annotation to all the manifests that the operator either contains or creates on the fly.
Manifests are currently present in /bindata and /manifest directories.
Here is an example of the insights-operator change.
Here is the overall enhancement doc.
Feature Overview
Provide CSI drivers to replace all the in-tree cloud provider drivers we currently have. These drivers will probably be released as tech preview versions first before being promoted to GA.
Goals
Requirements
Requirement | Notes | isMvp? |
---|---|---|
Framework for CSI driver | TBD | Yes |
Drivers should be available to install both in disconnected and connected mode | | Yes |
Drivers should upgrade from release to release without any impact | | Yes |
Drivers should be installable via CVO (when in-tree plugin exists) | | |
Out of Scope
This work will only cover the drivers themselves; it will not include:
Background and strategic fit
In a future Kubernetes release (currently 1.21), in-tree cloud provider drivers will be deprecated and replaced with CSI equivalents. We need the drivers created so that we continue to support the ecosystems in an appropriate way.
Assumptions
Customer Considerations
Customers will need to be able to use the storage they want.
Documentation Considerations
This Epic is to track the GA of this feature
As an OCP user, I want images for GCP Filestore CSI Driver and Operator, so that I can install them on my cluster and utilize GCP Filestore shares.
We need to continue to maintain specific areas within storage; this is to capture that effort and track it across releases.
Goals
Requirements
Requirement | Notes | isMvp? |
---|---|---|
Telemetry | No | |
Certification | No | |
API metrics | No | |
Out of Scope
n/a
Background and strategic fit
With the expected scale of our customer base, we want to keep the load of customer tickets / BZs low.
Assumptions
Customer Considerations
Documentation Considerations
Notes
In progress:
High prio:
Unsorted
The end of general support for vSphere 6.7 will be on October 15, 2022, so vSphere 6.7 will be deprecated for 4.11.
We want to encourage vSphere customers to upgrade to vSphere 7 in OCP 4.11, since VMware is ending general support for vSphere 6.7 in October 2022.
We want to set the cluster to Upgradeable=false and have a strong alert pointing to our docs / requirements.
related slack: https://coreos.slack.com/archives/CH06KMDRV/p1647541493096729
Traditionally we did these updates as bugfixes, because we did them after the feature freeze (FF). We are trying no-feature-freeze in 4.12: we will try to do as much as we can before FF, but we're quite sure something will slip past FF as usual.
Update the driver to the latest upstream release. Notify QE and docs with any new features and important bugfixes that need testing or documentation.
This includes ibm-vpc-node-label-updater!
(Using separate cards for each driver because these updates can be more complicated)
Update the driver to the latest upstream release. Notify QE and docs with any new features and important bugfixes that need testing or documentation.
(Using separate cards for each driver because these updates can be more complicated)
There is a new driver release 5.0.0 since the last rebase that includes snapshot support:
https://github.com/kubernetes-sigs/ibm-vpc-block-csi-driver/releases/tag/v5.0.0
Rebase the driver on v5.0.0 and update the deployments in ibm-vpc-block-csi-driver-operator.
There are no corresponding changes in ibm-vpc-node-label-updater since the last rebase.
Update all CSI sidecars to the latest upstream release.
This includes update of VolumeSnapshot CRDs in https://github.com/openshift/cluster-csi-snapshot-controller-operator/tree/master/assets
Update the driver to the latest upstream release. Notify QE and docs with any new features and important bugfixes that need testing or documentation.
(Using separate cards for each driver because these updates can be more complicated)
Update the driver to the latest upstream release. Notify QE and docs with any new features and important bugfixes that need testing or documentation.
(Using separate cards for each driver because these updates can be more complicated)
Update the driver to the latest upstream release. Notify QE and docs with any new features and important bugfixes that need testing or documentation.
(Using separate cards for each driver because these updates can be more complicated)
Update all OCP and Kubernetes libraries in storage operators to the appropriate version for the OCP release.
This includes (but is not limited to):
Operators:
Update the driver to the latest upstream release. Notify QE and docs with any new features and important bugfixes that need testing or documentation.
(Using separate cards for each driver because these updates can be more complicated)
This Epic tracks the GA of this feature
Epic Goal
On new installations, we should make the StorageClass created by the CSI operator the default one.
However, we shouldn't do that on an upgrade scenario. The main reason is that users might have set a different quota on the CSI driver Storage Class.
Exit criteria:
On new installations, we should make the StorageClass created by the CSI operator the default one.
However, we shouldn't do that on an upgrade scenario. The main reason is that users might have set a different quota on the CSI driver Storage Class.
Exit criteria:
OCP/Telco Definition of Done
Epic Template descriptions and documentation.
Rebase openshift-controller-manager to k8s 1.24
4.11 MVP Requirements
Out of scope use cases (that are part of the Kubeframe/factory project):
Questions to be addressed:
As an OpenShift infrastructure owner, I want to deploy OpenShift clusters with dual-stack IPv4/IPv6
As an OpenShift infrastructure owner, I want to deploy OpenShift clusters with single-stack IPv6
IPv6 and dual-stack clusters are often requested by customers, especially Telco customers. Dual-stack clusters are a requirement for many, but also a transition step toward single-stack IPv6 clusters, which for some of our users is the final destination.
Karim's work proving how the agent-based installer can deploy IPv6: IPv6 deploy with agent-based installer
For dual-stack installations the agent-cluster-install.yaml must have both an IPv4 and an IPv6 subnet in networking.MachineNetwork or assisted-service will throw an error. This field is in InstallConfig but it must be added to agent-cluster-install in its Generate().
For IPv4 and IPv6 installs, setting up the MachineNetwork is not needed, but it also does not cause problems if it's set, so it should be fine to set it at all times.
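For illustration, a hedged sketch of the dual-stack machineNetwork entries in agent-cluster-install.yaml (the CIDRs and the resource name are placeholders; the surrounding fields are abbreviated):

```yaml
apiVersion: extensions.hive.openshift.io/v1beta1
kind: AgentClusterInstall
metadata:
  name: example-agent-cluster-install   # placeholder name
spec:
  networking:
    machineNetwork:
    - cidr: 192.168.111.0/24            # IPv4 subnet (placeholder)
    - cidr: fd2e:6f44:5dd8:c956::/120   # IPv6 subnet (placeholder)
```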
Set the ClusterDeployment CRD to deploy OpenShift in FIPS mode and make sure that after deployment the cluster is set in that mode
In order to install FIPS compliant clusters, we need to make sure that installconfig + agentconfig based deployments take into account the FIPS config in installconfig.
This task is about passing the config to agentclusterinstall so it makes it into the ISO. Once there, AGENT-374 will give it to assisted-service.
As an OpenShift infrastructure owner, I want to deploy a cluster zero with RHACM or MCE and have the required components installed when the installation is completed
BILLI makes it easier to deploy a cluster zero. BILLI users know at installation time what the purpose of their cluster is when they plan the installation. Day-2 steps are necessary to install operators, and users, especially when automating installations, want to finish the installation flow with their required components already installed.
As a customer, I want to be able to:
so that I can achieve
Description of criteria:
We are only allowing the user to provide extra manifests to install MCE at this time. We are not adding an option to "install mce" on the command line (or UI)
This requires/does not require a design proposal.
This requires/does not require a feature gate.
As a customer, I want to be able to:
so that I can achieve
Description of criteria:
We are only allowing the user to provide extra manifests to install MCE at this time. We are not adding an option to "install mce" on the command line (or UI)
This requires/does not require a design proposal.
This requires/does not require a feature gate.
As a user I would like to see all the events that the autoscaler creates, even duplicates. Having the CAO set this flag will allow me to continue to see these events.
We have carried a patch for the autoscaler that would enable the duplication of events. This patch can now be dropped because the upstream added a flag for this behavior in https://github.com/kubernetes/autoscaler/pull/4921
Add GA support for deploying OpenShift to IBM Public Cloud
Complete the existing gaps to make OpenShift on IBM Cloud VPC (Next Gen2) Generally Available
This epic tracks the changes needed to the ingress operator to support IBM DNS Services for private clusters.
Currently in OpenShift we do not support distributing hotfix packages to cluster nodes. In time-sensitive situations, a RHEL hotfix package can be the quickest route to resolving an issue.
Before we ship OCP CoreOS layering in https://issues.redhat.com/browse/MCO-165 we need to switch the format of what is currently `machine-os-content` to be the new base image.
The overall plan is:
After https://github.com/openshift/os/pull/763 is in the release image, teach the MCO how to use it. This is basically:
As an OCP CoreOS layering developer, having telemetry data about the number of clusters using osImageURL will help us understand how broadly this feature is getting used and improve it accordingly.
Acceptance Criteria:
Assumption
Doc: https://docs.google.com/document/d/1sXCaRt3PE0iFmq7ei0Yb1svqzY9bygR5IprjgioRkjc/edit
Customers do not pay Red Hat more to run HyperShift control planes and supporting infrastructure than Standalone control planes and supporting infrastructure.
Assumption
Run cluster-storage-operator (CSO) + AWS EBS CSI driver operator + AWS EBS CSI driver control-plane Pods in the management cluster, run the driver DaemonSet in the hosted cluster.
More information here: https://docs.google.com/document/d/1sXCaRt3PE0iFmq7ei0Yb1svqzY9bygR5IprjgioRkjc/edit
As HyperShift Cluster Instance Admin, I want to run cluster-storage-operator (CSO) in the management cluster, so the guest cluster runs just my applications.
Exit criteria:
As HyperShift Cluster Instance Admin, I want to run AWS EBS CSI driver operator + control plane of the CSI driver in the management cluster, so the guest cluster runs just my applications.
Exit criteria:
As an OCP support engineer, I want the same guest cluster storage-related objects in the output of "hypershift dump cluster --dump-guest-cluster" as in "oc adm must-gather", so I can debug storage issues easily.
must-gather collects: storageclasses, persistentvolumes, volumeattachments, csidrivers, csinodes, volumesnapshotclasses, volumesnapshotcontents.
hypershift collects none of these; the relevant code is here: https://github.com/openshift/hypershift/blob/bcfade6676f3c344b48144de9e7a36f9b40d3330/cmd/cluster/core/dump.go#L276
Exit criteria:
CNCC was moved to the management cluster and it should use proxy settings defined for the management cluster.
Customers do not pay Red Hat more to run HyperShift control planes and supporting infrastructure than Standalone control planes and supporting infrastructure.
Assumption
cluster-snapshot-controller-operator is running on the CP.
More information here: https://docs.google.com/document/d/1sXCaRt3PE0iFmq7ei0Yb1svqzY9bygR5IprjgioRkjc/edit
As HyperShift Cluster Instance Admin, I want to run cluster-csi-snapshot-controller-operator in the management cluster, so the guest cluster runs just my applications.
Exit criteria:
As an OpenShift developer, I want cluster-csi-snapshot-controller-operator to use existing controllers in library-go, so I don't need to maintain yet more code that does the same thing as library-go.
Note: if this refactoring introduces any new conditions, we must make sure that 4.11 snapshot controller clears them to support downgrade! This will need 4.11 BZ + z-stream update!
Similarly, if some conditions become obsolete / not managed by any controller, they must be cleared by 4.12 operator.
Exit criteria:
When this image was assembled, these features were not yet completed. Therefore, only the Jira Cards included here are part of this release.
We have a set of images
that should become multiarch images. This should be done both in upstream and downstream.
As a reference, we have built internally those images as multiarch and made them available as
They can be consumed by the Assisted Service pod via the following environment variables:
```yaml
- name: AGENT_DOCKER_IMAGE
  value: registry.redhat.io/rhai-tech-preview/assisted-installer-agent-rhel8:latest
- name: CONTROLLER_IMAGE
  value: registry.redhat.io/rhai-tech-preview/assisted-installer-reporter-rhel8:latest
- name: INSTALLER_IMAGE
  value: registry.redhat.io/rhai-tech-preview/assisted-installer-rhel8:latest
```
OLM would have to support a mechanism like podAffinity which allows multiple architecture values to be specified, enabling it to pin operators to worker nodes of a matching architecture (a rough sketch follows the reference below).
Ref: https://github.com/openshift/enhancements/pull/1014
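As a rough illustration of the idea only (not the agreed-upon mechanism or API), a node affinity term that pins a pod to nodes of any supported architecture could look like the following; the architecture values are placeholders:

```yaml
affinity:
  nodeAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
      nodeSelectorTerms:
      - matchExpressions:
        - key: kubernetes.io/arch
          operator: In
          values:          # all architectures the operator supports
          - amd64
          - arm64
        - key: kubernetes.io/os
          operator: In
          values:
          - linux
```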
Cut a new release of the OLM API and update OLM API dependency version (go.mod) in OLM package; then
Bring the upstream changes from OLM-2674 to the downstream olm repo.
A/C:
- New OLM API version release
- OLM API dependency updated in OLM Project
- OLM Subscription API changes downstreamed
- OLM Controller changes downstreamed
- Changes manually tested on Cluster Bot
We drive OpenShift cross-market customer success and new customer adoption with constant improvements and feature additions to the existing capabilities of our OpenShift Core Networking (SDN and Network Edge). This feature captures that natural progression of the product.
There are definitely grey areas, but in general:
Questions to be addressed:
Goal: Provide queryable metrics and telemetry for cluster routes and sharding in an OpenShift cluster.
Problem: Today we test OpenShift performance and scale with best-guess or anecdotal evidence for the number of routes that our customers use. The best practice for a large number of routes in a cluster is to shard; however, we have no visibility into whether and how customers are using sharding.
Why is this important? These metrics will inform our performance and scale testing, documented cluster limits, and how customers are using sharding for best practice deployments.
Dependencies (internal and external):
Prioritized epics + deliverables (in scope / not in scope):
Not in scope:
Estimate (XS, S, M, L, XL, XXL):
Previous Work:
Open questions:
Acceptance criteria:
Epic Done Checklist:
Description:
As described in the Metrics to be sent via telemetry section of the Design Doc, the following metrics need to be sent from the OpenShift cluster to Red Hat premises:
The metrics should be allowlisted on the cluster side.
The steps described in Sending metrics via telemetry need to be followed, specifically step 5.
Depends on CFE-478.
Acceptance Criteria:
Description:
As described in the Design Doc, the following information needs to be exported from the Cluster Ingress Operator:
Design 2 will be implemented as part of this story.
Acceptance Criteria:
This is an epic bucket for all activities surrounding the creation of a declarative approach to releasing and maintaining OLM catalogs.
When working on this Epic, it's important to keep in mind this other potentially related Epic: https://issues.redhat.com/browse/OLM-2276
Jira Description
As an OPM maintainer, I want to downstream the PR for (OCP 4.12) and backport it to OCP 4.11 so that IIB will NOT be impacted by the changes when it upgrades the OPM version to use the next/future opm upstream release (v1.25.0).
Summary / Background
IIB (the downstream service that manages the indexes) uses the upstream version, and if they bump the OPM version to the next/future (v1.25.0) release with this change before having the downstream images updated, then the process to manage the indexes downstream will face issues and it will impact the distributions.
Acceptance Criteria
Definition of Ready
Definition of Done
Enhance the veneer rendering to be able to read the input veneer data from stdin, via a pipe, in a manner similar to https://dev.to/napicella/linux-pipes-in-golang-2e8j
Then the command could be used in a manner similar to many k8s examples, like:
```shell
opm alpha render-veneer semver -o yaml < infile > outfile
```
Upstream issue link: https://github.com/operator-framework/operator-registry/issues/1011
tldr: three basic claims, the rest is explanation and one example
While bugs are an important metric, fixing bugs is different from investing in maintainability and debuggability. Investing in fixing bugs will help alleviate immediate problems, but doesn't improve the ability to address future problems. You (may) get a code base with fewer bugs, but when you add a new feature, it will still be hard to debug problems and interactions. This pushes a code base towards stagnation where it gets harder and harder to add features.
One alternative is to ask teams to produce ideas for how they would improve future maintainability and debuggability instead of focusing on immediate bugs. This would produce designs that make problem determination, bug resolution, and future feature additions faster over time.
I have a concrete example of one such outcome of focusing on bugs vs quality. We have resolved many bugs about communication failures with ingress by finding problems with point-to-point network communication. We have fixed the individual bugs, but have not improved the code for future debugging. In so doing, we chase many hard-to-diagnose problems across the stack. The alternative is to create a point-to-point network connectivity capability. This would immediately improve bug resolution and stability (detection) for kuryr, ovs, legacy sdn, network-edge, kube-apiserver, openshift-apiserver, authentication, and console. Bug fixing does not produce the same impact.
We need more investment in our future selves. Saying, "teams should reserve this" doesn't seem to be universally effective. Perhaps an approach that directly asks for designs and impacts and then follows up by placing the items directly in planning and prioritizing against PM feature requests would give teams the confidence to invest in these areas and give broad exposure to systemic problems.
Relevant links:
Epic Template descriptions and documentation.
Enable the chaos plugin https://coredns.io/plugins/chaos/ in our CoreDNS configuration so that we can use a DNS query to easily identify what DNS pods are responding to our requests.
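A hedged sketch of a Corefile server block with the plugin enabled; the actual CoreDNS configuration is rendered by the DNS operator, and the surrounding plugins and port are illustrative only:

```yaml
# Corefile fragment shown as the value of a ConfigMap-style `Corefile` key
Corefile: |
  .:5353 {
      chaos        # answers CH-class TXT queries so the responding DNS pod can be identified
      errors
      health
      kubernetes cluster.local in-addr.arpa ip6.arpa
      forward . /etc/resolv.conf
      cache 900
  }
```

With the plugin enabled, a CH-class TXT query (e.g. for `hostname.bind`) against the cluster DNS should reveal which pod answered.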
Requirement | Notes | isMvp? |
---|---|---|
CI - MUST be running successfully with test automation | This is a requirement for ALL features. | YES |
Release Technical Enablement | Provide necessary release enablement details and documents. | YES |
This Section:
This Section: What does the person writing code, testing, documenting need to know? What context can be provided to frame this feature.
Questions to be addressed:
As a console user I want to have the option to:
For Deployments we will add the 'Restart rollout' action button. This action will PATCH the Deployment object's 'spec.template.metadata.annotations' block, by adding 'openshift.io/restartedAt: <actual-timestamp>' annotation. This will restart the deployment, by creating a new ReplicaSet.
For DeploymentConfig we will add 'Retry rollout' action button. This action will PATCH the latest revision of ReplicationController object's 'metadata.annotations' block by setting 'openshift.io/deployment/phase: "New"' and removing openshift.io/deployment.cancelled and openshift.io/deployment.status-reason.
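For illustration, the patch body for the Deployment case could look like the following sketch; the annotation key follows the description above, and the timestamp is a placeholder that the console would fill in:

```yaml
spec:
  template:
    metadata:
      annotations:
        openshift.io/restartedAt: "2022-08-09T14:35:00Z"   # placeholder timestamp
```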
Acceptance Criteria:
BACKGROUND:
OpenShift console will be updated to allow rollout restart deployment from the console itself.
Currently, from the OpenShift console, for the resource “deploymentconfigs” we can only start and pause the rollout, and for the resource “deployment” we can only resume the rollout. Neither resource (deployment & deployment config) has an option to restart the rollout. That is why the customer wants this functionality, to perform the same action from the OpenShift console as well as the CLI.
The customer wants developers who are not fluent with the oc tool and terminal utilities to be able to use the console instead of the terminal to restart a deployment, just like we do through the CLI using the command “oc rollout restart deploy/<deployment-name>”.
Usually when developers change the config map that a deployment uses, they have to restart the pods. Currently, the developers have to use the oc rollout restart deployment command. The customer wants the functionality of a button/menu to perform the same action from the console as well.
Design
Doc: https://docs.google.com/document/d/1i-jGtQGaA0OI4CYh8DH5BBIVbocIu_dxNt3vwWmPZdw/edit
When OCP is performing a cluster upgrade, the user should be notified about this fact.
There are two possibilities for how to surface the cluster upgrade to the users:
AC:
Note: We need to decide if we want to distinguish this particular notification with a different color. cc'ing Ali Mobrem
Created from: https://issues.redhat.com/browse/RFE-3024
As a developer, I want to make status.HostIP for Pods visible in the Pod details page of the OCP Web Console. Currently there is no way to view the node IP for a Pod in the OpenShift Web Console. When viewing a Pod in the console, the field status.HostIP is not visible.
Acceptance criteria:
oc mirror is a GA product as of OpenShift 4.11.
The goal of this feature is to solve any future customer request for new features or capabilities in oc mirror.
Pre-Work Objectives
Since some of our requirements from the ACM team will not be available for the 4.12 timeframe, the team should work on anything we can get done in the scope of the console repo so that when the required items are available in 4.13, we can be more nimble in delivering GA content for the Unified Console Epic.
Overall GA Key Objective
Providing our customers with a single simplified User Experience (Hybrid Cloud Console) that is extensible, can run locally or in the cloud, and is capable of managing the fleet as well as deep diving into a single cluster.
Why do customers want this?
Why do we want this?
Phase 2 Goal: Productization of the unified Console
As a developer I would like to disable clusters like *KS that we can't support for multi-cluster (for instance because we can't authenticate). The ManagedCluster resource has a vendor label that we can use to know if the cluster is supported.
cc Ali Mobrem, Sho Weimer, Jakub Hadvig
UPDATE 9/20/22: we want an allow-list with OpenShift, ROSA, ARO, ROKS, and OpenShiftDedicated
Acceptance criteria:
RHEL CoreOS should be updated to RHEL 9.2 sources to take advantage of newer features, hardware support, and performance improvements.
Requirement | Notes | isMvp? |
---|---|---|
CI - MUST be running successfully with test automation | This is a requirement for ALL features. | YES |
Release Technical Enablement | Provide necessary release enablement details and documents. | YES |
Questions to be addressed:
PROBLEM
We would like to improve our signal for RHEL9 readiness by increasing internal engineering engagement and external partner engagement on our community OpenShift offering, OKD.
PROPOSAL
Adding OKD to run on SCOS (a CentOS stream for CoreOS) brings the community offering closer to what a partner or an internal engineering team might expect on OCP.
ACCEPTANCE CRITERIA
Image has been switched/included:
DEPENDENCIES
The SCOS build payload.
RELATED RESOURCES
OKD+SCOS proposal: https://docs.google.com/presentation/d/1_Xa9Z4tSqB7U2No7WA0KXb3lDIngNaQpS504ZLrCmg8/edit#slide=id.p
OKD+SCOS work draft: https://docs.google.com/document/d/1cuWOXhATexNLWGKLjaOcVF4V95JJjP1E3UmQ2kDVzsA/edit
Acceptance Criteria
A stable OKD on SCOS is built and available to the community every sprint.
This comes up when installing ipi-on-aws on arm64 with the custom payload build at quay.io/aleskandrox/okd-release:4.12.0-0.okd-centos9-full-rebuild-arm64 that is using SCOS as the machine-os-content image.
```
[root@ip-10-0-135-176 core]# crictl logs c483c92e118d8
2022-08-11T12:19:39+00:00 [cnibincopy] FATAL ERROR: Unsupported OS ID=scos
```
The probable fix has to land on https://github.com/openshift/cluster-network-operator/blob/master/bindata/network/multus/multus.yaml#L41-L53
HyperShift came to life to serve multiple goals; some are the main near-term goals, and some are secondary ones that serve well long-term.
HyperShift opens up doors to penetrate the market. HyperShift enables true hybrid (CP and workers decoupled, mixed IaaS, mixed Arch, ...), an architecture that opens up more options to target new opportunities in the cloud space. For more details on this one check: Hosted Control Planes (aka HyperShift) Strategy [Live Document]
To bring hosted control planes to our customers, we need the means to ship it. Today MCE is how HyperShift is shipped and installed so that customers can use it. There are two main customers for hosted-control-planes:
If you have noticed, MCE is the delivery mechanism for both management models. The difference between managed and self-managed is the consumer persona. For self-managed, it's the customer SRE; for managed, it's the RH SRE.
For us to ship HyperShift in the product (as hosted control planes) in either management model, there is a necessary readiness checklist that we need to satisfy. Below are the high-level requirements needed before GA:
Please also have a look at our What are we missing in Core HyperShift for GA Readiness? doc.
Multi-cluster is becoming an industry need today, not because this is where the trend is going but because it's the only viable path today to solve many of our customers' use cases. Below is some reasoning why multi-cluster is a NEED:
As a result, multi-cluster management is a defining category in the market where Red Hat plays a key role. Today Red Hat solves for multi-cluster via RHACM and MCE. The goal is to simplify fleet management complexity by providing a single pane of glass to observe, secure, police, govern, configure a fleet. I.e., the operand is no longer one cluster but a set, a fleet of clusters.
HyperShift's logically centralized architecture, as well as its native separation of concerns and superior cluster lifecycle management experience, makes it a great fit as the foundation of our multi-cluster management story.
Thus the following stories are important for HyperShift:
Refs:
HyperShift is the core engine that will be used to provide hosted control planes for consumption in both managed and self-managed models.
Main user story: When life cycling clusters as a cluster service consumer via HyperShift core APIs, I want to use a stable/backward compatible API that is less susceptible to future changes so I can provide availability guarantees.
Ref: What are we missing in Core HyperShift for GA Readiness?
Customers do not pay Red Hat more to run HyperShift control planes and supporting infrastructure than Standalone control planes and supporting infrastructure.
Assumptions:
HyperShift - proposed cuts from data plane
When operating OpenShift clusters (for any OpenShift form factor) from MCE/ACM/OCM/CLI as a Cluster Service Consumer (RH managed SRE, or self-managed SRE/admin), I want to be able to migrate CPs from one hosting service cluster to another:
More information:
To understand usage patterns and inform our decision making for the product, we need to be able to measure adoption and assess usage.
See Hosted Control Planes (aka HyperShift) Strategy [Live Document]
Whether it's managed or self-managed, it's pertinent to report health metrics to be able to create meaningful Service Level Objectives (SLOs) and alert on failure to meet our availability guarantees. This is especially important for our managed services path.
https://issues.redhat.com/browse/OCPPLAN-8901
HyperShift for managed services is a strategic company goal, as it improves usability, features, and cost competitiveness against other managed solutions, and because managed services/consumption-based cloud services are where we see the market growing (customers are looking to delegate platform overhead).
We should make sure our SD milestones are unblocked by the core team.
This feature reflects HyperShift core readiness to be consumed. When all related EPICs and stories in this EPIC are complete, HyperShift can be considered ready to be consumed in GA form. This does not describe a date but rather the readiness of core HyperShift to be consumed in GA form, NOT the GA itself.
- GA date for self-managed will be factoring in other inputs such as adoption, customer interest/commitment, and other factors.
- GA dates for ROSA-HyperShift are on track, tracked in milestones M1-7 (have a look at https://issues.redhat.com/browse/OCPPLAN-5771)
Epic Goal*
The goal is to split client certificate trust chains from the global Hypershift root CA.
Why is this important? (mandatory)
This is important to:
Scenarios (mandatory)
Provide details for user scenarios including actions to be performed, platform specifications, and user personas.
Dependencies (internal and external) (mandatory)
Hypershift team needs to provide us with code reviews and merge the changes we are to deliver
Contributing Teams(and contacts) (mandatory)
Acceptance Criteria (optional)
The serviceaccount CA bundle automatically injected to all pods cannot be used to authenticate any client certificate generated by the control-plane.
Drawbacks or Risk (optional)
Risk: there is significant time pressure, as this should be delivered before the first stable Hypershift release.
Done - Checklist (mandatory)
AUTH-311 introduced an enhancement. Implement the signer separation described there.
The MCO should properly report its state in a way that's consistent and able to be understood by customers, troubleshooters, and maintainers alike.
Some customer cases have revealed scenarios where the MCO state reporting is misleading and therefore could be unreliable to base decisions and automation on.
In addition to correcting some incorrect states, the MCO will be enhanced for a more granular view of update rollouts across machines.
The MCO should properly report its state in a way that's consistent and able to be understood by customers, troubleshooters, and maintainers alike.
For this epic, "state" means "what is the MCO doing?" – so the goal here is to try to make sure that it's always known what the MCO is doing.
This includes:
While this probably crosses a little bit into the "status" portion of certain MCO objects, as some state is definitely recorded there, this probably shouldn't turn into a "better status reporting" epic. I'm interpreting "status" to mean "how is it going" so status is maybe a "detail attached to a state".
Exploration here: https://docs.google.com/document/d/1j6Qea98aVP12kzmPbR_3Y-3-meJQBf0_K6HxZOkzbNk/edit?usp=sharing
https://docs.google.com/document/d/17qYml7CETIaDmcEO-6OGQGNO0d7HtfyU7W4OMA6kTeM/edit?usp=sharing
The current property description is:
configuration represents the current MachineConfig object for the machine config pool.
But in a 4.12.0-ec.4 cluster, the actual semantics seem to be something closer to "the most recent rendered config that we completely leveled on". We should at least update the godocs to be more specific about the intended semantics. And perhaps consider adjusting the semantics?
This section includes Jira cards that are linked to an Epic, but the Epic itself is not linked to any Feature. These epics were completed when this image was assembled
This epic tracks "business as usual" requirements / enhancements / bug fixing of the Insights Operator.
Today the links point at a rule-scoped page, but that page lacks information about recommended resolution. You can click through by cluster ID to your specific cluster and get that recommendation advice, but it would be more convenient and less confusing for customers if we linked directly to the cluster-scoped recommendation page.
We can implement by updating the template here to be:
fmt.Sprintf("https://console.redhat.com/openshift/insights/advisor/clusters/%s?first=%s%%7C%s", clusterID, ruleIDStr, rec.ErrorKey)
or something like that.
unknowns
request is clear, solution/implementation to be further clarified
This story only covers API components. We will create a separate story for other utility functions.
Today we are generating documentation for Console's Dynamic Plugin SDK in
frontend/packages/dynamic-plugin-sdk. We are missing ts-doc for a set of hooks and components.
We are generating the markdown from the dynamic-plugin-sdk using
yarn generate-doc
Here is the list of the API that the dynamic-plugin-sdk is exposing:
https://gist.github.com/spadgett/0ddefd7ab575940334429200f4f7219a
Acceptance Criteria:
Out of Scope:
The console has good error boundary components that are useful for dynamic plugins.
Exposing them will enable the plugins to get the same look and feel for handling React errors as the console.
The minimum requirement right now is to expose the ErrorBoundaryFallbackPage component from
https://github.com/openshift/console/blob/master/frontend/packages/console-shared/src/components/error/fallbacks/ErrorBoundaryFallbackPage.tsx
Based on API review CONSOLE-3145, we have decided to deprecate the following APIs:
cc Andrew Ballantyne Bryan Florkiewicz
Currently our `api.md` does not generate docs with "tags" (aka `@deprecated`) – we'll need to add that functionality to the `generate-doc.ts` script. See the code that works for `console-extensions.md`
We neither use nor support static plugin nav extensions anymore so we should remove the API in the static plugin SDK and get rid of related cruft in our current nav components.
AC: Remove static plugin nav extensions code. Check the navigation code for any references to the old API.
To align with https://github.com/openshift/dynamic-plugin-sdk, plugin metadata field dependencies as well as the @console/pluginAPI entry contained within should be made optional.
If a plugin doesn't declare the @console/pluginAPI dependency, the Console release version check should be skipped for that plugin.
During the development of https://issues.redhat.com/browse/CONSOLE-3062, it was determined additional information is needed in order to assist a user when troubleshooting a Failed plugin (see https://github.com/openshift/console/pull/11664#issuecomment-1159024959). As it stands today, there is no data available to the console to relay to the user regarding why the plugin Failed. Presumably, a message should be added to NotLoadedDynamicPlugin to address this gap.
AC: Add `message` property to NotLoadedDynamicPluginInfo type.
The extension `console.dashboards/overview/detail/item` doesn't constrain the content to fit the card.
The details-card has an expectation that a <dd> item will be the last item (for spacing between items). Our static details-card items use a component called 'OverviewDetailItem'. This isn't enforced in the extension and can cause undesired padding issues if they just do whatever they want.
I feel our approach here should be making the extension take the props of 'OverviewDetailItem' where 'children' is the new 'component'.
`@openshift-console/plugin-shared` (NPM) is a package that will contain shared components that can be upversioned separately by the Plugins so they can keep core compatibility low but upversion and support more shared components as we need them.
This isn't documented today. We need to do that.
Following https://coreos.slack.com/archives/C011BL0FEKZ/p1650640804532309, it would be useful for us (network observability team) to have access to ResourceIcon in dynamic-plugin-sdk.
Currently ResourceLink is exported but not ResourceIcon
AC:
We should have a global notification, or the `Console plugins` page (e.g., k8s/cluster/operator.openshift.io~v1~Console/cluster/console-plugins) should alert users, when the console operator `spec.managementState` is `Unmanaged`, as changes to `enabled` for plugins will have no effect.
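For reference, the state in question is set on the console operator configuration; a minimal sketch of the resource that would trigger the proposed warning:

```yaml
apiVersion: operator.openshift.io/v1
kind: Console
metadata:
  name: cluster
spec:
  managementState: Unmanaged   # while set, changes to plugin `enabled` flags are not reconciled
```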
When defining two proxy endpoints,

```yaml
apiVersion: console.openshift.io/v1alpha1
kind: ConsolePlugin
metadata:
  ...
  name: forklift-console-plugin
spec:
  displayName: Console Plugin Template
  proxy:
    - service:
        basePath: /
```
I get two proxy endpoints
/api/proxy/plugin/forklift-console-plugin/forklift-inventory
and
/api/proxy/plugin/forklift-console-plugin/forklift-must-gather-api
but both proxy to the `forklift-must-gather-api` service
e.g.
curl to:
[server url]/api/proxy/plugin/forklift-console-plugin/forklift-inventory
will point to the `forklift-must-gather-api` service, instead of the `forklift-inventory` service
Move `frontend/public/components/nav` to `packages/console-app/src/components/nav` and address any issues resulting from the move.
There will be some expected lint errors relating to cyclical imports. These will require some refactoring to address.
Acceptance Criteria: Add missing API docs for *Icon and *Status components in the API docs
Currently the ConsolePlugins API version is v1alpha1. Since we are going GA with dynamic plugins we should be creating a v1 version.
This would require updates in following repositories:
AC:
NOTE: This story does not include the conversion webhook change, which will be created as a follow-on story
This enhancement introduces support for provisioning and upgrading heterogeneous architecture clusters in phases.
We need to scan through the compute nodes and build a set of supported architectures from those. Each node on the cluster has a label for its architecture, e.g. kubernetes.io/arch=arm64, kubernetes.io/arch=amd64, etc. Based on the set of supported architectures, the console will need to surface in the Operator Hub only those operators which are supported on our nodes.
AC:
@jpoulin is good to ask about heterogeneous clusters.
This enhancement introduces support for provisioning and upgrading heterogeneous architecture clusters in phases.
We need to scan through the compute nodes and build a set of supported architectures from those. Each node on the cluster has a label for its architecture, e.g. `kubernetes.io/arch: arm64`, `kubernetes.io/arch: amd64`, etc. Based on the set of supported architectures, the console will need to surface in the Operator Hub only those operators which are supported on our nodes. Each operator's PackageManifest contains labels that indicate the operator's supported architectures, e.g. `operatorframework.io/arch.s390x: supported`; an operator can be supported on multiple architectures (see the label sketch below).
AC:
OS and arch filtering: https://github.com/openshift/console/blob/2ad4e17d76acbe72171407fc1c66ca4596c8aac4/frontend/packages/operator-lifecycle-manager/src/components/operator-hub/operator-hub-items.tsx#L49-L86
@jpoulin is good to ask about heterogeneous clusters.
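To make the inputs concrete, the sketch below shows the labels involved; the values follow the examples in the text above, and the PackageManifest fragment is illustrative rather than a real operator:

```yaml
# Node label used to build the set of cluster architectures:
#   kubernetes.io/arch: arm64
#
# PackageManifest labels advertising an operator's supported architectures:
apiVersion: packages.operators.coreos.com/v1
kind: PackageManifest
metadata:
  name: example-operator            # illustrative name
  labels:
    operatorframework.io/arch.amd64: supported
    operatorframework.io/arch.arm64: supported
    operatorframework.io/arch.s390x: supported
```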
An epic we can duplicate for each release to ensure we have a place to catch things we ought to be doing regularly but can tend to fall by the wayside.
As a developer, I want to be able to clean up the css markup after making the css / scss changes required for dark mode and remove any old unused css / scss content.
Acceptance criteria:
As a user, I want to be able to:
so that I can achieve
Description of criteria:
Detail about what is specifically not being delivered in the story
1. Proposed title of this feature request
Basic authentication for Helm Chart repository in helmchartrepositories.helm.openshift.io CRD.
2. What is the nature and description of the request?
As of v4.6.9, the HelmChartRepository CRD only supports client TLS authentication through spec.connectionConfig.tlsClientConfig.
3. Why do you need this? (List the business requirements here)
Basic authentication is widely used by many chart repository managers (Nexus OSS, Artifactory, etc.)
The Helm CLI also supports it with the helm repo add command.
https://helm.sh/docs/helm/helm_repo_add/
4. How would you like to achieve this? (List the functional requirements here)
Probably by extending the CRD:
```yaml
spec:
  connectionConfig:
    username: username
    password:
      secretName: secret-name
```
The secret namespace should be openshift-config to align with the tlsClientConfig behavior.
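For illustration, the referenced secret could look like the sketch below; the key name inside the secret is an assumption, since the proposal above only names the secret and its namespace:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: secret-name
  namespace: openshift-config        # aligned with the tlsClientConfig behavior
type: Opaque
stringData:
  password: <repository-password>    # assumed key; placeholder value
```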
5. For each functional requirement listed in question 4, specify how Red Hat and the customer can test to confirm the requirement is successfully implemented.
Trying to pull Helm charts from remote private chart repositories that have disabled anonymous access and offer basic authentication.
E.g.: https://github.com/sonatype/docker-nexus
As an OCP user, I would like to be able to install Helm charts from repos added to ODC with basic authentication fields populated.
We need to support helm installs for Repos that have the basic authentication secret name and namespace.
Updating the ProjectHelmChartRepository CRD is already done in a different story.
Supporting the HelmChartRepository CR: this feature will be scoped first to project/namespace-scoped repos.
<Defines what is included in this story>
If the new fields for basic auth are set in the repo CR, then use those credentials when making API calls to Helm to install/upgrade charts. We will error out if the logged-in user does not have access to the secret referenced by the repo CR. If basic auth fields are not present, we assume it is not an authenticated repo.
None
NA
I can list, install and update charts on authenticated repos from ODC
Needs Documentation both upstream and downstream
Needs new unit test covering repo auth
Dependencies identified
Blockers noted and expected delivery timelines set
Design is implementable
Acceptance criteria agreed upon
Story estimated
Unknown
Verified
Unsatisfied
ACCEPTANCE CRITERIA
NOTES
ACCEPTANCE CRITERIA
NOTES
This is a follow-up Epic to https://issues.redhat.com/browse/MCO-144, which aimed to get in-place upgrades for Hypershift. This epic aims to capture additional work to focus on using CoreOS/OCP layering in Hypershift, which has benefits such as:
- removing or reducing the need for ignition
- maintaining feature parity between self-driving and managed OCP models
- adding additional functionality such as hotfixes
Right now in https://github.com/openshift/hypershift/pull/1258 you can only perform one upgrade at a time. Multiple upgrades will break due to controller logic
Properly create logic to handle manifest creation/updates and deletion, so the logic is more bulletproof
Currently not implemented, and will require the MCD hypershift mode to be adjusted to handle disruptionless upgrades like regular MCD
We plan to build Ironic Container Images using RHEL9 as base image in OCP 4.12
This is required because the ironic components have abandoned support for CentOS Stream 8 and Python 3.6/3.7 upstream during the most recent development cycle that will produce the stable Zed release, in favor of CentOS Stream 9 and Python 3.8/3.9
More info on RHEL8 to RHEL9 transition in OCP can be found at https://docs.google.com/document/d/1N8KyDY7KmgUYA9EOtDDQolebz0qi3nhT20IOn4D-xS4
Update Ironic software to pick up the latest bug fixes.
This is an API change and we will consider this as a feature request.
https://issues.redhat.com/browse/NE-799 Please check this for more details
https://issues.redhat.com/browse/NE-799 Please check this for more details
No
N/A
Make sure that the CSI driver automatically updates oVirt credentials when they are updated in OpenShift.
In the CSI driver operator we should add the
withSecretHashAnnotation
call from library-go like this: https://github.com/openshift/aws-ebs-csi-driver-operator/blob/53ed27b2a0eaa655338da180a79897855b366ac7/pkg/operator/starter.go#L138
We need tests for the ovirt-csi-driver and the cluster-api-provider-ovirt. These tests help us to
Also, having dedicated tests on lower levels with a smaller scope (unit, integration, ...) has the following benefits:
Integration tests need to be implemented according to https://cluster-api.sigs.k8s.io/developer/testing.html#integration-tests using envtest.
As a user, in the topology view, I would like to be updated intuitively if any of the deployments have reached quota limits.
Refer below for more details
As a user, I would like to be informed in an intuitive way when quotas have been reached in a namespace.
Refer below for more details
Provide a form-driven experience to allow cluster admins to manage the perspectives to meet the ACs below.
We have heard the following requests from customers and developer advocates:
As an admin, I want to be able to use a form driven experience to hide user perspective(s)
As an admin, I should be able to see a code snippet that shows how to add user perspectives
Based on the https://issues.redhat.com/browse/ODC-6732 enhancement proposal, the cluster admin can add user perspectives
To support the cluster-admin in configuring the perspectives correctly, the developer console should provide a code snippet for the customization of the YAML resource (Console CRD); a sketch of such a snippet follows below.
Customize Perspective Enhancement PR: https://github.com/openshift/enhancements/pull/1205
Previous work:
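A hedged sketch of the kind of snippet the console could surface; the customization field shape shown is illustrative and subject to the enhancement proposal referenced above:

```yaml
apiVersion: operator.openshift.io/v1
kind: Console
metadata:
  name: cluster
spec:
  customization:
    perspectives:            # illustrative field shape
    - id: dev
      visibility:
        state: Disabled      # hide the Developer perspective
```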
As an admin, I want to hide the admin perspective for non-privileged users or hide the developer perspective for all users
Based on the https://issues.redhat.com/browse/ODC-6730 enhancement proposal, it is required to extend the console configuration CRD to enable the cluster admins to configure this data in the console resource
Previous customization work:
As an admin, I want to hide user perspective(s) based on the customization.
Customers don't want their users to have access to some/all of the items which are available in the Developer Catalog. The request is to change access for the cluster, not per user or persona.
Provide a form-driven experience to allow cluster admins to easily disable the Developer Catalog, or one or more of the sub-catalogs in the Developer Catalog.
Multiple customer requests.
We need to consider how this will work with sub-catalogs which are installed by operators: VMs, Event Sources, Event Catalogs, Managed Services, Cloud-based services.
As an admin, I want to hide sub-catalogs in the developer catalog or hide the developer catalog completely based on the customization.
As an admin, I want to hide/disable access to specific sub-catalogs in the developer catalog or the complete dev catalog for all users across all namespaces.
Based on the https://issues.redhat.com/browse/ODC-6732 enhancement proposal, it is required to extend the console configuration CRD to enable the cluster admins to configure this data in the console resource
Extend the "customization" spec type definition for the CRD in the openshift/api project
Previous customization work:
As a cluster-admin, I should be able to see a code snippet that shows how to enable sub-catalogs or the entire dev catalog.
Based on the https://issues.redhat.com/browse/ODC-6732 enhancement proposal, the cluster admin can add sub-catalog(s) from the Developer Catalog or the Dev catalog as a whole.
To support the cluster-admin in configuring the sub-catalog list correctly, the developer console should provide a code snippet for the customization YAML resource (Console CRD); a sketch of such a snippet follows below.
Previous work:
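A hedged sketch of what such a snippet might look like; the field names and sub-catalog identifiers are placeholders, and the final shape comes from the API work described above:

```yaml
apiVersion: operator.openshift.io/v1
kind: Console
metadata:
  name: cluster
spec:
  customization:
    developerCatalog:        # illustrative field shape
      types:
        state: Disabled      # hide only the listed sub-catalogs
        disabled:
        - HelmChart          # placeholder sub-catalog identifiers
        - EventSource
```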
As an admin, I would like openshift-* namespaces with an operator to be labeled with security.openshift.io/scc.podSecurityLabelSync=true to ensure the continual functioning of operators without manual intervention. The label should only be applied to openshift-* namespaces with an operator (the presence of a ClusterServiceVersion resource) IF the label is not already present. This automation will help smooth functioning of the cluster and avoid frivolous operational events.
Context: As part of the PSA migration period, Openshift will ship with the "label sync'er" - a controller that will automatically adjust PSA security profiles in response to the workloads present in the namespace. We can assume that not all operators (produced by Red Hat, the community or ISVs) will have successfully migrated their deployments in response to upstream PSA changes. The label sync'er will sync, by default, any namespace not prefixed with "openshift-", of which an explicit label (security.openshift.io/scc.podSecurityLabelSync=true) is required for sync.
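The end state for an affected namespace would resemble the following sketch (the namespace name is illustrative):

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: openshift-example-operator                            # illustrative openshift-* namespace containing a CSV
  labels:
    security.openshift.io/scc.podSecurityLabelSync: "true"    # applied by OLM only if not already present
```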
A/C:
- OLM operator has been modified (downstream only) to label any unlabelled "openshift-" namespace in which a CSV has been created
- If a labeled namespace containing at least one non-copied csv becomes unlabelled, it should be relabelled
- The implementation should be done in a way to eliminate or minimize subsequent downstream sync work (it is ok to make slight architectural changes to the OLM operator in the upstream to enable this)
As a SRE, I want hypershift operator to expose a metric when hosted control plane is ready.
This should allow SRE to tune (or silence) alerts occurring while the hosted control plane is spinning up.
The Kube APIServer has a sidecar to output audit logs. We need similar sidecars for other APIServers that run on the control plane side. We also need to pass the same audit log policy that we pass to the KAS to these other API servers.
This epic tracks network tooling improvements for 4.12
New framework and process should be developed to make sharing network tools with devs, support and customers convenient. We are going to add some tools for ovn troubleshooting before ovn-k goes default, also some tools that we got from customer cases, and some more to help analyze and debug collected logs based on stable must-gather/sosreport format we get now thanks to 4.11 Epic.
Our estimation for this Epic is 1 engineer * 2 Sprints
WHY:
This epic is important to help improve the time it takes our customers and our team to understand an issue within the cluster.
A focus of this epic is to develop tools to quickly allow debugging of a problematic cluster. This is crucial for the engineering team to help us scale. We want to provide a tool to our customers to help lower the cognitive burden to get at a root cause of an issue.
Alert if any of the ovn-controllers have been disconnected from the southbound database for a period of time, using the metric ovn_controller_southbound_database_connected.
The metric updates every 2 minutes so please be mindful of this when creating the alert.
If the controller is disconnected for 10 minutes, fire an alert.
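A hedged sketch of such an alerting rule; the rule and alert names are placeholders, the metric name and 10-minute threshold come from the text above, and the expression should be validated against the 2-minute update interval:

```yaml
apiVersion: monitoring.coreos.com/v1
kind: PrometheusRule
metadata:
  name: ovn-controller-sbdb-connection                       # placeholder name
  namespace: openshift-ovn-kubernetes
spec:
  groups:
  - name: ovn-controller.rules
    rules:
    - alert: OVNControllerSouthboundDatabaseDisconnected     # placeholder alert name
      expr: ovn_controller_southbound_database_connected == 0
      for: 10m                                               # fire after 10 minutes of disconnection
      labels:
        severity: warning
      annotations:
        summary: ovn-controller has been disconnected from the southbound database for more than 10 minutes.
```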
DoD: Merged to CNO and tested by QE
OCP/Telco Definition of Done
Epic Template descriptions and documentation.
Add a SOCKS proxy to cluster-network-operator so egress IP can use gRPC to reach worker nodes.
With the introduction of gRPC as a means for determining the state of a given egress node, HyperShift should be able to leverage the SOCKS proxy and become able to know the state of each egress node.
References relevant to this work:
1281-network-proxy
https://coreos.slack.com/archives/C01C8502FMM/p1658427627751939
https://github.com/openshift/hypershift/pull/1131/commits/28546dc587dc028dc8bded715847346ff99d65ea
This Epic is here to track the rebase we need to do when kube 1.25 is GA https://www.kubernetes.dev/resources/release/
Keeping this in mind can help us plan our time better. ATTOW GA is planned for August 23
https://docs.google.com/document/d/1h1XsEt1Iug-W9JRheQas7YRsUJ_NQ8ghEMVmOZ4X-0s/edit --> this is the link for rebase help
We need to rebase cloud network config controller to 1.25 when the kube 1.25 rebase lands.
This section includes Jira cards that are linked to an Epic, but the Epic itself is not linked to any Feature. These epics were not completed when this image was assembled
Placeholder epic to track spontaneous tasks which do not deserve their own epic.
DoD:
At the moment, if the input etcd KMS encryption (key and role) is invalid, we fail transparently.
We should check that both key and role are compatible/operational for a given cluster and fail in a condition otherwise
Changes made in METAL-1 open up opportunities to improve our handling of images by cleaning up redundant code that generates extra work for the user and extra load for the cluster.
We only need to run the image cache DaemonSet if there is a QCOW URL to be mirrored (effectively this means a cluster installed with 4.9 or earlier). We can stop deploying it for new clusters installed with 4.10 or later.
Currently, the image-customization-controller relies on the image cache running on every master to provide the shared hostpath volume containing the ISO and initramfs. The first step is to replace this with a regular volume and an init container in the i-c-c pod that extracts the images from machine-os-images. We can use the copy-metal -image-build flag (instead of -all used in the shared volume) to provide only the required images.
Once i-c-c has its own volume, we can switch the image extraction in the metal3 Pod's init container to use the -pxe flag instead of -all.
The machine-os-images init container for the image cache (not the metal3 Pod) can be removed. The whole image cache deployment is now optional and need only be started if provisioningOSDownloadURL is set (and in fact should be deleted if it is not).
Description of the problem:
When running assisted-installer on a machine where there is more than one volume group per physical volume, only the first volume group will be cleaned up. This leads to problems later and will lead to errors such as:
Failed - failed executing nsenter [--target 1 --cgroup --mount --ipc --pid -- pvremove /dev/sda -y -ff], Error exit status 5, LastOutput "Can't open /dev/sda exclusively. Mounted filesystem?
How reproducible:
Set up a VM with more than one volume group per physical volume. As an example, look at the following sample from a customer cluster.
List block devices:

```
/usr/bin/lsblk -o NAME,MAJ:MIN,SIZE,TYPE,FSTYPE,KNAME,MODEL,UUID,WWN,HCTL,VENDOR,STATE,TRAN,PKNAME
NAME MAJ:MIN SIZE TYPE FSTYPE KNAME MODEL UUID WWN HCTL VENDOR STATE TRAN PKNAME
loop0 7:0 125.9G loop xfs loop0 c080b47b-2291-495c-8cc0-2009ebc39839
loop1 7:1 885.5M loop squashfs loop1
sda 8:0 894.3G disk sda INTEL SSDSC2KG96 0x55cd2e415235b2db 1:0:0:0 ATA running sas
|-sda1 8:1 250M part sda1 0x55cd2e415235b2db sda
|-sda2 8:2 750M part ext2 sda2 3aa73c72-e342-4a07-908c-a8a49767469d 0x55cd2e415235b2db sda
|-sda3 8:3 49G part xfs sda3 ffc3ccfe-f150-4361-8ae5-f87b17c13ac2 0x55cd2e415235b2db sda
|-sda4 8:4 394.2G part LVM2_member sda4 Ua3HOc-Olm4-1rma-q0Ug-PtzI-ZOWg-RJ63uY 0x55cd2e415235b2db sda
`-sda5 8:5 450G part LVM2_member sda5 W8JqrD-ZvaC-uNK9-Y03D-uarc-Tl4O-wkDdhS 0x55cd2e415235b2db sda
  `-nova-instance 253:0 3.1T lvm ext4 dm-0 d15e2de6-2b97-4241-9451-639f7b14594e running sda5
sdb 8:16 894.3G disk sdb INTEL SSDSC2KG96 0x55cd2e415235b31b 1:0:1:0 ATA running sas
`-sdb1 8:17 894.3G part LVM2_member sdb1 6ETObl-EzTd-jLGw-zVNc-lJ5O-QxgH-5wLAqD 0x55cd2e415235b31b sdb
  `-nova-instance 253:0 3.1T lvm ext4 dm-0 d15e2de6-2b97-4241-9451-639f7b14594e running sdb1
sdc 8:32 894.3G disk sdc INTEL SSDSC2KG96 0x55cd2e415235b652 1:0:2:0 ATA running sas
`-sdc1 8:33 894.3G part LVM2_member sdc1 pBuktx-XlCg-6Mxs-lddC-qogB-ahXa-Nd9y2p 0x55cd2e415235b652 sdc
  `-nova-instance 253:0 3.1T lvm ext4 dm-0 d15e2de6-2b97-4241-9451-639f7b14594e running sdc1
sdd 8:48 894.3G disk sdd INTEL SSDSC2KG96 0x55cd2e41521679b7 1:0:3:0 ATA running sas
`-sdd1 8:49 894.3G part LVM2_member sdd1 exVSwU-Pe07-XJ6r-Sfxe-CQcK-tu28-Hxdnqo 0x55cd2e41521679b7 sdd
  `-nova-instance 253:0 3.1T lvm ext4 dm-0 d15e2de6-2b97-4241-9451-639f7b14594e running sdd1
sr0 11:0 989M rom iso9660 sr0 Virtual CDROM0 2022-06-17-18-18-33-00 0:0:0:0 AMI running usb
```
Now run the assisted installer and try to install an SNO node on this machine; you will find that the installation fails with a message indicating that it could not exclusively access /dev/sda.
Actual results:
The installation will fail with a message that indicates that it could not exclusively access /dev/sda
Expected results:
The installation should proceed and the cluster should start to install.
Suspected Cases
https://issues.redhat.com/browse/AITRIAGE-3809
https://issues.redhat.com/browse/AITRIAGE-3802
https://issues.redhat.com/browse/AITRIAGE-3810
Description of the problem:
Cluster installation fails if the installation disk has LVM on RAID:
Host: test-infra-cluster-3cc862c9-master-0, reached installation stage Failed: failed executing nsenter [--target 1 --cgroup --mount --ipc --pid -- mdadm --stop /dev/md0], Error exit status 1, LastOutput "mdadm: Cannot get exclusive access to /dev/md0:Perhaps a running process, mounted filesystem or active volume group?"
How reproducible:
100%
Steps to reproduce:
1. Install a cluster while the master nodes have a disk with LVM on RAID (reproduces using the test: https://gitlab.cee.redhat.com/ocp-edge-qe/kni-assisted-installer-auto/-/blob/master/api_tests/test_disk_cleanup.py#L97)
Actual results:
Installation failed
Expected results:
Installation success
Same thing as we've had in assisted-service. We sometimes fail to install golangci-lint by fetching release artifacts from GitHub directly. That's usually because the same IP address (the CI build cluster) accesses GitHub at a high rate, leading to 429 (too many requests).
The way we fixed it for assisted-service was to change the installation to use a quay.io image that is already built with the binary.
Example for such a failure: https://prow.ci.openshift.org/view/gs/origin-ci-test/pr-logs/pull/openshift_release/30788/rehearse-30788-periodic-ci-openshift-assisted-installer-agent-release-ocm-2.6-subsystem-test-periodic/1551879759036682240
Filter for all recent failures: https://search.ci.openshift.org/?search=golangci%2Fgolangci-lint+crit+unable+to+find&maxAge=168h&context=1&type=build-log&name=.*assisted.*&excludeName=&maxMatches=5&maxBytes=20971520&groupBy=job
Section 5 of PRD: https://docs.google.com/document/d/1fF-Ajdzc9EDDg687FzTrX577hvY9NdK0/edit#heading=h.gjdgxs
Testing and collaboration with NVIDIA: https://docs.google.com/spreadsheets/d/1LHY-Af-2kQHVwtW4aVdHnmwZLTiatiyf-ySffC8O5NM/edit#gid=0
Deploying Nvidia Patches: https://docs.google.com/document/d/1yR4lphjPKd6qZ9sGzZITl0wH1r4ykfMKPjUnlzvWji4/edit#
This is the continuation of https://issues.redhat.com/browse/NHE-273 but now the focus is on the remaining flows.
Description of problem:
check_pkt_length cannot be offloaded without 1) sFlow offload patches in Open vSwitch and 2) hardware driver support. Since 1) will not be done anytime soon, we need a workaround for the check_pkt_length issue.
Version-Release number of selected component (if applicable):
4.11/4.12
How reproducible:
Always
Steps to Reproduce:
1. Any flow that has check_pkt_len():
5-b: Pod -> NodePort Service traffic (Pod Backend - Different Node)
6-b: Pod -> NodePort Service traffic (Host Backend - Different Node)
4-b: Pod -> Cluster IP Service traffic (Host Backend - Different Node)
10-b: Host Pod -> Cluster IP Service traffic (Host Backend - Different Node)
11-b: Host Pod -> NodePort Service traffic (Pod Backend - Different Node)
12-b: Host Pod -> NodePort Service traffic (Host Backend - Different Node)
Actual results:
Poor performance due to upcalls when check_pkt_len() is not supported.
Expected results:
Good performance.
Additional info:
https://docs.google.com/spreadsheets/d/1LHY-Af-2kQHVwtW4aVdHnmwZLTiatiyf-ySffC8O5NM/edit#gid=670206692
As a developer building container images on OpenShift
I want to specify that my build should run without elevated privileges
So that builds do not run as root from the host's perspective with elevated privileges
No QE required for Dev Preview. OpenShift regression testing will verify that existing behavior is not impacted.
We will need to document how to enable this feature, with sufficient warnings regarding Dev Preview.
This likely warrants an OpenShift blog post.
We have been running into a number of problems with configure-ovs and nodeip-configuration selecting different interfaces in OVNK deployments. This causes connectivity issues, so we need some way to ensure that everything uses the same interface/IP.
Currently configure-ovs runs before nodeip-configuration, but since nodeip-configuration is the source of truth for IP selection regardless of CNI plugin, I think we need to look at swapping that order. That way configure-ovs could look at what nodeip-configuration chose and not have to implement its own interface selection logic.
I'm targeting this at 4.12 because even though there's probably still time to get it in for 4.11, changing the order of boot services is always a little risky and I'd prefer to do it earlier in the cycle so we have time to tease out any issues that arise. We may need to consider backporting the change though since this has been an issue at least back to 4.10.
Goal
Provide an indication that advanced features are used
Problem
Today, customers and RH don't have the information on the actual usage of advanced features.
Why is this important?
Prioritized Scenarios
In Scope
1. Add a boolean variable in our telemetry to mark if the customer is using advanced features (PV encryption, encryption with KMS, external mode).
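For illustration only, a minimal sketch of what such a boolean telemetry flag could look like as a Prometheus gauge in Go; the metric name and the way advanced-feature usage is detected are assumptions, not the actual ODF implementation.
```
package telemetry

import "github.com/prometheus/client_golang/prometheus"

// advancedFeatureInUse is a boolean-style gauge: 1 when any advanced feature
// (PV encryption, encryption with KMS, external mode) is in use, 0 otherwise.
// The metric name is a hypothetical placeholder.
var advancedFeatureInUse = prometheus.NewGauge(prometheus.GaugeOpts{
	Name: "odf_advanced_feature_in_use",
	Help: "1 if any advanced feature is in use, 0 otherwise.",
})

func init() {
	prometheus.MustRegister(advancedFeatureInUse)
}

// RecordAdvancedFeatureUsage sets the flag from whatever cluster inspection
// decides that an advanced feature is enabled.
func RecordAdvancedFeatureUsage(inUse bool) {
	if inUse {
		advancedFeatureInUse.Set(1)
		return
	}
	advancedFeatureInUse.Set(0)
}
```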
Not in Scope
Integrate with subscription watch - will be done by the subscription watch team with our help.
Customers
All
Customer Facing Story
As a compliance manager, I should be able to easily see whether all my clusters are using the right number of subscriptions.
What does success look like?
A clear indication in subscription watch for ODF usage (either essential or advanced).
Link to main epic: https://issues.redhat.com/browse/RHSTOR-3173
We migrated most components as part of https://issues.redhat.com/browse/RHSTOR-2165
We now have a few components remaining, roughly 15 to 20%. This epic targets:
1) Add support for in-tree modal launcher
This section includes Jira cards that are not linked to either an Epic or a Feature. These tickets were completed when this image was assembled
This is a clone of issue OCPBUGS-3633. The following is the description of the original issue:
—
I think something is wrong with the alerts refactor, or perhaps my sync to 4.12.
Failed: suite=[openshift-tests], [sig-instrumentation][Late] Alerts shouldn't report any unexpected alerts in firing or pending state [apigroup:config.openshift.io] [Suite:openshift/conformance/parallel] Passed 1 times, failed 0 times, skipped 0 times: we require at least 6 attempts to have a chance at success
We're not getting the passes - from https://prow.ci.openshift.org/view/gs/origin-ci-test/logs/aggregated-azure-ovn-upgrade-4.12-micro-release-openshift-release-analysis-aggregator/1592021681235300352, the successful runs don't show any record of the test at all. We need to record successes and failures for aggregation to work right.
The issue was found while testing HOSTEDCP-400 and HOSTEDCP-401.
Hypershift operator installed with flags:
--platform-monitoring=operator-only --enable-uwm-telemetry-remote-write=true --metrics-set=telemetry
Service monitors and pod monitors in the control plane:
[jiezhao@cube hypershift]$ oc get servicemonitor -n clusters-jz-test
NAME AGE
catalog-operator 45m
cluster-version-operator 45m
etcd 46m
kube-apiserver 46m
kube-controller-manager 45m
monitor-multus-admission-controller 43m
monitor-ovn-master-metrics 43m
node-tuning-operator 45m
olm-operator 45m
openshift-apiserver 45m
openshift-controller-manager 45m
[jiezhao@cube hypershift]$ oc get podmonitor -n clusters-jz-test
NAME AGE
cluster-image-registry-operator 46m
controlplane-operator 47m
hosted-cluster-config-operator 46m
ignition-server 47m
In OCP management web console, go to Observe->Targets:
1. The status of service monitor 'monitor-multus-admission-controller' is Down, error: Scraped failed: server returned HTTP status 401 Unauthorized. It doesn't have a cluster id in its target labels.
2. The target of pod monitor 'cluster-image-registry-operator' is missing, not shown.
Description of problem:
cloud-network-config-controller pod crashloops in proxy deployments as it tries to reach Openstack keystone API directly (not through the proxy) and there is no connectivity. NAMESPACE NAME READY STATUS RESTARTS AGE openshift-cloud-network-config-controller cloud-network-config-controller-c4867b748-vlq9h 0/1 CrashLoopBackOff 158 (2m10s ago) 13h $ oc -n openshift-cloud-network-config-controller logs -p cloud-network-config-controller-c4867b748-vlq9h W0927 05:48:18.678947 1 client_config.go:617] Neither --kubeconfig nor --master was specified. Using the inClusterConfig. This might not work. I0927 05:48:18.680269 1 leaderelection.go:248] attempting to acquire leader lease openshift-cloud-network-config-controller/cloud-network-config-controller-lock... I0927 05:48:26.754377 1 leaderelection.go:258] successfully acquired lease openshift-cloud-network-config-controller/cloud-network-config-controller-lock I0927 05:48:26.755413 1 openstack.go:121] Custom CA bundle found at location '/kube-cloud-config/ca-bundle.pem' - reading certificate information F0927 05:48:28.233519 1 main.go:101] Error building cloud provider client, err: Get "https://10.46.44.10:13000/": dial tcp 10.46.44.10:13000: connect: no route to host goroutine 51 [running]: k8s.io/klog/v2.stacks(0x1) /go/src/github.com/openshift/cloud-network-config-controller/vendor/k8s.io/klog/v2/klog.go:860 +0x8a k8s.io/klog/v2.(*loggingT).output(0x37696c0, 0x3, 0x0, 0xc000636000, 0x1, {0x2cbcbd8?, 0x1?}, 0xc000438400?, 0x0) /go/src/github.com/openshift/cloud-network-config-controller/vendor/k8s.io/klog/v2/klog.go:825 +0x686 k8s.io/klog/v2.(*loggingT).printfDepth(0x37696c0, 0x237798a?, 0x0, {0x0, 0x0}, 0x7fff81041af7?, {0x23a20d0, 0x2d}, {0xc00052c050, 0x1, ...}) /go/src/github.com/openshift/cloud-network-config-controller/vendor/k8s.io/klog/v2/klog.go:630 +0x1f2 k8s.io/klog/v2.(*loggingT).printf(...) /go/src/github.com/openshift/cloud-network-config-controller/vendor/k8s.io/klog/v2/klog.go:612 k8s.io/klog/v2.Fatalf(...) /go/src/github.com/openshift/cloud-network-config-controller/vendor/k8s.io/klog/v2/klog.go:1516 main.main.func1({0x26e5638, 0xc00016c040}) /go/src/github.com/openshift/cloud-network-config-controller/cmd/cloud-network-config-controller/main.go:101 +0x26d created by k8s.io/client-go/tools/leaderelection.(*LeaderElector).Run /go/src/github.com/openshift/cloud-network-config-controller/vendor/k8s.io/client-go/tools/leaderelection/leaderelection.go:211 +0x11bgoroutine 1 [select]: k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc00052bb60?, {0x26cee20, 0xc000581740}, 0x1, 0xc00052bb60) /go/src/github.com/openshift/cloud-network-config-controller/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:167 +0x135 k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc00016c080?, 0x60db88400, 0x0, 0x20?, 0x7fea470ec108?) /go/src/github.com/openshift/cloud-network-config-controller/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:133 +0x89 k8s.io/apimachinery/pkg/util/wait.Until(...) 
/go/src/github.com/openshift/cloud-network-config-controller/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:90 k8s.io/client-go/tools/leaderelection.(*LeaderElector).renew(0xc0000a8120, {0x26e5638?, 0xc00016c040?}) /go/src/github.com/openshift/cloud-network-config-controller/vendor/k8s.io/client-go/tools/leaderelection/leaderelection.go:268 +0xd0 k8s.io/client-go/tools/leaderelection.(*LeaderElector).Run(0xc0000a8120, {0x26e5638, 0xc00025fcc0}) /go/src/github.com/openshift/cloud-network-config-controller/vendor/k8s.io/client-go/tools/leaderelection/leaderelection.go:212 +0x12f k8s.io/client-go/tools/leaderelection.RunOrDie({0x26e5638, 0xc00025fcc0}, {{0x26e7430, 0xc00062afa0}, 0x1fe5d61a00, 0x18e9b26e00, 0x60db88400, {0xc00065e630, 0xc000634810, 0x0}, ...}) /go/src/github.com/openshift/cloud-network-config-controller/vendor/k8s.io/client-go/tools/leaderelection/leaderelection.go:226 +0x94 main.main() /go/src/github.com/openshift/cloud-network-config-controller/cmd/cloud-network-config-controller/main.go:86 +0x450
Version-Release number of selected component (if applicable):
4.12.0-0.nightly-2022-09-26-050728
How reproducible:
Always
Steps to Reproduce:
1. Install OCP with proxy
Actual results:
Bootstrap failure and pod crashloop
Expected results:
Successful installation
Additional info:
Please find the must-gather here.
Description of problem:
If ingresscontroller.spec.routeSelector.matchExpressions or ingresscontroller.spec.namespaceSelector.matchExpressions is used, the route will not be counted in the new route_metrics_controller_routes_per_shard Prometheus metric. This is because the logic only uses "matchLabels". The logic needs to be updated to also use "matchExpressions".
Version-Release number of selected component (if applicable):
4.12
How reproducible:
100%
Steps to Reproduce:
1. Create an IngressController with matchExpressions:
oc apply -f - <<EOF
apiVersion: operator.openshift.io/v1
kind: IngressController
metadata:
  name: sharded
  namespace: openshift-ingress-operator
spec:
  domain: reproducer.$domain
  routeSelector:
    matchExpressions:
    - key: type
      operator: In
      values:
      - shard
  replicas: 1
  nodePlacement:
    nodeSelector:
      matchLabels:
        node-role.kubernetes.io/worker: ""
EOF
2. Create the route:
oc apply -f - <<EOF
apiVersion: route.openshift.io/v1
kind: Route
metadata:
  name: route-shard
  labels:
    type: shard
spec:
  to:
    kind: Service
    name: router-shard
EOF
3. Check route_metrics_controller_routes_per_shard{name="sharded"} in Prometheus; it's 0.
Actual results:
route_metrics_controller_routes_per_shard{name="sharded"} has 0 routes
Expected results:
route_metrics_controller_routes_per_shard{name="sharded"} should have 1 route
Additional info:
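For illustration, a minimal Go sketch of the kind of fix described above, using the generic label-selector helper so that both matchLabels and matchExpressions are honored; this is not the actual ingress/router operator code, and the function name is an assumption.
```
package metrics

import (
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/labels"
)

// routeMatchesShard reports whether a route's labels satisfy the shard's
// routeSelector, honoring both matchLabels and matchExpressions.
func routeMatchesShard(routeSelector *metav1.LabelSelector, routeLabels map[string]string) (bool, error) {
	if routeSelector == nil {
		// No selector means every route is selected.
		return true, nil
	}
	// LabelSelectorAsSelector compiles matchLabels AND matchExpressions,
	// unlike building a selector from matchLabels alone.
	sel, err := metav1.LabelSelectorAsSelector(routeSelector)
	if err != nil {
		return false, err
	}
	return sel.Matches(labels.Set(routeLabels)), nil
}
```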
Description of problem:
If you set a service's cluster IP to an IP with a leading zero (e.g. 192.168.0.011), ovn-k should normalise this and remove the leading zero before sending it to ovn.
I saw this on a CI run executing the k8s test here: test/e2e/network/funny_ips.go +75
You can reproduce it using the above test.
Have a read of the text there:
// What are funny IPs:
// The adjective is because of the curl blog that explains the history and the problem of liberal
// parsing of IP addresses and the consequences and security risks caused the lack of normalization,
// mainly due to the use of different notations to abuse parsers misalignment to bypass filters.
// xref: https://daniel.haxx.se/blog/2021/04/19/curl-those-funny-ipv4-addresses/
//
// Since golang 1.17, IPv4 addresses with leading zeros are rejected by the standard library.
// xref: https://github.com/golang/go/issues/30999
//
// Because this change on the parsers can cause that previous valid data become invalid, Kubernetes
// forked the old parsers allowing leading zeros on IPv4 address to not break the compatibility.
//
// Kubernetes interprets leading zeros on IPv4 addresses as decimal, users must not rely on parser
// alignment to not being impacted by the associated security advisory: CVE-2021-29923 golang
// standard library "net" - Improper Input Validation of octal literals in golang 1.16.2 and below
// standard library "net" results in indeterminate SSRF & RFI vulnerabilities. xref:
// https://nvd.nist.gov/vuln/detail/CVE-2021-29923
northd is logging an error about this also:
|socket_util|ERR|172.30.0.011:7180: bad IP address "172.30.0.011" ... 2022-08-23T14:14:21.968Z|01839|ovn_util|WARN|bad ip address or port for load balancer key 172.30.0.011:7180
Also, I see the error:
E0823 14:14:34.135115 3284 gateway_shared_intf.go:600] Failed to delete conntrack entry for service e2e-funny-ips-8626/funny-ip: failed to delete conntrack entry for service e2e-funny-ips-8626/funny-ip with svcVIP 172.30.0.011, svcPort 7180, protocol TCP: value "<nil>" passed to DeleteConntrack is not an IP address
We should normalise the IPs before sending them to OVN-K. I also see a conntrack error when trying to set this bad IP.
Version-Release number of selected component (if applicable):
How reproducible:
Steps to Reproduce:
1. See above k8 test
Actual results:
Leading zero IP sent to OVN
Expected results:
No leading zero IP sent to OVN
Additional info:
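As an illustration of the normalization being asked for, a small Go sketch that rewrites a leading-zero IPv4 address using the decimal interpretation Kubernetes applies; this is not the actual ovn-kubernetes change.
```
// Minimal sketch, not the actual ovn-kubernetes fix: normalize an IPv4 address by
// interpreting each octet as decimal (the Kubernetes interpretation) and dropping
// leading zeros before the value is handed to OVN.
package main

import (
	"fmt"
	"strconv"
	"strings"
)

func normalizeIPv4(ip string) (string, error) {
	parts := strings.Split(ip, ".")
	if len(parts) != 4 {
		return "", fmt.Errorf("%q is not a dotted-quad IPv4 address", ip)
	}
	out := make([]string, 4)
	for i, p := range parts {
		n, err := strconv.Atoi(p) // decimal, matching the Kubernetes forked parser
		if err != nil || n < 0 || n > 255 {
			return "", fmt.Errorf("invalid octet %q in %q", p, ip)
		}
		out[i] = strconv.Itoa(n)
	}
	return strings.Join(out, "."), nil
}

func main() {
	fmt.Println(normalizeIPv4("172.30.0.011")) // prints 172.30.0.11 <nil>
}
```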
This is a clone of issue OCPBUGS-3414. The following is the description of the original issue:
—
Description of problem:
The current implementation of the new OCI FBC feature omits the creation of the ImageContentSourcePolicy and CatalogSource resources.
Description of problem:
Scaling up more worker nodes does not add them to the Load Balancer instances (backend pool); if the router pods then move to the new worker nodes, co/ingress becomes degraded.
Version-Release number of selected component (if applicable):
4.12.0-0.nightly-2022-10-23-204408
How reproducible:
100%
Steps to Reproduce:
1. Ensure the fresh install cluster works well.
2. Scale up worker nodes.
$ oc -n openshift-machine-api get machineset
NAME DESIRED CURRENT READY AVAILABLE AGE
hongli-1024-hnkrm-worker-us-east-2a 1 1 1 1 5h21m
hongli-1024-hnkrm-worker-us-east-2b 1 1 1 1 5h21m
hongli-1024-hnkrm-worker-us-east-2c 1 1 1 1 5h21m
$ oc -n openshift-machine-api scale machineset hongli-1024-hnkrm-worker-us-east-2a --replicas=2
machineset.machine.openshift.io/hongli-1024-hnkrm-worker-us-east-2a scaled
$ oc -n openshift-machine-api scale machineset hongli-1024-hnkrm-worker-us-east-2b --replicas=2
machineset.machine.openshift.io/hongli-1024-hnkrm-worker-us-east-2b scaled
(about 5 minutes later)
$ oc -n openshift-machine-api get machineset
NAME DESIRED CURRENT READY AVAILABLE AGE
hongli-1024-hnkrm-worker-us-east-2a 2 2 2 2 5h29m
hongli-1024-hnkrm-worker-us-east-2b 2 2 2 2 5h29m
hongli-1024-hnkrm-worker-us-east-2c 1 1 1 1 5h29m
3. Delete the router pods so that new ones run on the new workers.
$ oc get node
NAME STATUS ROLES AGE VERSION
ip-10-0-128-45.us-east-2.compute.internal Ready worker 71m v1.25.2+4bd0702
ip-10-0-131-192.us-east-2.compute.internal Ready control-plane,master 6h35m v1.25.2+4bd0702
ip-10-0-139-51.us-east-2.compute.internal Ready worker 6h29m v1.25.2+4bd0702
ip-10-0-162-228.us-east-2.compute.internal Ready worker 71m v1.25.2+4bd0702
ip-10-0-172-216.us-east-2.compute.internal Ready control-plane,master 6h35m v1.25.2+4bd0702
ip-10-0-190-82.us-east-2.compute.internal Ready worker 6h25m v1.25.2+4bd0702
ip-10-0-196-26.us-east-2.compute.internal Ready control-plane,master 6h35m v1.25.2+4bd0702
ip-10-0-199-158.us-east-2.compute.internal Ready worker 6h28m v1.25.2+4bd0702
$ oc -n openshift-ingress get pod -owide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
router-default-86444dcd84-cm96l 1/1 Running 0 65m 10.130.2.7 ip-10-0-128-45.us-east-2.compute.internal <none> <none>
router-default-86444dcd84-vpnjz 1/1 Running 0 65m 10.131.2.7 ip-10-0-162-228.us-east-2.compute.internal <none> <none>
Actual results:
$ oc get co ingress console authentication
NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE MESSAGE
ingress 4.12.0-0.nightly-2022-10-23-204408 True False True 66m The "default" ingress controller reports Degraded=True: DegradedConditions: One or more other status conditions indicate a degraded state: CanaryChecksSucceeding=False (CanaryChecksRepetitiveFailures: Canary route checks for the default ingress controller are failing)
console 4.12.0-0.nightly-2022-10-23-204408 False False False 66m RouteHealthAvailable: failed to GET route (https://console-openshift-console.apps.hongli-1024.qe.devcluster.openshift.com): Get "https://console-openshift-console.apps.hongli-1024.qe.devcluster.openshift.com": EOF
authentication 4.12.0-0.nightly-2022-10-23-204408 False False True 66m OAuthServerRouteEndpointAccessibleControllerAvailable: Get "https://oauth-openshift.apps.hongli-1024.qe.devcluster.openshift.com/healthz": EOF
Checked the Load Balancer on the AWS console and found that the newly created nodes are not added to the load balancer; see the snapshot attached.
Expected results:
The LB should add newly created instances automatically and ingress should work with the new workers.
Additional info:
1. This is also reproducible with a regular user-created LoadBalancer service.
2. If the LB service is created after adding the new nodes, it works well; we can see that all nodes are added to the LB on the AWS console.
Description of problem:
This is a follow up on OCPBUGSM-47202 (https://bugzilla.redhat.com/show_bug.cgi?id=2110570)
While OCPBUGSM-47202 fixes the issue specific to Set Pod Count, many other actions aren't fixed. When the user updates a Deployment with one of these options and selects the action again, the old values are still shown.
Version-Release number of selected component (if applicable)
4.8-4.12 as well as master with the changes of OCPBUGSM-47202
How reproducible:
Always
Steps to Reproduce:
Actual results:
Old data (labels, annotations, etc.) was shown.
Expected results:
Latest data should be shown
Additional info:
Description of problem:
With "createFirewallRules: Enabled", after successful "create cluster" and then "destroy cluster", the created firewall-rules in the shared VPC are not deleted.
Version-Release number of selected component (if applicable):
$ ./openshift-install version
./openshift-install 4.12.0-0.nightly-2022-09-28-204419
built from commit 9eb0224926982cdd6cae53b872326292133e532d
release image registry.ci.openshift.org/ocp/release@sha256:2c8e617830f84ac1ee1bfcc3581010dec4ae5d9cad7a54271574e8d91ef5ecbc
release architecture amd64
How reproducible:
Always
Steps to Reproduce:
1. Try IPI installation with "createFirewallRules: Enabled", which succeeded.
2. Try destroying the cluster, which succeeded.
3. Check firewall-rules in the shared VPC.
Actual results:
After destroying the cluster, its firewall-rules created by installer in the shared VPC are not deleted.
Expected results:
Those firewall-rules should be deleted during destroying the cluster.
Additional info:
$ gcloud --project openshift-qe-shared-vpc compute firewall-rules list --filter='network=installer-shared-vpc' NAME NETWORK DIRECTION PRIORITY ALLOW DENY DISABLED ci-op-xpn-ingress-common installer-shared-vpc INGRESS 60000 tcp:6443,tcp:22,tcp:80,tcp:443,icmp False ci-op-xpn-ingress-health-checks installer-shared-vpc INGRESS 60000 tcp:30000-32767,udp:30000-32767,tcp:6080,tcp:6443,tcp:226 24,tcp:32335 False ci-op-xpn-ingress-internal-network installer-shared-vpc INGRESS 60000 udp:4789,udp:6081,udp:500,udp:4500,esp,tcp:9000-9999,udp: 9000-9999,tcp:10250,tcp:30000-32767,udp:30000-32767,tcp:10257,tcp:10259,tcp:22623,tcp:2379-2380 FalseTo show all fields of the firewall, please show in JSON format: --format=json To show all fields in table format, please see the examples in --help. $ $ yq-3.3.0 r test2/install-config.yaml platform gcp: projectID: openshift-qe region: us-central1 computeSubnet: installer-shared-vpc-subnet-2 controlPlaneSubnet: installer-shared-vpc-subnet-1 createFirewallRules: Enabled network: installer-shared-vpc networkProjectID: openshift-qe-shared-vpc $ $ yq-3.3.0 r test2/install-config.yaml metadata creationTimestamp: null name: jiwei-1013-01 $ $ openshift-install create cluster --dir test2 INFO Credentials loaded from file "/home/fedora/.gcp/osServiceAccount.json" INFO Consuming Install Config from target directory INFO Creating infrastructure resources... INFO Waiting up to 20m0s (until 4:06AM) for the Kubernetes API at https://api.jiwei-1013-01.qe.gcp.devcluster.openshift.com:6443... INFO API v1.24.0+8c7c967 up INFO Waiting up to 30m0s (until 4:20AM) for bootstrapping to complete... INFO Destroying the bootstrap resources... INFO Waiting up to 40m0s (until 4:42AM) for the cluster at https://api.jiwei-1013-01.qe.gcp.devcluster.openshift.com:6443 to initialize... INFO Checking to see if there is a route at openshift-console/console... INFO Install complete! 
INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/fedora/test2/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.jiwei-1013-01.qe.gcp.devcluster.openshift.com INFO Login to the console with user: "kubeadmin", and password: "wWPkc-8G2Lw-xe2Vw-DgWha" INFO Time elapsed: 39m14s $ $ openshift-install destroy cluster --dir test2 INFO Credentials loaded from file "/home/fedora/.gcp/osServiceAccount.json" INFO Stopped instance jiwei-1013-01-464st-worker-b-pmg5z INFO Stopped instance jiwei-1013-01-464st-worker-a-csg2j INFO Stopped instance jiwei-1013-01-464st-master-1 INFO Stopped instance jiwei-1013-01-464st-master-2 INFO Stopped instance jiwei-1013-01-464st-master-0 INFO Deleted 2 recordset(s) in zone qe INFO Deleted 3 recordset(s) in zone jiwei-1013-01-464st-private-zone INFO Deleted DNS zone jiwei-1013-01-464st-private-zone INFO Deleted bucket jiwei-1013-01-464st-image-registry-us-central1-ulgxgjfqxbdnrhd INFO Deleted instance jiwei-1013-01-464st-master-0 INFO Deleted instance jiwei-1013-01-464st-worker-a-csg2j INFO Deleted instance jiwei-1013-01-464st-master-1 INFO Deleted instance jiwei-1013-01-464st-worker-b-pmg5z INFO Deleted instance jiwei-1013-01-464st-master-2 INFO Deleted disk jiwei-1013-01-464st-master-2 INFO Deleted disk jiwei-1013-01-464st-master-1 INFO Deleted disk jiwei-1013-01-464st-worker-b-pmg5z INFO Deleted disk jiwei-1013-01-464st-master-0 INFO Deleted disk jiwei-1013-01-464st-worker-a-csg2j INFO Deleted address jiwei-1013-01-464st-cluster-public-ip INFO Deleted address jiwei-1013-01-464st-cluster-ip INFO Deleted forwarding rule a516d89f9a4f14bdfb55a525b1a12a91 INFO Deleted forwarding rule jiwei-1013-01-464st-api INFO Deleted forwarding rule jiwei-1013-01-464st-api-internal INFO Deleted target pool a516d89f9a4f14bdfb55a525b1a12a91 INFO Deleted target pool jiwei-1013-01-464st-api INFO Deleted backend service jiwei-1013-01-464st-api-internal INFO Deleted instance group jiwei-1013-01-464st-master-us-central1-a INFO Deleted instance group jiwei-1013-01-464st-master-us-central1-c INFO Deleted instance group jiwei-1013-01-464st-master-us-central1-b INFO Deleted health check jiwei-1013-01-464st-api-internal INFO Deleted HTTP health check a516d89f9a4f14bdfb55a525b1a12a91 INFO Deleted HTTP health check jiwei-1013-01-464st-api INFO Time elapsed: 4m18s $ $ gcloud --project openshift-qe-shared-vpc compute firewall-rules list --filter='network=installer-shared-vpc' NAME NETWORK DIRECTION PRIORITY ALLOW DENY DISABLED ci-op-xpn-ingress-common installer-shared-vpc INGRESS 60000 tcp:6443,tcp:22,tcp:80,tcp:443,icmp False ci-op-xpn-ingress-health-checks installer-shared-vpc INGRESS 60000 tcp:30000-32767,udp:30000-32767,tcp:6080,tcp:6443,tcp:22624,tcp:32335 False ci-op-xpn-ingress-internal-network installer-shared-vpc INGRESS 60000 udp:4789,udp:6081,udp:500,udp:4500,esp,tcp:9000-9999,udp:9000-9999,tcp:10250,tcp:30000-32767,udp:30000-32767,tcp:10257,tcp:10259,tcp:22623,tcp:2379-2380 False jiwei-1013-01-464st-api installer-shared-vpc INGRESS 1000 tcp:6443 False jiwei-1013-01-464st-control-plane installer-shared-vpc INGRESS 1000 tcp:22623,tcp:10257,tcp:10259 False jiwei-1013-01-464st-etcd installer-shared-vpc INGRESS 1000 tcp:2379-2380 False jiwei-1013-01-464st-health-checks installer-shared-vpc INGRESS 1000 tcp:6080,tcp:6443,tcp:22624 False jiwei-1013-01-464st-internal-cluster installer-shared-vpc INGRESS 1000 
tcp:30000-32767,udp:9000-9999,udp:30000-32767,udp:4789,udp:6081,tcp:9000-9999,udp:500,udp:4500,esp,tcp:10250 False jiwei-1013-01-464st-internal-network installer-shared-vpc INGRESS 1000 icmp,tcp:22 False k8s-a516d89f9a4f14bdfb55a525b1a12a91-http-hc installer-shared-vpc INGRESS 1000 tcp:30268 False k8s-fw-a516d89f9a4f14bdfb55a525b1a12a91 installer-shared-vpc INGRESS 1000 tcp:80,tcp:443 FalseTo show all fields of the firewall, please show in JSON format: --format=json To show all fields in table format, please see the examples in --help. $ FYI manually deleting those firewall-rules in the shared VPC does work. $ gcloud --project openshift-qe-shared-vpc compute firewall-rules delete -q jiwei-1013-01-464st-api Deleted [https://www.googleapis.com/compute/v1/projects/openshift-qe-shared-vpc/global/firewalls/jiwei-1013-01-464st-api]. $ gcloud --project openshift-qe-shared-vpc compute firewall-rules delete -q jiwei-1013-01-464st-control-plane Deleted [https://www.googleapis.com/compute/v1/projects/openshift-qe-shared-vpc/global/firewalls/jiwei-1013-01-464st-control-plane]. $ gcloud --project openshift-qe-shared-vpc compute firewall-rules delete -q jiwei-1013-01-464st-etcd Deleted [https://www.googleapis.com/compute/v1/projects/openshift-qe-shared-vpc/global/firewalls/jiwei-1013-01-464st-etcd]. $ gcloud --project openshift-qe-shared-vpc compute firewall-rules delete -q jiwei-1013-01-464st-health-checks Deleted [https://www.googleapis.com/compute/v1/projects/openshift-qe-shared-vpc/global/firewalls/jiwei-1013-01-464st-health-checks]. $ gcloud --project openshift-qe-shared-vpc compute firewall-rules delete -q jiwei-1013-01-464st-internal-cluster Deleted [https://www.googleapis.com/compute/v1/projects/openshift-qe-shared-vpc/global/firewalls/jiwei-1013-01-464st-internal-cluster]. $ gcloud --project openshift-qe-shared-vpc compute firewall-rules delete -q jiwei-1013-01-464st-internal-network Deleted [https://www.googleapis.com/compute/v1/projects/openshift-qe-shared-vpc/global/firewalls/jiwei-1013-01-464st-internal-network]. $ gcloud --project openshift-qe-shared-vpc compute firewall-rules delete -q k8s-a516d89f9a4f14bdfb55a525b1a12a91-http-hc Deleted [https://www.googleapis.com/compute/v1/projects/openshift-qe-shared-vpc/global/firewalls/k8s-a516d89f9a4f14bdfb55a525b1a12a91-http-hc]. $ gcloud --project openshift-qe-shared-vpc compute firewall-rules delete -q k8s-fw-a516d89f9a4f14bdfb55a525b1a12a91 Deleted [https://www.googleapis.com/compute/v1/projects/openshift-qe-shared-vpc/global/firewalls/k8s-fw-a516d89f9a4f14bdfb55a525b1a12a91]. $ $ gcloud --project openshift-qe-shared-vpc compute firewall-rules list --filter='network=installer-shared-vpc' NAME NETWORK DIRECTION PRIORITY ALLOW DENY DISABLED ci-op-xpn-ingress-common installer-shared-vpc INGRESS 60000 tcp:6443,tcp:22,tcp:80,tcp:443,icmp False ci-op-xpn-ingress-health-checks installer-shared-vpc INGRESS 60000 tcp:30000-32767,udp:30000-32767,tcp:6080,tcp:6443,tcp:22624,tcp:32335 False ci-op-xpn-ingress-internal-network installer-shared-vpc INGRESS 60000 udp:4789,udp:6081,udp:500,udp:4500,esp,tcp:9000-9999,udp:9000-9999,tcp:10250,tcp:30000-32767,udp:30000-32767,tcp:10257,tcp:10259,tcp:22623,tcp:2379-2380 FalseTo show all fields of the firewall, please show in JSON format: --format=json To show all fields in table format, please see the examples in --help. $
Description of problem:
Custom manifest files can be placed in the /openshift folder so that they will be applied during cluster installation. However, if a file contains more than one manifest, all but the first are ignored.
Version-Release number of selected component (if applicable):
How reproducible:
Always
Steps to Reproduce:
1. Create the following custom manifest file in the /openshift folder:
```
apiVersion: v1
kind: ConfigMap
metadata:
  name: agent-test
  namespace: openshift-config
data:
  value: agent-test
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: agent-test-2
  namespace: openshift-config
data:
  value: agent-test-2
```
2. Create the agent ISO image and deploy a cluster.
Actual results:
ConfigMap agent-test-2 does not exist in the openshift-config namespace
Expected results:
ConfigMap agent-test-2 must exist in the openshift-config namespace
Additional info:
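To illustrate the kind of handling needed, a small Go sketch that reads every YAML document in a manifest file instead of only the first; it uses the apimachinery YAML reader and is not the agent-based installer's actual code, and the file path shown is illustrative.
```
// Minimal sketch, not the actual agent-based installer code: split a manifest file
// on YAML document boundaries so that every document is applied, not just the first.
package main

import (
	"bufio"
	"fmt"
	"io"
	"os"

	yamlutil "k8s.io/apimachinery/pkg/util/yaml"
)

func readAllManifests(path string) ([][]byte, error) {
	f, err := os.Open(path)
	if err != nil {
		return nil, err
	}
	defer f.Close()

	var docs [][]byte
	reader := yamlutil.NewYAMLReader(bufio.NewReader(f))
	for {
		doc, err := reader.Read() // one YAML document per call
		if err == io.EOF {
			break
		}
		if err != nil {
			return nil, err
		}
		if len(doc) > 0 {
			docs = append(docs, doc)
		}
	}
	return docs, nil
}

func main() {
	docs, err := readAllManifests("openshift/agent-test.yaml") // illustrative path
	if err != nil {
		panic(err)
	}
	fmt.Printf("found %d manifest documents\n", len(docs))
}
```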
Description of problem:
The API Explorer page layout is incorrect; please check the attachment for more details.
Version-Release number of selected component (if applicable):
4.12.0-0.nightly-2022-08-15-150248
How reproducible:
Always
Steps to Reproduce:
1. Login OCP, Go to Home -> API Explorer page
2. Check if there is an extra blank line between the dropdown filter and the list
Actual results:
There is an extra blank line between the dropdown filter and the list
Expected results:
Use the right PatternFly package and remove the extra blank line.
Additional info:
104.0.5112.79 (Official Build) (64-bit)
Description of problem:
Image registry pods panic while deploying OCP in ap-south-2 AWS region
Version-Release number of selected component (if applicable):
4.11.2
How reproducible:
Deploy OCP in AWS ap-south-2 region
Steps to Reproduce:
Deploy OCP in AWS ap-south-2 region
Actual results:
panic: Invalid region provided: ap-south-2
Expected results:
Image registry pods should come up with no errors
Additional info:
Description of problem:
node_exporter collects network metrics for "virtual" interfaces like br-*. When OVN is used, it also reports metrics for ovs-*, ovn, and genev_sys_* interfaces.
Version-Release number of selected component (if applicable):
4.12 (and before)
How reproducible:
Always
Steps to Reproduce:
1. Launch a 4.12 cluster.
2. Run the following PromQL query: "group by(device) (node_network_info)"
Expected results:
Only real host interfaces should be present.
Additional info:
Copied from an upstream issue: https://github.com/operator-framework/operator-lifecycle-manager/issues/2830
What did you do?
When attempting to reinstall an operator that uses conversion webhooks by
The resulting InstallPlan enters a failed state with a message similar to:
error validating existing CRs against new CRD's schema for "devworkspaces.workspace.devfile.io": error listing resources in GroupVersionResource schema.GroupVersionResource{Group:"workspace.devfile.io", Version:"v1alpha1", Resource:"devworkspaces"}: conversion webhook for workspace.devfile.io/v1alpha2, Kind=DevWorkspace failed: Post "https://devworkspace-controller-manager-service.test-namespace.svc:443/convert?timeout=30s": service "devworkspace-controller-manager-service" not found
When the original CSVs are deleted, the operator's main deployment and service are removed, but CRDs are left in-cluster. However, since the service/CA bundle/deployment that serve the conversion webhook are removed, conversion webhooks are broken at that point. Eventually this impacts garbage collection on the cluster as well.
This can be reproduced by installing the DevWorkspace Operator from the Red Hat catalog. (I can provide yamls/upstream images that reproduce as well, if that's helpful). It may be necessary to create a DevWorkspace in the cluster before deletion, e.g. by oc apply -f https://raw.githubusercontent.com/devfile/devworkspace-operator/main/samples/plain.yaml
What did you expect to see?
Operator is able to be reinstalled without removing CRDs and all instances.
What did you see instead? Under which circumstances?
It's necessary to completely remove the operator including CRDs. For our operator (DevWorkspace), this also makes uninstall especially complicated as finalizers are used (so CRDs cannot be deleted if the controller is removed, and the controller cannot be restored by reinstalling)
Environment
operator-lifecycle-manager version: 4.10.24
Kubernetes version information: Kubernetes Version: v1.23.5+012e945 (OpenShift 4.10.24)
Kubernetes cluster kind: OpenShift
Description of problem:
When scaling down the machineSet for worker nodes, a PV(vmdk) file got deleted.
Version-Release number of selected component (if applicable):
4.10
How reproducible:
N/A
Steps to Reproduce:
1. Scale down worker nodes.
2. Check VMware logs; the VM gets deleted with the vmdk still attached.
Actual results:
After scaling down nodes, volumes still attached to the VM get deleted alongside the VM
Expected results:
Worker nodes scaled down without any accidental deletion
Additional info:
This is a clone of issue OCPBUGS-2281. The following is the description of the original issue:
—
Description of problem:
E2E test cases for the knative and pipeline packages have been disabled on CI due to respective operator installation issues. The tests have to be enabled again once a new operator version is available or the issue is resolved.
References:
https://coreos.slack.com/archives/C6A3NV5J9/p1664545970777239
Version-Release number of selected component (if applicable):
How reproducible:
Steps to Reproduce:
1. 2. 3.
Actual results:
Expected results:
Additional info:
This bug is a backport clone of [Bugzilla Bug 2100181](https://bugzilla.redhat.com/show_bug.cgi?id=2100181). The following is the description of the original bug:
—
Created attachment 1891950
log
Description of problem:
Prior to OCP 4.7.48, the configure-ovs script picked the correct bonded interface for br-ex. In OCP 4.7.48 it consistently fails: it picks one of the slave interfaces (ens3f0).
Version-Release number of selected component (if applicable):
OCP Release > OCP 4.7.37
How reproducible:
100%
Steps to Reproduce:
1. Deploy an OCP cluster with bonding
2.
3.
Actual results:
Expected results:
configure-ovs should not fail and assign the correct interface to br-ex (bond1)
Additional info:
There appears to be a new default NM profile from 4.7.37 to 4.7.38 that was not there before.
This is a clone of issue OCPBUGS-10221. The following is the description of the original issue:
—
This is a clone of issue OCPBUGS-5469. The following is the description of the original issue:
—
Description of problem:
When changing channels it's possible that multiple new conditional update risks will need to be evaluated. For instance, a cluster running 4.10.34 in a 4.10 channel today only has to evaluate `OpenStackNodeCreationFails` but when the channel is changed to a 4.11 channel multiple new risks require evaluation and the evaluation of new risks is throttled at one every 10 minutes. This means if there are three new risks it may take up to 30 minutes after the channel has changed for the full set of conditional updates to be computed. This leads to a perception that no update paths are recommended because most will not wait 30 minutes, they expect immediate feedback.
Version-Release number of selected component (if applicable):
4.10.z, 4.11.z, 4.12, 4.13
How reproducible:
100%
Steps to Reproduce:
1. Install 4.10.34
2. Switch from stable-4.10 to stable-4.11
Actual results:
Observe no recommended updates for 10-20 minutes because all available paths to 4.11 have a risk associated with them
Expected results:
Risks are computed in a timely manner for an interactive UX, lets say < 10s
Additional info:
This was intentional in the design: we didn't want risks to continuously re-evaluate or overwhelm the monitoring stack. However, we didn't anticipate that we'd have a long-standing pile of risks, or realize how confusing the user experience would be. We intend to work around this in the deployed fleet by converting older risks from `type: promql` to `type: Always`, avoiding the evaluation period but preserving the notification. While this may lead customers to believe they're exposed to a risk they may not be, as long as the set of outstanding risks to the latest version is limited to no more than one, it's likely no one will notice. All 4.10 and 4.11 clusters currently have a clear path toward a relatively recent 4.10.z or 4.11.z with no more than one risk to be evaluated.
This is a clone of issue OCPBUGS-1427. The following is the description of the original issue:
—
Description of problem:
The jump looks the worst on GCP, but looking closer, Azure and AWS both jumped as well, just not as high.
Disruption data indicates that the image registry on GCP was averaging around 30-40 seconds of disruption during an upgrade, until Aug 27th when it jumped to 125-135 seconds and has remained there ever since.
We see similar spikes in ingress-to-console and ingress-to-oauth. NOTE: image registry backend is also behind ingress, so all three are ingress related disruption.
https://datastudio.google.com/s/uBC4zuBFdTE
These charts show the problem on Aug 27 for registry, ingress to console, and ingress to oauth.
sdn network type appears unaffected.
Something merged Aug 26-27 that caused a significant change for anything behind ingress using ovn on gcp.
Description of problem:
When the user selects Serverless as an import strategy and tries to import a Devfile, the import fails because of an invalid Deployment.
This could already be reproduced in 4.11, but it's even more prominent in 4.12, where the console automatically selects the resource type Serverless when the Serverless operator is installed.
Version-Release number of selected component (if applicable):
Works on 4.10
Failed on 4.11 and 4.12 master
How reproducible:
Always
Steps to Reproduce:
1. Install and set up the Serverless operator
2. Switch to dev perspective, navigate to add > import from git
3. Enter a non-Devfile git URL like https://github.com/jerolimov/nodeinfo
4. On 4.11 select resource type Serverless (on 4.12 this should be selected automatically)
5. Update the git URL to a repo with a Devfile like https://github.com/nodeshift-starters/devfile-sample
6. Press create
Actual results:
Import fails with error:
Error "Invalid value: "": name part must be non-empty" for field "spec.template.labels".
Expected results:
Devfile should be imported
Additional info:
Description of problem:
oc --context build02 get clusterversion
NAME VERSION AVAILABLE PROGRESSING SINCE STATUS
version 4.12.0-ec.1 True False 45h Error while reconciling 4.12.0-ec.1: the cluster operator kube-controller-manager is degraded
oc --context build02 get co kube-controller-manager
NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE MESSAGE
kube-controller-manager 4.12.0-ec.1 True False True 2y87d GarbageCollectorDegraded: error fetching rules: Get "https://thanos-querier.openshift-monitoring.svc:9091/api/v1/rules": dial tcp 172.30.153.28:9091: connect: cannot assign requested address
Version-Release number of selected component (if applicable):
How reproducible:
Steps to Reproduce:
1.
2.
3.
Actual results:
Expected results:
Additional info:
build02 is a build farm cluster in CI production.
I can provide credentials to access the cluster if needed.
Description of problem:
Alert actions are not triggering the modal from which the storage cluster can be expanded.
Version-Release number of selected component (if applicable):
4.12
How reproducible:
1/1
Steps to Reproduce:
1. Fill up a storage cluster to 80%
2. Alert is seen in cluster dashboard.
3. Click the Add Capacity button
Actual results:
Modal is not launched.
Expected results:
Modal should be launched.
Additional info:
Description of problem:
console.openshift.io/use-i18n false in the v1alpha API is converted to "" in the v1 API, which is not a valid value for the enum type declared in the code.
Version-Release number of selected component (if applicable):
4.12.0-0.nightly-2022-09-25-071630
How reproducible:
Always
Steps to Reproduce:
1. Load a dynamic plugin with the v1alpha API console.openshift.io/use-i18n set to 'false'
2. In the v1 API the {"spec":{"i18n":{"loadType":""}}} loadType is set to an empty string, which is not a valid value defined here: https://github.com/jhadvig/api/blob/22d69793277ffeb618d642724515f249262959a5/console/v1/types_console_plugin.go#L46 https://github.com/openshift/api/pull/1186/files#
Actual results:
{"spec":{"i18n":{"loadType":""}}}
Expected results:
{"spec":{"i18n":{"loadType":"Lazy"}}}
Additional info:
https://github.com/openshift/origin/pull/27444 was intended to move the scaling test out of serial to its own test suite, but it added it to parallel, meaning it's running in all our normal upgrade jobs, causing them to frequently fail with repeating pathological events as well as greatly increasing their run time.
See https://github.com/openshift/origin/pull/27444#discussion_r991296925 for more info
Description of problem:
We have an ODF bug for it here: https://bugzilla.redhat.com/show_bug.cgi?id=2169779 It was discussed in formu-storage with Hemant here: https://redhat-internal.slack.com/archives/CBQHQFU0N/p1677085216391669 and we were asked to open a bug for it. This is currently blocking ODF 4.13 deployment over vSphere.
Version-Release number of selected component (if applicable):
How reproducible:
YES
Steps to Reproduce:
1. Deploy ODF 4.13 on vSphere with `thin-csi` SC 2. 3.
Actual results:
Expected results:
Additional info:
When multi-cluster is enabled, it is possible to get into a situation where you can't cancel login. If you select a cluster you don't know the credentials for, the console will remember the last cluster and repeatedly send you to the login page with no way to cancel or go back. If we decide to set the last cluster in the user's preferences, it might be possible to get stuck even if you clear cookies and localStorage.
There are similar issues logging into clusters that are hibernating. See the attached video.
cc Scott Berens
Description of problem:
When trying to enable Hardware Backed Management Ports (e.g. Virtual Functions) on BF2 in NIC mode or any other MLX NICs (CX-6, CX-5) by setting the node_mgmt_port_netdev_flags flag to a VF in the CNO, the OVN-K node will crash.
Version-Release number of selected component (if applicable):
4.12.0
How reproducible:
Always
Steps to Reproduce:
Start by enabling OvS HWOL and setting a sriovnetworknodepolicy: https://docs.openshift.com/container-platform/4.11/networking/hardware_networks/configuring-hardware-offloading.html
1. Scale down CNO: oc scale --replicas=0 deploy/network-operator -n openshift-network-operator
2. Make changes to the OVN-K node: oc edit daemonsets ovnkube-node -n openshift-ovn-kubernetes
a. Find "node_mgmt_port_netdev_flags=" and replace it with something like this:
node_mgmt_port_netdev_flags=
if [[ ${K8S_NODE} != *"master"* ]]; then
  node_mgmt_port_netdev_flags="--ovnkube-node-mgmt-port-netdev=ens1f0v0"
fi
b. Additionally you have to add the "node_mgmt_port_netdev_flags" to the "exec /usr/bin/ovnkube --init-node "${K8S_NODE}"" call in the same script, since this is missing.
3. Save the edit.
4. Observe the OVN-K node on baremetal worker nodes.
Actual results:
I0822 14:21:56.250285 496356 ovs.go:204] Exec(3): stderr: ""
I0822 14:21:56.250290 496356 node.go:310] Detected support for port binding with external IDs
I0822 14:21:56.250516 496356 management-port-dpu.go:181] Setup management port dpu host: ens1f0v0
F0822 14:21:56.250568 496356 ovnkube.go:133] failed to set management port name. file exists
Workaround is to go to the node and run this command:
sudo ovs-vsctl del-port br-int ovn-k8s-mp0
Expected results:
There should not be any errors when changing node_mgmt_port_netdev_flags to a valid value.
Additional info:
Reported here: https://github.com/ovn-org/ovn-kubernetes/pull/3160 Discussed briefly here: https://issues.redhat.com/browse/OCPBUGS-4098 Fixed Upstream here: https://github.com/ovn-org/ovn-kubernetes/pull/3251
Description of problem:
vSphere privilege checking fails when providing a user-defined folder and/or resource pool.
Version-Release number of selected component (if applicable):
4.12.0-0.nightly-2022-08-30-054458
How reproducible:
consistently
Steps to Reproduce:
1. Provide a pre-existing folder and/or resource pool to the install-config.
2. Perform an installation with an account with read-only privileges on the datacenter and cluster.
3. The installer will fail with missing privileges for the cluster and datacenter. When a pre-existing folder and resource pool are defined, the account can hold read-only privileges on the datacenter and cluster.
Actual results:
Installer reports missing privileges
Expected results:
Installer should succeed
Additional info:
Description of problem:
The samples operator needs to update its imagestreams to use the Jenkins 4.12 release.
Version-Release number of selected component (if applicable):
How reproducible:
Steps to Reproduce:
1. 2. 3.
Actual results:
Expected results:
Additional info:
Description of problem:
The customer is no longer able to provision new baremetal nodes in 4.10.35 using the same rootDeviceHints they used in 4.10.10. The customer uses HP DL360 Gen10 with external SAN storage that is seen by the system as a multipath device. The latest IPA versions implement some changes to avoid wiping shared disks, and this seems to affect what we should provide as rootDeviceHints. They used to put /dev/sda as the rootDeviceHints; in 4.10.35 this no longer makes IPA write the image to the disk, because it sees the disk as part of a multipath device. We tried using the on-top multipath device /dev/dm-0; the system is then able to write the image to the disk, but it gets stuck when it tries to issue a partprobe command. Rebooting the systems to boot from the disk does not seem to help complete the provisioning. No workaround so far.
Version-Release number of selected component (if applicable):
How reproducible:
By trying to provision a baremetal node with a multipath device.
Steps to Reproduce:
1. Create a new BMH using a multipath device as rootDeviceHints 2. 3.
Actual results:
The node does not get provisioned
Expected results:
the node gets provisioned correctly
Additional info:
Description of problem:
Disconnected IPI OCP 4.11.5 cluster install on baremetal fails when hostname of master nodes does not include "master"
Version-Release number of selected component (if applicable): 4.11.5
How reproducible: Perform disconnected IPI install of OCP 4.11.5 on bare metal with master nodes that do not contain the text "master"
Steps to Reproduce:
Perform disconnected IPI install of OCP 4.11.5 on bare metal with master nodes that do not contain the text "master"
Actual results: master nodes do not come up.
Expected results: master nodes should come up even though the text "master" is not in their hostname.
Additional info:
My customer reinstalled a new cluster using the fix here, but they have the exact same issue. The metal3 pod has an empty PROVISIONING_MACS value. Can we work together with them to understand why the new code fix https://github.com/openshift/cluster-baremetal-operator/commit/76bd6bc461b30a6a450f85a42e492a0933178aee is not working?
cat metal3-static-ip-set/metal3-static-ip-set/logs/current.log
2022-09-27T14:19:38.140662564Z + '[' -z 10.17.199.3/27 ']'
2022-09-27T14:19:38.140662564Z + '[' -z '' ']'
2022-09-27T14:19:38.140662564Z + '[' -n '' ']'
2022-09-27T14:19:38.140722345Z ERROR: Could not find suitable interface for "10.17.199.3/27"
2022-09-27T14:19:38.140726312Z + '[' -n '' ']'
2022-09-27T14:19:38.140726312Z + echo 'ERROR: Could not find suitable interface for "10.17.199.3/27"'
2022-09-27T14:19:38.140726312Z + exit 1
cat metal3-b9bf8d595-gv94k.yaml
...
initContainers:
- command:
  - /set-static-ip
  env:
  - name: PROVISIONING_IP
    value: 10.17.199.3/27
  - name: PROVISIONING_INTERFACE
  - name: PROVISIONING_MACS   <------------------------- missing MACS
  image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4f04793bd109ecba2dfe43be93dc990ac5299272482c150bd5f2eee0f80c983b
  imagePullPolicy: IfNotPresent
  name: metal3-static-ip-set
....
omc logs machine-api-controllers-6b9ffd96cd-grh6l -c nodelink-controller -n openshift-machine-api
2022-09-21T16:13:43.600517485Z I0921 16:13:43.600513 1 nodelink_controller.go:408] Finding machine from node "blocp-1-106-m-0.c106-1.sc.evolhse.hydro.qc.ca"
2022-09-21T16:13:43.600521381Z I0921 16:13:43.600517 1 nodelink_controller.go:425] Finding machine from node "blocp-1-106-m-0.c106-1.sc.evolhse.hydro.qc.ca" by ProviderID
2022-09-21T16:13:43.600525225Z W0921 16:13:43.600521 1 nodelink_controller.go:427] Node "blocp-1-106-m-0.c106-1.sc.evolhse.hydro.qc.ca" has no providerID
2022-09-21T16:13:43.600528917Z I0921 16:13:43.600524 1 nodelink_controller.go:448] Finding machine from node "blocp-1-106-m-0.c106-1.sc.evolhse.hydro.qc.ca" by IP
2022-09-21T16:13:43.600532711Z I0921 16:13:43.600529 1 nodelink_controller.go:453] Found internal IP for node "blocp-1-106-m-0.c106-1.sc.evolhse.hydro.qc.ca": "10.17.192.33"
2022-09-21T16:13:43.600551289Z I0921 16:13:43.600544 1 nodelink_controller.go:477] Matching machine not found for node "blocp-1-106-m-0.c106-1.sc.evolhse.hydro.qc.ca" with internal IP "10.17.192.33"
From @dtantsur WIP PR: https://github.com/openshift/cluster-baremetal-operator/pull/299
The customer is waiting for this fix. The previous code change does not fix the customer's situation.
Please refer to this slack thread :https://coreos.slack.com/archives/CFP6ST0A3/p1664215102459219
This is a clone of issue OCPBUGS-3085. The following is the description of the original issue:
—
Description of problem:
IPI on BareMetal Dual stack deployment failed and Bootstrap timed out before completion
Version-Release number of selected component (if applicable):
4.12.0-0.nightly-2022-10-25-210451
How reproducible:
Always
Steps to Reproduce:
1. Deploy IPI on BM using Dual stack 2. 3.
Actual results:
Deployment failed
Expected results:
Should pass
Additional info:
Same deployment works fine on 4.11
We rely on the user providing accurate information about the MAC addresses in the agent-config, because at the point we read it we haven't seen the hosts yet. However, if the user gets this wrong then chaos may ensue.
Once inventory is available, we should validate that the user has not:
and fail the install if they have.
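Since the concrete list of checks is not spelled out above, here is a hypothetical Go sketch of one such validation (rejecting a MAC address that is assigned to more than one host in the agent-config); the types and function are illustrative assumptions, not the actual installer code.
```
package validation

import (
	"fmt"
	"strings"
)

// Host is a simplified stand-in for an agent-config host entry.
type Host struct {
	Name string
	MACs []string
}

// ValidateUniqueMACs fails if any MAC address appears under more than one host.
func ValidateUniqueMACs(hosts []Host) error {
	seen := map[string]string{} // normalized MAC -> host name
	for _, h := range hosts {
		for _, mac := range h.MACs {
			norm := strings.ToLower(mac)
			if other, ok := seen[norm]; ok && other != h.Name {
				return fmt.Errorf("MAC address %s is assigned to both %q and %q", mac, other, h.Name)
			}
			seen[norm] = h.Name
		}
	}
	return nil
}
```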
This is a clone of issue OCPBUGS-4973. The following is the description of the original issue:
—
Description of problem:
Configuring OAuth with htpasswd in the hostedcluster doesn't work as expected.
Version-Release number of selected component (if applicable):
How reproducible:
enable OAuth htpasswd in hostedcluster
Steps to Reproduce:
1. Create a passwd file for the user with htpasswd:
```
htpasswd -cbB .passwd helitest helitest
oc create secret generic testuser --from-file=htpasswd=.passwd -n clusters
```
2. Edit hostedcluster.yaml:
```
spec:
  configuration:
    oauth:
      identityProviders:
      - htpasswd:
          fileData:
            name: testuser
        mappingMethod: claim
        name: htpasswd
        type: HTPasswd
```
3. oc login to the hostedcluster apiserver:
$ oc login https://ac0be21b169ff4399b6a2044388c38cf-5789e1b174d7424b.elb.us-east-2.amazonaws.com:6443 --username=testuser --password=testuser
The server uses a certificate signed by an unknown authority.
You can bypass the certificate check, but any data you send to the server could be intercepted by others.
Use insecure connections? (y/n): y
Login failed (401 Unauthorized)
Actual results:
oc login with error : "Login failed (401 Unauthorized) "
Expected results:
oc login successfully.
Additional info:
# check configmap of oauth
$ oc get cm -n clusters-demo-02 oauth-openshift -oyaml
...
oauthConfig:
  alwaysShowProviderSelection: false
  assetPublicURL: ""
  grantConfig:
    method: deny
    serviceAccountMethod: prompt
  identityProviders: []
  loginURL: https://ac0be21b169ff4399b6a2044388c38cf-5789e1b174d7424b.elb.us-east-2.amazonaws.com:6443
---> seems `identityProviders` is not synced correctly?
This is a clone of issue OCPBUGS-7102. The following is the description of the original issue:
—
Description of problem:
https://github.com/openshift/operator-framework-olm/blob/7ec6b948a148171bd336750fed98818890136429/staging/operator-lifecycle-manager/pkg/controller/operators/olm/plugins/downstream_csv_namespace_labeler_plugin_test.go#L309
has a dependency on creation of a next-version release branch.
Version-Release number of selected component (if applicable):
4.13
How reproducible:
Steps to Reproduce:
1. clone operator-framework/operator-framework-olm
2. make unit/olm
3. deal with a really bumpy first-time kubebuilder/envtest install experience
4. profit
Actual results:
error
Expected results:
pass
Additional info:
Description of problem: Knative tests were disabled due to https://issues.redhat.com/browse/OCPBUGS-190 to unblock the queue and should be enabled back again
https://coreos.slack.com/archives/C6A3NV5J9/p1660659719046909
https://github.com/openshift/console/pull/11956#discussion_r948075848
Version-Release number of selected component (if applicable):
How reproducible:
Steps to Reproduce:
1.
2.
3.
Actual results:
Expected results:
Additional info:
This is a clone of issue OCPBUGS-723. The following is the description of the original issue:
—
Description of problem:
I have a customer who created a clusterquota for one of the namespaces; it got created, but the values were not reflected under limits and the namespace details were not displayed.
~~~
$ oc describe AppliedClusterResourceQuota
Name: test-clusterquota
Created: 19 minutes ago
Labels: size=custom
Annotations: <none>
Namespace Selector: []
Label Selector:
AnnotationSelector: map[openshift.io/requester:system:serviceaccount:application-service-accounts:test-sa]
Scopes: NotTerminating
Resource Used Hard
-------- ---- ----
~~~
WORKAROUND: They recreated the clusterquota object (cache it off, delete it, create new) after which it displayed values as expected.
In the past, they saw similar behavior on their test cluster; there it was heavily utilized, the etcd DB was much larger in size (>2.5Gi), and there were many more objects (at that time, helm secrets were being cached for all deployments, keeping a history of 10, so etcd was being bombarded).
On this cluster the same "symptom" was noticed; however, etcd was nowhere near that size, nor were there nearly as many etcd objects and/or cached helm secrets.
Version-Release number of selected component (if applicable): OCP 4.9
How reproducible: Occurred only twice(once in test and in current cluster)
Steps to Reproduce:
1. Create ClusterQuota
2. Check AppliedClusterResourceQuota
3. The values and namespace is empty
Actual results: The AppliedClusterResourceQuota does not display the used/hard values or the namespaces
Expected results: The AppliedClusterResourceQuota should display the values and the namespaces
This is a clone of issue OCPBUGS-6175. The following is the description of the original issue:
—
Description of problem:
When the cluster is configured with a proxy, the Swift client in the image registry operator does not use the proxy to authenticate with OpenStack, so it is unable to reach the OpenStack API. This issue became evident since support was recently added to not fall back to Cinder when Swift is available [1].
[1]https://github.com/openshift/cluster-image-registry-operator/pull/819
Version-Release number of selected component (if applicable):
How reproducible:
Steps to Reproduce:
1. Deploy a cluster with proxy and restricted installation
Actual results:
Expected results:
Additional info:
This is a clone of issue OCPBUGS-4955. The following is the description of the original issue:
—
Description of problem:
Customer needs the "IfNotPresent" ImagePullPolicy set for bundle unpacker pods whose bundle images are referenced by digest. Currently, the policy is set to "Always" no matter what.
Version-Release number of selected component (if applicable):
How reproducible:
Always
Steps to Reproduce:
1. Install an operator via a bundle referencing an image by digest
2. Check the bundle unpacker pod
Actual results:
Image pull policy will be set to "Always"
Expected results:
Image pull policy will be set to "IfNotPresent" when pulling via digest
Additional info:
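A minimal sketch of the requested behavior, not the actual OLM unpacker code: the pull policy is chosen from the image reference, treating digest-pinned references as immutable. The helper name is illustrative.
```
package main

import (
	"fmt"
	"strings"

	corev1 "k8s.io/api/core/v1"
)

// pullPolicyForBundle returns IfNotPresent for digest-pinned references
// (their content can never change, so a cached copy is always valid)
// and Always for mutable tag references.
func pullPolicyForBundle(image string) corev1.PullPolicy {
	if strings.Contains(image, "@sha256:") {
		return corev1.PullIfNotPresent
	}
	return corev1.PullAlways
}

func main() {
	fmt.Println(pullPolicyForBundle("quay.io/example/bundle@sha256:deadbeef")) // IfNotPresent
	fmt.Println(pullPolicyForBundle("quay.io/example/bundle:v1.0.0"))          // Always
}
```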
This is a clone of issue OCPBUGS-3405. The following is the description of the original issue:
—
In case it should be used for publishing artifacts in CI jobs.
Look into whether the following things are leaked:
This is a clone of issue OCPBUGS-11004. The following is the description of the original issue:
—
This is a clone of issue OCPBUGS-8349. The following is the description of the original issue:
—
Description of problem:
On a freshly installed cluster, the control-plane-machineset-operator begins rolling a new master node, but the machine remains in a Provisioned state and never joins as a node. Its status is: Drain operation currently blocked by: [{Name:EtcdQuorumOperator Owner:clusteroperator/etcd}] The cluster is left in this state until an admin manually removes the stuck master node, at which point a new master machine is provisioned and successfully joins the cluster.
Version-Release number of selected component (if applicable):
4.12.4
How reproducible:
Observed at least 4 times over the last week, but unsure how to reproduce.
Actual results:
A master node remains in a stuck Provisioned state and requires manual deletion to unstick the control plane machine set process.
Expected results:
No manual interaction should be necessary.
Additional info:
Description of problem:
The current version of OpenShift's CoreDNS is based on Kubernetes 1.24 packages, while OpenShift 4.12 is based on Kubernetes 1.25.
Version-Release number of selected component (if applicable):
4.12
How reproducible:
Always
Steps to Reproduce:
1. Check https://github.com/openshift/coredns/blob/release-4.12/go.mod
Actual results:
Kubernetes packages (k8s.io/api, k8s.io/apimachinery, and k8s.io/client-go) are at version v0.24.0.
Expected results:
Kubernetes packages are at version v0.25.0 or later.
Additional info:
Using old Kubernetes API and client packages brings risk of API compatibility issues.
Description of problem:
The kebab menu for Helm repositories shows inconsistent behavior
Version-Release number of selected component (if applicable): 4.12
How reproducible: Always
Steps to Reproduce:
1. Create some helm chart repository
2. Go to the Helm page and switch to the repositories tab
3. Open kebab menu for different repos
Actual results:
Menus are overlapping
Expected results:
The menu should work properly; one menu should close before opening a new one
Additional info:
A video has been attached for reference
Description of problem:
revert "force cert rotation every couple days for development" in 4.12 We want short expiry times during development and long expiry times when we ship. --- Additional comment from Eric Paris on 2020-04-02 19:57:29 CEST --- This bug has been set to target the 4.5.0 release without specifying a severity. As part of triage when determining the priority of bugs a severity should be specified. Since these bugs have no been properly triaged I am removing the target release. Teams will need to add a severity before deferring these bugs again. --- Additional comment from Michal Fojtik on 2020-05-12 12:45:25 CEST --- This bug hasn't had any activity in the last 30 days. Maybe the problem got resolved, was a duplicate of something else, or became less pressing for some reason - or maybe it's still relevant but just hasn't been looked at yet. As such, we're marking this bug as "LifecycleStale" and decreasing the severity. If you have further information on the current state of the bug, please update it, otherwise this bug will be automatically closed in 7 days. The information can be, for example, that the problem still occurs, that you still want the feature, that more information is needed, or that the bug is (for whatever reason) no longer relevant. --- Additional comment from Standa Laznicka on 2020-05-12 14:53:12 CEST --- you don't really want to close this --- Additional comment from Stefan Schimanski on 2020-05-19 13:11:00 CEST --- Waiting for master to open. We will fix it then on the release branch. --- Additional comment from Stefan Schimanski on 2020-06-18 12:23:34 CEST --- Will be done when 4.6 branches from master. --- Additional comment from Michal Fojtik on 2020-07-09 14:46:02 CEST --- Stefan is PTO, adding UpcomingSprint to his bugs to fulfill the duty. --- Additional comment from Michal Fojtik on 2020-08-24 15:12:08 CEST --- This bug hasn't had any activity in the last 30 days. Maybe the problem got resolved, was a duplicate of something else, or became less pressing for some reason - or maybe it's still relevant but just hasn't been looked at yet. As such, we're marking this bug as "LifecycleStale" and decreasing the severity/priority. If you have further information on the current state of the bug, please update it, otherwise this bug can be closed in about 7 days. The information can be, for example, that the problem still occurs, that you still want the feature, that more information is needed, or that the bug is (for whatever reason) no longer relevant. --- Additional comment from Michal Fojtik on 2020-08-31 15:59:33 CEST --- This bug hasn't had any activity 7 days after it was marked as LifecycleStale, so we are closing this bug as WONTFIX. If you consider this bug still valuable, please reopen it or create new bug. --- Additional comment from Michal Fojtik on 2020-08-31 17:00:25 CEST --- The LifecycleStale keyword was removed because the bug got commented on recently. The bug assignee was notified. --- Additional comment from Stefan Schimanski on 2020-09-11 13:00:27 CEST --- This is waiting for Eric Paris to stop fast forwarding release-4.6 from master. --- Additional comment from Michal Fojtik on 2020-10-30 11:12:07 CET --- This bug hasn't had any activity in the last 30 days. Maybe the problem got resolved, was a duplicate of something else, or became less pressing for some reason - or maybe it's still relevant but just hasn't been looked at yet. As such, we're marking this bug as "LifecycleStale" and decreasing the severity/priority. 
If you have further information on the current state of the bug, please update it, otherwise this bug can be closed in about 7 days. The information can be, for example, that the problem still occurs, that you still want the feature, that more information is needed, or that the bug is (for whatever reason) no longer relevant. Additionally, you can add LifecycleFrozen into Keywords if you think this bug should never be marked as stale. Please consult with bug assignee before you do that. --- Additional comment from Nick Stielau on 2021-01-20 18:49:09 CET --- Can we get some context on why this is blocker+? Would we further delay the release if we don't get a fix in for this? --- Additional comment from Stefan Schimanski on 2021-03-16 17:28:08 CET --- --- Additional comment from Eric Paris on 2021-06-08 14:00:16 CEST --- This bug sets blocker+ without setting a Target Release. This is an invalid state as it is impossible to determine what is being blocked. Please be sure to set Priority, Severity, and Target Release before you attempt to set blocker+ --- Additional comment from Michal Fojtik on 2021-06-10 10:49:36 CEST --- This is a blocker? until we have Target Release 4.9 (it is a blocker+ for 4.9). --- Additional comment from Wally on 2021-06-11 15:14:26 CEST --- Setting blocker- until next week to clear reports heading to code freeze. Will reset once 4.9 opens. --- Additional comment from Wally on 2021-08-31 19:26:13 UTC --- Setting blocker- until next week to clear reports heading to code freeze. Will reset once 4.10 opens. --- Additional comment from Michal Fojtik on 2022-02-03 21:53:15 UTC --- ** A NOTE ABOUT USING URGENT ** This BZ has been set to urgent severity and priority. When a BZ is marked urgent priority Engineers are asked to stop whatever they are doing, putting everything else on hold. Please be prepared to have reasonable justification ready to discuss, and ensure your own and engineering management are aware and agree this BZ is urgent. Keep in mind, urgent bugs are very expensive and have maximal management visibility. NOTE: This bug was automatically assigned to an engineering manager with the severity reset to *unspecified* until the emergency is vetted and confirmed. Please do not manually override the severity. ** INFORMATION REQUIRED ** Please answer these questions before escalation to engineering: 1. Has a link to must-gather output been provided in this BZ? We cannot work without. If must-gather fails to run, attach all relevant logs and provide the error message of must-gather. 2. Give the output of "oc get clusteroperators -o yaml". 3. In case of degraded/unavailable operators, have all their logs and the logs of the operands been analyzed [yes/no] 4. List the top 5 relevant errors from the logs of the operators and operands in (3). 5. Order the list of degraded/unavailable operators according to which is likely the cause of the failure of the other, root-cause at the top. 6. Explain why (5) is likely the right order and list the information used for that assessment. 7. Explain why Engineering is necessary to make progress. --- Additional comment from Wally on 2022-02-09 20:11:25 UTC --- Setting blocker- for now but will add reminder and keep in my queue for visibility. --- Additional comment from Red Hat Bugzilla on 2022-05-09 08:32:21 UTC --- Account disabled by LDAP Audit for extended failure --- Additional comment from OpenShift Automated Release Tooling on 2022-06-24 01:06:13 UTC --- Elliott changed bug status from MODIFIED to ON_QA. 
This bug is expected to ship in the next 4.11 release. --- Additional comment from Ke Wang on 2022-06-24 15:24:03 UTC --- To verify the bug, refer to https://bugzilla.redhat.com/show_bug.cgi?id=1921139#c6 --- Additional comment from OpenShift BugZilla Robot on 2022-06-25 12:40:12 UTC --- Bugfix included in accepted release 4.11.0-0.nightly-2022-06-25-081133 Bug will not be automatically moved to VERIFIED for the following reasons: - PR openshift/cluster-kube-apiserver-operator#1307 not approved by QA contact This bug must now be manually moved to VERIFIED by dpunia@redhat.com --- Additional comment from Deepak Punia on 2022-06-27 08:20:33 UTC --- Below is the steps to verify this bug: # oc adm release info --commits registry.ci.openshift.org/ocp/release:4.11.0-0.nightly-2022-06-25-081133|grep -i cluster-kube-apiserver-operator cluster-kube-apiserver-operator https://github.com/openshift/cluster-kube-apiserver-operator 7764681777edfa3126981a0a1d390a6060a840a3 # git log --date local --pretty="%h %an %cd - %s" 776468 |grep -i "#1307" 08973b820 openshift-ci[bot] Thu Jun 23 22:40:08 2022 - Merge pull request #1307 from tkashem/revert-cert-rotation # oc get clusterversions.config.openshift.io NAME VERSION AVAILABLE PROGRESSING SINCE STATUS version 4.11.0-0.nightly-2022-06-25-081133 True False 64m Cluster version is 4.11.0-0.nightly-2022-06-25-081133 $ cat scripts/check_secret_expiry.sh FILE="$1" if [ ! -f "$1" ]; then echo "must provide \$1" && exit 0 fi export IFS=$'\n' for i in `cat "$FILE"` do if `echo "$i" | grep "^#" > /dev/null`; then continue fi NS=`echo $i | cut -d ' ' -f 1` SECRET=`echo $i | cut -d ' ' -f 2` rm -f tls.crt; oc extract secret/$SECRET -n $NS --confirm > /dev/null echo "Check cert dates of $SECRET in project $NS:" openssl x509 -noout --dates -in tls.crt; echo done $ cat certs.txt openshift-kube-controller-manager-operator csr-signer-signer openshift-kube-controller-manager-operator csr-signer openshift-kube-controller-manager kube-controller-manager-client-cert-key openshift-kube-apiserver-operator aggregator-client-signer openshift-kube-apiserver aggregator-client openshift-kube-apiserver external-loadbalancer-serving-certkey openshift-kube-apiserver internal-loadbalancer-serving-certkey openshift-kube-apiserver service-network-serving-certkey openshift-config-managed kube-controller-manager-client-cert-key openshift-config-managed kube-scheduler-client-cert-key openshift-kube-scheduler kube-scheduler-client-cert-key Checking the Certs, they are with one day expiry times, this is as expected. 
# ./check_secret_expiry.sh certs.txt Check cert dates of csr-signer-signer in project openshift-kube-controller-manager-operator: notBefore=Jun 27 04:41:38 2022 GMT notAfter=Jun 28 04:41:38 2022 GMT Check cert dates of csr-signer in project openshift-kube-controller-manager-operator: notBefore=Jun 27 04:52:21 2022 GMT notAfter=Jun 28 04:41:38 2022 GMT Check cert dates of kube-controller-manager-client-cert-key in project openshift-kube-controller-manager: notBefore=Jun 27 04:52:26 2022 GMT notAfter=Jul 27 04:52:27 2022 GMT Check cert dates of aggregator-client-signer in project openshift-kube-apiserver-operator: notBefore=Jun 27 04:41:37 2022 GMT notAfter=Jun 28 04:41:37 2022 GMT Check cert dates of aggregator-client in project openshift-kube-apiserver: notBefore=Jun 27 04:52:26 2022 GMT notAfter=Jun 28 04:41:37 2022 GMT Check cert dates of external-loadbalancer-serving-certkey in project openshift-kube-apiserver: notBefore=Jun 27 04:52:26 2022 GMT notAfter=Jul 27 04:52:27 2022 GMT Check cert dates of internal-loadbalancer-serving-certkey in project openshift-kube-apiserver: notBefore=Jun 27 04:52:49 2022 GMT notAfter=Jul 27 04:52:50 2022 GMT Check cert dates of service-network-serving-certkey in project openshift-kube-apiserver: notBefore=Jun 27 04:52:28 2022 GMT notAfter=Jul 27 04:52:29 2022 GMT Check cert dates of kube-controller-manager-client-cert-key in project openshift-config-managed: notBefore=Jun 27 04:52:26 2022 GMT notAfter=Jul 27 04:52:27 2022 GMT Check cert dates of kube-scheduler-client-cert-key in project openshift-config-managed: notBefore=Jun 27 04:52:47 2022 GMT notAfter=Jul 27 04:52:48 2022 GMT Check cert dates of kube-scheduler-client-cert-key in project openshift-kube-scheduler: notBefore=Jun 27 04:52:47 2022 GMT notAfter=Jul 27 04:52:48 2022 GMT # # cat check_secret_expiry_within.sh #!/usr/bin/env bash # usage: ./check_secret_expiry_within.sh 1day # or 15min, 2days, 2day, 2month, 1year WITHIN=${1:-24hours} echo "Checking validity within $WITHIN ..." oc get secret --insecure-skip-tls-verify -A -o json | jq -r '.items[] | select(.metadata.annotations."auth.openshift.io/certificate-not-after" | . != null and fromdateiso8601<='$( date --date="+$WITHIN" +%s )') | "\(.metadata.annotations."auth.openshift.io/certificate-not-before") \(.metadata.annotations."auth.openshift.io/certificate-not-after") \(.metadata.namespace)\t\(.metadata.name)"' # ./check_secret_expiry_within.sh 1day Checking validity within 1day ... 2022-06-27T04:41:37Z 2022-06-28T04:41:37Z openshift-kube-apiserver-operator aggregator-client-signer 2022-06-27T04:52:26Z 2022-06-28T04:41:37Z openshift-kube-apiserver aggregator-client 2022-06-27T04:52:21Z 2022-06-28T04:41:38Z openshift-kube-controller-manager-operator csr-signer 2022-06-27T04:41:38Z 2022-06-28T04:41:38Z openshift-kube-controller-manager-operator csr-signer-signer
Version-Release number of selected component (if applicable):
How reproducible:
Steps to Reproduce:
1. 2. 3.
Actual results:
Expected results:
Additional info:
While backporting the node healthz server (#1570) to 4.12, a number of functions related to checking stale OVS ports (checkForStaleOVSInternalPorts, checkForStaleOVSRepresentorInterfaces, checkForStaleOVSInterfaces) were moved to pkg/node/openflow_manager.go while their related tests were left in pkg/node/healthcheck_test.go. In 4.13, we have everything under pkg/network-controller-manager. To keep consistency, let's move these to pkg/node/node.go and pkg/node/node_test.go.
This is a clone of issue OCPBUGS-4656. The following is the description of the original issue:
—
Description of problem:
`/etc/hostname` may exist, but be empty. The `vsphere-hostname` service should check that the file is not empty instead of just that it exists. OKD's machine-os-content starting from Fedora 37 ships an empty /etc/hostname file, which breaks joining workers in vSphere IPI.
Version-Release number of selected component (if applicable):
How reproducible:
Always
Steps to Reproduce:
1. Install OKD with workers on vSphere
Actual results:
Workers get hostname resolved using NM
Expected results:
Workers get hostname resolved using vmtoolsd
Additional info:
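A minimal sketch of the intended check, written as a Go helper rather than the actual shell logic in the vsphere-hostname unit: the file should only be treated as providing a hostname if it exists and contains something other than whitespace.
```
package main

import (
	"fmt"
	"os"
	"strings"
)

// hostnameFromFile returns the hostname stored in path, or false if the file
// is missing or empty; an empty /etc/hostname must not suppress the vmtoolsd lookup.
func hostnameFromFile(path string) (string, bool) {
	data, err := os.ReadFile(path)
	if err != nil {
		return "", false // missing file: fall back to vmtoolsd
	}
	name := strings.TrimSpace(string(data))
	if name == "" {
		return "", false // present but empty: also fall back
	}
	return name, true
}

func main() {
	if name, ok := hostnameFromFile("/etc/hostname"); ok {
		fmt.Println("static hostname:", name)
	} else {
		fmt.Println("no usable /etc/hostname, would query vmtoolsd instead")
	}
}
```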
Description of problem:
Each LB created for a Service of type LoadBalancer results in 1 client rule and <# of public subnets> health rules being created. The rules-per-SG quota in AWS is quite small: 60 by default, and 200 hard max. OCP has about 40 rules OOTB. Assuming an HA cluster in 3 AZs, that is 4 rules per LB. With the default AWS quota, only ~5 LBs can be created, and with the hard max of 200, only ~40 LBs can be created.
Version-Release number of selected component (if applicable):
4.12
How reproducible:
Always
Steps to Reproduce:
1. Create a Service of type LoadBalancer and observe the increase in the master-sg and worker-sg rule sets
Actual results:
4 rules are created
Expected results:
1 rule is created when the client rule is a superset of the per-subnet health rules
Additional info:
This would roughly quadruple the number of Services of type LoadBalancer that can be created. This is required for HyperShift. A sketch of the superset check is included below.
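A hedged sketch of the kind of de-duplication being asked for, assuming rules are compared by protocol, port, and source CIDR; the types and names are illustrative and are not the cloud-provider code.
```
package main

import (
	"fmt"
	"net"
)

type sgRule struct {
	Protocol string
	Port     int32
	Source   string // source CIDR
}

// covers reports whether rule a allows everything rule b allows:
// same protocol/port, and a's source CIDR fully contains b's.
func covers(a, b sgRule) bool {
	if a.Protocol != b.Protocol || a.Port != b.Port {
		return false
	}
	_, aNet, errA := net.ParseCIDR(a.Source)
	_, bNet, errB := net.ParseCIDR(b.Source)
	if errA != nil || errB != nil {
		return false
	}
	aOnes, _ := aNet.Mask.Size()
	bOnes, _ := bNet.Mask.Size()
	return aOnes <= bOnes && aNet.Contains(bNet.IP)
}

func main() {
	client := sgRule{Protocol: "tcp", Port: 30936, Source: "0.0.0.0/0"}
	health := sgRule{Protocol: "tcp", Port: 30936, Source: "10.0.0.0/20"}
	// If the client rule already covers the per-subnet health rule,
	// only the client rule needs to be created.
	fmt.Println(covers(client, health)) // true
}
```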
Description of problem:
Clusters created with platform 'vsphere' in the install-config end up as type 'BareMetal' in the infrastructure CR.
Version-Release number of selected component (if applicable):
4.12.3
How reproducible:
100%
Steps to Reproduce:
1. Create a cluster through the agent installer with platform: vsphere in the install-config
2. oc get infrastructure cluster -o jsonpath='{.status.platform}'
Actual results:
BareMetal
Expected results:
VSphere
Additional info:
The platform type is not being case converted ("vsphere" -> "VSphere") when constructing the AgentClusterInstall CR. When read by the assisted-service client, the platform reads as unknown and therefore the platform field is left blank when the Cluster object is created in the assisted API. Presumably that results in the correct default platform for the topology: None for SNO, BareMetal for everything else, but never VSphere. Since the platform VIPs are passed through a non-platform-specific API in assisted, everything worked but the resulting cluster would have the BareMetal platform.
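A hedged sketch of the missing case conversion, mapping the lowercase install-config platform name to the canonical capitalization the platform type expects; the map contents and function name are illustrative, not the installer's actual code.
```
package main

import (
	"fmt"
	"strings"
)

// canonicalPlatform maps lowercase install-config platform names to the
// canonical casing of the platform type (illustrative subset).
var canonicalPlatform = map[string]string{
	"baremetal": "BareMetal",
	"vsphere":   "VSphere",
	"none":      "None",
}

// toCanonical returns the canonical platform name, or false when unknown
// (the current behavior of leaving the field blank).
func toCanonical(name string) (string, bool) {
	p, ok := canonicalPlatform[strings.ToLower(name)]
	return p, ok
}

func main() {
	if p, ok := toCanonical("vsphere"); ok {
		fmt.Println(p) // VSphere
	} else {
		fmt.Println("unknown platform, leaving field blank")
	}
}
```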
This is a clone of issue OCPBUGS-3316. The following is the description of the original issue:
—
Description of problem:
Branch name in repository pipelineruns list view should match the actual github branch name.
Version-Release number of selected component (if applicable):
4.11.z
How reproducible:
Always
Steps to Reproduce:
1. Create a repository
2. Trigger the pipelineruns by a push or pull request event on GitHub
Actual results:
The branch name contains a "refs-heads-" prefix in front of the actual branch name, e.g. "refs-heads-cicd-demo" (cicd-demo is the branch name)
Expected results:
The branch name should be the actual GitHub branch name; just `cicd-demo` should be shown in the branch column.
Additional info:
Ref: https://coreos.slack.com/archives/CHG0KRB7G/p1667564311865459
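A minimal sketch of the expected normalization, assuming the branch value arrives as a Git ref such as refs/heads/cicd-demo (or its label-sanitized form); the function name is illustrative, not the console's code.
```
package main

import (
	"fmt"
	"strings"
)

// displayBranch strips the Git ref prefix so the UI shows only the branch name.
func displayBranch(ref string) string {
	ref = strings.TrimPrefix(ref, "refs/heads/")
	// Labels often sanitize "/" to "-", so also handle the flattened form.
	return strings.TrimPrefix(ref, "refs-heads-")
}

func main() {
	fmt.Println(displayBranch("refs/heads/cicd-demo")) // cicd-demo
	fmt.Println(displayBranch("refs-heads-cicd-demo")) // cicd-demo
}
```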
Description of problem:
Some upgrade CI jobs from 4.11.z to a 4.12 nightly build fail because the systemd unit machine-config-daemon-update-rpmostree-via-container fails.
omg get mcp
NAME     CONFIG                                             UPDATED   UPDATING   DEGRADED   MACHINECOUNT   READYMACHINECOUNT   UPDATEDMACHINECOUNT   DEGRADEDMACHINECOUNT   AGE
worker   rendered-worker-6e18de1272fad7a5ca1529941e3ceaed   False     True       True       3              0                   0                     1                      3h53m
master   rendered-master-60f4ff5893c94f53acd9ebb7a6bf53d4   False     True       True       3              0                   0                     1                      3h53m
check issued node
omg get node/ip-10-0-57-74.us-east-2.compute.internal -o yaml|yq -y '.metadata.annotations' cloud.network.openshift.io/egress-ipconfig: '[{"interface":"eni-0f6de21569b5b65c8","ifaddr":{"ipv4":"10.0.48.0/20"},"capacity":{"ipv4":14,"ipv6":15}}]' csi.volume.kubernetes.io/nodeid: '{"ebs.csi.aws.com":"i-01a34f6b5f2cd1e41"}' machine.openshift.io/machine: openshift-machine-api/ci-op-kb95kxx9-2a438-r6z94-master-2 machineconfiguration.openshift.io/controlPlaneTopology: HighlyAvailable machineconfiguration.openshift.io/currentConfig: rendered-master-065664319cfbaee64277097d49a8a5a6 machineconfiguration.openshift.io/desiredConfig: rendered-master-60f4ff5893c94f53acd9ebb7a6bf53d4 machineconfiguration.openshift.io/desiredDrain: drain-rendered-master-60f4ff5893c94f53acd9ebb7a6bf53d4 machineconfiguration.openshift.io/lastAppliedDrain: drain-rendered-master-60f4ff5893c94f53acd9ebb7a6bf53d4 machineconfiguration.openshift.io/reason: 'error running systemd-run --unit machine-config-daemon-update-rpmostree-via-container --collect --wait -- podman run --authfile /var/lib/kubelet/config.json --privileged --pid=host --net=host --rm -v /:/run/host quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0daf5c4a35424410e88dde102022fc3581302bc8a98e09e2e4748502c59b3661 rpm-ostree ex deploy-from-self /run/host: Running as unit: machine-config-daemon-update-rpmostree-via-container.service Finished with result: exit-code Main processes terminated with: code=exited/status=125 Service runtime: 2min 52ms CPU time consumed: 144ms : exit status 125' machineconfiguration.openshift.io/state: Degraded volumes.kubernetes.io/controller-managed-attach-detach: 'true'
check mcd log on issued node
omg get pod -n openshift-machine-config-operator -o json | jq -r '.items[]|select(.spec.nodeName=="ip-10-0-57-74.us-east-2.compute.internal")|.metadata.name' | grep daemon machine-config-daemon-znbvf 2022-10-09T22:12:58.797891917Z I1009 22:12:58.797821 179598 update.go:1917] Updating OS to layered image quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0daf5c4a35424410e88dde102022fc3581302bc8a98e09e2e4748502c59b3661 2022-10-09T22:12:58.797891917Z I1009 22:12:58.797846 179598 rpm-ostree.go:447] Running captured: rpm-ostree --version 2022-10-09T22:12:58.815829171Z I1009 22:12:58.815800 179598 update.go:2068] rpm-ostree is not new enough for layering; forcing an update via container 2022-10-09T22:12:58.817577513Z I1009 22:12:58.817555 179598 update.go:2053] Running: systemd-run --unit machine-config-daemon-update-rpmostree-via-container --collect --wait -- podman run --authfile /var/lib/kubelet/config.json --privileged --pid=host --net=host --rm -v /:/run/host quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0daf5c4a35424410e88dde102022fc3581302bc8a98e09e2e4748502c59b3661 rpm-ostree ex deploy-from-self /run/host ... 2022-10-09T22:15:00.831959313Z E1009 22:15:00.831949 179598 writer.go:200] Marking Degraded due to: error running systemd-run --unit machine-config-daemon-update-rpmostree-via-container --collect --wait -- podman run --authfile /var/lib/kubelet/config.json --privileged --pid=host --net=host --rm -v /:/run/host quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0daf5c4a35424410e88dde102022fc3581302bc8a98e09e2e4748502c59b3661 rpm-ostree ex deploy-from-self /run/host: Running as unit: machine-config-daemon-update-rpmostree-via-container.service 2022-10-09T22:15:00.831959313Z Finished with result: exit-code 2022-10-09T22:15:00.831959313Z Main processes terminated with: code=exited/status=125 2022-10-09T22:15:00.831959313Z Service runtime: 2min 52ms 2022-10-09T22:15:00.831959313Z CPU time consumed: 144ms 2022-10-09T22:15:00.831959313Z : exit status 125
Version-Release number of selected component (if applicable):
4.12
Steps to Reproduce:
upgrade cluster from 4.11.8 to 4.12.0-0.nightly-2022-10-05-053337
Actual results:
The upgrade fails because the node is degraded; the rpm-ostree update via container fails.
Expected results:
upgrade can be completed successfully
Additional info:
must-gather: https://gcsweb-qe-private-deck-ci.apps.ci.l2s4.p1.openshiftapps.com/gcs/qe-private-deck/logs/periodic-ci-openshift-openshift-tests-private-release-4.12-nightly-4.12-upgrade-from-stable-4.11-aws-ipi-proxy-p1/1579169944476585984/artifacts/aws-ipi-proxy-p1/gather-must-gather/artifacts/must-gather.tar
Other build logs of failed jobs
Description of problem:
Whereabouts reconciliation is not launched when whereabouts is referenced from a net-attach-def defined as a conflist (see Steps to Reproduce).
How reproducible:
Always
Steps to Reproduce:
1. oc edit the networks object and create a net-attach-def that references whereabouts – in a conflist.
Actual results:
The reconciler is not launched.
Expected results:
The reconciler is launched.
Description of problem:
The cluster cannot be installed when updating the join network CIDR using v6InternalSubnet fdxx::/64 in manifests/cluster-network-03-config.yml
Version-Release number of selected component (if applicable):
v4.12
How reproducible:
Always
Steps to Reproduce:
Using v6InternalSubnet: fd66::/48 in manifests/cluster-network-03-config.yml to install a dual stack cluster:
cp manifests/cluster-network-02-config.yml manifests/cluster-network-03-config.yml
sed -i 's/config.openshift.io\/v1/operator.openshift.io\/v1/g' manifests/cluster-network-03-config.yml
cat > ovn_kube_config <<HEREDOC
defaultNetwork:
  type: OVNKubernetes
  ovnKubernetesConfig:
    v6InternalSubnet: fd66::/48
HEREDOC
sed -i $'/^status/{e cat ovn_kube_config\n}' manifests/cluster-network-03-config.yml
Actual results:
Installation fails
Expected results:
Installation passes
Additional info:
Description of problem:
A duplicate "Getting started" notification is shown on the Search page
Version-Release number of selected component (if applicable):
4.12.0-0.nightly-2022-09-26-111919
How reproducible:
Always
Steps to Reproduce:
1. Log in to OCP as a normal user, switch to the Developer perspective, and create a new project
2. Delete the project (switch to the Administrator perspective, go to the Home -> Projects page)
3. Switch to the Developer perspective, go to the Search page, and check the "Getting Started" notification
Actual results:
Two notifications are shown on the page
Expected results:
Only one should exist
Additional info:
Description of problem:
On MicroShift, the Route API is served by kube-apiserver as a CRD. Reusing the same defaulting implementation as vanilla OpenShift through a patch to kube-apiserver is expected to resolve OCPBUGS-4189 but have no detectable effect on OCP.
Additional info:
This patch will be inert on OCP, but is implemented in openshift/kubernetes because MicroShift ingests kube-apiserver through its build-time dependency on openshift/kubernetes.
Description of problem:
Version-Release number of selected component (if applicable):
How reproducible:
Steps to Reproduce:
1. Go to the detail page of some Deployments with PDB connected to it
2. Click Edit PDB from the kebab menu
3. Inspect the second input box under the `Availability requirement `
Actual results: The name and aria-label attributes always show minAvailable
Expected results: They should be consistent with the first input box
Additional info:
This is a clone of issue OCPBUGS-6018. The following is the description of the original issue:
—
This is a public clone of OCPBUGS-3821
The MCO can sometimes render a rendered-config in the middle of an upgrade with old MCs, e.g.:
This will cause the render controller to create a new rendered MC that uses the OLD kubeletconfig-MC, which at best causes a double reboot for 1 node, and at worst blocks the update and breaks maxUnavailable nodes per pool.
This is a clone of issue OCPBUGS-3668. The following is the description of the original issue:
—
Description of problem:
Installer fails to install 4.12.0-rc.0 on VMware IPI with the script that worked with prior OCP versions. Error happens during Terraform prepare step when gathering information in the "Platform Provisioning Check". It looks like a permission issue, but we're using the VCenter administrator account. I double checked and that account has all the necessary permissions.
Version-Release number of selected component (if applicable):
OCP installer 4.12.0-rc.0 VSphere & Vcenter 7.0.3 - no pending updates
How reproducible:
Always - we observed this already in the nightlies, but wanted to wait for an RC to confirm
Steps to Reproduce:
1. Try to install using the openshift-install binary
Actual results:
Fails during the preparation step
Expected results:
Installs the cluster ;)
Additional info:
This runs in our CI/CD pipeline; let me know if you need access to the full run log: https://gitlab.consulting.redhat.com/cblum/storage-ocs-lab/-/jobs/219304 It includes the install-config.yaml, all component versions, and the full debug log output.
This is a clone of issue OCPBUGS-4954. The following is the description of the original issue:
—
Description of problem:
During the cluster destroy process for IBM Cloud IPI, failures can occur when COS instances are deleted but reclamations are created for the COS deletions, which prevents cleanup of the ResourceGroup.
Version-Release number of selected component (if applicable):
4.13.0 (and 4.12.0)
How reproducible:
Sporadic, it depends on IBM Cloud COS
Steps to Reproduce:
1. Create an IPI cluster on IBM Cloud
2. Delete the IPI cluster on IBM Cloud
3. COS Reclamation may be created, and can cause the destroy cluster to fail
Actual results:
time="2022-12-12T16:50:06Z" level=debug msg="Listing resource groups" time="2022-12-12T16:50:06Z" level=debug msg="Deleting resource group \"eu-gb-reclaim-1-zc6xg\"" time="2022-12-12T16:50:07Z" level=debug msg="Failed to delete resource group eu-gb-reclaim-1-zc6xg: Resource groups with active or pending reclamation instances can't be deleted. Use the CLI commands \"ibmcloud resource service-instances --type all\" and \"ibmcloud resource reclamations\" to check for remaining instances, then delete the instances and try again."
Expected results:
Successful destroy cluster (including deletion of ResourceGroup)
Additional info:
IBM Cloud is testing a potential fix currently.
It was also identified that the destroy stages are not in the proper order.
https://github.com/openshift/installer/blob/9377cb3974986a08b531a5e807fd90a3a4e85ebf/pkg/destroy/ibmcloud/ibmcloud.go#L128-L155
Changes are being made in an attempt to resolve this along with a fix for this bug as well.
This is a clone of issue OCPBUGS-5016. The following is the description of the original issue:
—
Description of problem:
When editing any pipeline in the OpenShift console, the current content is not loaded (the editor shows the initial content instead).
Version-Release number of selected component (if applicable):
How reproducible:
100%
Steps to Reproduce:
Developer -> Pipeline -> select pipeline -> Details -> Actions -> Edit Pipeline -> YAML view -> Cancel -> Actions -> Edit Pipeline -> YAML view
Actual results:
displayed content is incorrect.
Expected results:
Get the content of the current pipeline, not the "pipeline create" content.
Additional info:
If you cancel or save in the "Pipeline Builder" view after "Edit Pipeline", the expected content is shown. ~ Developer -> Pipeline -> select pipeline -> Details -> Actions -> Edit Pipeline -> Pipeline builder -> Cancel -> Actions -> Edit Pipeline -> YAML view: the resource content is displayed normally ~
Description of problem:
In a completely disconnected cluster, the dev catalog takes too long to load
Version-Release number of selected component (if applicable):
How reproducible:
Always
Steps to Reproduce:
1. A complete disconnected cluster
2. In add page go to the All services page
3.
Actual results:
It takes too long to load
Expected results:
Time taken should be reduced
Additional info:
Attached a gif for reference
Description of problem:
release-4.12 of openshift/cloud-provider-openstack is missing some commits that were backported in the upstream project to the release-1.25 branch. We should import them into our downstream fork.
How reproducible:
Steps to Reproduce:
1. 2. 3.
Actual results:
Expected results:
Additional info:
This is a clone of issue OCPBUGS-8342. The following is the description of the original issue:
—
This is a clone of issue OCPBUGS-8258. The following is the description of the original issue:
—
Invoking 'create cluster-manifests' fails when imageContentSources is missing in install-config yaml:
$ openshift-install agent create cluster-manifests
INFO Consuming Install Config from target directory
FATAL failed to write asset (Mirror Registries Config) to disk: failed to write file: open .: is a directory
install-config.yaml:
apiVersion: v1alpha1
metadata:
  name: appliance
rendezvousIP: 192.168.122.116
hosts:
  - hostname: sno
    installerArgs: '["--save-partlabel", "agent*", "--save-partlabel", "rhcos-*"]'
    interfaces:
      - name: enp1s0
        macAddress: 52:54:00:e7:05:72
    networkConfig:
      interfaces:
        - name: enp1s0
          type: ethernet
          state: up
          mac-address: 52:54:00:e7:05:72
          ipv4:
            enabled: true
            dhcp: true
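A hedged sketch of the guard being asked for: when the install-config has no imageContentSources, the Mirror Registries Config asset should simply not be written, instead of attempting to write to an empty path. The type and function names are illustrative, not the installer's actual asset code.
```
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

type imageContentSource struct {
	Source  string
	Mirrors []string
}

// writeMirrorRegistriesConfig writes the registries config only when there is
// something to write; with no sources it skips the file entirely.
func writeMirrorRegistriesConfig(dir string, sources []imageContentSource) error {
	if len(sources) == 0 {
		return nil // nothing to mirror: no file, no error
	}
	path := filepath.Join(dir, "mirror", "registries.conf")
	if err := os.MkdirAll(filepath.Dir(path), 0o755); err != nil {
		return err
	}
	// Real content generation elided; a placeholder stands in for it.
	return os.WriteFile(path, []byte("# generated registries.conf\n"), 0o644)
}

func main() {
	if err := writeMirrorRegistriesConfig(".", nil); err != nil {
		fmt.Println("unexpected error:", err)
		return
	}
	fmt.Println("skipped writing mirror config for empty imageContentSources")
}
```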
Description of problem:
ovnkube-trace fails on hypershift deployments:
https://bugzilla.redhat.com/show_bug.cgi?id=2066891#c8
getDatabaseURIs looks for pods with container ovnkube-master, and those don't exist in hypershift.
https://github.com/ovn-org/ovn-kubernetes/blob/6b8acf05cb6043ebdc42d9d36e700390baabea4a/go-controller/cmd/ovnkube-trace/ovnkube-trace.go#L540
~~~
// Returns nbAddress, sbAddress, protocol == "ssl", nil
func getDatabaseURIs(coreclient *corev1client.CoreV1Client, restconfig *rest.Config, ovnNamespace string) (string, string, bool, error) {
	containerName := "ovnkube-master"
	var err error
	found := false
	var podName string
	listOptions := metav1.ListOptions{}
	pods, err := coreclient.Pods(ovnNamespace).List(context.TODO(), listOptions)
	if err != nil {
		// error handling elided in this excerpt
	}
	for _, pod := range pods.Items {
		for _, container := range pod.Spec.Containers {
			if container.Name == containerName {
				// marks the pod as found; body elided in this excerpt
			}
		}
	}
	if !found {
		// no ovnkube-master container found; body elided in this excerpt
	}
~~~
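A hedged sketch of one way to make the lookup work on hypershift as well: search a list of candidate container names instead of only ovnkube-master. This is illustrative, not the actual ovnkube-trace fix; the types are minimal stand-ins.
```
package main

import "fmt"

// pod is a minimal stand-in for the corev1.Pod fields used here.
type pod struct {
	Name       string
	Containers []string
}

// findOVNDBPod returns the first pod that has any of the candidate container
// names, so both self-hosted (ovnkube-master) and hypershift layouts match.
func findOVNDBPod(pods []pod, candidates []string) (string, bool) {
	for _, p := range pods {
		for _, c := range p.Containers {
			for _, want := range candidates {
				if c == want {
					return p.Name, true
				}
			}
		}
	}
	return "", false
}

func main() {
	pods := []pod{{Name: "ovnkube-node-abc", Containers: []string{"ovnkube-node"}}}
	name, found := findOVNDBPod(pods, []string{"ovnkube-master", "ovnkube-node"})
	fmt.Println(name, found)
}
```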
Version-Release number of selected component (if applicable):
How reproducible:
Steps to Reproduce:
1.
2.
3.
Actual results:
Expected results:
Additional info:
This is a clone of issue OCPBUGS-2083. The following is the description of the original issue:
—
Description of problem:
Currently we are running the VMware CSI Operator in OpenShift 4.10.33. After running vulnerability scans, the operator was discovered to be running a known weak cipher, 3DES. We are attempting to upgrade or modify the operator to customize the available ciphers. We were looking at performing a manual upgrade via Quay.io but can't seem to pull the image, and we were trying to steer away from performing a custom install from scratch. Looking for any suggestions on mitigating the weak cipher in the kube-rbac-proxy under the VMware CSI Operator.
Version-Release number of selected component (if applicable):
How reproducible:
Steps to Reproduce:
1. 2. 3.
Actual results:
Expected results:
Additional info:
There is a capacity limit on egressIPs for each cloud provider; for example, on GCP the limit is 10.
If the number of egressIPs added to a hostsubnet exceeds the capacity limit, it is expected that a message is emitted to the event log, which can be seen through "oc get event".
On GCP with the SDN plugin, egressCIDRs were configured on one worker node and 12 netnamespaces were configured, each with 1 egressIP, so the total number of egressIPs for the hostsubnet exceeded its capacity limit of 10. No event was seen indicating that the number of egressIPs for the hostsubnet exceeded the limit.
$ oc get clusterversion
NAME VERSION AVAILABLE PROGRESSING SINCE STATUS
version 4.11.0-0.nightly-2022-08-02-014045 True False 160m Cluster version is 4.11.0-0.nightly-2022-08-02-014045
See attachment for more details.
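A hedged sketch of the expected behavior, not the SDN controller code: a check that produces a warning message once the assigned count exceeds the provider capacity. Only the GCP value of 10 comes from this report; everything else (names, the idea of a provider map) is illustrative, and in the real controller the message would be emitted through an EventRecorder so it appears in `oc get event`.
```
package main

import "fmt"

// egressIPCapacity holds per-provider egress IP capacities
// (illustrative; only the GCP value of 10 is taken from this report).
var egressIPCapacity = map[string]int{
	"gcp": 10,
}

// overCapacityMessage returns a warning message when the number of egress IPs
// assigned to a hostsubnet exceeds the provider capacity.
func overCapacityMessage(node, provider string, assigned int) (string, bool) {
	limit, ok := egressIPCapacity[provider]
	if !ok || assigned <= limit {
		return "", false
	}
	msg := fmt.Sprintf("node %s has %d egress IPs assigned, exceeding the %s capacity of %d",
		node, assigned, provider, limit)
	return msg, true
}

func main() {
	// In the controller this would be recorded as a Warning event; here we just print it.
	if msg, over := overCapacityMessage("worker-0", "gcp", 12); over {
		fmt.Println("Warning EgressIPCapacityExceeded:", msg)
	}
}
```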
Description of problem:
We have 6 runs of techpreview jobs where the job fails due to the MCO:
{Operator degraded (RequiredPoolsFailed): Unable to apply 4.12.0-0.ci.test-2022-09-21-183414-ci-op-qd6plyhc-latest: error during syncRequiredMachineConfigPools: [timed out waiting for the condition, error pool master is not ready, retrying. Status: (pool degraded: true total: 3, ready 0, updated: 0, unavailable: 3)] Operator degraded (RequiredPoolsFailed): Unable to apply 4.12.0-0.ci.test-2022-09-21-183414-ci-op-qd6plyhc-latest: error during syncRequiredMachineConfigPools: [timed out waiting for the condition, error pool master is not ready, retrying. Status: (pool degraded: true total: 3, ready 0, updated: 0, unavailable: 3)]}
Looking at the MCD logs, the master seems to go degraded during bootstrap because the rendered config is not found:
I0921 18:49:47.091804 8171 daemon.go:444] Node ci-op-qd6plyhc-6dd9a-bfmjd-master-1 is part of the control plane I0921 18:49:49.213556 8171 node.go:24] No machineconfiguration.openshift.io/currentConfig annotation on node ci-op-qd6plyhc-6dd9a-bfmjd-master-1: map[csi.volume.kubernetes.io/nodeid: {"pd.csi.storage.gke.io":"projects/openshift-gce-devel-ci-2/zones/us-central1-b/instances/ci-op-qd6plyhc-6dd9a-bfmjd-master-1"} volumes.kubernetes.io/controller-managed-attach-detach:true], in cluster bootstrap, loading initial node annotation from /etc/machine-config-daemon/node-annotations.json I0921 18:49:49.215186 8171 node.go:45] Setting initial node config: rendered-master-2dde32327e4e5d15092fccbac1dcec49 I0921 18:49:49.253706 8171 daemon.go:1184] In bootstrap mode E0921 18:49:49.254046 8171 writer.go:200] Marking Degraded due to: machineconfig.machineconfiguration.openshift.io "rendered-master-2dde32327e4e5d15092fccbac1dcec49" not found I0921 18:49:51.232610 8171 daemon.go:499] Transitioned from state: Done -> Degraded I0921 18:49:51.249618 8171 daemon.go:1184] In bootstrap mode E0921 18:49:51.249906 8171 writer.go:200] Marking Degraded due to: machineconfig.machineconfiguration.openshift.io "rendered-master-2dde32327e4e5d15092fccbac1dcec49" not found
However, looking at the controller, a rendered config was generated correctly, but it is not the missing config from above:
I0921 18:54:06.736984 1 render_controller.go:506] Generated machineconfig rendered-master-acc8491aafab8ef511a40b76372325ee from 6 configs: [{MachineConfig 00-master machineconfiguration.openshift.io/v1 } {MachineConfig 01-master-container-runtime machineconfiguration.openshift.io/v1 } {MachineConfig 01-master-kubelet machineconfiguration.openshift.io/v1 } {MachineConfig 98-master-generated-kubelet machineconfiguration.openshift.io/v1 } {MachineConfig 99-master-generated-registries machineconfiguration.openshift.io/v1 } {MachineConfig 99-master-ssh machineconfiguration.openshift.io/v1 }] I0921 18:54:06.737226 1 event.go:285] Event(v1.ObjectReference{Kind:"MachineConfigPool", Namespace:"", Name:"master", UID:"b2084ca6-4b33-46bf-b83b-9e98010ff085", APIVersion:"machineconfiguration.openshift.io/v1", ResourceVersion:"5648", FieldPath:""}): type: 'Normal' reason: 'RenderedConfigGenerated' rendered-master-acc8491aafab8ef511a40b76372325ee successfully generated (release version: 4.12.0-0.ci.test-2022-09-21-183220-ci-op-9ksj7d7g-latest, controller version: a627415c240b4c7dd2f9e90f659690d9c0f623f3) I0921 18:54:06.742053 1 render_controller.go:532] Pool master: now targeting: rendered-master-acc8491aafab8ef511a40b76372325ee
So far I see this in the following techpreview jobs:
GCP techpreview
https://prow.ci.openshift.org/view/gs/origin-ci-test/logs/openshift-kubernetes-1360-ci-4.12-e2e-gcp-sdn-techpreview/1572638837954318336
https://prow.ci.openshift.org/view/gs/origin-ci-test/logs/openshift-kubernetes-1360-ci-4.12-e2e-gcp-sdn-techpreview-serial/1572638838793179136
Vsphere techpreview
https://prow.ci.openshift.org/view/gs/origin-ci-test/logs/openshift-kubernetes-1360-nightly-4.12-e2e-vsphere-ovn-techpreview/1572638854794448896
https://prow.ci.openshift.org/view/gs/origin-ci-test/logs/openshift-kubernetes-1360-nightly-4.12-e2e-vsphere-ovn-techpreview-serial/1572638855574589440
AWS Techpreview:
https://prow.ci.openshift.org/view/gs/origin-ci-test/logs/openshift-kubernetes-1360-ci-4.12-e2e-aws-sdn-techpreview/1572638828672323584
https://prow.ci.openshift.org/view/gs/origin-ci-test/logs/openshift-kubernetes-1360-ci-4.12-e2e-aws-sdn-techpreview-serial/1572638829217583104
The above jobs affect the k8s 1.25 bump and are blocking the job.
There are also other occurrences not in our PR:
https://prow.ci.openshift.org/view/gs/origin-ci-test/pr-logs/pull/openshift_release/31965/rehearse-31965-pull-ci-openshift-openshift-controller-manager-master-openshift-e2e-aws-builds-techpreview/1572581504297472000
Also see a quick search:
https://search.ci.openshift.org/?search=timed+out+waiting+for+the+condition%2C+error+pool+master+is+not+ready&maxAge=48h&context=1&type=bug%2Bissue%2Bjunit&name=&excludeName=&maxMatches=5&maxBytes=20971520&groupBy=job
Did something change that would affect tech preview jobs?
Also note, this seems like a new failure; I have seen some of these jobs passing in the last ~8 days.
This is a clone of issue OCPBUGS-2479. The following is the description of the original issue:
—
Description of problem:
Right border radius is 0 for the pipeline visualization wrapper in dark mode but looks fine in light mode
Version-Release number of selected component (if applicable):
4.12
How reproducible:
Steps to Reproduce:
1. Switch the theme to dark mode
2. Create a pipeline and navigate to the Pipeline details page
Actual results:
Right border radius is 0, see the screenshots
Expected results:
The right border radius should be the same as the left border radius.
Additional info:
Description of problem:
In a 4.11 cluster with only openshift-samples enabled, the 4.12 introduced optional COs console and insights are installed. While upgrading to 4.12, CVO considers them to be disabled explicitly and skips reconciling them. So these COs are not upgraded to 4.12. Installed COs cannot be disabled, so CVO is supposed to implicitly enable them. $ oc get clusterversion -oyaml { "apiVersion": "config.openshift.io/v1", "kind": "ClusterVersion", "metadata": { "creationTimestamp": "2022-09-30T05:02:31Z", "generation": 3, "name": "version", "resourceVersion": "134808", "uid": "bd95473f-ffda-402d-8fe3-74f852a9d6eb" }, "spec": { "capabilities": { "additionalEnabledCapabilities": [ "openshift-samples" ], "baselineCapabilitySet": "None" }, "channel": "stable-4.11", "clusterID": "8eda5167-a730-4b39-be1d-214a80506d34", "desiredUpdate": { "force": true, "image": "registry.ci.openshift.org/ocp/release@sha256:2c8e617830f84ac1ee1bfcc3581010dec4ae5d9cad7a54271574e8d91ef5ecbc", "version": "" } }, "status": { "availableUpdates": null, "capabilities": { "enabledCapabilities": [ "openshift-samples" ], "knownCapabilities": [ "Console", "Insights", "Storage", "baremetal", "marketplace", "openshift-samples" ] }, "conditions": [ { "lastTransitionTime": "2022-09-30T05:02:33Z", "message": "Unable to retrieve available updates: currently reconciling cluster version 4.12.0-0.nightly-2022-09-28-204419 not found in the \"stable-4.11\" channel", "reason": "VersionNotFound", "status": "False", "type": "RetrievedUpdates" }, { "lastTransitionTime": "2022-09-30T05:02:33Z", "message": "Capabilities match configured spec", "reason": "AsExpected", "status": "False", "type": "ImplicitlyEnabledCapabilities" }, { "lastTransitionTime": "2022-09-30T05:02:33Z", "message": "Payload loaded version=\"4.12.0-0.nightly-2022-09-28-204419\" image=\"registry.ci.openshift.org/ocp/release@sha256:2c8e617830f84ac1ee1bfcc3581010dec4ae5d9cad7a54271574e8d91ef5ecbc\" architecture=\"amd64\"", "reason": "PayloadLoaded", "status": "True", "type": "ReleaseAccepted" }, { "lastTransitionTime": "2022-09-30T05:23:18Z", "message": "Done applying 4.12.0-0.nightly-2022-09-28-204419", "status": "True", "type": "Available" }, { "lastTransitionTime": "2022-09-30T07:05:42Z", "status": "False", "type": "Failing" }, { "lastTransitionTime": "2022-09-30T07:41:53Z", "message": "Cluster version is 4.12.0-0.nightly-2022-09-28-204419", "status": "False", "type": "Progressing" } ], "desired": { "image": "registry.ci.openshift.org/ocp/release@sha256:2c8e617830f84ac1ee1bfcc3581010dec4ae5d9cad7a54271574e8d91ef5ecbc", "version": "4.12.0-0.nightly-2022-09-28-204419" }, "history": [ { "completionTime": "2022-09-30T07:41:53Z", "image": "registry.ci.openshift.org/ocp/release@sha256:2c8e617830f84ac1ee1bfcc3581010dec4ae5d9cad7a54271574e8d91ef5ecbc", "startedTime": "2022-09-30T06:42:01Z", "state": "Completed", "verified": false, "version": "4.12.0-0.nightly-2022-09-28-204419" }, { "completionTime": "2022-09-30T05:23:18Z", "image": "registry.ci.openshift.org/ocp/release@sha256:5a6f6d1bf5c752c75d7554aa927c06b5ea0880b51909e83387ee4d3bca424631", "startedTime": "2022-09-30T05:02:33Z", "state": "Completed", "verified": false, "version": "4.11.0-0.nightly-2022-09-29-191451" } ], "observedGeneration": 3, "versionHash": "CSCJ2fxM_2o=" } } $ oc get co NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE MESSAGE authentication 4.12.0-0.nightly-2022-09-28-204419 True False False 93m cloud-controller-manager 4.12.0-0.nightly-2022-09-28-204419 True False False 3h56m cloud-credential 
4.12.0-0.nightly-2022-09-28-204419 True False False 3h59m cluster-autoscaler 4.12.0-0.nightly-2022-09-28-204419 True False False 3h53m config-operator 4.12.0-0.nightly-2022-09-28-204419 True False False 3h54m console 4.11.0-0.nightly-2022-09-29-191451 True False False 3h45m control-plane-machine-set 4.12.0-0.nightly-2022-09-28-204419 True False False 117m csi-snapshot-controller 4.12.0-0.nightly-2022-09-28-204419 True False False 3h54m dns 4.12.0-0.nightly-2022-09-28-204419 True False False 3h53m etcd 4.12.0-0.nightly-2022-09-28-204419 True False False 3h52m image-registry 4.12.0-0.nightly-2022-09-28-204419 True False False 3h46m ingress 4.12.0-0.nightly-2022-09-28-204419 True False False 151m insights 4.11.0-0.nightly-2022-09-29-191451 True False False 3h48m kube-apiserver 4.12.0-0.nightly-2022-09-28-204419 True False False 3h50m kube-controller-manager 4.12.0-0.nightly-2022-09-28-204419 True False False 3h51m kube-scheduler 4.12.0-0.nightly-2022-09-28-204419 True False False 3h51m kube-storage-version-migrator 4.12.0-0.nightly-2022-09-28-204419 True False False 91m machine-api 4.12.0-0.nightly-2022-09-28-204419 True False False 3h50m machine-approver 4.12.0-0.nightly-2022-09-28-204419 True False False 3h54m machine-config 4.12.0-0.nightly-2022-09-28-204419 True False False 3h52m monitoring 4.12.0-0.nightly-2022-09-28-204419 True False False 3h44m network 4.12.0-0.nightly-2022-09-28-204419 True False False 3h55m node-tuning 4.12.0-0.nightly-2022-09-28-204419 True False False 113m openshift-apiserver 4.12.0-0.nightly-2022-09-28-204419 True False False 3h48m openshift-controller-manager 4.12.0-0.nightly-2022-09-28-204419 True False False 113m openshift-samples 4.12.0-0.nightly-2022-09-28-204419 True False False 116m operator-lifecycle-manager 4.12.0-0.nightly-2022-09-28-204419 True False False 3h54m operator-lifecycle-manager-catalog 4.12.0-0.nightly-2022-09-28-204419 True False False 3h54m operator-lifecycle-manager-packageserver 4.12.0-0.nightly-2022-09-28-204419 True False False 3h48m service-ca 4.12.0-0.nightly-2022-09-28-204419 True False False 3h54m storage 4.12.0-0.nightly-2022-09-28-204419 True False False 3h54m
Version-Release number of selected component (if applicable):
4.12.0-0.nightly-2022-09-28-204419
How reproducible:
Always
Steps to Reproduce:
1. Install a 4.11 cluster with only openshift-samples enabled
2. Upgrade to 4.12
Actual results:
The optional COs introduced in 4.12 (console and insights) are not upgraded to 4.12
Expected results:
All the installed COs get upgraded
Additional info:
This is a clone of issue OCPBUGS-6651. The following is the description of the original issue:
—
Description of problem:
When running a hypershift HostedCluster with a publicAndPrivate / private setup behind a proxy, Nodes never go ready. ovn-kubernetes pods fail to run because the init container fails. [root@ip-10-0-129-223 core]# crictl logs cf142bb9f427d + [[ -f /env/ ]] ++ date -Iseconds 2023-01-25T12:18:46+00:00 - checking sbdb + echo '2023-01-25T12:18:46+00:00 - checking sbdb' + echo 'hosts: dns files' + proxypid=15343 + ovndb_ctl_ssl_opts='-p /ovn-cert/tls.key -c /ovn-cert/tls.crt -C /ovn-ca/ca-bundle.crt' + sbdb_ip=ssl:ovnkube-sbdb.apps.agl-proxy.hypershift.local:9645 + retries=0 + ovn-sbctl --no-leader-only --timeout=5 --db=ssl:ovnkube-sbdb.apps.agl-proxy.hypershift.local:9645 -p /ovn-cert/tls.key -c /ovn-cert/tls.crt -C /ovn-ca/ca-bundle.crt get-connection + exec socat TCP-LISTEN:9645,reuseaddr,fork PROXY:10.0.140.167:ovnkube-sbdb.apps.agl-proxy.hypershift.local:443,proxyport=3128 ovn-sbctl: ssl:ovnkube-sbdb.apps.agl-proxy.hypershift.local:9645: database connection failed () + (( retries += 1 ))
Version-Release number of selected component (if applicable):
4.12
How reproducible:
Always.
Steps to Reproduce:
1. Create a publicAndPrivate hypershift HostedCluster behind a proxy. E.g" ➜ hypershift git:(main) ✗ ./bin/hypershift create cluster \ aws --pull-secret ~/www/pull-secret-ci.txt \ --ssh-key ~/.ssh/id_ed25519.pub \ --name agl-proxy \ --aws-creds ~/www/config/aws-osd-hypershift-creds \ --node-pool-replicas=3 \ --region=us-east-1 \ --base-domain=agl.hypershift.devcluster.openshift.com \ --zones=us-east-1a \ --endpoint-access=PublicAndPrivate \ --external-dns-domain=agl-services.hypershift.devcluster.openshift.com --enable-proxy=true 2. Get the kubeconfig for the guest cluster. E.g kubectl get secret -nclusters agl-proxy-admin-kubeconfig -oyaml 3. Get pods in the guest cluster. See ovnkube-node pods init container failing with [root@ip-10-0-129-223 core]# crictl logs cf142bb9f427d + [[ -f /env/ ]] ++ date -Iseconds 2023-01-25T12:18:46+00:00 - checking sbdb + echo '2023-01-25T12:18:46+00:00 - checking sbdb' + echo 'hosts: dns files' + proxypid=15343 + ovndb_ctl_ssl_opts='-p /ovn-cert/tls.key -c /ovn-cert/tls.crt -C /ovn-ca/ca-bundle.crt' + sbdb_ip=ssl:ovnkube-sbdb.apps.agl-proxy.hypershift.local:9645 + retries=0 + ovn-sbctl --no-leader-only --timeout=5 --db=ssl:ovnkube-sbdb.apps.agl-proxy.hypershift.local:9645 -p /ovn-cert/tls.key -c /ovn-cert/tls.crt -C /ovn-ca/ca-bundle.crt get-connection + exec socat TCP-LISTEN:9645,reuseaddr,fork PROXY:10.0.140.167:ovnkube-sbdb.apps.agl-proxy.hypershift.local:443,proxyport=3128 ovn-sbctl: ssl:ovnkube-sbdb.apps.agl-proxy.hypershift.local:9645: database connection failed () + (( retries += 1 ))
To create a bastion and ssh into the Nodes, see https://hypershift-docs.netlify.app/how-to/debug-nodes/
Actual results:
Nodes unready
Expected results:
Nodes go ready
Additional info:
This is a clone of issue OCPBUGS-3277. The following is the description of the original issue:
—
I saw this occur one time when running installs in a continuous loop. This was with COMPACT_IPV4 in a non-disconnected setup.
WaitForBootstrapComplete shows it can't access the API
level=info msg=Unable to retrieve cluster metadata from Agent Rest API: no clusterID known for the cluster
level=debug msg=cluster is not registered in rest API
level=debug msg=infraenv is not registered in rest API
This is because create-cluster-and-infraenv.service failed
Failed Units: 2
create-cluster-and-infraenv.service
NetworkManager-wait-online.service
The agentbasedinstaller register command wasn't able to retrieve the image to get the version
Nov 03 23:03:24 master-0 create-cluster-and-infraenv[2702]: time="2022-11-03T23:03:24Z" level=error msg="command 'oc adm release info -o template --template '\{{.metadata.version}}' --insecure=false registry.ci.openshift.org/ocp/release:4.12.0-0.nightly-2022-10-25-210451 --registry-config=/tmp/registry-config3852044519' exited with non-zero exit code 1: \nerror: unable to read image registry.ci.openshift.org/ocp/release:4.12.0-0.nightly-2022-10-25-210451: Get \"https://registry.ci.openshift.org/v2/\": dial tcp: lookup registry.ci.openshift.org on 192.168.111.1:53: read udp 192.168.111.80:51315->192.168.111.1:53: i/o timeout\n" Nov 03 23:03:24 master-0 create-cluster-and-infraenv[2702]: time="2022-11-03T23:03:24Z" level=error msg="failed to get image openshift version from release image registry.ci.openshift.org/ocp/release:4.12.0-0.nightly-2022-10-25-210451" error="command 'oc adm release info -o template --template '\{{.metadata.version}}' --insecure=false registry.ci.openshift.org/ocp/release:4.12.0-0.nightly-2022-10-25-210451 --registry-config=/tmp/registry-config3852044519' exited with non-zero exit code 1: \nerror: unable to read image registry.ci.openshift.org/ocp/release:4.12.0-0.nightly-2022-10-25-210451: Get \"https://registry.ci.openshift.org/v2/\": dial tcp: lookup registry.ci.openshift.org on 192.168.111.1:53: read udp 192.168.111.80:51315->192.168.111.1:53: i/o timeout\n"
This occurs when attempting to get the release here:
https://github.com/openshift/assisted-service/blob/master/cmd/agentbasedinstaller/register.go#L58
We should add a retry mechanism or restart the service to handle spurious network failures like this.
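A minimal sketch of such a retry, assuming a plain exponential backoff wrapper around the release-image lookup; the wrapped function here is a placeholder, not the agentbasedinstaller code.
```
package main

import (
	"errors"
	"fmt"
	"time"
)

// withRetry retries fn with exponential backoff, which is usually enough to
// ride out transient DNS/network failures like the one in the log above.
func withRetry(attempts int, initialDelay time.Duration, fn func() error) error {
	delay := initialDelay
	var err error
	for i := 0; i < attempts; i++ {
		if err = fn(); err == nil {
			return nil
		}
		fmt.Printf("attempt %d failed: %v; retrying in %s\n", i+1, err, delay)
		time.Sleep(delay)
		delay *= 2
	}
	return fmt.Errorf("all %d attempts failed: %w", attempts, err)
}

func main() {
	calls := 0
	// Placeholder for retrieving the release version ("oc adm release info ...").
	getReleaseVersion := func() error {
		calls++
		if calls < 3 {
			return errors.New("dial tcp: lookup registry.ci.openshift.org: i/o timeout")
		}
		return nil
	}
	if err := withRetry(5, 100*time.Millisecond, getReleaseVersion); err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("release version retrieved after", calls, "attempts")
}
```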
Description of problem:
The default catalogSources are not being run in restricted mode.
Version-Release number of selected component (if applicable):
4.12.0
How reproducible:
Always
Steps to Reproduce:
1. Create a 4.12 OpenShift cluster
2. Check the securityContextConfig for the default catalogSources
Actual results:
$ k get catsrc -n openshift-marketplace -o yaml | grep securityContextConfig
  securityContextConfig: legacy
  securityContextConfig: legacy
  securityContextConfig: legacy
  securityContextConfig: legacy
Expected results:
$ k get catsrc -n openshift-marketplace -o yaml | grep securityContextConfig
  securityContextConfig: restricted
  securityContextConfig: restricted
  securityContextConfig: restricted
  securityContextConfig: restricted
Additional info:
Description of problem:
Container networking pods cannot access the host network pods on another node which caused some operators DEGRADED $ oc get co NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE MESSAGE authentication 4.12.0-0.nightly-2022-10-23-204408 False True True 63m OAuthServerRouteEndpointAccessibleControllerAvailable: Get "https://oauth-openshift.apps.jhou.arm.eng.rdu2.redhat.com/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)... baremetal 4.12.0-0.nightly-2022-10-23-204408 True False False 62m cloud-controller-manager 4.12.0-0.nightly-2022-10-23-204408 True False False 68m cloud-credential 4.12.0-0.nightly-2022-10-23-204408 True False False 78m cluster-autoscaler 4.12.0-0.nightly-2022-10-23-204408 True False False 62m config-operator 4.12.0-0.nightly-2022-10-23-204408 True False False 63m console 4.12.0-0.nightly-2022-10-23-204408 False False False 30m RouteHealthAvailable: failed to GET route (https://console-openshift-console.apps.jhou.arm.eng.rdu2.redhat.com): Get "https://console-openshift-console.apps.jhou.arm.eng.rdu2.redhat.com": context deadline exceeded (Client.Timeout exceeded while awaiting headers) control-plane-machine-set 4.12.0-0.nightly-2022-10-23-204408 True False False 62m csi-snapshot-controller 4.12.0-0.nightly-2022-10-23-204408 True False False 62m dns 4.12.0-0.nightly-2022-10-23-204408 True False False 62m etcd 4.12.0-0.nightly-2022-10-23-204408 False True True 13m EtcdMembersAvailable: 1 of 2 members are available, openshift-qe-048.arm.eng.rdu2.redhat.com is unhealthy image-registry 4.12.0-0.nightly-2022-10-23-204408 True False False 39m ingress 4.12.0-0.nightly-2022-10-23-204408 True False True 47m The "default" ingress controller reports Degraded=True: DegradedConditions: One or more other status conditions indicate a degraded state: CanaryChecksSucceeding=False (CanaryChecksRepetitiveFailures: Canary route checks for the default ingress controller are failing) insights 4.12.0-0.nightly-2022-10-23-204408 True False False 56m kube-apiserver 4.12.0-0.nightly-2022-10-23-204408 True False False 50m kube-controller-manager 4.12.0-0.nightly-2022-10-23-204408 True False True 60m GarbageCollectorDegraded: error querying alerts: client_error: client error: 403 kube-scheduler 4.12.0-0.nightly-2022-10-23-204408 True False False 54m kube-storage-version-migrator 4.12.0-0.nightly-2022-10-23-204408 True False False 63m machine-api 4.12.0-0.nightly-2022-10-23-204408 True False False 51m machine-approver 4.12.0-0.nightly-2022-10-23-204408 True False False 62m machine-config 4.12.0-0.nightly-2022-10-23-204408 True False False 29m marketplace 4.12.0-0.nightly-2022-10-23-204408 True False False 62m monitoring 4.12.0-0.nightly-2022-10-23-204408 True False False 38m network 4.12.0-0.nightly-2022-10-23-204408 True False False 62m node-tuning 4.12.0-0.nightly-2022-10-23-204408 True False False 62m openshift-apiserver 4.12.0-0.nightly-2022-10-23-204408 True False False 30m openshift-controller-manager 4.12.0-0.nightly-2022-10-23-204408 True False False 56m openshift-samples 4.12.0-0.nightly-2022-10-23-204408 True False False 43m operator-lifecycle-manager 4.12.0-0.nightly-2022-10-23-204408 True False False 62m operator-lifecycle-manager-catalog 4.12.0-0.nightly-2022-10-23-204408 True False False 62m operator-lifecycle-manager-packageserver 4.12.0-0.nightly-2022-10-23-204408 True False False 43m service-ca 4.12.0-0.nightly-2022-10-23-204408 True False False 63m storage 4.12.0-0.nightly-2022-10-23-204408 True False False 63m $ oc get pod -n openshift-ingress -o 
wide NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES router-default-58f6498646-gf6ns 1/1 Running 1 (79m ago) 93m 2620:52:0:1eb:3673:5aff:fe9e:5abc openshift-qe-049.arm.eng.rdu2.redhat.com <none> <none> router-default-58f6498646-qjtbk 1/1 Running 1 (79m ago) 93m 2620:52:0:1eb:3673:5aff:fe9e:593c openshift-qe-052.arm.eng.rdu2.redhat.com <none> <none> $ oc get pod -n openshift-network-diagnostics -o wide NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES network-check-source-5f967d78bc-cfwz4 1/1 Running 0 103m fd01:0:0:3::9 openshift-qe-052.arm.eng.rdu2.redhat.com <none> <none> network-check-target-52krv 1/1 Running 0 91m fd01:0:0:4::3 openshift-qe-049.arm.eng.rdu2.redhat.com <none> <none> network-check-target-56q9q 1/1 Running 0 91m fd01:0:0:3::5 openshift-qe-052.arm.eng.rdu2.redhat.com <none> <none> network-check-target-ggqsf 1/1 Running 0 103m fd01:0:0:2::4 openshift-qe-048.arm.eng.rdu2.redhat.com <none> <none> network-check-target-xfrq4 1/1 Running 0 103m fd01:0:0:1::3 openshift-qe-047.arm.eng.rdu2.redhat.com <none> <none> network-check-target-zrglr 1/1 Running 0 73m fd01:0:0:6::4 openshift-qe-051.arm.eng.rdu2.redhat.com <none> <none> network-check-target-zwb4t 1/1 Running 0 91m fd01:0:0:5::5 openshift-qe-053.arm.eng.rdu2.redhat.com <none> <none> ####Failed from containers pod on openshift-qe-053.arm.eng.rdu2.redhat.com to access ingress pods $ oc rsh -n openshift-network-diagnostics netw