Jump to: Complete Features | Incomplete Features | Complete Epics | Incomplete Epics | Other Complete | Other Incomplete
Note: this page shows the Feature-Based Change Log for a release
These features were completed when this image was assembled
1. Proposed title of this feature request
Add runbook_url to alerts in the OCP UI
2. What is the nature and description of the request?
If an alert includes a runbook_url label, then it should appear in the UI for the alert as a link.
3. Why does the customer need this? (List the business requirements here)
Customers can easily reach the alert runbook and address their issues.
4. List any affected packages or components.
As a user, I should be able to configure the CSI driver to have a storage topology.
In the console-operator repo we need to add the `capability.openshift.io/console` annotation to all the manifests that the operator either contains or creates on the fly.
Manifests are currently present in /bindata and /manifest directories.
Here is an example of the insights-operator change.
Here is the overall enhancement doc.
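As a rough illustration (the annotation value and file paths below are assumptions, not confirmed by the card), the change amounts to stamping each manifest and then checking that nothing was missed:
```shell
# Minimal sketch of the annotation each manifest under /bindata and /manifest would carry
# (the "true" value is an assumption):
#
#   metadata:
#     annotations:
#       capability.openshift.io/console: "true"
#
# Quick check for manifests that are still missing the annotation:
grep -rL 'capability.openshift.io/console' bindata/ manifest/
```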
Feature Overview
Provide CSI drivers to replace all the in-tree cloud provider drivers we currently have. These drivers will probably be released as Tech Preview versions first before being promoted to GA.
Goals
Requirements
Requirement | Notes | isMvp? |
---|---|---|
Framework for CSI driver | TBD | Yes |
Drivers should be available to install both in disconnected and connected mode | Yes | |
Drivers should upgrade from release to release without any impact | Yes | |
Drivers should be installable via CVO (when in-tree plugin exists) | | |
Out of Scope
This work will only cover the drivers themselves, it will not include
Background, and strategic fit
In a future Kubernetes release (currently 1.21), in-tree cloud provider drivers will be deprecated and replaced with CSI equivalents. We need the drivers created so that we continue to support the ecosystems in an appropriate way.
Assumptions
Customer Considerations
Customers will need to be able to use the storage they want.
Documentation Considerations
This Epic is to track the GA of this feature
As an OCP user, I want images for GCP Filestore CSI Driver and Operator, so that I can install them on my cluster and utilize GCP Filestore shares.
We need to continue to maintain specific areas within storage, this is to capture that effort and track it across releases.
Goals
Requirements
Requirement | Notes | isMvp? |
---|---|---|
Telemetry | No | |
Certification | No | |
API metrics | No | |
Out of Scope
n/a
Background, and strategic fit
With the expected scale of our customer base, we want to keep the load of customer tickets / BZs low
Assumptions
Customer Considerations
Documentation Considerations
Notes
In progress:
High prio:
Unsorted
Traditionally we did these updates as bugfixes, because we did them after the feature freeze (FF). We are trying no-feature-freeze in 4.12. We will try to do as much as we can before FF, but we're quite sure something will slip past FF as usual.
Update the driver to the latest upstream release. Notify QE and docs with any new features and important bugfixes that need testing or documentation.
(Using separate cards for each driver because these updates can be more complicated)
Update all CSI sidecars to the latest upstream release.
This includes update of VolumeSnapshot CRDs in https://github.com/openshift/cluster-csi-snapshot-controller-operator/tree/master/assets
Update the driver to the latest upstream release. Notify QE and docs with any new features and important bugfixes that need testing or documentation.
(Using separate cards for each driver because these updates can be more complicated)
Update the driver to the latest upstream release. Notify QE and docs with any new features and important bugfixes that need testing or documentation.
This includes ibm-vpc-node-label-updater!
(Using separate cards for each driver because these updates can be more complicated)
Update the driver to the latest upstream release. Notify QE and docs with any new features and important bugfixes that need testing or documentation.
(Using separate cards for each driver because these updates can be more complicated)
Update all OCP and kubernetes libraries in storage operators to the appropriate version for OCP release.
This includes (but is not limited to):
Operators:
Update the driver to the latest upstream release. Notify QE and docs with any new features and important bugfixes that need testing or documentation.
(Using separate cards for each driver because these updates can be more complicated)
Update the driver to the latest upstream release. Notify QE and docs with any new features and important bugfixes that need testing or documentation.
(Using separate cards for each driver because these updates can be more complicated)
There is a new driver release 5.0.0 since the last rebase that includes snapshot support:
https://github.com/kubernetes-sigs/ibm-vpc-block-csi-driver/releases/tag/v5.0.0
Rebase the driver on v5.0.0 and update the deployments in ibm-vpc-block-csi-driver-operator.
There are no corresponding changes in ibm-vpc-node-label-updater since the last rebase.
The end of general support for vSphere 6.7 will be on October 15, 2022, so vSphere 6.7 will be deprecated in 4.11.
We want to encourage vSphere customers to upgrade to vSphere 7 in OCP 4.11 since VMware is EOLing (general support) for vSphere 6.7 in Oct 2022.
We want to set the cluster to Upgradeable=false and have a strong alert pointing to our docs / requirements.
related slack: https://coreos.slack.com/archives/CH06KMDRV/p1647541493096729
This Epic tracks the GA of this feature
Epic Goal
On new installations, we should make the StorageClass created by the CSI operator the default one.
However, we shouldn't do that in an upgrade scenario. The main reason is that users might have set a different quota on the CSI driver StorageClass.
Exit criteria:
On new installations, we should make the StorageClass created by the CSI operator the default one.
However, we shouldn't do that in an upgrade scenario. The main reason is that users might have set a different quota on the CSI driver StorageClass.
Exit criteria:
OCP/Telco Definition of Done
Epic Template descriptions and documentation.
Rebase openshift-controller-manager to k8s 1.24
4.11 MVP Requirements
Out of scope use cases (that are part of the Kubeframe/factory project):
Questions to be addressed:
As an OpenShift infrastructure owner, I want to deploy a cluster zero with RHACM or MCE and have the required components installed when the installation is completed
BILLI makes it easier to deploy a cluster zero. BILLI users know at installation time what the purpose of their cluster is when they plan the installation. Day-2 steps are currently necessary to install operators, and users, especially when automating installations, want to finish the installation flow with their required components already installed.
As a customer, I want to be able to:
so that I can achieve
Description of criteria:
We are only allowing the user to provide extra manifests to install MCE at this time. We are not adding an option to "install mce" on the command line (or UI)
This requires/does not require a design proposal.
This requires/does not require a feature gate.
As a customer, I want to be able to:
so that I can achieve
Description of criteria:
We are only allowing the user to provide extra manifests to install MCE at this time. We are not adding an option to "install mce" on the command line (or UI)
This requires/does not require a design proposal.
This requires/does not require a feature gate.
Set the ClusterDeployment CRD to deploy OpenShift in FIPS mode and make sure that after deployment the cluster is set in that mode
In order to install FIPS-compliant clusters, we need to make sure that installconfig + agentconfig based deployments take into account the FIPS config in installconfig.
This task is about passing the config to agentclusterinstall so it makes it into the iso. Once there, AGENT-374 will give it to assisted service
As an OpenShift infrastructure owner, I want to deploy OpenShift clusters with dual-stack IPv4/IPv6
As an OpenShift infrastructure owner, I want to deploy OpenShift clusters with single-stack IPv6
IPv6 and dual-stack clusters are often requested by customers, especially Telco customers. Working with dual-stack clusters is a requirement for many, but it is also a transition into single-stack IPv6 clusters, which for some of our users is the final destination.
Karim's work proving how agent-based can deploy IPv6: IPv6 deploy with agent based installer
For dual-stack installations the agent-cluster-install.yaml must have both an IPv4 and an IPv6 subnet in networking.MachineNetwork, or assisted-service will throw an error. This field is in InstallConfig but it must be added to agent-cluster-install in its Generate().
For IPv4 and IPv6 installs, setting up the MachineNetwork is not needed, but it also does not cause problems if it is set, so it should be fine to set it at all times.
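A minimal sketch of what the generated file should end up containing for dual-stack, assuming example CIDRs (field names follow the AgentClusterInstall networking spec):
```shell
# Expected fragment of agent-cluster-install.yaml for dual-stack (example CIDRs):
#
#   networking:
#     machineNetwork:
#     - cidr: 192.168.111.0/24
#     - cidr: fd2e:6f44:5dd8:c956::/120
#
# Sanity-check that both address families made it into the generated file:
grep -A 4 'machineNetwork' agent-cluster-install.yaml
```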
As a user I would like to see all the events that the autoscaler creates, even duplicates. Having the CAO set this flag will allow me to continue to see these events.
We have carried a patch for the autoscaler that would enable the duplication of events. This patch can now be dropped because the upstream added a flag for this behavior in https://github.com/kubernetes/autoscaler/pull/4921
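A hedged way to verify the flag ends up on the autoscaler once the CAO sets it (the deployment name and the --record-duplicated-events flag name are assumptions, taken from the default CAO naming and the upstream PR above):
```shell
# Check the rendered autoscaler arguments for the event-duplication flag
oc -n openshift-machine-api get deployment cluster-autoscaler-default \
  -o jsonpath='{.spec.template.spec.containers[0].args}' \
  | tr ',' '\n' | grep record-duplicated-events
```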
Add GA support for deploying OpenShift to IBM Public Cloud
Complete the existing gaps to make OpenShift on IBM Cloud VPC (Next Gen2) General Available
This epic tracks the changes needed to the ingress operator to support IBM DNS Services for private clusters.
Currently in OpenShift we do not support distributing hotfix packages to cluster nodes. In time-sensitive situations, a RHEL hotfix package can be the quickest route to resolving an issue.
Before we ship OCP CoreOS layering in https://issues.redhat.com/browse/MCO-165 we need to switch the format of what is currently `machine-os-content` to be the new base image.
The overall plan is:
As an OCP CoreOS layering developer, having telemetry data about the number of clusters using osImageURL will help us understand how broadly this feature is being used and improve it accordingly.
Acceptance Criteria:
After https://github.com/openshift/os/pull/763 is in the release image, teach the MCO how to use it. This is basically:
Assumption
Doc: https://docs.google.com/document/d/1sXCaRt3PE0iFmq7ei0Yb1svqzY9bygR5IprjgioRkjc/edit
Customers do not pay Red Hat more to run HyperShift control planes and supporting infrastructure than Standalone control planes and supporting infrastructure.
Assumption
Run cluster-storage-operator (CSO) + AWS EBS CSI driver operator + AWS EBS CSI driver control-plane Pods in the management cluster, run the driver DaemonSet in the hosted cluster.
More information here: https://docs.google.com/document/d/1sXCaRt3PE0iFmq7ei0Yb1svqzY9bygR5IprjgioRkjc/edit
As a HyperShift Cluster Instance Admin, I want to run cluster-storage-operator (CSO) in the management cluster, so the guest cluster runs just my applications.
Exit criteria:
As a HyperShift Cluster Instance Admin, I want to run the AWS EBS CSI driver operator + the control plane of the CSI driver in the management cluster, so the guest cluster runs just my applications.
Exit criteria:
As an OCP support engineer, I want the same guest cluster storage-related objects in the output of "hypershift dump cluster --dump-guest-cluster" as in "oc adm must-gather", so I can debug storage issues easily.
must-gather collects: storageclasses, persistentvolumes, volumeattachments, csidrivers, csinodes, volumesnapshotclasses, volumesnapshotcontents
hypershift collects none of this, the relevant code is here: https://github.com/openshift/hypershift/blob/bcfade6676f3c344b48144de9e7a36f9b40d3330/cmd/cluster/core/dump.go#L276
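For reference, the two commands being compared might be invoked like this (all flags other than --dump-guest-cluster are assumptions; adjust names and paths to your environment):
```shell
# Gather guest-cluster storage objects the standard way
oc adm must-gather

# Dump the hosted cluster, including guest-cluster resources
hypershift dump cluster \
  --name my-hosted-cluster \
  --namespace clusters \
  --dump-guest-cluster \
  --artifact-dir ./cluster-dump
```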
Exit criteria:
CNCC was moved to the management cluster and it should use proxy settings defined for the management cluster.
Customers do not pay Red Hat more to run HyperShift control planes and supporting infrastructure than Standalone control planes and supporting infrastructure.
Assumption
cluster-snapshot-controller-operator is running on the control plane (CP).
More information here: https://docs.google.com/document/d/1sXCaRt3PE0iFmq7ei0Yb1svqzY9bygR5IprjgioRkjc/edit
As an OpenShift developer, I want cluster-csi-snapshot-controller-operator to use existing controllers in library-go, so I don’t need to maintain yet more code that does the same thing as library-go.
Note: if this refactoring introduces any new conditions, we must make sure that 4.11 snapshot controller clears them to support downgrade! This will need 4.11 BZ + z-stream update!
Similarly, if some conditions become obsolete / not managed by any controller, they must be cleared by 4.12 operator.
Exit criteria:
As a HyperShift Cluster Instance Admin, I want to run cluster-csi-snapshot-controller-operator in the management cluster, so the guest cluster runs just my applications.
Exit criteria:
When this image was assembled, these features were not yet completed. Therefore, only the Jira Cards included here are part of this release
We have a set of images
that should become multiarch images. This should be done both in upstream and downstream.
As a reference, we have built internally those images as multiarch and made them available as
They can be consumed by the Assisted Service pod via the following env vars:
- name: AGENT_DOCKER_IMAGE
  value: registry.redhat.io/rhai-tech-preview/assisted-installer-agent-rhel8:latest
- name: CONTROLLER_IMAGE
  value: registry.redhat.io/rhai-tech-preview/assisted-installer-reporter-rhel8:latest
- name: INSTALLER_IMAGE
  value: registry.redhat.io/rhai-tech-preview/assisted-installer-rhel8:latest
OLM would have to support a mechanism like podAffinity which allows multiple architecture values to be specified, enabling it to pin operators to worker nodes of a matching architecture.
Ref: https://github.com/openshift/enhancements/pull/1014
Cut a new release of the OLM API and update the OLM API dependency version (go.mod) in the OLM package; then bring the upstream changes from OLM-2674 to the downstream olm repo.
A/C:
- New OLM API version release
- OLM API dependency updated in OLM Project
- OLM Subscription API changes downstreamed
- OLM Controller changes downstreamed
- Changes manually tested on Cluster Bot
We drive OpenShift cross-market customer success and new customer adoption with constant improvements and feature additions to the existing capabilities of our OpenShift Core Networking (SDN and Network Edge). This feature captures that natural progression of the product.
There are definitely grey areas, but in general:
Questions to be addressed:
Goal: Provide queryable metrics and telemetry for cluster routes and sharding in an OpenShift cluster.
Problem: Today we test OpenShift performance and scale with best-guess or anecdotal evidence for the number of routes that our customers use. The best practice for a large number of routes in a cluster is to shard; however, we have no visibility into whether and how customers are using sharding.
Why is this important? These metrics will inform our performance and scale testing, documented cluster limits, and how customers are using sharding for best practice deployments.
Dependencies (internal and external):
Prioritized epics + deliverables (in scope / not in scope):
Not in scope:
Estimate (XS, S, M, L, XL, XXL):
Previous Work:
Open questions:
Acceptance criteria:
Epic Done Checklist:
Description:
As described in the Metrics to be sent via telemetry section of the Design Doc, the following metrics need to be sent from the OpenShift cluster to Red Hat premises:
The metrics should be allowlisted on the cluster side.
The steps described in Sending metrics via telemetry need to be followed, specifically step 5.
Depends on CFE-478.
Acceptance Criteria:
Description:
As described in the Design Doc, the following information needs to be exported from the Cluster Ingress Operator:
Design 2 will be implemented as part of this story.
Acceptance Criteria:
This is an epic bucket for all activities surrounding the creation of a declarative approach to releasing and maintaining OLM catalogs.
When working on this Epic, it's important to keep in mind this other potentially related Epic: https://issues.redhat.com/browse/OLM-2276
Enhance the veneer rendering to be able to read the input veneer data from stdin, via a pipe, in a manner similar to https://dev.to/napicella/linux-pipes-in-golang-2e8j
The command could then be used in a manner similar to many k8s examples, like
```shell
opm alpha render-veneer semver -o yaml < infile > outfile
```
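And fed through an explicit pipe, which is what this story adds (same hypothetical file names as above):
```shell
cat infile | opm alpha render-veneer semver -o yaml > outfile
```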
Upstream issue link: https://github.com/operator-framework/operator-registry/issues/1011
Jira Description
As an OPM maintainer, I want to downstream the PR for (OCP 4.12) and backport it to OCP 4.11 so that IIB will NOT be impacted by the changes when it upgrades the OPM version to use the next/future opm upstream release (v1.25.0).
Summary / Background
IIB (the downstream service that manages the indexes) uses the upstream version. If they bump the OPM version to the next/future (v1.25.0) release with this change before having the downstream images updated, then the process to manage the indexes downstream will face issues and it will impact the distributions.
Acceptance Criteria
Definition of Ready
Definition of Done
tldr: three basic claims, the rest is explanation and one example
While bugs are an important metric, fixing bugs is different from investing in maintainability and debuggability. Investing in fixing bugs will help alleviate immediate problems, but doesn't improve the ability to address future problems. You (may) get a code base with fewer bugs, but when you add a new feature, it will still be hard to debug problems and interactions. This pushes a code base towards stagnation where it gets harder and harder to add features.
One alternative is to ask teams to produce ideas for how they would improve future maintainability and debuggability instead of focusing on immediate bugs. This would produce designs that make problem determination, bug resolution, and future feature additions faster over time.
I have a concrete example of one such outcome of focusing on bugs vs quality. We have resolved many bugs about communication failures with ingress by finding problems with point-to-point network communication. We have fixed the individual bugs, but have not improved the code for future debugging. In so doing, we chase many hard-to-diagnose problems across the stack. The alternative is to create a point-to-point network connectivity capability. This would immediately improve bug resolution and stability (detection) for kuryr, ovs, legacy sdn, network-edge, kube-apiserver, openshift-apiserver, authentication, and console. Bug fixing does not produce the same impact.
We need more investment in our future selves. Saying, "teams should reserve this" doesn't seem to be universally effective. Perhaps an approach that directly asks for designs and impacts and then follows up by placing the items directly in planning and prioritizing against PM feature requests would give teams the confidence to invest in these areas and give broad exposure to systemic problems.
Relevant links:
Epic Template descriptions and documentation.
Enable the chaos plugin https://coredns.io/plugins/chaos/ in our CoreDNS configuration so that we can use a DNS query to easily identify what DNS pods are responding to our requests.
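For illustration, a query like the following (the service IP is a placeholder) would then reveal which CoreDNS pod answered:
```shell
# The chaos plugin answers CH-class TXT queries such as hostname.bind
dig @172.30.0.10 hostname.bind txt ch +short
```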
Requirement | Notes | isMvp? |
---|---|---|
CI - MUST be running successfully with test automation | This is a requirement for ALL features. | YES |
Release Technical Enablement | Provide necessary release enablement details and documents. | YES |
This Section:
This Section: What does the person writing code, testing, documenting need to know? What context can be provided to frame this feature.
Questions to be addressed:
As a console user I want to have option to:
For Deployments we will add the 'Restart rollout' action button. This action will PATCH the Deployment object's 'spec.template.metadata.annotations' block, by adding 'openshift.io/restartedAt: <actual-timestamp>' annotation. This will restart the deployment, by creating a new ReplicaSet.
For DeploymentConfig we will add 'Retry rollout' action button. This action will PATCH the latest revision of ReplicationController object's 'metadata.annotations' block by setting 'openshift.io/deployment/phase: "New"' and removing openshift.io/deployment.cancelled and openshift.io/deployment.status-reason.
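A rough CLI equivalent of the Deployment case, for orientation only (resource name and timestamp are placeholders; the console performs the PATCH itself):
```shell
# CLI command the console action mirrors (also cited in the background below)
oc rollout restart deploy/<deployment-name>

# Roughly what the console's 'Restart rollout' PATCH would do
oc patch deploy/<deployment-name> --type=merge \
  -p '{"spec":{"template":{"metadata":{"annotations":{"openshift.io/restartedAt":"2023-01-01T00:00:00Z"}}}}}'
```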
Acceptance Criteria:
BACKGROUND:
OpenShift console will be updated to allow rollout restart deployment from the console itself.
Currently, from the OpenShift console, for the "deploymentconfigs" resource we can only start and pause the rollout, and for the "deployment" resource we can only resume the rollout. Neither resource (deployment or deployment config) has an option to restart the rollout. That is why the customer wants to be able to perform the same action from the OpenShift console as from the CLI.
The customer wants developers who are not fluent with the oc tool and terminal utilities to be able to use the console instead of the terminal to restart a deployment, just as they would through the CLI with the command “oc rollout restart deploy/<deployment-name>”.
Usually when developers change the config map that deployment uses they have to restart pods. Currently, the developers have to use the oc rollout restart deployment command. The customer wants the functionality to get this button/menu to perform the same action from the console as well.
Design
Doc: https://docs.google.com/document/d/1i-jGtQGaA0OI4CYh8DH5BBIVbocIu_dxNt3vwWmPZdw/edit
As a developer, I want to make status.HostIP for Pods visible in the Pod details page of the OCP Web Console. Currently there is no way to view the node IP for a Pod in the OpenShift Web Console. When viewing a Pod in the console, the field status.HostIP is not visible.
Acceptance criteria:
When OCP is performing a cluster upgrade, the user should be notified about this fact.
There are two possibilities for how to surface the cluster upgrade to users:
AC:
Note: We need to decide if we want to distinguish this particular notification by a different color. cc'ing Ali Mobrem
Created from: https://issues.redhat.com/browse/RFE-3024
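One signal the console could key off, shown here with the CLI for illustration (the ClusterVersion resource is named "version"):
```shell
# The Progressing condition describes an upgrade that is currently running
oc get clusterversion version \
  -o jsonpath='{.status.conditions[?(@.type=="Progressing")].message}'
```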
oc-mirror is a GA product as of OpenShift 4.11.
The goal of this feature is to solve any future customer requests for new features or capabilities in oc-mirror.
Pre-Work Objectives
Since some of our requirements from the ACM team will not be available for the 4.12 timeframe, the team should work on anything we can get done in the scope of the console repo so that when the required items are available in 4.13, we can be more nimble in delivering GA content for the Unified Console Epic.
Overall GA Key Objective
Providing our customers with a single, simplified user experience (Hybrid Cloud Console) that is extensible, can run locally or in the cloud, and is capable of managing the fleet as well as deep diving into a single cluster.
Why do customers want this?
Why do we want this?
Phase 2 Goal: Productization of the united Console
As a developer I would like to disable clusters like *KS that we can't support for multi-cluster (for instance because we can't authenticate). The ManagedCluster resource has a vendor label that we can use to know if the cluster is supported.
cc Ali Mobrem Sho Weimer Jakub Hadvig
UPDATE (9/20/22): we want an allow-list with OpenShift, ROSA, ARO, ROKS, and OpenShift Dedicated
Acceptance criteria:
RHEL CoreOS should be updated to RHEL 9.2 sources to take advantage of newer features, hardware support, and performance improvements.
Requirement | Notes | isMvp? |
---|---|---|
CI - MUST be running successfully with test automation | This is a requirement for ALL features. | YES |
Release Technical Enablement | Provide necessary release enablement details and documents. | YES |
Questions to be addressed:
PROBLEM
We would like to improve our signal for RHEL9 readiness by increasing internal engineering engagement and external partner engagement on our community OpenShift offering, OKD.
PROPOSAL
Adding OKD to run on SCOS (CentOS Stream CoreOS) brings the community offering closer to what a partner or an internal engineering team might expect on OCP.
ACCEPTANCE CRITERIA
Image has been switched/included:
DEPENDENCIES
The SCOS build payload.
RELATED RESOURCES
OKD+SCOS proposal: https://docs.google.com/presentation/d/1_Xa9Z4tSqB7U2No7WA0KXb3lDIngNaQpS504ZLrCmg8/edit#slide=id.p
OKD+SCOS work draft: https://docs.google.com/document/d/1cuWOXhATexNLWGKLjaOcVF4V95JJjP1E3UmQ2kDVzsA/edit
Acceptance Criteria
A stable OKD on SCOS is built and available to the community every sprint.
This comes up when installing ipi-on-aws on arm64 with the custom payload build at quay.io/aleskandrox/okd-release:4.12.0-0.okd-centos9-full-rebuild-arm64 that is using SCOS as the machine-os-content image.
```
[root@ip-10-0-135-176 core]# crictl logs c483c92e118d8
2022-08-11T12:19:39+00:00 [cnibincopy] FATAL ERROR: Unsupported OS ID=scos
```
The probable fix has to land on https://github.com/openshift/cluster-network-operator/blob/master/bindata/network/multus/multus.yaml#L41-L53
HyperShift came to life to serve multiple goals: some are the main near-term goals, and some are secondary goals that serve us well long-term.
HyperShift opens up doors to penetrate the market. HyperShift enables true hybrid (CP and Workers decoupled, mixed IaaS, mixed Arch,...). An architecture that opens up more options to target new opportunities in the cloud space. For more details on this one check: Hosted Control Planes (aka HyperShift) Strategy [Live Document]
To bring hosted control planes to our customers, we need the means to ship it. Today MCE is how HyperShift is shipped and installed so that customers can use it. There are two main customers for hosted control planes:
As you may have noticed, MCE is the delivery mechanism for both management models. The difference between managed and self-managed is the consumer persona. For self-managed, it's the customer SRE; for managed, it's the RH SRE.
For us to ship HyperShift in the product (as hosted control planes) in either management model, there is a necessary readiness checklist that we need to satisfy. Below are the high-level requirements needed before GA:
Please also have a look at our What are we missing in Core HyperShift for GA Readiness? doc.
Multi-cluster is becoming an industry need today not because this is where the trend is going but because it's the only viable path today to solve many of our customers' use cases. Below is some reasoning why multi-cluster is a NEED:
As a result, multi-cluster management is a defining category in the market where Red Hat plays a key role. Today Red Hat solves for multi-cluster via RHACM and MCE. The goal is to simplify fleet management complexity by providing a single pane of glass to observe, secure, police, govern, configure a fleet. I.e., the operand is no longer one cluster but a set, a fleet of clusters.
HyperShift's logically centralized architecture, as well as its native separation of concerns and superior cluster lifecycle management experience, makes it a great fit as the foundation of our multi-cluster management story.
Thus the following stories are important for HyperShift:
Refs:
HyperShift is the core engine that will be used to provide hosted control-planes for consumption in managed and self-managed.
Main user story: When life cycling clusters as a cluster service consumer via HyperShift core APIs, I want to use a stable/backward compatible API that is less susceptible to future changes so I can provide availability guarantees.
Ref: What are we missing in Core HyperShift for GA Readiness?
Customers do not pay Red Hat more to run HyperShift control planes and supporting infrastructure than Standalone control planes and supporting infrastructure.
Assumptions:
HyperShift - proposed cuts from data plane
When operating OpenShift clusters (for any OpenShift form factor) from MCE/ACM/OCM/CLI as a Cluster Service Consumer (RH managed SRE, or self-manage SRE/admin) I want to be able to migrate CPs from one hosting service cluster to another:
More information:
To understand usage patterns and inform our decision making for the product, we need to be able to measure adoption and assess usage.
See Hosted Control Planes (aka HyperShift) Strategy [Live Document]
Whether it's managed or self-managed, it’s pertinent to report health metrics to be able to create meaningful Service Level Objectives (SLOs) and alert on failures to meet our availability guarantees. This is especially important for our managed services path.
https://issues.redhat.com/browse/OCPPLAN-8901
HyperShift for managed services is a strategic company goal as it improves usability, feature, and cost competitiveness against other managed solutions, and because managed services/consumption-based cloud services is where we see the market growing (customers are looking to delegate platform overhead).
We should make sure our SD milestones are unblocked by the core team.
This feature reflects HyperShift core readiness to be consumed. When all related EPICs and stories in this EPIC are complete HyperShift can be considered ready to be consumed in GA form. This does not describe a date but rather the readiness of core HyperShift to be consumed in GA form NOT the GA itself.
- GA date for self-managed will be factoring in other inputs such as adoption, customer interest/commitment, and other factors.
- GA dates for ROSA-HyperShift are on track, tracked in milestones M1-7 (have a look at https://issues.redhat.com/browse/OCPPLAN-5771)
Epic Goal*
The goal is to split client certificate trust chains from the global Hypershift root CA.
Why is this important? (mandatory)
This is important to:
Scenarios (mandatory)
Provide details for user scenarios including actions to be performed, platform specifications, and user personas.
Dependencies (internal and external) (mandatory)
Hypershift team needs to provide us with code reviews and merge the changes we are to deliver
Contributing Teams(and contacts) (mandatory)
Acceptance Criteria (optional)
The serviceaccount CA bundle automatically injected into all pods cannot be used to authenticate any client certificate generated by the control-plane.
Drawbacks or Risk (optional)
Risk: there is significant time pressure, as this should be delivered before the first stable Hypershift release.
Done - Checklist (mandatory)
AUTH-311 introduced an enhancement. Implement the signer separation described there.
The MCO should properly report its state in a way that's consistent and able to be understood by customers, troubleshooters, and maintainers alike.
Some customer cases have revealed scenarios where the MCO state reporting is misleading and therefore could be unreliable to base decisions and automation on.
In addition to correcting some incorrect states, the MCO will be enhanced for a more granular view of update rollouts across machines.
The MCO should properly report its state in a way that's consistent and able to be understood by customers, troubleshooters, and maintainers alike.
For this epic, "state" means "what is the MCO doing?" – so the goal here is to try to make sure that it's always known what the MCO is doing.
This includes:
While this probably crosses a little bit into the "status" portion of certain MCO objects, as some state is definitely recorded there, this probably shouldn't turn into a "better status reporting" epic. I'm interpreting "status" to mean "how is it going" so status is maybe a "detail attached to a state".
Exploration here: https://docs.google.com/document/d/1j6Qea98aVP12kzmPbR_3Y-3-meJQBf0_K6HxZOkzbNk/edit?usp=sharing
https://docs.google.com/document/d/17qYml7CETIaDmcEO-6OGQGNO0d7HtfyU7W4OMA6kTeM/edit?usp=sharing
The current property description is:
configuration represents the current MachineConfig object for the machine config pool.
But in a 4.12.0-ec.4 cluster, the actual semantics seem to be something closer to "the most recent rendered config that we completely leveled on". We should at least update the godocs to be more specific about the intended semantics. And perhaps consider adjusting the semantics?
This section includes Jira cards that are linked to an Epic, but the Epic itself is not linked to any Feature. These epics were completed when this image was assembled
This is epic tracks "business as usual" requirements / enhancements / bug fixing of Insights Operator.
Today the links point at a rule-scoped page, but that page lacks information about recommended resolution. You can click through by cluster ID to your specific cluster and get that recommendation advice, but it would be more convenient and less confusing for customers if we linked directly to the cluster-scoped recommendation page.
We can implement by updating the template here to be:
fmt.Sprintf("https://console.redhat.com/openshift/insights/advisor/clusters/%s?first=%s%%7C%s", clusterID, ruleIDStr, rec.ErrorKey)
or something like that.
unknowns
request is clear, solution/implementation to be further clarified
This story only covers API components. We will create a separate story for other utility functions.
Today we are generating documentation for Console's Dynamic Plugin SDK in
frontend/packages/dynamic-plugin-sdk. We are missing ts-doc for a set of hooks and components.
We are generating the markdown from the dynamic-plugin-sdk using
yarn generate-doc
Here is the list of the API that the dynamic-plugin-sdk is exposing:
https://gist.github.com/spadgett/0ddefd7ab575940334429200f4f7219a
Acceptance Criteria:
Out of Scope:
The console has good error boundary components that are useful for dynamic plugin.
Exposing them will enable the plugins to get the same look and feel for handling React errors as the console.
The minimum requirement right now is to expose the ErrorBoundaryFallbackPage component from
https://github.com/openshift/console/blob/master/frontend/packages/console-shared/src/components/error/fallbacks/ErrorBoundaryFallbackPage.tsx
Currently the ConsolePlugins API version is v1alpha1. Since we are going GA with dynamic plugins we should be creating a v1 version.
This would require updates in following repositories:
AC:
NOTE: This story does not include the conversion webhook change which will be created as a follow on story
Following https://coreos.slack.com/archives/C011BL0FEKZ/p1650640804532309, it would be useful for us (network observability team) to have access to ResourceIcon in dynamic-plugin-sdk.
Currently ResourceLink is exported but not ResourceIcon
AC:
We should have a global notification or the `Console plugins` page (e.g., k8s/cluster/operator.openshift.io~v1~Console/cluster/console-plugins) should alert users when console operator `spec.managementState` is `Unmanaged` as changes to `enabled` for plugins will have no effect.
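For orientation, the state the warning would key off can be checked from the CLI like this:
```shell
# When this returns "Unmanaged", changes to plugin `enabled` flags have no effect
oc get console.operator.openshift.io cluster -o jsonpath='{.spec.managementState}'
```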
Based on API review CONSOLE-3145, we have decided to deprecate the following APIs:
cc Andrew Ballantyne Bryan Florkiewicz
Currently our `api.md` does not generate docs with "tags" (aka `@deprecated`) – we'll need to add that functionality to the `generate-doc.ts` script. See the code that works for `console-extensions.md`
when defining two proxy endpoints,
apiVersion: console.openshift.io/v1alpha1
kind: ConsolePlugin
metadata:
  ...
  name: forklift-console-plugin
spec:
  displayName: Console Plugin Template
  proxy:
    service:
      basePath: /
I get two proxy endpoints
/api/proxy/plugin/forklift-console-plugin/forklift-inventory
and
/api/proxy/plugin/forklift-console-plugin/forklift-must-gather-api
but both proxy to the `forklift-must-gather-api` service
e.g.
curl to:
[server url]/api/proxy/plugin/forklift-console-plugin/forklift-inventory
will point to the `forklift-must-gather-api` service, instead of the `forklift-inventory` service
Acceptance Criteria: Add missing api docs for *Icon and *Status components in the API docs
To align with https://github.com/openshift/dynamic-plugin-sdk, plugin metadata field dependencies as well as the @console/pluginAPI entry contained within should be made optional.
If a plugin doesn't declare the @console/pluginAPI dependency, the Console release version check should be skipped for that plugin.
`@openshift-console/plugin-shared` (NPM) is a package that will contain shared components that can be upversioned separately by the Plugins so they can keep core compatibility low but upversion and support more shared components as we need them.
This isn't documented today. We need to do that.
During the development of https://issues.redhat.com/browse/CONSOLE-3062, it was determined additional information is needed in order to assist a user when troubleshooting a Failed plugin (see https://github.com/openshift/console/pull/11664#issuecomment-1159024959). As it stands today, there is no data available to the console to relay to the user regarding why the plugin Failed. Presumably, a message should be added to NotLoadedDynamicPlugin to address this gap.
AC: Add `message` property to NotLoadedDynamicPluginInfo type.
Move `frontend/public/components/nav` to `packages/console-app/src/components/nav` and address any issues resulting from the move.
There will be some expected lint errors relating to cyclical imports. These will require some refactoring to address.
We neither use nor support static plugin nav extensions anymore so we should remove the API in the static plugin SDK and get rid of related cruft in our current nav components.
AC: Remove static plugin nav extensions code. Check the navigation code for any references to the old API.
The extension `console.dashboards/overview/detail/item` doesn't constrain the content to fit the card.
The details-card has an expectation that a <dd> item will be the last item (for spacing between items). Our static details-card items use a component called 'OverviewDetailItem'. This isn't enforced in the extension and can cause undesired padding issues if they just do whatever they want.
I feel our approach here should be making the extension take the props of 'OverviewDetailItem' where 'children' is the new 'component'.
This enhancement Introduces support for provisioning and upgrading heterogenous architecture clusters in phases.
We need to scan through the compute nodes and build a set of supported architectures from those. Each node in the cluster has a label for its architecture, e.g. `kubernetes.io/arch:arm64`, `kubernetes.io/arch:amd64`, etc. Based on the set of supported architectures, the console will need to surface only those operators in the Operator Hub which are supported on our nodes. Each operator's PackageManifest contains labels that indicate the operator's supported architectures, e.g. `operatorframework.io/arch.s390x: supported`. An operator can be supported on multiple architectures.
AC:
OS and arch filtering: https://github.com/openshift/console/blob/2ad4e17d76acbe72171407fc1c66ca4596c8aac4/frontend/packages/operator-lifecycle-manager/src/components/operator-hub/operator-hub-items.tsx#L49-L86
@jpoulin is good to ask about heterogeneous clusters.
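A hedged sketch of how the two inputs described above could be inspected from the CLI (label keys follow the examples in the description; whether PackageManifest supports this exact selector is an assumption):
```shell
# Architectures present on the cluster's nodes
oc get nodes -L kubernetes.io/arch --no-headers | awk '{print $NF}' | sort -u

# Operators whose PackageManifest declares support for one of those architectures
oc get packagemanifests -n openshift-marketplace \
  -l operatorframework.io/arch.arm64=supported
```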
This enhancement Introduces support for provisioning and upgrading heterogenous architecture clusters in phases.
We need to scan through the compute nodes and build a set of supported architectures from those. Each node on the cluster has a label for architecture: e.g. kubernetes.io/arch=arm64, kubernetes.io/arch=amd64 etc. Based on the set of supported architectures console will need to surface only those operators in the Operator Hub, which are supported on our Nodes.
AC:
@jpoulin is good to ask about heterogeneous clusters.
An epic we can duplicate for each release to ensure we have a place to catch things we ought to be doing regularly but can tend to fall by the wayside.
As a developer, I want to be able to clean up the css markup after making the css / scss changes required for dark mode and remove any old unused css / scss content.
Acceptance criteria:
As a user, I want to be able to:
so that I can achieve
Description of criteria:
Detail about what is specifically not being delivered in the story
1. Proposed title of this feature request
Basic authentication for Helm Chart repository in helmchartrepositories.helm.openshift.io CRD.
2. What is the nature and description of the request?
As of v4.6.9, the HelmChartRepository CRD only supports client TLS authentication through spec.connectionConfig.tlsClientConfig.
3. Why do you need this? (List the business requirements here)
Basic authentication is widely used by many chart repository managers (Nexus OSS, Artifactory, etc.).
The Helm CLI also supports it with the helm repo add command.
https://helm.sh/docs/helm/helm_repo_add/
4. How would you like to achieve this? (List the functional requirements here)
Probably by extending the CRD:
spec:
  connectionConfig:
    username: username
    password:
      secretName: secret-name
The secret namespace should be openshift-config to align with the tlsClientConfig behavior.
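A minimal sketch of the referenced secret, assuming a hypothetical name and a single password key:
```shell
# Secret referenced by spec.connectionConfig.password.secretName, created in
# openshift-config to mirror the tlsClientConfig behavior
oc create secret generic helm-repo-password \
  -n openshift-config \
  --from-literal=password=<password>
```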
5. For each functional requirement listed in question 4, specify how Red Hat and the customer can test to confirm the requirement is successfully implemented.
Trying to pull helm charts from remote private chart repositories that have disabled anonymous access and offer basic authentication.
E.g.: https://github.com/sonatype/docker-nexus
As an OCP user, I would like to be able to install helm charts from repos added to ODC with basic authentication fields populated.
We need to support helm installs for Repos that have the basic authentication secret name and namespace.
Updating the ProjectHelmChartRepository CRD, already done in diff story
Supporting the HelmChartRepository CR, this feature will be scoped first to project/namespace scope repos.
<Defines what is included in this story>
If the new fields for basic auth are set in the repo CR, then use those credentials when making API calls to helm to install/upgrade charts. We will error out if the logged-in user does not have access to the secret referenced by the repo CR. If the basic auth fields are not present, we assume it is not an authenticated repo.
None
NA
I can list, install and update charts on authenticated repos from ODC
Needs Documentation both upstream and downstream
Needs new unit test covering repo auth
Dependencies identified
Blockers noted and expected delivery timelines set
Design is implementable
Acceptance criteria agreed upon
Story estimated
Unknown
Verified
Unsatisfied
ACCEPTANCE CRITERIA
NOTES
ACCEPTANCE CRITERIA
NOTES
This is a follow up Epic to https://issues.redhat.com/browse/MCO-144, which aimed to get in-place upgrades for Hypershift. This epic aims to capture additional work to focus on using CoreOS/OCP layering into Hypershift, which has benefits such as:
- removing or reducing the need for ignition
- maintaining feature parity between self-driving and managed OCP models
- adding additional functionality such as hotfixes
Currently not implemented, and will require the MCD hypershift mode to be adjusted to handle disruptionless upgrades like regular MCD
Right now in https://github.com/openshift/hypershift/pull/1258 you can only perform one upgrade at a time. Multiple upgrades will break due to controller logic
Properly create logic to handle manifest creation/updates and deletion, so the logic is more bulletproof
We plan to build Ironic Container Images using RHEL9 as base image in OCP 4.12
This is required because the ironic components have abandoned support for CentOS Stream 8 and Python 3.6/3.7 upstream during the most recent development cycle that will produce the stable Zed release, in favor of CentOS Stream 9 and Python 3.8/3.9
More info on RHEL8 to RHEL9 transition in OCP can be found at https://docs.google.com/document/d/1N8KyDY7KmgUYA9EOtDDQolebz0qi3nhT20IOn4D-xS4
update ironic software to pick up latest bug fixes
This is an API change and we will consider this as a feature request.
https://issues.redhat.com/browse/NE-799 Please check this for more details
https://issues.redhat.com/browse/NE-799 Please check this for more details
No
N/A
Make sure that the CSI driver automatically updates oVirt credentials when they are updated in OpenShift.
In the CSI driver operator we should add the withSecretHashAnnotation call from library-go, like this: https://github.com/openshift/aws-ebs-csi-driver-operator/blob/53ed27b2a0eaa655338da180a79897855b366ac7/pkg/operator/starter.go#L138
We need tests for the ovirt-csi-driver and the cluster-api-provider-ovirt. These tests help us to
Also, having dedicated tests on lower levels with a smaller scope (unit, integration, ...) has the following benefits:
Integration tests need to be implemented according to https://cluster-api.sigs.k8s.io/developer/testing.html#integration-tests using envtest.
As a user, I would like to be informed in an intuitive way, when quotas have been reached in a namespace
Refer below for more details
As a user, In the topology view, I would like to be updated intuitively if any of the deployments have reached quota limits
Refer below for more details
Provide a form driven experience to allow cluster admins to manage the perspectives to meet the ACs below.
We have heard the following requests from customers and developer advocates:
As an admin, I should be able to see a code snippet that shows how to add user perspectives
Based on the https://issues.redhat.com/browse/ODC-6732 enhancement proposal, the cluster admin can add user perspectives
To support the cluster-admin to configure the perspectives correctly, the developer console should provide a code snippet for the customization of yaml resource (Console CRD).
Customize Perspective Enhancement PR: https://github.com/openshift/enhancements/pull/1205
Previous work:
As an admin, I want to be able to use a form driven experience to hide user perspective(s)
As an admin, I want to hide the admin perspective for non-privileged users or hide the developer perspective for all users
Based on the https://issues.redhat.com/browse/ODC-6730 enhancement proposal, it is required to extend the console configuration CRD to enable the cluster admins to configure this data in the console resource
Previous customization work:
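A hedged sketch of the kind of customization the extended CRD could carry (field names follow the linked proposals and are assumptions until the API merges):
```shell
# Hide the admin perspective for all users via the console operator config
oc patch console.operator.openshift.io cluster --type=merge \
  -p '{"spec":{"customization":{"perspectives":[{"id":"admin","visibility":{"state":"Disabled"}}]}}}'
```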
As an admin, I want to hide user perspective(s) based on the customization.
Customers don't want their users to have access to some/all of the items which are available in the Developer Catalog. The request is to change access for the cluster, not per user or persona.
Provide a form-driven experience to allow cluster admins to easily disable the Developer Catalog, or one or more of the sub-catalogs in the Developer Catalog.
Multiple customer requests.
We need to consider how this will work with subcatalogs which are installed by operators: VMs, Event Sources, Event Catalogs, Managed Services, Cloud based services
As a cluster-admin, I should be able to see a code snippet that shows how to enable sub-catalogs or the entire dev catalog.
Based on the https://issues.redhat.com/browse/ODC-6732 enhancement proposal, the cluster admin can add sub-catalog(s) from the Developer Catalog or the Dev catalog as a whole.
To support the cluster-admin to configure the sub-catalog list correctly, the developer console should provide a code snippet for the customization yaml resource (Console CRD).
Previous work:
As an admin, I want to hide sub-catalogs in the developer catalog or hide the developer catalog completely based on the customization.
As an admin, I want to hide/disable access to specific sub-catalogs in the developer catalog or the complete dev catalog for all users across all namespaces.
Based on the https://issues.redhat.com/browse/ODC-6732 enhancement proposal, it is required to extend the console configuration CRD to enable the cluster admins to configure this data in the console resource
Extend the "customization" spec type definition for the CRD in the openshift/api project
Previous customization work:
As an admin, I would like openshift-* namespaces with an operator to be labeled with security.openshift.io/scc.podSecurityLabelSync=true to ensure the continual functioning of operators without manual intervention. The label should only be applied to openshift-* namespaces with an operator (the presence of a ClusterServiceVersion resource) IF the label is not already present. This automation will help smooth functioning of the cluster and avoid frivolous operational events.
Context: As part of the PSA migration period, OpenShift will ship with the "label sync'er" - a controller that will automatically adjust PSA security profiles in response to the workloads present in the namespace. We can assume that not all operators (produced by Red Hat, the community or ISVs) will have successfully migrated their deployments in response to upstream PSA changes. The label sync'er will sync, by default, any namespace not prefixed with "openshift-"; for "openshift-" namespaces, the explicit label (security.openshift.io/scc.podSecurityLabelSync=true) is required for sync.
A/C:
- OLM operator has been modified (downstream only) to label any unlabelled "openshift-" namespace in which a CSV has been created
- If a labeled namespace containing at least one non-copied csv becomes unlabelled, it should be relabelled
- The implementation should be done in a way to eliminate or minimize subsequent downstream sync work (it is ok to make slight architectural changes to the OLM operator in the upstream to enable this)
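For reference, the manual equivalent of what the modified OLM operator would apply per namespace (namespace name is a placeholder):
```shell
# Label an "openshift-" namespace that contains a CSV so the label sync'er manages it
oc label namespace openshift-example-operator \
  security.openshift.io/scc.podSecurityLabelSync=true --overwrite
```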
As an SRE, I want the hypershift operator to expose a metric when the hosted control plane is ready.
This should allow SREs to tune (or silence) alerts occurring while the hosted control plane is spinning up.
This epic tracks network tooling improvements for 4.12
A new framework and process should be developed to make sharing network tools with devs, support, and customers convenient. We are going to add some tools for OVN troubleshooting before ovn-k goes default, some tools that came out of customer cases, and some more to help analyze and debug collected logs based on the stable must-gather/sosreport format we get now thanks to the 4.11 Epic.
Our estimation for this Epic is 1 engineer * 2 Sprints
WHY:
This epic is important to help reduce the time it takes our customers and our team to understand an issue within the cluster.
A focus of this epic is to develop tools to quickly allow debugging of a problematic cluster. This is crucial for the engineering team to help us scale. We want to provide a tool to our customers to help lower the cognitive burden to get at a root cause of an issue.
Alert if any of the ovn controllers is disconnected from the southbound database for a period of time, using the metric ovn_controller_southbound_database_connected.
The metric updates every 2 minutes so please be mindful of this when creating the alert.
If the controller is disconnected for 10 minutes, fire an alert.
DoD: Merged to CNO and tested by QE
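A minimal sketch of the alert described above, assuming the metric reports 0 while disconnected and that the real rule ships with CNO rather than being applied by hand (alert name, namespace, and severity are assumptions):
```shell
cat <<'EOF' | oc apply -f -
apiVersion: monitoring.coreos.com/v1
kind: PrometheusRule
metadata:
  name: ovn-controller-sbdb-disconnect-sketch
  namespace: openshift-ovn-kubernetes
spec:
  groups:
  - name: ovn-controller
    rules:
    - alert: OVNControllerDisconnectedSouthboundDatabase
      expr: ovn_controller_southbound_database_connected == 0
      for: 10m
      labels:
        severity: warning
EOF
```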
OCP/Telco Definition of Done
Epic Template descriptions and documentation.
Add a SOCKS proxy to cluster-network-operator so egress IP can use gRPC to reach worker nodes.
With the introduction of gRPC as the means for determining the state of a given egress node, hypershift should
be able to leverage the SOCKS proxy and become able to know the state of each egress node.
References relevant to this work:
1281-network-proxy
https://coreos.slack.com/archives/C01C8502FMM/p1658427627751939
https://github.com/openshift/hypershift/pull/1131/commits/28546dc587dc028dc8bded715847346ff99d65ea
This Epic is here to track the rebase we need to do when kube 1.25 is GA https://www.kubernetes.dev/resources/release/
Keeping this in mind can help us plan our time better. At the time of writing, GA is planned for August 23.
https://docs.google.com/document/d/1h1XsEt1Iug-W9JRheQas7YRsUJ_NQ8ghEMVmOZ4X-0s/edit --> this is the link for rebase help
We need to rebase cloud network config controller to 1.25 when the kube 1.25 rebase lands.
This section includes Jira cards that are linked to an Epic, but the Epic itself is not linked to any Feature. These epics were not completed when this image was assembled
Changes made in METAL-1 open up opportunities to improve our handling of images by cleaning up redundant code that generates extra work for the user and extra load for the cluster.
We only need to run the image cache DaemonSet if there is a QCOW URL to be mirrored (effectively this means a cluster installed with 4.9 or earlier). We can stop deploying it for new clusters installed with 4.10 or later.
Currently, the image-customization-controller relies on the image cache running on every master to provide the shared hostpath volume containing the ISO and initramfs. The first step is to replace this with a regular volume and an init container in the i-c-c pod that extracts the images from machine-os-images. We can use the copy-metal -image-build flag (instead of -all used in the shared volume) to provide only the required images.
Once i-c-c has its own volume, we can switch the image extraction in the metal3 Pod's init container to use the -pxe flag instead of -all.
The machine-os-images init container for the image cache (not the metal3 Pod) can be removed. The whole image cache deployment is now optional and need only be started if provisioningOSDownloadURL is set (and in fact should be deleted if it is not).
Description of the problem:
Cluster installation fails if the installation disk has LVM on RAID:
Host: test-infra-cluster-3cc862c9-master-0, reached installation stage Failed: failed executing nsenter [--target 1 --cgroup --mount --ipc --pid -- mdadm --stop /dev/md0], Error exit status 1, LastOutput "mdadm: Cannot get exclusive access to /dev/md0:Perhaps a running process, mounted filesystem or active volume group?"
How reproducible:
100%
Steps to reproduce:
1. Install a cluster while master nodes has disk with LVM on RAID (reproduces using test: https://gitlab.cee.redhat.com/ocp-edge-qe/kni-assisted-installer-auto/-/blob/master/api_tests/test_disk_cleanup.py#L97)
Actual results:
Installation failed
Expected results:
Installation success
Description of the problem:
When running assisted-installer on a machine where there is more than one volume group per physical volume, only the first volume group will be cleaned up. This leads to problems later and will lead to errors such as
Failed - failed executing nsenter [--target 1 --cgroup --mount --ipc --pid -- pvremove /dev/sda -y -ff], Error exit status 5, LastOutput "Can't open /dev/sda exclusively. Mounted filesystem?
How reproducible:
Set up a VM with more than one volume group per physical volume. As an example, look at the following sample from a customer cluster.
List block devices:
/usr/bin/lsblk -o NAME,MAJ:MIN,SIZE,TYPE,FSTYPE,KNAME,MODEL,UUID,WWN,HCTL,VENDOR,STATE,TRAN,PKNAME
NAME MAJ:MIN SIZE TYPE FSTYPE KNAME MODEL UUID WWN HCTL VENDOR STATE TRAN PKNAME
loop0 7:0 125.9G loop xfs loop0 c080b47b-2291-495c-8cc0-2009ebc39839
loop1 7:1 885.5M loop squashfs loop1
sda 8:0 894.3G disk sda INTEL SSDSC2KG96 0x55cd2e415235b2db 1:0:0:0 ATA running sas
|-sda1 8:1 250M part sda1 0x55cd2e415235b2db sda
|-sda2 8:2 750M part ext2 sda2 3aa73c72-e342-4a07-908c-a8a49767469d 0x55cd2e415235b2db sda
|-sda3 8:3 49G part xfs sda3 ffc3ccfe-f150-4361-8ae5-f87b17c13ac2 0x55cd2e415235b2db sda
|-sda4 8:4 394.2G part LVM2_member sda4 Ua3HOc-Olm4-1rma-q0Ug-PtzI-ZOWg-RJ63uY 0x55cd2e415235b2db sda
`-sda5 8:5 450G part LVM2_member sda5 W8JqrD-ZvaC-uNK9-Y03D-uarc-Tl4O-wkDdhS 0x55cd2e415235b2db sda
`-nova-instance 253:0 3.1T lvm ext4 dm-0 d15e2de6-2b97-4241-9451-639f7b14594e running sda5
sdb 8:16 894.3G disk sdb INTEL SSDSC2KG96 0x55cd2e415235b31b 1:0:1:0 ATA running sas
`-sdb1 8:17 894.3G part LVM2_member sdb1 6ETObl-EzTd-jLGw-zVNc-lJ5O-QxgH-5wLAqD 0x55cd2e415235b31b sdb
`-nova-instance 253:0 3.1T lvm ext4 dm-0 d15e2de6-2b97-4241-9451-639f7b14594e running sdb1
sdc 8:32 894.3G disk sdc INTEL SSDSC2KG96 0x55cd2e415235b652 1:0:2:0 ATA running sas
`-sdc1 8:33 894.3G part LVM2_member sdc1 pBuktx-XlCg-6Mxs-lddC-qogB-ahXa-Nd9y2p 0x55cd2e415235b652 sdc
`-nova-instance 253:0 3.1T lvm ext4 dm-0 d15e2de6-2b97-4241-9451-639f7b14594e running sdc1
sdd 8:48 894.3G disk sdd INTEL SSDSC2KG96 0x55cd2e41521679b7 1:0:3:0 ATA running sas
`-sdd1 8:49 894.3G part LVM2_member sdd1 exVSwU-Pe07-XJ6r-Sfxe-CQcK-tu28-Hxdnqo 0x55cd2e41521679b7 sdd
`-nova-instance 253:0 3.1T lvm ext4 dm-0 d15e2de6-2b97-4241-9451-639f7b14594e running sdd1
sr0 11:0 989M rom iso9660 sr0 Virtual CDROM0 2022-06-17-18-18-33-00 0:0:0:0 AMI running usb
Now run the assisted installer and try to install an SNO node on this machine; the installation will fail with a message that indicates that it could not exclusively access /dev/sda.
Actual results:
The installation will fail with a message that indicates that it could not exclusively access /dev/sda
Expected results:
The installation should proceed and the cluster should start to install.
Suspected Cases
https://issues.redhat.com/browse/AITRIAGE-3809
https://issues.redhat.com/browse/AITRIAGE-3802
https://issues.redhat.com/browse/AITRIAGE-3810
Same thing as we've had in assisted-service: we sometimes fail to install golangci-lint by fetching release artifacts from GitHub directly. That's usually because the same IP address (the CI build cluster) accesses GitHub at a high rate, leading to 429 (too many requests).
The way we fixed it for assisted-service was to change the installation to use a quay.io image that is already built with the binary.
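A minimal sketch of that approach (the image reference and the binary path inside it are assumptions for illustration, not the exact ones used in the repos):
```
# Copy golangci-lint out of a pre-built container image instead of
# downloading GitHub release artifacts (avoids the 429 rate limits).
podman create --name golangci-tmp quay.io/example/golangci-lint:v1.50.1  # hypothetical image
podman cp golangci-tmp:/usr/bin/golangci-lint ./bin/golangci-lint
podman rm golangci-tmp
./bin/golangci-lint run ./...
```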
Example for such a failure: https://prow.ci.openshift.org/view/gs/origin-ci-test/pr-logs/pull/openshift_release/30788/rehearse-30788-periodic-ci-openshift-assisted-installer-agent-release-ocm-2.6-subsystem-test-periodic/1551879759036682240
Filter for all recent failures: https://search.ci.openshift.org/?search=golangci%2Fgolangci-lint+crit+unable+to+find&maxAge=168h&context=1&type=build-log&name=.*assisted.*&excludeName=&maxMatches=5&maxBytes=20971520&groupBy=job
Section 5 of PRD: https://docs.google.com/document/d/1fF-Ajdzc9EDDg687FzTrX577hvY9NdK0/edit#heading=h.gjdgxs
Testing and collaboration with NVIDIA: https://docs.google.com/spreadsheets/d/1LHY-Af-2kQHVwtW4aVdHnmwZLTiatiyf-ySffC8O5NM/edit#gid=0
Deploying Nvidia Patches: https://docs.google.com/document/d/1yR4lphjPKd6qZ9sGzZITl0wH1r4ykfMKPjUnlzvWji4/edit#
This is the continuation of https://issues.redhat.com/browse/NHE-273 but now the focus is on the remaining flows
Description of problem:
check_pkt_length cannot be offloaded without 1) sFlow offload patches in Open vSwitch and 2) hardware driver support. Since 1) will not be done anytime soon, we need a workaround for the check_pkt_length issue.
Version-Release number of selected component (if applicable):
4.11/4.12
How reproducible:
Always
Steps to Reproduce:
1. Any flow that has check_pkt_len()
5-b: Pod -> NodePort Service traffic (Pod Backend - Different Node)
6-b: Pod -> NodePort Service traffic (Host Backend - Different Node)
4-b: Pod -> Cluster IP Service traffic (Host Backend - Different Node)
10-b: Host Pod -> Cluster IP Service traffic (Host Backend - Different Node)
11-b: Host Pod -> NodePort Service traffic (Pod Backend - Different Node)
12-b: Host Pod -> NodePort Service traffic (Host Backend - Different Node)
Actual results:
Poor performance due to upcalls when check_pkt_len() is not supported.
Expected results:
Good performance.
Additional info:
https://docs.google.com/spreadsheets/d/1LHY-Af-2kQHVwtW4aVdHnmwZLTiatiyf-ySffC8O5NM/edit#gid=670206692
As a developer building container images on OpenShift
I want to specify that my build should run without elevated privileges
So that builds do not run as root from the host's perspective with elevated privileges
No QE required for Dev Preview. OpenShift regression testing will verify that existing behavior is not impacted.
We will need to document how to enable this feature, with sufficient warnings regarding Dev Preview.
This potentially warrants an OpenShift blog post.
We have been running into a number of problems with configure-ovs and nodeip-configuration selecting different interfaces in OVNK deployments. This causes connectivity issues, so we need some way to ensure that everything uses the same interface/IP.
Currently configure-ovs runs before nodeip-configuration, but since nodeip-configuration is the source of truth for IP selection regardless of CNI plugin, I think we need to look at swapping that order. That way configure-ovs could look at what nodeip-configuration chose and not have to implement its own interface selection logic.
I'm targeting this at 4.12 because even though there's probably still time to get it in for 4.11, changing the order of boot services is always a little risky and I'd prefer to do it earlier in the cycle so we have time to tease out any issues that arise. We may need to consider backporting the change though since this has been an issue at least back to 4.10.
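As a rough illustration of the kind of ordering change being discussed, a systemd drop-in along these lines would make configure-ovs wait for node IP selection; the unit names and drop-in path are assumptions based on the description above, not the actual change:
```
# /etc/systemd/system/ovs-configuration.service.d/10-after-nodeip.conf (hypothetical drop-in)
[Unit]
# Run configure-ovs only after nodeip-configuration has chosen the node IP,
# so both components agree on the same interface.
After=nodeip-configuration.service
Wants=nodeip-configuration.service
```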
Goal
Provide an indication that advanced features are used
Problem
Today, customers and RH don't have the information on the actual usage of advanced features.
Why is this important?
Prioritized Scenarios
In Scope
1. Add a boolean variable in our telemetry to mark if the customer is using advanced features (PV encryption, encryption with KMS, external mode).
Not in Scope
Integrate with subscription watch - will be done by the subscription watch team with our help.
Customers
All
Customer Facing Story
As a compliance manager, I should be able to easily see if all my clusters are using the right number of subscriptions
What does success look like?
A clear indication in subscription watch for ODF usage (either essential or advanced).
1. Proposed title of this feature request
2. What is the nature and description of the request?
3. Why does the customer need this? (List the business requirements here)
4. List any affected packages or components.
_____________________
Link to main epic: https://issues.redhat.com/browse/RHSTOR-3173
We migrated most component as part of https://issues.redhat.com/browse/RHSTOR-2165
We now have a few components remaining, roughly 15 to 20%. This epic targets:
1) Add support for in-tree modal launcher
This section includes Jira cards that are not linked to either an Epic or a Feature. These tickets were completed when this image was assembled
Probably for: 1h or some such; I don't think it needs to go off immediately. But in-cluster admins and folks monitoring submitted Insights should have a way to figure out that the cluster is trying and failing to submit Telemetry. The alert should not fire when Telemetry submission has been explicitly disabled.
There is an existing alert for PrometheusRemoteWriteBehind in a similar space, but as of today, the Telemetry submissions are happening via telemeter-client, due to concerns about the load of submitting via remote-write.
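A rough sketch of what such an alert could look like as a PrometheusRule; the alert name, metric, threshold, and duration below are hypothetical placeholders, not the final rule:
```
apiVersion: monitoring.coreos.com/v1
kind: PrometheusRule
metadata:
  name: telemetry-client-alerts
  namespace: openshift-monitoring
spec:
  groups:
  - name: telemeter-client
    rules:
    - alert: TelemeterClientFailures          # hypothetical alert name
      # Hypothetical metric; the real rule would use whatever telemeter-client
      # actually exports, and must not fire when Telemetry submission has been
      # explicitly disabled.
      expr: rate(federate_requests_failed_total[15m]) > 0
      for: 1h
      labels:
        severity: warning
      annotations:
        summary: Telemetry submissions are failing.
```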
Description of problem:
A duplicate "Getting started" notification is shown on the Search page
Version-Release number of selected component (if applicable):
4.12.0-0.nightly-2022-09-26-111919
How reproducible:
Always
Steps to Reproduce:
1. Log in to OCP as a normal user, switch to the Developer perspective, and create a new project
2. Delete the project on the page (switch to the Administrator perspective, go to Home -> Projects page)
3. Switch to the Developer perspective, go to the Search page, and check the notification "Getting Started"
Actual results:
Two notifications are shown on the page
Expected results:
Only one should exist
Additional info:
Description of problem:
When providing install-config as
platform:
  baremetal:
    apiVIP: 192.168.122.10
    ingressVIP: 192.168.122.11
agent installer fails with
bin/openshift-install agent create cluster-manifests
FATAL failed to fetch Agent Manifests: failed to load asset "Install Config": invalid install-config configuration: [Platform.Baremetal.ApiVips: Required value: apiVips must be set for baremetal platform, Platform.Baremetal.IngressVips: Required value: ingressVips must be set for baremetal platform]
Version-Release number of selected component (if applicable):
How reproducible:
Always
Steps to Reproduce:
1. Git clone latest installer https://github.com/openshift/installer and build it
2. Provide install-config.yaml for baremetal platform with deprecated apiVip and ingressVip set
3. Create agent image with "bin/openshift-install agent create cluster-manifests"
Actual results:
bin/openshift-install agent create cluster-manifests FATAL failed to fetch Agent Manifests: failed to load asset "Install Config": invalid install-config configuration: [Platform.Baremetal.ApiVips: Required value: apiVips must be set for baremetal platform, Platform.Baremetal.IngressVips: Required value: ingressVips must be set for baremetal platform]
Expected results:
agent installer should upconvert the deprecated fields and should not error. apiVip and ingressVip should be upconverted into apiVips and ingressVips respectively
Additional info:
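For reference, a sketch of the non-deprecated plural form the installer should converge on (values copied from the example above; field casing per the install-config documentation, treat this as illustrative):
```
platform:
  baremetal:
    apiVIPs:
    - 192.168.122.10
    ingressVIPs:
    - 192.168.122.11
```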
This bug is a backport clone of [Bugzilla Bug 1948666](https://bugzilla.redhat.com/show_bug.cgi?id=1948666). The following is the description of the original bug:
—
Description of problem:
When users try to deploy an application from the git method on the dev console, it throws a warning message for specific public repos: `URL is valid but cannot be reached. If this is a private repository, enter a source secret in Advanced Git Options.`. If we ignore the warning and go ahead, the build is successful, so the warning message seems to be misleading.
Actual results:
A warning is shown for the URL while trying to deploy an application from the git method on the dev console from a public repo
Expected results:
It should show as validated
Description of problem:
We need to have admin-ack in 4.12 so that admins can check for the deprecated APIs and acknowledge before they move to 4.13. Refer to https://access.redhat.com/articles/6958394 for more information. As planned, we want to add the admin-ack around the 4.13 feature freeze.
Version-Release number of selected component (if applicable):
How reproducible:
Steps to Reproduce:
1. Install a cluster in 4.12.
2. Run an application which uses the deprecated API. See https://access.redhat.com/articles/6958394 for more information.
3. Upgrade to 4.13
Actual results:
The upgrade happens without asking the admin to confirm that the workloads do not use the deprecated APIs.
Expected results:
Upgrade should wait for the admin-ack.
Additional info:
This was the PR for 4.11.z https://github.com/openshift/cluster-version-operator/pull/836
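For context, the acknowledgement itself is done by patching the admin-acks config map; a sketch is below, but the exact gate key for the 4.12 -> 4.13 path is an assumption here, so check the linked article for the authoritative name:
```
oc -n openshift-config patch configmap admin-acks --type=merge \
  --patch '{"data":{"ack-4.12-kube-1.26-api-removals-in-4.13":"true"}}'  # gate key assumed
```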
Description of problem:
NPE on topology when creating a k8s svc and a KSVC which has no metadata in the template
Version-Release number of selected component (if applicable):
How reproducible:
Always
Steps to Reproduce:
1. create a KSVC from admin -> serving -> create service
2. create a k8s svc from search service (create)
Actual results:
topology breaks (see attached screenshot)
Expected results:
topology shouldn't break
Additional info:
Description of problem:
One multus case always fails in QE e2e testing. Using the same net-attach-def and pod configuration files, testing passed in 4.11 but failed in 4.12 and 4.13
Version-Release number of selected component (if applicable):
4.12 and 4.13
How reproducible:
Every time
Steps to Reproduce:
[weliang@weliang networking]$ oc create -f https://raw.githubusercontent.com/weliang1/verification-tests/master/testdata/networking/multus-cni/NetworkAttachmentDefinitions/runtimeconfig-def-ipandmac.yaml networkattachmentdefinition.k8s.cni.cncf.io/runtimeconfig-def created [weliang@weliang networking]$ oc get net-attach-def -o yaml apiVersion: v1 items: - apiVersion: k8s.cni.cncf.io/v1 kind: NetworkAttachmentDefinition metadata: creationTimestamp: "2023-01-03T16:33:03Z" generation: 1 name: runtimeconfig-def namespace: test resourceVersion: "64139" uid: bb26c08f-adbf-477e-97ab-2aa7461e50c4 spec: config: '{ "cniVersion": "0.3.1", "name": "runtimeconfig-def", "plugins": [{ "type": "macvlan", "capabilities": { "ips": true }, "mode": "bridge", "ipam": { "type": "static" } }, { "type": "tuning", "capabilities": { "mac": true } }] }' kind: List metadata: resourceVersion: "" [weliang@weliang networking]$ oc create -f https://raw.githubusercontent.com/weliang1/verification-tests/master/testdata/networking/multus-cni/Pods/runtimeconfig-pod-ipandmac.yaml pod/runtimeconfig-pod created [weliang@weliang networking]$ oc get pod NAME READY STATUS RESTARTS AGE runtimeconfig-pod 0/1 ContainerCreating 0 6s [weliang@weliang networking]$ oc describe pod runtimeconfig-pod Name: runtimeconfig-pod Namespace: test Priority: 0 Node: weliang-01031-bvxtz-worker-a-qlwz7.c.openshift-qe.internal/10.0.128.4 Start Time: Tue, 03 Jan 2023 11:33:45 -0500 Labels: <none> Annotations: k8s.v1.cni.cncf.io/networks: [ { "name": "runtimeconfig-def", "ips": [ "192.168.22.2/24" ], "mac": "CA:FE:C0:FF:EE:00" } ] openshift.io/scc: anyuid Status: Pending IP: IPs: <none> Containers: runtimeconfig-pod: Container ID: Image: quay.io/openshifttest/hello-sdn@sha256:c89445416459e7adea9a5a416b3365ed3d74f2491beb904d61dc8d1eb89a72a4 Image ID: Port: <none> Host Port: <none> State: Waiting Reason: ContainerCreating Ready: False Restart Count: 0 Environment: <none> Mounts: /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-k5zqd (ro) Conditions: Type Status Initialized True Ready False ContainersReady False PodScheduled True Volumes: kube-api-access-k5zqd: Type: Projected (a volume that contains injected data from multiple sources) TokenExpirationSeconds: 3607 ConfigMapName: kube-root-ca.crt ConfigMapOptional: <nil> DownwardAPI: true ConfigMapName: openshift-service-ca.crt ConfigMapOptional: <nil> QoS Class: BestEffort Node-Selectors: <none> Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s node.kubernetes.io/unreachable:NoExecute op=Exists for 300s Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal Scheduled 26s default-scheduler Successfully assigned test/runtimeconfig-pod to weliang-01031-bvxtz-worker-a-qlwz7.c.openshift-qe.internal Normal AddedInterface 24s multus Add eth0 [10.128.2.115/23] from openshift-sdn Warning FailedCreatePodSandBox 23s kubelet Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_runtimeconfig-pod_test_7d5f3e7a-846d-4cfb-ac78-fd08b27102ae_0(cff792dbd07e8936d04aad31964bd7b626c19a90eb9d92a67736323a1a2303c4): error adding pod test_runtimeconfig-pod to CNI network "multus-cni-network": plugin type="multus" name="multus-cni-network" failed (add): [test/runtimeconfig-pod/7d5f3e7a-846d-4cfb-ac78-fd08b27102ae:runtimeconfig-def]: error adding container to network "runtimeconfig-def": Interface name contains an invalid character / Normal AddedInterface 7s multus Add eth0 [10.128.2.116/23] from openshift-sdn Warning 
FailedCreatePodSandBox 7s kubelet Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_runtimeconfig-pod_test_7d5f3e7a-846d-4cfb-ac78-fd08b27102ae_0(d2456338fa65847d5dc744dea64972912c10b2a32d3450910b0b81cdc9159ca4): error adding pod test_runtimeconfig-pod to CNI network "multus-cni-network": plugin type="multus" name="multus-cni-network" failed (add): [test/runtimeconfig-pod/7d5f3e7a-846d-4cfb-ac78-fd08b27102ae:runtimeconfig-def]: error adding container to network "runtimeconfig-def": Interface name contains an invalid character / [weliang@weliang networking]$
Actual results:
Pod is not running
Expected results:
Pod should be in running state
Additional info:
This is a clone of issue OCPBUGS-2479. The following is the description of the original issue:
—
Description of problem:
Right border radius is 0 for the pipeline visualization wrapper in dark mode but looks fine in light mode
Version-Release number of selected component (if applicable):
4.12
How reproducible:
Steps to Reproduce:
1. Switch the theme to dark mode
2. Create a pipeline and navigate to the Pipeline details page
Actual results:
Right border radius is 0, see the screenshots
Expected results:
Right border radius should be same as left border radius.
Additional info:
Description of problem:
When scaling down the machineSet for worker nodes, a PV(vmdk) file got deleted.
Version-Release number of selected component (if applicable):
4.10
How reproducible:
N/A
Steps to Reproduce:
1. Scale down worker nodes
2. Check VMware logs and VM gets deleted with vmdk still attached
Actual results:
After scaling down nodes, volumes still attached to the VM get deleted alongside the VM
Expected results:
Worker nodes scaled down without any accidental deletion
Additional info:
This is a clone of issue OCPBUGS-1557. The following is the description of the original issue:
—
Seen in an instance created recently by a 4.12.0-ec.2 GCP provider:
"scheduling": { "automaticRestart": false, "onHostMaintenance": "MIGRATE", "preemptible": false, "provisioningModel": "STANDARD" },
From GCP's docs, they may stop instances on hardware failures and other causes, and we'd need automaticRestart: true to auto-recover from that. Also from GCP docs, the default for automaticRestart is true. And on the Go provider side, we document:
If omitted, the platform chooses a default, which is subject to change over time, currently that default is "Always".
But the implementing code does not actually float the setting. Seems like a regression here, which is part of 4.10:
$ git clone https://github.com/openshift/machine-api-provider-gcp.git
$ cd machine-api-provider-gcp
$ git log --oneline origin/release-4.10 | grep 'migrate to openshift/api'
44f0f958 migrate to openshift/api
But that's not where the 4.9 and earlier code is located:
$ git branch -a | grep origin/release
remotes/origin/release-4.10
remotes/origin/release-4.11
remotes/origin/release-4.12
remotes/origin/release-4.13
Hunting for 4.9 code:
$ oc adm release info --commits quay.io/openshift-release-dev/ocp-release:4.9.48-x86_64 | grep gcp
gcp-machine-controllers https://github.com/openshift/cluster-api-provider-gcp c955c03b2d05e3b8eb0d39d5b4927128e6d1c6c6
gcp-pd-csi-driver https://github.com/openshift/gcp-pd-csi-driver 48d49f7f9ef96a7a42a789e3304ead53f266f475
gcp-pd-csi-driver-operator https://github.com/openshift/gcp-pd-csi-driver-operator d8a891de5ae9cf552d7d012ebe61c2abd395386e
So looking there:
$ git clone https://github.com/openshift/cluster-api-provider-gcp.git
$ cd cluster-api-provider-gcp
$ git log --oneline | grep 'migrate to openshift/api'
...no hits...
$ git grep -i automaticRestart origin/release-4.9 | grep -v '"description"\|compute-gen.go'
origin/release-4.9:vendor/google.golang.org/api/compute/v1/compute-api.json: "automaticRestart": {
Not actually clear to me how that code is structured. So 4.10 and later GCP machine-API providers are impacted, and I'm unclear on 4.9 and earlier.
Description of problem:
Clusters created with platform 'vsphere' in the install-config end up as type 'BareMetal' in the infrastructure CR.
Version-Release number of selected component (if applicable):
4.12.3
How reproducible:
100%
Steps to Reproduce:
1. Create a cluster through the agent installer with platform: vsphere in the install-config
2. oc get infrastructure cluster -o jsonpath='{.status.platform}'
Actual results:
BareMetal
Expected results:
VSphere
Additional info:
The platform type is not being case converted ("vsphere" -> "VSphere") when constructing the AgentClusterInstall CR. When read by the assisted-service client, the platform reads as unknown and therefore the platform field is left blank when the Cluster object is created in the assisted API. Presumably that results in the correct default platform for the topology: None for SNO, BareMetal for everything else, but never VSphere. Since the platform VIPs are passed through a non-platform-specific API in assisted, everything worked but the resulting cluster would have the BareMetal platform.
Description of problem:
This bug is a clone of https://bugzilla.redhat.com/show_bug.cgi?id=2109140 on the odf-console side. The corresponding PR needs to be merged in console as well. Please verify this console Jira bug and https://bugzilla.redhat.com/show_bug.cgi?id=2109140 simultaneously. The steps are exactly the same, no difference.
Version-Release number of selected component (if applicable):
How reproducible:
Steps to Reproduce:
1. 2. 3.
Actual results:
Expected results:
Additional info:
Description of problem:
prometheus-k8s-0 ends in CrashLoopBackOff with level=error err="opening storage failed: /prometheus/chunks_head/000002: invalid magic number 0" on SNO after hard reboot tests
Version-Release number of selected component (if applicable):
4.11.6
How reproducible:
Not always, after ~10 attempts
Steps to Reproduce:
1. Deploy SNO with Telco DU profile applied
2. Hard reboot node via out of band interface
3. oc -n openshift-monitoring get pods prometheus-k8s-0
Actual results:
NAME READY STATUS RESTARTS AGE
prometheus-k8s-0 5/6 CrashLoopBackOff 125 (4m57s ago) 5h28m
Expected results:
Running
Additional info:
Attaching must-gather. The pod recovers successfully after deleting/re-creating. [kni@registry.kni-qe-0 ~]$ oc -n openshift-monitoring logs prometheus-k8s-0 ts=2022-09-26T14:54:01.919Z caller=main.go:552 level=info msg="Starting Prometheus Server" mode=server version="(version=2.36.2, branch=rhaos-4.11-rhel-8, revision=0d81ba04ce410df37ca2c0b1ec619e1bc02e19ef)" ts=2022-09-26T14:54:01.919Z caller=main.go:557 level=info build_context="(go=go1.18.4, user=root@371541f17026, date=20220916-14:15:37)" ts=2022-09-26T14:54:01.919Z caller=main.go:558 level=info host_details="(Linux 4.18.0-372.26.1.rt7.183.el8_6.x86_64 #1 SMP PREEMPT_RT Sat Aug 27 22:04:33 EDT 2022 x86_64 prometheus-k8s-0 (none))" ts=2022-09-26T14:54:01.919Z caller=main.go:559 level=info fd_limits="(soft=1048576, hard=1048576)" ts=2022-09-26T14:54:01.919Z caller=main.go:560 level=info vm_limits="(soft=unlimited, hard=unlimited)" ts=2022-09-26T14:54:01.921Z caller=web.go:553 level=info component=web msg="Start listening for connections" address=127.0.0.1:9090 ts=2022-09-26T14:54:01.922Z caller=main.go:989 level=info msg="Starting TSDB ..." ts=2022-09-26T14:54:01.924Z caller=tls_config.go:231 level=info component=web msg="TLS is disabled." http2=false ts=2022-09-26T14:54:01.926Z caller=main.go:848 level=info msg="Stopping scrape discovery manager..." ts=2022-09-26T14:54:01.926Z caller=main.go:862 level=info msg="Stopping notify discovery manager..." ts=2022-09-26T14:54:01.926Z caller=manager.go:951 level=info component="rule manager" msg="Stopping rule manager..." ts=2022-09-26T14:54:01.926Z caller=manager.go:961 level=info component="rule manager" msg="Rule manager stopped" ts=2022-09-26T14:54:01.926Z caller=main.go:899 level=info msg="Stopping scrape manager..." ts=2022-09-26T14:54:01.926Z caller=main.go:858 level=info msg="Notify discovery manager stopped" ts=2022-09-26T14:54:01.926Z caller=main.go:891 level=info msg="Scrape manager stopped" ts=2022-09-26T14:54:01.926Z caller=notifier.go:599 level=info component=notifier msg="Stopping notification manager..." ts=2022-09-26T14:54:01.926Z caller=main.go:844 level=info msg="Scrape discovery manager stopped" ts=2022-09-26T14:54:01.926Z caller=manager.go:937 level=info component="rule manager" msg="Starting rule manager..." ts=2022-09-26T14:54:01.926Z caller=main.go:1120 level=info msg="Notifier manager stopped" ts=2022-09-26T14:54:01.926Z caller=main.go:1129 level=error err="opening storage failed: /prometheus/chunks_head/000002: invalid magic number 0"
Description of problem:
The current version of OpenShift's CoreDNS is based on Kubernetes 1.24 packages. OpenShift 4.12 is based on Kubernetes 1.25.
Version-Release number of selected component (if applicable):
4.12
How reproducible:
Always
Steps to Reproduce:
1. Check https://github.com/openshift/coredns/blob/release-4.12/go.mod
Actual results:
Kubernetes packages (k8s.io/api, k8s.io/apimachinery, and k8s.io/client-go) are at version v0.24.0.
Expected results:
Kubernetes packages are at version v0.25.0 or later.
Additional info:
Using old Kubernetes API and client packages brings risk of API compatibility issues.
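A typical way to move the module onto the newer packages would be something like the following (a sketch only; the repo may pin or replace these versions differently):
```
go get k8s.io/api@v0.25.0 k8s.io/apimachinery@v0.25.0 k8s.io/client-go@v0.25.0
go mod tidy
```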
`aws-ebs-csi-driver-operator` runs in the mgmt cluster and does not need to be configured with the guest cluster proxy
hypershift proxy conformance test currently fails because the `storage` CO never becomes `Available`
https://k8s-testgrid.appspot.com/redhat-hypershift#4.12-conformance-aws-ovn-proxy
This is a clone of issue OCPBUGS-3621. The following is the description of the original issue:
—
Description of problem:
EUS-to-EUS upgrade(4.10.38-4.11.13-4.12.0-rc.0), after control-plane nodes are upgraded to 4.12 successfully, unpause the worker pool to get worker nodes updated. But worker nodes failed to be updated with degraded worker pool: ``` # ./oc get node NAME STATUS ROLES AGE VERSION jliu410-6hmkz-master-0.c.openshift-qe.internal Ready master 4h40m v1.25.2+f33d98e jliu410-6hmkz-master-1.c.openshift-qe.internal Ready master 4h40m v1.25.2+f33d98e jliu410-6hmkz-master-2.c.openshift-qe.internal Ready master 4h40m v1.25.2+f33d98e jliu410-6hmkz-worker-a-xdwvv.c.openshift-qe.internal Ready,SchedulingDisabled worker 4h31m v1.23.12+6b34f32 jliu410-6hmkz-worker-b-9hnb8.c.openshift-qe.internal Ready worker 4h31m v1.23.12+6b34f32 jliu410-6hmkz-worker-c-bdv4f.c.openshift-qe.internal Ready worker 4h31m v1.23.12+6b34f32 ... # ./oc get co machine-config machine-config 4.12.0-rc.0 True False True 3h41m Failed to resync 4.12.0-rc.0 because: error during syncRequiredMachineConfigPools: [timed out waiting for the condition, error pool worker is not ready, retrying. Status: (pool degraded: true total: 3, ready 0, updated: 0, unavailable: 1)] ... # ./oc get mcp NAME CONFIG UPDATED UPDATING DEGRADED MACHINECOUNT READYMACHINECOUNT UPDATEDMACHINECOUNT DEGRADEDMACHINECOUNT AGE master rendered-master-b81233204496767f2fe32fbb6cb088e1 True False False 3 3 3 0 4h10m worker rendered-worker-a2caae543a144d94c17a27e56038d4c4 False True True 3 0 0 1 4h10m ... # ./oc describe mcp worker Message: Reason: Status: True Type: Degraded Last Transition Time: 2022-11-14T07:19:42Z Message: Node jliu410-6hmkz-worker-a-xdwvv.c.openshift-qe.internal is reporting: "Error checking type of update image: error running skopeo inspect --no-tags --retry-times 5 --authfile /var/lib/kubelet/config.json docker://quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c01b0ae9870dbee5609c52b4d649334ce6854fff1237f1521929d151f6876daa: exit status 1\ntime=\"2022-11-14T07:42:47Z\" level=fatal msg=\"unknown flag: --no-tags\"\n" Reason: 1 nodes are reporting degraded status on sync Status: True Type: NodeDegraded ... # ./oc logs machine-config-daemon-mg2zn E1114 08:11:27.115577 192836 writer.go:200] Marking Degraded due to: Error checking type of update image: error running skopeo inspect --no-tags --retry-times 5 --authfile /var/lib/kubelet/config.json docker://quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c01b0ae9870dbee5609c52b4d649334ce6854fff1237f1521929d151f6876daa: exit status 1 time="2022-11-14T08:11:25Z" level=fatal msg="unknown flag: --no-tags" ```
Version-Release number of selected component (if applicable):
4.12.0-rc.0
How reproducible:
Steps to Reproduce:
1. EUS upgrade with path 4.10.38-> 4.11.13-> 4.12.0-rc.0 with paused worker pool 2. After master pool upgrade succeed, unpause worker pool 3.
Actual results:
Worker pool upgrade failed
Expected results:
Worker pool upgrade succeed
Additional info:
Description of problem:
An intra-namespace allow network policy doesn't work after applying an ingress & egress deny-all network policy
Version-Release number of selected component (if applicable):
OpenShift 4.10.12
How reproducible:
Always
Steps to Reproduce:
1. Define deny all network policy for egress an ingress in a namespace:
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-all
spec:
  podSelector: {}
  policyTypes:
  - Ingress
  - Egress
2. Define the following network policy to allow the traffic between the pods in the namespace:
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-intra-namespace-001
spec:
  egress:
  - to:
    - podSelector: {}
  ingress:
  - from:
    - podSelector: {}
  podSelector: {}
  policyTypes:
  - Ingress
  - Egress
3. Test the connectivity between two pods from the namespace.
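A simple way to check step 3 (namespace, pod names, target IP, and port are placeholders, and the client image must contain curl):
```
# From pod-a, try to reach pod-b on its pod IP; with the two policies above
# applied, traffic between pods in the same namespace should succeed.
oc -n test-ns exec pod-a -- curl -s -m 5 http://10.128.2.23:8080
```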
Actual results:
The connectivity is not allowed
Expected results:
The connectivity should be allowed between pods from the same namespace.
Additional info:
After performing a test and analyzing SDN flows for the namespace:
sh-4.4# ovs-ofctl dump-flows -O OpenFlow13 br0 | grep --color 0x964376
cookie=0x0, duration=99375.342s, table=20, n_packets=14, n_bytes=588, priority=100,arp,in_port=21,arp_spa=10.128.2.20,arp_sha=00:00:0a:80:02:14/00:00:ff:ff:ff:ff actions=load:0x964376->NXM_NX_REG0[],goto_table:30
cookie=0x0, duration=1681.845s, table=20, n_packets=11, n_bytes=462, priority=100,arp,in_port=24,arp_spa=10.128.2.23,arp_sha=00:00:0a:80:02:17/00:00:ff:ff:ff:ff actions=load:0x964376->NXM_NX_REG0[],goto_table:30
cookie=0x0, duration=99375.342s, table=20, n_packets=135610, n_bytes=759239814, priority=100,ip,in_port=21,nw_src=10.128.2.20 actions=load:0x964376->NXM_NX_REG0[],goto_table:27
cookie=0x0, duration=1681.845s, table=20, n_packets=2006, n_bytes=12684967, priority=100,ip,in_port=24,nw_src=10.128.2.23 actions=load:0x964376->NXM_NX_REG0[],goto_table:27
cookie=0x0, duration=99375.342s, table=25, n_packets=0, n_bytes=0, priority=100,ip,nw_src=10.128.2.20 actions=load:0x964376->NXM_NX_REG0[],goto_table:27
cookie=0x0, duration=1681.845s, table=25, n_packets=0, n_bytes=0, priority=100,ip,nw_src=10.128.2.23 actions=load:0x964376->NXM_NX_REG0[],goto_table:27
cookie=0x0, duration=975.129s, table=27, n_packets=0, n_bytes=0, priority=150,reg0=0x964376,reg1=0x964376 actions=goto_table:30
cookie=0x0, duration=99375.342s, table=70, n_packets=145260, n_bytes=11722173, priority=100,ip,nw_dst=10.128.2.20 actions=load:0x964376->NXM_NX_REG1[],load:0x15->NXM_NX_REG2[],goto_table:80
cookie=0x0, duration=1681.845s, table=70, n_packets=2336, n_bytes=191079, priority=100,ip,nw_dst=10.128.2.23 actions=load:0x964376->NXM_NX_REG1[],load:0x18->NXM_NX_REG2[],goto_table:80
cookie=0x0, duration=975.129s, table=80, n_packets=0, n_bytes=0, priority=150,reg0=0x964376,reg1=0x964376 actions=output:NXM_NX_REG2[]
We see that the following rule doesn't match because `reg1` hasn't been defined:
cookie=0x0, duration=975.129s, table=27, n_packets=0, n_bytes=0, priority=150,reg0=0x964376,reg1=0x964376 actions=goto_table:30
This is a clone of issue OCPBUGS-3316. The following is the description of the original issue:
—
Description of problem:
Branch name in repository pipelineruns list view should match the actual github branch name.
Version-Release number of selected component (if applicable):
4.11.z
How reproducible:
Always
Steps to Reproduce:
1. Create a repository
2. Trigger the pipelineruns by push or pull request event on the github
Actual results:
The branch name contains a "refs-heads-" prefix in front of the actual branch name, e.g. "refs-heads-cicd-demo" (cicd-demo is the branch name)
Expected results:
The branch name should be the actual GitHub branch name; just `cicd-demo` should be shown in the branch column.
Additional info:
Ref: https://coreos.slack.com/archives/CHG0KRB7G/p1667564311865459
This is a clone of issue OCPBUGS-4049. The following is the description of the original issue:
—
Description of problem:
In the case of CRC, we provision the cluster first, then create the disk image out of it, and that is what we share with our users. Until now we have always removed the pull secret from the cluster after provisioning it, using https://github.com/crc-org/snc/blob/master/snc.sh#L241-L258, and it worked without any issue until 4.11.x, but with 4.12.0-rc.1 we are seeing that MCO is not able to reconcile.
Version-Release number of selected component (if applicable):
How reproducible:
Steps to Reproduce:
1. Create a single node cluster using cluster bot `launch 4.12.0-rc.1 aws,single-node`
2. Once the cluster is provisioned, update the pull secret from the config:
```
$ cat pull-secret.yaml
apiVersion: v1
data:
  .dockerconfigjson: e30K
kind: Secret
metadata:
  name: pull-secret
  namespace: openshift-config
type: kubernetes.io/dockerconfigjson
$ oc replace -f pull-secret.yaml
```
3. Wait for MCO to reconcile and you will see the failure to reconcile MCO
Actual results:
$ oc get mcp NAME CONFIG UPDATED UPDATING DEGRADED MACHINECOUNT READYMACHINECOUNT UPDATEDMACHINECOUNT DEGRADEDMACHINECOUNT AGE master rendered-master-66086aa249a9f92b773403f7c3745ea4 False True True 1 0 0 1 94m worker rendered-worker-0c07becff7d3c982e24257080cc2981b True False False 0 0 0 0 94m $ oc get co machine-config NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE MESSAGE machine-config 4.12.0-rc.1 True False True 93m Failed to resync 4.12.0-rc.1 because: error during syncRequiredMachineConfigPools: [timed out waiting for the condition, error pool master is not ready, retrying. Status: (pool degraded: true total: 1, ready 0, updated: 0, unavailable: 0)] $ oc logs machine-config-daemon-nf9mg -n openshift-machine-config-operator [...] I1123 15:00:37.864581 10194 run.go:19] Running: podman pull -q --authfile /var/lib/kubelet/config.json quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ffa3568233298408421ff7da60e5c594fb63b2551c6ab53843eb51c8cf6838ba Error: initializing source docker://quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ffa3568233298408421ff7da60e5c594fb63b2551c6ab53843eb51c8cf6838ba: (Mirrors also failed: [quayio-pull-through-cache-us-west-2-ci.apps.ci.l2s4.p1.openshiftapps.com/openshift-release-dev/ocp-v4.0-art-dev@sha256:ffa3568233298408421ff7da60e5c594fb63b2551c6ab53843eb51c8cf6838ba: reading manifest sha256:ffa3568233298408421ff7da60e5c594fb63b2551c6ab53843eb51c8cf6838ba in quayio-pull-through-cache-us-west-2-ci.apps.ci.l2s4.p1.openshiftapps.com/openshift-release-dev/ocp-v4.0-art-dev: unauthorized: authentication required]): quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ffa3568233298408421ff7da60e5c594fb63b2551c6ab53843eb51c8cf6838ba: reading manifest sha256:ffa3568233298408421ff7da60e5c594fb63b2551c6ab53843eb51c8cf6838ba in quay.io/openshift-release-dev/ocp-v4.0-art-dev: unauthorized: access to the requested resource is not authorized W1123 15:00:39.186103 10194 run.go:45] podman failed: running podman pull -q --authfile /var/lib/kubelet/config.json quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ffa3568233298408421ff7da60e5c594fb63b2551c6ab53843eb51c8cf6838ba failed: Error: initializing source docker://quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ffa3568233298408421ff7da60e5c594fb63b2551c6ab53843eb51c8cf6838ba: (Mirrors also failed: [quayio-pull-through-cache-us-west-2-ci.apps.ci.l2s4.p1.openshiftapps.com/openshift-release-dev/ocp-v4.0-art-dev@sha256:ffa3568233298408421ff7da60e5c594fb63b2551c6ab53843eb51c8cf6838ba: reading manifest sha256:ffa3568233298408421ff7da60e5c594fb63b2551c6ab53843eb51c8cf6838ba in quayio-pull-through-cache-us-west-2-ci.apps.ci.l2s4.p1.openshiftapps.com/openshift-release-dev/ocp-v4.0-art-dev: unauthorized: authentication required]): quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ffa3568233298408421ff7da60e5c594fb63b2551c6ab53843eb51c8cf6838ba: reading manifest sha256:ffa3568233298408421ff7da60e5c594fb63b2551c6ab53843eb51c8cf6838ba in quay.io/openshift-release-dev/ocp-v4.0-art-dev: unauthorized: access to the requested resource is not authorized : exit status 125; retrying...
Expected results:
Additional info:
Description of problem:
When running node-density (245 pods/node) on a 120 node cluster, we see that there is a huge spike (~22s) in Avg pod-latency. When the spike occurs we see all the ovnkube-master pods go through a restart.
The restart happens because of the following panic in the ovnkube-master pods:
2022-08-10T04:04:44.494945179Z panic: reflect: call of reflect.Value.Len on ptr Value
Version-Release number of selected component (if applicable):
4.12.0-0.nightly-2022-08-09-114621
How reproducible:
Steps to Reproduce:
1. Run node-density on a 120 node cluster
Actual results:
Spike observed in pod-latency graph ~22s
Expected results:
Steady pod-latency graph ~4s
Additional info:
Description of problem:
See: https://issues.redhat.com/browse/CPSYN-143

tldr: Based on the previous direction that 4.12 was going to enforce PSA restricted by default, OLM had to make a few changes because the way we run catalog pods (and we have to run them that way because of how the opm binary worked) was incompatible with running restricted.

1) We set openshift-marketplace to enforce restricted (this was our choice, we didn't have to do it, but we did)
2) We updated the opm binary so catalog images using a newer opm binary don't have to run privileged
3) We added a field to catalogsource that allows you to choose whether to run the pod privileged (legacy mode) or restricted. The default is restricted. We made that the default so that users running their own catalogs in their own NSes (which would be default PSA enforcing) would be able to be successful w/o needing their NS upgraded to privileged.

Unfortunately this means:
1) legacy catalog images (i.e. using older opm binaries) won't run on 4.12 by default (the catalogsource needs to be modified to specify legacy mode)
2) legacy catalog images cannot be run in the openshift-marketplace NS since that NS does not allow privileged pods. This means legacy catalogs can't contribute to the global catalog (since catalogs must be in that NS to be in the global catalog).

Before 4.12 ships we need to:
1) remove the PSA restricted label on the openshift-marketplace NS
2) change the catalogsource securitycontextconfig mode default to use "legacy" as the default, not restricted

This gives catalog authors another release to update to using a newer opm binary that can run restricted, or get their NSes explicitly labeled as privileged (4.12 will not enforce restricted, so in 4.12 using the legacy mode will continue to work).

In 4.13 we will need to revisit what we want the default to be, since at that point catalogs will start breaking if they try to run in legacy mode in most NSes.
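For reference, the per-CatalogSource knob described above looks roughly like this (a sketch of the field only; the catalog name and image are placeholders, not a recommendation of a particular value):
```
apiVersion: operators.coreos.com/v1alpha1
kind: CatalogSource
metadata:
  name: my-legacy-catalog                      # example name
  namespace: openshift-marketplace
spec:
  sourceType: grpc
  image: quay.io/example/legacy-catalog:latest # hypothetical catalog image
  grpcPodConfig:
    # "legacy" runs the pod privileged (for catalogs built with older opm);
    # "restricted" is the PSA-restricted-compatible mode.
    securityContextConfig: legacy
```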
Version-Release number of selected component (if applicable):
How reproducible:
Steps to Reproduce:
1. 2. 3.
Actual results:
Expected results:
Additional info:
This is a clone of issue OCPBUGS-95. The following is the description of the original issue:
—
In an OpenShift cluster with OpenShiftSDN network plugin with egressIP and NMstate operator configured, there are some conditions when the egressIP is deconfigured from the network interface.
The bug is 100% reproducible.
Steps for reproducing the issue are:
1. Install a cluster with OpenShiftSDN network plugin.
2. Configure egressip for a project.
3. Install NMstate operator.
4. Create a NodeNetworkConfigurationPolicy.
5. Identify on which node the egressIP is present.
6. Restart the nmstate-handler pod running on the identified node.
7. Verify that the egressIP is no longer present.
Restarting the sdn pod related to the identified node will reconfigure the egressIP in the node.
This issue has a high impact since any changes triggered for the NMstate operator will prevent application traffic. For example, in the customer environment, the issue is triggered any time a new node is added to the cluster.
The expectation is that NMstate operator should not interfere with SDN configuration.
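For reference, a minimal NodeNetworkConfigurationPolicy of the sort used in step 4 might look like this (the policy name and DNS server are placeholders, and the apiVersion may differ by operator version):
```
apiVersion: nmstate.io/v1
kind: NodeNetworkConfigurationPolicy
metadata:
  name: example-dns-policy        # example name
spec:
  desiredState:
    dns-resolver:
      config:
        server:
        - 10.0.0.10               # placeholder DNS server
```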
This is a clone of issue OCPBUGS-3096. The following is the description of the original issue:
—
While the installer binary is statically linked, the terraform binaries shipped with it are dynamically linked.
This could give issues when running the installer on Linux and depending on the GLIBC version the specific Linux distribution has installed. It becomes a risk when switching the base image of the builders from ubi8 to ubi9 and trying to run the installer in cs8 or rhel8.
For example, building the installer on cs9 and trying to run it in a cs8 distribution leads to:
time="2022-10-31T14:31:47+01:00" level=debug msg="[INFO] running Terraform command: /root/test/terraform/bin/terraform version -json" time="2022-10-31T14:31:47+01:00" level=error msg="/root/test/terraform/bin/terraform: /lib64/libc.so.6: version `GLIBC_2.32' not found (required by /root/test/terraform/bin/terraform)" time="2022-10-31T14:31:47+01:00" level=error msg="/root/test/terraform/bin/terraform: /lib64/libc.so.6: version `GLIBC_2.34' not found (required by /root/test/terraform/bin/terraform)" time="2022-10-31T14:31:47+01:00" level=debug msg="[INFO] running Terraform command: /root/test/terraform/bin/terraform version -json" time="2022-10-31T14:31:47+01:00" level=error msg="/root/test/terraform/bin/terraform: /lib64/libc.so.6: version `GLIBC_2.32' not found (required by /root/test/terraform/bin/terraform)" time="2022-10-31T14:31:47+01:00" level=error msg="/root/test/terraform/bin/terraform: /lib64/libc.so.6: version `GLIBC_2.34' not found (required by /root/test/terraform/bin/terraform)" time="2022-10-31T14:31:47+01:00" level=debug msg="[INFO] running Terraform command: /root/test/terraform/bin/terraform init -no-color -force-copy -input=false -backend=true -get=true -upgrade=false -plugin-dir=/root/test/terraform/plugins" time="2022-10-31T14:31:47+01:00" level=error msg="/root/test/terraform/bin/terraform: /lib64/libc.so.6: version `GLIBC_2.32' not found (required by /root/test/terraform/bin/terraform)" time="2022-10-31T14:31:47+01:00" level=error msg="/root/test/terraform/bin/terraform: /lib64/libc.so.6: version `GLIBC_2.34' not found (required by /root/test/terraform/bin/terraform)" time="2022-10-31T14:31:47+01:00" level=error msg="failed to fetch Cluster: failed to generate asset \"Cluster\": failure applying terraform for \"cluster\" stage: failed to create cluster: failed doing terraform init: exit status 1\n/root/test/terraform/bin/terraform: /lib64/libc.so.6: version `GLIBC_2.32' not found (required by /root/test/terraform/bin/terraform)\n/root/test/terraform/bin/terraform: /lib64/libc.so.6: version `GLIBC_2.34' not found (required by /root/test/terraform/bin/terraform)\n"
How reproducible:Always
Steps to Reproduce:
1. Build the installer on cs9
2. Run the installer on cs8 until the terraform binaries are started
3. Looking at the terraform binary with ldd or file, you can see it is not a statically linked binary, and the error above might occur depending on the glibc version you are running on
Actual results:
Expected results:
The terraform and provider binaries have to be statically linked, just as the installer is.
Additional info:
This comes from a build of OKD/SCOS that is happening outside of Prow on a cs9-based builder image. One can use the Dockerfile at images/installer/Dockerfile.ci and replace the builder image with one like https://github.com/okd-project/images/blob/main/okd-builder.Dockerfile
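A quick way to confirm how a given binary is linked, and the usual knob for producing a static Go build (illustrative only, not the project's actual build invocation):
```
# Check whether the shipped terraform binary is dynamically linked.
file terraform/bin/terraform
ldd terraform/bin/terraform    # prints "not a dynamic executable" for static binaries

# A statically linked Go build generally requires disabling cgo.
CGO_ENABLED=0 go build -o terraform .
```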
Description of problem:
The service project and the host project both have a private DNS zone named "ipi-xpn-private-zone". The thing is, although platform.gcp.privateDNSZone.project is set to the host project, the installer checks the zone of the service project and complains that the DNS name does not match.
Version-Release number of selected component (if applicable):
$ openshift-install version
openshift-install 4.12.0-0.nightly-2022-10-25-210451
built from commit 14d496fdaec571fa97604a487f5df6a0433c0c68
release image registry.ci.openshift.org/ocp/release@sha256:d6cc07402fee12197ca1a8592b5b781f9f9a84b55883f126d60a3896a36a9b74
release architecture amd64
How reproducible:
Always, if both the service project and the host project have a private DNS zone with the same name.
Steps to Reproduce:
1. try IPI installation to a shared VPC, using "privateDNSZone" of the host project
Actual results:
$ openshift-install create cluster --dir test7
INFO Credentials loaded from file "/home/fedora/.gcp/osServiceAccount.json"
ERROR failed to fetch Metadata: failed to load asset "Install Config": failed to create install config: platform.gcp.privateManagedZone: Invalid value: "ipi-xpn-private-zone": dns zone jiwei-1026a.qe1.gcp.devcluster.openshift.com. did not match expected jiwei-1027a.qe-shared-vpc.qe.gcp.devcluster.openshift.com
$
Expected results:
The installer should check the private zone in the specified project (i.e. the host project).
Additional info:
$ yq-3.3.0 r test7/install-config.yaml platform gcp: projectID: openshift-qe region: us-central1 computeSubnet: installer-shared-vpc-subnet-2 controlPlaneSubnet: installer-shared-vpc-subnet-1 createFirewallRules: Disabled publicDNSZone: id: qe-shared-vpc project: openshift-qe-shared-vpc privateDNSZone: id: ipi-xpn-private-zone project: openshift-qe-shared-vpc network: installer-shared-vpc networkProjectID: openshift-qe-shared-vpc $ yq-3.3.0 r test7/install-config.yaml baseDomain qe-shared-vpc.qe.gcp.devcluster.openshift.com $ yq-3.3.0 r test7/install-config.yaml metadata creationTimestamp: null name: jiwei-1027a $ $ openshift-install create cluster --dir test7 INFO Credentials loaded from file "/home/fedora/.gcp/osServiceAccount.json" ERROR failed to fetch Metadata: failed to load asset "Install Config": failed to create install config: platform.gcp.privateManagedZone: Invalid value: "ipi-xpn-private-zone": dns zone jiwei-1026a.qe1.gcp.devcluster.openshift.com. did not match expected jiwei-1027a.qe-shared-vpc.qe.gcp.devcluster.openshift.com $ $ gcloud --project openshift-qe-shared-vpc dns managed-zones list --filter='name=qe-shared-vpc' NAME DNS_NAME DESCRIPTION VISIBILITY qe-shared-vpc qe-shared-vpc.qe.gcp.devcluster.openshift.com. public $ gcloud --project openshift-qe-shared-vpc dns managed-zones list --filter='name=ipi-xpn-private-zone' NAME DNS_NAME DESCRIPTION VISIBILITY ipi-xpn-private-zone jiwei-1027a.qe-shared-vpc.qe.gcp.devcluster.openshift.com. Preserved private zone for IPI XPN private $ gcloud dns managed-zones list --filter='name=ipi-xpn-private-zone' NAME DNS_NAME DESCRIPTION VISIBILITY ipi-xpn-private-zone jiwei-1026a.qe1.gcp.devcluster.openshift.com. Preserved private zone for IPI XPN private $ $ gcloud --project openshift-qe-shared-vpc dns managed-zones describe qe-shared-vpc cloudLoggingConfig: kind: dns#managedZoneCloudLoggingConfig creationTime: '2020-04-26T02:50:25.172Z' description: '' dnsName: qe-shared-vpc.qe.gcp.devcluster.openshift.com. id: '7036327024919173373' kind: dns#managedZone name: qe-shared-vpc nameServers: - ns-cloud-b1.googledomains.com. - ns-cloud-b2.googledomains.com. - ns-cloud-b3.googledomains.com. - ns-cloud-b4.googledomains.com. visibility: public $ $ gcloud --project openshift-qe-shared-vpc dns managed-zones describe ipi-xpn-private-zone cloudLoggingConfig: kind: dns#managedZoneCloudLoggingConfig creationTime: '2022-10-27T08:05:18.332Z' description: Preserved private zone for IPI XPN dnsName: jiwei-1027a.qe-shared-vpc.qe.gcp.devcluster.openshift.com. id: '5506116785330943369' kind: dns#managedZone name: ipi-xpn-private-zone nameServers: - ns-gcp-private.googledomains.com. privateVisibilityConfig: kind: dns#managedZonePrivateVisibilityConfig networks: - kind: dns#managedZonePrivateVisibilityConfigNetwork networkUrl: https://www.googleapis.com/compute/v1/projects/openshift-qe-shared-vpc/global/networks/installer-shared-vpc visibility: private $ $ gcloud dns managed-zones describe ipi-xpn-private-zone cloudLoggingConfig: kind: dns#managedZoneCloudLoggingConfig creationTime: '2022-10-26T06:42:52.268Z' description: Preserved private zone for IPI XPN dnsName: jiwei-1026a.qe1.gcp.devcluster.openshift.com. id: '7663537481778983285' kind: dns#managedZone name: ipi-xpn-private-zone nameServers: - ns-gcp-private.googledomains.com. 
privateVisibilityConfig: kind: dns#managedZonePrivateVisibilityConfig networks: - kind: dns#managedZonePrivateVisibilityConfigNetwork networkUrl: https://www.googleapis.com/compute/v1/projects/openshift-qe-shared-vpc/global/networks/installer-shared-vpc visibility: private $
This is a clone of issue OCPBUGS-5068. The following is the description of the original issue:
—
Description of problem:
virtual media provisioning fails when iLO Ironic driver is used
Version-Release number of selected component (if applicable):
4.13
How reproducible:
Always
Steps to Reproduce:
1. attempt virtual media provisioning on a node configured with ilo-virtualmedia:// drivers 2. 3.
Actual results:
Provisioning fails with "An auth plugin is required to determine endpoint URL" error
Expected results:
Provisioning succeeds
Additional info:
Relevant log snippet: 3742 2022-12-19T19:02:05.997747170Z 2022-12-19 19:02:05.995 1 ERROR ironic.drivers.modules.inspector [None req-e58ac1f2-fac6-4d28-be9e-983fa900a19b - - - - - -] Unable to start managed inspection for node e4445d43-3458-4cee-9cbe-6da1de75 78cd: An auth plugin is required to determine endpoint URL: keystoneauth1.exceptions.auth_plugins.MissingAuthPlugin: An auth plugin is required to determine endpoint URL 3743 2022-12-19T19:02:05.997747170Z 2022-12-19 19:02:05.995 1 ERROR ironic.drivers.modules.inspector Traceback (most recent call last): 3744 2022-12-19T19:02:05.997747170Z 2022-12-19 19:02:05.995 1 ERROR ironic.drivers.modules.inspector File "/usr/lib/python3.9/site-packages/ironic/drivers/modules/inspector.py", line 210, in _start_managed_inspection 3745 2022-12-19T19:02:05.997747170Z 2022-12-19 19:02:05.995 1 ERROR ironic.drivers.modules.inspector task.driver.boot.prepare_ramdisk(task, ramdisk_params=params) 3746 2022-12-19T19:02:05.997747170Z 2022-12-19 19:02:05.995 1 ERROR ironic.drivers.modules.inspector File "/usr/lib/python3.9/site-packages/ironic_lib/metrics.py", line 59, in wrapped 3747 2022-12-19T19:02:05.997747170Z 2022-12-19 19:02:05.995 1 ERROR ironic.drivers.modules.inspector result = f(*args, **kwargs) 3748 2022-12-19T19:02:05.997747170Z 2022-12-19 19:02:05.995 1 ERROR ironic.drivers.modules.inspector File "/usr/lib/python3.9/site-packages/ironic/drivers/modules/ilo/boot.py", line 408, in prepare_ramdisk 3749 2022-12-19T19:02:05.997747170Z 2022-12-19 19:02:05.995 1 ERROR ironic.drivers.modules.inspector iso = image_utils.prepare_deploy_iso(task, ramdisk_params, 3750 2022-12-19T19:02:05.997747170Z 2022-12-19 19:02:05.995 1 ERROR ironic.drivers.modules.inspector File "/usr/lib/python3.9/site-packages/ironic/drivers/modules/image_utils.py", line 624, in prepare_deploy_iso 3751 2022-12-19T19:02:05.997747170Z 2022-12-19 19:02:05.995 1 ERROR ironic.drivers.modules.inspector return prepare_iso_image(inject_files=inject_files) 3752 2022-12-19T19:02:05.997747170Z 2022-12-19 19:02:05.995 1 ERROR ironic.drivers.modules.inspector File "/usr/lib/python3.9/site-packages/ironic/drivers/modules/image_utils.py", line 537, in _prepare_iso_image 3753 2022-12-19T19:02:05.997747170Z 2022-12-19 19:02:05.995 1 ERROR ironic.drivers.modules.inspector image_url = img_handler.publish_image( 3754 2022-12-19T19:02:05.997747170Z 2022-12-19 19:02:05.995 1 ERROR ironic.drivers.modules.inspector File "/usr/lib/python3.9/site-packages/ironic/drivers/modules/image_utils.py", line 193, in publish_image 3755 2022-12-19T19:02:05.997747170Z 2022-12-19 19:02:05.995 1 ERROR ironic.drivers.modules.inspector swift_api = swift.SwiftAPI() 3756 2022-12-19T19:02:05.997747170Z 2022-12-19 19:02:05.995 1 ERROR ironic.drivers.modules.inspector File "/usr/lib/python3.9/site-packages/ironic/common/swift.py", line 66, in __init__ 3757 2022-12-19T19:02:05.997747170Z 2022-12-19 19:02:05.995 1 ERROR ironic.drivers.modules.inspector endpoint = keystone.get_endpoint('swift', session=session)
This is a clone of issue OCPBUGS-5466. The following is the description of the original issue:
—
Description of problem:
It is possible to change some of the fields in default catalogSource specs and the Marketplace Operator will not revert the changes
Version-Release number of selected component (if applicable):
4.13.0 and back
How reproducible:
Always
Steps to Reproduce:
1. Create a 4.13.0 OpenShift cluster
2. Set the redhat-operator catalogSource.spec.grpcPodConfig.SecurityContextConfig field to `legacy`.
Actual results:
The field remains set to `legacy` mode.
Expected results:
The field is reverted to `restricted` mode.
Additional info:
This code needs to be updated to account for new fields in the catalogSource spec.
Description of problem:
The name of "Role" on Compute -> Nodes page should update to "Roles" to match the name in the CLI
Compare with other resources, the title of the column should keep pace with the name in CLI
Version-Release number of selected component (if applicable):
4.12.0-0.nightly-2022-08-15-150248
How reproducible:
Always
Steps to Reproduce:
1. Log in to OCP with the CLI, and use the command below to get node information
$ oc get nodes
2. Go to the Compute -> Nodes page and check the column name of "Role"
3.
Actual results:
The CLI returns the information shown below, and the title of the column is "ROLES"
NAME STATUS ROLES AGE VERSION
ip-10-0-145-18.us-east-2.compute.internal Ready worker 9h v1.24.0+4f0dd4d
ip-10-0-145-203.us-east-2.compute.internal Ready master 9h v1.24.0+4f0dd4d
ip-10-0-163-205.us-east-2.compute.internal Ready master 9h v1.24.0+4f0dd4d
ip-10-0-169-118.us-east-2.compute.internal Ready worker 9h v1.24.0+4f0dd4d
ip-10-0-198-234.us-east-2.compute.internal Ready master 9h v1.24.0+4f0dd4d
ip-10-0-212-34.us-east-2.compute.internal Ready worker 9h v1.24.0+4f0dd4d
But in the UI, the column name is "Role", which is incorrect. (Attached)
Expected results:
The title of "Role" should update to "Roles"
Additional info:
This is a clone of issue OCPBUGS-2598. The following is the description of the original issue:
—
Description of problem:
Liveness probe of ipsec pods fails with large clusters. Currently the command that is executed in the ipsec container is `ovs-appctl -t ovs-monitor-ipsec ipsec/status && ipsec status`. The problem is with the "ipsec/status" subcommand. In clusters with a high node count this command returns a list with all the node daemons of the cluster, which means that as the node count rises, the completion time of the command rises too.
This makes the main command, `ovs-appctl -t ovs-monitor-ipsec`, hang until the subcommand is finished.
As the liveness and readiness probe values are hardcoded in the manifest of the ipsec container (https://github.com/openshift/cluster-network-operator/blob/9c1181e34316d34db49d573698d2779b008bcc20/bindata/network/ovn-kubernetes/common/ipsec.yaml), the 60-second liveness timeout of the container probe starts to be insufficient as the node count grows. This resulted in a cluster with 170+ nodes having 15+ ipsec pods in a CrashLoopBackOff state.
Version-Release number of selected component (if applicable):
OpenShift Container Platform 4.10, but I think the same will be visible in other versions too.
How reproducible:
I was not able to reproduce this because an extremely high amount of resources is needed, and I think there is no point as we have already spotted the issue.
Steps to Reproduce:
1. Install an OpenShift cluster with IPsec enabled
2. Scale to 170+ nodes or more
3. Notice that the ipsec pods start getting into a CrashLoopBackOff state with failed liveness/readiness probes.
Actual results:
IPsec pods are stuck in a CrashLoopBackOff state
Expected results:
IPsec pods work normally
Additional info:
We have provided a workaround where the CVO and CNO operators are scaled to 0 replicas so that we can increase the liveness probe timeout to a value of 600, which recovered the cluster. As a next step the customer will try to reduce the node count and restore the default liveness timeout value, along with bringing the operators back, to see if the cluster will stabilize.
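For illustration, a minimal sketch of the kind of probe override used in the workaround; apart from the probe command quoted above, the field values here are assumptions, not the shipped manifest:
```
# Hypothetical excerpt of the ipsec container's liveness probe with an
# increased timeout; the shipped manifest hardcodes a much smaller value.
livenessProbe:
  exec:
    command:
    - /bin/bash
    - -c
    - ovs-appctl -t ovs-monitor-ipsec ipsec/status && ipsec status
  initialDelaySeconds: 15
  periodSeconds: 60
  timeoutSeconds: 600   # raised as a temporary workaround on large clusters
```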
This is a clone of issue OCPBUGS-4207. The following is the description of the original issue:
—
Description of problem:
We added a line to increase debugging verbosity to aid in debugging WRKLDS-540
Version-Release number of selected component (if applicable):
13
How reproducible:
very
Steps to Reproduce:
1.just a revert 2. 3.
Actual results:
Extra debugging lines are present in the openshift-config-operator pod logs
Expected results:
Extra debugging lines no longer in the openshift-config-operator pod logs
Additional info:
This is a clone of issue OCPBUGS-2988. The following is the description of the original issue:
—
Description of problem:
openshift-apiserver, openshift-oauth-apiserver and kube-apiserver pods cannot validate the certificate when trying to reach etcd, reporting certificate validation errors:

}. Err: connection error: desc = "transport: authentication handshake failed: x509: certificate is valid for ::1, 127.0.0.1, ::1, fd69::2, not 2620:52:0:198::10"
W1018 11:36:43.523673 15 logging.go:59] [core] [Channel #186 SubChannel #187] grpc: addrConn.createTransport failed to connect to {
  "Addr": "[2620:52:0:198::10]:2379",
  "ServerName": "2620:52:0:198::10",
  "Attributes": null,
  "BalancerAttributes": null,
  "Type": 0,
  "Metadata": null
}. Err: connection error: desc = "transport: authentication handshake failed: x509: certificate is valid for ::1, 127.0.0.1, ::1, fd69::2, not 2620:52:0:198::10"
Version-Release number of selected component (if applicable):
4.12.0-0.nightly-2022-10-18-041406
How reproducible:
100%
Steps to Reproduce:
1. Deploy SNO with single stack IPv6 via ZTP procedure
Actual results:
Deployment times out and some of the operators aren't deployed successfully.
NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE MESSAGE
authentication 4.12.0-0.nightly-2022-10-18-041406 False False True 124m APIServerDeploymentAvailable: no apiserver.openshift-oauth-apiserver pods available on any node....
baremetal 4.12.0-0.nightly-2022-10-18-041406 True False False 112m
cloud-controller-manager 4.12.0-0.nightly-2022-10-18-041406 True False False 111m
cloud-credential 4.12.0-0.nightly-2022-10-18-041406 True False False 115m
cluster-autoscaler 4.12.0-0.nightly-2022-10-18-041406 True False False 111m
config-operator 4.12.0-0.nightly-2022-10-18-041406 True False False 124m
console
control-plane-machine-set 4.12.0-0.nightly-2022-10-18-041406 True False False 111m
csi-snapshot-controller 4.12.0-0.nightly-2022-10-18-041406 True False False 111m
dns 4.12.0-0.nightly-2022-10-18-041406 True False False 111m
etcd 4.12.0-0.nightly-2022-10-18-041406 True False True 121m ClusterMemberControllerDegraded: could not get list of unhealthy members: giving up getting a cached client after 3 tries
image-registry 4.12.0-0.nightly-2022-10-18-041406 False True True 104m Available: The registry is removed...
ingress 4.12.0-0.nightly-2022-10-18-041406 True True True 111m The "default" ingress controller reports Degraded=True: DegradedConditions: One or more other status conditions indicate a degraded state: DeploymentReplicasAllAvailable=False (DeploymentReplicasNotAvailable: 0/1 of replicas are available)
insights 4.12.0-0.nightly-2022-10-18-041406 True False False 118s
kube-apiserver 4.12.0-0.nightly-2022-10-18-041406 True False False 102m
kube-controller-manager 4.12.0-0.nightly-2022-10-18-041406 True False True 107m GarbageCollectorDegraded: error fetching rules: Get "https://thanos-querier.openshift-monitoring.svc:9091/api/v1/rules": dial tcp [fd02::3c5f]:9091: connect: connection refused
kube-scheduler 4.12.0-0.nightly-2022-10-18-041406 True False False 107m
kube-storage-version-migrator 4.12.0-0.nightly-2022-10-18-041406 True False False 117m
machine-api 4.12.0-0.nightly-2022-10-18-041406 True False False 111m
machine-approver 4.12.0-0.nightly-2022-10-18-041406 True False False 111m
machine-config 4.12.0-0.nightly-2022-10-18-041406 True False False 115m
marketplace 4.12.0-0.nightly-2022-10-18-041406 True False False 116m
monitoring False True True 98m deleting Thanos Ruler Route failed: Timeout: request did not complete within requested timeout - context deadline exceeded, deleting UserWorkload federate Route failed: Timeout: request did not complete within requested timeout - context deadline exceeded, reconciling Alertmanager Route failed: retrieving Route object failed: the server was unable to return a response in the time allotted, but may still be processing the request (get routes.route.openshift.io alertmanager-main), reconciling Thanos Querier Route failed: retrieving Route object failed: the server was unable to return a response in the time allotted, but may still be processing the request (get routes.route.openshift.io thanos-querier), reconciling Prometheus API Route failed: retrieving Route object failed: the server was unable to return a response in the time allotted, but may still be processing the request (get routes.route.openshift.io prometheus-k8s), prometheuses.monitoring.coreos.com "k8s" not found
network 4.12.0-0.nightly-2022-10-18-041406 True False False 124m
node-tuning 4.12.0-0.nightly-2022-10-18-041406 True False False 111m
openshift-apiserver 4.12.0-0.nightly-2022-10-18-041406 True False False 104m
openshift-controller-manager 4.12.0-0.nightly-2022-10-18-041406 True False False 107m
openshift-samples False True False 103m The error the server was unable to return a response in the time allotted, but may still be processing the request (get imagestreams.image.openshift.io) during openshift namespace cleanup has left the samples in an unknown state
operator-lifecycle-manager 4.12.0-0.nightly-2022-10-18-041406 True False False 111m
operator-lifecycle-manager-catalog 4.12.0-0.nightly-2022-10-18-041406 True False False 111m
operator-lifecycle-manager-packageserver 4.12.0-0.nightly-2022-10-18-041406 True False False 106m
service-ca 4.12.0-0.nightly-2022-10-18-041406 True False False 124m
storage 4.12.0-0.nightly-2022-10-18-041406 True False False 111m
Expected results:
Deployment succeeds without issues.
Additional info:
I was unable to run must-gather so attaching the pods logs copied from the host file system.
Description of problem:
According to https://issues.redhat.com/browse/OCPBUGS-705 (thanks to Junyun for sharing the test env/results for the install part), we need the fix in vsphere-problem-detector. Currently it reports the following missing privileges when using a pre-existing folder and/or resource pool with ReadOnly permission:
1. vCenter cluster set to ReadOnly permission:
I0902 10:07:50.324782 1 vsphere_check.go:244] CheckComputeClusterPermissions:jima-permission-q84s8-worker-86gd4 failed: missing privileges for compute cluster workloads: Resource.AssignVMToPool, VApp.AssignResourcePool, VApp.Import, VirtualMachine.Config.AddNewDisk
2. datacenter set to ReadOnly permission:
I0902 08:09:19.462001 1 vsphere_check.go:225] CheckAccountPermissions failed: missing privileges for datacenter OCP-DC: Resource.AssignVMToPool, VApp.Import, VirtualMachine.Config.AddExistingDisk, VirtualMachine.Config.AddNewDisk, VirtualMachine.Config.AddRemoveDevice, VirtualMachine.Config.AdvancedConfig, VirtualMachine.Config.Annotation, VirtualMachine.Config.CPUCount, VirtualMachine.Config.DiskExtend, VirtualMachine.Config.DiskLease, VirtualMachine.Config.EditDevice, VirtualMachine.Config.Memory, VirtualMachine.Config.RemoveDisk, VirtualMachine.Config.Rename, VirtualMachine.Config.ResetGuestInfo, VirtualMachine.Config.Resource, VirtualMachine.Config.Settings, VirtualMachine.Config.UpgradeVirtualHardware, VirtualMachine.Interact.GuestControl, VirtualMachine.Interact.PowerOff, VirtualMachine.Interact.PowerOn, VirtualMachine.Interact.Reset, VirtualMachine.Inventory.Create, VirtualMachine.Inventory.CreateFromExisting, VirtualMachine.Inventory.Delete, VirtualMachine.Provisioning.Clone, VirtualMachine.Provisioning.DeployTemplate, VirtualMachine.Provisioning.MarkAsTemplate, Folder.Create, Folder.Delete
Version-Release number of selected component (if applicable):
4.12.0-0.nightly-2022-09-02-194931
How reproducible:
Always
Steps to Reproduce:
See Description of problem
Actual results:
The vsphere-problem-detector operator reports missing privileges when using a pre-existing folder and/or resource pool with ReadOnly permission.
Expected results:
The vsphere-problem-detector operator should not report missing privileges in that case.
Additional info:
Package golang.org/x/text/language has a vulnerability which could cause a denial of service (included on version v0.3.7 of golang.org/x/text/language)
This is a clone of issue OCPBUGS-3069. The following is the description of the original issue:
—
Description of problem:
On the cluster settings page, it shows the available upgrades. After the user chooses one target version and clicks "Upgrade", even after waiting for a long time, there is no info about the upgrade status.
Version-Release number of selected component (if applicable):
4.12.0-0.nightly-2022-10-25-210451
How reproducible:
Always
Steps to Reproduce:
1. Login to the console with an available upgrade for the cluster, select a target version in the available version list. Then click "Update". Check the upgrade progress on the cluster settings page.
2. Check upgrade info from the client with "oc adm upgrade".
Actual results:
1. There is not any information or upgrade progress shown on the page.
2. It shows info about retrieving target version failed:
$ oc adm upgrade
Cluster version is 4.12.0-0.nightly-2022-10-25-210451
ReleaseAccepted=False
  Reason: RetrievePayload
  Message: Retrieving payload failed version="4.12.0-0.nightly-2022-10-27-053332" image="registry.ci.openshift.org/ocp/release@sha256:fd4e9bec095b845c6f726f9ce17ee70449971b8286bb9b7478c06c5f697f05f1" failure=The update cannot be verified: unable to verify sha256:fd4e9bec095b845c6f726f9ce17ee70449971b8286bb9b7478c06c5f697f05f1 against keyrings: verifier-public-key-redhat
Upstream: https://openshift-release.apps.ci.l2s4.p1.openshiftapps.com/graph
Channel: stable-4.12
Recommended updates:
  VERSION                              IMAGE
  4.12.0-0.nightly-2022-11-01-135441   registry.ci.openshift.org/ocp/release@sha256:f79d25c821a73496f4664a81a123925236d0c7818fd6122feb953bc64e91f5d0
  4.12.0-0.nightly-2022-10-31-232349   registry.ci.openshift.org/ocp/release@sha256:cb2d157805abc413394fc579776d3f4406b0a2c2ed03047b6f7958e6f3d92622
  4.12.0-0.nightly-2022-10-28-001703   registry.ci.openshift.org/ocp/release@sha256:c914c11492cf78fb819f4b617544cd299c3a12f400e106355be653c0013c2530
  4.12.0-0.nightly-2022-10-27-053332   registry.ci.openshift.org/ocp/release@sha256:fd4e9bec095b845c6f726f9ce17ee70449971b8286bb9b7478c06c5f697f05f1
Expected results:
1. It should also show this kind of message on the console page if retrieving the target payload failed, so that the user knows the actual result after trying to upgrade.
Additional info:
This is a clone of issue OCPBUGS-3085. The following is the description of the original issue:
—
Description of problem:
IPI on BareMetal Dual stack deployment failed and Bootstrap timed out before completion
Version-Release number of selected component (if applicable):
4.12.0-0.nightly-2022-10-25-210451
How reproducible:
Always
Steps to Reproduce:
1. Deploy IPI on BM using Dual stack 2. 3.
Actual results:
Deployment failed
Expected results:
Should pass
Additional info:
Same deployment works fine on 4.11
Description of problem:
The health_statuses_insights metric includes disabled rules in its "total" count. The other fields show the correct amount. In the code linked below, we can see that the "Disabled" rules are only skipped when assigning the TotalRisk value.
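For illustration only, a minimal Go sketch of the counting behaviour described above; the type and field names are hypothetical and not the actual insights-operator code. A disabled rule is skipped for both the per-risk buckets and the total.

package main

import "fmt"

type healthCheck struct {
	TotalRisk string // e.g. "low", "moderate", "important", "critical"
	Disabled  bool
}

// countHealthStatuses tallies active checks per risk level plus a total,
// skipping disabled rules everywhere, not only for the per-risk buckets.
func countHealthStatuses(checks []healthCheck) map[string]int {
	counts := map[string]int{"total": 0}
	for _, c := range checks {
		if c.Disabled {
			continue // a disabled rule must not be counted at all
		}
		counts[c.TotalRisk]++
		counts["total"]++
	}
	return counts
}

func main() {
	checks := []healthCheck{
		{TotalRisk: "moderate"},
		{TotalRisk: "moderate"},
		{TotalRisk: "moderate", Disabled: true},
	}
	fmt.Println(countHealthStatuses(checks)) // map[moderate:2 total:2]
}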
How reproducible:
Always
Steps to Reproduce:
1. Upload a fake archive to trigger health checks (for example with rule CVE_2020_8555_kubernetes)
2. Disable one of the rules through https://console.redhat.com/api/insights-results-aggregator/v1/clusters/{cluster.id}/rules/{rule}/error_key/{error_key}/disable
3. Create support secret and set endpoint="https://httpstat.us/200"
4. Restart the insights operator
5. Wait for alerts to trigger
6. Check the health_statuses_insights metrics.
rule:
ccx_rules_ocp.external.rules.ocp_version_end_of_life.report
error_key:
OCP4X_BEYOND_EOL
Actual results:
"moderate" health_statuses_insights shows 2 triggers "total" shows 3. Therefore, it is accounting for the deactivated rule.
Expected results:
"moderate" health_statuses_insights shows 2 triggers "total" health_statuses_insights shows 2 triggers (doesn't account for deactivated rule)
Additional info:
If there is any issue triggering these events, you may contact me and I can help with the steps.
Description of problem:
When the user runs:
openshift-install agent create image --dir cluster-manifests
but the manifests are either not present in cluster-manifests or the directory is missing, the error generated by the tool leads users to believe that they are missing some tool dependency:
ERROR failed to write asset (Agent Installer ISO) to disk: image reader not available
Version-Release number of selected component (if applicable):4.11.0
How reproducible: 100%
Steps to Reproduce:
1. rm -fr /tmp/cluster-manifests && mkdir /tmp/cluster-manifests
2. openshift-install agent create image --dir cluster-manifests
Actual results:
ERROR failed to write asset (Agent Installer ISO) to disk: image reader not available
Expected results:
Error: Missing manifests in the specified cluster manifest directory: "/tmp/cluster-manifests"
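For illustration only, a hedged sketch (not the installer's actual code) of a pre-check that would produce a clearer error when the manifest directory is missing or empty; the function name and message are hypothetical:

package main

import (
	"fmt"
	"os"
)

// checkManifestDir returns a descriptive error when the cluster manifest
// directory does not exist or contains no files, instead of surfacing a
// misleading "image reader not available" failure later on.
func checkManifestDir(dir string) error {
	entries, err := os.ReadDir(dir)
	if os.IsNotExist(err) {
		return fmt.Errorf("cluster manifest directory %q does not exist", dir)
	}
	if err != nil {
		return err
	}
	if len(entries) == 0 {
		return fmt.Errorf("missing manifests in the specified cluster manifest directory: %q", dir)
	}
	return nil
}

func main() {
	if err := checkManifestDir("/tmp/cluster-manifests"); err != nil {
		fmt.Fprintln(os.Stderr, "Error:", err)
		os.Exit(1)
	}
}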
Additional info:
Description of problem:
In OCP 4.9, the package-server-manager was introduced to manage the packageserver CSV. However, when OCP 4.8 is upgraded to 4.9, the packageserver stays stuck at v0.17.0 (the version in OCP 4.8), and v0.18.3 (the version in OCP 4.9) does not roll out.
Version-Release number of selected component (if applicable):
How reproducible:
Always
Steps to Reproduce:
1. Install OCP 4.8
2. Upgrade to OCP 4.9
$ oc get clusterversion
NAME      VERSION                             AVAILABLE   PROGRESSING   SINCE   STATUS
version   4.8.0-0.nightly-2022-08-31-160214   True        True          50m     Working towards 4.9.47: 619 of 738 done (83% complete)
$ oc get clusterversion
NAME      VERSION   AVAILABLE   PROGRESSING   SINCE   STATUS
version   4.9.47    True        False         4m26s   Cluster version is 4.9.47
Actual results:
Check packageserver CSV. It's in v0.17.0:
$ oc get csv
NAME            DISPLAY          VERSION   REPLACES   PHASE
packageserver   Package Server   0.17.0               Succeeded
Expected results:
packageserver CSV is at 0.18.3
Additional info:
packageserver CSV version in 4.8: https://github.com/openshift/operator-framework-olm/blob/release-4.8/manifests/0000_50_olm_15-packageserver.clusterserviceversion.yaml#L12
packageserver CSV version in 4.9: https://github.com/openshift/operator-framework-olm/blob/release-4.9/pkg/manifests/csv.yaml#L8
This is a clone of issue OCPBUGS-2851. The following is the description of the original issue:
—
Description of problem:
The current implementation of registries.conf support is not working as expected. This bug report will outline the expectations of how we believe this should work.
The containers/image project defines a configuration file called registries.conf, which controls how image pulls can be redirected to another registry. Effectively the pull request for a given registry is redirected to another registry which can satisfy the image pull request instead. The specification for the registries.conf file is located here. For tools such as podman and skopeo, this configuration file allows those tools to indicate where images should be pulled from, and the containers/image project rewrites the image reference on the fly and tries to get the image from the first location it can, preferring these "alternate locations" and then falling back to the original location if one of the alternate locations can't satisfy the image request.
An important aspect of this redirection mechanism is it allows the "host:port" and "namespace" portions of the image reference to be redirected. To be clear on the nomenclature used in the registries.conf specification, a namespace refers to zero or more slash separated sections leading up to the image name (which is called repo in the specification and has the tag or digest after it. See repo(:_tag|@digest) below) and the host[:port] refers to the domain where the image registry is being hosted.
Example:
host[:port]/namespace[/namespace…]/repo(:_tag|@digest)
For example, if we have an image called myimage@sha:1234 and the image normally resides in quay.io/foo/myimage@sha:1234, you could redirect the image pull request to myregistry.com/bar/baz/myimage@sha:1234. Note that in this example the alternate registry location is on a different host, and the namespace "path" is different too.
In a typical development scenario, image references within an OLM catalog should always point to a production location where the image is intended to be pulled from when a catalog is published publicly. Doing this prevents publishing a catalog which contains image references to internal repositories, which would never be accessible by a customer. By using the registries.conf redirection mechanism, we can perform testing even before the images are officially published to public locations, and we can redirect the image reference from a production location to an internal repository for testing purposes. Below is a simple example of a registries.conf file that redirects image pull requests away from prodlocation.io to preprodlocation.com:
[[registry]] location = "prodlocation.io/xx" insecure = false blocked = false mirror-by-digest-only = true prefix = "" [[registry.mirror]] location = "preprodlocation.com/xx" insecure = false
Version-Release number of selected component (if applicable):
4.12
How reproducible:
Always
Steps to Reproduce:
oc mirror -c ImageSetConfiguration.yaml --use-oci-feature --oci-feature-action mirror --oci-insecure-signature-policy --oci-registries-config registries.conf --dest-skip-tls docker://localhost:5000/example/test
Example registries.conf
[[registry]] prefix = "" insecure = false blocked = false location = "prod.com/abc" mirror-by-digest-only = true [[registry.mirror]] location = "internal.exmaple.io/cp" insecure = false [[registry]] prefix = "" insecure = false blocked = false location = "quay.io" mirror-by-digest-only = true [[registry.mirror]] location = "internal.exmaple.io/abcd" insecure = false
Actual results:
images are not pulled from "internal" registry
Expected results:
images should be pulled from "internal" registry
Additional info:
The current implementation in oc mirror creates its own structs to approximate the ones provided by the containers/image project, but it might not be necessary to do that. Since the oc mirror project already uses containers/image as a dependency, it could leverage the FindRegistry function, which takes a image reference, loads the registries.conf information and returns the most appropriate [[registry]] reference (in the form of Registry struct) or nil if no match was found. Obviously custom processing will be necessary to do something useful with the Registry instance. Using this code is not a requirement, just a suggestion of another possible path to load the configuration.
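As a rough sketch of the suggestion above (not the oc mirror implementation), resolving an image reference against registries.conf with the containers/image helper might look like the following; the reference string and file path are placeholders:

package main

import (
	"fmt"
	"log"

	"github.com/containers/image/v5/pkg/sysregistriesv2"
	"github.com/containers/image/v5/types"
)

func main() {
	sys := &types.SystemContext{
		// Path to a registries.conf like the example above (assumed location).
		SystemRegistriesConfPath: "registries.conf",
	}

	// Look up the [[registry]] entry that matches this image reference.
	reg, err := sysregistriesv2.FindRegistry(sys, "prod.com/abc/myimage")
	if err != nil {
		log.Fatal(err)
	}
	if reg == nil {
		fmt.Println("no matching [[registry]] entry; pull from the original location")
		return
	}

	// Prefer the configured mirrors, falling back to the original location.
	for _, m := range reg.Mirrors {
		fmt.Printf("candidate mirror: %s (insecure=%v)\n", m.Location, m.Insecure)
	}
	fmt.Printf("original location: %s\n", reg.Location)
}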
Description of problem:
After editing a MachineSet on AWS (just changing an annotation), it shows a warning:
[~] $ oc -n openshift-machine-api edit machineset.machine.openshift.io/ci-ln-hlf4lft-76ef8-p7rc4-worker-us-west-1b
W1111 16:06:32.385856 88719 warnings.go:70] incorrect GroupVersionKind for AWSMachineProviderConfig object: machine.openshift.io/v1beta1, Kind=AWSMachineProviderConfig
machineset.machine.openshift.io/ci-ln-hlf4lft-76ef8-p7rc4-worker-us-west-1b edited
Version-Release number of selected component (if applicable):
How reproducible:
Always
Steps to Reproduce:
1. Add an annotation or label to a machine 2. 3.
Actual results:
There is a warning about incorrect GroupVersionKind for AWSMachineProviderConfig object
Expected results:
No warnings shown
Additional info:
cloud-controller-manager does not react to changes to infrastructure secrets (in the OpenStack case: clouds.yaml).
As a consequence, if credentials are rotated (and the old ones are rendered useless), load balancer creation and deletion will not succeed any more. Restarting the controller fixes the issue on a live cluster.
Logs show that it couldn't find the application credentials:
Dec 19 12:58:58.909: INFO: At 2022-12-19 12:53:58 +0000 UTC - event for udp-lb-default-svc: {service-controller } EnsuringLoadBalancer: Ensuring load balancer
Dec 19 12:58:58.909: INFO: At 2022-12-19 12:53:58 +0000 UTC - event for udp-lb-default-svc: {service-controller } SyncLoadBalancerFailed: Error syncing load balancer: failed to ensure load balancer: failed to get subnet to create load balancer for service e2e-test-openstack-q9jnk/udp-lb-default-svc: Unable to re-authenticate: Expected HTTP response code [200 204 300] when accessing [GET https://compute.rdo.mtl2.vexxhost.net/v2.1/0693e2bb538c42b79a49fe6d2e61b0fc/servers/fbeb21b8-05f0-4734-914e-926b6a6225f1/os-interface], but got 401 instead {"error": {"code": 401, "title": "Unauthorized", "message": "The request you have made requires authentication."}}: Resource not found: [POST https://identity.rdo.mtl2.vexxhost.net/v3/auth/tokens], error message: {"error":{"code":404,"message":"Could not find Application Credential: 1b78233956b34c6cbe5e1c95445972a4.","title":"Not Found"}}
OpenStack CI has been instrumented to restart CCM after credentials rotation, so that we silence this particular issue and avoid masking any other. That workaround must be reverted once this bug is fixed.
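For context, one possible shape of a fix is to watch the credentials Secret and rebuild the cloud client when it changes. A rough client-go sketch follows; the namespace and secret name are assumptions, and this is not the actual cloud-controller-manager code:

package main

import (
	"time"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/client-go/informers"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
	"k8s.io/client-go/tools/cache"
)

func main() {
	cfg, err := rest.InClusterConfig()
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	// Watch only the namespace holding the cloud credentials (assumed name).
	factory := informers.NewSharedInformerFactoryWithOptions(
		client, 10*time.Minute, informers.WithNamespace("openshift-cloud-controller-manager"))
	inf := factory.Core().V1().Secrets().Informer()

	inf.AddEventHandler(cache.ResourceEventHandlerFuncs{
		UpdateFunc: func(oldObj, newObj interface{}) {
			s := newObj.(*corev1.Secret)
			if s.Name == "openstack-cloud-credentials" { // assumed secret name
				// Re-read clouds.yaml and rebuild the OpenStack client here,
				// instead of requiring a manual CCM restart.
			}
		},
	})

	stop := make(chan struct{})
	factory.Start(stop)
	factory.WaitForCacheSync(stop)
	select {}
}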
4.2 AWS boot images such as ami-01e7fdcb66157b224 include the old ignition.platform.id=ec2 kernel command line parameter. When launched against 4.12.0-rc.3, new machines fail with:
coreos-assemblers used ignition.platform.id=ec2, but pivoted to =aws here. It's not clear when that made its way into new AWS boot images. Some time after 4.2 and before 4.6.
Afterburn dropped support for legacy command-line options like the ec2 slug in 5.0.0. But it's not clear when that shipped into RHCOS. The release controller points at this RHCOS diff, but that has afterburn-0-5.3.0-1 builds on both sides.
100%, given a sufficiently old AMI and a sufficiently new OpenShift release target.
The new Machine will get to Provisioned but fail to progress to Running. systemd journal logs will include unknown provider 'ec2' for Afterburn units.
Old boot-image AMIs can successfully update to 4.12.
Alternatively, we pin down the set of exposed boot images sufficiently that users with older clusters can audit for exposure and avoid the issue by updating to more modern boot images (although updating boot images is not trivial, see RFE-3001 and the Ignition spec 2 to 3 transition discussed in kcs#5514051).
Description of problem:
When disable all helm chart repos the helm navigation item is disabled.
To re-enable the Helm charts again, the user can search for HCP or PHCPs, but the action menu doesn't work if no other Helm chart repo is enabled.
Version-Release number of selected component (if applicable):
Only 4.12 (4.11 is fine)
How reproducible:
Always
Steps to Reproduce:
1. Switch to developer perspective
2. Navigate to Helm > Repos > Edit the default repo and disable it
3. Helm Navigation should disappear and the content area maybe switch to 404, that's fine.
4. Navigate to Search and select HelmChartRepository as resource
5. Click on the action menu (kebab icon) to edit the HCR
Actual results:
The action menu is not shown
Expected results:
The action menu should be shown so that the user can edit or delete the HCR.
Additional info:
Description of problem:
According to OCP 4.11 doc (https://docs.openshift.com/container-platform/4.11/installing/installing_gcp/installing-gcp-account.html#installation-gcp-enabling-api-services_installing-gcp-account), the Service Usage API (serviceusage.googleapis.com) is an optional API service to be enabled. But, the installation cannot succeed if this API is disabled.
Version-Release number of selected component (if applicable):
4.12.0-0.nightly-2022-09-25-071630
How reproducible:
Always, if the Service Usage API is disabled in the GCP project.
Steps to Reproduce:
1. Make sure the Service Usage API (serviceusage.googleapis.com) is disabled in the GCP project. 2. Try IPI installation in the GCP project.
Actual results:
The installation would fail finally, without any worker machines launched.
Expected results:
Installation should succeed, or the OCP doc should be updated.
Additional info:
Please see the attached must-gather logs (http://virt-openshift-05.lab.eng.nay.redhat.com/jiwei/jiwei-0926-03-cnxn5/) and the sanity check results. FYI: if the API is enabled, without changing anything else, the installation succeeds.
Description of problem:
Setting up a GitHub App from the console lacks the required permissions.
Events and Permissions: https://pipelinesascode.com/docs/install/github_apps/
Version-Release number of selected component (if applicable):
4.12
How reproducible:
Always
Steps to Reproduce:
1. Setup Github App from administrator perspective.
2. Create Repository and configure it to use the Github App method.
Actual results:
Creates Github App with limited permission.
Expected results:
Created Github App should contain all the required permission and should trigger the pipelinerun successfully on git events.
Additional info:
The console needs to update the default_events and default_permissions here; they have to match the CLI - refer this.
We also need to update the "See Github permission" section in the UI as well.
This is a clone of issue OCPBUGS-5018. The following is the description of the original issue:
—
Description of problem:
When upgrading from 4.11 to 4.12 an IPI AWS cluster which included Machineset and BYOH Windows nodes, the upgrade hanged while trying to upgrade the machine-api component: $ oc get clusterversion NAME VERSION AVAILABLE PROGRESSING SINCE STATUS version 4.11.0-0.nightly-2022-12-16-190443 True True 117m Working towards 4.12.0-rc.5: 214 of 827 done (25% complete), waiting on machine-api $ oc get co NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE MESSAGE authentication 4.11.0-0.nightly-2022-12-16-190443 True False False 4h47m baremetal 4.11.0-0.nightly-2022-12-16-190443 True False False 4h59m cloud-controller-manager 4.12.0-rc.5 True False False 5h3m cloud-credential 4.11.0-0.nightly-2022-12-16-190443 True False False 5h4m cluster-autoscaler 4.11.0-0.nightly-2022-12-16-190443 True False False 4h59m config-operator 4.12.0-rc.5 True False False 5h1m console 4.11.0-0.nightly-2022-12-16-190443 True False False 4h43m csi-snapshot-controller 4.11.0-0.nightly-2022-12-16-190443 True False False 5h dns 4.11.0-0.nightly-2022-12-16-190443 True False False 4h59m etcd 4.12.0-rc.5 True False False 4h58m image-registry 4.11.0-0.nightly-2022-12-16-190443 True False False 4h54m ingress 4.11.0-0.nightly-2022-12-16-190443 True False False 4h55m insights 4.11.0-0.nightly-2022-12-16-190443 True False False 4h53m kube-apiserver 4.12.0-rc.5 True False False 4h50m kube-controller-manager 4.12.0-rc.5 True False False 4h57m kube-scheduler 4.12.0-rc.5 True False False 4h57m kube-storage-version-migrator 4.11.0-0.nightly-2022-12-16-190443 True False False 5h machine-api 4.11.0-0.nightly-2022-12-16-190443 True True False 4h56m Progressing towards operator: 4.12.0-rc.5 machine-approver 4.11.0-0.nightly-2022-12-16-190443 True False False 5h machine-config 4.11.0-0.nightly-2022-12-16-190443 True False False 4h59m marketplace 4.11.0-0.nightly-2022-12-16-190443 True False False 4h59m monitoring 4.11.0-0.nightly-2022-12-16-190443 True False False 4h53m network 4.11.0-0.nightly-2022-12-16-190443 True False False 5h3m node-tuning 4.11.0-0.nightly-2022-12-16-190443 True False False 4h59m openshift-apiserver 4.11.0-0.nightly-2022-12-16-190443 True False False 4h53m openshift-controller-manager 4.11.0-0.nightly-2022-12-16-190443 True False False 4h56m openshift-samples 4.11.0-0.nightly-2022-12-16-190443 True False False 4h55m operator-lifecycle-manager 4.11.0-0.nightly-2022-12-16-190443 True False False 5h operator-lifecycle-manager-catalog 4.11.0-0.nightly-2022-12-16-190443 True False False 5h operator-lifecycle-manager-packageserver 4.11.0-0.nightly-2022-12-16-190443 True False False 4h55m service-ca 4.11.0-0.nightly-2022-12-16-190443 True False False 5h storage 4.11.0-0.nightly-2022-12-16-190443 True False False 5h When digging a little deeper into the exact component hanging, we observed that it was the machine-api-termination-handler that was running in the Machine Windows workers, the one that was in ImagePullBackOff state: $ oc get pods -n openshift-machine-api NAME READY STATUS RESTARTS AGE cluster-autoscaler-operator-6ff66b6655-kpgp9 2/2 Running 0 5h5m cluster-baremetal-operator-6dbcd6f76b-d9dwd 2/2 Running 0 5h5m machine-api-controllers-cdb8d979b-79xlh 7/7 Running 0 94m machine-api-operator-86bf4f6d79-g2vwm 2/2 Running 0 97m machine-api-termination-handler-fcfq2 0/1 ImagePullBackOff 0 94m machine-api-termination-handler-gj4pf 1/1 Running 0 4h57m machine-api-termination-handler-krwdg 0/1 ImagePullBackOff 0 94m machine-api-termination-handler-l95x2 1/1 Running 0 4h54m machine-api-termination-handler-p6sw6 1/1 Running 0 
4h57m $ oc describe pods machine-api-termination-handler-fcfq2 -n openshift-machine-api Name: machine-api-termination-handler-fcfq2 Namespace: openshift-machine-api Priority: 2000001000 Priority Class Name: system-node-critical ..................................................................... Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal Scheduled 94m default-scheduler Successfully assigned openshift-machine-api/machine-api-termination-handler-fcfq2 to ip-10-0-145-114.us-east-2.compute.internal Warning FailedCreatePodSandBox 94m kubelet Failed to create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox "7b80f84cc547310f5370a7dde7c651ca661dd40ebd0730296329d1cbe8981b37": plugin type="win-ov erlay" name="OVNKubernetesHybridOverlayNetwork" failed (add): error while adding HostComputeEndpoint: failed to create the new HostComputeEndpoint: hcnCreateEndpoint failed in Win32: The object already exists. (0x1392) {"Success":false,"Error":"The object already exists. ","ErrorCode":2147947410} Warning FailedCreatePodSandBox 94m kubelet Failed to create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox "6b3e020a419dde8359a31b56129c65821011e232467d712f9f5081f32fe380c9": plugin type="win-ov erlay" name="OVNKubernetesHybridOverlayNetwork" failed (add): error while adding HostComputeEndpoint: failed to create the new HostComputeEndpoint: hcnCreateEndpoint failed in Win32: The object already exists. (0x1392) {"Success":false,"Error":"The object already exists. ","ErrorCode":2147947410} Normal Pulling 93m (x4 over 94m) kubelet Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9aa96cb22047b62f785b87bf81ec1762703c1489079dd33008085b5585adc258" Warning Failed 93m (x4 over 94m) kubelet Error: ErrImagePull Normal BackOff 4m39s (x393 over 94m) kubelet Back-off pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9aa96cb22047b62f785b87bf81ec1762703c1489079dd33008085b5585adc258" $ oc get pods -n openshift-machine-api -o wide NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES cluster-autoscaler-operator-6ff66b6655-kpgp9 2/2 Running 0 5h8m 10.130.0.10 ip-10-0-180-35.us-east-2.compute.internal <none> <none> cluster-baremetal-operator-6dbcd6f76b-d9dwd 2/2 Running 0 5h8m 10.130.0.8 ip-10-0-180-35.us-east-2.compute.internal <none> <none> machine-api-controllers-cdb8d979b-79xlh 7/7 Running 0 97m 10.128.0.144 ip-10-0-138-246.us-east-2.compute.internal <none> <none> machine-api-operator-86bf4f6d79-g2vwm 2/2 Running 0 100m 10.128.0.143 ip-10-0-138-246.us-east-2.compute.internal <none> <none> machine-api-termination-handler-fcfq2 0/1 ImagePullBackOff 0 97m 10.129.0.7 ip-10-0-145-114.us-east-2.compute.internal <none> <none> machine-api-termination-handler-gj4pf 1/1 Running 0 5h 10.0.223.37 ip-10-0-223-37.us-east-2.compute.internal <none> <none> machine-api-termination-handler-krwdg 0/1 ImagePullBackOff 0 97m 10.128.0.4 ip-10-0-143-111.us-east-2.compute.internal <none> <none> machine-api-termination-handler-l95x2 1/1 Running 0 4h57m 10.0.172.211 ip-10-0-172-211.us-east-2.compute.internal <none> <none> machine-api-termination-handler-p6sw6 1/1 Running 0 5h 10.0.146.227 ip-10-0-146-227.us-east-2.compute.internal <none> <none> [jfrancoa@localhost byoh-auto]$ oc get nodes -o wide | grep ip-10-0-143-111.us-east-2.compute.internal ip-10-0-143-111.us-east-2.compute.internal Ready worker 4h24m v1.24.0-2566+5157800f2a3bc3 10.0.143.111 <none> Windows Server 2019 Datacenter 10.0.17763.3770 
containerd://1.18 [jfrancoa@localhost byoh-auto]$ oc get nodes -o wide | grep ip-10-0-145-114.us-east-2.compute.internal ip-10-0-145-114.us-east-2.compute.internal Ready worker 4h18m v1.24.0-2566+5157800f2a3bc3 10.0.145.114 <none> Windows Server 2019 Datacenter 10.0.17763.3770 containerd://1.18 [jfrancoa@localhost byoh-auto]$ oc get machine.machine.openshift.io -n openshift-machine-api -o wide | grep ip-10-0-145-114.us-east-2.compute.internal jfrancoa-1912-aws-rvkrp-windows-worker-us-east-2a-v57sh Running m5a.large us-east-2 us-east-2a 4h37m ip-10-0-145-114.us-east-2.compute.internal aws:///us-east-2a/i-0b69d52c625c46a6a running [jfrancoa@localhost byoh-auto]$ oc get machine.machine.openshift.io -n openshift-machine-api -o wide | grep ip-10-0-143-111.us-east-2.compute.internal jfrancoa-1912-aws-rvkrp-windows-worker-us-east-2a-j6gkc Running m5a.large us-east-2 us-east-2a 4h37m ip-10-0-143-111.us-east-2.compute.internal aws:///us-east-2a/i-05e422c0051707d16 running This is blocking the whole upgrade process, as the upgrade is not able to move further from this component.
Version-Release number of selected component (if applicable):
$ oc get clusterversion NAME VERSION AVAILABLE PROGRESSING SINCE STATUS version 4.11.0-0.nightly-2022-12-16-190443 True True 141m Working towards 4.12.0-rc.5: 214 of 827 done (25% complete), waiting on machine-api $ oc version Client Version: 4.11.0-0.ci-2022-06-09-065118 Kustomize Version: v4.5.4 Server Version: 4.11.0-0.nightly-2022-12-16-190443 Kubernetes Version: v1.25.4+77bec7a
How reproducible:
Always
Steps to Reproduce:
1. Deploy a 4.11 IPI AWS cluster with Windows workers using a MachineSet 2. Perform the upgrade to 4.12 3. Wait for the upgrade to hang on the machine-api component
Actual results:
The upgrade hangs when upgrading the machine-api component.
Expected results:
The upgrade succeeds.
Additional info:
Description of problem:
In looking at jobs on an accepted payload at https://amd64.ocp.releases.ci.openshift.org/releasestream/4.12.0-0.ci/release/4.12.0-0.ci-2022-08-30-122201 , I observed this job https://prow.ci.openshift.org/view/gs/origin-ci-test/logs/periodic-ci-openshift-release-master-ci-4.12-e2e-aws-sdn-serial/1564589538850902016 with "Undiagnosed panic detected in pod" "pods/openshift-controller-manager-operator_openshift-controller-manager-operator-74bf985788-8v9qb_openshift-controller-manager-operator.log.gz:E0830 12:41:48.029165 1 runtime.go:79] Observed a panic: "invalid memory address or nil pointer dereference" (runtime error: invalid memory address or nil pointer dereference)"
Version-Release number of selected component (if applicable):
4.12
How reproducible:
probably relatively easy to reproduce (but not consistently) given it's happened several times according to this search: https://search.ci.openshift.org/?search=Observed+a+panic%3A+%22invalid+memory+address+or+nil+pointer+dereference%22&maxAge=48h&context=1&type=junit&name=&excludeName=&maxMatches=5&maxBytes=20971520&groupBy=job
Steps to Reproduce:
1. let nightly payloads run or run one of the presubmit jobs mentioned in the search above 2. 3.
Actual results:
Observed a panic: "invalid memory address or nil pointer dereference" (runtime error: invalid memory address or nil pointer dereference)}
Expected results:
no panics
Additional info:
We should refactor the assisted-installer ops.go code to run exec commands through an interface, so that they can be mocked in tests.
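A minimal sketch of the kind of interface meant here; the names are illustrative and not the actual assisted-installer code:

package ops

import "os/exec"

// Executor abstracts running external commands so tests can mock them.
type Executor interface {
	ExecCommand(command string, args ...string) (string, error)
}

// realExecutor shells out for production use.
type realExecutor struct{}

func (realExecutor) ExecCommand(command string, args ...string) (string, error) {
	out, err := exec.Command(command, args...).CombinedOutput()
	return string(out), err
}

// Ops depends on the interface, not on os/exec directly, so a mock
// Executor can be injected in unit tests.
type Ops struct {
	exec Executor
}

// Reboot is an illustrative method; the real ops.go has its own set of calls.
func (o *Ops) Reboot() error {
	_, err := o.exec.ExecCommand("shutdown", "-r", "now")
	return err
}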
Description of the problem:
Noticed there were no thread IDs in the assisted-installer logs when debugging a 240-node cluster deployment with MCE (Slack thread), making it difficult to debug.
How reproducible: 100%
Steps to reproduce:
1. Create cluster using assisted service and start the install
2. Look at the assisted-installer logs
Actual results:
Logs look like
time="2022-07-14T16:17:31Z" level=info msg="Start complete installation step, with params success: true, error info: "
Expected results: The thread ID would also be printed so we can understand which thread a log line came from.
Adding setReportCaller to true will also help
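A rough sketch of both suggestions with logrus; Go does not expose goroutine IDs directly, so parsing runtime.Stack is a common workaround (the helper below is illustrative, not the assisted-installer code):

package main

import (
	"bytes"
	"runtime"
	"strconv"

	log "github.com/sirupsen/logrus"
)

// goid extracts the current goroutine ID from the stack header
// ("goroutine 42 [running]: ...").
func goid() uint64 {
	buf := make([]byte, 64)
	buf = buf[:runtime.Stack(buf, false)]
	buf = bytes.TrimPrefix(buf, []byte("goroutine "))
	buf = buf[:bytes.IndexByte(buf, ' ')]
	id, _ := strconv.ParseUint(string(buf), 10, 64)
	return id
}

func main() {
	// Include the calling file/line in every entry.
	log.SetReportCaller(true)
	// Attach the goroutine ID so concurrent installer steps can be told apart.
	log.WithField("goroutine", goid()).Info("Start complete installation step")
}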
Description of problem:
When trying to enable Hardware Backed Management Ports (e.g. Virtual Functions) on BF2 in NIC mode, or on any other MLX NICs (CX-6, CX-5), by setting the node_mgmt_port_netdev_flags flags to a VF in the CNO, the OVN-K node will crash.
Version-Release number of selected component (if applicable):
4.12.0
How reproducible:
Always
Steps to Reproduce:
Start by enabling OvS HWOL and setting sriovnetworknodepolicy: https://docs.openshift.com/container-platform/4.11/networking/hardware_networks/configuring-hardware-offloading.html
1. Scale down CNO: oc scale --replicas=0 deploy/network-operator -n openshift-network-operator
2. Make changes to OVN-K node: oc edit daemonsets ovnkube-node -n openshift-ovn-kubernetes
   a. Find "node_mgmt_port_netdev_flags=" and replace it with something like this:
      node_mgmt_port_netdev_flags=
      if [[ ${K8S_NODE} != *"master"* ]]; then
        node_mgmt_port_netdev_flags="--ovnkube-node-mgmt-port-netdev=ens1f0v0"
      fi
   b. Additionally you have to add the "node_mgmt_port_netdev_flags" to the "exec /usr/bin/ovnkube --init-node "${K8S_NODE}"" call in the same script, since this is missing.
3. Save the edit.
4. Observe OVN-K node on baremetal worker nodes.
Actual results:
I0822 14:21:56.250285 496356 ovs.go:204] Exec(3): stderr: ""
I0822 14:21:56.250290 496356 node.go:310] Detected support for port binding with external IDs
I0822 14:21:56.250516 496356 management-port-dpu.go:181] Setup management port dpu host: ens1f0v0
F0822 14:21:56.250568 496356 ovnkube.go:133] failed to set management port name. file exists
Workaround is to go to the node and run this command:
sudo ovs-vsctl del-port br-int ovn-k8s-mp0
Expected results:
There should not be any errors when changing node_mgmt_port_netdev_flags to a valid value.
Additional info:
Reported here: https://github.com/ovn-org/ovn-kubernetes/pull/3160 Discussed briefly here: https://issues.redhat.com/browse/OCPBUGS-4098 Fixed Upstream here: https://github.com/ovn-org/ovn-kubernetes/pull/3251
This is a clone of issue OCPBUGS-6092. The following is the description of the original issue:
—
Description of problem:
While configuring a 4.12.0 dualstack baremetal cluster, ovs-configuration.service fails:
Jan 19 22:01:05 openshift-worker-0.kni-qe-4.lab.eng.rdu2.redhat.com configure-ovs.sh[14588]: Attempt 10 to bring up connection ovs-if-phys1
Jan 19 22:01:05 openshift-worker-0.kni-qe-4.lab.eng.rdu2.redhat.com configure-ovs.sh[14588]: + nmcli conn up ovs-if-phys1
Jan 19 22:01:05 openshift-worker-0.kni-qe-4.lab.eng.rdu2.redhat.com configure-ovs.sh[26588]: Error: Connection activation failed: No suitable device found for this connection (device eno1np0 not available because profile is not compatible with device (mismatching interface name)).
Jan 19 22:01:05 openshift-worker-0.kni-qe-4.lab.eng.rdu2.redhat.com configure-ovs.sh[14588]: + s=4
Jan 19 22:01:05 openshift-worker-0.kni-qe-4.lab.eng.rdu2.redhat.com configure-ovs.sh[14588]: + sleep 5
Jan 19 22:01:10 openshift-worker-0.kni-qe-4.lab.eng.rdu2.redhat.com configure-ovs.sh[14588]: + '[' 4 -eq 0 ']'
Jan 19 22:01:10 openshift-worker-0.kni-qe-4.lab.eng.rdu2.redhat.com configure-ovs.sh[14588]: + false
Jan 19 22:01:10 openshift-worker-0.kni-qe-4.lab.eng.rdu2.redhat.com configure-ovs.sh[14588]: + echo 'ERROR: Cannot bring up connection ovs-if-phys1 after 10 attempts'
Jan 19 22:01:10 openshift-worker-0.kni-qe-4.lab.eng.rdu2.redhat.com configure-ovs.sh[14588]: ERROR: Cannot bring up connection ovs-if-phys1 after 10 attempts
Jan 19 22:01:10 openshift-worker-0.kni-qe-4.lab.eng.rdu2.redhat.com configure-ovs.sh[14588]: + return 4
Jan 19 22:01:10 openshift-worker-0.kni-qe-4.lab.eng.rdu2.redhat.com configure-ovs.sh[14588]: + handle_exit
Jan 19 22:01:10 openshift-worker-0.kni-qe-4.lab.eng.rdu2.redhat.com configure-ovs.sh[14588]: + e=4
Jan 19 22:01:10 openshift-worker-0.kni-qe-4.lab.eng.rdu2.redhat.com configure-ovs.sh[14588]: + '[' 4 -eq 0 ']'
Jan 19 22:01:10 openshift-worker-0.kni-qe-4.lab.eng.rdu2.redhat.com configure-ovs.sh[14588]: + echo 'ERROR: configure-ovs exited with error: 4'
Jan 19 22:01:10 openshift-worker-0.kni-qe-4.lab.eng.rdu2.redhat.com configure-ovs.sh[14588]: ERROR: configure-ovs exited with error: 4
Version-Release number of selected component (if applicable):
4.12.0
How reproducible:
So far 100%
Steps to Reproduce:
1. Deploy a dualstack baremetal cluster with bonded interfaces (configured with MC and not NMState within install-config.yaml)
2. Run migration to the second interface, part of machine config:
   - contents:
       source: data:text/plain;charset=utf-8,bond0.117
     filesystem: root
     mode: 420
     path: /etc/ovnk/extra_bridge
3. Install operators:
   * kubevirt-hyperconverged
   * sriov-network-operator
   * cluster-logging
   * elasticsearch-operator
4. Start applying node-tuning profiles
5. During node reboots the ovs-configuration service fails
Actual results:
ovs-configuration service fails on some nodes resulting in ovnkube-node-* pods failure
oc get po -n openshift-ovn-kubernetes
NAME                   READY   STATUS             RESTARTS          AGE
ovnkube-master-dvgx7   6/6     Running            8                 16h
ovnkube-master-vs7mp   6/6     Running            6                 16h
ovnkube-master-zrm4c   6/6     Running            6                 16h
ovnkube-node-2g8mb     4/5     CrashLoopBackOff   175 (3m48s ago)   16h
ovnkube-node-bfbcc     4/5     CrashLoopBackOff   176 (64s ago)     16h
ovnkube-node-cj6vf     5/5     Running            5                 16h
ovnkube-node-f92rm     5/5     Running            5                 16h
ovnkube-node-nmjpn     5/5     Running            5                 16h
ovnkube-node-pfv5z     4/5     CrashLoopBackOff   163 (4m53s ago)   15h
ovnkube-node-z5vf9     5/5     Running            10                15h
Expected results:
ovs-configuration service succeeds on all nodes
Additional info:
Description of problem:
When the cluster install finished, wait-for install-complete command didn't exit as expected.
Version-Release number of selected component (if applicable):
How reproducible:
Steps to Reproduce:
1. Get the latest agent-installer and build image:
   git clone https://github.com/openshift/installer.git
   cd installer/
   hack/build.sh
   Edit agent-config and install-config yaml file.
   Create the agent.iso image:
   OPENSHIFT_INSTALL_RELEASE_IMAGE_OVERRIDE=quay.io/openshift-release-dev/ocp-release:4.12.0-ec.3-x86_64 bin/openshift-install agent create image --log-level debug
2. Install the SNO cluster:
   virt-install --connect qemu:///system -n control-0 -r 33000 --vcpus 8 --cdrom ./agent.iso --disk pool=installer,size=120 --boot uefi,hd,cdrom --os-variant=rhel8.5 --network network=default,mac=52:54:00:aa:aa:aa --wait=-1
3. Run 'bin/openshift agent wait-for bootstrap-complete --log-level debug' and the command finished as expected.
4. After 'bootstrap' completion, run 'bin/openshift agent wait-for install-complete --log-level debug'; the command didn't finish as expected.
Actual results:
Expected results:
Additional info:
This is a clone of issue OCPBUGS-3084. The following is the description of the original issue:
—
Upstream Issue: https://github.com/kubernetes/kubernetes/issues/77603
Long log lines get corrupted when using '--timestamps' by the Kubelet.
The root cause is that the buffer reads up to a new line. If the line is greater than 4096 bytes and '--timestamps' is turned on, the kubelet will write the timestamp and the partial log line. We will need to refactor the ReadLogs function to allow for a partial line read.
apiVersion: v1
kind: Pod
metadata:
name: logs
spec:
restartPolicy: Never
containers:
- name: logs
image: fedora
args:
- bash
- -c
- 'for i in `seq 1 10000000`; do echo -n $i; done'
kubectl logs logs --timestamps
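A simplified sketch of the partial-line handling described above (not the actual kubelet ReadLogs code): write the timestamp only when a new logical line starts, and keep appending fragments until the newline is seen.

package main

import (
	"bufio"
	"fmt"
	"io"
	"os"
	"strings"
	"time"
)

// copyWithTimestamps prefixes each logical log line with a timestamp.
// When bufio returns a partial line (isPrefix == true), the timestamp is
// only written for the first fragment, so long lines are not corrupted.
func copyWithTimestamps(r io.Reader, w io.Writer) error {
	br := bufio.NewReaderSize(r, 4096)
	atStart := true
	for {
		frag, isPrefix, err := br.ReadLine()
		if len(frag) > 0 {
			if atStart {
				fmt.Fprintf(w, "%s ", time.Now().Format(time.RFC3339Nano))
			}
			w.Write(frag)
			if !isPrefix {
				io.WriteString(w, "\n")
			}
			atStart = !isPrefix
		}
		if err == io.EOF {
			return nil
		}
		if err != nil {
			return err
		}
	}
}

func main() {
	// A single 10000-byte logical line; it should get exactly one timestamp.
	long := strings.Repeat("x", 10000) + "\n"
	copyWithTimestamps(strings.NewReader(long), os.Stdout)
}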
This is a clone of issue OCPBUGS-1428. The following is the description of the original issue:
—
Description of problem:
When using an OperatorGroup attached to a service account, AND if there is a secret present in the namespace, the operator installation will fail with the message: the service account does not have any API secret sa=testx-ns/testx-sa. This issue seems similar to https://bugzilla.redhat.com/show_bug.cgi?id=2094303 - which was resolved in 4.11.0 - however, the new element now is that the presence of a secret in the namespace is causing the issue. The name of the secret seems significant - suggesting something somewhere is depending on the order that secrets are listed in. For example, if the secret in the namespace is called "asecret", the problem does not occur. If it is called "zsecret", the problem always occurs.
"zsecret" is not a "kubernetes.io/service-account-token". The issue I have raised here relates to Opaque secrets - zsecret is an Opaque secret. The issue may apply to other types of secrets, but specifically my issue is that when there is an opaque secret present in the namespace, the operator install fails as described. I aught to be allowed to have an opaque secret present in the namespace where I am installing the operator.
Version-Release number of selected component (if applicable):
4.11.0 & 4.11.1
How reproducible:
100% reproducible
Steps to Reproduce:
1.Create namespace: oc new-project testx-ns 2. oc apply -f api-secret-issue.yaml
Actual results:
Expected results:
Additional info:
API YAML:
cat api-secret-issue.yaml
apiVersion: v1
kind: Secret
metadata:
name: zsecret
namespace: testx-ns
annotations:
kubernetes.io/service-account.name: testx-sa
type: Opaque
stringData:
mykey: mypass
—
apiVersion: v1
kind: ServiceAccount
metadata:
name: testx-sa
namespace: testx-ns
—
kind: OperatorGroup
apiVersion: operators.coreos.com/v1
metadata:
name: testx-og
namespace: testx-ns
spec:
serviceAccountName: "testx-sa"
targetNamespaces:
- testx-ns
—
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
name: testx-role
namespace: testx-ns
rules:
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
name: testx-rolebinding
namespace: testx-ns
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: Role
name: testx-role
subjects:
—
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
name: etcd-operator
namespace: testx-ns
spec:
channel: singlenamespace-alpha
installPlanApproval: Automatic
name: etcd
source: community-operators
sourceNamespace: openshift-marketplace
Our CMO e2e tests create several containers besides the standard CMO deployment. These pods do currently not set any security context capabilities. Currently this creates a warning like so:
W0705 08:35:38.590283 15206 warnings.go:70] would violate PodSecurity "restricted:v1.24": allowPrivilegeEscalation != false (container "alertmanager-webhook-e2e-testutil" must set securityContext.allowPrivilegeEscalation=false), unrestricted capabilities (container "alertmanager-webhook-e2e-testutil" must set securityContext.capabilities.drop=["ALL"]), runAsNonRoot != true (pod or container "alertmanager-webhook-e2e-testutil" must set securityContext.runAsNonRoot=true), seccompProfile (pod or container "alertmanager-webhook-e2e-testutil" must set securityContext.seccompProfile.type to "RuntimeDefault" or "Localhost")
We should be proactive and set security capability constraints. From this run, this seems to impact the following pods/containers:
Both are used more than once.
We rely on the user providing accurate information about the MAC addresses in the agent-config, because at the point we read it we haven't seen the hosts yet. However, if the user gets this wrong then chaos may ensue.
Once inventory is available, we should validate that the user has not:
and fail the install if they have.
And possibly other alerts. Declaring namespace labels on alerts makes it easy to find the source or affected resource, as described here. But because Insights alerts are based on metrics exported by the cluster-version operator, they inherit source information from the CVO, and end up looking like:
ALERTS{alertname="SimpleContentAccessNotAvailable", alertstate="firing", condition="SCAAvailable", endpoint="metrics", instance="10.58.57.116:9099", job="cluster-version-operator", name="insights", namespace="openshift-cluster-version", pod="cluster-version-operator-5d8579fb58-p5hfn", prometheus="openshift-monitoring/k8s", reason="NotFound", receive="true", service="cluster-version-operator", severity="info"}
Adding namespace: openshift-insights to the labels block for InsightsDisabled and SimpleContentAccessNotAvailable would avoid this confusion.
You might also want to clear the job and service labels as irrelevant source information. And you might want to clear the pod label to avoid churning alerts when the CVO rolls out a new pod. You can get the label clearing by wrapping the expr with max without (job, pod, service) (...) or similar.
This is a clone of issue OCPBUGS-4656. The following is the description of the original issue:
—
Description of problem:
`/etc/hostname` may exist, but be empty. The `vsphere-hostname` service should check that the file is not empty instead of just that it exists. OKD's machine-os-content starting from F37 has an empty /etc/hostname file, which breaks joining workers in vSphere IPI.
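The actual vsphere-hostname service is a systemd unit, so the Go snippet below is only an illustration of the intended condition: an existing but empty /etc/hostname should be treated the same as a missing one.

package main

import (
	"fmt"
	"os"
	"strings"
)

// hostnameConfigured returns true only when the file exists and contains a
// non-blank hostname; an empty file should not count as configured.
func hostnameConfigured(path string) bool {
	data, err := os.ReadFile(path)
	if err != nil {
		return false
	}
	return strings.TrimSpace(string(data)) != ""
}

func main() {
	if !hostnameConfigured("/etc/hostname") {
		fmt.Println("no static hostname set; fall back to setting it dynamically")
	}
}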
Version-Release number of selected component (if applicable):
How reproducible:
Always
Steps to Reproduce:
1. Install OKD w/ workers on vsphere 2. 3.
Actual results:
Workers get hostname resolved using NM
Expected results:
Workers get hostname resolved using vmtoolsd
Additional info:
This is a clone of issue OCPBUGS-4724. The following is the description of the original issue:
—
Description of problem: Installing OCP 4.12 on top of OpenStack 16.1 following the multi-availability-zone installation creates a cluster where the egressIP annotations ("cloud.network.openshift.io/egress-ipconfig") are created with an empty value for the workers:
$ oc get nodes NAME STATUS ROLES AGE VERSION ostest-kncvv-master-0 Ready control-plane,master 9h v1.25.4+86bd4ff ostest-kncvv-master-1 Ready control-plane,master 9h v1.25.4+86bd4ff ostest-kncvv-master-2 Ready control-plane,master 9h v1.25.4+86bd4ff ostest-kncvv-worker-0-qxr5g Ready worker 8h v1.25.4+86bd4ff ostest-kncvv-worker-1-bmvvv Ready worker 8h v1.25.4+86bd4ff ostest-kncvv-worker-2-pbgww Ready worker 8h v1.25.4+86bd4ff $ oc get node ostest-kncvv-worker-0-qxr5g -o json | jq -r '.metadata.annotations' { "alpha.kubernetes.io/provided-node-ip": "10.196.2.156", "cloud.network.openshift.io/egress-ipconfig": "null", "csi.volume.kubernetes.io/nodeid": "{\"cinder.csi.openstack.org\":\"8327aef0-c6a7-4bf6-8f8f-d25c9abd9bce\",\"manila.csi.openstack.org\":\"ostest-kncvv-worker-0-qxr5g\"}", "k8s.ovn.org/host-addresses": "[\"10.196.2.156\",\"172.17.5.154\"]", "k8s.ovn.org/l3-gateway-config": "{\"default\":{\"mode\":\"shared\",\"interface-id\":\"br-ex_ostest-kncvv-worker-0-qxr5g\",\"mac-address\":\"fa:16:3e:7e:b5:70\",\"ip-addresses\":[\"10.196.2.156/16\"],\"ip-address\":\"10.196.2.156/16\",\"next-hops\":[\"10.196.0.1\"],\"next-hop\":\"10.196.0.1\",\"node-port-enable\":\"true\",\"vlan-id\":\"0\"}}", "k8s.ovn.org/node-chassis-id": "fd777b73-aa64-4fa5-b0b1-70c3bebc2ac6", "k8s.ovn.org/node-gateway-router-lrp-ifaddr": "{\"ipv4\":\"100.64.0.6/16\"}", "k8s.ovn.org/node-mgmt-port-mac-address": "42:e8:4f:42:9f:7d", "k8s.ovn.org/node-primary-ifaddr": "{\"ipv4\":\"10.196.2.156/16\"}", "k8s.ovn.org/node-subnets": "{\"default\":\"10.128.2.0/23\"}", "machine.openshift.io/machine": "openshift-machine-api/ostest-kncvv-worker-0-qxr5g", "machineconfiguration.openshift.io/controlPlaneTopology": "HighlyAvailable", "machineconfiguration.openshift.io/currentConfig": "rendered-worker-31323caf2b510e5b81179bb8ec9c150f", "machineconfiguration.openshift.io/desiredConfig": "rendered-worker-31323caf2b510e5b81179bb8ec9c150f", "machineconfiguration.openshift.io/desiredDrain": "uncordon-rendered-worker-31323caf2b510e5b81179bb8ec9c150f", "machineconfiguration.openshift.io/lastAppliedDrain": "uncordon-rendered-worker-31323caf2b510e5b81179bb8ec9c150f", "machineconfiguration.openshift.io/reason": "", "machineconfiguration.openshift.io/state": "Done", "volumes.kubernetes.io/controller-managed-attach-detach": "true" }
Furthermore, the following is observed on openshift-cloud-network-config-controller:
$ oc logs -n openshift-cloud-network-config-controller cloud-network-config-controller-5fcdb6fcff-6sddj | grep egress
I1212 00:34:14.498298 1 node_controller.go:146] Setting annotation: 'cloud.network.openshift.io/egress-ipconfig: null' on node: ostest-kncvv-worker-2-pbgww
I1212 00:34:15.777129 1 node_controller.go:146] Setting annotation: 'cloud.network.openshift.io/egress-ipconfig: null' on node: ostest-kncvv-worker-0-qxr5g
I1212 00:38:13.115115 1 node_controller.go:146] Setting annotation: 'cloud.network.openshift.io/egress-ipconfig: null' on node: ostest-kncvv-worker-1-bmvvv
I1212 01:58:54.414916 1 node_controller.go:146] Setting annotation: 'cloud.network.openshift.io/egress-ipconfig: null' on node: ostest-kncvv-worker-0-drd5l
I1212 02:01:03.312655 1 node_controller.go:146] Setting annotation: 'cloud.network.openshift.io/egress-ipconfig: null' on node: ostest-kncvv-worker-1-h976w
I1212 02:04:11.656408 1 node_controller.go:146] Setting annotation: 'cloud.network.openshift.io/egress-ipconfig: null' on node: ostest-kncvv-worker-2-zxwrv
Version-Release number of selected component (if applicable):
RHOS-16.1-RHEL-8-20221206.n.1 4.12.0-0.nightly-2022-12-09-063749
How reproducible:
Always
Steps to Reproduce:
1. Run AZ job on D/S CI (Openshift on Openstack QE CI) 2. Run conformance/serial tests
Actual results:
conformance/serial TCs are failing because it is not finding the egressIP annotation on the workers
Expected results:
Tests passing
Additional info:
Links provided on private comment.
When we create an HCP, the Root CA in the HCP namespaces has the certificate and key named as
Done criteria: The Root CA should have the certificate and key named as the cert manager expects.
Hi,
Bare Metal IPI provisioning is failing to provision the worker nodes. The metal3-machine-os-downloader InitContainer is getting in CrashLoopBackOff state because it cannot find virt-* commands in the container image.
> oc -n openshift-machine-api get pods | grep -v Running
NAME                       READY   STATUS
metal3-fc66f5846-gtq9m     0/7     Init:CrashLoopBackOff
metal3-image-cache-d4qcz   0/1     Init:1/2
metal3-image-cache-djzcf   0/1     Init:1/2
metal3-image-cache-p5mwg   0/1     Init:1/2
> oc -n openshift-machine-api logs deployment/metal3 -c metal3-machine-os-downloader
[omitted]
++ LIBGUESTFS_BACKEND=direct
++ virt-filesystems -a rhcos-412.86.202207142104-0-openstack.x86_64.qcow2 -l
/usr/local/bin/get-resource.sh: line 88: virt-filesystems: command not found
++ grep boot
++ cut -f1 '-d '
+ BOOT_DISK=
++ LIBGUESTFS_BACKEND=direct
++ virt-ls -a rhcos-412.86.202207142104-0-openstack.x86_64.qcow2 -m '' /boot/loader/entries
/usr/local/bin/get-resource.sh: line 90: virt-ls: command not found
+ BOOT_ENTRIES=
+ rm -fr /shared/tmp/tmp.CnCd2E3kxN
OpenShift 4.12.0-ec.0+
Since https://github.com/openshift/ocp-build-data/pull/1757, the ironic-machine-os-downloader container image is built using RHEL9 repositories.
However, following the upstream move of the guestfs tools to a dedicated repository [1], the libguestfs packaging differs between RHEL8 and RHEL9:
Since the Dockerfile specifies only the libguestfs-tools package, the virt-* commands are not installed when using RHEL9 repositories.
A trivial fix is to update the Dockerfile to install the guestfs-tools package instead of the libguestfs-tools package.
Regards,
Denis
Description of problem:
$ oc adm must-gather -- gather_ingress_node_firewall [must-gather ] OUT Using must-gather plug-in image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3dec5a08681e11eedcd31f075941b74f777b9187f0e711a498a212f9d96adb2f When opening a support case, bugzilla, or issue please include the following summary data along with any other requested information: ClusterID: 0ef60b50-4378-431d-8ca2-faa5af098274 ClusterVersion: Stable at "4.12.0-0.nightly-2022-09-26-111919" ClusterOperators: clusteroperator/insights is not available (Reporting was not allowed: your Red Hat account is not enabled for remote support or your token has expired: UHC services authentication failed ) because Reporting was not allowed: your Red Hat account is not enabled for remote support or your token has expired: UHC services authentication failed[must-gather ] OUT namespace/openshift-must-gather-fr7kc created [must-gather ] OUT clusterrolebinding.rbac.authorization.k8s.io/must-gather-xx2fh created [must-gather ] OUT pod for plug-in image quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3dec5a08681e11eedcd31f075941b74f777b9187f0e711a498a212f9d96adb2f created [must-gather-xvfj4] POD 2022-09-28T16:57:00.887445531Z /bin/bash: /usr/bin/gather_ingress_node_firewall: Permission denied [must-gather-xvfj4] OUT waiting for gather to complete [must-gather-xvfj4] OUT downloading gather output [must-gather-xvfj4] OUT receiving incremental file list [must-gather-xvfj4] OUT ./ [must-gather-xvfj4] OUT [must-gather-xvfj4] OUT sent 27 bytes received 40 bytes 26.80 bytes/sec [must-gather-xvfj4] OUT total size is 0 speedup is 0.00 [must-gather ] OUT namespace/openshift-must-gather-fr7kc deleted [must-gather ] OUT clusterrolebinding.rbac.authorization.k8s.io/must-gather-xx2fh deleted Reprinting Cluster State: When opening a support case, bugzilla, or issue please include the following summary data along with any other requested information: ClusterID: 0ef60b50-4378-431d-8ca2-faa5af098274 ClusterVersion: Stable at "4.12.0-0.nightly-2022-09-26-111919" ClusterOperators: clusteroperator/insights is not available (Reporting was not allowed: your Red Hat account is not enabled for remote support or your token has expired: UHC services authentication failed ) because Reporting was not allowed: your Red Hat account is not enabled for remote support or your token has expired: UHC services authentication failed
Version-Release number of selected component (if applicable):
4.12
How reproducible:
Always
Steps to Reproduce:
1. 2. 3.
Actual results:
Expected results:
Additional info:
This is a clone of issue OCPBUGS-5306. The following is the description of the original issue:
—
Description of problem:
One old machine is stuck in Deleting and many cluster operators become degraded when doing master replacement on a cluster with the OVN network.
Version-Release number of selected component (if applicable):
4.12.0-0.nightly-2023-01-02-175114
How reproducible:
always after several times
Steps to Reproduce:
1.Install a cluster liuhuali@Lius-MacBook-Pro huali-test % oc get clusterversion NAME VERSION AVAILABLE PROGRESSING SINCE STATUS version 4.12.0-0.nightly-2023-01-02-175114 True False 30m Cluster version is 4.12.0-0.nightly-2023-01-02-175114 liuhuali@Lius-MacBook-Pro huali-test % oc get co NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE MESSAGE authentication 4.12.0-0.nightly-2023-01-02-175114 True False False 33m baremetal 4.12.0-0.nightly-2023-01-02-175114 True False False 80m cloud-controller-manager 4.12.0-0.nightly-2023-01-02-175114 True False False 84m cloud-credential 4.12.0-0.nightly-2023-01-02-175114 True False False 80m cluster-api 4.12.0-0.nightly-2023-01-02-175114 True False False 81m cluster-autoscaler 4.12.0-0.nightly-2023-01-02-175114 True False False 80m config-operator 4.12.0-0.nightly-2023-01-02-175114 True False False 81m console 4.12.0-0.nightly-2023-01-02-175114 True False False 33m control-plane-machine-set 4.12.0-0.nightly-2023-01-02-175114 True False False 79m csi-snapshot-controller 4.12.0-0.nightly-2023-01-02-175114 True False False 81m dns 4.12.0-0.nightly-2023-01-02-175114 True False False 80m etcd 4.12.0-0.nightly-2023-01-02-175114 True False False 79m image-registry 4.12.0-0.nightly-2023-01-02-175114 True False False 74m ingress 4.12.0-0.nightly-2023-01-02-175114 True False False 74m insights 4.12.0-0.nightly-2023-01-02-175114 True False False 21m kube-apiserver 4.12.0-0.nightly-2023-01-02-175114 True False False 77m kube-controller-manager 4.12.0-0.nightly-2023-01-02-175114 True False False 77m kube-scheduler 4.12.0-0.nightly-2023-01-02-175114 True False False 77m kube-storage-version-migrator 4.12.0-0.nightly-2023-01-02-175114 True False False 81m machine-api 4.12.0-0.nightly-2023-01-02-175114 True False False 75m machine-approver 4.12.0-0.nightly-2023-01-02-175114 True False False 80m machine-config 4.12.0-0.nightly-2023-01-02-175114 True False False 74m marketplace 4.12.0-0.nightly-2023-01-02-175114 True False False 80m monitoring 4.12.0-0.nightly-2023-01-02-175114 True False False 72m network 4.12.0-0.nightly-2023-01-02-175114 True False False 83m node-tuning 4.12.0-0.nightly-2023-01-02-175114 True False False 80m openshift-apiserver 4.12.0-0.nightly-2023-01-02-175114 True False False 75m openshift-controller-manager 4.12.0-0.nightly-2023-01-02-175114 True False False 76m openshift-samples 4.12.0-0.nightly-2023-01-02-175114 True False False 22m operator-lifecycle-manager 4.12.0-0.nightly-2023-01-02-175114 True False False 81m operator-lifecycle-manager-catalog 4.12.0-0.nightly-2023-01-02-175114 True False False 81m operator-lifecycle-manager-packageserver 4.12.0-0.nightly-2023-01-02-175114 True False False 75m platform-operators-aggregated 4.12.0-0.nightly-2023-01-02-175114 True False False 74m service-ca 4.12.0-0.nightly-2023-01-02-175114 True False False 81m storage 4.12.0-0.nightly-2023-01-02-175114 True False False 74m liuhuali@Lius-MacBook-Pro huali-test % oc get machine NAME PHASE TYPE REGION ZONE AGE huliu-aws4d2-fcks7-master-0 Running m6i.xlarge us-east-2 us-east-2a 85m huliu-aws4d2-fcks7-master-1 Running m6i.xlarge us-east-2 us-east-2b 85m huliu-aws4d2-fcks7-master-2 Running m6i.xlarge us-east-2 us-east-2a 85m huliu-aws4d2-fcks7-worker-us-east-2a-m279f Running m6i.xlarge us-east-2 us-east-2a 80m huliu-aws4d2-fcks7-worker-us-east-2a-qg9ps Running m6i.xlarge us-east-2 us-east-2a 80m huliu-aws4d2-fcks7-worker-us-east-2b-ps6tz Running m6i.xlarge us-east-2 us-east-2b 80m liuhuali@Lius-MacBook-Pro huali-test % oc get controlplanemachineset NAME 
2. Edit the controlplanemachineset and change instanceType to another value to trigger a RollingUpdate (a non-interactive alternative is sketched after the output below):

liuhuali@Lius-MacBook-Pro huali-test % oc edit controlplanemachineset cluster
controlplanemachineset.machine.openshift.io/cluster edited

liuhuali@Lius-MacBook-Pro huali-test % oc get machine
NAME   PHASE   TYPE   REGION   ZONE   AGE
huliu-aws4d2-fcks7-master-0   Running   m6i.xlarge   us-east-2   us-east-2a   86m
huliu-aws4d2-fcks7-master-1   Running   m6i.xlarge   us-east-2   us-east-2b   86m
huliu-aws4d2-fcks7-master-2   Running   m6i.xlarge   us-east-2   us-east-2a   86m
huliu-aws4d2-fcks7-master-mbgz6-0   Provisioning   m5.xlarge   us-east-2   us-east-2a   5s
huliu-aws4d2-fcks7-worker-us-east-2a-m279f   Running   m6i.xlarge   us-east-2   us-east-2a   81m
huliu-aws4d2-fcks7-worker-us-east-2a-qg9ps   Running   m6i.xlarge   us-east-2   us-east-2a   81m
huliu-aws4d2-fcks7-worker-us-east-2b-ps6tz   Running   m6i.xlarge   us-east-2   us-east-2b   81m

liuhuali@Lius-MacBook-Pro huali-test % oc get machine
NAME   PHASE   TYPE   REGION   ZONE   AGE
huliu-aws4d2-fcks7-master-0   Deleting   m6i.xlarge   us-east-2   us-east-2a   92m
huliu-aws4d2-fcks7-master-1   Running   m6i.xlarge   us-east-2   us-east-2b   92m
huliu-aws4d2-fcks7-master-2   Running   m6i.xlarge   us-east-2   us-east-2a   92m
huliu-aws4d2-fcks7-master-mbgz6-0   Running   m5.xlarge   us-east-2   us-east-2a   5m36s
huliu-aws4d2-fcks7-worker-us-east-2a-m279f   Running   m6i.xlarge   us-east-2   us-east-2a   87m
huliu-aws4d2-fcks7-worker-us-east-2a-qg9ps   Running   m6i.xlarge   us-east-2   us-east-2a   87m
huliu-aws4d2-fcks7-worker-us-east-2b-ps6tz   Running   m6i.xlarge   us-east-2   us-east-2b   87m

liuhuali@Lius-MacBook-Pro huali-test % oc get machine
NAME   PHASE   TYPE   REGION   ZONE   AGE
huliu-aws4d2-fcks7-master-1   Running   m6i.xlarge   us-east-2   us-east-2b   101m
huliu-aws4d2-fcks7-master-2   Running   m6i.xlarge   us-east-2   us-east-2a   101m
huliu-aws4d2-fcks7-master-mbgz6-0   Running   m5.xlarge   us-east-2   us-east-2a   15m
huliu-aws4d2-fcks7-master-nbt9g-1   Provisioned   m5.xlarge   us-east-2   us-east-2b   3m1s
huliu-aws4d2-fcks7-worker-us-east-2a-m279f   Running   m6i.xlarge   us-east-2   us-east-2a   96m
huliu-aws4d2-fcks7-worker-us-east-2a-qg9ps   Running   m6i.xlarge   us-east-2   us-east-2a   96m
huliu-aws4d2-fcks7-worker-us-east-2b-ps6tz   Running   m6i.xlarge   us-east-2   us-east-2b   96m

liuhuali@Lius-MacBook-Pro huali-test % oc get machine
NAME   PHASE   TYPE   REGION   ZONE   AGE
huliu-aws4d2-fcks7-master-1   Deleting   m6i.xlarge   us-east-2   us-east-2b   149m
huliu-aws4d2-fcks7-master-2   Running   m6i.xlarge   us-east-2   us-east-2a   149m
huliu-aws4d2-fcks7-master-mbgz6-0   Running   m5.xlarge   us-east-2   us-east-2a   62m
huliu-aws4d2-fcks7-master-nbt9g-1   Running   m5.xlarge   us-east-2   us-east-2b   50m
huliu-aws4d2-fcks7-worker-us-east-2a-m279f   Running   m6i.xlarge   us-east-2   us-east-2a   144m
huliu-aws4d2-fcks7-worker-us-east-2a-qg9ps   Running   m6i.xlarge   us-east-2   us-east-2a   144m
huliu-aws4d2-fcks7-worker-us-east-2b-ps6tz   Running   m6i.xlarge   us-east-2   us-east-2b   144m

liuhuali@Lius-MacBook-Pro huali-test % oc get machine
NAME   PHASE   TYPE   REGION   ZONE   AGE
huliu-aws4d2-fcks7-master-1   Deleting   m6i.xlarge   us-east-2   us-east-2b   4h12m
huliu-aws4d2-fcks7-master-2   Running   m6i.xlarge   us-east-2   us-east-2a   4h12m
huliu-aws4d2-fcks7-master-mbgz6-0   Running   m5.xlarge   us-east-2   us-east-2a   166m
huliu-aws4d2-fcks7-master-nbt9g-1   Running   m5.xlarge   us-east-2   us-east-2b   153m
huliu-aws4d2-fcks7-worker-us-east-2a-m279f   Running   m6i.xlarge   us-east-2   us-east-2a   4h7m
huliu-aws4d2-fcks7-worker-us-east-2a-qg9ps   Running   m6i.xlarge   us-east-2   us-east-2a   4h7m
huliu-aws4d2-fcks7-worker-us-east-2b-ps6tz   Running   m6i.xlarge   us-east-2   us-east-2b   4h7m
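The instanceType change in step 2 can also be applied non-interactively instead of going through oc edit. A hedged sketch, assuming the AWS providerSpec layout under spec.template.machines_v1beta1_machine_openshift_io and the m5.xlarge value used in this reproducer:

# patch the ControlPlaneMachineSet directly (the field path below assumes the AWS providerSpec layout)
oc -n openshift-machine-api patch controlplanemachineset cluster --type=json -p \
  '[{"op":"replace","path":"/spec/template/machines_v1beta1_machine_openshift_io/spec/providerSpec/value/instanceType","value":"m5.xlarge"}]'
# then follow the rolling update
oc -n openshift-machine-api get machine -w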
3. master-1 is stuck in Deleting, many cluster operators become degraded, and many pods cannot reach Running (a triage sketch follows the final pod listing below):

liuhuali@Lius-MacBook-Pro huali-test % oc get co
NAME   VERSION   AVAILABLE   PROGRESSING   DEGRADED   SINCE   MESSAGE
authentication   4.12.0-0.nightly-2023-01-02-175114   True   True   True   9s   APIServerDeploymentDegraded: 1 of 4 requested instances are unavailable for apiserver.openshift-oauth-apiserver (2 containers are waiting in pending apiserver-7b65bbc76b-mxl99 pod)...
baremetal   4.12.0-0.nightly-2023-01-02-175114   True   False   False   4h8m
cloud-controller-manager   4.12.0-0.nightly-2023-01-02-175114   True   False   False   4h11m
cloud-credential   4.12.0-0.nightly-2023-01-02-175114   True   False   False   4h8m
cluster-api   4.12.0-0.nightly-2023-01-02-175114   True   False   False   4h8m
cluster-autoscaler   4.12.0-0.nightly-2023-01-02-175114   True   False   False   4h8m
config-operator   4.12.0-0.nightly-2023-01-02-175114   True   False   False   4h9m
console   4.12.0-0.nightly-2023-01-02-175114   False   False   False   150m   RouteHealthAvailable: console route is not admitted
control-plane-machine-set   4.12.0-0.nightly-2023-01-02-175114   True   True   False   4h7m   Observed 1 replica(s) in need of update
csi-snapshot-controller   4.12.0-0.nightly-2023-01-02-175114   True   True   False   4h9m   CSISnapshotControllerProgressing: Waiting for Deployment to deploy pods...
dns   4.12.0-0.nightly-2023-01-02-175114   True   False   False   4h8m
etcd   4.12.0-0.nightly-2023-01-02-175114   True   True   True   4h7m   GuardControllerDegraded: Missing operand on node ip-10-0-79-159.us-east-2.compute.internal...
image-registry   4.12.0-0.nightly-2023-01-02-175114   True   False   False   4h2m
ingress   4.12.0-0.nightly-2023-01-02-175114   True   False   False   4h2m
insights   4.12.0-0.nightly-2023-01-02-175114   True   False   False   3h8m
kube-apiserver   4.12.0-0.nightly-2023-01-02-175114   True   True   True   4h5m   GuardControllerDegraded: Missing operand on node ip-10-0-79-159.us-east-2.compute.internal
kube-controller-manager   4.12.0-0.nightly-2023-01-02-175114   True   False   True   4h5m   GarbageCollectorDegraded: error querying alerts: Post "https://thanos-querier.openshift-monitoring.svc:9091/api/v1/query": dial tcp 172.30.19.115:9091: i/o timeout
kube-scheduler   4.12.0-0.nightly-2023-01-02-175114   True   False   False   4h5m
kube-storage-version-migrator   4.12.0-0.nightly-2023-01-02-175114   True   False   False   162m
machine-api   4.12.0-0.nightly-2023-01-02-175114   True   False   False   4h3m
machine-approver   4.12.0-0.nightly-2023-01-02-175114   True   False   False   4h8m
machine-config   4.12.0-0.nightly-2023-01-02-175114   False   False   True   139m   Cluster not available for [{operator 4.12.0-0.nightly-2023-01-02-175114}]: error during waitForDeploymentRollout: [timed out waiting for the condition, deployment machine-config-controller is not ready. status: (replicas: 1, updated: 1, ready: 0, unavailable: 1)]
marketplace   4.12.0-0.nightly-2023-01-02-175114   True   False   False   4h8m
monitoring   4.12.0-0.nightly-2023-01-02-175114   False   True   True   144m   reconciling Prometheus Operator Deployment failed: updating Deployment object failed: waiting for DeploymentRollout of openshift-monitoring/prometheus-operator: got 1 unavailable replicas
network   4.12.0-0.nightly-2023-01-02-175114   True   True   False   4h11m   DaemonSet "/openshift-ovn-kubernetes/ovnkube-master" is not available (awaiting 1 nodes)...
node-tuning   4.12.0-0.nightly-2023-01-02-175114   True   False   False   4h7m
openshift-apiserver   4.12.0-0.nightly-2023-01-02-175114   False   True   False   151m   APIServicesAvailable: "apps.openshift.io.v1" is not ready: an attempt failed with statusCode = 503, err = the server is currently unable to handle the request...
openshift-controller-manager   4.12.0-0.nightly-2023-01-02-175114   True   False   False   4h4m
openshift-samples   4.12.0-0.nightly-2023-01-02-175114   True   False   False   3h10m
operator-lifecycle-manager   4.12.0-0.nightly-2023-01-02-175114   True   False   False   4h9m
operator-lifecycle-manager-catalog   4.12.0-0.nightly-2023-01-02-175114   True   False   False   4h9m
operator-lifecycle-manager-packageserver   4.12.0-0.nightly-2023-01-02-175114   True   False   False   2m44s
platform-operators-aggregated   4.12.0-0.nightly-2023-01-02-175114   True   False   False   4h2m
service-ca   4.12.0-0.nightly-2023-01-02-175114   True   False   False   4h9m
storage   4.12.0-0.nightly-2023-01-02-175114   True   True   False   4h2m   AWSEBSCSIDriverOperatorCRProgressing: AWSEBSDriverNodeServiceControllerProgressing: Waiting for DaemonSet to deploy node pods...

liuhuali@Lius-MacBook-Pro huali-test %
liuhuali@Lius-MacBook-Pro huali-test % oc get pod --all-namespaces|grep -v Running
NAMESPACE   NAME   READY   STATUS   RESTARTS   AGE
openshift-apiserver   apiserver-5cbdf985f9-85z4t   0/2   Init:0/1   0   155m
openshift-authentication   oauth-openshift-5c46d6658b-lkbjj   0/1   Pending   0   156m
openshift-cloud-credential-operator   pod-identity-webhook-77bf7c646d-4rtn8   0/1   ContainerCreating   0   156m
openshift-cluster-api   capa-controller-manager-d484bc464-lhqbk   0/1   ContainerCreating   0   156m
openshift-cluster-csi-drivers   aws-ebs-csi-driver-controller-5668745dcb-jc7fm   0/11   ContainerCreating   0   156m
openshift-cluster-csi-drivers   aws-ebs-csi-driver-operator-5d6b9fbd77-827vs   0/1   ContainerCreating   0   156m
openshift-cluster-csi-drivers   shared-resource-csi-driver-operator-866d897954-z77gz   0/1   ContainerCreating   0   156m
openshift-cluster-csi-drivers   shared-resource-csi-driver-webhook-d794748dc-kctkn   0/1   ContainerCreating   0   156m
openshift-cluster-samples-operator   cluster-samples-operator-754758b9d7-nbcc9   0/2   ContainerCreating   0   156m
openshift-cluster-storage-operator   csi-snapshot-controller-6d9c448fdd-wdb7n   0/1   ContainerCreating   0   156m
openshift-cluster-storage-operator   csi-snapshot-webhook-6966f555f8-cbdc7   0/1   ContainerCreating   0   156m
openshift-console-operator   console-operator-7d8567876b-nxgpj   0/2   ContainerCreating   0   156m
openshift-console   console-855f66f4f8-q869k   0/1   ContainerCreating   0   156m
openshift-console   downloads-7b645b6b98-7jqfw   0/1   ContainerCreating   0   156m
openshift-controller-manager   controller-manager-548c7f97fb-bl68p   0/1   Pending   0   156m
openshift-etcd   installer-13-ip-10-0-76-132.us-east-2.compute.internal   0/1   ContainerCreating   0   9m39s
openshift-etcd   installer-3-ip-10-0-63-159.us-east-2.compute.internal   0/1   Completed   0   4h13m
openshift-etcd   installer-4-ip-10-0-63-159.us-east-2.compute.internal   0/1   Completed   0   4h12m
openshift-etcd   installer-5-ip-10-0-63-159.us-east-2.compute.internal   0/1   Completed   0   4h7m
openshift-etcd   installer-6-ip-10-0-63-159.us-east-2.compute.internal   0/1   Completed   0   4h1m
openshift-etcd   installer-8-ip-10-0-48-21.us-east-2.compute.internal   0/1   Completed   0   168m
openshift-etcd   revision-pruner-10-ip-10-0-48-21.us-east-2.compute.internal   0/1   ContainerCreating   0   160m
openshift-etcd   revision-pruner-10-ip-10-0-63-159.us-east-2.compute.internal   0/1   Completed   0   160m
openshift-etcd   revision-pruner-11-ip-10-0-48-21.us-east-2.compute.internal   0/1   ContainerCreating   0   159m
openshift-etcd   revision-pruner-11-ip-10-0-63-159.us-east-2.compute.internal   0/1   Completed   0   159m
openshift-etcd   revision-pruner-11-ip-10-0-79-159.us-east-2.compute.internal   0/1   Completed   0   156m
openshift-etcd   revision-pruner-12-ip-10-0-48-21.us-east-2.compute.internal   0/1   ContainerCreating   0   156m
openshift-etcd   revision-pruner-12-ip-10-0-63-159.us-east-2.compute.internal   0/1   Completed   0   156m
openshift-etcd   revision-pruner-12-ip-10-0-79-159.us-east-2.compute.internal   0/1   Completed   0   156m
openshift-etcd   revision-pruner-13-ip-10-0-48-21.us-east-2.compute.internal   0/1   ContainerCreating   0   155m
openshift-etcd   revision-pruner-13-ip-10-0-63-159.us-east-2.compute.internal   0/1   Completed   0   155m
openshift-etcd   revision-pruner-13-ip-10-0-76-132.us-east-2.compute.internal   0/1   ContainerCreating   0   10m
openshift-etcd   revision-pruner-13-ip-10-0-79-159.us-east-2.compute.internal   0/1   Completed   0   155m
openshift-etcd   revision-pruner-6-ip-10-0-48-21.us-east-2.compute.internal   0/1   Completed   0   169m
openshift-etcd   revision-pruner-6-ip-10-0-63-159.us-east-2.compute.internal   0/1   Completed   0   3h57m
openshift-etcd   revision-pruner-7-ip-10-0-48-21.us-east-2.compute.internal   0/1   Completed   0   168m
openshift-etcd   revision-pruner-7-ip-10-0-63-159.us-east-2.compute.internal   0/1   Completed   0   168m
openshift-etcd   revision-pruner-8-ip-10-0-48-21.us-east-2.compute.internal   0/1   Completed   0   168m
openshift-etcd   revision-pruner-8-ip-10-0-63-159.us-east-2.compute.internal   0/1   Completed   0   168m
openshift-etcd   revision-pruner-9-ip-10-0-48-21.us-east-2.compute.internal   0/1   Completed   0   166m
openshift-etcd   revision-pruner-9-ip-10-0-63-159.us-east-2.compute.internal   0/1   Completed   0   166m
openshift-kube-apiserver   installer-6-ip-10-0-63-159.us-east-2.compute.internal   0/1   Completed   0   4h4m
openshift-kube-apiserver   installer-7-ip-10-0-48-21.us-east-2.compute.internal   0/1   Completed   0   168m
openshift-kube-apiserver   installer-9-ip-10-0-76-132.us-east-2.compute.internal   0/1   ContainerCreating   0   9m52s
openshift-kube-apiserver   revision-pruner-6-ip-10-0-48-21.us-east-2.compute.internal   0/1   Completed
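For triage of the machine stuck in Deleting, the usual places to look are the machine's finalizers, its lifecycle hooks (for example the etcd pre-drain hook), and the machine controller events. A minimal sketch; the machine name is the one from this reproducer, and spec.lifecycleHooks assumes the Machine API lifecycle-hook support available in 4.12:

# what is still holding the deletion: finalizers and lifecycle hooks
oc -n openshift-machine-api get machine huliu-aws4d2-fcks7-master-1 -o jsonpath='{.metadata.finalizers}{"\n"}{.spec.lifecycleHooks}{"\n"}'
# events usually show whether node drain or a pre-drain hook is blocking the deletion
oc -n openshift-machine-api describe machine huliu-aws4d2-fcks7-master-1 | tail -n 30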