Note: this page shows the Feature-Based Change Log for a release
These features were completed when this image was assembled
1. Proposed title of this feature request
Add runbook_url to alerts in the OCP UI
2. What is the nature and description of the request?
If an alert includes a runbook_url label, then it should appear in the UI for the alert as a link.
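For illustration, a minimal sketch of an alerting rule that carries a runbook_url (shown here as an annotation, which is where OpenShift alerts usually carry it; all names and the URL are placeholders):
```yaml
apiVersion: monitoring.coreos.com/v1
kind: PrometheusRule
metadata:
  name: example-rules                  # placeholder name
  namespace: openshift-monitoring
spec:
  groups:
  - name: example
    rules:
    - alert: ExampleAlert              # placeholder alert
      expr: vector(1)                  # placeholder expression
      labels:
        severity: warning
      annotations:
        summary: Something needs attention.
        runbook_url: https://example.com/runbooks/example-alert
```
The UI change requested here is to render that runbook_url value as a clickable link on the alert's details page.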
3. Why does the customer need this? (List the business requirements here)
Customers can easily reach the alert runbook and address their issues.
4. List any affected packages or components.
As a user, I should be able to configure the CSI driver to have a storage topology.
In the console-operator repo we need to add the `capability.openshift.io/console` annotation to all the manifests that the operator either contains or creates on the fly.
Manifests are currently present in /bindata and /manifest directories.
Here is an example of the insights-operator change.
Here is the overall enhancement doc.
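As a sketch of what the change looks like on a single manifest (the kind, name, and annotation value are illustrative; the exact key/value should match the linked insights-operator example):
```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: console-config                        # illustrative manifest
  namespace: openshift-console
  annotations:
    capability.openshift.io/console: "true"   # annotation named in this card; value illustrative
```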
Feature Overview
Provide CSI drivers to replace all the in-tree cloud provider drivers we currently have. These drivers will probably be released as Tech Preview versions first before being promoted to GA.
Goals
Requirements
Requirement | Notes | isMvp? |
---|---|---|
Framework for CSI driver | TBD | Yes |
Drivers should be available to install both in disconnected and connected mode | | Yes |
Drivers should upgrade from release to release without any impact | | Yes |
Drivers should be installable via CVO (when in-tree plugin exists) | | |
Out of Scope
This work will only cover the drivers themselves; it will not include:
Background and strategic fit
In a future Kubernetes release (currently 1.21), in-tree cloud provider drivers will be deprecated and replaced with CSI equivalents. We need the drivers created so that we continue to support the ecosystems in an appropriate way.
Assumptions
Customer Considerations
Customers will need to be able to use the storage they want.
Documentation Considerations
This Epic is to track the GA of this feature
As an OCP user, I want images for GCP Filestore CSI Driver and Operator, so that I can install them on my cluster and utilize GCP Filestore shares.
We need to continue to maintain specific areas within storage; this is to capture that effort and track it across releases.
Goals
Requirements
Requirement | Notes | isMvp? |
---|---|---|
Telemetry | No | |
Certification | No | |
API metrics | No | |
Out of Scope
n/a
Background and strategic fit
With the expected scale of our customer base, we want to keep the load of customer tickets / BZs low.
Assumptions
Customer Considerations
Documentation Considerations
Notes
In progress:
High prio:
Unsorted
The end of general support for vSphere 6.7 will be on October 15, 2022, so vSphere 6.7 will be deprecated for 4.11.
We want to encourage vSphere customers to upgrade to vSphere 7 in OCP 4.11, since VMware is ending general support for vSphere 6.7 in October 2022.
We want to set the cluster to Upgradeable=false and have a strong alert pointing to our docs / requirements.
related slack: https://coreos.slack.com/archives/CH06KMDRV/p1647541493096729
Traditionally we did these updates as bugfixes, because we did them after the feature freeze (FF). We are trying no-feature-freeze in 4.12. We will try to do as much as we can before FF, but we're quite sure something will slip past FF as usual.
Update the driver to the latest upstream release. Notify QE and docs with any new features and important bugfixes that need testing or documentation.
(Using separate cards for each driver because these updates can be more complicated)
Update the driver to the latest upstream release. Notify QE and docs with any new features and important bugfixes that need testing or documentation.
(Using separate cards for each driver because these updates can be more complicated)
Update the driver to the latest upstream release. Notify QE and docs with any new features and important bugfixes that need testing or documentation.
(Using separate cards for each driver because these updates can be more complicated)
Update the driver to the latest upstream release. Notify QE and docs with any new features and important bugfixes that need testing or documentation.
(Using separate cards for each driver because these updates can be more complicated)
Update all OCP and kubernetes libraries in storage operators to the appropriate version for OCP release.
This includes (but is not limited to):
Operators:
There is a new driver release 5.0.0 since the last rebase that includes snapshot support:
https://github.com/kubernetes-sigs/ibm-vpc-block-csi-driver/releases/tag/v5.0.0
Rebase the driver on v5.0.0 and update the deployments in ibm-vpc-block-csi-driver-operator.
There are no corresponding changes in ibm-vpc-node-label-updater since the last rebase.
Update the driver to the latest upstream release. Notify QE and docs with any new features and important bugfixes that need testing or documentation.
This includes ibm-vpc-node-label-updater!
(Using separate cards for each driver because these updates can be more complicated)
Update the driver to the latest upstream release. Notify QE and docs with any new features and important bugfixes that need testing or documentation.
(Using separate cards for each driver because these updates can be more complicated)
Update all CSI sidecars to the latest upstream release.
This includes update of VolumeSnapshot CRDs in https://github.com/openshift/cluster-csi-snapshot-controller-operator/tree/master/assets
This Epic tracks the GA of this feature
Epic Goal
On new installations, we should make the StorageClass created by the CSI operator the default one.
However, we shouldn't do that on an upgrade scenario. The main reason is that users might have set a different quota on the CSI driver Storage Class.
Exit criteria:
On new installations, we should make the StorageClass created by the CSI operator the default one.
However, we shouldn't do that on an upgrade scenario. The main reason is that users might have set a different quota on the CSI driver Storage Class.
Exit criteria:
OCP/Telco Definition of Done
Epic Template descriptions and documentation.
Rebase openshift-controller-manager to k8s 1.24
4.11 MVP Requirements
Out of scope use cases (that are part of the Kubeframe/factory project):
Questions to be addressed:
Set the ClusterDeployment CRD to deploy OpenShift in FIPS mode and make sure that after deployment the cluster is set in that mode
In order to install FIPS-compliant clusters, we need to make sure that installconfig + agentconfig based deployments take into account the FIPS config in installconfig.
This task is about passing the config to agentclusterinstall so it makes it into the ISO. Once there, AGENT-374 will give it to assisted-service.
As an OpenShift infrastructure owner, I want to deploy OpenShift clusters with dual-stack IPv4/IPv6
As an OpenShift infrastructure owner, I want to deploy OpenShift clusters with single-stack IPv6
IPv6 and dual-stack clusters are requested often by customers, especially Telco customers. Working with dual-stack clusters is a requirement for many, but also a transition into single-stack IPv6 clusters, which for some of our users is the final destination.
Karim's work proving how agent-based can deploy IPv6: IPv6 deploy with agent-based installer
For dual-stack installations the agent-cluster-install.yaml must have both an IPv4 and an IPv6 subnet in networking.MachineNetwork or assisted-service will throw an error. This field is in InstallConfig but it must be added to agent-cluster-install in its Generate().
For IPv4 and IPv6 installs, setting up the MachineNetwork is not needed, but it also does not cause problems if it is set, so it should be fine to set it at all times.
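A minimal sketch of the dual-stack entries in agent-cluster-install.yaml, assuming the standard AgentClusterInstall schema (the name and CIDRs are illustrative):
```yaml
apiVersion: extensions.hive.openshift.io/v1beta1
kind: AgentClusterInstall
metadata:
  name: example-agent-cluster-install    # illustrative
spec:
  networking:
    machineNetwork:
    - cidr: 192.168.111.0/24              # IPv4 machine subnet (illustrative)
    - cidr: fd2e:6f44:5dd8::/64           # IPv6 machine subnet (illustrative)
```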
As an OpenShift infrastructure owner, I want to deploy a cluster zero with RHACM or MCE and have the required components installed when the installation is completed
BILLI makes it easier to deploy a cluster zero. BILLI users know at installation time what the purpose of their cluster is when they plan the installation. Day-2 steps are necessary to install operators, and users, especially when automating installations, want to finish the installation flow when their required components are installed.
As a customer, I want to be able to:
so that I can achieve
Description of criteria:
We are only allowing the user to provide extra manifests to install MCE at this time. We are not adding an option to "install mce" on the command line (or UI)
This requires/does not require a design proposal.
This requires/does not require a feature gate.
As a customer, I want to be able to:
so that I can achieve
Description of criteria:
We are only allowing the user to provide extra manifests to install MCE at this time. We are not adding an option to "install mce" on the command line (or UI)
This requires/does not require a design proposal.
This requires/does not require a feature gate.
As a user I would like to see all the events that the autoscaler creates, even duplicates. Having the CAO set this flag will allow me to continue to see these events.
We have carried a patch for the autoscaler that would enable the duplication of events. This patch can now be dropped because the upstream added a flag for this behavior in https://github.com/kubernetes/autoscaler/pull/4921
Add GA support for deploying OpenShift to IBM Public Cloud
Complete the existing gaps to make OpenShift on IBM Cloud VPC (Next Gen2) Generally Available
This epic tracks the changes needed to the ingress operator to support IBM DNS Services for private clusters.
Currently in OpenShift we do not support distributing hotfix packages to cluster nodes. In time-sensitive situations, a RHEL hotfix package can be the quickest route to resolving an issue.
Before we ship OCP CoreOS layering in https://issues.redhat.com/browse/MCO-165 we need to switch the format of what is currently `machine-os-content` to be the new base image.
The overall plan is:
After https://github.com/openshift/os/pull/763 is in the release image, teach the MCO how to use it. This is basically:
As an OCP CoreOS layering developer, having telemetry data about the number of clusters using osImageURL will help understand how broadly this feature is getting used and improve accordingly.
Acceptance Criteria:
Assumption
Doc: https://docs.google.com/document/d/1sXCaRt3PE0iFmq7ei0Yb1svqzY9bygR5IprjgioRkjc/edit
CNCC was moved to the management cluster and it should use proxy settings defined for the management cluster.
Customers do not pay Red Hat more to run HyperShift control planes and supporting infrastructure than Standalone control planes and supporting infrastructure.
Assumption
cluster-snapshot-controller-operator is running on the CP.
More information here: https://docs.google.com/document/d/1sXCaRt3PE0iFmq7ei0Yb1svqzY9bygR5IprjgioRkjc/edit
As HyperShift Cluster Instance Admin, I want to run cluster-csi-snapshot-controller-operator in the management cluster, so the guest cluster runs just my applications.
Exit criteria:
As an OpenShift developer, I want cluster-csi-snapshot-controller-operator to use existing controllers in library-go, so I don't need to maintain yet more code that does the same thing as library-go.
Note: if this refactoring introduces any new conditions, we must make sure that 4.11 snapshot controller clears them to support downgrade! This will need 4.11 BZ + z-stream update!
Similarly, if some conditions become obsolete / not managed by any controller, they must be cleared by 4.12 operator.
Exit criteria:
Customers do not pay Red Hat more to run HyperShift control planes and supporting infrastructure than Standalone control planes and supporting infrastructure.
Assumption
Run cluster-storage-operator (CSO) + AWS EBS CSI driver operator + AWS EBS CSI driver control-plane Pods in the management cluster, run the driver DaemonSet in the hosted cluster.
More information here: https://docs.google.com/document/d/1sXCaRt3PE0iFmq7ei0Yb1svqzY9bygR5IprjgioRkjc/edit
As an OCP support engineer, I want the same guest cluster storage-related objects in the output of "hypershift dump cluster --dump-guest-cluster" as in "oc adm must-gather", so I can debug storage issues easily.
must-gather collects: storageclasses, persistentvolumes, volumeattachments, csidrivers, csinodes, volumesnapshotclasses, volumesnapshotcontents.
hypershift collects none of this; the relevant code is here: https://github.com/openshift/hypershift/blob/bcfade6676f3c344b48144de9e7a36f9b40d3330/cmd/cluster/core/dump.go#L276
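For reference, the set of objects must-gather collects can be listed from the guest cluster in one query, e.g.:
```shell
oc get storageclasses,persistentvolumes,volumeattachments,csidrivers,csinodes,volumesnapshotclasses,volumesnapshotcontents
```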
Exit criteria:
As HyperShift Cluster Instance Admin, I want to run cluster-storage-operator (CSO) in the management cluster, so the guest cluster runs just my applications.
Exit criteria:
As HyperShift Cluster Instance Admin, I want to run AWS EBS CSI driver operator + control plane of the CSI driver in the management cluster, so the guest cluster runs just my applications.
Exit criteria:
oc-mirror is a GA product as of OpenShift 4.11.
The goal of this feature is to solve any future customer requests for new features or capabilities in oc-mirror.
RHEL CoreOS should be updated to RHEL 9.2 sources to take advantage of newer features, hardware support, and performance improvements.
Requirement | Notes | isMvp? |
---|---|---|
CI - MUST be running successfully with test automation | This is a requirement for ALL features. | YES |
Release Technical Enablement | Provide necessary release enablement details and documents. | YES |
Questions to be addressed:
PROBLEM
We would like to improve our signal for RHEL9 readiness by increasing internal engineering engagement and external partner engagement on our community OpenShift offering, OKD.
PROPOSAL
Adding OKD to run on SCOS (a CentOS stream for CoreOS) brings the community offering closer to what a partner or an internal engineering team might expect on OCP.
ACCEPTANCE CRITERIA
Image has been switched/included:
DEPENDENCIES
The SCOS build payload.
RELATED RESOURCES
OKD+SCOS proposal: https://docs.google.com/presentation/d/1_Xa9Z4tSqB7U2No7WA0KXb3lDIngNaQpS504ZLrCmg8/edit#slide=id.p
OKD+SCOS work draft: https://docs.google.com/document/d/1cuWOXhATexNLWGKLjaOcVF4V95JJjP1E3UmQ2kDVzsA/edit
Acceptance Criteria
A stable OKD on SCOS is built and available to the community every sprint.
This comes up when installing ipi-on-aws on arm64 with the custom payload build at quay.io/aleskandrox/okd-release:4.12.0-0.okd-centos9-full-rebuild-arm64 that is using SCOS as the machine-os-content image.
```
[root@ip-10-0-135-176 core]# crictl logs c483c92e118d8
2022-08-11T12:19:39+00:00 [cnibincopy] FATAL ERROR: Unsupported OS ID=scos
```
The probable fix has to land on https://github.com/openshift/cluster-network-operator/blob/master/bindata/network/multus/multus.yaml#L41-L53
HyperShift came to life to serve multiple goals: some are the main near-term goals, and some are secondary goals that serve us well long-term.
HyperShift opens up doors to penetrate the market. HyperShift enables true hybrid (CP and Workers decoupled, mixed IaaS, mixed Arch,...). An architecture that opens up more options to target new opportunities in the cloud space. For more details on this one check: Hosted Control Planes (aka HyperShift) Strategy [Live Document]
To bring hosted control planes to our customers, we need the means to ship it. Today MCE is how HyperShift is shipped and installed so that customers can use it. There are two main customers for hosted control planes:
If you have noticed, MCE is the delivery mechanism for both management models. The difference between managed and self-managed is the consumer persona. For self-managed, it's the customer SRE; for managed, it's the RH SRE.
For us to ship HyperShift in the product (as hosted control planes) in either management model, there is a necessary readiness checklist that we need to satisfy. Below are the high-level requirements needed before GA:
Please also have a look at our What are we missing in Core HyperShift for GA Readiness? doc.
Multi-cluster is becoming an industry need today not because this is where the trend is going but because it's the only viable path today to solve many of our customers' use cases. Below is some reasoning why multi-cluster is a NEED:
As a result, multi-cluster management is a defining category in the market where Red Hat plays a key role. Today Red Hat solves for multi-cluster via RHACM and MCE. The goal is to simplify fleet management complexity by providing a single pane of glass to observe, secure, police, govern, configure a fleet. I.e., the operand is no longer one cluster but a set, a fleet of clusters.
HyperShift's logically centralized architecture, as well as its native separation of concerns and superior cluster lifecycle management experience, makes it a great fit as the foundation of our multi-cluster management story.
Thus the following stories are important for HyperShift:
Refs:
HyperShift is the core engine that will be used to provide hosted control-planes for consumption in managed and self-managed.
Main user story: When life cycling clusters as a cluster service consumer via HyperShift core APIs, I want to use a stable/backward compatible API that is less susceptible to future changes so I can provide availability guarantees.
Ref: What are we missing in Core HyperShift for GA Readiness?
Customers do not pay Red Hat more to run HyperShift control planes and supporting infrastructure than Standalone control planes and supporting infrastructure.
Assumptions:
HyperShift - proposed cuts from data plane
When operating OpenShift clusters (for any OpenShift form factor) from MCE/ACM/OCM/CLI as a Cluster Service Consumer (RH managed SRE, or self-managed SRE/admin), I want to be able to migrate CPs from one hosting service cluster to another:
More information:
To understand usage patterns and inform our decision making for the product, we need to be able to measure adoption and assess usage.
See Hosted Control Planes (aka HyperShift) Strategy [Live Document]
Whether it's managed or self-managed, it's pertinent to report health metrics so we can create meaningful Service Level Objectives (SLOs) and alert on failure to meet our availability guarantees. This is especially important for our managed services path.
https://issues.redhat.com/browse/OCPPLAN-8901
HyperShift for managed services is a strategic company goal as it improves usability, feature, and cost competitiveness against other managed solutions, and because managed services/consumption-based cloud services is where we see the market growing (customers are looking to delegate platform overhead).
We should make sure our SD milestones are unblocked by the core team.
This feature reflects HyperShift core readiness to be consumed. When all related EPICs and stories in this EPIC are complete, HyperShift can be considered ready to be consumed in GA form. This does not describe a date but rather the readiness of core HyperShift to be consumed in GA form, NOT the GA itself.
- GA date for self-managed will be factoring in other inputs such as adoption, customer interest/commitment, and other factors.
- GA dates for ROSA-HyperShift are on track, tracked in milestones M1-7 (have a look at https://issues.redhat.com/browse/OCPPLAN-5771)
Epic Goal*
The goal is to split client certificate trust chains from the global Hypershift root CA.
Why is this important? (mandatory)
This is important to:
Scenarios (mandatory)
Provide details for user scenarios including actions to be performed, platform specifications, and user personas.
Dependencies (internal and external) (mandatory)
Hypershift team needs to provide us with code reviews and merge the changes we are to deliver
Contributing Teams(and contacts) (mandatory)
Acceptance Criteria (optional)
The serviceaccount CA bundle automatically injected into all pods cannot be used to authenticate any client certificate generated by the control plane.
Drawbacks or Risk (optional)
Risk: there is significant time pressure, as this should be delivered before the first stable Hypershift release.
Done - Checklist (mandatory)
AUTH-311 introduced an enhancement. Implement the signer separation described there.
When this image was assembled, these features were not yet completed. Therefore, only the Jira Cards included here are part of this release
We have a set of images
that should become multiarch images. This should be done both in upstream and downstream.
As a reference, we have built internally those images as multiarch and made them available as
They can be consumed by the Assisted Service pod via the following env:
- name: AGENT_DOCKER_IMAGE
  value: registry.redhat.io/rhai-tech-preview/assisted-installer-agent-rhel8:latest
- name: CONTROLLER_IMAGE
  value: registry.redhat.io/rhai-tech-preview/assisted-installer-reporter-rhel8:latest
- name: INSTALLER_IMAGE
  value: registry.redhat.io/rhai-tech-preview/assisted-installer-rhel8:latest
OLM would have to support a mechanism, like podAffinity, that allows multiple architecture values to be specified, enabling it to pin operators to worker nodes of the matching architecture.
Ref: https://github.com/openshift/enhancements/pull/1014
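For illustration, a nodeAffinity-style selector that pins a workload to a set of supported architectures could look like the sketch below; this shows the general Kubernetes mechanism, not the agreed OLM design:
```yaml
affinity:
  nodeAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
      nodeSelectorTerms:
      - matchExpressions:
        - key: kubernetes.io/arch
          operator: In
          values:            # multiple architecture values (illustrative)
          - amd64
          - arm64
          - s390x
```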
Cut a new release of the OLM API and update OLM API dependency version (go.mod) in OLM package; then
Bring the upstream changes from OLM-2674 to the downstream olm repo.
A/C:
- New OLM API version release
- OLM API dependency updated in OLM Project
- OLM Subscription API changes downstreamed
- OLM Controller changes downstreamed
- Changes manually tested on Cluster Bot
We drive OpenShift cross-market customer success and new customer adoption with constant improvements and feature additions to the existing capabilities of our OpenShift Core Networking (SDN and Network Edge). This feature captures that natural progression of the product.
There are definitely grey areas, but in general:
Questions to be addressed:
Goal: Provide queryable metrics and telemetry for cluster routes and sharding in an OpenShift cluster.
Problem: Today we test OpenShift performance and scale with best-guess or anecdotal evidence for the number of routes that our customers use. Best practice for a large number of routes in a cluster is to shard; however, we have no visibility into whether and how customers are using sharding.
Why is this important? These metrics will inform our performance and scale testing, documented cluster limits, and how customers are using sharding for best practice deployments.
Dependencies (internal and external):
Prioritized epics + deliverables (in scope / not in scope):
Not in scope:
Estimate (XS, S, M, L, XL, XXL):
Previous Work:
Open questions:
Acceptance criteria:
Epic Done Checklist:
Description:
As described in the Design Doc, the following information needs to be exported from the Cluster Ingress Operator:
Design 2 will be implemented as part of this story.
Acceptance Criteria:
Description:
As described in the Metrics to be sent via telemetry section of the Design Doc, the following metrics need to be sent from the OpenShift cluster to Red Hat premises:
The metrics should be allowlisted on the cluster side.
The steps described in Sending metrics via telemetry need to be followed, specifically step 5.
Depends on CFE-478.
Acceptance Criteria:
This is an epic bucket for all activities surrounding the creation of a declarative approach to releasing and maintaining OLM catalogs.
When working on this Epic, it's important to keep in mind this other potentially related Epic: https://issues.redhat.com/browse/OLM-2276
Enhance the veneer rendering to be able to read the input veneer data from stdin, via a pipe, in a manner similar to https://dev.to/napicella/linux-pipes-in-golang-2e8j
Then the command could be used in a manner similar to many k8s examples, like
```shell
opm alpha render-veneer semver -o yaml < infile > outfile
```
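With stdin support in place, the same rendering could also be driven from a pipe, for example (a sketch reusing the command shown above):
```shell
cat infile | opm alpha render-veneer semver -o yaml > outfile
```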
Upstream issue link: https://github.com/operator-framework/operator-registry/issues/1011
Jira Description
As an OPM maintainer, I want to downstream the PR for OCP 4.12 and backport it to OCP 4.11 so that IIB will NOT be impacted by the changes when it upgrades the OPM version to use the next/future opm upstream release (v1.25.0).
Summary / Background
IIB (the downstream service that manages the indexes) uses the upstream version, and if they bump the OPM version to the next/future (v1.25.0) release with this change before having the downstream images updated, then the process to manage the indexes downstream will face issues and it will impact the distributions.
Acceptance Criteria
Definition of Ready
Definition of Done
tldr: three basic claims, the rest is explanation and one example
While bugs are an important metric, fixing bugs is different than investing in maintainability and debugability. Investing in fixing bugs will help alleviate immediate problems, but doesn't improve the ability to address future problems. You (may) get a code base with fewer bugs, but when you add a new feature, it will still be hard to debug problems and interactions. This pushes a code base towards stagnation where it gets harder and harder to add features.
One alternative is to ask teams to produce ideas for how they would improve future maintainability and debugability instead of focusing on immediate bugs. This would produce designs that make problem determination, bug resolution, and future feature additions faster over time.
I have a concrete example of one such outcome of focusing on bugs vs quality. We have resolved many bugs about communication failures with ingress by finding problems with point-to-point network communication. We have fixed the individual bugs, but have not improved the code for future debugging. In so doing, we chase many hard-to-diagnose problems across the stack. The alternative is to create a point-to-point network connectivity capability. This would immediately improve bug resolution and stability (detection) for kuryr, ovs, legacy sdn, network-edge, kube-apiserver, openshift-apiserver, authentication, and console. Bug fixing does not produce the same impact.
We need more investment in our future selves. Saying, "teams should reserve this" doesn't seem to be universally effective. Perhaps an approach that directly asks for designs and impacts and then follows up by placing the items directly in planning and prioritizing against PM feature requests would give teams the confidence to invest in these areas and give broad exposure to systemic problems.
Relevant links:
Epic Template descriptions and documentation.
Enable the chaos plugin https://coredns.io/plugins/chaos/ in our CoreDNS configuration so that we can use a DNS query to easily identify what DNS pods are responding to our requests.
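A minimal sketch of what this might look like, assuming the upstream chaos plugin syntax; the server block, version string, and service IP are illustrative:
```shell
# Corefile snippet enabling the plugin:
#   .:5353 {
#       chaos "openshift-dns"    # version string returned for version.bind (illustrative)
#       ...
#   }

# Ask which DNS pod answered; hostname.bind is served by the chaos plugin:
dig @172.30.0.10 -c CH -t TXT hostname.bind +short
```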
Requirement | Notes | isMvp? |
---|---|---|
CI - MUST be running successfully with test automation | This is a requirement for ALL features. | YES |
Release Technical Enablement | Provide necessary release enablement details and documents. | YES |
This Section: What does the person writing code, testing, documenting need to know? What context can be provided to frame this feature.
Questions to be addressed:
As a console user I want to have the option to:
For Deployments we will add the 'Restart rollout' action button. This action will PATCH the Deployment object's 'spec.template.metadata.annotations' block, by adding 'openshift.io/restartedAt: <actual-timestamp>' annotation. This will restart the deployment, by creating a new ReplicaSet.
For DeploymentConfig we will add 'Retry rollout' action button. This action will PATCH the latest revision of ReplicationController object's 'metadata.annotations' block by setting 'openshift.io/deployment/phase: "New"' and removing openshift.io/deployment.cancelled and openshift.io/deployment.status-reason.
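For reference, the PATCH described for the Deployment case is equivalent to something like the following (the deployment name and timestamp are illustrative; the console issues the same change through the Kubernetes API rather than the CLI):
```shell
oc patch deployment/<deployment-name> --type=merge \
  -p '{"spec":{"template":{"metadata":{"annotations":{"openshift.io/restartedAt":"2022-08-01T12:00:00Z"}}}}}'
```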
Acceptance Criteria:
BACKGROUND:
OpenShift console will be updated to allow rollout restart deployment from the console itself.
Currently, from the OpenShift console, for the resource “deploymentconfigs” we can only start and pause the rollout, and for the resource “deployment” we can only resume the rollout. Neither resource (deployment & deployment config) has an option to restart the rollout. That is why the customer wants this functionality to perform the same action from the OpenShift console as well as the CLI.
The customer wants developers who are not fluent with the oc tool and terminal utilities to be able to use the console instead of the terminal to restart a deployment, just as it is done through the CLI using the command “oc rollout restart deploy/<deployment-name>“.
Usually when developers change the config map that a deployment uses, they have to restart pods. Currently, the developers have to use the oc rollout restart deployment command. The customer wants a button/menu to perform the same action from the console as well.
Design
Doc: https://docs.google.com/document/d/1i-jGtQGaA0OI4CYh8DH5BBIVbocIu_dxNt3vwWmPZdw/edit
As a developer, I want to make status.HostIP for Pods visible in the Pod details page of the OCP Web Console. Currently there is no way to view the node IP for a Pod in the OpenShift Web Console. When viewing a Pod in the console, the field status.HostIP is not visible.
Acceptance criteria:
When OCP is performing a cluster upgrade, the user should be notified about this fact.
There are two possibilities for how to surface the cluster upgrade to users:
AC:
Note: We need to decide if we want to distinguish this particular notification by a different color. cc'ing Ali Mobrem
Created from: https://issues.redhat.com/browse/RFE-3024
Pre-Work Objectives
Since some of our requirements from the ACM team will not be available for the 4.12 timeframe, the team should work on anything we can get done in the scope of the console repo so that when the required items are available in 4.13, we can be more nimble in delivering GA content for the Unified Console Epic.
Overall GA Key Objective
Providing our customers with a single simplified User Experience (Hybrid Cloud Console) that is extensible, can run locally or in the cloud, and is capable of managing the fleet as well as deep diving into a single cluster.
Why do customers want this?
Why do we want this?
Phase 2 Goal: Productization of the united Console
As a developer I would like to disable clusters like *KS that we can't support for multi-cluster (for instance because we can't authenticate). The ManagedCluster resource has a vendor label that we can use to know if the cluster is supported.
cc Ali Mobrem Sho Weimer Jakub Hadvig
UPDATE: 9/20/22 : we want an allow-list with OpenShift, ROSA, ARO, ROKS, and OpenShiftDedicated
Acceptance criteria:
The MCO should properly report its state in a way that's consistent and able to be understood by customers, troubleshooters, and maintainers alike.
Some customer cases have revealed scenarios where the MCO state reporting is misleading and therefore could be unreliable to base decisions and automation on.
In addition to correcting some incorrect states, the MCO will be enhanced for a more granular view of update rollouts across machines.
The MCO should properly report its state in a way that's consistent and able to be understood by customers, troubleshooters, and maintainers alike.
For this epic, "state" means "what is the MCO doing?" – so the goal here is to try to make sure that it's always known what the MCO is doing.
This includes:
While this probably crosses a little bit into the "status" portion of certain MCO objects, as some state is definitely recorded there, this probably shouldn't turn into a "better status reporting" epic. I'm interpreting "status" to mean "how is it going" so status is maybe a "detail attached to a state".
Exploration here: https://docs.google.com/document/d/1j6Qea98aVP12kzmPbR_3Y-3-meJQBf0_K6HxZOkzbNk/edit?usp=sharing
https://docs.google.com/document/d/17qYml7CETIaDmcEO-6OGQGNO0d7HtfyU7W4OMA6kTeM/edit?usp=sharing
The current property description is:
configuration represents the current MachineConfig object for the machine config pool.
But in a 4.12.0-ec.4 cluster, the actual semantics seem to be something closer to "the most recent rendered config that we completely leveled on". We should at least update the godocs to be more specific about the intended semantics. And perhaps consider adjusting the semantics?
This section includes Jira cards that are linked to an Epic, but the Epic itself is not linked to any Feature. These epics were completed when this image was assembled
This is epic tracks "business as usual" requirements / enhancements / bug fixing of Insights Operator.
Today the links point at a rule-scoped page, but that page lacks information about recommended resolution. You can click through by cluster ID to your specific cluster and get that recommendation advice, but it would be more convenient and less confusing for customers if we linked directly to the cluster-scoped recommendation page.
We can implement by updating the template here to be:
fmt.Sprintf("https://console.redhat.com/openshift/insights/advisor/clusters/%s?first=%s%%7C%s", clusterID, ruleIDStr, rec.ErrorKey)
or something like that.
unknowns
request is clear, solution/implementation to be further clarified
This story only covers API components. We will create a separate story for other utility functions.
Today we are generating documentation for Console's Dynamic Plugin SDK in
frontend/packages/dynamic-plugin-sdk. We are missing ts-doc for a set of hooks and components.
We are generating the markdown from the dynamic-plugin-sdk using
yarn generate-doc
Here is the list of the API that the dynamic-plugin-sdk is exposing:
https://gist.github.com/spadgett/0ddefd7ab575940334429200f4f7219a
Acceptance Criteria:
Out of Scope:
Currently the ConsolePlugins API version is v1alpha1. Since we are going GA with dynamic plugins we should be creating a v1 version.
This would require updates in following repositories:
AC:
NOTE: This story does not include the conversion webhook change which will be created as a follow on story
Move `frontend/public/components/nav` to `packages/console-app/src/components/nav` and address any issues resulting from the move.
There will be some expected lint errors relating to cyclical imports. These will require some refactoring to address.
The extension `console.dashboards/overview/detail/item` doesn't constrain the content to fit the card.
The details-card has an expectation that a <dd> item will be the last item (for spacing between items). Our static details-card items use a component called 'OverviewDetailItem'. This isn't enforced in the extension and can cause undesired padding issues if they just do whatever they want.
I feel our approach here should be making the extension take the props of 'OverviewDetailItem' where 'children' is the new 'component'.
The console has good error boundary components that are useful for dynamic plugin.
Exposing them will enable the plugins to get the same look and feel of handling react errors as console
The minimum requirement right now is to expose the ErrorBoundaryFallbackPage component from
https://github.com/openshift/console/blob/master/frontend/packages/console-shared/src/components/error/fallbacks/ErrorBoundaryFallbackPage.tsx
When defining two proxy endpoints,
apiVersion: console.openshift.io/v1alpha1
kind: ConsolePlugin
metadata:
  ...
  name: forklift-console-plugin
spec:
  displayName: Console Plugin Template
  proxy:
    service:
      basePath: /
I get two proxy endpoints
/api/proxy/plugin/forklift-console-plugin/forklift-inventory
and
/api/proxy/plugin/forklift-console-plugin/forklift-must-gather-api
but both proxy to the `forklift-must-gather-api` service
e.g.
curl to:
[server url]/api/proxy/plugin/forklift-console-plugin/forklift-inventory
will point to the `forklift-must-gather-api` service, instead of the `forklift-inventory` service
Acceptance Criteria: Add missing API docs for *Icon and *Status components in the API docs
Following https://coreos.slack.com/archives/C011BL0FEKZ/p1650640804532309, it would be useful for us (network observability team) to have access to ResourceIcon in dynamic-plugin-sdk.
Currently ResourceLink is exported but not ResourceIcon
AC:
Based on API review CONSOLE-3145, we have decided to deprecate the following APIs:
cc Andrew Ballantyne Bryan Florkiewicz
Currently our `api.md` does not generate docs with "tags" (aka `@deprecated`) – we'll need to add that functionality to the `generate-doc.ts` script. See the code that works for `console-extensions.md`
We should have a global notification or the `Console plugins` page (e.g., k8s/cluster/operator.openshift.io~v1~Console/cluster/console-plugins) should alert users when console operator `spec.managementState` is `Unmanaged` as changes to `enabled` for plugins will have no effect.
To align with https://github.com/openshift/dynamic-plugin-sdk, plugin metadata field dependencies as well as the @console/pluginAPI entry contained within should be made optional.
If a plugin doesn't declare the @console/pluginAPI dependency, the Console release version check should be skipped for that plugin.
`@openshift-console/plugin-shared` (NPM) is a package that will contain shared components that can be upversioned separately by the Plugins so they can keep core compatibility low but upversion and support more shared components as we need them.
This isn't documented today. We need to do that.
We neither use nor support static plugin nav extensions anymore so we should remove the API in the static plugin SDK and get rid of related cruft in our current nav components.
AC: Remove static plugin nav extensions code. Check the navigation code for any references to the old API.
During the development of https://issues.redhat.com/browse/CONSOLE-3062, it was determined additional information is needed in order to assist a user when troubleshooting a Failed plugin (see https://github.com/openshift/console/pull/11664#issuecomment-1159024959). As it stands today, there is no data available to the console to relay to the user regarding why the plugin Failed. Presumably, a message should be added to NotLoadedDynamicPlugin to address this gap.
AC: Add `message` property to NotLoadedDynamicPluginInfo type.
This enhancement introduces support for provisioning and upgrading heterogeneous architecture clusters in phases.
We need to scan through the compute nodes and build a set of supported architectures from those. Each node on the cluster has a label for architecture, e.g. kubernetes.io/arch=arm64, kubernetes.io/arch=amd64, etc. Based on the set of supported architectures, the console will need to surface only those operators in the Operator Hub which are supported on our Nodes.
AC:
@jpoulin is good to ask about heterogeneous clusters.
This enhancement introduces support for provisioning and upgrading heterogeneous architecture clusters in phases.
We need to scan through the compute nodes and build a set of supported architectures from those. Each node on the cluster has a label for architecture, e.g. `kubernetes.io/arch=arm64`, `kubernetes.io/arch=amd64`, etc. Based on the set of supported architectures, the console will need to surface only those operators in the Operator Hub which are supported on our Nodes. Each operator's PackageManifest contains labels that indicate the operator's supported architectures, e.g. `operatorframework.io/arch.s390x: supported`. An operator can be supported on multiple architectures.
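For illustration, the architectures present on the compute nodes can be inspected with a simple label query such as the one below (a sketch; the console itself would read the Node objects through its own API layer):
```shell
oc get nodes -L kubernetes.io/arch
```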
AC:
OS and arch filtering: https://github.com/openshift/console/blob/2ad4e17d76acbe72171407fc1c66ca4596c8aac4/frontend/packages/operator-lifecycle-manager/src/components/operator-hub/operator-hub-items.tsx#L49-L86
@jpoulin is good to ask about heterogeneous clusters.
An epic we can duplicate for each release to ensure we have a place to catch things we ought to be doing regularly but can tend to fall by the wayside.
As a developer, I want to be able to clean up the css markup after making the css / scss changes required for dark mode and remove any old unused css / scss content.
Acceptance criteria:
As a user, I want to be able to:
so that I can achieve
Description of criteria:
Detail about what is specifically not being delivered in the story
1. Proposed title of this feature request
Basic authentication for Helm Chart repository in helmchartrepositories.helm.openshift.io CRD.
2. What is the nature and description of the request?
As of v4.6.9, the HelmChartRepository CRD only supports client TLS authentication through spec.connectionConfig.tlsClientConfig.
3. Why do you need this? (List the business requirements here)
Basic authentication is widely used by many chart repositories managers (Nexus OSS, Artifactory, etc.)
Helm CLI also supports them with the helm repo add command.
https://helm.sh/docs/helm/helm_repo_add/
4. How would you like to achieve this? (List the functional requirements here)
Probably by extending the CRD:
spec:
  connectionConfig:
    username: username
    password:
      secretName: secret-name
The secret namespace should be openshift-config to align with the tlsClientConfig behavior.
5. For each functional requirement listed in question 4, specify how Red Hat and the customer can test to confirm the requirement is successfully implemented.
Trying to pull helm charts from remote private chart repositories that have disabled anonymous access and offer basic authentication.
E.g.: https://github.com/sonatype/docker-nexus
As an OCP user I would like to be able to install helm charts from repos added to ODC with basic authentication fields populated.
We need to support helm installs for Repos that have the basic authentication secret name and namespace.
Updating the ProjectHelmChartRepository CRD, already done in a different story
Supporting the HelmChartRepository CR; this feature will be scoped first to project/namespace-scoped repos.
If the new fields for basic auth are set in the repo CR, then use those credentials when making API calls to helm to install/upgrade charts. We will error out if the logged-in user does not have access to the secret referenced by the repo CR. If the basic auth fields are not present, we assume it is not an authenticated repo.
None
NA
I can list, install and update charts on authenticated repos from ODC
Needs Documentation both upstream and downstream
Needs new unit test covering repo auth
Dependencies identified
Blockers noted and expected delivery timelines set
Design is implementable
Acceptance criteria agreed upon
Story estimated
Unknown
Verified
Unsatisfied
ACCEPTANCE CRITERIA
NOTES
ACCEPTANCE CRITERIA
NOTES
This is a follow-up Epic to https://issues.redhat.com/browse/MCO-144, which aimed to get in-place upgrades for Hypershift. This epic aims to capture additional work to focus on using CoreOS/OCP layering in Hypershift, which has benefits such as:
- removing or reducing the need for ignition
- maintaining feature parity between self-driving and managed OCP models
- adding additional functionality such as hotfixes
Currently not implemented, and will require the MCD hypershift mode to be adjusted to handle disruptionless upgrades like regular MCD
Right now in https://github.com/openshift/hypershift/pull/1258 you can only perform one upgrade at a time. Multiple upgrades will break due to controller logic
Properly create logic to handle manifest creation/updates and deletion, so the logic is more bulletproof
We plan to build Ironic container images using RHEL9 as the base image in OCP 4.12.
This is required because the ironic components have abandoned support for CentOS Stream 8 and Python 3.6/3.7 upstream during the most recent development cycle that will produce the stable Zed release, in favor of CentOS Stream 9 and Python 3.8/3.9
More info on RHEL8 to RHEL9 transition in OCP can be found at https://docs.google.com/document/d/1N8KyDY7KmgUYA9EOtDDQolebz0qi3nhT20IOn4D-xS4
Update ironic software to pick up the latest bug fixes.
This is an API change and we will consider this as a feature request.
https://issues.redhat.com/browse/NE-799 Please check this for more details
https://issues.redhat.com/browse/NE-799 Please check this for more details
No
N/A
Make sure that the CSI driver automatically updates oVirt credentials when they are updated in OpenShift.
In the CSI driver operator we should add the withSecretHashAnnotation call from library-go, like this: https://github.com/openshift/aws-ebs-csi-driver-operator/blob/53ed27b2a0eaa655338da180a79897855b366ac7/pkg/operator/starter.go#L138
We need tests for the ovirt-csi-driver and the cluster-api-provider-ovirt. These tests help us to
Also, having dedicated tests on lower levels with a smaller scope (unit, integration, ...) has the following benefits:
Integration tests need to be implemented according to https://cluster-api.sigs.k8s.io/developer/testing.html#integration-tests using envtest.
As a user, in the topology view, I would like to be updated intuitively if any of the deployments have reached quota limits.
Refer below for more details
As a user, I would like to be informed in an intuitive way when quotas have been reached in a namespace.
Refer below for more details
Provide a form driven experience to allow cluster admins to manage the perspectives to meet the ACs below.
We have heard the following requests from customers and developer advocates:
As an admin, I want to hide user perspective(s) based on the customization.
As an admin, I want to be able to use a form driven experience to hide user perspective(s)
As an admin, I should be able to see a code snippet that shows how to add user perspectives
Based on the https://issues.redhat.com/browse/ODC-6732 enhancement proposal, the cluster admin can add user perspectives
To support the cluster-admin to configure the perspectives correctly, the developer console should provide a code snippet for the customization of yaml resource (Console CRD).
Customize Perspective Enhancement PR: https://github.com/openshift/enhancements/pull/1205
Previous work:
As an admin, I want to hide the admin perspective for non-privileged users or hide the developer perspective for all users
Based on the https://issues.redhat.com/browse/ODC-6730 enhancement proposal, it is required to extend the console configuration CRD to enable the cluster admins to configure this data in the console resource
Previous customization work:
Customers don't want their users to have access to some/all of the items which are available in the Developer Catalog. The request is to change access for the cluster, not per user or persona.
Provide a form driven experience to allow cluster admins easily disable the Developer Catalog, or one or more of the sub catalogs in the Developer Catalog.
Multiple customer requests.
We need to consider how this will work with subcatalogs which are installed by operators: VMs, Event Sources, Event Catalogs, Managed Services, Cloud based services
As a cluster-admin, I should be able to see a code snippet that shows how to enable sub-catalogs or the entire dev catalog.
Based on the https://issues.redhat.com/browse/ODC-6732 enhancement proposal, the cluster admin can add sub-catalog(s) from the Developer Catalog or the Dev catalog as a whole.
To support the cluster-admin to configure the sub-catalog list correctly, the developer console should provide a code snippet for the customization yaml resource (Console CRD).
Previous work:
As an admin, I want to hide/disable access to specific sub-catalogs in the developer catalog or the complete dev catalog for all users across all namespaces.
Based on the https://issues.redhat.com/browse/ODC-6732 enhancement proposal, it is required to extend the console configuration CRD to enable the cluster admins to configure this data in the console resource
Extend the "customization" spec type definition for the CRD in the openshift/api project
Previous customization work:
As an admin, I want to hide sub-catalogs in the developer catalog or hide the developer catalog completely based on the customization.
As an admin, I would like openshift-* namespaces with an operator to be labeled with security.openshift.io/scc.podSecurityLabelSync=true to ensure the continual functioning of operators without manual intervention. The label should only be applied to openshift-* namespaces with an operator (the presence of a ClusterServiceVersion resource) IF the label is not already present. This automation will help smooth functioning of the cluster and avoid frivolous operational events.
Context: As part of the PSA migration period, Openshift will ship with the "label sync'er" - a controller that will automatically adjust PSA security profiles in response to the workloads present in the namespace. We can assume that not all operators (produced by Red Hat, the community or ISVs) will have successfully migrated their deployments in response to upstream PSA changes. The label sync'er will sync, by default, any namespace not prefixed with "openshift-", of which an explicit label (security.openshift.io/scc.podSecurityLabelSync=true) is required for sync.
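For illustration, a namespace labelled by this automation would look roughly like this (the namespace name is hypothetical):
```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: openshift-example-operator                          # hypothetical namespace containing a CSV
  labels:
    security.openshift.io/scc.podSecurityLabelSync: "true"
```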
A/C:
- OLM operator has been modified (downstream only) to label any unlabelled "openshift-" namespace in which a CSV has been created
- If a labeled namespace containing at least one non-copied csv becomes unlabelled, it should be relabelled
- The implementation should be done in a way to eliminate or minimize subsequent downstream sync work (it is ok to make slight architectural changes to the OLM operator in the upstream to enable this)
As an SRE, I want the hypershift operator to expose a metric when the hosted control plane is ready.
This should allow SREs to tune (or silence) alerts occurring while the hosted control plane is spinning up.
The Kube APIServer has a sidecar to output audit logs. We need similar sidecars for other APIServers that run on the control plane side. We also need to pass the same audit log policy that we pass to the KAS to these other API servers.
This epic tracks network tooling improvements for 4.12
A new framework and process should be developed to make sharing network tools with devs, support, and customers convenient. We are going to add some tools for OVN troubleshooting before OVN-K goes default, some tools that we got from customer cases, and some more to help analyze and debug collected logs based on the stable must-gather/sosreport format we get now thanks to the 4.11 Epic.
Our estimation for this Epic is 1 engineer * 2 Sprints
WHY:
This epic is important to help improve the time it takes our customers and our team to understand an issue within the cluster.
A focus of this epic is to develop tools to quickly allow debugging of a problematic cluster. This is crucial for the engineering team to help us scale. We want to provide a tool to our customers to help lower the cognitive burden to get at a root cause of an issue.
Alert if any of the ovn controllers have been disconnected from the southbound database for a period of time, using the metric ovn_controller_southbound_database_connected.
The metric updates every 2 minutes so please be mindful of this when creating the alert.
If the controller is disconnected for 10 minutes, fire an alert.
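A possible shape for the rule, assuming the standard PrometheusRule form; names and severity are suggestions, not the final agreed alert:
```yaml
apiVersion: monitoring.coreos.com/v1
kind: PrometheusRule
metadata:
  name: ovn-controller-alerts                    # suggested name
  namespace: openshift-ovn-kubernetes
spec:
  groups:
  - name: ovn-controller
    rules:
    - alert: OVNControllerSouthboundDisconnected # suggested name
      # the metric updates every ~2 minutes, so a 10m window covers several scrapes
      expr: ovn_controller_southbound_database_connected == 0
      for: 10m
      labels:
        severity: warning
      annotations:
        summary: ovn-controller has been disconnected from the southbound database for 10 minutes.
```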
DoD: Merged to CNO and tested by QE
OCP/Telco Definition of Done
Epic Template descriptions and documentation.
Add a socks proxy to cluster-network-operator so egressip can use grpc to reach worker nodes.
With the introduction of grpc as a means for determining the state of a given egress node, hypershift should be able to leverage the socks proxy and become able to know the state of each egress node.
References relevant to this work:
1281-network-proxy
[+https://coreos.slack.com/archives/C01C8502FMM/p1658427627751939+]
[+https://github.com/openshift/hypershift/pull/1131/commits/28546dc587dc028dc8bded715847346ff99d65ea+]
This Epic is here to track the rebase we need to do when kube 1.25 is GA https://www.kubernetes.dev/resources/release/
Keeping this in mind can help us plan our time better. At the time of writing, GA is planned for August 23.
https://docs.google.com/document/d/1h1XsEt1Iug-W9JRheQas7YRsUJ_NQ8ghEMVmOZ4X-0s/edit --> this is the link for rebase help
We need to rebase cloud network config controller to 1.25 when the kube 1.25 rebase lands.
This section includes Jira cards that are linked to an Epic, but the Epic itself is not linked to any Feature. These epics were not completed when this image was assembled
Placeholder epic to track spontaneous tasks which do not deserve their own epic.
AWS has a hard limit of 100 OIDC providers globally.
Currently each HostedCluster created by e2e creates its own OIDC provider, which results in hitting the quota limit frequently and causing the tests to fail as a result.
DOD:
Only a single OIDC provider should be created and shared between all e2e HostedClusters.
AC:
We have the connectDirectlyToCloudAPIs flag in the konnectivity socks5 proxy to dial directly to cloud providers without going through konnectivity.
This introduces another path for exceptions: https://github.com/openshift/hypershift/pull/1722
We should consolidate both by keeping connectDirectlyToCloudAPIs until there's a reason not to.
Once the HostedCluster and NodePool get paused using the PausedUntil statement, the awsprivatelink controller will continue reconciling.
How to test this:
DoD:
At the moment, if the input etcd KMS encryption (key and role) is invalid, we fail transparently.
We should check that both the key and the role are compatible/operational for a given cluster and surface a failure condition otherwise.
Changes made in METAL-1 open up opportunities to improve our handling of images by cleaning up redundant code that generates extra work for the user and extra load for the cluster.
We only need to run the image cache DaemonSet if there is a QCOW URL to be mirrored (effectively this means a cluster installed with 4.9 or earlier). We can stop deploying it for new clusters installed with 4.10 or later.
Currently, the image-customization-controller relies on the image cache running on every master to provide the shared hostpath volume containing the ISO and initramfs. The first step is to replace this with a regular volume and an init container in the i-c-c pod that extracts the images from machine-os-images. We can use the copy-metal -image-build flag (instead of -all used in the shared volume) to provide only the required images.
Once i-c-c has its own volume, we can switch the image extraction in the metal3 Pod's init container to use the -pxe flag instead of -all.
The machine-os-images init container for the image cache (not the metal3 Pod) can be removed. The whole image cache deployment is now optional and need only be started if provisioningOSDownloadURL is set (and in fact should be deleted if it is not).
Description of the problem:
When running assisted-installer on a machine where there is more than one volume group per physical volume, only the first volume group will be cleaned up. This leads to problems later and will lead to errors such as:
Failed - failed executing nsenter [--target 1 --cgroup --mount --ipc --pid -- pvremove /dev/sda -y -ff], Error exit status 5, LastOutput "Can't open /dev/sda exclusively. Mounted filesystem?
How reproducible:
Set up a VM with more than one volume group per physical volume. As an example, look at the following sample from a customer cluster.
List block devices
/usr/bin/lsblk -o NAME,MAJ:MIN,SIZE,TYPE,FSTYPE,KNAME,MODEL,UUID,WWN,HCTL,VENDOR,STATE,TRAN,PKNAME
NAME MAJ:MIN SIZE TYPE FSTYPE KNAME MODEL UUID WWN HCTL VENDOR STATE TRAN PKNAME
loop0 7:0 125.9G loop xfs loop0 c080b47b-2291-495c-8cc0-2009ebc39839
loop1 7:1 885.5M loop squashfs loop1
sda 8:0 894.3G disk sda INTEL SSDSC2KG96 0x55cd2e415235b2db 1:0:0:0 ATA running sas
|-sda1 8:1 250M part sda1 0x55cd2e415235b2db sda
|-sda2 8:2 750M part ext2 sda2 3aa73c72-e342-4a07-908c-a8a49767469d 0x55cd2e415235b2db sda
|-sda3 8:3 49G part xfs sda3 ffc3ccfe-f150-4361-8ae5-f87b17c13ac2 0x55cd2e415235b2db sda
|-sda4 8:4 394.2G part LVM2_member sda4 Ua3HOc-Olm4-1rma-q0Ug-PtzI-ZOWg-RJ63uY 0x55cd2e415235b2db sda
`-sda5 8:5 450G part LVM2_member sda5 W8JqrD-ZvaC-uNK9-Y03D-uarc-Tl4O-wkDdhS 0x55cd2e415235b2db sda
  `-nova-instance 253:0 3.1T lvm ext4 dm-0 d15e2de6-2b97-4241-9451-639f7b14594e running sda5
sdb 8:16 894.3G disk sdb INTEL SSDSC2KG96 0x55cd2e415235b31b 1:0:1:0 ATA running sas
`-sdb1 8:17 894.3G part LVM2_member sdb1 6ETObl-EzTd-jLGw-zVNc-lJ5O-QxgH-5wLAqD 0x55cd2e415235b31b sdb
  `-nova-instance 253:0 3.1T lvm ext4 dm-0 d15e2de6-2b97-4241-9451-639f7b14594e running sdb1
sdc 8:32 894.3G disk sdc INTEL SSDSC2KG96 0x55cd2e415235b652 1:0:2:0 ATA running sas
`-sdc1 8:33 894.3G part LVM2_member sdc1 pBuktx-XlCg-6Mxs-lddC-qogB-ahXa-Nd9y2p 0x55cd2e415235b652 sdc
  `-nova-instance 253:0 3.1T lvm ext4 dm-0 d15e2de6-2b97-4241-9451-639f7b14594e running sdc1
sdd 8:48 894.3G disk sdd INTEL SSDSC2KG96 0x55cd2e41521679b7 1:0:3:0 ATA running sas
`-sdd1 8:49 894.3G part LVM2_member sdd1 exVSwU-Pe07-XJ6r-Sfxe-CQcK-tu28-Hxdnqo 0x55cd2e41521679b7 sdd
  `-nova-instance 253:0 3.1T lvm ext4 dm-0 d15e2de6-2b97-4241-9451-639f7b14594e running sdd1
sr0 11:0 989M rom iso9660 sr0 Virtual CDROM0 2022-06-17-18-18-33-00 0:0:0:0 AMI running usb
Now run the assisted installer and try to install an SNO node on this machine; the installation will fail with a message indicating that it could not exclusively access /dev/sda.
Actual results:
The installation will fail with a message that indicates that it could not exclusively access /dev/sda
Expected results:
The installation should proceed and the cluster should start to install.
Suspected Cases
https://issues.redhat.com/browse/AITRIAGE-3809
https://issues.redhat.com/browse/AITRIAGE-3802
https://issues.redhat.com/browse/AITRIAGE-3810
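The cleanup needs to enumerate and remove every volume group on the disk before wiping the physical volumes. A minimal shell sketch of that order, using the device names from the example above (this is not the assisted-installer code itself):

```bash
# Remove *all* VGs that live on the disk, not only the first one, before pvremove.
disk=/dev/sda
for pv in $(pvs --noheadings -o pv_name | grep "${disk}"); do
  vg=$(pvs --noheadings -o vg_name "$pv" | tr -d '[:space:]')
  if [ -n "$vg" ]; then
    vgchange -a n "$vg"     # deactivate LVs first (e.g. nova-instance above)
    vgremove -y -f "$vg"
  fi
  pvremove -y -ff "$pv"
done
```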
Description of the problem:
Cluster installation fails if the installation disk has LVM on RAID:
Host: test-infra-cluster-3cc862c9-master-0, reached installation stage Failed: failed executing nsenter [--target 1 --cgroup --mount --ipc --pid -- mdadm --stop /dev/md0], Error exit status 1, LastOutput "mdadm: Cannot get exclusive access to /dev/md0:Perhaps a running process, mounted filesystem or active volume group?"
How reproducible:
100%
Steps to reproduce:
1. Install a cluster where the master nodes have a disk with LVM on RAID (reproduced using this test: https://gitlab.cee.redhat.com/ocp-edge-qe/kni-assisted-installer-auto/-/blob/master/api_tests/test_disk_cleanup.py#L97)
Actual results:
Installation failed
Expected results:
Installation success
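For LVM-on-RAID layouts the cleanup order matters: the volume groups sitting on top of the md device have to be deactivated before `mdadm --stop` can succeed. A hedged sketch of that order (not the actual installer code; /dev/md0 is taken from the error above):

```bash
# Deactivate the VGs backed by the array first, then the array can be stopped.
for vg in $(pvs --noheadings -o vg_name /dev/md0 | tr -d ' ' | sort -u); do
  vgchange -a n "$vg"
done
mdadm --stop /dev/md0
```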
Same thing as we've had in assisted-service: we sometimes fail to install golangci-lint by fetching release artifacts from GitHub directly. That's usually because the same IP address (the CI build cluster) accesses GitHub at a high rate, leading to 429 (too many requests).
The way we fixed it for assisted-service was changing the installation to use a quay.io image that already contains the binary.
Example for such a failure: https://prow.ci.openshift.org/view/gs/origin-ci-test/pr-logs/pull/openshift_release/30788/rehearse-30788-periodic-ci-openshift-assisted-installer-agent-release-ocm-2.6-subsystem-test-periodic/1551879759036682240
Filter for all recent failures: https://search.ci.openshift.org/?search=golangci%2Fgolangci-lint+crit+unable+to+find&maxAge=168h&context=1&type=build-log&name=.*assisted.*&excludeName=&maxMatches=5&maxBytes=20971520&groupBy=job
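One way to apply the same fix here is to pull the prebuilt binary out of a container image instead of hitting the GitHub releases API from CI. The image path, tag, and binary location below are placeholders, not the actual image used by assisted-service:

```bash
# Extract golangci-lint from a prebuilt image hosted on quay.io (hypothetical image/tag).
podman create --name lint quay.io/example/golangci-lint:v1.50.0
podman cp lint:/usr/bin/golangci-lint ./bin/golangci-lint
podman rm lint
```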
Section 5 of PRD: https://docs.google.com/document/d/1fF-Ajdzc9EDDg687FzTrX577hvY9NdK0/edit#heading=h.gjdgxs
Testing and collaboration with NVIDIA: https://docs.google.com/spreadsheets/d/1LHY-Af-2kQHVwtW4aVdHnmwZLTiatiyf-ySffC8O5NM/edit#gid=0
Deploying Nvidia Patches: https://docs.google.com/document/d/1yR4lphjPKd6qZ9sGzZITl0wH1r4ykfMKPjUnlzvWji4/edit#
This is the continuation of https://issues.redhat.com/browse/NHE-273 but now the focus is on the remaining flows.
Description of problem:
check_pkt_length cannot be offloaded without 1) sFlow offload patches in Open vSwitch and 2) hardware driver support. Since 1) will not be done anytime soon, we need a workaround for the check_pkt_length issue.
Version-Release number of selected component (if applicable):
4.11/4.12
How reproducible:
Always
Steps to Reproduce:
1. Any flow that has check_pkt_len():
5-b: Pod -> NodePort Service traffic (Pod Backend - Different Node)
6-b: Pod -> NodePort Service traffic (Host Backend - Different Node)
4-b: Pod -> Cluster IP Service traffic (Host Backend - Different Node)
10-b: Host Pod -> Cluster IP Service traffic (Host Backend - Different Node)
11-b: Host Pod -> NodePort Service traffic (Pod Backend - Different Node)
12-b: Host Pod -> NodePort Service traffic (Host Backend - Different Node)
Actual results:
Poor performance due to upcalls when check_pkt_len() is not supported.
Expected results:
Good performance.
Additional info:
https://docs.google.com/spreadsheets/d/1LHY-Af-2kQHVwtW4aVdHnmwZLTiatiyf-ySffC8O5NM/edit#gid=670206692
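To confirm which of the flows above stay in the OVS slow path (and therefore generate upcalls), the datapath flow dump can be filtered by offload state; this assumes a reasonably recent OVS that supports the type= filter:

```bash
# Flows actually offloaded to hardware vs. flows still handled in the OVS datapath.
ovs-appctl dpctl/dump-flows type=offloaded | grep check_pkt_len
ovs-appctl dpctl/dump-flows type=ovs | grep check_pkt_len
```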
As a developer building container images on OpenShift
I want to specify that my build should run without elevated privileges
So that builds do not run as root from the host's perspective with elevated privileges
No QE required for Dev Preview. OpenShift regression testing will verify that existing behavior is not impacted.
We will need to document how to enable this feature, with sufficient warnings regarding Dev Preview.
This likely warrants an OpenShift blog post.
We have been running into a number of problems with configure-ovs and nodeip-configuration selecting different interfaces in OVNK deployments. This causes connectivity issues, so we need some way to ensure that everything uses the same interface/IP.
Currently configure-ovs runs before nodeip-configuration, but since nodeip-configuration is the source of truth for IP selection regardless of CNI plugin, I think we need to look at swapping that order. That way configure-ovs could look at what nodeip-configuration chose and not have to implement its own interface selection logic.
I'm targeting this at 4.12 because even though there's probably still time to get it in for 4.11, changing the order of boot services is always a little risky and I'd prefer to do it earlier in the cycle so we have time to tease out any issues that arise. We may need to consider backporting the change though since this has been an issue at least back to 4.10.
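As a rough illustration of the proposed ordering change, a systemd drop-in like the following would make configure-ovs wait for nodeip-configuration. The unit names (ovs-configuration.service, nodeip-configuration.service) are assumptions, and the real change would live in the MCO templates rather than a drop-in:

```ini
# /etc/systemd/system/ovs-configuration.service.d/10-after-nodeip.conf
[Unit]
After=nodeip-configuration.service
Wants=nodeip-configuration.service
```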
Goal
Provide an indication that advanced features are used
Problem
Today, customers and RH don't have the information on the actual usage of advanced features.
Why is this important?
Prioritized Scenarios
In Scope
1. Add a boolean variable in our telemetry to mark if the customer is using advanced features (PV encryption, encryption with KMS, external mode).
Not in Scope
Integrate with subscription watch - will be done by the subscription watch team with our help.
Customers
All
Customer Facing Story
As a compliance manager, I should be able to easily see if all my clusters are using the right number of subscriptions.
What does success look like?
A clear indication in subscription watch for ODF usage (either essential or advanced).
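For illustration only, the in-scope boolean could surface in telemetry as a simple gauge like the one below; the metric name and semantics are made up here, not the actual ODF metric:

```
# HELP odf_advanced_feature_usage 1 if any advanced feature (PV encryption, encryption with KMS, external mode) is in use
# TYPE odf_advanced_feature_usage gauge
odf_advanced_feature_usage 1
```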
Link to main epic: https://issues.redhat.com/browse/RHSTOR-3173
We migrated most components as part of https://issues.redhat.com/browse/RHSTOR-2165
We now have a few components remaining, roughly 15 to 20%. This epic targets:
1) Add support for in-tree modal launcher
This section includes Jira cards that are not linked to either an Epic or a Feature. These tickets were completed when this image was assembled
This is a clone of issue OCPBUGS-5184. The following is the description of the original issue:
—
Description of problem:
IPI Azure cluster deployment fails when the region is set to westus3 and the VM type to NV8as_v4. The master node shows as running in the Azure portal, but SSH login is not possible. The serial log shows the errors below:
[ 3009.547219] amdgpu d1ef:00:00.0: amdgpu: failed to write reg:de0
[ 3011.982399] mlx5_core 6637:00:02.0 enP26167s1: TX timeout detected
[ 3011.987010] mlx5_core 6637:00:02.0 enP26167s1: TX timeout on queue: 0, SQ: 0x170, CQ: 0x84d, SQ Cons: 0x823 SQ Prod: 0x840, usecs since last trans: 2418884000
[ 3011.996946] mlx5_core 6637:00:02.0 enP26167s1: TX timeout on queue: 1, SQ: 0x175, CQ: 0x852, SQ Cons: 0x248c SQ Prod: 0x24a7, usecs since last trans: 2148366000
[ 3012.006980] mlx5_core 6637:00:02.0 enP26167s1: TX timeout on queue: 2, SQ: 0x17a, CQ: 0x857, SQ Cons: 0x44a1 SQ Prod: 0x44c0, usecs since last trans: 2055000000
[ 3012.016936] mlx5_core 6637:00:02.0 enP26167s1: TX timeout on queue: 3, SQ: 0x17f, CQ: 0x85c, SQ Cons: 0x405f SQ Prod: 0x4081, usecs since last trans: 1913890000
[ 3012.026954] mlx5_core 6637:00:02.0 enP26167s1: TX timeout on queue: 4, SQ: 0x184, CQ: 0x861, SQ Cons: 0x39f2 SQ Prod: 0x3a11, usecs since last trans: 2020978000
[ 3012.037208] mlx5_core 6637:00:02.0 enP26167s1: TX timeout on queue: 5, SQ: 0x189, CQ: 0x866, SQ Cons: 0x1784 SQ Prod: 0x17a6, usecs since last trans: 2185513000
[ 3012.047178] mlx5_core 6637:00:02.0 enP26167s1: TX timeout on queue: 6, SQ: 0x18e, CQ: 0x86b, SQ Cons: 0x4c96 SQ Prod: 0x4cb3, usecs since last trans: 2124353000
[ 3012.056893] mlx5_core 6637:00:02.0 enP26167s1: TX timeout on queue: 7, SQ: 0x193, CQ: 0x870, SQ Cons: 0x3bec SQ Prod: 0x3c0f, usecs since last trans: 1855857000
[ 3021.535888] amdgpu d1ef:00:00.0: amdgpu: failed to write reg:e15
[ 3021.545955] BUG: unable to handle kernel paging request at ffffb57b90159000
[ 3021.550864] PGD 100145067 P4D 100145067 PUD 100146067 PMD 0
From the Azure doc https://learn.microsoft.com/en-us/azure/virtual-machines/nvv4-series, it looks like the NVv4 series only supports Windows VMs.
Version-Release number of selected component (if applicable):
4.12 nightly build
How reproducible:
Always
Steps to Reproduce:
1. prepare install-config.yaml, set region as westus3, vm type as NV8as_v4
2. install cluster
Actual results:
installation failed
Expected results:
If the NVv4 series is not supported for Linux VMs, the installer should validate this and report that the size is not supported.
Additional info:
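For reference, the relevant install-config.yaml fields from the reproduction steps would look roughly like the fragment below (other required fields omitted; the full Azure size name is assumed to be Standard_NV8as_v4):

```yaml
platform:
  azure:
    region: westus3
controlPlane:
  platform:
    azure:
      type: Standard_NV8as_v4
```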
This is a clone of issue OCPBUGS-10558. The following is the description of the original issue:
—
Description of problem:
When running a cluster on application credentials, this event appears repeatedly: ns/openshift-machine-api machineset/nhydri0d-f8dcc-kzcwf-worker-0 hmsg/173228e527 - pathological/true reason/ReconcileError could not find information for "ci.m1.xlarge"
Version-Release number of selected component (if applicable):
How reproducible:
Happens in the CI (https://prow.ci.openshift.org/view/gs/origin-ci-test/pr-logs/pull/openshift_release/33330/rehearse-33330-periodic-ci-shiftstack-shiftstack-ci-main-periodic-4.13-e2e-openstack-ovn-serial/1633149670878351360).
Steps to Reproduce:
1. On a living cluster, rotate the OpenStack cloud credentials
2. Invalidate the previous credentials
3. Watch the machine-api events (`oc -n openshift-machine-api get event`). A `Warning` type of issue could not find information for "name-of-the-flavour" will appear.
If the cluster was installed using a password that you can't invalidate:
1. Rotate the cloud credentials to application credentials
2. Restart MAPO (`oc -n openshift-machine-api get pods -o NAME | xargs -r oc -n openshift-machine-api delete`)
3. Rotate cloud credentials again
4. Revoke the first application credentials you set
5. Finally watch the events (`oc -n openshift-machine-api get event`)
The event signals that MAPO wasn't able to update flavour information on the MachineSet status.
Actual results:
Expected results:
No issue detecting the flavour details
Additional info:
Offending code likely around this line: https://github.com/openshift/machine-api-provider-openstack/blob/bcb08a7835c08d20606d75757228fd03fbb20dab/pkg/machineset/controller.go#L116
A nil-pointer dereference occurred in the TestRouterCompressionOperation test in the e2e-gcp-operator CI job for the openshift/cluster-ingress-operator repository.
4.12.
Observed once. However, we run e2e-gcp-operator infrequently.
1. Run the e2e-gcp-operator CI job on a cluster-ingress-operator PR.
panic: runtime error: invalid memory address or nil pointer dereference [recovered]
panic: runtime error: invalid memory address or nil pointer dereference [recovered]
panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1 addr=0x38 pc=0x14cabef]
goroutine 8048 [running]:
testing.tRunner.func1.2({0x1624920, 0x265b870})
  /usr/lib/golang/src/testing/testing.go:1389 +0x24e
testing.tRunner.func1()
  /usr/lib/golang/src/testing/testing.go:1392 +0x39f
panic({0x1624920, 0x265b870})
  /usr/lib/golang/src/runtime/panic.go:838 +0x207
k8s.io/apimachinery/pkg/util/runtime.HandleCrash({0x0, 0x0, 0x40e43e5698?})
  /go/src/github.com/openshift/cluster-ingress-operator/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:56 +0xd8
panic({0x1624920, 0x265b870})
  /usr/lib/golang/src/runtime/panic.go:838 +0x207
github.com/openshift/cluster-ingress-operator/test/e2e.getHttpHeaders(0xc0002b9380?, 0xc0000e4540, 0x1)
  /go/src/github.com/openshift/cluster-ingress-operator/test/e2e/router_compression_test.go:257 +0x2ef
github.com/openshift/cluster-ingress-operator/test/e2e.testContentEncoding.func1()
  /go/src/github.com/openshift/cluster-ingress-operator/test/e2e/router_compression_test.go:220 +0x57
k8s.io/apimachinery/pkg/util/wait.ConditionFunc.WithContext.func1({0x18, 0xc00003f000})
  /go/src/github.com/openshift/cluster-ingress-operator/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:222 +0x1b
k8s.io/apimachinery/pkg/util/wait.runConditionWithCrashProtectionWithContext({0x1b25d40?, 0xc000138000?}, 0xc000befe08?)
  /go/src/github.com/openshift/cluster-ingress-operator/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:235 +0x57
k8s.io/apimachinery/pkg/util/wait.poll({0x1b25d40, 0xc000138000}, 0x48?, 0xc4fa25?, 0x30?)
  /go/src/github.com/openshift/cluster-ingress-operator/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:582 +0x38
k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x1b25d40, 0xc000138000}, 0xc000b1da00?, 0xc000befe98?, 0x414207?)
  /go/src/github.com/openshift/cluster-ingress-operator/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528 +0x4a
k8s.io/apimachinery/pkg/util/wait.PollImmediate(0xc00088cea0?, 0x3b9aca00?, 0xc000138000?)
  /go/src/github.com/openshift/cluster-ingress-operator/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514 +0x50
github.com/openshift/cluster-ingress-operator/test/e2e.testContentEncoding(0xc00088cea0, 0xc000a8a270, 0xc0000e4540, 0x1, {0x17fe569, 0x4})
  /go/src/github.com/openshift/cluster-ingress-operator/test/e2e/router_compression_test.go:219 +0xfc
github.com/openshift/cluster-ingress-operator/test/e2e.TestRouterCompressionOperation(0xc00088cea0)
  /go/src/github.com/openshift/cluster-ingress-operator/test/e2e/router_compression_test.go:208 +0x454
testing.tRunner(0xc00088cea0, 0x191cdd0)
  /usr/lib/golang/src/testing/testing.go:1439 +0x102
created by testing.(*T).Run
  /usr/lib/golang/src/testing/testing.go:1486 +0x35f
The test should pass.
The faulty logic was introduced in https://github.com/openshift/cluster-ingress-operator/pull/679/commits/211b9c15b1fd6217dee863790c20f34c26c138aa.
The test was subsequently marked as a parallel test in https://github.com/openshift/cluster-ingress-operator/pull/756/commits/a22322b25569059c61e1973f37f0a4b49e9407bc.
The job history shows that the e2e-gcp-operator job has only run once since June: https://prow.ci.openshift.org/job-history/gs/origin-ci-test/pr-logs/directory/pull-ci-openshift-cluster-ingress-operator-master-e2e-gcp-operator. I see failures in May, but none of those failures shows the panic.
Description of problem:
For OVNK to become CNCF compliant, we need to support the session affinity timeout feature and enable the e2e tests on the OpenShift side. This bug tracks the effort to get this into 4.12 OCP.
Version-Release number of selected component (if applicable):
How reproducible:
Steps to Reproduce:
1. 2. 3.
Actual results:
Expected results:
Additional info:
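For context, the session affinity timeout here is the standard Kubernetes Service field that OVN-Kubernetes needs to honor, e.g.:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: affinity-svc
spec:
  selector:
    app: web              # example selector
  ports:
  - port: 80
  sessionAffinity: ClientIP
  sessionAffinityConfig:
    clientIP:
      timeoutSeconds: 120   # the timeout the e2e tests exercise
```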
Description of problem:
When provisioningNetwork is changed from Disabled to Managed/Unmanaged, the ironic-proxy daemonset is not removed. This causes the metal3 pod to be stuck in Pending, since both pods are trying to use port 6385 on the host: 0/3 nodes are available: 3 node(s) didn't have free ports for the requested pod ports. preemption: 0/3 nodes are available: 3 node(s) didn't have free ports for the requested pod ports
Version-Release number of selected component (if applicable):
4.12rc.4
How reproducible:
Every time for me
Steps to Reproduce:
1. On a multinode cluster, change the provisioningNetwork from Disabled to Unmanaged (I didn't try Managed)
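For step 1, the field can be toggled on the cluster-scoped Provisioning resource, e.g. (assuming the default resource name):

```bash
oc patch provisioning provisioning-configuration --type merge \
  -p '{"spec":{"provisioningNetwork":"Unmanaged"}}'
```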
Actual results:
0/3 nodes are available: 3 node(s) didn't have free ports for the requested pod ports. preemption: 0/3 nodes are available: 3 node(s) didn't have free ports for the requested pod ports
Expected results:
I believe the ironic-proxy daemonset should be deleted when the provisioningNetwork is set to Managed/Unmanaged
Additional info:
If I manually delete the ironic-proxy Daemonset, the controller does not re-create it.
This is a clone of issue OCPBUGS-5734. The following is the description of the original issue:
—
Description of problem:
In https://issues.redhat.com/browse/OCPBUGSM-46450, the VIP was added to noProxy for StackCloud but it should also be added for all national clouds.
Version-Release number of selected component (if applicable):
4.10.20
How reproducible:
always
Steps to Reproduce:
1. Set up a proxy
2. Deploy a cluster in a national cloud using the proxy
Actual results:
Installation fails
Expected results:
Additional info:
The inconsistence was discovered when testing the cluster-network-operator changes https://issues.redhat.com/browse/OCPBUGS-5559
OpenShift 4.12 is going to be built with Go 1.19, but automatic migration in our repository has failed. The migration should be done manually.
[1]: https://github.com/openshift/cluster-image-registry-operator/pull/802
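A manual migration is essentially a bump of the go directive plus a vendor refresh, roughly:

```bash
go mod edit -go=1.19
go mod tidy
go mod vendor
```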
Description of problem:
etcd and kube-apiserver pods get restarted due to failed liveness probes while deleting/re-creating pods on SNO
Version-Release number of selected component (if applicable):
4.10.32
How reproducible:
Not always, after ~10 attempts
Steps to Reproduce:
1. Deploy SNO with Telco DU profile applied
2. Create multiple pods with local storage volumes attached (attaching yaml manifest)
3. Force delete and re-create pods 10 times
Actual results:
etcd and kube-apiserver pods get restarted, making the cluster unavailable for a period of time
Expected results:
etcd and kube-apiserver do not get restarted
Additional info:
Attaching must-gather. Please let me know if any additional info is required. Thank you!
In order to start 4.12 development, we need to merge the agent-installer branch. We need to create a PR and engage the Installer team on getting it approved
This is a clone of https://bugzilla.redhat.com/show_bug.cgi?id=2083087 (OCPBUGSM-44070) to backport this issue.
Description of problem:
"Delete dependent objects of this resource" is a bit of confusing for some users because when creating the Application in Dev console not only the deployment but also IS, route, svc, secret objects will be created as well. When deleting the Application (in fact it is deployment), there is an option called "Delete dependent objects of this resource" and some users might think this means the IS, route, svc and any other objects which are created alongside with the deployment will be deleted as well
Version-Release number of selected component (if applicable):
4.8
How reproducible:
Always
Steps to Reproduce:
1. Create Application in Dev console
2. Delete the deployment
3. Check "Delete dependent objects of this resource"
Actual results:
Only the deployment will be deleted; the IS, svc, and route will not be deleted
Expected results:
We should either change the description of this option, or really delete the IS, svc, route, and any other objects created under this Application.
Additional info:
This is a clone of issue OCPBUGS-3524. The following is the description of the original issue:
—
Description of problem:
When installing a fully private cluster on Azure against 4.12.0-0.nightly-2022-11-10-033725, the storage account (sa) for the CoreOS image has public access.
$ az storage account list -g jima-azure-11a-f58lp-rg --query "[].[name,allowBlobPublicAccess]" -o tsv
clusterptkpx True
imageregistryjimaazrsgcc False
With the same profile on 4.11.0-0.nightly-2022-11-10-202051, the sa for the CoreOS image is not publicly accessible.
$ az storage account list -g jima-azure-11c-kf9hw-rg --query "[].[name,allowBlobPublicAccess]" -o tsv
clusterr8wv9 False
imageregistryjimaaz9btdx False
Checked that the terraform-provider-azurerm version is different between 4.11 and 4.12.
4.11: v2.98.0
4.12: v3.19.1
In terraform-provider-azurerm v2.98.0, the property allow_blob_public_access manages sa public access, and its default value is false.
In terraform-provider-azurerm v3.19.1, the property is renamed to allow_nested_items_to_be_public, and its default value is true.
https://github.com/hashicorp/terraform-provider-azurerm/blob/main/CHANGELOG.md#300-march-24-2022
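A sketch of what the behavioral difference means in provider v3.x terms (not the installer's actual Terraform; the attribute names are taken from the provider changelog above, everything else is illustrative):

```hcl
resource "azurerm_storage_account" "cluster" {
  name                     = "cluster"
  resource_group_name      = "example-rg"
  location                 = "westus3"
  account_tier             = "Standard"
  account_replication_type = "LRS"

  # v2.98.0: allow_blob_public_access, default false.
  # v3.19.1: renamed to allow_nested_items_to_be_public, default true,
  # so it now has to be set explicitly to keep the old behavior.
  allow_nested_items_to_be_public = false
}
```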
Version-Release number of selected component (if applicable):
4.12.0-0.nightly-2022-11-10-033725
How reproducible:
always on 4.12
Steps to Reproduce:
1. Install fully private cluster on azure against 4.12 payload
Actual results:
sa for coreos image is publicly accessible
Expected results:
sa for coreos image should not be publicly accessible
Additional info:
only happened on 4.12
Manoj noticed that the cluster registration fails for SNO clusters when the network type is set to OpenShiftSDN. We should add some validation to prevent this combination.
Failed to register cluster with assisted-service: AssistedServiceError Code: 400 Href: ID: 400 Kind: Error Reason: OpenShiftSDN network type is not allowed in single node mode
Documentation also indicates OpenShiftSDN is not compatible: https://docs.openshift.com/container-platform/4.11/installing/installing_sno/install-sno-preparing-to-install-sno.html
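A minimal install-config fragment for SNO that satisfies the intended validation (the key point being networkType; other required fields omitted):

```yaml
networking:
  networkType: OVNKubernetes   # OpenShiftSDN is rejected in single node mode
controlPlane:
  replicas: 1
compute:
- name: worker
  replicas: 0
```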
This is a clone of issue OCPBUGS-4700. The following is the description of the original issue:
—
Description of problem:
In at least 4.12.0-rc.0, a user with read-only access to ClusterVersion can see an "Update blocked" pop-up talking about "...alert above the visualization...". It is referencing a banner about "This cluster should not be updated to the next minor version...", but that banner is not displayed because hasPermissionsToUpdate is false, so canPerformUpgrade is false.
Version-Release number of selected component (if applicable):
4.12.0-rc.0. Likely more. I haven't traced it out.
How reproducible:
Always.
Steps to Reproduce:
1. Install 4.12.0-rc.0
2. Create a user with cluster-wide read-only permissions. For me, it's via binding to a sudoer ClusterRole. I'm not sure where that ClusterRole comes from, but it's:
$ oc get -o yaml clusterrole sudoer
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  annotations:
    rbac.authorization.kubernetes.io/autoupdate: "true"
  creationTimestamp: "2020-05-21T19:39:09Z"
  name: sudoer
  resourceVersion: "7715"
  uid: 28eb2ffa-dccd-47e8-a2d5-6a95e0e8b1e9
rules:
- apiGroups:
  - ""
  - user.openshift.io
  resourceNames:
  - system:admin
  resources:
  - systemusers
  - users
  verbs:
  - impersonate
- apiGroups:
  - ""
  - user.openshift.io
  resourceNames:
  - system:masters
  resources:
  - groups
  - systemgroups
  verbs:
  - impersonate
3. View /settings/cluster
Actual results:
See the "Update blocked" pop-up talking about "...alert above the visualization...".
Expected results:
Something more internally consistent. E.g. having the referenced banner "...alert above the visualization..." show up, or not having the "Update blocked" pop-up reference the non-existent banner.
Description of problem:
NPE on topology for the ns which just got deleted, see screenshot below
Version-Release number of selected component (if applicable):
How reproducible:
Steps to Reproduce:
1. Login as regular user
2. Create a ns and delete the ns
3. Visit the deleted ns in topology
Actual results:
console breaks due to NPE
Expected results:
console shouldn't break
Additional info:
Description of problem:
The ovn-kubernetes ovnkube-master containers are continuously crashlooping since we updated to 4.11.0-0.okd-2022-10-15-073651.
Log Excerpt:
] [] [] [{kubectl-client-side-apply Update networking.k8s.io/v1 2022-09-12 12:25:06 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{}}},"f:spec":{"f:ingress":{},"f:policyTypes":{}}} }]},Spec:NetworkPolicySpec{PodSelector:{map[] []},Ingress:[]NetworkPolicyIngressRule{NetworkPolicyIngressRule{Ports:[]NetworkPolicyPort{},From:[]NetworkPolicyPeer{NetworkPolicyPeer{PodSelector:&v1.LabelSelector{MatchLabels:map[string]string{access: true,},MatchExpressions:[]LabelSelectorRequirement{},},NamespaceSelector:nil,IPBlock:nil,},},},},Egress:[]NetworkPolicyEgressRule{},PolicyTypes:[Ingress],},} &NetworkPolicy{ObjectMeta:{allow-from-openshift-ingress compsci-gradcentral a405f843-c250-40d7-8dd4-a759f764f091 217304038 1 2022-09-22 14:36:38 +0000 UTC <nil> <nil> map[] map[] [] [] [{openshift-apiserver Update networking.k8s.io/v1 2022-09-22 14:36:38 +0000 UTC FieldsV1 {"f:spec":{"f:ingress":{},"f:policyTypes":{}}} }]},Spec:NetworkPolicySpec{PodSelector:{map[] []},Ingress:[]NetworkPolicyIngressRule{NetworkPolicyIngressRule{Ports:[]NetworkPolicyPort{},From:[]NetworkPolicyPeer{NetworkPolicyPeer{PodSelector:nil,NamespaceSelector:&v1.LabelSelector{MatchLabels:map[string]string{policy-group.network.openshift.io/ingress: ,},MatchExpressions:[]LabelSelectorRequirement{},},IPBlock:nil,},},},},Egress:[]NetworkPolicyEgressRule{},PolicyTypes:[Ingress],},}]: cannot clean up egress default deny ACL name: error in transact with ops [{Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:acls Mutator:delete Value:{GoSet:[{GoUUID:60cb946a-46e9-4623-9ba4-3cb35f018ed6}]}}] Timeout:<nil> Where:[where column _uuid == {ccdd01bf-3009-42fb-9672-e1df38190cd7}] Until: Durable:<nil> Comment:<nil> Lock:<nil> UUIDName:} {Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:acls Mutator:delete Value:{GoSet:[{GoUUID:60cb946a-46e9-4623-9ba4-3cb35f018ed6}]}}] Timeout:<nil> Where:[where column _uuid == {10bbf229-8c1b-4c62-b36e-4ba0097722db}] Until: Durable:<nil> Comment:<nil> Lock:<nil> UUIDName:} {Op:delete Table:ACL Row:map[] Rows:[] Columns:[] Mutations:[] Timeout:<nil> Where:[where column _uuid == {7b55ba0c-150f-4a63-9601-cfde25f29408}] Until: Durable:<nil> Comment:<nil> Lock:<nil> UUIDName:} {Op:delete Table:ACL Row:map[] Rows:[] Columns:[] Mutations:[] Timeout:<nil> Where:[where column _uuid == {60cb946a-46e9-4623-9ba4-3cb35f018ed6}] Until: Durable:<nil> Comment:<nil> Lock:<nil> UUIDName:}] results [{Count:1 Error: Details: UUID:{GoUUID:} Rows:[]} {Count:1 Error: Details: UUID:{GoUUID:} Rows:[]} {Count:1 Error: Details: UUID:{GoUUID:} Rows:[]} {Count:1 Error: Details: UUID:{GoUUID:} Rows:[]} {Count:0 Error:referential integrity violation Details:cannot delete ACL row 7b55ba0c-150f-4a63-9601-cfde25f29408 because of 1 remaining reference(s) UUID:{GoUUID:} Rows:[]}] and errors []: referential integrity violation: cannot delete ACL row 7b55ba0c-150f-4a63-9601-cfde25f29408 because of 1 remaining reference(s)
Additional info:
https://github.com/okd-project/okd/issues/1372
Issue persisted through update to 4.11.0-0.okd-2022-10-28-153352
must-gather: https://nbc9-snips.cloud.duke.edu/snips/must-gather.local.2859117512952590880.zip
Description of problem:
When a pod runs to a completed state, we typically rely on the update event that will indicate to us that this pod is completed. At that point the pod IP is released and the port configuration is removed in OVN. The subsequent delete event for this pod will be ignored because it should have been cleaned up in the previous update. However, there can be cases where the update event is missed with pod completed. In this case we will only receive a delete with pod completed event, and ignore tearing down the pod. The end result is the pod is not cleaned up in OVN and the IP address remains allocated, reducing the amount of address range available to launch another pod. This can lead to exhausting all IP addresses available for pod allocation on a node.
Version-Release number of selected component (if applicable):
4.10.24
How reproducible:
Not sure how to reproduce this. I'm guessing some lag in kapi updates can cause the completed update event and the final delete event to be combined into a single event.
Steps to Reproduce:
1. 2. 3.
Actual results:
Port still exists in OVN, IP remains allocated for a deleted pod.
Expected results:
IP should be freed, port should be removed from OVN.
Additional info:
Description of problem:
The IPI installation in some regions fails during bootstrap, without any node available/ready.
Version-Release number of selected component (if applicable):
12-22 16:22:27.970 ./openshift-install 4.12.0-0.nightly-2022-12-21-202045
12-22 16:22:27.970 built from commit 3f9c38a5717c638f952df82349c45c7d6964fcd9
12-22 16:22:27.970 release image registry.ci.openshift.org/ocp/release@sha256:2d910488f25e2638b6d61cda2fb2ca5de06eee5882c0b77e6ed08aa7fe680270
12-22 16:22:27.971 release architecture amd64
How reproducible:
Always
Steps to Reproduce:
1. try the IPI installation in the problem regions (so far tried and failed with ap-southeast-2, ap-south-1, eu-west-1, ap-southeast-6, ap-southeast-3, ap-southeast-5, eu-central-1, cn-shanghai, cn-hangzhou and cn-beijing)
Actual results:
Bootstrap failed to complete
Expected results:
Installation in those regions should succeed.
Additional info:
FYI the QE flexy-install job: https://mastern-jenkins-csb-openshift-qe.apps.ocp-c1.prod.psi.redhat.com/job/ocp-common/job/Flexy-install/166672/ No any node available/ready, and no any operator available. $ oc get clusterversion NAME VERSION AVAILABLE PROGRESSING SINCE STATUS version False True 30m Unable to apply 4.12.0-0.nightly-2022-12-21-202045: an unknown error has occurred: MultipleErrors $ oc get nodes No resources found $ oc get machines -n openshift-machine-api -o wide NAME PHASE TYPE REGION ZONE AGE NODE PROVIDERID STATE jiwei-1222f-v729x-master-0 30m jiwei-1222f-v729x-master-1 30m jiwei-1222f-v729x-master-2 30m $ oc get co NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE MESSAGE authentication baremetal cloud-controller-manager cloud-credential cluster-autoscaler config-operator console control-plane-machine-set csi-snapshot-controller dns etcd image-registry ingress insights kube-apiserver kube-controller-manager kube-scheduler kube-storage-version-migrator machine-api machine-approver machine-config marketplace monitoring network node-tuning openshift-apiserver openshift-controller-manager openshift-samples operator-lifecycle-manager operator-lifecycle-manager-catalog operator-lifecycle-manager-packageserver service-ca storage $ Mater nodes don't run for example kubelet and crio services. [core@jiwei-1222f-v729x-master-0 ~]$ sudo crictl ps FATA[0000] unable to determine runtime API version: rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing dial unix /var/run/crio/crio.sock: connect: no such file or directory" [core@jiwei-1222f-v729x-master-0 ~]$ The machine-config-daemon firstboot tells "failed to update OS". [jiwei@jiwei log-bundle-20221222085846]$ grep -Ei 'error|failed' control-plane/10.0.187.123/journals/journal.log Dec 22 16:24:16 localhost kernel: GPT: Use GNU Parted to correct GPT errors. Dec 22 16:24:16 localhost kernel: GPT: Use GNU Parted to correct GPT errors. Dec 22 16:24:18 localhost ignition[867]: failed to fetch config: resource requires networking Dec 22 16:24:18 localhost ignition[891]: GET error: Get "http://100.100.100.200/latest/user-data": dial tcp 100.100.100.200:80: connect: network is unreachable Dec 22 16:24:18 localhost ignition[891]: GET error: Get "http://100.100.100.200/latest/user-data": dial tcp 100.100.100.200:80: connect: network is unreachable Dec 22 16:24:19 localhost.localdomain NetworkManager[919]: <info> [1671726259.0329] hostname: hostname: hostnamed not used as proxy creation failed with: Could not connect: No such file or directory Dec 22 16:24:19 localhost.localdomain NetworkManager[919]: <warn> [1671726259.0464] sleep-monitor-sd: failed to acquire D-Bus proxy: Could not connect: No such file or directory Dec 22 16:24:19 localhost.localdomain ignition[891]: GET error: Get "https://api-int.jiwei-1222f.alicloud-qe.devcluster.openshift.com:22623/config/master": dial tcp 10.0.187.120:22623: connect: connection refused ...repeated logs omitted... Dec 22 16:27:46 jiwei-1222f-v729x-master-0 ovs-ctl[1888]: 2022-12-22T16:27:46Z|00001|dns_resolve|WARN|Failed to read /etc/resolv.conf: No such file or directory Dec 22 16:27:46 jiwei-1222f-v729x-master-0 ovs-vswitchd[1888]: ovs|00001|dns_resolve|WARN|Failed to read /etc/resolv.conf: No such file or directory Dec 22 16:27:46 jiwei-1222f-v729x-master-0 dbus-daemon[1669]: [system] Activation via systemd failed for unit 'dbus-org.freedesktop.resolve1.service': Unit dbus-org.freedesktop.resolve1.service not found. 
Dec 22 16:27:46 jiwei-1222f-v729x-master-0 nm-dispatcher[1924]: Error: Device '' not found. Dec 22 16:27:46 jiwei-1222f-v729x-master-0 nm-dispatcher[1937]: Error: Device '' not found. Dec 22 16:27:46 jiwei-1222f-v729x-master-0 nm-dispatcher[2037]: Error: Device '' not found. Dec 22 08:35:32 jiwei-1222f-v729x-master-0 machine-config-daemon[2181]: Warning: failed, retrying in 1s ... (1/2)I1222 08:35:32.477770 2181 run.go:19] Running: nice -- ionice -c 3 oc image extract --path /:/run/mco-extensions/os-extensions-content-910221290 --registry-config /var/lib/kubelet/config.json quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:259d8c6b9ec714d53f0275db9f2962769f703d4d395afb9d902e22cfe96021b0 Dec 22 08:56:06 jiwei-1222f-v729x-master-0 rpm-ostree[2288]: Txn Rebase on /org/projectatomic/rpmostree1/rhcos failed: remote error: Get "https://quay.io/v2/openshift-release-dev/ocp-v4.0-art-dev/blobs/sha256:27f262e70d98996165748f4ab50248671d4a4f97eb67465cd46e1de2d6bd24d0": net/http: TLS handshake timeout Dec 22 08:56:06 jiwei-1222f-v729x-master-0 machine-config-daemon[2181]: W1222 08:56:06.785425 2181 firstboot_complete_machineconfig.go:46] error: failed to update OS to quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:411e6e3be017538859cfbd7b5cd57fc87e5fee58f15df19ed3ec11044ebca511 : error running rpm-ostree rebase --experimental ostree-unverified-registry:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:411e6e3be017538859cfbd7b5cd57fc87e5fee58f15df19ed3ec11044ebca511: Warning: The unit file, source configuration file or drop-ins of rpm-ostreed.service changed on disk. Run 'systemctl daemon-reload' to reload units. Dec 22 08:56:06 jiwei-1222f-v729x-master-0 machine-config-daemon[2181]: error: remote error: Get "https://quay.io/v2/openshift-release-dev/ocp-v4.0-art-dev/blobs/sha256:27f262e70d98996165748f4ab50248671d4a4f97eb67465cd46e1de2d6bd24d0": net/http: TLS handshake timeout Dec 22 08:57:31 jiwei-1222f-v729x-master-0 machine-config-daemon[2181]: Warning: failed, retrying in 1s ... (1/2)I1222 08:57:31.244684 2181 run.go:19] Running: nice -- ionice -c 3 oc image extract --path /:/run/mco-extensions/os-extensions-content-4021566291 --registry-config /var/lib/kubelet/config.json quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:259d8c6b9ec714d53f0275db9f2962769f703d4d395afb9d902e22cfe96021b0 Dec 22 08:59:20 jiwei-1222f-v729x-master-0 systemd[2353]: /usr/lib/systemd/user/podman-kube@.service:10: Failed to parse service restart specifier, ignoring: never Dec 22 08:59:21 jiwei-1222f-v729x-master-0 podman[2437]: Error: open default: no such file or directory Dec 22 08:59:21 jiwei-1222f-v729x-master-0 podman[2450]: Error: failed to start API service: accept unixgram @00026: accept4: operation not supported Dec 22 08:59:21 jiwei-1222f-v729x-master-0 systemd[2353]: podman-kube@default.service: Failed with result 'exit-code'. Dec 22 08:59:21 jiwei-1222f-v729x-master-0 systemd[2353]: Failed to start A template for running K8s workloads via podman-play-kube. Dec 22 08:59:21 jiwei-1222f-v729x-master-0 systemd[2353]: podman.service: Failed with result 'exit-code'. [jiwei@jiwei log-bundle-20221222085846]$
This is a clone of issue OCPBUGS-5165. The following is the description of the original issue:
—
Currently, the Dev Sandbox clusters send the clusterType "OSD" instead of "DEVSANDBOX" because the configuration annotations of the console config are automatically overridden by some SyncSets.
Open Dev Sandbox and the browser console, and inspect window.SERVER_FLAGS.telemetry.
Description of problem: upon attempting to install OCP 4.10 UPI on baremetal ppc64le, the openshift-install gather command returns `panic: unsupported platform "none"`
Version-Release number of selected component (if applicable):
OCP 4.10.16
openshift-install 4.10.24
How reproducible:
easily
Steps to Reproduce:
1. create install config
2. create manifests
3. create ignition configs
4. openshift-install gather bootstrap --log-level "debug"
Actual results:
DEBUG OpenShift Installer 4.10.24
DEBUG Built from commit d63a12ba0ec33d492093a8fc0e268a01a075f5da
DEBUG Fetching Bootstrap SSH Key Pair...
DEBUG Loading Bootstrap SSH Key Pair...
DEBUG Using Bootstrap SSH Key Pair loaded from state file
DEBUG Reusing previously-fetched Bootstrap SSH Key Pair
DEBUG Fetching Install Config...
DEBUG Loading Install Config...
DEBUG Loading SSH Key...
DEBUG Loading Base Domain...
DEBUG Loading Platform...
DEBUG Loading Cluster Name...
DEBUG Loading Base Domain...
DEBUG Loading Platform...
DEBUG Loading Networking...
DEBUG Loading Platform...
DEBUG Loading Pull Secret...
DEBUG Loading Platform...
DEBUG Loading Install Config from both state file and target directory
DEBUG On-disk Install Config matches asset in state file
DEBUG Using Install Config loaded from state file
DEBUG Reusing previously-fetched Install Config
panic: unsupported platform "none"
goroutine 1 [running]:
github.com/openshift/installer/pkg/terraform/stages/platform.StagesForPlatform({0x146f2d0a, 0x1619aa08})
/go/src/github.com/openshift/installer/pkg/terraform/stages/platform/stages.go:55 +0x2ff
main.runGatherBootstrapCmd({0x14d8e028, 0x1})
/go/src/github.com/openshift/installer/cmd/openshift-install/gather.go:115 +0x2d6
main.newGatherBootstrapCmd.func1(0xc001364500, {0xc0005a0b40, 0x2, 0x2})
/go/src/github.com/openshift/installer/cmd/openshift-install/gather.go:65 +0x59
github.com/spf13/cobra.(*Command).execute(0xc001364500, {0xc0005a0b20, 0x2, 0x2})
/go/src/github.com/openshift/installer/vendor/github.com/spf13/cobra/command.go:860 +0x5f8
github.com/spf13/cobra.(*Command).ExecuteC(0xc001334c80)
/go/src/github.com/openshift/installer/vendor/github.com/spf13/cobra/command.go:974 +0x3bc
github.com/spf13/cobra.(*Command).Execute(...)
/go/src/github.com/openshift/installer/vendor/github.com/spf13/cobra/command.go:902
main.installerMain()
/go/src/github.com/openshift/installer/cmd/openshift-install/main.go:72 +0x29e
main.main()
/go/src/github.com/openshift/installer/cmd/openshift-install/main.go:50 +0x125
Expected results:
I'm not really sure what I expected to happen. I've never used that gather before.
I would assume at least no panicking.
Additional info:
Description of problem:
Added a script to collect PodNetworkConnectivityChecks to be able to view the overall status of pod network connectivity. The current must-gather collects the contents of `openshift-network-diagnostics` but does not collect the PodNetworkConnectivityCheck resources.
Version-Release number of selected component (if applicable):
4.12, 4.11, 4.10
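Until the script is in place, the same data can be inspected manually (the resources live in the openshift-network-diagnostics namespace):

```bash
oc get podnetworkconnectivitychecks -n openshift-network-diagnostics -o yaml
```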
Create a script that gathers debug information from a host running the agent ISO and exports it in a standard format so that we can ask customers to provide it for debugging when something has gone wrong (and also use it in CI).
For now, it is fine to require the user to ssh into the host to run the script. The script should already be in place inside the agent ISO.
The output should probably be a compressed tar file. That file could be saved locally, or potentially piped to stdout so that a user only has to run a command like: ssh node0 agent-gather > node0.tgz
Things we need to collect:
Description of problem:
Using a daemonset causes failures during draining: leases are not gracefully released and instead age out, because pods are killed after potentially losing network access due to the daemonset pods not being terminated. As pointed out in https://github.com/openshift/origin/pull/27394#discussion_r964002900 this should be fixed when moving to a deployment, and is also tracked here: https://issues.redhat.com/browse/BUILD-495
Version-Release number of selected component (if applicable):
How reproducible:
100%
Steps to Reproduce:
1. 2. 3.
Actual results:
Expected results:
Additional info:
This is a clone of issue OCPBUGS-1704. The following is the description of the original issue:
—
Description of problem:
According to OCP 4.11 doc (https://docs.openshift.com/container-platform/4.11/installing/installing_gcp/installing-gcp-account.html#installation-gcp-enabling-api-services_installing-gcp-account), the Service Usage API (serviceusage.googleapis.com) is an optional API service to be enabled. But, the installation cannot succeed if this API is disabled.
Version-Release number of selected component (if applicable):
4.12.0-0.nightly-2022-09-25-071630
How reproducible:
Always, if the Service Usage API is disabled in the GCP project.
Steps to Reproduce:
1. Make sure the Service Usage API (serviceusage.googleapis.com) is disabled in the GCP project.
2. Try IPI installation in the GCP project.
Actual results:
The installation eventually fails, without any worker machines launched.
Expected results:
Installation should succeed, or the OCP doc should be updated.
Additional info:
Please see the attached must-gather logs (http://virt-openshift-05.lab.eng.nay.redhat.com/jiwei/jiwei-0926-03-cnxn5/) and the sanity check results. FYI, if the API is enabled, and without changing anything else, the installation succeeds.
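If the doc ends up requiring the API instead of treating it as optional, the prerequisite amounts to enabling it up front:

```bash
gcloud services enable serviceusage.googleapis.com --project <project-id>
```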
Probably for: 1h or some such; I don't think it needs to go off immediately. But in-cluster admins and folks monitoring submitted Insights should have a way to figure out that the cluster is trying and failing to submit Telemetry. The alert should not fire when Telemetry submission has been explicitly disabled.
There is an existing alert for PrometheusRemoteWriteBehind in a similar space, but as of today, the Telemetry submissions are happening via telemeter-client, due to concerns about the load of submitting via remote-write.
Description of problem:
We have 6 runs of techpreview jobs where the job fails due to the MCO:
{Operator degraded (RequiredPoolsFailed): Unable to apply 4.12.0-0.ci.test-2022-09-21-183414-ci-op-qd6plyhc-latest: error during syncRequiredMachineConfigPools: [timed out waiting for the condition, error pool master is not ready, retrying. Status: (pool degraded: true total: 3, ready 0, updated: 0, unavailable: 3)] Operator degraded (RequiredPoolsFailed): Unable to apply 4.12.0-0.ci.test-2022-09-21-183414-ci-op-qd6plyhc-latest: error during syncRequiredMachineConfigPools: [timed out waiting for the condition, error pool master is not ready, retrying. Status: (pool degraded: true total: 3, ready 0, updated: 0, unavailable: 3)]}
Looking at the MCD logs, the master seems to go degraded in bootstrap due to the rendered config not being found:
I0921 18:49:47.091804 8171 daemon.go:444] Node ci-op-qd6plyhc-6dd9a-bfmjd-master-1 is part of the control plane I0921 18:49:49.213556 8171 node.go:24] No machineconfiguration.openshift.io/currentConfig annotation on node ci-op-qd6plyhc-6dd9a-bfmjd-master-1: map[csi.volume.kubernetes.io/nodeid: {"pd.csi.storage.gke.io":"projects/openshift-gce-devel-ci-2/zones/us-central1-b/instances/ci-op-qd6plyhc-6dd9a-bfmjd-master-1"} volumes.kubernetes.io/controller-managed-attach-detach:true], in cluster bootstrap, loading initial node annotation from /etc/machine-config-daemon/node-annotations.json I0921 18:49:49.215186 8171 node.go:45] Setting initial node config: rendered-master-2dde32327e4e5d15092fccbac1dcec49 I0921 18:49:49.253706 8171 daemon.go:1184] In bootstrap mode E0921 18:49:49.254046 8171 writer.go:200] Marking Degraded due to: machineconfig.machineconfiguration.openshift.io "rendered-master-2dde32327e4e5d15092fccbac1dcec49" not found I0921 18:49:51.232610 8171 daemon.go:499] Transitioned from state: Done -> Degraded I0921 18:49:51.249618 8171 daemon.go:1184] In bootstrap mode E0921 18:49:51.249906 8171 writer.go:200] Marking Degraded due to: machineconfig.machineconfiguration.openshift.io "rendered-master-2dde32327e4e5d15092fccbac1dcec49" not found
However, looking at the controller, a rendered config was generated correctly, but it is not the missing config from above:
I0921 18:54:06.736984 1 render_controller.go:506] Generated machineconfig rendered-master-acc8491aafab8ef511a40b76372325ee from 6 configs: [{MachineConfig 00-master machineconfiguration.openshift.io/v1 } {MachineConfig 01-master-container-runtime machineconfiguration.openshift.io/v1 } {MachineConfig 01-master-kubelet machineconfiguration.openshift.io/v1 } {MachineConfig 98-master-generated-kubelet machineconfiguration.openshift.io/v1 } {MachineConfig 99-master-generated-registries machineconfiguration.openshift.io/v1 } {MachineConfig 99-master-ssh machineconfiguration.openshift.io/v1 }] I0921 18:54:06.737226 1 event.go:285] Event(v1.ObjectReference{Kind:"MachineConfigPool", Namespace:"", Name:"master", UID:"b2084ca6-4b33-46bf-b83b-9e98010ff085", APIVersion:"machineconfiguration.openshift.io/v1", ResourceVersion:"5648", FieldPath:""}): type: 'Normal' reason: 'RenderedConfigGenerated' rendered-master-acc8491aafab8ef511a40b76372325ee successfully generated (release version: 4.12.0-0.ci.test-2022-09-21-183220-ci-op-9ksj7d7g-latest, controller version: a627415c240b4c7dd2f9e90f659690d9c0f623f3) I0921 18:54:06.742053 1 render_controller.go:532] Pool master: now targeting: rendered-master-acc8491aafab8ef511a40b76372325ee
So far I see this in the following techpreview jobs:
GCP techpreview
https://prow.ci.openshift.org/view/gs/origin-ci-test/logs/openshift-kubernetes-1360-ci-4.12-e2e-gcp-sdn-techpreview/1572638837954318336
https://prow.ci.openshift.org/view/gs/origin-ci-test/logs/openshift-kubernetes-1360-ci-4.12-e2e-gcp-sdn-techpreview-serial/1572638838793179136
Vsphere techpreview
https://prow.ci.openshift.org/view/gs/origin-ci-test/logs/openshift-kubernetes-1360-nightly-4.12-e2e-vsphere-ovn-techpreview/1572638854794448896
https://prow.ci.openshift.org/view/gs/origin-ci-test/logs/openshift-kubernetes-1360-nightly-4.12-e2e-vsphere-ovn-techpreview-serial/1572638855574589440
AWS Techpreview:
https://prow.ci.openshift.org/view/gs/origin-ci-test/logs/openshift-kubernetes-1360-ci-4.12-e2e-aws-sdn-techpreview/1572638828672323584
https://prow.ci.openshift.org/view/gs/origin-ci-test/logs/openshift-kubernetes-1360-ci-4.12-e2e-aws-sdn-techpreview-serial/1572638829217583104
The above jobs affect the k8s 1.25 bump and are blocking the job.
There are also other occurrences not in our PR:
https://prow.ci.openshift.org/view/gs/origin-ci-test/pr-logs/pull/openshift_release/31965/rehearse-31965-pull-ci-openshift-openshift-controller-manager-master-openshift-e2e-aws-builds-techpreview/1572581504297472000
Also see a quick search:
https://search.ci.openshift.org/?search=timed+out+waiting+for+the+condition%2C+error+pool+master+is+not+ready&maxAge=48h&context=1&type=bug%2Bissue%2Bjunit&name=&excludeName=&maxMatches=5&maxBytes=20971520&groupBy=job
Did something change that would affect tech preview jobs?
Also note, this seems like a new failure. I have some of these jobs passing in the last ~ 8 days.
This is a clone of issue OCPBUGS-6213. The following is the description of the original issue:
—
Please review the following PR: https://github.com/openshift/machine-config-operator/pull/3450
The PR has been automatically opened by ART (#aos-art) team automation and indicates
that the image(s) being used downstream for production builds are not consistent
with the images referenced in this component's github repository.
Differences in upstream and downstream builds impact the fidelity of your CI signal.
If you disagree with the content of this PR, please contact @release-artists
in #aos-art to discuss the discrepancy.
Closing this issue without addressing the difference will cause the issue to
be reopened automatically.
Description of problem:
The current version of openshift/router vendors Kubernetes 1.24 packages. OpenShift 4.12 is based on Kubernetes 1.25.
Version-Release number of selected component (if applicable):
4.12
How reproducible:
Always
Steps to Reproduce:
1. Check https://github.com/openshift/router/blob/release-4.12/go.mod
Actual results:
Kubernetes packages (k8s.io/api, k8s.io/apimachinery, and k8s.io/client-go) are at version v0.24.0.
Expected results:
Kubernetes packages are at version v0.25.0 or later.
Additional info:
Using old Kubernetes API and client packages brings risk of API compatibility issues.
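The expected fix is a dependency bump along these lines, followed by a vendor refresh:

```bash
go get k8s.io/api@v0.25.0 k8s.io/apimachinery@v0.25.0 k8s.io/client-go@v0.25.0
go mod tidy
go mod vendor
```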
Description of problem:
Created a network LoadBalancer service, but always get a connection timeout when accessing the LB.
Version-Release number of selected component (if applicable):
4.12.0-0.nightly-2022-10-27-135134
How reproducible:
100%
Steps to Reproduce:
1. Create a custom ingresscontroller that uses a Network LB service:
$ Domain="nlb.$(oc get dns.config cluster -o=jsonpath='{.spec.baseDomain}')"
$ oc create -f - << EOF
kind: IngressController
apiVersion: operator.openshift.io/v1
metadata:
  name: nlb
  namespace: openshift-ingress-operator
spec:
  domain: ${Domain}
  replicas: 3
  endpointPublishingStrategy:
    loadBalancer:
      providerParameters:
        aws:
          type: NLB
        type: AWS
      scope: External
    type: LoadBalancerService
EOF
2. Wait for the ingress NLB service to be ready:
$ oc -n openshift-ingress get svc/router-nlb
NAME         TYPE           CLUSTER-IP      EXTERNAL-IP                                                                     PORT(S)                      AGE
router-nlb   LoadBalancer   172.30.75.134   a765a5eb408aa4a68988e35b72672379-78a76c339ded64fa.elb.us-east-2.amazonaws.com   80:31833/TCP,443:32499/TCP   117s
3. curl the network LB:
$ curl a765a5eb408aa4a68988e35b72672379-78a76c339ded64fa.elb.us-east-2.amazonaws.com -I
<hang>
Actual results:
Connection time out
Expected results:
curl should return 503
Additional info:
the NLB service has the annotation: service.beta.kubernetes.io/aws-load-balancer-type: nlb
This is a clone of issue OCPBUGS-11257. The following is the description of the original issue:
—
This is a clone of issue OCPBUGS-9964. The following is the description of the original issue:
—
Description of problem:
egressip cannot be assigned on hypershift hosted cluster node
Version-Release number of selected component (if applicable):
4.13.0-0.nightly-2023-03-09-162945
How reproducible:
100%
Steps to Reproduce:
1. Setup hypershift env
2. Label egress ip nodes on the hosted cluster
% oc get node
NAME                                         STATUS   ROLES    AGE     VERSION
ip-10-0-129-175.us-east-2.compute.internal   Ready    worker   3h20m   v1.26.2+bc894ae
ip-10-0-129-244.us-east-2.compute.internal   Ready    worker   3h20m   v1.26.2+bc894ae
ip-10-0-141-41.us-east-2.compute.internal    Ready    worker   3h20m   v1.26.2+bc894ae
ip-10-0-142-54.us-east-2.compute.internal    Ready    worker   3h20m   v1.26.2+bc894ae
% oc label node/ip-10-0-129-175.us-east-2.compute.internal k8s.ovn.org/egress-assignable=""
node/ip-10-0-129-175.us-east-2.compute.internal labeled
% oc label node/ip-10-0-129-244.us-east-2.compute.internal k8s.ovn.org/egress-assignable=""
node/ip-10-0-129-244.us-east-2.compute.internal labeled
% oc label node/ip-10-0-141-41.us-east-2.compute.internal k8s.ovn.org/egress-assignable=""
node/ip-10-0-141-41.us-east-2.compute.internal labeled
% oc label node/ip-10-0-142-54.us-east-2.compute.internal k8s.ovn.org/egress-assignable=""
node/ip-10-0-142-54.us-east-2.compute.internal labeled
3. Create egressip
% cat egressip.yaml
apiVersion: k8s.ovn.org/v1
kind: EgressIP
metadata:
  name: egressip-1
spec:
  egressIPs: [ "10.0.129.180" ]
  namespaceSelector:
    matchLabels:
      env: ovn-tests
% oc apply -f egressip.yaml
egressip.k8s.ovn.org/egressip-1 created
4. Check egressip assignment
Actual results:
egressip cannot be assigned to a node
% oc get egressip
NAME         EGRESSIPS      ASSIGNED NODE   ASSIGNED EGRESSIPS
egressip-1   10.0.129.180
Expected results:
egressip can be assigned to one of the hosted cluster node
Additional info:
Description of problem:
One multus case always fails in QE e2e testing. Using the same net-attach-def and pod configuration files, testing passed in 4.11 but failed in 4.12 and 4.13.
Version-Release number of selected component (if applicable):
4.12 and 4.13
How reproducible:
All the times
Steps to Reproduce:
[weliang@weliang networking]$ oc create -f https://raw.githubusercontent.com/weliang1/verification-tests/master/testdata/networking/multus-cni/NetworkAttachmentDefinitions/runtimeconfig-def-ipandmac.yaml networkattachmentdefinition.k8s.cni.cncf.io/runtimeconfig-def created [weliang@weliang networking]$ oc get net-attach-def -o yaml apiVersion: v1 items: - apiVersion: k8s.cni.cncf.io/v1 kind: NetworkAttachmentDefinition metadata: creationTimestamp: "2023-01-03T16:33:03Z" generation: 1 name: runtimeconfig-def namespace: test resourceVersion: "64139" uid: bb26c08f-adbf-477e-97ab-2aa7461e50c4 spec: config: '{ "cniVersion": "0.3.1", "name": "runtimeconfig-def", "plugins": [{ "type": "macvlan", "capabilities": { "ips": true }, "mode": "bridge", "ipam": { "type": "static" } }, { "type": "tuning", "capabilities": { "mac": true } }] }' kind: List metadata: resourceVersion: "" [weliang@weliang networking]$ oc create -f https://raw.githubusercontent.com/weliang1/verification-tests/master/testdata/networking/multus-cni/Pods/runtimeconfig-pod-ipandmac.yaml pod/runtimeconfig-pod created [weliang@weliang networking]$ oc get pod NAME READY STATUS RESTARTS AGE runtimeconfig-pod 0/1 ContainerCreating 0 6s [weliang@weliang networking]$ oc describe pod runtimeconfig-pod Name: runtimeconfig-pod Namespace: test Priority: 0 Node: weliang-01031-bvxtz-worker-a-qlwz7.c.openshift-qe.internal/10.0.128.4 Start Time: Tue, 03 Jan 2023 11:33:45 -0500 Labels: <none> Annotations: k8s.v1.cni.cncf.io/networks: [ { "name": "runtimeconfig-def", "ips": [ "192.168.22.2/24" ], "mac": "CA:FE:C0:FF:EE:00" } ] openshift.io/scc: anyuid Status: Pending IP: IPs: <none> Containers: runtimeconfig-pod: Container ID: Image: quay.io/openshifttest/hello-sdn@sha256:c89445416459e7adea9a5a416b3365ed3d74f2491beb904d61dc8d1eb89a72a4 Image ID: Port: <none> Host Port: <none> State: Waiting Reason: ContainerCreating Ready: False Restart Count: 0 Environment: <none> Mounts: /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-k5zqd (ro) Conditions: Type Status Initialized True Ready False ContainersReady False PodScheduled True Volumes: kube-api-access-k5zqd: Type: Projected (a volume that contains injected data from multiple sources) TokenExpirationSeconds: 3607 ConfigMapName: kube-root-ca.crt ConfigMapOptional: <nil> DownwardAPI: true ConfigMapName: openshift-service-ca.crt ConfigMapOptional: <nil> QoS Class: BestEffort Node-Selectors: <none> Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s node.kubernetes.io/unreachable:NoExecute op=Exists for 300s Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal Scheduled 26s default-scheduler Successfully assigned test/runtimeconfig-pod to weliang-01031-bvxtz-worker-a-qlwz7.c.openshift-qe.internal Normal AddedInterface 24s multus Add eth0 [10.128.2.115/23] from openshift-sdn Warning FailedCreatePodSandBox 23s kubelet Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_runtimeconfig-pod_test_7d5f3e7a-846d-4cfb-ac78-fd08b27102ae_0(cff792dbd07e8936d04aad31964bd7b626c19a90eb9d92a67736323a1a2303c4): error adding pod test_runtimeconfig-pod to CNI network "multus-cni-network": plugin type="multus" name="multus-cni-network" failed (add): [test/runtimeconfig-pod/7d5f3e7a-846d-4cfb-ac78-fd08b27102ae:runtimeconfig-def]: error adding container to network "runtimeconfig-def": Interface name contains an invalid character / Normal AddedInterface 7s multus Add eth0 [10.128.2.116/23] from openshift-sdn Warning 
FailedCreatePodSandBox 7s kubelet Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_runtimeconfig-pod_test_7d5f3e7a-846d-4cfb-ac78-fd08b27102ae_0(d2456338fa65847d5dc744dea64972912c10b2a32d3450910b0b81cdc9159ca4): error adding pod test_runtimeconfig-pod to CNI network "multus-cni-network": plugin type="multus" name="multus-cni-network" failed (add): [test/runtimeconfig-pod/7d5f3e7a-846d-4cfb-ac78-fd08b27102ae:runtimeconfig-def]: error adding container to network "runtimeconfig-def": Interface name contains an invalid character / [weliang@weliang networking]$
Actual results:
Pod is not running
Expected results:
Pod should be in running state
Additional info:
Description of problem:
A normal user cannot open the debug container from the pods (CrashLoopBackOff) they created, and gets the error message: pods "<pod name>" is forbidden: cannot set blockOwnerDeletion if an ownerReference refers to a resource you can't set finalizers on: , <nil>
Version-Release number of selected component (if applicable):
4.12.0-0.nightly-2022-09-20-040107, 4.11.z, 4.10.z
How reproducible:
Always
Steps to Reproduce:
1. Login to OCP as a normal user, e.g. flexy-htpasswd-provider
2. Create a project, go to Developer perspective -> +Add page
3. Click "Import from Git", and provide the below data to get a pod with CrashLoopBackOff state
   Git Repo URL: https://github.com/sclorg/nodejs-ex.git
   Name: nodejs-ex-git
   Run command: star a wktw
4. Navigate to the /k8s/ns/<project name>/pods page, find the pod with CrashLoopBackOff status, and go to its details page -> Logs Tab
5. Click the link of "Debug container"
6. Check if the Debug container can be opened
Actual results:
6. Error message would be shown on page, user cannot open debug container via UI pods "nodejs-ex-git-6dd986d8bd-9h2wj-debug-tkqk2" is forbidden: cannot set blockOwnerDeletion if an ownerReference refers to a resource you can't set finalizers on: , <nil>
Expected results:
6. Normal user could use debug container without any error message
Additional info:
The debug container can be created successfully for the normal user via the command line: $ oc debug <crashloopbackoff pod name> -n <project name>
This is a clone of issue OCPBUGS-1428. The following is the description of the original issue:
—
Description of problem:
When using an OperatorGroup attached to a service account, AND if there is a secret present in the namespace, the operator installation fails with the message: the service account does not have any API secret sa=testx-ns/testx-sa
This issue seems similar to https://bugzilla.redhat.com/show_bug.cgi?id=2094303 - which was resolved in 4.11.0 - however, the new element now is that the presence of a secret in the namespace causes the issue. The name of the secret seems significant, suggesting something somewhere depends on the order in which secrets are listed. For example, if the secret in the namespace is called "asecret", the problem does not occur; if it is called "zsecret", the problem always occurs.
"zsecret" is not a "kubernetes.io/service-account-token". The issue I have raised here relates to Opaque secrets; zsecret is an Opaque secret. The issue may apply to other types of secrets, but specifically my issue is that when there is an Opaque secret present in the namespace, the operator install fails as described. I ought to be allowed to have an Opaque secret present in the namespace where I am installing the operator.
Version-Release number of selected component (if applicable):
4.11.0 & 4.11.1
How reproducible:
100% reproducible
Steps to Reproduce:
1. Create namespace: oc new-project testx-ns
2. oc apply -f api-secret-issue.yaml
Actual results:
Expected results:
Additional info:
API YAML:
cat api-secret-issue.yaml
apiVersion: v1
kind: Secret
metadata:
  name: zsecret
  namespace: testx-ns
  annotations:
    kubernetes.io/service-account.name: testx-sa
type: Opaque
stringData:
  mykey: mypass
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: testx-sa
  namespace: testx-ns
---
kind: OperatorGroup
apiVersion: operators.coreos.com/v1
metadata:
  name: testx-og
  namespace: testx-ns
spec:
  serviceAccountName: "testx-sa"
  targetNamespaces:
  - testx-ns
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: testx-role
  namespace: testx-ns
rules:
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: testx-rolebinding
  namespace: testx-ns
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: testx-role
subjects:
---
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: etcd-operator
  namespace: testx-ns
spec:
  channel: singlenamespace-alpha
  installPlanApproval: Automatic
  name: etcd
  source: community-operators
  sourceNamespace: openshift-marketplace
Description of problem:
When a user tries to run `oc debug`, they end up getting errors about pod security labels:
Ensure the target namespace has the appropriate security level set or consider creating a dedicated privileged namespace using: "oc create ns <namespace> -o yaml | oc label -f - security.openshift.io/scc.podSecurityLabelSync=false pod-security.kubernetes.io/enforce=privileged pod-security.kubernetes.io/audit=privileged pod-security.kubernetes.io/warn=privileged". Original error: pods "ip-10-0-129-209ec2internal-debug" is forbidden: violates PodSecurity "restricted:latest": host namespaces (hostNetwork=true, hostPID=true, hostIPC=true), privileged (container "container-00" must not set securityContext.privileged=true), allowPrivilegeEscalation != false (container "container-00" must set securityContext.allowPrivilegeEscalation=false), unrestricted capabilities (container "container-00" must set securityContext.capabilities.drop=["ALL"]), restricted volume types (volume "host" uses restricted volume type "hostPath"), runAsNonRoot != true (pod or container "container-00" must set securityContext.runAsNonRoot=true), runAsUser=0 (container "container-00" must not set runAsUser=0), seccompProfile (pod or container "container-00" must set securityContext.seccompProfile.type to "RuntimeDefault" or "Localhost") command failed, 3 retries left
This happens since https://docs.openshift.com/container-platform/4.11/authentication/understanding-and-managing-pod-security-admission.html
Fixing it requires the user running something like
oc create ns fips-check -o yaml | \
oc label -f - \
security.openshift.io/scc.podSecurityLabelSync=false \
pod-security.kubernetes.io/enforce=privileged \
pod-security.kubernetes.io/audit=privileged \
pod-security.kubernetes.io/warn=privileged
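After labelling, the namespace can be checked to confirm the pod-security labels were applied (a quick verification step, not from the original report; "fips-check" is just the example namespace name used above):
$ oc get ns fips-check --show-labels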
Version-Release number of selected component (if applicable):
4.12
How reproducible:
Always
Steps to Reproduce:
1. Try to run `oc debug node/....` in a new namespace
Actual results:
Error message
Expected results:
oc debug works without the user having to perform additional steps. If namespace is omitted, perhaps oc debug could create a temporary one with the correct pod security labels?
Additional info:
This is a clone of issue OCPBUGS-3164. The following is the description of the original issue:
—
During the first bootstrap boot we need crio and kubelet on the disk, so we start the release-image-pivot systemd task. However, it is not blocking bootkube, so these two run in parallel.
release-image-pivot restarts the node to apply the new OS image, which may leave bootkube in an inconsistent state. This task should run before bootkube.
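One way to express the required ordering is a systemd drop-in on the bootstrap node that makes bootkube wait for the pivot. This is only a sketch, assuming the unit names release-image-pivot.service and bootkube.service; it is not the actual fix:
$ mkdir -p /etc/systemd/system/bootkube.service.d
$ cat > /etc/systemd/system/bootkube.service.d/10-after-pivot.conf <<'EOF'
[Unit]
# Do not start bootkube until the OS pivot (and its reboot) has completed.
After=release-image-pivot.service
Wants=release-image-pivot.service
EOF
$ systemctl daemon-reload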
Searching recent 4.12 CI, there are a number of failures in the clusteroperator/machine-config should not change condition/Available test case:
$ w3m -dump -cols 200 'https://search.ci.openshift.org/?search=clusteroperator%2Fmachine-config+should+not+change+condition%2FAvailable&maxAge=48h&type=junit' | grep '4[.]12.*failures match' | sort
periodic-ci-openshift-release-master-ci-4.12-e2e-aws-ovn-upgrade (all) - 129 runs, 53% failed, 6% of failures match = 3% impact
periodic-ci-openshift-release-master-ci-4.12-e2e-aws-sdn-techpreview-serial (all) - 6 runs, 50% failed, 67% of failures match = 33% impact
periodic-ci-openshift-release-master-ci-4.12-e2e-azure-ovn-upgrade (all) - 60 runs, 50% failed, 3% of failures match = 2% impact
periodic-ci-openshift-release-master-ci-4.12-upgrade-from-stable-4.11-e2e-aws-ovn-upgrade (all) - 129 runs, 56% failed, 8% of failures match = 5% impact
periodic-ci-openshift-release-master-ci-4.12-upgrade-from-stable-4.11-e2e-azure-sdn-upgrade (all) - 129 runs, 69% failed, 12% of failures match = 9% impact
periodic-ci-openshift-release-master-ci-4.12-upgrade-from-stable-4.11-e2e-gcp-ovn-rt-upgrade (all) - 8 runs, 38% failed, 67% of failures match = 25% impact
periodic-ci-openshift-release-master-ci-4.12-upgrade-from-stable-4.11-e2e-gcp-ovn-upgrade (all) - 60 runs, 57% failed, 6% of failures match = 3% impact
periodic-ci-openshift-release-master-ci-4.12-upgrade-from-stable-4.11-e2e-gcp-sdn-upgrade (all) - 12 runs, 42% failed, 20% of failures match = 8% impact
periodic-ci-openshift-release-master-nightly-4.12-e2e-aws-sdn-upgrade (all) - 60 runs, 40% failed, 4% of failures match = 2% impact
periodic-ci-openshift-release-master-nightly-4.12-e2e-metal-ipi-sdn-serial-virtualmedia (all) - 6 runs, 100% failed, 17% of failures match = 17% impact
periodic-ci-openshift-release-master-nightly-4.12-e2e-metal-ipi-sdn-upgrade (all) - 6 runs, 67% failed, 25% of failures match = 17% impact
periodic-ci-openshift-release-master-nightly-4.12-e2e-metal-ipi-serial-ovn-dualstack (all) - 6 runs, 67% failed, 25% of failures match = 17% impact
periodic-ci-openshift-release-master-nightly-4.12-e2e-vsphere-ovn-techpreview-serial (all) - 9 runs, 56% failed, 20% of failures match = 11% impact
periodic-ci-openshift-release-master-nightly-4.12-upgrade-from-stable-4.11-e2e-metal-ipi-upgrade (all) - 6 runs, 100% failed, 17% of failures match = 17% impact
periodic-ci-openshift-release-master-nightly-4.12-upgrade-from-stable-4.11-e2e-metal-ipi-upgrade-ovn-ipv6 (all) - 6 runs, 83% failed, 20% of failures match = 17% impact
periodic-ci-openshift-release-master-okd-4.12-e2e-vsphere (all) - 25 runs, 100% failed, 4% of failures match = 4% impact
release-openshift-ocp-installer-e2e-gcp-serial-4.12 (all) - 6 runs, 83% failed, 20% of failures match = 17% impact
Doesn't seem like reason is getting set?
$ curl -s 'https://search.ci.openshift.org/search?name=periodic-ci-openshift-release-master-ci-4.12-e2e-aws-ovn-upgrade&search=clusteroperator%2Fmachine-config+should+not+change+condition%2FAvailable&maxAge=48h&type=junit&context=15' | jq -r 'to_entries[].value | to_entries[].value[].context[]' | grep 'clusteroperator/machine-config condition/Available status/False reason'
Aug 31 01:13:56.724 - 698s  E clusteroperator/machine-config condition/Available status/False reason/Cluster not available for [{operator 4.12.0-0.ci-2022-08-30-194744}]
Aug 31 09:09:15.460 - 1078s E clusteroperator/machine-config condition/Available status/False reason/Cluster not available for [{operator 4.12.0-0.ci-2022-08-30-194744}]
Sep 01 03:31:24.808 - 1131s E clusteroperator/machine-config condition/Available status/False reason/Cluster not available for [{operator 4.12.0-0.ci-2022-08-31-111359}]
Sep 01 07:15:58.029 - 1085s E clusteroperator/machine-config condition/Available status/False reason/Cluster not available for [{operator 4.12.0-0.ci-2022-08-31-111359}]
Example runs in the job I've randomly selected to drill into:
$ curl -s 'https://search.ci.openshift.org/search?name=periodic-ci-openshift-release-master-ci-4.12-e2e-aws-ovn-upgrade&search=clusteroperator%2Fmachine-config+should+not+change+condition%2FAvailable&maxAge=48h&type=junit' | jq -r 'keys[]'
https://prow.ci.openshift.org/view/gs/origin-ci-test/logs/periodic-ci-openshift-release-master-ci-4.12-e2e-aws-ovn-upgrade/1564757706458271744
https://prow.ci.openshift.org/view/gs/origin-ci-test/logs/periodic-ci-openshift-release-master-ci-4.12-e2e-aws-ovn-upgrade/1564879945233076224
https://prow.ci.openshift.org/view/gs/origin-ci-test/logs/periodic-ci-openshift-release-master-ci-4.12-e2e-aws-ovn-upgrade/1565158084484009984
https://prow.ci.openshift.org/view/gs/origin-ci-test/logs/periodic-ci-openshift-release-master-ci-4.12-e2e-aws-ovn-upgrade/1565212566194491392
Drilling into that last run, the Available=False was the whole pool-update phase:
And details from the origin's monitor:
$ curl -s https://gcsweb-ci.apps.ci.l2s4.p1.openshiftapps.com/gcs/origin-ci-test/logs/periodic-ci-openshift-release-master-ci-4.12-e2e-aws-ovn-upgrade/1565212566194491392/artifacts/e2e-aws-ovn-upgrade/openshift-e2e-test/build-log.txt | grep clusteroperator/machine-config
Sep 01 07:15:57.629 E clusteroperator/machine-config condition/Degraded status/True reason/RenderConfigFailed changed: Failed to resync 4.12.0-0.ci-2022-08-31-111359 because: refusing to read osImageURL version "4.12.0-0.ci-2022-09-01-053740", operator version "4.12.0-0.ci-2022-08-31-111359"
Sep 01 07:15:57.629 - 49s E clusteroperator/machine-config condition/Degraded status/True reason/Failed to resync 4.12.0-0.ci-2022-08-31-111359 because: refusing to read osImageURL version "4.12.0-0.ci-2022-09-01-053740", operator version "4.12.0-0.ci-2022-08-31-111359"
Sep 01 07:15:58.029 E clusteroperator/machine-config condition/Available status/False changed: Cluster not available for [{operator 4.12.0-0.ci-2022-08-31-111359}]
Sep 01 07:15:58.029 - 1085s E clusteroperator/machine-config condition/Available status/False reason/Cluster not available for [{operator 4.12.0-0.ci-2022-08-31-111359}]
Sep 01 07:16:47.000 I /machine-config reason/OperatorVersionChanged clusteroperator/machine-config-operator started a version change from [{operator 4.12.0-0.ci-2022-08-31-111359}] to [{operator 4.12.0-0.ci-2022-09-01-053740}]
Sep 01 07:16:47.377 W clusteroperator/machine-config condition/Progressing status/True changed: Working towards 4.12.0-0.ci-2022-09-01-053740
Sep 01 07:16:47.377 - 1037s W clusteroperator/machine-config condition/Progressing status/True reason/Working towards 4.12.0-0.ci-2022-09-01-053740
Sep 01 07:16:47.405 W clusteroperator/machine-config condition/Degraded status/False changed:
Sep 01 07:18:02.614 W clusteroperator/machine-config condition/Upgradeable status/False reason/PoolUpdating changed: One or more machine config pools are updating, please see `oc get mcp` for further details
Sep 01 07:34:03.000 I /machine-config reason/OperatorVersionChanged clusteroperator/machine-config-operator version changed from [{operator 4.12.0-0.ci-2022-08-31-111359}] to [{operator 4.12.0-0.ci-2022-09-01-053740}]
Sep 01 07:34:03.699 W clusteroperator/machine-config condition/Available status/True changed: Cluster has deployed [{operator 4.12.0-0.ci-2022-08-31-111359}]
Sep 01 07:34:03.715 W clusteroperator/machine-config condition/Upgradeable status/True changed:
Sep 01 07:34:04.065 I clusteroperator/machine-config versions: operator 4.12.0-0.ci-2022-08-31-111359 -> 4.12.0-0.ci-2022-09-01-053740
Sep 01 07:34:04.663 W clusteroperator/machine-config condition/Progressing status/False changed: Cluster version is 4.12.0-0.ci-2022-09-01-053740
[bz-Machine Config Operator] clusteroperator/machine-config should not change condition/Available
[bz-Machine Config Operator] clusteroperator/machine-config should not change condition/Degraded
No idea if whatever was happening there is the same thing that was happening in other runs, and I haven't checked 4.11 and earlier either. The test-case is non-fatal, so it doesn't break CI, but it can cause noise like ClusterOperatorDown if it continues for 10 or more minutes, which PromeCIeus says actually fired in this run, although apparently the origin monitors didn't notice enough to complain:
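For anyone who wants to check this on a live cluster rather than in PromeCIeus, a sketch of the same query against the in-cluster monitoring stack (cluster_operator_conditions is the standard CVO metric; the route and API path are the usual thanos-querier ones, assumed here rather than taken from this report):
$ TOKEN=$(oc whoami -t)
$ HOST=$(oc get route thanos-querier -n openshift-monitoring -o jsonpath='{.spec.host}')
$ curl -sk -H "Authorization: Bearer $TOKEN" "https://$HOST/api/v1/query" \
    --data-urlencode 'query=cluster_operator_conditions{name="machine-config",condition="Available"} == 0'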
So parallel asks (and I'm happy to shard into separate bugs, if that's helpful):
This is a clone of issue OCPBUGS-5151. The following is the description of the original issue:
—
Description of problem:
The customer is not able to install a new OCP bare-metal IPI cluster. During bootstrapping, the provisioning interfaces on the master nodes do not get an IPv4 DHCP address from the bootstrap DHCP server during the OCP IPI bare-metal install. Please refer to the following bug: https://issues.redhat.com/browse/OCPBUGS-872. The problem was solved by applying rd.net.timeout.carrier=30 to the kernel parameters of compute nodes via the cluster-baremetal-operator. The fix also needs to be applied to the control plane. ref: https://github.com/openshift/cluster-baremetal-operator/pull/286/files
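For illustration only, the same kernel argument could in principle be applied to the control plane with a MachineConfig like the sketch below; this is an assumption about a possible workaround, not the actual cluster-baremetal-operator fix referenced above:
$ oc apply -f - <<'EOF'
apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfig
metadata:
  name: 99-master-carrier-timeout
  labels:
    machineconfiguration.openshift.io/role: master
spec:
  config:
    ignition:
      version: 3.2.0
  kernelArguments:
  - rd.net.timeout.carrier=30
EOF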
Version-Release number of selected component (if applicable):
How reproducible:
Perform OCP 4.10.16 IPI BareMetal install.
Steps to Reproduce:
1. 2. 3.
Actual results:
Expected results:
Customer should be able to install the cluster without any issue.
Additional info:
This is placeholder for backporting fixes related to checking the serving field to 4.12.z. This code is already in 4.14 and 4.13: https://github.com/openshift/ovn-kubernetes/commit/f6ef43368cc79b85bf0de535ff3b79a5856ab481
This is a clone of issue OCPBUGS-11004. The following is the description of the original issue:
—
This is a clone of issue OCPBUGS-8349. The following is the description of the original issue:
—
Description of problem:
On a freshly installed cluster, the control-plane-machineset-operator begins rolling a new master node, but the machine remains in a Provisioned state and never joins as a node. Its status is: Drain operation currently blocked by: [{Name:EtcdQuorumOperator Owner:clusteroperator/etcd}] The cluster is left in this state until an admin manually removes the stuck master node, at which point a new master machine is provisioned and successfully joins the cluster.
Version-Release number of selected component (if applicable):
4.12.4
How reproducible:
Observed at least 4 times over the last week, but unsure on how to reproduce.
Actual results:
A master node remains in a stuck Provisioned state and requires manual deletion to unstick the control plane machine set process.
Expected results:
No manual interaction should be necessary.
Additional info:
This is a clone of issue OCPBUGS-11371. The following is the description of the original issue:
—
Description of problem:
oc-mirror fails to complete a heads-only mirror, complaining about devworkspace-operator
Version-Release number of selected component (if applicable):
# oc-mirror version Client Version: version.Info{Major:"", Minor:"", GitVersion:"4.12.0-202302280915.p0.g3d51740.assembly.stream-3d51740", GitCommit:"3d517407dcbc46ededd7323c7e8f6d6a45efc649", GitTreeState:"clean", BuildDate:"2023-03-01T00:20:53Z", GoVersion:"go1.19.4", Compiler:"gc", Platform:"linux/amd64"}
How reproducible:
Attempt a heads-only mirroring of registry.redhat.io/redhat/redhat-operator-index:v4.10
Steps to Reproduce:
1. Imageset currently:
kind: ImageSetConfiguration
apiVersion: mirror.openshift.io/v1alpha2
storageConfig:
  registry:
    imageURL: myregistry.mydomain:5000/redhat-operators
    skipTLS: false
mirror:
  operators:
  - catalog: registry.redhat.io/redhat/redhat-operator-index:v4.10
2. $ oc mirror --config=./imageset-config.yml docker://otherregistry.mydomain:5000/redhat-operators
Checking push permissions for otherregistry.mydomain:5000
Found: oc-mirror-workspace/src/publish
Found: oc-mirror-workspace/src/v2
Found: oc-mirror-workspace/src/charts
Found: oc-mirror-workspace/src/release-signatures
WARN[0026] DEPRECATION NOTICE: Sqlite-based catalogs and their related subcommands are deprecated. Support for them will be removed in a future release. Please migrate your catalog workflows to the new file-based catalog format.
The rendered catalog is invalid. Run "oc-mirror list operators --catalog CATALOG-NAME --package PACKAGE-NAME" for more information.
error: error generating diff: channel fast: head "devworkspace-operator.v0.19.1-0.1679521112.p" not reachable from bundle "devworkspace-operator.v0.19.1"
Actual results:
error: error generating diff: channel fast: head "devworkspace-operator.v0.19.1-0.1679521112.p" not reachable from bundle "devworkspace-operator.v0.19.1"
Expected results:
For the catalog to be mirrored.
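A possible interim workaround, assuming the failure is limited to the devworkspace-operator channel head, would be to narrow the ImageSetConfiguration to only the packages that actually need to be mirrored, so the broken channel head is never evaluated. A sketch using the same registry values as above (the package name is a placeholder):
$ cat > imageset-config.yml <<'EOF'
kind: ImageSetConfiguration
apiVersion: mirror.openshift.io/v1alpha2
storageConfig:
  registry:
    imageURL: myregistry.mydomain:5000/redhat-operators
    skipTLS: false
mirror:
  operators:
  - catalog: registry.redhat.io/redhat/redhat-operator-index:v4.10
    packages:
    - name: <operator-package-to-mirror>
EOF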
Description of problem:
AWS tagging: when applying user-defined tags, you cannot add more than 10
Version-Release number of selected component (if applicable):
How reproducible:
Steps to Reproduce:
1. Configure userTags for the aws platform with more than 8 tags (see the sketch below).
2. The installer fails to add the tags, while AWS supports up to 50 tags.
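A sketch of the install-config platform section used to hit this; the field is platform.aws.userTags, and the region, tag keys and values below are placeholders:
platform:
  aws:
    region: us-east-2
    userTags:
      team: ocp-qe
      env: test
      owner: user1
      costcenter: "1234"
      project: tagging
      stage: install
      purpose: repro
      extra1: value1
      extra2: value2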
Actual results:
Installer validation fails.
Expected results:
Installer should be able to add more than 8 tags.
Additional info:
This is a clone of issue OCPBUGS-3633. The following is the description of the original issue:
—
I think something is wrong with the alerts refactor, or perhaps my sync to 4.12.
Failed: suite=[openshift-tests], [sig-instrumentation][Late] Alerts shouldn't report any unexpected alerts in firing or pending state [apigroup:config.openshift.io] [Suite:openshift/conformance/parallel] Passed 1 times, failed 0 times, skipped 0 times: we require at least 6 attempts to have a chance at success
We're not getting the passes - from https://prow.ci.openshift.org/view/gs/origin-ci-test/logs/aggregated-azure-ovn-upgrade-4.12-micro-release-openshift-release-analysis-aggregator/1592021681235300352, the successful runs don't show any record of the test at all. We need to record successes and failures for aggregation to work right.
Backport DualStack and the new reconciler to whereabouts plugin 4.12
This is a clone of issue OCPBUGS-6647. The following is the description of the original issue:
—
Description of problem:
Resource type drop-down menu item 'Last used' is in English
Version-Release number of selected component (if applicable):
4.12
How reproducible:
Steps to Reproduce:
1. Navigate to kube:admin -> User Preferences -> Applications
2. Click on the Resource type drop-down
Actual results:
Content is in English
Expected results:
Content should be in target language
Additional info:
Screenshot reference provided
This is a clone of issue OCPBUGS-2281. The following is the description of the original issue:
—
Description of problem:
E2E test cases for the knative and pipeline packages have been disabled on CI due to respective operator installation issues. The tests have to be re-enabled once a new operator version is available or the issue is resolved.
References:
https://coreos.slack.com/archives/C6A3NV5J9/p1664545970777239
Version-Release number of selected component (if applicable):
How reproducible:
Steps to Reproduce:
1. 2. 3.
Actual results:
Expected results:
Additional info:
Description of problem:
The name of the workload gets changed when the project and image stream are changed after reloading the form on the workload's Edit Deployment page
Version-Release number of selected component (if applicable):
4.9 and above
How reproducible:
Always
Steps to Reproduce:
1. Create a deployment workload
2. Select the Edit Deployment option on the workload
3. Verify that initially the name is the same as the workload name and the field is not editable.
4. Change the project to "openshift", the image stream to "golang" (or anything) and the tag to "latest"
5. Reload the form
6. Now check that the name also got changed to golang.
Actual results:
The workload name changes when the project and image stream name are changed on the Edit Deployment page.
Expected results:
The workload name should not change when the image stream name is changed on the Edit Deployment page, as the Name field is not editable.
Additional info:
While performing automation, I can see the error "the name of the object(imageStreamName) does not match the name on the URL(workloadName)", but when performing this in the UI there are no errors.
When installing OCP cluster with worker nodes VM type specified as high performance, some of the configuration settings of said VMs do not match the configuration settings a high performance VM should have.
Specific configurations that do not match are described in subtasks.
Default configuration settings of high performance VMs:
https://access.redhat.com/documentation/en-us/red_hat_virtualization/4.4/html-single/virtual_machine_management_guide/index?extIdCarryOver=true&sc_cid=701f2000001Css5AAC#Configuring_High_Performance_Virtual_Machines_Templates_and_Pools
When installing an OCP cluster with the worker node VM type specified as high performance, both manual and automatic migration are enabled on the said VMs.
However, high performance worker VMs are created with the engine's default values, so only manual migration should be enabled.
Default configuration settings of high performance VMs:
https://access.redhat.com/documentation/en-us/red_hat_virtualization/4.4/html-single/virtual_machine_management_guide/index?extIdCarryOver=true&sc_cid=701f2000001Css5AAC#Configuring_High_Performance_Virtual_Machines_Templates_and_Pools
How reproducible: 100%
How to reproduce:
1. Create install-config.yaml with a vmType field and set it to high performance, i.e.:
apiVersion: v1
baseDomain: basedomain.com
compute:
- architecture: amd64
  hyperthreading: Enabled
  name: worker
  platform:
    ovirt:
      affinityGroupsNames: []
      vmType: high_performance
  replicas: 2
...
2. Run installation
./openshift-install create cluster --dir=resources --log-level=debug
3. Check worker VM's configuration in the RHV webconsole.
Expected:
Only manual migration (under Host) should be enabled.
Actual:
Manual and automatic migration is enabled.
Description of problem:
In a completely disconnected cluster, the dev catalog takes too long to load
Version-Release number of selected component (if applicable):
How reproducible:
Always
Steps to Reproduce:
1. A complete disconnected cluster
2. On the +Add page, go to the All services page
3.
Actual results:
Taking too much time to load
Expected results:
Time taken should be reduced
Additional info:
Attached a gif for reference
This is a clone of issue OCPBUGS-855. The following is the description of the original issue:
—
Description of problem:
When setting the allowedregistries like the example below, the openshift-samples operator is degraded: oc get image.config.openshift.io/cluster -o yaml apiVersion: config.openshift.io/v1 kind: Image metadata: annotations: release.openshift.io/create-only: "true" creationTimestamp: "2020-12-16T15:48:20Z" generation: 2 name: cluster resourceVersion: "422284920" uid: d406d5a0-c452-4a84-b6b3-763abb51d7a5 spec: additionalTrustedCA: name: registry-ca allowedRegistriesForImport: - domainName: quay.io insecure: false - domainName: registry.redhat.io insecure: false - domainName: registry.access.redhat.com insecure: false - domainName: registry.redhat.io/redhat/redhat-operator-index insecure: true - domainName: registry.redhat.io/redhat/redhat-marketplace-index insecure: true - domainName: registry.redhat.io/redhat/certified-operator-index insecure: true - domainName: registry.redhat.io/redhat/community-operator-index insecure: true registrySources: allowedRegistries: - quay.io - registry.redhat.io - registry.rijksapps.nl - registry.access.redhat.com - registry.redhat.io/redhat/redhat-operator-index - registry.redhat.io/redhat/redhat-marketplace-index - registry.redhat.io/redhat/certified-operator-index - registry.redhat.io/redhat/community-operator-index oc get co NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE MESSAGE authentication 4.10.21 True False False 5d13h baremetal 4.10.21 True False False 450d cloud-controller-manager 4.10.21 True False False 94d cloud-credential 4.10.21 True False False 624d cluster-autoscaler 4.10.21 True False False 624d config-operator 4.10.21 True False False 624d console 4.10.21 True False False 42d csi-snapshot-controller 4.10.21 True False False 31d dns 4.10.21 True False False 217d etcd 4.10.21 True False False 624d image-registry 4.10.21 True False False 94d ingress 4.10.21 True False False 94d insights 4.10.21 True False False 104s kube-apiserver 4.10.21 True False False 624d kube-controller-manager 4.10.21 True False False 624d kube-scheduler 4.10.21 True False False 624d kube-storage-version-migrator 4.10.21 True False False 31d machine-api 4.10.21 True False False 624d machine-approver 4.10.21 True False False 624d machine-config 4.10.21 True False False 17d marketplace 4.10.21 True False False 258d monitoring 4.10.21 True False False 161d network 4.10.21 True False False 624d node-tuning 4.10.21 True False False 31d openshift-apiserver 4.10.21 True False False 42d openshift-controller-manager 4.10.21 True False False 22d openshift-samples 4.10.21 True True True 31d Samples installation in error at 4.10.21: &errors.errorString{s:"global openshift image configuration prevents the creation of imagestreams using the registry "} operator-lifecycle-manager 4.10.21 True False False 624d operator-lifecycle-manager-catalog 4.10.21 True False False 624d operator-lifecycle-manager-packageserver 4.10.21 True False False 31d service-ca 4.10.21 True False False 624d storage 4.10.21 True False False 113d After applying the fix as described here( https://access.redhat.com/solutions/6547281 ) it is resolved: oc patch configs.samples.operator.openshift.io cluster --type merge --patch '{"spec": {"samplesRegistry": "registry.redhat.io"}}' But according the the BZ this should be fixed in 4.10.3 https://bugzilla.redhat.com/show_bug.cgi?id=2027745 but the issue is still occur in our 4.10.21 cluster: oc get clusterversion NAME VERSION AVAILABLE PROGRESSING SINCE STATUS version 4.10.21 True False 31d Error while reconciling 4.10.21: the cluster operator openshift-samples is 
degraded
Version-Release number of selected component (if applicable):
How reproducible:
100%
Steps to Reproduce:
1. 2. 3.
Actual results:
Expected results:
Additional info:
For a disconnected installation, we should not be able to provision machines successfully with publicIP:true. This was the behavior up to 4.11 and in the 4.12 nightlies until around 17th Aug, but it has since started allowing creation of machines with publicIP:true set in the machineset.
Issue reproduced on - Cluster version - 4.12.0-0.nightly-2022-08-23-223922
It is always reproducible.
Steps :
Create a machineset using YAML with
{"spec":{"providerSpec":{"value":{"publicIP": true}}}}
The machineset is created successfully and the machine is provisioned successfully.
This seems to be a regression bug; refer to https://bugzilla.redhat.com/show_bug.cgi?id=1889620
Here is the must gather log - https://drive.google.com/file/d/1UXjiqAx7obISTxkmBsSBuo44ciz9HD1F/view?usp=sharing
Here is a test that ran successfully for 4.11, with exactly the same profile, where machine creation failed with an InvalidConfiguration error: https://mastern-jenkins-csb-openshift-qe.apps.ocp-c1.prod.psi.redhat.com/job/ocp-common/job/Runner/575822/console
We can confirm this is a disconnected cluster using the ImageContentSourcePolicy below; note the many mirrors used in it:
oc get ImageContentSourcePolicy image-policy-aosqe -o yaml
apiVersion: operator.openshift.io/v1alpha1
kind: ImageContentSourcePolicy
metadata:
  creationTimestamp: "2022-08-24T09:08:47Z"
  generation: 1
  name: image-policy-aosqe
  resourceVersion: "34648"
  uid: 20e45d6d-e081-435d-b6bb-16c4ca21c9d6
spec:
  repositoryDigestMirrors:
  - mirrors:
    - miyadav-2408a.mirror-registry.qe.azure.devcluster.openshift.com:6001/olmqe
    source: quay.io/olmqe
  - mirrors:
    - miyadav-2408a.mirror-registry.qe.azure.devcluster.openshift.com:6001/openshifttest
    source: quay.io/openshifttest
  - mirrors:
    - miyadav-2408a.mirror-registry.qe.azure.devcluster.openshift.com:6001/openshift-qe-optional-operators
    source: quay.io/openshift-qe-optional-operators
  - mirrors:
    - miyadav-2408a.mirror-registry.qe.azure.devcluster.openshift.com:6002
    source: registry.redhat.io
  - mirrors:
    - miyadav-2408a.mirror-registry.qe.azure.devcluster.openshift.com:6002
    source: registry.stage.redhat.io
  - mirrors:
    - miyadav-2408a.mirror-registry.qe.azure.devcluster.openshift.com:6002
    source: brew.registry.redhat.io
This is a clone of issue OCPBUGS-3096. The following is the description of the original issue:
—
While the installer binary is statically linked, the terraform binaries shipped with it are dynamically linked.
This can cause issues when running the installer on Linux, depending on the GLIBC version installed on the specific Linux distribution. It becomes a risk when switching the base image of the builders from ubi8 to ubi9 and trying to run the installer on cs8 or rhel8.
For example, building the installer on cs9 and trying to run it in a cs8 distribution leads to:
time="2022-10-31T14:31:47+01:00" level=debug msg="[INFO] running Terraform command: /root/test/terraform/bin/terraform version -json"
time="2022-10-31T14:31:47+01:00" level=error msg="/root/test/terraform/bin/terraform: /lib64/libc.so.6: version `GLIBC_2.32' not found (required by /root/test/terraform/bin/terraform)"
time="2022-10-31T14:31:47+01:00" level=error msg="/root/test/terraform/bin/terraform: /lib64/libc.so.6: version `GLIBC_2.34' not found (required by /root/test/terraform/bin/terraform)"
time="2022-10-31T14:31:47+01:00" level=debug msg="[INFO] running Terraform command: /root/test/terraform/bin/terraform version -json"
time="2022-10-31T14:31:47+01:00" level=error msg="/root/test/terraform/bin/terraform: /lib64/libc.so.6: version `GLIBC_2.32' not found (required by /root/test/terraform/bin/terraform)"
time="2022-10-31T14:31:47+01:00" level=error msg="/root/test/terraform/bin/terraform: /lib64/libc.so.6: version `GLIBC_2.34' not found (required by /root/test/terraform/bin/terraform)"
time="2022-10-31T14:31:47+01:00" level=debug msg="[INFO] running Terraform command: /root/test/terraform/bin/terraform init -no-color -force-copy -input=false -backend=true -get=true -upgrade=false -plugin-dir=/root/test/terraform/plugins"
time="2022-10-31T14:31:47+01:00" level=error msg="/root/test/terraform/bin/terraform: /lib64/libc.so.6: version `GLIBC_2.32' not found (required by /root/test/terraform/bin/terraform)"
time="2022-10-31T14:31:47+01:00" level=error msg="/root/test/terraform/bin/terraform: /lib64/libc.so.6: version `GLIBC_2.34' not found (required by /root/test/terraform/bin/terraform)"
time="2022-10-31T14:31:47+01:00" level=error msg="failed to fetch Cluster: failed to generate asset \"Cluster\": failure applying terraform for \"cluster\" stage: failed to create cluster: failed doing terraform init: exit status 1\n/root/test/terraform/bin/terraform: /lib64/libc.so.6: version `GLIBC_2.32' not found (required by /root/test/terraform/bin/terraform)\n/root/test/terraform/bin/terraform: /lib64/libc.so.6: version `GLIBC_2.34' not found (required by /root/test/terraform/bin/terraform)\n"
How reproducible:
Always
Steps to Reproduce:
1. Build the installer on cs9
2. Run the installer on cs8 until the terraform binaries are started
3. Look at the terraform binary with ldd or file (see the sketch below); you can see it is not a statically linked binary, and the error above may occur depending on the glibc version you are running on
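For step 3, a sketch of the check using standard Linux tooling (the path matches the error output above):
$ file /root/test/terraform/bin/terraform
$ ldd /root/test/terraform/bin/terraform
# A statically linked binary reports "statically linked" / "not a dynamic executable";
# a dynamically linked one lists libc.so.6 and the GLIBC versions it requires.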
Actual results:
Expected results:
The terraform and providers binaries have to be statically linked as well as the installer is.
Additional info:
This comes from a build of OKD/SCOS that is happening outside of Prow on a cs9-based builder image. One can use the Dockerfile at images/installer/Dockerfile.ci and replace the builder image with one like https://github.com/okd-project/images/blob/main/okd-builder.Dockerfile
Console should be using the v1 version of the ConsolePlugin model rather than the old v1alpha1.
CONSOLE-3077 was updating this version, but did not make the cut for the 4.12 release. Based on discussion with Samuel Padgett, we should be backporting this to 4.12.
The risk should be minimal since we are only updating the model itself + validation + Readme
Failures like:
$ oc login --token=...
Logged into "https://api..." as "..." using the token provided.
Error from server (Timeout): the server was unable to return a response in the time allotted, but may still be processing the request (get projects.project.openshift.io)
break login, which tries to gather information before saving the configuration, including a giant project list.
Ideally, login would be able to save the successful login credentials even when the informative gathering had difficulties. Possibly the informative gathering could also be made conditional (--quiet or similar?) so expensive gathering could be skipped in use cases where the context is not needed.
Description of problem:
Container networking pods cannot access the host network pods on another node which caused some operators DEGRADED $ oc get co NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE MESSAGE authentication 4.12.0-0.nightly-2022-10-23-204408 False True True 63m OAuthServerRouteEndpointAccessibleControllerAvailable: Get "https://oauth-openshift.apps.jhou.arm.eng.rdu2.redhat.com/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)... baremetal 4.12.0-0.nightly-2022-10-23-204408 True False False 62m cloud-controller-manager 4.12.0-0.nightly-2022-10-23-204408 True False False 68m cloud-credential 4.12.0-0.nightly-2022-10-23-204408 True False False 78m cluster-autoscaler 4.12.0-0.nightly-2022-10-23-204408 True False False 62m config-operator 4.12.0-0.nightly-2022-10-23-204408 True False False 63m console 4.12.0-0.nightly-2022-10-23-204408 False False False 30m RouteHealthAvailable: failed to GET route (https://console-openshift-console.apps.jhou.arm.eng.rdu2.redhat.com): Get "https://console-openshift-console.apps.jhou.arm.eng.rdu2.redhat.com": context deadline exceeded (Client.Timeout exceeded while awaiting headers) control-plane-machine-set 4.12.0-0.nightly-2022-10-23-204408 True False False 62m csi-snapshot-controller 4.12.0-0.nightly-2022-10-23-204408 True False False 62m dns 4.12.0-0.nightly-2022-10-23-204408 True False False 62m etcd 4.12.0-0.nightly-2022-10-23-204408 False True True 13m EtcdMembersAvailable: 1 of 2 members are available, openshift-qe-048.arm.eng.rdu2.redhat.com is unhealthy image-registry 4.12.0-0.nightly-2022-10-23-204408 True False False 39m ingress 4.12.0-0.nightly-2022-10-23-204408 True False True 47m The "default" ingress controller reports Degraded=True: DegradedConditions: One or more other status conditions indicate a degraded state: CanaryChecksSucceeding=False (CanaryChecksRepetitiveFailures: Canary route checks for the default ingress controller are failing) insights 4.12.0-0.nightly-2022-10-23-204408 True False False 56m kube-apiserver 4.12.0-0.nightly-2022-10-23-204408 True False False 50m kube-controller-manager 4.12.0-0.nightly-2022-10-23-204408 True False True 60m GarbageCollectorDegraded: error querying alerts: client_error: client error: 403 kube-scheduler 4.12.0-0.nightly-2022-10-23-204408 True False False 54m kube-storage-version-migrator 4.12.0-0.nightly-2022-10-23-204408 True False False 63m machine-api 4.12.0-0.nightly-2022-10-23-204408 True False False 51m machine-approver 4.12.0-0.nightly-2022-10-23-204408 True False False 62m machine-config 4.12.0-0.nightly-2022-10-23-204408 True False False 29m marketplace 4.12.0-0.nightly-2022-10-23-204408 True False False 62m monitoring 4.12.0-0.nightly-2022-10-23-204408 True False False 38m network 4.12.0-0.nightly-2022-10-23-204408 True False False 62m node-tuning 4.12.0-0.nightly-2022-10-23-204408 True False False 62m openshift-apiserver 4.12.0-0.nightly-2022-10-23-204408 True False False 30m openshift-controller-manager 4.12.0-0.nightly-2022-10-23-204408 True False False 56m openshift-samples 4.12.0-0.nightly-2022-10-23-204408 True False False 43m operator-lifecycle-manager 4.12.0-0.nightly-2022-10-23-204408 True False False 62m operator-lifecycle-manager-catalog 4.12.0-0.nightly-2022-10-23-204408 True False False 62m operator-lifecycle-manager-packageserver 4.12.0-0.nightly-2022-10-23-204408 True False False 43m service-ca 4.12.0-0.nightly-2022-10-23-204408 True False False 63m storage 4.12.0-0.nightly-2022-10-23-204408 True False False 63m $ oc get pod -n openshift-ingress -o 
wide NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES router-default-58f6498646-gf6ns 1/1 Running 1 (79m ago) 93m 2620:52:0:1eb:3673:5aff:fe9e:5abc openshift-qe-049.arm.eng.rdu2.redhat.com <none> <none> router-default-58f6498646-qjtbk 1/1 Running 1 (79m ago) 93m 2620:52:0:1eb:3673:5aff:fe9e:593c openshift-qe-052.arm.eng.rdu2.redhat.com <none> <none> $ oc get pod -n openshift-network-diagnostics -o wide NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES network-check-source-5f967d78bc-cfwz4 1/1 Running 0 103m fd01:0:0:3::9 openshift-qe-052.arm.eng.rdu2.redhat.com <none> <none> network-check-target-52krv 1/1 Running 0 91m fd01:0:0:4::3 openshift-qe-049.arm.eng.rdu2.redhat.com <none> <none> network-check-target-56q9q 1/1 Running 0 91m fd01:0:0:3::5 openshift-qe-052.arm.eng.rdu2.redhat.com <none> <none> network-check-target-ggqsf 1/1 Running 0 103m fd01:0:0:2::4 openshift-qe-048.arm.eng.rdu2.redhat.com <none> <none> network-check-target-xfrq4 1/1 Running 0 103m fd01:0:0:1::3 openshift-qe-047.arm.eng.rdu2.redhat.com <none> <none> network-check-target-zrglr 1/1 Running 0 73m fd01:0:0:6::4 openshift-qe-051.arm.eng.rdu2.redhat.com <none> <none> network-check-target-zwb4t 1/1 Running 0 91m fd01:0:0:5::5 openshift-qe-053.arm.eng.rdu2.redhat.com <none> <none> ####Failed from containers pod on openshift-qe-053.arm.eng.rdu2.redhat.com to access ingress pods $ oc rsh -n openshift-network-diagnostics network-check-target-zwb4t sh-4.4$ curl https://[2620:52:0:1eb:3673:5aff:fe9e:5abc]:443 -k -I ^C sh-4.4$ curl https://[2620:52:0:1eb:3673:5aff:fe9e:593c]:443 -k -I ^C
Version-Release number of selected component (if applicable):
4.12.0-0.nightly-2022-10-23-204408
How reproducible:
always
Steps to Reproduce:
1. Deploy ipv6 disconnect single cluster 2. 3.
Actual results:
Expected results:
Additional info:
The dependency on openshift/api needs to be bumped in openshift/kubernetes in order to pull in the fix from https://issues.redhat.com/browse/OCPBUGS-3635.
This is a clone of issue OCPBUGS-10888. The following is the description of the original issue:
—
This is a clone of issue OCPBUGS-10887. The following is the description of the original issue:
—
Description of problem:
Following https://bugzilla.redhat.com/show_bug.cgi?id=2102765 respectively https://issues.redhat.com/browse/OCPBUGS-2140 problems with OpenID Group sync have been resolved. Yet the problem documented in https://bugzilla.redhat.com/show_bug.cgi?id=2102765 still does exist and we see that Groups that are being removed are still part of the chache in oauth-apiserver, causing a panic of the respective components and failures during login for potentially affected users. So in general, it looks like that oauth-apiserver cache is not properly refreshing or handling the OpenID Groups being synced. E1201 11:03:14.625799 1 runtime.go:76] Observed a panic: interface conversion: interface {} is nil, not *v1.Group goroutine 3706798 [running]: k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1.1() k8s.io/apiserver@v0.22.2/pkg/server/filters/timeout.go:103 +0xb0 panic({0x1aeab00, 0xc001400390}) runtime/panic.go:838 +0x207 k8s.io/apiserver/pkg/endpoints/filters.WithAudit.func1.1.1() k8s.io/apiserver@v0.22.2/pkg/endpoints/filters/audit.go:80 +0x2a k8s.io/apiserver/pkg/endpoints/filters.WithAudit.func1.1() k8s.io/apiserver@v0.22.2/pkg/endpoints/filters/audit.go:89 +0x250 panic({0x1aeab00, 0xc001400390}) runtime/panic.go:838 +0x207 github.com/openshift/library-go/pkg/oauth/usercache.(*GroupCache).GroupsFor(0xc00081bf18?, {0xc000c8ac03?, 0xc001400360?}) github.com/openshift/library-go@v0.0.0-20211013122800-874db8a3dac9/pkg/oauth/usercache/groups.go:47 +0xe7 github.com/openshift/oauth-server/pkg/groupmapper.(*UserGroupsMapper).processGroups(0xc0002c8880, {0xc0005d4e60, 0xd}, {0xc000c8ac03, 0x7}, 0x1?) github.com/openshift/oauth-server/pkg/groupmapper/groupmapper.go:101 +0xb5 github.com/openshift/oauth-server/pkg/groupmapper.(*UserGroupsMapper).UserFor(0xc0002c8880, {0x20f3c40, 0xc000e18bc0}) github.com/openshift/oauth-server/pkg/groupmapper/groupmapper.go:83 +0xf4 github.com/openshift/oauth-server/pkg/oauth/external.(*Handler).login(0xc00022bc20, {0x20eebb0, 0xc00041b058}, 0xc0015d8200, 0xc001438140?, {0xc0000e7ce0, 0x150}) github.com/openshift/oauth-server/pkg/oauth/external/handler.go:209 +0x74f github.com/openshift/oauth-server/pkg/oauth/external.(*Handler).ServeHTTP(0xc00022bc20, {0x20eebb0, 0xc00041b058}, 0x0?) github.com/openshift/oauth-server/pkg/oauth/external/handler.go:180 +0x74a net/http.(*ServeMux).ServeHTTP(0x1c9dda0?, {0x20eebb0, 0xc00041b058}, 0xc0015d8200) net/http/server.go:2462 +0x149 github.com/openshift/oauth-server/pkg/server/headers.WithRestoreAuthorizationHeader.func1({0x20eebb0, 0xc00041b058}, 0xc0015d8200) github.com/openshift/oauth-server/pkg/server/headers/oauthbasic.go:27 +0x10f net/http.HandlerFunc.ServeHTTP(0x0?, {0x20eebb0?, 0xc00041b058?}, 0x0?) net/http/server.go:2084 +0x2f k8s.io/apiserver/pkg/endpoints/filterlatency.trackCompleted.func1({0x20eebb0, 0xc00041b058}, 0xc0015d8200) k8s.io/apiserver@v0.22.2/pkg/endpoints/filterlatency/filterlatency.go:103 +0x1a5 net/http.HandlerFunc.ServeHTTP(0xc0005e0280?, {0x20eebb0?, 0xc00041b058?}, 0x0?) net/http/server.go:2084 +0x2f k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1({0x20eebb0, 0xc00041b058}, 0xc0015d8200) k8s.io/apiserver@v0.22.2/pkg/endpoints/filters/authorization.go:64 +0x498 net/http.HandlerFunc.ServeHTTP(0x0?, {0x20eebb0?, 0xc00041b058?}, 0x0?) 
net/http/server.go:2084 +0x2f k8s.io/apiserver/pkg/endpoints/filterlatency.trackStarted.func1({0x20eebb0, 0xc00041b058}, 0xc0015d8200) k8s.io/apiserver@v0.22.2/pkg/endpoints/filterlatency/filterlatency.go:79 +0x178 net/http.HandlerFunc.ServeHTTP(0x2f6cea0?, {0x20eebb0?, 0xc00041b058?}, 0x3?) net/http/server.go:2084 +0x2f k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1({0x20eebb0, 0xc00041b058}, 0xc0015d8200) k8s.io/apiserver@v0.22.2/pkg/server/filters/maxinflight.go:187 +0x2a4 net/http.HandlerFunc.ServeHTTP(0x0?, {0x20eebb0?, 0xc00041b058?}, 0x0?) net/http/server.go:2084 +0x2f k8s.io/apiserver/pkg/endpoints/filterlatency.trackCompleted.func1({0x20eebb0, 0xc00041b058}, 0xc0015d8200) k8s.io/apiserver@v0.22.2/pkg/endpoints/filterlatency/filterlatency.go:103 +0x1a5 net/http.HandlerFunc.ServeHTTP(0x11?, {0x20eebb0?, 0xc00041b058?}, 0x1aae340?) net/http/server.go:2084 +0x2f k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1({0x20eebb0, 0xc00041b058}, 0xc0015d8200) k8s.io/apiserver@v0.22.2/pkg/endpoints/filters/impersonation.go:50 +0x21c net/http.HandlerFunc.ServeHTTP(0xc000d52120?, {0x20eebb0?, 0xc00041b058?}, 0x0?) net/http/server.go:2084 +0x2f k8s.io/apiserver/pkg/endpoints/filterlatency.trackStarted.func1({0x20eebb0, 0xc00041b058}, 0xc0015d8200) k8s.io/apiserver@v0.22.2/pkg/endpoints/filterlatency/filterlatency.go:79 +0x178 net/http.HandlerFunc.ServeHTTP(0x0?, {0x20eebb0?, 0xc00041b058?}, 0x0?) net/http/server.go:2084 +0x2f k8s.io/apiserver/pkg/endpoints/filterlatency.trackCompleted.func1({0x20eebb0, 0xc00041b058}, 0xc0015d8200) k8s.io/apiserver@v0.22.2/pkg/endpoints/filterlatency/filterlatency.go:103 +0x1a5 net/http.HandlerFunc.ServeHTTP(0xc0015d8100?, {0x20eebb0?, 0xc00041b058?}, 0xc000531930?) net/http/server.go:2084 +0x2f k8s.io/apiserver/pkg/endpoints/filters.WithAudit.func1({0x7fae682a40d8?, 0xc00041b048}, 0x9dbbaa?) k8s.io/apiserver@v0.22.2/pkg/endpoints/filters/audit.go:111 +0x549 net/http.HandlerFunc.ServeHTTP(0xc00003def0?, {0x7fae682a40d8?, 0xc00041b048?}, 0x0?) net/http/server.go:2084 +0x2f k8s.io/apiserver/pkg/endpoints/filterlatency.trackStarted.func1({0x7fae682a40d8, 0xc00041b048}, 0xc0015d8100) k8s.io/apiserver@v0.22.2/pkg/endpoints/filterlatency/filterlatency.go:79 +0x178 net/http.HandlerFunc.ServeHTTP(0x0?, {0x7fae682a40d8?, 0xc00041b048?}, 0x0?) net/http/server.go:2084 +0x2f k8s.io/apiserver/pkg/endpoints/filterlatency.trackCompleted.func1({0x7fae682a40d8, 0xc00041b048}, 0xc0015d8100) k8s.io/apiserver@v0.22.2/pkg/endpoints/filterlatency/filterlatency.go:103 +0x1a5 net/http.HandlerFunc.ServeHTTP(0x20f0f58?, {0x7fae682a40d8?, 0xc00041b048?}, 0x20cfd00?) net/http/server.go:2084 +0x2f k8s.io/apiserver/pkg/endpoints/filters.withAuthentication.func1({0x7fae682a40d8, 0xc00041b048}, 0xc0015d8100) k8s.io/apiserver@v0.22.2/pkg/endpoints/filters/authentication.go:80 +0x8b9 net/http.HandlerFunc.ServeHTTP(0x20f0f20?, {0x7fae682a40d8?, 0xc00041b048?}, 0x20cfc08?) net/http/server.go:2084 +0x2f k8s.io/apiserver/pkg/endpoints/filterlatency.trackStarted.func1({0x7fae682a40d8, 0xc00041b048}, 0xc000e69e00) k8s.io/apiserver@v0.22.2/pkg/endpoints/filterlatency/filterlatency.go:88 +0x46b net/http.HandlerFunc.ServeHTTP(0xc0019f5890?, {0x7fae682a40d8?, 0xc00041b048?}, 0xc000848764?) net/http/server.go:2084 +0x2f k8s.io/apiserver/pkg/server/filters.WithCORS.func1({0x7fae682a40d8, 0xc00041b048}, 0xc000e69e00) k8s.io/apiserver@v0.22.2/pkg/server/filters/cors.go:75 +0x10b net/http.HandlerFunc.ServeHTTP(0xc00149a380?, {0x7fae682a40d8?, 0xc00041b048?}, 0xc0008487d0?) 
net/http/server.go:2084 +0x2f k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1() k8s.io/apiserver@v0.22.2/pkg/server/filters/timeout.go:108 +0xa2 created by k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP k8s.io/apiserver@v0.22.2/pkg/server/filters/timeout.go:94 +0x2cc goroutine 3706802 [running]: k8s.io/apimachinery/pkg/util/runtime.logPanic({0x19eb780?, 0xc001206e20}) k8s.io/apimachinery@v0.22.2/pkg/util/runtime/runtime.go:74 +0x99 k8s.io/apimachinery/pkg/util/runtime.HandleCrash({0xc0016aec60, 0x1, 0x1560f26?}) k8s.io/apimachinery@v0.22.2/pkg/util/runtime/runtime.go:48 +0x75 panic({0x19eb780, 0xc001206e20}) runtime/panic.go:838 +0x207 k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP(0xc0005047c8, {0x20eecd0?, 0xc0010fae00}, 0xdf8475800?) k8s.io/apiserver@v0.22.2/pkg/server/filters/timeout.go:114 +0x452 k8s.io/apiserver/pkg/endpoints/filters.withRequestDeadline.func1({0x20eecd0, 0xc0010fae00}, 0xc000e69d00) k8s.io/apiserver@v0.22.2/pkg/endpoints/filters/request_deadline.go:101 +0x494 net/http.HandlerFunc.ServeHTTP(0xc0016af048?, {0x20eecd0?, 0xc0010fae00?}, 0xc0000bc138?) net/http/server.go:2084 +0x2f k8s.io/apiserver/pkg/server/filters.WithWaitGroup.func1({0x20eecd0?, 0xc0010fae00}, 0xc000e69d00) k8s.io/apiserver@v0.22.2/pkg/server/filters/waitgroup.go:59 +0x177 net/http.HandlerFunc.ServeHTTP(0x20f0f58?, {0x20eecd0?, 0xc0010fae00?}, 0x7fae705daff0?) net/http/server.go:2084 +0x2f k8s.io/apiserver/pkg/endpoints/filters.WithAuditAnnotations.func1({0x20eecd0, 0xc0010fae00}, 0xc000e69c00) k8s.io/apiserver@v0.22.2/pkg/endpoints/filters/audit_annotations.go:37 +0x230 net/http.HandlerFunc.ServeHTTP(0x20f0f58?, {0x20eecd0?, 0xc0010fae00?}, 0x20cfc08?) net/http/server.go:2084 +0x2f k8s.io/apiserver/pkg/endpoints/filters.WithWarningRecorder.func1({0x20eecd0?, 0xc0010fae00}, 0xc000e69b00) k8s.io/apiserver@v0.22.2/pkg/endpoints/filters/warning.go:35 +0x2bb net/http.HandlerFunc.ServeHTTP(0x1c9dda0?, {0x20eecd0?, 0xc0010fae00?}, 0xd?) net/http/server.go:2084 +0x2f k8s.io/apiserver/pkg/endpoints/filters.WithCacheControl.func1({0x20eecd0, 0xc0010fae00}, 0x0?) k8s.io/apiserver@v0.22.2/pkg/endpoints/filters/cachecontrol.go:31 +0x126 net/http.HandlerFunc.ServeHTTP(0x20f0f58?, {0x20eecd0?, 0xc0010fae00?}, 0x20cfc08?) net/http/server.go:2084 +0x2f k8s.io/apiserver/pkg/server/httplog.WithLogging.func1({0x20ef480?, 0xc001c20620}, 0xc000e69a00) k8s.io/apiserver@v0.22.2/pkg/server/httplog/httplog.go:103 +0x518 net/http.HandlerFunc.ServeHTTP(0x20f0f58?, {0x20ef480?, 0xc001c20620?}, 0x20cfc08?) net/http/server.go:2084 +0x2f k8s.io/apiserver/pkg/endpoints/filters.WithRequestInfo.func1({0x20ef480, 0xc001c20620}, 0xc000e69900) k8s.io/apiserver@v0.22.2/pkg/endpoints/filters/requestinfo.go:39 +0x316 net/http.HandlerFunc.ServeHTTP(0x20f0f58?, {0x20ef480?, 0xc001c20620?}, 0xc0007c3f70?) net/http/server.go:2084 +0x2f k8s.io/apiserver/pkg/endpoints/filters.withRequestReceivedTimestampWithClock.func1({0x20ef480, 0xc001c20620}, 0xc000e69800) k8s.io/apiserver@v0.22.2/pkg/endpoints/filters/request_received_time.go:38 +0x27e net/http.HandlerFunc.ServeHTTP(0x419e2c?, {0x20ef480?, 0xc001c20620?}, 0xc0007c3e40?) net/http/server.go:2084 +0x2f k8s.io/apiserver/pkg/server/filters.withPanicRecovery.func1({0x20ef480?, 0xc001c20620?}, 0xc0004ff600?) k8s.io/apiserver@v0.22.2/pkg/server/filters/wrap.go:74 +0xb1 net/http.HandlerFunc.ServeHTTP(0x1c05260?, {0x20ef480?, 0xc001c20620?}, 0x8?) 
net/http/server.go:2084 +0x2f k8s.io/apiserver/pkg/endpoints/filters.withAuditID.func1({0x20ef480, 0xc001c20620}, 0xc000e69600) k8s.io/apiserver@v0.22.2/pkg/endpoints/filters/with_auditid.go:66 +0x40d net/http.HandlerFunc.ServeHTTP(0x1c9dda0?, {0x20ef480?, 0xc001c20620?}, 0xd?) net/http/server.go:2084 +0x2f github.com/openshift/oauth-server/pkg/server/headers.WithPreserveAuthorizationHeader.func1({0x20ef480, 0xc001c20620}, 0xc000e69600) github.com/openshift/oauth-server/pkg/server/headers/oauthbasic.go:16 +0xe8 net/http.HandlerFunc.ServeHTTP(0xc0016af9d0?, {0x20ef480?, 0xc001c20620?}, 0x16?) net/http/server.go:2084 +0x2f github.com/openshift/oauth-server/pkg/server/headers.WithStandardHeaders.func1({0x20ef480, 0xc001c20620}, 0x4d55c0?) github.com/openshift/oauth-server/pkg/server/headers/headers.go:30 +0x18f net/http.HandlerFunc.ServeHTTP(0x0?, {0x20ef480?, 0xc001c20620?}, 0xc0016afac8?) net/http/server.go:2084 +0x2f k8s.io/apiserver/pkg/server.(*APIServerHandler).ServeHTTP(0xc00098d622?, {0x20ef480?, 0xc001c20620?}, 0xc000401000?) k8s.io/apiserver@v0.22.2/pkg/server/handler.go:189 +0x2b net/http.serverHandler.ServeHTTP({0xc0019f5170?}, {0x20ef480, 0xc001c20620}, 0xc000e69600) net/http/server.go:2916 +0x43b net/http.(*conn).serve(0xc0002b1720, {0x20f0f58, 0xc0001e8120}) net/http/server.go:1966 +0x5d7 created by net/http.(*Server).Serve net/http/server.go:3071 +0x4db
Version-Release number of selected component (if applicable):
OpenShift Container Platform 4.11.13
How reproducible:
- Always
Steps to Reproduce:
1. Install OpenShift Container Platform 4.11
2. Configure OpenID Group Sync (as per https://docs.openshift.com/container-platform/4.11/authentication/identity_providers/configuring-oidc-identity-provider.html#identity-provider-oidc-CR_configuring-oidc-identity-provider; see the sketch below)
3. Have users with hundreds of groups
4. Log in and, after a while, remove some Groups from the user in the IDP and from OpenShift Container Platform
5. Try to log in again and see the panic in oauth-apiserver
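For step 2, the relevant part of the configuration is the groups claim mapping on the OpenID identity provider. A minimal sketch of the OAuth resource (the IDP name, issuer URL, client ID and secret name are placeholders, not taken from this report):
$ oc apply -f - <<'EOF'
apiVersion: config.openshift.io/v1
kind: OAuth
metadata:
  name: cluster
spec:
  identityProviders:
  - name: oidcidp
    mappingMethod: claim
    type: OpenID
    openID:
      clientID: openshift-client
      clientSecret:
        name: oidc-client-secret
      issuer: https://idp.example.com
      claims:
        preferredUsername:
        - preferred_username
        groups:
        - groups
EOF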
Actual results:
User is unable to login and oauth pods are reporting a panic as shown above
Expected results:
oauth-apiserver should invalidate the cache quickly to remove potential invalid references to non exsting groups
Additional info:
Description of problem:
When the alert for the vSphere privilege check reported by vsphere-problem-detector is raised, we only get the very brief info below:
=======================================
Description
The vsphere-problem-detector monitors the health and configuration of OpenShift on VSphere. If problems are found which may prevent machine scaling, storage provisioning, and safe upgrades, the vsphere-problem-detector will raise alerts.
Summary
VSphere cluster health checks are failing
Message
VSphere cluster health checks are failing with CheckAccountPermissions
=======================================
(We can get the namespace/pod info from the metric, but I think adding it to the alert Description or Message would be clearer.)
Version-Release number of selected component (if applicable):
4.12.0-0.nightly-2022-09-12-152748
How reproducible:
Always
Steps to Reproduce:
See description
Actual results:
Alert info is not so clear
Expected results:
Add more Alert info
Description of problem:
When trying to add a Cisco UCS Rackmount server as a `baremetalhost` CR the following error comes up in the metal3 container log in the openshift-machine-api namespace.
'TransferProtocolType' property which is mandatory to complete the action is missing in the request body
Full log entry:
{"level":"info","ts":1677155695.061805,"logger":"provisioner.ironic","msg":"current provision state","host":"ucs-rackmounts~ocp-test-1","lastError":"Deploy step deploy.deploy failed with BadRequestError: HTTP POST https://10.5.4.78/redfish/v1/Managers/CIMC/VirtualMedia/0/Actions/VirtualMedia.InsertMedia returned code 400. Base.1.4.0.GeneralError: 'TransferProtocolType' property which is mandatory to complete the action is missing in the request body. Extended information: [{'@odata.type': 'Message.v1_0_6.Message', 'MessageId': 'Base.1.4.0.GeneralError', 'Message': "'TransferProtocolType' property which is mandatory to complete the action is missing in the request body.", 'MessageArgs': [], 'Severity': 'Critical'}].","current":"deploy failed","target":"active"}
Version-Release number of selected component (if applicable):
image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:30328143480d6598d0b52d41a6b755bb0f4dfe04c4b7aa7aefd02ea793a2c52b
imagePullPolicy: IfNotPresent
name: metal3-ironic
How reproducible:
Adding a Cisco UCS Rackmount with Redfish enabled as a baremetalhost to metal3
Steps to Reproduce:
1. The address to use: redfish-virtualmedia://10.5.4.78/redfish/v1/Systems/WZP22100SBV
Actual results:
[baelen@baelen-jumphost mce]$ oc get baremetalhosts.metal3.io -n ucs-rackmounts ocp-test-1
NAME         STATE          CONSUMER   ONLINE   ERROR                AGE
ocp-test-1   provisioning              true     provisioning error   23h
Expected results:
For the provisioning to be successful.
Additional info:
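For context, a sketch of the Redfish InsertMedia call with the property the Cisco BMC insists on. The endpoint comes from the log above; the credentials, image URL and exact body Ironic sends are not in this report, so treat the payload as an illustration only:
$ curl -k -u admin:password -H "Content-Type: application/json" \
    -X POST https://10.5.4.78/redfish/v1/Managers/CIMC/VirtualMedia/0/Actions/VirtualMedia.InsertMedia \
    -d '{"Image": "http://<http-server>/boot.iso", "Inserted": true, "WriteProtected": true, "TransferProtocolType": "HTTP"}'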
OVS 2.17+ introduced an optimization of "weak references" to substantially speed up database snapshots. In some cases weak references may leak memory; the aforementioned commit fixes that and has been pulled into ovs2.17-62 and later.
Description of problem:
Disconnected IPI OCP 4.11.5 cluster install on baremetal fails when hostname of master nodes does not include "master"
Version-Release number of selected component (if applicable): 4.11.5
How reproducible: Perform disconnected IPI install of OCP 4.11.5 on bare metal with master nodes that do not contain the text "master"
Steps to Reproduce:
Perform disconnected IPI install of OCP 4.11.5 on bare metal with master nodes that do not contain the text "master"
Actual results: master nodes do not come up.
Expected results: master nodes should come up even though the text "master" is not in their hostname.
Additional info:
Disconnected IPI OCP 4.11.5 cluster install on baremetal fails when hostname of master nodes does not include "master"
My customer reinstalled a new cluster using the fix here, but they have the exact same issue. The metal3 pod has an empty PROVISIONING_MACS value. Can we work together with them to understand why the new code fix https://github.com/openshift/cluster-baremetal-operator/commit/76bd6bc461b30a6a450f85a42e492a0933178aee is not working?
cat metal3-static-ip-set/metal3-static-ip-set/logs/current.log
2022-09-27T14:19:38.140662564Z + '[' -z 10.17.199.3/27 ']'
2022-09-27T14:19:38.140662564Z + '[' -z '' ']'
2022-09-27T14:19:38.140662564Z + '[' -n '' ']'
2022-09-27T14:19:38.140722345Z ERROR: Could not find suitable interface for "10.17.199.3/27"
2022-09-27T14:19:38.140726312Z + '[' -n '' ']'
2022-09-27T14:19:38.140726312Z + echo 'ERROR: Could not find suitable interface for "10.17.199.3/27"'
2022-09-27T14:19:38.140726312Z + exit 1
cat metal3-b9bf8d595-gv94k.yaml
...
initContainers:
  command: /set-static-ip
  env:
    name: PROVISIONING_IP
    value: 10.17.199.3/27
    name: PROVISIONING_INTERFACE
    name: PROVISIONING_MACS    <------------------------- missing MACS
  image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4f04793bd109ecba2dfe43be93dc990ac5299272482c150bd5f2eee0f80c983b
  imagePullPolicy: IfNotPresent
  name: metal3-static-ip-set
....
omc logs machine-api-controllers-6b9ffd96cd-grh6l -c nodelink-controller -n openshift-machine-api
2022-09-21T16:13:43.600517485Z I0921 16:13:43.600513 1 nodelink_controller.go:408] Finding machine from node "blocp-1-106-m-0.c106-1.sc.evolhse.hydro.qc.ca"
2022-09-21T16:13:43.600521381Z I0921 16:13:43.600517 1 nodelink_controller.go:425] Finding machine from node "blocp-1-106-m-0.c106-1.sc.evolhse.hydro.qc.ca" by ProviderID
2022-09-21T16:13:43.600525225Z W0921 16:13:43.600521 1 nodelink_controller.go:427] Node "blocp-1-106-m-0.c106-1.sc.evolhse.hydro.qc.ca" has no providerID
2022-09-21T16:13:43.600528917Z I0921 16:13:43.600524 1 nodelink_controller.go:448] Finding machine from node "blocp-1-106-m-0.c106-1.sc.evolhse.hydro.qc.ca" by IP
2022-09-21T16:13:43.600532711Z I0921 16:13:43.600529 1 nodelink_controller.go:453] Found internal IP for node "blocp-1-106-m-0.c106-1.sc.evolhse.hydro.qc.ca": "10.17.192.33"
2022-09-21T16:13:43.600551289Z I0921 16:13:43.600544 1 nodelink_controller.go:477] Matching machine not found for node "blocp-1-106-m-0.c106-1.sc.evolhse.hydro.qc.ca" with internal IP "10.17.192.33"
From @dtantsur WIP PR: https://github.com/openshift/cluster-baremetal-operator/pull/299
The customer is waiting for this fix. The previous code change doesn't fix the customer's situation.
Please refer to this Slack thread: https://coreos.slack.com/archives/CFP6ST0A3/p1664215102459219
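For reference, a sketch of what the metal3-static-ip-set init container environment is expected to look like once PROVISIONING_MACS is populated. The MAC addresses below are hypothetical placeholders; the general expectation (per the fix above) is a comma-separated list of the control-plane provisioning NIC MACs gathered by cluster-baremetal-operator:

initContainers:
- name: metal3-static-ip-set
  command:
  - /set-static-ip
  env:
  - name: PROVISIONING_IP
    value: 10.17.199.3/27
  - name: PROVISIONING_INTERFACE
    value: ""
  - name: PROVISIONING_MACS
    # Hypothetical values for illustration; set-static-ip uses these to find
    # the suitable interface for the provisioning IP.
    value: 52:54:00:aa:bb:01,52:54:00:aa:bb:02,52:54:00:aa:bb:03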
This is a clone of issue OCPBUGS-6011. The following is the description of the original issue:
—
Description of problem:
The 4.12.0 openshift-client package has kubectl 1.24.1 bundled in it when it should have 1.25.x
Version-Release number of selected component (if applicable):
4.12.0
How reproducible:
Very
Steps to Reproduce:
1. Download and unpack https://mirror.openshift.com/pub/openshift-v4/x86_64/clients/ocp/stable/openshift-client-linux-4.12.0.tar.gz
2. ./kubectl version
Actual results:
# ./kubectl version
Client Version: version.Info{Major:"1", Minor:"24", GitVersion:"v1.24.1", GitCommit:"1928ac4250660378a7d8c3430478dfe77977cb2a", GitTreeState:"clean", BuildDate:"2022-12-07T05:08:22Z", GoVersion:"go1.18.7", Compiler:"gc", Platform:"linux/amd64"}
Kustomize Version: v4.5.4
Expected results:
kubectl version 1.25.x
Additional info:
This is a clone of issue OCPBUGS-5018. The following is the description of the original issue:
—
Description of problem:
When upgrading an IPI AWS cluster that included MachineSet and BYOH Windows nodes from 4.11 to 4.12, the upgrade hung while trying to upgrade the machine-api component:

$ oc get clusterversion
NAME      VERSION                              AVAILABLE   PROGRESSING   SINCE   STATUS
version   4.11.0-0.nightly-2022-12-16-190443   True        True          117m    Working towards 4.12.0-rc.5: 214 of 827 done (25% complete), waiting on machine-api

$ oc get co
NAME                                       VERSION                              AVAILABLE   PROGRESSING   DEGRADED   SINCE   MESSAGE
authentication                             4.11.0-0.nightly-2022-12-16-190443   True        False         False      4h47m
baremetal                                  4.11.0-0.nightly-2022-12-16-190443   True        False         False      4h59m
cloud-controller-manager                   4.12.0-rc.5                          True        False         False      5h3m
cloud-credential                           4.11.0-0.nightly-2022-12-16-190443   True        False         False      5h4m
cluster-autoscaler                         4.11.0-0.nightly-2022-12-16-190443   True        False         False      4h59m
config-operator                            4.12.0-rc.5                          True        False         False      5h1m
console                                    4.11.0-0.nightly-2022-12-16-190443   True        False         False      4h43m
csi-snapshot-controller                    4.11.0-0.nightly-2022-12-16-190443   True        False         False      5h
dns                                        4.11.0-0.nightly-2022-12-16-190443   True        False         False      4h59m
etcd                                       4.12.0-rc.5                          True        False         False      4h58m
image-registry                             4.11.0-0.nightly-2022-12-16-190443   True        False         False      4h54m
ingress                                    4.11.0-0.nightly-2022-12-16-190443   True        False         False      4h55m
insights                                   4.11.0-0.nightly-2022-12-16-190443   True        False         False      4h53m
kube-apiserver                             4.12.0-rc.5                          True        False         False      4h50m
kube-controller-manager                    4.12.0-rc.5                          True        False         False      4h57m
kube-scheduler                             4.12.0-rc.5                          True        False         False      4h57m
kube-storage-version-migrator              4.11.0-0.nightly-2022-12-16-190443   True        False         False      5h
machine-api                                4.11.0-0.nightly-2022-12-16-190443   True        True          False      4h56m   Progressing towards operator: 4.12.0-rc.5
machine-approver                           4.11.0-0.nightly-2022-12-16-190443   True        False         False      5h
machine-config                             4.11.0-0.nightly-2022-12-16-190443   True        False         False      4h59m
marketplace                                4.11.0-0.nightly-2022-12-16-190443   True        False         False      4h59m
monitoring                                 4.11.0-0.nightly-2022-12-16-190443   True        False         False      4h53m
network                                    4.11.0-0.nightly-2022-12-16-190443   True        False         False      5h3m
node-tuning                                4.11.0-0.nightly-2022-12-16-190443   True        False         False      4h59m
openshift-apiserver                        4.11.0-0.nightly-2022-12-16-190443   True        False         False      4h53m
openshift-controller-manager               4.11.0-0.nightly-2022-12-16-190443   True        False         False      4h56m
openshift-samples                          4.11.0-0.nightly-2022-12-16-190443   True        False         False      4h55m
operator-lifecycle-manager                 4.11.0-0.nightly-2022-12-16-190443   True        False         False      5h
operator-lifecycle-manager-catalog         4.11.0-0.nightly-2022-12-16-190443   True        False         False      5h
operator-lifecycle-manager-packageserver   4.11.0-0.nightly-2022-12-16-190443   True        False         False      4h55m
service-ca                                 4.11.0-0.nightly-2022-12-16-190443   True        False         False      5h
storage                                    4.11.0-0.nightly-2022-12-16-190443   True        False         False      5h

When digging a little deeper into the exact component hanging, we observed that it was the machine-api-termination-handler running on the Windows Machine workers that was in ImagePullBackOff state:

$ oc get pods -n openshift-machine-api
NAME                                           READY   STATUS             RESTARTS   AGE
cluster-autoscaler-operator-6ff66b6655-kpgp9   2/2     Running            0          5h5m
cluster-baremetal-operator-6dbcd6f76b-d9dwd    2/2     Running            0          5h5m
machine-api-controllers-cdb8d979b-79xlh        7/7     Running            0          94m
machine-api-operator-86bf4f6d79-g2vwm          2/2     Running            0          97m
machine-api-termination-handler-fcfq2          0/1     ImagePullBackOff   0          94m
machine-api-termination-handler-gj4pf          1/1     Running            0          4h57m
machine-api-termination-handler-krwdg          0/1     ImagePullBackOff   0          94m
machine-api-termination-handler-l95x2          1/1     Running            0          4h54m
machine-api-termination-handler-p6sw6          1/1     Running            0          4h57m

$ oc describe pods machine-api-termination-handler-fcfq2 -n openshift-machine-api
Name:                 machine-api-termination-handler-fcfq2
Namespace:            openshift-machine-api
Priority:             2000001000
Priority Class Name:  system-node-critical
.....................................................................
Events:
  Type     Reason                  Age                    From               Message
  ----     ------                  ----                   ----               -------
  Normal   Scheduled               94m                    default-scheduler  Successfully assigned openshift-machine-api/machine-api-termination-handler-fcfq2 to ip-10-0-145-114.us-east-2.compute.internal
  Warning  FailedCreatePodSandBox  94m                    kubelet            Failed to create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox "7b80f84cc547310f5370a7dde7c651ca661dd40ebd0730296329d1cbe8981b37": plugin type="win-overlay" name="OVNKubernetesHybridOverlayNetwork" failed (add): error while adding HostComputeEndpoint: failed to create the new HostComputeEndpoint: hcnCreateEndpoint failed in Win32: The object already exists. (0x1392) {"Success":false,"Error":"The object already exists. ","ErrorCode":2147947410}
  Warning  FailedCreatePodSandBox  94m                    kubelet            Failed to create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox "6b3e020a419dde8359a31b56129c65821011e232467d712f9f5081f32fe380c9": plugin type="win-overlay" name="OVNKubernetesHybridOverlayNetwork" failed (add): error while adding HostComputeEndpoint: failed to create the new HostComputeEndpoint: hcnCreateEndpoint failed in Win32: The object already exists. (0x1392) {"Success":false,"Error":"The object already exists. ","ErrorCode":2147947410}
  Normal   Pulling                 93m (x4 over 94m)      kubelet            Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9aa96cb22047b62f785b87bf81ec1762703c1489079dd33008085b5585adc258"
  Warning  Failed                  93m (x4 over 94m)      kubelet            Error: ErrImagePull
  Normal   BackOff                 4m39s (x393 over 94m)  kubelet            Back-off pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9aa96cb22047b62f785b87bf81ec1762703c1489079dd33008085b5585adc258"

$ oc get pods -n openshift-machine-api -o wide
NAME                                           READY   STATUS             RESTARTS   AGE     IP             NODE                                         NOMINATED NODE   READINESS GATES
cluster-autoscaler-operator-6ff66b6655-kpgp9   2/2     Running            0          5h8m    10.130.0.10    ip-10-0-180-35.us-east-2.compute.internal    <none>           <none>
cluster-baremetal-operator-6dbcd6f76b-d9dwd    2/2     Running            0          5h8m    10.130.0.8     ip-10-0-180-35.us-east-2.compute.internal    <none>           <none>
machine-api-controllers-cdb8d979b-79xlh        7/7     Running            0          97m     10.128.0.144   ip-10-0-138-246.us-east-2.compute.internal   <none>           <none>
machine-api-operator-86bf4f6d79-g2vwm          2/2     Running            0          100m    10.128.0.143   ip-10-0-138-246.us-east-2.compute.internal   <none>           <none>
machine-api-termination-handler-fcfq2          0/1     ImagePullBackOff   0          97m     10.129.0.7     ip-10-0-145-114.us-east-2.compute.internal   <none>           <none>
machine-api-termination-handler-gj4pf          1/1     Running            0          5h      10.0.223.37    ip-10-0-223-37.us-east-2.compute.internal    <none>           <none>
machine-api-termination-handler-krwdg          0/1     ImagePullBackOff   0          97m     10.128.0.4     ip-10-0-143-111.us-east-2.compute.internal   <none>           <none>
machine-api-termination-handler-l95x2          1/1     Running            0          4h57m   10.0.172.211   ip-10-0-172-211.us-east-2.compute.internal   <none>           <none>
machine-api-termination-handler-p6sw6          1/1     Running            0          5h      10.0.146.227   ip-10-0-146-227.us-east-2.compute.internal   <none>           <none>

[jfrancoa@localhost byoh-auto]$ oc get nodes -o wide | grep ip-10-0-143-111.us-east-2.compute.internal
ip-10-0-143-111.us-east-2.compute.internal   Ready   worker   4h24m   v1.24.0-2566+5157800f2a3bc3   10.0.143.111   <none>   Windows Server 2019 Datacenter   10.0.17763.3770   containerd://1.18

[jfrancoa@localhost byoh-auto]$ oc get nodes -o wide | grep ip-10-0-145-114.us-east-2.compute.internal
ip-10-0-145-114.us-east-2.compute.internal   Ready   worker   4h18m   v1.24.0-2566+5157800f2a3bc3   10.0.145.114   <none>   Windows Server 2019 Datacenter   10.0.17763.3770   containerd://1.18

[jfrancoa@localhost byoh-auto]$ oc get machine.machine.openshift.io -n openshift-machine-api -o wide | grep ip-10-0-145-114.us-east-2.compute.internal
jfrancoa-1912-aws-rvkrp-windows-worker-us-east-2a-v57sh   Running   m5a.large   us-east-2   us-east-2a   4h37m   ip-10-0-145-114.us-east-2.compute.internal   aws:///us-east-2a/i-0b69d52c625c46a6a   running

[jfrancoa@localhost byoh-auto]$ oc get machine.machine.openshift.io -n openshift-machine-api -o wide | grep ip-10-0-143-111.us-east-2.compute.internal
jfrancoa-1912-aws-rvkrp-windows-worker-us-east-2a-j6gkc   Running   m5a.large   us-east-2   us-east-2a   4h37m   ip-10-0-143-111.us-east-2.compute.internal   aws:///us-east-2a/i-05e422c0051707d16   running

This is blocking the whole upgrade process, as the upgrade is not able to move further from this component.
Version-Release number of selected component (if applicable):
$ oc get clusterversion
NAME      VERSION                              AVAILABLE   PROGRESSING   SINCE   STATUS
version   4.11.0-0.nightly-2022-12-16-190443   True        True          141m    Working towards 4.12.0-rc.5: 214 of 827 done (25% complete), waiting on machine-api

$ oc version
Client Version: 4.11.0-0.ci-2022-06-09-065118
Kustomize Version: v4.5.4
Server Version: 4.11.0-0.nightly-2022-12-16-190443
Kubernetes Version: v1.25.4+77bec7a
How reproducible:
Always
Steps to Reproduce:
1. Deploy a 4.11 IPI AWS cluster with Windows workers using a MachineSet
2. Perform the upgrade to 4.12
3. Wait for the upgrade to hang on the machine-api component
Actual results:
The upgrade hangs when upgrading the machine-api component.
Expected results:
The upgrade succeeds
Additional info:
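Not the actual fix, but for illustration: the usual way to keep a Linux-only workload such as the termination handler off Windows nodes is an explicit OS node selector on its DaemonSet. A minimal hedged sketch, with the spec reduced to the relevant fields and a hypothetical image reference:

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: machine-api-termination-handler
  namespace: openshift-machine-api
spec:
  selector:
    matchLabels:
      k8s-app: termination-handler
  template:
    metadata:
      labels:
        k8s-app: termination-handler
    spec:
      nodeSelector:
        # Keep the Linux-only handler image off Windows nodes.
        kubernetes.io/os: linux
      containers:
      - name: termination-handler
        # Hypothetical image reference for illustration only.
        image: quay.io/example/machine-api-termination-handler:latest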