Jump to: Complete Features | Incomplete Features | Complete Epics | Incomplete Epics | Other Complete | Other Incomplete
Note: this page shows the Feature-Based Change Log for a release
These features were completed when this image was assembled
1. Proposed title of this feature request
Add runbook_url to alerts in the OCP UI
2. What is the nature and description of the request?
If an alert includes a runbook_url label, then it should appear in the UI for the alert as a link.
3. Why does the customer need this? (List the business requirements here)
Customers can easily reach the alert runbook and address their issues.
4. List any affected packages or components.
As a user, I should be able to configure the CSI driver to have a storage topology.
In the console-operator repo we need to add the `capability.openshift.io/console` annotation to all the manifests that the operator either contains or creates on the fly.
Manifests are currently present in the /bindata and /manifest directories.
Here is an example of the insights-operator change.
Here is the overall enhancement doc.
Feature Overview
Provide CSI drivers to replace all the in-tree cloud provider drivers we currently have. These drivers will probably be released as Tech Preview versions first before being promoted to GA.
Goals
Requirements
Requirement | Notes | isMvp? |
---|---|---|
Framework for CSI driver | TBD | Yes |
Drivers should be available to install both in disconnected and connected mode | | Yes |
Drivers should upgrade from release to release without any impact | | Yes |
Drivers should be installable via CVO (when in-tree plugin exists) | | |
Out of Scope
This work will only cover the drivers themselves; it will not include:
Background and strategic fit
In a future Kubernetes release (currently 1.21), in-tree cloud provider drivers will be deprecated and replaced with CSI equivalents; we need the drivers created so that we continue to support the ecosystems in an appropriate way.
Assumptions
Customer Considerations
Customers will need to be able to use the storage they want.
Documentation Considerations
This Epic is to track the GA of this feature
As an OCP user, I want images for GCP Filestore CSI Driver and Operator, so that I can install them on my cluster and utilize GCP Filestore shares.
We need to continue to maintain specific areas within storage, this is to capture that effort and track it across releases.
Goals
Requirements
Requirement | Notes | isMvp? |
---|---|---|
Telemetry | | No |
Certification | | No |
API metrics | | No |
Out of Scope
n/a
Background and strategic fit
With the expected scale of our customer base, we want to keep the load of customer tickets / BZs low.
Assumptions
Customer Considerations
Documentation Considerations
Notes
In progress:
High prio:
Unsorted
The end of general support for vSphere 6.7 is October 15, 2022, so vSphere 6.7 will be deprecated in 4.11.
We want to encourage vSphere customers to upgrade to vSphere 7 in OCP 4.11, since VMware is ending general support for vSphere 6.7 in October 2022.
We want to set the cluster to Upgradeable=false and have a strong alert pointing to our docs / requirements.
related slack: https://coreos.slack.com/archives/CH06KMDRV/p1647541493096729
Traditionally we did these updates as bugfixes, because we did them after the feature freeze (FF). We are trying no-feature-freeze in 4.12. We will try to do as much as we can before FF, but we're quite sure something will slip past FF as usual.
Update the driver to the latest upstream release. Notify QE and docs with any new features and important bugfixes that need testing or documentation.
(Using separate cards for each driver because these updates can be more complicated)
Update the driver to the latest upstream release. Notify QE and docs with any new features and important bugfixes that need testing or documentation.
(Using separate cards for each driver because these updates can be more complicated)
Update the driver to the latest upstream release. Notify QE and docs with any new features and important bugfixes that need testing or documentation.
(Using separate cards for each driver because these updates can be more complicated)
Update the driver to the latest upstream release. Notify QE and docs with any new features and important bugfixes that need testing or documentation.
(Using separate cards for each driver because these updates can be more complicated)
Update the driver to the latest upstream release. Notify QE and docs with any new features and important bugfixes that need testing or documentation.
This includes ibm-vpc-node-label-updater!
(Using separate cards for each driver because these updates can be more complicated)
Update the driver to the latest upstream release. Notify QE and docs with any new features and important bugfixes that need testing or documentation.
(Using separate cards for each driver because these updates can be more complicated)
Update all OCP and Kubernetes libraries in storage operators to the appropriate version for this OCP release.
This includes (but is not limited to):
Operators:
Update all CSI sidecars to the latest upstream release.
This includes an update of the VolumeSnapshot CRDs in https://github.com/openshift/cluster-csi-snapshot-controller-operator/tree/master/assets
There is a new driver release 5.0.0 since the last rebase that includes snapshot support:
https://github.com/kubernetes-sigs/ibm-vpc-block-csi-driver/releases/tag/v5.0.0
Rebase the driver on v5.0.0 and update the deployments in ibm-vpc-block-csi-driver-operator.
There are no corresponding changes in ibm-vpc-node-label-updater since the last rebase.
This Epic tracks the GA of this feature
Epic Goal
On new installations, we should make the StorageClass created by the CSI operator the default one.
However, we shouldn't do that on an upgrade scenario. The main reason is that users might have set a different quota on the CSI driver Storage Class.
Exit criteria:
On new installations, we should make the StorageClass created by the CSI operator the default one.
However, we shouldn't do that on an upgrade scenario. The main reason is that users might have set a different quota on the CSI driver Storage Class.
Exit criteria:
Rebase openshift-controller-manager to k8s 1.24
4.11 MVP Requirements
Out of scope use cases (that are part of the Kubeframe/factory project):
Questions to be addressed:
As an OpenShift infrastructure owner, I want to deploy OpenShift clusters with dual-stack IPv4/IPv6
As an OpenShift infrastructure owner, I want to deploy OpenShift clusters with single-stack IPv6
IPv6 and dual-stack clusters are requested often by customers, especially Telco customers. Working with dual-stack clusters is a requirement for many, but also a transition into single-stack IPv6 clusters, which for some of our users is the final destination.
Karim's work proving how the agent-based installer can deploy IPv6: IPv6 deploy with agent-based installer
For dual-stack installations, the agent-cluster-install.yaml must have both an IPv4 and an IPv6 subnet in networking.MachineNetwork, or assisted-service will throw an error. This field is in InstallConfig but it must be added to agent-cluster-install in its Generate().
For IPv4 and IPv6 installs, setting up the MachineNetwork is not needed, but it also does not cause problems if it is set, so it should be fine to set it at all times.
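A minimal sketch of the dual-stack subnets in agent-cluster-install.yaml, assuming the field mirrors InstallConfig's networking.machineNetwork (resource name and CIDR values are illustrative):

```yaml
apiVersion: extensions.hive.openshift.io/v1beta1
kind: AgentClusterInstall
metadata:
  name: example-agent-cluster-install   # illustrative
spec:
  networking:
    machineNetwork:
      - cidr: 192.168.111.0/24          # IPv4 subnet (illustrative)
      - cidr: fd2e:6f44:5dd8::/64       # IPv6 subnet (illustrative)
```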
Set the ClusterDeployment CRD to deploy OpenShift in FIPS mode and make sure that after deployment the cluster is set in that mode
In order to install FIPS-compliant clusters, we need to make sure that installconfig + agentconfig based deployments take into account the FIPS config in installconfig.
This task is about passing the config to agentclusterinstall so it makes it into the ISO. Once there, AGENT-374 will give it to assisted-service.
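For reference, the install-config input that needs to be carried through is the standard fips flag; a minimal excerpt (cluster name illustrative):

```yaml
# install-config.yaml (excerpt)
apiVersion: v1
metadata:
  name: example-cluster   # illustrative
fips: true                # must be propagated into agent-cluster-install so it reaches the ISO
```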
As an OpenShift infrastructure owner, I want to deploy a cluster zero with RHACM or MCE and have the required components installed when the installation is completed
BILLI makes it easier to deploy a cluster zero. BILLI users know at installation time what the purpose of their cluster is when they plan the installation. Day-2 steps are currently necessary to install operators, and users, especially when automating installations, want to finish the installation flow with their required components already installed.
As a customer, I want to be able to:
so that I can achieve
Description of criteria:
We are only allowing the user to provide extra manifests to install MCE at this time. We are not adding an option to "install mce" on the command line (or UI)
This requires/does not require a design proposal.
This requires/does not require a feature gate.
As a customer, I want to be able to:
so that I can achieve
Description of criteria:
We are only allowing the user to provide extra manifests to install MCE at this time. We are not adding an option to "install mce" on the command line (or UI)
This requires/does not require a design proposal.
This requires/does not require a feature gate.
As a user I would like to see all the events that the autoscaler creates, even duplicates. Having the CAO set this flag will allow me to continue to see these events.
We have carried a patch for the autoscaler that would enable the duplication of events. This patch can now be dropped because the upstream added a flag for this behavior in https://github.com/kubernetes/autoscaler/pull/4921
Add GA support for deploying OpenShift to IBM Public Cloud
Close the existing gaps to make OpenShift on IBM Cloud VPC (Next Gen2) Generally Available.
This epic tracks the changes needed to the ingress operator to support IBM DNS Services for private clusters.
Currently in OpenShift we do not support distributing hotfix packages to cluster nodes. In time-sensitive situations, a RHEL hotfix package can be the quickest route to resolving an issue.
Before we ship OCP CoreOS layering in https://issues.redhat.com/browse/MCO-165 we need to switch the format of what is currently `machine-os-content` to be the new base image.
The overall plan is:
After https://github.com/openshift/os/pull/763 is in the release image, teach the MCO how to use it. This is basically:
As an OCP CoreOS layering developer, having telemetry data about the number of clusters using osImageURL will help us understand how broadly this feature is being used and improve it accordingly.
Acceptance Criteria:
Assumption
Doc: https://docs.google.com/document/d/1sXCaRt3PE0iFmq7ei0Yb1svqzY9bygR5IprjgioRkjc/edit
CNCC was moved to the management cluster and it should use proxy settings defined for the management cluster.
Customers do not pay Red Hat more to run HyperShift control planes and supporting infrastructure than Standalone control planes and supporting infrastructure.
Assumption
Run cluster-storage-operator (CSO) + AWS EBS CSI driver operator + AWS EBS CSI driver control-plane Pods in the management cluster, run the driver DaemonSet in the hosted cluster.
More information here: https://docs.google.com/document/d/1sXCaRt3PE0iFmq7ei0Yb1svqzY9bygR5IprjgioRkjc/edit
As HyperShift Cluster Instance Admin, I want to run cluster-storage-operator (CSO) in the management cluster, so the guest cluster runs just my applications.
Exit criteria:
As an OCP support engineer, I want the same guest-cluster storage-related objects in the output of "hypershift dump cluster --dump-guest-cluster" as in "oc adm must-gather", so I can debug storage issues easily.
must-gather collects: storageclasses, persistentvolumes, volumeattachments, csidrivers, csinodes, volumesnapshotclasses, volumesnapshotcontents
hypershift collects none of these; the relevant code is here: https://github.com/openshift/hypershift/blob/bcfade6676f3c344b48144de9e7a36f9b40d3330/cmd/cluster/core/dump.go#L276
Exit criteria:
As HyperShift Cluster Instance Admin, I want to run AWS EBS CSI driver operator + control plane of the CSI driver in the management cluster, so the guest cluster runs just my applications.
Exit criteria:
Customers do not pay Red Hat more to run HyperShift control planes and supporting infrastructure than Standalone control planes and supporting infrastructure.
Assumption
cluster-snapshot-controller-operator is running on the CP.
More information here: https://docs.google.com/document/d/1sXCaRt3PE0iFmq7ei0Yb1svqzY9bygR5IprjgioRkjc/edit
As HyperShift Cluster Instance Admin, I want to run cluster-csi-snapshot-controller-operator in the management cluster, so the guest cluster runs just my applications.
Exit criteria:
As an OpenShift developer I want cluster-csi-snapshot-controller-operator to use existing controllers in library-go, so I don’t need to maintain yet more code that does the same thing as library-go.
Note: if this refactoring introduces any new conditions, we must make sure that 4.11 snapshot controller clears them to support downgrade! This will need 4.11 BZ + z-stream update!
Similarly, if some conditions become obsolete / not managed by any controller, they must be cleared by 4.12 operator.
Exit criteria:
When this image was assembled, these features were not yet completed. Therefore, only the Jira Cards included here are part of this release
OLM would have to support a mechanism like podAffinity which allows multiple architecture values to be specified, enabling it to pin operators to worker nodes of a matching architecture.
Ref: https://github.com/openshift/enhancements/pull/1014
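As an illustration of the kind of mechanism described above (not the final OLM API), a standard Kubernetes nodeAffinity term that pins a workload to a set of architectures looks like this:

```yaml
# Sketch only: nodeAffinity allowing multiple architecture values.
affinity:
  nodeAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
      nodeSelectorTerms:
        - matchExpressions:
            - key: kubernetes.io/arch
              operator: In
              values:        # multiple architectures can be listed
                - amd64
                - arm64
```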
Cut a new release of the OLM API and update OLM API dependency version (go.mod) in OLM package; then
Bring the upstream changes from OLM-2674 to the downstream olm repo.
A/C:
- New OLM API version release
- OLM API dependency updated in OLM Project
- OLM Subscription API changes downstreamed
- OLM Controller changes downstreamed
- Changes manually tested on Cluster Bot
We have a set of images
that should become multiarch images. This should be done both in upstream and downstream.
As a reference, we have built those images internally as multiarch and made them available as
They can be consumed by the Assisted Service pod via the following env vars:
- name: AGENT_DOCKER_IMAGE
  value: registry.redhat.io/rhai-tech-preview/assisted-installer-agent-rhel8:latest
- name: CONTROLLER_IMAGE
  value: registry.redhat.io/rhai-tech-preview/assisted-installer-reporter-rhel8:latest
- name: INSTALLER_IMAGE
  value: registry.redhat.io/rhai-tech-preview/assisted-installer-rhel8:latest
We drive OpenShift cross-market customer success and new customer adoption with constant improvements and feature additions to the existing capabilities of our OpenShift Core Networking (SDN and Network Edge). This feature captures that natural progression of the product.
There are definitely grey areas, but in general:
Questions to be addressed:
Goal: Provide queryable metrics and telemetry for cluster routes and sharding in an OpenShift cluster.
Problem: Today we test OpenShift performance and scale with best-guess or anecdotal evidence for the number of routes that our customers use. The best practice for a large number of routes in a cluster is to shard; however, we have no visibility into whether and how customers are using sharding.
Why is this important? These metrics will inform our performance and scale testing, documented cluster limits, and how customers are using sharding for best practice deployments.
Dependencies (internal and external):
Prioritized epics + deliverables (in scope / not in scope):
Not in scope:
Estimate (XS, S, M, L, XL, XXL):
Previous Work:
Open questions:
Acceptance criteria:
Epic Done Checklist:
Description:
As described in the Design Doc, the following information needs to be exported from the Cluster Ingress Operator:
Design 2 will be implemented as part of this story.
Acceptance Criteria:
Description:
As described in the Metrics to be sent via telemetry section of the Design Doc, the following metrics need to be sent from the OpenShift cluster to Red Hat premises:
The metrics should be allowlisted on the cluster side.
The steps described in Sending metrics via telemetry need to be followed, specifically step 5.
Depends on CFE-478.
Acceptance Criteria:
This is an epic bucket for all activities surrounding the creation of a declarative approach to releasing and maintaining OLM catalogs.
When working on this Epic, it's important to keep in mind this other potentially related Epic: https://issues.redhat.com/browse/OLM-2276
Enhance the veneer rendering to be able to read the input veneer data from stdin, via a pipe, in a manner similar to https://dev.to/napicella/linux-pipes-in-golang-2e8j
The command could then be used in a manner similar to many k8s examples, like:
```shell
opm alpha render-veneer semver -o yaml < infile > outfile
```
Upstream issue link: https://github.com/operator-framework/operator-registry/issues/1011
Jira Description
As an OPM maintainer, I want to downstream the PR for (OCP 4.12) and backport it to OCP 4.11 so that IIB will NOT be impacted by the changes when it upgrades the OPM version to use the next/future opm upstream release (v1.25.0).
Summary / Background
IIB (the downstream service that manages the indexes) uses the upstream version. If they bump the OPM version to the next/future release (v1.25.0) with this change before the downstream images are updated, then the process to manage the indexes downstream will face issues and it will impact the distributions.
Acceptance Criteria
Definition of Ready
Definition of Done
tldr: three basic claims, the rest is explanation and one example
While bugs are an important metric, fixing bugs is different from investing in maintainability and debuggability. Investing in fixing bugs will help alleviate immediate problems, but doesn't improve the ability to address future problems. You (may) get a code base with fewer bugs, but when you add a new feature, it will still be hard to debug problems and interactions. This pushes a code base towards stagnation where it gets harder and harder to add features.
One alternative is to ask teams to produce ideas for how they would improve future maintainability and debuggability instead of focusing on immediate bugs. This would produce designs that make problem determination, bug resolution, and future feature additions faster over time.
I have a concrete example of one such outcome of focusing on bugs vs quality. We have resolved many bugs about communication failures with ingress by finding problems with point-to-point network communication. We have fixed the individual bugs, but have not improved the code for future debugging. In so doing, we chase many hard-to-diagnose problems across the stack. The alternative is to create a point-to-point network connectivity capability. This would immediately improve bug resolution and stability (detection) for kuryr, ovs, legacy sdn, network-edge, kube-apiserver, openshift-apiserver, authentication, and console. Bug fixing does not produce the same impact.
We need more investment in our future selves. Saying, "teams should reserve this" doesn't seem to be universally effective. Perhaps an approach that directly asks for designs and impacts and then follows up by placing the items directly in planning and prioritizing against PM feature requests would give teams the confidence to invest in these areas and give broad exposure to systemic problems.
Relevant links:
Enable the chaos plugin https://coredns.io/plugins/chaos/ in our CoreDNS configuration so that we can use a DNS query to easily identify what DNS pods are responding to our requests.
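For reference, a sketch of what the resulting Corefile stanza could look like; in OpenShift the Corefile lives in the operator-managed dns-default ConfigMap rather than being edited directly, and the version/authors strings below are illustrative:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: dns-default
  namespace: openshift-dns
data:
  Corefile: |
    .:5353 {
        # chaos answers CH-class TXT queries (e.g. version.bind) so the responding DNS pod can be identified
        chaos CoreDNS-openshift openshift-dns
        # ...existing plugins (errors, health, kubernetes, forward, cache, ...)
    }
```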
Requirement | Notes | isMvp? |
---|---|---|
CI - MUST be running successfully with test automation | This is a requirement for ALL features. | YES |
Release Technical Enablement | Provide necessary release enablement details and documents. | YES |
This Section: What does the person writing code, testing, documenting need to know? What context can be provided to frame this feature.
Questions to be addressed:
As a console user I want to have the option to:
For Deployments we will add a 'Restart rollout' action button. This action will PATCH the Deployment object's 'spec.template.metadata.annotations' block by adding an 'openshift.io/restartedAt: <actual-timestamp>' annotation. This will restart the deployment by creating a new ReplicaSet.
For DeploymentConfigs we will add a 'Retry rollout' action button. This action will PATCH the latest revision of the ReplicationController object's 'metadata.annotations' block by setting 'openshift.io/deployment/phase: "New"' and removing openshift.io/deployment.cancelled and openshift.io/deployment.status-reason.
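A sketch of the patch body for the Deployment case, using the annotation key named above (timestamp illustrative):

```yaml
# Strategic-merge patch against the Deployment; changing a pod-template annotation
# rolls out a new ReplicaSet.
spec:
  template:
    metadata:
      annotations:
        openshift.io/restartedAt: "2022-08-24T14:30:00Z"
```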
Acceptance Criteria:
BACKGROUND:
OpenShift console will be updated to allow rollout restart deployment from the console itself.
Currently, from the OpenShift console, for the resource "deploymentconfigs" we can only start and pause the rollout, and for the resource "deployment" we can only resume the rollout. Neither resource (deployment & deployment config) has an option to restart the rollout, which is why the customer wants to be able to perform this action from the OpenShift console as well as the CLI.
The customer wants developers who are not fluent with the oc tool and terminal utilities to be able to use the console instead of the terminal to restart a deployment, just like they do through the CLI with "oc rollout restart deploy/<deployment-name>".
Usually when developers change the ConfigMap that a deployment uses, they have to restart the pods. Currently, the developers have to use the oc rollout restart deployment command. The customer wants a button/menu to perform the same action from the console as well.
Design
Doc: https://docs.google.com/document/d/1i-jGtQGaA0OI4CYh8DH5BBIVbocIu_dxNt3vwWmPZdw/edit
When OCP is performing cluster upgrade user should be notified about this fact.
There are two possibilities how to surface the cluster upgrade to the users:
AC:
Note: We need to decide whether we want to distinguish this particular notification by a different color. Ccing Ali Mobrem.
Created from: https://issues.redhat.com/browse/RFE-3024
As a developer, I want to make status.HostIP for Pods visible in the Pod details page of the OCP Web Console. Currently there is no way to view the node IP for a Pod in the OpenShift Web Console. When viewing a Pod in the console, the field status.HostIP is not visible.
Acceptance criteria:
oc-mirror is a GA product as of OpenShift 4.11.
The goal of this feature is to address any future customer requests for new features or capabilities in oc-mirror.
Pre-Work Objectives
Since some of our requirements from the ACM team will not be available for the 4.12 timeframe, the team should work on anything we can get done in the scope of the console repo so that when the required items are available in 4.13, we can be more nimble in delivering GA content for the Unified Console Epic.
Overall GA Key Objective
Providing our customers with a single, simplified user experience (Hybrid Cloud Console) that is extensible, can run locally or in the cloud, and is capable of managing the fleet as well as deep diving into a single cluster.
Why do customers want this?
Why do we want this?
Phase 2 Goal: Productization of the united Console
As a developer I would like to disable clusters like *KS that we can't support for multi-cluster (for instance because we can't authenticate). The ManagedCluster resource has a vendor label that we can use to know if the cluster is supported.
cc Ali Mobrem Sho Weimer Jakub Hadvig
UPDATE: 9/20/22 : we want an allow-list with OpenShift, ROSA, ARO, ROKS, and OpenShiftDedicated
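For reference, the allow-list check would key off the vendor label on the ManagedCluster resource; a sketch (cluster name and label value illustrative):

```yaml
apiVersion: cluster.open-cluster-management.io/v1
kind: ManagedCluster
metadata:
  name: example-cluster
  labels:
    vendor: OpenShift   # allow-listed values per the update above: OpenShift, ROSA, ARO, ROKS, OpenShiftDedicated
```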
Acceptance criteria:
RHEL CoreOS should be updated to RHEL 9.2 sources to take advantage of newer features, hardware support, and performance improvements.
Requirement | Notes | isMvp? |
---|---|---|
CI - MUST be running successfully with test automation | This is a requirement for ALL features. | YES |
Release Technical Enablement | Provide necessary release enablement details and documents. | YES |
Questions to be addressed:
PROBLEM
We would like to improve our signal for RHEL9 readiness by increasing internal engineering engagement and external partner engagement on our community OpenShift offering, OKD.
PROPOSAL
Adding OKD to run on SCOS (a CentOS stream for CoreOS) brings the community offering closer to what a partner or an internal engineering team might expect on OCP.
ACCEPTANCE CRITERIA
Image has been switched/included:
DEPENDENCIES
The SCOS build payload.
RELATED RESOURCES
OKD+SCOS proposal: https://docs.google.com/presentation/d/1_Xa9Z4tSqB7U2No7WA0KXb3lDIngNaQpS504ZLrCmg8/edit#slide=id.p
OKD+SCOS work draft: https://docs.google.com/document/d/1cuWOXhATexNLWGKLjaOcVF4V95JJjP1E3UmQ2kDVzsA/edit
Acceptance Criteria
A stable OKD on SCOS is built and available to the community every sprint.
This comes up when installing ipi-on-aws on arm64 with the custom payload build at quay.io/aleskandrox/okd-release:4.12.0-0.okd-centos9-full-rebuild-arm64 that is using SCOS as the machine-os-content image.
```
[root@ip-10-0-135-176 core]# crictl logs c483c92e118d8
2022-08-11T12:19:39+00:00 [cnibincopy] FATAL ERROR: Unsupported OS ID=scos
```
The probable fix has to land on https://github.com/openshift/cluster-network-operator/blob/master/bindata/network/multus/multus.yaml#L41-L53
HyperShift came to life to serve multiple goals: some are the main near-term goals, and some are secondary goals that serve us well long-term.
HyperShift opens up doors to penetrate the market. HyperShift enables true hybrid (CP and Workers decoupled, mixed IaaS, mixed Arch,...). An architecture that opens up more options to target new opportunities in the cloud space. For more details on this one check: Hosted Control Planes (aka HyperShift) Strategy [Live Document]
To bring hosted control planes to our customers, we need the means to ship it. Today MCE is how HyperShift is shipped and installed so that customers can use it. There are two main customers for hosted control planes:
If you have noticed, MCE is the delivery mechanism for both management models. The difference between managed and self-managed is the consumer persona: for self-managed it's the customer SRE, for managed it's the RH SRE.
For us to ship HyperShift in the product (as hosted control planes) in either management model, there is a necessary readiness checklist that we need to satisfy. Below are the high-level requirements needed before GA:
Please also have a look at our What are we missing in Core HyperShift for GA Readiness? doc.
Multi-cluster is becoming an industry need today not because this is where the trend is going, but because it's the only viable path today to solve many of our customers' use-cases. Below is some reasoning why multi-cluster is a NEED:
As a result, multi-cluster management is a defining category in the market where Red Hat plays a key role. Today Red Hat solves for multi-cluster via RHACM and MCE. The goal is to simplify fleet management complexity by providing a single pane of glass to observe, secure, police, govern, configure a fleet. I.e., the operand is no longer one cluster but a set, a fleet of clusters.
HyperShift's logically centralized architecture, as well as its native separation of concerns and superior cluster lifecycle management experience, makes it a great fit as the foundation of our multi-cluster management story.
Thus the following stories are important for HyperShift:
Refs:
HyperShift is the core engine that will be used to provide hosted control-planes for consumption in managed and self-managed.
Main user story: When life cycling clusters as a cluster service consumer via HyperShift core APIs, I want to use a stable/backward compatible API that is less susceptible to future changes so I can provide availability guarantees.
Ref: What are we missing in Core HyperShift for GA Readiness?
Customers do not pay Red Hat more to run HyperShift control planes and supporting infrastructure than Standalone control planes and supporting infrastructure.
Assumptions:
HyperShift - proposed cuts from data plane
When operating OpenShift clusters (for any OpenShift form factor) from MCE/ACM/OCM/CLI as a Cluster Service Consumer (RH managed SRE, or self-manage SRE/admin) I want to be able to migrate CPs from one hosting service cluster to another:
More information:
To understand usage patterns and inform our decision making for the product. We need to be able to measure adoption and assess usage.
See Hosted Control Planes (aka HyperShift) Strategy [Live Document]
Whether it's managed or self-managed, it's pertinent to report health metrics so we can create meaningful Service Level Objectives (SLOs) and alert on failure to meet our availability guarantees. This is especially important for our managed services path.
https://issues.redhat.com/browse/OCPPLAN-8901
HyperShift for managed services is a strategic company goal as it improves usability, feature, and cost competitiveness against other managed solutions, and because managed services/consumption-based cloud services is where we see the market growing (customers are looking to delegate platform overhead).
We should make sure our SD milestones are unblocked by the core team.
This feature reflects HyperShift core readiness to be consumed. When all related EPICs and stories in this EPIC are complete HyperShift can be considered ready to be consumed in GA form. This does not describe a date but rather the readiness of core HyperShift to be consumed in GA form NOT the GA itself.
- GA date for self-managed will be factoring in other inputs such as adoption, customer interest/commitment, and other factors.
- GA dates for ROSA-HyperShift are on track, tracked in milestones M1-7 (have a look at https://issues.redhat.com/browse/OCPPLAN-5771)
Epic Goal*
The goal is to split client certificate trust chains from the global Hypershift root CA.
Why is this important? (mandatory)
This is important to:
Scenarios (mandatory)
Provide details for user scenarios including actions to be performed, platform specifications, and user personas.
Dependencies (internal and external) (mandatory)
Hypershift team needs to provide us with code reviews and merge the changes we are to deliver
Contributing Teams(and contacts) (mandatory)
Acceptance Criteria (optional)
The serviceaccount CA bundle automatically injected to all pods cannot be used to authenticate any client certificate generated by the control-plane.
Drawbacks or Risk (optional)
Risk: there is significant time pressure, as this should be delivered before the first stable Hypershift release.
Done - Checklist (mandatory)
AUTH-311 introduced an enhancement. Implement the signer separation described there.
The MCO should properly report its state in a way that's consistent and able to be understood by customers, troubleshooters, and maintainers alike.
Some customer cases have revealed scenarios where the MCO state reporting is misleading and therefore could be unreliable to base decisions and automation on.
In addition to correcting some incorrect states, the MCO will be enhanced for a more granular view of update rollouts across machines.
The MCO should properly report its state in a way that's consistent and able to be understood by customers, troubleshooters, and maintainers alike.
For this epic, "state" means "what is the MCO doing?" – so the goal here is to try to make sure that it's always known what the MCO is doing.
This includes:
While this probably crosses a little bit into the "status" portion of certain MCO objects, as some state is definitely recorded there, this probably shouldn't turn into a "better status reporting" epic. I'm interpreting "status" to mean "how is it going" so status is maybe a "detail attached to a state".
Exploration here: https://docs.google.com/document/d/1j6Qea98aVP12kzmPbR_3Y-3-meJQBf0_K6HxZOkzbNk/edit?usp=sharing
https://docs.google.com/document/d/17qYml7CETIaDmcEO-6OGQGNO0d7HtfyU7W4OMA6kTeM/edit?usp=sharing
The current property description is:
configuration represents the current MachineConfig object for the machine config pool.
But in a 4.12.0-ec.4 cluster, the actual semantics seem to be something closer to "the most recent rendered config that we completely leveled on". We should at least update the godocs to be more specific about the intended semantics. And perhaps consider adjusting the semantics?
This section includes Jira cards that are linked to an Epic, but the Epic itself is not linked to any Feature. These epics were completed when this image was assembled
This is epic tracks "business as usual" requirements / enhancements / bug fixing of Insights Operator.
Today the links point at a rule-scoped page, but that page lacks information about recommended resolution. You can click through by cluster ID to your specific cluster and get that recommendation advice, but it would be more convenient and less confusing for customers if we linked directly to the cluster-scoped recommendation page.
We can implement by updating the template here to be:
fmt.Sprintf("https://console.redhat.com/openshift/insights/advisor/clusters/%s?first=%s%%7C%s", clusterID, ruleIDStr, rec.ErrorKey)
or something like that.
unknowns
request is clear, solution/implementation to be further clarified
This story only covers API components. We will create a separate story for other utility functions.
Today we are generating documentation for Console's Dynamic Plugin SDK in
frontend/packages/dynamic-plugin-sdk. We are missing ts-doc for a set of hooks and components.
We are generating the markdown from the dynamic-plugin-sdk using
yarn generate-doc
Here is the list of the API that the dynamic-plugin-sdk is exposing:
https://gist.github.com/spadgett/0ddefd7ab575940334429200f4f7219a
Acceptance Criteria:
Out of Scope:
The extension `console.dashboards/overview/detail/item` doesn't constrain the content to fit the card.
The details-card has an expectation that a <dd> item will be the last item (for spacing between items). Our static details-card items use a component called 'OverviewDetailItem'. This isn't enforced in the extension and can cause undesired padding issues if plugins just do whatever they want.
I feel our approach here should be making the extension take the props of 'OverviewDetailItem' where 'children' is the new 'component'.
Following https://coreos.slack.com/archives/C011BL0FEKZ/p1650640804532309, it would be useful for us (network observability team) to have access to ResourceIcon in dynamic-plugin-sdk.
Currently ResourceLink is exported but not ResourceIcon
AC:
Currently the ConsolePlugins API version is v1alpha1. Since we are going GA with dynamic plugins we should be creating a v1 version.
This would require updates in following repositories:
AC:
NOTE: This story does not include the conversion webhook change which will be created as a follow on story
We neither use nor support static plugin nav extensions anymore so we should remove the API in the static plugin SDK and get rid of related cruft in our current nav components.
AC: Remove static plugin nav extensions code. Check the navigation code for any references to the old API.
`@openshift-console/plugin-shared` (NPM) is a package that will contain shared components that can be upversioned separately by the Plugins so they can keep core compatibility low but upversion and support more shared components as we need them.
This isn't documented today. We need to do that.
when defining two proxy endpoints,
apiVersion: console.openshift.io/v1alpha1
kind: ConsolePlugin
metadata:
  ...
  name: forklift-console-plugin
spec:
  displayName: Console Plugin Template
  proxy:
    service:
      basePath: /
I get two proxy endpoints
/api/proxy/plugin/forklift-console-plugin/forklift-inventory
and
/api/proxy/plugin/forklift-console-plugin/forklift-must-gather-api
but both proxy to the `forklift-must-gather-api` service
e.g.
curl to:
[server url]/api/proxy/plugin/forklift-console-plugin/forklift-inventory
will point to the `forklift-must-gather-api` service, instead of the `forklift-inventory` service
To align with https://github.com/openshift/dynamic-plugin-sdk, plugin metadata field dependencies as well as the @console/pluginAPI entry contained within should be made optional.
If a plugin doesn't declare the @console/pluginAPI dependency, the Console release version check should be skipped for that plugin.
The console has good error boundary components that are useful for dynamic plugins.
Exposing them will enable the plugins to get the same look and feel for handling React errors as the console.
The minimum requirement right now is to expose the ErrorBoundaryFallbackPage component from
https://github.com/openshift/console/blob/master/frontend/packages/console-shared/src/components/error/fallbacks/ErrorBoundaryFallbackPage.tsx
Acceptance Criteria: Add missing API docs for *Icon and *Status components in the API docs
During the development of https://issues.redhat.com/browse/CONSOLE-3062, it was determined additional information is needed in order to assist a user when troubleshooting a Failed plugin (see https://github.com/openshift/console/pull/11664#issuecomment-1159024959). As it stands today, there is no data available to the console to relay to the user regarding why the plugin Failed. Presumably, a message should be added to NotLoadedDynamicPlugin to address this gap.
AC: Add `message` property to NotLoadedDynamicPluginInfo type.
Based on API review CONSOLE-3145, we have decided to deprecate the following APIs:
cc Andrew Ballantyne Bryan Florkiewicz
Currently our `api.md` does not generate docs with "tags" (aka `@deprecated`) – we'll need to add that functionality to the `generate-doc.ts` script. See the code that works for `console-extensions.md`
We should have a global notification, or the `Console plugins` page (e.g., k8s/cluster/operator.openshift.io~v1~Console/cluster/console-plugins) should alert users, when the console operator's `spec.managementState` is `Unmanaged`, as changes to `enabled` for plugins will have no effect.
Move `frontend/public/components/nav` to `packages/console-app/src/components/nav` and address any issues resulting from the move.
There will be some expected lint errors relating to cyclical imports. These will require some refactoring to address.
This enhancement introduces support for provisioning and upgrading heterogeneous architecture clusters in phases.
We need to scan through the compute nodes and build a set of supported architectures from those. Each node on the cluster has a label for its architecture, e.g. kubernetes.io/arch=arm64, kubernetes.io/arch=amd64, etc. Based on that set of supported architectures, the console will need to surface in OperatorHub only those operators which are supported on our nodes.
AC:
@jpoulin is good to ask about heterogeneous clusters.
This enhancement introduces support for provisioning and upgrading heterogeneous architecture clusters in phases.
We need to scan through the compute nodes and build a set of supported architectures from those. Each node on the cluster has a label for its architecture, e.g. `kubernetes.io/arch: arm64`, `kubernetes.io/arch: amd64`, etc. Based on that set of supported architectures, the console will need to surface in OperatorHub only those operators which are supported on our nodes. Each operator's PackageManifest contains labels that indicate the operator's supported architectures, e.g. `operatorframework.io/arch.s390x: supported`; an operator can be supported on multiple architectures (both label shapes are sketched below).
AC:
OS and arch filtering: https://github.com/openshift/console/blob/2ad4e17d76acbe72171407fc1c66ca4596c8aac4/frontend/packages/operator-lifecycle-manager/src/components/operator-hub/operator-hub-items.tsx#L49-L86
@jpoulin is good to ask about heterogeneous clusters.
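The two label shapes involved, as described above (architecture values illustrative):

```yaml
# Node side: every node advertises its architecture.
metadata:
  labels:
    kubernetes.io/arch: arm64
---
# Operator side: a PackageManifest advertises each supported architecture.
metadata:
  labels:
    operatorframework.io/arch.arm64: supported
    operatorframework.io/arch.amd64: supported
```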
An epic we can duplicate for each release to ensure we have a place to catch things we ought to be doing regularly but can tend to fall by the wayside.
As a developer, I want to be able to clean up the css markup after making the css / scss changes required for dark mode and remove any old unused css / scss content.
Acceptance criteria:
As a user, I want to be able to:
so that I can achieve
Description of criteria:
Detail about what is specifically not being delivered in the story
1. Proposed title of this feature request
Basic authentication for Helm Chart repository in helmchartrepositories.helm.openshift.io CRD.
2. What is the nature and description of the request?
As of v4.6.9, the HelmChartRepository CRD only supports client TLS authentication through spec.connectionConfig.tlsClientConfig.
3. Why do you need this? (List the business requirements here)
Basic authentication is widely used by many chart repositories managers (Nexus OSS, Artifactory, etc.)
Helm CLI also supports them with the helm repo add command.
https://helm.sh/docs/helm/helm_repo_add/
4. How would you like to achieve this? (List the functional requirements here)
Probably by extending the CRD:
spec:
  connectionConfig:
    username: username
    password:
      secretName: secret-name
The secret namespace should be openshift-config to align with the tlsClientConfig behavior.
5. For each functional requirement listed in question 4, specify how Red Hat and the customer can test to confirm the requirement is successfully implemented.
Trying to pull Helm charts from remote private chart repositories that have disabled anonymous access and offer basic authentication.
E.g.: https://github.com/sonatype/docker-nexus
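Putting the proposal together, a repository CR plus its credentials Secret might look like the sketch below; the username/password fields follow the shape proposed in question 4 and are not a final API, and all names and URLs are illustrative:

```yaml
apiVersion: helm.openshift.io/v1beta1
kind: HelmChartRepository
metadata:
  name: private-repo
spec:
  connectionConfig:
    url: https://nexus.example.com/repository/helm-charts   # illustrative
    username: username                                      # proposed field, not final
    password:
      secretName: private-repo-credentials                  # proposed field, not final
---
apiVersion: v1
kind: Secret
metadata:
  name: private-repo-credentials
  namespace: openshift-config   # aligns with the tlsClientConfig behavior noted above
stringData:
  password: changeme
```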
As an OCP user I would like to be able to install Helm charts from repos added to ODC with the basic authentication fields populated.
We need to support Helm installs for repos that have the basic authentication secret name and namespace.
Updating the ProjectHelmChartRepository CRD has already been done in a different story.
Supporting the HelmChartRepository CR: this feature will be scoped first to project/namespace-scoped repos.
If the new fields for basic auth are set in the repo CR, then use those credentials when making API calls to Helm to install/upgrade charts. We will error out if the logged-in user does not have access to the secret referenced by the repo CR. If the basic auth fields are not present, we assume the repo is not authenticated.
None
NA
I can list, install and update charts on authenticated repos from ODC
Needs Documentation both upstream and downstream
Needs new unit test covering repo auth
Dependencies identified
Blockers noted and expected delivery timelines set
Design is implementable
Acceptance criteria agreed upon
Story estimated
Unknown
Verified
Unsatisfied
ACCEPTANCE CRITERIA
NOTES
ACCEPTANCE CRITERIA
NOTES
This is a follow-up Epic to https://issues.redhat.com/browse/MCO-144, which aimed to get in-place upgrades for Hypershift. This epic aims to capture additional work to focus on using CoreOS/OCP layering in Hypershift, which has benefits such as:
- removing or reducing the need for ignition
- maintaining feature parity between self-driving and managed OCP models
- adding additional functionality such as hotfixes
Right now in https://github.com/openshift/hypershift/pull/1258 you can only perform one upgrade at a time. Multiple upgrades will break due to controller logic
Properly create logic to handle manifest creation/updates and deletion, so the logic is more bulletproof
Currently not implemented, and will require the MCD hypershift mode to be adjusted to handle disruptionless upgrades like regular MCD
We plan to build the Ironic container images using RHEL 9 as the base image in OCP 4.12.
This is required because the Ironic components have abandoned support for CentOS Stream 8 and Python 3.6/3.7 upstream during the most recent development cycle that will produce the stable Zed release, in favor of CentOS Stream 9 and Python 3.8/3.9.
More info on RHEL8 to RHEL9 transition in OCP can be found at https://docs.google.com/document/d/1N8KyDY7KmgUYA9EOtDDQolebz0qi3nhT20IOn4D-xS4
update ironic software to pick up latest bug fixes
This is an API change and we will consider this as a feature request.
https://issues.redhat.com/browse/NE-799 Please check this for more details
https://issues.redhat.com/browse/NE-799 Please check this for more details
No
N/A
Make sure that the CSI driver automatically updates oVirt credentials when they are updated in OpenShift.
In the CSI driver operator we should add the
withSecretHashAnnotation
call from library-go like this: https://github.com/openshift/aws-ebs-csi-driver-operator/blob/53ed27b2a0eaa655338da180a79897855b366ac7/pkg/operator/starter.go#L138
We need tests for the ovirt-csi-driver and the cluster-api-provider-ovirt. These tests help us to
Also, having dedicated tests on lower levels with a smaller scope (unit, integration, ...) has the following benefits:
Integration tests need to be implemented according to https://cluster-api.sigs.k8s.io/developer/testing.html#integration-tests using envtest.
As a user, I would like to be informed in an intuitive way, when quotas have been reached in a namespace
Refer below for more details
As a user, In the topology view, I would like to be updated intuitively if any of the deployments have reached quota limits
Refer below for more details
Provide a form driven experience to allow cluster admins to manage the perspectives to meet the ACs below.
We have heard the following requests from customers and developer advocates:
As an admin, I want to hide user perspective(s) based on the customization.
As an admin, I should be able to see a code snippet that shows how to add user perspectives
Based on the https://issues.redhat.com/browse/ODC-6732 enhancement proposal, the cluster admin can add user perspectives
To support the cluster-admin to configure the perspectives correctly, the developer console should provide a code snippet for the customization of yaml resource (Console CRD).
Customize Perspective Enhancement PR: https://github.com/openshift/enhancements/pull/1205
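A sketch of what such a snippet might show, assuming the customization lands under the console operator config as the enhancement proposes (the exact schema is defined by that proposal, so field names here are illustrative):

```yaml
apiVersion: operator.openshift.io/v1
kind: Console
metadata:
  name: cluster
spec:
  customization:
    perspectives:            # hypothetical shape, per the enhancement proposal
      - id: dev
        visibility:
          state: Disabled
```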
Previous work:
As an admin, I want to hide the admin perspective for non-privileged users or hide the developer perspective for all users
Based on the https://issues.redhat.com/browse/ODC-6730 enhancement proposal, it is required to extend the console configuration CRD to enable the cluster admins to configure this data in the console resource
Previous customization work:
As an admin, I want to be able to use a form driven experience to hide user perspective(s)
Customers don't want their users to have access to some/all of the items which are available in the Developer Catalog. The request is to change access for the cluster, not per user or persona.
Provide a form driven experience to allow cluster admins easily disable the Developer Catalog, or one or more of the sub catalogs in the Developer Catalog.
Multiple customer requests.
We need to consider how this will work with subcatalogs which are installed by operators: VMs, Event Sources, Event Catalogs, Managed Services, Cloud based services
As an admin, I want to hide/disable access to specific sub-catalogs in the developer catalog or the complete dev catalog for all users across all namespaces.
Based on the https://issues.redhat.com/browse/ODC-6732 enhancement proposal, it is required to extend the console configuration CRD to enable the cluster admins to configure this data in the console resource
Extend the "customization" spec type definition for the CRD in the openshift/api project
Previous customization work:
As an admin, I want to hide sub-catalogs in the developer catalog or hide the developer catalog completely based on the customization.
As a cluster-admin, I should be able to see a code snippet that shows how to enable sub-catalogs or the entire dev catalog.
Based on the https://issues.redhat.com/browse/ODC-6732 enhancement proposal, the cluster admin can add sub-catalog(s) from the Developer Catalog or the Dev catalog as a whole.
To support the cluster-admin to configure the sub-catalog list correctly, the developer console should provide a code snippet for the customization yaml resource (Console CRD).
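A sketch of what such a snippet might show, assuming the sub-catalog customization also lands under the console operator config (hypothetical shape; the exact schema is defined by the enhancement proposal):

```yaml
apiVersion: operator.openshift.io/v1
kind: Console
metadata:
  name: cluster
spec:
  customization:
    developerCatalog:        # hypothetical shape, per the enhancement proposal
      types:
        state: Disabled
        disabled:
          - HelmChart
          - EventSource
```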
Previous work:
As an admin, I would like openshift-* namespaces with an operator to be labeled with security.openshift.io/scc.podSecurityLabelSync=true to ensure the continued functioning of operators without manual intervention. The label should only be applied to openshift-* namespaces with an operator (the presence of a ClusterServiceVersion resource) IF the label is not already present. This automation will help the smooth functioning of the cluster and avoid frivolous operational events.
Context: As part of the PSA migration period, OpenShift will ship with the "label sync'er", a controller that will automatically adjust PSA security profiles in response to the workloads present in the namespace. We can assume that not all operators (produced by Red Hat, the community, or ISVs) will have successfully migrated their deployments in response to upstream PSA changes. The label sync'er will sync, by default, any namespace not prefixed with "openshift-"; for "openshift-" namespaces, the explicit label (security.openshift.io/scc.podSecurityLabelSync=true) is required for sync (see the label sketch after the A/C list).
A/C:
- OLM operator has been modified (downstream only) to label any unlabelled "openshift-" namespace in which a CSV has been created
- If a labeled namespace containing at least one non-copied csv becomes unlabelled, it should be relabelled
- The implementation should be done in a way to eliminate or minimize subsequent downstream sync work (it is ok to make slight architectural changes to the OLM operator in the upstream to enable this)
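The label in question, applied to a qualifying namespace, looks like this (namespace name illustrative):

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: openshift-example-operator   # an openshift-* namespace containing a CSV
  labels:
    security.openshift.io/scc.podSecurityLabelSync: "true"
```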
As an SRE, I want the hypershift operator to expose a metric when the hosted control plane is ready.
This should allow SREs to tune (or silence) alerts occurring while the hosted control plane is spinning up.
The Kube APIServer has a sidecar to output audit logs. We need similar sidecars for other APIServers that run on the control plane side. We also need to pass the same audit log policy that we pass to the KAS to these other API servers.
This epic tracks network tooling improvements for 4.12
A new framework and process should be developed to make sharing network tools with devs, support, and customers convenient. We are going to add some tools for OVN troubleshooting before OVN-K goes default, some tools that we got from customer cases, and some more to help analyze and debug collected logs based on the stable must-gather/sosreport format we get now thanks to the 4.11 Epic.
Our estimation for this Epic is 1 engineer * 2 Sprints
WHY:
This epic is important to help improve the time it takes our customers and our team to understand an issue within the cluster.
A focus of this epic is to develop tools to quickly allow debugging of a problematic cluster. This is crucial for the engineering team to help us scale. We want to provide a tool to our customers to help lower the cognitive burden to get at a root cause of an issue.
Alert if any of the OVN controllers have been disconnected from the southbound database for a period of time, using the metric ovn_controller_southbound_database_connected.
The metric updates every 2 minutes, so please be mindful of this when creating the alert.
If the controller is disconnected for 10 minutes, fire an alert.
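A minimal sketch of such a rule, assuming the metric is a gauge that reports 1 when connected and 0 when disconnected (rule and alert names illustrative):

```yaml
apiVersion: monitoring.coreos.com/v1
kind: PrometheusRule
metadata:
  name: ovn-controller-sb-connectivity     # illustrative
  namespace: openshift-ovn-kubernetes
spec:
  groups:
    - name: ovn-controller.rules
      rules:
        - alert: OVNControllerSouthboundDisconnected   # illustrative
          # max_over_time tolerates the 2-minute metric update interval
          expr: max_over_time(ovn_controller_southbound_database_connected[5m]) == 0
          for: 10m
          labels:
            severity: warning
          annotations:
            summary: ovn-controller has been disconnected from the southbound database for more than 10 minutes.
```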
DoD: Merged to CNO and tested by QE
Add a socks proxy to cluster-network-operator so egress IP can use gRPC to reach worker nodes.
With the introduction of gRPC as the means for determining the state of a given egress node, HyperShift should be able to leverage the socks proxy and become able to know the state of each egress node.
References relevant to this work:
1281-network-proxy
https://coreos.slack.com/archives/C01C8502FMM/p1658427627751939
https://github.com/openshift/hypershift/pull/1131/commits/28546dc587dc028dc8bded715847346ff99d65ea
This Epic is here to track the rebase we need to do when kube 1.25 is GA https://www.kubernetes.dev/resources/release/
Keeping this in mind can help us plan our time better. At the time of writing, GA is planned for August 23.
https://docs.google.com/document/d/1h1XsEt1Iug-W9JRheQas7YRsUJ_NQ8ghEMVmOZ4X-0s/edit --> this is the link for rebase help
We need to rebase cloud network config controller to 1.25 when the kube 1.25 rebase lands.
This section includes Jira cards that are linked to an Epic, but the Epic itself is not linked to any Feature. These epics were not completed when this image was assembled
Placeholder epic to track spontaneous tasks which do not deserve their own epic.
DoD:
At the moment, if the input etcd KMS encryption (key and role) is invalid, we fail transparently.
We should check that both key and role are compatible/operational for a given cluster and fail with a condition otherwise.
Changes made in METAL-1 open up opportunities to improve our handling of images by cleaning up redundant code that generates extra work for the user and extra load for the cluster.
We only need to run the image cache DaemonSet if there is a QCOW URL to be mirrored (effectively this means a cluster installed with 4.9 or earlier). We can stop deploying it for new clusters installed with 4.10 or later.
Currently, the image-customization-controller relies on the image cache running on every master to provide the shared hostpath volume containing the ISO and initramfs. The first step is to replace this with a regular volume and an init container in the i-c-c pod that extracts the images from machine-os-images. We can use the copy-metal -image-build flag (instead of -all used in the shared volume) to provide only the required images.
Once i-c-c has its own volume, we can switch the image extraction in the metal3 Pod's init container to use the -pxe flag instead of -all.
The machine-os-images init container for the image cache (not the metal3 Pod) can be removed. The whole image cache deployment is now optional and need only be started if provisioningOSDownloadURL is set (and in fact should be deleted if it is not).
Description of the problem:
Cluster installation fails if the installation disk has LVM on RAID:
Host: test-infra-cluster-3cc862c9-master-0, reached installation stage Failed: failed executing nsenter [--target 1 --cgroup --mount --ipc --pid -- mdadm --stop /dev/md0], Error exit status 1, LastOutput "mdadm: Cannot get exclusive access to /dev/md0:Perhaps a running process, mounted filesystem or active volume group?"
How reproducible:
100%
Steps to reproduce:
1. Install a cluster while the master nodes have a disk with LVM on RAID (reproduced using test: https://gitlab.cee.redhat.com/ocp-edge-qe/kni-assisted-installer-auto/-/blob/master/api_tests/test_disk_cleanup.py#L97)
Actual results:
Installation failed
Expected results:
Installation success
Description of the problem:
When running assisted-installer on a machine where there is more than one volume group per physical volume, only the first volume group will be cleaned up. This leads to problems later and will lead to errors such as:
Failed - failed executing nsenter [--target 1 --cgroup --mount --ipc --pid -- pvremove /dev/sda -y -ff], Error exit status 5, LastOutput "Can't open /dev/sda exclusively. Mounted filesystem?
How reproducible:
Set up a VM with more than one volume group per physical volume. As an example, look at the following sample from a customer cluster.
List block devices: /usr/bin/lsblk -o NAME,MAJ:MIN,SIZE,TYPE,FSTYPE,KNAME,MODEL,UUID,WWN,HCTL,VENDOR,STATE,TRAN,PKNAME
NAME MAJ:MIN SIZE TYPE FSTYPE KNAME MODEL UUID WWN HCTL VENDOR STATE TRAN PKNAME
loop0 7:0 125.9G loop xfs loop0 c080b47b-2291-495c-8cc0-2009ebc39839
loop1 7:1 885.5M loop squashfs loop1
sda 8:0 894.3G disk sda INTEL SSDSC2KG96 0x55cd2e415235b2db 1:0:0:0 ATA running sas
|-sda1 8:1 250M part sda1 0x55cd2e415235b2db sda
|-sda2 8:2 750M part ext2 sda2 3aa73c72-e342-4a07-908c-a8a49767469d 0x55cd2e415235b2db sda
|-sda3 8:3 49G part xfs sda3 ffc3ccfe-f150-4361-8ae5-f87b17c13ac2 0x55cd2e415235b2db sda
|-sda4 8:4 394.2G part LVM2_member sda4 Ua3HOc-Olm4-1rma-q0Ug-PtzI-ZOWg-RJ63uY 0x55cd2e415235b2db sda
`-sda5 8:5 450G part LVM2_member sda5 W8JqrD-ZvaC-uNK9-Y03D-uarc-Tl4O-wkDdhS 0x55cd2e415235b2db sda
  `-nova-instance 253:0 3.1T lvm ext4 dm-0 d15e2de6-2b97-4241-9451-639f7b14594e running sda5
sdb 8:16 894.3G disk sdb INTEL SSDSC2KG96 0x55cd2e415235b31b 1:0:1:0 ATA running sas
`-sdb1 8:17 894.3G part LVM2_member sdb1 6ETObl-EzTd-jLGw-zVNc-lJ5O-QxgH-5wLAqD 0x55cd2e415235b31b sdb
  `-nova-instance 253:0 3.1T lvm ext4 dm-0 d15e2de6-2b97-4241-9451-639f7b14594e running sdb1
sdc 8:32 894.3G disk sdc INTEL SSDSC2KG96 0x55cd2e415235b652 1:0:2:0 ATA running sas
`-sdc1 8:33 894.3G part LVM2_member sdc1 pBuktx-XlCg-6Mxs-lddC-qogB-ahXa-Nd9y2p 0x55cd2e415235b652 sdc
  `-nova-instance 253:0 3.1T lvm ext4 dm-0 d15e2de6-2b97-4241-9451-639f7b14594e running sdc1
sdd 8:48 894.3G disk sdd INTEL SSDSC2KG96 0x55cd2e41521679b7 1:0:3:0 ATA running sas
`-sdd1 8:49 894.3G part LVM2_member sdd1 exVSwU-Pe07-XJ6r-Sfxe-CQcK-tu28-Hxdnqo 0x55cd2e41521679b7 sdd
  `-nova-instance 253:0 3.1T lvm ext4 dm-0 d15e2de6-2b97-4241-9451-639f7b14594e running sdd1
sr0 11:0 989M rom iso9660 sr0 Virtual CDROM0 2022-06-17-18-18-33-00 0:0:0:0 AMI running usb
Now run the assisted installer and try to install an SNO node on this machine; the installation will fail with a message indicating that it could not exclusively access /dev/sda.
Actual results:
The installation will fail with a message that indicates that it could not exclusively access /dev/sda
Expected results:
The installation should proceed and the cluster should start to install.
Suspected Cases
https://issues.redhat.com/browse/AITRIAGE-3809
https://issues.redhat.com/browse/AITRIAGE-3802
https://issues.redhat.com/browse/AITRIAGE-3810
Same thing as we've had in assisted-service: we sometimes fail to install golangci-lint by fetching release artifacts from GitHub directly. That's usually because the same IP address (the CI build cluster) accesses GitHub at a high rate, leading to 429 (too many requests).
The way we fixed it for assisted-service is changing installation to use quay.io image that is already built with the binary.
Example for such a failure: https://prow.ci.openshift.org/view/gs/origin-ci-test/pr-logs/pull/openshift_release/30788/rehearse-30788-periodic-ci-openshift-assisted-installer-agent-release-ocm-2.6-subsystem-test-periodic/1551879759036682240
Filter for all recent failures: https://search.ci.openshift.org/?search=golangci%2Fgolangci-lint+crit+unable+to+find&maxAge=168h&context=1&type=build-log&name=.*assisted.*&excludeName=&maxMatches=5&maxBytes=20971520&groupBy=job
Section 5 of PRD: https://docs.google.com/document/d/1fF-Ajdzc9EDDg687FzTrX577hvY9NdK0/edit#heading=h.gjdgxs
Testing and collaboration with NVIDIA: https://docs.google.com/spreadsheets/d/1LHY-Af-2kQHVwtW4aVdHnmwZLTiatiyf-ySffC8O5NM/edit#gid=0
Deploying Nvidia Patches: https://docs.google.com/document/d/1yR4lphjPKd6qZ9sGzZITl0wH1r4ykfMKPjUnlzvWji4/edit#
This is the continuation of https://issues.redhat.com/browse/NHE-273 but now the focus is on the remaining flows.
Description of problem:
check_pkt_length cannot be offloaded without 1) sFlow offload patches in Open vSwitch and 2) hardware driver support. Since 1) will not be done anytime soon, we need a workaround for the check_pkt_length issue.
Version-Release number of selected component (if applicable):
4.11/4.12
How reproducible:
Always
Steps to Reproduce:
1. Any flow that has check_pkt_len():
5-b: Pod -> NodePort Service traffic (Pod Backend - Different Node)
6-b: Pod -> NodePort Service traffic (Host Backend - Different Node)
4-b: Pod -> Cluster IP Service traffic (Host Backend - Different Node)
10-b: Host Pod -> Cluster IP Service traffic (Host Backend - Different Node)
11-b: Host Pod -> NodePort Service traffic (Pod Backend - Different Node)
12-b: Host Pod -> NodePort Service traffic (Host Backend - Different Node)
Actual results:
Poor performance due to upcalls when check_pkt_len() is not supported.
Expected results:
Good performance.
Additional info:
https://docs.google.com/spreadsheets/d/1LHY-Af-2kQHVwtW4aVdHnmwZLTiatiyf-ySffC8O5NM/edit#gid=670206692
OCP/Telco Definition of Done
Epic Template descriptions and documentation.
As a developer building container images on OpenShift
I want to specify that my build should run without elevated privileges
So that builds do not run with elevated privileges, i.e. not as root from the host's perspective
No QE required for Dev Preview. OpenShift regression testing will verify that existing behavior is not impacted.
We will need to document how to enable this feature, with sufficient warnings regarding Dev Preview.
This likely warrants an OpenShift blog post.
OCP/Telco Definition of Done
Epic Template descriptions and documentation.
We have been running into a number of problems with configure-ovs and nodeip-configuration selecting different interfaces in OVNK deployments. This causes connectivity issues, so we need some way to ensure that everything uses the same interface/IP.
Currently configure-ovs runs before nodeip-configuration, but since nodeip-configuration is the source of truth for IP selection regardless of CNI plugin, I think we need to look at swapping that order. That way configure-ovs could look at what nodeip-configuration chose and not have to implement its own interface selection logic.
I'm targeting this at 4.12 because even though there's probably still time to get it in for 4.11, changing the order of boot services is always a little risky and I'd prefer to do it earlier in the cycle so we have time to tease out any issues that arise. We may need to consider backporting the change though since this has been an issue at least back to 4.10.
Goal
Provide an indication that advanced features are used
Problem
Today, customers and RH don't have the information on the actual usage of advanced features.
Why is this important?
Prioritized Scenarios
In Scope
1. Add a boolean variable in our telemetry to mark if the customer is using advanced features (PV encryption, encryption with KMS, external mode).
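A minimal sketch of how such a telemetry flag could be exposed, assuming a Prometheus-style metrics pipeline; the metric name, the registration point, and the way the feature flags are detected are assumptions, not the actual ODF implementation.
~~~
package telemetry

import "github.com/prometheus/client_golang/prometheus"

// advancedFeatureInUse is a hypothetical gauge: 1 when any advanced
// feature (PV encryption, encryption with KMS, external mode) is in use,
// 0 otherwise.
var advancedFeatureInUse = prometheus.NewGauge(prometheus.GaugeOpts{
	Name: "odf_advanced_feature_usage", // assumed metric name
	Help: "Set to 1 if any ODF advanced feature is in use, 0 otherwise.",
})

func init() {
	prometheus.MustRegister(advancedFeatureInUse)
}

// ReportAdvancedFeatureUsage would be called from the operator's reconcile
// loop with the detected feature flags.
func ReportAdvancedFeatureUsage(pvEncryption, kmsEncryption, externalMode bool) {
	if pvEncryption || kmsEncryption || externalMode {
		advancedFeatureInUse.Set(1)
		return
	}
	advancedFeatureInUse.Set(0)
}
~~~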
Not in Scope
Integrate with subscription watch - will be done by the subscription watch team with our help.
Customers
All
Customer Facing Story
As a compliance manager, I should be able to easily see whether all my clusters are using the right number of subscriptions
What does success look like?
A clear indication in subscription watch for ODF usage (either essential or advanced).
1. Proposed title of this feature request
2. What is the nature and description of the request?
3. Why does the customer need this? (List the business requirements here)
4. List any affected packages or components.
_____________________
Link to main epic: https://issues.redhat.com/browse/RHSTOR-3173
We migrated most component as part of https://issues.redhat.com/browse/RHSTOR-2165
We now have a few components remaining, roughly 15 to 20%. This epic targets:
1) Add support for in-tree modal launcher
This section includes Jira cards that are not linked to either an Epic or a Feature. These tickets were completed when this image was assembled
Description of problem:
Disconnected IPI OCP 4.11.5 cluster install on baremetal fails when hostname of master nodes does not include "master"
Version-Release number of selected component (if applicable): 4.11.5
How reproducible: Perform disconnected IPI install of OCP 4.11.5 on bare metal with master nodes that do not contain the text "master"
Steps to Reproduce:
Perform disconnected IPI install of OCP 4.11.5 on bare metal with master nodes that do not contain the text "master"
Actual results: master nodes do not come up.
Expected results: master nodes should come up despite that the text "master" is not in their hostname.
Additional info:
My customer reinstalled a new cluster using the fix here, but they have the exact same issue: the metal3 pod has an empty PROVISIONING_MACS value. Can we work with them to understand why the new code fix https://github.com/openshift/cluster-baremetal-operator/commit/76bd6bc461b30a6a450f85a42e492a0933178aee is not working?
cat metal3-static-ip-set/metal3-static-ip-set/logs/current.log
2022-09-27T14:19:38.140662564Z + '[' -z 10.17.199.3/27 ']'
2022-09-27T14:19:38.140662564Z + '[' -z '' ']'
2022-09-27T14:19:38.140662564Z + '[' -n '' ']'
2022-09-27T14:19:38.140722345Z ERROR: Could not find suitable interface for "10.17.199.3/27"
2022-09-27T14:19:38.140726312Z + '[' -n '' ']'
2022-09-27T14:19:38.140726312Z + echo 'ERROR: Could not find suitable interface for "10.17.199.3/27"'
2022-09-27T14:19:38.140726312Z + exit 1
cat metal3-b9bf8d595-gv94k.yaml
...
initContainers:
- command:
  - /set-static-ip
  env:
  - name: PROVISIONING_IP
    value: 10.17.199.3/27
  - name: PROVISIONING_INTERFACE
  - name: PROVISIONING_MACS        <------------------------- missing MACS
  image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4f04793bd109ecba2dfe43be93dc990ac5299272482c150bd5f2eee0f80c983b
  imagePullPolicy: IfNotPresent
  name: metal3-static-ip-set
....
omc logs machine-api-controllers-6b9ffd96cd-grh6l -c nodelink-controller -n openshift-machine-api
2022-09-21T16:13:43.600517485Z I0921 16:13:43.600513 1 nodelink_controller.go:408] Finding machine from node "blocp-1-106-m-0.c106-1.sc.evolhse.hydro.qc.ca"
2022-09-21T16:13:43.600521381Z I0921 16:13:43.600517 1 nodelink_controller.go:425] Finding machine from node "blocp-1-106-m-0.c106-1.sc.evolhse.hydro.qc.ca" by ProviderID
2022-09-21T16:13:43.600525225Z W0921 16:13:43.600521 1 nodelink_controller.go:427] Node "blocp-1-106-m-0.c106-1.sc.evolhse.hydro.qc.ca" has no providerID
2022-09-21T16:13:43.600528917Z I0921 16:13:43.600524 1 nodelink_controller.go:448] Finding machine from node "blocp-1-106-m-0.c106-1.sc.evolhse.hydro.qc.ca" by IP
2022-09-21T16:13:43.600532711Z I0921 16:13:43.600529 1 nodelink_controller.go:453] Found internal IP for node "blocp-1-106-m-0.c106-1.sc.evolhse.hydro.qc.ca": "10.17.192.33"
2022-09-21T16:13:43.600551289Z I0921 16:13:43.600544 1 nodelink_controller.go:477] Matching machine not found for node "blocp-1-106-m-0.c106-1.sc.evolhse.hydro.qc.ca" with internal IP "10.17.192.33"
From @dtantsur WIP PR: https://github.com/openshift/cluster-baremetal-operator/pull/299
The customer is waiting for this fix; the previous code change doesn't fix their situation.
Please refer to this slack thread: https://coreos.slack.com/archives/CFP6ST0A3/p1664215102459219
Description of problem:
The samples operator needs to update its imagestreams to use the Jenkins 4.12 release.
Version-Release number of selected component (if applicable):
How reproducible:
Steps to Reproduce:
1. 2. 3.
Actual results:
Expected results:
Additional info:
Description of problem:
ovnkube-trace fails on hypershift deployments:
https://bugzilla.redhat.com/show_bug.cgi?id=2066891#c8
getDatabaseURIs looks for pods with container ovnkube-master, and those don't exist in hypershift.
https://github.com/ovn-org/ovn-kubernetes/blob/6b8acf05cb6043ebdc42d9d36e700390baabea4a/go-controller/cmd/ovnkube-trace/ovnkube-trace.go#L540
~~~
// Returns nbAddress, sbAddress, protocol == "ssl", nil
func getDatabaseURIs(coreclient *corev1client.CoreV1Client, restconfig *rest.Config, ovnNamespace string) (string, string, bool, error) {
    containerName := "ovnkube-master"
    var err error
    found := false
    var podName string

    listOptions := metav1.ListOptions{}
    pods, err := coreclient.Pods(ovnNamespace).List(context.TODO(), listOptions)
    if err != nil {
        // ... (error handling elided in this excerpt)
    }
    for _, pod := range pods.Items {
        for _, container := range pod.Spec.Containers {
            if container.Name == containerName {
                // ... (pod is recorded as found; body elided in this excerpt)
            }
        }
    }
    if !found {
        // ... (error path elided in this excerpt)
    }
~~~
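One possible direction, sketched below purely as an illustration (this is not the actual fix): match against a small list of candidate container names so that hypershift-style control planes are also found. Only "ovnkube-master" comes from the excerpt above; the alternative name is an assumption.
~~~
package ovntrace

import corev1 "k8s.io/api/core/v1"

// candidateContainers lists container names worth matching; the second
// entry is an assumed example for a hypershift-style deployment.
var candidateContainers = []string{"ovnkube-master", "ovnkube-control-plane"}

// findOVNPod returns the first pod that carries one of the candidate
// containers, instead of hard-coding a single container name.
func findOVNPod(pods *corev1.PodList) (string, bool) {
	for _, pod := range pods.Items {
		for _, container := range pod.Spec.Containers {
			for _, name := range candidateContainers {
				if container.Name == name {
					return pod.Name, true
				}
			}
		}
	}
	return "", false
}
~~~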
Version-Release number of selected component (if applicable):
How reproducible:
Steps to Reproduce:
1.
2.
3.
Actual results:
Expected results:
Additional info:
Description of problem:
We need to have admin-ack in 4.12 so that admins can check for deprecated APIs and approve before they move to 4.13. Refer to https://access.redhat.com/articles/6958394 for more information. As planned, we want to add the admin-ack around the 4.13 feature freeze.
Version-Release number of selected component (if applicable):
How reproducible:
Steps to Reproduce:
1. Install a cluster in 4.12.
2. Run an application which uses the deprecated API. See https://access.redhat.com/articles/6958394 for more information.
3. Upgrade to 4.13
Actual results:
The upgrade happens without asking the admin to confirm that the workloads do not use the deprecated APIs.
Expected results:
Upgrade should wait for the admin-ack.
Additional info:
This was the PR for 4.11.z https://github.com/openshift/cluster-version-operator/pull/836
Description of problem:
When using the agent-based installer to zero-touch provision the cluster, if the network bandwidth is low and assisted-service-pod.service or assisted-service.service fails to pull the docker image within the timeout, the create-cluster-and-infraenv, apply-host-config, and start-cluster-installation services will be deactivated due to failed dependencies. The process is blocked and requires manually enabling and starting the services.
Version-Release number of selected component (if applicable):
openshift-install 4.11.0 built from commit 863cd1ea823559116e26de327705ed72ccdede8f release image quay.io/openshift-release-dev/ocp-release@sha256:300bce8246cf880e792e106607925de0a404484637627edf5f517375517d54a4 release architecture amd64
How reproducible:
Install Openshift with agent based installer with local mirror.
Steps to Reproduce:
1. Stop the local registry or limit the network bandwidth so that assisted-service-pod.service or assisted-service.service fails to start within the 90s timeout.
2. Start the local registry or manually pull the image on node0.
3.
Actual results:
When using the agent-based installer to zero-touch provision the cluster, if the network bandwidth is low and assisted-service-pod.service or assisted-service.service fails to pull the docker image within the timeout, the create-cluster-and-infraenv, apply-host-config, and start-cluster-installation services are deactivated due to failed dependencies. The process is blocked and requires manually enabling and starting the services.
Expected results:
Provision start after the assisted-service started.
Additional info:
Given: assisted-service-pod.service requires assisted-service-db.service and assisted-service.service; assisted-service.service has BindsTo=assisted-service-pod.service; create-cluster-and-infraenv.service has Requires=assisted-service.service and PartOf=assisted-service-pod.service; apply-host-config.service has Requires=create-cluster-and-infraenv.service; start-cluster-installation.service has Requires=apply-host-config.service. Requires= "Configures requirement dependencies on other units. If this unit gets activated, the units listed here will be activated as well. If one of the other units gets deactivated or its activation fails, this unit will be deactivated."
When assisted-service-pod.service starts, assisted-service-db.service and assisted-service.service are started as well. Once assisted-service-pod.service fails to start, assisted-service.service also fails to start because of BindsTo=assisted-service-pod.service. The dependency then fails for create-cluster-and-infraenv.service (Requires=assisted-service.service, whose activation failed), so it is deactivated; then for apply-host-config.service (Requires=create-cluster-and-infraenv.service), so it is deactivated; then for start-cluster-installation.service (Requires=apply-host-config.service), so it is deactivated.
When assisted-service-pod.service restarts, assisted-service.service and assisted-service-db.service restart as well, since they are bound to assisted-service-pod.service. However, create-cluster-and-infraenv.service, apply-host-config.service, and start-cluster-installation.service were deactivated and have to be activated manually. Eventually assisted-service starts and hangs waiting to create the infraenv; the provisioning is blocked.
Description of problem:
Install a single-node cluster on AWS, then enable TechPreview; this causes cluster errors. The CMA and the CAPI CMA shouldn't be on the same port.
Version-Release number of selected component (if applicable):
4.11.9
How reproducible:
always
Steps to Reproduce:
1.Launch 4.11.9 single node cluster on AWS liuhuali@Lius-MacBook-Pro huali-test % oc get clusterversion NAME VERSION AVAILABLE PROGRESSING SINCE STATUS version 4.11.9 True False 34m Cluster version is 4.11.9 liuhuali@Lius-MacBook-Pro huali-test % oc get co NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE MESSAGE authentication 4.11.9 True False False 31m baremetal 4.11.9 True False False 49m cloud-controller-manager 4.11.9 True False False 52m cloud-credential 4.11.9 True False False 53m cluster-autoscaler 4.11.9 True False False 48m config-operator 4.11.9 True False False 50m console 4.11.9 True False False 37m csi-snapshot-controller 4.11.9 True False False 49m dns 4.11.9 True False False 48m etcd 4.11.9 True False False 47m image-registry 4.11.9 True False False 43m ingress 4.11.9 True False False 86s insights 4.11.9 True False False 43m kube-apiserver 4.11.9 True False False 43m kube-controller-manager 4.11.9 True False False 47m kube-scheduler 4.11.9 True False False 44m kube-storage-version-migrator 4.11.9 True False False 50m machine-api 4.11.9 True False False 44m machine-approver 4.11.9 True False False 49m machine-config 4.11.9 True False False 49m marketplace 4.11.9 True False False 48m monitoring 4.11.9 True False False 56s network 4.11.9 True False False 52m node-tuning 4.11.9 True False False 49m openshift-apiserver 4.11.9 True False False 72s openshift-controller-manager 4.11.9 True False False 39m openshift-samples 4.11.9 True False False 43m operator-lifecycle-manager 4.11.9 True False False 49m operator-lifecycle-manager-catalog 4.11.9 True False False 49m operator-lifecycle-manager-packageserver 4.11.9 True False False 104s service-ca 4.11.9 True False False 50m storage 4.11.9 True False False 49m liuhuali@Lius-MacBook-Pro huali-test % oc get node NAME STATUS ROLES AGE VERSION ip-10-0-137-222.us-east-2.compute.internal Ready master,worker 53m v1.24.0+dc5a2fd 2.Enable TechPreview spec: featureSet: TechPreviewNoUpgrade liuhuali@Lius-MacBook-Pro huali-test % oc edit featuregate featuregate.config.openshift.io/cluster edited 3.Check the cluster liuhuali@Lius-MacBook-Pro huali-test % oc get pod -n openshift-cloud-controller-manager NAME READY STATUS RESTARTS AGE aws-cloud-controller-manager-5888c85fc6-28tgt 1/1 Running 12 (10m ago) 55m liuhuali@Lius-MacBook-Pro huali-test % oc get clusterversion NAME VERSION AVAILABLE PROGRESSING SINCE STATUS version 4.11.9 True False 111m Error while reconciling 4.11.9: the workload openshift-cluster-machine-approver/machine-approver-capi has not yet successfully rolled out liuhuali@Lius-MacBook-Pro huali-test % oc get co NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE MESSAGE authentication 4.11.9 False False False 9m44s OAuthServerRouteEndpointAccessibleControllerAvailable: Get "https://oauth-openshift.apps.huliu-aws411arn2.qe.devcluster.openshift.com/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)... baremetal 4.11.9 True False False 128m cloud-controller-manager 4.11.9 True False False 131m cloud-credential 4.11.9 True False False 133m cluster-api 4.11.9 True False False 41m cluster-autoscaler 4.11.9 True False False 128m config-operator 4.11.9 True False False 129m console 4.11.9 False True False 10m DeploymentAvailable: 0 replicas available for console deployment... 
csi-snapshot-controller 4.11.9 True False False 4m52s dns 4.11.9 True False False 128m etcd 4.11.9 True False False 127m image-registry 4.11.9 True False False 123m ingress 4.11.9 True False False 3m15s insights 4.11.9 True False False 122m kube-apiserver 4.11.9 True False False 123m kube-controller-manager 4.11.9 True False False 126m kube-scheduler 4.11.9 True False False 124m kube-storage-version-migrator 4.11.9 True False False 129m machine-api 4.11.9 True False False 124m machine-approver 4.11.9 True False False 128m machine-config 4.11.9 True False False 129m marketplace 4.11.9 True False False 128m monitoring 4.11.9 True False False 5m1s network 4.11.9 True False False 131m node-tuning 4.11.9 True False False 128m openshift-apiserver 4.11.9 True False False 23s openshift-controller-manager 4.11.9 True False False 118m openshift-samples 4.11.9 True False False 122m operator-lifecycle-manager 4.11.9 True False False 128m operator-lifecycle-manager-catalog 4.11.9 True False False 128m operator-lifecycle-manager-packageserver 4.11.9 True False False 2m43s service-ca 4.11.9 True False False 129m storage 4.11.9 True False False 69m liuhuali@Lius-MacBook-Pro huali-test %
Actual results:
The cluster is broken; CMA is complaining with message: '0/1 nodes are available: 1 node(s) didn't have free ports for the requested pod ports. preemption: 0/1 nodes are available: 1 node(s) didn't have free ports for the requested pod ports.'
Expected results:
Cluster should be healthy
Additional info:
Talked with dev here: https://coreos.slack.com/archives/GE2HQ9QP4/p1666178083034159?thread_ts=1666176493.224399&cid=GE2HQ9QP4
Must-gather: https://drive.google.com/file/d/1Q7Ddnhbg3Cq4ptBA2ycJnGKK01As1JcF/view?usp=sharing
If TechPreview is enabled during installation on a single-node cluster, the cluster installation fails.
This is a clone of issue OCPBUGS-6503. The following is the description of the original issue:
—
Description of problem:
While looking into OCPBUGS-5505 I discovered that some 4.10->4.11 upgrade job runs perform an Admin Ack check, while some do not. 4.11 has an ack-4.11-kube-1.25-api-removals-in-4.12 gate, so these upgrade jobs sometimes test that Upgradeable goes false after the upgrade, and sometimes they do not. This is only determined by the polling race condition: the check is executed once per 10 minutes, and we cancel the polling after the upgrade is completed. This means that in some cases we are lucky and manage to run one check before the cancel, and sometimes we are not and only check while still on the base version.
Example job that checked admin acks post-upgrade:
https://prow.ci.openshift.org/view/gs/origin-ci-test/logs/openshift-cluster-version-operator-880-ci-4.11-upgrade-from-stable-4.10-e2e-azure-upgrade/1611444032104304640
$ curl --silent https://gcsweb-ci.apps.ci.l2s4.p1.openshiftapps.com/gcs/origin-ci-test/logs/openshift-cluster-version-operator-880-ci-4.11-upgrade-from-stable-4.10-e2e-azure-upgrade/1611444032104304640/artifacts/e2e-azure-upgrade/openshift-e2e-test/artifacts/e2e.log | grep 'Waiting for Upgradeable to be AdminAckRequired' Jan 6 21:16:40.153: INFO: Waiting for Upgradeable to be AdminAckRequired ...
Example job that did not check admin acks post-upgrade:
https://prow.ci.openshift.org/view/gs/origin-ci-test/logs/openshift-cluster-version-operator-880-ci-4.11-upgrade-from-stable-4.10-e2e-azure-upgrade/1611444033509396480
$ curl --silent https://gcsweb-ci.apps.ci.l2s4.p1.openshiftapps.com/gcs/origin-ci-test/logs/openshift-cluster-version-operator-880-ci-4.11-upgrade-from-stable-4.10-e2e-azure-upgrade/1611444033509396480/artifacts/e2e-azure-upgrade/openshift-e2e-test/artifacts/e2e.log | grep 'Waiting for Upgradeable to be AdminAckRequired'
Version-Release number of selected component (if applicable):
4.11+ openshift-tests
How reproducible:
nondeterministic, wild guess is ~30% of upgrade jobs
Steps to Reproduce:
1. Inspect the E2E test log of an upgrade job and compare the time of the update ("Completed upgrade") with the time of the last check ("Skipping admin ack", "Gate .* not applicable to current version", "Admin Ack verified") done by the admin ack test.
Actual results:
Jan 23 00:47:43.842: INFO: Admin Ack verified Jan 23 00:57:43.836: INFO: Admin Ack verified Jan 23 01:07:43.839: INFO: Admin Ack verified Jan 23 01:17:33.474: INFO: Completed upgrade to registry.build01.ci.openshift.org/ci-op-z09ll8fw/release@sha256:322cf67dc00dd6fa4fdd25c3530e4e75800f6306bd86c4ad1418c92770d58ab8
No check done after the upgrade
Expected results:
Jan 23 00:57:37.894: INFO: Admin Ack verified Jan 23 01:07:37.894: INFO: Admin Ack verified Jan 23 01:16:43.618: INFO: Completed upgrade to registry.build01.ci.openshift.org/ci-op-z8h5x1c5/release@sha256:9c4c732a0b4c2ae887c73b35685e52146518e5d2b06726465d99e6a83ccfee8d Jan 23 01:17:57.937: INFO: Admin Ack verified
One or more checks done after upgrade
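A hedged sketch of one way to remove the race: keep the periodic poll, but always run one final check once the upgrade finishes instead of simply cancelling the poller. The function shape is illustrative, not the openshift-tests code.
~~~
package upgradetest

import (
	"context"
	"time"
)

// verifyAdminAck polls the admin-ack check every interval and, crucially,
// runs one last check when the upgrade completes, so the post-upgrade
// state is always observed regardless of polling timing.
func verifyAdminAck(ctx context.Context, interval time.Duration, check func(context.Context) error, upgradeDone <-chan struct{}) error {
	ticker := time.NewTicker(interval)
	defer ticker.Stop()
	for {
		select {
		case <-ticker.C:
			if err := check(ctx); err != nil {
				return err
			}
		case <-upgradeDone:
			// Guaranteed post-upgrade check before the test returns.
			return check(ctx)
		case <-ctx.Done():
			return ctx.Err()
		}
	}
}
~~~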
Description of problem:
NPE on topology for the ns which just got deleted, see screenshot below
Version-Release number of selected component (if applicable):
How reproducible:
Steps to Reproduce:
1. Login as a regular user
2. Create a ns and delete the ns
3. Visit the deleted ns in topology
Actual results:
console breaks due to NPE
Expected results:
console shouldn't break
Additional info:
Description of problem: After I run the golang script for OCP-53608, I find that the created ingress-controller couldn't be deleted.
Version-Release number of selected component (if applicable):
4.12.0-0.nightly-2022-08-15-150248
How reproducible: Run the script and try to delete the custom ingress-controller
Steps to Reproduce:
1.
% oc get clusterversion
NAME VERSION AVAILABLE PROGRESSING SINCE STATUS
version 4.12.0-0.nightly-2022-08-15-150248 True False 43m Cluster version is 4.12.0-0.nightly-2022-08-15-150248
shudi@Shudis-MacBook-Pro openshift-tests-private %
2. Run the script
shudi@Shudis-MacBook-Pro openshift-tests-private % ./bin/extended-platform-tests run all --dry-run | grep 53608 | ./bin/extended-platform-tests run -f -
...
---------------------------------------------------------
Received interrupt. Running AfterSuite...
^C again to terminate immediately
Aug 18 10:35:51.087: INFO: Running AfterSuite actions on all nodes
Aug 18 10:35:51.088: INFO: Waiting up to 7m0s for all (but 100) nodes to be ready
STEP: Destroying namespace "e2e-test-router-tunning-77627" for this suite.
Aug 18 10:35:54.654: INFO: Running AfterSuite actions on node 1
failed: (15m4s) 2022-08-18T02:35:54 "[sig-network-edge] Network_Edge should Author:shudili-Low-53608-Negative Test of Expose a Configurable Reload Interval in HAproxy [Suite:openshift/conformance/parallel]"
Failing tests:
[sig-network-edge] Network_Edge should Author:shudili-Low-53608-Negative Test of Expose a Configurable Reload Interval in HAproxy [Suite:openshift/conformance/parallel]
error: 1 fail, 0 pass, 0 skip (15m4s)
shudi@Shudis-MacBook-Pro openshift-tests-private %
3. show the ingress-controllers
shudi@Shudis-MacBook-Pro openshift-tests-private % oc -n openshift-ingress-operator get ingresscontroller
NAME AGE
default 113m
ocp53608 42m
shudi@Shudis-MacBook-Pro openshift-tests-private %
4. Try to delete the ingress-controller ocp53608. After the message "ingresscontroller.operator.openshift.io "ocp53608" deleted" appears, the command hangs for a long time until the error message appears.
shudi@Shudis-MacBook-Pro openshift-tests-private % oc -n openshift-ingress-operator delete ingresscontroller ocp53608
ingresscontroller.operator.openshift.io "ocp53608" deleted
error: An error occurred while waiting for the object to be deleted: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeedingUnable to connect to the server: dial tcp 35.194.1.60:6443: i/o timeout
shudi@Shudis-MacBook-Pro openshift-tests-private %
5. After the "ingresscontroller.operator.openshift.io "ocp53608" deleted" message appears, list the ingress-controllers; ocp53608 isn't deleted.
shudi@Shudis-MacBook-Pro golang % oc -n openshift-ingress-operator get ingresscontroller
NAME AGE
default 3h
ocp53608 109m
shudi@Shudis-MacBook-Pro golang %
6. After the error message (error: An error occurred while waiting for the object to be deleted) appears, try to show the ingresscontroller
shudi@Shudis-MacBook-Pro openshift-tests-private % oc -n openshift-ingress-operator get ingresscontroller
E0818 12:21:57.272967 4168 request.go:1085] Unexpected error when reading response body: net/http: request canceled (Client.Timeout or context cancellation while reading body)
E0818 12:21:57.273379 4168 request.go:1085] Unexpected error when reading response body: net/http: request canceled (Client.Timeout or context cancellation while reading body)
E0818 12:21:57.274306 4168 request.go:1085] Unexpected error when reading response body: net/http: request canceled (Client.Timeout or context cancellation while reading body)
Unable to connect to the server: dial tcp 35.194.1.60:6443: i/o timeout
shudi@Shudis-MacBook-Pro openshift-tests-private %
Actual results: ingress-controller ocp53608 is still there after executing the oc delete command
Expected results:
ingress-controller ocp53608 should be deleted soon after executing the oc delete command
Additional info:
This is a clone of issue OCPBUGS-4758. The following is the description of the original issue:
—
Description of problem:
See: https://issues.redhat.com/browse/CPSYN-143

tldr: Based on the previous direction that 4.12 was going to enforce PSA restricted by default, OLM had to make a few changes because the way we run catalog pods (and we have to run them that way because of how the opm binary worked) was incompatible with running restricted.
1) We set openshift-marketplace to enforce restricted (this was our choice, we didn't have to do it, but we did).
2) We updated the opm binary so catalog images using a newer opm binary don't have to run privileged.
3) We added a field to catalogsource that allows you to choose whether to run the pod privileged (legacy mode) or restricted. The default is restricted. We made that the default so that users running their own catalogs in their own NSes (which would be default PSA enforcing) would be able to be successful without needing their NS upgraded to privileged.

Unfortunately this means:
1) Legacy catalog images (i.e. using older opm binaries) won't run on 4.12 by default (the catalogsource needs to be modified to specify legacy mode).
2) Legacy catalog images cannot be run in the openshift-marketplace NS since that NS does not allow privileged pods. This means legacy catalogs can't contribute to the global catalog (since catalogs must be in that NS to be in the global catalog).

Before 4.12 ships we need to:
1) Remove the PSA restricted label on the openshift-marketplace NS.
2) Change the catalogsource securitycontextconfig mode default to use "legacy" as the default, not restricted.

This gives catalog authors another release to update to using a newer opm binary that can run restricted, or get their NSes explicitly labeled as privileged (4.12 will not enforce restricted, so in 4.12 using the legacy mode will continue to work). In 4.13 we will need to revisit what we want the default to be, since at that point catalogs will start breaking if they try to run in legacy mode in most NSes.
Version-Release number of selected component (if applicable):
How reproducible:
Steps to Reproduce:
1. 2. 3.
Actual results:
Expected results:
Additional info:
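A minimal sketch of the defaulting change described above, assuming the securityContextConfig value is defaulted in one place; the function and constants are illustrative rather than the actual OLM code.
~~~
package catalog

// Values of the catalogsource securityContextConfig field.
const (
	SecurityConfigLegacy     = "legacy"
	SecurityConfigRestricted = "restricted"
)

// defaultSecurityContextConfig returns the effective mode for a catalog
// pod: whatever the CatalogSource specifies, or (for 4.12) "legacy" so
// that catalog images built with older opm binaries keep working.
func defaultSecurityContextConfig(configured string) string {
	if configured != "" {
		return configured
	}
	return SecurityConfigLegacy
}
~~~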
Tracker issue for bootimage bump in 4.12. This issue should block issues which need a bootimage bump to fix.
The previous bump was OCPBUGS-1941.
This is a clone of issue OCPBUGS-2891. The following is the description of the original issue:
—
Deprovisioning can fail with the error:
level=warning msg=unrecognized elastic load balancing resource type listener arn=arn:aws:elasticloadbalancing:us-west-2:460538899914:listener/net/a9ac9f1b3019c4d1299e7ededc92b42b/a6f0655da877ddd4/45e05ee69d99bab0
Further background is available in this write up:
https://docs.google.com/document/d/1TsTqIVwHDmjuDjG7v06w_5AAbXSisaDX-UfUI9-GVJo/edit#
Incident channel:
incident-aws-leaking-tags-for-deleted-resources
Name: Routing
Description: Please change the "Routing" component to be a subcomponent "router" of the "Networking" component.
Component: change to "Networking".
Subcomponent: change to "router".
Existing fields (default assignee, default QA contact, default CC email list, etc.) should remain the same as they currently are.
Default Assignee: aos-network-edge-staff@bot.bugzilla.redhat.com
Default QA Contact: hongli@redhat.com
Default CC List: aos-network-edge-staff@bot.bugzilla.redhat.com
Additional Notes:
I filled in "Default CC email list" because the form validation would not permit me to omit it. However, it can be left empty in Bugzilla (it is currently empty).
If possible, we would like this change to be done prior to the Bugzilla-to-Jira migration to avoid the need to make the change after the migration.
In the Known Issues section of the OpenStack-specific Installer docs, there is a point about control plane anti-affinity.
The known issue has several problems:
DVO metrics have some sensitive data that isn't desired to be sent outside the cluster. For that, IO must remove this data from the metrics before saving it to the archive and uploading it to the pipeline.
Remove the name and namespace from DVO metrics before saving it to the IO archive.
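A minimal sketch, assuming the DVO metrics are stored as Prometheus-style text lines; the helper below simply strips the name and namespace labels before a line is written into the archive. The exact label layout and the helper name are assumptions, not the Insights Operator code.
~~~
package insights

import (
	"regexp"
	"strings"
)

// sensitiveLabels matches the name="..." and namespace="..." labels that
// should not leave the cluster.
var sensitiveLabels = regexp.MustCompile(`\b(name|namespace)="[^"]*",?`)

// anonymizeDVOMetric removes the sensitive labels from a single metric
// line before it is saved to the IO archive.
func anonymizeDVOMetric(line string) string {
	out := sensitiveLabels.ReplaceAllString(line, "")
	return strings.ReplaceAll(out, ",}", "}")
}
~~~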
This is a clone of issue OCPBUGS-5068. The following is the description of the original issue:
—
Description of problem:
virtual media provisioning fails when iLO Ironic driver is used
Version-Release number of selected component (if applicable):
4.13
How reproducible:
Always
Steps to Reproduce:
1. attempt virtual media provisioning on a node configured with ilo-virtualmedia:// drivers 2. 3.
Actual results:
Provisioning fails with "An auth plugin is required to determine endpoint URL" error
Expected results:
Provisioning succeeds
Additional info:
Relevant log snippet: 3742 2022-12-19T19:02:05.997747170Z 2022-12-19 19:02:05.995 1 ERROR ironic.drivers.modules.inspector [None req-e58ac1f2-fac6-4d28-be9e-983fa900a19b - - - - - -] Unable to start managed inspection for node e4445d43-3458-4cee-9cbe-6da1de75 78cd: An auth plugin is required to determine endpoint URL: keystoneauth1.exceptions.auth_plugins.MissingAuthPlugin: An auth plugin is required to determine endpoint URL 3743 2022-12-19T19:02:05.997747170Z 2022-12-19 19:02:05.995 1 ERROR ironic.drivers.modules.inspector Traceback (most recent call last): 3744 2022-12-19T19:02:05.997747170Z 2022-12-19 19:02:05.995 1 ERROR ironic.drivers.modules.inspector File "/usr/lib/python3.9/site-packages/ironic/drivers/modules/inspector.py", line 210, in _start_managed_inspection 3745 2022-12-19T19:02:05.997747170Z 2022-12-19 19:02:05.995 1 ERROR ironic.drivers.modules.inspector task.driver.boot.prepare_ramdisk(task, ramdisk_params=params) 3746 2022-12-19T19:02:05.997747170Z 2022-12-19 19:02:05.995 1 ERROR ironic.drivers.modules.inspector File "/usr/lib/python3.9/site-packages/ironic_lib/metrics.py", line 59, in wrapped 3747 2022-12-19T19:02:05.997747170Z 2022-12-19 19:02:05.995 1 ERROR ironic.drivers.modules.inspector result = f(*args, **kwargs) 3748 2022-12-19T19:02:05.997747170Z 2022-12-19 19:02:05.995 1 ERROR ironic.drivers.modules.inspector File "/usr/lib/python3.9/site-packages/ironic/drivers/modules/ilo/boot.py", line 408, in prepare_ramdisk 3749 2022-12-19T19:02:05.997747170Z 2022-12-19 19:02:05.995 1 ERROR ironic.drivers.modules.inspector iso = image_utils.prepare_deploy_iso(task, ramdisk_params, 3750 2022-12-19T19:02:05.997747170Z 2022-12-19 19:02:05.995 1 ERROR ironic.drivers.modules.inspector File "/usr/lib/python3.9/site-packages/ironic/drivers/modules/image_utils.py", line 624, in prepare_deploy_iso 3751 2022-12-19T19:02:05.997747170Z 2022-12-19 19:02:05.995 1 ERROR ironic.drivers.modules.inspector return prepare_iso_image(inject_files=inject_files) 3752 2022-12-19T19:02:05.997747170Z 2022-12-19 19:02:05.995 1 ERROR ironic.drivers.modules.inspector File "/usr/lib/python3.9/site-packages/ironic/drivers/modules/image_utils.py", line 537, in _prepare_iso_image 3753 2022-12-19T19:02:05.997747170Z 2022-12-19 19:02:05.995 1 ERROR ironic.drivers.modules.inspector image_url = img_handler.publish_image( 3754 2022-12-19T19:02:05.997747170Z 2022-12-19 19:02:05.995 1 ERROR ironic.drivers.modules.inspector File "/usr/lib/python3.9/site-packages/ironic/drivers/modules/image_utils.py", line 193, in publish_image 3755 2022-12-19T19:02:05.997747170Z 2022-12-19 19:02:05.995 1 ERROR ironic.drivers.modules.inspector swift_api = swift.SwiftAPI() 3756 2022-12-19T19:02:05.997747170Z 2022-12-19 19:02:05.995 1 ERROR ironic.drivers.modules.inspector File "/usr/lib/python3.9/site-packages/ironic/common/swift.py", line 66, in __init__ 3757 2022-12-19T19:02:05.997747170Z 2022-12-19 19:02:05.995 1 ERROR ironic.drivers.modules.inspector endpoint = keystone.get_endpoint('swift', session=session)
Description of problem:
openshift-install does not detect releaseImage mismatches between cluster-image-set.yaml and registries.conf
Version-Release number of selected component (if applicable):
4.12
How reproducible:
100%
Steps to Reproduce:
1. Create ZTP inputs for image generation where registries.conf does not have any source matching the binary releaseImage (the binary image can be obtained by running "openshift-install version"; you can also set this value in the ZTP manifest cluster-image-set.yaml).
2. Run openshift-install agent create image
Actual results:
Image is generated with no warnings
Expected results:
Image is generated with a warning message - "The ImageContentSources configuration in install-config.yaml should have at least one source field matching the releaseImage value %s", releaseImagePath
Additional info:
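A minimal sketch of the validation being asked for, assuming the release image reference and the configured mirror sources are already available as plain strings; the function name and surrounding wiring are illustrative, not the installer's code.
~~~
package agentimage

import (
	"fmt"
	"strings"
)

// warnIfReleaseImageUnmirrored emits the expected warning when none of the
// ImageContentSources / registries.conf sources covers the release image.
func warnIfReleaseImageUnmirrored(releaseImage string, sources []string) {
	for _, source := range sources {
		if strings.HasPrefix(releaseImage, source) {
			return // at least one mirror source matches the release image
		}
	}
	fmt.Printf("WARNING: The ImageContentSources configuration in install-config.yaml should have at least one source field matching the releaseImage value %s\n", releaseImage)
}
~~~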
This is a clone of issue OCPBUGS-3235. The following is the description of the original issue:
—
Frequently we see the loading state of the topology view, even when there aren't many resources in the project.
Including an example
topology will sometimes hang with the loading indicator showing indefinitely
topology should load consistently without fail
intermittent
4.9
Description of problem:
When a pod runs to a completed state, we typically rely on the update event that will indicate to us that this pod is completed. At that point the pod IP is released and the port configuration is removed in OVN. The subsequent delete event for this pod will be ignored because it should have been cleaned up in the previous update. However, there can be cases where the update event is missed with pod completed. In this case we will only receive a delete with pod completed event, and ignore tearing down the pod. The end result is the pod is not cleaned up in OVN and the IP address remains allocated, reducing the amount of address range available to launch another pod. This can lead to exhausting all IP addresses available for pod allocation on a node.
Version-Release number of selected component (if applicable):
4.10.24
How reproducible:
Not sure how to reproduce this. I'm guessing some lag in kapi updates can cause the completed update event and the final delete event to be combined into a single event.
Steps to Reproduce:
1. 2. 3.
Actual results:
Port still exists in OVN, IP remains allocated for a deleted pod.
Expected results:
IP should be freed, port should be removed from OVN.
Additional info:
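A hedged sketch of the behaviour the description implies is needed: treat a delete event as authoritative and tear down the OVN port and IP even when the pod is already completed. Names are illustrative, not the actual ovn-kubernetes handlers.
~~~
package ovn

import (
	"log"

	corev1 "k8s.io/api/core/v1"
)

// onPodDelete releases the pod's logical switch port and IP allocation.
// A completed pod (Succeeded/Failed phase) is deliberately not skipped:
// if the "completed" update event was missed, this delete event is the
// last chance to free the IP and avoid exhausting the node's range.
func onPodDelete(pod *corev1.Pod, teardown func(*corev1.Pod) error) error {
	if pod.Status.Phase == corev1.PodSucceeded || pod.Status.Phase == corev1.PodFailed {
		log.Printf("pod %s/%s already completed; cleaning up anyway in case the update event was missed", pod.Namespace, pod.Name)
	}
	return teardown(pod)
}
~~~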
Description of problem:
When providing the openshift-install agent create command with installconfig + agentconfig manifests that contain the InstallConfig Proxy section, the Proxy configuration does not get configured cluster-wide.
Version-Release number of selected component (if applicable):
4.12
How reproducible:
100%
Steps to Reproduce:
1. Define InstallConfig with Proxy section
2. openshift-install agent create image
3. Boot ISO
4. Check /etc/assisted/manifests for agent-cluster-install.yaml to contain the Proxy section
Actual results:
Missing proxy
Expected results:
Proxy should be present and match with the InstallConfig
Additional info:
Description of problem:
Currently in 4.11, the MAPI nutanix machine-controller does not provide the machine (VM)'s instance-type, region, zone, etc. labels to the Machine CR, and these columns are empty when viewing the Machine CRs via cli ("oc get Machine") or from the OCP cluster web console.

$ oc -n openshift-machine-api get machine
NAME                                     PHASE     TYPE   REGION   ZONE   AGE
demo-ocp-cluster-g1-77nws-master-0       Running                          133m
demo-ocp-cluster-g1-77nws-master-1       Running                          133m
demo-ocp-cluster-g1-77nws-master-2       Running                          133m
demo-ocp-cluster-g1-77nws-worker-2bsxn   Running                          129m
demo-ocp-cluster-g1-77nws-worker-75hr5   Running                          129m
demo-ocp-cluster-g1-77nws-worker-rg7b9   Running                          129m

We can add something like the below labels to the Machine CR in the mapi-nutanix when reconciling the Machine CRs:

machine.openshift.io/instance-type: AHV
machine.openshift.io/region: <prism-central-address>
machine.openshift.io/zone: <prism-element-name/uuid>
Version-Release number of selected component (if applicable):
How reproducible:
run cli “oc get Machine” or from the OCP cluster web console to view the Machines resource
Steps to Reproduce:
1. 2. 3.
Actual results:
The "Type", "Region", "Zone" columns are empty for each Machine CR.
Expected results:
The "Type", "Region", "Zone" columns showing data for each Machine CR.
Additional info:
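A minimal sketch of attaching the labels listed above during reconciliation; the Machine type comes from the standard openshift/api module, but the helper and the way the Prism values are obtained are assumptions, not the actual mapi-nutanix code.
~~~
package nutanix

import machinev1 "github.com/openshift/api/machine/v1beta1"

const (
	instanceTypeLabel = "machine.openshift.io/instance-type"
	regionLabel       = "machine.openshift.io/region"
	zoneLabel         = "machine.openshift.io/zone"
)

// setMachineTopologyLabels backs the TYPE, REGION and ZONE columns of
// "oc get machine" for a Nutanix-provisioned Machine.
func setMachineTopologyLabels(machine *machinev1.Machine, prismCentralAddress, prismElement string) {
	if machine.Labels == nil {
		machine.Labels = map[string]string{}
	}
	machine.Labels[instanceTypeLabel] = "AHV"
	machine.Labels[regionLabel] = prismCentralAddress // e.g. the Prism Central address
	machine.Labels[zoneLabel] = prismElement          // e.g. the Prism Element name/uuid
}
~~~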
When multi-cluster is enabled, it is possible to get into a situation where you can't cancel login. If you select a cluster you don't know the credentials for, console will remember the last cluster and repeatedly send you to the login page with no way to cancel or go back. If we decide to set the last cluster in the user's preferences, it might be possible to get stuck even if you clear cookies and localStorage.
There are similar issues logging into clusters that are hibernating. See attached video.
cc Scott Berens
Description of problem:
Using OLM descriptor components deletes the operand.
Steps to Reproduce:
Description of problem:
The OCP v4.9.31 cluster didn't have the $search domain in /etc/resolv.conf, which was present in the v4.8.29 OCP cluster. This was observed on all nodes of the v4.9.31 cluster.
~~~
OpenShift 4.9.31
sh-4.4# cat /etc/resolv.conf
OpenShift 4.8.29
ENV: OpenStack IAD2, IPI installation. Connected cluster.
Version-Release number of selected component (if applicable):
OCP v4.9.31
How reproducible:
Always
Steps to Reproduce:
1. Install IPI cluster on OpenStack IAD2 platform having cluster version 4.9.31
2. Debug to any of the node(master/worker)
3. Check and confirm the missing search domain on all nodes of the cluster.
Actual results:
The search domain was missing when checked in `/etc/resolv.conf` file on all nodes of the cluster causing serious issues in the cluster.
Expected results:
The installer should embed the search domain in /etc/resolv.conf file on all nodes of the cluster.
Additional info:
set -eo pipefail
DISPATCHER_FILE="/etc/NetworkManager/dispatcher.d/30-resolv-prepender"
DOMAINS="$(grep -E '\s*DOMAINS=.*iad2.dc.paas.redhat.com' $DISPATCHER_FILE \
  | grep -oE '[a-z0-9]*.dev.iad2.dc.paas.redhat.com' \
  | tr '\n' ' ')"
>&2 echo "IT-PaaS: overwriting search domains in /etc/resolv.conf with: $DOMAINS"
sed -e "/^search/d" \
-e "/Generated by/c# Generated by KNI resolv prepender NM dispatcher script \nsearch $DOMAINS" \
/etc/resolv.conf > /etc/resolv.tmp
mv /etc/resolv.tmp /etc/resolv.conf
~~~
This is a clone of issue OCPBUGS-2479. The following is the description of the original issue:
—
Description of problem:
Right border radius is 0 for the pipeline visualization wrapper in dark mode but looks fine in light mode
Version-Release number of selected component (if applicable):
4.12
How reproducible:
Steps to Reproduce:
1. Switch the theme to dark mode
2. Create a pipeline and navigate to the Pipeline details page
Actual results:
Right border radius is 0, see the screenshots
Expected results:
Right border radius should be same as left border radius.
Additional info:
This is a clone of issue OCPBUGS-7780. The following is the description of the original issue:
—
Description of problem:
4.9 and 4.10 oc calls to oc adm upgrade channel ... for 4.11+ clusters would clear spec.capabilities. Not all that many clusters try to restrict capabilities, but folks will need to bump their channel for at least every other minor (if they're using EUS channels), and while we recommend folks use an oc from the 4.y they're heading towards, we don't have anything in place to enforce that.
Version-Release number of selected component (if applicable):
4.9 and 4.10 oc are exposed vs. the new-in-4.11 spec.capabilities. Newer oc could theoretically be exposed vs. any new ClusterVersion spec capabilities.
How reproducible:
100%
Steps to Reproduce:
1. Install a 4.11+ cluster with None capabilities.
2. Set the channel with a 4.10.51 oc, like oc adm upgrade channel fast-4.11.
3. Check the capabilities with oc get -o json clusterversion version | jq -c .spec.capabilities.
Actual results:
null
Expected results:
{"baselineCapabilitySet":"None"}
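A hedged sketch of the safer client-side behaviour: send a merge patch that touches only spec.channel, so an older client never round-trips (and drops) fields it does not know about, such as spec.capabilities. The generated openshift/client-go config clientset is used here for illustration; the actual oc code differs.
~~~
package ocadm

import (
	"context"
	"fmt"

	configclient "github.com/openshift/client-go/config/clientset/versioned"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
)

// setChannel updates only spec.channel on the ClusterVersion object,
// leaving spec.capabilities and any other unknown fields untouched.
func setChannel(ctx context.Context, client configclient.Interface, channel string) error {
	patch := []byte(fmt.Sprintf(`{"spec":{"channel":%q}}`, channel))
	_, err := client.ConfigV1().ClusterVersions().Patch(ctx, "version",
		types.MergePatchType, patch, metav1.PatchOptions{})
	return err
}
~~~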
Description of problem:
failed even trying to "create install-config" in the epic's scenario
Version-Release number of selected component (if applicable):
$ ./openshift-install version ./openshift-install 4.12.0-0.nightly-2022-09-28-204419 built from commit 9eb0224926982cdd6cae53b872326292133e532d release image registry.ci.openshift.org/ocp/release@sha256:2c8e617830f84ac1ee1bfcc3581010dec4ae5d9cad7a54271574e8d91ef5ecbc release architecture amd64
How reproducible:
Always
Steps to Reproduce:
1. Create vpc network, subnets, and a firewall-rule to allow ssh access to the bastion host
2. Create the bastion host, setting a valid service-account and scopes of "https://www.googleapis.com/auth/cloud-platform"
3. scp the pull secret to the bastion host
4. ssh to the bastion host (subsequent steps are on the bastion host, unless told explicitly)
5. Get "oc", e.g. curl https://mirror2.openshift.com/pub/openshift-v4/clients/ocp/4.9.9/openshift-client-linux-4.9.9.tar.gz -o openshift-client-linux-4.9.9.tar.gz; tar zxvf openshift-client-linux-4.9.9.tar.gz
6. Obtain the installation program
7. Try "create install-config" of platform "gcp"
Actual results:
[cloud-user@jiwei-0930-02-rhel8-mirror ~]$ ./openshift-install create install-config --dir work ? SSH Public Key /home/cloud-user/.ssh/id_rsa.pub ? Platform gcp INFO Credentials loaded from gcloud CLI defaults ? Project ID OpenShift QE Shared VPC (openshift-qe-shared-vpc) ? Region us-west1 ? Base Domain qe-shared-vpc.qe.gcp.devcluster.openshift.com ? Cluster Name jiwei-0930-03 ? Pull Secret [? for help] ****** FATAL failed to fetch Install Config: failed to generate asset "Install Config": credentialsMode: Forbidden: environmental authentication is only supported with Manual credentials mode [cloud-user@jiwei-0930-02-rhel8-mirror ~]$
Expected results:
"create install-config" should succeed.
Additional info:
This is a clone of issue OCPBUGS-2083. The following is the description of the original issue:
—
Description of problem:
Currently we are running VMWare CSI Operator in OpenShift 4.10.33. After running vulnerability scans, the operator was discovered to be running a known weak cipher 3DES. We are attempting to upgrade or modify the operator to customize the ciphers available. We were looking at performing a manual upgrade via Quay.io but can't seem to pull the image and was trying to steer away from performing a custom install from scratch. Looking for any suggestions into mitigated the weak cipher in the kube-rbac-proxy under VMware CSI Operator.
Version-Release number of selected component (if applicable):
How reproducible:
Steps to Reproduce:
1. 2. 3.
Actual results:
Expected results:
Additional info:
This is a clone of issue OCPBUGS-4701. The following is the description of the original issue:
—
Description of problem:
In at least 4.12.0-rc.0, a user with read-only access to ClusterVersion can see a "Control plane is hosted" banner (despite the control plane not being hosted), because hasPermissionsToUpdate is false, so canPerformUpgrade is false.
Version-Release number of selected component (if applicable):
4.12.0-rc.0. Likely more. I haven't traced it out.
How reproducible:
Always.
Steps to Reproduce:
1. Install 4.12.0-rc.0
2. Create a user with cluster-wide read-only permissions. For me, it's via binding to a sudoer ClusterRole. I'm not sure where that ClusterRole comes from, but it's:
$ oc get -o yaml clusterrole sudoer
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  annotations:
    rbac.authorization.kubernetes.io/autoupdate: "true"
  creationTimestamp: "2020-05-21T19:39:09Z"
  name: sudoer
  resourceVersion: "7715"
  uid: 28eb2ffa-dccd-47e8-a2d5-6a95e0e8b1e9
rules:
- apiGroups:
  - ""
  - user.openshift.io
  resourceNames:
  - system:admin
  resources:
  - systemusers
  - users
  verbs:
  - impersonate
- apiGroups:
  - ""
  - user.openshift.io
  resourceNames:
  - system:masters
  resources:
  - groups
  - systemgroups
  verbs:
  - impersonate
3. View /settings/cluster
Actual results:
See the "Control plane is hosted" banner.
Expected results:
Possible cases:
Description of problem:
Git icon shown in the repository details page should be based on the git provider.
Version-Release number of selected component (if applicable):
4.11
How reproducible:
Always
Steps to Reproduce:
1. Create a Repository with gitlab repo url
2. Navigate to the detail page.
Actual results:
github icon is displayed for the gitlab url.
Expected results:
gitlab icon should be displayed for the gitlab url.
Additional info:
use `GitLabIcon` and `BitBucketIcon` from patternfly react-icons.
We are seeing windows to linux networking failures, across all PRs.
This is occurring across all clouds.
Example test failure
It seems this could have been due to the downstream merge; the Windows jobs did not pass before the PR was merged.
Job that failed against the downstream merge, but did not prevent it from merging
This is blocking all PRs against the WMCO repo.
Description of problem: As discovered in https://issues.redhat.com/browse/OCPBUGS-2795, gophercloud fails to list swift containers when the endpoint speaks HTTP2. This means that CIRO will provision a 100GB cinder volume even though swift is available to the tenant.
We're for example seeing this behavior in our CI on vexxhost.
The gophercloud commit that fixed this issue is https://github.com/gophercloud/gophercloud/commit/b7d5b2cdd7ffc13e79d924f61571b0e5f74ec91c, specifically the `|| ct == ""` part on line 75 of openstack/objectstorage/v1/containers/results.go. This commit made it in gophercloud v0.18.0.
CIRO still depends on gophercloud v0.17.0. We should bump gophercloud to fix the bug.
Version-Release number of selected component (if applicable):
All versions. Fix should go to 4.8 - 4.12.
How reproducible:
Always, when swift speaks HTTP2.
Steps to Reproduce:
1. 2. 3.
Actual results:
Expected results:
Additional info:
Description of problem:
To address: "Static Pod is managed but errored" err="managed container xxx does not have Resource.Requests"
Version-Release number of selected component (if applicable):
4.12
How reproducible:
Steps to Reproduce:
1. 2. 3.
Actual results:
Expected results:
Additional info:
Description of problem:
Using a daemonset causes failures during draining: leases are not gracefully released and instead age out, because the pods are killed after potentially losing network access due to the daemonset pods not being terminated. As pointed out in https://github.com/openshift/origin/pull/27394#discussion_r964002900, this should be fixed when moving to a deployment and is also tracked in https://issues.redhat.com/browse/BUILD-495.
Version-Release number of selected component (if applicable):
How reproducible:
100
Steps to Reproduce:
1. 2. 3.
Actual results:
Expected results:
Additional info:
In order to start 4.12 development, we need to merge the agent-installer branch. We need to create a PR and engage the Installer team on getting it approved
This is a clone of issue OCPBUGS-5542. The following is the description of the original issue:
—
Description of problem:
The project list orders projects by name and is smart enough to keep a "numerical order" like:
The more prominent project dropdown is not so smart and shows just a simple "ascii ordered" list:
Version-Release number of selected component (if applicable):
4.8-4.13 (master)
How reproducible:
Always
Steps to Reproduce:
1. Create some new projects called test-1, test-11, test-2
2. Check the project list page (in admin perspective)
3. Check the project dropdown (in dev perspective)
Actual results:
Order is
Expected results:
Order should be
Additional info:
none
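An illustration of the numeric-aware comparison the list page already applies and the dropdown lacks, sketched in Go for brevity (the console itself is TypeScript): digit runs are compared by value, everything else byte by byte.
~~~
package console

import "strconv"

// naturalLess orders test-1 < test-2 < test-11 instead of the plain ASCII
// order test-1 < test-11 < test-2.
func naturalLess(a, b string) bool {
	for len(a) > 0 && len(b) > 0 {
		if isDigit(a[0]) && isDigit(b[0]) {
			na, restA := leadingNumber(a)
			nb, restB := leadingNumber(b)
			if na != nb {
				return na < nb
			}
			a, b = restA, restB
			continue
		}
		if a[0] != b[0] {
			return a[0] < b[0]
		}
		a, b = a[1:], b[1:]
	}
	return len(a) < len(b)
}

func isDigit(c byte) bool { return c >= '0' && c <= '9' }

// leadingNumber splits off the numeric prefix of s and returns its value
// together with the remainder of the string.
func leadingNumber(s string) (int, string) {
	i := 0
	for i < len(s) && isDigit(s[i]) {
		i++
	}
	n, _ := strconv.Atoi(s[:i])
	return n, s[i:]
}
~~~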
There is a capacity limit on egress IPs for each cloud provider; for example, on GCP the limit is 10.
If the number of egress IPs added to a hostsubnet exceeds the capacity limit, it is expected that a message is emitted to the event log, which can be seen through "oc get event".
On GCP with the SDN plugin, I configured egressCIDRs on one worker node and configured 12 netnamespaces, each with 1 egress IP, so the total number of egress IPs for the hostsubnet exceeded its capacity limit of 10. No event log was seen to indicate that the number of egress IPs for the hostsubnet has exceeded the limit.
$ oc get clusterversion
NAME VERSION AVAILABLE PROGRESSING SINCE STATUS
version 4.11.0-0.nightly-2022-08-02-014045 True False 160m Cluster version is 4.11.0-0.nightly-2022-08-02-014045
See attachment for more details.
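A minimal sketch of emitting the requested event, assuming access to a client-go EventRecorder and an object (for example the HostSubnet) to attach the event to; the reason string and function name are assumptions.
~~~
package egressip

import (
	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/runtime"
	"k8s.io/client-go/tools/record"
)

// warnEgressIPCapacityExceeded records a warning event once the number of
// egress IPs assigned to a node exceeds the cloud provider capacity.
func warnEgressIPCapacityExceeded(recorder record.EventRecorder, obj runtime.Object, node string, assigned, capacity int) {
	if assigned <= capacity {
		return
	}
	recorder.Eventf(obj, corev1.EventTypeWarning, "EgressIPCapacityExceeded",
		"node %s has %d egress IPs assigned, exceeding the cloud provider capacity of %d", node, assigned, capacity)
}
~~~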
Description of problem:
When solving flakiness of a test in IO tests, we found that there are some issues in the cluster_version_matches condition for the conditional gatherer. Firstly, the character limit should be increased, as 32 characters does not cover every possible release version; some exceed that limit. Furthermore, there is an error in the schema:
There is no "name" field; it should be "version".
How reproducible:
Sometimes
Steps to Reproduce:
1. Spin a cluster from a PR 2. If version exceeds 32 characters, we get in the pod logs: 'Could not get version from string: "<"'
Actual results:
'Could not get version from string: "<"'
Expected results:
Metadata should contain an "invalid range" error.
Additional info:
However, since there's the possibility for versions to exceed 32 characters, we shouldn't expect an error in this situation. Therefore, there might be more than one issue.
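A minimal sketch of version matching without an artificial length cap, using the Masterminds semver library as an assumed dependency; the actual conditional-gatherer schema and helpers differ.
~~~
package gatherer

import (
	"fmt"

	"github.com/Masterminds/semver/v3"
)

// clusterVersionMatches checks a cluster version against a semver range,
// without imposing a fixed character limit on the version string.
func clusterVersionMatches(clusterVersion, versionRange string) (bool, error) {
	v, err := semver.NewVersion(clusterVersion)
	if err != nil {
		return false, fmt.Errorf("could not get version from string %q: %w", clusterVersion, err)
	}
	c, err := semver.NewConstraint(versionRange)
	if err != nil {
		return false, fmt.Errorf("invalid range %q: %w", versionRange, err)
	}
	return c.Check(v), nil
}
~~~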
This is a clone of issue OCPBUGS-3164. The following is the description of the original issue:
—
During the first bootstrap boot we need crio and kubelet on the disk, so we start the release-image-pivot systemd task. However, it's not blocking bootkube, so these two run in parallel.
release-image-pivot restarts the node to apply the new OS image, which may leave bootkube in an inconsistent state. This task should run before bootkube.
The issue found while testing HOSTEDCP-400 and HOSTEDCP-401.
Hypershift operator installed with flags:
--platform-monitoring=operator-only --enable-uwm-telemetry-remote-write=true --metrics-set=telemetry
Service monitors and pod monitors in the control plane:
[jiezhao@cube hypershift]$ oc get servicemonitor -n clusters-jz-test
NAME                                  AGE
catalog-operator                      45m
cluster-version-operator              45m
etcd                                  46m
kube-apiserver                        46m
kube-controller-manager               45m
monitor-multus-admission-controller   43m
monitor-ovn-master-metrics            43m
node-tuning-operator                  45m
olm-operator                          45m
openshift-apiserver                   45m
openshift-controller-manager          45m

[jiezhao@cube hypershift]$ oc get podmonitor -n clusters-jz-test
NAME                              AGE
cluster-image-registry-operator   46m
controlplane-operator             47m
hosted-cluster-config-operator    46m
ignition-server                   47m
In OCP management web console, go to Observe->Targets:
1. The status of service monitor 'monitor-multus-admission-controller' is Down, with error: Scraped failed: server returned HTTP status 401 Unauthorized. It doesn't have the cluster id in its target labels.
2. The target of pod monitor 'cluster-image-registry-operator' is missing (not shown).
Description of problem:
The API Explorer page layout is incorrect, please check the attachment for more details
Version-Release number of selected component (if applicable):
4.12.0-0.nightly-2022-08-15-150248
How reproducible:
Always
Steps to Reproduce:
1. Login OCP, Go to Home -> API Explorer page
2. Check if there is an extra blank line between the dropdown filter and the list
Actual results:
There is an extra blank line between the dropdown filter and the list
Expected results:
Use right patternfly package, remove the extra blank line
Additional info:
104.0.5112.79 (Official Build) (64-bit)
Description of problem:
Some upgrade CI jobs from 4.11.z to 4.12 nightly builds fail because the systemd unit machine-config-daemon-update-rpmostree-via-container fails.
omg get mcp
NAME     CONFIG                                             UPDATED   UPDATING   DEGRADED   MACHINECOUNT   READYMACHINECOUNT   UPDATEDMACHINECOUNT   DEGRADEDMACHINECOUNT   AGE
worker   rendered-worker-6e18de1272fad7a5ca1529941e3ceaed   False     True       True       3              0                   0                     1                      3h53m
master   rendered-master-60f4ff5893c94f53acd9ebb7a6bf53d4   False     True       True       3              0                   0                     1                      3h53m
Check the affected node:
omg get node/ip-10-0-57-74.us-east-2.compute.internal -o yaml|yq -y '.metadata.annotations' cloud.network.openshift.io/egress-ipconfig: '[{"interface":"eni-0f6de21569b5b65c8","ifaddr":{"ipv4":"10.0.48.0/20"},"capacity":{"ipv4":14,"ipv6":15}}]' csi.volume.kubernetes.io/nodeid: '{"ebs.csi.aws.com":"i-01a34f6b5f2cd1e41"}' machine.openshift.io/machine: openshift-machine-api/ci-op-kb95kxx9-2a438-r6z94-master-2 machineconfiguration.openshift.io/controlPlaneTopology: HighlyAvailable machineconfiguration.openshift.io/currentConfig: rendered-master-065664319cfbaee64277097d49a8a5a6 machineconfiguration.openshift.io/desiredConfig: rendered-master-60f4ff5893c94f53acd9ebb7a6bf53d4 machineconfiguration.openshift.io/desiredDrain: drain-rendered-master-60f4ff5893c94f53acd9ebb7a6bf53d4 machineconfiguration.openshift.io/lastAppliedDrain: drain-rendered-master-60f4ff5893c94f53acd9ebb7a6bf53d4 machineconfiguration.openshift.io/reason: 'error running systemd-run --unit machine-config-daemon-update-rpmostree-via-container --collect --wait -- podman run --authfile /var/lib/kubelet/config.json --privileged --pid=host --net=host --rm -v /:/run/host quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0daf5c4a35424410e88dde102022fc3581302bc8a98e09e2e4748502c59b3661 rpm-ostree ex deploy-from-self /run/host: Running as unit: machine-config-daemon-update-rpmostree-via-container.service Finished with result: exit-code Main processes terminated with: code=exited/status=125 Service runtime: 2min 52ms CPU time consumed: 144ms : exit status 125' machineconfiguration.openshift.io/state: Degraded volumes.kubernetes.io/controller-managed-attach-detach: 'true'
Check the MCD log on the affected node:
omg get pod -n openshift-machine-config-operator -o json | jq -r '.items[]|select(.spec.nodeName=="ip-10-0-57-74.us-east-2.compute.internal")|.metadata.name' | grep daemon
machine-config-daemon-znbvf

2022-10-09T22:12:58.797891917Z I1009 22:12:58.797821  179598 update.go:1917] Updating OS to layered image quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0daf5c4a35424410e88dde102022fc3581302bc8a98e09e2e4748502c59b3661
2022-10-09T22:12:58.797891917Z I1009 22:12:58.797846  179598 rpm-ostree.go:447] Running captured: rpm-ostree --version
2022-10-09T22:12:58.815829171Z I1009 22:12:58.815800  179598 update.go:2068] rpm-ostree is not new enough for layering; forcing an update via container
2022-10-09T22:12:58.817577513Z I1009 22:12:58.817555  179598 update.go:2053] Running: systemd-run --unit machine-config-daemon-update-rpmostree-via-container --collect --wait -- podman run --authfile /var/lib/kubelet/config.json --privileged --pid=host --net=host --rm -v /:/run/host quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0daf5c4a35424410e88dde102022fc3581302bc8a98e09e2e4748502c59b3661 rpm-ostree ex deploy-from-self /run/host
...
2022-10-09T22:15:00.831959313Z E1009 22:15:00.831949  179598 writer.go:200] Marking Degraded due to: error running systemd-run --unit machine-config-daemon-update-rpmostree-via-container --collect --wait -- podman run --authfile /var/lib/kubelet/config.json --privileged --pid=host --net=host --rm -v /:/run/host quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0daf5c4a35424410e88dde102022fc3581302bc8a98e09e2e4748502c59b3661 rpm-ostree ex deploy-from-self /run/host: Running as unit: machine-config-daemon-update-rpmostree-via-container.service
2022-10-09T22:15:00.831959313Z Finished with result: exit-code
2022-10-09T22:15:00.831959313Z Main processes terminated with: code=exited/status=125
2022-10-09T22:15:00.831959313Z Service runtime: 2min 52ms
2022-10-09T22:15:00.831959313Z CPU time consumed: 144ms
2022-10-09T22:15:00.831959313Z : exit status 125
Version-Release number of selected component (if applicable):
4.12
Steps to Reproduce:
Upgrade a cluster from 4.11.8 to 4.12.0-0.nightly-2022-10-05-053337.
Actual results:
The upgrade fails because a node is degraded; the rpm-ostree update via container fails.
Expected results:
The upgrade completes successfully.
Additional info:
must-gather: https://gcsweb-qe-private-deck-ci.apps.ci.l2s4.p1.openshiftapps.com/gcs/qe-private-deck/logs/periodic-ci-openshift-openshift-tests-private-release-4.12-nightly-4.12-upgrade-from-stable-4.11-aws-ipi-proxy-p1/1579169944476585984/artifacts/aws-ipi-proxy-p1/gather-must-gather/artifacts/must-gather.tar
Other build logs of failed jobs
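For local triage on an affected node, a minimal sketch that re-runs the failing step by hand and pulls the unit's journal; the node name is the example from this report and the release payload digest is copied from the log above:

# Assumes cluster-admin access; run inside an `oc debug node/...` chroot session.
oc debug node/ip-10-0-57-74.us-east-2.compute.internal
chroot /host

# Inspect why the one-shot unit exited 125 (usually a podman-level error).
journalctl -u machine-config-daemon-update-rpmostree-via-container --no-pager | tail -n 50

# Re-run the same command the MCD issued, verbatim from the log above, to reproduce interactively.
podman run --authfile /var/lib/kubelet/config.json --privileged --pid=host --net=host --rm \
  -v /:/run/host \
  quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0daf5c4a35424410e88dde102022fc3581302bc8a98e09e2e4748502c59b3661 \
  rpm-ostree ex deploy-from-self /run/host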
Description of problem:
In a fully disconnected cluster, the developer catalog takes too long to load.
Version-Release number of selected component (if applicable):
How reproducible:
Always
Steps to Reproduce:
1. Set up a fully disconnected cluster.
2. On the Add page, go to the All services page.
Actual results:
The developer catalog takes too long to load.
Expected results:
The catalog should load in a reasonable amount of time.
Additional info:
A GIF is attached for reference.
Description of problem:
For example, "openshift-install explain installconfig.platform.gcp.publicDNSZone" tells "PublicDNSZone contains the zone ID and project where the Public DNS zone will be created", but in fact it's for specifying an existing zone where the Public DNS zone records will be put in.
Version-Release number of selected component (if applicable):
4.12.0-0.nightly-2022-10-10-015203
How reproducible:
Always
Steps to Reproduce:
1. openshift-install explain installconfig.platform.gcp.publicDNSZone
2. openshift-install explain installconfig.platform.gcp.privateDNSZone
Actual results:
For example, it says "PublicDNSZone contains the zone ID and project where the Public DNS zone will be created."
Expected results:
It should read something like "PublicDNSZone contains the zone ID and project where the Public DNS zone records will be created."
Additional info:
$ openshift-install version
openshift-install 4.12.0-0.nightly-2022-10-10-015203
built from commit 02102a96b3f7c78337b32dcafe2e28be6fb67a0f
release image registry.ci.openshift.org/ocp/release@sha256:00806cf7faaa86981e73b478a72c1b7a838cd08b215f3a9ab9b278ae94d9a794
release architecture amd64
$
$ openshift-install explain installconfig.platform.gcp.publicDNSZone
KIND:     InstallConfig
VERSION:  v1

RESOURCE: <object>
  PublicDNSZone Technology Preview. PublicDNSZone contains the zone ID and project where the Public DNS zone will be created.

FIELDS:
    id <string>
      ID Technology Preview. ID or name of the zone.

    project <string>
      ProjectID Technology Preview When the ProjectID is provided, the zone will be created in this project. When the ProjectID is empty, the DNS zone with this ID will be created and managed in the Service Project (GCP.ProjectID).
$
$ openshift-install explain installconfig.platform.gcp.privateDNSZone
KIND:     InstallConfig
VERSION:  v1

RESOURCE: <object>
  PrivateDNSZone Technology Preview. PrivateDNSZone contains the zone ID and project where the Private DNS zone will be created.

FIELDS:
    id <string>
      ID Technology Preview. ID or name of the zone.

    project <string>
      ProjectID Technology Preview When the ProjectID is provided, the zone will be created in this project. When the ProjectID is empty, the DNS zone with this ID will be created and managed in the Service Project (GCP.ProjectID).
$
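For context, a hypothetical install-config fragment showing the intended usage, i.e. pointing the installer at a pre-existing public managed zone that records are added to; zone and project names are invented:

# install-config.yaml fragment (illustrative values only)
platform:
  gcp:
    projectID: my-service-project
    region: us-central1
    publicDNSZone:
      id: existing-public-zone       # assumed pre-existing managed zone; the installer adds records here
      project: my-dns-host-project   # project that hosts the zone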
Description of problem:
After editing a MachineSet on AWS (just changing an annotation), a warning is shown:

[~] $ oc -n openshift-machine-api edit machineset.machine.openshift.io/ci-ln-hlf4lft-76ef8-p7rc4-worker-us-west-1b
W1111 16:06:32.385856   88719 warnings.go:70] incorrect GroupVersionKind for AWSMachineProviderConfig object: machine.openshift.io/v1beta1, Kind=AWSMachineProviderConfig
machineset.machine.openshift.io/ci-ln-hlf4lft-76ef8-p7rc4-worker-us-west-1b edited
Version-Release number of selected component (if applicable):
How reproducible:
Always
Steps to Reproduce:
1. Add an annotation or label to a MachineSet (for example via oc edit) and save.
Actual results:
A warning about an incorrect GroupVersionKind for the AWSMachineProviderConfig object is printed.
Expected results:
No warning is shown.
Additional info:
Description of problem:
OLM has a dependency on openshift/cluster-policy-controller. That project had dependencies pinned to v0.0.0 versions, which, due to a bug in ART, was breaking the OLM image build. To fix this, the dependencies in the cluster-policy-controller project have to be updated to point to actual versions. This was already done in:

* https://github.com/openshift/cluster-policy-controller/pull/103
* https://github.com/openshift/cluster-policy-controller/pull/101

These changes have already landed on the 4.14 and 4.13 branches of cluster-policy-controller. The backport to 4.12 is https://github.com/openshift/cluster-policy-controller/pull/102.
Version-Release number of selected component (if applicable):
How reproducible:
Steps to Reproduce:
1. 2. 3.
Actual results:
Expected results:
Additional info:
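A minimal sketch for spotting the problematic pseudo-versions in a checkout of cluster-policy-controller before and after the backport; both commands are standard Go tooling, and the patterns are just examples of what to look for:

# List any requirements still pinned to a v0.0.0 pseudo-version.
grep -n 'v0\.0\.0' go.mod

# Or resolve the full module graph and filter.
go list -m all | grep ' v0\.0\.0'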
When an install-config is missing VIP values in the baremetal platform section, we attempt to derive defaults for them by doing a DNS lookup on the cluster domain name. If this lookup fails, we set the DNS error message as the default value, resulting in a very confusing validation error:
[platform.baremetal.apiVIPs: Invalid value: []string{"DNS lookup failure: lookup api.test-cluster.test-domain on 10.0.80.11:53: no such host"}: ip <nil> is invalid, platform.baremetal.apiVIPs: Invalid value: "DNS lookup failure: lookup api.test-cluster.test-domain on 10.0.80.11:53: no such host": "DNS lookup failure: lookup api.test-cluster.test-domain on 10.0.80.11:53: no such host" is not a valid IP, platform.baremetal.apiVIPs: Invalid value: "DNS lookup failure: lookup api.test-cluster.test-domain on 10.0.80.11:53: no such host": IP expected to be in one of the machine networks: 192.168.122.0/23]
This has been the case since the inception of baremetal IPI, but it has gotten considerably worse in 4.12 because the VIP fields changed from a single string to a list.
If the user doesn't supply a value and we can't generate a sensible default, we should report that the value is missing, not that a value they never supplied is invalid. For example:
[platform.baremetal.apiVIPs: Required value: must specify at least one VIP for the API, platform.baremetal.apiVIPs: Required value: must specify VIP for API, when VIP for ingress is set]
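Until the validation message is improved, supplying the VIPs explicitly sidesteps the DNS-lookup fallback entirely; a hypothetical install-config fragment, with addresses invented to match the machine network from the example above:

# install-config.yaml fragment (illustrative addresses)
platform:
  baremetal:
    apiVIPs:
      - 192.168.122.10
    ingressVIPs:
      - 192.168.122.11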
Description of problem:
If an IngressController uses ingresscontroller.spec.routeSelector.matchExpressions or ingresscontroller.spec.namespaceSelector.matchExpressions, matching routes are not counted in the new route_metrics_controller_routes_per_shard Prometheus metric. This is because the logic only considers "matchLabels"; it needs to be updated to also handle "matchExpressions".
Version-Release number of selected component (if applicable):
4.12
How reproducible:
100%
Steps to Reproduce:
1. Create an IngressController with matchExpressions:

oc apply -f - <<EOF
apiVersion: operator.openshift.io/v1
kind: IngressController
metadata:
  name: sharded
  namespace: openshift-ingress-operator
spec:
  domain: reproducer.$domain
  routeSelector:
    matchExpressions:
    - key: type
      operator: In
      values:
      - shard
  replicas: 1
  nodePlacement:
    nodeSelector:
      matchLabels:
        node-role.kubernetes.io/worker: ""
EOF

2. Create the route:

oc apply -f - <<EOF
apiVersion: route.openshift.io/v1
kind: Route
metadata:
  name: route-shard
  labels:
    type: shard
spec:
  to:
    kind: Service
    name: router-shard
EOF

3. Check route_metrics_controller_routes_per_shard{name="sharded"} in Prometheus; it is 0.
Actual results:
route_metrics_controller_routes_per_shard{name="sharded"} has 0 routes
Expected results:
route_metrics_controller_routes_per_shard{name="sharded"} should have 1 route
Additional info:
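A minimal sketch for checking the metric value from the CLI instead of the Prometheus UI, assuming cluster-admin access and using the in-cluster Thanos querier route; the query string is the one from the steps above:

HOST=$(oc -n openshift-monitoring get route thanos-querier -o jsonpath='{.spec.host}')
TOKEN=$(oc whoami -t)
# POST form-encoded query to the Prometheus-compatible query API.
curl -sk -H "Authorization: Bearer $TOKEN" \
  "https://$HOST/api/v1/query" \
  --data-urlencode 'query=route_metrics_controller_routes_per_shard{name="sharded"}' | jq .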
Description of problem:
"Failed to open directory, disabling udev device properties" in node-exporter logs
$ for i in $(oc -n openshift-monitoring get pod | grep node-exporter | awk '{print $1}'); do echo $i; oc -n openshift-monitoring logs -c node-exporter $i | grep "Failed to open directory, disabling udev device properties"; echo -e "\n"; done
node-exporter-4279b
ts=2022-10-17T01:16:05.833Z caller=diskstats_linux.go:264 level=error collector=diskstats msg="Failed to open directory, disabling udev device properties" path=/run/udev/data

node-exporter-9tq64
ts=2022-10-17T01:16:04.642Z caller=diskstats_linux.go:264 level=error collector=diskstats msg="Failed to open directory, disabling udev device properties" path=/run/udev/data

node-exporter-dwtwh
ts=2022-10-17T01:16:04.936Z caller=diskstats_linux.go:264 level=error collector=diskstats msg="Failed to open directory, disabling udev device properties" path=/run/udev/data

node-exporter-nrznc
ts=2022-10-17T01:16:05.601Z caller=diskstats_linux.go:264 level=error collector=diskstats msg="Failed to open directory, disabling udev device properties" path=/run/udev/data

node-exporter-q87s4
ts=2022-10-17T01:16:05.228Z caller=diskstats_linux.go:264 level=error collector=diskstats msg="Failed to open directory, disabling udev device properties" path=/run/udev/data

node-exporter-twtxj
ts=2022-10-17T01:16:05.249Z caller=diskstats_linux.go:264 level=error collector=diskstats msg="Failed to open directory, disabling udev device properties" path=/run/udev/data
Debugging on the node shows that /run/udev/data is readable:
# oc debug node/ip-10-0-138-107.us-east-2.compute.internal Temporary namespace openshift-debug-dhvqv is created for debugging node... Starting pod/ip-10-0-138-107us-east-2computeinternal-debug ... To use host binaries, run `chroot /host` Pod IP: 10.0.138.107 If you don't see a command prompt, try pressing enter. sh-4.4# chroot /host sh-4.4# ls -l /run/udev/ total 0 srw-------. 1 root root 0 Oct 17 01:04 control drwxr-xr-x. 2 root root 3780 Oct 17 01:26 data drwxr-xr-x. 40 root root 800 Oct 17 01:04 links drwxr-xr-x. 3 root root 60 Oct 17 01:04 static_node-tags drwxr-xr-x. 5 root root 100 Oct 17 01:04 tags drwxr-xr-x. 2 root root 140 Oct 17 01:04 watch sh-4.4# ls -l /run/udev/data total 304 -rw-r--r--. 1 root root 55 Oct 17 01:04 +acpi:AMZN0000:00 -rw-r--r--. 1 root root 57 Oct 17 01:04 +acpi:LNXCPU:00 -rw-r--r--. 1 root root 57 Oct 17 01:04 +acpi:LNXCPU:01 -rw-r--r--. 1 root root 57 Oct 17 01:04 +acpi:LNXCPU:02 -rw-r--r--. 1 root root 57 Oct 17 01:04 +acpi:LNXCPU:03 -rw-r--r--. 1 root root 57 Oct 17 01:04 +acpi:LNXPWRBN:00 -rw-r--r--. 1 root root 57 Oct 17 01:04 +acpi:LNXSLPBN:00 -rw-r--r--. 1 root root 57 Oct 17 01:04 +acpi:LNXSYBUS:00 -rw-r--r--. 1 root root 57 Oct 17 01:04 +acpi:LNXSYBUS:01 -rw-r--r--. 1 root root 57 Oct 17 01:04 +acpi:LNXSYSTM:00 -rw-r--r--. 1 root root 57 Oct 17 01:04 +acpi:PNP0103:00 -rw-r--r--. 1 root root 57 Oct 17 01:04 +acpi:PNP0303:00 -rw-r--r--. 1 root root 57 Oct 17 01:04 +acpi:PNP0400:00 -rw-r--r--. 1 root root 57 Oct 17 01:04 +acpi:PNP0501:00 -rw-r--r--. 1 root root 57 Oct 17 01:04 +acpi:PNP0A03:00 -rw-r--r--. 1 root root 57 Oct 17 01:04 +acpi:PNP0B00:00 -rw-r--r--. 1 root root 57 Oct 17 01:04 +acpi:PNP0C0F:00 -rw-r--r--. 1 root root 57 Oct 17 01:04 +acpi:PNP0C0F:01 -rw-r--r--. 1 root root 57 Oct 17 01:04 +acpi:PNP0C0F:02 -rw-r--r--. 1 root root 57 Oct 17 01:04 +acpi:PNP0C0F:03 -rw-r--r--. 1 root root 57 Oct 17 01:04 +acpi:PNP0C0F:04 -rw-r--r--. 1 root root 57 Oct 17 01:04 +acpi:PNP0F13:00 -rw-r--r--. 1 root root 142 Oct 17 01:04 +input:input0 -rw-r--r--. 1 root root 142 Oct 17 01:04 +input:input1 -rw-r--r--. 1 root root 218 Oct 17 01:04 +input:input2 -rw-r--r--. 1 root root 198 Oct 17 01:04 +input:input4 -rw-r--r--. 1 root root 143 Oct 17 01:04 +input:input5 -rw-r--r--. 1 root root 60 Oct 17 01:04 +module:configfs -rw-r--r--. 1 root root 66 Oct 17 01:04 +module:fuse -rw-r--r--. 1 root root 188 Oct 17 01:04 +pci:0000:00:00.0 -rw-r--r--. 1 root root 195 Oct 17 01:04 +pci:0000:00:01.0 -rw-r--r--. 1 root root 213 Oct 17 01:04 +pci:0000:00:01.3 -rw-r--r--. 1 root root 207 Oct 17 01:04 +pci:0000:00:03.0 -rw-r--r--. 1 root root 259 Oct 17 01:04 +pci:0000:00:04.0 -rw-r--r--. 1 root root 208 Oct 17 01:04 +pci:0000:00:05.0 -rw-r--r--. 1 root root 55 Oct 17 01:04 +platform:AMZN0000:00 -rw-r--r--. 1 root root 825 Oct 17 01:04 b259:0 -rw-r--r--. 1 root root 1357 Oct 17 01:04 b259:1 -rw-r--r--. 1 root root 1568 Oct 17 01:04 b259:2 -rw-r--r--. 1 root root 1619 Oct 17 01:04 b259:3 -rw-r--r--. 1 root root 1602 Oct 17 01:04 b259:4 -rw-r--r--. 1 root root 0 Oct 17 01:04 c10:144 -rw-r--r--. 1 root root 0 Oct 17 01:04 c10:183 -rw-r--r--. 1 root root 0 Oct 17 01:04 c10:227 -rw-r--r--. 1 root root 0 Oct 17 01:04 c10:228 -rw-r--r--. 1 root root 0 Oct 17 01:04 c10:229 -rw-r--r--. 1 root root 0 Oct 17 01:04 c10:231 -rw-r--r--. 1 root root 0 Oct 17 01:04 c10:235 -rw-r--r--. 1 root root 0 Oct 17 01:04 c10:236 -rw-r--r--. 1 root root 0 Oct 17 01:04 c10:62 -rw-r--r--. 1 root root 0 Oct 17 01:04 c10:63 -rw-r--r--. 1 root root 193 Oct 17 01:04 c13:32 -rw-r--r--. 
1 root root 0 Oct 17 01:04 c13:63 -rw-r--r--. 1 root root 113 Oct 17 01:04 c13:64 -rw-r--r--. 1 root root 113 Oct 17 01:04 c13:65 -rw-r--r--. 1 root root 232 Oct 17 01:04 c13:66 -rw-r--r--. 1 root root 199 Oct 17 01:04 c13:67 -rw-r--r--. 1 root root 143 Oct 17 01:04 c13:68 -rw-r--r--. 1 root root 0 Oct 17 01:04 c162:0 -rw-r--r--. 1 root root 0 Oct 17 01:04 c1:1 -rw-r--r--. 1 root root 0 Oct 17 01:04 c1:11 -rw-r--r--. 1 root root 0 Oct 17 01:04 c1:3 -rw-r--r--. 1 root root 0 Oct 17 01:04 c1:4 -rw-r--r--. 1 root root 0 Oct 17 01:04 c1:5 -rw-r--r--. 1 root root 0 Oct 17 01:04 c1:7 -rw-r--r--. 1 root root 0 Oct 17 01:04 c1:8 -rw-r--r--. 1 root root 0 Oct 17 01:04 c1:9 -rw-r--r--. 1 root root 0 Oct 17 01:04 c202:0 -rw-r--r--. 1 root root 0 Oct 17 01:04 c202:1 -rw-r--r--. 1 root root 0 Oct 17 01:04 c202:2 -rw-r--r--. 1 root root 0 Oct 17 01:04 c202:3 -rw-r--r--. 1 root root 0 Oct 17 01:04 c203:0 -rw-r--r--. 1 root root 0 Oct 17 01:04 c203:1 -rw-r--r--. 1 root root 0 Oct 17 01:04 c203:2 -rw-r--r--. 1 root root 0 Oct 17 01:04 c203:3 -rw-r--r--. 1 root root 0 Oct 17 01:04 c241:0 -rw-r--r--. 1 root root 259 Oct 17 01:04 c242:0 -rw-r--r--. 1 root root 0 Oct 17 01:04 c246:0 -rw-r--r--. 1 root root 23 Oct 17 01:04 c251:0 -rw-r--r--. 1 root root 0 Oct 17 01:04 c4:0 -rw-r--r--. 1 root root 0 Oct 17 01:04 c4:1 -rw-r--r--. 1 root root 0 Oct 17 01:04 c4:10 -rw-r--r--. 1 root root 0 Oct 17 01:04 c4:11 -rw-r--r--. 1 root root 0 Oct 17 01:04 c4:12 -rw-r--r--. 1 root root 0 Oct 17 01:04 c4:13 -rw-r--r--. 1 root root 0 Oct 17 01:04 c4:14 -rw-r--r--. 1 root root 0 Oct 17 01:04 c4:15 -rw-r--r--. 1 root root 0 Oct 17 01:04 c4:16 -rw-r--r--. 1 root root 0 Oct 17 01:04 c4:17 -rw-r--r--. 1 root root 0 Oct 17 01:04 c4:18 -rw-r--r--. 1 root root 0 Oct 17 01:04 c4:19 -rw-r--r--. 1 root root 0 Oct 17 01:04 c4:2 -rw-r--r--. 1 root root 0 Oct 17 01:04 c4:20 -rw-r--r--. 1 root root 0 Oct 17 01:04 c4:21 -rw-r--r--. 1 root root 0 Oct 17 01:04 c4:22 -rw-r--r--. 1 root root 0 Oct 17 01:04 c4:23 -rw-r--r--. 1 root root 0 Oct 17 01:04 c4:24 -rw-r--r--. 1 root root 0 Oct 17 01:04 c4:25 -rw-r--r--. 1 root root 0 Oct 17 01:04 c4:26 -rw-r--r--. 1 root root 0 Oct 17 01:04 c4:27 -rw-r--r--. 1 root root 0 Oct 17 01:04 c4:28 -rw-r--r--. 1 root root 0 Oct 17 01:04 c4:29 -rw-r--r--. 1 root root 0 Oct 17 01:04 c4:3 -rw-r--r--. 1 root root 0 Oct 17 01:04 c4:30 -rw-r--r--. 1 root root 0 Oct 17 01:04 c4:31 -rw-r--r--. 1 root root 0 Oct 17 01:04 c4:32 -rw-r--r--. 1 root root 0 Oct 17 01:04 c4:33 -rw-r--r--. 1 root root 0 Oct 17 01:04 c4:34 -rw-r--r--. 1 root root 0 Oct 17 01:04 c4:35 -rw-r--r--. 1 root root 0 Oct 17 01:04 c4:36 -rw-r--r--. 1 root root 0 Oct 17 01:04 c4:37 -rw-r--r--. 1 root root 0 Oct 17 01:04 c4:38 -rw-r--r--. 1 root root 0 Oct 17 01:04 c4:39 -rw-r--r--. 1 root root 0 Oct 17 01:04 c4:4 -rw-r--r--. 1 root root 0 Oct 17 01:04 c4:40 -rw-r--r--. 1 root root 0 Oct 17 01:04 c4:41 -rw-r--r--. 1 root root 0 Oct 17 01:04 c4:42 -rw-r--r--. 1 root root 0 Oct 17 01:04 c4:43 -rw-r--r--. 1 root root 0 Oct 17 01:04 c4:44 -rw-r--r--. 1 root root 0 Oct 17 01:04 c4:45 -rw-r--r--. 1 root root 0 Oct 17 01:04 c4:46 -rw-r--r--. 1 root root 0 Oct 17 01:04 c4:47 -rw-r--r--. 1 root root 0 Oct 17 01:04 c4:48 -rw-r--r--. 1 root root 0 Oct 17 01:04 c4:49 -rw-r--r--. 1 root root 0 Oct 17 01:04 c4:5 -rw-r--r--. 1 root root 0 Oct 17 01:04 c4:50 -rw-r--r--. 1 root root 0 Oct 17 01:04 c4:51 -rw-r--r--. 1 root root 0 Oct 17 01:04 c4:52 -rw-r--r--. 1 root root 0 Oct 17 01:04 c4:53 -rw-r--r--. 1 root root 0 Oct 17 01:04 c4:54 -rw-r--r--. 
1 root root 0 Oct 17 01:04 c4:55 -rw-r--r--. 1 root root 0 Oct 17 01:04 c4:56 -rw-r--r--. 1 root root 0 Oct 17 01:04 c4:57 -rw-r--r--. 1 root root 0 Oct 17 01:04 c4:58 -rw-r--r--. 1 root root 0 Oct 17 01:04 c4:59 -rw-r--r--. 1 root root 0 Oct 17 01:04 c4:6 -rw-r--r--. 1 root root 0 Oct 17 01:04 c4:60 -rw-r--r--. 1 root root 0 Oct 17 01:04 c4:61 -rw-r--r--. 1 root root 0 Oct 17 01:04 c4:62 -rw-r--r--. 1 root root 0 Oct 17 01:04 c4:63 -rw-r--r--. 1 root root 20 Oct 17 01:04 c4:64 -rw-r--r--. 1 root root 20 Oct 17 01:04 c4:65 -rw-r--r--. 1 root root 20 Oct 17 01:04 c4:66 -rw-r--r--. 1 root root 20 Oct 17 01:04 c4:67 -rw-r--r--. 1 root root 0 Oct 17 01:04 c4:7 -rw-r--r--. 1 root root 0 Oct 17 01:04 c4:8 -rw-r--r--. 1 root root 0 Oct 17 01:04 c4:9 -rw-r--r--. 1 root root 0 Oct 17 01:04 c5:0 -rw-r--r--. 1 root root 0 Oct 17 01:04 c5:1 -rw-r--r--. 1 root root 0 Oct 17 01:04 c5:2 -rw-r--r--. 1 root root 0 Oct 17 01:04 c7:0 -rw-r--r--. 1 root root 0 Oct 17 01:04 c7:1 -rw-r--r--. 1 root root 0 Oct 17 01:04 c7:128 -rw-r--r--. 1 root root 0 Oct 17 01:04 c7:129 -rw-r--r--. 1 root root 0 Oct 17 01:04 c7:130 -rw-r--r--. 1 root root 0 Oct 17 01:04 c7:131 -rw-r--r--. 1 root root 0 Oct 17 01:04 c7:132 -rw-r--r--. 1 root root 0 Oct 17 01:04 c7:133 -rw-r--r--. 1 root root 0 Oct 17 01:04 c7:134 -rw-r--r--. 1 root root 0 Oct 17 01:04 c7:2 -rw-r--r--. 1 root root 0 Oct 17 01:04 c7:3 -rw-r--r--. 1 root root 0 Oct 17 01:04 c7:4 -rw-r--r--. 1 root root 0 Oct 17 01:04 c7:5 -rw-r--r--. 1 root root 0 Oct 17 01:04 c7:6 -rw-r--r--. 1 root root 87 Oct 17 01:04 n1 -rw-r--r--. 1 root root 360 Oct 17 01:06 n10 -rw-r--r--. 1 root root 360 Oct 17 01:06 n11 -rw-r--r--. 1 root root 360 Oct 17 01:06 n13 -rw-r--r--. 1 root root 360 Oct 17 01:07 n14 -rw-r--r--. 1 root root 595 Oct 17 01:04 n2 -rw-r--r--. 1 root root 360 Oct 17 01:09 n25 -rw-r--r--. 1 root root 360 Oct 17 01:10 n29 -rw-r--r--. 1 root root 195 Oct 17 01:04 n3 -rw-r--r--. 1 root root 360 Oct 17 01:10 n30 -rw-r--r--. 1 root root 360 Oct 17 01:11 n31 -rw-r--r--. 1 root root 360 Oct 17 01:14 n35 -rw-r--r--. 1 root root 360 Oct 17 01:14 n37 -rw-r--r--. 1 root root 360 Oct 17 01:14 n39 -rw-r--r--. 1 root root 188 Oct 17 01:04 n4 -rw-r--r--. 1 root root 360 Oct 17 01:15 n41 -rw-r--r--. 1 root root 193 Oct 17 01:04 n5 -rw-r--r--. 1 root root 360 Oct 17 01:18 n50 -rw-r--r--. 1 root root 362 Oct 17 01:26 n54 -rw-r--r--. 1 root root 189 Oct 17 01:04 n6 -rw-r--r--. 1 root root 357 Oct 17 01:05 n7 -rw-r--r--. 1 root root 357 Oct 17 01:05 n8 -rw-r--r--. 1 root root 359 Oct 17 01:05 n9
Version-Release number of selected component (if applicable):
4.12.0-0.nightly-2022-10-15-094115 node-exporter version=1.4.0
How reproducible:
Always
Steps to Reproduce:
1. Check the node-exporter logs.
Actual results:
"Failed to open directory, disabling udev device properties" in node-exporter logs
Expected results:
No such error in the logs.
Additional info:
There is no functional impact on the cluster. Relevant code: https://github.com/prometheus/node_exporter/blob/release-1.4/collector/diskstats_linux.go#L262-L270
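A minimal sketch for confirming whether /run/udev is visible inside the node-exporter container at all, which is my assumption about the root cause; the pod is picked the same way as in the loop above:

POD=$(oc -n openshift-monitoring get pod | grep node-exporter | head -n1 | awk '{print $1}')

# Is /run/udev/data present inside the container?
oc -n openshift-monitoring exec "$POD" -c node-exporter -- ls /run/udev/data | head

# Which host paths does the daemonset actually mount?
oc -n openshift-monitoring get ds node-exporter -o json | jq '.spec.template.spec.volumes[] | select(.hostPath != null) | {name, hostPath}'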
The pipeline run nodes used to show a focus border when they were in focus, but no longer do.
There is no indication of which node has focus.
There should be a focus border indicating the currently focused node.
Always
4.12
Previously:
Currently:
Description of problem:
It seems ART is having trouble building OLM images: https://redhat-internal.slack.com/archives/CB95J6R4N/p1676531421724929

This is already fixed on master:

* https://github.com/openshift/cluster-policy-controller/pull/103
* https://github.com/openshift/cluster-policy-controller/pull/101

This bug tracks the backport.
Version-Release number of selected component (if applicable):
4.12
How reproducible:
Steps to Reproduce:
1. 2. 3.
Actual results:
Expected results:
Additional info:
Tracker bug for bootimage bump in 4.12. This bug should block bugs which need a bootimage bump to fix.
The previous tracker is OCPBUGS-561.
This is a clone of issue OCPBUGS-4656. The following is the description of the original issue:
—
Description of problem:
`/etc/hostname` may exist but be empty. The `vsphere-hostname` service should check that the file is non-empty instead of merely that it exists. OKD's machine-os-content starting from Fedora 37 ships an empty /etc/hostname file, which breaks joining workers in vSphere IPI.
Version-Release number of selected component (if applicable):
How reproducible:
Always
Steps to Reproduce:
1. Install OKD with workers on vSphere.
Actual results:
Workers get their hostname via NetworkManager.
Expected results:
Workers get their hostname via vmtoolsd.
Additional info:
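A minimal sketch of the check this report is asking for, i.e. treating an empty /etc/hostname the same as a missing one; this is illustrative shell, not the actual vsphere-hostname unit:

# -s is true only if the file exists AND is non-empty; -f/-e alone would pass for an empty file.
if [ -s /etc/hostname ]; then
  echo "keeping existing hostname: $(cat /etc/hostname)"
else
  echo "/etc/hostname missing or empty; let the vmtoolsd-based hostname logic run instead"
fi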
4.2 AWS boot images such as ami-01e7fdcb66157b224 include the old ignition.platform.id=ec2 kernel command line parameter. When launched against 4.12.0-rc.3, new machines fail.
coreos-assembler used ignition.platform.id=ec2 but pivoted to =aws here. It's not clear when that change made its way into new AWS boot images; some time after 4.2 and before 4.6.
Afterburn dropped support for legacy command-line options like the ec2 slug in 5.0.0, but it's not clear when that shipped into RHCOS. The release controller points at this RHCOS diff, but that has afterburn-0-5.3.0-1 builds on both sides.
100%, given a sufficiently old AMI and a sufficiently new OpenShift release target.
The new Machine will get to Provisioned but fail to progress to Running. The systemd journal will include "unknown provider 'ec2'" errors from the Afterburn units.
Old boot-image AMIs can successfully update to 4.12.
Alternatively, we pin down the set of exposed boot images sufficiently that users with older clusters can audit for exposure and avoid the issue by updating to more modern boot images (although updating boot images is not trivial; see RFE-3001 and the Ignition spec 2 to 3 transition discussed in kcs#5514051).
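A minimal sketch for auditing whether a given machine was booted from an old image; the kernel parameter name and journal message are the ones quoted in this report:

# On the node (e.g. via `oc debug node/<name>` and `chroot /host`):
grep -o 'ignition\.platform\.id=[a-z0-9]*' /proc/cmdline

# Afterburn failures caused by the legacy slug show up in the journal:
journalctl -b --no-pager | grep "unknown provider 'ec2'" || true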
This is a clone of issue OCPBUGS-7719. The following is the description of the original issue:
—
An update from 4.13.0-ec.2 to 4.13.0-ec.3 stuck on:
$ oc get clusteroperator machine-config
NAME             VERSION       AVAILABLE   PROGRESSING   DEGRADED   SINCE   MESSAGE
machine-config   4.13.0-ec.2   True        True          True       30h     Unable to apply 4.13.0-ec.3: error during syncRequiredMachineConfigPools: [timed out waiting for the condition, error pool worker is not ready, retrying. Status: (pool degraded: true total: 105, ready 105, updated: 105, unavailable: 0)]
The worker MachineConfigPool status included:
- lastTransitionTime: "2023-02-16T14:29:21Z"
  message: 'Failed to render configuration for pool worker: Ignoring MC 99-worker-generated-containerruntime generated by older version 8276d9c1f574481043d3661a1ace1f36cd8c3b62 (my version: c06601510c0917a48912cc2dda095d8414cc5182)'
  type: NodeDegraded
4.13.0-ec.3. The behavior was apparently introduced as part of OCPBUGS-6018, which has been backported, so the following update targets are expected to be vulnerable: 4.10.52+, 4.11.26+, 4.12.2+, and 4.13.0-ec.3.
100%, when updating into a vulnerable release, if you happen to have leaked MachineConfig.
1. 4.12.0-ec.1 dropped cleanUpDuplicatedMC. Run a later release, like 4.13.0-ec.2.
2. Create more than one KubeletConfig or ContainerRuntimeConfig targeting the worker pool (or any pool other than master). The number of clusters who have had redundant configuration objects like this is expected to be small.
3. (Optionally?) delete the extra KubeletConfig and ContainerRuntimeConfig.
4. Update to 4.13.0-ec.3.
Update sticks on the machine-config ClusterOperator, as described above.
Update completes without issues.
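A minimal sketch for finding the leaked generated MachineConfigs described above before updating; the grep pattern and annotation name are what I would expect given the error message, so treat them as assumptions:

# Look for more than one generated containerruntime/kubelet MC per pool.
oc get machineconfig | grep -E 'generated-(containerruntime|kubelet)'

# Compare the controller version each one was generated by against the error in the MCP status.
oc get machineconfig 99-worker-generated-containerruntime \
  -o jsonpath='{.metadata.annotations.machineconfiguration\.openshift\.io/generated-by-controller-version}{"\n"}'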
Description of problem:
According to https://issues.redhat.com/browse/OCPBUGS-705 (thanks to Junyun for sharing the test environment and results for the install part), we need the same fix in vsphere-problem-detector. When using a pre-existing folder and/or resource pool with ReadOnly permission, it currently reports the following privileges as missing:

1. vCenter cluster set to ReadOnly permission:
I0902 10:07:50.324782 1 vsphere_check.go:244] CheckComputeClusterPermissions:jima-permission-q84s8-worker-86gd4 failed: missing privileges for compute cluster workloads: Resource.AssignVMToPool, VApp.AssignResourcePool, VApp.Import, VirtualMachine.Config.AddNewDisk

2. Datacenter set to ReadOnly permission:
I0902 08:09:19.462001 1 vsphere_check.go:225] CheckAccountPermissions failed: missing privileges for datacenter OCP-DC: Resource.AssignVMToPool, VApp.Import, VirtualMachine.Config.AddExistingDisk, VirtualMachine.Config.AddNewDisk, VirtualMachine.Config.AddRemoveDevice, VirtualMachine.Config.AdvancedConfig, VirtualMachine.Config.Annotation, VirtualMachine.Config.CPUCount, VirtualMachine.Config.DiskExtend, VirtualMachine.Config.DiskLease, VirtualMachine.Config.EditDevice, VirtualMachine.Config.Memory, VirtualMachine.Config.RemoveDisk, VirtualMachine.Config.Rename, VirtualMachine.Config.ResetGuestInfo, VirtualMachine.Config.Resource, VirtualMachine.Config.Settings, VirtualMachine.Config.UpgradeVirtualHardware, VirtualMachine.Interact.GuestControl, VirtualMachine.Interact.PowerOff, VirtualMachine.Interact.PowerOn, VirtualMachine.Interact.Reset, VirtualMachine.Inventory.Create, VirtualMachine.Inventory.CreateFromExisting, VirtualMachine.Inventory.Delete, VirtualMachine.Provisioning.Clone, VirtualMachine.Provisioning.DeployTemplate, VirtualMachine.Provisioning.MarkAsTemplate, Folder.Create, Folder.Delete
Version-Release number of selected component (if applicable):
4.12.0-0.nightly-2022-09-02-194931
How reproducible:
Always
Steps to Reproduce:
See Description of problem
Actual results:
The vsphere-problem-detector operator reports missing privileges when using a pre-existing folder and/or resource pool with ReadOnly permission.
Expected results:
The vsphere-problem-detector operator should not report missing privileges in that case.
Additional info:
Description of problem:
When installing 1000+ SNOs via ACM/MCE with ZTP and GitOps, a small percentage of clusters never complete the install because the monitoring operator does not reconcile to Available.
# oc --kubeconfig=/root/hv-vm/sno/manifests/sno01219/kubeconfig get clusterversion
NAME VERSION AVAILABLE PROGRESSING SINCE STATUS
version False True 16h Unable to apply 4.11.0: the cluster operator monitoring has not yet successfully rolled out
# oc --kubeconfig=/root/hv-vm/sno/manifests/sno01219/kubeconfig get co monitoring
NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE MESSAGE
monitoring False True True 15h Rollout of the monitoring stack failed and is degraded. Please investigate the degraded status error.
Version-Release number of selected component (if applicable):
How reproducible:
Additional info:
# oc --kubeconfig=/root/hv-vm/sno/manifests/sno01219/kubeconfig get po -n openshift-monitoring NAME READY STATUS RESTARTS AGE alertmanager-main-0 0/6 ContainerCreating 0 15h cluster-monitoring-operator-54dd78cc74-l5w24 2/2 Running 0 15h kube-state-metrics-b6455c4dc-8hcfn 3/3 Running 0 15h node-exporter-k7899 2/2 Running 0 15h openshift-state-metrics-7984888fbd-cl67v 3/3 Running 0 15h prometheus-adapter-785bf4f975-wgmnh 1/1 Running 0 15h prometheus-k8s-0 0/6 Init:0/1 0 15h prometheus-operator-74d8754ff7-9zrgw 2/2 Running 0 15h prometheus-operator-admission-webhook-6665fb687d-c5jgv 1/1 Running 0 15h thanos-querier-575496c665-jcc8l 6/6 Running 0 15h # oc --kubeconfig=/root/hv-vm/sno/manifests/sno01219/kubeconfig describe po -n openshift-monitoring alertmanager-main-0 Name: alertmanager-main-0 Namespace: openshift-monitoring Priority: 2000000000 Priority Class Name: system-cluster-critical Node: sno01219/fc00:1001::8aa Start Time: Mon, 15 Aug 2022 23:53:39 +0000 Labels: alertmanager=main app.kubernetes.io/component=alert-router app.kubernetes.io/instance=main app.kubernetes.io/managed-by=prometheus-operator app.kubernetes.io/name=alertmanager app.kubernetes.io/part-of=openshift-monitoring app.kubernetes.io/version=0.24.0 controller-revision-hash=alertmanager-main-fcf8dd5fb statefulset.kubernetes.io/pod-name=alertmanager-main-0 Annotations: kubectl.kubernetes.io/default-container: alertmanager openshift.io/scc: nonroot Status: Pending IP: IPs: <none> Controlled By: StatefulSet/alertmanager-main Containers: alertmanager: Container ID: Image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:91308d35c1e56463f55c1aaa519ff4de7335d43b254c21abdb845fc8c72821a1 Image ID: Ports: 9094/TCP, 9094/UDP Host Ports: 0/TCP, 0/UDP Args: --config.file=/etc/alertmanager/config/alertmanager.yaml --storage.path=/alertmanager --data.retention=120h --cluster.listen-address= --web.listen-address=127.0.0.1:9093 --web.external-url=https:/console-openshift-console.apps.sno01219.rdu2.scalelab.redhat.com/monitoring --web.route-prefix=/ --cluster.peer=alertmanager-main-0.alertmanager-operated:9094 --cluster.reconnect-timeout=5m State: Waiting Reason: ContainerCreating Ready: False Restart Count: 0 Requests: cpu: 4m memory: 40Mi Environment: POD_IP: (v1:status.podIP) Mounts: /alertmanager from alertmanager-main-db (rw) /etc/alertmanager/certs from tls-assets (ro) /etc/alertmanager/config from config-volume (rw) /etc/alertmanager/secrets/alertmanager-kube-rbac-proxy from secret-alertmanager-kube-rbac-proxy (ro) /etc/alertmanager/secrets/alertmanager-kube-rbac-proxy-metric from secret-alertmanager-kube-rbac-proxy-metric (ro) /etc/alertmanager/secrets/alertmanager-main-proxy from secret-alertmanager-main-proxy (ro) /etc/alertmanager/secrets/alertmanager-main-tls from secret-alertmanager-main-tls (ro) /etc/pki/ca-trust/extracted/pem/ from alertmanager-trusted-ca-bundle (ro) /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-hl77l (ro) config-reloader: Container ID: Image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:209e20410ec2d3d7a502f568d2b7fe1cd1beadcb36fff2d1e6f59d77be3200e3 Image ID: Port: <none> Host Port: <none> Command: /bin/prometheus-config-reloader Args: --listen-address=localhost:8080 --reload-url=http://localhost:9093/-/reload --watched-dir=/etc/alertmanager/config --watched-dir=/etc/alertmanager/secrets/alertmanager-main-tls --watched-dir=/etc/alertmanager/secrets/alertmanager-main-proxy --watched-dir=/etc/alertmanager/secrets/alertmanager-kube-rbac-proxy 
--watched-dir=/etc/alertmanager/secrets/alertmanager-kube-rbac-proxy-metric State: Waiting Reason: ContainerCreating Ready: False Restart Count: 0 Requests: cpu: 1m memory: 10Mi Environment: POD_NAME: alertmanager-main-0 (v1:metadata.name) SHARD: -1 Mounts: /etc/alertmanager/config from config-volume (ro) /etc/alertmanager/secrets/alertmanager-kube-rbac-proxy from secret-alertmanager-kube-rbac-proxy (ro) /etc/alertmanager/secrets/alertmanager-kube-rbac-proxy-metric from secret-alertmanager-kube-rbac-proxy-metric (ro) /etc/alertmanager/secrets/alertmanager-main-proxy from secret-alertmanager-main-proxy (ro) /etc/alertmanager/secrets/alertmanager-main-tls from secret-alertmanager-main-tls (ro) /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-hl77l (ro) alertmanager-proxy: Container ID: Image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:140f8947593d92e1517e50a201e83bdef8eb965b552a21d3caf346a250d0cf6e Image ID: Port: 9095/TCP Host Port: 0/TCP Args: -provider=openshift -https-address=:9095 -http-address= -email-domain=* -upstream=http://localhost:9093 -openshift-sar=[{"resource": "namespaces", "verb": "get"}, {"resource": "alertmanagers", "resourceAPIGroup": "monitoring.coreos.com", "namespace": "openshift-monitoring", "verb": "patch", "resourceName": "non-existant"}] -openshift-delegate-urls={"/": {"resource": "namespaces", "verb": "get"}, "/": {"resource":"alertmanagers", "group": "monitoring.coreos.com", "namespace": "openshift-monitoring", "verb": "patch", "name": "non-existant"}} -tls-cert=/etc/tls/private/tls.crt -tls-key=/etc/tls/private/tls.key -client-secret-file=/var/run/secrets/kubernetes.io/serviceaccount/token -cookie-secret-file=/etc/proxy/secrets/session_secret -openshift-service-account=alertmanager-main -openshift-ca=/etc/pki/tls/cert.pem -openshift-ca=/var/run/secrets/kubernetes.io/serviceaccount/ca.crt State: Waiting Reason: ContainerCreating Ready: False Restart Count: 0 Requests: cpu: 1m memory: 20Mi Environment: HTTP_PROXY: HTTPS_PROXY: NO_PROXY: Mounts: /etc/pki/ca-trust/extracted/pem/ from alertmanager-trusted-ca-bundle (ro) /etc/proxy/secrets from secret-alertmanager-main-proxy (rw) /etc/tls/private from secret-alertmanager-main-tls (rw) /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-hl77l (ro) kube-rbac-proxy: Container ID: Image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b5e1c69d005727e3245604cfca7a63e4f9bc6e15128c7489e41d5e967305089e Image ID: Port: 9092/TCP Host Port: 0/TCP Args: --secure-listen-address=0.0.0.0:9092 --upstream=http://127.0.0.1:9096 --config-file=/etc/kube-rbac-proxy/config.yaml --tls-cert-file=/etc/tls/private/tls.crt --tls-private-key-file=/etc/tls/private/tls.key --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256 --logtostderr=true --tls-min-version=VersionTLS12 State: Waiting Reason: ContainerCreating Ready: False Restart Count: 0 Requests: cpu: 1m memory: 15Mi Environment: <none> Mounts: /etc/kube-rbac-proxy from secret-alertmanager-kube-rbac-proxy (rw) /etc/tls/private from secret-alertmanager-main-tls (rw) /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-hl77l (ro) kube-rbac-proxy-metric: Container ID: Image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b5e1c69d005727e3245604cfca7a63e4f9bc6e15128c7489e41d5e967305089e Image ID: Port: 9097/TCP Host 
Port: 0/TCP Args: --secure-listen-address=0.0.0.0:9097 --upstream=http://127.0.0.1:9093 --config-file=/etc/kube-rbac-proxy/config.yaml --tls-cert-file=/etc/tls/private/tls.crt --tls-private-key-file=/etc/tls/private/tls.key --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256 --client-ca-file=/etc/tls/client/client-ca.crt --logtostderr=true --allow-paths=/metrics --tls-min-version=VersionTLS12 State: Waiting Reason: ContainerCreating Ready: False Restart Count: 0 Requests: cpu: 1m memory: 15Mi Environment: <none> Mounts: /etc/kube-rbac-proxy from secret-alertmanager-kube-rbac-proxy-metric (ro) /etc/tls/client from metrics-client-ca (ro) /etc/tls/private from secret-alertmanager-main-tls (ro) /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-hl77l (ro) prom-label-proxy: Container ID: Image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2550b2cbdf864515b1edacf43c25eb6b6f179713c1df34e51f6e9bba48d6430a Image ID: Port: <none> Host Port: <none> Args: --insecure-listen-address=127.0.0.1:9096 --upstream=http://127.0.0.1:9093 --label=namespace --error-on-replace State: Waiting Reason: ContainerCreating Ready: False Restart Count: 0 Requests: cpu: 1m memory: 20Mi Environment: <none> Mounts: /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-hl77l (ro) Conditions: Type Status Initialized True Ready False ContainersReady False PodScheduled True Volumes: config-volume: Type: Secret (a volume populated by a Secret) SecretName: alertmanager-main-generated Optional: false tls-assets: Type: Projected (a volume that contains injected data from multiple sources) SecretName: alertmanager-main-tls-assets-0 SecretOptionalName: <nil> secret-alertmanager-main-tls: Type: Secret (a volume populated by a Secret) SecretName: alertmanager-main-tls Optional: false secret-alertmanager-main-proxy: Type: Secret (a volume populated by a Secret) SecretName: alertmanager-main-proxy Optional: false secret-alertmanager-kube-rbac-proxy: Type: Secret (a volume populated by a Secret) SecretName: alertmanager-kube-rbac-proxy Optional: false secret-alertmanager-kube-rbac-proxy-metric: Type: Secret (a volume populated by a Secret) SecretName: alertmanager-kube-rbac-proxy-metric Optional: false alertmanager-main-db: Type: EmptyDir (a temporary directory that shares a pod's lifetime) Medium: SizeLimit: <unset> metrics-client-ca: Type: ConfigMap (a volume populated by a ConfigMap) Name: metrics-client-ca Optional: false alertmanager-trusted-ca-bundle: Type: ConfigMap (a volume populated by a ConfigMap) Name: alertmanager-trusted-ca-bundle-2rsonso43rc5p Optional: true kube-api-access-hl77l: Type: Projected (a volume that contains injected data from multiple sources) TokenExpirationSeconds: 3607 ConfigMapName: kube-root-ca.crt ConfigMapOptional: <nil> DownwardAPI: true ConfigMapName: openshift-service-ca.crt ConfigMapOptional: <nil> QoS Class: Burstable Node-Selectors: kubernetes.io/os=linux Tolerations: node.kubernetes.io/memory-pressure:NoSchedule op=Exists node.kubernetes.io/not-ready:NoExecute op=Exists for 300s node.kubernetes.io/unreachable:NoExecute op=Exists for 300s Events: Type Reason Age From Message ---- ------ ---- ---- ------- Warning FailedCreatePodSandBox 2m25s (x409 over 15h) kubelet (combined from similar events): Failed to create pod sandbox: rpc error: code = Unknown 
desc = failed to create pod network sandbox k8s_alertmanager-main-0_openshift-monitoring_1c367a83-24e3-4249-861a-a107a6beaee2_0(dff5f302f774d060728261b3c86841ebdbd7ba11537ec9f4d90d57be17bdf44b): error adding pod openshift-monitoring_alertmanager-main-0 to CNI network "multus-cni-network": plugin type="multus" name="multus-cni-network" failed (add): [openshift-monitoring/alertmanager-main-0/1c367a83-24e3-4249-861a-a107a6beaee2:ovn-kubernetes]: error adding container to network "ovn-kubernetes": CNI request failed with status 400: '[openshift-monitoring/alertmanager-main-0 dff5f302f774d060728261b3c86841ebdbd7ba11537ec9f4d90d57be17bdf44b] [openshift-monitoring/alertmanager-main-0 dff5f302f774d060728261b3c86841ebdbd7ba11537ec9f4d90d57be17bdf44b] failed to get pod annotation: timed out waiting for annotations: context deadline exceeded oc --kubeconfig=/root/hv-vm/sno/manifests/sno01219/kubeconfig describe po -n openshift-monitoring prometheus-k8s-0 Name: prometheus-k8s-0 Namespace: openshift-monitoring Priority: 2000000000 Priority Class Name: system-cluster-critical Node: sno01219/fc00:1001::8aa Start Time: Mon, 15 Aug 2022 23:53:39 +0000 Labels: app.kubernetes.io/component=prometheus app.kubernetes.io/instance=k8s app.kubernetes.io/managed-by=prometheus-operator app.kubernetes.io/name=prometheus app.kubernetes.io/part-of=openshift-monitoring app.kubernetes.io/version=2.36.2 controller-revision-hash=prometheus-k8s-546b544f8b operator.prometheus.io/name=k8s operator.prometheus.io/shard=0 prometheus=k8s statefulset.kubernetes.io/pod-name=prometheus-k8s-0 Annotations: kubectl.kubernetes.io/default-container: prometheus openshift.io/scc: nonroot Status: Pending IP: IPs: <none> Controlled By: StatefulSet/prometheus-k8s Init Containers: init-config-reloader: Container ID: Image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:209e20410ec2d3d7a502f568d2b7fe1cd1beadcb36fff2d1e6f59d77be3200e3 Image ID: Port: 8080/TCP Host Port: 0/TCP Command: /bin/prometheus-config-reloader Args: --watch-interval=0 --listen-address=:8080 --config-file=/etc/prometheus/config/prometheus.yaml.gz --config-envsubst-file=/etc/prometheus/config_out/prometheus.env.yaml --watched-dir=/etc/prometheus/rules/prometheus-k8s-rulefiles-0 State: Waiting Reason: PodInitializing Ready: False Restart Count: 0 Requests: cpu: 1m memory: 10Mi Environment: POD_NAME: prometheus-k8s-0 (v1:metadata.name) SHARD: 0 Mounts: /etc/prometheus/config from config (rw) /etc/prometheus/config_out from config-out (rw) /etc/prometheus/rules/prometheus-k8s-rulefiles-0 from prometheus-k8s-rulefiles-0 (rw) /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-85zlc (ro) Containers: prometheus: Container ID: Image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c7df53b796e81ba8301ba74d02317226329bd5752fd31c1b44d028e4832f21c3 Image ID: Port: <none> Host Port: <none> Args: --web.console.templates=/etc/prometheus/consoles --web.console.libraries=/etc/prometheus/console_libraries --storage.tsdb.retention.time=15d --config.file=/etc/prometheus/config_out/prometheus.env.yaml --storage.tsdb.path=/prometheus --web.enable-lifecycle --web.external-url=https:/console-openshift-console.apps.sno01219.rdu2.scalelab.redhat.com/monitoring --web.route-prefix=/ --web.listen-address=127.0.0.1:9090 --web.config.file=/etc/prometheus/web_config/web-config.yaml State: Waiting Reason: PodInitializing Ready: False Restart Count: 0 Requests: cpu: 70m memory: 1Gi Liveness: exec [sh -c if [ -x "$(command -v curl)" ]; then exec curl --fail 
http://localhost:9090/-/healthy; elif [ -x "$(command -v wget)" ]; then exec wget -q -O /dev/null http://localhost:9090/-/healthy; else exit 1; fi] delay=0s timeout=3s period=5s #success=1 #failure=6 Readiness: exec [sh -c if [ -x "$(command -v curl)" ]; then exec curl --fail http://localhost:9090/-/ready; elif [ -x "$(command -v wget)" ]; then exec wget -q -O /dev/null http://localhost:9090/-/ready; else exit 1; fi] delay=0s timeout=3s period=5s #success=1 #failure=3 Startup: exec [sh -c if [ -x "$(command -v curl)" ]; then exec curl --fail http://localhost:9090/-/ready; elif [ -x "$(command -v wget)" ]; then exec wget -q -O /dev/null http://localhost:9090/-/ready; else exit 1; fi] delay=0s timeout=3s period=15s #success=1 #failure=60 Environment: <none> Mounts: /etc/pki/ca-trust/extracted/pem/ from prometheus-trusted-ca-bundle (ro) /etc/prometheus/certs from tls-assets (ro) /etc/prometheus/config_out from config-out (ro) /etc/prometheus/configmaps/kubelet-serving-ca-bundle from configmap-kubelet-serving-ca-bundle (ro) /etc/prometheus/configmaps/metrics-client-ca from configmap-metrics-client-ca (ro) /etc/prometheus/configmaps/serving-certs-ca-bundle from configmap-serving-certs-ca-bundle (ro) /etc/prometheus/rules/prometheus-k8s-rulefiles-0 from prometheus-k8s-rulefiles-0 (rw) /etc/prometheus/secrets/kube-etcd-client-certs from secret-kube-etcd-client-certs (ro) /etc/prometheus/secrets/kube-rbac-proxy from secret-kube-rbac-proxy (ro) /etc/prometheus/secrets/metrics-client-certs from secret-metrics-client-certs (ro) /etc/prometheus/secrets/prometheus-k8s-proxy from secret-prometheus-k8s-proxy (ro) /etc/prometheus/secrets/prometheus-k8s-thanos-sidecar-tls from secret-prometheus-k8s-thanos-sidecar-tls (ro) /etc/prometheus/secrets/prometheus-k8s-tls from secret-prometheus-k8s-tls (ro) /etc/prometheus/web_config/web-config.yaml from web-config (ro,path="web-config.yaml") /prometheus from prometheus-k8s-db (rw) /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-85zlc (ro) config-reloader: Container ID: Image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:209e20410ec2d3d7a502f568d2b7fe1cd1beadcb36fff2d1e6f59d77be3200e3 Image ID: Port: <none> Host Port: <none> Command: /bin/prometheus-config-reloader Args: --listen-address=localhost:8080 --reload-url=http://localhost:9090/-/reload --config-file=/etc/prometheus/config/prometheus.yaml.gz --config-envsubst-file=/etc/prometheus/config_out/prometheus.env.yaml --watched-dir=/etc/prometheus/rules/prometheus-k8s-rulefiles-0 State: Waiting Reason: PodInitializing Ready: False Restart Count: 0 Requests: cpu: 1m memory: 10Mi Environment: POD_NAME: prometheus-k8s-0 (v1:metadata.name) SHARD: 0 Mounts: /etc/prometheus/config from config (rw) /etc/prometheus/config_out from config-out (rw) /etc/prometheus/rules/prometheus-k8s-rulefiles-0 from prometheus-k8s-rulefiles-0 (rw) /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-85zlc (ro) thanos-sidecar: Container ID: Image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36fc214537c763b3a3f0a9dc7a1bd4378a80428c31b2629df8786a9b09155e6d Image ID: Ports: 10902/TCP, 10901/TCP Host Ports: 0/TCP, 0/TCP Args: sidecar --prometheus.url=http://localhost:9090/ --tsdb.path=/prometheus --http-address=127.0.0.1:10902 --grpc-server-tls-cert=/etc/tls/grpc/server.crt --grpc-server-tls-key=/etc/tls/grpc/server.key --grpc-server-tls-client-ca=/etc/tls/grpc/ca.crt State: Waiting Reason: PodInitializing Ready: False Restart Count: 0 Requests: cpu: 1m memory: 25Mi Environment: <none> 
Mounts: /etc/tls/grpc from secret-grpc-tls (rw) /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-85zlc (ro) prometheus-proxy: Container ID: Image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:140f8947593d92e1517e50a201e83bdef8eb965b552a21d3caf346a250d0cf6e Image ID: Port: 9091/TCP Host Port: 0/TCP Args: -provider=openshift -https-address=:9091 -http-address= -email-domain=* -upstream=http://localhost:9090 -openshift-service-account=prometheus-k8s -openshift-sar={"resource": "namespaces", "verb": "get"} -openshift-delegate-urls={"/": {"resource": "namespaces", "verb": "get"}} -tls-cert=/etc/tls/private/tls.crt -tls-key=/etc/tls/private/tls.key -client-secret-file=/var/run/secrets/kubernetes.io/serviceaccount/token -cookie-secret-file=/etc/proxy/secrets/session_secret -openshift-ca=/etc/pki/tls/cert.pem -openshift-ca=/var/run/secrets/kubernetes.io/serviceaccount/ca.crt State: Waiting Reason: PodInitializing Ready: False Restart Count: 0 Requests: cpu: 1m memory: 20Mi Environment: HTTP_PROXY: HTTPS_PROXY: NO_PROXY: Mounts: /etc/pki/ca-trust/extracted/pem/ from prometheus-trusted-ca-bundle (ro) /etc/proxy/secrets from secret-prometheus-k8s-proxy (rw) /etc/tls/private from secret-prometheus-k8s-tls (rw) /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-85zlc (ro) kube-rbac-proxy: Container ID: Image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b5e1c69d005727e3245604cfca7a63e4f9bc6e15128c7489e41d5e967305089e Image ID: Port: 9092/TCP Host Port: 0/TCP Args: --secure-listen-address=0.0.0.0:9092 --upstream=http://127.0.0.1:9090 --allow-paths=/metrics --config-file=/etc/kube-rbac-proxy/config.yaml --tls-cert-file=/etc/tls/private/tls.crt --tls-private-key-file=/etc/tls/private/tls.key --client-ca-file=/etc/tls/client/client-ca.crt --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256 --logtostderr=true --tls-min-version=VersionTLS12 State: Waiting Reason: PodInitializing Ready: False Restart Count: 0 Requests: cpu: 1m memory: 15Mi Environment: <none> Mounts: /etc/kube-rbac-proxy from secret-kube-rbac-proxy (rw) /etc/tls/client from configmap-metrics-client-ca (ro) /etc/tls/private from secret-prometheus-k8s-tls (rw) /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-85zlc (ro) kube-rbac-proxy-thanos: Container ID: Image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b5e1c69d005727e3245604cfca7a63e4f9bc6e15128c7489e41d5e967305089e Image ID: Port: 10902/TCP Host Port: 0/TCP Args: --secure-listen-address=[$(POD_IP)]:10902 --upstream=http://127.0.0.1:10902 --tls-cert-file=/etc/tls/private/tls.crt --tls-private-key-file=/etc/tls/private/tls.key --client-ca-file=/etc/tls/client/client-ca.crt --config-file=/etc/kube-rbac-proxy/config.yaml --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256 --allow-paths=/metrics --logtostderr=true --tls-min-version=VersionTLS12 --client-ca-file=/etc/tls/client/client-ca.crt State: Waiting Reason: PodInitializing Ready: False Restart Count: 0 Requests: cpu: 1m memory: 10Mi Environment: POD_IP: (v1:status.podIP) Mounts: /etc/kube-rbac-proxy from secret-kube-rbac-proxy (rw) 
/etc/tls/client from metrics-client-ca (ro) /etc/tls/private from secret-prometheus-k8s-thanos-sidecar-tls (rw) /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-85zlc (ro) Conditions: Type Status Initialized False Ready False ContainersReady False PodScheduled True Volumes: config: Type: Secret (a volume populated by a Secret) SecretName: prometheus-k8s Optional: false tls-assets: Type: Projected (a volume that contains injected data from multiple sources) SecretName: prometheus-k8s-tls-assets-0 SecretOptionalName: <nil> config-out: Type: EmptyDir (a temporary directory that shares a pod's lifetime) Medium: SizeLimit: <unset> prometheus-k8s-rulefiles-0: Type: ConfigMap (a volume populated by a ConfigMap) Name: prometheus-k8s-rulefiles-0 Optional: false web-config: Type: Secret (a volume populated by a Secret) SecretName: prometheus-k8s-web-config Optional: false secret-kube-etcd-client-certs: Type: Secret (a volume populated by a Secret) SecretName: kube-etcd-client-certs Optional: false secret-prometheus-k8s-tls: Type: Secret (a volume populated by a Secret) SecretName: prometheus-k8s-tls Optional: false secret-prometheus-k8s-proxy: Type: Secret (a volume populated by a Secret) SecretName: prometheus-k8s-proxy Optional: false secret-prometheus-k8s-thanos-sidecar-tls: Type: Secret (a volume populated by a Secret) SecretName: prometheus-k8s-thanos-sidecar-tls Optional: false secret-kube-rbac-proxy: Type: Secret (a volume populated by a Secret) SecretName: kube-rbac-proxy Optional: false secret-metrics-client-certs: Type: Secret (a volume populated by a Secret) SecretName: metrics-client-certs Optional: false configmap-serving-certs-ca-bundle: Type: ConfigMap (a volume populated by a ConfigMap) Name: serving-certs-ca-bundle Optional: false configmap-kubelet-serving-ca-bundle: Type: ConfigMap (a volume populated by a ConfigMap) Name: kubelet-serving-ca-bundle Optional: false configmap-metrics-client-ca: Type: ConfigMap (a volume populated by a ConfigMap) Name: metrics-client-ca Optional: false prometheus-k8s-db: Type: EmptyDir (a temporary directory that shares a pod's lifetime) Medium: SizeLimit: <unset> metrics-client-ca: Type: ConfigMap (a volume populated by a ConfigMap) Name: metrics-client-ca Optional: false secret-grpc-tls: Type: Secret (a volume populated by a Secret) SecretName: prometheus-k8s-grpc-tls-crdkohb1gb92n Optional: false prometheus-trusted-ca-bundle: Type: ConfigMap (a volume populated by a ConfigMap) Name: prometheus-trusted-ca-bundle-2rsonso43rc5p Optional: true kube-api-access-85zlc: Type: Projected (a volume that contains injected data from multiple sources) TokenExpirationSeconds: 3607 ConfigMapName: kube-root-ca.crt ConfigMapOptional: <nil> DownwardAPI: true ConfigMapName: openshift-service-ca.crt ConfigMapOptional: <nil> QoS Class: Burstable Node-Selectors: kubernetes.io/os=linux Tolerations: node.kubernetes.io/memory-pressure:NoSchedule op=Exists node.kubernetes.io/not-ready:NoExecute op=Exists for 300s node.kubernetes.io/unreachable:NoExecute op=Exists for 300s Events: Type Reason Age From Message ---- ------ ---- ---- ------- Warning FailedCreatePodSandBox 4m19s (x409 over 15h) kubelet (combined from similar events): Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_prometheus-k8s-0_openshift-monitoring_debda4d2-6914-4b36-92e0-78f68d539ab3_0(86af91d4e64ab0fbad95352b029762e9856ff24005445b458bccb22e0ee9b655): error adding pod openshift-monitoring_prometheus-k8s-0 to CNI network 
"multus-cni-network": plugin type="multus" name="multus-cni-network" failed (add): [openshift-monitoring/prometheus-k8s-0/debda4d2-6914-4b36-92e0-78f68d539ab3:ovn-kubernetes]: error adding container to network "ovn-kubernetes": CNI request failed with status 400: '[openshift-monitoring/prometheus-k8s-0 86af91d4e64ab0fbad95352b029762e9856ff24005445b458bccb22e0ee9b655] [openshift-monitoring/prometheus-k8s-0 86af91d4e64ab0fbad95352b029762e9856ff24005445b458bccb22e0ee9b655] failed to get pod annotation: timed out waiting for annotations: context deadline exceeded
Both pods in the error state appear to be blocked on the same issue: "failed to get pod annotation: timed out waiting for annotations: context deadline exceeded".
A related slack thread: here
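A minimal sketch for checking whether ovn-kubernetes ever wrote the pod-network annotation the CNI plugin is waiting for; the annotation key is an assumption based on OVN-Kubernetes conventions, and the kubeconfig path is the one used throughout this report:

oc --kubeconfig=/root/hv-vm/sno/manifests/sno01219/kubeconfig \
  -n openshift-monitoring get pod alertmanager-main-0 \
  -o jsonpath='{.metadata.annotations.k8s\.ovn\.org/pod-networks}{"\n"}'

# And whether ovnkube on the node is healthy at all:
oc --kubeconfig=/root/hv-vm/sno/manifests/sno01219/kubeconfig \
  -n openshift-ovn-kubernetes get pods -o wide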
The error:
which: no kustomize in (/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/local/go/bin:/go/bin)
+ curl -L --retry 5 https://github.com/kubernetes-sigs/kustomize/releases/download/kustomize%2Fv4.3.0/kustomize_v4.3.0_linux_amd64.tar.gz
+ tar -zx -C /usr/bin/
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
  0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0
  0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0
100  1523    0  1523    0     0  27196      0 --:--:-- --:--:-- --:--:-- 26719
Warning: Problem : HTTP error. Will retry in 300 seconds. 5 retries left.
  0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0
  0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0

gzip: stdin: not in gzip format
tar: Child died with signal 13
tar: Error is not recoverable: exiting now
A related job search: https://search.ci.openshift.org/?search=gzip%3A+stdin%3A+not+in+gzip+format&maxAge=336h&context=1&type=junit&name=assisted&excludeName=&maxMatches=5&maxBytes=20971520&groupBy=job
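The underlying failure is that curl pipes an HTTP error body straight into tar; a minimal hardened sketch that fails fast and validates the archive before extracting, with the URL copied from the log above:

set -euo pipefail
URL='https://github.com/kubernetes-sigs/kustomize/releases/download/kustomize%2Fv4.3.0/kustomize_v4.3.0_linux_amd64.tar.gz'

# --fail makes curl exit non-zero on HTTP errors instead of feeding an HTML error page into tar.
curl -fL --retry 5 --retry-delay 30 -o /tmp/kustomize.tar.gz "$URL"

# Sanity-check the payload before extraction.
tar -tzf /tmp/kustomize.tar.gz >/dev/null
tar -zxf /tmp/kustomize.tar.gz -C /usr/bin/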
Description of problem:
This is a disconnected cluster on AWS that uses STS. There is an issue configuring egress IPs: the cloud-network-config-controller pod is trying to connect to the global STS endpoint "https://sts.amazonaws.com/" when it should be using the regional service endpoints configured for the cluster (see the serviceEndpoints in the Infrastructure object below, e.g. "https://sts.ap-southeast-1.amazonaws.com").
Version-Release number of selected component (if applicable):
How reproducible:
Always
Steps to Reproduce:
1. Create a disconnected OCP cluster on AWS.
2. Configure an egress IP for a namespace (see the outputs below).
$ oc get netnamespace | grep egress
egress-ip-test   2689387   ["172.16.1.24"]
$ oc get hostsubnet
NAME                                              HOST                                              HOST IP        SUBNET          EGRESS CIDRS   EGRESS IPS
ip-172-16-1-151.ap-southeast-1.compute.internal   ip-172-16-1-151.ap-southeast-1.compute.internal   172.16.1.151   10.130.0.0/23
ip-172-16-1-53.ap-southeast-1.compute.internal    ip-172-16-1-53.ap-southeast-1.compute.internal    172.16.1.53    10.131.0.0/23                  ["172.16.1.24"]
ip-172-16-2-15.ap-southeast-1.compute.internal    ip-172-16-2-15.ap-southeast-1.compute.internal    172.16.2.15    10.128.0.0/23
ip-172-16-2-77.ap-southeast-1.compute.internal    ip-172-16-2-77.ap-southeast-1.compute.internal    172.16.2.77    10.128.2.0/23
ip-172-16-3-111.ap-southeast-1.compute.internal   ip-172-16-3-111.ap-southeast-1.compute.internal   172.16.3.111   10.129.0.0/23
ip-172-16-3-79.ap-southeast-1.compute.internal    ip-172-16-3-79.ap-southeast-1.compute.internal    172.16.3.79    10.129.2.0/23
$ oc logs sdn-controller-6m5kb -n openshift-sdn
I0922 04:09:53.348615       1 vnids.go:105] Allocated netid 2689387 for namespace "egress-ip-test"
E0922 04:24:00.682018       1 egressip.go:254] Ignoring invalid HostSubnet ip-172-16-1-53.ap-southeast-1.compute.internal (host: "ip-172-16-1-53.ap-southeast-1.compute.internal", ip: "172.16.1.53", subnet: "10.131.0.0/23"): related node object "ip-172-16-1-53.ap-southeast-1.compute.internal" has an incomplete annotation "cloud.network.openshift.io/egress-ipconfig", CloudEgressIPConfig: <nil>
$ oc logs cloud-network-config-controller-5c7556db9f-x78bs -n openshift-cloud-network-config-controller
E0922 04:26:59.468726       1 controller.go:165] error syncing 'ip-172-16-2-77.ap-southeast-1.compute.internal': error retrieving the private IP configuration for node: ip-172-16-2-77.ap-southeast-1.compute.internal, err: error: cannot list ec2 instance for node: ip-172-16-2-77.ap-southeast-1.compute.internal, err: WebIdentityErr: failed to retrieve credentials caused by: RequestError: send request failed caused by: Post "https://sts.amazonaws.com/": dial tcp 54.239.29.25:443: i/o timeout, requeuing in node workqueue
$ oc get Infrastructure -o yaml
apiVersion: v1
items:
- apiVersion: config.openshift.io/v1
  kind: Infrastructure
  metadata:
    creationTimestamp: "2022-09-22T03:28:15Z"
    generation: 1
    name: cluster
    resourceVersion: "598"
    uid: 994da301-2a96-43b7-b43b-4b7c18d4b716
  spec:
    cloudConfig:
      name: ""
    platformSpec:
      aws:
        serviceEndpoints:
        - name: sts
          url: https://sts.ap-southeast-1.amazonaws.com
        - name: ec2
          url: https://ec2.ap-southeast-1.amazonaws.com
        - name: elasticloadbalancing
          url: https://elasticloadbalancing.ap-southeast-1.amazonaws.com
      type: AWS
  status:
    apiServerInternalURI: https://api-int.openshiftyy.ocpaws.sadiqueonline.com:6443
    apiServerURL: https://api.openshiftyy.ocpaws.sadiqueonline.com:6443
    controlPlaneTopology: HighlyAvailable
    etcdDiscoveryDomain: ""
    infrastructureName: openshiftyy-wfrpf
    infrastructureTopology: HighlyAvailable
    platform: AWS
    platformStatus:
      aws:
        region: ap-southeast-1
        serviceEndpoints:
        - name: ec2
          url: https://ec2.ap-southeast-1.amazonaws.com
        - name: elasticloadbalancing
          url: https://elasticloadbalancing.ap-southeast-1.amazonaws.com
        - name: sts
          url: https://sts.ap-southeast-1.amazonaws.com
      type: AWS
kind: List
metadata:
  resourceVersion: ""
$ oc get secret aws-cloud-credentials -n openshift-machine-api -o json | jq -r .data.credentials | base64 -d
[default]
sts_regional_endpoints = regional
role_arn = arn:aws:iam::015719942846:role/sputhenp-sts-yy-openshift-machine-api-aws-cloud-credentials
web_identity_token_file = /var/run/secrets/openshift/serviceaccount/token

[ec2-user@ip-172-17-1-229 ~]$ oc get secret cloud-credential-operator-iam-ro-creds -n openshift-cloud-credential-operator -o json | jq -r .data.credentials | base64 -d
[default]
sts_regional_endpoints = regional
role_arn = arn:aws:iam::015719942846:role/sputhenp-sts-yy-openshift-cloud-credential-operator-cloud-creden
web_identity_token_file = /var/run/secrets/openshift/serviceaccount/token

[ec2-user@ip-172-17-1-229 ~]$ oc get secret installer-cloud-credentials -n openshift-image-registry -o json | jq -r .data.credentials | base64 -d
[default]
sts_regional_endpoints = regional
role_arn = arn:aws:iam::015719942846:role/sputhenp-sts-yy-openshift-image-registry-installer-cloud-credent
web_identity_token_file = /var/run/secrets/openshift/serviceaccount/token

[ec2-user@ip-172-17-1-229 ~]$ oc get secret cloud-credentials -n openshift-ingress-operator -o json | jq -r .data.credentials | base64 -d
[default]
sts_regional_endpoints = regional
role_arn = arn:aws:iam::015719942846:role/sputhenp-sts-yy-openshift-ingress-operator-cloud-credentials
web_identity_token_file = /var/run/secrets/openshift/serviceaccount/token

[ec2-user@ip-172-17-1-229 ~]$ oc get secret cloud-credentials -n openshift-cloud-network-config-controller -o json | jq -r .data.credentials | base64 -d
[default]
sts_regional_endpoints = regional
role_arn = arn:aws:iam::015719942846:role/sputhenp-sts-yy-openshift-cloud-network-config-controller-cloud-
web_identity_token_file = /var/run/secrets/openshift/serviceaccount/token

[ec2-user@ip-172-17-1-229 ~]$ oc get secret ebs-cloud-credentials -n openshift-cluster-csi-drivers -o json | jq -r .data.credentials | base64 -d
[default]
sts_regional_endpoints = regional
role_arn = arn:aws:iam::015719942846:role/sputhenp-sts-yy-openshift-cluster-csi-drivers-ebs-cloud-credenti
web_identity_token_file = /var/run/secrets/openshift/serviceaccount/token
Actual results:
Egress IP is not configured properly, and cloud-network-config-controller keeps trying to reach the global STS service.
Expected results:
Egress IP should be configured, and cloud-network-config-controller should use the regional STS endpoint instead of the global one.
Additional info:
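All of the credentials secrets above already request regional STS endpoints, so the controller appears to ignore that setting. A possible check/workaround (a hedged sketch, not a verified fix; the deployment name is inferred from the pod name above, and this assumes the controller uses the AWS SDK for Go, which honors these environment variables):
$ oc -n openshift-cloud-network-config-controller set env deployment/cloud-network-config-controller AWS_STS_REGIONAL_ENDPOINTS=regional AWS_REGION=ap-southeast-1
$ oc -n openshift-cloud-network-config-controller logs deployment/cloud-network-config-controller | grep -i sts
If the errors switch from sts.amazonaws.com to the regional endpoint, the bug is likely in how the controller builds its AWS session rather than in the credentials themselves.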
Description of problem:
When the Devfile samples page of the developer catalog is open, switching the project in another browser tab and then opening a Devfile sample link in a new tab loses the current project context.
Version-Release number of selected component (if applicable):
4.12; this likely also happens in older versions.
How reproducible:
Always
Steps to Reproduce:
1. Switch to the developer perspective, navigate to Add > Samples
2. Open a new browser tab and create a new project
3. Ctrl+click a sample in the first tab.
Actual results:
The project has also changed in the "Import sample" page
Expected results:
The project from the first tab should also be used on the new "Import sample" page.
Additional info:
We had this issue earlier for other catalog entries. Other samples already work fine; only the Devfile sample links don't contain the current namespace.
Description of problem:
The cluster-version-operator pod crashloops during the bootstrap process, which can lead to a longer bootstrap and cause the installer to time out and fail. The pod is continuously restarting due to a Go panic. The bootstrap process fails because of the timeout, although it does complete correctly after more time, once the cluster-version-operator pod runs correctly.

$ oc -n openshift-cluster-version logs -p cluster-version-operator-754498df8b-5gll8
I0919 10:25:05.790124       1 start.go:23] ClusterVersionOperator 4.12.0-202209161347.p0.gc4fd1f4.assembly.stream-c4fd1f4
F0919 10:25:05.791580       1 start.go:29] error: Get "https://127.0.0.1:6443/apis/config.openshift.io/v1/featuregates/cluster": dial tcp 127.0.0.1:6443: connect: connection refused
goroutine 1 [running]:
k8s.io/klog/v2.stacks(0x1)
        /go/src/github.com/openshift/cluster-version-operator/vendor/k8s.io/klog/v2/klog.go:860 +0x8a
k8s.io/klog/v2.(*loggingT).output(0x2bee180, 0x3, 0x0, 0xc00017d5e0, 0x1, {0x22e9abc?, 0x1?}, 0x2beed80?, 0x0)
        /go/src/github.com/openshift/cluster-version-operator/vendor/k8s.io/klog/v2/klog.go:825 +0x686
k8s.io/klog/v2.(*loggingT).printfDepth(0x2bee180, 0x0?, 0x0, {0x0, 0x0}, 0x1?, {0x1b9cff0, 0x9}, {0xc000089140, 0x1, ...})
        /go/src/github.com/openshift/cluster-version-operator/vendor/k8s.io/klog/v2/klog.go:630 +0x1f2
k8s.io/klog/v2.(*loggingT).printf(...)
        /go/src/github.com/openshift/cluster-version-operator/vendor/k8s.io/klog/v2/klog.go:612
k8s.io/klog/v2.Fatalf(...)
        /go/src/github.com/openshift/cluster-version-operator/vendor/k8s.io/klog/v2/klog.go:1516
main.init.3.func1(0xc00012ac80?, {0x1b96f60?, 0x6?, 0x6?})
        /go/src/github.com/openshift/cluster-version-operator/cmd/start.go:29 +0x1e6
github.com/spf13/cobra.(*Command).execute(0xc00012ac80, {0xc0002fea20, 0x6, 0x6})
        /go/src/github.com/openshift/cluster-version-operator/vendor/github.com/spf13/cobra/command.go:860 +0x663
github.com/spf13/cobra.(*Command).ExecuteC(0x2bd52a0)
        /go/src/github.com/openshift/cluster-version-operator/vendor/github.com/spf13/cobra/command.go:974 +0x3b4
github.com/spf13/cobra.(*Command).Execute(...)
        /go/src/github.com/openshift/cluster-version-operator/vendor/github.com/spf13/cobra/command.go:902
main.main()
        /go/src/github.com/openshift/cluster-version-operator/cmd/main.go:29 +0x46
Version-Release number of selected component (if applicable):
4.12.0-0.nightly-2022-09-18-234318
How reproducible:
Most of the time, with any network type and installation type (IPI, UPI, and proxy).
Steps to Reproduce:
1. Install OCP 4.12 IPI: $ openshift-install create cluster
2. Wait until bootstrap is completed
Actual results:
[...]
level=error msg="Bootstrap failed to complete: timed out waiting for the condition"
level=error msg="Failed to wait for bootstrapping to complete. This error usually happens when there is a problem with control plane hosts that prevents the control plane operators from creating the control plane."
NAMESPACE                              NAME                                                      READY   STATUS             RESTARTS        AGE
openshift-cluster-version              cluster-version-operator-754498df8b-5gll8                 0/1     CrashLoopBackOff   7 (3m21s ago)   24m
openshift-image-registry               image-registry-94fd8b75c-djbxb                            0/1     Pending            0               6m44s
openshift-image-registry               image-registry-94fd8b75c-ft66c                            0/1     Pending            0               6m44s
openshift-ingress                      router-default-64fbb749b4-cmqgw                           0/1     Pending            0               13m
openshift-ingress                      router-default-64fbb749b4-mhtqx                           0/1     Pending            0               13m
openshift-monitoring                   prometheus-operator-admission-webhook-6d8cb95cf7-6jn5q    0/1     Pending            0               14m
openshift-monitoring                   prometheus-operator-admission-webhook-6d8cb95cf7-r6nnk    0/1     Pending            0               14m
openshift-network-diagnostics          network-check-source-8758bd6fc-vzf5k                      0/1     Pending            0               18m
openshift-operator-lifecycle-manager   collect-profiles-27726375-hlq89                           0/1     Pending            0               21m
$ oc -n openshift-cluster-version describe pod cluster-version-operator-754498df8b-5gll8
Name:                 cluster-version-operator-754498df8b-5gll8
Namespace:            openshift-cluster-version
Priority:             2000000000
Priority Class Name:  system-cluster-critical
Node:                 ostest-4gtwr-master-1/10.196.0.68
Start Time:           Mon, 19 Sep 2022 10:17:41 +0000
Labels:               k8s-app=cluster-version-operator
                      pod-template-hash=754498df8b
Annotations:          openshift.io/scc: hostaccess
Status:               Running
IP:                   10.196.0.68
IPs:
  IP:           10.196.0.68
Controlled By:  ReplicaSet/cluster-version-operator-754498df8b
Containers:
  cluster-version-operator:
    Container ID:  cri-o://1e2879600c89baabaca68c1d4d0a563d4b664c507f0617988cbf9ea7437f0b27
    Image:         registry.ci.openshift.org/ocp/release@sha256:2e38cd73b402a990286829aebdf00aa67a5b99124c61ec2f4fccd1135a1f0c69
    Image ID:      registry.ci.openshift.org/ocp/release@sha256:2e38cd73b402a990286829aebdf00aa67a5b99124c61ec2f4fccd1135a1f0c69
    Port:          <none>
    Host Port:     <none>
    Args:
      start
      --release-image=registry.ci.openshift.org/ocp/release@sha256:2e38cd73b402a990286829aebdf00aa67a5b99124c61ec2f4fccd1135a1f0c69
      --enable-auto-update=false
      --listen=0.0.0.0:9099
      --serving-cert-file=/etc/tls/serving-cert/tls.crt
      --serving-key-file=/etc/tls/serving-cert/tls.key
      --v=2
    State:       Waiting
      Reason:    CrashLoopBackOff
    Last State:  Terminated
      Reason:    Error
      Message:   I0919 10:33:07.798614 1 start.go:23] ClusterVersionOperator 4.12.0-202209161347.p0.gc4fd1f4.assembly.stream-c4fd1f4
                 F0919 10:33:07.800115 1 start.go:29] error: Get "https://127.0.0.1:6443/apis/config.openshift.io/v1/featuregates/cluster": dial tcp 127.0.0.1:6443: connect: connection refused
                 goroutine 1 [running]:
                 k8s.io/klog/v2.stacks(0x1)
                   /go/src/github.com/openshift/cluster-version-operator/vendor/k8s.io/klog/v2/klog.go:860 +0x8a
                 k8s.io/klog/v2.(*loggingT).output(0x2bee180, 0x3, 0x0, 0xc000433ea0, 0x1, {0x22e9abc?, 0x1?}, 0x2beed80?, 0x0)
                   /go/src/github.com/openshift/cluster-version-operator/vendor/k8s.io/klog/v2/klog.go:825 +0x686
                 k8s.io/klog/v2.(*loggingT).printfDepth(0x2bee180, 0x0?, 0x0, {0x0, 0x0}, 0x1?, {0x1b9cff0, 0x9}, {0xc0002d6630, 0x1, ...})
                   /go/src/github.com/openshift/cluster-version-operator/vendor/k8s.io/klog/v2/klog.go:630 +0x1f2
                 k8s.io/klog/v2.(*loggingT).printf(...)
                   /go/src/github.com/openshift/cluster-version-operator/vendor/k8s.io/klog/v2/klog.go:612
                 k8s.io/klog/v2.Fatalf(...)
                   /go/src/github.com/openshift/cluster-version-operator/vendor/k8s.io/klog/v2/klog.go:1516
                 main.init.3.func1(0xc0003b4f00?, {0x1b96f60?, 0x6?, 0x6?})
                   /go/src/github.com/openshift/cluster-version-operator/cmd/start.go:29 +0x1e6
                 github.com/spf13/cobra.(*Command).execute(0xc0003b4f00, {0xc000311980, 0x6, 0x6})
                   /go/src/github.com/openshift/cluster-version-operator/vendor/github.com/spf13/cobra/command.go:860 +0x663
                 github.com/spf13/cobra.(*Command).ExecuteC(0x2bd52a0)
                   /go/src/github.com/openshift/cluster-version-operator/vendor/github.com/spf13/cobra/command.go:974 +0x3b4
                 github.com/spf13/cobra.(*Command).Execute(...)
                   /go/src/github.com/openshift/cluster-version-operator/vendor/github.com/spf13/cobra/command.go:902
                 main.main()
                   /go/src/github.com/openshift/cluster-version-operator/cmd/main.go:29 +0x46
      Exit Code:    255
      Started:      Mon, 19 Sep 2022 10:33:07 +0000
      Finished:     Mon, 19 Sep 2022 10:33:07 +0000
    Ready:          False
    Restart Count:  7
    Requests:
      cpu:     20m
      memory:  50Mi
    Environment:
      KUBERNETES_SERVICE_PORT:  6443
      KUBERNETES_SERVICE_HOST:  127.0.0.1
      NODE_NAME:                (v1:spec.nodeName)
      CLUSTER_PROFILE:          self-managed-high-availability
    Mounts:
      /etc/cvo/updatepayloads from etc-cvo-updatepayloads (ro)
      /etc/ssl/certs from etc-ssl-certs (ro)
      /etc/tls/service-ca from service-ca (ro)
      /etc/tls/serving-cert from serving-cert (ro)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             False
  ContainersReady   False
  PodScheduled      True
Volumes:
  etc-ssl-certs:
    Type:          HostPath (bare host directory volume)
    Path:          /etc/ssl/certs
    HostPathType:
  etc-cvo-updatepayloads:
    Type:          HostPath (bare host directory volume)
    Path:          /etc/cvo/updatepayloads
    HostPathType:
  serving-cert:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  cluster-version-operator-serving-cert
    Optional:    false
  service-ca:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      openshift-service-ca.crt
    Optional:  false
  kube-api-access:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3600
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:       Burstable
Node-Selectors:  node-role.kubernetes.io/master=
Tolerations:     node-role.kubernetes.io/master:NoSchedule op=Exists
                 node.kubernetes.io/memory-pressure:NoSchedule op=Exists
                 node.kubernetes.io/network-unavailable:NoSchedule op=Exists
                 node.kubernetes.io/not-ready:NoSchedule op=Exists
                 node.kubernetes.io/not-ready:NoExecute op=Exists for 120s
                 node.kubernetes.io/unreachable:NoExecute op=Exists for 120s
Events:
  Type     Reason            Age                   From               Message
  ----     ------            ----                  ----               -------
  Warning  FailedScheduling  25m                   default-scheduler  no nodes available to schedule pods
  Warning  FailedScheduling  21m                   default-scheduler  0/2 nodes are available: 2 node(s) had untolerated taint {node.cloudprovider.kubernetes.io/uninitialized: true}. preemption: 0/2 nodes are available: 2 Preemption is not helpful for scheduling.
  Normal   Scheduled         19m                   default-scheduler  Successfully assigned openshift-cluster-version/cluster-version-operator-754498df8b-5gll8 to ostest-4gtwr-master-1 by ostest-4gtwr-bootstrap
  Warning  FailedMount       17m                   kubelet            Unable to attach or mount volumes: unmounted volumes=[serving-cert], unattached volumes=[service-ca kube-api-access etc-ssl-certs etc-cvo-updatepayloads serving-cert]: timed out waiting for the condition
  Warning  FailedMount       17m (x9 over 19m)     kubelet            MountVolume.SetUp failed for volume "serving-cert" : secret "cluster-version-operator-serving-cert" not found
  Normal   Pulling           15m                   kubelet            Pulling image "registry.ci.openshift.org/ocp/release@sha256:2e38cd73b402a990286829aebdf00aa67a5b99124c61ec2f4fccd1135a1f0c69"
  Normal   Pulled            15m                   kubelet            Successfully pulled image "registry.ci.openshift.org/ocp/release@sha256:2e38cd73b402a990286829aebdf00aa67a5b99124c61ec2f4fccd1135a1f0c69" in 7.481824271s
  Normal   Started           14m (x3 over 15m)     kubelet            Started container cluster-version-operator
  Normal   Created           14m (x4 over 15m)     kubelet            Created container cluster-version-operator
  Normal   Pulled            14m (x3 over 15m)     kubelet            Container image "registry.ci.openshift.org/ocp/release@sha256:2e38cd73b402a990286829aebdf00aa67a5b99124c61ec2f4fccd1135a1f0c69" already present on machine
  Warning  BackOff           4m22s (x52 over 15m)  kubelet            Back-off restarting failed container
Expected results:
The cluster-version-operator should not panic on a transient "connection refused" while the API server is still coming up, and bootstrap should complete within the installer's timeout.
Additional info:
Seen in most OCP-on-OSP QE CI jobs.
Attached [^must-gather-install.tar.gz]
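Since the report notes that bootstrap eventually completes once the CVO pod stabilizes, one way to confirm that (a sketch; run from the installation directory, <install-dir> is a placeholder) is to re-run the installer wait commands after the initial timeout instead of tearing the cluster down, while watching the CVO pod recover:
$ openshift-install wait-for bootstrap-complete --dir <install-dir> --log-level=debug
$ openshift-install wait-for install-complete --dir <install-dir> --log-level=debug
$ oc -n openshift-cluster-version get pods -w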
This is a clone of issue OCPBUGS-5458. The following is the description of the original issue:
—
reported in https://coreos.slack.com/archives/C027U68LP/p1673010878672479
Description of problem:
An OpenShift cluster was upgraded from 4.8 to 4.9.58. After the upgrade, the etcd pod on master1 does not come up and keeps crashlooping with the following error:
{"level":"fatal","ts":"2023-01-06T12:12:58.709Z","caller":"etcdmain/etcd.go:204","msg":"discovery failed","error":"wal: max entry size limit exceeded, recBytes: 13279, fileSize(313430016) - offset(313418480) - padBytes(1) = entryLimit(11535)","stacktrace":"go.etcd.io/etcd/server/v3/etcdmain.startEtcdOrProxyV2\n\t/remote-source/cachito-gomod-with-deps/app/server/etcdmain/etcd.go:204\ngo.etcd.io/etcd/server/v3/etcdmain.Main\n\t/remote-source/cachito-gomod-with-deps/app/server/etcdmain/main.go:40\nmain.main\n\t/remote-source/cachito-gomod-with-deps/app/server/main.go:32\nruntime.main\n\t/usr/lib/golang/src/runtime/proc.go:225"}
Version-Release number of selected component (if applicable):
How reproducible:
Steps to Reproduce:
1. 2. 3.
Actual results:
Expected results:
Additional info:
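Triage usually starts by confirming which member is failing and that the other two members still have quorum; the commands below are a hedged sketch (pod names are placeholders), and the documented "replacing an unhealthy etcd member" procedure is the usual remediation once quorum on the remaining members is confirmed:
$ oc -n openshift-etcd get pods -l app=etcd -o wide
$ oc -n openshift-etcd rsh etcd-<healthy-master>
# etcdctl member list -w table
# etcdctl endpoint health --cluster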
This is a clone of issue OCPBUGS-3458. The following is the description of the original issue:
—
Description of problem:
Since way back in 4.8, we've had a banner with "To request update recommendations, configure a channel that supports your version" when ClusterVersion has RetrievedUpdates=False. But a missing channel is only one of several reasons we could have RetrievedUpdates=False. Can we pivot to passing through the ClusterVersion condition message?
Version-Release number of selected component (if applicable):
4.8 and later.
How reproducible:
100%
Steps to Reproduce:
1. Launch a cluster-bot cluster like 4.11.12.
2. Set a channel with oc adm upgrade channel stable-4.11.
3. Scale down the CVO with oc scale --replicas 0 -n openshift-cluster-version deployments/cluster-version-operator.
4. Patch in a RetrievedUpdates condition with:
$ CONDITIONS="$(oc get -o json clusterversion version | jq -c '[.status.conditions[] | if .type == "RetrievedUpdates" then .status = "False" | .message = "Testing" else . end]')" $ oc patch --subresource status clusterversion version --type json -p "[{\"op\": \"add\", \"path\": \"/status/conditions\", \"value\": ${CONDITIONS}}]"
5. View the admin console at /settings/cluster.
Actual results:
Advice about configuring the channel (but it's already configured).
Expected results:
See the message you patched into the RetrievedUpdates condition.
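The message the banner could surface is already available on the ClusterVersion object; for example (a sketch using standard jsonpath filtering), the condition message can be read with:
$ oc get clusterversion version -o jsonpath='{.status.conditions[?(@.type=="RetrievedUpdates")].message}'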
This is a clone of issue OCPBUGS-3195. The following is the description of the original issue:
—
Description of problem:
the service ca controller start func seems to return th