Jump to: Complete Features | Incomplete Features | Complete Epics | Incomplete Epics | Other Complete | Other Incomplete |
Note: this page shows the Feature-Based Change Log for a release
These features were completed when this image was assembled
1. Proposed title of this feature request
Add runbook_url to alerts in the OCP UI
2. What is the nature and description of the request?
If an alert includes a runbook_url label, then it should appear in the UI for the alert as a link.
3. Why does the customer need this? (List the business requirements here)
Customer can easily reach the alert runbook and be able to address their issues.
4. List any affected packages or components.
As a user, I should be able to configure the CSI driver to have a storage topology.
In the console-operator repo we need to add the `capability.openshift.io/console` annotation to all the manifests that the operator either contains or creates on the fly.
Manifests are currently present in /bindata and /manifest directories.
Here is an example of the insights-operator change.
Here is the overall enhancement doc.
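For illustration, this is roughly what the annotated metadata of one of those manifests could look like; the resource kind, name, and annotation value below are assumptions, only the annotation key comes from this card:

```yaml
# Sketch only: kind, name, and the annotation value are placeholders.
apiVersion: v1
kind: ConfigMap
metadata:
  name: example-console-manifest
  namespace: openshift-console
  annotations:
    capability.openshift.io/console: "true"
```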
Feature Overview
Provide CSI drivers to replace all the in-tree cloud provider drivers we currently have. These drivers will probably be released as tech preview versions first before being promoted to GA.
Goals
Requirements
Requirement | Notes | isMvp? |
---|---|---|
Framework for CSI driver | TBD | Yes |
Drivers should be available to install both in disconnected and connected mode | | Yes |
Drivers should upgrade from release to release without any impact | | Yes |
Drivers should be installable via CVO (when in-tree plugin exists) | | |
Out of Scope
This work will only cover the drivers themselves; it will not include
Background, and strategic fit
In a future Kubernetes release (currently 1.21), in-tree cloud provider drivers will be deprecated and replaced with CSI equivalents. We need the drivers created so that we continue to support the ecosystems in an appropriate way.
Assumptions
Customer Considerations
Customers will need to be able to use the storage they want.
Documentation Considerations
This Epic is to track the GA of this feature
As an OCP user, I want images for GCP Filestore CSI Driver and Operator, so that I can install them on my cluster and utilize GCP Filestore shares.
We need to continue to maintain specific areas within storage; this epic is to capture that effort and track it across releases.
Goals
Requirements
Requirement | Notes | isMvp? |
---|---|---|
Telemetry | | No |
Certification | | No |
API metrics | | No |
Out of Scope
n/a
Background, and strategic fit
With the expected scale of our customer base, we want to keep the load of customer tickets / BZs low.
Assumptions
Customer Considerations
Documentation Considerations
Notes
In progress:
High prio:
Unsorted
Traditionally we did these updates as bugfixes, because we did them after the feature freeze (FF). Trying no-feature-freeze in 4.12. We will try to do as much as we can before FF, but we're quite sure something will slip past FF as usual.
Update all OCP and kubernetes libraries in storage operators to the appropriate version for OCP release.
This includes (but is not limited to):
Operators:
There is a new driver release 5.0.0 since the last rebase that includes snapshot support:
https://github.com/kubernetes-sigs/ibm-vpc-block-csi-driver/releases/tag/v5.0.0
Rebase the driver on v5.0.0 and update the deployments in ibm-vpc-block-csi-driver-operator.
There are no corresponding changes in ibm-vpc-node-label-updater since the last rebase.
Update all CSI sidecars to the latest upstream release.
This includes update of VolumeSnapshot CRDs in https://github.com/openshift/cluster-csi-snapshot-controller-operator/tree/master/assets
Update the driver to the latest upstream release. Notify QE and docs with any new features and important bugfixes that need testing or documentation.
(Using separate cards for each driver because these updates can be more complicated)
Update the driver to the latest upstream release. Notify QE and docs with any new features and important bugfixes that need testing or documentation.
This includes ibm-vpc-node-label-updater!
(Using separate cards for each driver because these updates can be more complicated)
Update the driver to the latest upstream release. Notify QE and docs with any new features and important bugfixes that need testing or documentation.
(Using separate cards for each driver because these updates can be more complicated)
Update the driver to the latest upstream release. Notify QE and docs with any new features and important bugfixes that need testing or documentation.
(Using separate cards for each driver because these updates can be more complicated)
Update the driver to the latest upstream release. Notify QE and docs with any new features and important bugfixes that need testing or documentation.
(Using separate cards for each driver because these updates can be more complicated)
Update the driver to the latest upstream release. Notify QE and docs with any new features and important bugfixes that need testing or documentation.
(Using separate cards for each driver because these updates can be more complicated)
The end of general support for vSphere 6.7 will be on October 15, 2022, so vSphere 6.7 will be deprecated in 4.11.
We want to encourage vSphere customers to upgrade to vSphere 7 in OCP 4.11 since VMware is ending general support for vSphere 6.7 in October 2022.
We want to set the cluster Upgradeable=false and have a strong alert pointing to our docs / requirements.
related slack: https://coreos.slack.com/archives/CH06KMDRV/p1647541493096729
This Epic tracks the GA of this feature
Epic Goal
On new installations, we should make the StorageClass created by the CSI operator the default one.
However, we shouldn't do that on an upgrade scenario. The main reason is that users might have set a different quota on the CSI driver Storage Class.
Exit criteria:
On new installations, we should make the StorageClass created by the CSI operator the default one.
However, we shouldn't do that on an upgrade scenario. The main reason is that users might have set a different quota on the CSI driver Storage Class.
Exit criteria:
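For reference, marking a StorageClass as the default uses the standard Kubernetes annotation; a minimal sketch (the class name and provisioner are illustrative):

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: example-csi                      # illustrative name
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"
provisioner: example.csi.openshift.io    # illustrative provisioner
```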
OCP/Telco Definition of Done
Epic Template descriptions and documentation.
Rebase openshift-controller-manager to k8s 1.24
4.11 MVP Requirements
Out of scope use cases (that are part of the Kubeframe/factory project):
Questions to be addressed:
Set the ClusterDeployment CRD to deploy OpenShift in FIPS mode and make sure that after deployment the cluster is set in that mode
In order to install FIPS compliant clusters, we need to make sure that installconfig + agentconfig based deployments take into account the FIPS config in installconfig.
This task is about passing the config to agentclusterinstall so it makes it into the iso. Once there, AGENT-374 will give it to assisted service
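For context, the FIPS switch in install-config.yaml that needs to be carried through to agentclusterinstall looks like this (minimal sketch; all other fields are omitted and the cluster name is hypothetical):

```yaml
apiVersion: v1
metadata:
  name: example-cluster   # hypothetical
fips: true
```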
As an OpenShift infrastructure owner, I want to deploy OpenShift clusters with dual-stack IPv4/IPv6
As an OpenShift infrastructure owner, I want to deploy OpenShift clusters with single-stack IPv6
IPv6 and dual-stack clusters are requested often by customers, especially Telco customers. Working with dual-stack clusters is a requirement for many, but it is also a transition step toward single-stack IPv6 clusters, which for some of our users is the final destination.
Karim's work proving how agent-based can deploy IPv6: IPv6 deploy with agent-based installer
For dual-stack installations the agent-cluster-install.yaml must have both an IPv4 and an IPv6 subnet in the networking.MachineNetwork or assisted-service will throw an error. This field is in InstallConfig but it must be added to agent-cluster-install in its Generate().
For IPv4 and IPv6 installs, setting up the MachineNetwork is not needed, but it also does not cause problems if it is set, so it should be fine to set it at all times.
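A sketch of the relevant agent-cluster-install.yaml fragment for a dual-stack install, assuming the same machineNetwork shape as InstallConfig; the CIDR values are placeholders:

```yaml
spec:
  networking:
    machineNetwork:
      - cidr: 192.168.111.0/24            # IPv4 subnet (placeholder)
      - cidr: fd2e:6f44:5dd8:c956::/120   # IPv6 subnet (placeholder)
```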
As an OpenShift infrastructure owner, I want to deploy a cluster zero with RHACM or MCE and have the required components installed when the installation is completed
BILLI makes it easier to deploy a cluster zero. BILLI users know at installation time what the purpose of their cluster is when they plan the installation. Day-2 steps are necessary to install operators, and users, especially when automating installations, want to finish the installation flow with their required components already installed.
As a customer, I want to be able to:
so that I can achieve
Description of criteria:
We are only allowing the user to provide extra manifests to install MCE at this time. We are not adding an option to "install mce" on the command line (or UI)
This requires/does not require a design proposal.
This requires/does not require a feature gate.
As a customer, I want to be able to:
so that I can achieve
Description of criteria:
We are only allowing the user to provide extra manifests to install MCE at this time. We are not adding an option to "install mce" on the command line (or UI)
This requires/does not require a design proposal.
This requires/does not require a feature gate.
As a user I would like to see all the events that the autoscaler creates, even duplicates. Having the CAO set this flag will allow me to continue to see these events.
We have carried a patch for the autoscaler that would enable the duplication of events. This patch can now be dropped because the upstream added a flag for this behavior in https://github.com/kubernetes/autoscaler/pull/4921
Add GA support for deploying OpenShift to IBM Public Cloud
Complete the existing gaps to make OpenShift on IBM Cloud VPC (Next Gen2) Generally Available.
This epic tracks the changes needed to the ingress operator to support IBM DNS Services for private clusters.
Currently in OpenShift we do not support distributing hotfix packages to cluster nodes. In time-sensitive situations, a RHEL hotfix package can be the quickest route to resolving an issue.
Before we ship OCP CoreOS layering in https://issues.redhat.com/browse/MCO-165 we need to switch the format of what is currently `machine-os-content` to be the new base image.
The overall plan is:
After https://github.com/openshift/os/pull/763 is in the release image, teach the MCO how to use it. This is basically:
As an OCP CoreOS layering developer, having telemetry data about the number of clusters using osImageURL will help us understand how broadly this feature is being used and improve accordingly.
Acceptance Criteria:
Assumption
Doc: https://docs.google.com/document/d/1sXCaRt3PE0iFmq7ei0Yb1svqzY9bygR5IprjgioRkjc/edit
Customers do not pay Red Hat more to run HyperShift control planes and supporting infrastructure than Standalone control planes and supporting infrastructure.
Assumption
Run cluster-storage-operator (CSO) + AWS EBS CSI driver operator + AWS EBS CSI driver control-plane Pods in the management cluster, run the driver DaemonSet in the hosted cluster.
More information here: https://docs.google.com/document/d/1sXCaRt3PE0iFmq7ei0Yb1svqzY9bygR5IprjgioRkjc/edit
As a HyperShift Cluster Instance Admin, I want to run the AWS EBS CSI driver operator + the control plane of the CSI driver in the management cluster, so the guest cluster runs just my applications.
Exit criteria:
As a HyperShift Cluster Instance Admin, I want to run cluster-storage-operator (CSO) in the management cluster, so the guest cluster runs just my applications.
Exit criteria:
As an OCP support engineer, I want the same guest cluster storage-related objects in the output of "hypershift dump cluster --dump-guest-cluster" as in "oc adm must-gather", so I can debug storage issues easily.
must-gather collects: storageclasses persistentvolumes volumeattachments csidrivers csinodes volumesnapshotclasses volumesnapshotcontents
hypershift collects none of this, the relevant code is here: https://github.com/openshift/hypershift/blob/bcfade6676f3c344b48144de9e7a36f9b40d3330/cmd/cluster/core/dump.go#L276
Exit criteria:
CNCC was moved to the management cluster and it should use proxy settings defined for the management cluster.
Customers do not pay Red Hat more to run HyperShift control planes and supporting infrastructure than Standalone control planes and supporting infrastructure.
Assumption
cluster-snapshot-controller-operator is running on the CP.
More information here: https://docs.google.com/document/d/1sXCaRt3PE0iFmq7ei0Yb1svqzY9bygR5IprjgioRkjc/edit
As a HyperShift Cluster Instance Admin, I want to run cluster-csi-snapshot-controller-operator in the management cluster, so the guest cluster runs just my applications.
Exit criteria:
As an OpenShift developer, I want cluster-csi-snapshot-controller-operator to use the existing controllers in library-go, so I don't need to maintain yet more code that does the same thing as library-go.
Note: if this refactoring introduces any new conditions, we must make sure that 4.11 snapshot controller clears them to support downgrade! This will need 4.11 BZ + z-stream update!
Similarly, if some conditions become obsolete / not managed by any controller, they must be cleared by 4.12 operator.
Exit criteria:
oc-mirror is a GA product as of OpenShift 4.11.
The goal of this feature is to solve any future customer requests for new features or capabilities in oc-mirror.
RHEL CoreOS should be updated to RHEL 9.2 sources to take advantage of newer features, hardware support, and performance improvements.
Requirement | Notes | isMvp? |
---|---|---|
CI - MUST be running successfully with test automation | This is a requirement for ALL features. | YES |
Release Technical Enablement | Provide necessary release enablement details and documents. | YES |
Questions to be addressed:
PROBLEM
We would like to improve our signal for RHEL9 readiness by increasing internal engineering engagement and external partner engagement on our community OpenShift offering, OKD.
PROPOSAL
Having OKD run on SCOS (CentOS Stream CoreOS) brings the community offering closer to what a partner or an internal engineering team might expect on OCP.
ACCEPTANCE CRITERIA
Image has been switched/included:
DEPENDENCIES
The SCOS build payload.
RELATED RESOURCES
OKD+SCOS proposal: https://docs.google.com/presentation/d/1_Xa9Z4tSqB7U2No7WA0KXb3lDIngNaQpS504ZLrCmg8/edit#slide=id.p
OKD+SCOS work draft: https://docs.google.com/document/d/1cuWOXhATexNLWGKLjaOcVF4V95JJjP1E3UmQ2kDVzsA/edit
Acceptance Criteria
A stable OKD on SCOS is built and made available to the community every sprint.
This comes up when installing ipi-on-aws on arm64 with the custom payload built at quay.io/aleskandrox/okd-release:4.12.0-0.okd-centos9-full-rebuild-arm64, which is using SCOS as the machine-os-content image.
```
[root@ip-10-0-135-176 core]# crictl logs c483c92e118d8
2022-08-11T12:19:39+00:00 [cnibincopy] FATAL ERROR: Unsupported OS ID=scos
```
The probable fix has to land on https://github.com/openshift/cluster-network-operator/blob/master/bindata/network/multus/multus.yaml#L41-L53
HyperShift came to life to serve multiple goals; some are the main near-term goals, and some are secondary goals that serve us well long-term.
HyperShift opens up doors to penetrate the market. HyperShift enables true hybrid (CP and Workers decoupled, mixed IaaS, mixed Arch,...). An architecture that opens up more options to target new opportunities in the cloud space. For more details on this one check: Hosted Control Planes (aka HyperShift) Strategy [Live Document]
To bring hosted control planes to our customers, we need the means to ship it. Today MCE is how HyperShift is shipped and installed so that customers can use it. There are two main customers for hosted-control-planes:
As you may have noticed, MCE is the delivery mechanism for both management models. The difference between managed and self-managed is the consumer persona: for self-managed, it's the customer SRE; for managed, it's the RH SRE.
For us to ship HyperShift in the product (as hosted control planes) in either management model, there is a necessary readiness checklist that we need to satisfy. Below are the high-level requirements needed before GA:
Please also have a look at our What are we missing in Core HyperShift for GA Readiness? doc.
Multi-cluster is becoming an industry need today, not because this is where the trend is going but because it's the only viable path today to solve many of our customers' use-cases. Below is some reasoning why multi-cluster is a NEED:
As a result, multi-cluster management is a defining category in the market where Red Hat plays a key role. Today Red Hat solves for multi-cluster via RHACM and MCE. The goal is to simplify fleet management complexity by providing a single pane of glass to observe, secure, police, govern, and configure a fleet. I.e., the operand is no longer one cluster but a set, a fleet of clusters.
HyperShift's logically centralized architecture, as well as its native separation of concerns and superior cluster lifecycle management experience, makes it a great fit as the foundation of our multi-cluster management story.
Thus the following stories are important for HyperShift:
Refs:
HyperShift is the core engine that will be used to provide hosted control-planes for consumption in managed and self-managed.
Main user story: When life cycling clusters as a cluster service consumer via HyperShift core APIs, I want to use a stable/backward compatible API that is less susceptible to future changes so I can provide availability guarantees.
Ref: What are we missing in Core HyperShift for GA Readiness?
Customers do not pay Red Hat more to run HyperShift control planes and supporting infrastructure than Standalone control planes and supporting infrastructure.
Assumptions:
HyperShift - proposed cuts from data plane
When operating OpenShift clusters (for any OpenShift form factor) from MCE/ACM/OCM/CLI as a Cluster Service Consumer (RH managed SRE, or self-manage SRE/admin) I want to be able to migrate CPs from one hosting service cluster to another:
More information:
To understand usage patterns and inform our decision making for the product, we need to be able to measure adoption and assess usage.
See Hosted Control Planes (aka HyperShift) Strategy [Live Document]
Whether it's managed or self-managed, it's pertinent to report health metrics to be able to create meaningful Service Level Objectives (SLOs) and alert on failures to meet our availability guarantees. This is especially important for our managed services path.
https://issues.redhat.com/browse/OCPPLAN-8901
HyperShift for managed services is a strategic company goal as it improves usability, feature, and cost competitiveness against other managed solutions, and because managed services/consumption-based cloud services is where we see the market growing (customers are looking to delegate platform overhead).
We should make sure our SD milestones are unblocked by the core team.
This feature reflects HyperShift core readiness to be consumed. When all related EPICs and stories in this EPIC are complete, HyperShift can be considered ready to be consumed in GA form. This does not describe a date but rather the readiness of core HyperShift to be consumed in GA form, not the GA itself.
- GA date for self-managed will be factoring in other inputs such as adoption, customer interest/commitment, and other factors.
- GA dates for ROSA-HyperShift are on track, tracked in milestones M1-7 (have a look at https://issues.redhat.com/browse/OCPPLAN-5771)
Epic Goal*
The goal is to split client certificate trust chains from the global Hypershift root CA.
Why is this important? (mandatory)
This is important to:
Scenarios (mandatory)
Provide details for user scenarios including actions to be performed, platform specifications, and user personas.
Dependencies (internal and external) (mandatory)
Hypershift team needs to provide us with code reviews and merge the changes we are to deliver
Contributing Teams(and contacts) (mandatory)
Acceptance Criteria (optional)
The serviceaccount CA bundle automatically injected to all pods cannot be used to authenticate any client certificate generated by the control-plane.
Drawbacks or Risk (optional)
Risk: there is significant time pressure, as this should be delivered before the first stable Hypershift release.
Done - Checklist (mandatory)
AUTH-311 introduced an enhancement. Implement the signer separation described there.
When this image was assembled, these features were not yet completed. Therefore, only the Jira cards included here are part of this release.
We have a set of images
that should become multiarch images. This should be done both in upstream and downstream.
As a reference, we have internally built those images as multiarch and made them available as
They can be consumed by the Assisted Service pod via the following env:
```
- name: AGENT_DOCKER_IMAGE
  value: registry.redhat.io/rhai-tech-preview/assisted-installer-agent-rhel8:latest
- name: CONTROLLER_IMAGE
  value: registry.redhat.io/rhai-tech-preview/assisted-installer-reporter-rhel8:latest
- name: INSTALLER_IMAGE
  value: registry.redhat.io/rhai-tech-preview/assisted-installer-rhel8:latest
```
OLM would have to support a mechanism like podAffinity that allows multiple architecture values to be specified, enabling it to pin operators to worker nodes of the matching architecture.
Ref: https://github.com/openshift/enhancements/pull/1014
Cut a new release of the OLM API and update OLM API dependency version (go.mod) in OLM package; then
Bring the upstream changes from OLM-2674 to the downstream olm repo.
A/C:
- New OLM API version release
- OLM API dependency updated in OLM Project
- OLM Subscription API changes downstreamed
- OLM Controller changes downstreamed
- Changes manually tested on Cluster Bot
We drive OpenShift cross-market customer success and new customer adoption with constant improvements and feature additions to the existing capabilities of our OpenShift Core Networking (SDN and Network Edge). This feature captures that natural progression of the product.
There are definitely grey areas, but in general:
Questions to be addressed:
Goal: Provide queryable metrics and telemetry for cluster routes and sharding in an OpenShift cluster.
Problem: Today we test OpenShift performance and scale with best-guess or anecdotal evidence for the number of routes that our customers use. The best practice for a large number of routes in a cluster is to shard; however, we have no visibility into whether and how customers are using sharding.
Why is this important? These metrics will inform our performance and scale testing, documented cluster limits, and how customers are using sharding for best practice deployments.
Dependencies (internal and external):
Prioritized epics + deliverables (in scope / not in scope):
Not in scope:
Estimate (XS, S, M, L, XL, XXL):
Previous Work:
Open questions:
Acceptance criteria:
Epic Done Checklist:
Description:
As described in the Metrics to be sent via telemetry section of the Design Doc, the following metrics need to be sent from the OpenShift cluster to Red Hat premises:
The metrics should be allowlisted on the cluster side.
The steps described in Sending metrics via telemetry need to be followed, specifically step 5.
Depends on CFE-478.
Acceptance Criteria:
Description:
As described in the Design Doc, the following information needs to be exported from the Cluster Ingress Operator:
Design 2 will be implemented as part of this story.
Acceptance Criteria:
This is an epic bucket for all activities surrounding the creation of a declarative approach to release and maintain OLM catalogs.
When working on this Epic, it's important to keep in mind this other potentially related Epic: https://issues.redhat.com/browse/OLM-2276
Enhance the veneer rendering to be able to read the input veneer data from stdin, via a pipe, in a manner similar to https://dev.to/napicella/linux-pipes-in-golang-2e8j
The command could then be used in a manner similar to many k8s examples, like
```shell
opm alpha render-veneer semver -o yaml < infile > outfile
```
Upstream issue link: https://github.com/operator-framework/operator-registry/issues/1011
Jira Description
As an OPM maintainer, I want to downstream the PR for OCP 4.12 and backport it to OCP 4.11 so that IIB will NOT be impacted by the changes when it upgrades the OPM version to use the next/future opm upstream release (v1.25.0).
Summary / Background
IIB (the downstream service that manages the indexes) uses the upstream version, and if they bump the OPM version to the next/future (v1.25.0) release with this change before the downstream images are updated, then the process to manage the indexes downstream will face issues and it will impact the distributions.
Acceptance Criteria
Definition of Ready
Definition of Done
tldr: three basic claims, the rest is explanation and one example
While bugs are an important metric, fixing bugs is different from investing in maintainability and debuggability. Investing in fixing bugs will help alleviate immediate problems, but it doesn't improve the ability to address future problems. You (may) get a code base with fewer bugs, but when you add a new feature, it will still be hard to debug problems and interactions. This pushes a code base towards stagnation where it gets harder and harder to add features.
One alternative is to ask teams to produce ideas for how they would improve future maintainability and debuggability instead of focusing on immediate bugs. This would produce designs that make problem determination, bug resolution, and future feature additions faster over time.
I have a concrete example of one such outcome of focusing on bugs vs quality. We have resolved many bugs about communication failures with ingress by finding problems with point-to-point network communication. We have fixed the individual bugs, but have not improved the code for future debugging. In so doing, we chase many hard-to-diagnose problems across the stack. The alternative is to create a point-to-point network connectivity capability. This would immediately improve bug resolution and stability (detection) for kuryr, ovs, legacy sdn, network-edge, kube-apiserver, openshift-apiserver, authentication, and console. Bug fixing does not produce the same impact.
We need more investment in our future selves. Saying, "teams should reserve this" doesn't seem to be universally effective. Perhaps an approach that directly asks for designs and impacts and then follows up by placing the items directly in planning and prioritizing against PM feature requests would give teams the confidence to invest in these areas and give broad exposure to systemic problems.
Relevant links:
Epic Template descriptions and documentation.
Enable the chaos plugin https://coredns.io/plugins/chaos/ in our CoreDNS configuration so that we can use a DNS query to easily identify what DNS pods are responding to our requests.
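A rough sketch of what enabling the plugin could look like; in OpenShift the Corefile is rendered by the DNS operator, so the ConfigMap name, port, and surrounding server block below are assumptions, only the `chaos` directive comes from the upstream plugin:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: dns-default            # hypothetical; the DNS operator owns the real Corefile
  namespace: openshift-dns
data:
  Corefile: |
    .:5353 {
        chaos                  # answers CH-class queries so the responding server can be identified
        # ...existing plugins (kubernetes, forward, cache, ...) stay as-is
    }
```

With the plugin enabled, a CH-class TXT query (e.g. for hostname.bind) should return an identifier for the responding server, which is what lets us tell which DNS pod answered.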
Requirement | Notes | isMvp? |
---|---|---|
CI - MUST be running successfully with test automation | This is a requirement for ALL features. | YES |
Release Technical Enablement | Provide necessary release enablement details and documents. | YES |
This Section: What does the person writing code, testing, documenting need to know? What context can be provided to frame this feature.
Questions to be addressed:
As a console user I want to have the option to:
For Deployments we will add the 'Restart rollout' action button. This action will PATCH the Deployment object's 'spec.template.metadata.annotations' block by adding the 'openshift.io/restartedAt: <actual-timestamp>' annotation. This will restart the deployment by creating a new ReplicaSet.
For DeploymentConfigs we will add the 'Retry rollout' action button. This action will PATCH the latest revision of the ReplicationController object's 'metadata.annotations' block by setting 'openshift.io/deployment/phase: "New"' and removing openshift.io/deployment.cancelled and openshift.io/deployment.status-reason.
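A minimal sketch of the strategic-merge patch body the console could send for the Deployment case; the timestamp is a placeholder:

```yaml
spec:
  template:
    metadata:
      annotations:
        openshift.io/restartedAt: "2022-09-01T12:00:00Z"   # actual timestamp filled in at click time
```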
Acceptance Criteria:
BACKGROUND:
OpenShift console will be updated to allow rollout restart deployment from the console itself.
Currently, from the OpenShift console, for the resource “deploymentconfigs” we can only start and pause the rollout, and for the resource “deployment” we can only resume the rollout. Neither resource (deployment & deployment config) has an option to restart the rollout. That is why the customer wants this functionality: to perform the same action from the OpenShift console as well as the CLI.
The customer wants developers who are not fluent with the oc tool and terminal utilities to be able to use the console instead of the terminal to restart a deployment, just like we do through the CLI using the command “oc rollout restart deploy/<deployment-name>”.
Usually when developers change the config map that a deployment uses they have to restart the pods. Currently, the developers have to use the oc rollout restart deployment command. The customer wants a button/menu to perform the same action from the console as well.
Design
Doc: https://docs.google.com/document/d/1i-jGtQGaA0OI4CYh8DH5BBIVbocIu_dxNt3vwWmPZdw/edit
As a developer, I want to make status.HostIP for Pods visible in the Pod details page of the OCP Web Console. Currently there is no way to view the node IP for a Pod in the OpenShift Web Console. When viewing a Pod in the console, the field status.HostIP is not visible.
Acceptance criteria:
When OCP is performing cluster upgrade user should be notified about this fact.
There are two possibilities for how to surface the cluster upgrade to users:
AC:
Note: We need to decide if we want to distinguish this particular notification by a different color? ccing Ali Mobrem
Created from: https://issues.redhat.com/browse/RFE-3024
Pre-Work Objectives
Since some of our requirements from the ACM team will not be available for the 4.12 timeframe, the team should work on anything we can get done in the scope of the console repo so that when the required items are available in 4.13, we can be more nimble in delivering GA content for the Unified Console Epic.
Overall GA Key Objective
Providing our customers with a single simplified User Experience (Hybrid Cloud Console) that is extensible, can run locally or in the cloud, and is capable of everything from managing the fleet to deep diving into a single cluster.
Why customers want this?
Why we want this?
Phase 2 Goal: Productization of the united Console
As a developer I would like to disable clusters like *KS that we can't support for multi-cluster (for instance because we can't authenticate). The ManagedCluster resource has a vendor label that we can use to know if the cluster is supported.
cc Ali Mobrem Sho Weimer Jakub Hadvig
UPDATE 9/20/22: we want an allow-list with OpenShift, ROSA, ARO, ROKS, and OpenShiftDedicated
Acceptance criteria:
The MCO should properly report its state in a way that's consistent and able to be understood by customers, troubleshooters, and maintainers alike.
Some customer cases have revealed scenarios where the MCO state reporting is misleading and therefore could be unreliable to base decisions and automation on.
In addition to correcting some incorrect states, the MCO will be enhanced for a more granular view of update rollouts across machines.
The MCO should properly report its state in a way that's consistent and able to be understood by customers, troubleshooters, and maintainers alike.
For this epic, "state" means "what is the MCO doing?" – so the goal here is to try to make sure that it's always known what the MCO is doing.
This includes:
While this probably crosses a little bit into the "status" portion of certain MCO objects, as some state is definitely recorded there, this probably shouldn't turn into a "better status reporting" epic. I'm interpreting "status" to mean "how is it going" so status is maybe a "detail attached to a state".
Exploration here: https://docs.google.com/document/d/1j6Qea98aVP12kzmPbR_3Y-3-meJQBf0_K6HxZOkzbNk/edit?usp=sharing
https://docs.google.com/document/d/17qYml7CETIaDmcEO-6OGQGNO0d7HtfyU7W4OMA6kTeM/edit?usp=sharing
The current property description is:
configuration represents the current MachineConfig object for the machine config pool.
But in a 4.12.0-ec.4 cluster, the actual semantics seem to be something closer to "the most recent rendered config that we completely leveled on". We should at least update the godocs to be more specific about the intended semantics. And perhaps consider adjusting the semantics?
This section includes Jira cards that are linked to an Epic, but the Epic itself is not linked to any Feature. These epics were completed when this image was assembled
This epic tracks "business as usual" requirements / enhancements / bug fixing of the Insights Operator.
Today the links point at a rule-scoped page, but that page lacks information about recommended resolution. You can click through by cluster ID to your specific cluster and get that recommendation advice, but it would be more convenient and less confusing for customers if we linked directly to the cluster-scoped recommendation page.
We can implement by updating the template here to be:
```
fmt.Sprintf("https://console.redhat.com/openshift/insights/advisor/clusters/%s?first=%s%%7C%s", clusterID, ruleIDStr, rec.ErrorKey)
```
or something like that.
unknowns
request is clear, solution/implementation to be further clarified
This story only covers API components. We will create a separate story for other utility functions.
Today we are generating documentation for Console's Dynamic Plugin SDK in
frontend/packages/dynamic-plugin-sdk. We are missing ts-doc for a set of hooks and components.
We are generating the markdown from the dynamic-plugin-sdk using
yarn generate-doc
Here is the list of the API that the dynamic-plugin-sdk is exposing:
https://gist.github.com/spadgett/0ddefd7ab575940334429200f4f7219a
Acceptance Criteria:
Out of Scope:
Acceptance Criteria: Add missing API docs for *Icon and *Status components in the API docs.
We neither use nor support static plugin nav extensions anymore so we should remove the API in the static plugin SDK and get rid of related cruft in our current nav components.
AC: Remove static plugin nav extensions code. Check the navigation code for any references to the old API.
During the development of https://issues.redhat.com/browse/CONSOLE-3062, it was determined additional information is needed in order to assist a user when troubleshooting a Failed plugin (see https://github.com/openshift/console/pull/11664#issuecomment-1159024959). As it stands today, there is no data available to the console to relay to the user regarding why the plugin Failed. Presumably, a message should be added to NotLoadedDynamicPlugin to address this gap.
AC: Add `message` property to NotLoadedDynamicPluginInfo type.
Based on API review CONSOLE-3145, we have decided to deprecate the following APIs:
cc Andrew Ballantyne Bryan Florkiewicz
Currently our `api.md` does not generate docs with "tags" (aka `@deprecated`) – we'll need to add that functionality to the `generate-doc.ts` script. See the code that works for `console-extensions.md`
We should have a global notification or the `Console plugins` page (e.g., k8s/cluster/operator.openshift.io~v1~Console/cluster/console-plugins) should alert users when console operator `spec.managementState` is `Unmanaged` as changes to `enabled` for plugins will have no effect.
The console has good error boundary components that are useful for dynamic plugins.
Exposing them will enable the plugins to get the same look and feel of handling React errors as the console.
The minimum requirement right now is to expose the ErrorBoundaryFallbackPage component from
https://github.com/openshift/console/blob/master/frontend/packages/console-shared/src/components/error/fallbacks/ErrorBoundaryFallbackPage.tsx
Following https://coreos.slack.com/archives/C011BL0FEKZ/p1650640804532309, it would be useful for us (network observability team) to have access to ResourceIcon in dynamic-plugin-sdk.
Currently ResourceLink is exported but not ResourceIcon
AC:
To align with https://github.com/openshift/dynamic-plugin-sdk, plugin metadata field dependencies as well as the @console/pluginAPI entry contained within should be made optional.
If a plugin doesn't declare the @console/pluginAPI dependency, the Console release version check should be skipped for that plugin.
Currently the ConsolePlugins API version is v1alpha1. Since we are going GA with dynamic plugins we should be creating a v1 version.
This would require updates in following repositories:
AC:
NOTE: This story does not include the conversion webhook change which will be created as a follow on story
when defining two proxy endpoints,
```yaml
apiVersion: console.openshift.io/v1alpha1
kind: ConsolePlugin
metadata:
  ...
  name: forklift-console-plugin
spec:
  displayName: Console Plugin Template
  proxy:
    service:
      basePath: /
```
I get two proxy endpoints
/api/proxy/plugin/forklift-console-plugin/forklift-inventory
and
/api/proxy/plugin/forklift-console-plugin/forklift-must-gather-api
but both proxy to the `forklift-must-gather-api` service
e.g.
curl to:
[server url]/api/proxy/plugin/forklift-console-plugin/forklift-inventory
will point to the `forklift-must-gather-api` service, instead of the `forklift-inventory` service
`@openshift-console/plugin-shared` (NPM) is a package that will contain shared components that can be upversioned separately by the Plugins so they can keep core compatibility low but upversion and support more shared components as we need them.
This isn't documented today. We need to do that.
The extension `console.dashboards/overview/detail/item` doesn't constrain the content to fit the card.
The details-card has an expectation that a <dd> item will be the last item (for spacing between items). Our static details-card items use a component called 'OverviewDetailItem'. This isn't enforced in the extension and can cause undesired padding issues if they just do whatever they want.
I feel our approach here should be making the extension take the props of 'OverviewDetailItem' where 'children' is the new 'component'.
Move `frontend/public/components/nav` to `packages/console-app/src/components/nav` and address any issues resulting from the move.
There will be some expected lint errors relating to cyclical imports. These will require some refactoring to address.
This enhancement introduces support for provisioning and upgrading heterogeneous architecture clusters in phases.
We need to scan through the compute nodes and build a set of supported architectures from those. Each node on the cluster has a label for architecture, e.g. kubernetes.io/arch=arm64, kubernetes.io/arch=amd64, etc. Based on the set of supported architectures, the console will need to surface only those operators in the Operator Hub which are supported on our nodes.
AC:
@jpoulin is good to ask about heterogeneous clusters.
This enhancement introduces support for provisioning and upgrading heterogeneous architecture clusters in phases.
We need to scan through the compute nodes and build a set of supported architectures from those. Each node on the cluster has a label for architecture, e.g. `kubernetes.io/arch=arm64`, `kubernetes.io/arch=amd64`, etc. Based on the set of supported architectures, the console will need to surface only those operators in the Operator Hub which are supported on our nodes. Each operator's PackageManifest contains labels that indicate the operator's supported architectures, e.g. `operatorframework.io/arch.s390x: supported`. An operator can be supported on multiple architectures.
AC:
OS and arch filtering: https://github.com/openshift/console/blob/2ad4e17d76acbe72171407fc1c66ca4596c8aac4/frontend/packages/operator-lifecycle-manager/src/components/operator-hub/operator-hub-items.tsx#L49-L86
@jpoulin is good to ask about heterogeneous clusters.
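For reference, the two label shapes this story relies on, shown as metadata fragments (values illustrative):

```yaml
# Node architecture label
metadata:
  labels:
    kubernetes.io/arch: arm64
---
# Operator PackageManifest architecture labels
metadata:
  labels:
    operatorframework.io/arch.arm64: supported
    operatorframework.io/arch.amd64: supported
```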
An epic we can duplicate for each release to ensure we have a place to catch things we ought to be doing regularly but can tend to fall by the wayside.
As a developer, I want to be able to clean up the css markup after making the css / scss changes required for dark mode and remove any old unused css / scss content.
Acceptance criteria:
As a user, I want to be able to:
so that I can achieve
Description of criteria:
Detail about what is specifically not being delivered in the story
1. Proposed title of this feature request
Basic authentication for Helm Chart repository in helmchartrepositories.helm.openshift.io CRD.
2. What is the nature and description of the request?
As of v4.6.9, the HelmChartRepository CRD only supports client TLS authentication through spec.connectionConfig.tlsClientConfig.
3. Why do you need this? (List the business requirements here)
Basic authentication is widely used by many chart repositories managers (Nexus OSS, Artifactory, etc.)
Helm CLI also supports them with the helm repo add command.
https://helm.sh/docs/helm/helm_repo_add/
4. How would you like to achieve this? (List the functional requirements here)
Probably by extending the CRD:
```yaml
spec:
  connectionConfig:
    username: username
    password:
      secretName: secret-name
```
The secret namespace should be openshift-config to align with the tlsClientConfig behavior.
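A sketch of the referenced secret under the proposed scheme; the key name inside the secret is an assumption since the request does not define it:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: secret-name
  namespace: openshift-config
type: Opaque
stringData:
  password: <chart-repository-password>   # key name assumed
```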
5. For each functional requirement listed in question 4, specify how Red Hat and the customer can test to confirm the requirement is successfully implemented.
Trying to pull helm charts from remote private chart repositories that have disabled anonymous access and offer basic authentication.
E.g.: https://github.com/sonatype/docker-nexus
As an OCP user I would like to be able to install helm charts from repos added to ODC with the basic authentication fields populated.
We need to support helm installs for Repos that have the basic authentication secret name and namespace.
Updating the ProjectHelmChartRepository CRD is already done in a different story.
Supporting the HelmChartRepository CR: this feature will be scoped first to project/namespace scoped repos.
If the new fields for basic auth are set in the repo CR, then use those credentials when making API calls to Helm to install/upgrade charts. We will error out if the logged-in user does not have access to the secret referenced by the repo CR. If the basic auth fields are not present, we assume it is not an authenticated repo.
None
NA
I can list, install and update charts on authenticated repos from ODC
Needs Documentation both upstream and downstream
Needs new unit test covering repo auth
Dependencies identified
Blockers noted and expected delivery timelines set
Design is implementable
Acceptance criteria agreed upon
Story estimated
Unknown
Verified
Unsatisfied
ACCEPTANCE CRITERIA
NOTES
ACCEPTANCE CRITERIA
NOTES
This is a follow up Epic to https://issues.redhat.com/browse/MCO-144, which aimed to get in-place upgrades for Hypershift. This epic aims to capture additional work to focus on using CoreOS/OCP layering into Hypershift, which has benefits such as:
- removing or reducing the need for ignition
- maintaining feature parity between self-driving and managed OCP models
- adding additional functionality such as hotfixes
Right now in https://github.com/openshift/hypershift/pull/1258 you can only perform one upgrade at a time. Multiple upgrades will break due to controller logic
Properly create logic to handle manifest creation/updates and deletion, so the logic is more bulletproof
Currently not implemented, and will require the MCD hypershift mode to be adjusted to handle disruptionless upgrades like regular MCD
We plan to build Ironic Container Images using RHEL9 as base image in OCP 4.12
This is required because the ironic components have abandoned support for CentOS Stream 8 and Python 3.6/3.7 upstream during the most recent development cycle that will produce the stable Zed release, in favor of CentOS Stream 9 and Python 3.8/3.9
More info on RHEL8 to RHEL9 transition in OCP can be found at https://docs.google.com/document/d/1N8KyDY7KmgUYA9EOtDDQolebz0qi3nhT20IOn4D-xS4
update ironic software to pick up latest bug fixes
This is an API change and we will consider this as a feature request.
https://issues.redhat.com/browse/NE-799 Please check this for more details
https://issues.redhat.com/browse/NE-799 Please check this for more details
No
N/A
Make sure that the CSI driver automatically updates oVirt credentials when they are updated in OpenShift.
In the CSI driver operator we should add the
withSecretHashAnnotation
call from library-go like this: https://github.com/openshift/aws-ebs-csi-driver-operator/blob/53ed27b2a0eaa655338da180a79897855b366ac7/pkg/operator/starter.go#L138
We need tests for the ovirt-csi-driver and the cluster-api-provider-ovirt. These tests help us to
Also, having dedicated tests on lower levels with a smaller scope (unit, integration, ...) has the following benefits:
Integration tests need to be implemented according to https://cluster-api.sigs.k8s.io/developer/testing.html#integration-tests using envtest.
As a user, in the topology view, I would like to be updated intuitively if any of the deployments have reached quota limits.
Refer below for more details
As a user, I would like to be informed in an intuitive way, when quotas have been reached in a namespace
Refer below for more details
Provide a form driven experience to allow cluster admins to manage the perspectives to meet the ACs below.
We have heard the following requests from customers and developer advocates:
As an admin, I want to be able to use a form driven experience to hide user perspective(s)
As an admin, I should be able to see a code snippet that shows how to add user perspectives
Based on the https://issues.redhat.com/browse/ODC-6732 enhancement proposal, the cluster admin can add user perspectives
To support the cluster-admin to configure the perspectives correctly, the developer console should provide a code snippet for the customization of yaml resource (Console CRD).
Customize Perspective Enhancement PR: https://github.com/openshift/enhancements/pull/1205
Previous work:
As an admin, I want to hide the admin perspective for non-privileged users or hide the developer perspective for all users
Based on the https://issues.redhat.com/browse/ODC-6730 enhancement proposal, it is required to extend the console configuration CRD to enable the cluster admins to configure this data in the console resource
Previous customization work:
As an admin, I want to hide user perspective(s) based on the customization.
Customers don't want their users to have access to some/all of the items which are available in the Developer Catalog. The request is to change access for the cluster, not per user or persona.
Provide a form driven experience to allow cluster admins easily disable the Developer Catalog, or one or more of the sub catalogs in the Developer Catalog.
Multiple customer requests.
We need to consider how this will work with subcatalogs which are installed by operators: VMs, Event Sources, Event Catalogs, Managed Services, Cloud based services
As an admin, I want to hide sub-catalogs in the developer catalog or hide the developer catalog completely based on the customization.
As an admin, I want to hide/disable access to specific sub-catalogs in the developer catalog or the complete dev catalog for all users across all namespaces.
Based on the https://issues.redhat.com/browse/ODC-6732 enhancement proposal, it is required to extend the console configuration CRD to enable the cluster admins to configure this data in the console resource
Extend the "customization" spec type definition for the CRD in the openshift/api project
Previous customization work:
As a cluster-admin, I should be able to see a code snippet that shows how to enable sub-catalogs or the entire dev catalog.
Based on the https://issues.redhat.com/browse/ODC-6732 enhancement proposal, the cluster admin can add sub-catalog(s) from the Developer Catalog or the Dev catalog as a whole.
To support the cluster-admin to configure the sub-catalog list correctly, the developer console should provide a code snippet for the customization yaml resource (Console CRD).
Previous work:
As an admin, I would like openshift-* namespaces with an operator to be labeled with security.openshift.io/scc.podSecurityLabelSync=true to ensure the continual functioning of operators without manual intervention. The label should only be applied to openshift-* namespaces with an operator (the presence of a ClusterServiceVersion resource) IF the label is not already present. This automation will help ensure smooth functioning of the cluster and avoid frivolous operational events.
Context: As part of the PSA migration period, OpenShift will ship with the "label sync'er" - a controller that will automatically adjust PSA security profiles in response to the workloads present in the namespace. We can assume that not all operators (produced by Red Hat, the community or ISVs) will have successfully migrated their deployments in response to upstream PSA changes. The label sync'er will sync, by default, any namespace not prefixed with "openshift-"; for "openshift-" namespaces, the explicit label (security.openshift.io/scc.podSecurityLabelSync=true) is required for sync.
A/C:
- OLM operator has been modified (downstream only) to label any unlabelled "openshift-" namespace in which a CSV has been created
- If a labeled namespace containing at least one non-copied csv becomes unlabelled, it should be relabelled
- The implementation should be done in a way to eliminate or minimize subsequent downstream sync work (it is ok to make slight architectural changes to the OLM operator in the upstream to enable this)
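The label referenced in this story, applied to a namespace, would look like this (namespace name hypothetical):

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: openshift-example-operator   # hypothetical
  labels:
    security.openshift.io/scc.podSecurityLabelSync: "true"
```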
As an SRE, I want the hypershift operator to expose a metric indicating when a hosted control plane is ready.
This should allow SREs to tune (or silence) alerts that occur while the hosted control plane is spinning up.
The Kube APIServer has a sidecar to output audit logs. We need similar sidecars for other APIServers that run on the control plane side. We also need to pass the same audit log policy that we pass to the KAS to these other API servers.
This epic tracks network tooling improvements for 4.12
A new framework and process should be developed to make sharing network tools with devs, support, and customers convenient. We are going to add some tools for OVN troubleshooting before ovn-k becomes the default, some tools that we got from customer cases, and some more to help analyze and debug collected logs based on the stable must-gather/sosreport format we get now thanks to the 4.11 Epic.
Our estimation for this Epic is 1 engineer * 2 Sprints
WHY:
This epic is important because it helps reduce the time it takes our customers and our team to understand an issue within the cluster.
A focus of this epic is to develop tools to quickly allow debugging of a problematic cluster. This is crucial for the engineering team to help us scale. We want to provide a tool to our customers to help lower the cognitive burden to get at a root cause of an issue.
Alert if any of the OVN controllers have been disconnected from the southbound database for a period of time, using the metric ovn_controller_southbound_database_connected.
The metric updates every 2 minutes so please be mindful of this when creating the alert.
If the controller is disconnected for 10 minutes, fire an alert.
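A minimal PrometheusRule sketch for this alert, assuming the metric reports 0 while disconnected; the alert name, namespace, and severity are placeholders:

```yaml
apiVersion: monitoring.coreos.com/v1
kind: PrometheusRule
metadata:
  name: ovn-controller-sbdb-connectivity     # placeholder
  namespace: openshift-ovn-kubernetes
spec:
  groups:
    - name: ovn-controller
      rules:
        - alert: OVNControllerDisconnectedSouthboundDatabase
          expr: ovn_controller_southbound_database_connected == 0
          for: 10m                            # matches the 10-minute requirement above
          labels:
            severity: warning                 # placeholder
          annotations:
            summary: ovn-controller has been disconnected from the southbound database for 10 minutes.
```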
DoD: Merged to CNO and tested by QE
OCP/Telco Definition of Done
Epic Template descriptions and documentation.
Add a SOCKS proxy to cluster-network-operator so egress IP can use gRPC to reach worker nodes.
With the introduction of gRPC as the means for determining the state of a given egress node, hypershift should be able to leverage the SOCKS proxy to know the state of each egress node.
References relevant to this work:
1281-network-proxy
https://coreos.slack.com/archives/C01C8502FMM/p1658427627751939
https://github.com/openshift/hypershift/pull/1131/commits/28546dc587dc028dc8bded715847346ff99d65ea
This Epic is here to track the rebase we need to do when kube 1.25 is GA https://www.kubernetes.dev/resources/release/
Keeping this in mind can help us plan our time better. ATTOW GA is planned for August 23
https://docs.google.com/document/d/1h1XsEt1Iug-W9JRheQas7YRsUJ_NQ8ghEMVmOZ4X-0s/edit --> this is the link for rebase help
We need to rebase cloud network config controller to 1.25 when the kube 1.25 rebase lands.
This section includes Jira cards that are linked to an Epic, but the Epic itself is not linked to any Feature. These epics were not completed when this image was assembled
Placeholder epic to track spontaneous tasks which do not deserve their own epic.
AWS has a hard limit of 100 OIDC providers globally.
Currently each HostedCluster created by e2e creates its own OIDC provider, which results in hitting the quota limit frequently and causes the tests to fail.
DOD:
Only a single OIDC provider should be created and shared between all e2e HostedClusters.
AC:
We have the connectDirectlyToCloudAPIs flag in the konnectivity socks5 proxy to dial directly to cloud providers without going through konnectivity.
This introduces another path for exceptions: https://github.com/openshift/hypershift/pull/1722
We should consolidate both by keeping connectDirectlyToCloudAPIs until there's a reason not to.
DoD:
At the moment, if the input etcd KMS encryption (key and role) is invalid, we fail transparently.
We should check that both the key and role are compatible/operational for a given cluster and otherwise surface the failure in a condition.
Once the HostedCluster and NodePool are paused using the PausedUntil statement, the awsprivatelink controller will continue reconciling.
How to test this:
Changes made in METAL-1 open up opportunities to improve our handling of images by cleaning up redundant code that generates extra work for the user and extra load for the cluster.
We only need to run the image cache DaemonSet if there is a QCOW URL to be mirrored (effectively this means a cluster installed with 4.9 or earlier). We can stop deploying it for new clusters installed with 4.10 or later.
Currently, the image-customization-controller relies on the image cache running on every master to provide the shared hostpath volume containing the ISO and initramfs. The first step is to replace this with a regular volume and an init container in the i-c-c pod that extracts the images from machine-os-images. We can use the copy-metal -image-build flag (instead of -all used in the shared volume) to provide only the required images.
Once i-c-c has its own volume, we can switch the image extraction in the metal3 Pod's init container to use the -pxe flag instead of -all.
The machine-os-images init container for the image cache (not the metal3 Pod) can be removed. The whole image cache deployment is now optional and need only be started if provisioningOSDownloadURL is set (and in fact should be deleted if it is not).
Description of the problem:
Cluster installation fails if the installation disk has LVM on RAID:
Host: test-infra-cluster-3cc862c9-master-0, reached installation stage Failed: failed executing nsenter [--target 1 --cgroup --mount --ipc --pid -- mdadm --stop /dev/md0], Error exit status 1, LastOutput "mdadm: Cannot get exclusive access to /dev/md0:Perhaps a running process, mounted filesystem or active volume group?"
How reproducible:
100%
Steps to reproduce:
1. Install a cluster while the master nodes have a disk with LVM on RAID (reproduces using test: https://gitlab.cee.redhat.com/ocp-edge-qe/kni-assisted-installer-auto/-/blob/master/api_tests/test_disk_cleanup.py#L97)
Actual results:
Installation failed
Expected results:
Installation success
Description of the problem:
When running assisted-installer on a machine where there is more than one volume group per physical volume, only the first volume group will be cleaned up. This leads to problems later and will lead to errors such as:
Failed - failed executing nsenter [--target 1 --cgroup --mount --ipc --pid -- pvremove /dev/sda -y -ff], Error exit status 5, LastOutput "Can't open /dev/sda exclusively. Mounted filesystem?
How reproducible:
Set up a VM with more than one volume group per physical volume. As an example, look at the following sample from a customer cluster.
List block devices
```
/usr/bin/lsblk -o NAME,MAJ:MIN,SIZE,TYPE,FSTYPE,KNAME,MODEL,UUID,WWN,HCTL,VENDOR,STATE,TRAN,PKNAME
NAME MAJ:MIN SIZE TYPE FSTYPE KNAME MODEL UUID WWN HCTL VENDOR STATE TRAN PKNAME
loop0 7:0 125.9G loop xfs loop0 c080b47b-2291-495c-8cc0-2009ebc39839
loop1 7:1 885.5M loop squashfs loop1
sda 8:0 894.3G disk sda INTEL SSDSC2KG96 0x55cd2e415235b2db 1:0:0:0 ATA running sas
|-sda1 8:1 250M part sda1 0x55cd2e415235b2db sda
|-sda2 8:2 750M part ext2 sda2 3aa73c72-e342-4a07-908c-a8a49767469d 0x55cd2e415235b2db sda
|-sda3 8:3 49G part xfs sda3 ffc3ccfe-f150-4361-8ae5-f87b17c13ac2 0x55cd2e415235b2db sda
|-sda4 8:4 394.2G part LVM2_member sda4 Ua3HOc-Olm4-1rma-q0Ug-PtzI-ZOWg-RJ63uY 0x55cd2e415235b2db sda
`-sda5 8:5 450G part LVM2_member sda5 W8JqrD-ZvaC-uNK9-Y03D-uarc-Tl4O-wkDdhS 0x55cd2e415235b2db sda
  `-nova-instance 253:0 3.1T lvm ext4 dm-0 d15e2de6-2b97-4241-9451-639f7b14594e running sda5
sdb 8:16 894.3G disk sdb INTEL SSDSC2KG96 0x55cd2e415235b31b 1:0:1:0 ATA running sas
`-sdb1 8:17 894.3G part LVM2_member sdb1 6ETObl-EzTd-jLGw-zVNc-lJ5O-QxgH-5wLAqD 0x55cd2e415235b31b sdb
  `-nova-instance 253:0 3.1T lvm ext4 dm-0 d15e2de6-2b97-4241-9451-639f7b14594e running sdb1
sdc 8:32 894.3G disk sdc INTEL SSDSC2KG96 0x55cd2e415235b652 1:0:2:0 ATA running sas
`-sdc1 8:33 894.3G part LVM2_member sdc1 pBuktx-XlCg-6Mxs-lddC-qogB-ahXa-Nd9y2p 0x55cd2e415235b652 sdc
  `-nova-instance 253:0 3.1T lvm ext4 dm-0 d15e2de6-2b97-4241-9451-639f7b14594e running sdc1
sdd 8:48 894.3G disk sdd INTEL SSDSC2KG96 0x55cd2e41521679b7 1:0:3:0 ATA running sas
`-sdd1 8:49 894.3G part LVM2_member sdd1 exVSwU-Pe07-XJ6r-Sfxe-CQcK-tu28-Hxdnqo 0x55cd2e41521679b7 sdd
  `-nova-instance 253:0 3.1T lvm ext4 dm-0 d15e2de6-2b97-4241-9451-639f7b14594e running sdd1
sr0 11:0 989M rom iso9660 sr0 Virtual CDROM0 2022-06-17-18-18-33-00 0:0:0:0 AMI running usb
```
Now run the assisted installer and try to install an SNO node on this machine; the installation will fail with a message indicating that it could not exclusively access /dev/sda.
Actual results:
The installation will fail with a message that indicates that it could not exclusively access /dev/sda
Expected results:
The installation should proceed and the cluster should start to install.
Suspected Cases
https://issues.redhat.com/browse/AITRIAGE-3809
https://issues.redhat.com/browse/AITRIAGE-3802
https://issues.redhat.com/browse/AITRIAGE-3810
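A rough sketch of how the multiple-volume-group layout described above can be reproduced on a test VM; device names are illustrative:
# Two partitions on one disk, each with its own PV and its own VG
sgdisk -n 1:0:+10G -n 2:0:+10G /dev/vdb
pvcreate /dev/vdb1 /dev/vdb2
vgcreate vg_a /dev/vdb1
vgcreate vg_b /dev/vdb2
lvcreate -n lv_a -l 100%FREE vg_a
lvcreate -n lv_b -l 100%FREE vg_b
# If cleanup only removes the first VG, the disk stays busy and calls such as
# "pvremove /dev/vdb -y -ff" still fail with "Can't open ... exclusively".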
Same thing as we've had in assisted-service: we sometimes fail to install golangci-lint by fetching release artifacts from GitHub directly. That's usually because the same IP address (CI build cluster) tries to access GitHub at a high rate, leading to HTTP 429 (too many requests).
The way we fixed it for assisted-service was to change the installation to use a quay.io image that already ships the binary.
Example for such a failure: https://prow.ci.openshift.org/view/gs/origin-ci-test/pr-logs/pull/openshift_release/30788/rehearse-30788-periodic-ci-openshift-assisted-installer-agent-release-ocm-2.6-subsystem-test-periodic/1551879759036682240
Filter for all recent failures: https://search.ci.openshift.org/?search=golangci%2Fgolangci-lint+crit+unable+to+find&maxAge=168h&context=1&type=build-log&name=.*assisted.*&excludeName=&maxMatches=5&maxBytes=20971520&groupBy=job
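A sketch of the container-based approach; the image reference is a placeholder, since the exact quay.io image used for assisted-service is not reproduced here:
# Run the linter from a prebuilt image instead of fetching the GitHub release tarball
podman run --rm -v "$(pwd)":/src:Z -w /src \
    quay.io/<org>/golangci-lint:<tag> \
    golangci-lint run ./...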
Section 5 of PRD: https://docs.google.com/document/d/1fF-Ajdzc9EDDg687FzTrX577hvY9NdK0/edit#heading=h.gjdgxs
Testing and collaboration with NVIDIA: https://docs.google.com/spreadsheets/d/1LHY-Af-2kQHVwtW4aVdHnmwZLTiatiyf-ySffC8O5NM/edit#gid=0
Deploying Nvidia Patches: https://docs.google.com/document/d/1yR4lphjPKd6qZ9sGzZITl0wH1r4ykfMKPjUnlzvWji4/edit#
This is the continuation of https://issues.redhat.com/browse/NHE-273, but now the focus is on the remaining flows.
Description of problem:
check_pkt_length cannot be offloaded without 1) sFlow offload patches in Open vSwitch and 2) hardware driver support. Since 1) will not be done anytime soon, we need a workaround for the check_pkt_length issue.
Version-Release number of selected component (if applicable):
4.11/4.12
How reproducible:
Always
Steps to Reproduce:
1. Any flow that has check_pkt_len():
5-b: Pod -> NodePort Service traffic (Pod Backend - Different Node)
6-b: Pod -> NodePort Service traffic (Host Backend - Different Node)
4-b: Pod -> Cluster IP Service traffic (Host Backend - Different Node)
10-b: Host Pod -> Cluster IP Service traffic (Host Backend - Different Node)
11-b: Host Pod -> NodePort Service traffic (Pod Backend - Different Node)
12-b: Host Pod -> NodePort Service traffic (Host Backend - Different Node)
Actual results:
Poor performance due to upcalls when check_pkt_len() is not supported.
Expected results:
Good performance.
Additional info:
https://docs.google.com/spreadsheets/d/1LHY-Af-2kQHVwtW4aVdHnmwZLTiatiyf-ySffC8O5NM/edit#gid=670206692
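For checking how much a node is affected, the flows and the slow-path load can be inspected on the node; br-int is the usual OVN-Kubernetes integration bridge, but treat the exact bridge name and OpenFlow version as assumptions for the environment at hand:
# Count OpenFlow rules on the integration bridge that use check_pkt_len
ovs-ofctl -O OpenFlow13 dump-flows br-int | grep -c check_pkt_len
# Upcall/handler statistics give a rough picture of slow-path (userspace) load
ovs-appctl upcall/show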
OCP/Telco Definition of Done
Epic Template descriptions and documentation.
As a developer building container images on OpenShift
I want to specify that my build should run without elevated privileges
So that builds do not run with elevated privileges (as root) from the host's perspective
No QE required for Dev Preview. OpenShift regression testing will verify that existing behavior is not impacted.
We will need to document how to enable this feature, with sufficient warnings regarding Dev Preview.
This potentially warrants an OpenShift blog post.
OCP/Telco Definition of Done
Epic Template descriptions and documentation.
We have been running into a number of problems with configure-ovs and nodeip-configuration selecting different interfaces in OVNK deployments. This causes connectivity issues, so we need some way to ensure that everything uses the same interface/IP.
Currently configure-ovs runs before nodeip-configuration, but since nodeip-configuration is the source of truth for IP selection regardless of CNI plugin, I think we need to look at swapping that order. That way configure-ovs could look at what nodeip-configuration chose and not have to implement its own interface selection logic.
I'm targeting this at 4.12 because even though there's probably still time to get it in for 4.11, changing the order of boot services is always a little risky and I'd prefer to do it earlier in the cycle so we have time to tease out any issues that arise. We may need to consider backporting the change though since this has been an issue at least back to 4.10.
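If the order does get swapped, the change boils down to a systemd ordering dependency along these lines; the drop-in path and the unit name running configure-ovs (shown here as ovs-configuration.service) are assumptions, and the real change would live in the MCO templates rather than a hand-written drop-in:
mkdir -p /etc/systemd/system/ovs-configuration.service.d
cat <<'EOF' > /etc/systemd/system/ovs-configuration.service.d/10-after-nodeip.conf
[Unit]
# Let nodeip-configuration pick the interface/IP first, then configure-ovs reads its result
Wants=nodeip-configuration.service
After=nodeip-configuration.service
EOF
systemctl daemon-reload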
Goal
Provide an indication that advanced features are used
Problem
Today, customers and RH don't have the information on the actual usage of advanced features.
Why is this important?
Prioritized Scenarios
In Scope
1. Add a boolean variable in our telemetry to mark if the customer is using advanced features (PV encryption, encryption with KMS, external mode).
Not in Scope
Integrate with subscription watch - will be done by the subscription watch team with our help.
Customers
All
Customer Facing Story
As a compliance manager, I should be able to easily see if all my clusters are using the right number of subscriptions
What does success look like?
A clear indication in subscription watch for ODF usage (either essential or advanced).
Link to main epic: https://issues.redhat.com/browse/RHSTOR-3173
We migrated most components as part of https://issues.redhat.com/browse/RHSTOR-2165
We now have a few components remaining, roughly 15 to 20%. This epic targets:
1) Add support for in-tree modal launcher
This section includes Jira cards that are not linked to either an Epic or a Feature. These tickets were completed when this image was assembled
This is a clone of issue OCPBUGS-10239. The following is the description of the original issue:
—
This is a clone of issue OCPBUGS-8082. The following is the description of the original issue:
—
Description of problem:
Currently, during the gathering, some of the ServiceAccounts are lost. This task fixes that problem.
Version-Release number of selected component (if applicable):
How reproducible:
Steps to Reproduce:
1. 2. 3.
Actual results:
Expected results:
Additional info:
This is a clone of issue OCPBUGS-5129. The following is the description of the original issue:
—
Description of problem:
I attempted to install a BM SNO with the agent based installer. In the install_config, I disabled all supported capabilities except marketplace. Install_config snippet:
  capabilities:
    baselineCapabilitySet: None
    additionalEnabledCapabilities:
    - marketplace
The system installed fine but the capabilities config was not passed down to the cluster.
clusterversion:
  status:
    availableUpdates: null
    capabilities:
      enabledCapabilities:
      - CSISnapshot
      - Console
      - Insights
      - Storage
      - baremetal
      - marketplace
      - openshift-samples
      knownCapabilities:
      - CSISnapshot
      - Console
      - Insights
      - Storage
      - baremetal
      - marketplace
      - openshift-samples
oc -n kube-system get configmap cluster-config-v1 -o yaml
apiVersion: v1
data:
  install-config: |
    additionalTrustBundlePolicy: Proxyonly
    apiVersion: v1
    baseDomain: ptp.lab.eng.bos.redhat.com
    bootstrapInPlace:
      installationDisk: /dev/disk/by-id/wwn-0x62cea7f04d10350026c6f2ec315557a0
    compute:
    - architecture: amd64
      hyperthreading: Enabled
      name: worker
      platform: {}
      replicas: 0
    controlPlane:
      architecture: amd64
      hyperthreading: Enabled
      name: master
      platform: {}
      replicas: 1
    metadata:
      creationTimestamp: null
      name: cnfde8
    networking:
      clusterNetwork:
      - cidr: 10.128.0.0/14
        hostPrefix: 23
      machineNetwork:
      - cidr: 10.16.231.0/24
      networkType: OVNKubernetes
      serviceNetwork:
      - 172.30.0.0/16
    platform:
      none: {}
    publish: External
    pullSecret: ""
Version-Release number of selected component (if applicable):
4.12.0-rc.5
How reproducible:
100%
Steps to Reproduce:
1. Install SNO with the agent based installer as described above
2.
3.
Actual results:
The capabilities disabled in the install-config were installed anyway
Expected results:
The capabilities disabled in the install-config are not installed
Additional info:
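A quick way to compare what was requested with what the cluster actually enabled (the jsonpath fields follow the ClusterVersion API and the cluster-config-v1 ConfigMap shown above):
# Capabilities requested in the stored install-config
oc -n kube-system get configmap cluster-config-v1 -o jsonpath='{.data.install-config}' | grep -A2 capabilities
# Capabilities the cluster actually enabled
oc get clusterversion version -o jsonpath='{.status.capabilities.enabledCapabilities}'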
This is a clone of issue OCPBUGS-3277. The following is the description of the original issue:
—
I saw this occur one time when running installs in a continuous loop. This was with COMPACT_IPV4 in a non-disconnected setup.
WaitForBootstrapComplete shows it can't access the API:
level=info msg=Unable to retrieve cluster metadata from Agent Rest API: no clusterID known for the cluster
level=debug msg=cluster is not registered in rest API
level=debug msg=infraenv is not registered in rest API
This is because create-cluster-and-infraenv.service failed
Failed Units: 2
create-cluster-and-infraenv.service
NetworkManager-wait-online.service
The agentbasedinstaller register command wasn't able to retrieve the image to get the version
Nov 03 23:03:24 master-0 create-cluster-and-infraenv[2702]: time="2022-11-03T23:03:24Z" level=error msg="command 'oc adm release info -o template --template '\{{.metadata.version}}' --insecure=false registry.ci.openshift.org/ocp/release:4.12.0-0.nightly-2022-10-25-210451 --registry-config=/tmp/registry-config3852044519' exited with non-zero exit code 1: \nerror: unable to read image registry.ci.openshift.org/ocp/release:4.12.0-0.nightly-2022-10-25-210451: Get \"https://registry.ci.openshift.org/v2/\": dial tcp: lookup registry.ci.openshift.org on 192.168.111.1:53: read udp 192.168.111.80:51315->192.168.111.1:53: i/o timeout\n" Nov 03 23:03:24 master-0 create-cluster-and-infraenv[2702]: time="2022-11-03T23:03:24Z" level=error msg="failed to get image openshift version from release image registry.ci.openshift.org/ocp/release:4.12.0-0.nightly-2022-10-25-210451" error="command 'oc adm release info -o template --template '\{{.metadata.version}}' --insecure=false registry.ci.openshift.org/ocp/release:4.12.0-0.nightly-2022-10-25-210451 --registry-config=/tmp/registry-config3852044519' exited with non-zero exit code 1: \nerror: unable to read image registry.ci.openshift.org/ocp/release:4.12.0-0.nightly-2022-10-25-210451: Get \"https://registry.ci.openshift.org/v2/\": dial tcp: lookup registry.ci.openshift.org on 192.168.111.1:53: read udp 192.168.111.80:51315->192.168.111.1:53: i/o timeout\n"
This occurs when attempting to get the release here:
https://github.com/openshift/assisted-service/blob/master/cmd/agentbasedinstaller/register.go#L58
We should add a retry mechanism or restart the service to handle spurious network failures like this.
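A minimal sketch of what a retry around the failing lookup could look like, reusing the oc invocation from the log above; RELEASE_IMAGE and the registry-config path are placeholders:
# Hypothetical wrapper: retry the release-image version lookup before giving up
for attempt in 1 2 3 4 5; do
  if version=$(oc adm release info -o template --template '{{.metadata.version}}' \
      --insecure=false "${RELEASE_IMAGE}" --registry-config=/tmp/registry-config); then
    break
  fi
  echo "release info failed (attempt ${attempt}), retrying..." >&2
  sleep 10
done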
Description of problem:
This bug is a copy of https://bugzilla.redhat.com/show_bug.cgi?id=2137616, as the fix needs to go in on the OCP side. For the must-gather and attached screenshots, please refer to the Bugzilla.
The "Add Capacity" button does not exist after upgrading the OCP version [OCP4.11->OCP4.12].
Version-Release number of selected component (if applicable):
ODF Version:4.11.3-3 OCP Version: 4.12.0-0.nightly-2022-10-24-103753 Provider: AWS
How reproducible:
Steps to Reproduce:
1. Install ODF4.11 + OCP4.11
2. Upgrade OCP4.11 to OCP4.12
3. Log in to the OpenShift Web Console.
4. Click Operators → Installed Operators.
5. Click OpenShift Data Foundation Operator.
6. Click the Storage Systems tab.
7. Click the Action Menu (⋮) on the far right of the storage system name to extend the options menu. "Add Capacity" button does not exist on the menu. *Attached Screenshot
Actual results:
Expected results:
Additional info:
The dependency on openshift/api needs to be bumped in openshift/kubernetes in order to pull in the fix from https://issues.redhat.com/browse/OCPBUGS-3635.
This is a clone of issue OCPBUGS-5458. The following is the description of the original issue:
—
reported in https://coreos.slack.com/archives/C027U68LP/p1673010878672479
Description of problem:
I have an OpenShift cluster that was upgraded from version 4.8 to version 4.9.58. After the upgrade was done, the etcd pod on master1 isn't coming up and is crashlooping, and it gives the following error: {"level":"fatal","ts":"2023-01-06T12:12:58.709Z","caller":"etcdmain/etcd.go:204","msg":"discovery failed","error":"wal: max entry size limit exceeded, recBytes: 13279, fileSize(313430016) - offset(313418480) - padBytes(1) = entryLimit(11535)","stacktrace":"go.etcd.io/etcd/server/v3/etcdmain.startEtcdOrProxyV2\n\t/remote-source/cachito-gomod-with-deps/app/server/etcdmain/etcd.go:204\ngo.etcd.io/etcd/server/v3/etcdmain.Main\n\t/remote-source/cachito-gomod-with-deps/app/server/etcdmain/main.go:40\nmain.main\n\t/remote-source/cachito-gomod-with-deps/app/server/main.go:32\nruntime.main\n\t/usr/lib/golang/src/runtime/proc.go:225"}
Version-Release number of selected component (if applicable):
How reproducible:
Steps to Reproduce:
1. 2. 3.
Actual results:
Expected results:
Additional info:
Name: Routing
Description: Please change the "Routing" component to be a subcomponent "router" of the "Networking" component.
Component: change to "Networking".
Subcomponent: change to "router".
Existing fields (default assignee, default QA contact, default CC email list, etc.) should remain the same as they currently are.
Default Assignee: aos-network-edge-staff@bot.bugzilla.redhat.com
Default QA Contact: hongli@redhat.com
Default CC List: aos-network-edge-staff@bot.bugzilla.redhat.com
Additional Notes:
I filled in "Default CC email list" because the form validation would not permit me to omit it. However, it can be left empty in Bugzilla (it is currently empty).
If possible, we would like this change to be done prior to the Bugzilla-to-Jira migration to avoid the need to make the change after the migration.
https://github.com/openshift/origin/pull/27444 was intended to move the scaling test out of serial to its own test suite, but it added it to parallel, meaning it's running in all our normal upgrade jobs, causing them to frequently fail with repeating pathological events as well as greatly increasing their run time.
See https://github.com/openshift/origin/pull/27444#discussion_r991296925 for more info
For disconnected installations, we should not be able to provision machines successfully with publicIP:true. This was the behavior up to 4.11 and in 4.12 nightlies until around 17th Aug, but creation of machines with publicIP:true set in the machineset is now allowed.
Issue reproduced on - Cluster version - 4.12.0-0.nightly-2022-08-23-223922
It is always reproducible.
Steps:
Create machineset using yaml with
{"spec":{"providerSpec":{"value":{"publicIP": true}}}}
The machineset is created successfully and the machine is provisioned successfully.
This seems to be a regression bug; refer to https://bugzilla.redhat.com/show_bug.cgi?id=1889620
Here is the must gather log - https://drive.google.com/file/d/1UXjiqAx7obISTxkmBsSBuo44ciz9HD1F/view?usp=sharing
Here is the test that ran successfully for 4.11, with exactly the same profile, where machine creation failed with an InvalidConfiguration error (the expected behavior): https://mastern-jenkins-csb-openshift-qe.apps.ocp-c1.prod.psi.redhat.com/job/ocp-common/job/Runner/575822/console
We can confirm it is a disconnected cluster using the ImageContentSourcePolicy below; a lot of mirrors are used in it:
oc get ImageContentSourcePolicy image-policy-aosqe -o yaml
apiVersion: operator.openshift.io/v1alpha1
kind: ImageContentSourcePolicy
metadata:
  creationTimestamp: "2022-08-24T09:08:47Z"
  generation: 1
  name: image-policy-aosqe
  resourceVersion: "34648"
  uid: 20e45d6d-e081-435d-b6bb-16c4ca21c9d6
spec:
  repositoryDigestMirrors:
  - mirrors:
    - miyadav-2408a.mirror-registry.qe.azure.devcluster.openshift.com:6001/olmqe
    source: quay.io/olmqe
  - mirrors:
    - miyadav-2408a.mirror-registry.qe.azure.devcluster.openshift.com:6001/openshifttest
    source: quay.io/openshifttest
  - mirrors:
    - miyadav-2408a.mirror-registry.qe.azure.devcluster.openshift.com:6001/openshift-qe-optional-operators
    source: quay.io/openshift-qe-optional-operators
  - mirrors:
    - miyadav-2408a.mirror-registry.qe.azure.devcluster.openshift.com:6002
    source: registry.redhat.io
  - mirrors:
    - miyadav-2408a.mirror-registry.qe.azure.devcluster.openshift.com:6002
    source: registry.stage.redhat.io
  - mirrors:
    - miyadav-2408a.mirror-registry.qe.azure.devcluster.openshift.com:6002
    source: brew.registry.redhat.io
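For reproducing, the field can also be toggled on an existing MachineSet and then scaled up; standard oc commands, with the MachineSet name left as a placeholder:
oc -n openshift-machine-api patch machineset <machineset-name> --type merge \
    -p '{"spec":{"template":{"spec":{"providerSpec":{"value":{"publicIP":true}}}}}}'
oc -n openshift-machine-api scale machineset <machineset-name> --replicas=1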
Description of problem:
Currently in 4.11, the MAPI nutanix machine-controller does not provide the machine (VM)'s instance-type, region, zone, etc. labels to the Machine CR, and these columns are empty when viewing the Machine CRs via the CLI ("oc get Machine") or from the OCP cluster web console.
$ oc -n openshift-machine-api get machine
NAME                                     PHASE     TYPE   REGION   ZONE   AGE
demo-ocp-cluster-g1-77nws-master-0       Running                          133m
demo-ocp-cluster-g1-77nws-master-1       Running                          133m
demo-ocp-cluster-g1-77nws-master-2       Running                          133m
demo-ocp-cluster-g1-77nws-worker-2bsxn   Running                          129m
demo-ocp-cluster-g1-77nws-worker-75hr5   Running                          129m
demo-ocp-cluster-g1-77nws-worker-rg7b9   Running                          129m
We can add something like the below labels to the Machine CR in mapi-nutanix when reconciling the Machine CRs:
machine.openshift.io/instance-type: AHV
machine.openshift.io/region: <prism-central-address>
machine.openshift.io/zone: <prism-element-name/uuid>
Version-Release number of selected component (if applicable):
How reproducible:
Run “oc get Machine” from the CLI, or view the Machines resource from the OCP cluster web console.
Steps to Reproduce:
1. 2. 3.
Actual results:
The "Type", "Region", "Zone" columns are empty for each Machine CR.
Expected results:
The "Type", "Region", "Zone" columns showing data for each Machine CR.
Additional info:
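Once the labels are populated, the columns can be checked directly with label columns (standard oc flags; the label keys are the ones listed in the description):
oc -n openshift-machine-api get machines \
    -L machine.openshift.io/instance-type \
    -L machine.openshift.io/region \
    -L machine.openshift.io/zone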
This is a clone of issue OCPBUGS-1453. The following is the description of the original issue:
—
Description of problem:
TargetDown alert fired while it shouldn't have. Prometheus endpoints are not always properly unregistered, and the alert will therefore think that some Kube service endpoints are down.
Version-Release number of selected component (if applicable):
The problem has always been there.
How reproducible:
Not reproducible. Most of the time Prometheus endpoints are properly unregistered. The aim here is to make the TargetDown Prometheus expression more resilient; this can be tested against past metrics data in which the unregistration issue was encountered.
Steps to Reproduce:
N/A
Actual results:
TargetDown alert triggered while Kube service endpoints are all up & running.
Expected results:
TargetDown alert should not have been triggered.
Description of problem:
SYN packets for new TCP connections from inside the cluster to an external destination are dropped at random. After a few seconds (i.e. a few retries), they eventually succeed and no more packet drops happen. Hence, this is perceived as an excessively long TCP connection establishment delay.
Version-Release number of selected component (if applicable):
4.10.0
How reproducible:
Frequently on one particular cluster. Other clusters with an apparently similar configuration don't show the issue.
Steps to Reproduce:
1. Establish a TCP connection from a pod to an external destination.
2.
3.
Actual results:
SYN packets dropped, long TCP establishment time, leading to timeouts.
Expected results:
No drops
Additional info:
This becomes especially harmful because it impacts communication between openshift-apiserver (not to be confused with kube-apiserver) and etcd, since the former is inside the SDN and etcd isn't. More details will follow in comments.
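Typical data points to collect on an affected node while reproducing; standard tcpdump/conntrack invocations, with the destination IP left as a placeholder:
# Watch SYN packets towards the external destination
tcpdump -ni any 'tcp[tcpflags] & tcp-syn != 0 and host <external-ip>'
# conntrack statistics: growing insert_failed/drop counters point at the conntrack/SNAT path
conntrack -S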
Changelog between 3.5.5 and 3.5.4:
https://github.com/etcd-io/etcd/blob/main/CHANGELOG/CHANGELOG-3.5.md#v355-tbd
Changelog between 3.5.3 and 3.5.4:
https://github.com/etcd-io/etcd/blob/main/CHANGELOG/CHANGELOG-3.5.md#v354-2022-04-24
This is a clone of issue OCPBUGS-11536. The following is the description of the original issue:
—
This is a clone of issue OCPBUGS-11434. The following is the description of the original issue:
—
Description of problem:
node-exporter profiling shows that ~16% of CPU time is spent fetching details about btrfs mounts. The RHEL kernel doesn't have btrfs, so it's safe to disable this collector.
Version-Release number of selected component (if applicable):
How reproducible:
Steps to Reproduce:
1. 2. 3.
Actual results:
Expected results:
Additional info:
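node_exporter collectors are switched off with the standard --no-collector.<name> flags, so the change amounts to adding the flag to the node-exporter arguments managed by CMO; shown here only as the raw flag plus a way to inspect the current args (daemonset and container layout assumed to match the default openshift-monitoring deployment):
# Flag understood by node_exporter; disables the btrfs mount scraping
node_exporter --no-collector.btrfs
# Inspect the arguments currently set on the CMO-managed daemonset
oc -n openshift-monitoring get daemonset node-exporter -o jsonpath='{.spec.template.spec.containers[0].args}'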
This is a clone of issue OCPBUGS-10678. The following is the description of the original issue:
—
This is a clone of issue OCPBUGS-10655. The following is the description of the original issue:
—
Description of problem:
The dev console shows a list of samples. The user can create a sample based on a git repository. But some of these samples don't include a git repository reference and cannot be created.
Version-Release number of selected component (if applicable):
Tested different frontend versions against a 4.11 cluster; all of them (the oldest tested frontend was 4.8) show the samples without a git repository.
But the result also depends on the installed samples operator and installed ImageStreams.
How reproducible:
Always
Steps to Reproduce:
Actual results:
The git repository is not filled and the create button is disabled.
Expected results:
Samples without git repositories should not be displayed in the list.
Additional info:
The Git repository is saved as "sampleRepo" in the ImageStream tag section.
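To see which ImageStream tags actually carry a sample repository, the annotation can be listed directly; the annotation key sampleRepo comes from the description, and the jsonpath shape may need adjusting:
# List the sample repo annotation per ImageStream in the openshift namespace
oc -n openshift get imagestreams \
    -o jsonpath='{range .items[*]}{.metadata.name}{": "}{.spec.tags[*].annotations.sampleRepo}{"\n"}{end}'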
This is a clone of issue OCPBUGS-3084. The following is the description of the original issue:
—
Upstream Issue: https://github.com/kubernetes/kubernetes/issues/77603
Long log lines get corrupted by the kubelet when using '--timestamps'.
The root cause is that the buffer reads up to a new line. If the line is greater than 4096 bytes and '--timestamps' is turned on, the kubelet will write the timestamp and the partial log line. We will need to refactor the ReadLogs function to allow for a partial line read.
apiVersion: v1
kind: Pod
metadata:
  name: logs
spec:
  restartPolicy: Never
  containers:
  - name: logs
    image: fedora
    args:
    - bash
    - -c
    - 'for i in `seq 1 10000000`; do echo -n $i; done'
kubectl logs logs --timestamps
This is a clone of issue OCPBUGS-6011. The following is the description of the original issue:
—
Description of problem:
The 4.12.0 openshift-client package has kubectl 1.24.1 bundled in it when it should have 1.25.x
Version-Release number of selected component (if applicable):
4.12.0
How reproducible:
Very
Steps to Reproduce:
1. Download and unpack https://mirror.openshift.com/pub/openshift-v4/x86_64/clients/ocp/stable/openshift-client-linux-4.12.0.tar.gz
2. ./kubectl version
Actual results:
# ./kubectl version Client Version: version.Info{Major:"1", Minor:"24", GitVersion:"v1.24.1", GitCommit:"1928ac4250660378a7d8c3430478dfe77977cb2a", GitTreeState:"clean", BuildDate:"2022-12-07T05:08:22Z", GoVersion:"go1.18.7", Compiler:"gc", Platform:"linux/amd64"} Kustomize Version: v4.5.4
Expected results:
kubectl version 1.25.x
Additional info:
Description of problem:
The error message of "opm alpha render-veneer semver" is not correct: "semver &{%!q(*os.file=&{{{0 0 0} 3 {0} 0 1 true true true}" is meaningless and should not be printed.
Version-Release number of selected component (if applicable):
zhaoxia@xzha-mac operator-framework-olm % opm version Version: version.Version{OpmVersion:"2149aebcc", GitCommit:"2149aebcc71367e6fba8f5416374917dae1e6a1c", BuildDate:"2022-09-08T04:31:47Z", GoOs:"darwin", GoArch:"amd64"}
How reproducible:
always
Steps to Reproduce:
1. create file
zhaoxia@xzha-mac OCP-53915 % cat catalog-semver-veneer-1.yaml
Schema: olm.semver
Candidate:
  Bundles:
  - Image: quay.io/olmqe/nginxolm-operator-bundle:v0.0.1
  - Image: quay.io/olmqe/nginxolm-operator-bundle:v1.0.1
  - Image: quay.io/olmqe/nginxolm-operator-bundle:v1.0.1-alpha
  - Image: quay.io/olmqe/nginxolm-operator-bundle:v1.0.1-beta
  - Image: quay.io/olmqe/nginxolm-operator-bundle:v1.0.1-alpha20220829
  - Image: quay.io/olmqe/nginxolm-operator-bundle:v1.0.1-alpha20220830
Stable:
  Bundles:
  - Image: quay.io/olmqe/nginxolm-operator-bundle:v1.0.1-beta
Fast:
  Bundles:
  - Image: quay.io/olmqe/nginxolm-operator-bundle:v0.0.1
  - Image: quay.io/olmqe/nginxolm-operator-bundle:v1.0.1-beta
2. run "opm alpha render-veneer semver"
zhaoxia@xzha-mac operator-framework-olm % opm alpha render-veneer semver catalog-semver-veneer-1.yaml
2022/09/08 12:35:05 semver &{%!q(*os.file=&{{{0 0 0} 3 {0} <nil> 0 1 true true true} catalog-semver-veneer-1.yaml <nil> false false false})}: semver-render: unable to post-process bundle info: encountered bundle versions which differ only by build metadata, which cannot be ordered: [bundle version "1.0.1-alpha" cannot be compared to "1.0.1-alpha", bundle version "1.0.1-alpha+20220829" cannot be compared to "1.0.1-alpha"]
3.
Actual results:
"semver &{%!q(*os.file=&{{{0 0 0} 3 {0} 0 1 true true true}" is meaningless, should not be printed.
Expected results:
no error message "semver &{%!q(*os.file=&{{{0 0 0} 3 {0} 0 1 true true true}"
Additional info:
Description of problem:
If a customer creates a machine with a networks section like this:
  networks:
  - filter: {}
    noAllowedAddressPairs: false
    subnets:
    - filter: {}
      uuid: primary-subnet-uuid
  - filter: {}
    noAllowedAddressPairs: true
    subnets:
    - filter: {}
      uuid: other-subnet-uuid
  primarySubnet: primary-subnet-uuid
Then all the ports are created without the allowed address pairs.
Doing some research in the source code, I have found that:
- For each entry on the networks: section, networks are filtered as per its filter: section[1].
- Then, if the subnets: section of the network entry is not empty, for each of the network IDs found above[2], 2 things are done that are relevant for this situation:
  - The net ID is saved in a netsWithoutAllowedAddressPairs map[3]. That map is later checked while creating any port[4].
  - For each subnet entry that matches the network ID, a port is created[5].
So, the problematic behavior happens due to the following:
- Both entries in the networks array have empty filters. This means that both entries selected all the neutron networks.
- This configuration results in one port per subnet as expected because, in the later traversal of the subnets array of each entry[5], it is filtering by subnet and creating a single port as expected.
- However, the entry with "noAllowedAddressPairs: true" is selecting all the neutron networks, so it adds all of them to the netsWithoutAllowedAddressPairs map[3], regardless of the subnets filtering.
- As all the networks are in the noAllowedAddressPairs map, all the ports created for the VM have their allowed address pairs removed[4].
Why do we consider this behavior undesired? I understand that, if we create a port for a network that has no allowed pairs, we create all the other ports in the same network without the pairs. However, it is surprising that a port in a network has its allowed address pairs removed due to a setting in an entry that yielded no port on that network. In other words, one would expect that the same subnet filtering that happens on each network entry with regard to yielding ports for the VM would also apply to the noAllowedAddressPairs parameter.
Version-Release number of selected component (if applicable):
4.10.30
How reproducible:
Always
Steps to Reproduce:
1. Create a machineset like in the description 2. 3.
Actual results:
All ports have no address pairs
Expected results:
Only the port on the secondary subnet has no address pairs.
Additional info:
A simple workaround would be to just fill the filter so that a single network is selected for each network entry.
References:
[1] - https://github.com/openshift/cluster-api-provider-openstack/blob/f6b51710d4f395ded401347589447f5f41dd5c4c/pkg/cloud/openstack/clients/machineservice.go#L576
[2] - https://github.com/openshift/cluster-api-provider-openstack/blob/f6b51710d4f395ded401347589447f5f41dd5c4c/pkg/cloud/openstack/clients/machineservice.go#L580
[3] - https://github.com/openshift/cluster-api-provider-openstack/blob/f6b51710d4f395ded401347589447f5f41dd5c4c/pkg/cloud/openstack/clients/machineservice.go#L581-L583
[4] - https://github.com/openshift/cluster-api-provider-openstack/blob/f6b51710d4f395ded401347589447f5f41dd5c4c/pkg/cloud/openstack/clients/machineservice.go#L658-L660
[5] - https://github.com/openshift/cluster-api-provider-openstack/blob/f6b51710d4f395ded401347589447f5f41dd5c4c/pkg/cloud/openstack/clients/machineservice.go#L610-L625
This is a clone of issue OCPBUGS-1427. The following is the description of the original issue:
—
Description of problem:
The jump looks worst on GCP, but looking closer, Azure and AWS both jumped as well, just not as high.
Disruption data indicates that the image registry on GCP was averaging around 30-40 seconds of disruption during an upgrade, until Aug 27th when it jumped to 125-135 seconds and has remained there ever since.
We see similar spikes in ingress-to-console and ingress-to-oauth. NOTE: image registry backend is also behind ingress, so all three are ingress related disruption.
https://datastudio.google.com/s/uBC4zuBFdTE
These charts show the problem on Aug 27 for registry, ingress to console, and ingress to oauth.
sdn network type appears unaffected.
Something merged Aug 26-27 that caused a significant change for anything behind ingress using ovn on gcp.
Sample archive with both resources:
archives/compressed/3c/3cc4318d-e564-450b-b16e-51ef279b87fa/202209/30/200617.tar.gz
Sample query to find more archives:
with t as (
  select cluster_id, file_path, json_extract_scalar(content, '$.kind') as kind
  from raw_io_archives
  where date = '2022-09-30' and file_path like 'config/storage/%'
)
select cluster_id, count(*) as cnt from t group by cluster_id order by cnt desc;
Description of problem:
The ovn-kubernetes ovnkube-master containers have been continuously crashlooping since we updated to 4.11.0-0.okd-2022-10-15-073651.
Log Excerpt:
] [] [] [{kubectl-client-side-apply Update networking.k8s.io/v1 2022-09-12 12:25:06 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{}}},"f:spec":{"f:ingress":{},"f:policyTypes":{}}} }]},Spec:NetworkPolicySpec{PodSelector:{map[] []},Ingress:[]NetworkPolicyIngressRule{NetworkPolicyIngressRule{Ports:[]NetworkPolicyPort{},From:[]NetworkPolicyPeer{NetworkPolicyPeer{PodSelector:&v1.LabelSelector{MatchLabels:map[string]string{access: true,},MatchExpressions:[]LabelSelectorRequirement{},},NamespaceSelector:nil,IPBlock:nil,},},},},Egress:[]NetworkPolicyEgressRule{},PolicyTypes:[Ingress],},} &NetworkPolicy{ObjectMeta:{allow-from-openshift-ingress compsci-gradcentral a405f843-c250-40d7-8dd4-a759f764f091 217304038 1 2022-09-22 14:36:38 +0000 UTC <nil> <nil> map[] map[] [] [] [{openshift-apiserver Update networking.k8s.io/v1 2022-09-22 14:36:38 +0000 UTC FieldsV1 {"f:spec":{"f:ingress":{},"f:policyTypes":{}}} }]},Spec:NetworkPolicySpec{PodSelector:{map[] []},Ingress:[]NetworkPolicyIngressRule{NetworkPolicyIngressRule{Ports:[]NetworkPolicyPort{},From:[]NetworkPolicyPeer{NetworkPolicyPeer{PodSelector:nil,NamespaceSelector:&v1.LabelSelector{MatchLabels:map[string]string{policy-group.network.openshift.io/ingress: ,},MatchExpressions:[]LabelSelectorRequirement{},},IPBlock:nil,},},},},Egress:[]NetworkPolicyEgressRule{},PolicyTypes:[Ingress],},}]: cannot clean up egress default deny ACL name: error in transact with ops [{Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:acls Mutator:delete Value:{GoSet:[{GoUUID:60cb946a-46e9-4623-9ba4-3cb35f018ed6}]}}] Timeout:<nil> Where:[where column _uuid == {ccdd01bf-3009-42fb-9672-e1df38190cd7}] Until: Durable:<nil> Comment:<nil> Lock:<nil> UUIDName:} {Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:acls Mutator:delete Value:{GoSet:[{GoUUID:60cb946a-46e9-4623-9ba4-3cb35f018ed6}]}}] Timeout:<nil> Where:[where column _uuid == {10bbf229-8c1b-4c62-b36e-4ba0097722db}] Until: Durable:<nil> Comment:<nil> Lock:<nil> UUIDName:} {Op:delete Table:ACL Row:map[] Rows:[] Columns:[] Mutations:[] Timeout:<nil> Where:[where column _uuid == {7b55ba0c-150f-4a63-9601-cfde25f29408}] Until: Durable:<nil> Comment:<nil> Lock:<nil> UUIDName:} {Op:delete Table:ACL Row:map[] Rows:[] Columns:[] Mutations:[] Timeout:<nil> Where:[where column _uuid == {60cb946a-46e9-4623-9ba4-3cb35f018ed6}] Until: Durable:<nil> Comment:<nil> Lock:<nil> UUIDName:}] results [{Count:1 Error: Details: UUID:{GoUUID:} Rows:[]} {Count:1 Error: Details: UUID:{GoUUID:} Rows:[]} {Count:1 Error: Details: UUID:{GoUUID:} Rows:[]} {Count:1 Error: Details: UUID:{GoUUID:} Rows:[]} {Count:0 Error:referential integrity violation Details:cannot delete ACL row 7b55ba0c-150f-4a63-9601-cfde25f29408 because of 1 remaining reference(s) UUID:{GoUUID:} Rows:[]}] and errors []: referential integrity violation: cannot delete ACL row 7b55ba0c-150f-4a63-9601-cfde25f29408 because of 1 remaining reference(s)
Additional info:
https://github.com/okd-project/okd/issues/1372
The issue persisted through the update to 4.11.0-0.okd-2022-10-28-153352.
must-gather: https://nbc9-snips.cloud.duke.edu/snips/must-gather.local.2859117512952590880.zip
This is a clone of issue OCPBUGS-4491. The following is the description of the original issue:
—
Description of problem:
Version-Release number of selected component (if applicable):
How reproducible:
Steps to Reproduce:
1. 2. 3.
Actual results:
Expected results:
Additional info:
Description of problem:
Users on a disconnected cluster with a proxy could not import a Devfile (from GitHub).
The API call /api/devfile/ takes 30 seconds until it fails with 504 Gateway timeout.
Version-Release number of selected component (if applicable):
This might have been happening since 4.8.
Tested so far only on 4.12.0-0.nightly-2022-09-07-112008.
How reproducible:
Always
Steps to Reproduce:
Actual results:
Expected results:
Additional info:
The console Pod log contains this error:
E0909 10:28:18.448680 1 devfile-handler.go:74] Failed to parse devfile: failed to populateAndParseDevfile: Get "https://registry.devfile.io/devfiles/go": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
This is a clone of issue OCPBUGS-3621. The following is the description of the original issue:
—
Description of problem:
EUS-to-EUS upgrade(4.10.38-4.11.13-4.12.0-rc.0), after control-plane nodes are upgraded to 4.12 successfully, unpause the worker pool to get worker nodes updated. But worker nodes failed to be updated with degraded worker pool: ``` # ./oc get node NAME STATUS ROLES AGE VERSION jliu410-6hmkz-master-0.c.openshift-qe.internal Ready master 4h40m v1.25.2+f33d98e jliu410-6hmkz-master-1.c.openshift-qe.internal Ready master 4h40m v1.25.2+f33d98e jliu410-6hmkz-master-2.c.openshift-qe.internal Ready master 4h40m v1.25.2+f33d98e jliu410-6hmkz-worker-a-xdwvv.c.openshift-qe.internal Ready,SchedulingDisabled worker 4h31m v1.23.12+6b34f32 jliu410-6hmkz-worker-b-9hnb8.c.openshift-qe.internal Ready worker 4h31m v1.23.12+6b34f32 jliu410-6hmkz-worker-c-bdv4f.c.openshift-qe.internal Ready worker 4h31m v1.23.12+6b34f32 ... # ./oc get co machine-config machine-config 4.12.0-rc.0 True False True 3h41m Failed to resync 4.12.0-rc.0 because: error during syncRequiredMachineConfigPools: [timed out waiting for the condition, error pool worker is not ready, retrying. Status: (pool degraded: true total: 3, ready 0, updated: 0, unavailable: 1)] ... # ./oc get mcp NAME CONFIG UPDATED UPDATING DEGRADED MACHINECOUNT READYMACHINECOUNT UPDATEDMACHINECOUNT DEGRADEDMACHINECOUNT AGE master rendered-master-b81233204496767f2fe32fbb6cb088e1 True False False 3 3 3 0 4h10m worker rendered-worker-a2caae543a144d94c17a27e56038d4c4 False True True 3 0 0 1 4h10m ... # ./oc describe mcp worker Message: Reason: Status: True Type: Degraded Last Transition Time: 2022-11-14T07:19:42Z Message: Node jliu410-6hmkz-worker-a-xdwvv.c.openshift-qe.internal is reporting: "Error checking type of update image: error running skopeo inspect --no-tags --retry-times 5 --authfile /var/lib/kubelet/config.json docker://quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c01b0ae9870dbee5609c52b4d649334ce6854fff1237f1521929d151f6876daa: exit status 1\ntime=\"2022-11-14T07:42:47Z\" level=fatal msg=\"unknown flag: --no-tags\"\n" Reason: 1 nodes are reporting degraded status on sync Status: True Type: NodeDegraded ... # ./oc logs machine-config-daemon-mg2zn E1114 08:11:27.115577 192836 writer.go:200] Marking Degraded due to: Error checking type of update image: error running skopeo inspect --no-tags --retry-times 5 --authfile /var/lib/kubelet/config.json docker://quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c01b0ae9870dbee5609c52b4d649334ce6854fff1237f1521929d151f6876daa: exit status 1 time="2022-11-14T08:11:25Z" level=fatal msg="unknown flag: --no-tags" ```
Version-Release number of selected component (if applicable):
4.12.0-rc.0
How reproducible:
Steps to Reproduce:
1. EUS upgrade with path 4.10.38 -> 4.11.13 -> 4.12.0-rc.0 with paused worker pool
2. After the master pool upgrade succeeds, unpause the worker pool
3.
Actual results:
Worker pool upgrade failed
Expected results:
Worker pool upgrade succeed
Additional info:
This is a clone of issue OCPBUGS-2851. The following is the description of the original issue:
—
Description of problem:
The current implementation of registries.conf support is not working as expected. This bug report will outline the expectations of how we believe this should work.
The containers/image project defines a configuration file called registries.conf, which controls how image pulls can be redirected to another registry. Effectively the pull request for a given registry is redirected to another registry which can satisfy the image pull request instead. The specification for the registries.conf file is located here. For tools such as podman and skopeo, this configuration file allows those tools to indicate where images should be pulled from, and the containers/image project rewrites the image reference on the fly and tries to get the image from the first location it can, preferring these "alternate locations" and then falling back to the original location if one of the alternate locations can't satisfy the image request.
An important aspect of this redirection mechanism is that it allows the "host:port" and "namespace" portions of the image reference to be redirected. To be clear on the nomenclature used in the registries.conf specification, a namespace refers to zero or more slash separated sections leading up to the image name (which is called repo in the specification and has the tag or digest after it; see repo(:tag|@digest) below) and the host[:port] refers to the domain where the image registry is being hosted.
Example:
host[:port]/namespace[/namespace…]/repo(:tag|@digest)
For example, if we have an image called myimage@sha:1234 and the image normally resides in quay.io/foo/myimage@sha:1234, you could redirect the image pull request to myregistry.com/bar/baz/myimage@sha:1234. Note that in this example the alternate registry location is on a different host, and the namespace "path" is different too.
In a typical development scenario, image references within an OLM catalog should always point to a production location where the image is intended to be pulled from when a catalog is published publicly. Doing this prevents publishing a catalog which contains image references to internal repositories, which would never be accessible by a customer. By using the registries.conf redirection mechanism, we can perform testing even before the images are officially published to public locations, and we can redirect the image reference from a production location to an internal repository for testing purposes. Below is a simple example of a registries.conf file that redirects image pull requests away from prodlocation.io to preprodlocation.com:
[[registry]]
  location = "prodlocation.io/xx"
  insecure = false
  blocked = false
  mirror-by-digest-only = true
  prefix = ""
  [[registry.mirror]]
    location = "preprodlocation.com/xx"
    insecure = false
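The redirection can be exercised outside of oc mirror with any containers/image based tool by pointing it at the same file; CONTAINERS_REGISTRIES_CONF is the standard override environment variable, and the image reference below is a placeholder:
# skopeo resolves the mirror from registries.conf before falling back to the original location
CONTAINERS_REGISTRIES_CONF=./registries.conf \
    skopeo inspect docker://prodlocation.io/xx/myimage@sha256:<digest>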
Version-Release number of selected component (if applicable):
4.12
How reproducible:
Always
Steps to Reproduce:
oc mirror -c ImageSetConfiguration.yaml --use-oci-feature --oci-feature-action mirror --oci-insecure-signature-policy --oci-registries-config registries.conf --dest-skip-tls docker://localhost:5000/example/test
Example registries.conf
[[registry]]
  prefix = ""
  insecure = false
  blocked = false
  location = "prod.com/abc"
  mirror-by-digest-only = true
  [[registry.mirror]]
    location = "internal.exmaple.io/cp"
    insecure = false
[[registry]]
  prefix = ""
  insecure = false
  blocked = false
  location = "quay.io"
  mirror-by-digest-only = true
  [[registry.mirror]]
    location = "internal.exmaple.io/abcd"
    insecure = false
Actual results:
images are not pulled from "internal" registry
Expected results:
images should be pulled from "internal" registry
Additional info:
The current implementation in oc mirror creates its own structs to approximate the ones provided by the containers/image project, but it might not be necessary to do that. Since the oc mirror project already uses containers/image as a dependency, it could leverage the FindRegistry function, which takes a image reference, loads the registries.conf information and returns the most appropriate [[registry]] reference (in the form of Registry struct) or nil if no match was found. Obviously custom processing will be necessary to do something useful with the Registry instance. Using this code is not a requirement, just a suggestion of another possible path to load the configuration.
The in-repo UPI docs point to the Terraform configs. If we remove those, we should update the docs to not use them.
Description of problem:
The kebab menu for Helm repositories is showing inconsistent behavior.
Version-Release number of selected component (if applicable): 4.12
How reproducible: Always
Steps to Reproduce:
1. Create some helm chart repository
2. Go to the Helm page and switch to the repositories tab
3. Open kebab menu for different repos
Actual results:
Menus are overlapping
Expected results:
The menu should work properly; one menu should close before opening a new one
Additional info:
A video has been added for reference.
Description of problem:
Seeing intermittently during cluster installs Network operator stuck in Progressing with network 4.12.0-0.nightly-2022-10-25-210451 True True False 117m DaemonSet "/openshift-network-diagnostics/network-check-target" is not available (awaiting 1 nodes) MG: http://shell.lab.bos.redhat.com/~anusaxen/must-gather.local.5450303633101217331/ iptables-save on master-2 node - http://shell.lab.bos.redhat.com/~anusaxen/iptables-save pod events Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal Scheduled 129m default-scheduler Successfully assigned openshift-network-diagnostics/network-check-target-gnld6 to qe-anurag114e-9xkz4-master-2.c.openshift-qe.internal Warning FailedMount 128m (x7 over 129m) kubelet MountVolume.SetUp failed for volume "kube-api-access-kfg5s" : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Warning NetworkNotReady 128m (x18 over 129m) kubelet network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? Warning ErrorAddingLogicalPort 127m (x2 over 127m) controlplane addLogicalPort failed for openshift-network-diagnostics/network-check-target-gnld6: unable to parse node L3 gw annotation: k8s.ovn.org/l3-gateway-config annotation not found for node "qe-anurag114e-9xkz4-master-2.c.openshift-qe.internal" Normal AddedInterface 127m multus Add eth0 [10.130.0.3/23] from ovn-kubernetes Warning ProbeError 9m (x16 over 71m) kubelet Readiness probe error: Get "http://10.130.0.3:8080/": dial tcp 10.130.0.3:8080: i/o timeout (Client.Timeout exceeded while awaiting headers) body: Warning ProbeError 4m (x717 over 126m) kubelet Readiness probe error: Get "http://10.130.0.3:8080/": context deadline exceeded (Client.Timeout exceeded while awaiting headers) body:
Version-Release number of selected component (if applicable):
4.12.0-0.nightly-2022-10-25-210451
How reproducible:
rare
Steps to Reproduce:
1. Install OCP with OVNKubernetes with HO enabled:
   defaultNetwork:
     type: OVNKubernetes
     ovnKubernetesConfig:
       hybridOverlayConfig:
         hybridClusterNetwork: []
2.
3.
Actual results:
Installation stuck due to network-check-target issue
Expected results:
Installation should succeed
Additional info:
Will add additional logs
Description of problem:
After IPI installing a 3-node hub cluster and converting it to dual stack, an fd69::/125 address is seen on the bare metal br-ex interface.
Version-Release number of selected component (if applicable):
4.12.0
How reproducible:
Randomly reproduced; the IP is assigned on one of the 3 master hub cluster nodes.
Steps to Reproduce:
1. IPI install 4.12.0
2. Use the Convert from IPv4/IPv6 dual stack procedure.
3.
Actual results:
Check for the IP fd69::/125 in the br-ex interface
OVN CrashLoopBackOff
Expected results:
The IP is an internal OVNKUBE IP, and it should not appear on the interface.
fd69::2/125 should be present on br-ex, but make sure fd69::2 does not :
Additional info:
This is one of the IPv6 issues discovered; the other issue is linked here as well.
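Checking for the stray address on an affected master is a one-liner; br-ex is the standard OVN-Kubernetes external bridge:
# Look for any fd69:: masquerade-subnet address that leaked onto br-ex
ip -6 addr show dev br-ex | grep fd69 || echo "no fd69 address on br-ex"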
Description of problem:
During an upgrade from 4.12.0 to 4.12.1 a customer has observed crashlooping ovn-master pods with the following error message $ oc logs -n openshift-ovn-kubernetes ovnkube-master-bx99r -c ovnkube-master --tail=20 -p :Transaction causes multiple rows in "IGMP_Group" table to have identical values (mrouters, 038b16fa-6aba-4244-9d4f-00a1e2cbf9a2, and []) for index on columns "address", "datapath", and "chassis". First row, with UUID 7e9a18fa-e58c-4547-a7cb-afa934b6cdc9, had the following index values before the trans action: mrouters, 038b16fa-6aba-4244-9d4f-00a1e2cbf9a2, and d9755997-e909-4d0c-8770-82a902d69a90. Second row, with UUID 84da3622-3ac7-41f0-a6b5-536a2d5f9137, had the following index values before the transaction: mrouters, 038b16fa-6aba-4244-9d4f-00a1e2cbf9a2, and 578d4dd9-cc02-4bcc-8a9c-08dcc3a94190. UUID:{GoUUID:} Rows:[]}] and errors []: constraint violation: Transaction causes multiple rows in "IGMP_Group" table to have identical values (mrouters, 038b16fa-6aba-4244-9d4f-00a1e2cbf9a2, and []) for index on columns "address", "datapath", and "chassis". First row, with UUID 7e9a18fa-e58c-4547-a7cb-afa934b6cdc9, had the following index values before the transaction: mrouters, 038b16fa-6aba-4244-9d4f-00a1e2cbf9a2, and d9755997-e909-4d0c-8770-82a902d69a90. Second row, with UUID 84da3622-3ac7-41f0-a6b5-536a2d5f9137, ha d the following index values before the transaction: mrouters, 038b16fa-6aba-4244-9d4f-00a1e2cbf9a2, and 578d4dd9-cc02-4bcc-8a9c-08dcc3a94190.
Version-Release number of selected component (if applicable):
4.12.0
How reproducible:
Unknown
Steps to Reproduce:
1. Upgrade from 4.12.0 to 4.12.1
2.
3.
Actual results:
crashlooping ovnkube-master pods
Expected results:
functional ovnkube-master pods
Additional info:
This cluster was upgraded from 4.11 to 4.12.0 then to 4.12.1. The attached case has a must-gather.
Description of problem:
Created a network LoadBalancer service, but always get a connection timeout when accessing the LB.
Version-Release number of selected component (if applicable):
4.12.0-0.nightly-2022-10-27-135134
How reproducible:
100%
Steps to Reproduce:
1. Create a custom ingresscontroller that uses a Network LB service:
$ Domain="nlb.$(oc get dns.config cluster -o=jsonpath='{.spec.baseDomain}')"
$ oc create -f - << EOF
kind: IngressController
apiVersion: operator.openshift.io/v1
metadata:
  name: nlb
  namespace: openshift-ingress-operator
spec:
  domain: ${Domain}
  replicas: 3
  endpointPublishingStrategy:
    loadBalancer:
      providerParameters:
        aws:
          type: NLB
        type: AWS
      scope: External
    type: LoadBalancerService
EOF
2. Wait for the ingress NLB service to be ready.
$ oc -n openshift-ingress get svc/router-nlb
NAME         TYPE           CLUSTER-IP      EXTERNAL-IP                                                                     PORT(S)                      AGE
router-nlb   LoadBalancer   172.30.75.134   a765a5eb408aa4a68988e35b72672379-78a76c339ded64fa.elb.us-east-2.amazonaws.com   80:31833/TCP,443:32499/TCP   117s
3. curl the network LB
$ curl a765a5eb408aa4a68988e35b72672379-78a76c339ded64fa.elb.us-east-2.amazonaws.com -I
<hang>
Actual results:
Connection time out
Expected results:
curl should return 503
Additional info:
the NLB service has the annotation: service.beta.kubernetes.io/aws-load-balancer-type: nlb
Description of problem:
Currently openshift-installer and the ARO installer have diverged code bases. In an effort from the ARO team to reduce/remove this divergence, we are patching openshift-installer. ARO uses a newer version of the Azure SDK. We need to backport this change to previous versions of openshift-installer.
Version-Release number of selected component (if applicable):
See affected versions
How reproducible:
N/A
Steps to Reproduce:
N/A
Actual results:
N/A
Expected results:
N/A
Additional info:
This is a clone of issue OCPBUGS-266. The following is the description of the original issue:
—
Description of problem: I am working with a customer who uses the web console. From the Developer Perspective's Project Access tab, they cannot differentiate between users and groups and furthermore cannot add groups from this web console. This has led to confusion whether existing resources were in fact users or groups, and furthermore they have added users when they intended to add groups instead. What we really need is a third column in the Project Access tab that says whether a resource is a user or group.
Version-Release number of selected component (if applicable): This is an issue in OCP 4.10 and 4.11, and I presume future versions as well
How reproducible: Every time. My customer is running on ROSA, but I have determined this issue to be general to OpenShift.
Steps to Reproduce:
From the oc cli, I create a group and add a user to it.
$ oc adm groups new techlead
group.user.openshift.io/techlead created
$ oc adm groups add-users techlead admin
group.user.openshift.io/techlead added: "admin"
$ oc get groups
NAME USERS
cluster-admins
dedicated-admins admin
techlead admin
I create a new namespace so that I can assign a group project level access:
$ oc new-project my-namespace
$ oc adm policy add-role-to-group edit techlead -n my-namespace
I then went to the web console -> Developer perspective -> Project -> Project Access. I verified the rolebinding named 'edit' is bound to a group named 'techlead'.
$ oc get rolebinding
NAME ROLE AGE
admin ClusterRole/admin 15m
admin-dedicated-admins ClusterRole/admin 15m
admin-system:serviceaccounts:dedicated-admin ClusterRole/admin 15m
dedicated-admins-project-dedicated-admins ClusterRole/dedicated-admins-project 15m
dedicated-admins-project-system:serviceaccounts:dedicated-admin ClusterRole/dedicated-admins-project 15m
edit ClusterRole/edit 2m18s
system:deployers ClusterRole/system:deployer 15m
system:image-builders ClusterRole/system:image-builder 15m
system:image-pullers ClusterRole/system:image-puller 15m
$ oc get rolebinding edit -o yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
creationTimestamp: "2022-08-15T14:16:56Z"
name: edit
namespace: my-namespace
resourceVersion: "108357"
uid: 4abca27d-08e8-43a3-b9d3-d20d5c294bbe
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: edit
subjects:
Now back to the CLI, I view the newly created rolebinding named 'developer-view-c15b720facbc8deb', and find that the "View" role is assigned to a user named 'developer', rather than a group.
$ oc get rolebinding
NAME ROLE AGE
admin ClusterRole/admin 17m
admin-dedicated-admins ClusterRole/admin 17m
admin-system:serviceaccounts:dedicated-admin ClusterRole/admin 17m
dedicated-admins-project-dedicated-admins ClusterRole/dedicated-admins-project 17m
dedicated-admins-project-system:serviceaccounts:dedicated-admin ClusterRole/dedicated-admins-project 17m
edit ClusterRole/edit 4m25s
developer-view-c15b720facbc8deb ClusterRole/view 90s
system:deployers ClusterRole/system:deployer 17m
system:image-builders ClusterRole/system:image-builder 17m
system:image-pullers ClusterRole/system:image-puller 17m
$ oc get rolebinding developer-view-c15b720facbc8deb -o yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
creationTimestamp: "2022-08-15T14:19:51Z"
name: developer-view-c15b720facbc8deb
namespace: my-namespace
resourceVersion: "113298"
uid: cc2d1b37-922b-4e9b-8e96-bf5e1fa77779
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: view
subjects:
So in conclusion, from the Project Access tab, we're unable to add groups and unable to differentiate between users and groups. This is in essence our ask for this RFE.
Actual results:
Developer perspective -> Project -> Project Access tab shows a list of resources which can be users or groups, but does not differentiate between them. Furthermore, when we add resources, they are only users and there is no way to add a group from this tab in the web console.
Expected results:
Should have the ability to add groups and differentiate between users and groups. Ideally, we're looking at a third column for user or group.
Additional info:
Description of problem:
When the log line number is too big, the number overlaps with the cut-off line in the log viewer.
Version-Release number of selected component (if applicable):
4.12.0-0.nightly-2022-08-15-150248
How reproducible:
Always
Steps to Reproduce:
1. Go to a pod log page with lots of logs, such as a pod in the openshift-cluster-version namespace. Check the log line numbers.
2.
3.
Actual results:
1. When line number is too big, it will overlap with cut-off line.
Expected results:
1. Should have no overlaps in logs
Additional info:
This is a clone of issue OCPBUGS-7729. The following is the description of the original issue:
—
Description of problem:
Etcd's liveness probe should be removed.
Version-Release number of selected component (if applicable):
4.11
Additional info:
When the master hosts hit high CPU load, this can cause a cascading restart loop for etcd and kube-apiserver due to the etcd liveness probes failing. Because of this loop, load on the masters stays high, since the API servers and controllers keep restarting over and over again. There is no reason for etcd to have a liveness probe; we removed this probe in 3.11 due to issues like this.
At runtime we know the version of OpenShift that we're installing, so we can dynamically generate the OS_IMAGES environment variable to point at the image for the current release. This will prevent having to add to the hard-coded list for every release.
Description of problem:
The default catalogSources in the openshift-4.12 payload are using the 4.11 image tag.
Version-Release number of selected component (if applicable):
4.12.0
How reproducible:
Always
Steps to Reproduce:
1. Install a 4.12 OpenShift cluster
2. Inspect the default catalogSource image tags.
Actual results:
The default catalogSources reference the 4.11 image tags.
Expected results:
The default catalogSources reference the 4.12 image tags.
Additional info:
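The tags in use can be confirmed straight from the CatalogSource objects; spec.image is the standard field:
# Show the image reference for each default catalog source
oc -n openshift-marketplace get catalogsources \
    -o jsonpath='{range .items[*]}{.metadata.name}{" -> "}{.spec.image}{"\n"}{end}'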
This is a clone of issue OCPBUGS-4181. The following is the description of the original issue:
—
Description of problem:
After configuring a webhook receiver in Alertmanager to send alerts to an external tool, a customer noticed that the received alerts have "https:///<console-url>" as their source (notice the 3 slashes).
Version-Release number of selected component (if applicable):
OCP 4.10
How reproducible:
Always
Steps to Reproduce:
1. 2. 3.
Actual results:
https:///<console-url>
Expected results:
https://<console-url>
Additional info:
After investigating I discovered that the problem might be in the CMO code:
→ oc get Alertmanager main -o yaml | grep externalUrl
  externalUrl: https:/console-openshift-console.apps.jakumar-2022-11-27-224014.devcluster.openshift.com/monitoring
→ oc get Prometheus k8s -o yaml | grep externalUrl
  externalUrl: https:/console-openshift-console.apps.jakumar-2022-11-27-224014.devcluster.openshift.com/monitoring
This is a clone of issue OCPBUGS-4758. The following is the description of the original issue:
—
Description of problem:
See: https://issues.redhat.com/browse/CPSYN-143
tl;dr: Based on the previous direction that 4.12 was going to enforce PSA restricted by default, OLM had to make a few changes, because the way we run catalog pods (and we have to run them that way because of how the opm binary worked) was incompatible with running restricted:
1) We set openshift-marketplace to enforce restricted (this was our choice, we didn't have to do it, but we did).
2) We updated the opm binary so catalog images using a newer opm binary don't have to run privileged.
3) We added a field to catalogsource that allows you to choose whether to run the pod privileged (legacy mode) or restricted. The default is restricted. We made that the default so that users running their own catalogs in their own NSes (which would be default PSA enforcing) would be able to be successful without needing their NS upgraded to privileged.
Unfortunately this means:
1) Legacy catalog images (i.e. using older opm binaries) won't run on 4.12 by default (the catalogsource needs to be modified to specify legacy mode).
2) Legacy catalog images cannot be run in the openshift-marketplace NS since that NS does not allow privileged pods. This means legacy catalogs can't contribute to the global catalog (since catalogs must be in that NS to be in the global catalog).
Before 4.12 ships we need to:
1) Remove the PSA restricted label on the openshift-marketplace NS.
2) Change the catalogsource securitycontextconfig mode default to use "legacy" as the default, not restricted.
This gives catalog authors another release to update to using a newer opm binary that can run restricted, or get their NSes explicitly labeled as privileged (4.12 will not enforce restricted, so in 4.12 using the legacy mode will continue to work). In 4.13 we will need to revisit what we want the default to be, since at that point catalogs will start breaking if they try to run in legacy mode in most NSes.
Version-Release number of selected component (if applicable):
How reproducible:
Steps to Reproduce:
1. 2. 3.
Actual results:
Expected results:
Additional info:
This is a clone of issue OCPBUGS-11636. The following is the description of the original issue:
—
Description of problem:
The ACLs are disabled for all newly created s3 buckets, this causes all OCP installs to fail: the bootstrap ignition can not be uploaded: level=info msg=Creating infrastructure resources... level=error level=error msg=Error: error creating S3 bucket ACL for yunjiang-acl413-4dnhx-bootstrap: AccessControlListNotSupported: The bucket does not allow ACLs level=error msg= status code: 400, request id: HTB2HSH6XDG0Q3ZA, host id: V6CrEgbc6eyfJkUbLXLxuK4/0IC5hWCVKEc1RVonSbGpKAP1RWB8gcl5dfyKjbrLctVlY5MG2E4= level=error level=error msg= with aws_s3_bucket_acl.ignition, level=error msg= on main.tf line 62, in resource "aws_s3_bucket_acl" "ignition": level=error msg= 62: resource "aws_s3_bucket_acl" ignition { level=error level=error msg=failed to fetch Cluster: failed to generate asset "Cluster": failure applying terraform for "bootstrap" stage: failed to create cluster: failed to apply Terraform: exit status 1 level=error level=error msg=Error: error creating S3 bucket ACL for yunjiang-acl413-4dnhx-bootstrap: AccessControlListNotSupported: The bucket does not allow ACLs level=error msg= status code: 400, request id: HTB2HSH6XDG0Q3ZA, host id: V6CrEgbc6eyfJkUbLXLxuK4/0IC5hWCVKEc1RVonSbGpKAP1RWB8gcl5dfyKjbrLctVlY5MG2E4= level=error level=error msg= with aws_s3_bucket_acl.ignition, level=error msg= on main.tf line 62, in resource "aws_s3_bucket_acl" "ignition": level=error msg= 62: resource "aws_s3_bucket_acl" ignition {
Version-Release number of selected component (if applicable):
4.11+
How reproducible:
Always
Steps to Reproduce:
1.Create a cluster via IPI
Actual results:
install fail
Expected results:
install succeed
Additional info:
Heads-Up: Amazon S3 Security Changes Are Coming in April of 2023 - https://aws.amazon.com/blogs/aws/heads-up-amazon-s3-security-changes-are-coming-in-april-of-2023/ https://docs.aws.amazon.com/AmazonS3/latest/userguide/object-ownership-error-responses.html - After you apply the bucket owner enforced setting for Object Ownership, ACLs are disabled.
Description of problem:
Agent-based installation fails during a 3+1 deployment. The machine-api-operator becomes degraded because its minimum worker replica count is 2, while a 3+1 deployment defines only one worker node.
Version-Release number of selected component (if applicable):
How reproducible:
Always
Steps to Reproduce:
1. Create agent.iso (openshift-install agent create image) using install-config.yaml and agent-config.yaml (PFA sample files)
2. Deploy a 3+1 cluster using agent.iso
3. Execute "openshift-install agent wait-for install-complete" command to wait for install complete.
Actual results:
Getting below error: ERROR Cluster operator kube-controller-manager Degraded is True with GarbageCollector_Error: GarbageCollectorDegraded: error fetching rules: Get "https://thanos-querier.openshift-monitoring.svc:9091/api/v1/rules": dial tcp: lookup thanos-querier.openshift-monitoring.svc on 172.30.0.10:53: no such host INFO Cluster operator machine-api Progressing is True with SyncingResources: Progressing towards operator: 4.12.0-0.nightly-2022-10-05-053337 ERROR Cluster operator machine-api Degraded is True with SyncingFailed: Failed when progressing towards operator: 4.12.0-0.nightly-2022-10-05-053337 because minimum worker replica count (2) not yet met: current running replicas 1, waiting for [] INFO Cluster operator machine-api Available is False with Initializing: Operator is initializing INFO Cluster operator monitoring Available is False with UpdatingPrometheusOperatorFailed: Rollout of the monitoring stack failed and is degraded. Please investigate the degraded status error. ERROR Cluster operator monitoring Degraded is True with UpdatingPrometheusOperatorFailed: Failed to rollout the stack. Error: updating prometheus operator: reconciling Prometheus Operator Admission Webhook Deployment failed: updating Deployment object failed: waiting for DeploymentRollout of openshift-monitoring/prometheus-operator-admission-webhook: got 1 unavailable replicas INFO Cluster operator monitoring Progressing is True with RollOutInProgress: Rolling out the stack. INFO Cluster operator network ManagementStateDegraded is False with : ERROR Cluster initialization failed because one or more operators are not functioning properly. ERROR The cluster should be accessible for troubleshooting as detailed in the documentation linked below, ERROR https://docs.openshift.com/container-platform/latest/support/troubleshooting/troubleshooting-installations.html
Expected results:
3+1 deployment should be successful.
Additional info:
There is a check in the machine-api-operator that requires the worker node count to be at least 2, which prevents the 3+1 deployment: https://github.com/openshift/machine-api-operator/blob/master/pkg/operator/sync.go#L322
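For illustration, a minimal Go sketch of how such a gate could take the requested worker count into account; the names and logic are assumptions, not the operator's actual sync code:

```go
package main

import "fmt"

// checkMinimumWorkers sketches the kind of gate described above. The current
// behaviour effectively requires two running workers; for a 3+1 topology the
// minimum should instead be derived from the number of workers the install
// actually requested.
func checkMinimumWorkers(requestedWorkers, runningWorkers int) error {
	minWorkers := 2
	if requestedWorkers < minWorkers {
		// A deployment that only defines one worker should only be required
		// to have that one worker running.
		minWorkers = requestedWorkers
	}
	if runningWorkers < minWorkers {
		return fmt.Errorf("minimum worker replica count (%d) not yet met: current running replicas %d",
			minWorkers, runningWorkers)
	}
	return nil
}

func main() {
	// 3+1 deployment: one worker requested, one worker running.
	if err := checkMinimumWorkers(1, 1); err != nil {
		fmt.Println("degraded:", err)
		return
	}
	fmt.Println("worker replica check passed")
}
```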
Description of problem:
When installing a 4.12 IPv6 single-stack disconnected cluster, an etcd member is in an abnormal status:
E1026 03:35:58.409977 1 etcdmemberscontroller.go:73] Unhealthy etcd member found: openshift-qe-057.arm.eng.rdu2.redhat.com, took=, err=create client failure: failed to make etcd client for endpoints https://[26xx:52:0:1eb:3xx3:5xx:fxxe:7550]:2379: context deadline exceeded
How reproducible:
Not always
Steps to Reproduce:
See the description above.
Actual results:
As described in the title.
Expected results:
The etcd cluster operator status is normal.
This bug is a backport clone of [Bugzilla Bug 2050230](https://bugzilla.redhat.com/show_bug.cgi?id=2050230). The following is the description of the original bug:
—
Description of problem:
In a large cluster, sdn daemonset can DoS the kube-apiserver with un-paginated LIST calls on high count resources.
Version-Release number of selected component (if applicable):
How reproducible:
NA
Steps to Reproduce:
NA
Actual results:
The Kube API Server and OpenShift API Server in one of the clusters keep restarting without a proper exception. The cluster is not accessible.
Expected results:
Kube API Server and Openshift API Server should be stable.
Additional info:
This is a clone of issue OCPBUGS-5164. The following is the description of the original issue:
—
Description of problem:
It looks like the ODC doesn't register the KNATIVE_SERVING and KNATIVE_EVENTING flags. Those are based on the KnativeServing and KnativeEventing CRs, but the console looks for the v1alpha1 version of those: https://github.com/openshift/console/blob/f72519fdf2267ad91cc0aa51467113cc36423a49/frontend/packages/knative-plugin/console-extensions.json#L6-L8
This PR https://github.com/openshift-knative/serverless-operator/pull/1695 moved the CRs to v1beta1, which breaks the ODC discovery.
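As a hedged illustration of the general direction of a fix (probe the served API versions instead of hard-coding v1alpha1), here is a small Go sketch using the Kubernetes discovery client; the operator.knative.dev group and kind values are assumptions for the example:

```go
package main

import (
	"fmt"

	"k8s.io/client-go/discovery"
	"k8s.io/client-go/tools/clientcmd"
)

// detectKnativeServing reports whether the KnativeServing API is served at the
// given group/version. Checking v1beta1 first and falling back to v1alpha1 is
// the general idea; the group/kind below are assumptions for illustration.
func detectKnativeServing(dc discovery.DiscoveryInterface, groupVersion string) bool {
	resources, err := dc.ServerResourcesForGroupVersion(groupVersion)
	if err != nil {
		return false
	}
	for _, r := range resources.APIResources {
		if r.Kind == "KnativeServing" {
			return true
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	dc, err := discovery.NewDiscoveryClientForConfig(cfg)
	if err != nil {
		panic(err)
	}
	for _, gv := range []string{"operator.knative.dev/v1beta1", "operator.knative.dev/v1alpha1"} {
		fmt.Printf("%s served: %v\n", gv, detectKnativeServing(dc, gv))
	}
}
```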
Version-Release number of selected component (if applicable):
OpenShift 4.8, Serverless Operator 1.27
Additional info:
https://coreos.slack.com/archives/CHGU4P8UU/p1671634903447019
Grafana has been removed in 4.11 and we can safely remove any logic in CMO that deals with Grafana (except dashboards since they are used by OCP console).
We also need to communicate to ProdSec and ART that Grafana is no longer part of OCP.
Description of problem:
The IBM VPC block CSI driver was rebased to v5.0.0 in this PR: https://github.com/openshift/ibm-vpc-block-csi-driver/pull/26 However, we're missing the manifest changes from this PR in 4.12 (delayed by CI issues): https://github.com/openshift/ibm-vpc-block-csi-driver-operator/pull/45 That includes some important changes: - add csi-snapshotter sidecar and snapshotter manifests - only deploy volumesnapshotclass if CRD exists - set consistent imagePullPolicy in deployment manifests - enable topology tests
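On the "only deploy volumesnapshotclass if CRD exists" point, here is a hedged Go sketch of such a pre-check using the apiextensions client; it illustrates the idea and is not the operator's actual code:

```go
package main

import (
	"context"
	"fmt"

	apiextensionsclient "k8s.io/apiextensions-apiserver/pkg/client/clientset/clientset"
	"k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/tools/clientcmd"
)

// volumeSnapshotClassCRDExists checks whether the VolumeSnapshotClass CRD is
// installed before an operator tries to create a VolumeSnapshotClass object.
func volumeSnapshotClassCRDExists(ctx context.Context, client apiextensionsclient.Interface) (bool, error) {
	_, err := client.ApiextensionsV1().CustomResourceDefinitions().Get(
		ctx, "volumesnapshotclasses.snapshot.storage.k8s.io", metav1.GetOptions{})
	if errors.IsNotFound(err) {
		return false, nil
	}
	if err != nil {
		return false, err
	}
	return true, nil
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client, err := apiextensionsclient.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ok, err := volumeSnapshotClassCRDExists(context.Background(), client)
	if err != nil {
		panic(err)
	}
	fmt.Println("deploy VolumeSnapshotClass:", ok)
}
```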
Version-Release number of selected component (if applicable):
How reproducible:
Steps to Reproduce:
1. 2. 3.
Actual results:
Expected results:
Additional info:
This is a clone of issue OCPBUGS-4913. The following is the description of the original issue:
—
Description of problem:
Currently the Terraform code waits for 45 seconds, but anecdotal data suggest we should actually wait for 3 minutes in order to avoid "failures" due to occasional slow boots of a new VM in PowerVS.
Version-Release number of selected component (if applicable):
How reproducible:
often enough
Steps to Reproduce:
1. Run the IPI installer against PowerVS
2. Look for "empty tuple" in the error message when it fails to reach `bootstrap-complete`
Actual results:
Expected results:
VMs should always have an IP address assigned by DHCP after a certain wait.
Additional info:
The change has already been merged into master/4.13, but 4.12 also needs this for planned PowerVS IPI GA on the z-stream.
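As a rough sketch of the general shape of the change (polling up to a longer ceiling instead of a fixed 45-second sleep), here is a self-contained Go example; the function and the lookup callback are hypothetical stand-ins, not the installer's Terraform logic:

```go
package main

import (
	"errors"
	"fmt"
	"time"
)

// waitForDHCPLease polls until the VM reports an IP address or the timeout
// expires. A fixed 45-second wait can be too short for occasionally slow
// PowerVS boots; polling with a ~3 minute ceiling is the general idea.
// lookup is a stand-in for whatever reports the instance's addresses.
func waitForDHCPLease(lookup func() (string, error), timeout, interval time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	for {
		ip, err := lookup()
		if err == nil && ip != "" {
			return ip, nil
		}
		if time.Now().After(deadline) {
			return "", errors.New("timed out waiting for DHCP lease")
		}
		time.Sleep(interval)
	}
}

func main() {
	start := time.Now()
	// Fake lookup: pretend the address shows up after a short "boot" delay.
	lookup := func() (string, error) {
		if time.Since(start) > 3*time.Second { // shortened for the example
			return "192.168.0.10", nil
		}
		return "", nil
	}
	ip, err := waitForDHCPLease(lookup, 3*time.Minute, time.Second)
	if err != nil {
		panic(err)
	}
	fmt.Println("got IP:", ip)
}
```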
This is a clone of issue OCPBUGS-6222. The following is the description of the original issue:
—
Please review the following PR: https://github.com/openshift/alibaba-cloud-csi-driver/pull/20
The PR has been automatically opened by ART (#aos-art) team automation and indicates
that the image(s) being used downstream for production builds are not consistent
with the images referenced in this component's github repository.
Differences in upstream and downstream builds impact the fidelity of your CI signal.
If you disagree with the content of this PR, please contact @release-artists
in #aos-art to discuss the discrepancy.
Closing this issue without addressing the difference will cause the issue to
be reopened automatically.
This is a clone of issue OCPBUGS-860. The following is the description of the original issue:
—
Description of problem:
In GCP, once an external IP address is assigned to master/infra node through GCP console, numbers of pending CSR from kubernetes.io/kubelet-serving is increasing, and the following error are reported: I0902 10:48:29.254427 1 controller.go:121] Reconciling CSR: csr-q7bwd I0902 10:48:29.365774 1 csr_check.go:157] csr-q7bwd: CSR does not appear to be client csr I0902 10:48:29.371827 1 csr_check.go:545] retrieving serving cert from build04-c92hb-master-1.c.openshift-ci-build-farm.internal (10.0.0.5:10250) I0902 10:48:29.375052 1 csr_check.go:188] Found existing serving cert for build04-c92hb-master-1.c.openshift-ci-build-farm.internal I0902 10:48:29.375152 1 csr_check.go:192] Could not use current serving cert for renewal: CSR Subject Alternate Name values do not match current certificate I0902 10:48:29.375166 1 csr_check.go:193] Current SAN Values: [build04-c92hb-master-1.c.openshift-ci-build-farm.internal 10.0.0.5], CSR SAN Values: [build04-c92hb-master-1.c.openshift-ci-build-farm.internal 10.0.0.5 35.211.234.95] I0902 10:48:29.375175 1 csr_check.go:202] Falling back to machine-api authorization for build04-c92hb-master-1.c.openshift-ci-build-farm.internal E0902 10:48:29.375184 1 csr_check.go:420] csr-q7bwd: IP address '35.211.234.95' not in machine addresses: 10.0.0.5 I0902 10:48:29.375193 1 csr_check.go:205] Could not use Machine for serving cert authorization: IP address '35.211.234.95' not in machine addresses: 10.0.0.5 I0902 10:48:29.379457 1 csr_check.go:218] Falling back to serving cert renewal with Egress IP checks I0902 10:48:29.382668 1 csr_check.go:221] Could not use current serving cert and egress IPs for renewal: CSR Subject Alternate Names includes unknown IP addresses I0902 10:48:29.382702 1 controller.go:233] csr-q7bwd: CSR not authorized
Version-Release number of selected component (if applicable):
4.11.2
Steps to Reproduce:
1. Assign external IPs to master/infra node in GCP
2. oc get csr | grep kubernetes.io/kubelet-serving
Actual results:
CSRs are not approved
Expected results:
CSRs are approved
Additional info:
This issue is only happen in GCP. Same OpenShift installations in AWS do not have this issue. It looks like the CSR are created using external IP addresses once assigned. Ref: https://coreos.slack.com/archives/C03KEQZC1L2/p1662122007083059
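The log above shows the approver rejecting the CSR because one of its IP SANs is not among the Machine's known addresses. As a simplified sketch of that kind of check (not the approver's real code), in Go:

```go
package main

import "fmt"

// sanIPsCoveredByMachine reports whether every IP SAN requested in a
// kubelet-serving CSR is among the addresses the Machine API knows about.
// The externally assigned GCP IP in the bug above is exactly the address
// that fails this check.
func sanIPsCoveredByMachine(csrIPs, machineIPs []string) (bool, []string) {
	known := make(map[string]bool, len(machineIPs))
	for _, ip := range machineIPs {
		known[ip] = true
	}
	var unknown []string
	for _, ip := range csrIPs {
		if !known[ip] {
			unknown = append(unknown, ip)
		}
	}
	return len(unknown) == 0, unknown
}

func main() {
	csrIPs := []string{"10.0.0.5", "35.211.234.95"} // SANs requested by the kubelet
	machineIPs := []string{"10.0.0.5"}              // addresses recorded on the Machine
	ok, unknown := sanIPsCoveredByMachine(csrIPs, machineIPs)
	if !ok {
		fmt.Printf("CSR not approved: IP address(es) %v not in machine addresses\n", unknown)
		return
	}
	fmt.Println("CSR approved")
}
```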
Description of problem:
Pipeline Repository (Pipeline-as-code) list never shows an Event type.
Version-Release number of selected component (if applicable):
4.9+
How reproducible:
Always
Steps to Reproduce:
Actual results:
Pipeline Repository list shows a column Event type but no value.
Expected results:
Pipeline Repository list should show the Event type from the matching Pipeline Run.
Similar to the Pipeline Run Details page based on the label.
Additional info:
The list page packages/pipelines-plugin/src/components/repository/list-page/RepositoryRow.tsx renders obj.metadata.namespace as event type.
I believe we should show the Pipeline Run event type instead. packages/pipelines-plugin/src/components/repository/RepositoryLinkList.tsx uses
{plrLabels[RepositoryLabels[RepositoryFields.EVENT_TYPE]]} to render it.
Also, the Pipeline Repository details page tries to render the Branch and Event type from the Repository resource. My research says these properties don't exist on the Repository resource. The code should be removed from the Repository details page.
Description of problem:
When enabling OvS HWOL on 4.12.0 nightly, traffic does not pass between pods.
Version-Release number of selected component (if applicable):
4.12.0 nightly
How reproducible:
Always
Steps to Reproduce:
1. Create 2 pods with sriov and try to ping between them (same node or different node)
Actual results:
No Traffic Passes (Ping or other)
Expected results:
Traffic Passes (Ping or other)
Additional info:
Missing this commit in 4.12 branch https://github.com/openshift/ovn-kubernetes/commit/37c6c1d7039fd4c8f3cca560691a254e720172de
This is a clone of issue OCPBUGS-3235. The following is the description of the original issue:
—
Frequently we see the loading state of the topology view, even when there aren't many resources in the project.
Including an example
Actual results:
Topology will sometimes hang with the loading indicator showing indefinitely.
Expected results:
Topology should load consistently without fail.
How reproducible:
Intermittent
Version-Release number of selected component (if applicable):
4.9
Description of problem:
Added a script to collect PodNetworkConnectivityChecks to be able to view the overall status of the pod network connectivity. The current must-gather collects the contents of `openshift-network-diagnostics` but does not collect the PodNetworkConnectivityCheck resources.
Version-Release number of selected component (if applicable):
4.12, 4.11, 4.10
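The actual collection is done by a must-gather script using `oc`; for illustration only, here is a hedged Go sketch that lists the same objects with a dynamic client. The group/version/resource used here is an assumption:

```go
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/runtime/schema"
	"k8s.io/client-go/dynamic"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client, err := dynamic.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Assumed group/version/resource for PodNetworkConnectivityCheck.
	gvr := schema.GroupVersionResource{
		Group:    "controlplane.operator.openshift.io",
		Version:  "v1alpha1",
		Resource: "podnetworkconnectivitychecks",
	}
	list, err := client.Resource(gvr).Namespace("openshift-network-diagnostics").
		List(context.Background(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, item := range list.Items {
		fmt.Println(item.GetName())
	}
}
```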
Description of problem:
When installing 1000+ SNOs via ACM/MCE with ZTP and GitOps, a small percentage of clusters never complete the install because the monitoring operator does not reconcile to available.
# oc --kubeconfig=/root/hv-vm/sno/manifests/sno01219/kubeconfig get clusterversion
NAME VERSION AVAILABLE PROGRESSING SINCE STATUS
version False True 16h Unable to apply 4.11.0: the cluster operator monitoring has not yet successfully rolled out
# oc --kubeconfig=/root/hv-vm/sno/manifests/sno01219/kubeconfig get co monitoring
NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE MESSAGE
monitoring False True True 15h Rollout of the monitoring stack failed and is degraded. Please investigate the degraded status error.
Version-Release number of selected component (if applicable):
How reproducible:
Additional info:
# oc --kubeconfig=/root/hv-vm/sno/manifests/sno01219/kubeconfig get po -n openshift-monitoring NAME READY STATUS RESTARTS AGE alertmanager-main-0 0/6 ContainerCreating 0 15h cluster-monitoring-operator-54dd78cc74-l5w24 2/2 Running 0 15h kube-state-metrics-b6455c4dc-8hcfn 3/3 Running 0 15h node-exporter-k7899 2/2 Running 0 15h openshift-state-metrics-7984888fbd-cl67v 3/3 Running 0 15h prometheus-adapter-785bf4f975-wgmnh 1/1 Running 0 15h prometheus-k8s-0 0/6 Init:0/1 0 15h prometheus-operator-74d8754ff7-9zrgw 2/2 Running 0 15h prometheus-operator-admission-webhook-6665fb687d-c5jgv 1/1 Running 0 15h thanos-querier-575496c665-jcc8l 6/6 Running 0 15h # oc --kubeconfig=/root/hv-vm/sno/manifests/sno01219/kubeconfig describe po -n openshift-monitoring alertmanager-main-0 Name: alertmanager-main-0 Namespace: openshift-monitoring Priority: 2000000000 Priority Class Name: system-cluster-critical Node: sno01219/fc00:1001::8aa Start Time: Mon, 15 Aug 2022 23:53:39 +0000 Labels: alertmanager=main app.kubernetes.io/component=alert-router app.kubernetes.io/instance=main app.kubernetes.io/managed-by=prometheus-operator app.kubernetes.io/name=alertmanager app.kubernetes.io/part-of=openshift-monitoring app.kubernetes.io/version=0.24.0 controller-revision-hash=alertmanager-main-fcf8dd5fb statefulset.kubernetes.io/pod-name=alertmanager-main-0 Annotations: kubectl.kubernetes.io/default-container: alertmanager openshift.io/scc: nonroot Status: Pending IP: IPs: <none> Controlled By: StatefulSet/alertmanager-main Containers: alertmanager: Container ID: Image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:91308d35c1e56463f55c1aaa519ff4de7335d43b254c21abdb845fc8c72821a1 Image ID: Ports: 9094/TCP, 9094/UDP Host Ports: 0/TCP, 0/UDP Args: --config.file=/etc/alertmanager/config/alertmanager.yaml --storage.path=/alertmanager --data.retention=120h --cluster.listen-address= --web.listen-address=127.0.0.1:9093 --web.external-url=https:/console-openshift-console.apps.sno01219.rdu2.scalelab.redhat.com/monitoring --web.route-prefix=/ --cluster.peer=alertmanager-main-0.alertmanager-operated:9094 --cluster.reconnect-timeout=5m State: Waiting Reason: ContainerCreating Ready: False Restart Count: 0 Requests: cpu: 4m memory: 40Mi Environment: POD_IP: (v1:status.podIP) Mounts: /alertmanager from alertmanager-main-db (rw) /etc/alertmanager/certs from tls-assets (ro) /etc/alertmanager/config from config-volume (rw) /etc/alertmanager/secrets/alertmanager-kube-rbac-proxy from secret-alertmanager-kube-rbac-proxy (ro) /etc/alertmanager/secrets/alertmanager-kube-rbac-proxy-metric from secret-alertmanager-kube-rbac-proxy-metric (ro) /etc/alertmanager/secrets/alertmanager-main-proxy from secret-alertmanager-main-proxy (ro) /etc/alertmanager/secrets/alertmanager-main-tls from secret-alertmanager-main-tls (ro) /etc/pki/ca-trust/extracted/pem/ from alertmanager-trusted-ca-bundle (ro) /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-hl77l (ro) config-reloader: Container ID: Image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:209e20410ec2d3d7a502f568d2b7fe1cd1beadcb36fff2d1e6f59d77be3200e3 Image ID: Port: <none> Host Port: <none> Command: /bin/prometheus-config-reloader Args: --listen-address=localhost:8080 --reload-url=http://localhost:9093/-/reload --watched-dir=/etc/alertmanager/config --watched-dir=/etc/alertmanager/secrets/alertmanager-main-tls --watched-dir=/etc/alertmanager/secrets/alertmanager-main-proxy --watched-dir=/etc/alertmanager/secrets/alertmanager-kube-rbac-proxy 
--watched-dir=/etc/alertmanager/secrets/alertmanager-kube-rbac-proxy-metric State: Waiting Reason: ContainerCreating Ready: False Restart Count: 0 Requests: cpu: 1m memory: 10Mi Environment: POD_NAME: alertmanager-main-0 (v1:metadata.name) SHARD: -1 Mounts: /etc/alertmanager/config from config-volume (ro) /etc/alertmanager/secrets/alertmanager-kube-rbac-proxy from secret-alertmanager-kube-rbac-proxy (ro) /etc/alertmanager/secrets/alertmanager-kube-rbac-proxy-metric from secret-alertmanager-kube-rbac-proxy-metric (ro) /etc/alertmanager/secrets/alertmanager-main-proxy from secret-alertmanager-main-proxy (ro) /etc/alertmanager/secrets/alertmanager-main-tls from secret-alertmanager-main-tls (ro) /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-hl77l (ro) alertmanager-proxy: Container ID: Image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:140f8947593d92e1517e50a201e83bdef8eb965b552a21d3caf346a250d0cf6e Image ID: Port: 9095/TCP Host Port: 0/TCP Args: -provider=openshift -https-address=:9095 -http-address= -email-domain=* -upstream=http://localhost:9093 -openshift-sar=[{"resource": "namespaces", "verb": "get"}, {"resource": "alertmanagers", "resourceAPIGroup": "monitoring.coreos.com", "namespace": "openshift-monitoring", "verb": "patch", "resourceName": "non-existant"}] -openshift-delegate-urls={"/": {"resource": "namespaces", "verb": "get"}, "/": {"resource":"alertmanagers", "group": "monitoring.coreos.com", "namespace": "openshift-monitoring", "verb": "patch", "name": "non-existant"}} -tls-cert=/etc/tls/private/tls.crt -tls-key=/etc/tls/private/tls.key -client-secret-file=/var/run/secrets/kubernetes.io/serviceaccount/token -cookie-secret-file=/etc/proxy/secrets/session_secret -openshift-service-account=alertmanager-main -openshift-ca=/etc/pki/tls/cert.pem -openshift-ca=/var/run/secrets/kubernetes.io/serviceaccount/ca.crt State: Waiting Reason: ContainerCreating Ready: False Restart Count: 0 Requests: cpu: 1m memory: 20Mi Environment: HTTP_PROXY: HTTPS_PROXY: NO_PROXY: Mounts: /etc/pki/ca-trust/extracted/pem/ from alertmanager-trusted-ca-bundle (ro) /etc/proxy/secrets from secret-alertmanager-main-proxy (rw) /etc/tls/private from secret-alertmanager-main-tls (rw) /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-hl77l (ro) kube-rbac-proxy: Container ID: Image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b5e1c69d005727e3245604cfca7a63e4f9bc6e15128c7489e41d5e967305089e Image ID: Port: 9092/TCP Host Port: 0/TCP Args: --secure-listen-address=0.0.0.0:9092 --upstream=http://127.0.0.1:9096 --config-file=/etc/kube-rbac-proxy/config.yaml --tls-cert-file=/etc/tls/private/tls.crt --tls-private-key-file=/etc/tls/private/tls.key --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256 --logtostderr=true --tls-min-version=VersionTLS12 State: Waiting Reason: ContainerCreating Ready: False Restart Count: 0 Requests: cpu: 1m memory: 15Mi Environment: <none> Mounts: /etc/kube-rbac-proxy from secret-alertmanager-kube-rbac-proxy (rw) /etc/tls/private from secret-alertmanager-main-tls (rw) /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-hl77l (ro) kube-rbac-proxy-metric: Container ID: Image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b5e1c69d005727e3245604cfca7a63e4f9bc6e15128c7489e41d5e967305089e Image ID: Port: 9097/TCP Host 
Port: 0/TCP Args: --secure-listen-address=0.0.0.0:9097 --upstream=http://127.0.0.1:9093 --config-file=/etc/kube-rbac-proxy/config.yaml --tls-cert-file=/etc/tls/private/tls.crt --tls-private-key-file=/etc/tls/private/tls.key --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256 --client-ca-file=/etc/tls/client/client-ca.crt --logtostderr=true --allow-paths=/metrics --tls-min-version=VersionTLS12 State: Waiting Reason: ContainerCreating Ready: False Restart Count: 0 Requests: cpu: 1m memory: 15Mi Environment: <none> Mounts: /etc/kube-rbac-proxy from secret-alertmanager-kube-rbac-proxy-metric (ro) /etc/tls/client from metrics-client-ca (ro) /etc/tls/private from secret-alertmanager-main-tls (ro) /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-hl77l (ro) prom-label-proxy: Container ID: Image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2550b2cbdf864515b1edacf43c25eb6b6f179713c1df34e51f6e9bba48d6430a Image ID: Port: <none> Host Port: <none> Args: --insecure-listen-address=127.0.0.1:9096 --upstream=http://127.0.0.1:9093 --label=namespace --error-on-replace State: Waiting Reason: ContainerCreating Ready: False Restart Count: 0 Requests: cpu: 1m memory: 20Mi Environment: <none> Mounts: /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-hl77l (ro) Conditions: Type Status Initialized True Ready False ContainersReady False PodScheduled True Volumes: config-volume: Type: Secret (a volume populated by a Secret) SecretName: alertmanager-main-generated Optional: false tls-assets: Type: Projected (a volume that contains injected data from multiple sources) SecretName: alertmanager-main-tls-assets-0 SecretOptionalName: <nil> secret-alertmanager-main-tls: Type: Secret (a volume populated by a Secret) SecretName: alertmanager-main-tls Optional: false secret-alertmanager-main-proxy: Type: Secret (a volume populated by a Secret) SecretName: alertmanager-main-proxy Optional: false secret-alertmanager-kube-rbac-proxy: Type: Secret (a volume populated by a Secret) SecretName: alertmanager-kube-rbac-proxy Optional: false secret-alertmanager-kube-rbac-proxy-metric: Type: Secret (a volume populated by a Secret) SecretName: alertmanager-kube-rbac-proxy-metric Optional: false alertmanager-main-db: Type: EmptyDir (a temporary directory that shares a pod's lifetime) Medium: SizeLimit: <unset> metrics-client-ca: Type: ConfigMap (a volume populated by a ConfigMap) Name: metrics-client-ca Optional: false alertmanager-trusted-ca-bundle: Type: ConfigMap (a volume populated by a ConfigMap) Name: alertmanager-trusted-ca-bundle-2rsonso43rc5p Optional: true kube-api-access-hl77l: Type: Projected (a volume that contains injected data from multiple sources) TokenExpirationSeconds: 3607 ConfigMapName: kube-root-ca.crt ConfigMapOptional: <nil> DownwardAPI: true ConfigMapName: openshift-service-ca.crt ConfigMapOptional: <nil> QoS Class: Burstable Node-Selectors: kubernetes.io/os=linux Tolerations: node.kubernetes.io/memory-pressure:NoSchedule op=Exists node.kubernetes.io/not-ready:NoExecute op=Exists for 300s node.kubernetes.io/unreachable:NoExecute op=Exists for 300s Events: Type Reason Age From Message ---- ------ ---- ---- ------- Warning FailedCreatePodSandBox 2m25s (x409 over 15h) kubelet (combined from similar events): Failed to create pod sandbox: rpc error: code = Unknown 
desc = failed to create pod network sandbox k8s_alertmanager-main-0_openshift-monitoring_1c367a83-24e3-4249-861a-a107a6beaee2_0(dff5f302f774d060728261b3c86841ebdbd7ba11537ec9f4d90d57be17bdf44b): error adding pod openshift-monitoring_alertmanager-main-0 to CNI network "multus-cni-network": plugin type="multus" name="multus-cni-network" failed (add): [openshift-monitoring/alertmanager-main-0/1c367a83-24e3-4249-861a-a107a6beaee2:ovn-kubernetes]: error adding container to network "ovn-kubernetes": CNI request failed with status 400: '[openshift-monitoring/alertmanager-main-0 dff5f302f774d060728261b3c86841ebdbd7ba11537ec9f4d90d57be17bdf44b] [openshift-monitoring/alertmanager-main-0 dff5f302f774d060728261b3c86841ebdbd7ba11537ec9f4d90d57be17bdf44b] failed to get pod annotation: timed out waiting for annotations: context deadline exceeded oc --kubeconfig=/root/hv-vm/sno/manifests/sno01219/kubeconfig describe po -n openshift-monitoring prometheus-k8s-0 Name: prometheus-k8s-0 Namespace: openshift-monitoring Priority: 2000000000 Priority Class Name: system-cluster-critical Node: sno01219/fc00:1001::8aa Start Time: Mon, 15 Aug 2022 23:53:39 +0000 Labels: app.kubernetes.io/component=prometheus app.kubernetes.io/instance=k8s app.kubernetes.io/managed-by=prometheus-operator app.kubernetes.io/name=prometheus app.kubernetes.io/part-of=openshift-monitoring app.kubernetes.io/version=2.36.2 controller-revision-hash=prometheus-k8s-546b544f8b operator.prometheus.io/name=k8s operator.prometheus.io/shard=0 prometheus=k8s statefulset.kubernetes.io/pod-name=prometheus-k8s-0 Annotations: kubectl.kubernetes.io/default-container: prometheus openshift.io/scc: nonroot Status: Pending IP: IPs: <none> Controlled By: StatefulSet/prometheus-k8s Init Containers: init-config-reloader: Container ID: Image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:209e20410ec2d3d7a502f568d2b7fe1cd1beadcb36fff2d1e6f59d77be3200e3 Image ID: Port: 8080/TCP Host Port: 0/TCP Command: /bin/prometheus-config-reloader Args: --watch-interval=0 --listen-address=:8080 --config-file=/etc/prometheus/config/prometheus.yaml.gz --config-envsubst-file=/etc/prometheus/config_out/prometheus.env.yaml --watched-dir=/etc/prometheus/rules/prometheus-k8s-rulefiles-0 State: Waiting Reason: PodInitializing Ready: False Restart Count: 0 Requests: cpu: 1m memory: 10Mi Environment: POD_NAME: prometheus-k8s-0 (v1:metadata.name) SHARD: 0 Mounts: /etc/prometheus/config from config (rw) /etc/prometheus/config_out from config-out (rw) /etc/prometheus/rules/prometheus-k8s-rulefiles-0 from prometheus-k8s-rulefiles-0 (rw) /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-85zlc (ro) Containers: prometheus: Container ID: Image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c7df53b796e81ba8301ba74d02317226329bd5752fd31c1b44d028e4832f21c3 Image ID: Port: <none> Host Port: <none> Args: --web.console.templates=/etc/prometheus/consoles --web.console.libraries=/etc/prometheus/console_libraries --storage.tsdb.retention.time=15d --config.file=/etc/prometheus/config_out/prometheus.env.yaml --storage.tsdb.path=/prometheus --web.enable-lifecycle --web.external-url=https:/console-openshift-console.apps.sno01219.rdu2.scalelab.redhat.com/monitoring --web.route-prefix=/ --web.listen-address=127.0.0.1:9090 --web.config.file=/etc/prometheus/web_config/web-config.yaml State: Waiting Reason: PodInitializing Ready: False Restart Count: 0 Requests: cpu: 70m memory: 1Gi Liveness: exec [sh -c if [ -x "$(command -v curl)" ]; then exec curl --fail 
http://localhost:9090/-/healthy; elif [ -x "$(command -v wget)" ]; then exec wget -q -O /dev/null http://localhost:9090/-/healthy; else exit 1; fi] delay=0s timeout=3s period=5s #success=1 #failure=6 Readiness: exec [sh -c if [ -x "$(command -v curl)" ]; then exec curl --fail http://localhost:9090/-/ready; elif [ -x "$(command -v wget)" ]; then exec wget -q -O /dev/null http://localhost:9090/-/ready; else exit 1; fi] delay=0s timeout=3s period=5s #success=1 #failure=3 Startup: exec [sh -c if [ -x "$(command -v curl)" ]; then exec curl --fail http://localhost:9090/-/ready; elif [ -x "$(command -v wget)" ]; then exec wget -q -O /dev/null http://localhost:9090/-/ready; else exit 1; fi] delay=0s timeout=3s period=15s #success=1 #failure=60 Environment: <none> Mounts: /etc/pki/ca-trust/extracted/pem/ from prometheus-trusted-ca-bundle (ro) /etc/prometheus/certs from tls-assets (ro) /etc/prometheus/config_out from config-out (ro) /etc/prometheus/configmaps/kubelet-serving-ca-bundle from configmap-kubelet-serving-ca-bundle (ro) /etc/prometheus/configmaps/metrics-client-ca from configmap-metrics-client-ca (ro) /etc/prometheus/configmaps/serving-certs-ca-bundle from configmap-serving-certs-ca-bundle (ro) /etc/prometheus/rules/prometheus-k8s-rulefiles-0 from prometheus-k8s-rulefiles-0 (rw) /etc/prometheus/secrets/kube-etcd-client-certs from secret-kube-etcd-client-certs (ro) /etc/prometheus/secrets/kube-rbac-proxy from secret-kube-rbac-proxy (ro) /etc/prometheus/secrets/metrics-client-certs from secret-metrics-client-certs (ro) /etc/prometheus/secrets/prometheus-k8s-proxy from secret-prometheus-k8s-proxy (ro) /etc/prometheus/secrets/prometheus-k8s-thanos-sidecar-tls from secret-prometheus-k8s-thanos-sidecar-tls (ro) /etc/prometheus/secrets/prometheus-k8s-tls from secret-prometheus-k8s-tls (ro) /etc/prometheus/web_config/web-config.yaml from web-config (ro,path="web-config.yaml") /prometheus from prometheus-k8s-db (rw) /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-85zlc (ro) config-reloader: Container ID: Image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:209e20410ec2d3d7a502f568d2b7fe1cd1beadcb36fff2d1e6f59d77be3200e3 Image ID: Port: <none> Host Port: <none> Command: /bin/prometheus-config-reloader Args: --listen-address=localhost:8080 --reload-url=http://localhost:9090/-/reload --config-file=/etc/prometheus/config/prometheus.yaml.gz --config-envsubst-file=/etc/prometheus/config_out/prometheus.env.yaml --watched-dir=/etc/prometheus/rules/prometheus-k8s-rulefiles-0 State: Waiting Reason: PodInitializing Ready: False Restart Count: 0 Requests: cpu: 1m memory: 10Mi Environment: POD_NAME: prometheus-k8s-0 (v1:metadata.name) SHARD: 0 Mounts: /etc/prometheus/config from config (rw) /etc/prometheus/config_out from config-out (rw) /etc/prometheus/rules/prometheus-k8s-rulefiles-0 from prometheus-k8s-rulefiles-0 (rw) /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-85zlc (ro) thanos-sidecar: Container ID: Image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36fc214537c763b3a3f0a9dc7a1bd4378a80428c31b2629df8786a9b09155e6d Image ID: Ports: 10902/TCP, 10901/TCP Host Ports: 0/TCP, 0/TCP Args: sidecar --prometheus.url=http://localhost:9090/ --tsdb.path=/prometheus --http-address=127.0.0.1:10902 --grpc-server-tls-cert=/etc/tls/grpc/server.crt --grpc-server-tls-key=/etc/tls/grpc/server.key --grpc-server-tls-client-ca=/etc/tls/grpc/ca.crt State: Waiting Reason: PodInitializing Ready: False Restart Count: 0 Requests: cpu: 1m memory: 25Mi Environment: <none> 
Mounts: /etc/tls/grpc from secret-grpc-tls (rw) /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-85zlc (ro) prometheus-proxy: Container ID: Image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:140f8947593d92e1517e50a201e83bdef8eb965b552a21d3caf346a250d0cf6e Image ID: Port: 9091/TCP Host Port: 0/TCP Args: -provider=openshift -https-address=:9091 -http-address= -email-domain=* -upstream=http://localhost:9090 -openshift-service-account=prometheus-k8s -openshift-sar={"resource": "namespaces", "verb": "get"} -openshift-delegate-urls={"/": {"resource": "namespaces", "verb": "get"}} -tls-cert=/etc/tls/private/tls.crt -tls-key=/etc/tls/private/tls.key -client-secret-file=/var/run/secrets/kubernetes.io/serviceaccount/token -cookie-secret-file=/etc/proxy/secrets/session_secret -openshift-ca=/etc/pki/tls/cert.pem -openshift-ca=/var/run/secrets/kubernetes.io/serviceaccount/ca.crt State: Waiting Reason: PodInitializing Ready: False Restart Count: 0 Requests: cpu: 1m memory: 20Mi Environment: HTTP_PROXY: HTTPS_PROXY: NO_PROXY: Mounts: /etc/pki/ca-trust/extracted/pem/ from prometheus-trusted-ca-bundle (ro) /etc/proxy/secrets from secret-prometheus-k8s-proxy (rw) /etc/tls/private from secret-prometheus-k8s-tls (rw) /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-85zlc (ro) kube-rbac-proxy: Container ID: Image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b5e1c69d005727e3245604cfca7a63e4f9bc6e15128c7489e41d5e967305089e Image ID: Port: 9092/TCP Host Port: 0/TCP Args: --secure-listen-address=0.0.0.0:9092 --upstream=http://127.0.0.1:9090 --allow-paths=/metrics --config-file=/etc/kube-rbac-proxy/config.yaml --tls-cert-file=/etc/tls/private/tls.crt --tls-private-key-file=/etc/tls/private/tls.key --client-ca-file=/etc/tls/client/client-ca.crt --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256 --logtostderr=true --tls-min-version=VersionTLS12 State: Waiting Reason: PodInitializing Ready: False Restart Count: 0 Requests: cpu: 1m memory: 15Mi Environment: <none> Mounts: /etc/kube-rbac-proxy from secret-kube-rbac-proxy (rw) /etc/tls/client from configmap-metrics-client-ca (ro) /etc/tls/private from secret-prometheus-k8s-tls (rw) /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-85zlc (ro) kube-rbac-proxy-thanos: Container ID: Image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b5e1c69d005727e3245604cfca7a63e4f9bc6e15128c7489e41d5e967305089e Image ID: Port: 10902/TCP Host Port: 0/TCP Args: --secure-listen-address=[$(POD_IP)]:10902 --upstream=http://127.0.0.1:10902 --tls-cert-file=/etc/tls/private/tls.crt --tls-private-key-file=/etc/tls/private/tls.key --client-ca-file=/etc/tls/client/client-ca.crt --config-file=/etc/kube-rbac-proxy/config.yaml --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256 --allow-paths=/metrics --logtostderr=true --tls-min-version=VersionTLS12 --client-ca-file=/etc/tls/client/client-ca.crt State: Waiting Reason: PodInitializing Ready: False Restart Count: 0 Requests: cpu: 1m memory: 10Mi Environment: POD_IP: (v1:status.podIP) Mounts: /etc/kube-rbac-proxy from secret-kube-rbac-proxy (rw) 
/etc/tls/client from metrics-client-ca (ro) /etc/tls/private from secret-prometheus-k8s-thanos-sidecar-tls (rw) /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-85zlc (ro) Conditions: Type Status Initialized False Ready False ContainersReady False PodScheduled True Volumes: config: Type: Secret (a volume populated by a Secret) SecretName: prometheus-k8s Optional: false tls-assets: Type: Projected (a volume that contains injected data from multiple sources) SecretName: prometheus-k8s-tls-assets-0 SecretOptionalName: <nil> config-out: Type: EmptyDir (a temporary directory that shares a pod's lifetime) Medium: SizeLimit: <unset> prometheus-k8s-rulefiles-0: Type: ConfigMap (a volume populated by a ConfigMap) Name: prometheus-k8s-rulefiles-0 Optional: false web-config: Type: Secret (a volume populated by a Secret) SecretName: prometheus-k8s-web-config Optional: false secret-kube-etcd-client-certs: Type: Secret (a volume populated by a Secret) SecretName: kube-etcd-client-certs Optional: false secret-prometheus-k8s-tls: Type: Secret (a volume populated by a Secret) SecretName: prometheus-k8s-tls Optional: false secret-prometheus-k8s-proxy: Type: Secret (a volume populated by a Secret) SecretName: prometheus-k8s-proxy Optional: false secret-prometheus-k8s-thanos-sidecar-tls: Type: Secret (a volume populated by a Secret) SecretName: prometheus-k8s-thanos-sidecar-tls Optional: false secret-kube-rbac-proxy: Type: Secret (a volume populated by a Secret) SecretName: kube-rbac-proxy Optional: false secret-metrics-client-certs: Type: Secret (a volume populated by a Secret) SecretName: metrics-client-certs Optional: false configmap-serving-certs-ca-bundle: Type: ConfigMap (a volume populated by a ConfigMap) Name: serving-certs-ca-bundle Optional: false configmap-kubelet-serving-ca-bundle: Type: ConfigMap (a volume populated by a ConfigMap) Name: kubelet-serving-ca-bundle Optional: false configmap-metrics-client-ca: Type: ConfigMap (a volume populated by a ConfigMap) Name: metrics-client-ca Optional: false prometheus-k8s-db: Type: EmptyDir (a temporary directory that shares a pod's lifetime) Medium: SizeLimit: <unset> metrics-client-ca: Type: ConfigMap (a volume populated by a ConfigMap) Name: metrics-client-ca Optional: false secret-grpc-tls: Type: Secret (a volume populated by a Secret) SecretName: prometheus-k8s-grpc-tls-crdkohb1gb92n Optional: false prometheus-trusted-ca-bundle: Type: ConfigMap (a volume populated by a ConfigMap) Name: prometheus-trusted-ca-bundle-2rsonso43rc5p Optional: true kube-api-access-85zlc: Type: Projected (a volume that contains injected data from multiple sources) TokenExpirationSeconds: 3607 ConfigMapName: kube-root-ca.crt ConfigMapOptional: <nil> DownwardAPI: true ConfigMapName: openshift-service-ca.crt ConfigMapOptional: <nil> QoS Class: Burstable Node-Selectors: kubernetes.io/os=linux Tolerations: node.kubernetes.io/memory-pressure:NoSchedule op=Exists node.kubernetes.io/not-ready:NoExecute op=Exists for 300s node.kubernetes.io/unreachable:NoExecute op=Exists for 300s Events: Type Reason Age From Message ---- ------ ---- ---- ------- Warning FailedCreatePodSandBox 4m19s (x409 over 15h) kubelet (combined from similar events): Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_prometheus-k8s-0_openshift-monitoring_debda4d2-6914-4b36-92e0-78f68d539ab3_0(86af91d4e64ab0fbad95352b029762e9856ff24005445b458bccb22e0ee9b655): error adding pod openshift-monitoring_prometheus-k8s-0 to CNI network 
"multus-cni-network": plugin type="multus" name="multus-cni-network" failed (add): [openshift-monitoring/prometheus-k8s-0/debda4d2-6914-4b36-92e0-78f68d539ab3:ovn-kubernetes]: error adding container to network "ovn-kubernetes": CNI request failed with status 400: '[openshift-monitoring/prometheus-k8s-0 86af91d4e64ab0fbad95352b029762e9856ff24005445b458bccb22e0ee9b655] [openshift-monitoring/prometheus-k8s-0 86af91d4e64ab0fbad95352b029762e9856ff24005445b458bccb22e0ee9b655] failed to get pod annotation: timed out waiting for annotations: context deadline exceeded
Both pods in error state seem to be waiting on this issue "failed to get pod annotation: timed out waiting for annotations: context deadline exceeded"
This is a clone of issue OCPBUGS-2551. The following is the description of the original issue:
—
Description of problem:
When a normal user selects "All namespaces" using the "Show operands in" radio button, an "Error Loading" message is shown.
Version-Release number of selected component (if applicable):
4.12.0-0.nightly-2022-10-18-192348, 4.11
How reproducible:
Always
Steps to Reproduce:
1. Install the operator "Red Hat Integration - Camel K" in All namespaces
2. Log in to the console as a normal user
3. Navigate to the "All instances" tab for the operator
4. Check that the radio button "All namespaces" is selected
5. Check the page
Actual results:
The "Error Loading" message is shown on the page.
Expected results:
The error should not be shown.
Additional info:
This is a clone of issue OCPBUGS-5151. The following is the description of the original issue:
—
Description of problem:
The customer is not able to install a new OCP bare metal IPI cluster. During bootstrapping, the provisioning interfaces on the master nodes do not get an IPv4 DHCP address from the bootstrap DHCP server. Please refer to the following bug: https://issues.redhat.com/browse/OCPBUGS-872. The problem was solved by applying rd.net.timeout.carrier=30 to the kernel parameters of compute nodes via the cluster-baremetal-operator. The fix also needs to be applied to the control plane. ref: https://github.com/openshift/cluster-baremetal-operator/pull/286/files
Version-Release number of selected component (if applicable):
How reproducible:
Perform OCP 4.10.16 IPI BareMetal install.
Steps to Reproduce:
1. 2. 3.
Actual results:
Expected results:
Customer should be able to install the cluster without any issue.
Additional info:
This is a clone of issue OCPBUGS-5988. The following is the description of the original issue:
—
Description of problem:
Etcd operator is in degraded state as one of the masters can't connect. Master that fails to connect was previously bootstrap and pivoted as part of assisted-installer installation to master. Etcd log: 2023-01-17T23:09:26.523562312Z 28dcf1b0a44481b0, started, test-infra-cluster-04bf4418-master-1, https://192.168.127.11:2380, https://192.168.127.11:2379, false 2023-01-17T23:09:26.523562312Z 30600b5b86e23c8e, started, etcd-bootstrap, https://192.168.127.12:2380, https://192.168.127.12:2379, false 2023-01-17T23:09:26.523562312Z 73f00626fee34a87, started, test-infra-cluster-04bf4418-master-0, https://192.168.127.10:2380, https://192.168.127.10:2379, false 2023-01-17T23:09:26.541214220Z #### attempt 0 2023-01-17T23:09:26.547811132Z member={name="test-infra-cluster-04bf4418-master-1", peerURLs=[https://192.168.127.11:2380}, clientURLs=[https://192.168.127.11:2379] 2023-01-17T23:09:26.547811132Z member={name="etcd-bootstrap", peerURLs=[https://192.168.127.12:2380}, clientURLs=[https://192.168.127.12:2379] 2023-01-17T23:09:26.547811132Z member={name="test-infra-cluster-04bf4418-master-0", peerURLs=[https://192.168.127.10:2380}, clientURLs=[https://192.168.127.10:2379] 2023-01-17T23:09:26.547811132Z target={name="etcd-bootstrap", peerURLs=[https://192.168.127.12:2380}, clientURLs=[https://192.168.127.12:2379] 2023-01-17T23:09:26.547846508Z member "https://192.168.127.12:2380" dataDir has been destroyed and must be removed from the cluster There are couple of problems that we see: 1. For unknown reason etcd operator BootstrapTeardownController fails to start as it fails to see "openshift-etcd" namespace though by the logs it is there. 2023-01-17T21:39:43.323928903Z E0117 21:39:43.323917 1 base_controller.go:272] BootstrapTeardownController reconciliation failed: failed to get bootstrap scaling strategy: failed to get openshift-etcd names 2. DelayStrategy code was change by https://github.com/openshift/cluster-etcd-operator/pull/964/files and currently it requires 3 healthy members in order to remove. It can create issues as etcd and cluster-bootstrap(bootkube) are not synchronized and nothing is actually blocking bootstrap on stop etcd and block remove of bootstrap etcd.(at least how i understand the flow)
Version-Release number of selected component (if applicable):
How reproducible:
It is a race as far as I understand, but it reproduces fairly reliably in our CI when installing 4.12 nightlies.
Steps to Reproduce:
1. 2. 3.
Actual results:
Etcd is degraded because the third joined master's etcd can't start.
Expected results:
Etcd is healthy
Additional info:
In order to start 4.12 development, we need to merge the agent-installer branch. We need to create a PR and engage the Installer team to get it approved.
This is a clone of issue OCPBUGS-8035. The following is the description of the original issue:
—
Description of problem:
When installing a disconnected private cluster, SSH to the master/bootstrap nodes from the bastion on the VPC fails.
Version-Release number of selected component (if applicable):
Pre-merge build https://github.com/openshift/installer/pull/6836 registry.build05.ci.openshift.org/ci-ln-5g4sj02/release:latest Tag: 4.13.0-0.ci.test-2023-02-27-033047-ci-ln-5g4sj02-latest
How reproducible:
Always
Steps to Reproduce:
1. Create the bastion instance maxu-ibmj-p1-int-svc
2. Create a VPC on the bastion host
3. Install a private disconnected cluster on the bastion host with a mirror registry
4. SSH to the bastion
5. SSH to the master/bootstrap nodes from the bastion
Actual results:
[core@maxu-ibmj-p1-int-svc ~]$ ssh -i ~/openshift-qe.pem core@10.241.0.5 -v OpenSSH_8.8p1, OpenSSL 3.0.5 5 Jul 2022 debug1: Reading configuration data /etc/ssh/ssh_config debug1: Reading configuration data /etc/ssh/ssh_config.d/50-redhat.conf debug1: Reading configuration data /etc/crypto-policies/back-ends/openssh.config debug1: configuration requests final Match pass debug1: re-parsing configuration debug1: Reading configuration data /etc/ssh/ssh_config debug1: Reading configuration data /etc/ssh/ssh_config.d/50-redhat.conf debug1: Reading configuration data /etc/crypto-policies/back-ends/openssh.config debug1: Connecting to 10.241.0.5 [10.241.0.5] port 22. debug1: connect to address 10.241.0.5 port 22: Connection timed out ssh: connect to host 10.241.0.5 port 22: Connection timed out
Expected results:
ssh succeed.
Additional info:
$ibmcloud is sg-rules r014-5a6c16f4-8a4c-4c02-ab2d-626c14f72a77 --vpc maxu-ibmj-p1-vpc Listing rules of security group r014-5a6c16f4-8a4c-4c02-ab2d-626c14f72a77 under account OpenShift-QE as user ServiceId-dff277a9-b608-410a-ad24-c544e59e3778... ID Direction IP version Protocol Remote r014-6739d68f-6827-41f4-b51a-5da742c353b2 outbound ipv4 all 0.0.0.0/0 r014-06d44c15-d3fd-4a14-96c4-13e96aa6769c inbound ipv4 all shakiness-perfectly-rundown-take r014-25b86956-5370-4925-adaf-89dfca9fb44b inbound ipv4 tcp Ports:Min=22,Max=22 0.0.0.0/0 r014-e18f0f5e-c4e5-44a5-b180-7a84aa59fa97 inbound ipv4 tcp Ports:Min=3128,Max=3129 0.0.0.0/0 r014-7e79c4b7-d0bb-4fab-9f5d-d03f6b427d89 inbound ipv4 icmp Type=8,Code=0 0.0.0.0/0 r014-03f23b04-c67a-463d-9754-895b8e474e75 inbound ipv4 tcp Ports:Min=5000,Max=5000 0.0.0.0/0 r014-8febe8c8-c937-42b6-b352-8ae471749321 inbound ipv4 tcp Ports:Min=6001,Max=6002 0.0.0.0/0
Description of problem:
https://github.com/openshift/api/pull/1186 - https://issues.redhat.com/browse/CONSOLE-3069 promoted ConsolePlugin CRD to v1. The PR introduces also a conversion webhook from v1alpha1 to v1. In new CRD version I18n ConsolePluginI18n is marked as optional. The conversion webhook will not set a default valid ("Lazy"/"Preload") value writing the v1 object and a v1 object completely omitting spec.i18n will be accepted we no valid default value as well. On the other side, at garbage collection time the object will be stuck forever due to the lack of a valid value for spec.i18n.loadType Example, create a v1 ConsolePlugin object: cat <<EOF | oc apply -f - apiVersion: console.openshift.io/v1 kind: ConsolePlugin metadata: name: test472 spec: backend: service: basePath: / name: test472-service namespace: kubevirt-hyperconverged port: 9443 type: Service displayName: Test 472 Plugin EOF Delete it in foreground mode: stirabos@t14s:~$ oc delete consoleplugin test472 --timeout=30s --cascade='foreground' -v 7 I1011 18:20:03.255605 31610 loader.go:372] Config loaded from file: /home/stirabos/.kube/config I1011 18:20:03.266567 31610 round_trippers.go:463] DELETE https://api.ci-ln-krdzphb-72292.gcp-2.ci.openshift.org:6443/apis/console.openshift.io/v1/consoleplugins/test472 I1011 18:20:03.266581 31610 round_trippers.go:469] Request Headers: I1011 18:20:03.266588 31610 round_trippers.go:473] Accept: application/json I1011 18:20:03.266594 31610 round_trippers.go:473] Content-Type: application/json I1011 18:20:03.266600 31610 round_trippers.go:473] User-Agent: oc/4.11.0 (linux/amd64) kubernetes/fcf512e I1011 18:20:03.266606 31610 round_trippers.go:473] Authorization: Bearer <masked> I1011 18:20:03.688569 31610 round_trippers.go:574] Response Status: 200 OK in 421 milliseconds consoleplugin.console.openshift.io "test472" deleted I1011 18:20:03.688911 31610 round_trippers.go:463] GET https://api.ci-ln-krdzphb-72292.gcp-2.ci.openshift.org:6443/apis/console.openshift.io/v1/consoleplugins?fieldSelector=metadata.name%3Dtest472 I1011 18:20:03.688919 31610 round_trippers.go:469] Request Headers: I1011 18:20:03.688928 31610 round_trippers.go:473] Authorization: Bearer <masked> I1011 18:20:03.688935 31610 round_trippers.go:473] Accept: application/json I1011 18:20:03.688941 31610 round_trippers.go:473] User-Agent: oc/4.11.0 (linux/amd64) kubernetes/fcf512e I1011 18:20:03.840103 31610 round_trippers.go:574] Response Status: 200 OK in 151 milliseconds I1011 18:20:03.840825 31610 round_trippers.go:463] GET https://api.ci-ln-krdzphb-72292.gcp-2.ci.openshift.org:6443/apis/console.openshift.io/v1/consoleplugins?fieldSelector=metadata.name%3Dtest472&resourceVersion=175205&watch=true I1011 18:20:03.840848 31610 round_trippers.go:469] Request Headers: I1011 18:20:03.840884 31610 round_trippers.go:473] Accept: application/json I1011 18:20:03.840907 31610 round_trippers.go:473] User-Agent: oc/4.11.0 (linux/amd64) kubernetes/fcf512e I1011 18:20:03.840928 31610 round_trippers.go:473] Authorization: Bearer <masked> I1011 18:20:03.972219 31610 round_trippers.go:574] Response Status: 200 OK in 131 milliseconds error: timed out waiting for the condition on consoleplugins/test472 and in kube-controller-manager logs we see: 2022-10-11T16:25:32.192864016Z I1011 16:25:32.192788 1 garbagecollector.go:501] "Processing object" object="test472" objectUID=0cc46a01-113b-4bbe-9c7a-829a97d6867c kind="ConsolePlugin" virtual=false 2022-10-11T16:25:32.282303274Z I1011 16:25:32.282161 1 garbagecollector.go:623] remove DeleteDependents finalizer for item 
[console.openshift.io/v1/ConsolePlugin, namespace: , name: test472, uid: 0cc46a01-113b-4bbe-9c7a-829a97d6867c] 2022-10-11T16:25:32.304835330Z E1011 16:25:32.304730 1 garbagecollector.go:379] error syncing item &garbagecollector.node{identity:garbagecollector.objectReference{OwnerReference:v1.OwnerReference{APIVersion:"console.openshift.io/v1", Kind:"ConsolePlugin", Name:"test472", UID:"0cc46a01-113b-4bbe-9c7a-829a97d6867c", Controller:(*bool)(nil), BlockOwnerDeletion:(*bool)(nil)}, Namespace:""}, dependentsLock:sync.RWMutex{w:sync.Mutex{state:0, sema:0x0}, writerSem:0x0, readerSem:0x0, readerCount:1, readerWait:0}, dependents:map[*garbagecollector.node]struct {}{}, deletingDependents:true, deletingDependentsLock:sync.RWMutex{w:sync.Mutex{state:0, sema:0x0}, writerSem:0x0, readerSem:0x0, readerCount:0, readerWait:0}, beingDeleted:true, beingDeletedLock:sync.RWMutex{w:sync.Mutex{state:0, sema:0x0}, writerSem:0x0, readerSem:0x0, readerCount:0, readerWait:0}, virtual:false, virtualLock:sync.RWMutex{w:sync.Mutex{state:0, sema:0x0}, writerSem:0x0, readerSem:0x0, readerCount:0, readerWait:0}, owners:[]v1.OwnerReference(nil)}: ConsolePlugin.console.openshift.io "test472" is invalid: spec.i18n.loadType: Unsupported value: "": supported values: "Preload", "Lazy"
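For illustration only, here is a minimal Go sketch of the kind of defaulting a conversion webhook (or admission defaulter) could apply so that v1 objects never carry an empty spec.i18n.loadType; the types are local stand-ins for the console API, and the choice of "Lazy" as the fallback is arbitrary for the example:

```go
package main

import "fmt"

// Local stand-ins for the v1 ConsolePlugin i18n types; these mirror the shape
// described above and are not the real console API package.
type LoadType string

const (
	Preload LoadType = "Preload"
	Lazy    LoadType = "Lazy"
)

type ConsolePluginI18n struct {
	LoadType LoadType
}

type ConsolePluginSpec struct {
	I18n ConsolePluginI18n
}

// defaultI18nLoadType sets a valid loadType when the field is missing or
// invalid, so that the stored object always passes validation and can be
// garbage-collected normally.
func defaultI18nLoadType(spec *ConsolePluginSpec) {
	if spec.I18n.LoadType != Preload && spec.I18n.LoadType != Lazy {
		spec.I18n.LoadType = Lazy
	}
}

func main() {
	spec := ConsolePluginSpec{} // spec.i18n omitted by the author
	defaultI18nLoadType(&spec)
	fmt.Println("loadType:", spec.I18n.LoadType) // loadType: Lazy
}
```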
Version-Release number of selected component (if applicable):
OCP 4.12.0 ec4
How reproducible:
100%
Steps to Reproduce:
1. cat <<EOF | oc apply -f - apiVersion: console.openshift.io/v1 kind: ConsolePlugin metadata: name: test472 spec: backend: service: basePath: / name: test472-service namespace: kubevirt-hyperconverged port: 9443 type: Service displayName: Test 472 Plugin EOF
2. oc delete consoleplugin test472 --timeout=30s --cascade='foreground' -v 7
Actual results:
2022-10-11T16:25:32.192864016Z I1011 16:25:32.192788 1 garbagecollector.go:501] "Processing object" object="test472" objectUID=0cc46a01-113b-4bbe-9c7a-829a97d6867c kind="ConsolePlugin" virtual=false 2022-10-11T16:25:32.282303274Z I1011 16:25:32.282161 1 garbagecollector.go:623] remove DeleteDependents finalizer for item [console.openshift.io/v1/ConsolePlugin, namespace: , name: test472, uid: 0cc46a01-113b-4bbe-9c7a-829a97d6867c] 2022-10-11T16:25:32.304835330Z E1011 16:25:32.304730 1 garbagecollector.go:379] error syncing item &garbagecollector.node{identity:garbagecollector.objectReference{OwnerReference:v1.OwnerReference{APIVersion:"console.openshift.io/v1", Kind:"ConsolePlugin", Name:"test472", UID:"0cc46a01-113b-4bbe-9c7a-829a97d6867c", Controller:(*bool)(nil), BlockOwnerDeletion:(*bool)(nil)}, Namespace:""}, dependentsLock:sync.RWMutex{w:sync.Mutex{state:0, sema:0x0}, writerSem:0x0, readerSem:0x0, readerCount:1, readerWait:0}, dependents:map[*garbagecollector.node]struct {}{}, deletingDependents:true, deletingDependentsLock:sync.RWMutex{w:sync.Mutex{state:0, sema:0x0}, writerSem:0x0, readerSem:0x0, readerCount:0, readerWait:0}, beingDeleted:true, beingDeletedLock:sync.RWMutex{w:sync.Mutex{state:0, sema:0x0}, writerSem:0x0, readerSem:0x0, readerCount:0, readerWait:0}, virtual:false, virtualLock:sync.RWMutex{w:sync.Mutex{state:0, sema:0x0}, writerSem:0x0, readerSem:0x0, readerCount:0, readerWait:0}, owners:[]v1.OwnerReference(nil)}: ConsolePlugin.console.openshift.io "test472" is invalid: spec.i18n.loadType: Unsupported value: "": supported values: "Preload", "Lazy"
Expected results:
Object correctly deleted
Additional info:
The issue doesn't happen with --cascade='background' which is the default on the CLI client
Description of problem: As discovered in https://issues.redhat.com/browse/OCPBUGS-2795, gophercloud fails to list swift containers when the endpoint speaks HTTP2. This means that CIRO will provision a 100GB cinder volume even though swift is available to the tenant.
We're for example seeing this behavior in our CI on vexxhost.
The gophercloud commit that fixed this issue is https://github.com/gophercloud/gophercloud/commit/b7d5b2cdd7ffc13e79d924f61571b0e5f74ec91c, specifically the `|| ct == ""` part on line 75 of openstack/objectstorage/v1/containers/results.go. This commit made it in gophercloud v0.18.0.
CIRO still depends on gophercloud v0.17.0. We should bump gophercloud to fix the bug.
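As a simplified illustration of what the `|| ct == ""` change accomplishes (not the library's code verbatim), the listing should be treated as JSON when the Content-Type says so or when it is empty, as happens when the endpoint speaks HTTP/2:

```go
package main

import (
	"fmt"
	"mime"
	"strings"
)

// containerListingIsJSON mirrors the gophercloud check referenced above:
// treat the Swift container listing as JSON when the Content-Type says so,
// or when the Content-Type is empty.
func containerListingIsJSON(contentType string) bool {
	if contentType == "" {
		return true
	}
	mt, _, err := mime.ParseMediaType(contentType)
	if err != nil {
		return false
	}
	return strings.EqualFold(mt, "application/json")
}

func main() {
	for _, ct := range []string{"application/json; charset=utf-8", "", "text/plain"} {
		fmt.Printf("%-35q -> JSON: %v\n", ct, containerListingIsJSON(ct))
	}
}
```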
Version-Release number of selected component (if applicable):
All versions. Fix should go to 4.8 - 4.12.
How reproducible:
Always, when swift speaks HTTP2.
Steps to Reproduce:
1. 2. 3.
Actual results:
Expected results:
Additional info:
This is a clone of issue OCPBUGS-4874. The following is the description of the original issue:
—
OCPBUGS-3278 is supposed to fix the issue where the user was required to provide data about the baremetal hosts (including MAC addresses) in the install-config, even though this data is ignored.
However, we determine whether we should disable the validation by checking the second CLI arg to see if it is `agent`.
This works when the command is:
openshift-install agent create image --dir=whatever
But it fails when the arguments are ordered differently, e.g. as in dev-scripts:
openshift-install --log-level=debug --dir=whatever agent create image
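As a rough sketch of the idea (not the installer's actual parsing), the check could skip flag arguments and look at the first positional argument instead of os.Args[1]; this sketch assumes flags use the `--flag=value` form:

```go
package main

import (
	"fmt"
	"strings"
)

// isAgentCommand reports whether the invocation is an `agent` subcommand.
// Instead of checking only the second CLI argument, it skips flags and looks
// at the first positional argument, so both argument orders above are handled.
func isAgentCommand(args []string) bool {
	for _, arg := range args[1:] { // args[0] is the binary name
		if strings.HasPrefix(arg, "-") {
			continue // e.g. --log-level=debug, --dir=whatever
		}
		return arg == "agent"
	}
	return false
}

func main() {
	fmt.Println(isAgentCommand([]string{"openshift-install", "agent", "create", "image", "--dir=whatever"}))
	fmt.Println(isAgentCommand([]string{"openshift-install", "--log-level=debug", "--dir=whatever", "agent", "create", "image"}))
}
```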
This is a clone of issue OCPBUGS-5306. The following is the description of the original issue:
—
Description of problem:
One old machine is stuck in Deleting and many cluster operators become degraded when doing master replacement on a cluster with the OVN network.
Version-Release number of selected component (if applicable):
4.12.0-0.nightly-2023-01-02-175114
How reproducible:
Always, after several attempts
Steps to Reproduce:
1.Install a cluster liuhuali@Lius-MacBook-Pro huali-test % oc get clusterversion NAME VERSION AVAILABLE PROGRESSING SINCE STATUS version 4.12.0-0.nightly-2023-01-02-175114 True False 30m Cluster version is 4.12.0-0.nightly-2023-01-02-175114 liuhuali@Lius-MacBook-Pro huali-test % oc get co NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE MESSAGE authentication 4.12.0-0.nightly-2023-01-02-175114 True False False 33m baremetal 4.12.0-0.nightly-2023-01-02-175114 True False False 80m cloud-controller-manager 4.12.0-0.nightly-2023-01-02-175114 True False False 84m cloud-credential 4.12.0-0.nightly-2023-01-02-175114 True False False 80m cluster-api 4.12.0-0.nightly-2023-01-02-175114 True False False 81m cluster-autoscaler 4.12.0-0.nightly-2023-01-02-175114 True False False 80m config-operator 4.12.0-0.nightly-2023-01-02-175114 True False False 81m console 4.12.0-0.nightly-2023-01-02-175114 True False False 33m control-plane-machine-set 4.12.0-0.nightly-2023-01-02-175114 True False False 79m csi-snapshot-controller 4.12.0-0.nightly-2023-01-02-175114 True False False 81m dns 4.12.0-0.nightly-2023-01-02-175114 True False False 80m etcd 4.12.0-0.nightly-2023-01-02-175114 True False False 79m image-registry 4.12.0-0.nightly-2023-01-02-175114 True False False 74m ingress 4.12.0-0.nightly-2023-01-02-175114 True False False 74m insights 4.12.0-0.nightly-2023-01-02-175114 True False False 21m kube-apiserver 4.12.0-0.nightly-2023-01-02-175114 True False False 77m kube-controller-manager 4.12.0-0.nightly-2023-01-02-175114 True False False 77m kube-scheduler 4.12.0-0.nightly-2023-01-02-175114 True False False 77m kube-storage-version-migrator 4.12.0-0.nightly-2023-01-02-175114 True False False 81m machine-api 4.12.0-0.nightly-2023-01-02-175114 True False False 75m machine-approver 4.12.0-0.nightly-2023-01-02-175114 True False False 80m machine-config 4.12.0-0.nightly-2023-01-02-175114 True False False 74m marketplace 4.12.0-0.nightly-2023-01-02-175114 True False False 80m monitoring 4.12.0-0.nightly-2023-01-02-175114 True False False 72m network 4.12.0-0.nightly-2023-01-02-175114 True False False 83m node-tuning 4.12.0-0.nightly-2023-01-02-175114 True False False 80m openshift-apiserver 4.12.0-0.nightly-2023-01-02-175114 True False False 75m openshift-controller-manager 4.12.0-0.nightly-2023-01-02-175114 True False False 76m openshift-samples 4.12.0-0.nightly-2023-01-02-175114 True False False 22m operator-lifecycle-manager 4.12.0-0.nightly-2023-01-02-175114 True False False 81m operator-lifecycle-manager-catalog 4.12.0-0.nightly-2023-01-02-175114 True False False 81m operator-lifecycle-manager-packageserver 4.12.0-0.nightly-2023-01-02-175114 True False False 75m platform-operators-aggregated 4.12.0-0.nightly-2023-01-02-175114 True False False 74m service-ca 4.12.0-0.nightly-2023-01-02-175114 True False False 81m storage 4.12.0-0.nightly-2023-01-02-175114 True False False 74m liuhuali@Lius-MacBook-Pro huali-test % oc get machine NAME PHASE TYPE REGION ZONE AGE huliu-aws4d2-fcks7-master-0 Running m6i.xlarge us-east-2 us-east-2a 85m huliu-aws4d2-fcks7-master-1 Running m6i.xlarge us-east-2 us-east-2b 85m huliu-aws4d2-fcks7-master-2 Running m6i.xlarge us-east-2 us-east-2a 85m huliu-aws4d2-fcks7-worker-us-east-2a-m279f Running m6i.xlarge us-east-2 us-east-2a 80m huliu-aws4d2-fcks7-worker-us-east-2a-qg9ps Running m6i.xlarge us-east-2 us-east-2a 80m huliu-aws4d2-fcks7-worker-us-east-2b-ps6tz Running m6i.xlarge us-east-2 us-east-2b 80m liuhuali@Lius-MacBook-Pro huali-test % oc get controlplanemachineset NAME 
DESIRED CURRENT READY UPDATED UNAVAILABLE STATE AGE cluster 3 3 3 3 Active 86m 2.Edit controlplanemachineset, change instanceType to another value to trigger RollingUpdate liuhuali@Lius-MacBook-Pro huali-test % oc edit controlplanemachineset cluster controlplanemachineset.machine.openshift.io/cluster edited liuhuali@Lius-MacBook-Pro huali-test % oc get machine NAME PHASE TYPE REGION ZONE AGE huliu-aws4d2-fcks7-master-0 Running m6i.xlarge us-east-2 us-east-2a 86m huliu-aws4d2-fcks7-master-1 Running m6i.xlarge us-east-2 us-east-2b 86m huliu-aws4d2-fcks7-master-2 Running m6i.xlarge us-east-2 us-east-2a 86m huliu-aws4d2-fcks7-master-mbgz6-0 Provisioning m5.xlarge us-east-2 us-east-2a 5s huliu-aws4d2-fcks7-worker-us-east-2a-m279f Running m6i.xlarge us-east-2 us-east-2a 81m huliu-aws4d2-fcks7-worker-us-east-2a-qg9ps Running m6i.xlarge us-east-2 us-east-2a 81m huliu-aws4d2-fcks7-worker-us-east-2b-ps6tz Running m6i.xlarge us-east-2 us-east-2b 81m liuhuali@Lius-MacBook-Pro huali-test % oc get machine NAME PHASE TYPE REGION ZONE AGE huliu-aws4d2-fcks7-master-0 Deleting m6i.xlarge us-east-2 us-east-2a 92m huliu-aws4d2-fcks7-master-1 Running m6i.xlarge us-east-2 us-east-2b 92m huliu-aws4d2-fcks7-master-2 Running m6i.xlarge us-east-2 us-east-2a 92m huliu-aws4d2-fcks7-master-mbgz6-0 Running m5.xlarge us-east-2 us-east-2a 5m36s huliu-aws4d2-fcks7-worker-us-east-2a-m279f Running m6i.xlarge us-east-2 us-east-2a 87m huliu-aws4d2-fcks7-worker-us-east-2a-qg9ps Running m6i.xlarge us-east-2 us-east-2a 87m huliu-aws4d2-fcks7-worker-us-east-2b-ps6tz Running m6i.xlarge us-east-2 us-east-2b 87m liuhuali@Lius-MacBook-Pro huali-test % oc get machine NAME PHASE TYPE REGION ZONE AGE huliu-aws4d2-fcks7-master-1 Running m6i.xlarge us-east-2 us-east-2b 101m huliu-aws4d2-fcks7-master-2 Running m6i.xlarge us-east-2 us-east-2a 101m huliu-aws4d2-fcks7-master-mbgz6-0 Running m5.xlarge us-east-2 us-east-2a 15m huliu-aws4d2-fcks7-master-nbt9g-1 Provisioned m5.xlarge us-east-2 us-east-2b 3m1s huliu-aws4d2-fcks7-worker-us-east-2a-m279f Running m6i.xlarge us-east-2 us-east-2a 96m huliu-aws4d2-fcks7-worker-us-east-2a-qg9ps Running m6i.xlarge us-east-2 us-east-2a 96m huliu-aws4d2-fcks7-worker-us-east-2b-ps6tz Running m6i.xlarge us-east-2 us-east-2b 96m liuhuali@Lius-MacBook-Pro huali-test % oc get machine NAME PHASE TYPE REGION ZONE AGE huliu-aws4d2-fcks7-master-1 Deleting m6i.xlarge us-east-2 us-east-2b 149m huliu-aws4d2-fcks7-master-2 Running m6i.xlarge us-east-2 us-east-2a 149m huliu-aws4d2-fcks7-master-mbgz6-0 Running m5.xlarge us-east-2 us-east-2a 62m huliu-aws4d2-fcks7-master-nbt9g-1 Running m5.xlarge us-east-2 us-east-2b 50m huliu-aws4d2-fcks7-worker-us-east-2a-m279f Running m6i.xlarge us-east-2 us-east-2a 144m huliu-aws4d2-fcks7-worker-us-east-2a-qg9ps Running m6i.xlarge us-east-2