Jump to: Complete Features | Incomplete Features | Complete Epics | Incomplete Epics | Other Complete | Other Incomplete |
Note: this page shows the Feature-Based Change Log for a release
These features were completed when this image was assembled
1. Proposed title of this feature request
Add runbook_url to alerts in the OCP UI
2. What is the nature and description of the request?
If an alert includes a runbook_url label, it should appear in the UI for the alert as a link (a minimal example is sketched after this request).
3. Why does the customer need this? (List the business requirements here)
Customers can easily reach the alert runbook and address their issues.
4. List any affected packages or components.
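A minimal sketch of an alerting rule carrying a runbook_url (shown here under annotations, which is the common upstream placement; the request above refers to it as a label — rule name and expression are examples only):

```yaml
apiVersion: monitoring.coreos.com/v1
kind: PrometheusRule
metadata:
  name: example-alerts               # hypothetical
  namespace: openshift-monitoring
spec:
  groups:
  - name: example.rules
    rules:
    - alert: ExampleAlert
      expr: vector(1)
      labels:
        severity: warning
      annotations:
        runbook_url: https://example.com/runbooks/ExampleAlert.md   # surfaced as a link in the alert UI
        summary: Example alert that links to its runbook.
```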
Rebase openshift-controller-manager to k8s 1.24
As a user I would like to see all the events that the autoscaler creates, even duplicates. Having the CAO set this flag will allow me to continue to see these events.
We have carried a patch for the autoscaler that enables the duplication of events. This patch can now be dropped because upstream added a flag for this behavior in https://github.com/kubernetes/autoscaler/pull/4921
When this image was assembled, these features were not yet completed. Therefore, only the Jira Cards included here are part of this release
OLM would have to support a mechanism, similar to podAffinity, that allows multiple architecture values to be specified, enabling operators to be pinned to worker nodes of a matching architecture.
Ref: https://github.com/openshift/enhancements/pull/1014
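For context, a sketch of the kind of node-affinity term the referenced enhancement points at, which lets multiple architecture values be listed (illustrative pod spec excerpt; the exact mechanism OLM adopts is defined in the enhancement):

```yaml
affinity:
  nodeAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
      nodeSelectorTerms:
      - matchExpressions:
        - key: kubernetes.io/arch
          operator: In
          values:
          - amd64
          - arm64
          - s390x
        - key: kubernetes.io/os
          operator: In
          values:
          - linux
```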
Cut a new release of the OLM API and update OLM API dependency version (go.mod) in OLM package; then
Bring the upstream changes from OLM-2674 to the downstream olm repo.
A/C:
- New OLM API version release
- OLM API dependency updated in OLM Project
- OLM Subscription API changes downstreamed
- OLM Controller changes downstreamed
- Changes manually tested on Cluster Bot
We have a set of images
that should become multiarch images. This should be done both in upstream and downstream.
As a reference, we have built internally those images as multiarch and made them available as
They can be consumed by the Assisted Service pod via the following environment variables:
- name: AGENT_DOCKER_IMAGE
  value: registry.redhat.io/rhai-tech-preview/assisted-installer-agent-rhel8:latest
- name: CONTROLLER_IMAGE
  value: registry.redhat.io/rhai-tech-preview/assisted-installer-reporter-rhel8:latest
- name: INSTALLER_IMAGE
  value: registry.redhat.io/rhai-tech-preview/assisted-installer-rhel8:latest
As a user, I should be able to configure CSI driver to have a storage topology.
We drive OpenShift cross-market customer success and new customer adoption with constant improvements and feature additions to the existing capabilities of our OpenShift Core Networking (SDN and Network Edge). This feature captures that natural progression of the product.
There are definitely grey areas, but in general:
Questions to be addressed:
Goal: Provide queryable metrics and telemetry for cluster routes and sharding in an OpenShift cluster.
Problem: Today we test OpenShift performance and scale with best-guess or anecdotal evidence for the number of routes that our customers use. The best practice for a large number of routes in a cluster is to shard; however, we have no visibility into whether and how customers are using sharding.
Why is this important? These metrics will inform our performance and scale testing, documented cluster limits, and how customers are using sharding for best practice deployments.
Dependencies (internal and external):
Prioritized epics + deliverables (in scope / not in scope):
Not in scope:
Estimate (XS, S, M, L, XL, XXL):
Previous Work:
Open questions:
Acceptance criteria:
Epic Done Checklist:
Description:
As described in the Design Doc, the following information needs to be exported from the Cluster Ingress Operator:
Design 2 will be implemented as part of this story.
Acceptance Criteria:
Description:
As described in the Metrics to be sent via telemetry section of the Design Doc, the following metrics need to be sent from the OpenShift cluster to Red Hat premises:
The metrics should be allowlisted on the cluster side.
The steps described in Sending metrics via telemetry need to be followed, specifically step 5.
Depends on CFE-478.
Acceptance Criteria:
In the console-operator repo we need to add the `capability.openshift.io/console` annotation to all the manifests that the operator either contains or creates on the fly.
Manifests are currently present in /bindata and /manifest directories.
Here is an example of the insights-operator change.
Here is the overall enhancement doc.
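A minimal sketch of an annotated manifest, with the annotation key taken from this card (the exact key/value convention is defined in the linked enhancement, so treat this as illustrative):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: console-public              # hypothetical manifest from /bindata
  namespace: openshift-config-managed
  annotations:
    capability.openshift.io/console: "true"   # assumed value; see the enhancement doc
```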
This is an epic bucket for all activities surrounding the creation of a declarative approach to releasing and maintaining OLM catalogs.
When working on this Epic, it's important to keep in mind this other potentially related Epic: https://issues.redhat.com/browse/OLM-2276
Enhance the veneer rendering to be able to read the input veneer data from stdin, via a pipe, in a manner similar to https://dev.to/napicella/linux-pipes-in-golang-2e8j
The command could then be used in a manner similar to many k8s examples, like:
```shell
opm alpha render-veneer semver -o yaml < infile > outfile
```
Upstream issue link: https://github.com/operator-framework/operator-registry/issues/1011
Jira Description
As an OPM maintainer, I want to downstream the PR (for OCP 4.12) and backport it to OCP 4.11 so that IIB will NOT be impacted by the changes when it upgrades the OPM version to use the next/future opm upstream release (v1.25.0).
Summary / Background
IIB (the downstream service that manages the indexes) uses the upstream version. If they bump the OPM version to the next/future release (v1.25.0), which contains this change, before the downstream images are updated, the process for managing the indexes downstream will face issues and distributions will be impacted.
Acceptance Criteria
Definition of Ready
Definition of Done
Feature Overview
Provide CSI drivers to replace all the in-tree cloud provider drivers we currently have. These drivers will probably be released as tech preview versions first before being promoted to GA.
Goals
Requirements
Requirement | Notes | isMvp? |
---|---|---|
Framework for CSI driver | TBD | Yes |
Drivers should be available to install both in disconnected and connected mode | | Yes |
Drivers should upgrade from release to release without any impact | | Yes |
Drivers should be installable via CVO (when in-tree plugin exists) | | |
Out of Scope
This work will only cover the drivers themselves; it will not include:
Background, and strategic fit
In a future Kubernetes release (currently targeted for 1.21), in-tree cloud provider drivers will be deprecated and replaced with CSI equivalents. We need the drivers created so that we can continue to support these ecosystems appropriately.
Assumptions
Customer Considerations
Customers will need to be able to use the storage they want.
Documentation Considerations
This Epic is to track the GA of this feature
As an OCP user, I want images for GCP Filestore CSI Driver and Operator, so that I can install them on my cluster and utilize GCP Filestore shares.
We need to continue to maintain specific areas within storage, this is to capture that effort and track it across releases.
Goals
Requirements
Requirement | Notes | isMvp? |
---|---|---|
Telemetry | | No |
Certification | | No |
API metrics | | No |
Out of Scope
n/a
Background, and strategic fit
With the expected scale of our customer base, we want to keep the load of customer tickets / BZs low.
Assumptions
Customer Considerations
Documentation Considerations
Notes
In progress:
High prio:
Unsorted
Traditionally we did these updates as bugfixes, because we did them after the feature freeze (FF). Trying no-feature-freeze in 4.12. We will try to do as much as we can before FF, but we're quite sure something will slip past FF as usual.
Update the driver to the latest upstream release. Notify QE and docs with any new features and important bugfixes that need testing or documentation.
This includes ibm-vpc-node-label-updater!
(Using separate cards for each driver because these updates can be more complicated)
Update the driver to the latest upstream release. Notify QE and docs with any new features and important bugfixes that need testing or documentation.
(Using separate cards for each driver because these updates can be more complicated)
Update the driver to the latest upstream release. Notify QE and docs with any new features and important bugfixes that need testing or documentation.
(Using separate cards for each driver because these updates can be more complicated)
Update the driver to the latest upstream release. Notify QE and docs with any new features and important bugfixes that need testing or documentation.
(Using separate cards for each driver because these updates can be more complicated)
Update the driver to the latest upstream release. Notify QE and docs with any new features and important bugfixes that need testing or documentation.
(Using separate cards for each driver because these updates can be more complicated)
Update all OCP and kubernetes libraries in storage operators to the appropriate version for OCP release.
This includes (but is not limited to):
Operators:
There is a new driver release 5.0.0 since the last rebase that includes snapshot support:
https://github.com/kubernetes-sigs/ibm-vpc-block-csi-driver/releases/tag/v5.0.0
Rebase the driver on v5.0.0 and update the deployments in ibm-vpc-block-csi-driver-operator.
There are no corresponding changes in ibm-vpc-node-label-updater since the last rebase.
Update the driver to the latest upstream release. Notify QE and docs with any new features and important bugfixes that need testing or documentation.
(Using separate cards for each driver because these updates can be more complicated)
Update all CSI sidecars to the latest upstream release.
This includes update of VolumeSnapshot CRDs in https://github.com/openshift/cluster-csi-snapshot-controller-operator/tree/master/assets
The end of general support for vSphere 6.7 will be on October 15, 2022, so vSphere 6.7 will be deprecated in 4.11.
We want to encourage vSphere customers to upgrade to vSphere 7 in OCP 4.11 since VMware is ending general support for vSphere 6.7 in October 2022.
We want the cluster to be marked Upgradeable=false and to have a strong alert pointing to our docs / requirements.
related slack: https://coreos.slack.com/archives/CH06KMDRV/p1647541493096729
On new installations, we should make the StorageClass created by the CSI operator the default one.
However, we shouldn't do that in an upgrade scenario. The main reason is that users might have set a different quota on the CSI driver Storage Class.
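For reference, a StorageClass is marked as the default via a well-known annotation, roughly as below (name and provisioner are placeholders):

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: example-csi-sc              # hypothetical
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"
provisioner: example.csi.driver     # hypothetical CSI driver name
```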
Exit criteria:
This Epic tracks the GA of this feature
Epic Goal
On new installations, we should make the StorageClass created by the CSI operator the default one.
However, we shouldn't do that in an upgrade scenario. The main reason is that users might have set a different quota on the CSI driver Storage Class.
Exit criteria:
tldr: three basic claims, the rest is explanation and one example
While bugs are an important metric, fixing bugs is different from investing in maintainability and debuggability. Investing in fixing bugs will help alleviate immediate problems, but doesn't improve the ability to address future problems. You (may) get a code base with fewer bugs, but when you add a new feature, it will still be hard to debug problems and interactions. This pushes a code base toward stagnation, where it gets harder and harder to add features.
One alternative is to ask teams to produce ideas for how they would improve future maintainability and debuggability instead of focusing on immediate bugs. This would produce designs that make problem determination, bug resolution, and future feature additions faster over time.
I have a concrete example of one such outcome of focusing on bugs vs. quality. We have resolved many bugs about communication failures with ingress by finding problems with point-to-point network communication. We have fixed the individual bugs, but have not improved the code for future debugging. In so doing, we chase many hard-to-diagnose problems across the stack. The alternative is to create a point-to-point network connectivity capability. This would immediately improve bug resolution and stability (detection) for kuryr, ovs, legacy sdn, network-edge, kube-apiserver, openshift-apiserver, authentication, and console. Bug fixing does not produce the same impact.
We need more investment in our future selves. Saying, "teams should reserve this" doesn't seem to be universally effective. Perhaps an approach that directly asks for designs and impacts and then follows up by placing the items directly in planning and prioritizing against PM feature requests would give teams the confidence to invest in these areas and give broad exposure to systemic problems.
Relevant links:
Enable the chaos plugin https://coredns.io/plugins/chaos/ in our CoreDNS configuration so that we can use a DNS query to easily identify what DNS pods are responding to our requests.
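A sketch of what the chaos stanza could look like in a Corefile (in OpenShift the Corefile is rendered by the DNS operator, so this ConfigMap is illustrative only; version and contact strings are placeholders):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: dns-default                 # illustrative; managed by the DNS operator
  namespace: openshift-dns
data:
  Corefile: |
    .:5353 {
        chaos CoreDNS-001 ops@example.com   # answers CH-class TXT queries such as version.bind and authors.bind
        errors
        health
        kubernetes cluster.local in-addr.arpa ip6.arpa {
            pods insecure
            fallthrough in-addr.arpa ip6.arpa
        }
        forward . /etc/resolv.conf
        cache 30
    }
```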
Requirement | Notes | isMvp? |
---|---|---|
CI - MUST be running successfully with test automation | This is a requirement for ALL features. | YES |
Release Technical Enablement | Provide necessary release enablement details and documents. | YES |
This Section:
This Section: What does the person writing code, testing, documenting need to know? What context can be provided to frame this feature.
Questions to be addressed:
As a console user, I want to have the option to:
For Deployments we will add a 'Restart rollout' action button. This action will PATCH the Deployment object's 'spec.template.metadata.annotations' block by adding the 'openshift.io/restartedAt: <actual-timestamp>' annotation. This will restart the deployment by creating a new ReplicaSet.
For DeploymentConfigs we will add a 'Retry rollout' action button. This action will PATCH the latest revision of the ReplicationController object's 'metadata.annotations' block by setting 'openshift.io/deployment.phase: "New"' and removing openshift.io/deployment.cancelled and openshift.io/deployment.status-reason.
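A sketch of the corresponding merge-patch bodies (timestamp and values illustrative; a null value removes an annotation in a JSON merge patch):

```yaml
# 'Restart rollout' patch for a Deployment (illustrative)
spec:
  template:
    metadata:
      annotations:
        openshift.io/restartedAt: "2022-08-01T10:00:00Z"  # actual timestamp at the time of the action
---
# 'Retry rollout' patch for the latest ReplicationController of a DeploymentConfig (illustrative)
metadata:
  annotations:
    openshift.io/deployment.phase: "New"
    openshift.io/deployment.cancelled: null       # removed
    openshift.io/deployment.status-reason: null   # removed
```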
Acceptance Criteria:
BACKGROUND:
OpenShift console will be updated to allow rollout restart deployment from the console itself.
Currently, from the OpenShift console, for the resource “deploymentconfigs” we can only start and pause the rollout, and for the resource “deployment” we can only resume the rollout. Neither resource (Deployment or DeploymentConfig) has an option to restart the rollout. That is why the customer wants this functionality, so the same action can be performed from the OpenShift console as well as the CLI.
The customer wants developers who are not fluent with the oc tool and terminal utilities to be able to use the console instead of the terminal to restart a deployment, just as it is done through the CLI with the command “oc rollout restart deploy/<deployment-name>”.
Usually, when developers change the ConfigMap that a deployment uses, they have to restart its pods. Currently, developers have to use the oc rollout restart deployment command. The customer wants a button/menu in the console to perform the same action.
Design
Doc: https://docs.google.com/document/d/1i-jGtQGaA0OI4CYh8DH5BBIVbocIu_dxNt3vwWmPZdw/edit
As a developer, I want to make status.HostIP for Pods visible in the Pod details page of the OCP Web Console. Currently there is no way to view the node IP for a Pod in the OpenShift Web Console. When viewing a Pod in the console, the field status.HostIP is not visible.
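For context, a Pod status excerpt showing the field in question (addresses are placeholders):

```yaml
status:
  phase: Running
  hostIP: 10.0.139.241   # the node IP to surface in the Pod details page
  podIP: 10.131.0.27
```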
Acceptance criteria:
When OCP is performing a cluster upgrade, the user should be notified about this fact.
There are two possibilities for how to surface the cluster upgrade to users:
AC:
Note: We need to decide whether we want to distinguish this particular notification with a different color. cc Ali Mobrem
Created from: https://issues.redhat.com/browse/RFE-3024
4.11 MVP Requirements
Out of scope use cases (that are part of the Kubeframe/factory project):
Questions to be addressed:
As an OpenShift infrastructure owner, I want to deploy OpenShift clusters with dual-stack IPv4/IPv6
As an OpenShift infrastructure owner, I want to deploy OpenShift clusters with single-stack IPv6
IPv6 and dual-stack clusters are often requested by customers, especially Telco customers. Working with dual-stack clusters is a requirement for many, but it is also a transition toward single-stack IPv6 clusters, which for some of our users is the final destination.
Karim's work proving how the agent-based installer can deploy IPv6: IPv6 deploy with agent-based installer
For dual-stack installations, the agent-cluster-install.yaml must have both an IPv4 and an IPv6 subnet in networking.MachineNetwork, or assisted-service will throw an error. This field is in InstallConfig, but it must be added to agent-cluster-install in its Generate().
For IPv4 and IPv6 installs, setting up the MachineNetwork is not needed, but it also does not cause problems if it's set, so it should be fine to set it at all times.
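A minimal sketch of the networking stanza in agent-cluster-install.yaml for a dual-stack install (CIDRs are placeholders; field names assumed to mirror install-config):

```yaml
spec:
  networking:
    machineNetwork:
    - cidr: 192.168.111.0/24           # IPv4 machine network
    - cidr: fd2e:6f44:5dd8:c956::/120  # IPv6 machine network
```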
As an OpenShift infrastructure owner, I want to deploy a cluster zero with RHACM or MCE and have the required components installed when the installation is completed
BILLI makes it easier to deploy a cluster zero. BILLI users know at installation time, when they plan the installation, what the purpose of their cluster is. Day-2 steps are currently necessary to install operators, and users, especially when automating installations, want to finish the installation flow with their required components already installed.
As a customer, I want to be able to:
so that I can achieve
Description of criteria:
We are only allowing the user to provide extra manifests to install MCE at this time. We are not adding an option to "install mce" on the command line (or UI)
This requires/does not require a design proposal.
This requires/does not require a feature gate.
As a customer, I want to be able to:
so that I can achieve
Description of criteria:
We are only allowing the user to provide extra manifests to install MCE at this time. We are not adding an option to "install mce" on the command line (or UI)
This requires/does not require a design proposal.
This requires/does not require a feature gate.
Set the ClusterDeployment CRD to deploy OpenShift in FIPS mode and make sure that after deployment the cluster is set in that mode
In order to install FIPS-compliant clusters, we need to make sure that installconfig + agentconfig based deployments take the FIPS config in installconfig into account.
This task is about passing the config to agentclusterinstall so that it makes it into the ISO. Once there, AGENT-374 will pass it to assisted-service.
Add GA support for deploying OpenShift to IBM Public Cloud
Close the existing gaps to make OpenShift on IBM Cloud VPC (Next Gen2) Generally Available.
This epic tracks the changes needed to the ingress operator to support IBM DNS Services for private clusters.
Currently in OpenShift we do not support distributing hotfix packages to cluster nodes. In time-sensitive situations, a RHEL hotfix package can be the quickest route to resolving an issue.
Before we ship OCP CoreOS layering in https://issues.redhat.com/browse/MCO-165 we need to switch the format of what is currently `machine-os-content` to be the new base image.
The overall plan is:
After https://github.com/openshift/os/pull/763 is in the release image, teach the MCO how to use it. This is basically:
As an OCP CoreOS layering developer, having telemetry data about the number of clusters using osImageURL will help me understand how broadly this feature is being used and improve it accordingly.
Acceptance Criteria:
Assumption
Doc: https://docs.google.com/document/d/1sXCaRt3PE0iFmq7ei0Yb1svqzY9bygR5IprjgioRkjc/edit
Customers do not pay Red Hat more to run HyperShift control planes and supporting infrastructure than Standalone control planes and supporting infrastructure.
Assumption
Run cluster-storage-operator (CSO) + AWS EBS CSI driver operator + AWS EBS CSI driver control-plane Pods in the management cluster, run the driver DaemonSet in the hosted cluster.
More information here: https://docs.google.com/document/d/1sXCaRt3PE0iFmq7ei0Yb1svqzY9bygR5IprjgioRkjc/edit
As an OCP support engineer, I want the same guest cluster storage-related objects in the output of "hypershift dump cluster --dump-guest-cluster" as in "oc adm must-gather", so I can debug storage issues easily.
must-gather collects: storageclasses, persistentvolumes, volumeattachments, csidrivers, csinodes, volumesnapshotclasses, volumesnapshotcontents
hypershift collects none of this, the relevant code is here: https://github.com/openshift/hypershift/blob/bcfade6676f3c344b48144de9e7a36f9b40d3330/cmd/cluster/core/dump.go#L276
Exit criteria:
As HyperShift Cluster Instance Admin, I want to run AWS EBS CSI driver operator + control plane of the CSI driver in the management cluster, so the guest cluster runs just my applications.
Exit criteria:
As HyperShift Cluster Instance Admin, I want to run cluster-storage-operator (CSO) in the management cluster, so the guest cluster runs just my applications.
Exit criteria:
CNCC was moved to the management cluster and it should use proxy settings defined for the management cluster.
Customers do not pay Red Hat more to run HyperShift control planes and supporting infrastructure than Standalone control planes and supporting infrastructure.
Assumption
cluster-snapshot-controller-operator is running on the CP.
More information here: https://docs.google.com/document/d/1sXCaRt3PE0iFmq7ei0Yb1svqzY9bygR5IprjgioRkjc/edit
As an OpenShift developer, I want cluster-csi-snapshot-controller-operator to use existing controllers in library-go, so I don't need to maintain yet more code that does the same thing as library-go.
Note: if this refactoring introduces any new conditions, we must make sure that 4.11 snapshot controller clears them to support downgrade! This will need 4.11 BZ + z-stream update!
Similarly, if some conditions become obsolete / not managed by any controller, they must be cleared by 4.12 operator.
Exit criteria:
As HyperShift Cluster Instance Admin, I want to run cluster-csi-snapshot-controller-operator in the management cluster, so the guest cluster runs just my applications.
Exit criteria:
oc-mirror is a GA product as of OpenShift 4.11.
The goal of this feature is to address any future customer requests for new features or capabilities in oc-mirror.
Pre-Work Objectives
Since some of our requirements from the ACM team will not be available for the 4.12 timeframe, the team should work on anything we can get done in the scope of the console repo so that when the required items are available in 4.13, we can be more nimble in delivering GA content for the Unified Console Epic.
Overall GA Key Objective
Providing our customers with a single, simplified user experience (Hybrid Cloud Console) that is extensible, can run locally or in the cloud, and is capable of everything from managing the fleet to deep-diving into a single cluster.
Why do customers want this?
Why do we want this?
Phase 2 Goal: Productization of the unified Console
As a developer I would like to disable clusters like *KS that we can't support for multi-cluster (for instance because we can't authenticate). The ManagedCluster resource has a vendor label that we can use to know if the cluster is supported.
cc Ali Mobrem Sho Weimer Jakub Hadvig
UPDATE: 9/20/22 : we want an allow-list with OpenShift, ROSA, ARO, ROKS, and OpenShiftDedicated
Acceptance criteria:
RHEL CoreOS should be updated to RHEL 9.2 sources to take advantage of newer features, hardware support, and performance improvements.
Requirement | Notes | isMvp? |
---|---|---|
CI - MUST be running successfully with test automation | This is a requirement for ALL features. | YES |
Release Technical Enablement | Provide necessary release enablement details and documents. | YES |
Questions to be addressed:
PROBLEM
We would like to improve our signal for RHEL 9 readiness by increasing internal engineering engagement and external partner engagement on our community OpenShift offering, OKD.
PROPOSAL
Adding OKD to run on SCOS (a CentOS stream for CoreOS) brings the community offering closer to what a partner or an internal engineering team might expect on OCP.
ACCEPTANCE CRITERIA
Image has been switched/included:
DEPENDENCIES
The SCOS build payload.
RELATED RESOURCES
OKD+SCOS proposal: https://docs.google.com/presentation/d/1_Xa9Z4tSqB7U2No7WA0KXb3lDIngNaQpS504ZLrCmg8/edit#slide=id.p
OKD+SCOS work draft: https://docs.google.com/document/d/1cuWOXhATexNLWGKLjaOcVF4V95JJjP1E3UmQ2kDVzsA/edit
Acceptance Criteria
A stable OKD on SCOS is built and made available to the community every sprint.
This comes up when installing ipi-on-aws on arm64 with the custom payload build at quay.io/aleskandrox/okd-release:4.12.0-0.okd-centos9-full-rebuild-arm64, which is using SCOS as the machine-os-content image.
```
[root@ip-10-0-135-176 core]# crictl logs c483c92e118d8
2022-08-11T12:19:39+00:00 [cnibincopy] FATAL ERROR: Unsupported OS ID=scos
```
The probable fix has to land on https://github.com/openshift/cluster-network-operator/blob/master/bindata/network/multus/multus.yaml#L41-L53
The MCO should properly report its state in a way that's consistent and able to be understood by customers, troubleshooters, and maintainers alike.
Some customer cases have revealed scenarios where the MCO state reporting is misleading and therefore could be unreliable to base decisions and automation on.
In addition to correcting some incorrect states, the MCO will be enhanced for a more granular view of update rollouts across machines.
The MCO should properly report its state in a way that's consistent and able to be understood by customers, troubleshooters, and maintainers alike.
For this epic, "state" means "what is the MCO doing?" – so the goal here is to try to make sure that it's always known what the MCO is doing.
This includes:
While this probably crosses a little bit into the "status" portion of certain MCO objects, as some state is definitely recorded there, this probably shouldn't turn into a "better status reporting" epic. I'm interpreting "status" to mean "how is it going" so status is maybe a "detail attached to a state".
Exploration here: https://docs.google.com/document/d/1j6Qea98aVP12kzmPbR_3Y-3-meJQBf0_K6HxZOkzbNk/edit?usp=sharing
https://docs.google.com/document/d/17qYml7CETIaDmcEO-6OGQGNO0d7HtfyU7W4OMA6kTeM/edit?usp=sharing
The current property description is:
configuration represents the current MachineConfig object for the machine config pool.
But in a 4.12.0-ec.4 cluster, the actual semantics seem to be something closer to "the most recent rendered config that we completely leveled on". We should at least update the godocs to be more specific about the intended semantics. And perhaps consider adjusting the semantics?
This section includes Jira cards that are linked to an Epic, but the Epic itself is not linked to any Feature. These epics were completed when this image was assembled
This story only covers API components. We will create a separate story for other utility functions.
Today we are generating documentation for Console's Dynamic Plugin SDK in
frontend/packages/dynamic-plugin-sdk. We are missing ts-doc for a set of hooks and components.
We are generating the markdown from the dynamic-plugin-sdk using
yarn generate-doc
Here is the list of the API that the dynamic-plugin-sdk is exposing:
https://gist.github.com/spadgett/0ddefd7ab575940334429200f4f7219a
Acceptance Criteria:
Out of Scope:
This enhancement introduces support for provisioning and upgrading heterogeneous architecture clusters in phases.
We need to scan through the compute nodes and build a set of supported architectures from those. Each node on the cluster has a label for architecture, e.g. `kubernetes.io/arch=arm64`, `kubernetes.io/arch=amd64`, etc. Based on the set of supported architectures, the console will need to surface only those operators in the OperatorHub which are supported on our nodes. Each operator's PackageManifest contains labels that indicate the operator's supported architectures, e.g. `operatorframework.io/arch.s390x: supported`. An operator can be supported on multiple architectures.
AC:
OS and arch filtering: https://github.com/openshift/console/blob/2ad4e17d76acbe72171407fc1c66ca4596c8aac4/frontend/packages/operator-lifecycle-manager/src/components/operator-hub/operator-hub-items.tsx#L49-L86
@jpoulin is good to ask about heterogeneous clusters.
This enhancement introduces support for provisioning and upgrading heterogeneous architecture clusters in phases.
We need to scan through the compute nodes and build a set of supported architectures from those. Each node on the cluster has a label for architecture, e.g. kubernetes.io/arch=arm64, kubernetes.io/arch=amd64, etc. Based on the set of supported architectures, the console will need to surface only those operators in the OperatorHub which are supported on our nodes.
AC:
@jpoulin is good to ask about heterogeneous clusters.
An epic we can duplicate for each release to ensure we have a place to catch things we ought to be doing regularly but can tend to fall by the wayside.
As a developer, I want to be able to clean up the css markup after making the css / scss changes required for dark mode and remove any old unused css / scss content.
Acceptance criteria:
1. Proposed title of this feature request
Basic authentication for Helm Chart repository in helmchartrepositories.helm.openshift.io CRD.
2. What is the nature and description of the request?
As of v4.6.9, the HelmChartRepository CRD only supports client TLS authentication through spec.connectionConfig.tlsClientConfig.
3. Why do you need this? (List the business requirements here)
Basic authentication is widely used by many chart repository managers (Nexus OSS, Artifactory, etc.).
The Helm CLI also supports it with the helm repo add command.
https://helm.sh/docs/helm/helm_repo_add/
4. How would you like to achieve this? (List the functional requirements here)
Probably by extending the CRD:
spec:
  connectionConfig:
    username: username
    password:
      secretName: secret-name
The secret namespace should be openshift-config to align with the tlsClientConfig behavior.
5. For each functional requirement listed in question 4, specify how Red Hat and the customer can test to confirm the requirement is successfully implemented.
Try to pull helm charts from remote private chart repositories that have disabled anonymous access and offer basic authentication.
E.g.: https://github.com/sonatype/docker-nexus
As an OCP user, I would like to be able to install helm charts from repos added to ODC with basic authentication fields populated.
We need to support helm installs for Repos that have the basic authentication secret name and namespace.
Updating the ProjectHelmChartRepository CRD is already done in a different story.
Supporting the HelmChartRepository CR: this feature will be scoped first to project/namespace-scoped repos.
If the new fields for basic auth are set in the repo CR, then use those credentials when making API calls to Helm to install/upgrade charts. We will error out if the logged-in user does not have access to the secret referenced by the repo CR. If the basic auth fields are not present, we assume it is not an authenticated repo.
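A sketch of a namespace-scoped repo CR carrying basic-auth fields, assuming the field shape proposed in the feature request above (URL, names, and field layout are illustrative, not the final API):

```yaml
apiVersion: helm.openshift.io/v1beta1
kind: ProjectHelmChartRepository
metadata:
  name: private-charts                 # hypothetical
spec:
  connectionConfig:
    url: https://nexus.example.com/repository/helm-private   # hypothetical repo URL
    username: chart-user               # assumed field, per the proposal above
    password:
      secretName: private-charts-auth  # assumed field, per the proposal above
```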
None
NA
I can list, install and update charts on authenticated repos from ODC
Needs Documentation both upstream and downstream
Needs new unit test covering repo auth
Dependencies identified
Blockers noted and expected delivery timelines set
Design is implementable
Acceptance criteria agreed upon
Story estimated
Unknown
Verified
Unsatisfied
ACCEPTANCE CRITERIA
NOTES
ACCEPTANCE CRITERIA
NOTES
We plan to build the Ironic container images using RHEL 9 as the base image in OCP 4.12.
This is required because the Ironic components have abandoned support for CentOS Stream 8 and Python 3.6/3.7 upstream during the most recent development cycle that will produce the stable Zed release, in favor of CentOS Stream 9 and Python 3.8/3.9.
More info on RHEL8 to RHEL9 transition in OCP can be found at https://docs.google.com/document/d/1N8KyDY7KmgUYA9EOtDDQolebz0qi3nhT20IOn4D-xS4
Update the Ironic software to pick up the latest bug fixes.
This is an API change and we will consider this as a feature request.
https://issues.redhat.com/browse/NE-799 Please check this for more details
https://issues.redhat.com/browse/NE-799 Please check this for more details
No
N/A
We need tests for the ovirt-csi-driver and the cluster-api-provider-ovirt. These tests help us to
Also, having dedicated tests on lower levels with a smaller scope (unit, integration, ...) has the following benefits:
Integration tests need to be implemented according to https://cluster-api.sigs.k8s.io/developer/testing.html#integration-tests using envtest.
As a user, I would like to be informed in an intuitive way when quotas have been reached in a namespace.
Refer below for more details
As a user, in the topology view, I would like to be updated intuitively if any of the deployments have reached their quota limits.
Refer below for more details
Provide a form driven experience to allow cluster admins to manage the perspectives to meet the ACs below.
We have heard the following requests from customers and developer advocates:
As an admin, I want to hide the admin perspective for non-privileged users or hide the developer perspective for all users
Based on the https://issues.redhat.com/browse/ODC-6730 enhancement proposal, it is required to extend the console configuration CRD to enable the cluster admins to configure this data in the console resource
Previous customization work:
As an admin, I should be able to see a code snippet that shows how to add user perspectives
Based on the https://issues.redhat.com/browse/ODC-6732 enhancement proposal, the cluster admin can add user perspectives
To help the cluster admin configure the perspectives correctly, the developer console should provide a code snippet for the customization of the YAML resource (Console CRD).
Customize Perspective Enhancement PR: https://github.com/openshift/enhancements/pull/1205
Previous work:
As an admin, I want to hide user perspective(s) based on the customization.
As an admin, I want to be able to use a form driven experience to hide user perspective(s)
Customers don't want their users to have access to some/all of the items which are available in the Developer Catalog. The request is to change access for the cluster, not per user or persona.
Provide a form-driven experience to allow cluster admins to easily disable the Developer Catalog, or one or more of the sub-catalogs in the Developer Catalog.
Multiple customer requests.
We need to consider how this will work with subcatalogs which are installed by operators: VMs, Event Sources, Event Catalogs, Managed Services, Cloud based services
As an admin, I want to hide/disable access to specific sub-catalogs in the developer catalog or the complete dev catalog for all users across all namespaces.
Based on the https://issues.redhat.com/browse/ODC-6732 enhancement proposal, it is required to extend the console configuration CRD to enable the cluster admins to configure this data in the console resource
Extend the "customization" spec type definition for the CRD in the openshift/api project
Previous customization work:
As a cluster-admin, I should be able to see a code snippet that shows how to enable sub-catalogs or the entire dev catalog.
Based on the https://issues.redhat.com/browse/ODC-6732 enhancement proposal, the cluster admin can add sub-catalog(s) from the Developer Catalog or the Dev catalog as a whole.
To help the cluster admin configure the sub-catalog list correctly, the developer console should provide a code snippet for the customization of the YAML resource (Console CRD).
Previous work:
As an admin, I want to hide sub-catalogs in the developer catalog or hide the developer catalog completely based on the customization.
Add a SOCKS proxy to cluster-network-operator so EgressIP can use gRPC to reach worker nodes.
With the introduction of gRPC as the means for determining the state of a given egress node, HyperShift should be able to leverage the SOCKS proxy to know the state of each egress node.
References relevant to this work:
1281-network-proxy
https://coreos.slack.com/archives/C01C8502FMM/p1658427627751939
https://github.com/openshift/hypershift/pull/1131/commits/28546dc587dc028dc8bded715847346ff99d65ea
This Epic is here to track the rebase we need to do when kube 1.25 is GA https://www.kubernetes.dev/resources/release/
Keeping this in mind can help us plan our time better. ATTOW GA is planned for August 23
https://docs.google.com/document/d/1h1XsEt1Iug-W9JRheQas7YRsUJ_NQ8ghEMVmOZ4X-0s/edit --> this is the link for rebase help
We need to rebase cloud network config controller to 1.25 when the kube 1.25 rebase lands.
This section includes Jira cards that are linked to an Epic, but the Epic itself is not linked to any Feature. These epics were not completed when this image was assembled
This epic tracks "business as usual" requirements / enhancements / bug fixing of the Insights Operator.
Today the links point at a rule-scoped page, but that page lacks information about recommended resolution. You can click through by cluster ID to your specific cluster and get that recommendation advice, but it would be more convenient and less confusing for customers if we linked directly to the cluster-scoped recommendation page.
We can implement by updating the template here to be:
fmt.Sprintf("https://console.redhat.com/openshift/insights/advisor/clusters/%s?first=%s%%7C%s", clusterID, ruleIDStr, rec.ErrorKey)
or something like that.
unknowns
request is clear, solution/implementation to be further clarified
`@openshift-console/plugin-shared` (NPM) is a package that will contain shared components that can be upversioned separately by the Plugins so they can keep core compatibility low but upversion and support more shared components as we need them.
This isn't documented today. We need to do that.
To align with https://github.com/openshift/dynamic-plugin-sdk, plugin metadata field dependencies as well as the @console/pluginAPI entry contained within should be made optional.
If a plugin doesn't declare the @console/pluginAPI dependency, the Console release version check should be skipped for that plugin.
We should have a global notification or the `Console plugins` page (e.g., k8s/cluster/operator.openshift.io~v1~Console/cluster/console-plugins) should alert users when console operator `spec.managementState` is `Unmanaged` as changes to `enabled` for plugins will have no effect.
Following https://coreos.slack.com/archives/C011BL0FEKZ/p1650640804532309, it would be useful for us (network observability team) to have access to ResourceIcon in dynamic-plugin-sdk.
Currently ResourceLink is exported but not ResourceIcon
AC:
Acceptance Criteria: Add missing API docs for *Icon and *Status components in the API docs.
During the development of https://issues.redhat.com/browse/CONSOLE-3062, it was determined additional information is needed in order to assist a user when troubleshooting a Failed plugin (see https://github.com/openshift/console/pull/11664#issuecomment-1159024959). As it stands today, there is no data available to the console to relay to the user regarding why the plugin Failed. Presumably, a message should be added to NotLoadedDynamicPlugin to address this gap.
AC: Add `message` property to NotLoadedDynamicPluginInfo type.
The console has good error boundary components that are useful for dynamic plugin.
Exposing them will enable the plugins to get the same look and feel of handling react errors as console
The minimum requirement right now is to expose the ErrorBoundaryFallbackPage component from
https://github.com/openshift/console/blob/master/frontend/packages/console-shared/src/components/error/fallbacks/ErrorBoundaryFallbackPage.tsx
The extension `console.dashboards/overview/detail/item` doesn't constrain the content to fit the card.
The details-card has an expectation that a <dd> item will be the last item (for spacing between items). Our static details-card items use a component called 'OverviewDetailItem'. This isn't enforced in the extension and can cause undesired padding issues if they just do whatever they want.
I feel our approach here should be making the extension take the props of 'OverviewDetailItem' where 'children' is the new 'component'.
Based on API review CONSOLE-3145, we have decided to deprecate the following APIs:
cc Andrew Ballantyne Bryan Florkiewicz
Currently our `api.md` does not generate docs with "tags" (aka `@deprecated`) – we'll need to add that functionality to the `generate-doc.ts` script. See the code that works for `console-extensions.md`
When defining two proxy endpoints,
apiVersion: console.openshift.io/v1alpha1
kind: ConsolePlugin
metadata:
  ...
  name: forklift-console-plugin
spec:
  displayName: Console Plugin Template
  proxy:
    service:
      basePath: /
I get two proxy endpoints
/api/proxy/plugin/forklift-console-plugin/forklift-inventory
and
/api/proxy/plugin/forklift-console-plugin/forklift-must-gather-api
but both proxy to the `forklift-must-gather-api` service
e.g.
curl to:
[server url]/api/proxy/plugin/forklift-console-plugin/forklift-inventory
will point to the `forklift-must-gather-api` service, instead of the `forklift-inventory` service
Move `frontend/public/components/nav` to `packages/console-app/src/components/nav` and address any issues resulting from the move.
There will be some expected lint errors relating to cyclical imports. These will require some refactoring to address.
Currently, the ConsolePlugin API version is v1alpha1. Since we are going GA with dynamic plugins, we should create a v1 version.
This would require updates in following repositories:
AC:
NOTE: This story does not include the conversion webhook change which will be created as a follow on story
We neither use nor support static plugin nav extensions anymore so we should remove the API in the static plugin SDK and get rid of related cruft in our current nav components.
AC: Remove static plugin nav extensions code. Check the navigation code for any references to the old API.
As a user, I want to be able to:
so that I can achieve
Description of criteria:
Detail about what is specifically not being delivered in the story
This is a follow-up Epic to https://issues.redhat.com/browse/MCO-144, which aimed to get in-place upgrades working for HyperShift. This epic aims to capture additional work to focus on using CoreOS/OCP layering in HyperShift, which has benefits such as:
- removing or reducing the need for ignition
- maintaining feature parity between self-driving and managed OCP models
- adding additional functionality such as hotfixes
Currently not implemented, and will require the MCD hypershift mode to be adjusted to handle disruptionless upgrades like regular MCD
Right now in https://github.com/openshift/hypershift/pull/1258 you can only perform one upgrade at a time. Multiple upgrades will break due to controller logic
Properly create logic to handle manifest creation/updates and deletion, so the logic is more bulletproof
Changes made in METAL-1 open up opportunities to improve our handling of images by cleaning up redundant code that generates extra work for the user and extra load for the cluster.
We only need to run the image cache DaemonSet if there is a QCOW URL to be mirrored (effectively this means a cluster installed with 4.9 or earlier). We can stop deploying it for new clusters installed with 4.10 or later.
Currently, the image-customization-controller relies on the image cache running on every master to provide the shared hostpath volume containing the ISO and initramfs. The first step is to replace this with a regular volume and an init container in the i-c-c pod that extracts the images from machine-os-images. We can use the copy-metal -image-build flag (instead of -all used in the shared volume) to provide only the required images.
Once i-c-c has its own volume, we can switch the image extraction in the metal3 Pod's init container to use the -pxe flag instead of -all.
The machine-os-images init container for the image cache (not the metal3 Pod) can be removed. The whole image cache deployment is now optional and need only be started if provisioningOSDownloadURL is set (and in fact should be deleted if it is not).
Description of the problem:
Cluster installation fails if the installation disk has LVM on RAID:
Host: test-infra-cluster-3cc862c9-master-0, reached installation stage Failed: failed executing nsenter [--target 1 --cgroup --mount --ipc --pid -- mdadm --stop /dev/md0], Error exit status 1, LastOutput "mdadm: Cannot get exclusive access to /dev/md0:Perhaps a running process, mounted filesystem or active volume group?"
How reproducible:
100%
Steps to reproduce:
1. Install a cluster whose master nodes have a disk with LVM on RAID (reproducible using the test: https://gitlab.cee.redhat.com/ocp-edge-qe/kni-assisted-installer-auto/-/blob/master/api_tests/test_disk_cleanup.py#L97)
Actual results:
Installation failed
Expected results:
Installation success
Description of the problem:
When running assisted-installer on a machine where there is more than one volume group per physical volume, only the first volume group will be cleaned up. This leads to problems later and will lead to errors such as:
Failed - failed executing nsenter [--target 1 --cgroup --mount --ipc --pid -- pvremove /dev/sda -y -ff], Error exit status 5, LastOutput "Can't open /dev/sda exclusively. Mounted filesystem?
How reproducible:
Set up a VM with more than one volume group per physical volume. As an example, look at the following sample from a customer cluster.
List block devices:
```
/usr/bin/lsblk -o NAME,MAJ:MIN,SIZE,TYPE,FSTYPE,KNAME,MODEL,UUID,WWN,HCTL,VENDOR,STATE,TRAN,PKNAME
NAME MAJ:MIN SIZE TYPE FSTYPE KNAME MODEL UUID WWN HCTL VENDOR STATE TRAN PKNAME
loop0 7:0 125.9G loop xfs loop0 c080b47b-2291-495c-8cc0-2009ebc39839
loop1 7:1 885.5M loop squashfs loop1
sda 8:0 894.3G disk sda INTEL SSDSC2KG96 0x55cd2e415235b2db 1:0:0:0 ATA running sas
|-sda1 8:1 250M part sda1 0x55cd2e415235b2db sda
|-sda2 8:2 750M part ext2 sda2 3aa73c72-e342-4a07-908c-a8a49767469d 0x55cd2e415235b2db sda
|-sda3 8:3 49G part xfs sda3 ffc3ccfe-f150-4361-8ae5-f87b17c13ac2 0x55cd2e415235b2db sda
|-sda4 8:4 394.2G part LVM2_member sda4 Ua3HOc-Olm4-1rma-q0Ug-PtzI-ZOWg-RJ63uY 0x55cd2e415235b2db sda
`-sda5 8:5 450G part LVM2_member sda5 W8JqrD-ZvaC-uNK9-Y03D-uarc-Tl4O-wkDdhS 0x55cd2e415235b2db sda
  `-nova-instance 253:0 3.1T lvm ext4 dm-0 d15e2de6-2b97-4241-9451-639f7b14594e running sda5
sdb 8:16 894.3G disk sdb INTEL SSDSC2KG96 0x55cd2e415235b31b 1:0:1:0 ATA running sas
`-sdb1 8:17 894.3G part LVM2_member sdb1 6ETObl-EzTd-jLGw-zVNc-lJ5O-QxgH-5wLAqD 0x55cd2e415235b31b sdb
  `-nova-instance 253:0 3.1T lvm ext4 dm-0 d15e2de6-2b97-4241-9451-639f7b14594e running sdb1
sdc 8:32 894.3G disk sdc INTEL SSDSC2KG96 0x55cd2e415235b652 1:0:2:0 ATA running sas
`-sdc1 8:33 894.3G part LVM2_member sdc1 pBuktx-XlCg-6Mxs-lddC-qogB-ahXa-Nd9y2p 0x55cd2e415235b652 sdc
  `-nova-instance 253:0 3.1T lvm ext4 dm-0 d15e2de6-2b97-4241-9451-639f7b14594e running sdc1
sdd 8:48 894.3G disk sdd INTEL SSDSC2KG96 0x55cd2e41521679b7 1:0:3:0 ATA running sas
`-sdd1 8:49 894.3G part LVM2_member sdd1 exVSwU-Pe07-XJ6r-Sfxe-CQcK-tu28-Hxdnqo 0x55cd2e41521679b7 sdd
  `-nova-instance 253:0 3.1T lvm ext4 dm-0 d15e2de6-2b97-4241-9451-639f7b14594e running sdd1
sr0 11:0 989M rom iso9660 sr0 Virtual CDROM0 2022-06-17-18-18-33-00 0:0:0:0 AMI running usb
```
Now run the assisted installer and try to install an SNO node on this machine, you will find that the installation will fail with a message that indicates that it could not exclusively access /dev/sda
Actual results:
The installation will fail with a message that indicates that it could not exclusively access /dev/sda
Expected results:
The installation should proceed and the cluster should start to install.
Suspected Cases
https://issues.redhat.com/browse/AITRIAGE-3809
https://issues.redhat.com/browse/AITRIAGE-3802
https://issues.redhat.com/browse/AITRIAGE-3810
Same thing as we've had in assisted-service: we sometimes fail to install golangci-lint by fetching release artifacts from GitHub directly. That's usually because the same IP address (the CI build cluster) accesses GitHub at a high rate, leading to 429 (too many requests).
The way we fixed it for assisted-service was to change the installation to use a quay.io image that already ships the binary.
Example for such a failure: https://prow.ci.openshift.org/view/gs/origin-ci-test/pr-logs/pull/openshift_release/30788/rehearse-30788-periodic-ci-openshift-assisted-installer-agent-release-ocm-2.6-subsystem-test-periodic/1551879759036682240
Filter for all recent failures: https://search.ci.openshift.org/?search=golangci%2Fgolangci-lint+crit+unable+to+find&maxAge=168h&context=1&type=build-log&name=.*assisted.*&excludeName=&maxMatches=5&maxBytes=20971520&groupBy=job
Section 5 of PRD: https://docs.google.com/document/d/1fF-Ajdzc9EDDg687FzTrX577hvY9NdK0/edit#heading=h.gjdgxs
Testing and collaboration with NVIDIA: https://docs.google.com/spreadsheets/d/1LHY-Af-2kQHVwtW4aVdHnmwZLTiatiyf-ySffC8O5NM/edit#gid=0
Deploying Nvidia Patches: https://docs.google.com/document/d/1yR4lphjPKd6qZ9sGzZITl0wH1r4ykfMKPjUnlzvWji4/edit#
This is the continuation of https://issues.redhat.com/browse/NHE-273, but now the focus is on the remaining flows.
Description of problem:
check_pkt_length cannot be offloaded without 1) sFlow offload patches in Open vSwitch and 2) hardware driver support. Since 1) will not be done anytime soon, we need a workaround for the check_pkt_length issue.
Version-Release number of selected component (if applicable):
4.11/4.12
How reproducible:
Always
Steps to Reproduce:
1. Any flow that has check_pkt_len():
- 5-b: Pod -> NodePort Service traffic (Pod Backend - Different Node)
- 6-b: Pod -> NodePort Service traffic (Host Backend - Different Node)
- 4-b: Pod -> Cluster IP Service traffic (Host Backend - Different Node)
- 10-b: Host Pod -> Cluster IP Service traffic (Host Backend - Different Node)
- 11-b: Host Pod -> NodePort Service traffic (Pod Backend - Different Node)
- 12-b: Host Pod -> NodePort Service traffic (Host Backend - Different Node)
Actual results:
Poor performance due to upcalls when check_pkt_len() is not supported.
Expected results:
Good performance.
Additional info:
https://docs.google.com/spreadsheets/d/1LHY-Af-2kQHVwtW4aVdHnmwZLTiatiyf-ySffC8O5NM/edit#gid=670206692
As a developer building container images on OpenShift
I want to specify that my build should run without elevated privileges
So that builds do not run as root from the host's perspective with elevated privileges
No QE required for Dev Preview. OpenShift regression testing will verify that existing behavior is not impacted.
We will need to document how to enable this feature, with sufficient warnings regarding Dev Preview.
This likely warrants an OpenShift blog post, potentially?
Make sure that the CSI driver automatically updates oVirt credentials when they are updated in OpenShift.
In the CSI driver operator we should add the `withSecretHashAnnotation` call from library-go, like this: https://github.com/openshift/aws-ebs-csi-driver-operator/blob/53ed27b2a0eaa655338da180a79897855b366ac7/pkg/operator/starter.go#L138
We have been running into a number of problems with configure-ovs and nodeip-configuration selecting different interfaces in OVNK deployments. This causes connectivity issues, so we need some way to ensure that everything uses the same interface/IP.
Currently configure-ovs runs before nodeip-configuration, but since nodeip-configuration is the source of truth for IP selection regardless of CNI plugin, I think we need to look at swapping that order. That way configure-ovs could look at what nodeip-configuration chose and not have to implement its own interface selection logic.
I'm targeting this at 4.12 because even though there's probably still time to get it in for 4.11, changing the order of boot services is always a little risky and I'd prefer to do it earlier in the cycle so we have time to tease out any issues that arise. We may need to consider backporting the change though since this has been an issue at least back to 4.10.
As an admin, I would like openshift-* namespaces with an operator to be labeled with security.openshift.io/scc.podSecurityLabelSync=true to ensure the continual functioning of operators without manual intervention. The label should only be applied to openshift-* namespaces with an operator (the presence of a ClusterServiceVersion resource) IF the label is not already present. This automation will help smooth functioning of the cluster and avoid frivolous operational events.
Context: As part of the PSA migration period, Openshift will ship with the "label sync'er" - a controller that will automatically adjust PSA security profiles in response to the workloads present in the namespace. We can assume that not all operators (produced by Red Hat, the community or ISVs) will have successfully migrated their deployments in response to upstream PSA changes. The label sync'er will sync, by default, any namespace not prefixed with "openshift-", of which an explicit label (security.openshift.io/scc.podSecurityLabelSync=true) is required for sync.
A/C:
- OLM operator has been modified (downstream only) to label any unlabelled "openshift-" namespace in which a CSV has been created
- If a labeled namespace containing at least one non-copied csv becomes unlabelled, it should be relabelled
- The implementation should be done in a way to eliminate or minimize subsequent downstream sync work (it is ok to make slight architectural changes to the OLM operator in the upstream to enable this)
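A minimal sketch of the resulting namespace metadata (namespace name hypothetical):

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: openshift-example-operator   # hypothetical namespace containing a CSV
  labels:
    security.openshift.io/scc.podSecurityLabelSync: "true"   # applied by OLM per the A/C above
```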
Goal
Provide an indication that advanced features are used
Problem
Today, customers and RH don't have the information on the actual usage of advanced features.
Why is this important?
Prioritized Scenarios
In Scope
1. Add a boolean variable in our telemetry to mark if the customer is using advanced features (PV encryption, encryption with KMS, external mode).
Not in Scope
Integrate with subscription watch - will be done by the subscription watch team with our help.
Customers
All
Customer Facing Story
As a compliance manager, I should be able to easily see whether all my clusters are using the right number of subscriptions.
What does success look like?
A clear indication in subscription watch for ODF usage (either essential or advanced).
1. Proposed title of this feature request
2. What is the nature and description of the request?
3. Why does the customer need this? (List the business requirements here)
4. List any affected packages or components.
_____________________
Link to main epic: https://issues.redhat.com/browse/RHSTOR-3173
We migrated most component as part of https://issues.redhat.com/browse/RHSTOR-2165
We now have a few components remaining, roughly 15 to 20%. This epic targets:
1) Add support for in-tree modal launcher
This epic tracks network tooling improvements for 4.12
A new framework and process should be developed to make sharing network tools with devs, support, and customers convenient. We are going to add some tools for OVN troubleshooting before OVN-Kubernetes goes default, some tools that we got from customer cases, and some more to help analyze and debug collected logs, based on the stable must-gather/sosreport format we now have thanks to the 4.11 Epic.
Our estimation for this Epic is 1 engineer * 2 Sprints
WHY:
This epic is important to help improve the time it takes our customers and our team to understand an issue within the cluster.
A focus of this epic is to develop tools to quickly allow debugging of a problematic cluster. This is crucial for the engineering team to help us scale. We want to provide a tool to our customers to help lower the cognitive burden to get at a root cause of an issue.
Alert if any of the ovn-controllers is disconnected from the southbound database for a period of time, using the metric ovn_controller_southbound_database_connected.
The metric updates every 2 minutes so please be mindful of this when creating the alert.
If the controller is disconnected for 10 minutes, fire an alert.
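A minimal sketch of what such a rule could look like; the rule name, namespace, severity, and summary are illustrative assumptions (not the final CNO implementation), and the expression assumes the metric reports 1 when connected and 0 when disconnected:
apiVersion: monitoring.coreos.com/v1
kind: PrometheusRule
metadata:
  name: ovn-controller-disconnect          # illustrative name
  namespace: openshift-ovn-kubernetes
spec:
  groups:
  - name: ovn-controller.rules
    rules:
    - alert: OVNControllerSouthboundDisconnected
      # the metric only refreshes every 2 minutes, so a 10m "for" clause spans several updates
      expr: ovn_controller_southbound_database_connected == 0
      for: 10m
      labels:
        severity: warning
      annotations:
        summary: ovn-controller has been disconnected from the southbound database for 10 minutes.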
DoD: Merged to CNO and tested by QE
This section includes Jira cards that are not linked to either an Epic or a Feature. These tickets were completed when this image was assembled
The console should be using the v1 version of the ConsolePlugin model rather than the old v1alpha1.
CONSOLE-3077 was updating this version, but did not make the cut for the 4.12 release. Based on discussion with Samuel Padgett, we should be backporting this to 4.12.
The risk should be minimal since we are only updating the model itself + validation + Readme
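For reference, a rough sketch of what a ConsolePlugin looks like in the v1 API (plugin and service names are placeholders, and the field names are from memory and should be double-checked against the v1 CRD):
apiVersion: console.openshift.io/v1
kind: ConsolePlugin
metadata:
  name: example-plugin              # placeholder plugin name
spec:
  displayName: Example Plugin
  backend:
    type: Service
    service:
      name: example-plugin          # placeholder service serving the plugin assets
      namespace: example-plugin-ns
      port: 9443
      basePath: /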
Currently, the AWS actuator has a static list of instance types embedded in it. This means that as new instance types are added, we have to continually update this list.
Ideally, we could fetch this information from the AWS API as we do in GCP.
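As an illustration of the idea (not the actuator change itself), the relevant data is available from the EC2 DescribeInstanceTypes API, for example via the AWS CLI; the query fields below are standard DescribeInstanceTypes response fields:
$ aws ec2 describe-instance-types \
    --query 'InstanceTypes[].{Type:InstanceType,VCpus:VCpuInfo.DefaultVCpus,MemMiB:MemoryInfo.SizeInMiB}' \
    --output table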
DoD:
Description of problem:
The cluster-dns-operator does not reconcile the openshift-dns namespace, which has been exposed as an issue in 4.12 due to the requirement for the namespace to have pod-security labels. If a cluster has been incrementally updated from a version less than or equal to 4.9, the openshift-dns namespace will most likely not contain the required pod-security labels since the namespace was statically created when the cluster was installed with old namespace configuration.
Version-Release number of selected component (if applicable):
4.12
How reproducible:
Always, if the cluster was originally installed with v4.9 or earlier
Steps to Reproduce:
1. Install v4.9
2. Upgrade to v4.12 (incrementally if required for upgrade path)
3. openshift-dns namespace will be missing pod-security labels
Actual results:
"oc get ns openshift-dns -o yaml" will show missing pod-security labels: apiVersion: v1 kind: Namespace metadata: annotations: openshift.io/node-selector: "" openshift.io/sa.scc.mcs: s0:c15,c0 openshift.io/sa.scc.supplemental-groups: 1000210000/10000 openshift.io/sa.scc.uid-range: 1000210000/10000 creationTimestamp: "2020-05-21T19:36:15Z" labels: kubernetes.io/metadata.name: openshift-dns olm.operatorgroup.uid/3d42c0c1-01cd-4c55-bf88-864f041c7e7a: "" openshift.io/cluster-monitoring: "true" openshift.io/run-level: "0" name: openshift-dns resourceVersion: "3127555382" uid: 0fb4571e-952f-4bea-bc45-461beec54369 spec: finalizers: - kubernetes
Expected results:
pod-security labels should exist:
labels:
  kubernetes.io/metadata.name: openshift-dns
  olm.operatorgroup.uid/3d42c0c1-01cd-4c55-bf88-864f041c7e7a: ""
  openshift.io/cluster-monitoring: "true"
  openshift.io/run-level: "0"
  pod-security.kubernetes.io/audit: privileged
  pod-security.kubernetes.io/enforce: privileged
  pod-security.kubernetes.io/warn: privileged
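If needed, the missing labels can be applied manually as a stopgap (a sketch only; the proper fix is for cluster-dns-operator to reconcile the namespace):
$ oc label namespace openshift-dns \
    pod-security.kubernetes.io/enforce=privileged \
    pod-security.kubernetes.io/audit=privileged \
    pod-security.kubernetes.io/warn=privileged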
Additional info:
Issue found in CI during upgrade
https://coreos.slack.com/archives/C03G7REB4JV/p1663676443155839
Description of problem:
The machine-config-daemon-update-rpmostree-via-container service fails to deploy the commit.
sh-4.4# journalctl -u machine-config-daemon-update-rpmostree-via-container.service | tail Oct 12 11:45:56 master-00.wduan-1012e-upg.qe.devcluster.openshift.com peaceful_elbakyan[2022141]: Checking out tree 845113b...done Oct 12 11:45:56 master-00.wduan-1012e-upg.qe.devcluster.openshift.com podman[2019123]: Checking out tree 845113b...done Oct 12 11:45:57 master-00.wduan-1012e-upg.qe.devcluster.openshift.com peaceful_elbakyan[2022141]: error: No enabled repositories Oct 12 11:45:57 master-00.wduan-1012e-upg.qe.devcluster.openshift.com podman[2019123]: error: No enabled repositories Oct 12 11:45:57 master-00.wduan-1012e-upg.qe.devcluster.openshift.com peaceful_elbakyan[2022141]: error: Failed to deploy commit: ExitStatus(unix_wait_status(256)) Oct 12 11:45:57 master-00.wduan-1012e-upg.qe.devcluster.openshift.com podman[2019123]: error: Failed to deploy commit: ExitStatus(unix_wait_status(256)) Oct 12 11:45:57 master-00.wduan-1012e-upg.qe.devcluster.openshift.com podman[2022949]: time="2022-10-12T11:45:57Z" level=warning msg="lstat /sys/fs/cgroup/devices/machine.slice/libpod-ea744a45645d9c8d7a79182a78525a0b9f65b13e2e997f55bf80f626dcc0e945.scope: no such file or directory" Oct 12 11:45:57 master-00.wduan-1012e-upg.qe.devcluster.openshift.com systemd[1]: machine-config-daemon-update-rpmostree-via-container.service: Main process exited, code=exited, status=1/FAILURE Oct 12 11:45:57 master-00.wduan-1012e-upg.qe.devcluster.openshift.com systemd[1]: machine-config-daemon-update-rpmostree-via-container.service: Failed with result 'exit-code'. Oct 12 11:45:57 master-00.wduan-1012e-upg.qe.devcluster.openshift.com systemd[1]: machine-config-daemon-update-rpmostree-via-container.service: Consumed 1min 9.080s CPU time
full service log is attached
Version-Release number of selected component (if applicable):
4.12
Steps to Reproduce:
1. setup SNO cluster upi-on-baremetal with 4.11.8
2. upgrade it to 4.12.0-0.nightly-2022-10-05-053337
Actual results:
The machine-config-daemon-update-rpmostree-via-container service fails to deploy the commit due to a "No enabled repositories" error.
Expected results:
The machine-config-daemon-update-rpmostree-via-container service can deploy the new commit successfully.
Additional info:
No proxy is configured:
sh-4.4# cat /etc/mco/proxy.env
# Proxy environment variables will be populated in this file. Properly
# url encoded passwords with special characters will use '%<HEX><HEX>'.
# Systemd requires that any % used in a password be represented as
# %% in a unit file since % is a prefix for macros; this restriction does not
# apply for environment files. Templates that need the proxy set should use
# 'EnvironmentFile=/etc/mco/proxy.env'.
Description of problem: Knative tests were disabled due to https://issues.redhat.com/browse/OCPBUGS-190 to unblock the queue and should be re-enabled.
https://coreos.slack.com/archives/C6A3NV5J9/p1660659719046909
https://github.com/openshift/console/pull/11956#discussion_r948075848
Version-Release number of selected component (if applicable):
How reproducible:
Steps to Reproduce:
1.
2.
3.
Actual results:
Expected results:
Additional info:
ovnkube-trace: ofproto/trace fails for IPv6
[akaris@linux go-controller (fix-ovnkube-trace-ipv6)]$ oc exec -ti ovn-trace-two -n ovn-tests-two -- ovnkube-trace -src-namespace ovn-tests-two -src ovn-trace-two -dst-ip 2404:6800:4003:c06::69 -tcp I1021 12:16:56.478752 3356 ovs.go:90] Maximum command line arguments set to: 191102 ovn-trace from pod to IP indicates success from ovn-trace-two to 2404:6800:4003:c06::69 F1021 12:16:57.075803 3356 ovnkube-trace.go:601] ovs-appctl ofproto/trace pod to IP error command terminated with exit code 2 stdOut: stdErr: Bad openflow flow syntax: in_port=73af56a18042ab9, tcp, dl_src=0a:58:17:2b:b6:42, dl_dst=0a:58:69:bd:ba:d8, nw_src=fd01:0:0:5::13, nw_dst=2404:6800:4003:c06::69, nw_ttl=64, tcp_dst=80, tcp_src=12345: bad value for nw_src (fd01:0:0:5::13: invalid IP address) ovs-appctl: ovs-vswitchd: server returned an error command terminated with exit code 1 [akaris@linux go-controller (fix-ovnkube-trace-ipv6)]$ oc exec -ti ovn-trace-two -n ovn-tests-two -- ovnkube-trace -src-namespace ovn-tests-two -src ovn-trace-two -dst-namespace ovn-tests -dst ovn-trace -udp I1021 12:17:26.695325 3386 ovs.go:90] Maximum command line arguments set to: 191102 ovn-trace source pod to destination pod indicates success from ovn-trace-two to ovn-trace ovn-trace destination pod to source pod indicates success from ovn-trace to ovn-trace-two F1021 12:17:27.708822 3386 ovnkube-trace.go:601] ovs-appctl ofproto/trace source pod to destination pod error command terminated with exit code 2 stdOut: stdErr: Bad openflow flow syntax: in_port=73af56a18042ab9, udp, dl_src=0a:58:17:2b:b6:42, dl_dst=0a:58:69:bd:ba:d8, nw_src=fd01:0:0:5::13, nw_dst=fd01:0:0:5::14, nw_ttl=64, udp_dst=80, udp_src=12345: bad value for nw_src (fd01:0:0:5::13: invalid IP address) ovs-appctl: ovs-vswitchd: server returned an error command terminated with exit code 1
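For reference, the trace fails because nw_src/nw_dst only accept IPv4 addresses; in OVS flow syntax an IPv6 TCP flow would use the tcp6 shorthand with ipv6_src/ipv6_dst instead, roughly as below (values copied from the failing trace above; the br-int bridge name is an assumption):
$ ovs-appctl ofproto/trace br-int "in_port=73af56a18042ab9, tcp6, dl_src=0a:58:17:2b:b6:42, dl_dst=0a:58:69:bd:ba:d8, ipv6_src=fd01:0:0:5::13, ipv6_dst=2404:6800:4003:c06::69, nw_ttl=64, tcp_dst=80, tcp_src=12345"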
libovsdb builds transaction log messages for every transaction and then throws them away if the log level is not 4 or above. This wastes a bunch of CPU at scale and increases pod ready latency.
Currently, when gathering summary logs, the controller will not upload logs if there is a kube-api issue, but it should, since it has the log file to read them from.
Description of problem:
Using OLM descriptor components deletes the operand.
Steps to Reproduce:
Description of problem:
When spot instances with taints are added to the cluster on AWS, the machine-api-termination-handler daemonset pods do not launch on these instances because of the taints. machine-api-termination-handler is used for watching for instance-termination notifications, so if it doesn't launch properly, application pods on spot instances could stop without a normal shutdown procedure. It is common to use taints and tolerations to dedicate spot instances to specific workloads, because it does not require changing the application manifests of other workloads.
Version-Release number of selected component (if applicable):
How reproducible:
100%
Steps to Reproduce:
1. Creating ROSA cluster
2. Adding spot instances with taints on OCM
3. oc get daemonset machine-api-termination-handler -n openshift-machine-api
Actual results:
machine-api-termination-handler pods do not launch on spot instances
Expected results:
machine-api-termination-handler pods launch on spot instances
Additional info:
Adding the following to the machine-api-termination-handler daemonset could resolve the problem:
---
tolerations:
- operator: Exists
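For example, a sketch of applying that toleration manually (the daemonset is operator-managed, so a manual patch may be reverted; this only illustrates the change):
$ oc -n openshift-machine-api patch daemonset machine-api-termination-handler \
    --type merge -p '{"spec":{"template":{"spec":{"tolerations":[{"operator":"Exists"}]}}}}'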
This is a clone of issue OCPBUGS-1627. The following is the description of the original issue:
—
Description of problem:
Two issues when setting user-defined folder in failureDomain.
1. The installer reports an error when folder is set to the path of a user-defined folder in failureDomains.
failureDomains setting in install-config.yaml:
failureDomains: - name: us-east-1 region: us-east zone: us-east-1a server: xxx topology: datacenter: IBMCloud computeCluster: /IBMCloud/host/vcs-mdcnc-workload-1 networks: - multi-zone-qe-dev-1 datastore: multi-zone-ds-1 folder: /IBMCloud/vm/qe-jima - name: us-east-2 region: us-east zone: us-east-2a server: xxx topology: datacenter: IBMCloud computeCluster: /IBMCloud/host/vcs-mdcnc-workload-2 networks: - multi-zone-qe-dev-1 datastore: multi-zone-ds-2 folder: /IBMCloud/vm/qe-jima - name: us-east-3 region: us-east zone: us-east-3a server: xxx topology: datacenter: IBMCloud computeCluster: /IBMCloud/host/vcs-mdcnc-workload-3 networks: - multi-zone-qe-dev-1 datastore: workload_share_vcsmdcncworkload3_joYiR folder: /IBMCloud/vm/qe-jima - name: us-west-1 region: us-west zone: us-west-1a server: ibmvcenter.vmc-ci.devcluster.openshift.com topology: datacenter: datacenter-2 computeCluster: /datacenter-2/host/vcs-mdcnc-workload-4 networks: - multi-zone-qe-dev-1 datastore: workload_share_vcsmdcncworkload3_joYiR
Error message in terraform after completing ova image import:
DEBUG vsphereprivate_import_ova.import[0]: Still creating... [1m40s elapsed] DEBUG vsphereprivate_import_ova.import[3]: Creation complete after 1m40s [id=vm-367860] DEBUG vsphereprivate_import_ova.import[1]: Creation complete after 1m49s [id=vm-367863] DEBUG vsphereprivate_import_ova.import[0]: Still creating... [1m50s elapsed] DEBUG vsphereprivate_import_ova.import[2]: Still creating... [1m50s elapsed] DEBUG vsphereprivate_import_ova.import[2]: Still creating... [2m0s elapsed] DEBUG vsphereprivate_import_ova.import[0]: Still creating... [2m0s elapsed] DEBUG vsphereprivate_import_ova.import[2]: Creation complete after 2m2s [id=vm-367862] DEBUG vsphereprivate_import_ova.import[0]: Still creating... [2m10s elapsed] DEBUG vsphereprivate_import_ova.import[0]: Creation complete after 2m20s [id=vm-367861] DEBUG data.vsphere_virtual_machine.template[0]: Reading... DEBUG data.vsphere_virtual_machine.template[3]: Reading... DEBUG data.vsphere_virtual_machine.template[1]: Reading... DEBUG data.vsphere_virtual_machine.template[2]: Reading... DEBUG data.vsphere_virtual_machine.template[3]: Read complete after 1s [id=42054e33-85d6-e310-7f4f-4c52a73f8338] DEBUG data.vsphere_virtual_machine.template[1]: Read complete after 2s [id=42053e17-cc74-7c89-f5d1-059c9030ecc7] DEBUG data.vsphere_virtual_machine.template[2]: Read complete after 2s [id=4205019f-26d8-f9b4-ac0c-2c073fd70b35] DEBUG data.vsphere_virtual_machine.template[0]: Read complete after 2s [id=4205eaf2-c727-c647-ad44-bd9ad7023c56] ERROR ERROR Error: error trying to determine parent targetFolder: folder '/IBMCloud/vm//IBMCloud/vm' not found ERROR ERROR with vsphere_folder.folder["IBMCloud-/IBMCloud/vm/qe-jima"], ERROR on main.tf line 61, in resource "vsphere_folder" "folder": ERROR 61: resource "vsphere_folder" "folder" { ERROR ERROR failed to fetch Cluster: failed to generate asset "Cluster": failure applying terraform for "pre-bootstrap" stage: failed to create cluster: failed to apply Terraform: exit status 1 ERROR ERROR Error: error trying to determine parent targetFolder: folder '/IBMCloud/vm//IBMCloud/vm' not found ERROR ERROR with vsphere_folder.folder["IBMCloud-/IBMCloud/vm/qe-jima"], ERROR on main.tf line 61, in resource "vsphere_folder" "folder": ERROR 61: resource "vsphere_folder" "folder" { ERROR ERROR
2. The installer panics when folder is set to just the user-defined folder name in failureDomains.
failure domain in install-config.yaml
failureDomains: - name: us-east-1 region: us-east zone: us-east-1a server: xxx topology: datacenter: IBMCloud computeCluster: /IBMCloud/host/vcs-mdcnc-workload-1 networks: - multi-zone-qe-dev-1 datastore: multi-zone-ds-1 folder: qe-jima - name: us-east-2 region: us-east zone: us-east-2a server: xxx topology: datacenter: IBMCloud computeCluster: /IBMCloud/host/vcs-mdcnc-workload-2 networks: - multi-zone-qe-dev-1 datastore: multi-zone-ds-2 folder: qe-jima - name: us-east-3 region: us-east zone: us-east-3a server: xxx topology: datacenter: IBMCloud computeCluster: /IBMCloud/host/vcs-mdcnc-workload-3 networks: - multi-zone-qe-dev-1 datastore: workload_share_vcsmdcncworkload3_joYiR folder: qe-jima - name: us-west-1 region: us-west zone: us-west-1a server: xxx topology: datacenter: datacenter-2 computeCluster: /datacenter-2/host/vcs-mdcnc-workload-4 networks: - multi-zone-qe-dev-1 datastore: workload_share_vcsmdcncworkload3_joYiR
panic error message in installer:
INFO Obtaining RHCOS image file from 'https://rhcos.mirror.openshift.com/art/storage/releases/rhcos-4.12/412.86.202208101039-0/x86_64/rhcos-412.86.202208101039-0-vmware.x86_64.ova?sha256='
INFO The file was found in cache: /home/user/.cache/openshift-installer/image_cache/rhcos-412.86.202208101039-0-vmware.x86_64.ova. Reusing...
panic: runtime error: index out of range [1] with length 1goroutine 1 [running]:
github.com/openshift/installer/pkg/tfvars/vsphere.TFVars({{0xc0013bd068, 0x3, 0x3}, {0xc000b11dd0, 0x12}, {0xc000b11db8, 0x14}, {0xc000b11d28, 0x14}, {0xc000fe8fc0, ...}, ...})
/go/src/github.com/openshift/installer/pkg/tfvars/vsphere/vsphere.go:79 +0x61b
github.com/openshift/installer/pkg/asset/cluster.(*TerraformVariables).Generate(0x1d1ed360, 0x5?)
/go/src/github.com/openshift/installer/pkg/asset/cluster/tfvars.go:847 +0x4798
Based on the explanation of the folder field, it looks like a folder name should be accepted. If using a folder name is not allowed, the installer needs to validate the folder value and the explain text should be updated.
sh-4.4$ ./openshift-install explain installconfig.platform.vsphere.failureDomains.topology.folder
KIND:     InstallConfig
VERSION:  v1
RESOURCE: <string>
  folder is the name or inventory path of the folder in which the virtual machine is created/located.
Version-Release number of selected component (if applicable):
4.12.0-0.nightly-2022-09-20-095559
How reproducible:
always
Steps to Reproduce:
see description
Actual results:
Installation fails with errors when a user-defined folder is set.
Expected results:
Installation is successful when a user-defined folder is set.
Additional info:
This is a clone of issue OCPBUGS-3440. The following is the description of the original issue:
—
Description of problem:
https://github.com/openshift/cluster-authentication-operator/pull/587 addresses an issue in which the auth operator goes degraded when the console capability is not enabled. The result is that the console publicAssetURL is not configured when the console is disabled. However, if the console capability is later enabled on the cluster, there is no logic in place to ensure the auth operator detects this and performs the configuration. Manually restarting the auth operator will address this, but we should have a solution that handles it automatically.
Version-Release number of selected component (if applicable):
4.12
How reproducible:
Always
Steps to Reproduce:
1. Install a cluster w/o the console cap
2. Inspect the auth configmap, see that assetPublicURL is empty
3. Enable the console capability, wait for console to start up
4. Inspect the auth configmap and see it is still empty
Actual results:
assetPublicURL does not get populated
Expected results:
assetPublicURL is populated once the console is enabled
Additional info:
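For reference, the manual workaround mentioned in the description amounts to something like the following (the deployment name is assumed to be authentication-operator):
$ oc -n openshift-authentication-operator rollout restart deployment/authentication-operator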
Description of problem:
The setting of systemReserved: ephemeral-storage in KubeletConfig is not working as expected.
Version-Release number of selected component (if applicable):
4.10.z, may exist on other OCP versions as well.
How reproducible:
always
Steps to Reproduce:
1. Create a KubeletConfig on the node:
apiVersion: machineconfiguration.openshift.io/v1
kind: KubeletConfig
metadata:
  name: system-reserved-config
spec:
  machineConfigPoolSelector:
    matchLabels:
      pools.operator.machineconfiguration.openshift.io/master: ""
  kubeletConfig:
    systemReserved:
      cpu: 500m
      memory: 500Mi
      ephemeral-storage: 10Gi
2. Check node allocatable storage with command: oc describe node |grep -C 5 ephemeral-storage
Actual results:
The Allocatable:ephemeral-storage on the node is not capacity.ephemeral-storage - systemReserved.ephemeral-storage - eviction-thresholds (10% of the capacity.ephemeral-storage by default)
Expected results:
The Allocatable:ephemeral-storage on the node should be capacity.ephemeral-storage - systemReserved.ephemeral-storage - eviction-thresholds (10% of the capacity.ephemeral-storage by default)
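For example, with hypothetical numbers: on a node with 100Gi of ephemeral-storage capacity, systemReserved.ephemeral-storage of 10Gi, and the default 10% eviction threshold, Allocatable:ephemeral-storage should come out to roughly 100Gi - 10Gi - 10Gi = 80Gi.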
Additional info:
The root cause might be: process argument '--system-reserved=cpu=500m,memory=500Mi' overwrote the setting in /etc/kubernetes/kubelet.conf, one example: root 6824 1 27 Sep30 ? 1-09:00:24 kubelet --config=/etc/kubernetes/kubelet.conf --bootstrap-kubeconfig=/etc/kubernetes/kubeconfig --kubeconfig=/var/lib/kubelet/kubeconfig --container-runtime=remote --container-runtime-endpoint=/var/run/crio/crio.sock --runtime-cgroups=/system.slice/crio.service --node-labels=node-role.kubernetes.io/master,node.openshift.io/os_id=rhcos --node-ip=192.168.58.47 --minimum-container-ttl-duration=6m0s --cloud-provider= --volume-plugin-dir=/etc/kubernetes/kubelet-plugins/volume/exec --hostname-override= --register-with-taints=node-role.kubernetes.io/master=:NoSchedule --pod-infra-container-image=quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4a7b6408460148cb73c59677dbc2c261076bc07226c43b0c9192cc70aef5ba62 --system-reserved=cpu=500m,memory=500Mi --v=2 --housekeeping-interval=30s
This is a clone of issue OCPBUGS-3501. The following is the description of the original issue:
—
Description of problem:
On clusters serving Route via CRD (i.e. MicroShift), .spec.host values are not automatically assigned during Route creation, as they are on OCP.
Version-Release number of selected component (if applicable):
How reproducible:
Always
Steps to Reproduce:
$ cat<<EOF | oc apply --server-side -f-
apiVersion: route.openshift.io/v1
kind: Route
metadata:
  name: hello-microshift
spec:
  to:
    kind: Service
    name: hello-microshift
EOF
route.route.openshift.io/hello-microshift serverside-applied
$ oc get route hello-microshift -o yaml
apiVersion: route.openshift.io/v1
kind: Route
metadata:
  annotations:
    openshift.io/host.generated: "true"
  creationTimestamp: "2022-11-11T23:53:33Z"
  generation: 1
  name: hello-microshift
  namespace: default
  resourceVersion: "2659"
  uid: cd35cd20-b3fd-4d50-9912-f34b3935acfd
spec:
  host: hello-microshift-default.cluster.local
  to:
    kind: Service
    name: hello-microshift
  wildcardPolicy: None
Expected results:
...
metadata:
  annotations:
    openshift.io/host.generated: "true"
...
spec:
  host: hello-microshift-default.foo.bar.baz
...
Actual results:
Host and host.generated annotation are missing.
Additional info:
** This change will be inert on OCP, which already has the correct behavior. **
Hi,
Bare Metal IPI provisioning is failing to provision the worker nodes. The metal3-machine-os-downloader InitContainer is getting in CrashLoopBackOff state because it cannot find virt-* commands in the container image.
> oc -n openshift-machine-api get pods | grep -v Running NAME READY STATUS metal3-fc66f5846-gtq9m 0/7 Init:CrashLoopBackOff metal3-image-cache-d4qcz 0/1 Init:1/2 metal3-image-cache-djzcf 0/1 Init:1/2 metal3-image-cache-p5mwg 0/1 Init:1/2
> oc -n openshift-machine-api logs deployment/metal3 -c metal3-machine-os-downloader [omitted] ++ LIBGUESTFS_BACKEND=direct ++ virt-filesystems -a rhcos-412.86.202207142104-0-openstack.x86_64.qcow2 -l /usr/local/bin/get-resource.sh: line 88: virt-filesystems: command not found ++ grep boot ++ cut -f1 '-d ' + BOOT_DISK= ++ LIBGUESTFS_BACKEND=direct ++ virt-ls -a rhcos-412.86.202207142104-0-openstack.x86_64.qcow2 -m '' /boot/loader/entries /usr/local/bin/get-resource.sh: line 90: virt-ls: command not found + BOOT_ENTRIES= + rm -fr /shared/tmp/tmp.CnCd2E3kxN
OpenShift 4.12.0-ec.0+
Since https://github.com/openshift/ocp-build-data/pull/1757, the ironic-machine-os-downloader container image is built using RHEL9 repositories.
However, following upstream move of guestfs tools to a dedicated repository [1], the libguestfs packaging differs between RHEL8 and RHEL9:
Since the Dockerfile specifies only the libguestfs-tools package, the virt-* commands are not installed when using RHEL9 repositories.
A trivial fix is to update the Dockerfile to install the guestfs-tools package instead of the libguestfs-tools package.
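In other words, roughly this change in the image's Dockerfile (shown as a sketch; the exact install line in the real Dockerfile may differ):
# RHEL8 base, where the virt-* tools ship with libguestfs-tools:
RUN dnf install -y libguestfs-tools
# RHEL9 base, where the virt-* tools moved to the guestfs-tools package:
RUN dnf install -y guestfs-tools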
Regards,
Denis
For a disconnected installation, we should not be able to provision machines successfully with publicIP: true. This was the behavior up to 4.11 and in 4.12 nightlies until around 17th Aug, but it has since started allowing creation of machines with publicIP: true set in the machineset.
The issue was reproduced on cluster version 4.12.0-0.nightly-2022-08-23-223922.
It is always reproducible.
Steps:
Create a machineset using YAML with:
{"spec":{"providerSpec":{"value":{"publicIP": true}}}}
The machineset is created successfully and the machine is provisioned successfully.
This seems to be a regression bug; refer to https://bugzilla.redhat.com/show_bug.cgi?id=1889620
Here is the must gather log - https://drive.google.com/file/d/1UXjiqAx7obISTxkmBsSBuo44ciz9HD1F/view?usp=sharing
Here is a successful test run for 4.11, for exactly the same profile, where machine creation failed with an InvalidConfiguration error: https://mastern-jenkins-csb-openshift-qe.apps.ocp-c1.prod.psi.redhat.com/job/ocp-common/job/Runner/575822/console
We can confirm this is a disconnected cluster using the ImageContentSourcePolicy below; a lot of mirrors are in use:
oc get ImageContentSourcePolicy image-policy-aosqe -o yaml apiVersion: operator.openshift.io/v1alpha1 kind: ImageContentSourcePolicy metadata: creationTimestamp: "2022-08-24T09:08:47Z" generation: 1 name: image-policy-aosqe resourceVersion: "34648" uid: 20e45d6d-e081-435d-b6bb-16c4ca21c9d6 spec: repositoryDigestMirrors: - mirrors: - miyadav-2408a.mirror-registry.qe.azure.devcluster.openshift.com:6001/olmqe source: quay.io/olmqe - mirrors: - miyadav-2408a.mirror-registry.qe.azure.devcluster.openshift.com:6001/openshifttest source: quay.io/openshifttest - mirrors: - miyadav-2408a.mirror-registry.qe.azure.devcluster.openshift.com:6001/openshift-qe-optional-operators source: quay.io/openshift-qe-optional-operators - mirrors: - miyadav-2408a.mirror-registry.qe.azure.devcluster.openshift.com:6002 source: registry.redhat.io - mirrors: - miyadav-2408a.mirror-registry.qe.azure.devcluster.openshift.com:6002 source: registry.stage.redhat.io - mirrors: - miyadav-2408a.mirror-registry.qe.azure.devcluster.openshift.com:6002 source: brew.registry.redhat.io
This bug is a backport clone of [Bugzilla Bug 2100429](https://bugzilla.redhat.com/show_bug.cgi?id=2100429). The following is the description of the original bug:
—
Description of problem:
[apiserver-auth] The default restricted SCC's allowed volumes don't include "ephemeral", which causes a deployment with Generic Ephemeral Volumes to be stuck at Pending
Version-Release number of selected component (if applicable):
Cluster version is 4.11.0-0.nightly-2022-06-22-190830
$ oc version
Client Version: 4.11.0-0.nightly-2022-05-11-054135
Kustomize Version: v4.5.4
Server Version: 4.11.0-0.nightly-2022-06-22-190830
Kubernetes Version: v1.24.0+284d62a
How reproducible:
Always
Steps to Reproduce:
1. Set up a AWS OCP cluster with 4.11 nightly
2. Create a deployment with Generic Ephemeral Volumes
3. Waiting for the deployment ready and check the volume could write and read data
Test data:
wangpenghao@MacBook-Pro ~ cat temp.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-dep
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-dep
  template:
    metadata:
      labels:
        app: my-dep
    spec:
      containers:
Actual results:
In step 3, the deployment is stuck at Pending because the pod is unable to validate against any security context constraint.
Expected results:
In step 3, the deployment should become ready with the default restricted SCC; the default restricted SCC should allow
volumes:
Additional info:
Generic ephemeral volumes are the safer option of these two - it just creates/deletes PVCs on behalf of users. And most users can already create PVCs.
The "ephemeral" volume type is not in the scc.volumes list definition:
https://docs.openshift.com/container-platform/4.10/authentication/managing-security-context-constraints.html#authorization-cont[…]ing-internal-oauth
So currently, if customers want to use the ephemeral volume type, they have to use an SCC with:
volumes:
Discuss record: https://coreos.slack.com/archives/CB48XQ4KZ/p1655465586780419
Generic Ephemeral Volumes docs:
https://kubernetes.io/blog/2020/09/01/ephemeral-volumes-with-storage-capacity-tracking/#generic-ephemeral-volumes
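For context, a generic ephemeral volume is declared in the pod spec roughly as follows (a minimal sketch with placeholder names and sizes, not the reporter's original manifest):
volumes:
- name: scratch
  ephemeral:
    volumeClaimTemplate:
      spec:
        accessModes: ["ReadWriteOnce"]
        resources:
          requests:
            storage: 1Gi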
Master Log:
Node Log (of failed PODs):
PV Dump:
PVC Dump:
StorageClass Dump (if StorageClass used by PV/PVC):
This is a clone of issue OCPBUGS-3508. The following is the description of the original issue:
—
Exposed via the fact that the periodic-ci-openshift-release-master-nightly-4.12-e2e-metal-ipi-sdn-serial-ipv4 job is at 0% for at least the past two weeks over approximately 65 runs.
Testgrid shows that this job started failing in a very consistent way on Oct 25th at about 8am UTC: https://testgrid.k8s.io/redhat-openshift-ocp-release-4.12-informing#periodic-ci-openshift-release-master-nightly-4.12-e2e-metal-ipi-sdn-serial-ipv4
6 disruption tests fail, all with alarming consistency virtually always claiming exactly 8s of disruption, max allowed 1s.
And then openshift-tests.[sig-arch] events should not repeat pathologically fails with an odd signature:
{ 6 events happened too frequently event happened 35 times, something is wrong: node/master-2 - reason/NodeHasNoDiskPressure roles/control-plane,master Node master-2 status is now: NodeHasNoDiskPressure event happened 35 times, something is wrong: node/master-2 - reason/NodeHasSufficientMemory roles/control-plane,master Node master-2 status is now: NodeHasSufficientMemory event happened 35 times, something is wrong: node/master-2 - reason/NodeHasSufficientPID roles/control-plane,master Node master-2 status is now: NodeHasSufficientPID event happened 35 times, something is wrong: node/master-1 - reason/NodeHasNoDiskPressure roles/control-plane,master Node master-1 status is now: NodeHasNoDiskPressure event happened 35 times, something is wrong: node/master-1 - reason/NodeHasSufficientMemory roles/control-plane,master Node master-1 status is now: NodeHasSufficientMemory event happened 35 times, something is wrong: node/master-1 - reason/NodeHasSufficientPID roles/control-plane,master Node master-1 status is now: NodeHasSufficientPID}
The two types of tests started failing together exactly, and the disruption measurements are bizarrely consistent: every single time we see precisely 8s for kube-api, cache-kube-api, openshift-api, cache-openshift-api, oauth-api, cache-oauth-api. It's always these 6, and it seems to be always exactly 8 seconds. I cannot state enough how strange this is. It almost implies that something is happening on a very consistent schedule.
Occasionally these are accompanied by 1-2s of disruption for those backends with new connections, but sometimes not as well.
It looks like all of the disruption consistently happens within two very long tests:
4s within: [sig-network] services when running openshift ipv4 cluster ensures external ip policy is configured correctly on the cluster [Serial] [Suite:openshift/conformance/serial]
4s within: [sig-network] services when running openshift ipv4 cluster on bare metal [apigroup:config.openshift.io] ensures external auto assign cidr is configured correctly on the cluster [Serial] [Suite:openshift/conformance/serial]
Both tests appear to have run prior to oct 25, so I don't think it's a matter of new tests breaking something or getting unskipped. Both tests also always pass, but appear to be impacting the cluster?
The master's going NotReady also appears to fall within the above two tests as well, though it does not seem to directly match with when we measure disruption, but bear in mind there's a 40s delay before the node goes NotReady.
Focusing on https://prow.ci.openshift.org/view/gs/origin-ci-test/logs/periodic-ci-openshift-release-master-nightly-4.12-e2e-metal-ipi-sdn-serial-ipv4/1590640492373086208 where the above are from:
Two of the three master nodes appear to be going NodeNotReady a couple of times throughout the run, as visible in the spyglass chart under the node state row on the left. master-0 does not appear here, but it does exist. (I suspect it holds the leader role and thus is the node reporting the others going NotReady.)
From the master-0 kubelet log in must-gather we can see one of these examples where it reports that master-2 has not checked in:
2022-11-10T10:38:35.874090961Z I1110 10:38:35.873975 1 node_lifecycle_controller.go:1137] node master-2 hasn't been updated for 40.00700561s. Last Ready is: &NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-11-10 1 0:36:10 +0000 UTC,LastTransitionTime:2022-11-10 10:29:11 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,} 2022-11-10T10:38:35.874090961Z I1110 10:38:35.874056 1 node_lifecycle_controller.go:1137] node master-2 hasn't been updated for 40.007097549s. Last MemoryPressure is: &NodeCondition{Type:MemoryPressure,Status:False,LastHeartb eatTime:2022-11-10 10:36:10 +0000 UTC,LastTransitionTime:2022-11-10 10:29:11 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,} 2022-11-10T10:38:35.874090961Z I1110 10:38:35.874067 1 node_lifecycle_controller.go:1137] node master-2 hasn't been updated for 40.007110285s. Last DiskPressure is: &NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatT ime:2022-11-10 10:36:10 +0000 UTC,LastTransitionTime:2022-11-10 10:29:11 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,} 2022-11-10T10:38:35.874090961Z I1110 10:38:35.874076 1 node_lifecycle_controller.go:1137] node master-2 hasn't been updated for 40.007119541s. Last PIDPressure is: &NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTim e:2022-11-10 10:36:10 +0000 UTC,LastTransitionTime:2022-11-10 10:29:11 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,} 2022-11-10T10:38:35.881749410Z I1110 10:38:35.881705 1 controller_utils.go:181] "Recording status change event message for node" status="NodeNotReady" node="master-2" 2022-11-10T10:38:35.881749410Z I1110 10:38:35.881733 1 controller_utils.go:120] "Update ready status of pods on node" node="master-2" 2022-11-10T10:38:35.881820988Z I1110 10:38:35.881799 1 controller_utils.go:138] "Updating ready status of pod to false" pod="metal3-b7b69fdbb-rfbdj" 2022-11-10T10:38:35.881893234Z I1110 10:38:35.881858 1 topologycache.go:179] Ignoring node master-2 because it has an excluded label 2022-11-10T10:38:35.881893234Z W1110 10:38:35.881886 1 topologycache.go:199] Can't get CPU or zone information for worker-0 node 2022-11-10T10:38:35.881903023Z I1110 10:38:35.881892 1 topologycache.go:215] Insufficient node info for topology hints (0 zones, %!s(int64=0) CPU, false) 2022-11-10T10:38:35.881932172Z I1110 10:38:35.881917 1 controller.go:271] Node changes detected, triggering a full node sync on all loadbalancer services 2022-11-10T10:38:35.882290428Z I1110 10:38:35.882270 1 event.go:294] "Event occurred" object="master-2" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="NodeNotReady" message="Node master-2 status is now: NodeNotReady"
Now from master-2's kubelet log around that time, 40 seconds earlier puts us at 10:37:55, so we'd be looking for something odd around there.
A few potential lines:
Nov 10 10:37:55.232537 master-2 kubenswrapper[1930]: I1110 10:37:55.232495 1930 patch_prober.go:29] interesting pod/kube-controller-manager-guard-master-2 container/guard namespace/openshift-kube-controller-manager: Readiness probe status=failure output="Get \"https://192.168.111.22:10257/healthz\": dial tcp 192.168.111.22:10257: connect: connection refused" start-of-body= Nov 10 10:37:55.232537 master-2 kubenswrapper[1930]: I1110 10:37:55.232549 1930 prober.go:114] "Probe failed" probeType="Readiness" pod="openshift-kube-controller-manager/kube-controller-manager-guard-master-2" podUID=8be2c6c1-f8f6-4bf0-b26d-53ce487354bd containerName="guard" probeResult=failure output="Get \"https://192.168.111.22:10257/healthz\": dial tcp 192.168.111.22:10257: connect: connection refused" Nov 10 10:38:12.238273 master-2 kubenswrapper[1930]: E1110 10:38:12.238229 1930 controller.go:187] failed to update lease, error: Put "https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-2?timeout=10s": net/http: request canceled (Client.Timeout exceeded while awaiting headers) Nov 10 10:38:13.034109 master-2 kubenswrapper[1930]: E1110 10:38:13.034077 1930 kubelet_node_status.go:487] "Error updating node status, will retry" err="error getting node \"master-2\": Get \"https://api-int.ostest.test.metalkube.org:6443/api/v1/nodes/master-2?resourceVersion=0&timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
At 10:38:40 all kinds of master-2 watches time out with messages like:
Nov 10 10:38:40.244399 master-2 kubenswrapper[1930]: W1110 10:38:40.244272 1930 reflector.go:347] object-"openshift-oauth-apiserver"/"kube-root-ca.crt": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
And then suddenly we're back online:
Nov 10 10:38:40.252149 master-2 kubenswrapper[1930]: I1110 10:38:40.252131 1930 kubelet_node_status.go:590] "Recording event message for node" node="master-2" event="NodeHasSufficientMemory" Nov 10 10:38:40.252149 master-2 kubenswrapper[1930]: I1110 10:38:40.252156 1930 kubelet_node_status.go:590] "Recording event message for node" node="master-2" event="NodeHasNoDiskPressure" Nov 10 10:38:40.252268 master-2 kubenswrapper[1930]: I1110 10:38:40.252165 1930 kubelet_node_status.go:590] "Recording event message for node" node="master-2" event="NodeHasSufficientPID" Nov 10 10:38:40.252268 master-2 kubenswrapper[1930]: I1110 10:38:40.252177 1930 kubelet_node_status.go:590] "Recording event message for node" node="master-2" event="NodeReady" Nov 10 10:38:47.904430 master-2 kubenswrapper[1930]: I1110 10:38:47.904373 1930 kubelet.go:2229] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-master-2" Nov 10 10:38:47.904842 master-2 kubenswrapper[1930]: I1110 10:38:47.904662 1930 kubelet.go:2229] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-master-2" Nov 10 10:38:47.907900 master-2 kubenswrapper[1930]: I1110 10:38:47.907872 1930 kubelet.go:2229] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-master-2" Nov 10 10:38:48.431448 master-2 kubenswrapper[1930]: I1110 10:38:48.431414 1930 kubelet.go:2229] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-master-2" Nov 10 10:38:54.764069 master-2 kubenswrapper[1930]: I1110 10:38:54.764029 1930 kubelet_getters.go:182] "Pod status updated" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-2" status=Running Nov 10 10:38:54.764069 master-2 kubenswrapper[1930]: I1110 10:38:54.764059 1930 kubelet_getters.go:182] "Pod status updated" pod="openshift-kni-infra/keepalived-master-2" status=Running Nov 10 10:38:54.764069 master-2 kubenswrapper[1930]: I1110 10:38:54.764077 1930 kubelet_getters.go:182] "Pod status updated" pod="openshift-kni-infra/coredns-master-2" status=Running Nov 10 10:38:54.764069 master-2 kubenswrapper[1930]: I1110 10:38:54.764086 1930 kubelet_getters.go:182] "Pod status updated" pod="openshift-kni-infra/haproxy-master-2" status=Running Nov 10 10:38:54.764492 master-2 kubenswrapper[1930]: I1110 10:38:54.764106 1930 kubelet_getters.go:182] "Pod status updated" pod="openshift-etcd/etcd-master-2" status=Running Nov 10 10:38:54.764492 master-2 kubenswrapper[1930]: I1110 10:38:54.764113 1930 kubelet_getters.go:182] "Pod status updated" pod="openshift-kube-controller-manager/kube-controller-manager-master-2" status=Running
Also curious:
Nov 10 10:37:50.318237 master-2 ovs-vswitchd[1324]: ovs|00251|connmgr|INFO|br0<->unix#468: 2 flow_mods in the last 0 s (2 deletes) Nov 10 10:37:50.342965 master-2 ovs-vswitchd[1324]: ovs|00252|connmgr|INFO|br0<->unix#471: 4 flow_mods in the last 0 s (4 deletes) Nov 10 10:37:50.364271 master-2 ovs-vswitchd[1324]: ovs|00253|bridge|INFO|bridge br0: deleted interface vethcb8d36e6 on port 41 Nov 10 10:37:53.579562 master-2 NetworkManager[1336]: <info> [1668076673.5795] dhcp4 (enp2s0): state changed new lease, address=192.168.111.22
These look like they could be related to the tests these problems appear to coincide with?
This is a clone of issue OCPBUGS-2500. The following is the description of the original issue:
—
Description of problem:
When the UX switches to the Dev console, the topology is always blank in a Project that has a large number of components.
Version-Release number of selected component (if applicable):
How reproducible:
Always occurs
Steps to Reproduce:
1. Create a project with at least 12 components (Apps, Operators, knative Brokers)
2. Go to the Administrator Viewpoint
3. Switch to Developer Viewpoint/Topology
4. No components displayed
5. Click on 'fit to screen'
6. All components appear
Actual results:
Topology renders with all controls but no components visible (see screenshot 1)
Expected results:
All components should be visible
Additional info:
Description of problem:
The Machine does not go into the Failed phase when an invalid vmSize is provided; it is stuck in Provisioning, and the error message is not accurate. The case works well in 4.11 and previous versions; it is a regression in 4.12 and seems to have been introduced here: https://github.com/openshift/machine-api-provider-azure/pull/32/files#diff-af805e1e45f03df0b5b56ff4413e5ad52cd31904a94d37e8e916751953e4687dR565
Version-Release number of selected component (if applicable):
4.12.0-0.nightly-2022-09-28-204419
How reproducible:
always
Steps to Reproduce:
1. Create a machineset with invalid vmSize vmSize: invalid liuhuali@Lius-MacBook-Pro huali-test % oc create -f ms1.yaml machineset.machine.openshift.io/huliu-azure02pr-jmvl2-1 created liuhuali@Lius-MacBook-Pro huali-test % oc get machine NAME PHASE TYPE REGION ZONE AGE huliu-azure02pr-jmvl2-1-6gbdw Provisioning 4m58s huliu-azure02pr-jmvl2-master-0 Running Standard_D8s_v3 southcentralus 1 5h11m huliu-azure02pr-jmvl2-master-1 Running Standard_D8s_v3 southcentralus 2 5h11m huliu-azure02pr-jmvl2-master-2 Running Standard_D8s_v3 southcentralus 3 5h11m huliu-azure02pr-jmvl2-worker-southcentralus1-9hgmk Running Standard_D4s_v3 southcentralus 1 4h56m huliu-azure02pr-jmvl2-worker-southcentralus2-44mf6 Running Standard_D4s_v3 southcentralus 2 4h56m huliu-azure02pr-jmvl2-worker-southcentralus3-4m9b7 Running Standard_D4s_v3 southcentralus 3 4h56m liuhuali@Lius-MacBook-Pro huali-test % oc get machine huliu-azure02pr-jmvl2-1-6gbdw -o yaml apiVersion: machine.openshift.io/v1beta1 kind: Machine metadata: creationTimestamp: "2022-09-29T06:36:03Z" finalizers: - machine.machine.openshift.io generateName: huliu-azure02pr-jmvl2-1- generation: 2 labels: machine.openshift.io/cluster-api-cluster: huliu-azure02pr-jmvl2 machine.openshift.io/cluster-api-machine-role: worker machine.openshift.io/cluster-api-machine-type: worker machine.openshift.io/cluster-api-machineset: huliu-azure02pr-jmvl2-1 name: huliu-azure02pr-jmvl2-1-6gbdw namespace: openshift-machine-api ownerReferences: - apiVersion: machine.openshift.io/v1beta1 blockOwnerDeletion: true controller: true kind: MachineSet name: huliu-azure02pr-jmvl2-1 uid: f729cb01-274a-4c6e-8f69-808cff412fe3 resourceVersion: "174604" uid: 2c4b9dd4-5666-47cd-8fc5-38bac0b9cad1 spec: lifecycleHooks: {} metadata: {} providerSpec: value: acceleratedNetworking: true apiVersion: machine.openshift.io/v1beta1 credentialsSecret: name: azure-cloud-credentials namespace: openshift-machine-api diagnostics: {} image: offer: "" publisher: "" resourceID: /resourceGroups/huliu-azure02pr-jmvl2-rg/providers/Microsoft.Compute/images/huliu-azure02pr-jmvl2-gen2 sku: "" version: "" kind: AzureMachineProviderSpec location: southcentralus managedIdentity: huliu-azure02pr-jmvl2-identity metadata: creationTimestamp: null name: huliu-azure02pr-jmvl2 networkResourceGroup: huliu-azure02pr-jmvl2-rg osDisk: diskSettings: {} diskSizeGB: 128 managedDisk: storageAccountType: Premium_LRS osType: Linux publicIP: false publicLoadBalancer: huliu-azure02pr-jmvl2 resourceGroup: huliu-azure02pr-jmvl2-rg subnet: huliu-azure02pr-jmvl2-worker-subnet userDataSecret: name: worker-user-data vmSize: invalid vnet: huliu-azure02pr-jmvl2-vnet zone: "1" status: conditions: - lastTransitionTime: "2022-09-29T06:36:03Z" status: "True" type: Drainable - lastTransitionTime: "2022-09-29T06:36:03Z" message: Instance has not been created reason: InstanceNotCreated severity: Warning status: "False" type: InstanceExists - lastTransitionTime: "2022-09-29T06:36:03Z" status: "True" type: Terminable lastUpdated: "2022-09-29T06:36:03Z" phase: Provisioning providerStatus: conditions: - lastTransitionTime: "2022-09-29T06:36:03Z" message: 'failed to create nic huliu-azure02pr-jmvl2-1-6gbdw-nic for machine huliu-azure02pr-jmvl2-1-6gbdw: failed to find sku invalid' reason: MachineCreationFailed status: "True" type: MachineCreated metadata: {} machine-controller log: ... 
W0929 11:38:25.817887 1 controller.go:382] huliu-azure02pr-jmvl2-invalid-lzzb2: failed to create machine: requeue in: 20s I0929 11:38:25.817905 1 controller.go:412] Actuator returned requeue-after error: requeue in: 20s I0929 11:38:25.817984 1 logr.go:252] events "msg"="Warning" "message"="CreateError: failed to reconcile machine \"huliu-azure02pr-jmvl2-invalid-lzzb2\"s: failed to create nic huliu-azure02pr-jmvl2-invalid-lzzb2-nic for machine huliu-azure02pr-jmvl2-invalid-lzzb2: failed to find sku invalid" "object"={"kind":"Machine","namespace":"openshift-machine-api","name":"huliu-azure02pr-jmvl2-invalid-lzzb2","uid":"bab43f44-7da9-4b62-bbdc-01a180cc1de7","apiVersion":"machine.openshift.io/v1beta1","resourceVersion":"316506"} "reason"="FailedCreate" I0929 11:38:25.817989 1 controller.go:187] huliu-azure02pr-jmvl2-invalid-lzzb2: reconciling Machine I0929 11:38:25.818015 1 actuator.go:213] huliu-azure02pr-jmvl2-invalid-lzzb2: actuator checking if machine exists W0929 11:38:25.916417 1 virtualmachines.go:99] vm huliu-azure02pr-jmvl2-invalid-lzzb2 not found: %!w(string=compute.VirtualMachinesClient#Get: Failure responding to request: StatusCode=404 -- Original Error: autorest/azure: Service returned an error. Status=404 Code="ResourceNotFound" Message="The Resource 'Microsoft.Compute/virtualMachines/huliu-azure02pr-jmvl2-invalid-lzzb2' under resource group 'huliu-azure02pr-jmvl2-rg' was not found. For more details please go to https://aka.ms/ARMResourceNotFoundFix") I0929 11:38:25.916463 1 controller.go:380] huliu-azure02pr-jmvl2-invalid-lzzb2: reconciling machine triggers idempotent create I0929 11:38:25.916476 1 actuator.go:85] Creating machine huliu-azure02pr-jmvl2-invalid-lzzb2 I0929 11:38:25.917540 1 machine_scope.go:176] huliu-azure02pr-jmvl2-invalid-lzzb2: status unchanged I0929 11:38:25.917596 1 machine_scope.go:192] huliu-azure02pr-jmvl2-invalid-lzzb2: patching machine E0929 11:38:25.941083 1 actuator.go:79] Machine error: failed to reconcile machine "huliu-azure02pr-jmvl2-invalid-lzzb2"s: failed to create nic huliu-azure02pr-jmvl2-invalid-lzzb2-nic for machine huliu-azure02pr-jmvl2-invalid-lzzb2: failed to find sku invalid
Actual results:
The Machine is stuck in Provisioning, and the error message is not accurate.
Expected results:
The Machine goes into the Failed phase with an InvalidConfiguration error, as in previous versions.
Additional info:
test result on previous version: liuhuali@Lius-MacBook-Pro huali-test % oc get machine NAME PHASE TYPE REGION ZONE AGE jfan49-jn66b-master-0 Running Standard_D8s_v3 westus 6h27m jfan49-jn66b-master-1 Running Standard_D8s_v3 westus 6h27m jfan49-jn66b-master-2 Running Standard_D8s_v3 westus 6h27m jfan49-jn66b-worker-1-tdpdt Failed 61s jfan49-jn66b-worker-westus-2fz6b Running Standard_D4s_v3 westus 6h21m jfan49-jn66b-worker-westus-6fkgb Running Standard_D4s_v3 westus 6h21m jfan49-jn66b-worker-westus-k74gf Running Standard_D4s_v3 westus 6h21m liuhuali@Lius-MacBook-Pro huali-test % oc get machine jfan49-jn66b-worker-1-tdpdt -o yaml apiVersion: machine.openshift.io/v1beta1 kind: Machine metadata: annotations: machine.openshift.io/instance-state: Unknown creationTimestamp: "2022-09-29T08:50:13Z" finalizers: - machine.machine.openshift.io generateName: jfan49-jn66b-worker-1- generation: 2 labels: machine.openshift.io/cluster-api-cluster: jfan49-jn66b machine.openshift.io/cluster-api-machine-role: worker machine.openshift.io/cluster-api-machine-type: worker machine.openshift.io/cluster-api-machineset: jfan49-jn66b-worker-1 name: jfan49-jn66b-worker-1-tdpdt namespace: openshift-machine-api ownerReferences: - apiVersion: machine.openshift.io/v1beta1 blockOwnerDeletion: true controller: true kind: MachineSet name: jfan49-jn66b-worker-1 uid: 4319d2e2-3ee2-4cb2-a7b4-5a0d4e1ea3d7 resourceVersion: "128119" uid: 7d9e4bbe-7c37-416e-a133-577476937b7a spec: metadata: {} providerSpec: value: apiVersion: azureproviderconfig.openshift.io/v1beta1 credentialsSecret: name: azure-cloud-credentials namespace: openshift-machine-api image: offer: "" publisher: "" resourceID: /resourceGroups/jfan49-jn66b-rg/providers/Microsoft.Compute/images/jfan49-jn66b sku: "" version: "" kind: AzureMachineProviderSpec location: westus managedIdentity: jfan49-jn66b-identity metadata: creationTimestamp: null name: jfan49-jn66b networkResourceGroup: jfan49-jn66b-rg osDisk: diskSizeGB: 128 managedDisk: storageAccountType: Premium_LRS osType: Linux publicIP: false publicLoadBalancer: jfan49-jn66b resourceGroup: jfan49-jn66b-rg subnet: jfan49-jn66b-worker-subnet userDataSecret: name: worker-user-data vmSize: invalid vnet: jfan49-jn66b-vnet zone: "" status: conditions: - lastTransitionTime: "2022-09-29T08:50:13Z" message: Instance has not been created reason: InstanceNotCreated severity: Warning status: "False" type: InstanceExists errorMessage: 'failed to reconcile machine "jfan49-jn66b-worker-1-tdpdt": failed to create vm jfan49-jn66b-worker-1-tdpdt: failure sending request for machine jfan49-jn66b-worker-1-tdpdt: cannot create vm: compute.VirtualMachinesClient#CreateOrUpdate: Failure sending request: StatusCode=400 -- Original Error: Code="InvalidParameter" Message="The value invalid provided for the VM size is not valid. 
The valid sizes in the current region are: Standard_B1ls,Standard_B1ms,Standard_B1s,Standard_B2ms,Standard_B2s,Standard_B4ms,Standard_B8ms,Standard_B12ms,Standard_B16ms,Standard_B20ms,Standard_E2_v4,Standard_E4_v4,Standard_E8_v4,Standard_E16_v4,Standard_E20_v4,Standard_E32_v4,Standard_E2d_v4,Standard_E4d_v4,Standard_E8d_v4,Standard_E16d_v4,Standard_E20d_v4,Standard_E32d_v4,Standard_E2s_v4,Standard_E4-2s_v4,Standard_E4s_v4,Standard_E8-2s_v4,Standard_E8-4s_v4,Standard_E8s_v4,Standard_E16-4s_v4,Standard_E16-8s_v4,Standard_E16s_v4,Standard_E20s_v4,Standard_E32-8s_v4,Standard_E32-16s_v4,Standard_E32s_v4,Standard_E2ds_v4,Standard_E4-2ds_v4,Standard_E4ds_v4,Standard_E8-2ds_v4,Standard_E8-4ds_v4,Standard_E8ds_v4,Standard_E16-4ds_v4,Standard_E16-8ds_v4,Standard_E16ds_v4,Standard_E20ds_v4,Standard_E32-8ds_v4,Standard_E32-16ds_v4,Standard_E32ds_v4,Standard_D2d_v4,Standard_D4d_v4,Standard_D8d_v4,Standard_D16d_v4,Standard_D32d_v4,Standard_D48d_v4,Standard_D64d_v4,Standard_D2_v4,Standard_D4_v4,Standard_D8_v4,Standard_D16_v4,Standard_D32_v4,Standard_D48_v4,Standard_D64_v4,Standard_D2ds_v4,Standard_D4ds_v4,Standard_D8ds_v4,Standard_D16ds_v4,Standard_D32ds_v4,Standard_D48ds_v4,Standard_D64ds_v4,Standard_D2s_v4,Standard_D4s_v4,Standard_D8s_v4,Standard_D16s_v4,Standard_D32s_v4,Standard_D48s_v4,Standard_D64s_v4,Standard_D1_v2,Standard_D2_v2,Standard_D3_v2,Standard_D4_v2,Standard_D5_v2,Standard_D11_v2,Standard_D12_v2,Standard_D13_v2,Standard_D14_v2,Standard_D15_v2,Standard_D2_v2_Promo,Standard_D3_v2_Promo,Standard_D4_v2_Promo,Standard_D5_v2_Promo,Standard_D11_v2_Promo,Standard_D12_v2_Promo,Standard_D13_v2_Promo,Standard_D14_v2_Promo,Standard_F1,Standard_F2,Standard_F4,Standard_F8,Standard_F16,Standard_DS1_v2,Standard_DS2_v2,Standard_DS3_v2,Standard_DS4_v2,Standard_DS5_v2,Standard_DS11-1_v2,Standard_DS11_v2,Standard_DS12-1_v2,Standard_DS12-2_v2,Standard_DS12_v2,Standard_DS13-2_v2,Standard_DS13-4_v2,Standard_DS13_v2,Standard_DS14-4_v2,Standard_DS14-8_v2,Standard_DS14_v2,Standard_DS15_v2,Standard_DS2_v2_Promo,Standard_DS3_v2_Promo,Standard_DS4_v2_Promo,Standard_DS5_v2_Promo,Standard_DS11_v2_Promo,Standard_DS12_v2_Promo,Standard_DS13_v2_Promo,Standard_DS14_v2_Promo,Standard_F1s,Standard_F2s,Standard_F4s,Standard_F8s,Standard_F16s,Standard_A1_v2,Standard_A2m_v2,Standard_A2_v2,Standard_A4m_v2,Standard_A4_v2,Standard_A8m_v2,Standard_A8_v2,Standard_D2_v3,Standard_D4_v3,Standard_D8_v3,Standard_D16_v3,Standard_D32_v3,Standard_D48_v3,Standard_D64_v3,Standard_D2s_v3,Standard_D4s_v3,Standard_D8s_v3,Standard_D16s_v3,Standard_D32s_v3,Standard_D48s_v3,Standard_D64s_v3,Standard_E2_v3,Standard_E4_v3,Standard_E8_v3,Standard_E16_v3,Standard_E20_v3,Standard_E32_v3,Standard_E2s_v3,Standard_E4-2s_v3,Standard_E4s_v3,Standard_E8-2s_v3,Standard_E8-4s_v3,Standard_E8s_v3,Standard_E16-4s_v3,Standard_E16-8s_v3,Standard_E16s_v3,Standard_E20s_v3,Standard_E32-8s_v3,Standard_E32-16s_v3,Standard_E32s_v3,Standard_F2s_v2,Standard_F4s_v2,Standard_F8s_v2,Standard_F16s_v2,Standard_F32s_v2,Standard_F48s_v2,Standard_F64s_v2,Standard_F72s_v2,Standard_E48_v4,Standard_E64_v4,Standard_E48d_v4,Standard_E64d_v4,Standard_E48s_v4,Standard_E64-16s_v4,Standard_E64-32s_v4,Standard_E64s_v4,Standard_E80is_v4,Standard_E48ds_v4,Standard_E64-16ds_v4,Standard_E64-32ds_v4,Standard_E64ds_v4,Standard_E80ids_v4,Standard_E48_v3,Standard_E64_v3,Standard_E48s_v3,Standard_E64-16s_v3,Standard_E64-32s_v3,Standard_E64s_v3,Standard_A0,Standard_A1,Standard_A2,Standard_A3,Standard_A5,Standard_A4,Standard_A6,Standard_A7,Basic_A0,Basic_A1,Basic_A2,Basic_A3,Basic_A4,Standard_NC4as_T4_v
3,Standard_NC8as_T4_v3,Standard_NC16as_T4_v3,Standard_NC64as_T4_v3,Standard_M64,Standard_M64m,Standard_M128,Standard_M128m,Standard_M8-2ms,Standard_M8-4ms,Standard_M8ms,Standard_M16-4ms,Standard_M16-8ms,Standard_M16ms,Standard_M32-8ms,Standard_M32-16ms,Standard_M32ls,Standard_M32ms,Standard_M32ts,Standard_M64-16ms,Standard_M64-32ms,Standard_M64ls,Standard_M64ms,Standard_M64s,Standard_M128-32ms,Standard_M128-64ms,Standard_M128ms,Standard_M128s,Standard_M32ms_v2,Standard_M64ms_v2,Standard_M64s_v2,Standard_M128ms_v2,Standard_M128s_v2,Standard_M192ims_v2,Standard_M192is_v2,Standard_M32dms_v2,Standard_M64dms_v2,Standard_M64ds_v2,Standard_M128dms_v2,Standard_M128ds_v2,Standard_M192idms_v2,Standard_M192ids_v2,Standard_E64i_v3,Standard_E64is_v3,Standard_D1,Standard_D2,Standard_D3,Standard_D4,Standard_D11,Standard_D12,Standard_D13,Standard_D14,Standard_DS1,Standard_DS2,Standard_DS3,Standard_DS4,Standard_DS11,Standard_DS12,Standard_DS13,Standard_DS14,Standard_DC8_v2,Standard_DC1s_v2,Standard_DC2s_v2,Standard_DC4s_v2,Standard_L8s_v2,Standard_L16s_v2,Standard_L32s_v2,Standard_L48s_v2,Standard_L64s_v2,Standard_L80s_v2,Standard_NV4as_v4,Standard_NV8as_v4,Standard_NV16as_v4,Standard_NV32as_v4,Standard_G1,Standard_G2,Standard_G3,Standard_G4,Standard_G5,Standard_GS1,Standard_GS2,Standard_GS3,Standard_GS4,Standard_GS4-4,Standard_GS4-8,Standard_GS5,Standard_GS5-8,Standard_GS5-16,Standard_L4s,Standard_L8s,Standard_L16s,Standard_L32s,Standard_DC2as_v5,Standard_DC4as_v5,Standard_DC8as_v5,Standard_DC16as_v5,Standard_DC32as_v5,Standard_DC48as_v5,Standard_DC64as_v5,Standard_DC96as_v5,Standard_DC2ads_v5,Standard_DC4ads_v5,Standard_DC8ads_v5,Standard_DC16ads_v5,Standard_DC32ads_v5,Standard_DC48ads_v5,Standard_DC64ads_v5,Standard_DC96ads_v5,Standard_EC2as_v5,Standard_EC4as_v5,Standard_EC8as_v5,Standard_EC16as_v5,Standard_EC20as_v5,Standard_EC32as_v5,Standard_EC48as_v5,Standard_EC64as_v5,Standard_EC96as_v5,Standard_EC96ias_v5,Standard_EC2ads_v5,Standard_EC4ads_v5,Standard_EC8ads_v5,Standard_EC16ads_v5,Standard_EC20ads_v5,Standard_EC32ads_v5,Standard_EC48ads_v5,Standard_EC64ads_v5,Standard_EC96ads_v5,Standard_EC96iads_v5,Standard_D2ds_v5,Standard_D4ds_v5,Standard_D8ds_v5,Standard_D16ds_v5,Standard_D32ds_v5,Standard_D48ds_v5,Standard_D64ds_v5,Standard_D96ds_v5,Standard_D2d_v5,Standard_D4d_v5,Standard_D8d_v5,Standard_D16d_v5,Standard_D32d_v5,Standard_D48d_v5,Standard_D64d_v5,Standard_D96d_v5,Standard_D2s_v5,Standard_D4s_v5,Standard_D8s_v5,Standard_D16s_v5,Standard_D32s_v5,Standard_D48s_v5,Standard_D64s_v5,Standard_D96s_v5,Standard_D2_v5,Standard_D4_v5,Standard_D8_v5,Standard_D16_v5,Standard_D32_v5,Standard_D48_v5,Standard_D64_v5,Standard_D96_v5,Standard_E2ds_v5,Standard_E4-2ds_v5,Standard_E4ds_v5,Standard_E8-2ds_v5,Standard_E8-4ds_v5,Standard_E8ds_v5,Standard_E16-4ds_v5,Standard_E16-8ds_v5,Standard_E16ds_v5,Standard_E20ds_v5,Standard_E32-8ds_v5,Standard_E32-16ds_v5,Standard_E32ds_v5,Standard_E48ds_v5,Standard_E64-16ds_v5,Standard_E64-32ds_v5,Standard_E64ds_v5,Standard_E96-24ds_v5,Standard_E96-48ds_v5,Standard_E96ds_v5,Standard_E104ids_v5,Standard_E2d_v5,Standard_E4d_v5,Standard_E8d_v5,Standard_E16d_v5,Standard_E20d_v5,Standard_E32d_v5,Standard_E48d_v5,Standard_E64d_v5,Standard_E96d_v5,Standard_E104id_v5,Standard_E2s_v5,Standard_E4-2s_v5,Standard_E4s_v5,Standard_E8-2s_v5,Standard_E8-4s_v5,Standard_E8s_v5,Standard_E16-4s_v5,Standard_E16-8s_v5,Standard_E16s_v5,Standard_E20s_v5,Standard_E32-8s_v5,Standard_E32-16s_v5,Standard_E32s_v5,Standard_E48s_v5,Standard_E64-16s_v5,Standard_E64-32s_v5,Standard_E64s_v5,Standard_E96-24s_v5,
Standard_E96-48s_v5,Standard_E96s_v5,Standard_E104is_v5,Standard_E2_v5,Standard_E4_v5,Standard_E8_v5,Standard_E16_v5,Standard_E20_v5,Standard_E32_v5,Standard_E48_v5,Standard_E64_v5,Standard_E96_v5,Standard_E104i_v5,Standard_E2bs_v5,Standard_E4bs_v5,Standard_E8bs_v5,Standard_E16bs_v5,Standard_E32bs_v5,Standard_E48bs_v5,Standard_E64bs_v5,Standard_E2bds_v5,Standard_E4bds_v5,Standard_E8bds_v5,Standard_E16bds_v5,Standard_E32bds_v5,Standard_E48bds_v5,Standard_E64bds_v5,Standard_D2a_v4,Standard_D4a_v4,Standard_D8a_v4,Standard_D16a_v4,Standard_D32a_v4,Standard_D48a_v4,Standard_D64a_v4,Standard_D96a_v4,Standard_D2as_v4,Standard_D4as_v4,Standard_D8as_v4,Standard_D16as_v4,Standard_D32as_v4,Standard_D48as_v4,Standard_D64as_v4,Standard_D96as_v4,Standard_E2a_v4,Standard_E4a_v4,Standard_E8a_v4,Standard_E16a_v4,Standard_E20a_v4,Standard_E32a_v4,Standard_E48a_v4,Standard_E64a_v4,Standard_E96a_v4,Standard_E2as_v4,Standard_E4-2as_v4,Standard_E4as_v4,Standard_E8-2as_v4,Standard_E8-4as_v4,Standard_E8as_v4,Standard_E16-4as_v4,Standard_E16-8as_v4,Standard_E16as_v4,Standard_E20as_v4,Standard_E32-8as_v4,Standard_E32-16as_v4,Standard_E32as_v4,Standard_E48as_v4,Standard_E64-16as_v4,Standard_E64-32as_v4,Standard_E64as_v4,Standard_E96-24as_v4,Standard_E96-48as_v4,Standard_E96as_v4,Standard_E96ias_v4,Standard_NC6s_v3,Standard_NC12s_v3,Standard_NC24rs_v3,Standard_NC24s_v3,Standard_NV6s_v2,Standard_NV12s_v2,Standard_NV24s_v2,Standard_NV12s_v3,Standard_NV24s_v3,Standard_NV48s_v3,Standard_H8,Standard_H8_Promo,Standard_H16,Standard_H16_Promo,Standard_H8m,Standard_H8m_Promo,Standard_H16m,Standard_H16m_Promo,Standard_H16r,Standard_H16r_Promo,Standard_H16mr,Standard_H16mr_Promo,Standard_M208ms_v2,Standard_M208s_v2,Standard_M416-208s_v2,Standard_M416s_v2,Standard_M416-208ms_v2,Standard_M416ms_v2,Standard_DC1s_v3,Standard_DC2s_v3,Standard_DC4s_v3,Standard_DC8s_v3,Standard_DC16s_v3,Standard_DC24s_v3,Standard_DC32s_v3,Standard_DC48s_v3,Standard_DC1ds_v3,Standard_DC2ds_v3,Standard_DC4ds_v3,Standard_DC8ds_v3,Standard_DC16ds_v3,Standard_DC24ds_v3,Standard_DC32ds_v3,Standard_DC48ds_v3,Standard_NC24ads_A100_v4,Standard_NC48ads_A100_v4,Standard_NC96ads_A100_v4,Standard_D2as_v5,Standard_D4as_v5,Standard_D8as_v5,Standard_D16as_v5,Standard_D32as_v5,Standard_D48as_v5,Standard_D64as_v5,Standard_D96as_v5,Standard_E2as_v5,Standard_E4-2as_v5,Standard_E4as_v5,Standard_E8-2as_v5,Standard_E8-4as_v5,Standard_E8as_v5,Standard_E16-4as_v5,Standard_E16-8as_v5,Standard_E16as_v5,Standard_E20as_v5,Standard_E32-8as_v5,Standard_E32-16as_v5,Standard_E32as_v5,Standard_E48as_v5,Standard_E64-16as_v5,Standard_E64-32as_v5,Standard_E64as_v5,Standard_E96-24as_v5,Standard_E96-48as_v5,Standard_E96as_v5,Standard_E112ias_v5,Standard_D2ads_v5,Standard_D4ads_v5,Standard_D8ads_v5,Standard_D16ads_v5,Standard_D32ads_v5,Standard_D48ads_v5,Standard_D64ads_v5,Standard_D96ads_v5,Standard_E2ads_v5,Standard_E4-2ads_v5,Standard_E4ads_v5,Standard_E8-2ads_v5,Standard_E8-4ads_v5,Standard_E8ads_v5,Standard_E16-4ads_v5,Standard_E16-8ads_v5,Standard_E16ads_v5,Standard_E20ads_v5,Standard_E32-8ads_v5,Standard_E32-16ads_v5,Standard_E32ads_v5,Standard_E48ads_v5,Standard_E64-16ads_v5,Standard_E64-32ads_v5,Standard_E64ads_v5,Standard_E96-24ads_v5,Standard_E96-48ads_v5,Standard_E96ads_v5,Standard_E112iads_v5,Standard_L8s_v3,Standard_L16s_v3,Standard_L32s_v3,Standard_L48s_v3,Standard_L64s_v3,Standard_L80s_v3. Find out more on the valid VM sizes in each region at https://aka.ms/azure-regionservices." 
Target="vmSize"' errorReason: InvalidConfiguration lastUpdated: "2022-09-29T08:50:19Z" phase: Failed providerStatus: conditions: - lastProbeTime: "2022-09-29T08:50:19Z" lastTransitionTime: "2022-09-29T08:50:19Z" message: 'failed to create vm jfan49-jn66b-worker-1-tdpdt: failure sending request for machine jfan49-jn66b-worker-1-tdpdt: cannot create vm: compute.VirtualMachinesClient#CreateOrUpdate: Failure sending request: StatusCode=400 -- Original Error: Code="InvalidParameter" Message="The value invalid provided for the VM size is not valid. The valid sizes in the current region are: Standard_B1ls,Standard_B1ms,Standard_B1s,Standard_B2ms,Standard_B2s,Standard_B4ms,Standard_B8ms,Standard_B12ms,Standard_B16ms,Standard_B20ms,Standard_E2_v4,Standard_E4_v4,Standard_E8_v4,Standard_E16_v4,Standard_E20_v4,Standard_E32_v4,Standard_E2d_v4,Standard_E4d_v4,Standard_E8d_v4,Standard_E16d_v4,Standard_E20d_v4,Standard_E32d_v4,Standard_E2s_v4,Standard_E4-2s_v4,Standard_E4s_v4,Standard_E8-2s_v4,Standard_E8-4s_v4,Standard_E8s_v4,Standard_E16-4s_v4,Standard_E16-8s_v4,Standard_E16s_v4,Standard_E20s_v4,Standard_E32-8s_v4,Standard_E32-16s_v4,Standard_E32s_v4,Standard_E2ds_v4,Standard_E4-2ds_v4,Standard_E4ds_v4,Standard_E8-2ds_v4,Standard_E8-4ds_v4,Standard_E8ds_v4,Standard_E16-4ds_v4,Standard_E16-8ds_v4,Standard_E16ds_v4,Standard_E20ds_v4,Standard_E32-8ds_v4,Standard_E32-16ds_v4,Standard_E32ds_v4,Standard_D2d_v4,Standard_D4d_v4,Standard_D8d_v4,Standard_D16d_v4,Standard_D32d_v4,Standard_D48d_v4,Standard_D64d_v4,Standard_D2_v4,Standard_D4_v4,Standard_D8_v4,Standard_D16_v4,Standard_D32_v4,Standard_D48_v4,Standard_D64_v4,Standard_D2ds_v4,Standard_D4ds_v4,Standard_D8ds_v4,Standard_D16ds_v4,Standard_D32ds_v4,Standard_D48ds_v4,Standard_D64ds_v4,Standard_D2s_v4,Standard_D4s_v4,Standard_D8s_v4,Standard_D16s_v4,Standard_D32s_v4,Standard_D48s_v4,Standard_D64s_v4,Standard_D1_v2,Standard_D2_v2,Standard_D3_v2,Standard_D4_v2,Standard_D5_v2,Standard_D11_v2,Standard_D12_v2,Standard_D13_v2,Standard_D14_v2,Standard_D15_v2,Standard_D2_v2_Promo,Standard_D3_v2_Promo,Standard_D4_v2_Promo,Standard_D5_v2_Promo,Standard_D11_v2_Promo,Standard_D12_v2_Promo,Standard_D13_v2_Promo,Standard_D14_v2_Promo,Standard_F1,Standard_F2,Standard_F4,Standard_F8,Standard_F16,Standard_DS1_v2,Standard_DS2_v2,Standard_DS3_v2,Standard_DS4_v2,Standard_DS5_v2,Standard_DS11-1_v2,Standard_DS11_v2,Standard_DS12-1_v2,Standard_DS12-2_v2,Standard_DS12_v2,Standard_DS13-2_v2,Standard_DS13-4_v2,Standard_DS13_v2,Standard_DS14-4_v2,Standard_DS14-8_v2,Standard_DS14_v2,Standard_DS15_v2,Standard_DS2_v2_Promo,Standard_DS3_v2_Promo,Standard_DS4_v2_Promo,Standard_DS5_v2_Promo,Standard_DS11_v2_Promo,Standard_DS12_v2_Promo,Standard_DS13_v2_Promo,Standard_DS14_v2_Promo,Standard_F1s,Standard_F2s,Standard_F4s,Standard_F8s,Standard_F16s,Standard_A1_v2,Standard_A2m_v2,Standard_A2_v2,Standard_A4m_v2,Standard_A4_v2,Standard_A8m_v2,Standard_A8_v2,Standard_D2_v3,Standard_D4_v3,Standard_D8_v3,Standard_D16_v3,Standard_D32_v3,Standard_D48_v3,Standard_D64_v3,Standard_D2s_v3,Standard_D4s_v3,Standard_D8s_v3,Standard_D16s_v3,Standard_D32s_v3,Standard_D48s_v3,Standard_D64s_v3,Standard_E2_v3,Standard_E4_v3,Standard_E8_v3,Standard_E16_v3,Standard_E20_v3,Standard_E32_v3,Standard_E2s_v3,Standard_E4-2s_v3,Standard_E4s_v3,Standard_E8-2s_v3,Standard_E8-4s_v3,Standard_E8s_v3,Standard_E16-4s_v3,Standard_E16-8s_v3,Standard_E16s_v3,Standard_E20s_v3,Standard_E32-8s_v3,Standard_E32-16s_v3,Standard_E32s_v3,Standard_F2s_v2,Standard_F4s_v2,Standard_F8s_v2,Standard_F16s_v2,Standard_F32s_v2,Standard_F48s_v2,Standa
rd_F64s_v2,Standard_F72s_v2,Standard_E48_v4,Standard_E64_v4,Standard_E48d_v4,Standard_E64d_v4,Standard_E48s_v4,Standard_E64-16s_v4,Standard_E64-32s_v4,Standard_E64s_v4,Standard_E80is_v4,Standard_E48ds_v4,Standard_E64-16ds_v4,Standard_E64-32ds_v4,Standard_E64ds_v4,Standard_E80ids_v4,Standard_E48_v3,Standard_E64_v3,Standard_E48s_v3,Standard_E64-16s_v3,Standard_E64-32s_v3,Standard_E64s_v3,Standard_A0,Standard_A1,Standard_A2,Standard_A3,Standard_A5,Standard_A4,Standard_A6,Standard_A7,Basic_A0,Basic_A1,Basic_A2,Basic_A3,Basic_A4,Standard_NC4as_T4_v3,Standard_NC8as_T4_v3,Standard_NC16as_T4_v3,Standard_NC64as_T4_v3,Standard_M64,Standard_M64m,Standard_M128,Standard_M128m,Standard_M8-2ms,Standard_M8-4ms,Standard_M8ms,Standard_M16-4ms,Standard_M16-8ms,Standard_M16ms,Standard_M32-8ms,Standard_M32-16ms,Standard_M32ls,Standard_M32ms,Standard_M32ts,Standard_M64-16ms,Standard_M64-32ms,Standard_M64ls,Standard_M64ms,Standard_M64s,Standard_M128-32ms,Standard_M128-64ms,Standard_M128ms,Standard_M128s,Standard_M32ms_v2,Standard_M64ms_v2,Standard_M64s_v2,Standard_M128ms_v2,Standard_M128s_v2,Standard_M192ims_v2,Standard_M192is_v2,Standard_M32dms_v2,Standard_M64dms_v2,Standard_M64ds_v2,Standard_M128dms_v2,Standard_M128ds_v2,Standard_M192idms_v2,Standard_M192ids_v2,Standard_E64i_v3,Standard_E64is_v3,Standard_D1,Standard_D2,Standard_D3,Standard_D4,Standard_D11,Standard_D12,Standard_D13,Standard_D14,Standard_DS1,Standard_DS2,Standard_DS3,Standard_DS4,Standard_DS11,Standard_DS12,Standard_DS13,Standard_DS14,Standard_DC8_v2,Standard_DC1s_v2,Standard_DC2s_v2,Standard_DC4s_v2,Standard_L8s_v2,Standard_L16s_v2,Standard_L32s_v2,Standard_L48s_v2,Standard_L64s_v2,Standard_L80s_v2,Standard_NV4as_v4,Standard_NV8as_v4,Standard_NV16as_v4,Standard_NV32as_v4,Standard_G1,Standard_G2,Standard_G3,Standard_G4,Standard_G5,Standard_GS1,Standard_GS2,Standard_GS3,Standard_GS4,Standard_GS4-4,Standard_GS4-8,Standard_GS5,Standard_GS5-8,Standard_GS5-16,Standard_L4s,Standard_L8s,Standard_L16s,Standard_L32s,Standard_DC2as_v5,Standard_DC4as_v5,Standard_DC8as_v5,Standard_DC16as_v5,Standard_DC32as_v5,Standard_DC48as_v5,Standard_DC64as_v5,Standard_DC96as_v5,Standard_DC2ads_v5,Standard_DC4ads_v5,Standard_DC8ads_v5,Standard_DC16ads_v5,Standard_DC32ads_v5,Standard_DC48ads_v5,Standard_DC64ads_v5,Standard_DC96ads_v5,Standard_EC2as_v5,Standard_EC4as_v5,Standard_EC8as_v5,Standard_EC16as_v5,Standard_EC20as_v5,Standard_EC32as_v5,Standard_EC48as_v5,Standard_EC64as_v5,Standard_EC96as_v5,Standard_EC96ias_v5,Standard_EC2ads_v5,Standard_EC4ads_v5,Standard_EC8ads_v5,Standard_EC16ads_v5,Standard_EC20ads_v5,Standard_EC32ads_v5,Standard_EC48ads_v5,Standard_EC64ads_v5,Standard_EC96ads_v5,Standard_EC96iads_v5,Standard_D2ds_v5,Standard_D4ds_v5,Standard_D8ds_v5,Standard_D16ds_v5,Standard_D32ds_v5,Standard_D48ds_v5,Standard_D64ds_v5,Standard_D96ds_v5,Standard_D2d_v5,Standard_D4d_v5,Standard_D8d_v5,Standard_D16d_v5,Standard_D32d_v5,Standard_D48d_v5,Standard_D64d_v5,Standard_D96d_v5,Standard_D2s_v5,Standard_D4s_v5,Standard_D8s_v5,Standard_D16s_v5,Standard_D32s_v5,Standard_D48s_v5,Standard_D64s_v5,Standard_D96s_v5,Standard_D2_v5,Standard_D4_v5,Standard_D8_v5,Standard_D16_v5,Standard_D32_v5,Standard_D48_v5,Standard_D64_v5,Standard_D96_v5,Standard_E2ds_v5,Standard_E4-2ds_v5,Standard_E4ds_v5,Standard_E8-2ds_v5,Standard_E8-4ds_v5,Standard_E8ds_v5,Standard_E16-4ds_v5,Standard_E16-8ds_v5,Standard_E16ds_v5,Standard_E20ds_v5,Standard_E32-8ds_v5,Standard_E32-16ds_v5,Standard_E32ds_v5,Standard_E48ds_v5,Standard_E64-16ds_v5,Standard_E64-32ds_v5,Standard_E64ds_v5,Standard_E96-24ds_v5,Sta
ndard_E96-48ds_v5,Standard_E96ds_v5,Standard_E104ids_v5,Standard_E2d_v5,Standard_E4d_v5,Standard_E8d_v5,Standard_E16d_v5,Standard_E20d_v5,Standard_E32d_v5,Standard_E48d_v5,Standard_E64d_v5,Standard_E96d_v5,Standard_E104id_v5,Standard_E2s_v5,Standard_E4-2s_v5,Standard_E4s_v5,Standard_E8-2s_v5,Standard_E8-4s_v5,Standard_E8s_v5,Standard_E16-4s_v5,Standard_E16-8s_v5,Standard_E16s_v5,Standard_E20s_v5,Standard_E32-8s_v5,Standard_E32-16s_v5,Standard_E32s_v5,Standard_E48s_v5,Standard_E64-16s_v5,Standard_E64-32s_v5,Standard_E64s_v5,Standard_E96-24s_v5,Standard_E96-48s_v5,Standard_E96s_v5,Standard_E104is_v5,Standard_E2_v5,Standard_E4_v5,Standard_E8_v5,Standard_E16_v5,Standard_E20_v5,Standard_E32_v5,Standard_E48_v5,Standard_E64_v5,Standard_E96_v5,Standard_E104i_v5,Standard_E2bs_v5,Standard_E4bs_v5,Standard_E8bs_v5,Standard_E16bs_v5,Standard_E32bs_v5,Standard_E48bs_v5,Standard_E64bs_v5,Standard_E2bds_v5,Standard_E4bds_v5,Standard_E8bds_v5,Standard_E16bds_v5,Standard_E32bds_v5,Standard_E48bds_v5,Standard_E64bds_v5,Standard_D2a_v4,Standard_D4a_v4,Standard_D8a_v4,Standard_D16a_v4,Standard_D32a_v4,Standard_D48a_v4,Standard_D64a_v4,Standard_D96a_v4,Standard_D2as_v4,Standard_D4as_v4,Standard_D8as_v4,Standard_D16as_v4,Standard_D32as_v4,Standard_D48as_v4,Standard_D64as_v4,Standard_D96as_v4,Standard_E2a_v4,Standard_E4a_v4,Standard_E8a_v4,Standard_E16a_v4,Standard_E20a_v4,Standard_E32a_v4,Standard_E48a_v4,Standard_E64a_v4,Standard_E96a_v4,Standard_E2as_v4,Standard_E4-2as_v4,Standard_E4as_v4,Standard_E8-2as_v4,Standard_E8-4as_v4,Standard_E8as_v4,Standard_E16-4as_v4,Standard_E16-8as_v4,Standard_E16as_v4,Standard_E20as_v4,Standard_E32-8as_v4,Standard_E32-16as_v4,Standard_E32as_v4,Standard_E48as_v4,Standard_E64-16as_v4,Standard_E64-32as_v4,Standard_E64as_v4,Standard_E96-24as_v4,Standard_E96-48as_v4,Standard_E96as_v4,Standard_E96ias_v4,Standard_NC6s_v3,Standard_NC12s_v3,Standard_NC24rs_v3,Standard_NC24s_v3,Standard_NV6s_v2,Standard_NV12s_v2,Standard_NV24s_v2,Standard_NV12s_v3,Standard_NV24s_v3,Standard_NV48s_v3,Standard_H8,Standard_H8_Promo,Standard_H16,Standard_H16_Promo,Standard_H8m,Standard_H8m_Promo,Standard_H16m,Standard_H16m_Promo,Standard_H16r,Standard_H16r_Promo,Standard_H16mr,Standard_H16mr_Promo,Standard_M208ms_v2,Standard_M208s_v2,Standard_M416-208s_v2,Standard_M416s_v2,Standard_M416-208ms_v2,Standard_M416ms_v2,Standard_DC1s_v3,Standard_DC2s_v3,Standard_DC4s_v3,Standard_DC8s_v3,Standard_DC16s_v3,Standard_DC24s_v3,Standard_DC32s_v3,Standard_DC48s_v3,Standard_DC1ds_v3,Standard_DC2ds_v3,Standard_DC4ds_v3,Standard_DC8ds_v3,Standard_DC16ds_v3,Standard_DC24ds_v3,Standard_DC32ds_v3,Standard_DC48ds_v3,Standard_NC24ads_A100_v4,Standard_NC48ads_A100_v4,Standard_NC96ads_A100_v4,Standard_D2as_v5,Standard_D4as_v5,Standard_D8as_v5,Standard_D16as_v5,Standard_D32as_v5,Standard_D48as_v5,Standard_D64as_v5,Standard_D96as_v5,Standard_E2as_v5,Standard_E4-2as_v5,Standard_E4as_v5,Standard_E8-2as_v5,Standard_E8-4as_v5,Standard_E8as_v5,Standard_E16-4as_v5,Standard_E16-8as_v5,Standard_E16as_v5,Standard_E20as_v5,Standard_E32-8as_v5,Standard_E32-16as_v5,Standard_E32as_v5,Standard_E48as_v5,Standard_E64-16as_v5,Standard_E64-32as_v5,Standard_E64as_v5,Standard_E96-24as_v5,Standard_E96-48as_v5,Standard_E96as_v5,Standard_E112ias_v5,Standard_D2ads_v5,Standard_D4ads_v5,Standard_D8ads_v5,Standard_D16ads_v5,Standard_D32ads_v5,Standard_D48ads_v5,Standard_D64ads_v5,Standard_D96ads_v5,Standard_E2ads_v5,Standard_E4-2ads_v5,Standard_E4ads_v5,Standard_E8-2ads_v5,Standard_E8-4ads_v5,Standard_E8ads_v5,Standard_E16-4ads_v5,Standard_E16-8ads_v5,Standard
_E16ads_v5,Standard_E20ads_v5,Standard_E32-8ads_v5,Standard_E32-16ads_v5,Standard_E32ads_v5,Standard_E48ads_v5,Standard_E64-16ads_v5,Standard_E64-32ads_v5,Standard_E64ads_v5,Standard_E96-24ads_v5,Standard_E96-48ads_v5,Standard_E96ads_v5,Standard_E112iads_v5,Standard_L8s_v3,Standard_L16s_v3,Standard_L32s_v3,Standard_L48s_v3,Standard_L64s_v3,Standard_L80s_v3. Find out more on the valid VM sizes in each region at https://aka.ms/azure-regionservices." Target="vmSize"' reason: MachineCreationFailed status: "True" type: MachineCreated metadata: {}
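For reference, the vmSize that triggers this error is set in the Azure providerSpec of the MachineSet (or Machine). A minimal sketch of where to correct it is below; the MachineSet name is taken from the machine name prefix in the log and the chosen size is illustrative (any size from the region's valid list, such as Standard_D4s_v3, would do), so treat this as an assumption rather than the exact object from the report.

apiVersion: machine.openshift.io/v1beta1
kind: MachineSet
metadata:
  name: jfan49-jn66b-worker-1   # illustrative, derived from the machine name prefix in the log
  namespace: openshift-machine-api
spec:
  template:
    spec:
      providerSpec:
        value:
          # replace the invalid value with a size from the region's valid list above
          vmSize: Standard_D4s_v3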
This is a clone of issue OCPBUGS-1428. The following is the description of the original issue:
—
Description of problem:
When using an OperatorGroup attached to a service account, AND if there is a secret present in the namespace, the operator installation will fail with the message: "the service account does not have any API secret sa=testx-ns/testx-sa". This issue seems similar to https://bugzilla.redhat.com/show_bug.cgi?id=2094303 - which was resolved in 4.11.0 - however, the new element now is that the presence of a secret in the namespace is causing the issue. The name of the secret seems significant, suggesting something somewhere depends on the order in which secrets are listed. For example, if the secret in the namespace is called "asecret", the problem does not occur. If it is called "zsecret", the problem always occurs.
"zsecret" is not a "kubernetes.io/service-account-token". The issue I have raised here relates to Opaque secrets - zsecret is an Opaque secret. The issue may apply to other types of secrets, but specifically my issue is that when there is an Opaque secret present in the namespace, the operator install fails as described. I ought to be allowed to have an Opaque secret present in the namespace where I am installing the operator.
Version-Release number of selected component (if applicable):
4.11.0 & 4.11.1
How reproducible:
100% reproducible
Steps to Reproduce:
1. Create namespace: oc new-project testx-ns
2. oc apply -f api-secret-issue.yaml
Actual results:
Expected results:
Additional info:
API YAML:
cat api-secret-issue.yaml
apiVersion: v1
kind: Secret
metadata:
  name: zsecret
  namespace: testx-ns
  annotations:
    kubernetes.io/service-account.name: testx-sa
type: Opaque
stringData:
  mykey: mypass
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: testx-sa
  namespace: testx-ns
---
kind: OperatorGroup
apiVersion: operators.coreos.com/v1
metadata:
  name: testx-og
  namespace: testx-ns
spec:
  serviceAccountName: "testx-sa"
  targetNamespaces:
  - testx-ns
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: testx-role
  namespace: testx-ns
rules:
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: testx-rolebinding
  namespace: testx-ns
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: testx-role
subjects:
---
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: etcd-operator
  namespace: testx-ns
spec:
  channel: singlenamespace-alpha
  installPlanApproval: Automatic
  name: etcd
  source: community-operators
  sourceNamespace: openshift-marketplace
This is a clone of issue OCPBUGS-3744. The following is the description of the original issue:
—
Description of problem:
Egress router POD creation on Openshift 4.11 is failing with below error. ~~~ Nov 15 21:51:29 pltocpwn03 hyperkube[3237]: E1115 21:51:29.467436 3237 pod_workers.go:951] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"stage-wfe-proxy-ext-qrhjw_stage-wfe-proxy(c965a287-28aa-47b6-9e79-0cc0e209fcf2)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"stage-wfe-proxy-ext-qrhjw_stage-wfe-proxy(c965a287-28aa-47b6-9e79-0cc0e209fcf2)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_stage-wfe-proxy-ext-qrhjw_stage-wfe-proxy_c965a287-28aa-47b6-9e79-0cc0e209fcf2_0(72bcf9e52b199061d6e651e84b0892efc142601b2442c2d00b92a1ba23208344): error adding pod stage-wfe-proxy_stage-wfe-proxy-ext-qrhjw to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): [stage-wfe-proxy/stage-wfe-proxy-ext-qrhjw/c965a287-28aa-47b6-9e79-0cc0e209fcf2:openshift-sdn]: error adding container to network \\\"openshift-sdn\\\": CNI request failed with status 400: 'could not open netns \\\"/var/run/netns/8c5ca402-3381-4935-baed-ea454161d669\\\": unknown FS magic on \\\"/var/run/netns/8c5ca402-3381-4935-baed-ea454161d669\\\": 1021994\\n'\"" pod="stage-wfe-proxy/stage-wfe-proxy-ext-qrhjw" podUID=c965a287-28aa-47b6-9e79-0cc0e209fcf2 ~~~ I have checked SDN POD log from node where egress router POD is failing and I could see below error message. ~~~ 2022-11-15T21:51:29.283002590Z W1115 21:51:29.282954 181720 pod.go:296] CNI_ADD stage-wfe-proxy/stage-wfe-proxy-ext-qrhjw failed: could not open netns "/var/run/netns/8c5ca402-3381-4935-baed-ea454161d669": unknown FS magic on "/var/run/netns/8c5ca402-3381-4935-baed-ea454161d669": 1021994 ~~~ Crio is logging below event and looking at the log it seems the namespace has been created on node. ~~~ Nov 15 21:51:29 pltocpwn03 crio[3150]: time="2022-11-15 21:51:29.307184956Z" level=info msg="Got pod network &{Name:stage-wfe-proxy-ext-qrhjw Namespace:stage-wfe-proxy ID:72bcf9e52b199061d6e651e84b0892efc142601b2442c2d00b92a1ba23208344 UID:c965a287-28aa-47b6-9e79-0cc0e209fcf2 NetNS:/var/run/netns/8c5ca402-3381-4935-baed-ea454161d669 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}" ~~~
Version-Release number of selected component (if applicable):
4.11.12
How reproducible:
Not Sure
Steps to Reproduce:
1. 2. 3.
Actual results:
The egress router pod is failing to be created. A sample application could be created without any issue.
Expected results:
Egress router POD should get created
Additional info:
The egress router pod was created following the document below, and it does contain the pod.network.openshift.io/assign-macvlan: "true" annotation. https://docs.openshift.com/container-platform/4.11/networking/openshift_sdn/deploying-egress-router-layer3-redirection.html#nw-egress-router-pod_deploying-egress-router-layer3-redirection
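For context, a minimal sketch of a layer-3 redirect egress router pod following the linked documentation is shown below; the pod name, image references, and addresses are illustrative assumptions, not values taken from this report.

apiVersion: v1
kind: Pod
metadata:
  name: egress-router-example            # illustrative
  annotations:
    pod.network.openshift.io/assign-macvlan: "true"   # the annotation mentioned above
spec:
  initContainers:
  - name: egress-router-setup
    image: registry.redhat.io/openshift4/ose-egress-router   # illustrative image reference
    securityContext:
      privileged: true
    env:
    - name: EGRESS_SOURCE
      value: 192.168.12.99/24            # node-local source IP for the macvlan interface (example)
    - name: EGRESS_GATEWAY
      value: 192.168.12.1                # gateway on the node subnet (example)
    - name: EGRESS_DESTINATION
      value: 203.0.113.25                # external destination to redirect to (example)
    - name: EGRESS_ROUTER_MODE
      value: init
  containers:
  - name: egress-router-wait
    image: registry.redhat.io/openshift4/ose-pod   # illustrative pause image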
Found when running the resource watcher: these keep updating with no real changes, just moving conditions around. Likely needs bugs for all three.
Description of problem:
When using the agent-based installer to zero-touch provision the cluster, if the network bandwidth is low and assisted-service-pod.service or assisted-service.service fails to pull the container image within the timeout, the create-cluster-and-infraenv, apply-host-config, and start-cluster-installation services will be deactivated due to a failed dependency. The process is then blocked and requires enabling and starting the services manually.
Version-Release number of selected component (if applicable):
openshift-install 4.11.0 built from commit 863cd1ea823559116e26de327705ed72ccdede8f release image quay.io/openshift-release-dev/ocp-release@sha256:300bce8246cf880e792e106607925de0a404484637627edf5f517375517d54a4 release architecture amd64
How reproducible:
Install OpenShift with the agent-based installer using a local mirror.
Steps to Reproduce:
1. Stop the local registry or limit the network bandwidth so that assisted-service-pod.service or assisted-service.service fails to start within the 90s timeout.
2. Start the local registry or manually pull the image on node 0.
3.
Actual results:
When using the agent-based installer to zero-touch provision the cluster, if the network bandwidth is low and assisted-service-pod.service or assisted-service.service fails to pull the container image within the timeout, the create-cluster-and-infraenv, apply-host-config, and start-cluster-installation services are deactivated due to a failed dependency. The process is blocked and requires enabling and starting the services manually.
Expected results:
Provisioning starts after assisted-service has started.
Additional info:
Given:
- assisted-service-pod.service requires assisted-service-db.service and assisted-service.service
- assisted-service.service has BindsTo=assisted-service-pod.service
- create-cluster-and-infraenv.service has Requires=assisted-service.service and PartOf=assisted-service-pod.service
- apply-host-config.service has Requires=create-cluster-and-infraenv.service
- start-cluster-installation.service has Requires=apply-host-config.service
Requires= "Configures requirement dependencies on other units. If this unit gets activated, the units listed here will be activated as well. If one of the other units gets deactivated or its activation fails, this unit will be deactivated."
When assisted-service-pod.service starts, assisted-service-db.service and assisted-service.service are started as well. Once assisted-service-pod.service fails to start, assisted-service.service also fails to start because of BindsTo=assisted-service-pod.service. The dependency then fails for create-cluster-and-infraenv.service, since Requires=assisted-service.service points at a unit whose activation failed, so it is deactivated. The dependency then fails for apply-host-config.service (Requires=create-cluster-and-infraenv.service), so it is deactivated, and likewise for start-cluster-installation.service (Requires=apply-host-config.service). When assisted-service-pod.service restarts, assisted-service.service and assisted-service-db.service restart as well, since they are bound to it. However, create-cluster-and-infraenv.service, apply-host-config.service, and start-cluster-installation.service remain deactivated and have to be activated manually. Eventually assisted-service starts and hangs waiting to create the infraenv, and provisioning is blocked.
Description of problem:
https://github.com/openshift/api/pull/1186 - https://issues.redhat.com/browse/CONSOLE-3069 promoted ConsolePlugin CRD to v1. The PR introduces also a conversion webhook from v1alpha1 to v1. In new CRD version I18n ConsolePluginI18n is marked as optional. The conversion webhook will not set a default valid ("Lazy"/"Preload") value writing the v1 object and a v1 object completely omitting spec.i18n will be accepted we no valid default value as well. On the other side, at garbage collection time the object will be stuck forever due to the lack of a valid value for spec.i18n.loadType Example, create a v1 ConsolePlugin object: cat <<EOF | oc apply -f - apiVersion: console.openshift.io/v1 kind: ConsolePlugin metadata: name: test472 spec: backend: service: basePath: / name: test472-service namespace: kubevirt-hyperconverged port: 9443 type: Service displayName: Test 472 Plugin EOF Delete it in foreground mode: stirabos@t14s:~$ oc delete consoleplugin test472 --timeout=30s --cascade='foreground' -v 7 I1011 18:20:03.255605 31610 loader.go:372] Config loaded from file: /home/stirabos/.kube/config I1011 18:20:03.266567 31610 round_trippers.go:463] DELETE https://api.ci-ln-krdzphb-72292.gcp-2.ci.openshift.org:6443/apis/console.openshift.io/v1/consoleplugins/test472 I1011 18:20:03.266581 31610 round_trippers.go:469] Request Headers: I1011 18:20:03.266588 31610 round_trippers.go:473] Accept: application/json I1011 18:20:03.266594 31610 round_trippers.go:473] Content-Type: application/json I1011 18:20:03.266600 31610 round_trippers.go:473] User-Agent: oc/4.11.0 (linux/amd64) kubernetes/fcf512e I1011 18:20:03.266606 31610 round_trippers.go:473] Authorization: Bearer <masked> I1011 18:20:03.688569 31610 round_trippers.go:574] Response Status: 200 OK in 421 milliseconds consoleplugin.console.openshift.io "test472" deleted I1011 18:20:03.688911 31610 round_trippers.go:463] GET https://api.ci-ln-krdzphb-72292.gcp-2.ci.openshift.org:6443/apis/console.openshift.io/v1/consoleplugins?fieldSelector=metadata.name%3Dtest472 I1011 18:20:03.688919 31610 round_trippers.go:469] Request Headers: I1011 18:20:03.688928 31610 round_trippers.go:473] Authorization: Bearer <masked> I1011 18:20:03.688935 31610 round_trippers.go:473] Accept: application/json I1011 18:20:03.688941 31610 round_trippers.go:473] User-Agent: oc/4.11.0 (linux/amd64) kubernetes/fcf512e I1011 18:20:03.840103 31610 round_trippers.go:574] Response Status: 200 OK in 151 milliseconds I1011 18:20:03.840825 31610 round_trippers.go:463] GET https://api.ci-ln-krdzphb-72292.gcp-2.ci.openshift.org:6443/apis/console.openshift.io/v1/consoleplugins?fieldSelector=metadata.name%3Dtest472&resourceVersion=175205&watch=true I1011 18:20:03.840848 31610 round_trippers.go:469] Request Headers: I1011 18:20:03.840884 31610 round_trippers.go:473] Accept: application/json I1011 18:20:03.840907 31610 round_trippers.go:473] User-Agent: oc/4.11.0 (linux/amd64) kubernetes/fcf512e I1011 18:20:03.840928 31610 round_trippers.go:473] Authorization: Bearer <masked> I1011 18:20:03.972219 31610 round_trippers.go:574] Response Status: 200 OK in 131 milliseconds error: timed out waiting for the condition on consoleplugins/test472 and in kube-controller-manager logs we see: 2022-10-11T16:25:32.192864016Z I1011 16:25:32.192788 1 garbagecollector.go:501] "Processing object" object="test472" objectUID=0cc46a01-113b-4bbe-9c7a-829a97d6867c kind="ConsolePlugin" virtual=false 2022-10-11T16:25:32.282303274Z I1011 16:25:32.282161 1 garbagecollector.go:623] remove DeleteDependents finalizer for item 
[console.openshift.io/v1/ConsolePlugin, namespace: , name: test472, uid: 0cc46a01-113b-4bbe-9c7a-829a97d6867c] 2022-10-11T16:25:32.304835330Z E1011 16:25:32.304730 1 garbagecollector.go:379] error syncing item &garbagecollector.node{identity:garbagecollector.objectReference{OwnerReference:v1.OwnerReference{APIVersion:"console.openshift.io/v1", Kind:"ConsolePlugin", Name:"test472", UID:"0cc46a01-113b-4bbe-9c7a-829a97d6867c", Controller:(*bool)(nil), BlockOwnerDeletion:(*bool)(nil)}, Namespace:""}, dependentsLock:sync.RWMutex{w:sync.Mutex{state:0, sema:0x0}, writerSem:0x0, readerSem:0x0, readerCount:1, readerWait:0}, dependents:map[*garbagecollector.node]struct {}{}, deletingDependents:true, deletingDependentsLock:sync.RWMutex{w:sync.Mutex{state:0, sema:0x0}, writerSem:0x0, readerSem:0x0, readerCount:0, readerWait:0}, beingDeleted:true, beingDeletedLock:sync.RWMutex{w:sync.Mutex{state:0, sema:0x0}, writerSem:0x0, readerSem:0x0, readerCount:0, readerWait:0}, virtual:false, virtualLock:sync.RWMutex{w:sync.Mutex{state:0, sema:0x0}, writerSem:0x0, readerSem:0x0, readerCount:0, readerWait:0}, owners:[]v1.OwnerReference(nil)}: ConsolePlugin.console.openshift.io "test472" is invalid: spec.i18n.loadType: Unsupported value: "": supported values: "Preload", "Lazy"
Version-Release number of selected component (if applicable):
OCP 4.12.0 ec4
How reproducible:
100%
Steps to Reproduce:
1. cat <<EOF | oc apply -f - apiVersion: console.openshift.io/v1 kind: ConsolePlugin metadata: name: test472 spec: backend: service: basePath: / name: test472-service namespace: kubevirt-hyperconverged port: 9443 type: Service displayName: Test 472 Plugin EOF
2. oc delete consoleplugin test472 --timeout=30s --cascade='foreground' -v 7
Actual results:
2022-10-11T16:25:32.192864016Z I1011 16:25:32.192788 1 garbagecollector.go:501] "Processing object" object="test472" objectUID=0cc46a01-113b-4bbe-9c7a-829a97d6867c kind="ConsolePlugin" virtual=false 2022-10-11T16:25:32.282303274Z I1011 16:25:32.282161 1 garbagecollector.go:623] remove DeleteDependents finalizer for item [console.openshift.io/v1/ConsolePlugin, namespace: , name: test472, uid: 0cc46a01-113b-4bbe-9c7a-829a97d6867c] 2022-10-11T16:25:32.304835330Z E1011 16:25:32.304730 1 garbagecollector.go:379] error syncing item &garbagecollector.node{identity:garbagecollector.objectReference{OwnerReference:v1.OwnerReference{APIVersion:"console.openshift.io/v1", Kind:"ConsolePlugin", Name:"test472", UID:"0cc46a01-113b-4bbe-9c7a-829a97d6867c", Controller:(*bool)(nil), BlockOwnerDeletion:(*bool)(nil)}, Namespace:""}, dependentsLock:sync.RWMutex{w:sync.Mutex{state:0, sema:0x0}, writerSem:0x0, readerSem:0x0, readerCount:1, readerWait:0}, dependents:map[*garbagecollector.node]struct {}{}, deletingDependents:true, deletingDependentsLock:sync.RWMutex{w:sync.Mutex{state:0, sema:0x0}, writerSem:0x0, readerSem:0x0, readerCount:0, readerWait:0}, beingDeleted:true, beingDeletedLock:sync.RWMutex{w:sync.Mutex{state:0, sema:0x0}, writerSem:0x0, readerSem:0x0, readerCount:0, readerWait:0}, virtual:false, virtualLock:sync.RWMutex{w:sync.Mutex{state:0, sema:0x0}, writerSem:0x0, readerSem:0x0, readerCount:0, readerWait:0}, owners:[]v1.OwnerReference(nil)}: ConsolePlugin.console.openshift.io "test472" is invalid: spec.i18n.loadType: Unsupported value: "": supported values: "Preload", "Lazy"
Expected results:
Object correctly deleted
Additional info:
The issue doesn't happen with --cascade='background' which is the default on the CLI client
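Until the conversion webhook defaults spec.i18n.loadType, a likely way to avoid the stuck foreground deletion is to set the field explicitly when creating the v1 object. This is a sketch based on the manifest above, with loadType set to one of the supported values reported by the API ("Preload" or "Lazy"); whether this fully avoids the problem is an assumption.

apiVersion: console.openshift.io/v1
kind: ConsolePlugin
metadata:
  name: test472
spec:
  displayName: Test 472 Plugin
  i18n:
    loadType: Lazy          # explicitly set; omitting it leads to the GC validation error above
  backend:
    type: Service
    service:
      name: test472-service
      namespace: kubevirt-hyperconverged
      port: 9443
      basePath: /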
This is a clone of issue OCPBUGS-2598. The following is the description of the original issue:
—
Description of problem:
The liveness probe of the ipsec pods fails on large clusters. Currently the command executed in the ipsec container is: ovs-appctl -t ovs-monitor-ipsec ipsec/status && ipsec status. The problem is with the "ipsec/status" subcommand: in clusters with a high node count this command returns a list of all the node daemons in the cluster, which means that as the node count rises, the completion time of the command rises too.
This makes the main command
ovs-appctl -t ovs-monitor-ipsec
to hang until the subcommand is finished.
As the liveness and readiness probe values are hardcoded in the manifest of the ipsec container (here: https://github.com/openshift/cluster-network-operator/blob/9c1181e34316d34db49d573698d2779b008bcc20/bindata/network/ovn-kubernetes/common/ipsec.yaml), the 60-second liveness timeout of the container probe starts to be insufficient as the node count grows. This resulted in a cluster with 170+ nodes having 15+ ipsec pods in a CrashLoopBackOff state.
Version-Release number of selected component (if applicable):
OpenShift Container Platform 4.10, but I think the same will be visible in other versions too.
How reproducible:
I was not able to reproduce this because an extremely high amount of resources is needed, and I think there is no point since we have already spotted the issue.
Steps to Reproduce:
1. Install an OpenShift cluster with IPsec enabled.
2. Scale to 170+ nodes.
3. Notice that the ipsec pods start getting into a CrashLoopBackOff state with failed liveness/readiness probes.
Actual results:
IPsec pods are stuck in a CrashLoopBackOff state.
Expected results:
IPsec pods work normally.
Additional info:
We have provided a workaround where the CVO and CNO operators are scaled to 0 replicas so that the liveness probe timeout can be increased to a value of 600, which recovered the cluster. As a next step the customer will try to reduce the node count, restore the default liveness timeout value, and bring the operators back to see if the cluster stabilizes.
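For illustration, the workaround amounts to raising the probe timeout on the ipsec container in the CNO-managed manifest (with CVO and CNO scaled down so the edit is not reverted). A sketch of the probe fragment is below; only the command and the 600-second timeout come from this report, and the remaining probe fields are assumptions.

livenessProbe:
  exec:
    command:
    - /bin/bash
    - -c
    - ovs-appctl -t ovs-monitor-ipsec ipsec/status && ipsec status
  periodSeconds: 60        # assumed; the report only states the 60s timeout was insufficient
  timeoutSeconds: 600      # raised value used in the workaround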
This is a clone of issue OCPBUGS-5151. The following is the description of the original issue:
—
Description of problem:
The customer is not able to install a new OCP bare metal IPI cluster. During bootstrapping, the provisioning interfaces of the master nodes do not get an IPv4 DHCP address from the bootstrap DHCP server. Please refer to the following bug: https://issues.redhat.com/browse/OCPBUGS-872. The problem was solved by applying rd.net.timeout.carrier=30 to the kernel parameters of compute nodes via the cluster-baremetal-operator. The fix also needs to be applied to the control plane. Ref: https://github.com/openshift/cluster-baremetal-operator/pull/286/files
Version-Release number of selected component (if applicable):
How reproducible:
Perform OCP 4.10.16 IPI BareMetal install.
Steps to Reproduce:
1. 2. 3.
Actual results:
Expected results:
Customer should be able to install the cluster without any issue.
Additional info:
With the CSISnapshot capability disabled, the Azure Disk CSI Driver Operator becomes Degraded.
The reason is that cluster-csi-snapshot-controller-operator does not create the VolumeSnapshotClass CRD, which the driver operator expects to exist.
This is a clone of issue OCPBUGS-3767. The following is the description of the original issue:
—
Description of problem:
Start maintenance action moved from Nodes tab to Bare Metal Hosts tab
Version-Release number of selected component (if applicable):
Cluster version is 4.12.0-0.nightly-2022-11-15-024309
How reproducible:
100%
Steps to Reproduce:
1. Install the Node Maintenance operator
2. Go to Compute -> Nodes
3. Start maintenance from the 3-dots menu of worker-0-0; see https://docs.openshift.com/container-platform/4.11/nodes/nodes/eco-node-maintenance-operator.html#eco-setting-node-maintenance-actions-web-console_node-maintenance-operator
Actual results:
No 'Start maintenance' option
Expected results:
Maintenance started successfully
Additional info:
worked for 4.11
Description of problem:
The latest implementation of the history pruner (PR 805 [1]) increased the max upgrade history in CVO to 100 and implemented a weight-based pruning priority strategy in case the history grows any larger. This pruning, however, is not happening, letting the history grow uncontrollably and potentially reach resource limits of etcd or Kubernetes.
Observed the following while running continuous upgrade-rollback cycles:
$ oc get clusterversion version -o json | jq '.status.history|length'
203
Version-Release number of selected component (if applicable):
4.12.0-0.nightly-2022-08-23-223922
4.12.0-0.nightly-2022-08-23-153511
How reproducible:
1/1
Steps to Reproduce:
Same as described in bz2097067 [2], with addition of waiting a few minutes after the first rollback to allow it to reach 'Completed' state.
Actual results:
History grows uncontrollably
Expected results:
History should be pruned to keep max size of 100
Additional info:
[1] https://github.com/openshift/cluster-version-operator/pull/805
[2] https://bugzilla.redhat.com/show_bug.cgi?id=2097067#c4
Because the agent ISO is ephemeral, it is probably safe to allow a user to log in to it with a password. If the network configuration is broken, a user may have no other way to debug it other than to log in through the console, which is currently not possible.
The best password to set would be the kubeadmin password used for the OpenShift GUI, since we'll have generated that already.
We must take care to test that this does not result in the installed nodes on disk allowing login with a password.
This bug is a backport clone of [Bugzilla Bug 2073220](https://bugzilla.redhat.com/show_bug.cgi?id=2073220). The following is the description of the original bug:
—
Description of problem:
Version-Release number of selected component (if applicable): 4.*
How reproducible: always
Steps to Reproduce:
1. Set audit profile to WriteRequestBodies
2. Wait for api server rollout to complete
3. tail -f /var/log/kube-apiserver/audit.log | grep routes/status
Actual results:
Write events to routes/status are recorded at the RequestResponse level, which often includes keys and certificates.
Expected results:
Events involving routes should always be recorded at the Metadata level, per the documentation at https://docs.openshift.com/container-platform/4.10/security/audit-log-policy-config.html#about-audit-log-profiles_audit-log-policy-config
Additional info:
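For reference, step 1 of the reproducer corresponds to setting the top-level audit profile on the cluster APIServer resource; a minimal sketch:

apiVersion: config.openshift.io/v1
kind: APIServer
metadata:
  name: cluster
spec:
  audit:
    profile: WriteRequestBodies   # per the docs, routes should still be logged at Metadata level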
This is a clone of issue OCPBUGS-2513. The following is the description of the original issue:
—
Description of problem:
Agent-based installation is failing in a disconnected environment because a pull secret is required for registry.ci.openshift.org. Since we are installing the cluster in a disconnected environment, the mirror registry secrets alone should be enough for pulling the image.
Version-Release number of selected component (if applicable):
registry.ci.openshift.org/ocp/release:4.12.0-0.nightly-2022-10-18-041406
How reproducible:
Always
Steps to Reproduce:
1. Set up the mirror registry with the registry.ci.openshift.org/ocp/release:4.12.0-0.nightly-2022-10-18-041406 release.
2. Add the ICSP information to the install-config file.
4. Create agent.iso using install-config.yaml and agent-config.yaml.
5. ssh to node zero to see the error in create-cluster-and-infraenv.service.
Actual results:
create-cluster-and-infraenv.service is failing with the error below:
time="2022-10-18T09:36:13Z" level=fatal msg="Failed to register cluster with assisted-service: AssistedServiceError Code: 400 Href: ID: 400 Kind: Error Reason: pull secret for new cluster is invalid: pull secret must contain auth for \"registry.ci.openshift.org\""
Expected results:
create-cluster-and-infraenv.service should be successfully started.
Additional info:
Refer this similar bug https://bugzilla.redhat.com/show_bug.cgi?id=1990659
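For context, a minimal sketch of the relevant install-config.yaml pieces for a disconnected agent-based install is shown below; the mirror registry host and credentials are placeholders, and the expectation per this bug is that the mirror pull secret alone should be sufficient, without an auth entry for registry.ci.openshift.org.

# fragment of install-config.yaml (illustrative values)
pullSecret: '{"auths":{"mirror.example.com:8443":{"auth":"<base64 user:password>"}}}'
additionalTrustBundle: |
  -----BEGIN CERTIFICATE-----
  <mirror registry CA>
  -----END CERTIFICATE-----
imageContentSources:
- mirrors:
  - mirror.example.com:8443/ocp/release
  source: registry.ci.openshift.org/ocp/release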
Sample archive with both resources:
archives/compressed/3c/3cc4318d-e564-450b-b16e-51ef279b87fa/202209/30/200617.tar.gz
Sample query to find more archives:
with t as (
  select cluster_id, file_path, json_extract_scalar(content, '$.kind') as kind
  from raw_io_archives
  where date = '2022-09-30' and file_path like 'config/storage/%'
)
select cluster_id, count(*) as cnt
from t
group by cluster_id
order by cnt desc;
This is a clone of issue OCPBUGS-501. The following is the description of the original issue:
—
Description of problem:
Version-Release number of selected component (if applicable): 4.10.16
How reproducible: Always
Steps to Reproduce:
1. Edit the apiserver resource and add spec.audit.customRules field
$ oc get apiserver cluster -o yaml
spec:
  audit:
    customRules:
2. Allow the kube-apiserver pods to roll out a new revision.
3. Once the kube-apiserver pods are at the new revision, execute $ oc get dc
Actual results:
Error from server (InternalError): an error on the server ("This request caused apiserver to panic. Look in the logs for details.") has prevented the request from succeeding (get deploymentconfigs.apps.openshift.io)
Expected results: The command "oc get dc" should display the deploymentconfig without any error.
Additional info:
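The reproducer above truncates the actual customRules used; one possible shape of the field, per the audit log policy documentation, is sketched below as an assumption to show where the edit from step 1 goes.

apiVersion: config.openshift.io/v1
kind: APIServer
metadata:
  name: cluster
spec:
  audit:
    profile: Default
    customRules:
    - group: system:authenticated:oauth   # illustrative group
      profile: WriteRequestBodies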
This is a clone of issue OCPBUGS-3426. The following is the description of the original issue:
—
Description of problem:
We need to update the operator to be in sync with the Kubernetes API version used by OCP 4.13. We also need to sync our samples libraries with the latest available libraries. Any deprecated libraries should be removed as well.
Version-Release number of selected component (if applicable):
How reproducible:
Steps to Reproduce:
1. 2. 3.
Actual results:
Expected results:
Additional info:
Description of problem:
In a 4.11 cluster with only openshift-samples enabled, the 4.12 introduced optional COs console and insights are installed. While upgrading to 4.12, CVO considers them to be disabled explicitly and skips reconciling them. So these COs are not upgraded to 4.12. Installed COs cannot be disabled, so CVO is supposed to implicitly enable them. $ oc get clusterversion -oyaml { "apiVersion": "config.openshift.io/v1", "kind": "ClusterVersion", "metadata": { "creationTimestamp": "2022-09-30T05:02:31Z", "generation": 3, "name": "version", "resourceVersion": "134808", "uid": "bd95473f-ffda-402d-8fe3-74f852a9d6eb" }, "spec": { "capabilities": { "additionalEnabledCapabilities": [ "openshift-samples" ], "baselineCapabilitySet": "None" }, "channel": "stable-4.11", "clusterID": "8eda5167-a730-4b39-be1d-214a80506d34", "desiredUpdate": { "force": true, "image": "registry.ci.openshift.org/ocp/release@sha256:2c8e617830f84ac1ee1bfcc3581010dec4ae5d9cad7a54271574e8d91ef5ecbc", "version": "" } }, "status": { "availableUpdates": null, "capabilities": { "enabledCapabilities": [ "openshift-samples" ], "knownCapabilities": [ "Console", "Insights", "Storage", "baremetal", "marketplace", "openshift-samples" ] }, "conditions": [ { "lastTransitionTime": "2022-09-30T05:02:33Z", "message": "Unable to retrieve available updates: currently reconciling cluster version 4.12.0-0.nightly-2022-09-28-204419 not found in the \"stable-4.11\" channel", "reason": "VersionNotFound", "status": "False", "type": "RetrievedUpdates" }, { "lastTransitionTime": "2022-09-30T05:02:33Z", "message": "Capabilities match configured spec", "reason": "AsExpected", "status": "False", "type": "ImplicitlyEnabledCapabilities" }, { "lastTransitionTime": "2022-09-30T05:02:33Z", "message": "Payload loaded version=\"4.12.0-0.nightly-2022-09-28-204419\" image=\"registry.ci.openshift.org/ocp/release@sha256:2c8e617830f84ac1ee1bfcc3581010dec4ae5d9cad7a54271574e8d91ef5ecbc\" architecture=\"amd64\"", "reason": "PayloadLoaded", "status": "True", "type": "ReleaseAccepted" }, { "lastTransitionTime": "2022-09-30T05:23:18Z", "message": "Done applying 4.12.0-0.nightly-2022-09-28-204419", "status": "True", "type": "Available" }, { "lastTransitionTime": "2022-09-30T07:05:42Z", "status": "False", "type": "Failing" }, { "lastTransitionTime": "2022-09-30T07:41:53Z", "message": "Cluster version is 4.12.0-0.nightly-2022-09-28-204419", "status": "False", "type": "Progressing" } ], "desired": { "image": "registry.ci.openshift.org/ocp/release@sha256:2c8e617830f84ac1ee1bfcc3581010dec4ae5d9cad7a54271574e8d91ef5ecbc", "version": "4.12.0-0.nightly-2022-09-28-204419" }, "history": [ { "completionTime": "2022-09-30T07:41:53Z", "image": "registry.ci.openshift.org/ocp/release@sha256:2c8e617830f84ac1ee1bfcc3581010dec4ae5d9cad7a54271574e8d91ef5ecbc", "startedTime": "2022-09-30T06:42:01Z", "state": "Completed", "verified": false, "version": "4.12.0-0.nightly-2022-09-28-204419" }, { "completionTime": "2022-09-30T05:23:18Z", "image": "registry.ci.openshift.org/ocp/release@sha256:5a6f6d1bf5c752c75d7554aa927c06b5ea0880b51909e83387ee4d3bca424631", "startedTime": "2022-09-30T05:02:33Z", "state": "Completed", "verified": false, "version": "4.11.0-0.nightly-2022-09-29-191451" } ], "observedGeneration": 3, "versionHash": "CSCJ2fxM_2o=" } } $ oc get co NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE MESSAGE authentication 4.12.0-0.nightly-2022-09-28-204419 True False False 93m cloud-controller-manager 4.12.0-0.nightly-2022-09-28-204419 True False False 3h56m cloud-credential 
4.12.0-0.nightly-2022-09-28-204419 True False False 3h59m cluster-autoscaler 4.12.0-0.nightly-2022-09-28-204419 True False False 3h53m config-operator 4.12.0-0.nightly-2022-09-28-204419 True False False 3h54m console 4.11.0-0.nightly-2022-09-29-191451 True False False 3h45m control-plane-machine-set 4.12.0-0.nightly-2022-09-28-204419 True False False 117m csi-snapshot-controller 4.12.0-0.nightly-2022-09-28-204419 True False False 3h54m dns 4.12.0-0.nightly-2022-09-28-204419 True False False 3h53m etcd 4.12.0-0.nightly-2022-09-28-204419 True False False 3h52m image-registry 4.12.0-0.nightly-2022-09-28-204419 True False False 3h46m ingress 4.12.0-0.nightly-2022-09-28-204419 True False False 151m insights 4.11.0-0.nightly-2022-09-29-191451 True False False 3h48m kube-apiserver 4.12.0-0.nightly-2022-09-28-204419 True False False 3h50m kube-controller-manager 4.12.0-0.nightly-2022-09-28-204419 True False False 3h51m kube-scheduler 4.12.0-0.nightly-2022-09-28-204419 True False False 3h51m kube-storage-version-migrator 4.12.0-0.nightly-2022-09-28-204419 True False False 91m machine-api 4.12.0-0.nightly-2022-09-28-204419 True False False 3h50m machine-approver 4.12.0-0.nightly-2022-09-28-204419 True False False 3h54m machine-config 4.12.0-0.nightly-2022-09-28-204419 True False False 3h52m monitoring 4.12.0-0.nightly-2022-09-28-204419 True False False 3h44m network 4.12.0-0.nightly-2022-09-28-204419 True False False 3h55m node-tuning 4.12.0-0.nightly-2022-09-28-204419 True False False 113m openshift-apiserver 4.12.0-0.nightly-2022-09-28-204419 True False False 3h48m openshift-controller-manager 4.12.0-0.nightly-2022-09-28-204419 True False False 113m openshift-samples 4.12.0-0.nightly-2022-09-28-204419 True False False 116m operator-lifecycle-manager 4.12.0-0.nightly-2022-09-28-204419 True False False 3h54m operator-lifecycle-manager-catalog 4.12.0-0.nightly-2022-09-28-204419 True False False 3h54m operator-lifecycle-manager-packageserver 4.12.0-0.nightly-2022-09-28-204419 True False False 3h48m service-ca 4.12.0-0.nightly-2022-09-28-204419 True False False 3h54m storage 4.12.0-0.nightly-2022-09-28-204419 True False False 3h54m
Version-Release number of selected component (if applicable):
4.12.0-0.nightly-2022-09-28-204419
How reproducible:
Always
Steps to Reproduce:
1. Install a 4.11 cluster with only openshift-samples enabled
2. Upgrade to 4.12
3.
Actual results:
The optional COs introduced in 4.12 (console and insights) are not upgraded to 4.12.
Expected results:
All the installed COs get upgraded
Additional info:
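For reference, the cluster in this report was installed with a capability selection like the following install-config.yaml fragment, reconstructed from the ClusterVersion spec shown above:

# fragment of install-config.yaml
capabilities:
  baselineCapabilitySet: None
  additionalEnabledCapabilities:
  - openshift-samples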
Description of problem:
For OVNK to become CNCF compliant, we need to support the session affinity timeout feature and enable the e2e tests on the OpenShift side. This bug tracks the effort to get this into OCP 4.12.
Version-Release number of selected component (if applicable):
How reproducible:
Steps to Reproduce:
1. 2. 3.
Actual results:
Expected results:
Additional info:
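For context, the session affinity timeout being tracked here is the standard Kubernetes Service field; a minimal sketch of a Service using it (names and values illustrative):

apiVersion: v1
kind: Service
metadata:
  name: affinity-example
spec:
  selector:
    app: example
  ports:
  - port: 80
    targetPort: 8080
  sessionAffinity: ClientIP
  sessionAffinityConfig:
    clientIP:
      timeoutSeconds: 10   # OVN-Kubernetes must honor this value for conformance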
This is a clone of issue OCPBUGS-4913. The following is the description of the original issue:
—
Description of problem:
Currently the Terraform code waits for 45 seconds, but anecdotal data suggest we should actually wait for 3 minutes in order to avoid "failures" due to occasional slow boots of a new VM in PowerVS.
Version-Release number of selected component (if applicable):
How reproducible:
often enough
Steps to Reproduce:
1. Run the IPI installer against PowerVS.
2. Look for "empty tuple" in the error message when it fails to reach `bootstrap-complete`.
3.
Actual results:
Expected results:
VMs to always have IP address assigned by DHCP after a certain wait
Additional info:
The change has already been merged into master/4.13, but 4.12 also needs this for planned PowerVS IPI GA on the z-stream.
Description of problem:
The SQL-based index image created by an old opm version fails to run in 4.12 even after adding the `privileged` permission to the namespace.
MacBook-Pro:~ jianzhang$ oc get pods NAME READY STATUS RESTARTS AGE jian-operators-4g5ln 0/1 CrashLoopBackOff 1 (2s ago) 11s MacBook-Pro:~ jianzhang$ oc logs jian-operators-4g5ln Error: open /etc/nsswitch.conf: permission denied
PS: the SQL-based index created by the new opm version doesn't have this issue.
opm version Version: version.Version{OpmVersion:"e41024eb3", GitCommit:"e41024eb37c721bc43e8b3df226dd30c0589aee7", BuildDate:"2022-08-16T01:50:17Z", GoOs:"darwin", GoArch:"amd64"}
Version-Release number of selected component (if applicable):
OCP 4.12
MacBook-Pro:~ jianzhang$ oc get clusterversion NAME VERSION AVAILABLE PROGRESSING SINCE STATUS version 4.12.0-0.nightly-2022-08-15-150248 True False 3h25m Cluster version is 4.12.0-0.nightly-2022-08-15-150248
How reproducible:
always
Steps to Reproduce:
1. Deploy OCP 4.12
2. Deploy a CatalogSource in the `openshift-marketplace` namespace.
MacBook-Pro:~ jianzhang$ oc get ns openshift-marketplace -o yaml apiVersion: v1 kind: Namespace metadata: annotations: capability.openshift.io/name: marketplace include.release.openshift.io/ibm-cloud-managed: "true" include.release.openshift.io/self-managed-high-availability: "true" include.release.openshift.io/single-node-developer: "true" openshift.io/node-selector: "" openshift.io/sa.scc.mcs: s0:c16,c10 openshift.io/sa.scc.supplemental-groups: 1000260000/10000 openshift.io/sa.scc.uid-range: 1000260000/10000 workload.openshift.io/allowed: management creationTimestamp: "2022-08-15T23:15:27Z" labels: kubernetes.io/metadata.name: openshift-marketplace olm.operatorgroup.uid/1b776321-2714-4c1f-95ba-2ddff49c4efe: "" openshift.io/cluster-monitoring: "true" pod-security.kubernetes.io/audit: baseline pod-security.kubernetes.io/enforce: baseline pod-security.kubernetes.io/warn: baseline name: openshift-marketplace ownerReferences: - apiVersion: config.openshift.io/v1 kind: ClusterVersion name: version uid: cd81594b-4f6c-46d6-9369-75deef542ec8 resourceVersion: "8617" uid: 1c35352e-3636-4f2b-a3b1-c84ebc6681e0 spec: finalizers: - kubernetes status: phase: Active
3. Check the CatalogSource pod status; it has crashed.
MacBook-Pro:~ jianzhang$ oc get catalogsource -n openshift-marketplace jian-operators -o yaml apiVersion: operators.coreos.com/v1alpha1 kind: CatalogSource metadata: creationTimestamp: "2022-08-16T02:24:20Z" generation: 1 name: jian-operators namespace: openshift-marketplace resourceVersion: "106145" uid: 6a75ecc9-7b88-4411-bcf5-e34618f9b3cd spec: displayName: Jian Operators image: quay.io/olmqe/etcd-index:v1 priority: -100 publisher: Jian sourceType: grpc updateStrategy: registryPoll: interval: 10m0s status: connectionState: address: jian-operators.openshift-marketplace.svc:50051 lastConnect: "2022-08-16T03:12:28Z" lastObservedState: TRANSIENT_FAILURE latestImageRegistryPoll: "2022-08-16T02:34:21Z" registryService: createdAt: "2022-08-16T02:24:20Z" port: "50051" protocol: grpc serviceName: jian-operators serviceNamespace: openshift-marketplace MacBook-Pro:~ jianzhang$ oc get pods -n openshift-marketplace NAME READY STATUS RESTARTS AGE 28bb83ea022e9728d25570ab0adbe09a31d6a0a606917488e0ddb00f925mnfw 0/1 Completed 0 3h23m 7049ea48beb27a712fa506b76ad672be201ce5d3a6a93d627a0091e0fesvdlj 0/1 Completed 0 3h23m certified-operators-ftt2n 1/1 Running 0 3h49m community-operators-27dx9 1/1 Running 0 3h49m jian-operators-5zq7d 0/1 CrashLoopBackOff 12 (71s ago) 38m jian-operators-gpg4v 0/1 CrashLoopBackOff 14 (57s ago) 48m marketplace-operator-9c8496b58-2jfmv 1/1 Running 0 3h56m qe-app-registry-rqrrv 1/1 Running 0 141m redhat-marketplace-s6zrj 1/1 Running 0 3h49m redhat-operators-54cqr 1/1 Running 0 3h49m MacBook-Pro:~ jianzhang$ oc -n openshift-marketplace logs jian-operators-gpg4v Error: open /etc/nsswitch.conf: permission denied Usage: opm registry serve [flags] Flags: -d, --database string relative path to sqlite db (default "bundles.db") --debug enable debug logging -h, --help help for serve -p, --port string port number to serve on (default "50051") --skip-migrate do not attempt to migrate to the latest db revision when starting -t, --termination-log string path to a container termination log file (default "/dev/termination-log") --timeout-seconds string Timeout in seconds. This flag will be removed later. (default "infinite") Global Flags: --skip-tls skip TLS certificate verification for container image registries while pulling bundles or index
4. Create a namespace with the `privileged` permission.
MacBook-Pro:~ jianzhang$ oc get ns debug -o yaml apiVersion: v1 kind: Namespace metadata: annotations: openshift.io/sa.scc.mcs: s0:c30,c10 openshift.io/sa.scc.supplemental-groups: 1000890000/10000 openshift.io/sa.scc.uid-range: 1000890000/10000 creationTimestamp: "2022-08-16T02:46:41Z" labels: kubernetes.io/metadata.name: debug pod-security.kubernetes.io/audit: privileged pod-security.kubernetes.io/enforce: privileged pod-security.kubernetes.io/warn: privileged security.openshift.io/scc.podSecurityLabelSync: "false" name: debug resourceVersion: "95718" uid: bdf93839-6c42-4365-a65c-d9c0b9fe0504 spec: finalizers: - kubernetes status: phase: Active
5. Deploy a CatalogSource as in step 2 above. It still crashes.
MacBook-Pro:~ jianzhang$ oc get pods -n debug NAME READY STATUS RESTARTS AGE jian-operators-4g5ln 0/1 CrashLoopBackOff 10 (114s ago) 28m jian-operators-wn766 0/1 CrashLoopBackOff 8 (2m25s ago) 18m MacBook-Pro:~ jianzhang$ oc -n debug logs jian-operators-wn766 Error: open /etc/nsswitch.conf: permission denied Usage: opm registry serve [flags] Flags: -d, --database string relative path to sqlite db (default "bundles.db") --debug enable debug logging -h, --help help for serve -p, --port string port number to serve on (default "50051") --skip-migrate do not attempt to migrate to the latest db revision when starting -t, --termination-log string path to a container termination log file (default "/dev/termination-log") --timeout-seconds string Timeout in seconds. This flag will be removed later. (default "infinite") Global Flags: --skip-tls skip TLS certificate verification for container image registries while pulling bundles or index
Actual results:
The sql-based index image created by the old opm version cannot be run.
MacBook-Pro:~ jianzhang$ oc -n debug logs jian-operators-wn766 Error: open /etc/nsswitch.conf: permission denied
Expected results:
The old SQL-based index image runs well, or we have a workaround for it.
Additional info:
I tried another old SQL-based index image and got a different permission issue.
MacBook-Pro:~ jianzhang$ oc get catalogsource NAME DISPLAY TYPE PUBLISHER AGE jian-operators Jian Operators grpc Jian 37m xia-operators Xia Operators grpc Xia 101s MacBook-Pro:~ jianzhang$ oc get catalogsource xia-operators -o yaml apiVersion: operators.coreos.com/v1alpha1 kind: CatalogSource metadata: creationTimestamp: "2022-08-16T03:22:38Z" generation: 1 name: xia-operators namespace: debug resourceVersion: "110629" uid: 8be42e68-43be-4fd4-9b67-c74edc5e6353 spec: displayName: Xia Operators image: quay.io/olmqe/ditto-index:test-xzha-1 priority: -100 publisher: Xia sourceType: grpc updateStrategy: registryPoll: interval: 10m0s status: connectionState: address: xia-operators.debug.svc:50051 lastConnect: "2022-08-16T03:24:18Z" lastObservedState: CONNECTING registryService: createdAt: "2022-08-16T03:22:38Z" port: "50051" protocol: grpc serviceName: xia-operators serviceNamespace: debug MacBook-Pro:~ jianzhang$ oc project Using project "debug" on server "https://api.qe-daily-412-0816.ibmcloud.qe.devcluster.openshift.com:6443". MacBook-Pro:~ jianzhang$ oc get pods NAME READY STATUS RESTARTS AGE jian-operators-4g5ln 0/1 CrashLoopBackOff 11 (3m41s ago) 35m jian-operators-wn766 0/1 CrashLoopBackOff 9 (4m13s ago) 25m xia-operators-6wgjt 0/1 CrashLoopBackOff 1 (8s ago) 13s MacBook-Pro:~ jianzhang$ oc logs xia-operators-6wgjt time="2022-08-16T03:22:43Z" level=warning msg="\x1b[1;33mDEPRECATION NOTICE:\nSqlite-based catalogs and their related subcommands are deprecated. Support for\nthem will be removed in a future release. Please migrate your catalog workflows\nto the new file-based catalog format.\x1b[0m" Error: open ./db-609956243: permission denied Usage: opm registry serve [flags] Flags: -d, --database string relative path to sqlite db (default "bundles.db") --debug enable debug logging
Even though that namespace is `privileged`.
MacBook-Pro:~ jianzhang$ oc get ns debug -o yaml apiVersion: v1 kind: Namespace metadata: annotations: openshift.io/sa.scc.mcs: s0:c30,c10 openshift.io/sa.scc.supplemental-groups: 1000890000/10000 openshift.io/sa.scc.uid-range: 1000890000/10000 creationTimestamp: "2022-08-16T02:46:41Z" labels: kubernetes.io/metadata.name: debug pod-security.kubernetes.io/audit: privileged pod-security.kubernetes.io/enforce: privileged pod-security.kubernetes.io/warn: privileged security.openshift.io/scc.podSecurityLabelSync: "false" name: debug resourceVersion: "95718" uid: bdf93839-6c42-4365-a65c-d9c0b9fe0504 spec: finalizers: - kubernetes status: phase: Active
But both of them work well in the 4.11 cluster, as follows:
MacBook-Pro:~ jianzhang$ oc get clusterversion
NAME      VERSION                              AVAILABLE   PROGRESSING   SINCE   STATUS
version   4.11.0-0.nightly-2022-08-15-152346   True        False         91m     Cluster version is 4.11.0-0.nightly-2022-08-15-152346

MacBook-Pro:~ jianzhang$ oc get catalogsource
NAME                  DISPLAY               TYPE   PUBLISHER   AGE
certified-operators   Certified Operators   grpc   Red Hat     106m
community-operators   Community Operators   grpc   Red Hat     106m
jian-operators        Jian Operators        grpc   Jian        48m
redhat-marketplace    Red Hat Marketplace   grpc   Red Hat     106m
redhat-operators      Red Hat Operators     grpc   Red Hat     106m
xia-operators         Xia Operators         grpc   Xia         6s

MacBook-Pro:~ jianzhang$ oc get pods
NAME                                   READY   STATUS    RESTARTS   AGE
certified-operators-fsjc8              1/1     Running   0          107m
community-operators-9qvzt              1/1     Running   0          107m
jian-operators-n5s8c                   1/1     Running   0          48m
marketplace-operator-7b777f747-22rwq   1/1     Running   0          109m
redhat-marketplace-2mgrl               1/1     Running   0          107m
redhat-operators-72q6z                 1/1     Running   0          107m
xia-operators-ngq86                    1/1     Running   0          23s

MacBook-Pro:~ jianzhang$ oc get catalogsource jian-operators -o yaml
apiVersion: operators.coreos.com/v1alpha1
kind: CatalogSource
metadata:
  creationTimestamp: "2022-08-16T02:39:52Z"
  generation: 1
  name: jian-operators
  namespace: openshift-marketplace
  resourceVersion: "58565"
  uid: 481a6fbe-00a5-4af5-86f7-d7413c658db3
spec:
  displayName: Jian Operators
  image: quay.io/olmqe/etcd-index:v1
  priority: -100
  publisher: Jian
  sourceType: grpc
  updateStrategy:
    registryPoll:
      interval: 10m0s
status:
  connectionState:
    address: jian-operators.openshift-marketplace.svc:50051
    lastConnect: "2022-08-16T02:44:45Z"
    lastObservedState: READY
  latestImageRegistryPoll: "2022-08-16T03:24:54Z"
  registryService:
    createdAt: "2022-08-16T02:39:52Z"
    port: "50051"
    protocol: grpc
    serviceName: jian-operators
    serviceNamespace: openshift-marketplace

MacBook-Pro:~ jianzhang$ oc get catalogsource xia-operators -o yaml
apiVersion: operators.coreos.com/v1alpha1
kind: CatalogSource
metadata:
  creationTimestamp: "2022-08-16T03:28:07Z"
  generation: 1
  name: xia-operators
  namespace: openshift-marketplace
  resourceVersion: "59886"
  uid: a270f665-ee0b-49a5-badb-d3394c7a9344
spec:
  displayName: Xia Operators
  image: quay.io/olmqe/ditto-index:test-xzha-1
  priority: -100
  publisher: Xia
  sourceType: grpc
  updateStrategy:
    registryPoll:
      interval: 10m0s
status:
  connectionState:
    address: xia-operators.openshift-marketplace.svc:50051
    lastConnect: "2022-08-16T03:28:27Z"
    lastObservedState: READY
  registryService:
    createdAt: "2022-08-16T03:28:07Z"
    port: "50051"
    protocol: grpc
    serviceName: xia-operators
    serviceNamespace: openshift-marketplace

MacBook-Pro:~ jianzhang$ oc get ns openshift-marketplace -o yaml
apiVersion: v1
kind: Namespace
metadata:
  annotations:
    capability.openshift.io/name: marketplace
    include.release.openshift.io/ibm-cloud-managed: "true"
    include.release.openshift.io/self-managed-high-availability: "true"
    include.release.openshift.io/single-node-developer: "true"
    openshift.io/node-selector: ""
    openshift.io/sa.scc.mcs: s0:c16,c5
    openshift.io/sa.scc.supplemental-groups: 1000250000/10000
    openshift.io/sa.scc.uid-range: 1000250000/10000
    workload.openshift.io/allowed: management
  creationTimestamp: "2022-08-16T01:38:10Z"
  labels:
    kubernetes.io/metadata.name: openshift-marketplace
    olm.operatorgroup.uid/24dae571-2843-445b-b09f-5a4631cb25ba: ""
    openshift.io/cluster-monitoring: "true"
    pod-security.kubernetes.io/audit: baseline
    pod-security.kubernetes.io/warn: baseline
  name: openshift-marketplace
  ownerReferences:
  - apiVersion: config.openshift.io/v1
    kind: ClusterVersion
    name: version
    uid: 470d072e-37d9-4203-bc5a-c675800d593c
  resourceVersion: "6981"
  uid: 554a5ceb-8343-46f4-ae69-af36ee45d7fe
spec:
  finalizers:
  - kubernetes
status:
  phase: Active
Description of problem:
pkg/devfile/sample_test.go fails after devfile registry was updated (https://github.com/devfile/registry/pull/126)
This issue is about updating our assertion so that the CI job runs successfully again. We might want to backport this as well.
OCPBUGS-1678 is about updating the code that the test should use a mock response instead of the latest registry content OR check some specific attributes instead of comparing the full JSON response.
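A minimal sketch of the mock-response approach mentioned above, using Go's httptest package. The test name, endpoint, and payload are illustrative assumptions, not the console repo's actual test code; in the real test the code under test would be pointed at srv.URL instead of the public devfile registry.
~~~
package devfile

import (
	"io"
	"net/http"
	"net/http/httptest"
	"testing"
)

func TestGetRegistrySamplesWithMock(t *testing.T) {
	// Canned response standing in for the live registry index JSON.
	const mockIndex = `[{"name":"nodejs-basic","displayName":"Basic Node.js"}]`

	srv := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		w.Header().Set("Content-Type", "application/json")
		io.WriteString(w, mockIndex)
	}))
	defer srv.Close()

	// Fetch the mock index directly to keep the sketch self-contained.
	resp, err := http.Get(srv.URL + "/index")
	if err != nil {
		t.Fatalf("unexpected error: %v", err)
	}
	defer resp.Body.Close()

	body, _ := io.ReadAll(resp.Body)
	if string(body) != mockIndex {
		t.Errorf("unexpected index content: %s", body)
	}
}
~~~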
Version-Release number of selected component (if applicable):
4.12
How reproducible:
Always
Steps to Reproduce:
1. Clone openshift/console
2. Run ./test-backend.sh
Actual results:
Unit tests fail
Expected results:
Unit tests should pass again
Additional info:
Description of problem:
The icon color of Alerts in the Topology list view should be based on alert type.
Version-Release number of selected component (if applicable):
How reproducible:
Steps to Reproduce:
1. Create a deployment
2. Create a resource quota so that the quota alert will be visible on the topology list page
3. Navigate to the topology list page
Actual results:
Alert icon color is black and white. See the screenshots
Expected results:
Alert icon color should be based on alert type.
Additional info:
This is a clone of issue OCPBUGS-3195. The following is the description of the original issue:
—
Description of problem:
The service CA controller's start function seems to return an error as soon as its context is cancelled (which appears to happen the moment the first signal is received): https://github.com/openshift/service-ca-operator/blob/42088528ef8a6a4b8c99b0f558246b8025584056/pkg/controller/starter.go#L24
That apparently triggers os.Exit(1) immediately: https://github.com/openshift/service-ca-operator/blob/42088528ef8a6a4b8c99b0f55824[…]om/openshift/library-go/pkg/controller/controllercmd/builder.go
The lock release doesn't happen until the periodic renew tick breaks out: https://github.com/openshift/service-ca-operator/blob/42088528ef8a6a4b8c99b0f55824[…]/vendor/k8s.io/client-go/tools/leaderelection/leaderelection.go
It seems unlikely that the call to le.release() would be reached before the call to os.Exit(1) in the other goroutine.
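As an illustration of one possible fix direction (not the service-ca-operator's actual code), client-go's leader election can be configured with ReleaseOnCancel so the lease is given up during graceful shutdown instead of waiting for expiry. The namespace and lock name below mirror the logs in this report; everything else is an assumption for the sketch.
~~~
package main

import (
	"context"
	"os"
	"os/signal"
	"syscall"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
	"k8s.io/client-go/tools/leaderelection"
	"k8s.io/client-go/tools/leaderelection/resourcelock"
	"k8s.io/klog/v2"
)

func main() {
	cfg, err := rest.InClusterConfig()
	if err != nil {
		klog.Fatal(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	// Cancel the context on SIGTERM/SIGINT; with ReleaseOnCancel set below,
	// leader election then releases the lock before the process exits.
	ctx, cancel := signal.NotifyContext(context.Background(), syscall.SIGTERM, os.Interrupt)
	defer cancel()

	lock := &resourcelock.LeaseLock{
		LeaseMeta: metav1.ObjectMeta{
			Namespace: "openshift-service-ca",
			Name:      "service-ca-controller-lock",
		},
		Client:     client.CoordinationV1(),
		LockConfig: resourcelock.ResourceLockConfig{Identity: os.Getenv("POD_NAME")},
	}

	leaderelection.RunOrDie(ctx, leaderelection.LeaderElectionConfig{
		Lock:            lock,
		ReleaseOnCancel: true, // give the lease up on graceful shutdown
		LeaseDuration:   137 * time.Second,
		RenewDeadline:   107 * time.Second,
		RetryPeriod:     26 * time.Second,
		Callbacks: leaderelection.LeaderCallbacks{
			OnStartedLeading: func(ctx context.Context) {
				// Controllers would run here until ctx is done.
				<-ctx.Done()
			},
			OnStoppedLeading: func() {
				klog.Info("leadership lost; lease released, exiting")
			},
		},
	})
}
~~~
With the lease released on cancel, a replacement pod can acquire the lock right away instead of waiting out the old lease.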
Version-Release number of selected component (if applicable):
4.13.0
How reproducible:
~always
Steps to Reproduce:
1. oc delete -n openshift-service-ca pod <service-ca pod>
Actual results:
the old pod logs show:
W1103 09:59:14.370594 1 builder.go:106] graceful termination failed, controllers failed with error: stopped
and when a new pod comes up to replace it, it has to wait for a while before acquiring the leader lock
I1103 16:46:00.166173 1 leaderelection.go:248] attempting to acquire leader lease openshift-service-ca/service-ca-controller-lock...
.... waiting ....
I1103 16:48:30.004187 1 leaderelection.go:258] successfully acquired lease openshift-service-ca/service-ca-controller-lock
Expected results:
new pod can acquire the leader lease without waiting for the old pod's lease to expire
Additional info:
Description of problem:
IPI installation failed with master nodes being NotReady and CCM error "alicloud: unable to split instanceid and region from providerID".
Version-Release number of selected component (if applicable):
4.12.0-0.nightly-2022-10-05-053337
How reproducible:
Always
Steps to Reproduce:
1. Try IPI installation on Alibaba Cloud, with credentialsMode set to "Manual"
Actual results:
Installation failed.
Expected results:
Installation should succeed.
Additional info:
$ oc get clusterversion
NAME      VERSION   AVAILABLE   PROGRESSING   SINCE   STATUS
version             False       True          34m     Unable to apply 4.12.0-0.nightly-2022-10-05-053337: an unknown error has occurred: MultipleErrors

$ oc get nodes
NAME                           STATUS     ROLES                  AGE   VERSION
jiwei-1012-02-9jkj4-master-0   NotReady   control-plane,master   30m   v1.25.0+3ef6ef3
jiwei-1012-02-9jkj4-master-1   NotReady   control-plane,master   30m   v1.25.0+3ef6ef3
jiwei-1012-02-9jkj4-master-2   NotReady   control-plane,master   30m   v1.25.0+3ef6ef3

CCM logs:
E1012 03:46:45.223137 1 node_controller.go:147] node-controller "msg"="fail to find ecs" "error"="cloud instance api fail, alicloud: unable to split instanceid and region from providerID, error unexpected providerID=" "providerId"="alicloud://"
E1012 03:46:45.223174 1 controller.go:317] controller/node-controller "msg"="Reconciler error" "error"="find ecs: cloud instance api fail, alicloud: unable to split instanceid and region from providerID, error unexpected providerID=" "name"="jiwei-1012-02-9jkj4-master-0" "namespace"=""

https://mastern-jenkins-csb-openshift-qe.apps.ocp-c1.prod.psi.redhat.com/job/ocp-common/job/Flexy-install/145768/ (Finished: FAILURE)
10-12 10:55:15.987 ./openshift-install 4.12.0-0.nightly-2022-10-05-053337
10-12 10:55:15.987 built from commit 84aa8222b622dee71185a45f1e0ba038232b114a
10-12 10:55:15.987 release image registry.ci.openshift.org/ocp/release@sha256:41fe173061b00caebb16e2fd11bac19980d569cd933fdb4fab8351cdda14d58e
10-12 10:55:15.987 release architecture amd64

FYI the installation could succeed with 4.12.0-0.nightly-2022-09-28-204419:
https://mastern-jenkins-csb-openshift-qe.apps.ocp-c1.prod.psi.redhat.com/job/ocp-common/job/Flexy-install/145756/ (Finished: SUCCESS)
10-12 09:59:19.914 ./openshift-install 4.12.0-0.nightly-2022-09-28-204419
10-12 09:59:19.914 built from commit 9eb0224926982cdd6cae53b872326292133e532d
10-12 09:59:19.914 release image registry.ci.openshift.org/ocp/release@sha256:2c8e617830f84ac1ee1bfcc3581010dec4ae5d9cad7a54271574e8d91ef5ecbc
10-12 09:59:19.914 release architecture amd64
Not all information provided in the install-config gets passed through to assisted-service.
An example is that platform settings other than the VIPs are ignored. So are the "capabilities". There may be others - we need to do a thorough audit.
If the user supplies data that we then ignore, we should log a warning. However, we must not return an error, because this may prevent people using their existing install-configs with the agent install method.
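A hedged sketch of the warn-but-don't-fail behaviour described above. The type and field names are made-up, trimmed stand-ins, not the installer's real structures.
~~~
package agent

import "github.com/sirupsen/logrus"

// installConfig and its fields are illustrative stand-ins for the real
// installer types; they exist only to show the warn-don't-fail pattern.
type installConfig struct {
	Capabilities   *capabilities
	PlatformExtras map[string]string // platform settings other than the VIPs
}

type capabilities struct {
	BaselineCapabilitySet string
}

// warnUnusedFields logs a warning for settings the agent install method
// currently ignores, but never returns an error, so existing install-configs
// keep working.
func warnUnusedFields(ic *installConfig) {
	if ic.Capabilities != nil {
		logrus.Warn("install-config capabilities are ignored by the agent installer")
	}
	for key := range ic.PlatformExtras {
		logrus.Warnf("install-config platform setting %q is ignored by the agent installer", key)
	}
}
~~~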
Description of problem:
The Console Operator has a suite of tests responsible for ensuring that the Console can successfully interact with Operators managed by OLM. The operator-hub.spec test references an operator that is no longer present in the 4.12 certified operators catalog source: https://github.com/openshift/console/blob/master/frontend/packages/operator-lifecycle-manager/integration-tests-cypress/tests/operator-hub.spec.ts#L64
OLM is unable to set the default catalog sources to the 4.12 image tag until the test is updated to reference an operator present in both the 4.11 and 4.12 images of the certified operators catalog source.
Version-Release number of selected component (if applicable):4.12
How reproducible: always
Steps to Reproduce:
1. Update the certified operators catalogSource images to the 4.12 tag
2. Attempt to run the operator-hub.spec test suite
Actual results:
The test fails
Expected results:
The test passes
Additional info:
This is a clone of issue OCPBUGS-4089. The following is the description of the original issue:
—
The kube-state-metrics pod inside the openshift-monitoring namespace is not running as expected.
On checking the logs I am able to see that there is a memory panic
~~~
2022-11-22T09:57:17.901790234Z I1122 09:57:17.901768 1 main.go:199] Starting kube-state-metrics self metrics server: 127.0.0.1:8082
2022-11-22T09:57:17.901975837Z I1122 09:57:17.901951 1 main.go:66] levelinfomsgTLS is disabled.http2false
2022-11-22T09:57:17.902389844Z I1122 09:57:17.902291 1 main.go:210] Starting metrics server: 127.0.0.1:8081
2022-11-22T09:57:17.903191857Z I1122 09:57:17.903133 1 main.go:66] levelinfomsgTLS is disabled.http2false
2022-11-22T09:57:17.906272505Z I1122 09:57:17.906224 1 builder.go:191] Active resources: certificatesigningrequests,configmaps,cronjobs,daemonsets,deployments,endpoints,horizontalpodautoscalers,ingresses,jobs,leases,limitranges,mutatingwebhookconfigurations,namespaces,networkpolicies,nodes,persistentvolumeclaims,persistentvolumes,poddisruptionbudgets,pods,replicasets,replicationcontrollers,resourcequotas,secrets,services,statefulsets,storageclasses,validatingwebhookconfigurations,volumeattachments
2022-11-22T09:57:17.917758187Z E1122 09:57:17.917560 1 runtime.go:78] Observed a panic: "invalid memory address or nil pointer dereference" (runtime error: invalid memory address or nil pointer dereference)
2022-11-22T09:57:17.917758187Z goroutine 24 [running]:
2022-11-22T09:57:17.917758187Z k8s.io/apimachinery/pkg/util/runtime.logPanic(
)
2022-11-22T09:57:17.917758187Z /usr/lib/golang/src/runtime/panic.go:1038 +0x215
2022-11-22T09:57:17.917758187Z k8s.io/kube-state-metrics/v2/internal/store.ingressMetricFamilies.func6(0x40)
2022-11-22T09:57:17.917758187Z /go/src/k8s.io/kube-state-metrics/internal/store/ingress.go:136 +0x189
2022-11-22T09:57:17.917758187Z k8s.io/kube-state-metrics/v2/internal/store.wrapIngressFunc.func1(
)
2022-11-22T09:57:17.917758187Z /go/src/k8s.io/kube-state-metrics/pkg/metric_generator/generator.go:107 +0xd8
~~~
Logs are attached to the support case
https://github.com/openshift/api/pull/1213 and https://github.com/openshift/api/pull/1202 have been merged, but the latest 4.12 OCP clusters do not show the changes.
According to https://github.com/openshift/console-operator/blob/bd2a7c9077ccf214dd8a725a7660e86d96e045b0/Dockerfile.rhel7#L18-L23, we need to vendor openshift/api in the console-operator repo so that the latest manifests get applied.
Description of problem:
A normal user cannot open the debug container for pods (in CrashLoopBackOff) that they created, and instead gets the error message: pods "<pod name>" is forbidden: cannot set blockOwnerDeletion if an ownerReference refers to a resource you can't set finalizers on: , <nil>
Version-Release number of selected component (if applicable):
4.12.0-0.nightly-2022-09-20-040107, 4.11.z, 4.10.z
How reproducible:
Always
Steps to Reproduce:
1. Log in to OCP as a normal user, e.g. via flexy-htpasswd-provider
2. Create a project, go to the Developer perspective -> +Add page
3. Click "Import from Git" and provide the data below to get a pod in CrashLoopBackOff state
   Git Repo URL: https://github.com/sclorg/nodejs-ex.git
   Name: nodejs-ex-git
   Run command: star a wktw
4. Navigate to the /k8s/ns/<project name>/pods page, find the pod with CrashLoopBackOff status, and go to its details page -> Logs tab
5. Click the "Debug container" link
6. Check whether the debug container can be opened
Actual results:
6. An error message is shown on the page and the user cannot open the debug container via the UI: pods "nodejs-ex-git-6dd986d8bd-9h2wj-debug-tkqk2" is forbidden: cannot set blockOwnerDeletion if an ownerReference refers to a resource you can't set finalizers on: , <nil>
Expected results:
6. A normal user should be able to use the debug container without any error message
Additional info:
The debug container can be created successfully for the normal user via the command line: $ oc debug <crashloopbackoff pod name> -n <project name>
With the CSISnapshot capability disabled, all CSI driver operators are Degraded. For example, the AWS EBS CSI driver operator during installation:
18:12:16.895: Some cluster operators are not ready: storage (Degraded=True AWSEBSCSIDriverOperatorCR_AWSEBSDriverStaticResourcesController_SyncError: AWSEBSCSIDriverOperatorCRDegraded: AWSEBSDriverStaticResourcesControllerDegraded: "volumesnapshotclass.yaml" (string): the server could not find the requested resource AWSEBSCSIDriverOperatorCRDegraded: AWSEBSDriverStaticResourcesControllerDegraded: ) Ginkgo exit error 1: exit with code 1}
Version-Release number of selected component (if applicable):
4.12.nightly
The reason is that cluster-csi-snapshot-controller-operator does not create the VolumeSnapshotClass CRD, which the AWS EBS CSI driver operator expects to exist.
CSI driver operators must skip VolumeSnapshotClass creation if the CRD does not exist.
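A minimal sketch of such a gate, assuming a lookup against the apiextensions API; this is not the actual operator code, just one way to check whether the CRD is present before syncing volumesnapshotclass.yaml.
~~~
package storage

import (
	"context"

	apiextensionsclient "k8s.io/apiextensions-apiserver/pkg/client/clientset/clientset"
	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// volumeSnapshotClassCRDExists reports whether the VolumeSnapshotClass CRD is
// installed. When it is not (CSISnapshot capability disabled), callers should
// skip creating the VolumeSnapshotClass instead of going Degraded.
func volumeSnapshotClassCRDExists(ctx context.Context, client apiextensionsclient.Interface) (bool, error) {
	_, err := client.ApiextensionsV1().CustomResourceDefinitions().Get(
		ctx, "volumesnapshotclasses.snapshot.storage.k8s.io", metav1.GetOptions{})
	if apierrors.IsNotFound(err) {
		return false, nil
	}
	if err != nil {
		return false, err
	}
	return true, nil
}
~~~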
Description of problem:
OCP v4.9.31 cluster didn't have the $search domain in /etc/resolv.conf, which was there in the v4.8.29 OCP cluster. This was observed in all the nodes of the v4.9.31 cluster.
~~~
OpenShift 4.9.31
sh-4.4# cat /etc/resolv.conf
OpenShift 4.8.29
ENV: OpenStack IAD2, IPI installation. Connected cluster.
Version-Release number of selected component (if applicable):
OCP v4.9.31
How reproducible:
Always
Steps to Reproduce:
1. Install IPI cluster on OpenStack IAD2 platform having cluster version 4.9.31
2. Debug to any of the node(master/worker)
3. Check and confirm the missing search domain on all nodes of the cluster.
Actual results:
The search domain was missing when checked in the `/etc/resolv.conf` file on all nodes of the cluster, causing serious issues in the cluster.
Expected results:
The installer should embed the search domain in /etc/resolv.conf file on all nodes of the cluster.
Additional info:
set -eo pipefail
DISPATCHER_FILE="/etc/NetworkManager/dispatcher.d/30-resolv-prepender"
DOMAINS="$(grep -E '\s*DOMAINS=.*iad2.dc.paas.redhat.com' $DISPATCHER_FILE \
grep -oE '[a-z0-9]*.dev.iad2.dc.paas.redhat.com' \ |
tr '\n' ' ')" |
>&2 echo "IT-PaaS: overwriting search domains in /etc/resolv.conf with: $DOMAINS"
sed -e "/^search/d" \
-e "/Generated by/c# Generated by KNI resolv prepender NM dispatcher script \nsearch $DOMAINS" \
/etc/resolv.conf > /etc/resolv.tmp
mv /etc/resolv.tmp /etc/resolv.conf
~~~
Description of problem:
OCPBUGS-3499 and OCPBUGS-3501 both require a more recent version of openshift/library-go containing the shared validation and host-assignment logic.
Description of problem:
When opening the Devfile samples in the developer catalog, switching the project in another browser tab, and then opening a Devfile sample link in a new tab, the current project context gets lost.
Version-Release number of selected component (if applicable):
4.12, expecting that this happens also in older versions
How reproducible:
Always
Steps to Reproduce:
1. Switch to the developer perspective, navigate to Add > Samples
2. Open a new browser tab and create a new project
3. Ctrl+click a sample in the first tab.
Actual results:
The project has also changed in the "Import sample" page
Expected results:
The originally selected project should also be used for the new "Import sample" page
Additional info:
We had this issue earlier for other catalog entries. Other samples already work fine; only the Devfile sample links don't contain the current namespace.
In the Known Issues section of the OpenStack-specific Installer docs, there is a point about control plane anti-affinity.
The known issue has several problems:
This is a clone of issue OCPBUGS-1427. The following is the description of the original issue:
—
Description of problem:
The jump looks the worst on GCP, but looking closer, Azure and AWS both jumped as well, just not as high.
Disruption data indicates that the image registry on GCP was averaging around 30-40 seconds of disruption during an upgrade, until Aug 27th when it jumped to 125-135 seconds and has remained there ever since.
We see similar spikes in ingress-to-console and ingress-to-oauth. NOTE: image registry backend is also behind ingress, so all three are ingress related disruption.
https://datastudio.google.com/s/uBC4zuBFdTE
These charts show the problem on Aug 27 for registry, ingress to console, and ingress to oauth.
sdn network type appears unaffected.
Something merged Aug 26-27 that caused a significant change for anything behind ingress using ovn on gcp.
Description of the problem:
I installed a cluster with OCS and CNV.
The issue is that the cluster events contain repeated messages:
1/9/2022, 6:17:31 PM  Operator ocs status: available message: install strategy completed with no errors
1/9/2022, 6:17:30 PM  Operator lso status: available message: install strategy completed with no errors
1/9/2022, 6:17:30 PM  Operator cnv status: available message: install strategy completed with no errors
1/9/2022, 6:17:06 PM  Successfully completed installing cluster
1/9/2022, 6:17:06 PM  Updated status of the cluster to installed
1/9/2022, 6:17:01 PM  Operator ocs status: available message: install strategy completed with no errors
1/9/2022, 6:17:00 PM  Operator lso status: available message: install strategy completed with no errors
1/9/2022, 6:17:00 PM  Operator cnv status: available message: install strategy completed with no errors
1/9/2022, 6:16:31 PM  Operator ocs status: progressing message: installing: waiting for deployment ocs-operator to become ready: deployment "ocs-operator" not available: Deployment does not have minimum availability.
1/9/2022, 6:16:30 PM  Operator lso status: available message: install strategy completed with no errors
1/9/2022, 6:16:30 PM  Operator cnv status: available message: install strategy completed with no errors
1/9/2022, 6:16:01 PM  Operator ocs status: progressing message: installing: waiting for deployment ocs-operator to become ready: deployment "ocs-operator" not available: Deployment does not have minimum availability.
1/9/2022, 6:16:00 PM  Operator lso status: available message: install strategy completed with no errors
1/9/2022, 6:16:00 PM  Operator cnv status: available message: install strategy completed with no errors
1/9/2022, 6:15:31 PM  Operator ocs status: progressing message: installing: waiting for deployment ocs-operator to become ready: deployment "ocs-operator" not available: Deployment does not have minimum availability.
1/9/2022, 6:15:31 PM  Operator lso status: available message: install strategy completed with no errors
1/9/2022, 6:15:30 PM  Operator cnv status: available message: install strategy completed with no errors
How reproducible:
100%
Steps to reproduce:
1. Install cluster with OCS and CNV
2. Watch cluster events
Actual results:
Repeated messages when an OLM operator completed installation
Expected results:
One event record per OLM operator indicating it finished successfully
Description of problem:
The OVNKubernetesControllerDisconnectedSouthboundDatabase alert seems to fire in the e2e-aws-ovn-serial CI job. Note that something odd happens in the job itself: a set of ovnkube-node pods gets created, then deleted, then recreated again while the tests run. The alert fires for the first set of pods that got deleted. From the initial screening of artifacts alone it's not clear what happened to the old pods. This needs investigation.
Version-Release number of selected component (if applicable):
4.12 OCP
How reproducible:
Seems like always
Steps to Reproduce:
1. https://prow.ci.openshift.org/view/gs/origin-ci-test/pr-logs/pull/27043/pull-ci-openshift-origin-master-e2e-aws-ovn-serial/1568166237639282688
2. https://prow.ci.openshift.org/view/gs/origin-ci-test/pr-logs/pull/27043/pull-ci-openshift-origin-master-e2e-aws-ovn-serial/1567913444936519680
Actual results:
Alert is fired
Expected results:
The alert shouldn't fire. If this is expected in the serial job, then we need to silence that alert for that job, or make it a flake rather than a hard failure when the alert fires.
Additional info:
Description of problem:
Event sources are not shown in topology
Version-Release number of selected component (if applicable):
Have verified it on 4.12.0-0.nightly-2022-09-20-095559
How reproducible:
Steps to Reproduce:
1. Install the Serverless operator
2. Create CRs for knative-serving and knative-eventing respectively
3. Create/select a namespace -> go to the dev console -> Add -> Event Source
4. Create any event source
Actual results:
Can't see the created resource (Event source) in topology
Expected results:
Should be able to see the created resource in topology
Additional info:
This is a clone of issue OCPBUGS-2083. The following is the description of the original issue:
—
Description of problem:
Currently we are running the VMware CSI Operator in OpenShift 4.10.33. After running vulnerability scans, the operator was discovered to be using the known weak cipher 3DES. We are attempting to upgrade or modify the operator to customize the available ciphers. We looked at performing a manual upgrade via Quay.io but can't seem to pull the image, and we are trying to steer away from performing a custom install from scratch. Looking for any suggestions for mitigating the weak cipher in the kube-rbac-proxy under the VMware CSI Operator.
Version-Release number of selected component (if applicable):
How reproducible:
Steps to Reproduce:
1. 2. 3.
Actual results:
Expected results:
Additional info:
This is a clone of issue OCPBUGS-4401. The following is the description of the original issue:
—
Description of problem:
cluster-policy-controller has unnecessary permissions and is able to operate on all leases in the KCM namespace. This also applies to the namespace-security-allocation-controller, which was moved some time ago and does not need the lock mechanism.
Version-Release number of selected component (if applicable):
How reproducible:
Steps to Reproduce:
1. 2. 3.
Actual results:
Expected results:
Additional info:
Description of problem:
When a pod runs to a completed state, we typically rely on the update event that will indicate to us that this pod is completed. At that point the pod IP is released and the port configuration is removed in OVN. The subsequent delete event for this pod will be ignored because it should have been cleaned up in the previous update. However, there can be cases where the update event is missed with pod completed. In this case we will only receive a delete with pod completed event, and ignore tearing down the pod. The end result is the pod is not cleaned up in OVN and the IP address remains allocated, reducing the amount of address range available to launch another pod. This can lead to exhausting all IP addresses available for pod allocation on a node.
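A hedged sketch of the cleanup idea described above, with hypothetical handler and helper names rather than the ovn-kubernetes implementation: handle the delete event by tearing down OVN resources unconditionally, even when the pod is already completed, so a missed "completed" update cannot leak the allocation.
~~~
package ovn

import (
	corev1 "k8s.io/api/core/v1"
	"k8s.io/klog/v2"
)

func isPodCompleted(pod *corev1.Pod) bool {
	return pod.Status.Phase == corev1.PodSucceeded || pod.Status.Phase == corev1.PodFailed
}

// onPodDelete would be called from the pod informer's delete handler.
// cleanupPodOVN is a hypothetical, idempotent helper that removes the logical
// switch port and releases the pod's IP allocation.
func onPodDelete(pod *corev1.Pod, cleanupPodOVN func(*corev1.Pod) error) error {
	if isPodCompleted(pod) {
		// Do not skip teardown here: the "completed" update event may have
		// been missed or coalesced into this delete, so clean up anyway.
		klog.V(4).Infof("pod %s/%s deleted while completed; tearing down OVN resources", pod.Namespace, pod.Name)
	}
	return cleanupPodOVN(pod)
}
~~~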
Version-Release number of selected component (if applicable):
4.10.24
How reproducible:
Not sure how to reproduce this. I'm guessing some lag in kapi updates can cause the completed update event and the final delete event to be combined into a single event.
Steps to Reproduce:
1. 2. 3.
Actual results:
Port still exists in OVN, IP remains allocated for a deleted pod.
Expected results:
IP should be freed, port should be removed from OVN.
Additional info:
Description of problem:
When installing a private cluster, the installation fails the first time. The user then needs to run:
ibmcloud is security-group-rule-add "${infra}-sg-kube-api-lb" inbound tcp --port-min 6443 --port-max 6443 --remote $sg
and then run openshift-install wait-for again.
Version-Release number of selected component (if applicable):
How reproducible:
always
Steps to Reproduce:
1. Try to create a cluster with BYON, with publish: Internal in install-config.yaml; the install fails
Actual results:
The first time, the install failed.
Expected results:
The install should only need to be run once; manually running security-group-rule-add should not be necessary.
Additional info:
https://coreos.slack.com/archives/C01U40AM37F/p1664439142279079?thread_ts=1663769891.358229&cid=C01U40AM37F
This issue blocks setting up a private cluster automatically.
The dependency on openshift/api needs to be bumped in openshift/kubernetes in order to pull in the fix from https://issues.redhat.com/browse/OCPBUGS-3635.
Description of problem:
The default dns-default pod is missing the "target.workload.openshift.io/management:" annotation. As a result, when the workload partitioning feature is enabled on SNO, this pod's resources will not get mutated and pinned to the reserved cpuset. This is a regression from 4.10.
Pod spec from 4.10.17:
Annotations:
  ...
  resources.workload.openshift.io/dns: {"cpushares": 51}
  resources.workload.openshift.io/kube-rbac-proxy: {"cpushares": 10}
  target.workload.openshift.io/management: {"effect":"PreferredDuringScheduling"}
Version-Release number of selected component (if applicable):
4.11.0
How reproducible:
100%
Steps to Reproduce:
1. Install an SNO cluster and check the annotation
Actual results:
Expected results:
Additional info:
As an OpenShift operator, I would like to be able to add labels to my MachineSets and nodes which contain unique values, while also using the cluster autoscaler's ability to balance similar node groups. Being able to specify additional labels through the ClusterAutoscaler CRD would allow me to do that.
Something that has arisen during the investigation of https://bugzilla.redhat.com/show_bug.cgi?id=2001027 is the notion that each CSI driver could create its own zone topology labels, and that they do not have to be consistent with the well known kubernetes label.
It is possible, although not entirely confirmed, that a CSI driver might add these labels even when not in use (although running in the cluster).
Additionally, users may need the option to specify more labels to ignore (as illustrated in the discussion of the bug).
Description of problem:
The TestReloadInterval E2E test has completely wrong validations, in which the min value should be 1s, not 5s. But there is a race condition which allows these tests to sometimes pass due to the last test condition. Therefore, failures in CI are actually correct, and successes are wrong based on the E2E conditions.
Version-Release number of selected component (if applicable):
4.12
How reproducible:
50%
Steps to Reproduce:
1.Run TestReloadInterval E2E test (make test-e2e TEST=TestReloadInterval)
Actual results:
Sometimes fails on 5us test case: reloadinterval_test.go:106: router deployment not updated with RELOAD_INTERVAL=5s: timed out waiting for the condition
Expected results:
Should pass E2E
Additional info:
Description of problem:
With every pod update we execute a mutate operation to add the pod port to the port group or add the pod IP to an address set. Functionally this doesn't hurt, since mutate will not add duplicate values to the same set. However, it is bad for performance. For example, with 730 network policies affecting a pod, issuing 7 pod updates would result in over 5k transactions.
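A small sketch of the kind of optimization implied here, with hypothetical types rather than the ovn-kubernetes implementation: track membership locally and only issue a mutate transaction for genuinely new members.
~~~
package ovn

import "sync"

// portGroupCache is a hypothetical local record of which ports have already
// been added to each port group.
type portGroupCache struct {
	mu      sync.Mutex
	members map[string]map[string]struct{} // port group -> set of port UUIDs
}

// needsAdd records the membership locally and reports whether an OVN mutate
// (insert) transaction is actually required for this port.
func (c *portGroupCache) needsAdd(group, portUUID string) bool {
	c.mu.Lock()
	defer c.mu.Unlock()
	if c.members == nil {
		c.members = map[string]map[string]struct{}{}
	}
	if c.members[group] == nil {
		c.members[group] = map[string]struct{}{}
	}
	if _, ok := c.members[group][portUUID]; ok {
		return false // already a member; skip the redundant transaction
	}
	c.members[group][portUUID] = struct{}{}
	return true
}
~~~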
Package golang.org/x/text/language has a vulnerability which could cause a denial of service (included on version v0.3.7 of golang.org/x/text/language)
The Linux kernel was updated to include steal accounting: https://lkml.org/lkml/2020/3/20/1030
This would greatly assist in troubleshooting vSphere performance issues caused by over-provisioned ESXi hosts.
Description of problem:
This bug is a copy of https://bugzilla.redhat.com/show_bug.cgi?id=2137616, as the fix needs to go in on the OCP side. For must-gather output and attached screenshots please refer to the Bugzilla.
Add Capacity button does not exist after upgrade OCP version [OCP4.11->OCP4.12]
Version-Release number of selected component (if applicable):
ODF Version: 4.11.3-3
OCP Version: 4.12.0-0.nightly-2022-10-24-103753
Provider: AWS
How reproducible:
Steps to Reproduce:
1. Install ODF 4.11 + OCP 4.11
2. Upgrade OCP 4.11 to OCP 4.12
3. Log in to the OpenShift Web Console.
4. Click Operators → Installed Operators.
5. Click OpenShift Data Foundation Operator.
6. Click the Storage Systems tab.
7. Click the Action Menu (⋮) on the far right of the storage system name to extend the options menu.
The "Add Capacity" button does not exist in the menu. (See attached screenshot.)
Actual results:
Expected results:
Additional info:
This is a clone of issue OCPBUGS-4181. The following is the description of the original issue:
—
Description of problem:
After configuring a webhook receiver in Alertmanager to send alerts to an external tool, a customer noticed that the received alerts have "https:///<console-url>" as their source (note the 3 slashes).
Version-Release number of selected component (if applicable):
OCP 4.10
How reproducible:
Always
Steps to Reproduce:
1. 2. 3.
Actual results:
https:///<console-url>
Expected results:
https://<console-url>
Additional info:
After investigating I discovered that the problem might be in the CMO code:
→ oc get Alertmanager main -o yaml | grep externalUrl
  externalUrl: https:/console-openshift-console.apps.jakumar-2022-11-27-224014.devcluster.openshift.com/monitoring
→ oc get Prometheus k8s -o yaml | grep externalUrl
  externalUrl: https:/console-openshift-console.apps.jakumar-2022-11-27-224014.devcluster.openshift.com/monitoring
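For illustration, a hedged sketch of building the URL with net/url so stray slashes in the host value cannot produce "https:///..."; this shows a possible fix direction, not the CMO code itself. The host and path values are examples.
~~~
package main

import (
	"fmt"
	"net/url"
	"strings"
)

// externalURL builds the URL from its parts so extra or missing slashes in
// the host cannot corrupt the result.
func externalURL(host, path string) string {
	u := url.URL{
		Scheme: "https",
		Host:   strings.Trim(host, "/"),
		Path:   "/" + strings.TrimLeft(path, "/"),
	}
	return u.String()
}

func main() {
	// Both calls print https://console-openshift-console.apps.example.com/monitoring
	fmt.Println(externalURL("console-openshift-console.apps.example.com", "monitoring"))
	fmt.Println(externalURL("/console-openshift-console.apps.example.com/", "/monitoring"))
}
~~~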
Currently, we have this validation https://github.com/openshift/installer/blob/master/pkg/asset/agent/installconfig_test.go#L103 which checks if the platform is none then the number of control planes should be 1 and workers should be zero.
We need another validation to check that if the number of control planes is 1 and workers are zero, then in install-config.yaml the platform can only be set to none, and in agent-cluster-install.yaml the platformType should only be set to none. If we try to do SNO (i.e. control planes is 1 and workers are zero) with e.g. platform: baremetal, then assisted will reject it, so we should catch it as early as possible.
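A hedged sketch of what that additional validation could look like, with simplified stand-in types and field names rather than the agent installer's actual structures.
~~~
package agent

import "fmt"

// topology is a simplified stand-in for the values read from
// install-config.yaml and agent-cluster-install.yaml.
type topology struct {
	ControlPlaneReplicas  int
	WorkerReplicas        int
	InstallConfigPlatform string // e.g. "none", "baremetal"
	ACIPlatformType       string // platformType from agent-cluster-install.yaml
}

// validateSNOPlatform rejects a 1 control-plane / 0 worker topology unless
// both files declare platform "none".
func validateSNOPlatform(t topology) error {
	if t.ControlPlaneReplicas != 1 || t.WorkerReplicas != 0 {
		return nil // not a single-node topology; nothing to check here
	}
	if t.InstallConfigPlatform != "none" {
		return fmt.Errorf("single-node topology requires platform none in install-config.yaml, got %q", t.InstallConfigPlatform)
	}
	if t.ACIPlatformType != "none" {
		return fmt.Errorf("single-node topology requires platformType none in agent-cluster-install.yaml, got %q", t.ACIPlatformType)
	}
	return nil
}
~~~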
Description of problem:
Installer fails due to Neutron policy error when creating Openstack servers for OCP master nodes.

$ oc get machines -A
NAMESPACE               NAME                          PHASE          TYPE   REGION   ZONE   AGE
openshift-machine-api   ostest-kwtf8-master-0         Running                               23h
openshift-machine-api   ostest-kwtf8-master-1         Running                               23h
openshift-machine-api   ostest-kwtf8-master-2         Running                               23h
openshift-machine-api   ostest-kwtf8-worker-0-g7nrw   Provisioning                          23h
openshift-machine-api   ostest-kwtf8-worker-0-lrkvb   Provisioning                          23h
openshift-machine-api   ostest-kwtf8-worker-0-vwrsk   Provisioning                          23h

$ oc -n openshift-machine-api logs machine-api-controllers-7454f5d65b-8fqx2 -c machine-controller
[...]
E1018 10:51:49.355143 1 controller.go:317] controller/machine_controller "msg"="Reconciler error" "error"="error creating Openstack instance: Failed to create port err: Request forbidden: [POST https://overcl