Jump to: Complete Features | Incomplete Features | Complete Epics | Incomplete Epics | Other Complete | Other Incomplete
Note: this page shows the Feature-Based Change Log for a release
These features were completed when this image was assembled
1. Proposed title of this feature request
Add runbook_url to alerts in the OCP UI
2. What is the nature and description of the request?
If an alert includes a runbook_url label, then it should appear in the UI for the alert as a link.
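A minimal sketch of an alerting rule carrying a runbook_url (shown here as an annotation, which is where it is commonly set; the alert name, expression, and URL are illustrative):
```yaml
apiVersion: monitoring.coreos.com/v1
kind: PrometheusRule
metadata:
  name: example-alerts
  namespace: openshift-monitoring
spec:
  groups:
    - name: example
      rules:
        - alert: ExampleAlert
          expr: vector(1)
          annotations:
            summary: Something needs attention
            runbook_url: https://example.com/runbooks/example-alert.md
```
The console would then render the runbook_url value as a clickable link on the alert's details page.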
3. Why does the customer need this? (List the business requirements here)
Customers can easily reach the alert runbook and address their issues.
4. List any affected packages or components.
As a user, I should be able to configure the CSI driver to have a storage topology.
In the console-operator repo we need to add the `capability.openshift.io/console` annotation to all the manifests that the operator either contains or creates on the fly.
Manifests are currently present in /bindata and /manifest directories.
Here is an example of the insights-operator change.
Here is the overall enhancement doc.
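As a rough sketch, a manifest in /bindata or /manifests would carry the annotation like this (the resource and annotation value are illustrative; the card only specifies the annotation key):
```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: console-config
  namespace: openshift-console
  annotations:
    # Annotation key taken from the card; the value shown here is a placeholder.
    capability.openshift.io/console: "true"
```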
Feature Overview
Provide CSI drivers to replace all the in-tree cloud provider drivers we currently have. These drivers will probably be released as Tech Preview versions first before being promoted to GA.
Goals
Requirements
Requirement | Notes | isMvp? |
---|---|---|
Framework for CSI driver | TBD | Yes |
Drivers should be available to install both in disconnected and connected mode | | Yes |
Drivers should upgrade from release to release without any impact | | Yes |
Drivers should be installable via CVO (when in-tree plugin exists) | | |
Out of Scope
This work will only cover the drivers themselves; it will not include:
Background and strategic fit
In a future Kubernetes release (currently 1.21), in-tree cloud provider drivers will be deprecated and replaced with CSI equivalents. We need the drivers created so that we can continue to support the ecosystems in an appropriate way.
Assumptions
Customer Considerations
Customers will need to be able to use the storage they want.
Documentation Considerations
This Epic is to track the GA of this feature
As an OCP user, I want images for GCP Filestore CSI Driver and Operator, so that I can install them on my cluster and utilize GCP Filestore shares.
We need to continue to maintain specific areas within storage; this is to capture that effort and track it across releases.
Goals
Requirements
Requirement | Notes | isMvp? |
---|---|---|
Telemetry | | No |
Certification | | No |
API metrics | | No |
Out of Scope
n/a
Background and strategic fit
With the expected scale of our customer base, we want to keep the load of customer tickets / BZs low.
Assumptions
Customer Considerations
Documentation Considerations
Notes
In progress:
High prio:
Unsorted
Traditionally we did these updates as bugfixes because we did them after the feature freeze (FF). We are trying no-feature-freeze in 4.12. We will try to do as much as we can before FF, but we're quite sure something will slip past FF as usual.
Update the driver to the latest upstream release. Notify QE and docs with any new features and important bugfixes that need testing or documentation.
(Using separate cards for each driver because these updates can be more complicated)
There is a new driver release 5.0.0 since the last rebase that includes snapshot support:
https://github.com/kubernetes-sigs/ibm-vpc-block-csi-driver/releases/tag/v5.0.0
Rebase the driver on v5.0.0 and update the deployments in ibm-vpc-block-csi-driver-operator.
There are no corresponding changes in ibm-vpc-node-label-updater since the last rebase.
Update all CSI sidecars to the latest upstream release.
This includes update of VolumeSnapshot CRDs in https://github.com/openshift/cluster-csi-snapshot-controller-operator/tree/master/assets
Update the driver to the latest upstream release. Notify QE and docs with any new features and important bugfixes that need testing or documentation.
(Using separate cards for each driver because these updates can be more complicated)
Update all OCP and kubernetes libraries in storage operators to the appropriate version for OCP release.
This includes (but is not limited to):
Operators:
Update the driver to the latest upstream release. Notify QE and docs with any new features and important bugfixes that need testing or documentation.
(Using separate cards for each driver because these updates can be more complicated)
Update the driver to the latest upstream release. Notify QE and docs with any new features and important bugfixes that need testing or documentation.
(Using separate cards for each driver because these updates can be more complicated)
Update the driver to the latest upstream release. Notify QE and docs with any new features and important bugfixes that need testing or documentation.
(Using separate cards for each driver because these updates can be more complicated)
Update the driver to the latest upstream release. Notify QE and docs with any new features and important bugfixes that need testing or documentation.
This includes ibm-vpc-node-label-updater!
(Using separate cards for each driver because these updates can be more complicated)
The end of general support for vSphere 6.7 is October 15, 2022, so vSphere 6.7 will be deprecated in 4.11.
We want to encourage vSphere customers to upgrade to vSphere 7 in OCP 4.11, since VMware is ending general support for vSphere 6.7 in October 2022.
We want to set the cluster to Upgradeable=false and have a strong alert pointing to our docs / requirements.
related slack: https://coreos.slack.com/archives/CH06KMDRV/p1647541493096729
This Epic tracks the GA of this feature
Epic Goal
On new installations, we should make the StorageClass created by the CSI operator the default one.
However, we shouldn't do that on an upgrade scenario. The main reason is that users might have set a different quota on the CSI driver Storage Class.
Exit criteria:
On new installations, we should make the StorageClass created by the CSI operator the default one.
However, we shouldn't do that on an upgrade scenario. The main reason is that users might have set a different quota on the CSI driver Storage Class.
Exit criteria:
Rebase openshift-controller-manager to k8s 1.24
4.11 MVP Requirements
Out of scope use cases (that are part of the Kubeframe/factory project):
Questions to be addressed:
As an OpenShift infrastructure owner, I want to deploy OpenShift clusters with dual-stack IPv4/IPv6
As an OpenShift infrastructure owner, I want to deploy OpenShift clusters with single-stack IPv6
IPv6 and dual-stack clusters are often requested by customers, especially Telco customers. For many, working with dual-stack clusters is a requirement in itself, but it is also a transition step toward single-stack IPv6 clusters, which for some of our users is the final destination.
Karim's work proving how the agent-based installer can deploy IPv6: IPv6 deploy with agent-based installer
For dual-stack installations the agent-cluster-install.yaml must have both an IPv4 and an IPv6 subnet in networking.MachineNetwork, or assisted-service will throw an error. This field is in InstallConfig but it must be added to agent-cluster-install in its Generate().
For single-stack IPv4 and IPv6 installs, setting the MachineNetwork is not needed, but it also does not cause problems if it is set, so it should be fine to set it at all times.
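A hedged sketch of the relevant networking section in agent-cluster-install.yaml for a dual-stack install (CIDRs are illustrative):
```yaml
apiVersion: extensions.hive.openshift.io/v1beta1
kind: AgentClusterInstall
metadata:
  name: example-cluster
spec:
  networking:
    machineNetwork:
      # One IPv4 and one IPv6 subnet, as required for dual-stack
      - cidr: 192.168.111.0/24
      - cidr: 2001:db8::/64
```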
As an OpenShift infrastructure owner, I want to deploy a cluster zero with RHACM or MCE and have the required components installed when the installation is completed
BILLI makes it easier to deploy a cluster zero. BILLI users know at installation time what the purpose of their cluster is when they plan the installation. Day-2 steps are necessary to install operators, and users, especially when automating installations, want to finish the installation flow with their required components already installed.
As a customer, I want to be able to:
so that I can achieve
Description of criteria:
We are only allowing the user to provide extra manifests to install MCE at this time. We are not adding an option to "install mce" on the command line (or UI)
This requires/does not require a design proposal.
This requires/does not require a feature gate.
As a customer, I want to be able to:
so that I can achieve
Description of criteria:
We are only allowing the user to provide extra manifests to install MCE at this time. We are not adding an option to "install mce" on the command line (or UI)
This requires/does not require a design proposal.
This requires/does not require a feature gate.
Set the ClusterDeployment CRD to deploy OpenShift in FIPS mode and make sure that after deployment the cluster is set in that mode
In order to install FIPS-compliant clusters, we need to make sure that install-config + agent-config based deployments take the FIPS config in install-config into account.
This task is about passing the config to agentclusterinstall so it makes it into the ISO. Once there, AGENT-374 will give it to assisted-service.
As a user I would like to see all the events that the autoscaler creates, even duplicates. Having the CAO set this flag will allow me to continue to see these events.
We have carried a patch for the autoscaler that would enable the duplication of events. This patch can now be dropped because the upstream added a flag for this behavior in https://github.com/kubernetes/autoscaler/pull/4921
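If the upstream flag added in that PR is --record-duplicated-events (an assumption based on the linked pull request), the CAO would simply pass it through to the autoscaler deployment, roughly like this:
```yaml
spec:
  containers:
    - name: cluster-autoscaler
      args:
        # Flag name assumed from kubernetes/autoscaler#4921; keeps duplicate events visible
        - --record-duplicated-events=true
```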
Add GA support for deploying OpenShift to IBM Public Cloud
Complete the existing gaps to make OpenShift on IBM Cloud VPC (Next Gen2) Generally Available.
This epic tracks the changes needed to the ingress operator to support IBM DNS Services for private clusters.
Currently in OpenShift we do not support distributing hotfix packages to cluster nodes. In time-sensitive situations, a RHEL hotfix package can be the quickest route to resolving an issue.
Before we ship OCP CoreOS layering in https://issues.redhat.com/browse/MCO-165 we need to switch the format of what is currently `machine-os-content` to be the new base image.
The overall plan is:
As an OCP CoreOS layering developer, having telemetry data about the number of clusters using osImageURL will help us understand how broadly this feature is getting used and improve accordingly.
Acceptance Criteria:
After https://github.com/openshift/os/pull/763 is in the release image, teach the MCO how to use it. This is basically:
Assumption
Doc: https://docs.google.com/document/d/1sXCaRt3PE0iFmq7ei0Yb1svqzY9bygR5IprjgioRkjc/edit
Customers do not pay Red Hat more to run HyperShift control planes and supporting infrastructure than Standalone control planes and supporting infrastructure.
Assumption
cluster-snapshot-controller-operator is running on the CP.
More information here: https://docs.google.com/document/d/1sXCaRt3PE0iFmq7ei0Yb1svqzY9bygR5IprjgioRkjc/edit
As an OpenShift developer, I want cluster-csi-snapshot-controller-operator to use existing controllers in library-go, so I don't need to maintain yet more code that does the same thing as library-go.
Note: if this refactoring introduces any new conditions, we must make sure that 4.11 snapshot controller clears them to support downgrade! This will need 4.11 BZ + z-stream update!
Similarly, if some conditions become obsolete / not managed by any controller, they must be cleared by 4.12 operator.
Exit criteria:
As HyperShift Cluster Instance Admin, I want to run cluster-csi-snapshot-controller-operator in the management cluster, so the guest cluster runs just my applications.
Exit criteria:
Customers do not pay Red Hat more to run HyperShift control planes and supporting infrastructure than Standalone control planes and supporting infrastructure.
Assumption
Run cluster-storage-operator (CSO) + AWS EBS CSI driver operator + AWS EBS CSI driver control-plane Pods in the management cluster, run the driver DaemonSet in the hosted cluster.
More information here: https://docs.google.com/document/d/1sXCaRt3PE0iFmq7ei0Yb1svqzY9bygR5IprjgioRkjc/edit
As HyperShift Cluster Instance Admin, I want to run cluster-storage-operator (CSO) in the management cluster, so the guest cluster runs just my applications.
Exit criteria:
As HyperShift Cluster Instance Admin, I want to run AWS EBS CSI driver operator + control plane of the CSI driver in the management cluster, so the guest cluster runs just my applications.
Exit criteria:
As an OCP support engineer, I want the same guest cluster storage-related objects in the output of "hypershift dump cluster --dump-guest-cluster" as in "oc adm must-gather", so I can debug storage issues easily.
must-gather collects: storageclasses, persistentvolumes, volumeattachments, csidrivers, csinodes, volumesnapshotclasses, volumesnapshotcontents
hypershift collects none of this, the relevant code is here: https://github.com/openshift/hypershift/blob/bcfade6676f3c344b48144de9e7a36f9b40d3330/cmd/cluster/core/dump.go#L276
Exit criteria:
CNCC was moved to the management cluster and it should use proxy settings defined for the management cluster.
Much like core OpenShift operators, a standardized flow exists for OLM-managed operators to interact with the cluster in a specific way to leverage AWS STS authorization when using AWS APIs as opposed to insecure static, long-lived credentials. OLM-managed operators can implement integration with the CloudCredentialOperator in well-defined way to support this flow.
Enable customers to easily leverage OpenShift's capabilities around AWS STS with layered products, for an increased security posture. Enable OLM-managed operators to implement support for this in a well-defined pattern.
See Operators & STS slide deck.
The CloudCredentialOperator already provides a powerful API for OpenShift's core cluster operators to request credentials and acquire them via short-lived tokens. This capability should be expanded to OLM-managed operators, specifically to Red Hat layered products that interact with AWS APIs. The process today is cumbersome to non-existent depending on the operator in question, and it is seen as an adoption blocker for OpenShift on AWS.
This is particularly important for ROSA customers. Customers are expected to be asked to pre-create the required IAM roles outside of OpenShift, which is deemed acceptable.
This Section: High-level description of the market problem, i.e., executive summary
This Section: Articulates and defines the value proposition from a user's point of view
This Section: Effect is the expected outcome within the market. There are two dimensions of outcomes; growth or retention. This represents part of the “why” statement for a feature.
As an engineer, I want the capability to implement CI test cases that run at different intervals (daily, weekly, etc.) so as to ensure that downstream operators which depend on certain capabilities are not negatively impacted if the systems CCO interacts with change behavior.
Acceptance Criteria:
Create a stubbed-out e2e test path in CCO and matching e2e calling code in the release repo such that there exists a path to tests that verify a working AWS STS workflow.
oc-mirror is a GA product as of OpenShift 4.11.
The goal of this feature is to address any future customer requests for new features or capabilities in oc-mirror.
Pre-Work Objectives
Since some of our requirements from the ACM team will not be available for the 4.12 timeframe, the team should work on anything we can get done in the scope of the console repo so that when the required items are available in 4.13, we can be more nimble in delivering GA content for the Unified Console Epic.
Overall GA Key Objective
Providing our customers with a single, simplified user experience (Hybrid Cloud Console) that is extensible, can run locally or in the cloud, and is capable of managing the fleet as well as deep diving into a single cluster.
Why do customers want this?
Why do we want this?
Phase 2 Goal: Productization of the unified Console
As a developer I would like to disable clusters like *KS that we can't support for multi-cluster (for instance because we can't authenticate). The ManagedCluster resource has a vendor label that we can use to know if the cluster is supported.
cc Ali Mobrem Sho Weimer Jakub Hadvig
UPDATE: 9/20/22 : we want an allow-list with OpenShift, ROSA, ARO, ROKS, and OpenShiftDedicated
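For reference, the vendor label lives on the ManagedCluster resource, roughly like this (the cluster name and label value are illustrative):
```yaml
apiVersion: cluster.open-cluster-management.io/v1
kind: ManagedCluster
metadata:
  name: example-cluster
  labels:
    # Vendor label used to decide whether multi-cluster support is enabled for this cluster
    vendor: OpenShift
```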
Acceptance criteria:
RHEL CoreOS should be updated to RHEL 9.2 sources to take advantage of newer features, hardware support, and performance improvements.
Requirement | Notes | isMvp? |
---|---|---|
CI - MUST be running successfully with test automation | This is a requirement for ALL features. | YES |
Release Technical Enablement | Provide necessary release enablement details and documents. | YES |
Questions to be addressed:
PROBLEM
We would like to improve our signal for RHEL9 readiness by increasing internal engineering engagement and external partner engagement on our community OpenShift offering, OKD.
PROPOSAL
Adding OKD to run on SCOS (a CentOS stream for CoreOS) brings the community offering closer to what a partner or an internal engineering team might expect on OCP.
ACCEPTANCE CRITERIA
Image has been switched/included:
DEPENDENCIES
The SCOS build payload.
RELATED RESOURCES
OKD+SCOS proposal: https://docs.google.com/presentation/d/1_Xa9Z4tSqB7U2No7WA0KXb3lDIngNaQpS504ZLrCmg8/edit#slide=id.p
OKD+SCOS work draft: https://docs.google.com/document/d/1cuWOXhATexNLWGKLjaOcVF4V95JJjP1E3UmQ2kDVzsA/edit
Acceptance Criteria
A stable OKD on SCOS is built and available to the community every sprint.
This comes up when installing ipi-on-aws on arm64 with the custom payload build at quay.io/aleskandrox/okd-release:4.12.0-0.okd-centos9-full-rebuild-arm64 that is using SCOS as the machine-os-content image.
```
[root@ip-10-0-135-176 core]# crictl logs c483c92e118d8
2022-08-11T12:19:39+00:00 [cnibincopy] FATAL ERROR: Unsupported OS ID=scos
```
The probable fix has to land on https://github.com/openshift/cluster-network-operator/blob/master/bindata/network/multus/multus.yaml#L41-L53
HyperShift came to life to serve multiple goals; some are primary near-term goals, and some are secondary goals that serve us well long-term.
HyperShift opens up doors to penetrate the market. HyperShift enables true hybrid (CP and Workers decoupled, mixed IaaS, mixed Arch,...). An architecture that opens up more options to target new opportunities in the cloud space. For more details on this one check: Hosted Control Planes (aka HyperShift) Strategy [Live Document]
To bring hosted control planes to our customers, we need the means to ship it. Today MCE is how HyperShift is shipped and installed so that customers can use it. There are two main customers for hosted control planes:
If you have noticed, MCE is the delivery mechanism for both management models. The difference between managed and self-managed is the consumer persona. For self-managed, it's the customer SRE; for managed, it's the RH SRE.
For us to ship HyperShift in the product (as hosted control planes) in either management model, there is a necessary readiness checklist that we need to satisfy. Below are the high-level requirements needed before GA:
Please also have a look at our What are we missing in Core HyperShift for GA Readiness? doc.
Multi-cluster is becoming an industry need today, not because this is where the trend is going but because it's the only viable path today to solve many of our customers' use cases. Below is some reasoning why multi-cluster is a NEED:
As a result, multi-cluster management is a defining category in the market where Red Hat plays a key role. Today Red Hat solves for multi-cluster via RHACM and MCE. The goal is to simplify fleet management complexity by providing a single pane of glass to observe, secure, police, govern, and configure a fleet. I.e., the operand is no longer one cluster but a set, a fleet of clusters.
HyperShift's logically centralized architecture, as well as its native separation of concerns and superior cluster lifecycle management experience, makes it a great fit as the foundation of our multi-cluster management story.
Thus the following stories are important for HyperShift:
Refs:
HyperShift is the core engine that will be used to provide hosted control-planes for consumption in managed and self-managed.
Main user story: When life cycling clusters as a cluster service consumer via HyperShift core APIs, I want to use a stable/backward compatible API that is less susceptible to future changes so I can provide availability guarantees.
Ref: What are we missing in Core HyperShift for GA Readiness?
Customers do not pay Red Hat more to run HyperShift control planes and supporting infrastructure than Standalone control planes and supporting infrastructure.
Assumptions:
HyperShift - proposed cuts from data plane
When operating OpenShift clusters (for any OpenShift form factor) from MCE/ACM/OCM/CLI as a Cluster Service Consumer (RH-managed SRE, or self-managed SRE/admin), I want to be able to migrate CPs from one hosting service cluster to another:
More information:
To understand usage patterns and inform our decision making for the product, we need to be able to measure adoption and assess usage.
See Hosted Control Planes (aka HyperShift) Strategy [Live Document]
Whether it's managed or self-managed, it's pertinent to report health metrics to be able to create meaningful Service Level Objectives (SLOs) and alert on failures to meet our availability guarantees. This is especially important for our managed services path.
https://issues.redhat.com/browse/OCPPLAN-8901
HyperShift for managed services is a strategic company goal as it improves usability, feature, and cost competitiveness against other managed solutions, and because managed services/consumption-based cloud services is where we see the market growing (customers are looking to delegate platform overhead).
We should make sure our SD milestones are unblocked by the core team.
This feature reflects HyperShift core readiness to be consumed. When all related epics and stories in this epic are complete, HyperShift can be considered ready to be consumed in GA form. This does not describe a date but rather the readiness of core HyperShift to be consumed in GA form, NOT the GA itself.
- GA date for self-managed will be factoring in other inputs such as adoption, customer interest/commitment, and other factors.
- GA dates for ROSA-HyperShift are on track, tracked in milestones M1-7 (have a look at https://issues.redhat.com/browse/OCPPLAN-5771)
Epic Goal*
The goal is to split client certificate trust chains from the global Hypershift root CA.
Why is this important? (mandatory)
This is important to:
Scenarios (mandatory)
Provide details for user scenarios including actions to be performed, platform specifications, and user personas.
Dependencies (internal and external) (mandatory)
The HyperShift team needs to provide us with code reviews and merge the changes we are to deliver.
Contributing Teams(and contacts) (mandatory)
Acceptance Criteria (optional)
The serviceaccount CA bundle automatically injected to all pods cannot be used to authenticate any client certificate generated by the control-plane.
Drawbacks or Risk (optional)
Risk: there is significant time pressure, as this should be delivered before the first stable HyperShift release.
Done - Checklist (mandatory)
AUTH-311 introduced an enhancement. Implement the signer separation described there.
Cloned from OCPSTRAT-377 to represent the backport to 4.12
Backport questions:
1) What's the impact/cost to any other critical items on the next release?
Installer and edge are mostly focused on activation/retention and working the list top-to-bottom without release blockers. This is an activation item highly coveted by SD and applicable in existing versions.
2) Is it a breaking change to the existing fleet?
No.
Enhancement PR: https://github.com/openshift/enhancements/pull/1397
API PR: https://github.com/openshift/api/pull/1460
Ingress Operator PR: https://github.com/openshift/cluster-ingress-operator/pull/928
Feature Goal: Support OpenShift installation in AWS Shared VPC scenario where AWS infrastructure resources (at least the Private Hosted Zone) belong to an account separate from the cluster installation target account.
The ingress operator is responsible for creating DNS records in AWS Route 53 for cluster ingress. Prior to the implementation of this epic, the ingress operator did not have the capability to add DNS records into an existing Route 53 hosted zone in the shared VPC.
As described in the WIP PR https://github.com/openshift/cluster-ingress-operator/pull/928, the ingress operator will consume a new API field that contains the IAM Role ARN for configuring DNS records in the private hosted zone. If this field is present, then the ingress operator will use this account to create all private hosted zone records. The API fields will be described in the Enhancement PR.
The ingress operator code will accomplish this by defining a new provider implementation that wraps two other DNS providers, using one of them to publish records to the public zone and the other to publish records to the private zone.
See NE-1299
See NE-1299
Backport of 4.13 AWS Shared VPC Feature
Backport of 4.13 AWS Shared VPC Feature
The MCO should properly report its state in a way that's consistent and able to be understood by customers, troubleshooters, and maintainers alike.
Some customer cases have revealed scenarios where the MCO state reporting is misleading and therefore could be unreliable to base decisions and automation on.
In addition to correcting some incorrect states, the MCO will be enhanced for a more granular view of update rollouts across machines.
The MCO should properly report its state in a way that's consistent and able to be understood by customers, troubleshooters, and maintainers alike.
For this epic, "state" means "what is the MCO doing?" – so the goal here is to try to make sure that it's always known what the MCO is doing.
This includes:
While this probably crosses a little bit into the "status" portion of certain MCO objects, as some state is definitely recorded there, this probably shouldn't turn into a "better status reporting" epic. I'm interpreting "status" to mean "how is it going" so status is maybe a "detail attached to a state".
Exploration here: https://docs.google.com/document/d/1j6Qea98aVP12kzmPbR_3Y-3-meJQBf0_K6HxZOkzbNk/edit?usp=sharing
https://docs.google.com/document/d/17qYml7CETIaDmcEO-6OGQGNO0d7HtfyU7W4OMA6kTeM/edit?usp=sharing
The current property description is:
configuration represents the current MachineConfig object for the machine config pool.
But in a 4.12.0-ec.4 cluster, the actual semantics seem to be something closer to "the most recent rendered config that we completely leveled on". We should at least update the godocs to be more specific about the intended semantics. And perhaps consider adjusting the semantics?
When this image was assembled, these features were not yet completed. Therefore, only the Jira cards included here are part of this release.
We have a set of images
that should become multiarch images. This should be done both in upstream and downstream.
As a reference, we have built those images internally as multiarch and made them available as
They can be consumed by the Assisted Service pod via the following env:
- name: AGENT_DOCKER_IMAGE
  value: registry.redhat.io/rhai-tech-preview/assisted-installer-agent-rhel8:latest
- name: CONTROLLER_IMAGE
  value: registry.redhat.io/rhai-tech-preview/assisted-installer-reporter-rhel8:latest
- name: INSTALLER_IMAGE
  value: registry.redhat.io/rhai-tech-preview/assisted-installer-rhel8:latest
OLM would have to support a mechanism like podAffinity that allows multiple architecture values to be specified, enabling it to pin operators to worker nodes with a matching architecture.
Ref: https://github.com/openshift/enhancements/pull/1014
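The card mentions podAffinity; in practice, pinning a pod to nodes of a matching architecture is expressed as node affinity, sketched here with illustrative values:
```yaml
affinity:
  nodeAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
      nodeSelectorTerms:
        - matchExpressions:
            - key: kubernetes.io/arch
              operator: In
              values:
                # Multiple architecture values, matching what the operator supports
                - amd64
                - arm64
```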
Cut a new release of the OLM API and update OLM API dependency version (go.mod) in OLM package; then
Bring the upstream changes from OLM-2674 to the downstream olm repo.
A/C:
- New OLM API version release
- OLM API dependency updated in OLM Project
- OLM Subscription API changes downstreamed
- OLM Controller changes downstreamed
- Changes manually tested on Cluster Bot
We drive OpenShift cross-market customer success and new customer adoption with constant improvements and feature additions to the existing capabilities of our OpenShift Core Networking (SDN and Network Edge). This feature captures that natural progression of the product.
There are definitely grey areas, but in general:
Questions to be addressed:
Goal: Provide queryable metrics and telemetry for cluster routes and sharding in an OpenShift cluster.
Problem: Today we test OpenShift performance and scale with best-guess or anecdotal evidence for the number of routes that our customers use. Best practice for a large number of routes in a cluster is to shard; however, we have no visibility into whether and how customers are using sharding.
Why is this important? These metrics will inform our performance and scale testing, documented cluster limits, and how customers are using sharding for best practice deployments.
Dependencies (internal and external):
Prioritized epics + deliverables (in scope / not in scope):
Not in scope:
Estimate (XS, S, M, L, XL, XXL):
Previous Work:
Open questions:
Acceptance criteria:
Epic Done Checklist:
Description:
As described in the Design Doc, the following information needs to be exported from the Cluster Ingress Operator:
Design 2 will be implemented as part of this story.
Acceptance Criteria:
Description:
As described in the "Metrics to be sent via telemetry" section of the Design Doc, the following metrics need to be sent from the OpenShift cluster to Red Hat premises:
The metrics should be allowlisted on the cluster side.
The steps described in "Sending metrics via telemetry" need to be followed, specifically step 5.
Depends on CFE-478.
Acceptance Criteria:
This is an epic bucket for all activities surrounding the creation of a declarative approach to releasing and maintaining OLM catalogs.
When working on this Epic, it's important to keep in mind this other potentially related Epic: https://issues.redhat.com/browse/OLM-2276
Jira Description
As an OPM maintainer, I want to downstream the PR (for OCP 4.12) and backport it to OCP 4.11 so that IIB will NOT be impacted by the changes when it upgrades the OPM version to use the next/future opm upstream release (v1.25.0).
Summary / Background
IIB (the downstream service that manages the indexes) uses the upstream version. If it bumps the OPM version to the next/future (v1.25.0) release, which includes this change, before the downstream images are updated, then the process to manage the indexes downstream will face issues and it will impact the distributions.
Acceptance Criteria
Definition of Ready
Definition of Done
Enhance the veneer rendering to be able to read the input veneer data from stdin, via a pipe, in a manner similar to https://dev.to/napicella/linux-pipes-in-golang-2e8j
then the command could be used in a manner similar to many k8s examples like
```shell
opm alpha render-veneer semver -o yaml < infile > outfile
```
Upstream issue link: https://github.com/operator-framework/operator-registry/issues/1011
TL;DR: three basic claims; the rest is explanation and one example.
While bugs are an important metric, fixing bugs is different from investing in maintainability and debuggability. Investing in fixing bugs will help alleviate immediate problems, but it doesn't improve the ability to address future problems. You (may) get a code base with fewer bugs, but when you add a new feature, it will still be hard to debug problems and interactions. This pushes a code base towards stagnation where it gets harder and harder to add features.
One alternative is to ask teams to produce ideas for how they would improve future maintainability and debuggability instead of focusing on immediate bugs. This would produce designs that make problem determination, bug resolution, and future feature additions faster over time.
I have a concrete example of one such outcome of focusing on bugs vs. quality. We have resolved many bugs about communication failures with ingress by finding problems with point-to-point network communication. We have fixed the individual bugs, but we have not improved the code for future debugging. In doing so, we chase many hard-to-diagnose problems across the stack. The alternative is to create a point-to-point network connectivity capability. This would immediately improve bug resolution and stability (detection) for kuryr, ovs, legacy sdn, network-edge, kube-apiserver, openshift-apiserver, authentication, and console. Bug fixing does not produce the same impact.
We need more investment in our future selves. Saying, "teams should reserve this" doesn't seem to be universally effective. Perhaps an approach that directly asks for designs and impacts and then follows up by placing the items directly in planning and prioritizing against PM feature requests would give teams the confidence to invest in these areas and give broad exposure to systemic problems.
Relevant links:
Enable the chaos plugin https://coredns.io/plugins/chaos/ in our CoreDNS configuration so that we can use a DNS query to easily identify what DNS pods are responding to our requests.
Requirement | Notes | isMvp? |
---|---|---|
CI - MUST be running successfully with test automation | This is a requirement for ALL features. | YES |
Release Technical Enablement | Provide necessary release enablement details and documents. | YES |
This Section:
This Section: What does the person writing code, testing, documenting need to know? What context can be provided to frame this feature.
Questions to be addressed:
As a console user, I want to have the option to:
For Deployments we will add a 'Restart rollout' action button. This action will PATCH the Deployment object's 'spec.template.metadata.annotations' block by adding an 'openshift.io/restartedAt: <actual-timestamp>' annotation. This will restart the deployment by creating a new ReplicaSet.
For DeploymentConfigs we will add a 'Retry rollout' action button. This action will PATCH the latest revision of the ReplicationController object's 'metadata.annotations' block by setting 'openshift.io/deployment/phase: "New"' and removing openshift.io/deployment.cancelled and openshift.io/deployment.status-reason.
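A minimal sketch of the patch body for the Deployment case (the timestamp value is illustrative):
```yaml
spec:
  template:
    metadata:
      annotations:
        openshift.io/restartedAt: "2022-08-01T12:00:00Z"
```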
Acceptance Criteria:
BACKGROUND:
OpenShift console will be updated to allow rollout restart deployment from the console itself.
Currently, from the OpenShift console, for the resource "deploymentconfigs" we can only start and pause the rollout, and for the resource "deployment" we can only resume the rollout. Neither resource (deployment or deployment config) has an option to restart the rollout. That is why the customer wants this functionality to perform the same action from the OpenShift console as well as the CLI.
The customer wants developers who are not fluent with the oc tool and terminal utilities to be able to use the console instead of the terminal to restart a deployment, just as it can be done through the CLI with the command "oc rollout restart deploy/<deployment-name>".
Usually, when developers change the ConfigMap that a deployment uses, they have to restart its pods. Currently, developers have to use the oc rollout restart deployment command. The customer wants a button/menu in the console that performs the same action.
Design
Doc: https://docs.google.com/document/d/1i-jGtQGaA0OI4CYh8DH5BBIVbocIu_dxNt3vwWmPZdw/edit
As a developer, I want to make status.HostIP for Pods visible in the Pod details page of the OCP Web Console. Currently there is no way to view the node IP for a Pod in the OpenShift Web Console. When viewing a Pod in the console, the field status.HostIP is not visible.
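For reference, the field in question sits in the serialized Pod status (values illustrative):
```yaml
status:
  phase: Running
  hostIP: 10.0.0.5
  podIP: 10.131.0.15
```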
Acceptance criteria:
When OCP is performing a cluster upgrade, the user should be notified about this fact.
There are two possibilities for how to surface the cluster upgrade to users:
AC:
Note: We need to decide if we want to distinguish this particular notification by a different color? ccing Ali Mobrem
Created from: https://issues.redhat.com/browse/RFE-3024
Customers can trust the metadata in our operator catalogs to reason about infrastructure compatibility and interoperability. Similar to OCPPLAN-7983, the requirement is that this data is present for every layered product and Red Hat-released operator, and ideally also for ISV operators.
Today it is hard to validate the presence of this data due to the metadata format. This feature tracks introducing a new format, implementing the appropriate validation and enforcement of presence, as well as defining a grace period in which both formats are acceptable.
Customers can rely on the operator metadata as the single source of truth for capability and interoperability information instead of having to look up product-specific documentation. They can use this data to filter in on-cluster and public catalog displays as well as in their pipelines or custom workflows.
Red Hat Operators are required to provide this data and we aim for near 100% coverage in our catalogs.
Absence of this data can reliably be detected and will subsequently lead to gating in the release process.
Provide any additional customer-specific considerations that must be made when designing and delivering the Feature. Initial completion during Refinement status.
Telecommunications providers continue to deploy OpenShift at the Far Edge. The acceleration of this adoption and the nature of existing Telecommunication infrastructure and processes drive the need to improve OpenShift provisioning speed at the Far Edge site and the simplicity of preparation and deployment of Far Edge clusters, at scale.
A list of specific needs or objectives that a Feature must deliver to satisfy the Feature. Some requirements will be flagged as MVP. If an MVP gets shifted, the feature shifts. If a non MVP requirement slips, it does not shift the feature.
Requirement | Notes | isMvp? |
Telecommunications Service Provider Technicians will be rolling out OCP w/ a vDU configuration to new Far Edge sites, at scale. They will be working from a service depot where they will pre-install/pre-image a set of Far Edge servers to be deployed at a later date. When ready for deployment, a technician will take one of these generic-OCP servers to a Far Edge site, enter the site specific information, wait for confirmation that the vDU is in-service/online, and then move on to deploy another server to a different Far Edge site.
Retail employees in brick-and-mortar stores will install SNO servers and it needs to be as simple as possible. The servers will likely be shipped to the retail store, cabled and powered by a retail employee and the site-specific information needs to be provided to the system in the simplest way possible, ideally without any action from the retail employee.
Q: how challenging will it be to support multi-node clusters with this feature?
< What does the person writing code, testing, documenting need to know? >
< Are there assumptions being made regarding prerequisites and dependencies?>
< Are there assumptions about hardware, software or people resources?>
< Are there specific customer environments that need to be considered (such as working with existing h/w and software)?>
< Are there Upgrade considerations that customers need to account for or that the feature should address on behalf of the customer?>
<Does the Feature introduce data that could be gathered and used for Insights purposes?>
< What educational or reference material (docs) is required to support this product feature? For users/admins? Other functions (security officers, etc)? >
< What does success look like?>
< Does this feature have doc impact? Possible values are: New Content, Updates to existing content, Release Note, or No Doc Impact>
< If unsure and no Technical Writer is available, please contact Content Strategy. If yes, complete the following.>
< Which other products and versions in our portfolio does this feature impact?>
< What interoperability test scenarios should be factored by the layered product(s)?>
Question | Outcome |
This is a clone of issue OCPBUGS-14416. The following is the description of the original issue:
—
Description of problem:
When installing SNO with bootstrap-in-place, the cluster-policy-controller hangs for 6 minutes waiting for the lease to be acquired.
Version-Release number of selected component (if applicable):
How reproducible:
100%
Steps to Reproduce:
1. Run the PoC using the makefile here: https://github.com/eranco74/bootstrap-in-place-poc
2. Observe the cluster-policy-controller logs post reboot
Actual results:
I0530 16:01:18.011988 1 leaderelection.go:352] lock is held by leaderelection.k8s.io/unknown and has not yet expired
I0530 16:01:18.012002 1 leaderelection.go:253] failed to acquire lease kube-system/cluster-policy-controller-lock
I0530 16:07:31.176649 1 leaderelection.go:258] successfully acquired lease kube-system/cluster-policy-controller-lock
Expected results:
Expected the bootstrap cluster-policy-controller to release the lease so that the cluster-policy-controller running post reboot won't have to wait for the lease to expire.
Additional info:
Suggested resolution for bootstrap in place: https://github.com/openshift/installer/pull/7219/files#diff-f12fbadd10845e6dab2999e8a3828ba57176db10240695c62d8d177a077c7161R44-R59
This section includes Jira cards that are linked to an Epic, but the Epic itself is not linked to any Feature. These epics were completed when this image was assembled
This epic tracks "business as usual" requirements / enhancements / bug fixing of the Insights Operator.
Today the links point at a rule-scoped page, but that page lacks information about recommended resolution. You can click through by cluster ID to your specific cluster and get that recommendation advice, but it would be more convenient and less confusing for customers if we linked directly to the cluster-scoped recommendation page.
We can implement by updating the template here to be:
fmt.Sprintf("https://console.redhat.com/openshift/insights/advisor/clusters/%s?first=%s%%7C%s", clusterID, ruleIDStr, rec.ErrorKey)
or something like that.
unknowns
request is clear, solution/implementation to be further clarified
This story only covers API components. We will create a separate story for other utility functions.
Today we are generating documentation for Console's Dynamic Plugin SDK in
frontend/packages/dynamic-plugin-sdk. We are missing ts-doc for a set of hooks and components.
We are generating the markdown from the dynamic-plugin-sdk using
yarn generate-doc
Here is the list of the API that the dynamic-plugin-sdk is exposing:
https://gist.github.com/spadgett/0ddefd7ab575940334429200f4f7219a
Acceptance Criteria:
Out of Scope:
Based on API review CONSOLE-3145, we have decided to deprecate the following APIs:
cc Andrew Ballantyne Bryan Florkiewicz
Currently our `api.md` does not generate docs with "tags" (aka `@deprecated`) – we'll need to add that functionality to the `generate-doc.ts` script. See the code that works for `console-extensions.md`
During the development of https://issues.redhat.com/browse/CONSOLE-3062, it was determined additional information is needed in order to assist a user when troubleshooting a Failed plugin (see https://github.com/openshift/console/pull/11664#issuecomment-1159024959). As it stands today, there is no data available to the console to relay to the user regarding why the plugin Failed. Presumably, a message should be added to NotLoadedDynamicPlugin to address this gap.
AC: Add `message` property to NotLoadedDynamicPluginInfo type.
Acceptance Criteria: Add missing docs for *Icon and *Status components in the API docs.
Currently the ConsolePlugins API version is v1alpha1. Since we are going GA with dynamic plugins we should be creating a v1 version.
This would require updates in following repositories:
AC:
NOTE: This story does not include the conversion webhook change which will be created as a follow on story
The extension `console.dashboards/overview/detail/item` doesn't constrain the content to fit the card.
The details-card has an expectation that a <dd> item will be the last item (for spacing between items). Our static details-card items use a component called 'OverviewDetailItem'. This isn't enforced in the extension and can cause undesired padding issues if they just do whatever they want.
I feel our approach here should be making the extension take the props of 'OverviewDetailItem' where 'children' is the new 'component'.
when defining two proxy endpoints,
apiVersion: console.openshift.io/v1alpha1
kind: ConsolePlugin
metadata:
  ...
  name: forklift-console-plugin
spec:
  displayName: Console Plugin Template
  proxy:
    service:
      basePath: /
I get two proxy endpoints
/api/proxy/plugin/forklift-console-plugin/forklift-inventory
and
/api/proxy/plugin/forklift-console-plugin/forklift-must-gather-api
but both proxy to the `forklift-must-gather-api` service
e.g.
curl to:
[server url]/api/proxy/plugin/forklift-console-plugin/forklift-inventory
will point to the `forklift-must-gather-api` service, instead of the `forklift-inventory` service
Following https://coreos.slack.com/archives/C011BL0FEKZ/p1650640804532309, it would be useful for us (network observability team) to have access to ResourceIcon in dynamic-plugin-sdk.
Currently ResourceLink is exported but not ResourceIcon
AC:
We should have a global notification, or the `Console plugins` page (e.g., k8s/cluster/operator.openshift.io~v1~Console/cluster/console-plugins) should alert users, when the console operator's `spec.managementState` is `Unmanaged`, as changes to `enabled` for plugins will have no effect.
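For context, the state in question is set on the console operator config, e.g.:
```yaml
apiVersion: operator.openshift.io/v1
kind: Console
metadata:
  name: cluster
spec:
  managementState: Unmanaged
```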
`@openshift-console/plugin-shared` (NPM) is a package that will contain shared components that can be upversioned separately by the Plugins so they can keep core compatibility low but upversion and support more shared components as we need them.
This isn't documented today. We need to do that.
The console has good error boundary components that are useful for dynamic plugin.
Exposing them will enable the plugins to get the same look and feel of handling react errors as console
The minimum requirement right now is to expose the ErrorBoundaryFallbackPage component from
https://github.com/openshift/console/blob/master/frontend/packages/console-shared/src/components/error/fallbacks/ErrorBoundaryFallbackPage.tsx
To align with https://github.com/openshift/dynamic-plugin-sdk, the plugin metadata field `dependencies`, as well as the `@console/pluginAPI` entry contained within it, should be made optional.
If a plugin doesn't declare the @console/pluginAPI dependency, the Console release version check should be skipped for that plugin.
Move `frontend/public/components/nav` to `packages/console-app/src/components/nav` and address any issues resulting from the move.
There will be some expected lint errors relating to cyclical imports. These will require some refactoring to address.
We neither use nor support static plugin nav extensions anymore so we should remove the API in the static plugin SDK and get rid of related cruft in our current nav components.
AC: Remove static plugin nav extensions code. Check the navigation code for any references to the old API.
This enhancement introduces support for provisioning and upgrading heterogeneous architecture clusters in phases.
We need to scan through the compute nodes and build a set of supported architectures from those. Each node in the cluster has a label for its architecture, e.g. kubernetes.io/arch=arm64, kubernetes.io/arch=amd64, etc. Based on the set of supported architectures, the console will need to surface in the OperatorHub only those operators which are supported on our nodes.
AC:
@jpoulin is good to ask about heterogeneous clusters.
This enhancement introduces support for provisioning and upgrading heterogeneous architecture clusters in phases.
We need to scan through the compute nodes and build a set of supported architectures from those. Each node in the cluster has a label for its architecture, e.g. `kubernetes.io/arch: arm64`, `kubernetes.io/arch: amd64`, etc. Based on the set of supported architectures, the console will need to surface in the OperatorHub only those operators which are supported on our nodes. Each operator's PackageManifest contains labels that indicate the operator's supported architectures, e.g. `operatorframework.io/arch.s390x: supported`. An operator can be supported on multiple architectures.
AC:
OS and arch filtering: https://github.com/openshift/console/blob/2ad4e17d76acbe72171407fc1c66ca4596c8aac4/frontend/packages/operator-lifecycle-manager/src/components/operator-hub/operator-hub-items.tsx#L49-L86
@jpoulin is good to ask about heterogeneous clusters.
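A sketch of the two pieces of data involved, as described above (values illustrative):
```yaml
# Node label describing the node's architecture
metadata:
  labels:
    kubernetes.io/arch: arm64
---
# PackageManifest label advertising that the operator supports that architecture
metadata:
  labels:
    operatorframework.io/arch.arm64: supported
```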
An epic we can duplicate for each release to ensure we have a place to catch things we ought to be doing regularly but can tend to fall by the wayside.
As a developer, I want to be able to clean up the css markup after making the css / scss changes required for dark mode and remove any old unused css / scss content.
Acceptance criteria:
As a user, I want to be able to:
so that I can achieve
Description of criteria:
Detail about what is specifically not being delivered in the story
1. Proposed title of this feature request
Basic authentication for Helm Chart repository in helmchartrepositories.helm.openshift.io CRD.
2. What is the nature and description of the request?
As of v4.6.9, the HelmChartRepository CRD only supports client TLS authentication through spec.connectionConfig.tlsClientConfig.
3. Why do you need this? (List the business requirements here)
Basic authentication is widely used by many chart repository managers (Nexus OSS, Artifactory, etc.).
The Helm CLI also supports it with the helm repo add command.
https://helm.sh/docs/helm/helm_repo_add/
4. How would you like to achieve this? (List the functional requirements here)
Probably by extending the CRD:
spec:
  connectionConfig:
    username: username
    password:
      secretName: secret-name
The secret namespace should be openshift-config to align with the tlsClientConfig behavior.
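A sketch of the referenced secret, assuming it lives in openshift-config as stated (the key inside the secret is an assumption, not defined by the request):
```yaml
apiVersion: v1
kind: Secret
metadata:
  name: secret-name
  namespace: openshift-config
type: Opaque
stringData:
  # Key name assumed for illustration
  password: <repository-password>
```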
5. For each functional requirement listed in question 4, specify how Red Hat and the customer can test to confirm the requirement is successfully implemented.
Trying to pull Helm charts from remote private chart repositories that have disabled anonymous access and offer basic authentication.
E.g.: https://github.com/sonatype/docker-nexus
As an OCP user, I would like to be able to install Helm charts from repos added to ODC with basic authentication fields populated.
We need to support helm installs for Repos that have the basic authentication secret name and namespace.
Updating the ProjectHelmChartRepository CRD is already done in a different story.
Supporting the HelmChartRepository CR: this feature will be scoped first to project/namespace-scoped repos.
<Defines what is included in this story>
If the new fields for basic auth are set in the repo CR, then use those credentials when making API calls to Helm to install/upgrade charts. We will error out if the logged-in user does not have access to the secret referenced by the repo CR. If the basic auth fields are not present, we assume it is not an authenticated repo.
None
NA
I can list, install and update charts on authenticated repos from ODC
Needs Documentation both upstream and downstream
Needs new unit test covering repo auth
Dependencies identified
Blockers noted and expected delivery timelines set
Design is implementable
Acceptance criteria agreed upon
Story estimated
Unknown
Verified
Unsatisfied
ACCEPTANCE CRITERIA
NOTES
ACCEPTANCE CRITERIA
NOTES
This is a follow-up Epic to https://issues.redhat.com/browse/MCO-144, which aimed to get in-place upgrades for HyperShift. This epic aims to capture additional work to focus on using CoreOS/OCP layering in HyperShift, which has benefits such as:
- removing or reducing the need for ignition
- maintaining feature parity between self-driving and managed OCP models
- adding additional functionality such as hotfixes
Right now in https://github.com/openshift/hypershift/pull/1258 you can only perform one upgrade at a time. Multiple upgrades will break due to controller logic
Properly create logic to handle manifest creation/updates and deletion, so the logic is more bulletproof
Currently not implemented, and will require the MCD hypershift mode to be adjusted to handle disruptionless upgrades like regular MCD
We plan to build Ironic Container Images using RHEL9 as base image in OCP 4.12
This is required because the ironic components have abandoned support for CentOS Stream 8 and Python 3.6/3.7 upstream during the most recent development cycle that will produce the stable Zed release, in favor of CentOS Stream 9 and Python 3.8/3.9
More info on RHEL8 to RHEL9 transition in OCP can be found at https://docs.google.com/document/d/1N8KyDY7KmgUYA9EOtDDQolebz0qi3nhT20IOn4D-xS4
Update ironic software to pick up the latest bug fixes.
1. Proposed title of this feature request
Delete worker nodes using GitOps / ACM workflow
2. What is the nature and description of the request?
We use SiteConfig to deploy a cluster using the GitOps / ACM workflow. We can also use SiteConfig to add worker nodes to an existing cluster. However, today we cannot delete a worker node using the GitOps / ACM workflow. We need to go and manually delete the resources (BMH, NMStateConfig, etc.) and the OpenShift node. We would like to have the node deleted as part of the GitOps workflow.
3. Why does the customer need this? (List the business requirements here)
Worker nodes may need to be replaced for any reason (hardware failures) which may require deletion of a node.
If we are colocating OpenShift and OpenStack control planes on the same infrastructure (using OpenStack director operator to create OpenStack control plane in OCP virtualization), then we also have the use case of assigning baremetal nodes as OpenShift worker nodes or OpenStack compute nodes. Over time we may need to change the role of those baremetal nodes (from worker to compute or from compute to worker). Having the ability to delete worker nodes via GitOps will make it easier to automate that use case.
4. List any affected packages or components.
ACM, GitOps
There is a requirement to handle removal and cleaning of nodes installed into spoke clusters in the ZTP flow (driven by git ops).
The currently proposed solution for this would use the hub cluster BMH to clean the host as it's already configured and can be used for either BM or non-platform spoke clusters.
This removal should be triggered by the deletion of the BMH, but if the BMH is removed we can't also use it to handle deprovisioning the host.
If another finalizer is configured on the BMH, BMO should assume that the host is not ready to be deleted.
Testing steps:
Deprovisioning should wait until the detached annotation is removed, previously the host was deleted before deprovisioning could run.
Same thing as we've had in assisted-service. We sometimes fail to install golangci-lint by fetching release artifacts from GitHub directly. That's usually because the same IP address (CI build cluster) tries to access GitHub at a high rate, leading to 429 (too many requests).
The way we fixed it for assisted-service was to change the installation to use a quay.io image that already contains the binary.
Example for such a failure: https://prow.ci.openshift.org/view/gs/origin-ci-test/pr-logs/pull/openshift_release/30788/rehearse-30788-periodic-ci-openshift-assisted-installer-agent-release-ocm-2.6-subsystem-test-periodic/1551879759036682240
Filter for all recent failures: https://search.ci.openshift.org/?search=golangci%2Fgolangci-lint+crit+unable+to+find&maxAge=168h&context=1&type=build-log&name=.*assisted.*&excludeName=&maxMatches=5&maxBytes=20971520&groupBy=job
This is an API change and we will consider this as a feature request.
https://issues.redhat.com/browse/NE-799 Please check this for more details
https://issues.redhat.com/browse/NE-799 Please check this for more details
No
N/A
As a developer building container images on OpenShift
I want to specify that my build should run without elevated privileges
So that builds do not run as root from the host's perspective with elevated privileges
No QE required for Dev Preview. OpenShift regression testing will verify that existing behavior is not impacted.
We will need to document how to enable this feature, with sufficient warnings regarding Dev Preview.
This likely warrants an OpenShift blog post.
Make sure that the CSI driver automatically updates oVirt credentials when they are updated in OpenShift.
In the CSI driver operator we should add the withSecretHashAnnotation call from library-go, like this: https://github.com/openshift/aws-ebs-csi-driver-operator/blob/53ed27b2a0eaa655338da180a79897855b366ac7/pkg/operator/starter.go#L138
We need tests for the ovirt-csi-driver and the cluster-api-provider-ovirt. These tests help us to
Also, having dedicated tests on lower levels with a smaller scope (unit, integration, ...) has the following benefits:
Integration tests need to be implemented according to https://cluster-api.sigs.k8s.io/developer/testing.html#integration-tests using envtest.
As a user, I would like to be informed in an intuitive way when quotas have been reached in a namespace
Refer below for more details
As a user, in the topology view, I would like to be informed intuitively if any of the deployments have reached quota limits
Refer below for more details
Provide a form driven experience to allow cluster admins to manage the perspectives to meet the ACs below.
We have heard the following requests from customers and developer advocates:
As an admin, I want to hide the admin perspective for non-privileged users or hide the developer perspective for all users
Based on the https://issues.redhat.com/browse/ODC-6730 enhancement proposal, it is required to extend the console configuration CRD to enable the cluster admins to configure this data in the console resource
Previous customization work:
As an admin, I want to hide user perspective(s) based on the customization.
As an admin, I want to be able to use a form driven experience to hide user perspective(s)
As an admin, I should be able to see a code snippet that shows how to add user perspectives
Based on the https://issues.redhat.com/browse/ODC-6732 enhancement proposal, the cluster admin can add user perspectives
To help the cluster admin configure the perspectives correctly, the developer console should provide a code snippet for the customization of the YAML resource (Console CRD), for example as sketched below.
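A rough sketch of such a snippet, with field names taken from the enhancement proposal (treat this as illustrative rather than the final API):
apiVersion: operator.openshift.io/v1
kind: Console
metadata:
  name: cluster
spec:
  customization:
    perspectives:
      - id: dev                  # the Developer perspective
        visibility:
          state: Disabled        # hide this perspective for all users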
Customize Perspective Enhancement PR: https://github.com/openshift/enhancements/pull/1205
Previous work:
Customers don't want their users to have access to some/all of the items which are available in the Developer Catalog. The request is to change access for the cluster, not per user or persona.
Provide a form driven experience to allow cluster admins easily disable the Developer Catalog, or one or more of the sub catalogs in the Developer Catalog.
Multiple customer requests.
We need to consider how this will work with sub-catalogs which are installed by operators: VMs, Event Sources, Event Catalogs, Managed Services, Cloud-based services
As an admin, I want to hide sub-catalogs in the developer catalog or hide the developer catalog completely based on the customization.
As a cluster-admin, I should be able to see a code snippet that shows how to enable sub-catalogs or the entire dev catalog.
Based on the https://issues.redhat.com/browse/ODC-6732 enhancement proposal, the cluster admin can add sub-catalog(s) from the Developer Catalog or the Dev catalog as a whole.
To help the cluster admin configure the sub-catalog list correctly, the developer console should provide a code snippet for the customization of the YAML resource (Console CRD), for example as sketched below.
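A rough sketch of such a snippet, again with field and type names based on the enhancement proposal (the sub-catalog type names are hypothetical):
apiVersion: operator.openshift.io/v1
kind: Console
metadata:
  name: cluster
spec:
  customization:
    developerCatalog:
      types:
        state: Disabled          # hide only the listed sub-catalog types
        disabled:
          - EventSource          # hypothetical sub-catalog type names
          - ManagedServices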
Previous work:
As an admin, I want to hide/disable access to specific sub-catalogs in the developer catalog or the complete dev catalog for all users across all namespaces.
Based on the https://issues.redhat.com/browse/ODC-6732 enhancement proposal, it is required to extend the console configuration CRD to enable the cluster admins to configure this data in the console resource
Extend the "customization" spec type definition for the CRD in the openshift/api project
Previous customization work:
As an admin, I would like openshift-* namespaces with an operator to be labeled with security.openshift.io/scc.podSecurityLabelSync=true to ensure the continued functioning of operators without manual intervention. The label should only be applied to openshift-* namespaces with an operator (the presence of a ClusterServiceVersion resource) if the label is not already present. This automation will help ensure smooth functioning of the cluster and avoid unnecessary operational events.
Context: As part of the PSA migration period, OpenShift will ship with the "label sync'er" - a controller that will automatically adjust PSA security profiles in response to the workloads present in the namespace. We can assume that not all operators (produced by Red Hat, the community or ISVs) will have successfully migrated their deployments in response to upstream PSA changes. By default, the label sync'er syncs any namespace not prefixed with "openshift-"; for "openshift-" namespaces, the explicit label (security.openshift.io/scc.podSecurityLabelSync=true) is required to opt in to syncing.
A/C:
- OLM operator has been modified (downstream only) to label any unlabelled "openshift-" namespace in which a CSV has been created
- If a labeled namespace containing at least one non-copied csv becomes unlabelled, it should be relabelled
- The implementation should be done in a way to eliminate or minimize subsequent downstream sync work (it is ok to make slight architectural changes to the OLM operator in the upstream to enable this)
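Concretely, the expected outcome is that a namespace like the following (hypothetical name) carries the opt-in label once a CSV is present in it:
apiVersion: v1
kind: Namespace
metadata:
  name: openshift-example-operator                           # hypothetical operator namespace
  labels:
    security.openshift.io/scc.podSecurityLabelSync: "true"   # added by OLM when a CSV exists and the label is not already set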
Description of problem:
CU cluster of the Mavenir deployment has cluster-node-tuning-operator in a CrashLoopBackOff state and does not apply performance profile
Version-Release number of selected component (if applicable):
4.14rc0 and 4.14rc1
How reproducible:
100%
Steps to Reproduce:
1. Deploy the CU cluster with the ZTP GitOps method
2. Wait for Policies to be compliant
3. Check the worker nodes and cluster-node-tuning-operator status
Actual results:
Nodes do not have performance profile applied cluster-node-tuning-operator is crashing with following in logs: E0920 12:16:57.820680 1 runtime.go:79] Observed a panic: &runtime.TypeAssertionError{_interface:(*runtime._type)(nil), concrete:(*runtime._type)(nil), asserted:(*runtime._type)(0x1e68ec0), missingMethod:""} (interface conversion: interface is nil, not v1.Object) goroutine 615 [running]: k8s.io/apimachinery/pkg/util/runtime.logPanic({0x1c98c20?, 0xc0006b7a70}) /go/src/github.com/openshift/cluster-node-tuning-operator/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:75 +0x99 k8s.io/apimachinery/pkg/util/runtime.HandleCrash({0x0, 0x0, 0xc000d49500?}) /go/src/github.com/openshift/cluster-node-tuning-operator/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:49 +0x75 panic({0x1c98c20, 0xc0006b7a70}) /usr/lib/golang/src/runtime/panic.go:884 +0x213 github.com/openshift/cluster-node-tuning-operator/pkg/util.ObjectInfo({0x0?, 0x0}) /go/src/github.com/openshift/cluster-node-tuning-operator/pkg/util/objectinfo.go:10 +0x39 github.com/openshift/cluster-node-tuning-operator/pkg/operator.(*ProfileCalculator).machineConfigLabelsMatch(0xc000a23ca0?, 0xc000445620, {0xc0001b38e0, 0x1, 0xc0010bd480?}) /go/src/github.com/openshift/cluster-node-tuning-operator/pkg/operator/profilecalculator.go:374 +0xc7 github.com/openshift/cluster-node-tuning-operator/pkg/operator.(*ProfileCalculator).calculateProfile(0xc000607290, {0xc000a40900, 0x33}) /go/src/github.com/openshift/cluster-node-tuning-operator/pkg/operator/profilecalculator.go:208 +0x2b9 github.com/openshift/cluster-node-tuning-operator/pkg/operator.(*Controller).syncProfile(0xc000195b00, 0x0?, {0xc000a40900, 0x33}) /go/src/github.com/openshift/cluster-node-tuning-operator/pkg/operator/controller.go:664 +0x6fd github.com/openshift/cluster-node-tuning-operator/pkg/operator.(*Controller).sync(0xc000195b00, {{0x1f48661, 0x7}, {0xc000000fc0, 0x26}, {0xc000a40900, 0x33}, {0x0, 0x0}}) /go/src/github.com/openshift/cluster-node-tuning-operator/pkg/operator/controller.go:371 +0x1571 github.com/openshift/cluster-node-tuning-operator/pkg/operator.(*Controller).eventProcessor.func1(0xc000195b00, {0x1dd49c0?, 0xc000d49500?}) /go/src/github.com/openshift/cluster-node-tuning-operator/pkg/operator/controller.go:193 +0x1de github.com/openshift/cluster-node-tuning-operator/pkg/operator.(*Controller).eventProcessor(0xc000195b00) /go/src/github.com/openshift/cluster-node-tuning-operator/pkg/operator/controller.go:212 +0x65 k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?) /go/src/github.com/openshift/cluster-node-tuning-operator/vendor/k8s.io/apimachinery/pkg/util/wait/backoff.go:226 +0x3e k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0x0?, {0x224ee20, 0xc000c48ab0}, 0x1, 0xc00087ade0) /go/src/github.com/openshift/cluster-node-tuning-operator/vendor/k8s.io/apimachinery/pkg/util/wait/backoff.go:227 +0xb6 k8s.io/apimachinery/pkg/util/wait.JitterUntil(0x0?, 0x3b9aca00, 0x0, 0x0?, 0xc0004e6710?) /go/src/github.com/openshift/cluster-node-tuning-operator/vendor/k8s.io/apimachinery/pkg/util/wait/backoff.go:204 +0x89 k8s.io/apimachinery/pkg/util/wait.Until(0xc0004e67d0?, 0x91af86?, 0xc000ace0c0?) 
/go/src/github.com/openshift/cluster-node-tuning-operator/vendor/k8s.io/apimachinery/pkg/util/wait/backoff.go:161 +0x25 created by github.com/openshift/cluster-node-tuning-operator/pkg/operator.(*Controller).run /go/src/github.com/openshift/cluster-node-tuning-operator/pkg/operator/controller.go:1407 +0x1ba5 panic: interface conversion: interface is nil, not v1.Object [recovered] panic: interface conversion: interface is nil, not v1.Object
Expected results:
cluster-node-tuning-operator is functional, performance profiles applied to worker nodes
Additional info:
There is no issue on a DU node of the same deployment coming from the same repository; the DU node is configured as requested and cluster-node-tuning-operator is functioning correctly.
Must-gather from rc0: https://drive.google.com/file/d/1DlzrjQiKTVnQKXdcRIijBkEKjAGsOFn1/view?usp=sharing
Must-gather from rc1: https://drive.google.com/file/d/1qSqQtIunQe5e1hDVDYwa90L9MpEjEA4j/view?usp=sharing
Performance profile: https://gitlab.cee.redhat.com/agurenko/mavenir-ztp/-/blob/airtel-4.14/policygentemplates/group-cu-mno-ranGen.yaml
We migrated most components as part of https://issues.redhat.com/browse/RHSTOR-2165
We now have a few components remaining, roughly 15 to 20%. This epic targets:
1) Add support for in-tree modal launcher
As an SRE, I want the hypershift operator to expose a metric when the hosted control plane is ready.
This should allow SRE to tune (or silence) alerts occurring while the hosted control plane is spinning up.
The Kube APIServer has a sidecar to output audit logs. We need similar sidecars for other APIServers that run on the control plane side. We also need to pass the same audit log policy that we pass to the KAS to these other API servers.
This epic tracks network tooling improvements for 4.12
A new framework and process should be developed to make sharing network tools with devs, support and customers convenient. We are going to add some tools for OVN troubleshooting before OVN-Kubernetes goes default, some tools that we got from customer cases, and some more to help analyze and debug collected logs based on the stable must-gather/sosreport format we get now thanks to the 4.11 Epic.
Our estimation for this Epic is 1 engineer * 2 Sprints
WHY:
This epic is important to help improve the time it takes our customers and our team to understand an issue within the cluster.
A focus of this epic is to develop tools that allow quickly debugging a problematic cluster. This is crucial to help the engineering team scale. We want to provide a tool to our customers to help lower the cognitive burden of getting to the root cause of an issue.
Alert if any of the OVN controllers has been disconnected from the southbound database for a period of time, using the metric ovn_controller_southbound_database_connected.
The metric updates every 2 minutes so please be mindful of this when creating the alert.
If the controller is disconnected for 10 minutes, fire an alert.
DoD: Merged to CNO and tested by QE
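A minimal sketch of what such an alerting rule could look like (rule and alert names are hypothetical, and it assumes the metric reports 1 while connected; the metric name and the 10-minute window come from the description above):
apiVersion: monitoring.coreos.com/v1
kind: PrometheusRule
metadata:
  name: ovn-controller-connectivity            # hypothetical name
  namespace: openshift-ovn-kubernetes
spec:
  groups:
    - name: ovn-controller.rules
      rules:
        - alert: OVNControllerDisconnectedSouthboundDatabase   # hypothetical alert name
          expr: ovn_controller_southbound_database_connected == 0
          for: 10m             # the metric only updates every ~2 minutes, so 10m covers several updates
          labels:
            severity: warning
          annotations:
            summary: ovn-controller has been disconnected from the southbound database for more than 10 minutes.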
Add a SOCKS proxy to cluster-network-operator so egress IP can use gRPC to reach worker nodes.
With the introduction of gRPC as the means for determining the state of a given egress node, hypershift should be able to leverage the SOCKS proxy and become able to know the state of each egress node.
References relevant to this work:
1281-network-proxy
https://coreos.slack.com/archives/C01C8502FMM/p1658427627751939
https://github.com/openshift/hypershift/pull/1131/commits/28546dc587dc028dc8bded715847346ff99d65ea
This Epic is here to track the rebase we need to do when kube 1.25 is GA https://www.kubernetes.dev/resources/release/
Keeping this in mind can help us plan our time better. At the time of writing, GA is planned for August 23.
https://docs.google.com/document/d/1h1XsEt1Iug-W9JRheQas7YRsUJ_NQ8ghEMVmOZ4X-0s/edit --> this is the link for rebase help
We need to rebase cloud network config controller to 1.25 when the kube 1.25 rebase lands.
This section includes Jira cards that are linked to an Epic, but the Epic itself is not linked to any Feature. These epics were not completed when this image was assembled
Rebase openshift/etcd to latest 3.5.11 upstream release.
Rebase openshift/etcd to latest 3.5.12 upstream release.
Placeholder epic to track spontaneous tasks that do not deserve their own epic.
AC:
We have the connectDirectlyToCloudAPIs flag in the konnectivity socks5 proxy to dial directly to cloud providers without going through konnectivity.
This introduces another path for exceptions: https://github.com/openshift/hypershift/pull/1722
We should consolidate both by keeping connectDirectlyToCloudAPIs until there's a reason not to.
DoD:
At the moment, if the input etcd KMS encryption (key and role) is invalid, we fail silently.
We should check that both key and role are compatible/operational for a given cluster and otherwise surface the failure in a condition.
AWS has a hard limit of 100 OIDC providers globally.
Currently each HostedCluster created by e2e creates its own OIDC provider, which results in hitting the quota limit frequently and causing the tests to fail as a result.
DOD:
Only a single OIDC provider should be created and shared between all e2e HostedClusters.
Once the HostedCluster and NodePool are paused using the PausedUntil statement, the awsprivatelink controller still continues reconciling.
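For reference, pausing is expressed via the pausedUntil field on the HostedCluster (and NodePool) spec; a minimal sketch (API version and values are illustrative):
apiVersion: hypershift.openshift.io/v1beta1
kind: HostedCluster
metadata:
  name: example          # hypothetical cluster name
  namespace: clusters
spec:
  pausedUntil: "true"    # or an RFC3339 date; while set, controllers are expected to stop reconciling this cluster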
How to test this:
Changes made in METAL-1 open up opportunities to improve our handling of images by cleaning up redundant code that generates extra work for the user and extra load for the cluster.
We only need to run the image cache DaemonSet if there is a QCOW URL to be mirrored (effectively this means a cluster installed with 4.9 or earlier). We can stop deploying it for new clusters installed with 4.10 or later.
Currently, the image-customization-controller relies on the image cache running on every master to provide the shared hostpath volume containing the ISO and initramfs. The first step is to replace this with a regular volume and an init container in the i-c-c pod that extracts the images from machine-os-images. We can use the copy-metal -image-build flag (instead of -all used in the shared volume) to provide only the required images.
Once i-c-c has its own volume, we can switch the image extraction in the metal3 Pod's init container to use the -pxe flag instead of -all.
The machine-os-images init container for the image cache (not the metal3 Pod) can be removed. The whole image cache deployment is now optional and need only be started if provisioningOSDownloadURL is set (and in fact should be deleted if it is not).
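For context, the QCOW mirroring is driven by the provisioningOSDownloadURL field on the Provisioning CR; a sketch of the relevant bit (resource and field names to the best of my knowledge, the URL is hypothetical):
apiVersion: metal3.io/v1alpha1
kind: Provisioning
metadata:
  name: provisioning-configuration
spec:
  provisioningOSDownloadURL: http://example.com/rhcos-live.x86_64.qcow2.gz?sha256=abc123   # hypothetical; if unset, the image cache DaemonSet need not run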
Description of the problem:
Cluster installation fails if the installation disk has LVM on RAID:
Host: test-infra-cluster-3cc862c9-master-0, reached installation stage Failed: failed executing nsenter [--target 1 --cgroup --mount --ipc --pid -- mdadm --stop /dev/md0], Error exit status 1, LastOutput "mdadm: Cannot get exclusive access to /dev/md0:Perhaps a running process, mounted filesystem or active volume group?"
How reproducible:
100%
Steps to reproduce:
1. Install a cluster while the master nodes have a disk with LVM on RAID (reproduces using this test: https://gitlab.cee.redhat.com/ocp-edge-qe/kni-assisted-installer-auto/-/blob/master/api_tests/test_disk_cleanup.py#L97)
Actual results:
Installation failed
Expected results:
Installation success
Description of the problem:
When running assisted-installer on a machine where there is more than one volume group per physical volume, only the first volume group will be cleaned up. This leads to problems later and to errors such as:
Failed - failed executing nsenter [--target 1 --cgroup --mount --ipc --pid -- pvremove /dev/sda -y -ff], Error exit status 5, LastOutput "Can't open /dev/sda exclusively. Mounted filesystem?
How reproducible:
Set up a VM with more than one volume group per physical volume. As an example, look at the following sample from a customer cluster.
List block devices /usr/bin/lsblk -o NAME,MAJ:MIN,SIZE,TYPE,FSTYPE,KNAME,MODEL,UUID,WWN,HCTL,VENDOR,STATE,TRAN,PKNAME NAME MAJ:MIN SIZE TYPE FSTYPE KNAME MODEL UUID WWN HCTL VENDOR STATE TRAN PKNAME loop0 7:0 125.9G loop xfs loop0 c080b47b-2291-495c-8cc0-2009ebc39839 loop1 7:1 885.5M loop squashfs loop1 sda 8:0 894.3G disk sda INTEL SSDSC2KG96 0x55cd2e415235b2db 1:0:0:0 ATA running sas |-sda1 8:1 250M part sda1 0x55cd2e415235b2db sda |-sda2 8:2 750M part ext2 sda2 3aa73c72-e342-4a07-908c-a8a49767469d 0x55cd2e415235b2db sda |-sda3 8:3 49G part xfs sda3 ffc3ccfe-f150-4361-8ae5-f87b17c13ac2 0x55cd2e415235b2db sda |-sda4 8:4 394.2G part LVM2_member sda4 Ua3HOc-Olm4-1rma-q0Ug-PtzI-ZOWg-RJ63uY 0x55cd2e415235b2db sda `-sda5 8:5 450G part LVM2_member sda5 W8JqrD-ZvaC-uNK9-Y03D-uarc-Tl4O-wkDdhS 0x55cd2e415235b2db sda `-nova-instance 253:0 3.1T lvm ext4 dm-0 d15e2de6-2b97-4241-9451-639f7b14594e running sda5 sdb 8:16 894.3G disk sdb INTEL SSDSC2KG96 0x55cd2e415235b31b 1:0:1:0 ATA running sas `-sdb1 8:17 894.3G part LVM2_member sdb1 6ETObl-EzTd-jLGw-zVNc-lJ5O-QxgH-5wLAqD 0x55cd2e415235b31b sdb `-nova-instance 253:0 3.1T lvm ext4 dm-0 d15e2de6-2b97-4241-9451-639f7b14594e running sdb1 sdc 8:32 894.3G disk sdc INTEL SSDSC2KG96 0x55cd2e415235b652 1:0:2:0 ATA running sas `-sdc1 8:33 894.3G part LVM2_member sdc1 pBuktx-XlCg-6Mxs-lddC-qogB-ahXa-Nd9y2p 0x55cd2e415235b652 sdc `-nova-instance 253:0 3.1T lvm ext4 dm-0 d15e2de6-2b97-4241-9451-639f7b14594e running sdc1 sdd 8:48 894.3G disk sdd INTEL SSDSC2KG96 0x55cd2e41521679b7 1:0:3:0 ATA running sas `-sdd1 8:49 894.3G part LVM2_member sdd1 exVSwU-Pe07-XJ6r-Sfxe-CQcK-tu28-Hxdnqo 0x55cd2e41521679b7 sdd `-nova-instance 253:0 3.1T lvm ext4 dm-0 d15e2de6-2b97-4241-9451-639f7b14594e running sdd1 sr0 11:0 989M rom iso9660 sr0 Virtual CDROM0 2022-06-17-18-18-33-00 0:0:0:0 AMI running usb
Now run the assisted installer and try to install an SNO node on this machine; you will find that the installation fails with a message indicating that it could not exclusively access /dev/sda.
Actual results:
The installation will fail with a message that indicates that it could not exclusively access /dev/sda
Expected results:
The installation should proceed and the cluster should start to install.
Suspected Cases
https://issues.redhat.com/browse/AITRIAGE-3809
https://issues.redhat.com/browse/AITRIAGE-3802
https://issues.redhat.com/browse/AITRIAGE-3810
This is a clone of issue MULTIARCH-3708. The following is the description of the original issue:
—
The following issues need to be taken care of on cluster deletion with the resource reuse flags.
Add new flags to utilise the existing resources in e2e test
This is a clone of issue MULTIARCH-3683. The following is the description of the original issue:
—
Flags similar to these https://github.com/openshift/hypershift/blob/main/cmd/cluster/powervs/create.go#L57toL61 from the create command are missing in the destroy command, so the infra destroy functionality does not get these flags for a proper destroy of infra with existing resources.
Epic Goal
Why is this important?
Additional Context
Acceptance Criteria
Section 5 of PRD: https://docs.google.com/document/d/1fF-Ajdzc9EDDg687FzTrX577hvY9NdK0/edit#heading=h.gjdgxs
Testing and collaboration with NVIDIA: https://docs.google.com/spreadsheets/d/1LHY-Af-2kQHVwtW4aVdHnmwZLTiatiyf-ySffC8O5NM/edit#gid=0
Deploying Nvidia Patches: https://docs.google.com/document/d/1yR4lphjPKd6qZ9sGzZITl0wH1r4ykfMKPjUnlzvWji4/edit#
This is the continuation of https://issues.redhat.com/browse/NHE-273, but now the focus is on the remaining flows.
Description of problem:
check_pkt_length cannot be offloaded without 1) sFlow offload patches in Open vSwitch and 2) hardware driver support. Since 1) will not be done anytime soon, we need a workaround for the check_pkt_length issue.
Version-Release number of selected component (if applicable):
4.11/4.12
How reproducible:
Always
Steps to Reproduce:
1. Any flow that has check_pkt_len()
5-b: Pod -> NodePort Service traffic (Pod Backend - Different Node)
6-b: Pod -> NodePort Service traffic (Host Backend - Different Node)
4-b: Pod -> Cluster IP Service traffic (Host Backend - Different Node)
10-b: Host Pod -> Cluster IP Service traffic (Host Backend - Different Node)
11-b: Host Pod -> NodePort Service traffic (Pod Backend - Different Node)
12-b: Host Pod -> NodePort Service traffic (Host Backend - Different Node)
Actual results:
Poor performance due to upcalls when check_pkt_len() is not supported.
Expected results:
Good performance.
Additional info:
https://docs.google.com/spreadsheets/d/1LHY-Af-2kQHVwtW4aVdHnmwZLTiatiyf-ySffC8O5NM/edit#gid=670206692
OCP/Telco Definition of Done
Epic Template descriptions and documentation.
<--- Cut-n-Paste the entire contents of this description into your new Epic --->
We have been running into a number of problems with configure-ovs and nodeip-configuration selecting different interfaces in OVNK deployments. This causes connectivity issues, so we need some way to ensure that everything uses the same interface/IP.
Currently configure-ovs runs before nodeip-configuration, but since nodeip-configuration is the source of truth for IP selection regardless of CNI plugin, I think we need to look at swapping that order. That way configure-ovs could look at what nodeip-configuration chose and not have to implement its own interface selection logic.
I'm targeting this at 4.12 because even though there's probably still time to get it in for 4.11, changing the order of boot services is always a little risky and I'd prefer to do it earlier in the cycle so we have time to tease out any issues that arise. We may need to consider backporting the change though since this has been an issue at least back to 4.10.
Goal
Provide an indication that advanced features are used
Problem
Today, customers and RH don't have the information on the actual usage of advanced features.
Why is this important?
Prioritized Scenarios
In Scope
1. Add a boolean variable in our telemetry to mark if the customer is using advanced features (PV encryption, encryption with KMS, external mode).
Not in Scope
Integrate with subscription watch - will be done by the subscription watch team with our help.
Customers
All
Customer Facing Story
As a compliance manager, I should be able to easily see if all my clusters are using the right number of subscriptions
What does success look like?
A clear indication in subscription watch for ODF usage (either essential or advanced).
Link to main epic: https://issues.redhat.com/browse/RHSTOR-3173
This section includes Jira cards that are not linked to either an Epic or a Feature. These tickets were completed when this image was assembled
This is a clone of issue OCPBUGS-10794. The following is the description of the original issue:
—
Description of problem:
Our telemetry contains only the vCenter version ("7.0.3") and not the exact build number. We need the build number to know what exact vCenter build the user has and what bugs are fixed there (e.g. https://issues.redhat.com/browse/OCPBUGS-5817).
This is a clone of issue OCPBUGS-12272. The following is the description of the original issue:
—
This is a clone of issue OCPBUGS-11057. The following is the description of the original issue:
—
Description of problem:
When importing a Serverless Service from a git repository, the topology shows an Open URL decorator even when the "Add Route" checkbox was unselected (it is selected by default).
The created kn Route makes the Service available within the cluster and the created URL looks like this: http://nodeinfo-private.serverless-test.svc.cluster.local
So the Service is NOT accidentally exposed. It's "just" that we link an internal route that will not be accessible to the user.
This might also happen for the Serverless functions import flow and the container image import flow.
Version-Release number of selected component (if applicable):
Tested older versions and could see this at least on 4.10+
How reproducible:
Always
Steps to Reproduce:
Actual results:
The topology shows the new kn Service with a Open URL decorator on the top right corner.
The button is clickable but the target page could not be opened (as expected).
Expected results:
The topology should not show an Open URL decorator for "private" kn Routes.
The topology sidebar shows similar information; we should maybe replace the link there as well with a text + copy button?
A fix should be tested with Serverless functions as well as with container images!
Additional info:
When the user unselects the "Add route" option an additional label is added to the kn Service. This label could also be added and removed later. When this label is specified the Open URL decorator should not be shown:
metadata:
  labels:
    networking.knative.dev/visibility: cluster-local
See also:
This is a clone of issue OCPBUGS-10846. The following is the description of the original issue:
—
CI is flaky because the TestClientTLS test fails.
I have seen these failures in 4.13 and 4.14 CI jobs.
Presently, search.ci reports the following stats for the past 14 days:
Found in 16.07% of runs (20.93% of failures) across 56 total runs and 13 jobs (76.79% failed) in 185ms
1. Post a PR and have bad luck.
2. Check https://search.ci.openshift.org/?search=FAIL%3A+TestAll%2Fparallel%2FTestClientTLS&maxAge=336h&context=1&type=all&name=cluster-ingress-operator&excludeName=&maxMatches=5&maxBytes=20971520&groupBy=job.
The test fails:
=== RUN TestAll/parallel/TestClientTLS === PAUSE TestAll/parallel/TestClientTLS === CONT TestAll/parallel/TestClientTLS === CONT TestAll/parallel/TestClientTLS stdout: Healthcheck requested 200 stderr: * Added canary-openshift-ingress-canary.apps.ci-op-21xplx9n-43abb.origin-ci-int-aws.dev.rhcloud.com:443:172.30.53.236 to DNS cache * Rebuilt URL to: https://canary-openshift-ingress-canary.apps.ci-op-21xplx9n-43abb.origin-ci-int-aws.dev.rhcloud.com/ * Hostname canary-openshift-ingress-canary.apps.ci-op-21xplx9n-43abb.origin-ci-int-aws.dev.rhcloud.com was found in DNS cache * Trying 172.30.53.236... * TCP_NODELAY set % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed * ALPN, offering h2 * ALPN, offering http/1.1 * successfully set certificate verify locations: * CAfile: /etc/pki/tls/certs/ca-bundle.crt CApath: none } [5 bytes data] * TLSv1.3 (OUT), TLS handshake, Client hello (1): } [512 bytes data] * TLSv1.3 (IN), TLS handshake, Server hello (2): { [122 bytes data] * TLSv1.3 (IN), TLS handshake, [no content] (0): { [1 bytes data] * TLSv1.3 (IN), TLS handshake, Encrypted Extensions (8): { [10 bytes data] * TLSv1.3 (IN), TLS handshake, [no content] (0): { [1 bytes data] * TLSv1.3 (IN), TLS handshake, Request CERT (13): { [82 bytes data] * TLSv1.3 (IN), TLS handshake, [no content] (0): { [1 bytes data] * TLSv1.3 (IN), TLS handshake, Certificate (11): { [1763 bytes data] * TLSv1.3 (IN), TLS handshake, [no content] (0): { [1 bytes data] * TLSv1.3 (IN), TLS handshake, CERT verify (15): { [264 bytes data] * TLSv1.3 (IN), TLS handshake, [no content] (0): { [1 bytes data] * TLSv1.3 (IN), TLS handshake, Finished (20): { [36 bytes data] * TLSv1.3 (OUT), TLS change cipher, Change cipher spec (1): } [1 bytes data] * TLSv1.3 (OUT), TLS handshake, [no content] (0): } [1 bytes data] * TLSv1.3 (OUT), TLS handshake, Certificate (11): } [8 bytes data] * TLSv1.3 (OUT), TLS handshake, [no content] (0): } [1 bytes data] * TLSv1.3 (OUT), TLS handshake, Finished (20): } [36 bytes data] * SSL connection using TLSv1.3 / TLS_AES_128_GCM_SHA256 * ALPN, server did not agree to a protocol * Server certificate: * subject: CN=*.client-tls.ci-op-21xplx9n-43abb.origin-ci-int-aws.dev.rhcloud.com * start date: Mar 22 18:55:46 2023 GMT * expire date: Mar 21 18:55:47 2025 GMT * issuer: CN=ingress-operator@1679509964 * SSL certificate verify result: self signed certificate in certificate chain (19), continuing anyway. 
} [5 bytes data] * TLSv1.3 (OUT), TLS app data, [no content] (0): } [1 bytes data] > GET / HTTP/1.1 > Host: canary-openshift-ingress-canary.apps.ci-op-21xplx9n-43abb.origin-ci-int-aws.dev.rhcloud.com > User-Agent: curl/7.61.1 > Accept: */* > { [5 bytes data] * TLSv1.3 (IN), TLS handshake, [no content] (0): { [1 bytes data] * TLSv1.3 (IN), TLS handshake, Newsession Ticket (4): { [313 bytes data] * TLSv1.3 (IN), TLS handshake, [no content] (0): { [1 bytes data] * TLSv1.3 (IN), TLS handshake, Newsession Ticket (4): { [313 bytes data] * TLSv1.3 (IN), TLS app data, [no content] (0): { [1 bytes data] < HTTP/1.1 200 OK < x-request-port: 8080 < date: Wed, 22 Mar 2023 18:56:24 GMT < content-length: 22 < content-type: text/plain; charset=utf-8 < set-cookie: c6e529a6ab19a530fd4f1cceb91c08a9=683c60a6110214134bed475edc895cb9; path=/; HttpOnly; Secure; SameSite=None < cache-control: private < { [22 bytes data] * Connection #0 to host canary-openshift-ingress-canary.apps.ci-op-21xplx9n-43abb.origin-ci-int-aws.dev.rhcloud.com left intact stdout: Healthcheck requested 200 stderr: * Added canary-openshift-ingress-canary.apps.ci-op-21xplx9n-43abb.origin-ci-int-aws.dev.rhcloud.com:443:172.30.53.236 to DNS cache * Rebuilt URL to: https://canary-openshift-ingress-canary.apps.ci-op-21xplx9n-43abb.origin-ci-int-aws.dev.rhcloud.com/ * Hostname canary-openshift-ingress-canary.apps.ci-op-21xplx9n-43abb.origin-ci-int-aws.dev.rhcloud.com was found in DNS cache * Trying 172.30.53.236... * TCP_NODELAY set % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed * ALPN, offering h2 * ALPN, offering http/1.1 * successfully set certificate verify locations: * CAfile: /etc/pki/tls/certs/ca-bundle.crt CApath: none } [5 bytes data] * TLSv1.3 (OUT), TLS handshake, Client hello (1): } [512 bytes data] * TLSv1.3 (IN), TLS handshake, Server hello (2): { [122 bytes data] * TLSv1.3 (IN), TLS handshake, [no content] (0): { [1 bytes data] * TLSv1.3 (IN), TLS handshake, Encrypted Extensions (8): { [10 bytes data] * TLSv1.3 (IN), TLS handshake, [no content] (0): { [1 bytes data] * TLSv1.3 (IN), TLS handshake, Request CERT (13): { [82 bytes data] * TLSv1.3 (IN), TLS handshake, [no content] (0): { [1 bytes data] * TLSv1.3 (IN), TLS handshake, Certificate (11): { [1763 bytes data] * TLSv1.3 (IN), TLS handshake, [no content] (0): { [1 bytes data] * TLSv1.3 (IN), TLS handshake, CERT verify (15): { [264 bytes data] * TLSv1.3 (IN), TLS handshake, [no content] (0): { [1 bytes data] * TLSv1.3 (IN), TLS handshake, Finished (20): { [36 bytes data] * TLSv1.3 (OUT), TLS change cipher, Change cipher spec (1): } [1 bytes data] * TLSv1.3 (OUT), TLS handshake, [no content] (0): } [1 bytes data] * TLSv1.3 (OUT), TLS handshake, Certificate (11): } [799 bytes data] * TLSv1.3 (OUT), TLS handshake, [no content] (0): } [1 bytes data] * TLSv1.3 (OUT), TLS handshake, CERT verify (15): } [264 bytes data] * TLSv1.3 (OUT), TLS handshake, [no content] (0): } [1 bytes data] * TLSv1.3 (OUT), TLS handshake, Finished (20): } [36 bytes data] * SSL connection using TLSv1.3 / TLS_AES_128_GCM_SHA256 * ALPN, server did not agree to a protocol * Server certificate: * subject: CN=*.client-tls.ci-op-21xplx9n-43abb.origin-ci-int-aws.dev.rhcloud.com * start date: Mar 22 18:55:46 2023 GMT * expire date: Mar 21 18:55:47 2025 GMT * issuer: CN=ingress-operator@1679509964 * SSL certificate verify result: self signed certificate in certificate chain (19), continuing anyway. 
} [5 bytes data] * TLSv1.3 (OUT), TLS app data, [no content] (0): } [1 bytes data] > GET / HTTP/1.1 > Host: canary-openshift-ingress-canary.apps.ci-op-21xplx9n-43abb.origin-ci-int-aws.dev.rhcloud.com > User-Agent: curl/7.61.1 > Accept: */* > { [5 bytes data] * TLSv1.3 (IN), TLS handshake, [no content] (0): { [1 bytes data] * TLSv1.3 (IN), TLS handshake, Newsession Ticket (4): { [1097 bytes data] * TLSv1.3 (IN), TLS handshake, [no content] (0): { [1 bytes data] * TLSv1.3 (IN), TLS handshake, Newsession Ticket (4): { [1097 bytes data] * TLSv1.3 (IN), TLS app data, [no content] (0): { [1 bytes data] < HTTP/1.1 200 OK < x-request-port: 8080 < date: Wed, 22 Mar 2023 18:56:24 GMT < content-length: 22 < content-type: text/plain; charset=utf-8 < set-cookie: c6e529a6ab19a530fd4f1cceb91c08a9=eb40064e54af58007f579a6c82f2bcd7; path=/; HttpOnly; Secure; SameSite=None < cache-control: private < { [22 bytes data] * Connection #0 to host canary-openshift-ingress-canary.apps.ci-op-21xplx9n-43abb.origin-ci-int-aws.dev.rhcloud.com left intact stdout: Healthcheck requested 200 stderr: * Added canary-openshift-ingress-canary.apps.ci-op-21xplx9n-43abb.origin-ci-int-aws.dev.rhcloud.com:443:172.30.53.236 to DNS cache * Rebuilt URL to: https://canary-openshift-ingress-canary.apps.ci-op-21xplx9n-43abb.origin-ci-int-aws.dev.rhcloud.com/ * Hostname canary-openshift-ingress-canary.apps.ci-op-21xplx9n-43abb.origin-ci-int-aws.dev.rhcloud.com was found in DNS cache * Trying 172.30.53.236... * TCP_NODELAY set % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed * ALPN, offering h2 * ALPN, offering http/1.1 * successfully set certificate verify locations: * CAfile: /etc/pki/tls/certs/ca-bundle.crt CApath: none } [5 bytes data] * TLSv1.3 (OUT), TLS handshake, Client hello (1): } [512 bytes data] * TLSv1.3 (IN), TLS handshake, Server hello (2): { [122 bytes data] * TLSv1.3 (IN), TLS handshake, [no content] (0): { [1 bytes data] * TLSv1.3 (IN), TLS handshake, Encrypted Extensions (8): { [10 bytes data] * TLSv1.3 (IN), TLS handshake, [no content] (0): { [1 bytes data] * TLSv1.3 (IN), TLS handshake, Request CERT (13): { [82 bytes data] * TLSv1.3 (IN), TLS handshake, [no content] (0): { [1 bytes data] * TLSv1.3 (IN), TLS handshake, Certificate (11): { [1763 bytes data] * TLSv1.3 (IN), TLS handshake, [no content] (0): { [1 bytes data] * TLSv1.3 (IN), TLS handshake, CERT verify (15): { [264 bytes data] * TLSv1.3 (IN), TLS handshake, [no content] (0): { [1 bytes data] * TLSv1.3 (IN), TLS handshake, Finished (20): { [36 bytes data] * TLSv1.3 (OUT), TLS change cipher, Change cipher spec (1): } [1 bytes data] * TLSv1.3 (OUT), TLS handshake, [no content] (0): } [1 bytes data] * TLSv1.3 (OUT), TLS handshake, Certificate (11): } [802 bytes data] * TLSv1.3 (OUT), TLS handshake, [no content] (0): } [1 bytes data] * TLSv1.3 (OUT), TLS handshake, CERT verify (15): } [264 bytes data] * TLSv1.3 (OUT), TLS handshake, [no content] (0): } [1 bytes data] * TLSv1.3 (OUT), TLS handshake, Finished (20): } [36 bytes data] * SSL connection using TLSv1.3 / TLS_AES_128_GCM_SHA256 * ALPN, server did not agree to a protocol * Server certificate: * subject: CN=*.client-tls.ci-op-21xplx9n-43abb.origin-ci-int-aws.dev.rhcloud.com * start date: Mar 22 18:55:46 2023 GMT * expire date: Mar 21 18:55:47 2025 GMT * issuer: CN=ingress-operator@1679509964 * SSL certificate verify result: self signed certificate in certificate chain (19), continuing anyway. 
} [5 bytes data] * TLSv1.3 (OUT), TLS app data, [no content] (0): } [1 bytes data] > GET / HTTP/1.1 > Host: canary-openshift-ingress-canary.apps.ci-op-21xplx9n-43abb.origin-ci-int-aws.dev.rhcloud.com > User-Agent: curl/7.61.1 > Accept: */* > { [5 bytes data] * TLSv1.3 (IN), TLS handshake, [no content] (0): { [1 bytes data] * TLSv1.3 (IN), TLS handshake, Newsession Ticket (4): { [1097 bytes data] * TLSv1.3 (IN), TLS handshake, [no content] (0): { [1 bytes data] * TLSv1.3 (IN), TLS handshake, Newsession Ticket (4): { [1097 bytes data] * TLSv1.3 (IN), TLS app data, [no content] (0): { [1 bytes data] < HTTP/1.1 200 OK < x-request-port: 8080 < date: Wed, 22 Mar 2023 18:56:25 GMT < content-length: 22 < content-type: text/plain; charset=utf-8 < set-cookie: c6e529a6ab19a530fd4f1cceb91c08a9=104beed63d6a19782a5559400bd972b6; path=/; HttpOnly; Secure; SameSite=None < cache-control: private < { [22 bytes data] * Connection #0 to host canary-openshift-ingress-canary.apps.ci-op-21xplx9n-43abb.origin-ci-int-aws.dev.rhcloud.com left intact stdout: 000 stderr: * Added canary-openshift-ingress-canary.apps.ci-op-21xplx9n-43abb.origin-ci-int-aws.dev.rhcloud.com:443:172.30.53.236 to DNS cache * Rebuilt URL to: https://canary-openshift-ingress-canary.apps.ci-op-21xplx9n-43abb.origin-ci-int-aws.dev.rhcloud.com/ * Hostname canary-openshift-ingress-canary.apps.ci-op-21xplx9n-43abb.origin-ci-int-aws.dev.rhcloud.com was found in DNS cache * Trying 172.30.53.236... * TCP_NODELAY set % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed * ALPN, offering h2 * ALPN, offering http/1.1 * successfully set certificate verify locations: * CAfile: /etc/pki/tls/certs/ca-bundle.crt CApath: none } [5 bytes data] * TLSv1.3 (OUT), TLS handshake, Client hello (1): } [512 bytes data] * TLSv1.3 (IN), TLS handshake, Server hello (2): { [122 bytes data] * TLSv1.3 (IN), TLS handshake, [no content] (0): { [1 bytes data] * TLSv1.3 (IN), TLS handshake, Encrypted Extensions (8): { [10 bytes data] * TLSv1.3 (IN), TLS handshake, [no content] (0): { [1 bytes data] * TLSv1.3 (IN), TLS handshake, Request CERT (13): { [82 bytes data] * TLSv1.3 (IN), TLS handshake, [no content] (0): { [1 bytes data] * TLSv1.3 (IN), TLS handshake, Certificate (11): { [1763 bytes data] * TLSv1.3 (IN), TLS handshake, [no content] (0): { [1 bytes data] * TLSv1.3 (IN), TLS handshake, CERT verify (15): { [264 bytes data] * TLSv1.3 (IN), TLS handshake, [no content] (0): { [1 bytes data] * TLSv1.3 (IN), TLS handshake, Finished (20): { [36 bytes data] * TLSv1.3 (OUT), TLS change cipher, Change cipher spec (1): } [1 bytes data] * TLSv1.3 (OUT), TLS handshake, [no content] (0): } [1 bytes data] * TLSv1.3 (OUT), TLS handshake, Certificate (11): } [799 bytes data] * TLSv1.3 (OUT), TLS handshake, [no content] (0): } [1 bytes data] * TLSv1.3 (OUT), TLS handshake, CERT verify (15): } [264 bytes data] * TLSv1.3 (OUT), TLS handshake, [no content] (0): } [1 bytes data] * TLSv1.3 (OUT), TLS handshake, Finished (20): } [36 bytes data] * SSL connection using TLSv1.3 / TLS_AES_128_GCM_SHA256 * ALPN, server did not agree to a protocol * Server certificate: * subject: CN=*.client-tls.ci-op-21xplx9n-43abb.origin-ci-int-aws.dev.rhcloud.com * start date: Mar 22 18:55:46 2023 GMT * expire date: Mar 21 18:55:47 2025 GMT * issuer: CN=ingress-operator@1679509964 * SSL certificate verify result: self signed certificate in certificate chain (19), continuing anyway. 
} [5 bytes data] * TLSv1.3 (OUT), TLS app data, [no content] (0): } [1 bytes data] > GET / HTTP/1.1 > Host: canary-openshift-ingress-canary.apps.ci-op-21xplx9n-43abb.origin-ci-int-aws.dev.rhcloud.com > User-Agent: curl/7.61.1 > Accept: */* > { [5 bytes data] * TLSv1.3 (IN), TLS alert, [no content] (0): { [1 bytes data] * TLSv1.3 (IN), TLS alert, unknown CA (560): { [2 bytes data] * OpenSSL SSL_read: error:14094418:SSL routines:ssl3_read_bytes:tlsv1 alert unknown ca, errno 0 * Closing connection 0 curl: (56) OpenSSL SSL_read: error:14094418:SSL routines:ssl3_read_bytes:tlsv1 alert unknown ca, errno 0 === CONT TestAll/parallel/TestClientTLS stdout: 000 stderr: * Added canary-openshift-ingress-canary.apps.ci-op-21xplx9n-43abb.origin-ci-int-aws.dev.rhcloud.com:443:172.30.53.236 to DNS cache * Rebuilt URL to: https://canary-openshift-ingress-canary.apps.ci-op-21xplx9n-43abb.origin-ci-int-aws.dev.rhcloud.com/ * Hostname canary-openshift-ingress-canary.apps.ci-op-21xplx9n-43abb.origin-ci-int-aws.dev.rhcloud.com was found in DNS cache * Trying 172.30.53.236... * TCP_NODELAY set % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed * ALPN, offering h2 * ALPN, offering http/1.1 * successfully set certificate verify locations: * CAfile: /etc/pki/tls/certs/ca-bundle.crt CApath: none } [5 bytes data] * TLSv1.3 (OUT), TLS handshake, Client hello (1): } [512 bytes data] * TLSv1.3 (IN), TLS handshake, Server hello (2): { [122 bytes data] * TLSv1.3 (IN), TLS handshake, [no content] (0): { [1 bytes data] * TLSv1.3 (IN), TLS handshake, Encrypted Extensions (8): { [10 bytes data] * TLSv1.3 (IN), TLS handshake, [no content] (0): { [1 bytes data] * TLSv1.3 (IN), TLS handshake, Request CERT (13): { [82 bytes data] * TLSv1.3 (IN), TLS handshake, [no content] (0): { [1 bytes data] * TLSv1.3 (IN), TLS handshake, Certificate (11): { [1763 bytes data] * TLSv1.3 (IN), TLS handshake, [no content] (0): { [1 bytes data] * TLSv1.3 (IN), TLS handshake, CERT verify (15): { [264 bytes data] * TLSv1.3 (IN), TLS handshake, [no content] (0): { [1 bytes data] * TLSv1.3 (IN), TLS handshake, Finished (20): { [36 bytes data] * TLSv1.3 (OUT), TLS change cipher, Change cipher spec (1): } [1 bytes data] * TLSv1.3 (OUT), TLS handshake, [no content] (0): } [1 bytes data] * TLSv1.3 (OUT), TLS handshake, Certificate (11): } [8 bytes data] * TLSv1.3 (OUT), TLS handshake, [no content] (0): } [1 bytes data] * TLSv1.3 (OUT), TLS handshake, Finished (20): } [36 bytes data] * SSL connection using TLSv1.3 / TLS_AES_128_GCM_SHA256 * ALPN, server did not agree to a protocol * Server certificate: * subject: CN=*.client-tls.ci-op-21xplx9n-43abb.origin-ci-int-aws.dev.rhcloud.com * start date: Mar 22 18:55:46 2023 GMT * expire date: Mar 21 18:55:47 2025 GMT * issuer: CN=ingress-operator@1679509964 * SSL certificate verify result: self signed certificate in certificate chain (19), continuing anyway. 
} [5 bytes data] * TLSv1.3 (OUT), TLS app data, [no content] (0): } [1 bytes data] > GET / HTTP/1.1 > Host: canary-openshift-ingress-canary.apps.ci-op-21xplx9n-43abb.origin-ci-int-aws.dev.rhcloud.com > User-Agent: curl/7.61.1 > Accept: */* > { [5 bytes data] * TLSv1.3 (IN), TLS alert, [no content] (0): { [1 bytes data] * TLSv1.3 (IN), TLS alert, unknown (628): { [2 bytes data] * OpenSSL SSL_read: error:1409445C:SSL routines:ssl3_read_bytes:tlsv13 alert certificate required, errno 0 * Closing connection 0 curl: (56) OpenSSL SSL_read: error:1409445C:SSL routines:ssl3_read_bytes:tlsv13 alert certificate required, errno 0 === CONT TestAll/parallel/TestClientTLS stdout: Healthcheck requested 200 stderr: * Added canary-openshift-ingress-canary.apps.ci-op-21xplx9n-43abb.origin-ci-int-aws.dev.rhcloud.com:443:172.30.53.236 to DNS cache * Rebuilt URL to: https://canary-openshift-ingress-canary.apps.ci-op-21xplx9n-43abb.origin-ci-int-aws.dev.rhcloud.com/ * Hostname canary-openshift-ingress-canary.apps.ci-op-21xplx9n-43abb.origin-ci-int-aws.dev.rhcloud.com was found in DNS cache * Trying 172.30.53.236... * TCP_NODELAY set % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed * ALPN, offering h2 * ALPN, offering http/1.1 * successfully set certificate verify locations: * CAfile: /etc/pki/tls/certs/ca-bundle.crt CApath: none } [5 bytes data] * TLSv1.3 (OUT), TLS handshake, Client hello (1): } [512 bytes data] * TLSv1.3 (IN), TLS handshake, Server hello (2): { [122 bytes data] * TLSv1.3 (IN), TLS handshake, [no content] (0): { [1 bytes data] * TLSv1.3 (IN), TLS handshake, Encrypted Extensions (8): { [10 bytes data] * TLSv1.3 (IN), TLS handshake, [no content] (0): { [1 bytes data] * TLSv1.3 (IN), TLS handshake, Request CERT (13): { [82 bytes data] * TLSv1.3 (IN), TLS handshake, [no content] (0): { [1 bytes data] * TLSv1.3 (IN), TLS handshake, Certificate (11): { [1763 bytes data] * TLSv1.3 (IN), TLS handshake, [no content] (0): { [1 bytes data] * TLSv1.3 (IN), TLS handshake, CERT verify (15): { [264 bytes data] * TLSv1.3 (IN), TLS handshake, [no content] (0): { [1 bytes data] * TLSv1.3 (IN), TLS handshake, Finished (20): { [36 bytes data] * TLSv1.3 (OUT), TLS change cipher, Change cipher spec (1): } [1 bytes data] * TLSv1.3 (OUT), TLS handshake, [no content] (0): } [1 bytes data] * TLSv1.3 (OUT), TLS handshake, Certificate (11): } [799 bytes data] * TLSv1.3 (OUT), TLS handshake, [no content] (0): } [1 bytes data] * TLSv1.3 (OUT), TLS handshake, CERT verify (15): } [264 bytes data] * TLSv1.3 (OUT), TLS handshake, [no content] (0): } [1 bytes data] * TLSv1.3 (OUT), TLS handshake, Finished (20): } [36 bytes data] * SSL connection using TLSv1.3 / TLS_AES_128_GCM_SHA256 * ALPN, server did not agree to a protocol * Server certificate: * subject: CN=*.client-tls.ci-op-21xplx9n-43abb.origin-ci-int-aws.dev.rhcloud.com * start date: Mar 22 18:55:46 2023 GMT * expire date: Mar 21 18:55:47 2025 GMT * issuer: CN=ingress-operator@1679509964 * SSL certificate verify result: self signed certificate in certificate chain (19), continuing anyway. 
} [5 bytes data] * TLSv1.3 (OUT), TLS app data, [no content] (0): } [1 bytes data] > GET / HTTP/1.1 > Host: canary-openshift-ingress-canary.apps.ci-op-21xplx9n-43abb.origin-ci-int-aws.dev.rhcloud.com > User-Agent: curl/7.61.1 > Accept: */* > { [5 bytes data] * TLSv1.3 (IN), TLS handshake, [no content] (0): { [1 bytes data] * TLSv1.3 (IN), TLS handshake, Newsession Ticket (4): { [1097 bytes data] * TLSv1.3 (IN), TLS handshake, [no content] (0): { [1 bytes data] * TLSv1.3 (IN), TLS handshake, Newsession Ticket (4): { [1097 bytes data] * TLSv1.3 (IN), TLS app data, [no content] (0): { [1 bytes data] < HTTP/1.1 200 OK < x-request-port: 8080 < date: Wed, 22 Mar 2023 18:57:00 GMT < content-length: 22 < content-type: text/plain; charset=utf-8 < set-cookie: c6e529a6ab19a530fd4f1cceb91c08a9=683c60a6110214134bed475edc895cb9; path=/; HttpOnly; Secure; SameSite=None < cache-control: private < { [22 bytes data] * Connection #0 to host canary-openshift-ingress-canary.apps.ci-op-21xplx9n-43abb.origin-ci-int-aws.dev.rhcloud.com left intact === CONT TestAll/parallel/TestClientTLS stdout: Healthcheck requested 200 stderr: * Added canary-openshift-ingress-canary.apps.ci-op-21xplx9n-43abb.origin-ci-int-aws.dev.rhcloud.com:443:172.30.53.236 to DNS cache * Rebuilt URL to: https://canary-openshift-ingress-canary.apps.ci-op-21xplx9n-43abb.origin-ci-int-aws.dev.rhcloud.com/ * Hostname canary-openshift-ingress-canary.apps.ci-op-21xplx9n-43abb.origin-ci-int-aws.dev.rhcloud.com was found in DNS cache * Trying 172.30.53.236... * TCP_NODELAY set % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed * ALPN, offering h2 * ALPN, offering http/1.1 * successfully set certificate verify locations: * CAfile: /etc/pki/tls/certs/ca-bundle.crt CApath: none } [5 bytes data] * TLSv1.3 (OUT), TLS handshake, Client hello (1): } [512 bytes data] * TLSv1.3 (IN), TLS handshake, Server hello (2): { [122 bytes data] * TLSv1.3 (IN), TLS handshake, [no content] (0): { [1 bytes data] * TLSv1.3 (IN), TLS handshake, Encrypted Extensions (8): { [10 bytes data] * TLSv1.3 (IN), TLS handshake, [no content] (0): { [1 bytes data] * TLSv1.3 (IN), TLS handshake, Request CERT (13): { [82 bytes data] * TLSv1.3 (IN), TLS handshake, [no content] (0): { [1 bytes data] * TLSv1.3 (IN), TLS handshake, Certificate (11): { [1763 bytes data] * TLSv1.3 (IN), TLS handshake, [no content] (0): { [1 bytes data] * TLSv1.3 (IN), TLS handshake, CERT verify (15): { [264 bytes data] * TLSv1.3 (IN), TLS handshake, [no content] (0): { [1 bytes data] * TLSv1.3 (IN), TLS handshake, Finished (20): { [36 bytes data] * TLSv1.3 (OUT), TLS change cipher, Change cipher spec (1): } [1 bytes data] * TLSv1.3 (OUT), TLS handshake, [no content] (0): } [1 bytes data] * TLSv1.3 (OUT), TLS handshake, Certificate (11): } [802 bytes data] * TLSv1.3 (OUT), TLS handshake, [no content] (0): } [1 bytes data] * TLSv1.3 (OUT), TLS handshake, CERT verify (15): } [264 bytes data] * TLSv1.3 (OUT), TLS handshake, [no content] (0): } [1 bytes data] * TLSv1.3 (OUT), TLS handshake, Finished (20): } [36 bytes data] * SSL connection using TLSv1.3 / TLS_AES_128_GCM_SHA256 * ALPN, server did not agree to a protocol * Server certificate: * subject: CN=*.client-tls.ci-op-21xplx9n-43abb.origin-ci-int-aws.dev.rhcloud.com * start date: Mar 22 18:55:46 2023 GMT * expire date: Mar 21 18:55:47 2025 GMT * issuer: CN=ingress-operator@1679509964 * SSL certificate verify result: self signed certificate in certificate chain (19), continuing anyway. 
} [5 bytes data] * TLSv1.3 (OUT), TLS app data, [no content] (0): } [1 bytes data] > GET / HTTP/1.1 > Host: canary-openshift-ingress-canary.apps.ci-op-21xplx9n-43abb.origin-ci-int-aws.dev.rhcloud.com > User-Agent: curl/7.61.1 > Accept: */* > { [5 bytes data] * TLSv1.3 (IN), TLS handshake, [no content] (0): { [1 bytes data] * TLSv1.3 (IN), TLS handshake, Newsession Ticket (4): { [1097 bytes data] * TLSv1.3 (IN), TLS handshake, [no content] (0): { [1 bytes data] * TLSv1.3 (IN), TLS handshake, Newsession Ticket (4): { [1097 bytes data] * TLSv1.3 (IN), TLS app data, [no content] (0): { [1 bytes data] < HTTP/1.1 200 OK < x-request-port: 8080 < date: Wed, 22 Mar 2023 18:57:00 GMT < content-length: 22 < content-type: text/plain; charset=utf-8 < set-cookie: c6e529a6ab19a530fd4f1cceb91c08a9=eb40064e54af58007f579a6c82f2bcd7; path=/; HttpOnly; Secure; SameSite=None < cache-control: private < { [22 bytes data] * Connection #0 to host canary-openshift-ingress-canary.apps.ci-op-21xplx9n-43abb.origin-ci-int-aws.dev.rhcloud.com left intact === CONT TestAll/parallel/TestClientTLS stdout: 000 stderr: * Added canary-openshift-ingress-canary.apps.ci-op-21xplx9n-43abb.origin-ci-int-aws.dev.rhcloud.com:443:172.30.53.236 to DNS cache * Rebuilt URL to: https://canary-openshift-ingress-canary.apps.ci-op-21xplx9n-43abb.origin-ci-int-aws.dev.rhcloud.com/ * Hostname canary-openshift-ingress-canary.apps.ci-op-21xplx9n-43abb.origin-ci-int-aws.dev.rhcloud.com was found in DNS cache * Trying 172.30.53.236... * TCP_NODELAY set % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed * ALPN, offering h2 * ALPN, offering http/1.1 * successfully set certificate verify locations: * CAfile: /etc/pki/tls/certs/ca-bundle.crt CApath: none } [5 bytes data] * TLSv1.3 (OUT), TLS handshake, Client hello (1): } [512 bytes data] * TLSv1.3 (IN), TLS handshake, Server hello (2): { [122 bytes data] * TLSv1.3 (IN), TLS handshake, [no content] (0): { [1 bytes data] * TLSv1.3 (IN), TLS handshake, Encrypted Extensions (8): { [10 bytes data] * TLSv1.3 (IN), TLS handshake, [no content] (0): { [1 bytes data] * TLSv1.3 (IN), TLS handshake, Request CERT (13): { [82 bytes data] * TLSv1.3 (IN), TLS handshake, [no content] (0): { [1 bytes data] * TLSv1.3 (IN), TLS handshake, Certificate (11): { [1763 bytes data] * TLSv1.3 (IN), TLS handshake, [no content] (0): { [1 bytes data] * TLSv1.3 (IN), TLS handshake, CERT verify (15): { [264 bytes data] * TLSv1.3 (IN), TLS handshake, [no content] (0): { [1 bytes data] * TLSv1.3 (IN), TLS handshake, Finished (20): { [36 bytes data] * TLSv1.3 (OUT), TLS change cipher, Change cipher spec (1): } [1 bytes data] * TLSv1.3 (OUT), TLS handshake, [no content] (0): } [1 bytes data] * TLSv1.3 (OUT), TLS handshake, Certificate (11): } [799 bytes data] * TLSv1.3 (OUT), TLS handshake, [no content] (0): } [1 bytes data] * TLSv1.3 (OUT), TLS handshake, CERT verify (15): } [264 bytes data] * TLSv1.3 (OUT), TLS handshake, [no content] (0): } [1 bytes data] * TLSv1.3 (OUT), TLS handshake, Finished (20): } [36 bytes data] * SSL connection using TLSv1.3 / TLS_AES_128_GCM_SHA256 * ALPN, server did not agree to a protocol * Server certificate: * subject: CN=*.client-tls.ci-op-21xplx9n-43abb.origin-ci-int-aws.dev.rhcloud.com * start date: Mar 22 18:55:46 2023 GMT * expire date: Mar 21 18:55:47 2025 GMT * issuer: CN=ingress-operator@1679509964 * SSL certificate verify result: self signed certificate in certificate chain (19), continuing anyway. 
} [5 bytes data] * TLSv1.3 (OUT), TLS app data, [no content] (0): } [1 bytes data] > GET / HTTP/1.1 > Host: canary-openshift-ingress-canary.apps.ci-op-21xplx9n-43abb.origin-ci-int-aws.dev.rhcloud.com > User-Agent: curl/7.61.1 > Accept: */* > { [5 bytes data] * TLSv1.3 (IN), TLS alert, [no content] (0): { [1 bytes data] * TLSv1.3 (IN), TLS alert, unknown CA (560): { [2 bytes data] * OpenSSL SSL_read: error:14094418:SSL routines:ssl3_read_bytes:tlsv1 alert unknown ca, errno 0 * Closing connection 0 curl: (56) OpenSSL SSL_read: error:14094418:SSL routines:ssl3_read_bytes:tlsv1 alert unknown ca, errno 0 === CONT TestAll/parallel/TestClientTLS --- FAIL: TestAll (1538.53s) --- FAIL: TestAll/parallel (0.00s) --- FAIL: TestAll/parallel/TestClientTLS (123.10s)
CI passes, or it fails on a different test.
I saw that TestClientTLS failed on the test case with no client certificate and ClientCertificatePolicy set to "Required". My best guess is that the test is racy and is hitting a terminating router pod. The test uses waitForDeploymentComplete to wait until all new pods are available, but perhaps waitForDeploymentComplete should also wait until all old pods are terminated.
Description of problem:
The name of the workload gets changed when the project and image stream are changed on reloading the form on the Edit Deployment page of the workload.
Version-Release number of selected component (if applicable):
4.9 and above
How reproducible:
Always
Steps to Reproduce:
1. Create a deployment workload
2. Select the Edit Deployment option on the workload
3. Verify that initially the name is the same as the workload name and the field is not changeable
4. Change the project to "openshift", the image stream to "golang" (or anything) and the tag to "latest"
5. Reload the form
6. Check that the name also got changed to golang
Actual results:
The name of the workload changes when the project and image stream name are changed on the Edit Deployment page.
Expected results:
The workload name should not change when the image stream name is changed on the Edit Deployment page, as the name field is not changeable.
Additional info:
While performing automation, I can see the error "the name of the object(imageStreamName) does not match the name on the URL(workloadName)", but when performing this in the UI, there are no errors.
Description of problem:
CI failure when installing a devfile, see also attached screenshot.
You can find the error by looking for Object.verifyTopologyPage
For example:
This is a clone of issue RHIBMCS-151. The following is the description of the original issue:
—
Error msg
type: 'Warning' reason: 'ResolutionFailed' constraints not satisfiable: @existing/ibm-common-services//ibm-namespace-scope-operator.v2.0.0 and @existing/ibm-common-services//ibm-namespace-scope-operator.v1.15.0 provide NamespaceScope (operator.ibm.com/v1), subscription ibm-namespace-scope-operator requires @existing/ibm-common-services//ibm-namespace-scope-operator.v2.0.0, subscription ibm-namespace-scope-operator exists, clusterserviceversion ibm-namespace-scope-operator.v1.15.0 exists and is not referenced by a subscription
The issue happens during the upgrade, with and without a channel switch. Several places report this issue:
https://ibm-cloudplatform.slack.com/archives/CM95C10RK/p1662557747140069
PrivateCloud-analytics/CPD-Quality#5548
Current status
Issue opened in OLM community https://github.com/operator-framework/operator-lifecycle-manager/issues/2201
bugzilla ticket https://bugzilla.redhat.com/show_bug.cgi?id=1980755
Knowledge Base from Red Hat https://access.redhat.com/solutions/6603001
It is a known OLM issue and Bedrock also provides workarounds in documents https://www.ibm.com/docs/en/cpfs?topic=ii-olm-known-issue-updates-subscription-status-creates-csv-asynchronous
Usually when the second error msg happened not referenced by a subscription , it requires us to re-install the operator.
Alternatively, the mis-synchronization can sometimes be rectified by restarting the catalog and OLM operators.
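For reference, one possible form of that restart workaround (a hedged sketch, assuming the standard openshift-operator-lifecycle-manager namespace and deployment names; verify them on your cluster first):
# restart the OLM and catalog operators, then watch the pods come back up
oc -n openshift-operator-lifecycle-manager rollout restart deployment/catalog-operator
oc -n openshift-operator-lifecycle-manager rollout restart deployment/olm-operator
oc -n openshift-operator-lifecycle-manager get pods -w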
This is a clone of issue OCPBUGS-15722. The following is the description of the original issue:
—
In Helm charts we define a values.schema.json file - a JSON schema for all the possible values the user can set in a chart. This schema needs to follow the JSON Schema standard. The standard includes something called $ref - a reference to either a local or a remote definition. If we use a schema with remote references in OCP, it causes various troubles. Different OCP versions give different results, and on the same OCP version you can get different results based on how tightly locked down the cluster networking is.
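To illustrate, a minimal hypothetical values.schema.json that uses a remote $ref (the URL and property name here are made up for illustration only, not taken from the chart used in the reproducer):
{
  "$schema": "http://json-schema.org/draft-07/schema#",
  "type": "object",
  "properties": {
    "image": {
      "$ref": "https://example.com/schemas/image.schema.json"
    }
  }
}
Resolving the form view requires the console to fetch that remote URL, which is exactly what can fail depending on cluster networking.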
Tried in Developer Sandbox, OpenShift Local, Baremetal Public Cluster in Operate First, OCP provisioned through clusterbot. It behaves differently in each instance. Individual cases are described below.
1. Go to the "Helm" tab in Developer Perspective
2. Click "Create" in top right and select "Repository"
3. Use the following ProjectHelmChartRepository resource and click "Create" (this repo contains a single chart, and that chart has a values.schema.json with the content linked below):
apiVersion: helm.openshift.io/v1beta1
kind: ProjectHelmChartRepository
metadata:
name: reproducer
spec:
connectionConfig:
url: https://raw.githubusercontent.com/tumido/helm-backstage/reproducer
4. Go back to the "Helm" tab in the Developer Perspective
5. Click "Create" in top right and select "Helm Release"
6. In filters section of the catalog in the "Chart repositories" select "Reproducer"
7. Click on the single tile available (Backstage)
8. Click "Install Helm Chart"
9. Either you will be greeted with one of various error screens, or you will see the "YAML view" tab (this tab selection is not the default and appears to be remembered only for the user session)
10. Select "Form view"
Various error screens, depending on OCP version and network restrictions. I've attached screen captures showing how it behaves in different settings.
Either render the form view (resolve remote references) or make it obvious that remote references are not supported. Optionally, fall back to the "YAML view", noting that the user doesn't have the full schema available but the chart is still deployable.
Depends on the environment
Always in OpenShift Local, Developer Sandbox, cluster bot clusters
1. Select any other chart to install, click "Install Helm Chart"
2. Change the view to "YAML view"
3. Go back to the Helm catalog without actually deploying anything
4. Select the faulty chart and click "Install Helm Chart"
5. Proceed with installation
This is a clone of issue OCPBUGS-15853. The following is the description of the original issue:
—
This is a clone of issue OCPBUGS-8404. The following is the description of the original issue:
—
Description of problem:
If a custom API server certificate is added as per the documentation[1], but the secret name is wrong and points to a non-existing secret, the following happens:
- The kube-apiserver config is rendered with some of the namedCertificates pointing to /etc/kubernetes/static-pod-certs/secrets/user-serving-cert-000/
- As the secret in the apiserver/cluster object is wrong, no user-serving-cert-000 secret is generated, so /etc/kubernetes/static-pod-certs/secrets/user-serving-cert-000/ does not exist (and may be automatically removed if created manually).
- The combination of the two points above causes kube-apiserver to start crash-looping because its config points to non-existent certificates.
This is a cluster-kube-apiserver-operator bug, because the operator should validate that the specified secret exists and degrade and do nothing if it doesn't, not render an inconsistent configuration.
Version-Release number of selected component (if applicable):
First found in 4.11.13, but also reproduced in the latest nightly build.
How reproducible:
Always
Steps to Reproduce:
1. Set up a named certificate pointing to a secret that doesn't exist. 2. 3.
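A minimal sketch of the misconfiguration from step 1 (the host name and secret name are placeholders; the secret named below is assumed not to exist in the openshift-config namespace):
apiVersion: config.openshift.io/v1
kind: APIServer
metadata:
  name: cluster
spec:
  servingCerts:
    namedCertificates:
    - names:
      - api.example.com
      servingCertificate:
        name: secret-that-does-not-exist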
Actual results:
Inconsistent configuration that points to non-existing secret. Kube API server pod crash-loop.
Expected results:
The Cluster Kube API Server Operator should detect that the secret is wrong, do nothing, and only report itself as degraded with a meaningful message so the user can fix it. No Kube API server pod crash-looping.
Additional info:
Once the kube-apiserver is broken, even if the apiserver/cluster object is fixed, it is usually necessary to apply a manual workaround on the crash-looping master. An example of a workaround that works is [2], even though that KB article was written for another bug with a different root cause.
References:
[1] - https://docs.openshift.com/container-platform/4.11/security/certificates/api-server.html#api-server-certificates
[2] - https://access.redhat.com/solutions/4893641
This is a clone of issue OCPBUGS-1604. The following is the description of the original issue:
—
Description of problem:
When viewing a resource that exists for multiple clusters, the data may be from the wrong cluster for a short time after switching clusters using the multicluster switcher.
Version-Release number of selected component (if applicable):
4.10.6
How reproducible:
Always
Steps to Reproduce:
1. Install RHACM 2.5 on OCP 4.10 and enable the FeatureGate to get multicluster switching
2. From the local-cluster perspective, view a resource that would exist on all clusters, like /k8s/cluster/config.openshift.io~v1~Infrastructure/cluster/yaml
3. Switch to a different cluster in the cluster switcher
Actual results:
Content for resource may start out correct, but then switch back to the local-cluster version before switching to the correct cluster several moments later.
Expected results:
Content should always be shown from the selected cluster.
Additional info:
Migrated from bugzilla: https://bugzilla.redhat.com/show_bug.cgi?id=2075657
This is a clone of issue OCPBUGS-3235. The following is the description of the original issue:
—
Frequently we see the loading state of the topology view, even when there aren't many resources in the project.
Including an example
topology will sometimes hang with the loading indicator showing indefinitely
topology should load consistently without fail
intermittent
4.9
Customers have deployed OpenShift using CloudFormation, following "Example 4.55. CloudFormation template for the VPC" in the document below.
https://access.redhat.com/documentation/en-us/openshift_container_platform/4.8/html-single/installing/index#installing-restricted-networks-aws
The CloudFormation template uses Python 3.7 with Lambda.
Since Python 3.7 is reaching end of life (EOL), what effect will it have if that runtime becomes unusable?
Is there any immediate effect? Will there be any impact when adding worker nodes?
OCP Version & Channel: 4.10
Cloud Platform: AWS
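For context, the concern is a Lambda-backed custom resource whose runtime is pinned to python3.7. A hypothetical CloudFormation fragment of that shape (resource name, role and handler are made up here, not taken from the referenced template):
Resources:
  ExampleFunction:
    Type: AWS::Lambda::Function
    Properties:
      Handler: index.handler
      Runtime: python3.7   # the runtime pin affected by the Python 3.7 EOL
      Role: !GetAtt ExampleRole.Arn
      Code:
        ZipFile: |
          def handler(event, context):
              return {}
If the python3.7 runtime is retired, stacks that create or update such a function could fail, which is why the impact on adding worker nodes is being asked about.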
Description of problem:
The samples operator needs to update its imagestreams to use the Jenkins 4.12 release.
Version-Release number of selected component (if applicable):
How reproducible:
Steps to Reproduce:
1. 2. 3.
Actual results:
Expected results:
Additional info:
Description of problem:
In the Konnectivity SOCKS proxy, the current default is to proxy cloud endpoint traffic: https://github.com/openshift/hypershift/blob/main/konnectivity-socks5-proxy/main.go#L61
Because of this, after the change https://github.com/openshift/hypershift/commit/0c52476957f5658cfd156656938ae1d08784b202 the oauth server had a behavior change where it began to proxy IAM traffic instead of not proxying it. This causes a regression in Satellite environments running with an HTTP_PROXY server. The original network traffic path needs to be restored.
Version-Release number of selected component (if applicable):
4.13 4.12
How reproducible:
100%
Steps to Reproduce:
1. Set up an HTTP_PROXY IBM Cloud Satellite environment
2. In the oauth-server pod, run a curl against IAM (curl -v https://iam.cloud.ibm.com)
3. It will log that it is using the proxy
Actual results:
It is using proxy
Expected results:
It should send traffic directly (as it does in 4.11 and 4.10)
Additional info:
This is a clone of issue OCPBUGS-19894. The following is the description of the original issue:
—
This is a clone of issue OCPBUGS-17391. The following is the description of the original issue:
—
The pull-ci-openshift-ovn-kubernetes-master-e2e-aws-ovn-local-to-shared-gateway-mode-migration job started failing recently because the ovnkube-master daemonset would not finish rolling out within 360s.
Looking at the must-gather used for debugging, which is collected a few minutes after the test failure, you can see that the daemonset is still not ready, so I believe that increasing the timeout is not the answer.
some debug info:
➜ static-kas git:(master) oc --kubeconfig=/tmp/kk get daemonsets -A NAMESPACE NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE openshift-cluster-csi-drivers aws-ebs-csi-driver-node 6 6 6 6 6 kubernetes.io/os=linux 8h openshift-cluster-node-tuning-operator tuned 6 6 6 6 6 kubernetes.io/os=linux 8h openshift-dns dns-default 6 6 6 6 6 kubernetes.io/os=linux 8h openshift-dns node-resolver 6 6 6 6 6 kubernetes.io/os=linux 8h openshift-image-registry node-ca 6 6 6 6 6 kubernetes.io/os=linux 8h openshift-ingress-canary ingress-canary 3 3 3 3 3 kubernetes.io/os=linux 8h openshift-machine-api machine-api-termination-handler 0 0 0 0 0 kubernetes.io/os=linux,machine.openshift.io/interruptible-instance= 8h openshift-machine-config-operator machine-config-daemon 6 6 6 6 6 kubernetes.io/os=linux 8h openshift-machine-config-operator machine-config-server 3 3 3 3 3 node-role.kubernetes.io/master= 8h openshift-monitoring node-exporter 6 6 6 6 6 kubernetes.io/os=linux 8h openshift-multus multus 6 6 6 6 6 kubernetes.io/os=linux 9h openshift-multus multus-additional-cni-plugins 6 6 6 6 6 kubernetes.io/os=linux 9h openshift-multus network-metrics-daemon 6 6 6 6 6 kubernetes.io/os=linux 9h openshift-network-diagnostics network-check-target 6 6 6 6 6 beta.kubernetes.io/os=linux 9h openshift-ovn-kubernetes ovnkube-master 3 3 2 2 2 beta.kubernetes.io/os=linux,node-role.kubernetes.io/master= 9h openshift-ovn-kubernetes ovnkube-node 6 6 6 6 6 beta.kubernetes.io/os=linux 9h Name: ovnkube-master Selector: app=ovnkube-master Node-Selector: beta.kubernetes.io/os=linux,node-role.kubernetes.io/master= Labels: networkoperator.openshift.io/generates-operator-status=stand-alone Annotations: deprecated.daemonset.template.generation: 3 kubernetes.io/description: This daemonset launches the ovn-kubernetes controller (master) networking components. networkoperator.openshift.io/cluster-network-cidr: 10.128.0.0/14 networkoperator.openshift.io/hybrid-overlay-status: disabled networkoperator.openshift.io/ip-family-mode: single-stack release.openshift.io/version: 4.14.0-0.ci.test-2023-08-04-123014-ci-op-c6fp05f4-latest Desired Number of Nodes Scheduled: 3 Current Number of Nodes Scheduled: 3 Number of Nodes Scheduled with Up-to-date Pods: 2 Number of Nodes Scheduled with Available Pods: 2 Number of Nodes Misscheduled: 0 Pods Status: 3 Running / 0 Waiting / 0 Succeeded / 0 Failed Pod Template: Labels: app=ovnkube-master component=network kubernetes.io/os=linux openshift.io/component=network ovn-db-pod=true type=infra Annotations: networkoperator.openshift.io/cluster-network-cidr: 10.128.0.0/14 networkoperator.openshift.io/hybrid-overlay-status: disabled networkoperator.openshift.io/ip-family-mode: single-stack target.workload.openshift.io/management: {"effect": "PreferredDuringScheduling"} Service Account: ovn-kubernetes-controller
it seems there is one pod that is not coming up all the way and that pod has
two containers not ready (sbdb and nbdb). logs from those containers below:
➜ static-kas git:(master) oc --kubeconfig=/tmp/kk describe pod ovnkube-master-7qlm5 -n openshift-ovn-kubernetes | rg '^ [a-z].*:|Ready' northd: Ready: True nbdb: Ready: False kube-rbac-proxy: Ready: True sbdb: Ready: False ovnkube-master: Ready: True ovn-dbchecker: Ready: True ➜ static-kas git:(master) oc --kubeconfig=/tmp/kk logs ovnkube-master-7qlm5 -n openshift-ovn-kubernetes -c sbdb 2023-08-04T13:08:49.127480354Z + [[ -f /env/_master ]] 2023-08-04T13:08:49.127562165Z + trap quit TERM INT 2023-08-04T13:08:49.127609496Z + ovn_kubernetes_namespace=openshift-ovn-kubernetes 2023-08-04T13:08:49.127637926Z + ovndb_ctl_ssl_opts='-p /ovn-cert/tls.key -c /ovn-cert/tls.crt -C /ovn-ca/ca-bundle.crt' 2023-08-04T13:08:49.127637926Z + transport=ssl 2023-08-04T13:08:49.127645167Z + ovn_raft_conn_ip_url_suffix= 2023-08-04T13:08:49.127682687Z + [[ 10.0.42.108 == \: ]] 2023-08-04T13:08:49.127690638Z + db=sb 2023-08-04T13:08:49.127690638Z + db_port=9642 2023-08-04T13:08:49.127712038Z + ovn_db_file=/etc/ovn/ovnsb_db.db 2023-08-04T13:08:49.127854181Z + [[ ! ssl:10.0.102.2:9642,ssl:10.0.42.108:9642,ssl:10.0.74.128:9642 =~ .:10\.0\.42\.108:. ]] 2023-08-04T13:08:49.128199437Z ++ bracketify 10.0.42.108 2023-08-04T13:08:49.128237768Z ++ case "$1" in 2023-08-04T13:08:49.128265838Z ++ echo 10.0.42.108 2023-08-04T13:08:49.128493242Z + OVN_ARGS='--db-sb-cluster-local-port=9644 --db-sb-cluster-local-addr=10.0.42.108 --no-monitor --db-sb-cluster-local-proto=ssl --ovn-sb-db-ssl-key=/ovn-cert/tls.key --ovn-sb-db-ssl-cert=/ovn-cert/tls.crt --ovn-sb-db-ssl-ca-cert=/ovn-ca/ca-bundle.crt' 2023-08-04T13:08:49.128535253Z + CLUSTER_INITIATOR_IP=10.0.102.2 2023-08-04T13:08:49.128819438Z ++ date -Iseconds 2023-08-04T13:08:49.130157063Z 2023-08-04T13:08:49+00:00 - starting sbdb CLUSTER_INITIATOR_IP=10.0.102.2 2023-08-04T13:08:49.130170893Z + echo '2023-08-04T13:08:49+00:00 - starting sbdb CLUSTER_INITIATOR_IP=10.0.102.2' 2023-08-04T13:08:49.130170893Z + initialize=false 2023-08-04T13:08:49.130179713Z + [[ ! -e /etc/ovn/ovnsb_db.db ]] 2023-08-04T13:08:49.130318475Z + [[ false == \t\r\u\e ]] 2023-08-04T13:08:49.130406657Z + wait 9 2023-08-04T13:08:49.130493659Z + exec /usr/share/ovn/scripts/ovn-ctl -db-sb-cluster-local-port=9644 --db-sb-cluster-local-addr=10.0.42.108 --no-monitor --db-sb-cluster-local-proto=ssl --ovn-sb-db-ssl-key=/ovn-cert/tls.key --ovn-sb-db-ssl-cert=/ovn-cert/tls.crt --ovn-sb-db-ssl-ca-cert=/ovn-ca/ca-bundle.crt '-ovn-sb-log=-vconsole:info -vfile:off -vPATTERN:console:%D {%Y-%m-%dT%H:%M:%S.###Z} |%05N|%c%T|%p|%m' run_sb_ovsdb 2023-08-04T13:08:49.208399304Z 2023-08-04T13:08:49.208Z|00001|vlog|INFO|opened log file /var/log/ovn/ovsdb-server-sb.log 2023-08-04T13:08:49.213507987Z ovn-sbctl: unix:/var/run/ovn/ovnsb_db.sock: database connection failed (No such file or directory) 2023-08-04T13:08:49.224890005Z 2023-08-04T13:08:49Z|00001|reconnect|INFO|unix:/var/run/ovn/ovnsb_db.sock: connecting... 2023-08-04T13:08:49.224912156Z 2023-08-04T13:08:49Z|00002|reconnect|INFO|unix:/var/run/ovn/ovnsb_db.sock: connection attempt failed (No such file or directory) 2023-08-04T13:08:49.255474964Z 2023-08-04T13:08:49.255Z|00002|raft|INFO|local server ID is 7f92 2023-08-04T13:08:49.333342909Z 2023-08-04T13:08:49.333Z|00003|ovsdb_server|INFO|ovsdb-server (Open vSwitch) 3.1.2 2023-08-04T13:08:49.348948944Z 2023-08-04T13:08:49.348Z|00004|reconnect|INFO|ssl:10.0.102.2:9644: connecting... 2023-08-04T13:08:49.349002565Z 2023-08-04T13:08:49.348Z|00005|reconnect|INFO|ssl:10.0.74.128:9644: connecting... 
2023-08-04T13:08:49.352510569Z 2023-08-04T13:08:49.352Z|00006|reconnect|INFO|ssl:10.0.102.2:9644: connected 2023-08-04T13:08:49.353870484Z 2023-08-04T13:08:49.353Z|00007|reconnect|INFO|ssl:10.0.74.128:9644: connected 2023-08-04T13:08:49.889326777Z 2023-08-04T13:08:49.889Z|00008|raft|INFO|server 2501 is leader for term 5 2023-08-04T13:08:49.890316765Z 2023-08-04T13:08:49.890Z|00009|raft|INFO|rejecting append_request because previous entry 5,1538 not in local log (mismatch past end of log) 2023-08-04T13:08:49.891199951Z 2023-08-04T13:08:49.891Z|00010|raft|INFO|rejecting append_request because previous entry 5,1539 not in local log (mismatch past end of log) 2023-08-04T13:08:50.225632838Z 2023-08-04T13:08:50Z|00003|reconnect|INFO|unix:/var/run/ovn/ovnsb_db.sock: connecting... 2023-08-04T13:08:50.225677739Z 2023-08-04T13:08:50Z|00004|reconnect|INFO|unix:/var/run/ovn/ovnsb_db.sock: connected 2023-08-04T13:08:50.227772827Z Waiting for OVN_Southbound to come up. 2023-08-04T13:08:55.716284614Z 2023-08-04T13:08:55.716Z|00011|raft|INFO|ssl:10.0.74.128:43498: learned server ID 3dff 2023-08-04T13:08:55.716323395Z 2023-08-04T13:08:55.716Z|00012|raft|INFO|ssl:10.0.74.128:43498: learned remote address ssl:10.0.74.128:9644 2023-08-04T13:08:55.724570375Z 2023-08-04T13:08:55.724Z|00013|raft|INFO|ssl:10.0.102.2:47804: learned server ID 2501 2023-08-04T13:08:55.724599466Z 2023-08-04T13:08:55.724Z|00014|raft|INFO|ssl:10.0.102.2:47804: learned remote address ssl:10.0.102.2:9644 2023-08-04T13:08:59.348572779Z 2023-08-04T13:08:59.348Z|00015|memory|INFO|32296 kB peak resident set size after 10.1 seconds 2023-08-04T13:08:59.348648190Z 2023-08-04T13:08:59.348Z|00016|memory|INFO|atoms:35959 cells:31476 monitors:0 n-weak-refs:749 raft-connections:4 raft-log:1543 txn-history:100 txn-history-atoms:7100 ➜ static-kas git:(master) oc --kubeconfig=/tmp/kk logs ovnkube-master-7qlm5 -n openshift-ovn-kubernetes -c nbdb 2023-08-04T13:08:48.779743434Z + [[ -f /env/_master ]] 2023-08-04T13:08:48.779743434Z + trap quit TERM INT 2023-08-04T13:08:48.779825516Z + ovn_kubernetes_namespace=openshift-ovn-kubernetes 2023-08-04T13:08:48.779825516Z + ovndb_ctl_ssl_opts='-p /ovn-cert/tls.key -c /ovn-cert/tls.crt -C /ovn-ca/ca-bundle.crt' 2023-08-04T13:08:48.779825516Z + transport=ssl 2023-08-04T13:08:48.779825516Z + ovn_raft_conn_ip_url_suffix= 2023-08-04T13:08:48.779825516Z + [[ 10.0.42.108 == \: ]] 2023-08-04T13:08:48.779825516Z + db=nb 2023-08-04T13:08:48.779825516Z + db_port=9641 2023-08-04T13:08:48.779825516Z + ovn_db_file=/etc/ovn/ovnnb_db.db 2023-08-04T13:08:48.779887606Z + [[ ! ssl:10.0.102.2:9641,ssl:10.0.42.108:9641,ssl:10.0.74.128:9641 =~ .:10\.0\.42\.108:. 
]] 2023-08-04T13:08:48.780159182Z ++ bracketify 10.0.42.108 2023-08-04T13:08:48.780167142Z ++ case "$1" in 2023-08-04T13:08:48.780172102Z ++ echo 10.0.42.108 2023-08-04T13:08:48.780314224Z + OVN_ARGS='--db-nb-cluster-local-port=9643 --db-nb-cluster-local-addr=10.0.42.108 --no-monitor --db-nb-cluster-local-proto=ssl --ovn-nb-db-ssl-key=/ovn-cert/tls.key --ovn-nb-db-ssl-cert=/ovn-cert/tls.crt --ovn-nb-db-ssl-ca-cert=/ovn-ca/ca-bundle.crt' 2023-08-04T13:08:48.780314224Z + CLUSTER_INITIATOR_IP=10.0.102.2 2023-08-04T13:08:48.780518588Z ++ date -Iseconds 2023-08-04T13:08:48.781738820Z 2023-08-04T13:08:48+00:00 - starting nbdb CLUSTER_INITIATOR_IP=10.0.102.2, K8S_NODE_IP=10.0.42.108 2023-08-04T13:08:48.781753021Z + echo '2023-08-04T13:08:48+00:00 - starting nbdb CLUSTER_INITIATOR_IP=10.0.102.2, K8S_NODE_IP=10.0.42.108' 2023-08-04T13:08:48.781753021Z + initialize=false 2023-08-04T13:08:48.781753021Z + [[ ! -e /etc/ovn/ovnnb_db.db ]] 2023-08-04T13:08:48.781816342Z + [[ false == \t\r\u\e ]] 2023-08-04T13:08:48.781936684Z + wait 9 2023-08-04T13:08:48.781974715Z + exec /usr/share/ovn/scripts/ovn-ctl -db-nb-cluster-local-port=9643 --db-nb-cluster-local-addr=10.0.42.108 --no-monitor --db-nb-cluster-local-proto=ssl --ovn-nb-db-ssl-key=/ovn-cert/tls.key --ovn-nb-db-ssl-cert=/ovn-cert/tls.crt --ovn-nb-db-ssl-ca-cert=/ovn-ca/ca-bundle.crt '-ovn-nb-log=-vconsole:info -vfile:off -vPATTERN:console:%D {%Y-%m-%dT%H:%M:%S.###Z} |%05N|%c%T|%p|%m' run_nb_ovsdb 2023-08-04T13:08:48.851644059Z 2023-08-04T13:08:48.851Z|00001|vlog|INFO|opened log file /var/log/ovn/ovsdb-server-nb.log 2023-08-04T13:08:48.852091247Z ovn-nbctl: unix:/var/run/ovn/ovnnb_db.sock: database connection failed (No such file or directory) 2023-08-04T13:08:48.861365357Z 2023-08-04T13:08:48Z|00001|reconnect|INFO|unix:/var/run/ovn/ovnnb_db.sock: connecting... 2023-08-04T13:08:48.861365357Z 2023-08-04T13:08:48Z|00002|reconnect|INFO|unix:/var/run/ovn/ovnnb_db.sock: connection attempt failed (No such file or directory) 2023-08-04T13:08:48.875126148Z 2023-08-04T13:08:48.875Z|00002|raft|INFO|local server ID is c503 2023-08-04T13:08:48.911846610Z 2023-08-04T13:08:48.911Z|00003|ovsdb_server|INFO|ovsdb-server (Open vSwitch) 3.1.2 2023-08-04T13:08:48.918864408Z 2023-08-04T13:08:48.918Z|00004|reconnect|INFO|ssl:10.0.102.2:9643: connecting... 2023-08-04T13:08:48.918934490Z 2023-08-04T13:08:48.918Z|00005|reconnect|INFO|ssl:10.0.74.128:9643: connecting... 2023-08-04T13:08:48.923439162Z 2023-08-04T13:08:48.923Z|00006|reconnect|INFO|ssl:10.0.102.2:9643: connected 2023-08-04T13:08:48.925166154Z 2023-08-04T13:08:48.925Z|00007|reconnect|INFO|ssl:10.0.74.128:9643: connected 2023-08-04T13:08:49.861650961Z 2023-08-04T13:08:49Z|00003|reconnect|INFO|unix:/var/run/ovn/ovnnb_db.sock: connecting... 2023-08-04T13:08:49.861747153Z 2023-08-04T13:08:49Z|00004|reconnect|INFO|unix:/var/run/ovn/ovnnb_db.sock: connected 2023-08-04T13:08:49.875272530Z 2023-08-04T13:08:49.875Z|00008|raft|INFO|server fccb is leader for term 6 2023-08-04T13:08:49.875302480Z 2023-08-04T13:08:49.875Z|00009|raft|INFO|rejecting append_request because previous entry 6,1732 not in local log (mismatch past end of log) 2023-08-04T13:08:49.876027164Z Waiting for OVN_Northbound to come up. 
2023-08-04T13:08:55.694760761Z 2023-08-04T13:08:55.694Z|00010|raft|INFO|ssl:10.0.74.128:57122: learned server ID d382 2023-08-04T13:08:55.694800872Z 2023-08-04T13:08:55.694Z|00011|raft|INFO|ssl:10.0.74.128:57122: learned remote address ssl:10.0.74.128:9643 2023-08-04T13:08:55.706904913Z 2023-08-04T13:08:55.706Z|00012|raft|INFO|ssl:10.0.102.2:43230: learned server ID fccb 2023-08-04T13:08:55.706931733Z 2023-08-04T13:08:55.706Z|00013|raft|INFO|ssl:10.0.102.2:43230: learned remote address ssl:10.0.102.2:9643 2023-08-04T13:08:58.919567770Z 2023-08-04T13:08:58.919Z|00014|memory|INFO|21944 kB peak resident set size after 10.1 seconds 2023-08-04T13:08:58.919643762Z 2023-08-04T13:08:58.919Z|00015|memory|INFO|atoms:8471 cells:7481 monitors:0 n-weak-refs:200 raft-connections:4 raft-log:1737 txn-history:72 txn-history-atoms:8165 ➜ static-kas git:(master)
This seems to happen very frequently now, but was not happening before around July 21st.
This is a clone of issue OCPBUGS-3767. The following is the description of the original issue:
—
Description of problem:
Start maintenance action moved from Nodes tab to Bare Metal Hosts tab
Version-Release number of selected component (if applicable):
Cluster version is 4.12.0-0.nightly-2022-11-15-024309
How reproducible:
100%
Steps to Reproduce:
1. Install the Node Maintenance operator
2. Go to Compute -> Nodes
3. Start maintenance from the kebab (3-dots) menu of worker-0-0; see https://docs.openshift.com/container-platform/4.11/nodes/nodes/eco-node-maintenance-operator.html#eco-setting-node-maintenance-actions-web-console_node-maintenance-operator
Actual results:
No 'Start maintenance' option
Expected results:
Maintenance started successfully
Additional info:
This worked in 4.11.
Currently, we have this validation https://github.com/openshift/installer/blob/master/pkg/asset/agent/installconfig_test.go#L103 which checks that, if the platform is none, the number of control plane replicas must be 1 and the number of workers must be zero.
We need another validation for the reverse direction: if the number of control plane replicas is 1 and the number of workers is zero, then in install-config.yaml the platform can only be set to none, and in agent-cluster-install.yaml the platformType should only be set to none. If we try to do SNO (i.e. one control plane replica and zero workers) with e.g. platform: baremetal, then assisted-service will reject it, so we should catch it as early as possible.
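For illustration, a hedged sketch of the combination the new validation should require (excerpts only; field values are examples, not taken from a specific test case):
install-config.yaml (excerpt):
controlPlane:
  name: master
  replicas: 1
compute:
- name: worker
  replicas: 0
platform:
  none: {}
agent-cluster-install.yaml (excerpt):
spec:
  platformType: None
  provisionRequirements:
    controlPlaneAgents: 1
    workerAgents: 0
Any other platform value combined with 1 control plane replica and 0 workers should be rejected as early as possible.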
Description of problem:
When installing a private cluster, the installation fails the first time; then you need to run:
ibmcloud is security-group-rule-add "${infra}-sg-kube-api-lb" inbound tcp --port-min 6443 --port-max 6443 --remote $sg
and then run openshift-install wait-for again.
Version-Release number of selected component (if applicable):
How reproducible:
always
Steps to Reproduce:
1. Try to create a cluster with BYON; in install-config.yaml set publish: Internal. The installation fails.
Actual results:
The first time, the installation fails.
Expected results:
The installation should only need to be run once; no manual security-group-rule-add should be required.
Additional info:
https://coreos.slack.com/archives/C01U40AM37F/p1664439142279079?thread_ts=1663769891.358229&cid=C01U40AM37F
This issue blocks setting up a private cluster automatically.
Description of problem:
NPE in topology if the user creates a k8s Service and a Knative Service (KSVC) that has no metadata in its template.
Version-Release number of selected component (if applicable):
How reproducible:
Always
Steps to Reproduce:
1. Create a KSVC from Admin -> Serving -> Create Service
2. Create a k8s Service from the Search page (select Service and create one)
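For reference, a minimal hedged example of a Knative Service whose template carries no metadata (similar to what step 1 produces; the name and image are placeholders, not the ones used in testing):
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: example-ksvc
spec:
  template:
    spec:
      containers:
      - image: quay.io/example/app:latest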
Actual results:
topology breaks (see attached screenshot)
Expected results:
topology shouldn't break
Additional info:
This is a clone of issue OCPBUGS-10220. The following is the description of the original issue:
—
This is a clone of issue OCPBUGS-7559. The following is the description of the original issue:
—
Description of problem:
When attempting to add nodes to a long-lived 4.12.3 cluster, net new nodes are not able to join the cluster. They are provisioned in the cloud provider (AWS), but never actually join as a node.
Version-Release number of selected component (if applicable):
4.12.3
How reproducible:
Consistent
Steps to Reproduce:
1. On a long lived cluster, add a new machineset
Actual results:
Machines reach "Provisioned" but don't join the cluster
Expected results:
Machines join cluster as nodes
Additional info:
Description of problem:
When the user selects an installed operator (for example, OpenShift Elasticsearch) in OperatorHub and navigates to the installed operator page from the operator information page
with the help of the "view it here" option, a "404: Not found" message is wrongly shown, although it navigates to the installed operator in the end.
Version-Release number of selected components (if applicable):
4.12.0-0.nightly-2022-08-15-150248
How reproducible:
Always
Steps to Reproduce:
Actual results:
A wrong "404: Not found" message is shown while the user selects an installed operator and navigates from OperatorHub to the installed operator page.
The browser console log indicates the following:
main-chunk-525818b154a57a9b220a.min.js:1 unhandled error: Uncaught TypeError: Cannot read properties of undefined (reading 'firstElementChild') TypeError: Cannot read properties of undefined (reading 'firstElementChild') at c (https://console-openshift-console.apps.jmekkatt-dob.ibmcloud.qe.devcluster.openshift.com/static/vendors~main-chunk-40fab65853dff2fbc413.min.js:118:125992) at HTMLDivElement.l (https://console-openshift-console.apps.jmekkatt-dob.ibmcloud.qe.devcluster.openshift.com/static/vendors~main-chunk-40fab65853dff2fbc413.min.js:118:126387) TypeError: Cannot read properties of undefined (reading 'firstElementChild') at c (vendors~main-chunk-40fab65853dff2fbc413.min.js:72303:1) at HTMLDivElement.l (vendors~main-chunk-40fab65853dff2fbc413.min.js:72303:1) window.onerror @ main-chunk-525818b154a57a9b220a.min.js:1 vendors~main-chunk-40fab65853dff2fbc413.min.js:72303 Uncaught TypeError: Cannot read properties of undefined (reading 'firstElementChild') at c (vendors~main-chunk-40fab65853dff2fbc413.min.js:72303:1) at HTMLDivElement.l (vendors~main-chunk-40fab65853dff2fbc413.min.js:72303:1) c @ vendors~main-chunk-40fab65853dff2fbc413.min.js:72303 l @ vendors~main-chunk-40fab65853dff2fbc413.min.js:72303 scroll (async) componentWillUnmount @ vendor-patternfly-core-chunk-006bb1499791fa7cfea7.min.js:38397 hs @ vendors~main-chunk-40fab65853dff2fbc413.min.js:171377 bs @ vendors~main-chunk-40fab65853dff2fbc413.min.js:171377 hs @ vendors~main-chunk-40fab65853dff2fbc413.min.js:171377 bs @ vendors~main-chunk-40fab65853dff2fbc413.min.js:171377 Oc @ vendors~main-chunk-40fab65853dff2fbc413.min.js:171377 t.unstable_runWithPriority @ vendors~main-chunk-40fab65853dff2fbc413.min.js:171690 Hi @ vendors~main-chunk-40fab65853dff2fbc413.min.js:171377 Ac @ vendors~main-chunk-40fab65853dff2fbc413.min.js:171377 pc @ vendors~main-chunk-40fab65853dff2fbc413.min.js:171377 (anonymous) @ vendors~main-chunk-40fab65853dff2fbc413.min.js:171377 t.unstable_runWithPriority @ vendors~main-chunk-40fab65853dff2fbc413.min.js:171690 Hi @ vendors~main-chunk-40fab65853dff2fbc413.min.js:171377 Vi @ vendors~main-chunk-40fab65853dff2fbc413.min.js:171377 qi @ vendors~main-chunk-40fab65853dff2fbc413.min.js:171377 De @ vendors~main-chunk-40fab65853dff2fbc413.min.js:171377 Yt @ vendors~main-chunk-40fab65853dff2fbc413.min.js:171377 main-chunk-525818b154a57a9b220a.min.js:1 GET https://console-openshift-console.apps.jmekkatt-dob.ibmcloud.qe.devcluster.openshift.com/api/kubernetes/apis/operators.coreos.com/v1alpha1/clusterserviceversions/elasticsearch-operator.5.5.0 404 (Not Found)
Expected results:
Installed operator details should be shown without any error when the user selects an installed operator and navigates from OperatorHub to the installed operator page.
Additional info:
Reproduced in both Chrome [103.0.5060.114 (Official Build) (64-bit)] and Firefox [91.11.0esr (64-bit)] browsers.
A screen recording of the issue is attached: InstalledOperatorNavigation404.mp4
Description of problem:
If the cluster install failed and no tag is attached to the VMs, running ./openshift-install destroy cluster gets stuck; for details please see openshift-install.log
...
time="2022-09-28T08:19:14-04:00" level=debug msg="Delete Folder"
time="2022-09-28T08:19:14-04:00" level=debug msg="Find attached Folder on tag"
time="2022-09-28T08:19:15-04:00" level=debug msg="Folder: Expected Folder sgao-rtf6v to be empty"
time="2022-09-28T08:19:25-04:00" level=debug msg="Power Off Virtual Machines"
time="2022-09-28T08:19:25-04:00" level=debug msg="Find attached VirtualMachine on tag"
time="2022-09-28T08:19:25-04:00" level=debug msg="Delete Virtual Machines"
time="2022-09-28T08:19:25-04:00" level=debug msg="Find attached VirtualMachine on tag"
time="2022-09-28T08:19:25-04:00" level=debug msg="Delete Folder"
time="2022-09-28T08:19:25-04:00" level=debug msg="Find attached Folder on tag"
time="2022-09-28T08:19:25-04:00" level=debug msg="Folder: Expected Folder sgao-rtf6v to be empty"
time="2022-09-28T08:19:35-04:00" level=debug msg="Power Off Virtual Machines"
time="2022-09-28T08:19:35-04:00" level=debug msg="Find attached VirtualMachine on tag"
time="2022-09-28T08:19:35-04:00" level=debug msg="Delete Virtual Machines"
time="2022-09-28T08:19:35-04:00" level=debug msg="Find attached VirtualMachine on tag"
time="2022-09-28T08:19:35-04:00" level=debug msg="Delete Folder"
Version-Release number of selected component (if applicable):
4.12.0-0.nightly-2022-09-25-071630
How reproducible:
Always, when the cluster install failed and no tag is attached to the VMs
Steps to Reproduce:
1. Cluster install fails and no tag is attached to the VMs
2. run ./openshift-install destroy cluster
3.
Actual results:
The installer destroy gets stuck.
Expected results:
The installer destroy should set a timeout and be able to quit in such a situation.
Additional info:
Description of problem:
The kebab menu for Helm repositories shows inconsistent behavior
Version-Release number of selected component (if applicable): 4.12
How reproducible: Always
Steps to Reproduce:
1. Create some helm chart repository
2. Go to the Helm page and switch to the repositories tab
3. Open kebab menu for different repos
Actual results:
Menus are overlapping
Expected results:
The menu should work properly; one menu should close before opening a new one
Additional info:
A video has been added for reference.
This is a clone of issue OCPBUGS-3987. The following is the description of the original issue:
—
Description of problem:
When the user supplies nmstateConfig in agent-config.yaml, invalid configurations may not be detected.
Version-Release number of selected component (if applicable):
4.12
How reproducible:
every time
Steps to Reproduce:
1. Create an invalid NM config. In this case an interface was defined with a route but no IP address
2. The ISO can be generated with no errors
3. At run time the invalid config was detected by assisted-service; create-cluster-and-infraenv.service logged the error "failed to validate network yaml for host 0, invalid yaml, error:"
Actual results:
Installation failed
Expected results:
The invalid configuration should be detected when the ISO is created.
Additional info:
It looks like the ValidateStaticConfigParams check is ONLY done when the nmstate config is provided in nmstateconfig.yaml, not when the file is generated (i.e. supplied in agent-config.yaml). https://github.com/openshift/installer/blob/master/pkg/asset/agent/manifests/nmstateconfig.go#L188
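A hedged sketch of the kind of invalid configuration described in the steps, using the agent-config.yaml host networkConfig field (hostname, interface name, MAC and addresses are placeholders): a default route is defined, but the interface carries no IP address, so validation should reject it at ISO-generation time.
apiVersion: v1alpha1
kind: AgentConfig
metadata:
  name: example
hosts:
- hostname: master-0
  interfaces:
  - name: eno1
    macAddress: 00:11:22:33:44:55
  networkConfig:
    interfaces:
    - name: eno1
      type: ethernet
      state: up
      # no ipv4/ipv6 address configured on this interface
    routes:
      config:
      - destination: 0.0.0.0/0
        next-hop-address: 192.168.111.1
        next-hop-interface: eno1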
This is a clone of issue OCPBUGS-6053. The following is the description of the original issue:
—
Description of problem:
When a ClusterVersion's `status.availableUpdates` has a value of `null` and `Upgradeable=False`, a run time error occurs on the Cluster Settings page as the UpdatesGraph component expects `status.availableUpdates` to have a non-empty value.
Steps to Reproduce:
1. Add the following overrides to ClusterVersion config (/k8s/cluster/config.openshift.io~v1~ClusterVersion/version)
spec:
  overrides:
  - group: apps
    kind: Deployment
    name: console-operator
    namespace: openshift-console-operator
    unmanaged: true
  - group: rbac.authorization.k8s.io
    kind: ClusterRole
    name: console-operator
    namespace: ''
    unmanaged: true
2. Visit /settings/cluster and note the run-time error (see attached screenshot)
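To confirm the precondition, one way (a hedged suggestion, not part of the original report) is to inspect the ClusterVersion directly and verify that availableUpdates is empty/null while Upgradeable is False:
oc get clusterversion version -o jsonpath='{.status.availableUpdates}'
oc get clusterversion version -o jsonpath='{.status.conditions[?(@.type=="Upgradeable")].status}'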
Actual results:
An error occurs.
Expected results:
The contents of the Cluster Settings page render.
Copied from an upstream issue: https://github.com/operator-framework/operator-lifecycle-manager/issues/2830
What did you do?
When attempting to reinstall an operator that uses conversion webhooks by
The resulting InstallPlan enters a failed state with message similar to
error validating existing CRs against new CRD's schema for "devworkspaces.workspace.devfile.io": error listing resources in GroupVersionResource schema.GroupVersionResource{Group:"workspace.devfile.io", Version:"v1alpha1", Resource:"devworkspaces"}: conversion webhook for workspace.devfile.io/v1alpha2, Kind=DevWorkspace failed: Post "https://devworkspace-controller-manager-service.test-namespace.svc:443/convert?timeout=30s": service "devworkspace-controller-manager-service" not found
When the original CSVs are deleted, the operator's main deployment and service are removed, but CRDs are left in-cluster. However, since the service/CA bundle/deployment that serve the conversion webhook are removed, conversion webhooks are broken at that point. Eventually this impacts garbage collection on the cluster as well.
This can be reproduced by installing the DevWorkspace Operator from the Red Hat catalog. (I can provide yamls/upstream images that reproduce as well, if that's helpful). It may be necessary to create a DevWorkspace in the cluster before deletion, e.g. by oc apply -f https://raw.githubusercontent.com/devfile/devworkspace-operator/main/samples/plain.yaml
What did you expect to see?
Operator is able to be reinstalled without removing CRDs and all instances.
What did you see instead? Under which circumstances?
It's necessary to completely remove the operator including CRDs. For our operator (DevWorkspace), this also makes uninstall especially complicated as finalizers are used (so CRDs cannot be deleted if the controller is removed, and the controller cannot be restored by reinstalling)
Environment
operator-lifecycle-manager version: 4.10.24
Kubernetes version information: Kubernetes Version: v1.23.5+012e945 (OpenShift 4.10.24)
Kubernetes cluster kind: OpenShift
Backport DualStack and the new reconciler to whereabouts plugin 4.12
GitHub rate limit failures when the UPI image downloads govc.
Description of problem:
When deleting a BYOH node in Platform:none, as well as in an Azure IPI cluster, the node gets reconciled correctly; however, when added back to the cluster it stays in Ready,SchedulingDisabled.
When checking the WMCO logs, we can observe the following log:
{"level":"error","ts":"2022-12-14T16:14:31Z","msg":"Reconciler error","controller":"configmap","controllerGroup":"","controllerKind":"ConfigMap","configMap":{"name":"windows-instances","namespace":"openshift-windows-machine-config-operator"},"namespace":"openshift-windows-machine-config-operator","name":"windows-instances","reconcileID":"d66a3142-d52c-43f5-8a42-214ce9c88417","error":"error configuring host with address 10.0.55.21: configuring node network failed: error waiting for k8s.ovn.org/hybrid-overlay-node-subnet node annotation for byoh-2019: timeout waiting for k8s.ovn.org/hybrid-overlay-node-subnet node annotation: timed out waiting for the condition"
And when checking the node's annotations, it is indeed missing:
$ oc get nodes byoh-2019 -o=jsonpath="{.metadata.annotations}"
{"volumes.kubernetes.io/controller-managed-attach-detach":"true","windowsmachineconfig.openshift.io/desired-version":"7.0.0-16f486a","windowsmachineconfig.openshift.io/pub-key-hash":"1df2c166b1c401180523270e9cf6bc2cd2724b9279ea65668a3b95298525a0f5","windowsmachineconfig.openshift.io/username":"wx4EBwMICL6qT+4RY8tgbx4hiRmQdHlwUsHgVGCTVY7S5gG/G5gb/Wzv0JBLhNP9\u003cwmcoMarker\u003ejlmI5ExHPYFrd2Fw6Lxe/6PKEE5/vYAhZ2n1Z2nBIoa1xN1/HEaXhqR2CuXNe7Ez\u003cwmcoMarker\u003eg2Hg+gA=\u003cwmcoMarker\u003e=ubWA"}
Tested in Azure IPI and Platform:None; in both cases the issue was reproduced.
Version-Release number of selected component (if applicable):
$ oc get cm -n openshift-windows-machine-config-operator
NAME                                    DATA   AGE
kube-root-ca.crt                        1      10h
openshift-service-ca.crt                1      10h
windows-instances                       2      9h
windows-machine-config-operator-lock    0      6h24m
windows-services-7.0.0-16f486a          2      6h23m
$ oc get clusterversion
NAME      VERSION       AVAILABLE   PROGRESSING   SINCE   STATUS
version   4.12.0-rc.4   True        False         6h48m   Cluster version is 4.12.0-rc.4
How reproducible:
Steps to Reproduce:
1. Deploy an OCP 4.11 cluster with WMCO 6.0.0
2. Add one or two BYOH nodes to the cluster
3. Upgrade the cluster to OCP 4.12, and later WMCO to 7.0.0
4. Remove one of the BYOH nodes using: oc delete node <byoh-node-id>
5. Wait for reconciliation to bring the node back
Actual results:
The deleted node gets re-added but stays in Ready,SchedulingDisabled, and the workloads are left in Pending state.
Expected results:
The node gets properly added to the cluster and stays in Ready.
Additional info:
This is a clone of issue OCPBUGS-4411. The following is the description of the original issue:
—
Description of problem:
After manually configuring IPv6 addresses and a route on an IPv4 OCP cluster to create a dual-stack cluster, newly created pods stay in 'ContainerCreating' status.
Version-Release number of selected component (if applicable):
4.12
How reproducible:
Steps to Reproduce:
1. Enable IPv6 in the network config.
# more patch_dual.yaml
- op: add
  path: /spec/clusterNetwork/-
  value:
    cidr: fd01::/48
    hostPrefix: 64
- op: add
  path: /spec/serviceNetwork/-
  value: fd02::/112
# oc patch network.config.openshift.io cluster --type='json' --patch-file patch_dual.yaml
2. Configure IPv6 addresses and routes:
PODS=$(oc get pods -n openshift-cluster-node-tuning-operator -l openshift-app=tuned --field-selector=status.phase=Running --no-headers -o name)
i=10
for pod in $PODS; do
  oc exec -n openshift-cluster-node-tuning-operator $pod -- ip -6 addr add fd00:172:22::${i}/64 dev br-ex
  oc exec -n openshift-cluster-node-tuning-operator $pod -- ip -6 route add default via fd00:172:22::1 dev br-ex
  ((i=i+1))
done
3. Create pods; they will stay in ContainerCreating status.
4. If the IPv6 configuration is removed from the network config, newly created pods can become Ready.
Actual results:
Pods cannot reach Running status.
Expected results:
Pods should become Ready with both IPv4 and IPv6 addresses.
Additional info:
version: # oc version Client Version: 4.12.0-0.nightly-2022-11-30-182550 Kustomize Version: v4.5.7 Server Version: 4.12.0-0.nightly-2022-11-30-182550 Kubernetes Version: v1.25.2+5533733 Describe pods: # oc describe pod iperf-rc-normal-qg6zd Name: iperf-rc-normal-qg6zd Namespace: offload-testing Priority: 0 Service Account: default Node: openshift-qe-025.lab.eng.rdu2.redhat.com/192.168.111.54 Start Time: Thu, 01 Dec 2022 21:35:28 -0500 Labels: name=iperf-pods-normal Annotations: k8s.ovn.org/pod-networks: {"default":{"ip_addresses":["10.129.2.7/23","fd01:0:0:6::3/64"],"mac_address":"0a:58:0a:81:02:07","gateway_ips":["10.129.2.1","fd01:0:0:6:... openshift.io/scc: restricted-v2 seccomp.security.alpha.kubernetes.io/pod: runtime/default Status: Pending IP: IPs: <none> Controlled By: ReplicationController/iperf-rc-normal Containers: iperf: Container ID: Image: quay.io/openshifttest/iperf3@sha256:440c59251338e9fcf0a00d822878862038d3b2e2403c67c940c7781297953614 Image ID: Port: <none> Host Port: <none> State: Waiting Reason: ContainerCreating Ready: False Restart Count: 0 Limits: memory: 340Mi Requests: memory: 340Mi Environment: <none> Mounts: /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-4266b (ro) Conditions: Type Status Initialized True Ready False ContainersReady False PodScheduled True Volumes: kube-api-access-4266b: Type: Projected (a volume that contains injected data from multiple sources) TokenExpirationSeconds: 3607 ConfigMapName: kube-root-ca.crt ConfigMapOptional: <nil> DownwardAPI: true ConfigMapName: openshift-service-ca.crt ConfigMapOptional: <nil> QoS Class: Burstable Node-Selectors: <none> Tolerations: node.kubernetes.io/memory-pressure:NoSchedule op=Exists node.kubernetes.io/not-ready:NoExecute op=Exists for 300s node.kubernetes.io/unreachable:NoExecute op=Exists for 300s Events: Type Reason Age From Message ---- ------ ---- ---- ------- Warning FailedCreatePodSandBox 3m4s (x173 over 5h50m) kubelet (combined from similar events): Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_iperf-rc-normal-qg6zd_offload-testing_18673f13-37b4-40ea-aa5d-85654dfa5c85_0(4899f7150492fa4cd895c62d0ec25ac5c1507016037c31b6019849083b42cdb5): error adding pod offload-testing_iperf-rc-normal-qg6zd to CNI network "multus-cni-network": plugin type="multus" name="multus-cni-network" failed (add): [offload-testing/iperf-rc-normal-qg6zd/18673f13-37b4-40ea-aa5d-85654dfa5c85:ovn-kubernetes]: error adding container to network "ovn-kubernetes": CNI request failed with status 400: '[offload-testing/iperf-rc-normal-qg6zd 4899f7150492fa4cd895c62d0ec25ac5c1507016037c31b6019849083b42cdb5] [offload-testing/iperf-rc-normal-qg6zd 4899f7150492fa4cd895c62d0ec25ac5c1507016037c31b6019849083b42cdb5] failed to configure pod interface: timed out waiting for OVS port binding (ovn-installed) for 0a:58:0a:81:02:07 [10.129.2.7/23 fd01:0:0:6::3/64] '
Description of problem:
When all projects are selected, the workloads list page and details page show inconsistent HorizontalPodAutoscaler actions.
Version-Release number of selected component (if applicable):
4.12.0-0.nightly-2022-07-25-010250
How reproducible:
Always
Steps to Reproduce:
Actual results:
Expected results:
Additional info:
Description of problem:
This is an OCP clone of https://bugzilla.redhat.com/show_bug.cgi?id=2099794 In summary, NetworkManager reports the network as being up before the ipv6 address of the primary interface is ready and crio fails to bind to it.
Version-Release number of selected component (if applicable):
How reproducible:
Steps to Reproduce:
1. 2. 3.
Actual results:
Expected results:
Additional info:
This is a clone of issue OCPBUGS-4954. The following is the description of the original issue:
—
Description of problem:
During the cluster destroy process for IBM Cloud IPI, failures can occur when COS Instances are deleted: Reclamations are created for the COS deletions and prevent cleanup of the ResourceGroup.
Version-Release number of selected component (if applicable):
4.13.0 (and 4.12.0)
How reproducible:
Sporadic, it depends on IBM Cloud COS
Steps to Reproduce:
1. Create an IPI cluster on IBM Cloud
2. Delete the IPI cluster on IBM Cloud
3. COS Reclamation may be created, and can cause the destroy cluster to fail
Actual results:
time="2022-12-12T16:50:06Z" level=debug msg="Listing resource groups" time="2022-12-12T16:50:06Z" level=debug msg="Deleting resource group \"eu-gb-reclaim-1-zc6xg\"" time="2022-12-12T16:50:07Z" level=debug msg="Failed to delete resource group eu-gb-reclaim-1-zc6xg: Resource groups with active or pending reclamation instances can't be deleted. Use the CLI commands \"ibmcloud resource service-instances --type all\" and \"ibmcloud resource reclamations\" to check for remaining instances, then delete the instances and try again."
Expected results:
Successful destroy cluster (including deletion of ResourceGroup)
Additional info:
IBM Cloud is testing a potential fix currently.
It was also identified that the destroy stages are not in the proper order.
https://github.com/openshift/installer/blob/9377cb3974986a08b531a5e807fd90a3a4e85ebf/pkg/destroy/ibmcloud/ibmcloud.go#L128-L155
Changes are being made in an attempt to resolve this, along with a fix for this bug.
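As a hedged interim workaround until the fix lands (the first two commands come from the error message above; the reclamation-delete subcommand should be verified against current IBM Cloud CLI documentation):
# list remaining instances and pending reclamations
ibmcloud resource service-instances --type all
ibmcloud resource reclamations
# delete the blocking reclamation(s), then re-run the destroy
ibmcloud resource reclamation-delete <reclamation-id>
./openshift-install destroy cluster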
Description of problem:
The restore size in the volumesnapshot output is not the same as the PVC request size.
Version-Release number of selected component (if applicable):
How reproducible:
Always
Steps to Reproduce:
1. Create an IBM cluster. Flexy template: aos-4_12/ipi-on-ibmcloud/versioned-installer-private_cluster-ovn-fips-ci, Payload: 4.12.0-0.nightly-2022-11-29-131548
2. Create sc, pvc, dep
3. Create a volumesnapshot from the default volumesnapshotclass
4. Check the volumesnapshot output restore size
sc_pvc_dep.yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
name: mysc
parameters:
profile: 10iops-tier
provisioner: vpc.block.csi.ibm.io
reclaimPolicy: Delete
volumeBindingMode: WaitForFirstConsumer
—
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: mypvc-csi
namespace: testropatil
spec:
accessModes:
rohitpatil@ropatil-mac Downloads % oc get sc
NAME   PROVISIONER            RECLAIMPOLICY   VOLUMEBINDINGMODE      ALLOWVOLUMEEXPANSION   AGE
mysc   vpc.block.csi.ibm.io   Delete          WaitForFirstConsumer   true                   2m37s
rohitpatil@ropatil-mac Downloads % oc get pvc,pod -n testropatil
NAME                              STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
persistentvolumeclaim/mypvc-csi   Bound    pvc-1a014601-8176-4c55-93cf-d408460b9359   26Gi       RWO            mysc           27s
NAME                         READY   STATUS    RESTARTS   AGE
pod/mydep-5477fd946b-w77sw   1/1     Running   0          27s
rohitpatil@ropatil-mac Downloads % oc get volumesnapshot -n testropatil
NAME              READYTOUSE   SOURCEPVC   SOURCESNAPSHOTCONTENT   RESTORESIZE   SNAPSHOTCLASS        SNAPSHOTCONTENT                                     CREATIONTIME   AGE
my-snapshot-new   true         mypvc-csi                           1Gi           vpc-block-snapshot   snapcontent-a40f3a17-8697-4215-8a2f-77d3d5592c60    29s            32s
Actual results:
The volumesnapshot RESTORESIZE is 1Gi, which is not the same as the PVC request size (26Gi).
Expected results:
The volumesnapshot restore size should be the same as the PVC request size.
Additional info:
Description of problem:
The OVNKubernetesControllerDisconnectedSouthboundDatabase alert seems to fire in the e2e-aws-ovn-serial CI job. Note that something odd happens in the job itself: a set of ovnkube-node pods gets created, then deleted, then recreated, and the test runs. But the alert gets fired for the first set of pods that were deleted. From the initial screening of artifacts alone it's not clear what happened to the old pods. This needs investigation.
Version-Release number of selected component (if applicable):
4.12 OCP
How reproducible:
Seems like always
Steps to Reproduce:
1. https://prow.ci.openshift.org/view/gs/origin-ci-test/pr-logs/pull/27043/pull-ci-openshift-origin-master-e2e-aws-ovn-serial/1568166237639282688
2. https://prow.ci.openshift.org/view/gs/origin-ci-test/pr-logs/pull/27043/pull-ci-openshift-origin-master-e2e-aws-ovn-serial/1567913444936519680
Actual results:
Alert is fired
Expected results:
The alert shouldn't be fired. If this is expected in the serial job, then we need to silence that alert for that job, or make it flaky and not fail hard if that alert fires.
Additional info:
This is a clone of issue OCPBUGS-15512. The following is the description of the original issue:
—
This is a clone of issue OCPBUGS-14969. The following is the description of the original issue:
—
Description of problem:
When an HCP Service LB is created, for example for an IngressController, the CAPA controller calls ModifyNetworkInterfaceAttribute. It references the default security group for the VPC in addition to the security group created for the cluster (with the right tags). Ideally, the LBs (and any other HCP components) should not be using the default VPC SecurityGroup.
Version-Release number of selected component (if applicable):
All 4.12 and 4.13
How reproducible:
100%
Steps to Reproduce:
1. Create an HCP
2. Wait for Ingress to come up
3. Look in CloudTrail for ModifyNetworkInterfaceAttribute, and see the default security group referenced
Actual results:
Default security group is used
Expected results:
Default security group should not be used
Additional info:
This is problematic as we are attempting to scope our AWS permissions as small as possible. The goal is to only use resources that are tagged with `red-hat-managed: true` so that our IAM Policies can be conditioned to only access these resources. Using the Security Group created for the cluster should be sufficient, and the default Security Group does not need to be used, so if the usage can be removed here, we can secure our AWS policies that much better. Similar to OCPBUGS-11894.
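For illustration, the kind of tag condition being referred to (a hedged sketch of an IAM policy statement fragment, not the actual policy used; support for resource-level tag conditions varies by API action):
{
  "Effect": "Allow",
  "Action": ["ec2:ModifyNetworkInterfaceAttribute"],
  "Resource": "*",
  "Condition": {
    "StringEquals": {
      "aws:ResourceTag/red-hat-managed": "true"
    }
  }
}
Such a condition only works if every security group the controller touches carries the red-hat-managed: true tag, which is why referencing the untagged default VPC security group breaks the model.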
Description of problem:
Jenkins and Jenkins Agent Base image versions need to be updated to use the latest images to mitigate known CVEs in plugins and Jenkins versions.
Version-Release number of selected component (if applicable):
How reproducible:
Steps to Reproduce:
1. 2. 3.
Actual results:
Expected results:
Additional info:
This is a clone of issue OCPBUGS-4906. The following is the description of the original issue:
—
These commented out tests https://github.com/openshift/origin/blob/master/test/extended/testdata/cmd/test/cmd/templates.sh#L130-L149 are problematic, because they are testing rather important functionality of cross-namespace template processing.
This problem recently escalated after landing k8s 1.25, where there was a suspicion that the new version of kube-apiserver removed that functionality. We need to bring back this test, as well as similar tests which touch login functionality. https://github.com/openshift/origin/blob/master/test/extended/testdata/cmd/test/cmd/authentication.sh is another similar test being skipped for similar reasons.
Based on my search (https://github.com/openshift/origin/blob/master/test/extended/oauth/helpers.go#L18), we could deploy a Basic Auth provider, i.e. password-based, and group all tests relying on this functionality under a single umbrella.
The biggest question to answer is how we can properly deal with multiple IdentityProviders, so I'd suggest reaching out to the Auth team for help.
The second problem that was identified is the variety of cloud providers, so we've agreed to run this test initially only on AWS and GCP.
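For reference, one password-based identity provider such tests could rely on is HTPasswd; a hedged sketch of the OAuth configuration (assuming an htpass-secret secret has already been created in openshift-config; the provider name is a placeholder):
apiVersion: config.openshift.io/v1
kind: OAuth
metadata:
  name: cluster
spec:
  identityProviders:
  - name: htpasswd-provider
    mappingMethod: claim
    type: HTPasswd
    htpasswd:
      fileData:
        name: htpass-secret
Whether the test helpers should configure this directly or go through the Auth team's preferred mechanism for multiple IdentityProviders is exactly the open question above.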