Jump to: Complete Features | Incomplete Features | Complete Epics | Incomplete Epics | Other Complete | Other Incomplete
Note: this page shows the Feature-Based Change Log for a release
These features were completed when this image was assembled
1. Proposed title of this feature request
Add runbook_url to alerts in the OCP UI
2. What is the nature and description of the request?
If an alert includes a runbook_url label, then it should appear in the UI for the alert as a link.
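For illustration, a minimal sketch of an alerting rule carrying a runbook_url (OpenShift's built-in alerts typically carry it as an annotation; the rule name and URL below are hypothetical):
```yaml
apiVersion: monitoring.coreos.com/v1
kind: PrometheusRule
metadata:
  name: example-alerts              # hypothetical name
  namespace: openshift-monitoring
spec:
  groups:
    - name: example
      rules:
        - alert: ExampleAlert
          expr: vector(1)
          labels:
            severity: warning
          annotations:
            summary: Example alert with a runbook link
            runbook_url: https://example.com/runbooks/ExampleAlert.md  # to be rendered as a link in the alert details UI
```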
3. Why does the customer need this? (List the business requirements here)
Customers can easily reach the alert runbook and address their issues.
4. List any affected packages or components.
Rebase openshift-controller-manager to k8s 1.24
When this image was assembled, these features were not yet completed. Therefore, only the Jira Cards included here are part of this release
Pre-Work Objectives
Since some of our requirements from the ACM team will not be available for the 4.12 timeframe, the team should work on anything we can get done in the scope of the console repo so that when the required items are available in 4.13, we can be more nimble in delivering GA content for the Unified Console Epic.
Overall GA Key Objective
Providing our customers with a single, simplified user experience (Hybrid Cloud Console) that is extensible, can run locally or in the cloud, and is capable of managing the fleet as well as deep diving into a single cluster.
Why do customers want this?
Why do we want this?
Phase 2 Goal: Productization of the unified Console
As a developer, I would like to disable clusters like *KS that we can't support for multi-cluster (for instance, because we can't authenticate). The ManagedCluster resource has a vendor label that we can use to know whether the cluster is supported.
cc Ali Mobrem Sho Weimer Jakub Hadvig
UPDATE (9/20/22): we want an allow-list with OpenShift, ROSA, ARO, ROKS, and OpenShiftDedicated.
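For reference, a sketch of the vendor label on a ManagedCluster that the console could compare against the allow-list (cluster name and label value are illustrative):
```yaml
apiVersion: cluster.open-cluster-management.io/v1
kind: ManagedCluster
metadata:
  name: example-cluster      # illustrative
  labels:
    vendor: OpenShift        # compared against the allow-list above; *KS clusters would carry a different vendor value
spec:
  hubAcceptsClient: true
```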
Acceptance criteria:
RHEL CoreOS should be updated to RHEL 9.2 sources to take advantage of newer features, hardware support, and performance improvements.
Requirement | Notes | isMvp? |
---|---|---|
CI - MUST be running successfully with test automation | This is a requirement for ALL features. | YES |
Release Technical Enablement | Provide necessary release enablement details and documents. | YES |
Questions to be addressed:
PROBLEM
We would like to improve our signal for RHEL9 readiness by increasing internal engineering engagement and external partner engagement on our community OpenShift offering, OKD.
PROPOSAL
Adding OKD to run on SCOS (CentOS Stream CoreOS) brings the community offering closer to what a partner or an internal engineering team might expect on OCP.
ACCEPTANCE CRITERIA
Image has been switched/included:
DEPENDENCIES
The SCOS build payload.
RELATED RESOURCES
OKD+SCOS proposal: https://docs.google.com/presentation/d/1_Xa9Z4tSqB7U2No7WA0KXb3lDIngNaQpS504ZLrCmg8/edit#slide=id.p
OKD+SCOS work draft: https://docs.google.com/document/d/1cuWOXhATexNLWGKLjaOcVF4V95JJjP1E3UmQ2kDVzsA/edit
Acceptance Criteria
A stable OKD on SCOS is built and available to the community every sprint.
This comes up when installing ipi-on-aws on arm64 with the custom payload build at quay.io/aleskandrox/okd-release:4.12.0-0.okd-centos9-full-rebuild-arm64, which uses SCOS as the machine-os-content image.
```
[root@ip-10-0-135-176 core]# crictl logs c483c92e118d8
2022-08-11T12:19:39+00:00 [cnibincopy] FATAL ERROR: Unsupported OS ID=scos
```
The probable fix has to land on https://github.com/openshift/cluster-network-operator/blob/master/bindata/network/multus/multus.yaml#L41-L53
Assumption
Doc: https://docs.google.com/document/d/1sXCaRt3PE0iFmq7ei0Yb1svqzY9bygR5IprjgioRkjc/edit
Customers do not pay Red Hat more to run HyperShift control planes and supporting infrastructure than Standalone control planes and supporting infrastructure.
Assumption
cluster-snapshot-controller-operator is running on the CP.
More information here: https://docs.google.com/document/d/1sXCaRt3PE0iFmq7ei0Yb1svqzY9bygR5IprjgioRkjc/edit
As HyperShift Cluster Instance Admin, I want to run cluster-csi-snapshot-controller-operator in the management cluster, so the guest cluster runs just my applications.
Exit criteria:
As an OpenShift developer, I want cluster-csi-snapshot-controller-operator to use existing controllers in library-go, so I don't need to maintain yet more code that does the same thing as library-go.
Note: if this refactoring introduces any new conditions, we must make sure that 4.11 snapshot controller clears them to support downgrade! This will need 4.11 BZ + z-stream update!
Similarly, if some conditions become obsolete / not managed by any controller, they must be cleared by 4.12 operator.
Exit criteria:
CNCC was moved to the management cluster and it should use proxy settings defined for the management cluster.
Customers do not pay Red Hat more to run HyperShift control planes and supporting infrastructure than Standalone control planes and supporting infrastructure.
Assumption
Run cluster-storage-operator (CSO) + AWS EBS CSI driver operator + AWS EBS CSI driver control-plane Pods in the management cluster, run the driver DaemonSet in the hosted cluster.
More information here: https://docs.google.com/document/d/1sXCaRt3PE0iFmq7ei0Yb1svqzY9bygR5IprjgioRkjc/edit
As an OCP support engineer, I want the same guest-cluster storage-related objects in the output of "hypershift dump cluster --dump-guest-cluster" as in "oc adm must-gather", so I can debug storage issues easily.
must-gather collects: storageclasses, persistentvolumes, volumeattachments, csidrivers, csinodes, volumesnapshotclasses, volumesnapshotcontents.
hypershift collects none of these; the relevant code is here: https://github.com/openshift/hypershift/blob/bcfade6676f3c344b48144de9e7a36f9b40d3330/cmd/cluster/core/dump.go#L276
Exit criteria:
As HyperShift Cluster Instance Admin, I want to run cluster-storage-operator (CSO) in the management cluster, so the guest cluster runs just my applications.
Exit criteria:
As HyperShift Cluster Instance Admin, I want to run AWS EBS CSI driver operator + control plane of the CSI driver in the management cluster, so the guest cluster runs just my applications.
Exit criteria:
oc-mirror is a GA product as of OpenShift 4.11.
The goal of this feature is to solve any future customer requests for new features or capabilities in oc-mirror.
As a developer building container images on OpenShift
I want to specify that my build should run without elevated privileges
So that builds do not run as root from the host's perspective with elevated privileges
No QE required for Dev Preview. OpenShift regression testing will verify that existing behavior is not impacted.
We will need to document how to enable this feature, with sufficient warnings regarding Dev Preview.
This potentially warrants an OpenShift blog post.
We have a set of images
that should become multiarch images. This should be done both in upstream and downstream.
As a reference, we have built those images internally as multiarch and made them available as
They can be consumed by the Assisted Service pod via the following env:
```yaml
- name: AGENT_DOCKER_IMAGE
  value: registry.redhat.io/rhai-tech-preview/assisted-installer-agent-rhel8:latest
- name: CONTROLLER_IMAGE
  value: registry.redhat.io/rhai-tech-preview/assisted-installer-reporter-rhel8:latest
- name: INSTALLER_IMAGE
  value: registry.redhat.io/rhai-tech-preview/assisted-installer-rhel8:latest
```
OLM would have to support a mechanism like podAffinity that allows multiple architecture values to be specified, enabling it to pin operators to worker nodes of the matching architecture.
Ref: https://github.com/openshift/enhancements/pull/1014
Cut a new release of the OLM API and update OLM API dependency version (go.mod) in OLM package; then
Bring the upstream changes from OLM-2674 to the downstream olm repo.
A/C:
- New OLM API version release
- OLM API dependency updated in OLM Project
- OLM Subscription API changes downstreamed
- OLM Controller changes downstreamed
- Changes manually tested on Cluster Bot
As a user, I should be able to configure CSI driver to have a storage topology.
We drive OpenShift cross-market customer success and new customer adoption with constant improvements and feature additions to the existing capabilities of our OpenShift Core Networking (SDN and Network Edge). This feature captures that natural progression of the product.
There are definitely grey areas, but in general:
Questions to be addressed:
Goal: Provide queryable metrics and telemetry for cluster routes and sharding in an OpenShift cluster.
Problem: Today we test OpenShift performance and scale with best-guess or anecdotal evidence for the number of routes that our customers use. The best practice for a large number of routes in a cluster is to shard; however, we have no visibility into whether and how customers are using sharding.
Why is this important? These metrics will inform our performance and scale testing, documented cluster limits, and how customers are using sharding for best practice deployments.
Dependencies (internal and external):
Prioritized epics + deliverables (in scope / not in scope):
Not in scope:
Estimate (XS, S, M, L, XL, XXL):
Previous Work:
Open questions:
Acceptance criteria:
Epic Done Checklist:
Description:
As described in the Design Doc, the following information needs to be exported from the Cluster Ingress Operator:
Design 2 will be implemented as part of this story.
Acceptance Criteria:
Description:
As described in the Metrics to be sent via telemetry section of the Design Doc, the following metrics need to be sent from the OpenShift cluster to Red Hat premises:
The metrics should be allowlisted on the cluster side.
The steps described in Sending metrics via telemetry need to be followed, specifically step 5.
Depends on CFE-478.
Acceptance Criteria:
In the console-operator repo we need to add the `capability.openshift.io/console` annotation to all the manifests that the operator either contains or creates on the fly.
Manifests are currently present in /bindata and /manifest directories.
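A rough sketch of what an annotated manifest could look like (the exact annotation key/value may differ; the insights-operator example referenced below is the authoritative pattern):
```yaml
apiVersion: v1
kind: ConfigMap                    # any manifest under /bindata or /manifests
metadata:
  name: console-config             # illustrative
  namespace: openshift-console
  annotations:
    capability.openshift.io/console: "true"   # assumed value; key per the description above
```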
Here is an example of the insights-operator change.
Here is the overall enhancement doc.
This is an epic bucket for all activities surrounding the creation of a declarative approach to releasing and maintaining OLM catalogs.
When working on this Epic, it's important to keep in mind this other potentially related Epic: https://issues.redhat.com/browse/OLM-2276
Jira Description
As an OPM maintainer, I want to downstream the PR for (OCP 4.12) and backport it to OCP 4.11 so that IIB will NOT be impacted by the changes when it upgrades the OPM version to use the next/future opm upstream release (v1.25.0).
Summary / Background
IIB (the downstream service that manages the indexes) uses the upstream version, and if they bump the OPM version to the next/future (v1.25.0) release with this change before the downstream images are updated, then the process to manage the indexes downstream will face issues and it will impact the distributions.
Acceptance Criteria
Definition of Ready
Definition of Done
Enhance the veneer rendering to be able to read the input veneer data from stdin, via a pipe, in a manner similar to https://dev.to/napicella/linux-pipes-in-golang-2e8j
The command could then be used in a manner similar to many k8s examples, like:
```shell
opm alpha render-veneer semver -o yaml < infile > outfile
```
Upstream issue link: https://github.com/operator-framework/operator-registry/issues/1011
Feature Overview
Provide CSI drivers to replace all the intree cloud provider drivers we currently have. These drivers will probably be released as tech preview versions first before being promoted to GA.
Goals
Requirements
| Requirement | Notes | isMvp? |
|---|---|---|
| Framework for CSI driver | TBD | Yes |
| Drivers should be available to install both in disconnected and connected mode | Yes | |
| Drivers should upgrade from release to release without any impact | Yes | |
| Drivers should be installable via CVO (when in-tree plugin exists) | | |
Out of Scope
This work will only cover the drivers themselves; it will not include:
Background, and strategic fit
In a future Kubernetes release (currently 1.21), in-tree cloud provider drivers will be deprecated and replaced with CSI equivalents. We need the drivers created so that we continue to support the ecosystems in an appropriate way.
Assumptions
Customer Considerations
Customers will need to be able to use the storage they want.
Documentation Considerations
This Epic is to track the GA of this feature
As an OCP user, I want images for GCP Filestore CSI Driver and Operator, so that I can install them on my cluster and utilize GCP Filestore shares.
We need to continue to maintain specific areas within storage; this epic captures that effort and tracks it across releases.
Goals
Requirements
Requirement | Notes | isMvp? |
---|---|---|
Telemetry | No | |
Certification | No | |
API metrics | No | |
Out of Scope
n/a
Background, and strategic fit
With the expected scale of our customer base, we want to keep the load of customer tickets / BZs low.
Assumptions
Customer Considerations
Documentation Considerations
Notes
In progress:
High prio:
Unsorted
The end of general support for vSphere 6.7 will be on October 15, 2022, so vSphere 6.7 will be deprecated in 4.11.
We want to encourage vSphere customers to upgrade to vSphere 7 in OCP 4.11, since VMware is ending general support for vSphere 6.7 in October 2022.
We want to set the cluster to Upgradeable=false and have a strong alert pointing to our docs / requirements.
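For illustration, a sketch of how that could surface on a ClusterOperator status (the operator name, reason, and message are illustrative):
```yaml
apiVersion: config.openshift.io/v1
kind: ClusterOperator
metadata:
  name: storage                            # illustrative; whichever operator owns the check
status:
  conditions:
    - type: Upgradeable
      status: "False"
      reason: VSphereOlderVersionDetected  # illustrative reason
      message: "vSphere 6.7 is deprecated; upgrade to vSphere 7 before upgrading OCP. See the documented requirements."
      lastTransitionTime: "2022-04-01T00:00:00Z"
```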
related slack: https://coreos.slack.com/archives/CH06KMDRV/p1647541493096729
Traditionally we did these updates as bugfixes, because we did them after the feature freeze (FF). Trying no-feature-freeze in 4.12. We will try to do as much as we can before FF, but we're quite sure something will slip past FF as usual.
Update the driver to the latest upstream release. Notify QE and docs with any new features and important bugfixes that need testing or documentation.
(Using separate cards for each driver because these updates can be more complicated)
Update the driver to the latest upstream release. Notify QE and docs with any new features and important bugfixes that need testing or documentation.
(Using separate cards for each driver because these updates can be more complicated)
Update the driver to the latest upstream release. Notify QE and docs with any new features and important bugfixes that need testing or documentation.
(Using separate cards for each driver because these updates can be more complicated)
Update the driver to the latest upstream release. Notify QE and docs with any new features and important bugfixes that need testing or documentation.
(Using separate cards for each driver because these updates can be more complicated)
Update the driver to the latest upstream release. Notify QE and docs with any new features and important bugfixes that need testing or documentation.
(Using separate cards for each driver because these updates can be more complicated)
Update all OCP and kubernetes libraries in storage operators to the appropriate version for OCP release.
This includes (but is not limited to):
Operators:
Update the driver to the latest upstream release. Notify QE and docs with any new features and important bugfixes that need testing or documentation.
This includes ibm-vpc-node-label-updater!
(Using separate cards for each driver because these updates can be more complicated)
There is a new driver release 5.0.0 since the last rebase that includes snapshot support:
https://github.com/kubernetes-sigs/ibm-vpc-block-csi-driver/releases/tag/v5.0.0
Rebase the driver on v5.0.0 and update the deployments in ibm-vpc-block-csi-driver-operator.
There are no corresponding changes in ibm-vpc-node-label-updater since the last rebase.
Update all CSI sidecars to the latest upstream release.
This includes update of VolumeSnapshot CRDs in https://github.com/openshift/cluster-csi-snapshot-controller-operator/tree/master/assets
On new installations, we should make the StorageClass created by the CSI operator the default one.
However, we shouldn't do that in an upgrade scenario. The main reason is that users might have set a different quota on the CSI driver StorageClass.
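The usual mechanism is the default-class annotation on the StorageClass the operator creates; a minimal sketch (class and provisioner names are illustrative):
```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: csi-default                  # illustrative
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"   # set on fresh installs only, left untouched on upgrades
provisioner: example.csi.vendor.com  # illustrative driver name
```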
Exit criteria:
This Epic tracks the GA of this feature
Epic Goal
On new installations, we should make the StorageClass created by the CSI operator the default one.
However, we shouldn't do that in an upgrade scenario. The main reason is that users might have set a different quota on the CSI driver StorageClass.
Exit criteria:
tldr: three basic claims, the rest is explanation and one example
While bugs are an important metric, fixing bugs is different from investing in maintainability and debuggability. Investing in fixing bugs will help alleviate immediate problems, but doesn't improve the ability to address future problems. You (may) get a code base with fewer bugs, but when you add a new feature, it will still be hard to debug problems and interactions. This pushes a code base towards stagnation, where it gets harder and harder to add features.
One alternative is to ask teams to produce ideas for how they would improve future maintainability and debuggability instead of focusing on immediate bugs. This would produce designs that make problem determination, bug resolution, and future feature additions faster over time.
I have a concrete example of one such outcome of focusing on bugs vs. quality. We have resolved many bugs about communication failures with ingress by finding problems with point-to-point network communication. We have fixed the individual bugs, but have not improved the code for future debugging. In doing so, we chase many hard-to-diagnose problems across the stack. The alternative is to create a point-to-point network connectivity capability. This would immediately improve bug resolution and stability (detection) for kuryr, ovs, legacy sdn, network-edge, kube-apiserver, openshift-apiserver, authentication, and console. Bug fixing does not produce the same impact.
We need more investment in our future selves. Saying, "teams should reserve this" doesn't seem to be universally effective. Perhaps an approach that directly asks for designs and impacts and then follows up by placing the items directly in planning and prioritizing against PM feature requests would give teams the confidence to invest in these areas and give broad exposure to systemic problems.
Relevant links:
Enable the chaos plugin https://coredns.io/plugins/chaos/ in our CoreDNS configuration so that we can use a DNS query to easily identify which DNS pods are responding to our requests.
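For context, a sketch of what the chaos stanza could look like, wrapped in a ConfigMap for illustration (in OpenShift the Corefile is rendered by the DNS operator, so the actual change lands in the operator's template; names are illustrative):
```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: dns-default                # illustrative
  namespace: openshift-dns
data:
  Corefile: |
    .:5353 {
        chaos                      # answers CH-class TXT queries such as hostname.bind, identifying the responding pod
        kubernetes cluster.local in-addr.arpa ip6.arpa
        forward . /etc/resolv.conf
        cache 30
    }
```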
| Requirement | Notes | isMvp? |
|---|---|---|
| CI - MUST be running successfully with test automation | This is a requirement for ALL features. | YES |
| Release Technical Enablement | Provide necessary release enablement details and documents. | YES |
Questions to be addressed:
As a developer, I want to make status.HostIP for Pods visible in the Pod details page of the OCP Web Console. Currently there is no way to view the node IP for a Pod in the OpenShift Web Console. When viewing a Pod in the console, the field status.HostIP is not visible.
Acceptance criteria:
As a console user, I want to have the option to:
For Deployments we will add the 'Restart rollout' action button. This action will PATCH the Deployment object's 'spec.template.metadata.annotations' block, by adding 'openshift.io/restartedAt: <actual-timestamp>' annotation. This will restart the deployment, by creating a new ReplicaSet.
For DeploymentConfig we will add 'Retry rollout' action button. This action will PATCH the latest revision of ReplicationController object's 'metadata.annotations' block by setting 'openshift.io/deployment/phase: "New"' and removing openshift.io/deployment.cancelled and openshift.io/deployment.status-reason.
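A sketch of the resulting patch to a Deployment (the annotation key is taken from the description above; the timestamp is illustrative):
```yaml
spec:
  template:
    metadata:
      annotations:
        openshift.io/restartedAt: "2022-08-24T13:55:00Z"   # updated on every 'Restart rollout'; triggers a new ReplicaSet
```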
Acceptance Criteria:
BACKGROUND:
OpenShift console will be updated to allow rollout restart deployment from the console itself.
Currently, from the OpenShift console, for the resource "deploymentconfigs" we can only start and pause the rollout, and for the resource "deployment" we can only resume the rollout. Neither resource (Deployment & DeploymentConfig) has an option to restart the rollout. That is why the customer wants this functionality in the OpenShift console as well as the CLI.
The customer wants developers who are not fluent with the oc tool and terminal utilities to be able to use the console instead of the terminal to restart a deployment, just as it is done through the CLI with the command "oc rollout restart deploy/<deployment-name>".
Usually when developers change the config map that a deployment uses, they have to restart the pods. Currently, the developers have to use the oc rollout restart deployment command. The customer wants a button/menu that performs the same action from the console as well.
Design
Doc: https://docs.google.com/document/d/1i-jGtQGaA0OI4CYh8DH5BBIVbocIu_dxNt3vwWmPZdw/edit
When OCP is performing a cluster upgrade, the user should be notified about it.
There are two possibilities for how to surface the cluster upgrade to users:
AC:
Note: We need to decide whether we want to distinguish this particular notification with a different color. Ccing Ali Mobrem.
Created from: https://issues.redhat.com/browse/RFE-3024
4.11 MVP Requirements
Out of scope use cases (that are part of the Kubeframe/factory project):
Questions to be addressed:
Support user input consisting of just InstallConfig and AgentConfig
Modify the agent-config to accept NMState config for each host.
This could be directly inline, or referenced from a file (either explicitly or by implicitly inferring the filename). This is TBD. We decided to go with `AgentConfig embeds install time node-specific configuration` option https://docs.google.com/document/d/1vCy0LikVPhbGIHF494NHTYsfu85fOiOicR3oB1vlEWI/edit#
Using the NMState data provided, generate the equivalent NMStateConfig manifests in cluster-manifests.
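One possible shape for the inline option, purely as a sketch (the exact schema was still TBD at the time this was written; the field names below are assumptions):
```yaml
apiVersion: v1alpha1                # assumed
kind: AgentConfig
metadata:
  name: example
hosts:                              # assumed per-host list
  - hostname: master-0
    interfaces:
      - name: eno1
        macAddress: "00:00:00:00:00:01"
    networkConfig:                  # raw NMState document, converted into an NMStateConfig manifest in cluster-manifests
      interfaces:
        - name: eno1
          type: ethernet
          state: up
          ipv4:
            enabled: true
            dhcp: true
```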
Validate the initial config files for the agent installer, ensuring that all the required fields are present and well defined
If the node0 IP is specified in agentConfig, it takes precedence over the selection from NMStateConfigs; otherwise, we keep the same heuristic as we have now to choose.
If we make the ZTP manifest assets depend on the install-config asset, the install config will effectively be required (and the installer will launch into the interactive CLI questionnaire if it is not present).
We want to use the install-config if it is present, and just use the ZTP manifests if those are present instead. (Note: this appears to conflict with what AGENT-135 says, so one of these stories might be wrong.)
The installer team has more details and can probably suggest a design.
Given an install-config, convert it to the ZTP manifests that are used to directly populate the Ignition.
This document contains a list of fields and how they match up: https://docs.google.com/document/d/1S4OluK1c-CIma9hmEylPay9ugcqKrD64S7DgiYpufqE/edit
Given an install-config, generate the mirroring config assets (registries.conf and ca-bundle.crt) from the data in it.
As an OpenShift infrastructure owner, I want to deploy OpenShift clusters with dual-stack IPv4/IPv6
As an OpenShift infrastructure owner, I want to deploy OpenShift clusters with single-stack IPv6
IPv6 and dual-stack clusters are requested often by customers, especially Telco customers. Working with dual-stack clusters is a requirement for many, but also a transition to single-stack IPv6 clusters, which for some of our users is the final destination.
Karim's work proving how agent-based can deploy IPv6: IPv6 deploy with agent-based installer
For dual-stack installations, the agent-cluster-install.yaml must have both an IPv4 and an IPv6 subnet in networking.MachineNetwork or assisted-service will throw an error. This field is in InstallConfig but it must be added to agent-cluster-install in its Generate().
For IPv4 and IPv6 installs, setting up the MachineNetwork is not needed, but it also does not cause problems if it's set, so it should be fine to set it at all times.
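A sketch of the generated AgentClusterInstall networking section for a dual-stack install (CIDRs and the name are illustrative):
```yaml
apiVersion: extensions.hive.openshift.io/v1beta1
kind: AgentClusterInstall
metadata:
  name: example                         # illustrative
spec:
  networking:
    machineNetwork:
      - cidr: 192.168.111.0/24          # IPv4 subnet
      - cidr: fd2e:6f44:5dd8:c956::/120 # IPv6 subnet
```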
Set the ClusterDeployment CRD to deploy OpenShift in FIPS mode and make sure that after deployment the cluster is set in that mode
In order to install FIPS-compliant clusters, we need to make sure that installconfig + agentconfig based deployments take into account the FIPS config in installconfig.
This task is about passing the config to agentclusterinstall so it makes it into the iso. Once there, AGENT-374 will give it to assisted service
Currently assisted service chooses one of the nodes that reach out to it to be the bootstrap node. We need to understand the choice mechanism and to make it reliably choose the node that we want node0 to be.
The bootstrap node already waits for the other nodes before rebooting; we need to make sure that this wait is sufficient for assisted-service as well. Prevent the assisted-service from rebooting the node it is running on until the following conditions are true:
We can try having it reboot into bootstrap while making sure that assisted-service runs after the reboot, but ideally we'd want the node to start bootstrapping without needing the reboot (as per customer/PM demands to minimize reboots).
In the context of METAL-10 there was a proposal to add a file that the agent would check for, such that the presence of this file would inhibit a reboot. We could possibly use the same mechanism here to avoid the need for large-scale changes to how assisted-service itself works (assisted-service would still need to delete the file at the appropriate time, but that is a less-invasive change). However, there are timeouts that have to be considered, so changes to the state machine may be required.
Note that we do want to continue to install to disk on the assisted-service host in parallel with the others, since this is on the critical path slowing down all deployments. Only the reboot should be delayed.
Single-node deployments are an exception to this.
Acceptance criteria:
Currently we allow the assisted-service to generate the InfraEnv ID automatically when the InfraEnv is created. The agents then have to fetch the list of InfraEnvs from the service to get the ID. This is suboptimal in a number of ways and won't be possible at all once we have authentication enabled on the assisted-service API.
Instead, modify assisted-service to accept an environment variable that contains a fixed InfraEnv ID. Any new InfraEnv created will use this ID (this has the desirable side effect that there can be only one InfraEnv).
Pre-generate a random ID in the command-line tool and store it in the configuration of both the agent and the assisted-service in the ISO.
A cli subcommand that:
Using podman kube play from a systemd service isn't ideal in terms of process monitoring, and makes it hard to do things like attach volumes. Split the containers out into separate containers (which can all be in the same pod still) that are started by their own systemd services. This will mean decomposing the ConfigMap that passes settings.
A cli subcommand that waits for the cluster to come up. This should be able to reuse the code from the regular openshift-install wait-for install-complete command largely unchanged, but if the k8s API is not available it may be because we're still running the assisted part of installation. It probably needs to fall back to checking for that. I'm not sure what assumptions the existing installer command makes about when it is safe to run it. Ideally we would keep behaviour relatively consistent.
Currently the Assisted Service generates the credentials by running the ignition generation step of the openshift-installer. This is why the credentials are only retrievable from the REST API towards the end of the installation.
In the BILLI usage, which takes down assisted service before the installation is complete, there is no obvious point at which to alert the user that they should retrieve the credentials. This means that we either need to:
Check that the cluster is ready for installation and send the appropriate REST API call to trigger the installation.
Instead of fmt.Errorf, use a logging library to log the errors and debug information.
Fix the unwanted API call to set API_VIP in case of SNO cluster in start-cluster-installation.service.
{"code":"400","href":"","id":400,"kind":"Error","reason":"API VIP cannot be set with User Managed Networking"}
Create a pure golang implementation of AGENT-37 and place the code in the assisted-service repo. A new binary should be created in the assisted-service image. The binary will be used in the create-cluster-and-infra-env service.
The service start-cluster-installation fails on ConditionPathExists even though the path is created.
```
[core@master-0 ~]$ sudo systemctl status start-cluster-installation.service
● start-cluster-installation.service - Service that starts cluster installation
   Loaded: loaded (/etc/systemd/system/start-cluster-installation.service; enabled; vendor preset: enabled)
   Active: inactive (dead)
Condition: start condition failed at Wed 2022-05-11 04:40:43 UTC; 32s ago
           └─ ConditionPathExists=/etc/assisted-service/node0 was not met
```
Also, when the ConditionPath error is fixed, later the service fails with
```
start-cluster-installation.sh[2533]: jq: error (at <stdin>:0): Cannot index number with string "status"
```
As an OpenShift infrastructure owner, I need to add host-specific configurations at install time, so that they are applied when the cluster installation is completed.
Especially, but not restricted to, on-prem deployments, hosts need specific configurations (beyond the individual host network configuration). Customers automating installs want to avoid day-2 configurations and node reboots, so applying configurations during the installation is a requirement for them. Examples of this are multipath and SCTP on bare metal nodes, where it's not always straightforward to do it on day 2 and reboots are required.
If it is not generated from AgentConfig, we should at least generate a skeleton
Acceptance criteria:
There is no harm in supplying the “rd.multipath=default” argument on any host. The effect of this argument is to generate a default /etc/multipath.conf file and to enable the multipathd service. The assisted-service now adds these to its discovery ISOs, and we will do the same with the agent ISO.
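One way to carry a setting like this into the installed cluster is an extra MachineConfig manifest with a kernel argument; a sketch under that assumption, not necessarily the mechanism chosen here:
```yaml
apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfig
metadata:
  name: 99-worker-multipath          # illustrative
  labels:
    machineconfiguration.openshift.io/role: worker
spec:
  kernelArguments:
    - rd.multipath=default           # generates the default /etc/multipath.conf and enables multipathd, as described above
```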
Necessary for SCTP
Manifests are placed in <install-config-dir>/openshift and copied to the ISO. (Previously we assumed this would be <install-config-dir>/manifests, but Andrea suggested that openshift would be more consistent.)
A client in the ISO submits the manifests through assisted-service API.
REST
Get the ZTP extra manifests into the image and use the REST API below:
/v2/clusters/{cluster_id}/manifests
Epic Goal
Why is this important?
Acceptance Criteria
Previous Work (Optional)
Done Checklist
References
We currently support static IPs on Node 0, and this is required in order to get the common IP for the other nodes. We also need to support configuration of static IPs on all of the nodes even though they could also use DHCP for their addresses.
As an admin, I want to be able to:
so that I can achieve
The agent-based installation for Zero Touch Provisioning has a Custom Resource defined to configure the static networking of the nodes that will be provisioned, e.g.:
```yaml
apiVersion: agent-install.openshift.io/v1beta1
kind: NMStateConfig
metadata:
  name: mgmt-spoke1
  namespace: mgmt-spoke1
  labels:
    cluster-name: mgmt-spoke1
spec:
  config:
    interfaces:
      - name: bond0
        type: bond
        link-aggregation:
          mode: active-backup
          options:
            miimon: "140"
          slaves:
            - eth0
            - eth1
        state: up
        ipv4:
          enabled: true
          address:
            - ip: 192.168.123.151
              prefix-length: 24
          dhcp: false
        ipv6:
          enabled: false
    dns-resolver:
      config:
        server:
          - 192.168.1.1
    routes:
      config:
        - destination: 0.0.0.0/0
          next-hop-address: 192.168.1.1
          next-hop-interface: bond0
          table-id: 254
  interfaces:
    - name: "eth0"
      macAddress: "00:00:00:00:00:00"
    - name: "eth1"
      macAddress: "00:00:00:00:00:11"
```
NMState team is currently working on a rust library that includes the gc command that assisted service uses to generate all the configs and then load the one that matches the interfaces. We should reach out to Nick Carboni to check on assisted-service progress in integrating the new library and leverage the same code to make sure our ISO can use the same network configuration mechanism
The infraenv controller fetches the NMStateConfigs from the kube-api. Since we don't have the kube-api, we need to read them from the manifests and incorporate them into the InfraEnvCreateParams to create the InfraEnv.
Using git-filter-repo, rewrite the commits in fleeting to place files in their correct locations in the installer. The resulting commits can then be merged into the agent branch of the installer with a pull request.
Data files should be moved to e.g. data/data/agent, appending the suffix .template to any that are templated.
Code files that are needed by the installer should be moved to appropriate directories that have the agent team in the OWNERS.
Keep the git-filter-repo script so that development can continue in parallel on fleeting until we are ready to switch CI over to the installer implementation.
Create installer Assets corresponding to each ZTP manifest, and move the code for reading them from disk into the respective assets.
Create an asset for AgentClusterInstall. Parent assets are install-config.yaml and agent-config.yaml.
From the initial install-config.yaml + agent-config.yaml, generate all the ZTP manifest files required by the create image command.
Dependency: install-config
*Note*: we could evaluate further splitting this task into distinct manifest assets
As a first step for the assets integration, the create image command will need to fetch the required ZTP manifest files from the cluster-manifests folder.
This will allow us to:
1) Get the manifest files from the right location
2) Seamlessly integrate the create image command with the create cluster-manifests one while the tasks related to assets generation are still in progress
3) Keep the create image command fully working until the assets generation is completed (users will still be able to manually create/edit the assets in the cluster-manifests folder)
Add a subcommand to create the ephemeral ISO.
Create Agent ISO and Agent Ignition assets in the installer, and use them to generate a customized ISO.
This story is just for implementing the mechanics, filling in the ignition will be left to another story.
Currently it's possible to specify the release version to be installed via the ClusterImageSet manifests.
Since we're working from within the openshift installer, the accepted version should be the one hard-coded in the installer binary (or overridden by the env var).
Using code from the installer (not code from fleeting), populate the Ignition asset with the data built in to the installer binary.
Currently we use a separate embed.FS (inherited from fleeting) to load the data files to go into the ignition. We should get rid of this and use the same method as the rest of the installer. We should also use the installer's code to e.g. do templating and convert to ignition format and throw away the fleeting code.
Ability to perform disconnected first cluster installation in the automated flow
When installing in a disconnected environment and the registries.conf and ca-bundle files have been loaded, these files should be provided to assisted-service as a mount of the mirror/ dir. Assisted-service will update its ignition config from these mounted files.
We won't be shipping the assisted-ui container. At this point it is blocking the disconnected work, since we don't have an OpenShift container for it in the payload, so it's time to remove it.
The CoreOS ISO can be extracted from the release payload using a command like:
```shell
oc image extract --file=/coreos/coreos-x86_64.iso quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1dc3c2a644f62049ea4a03fddb9305bc2b929405bf979b7f5e720cfadf327b54
```
Here the SHA points to the machine-os-images container in the release payload (which can be obtained using oc adm release info --image-for=machine-os-images). Both of these commands require the pull secret for the cluster to be available in your podman config.
We'll need to use equivalent code (hopefully imported from oc or the same library it uses) to fetch the base ISO using the supplied pull secret in the ZTP manifests and store it as an Asset.
Podman creates a pause container on the hosts for the service pod as follows:
```
$ sudo podman ps
87a02f9ace39 registry.access.redhat.com/ubi8/pause:latest 58 minutes ago Up 58 minutes ago 0.0.0.0:8080->8080/tcp, 0.0.0.0:8090->8090/tcp, 0.0.0.0:8888->8888/tcp 27f9183bfbd9-infra
```
We should check if this image needs to be mirrored, and figure out if we need to change dev-scripts or add an entry to registries.conf.
In order to configure the registry for disconnected installs, the following assets should be created:
RegistriesConfig (read from mirror/registries.conf)
CABundleCertificates (read from mirror/ca-bundle.crt)
As an OpenShift infrastructure owner, I want to deploy a cluster zero with RHACM or MCE and have the required components installed when the installation is completed
BILLI makes it easier to deploy a cluster zero. BILLI users know at installation time what the purpose of their cluster is when they plan the installation. Day-2 steps are necessary to install operators, and users, especially when automating installations, want to finish the installation flow with their required components already installed.
Description of criteria:
We are only allowing the user to provide extra manifests to install MCE at this time. We are not adding an option to "install mce" on the command line (or UI)
Description of criteria:
We are only allowing the user to provide extra manifests to install MCE at this time. We are not adding an option to "install mce" on the command line (or UI)
As a user I would like to see all the events that the autoscaler creates, even duplicates. Having the CAO set this flag will allow me to continue to see these events.
We have carried a patch for the autoscaler that would enable the duplication of events. This patch can now be dropped because the upstream added a flag for this behavior in https://github.com/kubernetes/autoscaler/pull/4921
Add GA support for deploying OpenShift to IBM Public Cloud
Close the existing gaps to make OpenShift on IBM Cloud VPC (Next Gen2) Generally Available.
This epic tracks the changes needed to the ingress operator to support IBM DNS Services for private clusters.
Currently in OpenShift we do not support distributing hotfix packages to cluster nodes. In time-sensitive situations, a RHEL hotfix package can be the quickest route to resolving an issue.
Before we ship OCP CoreOS layering in https://issues.redhat.com/browse/MCO-165 we need to switch the format of what is currently `machine-os-content` to be the new base image.
The overall plan is:
As an OCP CoreOS layering developer, having telemetry data about the number of clusters using osImageURL will help us understand how broadly this feature is being used and improve it accordingly.
Acceptance Criteria:
After https://github.com/openshift/os/pull/763 is in the release image, teach the MCO how to use it. This is basically:
This section includes Jira cards that are linked to an Epic, but the Epic itself is not linked to any Feature. These epics were completed when this image was assembled
This story only covers API components. We will create a separate story for other utility functions.
Today we are generating documentation for Console's Dynamic Plugin SDK in
frontend/packages/dynamic-plugin-sdk. We are missing ts-doc for a set of hooks and components.
We are generating the markdown from the dynamic-plugin-sdk using
yarn generate-doc
Here is the list of the API that the dynamic-plugin-sdk is exposing:
https://gist.github.com/spadgett/0ddefd7ab575940334429200f4f7219a
Acceptance Criteria:
Out of Scope:
An epic we can duplicate for each release to ensure we have a place to catch things we ought to be doing regularly but can tend to fall by the wayside.
As a developer, I want to be able to clean up the css markup after making the css / scss changes required for dark mode and remove any old unused css / scss content.
Acceptance criteria:
1. Proposed title of this feature request
Basic authentication for Helm Chart repository in helmchartrepositories.helm.openshift.io CRD.
2. What is the nature and description of the request?
As of v4.6.9, the HelmChartRepository CRD only supports client TLS authentication through spec.connectionConfig.tlsClientConfig.
3. Why do you need this? (List the business requirements here)
Basic authentication is widely used by many chart repository managers (Nexus OSS, Artifactory, etc.).
Helm CLI also supports them with the helm repo add command.
https://helm.sh/docs/helm/helm_repo_add/
4. How would you like to achieve this? (List the functional requirements here)
Probably by extending the CRD:
```yaml
spec:
  connectionConfig:
    username: username
    password:
      secretName: secret-name
```
The secret namespace should be openshift-config to align with the tlsClientConfig behavior.
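The referenced secret could then look something like this (the key name is an assumption for illustration):
```yaml
apiVersion: v1
kind: Secret
metadata:
  name: secret-name              # matches spec.connectionConfig.password.secretName above
  namespace: openshift-config    # aligned with the tlsClientConfig behavior
type: Opaque
stringData:
  password: s3cr3t               # key name is an assumption; the username is carried in the CR in the sketch above
```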
5. For each functional requirement listed in question 4, specify how Red Hat and the customer can test to confirm the requirement is successfully implemented.
Trying to pull helm charts from remote private chart repositories that have disabled anonymous access and offer basic authentication.
E.g.: https://github.com/sonatype/docker-nexus
As an OCP user, I would like to be able to install helm charts from repos added to ODC with basic authentication fields populated.
We need to support helm installs for Repos that have the basic authentication secret name and namespace.
Updating the ProjectHelmChartRepository CRD has already been done in a different story.
Supporting the HelmChartRepository CR: this feature will be scoped first to project/namespace-scoped repos.
If the new fields for basic auth are set in the repo CR, then use those credentials when making API calls to helm to install/upgrade charts. We will error out if the logged-in user does not have access to the secret referenced by the repo CR. If the basic auth fields are not present, we assume it is not an authenticated repo.
None
NA
I can list, install and update charts on authenticated repos from ODC
Needs Documentation both upstream and downstream
Needs new unit test covering repo auth
Dependencies identified
Blockers noted and expected delivery timelines set
Design is implementable
Acceptance criteria agreed upon
Story estimated
Unknown
Verified
Unsatisfied
ACCEPTANCE CRITERIA
NOTES
ACCEPTANCE CRITERIA
NOTES
This is an API change and we will consider this as a feature request.
https://issues.redhat.com/browse/NE-799 Please check this for more details
https://issues.redhat.com/browse/NE-799 Please check this for more details
No
N/A
We need tests for the ovirt-csi-driver and the cluster-api-provider-ovirt. These tests help us to
Also, having dedicated tests on lower levels with a smaller scope (unit, integration, ...) has the following benefits:
Integration tests need to be implemented according to https://cluster-api.sigs.k8s.io/developer/testing.html#integration-tests using envtest.
As a user, in the topology view, I would like to be updated intuitively if any of the deployments have reached quota limits.
Refer below for more details
As a user, I would like to be informed in an intuitive way when quotas have been reached in a namespace.
Refer below for more details
Provide a form driven experience to allow cluster admins to manage the perspectives to meet the ACs below.
We have heard the following requests from customers and developer advocates:
As an admin, I want to hide user perspective(s) based on the customization.
As an admin, I should be able to see a code snippet that shows how to add user perspectives
Based on the https://issues.redhat.com/browse/ODC-6732 enhancement proposal, the cluster admin can add user perspectives
To support the cluster-admin to configure the perspectives correctly, the developer console should provide a code snippet for the customization of yaml resource (Console CRD).
Customize Perspective Enhancement PR: https://github.com/openshift/enhancements/pull/1205
Previous work:
As an admin, I want to be able to use a form driven experience to hide user perspective(s)
As an admin, I want to hide the admin perspective for non-privileged users or hide the developer perspective for all users
Based on the https://issues.redhat.com/browse/ODC-6730 enhancement proposal, it is required to extend the console configuration CRD to enable the cluster admins to configure this data in the console resource
Previous customization work:
Customers don't want their users to have access to some/all of the items which are available in the Developer Catalog. The request is to change access for the cluster, not per user or persona.
Provide a form driven experience to allow cluster admins easily disable the Developer Catalog, or one or more of the sub catalogs in the Developer Catalog.
Multiple customer requests.
We need to consider how this will work with subcatalogs which are installed by operators: VMs, Event Sources, Event Catalogs, Managed Services, Cloud based services
As an admin, I want to hide sub-catalogs in the developer catalog or hide the developer catalog completely based on the customization.
As an admin, I want to hide/disable access to specific sub-catalogs in the developer catalog or the complete dev catalog for all users across all namespaces.
Based on the https://issues.redhat.com/browse/ODC-6732 enhancement proposal, it is required to extend the console configuration CRD to enable the cluster admins to configure this data in the console resource
Extend the "customization" spec type definition for the CRD in the openshift/api project
Previous customization work:
As a cluster-admin, I should be able to see a code snippet that shows how to enable sub-catalogs or the entire dev catalog.
Based on the https://issues.redhat.com/browse/ODC-6732 enhancement proposal, the cluster admin can add sub-catalog(s) from the Developer Catalog or the Dev catalog as a whole.
To support the cluster-admin to configure the sub-catalog list correctly, the developer console should provide a code snippet for the customization yaml resource (Console CRD).
Previous work:
Add a SOCKS proxy to cluster-network-operator so egressip can use grpc to reach worker nodes.
With the introduction of grpc as a means for determining the state of a given egress node, hypershift should be able to leverage a SOCKS proxy and become able to know the state of each egress node.
References relevant to this work:
1281-network-proxy
https://coreos.slack.com/archives/C01C8502FMM/p1658427627751939
https://github.com/openshift/hypershift/pull/1131/commits/28546dc587dc028dc8bded715847346ff99d65ea
This section includes Jira cards that are linked to an Epic, but the Epic itself is not linked to any Feature. These epics were not completed when this image was assembled
This epic tracks "business as usual" requirements / enhancements / bug fixing of the Insights Operator.
Today the links point at a rule-scoped page, but that page lacks information about recommended resolution. You can click through by cluster ID to your specific cluster and get that recommendation advice, but it would be more convenient and less confusing for customers if we linked directly to the cluster-scoped recommendation page.
We can implement this by updating the template here to be:
```go
fmt.Sprintf("https://console.redhat.com/openshift/insights/advisor/clusters/%s?first=%s%%7C%s", clusterID, ruleIDStr, rec.ErrorKey)
```
or something like that.
unknowns
request is clear, solution/implementation to be further clarified
The console has good error boundary components that are useful for dynamic plugins.
Exposing them will enable the plugins to get the same look and feel for handling React errors as the console.
The minimum requirement right now is to expose the ErrorBoundaryFallbackPage component from
https://github.com/openshift/console/blob/master/frontend/packages/console-shared/src/components/error/fallbacks/ErrorBoundaryFallbackPage.tsx
Acceptance Criteria: Add missing API docs for *Icon and *Status components in the API docs.
Following https://coreos.slack.com/archives/C011BL0FEKZ/p1650640804532309, it would be useful for us (network observability team) to have access to ResourceIcon in dynamic-plugin-sdk.
Currently ResourceLink is exported but not ResourceIcon
AC:
Move `frontend/public/components/nav` to `packages/console-app/src/components/nav` and address any issues resulting from the move.
There will be some expected lint errors relating to cyclical imports. These will require some refactoring to address.
We neither use nor support static plugin nav extensions anymore so we should remove the API in the static plugin SDK and get rid of related cruft in our current nav components.
AC: Remove static plugin nav extensions code. Check the navigation code for any references to the old API.
Currently the ConsolePlugins API version is v1alpha1. Since we are going GA with dynamic plugins we should be creating a v1 version.
This would require updates in following repositories:
AC:
NOTE: This story does not include the conversion webhook change which will be created as a follow on story
Based on API review CONSOLE-3145, we have decided to deprecate the following APIs:
cc Andrew Ballantyne Bryan Florkiewicz
Currently our `api.md` does not generate docs with "tags" (aka `@deprecated`) – we'll need to add that functionality to the `generate-doc.ts` script. See the code that works for `console-extensions.md`
`@openshift-console/plugin-shared` (NPM) is a package that will contain shared components that can be upversioned separately by the Plugins so they can keep core compatibility low but upversion and support more shared components as we need them.
This isn't documented today. We need to do that.
When defining two proxy endpoints,
```yaml
apiVersion: console.openshift.io/v1alpha1
kind: ConsolePlugin
metadata:
  ...
  name: forklift-console-plugin
spec:
  displayName: Console Plugin Template
  proxy:
    service:
      basePath: /
```
I get two proxy endpoints
/api/proxy/plugin/forklift-console-plugin/forklift-inventory
and
/api/proxy/plugin/forklift-console-plugin/forklift-must-gather-api
but both proxy to the `forklift-must-gather-api` service
e.g., a curl to:
[server url]/api/proxy/plugin/forklift-console-plugin/forklift-inventory
will point to the `forklift-must-gather-api` service, instead of the `forklift-inventory` service
We should have a global notification or the `Console plugins` page (e.g., k8s/cluster/operator.openshift.io~v1~Console/cluster/console-plugins) should alert users when console operator `spec.managementState` is `Unmanaged` as changes to `enabled` for plugins will have no effect.
During the development of https://issues.redhat.com/browse/CONSOLE-3062, it was determined additional information is needed in order to assist a user when troubleshooting a Failed plugin (see https://github.com/openshift/console/pull/11664#issuecomment-1159024959). As it stands today, there is no data available to the console to relay to the user regarding why the plugin Failed. Presumably, a message should be added to NotLoadedDynamicPlugin to address this gap.
AC: Add `message` property to NotLoadedDynamicPluginInfo type.
To align with https://github.com/openshift/dynamic-plugin-sdk, plugin metadata field dependencies as well as the @console/pluginAPI entry contained within should be made optional.
If a plugin doesn't declare the @console/pluginAPI dependency, the Console release version check should be skipped for that plugin.
The extension `console.dashboards/overview/detail/item` doesn't constrain the content to fit the card.
The details-card has an expectation that a <dd> item will be the last item (for spacing between items). Our static details-card items use a component called 'OverviewDetailItem'. This isn't enforced in the extension and can cause undesired padding issues if they just do whatever they want.
I feel our approach here should be making the extension take the props of 'OverviewDetailItem' where 'children' is the new 'component'.
This enhancement Introduces support for provisioning and upgrading heterogenous architecture clusters in phases.
We need to scan through the compute nodes and build a set of supported architectures from those. Each node on the cluster has a label for architecture: e.g. kubernetes.io/arch=arm64, kubernetes.io/arch=amd64 etc. Based on the set of supported architectures console will need to surface only those operators in the Operator Hub, which are supported on our Nodes.
AC:
@jpoulin is good to ask about heterogeneous clusters.
This enhancement Introduces support for provisioning and upgrading heterogenous architecture clusters in phases.
We need to scan through the compute nodes and build a set of supported architectures from those. Each node on the cluster has a label for architecture, e.g. `kubernetes.io/arch=arm64`, `kubernetes.io/arch=amd64`, etc. Based on the set of supported architectures, console will need to surface only those operators in the Operator Hub which are supported on our Nodes. Each operator's PackageManifest contains labels that indicate the operator's supported architectures, e.g. `operatorframework.io/arch.s390x: supported`. An operator can be supported on multiple architectures.
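For illustration, the kind of labels the console would inspect on a PackageManifest (the operator name is illustrative; the s390x label comes from the description above):
```yaml
apiVersion: packages.operators.coreos.com/v1
kind: PackageManifest
metadata:
  name: example-operator           # illustrative
  labels:
    operatorframework.io/arch.amd64: supported
    operatorframework.io/arch.arm64: supported
    operatorframework.io/arch.s390x: supported
    operatorframework.io/os.linux: supported
```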
AC:
OS and arch filtering: https://github.com/openshift/console/blob/2ad4e17d76acbe72171407fc1c66ca4596c8aac4/frontend/packages/operator-lifecycle-manager/src/components/operator-hub/operator-hub-items.tsx#L49-L86
@jpoulin is good to ask about heterogeneous clusters.
This is a follow-up Epic to https://issues.redhat.com/browse/MCO-144, which aimed to get in-place upgrades for Hypershift. This epic aims to capture additional work to focus on bringing CoreOS/OCP layering into Hypershift, which has benefits such as:
- removing or reducing the need for ignition
- maintaining feature parity between self-driving and managed OCP models
- adding additional functionality such as hotfixes
Currently not implemented, and will require the MCD hypershift mode to be adjusted to handle disruptionless upgrades like regular MCD
Right now in https://github.com/openshift/hypershift/pull/1258 you can only perform one upgrade at a time. Multiple upgrades will break due to controller logic
Properly create logic to handle manifest creation/updates and deletion, so the logic is more bulletproof
The MCO should properly report its state in a way that's consistent and able to be understood by customers, troubleshooters, and maintainers alike.
For this epic, "state" means "what is the MCO doing?" – so the goal here is to try to make sure that it's always known what the MCO is doing.
This includes:
While this probably crosses a little bit into the "status" portion of certain MCO objects, as some state is definitely recorded there, this probably shouldn't turn into a "better status reporting" epic. I'm interpreting "status" to mean "how is it going" so status is maybe a "detail attached to a state".
The current property description is:
configuration represents the current MachineConfig object for the machine config pool.
But in a 4.12.0-ec.4 cluster, the actual semantics seem to be something closer to "the most recent rendered config that we completely leveled on". We should at least update the godocs to be more specific about the intended semantics. And perhaps consider adjusting the semantics?
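As a starting point, here is a hedged sketch of what a clarified godoc could look like, based purely on the semantics described above; the surrounding type is a stand-in for the real MCO API type, and the exact wording is a proposal rather than a merged change.

```go
package mcfgv1sketch

// MachineConfigPoolStatusConfiguration stands in for the real MCO API type;
// only the godoc wording on Configuration below is the point of this sketch.
type MachineConfigPoolStatusConfiguration struct{}

// MachineConfigPoolStatus is abbreviated; other fields are elided.
type MachineConfigPoolStatus struct {
	// configuration is the most recently rendered MachineConfig that every
	// machine in the pool has fully applied ("the config the pool last
	// completely leveled on"), not simply the newest rendered config that
	// exists for the pool.
	Configuration MachineConfigPoolStatusConfiguration `json:"configuration"`
}
```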
Changes made in METAL-1 open up opportunities to improve our handling of images by cleaning up redundant code that generates extra work for the user and extra load for the cluster.
We only need to run the image cache DaemonSet if there is a QCOW URL to be mirrored (effectively this means a cluster installed with 4.9 or earlier). We can stop deploying it for new clusters installed with 4.10 or later.
Currently, the image-customization-controller relies on the image cache running on every master to provide the shared hostpath volume containing the ISO and initramfs. The first step is to replace this with a regular volume and an init container in the i-c-c pod that extracts the images from machine-os-images. We can use the copy-metal -image-build flag (instead of -all used in the shared volume) to provide only the required images.
Once i-c-c has its own volume, we can switch the image extraction in the metal3 Pod's init container to use the -pxe flag instead of -all.
The machine-os-images init container for the image cache (not the metal3 Pod) can be removed. The whole image cache deployment is now optional and need only be started if provisioningOSDownloadURL is set (and in fact should be deleted if it is not).
We plan to build Ironic Container Images using RHEL9 as base image in OCP 4.12
This is required because the ironic components have abandoned support for CentOS Stream 8 and Python 3.6/3.7 upstream during the most recent development cycle that will produce the stable Zed release, in favor of CentOS Stream 9 and Python 3.8/3.9
More info on RHEL8 to RHEL9 transition in OCP can be found at https://docs.google.com/document/d/1N8KyDY7KmgUYA9EOtDDQolebz0qi3nhT20IOn4D-xS4
update ironic software to pick up latest bug fixes
Description of the problem:
Cluster Installation fail if installation disk has lvm on raid:
Host: test-infra-cluster-3cc862c9-master-0, reached installation stage Failed: failed executing nsenter [--target 1 --cgroup --mount --ipc --pid -- mdadm --stop /dev/md0], Error exit status 1, LastOutput "mdadm: Cannot get exclusive access to /dev/md0:Perhaps a running process, mounted filesystem or active volume group?"
How reproducible:
100%
Steps to reproduce:
1. Install a cluster while master nodes has disk with LVM on RAID (reproduces using test: https://gitlab.cee.redhat.com/ocp-edge-qe/kni-assisted-installer-auto/-/blob/master/api_tests/test_disk_cleanup.py#L97)
Actual results:
Installation failed
Expected results:
Installation success
Description of the problem:
When running the assisted-installer on a machine where there is more than one volume group per physical volume, only the first volume group will be cleaned up. This leads to problems later and results in errors such as:
Failed - failed executing nsenter [--target 1 --cgroup --mount --ipc --pid -- pvremove /dev/sda -y -ff], Error exit status 5, LastOutput "Can't open /dev/sda exclusively. Mounted filesystem?
How reproducible:
Set up a VM with more than one volume group per physical volume. As an example, look at the following sample from a customer cluster.
List block devices
/usr/bin/lsblk -o NAME,MAJ:MIN,SIZE,TYPE,FSTYPE,KNAME,MODEL,UUID,WWN,HCTL,VENDOR,STATE,TRAN,PKNAME
NAME MAJ:MIN SIZE TYPE FSTYPE KNAME MODEL UUID WWN HCTL VENDOR STATE TRAN PKNAME
loop0 7:0 125.9G loop xfs loop0 c080b47b-2291-495c-8cc0-2009ebc39839
loop1 7:1 885.5M loop squashfs loop1
sda 8:0 894.3G disk sda INTEL SSDSC2KG96 0x55cd2e415235b2db 1:0:0:0 ATA running sas
|-sda1 8:1 250M part sda1 0x55cd2e415235b2db sda
|-sda2 8:2 750M part ext2 sda2 3aa73c72-e342-4a07-908c-a8a49767469d 0x55cd2e415235b2db sda
|-sda3 8:3 49G part xfs sda3 ffc3ccfe-f150-4361-8ae5-f87b17c13ac2 0x55cd2e415235b2db sda
|-sda4 8:4 394.2G part LVM2_member sda4 Ua3HOc-Olm4-1rma-q0Ug-PtzI-ZOWg-RJ63uY 0x55cd2e415235b2db sda
`-sda5 8:5 450G part LVM2_member sda5 W8JqrD-ZvaC-uNK9-Y03D-uarc-Tl4O-wkDdhS 0x55cd2e415235b2db sda
`-nova-instance 253:0 3.1T lvm ext4 dm-0 d15e2de6-2b97-4241-9451-639f7b14594e running sda5
sdb 8:16 894.3G disk sdb INTEL SSDSC2KG96 0x55cd2e415235b31b 1:0:1:0 ATA running sas
`-sdb1 8:17 894.3G part LVM2_member sdb1 6ETObl-EzTd-jLGw-zVNc-lJ5O-QxgH-5wLAqD 0x55cd2e415235b31b sdb
`-nova-instance 253:0 3.1T lvm ext4 dm-0 d15e2de6-2b97-4241-9451-639f7b14594e running sdb1
sdc 8:32 894.3G disk sdc INTEL SSDSC2KG96 0x55cd2e415235b652 1:0:2:0 ATA running sas
`-sdc1 8:33 894.3G part LVM2_member sdc1 pBuktx-XlCg-6Mxs-lddC-qogB-ahXa-Nd9y2p 0x55cd2e415235b652 sdc
`-nova-instance 253:0 3.1T lvm ext4 dm-0 d15e2de6-2b97-4241-9451-639f7b14594e running sdc1
sdd 8:48 894.3G disk sdd INTEL SSDSC2KG96 0x55cd2e41521679b7 1:0:3:0 ATA running sas
`-sdd1 8:49 894.3G part LVM2_member sdd1 exVSwU-Pe07-XJ6r-Sfxe-CQcK-tu28-Hxdnqo 0x55cd2e41521679b7 sdd
`-nova-instance 253:0 3.1T lvm ext4 dm-0 d15e2de6-2b97-4241-9451-639f7b14594e running sdd1
sr0 11:0 989M rom iso9660 sr0 Virtual CDROM0 2022-06-17-18-18-33-00 0:0:0:0 AMI running usb
Now run the assisted installer and try to install an SNO node on this machine; the installation will fail with a message indicating that it could not exclusively access /dev/sda.
Actual results:
The installation will fail with a message that indicates that it could not exclusively access /dev/sda
Expected results:
The installation should proceed and the cluster should start to install.
Suspected Cases
https://issues.redhat.com/browse/AITRIAGE-3809
https://issues.redhat.com/browse/AITRIAGE-3802
https://issues.redhat.com/browse/AITRIAGE-3810
Same issue as we've had in assisted-service: we sometimes fail to install golangci-lint by fetching release artifacts from GitHub directly. That's usually because the same IP address (the CI build cluster) accesses GitHub at a high rate, leading to 429 (Too Many Requests) responses.
The way we fixed it for assisted-service was to change the installation to use a quay.io image that already ships the binary.
Example for such a failure: https://prow.ci.openshift.org/view/gs/origin-ci-test/pr-logs/pull/openshift_release/30788/rehearse-30788-periodic-ci-openshift-assisted-installer-agent-release-ocm-2.6-subsystem-test-periodic/1551879759036682240
Filter for all recent failures: https://search.ci.openshift.org/?search=golangci%2Fgolangci-lint+crit+unable+to+find&maxAge=168h&context=1&type=build-log&name=.*assisted.*&excludeName=&maxMatches=5&maxBytes=20971520&groupBy=job
Section 5 of PRD: https://docs.google.com/document/d/1fF-Ajdzc9EDDg687FzTrX577hvY9NdK0/edit#heading=h.gjdgxs
Testing and collaboration with NVIDIA: https://docs.google.com/spreadsheets/d/1LHY-Af-2kQHVwtW4aVdHnmwZLTiatiyf-ySffC8O5NM/edit#gid=0
Deploying Nvidia Patches: https://docs.google.com/document/d/1yR4lphjPKd6qZ9sGzZITl0wH1r4ykfMKPjUnlzvWji4/edit#
This is the continuation of https://issues.redhat.com/browse/NHE-273, but now the focus is on the remaining flows.
Description of problem:
check_pkt_length cannot be offloaded without 1) sFlow offload patches in Open vSwitch and 2) hardware driver support. Since 1) will not be done anytime soon, we need a workaround for the check_pkt_length issue.
Version-Release number of selected component (if applicable):
4.11/4.12
How reproducible:
Always
Steps to Reproduce:
1. Any flow that has check_pkt_len():
- 5-b: Pod -> NodePort Service traffic (Pod Backend - Different Node)
- 6-b: Pod -> NodePort Service traffic (Host Backend - Different Node)
- 4-b: Pod -> Cluster IP Service traffic (Host Backend - Different Node)
- 10-b: Host Pod -> Cluster IP Service traffic (Host Backend - Different Node)
- 11-b: Host Pod -> NodePort Service traffic (Pod Backend - Different Node)
- 12-b: Host Pod -> NodePort Service traffic (Host Backend - Different Node)
Actual results:
Poor performance due to upcalls when check_pkt_len() is not supported.
Expected results:
Good performance.
Additional info:
https://docs.google.com/spreadsheets/d/1LHY-Af-2kQHVwtW4aVdHnmwZLTiatiyf-ySffC8O5NM/edit#gid=670206692
Make sure that the CSI driver automatically updates oVirt credentials when they are updated in OpenShift.
In the CSI driver operator we should add the `withSecretHashAnnotation` call from library-go, like this: https://github.com/openshift/aws-ebs-csi-driver-operator/blob/53ed27b2a0eaa655338da180a79897855b366ac7/pkg/operator/starter.go#L138
We have been running into a number of problems with configure-ovs and nodeip-configuration selecting different interfaces in OVNK deployments. This causes connectivity issues, so we need some way to ensure that everything uses the same interface/IP.
Currently configure-ovs runs before nodeip-configuration, but since nodeip-configuration is the source of truth for IP selection regardless of CNI plugin, I think we need to look at swapping that order. That way configure-ovs could look at what nodeip-configuration chose and not have to implement its own interface selection logic.
I'm targeting this at 4.12 because even though there's probably still time to get it in for 4.11, changing the order of boot services is always a little risky and I'd prefer to do it earlier in the cycle so we have time to tease out any issues that arise. We may need to consider backporting the change though since this has been an issue at least back to 4.10.
As an admin, I would like openshift-* namespaces with an operator to be labeled with security.openshift.io/scc.podSecurityLabelSync=true to ensure the continual functioning of operators without manual intervention. The label should only be applied to openshift-* namespaces with an operator (the presence of a ClusterServiceVersion resource) IF the label is not already present. This automation will help smooth functioning of the cluster and avoid frivolous operational events.
Context: As part of the PSA migration period, Openshift will ship with the "label sync'er" - a controller that will automatically adjust PSA security profiles in response to the workloads present in the namespace. We can assume that not all operators (produced by Red Hat, the community or ISVs) will have successfully migrated their deployments in response to upstream PSA changes. The label sync'er will sync, by default, any namespace not prefixed with "openshift-", of which an explicit label (security.openshift.io/scc.podSecurityLabelSync=true) is required for sync.
A/C:
- OLM operator has been modified (downstream only) to label any unlabelled "openshift-" namespace in which a CSV has been created
- If a labelled namespace containing at least one non-copied CSV becomes unlabelled, it should be relabelled
- The implementation should be done in a way to eliminate or minimize subsequent downstream sync work (it is ok to make slight architectural changes to the OLM operator in the upstream to enable this)
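A minimal sketch of the labelling rule in the A/C above, written with client-go; this is an illustration of the intended behaviour, not the actual downstream OLM change, and the function names and the way a CSV is detected are assumptions.

```go
package labelsync

import (
	"context"
	"strings"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

const psaSyncLabel = "security.openshift.io/scc.podSecurityLabelSync"

// ensurePSALabel labels ns with the PSA sync opt-in if it is an "openshift-"
// namespace, hosts at least one CSV (hasCSV is determined by the caller, e.g.
// via an OLM lister), and does not already carry the label either way.
func ensurePSALabel(ctx context.Context, client kubernetes.Interface, ns *corev1.Namespace, hasCSV bool) error {
	if !strings.HasPrefix(ns.Name, "openshift-") || !hasCSV {
		return nil
	}
	if _, exists := ns.Labels[psaSyncLabel]; exists {
		return nil // never overwrite an explicit choice already set on the namespace
	}
	updated := ns.DeepCopy()
	if updated.Labels == nil {
		updated.Labels = map[string]string{}
	}
	updated.Labels[psaSyncLabel] = "true"
	_, err := client.CoreV1().Namespaces().Update(ctx, updated, metav1.UpdateOptions{})
	return err
}
```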
Goal
Provide an indication that advanced features are used
Problem
Today, customers and RH don't have the information on the actual usage of advanced features.
Why is this important?
Prioritized Scenarios
In Scope
1. Add a boolean variable in our telemetry to mark if the customer is using advanced features (PV encryption, encryption with KMS, external mode).
Not in Scope
Integrate with subscription watch - will be done by the subscription watch team with our help.
Customers
All
Customer Facing Story
As a compliance manager, I should be able to easily see if all my clusters are using the right number of subscriptions
What does success look like?
A clear indication in subscription watch for ODF usage (either essential or advanced).
1. Proposed title of this feature request
2. What is the nature and description of the request?
3. Why does the customer need this? (List the business requirements here)
4. List any affected packages or components.
_____________________
Link to main epic: https://issues.redhat.com/browse/RHSTOR-3173
We migrated most components as part of https://issues.redhat.com/browse/RHSTOR-2165
We now have a few components remaining, roughly 15 to 20%. This epic targets:
1) Add support for in-tree modal launcher
This epic tracks network tooling improvements for 4.12
A new framework and process should be developed to make sharing network tools with devs, support, and customers convenient. We are going to add some tools for OVN troubleshooting before ovn-k goes default, some tools that we got from customer cases, and some more to help analyze and debug collected logs based on the stable must-gather/sosreport format we now get thanks to the 4.11 Epic.
Our estimation for this Epic is 1 engineer * 2 Sprints
WHY:
This epic is important to help improve the time it takes our customers and our team to understand an issue within the cluster.
A focus of this epic is to develop tools to quickly allow debugging of a problematic cluster. This is crucial for the engineering team to help us scale. We want to provide a tool to our customers to help lower the cognitive burden to get at a root cause of an issue.
Alert if any of the ovn-controllers has been disconnected from the southbound database for a period of time, using the metric ovn_controller_southbound_database_connected.
The metric updates every 2 minutes so please be mindful of this when creating the alert.
If the controller is disconnected for 10 minutes, fire an alert.
DoD: Merged to CNO and tested by QE
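A minimal sketch of the alerting rule described above, expressed with the prometheus-operator Go types that CNO-style operators typically use to template PrometheusRule objects. The alert name, severity, and the exact PromQL expression are assumptions; in particular, the expression assumes the metric reports 1 when connected and 0 when disconnected.

```go
package alerts

import (
	monitoringv1 "github.com/prometheus-operator/prometheus-operator/pkg/apis/monitoring/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

// southboundDisconnectedRule sketches the alert: the metric only updates every
// ~2 minutes, so a 10m "for" window tolerates a few missed updates before firing.
func southboundDisconnectedRule() monitoringv1.Rule {
	return monitoringv1.Rule{
		Alert: "OVNControllerSouthboundDatabaseDisconnected", // hypothetical alert name
		Expr:  intstr.FromString(`ovn_controller_southbound_database_connected == 0`),
		For:   "10m",
		Labels: map[string]string{
			"severity": "warning",
		},
		Annotations: map[string]string{
			"summary": "ovn-controller has been disconnected from the southbound database for at least 10 minutes.",
		},
	}
}
```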
This Epic is here to track the rebase we need to do when kube 1.25 is GA https://www.kubernetes.dev/resources/release/
Keeping this in mind can help us plan our time better. At the time of writing, GA is planned for August 23.
https://docs.google.com/document/d/1h1XsEt1Iug-W9JRheQas7YRsUJ_NQ8ghEMVmOZ4X-0s/edit --> this is the link for rebase help
We need to rebase cloud network config controller to 1.25 when the kube 1.25 rebase lands.
This section includes Jira cards that are not linked to either an Epic or a Feature. These tickets were completed when this image was assembled
Description of problem:
oc --context build02 get clusterversion
NAME      VERSION       AVAILABLE   PROGRESSING   SINCE   STATUS
version   4.12.0-ec.1   True        False         45h     Error while reconciling 4.12.0-ec.1: the cluster operator kube-controller-manager is degraded

oc --context build02 get co kube-controller-manager
NAME                      VERSION       AVAILABLE   PROGRESSING   DEGRADED   SINCE   MESSAGE
kube-controller-manager   4.12.0-ec.1   True        False         True       2y87d   GarbageCollectorDegraded: error fetching rules: Get "https://thanos-querier.openshift-monitoring.svc:9091/api/v1/rules": dial tcp 172.30.153.28:9091: connect: cannot assign requested address
Version-Release number of selected component (if applicable):
How reproducible:
Steps to Reproduce:
1.
2.
3.
Actual results:
Expected results:
Additional info:
build02 is a build farm cluster in CI production.
I can provide credentials to access the cluster if needed.
Description of problem:
Agent-based installation fails during the 3+1 deployment. I found that the machine-api-operator is degraded because the minimum worker replica count is 2, while a 3+1 deployment defines only one worker node.
Version-Release number of selected component (if applicable):
How reproducible:
Always
Steps to Reproduce:
1. Create agent.iso (openshift-install agent create image) using install-config.yaml and agent-config.yaml (PFA sample files)
2. Deploy a 3+1 cluster using agent.iso
3. Execute "openshift-install agent wait-for install-complete" command to wait for install complete.
Actual results:
Getting below error:
ERROR Cluster operator kube-controller-manager Degraded is True with GarbageCollector_Error: GarbageCollectorDegraded: error fetching rules: Get "https://thanos-querier.openshift-monitoring.svc:9091/api/v1/rules": dial tcp: lookup thanos-querier.openshift-monitoring.svc on 172.30.0.10:53: no such host
INFO Cluster operator machine-api Progressing is True with SyncingResources: Progressing towards operator: 4.12.0-0.nightly-2022-10-05-053337
ERROR Cluster operator machine-api Degraded is True with SyncingFailed: Failed when progressing towards operator: 4.12.0-0.nightly-2022-10-05-053337 because minimum worker replica count (2) not yet met: current running replicas 1, waiting for []
INFO Cluster operator machine-api Available is False with Initializing: Operator is initializing
INFO Cluster operator monitoring Available is False with UpdatingPrometheusOperatorFailed: Rollout of the monitoring stack failed and is degraded. Please investigate the degraded status error.
ERROR Cluster operator monitoring Degraded is True with UpdatingPrometheusOperatorFailed: Failed to rollout the stack. Error: updating prometheus operator: reconciling Prometheus Operator Admission Webhook Deployment failed: updating Deployment object failed: waiting for DeploymentRollout of openshift-monitoring/prometheus-operator-admission-webhook: got 1 unavailable replicas
INFO Cluster operator monitoring Progressing is True with RollOutInProgress: Rolling out the stack.
INFO Cluster operator network ManagementStateDegraded is False with :
ERROR Cluster initialization failed because one or more operators are not functioning properly.
ERROR The cluster should be accessible for troubleshooting as detailed in the documentation linked below,
ERROR https://docs.openshift.com/container-platform/latest/support/troubleshooting/troubleshooting-installations.html
Expected results:
3+1 deployment should be successful.
Additional info:
I found that there is a condition in the machine-api-operator that checks that the worker node count is at least 2, which is preventing the 3+1 deployment: https://github.com/openshift/machine-api-operator/blob/master/pkg/operator/sync.go#L322
This is a clone of issue OCPBUGS-1327. The following is the description of the original issue:
—
See this comment for some updated information
—
Description of problem:
During IPI installation on IBM Cloud (x86_64), some of the worker machines have been seen to have no network connectivity during their initial bootup. Investigations were performed with IBM Cloud VPC to attempt to identify the issue, but in all appearances, all virtualization appears to be working.
Unfortunately, due to this issue there is no network traffic and no access to these worker machines to help identify the problem (Ignition is stuck without network traffic), so no SSH or console login is available to collect logs or perform any testing on these machines.
The only content available is the console output, showing ignition is stuck due to the network issue.
Version-Release number of selected component (if applicable):
4.12.0
How reproducible:
About 60%
Steps to Reproduce:
1. Create an IPI cluster on IBM Cloud
2. Wait for the worker machines to be provisioned, causing IPI to fail waiting on machine-api operator
3. Check console of worker machines failing to report in to cluster (in this case 2 of 3 failed)
Actual results:
IPI creation failed waiting on machine-api operator to complete all worker node deployment
Expected results:
Successful IPI creation on IBM Cloud
Additional info:
As stated, investigation was performed by IBM Cloud VPC, but no further investigation could be performed since no access to these worker machines is available. Any further details that could be provided to help identify the issue would be helpful.
This appears to have become more prominent recently as well, causing concern for IBM Cloud's IPI GA support on the 4.12 release.
The only solution to restore network connectivity is rebooting the machine, which loses ignition bring up (I assume it must be triggered manually now), and in the case of IPI, isn't a great mitigation.
libovsdb builds transaction log messages for every transaction and then throws them away if the log level is not 4 or above. This wastes a bunch of CPU at scale and increases pod ready latency.
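A sketch of the general fix pattern, using logr purely for illustration (libovsdb's actual logging plumbing may differ, and the types here are made up for the example): guard the expensive construction of the transaction message behind a verbosity check instead of always formatting it and letting the logger drop it.

```go
package txnlog

import (
	"fmt"

	"github.com/go-logr/logr"
)

// Operation is a stand-in for a libovsdb transaction operation.
type Operation struct {
	Op    string
	Table string
}

// logTransaction only builds the per-transaction message when verbosity level
// 4 is actually enabled; formatting it for every transaction is the work that
// wastes CPU at scale and adds to pod ready latency.
func logTransaction(log logr.Logger, ops []Operation) {
	if !log.V(4).Enabled() {
		return
	}
	msg := ""
	for _, op := range ops {
		msg += fmt.Sprintf("%s %s; ", op.Op, op.Table)
	}
	log.V(4).Info("transaction", "operations", msg)
}
```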
As reported in https://issues.redhat.com/browse/TRT-468 and discussed on Slack, high disruption times to console/oauth/registry during upgrade in AWS were reported after we merged the code in downstream ovn-k where EndpointSlices were introduced for the ovn-k node.
The endpointslice code was reverted upstream and downstream to allow time for investigating this issue.
Upstream tracking issue: https://github.com/ovn-org/ovn-kubernetes/issues/3116
Description of problem:
The OCP v4.9.31 cluster didn't have the search domain in /etc/resolv.conf, which was present in the v4.8.29 OCP cluster. This was observed on all nodes of the v4.9.31 cluster.
~~~
OpenShift 4.9.31
sh-4.4# cat /etc/resolv.conf
OpenShift 4.8.29
ENV: OpenStack IAD2, IPI installation. Connected cluster.
Version-Release number of selected component (if applicable):
OCP v4.9.31
How reproducible:
Always
Steps to Reproduce:
1. Install IPI cluster on OpenStack IAD2 platform having cluster version 4.9.31
2. Debug to any of the node(master/worker)
3. Check and confirm the missing search domain on all nodes of the cluster.
Actual results:
The search domain was missing when checked in `/etc/resolv.conf` file on all nodes of the cluster causing serious issues in the cluster.
Expected results:
The installer should embed the search domain in /etc/resolv.conf file on all nodes of the cluster.
Additional info:
set -eo pipefail
DISPATCHER_FILE="/etc/NetworkManager/dispatcher.d/30-resolv-prepender"
DOMAINS="$(grep -E '\s*DOMAINS=.*iad2.dc.paas.redhat.com' $DISPATCHER_FILE \
grep -oE '[a-z0-9]*.dev.iad2.dc.paas.redhat.com' \ |
tr '\n' ' ')" |
>&2 echo "IT-PaaS: overwriting search domains in /etc/resolv.conf with: $DOMAINS"
sed -e "/^search/d" \
-e "/Generated by/c# Generated by KNI resolv prepender NM dispatcher script \nsearch $DOMAINS" \
/etc/resolv.conf > /etc/resolv.tmp
mv /etc/resolv.tmp /etc/resolv.conf
~~~
Description of problem:
With "createFirewallRules: Enabled", after successful "create cluster" and then "destroy cluster", the created firewall-rules in the shared VPC are not deleted.
Version-Release number of selected component (if applicable):
$ ./openshift-install version
./openshift-install 4.12.0-0.nightly-2022-09-28-204419
built from commit 9eb0224926982cdd6cae53b872326292133e532d
release image registry.ci.openshift.org/ocp/release@sha256:2c8e617830f84ac1ee1bfcc3581010dec4ae5d9cad7a54271574e8d91ef5ecbc
release architecture amd64
How reproducible:
Always
Steps to Reproduce:
1. Try IPI installation with "createFirewallRules: Enabled", which succeeded
2. Try destroying the cluster, which succeeded
3. Check firewall-rules in the shared VPC
Actual results:
After destroying the cluster, its firewall-rules created by installer in the shared VPC are not deleted.
Expected results:
Those firewall-rules should be deleted during destroying the cluster.
Additional info:
$ gcloud --project openshift-qe-shared-vpc compute firewall-rules list --filter='network=installer-shared-vpc' NAME NETWORK DIRECTION PRIORITY ALLOW DENY DISABLED ci-op-xpn-ingress-common installer-shared-vpc INGRESS 60000 tcp:6443,tcp:22,tcp:80,tcp:443,icmp False ci-op-xpn-ingress-health-checks installer-shared-vpc INGRESS 60000 tcp:30000-32767,udp:30000-32767,tcp:6080,tcp:6443,tcp:226 24,tcp:32335 False ci-op-xpn-ingress-internal-network installer-shared-vpc INGRESS 60000 udp:4789,udp:6081,udp:500,udp:4500,esp,tcp:9000-9999,udp: 9000-9999,tcp:10250,tcp:30000-32767,udp:30000-32767,tcp:10257,tcp:10259,tcp:22623,tcp:2379-2380 FalseTo show all fields of the firewall, please show in JSON format: --format=json To show all fields in table format, please see the examples in --help. $ $ yq-3.3.0 r test2/install-config.yaml platform gcp: projectID: openshift-qe region: us-central1 computeSubnet: installer-shared-vpc-subnet-2 controlPlaneSubnet: installer-shared-vpc-subnet-1 createFirewallRules: Enabled network: installer-shared-vpc networkProjectID: openshift-qe-shared-vpc $ $ yq-3.3.0 r test2/install-config.yaml metadata creationTimestamp: null name: jiwei-1013-01 $ $ openshift-install create cluster --dir test2 INFO Credentials loaded from file "/home/fedora/.gcp/osServiceAccount.json" INFO Consuming Install Config from target directory INFO Creating infrastructure resources... INFO Waiting up to 20m0s (until 4:06AM) for the Kubernetes API at https://api.jiwei-1013-01.qe.gcp.devcluster.openshift.com:6443... INFO API v1.24.0+8c7c967 up INFO Waiting up to 30m0s (until 4:20AM) for bootstrapping to complete... INFO Destroying the bootstrap resources... INFO Waiting up to 40m0s (until 4:42AM) for the cluster at https://api.jiwei-1013-01.qe.gcp.devcluster.openshift.com:6443 to initialize... INFO Checking to see if there is a route at openshift-console/console... INFO Install complete! 
INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/fedora/test2/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.jiwei-1013-01.qe.gcp.devcluster.openshift.com INFO Login to the console with user: "kubeadmin", and password: "wWPkc-8G2Lw-xe2Vw-DgWha" INFO Time elapsed: 39m14s $ $ openshift-install destroy cluster --dir test2 INFO Credentials loaded from file "/home/fedora/.gcp/osServiceAccount.json" INFO Stopped instance jiwei-1013-01-464st-worker-b-pmg5z INFO Stopped instance jiwei-1013-01-464st-worker-a-csg2j INFO Stopped instance jiwei-1013-01-464st-master-1 INFO Stopped instance jiwei-1013-01-464st-master-2 INFO Stopped instance jiwei-1013-01-464st-master-0 INFO Deleted 2 recordset(s) in zone qe INFO Deleted 3 recordset(s) in zone jiwei-1013-01-464st-private-zone INFO Deleted DNS zone jiwei-1013-01-464st-private-zone INFO Deleted bucket jiwei-1013-01-464st-image-registry-us-central1-ulgxgjfqxbdnrhd INFO Deleted instance jiwei-1013-01-464st-master-0 INFO Deleted instance jiwei-1013-01-464st-worker-a-csg2j INFO Deleted instance jiwei-1013-01-464st-master-1 INFO Deleted instance jiwei-1013-01-464st-worker-b-pmg5z INFO Deleted instance jiwei-1013-01-464st-master-2 INFO Deleted disk jiwei-1013-01-464st-master-2 INFO Deleted disk jiwei-1013-01-464st-master-1 INFO Deleted disk jiwei-1013-01-464st-worker-b-pmg5z INFO Deleted disk jiwei-1013-01-464st-master-0 INFO Deleted disk jiwei-1013-01-464st-worker-a-csg2j INFO Deleted address jiwei-1013-01-464st-cluster-public-ip INFO Deleted address jiwei-1013-01-464st-cluster-ip INFO Deleted forwarding rule a516d89f9a4f14bdfb55a525b1a12a91 INFO Deleted forwarding rule jiwei-1013-01-464st-api INFO Deleted forwarding rule jiwei-1013-01-464st-api-internal INFO Deleted target pool a516d89f9a4f14bdfb55a525b1a12a91 INFO Deleted target pool jiwei-1013-01-464st-api INFO Deleted backend service jiwei-1013-01-464st-api-internal INFO Deleted instance group jiwei-1013-01-464st-master-us-central1-a INFO Deleted instance group jiwei-1013-01-464st-master-us-central1-c INFO Deleted instance group jiwei-1013-01-464st-master-us-central1-b INFO Deleted health check jiwei-1013-01-464st-api-internal INFO Deleted HTTP health check a516d89f9a4f14bdfb55a525b1a12a91 INFO Deleted HTTP health check jiwei-1013-01-464st-api INFO Time elapsed: 4m18s $ $ gcloud --project openshift-qe-shared-vpc compute firewall-rules list --filter='network=installer-shared-vpc' NAME NETWORK DIRECTION PRIORITY ALLOW DENY DISABLED ci-op-xpn-ingress-common installer-shared-vpc INGRESS 60000 tcp:6443,tcp:22,tcp:80,tcp:443,icmp False ci-op-xpn-ingress-health-checks installer-shared-vpc INGRESS 60000 tcp:30000-32767,udp:30000-32767,tcp:6080,tcp:6443,tcp:22624,tcp:32335 False ci-op-xpn-ingress-internal-network installer-shared-vpc INGRESS 60000 udp:4789,udp:6081,udp:500,udp:4500,esp,tcp:9000-9999,udp:9000-9999,tcp:10250,tcp:30000-32767,udp:30000-32767,tcp:10257,tcp:10259,tcp:22623,tcp:2379-2380 False jiwei-1013-01-464st-api installer-shared-vpc INGRESS 1000 tcp:6443 False jiwei-1013-01-464st-control-plane installer-shared-vpc INGRESS 1000 tcp:22623,tcp:10257,tcp:10259 False jiwei-1013-01-464st-etcd installer-shared-vpc INGRESS 1000 tcp:2379-2380 False jiwei-1013-01-464st-health-checks installer-shared-vpc INGRESS 1000 tcp:6080,tcp:6443,tcp:22624 False jiwei-1013-01-464st-internal-cluster installer-shared-vpc INGRESS 1000 
tcp:30000-32767,udp:9000-9999,udp:30000-32767,udp:4789,udp:6081,tcp:9000-9999,udp:500,udp:4500,esp,tcp:10250 False jiwei-1013-01-464st-internal-network installer-shared-vpc INGRESS 1000 icmp,tcp:22 False k8s-a516d89f9a4f14bdfb55a525b1a12a91-http-hc installer-shared-vpc INGRESS 1000 tcp:30268 False k8s-fw-a516d89f9a4f14bdfb55a525b1a12a91 installer-shared-vpc INGRESS 1000 tcp:80,tcp:443 FalseTo show all fields of the firewall, please show in JSON format: --format=json To show all fields in table format, please see the examples in --help. $ FYI manually deleting those firewall-rules in the shared VPC does work. $ gcloud --project openshift-qe-shared-vpc compute firewall-rules delete -q jiwei-1013-01-464st-api Deleted [https://www.googleapis.com/compute/v1/projects/openshift-qe-shared-vpc/global/firewalls/jiwei-1013-01-464st-api]. $ gcloud --project openshift-qe-shared-vpc compute firewall-rules delete -q jiwei-1013-01-464st-control-plane Deleted [https://www.googleapis.com/compute/v1/projects/openshift-qe-shared-vpc/global/firewalls/jiwei-1013-01-464st-control-plane]. $ gcloud --project openshift-qe-shared-vpc compute firewall-rules delete -q jiwei-1013-01-464st-etcd Deleted [https://www.googleapis.com/compute/v1/projects/openshift-qe-shared-vpc/global/firewalls/jiwei-1013-01-464st-etcd]. $ gcloud --project openshift-qe-shared-vpc compute firewall-rules delete -q jiwei-1013-01-464st-health-checks Deleted [https://www.googleapis.com/compute/v1/projects/openshift-qe-shared-vpc/global/firewalls/jiwei-1013-01-464st-health-checks]. $ gcloud --project openshift-qe-shared-vpc compute firewall-rules delete -q jiwei-1013-01-464st-internal-cluster Deleted [https://www.googleapis.com/compute/v1/projects/openshift-qe-shared-vpc/global/firewalls/jiwei-1013-01-464st-internal-cluster]. $ gcloud --project openshift-qe-shared-vpc compute firewall-rules delete -q jiwei-1013-01-464st-internal-network Deleted [https://www.googleapis.com/compute/v1/projects/openshift-qe-shared-vpc/global/firewalls/jiwei-1013-01-464st-internal-network]. $ gcloud --project openshift-qe-shared-vpc compute firewall-rules delete -q k8s-a516d89f9a4f14bdfb55a525b1a12a91-http-hc Deleted [https://www.googleapis.com/compute/v1/projects/openshift-qe-shared-vpc/global/firewalls/k8s-a516d89f9a4f14bdfb55a525b1a12a91-http-hc]. $ gcloud --project openshift-qe-shared-vpc compute firewall-rules delete -q k8s-fw-a516d89f9a4f14bdfb55a525b1a12a91 Deleted [https://www.googleapis.com/compute/v1/projects/openshift-qe-shared-vpc/global/firewalls/k8s-fw-a516d89f9a4f14bdfb55a525b1a12a91]. $ $ gcloud --project openshift-qe-shared-vpc compute firewall-rules list --filter='network=installer-shared-vpc' NAME NETWORK DIRECTION PRIORITY ALLOW DENY DISABLED ci-op-xpn-ingress-common installer-shared-vpc INGRESS 60000 tcp:6443,tcp:22,tcp:80,tcp:443,icmp False ci-op-xpn-ingress-health-checks installer-shared-vpc INGRESS 60000 tcp:30000-32767,udp:30000-32767,tcp:6080,tcp:6443,tcp:22624,tcp:32335 False ci-op-xpn-ingress-internal-network installer-shared-vpc INGRESS 60000 udp:4789,udp:6081,udp:500,udp:4500,esp,tcp:9000-9999,udp:9000-9999,tcp:10250,tcp:30000-32767,udp:30000-32767,tcp:10257,tcp:10259,tcp:22623,tcp:2379-2380 FalseTo show all fields of the firewall, please show in JSON format: --format=json To show all fields in table format, please see the examples in --help. $
Description of problem:
openshift-apiserver, openshift-oauth-apiserver and kube-apiserver pods cannot validate the certificate when trying to reach etcd, reporting certificate validation errors:
W1018 11:36:43.523673 15 logging.go:59] [core] [Channel #186 SubChannel #187] grpc: addrConn.createTransport failed to connect to { "Addr": "[2620:52:0:198::10]:2379", "ServerName": "2620:52:0:198::10", "Attributes": null, "BalancerAttributes": null, "Type": 0, "Metadata": null }. Err: connection error: desc = "transport: authentication handshake failed: x509: certificate is valid for ::1, 127.0.0.1, ::1, fd69::2, not 2620:52:0:198::10"
Version-Release number of selected component (if applicable):
4.12.0-0.nightly-2022-10-18-041406
How reproducible:
100%
Steps to Reproduce:
1. Deploy SNO with single stack IPv6 via ZTP procedure
Actual results:
Deployment times out and some of the operators aren't deployed successfully. NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE MESSAGE authentication 4.12.0-0.nightly-2022-10-18-041406 False False True 124m APIServerDeploymentAvailable: no apiserver.openshift-oauth-apiserver pods available on any node.... baremetal 4.12.0-0.nightly-2022-10-18-041406 True False False 112m cloud-controller-manager 4.12.0-0.nightly-2022-10-18-041406 True False False 111m cloud-credential 4.12.0-0.nightly-2022-10-18-041406 True False False 115m cluster-autoscaler 4.12.0-0.nightly-2022-10-18-041406 True False False 111m config-operator 4.12.0-0.nightly-2022-10-18-041406 True False False 124m console control-plane-machine-set 4.12.0-0.nightly-2022-10-18-041406 True False False 111m csi-snapshot-controller 4.12.0-0.nightly-2022-10-18-041406 True False False 111m dns 4.12.0-0.nightly-2022-10-18-041406 True False False 111m etcd 4.12.0-0.nightly-2022-10-18-041406 True False True 121m ClusterMemberControllerDegraded: could not get list of unhealthy members: giving up getting a cached client after 3 tries image-registry 4.12.0-0.nightly-2022-10-18-041406 False True True 104m Available: The registry is removed... ingress 4.12.0-0.nightly-2022-10-18-041406 True True True 111m The "default" ingress controller reports Degraded=True: DegradedConditions: One or more other status conditions indicate a degraded state: DeploymentReplicasAllAvailable=False (DeploymentReplicasNotAvailable: 0/1 of replicas are available) insights 4.12.0-0.nightly-2022-10-18-041406 True False False 118s kube-apiserver 4.12.0-0.nightly-2022-10-18-041406 True False False 102m kube-controller-manager 4.12.0-0.nightly-2022-10-18-041406 True False True 107m GarbageCollectorDegraded: error fetching rules: Get "https://thanos-querier.openshift-monitoring.svc:9091/api/v1/rules": dial tcp [fd02::3c5f]:9091: connect: connection refused kube-scheduler 4.12.0-0.nightly-2022-10-18-041406 True False False 107m kube-storage-version-migrator 4.12.0-0.nightly-2022-10-18-041406 True False False 117m machine-api 4.12.0-0.nightly-2022-10-18-041406 True False False 111m machine-approver 4.12.0-0.nightly-2022-10-18-041406 True False False 111m machine-config 4.12.0-0.nightly-2022-10-18-041406 True False False 115m marketplace 4.12.0-0.nightly-2022-10-18-041406 True False False 116m monitoring False True True 98m deleting Thanos Ruler Route failed: Timeout: request did not complete within requested timeout - context deadline exceeded, deleting UserWorkload federate Route failed: Timeout: request did not complete within requested timeout - context deadline exceeded, reconciling Alertmanager Route failed: retrieving Route object failed: the server was unable to return a response in the time allotted, but may still be processing the request (get routes.route.openshift.io alertmanager-main), reconciling Thanos Querier Route failed: retrieving Route object failed: the server was unable to return a response in the time allotted, but may still be processing the request (get routes.route.openshift.io thanos-querier), reconciling Prometheus API Route failed: retrieving Route object failed: the server was unable to return a response in the time allotted, but may still be processing the request (get routes.route.openshift.io prometheus-k8s), prometheuses.monitoring.coreos.com "k8s" not found network 4.12.0-0.nightly-2022-10-18-041406 True False False 124m node-tuning 4.12.0-0.nightly-2022-10-18-041406 True False False 111m openshift-apiserver 4.12.0-0.nightly-2022-10-18-041406 
True False False 104m openshift-controller-manager 4.12.0-0.nightly-2022-10-18-041406 True False False 107m openshift-samples False True False 103m The error the server was unable to return a response in the time allotted, but may still be processing the request (get imagestreams.image.openshift.io) during openshift namespace cleanup has left the samples in an unknown state operator-lifecycle-manager 4.12.0-0.nightly-2022-10-18-041406 True False False 111m operator-lifecycle-manager-catalog 4.12.0-0.nightly-2022-10-18-041406 True False False 111m operator-lifecycle-manager-packageserver 4.12.0-0.nightly-2022-10-18-041406 True False False 106m service-ca 4.12.0-0.nightly-2022-10-18-041406 True False False 124m storage 4.12.0-0.nightly-2022-10-18-041406 True False False 111m
Expected results:
Deployment succeeds without issues.
Additional info:
I was unable to run must-gather so attaching the pods logs copied from the host file system.
This is a clone of issue OCPBUGS-860. The following is the description of the original issue:
—
Description of problem:
In GCP, once an external IP address is assigned to a master/infra node through the GCP console, the number of pending CSRs from kubernetes.io/kubelet-serving keeps increasing, and the following errors are reported:
I0902 10:48:29.254427 1 controller.go:121] Reconciling CSR: csr-q7bwd
I0902 10:48:29.365774 1 csr_check.go:157] csr-q7bwd: CSR does not appear to be client csr
I0902 10:48:29.371827 1 csr_check.go:545] retrieving serving cert from build04-c92hb-master-1.c.openshift-ci-build-farm.internal (10.0.0.5:10250)
I0902 10:48:29.375052 1 csr_check.go:188] Found existing serving cert for build04-c92hb-master-1.c.openshift-ci-build-farm.internal
I0902 10:48:29.375152 1 csr_check.go:192] Could not use current serving cert for renewal: CSR Subject Alternate Name values do not match current certificate
I0902 10:48:29.375166 1 csr_check.go:193] Current SAN Values: [build04-c92hb-master-1.c.openshift-ci-build-farm.internal 10.0.0.5], CSR SAN Values: [build04-c92hb-master-1.c.openshift-ci-build-farm.internal 10.0.0.5 35.211.234.95]
I0902 10:48:29.375175 1 csr_check.go:202] Falling back to machine-api authorization for build04-c92hb-master-1.c.openshift-ci-build-farm.internal
E0902 10:48:29.375184 1 csr_check.go:420] csr-q7bwd: IP address '35.211.234.95' not in machine addresses: 10.0.0.5
I0902 10:48:29.375193 1 csr_check.go:205] Could not use Machine for serving cert authorization: IP address '35.211.234.95' not in machine addresses: 10.0.0.5
I0902 10:48:29.379457 1 csr_check.go:218] Falling back to serving cert renewal with Egress IP checks
I0902 10:48:29.382668 1 csr_check.go:221] Could not use current serving cert and egress IPs for renewal: CSR Subject Alternate Names includes unknown IP addresses
I0902 10:48:29.382702 1 controller.go:233] csr-q7bwd: CSR not authorized
Version-Release number of selected component (if applicable):
4.11.2
Steps to Reproduce:
1. Assign external IPs to master/infra node in GCP
2. oc get csr | grep kubernetes.io/kubelet-serving
Actual results:
CSRs are not approved
Expected results:
CSRs are approved
Additional info:
This issue is only happen in GCP. Same OpenShift installations in AWS do not have this issue. It looks like the CSR are created using external IP addresses once assigned. Ref: https://coreos.slack.com/archives/C03KEQZC1L2/p1662122007083059
https://github.com/openshift/api/pull/1213 and https://github.com/openshift/api/pull/1202 have been merged, but the latest 4.12 OCP clusters do not show the changes.
According to https://github.com/openshift/console-operator/blob/bd2a7c9077ccf214dd8a725a7660e86d96e045b0/Dockerfile.rhel7#L18-L23, we need to vendor openshift/api in the console-operator repo so that the latest manifests get applied.
Description of problem:
The field networks is unset in the topology of each failureDomain, while platform.vsphere.vcenters is defined.
in install-config.yaml:
vcenters:
- server: xxx
  user: xxx
  password: xxx
  datacenters:
  - IBMCloud
  - datacenter-2
failureDomains:
- name: us-east-1
  region: us-east
  zone: us-east-1a
  topology:
    datacenter: IBMCloud
    computeCluster: /IBMCloud/host/vcs-mdcnc-workload-2
    datastore: multi-zone-ds-shared
    server: ibmvcenter.vmc-ci.devcluster.openshift.com
- name: us-east-2
  region: us-east
  zone: us-east-2a
  topology:
    datacenter: IBMCloud
    computeCluster: /IBMCloud/host/vcs-mdcnc-workload-2
    datastore: multi-zone-ds-shared
    server: ibmvcenter.vmc-ci.devcluster.openshift.com
- name: us-east-3
Launching the installer to create a cluster results in a panic:
sh-4.4$ ./openshift-install create cluster --dir ipi --log-level debug
DEBUG OpenShift Installer 4.12.0-0.nightly-2022-09-25-071630
DEBUG Built from commit 1fb1397635c89ff8b3645fed4c4c264e4119fa84
DEBUG Fetching Metadata...
...
DEBUG Reusing previously-fetched Master Ignition Config
DEBUG Generating Master Machines...
panic: runtime error: index out of range [0] with length 0

goroutine 1 [running]:
github.com/openshift/installer/pkg/asset/machines/vsphere.getDefinedZones(0xc0003bec80)
  /go/src/github.com/openshift/installer/pkg/asset/machines/vsphere/machinesets.go:122 +0x4f8
github.com/openshift/installer/pkg/asset/machines/vsphere.Machines({0xc0011ca0b0, 0xd}, 0xc001080c80, 0xc0005cad50, {0xc000651d10, 0x13}, {0x4ab5773, 0x6}, {0x4ad49bb, 0x10})
  /go/src/github.com/openshift/installer/pkg/asset/machines/vsphere/machines.go:37 +0x250
github.com/openshift/installer/pkg/asset/machines.(*Master).Generate(0xc001118bd0, 0x5?)
Field platform.vsphere.failureDomains.topology.networks is not required according to the documentation.
sh-4.4$ ./openshift-install explain installconfig.platform.vsphere.failureDomains.topology
KIND:     InstallConfig
VERSION:  v1

RESOURCE: <object>
  Topology describes a given failure domain using vSphere constructs

FIELDS:
    computeCluster <string> -required-
      computeCluster as the failure domain This is required to be a path

    datacenter <string> -required-
      datacenter is the vCenter datacenter in which virtual machines will be located and defined as the failure domain.

    datastore <string> -required-
      datastore is the name or inventory path of the datastore in which the virtual machine is created/located.

    folder <string>
      folder is the name or inventory path of the folder in which the virtual machine is created/located.

    networks <[]string>
      networks is the list of networks within this failure domain

    resourcePool <string>
      resourcePool is the absolute path of the resource pool where virtual machines will be created. The absolute path is of the form /<datacenter>/host/<cluster>/Resources/<resourcepool>.
Version-Release number of selected component (if applicable):
4.12.0-0.nightly-2022-09-25-071630
How reproducible:
Always, when setting platform.vsphere.vcenters and unsetting platform.vsphere.failureDomains.topology.networks. It works if platform.vsphere.vcenters is not set and platform.vsphere.failureDomains.topology.networks is set.
Steps to Reproduce:
1. Configure zones in install-config.yaml, set platform.vsphere.vcenters and unset platform.vsphere.failureDomains.topology.networks
2. Install IPI cluster
3.
Actual results:
The installer gets a panic error.
Expected results:
installation is successful.
Additional info:
Description of problem:
AWS CCM install fails in the CI of the WMCB and WMCO repositories; this install uses the Prow workflow
openshift-e2e-aws-ccm-ovn-hybrid
Version-Release number of selected component: master/4.12
How reproducible: Always
Additional info:
Ongoing slack thread: https://coreos.slack.com/archives/CBZHF4DHC/p1664998753931669
This is a clone of issue OCPBUGS-3195. The following is the description of the original issue:
—
Description of problem:
The service-ca controller start func seems to return that error as soon as its context is cancelled (which seems to happen the moment the first signal is received): https://github.com/openshift/service-ca-operator/blob/42088528ef8a6a4b8c99b0f558246b8025584056/pkg/controller/starter.go#L24
That apparently triggers os.Exit(1) immediately: https://github.com/openshift/service-ca-operator/blob/42088528ef8a6a4b8c99b0f55824[…]om/openshift/library-go/pkg/controller/controllercmd/builder.go
The lock release doesn't happen until the periodic renew tick breaks out: https://github.com/openshift/service-ca-operator/blob/42088528ef8a6a4b8c99b0f55824[…]/vendor/k8s.io/client-go/tools/leaderelection/leaderelection.go
It seems unlikely that you'd reach the call to le.release() before the call to os.Exit(1) in the other goroutine.
Version-Release number of selected component (if applicable):
4.13.0
How reproducible:
~always
Steps to Reproduce:
1. oc delete -n openshift-service-ca pod <service-ca pod>
Actual results:
the old pod logs show:
W1103 09:59:14.370594 1 builder.go:106] graceful termination failed, controllers failed with error: stopped
and when a new pod comes up to replace it, it has to wait for a while before acquiring the leader lock
I1103 16:46:00.166173 1 leaderelection.go:248] attempting to acquire leader lease openshift-service-ca/service-ca-controller-lock... .... waiting .... I1103 16:48:30.004187 1 leaderelection.go:258] successfully acquired lease openshift-service-ca/service-ca-controller-lock
Expected results:
new pod can acquire the leader lease without waiting for the old pod's lease to expire
Additional info:
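As additional context, here is a minimal sketch of the expected behaviour (release the lease on shutdown so a replacement pod doesn't have to wait for expiry), using client-go leader election directly. This is an illustration of the mechanism only, not the service-ca-operator or library-go code referenced above, and the timing values are arbitrary.

```go
package leaderexit

import (
	"context"
	"time"

	"k8s.io/client-go/tools/leaderelection"
	"k8s.io/client-go/tools/leaderelection/resourcelock"
)

// run performs leader-elected work and releases the lease when ctx is
// cancelled (e.g. on SIGTERM), so the next pod can acquire it immediately.
func run(ctx context.Context, lock resourcelock.Interface, work func(context.Context)) {
	leaderelection.RunOrDie(ctx, leaderelection.LeaderElectionConfig{
		Lock:            lock,
		ReleaseOnCancel: true, // give up the lock on context cancellation instead of letting it expire
		LeaseDuration:   137 * time.Second,
		RenewDeadline:   107 * time.Second,
		RetryPeriod:     26 * time.Second,
		Callbacks: leaderelection.LeaderCallbacks{
			OnStartedLeading: work,
			OnStoppedLeading: func() {
				// Avoid exiting the process here before RunOrDie returns;
				// returning gives the client a chance to release the lease.
			},
		},
	})
}
```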
In order to start 4.12 development, we need to merge the agent-installer branch. We need to create a PR and engage the Installer team on getting it approved
Our Prometheus alerts are inconsistent with both upstream and sometimes our own vendor folder. Let's do a clean update run before the next release is branched off.
Description of problem:
acquiring node lock for assigning ip address, node: %s, ip: %sci-ln-g470i52-1d09d-slz7m-worker-westus-6wt7k10.0.128.102
Description of problem:
For example, "openshift-install explain installconfig.platform.gcp.publicDNSZone" tells "PublicDNSZone contains the zone ID and project where the Public DNS zone will be created", but in fact it's for specifying an existing zone where the Public DNS zone records will be put in.
Version-Release number of selected component (if applicable):
4.12.0-0.nightly-2022-10-10-015203
How reproducible:
Always
Steps to Reproduce:
1. openshift-install explain installconfig.platform.gcp.publicDNSZone
2. openshift-install explain installconfig.platform.gcp.privateDNSZone
3.
Actual results:
For example, it tells "PublicDNSZone contains the zone ID and project where the Public DNS zone will be created."
Expected results:
It should be like "PublicDNSZone contains the zone ID and project where the Public DNS zone records will be created."
Additional info:
$ openshift-install version
openshift-install 4.12.0-0.nightly-2022-10-10-015203
built from commit 02102a96b3f7c78337b32dcafe2e28be6fb67a0f
release image registry.ci.openshift.org/ocp/release@sha256:00806cf7faaa86981e73b478a72c1b7a838cd08b215f3a9ab9b278ae94d9a794
release architecture amd64

$ openshift-install explain installconfig.platform.gcp.publicDNSZone
KIND:     InstallConfig
VERSION:  v1

RESOURCE: <object>
  PublicDNSZone Technology Preview. PublicDNSZone contains the zone ID and project where the Public DNS zone will be created.

FIELDS:
    id <string>
      ID Technology Preview. ID or name of the zone.

    project <string>
      ProjectID Technology Preview When the ProjectID is provided, the zone will be created in this project. When the ProjectID is empty, the DNS zone with this ID will be created and managed in the Service Project (GCP.ProjectID).

$ openshift-install explain installconfig.platform.gcp.privateDNSZone
KIND:     InstallConfig
VERSION:  v1

RESOURCE: <object>
  PrivateDNSZone Technology Preview. PrivateDNSZone contains the zone ID and project where the Private DNS zone will be created.

FIELDS:
    id <string>
      ID Technology Preview. ID or name of the zone.

    project <string>
      ProjectID Technology Preview When the ProjectID is provided, the zone will be created in this project. When the ProjectID is empty, the DNS zone with this ID will be created and managed in the Service Project (GCP.ProjectID).
We should refactor the assisted-installer ops.go code to run exec commands through an interface, so they can be mocked in tests; a minimal sketch of the idea follows.
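The names below are hypothetical, not the actual assisted-installer ops.go API: the point is simply that ops code takes a small Executor interface, the production implementation shells out via os/exec, and unit tests can substitute a mock that returns canned output.

```go
package ops

import "os/exec"

// Executor abstracts running external commands so ops code can be unit-tested
// with a mock implementation instead of invoking real binaries.
type Executor interface {
	Execute(command string, args ...string) (output string, err error)
}

// execExecutor is the production implementation backed by os/exec.
type execExecutor struct{}

func (execExecutor) Execute(command string, args ...string) (string, error) {
	out, err := exec.Command(command, args...).CombinedOutput()
	return string(out), err
}

// NewExecutor returns the default os/exec-backed Executor.
func NewExecutor() Executor { return execExecutor{} }

// stopRAIDDevice shows how an ops-style function would depend only on the
// interface; tests inject a mock Executor and assert on the arguments.
func stopRAIDDevice(e Executor, device string) error {
	_, err := e.Execute("mdadm", "--stop", device)
	return err
}
```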
Description of problem:
prometheus-k8s-0 ends in CrashLoopBackOff with level=error err="opening storage failed: /prometheus/chunks_head/000002: invalid magic number 0" on SNO after hard reboot tests
Version-Release number of selected component (if applicable):
4.11.6
How reproducible:
Not always, after ~10 attempts
Steps to Reproduce:
1. Deploy SNO with Telco DU profile applied
2. Hard reboot node via out of band interface
3. oc -n openshift-monitoring get pods prometheus-k8s-0
Actual results:
NAME READY STATUS RESTARTS AGE prometheus-k8s-0 5/6 CrashLoopBackOff 125 (4m57s ago) 5h28m
Expected results:
Running
Additional info:
Attaching must-gather. The pod recovers successfully after deleting/re-creating.
[kni@registry.kni-qe-0 ~]$ oc -n openshift-monitoring logs prometheus-k8s-0
ts=2022-09-26T14:54:01.919Z caller=main.go:552 level=info msg="Starting Prometheus Server" mode=server version="(version=2.36.2, branch=rhaos-4.11-rhel-8, revision=0d81ba04ce410df37ca2c0b1ec619e1bc02e19ef)"
ts=2022-09-26T14:54:01.919Z caller=main.go:557 level=info build_context="(go=go1.18.4, user=root@371541f17026, date=20220916-14:15:37)"
ts=2022-09-26T14:54:01.919Z caller=main.go:558 level=info host_details="(Linux 4.18.0-372.26.1.rt7.183.el8_6.x86_64 #1 SMP PREEMPT_RT Sat Aug 27 22:04:33 EDT 2022 x86_64 prometheus-k8s-0 (none))"
ts=2022-09-26T14:54:01.919Z caller=main.go:559 level=info fd_limits="(soft=1048576, hard=1048576)"
ts=2022-09-26T14:54:01.919Z caller=main.go:560 level=info vm_limits="(soft=unlimited, hard=unlimited)"
ts=2022-09-26T14:54:01.921Z caller=web.go:553 level=info component=web msg="Start listening for connections" address=127.0.0.1:9090
ts=2022-09-26T14:54:01.922Z caller=main.go:989 level=info msg="Starting TSDB ..."
ts=2022-09-26T14:54:01.924Z caller=tls_config.go:231 level=info component=web msg="TLS is disabled." http2=false
ts=2022-09-26T14:54:01.926Z caller=main.go:848 level=info msg="Stopping scrape discovery manager..."
ts=2022-09-26T14:54:01.926Z caller=main.go:862 level=info msg="Stopping notify discovery manager..."
ts=2022-09-26T14:54:01.926Z caller=manager.go:951 level=info component="rule manager" msg="Stopping rule manager..."
ts=2022-09-26T14:54:01.926Z caller=manager.go:961 level=info component="rule manager" msg="Rule manager stopped"
ts=2022-09-26T14:54:01.926Z caller=main.go:899 level=info msg="Stopping scrape manager..."
ts=2022-09-26T14:54:01.926Z caller=main.go:858 level=info msg="Notify discovery manager stopped"
ts=2022-09-26T14:54:01.926Z caller=main.go:891 level=info msg="Scrape manager stopped"
ts=2022-09-26T14:54:01.926Z caller=notifier.go:599 level=info component=notifier msg="Stopping notification manager..."
ts=2022-09-26T14:54:01.926Z caller=main.go:844 level=info msg="Scrape discovery manager stopped"
ts=2022-09-26T14:54:01.926Z caller=manager.go:937 level=info component="rule manager" msg="Starting rule manager..."
ts=2022-09-26T14:54:01.926Z caller=main.go:1120 level=info msg="Notifier manager stopped"
ts=2022-09-26T14:54:01.926Z caller=main.go:1129 level=error err="opening storage failed: /prometheus/chunks_head/000002: invalid magic number 0"
This is a clone of issue OCPBUGS-2992. The following is the description of the original issue:
—
Description of problem:
The metal3-ironic container image in OKD fails during steps in configure-ironic.sh that look for additional Oslo configuration entries as environment variables to configure the Ironic instance. The mechanism by which it fails in OKD but not OpenShift is that the image for OpenShift happens to have unrelated variables set which match the regex, because it is based on the builder image, but the OKD image is based only on a stream8 image without these unrelated OS_ prefixed variables set. The metal3 pod created in response to even a provisioningNetwork: Disabled Provisioning object will therefore crashloop indefinitely.
Version-Release number of selected component (if applicable):
4.11
How reproducible:
Always
Steps to Reproduce:
1. Deploy OKD to a bare metal cluster using the assisted-service, with the OKD ConfigMap applied to podman play kube, as in :https://github.com/openshift/assisted-service/tree/master/deploy/podman#okd-configuration 2. Observe the state of the metal3 pod in the openshift-machine-api namespace.
Actual results:
The metal3-ironic container repeatedly exits with a nonzero status, with the logs ending here:
++ export IRONIC_URL_HOST=10.1.1.21
++ IRONIC_URL_HOST=10.1.1.21
++ export IRONIC_BASE_URL=https://10.1.1.21:6385
++ IRONIC_BASE_URL=https://10.1.1.21:6385
++ export IRONIC_INSPECTOR_BASE_URL=https://10.1.1.21:5050
++ IRONIC_INSPECTOR_BASE_URL=https://10.1.1.21:5050
++ '[' '!' -z '' ']'
++ '[' -f /etc/ironic/ironic.conf ']'
++ cp /etc/ironic/ironic.conf /etc/ironic/ironic.conf_orig
++ tee /etc/ironic/ironic.extra
# Options set from Environment variables
++ echo '# Options set from Environment variables'
++ env
++ grep '^OS_'
++ tee -a /etc/ironic/ironic.extra
Expected results:
The metal3-ironic container starts and the metal3 pod is reported as ready.
Additional info:
This is the PR that introduced pipefail to the downstream ironic-image, which is not yet accepted in the upstream: https://github.com/openshift/ironic-image/pull/267/files#diff-ab2b20df06f98d48f232d90f0b7aa464704257224862780635ec45b0ce8a26d4R3

This is the line that's failing: https://github.com/openshift/ironic-image/blob/4838a077d849070563b70761957178055d5d4517/scripts/configure-ironic.sh#L57

This is the image base that OpenShift uses for ironic-image (before rewriting in ci-operator): https://github.com/openshift/ironic-image/blob/4838a077d849070563b70761957178055d5d4517/Dockerfile.ocp#L9

Here is where the relevant environment variables are set in the builder images for OCP: https://github.com/openshift/builder/blob/973602e0e576d7eccef4fc5810ba511405cd3064/hack/lib/build/version.sh#L87

Here is the final FROM line in the OKD image build (just stream8): https://github.com/openshift/ironic-image/blob/4838a077d849070563b70761957178055d5d4517/Dockerfile.okd#L9

This results in the following differences between the two images:

$ podman run --rm -it --entrypoint bash quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:519ac06836d972047f311de5e57914cf842716e22a1d916a771f02499e0f235c -c 'env | grep ^OS_'
OS_GIT_MINOR=11
OS_GIT_TREE_STATE=clean
OS_GIT_COMMIT=97530a7
OS_GIT_VERSION=4.11.0-202210061001.p0.g97530a7.assembly.stream-97530a7
OS_GIT_MAJOR=4
OS_GIT_PATCH=0

$ podman run --rm -it --entrypoint bash quay.io/openshift/okd-content@sha256:6b8401f8d84c4838cf0e7c598b126fdd920b6391c07c9409b1f2f17be6d6d5cb -c 'env | grep ^OS_'

Here is what the OS_ prefixed variables should be used for:
https://github.com/metal3-io/ironic-image/blob/807a120b4ce5e1675a79ebf3ee0bb817cfb1f010/README.md?plain=1#L36
https://opendev.org/openstack/oslo.config/src/commit/84478d83f87e9993625044de5cd8b4a18dfcaf5d/oslo_config/sources/_environment.py

It's worth noting that ironic.extra is not consumed anywhere, and is simply being used here to save off the variables that Oslo _might_ be consuming (it won't consume the variables that are present in the OCP builder image, though they do get caught by this regex). With pipefail set, grep returns non-zero when it fails to find an environment variable that matches the regex, as in the case of the OKD ironic-image builds.
In 4.12.0-rc.0 some API-server components declare flowcontrol/v1beta1 release manifests:
$ oc adm release extract --to manifests quay.io/openshift-release-dev/ocp-release:4.12.0-rc.0-x86_64
$ grep -r flowcontrol.apiserver.k8s.io manifests
manifests/0000_50_cluster-authentication-operator_09_flowschema.yaml:apiVersion: flowcontrol.apiserver.k8s.io/v1beta1
manifests/0000_50_cluster-authentication-operator_09_flowschema.yaml:apiVersion: flowcontrol.apiserver.k8s.io/v1beta1
manifests/0000_50_cluster-authentication-operator_09_flowschema.yaml:apiVersion: flowcontrol.apiserver.k8s.io/v1beta1
manifests/0000_50_cluster-authentication-operator_09_flowschema.yaml:apiVersion: flowcontrol.apiserver.k8s.io/v1beta1
manifests/0000_20_etcd-operator_10_flowschema.yaml:apiVersion: flowcontrol.apiserver.k8s.io/v1beta1
manifests/0000_20_kube-apiserver-operator_08_flowschema.yaml:apiVersion: flowcontrol.apiserver.k8s.io/v1beta1
manifests/0000_20_kube-apiserver-operator_08_flowschema.yaml:apiVersion: flowcontrol.apiserver.k8s.io/v1beta1
manifests/0000_20_kube-apiserver-operator_08_flowschema.yaml:apiVersion: flowcontrol.apiserver.k8s.io/v1beta1
manifests/0000_50_cluster-openshift-apiserver-operator_09_flowschema.yaml:apiVersion: flowcontrol.apiserver.k8s.io/v1beta1
manifests/0000_50_cluster-openshift-apiserver-operator_09_flowschema.yaml:apiVersion: flowcontrol.apiserver.k8s.io/v1beta1
manifests/0000_50_cluster-openshift-apiserver-operator_09_flowschema.yaml:apiVersion: flowcontrol.apiserver.k8s.io/v1beta1
manifests/0000_50_cluster-openshift-controller-manager-operator_10_flowschema.yaml:apiVersion: flowcontrol.apiserver.k8s.io/v1beta1
The APIs are scheduled for removal in Kube 1.26, which will ship with OpenShift 4.13. We want the 4.12 CVO to move to modern APIs in 4.12, so that the APIRemovedInNext.*ReleaseInUse alerts do not fire on 4.12. This ticket tracks removing those manifests or replacing them with a more modern resource type. The definition of done is that new 4.13 (and, with backports, 4.12) nightlies no longer include flowcontrol.apiserver.k8s.io/v1beta1 manifests.
[It] clients should not use APIs that are removed in upcoming releases [apigroup:config.openshift.io] [Suite:openshift/conformance/parallel] github.com/openshift/origin/test/extended/apiserver/api_requests.go:27 Nov 18 21:59:06.261: INFO: api flowschemas.v1beta1.flowcontrol.apiserver.k8s.io, removed in release 1.26, was accessed 254 times Nov 18 21:59:06.261: INFO: api horizontalpodautoscalers.v2beta2.autoscaling, removed in release 1.26, was accessed 10 times Nov 18 21:59:06.261: INFO: api prioritylevelconfigurations.v1beta1.flowcontrol.apiserver.k8s.io, removed in release 1.26, was accessed 22 times Nov 18 21:59:06.261: INFO: user/system:serviceaccount:openshift-cluster-version:default accessed flowschemas.v1beta1.flowcontrol.apiserver.k8s.io 224 times Nov 18 21:59:06.261: INFO: user/system:serviceaccount:openshift-cluster-version:default accessed prioritylevelconfigurations.v1beta1.flowcontrol.apiserver.k8s.io 22 times Nov 18 21:59:06.261: INFO: user/system:serviceaccount:openshift-kube-storage-version-migrator:kube-storage-version-migrator-sa accessed flowschemas.v1beta1.flowcontrol.apiserver.k8s.io 16 times Nov 18 21:59:06.261: INFO: user/system:admin accessed flowschemas.v1beta1.flowcontrol.apiserver.k8s.io 14 times Nov 18 21:59:06.261: INFO: user/system:serviceaccount:openshift-monitoring:kube-state-metrics accessed horizontalpodautoscalers.v2beta2.autoscaling 10 times Nov 18 21:59:06.261: INFO: api flowschemas.v1beta1.flowcontrol.apiserver.k8s.io, removed in release 1.26, was accessed 254 times api horizontalpodautoscalers.v2beta2.autoscaling, removed in release 1.26, was accessed 10 times api prioritylevelconfigurations.v1beta1.flowcontrol.apiserver.k8s.io, removed in release 1.26, was accessed 22 times user/system:admin accessed flowschemas.v1beta1.flowcontrol.apiserver.k8s.io 14 times user/system:serviceaccount:openshift-cluster-version:default accessed flowschemas.v1beta1.flowcontrol.apiserver.k8s.io 224 times user/system:serviceaccount:openshift-cluster-version:default accessed prioritylevelconfigurations.v1beta1.flowcontrol.apiserver.k8s.io 22 times user/system:serviceaccount:openshift-kube-storage-version-migrator:kube-storage-version-migrator-sa accessed flowschemas.v1beta1.flowcontrol.apiserver.k8s.io 16 times user/system:serviceaccount:openshift-monitoring:kube-state-metrics accessed horizontalpodautoscalers.v2beta2.autoscaling 10 times Nov 18 21:59:06.261: INFO: api flowschemas.v1beta1.flowcontrol.apiserver.k8s.io, removed in release 1.26, was accessed 254 times api horizontalpodautoscalers.v2beta2.autoscaling, removed in release 1.26, was accessed 10 times api prioritylevelconfigurations.v1beta1.flowcontrol.apiserver.k8s.io, removed in release 1.26, was accessed 22 times user/system:admin accessed flowschemas.v1beta1.flowcontrol.apiserver.k8s.io 14 times user/system:serviceaccount:openshift-cluster-version:default accessed flowschemas.v1beta1.flowcontrol.apiserver.k8s.io 224 times user/system:serviceaccount:openshift-cluster-version:default accessed prioritylevelconfigurations.v1beta1.flowcontrol.apiserver.k8s.io 22 times user/system:serviceaccount:openshift-kube-storage-version-migrator:kube-storage-version-migrator-sa accessed flowschemas.v1beta1.flowcontrol.apiserver.k8s.io 16 times user/system:serviceaccount:openshift-monitoring:kube-state-metrics accessed horizontalpodautoscalers.v2beta2.autoscaling 10 times [AfterEach] [sig-arch][Late] github.com/openshift/origin/test/extended/util/client.go:158 [AfterEach] [sig-arch][Late] 
github.com/openshift/origin/test/extended/util/client.go:159 flake: api flowschemas.v1beta1.flowcontrol.apiserver.k8s.io, removed in release 1.26, was accessed 254 times api horizontalpodautoscalers.v2beta2.autoscaling, removed in release 1.26, was accessed 10 times api prioritylevelconfigurations.v1beta1.flowcontrol.apiserver.k8s.io, removed in release 1.26, was accessed 22 times user/system:admin accessed flowschemas.v1beta1.flowcontrol.apiserver.k8s.io 14 times user/system:serviceaccount:openshift-cluster-version:default accessed flowschemas.v1beta1.flowcontrol.apiserver.k8s.io 224 times user/system:serviceaccount:openshift-cluster-version:default accessed prioritylevelconfigurations.v1beta1.flowcontrol.apiserver.k8s.io 22 times user/system:serviceaccount:openshift-kube-storage-version-migrator:kube-storage-version-migrator-sa accessed flowschemas.v1beta1.flowcontrol.apiserver.k8s.io 16 times user/system:serviceaccount:openshift-monitoring:kube-state-metrics accessed horizontalpodautoscalers.v2beta2.autoscaling 10 times Ginkgo exit error 4: exit with code 4
This is required to unblock https://github.com/openshift/origin/pull/27561
Description of problem:
Upgrade OCP 4.11 --> 4.12 fails with one 'NotReady,SchedulingDisabled' node and MachineConfigDaemonFailed.
Version-Release number of selected component (if applicable):
Upgrade from OCP 4.11.0-0.nightly-2022-09-19-214532 on top of OSP RHOS-16.2-RHEL-8-20220804.n.1 to 4.12.0-0.nightly-2022-09-20-040107. Network Type: OVNKubernetes
How reproducible:
Twice out of two attempts.
Steps to Reproduce:
1. Install OCP 4.11.0-0.nightly-2022-09-19-214532 (IPI) on top of OSP RHOS-16.2-RHEL-8-20220804.n.1. The cluster is up and running with three workers:
$ oc get clusterversion NAME VERSION AVAILABLE PROGRESSING SINCE STATUS version 4.11.0-0.nightly-2022-09-19-214532 True False 51m Cluster version is 4.11.0-0.nightly-2022-09-19-214532
2. Run the oc command to upgrade to 4.12.0-0.nightly-2022-09-20-040107:
$ oc adm upgrade --to-image=registry.ci.openshift.org/ocp/release:4.12.0-0.nightly-2022-09-20-040107 --allow-explicit-upgrade --force=true warning: Using by-tag pull specs is dangerous, and while we still allow it in combination with --force for backward compatibility, it would be much safer to pass a by-digest pull spec instead warning: The requested upgrade image is not one of the available updates. You have used --allow-explicit-upgrade for the update to proceed anyway warning: --force overrides cluster verification of your supplied release image and waives any update precondition failures. Requesting update to release image registry.ci.openshift.org/ocp/release:4.12.0-0.nightly-2022-09-20-040107
3. The upgrade does not succeed: [0]
$ oc get clusterversion NAME VERSION AVAILABLE PROGRESSING SINCE STATUS version 4.11.0-0.nightly-2022-09-19-214532 True True 17h Unable to apply 4.12.0-0.nightly-2022-09-20-040107: wait has exceeded 40 minutes for these operators: network
One node degraded to 'NotReady,SchedulingDisabled' status:
$ oc get nodes NAME STATUS ROLES AGE VERSION ostest-9vllk-master-0 Ready master 19h v1.24.0+07c9eb7 ostest-9vllk-master-1 Ready master 19h v1.24.0+07c9eb7 ostest-9vllk-master-2 Ready master 19h v1.24.0+07c9eb7 ostest-9vllk-worker-0-4x4pt NotReady,SchedulingDisabled worker 18h v1.24.0+3882f8f ostest-9vllk-worker-0-h6kcs Ready worker 18h v1.24.0+3882f8f ostest-9vllk-worker-0-xhz9b Ready worker 18h v1.24.0+3882f8f
$ oc get pods -A | grep -v -e Completed -e Running NAMESPACE NAME READY STATUS RESTARTS AGE openshift-openstack-infra coredns-ostest-9vllk-worker-0-4x4pt 0/2 Init:0/1 0 18h
$ oc get events LAST SEEN TYPE REASON OBJECT MESSAGE 7m15s Warning OperatorDegraded: MachineConfigDaemonFailed /machine-config Unable to apply 4.12.0-0.nightly-2022-09-20-040107: failed to apply machine config daemon manifests: error during waitForDaemonsetRollout: [timed out waiting for the condition, daemonset machine-config-daemon is not ready. status: (desired: 6, updated: 6, ready: 5, unavailable: 1)] 7m15s Warning MachineConfigDaemonFailed /machine-config Cluster not available for [{operator 4.11.0-0.nightly-2022-09-19-214532}]: failed to apply machine config daemon manifests: error during waitForDaemonsetRollout: [timed out waiting for the condition, daemonset machine-config-daemon is not ready. 
status: (desired: 6, updated: 6, ready: 5, unavailable: 1)] $ oc get co NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE MESSAGE authentication 4.12.0-0.nightly-2022-09-20-040107 True False False 18h baremetal 4.12.0-0.nightly-2022-09-20-040107 True False False 19h cloud-controller-manager 4.12.0-0.nightly-2022-09-20-040107 True False False 19h cloud-credential 4.12.0-0.nightly-2022-09-20-040107 True False False 19h cluster-autoscaler 4.12.0-0.nightly-2022-09-20-040107 True False False 19h config-operator 4.12.0-0.nightly-2022-09-20-040107 True False False 19h console 4.12.0-0.nightly-2022-09-20-040107 True False False 18h control-plane-machine-set 4.12.0-0.nightly-2022-09-20-040107 True False False 17h csi-snapshot-controller 4.12.0-0.nightly-2022-09-20-040107 True False False 19h dns 4.12.0-0.nightly-2022-09-20-040107 True True False 19h DNS "default" reports Progressing=True: "Have 5 available node-resolver pods, want 6." etcd 4.12.0-0.nightly-2022-09-20-040107 True False False 19h image-registry 4.12.0-0.nightly-2022-09-20-040107 True True False 18h Progressing: The registry is ready... ingress 4.12.0-0.nightly-2022-09-20-040107 True False False 18h insights 4.12.0-0.nightly-2022-09-20-040107 True False False 19h kube-apiserver 4.12.0-0.nightly-2022-09-20-040107 True True False 18h NodeInstallerProgressing: 1 nodes are at revision 11; 2 nodes are at revision 13 kube-controller-manager 4.12.0-0.nightly-2022-09-20-040107 True False False 19h kube-scheduler 4.12.0-0.nightly-2022-09-20-040107 True False False 19h kube-storage-version-migrator 4.12.0-0.nightly-2022-09-20-040107 True False False 19h machine-api 4.12.0-0.nightly-2022-09-20-040107 True False False 19h machine-approver 4.12.0-0.nightly-2022-09-20-040107 True False False 19h machine-config 4.11.0-0.nightly-2022-09-19-214532 False True True 16h Cluster not available for [{operator 4.11.0-0.nightly-2022-09-19-214532}]: failed to apply machine config daemon manifests: error during waitForDaemonsetRollout: [timed out waiting for the condition, daemonset machine-config-daemon is not ready. status: (desired: 6, updated: 6, ready: 5, unavailable: 1)] marketplace 4.12.0-0.nightly-2022-09-20-040107 True False False 19h monitoring 4.12.0-0.nightly-2022-09-20-040107 True False False 18h network 4.12.0-0.nightly-2022-09-20-040107 True True True 19h DaemonSet "/openshift-ovn-kubernetes/ovnkube-node" rollout is not making progress - last change 2022-09-20T14:16:13Z... node-tuning 4.12.0-0.nightly-2022-09-20-040107 True False False 17h openshift-apiserver 4.12.0-0.nightly-2022-09-20-040107 True False False 18h openshift-controller-manager 4.12.0-0.nightly-2022-09-20-040107 True False False 17h openshift-samples 4.12.0-0.nightly-2022-09-20-040107 True False False 17h operator-lifecycle-manager 4.12.0-0.nightly-2022-09-20-040107 True False False 19h operator-lifecycle-manager-catalog 4.12.0-0.nightly-2022-09-20-040107 True False False 19h operator-lifecycle-manager-packageserver 4.12.0-0.nightly-2022-09-20-040107 True False False 19h service-ca 4.12.0-0.nightly-2022-09-20-040107 True False False 19h storage 4.12.0-0.nightly-2022-09-20-040107 True True False 19h ManilaCSIDriverOperatorCRProgressing: ManilaDriverNodeServiceControllerProgressing: Waiting for DaemonSet to deploy node pods... [0] http://pastebin.test.redhat.com/1074531
Actual results:
OCP 4.11 --> 4.12 upgrade fails.
Expected results:
OCP 4.11 --> 4.12 upgrade success.
Additional info:
Attached logs of the NotReady node - [^journalctl_ostest-9vllk-worker-0-4x4pt.log.tar.gz]
In order to have more information to debug router issues in SNO, we want to see whether the router is healthy from the node network point of view, and to enable router access logs.
Let's revert this once the root cause of https://bugzilla.redhat.com/show_bug.cgi?id=2097041 is found.
CI is failing due to the updated pod security admission controller. We need to update the console test pods with the correct security values.
```
Error: Command failed: echo '{"apiVersion":"v1","kind":"Pod","metadata":{"name":"test-jxlpt-event-test-pod","namespace":"test-jxlpt"},"spec":{"containers":[{"name":"httpd","image":"image-registry.openshift-image-registry.svc:5000/openshift/httpd:latest"}]}}' | kubectl create -n test-jxlpt -f -
Error from server (Forbidden): error when creating "STDIN": pods "test-jxlpt-event-test-pod" is forbidden: violates PodSecurity "restricted:v1.24": allowPrivilegeEscalation != false (container "httpd" must set securityContext.allowPrivilegeEscalation=false), unrestricted capabilities (container "httpd" must set securityContext.capabilities.drop=["ALL"]), runAsNonRoot != true (pod or container "httpd" must set securityContext.runAsNonRoot=true), seccompProfile (pod or container "httpd" must set securityContext.seccompProfile.type to "RuntimeDefault" or "Localhost")
```
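For reference, a sketch (in Go, using the corresponding API types) of the securityContext values the restricted profile is demanding in the error above. The real console test fixtures live elsewhere (and are not necessarily defined in Go); this only illustrates the required fields:
```go
package main

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// testPod builds a pod that satisfies the "restricted" pod security profile:
// no privilege escalation, all capabilities dropped, non-root, and the
// RuntimeDefault seccomp profile.
func testPod() *corev1.Pod {
	runAsNonRoot := true
	allowPrivilegeEscalation := false
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{
			Name:      "test-jxlpt-event-test-pod",
			Namespace: "test-jxlpt",
		},
		Spec: corev1.PodSpec{
			SecurityContext: &corev1.PodSecurityContext{
				RunAsNonRoot:   &runAsNonRoot,
				SeccompProfile: &corev1.SeccompProfile{Type: corev1.SeccompProfileTypeRuntimeDefault},
			},
			Containers: []corev1.Container{{
				Name:  "httpd",
				Image: "image-registry.openshift-image-registry.svc:5000/openshift/httpd:latest",
				SecurityContext: &corev1.SecurityContext{
					AllowPrivilegeEscalation: &allowPrivilegeEscalation,
					Capabilities:             &corev1.Capabilities{Drop: []corev1.Capability{"ALL"}},
				},
			}},
		},
	}
}

func main() { _ = testPod() }
```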
Description of problem:
We have 6 runs of techpreview jobs where the job fails due to the MCO:
{Operator degraded (RequiredPoolsFailed): Unable to apply 4.12.0-0.ci.test-2022-09-21-183414-ci-op-qd6plyhc-latest: error during syncRequiredMachineConfigPools: [timed out waiting for the condition, error pool master is not ready, retrying. Status: (pool degraded: true total: 3, ready 0, updated: 0, unavailable: 3)] Operator degraded (RequiredPoolsFailed): Unable to apply 4.12.0-0.ci.test-2022-09-21-183414-ci-op-qd6plyhc-latest: error during syncRequiredMachineConfigPools: [timed out waiting for the condition, error pool master is not ready, retrying. Status: (pool degraded: true total: 3, ready 0, updated: 0, unavailable: 3)]}
Looking at the MCD logs, the master seems to go degraded in bootstrap due to the rendered config not being found:
I0921 18:49:47.091804 8171 daemon.go:444] Node ci-op-qd6plyhc-6dd9a-bfmjd-master-1 is part of the control plane I0921 18:49:49.213556 8171 node.go:24] No machineconfiguration.openshift.io/currentConfig annotation on node ci-op-qd6plyhc-6dd9a-bfmjd-master-1: map[csi.volume.kubernetes.io/nodeid: {"pd.csi.storage.gke.io":"projects/openshift-gce-devel-ci-2/zones/us-central1-b/instances/ci-op-qd6plyhc-6dd9a-bfmjd-master-1"} volumes.kubernetes.io/controller-managed-attach-detach:true], in cluster bootstrap, loading initial node annotation from /etc/machine-config-daemon/node-annotations.json I0921 18:49:49.215186 8171 node.go:45] Setting initial node config: rendered-master-2dde32327e4e5d15092fccbac1dcec49 I0921 18:49:49.253706 8171 daemon.go:1184] In bootstrap mode E0921 18:49:49.254046 8171 writer.go:200] Marking Degraded due to: machineconfig.machineconfiguration.openshift.io "rendered-master-2dde32327e4e5d15092fccbac1dcec49" not found I0921 18:49:51.232610 8171 daemon.go:499] Transitioned from state: Done -> Degraded I0921 18:49:51.249618 8171 daemon.go:1184] In bootstrap mode E0921 18:49:51.249906 8171 writer.go:200] Marking Degraded due to: machineconfig.machineconfiguration.openshift.io "rendered-master-2dde32327e4e5d15092fccbac1dcec49" not found
However, looking at the controller logs, a rendered config was generated correctly, but it is not the missing config from above:
I0921 18:54:06.736984 1 render_controller.go:506] Generated machineconfig rendered-master-acc8491aafab8ef511a40b76372325ee from 6 configs: [{MachineConfig 00-master machineconfiguration.openshift.io/v1 } {MachineConfig 01-master-container-runtime machineconfiguration.openshift.io/v1 } {MachineConfig 01-master-kubelet machineconfiguration.openshift.io/v1 } {MachineConfig 98-master-generated-kubelet machineconfiguration.openshift.io/v1 } {MachineConfig 99-master-generated-registries machineconfiguration.openshift.io/v1 } {MachineConfig 99-master-ssh machineconfiguration.openshift.io/v1 }] I0921 18:54:06.737226 1 event.go:285] Event(v1.ObjectReference{Kind:"MachineConfigPool", Namespace:"", Name:"master", UID:"b2084ca6-4b33-46bf-b83b-9e98010ff085", APIVersion:"machineconfiguration.openshift.io/v1", ResourceVersion:"5648", FieldPath:""}): type: 'Normal' reason: 'RenderedConfigGenerated' rendered-master-acc8491aafab8ef511a40b76372325ee successfully generated (release version: 4.12.0-0.ci.test-2022-09-21-183220-ci-op-9ksj7d7g-latest, controller version: a627415c240b4c7dd2f9e90f659690d9c0f623f3) I0921 18:54:06.742053 1 render_controller.go:532] Pool master: now targeting: rendered-master-acc8491aafab8ef511a40b76372325ee
So far I see this in the following techpreview jobs:
GCP techpreview
https://prow.ci.openshift.org/view/gs/origin-ci-test/logs/openshift-kubernetes-1360-ci-4.12-e2e-gcp-sdn-techpreview/1572638837954318336
https://prow.ci.openshift.org/view/gs/origin-ci-test/logs/openshift-kubernetes-1360-ci-4.12-e2e-gcp-sdn-techpreview-serial/1572638838793179136
Vsphere techpreview
https://prow.ci.openshift.org/view/gs/origin-ci-test/logs/openshift-kubernetes-1360-nightly-4.12-e2e-vsphere-ovn-techpreview/1572638854794448896
https://prow.ci.openshift.org/view/gs/origin-ci-test/logs/openshift-kubernetes-1360-nightly-4.12-e2e-vsphere-ovn-techpreview-serial/1572638855574589440
AWS Techpreview:
https://prow.ci.openshift.org/view/gs/origin-ci-test/logs/openshift-kubernetes-1360-ci-4.12-e2e-aws-sdn-techpreview/1572638828672323584
https://prow.ci.openshift.org/view/gs/origin-ci-test/logs/openshift-kubernetes-1360-ci-4.12-e2e-aws-sdn-techpreview-serial/1572638829217583104
The above jobs affect the k8s 1.25 bump and are blocking it.
There are also other occurrences outside our PR:
https://prow.ci.openshift.org/view/gs/origin-ci-test/pr-logs/pull/openshift_release/31965/rehearse-31965-pull-ci-openshift-openshift-controller-manager-master-openshift-e2e-aws-builds-techpreview/1572581504297472000
Also see a quick search:
https://search.ci.openshift.org/?search=timed+out+waiting+for+the+condition%2C+error+pool+master+is+not+ready&maxAge=48h&context=1&type=bug%2Bissue%2Bjunit&name=&excludeName=&maxMatches=5&maxBytes=20971520&groupBy=job
Did something change that would affect tech preview jobs?
Also note that this seems to be a new failure; some of these jobs were passing within the last ~8 days.
Description of problem:
If you set a service's cluster IP to an IP with a leading zero (e.g. 192.168.0.011), ovn-k should normalise this and remove the leading zero before sending it to OVN.
I saw this on a CI run executing the k8s test here: test/e2e/network/funny_ips.go +75
You can reproduce it using the above test.
Have a read of the text there:
```
// What are funny IPs:
// The adjective is because of the curl blog that explains the history and the problem of liberal
// parsing of IP addresses and the consequences and security risks caused the lack of normalization,
// mainly due to the use of different notations to abuse parsers misalignment to bypass filters.
// xref: https://daniel.haxx.se/blog/2021/04/19/curl-those-funny-ipv4-addresses/
//
// Since golang 1.17, IPv4 addresses with leading zeros are rejected by the standard library.
// xref: https://github.com/golang/go/issues/30999
//
// Because this change on the parsers can cause that previous valid data become invalid, Kubernetes
// forked the old parsers allowing leading zeros on IPv4 address to not break the compatibility.
//
// Kubernetes interprets leading zeros on IPv4 addresses as decimal, users must not rely on parser
// alignment to not being impacted by the associated security advisory: CVE-2021-29923 golang
// standard library "net" - Improper Input Validation of octal literals in golang 1.16.2 and below
// standard library "net" results in indeterminate SSRF & RFI vulnerabilities. xref:
// https://nvd.nist.gov/vuln/detail/CVE-2021-29923
```
northd is logging an error about this also:
|socket_util|ERR|172.30.0.011:7180: bad IP address "172.30.0.011" ... 2022-08-23T14:14:21.968Z|01839|ovn_util|WARN|bad ip address or port for load balancer key 172.30.0.011:7180
Also, I see the error:
E0823 14:14:34.135115 3284 gateway_shared_intf.go:600] Failed to delete conntrack entry for service e2e-funny-ips-8626/funny-ip: failed to delete conntrack entry for service e2e-funny-ips-8626/funny-ip with svcVIP 172.30.0.011, svcPort 7180, protocol TCP: value "<nil>" passed to DeleteConntrack is not an IP address
We should normalise the IPs before sending them to OVN. I also see there's a conntrack error when trying to set this bad IP.
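A minimal sketch of the normalisation meant here, interpreting leading-zero octets as decimal to match the Kubernetes behaviour quoted above. This is illustrative only, not the actual ovn-kubernetes code path:
```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// normalizeIPv4 rewrites a dotted-quad IPv4 address so that each octet is a
// plain decimal number with no leading zeros, e.g. "172.30.0.011" -> "172.30.0.11".
// Leading zeros are interpreted as decimal (matching Kubernetes' forked parser),
// not as octal.
func normalizeIPv4(ip string) (string, error) {
	parts := strings.Split(ip, ".")
	if len(parts) != 4 {
		return "", fmt.Errorf("%q is not a dotted-quad IPv4 address", ip)
	}
	for i, p := range parts {
		n, err := strconv.Atoi(p)
		if err != nil || n < 0 || n > 255 {
			return "", fmt.Errorf("bad octet %q in %q", p, ip)
		}
		parts[i] = strconv.Itoa(n)
	}
	return strings.Join(parts, "."), nil
}

func main() {
	out, _ := normalizeIPv4("172.30.0.011")
	fmt.Println(out) // prints 172.30.0.11
}
```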
Version-Release number of selected component (if applicable):
How reproducible:
Steps to Reproduce:
1. See the above k8s test
Actual results:
Leading zero IP sent to OVN
Expected results:
No leading zero IP sent to OVN
Additional info:
This is a clone of issue OCPBUGS-3440. The following is the description of the original issue:
—
Description of problem:
https://github.com/openshift/cluster-authentication-operator/pull/587 addresses an issue in which the auth operator goes degraded when the console capability is not enabled. The result is that the console publicAssetURL is not configured when the console is disabled. However, if the console capability is later enabled on the cluster, there is no logic in place to ensure the auth operator detects this and performs the configuration. Manually restarting the auth operator will address this, but we should have a solution that handles it automatically.
Version-Release number of selected component (if applicable):
4.12
How reproducible:
Always
Steps to Reproduce:
1. Install a cluster w/o the console cap
2. Inspect the auth configmap, see that assetPublicURL is empty
3. Enable the console capability, wait for console to start up
4. Inspect the auth configmap and see it is still empty
Actual results:
assetPublicURL does not get populated
Expected results:
assetPublicURL is populated once the console is enabled
Additional info:
Description of problem:
Deployed a hypershift cluster with a recent multi-arch build. The storage cluster operator has become available but has the below warning message:
```
PowerVSBlockCSIDriverOperatorCRDegraded: PowerVSBlockCSIDriverStaticResourcesControllerDegraded: "rbac/attacher_role.yaml" (string): clusterroles.rbac.authorization.k8s.io "ibm-powervs-block-external-attacher-role" is forbidden: user "system:serviceaccount:openshift-cluster-csi-drivers:powervs-block-csi-driver-operator" (groups=["system:serviceaccounts" "system:serviceaccounts:openshift-cluster-csi-drivers" "system:authenticated"]) is attempting to grant RBAC permissions not currently held:
PowerVSBlockCSIDriverOperatorCRDegraded: PowerVSBlockCSIDriverStaticResourcesControllerDegraded: {APIGroups:["csi.storage.k8s.io"], Resources:["csinodeinfos"], Verbs:["get" "list" "watch"]}
PowerVSBlockCSIDriverOperatorCRDegraded: PowerVSBlockCSIDriverStaticResourcesControllerDegraded: "rbac/attacher_binding.yaml" (string): clusterroles.rbac.authorization.k8s.io "ibm-powervs-block-external-attacher-role" not found
```
Version-Release number of selected component (if applicable):
How reproducible:
Steps to Reproduce:
1. Deploy the 4.12.0-0.nightly-multi-2022-09-01-220105 nightly build
Actual results:
Expected results:
Additional info:
Description of problem:
Tests fail when running dev-console tests locally.
Version-Release number of selected component (if applicable):
At least on 4.11 and 4.12
How reproducible:
Always
Steps to Reproduce:
1. Start cypress: yarn run test-cypress-dev-console
2. Run add-page
Actual results:
Fails
Expected results:
Should pass
Additional info:
As a developer, I would like to remove the random Terraform provider because it is essentially unnecessary, and removing it would improve our build process.
The random Terraform provider is used in Azure & Azure Stack to create a random string. This could easily be done in Go code and passed in as a variable.
Removing an extra provider would decrease our build time and improve our build stability, which is often failing due to timeouts.
The random string is used here in Azure (and similarly in Azure Stack):
https://github.com/openshift/installer/blob/master/data/data/azure/vnet/main.tf#L23-L27
One approach would be to generate the string in tfvars and pass it in as a terraform variable.
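A minimal sketch of generating that string in Go during tfvars generation instead of via the random provider; the length and character set here are assumptions, not necessarily what the installer would use:
```go
package main

import (
	"crypto/rand"
	"fmt"
	"math/big"
)

// randomString returns a random lowercase alphanumeric string of length n,
// suitable for passing into Terraform as a plain variable in place of the
// random provider's resource.
func randomString(n int) (string, error) {
	const alphabet = "abcdefghijklmnopqrstuvwxyz0123456789"
	out := make([]byte, n)
	for i := range out {
		idx, err := rand.Int(rand.Reader, big.NewInt(int64(len(alphabet))))
		if err != nil {
			return "", err
		}
		out[i] = alphabet[idx.Int64()]
	}
	return string(out), nil
}

func main() {
	s, err := randomString(5)
	if err != nil {
		panic(err)
	}
	fmt.Println(s)
}
```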
In the Known Issues section of the OpenStack-specific Installer docs, there is a point about control plane anti-affinity.
The known issue has several problems:
Description of problem:
This bug is a clone of https://bugzilla.redhat.com/show_bug.cgi?id=2109140 on the odf-console side. The corresponding PR needs to be merged in console as well. Please verify this console Jira bug and https://bugzilla.redhat.com/show_bug.cgi?id=2109140 simultaneously. The steps are exactly the same, no difference.
Version-Release number of selected component (if applicable):
How reproducible:
Steps to Reproduce:
1. 2. 3.
Actual results:
Expected results:
Additional info:
A nil-pointer dereference occurred in the TestRouterCompressionOperation test in the e2e-gcp-operator CI job for the openshift/cluster-ingress-operator repository.
4.12.
Observed once. However, we run e2e-gcp-operator infrequently.
1. Run the e2e-gcp-operator CI job on a cluster-ingress-operator PR.
panic: runtime error: invalid memory address or nil pointer dereference [recovered] panic: runtime error: invalid memory address or nil pointer dereference [recovered] panic: runtime error: invalid memory address or nil pointer dereference [signal SIGSEGV: segmentation violation code=0x1 addr=0x38 pc=0x14cabef] goroutine 8048 [running]: testing.tRunner.func1.2({0x1624920, 0x265b870}) /usr/lib/golang/src/testing/testing.go:1389 +0x24e testing.tRunner.func1() /usr/lib/golang/src/testing/testing.go:1392 +0x39f panic({0x1624920, 0x265b870}) /usr/lib/golang/src/runtime/panic.go:838 +0x207 k8s.io/apimachinery/pkg/util/runtime.HandleCrash({0x0, 0x0, 0x40e43e5698?}) /go/src/github.com/openshift/cluster-ingress-operator/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:56 +0xd8 panic({0x1624920, 0x265b870}) /usr/lib/golang/src/runtime/panic.go:838 +0x207 github.com/openshift/cluster-ingress-operator/test/e2e.getHttpHeaders(0xc0002b9380?, 0xc0000e4540, 0x1) /go/src/github.com/openshift/cluster-ingress-operator/test/e2e/router_compression_test.go:257 +0x2ef github.com/openshift/cluster-ingress-operator/test/e2e.testContentEncoding.func1() /go/src/github.com/openshift/cluster-ingress-operator/test/e2e/router_compression_test.go:220 +0x57 k8s.io/apimachinery/pkg/util/wait.ConditionFunc.WithContext.func1({0x18, 0xc00003f000}) /go/src/github.com/openshift/cluster-ingress-operator/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:222 +0x1b k8s.io/apimachinery/pkg/util/wait.runConditionWithCrashProtectionWithContext({0x1b25d40?, 0xc000138000?}, 0xc000befe08?) /go/src/github.com/openshift/cluster-ingress-operator/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:235 +0x57 k8s.io/apimachinery/pkg/util/wait.poll({0x1b25d40, 0xc000138000}, 0x48?, 0xc4fa25?, 0x30?) /go/src/github.com/openshift/cluster-ingress-operator/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:582 +0x38 k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x1b25d40, 0xc000138000}, 0xc000b1da00?, 0xc000befe98?, 0x414207?) /go/src/github.com/openshift/cluster-ingress-operator/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528 +0x4a k8s.io/apimachinery/pkg/util/wait.PollImmediate(0xc00088cea0?, 0x3b9aca00?, 0xc000138000?) /go/src/github.com/openshift/cluster-ingress-operator/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514 +0x50 github.com/openshift/cluster-ingress-operator/test/e2e.testContentEncoding(0xc00088cea0, 0xc000a8a270, 0xc0000e4540, 0x1, {0x17fe569, 0x4}) /go/src/github.com/openshift/cluster-ingress-operator/test/e2e/router_compression_test.go:219 +0xfc github.com/openshift/cluster-ingress-operator/test/e2e.TestRouterCompressionOperation(0xc00088cea0) /go/src/github.com/openshift/cluster-ingress-operator/test/e2e/router_compression_test.go:208 +0x454 testing.tRunner(0xc00088cea0, 0x191cdd0) /usr/lib/golang/src/testing/testing.go:1439 +0x102 created by testing.(*T).Run /usr/lib/golang/src/testing/testing.go:1486 +0x35f
The test should pass.
The faulty logic was introduced in https://github.com/openshift/cluster-ingress-operator/pull/679/commits/211b9c15b1fd6217dee863790c20f34c26c138aa.
The test was subsequently marked as a parallel test in https://github.com/openshift/cluster-ingress-operator/pull/756/commits/a22322b25569059c61e1973f37f0a4b49e9407bc.
The job history shows that the e2e-gcp-operator job has only run once since June: https://prow.ci.openshift.org/job-history/gs/origin-ci-test/pr-logs/directory/pull-ci-openshift-cluster-ingress-operator-master-e2e-gcp-operator. I see failures in May, but none of those failures shows the panic.
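Not the actual fix, but a sketch of the guard pattern the poll condition appears to be missing (the helper name and durations here are hypothetical): on a transport error the *http.Response is nil, so the condition should return false and retry rather than dereference it.
```go
package e2e

import (
	"fmt"
	"net/http"
	"time"

	"k8s.io/apimachinery/pkg/util/wait"
)

// pollForHeader repeatedly issues a request and only inspects the response
// once it is known to be non-nil. On a transport error, resp is nil, so
// returning (false, nil) retries instead of panicking on resp.Header.
func pollForHeader(client *http.Client, url, header string) (string, error) {
	var value string
	err := wait.PollImmediate(5*time.Second, 2*time.Minute, func() (bool, error) {
		resp, err := client.Get(url)
		if err != nil || resp == nil {
			fmt.Printf("request failed, retrying: %v\n", err)
			return false, nil // retry; do not touch resp
		}
		defer resp.Body.Close()
		value = resp.Header.Get(header)
		return value != "", nil
	})
	return value, err
}
```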
This bug is a backport clone of [Bugzilla Bug 2050230](https://bugzilla.redhat.com/show_bug.cgi?id=2050230). The following is the description of the original bug:
—
Description of problem:
In a large cluster, sdn daemonset can DoS the kube-apiserver with un-paginated LIST calls on high count resources.
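For illustration only, a paginated LIST with client-go using Limit/Continue, which is the general pattern for avoiding huge un-paginated LIST calls against high-count resources; this is a sketch, not the sdn daemonset's actual code path:
```go
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

// listAllPodsPaginated lists pods across all namespaces in chunks of 500
// instead of a single un-paginated LIST, which on high-count resources can
// overload the kube-apiserver.
func listAllPodsPaginated(ctx context.Context, client kubernetes.Interface) (int, error) {
	total := 0
	opts := metav1.ListOptions{Limit: 500}
	for {
		list, err := client.CoreV1().Pods(metav1.NamespaceAll).List(ctx, opts)
		if err != nil {
			return total, err
		}
		total += len(list.Items)
		if list.Continue == "" {
			return total, nil
		}
		opts.Continue = list.Continue
	}
}

func main() {
	cfg, err := rest.InClusterConfig()
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)
	n, err := listAllPodsPaginated(context.Background(), client)
	fmt.Println(n, err)
}
```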
Version-Release number of selected component (if applicable):
How reproducible:
NA
Steps to Reproduce:
NA
Actual results:
The Kube API Server and OpenShift API Server in one of the clusters keep restarting without a clear error. The cluster is not accessible.
Expected results:
Kube API Server and Openshift API Server should be stable.
Additional info:
We cache images by filename, which works when downloading from the Internet as the filename always includes the CoreOS version.
However, when extracting an image from the release payload, it always has the same name. Therefore, we will never update it to a newer image even when running different versions of the installer.
A possible solution:
An alternative might be to set the name of the cache file to something different. It's not clear how we'd guarantee a match between the release payload we've been given and the ISO unless the name was based on the release payload (which eliminates some of the point of the cache, since ordinarily most release payloads will point to a small number of images).
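A minimal sketch of the "name the cache file after the release payload" idea mentioned above; the directory and file names are illustrative assumptions, not the installer's real cache layout:
```go
package main

import (
	"crypto/sha256"
	"fmt"
	"path/filepath"
	"strings"
)

// cachedImagePath derives a cache file name that is unique per release
// payload, so an image extracted from one payload is never reused for a
// different payload that happens to ship the same base file name.
func cachedImagePath(cacheDir, releasePullSpec, baseName string) string {
	sum := sha256.Sum256([]byte(releasePullSpec))
	ext := filepath.Ext(baseName)
	stem := strings.TrimSuffix(baseName, ext)
	return filepath.Join(cacheDir, fmt.Sprintf("%s-%x%s", stem, sum[:8], ext))
}

func main() {
	// Example values, purely for illustration.
	fmt.Println(cachedImagePath(
		"/home/user/.cache/openshift-installer/image_cache",
		"quay.io/openshift-release-dev/ocp-release@sha256:example",
		"coreos-x86_64.iso"))
}
```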
This relates to the recovery of a cluster following an etcd outage.
The ingress path to kube-apiserver is:
```
───────────> VIP ─────────────────> Local HAProxy ────┬─> kube-apiserver-master-0
   (managed by keepalived)                            │
                                                      ├─> kube-apiserver-master-1
                                                      │
                                                      └─> kube-apiserver-master-2
```
Each master is running an HAProxy which load balances between the 3 kube-apiservers. Each HAProxy is running health checks against each kube-apiserver, and will add or remove it from the available pool based on its health.
We only use keepalived to ensure that HAProxy is not a single point of failure. It is the job of keepalived to ensure that incoming traffic is being directed to an HAProxy which is functioning correctly.
The current health check we are using for keepalived involves polling /readyz against the local HAProxy. While this seems intuitively correct, it is in fact testing the wrong thing: it is testing whether the kube-apiserver it connects to is functioning correctly. However, this is not the purpose of keepalived. HAProxy runs health checks against kube-apiserver backends. keepalived simply selects a correctly functioning HAProxy.
This becomes important during recovery from an outage. When none of the kube-apiservers are healthy this health check will fail continuously, and the API VIP will move uselessly between masters. However the situation is much worse when only one of the kube-apiservers is up. In this case there is a high probability that it is overloaded and at least rate limiting incoming connections. This may lead us to fail the keepalived health check and fail the VIP over to the next HAProxy. This will cause all open kube-apiserver connections to reset, even the established ones. This increases the load on the kube-apiserver and increases the probability that the health check will fail again.
Ideally the keepalived health check would check only the health of HAProxy itself, not the health of the pool of kube-apiservers. In practice it will probably never be necessary to move the VIP while the master is up, regardless of the health of the cluster. A network partition affecting HAProxy would already be handled by VRRP between the masters, so it may be sufficient to check that the local HAProxy pod is healthy.
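To illustrate the distinction, here is a minimal probe that checks only that the local HAProxy is accepting connections, without proxying /readyz to any backend. In the real deployment this would be a keepalived check script rather than Go, and the frontend port used here is an assumption:
```go
package main

import (
	"fmt"
	"net"
	"os"
	"time"
)

// Exits 0 if the local HAProxy accepts a TCP connection, non-zero otherwise.
// It deliberately does not issue /readyz through HAProxy, so a fully degraded
// kube-apiserver pool does not cause the VIP to flap between masters.
func main() {
	const haproxyFrontend = "127.0.0.1:9445" // assumed port, for illustration only
	conn, err := net.DialTimeout("tcp", haproxyFrontend, 2*time.Second)
	if err != nil {
		fmt.Fprintf(os.Stderr, "haproxy not accepting connections: %v\n", err)
		os.Exit(1)
	}
	conn.Close()
}
```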
This is a clone of issue OCPBUGS-1627. The following is the description of the original issue:
—
Description of problem:
Two issues arise when setting a user-defined folder in failureDomain.
1. The installer gets an error when setting folder to the path of a user-defined folder in failureDomain.
failureDomains setting in install-config.yaml:
failureDomains: - name: us-east-1 region: us-east zone: us-east-1a server: xxx topology: datacenter: IBMCloud computeCluster: /IBMCloud/host/vcs-mdcnc-workload-1 networks: - multi-zone-qe-dev-1 datastore: multi-zone-ds-1 folder: /IBMCloud/vm/qe-jima - name: us-east-2 region: us-east zone: us-east-2a server: xxx topology: datacenter: IBMCloud computeCluster: /IBMCloud/host/vcs-mdcnc-workload-2 networks: - multi-zone-qe-dev-1 datastore: multi-zone-ds-2 folder: /IBMCloud/vm/qe-jima - name: us-east-3 region: us-east zone: us-east-3a server: xxx topology: datacenter: IBMCloud computeCluster: /IBMCloud/host/vcs-mdcnc-workload-3 networks: - multi-zone-qe-dev-1 datastore: workload_share_vcsmdcncworkload3_joYiR folder: /IBMCloud/vm/qe-jima - name: us-west-1 region: us-west zone: us-west-1a server: ibmvcenter.vmc-ci.devcluster.openshift.com topology: datacenter: datacenter-2 computeCluster: /datacenter-2/host/vcs-mdcnc-workload-4 networks: - multi-zone-qe-dev-1 datastore: workload_share_vcsmdcncworkload3_joYiR
Error message in terraform after completing ova image import:
DEBUG vsphereprivate_import_ova.import[0]: Still creating... [1m40s elapsed] DEBUG vsphereprivate_import_ova.import[3]: Creation complete after 1m40s [id=vm-367860] DEBUG vsphereprivate_import_ova.import[1]: Creation complete after 1m49s [id=vm-367863] DEBUG vsphereprivate_import_ova.import[0]: Still creating... [1m50s elapsed] DEBUG vsphereprivate_import_ova.import[2]: Still creating... [1m50s elapsed] DEBUG vsphereprivate_import_ova.import[2]: Still creating... [2m0s elapsed] DEBUG vsphereprivate_import_ova.import[0]: Still creating... [2m0s elapsed] DEBUG vsphereprivate_import_ova.import[2]: Creation complete after 2m2s [id=vm-367862] DEBUG vsphereprivate_import_ova.import[0]: Still creating... [2m10s elapsed] DEBUG vsphereprivate_import_ova.import[0]: Creation complete after 2m20s [id=vm-367861] DEBUG data.vsphere_virtual_machine.template[0]: Reading... DEBUG data.vsphere_virtual_machine.template[3]: Reading... DEBUG data.vsphere_virtual_machine.template[1]: Reading... DEBUG data.vsphere_virtual_machine.template[2]: Reading... DEBUG data.vsphere_virtual_machine.template[3]: Read complete after 1s [id=42054e33-85d6-e310-7f4f-4c52a73f8338] DEBUG data.vsphere_virtual_machine.template[1]: Read complete after 2s [id=42053e17-cc74-7c89-f5d1-059c9030ecc7] DEBUG data.vsphere_virtual_machine.template[2]: Read complete after 2s [id=4205019f-26d8-f9b4-ac0c-2c073fd70b35] DEBUG data.vsphere_virtual_machine.template[0]: Read complete after 2s [id=4205eaf2-c727-c647-ad44-bd9ad7023c56] ERROR ERROR Error: error trying to determine parent targetFolder: folder '/IBMCloud/vm//IBMCloud/vm' not found ERROR ERROR with vsphere_folder.folder["IBMCloud-/IBMCloud/vm/qe-jima"], ERROR on main.tf line 61, in resource "vsphere_folder" "folder": ERROR 61: resource "vsphere_folder" "folder" { ERROR ERROR failed to fetch Cluster: failed to generate asset "Cluster": failure applying terraform for "pre-bootstrap" stage: failed to create cluster: failed to apply Terraform: exit status 1 ERROR ERROR Error: error trying to determine parent targetFolder: folder '/IBMCloud/vm//IBMCloud/vm' not found ERROR ERROR with vsphere_folder.folder["IBMCloud-/IBMCloud/vm/qe-jima"], ERROR on main.tf line 61, in resource "vsphere_folder" "folder": ERROR 61: resource "vsphere_folder" "folder" { ERROR ERROR
2. The installer panics when setting folder to a user-defined folder name in failureDomains.
Failure domain in install-config.yaml:
failureDomains: - name: us-east-1 region: us-east zone: us-east-1a server: xxx topology: datacenter: IBMCloud computeCluster: /IBMCloud/host/vcs-mdcnc-workload-1 networks: - multi-zone-qe-dev-1 datastore: multi-zone-ds-1 folder: qe-jima - name: us-east-2 region: us-east zone: us-east-2a server: xxx topology: datacenter: IBMCloud computeCluster: /IBMCloud/host/vcs-mdcnc-workload-2 networks: - multi-zone-qe-dev-1 datastore: multi-zone-ds-2 folder: qe-jima - name: us-east-3 region: us-east zone: us-east-3a server: xxx topology: datacenter: IBMCloud computeCluster: /IBMCloud/host/vcs-mdcnc-workload-3 networks: - multi-zone-qe-dev-1 datastore: workload_share_vcsmdcncworkload3_joYiR folder: qe-jima - name: us-west-1 region: us-west zone: us-west-1a server: xxx topology: datacenter: datacenter-2 computeCluster: /datacenter-2/host/vcs-mdcnc-workload-4 networks: - multi-zone-qe-dev-1 datastore: workload_share_vcsmdcncworkload3_joYiR
panic error message in installer:
INFO Obtaining RHCOS image file from 'https://rhcos.mirror.openshift.com/art/storage/releases/rhcos-4.12/412.86.202208101039-0/x86_64/rhcos-412.86.202208101039-0-vmware.x86_64.ova?sha256='
INFO The file was found in cache: /home/user/.cache/openshift-installer/image_cache/rhcos-412.86.202208101039-0-vmware.x86_64.ova. Reusing...
panic: runtime error: index out of range [1] with length 1goroutine 1 [running]:
github.com/openshift/installer/pkg/tfvars/vsphere.TFVars({{0xc0013bd068, 0x3, 0x3}, {0xc000b11dd0, 0x12}, {0xc000b11db8, 0x14}, {0xc000b11d28, 0x14}, {0xc000fe8fc0, ...}, ...})
/go/src/github.com/openshift/installer/pkg/tfvars/vsphere/vsphere.go:79 +0x61b
github.com/openshift/installer/pkg/asset/cluster.(*TerraformVariables).Generate(0x1d1ed360, 0x5?)
/go/src/github.com/openshift/installer/pkg/asset/cluster/tfvars.go:847 +0x4798
Based on the explanation of the folder field, it looks like a folder name should be OK. If using a folder name is not allowed, the installer needs to validate the folder and update the explain output.
```
sh-4.4$ ./openshift-install explain installconfig.platform.vsphere.failureDomains.topology.folder
KIND:     InstallConfig
VERSION:  v1

RESOURCE: <string>
  folder is the name or inventory path of the folder in which the virtual machine is created/located.
```
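If folder names (as opposed to full inventory paths) are meant to be accepted, the installer would need a normalisation along these lines before building the Terraform folder resources. This is an illustrative sketch, not the actual installer code:
```go
package main

import (
	"fmt"
	"path"
	"strings"
)

// resolveFolderPath returns the vSphere inventory path for a failure domain's
// folder value, accepting either a full inventory path ("/IBMCloud/vm/qe-jima")
// or a bare folder name ("qe-jima") relative to the datacenter's vm folder.
func resolveFolderPath(datacenter, folder string) string {
	if strings.HasPrefix(folder, "/") {
		return folder // already a full inventory path; do not prefix it again
	}
	return path.Join("/", datacenter, "vm", folder)
}

func main() {
	fmt.Println(resolveFolderPath("IBMCloud", "/IBMCloud/vm/qe-jima")) // /IBMCloud/vm/qe-jima
	fmt.Println(resolveFolderPath("IBMCloud", "qe-jima"))              // /IBMCloud/vm/qe-jima
}
```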
Version-Release number of selected component (if applicable):
4.12.0-0.nightly-2022-09-20-095559
How reproducible:
always
Steps to Reproduce:
see description
Actual results:
installation has errors when set user-defined folder
Expected results:
installation is successful when set user-defined folder
Additional info:
Description of problem:
We get the below error when upgrading to OCP 4.12 from 4.9 -> 4.10 -> 4.11.
MacBook-Pro:~ jianzhang$ oc get clusterversion NAME VERSION AVAILABLE PROGRESSING SINCE STATUS version 4.11.0-0.nightly-2022-08-24-091058 True True 4h Unable to apply 4.12.0-0.nightly-2022-08-24-053339: the workload openshift-operator-lifecycle-manager/package-server-manager cannot roll out - lastTransitionTime: "2022-08-25T04:47:36Z" lastUpdateTime: "2022-08-25T04:47:36Z" message: 'pods "package-server-manager-85b6dc4d89-sdzcc" is forbidden: violates PodSecurity "restricted:v1.24": seccompProfile (pod or container "package-server-manager" must set securityContext.seccompProfile.type to "RuntimeDefault" or "Localhost")' reason: FailedCreate status: "True" type: ReplicaFailure
Version-Release number of selected component (if applicable):
MacBook-Pro:~ jianzhang$ oc exec catalog-operator-c5c655d5c-b9lcn -- olm --version
OLM version: 0.19.0
git commit: 8a984d41acc67c0bc9bfe807fadeef23f83abd44
How reproducible:
always
Steps to Reproduce:
1. Install OCP 4.11.0-0.nightly-2022-08-24-091058
2. Upgrade it to 4.12.0-0.nightly-2022-08-24-053339
Actual results:
The cluster upgrade is blocked; we get the errors described above.
Expected results:
The upgrade to 4.12 from older OCP versions (4.5, 4.9) succeeds.
Additional info:
MacBook-Pro:~ jianzhang$ oc get deployment package-server-manager -o yaml apiVersion: apps/v1 kind: Deployment metadata: annotations: deployment.kubernetes.io/revision: "5" include.release.openshift.io/ibm-cloud-managed: "true" include.release.openshift.io/self-managed-high-availability: "true" include.release.openshift.io/single-node-developer: "true" creationTimestamp: "2022-08-25T00:14:08Z" generation: 5 labels: app: package-server-manager name: package-server-manager namespace: openshift-operator-lifecycle-manager ownerReferences: - apiVersion: config.openshift.io/v1 kind: ClusterVersion name: version uid: 3fd29082-0e76-4b09-988e-78cb5fc7c8b5 resourceVersion: "169028" uid: c8f7cbe2-4f82-40ce-9468-817ffefa903f spec: progressDeadlineSeconds: 600 replicas: 1 revisionHistoryLimit: 10 selector: matchLabels: app: package-server-manager strategy: rollingUpdate: maxSurge: 25% maxUnavailable: 25% type: RollingUpdate template: metadata: annotations: target.workload.openshift.io/management: '{"effect": "PreferredDuringScheduling"}' creationTimestamp: null labels: app: package-server-manager spec: containers: - args: - --name - $(PACKAGESERVER_NAME) - --namespace - $(PACKAGESERVER_NAMESPACE) command: - /bin/psm - start env: - name: PACKAGESERVER_NAME value: packageserver - name: PACKAGESERVER_IMAGE value: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d49e1e27114f4b719bc8f3c222b2c5934d3b8028c79ec8e2bd288f6e9b5b3d5c - name: PACKAGESERVER_NAMESPACE valueFrom: fieldRef: apiVersion: v1 fieldPath: metadata.namespace - name: RELEASE_VERSION value: 4.12.0-0.nightly-2022-08-24-053339 image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d49e1e27114f4b719bc8f3c222b2c5934d3b8028c79ec8e2bd288f6e9b5b3d5c imagePullPolicy: IfNotPresent livenessProbe: failureThreshold: 3 httpGet: path: /healthz port: 8080 scheme: HTTP initialDelaySeconds: 30 periodSeconds: 10 successThreshold: 1 timeoutSeconds: 1 name: package-server-manager readinessProbe: failureThreshold: 3 httpGet: path: /healthz port: 8080 scheme: HTTP initialDelaySeconds: 30 periodSeconds: 10 successThreshold: 1 timeoutSeconds: 1 resources: requests: cpu: 10m memory: 50Mi securityContext: allowPrivilegeEscalation: false capabilities: drop: - ALL terminationMessagePath: /dev/termination-log terminationMessagePolicy: FallbackToLogsOnError dnsPolicy: ClusterFirst nodeSelector: kubernetes.io/os: linux node-role.kubernetes.io/master: "" priorityClassName: system-cluster-critical restartPolicy: Always schedulerName: default-scheduler securityContext: runAsNonRoot: true serviceAccount: olm-operator-serviceaccount serviceAccountName: olm-operator-serviceaccount terminationGracePeriodSeconds: 30 tolerations: - effect: NoSchedule key: node-role.kubernetes.io/master operator: Exists - effect: NoExecute key: node.kubernetes.io/unreachable operator: Exists tolerationSeconds: 120 - effect: NoExecute key: node.kubernetes.io/not-ready operator: Exists tolerationSeconds: 120 status: availableReplicas: 1 conditions: - lastTransitionTime: "2022-08-25T03:14:20Z" lastUpdateTime: "2022-08-25T03:14:20Z" message: Deployment has minimum availability. 
reason: MinimumReplicasAvailable status: "True" type: Available - lastTransitionTime: "2022-08-25T04:47:36Z" lastUpdateTime: "2022-08-25T04:47:36Z" message: 'pods "package-server-manager-85b6dc4d89-sdzcc" is forbidden: violates PodSecurity "restricted:v1.24": seccompProfile (pod or container "package-server-manager" must set securityContext.seccompProfile.type to "RuntimeDefault" or "Localhost")' reason: FailedCreate status: "True" type: ReplicaFailure - lastTransitionTime: "2022-08-25T04:57:37Z" lastUpdateTime: "2022-08-25T04:57:37Z" message: ReplicaSet "package-server-manager-85b6dc4d89" has timed out progressing. reason: ProgressDeadlineExceeded status: "False" type: Progressing observedGeneration: 5 readyReplicas: 1 replicas: 1 unavailableReplicas: 1
Description of problem:
When all projects are selected, the workloads list page and details page show inconsistent HorizontalPodAutoscaler actions.
Version-Release number of selected component (if applicable):
4.12.0-0.nightly-2022-07-25-010250
How reproducible:
Always
Steps to Reproduce:
Actual results:
Expected results:
Additional info:
https://prow.ci.openshift.org/view/gs/origin-ci-test/logs/periodic-ci-openshift-release-master-ci-4.12-upgrade-from-stable-4.11-e2e-gcp-sdn-upgrade/1588454035726274560 in the skipped test shows:
: [sig-scheduling][Early] The openshift-console console pods [apigroup:console.openshift.io should be scheduled on different nodes [Suite:openshift/conformance/parallel]
Reason: skipped because the following required API groups are missing: console.openshift.io should be scheduled on different nodes [Suite:openshift/conformance/parallel
The apigroup has no closing bracket.
I'd disabled Telemetry for the bulk of the CI fleet in OTA-740. But that led to many failures for:
[sig-instrumentation] Prometheus when installed on the cluster should report telemetry if a cloud.openshift.com token is present [Late] [Skipped:Disconnected] [Suite:openshift/conformance/parallel]
We should extend the checks for Telemetry enablement to include telemeterClient.enabled in the monitoring-specific ConfigMap, as well as the previously-checked pull-secret token.
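A sketch of the extended check described above: it looks at both the previously-checked pull-secret token and telemeterClient.enabled in the cluster-monitoring-config ConfigMap. The struct shape here is simplified, and the exact wiring into the origin test is not shown.
```go
package e2e

import (
	"context"
	"encoding/json"

	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"sigs.k8s.io/yaml"
)

// telemetryEnabled reports whether Telemetry should be expected on the cluster:
// the pull secret must carry a cloud.openshift.com token AND the
// cluster-monitoring-config ConfigMap must not set telemeterClient.enabled: false.
func telemetryEnabled(ctx context.Context, client kubernetes.Interface) (bool, error) {
	// Previously-checked condition: a token for cloud.openshift.com in the pull secret.
	secret, err := client.CoreV1().Secrets("openshift-config").Get(ctx, "pull-secret", metav1.GetOptions{})
	if err != nil {
		return false, err
	}
	var dockerCfg struct {
		Auths map[string]json.RawMessage `json:"auths"`
	}
	if err := json.Unmarshal(secret.Data[".dockerconfigjson"], &dockerCfg); err != nil {
		return false, err
	}
	if _, ok := dockerCfg.Auths["cloud.openshift.com"]; !ok {
		return false, nil
	}

	// New condition: telemeterClient.enabled must not be false in the
	// monitoring-specific ConfigMap. A missing ConfigMap or key means enabled.
	cm, err := client.CoreV1().ConfigMaps("openshift-monitoring").Get(ctx, "cluster-monitoring-config", metav1.GetOptions{})
	if apierrors.IsNotFound(err) {
		return true, nil
	}
	if err != nil {
		return false, err
	}
	var cfg struct {
		TelemeterClient struct {
			Enabled *bool `json:"enabled"`
		} `json:"telemeterClient"`
	}
	if err := yaml.Unmarshal([]byte(cm.Data["config.yaml"]), &cfg); err != nil {
		return false, err
	}
	if cfg.TelemeterClient.Enabled != nil && !*cfg.TelemeterClient.Enabled {
		return false, nil
	}
	return true, nil
}
```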
Description of problem:
When the log line number is too big, the number overlaps with the cut-off line in the log viewer.
Version-Release number of selected component (if applicable):
4.12.0-0.nightly-2022-08-15-150248
How reproducible:
Always
Steps to Reproduce:
1. Go to a pod log page with lots of logs, such as a pod in the openshift-cluster-version namespace. Check the log line numbers.
2.
3.
Actual results:
1. When the line number is too big, it overlaps with the cut-off line.
Expected results:
1. Should have no overlaps in logs
Additional info: