Note: this page shows the Feature-Based Change Log for a release
These features were completed when this image was assembled
1. Proposed title of this feature request
Add runbook_url to alerts in the OCP UI
2. What is the nature and description of the request?
If an alert includes a runbook_url label, then it should appear in the UI for the alert as a link.
3. Why does the customer need this? (List the business requirements here)
Customer can easily reach the alert runbook and be able to address their issues.
4. List any affected packages or components.
As a user, I should be able to configure CSI driver to have a storage topology.
In the console-operator repo we need to add the `capability.openshift.io/console` annotation to all the manifests that the operator either contains or creates on the fly.
Manifests are currently present in /bindata and /manifest directories.
Here is an example of the insights-operator change.
Here is the overall enhancement doc.
Feature Overview
Provide CSI drivers to replace all the in-tree cloud provider drivers we currently have. These drivers will probably be released as Tech Preview versions first, before being promoted to GA.
Goals
Requirements
Requirement | Notes | isMvp? |
---|---|---|
Framework for CSI driver | TBD | Yes |
Drivers should be available to install both in disconnected and connected mode | | Yes |
Drivers should upgrade from release to release without any impact | | Yes |
Drivers should be installable via CVO (when in-tree plugin exists) | | |
Out of Scope
This work will only cover the drivers themselves; it will not include:
Background and strategic fit
In a future Kubernetes release (currently 1.21), in-tree cloud provider drivers will be deprecated and replaced with CSI equivalents. We need the drivers created so that we can continue to support these ecosystems appropriately.
Assumptions
Customer Considerations
Customers will need to be able to use the storage they want.
Documentation Considerations
This Epic is to track the GA of this feature
As an OCP user, I want images for GCP Filestore CSI Driver and Operator, so that I can install them on my cluster and utilize GCP Filestore shares.
On new installations, we should make the StorageClass created by the CSI operator the default one.
However, we shouldn't do that on an upgrade scenario. The main reason is that users might have set a different quota on the CSI driver Storage Class.
Exit criteria:
This Epic tracks the GA of this feature
Epic Goal
On new installations, we should make the StorageClass created by the CSI operator the default one.
However, we shouldn't do that on an upgrade scenario. The main reason is that users might have set a different quota on the CSI driver Storage Class.
Exit criteria:
OCP/Telco Definition of Done
Epic Template descriptions and documentation.
Rebase openshift-controller-manager to k8s 1.24
4.11 MVP Requirements
Out of scope use cases (that are part of the Kubeframe/factory project):
Questions to be addressed:
As an OpenShift infrastructure owner, I want to deploy OpenShift clusters with dual-stack IPv4/IPv6
As an OpenShift infrastructure owner, I want to deploy OpenShift clusters with single-stack IPv6
IPv6 and dual-stack clusters are requested often by customers, especially Telco customers. Working with dual-stack clusters is a requirement for many, and it is also a transition path into single-stack IPv6 clusters, which for some of our users is the final destination.
Karim's work proving how the agent-based installer can deploy IPv6: IPv6 deploy with agent-based installer
For dual-stack installations, the agent-cluster-install.yaml must have both an IPv4 and an IPv6 subnet in networking.MachineNetwork, or assisted-service will throw an error. This field is in InstallConfig, but it must be added to agent-cluster-install in its Generate().
For IPv4-only and IPv6-only installs, setting the MachineNetwork is not needed, but it also does not cause problems if it is set, so it should be fine to set it at all times.
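As an illustration, a minimal sketch of where a dual-stack machineNetwork stanza might sit inside agent-cluster-install.yaml; the CIDRs and metadata are placeholders and other required fields are omitted.

```shell
# Hedged sketch only: CIDRs are placeholders; fields unrelated to networking are omitted.
cat <<'EOF' > agent-cluster-install.yaml
apiVersion: extensions.hive.openshift.io/v1beta1
kind: AgentClusterInstall
metadata:
  name: example-agent-cluster-install
spec:
  networking:
    machineNetwork:
      - cidr: 192.168.111.0/24
      - cidr: 2620:52:0:1302::/64
EOF
```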
Instead of fmt.Errorf, use a logging library to log the errors and debug information.
Currently we allow the assisted-service to generate the InfraEnv ID automatically when the InfraEnv is created. The agents then have to fetch the list of InfraEnvs from the service to get the ID. This is suboptimal in a number of ways and won't be possible at all once we have authentication enabled on the assisted-service API.
Instead, modify assisted-service to accept an environment variable that contains a fixed InfraEnv ID. Any new InfraEnv created will use this ID (this has the desirable side effect that there can be only one InfraEnv).
Pre-generate a random ID in the command-line tool and store it in the configuration of both the agent and the assisted-service in the ISO.
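A minimal sketch of what the command-line tool could do, assuming the environment variable is called INFRA_ENV_ID; the variable name and file locations are assumptions, not the actual implementation.

```shell
# Pre-generate a fixed InfraEnv ID and record it for both sides baked into the ISO.
INFRA_ENV_ID=$(uuidgen)
echo "INFRA_ENV_ID=${INFRA_ENV_ID}" >> assisted-service.env   # read by assisted-service
echo "INFRA_ENV_ID=${INFRA_ENV_ID}" >> agent.env              # read by the agent
```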
Using podman kube play from a systemd service isn't ideal in terms of process monitoring, and it makes it hard to do things like attaching volumes. Split the containers out into separate containers (which can all still be in the same pod) that are started by their own systemd services. This will mean decomposing the ConfigMap that passes settings.
A cli subcommand that waits for the cluster to come up. This should be able to reuse the code from the regular openshift-install wait-for install-complete command largely unchanged, but if the k8s API is not available it may be because we're still running the assisted part of the installation, so it probably needs to fall back to checking for that. I'm not sure what assumptions the existing installer command makes about when it is safe to run. Ideally we would keep behaviour relatively consistent.
The service start-cluster-installation fails on its ConditionPathExists check even though the path has been created.
[core@master-0 ~]$ sudo systemctl status start-cluster-installation.service
● start-cluster-installation.service - Service that starts cluster installation
   Loaded: loaded (/etc/systemd/system/start-cluster-installation.service; enabled; vendor preset: enabled)
   Active: inactive (dead)
Condition: start condition failed at Wed 2022-05-11 04:40:43 UTC; 32s ago
           └─ ConditionPathExists=/etc/assisted-service/node0 was not met
Also, when the ConditionPathExists error is fixed, the service later fails with:
start-cluster-installation.sh[2533]: jq: error (at <stdin>:0): Cannot index number with string "status"
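A possible defensive fix, assuming the script pipes a cluster object from the assisted-service API into jq; the URL, variable names, and script shape are assumptions inferred from the error above.

```shell
# Only index .status when the response is actually a JSON object, so a bare
# number (or error payload) does not abort the script.
status=$(curl -s "${ASSISTED_SERVICE_URL}/api/assisted-install/v2/clusters/${CLUSTER_ID}" \
  | jq -r 'if type == "object" then .status else empty end')
```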
A cli subcommand that:
As a deployer, I want to be able to:
so that I can achieve
Currently the Assisted Service generates the credentials by running the ignition generation step of the openshift-installer. This is why the credentials are only retrievable from the REST API towards the end of the installation.
In the BILLI usage, which takes down the assisted service before the installation is complete, there is no obvious point at which to alert the user that they should retrieve the credentials. This means that we either need to:
This requires/does not require a design proposal.
This requires/does not require a feature gate.
Check that the cluster is ready for installation and send the appropriate REST API call to trigger the installation.
Fix the unwanted API call to set API_VIP in case of SNO cluster in start-cluster-installation.service.
{"code":"400","href":"","id":400,"kind":"Error","reason":"API VIP cannot be set with User Managed Networking"}
Create a complete golang implementation of AGENT-37 and place the code in the assisted-service repo. A new binary should be created in the assisted-service image. The binary will be used in the create-cluster-and-infra-env service.
OCP/Telco Definition of Done
Epic Template descriptions and documentation.
Support user input consisting of just InstallConfig and AgentConfig
Modify the agent-config to accept NMState config for each host.
This could be directly inline, or referenced from a file (either explicitly or by implicitly inferring the filename). This was initially TBD; we decided to go with the `AgentConfig embeds install time node-specific configuration` option: https://docs.google.com/document/d/1vCy0LikVPhbGIHF494NHTYsfu85fOiOicR3oB1vlEWI/edit#
Using the NMState data provided, generate the equivalent NMStateConfig manifests in cluster-manifests.
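A hedged sketch of what the embedded, per-host NMState section of agent-config.yaml might look like under the chosen option; the apiVersion, field names, and values are illustrative assumptions, since the exact schema was still being decided when this story was written.

```shell
# Illustrative only: field names are assumptions, not the final format.
cat <<'EOF' > agent-config.yaml
apiVersion: v1alpha1
kind: AgentConfig
metadata:
  name: example-cluster
hosts:
  - hostname: master-0
    interfaces:
      - name: eno1
        macAddress: 00:ef:44:21:e6:a5
    networkConfig:          # inline NMState for this host
      interfaces:
        - name: eno1
          type: ethernet
          state: up
          ipv4:
            enabled: true
            dhcp: true
EOF
```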
If we make the ZTP manifest assets depend on the install-config asset, the install config will effectively be required (and the installer will launch into the interactive CLI questionnaire if it is not present).
We want to use the install-config if it is present, and just use the ZTP manifests if those are present instead. (Note: this appears to conflict with what AGENT-135 says, so one of these stories might be wrong.)
The installer team has more details and can probably suggest a design.
Validate the initial config files for the agent installer, ensuring that all the required fields are present and well defined
Given an install-config, convert it to the ZTP manifests that are used to directly populate the Ignition.
This document contains a list of fields and how they match up: https://docs.google.com/document/d/1S4OluK1c-CIma9hmEylPay9ugcqKrD64S7DgiYpufqE/edit
If the node0 IP is specified in AgentConfig, it takes precedence over the selection from NMStateConfigs; otherwise, we keep the same heuristic we have now to choose.
Given an install-config, generate the mirroring config assets (registries.conf and ca-bundle.crt) from the data in it.
As an OpenShift infrastructure owner, I want to deploy a cluster zero with RHACM or MCE and have the required components installed when the installation is completed
BILLI makes it easier to deploy a cluster zero. BILLI users know at installation time what the purpose of their cluster is when they plan the installation. Today, day-2 steps are necessary to install operators, and users, especially when automating installations, want to finish the installation flow with their required components already installed.
As a customer, I want to be able to:
so that I can achieve
Description of criteria:
We are only allowing the user to provide extra manifests to install MCE at this time. We are not adding an option to "install mce" on the command line (or UI)
This requires/does not require a design proposal.
This requires/does not require a feature gate.
As a customer, I want to be able to:
so that I can achieve
Description of criteria:
We are only allowing the user to provide extra manifests to install MCE at this time. We are not adding an option to "install mce" on the command line (or UI)
This requires/does not require a design proposal.
This requires/does not require a feature gate.
Set the ClusterDeployment CRD to deploy OpenShift in FIPS mode and make sure that after deployment the cluster is set in that mode
In order to install FIPS-compliant clusters, we need to make sure that install-config + agent-config based deployments take into account the FIPS config in the install-config.
This task is about passing the config to AgentClusterInstall so it makes it into the ISO. Once there, AGENT-374 will give it to assisted-service.
Ability to perform disconnected first cluster installation in the automated flow
The Core OS ISO can be extracted from the release payload using a command like:
oc image extract --file=/coreos/coreos-x86_64.iso quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1dc3c2a644f62049ea4a03fddb9305bc2b929405bf979b7f5e720cfadf327b54
Where the SHA points to the machine-os-images container in the release payload (which can be obtained using oc adm release info --image-for=machine-os-images). Both of these commands require the pull secret for the cluster to be available in your podman config.
We'll need to use equivalent code (hopefully imported from oc or the same library it uses) to fetch the base ISO using the supplied pull secret in the ZTP manifests and store it as an Asset.
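Put together, the shell equivalent of the flow described above looks roughly like this; the release pullspec is a placeholder.

```shell
# Resolve the machine-os-images pullspec from the release payload, then pull
# the embedded CoreOS ISO out of it. Requires the cluster pull secret in your
# podman config.
RELEASE_IMAGE=quay.io/openshift-release-dev/ocp-release:4.12.0-x86_64   # placeholder
MACHINE_OS_IMAGES=$(oc adm release info --image-for=machine-os-images "${RELEASE_IMAGE}")
oc image extract --file=/coreos/coreos-x86_64.iso "${MACHINE_OS_IMAGES}"
```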
When installing in a disconnected environment and the registries.conf and ca-bundle files have been loaded, these files should be provided to assisted-service as a mount of the mirror/ dir. Assisted-service will update its ignition config from these mounted files.
In order to configure the registry for disconnected installs, the following assets should be created:
RegistriesConfig (read from mirror/registries.conf)
CABundleCertificates (read from mirror/ca-bundle.crt)
Podman creates a pause container on the hosts for the service pod as follows:
$ sudo podman ps
87a02f9ace39 registry.access.redhat.com/ubi8/pause:latest 58 minutes ago Up 58 minutes ago 0.0.0.0:8080->8080/tcp, 0.0.0.0:8090->8090/tcp, 0.0.0.0:8888->8888/tcp 27f9183bfbd9-infra
We should check if this image needs to be mirrored, and figure out if we need to change dev-scripts or add an entry to registries.conf.
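If mirroring does turn out to be required, a registries.conf entry along these lines would likely be needed; the mirror hostname is a placeholder and this is a sketch, not the agreed fix.

```shell
# Append a mirror entry for the pause image used by podman's infra container.
cat <<'EOF' >> /etc/containers/registries.conf
[[registry]]
location = "registry.access.redhat.com/ubi8/pause"

[[registry.mirror]]
location = "local-mirror.example.com:5000/ubi8/pause"
EOF
```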
We won't be shipping the assisted-ui container. At this point it is blocking the disconnected work, since we don't have an OpenShift container for it in the payload, so it's time to remove it.
CI - CI is running, tests are automated and merged.
Release Enablement <link to Feature Enablement Presentation>
DEV - Upstream code and tests merged: <link to meaningful PR or GitHub Issue>
DEV - Upstream documentation merged: <link to meaningful PR or GitHub Issue>
DEV - Downstream build attached to advisory: <link to errata>
QE - Test plans in Polarion: <link or reference to Polarion>
QE - Automated tests merged: <link or reference to automated tests>
DOC - Downstream documentation merged: <link to meaningful PR>
As a (user persona), I want to be able to:
so that I can achieve
Description of criteria:
Detail about what is specifically not being delivered in the story
This requires/does not require a design proposal.
This requires/does not require a feature gate.
As a first step for the assets integration, the create image command will need to fetch the required ZTP manifest files from the cluster-manifests folder.
This will allow us to:
1) Get the manifest files from the right location
2) Seamlessly integrate the create image command with the create cluster-manifests one, as the tasks related to assets generation are still in progress
3) Keep the create image command fully working until the assets generation is completed (users will still be able to manually create/edit the assets in the cluster-manifests folder); see the layout sketch below
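A sketch of the kind of layout the create image command would read; the file names are assumptions based on the ZTP manifests discussed elsewhere in this epic.

```shell
ls cluster-manifests/
# agent-cluster-install.yaml   cluster-deployment.yaml   cluster-image-set.yaml
# infraenv.yaml                nmstateconfig.yaml        pull-secret.yaml
```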
Add a subcommand to create the ephemeral ISO.
Create Agent ISO and Agent Ignition assets in the installer, and use them to generate a customized ISO.
This story is just for implementing the mechanics, filling in the ignition will be left to another story.
Using code from the installer (not code from fleeting), populate the Ignition asset with the data built in to the installer binary.
Currently we use a separate embed.FS (inherited from fleeting) to load the data files to go into the ignition. We should get rid of this and use the same method as the rest of the installer. We should also use the installer's code to e.g. do templating and convert to ignition format and throw away the fleeting code.
Currently it's possible to specify the release version to be installed via the ClusterImageSet manifests.
Since we're working from within the openshift installer, the accepted version should be the one hard-coded in the installer binary (or overridden by the env var).
Using git-filter-repo, rewrite the commits in fleeting to place files in their correct locations in the installer. The resulting commits can then be merged into the agent branch of the installer with a pull request.
Data files should be moved to e.g. data/data/agent, appending the suffix .template to any that are templated.
Code files that are needed by the installer should be moved to appropriate directories that have the agent team in the OWNERS.
Keep the git-filter-repo script so that development can continue in parallel on fleeting until we are ready to switch CI over to the installer implementation.
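A hedged sketch of the kind of git-filter-repo invocation this describes; the source paths and rename targets are assumptions, not the actual script.

```shell
# Rewrite fleeting's history so files land where the installer expects them;
# the result can then be merged into the installer's agent branch via a PR.
git clone https://github.com/openshift/fleeting.git && cd fleeting
git filter-repo \
  --path data/ --path-rename data/:data/data/agent/ \
  --path pkg/  --path-rename pkg/:pkg/agent/
```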
Create installer Assets corresponding to each ZTP manifest, and move the code for reading them from disk into the respective assets.
From the initial install-config.yaml + agent-config.yaml, generate all the ZTP manifest files required by the create image command.
Dependency: install-config
*Note*: we could evaluate further splitting this task into distinct manifest assets
Create an asset for AgentClusterInstall. Parent assets are install-config.yaml and agent-config.yaml.
Currently assisted-service chooses one of the nodes that reach out to it to be the bootstrap node. We need to understand the choice mechanism and make it reliably choose the node that we want to be node0.
The bootstrap node already waits for the other nodes before rebooting; we need to make sure that this wait is sufficient for assisted-service as well. Prevent the assisted-service from rebooting the node it is running on until the following conditions are true:
We can try having it reboot into bootstrap while making sure that assisted-service runs after the reboot, but ideally we'd want the node to start bootstrapping without needing the reboot (as per customer/PM demands to minimize reboots).
In the context of METAL-10 there was a proposal to add a file that the agent would check for, such that the presence of this file would inhibit a reboot. We could possibly use the same mechanism here to avoid the need for large-scale changes to how assisted-service itself works (assisted-service would still need to delete the file at the appropriate time, but that is a less-invasive change). However, there are timeouts that have to be considered, so changes to the state machine may be required.
Note that we do want to continue to install to disk on the assisted-service host in parallel with the others, since this is on the critical path slowing down all deployments. Only the reboot should be delayed.
Single-node deployments are an exception to this.
Epic Goal
Why is this important?
Acceptance Criteria
Previous Work (Optional)
Done Checklist
References
As an admin, I want to be able to:
so that I can achieve
The agent based installation for Zero Touch provisioning has a Custom Resource Defined to configure the static networking of the nodes that will be provisioned. E.g:
apiVersion: agent-install.openshift.io/v1beta1
kind: NMStateConfig
metadata:
  name: mgmt-spoke1
  namespace: mgmt-spoke1
  labels:
    cluster-name: mgmt-spoke1
spec:
  config:
    interfaces:
      - name: bond0
        type: bond
        link-aggregation:
          mode: active-backup
          options:
            miimon: "140"
          slaves:
            - eth0
            - eth1
        state: up
        ipv4:
          enabled: true
          address:
            - ip: 192.168.123.151
              prefix-length: 24
          dhcp: false
        ipv6:
          enabled: false
    dns-resolver:
      config:
        server:
          - 192.168.1.1
    routes:
      config:
        - destination: 0.0.0.0/0
          next-hop-address: 192.168.1.1
          next-hop-interface: bond0
          table-id: 254
  interfaces:
    - name: "eth0"
      macAddress: "00:00:00:00:00:00"
    - name: "eth1"
      macAddress: "00:00:00:00:00:11"
The NMState team is currently working on a Rust library that includes the gc command that assisted-service uses to generate all the configs and then load the one that matches the interfaces. We should reach out to Nick Carboni to check on assisted-service's progress in integrating the new library, and leverage the same code to make sure our ISO can use the same network configuration mechanism.
Description of criteria:
Detail about what is specifically not being delivered in the story
This requires/does not require a design proposal.
This requires/does not require a feature gate.
We currently support static IPs on Node 0, and this is required in order to get the common IP for the other nodes. We also need to support configuration of static IPs on all of the nodes even though they could also use DHCP for their addresses.
The infraenv controller fetches the NMStateConfigs from the kube-api. Since we don't have the kube-api, we need to read them from the manifests and incorporate them into the InfraEnvCreateParams to create the InfraEnv.
Acceptance criteria:
As an OpenShift infrastructure owner, I need to add host-specific configurations at install time, so that they are applied when the cluster installation is completed.
Especially, but not restricted to, on-prem deployments: hosts need specific configurations (beyond the individual host network configuration). Customers automating installs want to avoid day-2 configurations and node reboots, so applying configurations during the installation is a requirement for them. Examples of this are multipath and SCTP on bare metal nodes, where it's not always straightforward to do on day-2 and reboots are required.
There is no harm in supplying the “rd.multipath=default” argument on any host. The effect of this argument is to generate a default /etc/multipath.conf file and to enable the multipathd service. The assisted-service now adds these to its discovery ISOs, and we will do the same with the agent ISO.
Necessary for SCTP
Manifests are placed in <install-config-dir>/openshift and copied to the ISO. (Previously we assumed this would be <install-config-dir>/manifests, but Andrea suggested that openshift would be more consistent.)
A client in the ISO submits the manifests through assisted-service API.
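As an example of such an extra manifest, a sketch of the SCTP module-load MachineConfig following the standard OpenShift documentation pattern; the file name and role label are assumptions.

```shell
# Place an SCTP module-load MachineConfig into the openshift/ manifests directory.
mkdir -p openshift
cat <<'EOF' > openshift/load-sctp-module.yaml
apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfig
metadata:
  name: load-sctp-module
  labels:
    machineconfiguration.openshift.io/role: worker
spec:
  config:
    ignition:
      version: 3.2.0
    storage:
      files:
        - path: /etc/modules-load.d/sctp-load.conf
          mode: 0644
          overwrite: true
          contents:
            source: data:,sctp
EOF
```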
REST
Get the ZTP extra manifests into the image and use the REST API below:
/v2/clusters/{cluster_id}/manifests
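A hedged sketch of what the in-ISO client's call could look like; the host, port, cluster ID, and exact request body shape are assumptions.

```shell
# Upload one extra manifest to assisted-service; the content is base64-encoded.
curl -X POST \
  "http://${ASSISTED_SERVICE_HOST}:8090/api/assisted-install/v2/clusters/${CLUSTER_ID}/manifests" \
  -H "Content-Type: application/json" \
  -d "{\"folder\": \"openshift\", \"file_name\": \"load-sctp-module.yaml\", \"content\": \"$(base64 -w0 openshift/load-sctp-module.yaml)\"}"
```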
Acceptance criteria:
If it is not generated from AgentConfig, we should at least generate a skeleton
As a user I would like to see all the events that the autoscaler creates, even duplicates. Having the CAO set this flag will allow me to continue to see these events.
We have carried a patch for the autoscaler that would enable the duplication of events. This patch can now be dropped because the upstream added a flag for this behavior in https://github.com/kubernetes/autoscaler/pull/4921
Add GA support for deploying OpenShift to IBM Public Cloud
Close the existing gaps to make OpenShift on IBM Cloud VPC (Next Gen2) Generally Available
This epic tracks the changes needed to the ingress operator to support IBM DNS Services for private clusters.
Currently in OpenShift we do not support distributing hotfix packages to cluster nodes. In time-sensitive situations, a RHEL hotfix package can be the quickest route to resolving an issue.
Before we ship OCP CoreOS layering in https://issues.redhat.com/browse/MCO-165 we need to switch the format of what is currently `machine-os-content` to be the new base image.
The overall plan is:
As an OCP CoreOS layering developer, having telemetry data about the number of clusters using osImageURL will help me understand how broadly this feature is being used and improve it accordingly.
Acceptance Criteria:
After https://github.com/openshift/os/pull/763 is in the release image, teach the MCO how to use it. This is basically:
Assumption
Doc: https://docs.google.com/document/d/1sXCaRt3PE0iFmq7ei0Yb1svqzY9bygR5IprjgioRkjc/edit
Customers do not pay Red Hat more to run HyperShift control planes and supporting infrastructure than Standalone control planes and supporting infrastructure.
Assumption
Run cluster-storage-operator (CSO) + AWS EBS CSI driver operator + AWS EBS CSI driver control-plane Pods in the management cluster, run the driver DaemonSet in the hosted cluster.
More information here: https://docs.google.com/document/d/1sXCaRt3PE0iFmq7ei0Yb1svqzY9bygR5IprjgioRkjc/edit
As a HyperShift Cluster Instance Admin, I want my changes to ClusterCSIDriver and Storage instances to be overwritten with correct values, so that I cannot break my guest cluster by using wrong values.
Exit criteria:
As a HyperShift Cluster Instance Admin, I want to run cluster-storage-operator (CSO) in the management cluster, so that the guest cluster runs just my applications.
Exit criteria:
As a HyperShift Cluster Instance Admin, I want to run the AWS EBS CSI driver operator + the control plane of the CSI driver in the management cluster, so that the guest cluster runs just my applications.
Exit criteria:
As an OCP support engineer, I want the same guest cluster storage-related objects in the output of "hypershift dump cluster --dump-guest-cluster" as in "oc adm must-gather", so I can debug storage issues easily.
must-gather collects: storageclasses, persistentvolumes, volumeattachments, csidrivers, csinodes, volumesnapshotclasses, volumesnapshotcontents.
hypershift collects none of this; the relevant code is here: https://github.com/openshift/hypershift/blob/bcfade6676f3c344b48144de9e7a36f9b40d3330/cmd/cluster/core/dump.go#L276
Exit criteria:
Customers do not pay Red Hat more to run HyperShift control planes and supporting infrastructure than Standalone control planes and supporting infrastructure.
Assumption
cluster-snapshot-controller-operator is running on the CP.
More information here: https://docs.google.com/document/d/1sXCaRt3PE0iFmq7ei0Yb1svqzY9bygR5IprjgioRkjc/edit
As an OpenShift developer, I want cluster-csi-snapshot-controller-operator to use existing controllers in library-go, so I don't need to maintain yet more code that does the same thing as library-go.
Note: if this refactoring introduces any new conditions, we must make sure that 4.11 snapshot controller clears them to support downgrade! This will need 4.11 BZ + z-stream update!
Similarly, if some conditions become obsolete / not managed by any controller, they must be cleared by 4.12 operator.
Exit criteria:
As a HyperShift Cluster Instance Admin, I want to run cluster-csi-snapshot-controller-operator in the management cluster, so that the guest cluster runs just my applications.
Exit criteria:
As a HyperShift Cluster Instance Admin, I want to run cluster-csi-snapshot-controller-operator in the management cluster, so that the guest cluster runs just my applications.
To run the snapshot operator, control-plane-operator must:
Exit criteria:
CNCC was moved to the management cluster and it should use proxy settings defined for the management cluster.
When this image was assembled, these features were not yet completed. Therefore, only the Jira cards included here are part of this release.
OCP/Telco Definition of Done
Epic Template descriptions and documentation.
As a developer building container images on OpenShift
I want to specify that my build should run without elevated privileges
So that builds do not run with elevated privileges (i.e., not as root from the host's perspective)
No QE required for Dev Preview. OpenShift regression testing will verify that existing behavior is not impacted.
We will need to document how to enable this feature, with sufficient warnings regarding Dev Preview.
This likely warrants an OpenShift blog post.
OLM would have to support a mechanism like podAffinity that allows multiple architecture values to be specified, enabling it to pin operators to worker nodes of the matching architecture.
Ref: https://github.com/openshift/enhancements/pull/1014
Cut a new release of the OLM API and update OLM API dependency version (go.mod) in OLM package; then
Bring the upstream changes from OLM-2674 to the downstream olm repo.
A/C:
- New OLM API version release
- OLM API dependency updated in OLM Project
- OLM Subscription API changes downstreamed
- OLM Controller changes downstreamed
- Changes manually tested on Cluster Bot
We have a set of images
that should become multiarch images. This should be done both in upstream and downstream.
As a reference, we have built those images internally as multiarch and made them available as
They can be consumed by the Assisted Service pod via the following env:
- name: AGENT_DOCKER_IMAGE
  value: registry.redhat.io/rhai-tech-preview/assisted-installer-agent-rhel8:latest
- name: CONTROLLER_IMAGE
  value: registry.redhat.io/rhai-tech-preview/assisted-installer-reporter-rhel8:latest
- name: INSTALLER_IMAGE
  value: registry.redhat.io/rhai-tech-preview/assisted-installer-rhel8:latest
We drive OpenShift cross-market customer success and new customer adoption with constant improvements and feature additions to the existing capabilities of our OpenShift Core Networking (SDN and Network Edge). This feature captures that natural progression of the product.
There are definitely grey areas, but in general:
Questions to be addressed:
Goal: Provide queryable metrics and telemetry for cluster routes and sharding in an OpenShift cluster.
Problem: Today we test OpenShift performance and scale with best-guess or anecdotal evidence for the number of routes that our customers use. The best practice for a large number of routes in a cluster is to shard; however, we have no visibility into whether and how customers are using sharding.
Why is this important? These metrics will inform our performance and scale testing, documented cluster limits, and how customers are using sharding for best practice deployments.
Dependencies (internal and external):
Prioritized epics + deliverables (in scope / not in scope):
Not in scope:
Estimate (XS, S, M, L, XL, XXL):
Previous Work:
Open questions:
Acceptance criteria:
Epic Done Checklist:
Description:
As described in the Design Doc, the following information needs to be exported from the Cluster Ingress Operator:
Design 2 will be implemented as part of this story.
Acceptance Criteria:
Description:
As described in the Metrics to be sent via telemetry section of the Design Doc, the following metrics need to be sent from the OpenShift cluster to Red Hat premises:
The metrics should be allowlisted on the cluster side.
The steps described in Sending metrics via telemetry need to be followed, specifically step 5.
Depends on CFE-478.
Acceptance Criteria:
This is an epic bucket for all activities surrounding the creation of a declarative approach to releasing and maintaining OLM catalogs.
When working on this Epic, it's important to keep in mind this other potentially related Epic: https://issues.redhat.com/browse/OLM-2276
Enhance the veneer rendering to be able to read the input veneer data from stdin, via a pipe, in a manner similar to https://dev.to/napicella/linux-pipes-in-golang-2e8j
Then the command could be used in a manner similar to many k8s examples, like:
```shell
opm alpha render-veneer semver -o yaml < infile > outfile
```
Upstream issue link: https://github.com/operator-framework/operator-registry/issues/1011
Jira Description
As an OPM maintainer, I want to downstream the PR for OCP 4.12 and backport it to OCP 4.11 so that IIB will NOT be impacted by the changes when it upgrades the OPM version to use the next/future opm upstream release (v1.25.0).
Summary / Background
IIB (the downstream service that manages the indexes) uses the upstream version. If they bump the OPM version to the next/future (v1.25.0) release with this change before the downstream images are updated, then the process to manage the indexes downstream will face issues, and that will impact the distributions.
Acceptance Criteria
Definition of Ready
Definition of Done
We need to continue to maintain specific areas within storage; this epic is to capture that effort and track it across releases.
Goals
Requirements
Requirement | Notes | isMvp? |
---|---|---|
Telemetry | | No |
Certification | | No |
API metrics | | No |
Out of Scope
n/a
Background and strategic fit
With the expected scale of our customer base, we want to keep load of customer tickets / BZs low
Assumptions
Customer Considerations
Documentation Considerations
Notes
In progress:
High prio:
Unsorted
The end of general support for vSphere 6.7 will be on October 15, 2022, so vSphere 6.7 will be deprecated for 4.11.
We want to encourage vSphere customers to upgrade to vSphere 7 in OCP 4.11 since VMware is EOLing (general support) for vSphere 6.7 in Oct 2022.
We want the cluster to be set Upgradeable=false and to have a strong alert pointing to our docs / requirements.
related slack: https://coreos.slack.com/archives/CH06KMDRV/p1647541493096729
Traditionally we did these updates as bugfixes, because we did them after the feature freeze (FF). Trying no-feature-freeze in 4.12. We will try to do as much as we can before FF, but we're quite sure something will slip past FF as usual.
There is a new driver release 5.0.0 since the last rebase that includes snapshot support:
https://github.com/kubernetes-sigs/ibm-vpc-block-csi-driver/releases/tag/v5.0.0
Rebase the driver on v5.0.0 and update the deployments in ibm-vpc-block-csi-driver-operator.
There are no corresponding changes in ibm-vpc-node-label-updater since the last rebase.
Update the driver to the latest upstream release. Notify QE and docs with any new features and important bugfixes that need testing or documentation.
(Using separate cards for each driver because these updates can be more complicated)
Update all CSI sidecars to the latest upstream release.
This includes update of VolumeSnapshot CRDs in https://github.com/openshift/cluster-csi-snapshot-controller-operator/tree/master/assets
Update the driver to the latest upstream release. Notify QE and docs with any new features and important bugfixes that need testing or documentation.
This includes ibm-vpc-node-label-updater!
(Using separate cards for each driver because these updates can be more complicated)
Update the driver to the latest upstream release. Notify QE and docs with any new features and important bugfixes that need testing or documentation.
(Using separate cards for each driver because these updates can be more complicated)
Update all OCP and kubernetes libraries in storage operators to the appropriate version for OCP release.
This includes (but is not limited to):
Operators:
Update the driver to the latest upstream release. Notify QE and docs with any new features and important bugfixes that need testing or documentation.
(Using separate cards for each driver because these updates can be more complicated)
Update the driver to the latest upstream release. Notify QE and docs with any new features and important bugfixes that need testing or documentation.
(Using separate cards for each driver because these updates can be more complicated)
Update the driver to the latest upstream release. Notify QE and docs with any new features and important bugfixes that need testing or documentation.
(Using separate cards for each driver because these updates can be more complicated)
tldr: three basic claims, the rest is explanation and one example
While bugs are an important metric, fixing bugs is different than investing in maintainability and debugability. Investing in fixing bugs will help alleviate immediate problems, but doesn't improve the ability to address future problems. You (may) get a code base with fewer bugs, but when you add a new feature, it will still be hard to debug problems and interactions. This pushes a code base towards stagnation where it gets harder and harder to add features.
One alternative is to ask teams to produce ideas for how they would improve future maintainability and debugability instead of focusing on immediate bugs. This would produce designs that make problem determination, bug resolution, and future feature additions faster over time.
I have a concrete example of one such outcome of focusing on bugs vs. quality. We have resolved many bugs about communication failures with ingress by finding problems with point-to-point network communication. We have fixed the individual bugs, but have not improved the code for future debugging. In so doing, we chase many hard-to-diagnose problems across the stack. The alternative is to create a point-to-point network connectivity capability; this would immediately improve bug resolution and stability (detection) for kuryr, OVS, legacy SDN, network-edge, kube-apiserver, openshift-apiserver, authentication, and console. Bug fixing does not produce the same impact.
We need more investment in our future selves. Saying, "teams should reserve this" doesn't seem to be universally effective. Perhaps an approach that directly asks for designs and impacts and then follows up by placing the items directly in planning and prioritizing against PM feature requests would give teams the confidence to invest in these areas and give broad exposure to systemic problems.
Relevant links:
Epic Template descriptions and documentation.
Enable the chaos plugin https://coredns.io/plugins/chaos/ in our CoreDNS configuration so that we can use a DNS query to easily identify what DNS pods are responding to our requests.
Requirement | Notes | isMvp? |
---|---|---|
CI - MUST be running successfully with test automation | This is a requirement for ALL features. | YES |
Release Technical Enablement | Provide necessary release enablement details and documents. | YES |
This Section:
This Section: What does the person writing code, testing, documenting need to know? What context can be provided to frame this feature.
Questions to be addressed:
As a developer, I want to make status.HostIP for Pods visible in the Pod details page of the OCP Web Console. Currently there is no way to view the node IP for a Pod in the OpenShift Web Console. When viewing a Pod in the console, the field status.HostIP is not visible.
Acceptance criteria:
As a console user I want to have option to:
For Deployments we will add the 'Restart rollout' action button. This action will PATCH the Deployment object's 'spec.template.metadata.annotations' block, by adding 'openshift.io/restartedAt: <actual-timestamp>' annotation. This will restart the deployment, by creating a new ReplicaSet.
For DeploymentConfig we will add 'Retry rollout' action button. This action will PATCH the latest revision of ReplicationController object's 'metadata.annotations' block by setting 'openshift.io/deployment/phase: "New"' and removing openshift.io/deployment.cancelled and openshift.io/deployment.status-reason.
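For reference, a sketch of the equivalent CLI call for the Deployment case; the deployment name is a placeholder.

```shell
# Same effect as the console's 'Restart rollout' button: patch the pod template
# annotation so a new ReplicaSet is rolled out.
oc patch deployment my-deployment --type=merge -p \
  "{\"spec\":{\"template\":{\"metadata\":{\"annotations\":{\"openshift.io/restartedAt\":\"$(date -u +%Y-%m-%dT%H:%M:%SZ)\"}}}}}"
```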
Acceptance Criteria:
BACKGROUND:
OpenShift console will be updated to allow rollout restart deployment from the console itself.
Currently, from the OpenShift console, for the resource “deploymentconfigs” we can only start and pause the rollout, and for the resource “deployment” we can only resume the rollout. None of the resources (deployment & deployment config) has this option to restart the rollout. So, that is the reason why the customer wants this functionality to perform the same action from the CLI as well as the OpenShift console.
The customer wants developers who are not fluent with the oc tool and terminal utilities to be able to use the console instead of the terminal to restart a deployment, just like they do through the CLI with the command “oc rollout restart deploy/<deployment-name>“.
Usually, when developers change the ConfigMap that a deployment uses, they have to restart the pods. Currently, the developers have to use the oc rollout restart deployment command. The customer wants a button/menu to perform the same action from the console as well.
Design
Doc: https://docs.google.com/document/d/1i-jGtQGaA0OI4CYh8DH5BBIVbocIu_dxNt3vwWmPZdw/edit
When OCP is performing a cluster upgrade, the user should be notified about this fact.
There are two possibilities for how to surface the cluster upgrade to users:
AC:
Note: We need to decide whether we want to distinguish this particular notification with a different color. CCing Ali Mobrem.
Created from: https://issues.redhat.com/browse/RFE-3024
oc-mirror is a GA product as of OpenShift 4.11.
The goal of this feature is to solve any future customer request for new features or capabilities in OC mirror
Testing is one of the main pillars of production-grade software. It helps validate and flag issues early on, before the code is shipped into productive landscapes. Code changes, no matter how small, might lead to bugs and outages; the best way to validate against bugs is to write proper tests, and to run those tests we need a foundation for a test infrastructure. Finally, to close the circle, automation of these tests and their corresponding builds helps reduce errors and save a lot of time.
Note: Sync with the Developer productivity teams might be required to understand infra requirements especially for our first HyperShift infrastructure backend, AWS.
Context:
This is a placeholder epic to capture all the e2e scenarios that we want to test in CI in the long term. Anything which is a TODO here should at minimum be validated by QE as it is developed.
DoD:
Every supported scenario is e2e CI tested.
Scenarios:
DoD:
At the moment the HO in CI automatically pulls changes via an image stream.
There's a risk that it gets broken because the changes pulled require running the install manifests.
This should be automated.
https://coreos.slack.com/archives/G01QS0P2F6W/p1666084771459559
Pre-Work Objectives
Since some of our requirements from the ACM team will not be available for the 4.12 timeframe, the team should work on anything we can get done in the scope of the console repo so that when the required items are available in 4.13, we can be more nimble in delivering GA content for the Unified Console Epic.
Overall GA Key Objective
Providing our customers with a single, simplified user experience (Hybrid Cloud Console) that is extensible, can run locally or in the cloud, and is capable of managing the fleet as well as deep-diving into a single cluster.
Why do customers want this?
Why do we want this?
Phase 2 Goal: Productization of the united Console
As a developer I would like to disable clusters like *KS that we can't support for multi-cluster (for instance because we can't authenticate). The ManagedCluster resource has a vendor label that we can use to know if the cluster is supported.
cc Ali Mobrem Sho Weimer Jakub Hadvig
UPDATE 9/20/22: we want an allow-list with OpenShift, ROSA, ARO, ROKS, and OpenShift Dedicated.
Acceptance criteria:
RHEL CoreOS should be updated to RHEL 9.2 sources to take advantage of newer features, hardware support, and performance improvements.
Requirement | Notes | isMvp? |
---|---|---|
CI - MUST be running successfully with test automation | This is a requirement for ALL features. | YES |
Release Technical Enablement | Provide necessary release enablement details and documents. | YES |
Questions to be addressed:
PROBLEM
We would like to improve our signal for RHEL9 readiness by increasing internal engineering engagement and external partner engagement on our community OpenShift offering, OKD.
PROPOSAL
Adding OKD running on SCOS (CentOS Stream CoreOS) brings the community offering closer to what a partner or an internal engineering team might expect on OCP.
ACCEPTANCE CRITERIA
Image has been switched/included:
DEPENDENCIES
The SCOS build payload.
RELATED RESOURCES
OKD+SCOS proposal: https://docs.google.com/presentation/d/1_Xa9Z4tSqB7U2No7WA0KXb3lDIngNaQpS504ZLrCmg8/edit#slide=id.p
OKD+SCOS work draft: https://docs.google.com/document/d/1cuWOXhATexNLWGKLjaOcVF4V95JJjP1E3UmQ2kDVzsA/edit
Acceptance Criteria
A stable OKD on SCOS is built and available to the community every sprint.
This comes up when installing ipi-on-aws on arm64 with the custom payload build at quay.io/aleskandrox/okd-release:4.12.0-0.okd-centos9-full-rebuild-arm64 that is using SCOS as the machine-os-content image.
```
[root@ip-10-0-135-176 core]# crictl logs c483c92e118d8
2022-08-11T12:19:39+00:00 [cnibincopy] FATAL ERROR: Unsupported OS ID=scos
```
The probable fix has to land on https://github.com/openshift/cluster-network-operator/blob/master/bindata/network/multus/multus.yaml#L41-L53
Requirement | Notes | isMvp? |
---|---|---|
CI - MUST be running successfully with test automation | This is a requirement for ALL features. | YES |
Release Technical Enablement | Provide necessary release enablement details and documents. | YES |
Hosted control planes with HyperShift introduce a new architecture for OpenShift where the control plane and workers are decoupled and are represented by two distinct APIs, namely the `HostedCluster` API to represent cluster and control plane configuration and `NodePool` to expose lifecycle configuration.
Today we set the channel-specific field to empty, as the default CVO configuration does not apply to this new architecture. As a result, Cincinnati and the update graph do not apply. The alternative provided today with HyperShift is guard-railing versions within the HyperShift operator code base; while this works short-term, it can result in added complexity and even duplication long-term.
This effort aims to evaluate the changes required for parity between the existing CVO + Cincinnati and the newly introduced HyperShift architecture. Furthermore, one important outcome of this work is a clearly articulated spec of the proposed changes to achieve parity, or a suggestion of alternative methods.
A possible implementation of this would result in adding a channel field to the spec of the HostedCluster resource and reporting availableUpdates in its status.
We should also decide whether the same should be done for NodePools (channel + status)
OR whether we should adopt a simpler pattern:
The HostedCluster API needs to be updated to hold the required information for channel and available updates, so that the HyperShift operator (HSO) can copy those to the ClusterVersion resource and copy back status from it.
HyperShift came to life to serve multiple goals, some are main near-term, some are secondary that serve well long-term.
HyperShift opens up doors to penetrate the market. HyperShift enables true hybrid (CP and Workers decoupled, mixed IaaS, mixed Arch,...). An architecture that opens up more options to target new opportunities in the cloud space. For more details on this one check: Hosted Control Planes (aka HyperShift) Strategy [Live Document]
To bring hosted control planes to our customers, we need the means to ship it. Today, MCE is how HyperShift is shipped and installed so that customers can use it. There are two main customers for hosted control planes:
If you have noticed, MCE is the delivery mechanism for both management models. The difference between managed and self-managed is the consumer persona: for self-managed, it's the customer SRE; for managed, it's the RH SRE.
For us to ship HyperShift in the product (as hosted control planes) in either management model, there is a necessary readiness checklist that we need to satisfy. Below are the high-level requirements needed before GA:
Please also have a look at our What are we missing in Core HyperShift for GA Readiness? doc.
Multi-cluster is becoming an industry need today, not because that is where the trend is going but because it's the only viable path today to solve many of our customers' use cases. Below is some reasoning why multi-cluster is a NEED:
As a result, multi-cluster management is a defining category in the market where Red Hat plays a key role. Today Red Hat solves for multi-cluster via RHACM and MCE. The goal is to simplify fleet management complexity by providing a single pane of glass to observe, secure, police, govern, configure a fleet. I.e., the operand is no longer one cluster but a set, a fleet of clusters.
HyperShift's logically centralized architecture, as well as its native separation of concerns and superior cluster lifecycle management experience, makes it a great fit as the foundation of our multi-cluster management story.
Thus the following stories are important for HyperShift:
Refs:
HyperShift is the core engine that will be used to provide hosted control-planes for consumption in managed and self-managed.
Main user story: When life cycling clusters as a cluster service consumer via HyperShift core APIs, I want to use a stable/backward compatible API that is less susceptible to future changes so I can provide availability guarantees.
Ref: What are we missing in Core HyperShift for GA Readiness?
Customers do not pay Red Hat more to run HyperShift control planes and supporting infrastructure than Standalone control planes and supporting infrastructure.
Assumptions:
HyperShift - proposed cuts from data plane
When operating OpenShift clusters (for any OpenShift form factor) from MCE/ACM/OCM/CLI as a Cluster Service Consumer (RH managed SRE, or self-manage SRE/admin) I want to be able to migrate CPs from one hosting service cluster to another:
More information:
To understand usage patterns and inform our decision-making for the product, we need to be able to measure adoption and assess usage.
See Hosted Control Planes (aka HyperShift) Strategy [Live Document]
Whether it's managed or self-managed, it's pertinent to report health metrics to be able to create meaningful Service Level Objectives (SLOs) and alert on failures to meet our availability guarantees. This is especially important for our managed services path.
https://issues.redhat.com/browse/OCPPLAN-8901
HyperShift for managed services is a strategic company goal as it improves usability, feature, and cost competitiveness against other managed solutions, and because managed services/consumption-based cloud services is where we see the market growing (customers are looking to delegate platform overhead).
We should make sure our SD milestones are unblocked by the core team.
This feature reflects HyperShift core readiness to be consumed. When all related EPICs and stories in this EPIC are complete HyperShift can be considered ready to be consumed in GA form. This does not describe a date but rather the readiness of core HyperShift to be consumed in GA form NOT the GA itself.
- GA date for self-managed will be factoring in other inputs such as adoption, customer interest/commitment, and other factors.
- GA dates for ROSA-HyperShift are on track, tracked in milestones M1-7 (have a look at https://issues.redhat.com/browse/OCPPLAN-5771)
Epic Goal*
The goal is to split client certificate trust chains from the global Hypershift root CA.
Why is this important? (mandatory)
This is important to:
Scenarios (mandatory)
Provide details for user scenarios including actions to be performed, platform specifications, and user personas.
Dependencies (internal and external) (mandatory)
Hypershift team needs to provide us with code reviews and merge the changes we are to deliver
Contributing Teams(and contacts) (mandatory)
Acceptance Criteria (optional)
The service account CA bundle automatically injected into all pods cannot be used to authenticate any client certificate generated by the control plane.
Drawbacks or Risk (optional)
Risk: there is significant time pressure, as this should be delivered before the first stable HyperShift release.
Done - Checklist (mandatory)
AUTH-311 introduced an enhancement. Implement the signer separation described there.
The MCO should properly report its state in a way that's consistent and able to be understood by customers, troubleshooters, and maintainers alike.
Some customer cases have revealed scenarios where the MCO state reporting is misleading and therefore could be unreliable to base decisions and automation on.
Should this have a new docs section?
The MCO should properly report its state in a way that's consistent and able to be understood by customers, troubleshooters, and maintainers alike.
For this epic, "state" means "what is the MCO doing?" – so the goal here is to try to make sure that it's always known what the MCO is doing.
This includes:
While this probably crosses a little bit into the "status" portion of certain MCO objects, as some state is definitely recorded there, this probably shouldn't turn into a "better status reporting" epic. I'm interpreting "status" to mean "how is it going" so status is maybe a "detail attached to a state".
Exploration here: https://docs.google.com/document/d/1j6Qea98aVP12kzmPbR_3Y-3-meJQBf0_K6HxZOkzbNk/edit?usp=sharing
https://docs.google.com/document/d/17qYml7CETIaDmcEO-6OGQGNO0d7HtfyU7W4OMA6kTeM/edit?usp=sharing
The current property description is:
configuration represents the current MachineConfig object for the machine config pool.
But in a 4.12.0-ec.4 cluster, the actual semantics seem to be something closer to "the most recent rendered config that we completely leveled on". We should at least update the godocs to be more specific about the intended semantics. And perhaps consider adjusting the semantics?
This section includes Jira cards that are linked to an Epic, but the Epic itself is not linked to any Feature. These epics were completed when this image was assembled
This epic tracks "business as usual" requirements / enhancements / bug fixing of the Insights Operator.
Today the links point at a rule-scoped page, but that page lacks information about recommended resolution. You can click through by cluster ID to your specific cluster and get that recommendation advice, but it would be more convenient and less confusing for customers if we linked directly to the cluster-scoped recommendation page.
We can implement by updating the template here to be:
fmt.Sprintf("https://console.redhat.com/openshift/insights/advisor/clusters/%s?first=%s%%7C%s", clusterID, ruleIDStr, rec.ErrorKey)
or something like that.
Unknowns: the request is clear; the solution/implementation is to be further clarified.
This story only covers API components. We will create a separate story for other utility functions.
Today we are generating documentation for Console's Dynamic Plugin SDK in
frontend/packages/dynamic-plugin-sdk. We are missing ts-doc for a set of hooks and components.
We are generating the markdown from the dynamic-plugin-sdk using
yarn generate-doc
Here is the list of the API that the dynamic-plugin-sdk is exposing:
https://gist.github.com/spadgett/0ddefd7ab575940334429200f4f7219a
Acceptance Criteria:
Out of Scope:
The console has good error boundary components that are useful for dynamic plugins.
Exposing them will enable the plugins to get the same look and feel for handling React errors as the console.
The minimum requirement right now is to expose the ErrorBoundaryFallbackPage component from
https://github.com/openshift/console/blob/master/frontend/packages/console-shared/src/components/error/fallbacks/ErrorBoundaryFallbackPage.tsx
Currently the ConsolePlugins API version is v1alpha1. Since we are going GA with dynamic plugins we should be creating a v1 version.
This would require updates in following repositories:
AC:
NOTE: This story does not include the conversion webhook change which will be created as a follow on story
The extension `console.dashboards/overview/detail/item` doesn't constrain the content to fit the card.
The details-card has an expectation that a <dd> item will be the last item (for spacing between items). Our static details-card items use a component called 'OverviewDetailItem'. This isn't enforced in the extension and can cause undesired padding issues if they just do whatever they want.
I feel our approach here should be making the extension take the props of 'OverviewDetailItem' where 'children' is the new 'component'.
During the development of https://issues.redhat.com/browse/CONSOLE-3062, it was determined additional information is needed in order to assist a user when troubleshooting a Failed plugin (see https://github.com/openshift/console/pull/11664#issuecomment-1159024959). As it stands today, there is no data available to the console to relay to the user regarding why the plugin Failed. Presumably, a message should be added to NotLoadedDynamicPlugin to address this gap.
AC: Add `message` property to NotLoadedDynamicPluginInfo type.
We neither use nor support static plugin nav extensions anymore so we should remove the API in the static plugin SDK and get rid of related cruft in our current nav components.
AC: Remove static plugin nav extensions code. Check the navigation code for any references to the old API.
Following https://coreos.slack.com/archives/C011BL0FEKZ/p1650640804532309, it would be useful for us (network observability team) to have access to ResourceIcon in dynamic-plugin-sdk.
Currently ResourceLink is exported but not ResourceIcon
AC:
We should have a global notification or the `Console plugins` page (e.g., k8s/cluster/operator.openshift.io~v1~Console/cluster/console-plugins) should alert users when console operator `spec.managementState` is `Unmanaged` as changes to `enabled` for plugins will have no effect.
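A sketch of the condition the console could check before surfacing such a warning:

```shell
# Returns "Unmanaged" when changes to enabled plugins will have no effect.
oc get console.operator.openshift.io cluster -o jsonpath='{.spec.managementState}'
```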
`@openshift-console/plugin-shared` (NPM) is a package that will contain shared components that can be upversioned separately by the Plugins so they can keep core compatibility low but upversion and support more shared components as we need them.
This isn't documented today. We need to do that.
To align with https://github.com/openshift/dynamic-plugin-sdk, plugin metadata field dependencies as well as the @console/pluginAPI entry contained within should be made optional.
If a plugin doesn't declare the @console/pluginAPI dependency, the Console release version check should be skipped for that plugin.
when defining two proxy endpoints,
apiVersion: console.openshift.io/v1alpha1
kind: ConsolePlugin
metadata:
  ...
  name: forklift-console-plugin
spec:
  displayName: Console Plugin Template
  proxy:
    service:
      basePath: /
I get two proxy endpoints
/api/proxy/plugin/forklift-console-plugin/forklift-inventory
and
/api/proxy/plugin/forklift-console-plugin/forklift-must-gather-api
but both proxy to the `forklift-must-gather-api` service.
For example, a curl to
[server url]/api/proxy/plugin/forklift-console-plugin/forklift-inventory
will hit the `forklift-must-gather-api` service instead of the `forklift-inventory` service.
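For context, a spec that declares two distinct proxy endpoints would look roughly like the sketch below; the field layout follows our reading of the v1alpha1 proxy section and the namespaces/ports are placeholders. With such a spec, both generated `/api/proxy/plugin/forklift-console-plugin/<alias>` routes currently resolve to the same backing service.
```
apiVersion: console.openshift.io/v1alpha1
kind: ConsolePlugin
metadata:
  name: forklift-console-plugin
spec:
  displayName: Console Plugin Template
  proxy:
    - type: Service
      alias: forklift-inventory          # expected to map to .../forklift-console-plugin/forklift-inventory
      service:
        name: forklift-inventory
        namespace: konveyor-forklift     # placeholder namespace
        port: 8443
    - type: Service
      alias: forklift-must-gather-api    # expected to map to .../forklift-console-plugin/forklift-must-gather-api
      service:
        name: forklift-must-gather-api
        namespace: konveyor-forklift     # placeholder namespace
        port: 8443
```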
Move `frontend/public/components/nav` to `packages/console-app/src/components/nav` and address any issues resulting from the move.
There will be some expected lint errors relating to cyclical imports. These will require some refactoring to address.
Acceptance Criteria: Add missing API docs for the *Icon and *Status components in the API docs.
Based on API review CONSOLE-3145, we have decided to deprecate the following APIs:
cc Andrew Ballantyne Bryan Florkiewicz
Currently our `api.md` does not generate docs with "tags" (aka `@deprecated`) – we'll need to add that functionality to the `generate-doc.ts` script. See the code that works for `console-extensions.md`
This enhancement introduces support for provisioning and upgrading heterogeneous architecture clusters in phases.
We need to scan through the compute nodes and build a set of supported architectures from those. Each node on the cluster carries an architecture label, e.g. `kubernetes.io/arch=arm64`, `kubernetes.io/arch=amd64`, etc. Based on that set of supported architectures, the console will need to surface only those operators in the Operator Hub which are supported on our nodes. Each operator's PackageManifest contains labels that indicate the operator's supported architectures, e.g. `operatorframework.io/arch.s390x: supported`. An operator can be supported on multiple architectures, as sketched below.
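To make the inputs concrete, the labels involved look roughly like the excerpts below (node and package names are placeholders); the console would intersect the node architecture set with the architectures each PackageManifest declares as supported.
```
apiVersion: v1
kind: Node
metadata:
  name: worker-0                               # placeholder
  labels:
    kubernetes.io/arch: amd64
---
apiVersion: packages.operators.coreos.com/v1
kind: PackageManifest
metadata:
  name: example-operator                       # placeholder package name
  labels:
    operatorframework.io/arch.amd64: supported
    operatorframework.io/arch.arm64: supported
```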
AC:
OS and arch filtering: https://github.com/openshift/console/blob/2ad4e17d76acbe72171407fc1c66ca4596c8aac4/frontend/packages/operator-lifecycle-manager/src/components/operator-hub/operator-hub-items.tsx#L49-L86
@jpoulin is good to ask about heterogeneous clusters.
This enhancement introduces support for provisioning and upgrading heterogeneous architecture clusters in phases.
We need to scan through the compute nodes and build a set of supported architectures from those. Each node on the cluster has a label for architecture: e.g. kubernetes.io/arch=arm64, kubernetes.io/arch=amd64 etc. Based on the set of supported architectures console will need to surface only those operators in the Operator Hub, which are supported on our Nodes.
AC:
@jpoulin is good to ask about heterogeneous clusters.
An epic we can duplicate for each release to ensure we have a place to catch things we ought to be doing regularly but can tend to fall by the wayside.
As a developer, I want to be able to clean up the css markup after making the css / scss changes required for dark mode and remove any old unused css / scss content.
Acceptance criteria:
As a user, I want to be able to:
so that I can achieve
Description of criteria:
Detail about what is specifically not being delivered in the story
1. Proposed title of this feature request
Basic authentication for Helm Chart repository in helmchartrepositories.helm.openshift.io CRD.
2. What is the nature and description of the request?
As of v4.6.9, the HelmChartRepository CRD only supports client TLS authentication through spec.connectionConfig.tlsClientConfig.
3. Why do you need this? (List the business requirements here)
Basic authentication is widely used by many chart repository managers (Nexus OSS, Artifactory, etc.).
The Helm CLI also supports it via the helm repo add command.
https://helm.sh/docs/helm/helm_repo_add/
4. How would you like to achieve this? (List the functional requirements here)
Probably by extending the CRD:
```
spec:
  connectionConfig:
    username: username
    password:
      secretName: secret-name
```
The secret namespace should be openshift-config to align with the tlsClientConfig behavior.
5. For each functional requirement listed in question 4, specify how Red Hat and the customer can test to confirm the requirement is successfully implemented.
Trying to pull Helm charts from remote private chart repositories that have disabled anonymous access and offer basic authentication.
E.g.: https://github.com/sonatype/docker-nexus
As an OCP user, I would like to be able to install Helm charts from repos added to ODC with the basic authentication fields populated.
We need to support Helm installs for repos that have the basic authentication secret name and namespace set.
Updating the ProjectHelmChartRepository CRD is already done in a different story.
Supporting the HelmChartRepository CR: this feature will be scoped first to project/namespace-scoped repos.
<Defines what is included in this story>
If the new fields for basic auth are set in the repo CR, use those credentials when making API calls to Helm to install/upgrade charts. We will error out if the logged-in user does not have access to the secret referenced by the repo CR. If the basic auth fields are not present, we assume the repo is not authenticated. A hedged example is sketched below.
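A sketch of the intended shape, following the proposal earlier in this document; the exact field names (for example a combined secret reference versus separate username/password fields) may differ in the final CRD, and all names and URLs below are placeholders.
```
apiVersion: v1
kind: Secret
metadata:
  name: helm-repo-basic-auth         # placeholder; lives in the same namespace as the repo CR
type: Opaque
stringData:
  username: myuser
  password: mypassword
---
apiVersion: helm.openshift.io/v1beta1
kind: ProjectHelmChartRepository
metadata:
  name: private-repo                 # placeholder
spec:
  connectionConfig:
    url: https://nexus.example.com/repository/helm-charts   # placeholder repo URL
    basicAuthConfig:                 # assumed secret-reference field for basic auth
      name: helm-repo-basic-auth
```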
None
NA
I can list, install and update charts on authenticated repos from ODC
Needs Documentation both upstream and downstream
Needs new unit test covering repo auth
Dependencies identified
Blockers noted and expected delivery timelines set
Design is implementable
Acceptance criteria agreed upon
Story estimated
Unknown
Verified
Unsatisfied
ACCEPTANCE CRITERIA
NOTES
ACCEPTANCE CRITERIA
NOTES
This is a follow up Epic to https://issues.redhat.com/browse/MCO-144, which aimed to get in-place upgrades for Hypershift. This epic aims to capture additional work to focus on using CoreOS/OCP layering into Hypershift, which has benefits such as:
- removing or reducing the need for ignition
- maintaining feature parity between self-driving and managed OCP models
- adding additional functionality such as hotfixes
This is currently not implemented, and will require the MCD HyperShift mode to be adjusted to handle disruptionless upgrades like the regular MCD.
Right now in https://github.com/openshift/hypershift/pull/1258 you can only perform one upgrade at a time. Multiple upgrades will break due to controller logic
Properly create logic to handle manifest creation/updates and deletion, so the logic is more bulletproof
We plan to build Ironic Container Images using RHEL9 as base image in OCP 4.12
This is required because the ironic components have abandoned support for CentOS Stream 8 and Python 3.6/3.7 upstream during the most recent development cycle that will produce the stable Zed release, in favor of CentOS Stream 9 and Python 3.8/3.9
More info on RHEL8 to RHEL9 transition in OCP can be found at https://docs.google.com/document/d/1N8KyDY7KmgUYA9EOtDDQolebz0qi3nhT20IOn4D-xS4
update ironic software to pick up latest bug fixes
This is an API change and we will consider this as a feature request.
https://issues.redhat.com/browse/NE-799 Please check this for more details
No
N/A
Make sure that the CSI driver automatically updates oVirt credentials when they are updated in OpenShift.
In the CSI driver operator we should add the
withSecretHashAnnotation
call from library-go like this: https://github.com/openshift/aws-ebs-csi-driver-operator/blob/53ed27b2a0eaa655338da180a79897855b366ac7/pkg/operator/starter.go#L138
We need tests for the ovirt-csi-driver and the cluster-api-provider-ovirt. These tests help us to
Also, having dedicated tests on lower levels with a smaller scope (unit, integration, ...) has the following benefits:
Integration tests need to be implemented according to https://cluster-api.sigs.k8s.io/developer/testing.html#integration-tests using envtest.
As a user, I would like to be informed in an intuitive way, when quotas have been reached in a namespace
Refer below for more details
As a user, In the topology view, I would like to be updated intuitively if any of the deployments have reached quota limits
Refer below for more details
Provide a form driven experience to allow cluster admins to manage the perspectives to meet the ACs below.
We have heard the following requests from customers and developer advocates:
As an admin, I want to hide user perspective(s) based on the customization.
As an admin, I want to hide the admin perspective for non-privileged users or hide the developer perspective for all users
Based on the https://issues.redhat.com/browse/ODC-6730 enhancement proposal, it is required to extend the console configuration CRD to enable the cluster admins to configure this data in the console resource
Previous customization work:
As an admin, I want to be able to use a form driven experience to hide user perspective(s)
As an admin, I should be able to see a code snippet that shows how to add user perspectives
Based on the https://issues.redhat.com/browse/ODC-6732 enhancement proposal, the cluster admin can add user perspectives
To help the cluster admin configure the perspectives correctly, the developer console should provide a code snippet for the customization of the YAML resource (Console CRD), along the lines of the sketch below.
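A possible snippet for that documentation, assuming the perspectives customization lands under `spec.customization` as proposed in ODC-6732; the identifiers, field names, and state values here are illustrative, not the final API.
```
apiVersion: operator.openshift.io/v1
kind: Console
metadata:
  name: cluster
spec:
  customization:
    perspectives:
      - id: dev                    # developer perspective
        visibility:
          state: Disabled          # hide the perspective for all users
      - id: admin
        visibility:
          state: AccessReview      # illustrative: show only to users passing an access review
```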
Customize Perspective Enhancement PR: https://github.com/openshift/enhancements/pull/1205
Previous work:
Customers don't want their users to have access to some/all of the items which are available in the Developer Catalog. The request is to change access for the cluster, not per user or persona.
Provide a form driven experience to allow cluster admins easily disable the Developer Catalog, or one or more of the sub catalogs in the Developer Catalog.
Multiple customer requests.
We need to consider how this will work with subcatalogs which are installed by operators: VMs, Event Sources, Event Catalogs, Managed Services, Cloud based services
As a cluster-admin, I should be able to see a code snippet that shows how to enable sub-catalogs or the entire dev catalog.
Based on the https://issues.redhat.com/browse/ODC-6732 enhancement proposal, the cluster admin can add sub-catalog(s) from the Developer Catalog or the Dev catalog as a whole.
To help the cluster admin configure the sub-catalog list correctly, the developer console should provide a code snippet for the customization of the YAML resource (Console CRD), along the lines of the sketch below.
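An illustrative snippet for that documentation, assuming the sub-catalog configuration lands under `spec.customization.developerCatalog`; the `types` field, its state values, and the sub-catalog identifiers are assumptions based on the enhancement proposal, not the final API.
```
apiVersion: operator.openshift.io/v1
kind: Console
metadata:
  name: cluster
spec:
  customization:
    developerCatalog:
      types:
        state: Disabled            # assumed enum, e.g. Enabled | Disabled
        disabled:
          - HelmChart              # illustrative sub-catalog identifiers
          - EventSource
```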
Previous work:
As an admin, I want to hide/disable access to specific sub-catalogs in the developer catalog or the complete dev catalog for all users across all namespaces.
Based on the https://issues.redhat.com/browse/ODC-6732 enhancement proposal, it is required to extend the console configuration CRD to enable the cluster admins to configure this data in the console resource
Extend the "customization" spec type definition for the CRD in the openshift/api project
Previous customization work:
As an admin, I want to hide sub-catalogs in the developer catalog or hide the developer catalog completely based on the customization.
As an admin, I would like openshift-* namespaces with an operator to be labeled with security.openshift.io/scc.podSecurityLabelSync=true to ensure the continual functioning of operators without manual intervention. The label should only be applied to openshift-* namespaces with an operator (the presence of a ClusterServiceVersion resource) IF the label is not already present. This automation will help smooth functioning of the cluster and avoid frivolous operational events.
Context: As part of the PSA migration period, OpenShift will ship with the "label sync'er" - a controller that will automatically adjust PSA security profiles in response to the workloads present in the namespace. We can assume that not all operators (produced by Red Hat, the community or ISVs) will have successfully migrated their deployments in response to upstream PSA changes. The label sync'er will sync, by default, any namespace not prefixed with "openshift-"; for the others, an explicit label (security.openshift.io/scc.podSecurityLabelSync=true) is required to opt in to syncing.
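The end state for a qualifying namespace is simply the presence of the label, for example (the namespace name is a placeholder):
```
apiVersion: v1
kind: Namespace
metadata:
  name: openshift-example-operator          # placeholder: an openshift-* namespace containing a CSV
  labels:
    security.openshift.io/scc.podSecurityLabelSync: "true"
```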
A/C:
- OLM operator has been modified (downstream only) to label any unlabelled "openshift-" namespace in which a CSV has been created
- If a labeled namespace containing at least one non-copied csv becomes unlabelled, it should be relabelled
- The implementation should be done in a way to eliminate or minimize subsequent downstream sync work (it is ok to make slight architectural changes to the OLM operator in the upstream to enable this)
This epic tracks network tooling improvements for 4.12
A new framework and process should be developed to make sharing network tools with devs, support, and customers convenient. We are going to add some tools for OVN troubleshooting before OVN-Kubernetes becomes the default, some tools that came out of customer cases, and some more to help analyze and debug collected logs, based on the stable must-gather/sosreport format we now get thanks to the 4.11 Epic.
Our estimation for this Epic is 1 engineer * 2 Sprints
WHY:
This epic is important to help improve the time it takes our customers and our team to understand an issue within the cluster.
A focus of this epic is to develop tools to quickly allow debugging of a problematic cluster. This is crucial for the engineering team to help us scale. We want to provide a tool to our customers to help lower the cognitive burden to get at a root cause of an issue.
Alert if any of the ovn controllers has been disconnected from the southbound database for a period of time, using the metric ovn_controller_southbound_database_connected.
The metric updates every 2 minutes so please be mindful of this when creating the alert.
If the controller is disconnected for 10 minutes, fire an alert.
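A minimal sketch of such a rule, assuming it is shipped as a PrometheusRule; the alert name, namespace, and severity are placeholders, and the `for` duration reflects the 10-minute requirement and should be read together with the 2-minute metric resolution.
```
apiVersion: monitoring.coreos.com/v1
kind: PrometheusRule
metadata:
  name: ovn-controller-alerts                 # placeholder
  namespace: openshift-ovn-kubernetes
spec:
  groups:
    - name: ovn-controller
      rules:
        - alert: OVNControllerSouthboundDatabaseDisconnected   # placeholder alert name
          expr: ovn_controller_southbound_database_connected == 0
          for: 10m
          labels:
            severity: warning                 # placeholder severity
          annotations:
            summary: ovn-controller has been disconnected from the southbound database for more than 10 minutes.
```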
DoD: Merged to CNO and tested by QE
OCP/Telco Definition of Done
Epic Template descriptions and documentation.
<--- Cut-n-Paste the entire contents of this description into your new Epic --->
Add a SOCKS proxy to cluster-network-operator so egress IP can use gRPC to reach worker nodes.
With the introduction of gRPC as the means for determining the state of a given egress node, HyperShift should
be able to leverage the SOCKS proxy to learn the state of each egress node.
References relevant to this work:
1281-network-proxy
https://coreos.slack.com/archives/C01C8502FMM/p1658427627751939
https://github.com/openshift/hypershift/pull/1131/commits/28546dc587dc028dc8bded715847346ff99d65ea
This Epic is here to track the rebase we need to do when kube 1.25 is GA https://www.kubernetes.dev/resources/release/
Keeping this in mind can help us plan our time better. At the time of writing, GA is planned for August 23.
https://docs.google.com/document/d/1h1XsEt1Iug-W9JRheQas7YRsUJ_NQ8ghEMVmOZ4X-0s/edit --> this is the link for rebase help
We need to rebase cloud network config controller to 1.25 when the kube 1.25 rebase lands.
This section includes Jira cards that are linked to an Epic, but the Epic itself is not linked to any Feature. These epics were not completed when this image was assembled
Placeholder epic to track spontaneous tasks that do not deserve their own epic.
DoD:
"Immutable" fields are actually immutable a the API level via https://kubernetes.io/blog/2022/09/29/enforce-immutability-using-cel/
Changes made in METAL-1 open up opportunities to improve our handling of images by cleaning up redundant code that generates extra work for the user and extra load for the cluster.
We only need to run the image cache DaemonSet if there is a QCOW URL to be mirrored (effectively this means a cluster installed with 4.9 or earlier). We can stop deploying it for new clusters installed with 4.10 or later.
Currently, the image-customization-controller relies on the image cache running on every master to provide the shared hostpath volume containing the ISO and initramfs. The first step is to replace this with a regular volume and an init container in the i-c-c pod that extracts the images from machine-os-images. We can use the copy-metal -image-build flag (instead of -all used in the shared volume) to provide only the required images.
Once i-c-c has its own volume, we can switch the image extraction in the metal3 Pod's init container to use the -pxe flag instead of -all.
The machine-os-images init container for the image cache (not the metal3 Pod) can be removed. The whole image cache deployment is now optional and need only be started if provisioningOSDownloadURL is set (and in fact should be deleted if it is not).
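For reference, the condition that gates the image cache is whether the Provisioning resource carries a QCOW URL, roughly as in the sketch below (the URL is a placeholder); with the field unset, the cache deployment should not be created, and should be deleted if present.
```
apiVersion: metal3.io/v1alpha1
kind: Provisioning
metadata:
  name: provisioning-configuration
spec:
  provisioningNetwork: Disabled
  # the image cache is only needed when this is set (clusters installed with 4.9 or earlier):
  provisioningOSDownloadURL: http://example.com/rhcos-qemu.x86_64.qcow2.gz?sha256=abc123   # placeholder
```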
Description of the problem:
Cluster installation fails if the installation disk has LVM on RAID:
Host: test-infra-cluster-3cc862c9-master-0, reached installation stage Failed: failed executing nsenter [--target 1 --cgroup --mount --ipc --pid -- mdadm --stop /dev/md0], Error exit status 1, LastOutput "mdadm: Cannot get exclusive access to /dev/md0:Perhaps a running process, mounted filesystem or active volume group?"
How reproducible:
100%
Steps to reproduce:
1. Install a cluster while master nodes has disk with LVM on RAID (reproduces using test: https://gitlab.cee.redhat.com/ocp-edge-qe/kni-assisted-installer-auto/-/blob/master/api_tests/test_disk_cleanup.py#L97)
Actual results:
Installation failed
Expected results:
Installation success
Description of the problem:
When running assisted-installer on a machine where there is more than one volume group per physical volume, only the first volume group will be cleaned up. This leads to problems later and to errors such as
Failed - failed executing nsenter [--target 1 --cgroup --mount --ipc --pid -- pvremove /dev/sda -y -ff], Error exit status 5, LastOutput "Can't open /dev/sda exclusively. Mounted filesystem?
How reproducible:
Set up a VM with more than one volume group per physical volume. As an example, look at the following sample from a customer cluster.
List block devices /usr/bin/lsblk -o NAME,MAJ:MIN,SIZE,TYPE,FSTYPE,KNAME,MODEL,UUID,WWN,HCTL,VENDOR,STATE,TRAN,PKNAME NAME MAJ:MIN SIZE TYPE FSTYPE KNAME MODEL UUID WWN HCTL VENDOR STATE TRAN PKNAME loop0 7:0 125.9G loop xfs loop0 c080b47b-2291-495c-8cc0-2009ebc39839 loop1 7:1 885.5M loop squashfs loop1 sda 8:0 894.3G disk sda INTEL SSDSC2KG96 0x55cd2e415235b2db 1:0:0:0 ATA running sas |-sda1 8:1 250M part sda1 0x55cd2e415235b2db sda |-sda2 8:2 750M part ext2 sda2 3aa73c72-e342-4a07-908c-a8a49767469d 0x55cd2e415235b2db sda |-sda3 8:3 49G part xfs sda3 ffc3ccfe-f150-4361-8ae5-f87b17c13ac2 0x55cd2e415235b2db sda |-sda4 8:4 394.2G part LVM2_member sda4 Ua3HOc-Olm4-1rma-q0Ug-PtzI-ZOWg-RJ63uY 0x55cd2e415235b2db sda `-sda5 8:5 450G part LVM2_member sda5 W8JqrD-ZvaC-uNK9-Y03D-uarc-Tl4O-wkDdhS 0x55cd2e415235b2db sda `-nova-instance 253:0 3.1T lvm ext4 dm-0 d15e2de6-2b97-4241-9451-639f7b14594e running sda5 sdb 8:16 894.3G disk sdb INTEL SSDSC2KG96 0x55cd2e415235b31b 1:0:1:0 ATA running sas `-sdb1 8:17 894.3G part LVM2_member sdb1 6ETObl-EzTd-jLGw-zVNc-lJ5O-QxgH-5wLAqD 0x55cd2e415235b31b sdb `-nova-instance 253:0 3.1T lvm ext4 dm-0 d15e2de6-2b97-4241-9451-639f7b14594e running sdb1 sdc 8:32 894.3G disk sdc INTEL SSDSC2KG96 0x55cd2e415235b652 1:0:2:0 ATA running sas `-sdc1 8:33 894.3G part LVM2_member sdc1 pBuktx-XlCg-6Mxs-lddC-qogB-ahXa-Nd9y2p 0x55cd2e415235b652 sdc `-nova-instance 253:0 3.1T lvm ext4 dm-0 d15e2de6-2b97-4241-9451-639f7b14594e running sdc1 sdd 8:48 894.3G disk sdd INTEL SSDSC2KG96 0x55cd2e41521679b7 1:0:3:0 ATA running sas `-sdd1 8:49 894.3G part LVM2_member sdd1 exVSwU-Pe07-XJ6r-Sfxe-CQcK-tu28-Hxdnqo 0x55cd2e41521679b7 sdd `-nova-instance 253:0 3.1T lvm ext4 dm-0 d15e2de6-2b97-4241-9451-639f7b14594e running sdd1 sr0 11:0 989M rom iso9660 sr0 Virtual CDROM0 2022-06-17-18-18-33-00 0:0:0:0 AMI running usb
Now run the assisted installer and try to install an SNO node on this machine, you will find that the installation will fail with a message that indicates that it could not exclusively access /dev/sda
Actual results:
The installation will fail with a message that indicates that it could not exclusively access /dev/sda
Expected results:
The installation should proceed and the cluster should start to install.
Suspected Cases
https://issues.redhat.com/browse/AITRIAGE-3809
https://issues.redhat.com/browse/AITRIAGE-3802
https://issues.redhat.com/browse/AITRIAGE-3810
Same thing as we've had in assisted-service. We sometimes fail to install golangci-lint by fetching release artifacts from GitHub directly. That's usually because the same IP address (CI build cluster) tries to access GitHub in a high rate, leading to 429 (too many requests)
The way we fixed it for assisted-service is changing installation to use quay.io image that is already built with the binary.
Example for such a failure: https://prow.ci.openshift.org/view/gs/origin-ci-test/pr-logs/pull/openshift_release/30788/rehearse-30788-periodic-ci-openshift-assisted-installer-agent-release-ocm-2.6-subsystem-test-periodic/1551879759036682240
Filter for all recent failures: https://search.ci.openshift.org/?search=golangci%2Fgolangci-lint+crit+unable+to+find&maxAge=168h&context=1&type=build-log&name=.*assisted.*&excludeName=&maxMatches=5&maxBytes=20971520&groupBy=job
Section 5 of PRD: https://docs.google.com/document/d/1fF-Ajdzc9EDDg687FzTrX577hvY9NdK0/edit#heading=h.gjdgxs
Testing and collaboration with NVIDIA: https://docs.google.com/spreadsheets/d/1LHY-Af-2kQHVwtW4aVdHnmwZLTiatiyf-ySffC8O5NM/edit#gid=0
Deploying Nvidia Patches: https://docs.google.com/document/d/1yR4lphjPKd6qZ9sGzZITl0wH1r4ykfMKPjUnlzvWji4/edit#
This is the continuation of https://issues.redhat.com/browse/NHE-273 but now the focus is on the remaining flows.
Description of problem:
check_pkt_length cannot be offloaded without 1) sFlow offload patches in Open vSwitch and 2) hardware driver support. Since 1) will not be done anytime soon, we need a workaround for the check_pkt_length issue.
Version-Release number of selected component (if applicable):
4.11/4.12
How reproducible:
Always
Steps to Reproduce:
1. Any flow that has check_pkt_len():
   5-b: Pod -> NodePort Service traffic (Pod Backend - Different Node)
   6-b: Pod -> NodePort Service traffic (Host Backend - Different Node)
   4-b: Pod -> Cluster IP Service traffic (Host Backend - Different Node)
   10-b: Host Pod -> Cluster IP Service traffic (Host Backend - Different Node)
   11-b: Host Pod -> NodePort Service traffic (Pod Backend - Different Node)
   12-b: Host Pod -> NodePort Service traffic (Host Backend - Different Node)
Actual results:
Poor performance due to upcalls when check_pkt_len() is not supported.
Expected results:
Good performance.
Additional info:
https://docs.google.com/spreadsheets/d/1LHY-Af-2kQHVwtW4aVdHnmwZLTiatiyf-ySffC8O5NM/edit#gid=670206692
OCP/Telco Definition of Done
Epic Template descriptions and documentation.
<--- Cut-n-Paste the entire contents of this description into your new Epic --->
We have been running into a number of problems with configure-ovs and nodeip-configuration selecting different interfaces in OVNK deployments. This causes connectivity issues, so we need some way to ensure that everything uses the same interface/IP.
Currently configure-ovs runs before nodeip-configuration, but since nodeip-configuration is the source of truth for IP selection regardless of CNI plugin, I think we need to look at swapping that order. That way configure-ovs could look at what nodeip-configuration chose and not have to implement its own interface selection logic.
I'm targeting this at 4.12 because even though there's probably still time to get it in for 4.11, changing the order of boot services is always a little risky and I'd prefer to do it earlier in the cycle so we have time to tease out any issues that arise. We may need to consider backporting the change though since this has been an issue at least back to 4.10.
Goal
Provide an indication that advanced features are used
Problem
Today, customers and RH don't have the information on the actual usage of advanced features.
Why is this important?
Prioritized Scenarios
In Scope
1. Add a boolean variable in our telemetry to mark if the customer is using advanced features (PV encryption, encryption with KMS, external mode).
Not in Scope
Integrate with subscription watch - will be done by the subscription watch team with our help.
Customers
All
Customer Facing Story
As a compliance manager, I should be able to easily see if all my clusters are using the right amount of subscriptions
What does success look like?
A clear indication in subscription watch for ODF usage (either essential or advanced).
1. Proposed title of this feature request
2. What is the nature and description of the request?
3. Why does the customer need this? (List the business requirements here)
4. List any affected packages or components.
_____________________
Link to main epic: https://issues.redhat.com/browse/RHSTOR-3173
We migrated most components as part of https://issues.redhat.com/browse/RHSTOR-2165
We now have a few components remaining, roughly 15 to 20%. This epic targets:
1) Add support for in-tree modal launcher
This section includes Jira cards that are not linked to either an Epic or a Feature. These tickets were completed when this image was assembled
This is a clone of issue OCPBUGS-3883. The following is the description of the original issue:
—
While doing a PerfScale test, we noticed that the ovnkube pods are not being spread out evenly among the available workers. Instead they all stack on a few nodes until those fill up the available allocatable EBS volumes (25 in the case of the m5 instances we see here).
An example from partway through our 80 hosted cluster test when there were ~30 hosted clusters created/in progress
There are 24 workers available:
```
$ for i in `oc get nodes -l node-role.kubernetes.io/worker=,node-role.kubernetes.io/infra!=,node-role.kubernetes.io/workload!= | egrep -v "NAME" | awk '{ print $1 }'`; do echo $i `oc describe node $i | grep -v openshift | grep ovnkube -c`; done
ip-10-0-129-227.us-west-2.compute.internal 0
ip-10-0-136-22.us-west-2.compute.internal 25
ip-10-0-136-29.us-west-2.compute.internal 0
ip-10-0-147-248.us-west-2.compute.internal 0
ip-10-0-150-147.us-west-2.compute.internal 0
ip-10-0-154-207.us-west-2.compute.internal 0
ip-10-0-156-0.us-west-2.compute.internal 0
ip-10-0-157-1.us-west-2.compute.internal 4
ip-10-0-160-253.us-west-2.compute.internal 0
ip-10-0-161-30.us-west-2.compute.internal 0
ip-10-0-164-98.us-west-2.compute.internal 0
ip-10-0-168-245.us-west-2.compute.internal 0
ip-10-0-170-103.us-west-2.compute.internal 0
ip-10-0-188-169.us-west-2.compute.internal 25
ip-10-0-188-194.us-west-2.compute.internal 0
ip-10-0-191-51.us-west-2.compute.internal 5
ip-10-0-192-10.us-west-2.compute.internal 0
ip-10-0-193-200.us-west-2.compute.internal 0
ip-10-0-193-27.us-west-2.compute.internal 7
ip-10-0-199-1.us-west-2.compute.internal 0
ip-10-0-203-161.us-west-2.compute.internal 0
ip-10-0-204-40.us-west-2.compute.internal 23
ip-10-0-220-164.us-west-2.compute.internal 0
ip-10-0-222-59.us-west-2.compute.internal 0
```
This is running quay.io/openshift-release-dev/ocp-release:4.11.11-x86_64 for the hosted clusters and the hypershift operator is quay.io/hypershift/hypershift-operator:4.11 on a 4.11.9 management cluster
Description of problem:
This is an OCP clone of https://bugzilla.redhat.com/show_bug.cgi?id=2099794 In summary, NetworkManager reports the network as being up before the ipv6 address of the primary interface is ready and crio fails to bind to it.
Version-Release number of selected component (if applicable):
How reproducible:
Steps to Reproduce:
1. 2. 3.
Actual results:
Expected results:
Additional info:
Description of problem:
The name of the "Role" column on the Compute -> Nodes page should be updated to "Roles" to match the name in the CLI.
As with other resources, the column title should match the name used in the CLI.
Version-Release number of selected component (if applicable):
4.12.0-0.nightly-2022-08-15-150248
How reproducible:
Always
Steps to Reproduce:
1. Login OCP with CLI, use below command to get nodes information
$ oc get nodes
2. Go to Compute -> nodes page, check the column name of "Role"
3.
Actual results:
CLI will return information as below shown, and the title of the column is "ROLES"
```
NAME                                         STATUS   ROLES    AGE   VERSION
ip-10-0-145-18.us-east-2.compute.internal    Ready    worker   9h    v1.24.0+4f0dd4d
ip-10-0-145-203.us-east-2.compute.internal   Ready    master   9h    v1.24.0+4f0dd4d
ip-10-0-163-205.us-east-2.compute.internal   Ready    master   9h    v1.24.0+4f0dd4d
ip-10-0-169-118.us-east-2.compute.internal   Ready    worker   9h    v1.24.0+4f0dd4d
ip-10-0-198-234.us-east-2.compute.internal   Ready    master   9h    v1.24.0+4f0dd4d
ip-10-0-212-34.us-east-2.compute.internal    Ready    worker   9h    v1.24.0+4f0dd4d
```
But in the UI, the column is named "Role", which is incorrect. (Attached)
Expected results:
The title of "Role" should update to "Roles"
Additional info:
This is a clone of issue OCPBUGS-3924. The following is the description of the original issue:
—
The APIs are scheduled for removal in Kube 1.26, which will ship with OpenShift 4.13. We want the 4.12 CVO to move to modern APIs in 4.12, so the APIRemovedInNext.*ReleaseInUse alerts are not firing on 4.12. We'll need the components setting manifests for these deprecated APIs to move to modern APIs. And then we should drop our ability to reconcile the deprecated APIs, to avoid having other components leak back in to using them.
Specifically cluster-monitoring-operator touches:
Nov 18 21:59:06.261: INFO: user/system:serviceaccount:openshift-monitoring:kube-state-metrics accessed horizontalpodautoscalers.v2beta2.autoscaling 10 times
Full output of the test at https://gcsweb-ci.apps.ci.l2s4.p1.openshiftapps.com/gcs/origin-ci-test/pr-logs/pull/27560/pull-ci-openshift-origin-master-e2e-gcp-ovn/1593697975584952320/artifacts/e2e-gcp-ovn/openshift-e2e-test/build-log.txt:
[It] clients should not use APIs that are removed in upcoming releases [apigroup:config.openshift.io] [Suite:openshift/conformance/parallel] github.com/openshift/origin/test/extended/apiserver/api_requests.go:27 Nov 18 21:59:06.261: INFO: api flowschemas.v1beta1.flowcontrol.apiserver.k8s.io, removed in release 1.26, was accessed 254 times Nov 18 21:59:06.261: INFO: api horizontalpodautoscalers.v2beta2.autoscaling, removed in release 1.26, was accessed 10 times Nov 18 21:59:06.261: INFO: api prioritylevelconfigurations.v1beta1.flowcontrol.apiserver.k8s.io, removed in release 1.26, was accessed 22 times Nov 18 21:59:06.261: INFO: user/system:serviceaccount:openshift-cluster-version:default accessed flowschemas.v1beta1.flowcontrol.apiserver.k8s.io 224 times Nov 18 21:59:06.261: INFO: user/system:serviceaccount:openshift-cluster-version:default accessed prioritylevelconfigurations.v1beta1.flowcontrol.apiserver.k8s.io 22 times Nov 18 21:59:06.261: INFO: user/system:serviceaccount:openshift-kube-storage-version-migrator:kube-storage-version-migrator-sa accessed flowschemas.v1beta1.flowcontrol.apiserver.k8s.io 16 times Nov 18 21:59:06.261: INFO: user/system:admin accessed flowschemas.v1beta1.flowcontrol.apiserver.k8s.io 14 times Nov 18 21:59:06.261: INFO: user/system:serviceaccount:openshift-monitoring:kube-state-metrics accessed horizontalpodautoscalers.v2beta2.autoscaling 10 times Nov 18 21:59:06.261: INFO: api flowschemas.v1beta1.flowcontrol.apiserver.k8s.io, removed in release 1.26, was accessed 254 times api horizontalpodautoscalers.v2beta2.autoscaling, removed in release 1.26, was accessed 10 times api prioritylevelconfigurations.v1beta1.flowcontrol.apiserver.k8s.io, removed in release 1.26, was accessed 22 times user/system:admin accessed flowschemas.v1beta1.flowcontrol.apiserver.k8s.io 14 times user/system:serviceaccount:openshift-cluster-version:default accessed flowschemas.v1beta1.flowcontrol.apiserver.k8s.io 224 times user/system:serviceaccount:openshift-cluster-version:default accessed prioritylevelconfigurations.v1beta1.flowcontrol.apiserver.k8s.io 22 times user/system:serviceaccount:openshift-kube-storage-version-migrator:kube-storage-version-migrator-sa accessed flowschemas.v1beta1.flowcontrol.apiserver.k8s.io 16 times user/system:serviceaccount:openshift-monitoring:kube-state-metrics accessed horizontalpodautoscalers.v2beta2.autoscaling 10 times Nov 18 21:59:06.261: INFO: api flowschemas.v1beta1.flowcontrol.apiserver.k8s.io, removed in release 1.26, was accessed 254 times api horizontalpodautoscalers.v2beta2.autoscaling, removed in release 1.26, was accessed 10 times api prioritylevelconfigurations.v1beta1.flowcontrol.apiserver.k8s.io, removed in release 1.26, was accessed 22 times user/system:admin accessed flowschemas.v1beta1.flowcontrol.apiserver.k8s.io 14 times user/system:serviceaccount:openshift-cluster-version:default accessed flowschemas.v1beta1.flowcontrol.apiserver.k8s.io 224 times user/system:serviceaccount:openshift-cluster-version:default accessed prioritylevelconfigurations.v1beta1.flowcontrol.apiserver.k8s.io 22 times user/system:serviceaccount:openshift-kube-storage-version-migrator:kube-storage-version-migrator-sa accessed flowschemas.v1beta1.flowcontrol.apiserver.k8s.io 16 times user/system:serviceaccount:openshift-monitoring:kube-state-metrics accessed horizontalpodautoscalers.v2beta2.autoscaling 10 times [AfterEach] [sig-arch][Late] github.com/openshift/origin/test/extended/util/client.go:158 [AfterEach] [sig-arch][Late] 
github.com/openshift/origin/test/extended/util/client.go:159 flake: api flowschemas.v1beta1.flowcontrol.apiserver.k8s.io, removed in release 1.26, was accessed 254 times api horizontalpodautoscalers.v2beta2.autoscaling, removed in release 1.26, was accessed 10 times api prioritylevelconfigurations.v1beta1.flowcontrol.apiserver.k8s.io, removed in release 1.26, was accessed 22 times user/system:admin accessed flowschemas.v1beta1.flowcontrol.apiserver.k8s.io 14 times user/system:serviceaccount:openshift-cluster-version:default accessed flowschemas.v1beta1.flowcontrol.apiserver.k8s.io 224 times user/system:serviceaccount:openshift-cluster-version:default accessed prioritylevelconfigurations.v1beta1.flowcontrol.apiserver.k8s.io 22 times user/system:serviceaccount:openshift-kube-storage-version-migrator:kube-storage-version-migrator-sa accessed flowschemas.v1beta1.flowcontrol.apiserver.k8s.io 16 times user/system:serviceaccount:openshift-monitoring:kube-state-metrics accessed horizontalpodautoscalers.v2beta2.autoscaling 10 times Ginkgo exit error 4: exit with code 4
This is required to unblock https://github.com/openshift/origin/pull/27561
This is a clone of issue OCPBUGS-6213. The following is the description of the original issue:
—
Please review the following PR: https://github.com/openshift/machine-config-operator/pull/3450
The PR has been automatically opened by ART (#aos-art) team automation and indicates
that the image(s) being used downstream for production builds are not consistent
with the images referenced in this component's github repository.
Differences in upstream and downstream builds impact the fidelity of your CI signal.
If you disagree with the content of this PR, please contact @release-artists
in #aos-art to discuss the discrepancy.
Closing this issue without addressing the difference will cause the issue to
be reopened automatically.
Description of problem:
Deployed hypershift cluster with recent multi-arch build. Storage cluster operator has become available but having below warning message PowerVSBlockCSIDriverOperatorCRDegraded: PowerVSBlockCSIDriverStaticResourcesControllerDegraded: "rbac/attacher_role.yaml" (string): clusterroles.rbac.authorization.k8s.io "ibm-powervs-block-external-attacher-role" is forbidden: user "system:serviceaccount:openshift-cluster-csi-drivers:powervs-block-csi-driver-operator" (groups=["system:serviceaccounts" "system:serviceaccounts:openshift-cluster-csi-drivers" "system:authenticated"]) is attempting to grant RBAC permissions not currently held: PowerVSBlockCSIDriverOperatorCRDegraded: PowerVSBlockCSIDriverStaticResourcesControllerDegraded: {APIGroups:["csi.storage.k8s.io"], Resources:["csinodeinfos"], Verbs:["get" "list" "watch"]} PowerVSBlockCSIDriverOperatorCRDegraded: PowerVSBlockCSIDriverStaticResourcesControllerDegraded: "rbac/attacher_binding.yaml" (string): clusterroles.rbac.authorization.k8s.io "ibm-powervs-block-external-attacher-role" not found
Version-Release number of selected component (if applicable):
How reproducible:
Steps to Reproduce:
1.Deploy 4.12.0-0.nightly-multi-2022-09-01-220105 nightly build
Actual results:
Expected results:
Additional info:
This is a clone of issue OCPBUGS-6222. The following is the description of the original issue:
—
Please review the following PR: https://github.com/openshift/alibaba-cloud-csi-driver/pull/20
The PR has been automatically opened by ART (#aos-art) team automation and indicates
that the image(s) being used downstream for production builds are not consistent
with the images referenced in this component's github repository.
Differences in upstream and downstream builds impact the fidelity of your CI signal.
If you disagree with the content of this PR, please contact @release-artists
in #aos-art to discuss the discrepancy.
Closing this issue without addressing the difference will cause the issue to
be reopened automatically.
Clone of https://issues.redhat.com/browse/OCPBUGSM-44162.
Cannot use the original as the bot won't accept a security bug:
When the change merges, the Bugzilla associated with the CVE must be set to MODIFIED. Since the DPTP bugzilla bot is not permitted to scan bugs with the SECURITY group in Bugzilla, The REP will not be able to use the bot's public functionality of moving their bug to MODIFIED.
This is a clone of issue OCPBUGS-3032. The following is the description of the original issue:
—
If installation fails at an early stage (e.g. pulling release images, configuring hosts, waiting for agents to come up) there is no indication that anything has gone wrong, and the installer binary may not even be able to connect.
We should at least display what is happening on the console so that users have some avenue to figure out for themselves what is going on.
we need to make sure that the ironic containers use the latest available bugfix versions
Description of problem:
E2E test Installs Red Hat Integration - 3scale operator test is failing due to change of Operator name
An RW mutex was introduced to the project auth cache with https://github.com/openshift/openshift-apiserver/pull/267, taking exclusive access during cache syncs. On clusters with extremely high object counts for namespaces and RBAC, syncs appear to be extremely slow (on the order of several minutes). The project LIST handler acquires the same mutex in shared mode as part of its critical path.
As a (user persona), I want to be able to:
so that I can achieve
Description of criteria:
Detail about what is specifically not being delivered in the story
This requires/does not require a design proposal.
This requires/does not require a feature gate.
Derrick got an "old and new refs are equal" error on rebase; this is similar to OCPBUGS-1899 but I think it has a different root cause. In this case, when a manual rollback is performed via the bootloader, we've computed that there's an osimageurl diff between the expected and desired state, but the desired state is actually already set.
We just need to skip doing the rebase if we're already in the target state.
(A real root of this problem again is that the whole "current/desired config" thing is trying to track state independently of the bootloader...if we made node state == container image, all of that goes away. The MCO would understand that it got booted into a previous state)
This is a clone of issue OCPBUGS-2895. The following is the description of the original issue:
—
Description of problem:
Current validation will not accept Resource Groups or DiskEncryptionSets which have upper-case letters.
Version-Release number of selected component (if applicable):
4.11
How reproducible:
Attempt to create a cluster/machineset using a DiskEncryptionSet with an RG or Name with upper-case letters
Steps to Reproduce:
1. Create cluster with DiskEncryptionSet with upper-case letters in DES name or in Resource Group name
Actual results:
See error message: encountered error: [controlPlane.platform.azure.defaultMachinePlatform.osDisk.diskEncryptionSet.resourceGroup: Invalid value: \"v4-e2e-V62447568-eastus\": invalid resource group format, compute[0].platform.azure.defaultMachinePlatform.osDisk.diskEncryptionSet.resourceGroup: Invalid value: \"v4-e2e-V62447568-eastus\": invalid resource group format]
Expected results:
Create a cluster/machineset using the existing and valid DiskEncryptionSet
Additional info:
I have submitted a PR for this already, but it needs to be reviewed and backported to 4.11: https://github.com/openshift/installer/pull/6513
Description of problem:
[OVN][OSP] After reboot egress node, egress IP cannot be applied anymore.
Version-Release number of selected component (if applicable):
4.12.0-0.nightly-2022-11-07-181244
How reproducible:
Frequently happens in automation, but we didn't manage to reproduce it manually.
Steps to Reproduce:
1. Label one node as egress node
2. Config one egressIP object
   STEP: Check one EgressIP assigned in the object.
   Nov 8 15:28:23.591: INFO: egressIPStatus: [{"egressIP":"192.168.54.72","node":"huirwang-1108c-pg2mt-worker-0-2fn6q"}]
3. Reboot the node, wait for the node ready.
Actual results:
EgressIP cannot be applied anymore. Waited more than 1 hour.
```
oc get egressip
NAME             EGRESSIPS       ASSIGNED NODE   ASSIGNED EGRESSIPS
egressip-47031   192.168.54.72
```
Expected results:
The egressIP should be applied correctly.
Additional info:
Some logs:
```
E1108 07:29:41.849149 1 egressip.go:1635] No assignable nodes found for EgressIP: egressip-47031 and requested IPs: [192.168.54.72]
I1108 07:29:41.849288 1 event.go:285] Event(v1.ObjectReference{Kind:"EgressIP", Namespace:"", Name:"egressip-47031", UID:"", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'NoMatchingNodeFound' no assignable nodes for EgressIP: egressip-47031, please tag at least one node with label: k8s.ovn.org/egress-assignable
W1108 07:33:37.401149 1 egressip_healthcheck.go:162] Could not connect to huirwang-1108c-pg2mt-worker-0-2fn6q (10.131.0.2:9107): context deadline exceeded
I1108 07:33:37.401348 1 master.go:1364] Adding or Updating Node "huirwang-1108c-pg2mt-worker-0-2fn6q"
I1108 07:33:37.437465 1 egressip_healthcheck.go:168] Connected to huirwang-1108c-pg2mt-worker-0-2fn6q (10.131.0.2:9107)
```
After this log, there seem to be no further logs related to "192.168.54.72".
This is a clone of issue OCPBUGS-3405. The following is the description of the original issue:
—
In case it should be used for publishing artifacts in CI jobs.
Look into whether the following things are leaked:
Description of problem:
co/storage is not available because the CSI driver does not have the proxy setting on IBM Cloud.
Version-Release number of selected component (if applicable):
4.12.0-0.ci-2022-10-13-233744
How reproducible:
Always
Steps to Reproduce:
1.Install ocp cluster on ibm disconnected env with http proxy Template: private-templates/functionality-testing/aos-4_12/ipi-on-ibmcloud/versioned-installer-customer_vpc-http_proxy 2.Check co/storage oc get co/storage NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE MESSAGE storage 4.12.0-0.ci-2022-10-13-233744 False True False 6h55m IBMVPCBlockCSIDriverOperatorCRAvailable: IBMBlockDriverControllerServiceControllerAvailable: Waiting for Deployment... 3.oc get pods NAME READY STATUS RESTARTS AGE ibm-vpc-block-csi-controller-6c4bfc9fc-6dmz7 4/5 CrashLoopBackOff 83 (113s ago) 6h55m ibm-vpc-block-csi-driver-operator-7bd6fb5cdc-rktk2 1/1 Running 1 (6h44m ago) 6h55m ibm-vpc-block-csi-node-8s6dj 0/3 Init:0/1 77 (5m34s ago) 6h52m ibm-vpc-block-csi-node-9msld 0/3 Init:Error 76 (5m49s ago) 6h47m ibm-vpc-block-csi-node-fgs76 0/3 Init:CrashLoopBackOff 76 (5m ago) 6h52m ibm-vpc-block-csi-node-jd9fl 0/3 Init:CrashLoopBackOff 75 (4m16s ago) 6h47m ibm-vpc-block-csi-node-qkjxs 0/3 Init:CrashLoopBackOff 77 (2m53s ago) 6h52m ibm-vpc-block-csi-node-xbzm8 0/3 Init:0/1 76 (5m13s ago) 6h47m 4.oc -n openshift-cluster-csi-drivers logs -c vpc-node-label-updater ibm-vpc-block-csi-node-xbzm8 {"level":"info","timestamp":"2022-10-14T09:18:32.436Z","caller":"nodeupdater/utils.go:57","msg":"Fetching secret configuration.","watcher-name":"vpc-node-label-updater"} {"level":"info","timestamp":"2022-10-14T09:18:32.436Z","caller":"nodeupdater/utils.go:158","msg":"parsing conf file","watcher-name":"vpc-node-label-updater","confpath":"/etc/storage_ibmc/slclient.toml"} {"level":"error","timestamp":"2022-10-14T09:19:02.437Z","caller":"nodeupdater/utils.go:96","msg":"Failed to Get IAM access token","watcher-name":"vpc-node-label-updater","error":"Post \"https://iam.cloud.ibm.com/oidc/token\": dial tcp 23.203.93.6:443: i/o timeout"} {"level":"fatal","timestamp":"2022-10-14T09:19:02.437Z","caller":"cmd/main.go:140","msg":"Failed to read secret configuration from storage secret present in the cluster ","watcher-name":"vpc-node-label-updater","error":"Post \"https://iam.cloud.ibm.com/oidc/token\": dial tcp 23.203.93.6:443: i/o timeout"} 5.oc -n openshift-cluster-csi-drivers describe pod ibm-vpc-block-csi-node-xbzm8 Environment: ADDRESS: /csi/csi.sock DRIVER_REGISTRATION_SOCK: /var/lib/kubelet/plugins/vpc.block.csi.ibm.io/csi.sock KUBE_NODE_NAME: (v1:spec.nodeName) Actual results:{code:none}
Expected results:
Additional info:
When installing an OCP cluster with the worker node VM type specified as high performance, some of the configuration settings of those VMs do not match the configuration settings a high performance VM should have.
Specific configurations that do not match are described in subtasks.
Default configuration settings of high performance VMs:
https://access.redhat.com/documentation/en-us/red_hat_virtualization/4.4/html-single/virtual_machine_management_guide/index?extIdCarryOver=true&sc_cid=701f2000001Css5AAC#Configuring_High_Performance_Virtual_Machines_Templates_and_Pools
When installing an OCP cluster with the worker node VM type specified as high performance, both manual and automatic migration are enabled on those VMs.
However, high performance worker VMs are created with the engine's default values, so only manual migration should be enabled.
Default configuration settings of high performance VMs:
https://access.redhat.com/documentation/en-us/red_hat_virtualization/4.4/html-single/virtual_machine_management_guide/index?extIdCarryOver=true&sc_cid=701f2000001Css5AAC#Configuring_High_Performance_Virtual_Machines_Templates_and_Pools
How reproducible: 100%
How to reproduce:
1. Create install-config.yaml with a vmType field and set it to high performance, i.e.:
```
apiVersion: v1
baseDomain: basedomain.com
compute:
- architecture: amd64
  hyperthreading: Enabled
  name: worker
  platform:
    ovirt:
      affinityGroupsNames: []
      vmType: high_performance
  replicas: 2
...
```
2. Run installation
./openshift-install create cluster --dir=resources --log-level=debug
3. Check worker VM's configuration in the RHV webconsole.
Expected:
Only manual migration (under Host) should be enabled.
Actual:
Manual and automatic migration is enabled.
Description of problem:
The cluster cannot be installed when updating the join network CIDR using v6InternalSubnet fdxx::/64 in manifests/cluster-network-03-config.yml.
Version-Release number of selected component (if applicable):
v4.12
How reproducible:
Always
Steps to Reproduce:
Using v6InternalSubnet: fd66::/48 in manifests/cluster-network-03-config.yml to install a dual stack cluster:
```
cp manifests/cluster-network-02-config.yml manifests/cluster-network-03-config.yml
sed -i 's/config.openshift.io\/v1/operator.openshift.io\/v1/g' manifests/cluster-network-03-config.yml
cat > ovn_kube_config <<HEREDOC
  defaultNetwork:
    type: OVNKubernetes
    ovnKubernetesConfig:
      v6InternalSubnet: fd66::/48
HEREDOC
sed -i $'/^status/{e cat ovn_kube_config\n}' manifests/cluster-network-03-config.yml
```
Actual results:
Installation fail
Expected results:
Installation pass
Additional info:
Description of problem:
cloud-network-config-controller pod crashloops in proxy deployments as it tries to reach Openstack keystone API directly (not through the proxy) and there is no connectivity. NAMESPACE NAME READY STATUS RESTARTS AGE openshift-cloud-network-config-controller cloud-network-config-controller-c4867b748-vlq9h 0/1 CrashLoopBackOff 158 (2m10s ago) 13h $ oc -n openshift-cloud-network-config-controller logs -p cloud-network-config-controller-c4867b748-vlq9h W0927 05:48:18.678947 1 client_config.go:617] Neither --kubeconfig nor --master was specified. Using the inClusterConfig. This might not work. I0927 05:48:18.680269 1 leaderelection.go:248] attempting to acquire leader lease openshift-cloud-network-config-controller/cloud-network-config-controller-lock... I0927 05:48:26.754377 1 leaderelection.go:258] successfully acquired lease openshift-cloud-network-config-controller/cloud-network-config-controller-lock I0927 05:48:26.755413 1 openstack.go:121] Custom CA bundle found at location '/kube-cloud-config/ca-bundle.pem' - reading certificate information F0927 05:48:28.233519 1 main.go:101] Error building cloud provider client, err: Get "https://10.46.44.10:13000/": dial tcp 10.46.44.10:13000: connect: no route to host goroutine 51 [running]: k8s.io/klog/v2.stacks(0x1) /go/src/github.com/openshift/cloud-network-config-controller/vendor/k8s.io/klog/v2/klog.go:860 +0x8a k8s.io/klog/v2.(*loggingT).output(0x37696c0, 0x3, 0x0, 0xc000636000, 0x1, {0x2cbcbd8?, 0x1?}, 0xc000438400?, 0x0) /go/src/github.com/openshift/cloud-network-config-controller/vendor/k8s.io/klog/v2/klog.go:825 +0x686 k8s.io/klog/v2.(*loggingT).printfDepth(0x37696c0, 0x237798a?, 0x0, {0x0, 0x0}, 0x7fff81041af7?, {0x23a20d0, 0x2d}, {0xc00052c050, 0x1, ...}) /go/src/github.com/openshift/cloud-network-config-controller/vendor/k8s.io/klog/v2/klog.go:630 +0x1f2 k8s.io/klog/v2.(*loggingT).printf(...) /go/src/github.com/openshift/cloud-network-config-controller/vendor/k8s.io/klog/v2/klog.go:612 k8s.io/klog/v2.Fatalf(...) /go/src/github.com/openshift/cloud-network-config-controller/vendor/k8s.io/klog/v2/klog.go:1516 main.main.func1({0x26e5638, 0xc00016c040}) /go/src/github.com/openshift/cloud-network-config-controller/cmd/cloud-network-config-controller/main.go:101 +0x26d created by k8s.io/client-go/tools/leaderelection.(*LeaderElector).Run /go/src/github.com/openshift/cloud-network-config-controller/vendor/k8s.io/client-go/tools/leaderelection/leaderelection.go:211 +0x11bgoroutine 1 [select]: k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc00052bb60?, {0x26cee20, 0xc000581740}, 0x1, 0xc00052bb60) /go/src/github.com/openshift/cloud-network-config-controller/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:167 +0x135 k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc00016c080?, 0x60db88400, 0x0, 0x20?, 0x7fea470ec108?) /go/src/github.com/openshift/cloud-network-config-controller/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:133 +0x89 k8s.io/apimachinery/pkg/util/wait.Until(...) 
/go/src/github.com/openshift/cloud-network-config-controller/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:90 k8s.io/client-go/tools/leaderelection.(*LeaderElector).renew(0xc0000a8120, {0x26e5638?, 0xc00016c040?}) /go/src/github.com/openshift/cloud-network-config-controller/vendor/k8s.io/client-go/tools/leaderelection/leaderelection.go:268 +0xd0 k8s.io/client-go/tools/leaderelection.(*LeaderElector).Run(0xc0000a8120, {0x26e5638, 0xc00025fcc0}) /go/src/github.com/openshift/cloud-network-config-controller/vendor/k8s.io/client-go/tools/leaderelection/leaderelection.go:212 +0x12f k8s.io/client-go/tools/leaderelection.RunOrDie({0x26e5638, 0xc00025fcc0}, {{0x26e7430, 0xc00062afa0}, 0x1fe5d61a00, 0x18e9b26e00, 0x60db88400, {0xc00065e630, 0xc000634810, 0x0}, ...}) /go/src/github.com/openshift/cloud-network-config-controller/vendor/k8s.io/client-go/tools/leaderelection/leaderelection.go:226 +0x94 main.main() /go/src/github.com/openshift/cloud-network-config-controller/cmd/cloud-network-config-controller/main.go:86 +0x450
Version-Release number of selected component (if applicable):
4.12.0-0.nightly-2022-09-26-050728
How reproducible:
Always
Steps to Reproduce:
1. Install OCP with proxy
Actual results:
Bootstrap failure and pod crashloop
Expected results:
Successful installation
Additional info:
Please find the must-gather here.
Description of problem:
opm serve fails with message: Error: compute digest: compute hash: write tar: stat .: os: DirFS with empty root
Version-Release number of selected component (if applicable):
4.12
How reproducible:
100%
Steps to Reproduce:
(The easiest reproducer involves serving an empty catalog)
1. mkdir /tmp/catalog
2. Use Dockerfile /tmp/catalog.Dockerfile based on the 4.12 docs (https://access.redhat.com/documentation/en-us/openshift_container_platform/4.12/html-single/operators/index#olm-creating-fb-catalog-image_olm-managing-custom-catalogs):
```
# The base image is expected to contain
# /bin/opm (with a serve subcommand) and /bin/grpc_health_probe
FROM registry.redhat.io/openshift4/ose-operator-registry:v4.12

# Configure the entrypoint and command
ENTRYPOINT ["/bin/opm"]
CMD ["serve", "/configs"]

# Copy declarative config root into image at /configs
ADD catalog /configs

# Set DC-specific label for the location of the DC root directory
# in the image
LABEL operators.operatorframework.io.index.configs.v1=/configs
```
3. Build the image: `cd /tmp/ && docker build -f catalog.Dockerfile .`
4. Execute an instance of the container in docker/podman: `docker run --name cat-run [image-file]`
5. Error
Using a dockerfile generated from opm (`opm generate dockerfile [dir]`) works, but includes precache and cachedir options to opm.
Actual results:
Error: compute digest: compute hash: write tar: stat .: os: DirFS with empty root
Expected results:
opm generates cache in default /tmp/cache location and serves without error
Additional info:
Description of problem:
documentationBaseURL still points to 4.10
Version-Release number of selected component (if applicable):
4.12.0-0.nightly-2022-08-31-101631
How reproducible:
Always
Steps to Reproduce:
1. Check documentationBaseURL on a 4.12 cluster:
```
# oc get configmap console-config -n openshift-console -o yaml | grep documentationBaseURL
      documentationBaseURL: https://access.redhat.com/documentation/en-us/openshift_container_platform/4.11/
```
2.
3.
Actual results:
1.documentationBaseURL is still pointing to 4.11
Expected results:
1.documentationBaseURL should point to 4.12
Additional info:
Although this doesn't cause any functional problems, we get these errors during initial boot and the Podman API service (which we don't use) fails to start.
Jul 20 17:30:17 master-0.ostest.test.metalkube.org systemd[4008]: Started Podman API Service.
Jul 20 17:30:17 master-0.ostest.test.metalkube.org podman[4019]: time="2022-07-20T17:30:17Z" level=error msg="reading system config \"/etc/containers/containers.conf\": decode configuration /etc/containers/containers.conf: open /etc/containers/containers.conf: permission denied"
Jul 20 17:30:17 master-0.ostest.test.metalkube.org podman[4018]: time="2022-07-20T17:30:17Z" level=error msg="reading system config \"/etc/containers/containers.conf\": decode configuration /etc/containers/containers.conf: open /etc/containers/containers.conf: permission denied"
Jul 20 17:30:17 master-0.ostest.test.metalkube.org podman[4020]: time="2022-07-20T17:30:17Z" level=error msg="reading system config \"/etc/containers/containers.conf\": decode configuration /etc/containers/containers.conf: open /etc/containers/containers.conf: permission denied"
This is a clone of issue OCPBUGS-723. The following is the description of the original issue:
—
Description of problem:
I have a customer who created a clusterquota for one of the namespaces; it got created, but the values were not reflected under limits and the namespace details were not displayed.
~~~
$ oc describe AppliedClusterResourceQuota
Name: test-clusterquota
Created: 19 minutes ago
Labels: size=custom
Annotations: <none>
Namespace Selector: []
Label Selector:
AnnotationSelector: map[openshift.io/requester:system:serviceaccount:application-service-accounts:test-sa]
Scopes: NotTerminating
Resource Used Hard
-------- ---- ----
~~~
WORKAROUND: They recreated the clusterquota object (cache it off, delete it, create new) after which it displayed values as expected.
In the past, they saw similar behavior on their test cluster; there the cluster was heavily utilized, the etcd DB was much larger in size (>2.5Gi), and there were many more objects (at that time, helm secrets were being cached for all deployments, keeping a history of 10, so etcd was being bombarded).
On this cluster the same "symptom" was noticed, however etcd was nowhere near that size, nor were there anywhere near as many etcd objects and/or cached helm secrets.
Version-Release number of selected component (if applicable): OCP 4.9
How reproducible: Occurred only twice(once in test and in current cluster)
Steps to Reproduce:
1. Create ClusterQuota
2. Check AppliedClusterResourceQuota
3. The values and namespace is empty
Actual results: ClusterQuota is not displaying the values
Expected results: ClusterQuota should display the values
Description of problem:
When providing the openshift-install agent create command with installconfig + agentconfig manifests that contain the InstallConfig Proxy section, the Proxy configuration does not get configured cluster-wide.
Version-Release number of selected component (if applicable):
4.12
How reproducible:
100%
Steps to Reproduce:
1. Define an InstallConfig with a Proxy section (see the sketch below)
2. openshift-install agent create image
3. Boot the ISO
4. Check that agent-cluster-install.yaml under /etc/assisted/manifests contains the Proxy section
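For reference, a minimal proxy stanza in install-config.yaml looks roughly like the following (proxy endpoints and noProxy entries are placeholders, not values from this report):
~~~
# install-config.yaml excerpt
proxy:
  httpProxy: http://proxy.example.com:3128
  httpsProxy: http://proxy.example.com:3128
  noProxy: .cluster.local,.svc,10.0.0.0/16
~~~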
Actual results:
Missing proxy
Expected results:
Proxy should be present and match with the InstallConfig
Additional info:
Description of problem:
OCPBUGS-3499 and OCPBUGS-3501 both require a more recent version of openshift/library-go containing the shared validation and host-assignment logic.
Description of problem:
metal3 pod does not come up on SNO when creating a Provisioning resource with provisioningNetwork set to Disabled. The issue is that on SNO there is no Machine and no BareMetalHost, yet the operator looks for Machine objects to populate the provisioningMacAddresses field. However, when provisioningNetwork is Disabled, provisioningMacAddresses is not used anyway. You can work around this issue by populating provisioningMacAddresses with a dummy address, like this:
kind: Provisioning
metadata:
  name: provisioning-configuration
spec:
  provisioningMacAddresses:
  - aa:aa:aa:aa:aa:aa
  provisioningNetwork: Disabled
  watchAllNamespaces: true
Version-Release number of selected component (if applicable):
4.11.17
How reproducible:
Try to bring up Provisioning on SNO in 4.11.17 with provisioningNetwork set to Disabled:
apiVersion: metal3.io/v1alpha1
kind: Provisioning
metadata:
  name: provisioning-configuration
spec:
  provisioningNetwork: Disabled
  watchAllNamespaces: true
Steps to Reproduce:
Actual results:
controller/provisioning "msg"="Reconciler error" "error"="machines with cluster-api-machine-role=master not found" "name"="provisioning-configuration" "namespace"="" "reconciler group"="metal3.io" "reconciler kind"="Provisioning"
Expected results:
metal3 pod should be deployed
Additional info:
This issue is a result of this change: https://github.com/openshift/cluster-baremetal-operator/pull/307 See this Slack thread: https://coreos.slack.com/archives/CFP6ST0A3/p1670530729168599
This is a clone of issue OCPBUGS-3668. The following is the description of the original issue:
—
Description of problem:
Installer fails to install 4.12.0-rc.0 on VMware IPI with the script that worked with prior OCP versions. The error happens during the Terraform preparation step, when gathering information in the "Platform Provisioning Check". It looks like a permission issue, but we're using the vCenter administrator account, and I double-checked that the account has all the necessary permissions.
Version-Release number of selected component (if applicable):
OCP installer 4.12.0-rc.0; vSphere & vCenter 7.0.3 (no pending updates)
How reproducible:
always - we already observed this in the nightlies, but wanted to wait for an RC to confirm
Steps to Reproduce:
1. Try to install using the openshift-install binary
Actual results:
Fails during the preparation step
Expected results:
Installs the cluster ;)
Additional info:
This runs in our CI/CD pipeline; let me know if you need access to the full run log: https://gitlab.consulting.redhat.com/cblum/storage-ocs-lab/-/jobs/219304. It includes the install-config.yaml, all component versions, and the full debug log output.
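If it helps triage, the failing preparation step can be re-run against the same assets with verbose logging (the asset directory name is a placeholder):
~~~
openshift-install create cluster --dir ./install-dir --log-level=debug
~~~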
Description of problem:
Installer fails due to a Neutron policy error when creating OpenStack servers; the worker machines remain stuck in Provisioning.
$ oc get machines -A
NAMESPACE               NAME                          PHASE          TYPE   REGION   ZONE   AGE
openshift-machine-api   ostest-kwtf8-master-0         Running                               23h
openshift-machine-api   ostest-kwtf8-master-1         Running                               23h
openshift-machine-api   ostest-kwtf8-master-2         Running                               23h
openshift-machine-api   ostest-kwtf8-worker-0-g7nrw   Provisioning                          23h
openshift-machine-api   ostest-kwtf8-worker-0-lrkvb   Provisioning                          23h
openshift-machine-api   ostest-kwtf8-worker-0-vwrsk   Provisioning                          23h
$ oc -n openshift-machine-api logs machine-api-controllers-7454f5d65b-8fqx2 -c machine-controller
[...]
E1018 10:51:49.355143 1 controller.go:317] controller/machine_controller "msg"="Reconciler error" "error"="error creating Openstack instance: Failed to create port err: Request forbidden: [POST https://overcloud.redhat.local:13696/v2.0/ports], error message: {\"NeutronError\": {\"type\": \"PolicyNotAuthorized\", \"message\": \"(rule:create_port and (rule:create_port:allowed_address_pairs and (rule:create_port:allowed_address_pairs:ip_address and rule:create_port:allowed_address_pairs:ip_address))) is disallowed by policy\", \"detail\": \"\"}}" "name"="ostest-kwtf8-worker-0-lrkvb" "namespace"="openshift-machine-api"
Version-Release number of selected component (if applicable):
4.10.0-0.nightly-2022-10-14-023020
How reproducible:
Always
Steps to Reproduce:
1. Install 4.10 within provider networks (in primary or secondary interface)
Actual results:
Installation failure: 4.10.0-0.nightly-2022-10-14-023020: some cluster operators have not yet rolled out
Expected results:
Successful installation
Additional info:
Please find must-gather for installation on primary interface link here and for installation on secondary interface link here.
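To confirm the policy restriction outside the installer, the denied rule can be exercised directly with the OpenStack CLI under the same project credentials (network name, IP, and port name below are placeholders):
~~~
openstack port create --network provider-net \
  --allowed-address ip-address=192.0.2.10 \
  policy-check-port
~~~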
Found when running the resource watcher: these objects keep updating with no real changes, just shuffling conditions around. Likely needs bugs for all three.
Description of problem:
If a master fails and is drained, the old copy of the metal3 pod gets stuck in Terminating state for some (possibly long) time. While the new pod works correctly, CBO expects only one pod to exist and thus cannot determine the applicable Ironic IP address.
Version-Release number of selected component (if applicable):
How reproducible:
always
Steps to Reproduce:
1. On dev-scripts: virsh destroy <VM with metal3 pod>
2. Wait for the drain to happen or trigger it manually
3. Check the CBO logs
Actual results:
"unable to determine Ironic's IP to pass to the machine-image-customization-controller: there should be only one pod listed for the given label"
Expected results:
CBO reconfigures its pods with the new Ironic IP
Additional info:
I don't know how to filter out pods in Terminating state...
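For what it's worth, pods in Terminating state are simply pods with a deletion timestamp set, so one rough way to exclude them when listing (filtering by the metal3 name prefix here, since this report doesn't give the exact label selector) is:
~~~
# List metal3 pods that are NOT being deleted (requires jq)
oc get pods -n openshift-machine-api -o json \
  | jq -r '.items[]
           | select(.metadata.name | startswith("metal3"))
           | select(.metadata.deletionTimestamp == null)
           | .metadata.name'
~~~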
Description: agent.iso is created even when an invalid macAddress is provided
Here is the content of agent-config.yaml
--------------------------------------------
kind: AgentConfig
metadata:
name: sno-cluster
spec:
rendezvousIP: 192.168.111.80
hosts:
How reproducible:
always
Repro Steps:
1) Get the latest agent-installer and build
git clone -b agent-installer https://github.com/openshift/installer.git
cd installer/
hack/build.sh
2) Create agent.iso using agent-config and install-config files.
Expected: Installer should throw an error message something like this: hosts.host[0].interfaces.macAddress: Invalid value: “0000”: macAddress must provide the valid macAddress.
And
hosts.host[0].networkConfig.interfaces.macAddress: Invalid value: “00000”: macAddress must provide the valid macAddress.
Actual: Able to create agent.iso image.
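For context, the MAC address being validated sits under the hosts entries of agent-config.yaml; a hypothetical, minimal host entry (hostname, interface name, and MAC are placeholders, not the truncated entries from this report) looks like:
~~~
hosts:
  - hostname: sno-node-0              # placeholder
    interfaces:
      - name: eno1                    # placeholder
        macAddress: 00:ef:44:21:e6:a5 # validation should reject invalid values such as "0000" here
~~~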
Currently the controller sets the status to Done each time it sees a host that is ready in k8s, without checking whether the status was already set.
time="2022-09-13T19:03:45Z" level=info msg="Found new ready node ocp-2.cluster1.kpsalerno.us.ibm.com with inventory id 2da64d56-5057-78c6-ea6e-bf74a783bd79, kubernetes id 2da64d56-5057-78c6-ea6e-bf74a783bd79, updating its status to Done" func="github.com/openshift/assisted-installer/src/assisted_installer_controller.(*controller).waitAndUpdateNodesStatus" file="/remote-source/app/src/assisted_installer_controller/assisted_installer_controller.go:255" request_id=6258e5a2-4e78-4148-a913-45d704a0fa1d
time="2022-09-13T19:04:05Z" level=info msg="Found new ready node ocp-2.cluster1.kpsalerno.us.ibm.com with inventory id 2da64d56-5057-78c6-ea6e-bf74a783bd79, kubernetes id 2da64d56-5057-78c6-ea6e-bf74a783bd79, updating its status to Done" func="github.com/openshift/assisted-installer/src/assisted_installer_controller.(*controller).waitAndUpdateNodesStatus" file="/remote-source/app/src/assisted_installer_controller/assisted_installer_controller.go:255" request_id=49e4e63f-cf4f-4b9f-b1f3-923c473c09dd
Description of problem:
The SQL-based index image created by an old opm version fails to run on 4.12, even after adding the `privileged` pod-security labels to the namespace.
MacBook-Pro:~ jianzhang$ oc get pods
NAME                   READY   STATUS             RESTARTS     AGE
jian-operators-4g5ln   0/1     CrashLoopBackOff   1 (2s ago)   11s
MacBook-Pro:~ jianzhang$ oc logs jian-operators-4g5ln
Error: open /etc/nsswitch.conf: permission denied
PS: the SQL-based index created by the new opm version doesn't have this issue.
opm version
Version: version.Version{OpmVersion:"e41024eb3", GitCommit:"e41024eb37c721bc43e8b3df226dd30c0589aee7", BuildDate:"2022-08-16T01:50:17Z", GoOs:"darwin", GoArch:"amd64"}
Version-Release number of selected component (if applicable):
OCP 4.12
MacBook-Pro:~ jianzhang$ oc get clusterversion
NAME      VERSION                              AVAILABLE   PROGRESSING   SINCE   STATUS
version   4.12.0-0.nightly-2022-08-15-150248   True        False         3h25m   Cluster version is 4.12.0-0.nightly-2022-08-15-150248
How reproducible:
always
Steps to Reproduce:
1. Deploy OCP 4.12
2. Deploy a CatalogSource in the `openshift-marketplace` namespace.
MacBook-Pro:~ jianzhang$ oc get ns openshift-marketplace -o yaml
apiVersion: v1
kind: Namespace
metadata:
  annotations:
    capability.openshift.io/name: marketplace
    include.release.openshift.io/ibm-cloud-managed: "true"
    include.release.openshift.io/self-managed-high-availability: "true"
    include.release.openshift.io/single-node-developer: "true"
    openshift.io/node-selector: ""
    openshift.io/sa.scc.mcs: s0:c16,c10
    openshift.io/sa.scc.supplemental-groups: 1000260000/10000
    openshift.io/sa.scc.uid-range: 1000260000/10000
    workload.openshift.io/allowed: management
  creationTimestamp: "2022-08-15T23:15:27Z"
  labels:
    kubernetes.io/metadata.name: openshift-marketplace
    olm.operatorgroup.uid/1b776321-2714-4c1f-95ba-2ddff49c4efe: ""
    openshift.io/cluster-monitoring: "true"
    pod-security.kubernetes.io/audit: baseline
    pod-security.kubernetes.io/enforce: baseline
    pod-security.kubernetes.io/warn: baseline
  name: openshift-marketplace
  ownerReferences:
  - apiVersion: config.openshift.io/v1
    kind: ClusterVersion
    name: version
    uid: cd81594b-4f6c-46d6-9369-75deef542ec8
  resourceVersion: "8617"
  uid: 1c35352e-3636-4f2b-a3b1-c84ebc6681e0
spec:
  finalizers:
  - kubernetes
status:
  phase: Active
3. Check the CatalogSource pod status; it is crashing.
MacBook-Pro:~ jianzhang$ oc get catalogsource -n openshift-marketplace jian-operators -o yaml
apiVersion: operators.coreos.com/v1alpha1
kind: CatalogSource
metadata:
  creationTimestamp: "2022-08-16T02:24:20Z"
  generation: 1
  name: jian-operators
  namespace: openshift-marketplace
  resourceVersion: "106145"
  uid: 6a75ecc9-7b88-4411-bcf5-e34618f9b3cd
spec:
  displayName: Jian Operators
  image: quay.io/olmqe/etcd-index:v1
  priority: -100
  publisher: Jian
  sourceType: grpc
  updateStrategy:
    registryPoll:
      interval: 10m0s
status:
  connectionState:
    address: jian-operators.openshift-marketplace.svc:50051
    lastConnect: "2022-08-16T03:12:28Z"
    lastObservedState: TRANSIENT_FAILURE
  latestImageRegistryPoll: "2022-08-16T02:34:21Z"
  registryService:
    createdAt: "2022-08-16T02:24:20Z"
    port: "50051"
    protocol: grpc
    serviceName: jian-operators
    serviceNamespace: openshift-marketplace
MacBook-Pro:~ jianzhang$ oc get pods -n openshift-marketplace
NAME                                                              READY   STATUS             RESTARTS       AGE
28bb83ea022e9728d25570ab0adbe09a31d6a0a606917488e0ddb00f925mnfw   0/1     Completed          0              3h23m
7049ea48beb27a712fa506b76ad672be201ce5d3a6a93d627a0091e0fesvdlj   0/1     Completed          0              3h23m
certified-operators-ftt2n                                         1/1     Running            0              3h49m
community-operators-27dx9                                         1/1     Running            0              3h49m
jian-operators-5zq7d                                              0/1     CrashLoopBackOff   12 (71s ago)   38m
jian-operators-gpg4v                                              0/1     CrashLoopBackOff   14 (57s ago)   48m
marketplace-operator-9c8496b58-2jfmv                              1/1     Running            0              3h56m
qe-app-registry-rqrrv                                             1/1     Running            0              141m
redhat-marketplace-s6zrj                                          1/1     Running            0              3h49m
redhat-operators-54cqr                                            1/1     Running            0              3h49m
MacBook-Pro:~ jianzhang$ oc -n openshift-marketplace logs jian-operators-gpg4v
Error: open /etc/nsswitch.conf: permission denied
Usage:
  opm registry serve [flags]
Flags:
  -d, --database string          relative path to sqlite db (default "bundles.db")
      --debug                    enable debug logging
  -h, --help                     help for serve
  -p, --port string              port number to serve on (default "50051")
      --skip-migrate             do not attempt to migrate to the latest db revision when starting
  -t, --termination-log string   path to a container termination log file (default "/dev/termination-log")
      --timeout-seconds string   Timeout in seconds. This flag will be removed later. (default "infinite")
Global Flags:
      --skip-tls   skip TLS certificate verification for container image registries while pulling bundles or index
4. Create a namespace with the `privileged` pod-security labels.
MacBook-Pro:~ jianzhang$ oc get ns debug -o yaml
apiVersion: v1
kind: Namespace
metadata:
  annotations:
    openshift.io/sa.scc.mcs: s0:c30,c10
    openshift.io/sa.scc.supplemental-groups: 1000890000/10000
    openshift.io/sa.scc.uid-range: 1000890000/10000
  creationTimestamp: "2022-08-16T02:46:41Z"
  labels:
    kubernetes.io/metadata.name: debug
    pod-security.kubernetes.io/audit: privileged
    pod-security.kubernetes.io/enforce: privileged
    pod-security.kubernetes.io/warn: privileged
    security.openshift.io/scc.podSecurityLabelSync: "false"
  name: debug
  resourceVersion: "95718"
  uid: bdf93839-6c42-4365-a65c-d9c0b9fe0504
spec:
  finalizers:
  - kubernetes
status:
  phase: Active
5. Deploy a CatalogSource in that namespace, as in step 2 above. It still crashes.
MacBook-Pro:~ jianzhang$ oc get pods -n debug
NAME                   READY   STATUS             RESTARTS        AGE
jian-operators-4g5ln   0/1     CrashLoopBackOff   10 (114s ago)   28m
jian-operators-wn766   0/1     CrashLoopBackOff   8 (2m25s ago)   18m
MacBook-Pro:~ jianzhang$ oc -n debug logs jian-operators-wn766
Error: open /etc/nsswitch.conf: permission denied
Usage:
  opm registry serve [flags]
Flags:
  -d, --database string          relative path to sqlite db (default "bundles.db")
      --debug                    enable debug logging
  -h, --help                     help for serve
  -p, --port string              port number to serve on (default "50051")
      --skip-migrate             do not attempt to migrate to the latest db revision when starting
  -t, --termination-log string   path to a container termination log file (default "/dev/termination-log")
      --timeout-seconds string   Timeout in seconds. This flag will be removed later. (default "infinite")
Global Flags:
      --skip-tls   skip TLS certificate verification for container image registries while pulling bundles or index
Actual results:
The sql-based index image created by the old opm version cannot be run.
MacBook-Pro:~ jianzhang$ oc -n debug logs jian-operators-wn766
Error: open /etc/nsswitch.conf: permission denied
Expected results:
The old SQL-based index image runs well. Or we have a workaround for it.
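One possible mitigation, consistent with the note above that indexes built by the new opm don't hit this, is to rebuild the index with a recent opm (image references below are placeholders, not the images from this report):
~~~
# Rebuild the SQLite-based index with a current opm; since sqlite catalogs are
# deprecated, migrating to file-based catalogs (opm migrate / opm render) is the longer-term path.
opm index add \
  --bundles quay.io/example/my-operator-bundle:v1.0.0 \
  --tag quay.io/example/my-operator-index:v1.0.0-rebuilt
~~~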
Additional info:
I switched to another old SQL-based index image and tried again; it hits a different permission issue.
MacBook-Pro:~ jianzhang$ oc get catalogsource
NAME             DISPLAY          TYPE   PUBLISHER   AGE
jian-operators   Jian Operators   grpc   Jian        37m
xia-operators    Xia Operators    grpc   Xia         101s
MacBook-Pro:~ jianzhang$ oc get catalogsource xia-operators -o yaml
apiVersion: operators.coreos.com/v1alpha1
kind: CatalogSource
metadata:
  creationTimestamp: "2022-08-16T03:22:38Z"
  generation: 1
  name: xia-operators
  namespace: debug
  resourceVersion: "110629"
  uid: 8be42e68-43be-4fd4-9b67-c74edc5e6353
spec:
  displayName: Xia Operators
  image: quay.io/olmqe/ditto-index:test-xzha-1
  priority: -100
  publisher: Xia
  sourceType: grpc
  updateStrategy:
    registryPoll:
      interval: 10m0s
status:
  connectionState:
    address: xia-operators.debug.svc:50051
    lastConnect: "2022-08-16T03:24:18Z"
    lastObservedState: CONNECTING
  registryService:
    createdAt: "2022-08-16T03:22:38Z"
    port: "50051"
    protocol: grpc
    serviceName: xia-operators
    serviceNamespace: debug
MacBook-Pro:~ jianzhang$ oc project
Using project "debug" on server "https://api.qe-daily-412-0816.ibmcloud.qe.devcluster.openshift.com:6443".
MacBook-Pro:~ jianzhang$ oc get pods
NAME                   READY   STATUS             RESTARTS         AGE
jian-operators-4g5ln   0/1     CrashLoopBackOff   11 (3m41s ago)   35m
jian-operators-wn766   0/1     CrashLoopBackOff   9 (4m13s ago)    25m
xia-operators-6wgjt    0/1     CrashLoopBackOff   1 (8s ago)       13s
MacBook-Pro:~ jianzhang$ oc logs xia-operators-6wgjt
time="2022-08-16T03:22:43Z" level=warning msg="\x1b[1;33mDEPRECATION NOTICE:\nSqlite-based catalogs and their related subcommands are deprecated. Support for\nthem will be removed in a future release. Please migrate your catalog workflows\nto the new file-based catalog format.\x1b[0m"
Error: open ./db-609956243: permission denied
Usage:
  opm registry serve [flags]
Flags:
  -d, --database string   relative path to sqlite db (default "bundles.db")
      --debug             enable debug logging
Even if that namespace is `privileged` (see the `oc get ns debug -o yaml` output in step 4 above).
However, both of them work well on a 4.11 cluster, as follows:
MacBook-Pro:~ jianzhang$ oc get clusterversion
NAME      VERSION                              AVAILABLE   PROGRESSING   SINCE   STATUS
version   4.11.0-0.nightly-2022-08-15-152346   True        False         91m     Cluster version is 4.11.0-0.nightly-2022-08-15-152346
MacBook-Pro:~ jianzhang$ oc get catalogsource
NAME                  DISPLAY               TYPE   PUBLISHER   AGE
certified-operators   Certified Operators   grpc   Red Hat     106m
community-operators   Community Operators   grpc   Red Hat     106m
jian-operators        Jian Operators        grpc   Jian        48m
redhat-marketplace    Red Hat Marketplace   grpc   Red Hat     106m
redhat-operators      Red Hat Operators     grpc   Red Hat     106m
xia-operators         Xia Operators         grpc   Xia         6s
MacBook-Pro:~ jianzhang$ oc get pods
NAME                                   READY   STATUS    RESTARTS   AGE
certified-operators-fsjc8              1/1     Running   0          107m
community-operators-9qvzt              1/1     Running   0          107m
jian-operators-n5s8c                   1/1     Running   0          48m
marketplace-operator-7b777f747-22rwq   1/1     Running   0          109m
redhat-marketplace-2mgrl               1/1     Running   0          107m
redhat-operators-72q6z                 1/1     Running   0          107m
xia-operators-ngq86                    1/1     Running   0          23s
MacBook-Pro:~ jianzhang$ oc get catalogsource jian-operators -o yaml
apiVersion: operators.coreos.com/v1alpha1
kind: CatalogSource
metadata:
  creationTimestamp: "2022-08-16T02:39:52Z"
  generation: 1
  name: jian-operators
  namespace: openshift-marketplace
  resourceVersion: "58565"
  uid: 481a6fbe-00a5-4af5-86f7-d7413c658db3
spec:
  displayName: Jian Operators
  image: quay.io/olmqe/etcd-index:v1
  priority: -100
  publisher: Jian
  sourceType: grpc
  updateStrategy:
    registryPoll:
      interval: 10m0s
status:
  connectionState:
    address: jian-operators.openshift-marketplace.svc:50051
    lastConnect: "2022-08-16T02:44:45Z"
    lastObservedState: READY
  latestImageRegistryPoll: "2022-08-16T03:24:54Z"
  registryService:
    createdAt: "2022-08-16T02:39:52Z"
    port: "50051"
    protocol: grpc
    serviceName: jian-operators
    serviceNamespace: openshift-marketplace
MacBook-Pro:~ jianzhang$ oc get catalogsource xia-operators -o yaml
apiVersion: operators.coreos.com/v1alpha1
kind: CatalogSource
metadata:
  creationTimestamp: "2022-08-16T03:28:07Z"
  generation: 1
  name: xia-operators
  namespace: openshift-marketplace
  resourceVersion: "59886"
  uid: a270f665-ee0b-49a5-badb-d3394c7a9344
spec:
  displayName: Xia Operators
  image: quay.io/olmqe/ditto-index:test-xzha-1
  priority: -100
  publisher: Xia
  sourceType: grpc
  updateStrategy:
    registryPoll:
      interval: 10m0s
status:
  connectionState:
    address: xia-operators.openshift-marketplace.svc:50051
    lastConnect: "2022-08-16T03:28:27Z"
    lastObservedState: READY
  registryService:
    createdAt: "2022-08-16T03:28:07Z"
    port: "50051"
    protocol: grpc
    serviceName: xia-operators
    serviceNamespace: openshift-marketplace
MacBook-Pro:~ jianzhang$ oc get ns openshift-marketplace -o yaml
apiVersion: v1
kind: Namespace
metadata:
  annotations:
    capability.openshift.io/name: marketplace
    include.release.openshift.io/ibm-cloud-managed: "true"
    include.release.openshift.io/self-managed-high-availability: "true"
    include.release.openshift.io/single-node-developer: "true"
    openshift.io/node-selector: ""
    openshift.io/sa.scc.mcs: s0:c16,c5
    openshift.io/sa.scc.supplemental-groups: 1000250000/10000
    openshift.io/sa.scc.uid-range: 1000250000/10000
    workload.openshift.io/allowed: management
  creationTimestamp: "2022-08-16T01:38:10Z"
  labels:
    kubernetes.io/metadata.name: openshift-marketplace
    olm.operatorgroup.uid/24dae571-2843-445b-b09f-5a4631cb25ba: ""
    openshift.io/cluster-monitoring: "true"
    pod-security.kubernetes.io/audit: baseline
    pod-security.kubernetes.io/warn: baseline
  name: openshift-marketplace
  ownerReferences:
  - apiVersion: config.openshift.io/v1
    kind: ClusterVersion
    name: version
    uid: 470d072e-37d9-4203-bc5a-c675800d593c
  resourceVersion: "6981"
  uid: 554a5ceb-8343-46f4-ae69-af36ee45d7fe
spec:
  finalizers:
  - kubernetes
status:
  phase: Active
Description of problem:
OVN-Kubernetes master is crashing during upgrade from 4.11.5 to 4.11.6
Version-Release number of selected component (if applicable):
4.11.5 to 4.11.6
cannot clean up egress default deny ACL name: cannot update old NetworkPolicy ACLs for namespace ocm-myuser-1urk47c6ti1n94n1spdvo9902as3klar-sd6: error in transact with ops [{Op:update Table:ACL Row:map[action:drop direction:from-lport external_ids:{GoMap:map[default-deny-policy-type:Egress]} log:false match:inport == @a12995145443578534523_egressDefaultDeny meter:{GoSet:[acl-logging]} name:{GoSet:[ocm-myuser-1urk47c6ti1n94n1spdvo9902as3klar-sd6_egressDefaultDeny]} options:{GoMap:map[apply-after-lb:true]} priority:1000 severity:{GoSet:[info]}] Rows:[] Columns:[] Mutations:[] Timeout:<nil> Where:[where column _uuid == {5277db54-dd96-4c4d-bbed-99142cab91e7}] Until: Durable:<nil> Comment:<nil> Lock:<nil> UUIDName:}] results [{Count:0 Error:constraint violation Details:"ocm-myuser-1urk47c6ti1n94n1spdvo9902as3klar-sd6_egressDefaultDeny" length 65 is greater than maximum allowed length 63 UUID:{GoUUID:} Rows:[]}] and errors
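Since the generated ACL name is "<namespace>_egressDefaultDeny" and OVN caps names at 63 characters, any namespace name longer than 45 characters will trip this; a rough way to list candidates on a cluster:
~~~
# Namespaces whose name would push "<name>_egressDefaultDeny" past 63 characters
oc get ns -o jsonpath='{range .items[*]}{.metadata.name}{"\n"}{end}' | awk 'length($0) > 45'
~~~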
This is a clone of issue OCPBUGS-5182. The following is the description of the original issue:
—
Description of problem:
Deploy an IPI cluster on Azure with region westeurope and VM size EC96iads_v5 or EC96ias_v5. Installation fails with the error below:
12-15 11:47:03.429 level=error msg=Error: creating Linux Virtual Machine: (Name "jima-15a-m6fzd-bootstrap" / Resource Group "jima-15a-m6fzd-rg"): compute.VirtualMachinesClient#CreateOrUpdate: Failure sending request: StatusCode=400 -- Original Error: Code="BadRequest" Message="The VM size 'Standard_EC96iads_v5' is not supported for creation of VMs and Virtual Machine Scale Set with '<NULL>' security type."
Similar to https://bugzilla.redhat.com/show_bug.cgi?id=2055247. From the Azure portal, we can see that both VM sizes EC96iads_v5 and EC96ias_v5 are confidential compute types, so they might need the same handling as in bug 2055247.
Version-Release number of selected component (if applicable):
4.12 nightly build
How reproducible:
Always
Steps to Reproduce:
1. Prepare the install-config.yaml file, setting the region to westeurope and the VM size to EC96iads_v5 or EC96ias_v5 (see the excerpt below)
2. Deploy the IPI Azure cluster
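For step 1, an illustrative install-config.yaml excerpt (only the fields relevant here; credentials, networking, and other required fields are omitted):
~~~
platform:
  azure:
    region: westeurope
controlPlane:
  name: master
  platform:
    azure:
      type: Standard_EC96iads_v5   # or Standard_EC96ias_v5
compute:
- name: worker
  platform:
    azure:
      type: Standard_EC96iads_v5
~~~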
Actual results:
Install failed with error in description
Expected results:
The installer should exit during validation and show the expected error message.
Additional info:
Description of problem:
Seems ART is having trouble building OLM images: https://redhat-internal.slack.com/archives/CB95J6R4N/p1676531421724929
I've already fixed master:
* https://github.com/openshift/cluster-policy-controller/pull/103
* https://github.com/openshift/cluster-policy-controller/pull/101
Need a bug to backport...
Version-Release number of selected component (if applicable):
4.12
How reproducible:
Steps to Reproduce:
Actual results:
Expected results:
Additional info:
Description of problem:
Using OLM descriptor components deletes the operand.
Steps to Reproduce:
Name: DNS
Description: Please change the "DNS" component to be a subcomponent "DNS" of the "Networking" component.
Component: change to "Networking".
Subcomponent: change to "DNS".
Existing fields (default assignee, default QA contact, default CC email list, etc.) should remain the same as they currently are.
Default Assignee: aos-network-edge-staff@bot.bugzilla.redhat.com
Default QA Contact: hongli@redhat.com
Default CC List: aos-network-edge-staff@bot.bugzilla.redhat.com
Additional Notes:
I filled in "Default CC email list" because the form validation would not permit me to omit it. However, it can be left empty in Bugzilla (it is currently empty).
If possible, we would like this change to be done prior to the Bugzilla-to-Jira migration to avoid the need to make the change after the migration.
Description of problem:
The `create a project` link is enabled for users who do not have permission to create a project. This issue surfaces in the developer sandbox.
Version-Release number of selected component (if applicable):
4.11.5
How reproducible:
Steps to Reproduce:
1. Log into the dev sandbox, or a cluster where the user does not have permission to create a project
2. Go directly to the URL /topology/all-namespaces
Actual results:
The `create a project` link is enabled. Upon clicking the link and submitting the form, the project fails to create, as expected.
Expected results:
`create a project` link should only be available to users with the correct permissions.
Additional info:
The project list pages are not directly available to the user in the UI through the project selector; the user must go directly to the URL. It's possible to encounter this situation when a user logs in with multiple accounts and returns to a previous URL.
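For reference, whether the current user may actually create projects can be checked from the CLI; the console gates this action on an equivalent self-subject access review (assumes the user is logged in with oc):
~~~
# Returns "yes" only for users allowed to request new projects
oc auth can-i create projectrequests.project.openshift.io
~~~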