Note: this page shows the Feature-Based Change Log for a release
These features were completed when this image was assembled
Problem:
Certain Insights Advisor features differ between RHEL Advisor and OCP Advisor
Goal:
Address top priority UI misalignments between RHEL and OCP advisor. Address UI features dropped from Insights Advisor for OCP GA.
Scope:
Specific tasks and their priority are tracked in https://issues.redhat.com/browse/CCXDEV-7432
This contains all the Insights Advisor widget deliverables for the OCP release 4.11.
Scope:
It covers only minor bug fixes and improvements:
Scenario: Check if the Insights Advisor widget in the OCP WebConsole UI shows the time of the last data analysis
Given: OCP WebConsole UI and the cluster dashboard is accessible
And: CCX external data pipeline is in a working state
And: administrator A1 has access to his cluster's dashboard
And: Insights Operator for this cluster is sending archives
When: administrator A1 clicks on the Insights Advisor widget
Then: the results of the last analysis are shown in the Insights Advisor widget
And: the time of the last analysis is shown in the Insights Advisor widget
Acceptance criteria:
max_over_time(timestamp(changes(insightsclient_request_send_total{status_code="202"}[1m]) > 0)[24h:1m])
Show the error message (mocked in CCXDEV-5868) if the Prometheus metrics `cluster_operator_conditions{name="insights"}` contain two true conditions at the same time: UploadDegraded and Degraded. This state occurs if there was an Insights Operator archive upload error, i.e., problems with the pipeline.
Expected for 4.11 OCP release.
Cloning the existing rule should end up with a new rule in the same namespace.
Modifications can now be done to the new rule.
(Optional) You can silence the existing rule.
Create a new PrometheusRule object inside the namespace that includes the metrics you need to form the alerting rule.
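As a sketch, such a user-defined rule could look like the following PrometheusRule (the namespace, group, alert name, and expression here are illustrative):

apiVersion: monitoring.coreos.com/v1
kind: PrometheusRule
metadata:
  name: example-alert
  namespace: my-namespace
spec:
  groups:
  - name: example.rules
    rules:
    - alert: ExampleHighErrorRate
      # Fires when the 5xx rate stays above 1 req/s for 10 minutes.
      expr: rate(http_requests_total{code=~"5.."}[5m]) > 1
      for: 10m
      labels:
        severity: warning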
CMO should reconcile the platform Prometheus configuration with the AlertingRule resources.
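For illustration, an AlertingRule resource might look roughly like this (the API group and field layout are an assumption, mirroring the PrometheusRule group/rule shape proposed for this CRD):

apiVersion: monitoring.openshift.io/v1
kind: AlertingRule
metadata:
  name: example-platform-alerts
  namespace: openshift-monitoring
spec:
  groups:
  - name: example.rules
    rules:
    - alert: ExampleHighNodeMemory
      # Illustrative platform-level alert expression.
      expr: instance:node_memory_utilisation:ratio > 0.95
      for: 15m
      labels:
        severity: warning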
DoD
CMO should reconcile the platform Prometheus configuration with the alert-relabel-config resources.
DoD
Managing PVs at scale for a fleet creates difficulties where "one size does not fit all". The ability for SRE to deploy Prometheus with PVs and have retention based on a desired size would enable easier management of these volumes across the fleet.
The prometheus-operator exposes retentionSize.
Field | Description |
---|---|
retentionSize | Maximum amount of disk space used by blocks. Supported units: B, KB, MB, GB, TB, PB, EB. Ex: 512MB. |
This is a feature request to enable this configuration option via CMO cluster-monitoring-config ConfigMap.
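A minimal sketch of what this could look like in the ConfigMap, assuming the field mirrors the prometheus-operator naming:

apiVersion: v1
kind: ConfigMap
metadata:
  name: cluster-monitoring-config
  namespace: openshift-monitoring
data:
  config.yaml: |
    prometheusK8s:
      # Prometheus drops the oldest blocks once on-disk data exceeds this size.
      retentionSize: 50GB
      volumeClaimTemplate:
        spec:
          resources:
            requests:
              storage: 60Gi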
Today, all configuration, for example individual routing configuration, is done via a single configuration file that only admins have access to. If an environment uses multiple tenants and each tenant, for example, has different systems that they are using to notify teams in case of an issue, then someone needs to file a request w/ an admin to add the required settings.
That can be bothersome for individual teams, since requests like that usually disappear in the backlog of an administrator. At the same time, administrators might get tons of requests that they have to look at and prioritize, which takes them away from more crucial work.
We would like to introduce a more self-service approach where individual teams can create their own configuration for their needs without the administrator's involvement.
Last but not least, since Monitoring is deployed as a core service of OpenShift, there are multiple restrictions that the SRE team has to apply to all OSD and ROSA clusters. One restriction concerns customers' use of the central Alertmanager that is owned and managed by the SRE team: SRE can't give access to the centrally managed secret due to security concerns, so users cannot add their own routing information.
Provide a new API (based on the Operator CRD approach) as part of the Prometheus Operator that allows creating a subset of the Alertmanager configuration without touching the central Alertmanager configuration file.
Please note that we do not plan to support additional individual webhooks with this work. Customers will need to deploy their own version of the third party webhooks.
Team A wants to send all their important notifications to a specific Slack channel.
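As a hedged example of what that could look like with the upstream AlertmanagerConfig CRD (the channel and secret names are illustrative):

apiVersion: monitoring.coreos.com/v1beta1
kind: AlertmanagerConfig
metadata:
  name: team-a-notifications
  namespace: team-a
spec:
  route:
    receiver: team-a-slack
  receivers:
  - name: team-a-slack
    slackConfigs:
    - channel: '#team-a-alerts'
      # Webhook URL read from a Secret in the same namespace.
      apiURL:
        name: team-a-slack-webhook
        key: url

Because the resource is namespaced, only alerts originating from that namespace are routed through it, which is what keeps tenants isolated from the central Alertmanager configuration.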
* CI - CI is running, tests are automated and merged.
* Release Enablement <link to Feature Enablement Presentation>
* DEV - Upstream code and tests merged: <link to meaningful PR or GitHub Issue>
* DEV - Upstream documentation merged: <link to meaningful PR or GitHub Issue>
* DEV - Downstream build attached to advisory: <link to errata>
* QE - Test plans in Polarion: <link or reference to Polarion>
* QE - Automated tests merged: <link or reference to automated tests>
* DOC - Downstream documentation merged: <link to meaningful PR>
Now that upstream supports AlertmanagerConfig v1beta1 (see MON-2290 and https://github.com/prometheus-operator/prometheus-operator/pull/4709), it should be deployed by CMO.
DoD:
DoD
DoD
Copied from https://github.com/openshift-cs/managed-openshift/issues/60
Which service is this feature request for?
OpenShift Dedicated and Red Hat OpenShift Service on AWS
What are you trying to do?
Allow ROSA/OSD to integrate with AWS Managed Prometheus.
Describe the solution you'd like
Remote-write of metrics is supported in OpenShift but it does not work with AWS Managed Prometheus since AWS Managed Prometheus requires AWS SigV4 auth.
Describe alternatives you've considered
There is the workaround to use the "AWS SigV4 Proxy" but I'd think this is not properly supported by RH.
https://mobb.ninja/docs/rosa/cluster-metrics-to-aws-prometheus/
Additional context
The customer wants to use an open and portable solution to centralize metrics storage and analysis. If they also deploy to other clouds, they don't want to have to re-configure. Since most clouds offer a Prometheus service (or it's easy to self-manage Prometheus), app migration should be simplified.
The cluster monitoring operator should allow OpenShift customers to configure remote write with all authentication methods supported by upstream Prometheus.
We will extend CMO's configuration API to support the following authentications with remote write:
Customers want to send metrics to AWS Managed Prometheus that require sigv4 authentication (see https://docs.aws.amazon.com/prometheus/latest/userguide/AMP-secure-metric-ingestion.html#AMP-secure-auth).
Prometheus and Prometheus operator already support custom Authorization for remote write. This should be possible to configure the same in the CMO configuration:
apiVersion: v1
kind: ConfigMap
metadata:
  name: cluster-monitoring-config
  namespace: openshift-monitoring
data:
  config.yaml: |
    prometheusK8s:
      remoteWrite:
      - url: "https://remote-write.endpoint"
        authorization:
          type: Bearer
          credentials:
            name: credentials
            key: token
DoD:
Prometheus and Prometheus operator already support sigv4 authentication for remote write. This should be possible to configure the same in the CMO configuration:
apiVersion: v1
kind: ConfigMap
metadata:
  name: cluster-monitoring-config
  namespace: openshift-monitoring
data:
  config.yaml: |
    prometheusK8s:
      remoteWrite:
      - url: "https://remote-write.endpoint"
        sigv4:
          accessKey:
            name: aws-credentials
            key: access
          secretKey:
            name: aws-credentials
            key: secret
          profile: "SomeProfile"
          roleArn: "SomeRoleArn"
DoD:
As a WMCO user, I want to make sure containerd logging information has been updated in documents and scripts.
Configure audit logging to capture login, logout and login failure details
TODO(PM): update this
A customer needs login, logout and login failure details inside OpenShift Container Platform.
I have checked for this on my test cluster, but the audit logs do not contain any user name specifying login or logout details. For successful or failed logins, on the CLI as well as the OpenShift console, we only see 'Login successful' or 'Invalid credentials'.
Expected results: Login, logout and login failures should be captured in audit logging.
The apiserver pods today have `/var/log/<kube|oauth|openshift>-apiserver` mounted from the host and create audit files there using the upstream audit event format (JSON lines following https://github.com/kubernetes/apiserver/blob/92392ef22153d75b3645b0ae339f89c12767fb52/pkg/apis/audit/v1/types.go#L72). These events are apiserver specific, but as oauth authentication flow events are also requests, we can use the apiserver event format to log logins, login failures and logouts. Hence, we propose to make oauth-server create /var/log/oauth-server/audit.log files on the master nodes using that format.
When the login flow does not finish within a certain time (e.g. 10min), we can artificially create an event to show a login failure in the audit logs.
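For illustration only, such an event could look roughly like the following (rendered as YAML for readability; the actual audit.log would contain one JSON object per line, and all values below are made up):

apiVersion: audit.k8s.io/v1
kind: Event
level: Metadata
auditID: 00000000-0000-0000-0000-000000000000
stage: ResponseComplete
requestURI: /oauth/authorize
verb: post
user:
  username: testuser
sourceIPs:
- 10.0.0.1
responseStatus:
  code: 401
annotations:
  # Hypothetical annotations capturing the authentication decision.
  authentication.openshift.io/decision: deny
  authentication.openshift.io/username: testuser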
Right now there's no way to generate audit logs from this.
Let the Cluster Authentication Operator deliver the policy to OAuthServer.
In order to know if authn events should be logged, OAuthServer needs to be aware of it.
* Stanislav Láznička: Create an observer to deliver the audit policy to the oauth server
Make the authentication-operator react to the new audit field in the oauth.config/cluster object. Write an observer watching this field, such an observer will translate the top-level configuration into oauth-server config and add it to the rest of the observed config.
Right now there's no way to generate audit logs from this.
OCP/Telco Definition of Done
Feature Template descriptions and documentation.
Early customer feedback is that they see SNO as a great solution covering smaller footprint deployment, but are wondering what is the evolution story OpenShift is going to provide where more capacity or high availability are needed in the future.
While migration tooling (moving workload/config to new cluster) could be a mid-term solution, customer desire is not to include extra hardware to be involved in this process.
For Telecommunications Providers at the Far Edge, the intent is to start small and then grow. Many of these operators will start with a SNO-based DU deployment as an initial investment; but as DUs evolve, different segments of the radio spectrum are added, various radio hardware is provisioned, and features are delivered to the Far Edge, the Telecommunications Providers want the ability for their Far Edge deployments to scale up from 1 node to 2 nodes to n nodes. On the opposite side of the spectrum from SNO is MMIMO, where there is a robust cluster and workloads use HPA.
Requirement | Notes | isMvp? |
---|---|---|
CI - MUST be running successfully with test automation | This is a requirement for ALL features. | YES |
Release Technical Enablement | Provide necessary release enablement details and documents. | YES |
This Section:
This Section: What does the person writing code, testing, documenting need to know? What context can be provided to frame this feature.
Questions to be addressed:
This is a ticket meant to track all the OCP PRs that are involved in the implementation of the SNO + workers enhancement
OCP/Telco Definition of Done
Epic Template descriptions and documentation.
<--- Cut-n-Paste the entire contents of this description into your new Epic --->
Rebase openshift/builder to k8s 1.24
4.11 MVP Requirements
Out of scope use cases (that are part of the Kubeframe/factory project):
Questions to be addressed:
As a deployer, I want to be able to:
so that I can achieve
Currently the Assisted Service generates the credentials by running the ignition generation step of the openshift-installer. This is why the credentials are only retrievable from the REST API towards the end of the installation.
In the BILLI usage, which takes down the assisted service before the installation is complete, there is no obvious point at which to alert the user that they should retrieve the credentials. This means that we either need to:
This requires/does not require a design proposal.
This requires/does not require a feature gate.
The AWS-specific code added in OCPPLAN-6006 needs to become GA and with this we want to introduce a couple of Day2 improvements.
Currently the AWS tags are defined and applied at installation time only and saved in the infrastructure CRD's status field for further operator use; the operators in turn just add the tags during creation.
Saving in the status field means it's not included in Velero backups, which is a crucial feature for customers and Day2.
Thus the status.resourceTags field should be deprecated in favour of a newly created spec.resourceTags with the same content. The installer should only populate the spec, consumers of the infrastructure CRD must favour the spec over the status definition if both are supplied, otherwise the status should be honored and a warning shall be issued.
Being part of the spec, the behaviour should also tag existing resources that do not have the tags yet and once the tags in the infrastructure CRD are changed all the AWS resources should be updated accordingly.
On AWS this can be done without re-creating any resources (the behaviour is basically an upsert by tag key) and is possible without service interruption as it is a metadata operation.
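An illustrative sketch of the day-2 editable tags, assuming the new spec field lands under the AWS platform spec, mirroring the existing status layout (tag keys and values are made up):

apiVersion: config.openshift.io/v1
kind: Infrastructure
metadata:
  name: cluster
spec:
  platformSpec:
    type: AWS
    aws:
      resourceTags:
      - key: cost-center
        value: "1234"
      - key: owner
        value: team-a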
Tag deletes continue to be out of scope, as the customer can still have custom tags applied to the resources that we do not want to delete.
Due to the ongoing intree/out of tree split on the cloud and CSI providers, this should not apply to clusters with intree providers (!= "external").
Once confident we have all components updated, we should introduce an end2end test that makes sure we never create resources that are untagged.
After that, we can remove the experimental flag and make this a GA feature.
Requirement | Notes | isMvp? |
---|---|---|
CI - MUST be running successfully with test automation | This is a requirement for ALL features. | YES |
Release Technical Enablement | Provide necessary release enablement details and documents. | YES |
List any affected packages or components.
RFE-1101 described user-defined tags for AWS resources provisioned by an OCP cluster. Currently users can define tags which are added to the resources during creation. These tags cannot be updated subsequently. The propagation of the tags is controlled using an experimental flag. Before this feature goes GA we should define and implement a mechanism to exclude any experimental flags. Day2 operations and deletion of tags are not in scope.
RFE-2012 aims to make the user-defined resource tags feature GA. This means that user defined tags should be updatable.
Currently the user-defined tags during install are passed directly as parameters of the Machine and Machineset resources for the master and worker. As a result these tags cannot be updated by consulting the Infrastructure resource of the cluster where the user defined tags are written.
The MCO should be changed such that during provisioning the MCO looks up the values of the tags in the Infrastructure resource and adds the tags during creation of the EC2 resources. The MCO should also watch the infrastructure resource for changes and when the resource tags are updated it should update the tags on the EC2 instances without restarts.
Acceptance Criteria:
OCP/Telco Definition of Done
Feature Template descriptions and documentation.
<--- Cut-n-Paste the entire contents of this description into your new Feature --->
<--- Remove the descriptive text as appropriate --->
Problem
Requirement | Notes | isMvp? |
---|---|---|
CI - MUST be running successfully with test automation | This is a requirement for ALL features. | YES |
Release Technical Enablement | Provide necessary release enablement details and documents. | YES |
This Section:
This Section: What does the person writing code, testing, documenting need to know? What context can be provided to frame this feature.
Questions to be addressed:
Running the OPCT with the latest version (v0.1.0) on OCP 4.11.0, openshift-tests is reporting an incorrect counter for the "total" field.
In the example below, after the 1127th test the total follows the same counter as executed. I would also assume that the total is incorrect before that point, since as the execution continues both counters increase.
openshift-tests output format: [failed/executed/total]
started: (0/1126/1127) "[sig-storage] PersistentVolumes-expansion loopback local block volume should support online expansion on node [Suite:openshift/conformance/parallel] [Suite:k8s]"
passed: (38s) 2022-08-09T17:12:21 "[sig-storage] In-tree Volumes [Driver: nfs] [Testpattern: Dynamic PV (default fs)] provisioning should provision storage with mount options [Suite:openshift/conformance/parallel] [Suite:k8s]"
started: (0/1127/1127) "[sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: tmpfs] [Testpattern: Generic Ephemeral-volume (block volmode) (late-binding)] ephemeral should support two pods which have the same volume definition [Suite:openshift/conformance/parallel] [Suite:k8s]"
passed: (6.6s) 2022-08-09T17:12:21 "[sig-storage] Downward API volume should provide container's memory request [NodeConformance] [Conformance] [Suite:openshift/conformance/parallel/minimal] [Suite:k8s]"
started: (0/1128/1128) "[sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Dynamic PV (immediate binding)] topology should fail to schedule a pod which has topologies that conflict with AllowedTopologies [Suite:openshift/conformance/parallel] [Suite:k8s]"
skip [k8s.io/kubernetes@v1.24.0/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support GenericEphemeralVolume -- skipping
Ginkgo exit error 3: exit with code 3
skipped: (400ms) 2022-08-09T17:12:21 "[sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: tmpfs] [Testpattern: Generic Ephemeral-volume (block volmode) (late-binding)] ephemeral should support two pods which have the same volume definition [Suite:openshift/conformance/parallel] [Suite:k8s]"
started: (0/1129/1129) "[sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Dynamic PV (default fs)] capacity provides storage capacity information [Suite:openshift/conformance/parallel] [Suite:k8s]"
OPCT output format [executed/total (failed failures)]
Tue, 09 Aug 2022 14:12:13 -03> Global Status: running
JOB_NAME | STATUS | RESULTS | PROGRESS | MESSAGE
openshift-conformance-validated | running | | 1112/1127 (0 failures) | status=running
openshift-kube-conformance | complete | | 352/352 (0 failures) | waiting for post-processor...
Tue, 09 Aug 2022 14:12:23 -03> Global Status: running
JOB_NAME | STATUS | RESULTS | PROGRESS | MESSAGE
openshift-conformance-validated | running | | 1120/1127 (0 failures) | status=running
openshift-kube-conformance | complete | | 352/352 (0 failures) | waiting for post-processor...
Tue, 09 Aug 2022 14:12:33 -03> Global Status: running
JOB_NAME | STATUS | RESULTS | PROGRESS | MESSAGE
openshift-conformance-validated | running | | 1139/1139 (0 failures) | status=running
openshift-kube-conformance | complete | | 352/352 (0 failures) | waiting for post-processor...
Tue, 09 Aug 2022 14:12:43 -03> Global Status: running
JOB_NAME | STATUS | RESULTS | PROGRESS | MESSAGE
openshift-conformance-validated | running | | 1185/1185 (0 failures) | status=running
openshift-kube-conformance | complete | | 352/352 (0 failures) | waiting for post-processor...
Tue, 09 Aug 2022 14:12:53 -03> Global Status: running
JOB_NAME | STATUS | RESULTS | PROGRESS | MESSAGE
openshift-conformance-validated | running | | 1188/1188 (0 failures) | status=running
openshift-kube-conformance | complete | | 352/352 (0 failures) | waiting for post-processor...
When this image was assembled, these features were not yet completed. Therefore, only the Jira Cards included here are part of this release
We drive OpenShift cross-market customer success and new customer adoption with constant improvements and feature additions to the existing capabilities of our OpenShift Core Networking (SDN and Network Edge). This feature captures that natural progression of the product.
There are definitely grey areas, but in general:
Questions to be addressed:
Create a PR in openshift/cluster-ingress-operator to implement configurable router probe timeouts.
The PR should include the following:
User Story: As a customer in a highly regulated environment, I need the ability to secure DNS traffic when forwarding requests to upstream resolvers so that I can ensure additional DNS security and data privacy.
tldr: three basic claims, the rest is explanation and one example
While bugs are an important metric, fixing bugs is different than investing in maintainability and debuggability. Investing in fixing bugs will help alleviate immediate problems, but doesn't improve the ability to address future problems. You (may) get a code base with fewer bugs, but when you add a new feature, it will still be hard to debug problems and interactions. This pushes a code base towards stagnation where it gets harder and harder to add features.
One alternative is to ask teams to produce ideas for how they would improve future maintainability and debuggability instead of focusing on immediate bugs. This would produce designs that make problem determination, bug resolution, and future feature additions faster over time.
I have a concrete example of one such outcome of focusing on bugs vs quality. We have resolved many bugs about communication failures with ingress by finding problems with point-to-point network communication. We have fixed the individual bugs, but have not improved the code for future debugging. In so doing, we chase many hard-to-diagnose problems across the stack. The alternative is to create a point-to-point network connectivity capability. This would immediately improve bug resolution and stability (detection) for kuryr, ovs, legacy sdn, network-edge, kube-apiserver, openshift-apiserver, authentication, and console. Bug fixing does not produce the same impact.
We need more investment in our future selves. Saying, "teams should reserve this" doesn't seem to be universally effective. Perhaps an approach that directly asks for designs and impacts and then follows up by placing the items directly in planning and prioritizing against PM feature requests would give teams the confidence to invest in these areas and give broad exposure to systemic problems.
Relevant links:
In OCP 4.8 the router was changed to use the "random" balancing algorithm for non-passthrough routes by default. It was previously "leastconn".
Bug https://bugzilla.redhat.com/show_bug.cgi?id=2007581 shows that using "random" by default incurs significant memory overhead for each backend that uses it.
PR https://github.com/openshift/cluster-ingress-operator/pull/663
reverted the change and made "leastconn" the default again (OCP 4.8 onwards).
The analysis in https://bugzilla.redhat.com/show_bug.cgi?id=2007581#c40 shows that the default haproxy behaviour is to multiply the weight (specified in the route CR) by 16 as it builds its data structures for each backend. If no weight is specified then openshift-router sets the weight to 256. If you have many, many thousands of routes then this balloons quickly and leads to a significant increase in memory usage, as highlighted by customer cases attached to BZ#2007581.
The purpose of this issue is to explore changing the openshift-router default weight (i.e., 256) to something smaller, or indeed leaving it unset (assuming no explicit weight has been requested), and to measure the memory usage within the context of the existing perf&scale tests that we use for vetting new haproxy releases.
It may be that the low-hanging change is to not default to weight=256 for backends that only have one pod replica (i.e., if no value specified, and there is only 1 pod replica, then don't default to 256 for that single server entry).
Outcome: does changing the [default] weight value make it feasible to switch back to "random" as the default balancing algorithm for a future OCP release.
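For reference, the weight in question is set per backend in the Route spec; a minimal sketch with an explicitly low weight (names illustrative):

apiVersion: route.openshift.io/v1
kind: Route
metadata:
  name: example
spec:
  to:
    kind: Service
    name: example
    # Explicit low weight; haproxy multiplies this by 16 internally,
    # so 1 yields a far smaller server weight than the default 256.
    weight: 1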
Revert router to using "random" once again in 4.11 once analysis is done on impact of weight and static memory allocation.
Per the 4.6.30 Monitoring DNS Post Mortem, we should add E2E tests to openshift/cluster-dns-operator to reduce the risk that changes to our CoreDNS configuration break DNS resolution for clients.
To begin with, we add E2E DNS testing for 2 or 3 client libraries to establish a framework for testing DNS resolvers; the work of adding additional client libraries to this framework can be left for follow-up stories. Two common libraries are Go's resolver and glibc's resolver. A somewhat common library that is known to have quirks is musl libc's resolver, which uses a shorter timeout value than glibc's resolver and reportedly has issues with the EDNS0 protocol extension. It would also make sense to test Java or other popular languages or runtimes that have their own resolvers.
Additionally, as talked about in our DNS Issue Retro & Testing Coverage meeting on Feb 28th 2024, we also decided to add a test for testing a non-EDNS0 query for a larger than 512 byte record, as once was an issue in bug OCPBUGS-27397.
The ultimate goal is that the test will inform us when a change to OpenShift's DNS or networking has an effect that may impact end-user applications.
Requirement | Notes | isMvp? |
---|---|---|
CI - MUST be running successfully with test automation | This is a requirement for ALL features. | YES |
Release Technical Enablement | Provide necessary release enablement details and documents. | YES |
This Section:
This Section: What does the person writing code, testing, documenting need to know? What context can be provided to frame this feature.
Questions to be addressed:
When viewing the Installed Operators list set to 'All projects' and then selecting an operator that is available in 'All namespaces' (globally installed), upon clicking the operator to view its details the user is taken into the details of that operator in the installed namespace (the project selector will switch to the install namespace).
It can then be disorienting to look at the lists of custom resource instances and see them all blank, since the lists show instances only in the currently selected project (the install namespace) and not across all namespaces the operator is available in.
It is likely that making use of the new Operator resource will improve this experience (CONSOLE-2240), though that may still be some releases away. It should be considered whether it's worth a "short term" fix in the meantime.
Note: The informational alert was not implemented. It was decided that since "All namespaces" is displayed in the radio button, the alert was not needed.
During master nodes upgrade when nodes are getting drained there's currently no protection from two or more operands going down. If your component is required to be available during upgrade or other voluntary disruptions, please consider deploying PDB to protect your operands.
The effort is tracked in https://issues.redhat.com/browse/WRKLDS-293.
Example:
Acceptance Criteria:
1. Create PDB controller in console-operator for both console and downloads pods
2. Add e2e tests for PDB in single node and multi node cluster
Note: We should consider backporting this to 4.10
Goal
Add support for PDB (Pod Disruption Budget) to the console.
Requirements:
Designs:
Customers are asking for improvements to the upgrade experience (both over-the-air and disconnected). This is a feature tracking epics required to get that work done.
OCP/Telco Definition of Done
Epic Template descriptions and documentation.
<--- Cut-n-Paste the entire contents of this description into your new Epic --->
Goal
Improve the UX on the machine config pool page to reflect the new enhancements on the cluster settings that allows users to select the ability to update the control plane only.
Background
Currently in the console, users only have the ability to complete a full cluster upgrade. For many customers, upgrades take longer than what their maintenance window allows. Users need the ability to upgrade the control plane independently of the other worker nodes.
Ex. Upgrades of huge clusters may take too long so admins may do the control plane this weekend, worker-pool-A next weekend, worker-pool-B the weekend after, etc. It is all at a pool level, they will not be able to choose specific hosts.
Requirements
Design deliverables:
Goal
Add the ability to choose between a full cluster upgrade (which exists today) or control plane upgrade (which will pause all worker pools) in the console.
Background
Currently in the console, users only have the ability to complete a full cluster upgrade. For many customers, upgrades take longer than what their maintenance window allows. Users need the ability to upgrade the control plane independently of the other worker nodes.
Ex. Upgrades of huge clusters may take too long so admins may do the control plane this weekend, worker-pool-A next weekend, worker-pool-B the weekend after, etc. It is all at a pool level, they will not be able to choose specific hosts.
Requirements
Design deliverables:
Enable sharing ConfigMap and Secret across namespaces
Requirement | Notes | isMvp? |
---|---|---|
Secrets and ConfigMaps can get shared across namespaces | | YES |
NA
NA
Consumption of RHEL entitlements has been a challenge on OCP 4 since it moved to a cluster-based entitlement model compared to the node-based (RHEL subscription manager) entitlement model. In order to provide a sufficiently similar experience to OCP 3, the entitlement certificates that are made available on the cluster (OCPBU-93) should be shared across namespaces in order to prevent the need for the cluster admin to copy these entitlements into each namespace, which leads to additional operational challenges for updating and refreshing them.
Questions to be addressed:
* What educational or reference material (docs) is required to support this product feature? For users/admins? Other functions (security officers, etc)?
* Does this feature have doc impact?
* New Content, Updates to existing content, Release Note, or No Doc Impact
* If unsure and no Technical Writer is available, please contact Content Strategy.
* What concepts do customers need to understand to be successful in [action]?
* How do we expect customers will use the feature? For what purpose(s)?
* What reference material might a customer want/need to complete [action]?
* Is there source material that can be used as reference for the Technical Writer in writing the content? If yes, please link if available.
* What is the doc impact (New Content, Updates to existing content, or Release Note)?
OCP/Telco Definition of Done
Epic Template descriptions and documentation.
<--- Cut-n-Paste the entire contents of this description into your new Epic --->
As a developer using SharedSecrets and ConfigMaps
I want to ensure all pods set readOnly: true on admission
So that I don't have pods stuck in the "Pending" state because of a bad volume mount
QE will need to verify the new Pod Admission behavior
Docs will need to note that readOnly is required and must be set to true.
None.
QE testing/verification of the feature - require readOnly to be true
Actions:
1. Create smoke test and submit to GitHub
2. Run script to integrate smoke test with Polarion
As an OpenShift engineer,
I want to initialize a validating admission webhook for the shared resource CSI driver
So that I can eventually require readOnly: true to be set on all pods that use the Shared Resource CSI Driver
None.
None.
None.
This is a prerequisite for implementing the validating admission webhook.
We need to have ART build the container image downstream so that we can add the correct image references for the CVO.
If we reference images in the CVO manifests which do not have downstream counterparts, we break the downstream build for the payload.
CI is capable of producing multiple images for a GitHub repository. For example, github.com/openshift/oc produces 4-5 images with various capabilities.
We did similar work in BUILD-234 - some of these steps are not required.
See also:
Tasks:
As an OpenShift engineer
I want the shared resource CSI Driver webhook to be installed with the cluster storage operator
So that the webhook is deployed when the CSI driver is deployed
None - no new functional capabilities will be added
None - we can verify in CI that we are deploying the webhook correctly.
None - no new functional capabilities will be added
The scope of this story is to just deploy the "hello world" webhook with the Cluster Storage Operator.
Adding the live ValidatingWebhook configuration and service will be done in a separate story.
This section includes Jira cards that are linked to an Epic, but the Epic itself is not linked to any Feature. These epics were completed when this image was assembled
https://issues.redhat.com/browse/AUTH-2 revealed that, in principle, Pod Security Admission is possible to integrate into OpenShift while retaining SCC functionality.
This epic is about the concrete steps to enable Pod Security Admission by default in OpenShift
Enhancement - https://github.com/openshift/enhancements/pull/1010
dns-operator must comply with the restricted pod security level. The current audit warning is:
{ "objectRef": "openshift-dns-operator/deployments/dns-operator", "pod-security.kubernetes.io/audit-violations": "would violate PodSecurity \"restricted:latest\": allowPrivilegeEscalation != false (containers \"dns-operator\", \"kube-rbac-proxy\" must set securityContext.allowPrivilegeEscalation=false), unrestricted capabilities (containers \"dns-operator\", \"kube-rbac-proxy\" must set securityContext.capabilities.drop=[\"ALL\"]), runAsNonRoot != true (pod or containers \"dns-operator\", \"kube-rbac-proxy\" must set securityContext.runAsNonRoot=true), seccompProfile (pod or containers \"dns-operator\", \"kube-rbac-proxy\" must set securityContext.seccompProfile.type to \"RuntimeDefault\" or \"Localhost\")" }
ingress-operator must comply with the restricted pod security level. The current audit warning is:
{ "objectRef": "openshift-ingress-operator/deployments/ingress-operator", "pod-security.kubernetes.io/audit-violations": "would violate PodSecurity \"restricted:latest\": allowPrivilegeEscalation != false (containers \"ingress-operator\", \"kube-rbac-proxy\" must set securityContext.allowPrivilegeEscalation=false), unrestricted capabilities (containers \"ingress-operator\", \"kube-rbac-proxy\" must set securityContext.capabilities.drop=[\"ALL\"]), runAsNonRoot != true (pod or containers \"ingress-operator\", \"kube-rbac-proxy\" must set securityContext.runAsNonRoot=true), seccompProfile (pod or containers \"ingress-operator\", \"kube-rbac-proxy\" must set securityContext.seccompProfile.type to \"RuntimeDefault\" or \"Localhost\")" }
HyperShift provisions OpenShift clusters with externally managed control-planes. It follows a slightly different process for provisioning clusters. For example, HyperShift uses cluster API as a backend and moves all the machine management bits to the management cluster.
Showing machine management/cluster auto-scaling tabs in the console is likely to confuse users and cause unnecessary side effects.
See Design Doc: https://docs.google.com/document/d/1k76JtRRHBdCCEjHPqKcYvbNVsuaGmRhWDLESWIm0mbo/edit#
The SERVER_FLAG controlPlaneTopology being set to 'External' is really the driving factor here; this can be done in one of two ways:
To test work related to cluster upgrade process, use a 4.10.3 cluster set on the candidate-4.10 upgrade channel using 4.11 frontend code.
If the Infrastructure.Status.ControlPlaneTopology is set to 'External', the console-operator will pass this information via the console-config.yaml to the console. Console pod will get re-deployed and will store the topology mode information as a SERVER_FLAG. Based on that value we need to suspend these notifications:
For these we will need to check `ControlPlaneTopology`, if it's set to 'External' and also check if the user can edit cluster version(either by creating a hook or an RBAC call, eg. `canEditClusterVersion`)
Check section 05 for more info: https://docs.google.com/document/d/1k76JtRRHBdCCEjHPqKcYvbNVsuaGmRhWDLESWIm0mbo/edit#
If the Infrastructure.Status.ControlPlaneTopology is set to 'External', the console-operator will pass this information via the console-config.yaml to the console. The console pod will get re-deployed and will store the topology mode information as a SERVER_FLAG. Based on that value we need to suspend the kubeadmin notifier from the global notifications, since it contains a link for updating the cluster OAuth configuration (see attachment).
Based on Cesar's comment we should be removing the `Control Plane` section if infrastructure.status.controlPlaneTopology is set to 'External'.
If the Infrastructure.Status.ControlPlaneTopology is set to 'External', the console-operator will pass this information via the console-config.yaml to the console. The console pod will get re-deployed and will store the topology mode information as a SERVER_FLAG. Based on that value we need to surface a message that the control plane is externally managed and add the following changes:
In general, anything that changes a cluster version should be read only.
Check section 02 for more info: https://docs.google.com/document/d/1k76JtRRHBdCCEjHPqKcYvbNVsuaGmRhWDLESWIm0mbo/edit#
If the Infrastructure.Status.ControlPlaneTopology is set to 'External', the console-operator will pass this information via the console-config.yaml to the console. The console pod will get re-deployed and will store the topology mode information as a SERVER_FLAG. Based on that value we need to remove the ability to “Add identity providers” under “Set up your Cluster”. In addition to the getting started card, we should remove the ability to update a cluster on the details card when applicable (anything that changes a cluster version should be read only).
Summary of changes to the overview page:
Check section 03 for more info: https://docs.google.com/document/d/1k76JtRRHBdCCEjHPqKcYvbNVsuaGmRhWDLESWIm0mbo/edit#
PatternFly Dark Theme Handbook: https://docs.google.com/document/d/1mRYEfUoOjTsSt7hiqjbeplqhfo3_rVDO0QqMj2p67pw/edit
Admin Console -> Workloads & Pods
Dev Console -> Gotcha pages: Observe Dashboard and Metrics, Add, Pipelines: builder, list, log, and run
As a developer, I want to be able to fix remaining issues from the spreadsheet of issues generated after the initial pass and spike of adding dark theme to the console. As such, I need to make sure to either complete all remaining issues from the spreadsheet, or create a bug or future story for any remaining issues in these two documents.
Acceptance criteria:
As a developer, I want to be able to scope the changes needed to enable dark mode for the admin console. As such, I need to investigate how much of the console will display dark mode using PF variables and also define a list of gotcha pages/components which will need special casing above and beyond PF variable settings.
Acceptance criteria:
An epic we can duplicate for each release to ensure we have a place to catch things we ought to be doing regularly but can tend to fall by the wayside.
The Cluster Dashboard Details Card Protractor integration test was failing at a high rate, and despite multiple attempts to fix it, was never fully resolved, so it was disabled as a way to fix https://bugzilla.redhat.com/show_bug.cgi?id=2068594. Migrating this entire file to Cypress should give us better debugging capability, which is what was done to fix a similarly problematic project dashboard Protractor test.
Currently, you need to navigate to
Cluster Settings ->
Global configuration ->
Console (operator) config ->
Console plugins
to see and manage plugins. This takes a lot of clicks and is not discoverable. We should look at surfacing plugin details where they're easier to find – perhaps on the Cluster Settings page – or at least provide a more convenient link somewhere in the UI.
AC: Add the Dynamic Plugins section to the Status Card in the overview that will contain:
Currently, enabled plugins can fail to load for a variety of reasons. For instance, plugins don't load if the plugin name in the manifest doesn't match the ConsolePlugin name or the plugin has an invalid codeRef. There is no indication in the UI that something has gone wrong. We should explore ways to report this problem in the UI to cluster admins. Depending on the nature of the issue, an admin might be able to resolve the issue or at least report a bug against the plugin.
The message about failing could appear in the notification drawer and/or console plugins tab on the operator config. We could also explore creating an alert if a plugin is failing.
AC:
We have a Timestamp component for consistent display of dates and times that we should expose through the SDK. We might also consider a hook that formats dates and times for places where you don't want or can't use the component, e.g. times on a chart.
This will become important when we add a user preference for dates so that plugins show consistent dates and times as console. If I set my user preference to UTC dates, console should show UTC dates everywhere.
AC:
In the 4.11 release, a console.openshift.io/default-i18next-namespace annotation is being introduced. The annotation indicates whether the ConsolePlugin contains localization resources. If the annotation is set to "true", the localization resources from the i18n namespace named after the dynamic plugin (e.g. plugin__kubevirt), are loaded. If the annotation is set to any other value or is missing on the ConsolePlugin resource, localization resources are not loaded.
In case these resources are not present in the dynamic plugin, the initial console load will be slowed down. For more info check BZ#2015654
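A sketch of the annotation on a ConsolePlugin resource (the plugin name, apiVersion, and service details here are illustrative):

apiVersion: console.openshift.io/v1alpha1
kind: ConsolePlugin
metadata:
  name: kubevirt
  annotations:
    # Signals that the plugin ships localization resources in the
    # i18n namespace plugin__kubevirt.
    console.openshift.io/default-i18next-namespace: "true"
spec:
  displayName: KubeVirt Plugin
  service:
    name: kubevirt-plugin
    namespace: kubevirt
    port: 9443
    basePath: /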
AC:
Follow up of https://issues.redhat.com/browse/CONSOLE-3159
We need to provide a base for running integration tests using the dynamic plugins. The tests should initially
Once the basic framework is in place, we can update the demo plugin and add new integration tests when we add new extension points.
https://github.com/openshift/console/tree/master/frontend/dynamic-demo-plugin
https://github.com/openshift/enhancements/blob/master/enhancements/console/dynamic-plugins.md
https://github.com/openshift/console/tree/master/frontend/packages/console-plugin-sdk
Goal
Background
RFE: for 4.10, Cincinnati and the cluster-version operator are adding conditional updates (a.k.a. targeted edge blocking): https://issues.redhat.com/browse/OTA-267
High-level plans in https://github.com/openshift/enhancements/blob/master/enhancements/update/targeted-update-edge-blocking.md#update-client-support-for-the-enhanced-schema
Example of what the oc adm upgrade UX will be in https://github.com/openshift/enhancements/blob/master/enhancements/update/targeted-update-edge-blocking.md#cluster-administrator.
The oc implementation landed via https://github.com/openshift/oc/pull/961.
Design
See design doc: https://docs.google.com/document/d/1Nja4whdsI5dKmQNS_rXyN8IGtRXDJ8gXuU_eSxBLMIY/edit#
See marvel: https://marvelapp.com/prototype/h3ehaa4/screen/86077932
The "Update Version" modal on the cluster settings page should be updated to give users information about recommended, not recommended, and blocked update versions.
Update the cluster settings page to inform the user when the latest available update is supported but not recommended. Add an informational popover to the latest version in update path visualization.
Story: As an administrator I want to rely on a default configuration that spreads image registry pods across topology zones so that I don't suffer from a long recovery time (>6 mins) in case of a complete zone failure if all pods are impacted.
Background: The image registry currently uses affinity/anti-affinity rules to spread registry pods across different hosts. However, this might cause situations in which all pods end up on hosts of a single zone, leading to a long recovery time of the registry if that zone is lost entirely. Due to past problems with the preferred setting for anti-affinity rule adherence, the configuration was instead forced with required, and the rules became hard constraints. With zones as constraints, the internal registry would no longer deploy in environments with a single zone, e.g. the internal CI environment. Pod topology spread constraints is a newer API supported in OCP that can relax constraints in case they cannot be satisfied. Details here: https://docs.openshift.com/container-platform/4.7/nodes/scheduling/nodes-scheduler-pod-topology-spread-constraints.html
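As a sketch, the registry deployment's pod template could then carry a relaxed zone constraint like the following (the label selector is illustrative):

topologySpreadConstraints:
- maxSkew: 1
  topologyKey: topology.kubernetes.io/zone
  # ScheduleAnyway keeps single-zone environments deployable:
  # the spread is only a preference when it cannot be satisfied.
  whenUnsatisfiable: ScheduleAnyway
  labelSelector:
    matchLabels:
      docker-registry: default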
Acceptance criteria:
Open Questions:
As an OpenShift administrator
I want to provide the registry operator with a custom certificate authority for S3 storage
so that I can use a third-party S3 storage provider.
Remove Jenkins from the OCP Payload.
See epic linking - need alternative non payload image available to provide relatively seamless migration
Also, the EP for this is approved and merged at https://github.com/openshift/enhancements/blob/master/enhancements/builds/remove-jenkins-payload.md
PARTIAL ANSWER ^^: confirmed with Ben Parees in https://coreos.slack.com/archives/C014MHHKUSF/p1646683621293839 that EP merging is currently sufficient OCP "technical leadership" approval.
assuming none
As maintainers of the OpenShift Jenkins component, we need to run Jenkins CI for PR testing against openshift/jenkins, openshift/jenkins-sync-plugin, openshift/jenkins-client-plugin and openshift/jenkins-openshift-login-plugin, using images built in the CI pipeline but not injected into CI test clusters via the samples operator overriding the jenkins sample imagestream with the jenkins payload image.
As maintainers of the OpenShift Jenkins component, we need Jenkins periodics for the client and sync plugins to run against the latest non payload, CPaas image, promoted to CI's image locations on quay.io, for the current release in development.
As maintainers of the OpenShift Jenkins component, we need Jenkins related tests outside of very basic Jenkins Pipeline Strategy Build Config verification, removed from openshift-tests in OpenShift Origin, using a non-payload, CPaas image pertinent to the branch in question.
High Level, we ideally want to vet the new CPaas image via CI and periodics BEFORE we start changing the samples operator so that it does not manipulate the jenkins imagestream (our tests will override the samples operator override)
NONE ... QE should wait until JNKS-254
NONE
NONE
Dependencies identified
Blockers noted and expected delivery timelines set
Design is implementable
Acceptance criteria agreed upon
Story estimated
Possible staging
1) before CPaas is available, we can validate images generated by PRs to openshift/jenkins, openshift/jenkins-sync-plugin, openshift/jenkins-client-plugin by taking the image built by the PR (where the info needed to get the right image from the CI registry is in the IMAGE_FORMAT env var) and then doing an `oc tag --source=docker <PR image ref> openshift/jenkins:2` to replace the use of the payload image in the jenkins imagestream in the openshift namespace with the PR's image
2) insert 1) in https://github.com/openshift/release/blob/master/ci-operator/step-registry/jenkins/sync-plugin/e2e/jenkins-sync-plugin-e2e-commands.sh and https://github.com/openshift/release/blob/master/ci-operator/step-registry/jenkins/client-plugin/tests/jenkins-client-plugin-tests-commands.sh where you test for IMAGE_FORMAT being set
3) or instead of 2) you update the Makefiles for the plugins to call a script that does the same sort of thing, see what is in IMAGE_FORMAT, and if it has something, do the `oc tag`
https://github.com/openshift/release/pull/26979 is a prototype of how to stick the image built from a PR and conceivably the periodics to get the image built from it and tag it into the jenkins imagestream in the openshift namespace in the test cluster
After installing or upgrading to the latest OCP version, the existing OpenShift route to the prometheus-k8s service is updated to be a path-based route to '/api/v1'.
DoD:
Following up on https://issues.redhat.com/browse/MON-1320, we added three new CLI flags to Prometheus to apply different limits on the samples' labels. These new flags are available starting from Prometheus v2.27.0, which will most likely be shipped in OpenShift 4.9.
The limits that we want to look into for OCP are the following ones:
# Per-scrape limit on number of labels that will be accepted for a sample. If
# more than this number of labels are present post metric-relabeling, the
# entire scrape will be treated as failed. 0 means no limit.
[ label_limit: <int> | default = 0 ]

# Per-scrape limit on length of labels name that will be accepted for a sample.
# If a label name is longer than this number post metric-relabeling, the entire
# scrape will be treated as failed. 0 means no limit.
[ label_name_length_limit: <int> | default = 0 ]

# Per-scrape limit on length of labels value that will be accepted for a sample.
# If a label value is longer than this number post metric-relabeling, the
# entire scrape will be treated as failed. 0 means no limit.
[ label_value_length_limit: <int> | default = 0 ]
We could benefit from them by setting relatively high values that only unbounded cardinality would breach, thus rejecting such targets completely if they happened to breach our constraints.
DoD:
When users configure CMO to interact with systems outside of an OpenShift cluster, we want to provide an easy way to add the cluster ID to the data sent.
Technically this can be achieved today, by adding an identifying label to the remote_write configuration for a given cluster. The operator adding the remote_write integration needs to take care that the label is unique over the managed fleet of clusters. This however adds management complexity. Any given cluster already has a pseudo-unique datum, that can be used for this purpose.
Expose a flag in the CMO configuration that is false by default (keeps backward compatibility) and when set to true will add the _id label to a remote_write configuration. More specifically it will be added to the top of a remote_write relabel_config list via the replace action. This will add the label as expected, but additionally a user could alter this label in a later relabel config to suit any specific requirements (say rename the label or add additional information to the value).
The location of this flag is the remote_write Spec, so this can be set for individual remote_write configurations.
We currently use a sample app to e2e test remote write in CMO.
In order to test the addition of the cluster_id relabel config, we need to confirm that the metrics sent actually have the expected label.
For this test we should use Prometheus as the remote_write target. This allows us to query the metrics sent via remote write and confirm they have the expected label.
Add an optional boolean flag to CMO's definition of RemoteWriteSpec that, if true, adds an entry in the spec's WriteRelabelConfigs list.
I went with adding the relabel config to all user-supplied remote_write configurations. This path has no risk for backwards compatibility (unless users use the `tmp_openshift_cluster_id` label, which seems unlikely) and reduces overall complexity, as well as documentation complexity.
The entry should look like what is already added to the telemetry remote write config and it should be added as the first entry in the list, before any user supplied relabel configs.
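A sketch of the injected entry (the temporary source label name is an assumption about CMO internals):

remoteWrite:
- url: "https://remote-write.endpoint"
  writeRelabelConfigs:
  # Injected first, so later user-supplied relabel configs may still
  # rename or extend the _id label.
  - sourceLabels:
    - __tmp_openshift_cluster_id__
    targetLabel: _id
    action: replace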
The potential target ServiceMonitors are:
As a user, I want the topology view to be less cluttered as I zoom out, showing only information that I can discern, while still being able to get a feel for the status of my project.
As a user, I want to understand which service bindings connected a service to a component successfully or not. Currently it's really difficult to understand and needs inspection into each ServiceBinding resource (yaml).
See also https://docs.google.com/document/d/1OzE74z2RGO5LPjtDoJeUgYBQXBSVmD5tCC7xfJotE00/edit
This epic is mainly focused on the 4.10 Release QE activities
1. Identify the scenarios for automation
2. Segregate the test scenarios into smoke, regression and other user stories
a. Update the https://docs.jboss.org/display/ODC/Automation+Status+Report
3. Align with layered operator teams for updating scripts
4. Work closely with dev team for epic automation
5. Create the automation scripts using cypress
6. Implement CI for nightly builds
7. Execute scripts on sprint basis
To track the QE progress in one place: the 4.10 Release Confluence page
Acceptance criteria:
This epic covers a number of customer requests(RFEs) as well as increases usability.
Customer satisfaction as well as improved usability.
None
As a user, I should be able to switch between the form and yaml editor while creating the ProjectHelmChartRepository CR.
Form component https://github.com/openshift/console/pull/11227
As a user, I want to use a form to create Deployments
Edit deployment form ODC-5007
Currently we are only able to get limited telemetry from the Dev Sandbox, but not from any of our managed clusters or on prem clusters.
In order to properly analyze usage and the user experience, we need to be able to gather as much data as possible.
// JS type
telemetry?: Record<string, string>
./bin/bridge --telemetry SEGMENT_API_KEY=a-key-123-xzy
./bin/bridge --telemetry CONSOLE_LOG=debug
Goal:
Enhance oc adm release new (and related verbs info, extract, mirror) with heterogeneous architecture support
tl;dr
oc adm release new (and related verbs info, extract, mirror) would be enhanced to optionally allow the creation of manifest list release payloads. The manifest list flow would be triggered whenever the CVO image in an imagestream was a manifest list. If the CVO image is a standard manifest, the generated release payload will also be a manifest. If the CVO image is a manifest list, the generated release payload would be a manifest list (containing a manifest for each arch possessed by the CVO manifest list).
In either case, oc adm release new would permit non-CVO component images to be manifest or manifest lists and pass them through directly to the resultant release manifest(s).
If a manifest list release payload is generated, each architecture specific release payload manifest will reference the same pullspecs provided in the input imagestream.
More details in Option 1 of https://docs.google.com/document/d/1BOlPrmPhuGboZbLZWApXszxuJ1eish92NlOeb03XEdE/edit#heading=h.eldc1ppinjjh
This section includes Jira cards that are linked to an Epic, but the Epic itself is not linked to any Feature. These epics were not completed when this image was assembled
I asked Zvonko Kaiser and he seemed open to it. I need to confirm with Shiva Merla
Rename Provider to Infrastructure Provider
Add GPU Provider
https://miro.com/app/board/uXjVOeUB2B4=/?moveToWidget=3458764514332229879&cot=14
OCP/Telco Definition of Done
Epic Template descriptions and documentation.
<--- Cut-n-Paste the entire contents of this description into your new Epic --->
As a developer building container images on OpenShift
I want to specify that my build should run without elevated privileges
So that builds do not run as root from the host's perspective with elevated privileges
No QE required for Dev Preview. OpenShift regression testing will verify that existing behavior is not impacted.
We will need to document how to enable this feature, with sufficient warnings regarding Dev Preview.
This likely warrants an OpenShift blog post, potentially?
This section includes Jira cards that are not linked to either an Epic or a Feature. These tickets were completed when this image was assembled
Description of problem:
NodePort port not accessible
Version-Release number of selected component (if applicable):
OCP 4.8.20
How reproducible:
$oc -n ui-nprd get services -o wide
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE SELECTOR
docker-registry ClusterIP 10.201.219.240 <none> 5000/TCP 24d app=registry
docker-registry-lb LoadBalancer 10.201.252.253 internal-xxxxxx.xx-xxxx-1.elb.amazonaws.com 5000:30779/TCP 3d22h app=registry
docker-registry-np NodePort 10.201.216.26 <none> 5000:32428/TCP 3d16h app=registry
$oc debug node/ip-xxx.ca-central-1.compute.internal
Starting pod/ip-xxx.ca-central-1computeinternal-debug ...
To use host binaries, run `chroot /host`
Pod IP: 10.81.23.96
If you don't see a command prompt, try pressing enter.
sh-4.2# chroot /host
sh-4.4# nc -vz 10.81.23.96 32428
Ncat: Version 7.70 ( https://nmap.org/ncat )
Ncat: Connection timed out.
In a newly created namespace the same deployment works:
[RHEL7:> oc project
Using project "test-c1" on server "https://api.xx.xx.xxxx.xx.xx:6443".
[RHEL7:- ~/tmp]> oc port-forward service/docker-registry-np 5000:5000
Forwarding from 127.0.0.1:5000 -> 5000
[1]+ Stopped oc4 port-forward service/docker-registry-np 5000:5000
[RHEL7: ~/tmp]> bg %1
[1]+ oc4 port-forward service/docker-registry-np 5000:5000 &
[RHEL7: ~/tmp]> nc -v localhost 5000
Ncat: Version 7.50 ( https://nmap.org/ncat )
Ncat: Connected to 127.0.0.1:5000.
Handling connection for 5000
[RHEL7: ~/tmp]> kill %1
[RHEL7: ~/tmp]>
[1]+ Terminated oc4 port-forward service/docker-registry-np 5000:5000
[RHEL7: ~/tmp]> oc get services
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
docker-registry-np NodePort 10.201.224.174 <none> 5000:31793/TCP 68s
[RHEL7: ~/tmp]> oc get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
registry-75b7c7fd94-rx29j 1/1 Running 0 7m5s 10.201.1.29 ip-xxx.ca-central-1.compute.internal <none> <none>
[RHEL7: ~/tmp]> oc debug node/ip-xxx.ca-central-1.compute.internal
Starting pod/ip-xxxca-central-1computeinternal-debug ...
To use host binaries, run `chroot /host`
Pod IP: 10.81.23.87
If you don't see a command prompt, try pressing enter.
sh-4.2# chroot /host
sh-4.4# nc -v 10.81.23.87 31793
Ncat: Version 7.70 ( https://nmap.org/ncat )
Ncat: Connected to 10.81.23.87:31793.
Actual results:
Expected results:
Additional info:
Description of problem: Issue described in the following issue: https://github.com/openshift/multus-admission-controller/issues/40
Fixed in: https://github.com/openshift/cluster-network-operator/pull/1515
Version-Release number of selected component (if applicable): OCP 4.10
Official Red Hat tracker. The fix has already been merged.
Description of problem:
We need to have admin-ack in 4.11 so that admins can check the deprecated APIs and approve when they move to 4.12. Refer to https://access.redhat.com/articles/6955381 for more information. As planned, we want to add the admin-ack around the 4.12 feature freeze.
Version-Release number of selected component (if applicable):
4.11
How reproducible:
Always
Steps to Reproduce:
1. Install a cluster in 4.11.
2. Run an application which uses the deprecated API. See https://access.redhat.com/articles/6955381 for more information.
3. Upgrade to 4.12
Actual results:
The upgrade happens without asking the admin to confirm that the workloads do not use the deprecated APIs.
Expected results:
Upgrade should wait for the admin-ack.
Additional info:
We had admin-acks in the past too e.g. https://docs.openshift.com/container-platform/4.9/updating/updating-cluster-prepare.html#update-preparing-migrate_updating-cluster-prepare
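For reference, a sketch of the acknowledgement flow as documented in the linked article (the ack key below is the one documented for the 4.11 to 4.12 move; verify against the article before use):

# Review deprecated API usage before acknowledging:
$ oc get apirequestcounts
# Provide the admin-ack so the 4.11 -> 4.12 upgrade can proceed:
$ oc -n openshift-config patch cm admin-acks \
    --type=merge \
    --patch '{"data":{"ack-4.11-kube-1.25-api-removals-in-4.12":"true"}}'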
This is a clone of issue OCPBUGS-1226. The following is the description of the original issue:
—
We added server groups for control plane and computes as part of OSASINFRA-2570, except for UPI that only creates server group for the control plane.
We need to update the UPI scripts to create server group for computes to be consistent with IPI and have the instruction at https://docs.openshift.com/container-platform/4.11/machine_management/creating_machinesets/creating-machineset-osp.html work out of the box in case customers want to create MachineSets on their UPI clusters.
Related to OCPCLOUD-1135.
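A minimal sketch of what the updated UPI scripts would need to run (the server group name is a placeholder; soft-anti-affinity matches the policy IPI uses by default):

# Create a server group for the compute machines, mirroring what IPI does:
$ openstack server group create --policy soft-anti-affinity cluster-abc12-worker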
This is a clone of issue OCPBUGS-2508. The following is the description of the original issue:
—
Description of problem:
Installer fails due to Neutron policy error when creating Openstack servers for OCP master nodes.

$ oc get machines -A
NAMESPACE               NAME                          PHASE          TYPE   REGION   ZONE   AGE
openshift-machine-api   ostest-kwtf8-master-0         Running                               23h
openshift-machine-api   ostest-kwtf8-master-1         Running                               23h
openshift-machine-api   ostest-kwtf8-master-2         Running                               23h
openshift-machine-api   ostest-kwtf8-worker-0-g7nrw   Provisioning                          23h
openshift-machine-api   ostest-kwtf8-worker-0-lrkvb   Provisioning                          23h
openshift-machine-api   ostest-kwtf8-worker-0-vwrsk   Provisioning                          23h

$ oc -n openshift-machine-api logs machine-api-controllers-7454f5d65b-8fqx2 -c machine-controller
[...]
E1018 10:51:49.355143 1 controller.go:317] controller/machine_controller "msg"="Reconciler error" "error"="error creating Openstack instance: Failed to create port err: Request forbidden: [POST https://overcloud.redhat.local:13696/v2.0/ports], error message: {\"NeutronError\": {\"type\": \"PolicyNotAuthorized\", \"message\": \"(rule:create_port and (rule:create_port:allowed_address_pairs and (rule:create_port:allowed_address_pairs:ip_address and rule:create_port:allowed_address_pairs:ip_address))) is disallowed by policy\", \"detail\": \"\"}}" "name"="ostest-kwtf8-worker-0-lrkvb" "namespace"="openshift-machine-api"
Version-Release number of selected component (if applicable):
4.10.0-0.nightly-2022-10-14-023020
How reproducible:
Always
Steps to Reproduce:
1. Install 4.10 within provider networks (in primary or secondary interface)
Actual results:
Installation failure: 4.10.0-0.nightly-2022-10-14-023020: some cluster operators have not yet rolled out
Expected results:
Successful installation
Additional info:
Please find must-gather for installation on primary interface link here and for installation on secondary interface link here.
This is a clone of issue OCPBUGS-3889. The following is the description of the original issue:
—
This is a clone of issue OCPBUGS-3744. The following is the description of the original issue:
—
Description of problem:
Egress router POD creation on Openshift 4.11 is failing with the below error.
~~~
Nov 15 21:51:29 pltocpwn03 hyperkube[3237]: E1115 21:51:29.467436 3237 pod_workers.go:951] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"stage-wfe-proxy-ext-qrhjw_stage-wfe-proxy(c965a287-28aa-47b6-9e79-0cc0e209fcf2)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"stage-wfe-proxy-ext-qrhjw_stage-wfe-proxy(c965a287-28aa-47b6-9e79-0cc0e209fcf2)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_stage-wfe-proxy-ext-qrhjw_stage-wfe-proxy_c965a287-28aa-47b6-9e79-0cc0e209fcf2_0(72bcf9e52b199061d6e651e84b0892efc142601b2442c2d00b92a1ba23208344): error adding pod stage-wfe-proxy_stage-wfe-proxy-ext-qrhjw to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): [stage-wfe-proxy/stage-wfe-proxy-ext-qrhjw/c965a287-28aa-47b6-9e79-0cc0e209fcf2:openshift-sdn]: error adding container to network \\\"openshift-sdn\\\": CNI request failed with status 400: 'could not open netns \\\"/var/run/netns/8c5ca402-3381-4935-baed-ea454161d669\\\": unknown FS magic on \\\"/var/run/netns/8c5ca402-3381-4935-baed-ea454161d669\\\": 1021994\\n'\"" pod="stage-wfe-proxy/stage-wfe-proxy-ext-qrhjw" podUID=c965a287-28aa-47b6-9e79-0cc0e209fcf2
~~~
I have checked the SDN POD log from the node where the egress router POD is failing and I could see the below error message.
~~~
2022-11-15T21:51:29.283002590Z W1115 21:51:29.282954 181720 pod.go:296] CNI_ADD stage-wfe-proxy/stage-wfe-proxy-ext-qrhjw failed: could not open netns "/var/run/netns/8c5ca402-3381-4935-baed-ea454161d669": unknown FS magic on "/var/run/netns/8c5ca402-3381-4935-baed-ea454161d669": 1021994
~~~
Crio is logging the below event, and looking at the log it seems the namespace has been created on the node.
~~~
Nov 15 21:51:29 pltocpwn03 crio[3150]: time="2022-11-15 21:51:29.307184956Z" level=info msg="Got pod network &{Name:stage-wfe-proxy-ext-qrhjw Namespace:stage-wfe-proxy ID:72bcf9e52b199061d6e651e84b0892efc142601b2442c2d00b92a1ba23208344 UID:c965a287-28aa-47b6-9e79-0cc0e209fcf2 NetNS:/var/run/netns/8c5ca402-3381-4935-baed-ea454161d669 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
~~~
Version-Release number of selected component (if applicable):
4.11.12
How reproducible:
Not Sure
Steps to Reproduce:
1. 2. 3.
Actual results:
Egress router POD is failing to create. Sample application could be created without any issue.
Expected results:
Egress router POD should get created
Additional info:
Egress router POD is created following below document and it does contain pod.network.openshift.io/assign-macvlan: "true" annotation. https://docs.openshift.com/container-platform/4.11/networking/openshift_sdn/deploying-egress-router-layer3-redirection.html#nw-egress-router-pod_deploying-egress-router-layer3-redirection
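For context, a trimmed sketch of the egress router pod from the linked document (addresses and values are placeholders; only the macvlan annotation and init container that matter here are shown):

$ oc apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: egress-router-redirect
  annotations:
    pod.network.openshift.io/assign-macvlan: "true"
spec:
  initContainers:
  - name: egress-router
    image: registry.redhat.io/openshift4/ose-egress-router
    securityContext:
      privileged: true
    env:
    - name: EGRESS_SOURCE        # placeholder source IP/CIDR on the macvlan interface
      value: 192.0.2.10/24
    - name: EGRESS_GATEWAY       # placeholder gateway
      value: 192.0.2.1
    - name: EGRESS_DESTINATION   # placeholder destination to redirect to
      value: 203.0.113.25
    - name: EGRESS_ROUTER_MODE
      value: init
  containers:
  - name: egress-router-wait
    image: registry.redhat.io/openshift4/ose-pod
EOF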
This is a clone of issue OCPBUGS-1329. The following is the description of the original issue:
—
Description of problem:
etcd and kube-apiserver pods get restarted due to failed liveness probes while deleting/re-creating pods on SNO
Version-Release number of selected component (if applicable):
4.10.32
How reproducible:
Not always, after ~10 attempts
Steps to Reproduce:
1. Deploy SNO with Telco DU profile applied
2. Create multiple pods with local storage volumes attached (attaching yaml manifest)
3. Force delete and re-create pods 10 times
Actual results:
etcd and kube-apiserver pods get restarted, making the cluster unavailable for a period of time
Expected results:
etcd and kube-apiserver do not get restarted
Additional info:
Attaching must-gather. Please let me know if any additional info is required. Thank you!
Description of problem:
In a completely disconnected cluster, the dev catalog takes too much time to load
Version-Release number of selected component (if applicable):
How reproducible:
Always
Steps to Reproduce:
1. Use a completely disconnected cluster
2. In the Add page, go to the All services page
3.
Actual results:
It takes too much time to load
Expected results:
Time taken should be reduced
Additional info:
Attached a gif for reference
Description of problem:
For some reason, the LSP of a pod is not properly added to the port group where the ACL of a NetworkPolicy is applied. This results in the NetworkPolicy not being applied to the pod and communication not being possible.
Version-Release number of selected component (if applicable):
4.10
How reproducible:
Always with a concrete pod at customer environment.
Steps to Reproduce:
(not known exactly yet)
Actual results:
LSP not in port group. ACL not applied. Netpol not in effect.
Expected results:
LSP in port group. ACL applied. Netpol in effect.
Additional info:
Details in private comments, as they involve sensitive data. Deleting the pod does nothing, but it is possible that this has something to do with the pod being recreated with the same name (although the LSPs' UUIDs are different in each incarnation).
This is a clone of issue OCPBUGS-268. The following is the description of the original issue:
—
The Linux kernel was updated to include steal accounting: https://lkml.org/lkml/2020/3/20/1030
This would greatly assist in troubleshooting vSphere performance issues caused by over-provisioned ESXi hosts.
Description of problem:
When a DNS hostname is queried from a pod on a certain node, the response comes from a random coredns pod rather than preferring the local one. Is this the expected result?

# In OCP v4.8.13 case
// Ran dig command on the node which is running the following test-7cc4488d48-tqc4m pod.
sh-4.4# while : ; do echo -n "$(date '+%H:%M:%S') :"; dig google.com +short; sleep 1; done
:
07:16:33 :172.217.175.238
07:16:34 :172.217.175.238 <--- Refreshed the upstream result
07:16:36 :142.250.207.46
07:16:37 :142.250.207.46

// The dig results match the ones on the running node, as you can see above.
$ oc rsh test-7cc4488d48-tqc4m bash -c 'while : ; do echo -n "$(date '+%H:%M:%S') :"; dig google.com +short; sleep 1; done'
:
07:16:35 :172.217.175.238
07:16:36 :172.217.175.238 <--- At the same time, the pod dig result is also refreshed.
07:16:37 :142.250.207.46
07:16:38 :142.250.207.46

But in the v4.10 case, in contrast, the DNS query results vary and are returned randomly, regardless of the local DNS results on the node, as follows.

# In OCP v4.10.23 case, the pod's responses from DNS services are not consistent.
$ oc rsh test-848fcf8ddb-zrcbx bash -c 'while : ; do echo -n "$(date '+%H:%M:%S') :"; dig google.com +short; sleep 1; done'
07:23:00 :142.250.199.110
07:23:01 :142.250.207.46
07:23:02 :142.250.207.46
07:23:03 :142.250.199.110
07:23:04 :142.250.199.110
07:23:05 :172.217.161.78

# Even though the node which is running the pod keeps responding with the same IP...
sh-4.4# while : ; do echo -n "$(date '+%H:%M:%S') :"; dig google.com +short; sleep 1; done
07:23:00 :172.217.161.78
07:23:01 :172.217.161.78
07:23:02 :172.217.161.78
07:23:03 :172.217.161.78
07:23:04 :172.217.161.78
07:23:05 :172.217.161.78
Version-Release number of selected component (if applicable):
v4.10.23 (ROSA) SDN: OpenShiftSDN
How reproducible:
You can always reproduce this issue by running "dig google.com" from any pod and from the node the pod is running on, as per the "Description" details above.
Steps to Reproduce:
1. Run any usual pod, and check which node the pod is running on.
2. Run dig google.com on the pod and the node.
3. Check whether the IPs returned by the pod and the node are consistent with each other.
Actual results:
The response IPs are not consistent and random IP is responded.
Expected results:
The response IPs should be consistent with the node's, preferring the local DNS pod.
Additional info:
This issue affects EgressNetworkPolicy dnsName feature.
Description of problem:
Version-Release number of selected component (if applicable):
4.11
How reproducible:
Always
Steps to Reproduce:
1. Enable UWM + dedicated UWM Alertmanager
2. Deploy an application + service monitor + alerting rule which fires always
3. Go to the OCP dev console and silence the alert.
Actual results:
Nothing happens
Expected results:
The alert notification is muted.
Additional info:
Copied from https://bugzilla.redhat.com/show_bug.cgi?id=2100860
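To reproduce step 1, a sketch based on the documented monitoring config maps (field names per the 4.11 monitoring docs; treat as a starting point):

$ oc apply -f - <<'EOF'
apiVersion: v1
kind: ConfigMap
metadata:
  name: cluster-monitoring-config
  namespace: openshift-monitoring
data:
  config.yaml: |
    # Enable user-workload monitoring (UWM)
    enableUserWorkload: true
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: user-workload-monitoring-config
  namespace: openshift-user-workload-monitoring
data:
  config.yaml: |
    # Deploy the dedicated UWM Alertmanager
    alertmanager:
      enabled: true
      enableAlertmanagerConfig: true
EOF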
This is a clone of issue OCPBUGS-753. The following is the description of the original issue:
—
Description of problem:
The default dns-default pod is missing the "target.workload.openshift.io/management" annotation. As a result, when the workload partitioning feature is enabled on SNO, this pod's resources will not get mutated and pinned to the reserved cpuset. This is a regression from 4.10.

Pod spec from 4.10.17:
Annotations:
  ...
  resources.workload.openshift.io/dns: {"cpushares": 51}
  resources.workload.openshift.io/kube-rbac-proxy: {"cpushares": 10}
  target.workload.openshift.io/management: {"effect":"PreferredDuringScheduling"}
Version-Release number of selected component (if applicable):
4.11.0
How reproducible:
100%
Steps to Reproduce:
1. Install a SNO and check the annotation 2. 3.
Actual results:
Expected results:
Additional info:
This is a clone of issue OCPBUGS-10622. The following is the description of the original issue:
—
Description of problem:
Unit test failing:

=== RUN   TestNewAppRunAll/app_generation_using_context_dir
    newapp_test.go:907: app generation using context dir: Error mismatch! Expected <nil>, got supplied context directory '2.0/test/rack-test-app' does not exist in 'https://github.com/openshift/sti-ruby'
--- FAIL: TestNewAppRunAll/app_generation_using_context_dir (0.61s)
Version-Release number of selected component (if applicable):
How reproducible:
100%
Steps to Reproduce:
see for example https://prow.ci.openshift.org/view/gs/origin-ci-test/pr-logs/pull/openshift_oc/1376/pull-ci-openshift-oc-master-images/1638172620648091648
Actual results:
unit tests fail
Expected results:
TestNewAppRunAll unit test should pass
Additional info:
Description of problem:
This issue exists to drive the backport process of https://github.com/openshift/api/pull/1313
According to the Kubernetes documentation, starting from Kubernetes 1.22, the service-account-issuer flag can be specified multiple times. The first value is then used to generate new tokens and other values are accepted. Using this field can prevent cluster disruptions and allows for smoother reconfiguration of this field.
The status field will allow us to keep track of "used" service account issuers and also expire/prune them.
this is a replacement for: #1309
xref: https://issues.redhat.com/browse/AUTH-309
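Illustratively, the upstream behavior the backport builds on looks like this (flag values are placeholders; this is plain kube-apiserver, not the OpenShift wiring):

# Kubernetes >= 1.22: the flag may be repeated. The first issuer signs new
# tokens; tokens minted with any listed issuer remain valid during rotation.
kube-apiserver \
  --service-account-issuer=https://new-issuer.example.com \
  --service-account-issuer=https://old-issuer.example.com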
Version-Release number of selected component (if applicable):
How reproducible:
Steps to Reproduce:
1. 2. 3.
Actual results:
Expected results:
Additional info:
1. Proposed title of this feature request
--> Alert generation when the etcd container memory consumption goes beyond 90%
2. What is the nature and description of the request?
--> When the etcd database starts growing rapidly due to a high number of objects (such as secrets, events, or configmaps generated by an application/workload), the memory and CPU consumption of the API server and etcd containers (control plane components) spikes, and eventually the control plane nodes hang, become unresponsive, or crash with out-of-memory errors as some of the critical processes/services running on the master nodes get killed. Hence we request an alert/alarm when the etcd container's memory consumption goes beyond 90%, so that the cluster administrator can take action before the cluster/nodes go unresponsive.
I see we already have an etcdExcessiveDatabaseGrowth Prometheus rule, which helps when a surge in etcd writes leads to a 50% increase in database size over the past four hours on an etcd instance; however, it does not consider memory consumption:
$ oc get prometheusrules etcd-prometheus-rules -o yaml|grep -i etcdExcessiveDatabaseGrowth -A 9
- alert: etcdExcessiveDatabaseGrowth
annotations:
description: 'etcd cluster "{{ $labels.job }}": Observed surge in etcd writes
leading to 50% increase in database size over the past four hours on etcd
instance {{ $labels.instance }}, please check as it might be disruptive.'
expr: |
increase(((etcd_mvcc_db_total_size_in_bytes/etcd_server_quota_backend_bytes)*100)[240m:1m]) > 50
for: 10m
labels:
severity: warning
3. Why does the customer need this? (List the business requirements here)
--> Once the etcd memory consumption goes beyond 90-95% of total RAM (it is a system-critical container), the OCP cluster goes unresponsive, causing revenue loss to the business and impacting the productivity of the OpenShift cluster's users.
4. List any affected packages or components.
--> etcd
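A rough sketch of what the requested alert could look like, assuming cadvisor's container_memory_working_set_bytes and machine_memory_bytes carry a matching instance label on the same node (the rule name, namespace, threshold, and join are placeholders to be tuned):

$ oc apply -f - <<'EOF'
apiVersion: monitoring.coreos.com/v1
kind: PrometheusRule
metadata:
  name: etcd-memory-rules        # hypothetical rule name
  namespace: openshift-etcd
spec:
  groups:
  - name: etcd-memory
    rules:
    - alert: etcdHighMemoryUsage
      annotations:
        description: 'etcd container on {{ $labels.instance }} is using more than 90% of node memory.'
      expr: |
        (container_memory_working_set_bytes{namespace="openshift-etcd",container="etcd"}
          / on(instance) group_left() machine_memory_bytes) * 100 > 90
      for: 10m
      labels:
        severity: warning
EOF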
Description of problem:
The alertmanager pod is stuck on OCP 4.11 with OVN in ContainerCreating state.

From oc describe alertmanager pod:
...
Events:
  Type     Reason                  Age                  From     Message
  ----     ------                  ----                 ----     -------
  Warning  FailedCreatePodSandBox  16s (x459 over 17h)  kubelet  (combined from similar events): Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_alertmanager-managed-ocs-alertmanager-0_openshift-storage_3a55ed54-4eaa-4f65-8a10-e5d21fad1ebc_0(88575547dc0b210307b89dd2bb8e379ece0962b607ac2707a1c2cf630b1aaa78): error adding pod openshift-storage_alertmanager-managed-ocs-alertmanager-0 to CNI network "multus-cni-network": plugin type="multus" name="multus-cni-network" failed (add): [openshift-storage/alertmanager-managed-ocs-alertmanager-0/3a55ed54-4eaa-4f65-8a10-e5d21fad1ebc:ovn-kubernetes]: error adding container to network "ovn-kubernetes": CNI request failed with status 400: '[openshift-storage/alertmanager-managed-ocs-alertmanager-0 88575547dc0b210307b89dd2bb8e379ece0962b607ac2707a1c2cf630b1aaa78] [openshift
Version-Release number of selected component (if applicable):
OCP 4.11 with OVN
How reproducible:
100%
Steps to Reproduce:
1. Terminate the node on which the alertmanager pod is running
2. The pod will get stuck in ContainerCreating state
Actual results:
AlertManager pod is stuck in ContainerCreating state
Expected results:
Alertmanager pod is ready
Additional info:
The workaround would be to terminate the alertmanager pod
This is a clone of issue OCPBUGS-7437. The following is the description of the original issue:
—
This is a clone of issue OCPBUGS-5547. The following is the description of the original issue:
—
Description of problem:
This is a follow-up on https://bugzilla.redhat.com/show_bug.cgi?id=2083087 and https://github.com/openshift/console/pull/12390
When creating a Knative Service and deleting it again with the enabled option "Delete other resources created by console" (only available on 4.13+ with the PR above), the secret "$name-github-webhook-secret" is not deleted.
When the user tries to create the same Knative Service again this fails with an error:
An error occurred
secrets "nodeinfo-github-webhook-secret" already exists
Version-Release number of selected component (if applicable):
4.13
(we might want to backport this together with https://github.com/openshift/console/pull/12390 and OCPBUGS-5548)
How reproducible:
Always
Steps to Reproduce:
Actual results:
Deleted resources:
Expected results:
Should also remove this resource
Additional info:
When deleting the whole application, all the resources are deleted correctly (and just once)!
This is a clone of issue OCPBUGS-858. The following is the description of the original issue:
—
Description of problem:
In OCP 4.9, the package-server-manager was introduced to manage the packageserver CSV. However, when OCP 4.8 is upgraded to 4.9, the packageserver stays stuck at v0.17.0 (the version in OCP 4.8), and v0.18.3 (the version in OCP 4.9) does not roll out.
Version-Release number of selected component (if applicable):
How reproducible:
Always
Steps to Reproduce:
1. Install OCP 4.8
2. Upgrade to OCP 4.9

$ oc get clusterversion
NAME      VERSION                             AVAILABLE   PROGRESSING   SINCE   STATUS
version   4.8.0-0.nightly-2022-08-31-160214   True        True          50m     Working towards 4.9.47: 619 of 738 done (83% complete)

$ oc get clusterversion
NAME      VERSION   AVAILABLE   PROGRESSING   SINCE   STATUS
version   4.9.47    True        False         4m26s   Cluster version is 4.9.47
Actual results:
Check the packageserver CSV. It's at v0.17.0:

$ oc get csv
NAME            DISPLAY          VERSION   REPLACES   PHASE
packageserver   Package Server   0.17.0               Succeeded
Expected results:
packageserver CSV is at 0.18.3
Additional info:
packageserver CSV version in 4.8: https://github.com/openshift/operator-framework-olm/blob/release-4.8/manifests/0000_50_olm_15-packageserver.clusterserviceversion.yaml#L12 packageserver CSV version in 4.9: https://github.com/openshift/operator-framework-olm/blob/release-4.9/pkg/manifests/csv.yaml#L8
Description of problem:
Cannot scale up worker nodes after deploying an OCP 4.11.1 cluster via UPI on Azure
5h2m Warning FailedCreate machine/pokus-2knkh-worker-northeurope1-f6kc4 InvalidConfiguration: failed to reconcile machine "pokus-2knkh-worker-northeurope1-f6kc4": failed to create vm pokus-2knkh-worker-northeurope1-f6kc4: failure sending request for machine pokus-2knkh-worker-northeurope1-f6kc4: cannot create vm: compute.VirtualMachinesClient#CreateOrUpdate: Failure sending request: StatusCode=404 - Original Error: Code="NotFound" Message="The Image '/subscriptions/e639e479-2737-4b3d-b338-f1928f6429a1/resourceGroups/mlpipe-2163-azpln-rg/providers/Microsoft.Compute/images/pokus-2knkh-gen2' cannot be found in 'northeurope' region."
Customer would like to have the installer create machineset from the inital installation, therefore Kubernetes manifest files that define the worker machines were not removed during the installation.
Highlights:
Could you please help verify whether these are the correct steps to have the initial installation create and manage the worker machines? Is there an explanation of how changing the image to -gen2 in [concat(parameters('baseName'),'-gen2')] in the 02_storage.json template resolves the problem?
Version-Release number of selected component (if applicable):
Environment:
OCP 4.11.1 UPI install on Azure using ARM
VM size:
bootstrap: Standard_D4s_v3
master: Standard_D4s_v3
How reproducible:
Always
Steps to Reproduce:
Following the steps described in the document: Installing a cluster on Azure using ARM templates.
In the install-config.yaml, worker replicas was set to 0
compute:
- architecture: amd64
  hyperthreading: Enabled
  name: worker
  platform: {}
  replicas: 3
controlPlane:
  architecture: amd64
  hyperthreading: Enabled
  name: master
  platform: {}
  replicas: 3
After creating the manifests described in this step: Creating the Kubernetes manifest and Ignition config files, only the control plane machine manifests were removed; the worker machine manifests remained untouched. After three masters and three worker nodes were created by the ARM templates, additional workers were added using machine sets via the command:
oc scale --replicas=1 machineset cluster-g7rzv-worker-francecentral1 -n openshift-machine-api
Actual results:
No additional node is visible from `oc get nodes` and the following error occurs:
5h2m Warning FailedCreate machine/pokus-2knkh-worker-northeurope1-f6kc4 InvalidConfiguration: failed to reconcile machine "pokus-2knkh-worker-northeurope1-f6kc4": failed to create vm pokus-2knkh-worker-northeurope1-f6kc4: failure sending request for machine pokus-2knkh-worker-northeurope1-f6kc4: cannot create vm: compute.VirtualMachinesClient#CreateOrUpdate: Failure sending request: StatusCode=404 - Original Error: Code="NotFound" Message="The Image '/subscriptions/e639e479-2737-4b3d-b338-f1928f6429a1/resourceGroups/mlpipe-2163-azpln-rg/providers/Microsoft.Compute/images/pokus-2knkh-gen2' cannot be found in 'northeurope' region."
The customer found out that this can be resolved by changing the image to -gen2 in [concat(parameters('baseName'),'-gen2')] in the 02_storage.json template.
Expected results:
The installer should be able to create and manage machineset
Additional info:
SFDC case #03304526
Slack discussion; might be due to MAO not being able to support UPI on Azure: Thread1, Thread2
Description of problem:
[OVN][OSP] After reboot egress node, egress IP cannot be applied anymore.
Version-Release number of selected component (if applicable):
4.12.0-0.nightly-2022-11-07-181244
How reproducible:
Frequently happens in automation, but could not be reproduced manually.
Steps to Reproduce:
1. Label one node as egress node
2. Config one egressIP object (see the sketch below)
STEP: Check one EgressIP assigned in the object.
Nov 8 15:28:23.591: INFO: egressIPStatus: [{"egressIP":"192.168.54.72","node":"huirwang-1108c-pg2mt-worker-0-2fn6q"}]
3. Reboot the node, wait for the node ready.
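A sketch of the objects from steps 1-2, using the names from the logs below (the namespaceSelector is a hypothetical placeholder):

$ oc label node huirwang-1108c-pg2mt-worker-0-2fn6q k8s.ovn.org/egress-assignable=""
$ oc apply -f - <<'EOF'
apiVersion: k8s.ovn.org/v1
kind: EgressIP
metadata:
  name: egressip-47031
spec:
  egressIPs:
  - 192.168.54.72
  namespaceSelector:      # hypothetical selector; match the test namespace
    matchLabels:
      env: qe
EOF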
Actual results:
EgressIP cannot be applied anymore. Waited more than 1 hour.
$ oc get egressip
NAME             EGRESSIPS       ASSIGNED NODE   ASSIGNED EGRESSIPS
egressip-47031   192.168.54.72
Expected results:
The egressIP should be applied correctly.
Additional info:
Some logs:
E1108 07:29:41.849149 1 egressip.go:1635] No assignable nodes found for EgressIP: egressip-47031 and requested IPs: [192.168.54.72]
I1108 07:29:41.849288 1 event.go:285] Event(v1.ObjectReference{Kind:"EgressIP", Namespace:"", Name:"egressip-47031", UID:"", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'NoMatchingNodeFound' no assignable nodes for EgressIP: egressip-47031, please tag at least one node with label: k8s.ovn.org/egress-assignable
W1108 07:33:37.401149 1 egressip_healthcheck.go:162] Could not connect to huirwang-1108c-pg2mt-worker-0-2fn6q (10.131.0.2:9107): context deadline exceeded
I1108 07:33:37.401348 1 master.go:1364] Adding or Updating Node "huirwang-1108c-pg2mt-worker-0-2fn6q"
I1108 07:33:37.437465 1 egressip_healthcheck.go:168] Connected to huirwang-1108c-pg2mt-worker-0-2fn6q (10.131.0.2:9107)
After this log, there seem to be no further log entries related to "192.168.54.72".
This is a clone of issue OCPBUGS-212. The following is the description of the original issue:
—
Description of problem:
oc --context build02 get clusterversion
NAME      VERSION       AVAILABLE   PROGRESSING   SINCE   STATUS
version   4.12.0-ec.1   True        False         45h     Error while reconciling 4.12.0-ec.1: the cluster operator kube-controller-manager is degraded

oc --context build02 get co kube-controller-manager
NAME                      VERSION       AVAILABLE   PROGRESSING   DEGRADED   SINCE   MESSAGE
kube-controller-manager   4.12.0-ec.1   True        False         True       2y87d   GarbageCollectorDegraded: error fetching rules: Get "https://thanos-querier.openshift-monitoring.svc:9091/api/v1/rules": dial tcp 172.30.153.28:9091: connect: cannot assign requested address
Version-Release number of selected component (if applicable):
How reproducible:
Steps to Reproduce:
1.
2.
3.
Actual results:
Expected results:
Additional info:
build02 is a build farm cluster in CI production.
I can provide credentials to access the cluster if needed.
This is a clone of issue OCPBUGS-7800. The following is the description of the original issue:
—
This is a clone of issue OCPBUGS-266. The following is the description of the original issue:
—
Description of problem: I am working with a customer who uses the web console. From the Developer Perspective's Project Access tab, they cannot differentiate between users and groups and furthermore cannot add groups from this web console. This has led to confusion whether existing resources were in fact users or groups, and furthermore they have added users when they intended to add groups instead. What we really need is a third column in the Project Access tab that says whether a resource is a user or group.
Version-Release number of selected component (if applicable): This is an issue in OCP 4.10 and 4.11, and I presume future versions as well
How reproducible: Every time. My customer is running on ROSA, but I have determined this issue to be general to OpenShift.
Steps to Reproduce:
From the oc cli, I create a group and add a user to it.
$ oc adm groups new techlead
group.user.openshift.io/techlead created
$ oc adm groups add-users techlead admin
group.user.openshift.io/techlead added: "admin"
$ oc get groups
NAME USERS
cluster-admins
dedicated-admins admin
techlead admin
I create a new namespace so that I can assign a group project level access:
$ oc new-project my-namespace
$ oc adm policy add-role-to-group edit techlead -n my-namespace
I then went to the web console -> Developer perspective -> Project -> Project Access. I verified the rolebinding named 'edit' is bound to a group named 'techlead'.
$ oc get rolebinding
NAME ROLE AGE
admin ClusterRole/admin 15m
admin-dedicated-admins ClusterRole/admin 15m
admin-system:serviceaccounts:dedicated-admin ClusterRole/admin 15m
dedicated-admins-project-dedicated-admins ClusterRole/dedicated-admins-project 15m
dedicated-admins-project-system:serviceaccounts:dedicated-admin ClusterRole/dedicated-admins-project 15m
edit ClusterRole/edit 2m18s
system:deployers ClusterRole/system:deployer 15m
system:image-builders ClusterRole/system:image-builder 15m
system:image-pullers ClusterRole/system:image-puller 15m
$ oc get rolebinding edit -o yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
creationTimestamp: "2022-08-15T14:16:56Z"
name: edit
namespace: my-namespace
resourceVersion: "108357"
uid: 4abca27d-08e8-43a3-b9d3-d20d5c294bbe
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: edit
subjects:
From the Project Access tab I then added the user 'developer' with the View role. Now back to the CLI, I view the newly created rolebinding named 'developer-view-c15b720facbc8deb', and find that the "View" role is assigned to a user named 'developer', rather than a group.
$ oc get rolebinding
NAME ROLE AGE
admin ClusterRole/admin 17m
admin-dedicated-admins ClusterRole/admin 17m
admin-system:serviceaccounts:dedicated-admin ClusterRole/admin 17m
dedicated-admins-project-dedicated-admins ClusterRole/dedicated-admins-project 17m
dedicated-admins-project-system:serviceaccounts:dedicated-admin ClusterRole/dedicated-admins-project 17m
edit ClusterRole/edit 4m25s
developer-view-c15b720facbc8deb ClusterRole/view 90s
system:deployers ClusterRole/system:deployer 17m
system:image-builders ClusterRole/system:image-builder 17m
system:image-pullers ClusterRole/system:image-puller 17m
[10:21:21] kechung:~ $ oc get rolebinding developer-view-c15b720facbc8deb -o yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
creationTimestamp: "2022-08-15T14:19:51Z"
name: developer-view-c15b720facbc8deb
namespace: my-namespace
resourceVersion: "113298"
uid: cc2d1b37-922b-4e9b-8e96-bf5e1fa77779
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: view
subjects:
So in conclusion, from the Project Access tab, we're unable to add groups and unable to differentiate between users and groups. This is in essence our ask for this RFE.
Actual results:
Developer perspective -> Project -> Project Access tab shows a list of resources which can be users or groups, but does not differentiate between them. Furthermore, when we add resources, they are only users and there is no way to add a group from this tab in the web console.
Expected results:
Should have the ability to add groups and differentiate between users and groups. Ideally, we're looking at a third column for user or group.
Additional info:
This bug is a backport clone of [Bugzilla Bug 2089950](https://bugzilla.redhat.com/show_bug.cgi?id=2089950). The following is the description of the original bug:
—
Description of problem: Some upgrades failed during scale testing with messages indicating the console operator is not available. In total 5 out of 2200 clusters failed with this pattern.
These clusters are all configured with the Console operator disabled in order to reduce overall OCP cpu use in the Telecom environment. The following CR is applied:
apiVersion: operator.openshift.io/v1
kind: Console
metadata:
  annotations:
    include.release.openshift.io/ibm-cloud-managed: "false"
    include.release.openshift.io/self-managed-high-availability: "false"
    include.release.openshift.io/single-node-developer: "false"
    release.openshift.io/create-only: "true"
    ran.openshift.io/ztp-deploy-wave: "10"
  name: cluster
spec:
  logLevel: Normal
  managementState: Removed
  operatorLogLevel: Normal
From one cluster (sno01175) the ClusterVersion conditions show:
[
  (earlier condition entries lost in this export)
  {
    "lastTransitionTime": "2022-05-24T13:57:05Z",
    "message": "Cluster operator kube-apiserver should not be upgraded between minor versions: KubeletMinorVersionUpgradeable: Kubelet minor version (1.22.5+5c84e52) on node sno01175 will not be supported in the next OpenShift minor version upgrade.",
    "reason": "KubeletMinorVersion_KubeletMinorVersionUnsupportedNextUpgrade",
    "status": "False",
    "type": "Upgradeable"
  }
]
Another cluster (sno01959) has very similar conditions, with slight variation in the Failing and Progressing messages (the condition entries were lost in this export).
Version-Release number of selected component (if applicable): 4.9.26 upgrade to 4.10.13
How reproducible: 5 out of 2200
Steps to Reproduce:
1. Disable console with managementState: Removed
2. Starting OCP version 4.9.26
3. Initiate upgrade to 4.10.13 via ClusterVersion CR
Actual results: Cluster upgrade is stuck (no longer progressing) for 5+ hours
Expected results: Cluster upgrade completes
Additional info:
Description of problem:
During OCP multinode spoke cluster creation, agent provisioning is stuck on "configuring" because the machine-config service is crashing on the node.
After restarting, the service still fails with:
Can't read link "/var/lib/containers/storage/overlay/l/V2OP2CCVMKSOHK2XICC546DUCG" because it does not exist. A storage corruption might have occurred, attempting to recreate the missing symlinks. It might be best wipe the storage to avoid further errors due to storage corruption.
Version-Release number of selected component (if applicable):
Podman 4.0.2 +
How reproducible:
sometimes
Steps to Reproduce:
1. deploy multinode spoke (ipxe + boot order ) 2. 3.
Actual results:
4 agents in done state and 1 is in "configuring"
Expected results:
all agents are in "done" state
Additional info:
issue mentioned in https://github.com/containers/podman/issues/14003
Fix: https://github.com/containers/storage/issues/1136
This is a clone of issue OCPBUGS-6887. The following is the description of the original issue:
—
This is a clone of issue OCPBUGS-3476. The following is the description of the original issue:
—
Description of problem:
When we detect a refs/heads/branchname we should show the label as what we have now:
- Branch: branchname
And when we detect a refs/tags/tagname we should instead show the label as:
- Tag: tagname
I haven't implemented this in the CLI, but there is an old issue for that here: openshift-pipelines/pipelines-as-code#181
Version-Release number of selected component (if applicable):
4.11.z
How reproducible:
Steps to Reproduce:
1. Create a repository
2. Trigger the pipelineruns by a push or pull request event on GitHub
Actual results:
We do not show the tag name even if a tag is present; instead we show the branch
Expected results:
We should show the tag if a tag is detected and the branch if a branch is detected.
Additional info:
https://github.com/openshift/console/pull/12247#issuecomment-1306879310
Description of problem:
prometheus-k8s-0 ends in CrashLoopBackOff with level=error err="opening storage failed: /prometheus/chunks_head/000002: invalid magic number 0" on SNO after hard reboot tests
Version-Release number of selected component (if applicable):
4.11.6
How reproducible:
Not always, after ~10 attempts
Steps to Reproduce:
1. Deploy SNO with Telco DU profile applied
2. Hard reboot node via out of band interface
3. oc -n openshift-monitoring get pods prometheus-k8s-0
Actual results:
NAME               READY   STATUS             RESTARTS          AGE
prometheus-k8s-0   5/6     CrashLoopBackOff   125 (4m57s ago)   5h28m
Expected results:
Running
Additional info:
Attaching must-gather. The pod recovers successfully after deleting/re-creating.

[kni@registry.kni-qe-0 ~]$ oc -n openshift-monitoring logs prometheus-k8s-0
ts=2022-09-26T14:54:01.919Z caller=main.go:552 level=info msg="Starting Prometheus Server" mode=server version="(version=2.36.2, branch=rhaos-4.11-rhel-8, revision=0d81ba04ce410df37ca2c0b1ec619e1bc02e19ef)"
ts=2022-09-26T14:54:01.919Z caller=main.go:557 level=info build_context="(go=go1.18.4, user=root@371541f17026, date=20220916-14:15:37)"
ts=2022-09-26T14:54:01.919Z caller=main.go:558 level=info host_details="(Linux 4.18.0-372.26.1.rt7.183.el8_6.x86_64 #1 SMP PREEMPT_RT Sat Aug 27 22:04:33 EDT 2022 x86_64 prometheus-k8s-0 (none))"
ts=2022-09-26T14:54:01.919Z caller=main.go:559 level=info fd_limits="(soft=1048576, hard=1048576)"
ts=2022-09-26T14:54:01.919Z caller=main.go:560 level=info vm_limits="(soft=unlimited, hard=unlimited)"
ts=2022-09-26T14:54:01.921Z caller=web.go:553 level=info component=web msg="Start listening for connections" address=127.0.0.1:9090
ts=2022-09-26T14:54:01.922Z caller=main.go:989 level=info msg="Starting TSDB ..."
ts=2022-09-26T14:54:01.924Z caller=tls_config.go:231 level=info component=web msg="TLS is disabled." http2=false
ts=2022-09-26T14:54:01.926Z caller=main.go:848 level=info msg="Stopping scrape discovery manager..."
ts=2022-09-26T14:54:01.926Z caller=main.go:862 level=info msg="Stopping notify discovery manager..."
ts=2022-09-26T14:54:01.926Z caller=manager.go:951 level=info component="rule manager" msg="Stopping rule manager..."
ts=2022-09-26T14:54:01.926Z caller=manager.go:961 level=info component="rule manager" msg="Rule manager stopped"
ts=2022-09-26T14:54:01.926Z caller=main.go:899 level=info msg="Stopping scrape manager..."
ts=2022-09-26T14:54:01.926Z caller=main.go:858 level=info msg="Notify discovery manager stopped"
ts=2022-09-26T14:54:01.926Z caller=main.go:891 level=info msg="Scrape manager stopped"
ts=2022-09-26T14:54:01.926Z caller=notifier.go:599 level=info component=notifier msg="Stopping notification manager..."
ts=2022-09-26T14:54:01.926Z caller=main.go:844 level=info msg="Scrape discovery manager stopped"
ts=2022-09-26T14:54:01.926Z caller=manager.go:937 level=info component="rule manager" msg="Starting rule manager..."
ts=2022-09-26T14:54:01.926Z caller=main.go:1120 level=info msg="Notifier manager stopped"
ts=2022-09-26T14:54:01.926Z caller=main.go:1129 level=error err="opening storage failed: /prometheus/chunks_head/000002: invalid magic number 0"
This is a clone of issue OCPBUGS-4504. The following is the description of the original issue:
—
This is a clone of issue OCPBUGS-1557. The following is the description of the original issue:
—
Seen in an instance created recently by a 4.12.0-ec.2 GCP provider:
"scheduling": { "automaticRestart": false, "onHostMaintenance": "MIGRATE", "preemptible": false, "provisioningModel": "STANDARD" },
From GCP's docs, they may stop instances on hardware failures and other causes, and we'd need automaticRestart: true to auto-recover from that. Also from GCP docs, the default for automaticRestart is true. And on the Go provider side, we document:
If omitted, the platform chooses a default, which is subject to change over time, currently that default is "Always".
But the implementing code does not actually float the setting. Seems like a regression here, which is part of 4.10:
$ git clone https://github.com/openshift/machine-api-provider-gcp.git
$ cd machine-api-provider-gcp
$ git log --oneline origin/release-4.10 | grep 'migrate to openshift/api'
44f0f958 migrate to openshift/api
But that's not where the 4.9 and earlier code is located:
$ git branch -a | grep origin/release
remotes/origin/release-4.10
remotes/origin/release-4.11
remotes/origin/release-4.12
remotes/origin/release-4.13
Hunting for 4.9 code:
$ oc adm release info --commits quay.io/openshift-release-dev/ocp-release:4.9.48-x86_64 | grep gcp
gcp-machine-controllers      https://github.com/openshift/cluster-api-provider-gcp      c955c03b2d05e3b8eb0d39d5b4927128e6d1c6c6
gcp-pd-csi-driver            https://github.com/openshift/gcp-pd-csi-driver             48d49f7f9ef96a7a42a789e3304ead53f266f475
gcp-pd-csi-driver-operator   https://github.com/openshift/gcp-pd-csi-driver-operator    d8a891de5ae9cf552d7d012ebe61c2abd395386e
So looking there:
$ git clone https://github.com/openshift/cluster-api-provider-gcp.git
$ cd cluster-api-provider-gcp
$ git log --oneline | grep 'migrate to openshift/api'
...no hits...
$ git grep -i automaticRestart origin/release-4.9 | grep -v '"description"\|compute-gen.go'
origin/release-4.9:vendor/google.golang.org/api/compute/v1/compute-api.json: "automaticRestart": {
Not actually clear to me how that code is structured. So 4.10 and later GCP machine-API providers are impacted, and I'm unclear on 4.9 and earlier.
Description of problem:
[4.11.z] Fix kubevirt-console tests
Version-Release number of selected component (if applicable):
How reproducible:
Steps to Reproduce:
1. 2. 3.
Actual results:
Expected results:
Additional info:
Backport clone of https://issues.redhat.com/browse/OCPBUGSM-24281
openshift-4 tracking bug for telemeter-container: see the bugs linked in the "Blocks" field of this bug for full details of the security issue(s).
This bug is never intended to be made public, please put any public notes in the blocked bugs.
Impact: Moderate
Public Date: 11-Jan-2021
PM Fix/Wontfix Decision By: 04-May-2021
Resolve Bug By: 11-Jan-2022
In case the dates above are already past, please evaluate this bug in your next prioritization review and make a decision then. Remember to explicitly set CLOSED:WONTFIX if you decide not to fix this bug.
Please see the Security Errata Policy for further details: https://docs.engineering.redhat.com/x/9RBqB
This is a clone of issue OCPBUGS-10496. The following is the description of the original issue:
—
Description of problem:
Customer is running machine learning (ML) tasks on OpenShift Container Platform, for which large models need to be embedded in the container image. When building a new container image with large container image layers (>=10GB) and pushing it to the internal image registry, this fails with the following error message:
error: build error: Failed to push image: writing blob: uploading layer to https://image-registry.openshift-image-registry.svc:5000/v2/example/example-image/blobs/uploads/b305b374-af79-4dce-afe0-afe6893b0ada?_state=[..]: blob upload invalid
In the image registry Pod we can see the following error message:
time="2023-01-30T14:12:22.315726147Z" level=error msg="upload resumed at wrong offest: 10485760000 != 10738341637" [..] time="2023-01-30T14:12:22.338264863Z" level=error msg="response completed with error" err.code="blob upload invalid" err.message="blob upload invalid" [..]
Backend storage is AWS S3. We suspect that this could be the following upstream bug: https://github.com/distribution/distribution/issues/1698
Version-Release number of selected component (if applicable):
Customer encountered the issue on OCP 4.11.20. We reproduced the issue on OCP 4.11.21:
$ oc version
Client Version: 4.12.0
Kustomize Version: v4.5.7
Server Version: 4.11.21
Kubernetes Version: v1.24.6+5658434
How reproducible:
Always
Steps to Reproduce:
1. Install OpenShift Container Platform cluster 4.11.21 on AWS
2. Confirm registry storage is on AWS S3
3. Create a new build including a 10GB file using the following command:
   `printf "FROM registry.fedoraproject.org/fedora:37\nRUN dd if=/dev/urandom of=/bigfile bs=1M count=10240" | oc new-build -D -`
4. Wait for some time for the build to run
Actual results:
Pushing the new build fails with the following error message:
error: build error: Failed to push image: writing blob: uploading layer to https://image-registry.openshift-image-registry.svc:5000/v2/example/example-image/blobs/uploads/b305b374-af79-4dce-afe0-afe6893b0ada?_state=[..]: blob upload invalid
Expected results:
Push of large container image layers succeeds
Additional info:
This is a clone of issue OCPBUGS-11972. The following is the description of the original issue:
—
This is a clone of issue OCPBUGS-9956. The following is the description of the original issue:
—
Description of problem:
PipelineRun default template name has been updated in the backend in Pipeline operator 1.10, So we need to update the name in the UI code as well.
Clone of https://bugzilla.redhat.com/show_bug.cgi?id=2106803 to backport the e2e fix to 4.11 and 4.10.
Description of problem: E2E: intermittent failure is seen on tests for devfile due to network call to devfile registry
Deploy git workload with devfile from topology page: A-04-TC01
Version-Release number of selected component (if applicable):
How reproducible: Intermittent
Steps to Reproduce:
1. Run test for add-flow-ci.feature to test Deploy git workload with devfile from topology page: A-04-TC01
Actual results:
Expected results: Show always pass
Additional info:
This is a clone of issue OCPBUGS-1417. The following is the description of the original issue:
—
Description of problem:
Egress IP is not being assigned to the primary interface of the node as per the hostsubnet definition. The issue is being observed on an OpenShift cluster hosted in a disconnected AWS environment. The following steps were performed at the AWS end:
- A disconnected VPC was created and installation of OpenShift was done as per documentation.
- An Elastic IP could not be used as it is a disconnected environment. Customer identified a free IP from the same subnet as the node and modified the interface of the node to add a secondary IP.
It seems the cloud.network.openshift.io/egress-ipconfig annotation is needed on the node to attach the IP to the primary interface, but it is missing. From the SDN pod log on the same node I could see it complaining about 'an incomplete annotation "cloud.network.openshift.io/egress-ipconfig"'. Will share more details over comments.
Version-Release number of selected component (if applicable):
Openshift 4.10.28
How reproducible:
Always
Steps to Reproduce:
1. Create a disconnected environment on AWS
2. Find a free IP from the subnet where a worker node is hosted and add it as a secondary IP to the NIC of that node.
3. Configure hostsubnet and netnamespace on the OpenShift cluster (see the sketch below)
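A sketch of step 3 using the documented openshift-sdn egress patches (node name, project name, and IP are placeholders):

# Assign the manually added secondary IP as an egress IP on the node and project:
$ oc patch hostsubnet <node-name> --type=merge -p '{"egressIPs":["10.0.0.100"]}'
$ oc patch netnamespace <project-name> --type=merge -p '{"egressIPs":["10.0.0.100"]}'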
Actual results:
- Egress IP is not being attached to the primary interface of the node for which the hostsubnet has been configured
Expected results:
- Egress IP should get configured without any issue.
Additional info:
This is a clone of issue OCPBUGS-675. The following is the description of the original issue:
—
Description of problem:
A cluster hit a panic in etcd operator in bootstrap:
I0829 14:46:02.736582 1 controller_manager.go:54] StaticPodStateController controller terminated
panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1 addr=0x0 pc=0x1e940ab]
goroutine 2701 [running]:
github.com/openshift/cluster-etcd-operator/pkg/etcdcli.checkSingleMemberHealth({0x29374c0, 0xc00217d920}, 0xc0021fb110)
github.com/openshift/cluster-etcd-operator/pkg/etcdcli/health.go:135 +0x34b
github.com/openshift/cluster-etcd-operator/pkg/etcdcli.getMemberHealth.func1()
github.com/openshift/cluster-etcd-operator/pkg/etcdcli/health.go:58 +0x7f
created by github.com/openshift/cluster-etcd-operator/pkg/etcdcli.getMemberHealth
github.com/openshift/cluster-etcd-operator/pkg/etcdcli/health.go:54 +0x2ac
Version-Release number of selected component (if applicable):
How reproducible:
Pulled up a 4.12 cluster and hit panic during bootstrap
Steps to Reproduce:
1. 2. 3.
Actual results:
panic as above
Expected results:
no panic
Additional info:
Description of problem:
Whenever one runs ovnkube-trace from an in-cluster pod to a pod in the host network on a different node, the following spurious error appears even though the underlying ovn-trace is correct:

ovn-trace indicates failure from ingress-canary-7zhxs to router-default-6758fb465c-s66rv - output to "k8s-worker-0.example.redhat.com" not matched

This is caused because, as per [1], if the destination pod is in the host network, the outport is expected to be of the form "k8s-${NODE_NAME}", which is true only in local gateway mode or if the source pod is on the same node as the destination pod. This is already fixed in the master branch [2], but we would need this to be backported to previous releases.
Version-Release number of selected component (if applicable):
4.11.4
How reproducible:
Always
Steps to Reproduce:
1. ovnkube-trace from pod in the SDN to pod in host network 2. 3.
Actual results:
Wrong error
Expected results:
No wrong error
Additional info:
References: [1] - https://github.com/openshift/ovn-kubernetes/blob/release-4.11/go-controller/cmd/ovnkube-trace/ovnkube-trace.go#L771-L777 [2] - https://github.com/openshift/ovn-kubernetes/blob/master/go-controller/cmd/ovnkube-trace/ovnkube-trace.go#L755-L769
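For reference, an invocation of the tool along the lines of the failing case above (flags per the OCP ovnkube-trace docs; the destination port is a placeholder):

$ ./ovnkube-trace \
    -src-namespace openshift-ingress-canary -src ingress-canary-7zhxs \
    -dst-namespace openshift-ingress -dst router-default-6758fb465c-s66rv \
    -tcp -dst-port 8080 \
    -loglevel 0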
We are in the process of moving our bug tracking to JIRA. We should update the report bug link in the help menu to use JIRA instead of Bugzilla for new bugs. Opening as a medium severity bug since this only impacts prerelease OpenShift versions. For release versions, we have users open customer cases.
Description of problem:
AWS tagging: when applying user-defined tags, you cannot add more than 10.
Version-Release number of selected component (if applicable):
How reproducible:
Steps to Reproduce:
1. 2. 3.
Actual results:
Expected results:
Additional info:
This is a clone of issue OCPBUGS-2438. The following is the description of the original issue:
—
Description of problem:
On the alert details page and alerting rule details page, clicking on a field that has a popover help throws an uncaught JavaScript error.
Version-Release number of selected component (if applicable):
How reproducible:
Always
Steps to Reproduce:
1. Go to Observe > Alerting pages
2. Click on an alert (or go to the rules tab then click on a rule)
3. Click on one of the underlined fields (those that have a popover help)
Actual results:
Expected results:
Additional info:
Description of problem:
Version-Release number of selected component (if applicable):
How reproducible:
Steps to Reproduce:
1. 2. 3.
Actual results:
Expected results:
Additional info:
This bug is a backport clone of [Bugzilla Bug 2034883](https://bugzilla.redhat.com/show_bug.cgi?id=2034883). The following is the description of the original bug:
—
Description of problem:
Situation (starting point):
Problem:
Version-Release number of MCO (Machine Config Operator) (if applicable):
4.7.21
Platform (AWS, VSphere, Metal, etc.): (not relevant)
Are you certain that the root cause of the issue being reported is the MCO (Machine Config Operator)?
(Y/N/Not sure): Y
How reproducible:
Always if the said conditions are met.
Steps to Reproduce:
1. Have some nodes not ready
2. Force a change that requires machine-config-daemon daemonset rollout (I think that changing proxy settings would work for this)
3. Wait until a new kube-apiserver-to-kubelet-client-ca is rolled out by kube-apiserver-operator
Actual results:
New kube-apiserver-to-kubelet-client-ca not forwarded to controllerconfig, kube-apiserver-to-kubelet-client-ca not deployed on nodes
Expected results:
kube-apiserver-to-kubelet-client-ca forwarded to controllerconfig, kube-apiserver-to-kubelet-client-ca deployed to nodes.
Additional info:
In comments
Just like kube proxy, ovnk should expose port 10256 on every node, so that cloud LBs can send health checks and know which nodes are available. This is relevant for services with externalTrafficPolicy=Cluster.
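A quick check of the expected endpoint, assuming the same semantics as kube-proxy's health endpoint (the node IP is a placeholder):

# A cloud LB health check (or a manual probe) against the node:
$ curl -s http://<node-ip>:10256/healthz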
This is a clone of issue OCPBUGS-3114. The following is the description of the original issue:
—
Description of problem:
When running a Hosted Cluster on Hypershift, the cluster-network-operator never progressed to Available despite all the components being up and running
Version-Release number of selected component (if applicable):
Hosted clusters: quay.io/openshift-release-dev/ocp-release:4.11.11-x86_64
Hypershift operator: quay.io/hypershift/hypershift-operator:4.11
Management cluster: 4.11.9
How reproducible:
Happened once
Steps to Reproduce:
1. 2. 3.
Actual results:
oc get co network reports False availability
Expected results:
oc get co network reports True availability
Additional info:
Following the trail
https://github.com/kubernetes-sigs/aws-ebs-csi-driver/issues/1139
https://github.com/kubernetes-sigs/aws-ebs-csi-driver/pull/1175
https://github.com/openshift/aws-ebs-csi-driver/pull/206
Looks like the fix should be in 4.12, but we still see it being 39 vs ~24 on an m6i instance type.
It seems that the kubelet applies this capacity to the node in 4.11 and earlier and is thus unlikely to receive this fix for attachable volumes in the upstream CSI driver. 4.12 behavior is currently unknown, but it seems that the kubelet might still be setting this capacity.
The actual issue is that kube scheduler schedules pods that require PVs to nodes where those PVs can not be attached.
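To see the capacity the scheduler is working with, one can inspect both the CSI-advertised and kubelet-reported values (the node name is a placeholder; the driver name is the usual AWS EBS CSI one):

# CSI-advertised attach capacity:
$ oc get csinode <node-name> \
    -o jsonpath='{.spec.drivers[?(@.name=="ebs.csi.aws.com")].allocatable.count}'
# Kubelet-reported (in-tree) capacity on the node object:
$ oc get node <node-name> -o jsonpath='{.status.allocatable.attachable-volumes-aws-ebs}'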
This is a clone of issue OCPBUGS-5191. The following is the description of the original issue:
—
This is a clone of issue OCPBUGS-5164. The following is the description of the original issue:
—
Description of problem:
It looks like the ODC doesn't register the KNATIVE_SERVING and KNATIVE_EVENTING flags. Those are based on the KnativeServing and KnativeEventing CRs, but it is looking for the v1alpha1 version of those: https://github.com/openshift/console/blob/f72519fdf2267ad91cc0aa51467113cc36423a49/frontend/packages/knative-plugin/console-extensions.json#L6-L8
This PR https://github.com/openshift-knative/serverless-operator/pull/1695 moved the CRs to v1beta1, and that breaks that ODC discovery.
Version-Release number of selected component (if applicable):
Openshift 4.8, Serverless Operator 1.27
Additional info:
https://coreos.slack.com/archives/CHGU4P8UU/p1671634903447019
This is a clone of issue OCPBUGS-9955. The following is the description of the original issue:
—
Description of problem:
OCP cluster installation (SNO) using the assisted installer running on an ACM hub cluster. The hub cluster is OCP 4.10.33; ACM is 2.5.4. When a cluster fails to install, we remove the installation CRs and cluster namespace from the hub cluster (to eventually redeploy). The termination of the namespace hangs indefinitely (14+ hours) with finalizers remaining. To resolve the hang we can remove the finalizers by editing both the secret pointed to by BareMetalHost .spec.bmc.credentialsName and the BareMetalHost CR (sketched below). When these finalizers are removed, the namespace termination completes within a few seconds.
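A sketch of that workaround with placeholder names (clearing finalizers bypasses normal cleanup; use only when the namespace is already wedged):

$ oc -n <cluster-ns> patch secret <bmc-credentials-secret> \
    --type=merge -p '{"metadata":{"finalizers":null}}'
$ oc -n <cluster-ns> patch baremetalhost <host-name> \
    --type=merge -p '{"metadata":{"finalizers":null}}'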
Version-Release number of selected component (if applicable):
OCP 4.10.33 ACM 2.5.4
How reproducible:
Always
Steps to Reproduce:
1. Generate installation CRs (AgentClusterInstall, BMH, ClusterDeployment, InfraEnv, NMStateConfig, ...) with an invalid configuration parameter. Two scenarios validated to hit this issue:
   a. Invalid rootDeviceHint in BareMetalHost CR
   b. Invalid credentials in the secret referenced by BareMetalHost.spec.bmc.credentialsName
2. Apply installation CRs to hub cluster
3. Wait for cluster installation to fail
4. Remove cluster installation CRs and namespace
Actual results:
Cluster namespace remains in terminating state indefinitely:
$ oc get ns cnfocto1
NAME       STATUS        AGE
cnfocto1   Terminating   17h
Expected results:
Cluster namespace (and all installation CRs in it) are successfully removed.
Additional info:
The installation CRs are applied to and removed from the hub cluster using argocd. The CRs have the following waves applied to them, which affects the creation order (lowest to highest) and removal order (highest to lowest):
Namespace: 0
AgentClusterInstall: 1
ClusterDeployment: 1
NMStateConfig: 1
InfraEnv: 1
BareMetalHost: 1
HostFirmwareSettings: 1
ConfigMap: 1 (extra manifests)
ManagedCluster: 2
KlusterletAddonConfig: 2
This bug is a backport clone of [Bugzilla Bug 2073220](https://bugzilla.redhat.com/show_bug.cgi?id=2073220). The following is the description of the original bug:
—
Description of problem:
Version-Release number of selected component (if applicable): 4.*
How reproducible: always
Steps to Reproduce:
1. Set audit profile to WriteRequestBodies
2. Wait for api server rollout to complete
3. tail -f /var/log/kube-apiserver/audit.log | grep routes/status
Actual results:
Write events to routes/status are recorded at the RequestResponse level, which often includes keys and certificates.
Expected results:
Events involving routes should always be recorded at the Metadata level, per the documentation at https://docs.openshift.com/container-platform/4.10/security/audit-log-policy-config.html#about-audit-log-profiles_audit-log-policy-config
Additional info:
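One way to confirm the recorded level, assuming the standard Kubernetes audit event schema (objectRef.resource/subresource and level fields):
$ jq -r 'select(.objectRef.resource=="routes" and .objectRef.subresource=="status") | .level' /var/log/kube-apiserver/audit.log | sort | uniq -c
Per the documentation above, this should report only Metadata; with the bug present it reports RequestResponse instead.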
This bug is a backport clone of [Bugzilla Bug 2094174](https://bugzilla.redhat.com/show_bug.cgi?id=2094174). The following is the description of the original bug:
—
Created attachment 1887340: CVO log file
Description of problem:
Clearing the upgrade after signature verification fails leaves ReleaseAccepted=False complaining that the update cannot be verified.
{ "lastTransitionTime": "2022-06-07T01:56:17Z", "message": "Cluster version is 4.11.0-0.nightly-2022-06-06-025509", "status": "False", "type": "Progressing" }]
Version-Release number of the following components:
4.11.0-0.nightly-2022-06-06-025509
How reproducible:
1/1
Steps to Reproduce:
1. Upgrade to a fake release
2. Check ReleaseAccepted=False due to target image signature verification failure
ReleaseAccepted=False
Reason: RetrievePayload
Message: Retrieving payload failed version="" image="registry.ci.openshift.org/ocp/release@sha256:5967359c2bfee0512030418af0f69faa3fa74a81a89ad64a734420e020e7f100" failure=The update cannot be verified: unable to verify sha256:5967359c2bfee0512030418af0f69faa3fa74a81a89ad64a734420e020e7f100 against keyrings: verifier-public-key-redhat
Upstream is unset, so the cluster will use an appropriate default.
Channel: stable-4.11
warning: Cannot display available updates:
Reason: VersionNotFound
Message: Unable to retrieve available updates: currently reconciling cluster version 4.11.0-0.nightly-2022-06-04-014713 not found in the "stable-4.11" channel
3. Clear the upgrade
4. Check oc adm upgrade info
ReleaseAccepted=False
Reason: RetrievePayload
Message: Retrieving payload failed version="" image="registry.ci.openshift.org/ocp/release@sha256:5967359c2bfee0512030418af0f69faa3fa74a81a89ad64a734420e020e7f100" failure=The update cannot be verified: unable to verify sha256:5967359c2bfee0512030418af0f69faa3fa74a81a89ad64a734420e020e7f100 against keyrings: verifier-public-key-redhat
Upstream is unset, so the cluster will use an appropriate default.
Channel: stable-4.11
warning: Cannot display available updates:
Reason: VersionNotFound
Message: Unable to retrieve available updates: currently reconciling cluster version 4.11.0-0.nightly-2022-06-04-014713 not found in the "stable-4.11" channel
Actual results:
After upgrade is cleared, cv condition ReleaseAccepted keeps to false with message The update cannot be verified
Expected results:
After upgrade is cleared, cv condition ReleaseAccepted should stop complaining about the target image
Additional info:
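For reference, a sketch of the clear-and-check sequence from steps 3-4 above (the jsonpath filter is an assumption about the ClusterVersion condition layout):
$ oc adm upgrade --clear
$ oc get clusterversion version -o jsonpath='{.status.conditions[?(@.type=="ReleaseAccepted")].message}'
After clearing, the message should no longer reference the unverifiable target image.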
Description of problem:
Version-Release number of selected component (if applicable):
How reproducible:
Steps to Reproduce:
1. 2. 3.
Actual results:
Expected results:
Additional info:
Description of problem:
With every pod update we execute a mutate operation to add the pod port to the port group or add the pod IP to an address set. Functionally this doesn't hurt, since mutate will not add duplicate values to the same set. However, it is bad for performance: for example, with 730 network policies affecting a pod, issuing 7 pod updates would result in over 5k transactions (730 × 7 = 5,110).
This is a clone of issue OCPBUGS-6816. The following is the description of the original issue:
—
This is a clone of issue OCPBUGS-6799. The following is the description of the original issue:
—
Description of problem:
The Pipelines -> Repositories list view in the Dev Console does not show the running pipeline run as the last PipelineRun in the table.
Original BugZilla Link: https://bugzilla.redhat.com/show_bug.cgi?id=2016006
OCPBUGSM: https://issues.redhat.com/browse/OCPBUGSM-36408
Description of problem:
The alibabacloud client "aliyun" would be used when pre-configuring some resources (e.g. VPC, bastion host, etc.) before launching an OCP cluster with customization.
Version-Release number of selected component (if applicable):
4.11
How reproducible:
Steps to Reproduce:
1. 2. 3.
Actual results:
Expected results:
Additional info:
Description of problem:
Developer Console > +Add > Developer Catalog > Service > select Type Templates > Instantiate Template
Input values in Instantiate Template disappear randomly.
Version-Release number of selected component (if applicable):
How reproducible:
I reproduced this issue on an ocp410ovn shared cluster in the quicklab.
Select Apache HTTP Server > input the name "test" in the Application Hostname box.
After several seconds, the value disappears in the web console.
Steps to Reproduce:
0. Developer Console > +Add > Developer Catalog > Service > select Type Templates > Instantiate Template
1. Input values in the boxes of the template menu.
2. The values disappear several seconds later (~20s, or randomly).
3. Many users have experienced this issue.
==> the browser version doesn't matter.
Actual results:
Input values in "Instantiate Template" disappear randomly.
Users can't use the Instantiate Template feature in the Dev console.
Expected results:
Input values remain in the web console and users create the object via "Instantiate Template".
Additional info:
See how the "Application Name" value disappears in the attached video.
This is a clone of OCPBUGSM-47085
Version:
$ openshift-install version
4.11.0-rc2
Platform:
Nutanix
At the `openshift-install create manifests` stage a connection to Prism is made (see https://github.com/openshift/installer/blob/master/pkg/asset/installconfig/nutanix/validation.go#L15-L36=)
This makes generating manifests separately impossible, which breaks the Assisted Installer flow. Instead of storing sensitive user information, the Assisted Installer sets fake details in install-config.yaml and asks the user to update these after installation has completed.
With validation happening at the `openshift-install create manifests` phase, the installation process can't start with invalid credentials.
Please move this validation to ValidateForProvisioning, similar to vSphere
This is a clone of issue OCPBUGS-3111. The following is the description of the original issue:
—
This is a clone of issue OCPBUGS-2992. The following is the description of the original issue:
—
Description of problem:
The metal3-ironic container image in OKD fails during steps in configure-ironic.sh that look for additional Oslo configuration entries as environment variables to configure the Ironic instance. The mechanism by which it fails in OKD but not OpenShift is that the OpenShift image happens to have unrelated variables set which match the regex (because it is based on the builder image), while the OKD image is based only on a stream8 image without these unrelated OS_-prefixed variables. The metal3 pod created in response to even a provisioningNetwork: Disabled Provisioning object will therefore crashloop indefinitely.
Version-Release number of selected component (if applicable):
4.11
How reproducible:
Always
Steps to Reproduce:
1. Deploy OKD to a bare metal cluster using the assisted-service, with the OKD ConfigMap applied to podman play kube, as in: https://github.com/openshift/assisted-service/tree/master/deploy/podman#okd-configuration
2. Observe the state of the metal3 pod in the openshift-machine-api namespace.
Actual results:
The metal3-ironic container repeatedly exits with nonzero, with the logs ending here:
++ export IRONIC_URL_HOST=10.1.1.21
++ IRONIC_URL_HOST=10.1.1.21
++ export IRONIC_BASE_URL=https://10.1.1.21:6385
++ IRONIC_BASE_URL=https://10.1.1.21:6385
++ export IRONIC_INSPECTOR_BASE_URL=https://10.1.1.21:5050
++ IRONIC_INSPECTOR_BASE_URL=https://10.1.1.21:5050
++ '[' '!' -z '' ']'
++ '[' -f /etc/ironic/ironic.conf ']'
++ cp /etc/ironic/ironic.conf /etc/ironic/ironic.conf_orig
++ tee /etc/ironic/ironic.extra
# Options set from Environment variables
++ echo '# Options set from Environment variables'
++ env
++ grep '^OS_'
++ tee -a /etc/ironic/ironic.extra
Expected results:
The metal3-ironic container starts and the metal3 pod is reported as ready.
Additional info:
This is the PR that introduced pipefail to the downstream ironic-image, which is not yet accepted in the upstream: https://github.com/openshift/ironic-image/pull/267/files#diff-ab2b20df06f98d48f232d90f0b7aa464704257224862780635ec45b0ce8a26d4R3
This is the line that's failing: https://github.com/openshift/ironic-image/blob/4838a077d849070563b70761957178055d5d4517/scripts/configure-ironic.sh#L57
This is the image base that OpenShift uses for ironic-image (before rewriting in ci-operator): https://github.com/openshift/ironic-image/blob/4838a077d849070563b70761957178055d5d4517/Dockerfile.ocp#L9
Here is where the relevant environment variables are set in the builder images for OCP: https://github.com/openshift/builder/blob/973602e0e576d7eccef4fc5810ba511405cd3064/hack/lib/build/version.sh#L87
Here is the final FROM line in the OKD image build (just stream8): https://github.com/openshift/ironic-image/blob/4838a077d849070563b70761957178055d5d4517/Dockerfile.okd#L9
This results in the following differences between the two images:
$ podman run --rm -it --entrypoint bash quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:519ac06836d972047f311de5e57914cf842716e22a1d916a771f02499e0f235c -c 'env | grep ^OS_'
OS_GIT_MINOR=11
OS_GIT_TREE_STATE=clean
OS_GIT_COMMIT=97530a7
OS_GIT_VERSION=4.11.0-202210061001.p0.g97530a7.assembly.stream-97530a7
OS_GIT_MAJOR=4
OS_GIT_PATCH=0
$ podman run --rm -it --entrypoint bash quay.io/openshift/okd-content@sha256:6b8401f8d84c4838cf0e7c598b126fdd920b6391c07c9409b1f2f17be6d6d5cb -c 'env | grep ^OS_'
Here is what the OS_-prefixed variables should be used for:
https://github.com/metal3-io/ironic-image/blob/807a120b4ce5e1675a79ebf3ee0bb817cfb1f010/README.md?plain=1#L36
https://opendev.org/openstack/oslo.config/src/commit/84478d83f87e9993625044de5cd8b4a18dfcaf5d/oslo_config/sources/_environment.py
It's worth noting that ironic.extra is not consumed anywhere, and is simply being used here to save off the variables that Oslo _might_ be consuming (it won't consume the variables that are present in the OCP builder image, though they do get caught by this regex). With pipefail set, grep returns non-zero when it fails to find an environment variable that matches the regex, as in the case of the OKD ironic-image builds.
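A minimal reproduction of the failure mode described above, runnable in any shell that has no OS_-prefixed variables exported:
$ bash -c 'set -o pipefail; env | grep "^OS_" | tee /tmp/ironic.extra; echo "pipeline status: $?"'
pipeline status: 1
With pipefail set, the non-matching grep fails the whole pipeline, which is exactly what kills configure-ironic.sh in the OKD image.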
This is a clone of issue OCPBUGS-4489. The following is the description of the original issue:
—
This is a clone of issue OCPBUGS-4168. The following is the description of the original issue:
—
Description of problem:
Prometheus continuously restarts due to slow WAL replay
Version-Release number of selected component (if applicable):
openshift - 4.11.13
How reproducible:
Steps to Reproduce:
1. 2. 3.
Actual results:
Expected results:
Additional info:
This is a clone of issue OCPBUGS-7458. The following is the description of the original issue:
—
Description of problem:
- After upgrading to OCP 4.10.41, thanos-ruler-user-workload-1 in the openshift-user-workload-monitoring namespace is consistently being created and deleted.
- We had to scale down the Prometheus operator multiple times so that the upgrade is considered successful.
- This fix is temporary. After some time it appears again and the Prometheus operator needs to be scaled down and up again.
- The issue is present on all clusters in this customer environment which are upgraded to 4.10.41.
Version-Release number of selected component (if applicable):
How reproducible:
N/A, I wasn't able to reproduce the issue.
Steps to Reproduce:
Actual results:
Expected results:
Additional info:
Description of problem:
Remove the self-provisioner role from system:authenticated users, as per https://access.redhat.com/solutions/4040541, to stop users from being able to create new projects. The customer has found this is only partially working: in the cluster web UI's Administrator view the "Create Project" button is not available, but after switching to the default Developer view the default user can still create a project.
Version-Release number of selected component (if applicable):
How reproducible:
Follow https://access.redhat.com/solutions/1529893
Steps to Reproduce:
1. oc adm policy remove-cluster-role-from-group self-provisioner system:authenticated:oauth
2. Log back in as the user and switch between the Admin/Dev views
3. The user still has the link showing in the Dev console
Actual results:
Create new project link still exists
Expected results:
Create new project link should be removed, similar to Admin Console
Additional info:
Although the link still exists, the user gets a correct permission-denied message.
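For illustration, a sketch of how to verify the revocation from the CLI (the user name is hypothetical):
$ oc adm policy remove-cluster-role-from-group self-provisioner system:authenticated:oauth
$ oc auth can-i create projectrequests --as=developer1
no
The bug is that the Dev console still renders the link even though the API correctly denies the request.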
Description of problem:
Similar to OCPBUGS-11636, ccoctl needs to be updated to account for the S3 bucket changes described in https://aws.amazon.com/blogs/aws/heads-up-amazon-s3-security-changes-are-coming-in-april-of-2023/. These changes have rolled out to us-east-2 and the China regions as of today and will roll out to additional regions in the near future. See OCPBUGS-11636 for additional information.
Version-Release number of selected component (if applicable):
How reproducible:
Reproducible in affected regions.
Steps to Reproduce:
1. Use "ccoctl aws create-all" flow to create STS infrastructure in an affected region like us-east-2. Notice that document upload fails because the s3 bucket is created in a state that does not allow usage of ACLs with the s3 bucket.
Actual results:
./ccoctl aws create-all --name abutchertestue2 --region us-east-2 --credentials-requests-dir ./credrequests --output-dir _output
2023/04/11 13:01:06 Using existing RSA keypair found at _output/serviceaccount-signer.private
2023/04/11 13:01:06 Copying signing key for use by installer
2023/04/11 13:01:07 Bucket abutchertestue2-oidc created
2023/04/11 13:01:07 Failed to create Identity provider: failed to upload discovery document in the S3 bucket abutchertestue2-oidc: AccessControlListNotSupported: The bucket does not allow ACLs
        status code: 400, request id: 2TJKZC6C909WVRK7, host id: zQckCPmozx+1yEhAj+lnJwvDY9rG14FwGXDnzKIs8nQd4fO4xLWJW3p9ejhFpDw3c0FE2Ggy1Yc=
Expected results:
"ccoctl aws create-all" successfully creates IAM and S3 infrastructure. OIDC discovery and JWKS documents are successfully uploaded to the S3 bucket and are publicly accessible.
Additional info:
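As a hedged illustration, the S3 change described above amounts to new buckets getting BucketOwnerEnforced object ownership, which rejects ACLs; this can be inspected with the AWS CLI (bucket name taken from the output above):
$ aws s3api get-bucket-ownership-controls --bucket abutchertestue2-oidc --region us-east-2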
Description of problem:
An intra-namespace "allow" network policy doesn't work after applying deny-all ingress and egress network policies.
Version-Release number of selected component (if applicable):
OpenShift 4.10.12
How reproducible:
Always
Steps to Reproduce:
1. Define a deny-all network policy for egress and ingress in a namespace:
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
name: default-deny-all
spec:
podSelector: {}
policyTypes:
- Ingress
- Egress
2. Define the following network policy to allow the traffic between the pods in the namespace:
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-intra-namespace-001
spec:
  egress:
  - to:
    - podSelector: {}
  ingress:
  - from:
    - podSelector: {}
  podSelector: {}
  policyTypes:
  - Ingress
  - Egress
3. Test the connectivity between two pods from the namespace.
Actual results:
The connectivity is not allowed
Expected results:
The connectivity should be allowed between pods from the same namespace.
Additional info:
After performing a test and analyzing SDN flows for the namespace:
sh-4.4# ovs-ofctl dump-flows -O OpenFlow13 br0 | grep --color 0x964376
cookie=0x0, duration=99375.342s, table=20, n_packets=14, n_bytes=588, priority=100,arp,in_port=21,arp_spa=10.128.2.20,arp_sha=00:00:0a:80:02:14/00:00:ff:ff:ff:ff actions=load:0x964376->NXM_NX_REG0[],goto_table:30
cookie=0x0, duration=1681.845s, table=20, n_packets=11, n_bytes=462, priority=100,arp,in_port=24,arp_spa=10.128.2.23,arp_sha=00:00:0a:80:02:17/00:00:ff:ff:ff:ff actions=load:0x964376->NXM_NX_REG0[],goto_table:30
cookie=0x0, duration=99375.342s, table=20, n_packets=135610, n_bytes=759239814, priority=100,ip,in_port=21,nw_src=10.128.2.20 actions=load:0x964376->NXM_NX_REG0[],goto_table:27
cookie=0x0, duration=1681.845s, table=20, n_packets=2006, n_bytes=12684967, priority=100,ip,in_port=24,nw_src=10.128.2.23 actions=load:0x964376->NXM_NX_REG0[],goto_table:27
cookie=0x0, duration=99375.342s, table=25, n_packets=0, n_bytes=0, priority=100,ip,nw_src=10.128.2.20 actions=load:0x964376->NXM_NX_REG0[],goto_table:27
cookie=0x0, duration=1681.845s, table=25, n_packets=0, n_bytes=0, priority=100,ip,nw_src=10.128.2.23 actions=load:0x964376->NXM_NX_REG0[],goto_table:27
cookie=0x0, duration=975.129s, table=27, n_packets=0, n_bytes=0, priority=150,reg0=0x964376,reg1=0x964376 actions=goto_table:30
cookie=0x0, duration=99375.342s, table=70, n_packets=145260, n_bytes=11722173, priority=100,ip,nw_dst=10.128.2.20 actions=load:0x964376->NXM_NX_REG1[],load:0x15->NXM_NX_REG2[],goto_table:80
cookie=0x0, duration=1681.845s, table=70, n_packets=2336, n_bytes=191079, priority=100,ip,nw_dst=10.128.2.23 actions=load:0x964376->NXM_NX_REG1[],load:0x18->NXM_NX_REG2[],goto_table:80
cookie=0x0, duration=975.129s, table=80, n_packets=0, n_bytes=0, priority=150,reg0=0x964376,reg1=0x964376 actions=output:NXM_NX_REG2[]
We see that the following rule doesn't match because `reg1` hasn't been defined:
cookie=0x0, duration=975.129s, table=27, n_packets=0, n_bytes=0, priority=150,reg0=0x964376,reg1=0x964376 actions=goto_table:30
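For completeness, step 3 above can be exercised with a simple in-pod check (namespace, pod name, and port are hypothetical; the target IP matches one of the pods in the flow dump):
$ oc -n test-ns exec pod-a -- curl -s -m 5 http://10.128.2.23:8080/
With the bug present, this times out even though both network policies are in place.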
Discovered in the must gather kubelet_service.log from https://prow.ci.openshift.org/view/gs/origin-ci-test/logs/periodic-ci-openshift-release-master-ci-4.12-upgrade-from-stable-4.11-e2e-gcp-sdn-upgrade/1586093220087992320
It appears the guard pod names are too long and are being truncated down to where they collide with those from the other masters.
From kubelet logs in this run:
❯ grep openshift-kube-scheduler-guard-ci-op-3hj6pnwf-4f6ab-lv57z-maste kubelet_service.log
Oct 28 23:58:55.693391 ci-op-3hj6pnwf-4f6ab-lv57z-master-1 kubenswrapper[1657]: E1028 23:58:55.693346 1657 kubelet_pods.go:413] "Hostname for pod was too long, truncated it" podName="openshift-kube-scheduler-guard-ci-op-3hj6pnwf-4f6ab-lv57z-master-1" hostnameMaxLen=63 truncatedHostname="openshift-kube-scheduler-guard-ci-op-3hj6pnwf-4f6ab-lv57z-maste"
Oct 28 23:59:03.735726 ci-op-3hj6pnwf-4f6ab-lv57z-master-0 kubenswrapper[1670]: E1028 23:59:03.735671 1670 kubelet_pods.go:413] "Hostname for pod was too long, truncated it" podName="openshift-kube-scheduler-guard-ci-op-3hj6pnwf-4f6ab-lv57z-master-0" hostnameMaxLen=63 truncatedHostname="openshift-kube-scheduler-guard-ci-op-3hj6pnwf-4f6ab-lv57z-maste"
Oct 28 23:59:11.168082 ci-op-3hj6pnwf-4f6ab-lv57z-master-2 kubenswrapper[1667]: E1028 23:59:11.168041 1667 kubelet_pods.go:413] "Hostname for pod was too long, truncated it" podName="openshift-kube-scheduler-guard-ci-op-3hj6pnwf-4f6ab-lv57z-master-2" hostnameMaxLen=63 truncatedHostname="openshift-kube-scheduler-guard-ci-op-3hj6pnwf-4f6ab-lv57z-maste"
This also looks to be happening for openshift-kube-scheduler-guard, kube-controller-manager-guard, possibly others.
Looks like they should be truncated further to make room for random suffixes in https://github.com/openshift/library-go/blame/bd9b0e19121022561dcd1d9823407cd58b2265d0/pkg/operator/staticpod/controller/guard/guard_controller.go#L97-L98
Unsure of the implications here, it looks a little scary.
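The collision itself is easy to demonstrate; the three guard pod names below differ only after the 63-character hostname limit:
$ for n in 0 1 2; do echo openshift-kube-scheduler-guard-ci-op-3hj6pnwf-4f6ab-lv57z-master-$n | cut -c1-63; done
openshift-kube-scheduler-guard-ci-op-3hj6pnwf-4f6ab-lv57z-maste
openshift-kube-scheduler-guard-ci-op-3hj6pnwf-4f6ab-lv57z-maste
openshift-kube-scheduler-guard-ci-op-3hj6pnwf-4f6ab-lv57z-maste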
Description of problem:
The storageclass "thin-csi" is created by vsphere-CSI-Driver-Operator, after deleting it manually, it should be re-created immediately.
Version-Release number of selected component (if applicable):
4.11.4
How reproducible:
Always
Steps to Reproduce:
1. Check storageclasses in the running cluster; thin-csi is present:
$ oc get sc
NAME             PROVISIONER                    RECLAIMPOLICY   VOLUMEBINDINGMODE      ALLOWVOLUMEEXPANSION   AGE
thin (default)   kubernetes.io/vsphere-volume   Delete          Immediate              false                  41m
thin-csi         csi.vsphere.vmware.com         Delete          WaitForFirstConsumer   true                   38m
2. Delete the thin-csi storageclass:
$ oc delete sc thin-csi
storageclass.storage.k8s.io "thin-csi" deleted
3. Check storageclasses again; thin-csi is not present:
$ oc get sc
NAME             PROVISIONER                    RECLAIMPOLICY   VOLUMEBINDINGMODE   ALLOWVOLUMEEXPANSION   AGE
thin (default)   kubernetes.io/vsphere-volume   Delete          Immediate           false                  50m
4. Check the vmware-vsphere-csi-driver-operator log:
......
I0909 03:47:42.172866 1 named_certificates.go:53] "Loaded SNI cert" index=0 certName="self-signed loopback" certDetail="\"apiserver-loopback-client@1662695014\" [serving] validServingFor=[apiserver-loopback-client] issuer=\"apiserver-loopback-client-ca@1662695014\" (2022-09-09 02:43:34 +0000 UTC to 2023-09-09 02:43:34 +0000 UTC (now=2022-09-09 03:47:42.172853123 +0000 UTC))"
I0909 03:49:38.294962 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF
I0909 03:49:38.295468 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF
I0909 03:49:38.295765 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF
5. The StorageClass creation appears only once in the vmware-vsphere-csi-driver-operator log:
$ oc -n openshift-cluster-csi-drivers logs vmware-vsphere-csi-driver-operator-7cc6d44b5c-c8czw | grep -i "storageclass"
I0909 03:46:31.865926 1 event.go:285] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-cluster-csi-drivers", Name:"vmware-vsphere-csi-driver-operator", UID:"9e0c3e2d-d403-40a1-bf69-191d7aec202b", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'StorageClassCreated' Created StorageClass.storage.k8s.io/thin-csi because it was missing
Actual results:
The storageclass "thin-csi" could not be re-created after deleting
Expected results:
The storageclass "thin-csi" should be re-created after deleting
Additional info:
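A quick check of the expected reconcile behavior (the 30-second wait is an arbitrary guess at the operator's resync latency):
$ oc delete sc thin-csi && sleep 30 && oc get sc thin-csi
With the bug present, the second command reports NotFound instead of the re-created StorageClass.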
Description of problem:
The last egressIP in IP capacity cannot be applied correctly
Version-Release number of selected component (if applicable):
4.11.0-0.nightly-2022-11-27-164248
How reproducible:
Found this failure in automation case which used to pass
Steps to Reproduce:
In AWS, one worker node has IP capacity "ipv4:14":
$ oc describe node ip-10-0-129-21.us-west-2.compute.internal | egrep -C 3 egress
Annotations:        cloud.network.openshift.io/egress-ipconfig: [{"interface":"eni-03820ba0eca427fbf","ifaddr":{"ipv4":"10.0.128.0/18"},"capacity":{"ipv4":14,"ipv6":15}}]
                    csi.volume.kubernetes.io/nodeid: {"ebs.csi.aws.com":"i-0cb3ae15bf3cffd9f"}
                    k8s.ovn.org/host-addresses: ["10.0.129.21"]
1. Label above node as egress node
2. Created 15 egressIP objects, each egressIP object with one egressIP.
Actual results:
13 egressIPs applied correctly; the last one is in the wrong status.
% oc get egressip
NAME                EGRESSIPS      ASSIGNED NODE                               ASSIGNED EGRESSIPS
egressip-47208-0    10.0.135.200   ip-10-0-129-21.us-west-2.compute.internal   10.0.135.200
egressip-47208-1    10.0.162.178   ip-10-0-129-21.us-west-2.compute.internal   10.0.162.178
egressip-47208-10   10.0.144.46    ip-10-0-129-21.us-west-2.compute.internal   10.0.144.46
egressip-47208-11   10.0.191.91    ip-10-0-129-21.us-west-2.compute.internal   10.0.191.91
egressip-47208-12   10.0.133.215   ip-10-0-129-21.us-west-2.compute.internal   10.0.133.215
egressip-47208-13   10.0.174.207   ip-10-0-129-21.us-west-2.compute.internal   10.0.174.207
egressip-47208-14   10.0.176.224   ip-10-0-129-21.us-west-2.compute.internal   10.0.176.224
egressip-47208-2    10.0.184.114   ip-10-0-129-21.us-west-2.compute.internal   10.0.184.114
egressip-47208-3    10.0.167.224   ip-10-0-129-21.us-west-2.compute.internal   10.0.167.224
egressip-47208-4    10.0.187.148   ip-10-0-129-21.us-west-2.compute.internal   10.0.187.148
egressip-47208-5    10.0.184.109   ip-10-0-129-21.us-west-2.compute.internal   10.0.184.109
egressip-47208-6    10.0.155.208
egressip-47208-7    10.0.134.13    ip-10-0-129-21.us-west-2.compute.internal   10.0.134.13
egressip-47208-8    10.0.142.255   ip-10-0-129-21.us-west-2.compute.internal   10.0.142.255
egressip-47208-9    10.0.170.197
% oc get cloudprivateipconfig
NAME           AGE
10.0.133.215   113s
10.0.134.13    113s
10.0.135.200   113s
10.0.142.255   113s
10.0.144.46    113s
10.0.155.208   113s
10.0.162.178   113s
10.0.167.224   113s
10.0.174.207   113s
10.0.176.224   113s
10.0.184.109   113s
10.0.184.114   113s
10.0.187.148   113s
10.0.191.91    113s
% oc get cloudprivateipconfig 10.0.155.208 -o yaml
apiVersion: cloud.network.openshift.io/v1
kind: CloudPrivateIPConfig
metadata:
  annotations:
    k8s.ovn.org/egressip-owner-ref: egressip-47208-6
  creationTimestamp: "2022-11-28T09:13:47Z"
  finalizers:
  - cloudprivateipconfig.cloud.network.openshift.io/finalizer
  generation: 1
  name: 10.0.155.208
  resourceVersion: "72869"
  uid: 0143a07b-0a30-4de8-bfd7-589ed0c3d7dc
spec:
  node: ip-10-0-129-21.us-west-2.compute.internal
status:
  conditions:
  - lastTransitionTime: "2022-11-28T09:16:51Z"
    message: 'Error processing cloud assignment request, err: <nil>'
    observedGeneration: 1
    reason: CloudResponseError
    status: "False"
    type: Assigned
  node: ip-10-0-129-21.us-west-2.compute.internal
Expected results:
As IP capacity for this node is 14, so here should have 14 egress IP applied successfully. No cloudprivateipconfig IP in error status.
Additional info:
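For illustration, the per-node capacity that bounds how many EgressIPs can be assigned is readable straight from the annotation shown above:
$ oc get node ip-10-0-129-21.us-west-2.compute.internal -o jsonpath='{.metadata.annotations.cloud\.network\.openshift\.io/egress-ipconfig}'
With capacity ipv4:14, all 14 addresses that fit should be assigned; only the 15th should be left unassigned.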
Description of problem:
The ovn-kubernetes ovnkube-master containers have been continuously crashlooping since we updated to 4.11.0-0.okd-2022-10-15-073651.
Log Excerpt:
] [] [] [{kubectl-client-side-apply Update networking.k8s.io/v1 2022-09-12 12:25:06 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{}}},"f:spec":{"f:ingress":{},"f:policyTypes":{}}} }]},Spec:NetworkPolicySpec{PodSelector:{map[] []},Ingress:[]NetworkPolicyIngressRule{NetworkPolicyIngressRule{Ports:[]NetworkPolicyPort{},From:[]NetworkPolicyPeer{NetworkPolicyPeer{PodSelector:&v1.LabelSelector{MatchLabels:map[string]string{access: true,},MatchExpressions:[]LabelSelectorRequirement{},},NamespaceSelector:nil,IPBlock:nil,},},},},Egress:[]NetworkPolicyEgressRule{},PolicyTypes:[Ingress],},} &NetworkPolicy{ObjectMeta:{allow-from-openshift-ingress compsci-gradcentral a405f843-c250-40d7-8dd4-a759f764f091 217304038 1 2022-09-22 14:36:38 +0000 UTC <nil> <nil> map[] map[] [] [] [{openshift-apiserver Update networking.k8s.io/v1 2022-09-22 14:36:38 +0000 UTC FieldsV1 {"f:spec":{"f:ingress":{},"f:policyTypes":{}}} }]},Spec:NetworkPolicySpec{PodSelector:{map[] []},Ingress:[]NetworkPolicyIngressRule{NetworkPolicyIngressRule{Ports:[]NetworkPolicyPort{},From:[]NetworkPolicyPeer{NetworkPolicyPeer{PodSelector:nil,NamespaceSelector:&v1.LabelSelector{MatchLabels:map[string]string{policy-group.network.openshift.io/ingress: ,},MatchExpressions:[]LabelSelectorRequirement{},},IPBlock:nil,},},},},Egress:[]NetworkPolicyEgressRule{},PolicyTypes:[Ingress],},}]: cannot clean up egress default deny ACL name: error in transact with ops [{Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:acls Mutator:delete Value:{GoSet:[{GoUUID:60cb946a-46e9-4623-9ba4-3cb35f018ed6}]}}] Timeout:<nil> Where:[where column _uuid == {ccdd01bf-3009-42fb-9672-e1df38190cd7}] Until: Durable:<nil> Comment:<nil> Lock:<nil> UUIDName:} {Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:acls Mutator:delete Value:{GoSet:[{GoUUID:60cb946a-46e9-4623-9ba4-3cb35f018ed6}]}}] Timeout:<nil> Where:[where column _uuid == {10bbf229-8c1b-4c62-b36e-4ba0097722db}] Until: Durable:<nil> Comment:<nil> Lock:<nil> UUIDName:} {Op:delete Table:ACL Row:map[] Rows:[] Columns:[] Mutations:[] Timeout:<nil> Where:[where column _uuid == {7b55ba0c-150f-4a63-9601-cfde25f29408}] Until: Durable:<nil> Comment:<nil> Lock:<nil> UUIDName:} {Op:delete Table:ACL Row:map[] Rows:[] Columns:[] Mutations:[] Timeout:<nil> Where:[where column _uuid == {60cb946a-46e9-4623-9ba4-3cb35f018ed6}] Until: Durable:<nil> Comment:<nil> Lock:<nil> UUIDName:}] results [{Count:1 Error: Details: UUID:{GoUUID:} Rows:[]} {Count:1 Error: Details: UUID:{GoUUID:} Rows:[]} {Count:1 Error: Details: UUID:{GoUUID:} Rows:[]} {Count:1 Error: Details: UUID:{GoUUID:} Rows:[]} {Count:0 Error:referential integrity violation Details:cannot delete ACL row 7b55ba0c-150f-4a63-9601-cfde25f29408 because of 1 remaining reference(s) UUID:{GoUUID:} Rows:[]}] and errors []: referential integrity violation: cannot delete ACL row 7b55ba0c-150f-4a63-9601-cfde25f29408 because of 1 remaining reference(s)
Additional info:
https://github.com/okd-project/okd/issues/1372
Issue persisted through update to 4.11.0-0.okd-2022-10-28-153352
must-gather: https://nbc9-snips.cloud.duke.edu/snips/must-gather.local.2859117512952590880.zip
Description of problem:
Users on a fully-disconnected cluster could not see Devfiles in the developer catalog or import a Devfile. That's fine.
But the API calls /api/devfile/samples/ and /api/devfile/ take 30 seconds until they fail with a 504 Gateway Timeout error.
If possible they should fail immediately.
Version-Release number of selected component (if applicable):
This might happen since 4.8.
Tested so far only on 4.12.0-0.nightly-2022-09-07-112008.
How reproducible:
Always
Steps to Reproduce:
Actual results:
Expected results:
Additional info:
The console Pod log contains this error:
E0909 10:28:18.448680 1 devfile-handler.go:74] Failed to parse devfile: failed to populateAndParseDevfile: Get "https://registry.devfile.io/devfiles/go": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
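The slow failure can be observed from outside the cluster with a plain HTTP probe (the console route below is hypothetical):
$ time curl -sk -o /dev/null -w '%{http_code}\n' https://console-openshift-console.apps.example.com/api/devfile/samples
With the bug present, this takes about 30 seconds and prints 504; the request should instead fail immediately.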
This is a clone of issue OCPBUGS-5185. The following is the description of the original issue:
—
This is a clone of issue OCPBUGS-5165. The following is the description of the original issue:
—
Currently, the Dev Sandbox clusters send the clusterType "OSD" instead of "DEVSANDBOX", because the configuration annotations of the console config are automatically overridden by some SyncSets.
Open Dev Sandbox, open the browser console, and inspect window.SERVER_FLAGS.telemetry.
We have created a fix in 4.12 that fetches instance type information from the Azure API instead of updating the lists. We feel that backporting that fix is too risky, but agreed to update the list in older versions.
Description of problem:
Add the following instance types to azure_instance_types list[1]:
Version-Release number of selected component (if applicable):
OCP 4.8
Steps to Reproduce:
1. Migrate worker/infra nodes to the above-mentioned (missing) v5 instance types
2. Observe "Failed to set autoscaling from zero annotations, instance type unknown"
Actual results:
Expected results:
The new instance types are available in the azure_instance_types list[1] and no errors/warnings are observed after migrating:
Additional info:
The related v4 instance types are already available[1] - I suspect adding the mentioned v5 instance types is a minor update:
1) azure_instance_types.go
https://github.com/openshift/cluster-api-provider-azure/blob/release-4.8/pkg/cloud/azure/actuators/machineset/azure_instance_types.go
Description of problem:
This is a clone of https://bugzilla.redhat.com/show_bug.cgi?id=2074299 for backporting purposes.
Version-Release number of selected component (if applicable):
How reproducible:
Steps to Reproduce:
1. 2. 3.
Actual results:
Expected results:
Additional info:
Since 4.11, OCP comes with an OperatorHub definition which declares a capability and enables all catalog sources. For OKD we want to enable just community-operators, as users may not have a Red Hat pull secret set.
This commit would ensure that the OKD version of the marketplace operator gets its own OperatorHub manifest with a custom set of operator catalogs enabled.
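As a sketch of the shape such a manifest could take, using the config.openshift.io/v1 OperatorHub API (the field values here are assumptions, not the actual OKD manifest):
apiVersion: config.openshift.io/v1
kind: OperatorHub
metadata:
  name: cluster
spec:
  disableAllDefaultSources: true
  sources:
  - name: community-operators
    disabled: false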
This is a clone of issue OCPBUGS-3235. The following is the description of the original issue:
—
Frequently we see the loading state of the topology view, even when there aren't many resources in the project. Including an example.
Actual results:
Topology will sometimes hang with the loading indicator showing indefinitely.
Expected results:
Topology should load consistently without fail.
How reproducible:
Intermittent
Version-Release number of selected component (if applicable):
4.9
Description of problem:
This is a clone of https://issues.redhat.com/browse/OCPBUGS-469
Description of problem: Numerous erroneous logs in OVN master
I0823 18:00:11.163491 1 obj_retry.go:1063] Retry object setup: *v1.Pod openshift-operator-lifecycle-manager/collect-profiles-27687900-hlp6k
I0823 18:00:11.163546 1 obj_retry.go:1096] Removing old object: *v1.Pod openshift-operator-lifecycle-manager/collect-profiles-27687900-hlp6k
I0823 18:00:11.163555 1 pods.go:124] Deleting pod: openshift-operator-lifecycle-manager/collect-profiles-27687900-hlp6k
I0823 18:00:11.163631 1 obj_retry.go:1103] Retry delete failed for *v1.Pod openshift-operator-lifecycle-manager/collect-profiles-27687900-hlp6k, will try again later: deleteLogicalPort failed for pod openshift-operator-lifecycle-manager_collect-profiles-27687900-hlp6k: unable to locate portUUID+nodeName for pod openshift-operator-lifecycle-manager/collect-profiles-27687900-hlp6k: error getting logical port <nil>: object not found
W0823 18:00:41.163633 1 obj_retry.go:1031] Dropping retry entry for *v1.Pod openshift-operator-lifecycle-manager/collect-profiles-27687900-hlp6k: exceeded number of failed attempts
Must-gather: http://shell.lab.bos.redhat.com/~anusaxen/must-gather.local.2234927131259452300/
Version-Release number of selected component (if applicable): 4.12.0-0.nightly-2022-08-23-031342
How reproducible: Always
Steps to Reproduce:
1. Bring up OVN cluster on 4.12
2.
3.
Actual results: deleteLogicalPort failed for already gone object
Expected results: deleteLogicalPort should not keep retrying post object deletion
Additional info:
This is a clone of issue OCPBUGS-6913. The following is the description of the original issue:
—
This is a clone of issue OCPBUGS-186. The following is the description of the original issue:
—
Description of problem:
When resizing the browser window, the PipelineRun task status bar would overlap the status text that says "Succeeded" in the screenshot.
Actual results:
Status text is overlapped by the task status bar
Expected results:
Status text breaks to a newline or gets shortened by "..."
Description of problem:
An Availability Set will be created when vmSize is invalid in a region which has zones, but an Availability Set should only be created in a region which doesn't have zones.
Version-Release number of selected component (if applicable):
4.11.0-0.nightly-2022-10-07-174524 4.10.0-0.nightly-2022-10-07-205844
How reproducible:
Always
Steps to Reproduce:
1.Set up a cluster in a region which has zones. liuhuali@Lius-MacBook-Pro huali-test % oc get machine NAME PHASE TYPE REGION ZONE AGE huliu-az410-99qcm-master-0 Running Standard_D8s_v3 eastus 2 34m huliu-az410-99qcm-master-1 Running Standard_D8s_v3 eastus 3 34m huliu-az410-99qcm-master-2 Running Standard_D8s_v3 eastus 1 34m huliu-az410-99qcm-worker-eastus1-xld58 Running Standard_D4s_v3 eastus 1 27m huliu-az410-99qcm-worker-eastus2-chzg8 Running Standard_D4s_v3 eastus 2 27m huliu-az410-99qcm-worker-eastus3-7g2mw Running Standard_D4s_v3 eastus 3 27m 2.Create a machineset with invalid vmSize liuhuali@Lius-MacBook-Pro huali-test % oc create -f ms4.yaml machineset.machine.openshift.io/huliu-az410-99qcm-1 created liuhuali@Lius-MacBook-Pro huali-test % oc get machine NAME PHASE TYPE REGION ZONE AGE huliu-az410-99qcm-1-cfw6w Failed 8s huliu-az410-99qcm-master-0 Running Standard_D8s_v3 eastus 2 35m huliu-az410-99qcm-master-1 Running Standard_D8s_v3 eastus 3 35m huliu-az410-99qcm-master-2 Running Standard_D8s_v3 eastus 1 35m huliu-az410-99qcm-worker-eastus1-xld58 Running Standard_D4s_v3 eastus 1 28m huliu-az410-99qcm-worker-eastus2-chzg8 Running Standard_D4s_v3 eastus 2 28m huliu-az410-99qcm-worker-eastus3-7g2mw Running Standard_D4s_v3 eastus 3 28m liuhuali@Lius-MacBook-Pro huali-test % oc get machine huliu-az410-99qcm-1-cfw6w -o yaml apiVersion: machine.openshift.io/v1beta1 kind: Machine metadata: annotations: machine.openshift.io/instance-state: Unknown creationTimestamp: "2022-10-08T07:42:28Z" finalizers: - machine.machine.openshift.io generateName: huliu-az410-99qcm-1- generation: 2 labels: machine.openshift.io/cluster-api-cluster: huliu-az410-99qcm machine.openshift.io/cluster-api-machine-role: worker machine.openshift.io/cluster-api-machine-type: worker machine.openshift.io/cluster-api-machineset: huliu-az410-99qcm-1 name: huliu-az410-99qcm-1-cfw6w namespace: openshift-machine-api ownerReferences: - apiVersion: machine.openshift.io/v1beta1 blockOwnerDeletion: true controller: true kind: MachineSet name: huliu-az410-99qcm-1 uid: bf8f7518-1fa9-4704-bdd7-6d0fde54e38e resourceVersion: "31287" uid: 303cf672-a2fa-44f3-8793-59801bb78902 spec: lifecycleHooks: {} metadata: {} providerSpec: value: apiVersion: machine.openshift.io/v1beta1 credentialsSecret: name: azure-cloud-credentials namespace: openshift-machine-api image: offer: "" publisher: "" resourceID: /resourceGroups/huliu-az410-99qcm-rg/providers/Microsoft.Compute/images/huliu-az410-99qcm sku: "" version: "" kind: AzureMachineProviderSpec location: eastus managedIdentity: huliu-az410-99qcm-identity metadata: creationTimestamp: null name: huliu-az410-99qcm networkResourceGroup: huliu-az410-99qcm-rg osDisk: diskSettings: {} diskSizeGB: 128 managedDisk: storageAccountType: Premium_LRS osType: Linux publicIP: false publicLoadBalancer: huliu-az410-99qcm resourceGroup: huliu-az410-99qcm-rg spotVMOptions: {} subnet: huliu-az410-99qcm-worker-subnet userDataSecret: name: worker-user-data vmSize: invalidStandard_D4s_v3 vnet: huliu-az410-99qcm-vnet zone: "3" status: conditions: - lastTransitionTime: "2022-10-08T07:42:28Z" status: "True" type: Drainable - lastTransitionTime: "2022-10-08T07:42:28Z" message: Instance has not been created reason: InstanceNotCreated severity: Warning status: "False" type: InstanceExists - lastTransitionTime: "2022-10-08T07:42:28Z" status: "True" type: Terminable errorMessage: 'failed to reconcile machine "huliu-az410-99qcm-1-cfw6w": failed to create vm huliu-az410-99qcm-1-cfw6w: failure sending request for machine 
huliu-az410-99qcm-1-cfw6w: cannot create vm: compute.VirtualMachinesClient#CreateOrUpdate: Failure sending request: StatusCode=0 -- Original Error: Code="BadRequest" Message="Virtual Machine cannot be created because both Availability Zone and Availability Set were specified. Deploying an Availability Set to an Availability Zone isn’t supported."' errorReason: InvalidConfiguration lastUpdated: "2022-10-08T07:42:35Z" phase: Failed providerStatus: conditions: - lastProbeTime: "2022-10-08T07:42:35Z" lastTransitionTime: "2022-10-08T07:42:35Z" message: 'failed to create vm huliu-az410-99qcm-1-cfw6w: failure sending request for machine huliu-az410-99qcm-1-cfw6w: cannot create vm: compute.VirtualMachinesClient#CreateOrUpdate: Failure sending request: StatusCode=0 -- Original Error: Code="BadRequest" Message="Virtual Machine cannot be created because both Availability Zone and Availability Set were specified. Deploying an Availability Set to an Availability Zone isn’t supported."' reason: MachineCreationFailed status: "True" type: MachineCreated metadata: {}
Actual results:
Created Availability Set for it.
Expected results:
Should not create Availability Set, as the region has zones.
Additional info:
If a correct vmSize is provided, the machine gets Running and no Availability Set is created for it. It is not clear why an Availability Set is created when vmSize is invalid. The issue can be reproduced on both 4.11 and 4.10, as Availability Sets were introduced in 4.10. On 4.12 there is bug https://issues.redhat.com/browse/OCPBUGS-1871; this will also be checked on 4.12 when that bug gets verified.
This bug represents a backport of CCO-222 to release-4.11.
This is a clone of issue OCPBUGS-262. The following is the description of the original issue:
—
GitHub rate-limit failures for the UPI image when downloading govc.
This is a clone of issue OCPBUGS-2083. The following is the description of the original issue:
—
Description of problem:
Currently we are running the VMware CSI Operator in OpenShift 4.10.33. After running vulnerability scans, the operator was discovered to be running a known weak cipher, 3DES. We are attempting to upgrade or modify the operator to customize the available ciphers. We were looking at performing a manual upgrade via Quay.io but can't seem to pull the image, and we were trying to steer away from performing a custom install from scratch. Looking for any suggestions for mitigating the weak cipher in the kube-rbac-proxy under the VMware CSI Operator.
Version-Release number of selected component (if applicable):
How reproducible:
Steps to Reproduce:
1. 2. 3.
Actual results:
Expected results:
Additional info:
Description of problem:
Upgrade OCP 4.11 --> 4.12 fails with one 'NotReady,SchedulingDisabled' node and MachineConfigDaemonFailed.
Version-Release number of selected component (if applicable):
Upgrade from OCP 4.11.0-0.nightly-2022-09-19-214532 on top of OSP RHOS-16.2-RHEL-8-20220804.n.1 to 4.12.0-0.nightly-2022-09-20-040107. Network Type: OVNKubernetes
How reproducible:
Twice out of two attempts.
Steps to Reproduce:
1. Install OCP 4.11.0-0.nightly-2022-09-19-214532 (IPI) on top of OSP RHOS-16.2-RHEL-8-20220804.n.1. The cluster is up and running with three workers:
$ oc get clusterversion
NAME      VERSION                              AVAILABLE   PROGRESSING   SINCE   STATUS
version   4.11.0-0.nightly-2022-09-19-214532   True        False         51m     Cluster version is 4.11.0-0.nightly-2022-09-19-214532
2. Run the oc command to upgrade to 4.12.0-0.nightly-2022-09-20-040107:
$ oc adm upgrade --to-image=registry.ci.openshift.org/ocp/release:4.12.0-0.nightly-2022-09-20-040107 --allow-explicit-upgrade --force=true
warning: Using by-tag pull specs is dangerous, and while we still allow it in combination with --force for backward compatibility, it would be much safer to pass a by-digest pull spec instead
warning: The requested upgrade image is not one of the available updates. You have used --allow-explicit-upgrade for the update to proceed anyway
warning: --force overrides cluster verification of your supplied release image and waives any update precondition failures.
Requesting update to release image registry.ci.openshift.org/ocp/release:4.12.0-0.nightly-2022-09-20-040107
3. The upgrade does not succeed: [0]
$ oc get clusterversion
NAME      VERSION                              AVAILABLE   PROGRESSING   SINCE   STATUS
version   4.11.0-0.nightly-2022-09-19-214532   True        True          17h     Unable to apply 4.12.0-0.nightly-2022-09-20-040107: wait has exceeded 40 minutes for these operators: network
One node degraded to 'NotReady,SchedulingDisabled' status:
$ oc get nodes
NAME                          STATUS                        ROLES    AGE   VERSION
ostest-9vllk-master-0         Ready                         master   19h   v1.24.0+07c9eb7
ostest-9vllk-master-1         Ready                         master   19h   v1.24.0+07c9eb7
ostest-9vllk-master-2         Ready                         master   19h   v1.24.0+07c9eb7
ostest-9vllk-worker-0-4x4pt   NotReady,SchedulingDisabled   worker   18h   v1.24.0+3882f8f
ostest-9vllk-worker-0-h6kcs   Ready                         worker   18h   v1.24.0+3882f8f
ostest-9vllk-worker-0-xhz9b   Ready                         worker   18h   v1.24.0+3882f8f
$ oc get pods -A | grep -v -e Completed -e Running
NAMESPACE                   NAME                                  READY   STATUS     RESTARTS   AGE
openshift-openstack-infra   coredns-ostest-9vllk-worker-0-4x4pt   0/2     Init:0/1   0          18h
$ oc get events
LAST SEEN   TYPE      REASON                                        OBJECT            MESSAGE
7m15s       Warning   OperatorDegraded: MachineConfigDaemonFailed   /machine-config   Unable to apply 4.12.0-0.nightly-2022-09-20-040107: failed to apply machine config daemon manifests: error during waitForDaemonsetRollout: [timed out waiting for the condition, daemonset machine-config-daemon is not ready. status: (desired: 6, updated: 6, ready: 5, unavailable: 1)]
7m15s       Warning   MachineConfigDaemonFailed                     /machine-config   Cluster not available for [{operator 4.11.0-0.nightly-2022-09-19-214532}]: failed to apply machine config daemon manifests: error during waitForDaemonsetRollout: [timed out waiting for the condition, daemonset machine-config-daemon is not ready. status: (desired: 6, updated: 6, ready: 5, unavailable: 1)]
$ oc get co
NAME                                       VERSION                              AVAILABLE   PROGRESSING   DEGRADED   SINCE   MESSAGE
authentication                             4.12.0-0.nightly-2022-09-20-040107   True        False         False      18h
baremetal                                  4.12.0-0.nightly-2022-09-20-040107   True        False         False      19h
cloud-controller-manager                   4.12.0-0.nightly-2022-09-20-040107   True        False         False      19h
cloud-credential                           4.12.0-0.nightly-2022-09-20-040107   True        False         False      19h
cluster-autoscaler                         4.12.0-0.nightly-2022-09-20-040107   True        False         False      19h
config-operator                            4.12.0-0.nightly-2022-09-20-040107   True        False         False      19h
console                                    4.12.0-0.nightly-2022-09-20-040107   True        False         False      18h
control-plane-machine-set                  4.12.0-0.nightly-2022-09-20-040107   True        False         False      17h
csi-snapshot-controller                    4.12.0-0.nightly-2022-09-20-040107   True        False         False      19h
dns                                        4.12.0-0.nightly-2022-09-20-040107   True        True          False      19h     DNS "default" reports Progressing=True: "Have 5 available node-resolver pods, want 6."
etcd                                       4.12.0-0.nightly-2022-09-20-040107   True        False         False      19h
image-registry                             4.12.0-0.nightly-2022-09-20-040107   True        True          False      18h     Progressing: The registry is ready...
ingress                                    4.12.0-0.nightly-2022-09-20-040107   True        False         False      18h
insights                                   4.12.0-0.nightly-2022-09-20-040107   True        False         False      19h
kube-apiserver                             4.12.0-0.nightly-2022-09-20-040107   True        True          False      18h     NodeInstallerProgressing: 1 nodes are at revision 11; 2 nodes are at revision 13
kube-controller-manager                    4.12.0-0.nightly-2022-09-20-040107   True        False         False      19h
kube-scheduler                             4.12.0-0.nightly-2022-09-20-040107   True        False         False      19h
kube-storage-version-migrator              4.12.0-0.nightly-2022-09-20-040107   True        False         False      19h
machine-api                                4.12.0-0.nightly-2022-09-20-040107   True        False         False      19h
machine-approver                           4.12.0-0.nightly-2022-09-20-040107   True        False         False      19h
machine-config                             4.11.0-0.nightly-2022-09-19-214532   False       True          True       16h     Cluster not available for [{operator 4.11.0-0.nightly-2022-09-19-214532}]: failed to apply machine config daemon manifests: error during waitForDaemonsetRollout: [timed out waiting for the condition, daemonset machine-config-daemon is not ready. status: (desired: 6, updated: 6, ready: 5, unavailable: 1)]
marketplace                                4.12.0-0.nightly-2022-09-20-040107   True        False         False      19h
monitoring                                 4.12.0-0.nightly-2022-09-20-040107   True        False         False      18h
network                                    4.12.0-0.nightly-2022-09-20-040107   True        True          True       19h     DaemonSet "/openshift-ovn-kubernetes/ovnkube-node" rollout is not making progress - last change 2022-09-20T14:16:13Z...
node-tuning                                4.12.0-0.nightly-2022-09-20-040107   True        False         False      17h
openshift-apiserver                        4.12.0-0.nightly-2022-09-20-040107   True        False         False      18h
openshift-controller-manager               4.12.0-0.nightly-2022-09-20-040107   True        False         False      17h
openshift-samples                          4.12.0-0.nightly-2022-09-20-040107   True        False         False      17h
operator-lifecycle-manager                 4.12.0-0.nightly-2022-09-20-040107   True        False         False      19h
operator-lifecycle-manager-catalog         4.12.0-0.nightly-2022-09-20-040107   True        False         False      19h
operator-lifecycle-manager-packageserver   4.12.0-0.nightly-2022-09-20-040107   True        False         False      19h
service-ca                                 4.12.0-0.nightly-2022-09-20-040107   True        False         False      19h
storage                                    4.12.0-0.nightly-2022-09-20-040107   True        True          False      19h     ManilaCSIDriverOperatorCRProgressing: ManilaDriverNodeServiceControllerProgressing: Waiting for DaemonSet to deploy node pods...
[0] http://pastebin.test.redhat.com/1074531
Actual results:
OCP 4.11 --> 4.12 upgrade fails.
Expected results:
OCP 4.11 --> 4.12 upgrade succeeds.
Additional info:
Attached logs of the NotReady node - [^journalctl_ostest-9vllk-worker-0-4x4pt.log.tar.gz]
This bug was initially created as a copy of
Bug #2096605
I am copying this bug because: the parent bug solved the validation aspect of diskType but now the description of diskType in
https://github.com/openshift/installer/blob/master/data/data/install.openshift.io_installconfigs.yaml#L2914-L2923
needs to be updated.
Version: 4.11.0-0.nightly-2022-06-06-201913
Platform: vSphere IPI
What happened?
1. If the user inputs an invalid value for platform.vsphere.diskType in the install-config.yaml file, there is no validation of diskType; the installer doesn't exit with an error but continues the installation, which is not the same behavior as in 4.10.
After all vms are provisioned, I checked that the disk provision type is thick.
2. If the user doesn't set platform.vsphere.diskType in the install-config.yaml file, the default disk provisioning type is thick, not the vSphere default storage policy. On VMC, the default policy is thin, so the description of diskType also needs to be updated.
$ ./openshift-install explain installconfig.platform.vsphere.diskType
KIND: InstallConfig
VERSION: v1
RESOURCE: <string>
Valid Values: "","thin","thick","eagerZeroedThick"
DiskType is the name of the disk provisioning type, valid values are thin, thick, and eagerZeroedThick. When not specified, it will be set according to the default storage policy of vsphere.
What did you expect to happen?
validation for diskType
How to reproduce it (as minimally and precisely as possible)?
Set diskType to an invalid value in install-config.yaml and install the cluster.
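For example, an install-config.yaml fragment with an intentionally invalid value (hypothetical; any string outside the valid set works):
platform:
  vsphere:
    diskType: thinn
Running `openshift-install create manifests` against this should exit with an error listing the valid values ("", thin, thick, eagerZeroedThick) instead of silently provisioning thick disks.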
This is a clone of issue OCPBUGS-1717. The following is the description of the original issue:
—
Description of problem:
Image registry pods panic while deploying OCP in me-central-1 AWS region
Version-Release number of selected component (if applicable):
4.11.2
How reproducible:
Deploy OCP in AWS me-central-1 region
Steps to Reproduce:
Deploy OCP in AWS me-central-1 region
Actual results:
panic: Invalid region provided: me-central-1
Expected results:
Image registry pods should come up with no errors
Additional info:
Description of problem:
This is just a clone of https://bugzilla.redhat.com/show_bug.cgi?id=2105570 for purposes of cherry-picking.
Version-Release number of selected component (if applicable):
4.13
How reproducible:
Steps to Reproduce:
1. 2. 3.
Actual results:
Expected results:
Additional info:
Created attachment 1905034: Plugin page with error
Steps to reproduce:
1. Install a plugin with a page that has a runtime error. (Demo Plugin -> Dynamic Nav 1 currently has an error for me, but you can reproduce by editing any plugin and introducing an error.)
2. Observe the "something went wrong" error message.
3. Navigate to any other page (e.g. Workloads -> Pods)
Expected result:
The pods page is displayed.
Actual result:
The error message persists. There is no way to clear it except to refresh the browser.
Description of problem:
When a pod runs to a completed state, we typically rely on the update event that will indicate to us that this pod is completed. At that point the pod IP is released and the port configuration is removed in OVN. The subsequent delete event for this pod will be ignored because it should have been cleaned up in the previous update. However, there can be cases where the update event is missed with pod completed. In this case we will only receive a delete with pod completed event, and ignore tearing down the pod. The end result is the pod is not cleaned up in OVN and the IP address remains allocated, reducing the amount of address range available to launch another pod. This can lead to exhausting all IP addresses available for pod allocation on a node.
Version-Release number of selected component (if applicable):
4.10.24
How reproducible:
Not sure how to reproduce this. I'm guessing some lag in kapi updates can cause the completed update event and the final delete event to be combined into a single event.
Steps to Reproduce:
1. 2. 3.
Actual results:
Port still exists in OVN, IP remains allocated for a deleted pod.
Expected results:
IP should be freed, port should be removed from OVN.
Additional info:
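One hedged way to spot such leaks is to compare OVN logical switch ports against running pods on a node (pod and node names are hypothetical, and the counts only match approximately because the node switch also carries non-pod ports):
$ oc -n openshift-ovn-kubernetes exec ovnkube-master-abcde -c northd -- ovn-nbctl lsp-list worker-node-1 | wc -l
$ oc get pods -A --field-selector spec.nodeName=worker-node-1 -o name | wc -l
A port count that keeps growing past the pod count suggests ports left behind by missed delete events.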
This is a clone of issue OCPBUGS-7830. The following is the description of the original issue:
—
This is a clone of issue OCPBUGS-7729. The following is the description of the original issue:
—
Description of problem:
Etcd's liveness probe should be removed.
Version-Release number of selected component (if applicable):
4.11
Additional info:
When the master hosts hit CPU load, this can cause a cascading restart loop for etcd and kube-apiserver due to the etcd liveness probes failing. Because of this loop, load on the masters stays high as the API servers and controllers restart over and over again. There is no reason for etcd to have a liveness probe; we removed this probe in 3.11 due to issues like this.
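For reference, the probe in question can be inspected on any etcd static pod (pod name hypothetical):
$ oc -n openshift-etcd get pod etcd-master-0 -o jsonpath='{.spec.containers[?(@.name=="etcd")].livenessProbe}'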
The node healthz server was added in 4.13 with https://github.com/openshift/ovn-kubernetes/commit/c8489e3ff9c321e77f265dc9d484ed2549df4a6b and https://github.com/openshift/ovn-kubernetes/commit/9a836e3a547f3464d433ce8b9eef336624d51858. We need to configure it by default on 0.0.0.0:10256 in CNO for ovn-k, just like we do for sdn.
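A sketch of the expected behavior once this is wired up (the node IP is hypothetical, and the /healthz path is an assumption mirroring the kube-proxy-style endpoint used by sdn):
$ curl -s http://10.0.0.5:10256/healthz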
This is a clone of issue OCPBUGS-8339. The following is the description of the original issue:
—
This is a clone of issue OCPBUGS-5287. The following is the description of the original issue:
—
Description of problem:
See https://issues.redhat.com/browse/THREESCALE-9015. A problem with the Red Hat Integration - 3scale - Managed Application Services operator prevents it from installing correctly, which results in the failure of operator-install-single-namespace.spec.ts integration test.
In order to delete the correct GCP cloud resources, the "--credentials-requests-dir" parameter must be passed to "ccoctl gcp delete". This was fixed for 4.12 as part of https://github.com/openshift/cloud-credential-operator/pull/489 but must be backported for previous releases. See https://github.com/openshift/cloud-credential-operator/pull/489#issuecomment-1248733205 for discussion regarding this bug.
To reproduce, create GCP infrastructure with a name parameter that is a subset of another set of GCP infrastructure's name parameter. I will run "ccoctl gcp create-all" with "--name=abutcher-gcp" and "--name=abutcher-gcp1".
$ ./ccoctl gcp create-all \
    --name=abutcher-gcp \
    --region=us-central1 \
    --project=openshift-hive-dev \
    --credentials-requests-dir=./credrequests
$ ./ccoctl gcp create-all \
    --name=abutcher-gcp1 \
    --region=us-central1 \
    --project=openshift-hive-dev \
    --credentials-requests-dir=./credrequests
Running "ccoctl gcp delete --name=abutcher-gcp" will result in GCP infrastructure for both "abutcher-gcp" and "abutcher-gcp1" being deleted.
$ ./ccoctl gcp delete --name abutcher-gcp --project openshift-hive-dev
2022/10/24 11:30:06 Credentials loaded from file "/home/abutcher/.gcp/osServiceAccount.json"
2022/10/24 11:30:06 Deleted object .well-known/openid-configuration from bucket abutcher-gcp-oidc
2022/10/24 11:30:07 Deleted object keys.json from bucket abutcher-gcp-oidc
2022/10/24 11:30:07 OIDC bucket abutcher-gcp-oidc deleted
2022/10/24 11:30:09 IAM Service account abutcher-gcp-openshift-image-registry-gcs deleted
2022/10/24 11:30:10 IAM Service account abutcher-gcp-openshift-gcp-ccm deleted
2022/10/24 11:30:11 IAM Service account abutcher-gcp1-openshift-cloud-network-config-controller-gcp deleted
2022/10/24 11:30:12 IAM Service account abutcher-gcp-openshift-machine-api-gcp deleted
2022/10/24 11:30:13 IAM Service account abutcher-gcp-openshift-ingress-gcp deleted
2022/10/24 11:30:15 IAM Service account abutcher-gcp-openshift-gcp-pd-csi-driver-operator deleted
2022/10/24 11:30:16 IAM Service account abutcher-gcp1-openshift-ingress-gcp deleted
2022/10/24 11:30:17 IAM Service account abutcher-gcp1-openshift-image-registry-gcs deleted
2022/10/24 11:30:19 IAM Service account abutcher-gcp-cloud-credential-operator-gcp-ro-creds deleted
2022/10/24 11:30:20 IAM Service account abutcher-gcp1-openshift-gcp-pd-csi-driver-operator deleted
2022/10/24 11:30:21 IAM Service account abutcher-gcp1-openshift-gcp-ccm deleted
2022/10/24 11:30:22 IAM Service account abutcher-gcp1-cloud-credential-operator-gcp-ro-creds deleted
2022/10/24 11:30:24 IAM Service account abutcher-gcp1-openshift-machine-api-gcp deleted
2022/10/24 11:30:25 IAM Service account abutcher-gcp-openshift-cloud-network-config-controller-gcp deleted
2022/10/24 11:30:25 Workload identity pool abutcher-gcp deleted
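With the fix, the delete invocation is scoped by the same credentials-requests directory used at create time, so only the matching infrastructure is removed:
$ ./ccoctl gcp delete --name=abutcher-gcp --project=openshift-hive-dev --credentials-requests-dir=./credrequests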
Description of problem:
To address: 'Static Pod is managed but errored" err="managed container xxx does not have Resource.Requests'
Version-Release number of selected component (if applicable):
4.11
How reproducible:
Steps to Reproduce:
1. 2. 3.
Actual results:
Expected results:
Additional info:
Already merged in https://github.com/openshift/cluster-kube-apiserver-operator/pull/1398
This is a clone of issue OCPBUGS-10943. The following is the description of the original issue:
—
This is a clone of issue OCPBUGS-10661. The following is the description of the original issue:
—
This is a clone of issue OCPBUGS-10591. The following is the description of the original issue:
—
Description of problem:
Starting with 4.12.0-0.nightly-2023-03-13-172313, the machine API operator began receiving an invalid version tag, either due to a missing or invalid VERSION_OVERRIDE (https://github.com/openshift/machine-api-operator/blob/release-4.12/hack/go-build.sh#L17-L20) value being passed to the build. This is resulting in all jobs invoked by the 4.12 nightlies failing to install.
Version-Release number of selected component (if applicable):
4.12.0-0.nightly-2023-03-13-172313 and later
How reproducible:
Consistently, in 4.12 nightlies only (CI builds do not seem to be impacted).
Steps to Reproduce:
1. 2. 3.
Actual results:
Expected results:
Additional info:
Example of failure https://gcsweb-ci.apps.ci.l2s4.p1.openshiftapps.com/gcs/origin-ci-test/logs/periodic-ci-openshift-release-master-nightly-4.12-e2e-aws-csi/1635331349046890496/artifacts/e2e-aws-csi/gather-extra/artifacts/pods/openshift-machine-api_machine-api-operator-866d7647bd-6lhl4_machine-api-operator.log
This is a clone of issue OCPBUGS-5761. The following is the description of the original issue:
—
This is a clone of issue OCPBUGS-5458. The following is the description of the original issue:
—
reported in https://coreos.slack.com/archives/C027U68LP/p1673010878672479
Description of problem:
A customer has an OpenShift cluster that was upgraded from version 4.8 to version 4.9.58. After the upgrade, the etcd pod on master1 isn't coming up and is crashlooping with the following error:

{"level":"fatal","ts":"2023-01-06T12:12:58.709Z","caller":"etcdmain/etcd.go:204","msg":"discovery failed","error":"wal: max entry size limit exceeded, recBytes: 13279, fileSize(313430016) - offset(313418480) - padBytes(1) = entryLimit(11535)","stacktrace":"go.etcd.io/etcd/server/v3/etcdmain.startEtcdOrProxyV2\n\t/remote-source/cachito-gomod-with-deps/app/server/etcdmain/etcd.go:204\ngo.etcd.io/etcd/server/v3/etcdmain.Main\n\t/remote-source/cachito-gomod-with-deps/app/server/etcdmain/main.go:40\nmain.main\n\t/remote-source/cachito-gomod-with-deps/app/server/main.go:32\nruntime.main\n\t/usr/lib/golang/src/runtime/proc.go:225"}
Version-Release number of selected component (if applicable):
How reproducible:
Steps to Reproduce:
1. 2. 3.
Actual results:
Expected results:
Additional info:
This is a clone of issue OCPBUGS-10314. The following is the description of the original issue:
—
This is a clone of issue OCPBUGS-8741. The following is the description of the original issue:
—
This is a clone of issue OCPBUGS-5889. The following is the description of the original issue:
—
Description of problem:
Customer running a cluster with the following config:
- 4.10.23
- AWS/IPI
- OVNKubernetes

Observed that in a namespace with networkpolicy rules enabled and an allow-from-same-namespace policy, pods behave differently when calling service IPs hosted in that same namespace.

Example:
- Deployment1 with two pods (A/B) exists in namespace <EXAMPLE>
- Deployment2 with one pod hosting a service and route exists in the same namespace
- Pod A unexpectedly stops being able to call the service IP of deployment2; Pod B never loses access to the service IP of deployment2
- Pod A remains able to call out through the br-ex interface, hit the ROUTE address, and reach the deployment2 pod via haproxy (this never breaks)
- Pod A remains able to reach the local gateway on the node
- The host node for Pod A is able to reach the service IP of deployment2 and remains able to do so, even while Pod A is impacted

The issue can be mitigated by applying a label or annotation to Pod A, which immediately allows it to reach internal service IPs again within the namespace. I suspect that the networkpolicy rules fail to stay updated on the pod object, and the pod needs to be 'refreshed' (a label appended or some other update) to force the pod to 'remember' that it is allowed to call peers within the namespace.

Additional relevant data:
- pods affected throughout the cluster; no specific project/service/deployment/application
- pods ride on different nodes all the time (no one node affected)
- pods with the fail condition are on the same node as other pods without the issue
- multiple namespaces see this problem
- all namespaces are using similar networkpolicy isolation and allow-from-same-namespace rulesets (which match our documentation on syntax)
Version-Release number of selected component (if applicable):
4.10.23
How reproducible:
Every time, though it is unclear what the trigger is: pods will be functional and, several hours/days later, will stop being able to talk to peer services.
Steps to Reproduce:
1. Deploy a pod with at least two replicas in a namespace with an allow-from-same-namespace network policy
2. Deploy a different service and route (for example an httpd instance) in the same namespace
3. Observe that one of the two pods may fail to reach the service IP after some time
4. Apply an annotation to the pod and it is immediately able to reach services again
Actual results:
Pods intermittently fail to reach internal service addresses, but can otherwise be interacted with and can reach upstream/external addresses, including routes on the cluster.
Expected results:
pods should not lose access to service network peers.
Additional info:
see next comments for relevant uploads/sosreports and inspects.
OCPBUGS-1251 landed an admin-ack gate in 4.11.z to help admins prepare for Kubernetes 1.25 API removals which are coming in OpenShift 4.12. Poking around in a 4.12.0-ec.2 cluster where APIRemovedInNextReleaseInUse is firing:
$ oc --as system:admin adm must-gather -- /usr/bin/gather_audit_logs
$ zgrep -h v1beta1/poddisruptionbudget must-gather.local.1378724704026451055/quay*/audit_logs/kube-apiserver/*.log.gz | jq -r '.verb + " " + (.user | .username + " " + (.extra["authentication.kubernetes.io/pod-name"] | tostring))' | sort | uniq -c
parse error: Invalid numeric literal at line 29, column 6
     28 watch system:serviceaccount:openshift-machine-api:cluster-autoscaler ["cluster-autoscaler-default-5cf997b8d6-ptgg7"]
Finding the source for that container:
$ oc --as system:admin -n openshift-machine-api get -o json pod cluster-autoscaler-default-5cf997b8d6-ptgg7 | jq -r '.status.containerStatuses[].image'
quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f81ab7ce0c851ba5e5169bba717cb54716ce5457cbe89d159c97a5c25fd820ed
$ oc image info quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f81ab7ce0c851ba5e5169bba717cb54716ce5457cbe89d159c97a5c25fd820ed | grep github
SOURCE_GIT_URL=https://github.com/openshift/kubernetes-autoscaler
io.openshift.build.commit.url=https://github.com/openshift/kubernetes-autoscaler/commit/1dac0311b9842958ec630273428b74703d51c1c9
io.openshift.build.source-location=https://github.com/openshift/kubernetes-autoscaler
Poking about in the source:
$ git clone --depth 30 --branch master https://github.com/openshift/kubernetes-autoscaler.git
$ cd kubernetes-autoscaler
$ find . -name vendor
./addon-resizer/vendor
./cluster-autoscaler/vendor
./vertical-pod-autoscaler/e2e/vendor
./vertical-pod-autoscaler/vendor
Lots of vendoring. I haven't checked to see how new the client code is in the various vendor packages. But the main issue seems to be the v1beta1 in:
$ git grep policy cluster-autoscaler/core cluster-autoscaler/utils | grep policy.*v1beta1
cluster-autoscaler/core/scaledown/actuation/actuator_test.go: policyv1beta1 "k8s.io/api/policy/v1beta1"
cluster-autoscaler/core/scaledown/actuation/actuator_test.go: eviction := createAction.GetObject().(*policyv1beta1.Eviction)
cluster-autoscaler/core/scaledown/actuation/drain.go: policyv1 "k8s.io/api/policy/v1beta1"
cluster-autoscaler/core/scaledown/actuation/drain_test.go: policyv1 "k8s.io/api/policy/v1beta1"
cluster-autoscaler/core/scaledown/legacy/legacy.go: policyv1 "k8s.io/api/policy/v1beta1"
cluster-autoscaler/core/scaledown/legacy/wrapper.go: policyv1 "k8s.io/api/policy/v1beta1"
cluster-autoscaler/core/scaledown/scaledown.go: policyv1 "k8s.io/api/policy/v1beta1"
cluster-autoscaler/core/static_autoscaler_test.go: policyv1 "k8s.io/api/policy/v1beta1"
cluster-autoscaler/utils/drain/drain.go: policyv1 "k8s.io/api/policy/v1beta1"
cluster-autoscaler/utils/drain/drain_test.go: policyv1 "k8s.io/api/policy/v1beta1"
cluster-autoscaler/utils/kubernetes/listers.go: policyv1 "k8s.io/api/policy/v1beta1"
cluster-autoscaler/utils/kubernetes/listers.go: v1policylister "k8s.io/client-go/listers/policy/v1beta1"
The main change from v1beta1 to v1 involves spec.selector; I don't know whether that's relevant to the autoscaler use case or not.
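For reference, a hedged sketch of what the v1beta1-to-v1 move looks like for an eviction call with client-go (not the autoscaler's code; its actual call sites are the files listed above):

```go
package main

import (
	"context"

	policyv1 "k8s.io/api/policy/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// The import path changes from k8s.io/api/policy/v1beta1 to
// k8s.io/api/policy/v1, and the eviction goes through the PolicyV1 client
// instead of PolicyV1beta1; the Eviction object itself is shaped the same.
func evictPod(ctx context.Context, client kubernetes.Interface, namespace, name string) error {
	eviction := &policyv1.Eviction{
		ObjectMeta: metav1.ObjectMeta{Namespace: namespace, Name: name},
	}
	return client.PolicyV1().Evictions(namespace).Evict(ctx, eviction)
}

func main() {
	// Wiring a real clientset requires a kubeconfig; this sketch only
	// demonstrates the API surface, so the function is left unused here.
	_ = evictPod
}
```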
Do we run autoscaler CI? I was poking around a bit, but did not find a 4.12 periodic exercising the autoscaler that might have turned up this alert and issue.
This is a clone of issue OCPBUGS-7445. The following is the description of the original issue:
—
This is a clone of issue OCPBUGS-7207. The following is the description of the original issue:
—
At some point in the mtu-migration development a configuration file was generated at /etc/cno/mtu-migration/config which was used as a flag to indicate to configure-ovs that a migration procedure was in progress. When that file was missing, it was assumed the migration procedure was over and configure-ovs did some cleaning on behalf of it.
But that changed, and /etc/cno/mtu-migration/config is never set. That causes configure-ovs to remove mtu-migration information while the procedure is still in progress, making it use incorrect MTU values and either causing nodes to be tainted with "ovn.k8s.org/mtu-too-small", blocking the procedure itself, or causing network disruption until the procedure is over.
However, this was not a problem for the CI job, as it does not use the migration procedure as documented, for the sake of saving the limited time available to run CI jobs. The CI merges two steps of the procedure into one so that there is never a reboot while the procedure is in progress, which hides this issue.
This was probably not detected in QE as well for the same reason as CI.
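A minimal sketch of the flag-file contract described above (illustrative only; configure-ovs is a shell script, so this Go version is purely for exposition):

```go
package main

import (
	"fmt"
	"os"
)

// The pre-regression contract: /etc/cno/mtu-migration/config present means
// "migration in progress, keep MTU-migration state"; absent means "migration
// over, safe to clean up". Once the file stopped being written, the cleanup
// branch ran during live migrations, producing the failures described above.
const mtuMigrationConfig = "/etc/cno/mtu-migration/config"

func migrationInProgress() bool {
	_, err := os.Stat(mtuMigrationConfig)
	return err == nil
}

func main() {
	if migrationInProgress() {
		fmt.Println("keeping mtu-migration state")
	} else {
		fmt.Println("cleaning mtu-migration state (wrong while a migration is live)")
	}
}
```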
This is a clone of issue OCPBUGS-7732. The following is the description of the original issue:
—
Description of problem:
When services are deleted, the services controller cache should also remove the service from its top-level cache to avoid growing forever. While this is not an issue in 4.13 once the lb_cache rework merges [1], the 4.12 and older branches have this problem because that rework is meant for 4.13 only.

[1]: https://github.com/ovn-org/ovn-kubernetes/pull/3387

This is the location where alreadyApplied is not deleting the removal: https://github.com/openshift/ovn-kubernetes/blob/cf9fb51510e1870961bf3a0f064b73536757a4f8/go-controller/pkg/ovn/controller/services/services_controller.go#L269

It should make changes similar to those depicted here (currently merged upstream): https://github.com/ovn-org/ovn-kubernetes/blob/cd78ae1af4657d38bdc41003a8737aa958d62b9d/go-controller/pkg/ovn/controller/services/services_controller.go#L322-L324
Version-Release number of selected component (if applicable):
How reproducible:
100%
Steps to Reproduce:
1. Create a service (use a unique name)
2. Remove the service
3. Notice how alreadyApplied grows and never gets smaller
4. Repeat
Actual results:
^^
Expected results:
alreadyApplied should not grow forever
Additional info:
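A minimal sketch of the cache-eviction fix described above, assuming a much-simplified controller shape (the real ovn-kubernetes code is at the links in the description):

```go
package main

import (
	"fmt"
	"sync"
)

// Simplified sketch, not the actual services controller: the deletion path
// must also evict the top-level cache entry, otherwise alreadyApplied grows
// by one entry per unique service key forever.
type servicesController struct {
	mu             sync.Mutex
	alreadyApplied map[string][]string // keyed by "namespace/name"
}

func (c *servicesController) onServiceDelete(namespace, name string) {
	key := namespace + "/" + name
	c.mu.Lock()
	defer c.mu.Unlock()
	delete(c.alreadyApplied, key) // without this line, the entry leaks
}

func main() {
	c := &servicesController{alreadyApplied: map[string][]string{
		"test/my-svc": {"10.96.0.10:80"},
	}}
	c.onServiceDelete("test", "my-svc")
	fmt.Println(len(c.alreadyApplied)) // 0: the cache no longer grows forever
}
```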
Description of problem:
The 4.11 version of openshift-installer does not support the mon01 zone
Version-Release number of selected component (if applicable):
How reproducible:
Steps to Reproduce:
1. 2. 3.
Actual results:
Expected results:
Additional info:
Description of problem:
When creating a ProjectHelmChartRepository (with or without the form) and setting a display name (as `spec.name`), this value is not used in the developer catalog / Helm Charts catalog filter sidebar.
It shows (and watches) the display names of `HelmChartRepository` resources.
Version-Release number of selected component (if applicable):
4.11
How reproducible:
Always
Steps to Reproduce:
1. Switch to Developer Perspective
2. Navigate to Add > "Helm Chart repositories"
3. Enter "ibm-charts" as "Chart repository name"
4. Enter https://raw.githubusercontent.com/IBM/charts/master/repo/community/index.yaml as the URL
5. Press Create
6. Open the YAML editor and change the `spec.name` attribute to "IBM Charts"
7. Save the change
8. Navigate to Add > "Helm Chart"
Actual results:
The filter navigation on the left side shows "Chart Repositories" > "Ibm Chart", a camel-case version of the resource name.
Expected results:
It should show the `spec.name` "IBM Charts" if defined, and fall back to the current implementation if the optional `spec.name` is not defined.
Additional info:
There is a bug discussing that the display name could not be entered directly, https://bugzilla.redhat.com/show_bug.cgi?id=2106366. This bug here is only about the catalog output.
This bug is a backport clone of [Bugzilla Bug 2094362](https://bugzilla.redhat.com/show_bug.cgi?id=2094362). The following is the description of the original bug:
—
Description of problem:
A change [1] was introduced to split the kube-apiserver SLO rules into 2 groups to reduce the load on Prometheus (see bug 2004585).
Version-Release number of selected component (if applicable):
4.9 (because the change was backported to 4.9.z)
How reproducible:
Always
Steps to Reproduce:
1. Install OCP 4.9
2. Retrieve kube-apiserver-slos*
oc get -n openshift-kube-apiserver prometheusrules kube-apiserver-slos -o yaml
oc get -n openshift-kube-apiserver prometheusrules kube-apiserver-slos-basic -o yaml
Actual results:
The KubeAPIErrorBudgetBurn alert with labels
{long="1h",namespace="openshift-kube-apiserver",severity="critical",short="5m"}exists both in kube-apiserver-slos and kube-apiserver-slos-basic.
The alerting rule is evaluated twice. The same is true for recording rules like "apiserver_request:burnrate1h", and in this case it can trigger warning logs in the Prometheus pods:
> level=warn component="rule manager" group=kube-apiserver.rules msg="Error on ingesting out-of-order result from rule evaluation" numDropped=283
Expected results:
I presume that kube-apiserver-slos shouldn't exist since it's been replaced by kube-apiserver-slos-basic and kube-apiserver-slos-extended.
Additional info:
Discovered while investigating bug 2091902
This is a clone of issue OCPBUGS-676. The following is the description of the original issue:
—
The machine approver doesn't recognize hostnames that use capital letters as valid, even though DNS is case-insensitive.
an example of this is in OHSS-14709:
I0822 19:04:51.587266 1 controller.go:114] Reconciling CSR: csr-vdtpv
I0822 19:04:51.600941 1 csr_check.go:156] csr-vdtpv: CSR does not appear to be client csr
I0822 19:04:51.603648 1 csr_check.go:542] retrieving serving cert from ip-100-66-119-117.ec2.internal (100.66.119.117:10250)
I0822 19:04:51.604003 1 csr_check.go:181] Failed to retrieve current serving cert: dial tcp 100.66.119.117:10250: connect: connection refused
I0822 19:04:51.604017 1 csr_check.go:201] Falling back to machine-api authorization for ip-100-66-119-117.ec2.internal
E0822 19:04:51.604024 1 csr_check.go:392] csr-vdtpv: DNS name 'ip-100-66-119-117.tech-ace-maint-prd.aws.delta.com' not in machine names: ip-100-66-119-117.ec2.internal ip-100-66-119-117.ec2.internal ip-100-66-119-117.tech-ACE-maint-prd.aws.delta.com
I0822 19:04:51.604033 1 csr_check.go:204] Could not use Machine for serving cert authorization: DNS name 'ip-100-66-119-117.tech-ace-maint-prd.aws.delta.com' not in machine names: ip-100-66-119-117.ec2.internal ip-100-66-119-117.ec2.internal ip-100-66-119-117.tech-ACE-maint-prd.aws.delta.com
I0822 19:04:51.606777 1 controller.go:199] csr-vdtpv: CSR not authorized
This can be worked around by manually approving the CSR
The relevant line in the machine approver appears to be here: https://github.com/openshift/cluster-machine-approver/blob/master/pkg/controller/csr_check.go#L378
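A minimal sketch of a case-insensitive name match (simplified and hypothetical; the actual check lives in csr_check.go linked above and compares the CSR's DNS names against several machine name fields):

```go
package main

import (
	"fmt"
	"strings"
)

// Simplified, hypothetical sketch, not the actual cluster-machine-approver
// code: since DNS names are case-insensitive, the CSR's DNS names should be
// compared against the machine's known names with strings.EqualFold rather
// than an exact == comparison.
func dnsNameInMachineNames(dnsName string, machineNames []string) bool {
	for _, n := range machineNames {
		if strings.EqualFold(n, dnsName) {
			return true
		}
	}
	return false
}

func main() {
	machineNames := []string{
		"ip-100-66-119-117.ec2.internal",
		"ip-100-66-119-117.tech-ACE-maint-prd.aws.delta.com",
	}
	// The lower-cased DNS name from the CSR now matches despite the
	// capital letters recorded on the machine:
	fmt.Println(dnsNameInMachineNames("ip-100-66-119-117.tech-ace-maint-prd.aws.delta.com", machineNames))
}
```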
Description of problem:
When scaling down the MachineSet for worker nodes, a PV (vmdk) file got deleted.
Version-Release number of selected component (if applicable):
4.10
How reproducible:
N/A
Steps to Reproduce:
1. Scale down worker nodes
2. Check VMware logs: the VM gets deleted with the vmdk still attached
Actual results:
After scaling down nodes, volumes still attached to the VM get deleted alongside the VM
Expected results:
Worker nodes scaled down without any accidental deletion
Additional info:
This is a clone of issue OCPBUGS-11636. The following is the description of the original issue:
—
Description of problem:
ACLs are disabled for all newly created S3 buckets, which causes all OCP installs to fail because the bootstrap ignition cannot be uploaded:

level=info msg=Creating infrastructure resources...
level=error
level=error msg=Error: error creating S3 bucket ACL for yunjiang-acl413-4dnhx-bootstrap: AccessControlListNotSupported: The bucket does not allow ACLs
level=error msg= status code: 400, request id: HTB2HSH6XDG0Q3ZA, host id: V6CrEgbc6eyfJkUbLXLxuK4/0IC5hWCVKEc1RVonSbGpKAP1RWB8gcl5dfyKjbrLctVlY5MG2E4=
level=error
level=error msg= with aws_s3_bucket_acl.ignition,
level=error msg= on main.tf line 62, in resource "aws_s3_bucket_acl" "ignition":
level=error msg= 62: resource "aws_s3_bucket_acl" ignition {
level=error
level=error msg=failed to fetch Cluster: failed to generate asset "Cluster": failure applying terraform for "bootstrap" stage: failed to create cluster: failed to apply Terraform: exit status 1
level=error
level=error msg=Error: error creating S3 bucket ACL for yunjiang-acl413-4dnhx-bootstrap: AccessControlListNotSupported: The bucket does not allow ACLs
level=error msg= status code: 400, request id: HTB2HSH6XDG0Q3ZA, host id: V6CrEgbc6eyfJkUbLXLxuK4/0IC5hWCVKEc1RVonSbGpKAP1RWB8gcl5dfyKjbrLctVlY5MG2E4=
level=error
level=error msg= with aws_s3_bucket_acl.ignition,
level=error msg= on main.tf line 62, in resource "aws_s3_bucket_acl" "ignition":
level=error msg= 62: resource "aws_s3_bucket_acl" ignition {
Version-Release number of selected component (if applicable):
4.11+
How reproducible:
Always
Steps to Reproduce:
1.Create a cluster via IPI
Actual results:
install fail
Expected results:
install succeed
Additional info:
Heads-Up: Amazon S3 Security Changes Are Coming in April of 2023 - https://aws.amazon.com/blogs/aws/heads-up-amazon-s3-security-changes-are-coming-in-april-of-2023/

https://docs.aws.amazon.com/AmazonS3/latest/userguide/object-ownership-error-responses.html - "After you apply the bucket owner enforced setting for Object Ownership, ACLs are disabled."
Description of problem:
When the user installs a helm chart, the dropdown to select a specific version is always disabled. This also applies to helm charts that can be upgraded or downgraded after installation, for example the nodejs helm chart.
Version-Release number of selected component (if applicable):
At least 4.11, maybe all versions, but a backport to 4.11 is fine
How reproducible:
Always
Steps to Reproduce:
1. Switch to developer perspective
2. Navigate to Add > Helm chart
3. Select the Nodejs helm chart
4. Try to select another version

When the user installs the only selectable version and then edits the helm chart, there is another version to select.
Actual results:
The version is not selectable.
Expected results:
The version should be selectable.
Additional info:
This is a clone of issue OCPBUGS-4696. The following is the description of the original issue:
—
Description of problem:
metal3 pod does not come up on SNO when creating a Provisioning with provisioningNetwork set to Disabled.

The issue is that on SNO there is no Machine and no BareMetalHost, and the operator looks for Machine objects to populate the provisioningMacAddresses field. However, when provisioningNetwork is Disabled, provisioningMacAddresses is not used anyway.

You can work around this issue by populating provisioningMacAddresses with a dummy address, like this:

kind: Provisioning
metadata:
  name: provisioning-configuration
spec:
  provisioningMacAddresses:
  - aa:aa:aa:aa:aa:aa
  provisioningNetwork: Disabled
  watchAllNamespaces: true
Version-Release number of selected component (if applicable):
4.11.17
How reproducible:
Try to bring up Provisioning on SNO in 4.11.17 with provisioningNetwork set to Disabled:

apiVersion: metal3.io/v1alpha1
kind: Provisioning
metadata:
  name: provisioning-configuration
spec:
  provisioningNetwork: Disabled
  watchAllNamespaces: true
Steps to Reproduce:
1. 2. 3.
Actual results:
controller/provisioning "msg"="Reconciler error" "error"="machines with cluster-api-machine-role=master not found" "name"="provisioning-configuration" "namespace"="" "reconciler group"="metal3.io" "reconciler kind"="Provisioning"
Expected results:
metal3 pod should be deployed
Additional info:
This issue is a result of this change: https://github.com/openshift/cluster-baremetal-operator/pull/307 See this Slack thread: https://coreos.slack.com/archives/CFP6ST0A3/p1670530729168599
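A minimal sketch of the guard that would avoid this failure, assuming a much-simplified reconcile shape (hypothetical names; the real code is in cluster-baremetal-operator at the links above):

```go
package main

import "fmt"

// Hypothetical, simplified sketch of the ordering bug: MAC discovery from
// Machine objects runs even when the MACs will never be used, so SNO
// clusters (which have no Machine objects) fail to reconcile.
type provisioningSpec struct {
	ProvisioningNetwork      string
	ProvisioningMacAddresses []string
}

func reconcile(spec *provisioningSpec, machineMACs func() ([]string, error)) error {
	// Only look up MACs when the provisioning network actually needs them.
	if spec.ProvisioningNetwork != "Disabled" && len(spec.ProvisioningMacAddresses) == 0 {
		macs, err := machineMACs()
		if err != nil {
			return err
		}
		spec.ProvisioningMacAddresses = macs
	}
	return nil
}

func main() {
	noMachines := func() ([]string, error) {
		return nil, fmt.Errorf("machines with cluster-api-machine-role=master not found")
	}
	// With the guard, SNO + Disabled reconciles despite having no Machines.
	fmt.Println(reconcile(&provisioningSpec{ProvisioningNetwork: "Disabled"}, noMachines))
}
```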
This is a clone of issue OCPBUGS-12839. The following is the description of the original issue:
—
As a user, I would like to see the type of technology used by the samples on the samples view similar to the all services view.
On the samples view:
It shows different types of samples (e.g. devfile, helm) all presenting as .NET, which makes it difficult for the user to decide which .NET entry to select from the list. We'll need something like the all-services view, where the type of technology is shown at the top right of each card so users can differentiate between the entries.
This is a clone of issue OCPBUGS-3117. The following is the description of the original issue:
—
This is a clone of issue OCPBUGS-3084. The following is the description of the original issue:
—
Upstream Issue: https://github.com/kubernetes/kubernetes/issues/77603
Long log lines get corrupted when using '--timestamps' by the Kubelet.
The root cause is that the buffer reads up to a newline. If the line is greater than 4096 bytes and '--timestamps' is turned on, the kubelet will write the timestamp and the partial log line. We will need to refactor the ReadLogs function to allow for a partial line read.
apiVersion: v1
kind: Pod
metadata:
name: logs
spec:
restartPolicy: Never
containers:
- name: logs
image: fedora
args:
- bash
- -c
- 'for i in `seq 1 10000000`; do echo -n $i; done'
kubectl logs logs --timestamps
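A minimal sketch of the partial-line handling the refactor needs (illustrative only, not the kubelet's actual ReadLogs code): the timestamp must be written only at the true start of a line, never in front of a continuation chunk.

```go
package main

import (
	"bufio"
	"fmt"
	"io"
	"os"
	"strings"
	"time"
)

// Illustrative sketch, not kubelet code: bufio.Reader.ReadLine reports
// isPrefix=true when a line exceeds the buffer. Tracking whether we are at
// a real line start lets us prepend the timestamp exactly once per line,
// instead of corrupting long lines with timestamps mid-stream.
func copyWithTimestamps(r io.Reader, w io.Writer) error {
	br := bufio.NewReaderSize(r, 4096)
	atLineStart := true
	for {
		chunk, isPrefix, err := br.ReadLine()
		if len(chunk) > 0 {
			if atLineStart {
				fmt.Fprintf(w, "%s ", time.Now().Format(time.RFC3339Nano))
			}
			w.Write(chunk)
			if !isPrefix {
				fmt.Fprintln(w) // the line is complete; terminate it
			}
			atLineStart = !isPrefix
		}
		if err == io.EOF {
			return nil
		}
		if err != nil {
			return err
		}
	}
}

func main() {
	long := strings.Repeat("x", 10000) + "\n" // a single line longer than the buffer
	copyWithTimestamps(strings.NewReader(long), os.Stdout)
}
```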
This fix contains the following changes coming from updated versions of Kubernetes up to v1.24.10:
Changelog:
v1.24.11: https://github.com/kubernetes/kubernetes/blob/release-1.24/CHANGELOG/CHANGELOG-1.24.md#changelog-since-v12410
v1.24.10: https://github.com/kubernetes/kubernetes/blob/release-1.24/CHANGELOG/CHANGELOG-1.24.md#changelog-since-v1249
v1.24.9: https://github.com/kubernetes/kubernetes/blob/release-1.24/CHANGELOG/CHANGELOG-1.24.md#changelog-since-v1248
v1.24.8: https://github.com/kubernetes/kubernetes/blob/release-1.24/CHANGELOG/CHANGELOG-1.24.md#changelog-since-v1247
v1.24.7: https://github.com/kubernetes/kubernetes/blob/release-1.24/CHANGELOG/CHANGELOG-1.24.md#changelog-since-v1246
This is a clone of issue OCPBUGS-7474. The following is the description of the original issue:
—
This is a clone of issue OCPBUGS-6714. The following is the description of the original issue:
—
Description of problem:
Traffic from egress IPs was interrupted after the cluster was patched to OpenShift 4.10.46.
A customer cluster was patched. It is an OpenShift 4.10.46 cluster with SDN.
More description of the issue is available in a private comment below, since it contains customer data.
Users currently can't configure the retention period for Thanos Ruler; the default value is 24h (from the prometheus-operator).
Description of problem:
This bug's purpose is to enable a feature backport of https://issues.redhat.com/browse/MON-1949.
Our Prometheus alerts are inconsistent with both upstream and sometimes our own vendor folder. Let's do a clean update run before the next release is branched off.
This is a clone of issue OCPBUGS-683. The following is the description of the original issue:
—
Description of problem:
Version-Release number of selected component (if applicable):
4.12.0-0.nightly-2022-08-21-135326

How reproducible:
Steps to Reproduce:
See https://bugzilla.redhat.com/show_bug.cgi?id=2118563#c5. The following messages are "normal" on startup, but they are very misleading as error statements; suggest suppressing them or updating them to clearer wording so it is apparent they are part of the normal process.

E0818 02:18:53.709223 1 controller.go:165] error syncing 'br709bt-b5564-6jgdx-worker-0-c955q': error retrieving the private IP configuration for node: br709bt-b5564-6jgdx-worker-0-c955q, err: cannot parse valid nova server ID from providerId '', requeuing in node workqueue
E0818 02:18:53.715530 1 controller.go:165] error syncing 'br709bt-b5564-6jgdx-worker-0-sl9jn': error retrieving the private IP configuration for node: br709bt-b5564-6jgdx-worker-0-sl9jn, err: cannot parse valid nova server ID from providerId '', requeuing in node workqueue
E0818 02:18:53.735885 1 controller.go:165] error syncing 'br709bt-b5564-6jgdx-worker-0-sl9jn': error retrieving the private IP configuration for node: br709bt-b5564-6jgdx-worker-0-sl9jn, err: cannot parse valid nova server ID from providerId '', requeuing in node workqueue
E0818 02:18:53.775984 1 controller.go:165] error syncing 'br709bt-b5564-6jgdx-worker-0-sl9jn': error retrieving the private IP configuration for node: br709bt-b5564-6jgdx-worker-0-sl9jn, err: cannot parse valid nova server ID from providerId '', requeuing in node workqueue
E0818 02:18:53.790449 1 controller.go:165] error syncing 'br709bt-b5564-6jgdx-worker-0-c955q': error retrieving the private IP configuration for node: br709bt-b5564-6jgdx-worker-0-c955q, err: cannot parse valid nova server ID from providerId '', requeuing in node workqueue
E0818 02:18:53.856911 1 controller.go:165] error syncing 'br709bt-b5564-6jgdx-worker-0-sl9jn': error retrieving the private IP configuration for node: br709bt-b5564-6jgdx-worker-0-sl9jn, err: cannot parse valid nova server ID from providerId '', requeuing in node workqueue
E0818 02:18:53.950782 1 controller.go:165] error syncing 'br709bt-b5564-6jgdx-worker-0-c955q': error retrieving the private IP configuration for node: br709bt-b5564-6jgdx-worker-0-c955q, err: cannot parse valid nova server ID from providerId '', requeuing in node workqueue
E0818 02:18:54.017583 1 controller.go:165] error syncing 'br709bt-b5564-6jgdx-worker-0-sl9jn': error retrieving the private IP configuration for node: br709bt-b5564-6jgdx-worker-0-sl9jn, err: cannot parse valid nova server ID from providerId '', requeuing in node workqueue
E0818 02:18:54.271967 1 controller.go:165] error syncing 'br709bt-b5564-6jgdx-worker-0-c955q': error retrieving the private IP configuration for node: br709bt-b5564-6jgdx-worker-0-c955q, err: cannot parse valid nova server ID from providerId '', requeuing in node workqueue
E0818 02:18:54.338944 1 controller.go:165] error syncing 'br709bt-b5564-6jgdx-worker-0-sl9jn': error retrieving the private IP configuration for node: br709bt-b5564-6jgdx-worker-0-sl9jn, err: cannot parse valid nova server ID from providerId '', requeuing in node workqueue
E0818 02:18:54.916988 1 controller.go:165] error syncing 'br709bt-b5564-6jgdx-worker-0-c955q': error retrieving the private IP configuration for node: br709bt-b5564-6jgdx-worker-0-c955q, err: cannot parse valid nova server ID from providerId '', requeuing in node workqueue
E0818 02:18:54.982211 1 controller.go:165] error syncing 'br709bt-b5564-6jgdx-worker-0-sl9jn': error retrieving the private IP configuration for node: br709bt-b5564-6jgdx-worker-0-sl9jn, err: cannot parse valid nova server ID from providerId '', requeuing in node workqueue
Actual results:
Expected results:
Additional info:
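For context, a hedged sketch of the parse that emits these messages (assumed providerID shape, not the actual controller code): right after a node registers, spec.providerID can still be empty, so parsing fails and the node is requeued until the field is populated, which is why the errors are "normal" during startup.

```go
package main

import (
	"fmt"
	"strings"
)

// Assumed shape, not the actual cloud-network-config-controller code: an
// OpenStack providerID looks like "openstack:///<nova-server-uuid>". An
// empty providerID therefore fails to parse, producing the startup errors
// quoted above until the node object is updated.
func novaServerIDFromProviderID(providerID string) (string, error) {
	const prefix = "openstack:///"
	if !strings.HasPrefix(providerID, prefix) || providerID == prefix {
		return "", fmt.Errorf("cannot parse valid nova server ID from providerId '%s'", providerID)
	}
	return strings.TrimPrefix(providerID, prefix), nil
}

func main() {
	if _, err := novaServerIDFromProviderID(""); err != nil {
		fmt.Println("transient on startup:", err) // the node is simply requeued
	}
	id, _ := novaServerIDFromProviderID("openstack:///0aacd9a6-0000-0000-0000-000000000000")
	fmt.Println("parsed:", id)
}
```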
This is a clone of issue OCPBUGS-4238. The following is the description of the original issue:
—
This is a clone of issue OCPBUGS-3883. The following is the description of the original issue:
—
While doing a PerfScale test we noticed that the ovnkube pods are not being spread out evenly among the available workers. Instead they all stack on a few workers until they fill up the available allocatable EBS volumes (25 in the case of the m5 instances we see here).
An example from partway through our 80 hosted cluster test when there were ~30 hosted clusters created/in progress
There are 24 workers available:
```
$ for i in `oc get nodes -l node-role.kubernetes.io/worker=,node-role.kubernetes.io/infra!=,node-role.kubernetes.io/workload!= | egrep -v "NAME" | awk '{ print $1 }'`; do echo $i `oc describe node $i | grep -v openshift | grep ovnkube -c`; done
ip-10-0-129-227.us-west-2.compute.internal 0
ip-10-0-136-22.us-west-2.compute.internal 25
ip-10-0-136-29.us-west-2.compute.internal 0
ip-10-0-147-248.us-west-2.compute.internal 0
ip-10-0-150-147.us-west-2.compute.internal 0
ip-10-0-154-207.us-west-2.compute.internal 0
ip-10-0-156-0.us-west-2.compute.internal 0
ip-10-0-157-1.us-west-2.compute.internal 4
ip-10-0-160-253.us-west-2.compute.internal 0
ip-10-0-161-30.us-west-2.compute.internal 0
ip-10-0-164-98.us-west-2.compute.internal 0
ip-10-0-168-245.us-west-2.compute.internal 0
ip-10-0-170-103.us-west-2.compute.internal 0
ip-10-0-188-169.us-west-2.compute.internal 25
ip-10-0-188-194.us-west-2.compute.internal 0
ip-10-0-191-51.us-west-2.compute.internal 5
ip-10-0-192-10.us-west-2.compute.internal 0
ip-10-0-193-200.us-west-2.compute.internal 0
ip-10-0-193-27.us-west-2.compute.internal 7
ip-10-0-199-1.us-west-2.compute.internal 0
ip-10-0-203-161.us-west-2.compute.internal 0
ip-10-0-204-40.us-west-2.compute.internal 23
ip-10-0-220-164.us-west-2.compute.internal 0
ip-10-0-222-59.us-west-2.compute.internal 0
```
This is running quay.io/openshift-release-dev/ocp-release:4.11.11-x86_64 for the hosted clusters, and the hypershift operator is quay.io/hypershift/hypershift-operator:4.11 on a 4.11.9 management cluster.
This is a clone of issue OCPBUGS-11998. The following is the description of the original issue:
—
This is a clone of issue OCPBUGS-10678. The following is the description of the original issue:
—
This is a clone of issue OCPBUGS-10655. The following is the description of the original issue:
—
Description of problem:
The dev console shows a list of samples. The user can create a sample based on a git repository. But some of these samples don't include a git repository reference and cannot be created.
Version-Release number of selected component (if applicable):
Tested different frontend versions against a 4.11 cluster; all of them (the oldest tested frontend was 4.8) show the sample without a git repository.
But the result also depends on the installed samples operator and installed ImageStreams.
How reproducible:
Always
Steps to Reproduce:
Actual results:
The git repository is not filled in and the create button is disabled.
Expected results:
Samples without git repositories should not be displayed in the list.
Additional info:
The Git repository is saved as "sampleRepo" in the ImageStream tag section.
This is a clone of issue OCPBUGS-5019. The following is the description of the original issue:
—
This is a clone of issue OCPBUGS-4941. The following is the description of the original issue:
—
Description of problem: This is a follow-up to OCPBUGS-3933.
The installer fails to destroy the cluster when the OpenStack object storage omits 'content-type' from responses, and a container is empty.
Version-Release number of selected component (if applicable):
4.8.z
How reproducible:
Likely not happening in customer environments where Swift is exposed directly. We're seeing the issue in our CI where we're using a non-RHOSP managed cloud.
Steps to Reproduce:
1. 2. 3.
Actual results:
Expected results:
Additional info: