Note: this page shows the Feature-Based Change Log for a release
These features were completed when this image was assembled
Dependencies (internal and external)
The tool should be able to upload an OpenID Connect (OIDC) configuration to an S3 bucket, and create an AWS IAM Identity Provider that trusts identities from the OIDC provider. It should take an infra name as input so that the user can identify all the resources created in AWS. Ensure that resources created in AWS are tagged appropriately.
Sample command with existing key pair:
tool-name create identity-provider <infra-name> --public-key ./path/to/public/key
Ensure the Identity Provider includes audience config for both the in-cluster components ('openshift') and the pod-identity-webhook ('sts.amazonaws.com').
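For illustration, a minimal Go sketch of creating such an identity provider with aws-sdk-go; the helper name, the tag key, and the thumbprint handling are assumptions, not the tool's actual implementation:

```go
package provision

import (
	"fmt"
	"log"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/iam"
)

// createIdentityProvider creates an IAM OIDC identity provider that trusts the
// issuer uploaded to the S3 bucket, with audiences for the in-cluster
// components and the pod-identity-webhook, and tags it with the infra name so
// the user can identify the resources created in AWS. Sketch only; the tag key
// is an assumption.
func createIdentityProvider(sess *session.Session, infraName, issuerURL, caThumbprint string) error {
	svc := iam.New(sess)
	out, err := svc.CreateOpenIDConnectProvider(&iam.CreateOpenIDConnectProviderInput{
		Url: aws.String(issuerURL),
		ClientIDList: []*string{
			aws.String("openshift"),         // in-cluster components
			aws.String("sts.amazonaws.com"), // pod-identity-webhook
		},
		ThumbprintList: []*string{aws.String(caThumbprint)},
		Tags: []*iam.Tag{
			{Key: aws.String(fmt.Sprintf("kubernetes.io/cluster/%s", infraName)), Value: aws.String("owned")},
		},
	})
	if err != nil {
		return err
	}
	log.Printf("created identity provider %s", aws.StringValue(out.OpenIDConnectProviderArn))
	return nil
}
```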
ccoctl should be able to delete AWS resources it created
ccoctl delete <infra-name>
Research if we can dynamically reserve memory and CPU for nodes.
When this image was assembled, these features were not yet completed. Therefore, only the Jira Cards included here are part of this release
As an OpenShift administrator
I want the registry operator to use the topology mode from Infrastructure (HighAvailable = 2 replicas, SingleReplica = 1 replica)
so that the operator does not spend resources on high availability when it is not needed.
See also:
https://github.com/openshift/enhancements/blob/master/enhancements/cluster-high-availability-mode-api.md
https://github.com/openshift/api/pull/827/files
Platform | SingleReplica | HighAvailable |
---|---|---|
AWS | 1 replica | 2 replicas |
Azure | 1 replica | 2 replicas |
GCP | 1 replica | 2 replicas |
OpenStack (swift) | 1 replica | 2 replicas |
OpenStack (cinder) | 1 replica | 1 replica (PVC) |
oVirt | 1 replica | 1 replica (PVC) |
bare metal | Removed | Removed |
vSphere | Removed | Removed |
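As a rough sketch (not the operator's actual code), the mapping above could be driven directly from the Infrastructure status once the openshift/api types from the PRs below are vendored; the helper name and the fallback are assumptions:

```go
package registry

import configv1 "github.com/openshift/api/config/v1"

// replicasForTopology is a hypothetical helper mapping the infrastructure
// topology reported in the Infrastructure status to a replica count for the
// registry deployment.
func replicasForTopology(infra *configv1.Infrastructure) int32 {
	switch infra.Status.InfrastructureTopology {
	case configv1.SingleReplicaTopologyMode:
		return 1
	case configv1.HighlyAvailableTopologyMode:
		return 2
	default:
		// Fall back to the highly-available default if the field is unset.
		return 2
	}
}
```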
https://github.com/openshift/enhancements/pull/555
https://github.com/openshift/api/pull/827
The console operator will need to support single-node clusters.
We have a console deployment and a downloads deployment. Each will need to be updated so that there's only a single replica when high availability mode is disabled in the Infrastructure config. We should also remove the anti-affinity rule in the console deployment that tries to spread console pods across nodes.
The downloads deployment is currently a static manifest. That likely needs to be created by the console operator instead going forward.
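A minimal sketch of the kind of adjustment described above, assuming the Infrastructure topology has already been read; the helper name is hypothetical and this is not the operator's actual code:

```go
package console

import (
	configv1 "github.com/openshift/api/config/v1"
	appsv1 "k8s.io/api/apps/v1"
)

// applyTopology trims the console (or downloads) deployment for single-node
// clusters: one replica and no pod anti-affinity, since there is only one
// node to spread across. Hypothetical helper, not the operator's actual code.
func applyTopology(d *appsv1.Deployment, topology configv1.TopologyMode) {
	if topology != configv1.SingleReplicaTopologyMode {
		return
	}
	one := int32(1)
	d.Spec.Replicas = &one
	if d.Spec.Template.Spec.Affinity != nil {
		// Remove the rule that spreads console pods across nodes.
		d.Spec.Template.Spec.Affinity.PodAntiAffinity = nil
	}
}
```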
Acceptance Criteria:
Bump github.com/openshift/api to pick up changes from openshift/api#827
We drive OpenShift cross-market customer success and new customer adoption with constant improvements and feature additions to the existing capabilities of our OpenShift Core Networking (SDN and Network Edge). This feature captures that natural progression of the product.
There are definitely grey areas, but in general:
Questions to be addressed:
This story is for actually updating the version of CoreDNS in github.com/openshift/coredns. Our fork will need to be rebased onto https://github.com/coredns/coredns/releases/tag/v1.8.1, which may involve some git fu. Refer to previous CoreDNS rebase PRs for any pointers there.
CoreDNS v1.7 renamed some metrics that we use in our alerting rules. Make sure the alerting rules in https://github.com/openshift/cluster-dns-operator/blob/master/manifests/0000_90_dns-operator_03_prometheusrules.yaml are using the correct metrics names (and still work as intended).
We need to verify that no new CoreDNS dual stack features require any configuration changes or feature flags.
(All dual stack changes should just work once we rebase to coredns v1.8.1).
See https://github.com/coredns/coredns/pull/4339 .
We also need to verify that cluster DNS works for both v4 and v6 for a dual-stack cluster IP service (i.e., request via A and AAAA records and make sure you get the desired response, not just one or the other). A brief CI test on our dual-stack metal CI might make the most sense here (KNI might have a job like this already; we need to investigate our options to add dual-stack coverage to openshift/coredns).
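For reference, a small Go sketch of such a check: resolve the service via A and AAAA lookups and require records from both families (the service name and the use of the default resolver are assumptions):

```go
package main

import (
	"context"
	"fmt"
	"net"
	"time"
)

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()

	// Hypothetical dual-stack cluster IP service name.
	host := "my-svc.my-namespace.svc.cluster.local"
	r := &net.Resolver{}

	v4, err4 := r.LookupIP(ctx, "ip4", host) // A records
	v6, err6 := r.LookupIP(ctx, "ip6", host) // AAAA records

	if err4 != nil || len(v4) == 0 {
		fmt.Println("missing A records:", err4)
	}
	if err6 != nil || len(v6) == 0 {
		fmt.Println("missing AAAA records:", err6)
	}
	fmt.Printf("A: %v\nAAAA: %v\n", v4, v6)
}
```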
Create a PR in openshift/cluster-ingress-operator to implement the PROXY protocol API.
The multiple destinations provided as part of the allowedDestinations field are not working as they used to on OCP4: https://github.com/openshift/images/blob/master/egress/router/egress-router.sh#L70-L109
We need to parse this from the NAD and modify the iptables here to support them:
https://github.com/openshift/egress-router-cni/blob/master/pkg/macvlan/macvlan.go#L272-L349
Testing:
1) Created NAD:
[dsal@bkr-hv02 surya_multiple_destinations]$ cat nad_multiple_destination.yaml
---
apiVersion: "k8s.cni.cncf.io/v1"
kind: NetworkAttachmentDefinition
metadata:
  name: egress-router
spec:
  config: '{
    "cniVersion": "0.4.0",
    "type": "egress-router",
    "name": "egress-router",
    "ip": {
      "addresses": [ "10.200.16.10/24" ],
      "destinations": [
        "80 tcp 10.100.3.200",
        "8080 tcp 203.0.113.26 80",
        "8443 tcp 203.0.113.26 443"
      ],
      "gateway": "10.200.16.1"
    }
  }'
2) Created pod:
[dsal@bkr-hv02 surya_multiple_destinations]$ cat egress-router-pod.yaml
---
apiVersion: v1
kind: Pod
metadata:
  name: egress-router-pod
  annotations:
    k8s.v1.cni.cncf.io/networks: egress-router
spec:
  containers:
  - name: openshift-egress-router-pod
    command: ["/bin/bash", "-c", "sleep 999999999"]
    image: centos/tools
    securityContext:
      privileged: true
3) Checked IPtables:
[root@worker-1 core]# iptables-save -t nat
# Generated by iptables-save v1.8.4 on Mon Feb  1 12:08:05 2021
*nat
:PREROUTING ACCEPT [0:0]
:INPUT ACCEPT [0:0]
:POSTROUTING ACCEPT [0:0]
:OUTPUT ACCEPT [0:0]
-A POSTROUTING -o net1 -j SNAT --to-source 10.200.16.10
COMMIT
# Completed on Mon Feb  1 12:08:05 2021
As we can see, only the SNAT rule is added. The DNAT doesn't get picked up because of the syntax difference.
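To make the gap concrete, here is a hedged Go sketch of the parsing and DNAT rule generation the CNI plugin would need for the destination format shown in the NAD above; the function name and exact rule layout are assumptions:

```go
package main

import (
	"fmt"
	"strings"
)

// dnatRule turns one destination entry of the form
// "<localPort> <proto> <destIP> [<destPort>]" into iptables arguments for a
// DNAT rule in the nat table. Sketch only; the real CNI plugin would wire this
// into its existing iptables handling.
func dnatRule(dest string) ([]string, error) {
	fields := strings.Fields(dest)
	if len(fields) < 3 || len(fields) > 4 {
		return nil, fmt.Errorf("invalid destination %q", dest)
	}
	localPort, proto, destIP := fields[0], fields[1], fields[2]
	target := destIP
	if len(fields) == 4 {
		target = destIP + ":" + fields[3]
	}
	return []string{
		"-t", "nat", "-A", "PREROUTING",
		"-p", proto, "--dport", localPort,
		"-j", "DNAT", "--to-destination", target,
	}, nil
}

func main() {
	for _, d := range []string{
		"80 tcp 10.100.3.200",
		"8080 tcp 203.0.113.26 80",
		"8443 tcp 203.0.113.26 443",
	} {
		args, err := dnatRule(d)
		if err != nil {
			fmt.Println(err)
			continue
		}
		fmt.Println("iptables", strings.Join(args, " "))
	}
}
```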
Create a PR in openshift/cluster-ingress-operator to specify the random balancing algorithm if the feature gate is enabled, and to specify the leastconn balancing algorithm (the current default) otherwise.
Plugin teams need a mechanism to extend the OCP Console that is decoupled enough so they can deliver at the cadence of their projects and not be forced into the OCP Console release timelines.
The OCP Console Dynamic Plugin Framework will enable all our plugin teams to do the following:
Requirement | Notes | isMvp? |
---|---|---|
UI to enable and disable plugins | | YES |
Dynamic Plugin Framework in place | | YES |
Testing Infra up and running | | YES |
Docs and read me for creating and testing Plugins | | YES |
CI - MUST be running successfully with test automation | This is a requirement for ALL features. | YES |
Release Technical Enablement | Provide necessary release enablement details and documents. | YES |
Documentation Considerations
Questions to be addressed:
Related to CONSOLE-2380
We need a way for cluster admins to disable a console plugin when uninstalling an operator if it's enabled in the console operator config. Otherwise, the config will reference a plugin that no longer exists. This won't prevent console from loading, but it's something that we can clean up during uninstall.
The UI will always remove the console plugin when an operator is uninstalled. There will not be an option to keep the plugin enabled. We should have a sentence in the dialog letting the user know that the plugin will be disabled when the operator is uninstalled (but only if the CSV has the plugin annotation).
If the user doesn't have authority to patch the operator config, we should warn them that the operator config can't be updated to remove the plugin.
Requirement | Notes | isMvp? |
---|---|---|
CI - MUST be running successfully with test automation | This is a requirement for ALL features. | YES |
Release Technical Enablement | Provide necessary release enablement details and documents. | YES |
This Section: What does the person writing code, testing, documenting need to know? What context can be provided to frame this feature.
Questions to be addressed:
This would let us import YAML with multiple resources and add YAML templates that create related resources like image streams and build configs together.
See CONSOLE-580
Acceptance criteria:
Story:
As a user viewing the pod logs tab with a selected container, I want the ability to view past logs if they are available for the container.
Acceptance Criteria:
Design doc: https://docs.google.com/document/d/1PB8_D5LTWhFPFp3Ovf85jJTc-zAxwgFR-sAOcjQCSBQ/edit#
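For reference, the underlying Kubernetes log API exposes this via the `previous` option; a client-go sketch of the equivalent request (the console itself calls the REST endpoint directly, and the names here are hypothetical):

```go
package logs

import (
	"context"
	"io"
	"os"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/client-go/kubernetes"
)

// printPreviousLogs streams the logs of the previously terminated instance of
// the given container, if the kubelet still has them. Sketch only.
func printPreviousLogs(ctx context.Context, cs kubernetes.Interface, ns, pod, container string) error {
	req := cs.CoreV1().Pods(ns).GetLogs(pod, &corev1.PodLogOptions{
		Container: container,
		Previous:  true, // past logs for the selected container
	})
	rc, err := req.Stream(ctx)
	if err != nil {
		return err
	}
	defer rc.Close()
	_, err = io.Copy(os.Stdout, rc)
	return err
}
```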
When moving to OCP 4 we didn't port the metrics charts for Deployments, Deployment Configs, StatefulSets, DaemonSets, ReplicaSets, and ReplicationControllers. These should be the same charts that we show on the Pods page: Memory, CPU, Filesystem, Network In and Out.
This was only done for pods.
We need to decide if we want to use a multi-line chart or some other representation.
The work on this story is dependent on the following changes:
The console already supports custom routes on its operator config. The newly proposed CustomDomains API introduces a unified way to set custom domains for stock-installed routes, covering both the hostnames and the serving cert/keys that customers want to customize. From the console's perspective, those routes are:
The setup should be done on the Ingress config, where two new fields are introduced:
The console-operator will only be consuming the API and checking for any changes. If a custom domain is set for either the `console` or `downloads` route in the `openshift-console` namespace, the console-operator will read the setup and set a custom route accordingly. When a custom route is set up for any of the console's routes, the default route won't be deleted; instead it will be updated so it redirects to the custom one. This is done for two reasons:
The console-operator will still need to support the CustomDomain API that is available on its config.
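A minimal sketch of the consumption side, assuming the componentRoutes shape proposed for the Ingress config in openshift/api (the helper name is hypothetical):

```go
package console

import configv1 "github.com/openshift/api/config/v1"

// customConsoleRoutes picks out the custom-domain entries relevant to the
// console from the cluster Ingress config. For each match the operator would
// create a custom route from the entry's hostname (and serving cert/key
// secret, if set) and update the default route to redirect to it.
func customConsoleRoutes(ingress *configv1.Ingress) []configv1.ComponentRouteSpec {
	var routes []configv1.ComponentRouteSpec
	for _, cr := range ingress.Spec.ComponentRoutes {
		if cr.Namespace != "openshift-console" {
			continue
		}
		if cr.Name == "console" || cr.Name == "downloads" {
			routes = append(routes, cr)
		}
	}
	return routes
}
```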
Acceptance criteria:
Questions:
Bump the openshift/api godep to pick up the new CustomDomain API for the Ingress config.
Implement console-operator changes to consume new CustomDomains API, based on the story details.
Feature Overview
This will be phase 1 of Internationalization of the OpenShift Console.
Phase 1 will include the following:
Phase 1 will not include:
Initial List of Languages to Support
---------- 4.7* ----------
*This will be based on the ability to get all the strings externalized, there is a good chance this gets pushed to 4.8.
---------- Post 4.7 ----------
POC
Goals
Internationalization has become table stakes. OpenShift Console needs to support different languages in each of the major markets. This is key functionality that will help unlock sales in different regions.
Requirements
Requirement | Notes | isMvp? |
---|---|---|
Language Selector | | YES |
Localized Date + Time | | YES |
Externalization and translation of all client side strings | | YES |
Translation for Chinese and Japanese | | YES |
Process, infra, and testing capabilities put into place | | YES |
CI - MUST be running successfully with test automation | This is a requirement for ALL features. | YES |
Out of Scope
Assumptions
Customer Considerations
We are rolling this feature out in phases; based on customer feedback, there may be no phase 2.
Documentation Considerations
I believe documentation already supports a large language set.
We have too many namespaces if we're loading them upfront. We should consolidate some of the files.
Consolidate namespaces S-Z to reduce change size
Just do namespaces from A-D to reduce the number of files being changed at once
Consolidate namespaces K-M to reduce change size
Consolidate namespaces N-R to reduce change size
Consolidate namespaces E-I to reduce change size
We need to automate how we send and receive updated translations using Memsource for the Red Hat Globalization team. The Ansible Tower team already has automation in place that we might be able to reuse.
Acceptance Criteria:
Openshift Sandboxed Containers provide the ability to add an additional layer of isolation through virtualization for many workloads. The main way to enable the use of katacontainers on an Openshift Cluster is by first installing the Operator (for more information about operator enablement check [1]).
Once the feature is enabled on the cluster, it is just a matter of a one-line YAML modification at the pod/deployment level to run a workload using Kata Containers. That might sound easy for some, but others who don't care about YAML might want more abstraction over how to use Kata Containers for their workloads.
This feature covers all the efforts required to integrate and present Kata in Openshift UI (console) to cater to all user personas.
To enable users to adopt Kata as a runtime, it is important to make it easy to use. Adding hook points in the UI with ease of use as a goal in mind is one way to bring in more users.
The main goal of this feature is to make sure that:
Questions to be addressed:
References
[1] https://issues.redhat.com/browse/KATA-429?jql=project %3D KATA AND issuetype %3D Feature
The grand goal is to improve the usability of Kata from Openshift UI. This EPIC aims to cover only a subset that would help:
To use a different runtime, e.g. Kata, the "runtimeClassName" will be set to the desired low-level runtime. Also please see [1]:
"RuntimeClassName refers to a RuntimeClass object in the node.k8s.io group, which should be used to run this pod. If no RuntimeClass resource matches the named class, the pod will not be run. If unset or empty, the "legacy" RuntimeClass will be used, which is an implicit class with an empty definition that uses the default runtime handler. More info: https://git.k8s.io/enhancements/keps/sig-node/runtime-class.md This is a beta feature as of Kubernetes v1.14.."
apiVersion: v1
kind: Pod
metadata:
  name: nginx-runc
spec:
  runtimeClassName: runC
The value of the runtime class cannot be changed on the pod level, but it can be changed on the deployment level
apiVersion: apps/v1
kind: Deployment
metadata:
  name: sandboxed-nginx
spec:
  replicas: 2
  selector:
    matchLabels:
      app: sandboxed-nginx
  template:
    metadata:
      labels:
        app: sandboxed-nginx
    spec:
      runtimeClassName: kata   # ---> This can be changed
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80
          protocol: TCP
[1] https://docs.openshift.com/container-platform/4.6/rest_api/workloads_apis/pod-core-v1.html
We should show the runtime class on workloads pages and add a badge to the heading in the case a workload uses Kata. A workload uses Kata if its pod template has `runtimeClassName` set to `kata`.
Acceptance Criteria:
Andrew Ronaldson indicated that adding a "kata" badge in the heading would be too much noise around other heading badges (ContainerCreating, Failed, etc).
The OCP Console needs to detect if the ACM Operator has been installed, if detected then a new multi-cluster perspective option shows up in the perspective chooser.
As a user, I need the ability to switch to the ACM UI from the OCP Console and vice versa without being required to log in multiple times.
This option also needs to be hidden if the user doesn't have the correct RBAC.
The console should detect the presence of the ACM operator and add an Advanced Cluster Management item to the perspective switcher. We will need to work with the ACM team to understand how to detect the operator and how to discover the ACM URL.
Additionally, we will need to provide a query parameter or URL fragment to indicate which perspective to use. This will allow ACM to link back to a specific perspective, since it will share the same perspective switcher in its UI. ACM will need to be able to discover the console URL.
This story does not include handling SSO, which will be tracked in a separate story.
We need to determine what RBAC checks to make before showing the ACM link.
Acceptance Criteria
1. Console shows a link to ACM in its perspective switcher
2. Console provides a way for ACM to link back to a specific perspective
3. The ACM option only appears when the ACM operator is installed
4. ACM should open in the same browser tab to give the appearance of it being one application
5. Only users with appropriate RBAC should see the link (access review TBD)
This section includes Jira cards that are linked to an Epic, but the Epic itself is not linked to any Feature. These epics were completed when this image was assembled
An epic we can duplicate for each release to ensure we have a place to catch things we ought to be doing regularly but can tend to fall by the wayside.
Node is currently 10.x. Let's increase that to at least 14.x.
It will require some changes on the ART side as well as OSBS builds.
This is required to bump node to avoid https://github.com/webpack/webpack/issues/4629. We need to evaluate whether this has a domino effect on our webpack dependencies.
See https://github.com/openshift/console/pull/7306#issuecomment-755509361
Console operator should swap from using monis.app to openshift/operator-boilerplate-legacy. This will allow switching to klog/v2, which the shared libs (api, client-go, library-go) have already done.
This epic is mainly focused on tracking the dev console QE automation activities for the 4.8 release
1. Identify the scenarios for automation
2. Segregate the test cases into smoke, regression, and user stories
3. Design the gherkin scripts with the below priority
- Update the Smoke test suite
- Update the Regression test suite
4. Create the automation scripts using cypress
5. Implement CI
This improves the quality of the product
This is not related to any UI features. It is mainly focused on UI automation
This story is mainly about pushing the pipelines code from the dev console to the gitops plugin folder for extensibility purposes
As an operator QE, I should be able to execute them in my operator folder
1. All pipelines scripts should be able to execute in the gitops plugin folder
2. gitops operator installation needs to be done by the script
Consolidate the cypress-cucumber and cypress frameworks related to plugins/index.js files
Currently the PR looks too large. To reduce the size, we are creating these sub-tasks.
Updating the README documentation for the knative plugin folder
This story is mainly about pushing the pipelines code from the dev console to the pipelines plugin folder for extensibility purposes
Verify the pipelines regression test suite
As an operator QE, I should be able to execute them in my operator folder
1. All pipelines scripts should be able to execute in the pipelines plugin folder
2. Pipelines operator installation needs to be done by the script
CI implementation for pipelines, knative, devconsole
update package.json file
CI for pipelines:
Any update related to pipelines should execute pipelines smoke tests
on nightly builds, pipelines regression should be executed [TBD]
CI for devconsole:
Any update related to devconsole should execute devconsole smoke tests
on nightly builds, devconsole regression should be executed [TBD]
CI for knative:
Any update related to knative should execute knative smoke tests
on nightly builds, knative regression should be executed [TBD]
Fixing the feature file lint issues that occur on executing `yarn run test-cypress-devconsole-headless`, and moving the topology features to the topology folder
Setup the CI for all plugins smoke test scripts
References for CI implementation
Update all automation scripts and verify their execution on a remote cluster
As a user,
I would like to include integration tests for the topology folder.
Execute them on the Chrome browser against a 4.8 release cluster.
Design the cypress scripts for the epic ODC-3991
Refer to the Gherkin scripts at https://issues.redhat.com/browse/ODC-5430
As a user,
All possible test scenarios related to EPIC ODC-3991 should be automated
Pipelines operator needs to be installed
Fixing all gherkin linter errors
Create GitHub templates with certain criteria to meet the Gherkin script and automation script standards
Adding the OWNERS file to service mesh helps us add automatic reviewers on these gherkin script updates
As the .gherkin-lintrc is mainly used by the QE team, it's not necessary for it to be in the frontend folder, so I am moving it to the dev-console/integration-tests folder
Adding all necessary tags and modifying the rules below due to recently observed scenarios
This helps to automatically notify the web terminal team members on test scenario changes
This section includes Jira cards that are linked to an Epic, but the Epic itself is not linked to any Feature. These epics were not completed when this image was assembled
Please read: migrating-protractor-tests-to-cypress
Protractor test to migrate: `frontend/integration-tests/tests/storage.scenario.ts`
Loops through 6 storage kinds:
15) Add storage is applicable for all workloads
16) replicationcontrollers
✔ create a replicationcontrollers resource
✔ add storage to replicationcontrollers
17) daemonsets
✔ create a daemonsets resource
✔ add storage to daemonsets
18) deployments
✔ create a deployments resource
✔ add storage to deployments
19) replicasets
✔ create a replicasets resource
✔ add storage to replicasets
20) statefulsets
✔ create a statefulsets resource
✔ add storage to statefulsets
21) deploymentconfigs
✔ create a deploymentconfigs resource
✔ add storage to deploymentconfigs
Acceptance Criteria
Please read: migrating-protractor-tests-to-cypress
Protractor test to migrate: `frontend/integration-tests/tests/filter.scenario.ts`
4) Filtering
✔ filters Pod from object detail
✔ filters invalid Pod from object detail
✔ filters from Pods list
⚠ CONSOLE-1503 - searches for object by label
✔ searches for pod by label and filtering by name
✔ searches for object by label using by other kind of workload
Acceptance Criteria
This section includes Jira cards that are not linked to either an Epic or a Feature. These tickets were completed when this image was assembled
1) We want to fix the order of imports in the files.
2) We want to have vendor imports first, followed by console/package imports, with relative imports last.
This can be done manually, or by introducing some linter rules for it.
In the topology view, if you select any grouping (Application, Helm Release, Operator Backed service, etc), an extraneous blue box is displayed
This is a regression.
Create an application in any way ... but this will do ...
This animated gif shows the issue:
The blue box shouldn't be shown
Always
Seen on 4/26/2021 4.8 daily, but this behavior was discussed in slack last week
This is a regression
This task adds support for setting the socket options SO_REUSEADDR and SO_REUSEPORT on etcd listeners via ListenConfig. These options give flexibility to cluster admins who wish to have more explicit control of these features. What we have found is that during an etcd process restart there can be a considerable wait for the port to be released, as it is held open by TIME_WAIT, which on many systems is 60s.
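For reference, a minimal Go sketch of setting those options on a listener via net.ListenConfig; etcd's actual ListenConfig wiring differs, and the address here is illustrative:

```go
package main

import (
	"context"
	"log"
	"net"
	"syscall"

	"golang.org/x/sys/unix"
)

func main() {
	lc := net.ListenConfig{
		// Control runs on the raw socket before bind, which is where
		// SO_REUSEADDR and SO_REUSEPORT must be set.
		Control: func(network, address string, c syscall.RawConn) error {
			var sockErr error
			err := c.Control(func(fd uintptr) {
				sockErr = unix.SetsockoptInt(int(fd), unix.SOL_SOCKET, unix.SO_REUSEADDR, 1)
				if sockErr != nil {
					return
				}
				sockErr = unix.SetsockoptInt(int(fd), unix.SOL_SOCKET, unix.SO_REUSEPORT, 1)
			})
			if err != nil {
				return err
			}
			return sockErr
		},
	}
	// 2380 is the usual etcd peer port; the address here is illustrative.
	ln, err := lc.Listen(context.Background(), "tcp", ":2380")
	if err != nil {
		log.Fatal(err)
	}
	defer ln.Close()
	log.Printf("listening on %s with SO_REUSEADDR/SO_REUSEPORT", ln.Addr())
}
```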
Pull in the latest openshift/library content into the samples operator
If image eco e2e's fail, work with upstream SCL to address
List of EOL images needs to be sent to the Docs team and added to the release notes.
P-01-TC03 | On second run, the script worked fine |
P-01-TC06 | Created separate functions for the Dockerfile page |
P-01-TC09 | Removing this test case by updating the P-04-TC04 test scenario; updating the pipelines section title in the side bar |
P-02-TC02 | Script fix required - unable to identify locators |
P-02-TC03 | Script fix required - unable to identify locators |
P-02-TC06 | Script fix required - unable to identify locators |
The Create Namespace script keeps failing due to a load issue
Unable to execute the create namespace script
Create Namespace script should work without any issue
Some of the steps in test scenario [A-06-TC02] - script fix required
A-06-TC05 - script fix required
A-06-TC11 update required as per the latest UI
Update the kafka test scenarios in eventing-kafka-event-source.feature file
While Regression Test execution, updated the test scenarios
Migrate the existing tests, which are located here:
Helper functions/Views location:
P-09-TC01, P-09-TC04, P-09-TC05, P-09-TC06, P-09-TC07, P-09-TC11 test scripts update required
Page objects updated for pipelines
P-06-TC01 | Text change is required |
P-06-TC04 | Text change is required |
P-06-TC13 | Text change is required |
P-03-TC03 also gets fixed with this bug
discover-etcd-initial-cluster was written very early on in the cluster-etcd-operator lifecycle. We have observed at least one bug in this code, and in order to validate logical correctness it needs to be rewritten with unit tests.
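As an illustration of the kind of coverage intended, a table-driven test against a hypothetical extracted helper (the real rewrite would test the actual member-matching logic, not this stand-in):

```go
package discover

import "testing"

// buildInitialCluster is a stand-in for the kind of pure helper the rewrite
// would extract: it formats member name/peer-URL pairs into the string passed
// to etcd via --initial-cluster. Hypothetical, for illustration only.
func buildInitialCluster(members map[string]string) string {
	out := ""
	for name, peerURL := range members {
		if out != "" {
			out += ","
		}
		out += name + "=" + peerURL
	}
	return out
}

func TestBuildInitialCluster(t *testing.T) {
	tests := []struct {
		name    string
		members map[string]string
		want    string
	}{
		{name: "empty", members: map[string]string{}, want: ""},
		{name: "single", members: map[string]string{"etcd-0": "https://10.0.0.1:2380"}, want: "etcd-0=https://10.0.0.1:2380"},
	}
	for _, tc := range tests {
		t.Run(tc.name, func(t *testing.T) {
			if got := buildInitialCluster(tc.members); got != tc.want {
				t.Errorf("got %q, want %q", got, tc.want)
			}
		})
	}
}
```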
This section includes Jira cards that are not linked to either an Epic or a Feature. These tickets were not completed when this image was assembled
The console-operator codebase contains a lot of inline manifests. Instead, we should put those manifests into a `/bindata` folder, from which they will be read and then updated per purpose.
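A rough sketch of what the `/bindata` approach could look like, using go:embed together with library-go's resourceread helpers; the package layout and file names are assumptions:

```go
package assets

import (
	"embed"

	"github.com/openshift/library-go/pkg/operator/resource/resourceread"
	appsv1 "k8s.io/api/apps/v1"
)

//go:embed bindata/*.yaml
var bindata embed.FS

// ConsoleDeployment reads the console deployment manifest from /bindata and
// decodes it; the operator would then patch fields (image, replicas, env)
// per purpose before applying it. The file name is hypothetical.
func ConsoleDeployment() (*appsv1.Deployment, error) {
	raw, err := bindata.ReadFile("bindata/console-deployment.yaml")
	if err != nil {
		return nil, err
	}
	return resourceread.ReadDeploymentV1OrDie(raw), nil
}
```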