Back to index

4.8.44

Jump to: Complete Features | Incomplete Features | Complete Epics | Incomplete Epics | Other Complete |

Changes from 4.7.60

Note: this page shows the Feature-Based Change Log for a release

Complete Features

These features were completed when this image was assembled

Epic Goal

  • Complete the implementation for AWS STS, including support and documentation.

Why is this important?

  • Many customers want to follow best security practices for handling credentials.
  • This is the approach recommended by AWS.
  • Customer interest: EMEA, AMER

Acceptance Criteria

  • CI - MUST be running successfully with tests automated
  • Release Technical Enablement - Provide necessary release enablement details and documents.

Dependencies (internal and external)

Open questions:

  1. Will this cover existing OCP deployments or only new OCP deployments?
  2. Is there a migration path for existing customers to start using AWS STS?
  3. Are there considerations that apply to Operators so they can work with limited privilege credentials?

Done Checklist

  • CI - CI is running, tests are automated and merged.
  • Release Enablement <link to Feature Enablement Presentation>
  • DEV - Upstream code and tests merged: <link to meaningful PR or GitHub Issue>
  • DEV - Upstream documentation merged: <link to meaningful PR or GitHub Issue>
  • DEV - Downstream build attached to advisory: <link to errata>
  • QE - Test plans in Polarion: <link or reference to Polarion>
  • QE - Automated tests merged: <link or reference to automated tests>
  • DOC - Downstream documentation merged: <link to meaningful PR>

The tool should be able to upload an OpenID Connect (OIDC) configuration to an S3 bucket and create an AWS IAM Identity Provider that trusts identities from the OIDC provider. It should take the infra name as input so that the user can identify all the resources created in AWS. Resources created in AWS must be tagged appropriately.

Sample command with existing key pair:

tool-name create identity-provider <infra-name> --public-key ./path/to/public/key

 

Ensure the Identity Provider includes audience config for both the in-cluster components ('openshift') and the pod-identity-webhook ('sts.amazonaws.com').
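For context, the cluster-side result of this flow is that the service account issuer points at the uploaded OIDC configuration. A minimal sketch of that piece, assuming a hypothetical bucket URL (the exact bucket name and path layout are up to the tool):

apiVersion: config.openshift.io/v1
kind: Authentication
metadata:
  name: cluster
spec:
  # Issuer URL served from the S3 bucket holding the uploaded OIDC configuration
  # (bucket name and region below are illustrative)
  serviceAccountIssuer: https://my-infra-oidc.s3.us-east-1.amazonaws.com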

Epic Goal

  • Support running console in single-node OpenShift configurations for production use in edge computing use cases.
  • Support disabling the console entirely in some of these configurations to reduce overhead in constrained environments.

Why is this important?

  • Some bare metal edge customers, especially in the telco market, want to use kubernetes at physically remote sites with minimal hardware.

Scenarios

  1. As a user, I want to deploy a fully supported instance of OpenShift on a single node.

Acceptance Criteria

  • CI - MUST be running successfully with tests automated
  • Release Technical Enablement - Provide necessary release enablement details and documents.
  • console can be deployed with a single replica

Dependencies (internal and external)

  1. CORS-1589

Previous Work (Optional):

  1. https://github.com/openshift/enhancements/pull/504
  2. https://github.com/openshift/enhancements/pull/560

Open questions:

  1. Should the console configuration API have a separate option for this setting, or should it use the API created from CORS-1589?

Done Checklist

  • CI - CI is running, tests are automated and merged.
  • Release Enablement <link to Feature Enablement Presentation>
  • DEV - Upstream code and tests merged: <link to meaningful PR or GitHub Issue>
  • DEV - Upstream documentation merged: <link to meaningful PR or GitHub Issue>
  • DEV - Downstream build attached to advisory: <link to errata>
  • QE - Test plans in Polarion: <link or reference to Polarion>
  • QE - Automated tests merged: <link or reference to automated tests>
  • DOC - Downstream documentation merged: <link to meaningful PR>

https://github.com/openshift/enhancements/pull/555
https://github.com/openshift/api/pull/827

The console operator will need to support single-node clusters.

We have a console deployment and a downloads deployment. Each will need to be updated so that there's only a single replica when high availability mode is disabled in the Infrastructure config. We should also remove the anti-affinity rule in the console deployment that tries to spread console pods across nodes.

The downloads deployment is currently a static manifest. Going forward, it likely needs to be created by the console operator instead.
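For illustration, the single-replica behaviour would key off the infrastructure topology reported in the Infrastructure config (see openshift/api#827); the fields below show that API, and the comment is only a sketch of the expected reconciliation:

apiVersion: config.openshift.io/v1
kind: Infrastructure
metadata:
  name: cluster
status:
  controlPlaneTopology: SingleReplica
  # When infrastructureTopology is SingleReplica, the console and downloads
  # deployments would be reconciled to replicas: 1 with no anti-affinity rule.
  infrastructureTopology: SingleReplica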

Acceptance Criteria:

  • Console operator deploys console with 1 replica and no anti-affinity rules when not in high availability mode
  • Console operator deploys the downloads deployment with 1 replica when not in high availability mode
  • The console and downloads deployments do not change when in high availability mode
  • The feature is well-covered by tests

Epic Goal

  • Support running the image registry services in single-node OpenShift configurations for production use in edge computing use cases.

Why is this important?

  • Some bare metal edge customers, especially in the telco market, want to use kubernetes at physically remote sites with minimal hardware.

Scenarios

  1. As a user, I want to deploy a fully supported instance of OpenShift on a single node.

Acceptance Criteria

  • CI - MUST be running successfully with tests automated
  • Release Technical Enablement - Provide necessary release enablement details and documents.

Dependencies (internal and external)

  1. ...

Previous Work (Optional):

  1. https://github.com/openshift/enhancements/pull/504
  2. https://github.com/openshift/enhancements/pull/560
  3. OCP Single Node Production Edge Profile
  4. We're pretty sure we need the node-ca deployed since our first few customers are using disconnected environments.
  5. It's not clear if we need the image-registry and image-pruner.

Open questions:

  1. ...

Done Checklist

  • CI - CI is running, tests are automated and merged.
  • Release Enablement <link to Feature Enablement Presentation>
  • DEV - Upstream code and tests merged: <link to meaningful PR or GitHub Issue>
  • DEV - Upstream documentation merged: <link to meaningful PR or GitHub Issue>
  • DEV - Downstream build attached to advisory: <link to errata>
  • QE - Test plans in Polarion: <link or reference to Polarion>
  • QE - Automated tests merged: <link or reference to automated tests>
  • DOC - Downstream documentation merged: <link to meaningful PR>

As an OpenShift administrator
I want the registry operator to use the topology mode from the Infrastructure config (HighAvailable = 2 replicas, SingleReplica = 1 replica)
so that the operator is not spending resources on high availability when it is not needed.

See also:

https://github.com/openshift/enhancements/blob/master/enhancements/cluster-high-availability-mode-api.md
https://github.com/openshift/api/pull/827/files

Number of replicas on different platforms

Platform             SingleReplica   HighAvailable
AWS                  1 replica       2 replicas
Azure                1 replica       2 replicas
GCP                  1 replica       2 replicas
OpenStack (swift)    1 replica       2 replicas
OpenStack (cinder)   1 replica       1 replica (PVC)
oVirt                1 replica       1 replica (PVC)
bare metal           Removed         Removed
vSphere              Removed         Removed
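For illustration, on a SingleReplica AWS cluster the outcome would be roughly equivalent to the registry operator config below; the values follow the table above, and the manifest itself is only a sketch:

apiVersion: imageregistry.operator.openshift.io/v1
kind: Config
metadata:
  name: cluster
spec:
  managementState: Managed   # "Removed" corresponds to the bare metal and vSphere rows above
  replicas: 1                # 2 under HighAvailable on AWS, Azure, GCP, and OpenStack (swift)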

 

Feature Overview

Openshift Sandboxed Containers provide the ability to add an additional layer of isolation through virtualization for many workloads. The main way to enable the use of Kata Containers on an Openshift cluster is by first installing the Operator (for more information about operator enablement check [1]).

Once the feature is enabled on the cluster, it is just a matter of a one-line YAML modification at the pod/deployment level to run a workload using Kata Containers. That might sound easy for some, but users who do not want to deal with YAML will want more abstraction around how to use Kata Containers for their workloads.

This feature covers all the efforts required to integrate and present Kata in Openshift UI (console) to cater to all user personas.

Background, and strategic fit

To enable users to adopt Kata as a runtime, it is important to make it easy to use. Adding hook points in the UI with ease of use in mind is one way to bring in more users.

Goal(s)

The main goal of this feature is to make sure that:

  1. It is easy for users to find out how to use/enable Openshift Sandboxed Containers on their clusters (e.g., Getting started guide in the UI).
  2. Cluster-admins are able to differentiate between normal pods and Kata pods.
  3. Developers (application, CNF, ...) have an easy way to create Kata Containers-based workloads (without peeking at YAML).
  4. Application end users are able to collectively activate Kata on their app packages/content (e.g., Helm, odo, ...).

Documentation Considerations

Questions to be addressed:

  • What educational or reference material (docs) is required to support this product feature? For users/admins? Other functions (security officers, etc)?
  • Does this feature have doc impact?
  • New Content, Updates to existing content, Release Note, or No Doc Impact
  • If unsure and no Technical Writer is available, please contact Content Strategy.
  • What concepts do customers need to understand to be successful in [action]?
  • How do we expect customers will use the feature? For what purpose(s)?
  • What reference material might a customer want/need to complete [action]?
  • Is there source material that can be used as a reference for the Technical Writer in writing the content? If yes, please link if available.
  • What is the doc impact (New Content, Updates to existing content, or Release Note)?

References

[1] https://issues.redhat.com/browse/KATA-429?jql=project %3D KATA AND issuetype %3D Feature

Goal

The overall goal is to improve the usability of Kata from the Openshift UI. This epic aims to cover only a subset that would help:

  • Make it easy to differentiate between native cluster runtime (e.g., runC) and kata.
  • Enable Kata as a runtime without modifying YAMLs.

To use a different runtime, e.g., Kata, the "runtimeClassName" will be set to the desired low-level runtime. Also see [1]:

"RuntimeClassName refers to a RuntimeClass object in the node.k8s.io group, which should be used to run this pod. If no RuntimeClass resource matches the named class, the pod will not be run. If unset or empty, the "legacy" RuntimeClass will be used, which is an implicit class with an empty definition that uses the default runtime handler. More info: https://git.k8s.io/enhancements/keps/sig-node/runtime-class.md This is a beta feature as of Kubernetes v1.14.." 

 

Pod-Runtimeclass.yaml
apiVersion: v1 
kind: Pod 
metadata:
  name: nginx-runc 
spec:     
  runtimeClassName: runC 
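For reference, the RuntimeClass object that `runtimeClassName` points to looks roughly like this; in Openshift Sandboxed Containers it is normally created by the operator, so it is shown here only for illustration:

RuntimeClass.yaml
apiVersion: node.k8s.io/v1
kind: RuntimeClass
metadata:
  name: kata
handler: kata   # maps to the kata runtime handler configured in the container runtime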

 

The value of the runtime class cannot be changed on the pod level, but it can be changed on the deployment level 

 

Deployment-Runtimeclass.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: sandboxed-nginx
spec:
  replicas: 2
  selector:
    matchLabels:
      app: sandboxed-nginx
  template:
    metadata:
      labels:
        app: sandboxed-nginx
    spec:
      runtimeClassName: kata # ---> This can be changed
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80
          protocol: TCP
          

User-stories

  • As a cluster-admin, I would like to be able to differentiate between a normal pod and a Kata Containers pod from the UI.
  • As a developer, I would like to create Kata Containers-based pods without dealing with YAML, i.e., from the UI.
  • As a developer, I would like to switch my deployments to use Kata instead of runC (native).

Requirements

  • Kata runtime MUST be viewable when checking running workloads.
  • A checkbox or a similar method to create Kata Containers from the UI MUST be provided.
  • The above two requirements MUST be tested.

 

 

References

 [1] https://docs.openshift.com/container-platform/4.6/rest_api/workloads_apis/pod-core-v1.html 
 

We should show the runtime class on workloads pages and add a badge to the heading in the case a workload uses Kata. A workload uses Kata if its pod template has `runtimeClassName` set to `kata`.

Acceptance Criteria:

  • Kata runtime must be viewable when checking running workloads, including Pods, ReplicaSets, ReplicationControllers, StatefulSets, Deployments, and DeploymentConfigs.
  • Automated test must be written to verify coverage

 

Andrew Ronaldson indicated that adding a "kata" badge in the heading would be too much noise around other heading badges (ContainerCreating, Failed, etc).

 

Incomplete Features

When this image was assembled, these features were not yet completed. Therefore, only the Jira Cards included here are part of this release

Feature Overview

We drive OpenShift cross-market customer success and new customer adoption with constant improvements and feature additions to the existing capabilities of our OpenShift Core Networking (SDN and Network Edge). This feature captures that natural progression of the product.

Goals

  • Feature enhancements (performance, scale, configuration, UX, ...)
  • Modernization (incorporation and productization of new technologies)

Requirements

  • Core Networking Stability
  • Core Networking Performance and Scale
  • Core Networking Extensibility (Multus CNIs)
  • Core Networking UX (Observability)
  • Core Networking Security and Compliance

In Scope

  • Network Edge (ingress, DNS, LB)
  • SDN (CNI plugins, openshift-sdn, OVN, network policy, egressIP, egress Router, ...)
  • Networking Observability

Out of Scope

There are definitely grey areas, but in general:

  • CNV
  • Service Mesh
  • CNF

Documentation Considerations

Questions to be addressed:

  • What educational or reference material (docs) is required to support this product feature? For users/admins? Other functions (security officers, etc)?
  • Does this feature have doc impact?
  • New Content, Updates to existing content, Release Note, or No Doc Impact
  • If unsure and no Technical Writer is available, please contact Content Strategy.
  • What concepts do customers need to understand to be successful in [action]?
  • How do we expect customers will use the feature? For what purpose(s)?
  • What reference material might a customer want/need to complete [action]?
  • Is there source material that can be used as reference for the Technical Writer in writing the content? If yes, please link if available.
  • What is the doc impact (New Content, Updates to existing content, or Release Note)?
The details of this Jira Card are restricted (Red Hat Employee and Contractors only)

This story is for actually updating the version of CoreDNS in github.com/openshift/coredns. Our fork will need to be rebased onto https://github.com/coredns/coredns/releases/tag/v1.8.1, which may involve some git fu. Refer to previous CoreDNS rebase PRs for any pointers there.

We need to verify that no new CoreDNS dual stack features require any configuration changes or feature flags.
(All dual stack changes should just work once we rebase to coredns v1.8.1).

See https://github.com/coredns/coredns/pull/4339 .

We also need to verify that cluster DNS works for both v4 and v6 for a dual-stack cluster IP service (i.e., request via A and AAAA records and make sure you get the desired response, not just one or the other). A brief CI test on our dual-stack metal CI might make the most sense here (KNI might have a job like this already; we need to investigate our options to add dual-stack coverage to openshift/coredns).
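A minimal sketch of the kind of dual-stack ClusterIP service such a check could target (namespace, names, and port are illustrative); its cluster DNS name should then resolve to both an A and an AAAA record:

apiVersion: v1
kind: Service
metadata:
  name: dualstack-echo
  namespace: dualstack-test
spec:
  ipFamilyPolicy: PreferDualStack   # request both IPv4 and IPv6 cluster IPs
  ipFamilies:
    - IPv4
    - IPv6
  selector:
    app: echo
  ports:
    - port: 8080
      protocol: TCP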

The details of this Jira Card are restricted (Red Hat Employee and Contractors only)

Create a PR in openshift/cluster-ingress-operator to specify the random balancing algorithm if the feature gate is enabled, and to specify the leastconn balancing algorithm (the current default) otherwise.
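One plausible shape of the resulting router deployment, assuming the algorithm keeps being driven through the router's ROUTER_LOAD_BALANCE_ALGORITHM environment variable (sketch only, not the actual operator change):

# excerpt of the router deployment rendered by the ingress operator (illustrative)
spec:
  template:
    spec:
      containers:
        - name: router
          env:
            - name: ROUTER_LOAD_BALANCE_ALGORITHM
              value: random        # leastconn when the feature gate is not enabled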

The details of this Jira Card are restricted (Red Hat Employee and Contractors only)

Multiple destinations provided as part of the allowedDestinations field are not working on OCP 4 as they used to: https://github.com/openshift/images/blob/master/egress/router/egress-router.sh#L70-L109

 

We need to parse these from the NAD and modify the iptables rules here to support them:

https://github.com/openshift/egress-router-cni/blob/master/pkg/macvlan/macvlan.go#L272-L349

 

Testing:

1) Created NAD:

[dsal@bkr-hv02 surya_multiple_destinations]$ cat nad_multiple_destination.yaml 
---
apiVersion: "k8s.cni.cncf.io/v1"
kind: NetworkAttachmentDefinition
metadata:
 name: egress-router
spec:
 config: '{
     "cniVersion": "0.4.0",
     "type": "egress-router",
     "name": "egress-router",
 "ip": {
     "addresses": [
         "10.200.16.10/24"
     ],
     "destinations": [
         "80 tcp 10.100.3.200",
         "8080 tcp 203.0.113.26 80",
         "8443 tcp 203.0.113.26 443"
     ],
     "gateway": "10.200.16.1"
  }
}'

2) Created pod:

[dsal@bkr-hv02  surya_multiple_destinations]$ cat egress-router-pod.yaml 
---
apiVersion: v1
kind: Pod
metadata:
  name: egress-router-pod
  annotations:
    k8s.v1.cni.cncf.io/networks: egress-router
spec:
  containers:
    - name: openshift-egress-router-pod
      command: ["/bin/bash", "-c", "sleep 999999999"]
      image: centos/tools
      securityContext:
        privileged: true

3) Checked IPtables:

[root@worker-1 core]# iptables-save -t nat 
Generated by iptables-save v1.8.4 on Mon Feb 1 12:08:05 2021
*nat
:PREROUTING ACCEPT [0:0]
:INPUT ACCEPT [0:0]
:POSTROUTING ACCEPT [0:0]
:OUTPUT ACCEPT [0:0]
-A POSTROUTING -o net1 -j SNAT --to-source 10.200.16.10
COMMIT # Completed on Mon Feb 1 12:08:05 2021

As we can see, only the SNAT rule is added. The DNAT rules don't get picked up because of the syntax difference.

Feature Overview

Plugin teams need a mechanism to extend the OCP console that is decoupled enough so they can deliver at the cadence of their projects and not be forced into the OCP Console release timelines.

The OCP Console Dynamic Plugin Framework will enable all our plugin teams to do the following:

  • Extend the Console
  • Deliver UI code with their Operator
  • Work in their own git Repo
  • Deliver at their own cadence

Goals

    • Operators can deliver console plugins separate from the console image and update plugins when the operator updates.
    • The dynamic plugin API is similar to the static plugin API to ease migration.
    • Plugins can use shared console components such as list and details page components.
    • Shared components from core will be part of a well-defined plugin API.
    • Plugins can use Patternfly 4 components.
    • Cluster admins control what plugins are enabled.
    • Misbehaving plugins should not break console.
    • Existing static plugins are not affected and will continue to work as expected.
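For context, enablement is expected to flow through the console operator config together with a ConsolePlugin resource shipped by the operator; the plugin name, namespace, and port below are hypothetical and the manifests are only a sketch:

apiVersion: operator.openshift.io/v1
kind: Console
metadata:
  name: cluster
spec:
  plugins:
    - my-operator-plugin            # cluster admin opts the plugin in here
---
apiVersion: console.openshift.io/v1alpha1
kind: ConsolePlugin
metadata:
  name: my-operator-plugin
spec:
  displayName: My Operator Plugin
  service:
    name: my-operator-plugin        # service serving the plugin's JavaScript bundle
    namespace: my-operator-ns
    port: 9443
    basePath: /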

Out of Scope

    • Initially we don't plan to make this a public API. The target use is for Red Hat operators. We might reevaluate later when dynamic plugins are more mature.
    • We can't avoid breaking changes in console dependencies such as Patternfly even if we don't break the console plugin API itself. We'll need a way for plugins to declare compatibility.
    • Plugins won't be sandboxed. They will have full JavaScript access to the DOM and network. Plugins won't be enabled by default, however. A cluster admin will need to enable the plugin.
    • This proposal does not cover allowing plugins to contribute backend console endpoints.

 

Requirements

 

Requirement | Notes | isMvp?
UI to enable and disable plugins | | YES
Dynamic Plugin Framework in place | | YES
Testing Infra up and running | | YES
Docs and read me for creating and testing Plugins | | YES
CI - MUST be running successfully with test automation | This is a requirement for ALL features. | YES
Release Technical Enablement | Provide necessary release enablement details and documents. | YES

 
 Documentation Considerations

Questions to be addressed:

  • What educational or reference material (docs) is required to support this product feature? For users/admins? Other functions (security officers, etc)?
  • Does this feature have doc impact?  
  • New Content, Updates to existing content,  Release Note, or No Doc Impact
  • If unsure and no Technical Writer is available, please contact Content Strategy.
  • What concepts do customers need to understand to be successful in [action]?
  • How do we expect customers will use the feature? For what purpose(s)?
  • What reference material might a customer want/need to complete [action]?
  • Is there source material that can be used as reference for the Technical Writer in writing the content? If yes, please link if available.
  • What is the doc impact (New Content, Updates to existing content, or Release Note)?
The details of this Jira Card are restricted (Red Hat Employee and Contractors only)

Related to CONSOLE-2380

We need a way for cluster admins to disable a console plugin when uninstalling an operator if it's enabled in the console operator config. Otherwise, the config will reference a plugin that no longer exists. This won't prevent console from loading, but it's something that we can clean up during uninstall.

The UI will always remove the console plugin when an operator is uninstalled. There will not be an option to keep the plugin enabled. We should have a sentence in the dialog letting the user know that the plugin will be disabled when the operator is uninstalled (but only if the CSV has the plugin annotation).

If the user doesn't have authority to patch the operator config, we should warn them that the operator config can't be updated to remove the plugin.

cc Peter Kreuser Tony Wu Robb Hamilton

Feature Overview

  • This Section: High-level description of the feature, i.e., an executive summary.
  • Note: A Feature is a capability or a well-defined set of functionality that delivers business value. Features can include additions or changes to existing functionality. Features can easily span multiple teams, and multiple releases.

 

Goals

  • This Section: Provide a high-level goal statement, providing user context and expected user outcome(s) for this feature.

 

Requirements

  • This Section: A list of specific needs or objectives that a Feature must deliver to satisfy the Feature. Some requirements will be flagged as MVP. If an MVP requirement gets shifted, the feature shifts. If a non-MVP requirement slips, it does not shift the feature.

 

Requirement | Notes | isMvp?
CI - MUST be running successfully with test automation | This is a requirement for ALL features. | YES
Release Technical Enablement | Provide necessary release enablement details and documents. | YES

 

(Optional) Use Cases

This Section: 

  • Main success scenarios - high-level user stories
  • Alternate flow/scenarios - high-level user stories
  • ...

 

Questions to answer…

  • ...

 

Out of Scope

 

Background, and strategic fit

This Section: What does the person writing code, testing, documenting need to know? What context can be provided to frame this feature.

 

Assumptions

  • ...

 

Customer Considerations

  • ...

 

Documentation Considerations

Questions to be addressed:

  • What educational or reference material (docs) is required to support this product feature? For users/admins? Other functions (security officers, etc)?
  • Does this feature have doc impact?  
  • New Content, Updates to existing content,  Release Note, or No Doc Impact
  • If unsure and no Technical Writer is available, please contact Content Strategy.
  • What concepts do customers need to understand to be successful in [action]?
  • How do we expect customers will use the feature? For what purpose(s)?
  • What reference material might a customer want/need to complete [action]?
  • Is there source material that can be used as reference for the Technical Writer in writing the content? If yes, please link if available.
  • What is the doc impact (New Content, Updates to existing content, or Release Note)?
The details of this Jira Card are restricted (Red Hat Employee and Contractors only)

When moving to OCP 4 we didn't port the metrics charts for Deployments, Deployment Configs, StatefulSets, DaemonSets, ReplicaSets, and ReplicationControllers. These should be the same charts that we show on the Pods page: Memory, CPU, Filesystem, Network In and Out.

This was only done for pods.

We need to decide if we want to use a multiline chart or some other representation.

This would let us import YAML with multiple resources and add YAML templates that create related resources like image streams and build configs together.

See CONSOLE-580

Acceptance criteria:

  • Users should be able to drag multiple files into the import YAML editor.
    • the YAML files should be displayed in the editor separated by "---" (see the example below)
  • After clicking create
    • a dry run will be initiated and will report back any errors
    • upon receiving no errors from the dry run, the resources will be created
    • the results page will appear showing links for each resource
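A minimal illustration of such a multi-document import payload, creating an image stream and a build config together (resource names and the git URL are placeholders):

apiVersion: image.openshift.io/v1
kind: ImageStream
metadata:
  name: my-app
---
apiVersion: build.openshift.io/v1
kind: BuildConfig
metadata:
  name: my-app
spec:
  source:
    git:
      uri: https://github.com/example/my-app.git
  strategy:
    dockerStrategy: {}
  output:
    to:
      kind: ImageStreamTag
      name: my-app:latest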

Story:
As a user viewing the pod logs tab with a selected container, I want the ability to view past logs if they are available for the container.

Acceptance Criteria:

  • Provide a mechanism to expose past logs, if they are available.

 

Design doc: https://docs.google.com/document/d/1PB8_D5LTWhFPFp3Ovf85jJTc-zAxwgFR-sAOcjQCSBQ/edit#

The work on this story is dependent on following changes:

 

The console already supports custom routes on the operator config. The newly proposed CustomDomains API introduces a unified way to configure custom domains for the stock-installed routes that customers want to customise, covering both the hostnames and the serving certs/keys. From the console perspective those routes are:

  • openshift-console / console
  • openshift-console / downloads (CLI downloads)

 

The setup should be done on the Ingress config, where two new fields are introduced:

  • ComponentRouteSpec - contains the configuration for the custom domain (name, namespace, custom hostname, TLS secret reference)
  • ComponentRouteStatus - contains the status of the custom domain (conditions, previous hostname, RBAC needed to read the TLS secret, ...)

 

The console-operator will only consume the API and check for changes. If a custom domain is set for either the `console` or `downloads` route in the `openshift-console` namespace, the console-operator will read the setup and configure a custom route accordingly. When a custom route is set up for either of the console's routes, the default route won't be deleted; instead it will be updated so that it redirects to the custom one. This is done for two reasons:

  1. we want to prevent somebody from stealing the default hostname of both routes (console, downloads)
  2. we want to prevent users from having unusable bookmarks that are pointing to the default hostname

 

The console-operator will still need to support the CustomDomain API that is available on its config.
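A sketch of what the Ingress config could look like with custom domains set for both console routes (hostnames and secret names are illustrative):

apiVersion: config.openshift.io/v1
kind: Ingress
metadata:
  name: cluster
spec:
  componentRoutes:
    - namespace: openshift-console
      name: console
      hostname: console.mycustom.example.com
      servingCertKeyPairSecret:
        name: console-custom-tls
    - namespace: openshift-console
      name: downloads
      hostname: downloads.mycustom.example.com
      servingCertKeyPairSecret:
        name: downloads-custom-tls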

Acceptance criteria:

  • Console supports the new CustomDomains API for configuring a custom domain for both `console` and `downloads` routes
  • Console falls back to the deprecated API in the console operator config if present
  • Console supports the original default domains and redirects to the new ones

 

Questions:

  • Which CustomDomain API takes precedence: the Ingress config or the console-operator config? Can an upgrade cause any issues?

 Feature Overview

This will be phase 1 of Internationalization of the OpenShift Console.

 Phase 1 will include the following:

  1. UI-based language selector instead of using browser detection
  2. Externalize all hard-coded strings in the client code, including all OpenShift static plugins
    1. Admin Console
    2. Dev Console
    3. Serverless
    4. Pipelines
    5. CNV
    6. OCS
    7. CSO
  3. Localized Date/Time
  4. Set up all processes, infrastructure, and testing required
  5. We will start with support for the Chinese and Japanese languages

Phase 1 will not include:

  1. Dynamically generated UI (Operator, OpenAPIV3Schema)
    1. Operators that surface informational messages may not have translations available
  2. Strings from non client code
    1. This may include items such as events surfaced from Kubernetes, alerts, and error messages displayed to the user or in logs
  3. Localization of logging messages at any level is not in scope
  4. Any CLI
  5. Language support for right-to-left languages, i.e., Arabic

Initial List of Languages to Support

---------- 4.7* ----------

  1. Japanese - Code: ja 
  2. Chinese - Code: zh_CN, zh_TW 
  3. Korean - Code: ko

*This will be based on the ability to get all the strings externalized; there is a good chance this gets pushed to 4.8.

---------- Post 4.7 ----------

  1. Spanish: - Code: es_419, es 
  2. German: - Code: de
  3. French - Code: fr
  4. Italian - Code: it
  5. Portuguese - Code: pt_BR
  6. Korean - Code: ko
  7. Hindi - Code: hi

POC

 Initial POC PR

Goals

Internationalization has become table stakes. OpenShift Console needs to support different languages in each of the major markets. This is key functionality that will help unlock sales in different regions.

 

Requirements

 

Requirement | Notes | isMvp?
Language Selector | | YES
Localized Date + Time | | YES
Externalization and translation of all client side strings | | YES
Translation for Chinese and Japanese | | YES
Process, infra, and testing capabilities put into place | | YES
CI - MUST be running successfully with test automation | This is a requirement for ALL features. | YES

  

Out of Scope

  1. Dynamically generated UI (Operator, OpenAPIV3Schema)
    1. Operators that surface informational messages may not have translations available
  2. Strings from non client code
    1. This may include items such as events surfaced from Kubernetes, alerts, and error messages displayed to the user or in logs
  3. Localization of logging messages at any level is not in scope
  4. Any CLI support
  5. Language support for right-to-left languages, i.e., Arabic

 

Assumptions

  • Each static plugin team will be responsible for externalizing all their client code strings.
  • Quick Starts will need to be translated.

Customer Considerations

We are rolling this feature out in phases; based on customer feedback, there may be no phase 2.

Documentation Considerations

I believe documentation already supports a large language set.

Epic Goal

  • This is the continuation of the Internationalization work... the following items remain:
    • All existing QuickStarts get Translated
    • Automation Completed
    • Any remaining items cleaned up

Why is this important?

  • Automating as much as possible of detecting duplicate strings, building, and pushing translation drops will ensure we are successful in all future releases
  • Quick Starts are an important part of the product that enables our users to maximize usage of the Console
  • Best to clean up anything left over to reduce future Tech Debt

Acceptance Criteria

  • Quick Starts are translated 
  • Everything is automated for building, and pushing translation drops to the globalization team
  • Source code should be up to quality standards

Previous Work (Optional):

  1. https://issues.redhat.com/browse/CONSOLE-2325

Done Checklist

  • CI - CI is running, tests are automated and merged.
  • Release Enablement <link to Feature Enablement Presentation>
  • DEV - Upstream code and tests merged: <link to meaningful PR or GitHub Issue>
  • DEV - Upstream documentation merged: <link to meaningful PR or GitHub Issue>
  • DEV - Downstream build attached to advisory: <link to errata>
  • QE - Test plans in Polarion: <link or reference to Polarion>
  • QE - Automated tests merged: <link or reference to automated tests>
  • DOC - Downstream documentation merged: <link to meaningful PR>

We need to automate how we send and receive updated translations using Memsource for the Red Hat Globalization team. The Ansible Tower team already has automation in place that we might be able to reuse.

Acceptance Criteria:

  • We have a script that takes the current messages from the console repos and pushes them to Memsource
  • We have a script that pulls the updated translations from Memsource and creates a PR against openshift/console
  • We work with the DPTP team to determine if this process can be automated such that it runs periodically (e.g. once a sprint)

Epic Goal

The OCP Console needs to detect if the ACM Operator has been installed; if it is detected, a new multi-cluster perspective option shows up in the perspective chooser.

As a user, I need the ability to switch to the ACM UI from the OCP Console and vice versa without having to log in multiple times.

This option also needs to be hidden if the user doesn't have the correct RBAC.

Marvel design mockup

Why is this important?

  • Multi-cluster functionality is very important to our users. We need to provide a seamless experience for users.

Scenarios

  1. ...

Acceptance Criteria

  • CI - MUST be running successfully with tests automated
  • Release Technical Enablement - Provide necessary release enablement details and documents.
  • ...

Dependencies (internal and external)

  1. ...

Previous Work (Optional):

Open questions:

Done Checklist

  • CI - CI is running, tests are automated and merged.
  • Release Enablement <link to Feature Enablement Presentation>
  • DEV - Upstream code and tests merged: <link to meaningful PR or GitHub Issue>
  • DEV - Upstream documentation merged: <link to meaningful PR or GitHub Issue>
  • DEV - Downstream build attached to advisory: <link to errata>
  • QE - Test plans in Polarion: <link or reference to Polarion>
  • QE - Automated tests merged: <link or reference to automated tests>
  • DOC - Downstream documentation merged: <link to meaningful PR>

The console should detect the presence of the ACM operator and add an Advanced Cluster Management item to the perspective switcher. We will need to work with the ACM team to understand how to detect the operator and how to discover the ACM URL.

Additionally, we will need to provide a query parameter or URL fragment to indicate which perspective to use. This will allow ACM to link back to a specific perspective since it will share the same perspective switcher in its UI. ACM will need to be able to discover the console URL.

This story does not include handling SSO, which will be tracked in a separate story.

We need to determine what RBAC checks to make before showing the ACM link.

Acceptance Criteria

1. Console shows a link to ACM in its perspective switcher
2. Console provides a way for ACM to link back to a specific perspective
3. The ACM option only appears when the ACM operator is installed
4. ACM should open in the same browser tab to give the appearance of it being one application
5. Only users with appropriate RBAC should see the link (access review TBD)

Complete Epics

This section includes Jira cards that are linked to an Epic, but the Epic itself is not linked to any Feature. These epics were completed when this image was assembled

An epic we can duplicate for each release to ensure we have a place to catch things we ought to be doing regularly but can tend to fall by the wayside.

Goal:

This epic mainly tracks the dev console QE automation activities for the 4.8 release:
1. Identify the scenarios for automation
2. Segregate the test cases into smoke, regression, and user stories
3. Design the Gherkin scripts with the below priority

  • Update the Smoke test suite
  • Update the Regression test suite

4. Create the automation scripts using cypress
5. Implement CI

Why is it important?

This improves the quality of the product

Note:

This is not related to any UI features. It is mainly focused on UI automation

Description

This story is mainly about moving the pipelines code from the dev console to the pipelines plugin folder for extensibility purposes.
Verify the pipelines regression test suite.

As an operator QE, I should be able to execute them in my operator folder.

Acceptance Criteria

1. All pipelines scripts should be able to execute in the pipelines plugin folder
2. Pipelines operator installation needs to be done by the script

Additional Details:

Description

Design the Cypress scripts for the epic ODC-3991
Refer to the Gherkin scripts: https://issues.redhat.com/browse/ODC-5430

As a user,

Acceptance Criteria

All automation possible test scenarios related to EPIC ODC-3991 should be automated

Additional Details:

Pipelines operator needs to be installed

Description

Update the smoke test cases related to knative
Remove the duplicate scenarios

As a user,

Acceptance Criteria

  1. <criteria>

Additional Details:

Description

Merging the existing knative smoke suite scripts

As a tester, I should be able to execute the smoke test scripts

Acceptance Criteria

  1. <criteria>

Additional Details:

Currently the PR looks too large; to reduce the size, we are creating these sub-tasks.
Update the README documentation for the knative plugin folder.

Description

This story is mainly about moving the pipelines code from the dev console to the gitops plugin folder for extensibility purposes.

As an operator QE, I should be able to execute them in my operator folder.

Acceptance Criteria

1. All pipelines scripts should be able to execute in the gitops plugin folder
2. gitops operator installation needs to be done by the script

Description

This story is to move the existing scripts from Dev Console to Topology plugin folder

As a user,

Acceptance Criteria

  1. I should be able to execute the topology scripts

Additional Details:

Description

Update all automation scripts and verify the execution on a remote cluster.

As a user,

Acceptance Criteria

  1. Follow the automation testing practices while designing scripts
  2. Execute the scripts on remote cluster
  3. In Automation PR, follow the PR Hygiene guidelines

Additional Details:

Execute them on Chrome browser and 4.8 release cluster

Description

Setup the cypress cucumber in helm plugin folder

Acceptance Criteria

  • Helm scenarios, along with their respective page objects, page functions, and step definitions, should move to the helm folder
  • Able to execute the smoke and regression test suites

Additional Details:

Description

CI implementation for pipelines, knative, devconsole

Acceptance Criteria

Update the package.json file.
CI for pipelines:
Any update related to pipelines should execute the pipelines smoke tests;
on nightly builds, the pipelines regression suite should be executed [TBD].

CI for devconsole:
Any update related to devconsole should execute the devconsole smoke tests;
on nightly builds, the devconsole regression suite should be executed [TBD].

CI for knative:
Any update related to knative should execute the knative smoke tests;
on nightly builds, the knative regression suite should be executed [TBD].

Additional Details:

  1. Update the package.json file with the below scripts
    • test-cypress-pipelines-headless & test-cypress-pipelines
    • test-cypress-knative-headless & test-cypress-knative
    • test-cypress-gitops-headless & test-cypress-gitops
  2. Update the .gherkinlintrc file 
    • updated knative related tags
    • max scenarios per file
    • file name style
  3. Updated the OWNERS file for gitops, knative and pipelines

Fixing the feature file lint issues that occur when executing `yarn run test-cypress-devconsole-headless`, and moving the topology features to the topology folder.

As the .gherkin-lintrc is mainly used by the QE team, it doesn't need to be in the frontend folder, so I am moving it to the dev-console/integration-tests folder.

Adding all necessary tags and modifying the below rules due to recently observed scenarios:

  1. While uploading scenarios to Polarion, switching off the "no-homogenous-tags" rule is useful
  2. Updated the allowed tags w.r.t. new feature files

Create GitHub templates with certain criteria to meet the Gherkin script standards and automation script standards.

Adding the OWNERS file to service mesh helps us add automatic reviewers for these Gherkin script updates.

Incomplete Epics

This section includes Jira cards that are linked to an Epic, but the Epic itself is not linked to any Feature. These epics were not completed when this image was assembled

Epic Goal

  • Port all remaining Protractor tests to Cypress

Why is this important?

  • Protractor is very hard to debug when tests fail/flake
  • Once all protractor tests are ported we can remove all Protractor dependencies, scripts, and configuration files.
  • Cypress has better debugging, plug-ins, and reporting tools

Acceptance Criteria

  • CI - MUST be running successfully with tests automated

Dependencies (internal and external)

  1. ...

Previous Work (Optional):

Open questions:

Done Checklist

  • CI - CI is running, tests are automated and merged.
  • Release Enablement <link to Feature Enablement Presentation>
  • DEV - Upstream code and tests merged: <link to meaningful PR or GitHub Issue>
  • DEV - Upstream documentation merged: <link to meaningful PR or GitHub Issue>
  • DEV - Downstream build attached to advisory: <link to errata>
  • QE - Test plans in Polarion: <link or reference to Polarion>
  • QE - Automated tests merged: <link or reference to automated tests>
  • DOC - Downstream documentation merged: <link to meaningful PR>

Please read: migrating-protractor-tests-to-cypress

Protractor test to migrate:  `frontend/integration-tests/tests/storage.scenario.ts`

Loops through 6 storage kinds:

15) Add storage is applicable for all workloads

   16) replicationcontrollers
      ✔ create a replicationcontrollers resource
      ✔ add storage to replicationcontrollers

   17) daemonsets
      ✔ create a daemonsets resource
      ✔ add storage to daemonsets

   18) deployments
      ✔ create a deployments resource
      ✔ add storage to deployments

   19) replicasets
      ✔ create a replicasets resource
      ✔ add storage to replicasets

   20) statefulsets
      ✔ create a statefulsets resource
      ✔ add storage to statefulsets

   21) deploymentconfigs
      ✔ create a deploymentconfigs resource
      ✔ add storage to deploymentconfigs

 

Acceptance Criteria

  • Protractor test ported to Cypress
  • Remove any unused legacy `data-test-id`s
  • Protractor test deleted, and no longer referenced in `frontend/integration-tests/protractor.conf.ts`

Please read: migrating-protractor-tests-to-cypress

Protractor test to migrate:  `frontend/integration-tests/tests/filter.scenario.ts`

4) Filtering 
   ✔ filters Pod from object detail 
   ✔ filters invalid Pod from object detail 
   ✔ filters from Pods list 
   ⚠ CONSOLE-1503 - searches for object by label 
   ✔ searches for pod by label and filtering by name 
   ✔ searches for object by label using by other kind of workload

 

Acceptance Criteria

  • Protractor test ported to Cypress
  • Remove any unused legacy `data-test-id`s
  • Protractor test deleted, and no longer referenced in `frontend/integration-tests/protractor.conf.ts`

Other Complete

This section includes Jira cards that are not linked to either an Epic or a Feature. These tickets were completed when this image was assembled

Description of problem:

Some of the steps in test scenarios [A-06-TC02]- Script fix required

A-06-TC05 - script fix required

A-06-TC11 update required as per the latest UI

Prerequisites (if any, like setup, operators/versions):

Steps to Reproduce

  1. <steps>

Actual results:

Expected results:

Reproducibility (Always/Intermittent/Only Once):

Build Details:

Additional info:

This task adds support for setting the socket options SO_REUSEADDR and SO_REUSEPORT on etcd listeners via ListenConfig. These options give flexibility to cluster admins who want more explicit control of these features. What we have found is that during an etcd process restart there can be a considerable wait for the port to be released, as it is held open by TIME_WAIT, which on many systems is 60s.

Description of problem:

P-01-TC03  On Second run, script worked fine
P-01-TC06    Created separate functions for the docker file page
P-01-TC09 Removing this test case, by updating P-04-TC04 test scenario
Updating pipelines section title in side bar

Above test scenarios - on second run it works fine

Prerequisites (if any, like setup, operators/versions):

Steps to Reproduce

  1. <steps>

Actual results:

Expected results:

Reproducibility (Always/Intermittent/Only Once):

Build Details:

Additional info:

Description of problem:

Update the kafka test scenarios in eventing-kafka-event-source.feature file

Steps to Reproduce

  1. <steps>

Actual results:

Expected results:

Reproducibility (Always/Intermittent/Only Once):

Build Details:

Additional info:

While Regression Test execution, updated the test scenarios

Description of problem:

 P-09-TC01, P-09-TC04, P-09-TC05, P-09-TC06, P-09-TC07, P-09-TC11 test scripts update required

Page objects updated for pipelines

Prerequisites (if any, like setup, operators/versions):

Steps to Reproduce

  1. <steps>

Actual results:

Expected results:

Reproducibility (Always/Intermittent/Only Once):

Build Details:

Additional info:

Description of problem:

 

P-06-TC01 Text change is required
P-06-TC04 Text change is required
P-06-TC13 Text change is required

P-04-TC02 also get fixed with this bug

P-03-TC03 also get fixed with this bug

Prerequisites (if any, like setup, operators/versions):

Steps to Reproduce

  1. <steps>

Actual results:

Expected results:

Reproducibility (Always/Intermittent/Only Once):

Build Details:

Additional info:

discover-etcd-initial-cluster was written very early on in the cluster-etcd-operator life cycle. We have observed at least one bug in this code and in order to validate logical correctness it needs to be rewritten with unit tests.

PR: https://github.com/openshift/etcd/pull/73

Description of problem:

In the topology view, if you select any grouping (Application, Helm Release, Operator-backed service, etc.), an extraneous blue box is displayed.
This is a regression.

Prerequisites (if any, like setup, operators/versions):

Steps to Reproduce

Create an application in any way ... but this will do ...

  1. Import from Git ... defaulting the application provided
  2. Select the Application which is created & you'll see the blue bounding box

Actual results:

This animated gif shows the issue:

Expected results:

The blue box shouldn't be shown

Reproducibility (Always/Intermittent/Only Once):

Always

Build Details:

Seen on 4/26/2021 4.8 daily, but this behavior was discussed in slack last week

Additional info:

This is a regression

The console-operator codebase contains a lot of inline manifests. Instead, we should put those manifests into a `/bindata` folder, from which they will be read and then updated as needed.

1) We want to fix the order of imports in the files.

2) We want vendor imports first, followed by console/package imports, with relative imports coming last.

This can be done manually, or we can introduce some linter rules for this.

User Story

Pull in the latest openshift/library content into the samples operator.
If image ecosystem e2e tests fail, work with upstream SCL to address them.

Acceptance Criteria

  • Samples operator installs current official content in openshift/library
  • List of removed/EOL images is prepared for docs update

Docs Impact

List of EOL images needs to be sent to the Docs team and added to the release notes.

Description of problem:

The Create Namespaces script keeps failing due to a load issue.

Prerequisites (if any, like setup, operators/versions):

Steps to Reproduce

  1. <steps>

Actual results:

Unable to execute the create namespace script

Expected results:

Create Namespace script should work without any issue

Reproducibility (Always/Intermittent/Only Once):

Intermittent

Build Details:

Additional info:

Description of problem:

P-02-TC02 Script fix required - unable to identify locators
P-02-TC03 Script fix required - unable to identify locators
P-02-TC06 Script fix required - unable to identify locators

Prerequisites (if any, like setup, operators/versions):

Steps to Reproduce

  1. <steps>

Actual results:

Expected results:

Reproducibility (Always/Intermittent/Only Once):

Build Details:

Additional info:

Description of problem:

 Fixing P-07-TC20 test scenario

Prerequisites (if any, like setup, operators/versions):

Steps to Reproduce

  1. <steps>

Actual results:

Expected results:

Reproducibility (Always/Intermittent/Only Once):

Build Details:

Additional info: