
...

Daily Breakdown

Length: 10 business days

Day | Weekday   | Task                                                                                                  | Team Members
1   | Wednesday |                                                                                                       |
2   | Thursday  | Ad-hoc testing (QA environment) (30 min - 1 hr); backlog items - write test cases and manual testing  | Ewa
3   | Friday    |                                                                                                       |
4   | Monday    |                                                                                                       |
5   | Tuesday   | Backlog items - write test cases and manual testing                                                   | Ewa
6   | Wednesday |                                                                                                       |
7   | Thursday  | Ad-hoc testing (ACC environment) (30 min - 1 hr); backlog items - write test cases and manual testing | Ewa
8   | Friday    |                                                                                                       |
9   | Monday    |                                                                                                       |
10  | Tuesday   | Backlog items - write test cases and manual testing; write change management ticket for next release  | Ewa; Maggie / Jugraj

Dev - Backlog item checklist

...

One-off Sprints

  • Bug fix only sprint

Sprint Composition

  • Can we still use Ewa for 4 days of the sprint? Ad-hoc testing would really help and would be very valuable. She could do ad-hoc testing, write test cases, etc. - anything that adds value. Even a timeboxed one-day ad-hoc test would be valuable: one day timeboxed in QA and one day timeboxed in ACC.

  • Need to communicate to the user that the cost may not be worth it for low-value work.

  • Full regression test

Full Regression Test

Day | Weekday   | Tasks | Team Member
1   | Wednesday |       |
2   | Thursday  |       |
3   | Friday    |       |
4   | Monday    |       |
5   | Tuesday   |       |
6   | Wednesday |       |
7   | Thursday  |       |
8   | Friday    |       |
9   | Monday    |       |
10  | Tuesday   |       |

Automated Testing

  • Playwright could be used for automated testing - have we solved the issue of authentication?

  • Change management - could write the ticket every sprint.

  • If we follow this method, we might not have to do a full regression test, but we might still want a 5-day regression.

  • Need a conversation about who will decide what is a bug versus a UI change we are not going to make. Action item: create a feature to hold items we will never fix (e.g. "here is a bug that is not a bug") so the team can search to see what decisions were made.
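On the Playwright authentication question above: one common approach (a hypothetical sketch, not our confirmed setup - the URL, selectors, and environment variable names below are placeholders) is a global-setup script that logs in once and saves the session to a storage-state file that every test then reuses:

```typescript
// global-setup.ts - hypothetical sketch of solving auth once per run.
// All URLs, selectors, and env var names below are placeholders.
import { chromium, type FullConfig } from '@playwright/test';

async function globalSetup(config: FullConfig): Promise<void> {
  const browser = await chromium.launch();
  const page = await browser.newPage();

  // Log in through the UI a single time.
  await page.goto('https://qa.example.com/login');
  await page.fill('#username', process.env.QA_USER ?? '');
  await page.fill('#password', process.env.QA_PASS ?? '');
  await page.click('button[type="submit"]');
  await page.waitForURL('**/dashboard');

  // Persist cookies and localStorage so tests start already authenticated.
  await page.context().storageState({ path: 'auth-state.json' });
  await browser.close();
}

export default globalSetup;
```

Tests would then pick up the saved session via `storageState: 'auth-state.json'` in the Playwright config, so individual specs never repeat the login flow.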

Decision-makers

Task                                                                      | Team Member
Whether or not a bug or UI change suggestion will be added to the backlog |

Items to Action

Decision                                                                                              | Team Member | Status
Who will decide whether or not a bug or UI change suggestion will be added to the backlog             |             |
Create a feature to hold items we will never fix, so the team can use it as a reference in the future |             |

Sprint Composition

50-70% - New development and testing, including:

...

Testing - brainstorming only

Discussion with Grace - April 4

Bugs are addressed during the sprint. Leave some space for bug testing during the sprint.

...

  • How to track modifications to backlog items. E.g. when testing, if something needs to be added, what should we do?

Discussion with Ashiq

PO and users:

  • test in ACC

  • They can test on the first day of the next sprint - we will give them a list of what we completed and they can test if they can; if not, that is OK - maybe 1 hr of testing for them.

  • Dev - send a screenshot of new UI before the functionality is built to get feedback from the PO via email. No need to send screenshots for things such as bugs and FR translations - only things we need feedback on.

  • PO walkthrough - two days before the end of the sprint - 15-30 min to take a quick look if screenshots are not enough - flexible. The PO may not need it every week.

    • Can show him work still in progress

  • Users can test in the test environment if they wish after the sprint review. We can give them a few bullet points of what we worked on - either at the end of the review or on the first day of the sprint.

...

How would we work with devs only to write test cases?

Discussion with Ashiq

QA free from manual testing

...

Other work environments - 1:1 with the dev team - QA works on writing the user stories from the beginning of the sprint.

  • Scope

  • Time

  • Quality

Automated Testing

  • How can we implement?

  • How can we keep QA involved?

  • What is the outcome of implementation? E.g. less QA manual testing
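On the implementation question, and assuming we adopt Playwright as raised earlier in these notes, the wiring could look like the following configuration sketch (the file names and URL are placeholder assumptions, not a confirmed setup):

```typescript
// playwright.config.ts - hypothetical configuration sketch.
import { defineConfig } from '@playwright/test';

export default defineConfig({
  // Hypothetical script that logs in once and saves the session state.
  globalSetup: './global-setup.ts',
  retries: 1,
  use: {
    baseURL: 'https://qa.example.com',   // placeholder QA environment URL
    storageState: 'auth-state.json',     // every test starts authenticated
    trace: 'on-first-retry',             // keep traces for failing runs
  },
});
```

Centralizing login and session reuse like this is one way automation could reduce repetitive manual passes, which matches the intended outcome above; QA stays involved by deciding which flows get automated.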

...