Tester Feedback

Use this form to (publicly) submit your feedback and feature ideas. Others will be able to vote for and discuss your idea.
Can the bug assessment sheet be revised to cover more scenarios?
It is sometimes very confusing when the exact same bug behavior is accepted in one test and rejected in another. I say this because of a very recent experience: a TL never mentioned certain exclusions in the chat but went ahead and rejected bugs anyway (e.g. bugs reported on credit card encryption issues, or bugs reproducible only on a specific device or browser OS).

If some TLs cannot be clear about these kinds of exclusions, then Test IO should be clear on such disputable issues. On reproducibility, for example, I have also seen bugs that no other tester could reproduce still get forwarded to the customer because they were OS- or device-specific.

Up to this point, it is still unclear why Test IO permits such inconsistencies in bug rejection reasons from some TLs, even after a bug dispute, especially when the exact same bugs have been approved in previous tests. In such cases, the TL's position should not be upheld in a bug dispute when the exclusions were not made clear in the test scope, the chat, or the Test IO Academy, and the bugs are otherwise valid.

If Test IO has not stated in the Academy that certain bug scenarios should no longer be reported, then ALL TLs should ALWAYS state test-specific exclusions (those not listed in the out-of-scope section) in their opening comments via chat. For example: in one test a TL approves a card encryption caching bug, while in another test a TL rejects the exact same card encryption caching issue.

This is just one example; I could go on with others. I would really appreciate Test IO looking into such disputable issues and revising the bug assessment sheet to provide clarity and classifications for such bugs.
under review

Allow testers at least one chance to respond to dispute decision
Disputes are supposed to be the final stop for redeeming a bug decision, so it is disheartening when the dispute manager does not review the cycle (overview, instructions, bug lists, and chat) to understand the context of the test. On more than one occasion, I have encountered situations where the dispute manager's response was out of context or irrelevant to the bug. So I would suggest that we be able to respond to the dispute manager and ask them to review their decision.

A recent example involves a cycle where we were given a link to an HTML page to be opened in a browser and were then sent the same HTML as an email. The aim was to compare the two and report any visual or content issue that differentiated them. Also, in this cycle, each feature was a different client, as clarified in the chat and from past experience with the cycle.

I submitted content bugs for missing phone number links that were present in the HTML in the browser, but the TL rejected them as functional. So another tester and I each opened a dispute. In mine, I argued, according to the Academy, why it was a content bug and not functional, and linked a video comparing the HTML in the browser with the email clients. The other tester also argued from the Academy why it was content, and linked the browser HTML link. My dispute was rejected as UX and as a duplicate, but the other tester's was accepted as a content bug. If my dispute manager had reviewed the cycle, they would not have rejected it for such reasons.
under review
