My QA Playbook

QA done right.


Milestone 4 - gathered and incorporated

Prioritize User-Centric Testing

  1. To ensure a user-centric approach, prioritize testing that revolves around the needs, expectations, and usage patterns of end-users.
  2. To get there, gain a comprehensive understanding of the end-users’ (dive deep into the minds of end-users)
    • needs,
    • expectations, and
    • usage patterns (behavior).

This knowledge will serve as a foundation for designing effective tests and improving the overall user experience.

Collaborate with product owners to define user stories and acceptance criteria.

  1. Work closely with product owners to define user stories and acceptance criteria.
  2. This collaboration ensures that testing aligns with the intended user experience and meets the desired outcomes.

Conduct usability testing and gather user feedback throughout the development process.

  • Usability testing and continuous gathering of user feedback should be conducted throughout the development process.
  • This iterative approach allows for continuous improvement based on real-time user insights.
  • Start testing early in the development cycle to identify defects sooner.
  • Conduct exploratory testing to uncover issues beyond the scripted test cases.

Get Early and Continuous Testing up and running

This allows for quick identification and resolution of issues, ensuring that the final product meets user expectations.

Enslave the mighty CI/CD pipelines to ensure

  • frequent builds,
  • testing, and
  • deployment.

I can’t stress this enough:

Cross-Functional Collaboration

  • Agile User-Centric Testing emphasizes close collaboration between testers, developers, designers, and other stakeholders.
  • This collaboration ensures a shared understanding of user requirements and facilitates effective communication and problem-solving.

User Stories > Test Cases > Test Suites > Test Cycles

Define User Stories:

  • In Agile, user stories describe the desired functionality from the user’s perspective.
  • Begin by identifying the user stories that need to be implemented or modified.

Create Test Cases:

  • For each user story, create test cases that define the expected behavior of the software.
  • These test cases will serve as acceptance criteria for the feature.
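
As a minimal sketch of that flow, assuming pytest as the test runner: the user story (“as a shopper, I want to add items to my cart”), the Cart class, and its methods below are all hypothetical stand-ins for the real feature under test, and the two tests double as its acceptance criteria.

```python
import pytest

# Hypothetical user story: "As a shopper, I want to add items to my cart
# so that I can buy several things at once."
# Cart is a stand-in for the real feature under test.

class Cart:
    def __init__(self):
        self.items = []

    def add(self, item, qty=1):
        if qty < 1:
            raise ValueError("quantity must be at least 1")
        self.items.append((item, qty))

    def total_quantity(self):
        return sum(qty for _, qty in self.items)


# Acceptance criterion 1: adding an item puts it in the cart.
def test_shopper_can_add_an_item():
    cart = Cart()
    cart.add("apple")
    assert cart.total_quantity() == 1

# Acceptance criterion 2: nonsensical quantities are rejected.
def test_adding_zero_items_is_rejected():
    with pytest.raises(ValueError):
        Cart().add("apple", qty=0)
```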

Sidenote: Test Cases - The Epistemological QA Flow

Question Everything:

  • Enter the realm of Epistemological QA Flow, where doubt is your greatest ally.
  • Challenge assumptions, scrutinize requirements, and question everything.
  • Just don’t expect anyone to appreciate your philosophical musings.

Seek Enlightenment through Testing:

  • Test like a Zen master seeking enlightenment.
  • Leave no stone unturned, no corner unexplored.
  • Embrace exploratory testing, automated testing, and
  • any other testing technique that brings you closer to the elusive state of QA nirvana.

New feature > meeting with dev > create test plan > test suite solely for this feature

  • I could add a (link to a template) here.

Design the test suite

  • Set milestones
  • Define deliverables (team alignment)
  • Identify risks
  • Agree on a Definition of Done (DoD)

Use the information from the requirement analysis for test planning. The test plan should comprise

  • the software testing strategy,
  • the scope of testing,
  • the project budget, and
  • the established deadlines.

It should also

  • outline the types and levels of testing required, as well as the methods and tools for tracking bugs, and
  • allocate resources and responsibilities to individual testers.

a) Prioritize transparency

  • clearly define requirements,
  • give testers a thorough understanding of the features, and
  • give them a blueprint for how to progress.

Transparency benefits any development approach, but it is especially necessary for Agile success. Testers have to be very clear on what the software is expected to do, what features to test in each sprint, and what “good results” look like. This clarity helps teams collaborate, test faster, and deliver results within short deadlines.

b) SMART objectives for your QA test

  • Specific
    • state what you will do (use action words)
  • Measurable
    • provide a way to evaluate
    • use metrics and data targets
  • Achievable
    • within your scope
    • possible to accomplish
  • Relevant
    • improves the business in some way, makes sense
  • Time-bound
    • state when you will get it done
      • be specific on date or timeframe

c) Functional and non-functional software testing types

  • Functional testing verifies that the app behaves as intended and meets the functional requirements;
  • non-functional testing evaluates its performance, usability, security, and other quality attributes.
  • Combining both types in one comprehensive testing strategy puts the assurance in QA and yields user-centric results.

Functional Testing ***

  • examines the software’s behavior and functionality.
  • verifies whether the app functions as expected and meets the requirements.
  • main question: “Does the app perform the intended tasks and produce the expected outputs?”
  • Team creates test cases that simulate real-world scenarios to validate the expected behavior/functionality against predefined requirements/specs.

Unit Testing

  • Testing individual components or modules to verify their correctness:
    • correct paths,
    • correct control structures, and
    • correct loops.
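
A minimal pytest sketch of such structural unit tests; classify_scores() is a hypothetical unit containing one branch and one loop, so each structural element gets its own test.

```python
import pytest

# Hypothetical unit with one branch and one loop.
def classify_scores(scores):
    """Return how many scores pass (>= 50); raise on an empty list."""
    if not scores:                    # branch under test
        raise ValueError("no scores given")
    passed = 0
    for s in scores:                  # loop under test
        if s >= 50:
            passed += 1
    return passed

def test_happy_path_counts_passing_scores():
    assert classify_scores([70, 40, 50]) == 2

def test_empty_input_takes_error_branch():
    with pytest.raises(ValueError):
        classify_scores([])

def test_loop_handles_single_element():
    assert classify_scores([49]) == 0
```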

Smoke Testing

  • A preliminary round of testing to verify basic functionality before conducting more comprehensive tests.
  • performed post-build to verify that the critical functionalities are working fine
  • executed before any detailed functional or regression tests are run
  • the main purpose is to reject a broken build early so that QA does not waste time testing a broken software app
  • test cases are chosen to cover the most important functionality or components of the system
  • the objective is not to perform exhaustive testing, but to verify the critical functionality
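
One way to sketch this, assuming pytest: tag the critical-path checks with a custom smoke marker so that just this subset runs right after each build via `pytest -m smoke`. The login() helper is a hypothetical stand-in for the app’s critical functionality.

```python
import pytest

# Hypothetical stand-in for a critical function of the app.
def login(user, password):
    return user == "demo" and password == "secret"

# Run only this subset after each build:  pytest -m smoke
# (Register the marker in pytest.ini to silence the unknown-marker warning:
#   markers = smoke: post-build critical-path checks)
@pytest.mark.smoke
def test_valid_login_works():
    assert login("demo", "secret")

@pytest.mark.smoke
def test_invalid_login_is_rejected():
    assert not login("demo", "wrong")
```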

Sanity Testing

  • performed after receiving a build with minor changes in the code or functionality, to make sure that the bugs have been fixed and no further issues are introduced due to the changes
  • the goal is to determine that the proposed functionality works roughly as expected
  • check the version number to make sure the right build was tested

Regression Testing (content from the handbooks - in progress)[link]

  • Re-testing previously tested functionalities to ensure that changes or enhancements have not introduced any new issues.
  • confirms that a program or code change has not adversely affected (broken) existing features
  • a full or partial selection of already executed test cases is re-executed to ensure existing functionalities still work fine
  • done to ensure that code changes do not have side effects on the existing functionality
  • ensures old code still works once the latest code changes are done
  • automate these tests whenever possible to achieve faster feedback and broader regression coverage
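
A common pattern, sketched here under hypothetical names (BUG-1234, normalize_name()): every fixed bug gets a pinned test that reproduces the original failure, so the re-executed suite fails loudly if the bug ever returns.

```python
# Hypothetical: earlier versions crashed on names with extra whitespace
# (BUG-1234). The fix collapses whitespace, then title-cases the name.
def normalize_name(name):
    return " ".join(name.split()).title()

# Regression guard: stays in the suite forever, pinning the fix in place.
def test_bug_1234_extra_whitespace_is_handled():
    assert normalize_name("  ada   lovelace ") == "Ada Lovelace"
```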

Automated Testing - The Holy Grail (or not):

Automation is the buzzword that promises salvation.

  • Automate tirelessly, only to discover that half the tests are flaky, the other half don’t make sense, and the rest are obsolete by the time you finish writing them.
  • Don’t forget to laugh at the irony.

System Integration Testing (SIT)

  • Testing the interaction between different components/modules to ensure they work together seamlessly.
  • verifies that the different modules or services used by the app work well together, for example:
    • interaction with the DB
    • microservices working together
  • more expensive to run, as these tests require multiple parts of the app to be up and running

User Acceptance Testing

  • performed by the end user or the client to verify/accept the app before it goes to prod
  • done in the final phase of testing
  • the main purpose of UAT is to validate the end-to-end business flow; it does not focus on cosmetic errors, spelling mistakes, or system testing
  • carried out in a separate testing env with a prod-like data setup

API Testing

  • validates APIs
  • checks the functionality, reliability, performance, and security of the interface
  • instead of using standard user inputs (keyboard) and outputs, you use software to send calls to the API, get the output, and note down the system’s response
  • targets the business logic layer
  • covers communication and data exchange between two separate software systems
  • database and server should be configured as per the application requirements
  • once the installation is done, API functions are called to check whether the API is working
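
A minimal sketch of such a test in Python with the requests library, assuming a hypothetical endpoint https://api.example.com/users; it asserts on the status code, a crude response-time budget, and the shape of the JSON payload.

```python
import requests

BASE_URL = "https://api.example.com"  # hypothetical API under test

def test_list_users_returns_ok_json_quickly():
    resp = requests.get(f"{BASE_URL}/users", timeout=5)
    assert resp.status_code == 200                # functionality
    assert resp.elapsed.total_seconds() < 1.0     # crude performance check
    body = resp.json()
    assert isinstance(body, list)                 # expected payload shape
```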

Localization Testing

  • the process of testing a globalized app whose UI and behavior are verified for a specific region, locale, or culture
  • customizing as per the targeted language and country: default language, currency, date and time format, documentation, content, and UI
  • the purpose is to check the appropriate linguistic and cultural aspects
  • adjust the user interface, and even the initial settings, according to the regional requirements
  • verify typographical errors, cultural appropriateness of the UI, and linguistic errors
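
A small sketch of a localization check, assuming the babel package and German (de_DE) as the target locale: compute the reference formats first, then compare them to what the app actually renders (the app lookup is not shown here).

```python
from datetime import date
from babel.dates import format_date
from babel.numbers import format_currency

def test_german_reference_formats():
    # Reference date format for de_DE: dd.MM.yy
    assert format_date(date(2023, 7, 4), format="short", locale="de_DE") == "04.07.23"
    # Reference currency format: thousands dot, decimal comma, EUR symbol.
    price = format_currency(1234.5, "EUR", locale="de_DE")
    assert "1.234,50" in price and "€" in price
```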

User Interface (UI) Testing

  • testing the graphical user interface to ensure it is user-friendly and operates as intended.
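
A minimal UI-test sketch using Selenium, assuming a local Chrome/chromedriver setup; the URL, element name, and expected title are hypothetical.

```python
from selenium import webdriver
from selenium.webdriver.common.by import By

def test_search_box_works():
    driver = webdriver.Chrome()
    try:
        driver.get("https://example.com/search")   # hypothetical app URL
        box = driver.find_element(By.NAME, "q")    # hypothetical element
        box.send_keys("qa playbook")
        box.submit()
        assert "results" in driver.title.lower()   # hypothetical expectation
    finally:
        driver.quit()                              # always release the browser
```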

Non-Functional Testing ***

  • evaluates how the app performs/behaves under different conditions/environments.
  • The QA team assesses the app’s
    • responsiveness,
    • resource utilization,
    • security vulnerabilities,
    • user-friendliness, and its
    • ability to handle high loads or concurrent users.

  • The goal is to identify potential bottlenecks, risks, and usability issues. The relevant quality attributes therefore are
    • performance,
    • reliability,
    • usability,
    • speed,
    • scalability,
    • security,
    • stability, and
    • user experience.

Performance Testing

  • Evaluating the software’s speed, scalability, and responsiveness under various workloads and conditions.

Load Testing

  • Testing the software’s performance under expected and peak load conditions.
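
A rough load-test sketch in plain Python, assuming the requests library and a hypothetical health endpoint; real load tests would normally use a dedicated tool such as JMeter or Locust, but the idea is the same: fire concurrent requests and check latencies against a budget.

```python
import time
from concurrent.futures import ThreadPoolExecutor
import requests

URL = "https://api.example.com/health"   # hypothetical endpoint

def timed_get(_):
    start = time.perf_counter()
    requests.get(URL, timeout=10)
    return time.perf_counter() - start

def test_expected_load_stays_within_budget():
    # 200 requests, at most 50 in flight at once (rough "expected load").
    with ThreadPoolExecutor(max_workers=50) as pool:
        latencies = list(pool.map(timed_get, range(200)))
    assert max(latencies) < 2.0          # hypothetical 2-second budget
```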

Stress Testing

  • Assessing the software’s stability and performance beyond its normal capacity, under extreme conditions.

Security Testing

  • Testing the software’s vulnerability to unauthorized access, data breaches, and other security threats.

Usability Testing

  • Evaluating the software’s user-friendliness and intuitiveness by observing real users interacting with it.

Accessibility Testing

  • Verifying that the software is usable by people with disabilities, e.g. through screen-reader support, keyboard-only navigation, and sufficient color contrast (WCAG).

Endurance Testing

  • Evaluating the software’s ability to perform reliably under sustained load over an extended period (soak testing), e.g. to catch memory leaks or resource exhaustion.

Browser Testing

  • Verifying that the software renders and functions correctly across different browsers and browser versions (cross-browser testing).

Mobile Testing

  • Verifying that the app works correctly across mobile devices, OS versions, screen sizes, and network conditions.

Penetration Testing

  • Simulating attacks against the software to uncover exploitable vulnerabilities before real attackers do.

Agile Testing - includes Test-Driven Development (TDD)

  • practiced in Agile User-Centric Testing.
  • involves writing test cases before developing the code.
  • helps ensure that the software meets the intended user requirements and can be used as a guideline for user-centric testing.

  • Work closely with developers to define test scenarios and acceptance criteria before coding begins.
  • TDD means writing automated tests before implementing the actual code.
  • It follows a cycle of writing a failing test, implementing the code to pass the test, and then refactoring the code while ensuring that all tests still pass.

Write Failing Tests:

  • Start by writing automated tests based on the test cases created in the previous step.
  • These tests should initially fail since the corresponding code is yet to be implemented.

Implement Minimum Viable Code:

  • Write the minimum amount of code required to make the failing tests pass.
  • The goal is to implement the simplest solution that fulfills the test requirements.

Run Tests:

  • Run all the automated tests to ensure that the newly implemented code passes the tests.
  • If any test fails, go back to the implementation step and modify the code until all tests pass.

Refactor Code:

  • Once all tests pass, review and improve the code.
  • Refactoring aims to enhance the code’s structure, maintainability, and efficiency without changing its functionality.
  • Refactoring should be done in small, incremental steps to ensure that the tests continue to pass.

Repeat the Cycle:

  • Move on to the next user story and repeat the cycle:
  • Write failing tests, implement code, run tests, and refactor until all acceptance criteria are met.
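
A compact sketch of one such cycle in pytest, built around a hypothetical user story (“discount codes reduce the order total”); apply_discount() and the SAVE10 code are invented for illustration.

```python
import pytest

# Step 1 (red): these tests are written first and fail,
# because apply_discount() does not exist yet.
def test_discount_code_reduces_total():
    assert apply_discount(total=100.0, code="SAVE10") == pytest.approx(90.0)

def test_unknown_code_is_rejected():
    with pytest.raises(KeyError):
        apply_discount(total=100.0, code="BOGUS")

# Step 2 (green): the minimum code that makes both tests pass.
# Step 3 (refactor) would e.g. move DISCOUNTS into configuration,
# re-running the tests after each small change.
DISCOUNTS = {"SAVE10": 0.10}

def apply_discount(total, code):
    return total * (1 - DISCOUNTS[code])
```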
