Tue Jan 07 - Written by: Samridha
Software Testing Basics
Explore the fundamentals of software testing, including types, techniques, planning, and defect lifecycle, designed for beginners and professionals alike.
Types of Software Testing
Unit Testing
Unit testing involves testing individual functions or modules to ensure they work correctly in isolation. It is almost always automated.
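For example, here is a minimal pytest-style unit test; both the add function and the tests are illustrative, not taken from any particular project:

```python
# calculator.py -- hypothetical module under test
def add(a: int, b: int) -> int:
    """Return the sum of two integers."""
    return a + b

# test_calculator.py -- unit tests exercising add() in isolation
def test_add_returns_sum():
    assert add(2, 3) == 5

def test_add_handles_negatives():
    assert add(-1, 1) == 0
```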
Integration Testing
Integration testing ensures that two or more modules work as expected in coordination. It verifies that communication and data flow are functioning correctly.
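As a minimal sketch (the UserRepository and greet names are hypothetical), an integration test wires two modules together and checks the data flow between them:

```python
# Hypothetical modules: a user store and a formatter that depends on it.
class UserRepository:
    def __init__(self):
        self._users = {1: "Alice"}

    def get_name(self, user_id: int) -> str:
        return self._users[user_id]

def greet(repo: UserRepository, user_id: int) -> str:
    # Integration point: greet() depends on the repository's output.
    return f"Hello, {repo.get_name(user_id)}!"

def test_greet_integrates_with_repository():
    repo = UserRepository()
    assert greet(repo, 1) == "Hello, Alice!"
```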
Regression Testing
Regression testing ensures recent code changes (e.g., new features or bug fixes) do not negatively impact the application, maintaining stability during development.
Smoke Testing
Smoke testing is a quick, shallow test (typically just a few minutes) that confirms the application's core functionalities are working. It is often performed before regression testing or after a deployment.
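One common way to carve out a smoke suite is with test markers. The sketch below uses pytest; the `smoke` marker name is a project convention, not a built-in:

```python
import pytest

# Register the marker in pytest.ini to avoid warnings:
#   [pytest]
#   markers =
#       smoke: quick checks of core functionality

@pytest.mark.smoke
def test_app_responds():
    # Stand-in for "does the core of the app come up at all?"
    assert 2 + 2 == 4

# Run only the smoke subset before a full regression pass:
#   pytest -m smoke
```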
Sanity Testing
Sanity testing is a quick check that verifies whether a specific feature or bug fix works as expected, without exercising the full application.
User Acceptance Testing (UAT)
UAT is the final stage of testing in the Software Development Lifecycle (SDLC), where the application is tested against customer requirements before release. It is typically conducted by the client or end users.
Difference Between Smoke and Sanity Testing:
- Smoke: Focuses on the overall stability of a build (broad, shallow scope).
- Sanity: Focuses on the functionality of specific features or fixes (very narrow scope).
Manual vs. Automated Testing
Manual Testing
Manual testing relies on human intelligence and observation to find bugs. It is performed by a human tester and is well suited to exploratory and user acceptance testing. It is also practical in the early stages of development, when automation may be too expensive to justify.
Automated Testing
Automated testing uses scripts and tools to validate the application. Tools like Selenium, Cypress, and TestNG are commonly used. It is faster, more efficient, and more reliable for repetitive tasks. It requires a higher initial investment but is worthwhile for large-scale projects with frequent updates.
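For illustration, a minimal Selenium sketch using its Python bindings; the URL and assertion are placeholders for your application's values:

```python
from selenium import webdriver
from selenium.webdriver.common.by import By

# Placeholder URL and locator -- substitute your application's values.
driver = webdriver.Chrome()  # requires Chrome installed locally
try:
    driver.get("https://example.com")
    heading = driver.find_element(By.TAG_NAME, "h1")
    assert "Example" in heading.text
finally:
    driver.quit()
```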
Black-Box, White-Box, and Gray-Box Testing
Black-Box Testing
Testing without any knowledge of the internal structure, code, or implementation. Focuses only on inputs and outputs. Examples include acceptance testing and UI testing.
White-Box Testing
Testing with full knowledge of the internal structure, code, or implementation. Test cases are designed using code and focus on code coverage (e.g., path, branch, or statement). Performed by testers with programming knowledge.
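For instance, branch coverage means every branch of the code under test is exercised at least once; the classify function below is illustrative:

```python
def classify(n: int) -> str:
    if n < 0:          # branch 1
        return "negative"
    elif n == 0:       # branch 2
        return "zero"
    return "positive"  # branch 3

# One test per branch gives full branch coverage of classify().
def test_negative_branch():
    assert classify(-3) == "negative"

def test_zero_branch():
    assert classify(0) == "zero"

def test_positive_branch():
    assert classify(7) == "positive"
```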
Gray-Box Testing
Testing with partial internal knowledge of the code. Integration testing can be considered gray-box testing.
Test Planning
Writing Test Plans, Test Cases, and Test Scripts
Test Plan
Defines the approach, objective, scope, resources, and schedule of testing.
Sections of a Test Plan:
- Introduction: Objective (e.g., finding bugs, validating features), approach.
- Definitions and Scope: Business-specific terminologies, features in/out of scope.
- Functionality: Detailed breakdown of functional and non-functional requirements.
Other Components:
- Roles and responsibilities
- Testing environments
- Resources, deliverables, and schedule
Test Case
A set of conditions and instructions used to validate a feature or functionality. It usually includes prerequisites, test steps, test data, validation conditions, and pass/fail criteria.
Test Script
Programmatically written instructions to automate a test case.
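As a sketch, the script below automates a hypothetical login test case, with comments mapping each line back to the test case components listed above:

```python
# Hypothetical system under test.
def login(username: str, password: str) -> bool:
    return username == "admin" and password == "secret"

def test_valid_login():
    # Prerequisite: a known account exists (hard-coded here for illustration).
    username, password = "admin", "secret"   # test data
    result = login(username, password)       # test step
    assert result is True                    # validation / pass condition
```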
Defect Life Cycle and Bug Reporting
Defect Life Cycle
The stages a bug goes through from identification to closure.
- New: When identified by the tester.
- Assigned: When acknowledged by the development team.
- Open: When the development team starts working on it.
- Fixed: After patching the code, the development team marks the bug as fixed.
- Retest: When the bug reaches the test team for verification.
- Verified: When the fix works as expected.
- Closed: When the bug is resolved with no further issues.
Additional stages:
- Deferred: Bug postponed for future releases.
- Rejected: Bug deemed invalid.
- Reopened: Bug not fixed correctly.
Bug Reporting
Contents of a Bug Report:
- Preconditions
- Steps to reproduce the bug
- Test data
- Priority (low, medium, high, critical)
- Severity (minor, major, critical, blocker)
- Supporting photos/videos for reproduction
Test Design Techniques
Boundary Value Analysis
A test design technique in which values at and just beyond the boundaries are selected for testing.
Example: For an age field (0-120), test with values like -1, 0, 1, 119, 120, 121.
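A parametrized sketch of these boundary checks, assuming a hypothetical validate_age function for the field:

```python
import pytest

# Hypothetical validator for the age field (0-120) described above.
def validate_age(age: int) -> bool:
    return 0 <= age <= 120

@pytest.mark.parametrize("age,expected", [
    (-1, False),   # just below the lower boundary
    (0, True),     # lower boundary
    (1, True),     # just above the lower boundary
    (119, True),   # just below the upper boundary
    (120, True),   # upper boundary
    (121, False),  # just above the upper boundary
])
def test_age_boundaries(age, expected):
    assert validate_age(age) == expected
```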
Equivalence Partitioning
Divides the input domain into partitions whose values should all produce the same behavior. Testers then select one or two representative values from each partition.
Example: For an age field (0-120), test values like -5, 50, and 150.
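A sketch with one representative value per partition, reusing the same hypothetical validate_age function:

```python
import pytest

def validate_age(age: int) -> bool:  # same hypothetical validator as above
    return 0 <= age <= 120

@pytest.mark.parametrize("age,expected", [
    (-5, False),   # partition: below the valid range
    (50, True),    # partition: inside the valid range
    (150, False),  # partition: above the valid range
])
def test_age_partitions(age, expected):
    assert validate_age(age) == expected
```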
Decision Table Testing
Uses decision tables to represent different combinations of inputs and corresponding outputs.
Example:
| Username Valid | Password Valid | Action  |
|----------------|----------------|---------|
| Yes            | Yes            | Success |
| Yes            | No             | Failure |
| No             | Yes            | Failure |
| No             | No             | Failure |
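Each row of the table maps directly to one test case. A minimal sketch, assuming a hypothetical authenticate function:

```python
import pytest

# Hypothetical authenticator corresponding to the decision table above.
def authenticate(username_valid: bool, password_valid: bool) -> str:
    return "Success" if username_valid and password_valid else "Failure"

# Each row of the decision table becomes one parametrized case.
@pytest.mark.parametrize("username_valid,password_valid,action", [
    (True, True, "Success"),
    (True, False, "Failure"),
    (False, True, "Failure"),
    (False, False, "Failure"),
])
def test_login_decision_table(username_valid, password_valid, action):
    assert authenticate(username_valid, password_valid) == action
```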
State Transition Testing
Validates applications whose state changes in response to events or conditions.
Example:
- Insert ATM card → Enter PIN (Valid: Go to Menu; Invalid: Eject Card)
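A minimal sketch of this flow as a small state machine, with one test per transition; the ATM class, state names, and PIN value are all hypothetical:

```python
# Hypothetical, simplified state machine for the ATM flow above.
class ATM:
    def __init__(self):
        self.state = "IDLE"

    def insert_card(self):
        self.state = "AWAITING_PIN"

    def enter_pin(self, pin: str):
        if self.state != "AWAITING_PIN":
            raise RuntimeError("no card inserted")
        # Valid PIN goes to the menu; invalid PIN ejects the card.
        self.state = "MENU" if pin == "1234" else "CARD_EJECTED"

def test_valid_pin_reaches_menu():
    atm = ATM()
    atm.insert_card()
    atm.enter_pin("1234")
    assert atm.state == "MENU"

def test_invalid_pin_ejects_card():
    atm = ATM()
    atm.insert_card()
    atm.enter_pin("0000")
    assert atm.state == "CARD_EJECTED"
```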