- It is a method for tracing each requirement from its point of origin, through each development phase and work product, to the delivered product.
- It can indicate, through identifiers, where the requirement originated and where it is specified, created, tested, and delivered.
- It indicates, for each work product, the requirement(s) that work product satisfies.
- It facilitates communication, helping customer relationship management and commitment negotiation.
- It ensures, for each phase of the lifecycle, that all of the customer's needs have been correctly accounted for.
- It ensures that all requirements are correct and included in the test plan and the test cases.
- It ensures that developers are not creating features that no one has requested.
- It identifies the missing parts.
- Without traceability, the completed system may have "extra" functionality that was not specified in the design specification, resulting in wasted manpower, time, and effort.
- If it is not known which code components implement the customer's high-priority requirements, the areas that need to be worked on first may not be known, decreasing the chances of shipping a useful product on schedule.
- A seemingly simple request might involve changes to several parts of the system; if a proper traceability process is not followed, the work needed to satisfy the request may not be evaluated correctly.
Tuesday, March 30, 2010
Features of the Traceability Matrix
Labels:
Traceability matrix
Traceability matrix
A traceability matrix is a document, usually in the form of a table, that correlates any two baselined documents that share a many-to-many relationship, in order to determine the completeness of that relationship. It is often used to trace high-level requirements (these often consist of marketing requirements) and detailed requirements of the software product to the matching parts of high-level design, detailed design, the test plan, and test cases.
A traceability matrix is a document that defines the mapping between customer requirements and the prepared test cases.
It also serves as documented proof that all the specifications have been tested and that the application behaves as specified.
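To make the idea concrete, here is a minimal sketch in Python; the requirement and test-case IDs are invented for illustration, and a real matrix would normally live in a test-management tool or spreadsheet rather than code.

```python
# Hypothetical requirement-to-test-case mapping (IDs are illustrative only).
requirements = ["REQ-001", "REQ-002", "REQ-003", "REQ-004"]

test_cases = {
    "TC-101": {"REQ-001"},
    "TC-102": {"REQ-001", "REQ-002"},
    "TC-103": {"REQ-003"},
}

# Build the matrix: which test cases cover each requirement.
matrix = {req: [tc for tc, covered in test_cases.items() if req in covered]
          for req in requirements}

for req, tcs in matrix.items():
    print(req, "->", ", ".join(tcs) if tcs else "NOT COVERED")
# REQ-004 prints "NOT COVERED", flagging a requirement with no test case.
```

The same cross-check, run in the other direction, would reveal test cases (or features) that trace back to no requirement at all.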
Labels:
Traceability matrix
Monday, March 29, 2010
Test Bed
A test bed (also commonly spelled as testbed in research publications) is a platform for experimentation on large development projects. Test beds allow for rigorous, transparent, and replicable testing of scientific theories, computational tools, and new technologies.
OR
A test bed is the environment that is required to test software.
This includes requirements for hardware, software, memory, CPU speed, operating system, and so on.
OR
An execution environment configured for testing is called a test bed.
OR
A test bed is an environment containing the hardware, instrumentation, simulators, software tools, and other support elements needed to conduct a test.
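As an illustration only, a test bed for a small web application might be captured as a simple structure like the one below; every value here is hypothetical and would normally be recorded in an environment or configuration document.

```python
# Hypothetical test bed definition for a small web application.
test_bed = {
    "hardware": {"cpu": "4 cores @ 2.4 GHz", "memory": "8 GB", "disk": "100 GB"},
    "operating_system": "Ubuntu 20.04 LTS",
    "software": ["Python 3.10", "PostgreSQL 14", "nginx 1.22"],
    "support_elements": ["payment-gateway simulator", "test data generator"],
}

# Print the environment summary that testers would verify before a test run.
for category, items in test_bed.items():
    print(f"{category}: {items}")
```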
Labels:
Basic Concepts
BRS
BRS stands for Business Requirement Specification: the client who wants the application built gives this specification to the software development organization, and the organization then converts it into an SRS (Software Requirement Specification) according to the needs of the software.
Labels:
Basic Concepts
Difference between Test Strategy and Test Plan
Test Strategy : It is a company-level document, developed by QA-side people such as the QA lead and the project manager. This document defines the "testing approach" used to achieve the testing objectives. The test strategy is derived from the frozen (baselined) BRS, from which we get the test policy and the test strategy.
Test Plan : The test plan is a baselined document developed from the SRS, FS, and UC. After the testing team is formed and risk analysis is complete, the test lead prepares the test plan document in terms of what to test, how to test, who will test, and when to test. There is one master test plan consisting of the reviewed project test plan and the phase test plans, so in general discussion "test plan" refers to the project test plan.
Test Strategy : Components of the test strategy are as follows:
Scope and objectives, Business issues, Roles and responsibilities, Communication and status reporting, Test deliverables, Test approach, Test automation and tools, Testing measurements and metrics, Risks and mitigation, Defect reporting and tracking, Change and configuration management, Training plan.
Test Plan : Components of the test plan are as follows:
Test plan ID, Introduction, Test items, Features to be tested, Features not to be tested, Approach, Testing tasks, Suspension criteria, Feature pass/fail criteria, Test environment (entry criteria, exit criteria), Test deliverables, Staff and training needs, Responsibilities, Schedule, Risks and mitigation, Approvals.
Labels:
Test Plan and Test Strategy
Test Strategy
The purpose of a test strategy is to clarify the major tasks and challenges of the test project.
Creating a Test Strategy :
The test strategy is a formal description of how a software product will be tested. A test strategy is developed for all levels of testing, as required. The test team analyzes the requirements, writes the test strategy and reviews the plan with the project team. The test plan may include test cases, conditions, the test environment, a list of related tasks, pass/fail criteria and risk assessment.
Defining Test Strategy :
A solid testing strategy provides the framework necessary to implement your testing methodology. A separate strategy should be developed for each system being developed taking into account the development methodology being used and the specific application architecture.
The heart of any testing strategy is the master testing strategy document. It aggregates all the information from the requirements, system design and acceptance criteria into a detailed plan for testing. A detailed master strategy should cover the following:
- Project Scope
- Test Objectives
- Features and Functions to be Tested
- Testing Approach
- Testing Process and Procedures
- Test Compliance
- Testing Tools
- Defect Resolution
- Roles and Responsibilities
- Process Improvement
- Deliverables
- Schedule
- Environmental Needs
- Resource Management
- Risk and Contingencies
- Approvals and Workflow
- Project Overview
- Business Risks
- Testing Milestones
- Testing Environment
Labels:
Test Plan and Test Strategy
Tuesday, March 23, 2010
Agile Testing
Agile testing emphasizes testing from the perspective of the customers who will use the system.
Agile testing does not emphasize rigidly defined testing procedures, but rather focuses on testing iteratively against newly developed code until quality is achieved from an end customer's perspective.
The word "agile" means "moving quickly", and this captures the whole concept of agile testing.
Testers have to adapt to rapid deployment cycles and changes in testing patterns.
Agile testing involves testing from the customer perspective as early as possible, testing early and often as code becomes available and stable enough from module/unit level testing.
Testers are no longer a form of quality police. Testing moves the project forward, leading to a strategy called Test-Driven Development (TDD). Testers provide information, feedback, and suggestions rather than being the last line of defense.
Testing is no longer a separate phase; it integrates closely with development. Continuous testing is the only way to ensure continuous progress.
Manual testing, particularly manual exploratory testing, is still important.
Agile teams typically find that the fast feedback afforded by automated regression is a key to detecting problems quickly, thus reducing risk and rework.
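As a tiny illustration of the test-first rhythm behind TDD, here is a minimal sketch using Python's unittest; the add function and its expected behaviour are invented for the example.

```python
import unittest

# Step 1: write the tests first; they fail until the code below exists.
class TestAdd(unittest.TestCase):
    def test_adds_two_numbers(self):
        self.assertEqual(add(2, 3), 5)

    def test_handles_negatives(self):
        self.assertEqual(add(-1, 1), 0)

# Step 2: write just enough code to make the tests pass, then refactor.
def add(a, b):
    return a + b

if __name__ == "__main__":
    unittest.main()
```

Run as part of an automated regression suite, tests like these give the fast feedback described above.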
Labels:
Software Testing Types
Monday, March 22, 2010
QTP Questions
- What is a reusable action in QTP?
- What is the objective of actions in QTP?
- Why do we need synchronization in QTP?
- How many modes of recording are there?
- What is a virtual object and when will we use it?
- What is the difference between per-action mode and shared mode in QTP?
- Why do you use Object Spy in QTP?
- What is the difference between properties and methods in QTP?
- What is a virtual object, and where do we use virtual objects in QTP?
- What is a regular expression, and when do we use regular expressions in QTP?
- How do you add objects to the Object Repository?
- When do we use update mode in QTP?
- What is the difference between a constant and a parameter in QTP?
- What are GET TO, SET TO, and GET RO properties in QTP?
- What is a framework in QTP?
- Where are checkpoints stored in QTP?
- What are the objectives of low-level recording? What is elapsed time? Does QuickTest support JavaScript? What is the extension for a test script in QuickTest?
- Why do you save .vbs files as library files in QTP?
Labels:
Interview Questions
General Questions
- What do you mean by Pilot Testing?
- What is the difference between usability testing and GUI testing?
- Can you explain the levels in the V model in manual testing?
- What is the exact difference between debugging and testing?
- What is the difference between a scenario and a test case?
- What is determination?
- What is debugging?
- What is prototype model in manual testing?
- What is compatibility testing?
- What is test bed?
- What is stub and driver in manual testing?
- What is integration testing?
- What is unit testing?
- Can you explain waterfall model in manual testing?
- Can you explain V model in manual testing?
- Can you explain the spiral model in manual testing?
- What is fish model can you explain?
- What is unit testing in manual?
- What is test development?
- What is port testing?
- What is the V model? Can you explain it?
- What is BUG Life cycle?
- What is system testing?
- What is SRS and BRS in manual testing?
- What is test metrics ?
- What is test strategy who will prepare that one? And what will be there in test strategy?
- What is a test plan who will prepare that one?
- What is STLC how many phases are there can you explain them?
- What is the spiral model in manual testing? Can you explain it?
- What is Review?
- What are the objectives of Utility objects?
- What is performance testing?
- Can you explain the structure of the bug life cycle?
- What is stress testing?
- What is the difference between test scenarios and test strategy?
- What is Sanity Testing, Adhoc Testing & Smoke Testing? When will we use each of these tests?
- How will you review a test case, and how many types of review are there?
- Can you explain the use case document?
- What is the difference between smoke testing and sanity testing?
- What is Black Box Testing?
- What is alpha testing and beta testing ?
- What is FSO can you explain?
- What are the objectives of debugging?
- What are functional testing, system testing, and data-driven testing?
- How do you write a test case and a bug report? Please explain with an example.
- What is meant by GUI testing? What is meant by client/server? What is meant by a web-based application?
- What is the test case life cycle?
- How to test the Microsoft Word 2003. What all the major areas to be tested, please explain.
- What is the difference between a bug, an error, and a defect?
- How to do regression testing, and can give one or two examples on that in the same application?
- Explain Microsoft's six rules standard for user interface testing.
- The role of both QA & QC?
- Give exact and appropriate definition of testing.
- How do you write a test case with a minimum of 13 columns?
- How can I do GUI testing? What is its important content? Please describe all the properties of GUI testing.
- Share a particular project where you have been able to learn enough skills to help with testing. (More for developers looking to do testing.)
- What part of the testing phase is the most important part for testing in the cycle?
- How do you carry out manual testing for a background process which doesn't have any user interface?
- What is open beta testing, and at which end is it done? What is the difference between open beta testing and beta testing?
- What is application entry and exit criteria?
- What is visual source safe?
Labels:
Interview Questions
Integration Testing
Testing of combined parts of an application to determine if they function together correctly. Usually performed after unit and functional testing. This type of testing is especially relevant to client/server and distributed systems.
Approaches to integration testing :
Top Down Testing :
An approach to integration testing where the component at the top of the component hierarchy is tested first, with lower level components being simulated by stubs. Tested components are then used to test lower level components. The process is repeated until the lowest level components have been tested.
Bottom Up Testing :
An approach to integration testing where the lowest level components are tested first, then used to facilitate the testing of higher level components. The process is repeated until the component at the top of the hierarchy is tested.
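A minimal sketch of the top-down approach in Python: the higher-level component is exercised while a lower-level dependency is replaced by a stub. All of the function names here are invented for the example.

```python
# Real lower-level component, not yet ready to be integrated.
def fetch_exchange_rate(currency):
    raise NotImplementedError("real rate service is not available yet")

# Stub that simulates the lower-level component during top-down testing.
def fetch_exchange_rate_stub(currency):
    return {"EUR": 0.9, "GBP": 0.8}.get(currency, 1.0)

# Higher-level component under test; its dependency is injected.
def convert(amount, currency, rate_source=fetch_exchange_rate):
    return round(amount * rate_source(currency), 2)

# Top-down test: the top-level component works against the stub.
assert convert(100, "EUR", rate_source=fetch_exchange_rate_stub) == 90.0
print("convert() passes with the stubbed rate service")
```

In bottom-up testing the roles are reversed: the real fetch_exchange_rate would be tested first, driven by a simple test driver, before the higher-level convert exists.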
Labels:
Software Testing Types
Load Testing
Testing conducted to evaluate the compliance of a system or component with specified performance requirements. Often this is performed using an automated test tool to simulate a large number of users. Also known as "load testing".
A load test is usually conducted to understand the behavior of the application under a specific expected load. This load can be the expected number of concurrent users on the application performing a specific number of transactions within a set duration. This test will give the response times of all the important business-critical transactions. If the database, application server, etc. are also monitored, then this simple test can itself point towards bottlenecks in the application.
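A rough sketch of the idea with Python threads is shown below; the transaction function is only a stand-in for a real business transaction, and in practice a dedicated tool (JMeter, LoadRunner, etc.) would drive the application and collect these numbers.

```python
import threading
import time
import random

def transaction():
    # Stand-in for one business transaction, e.g. an HTTP request.
    time.sleep(random.uniform(0.05, 0.2))

def simulated_user(results):
    start = time.perf_counter()
    transaction()
    results.append(time.perf_counter() - start)

results, users = [], 50  # 50 = hypothetical expected concurrent load
threads = [threading.Thread(target=simulated_user, args=(results,)) for _ in range(users)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(f"average response: {sum(results) / len(results):.3f}s, worst: {max(results):.3f}s")
```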
Labels:
Software Testing Types
Volume Testing
Testing which confirms that any values that may become large over time (such as accumulated counts, logs, and data files), can be accommodated by the program and will not cause the program to stop working or degrade its operation in any manner.
For example :
If you want to volume-test your application with a specific database size, you will expand your database to that size and then test the application's performance on it.
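A small sketch of that database example using SQLite from the Python standard library; the row count and the query are illustrative, and a real volume test would target the application's actual database at its specified capacity.

```python
import sqlite3
import time

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, amount REAL)")

# Grow the table to the target volume before measuring behaviour.
rows = 500_000
conn.executemany("INSERT INTO orders (amount) VALUES (?)",
                 [((i % 1000) / 10.0,) for i in range(rows)])
conn.commit()

# Measure a representative query against the full-size data set.
start = time.perf_counter()
total = conn.execute("SELECT SUM(amount) FROM orders").fetchone()[0]
print(f"{rows} rows, SUM computed in {time.perf_counter() - start:.3f}s, total={total:.2f}")
```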
Labels:
Software Testing Types
Stress Testing
This kind of test is done to determine the application's robustness in times of extreme load and helps application administrators to determine if the application will perform sufficiently if the current load goes well above the expected load.
Testing conducted to evaluate a system or component at or beyond the limits of its specified requirements to determine the load under which it fails and how. Often this is performance testing using a very high level of simulated load.
For example :
A web server may be stress tested using scripts, bots, and various denial of service tools to observe the performance of a web site during peak loads.
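The sketch below mimics the core idea in Python: ramp the load well beyond the expected level against a deliberately small fixed-capacity resource and observe where requests start being rejected. The capacity and load steps are arbitrary stand-ins for a real system limit.

```python
import queue

# A small fixed-capacity queue stands in for the system's limit.
server_queue = queue.Queue(maxsize=100)

def submit_requests(n):
    """Try to push n requests; return how many were rejected."""
    rejected = 0
    for _ in range(n):
        try:
            server_queue.put_nowait("request")
        except queue.Full:
            rejected += 1
    return rejected

# Ramp the load past the expected level and record when it breaks.
for load in (50, 100, 200, 400):
    while not server_queue.empty():   # reset between steps
        server_queue.get_nowait()
    print(f"load={load:4d}  rejected={submit_requests(load)}")
```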
Labels:
Software Testing Types
Sunday, March 21, 2010
Software Testing Types
1. Black box testing : Internal system design is not considered in this type of testing. Tests are based on requirements and functionality. (A small example follows this list.)
2. White box testing : This testing is based on knowledge of the internal logic of an application's code. Also known as glass box testing. Internal software and code workings should be known for this type of testing. Tests are based on coverage of code statements, branches, paths, and conditions.
3. Unit testing : Testing of individual software components or modules. Typically done by the programmer and not by testers, as it requires detailed knowledge of the internal program design and code. May require developing test driver modules or test harnesses.
4. Incremental integration testing : A bottom-up approach for testing, i.e. continuous testing of an application as new functionality is added. Application functionality and modules should be independent enough to test separately. Done by programmers or by testers.
5. Integration testing : Testing of integrated modules to verify combined functionality after integration. Modules are typically code modules, individual applications, client and server applications on a network, etc. This type of testing is especially relevant to client/server and distributed systems.
6. Functional testing : This type of testing ignores the internal parts and focuses on whether the output is as per the requirement or not. Black-box type testing geared to the functional requirements of an application.
7. System testing : The entire system is tested as per the requirements. Black-box type testing that is based on the overall requirements specification and covers all combined parts of a system.
8. End-to-end testing : Similar to system testing; involves testing of a complete application environment in a situation that mimics real-world use, such as interacting with a database, using network communications, or interacting with other hardware, applications, or systems if appropriate.
9. Sanity testing : Testing to determine if a new software version is performing well enough to accept it for a major testing effort. If the application crashes on initial use, the system is not stable enough for further testing and the build or application is assigned to be fixed.
10. Regression testing : Testing the application as a whole after the modification of any module or functionality. It is difficult to cover the whole system in regression testing, so automation tools are typically used for this type of testing.
11. Acceptance testing : Normally this type of testing is done to verify that the system meets the customer-specified requirements. Users or customers do this testing to determine whether to accept the application.
12. Load testing : Performance testing to check system behavior under load. Testing an application under heavy loads, such as testing a web site under a range of loads to determine at what point the system's response time degrades or fails.
13. Stress testing : The system is stressed beyond its specifications to check how and when it fails. Performed under heavy load, such as input beyond storage capacity, complex database queries, or continuous input to the system or database.
14. Performance testing : A term often used interchangeably with "stress" and "load" testing. Checks whether the system meets performance requirements. Different performance and load tools are used for this.
15. Usability testing : A user-friendliness check. Application flow is tested: can a new user understand the application easily, and is proper help documented wherever a user might get stuck? Basically, system navigation is checked in this testing.
16. Install/uninstall testing : Tests full, partial, or upgrade install/uninstall processes on different operating systems and under different hardware and software environments.
17. Recovery testing : Testing how well a system recovers from crashes, hardware failures, or other catastrophic problems.
18. Security testing : Can the system be penetrated by any form of hacking? Tests how well the system protects against unauthorized internal or external access, and checks whether the system and database are safe from external attacks.
19. Compatibility testing : Testing how well software performs in a particular hardware/software/operating system/network environment and in different combinations of the above.
20. Comparison testing : Comparison of product strengths and weaknesses with previous versions or other similar products.
21. Alpha testing : An in-house virtual user environment can be created for this type of testing. Testing is done at the end of development. Minor design changes may still be made as a result of such testing.
22. Beta testing : Testing typically done by end users or others. Final testing before releasing the application for commercial use.
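As promised under item 1, a tiny black-box example in Python: the test values are chosen purely from a stated requirement ("the age field accepts whole numbers from 18 to 60") without looking at the implementation. Both the requirement and the validate_age function are invented for illustration.

```python
# Hypothetical unit under test; in black-box testing we treat it as opaque.
def validate_age(age):
    return isinstance(age, int) and 18 <= age <= 60

# Boundary and equivalence-class values derived from the requirement alone.
cases = {17: False, 18: True, 35: True, 60: True, 61: False}

for value, expected in cases.items():
    actual = validate_age(value)
    status = "PASS" if actual == expected else "FAIL"
    print(f"age={value:3d}  expected={expected!s:5}  actual={actual!s:5}  {status}")
```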
Labels:
Software Testing Types
Testing Techniques
1. Static Testing :
i) During static testing, you have a checklist to check whether the work you are doing is going as per the set standards of the organization. These standards can be for coding, integration, and deployment. Reviews, inspections, and walkthroughs are static testing methodologies.
ii) Static testing is a form of software testing where the software isn't actually used.
iii) This is in contrast to Dynamic testing. It is generally not detailed testing, but checks mainly for the sanity of the code, algorithm, or document.
iv) It is primarily syntax checking of the code and/or manual reading of the code or document to find errors.
v) This type of testing can be used by the developer who wrote the code, in isolation. Code reviews, inspections and walkthroughs are also used.
vi) From the black box testing point of view, static testing involves review of requirements or specifications. This is done with an eye toward completeness or appropriateness for the task at hand. This is the verification portion of Verification and Validation.
2. Dynamic Testing :
i) Dynamic testing involves working with the software, giving input values and checking if the output is as expected.
ii) These are the validation activities.
iii) Unit tests, integration tests, system tests and acceptance tests are a few of the dynamic testing methodologies.
iv) In dynamic testing the software must actually be compiled and run; this is in contrast to static testing.
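To make the contrast concrete, here is a small Python sketch: the static step only compiles (syntax-checks) the source without executing it, while the dynamic step actually runs it with an input and checks the output. The snippet being checked is invented for the example.

```python
source = """
def square(n):
    return n * n
"""

# Static check: compile the source without executing it (syntax only).
compiled = compile(source, "<example>", "exec")
print("static check passed: the source compiles")

# Dynamic check: actually execute the code and verify behaviour for an input.
namespace = {}
exec(compiled, namespace)
assert namespace["square"](4) == 16
print("dynamic check passed: square(4) == 16")
```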
Labels:
Testing Techniques
What should be tested in web site testing?
1. Functionality :
Links:
a)All Internal Links
b)All External Links
c)All mail links
d)Check for broken links (see the link-checker sketch after this checklist)
Forms:
a)All Field Level Checks
b)All Field Level Validations
c)Functionality of Create, Modify, Delete & View
d)Handling of wrong inputs (appropriate error messages must be displayed)
e)Optional and mandatory fields checks
2. Usability :
Navigation:
a)Application navigation is proper through tab
b)Navigation through Mouse
c)Main features accessible from the main/home page
d)Any hot keys, control keys to access menus
Content:
a)Spelling and grammar
b)Updated information
General Appearance:
a)Page appearance (e.g. overlapping or missing elements)
b)Color, font and size
c)Consistent design
3. Server Side Interfaces :
Server Interface:
a)Verify that communication is done correctly: web server to application server, application server to database server, and vice versa.
b)Compatibility of server software, hardware, network connections
c)Database compatibility (SQL, Oracle etc.)
4. Client Side Compatibility :
Platform:
Check for the compatibility of
a)Windows (98, 2000, NT)
b)Unix (different sets)
c)Macintosh (If applicable)
d)Linux
e)Solaris (If applicable)
Browsers:
Check for the various combinations
a)Internet Explorer (5.X, 6.X, 7.X)
b)Netscape Navigator
c)AOL
d)Mozilla
e)Browser settings
Graphics:
a)Loading of images, graphics, etc.
Printing:
a)Text and image alignment
b)Colors of text, foreground and background
c)Scalability to fit paper size
d)Tables and borders
Performance:
a)Connection speed: try with various connection speeds; check time-outs
b)Load :
Check/Measure the following:
What is the estimated number of users per time period and how will it be divided over the period?
Will there be peak loads and how will the system react?
Can your site handle a large amount of users requesting a certain page?
Large amounts of data from users.
c)Stress:
Stress testing is done in order to actually break a site or a certain feature to determine how the system reacts.
Stress tests are designed to push and test system limitations and determine whether the system recovers gracefully from crashes. Hackers often stress systems by providing loads of bad input data until the system crashes, and then gain access to it during start-up.
a. Typical areas to test are forms, logins or other information transaction components.
b. Performance of memory, CPU, file handling etc.
c. Errors in software, hardware, and memory (leaks, overwrites, or pointer errors)
d)Continuous use:
Is the application or certain features going to be used only during certain periods of time or will it be used continuously 24 hours a day 7 days a week?
Verify that the application is able to meet the requirements and does not run out of memory or disk space.
5. Security :
a)Valid and Invalid Login
b)Is a limit defined for the number of login attempts?
c)Can the login be bypassed by typing the URL of an internal page directly into the browser?
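Here is the broken-link sketch referenced in the Functionality section, using only Python's standard library; the URL list is hypothetical, and a real check would extract the links from the pages under test (or use a dedicated link-checking tool).

```python
from urllib.request import Request, urlopen
from urllib.error import HTTPError, URLError

# Hypothetical list of links collected from the site under test.
links = [
    "https://example.com/",
    "https://example.com/about",
    "https://example.com/no-such-page",
]

for url in links:
    try:
        with urlopen(Request(url, method="HEAD"), timeout=10) as resp:
            print(f"{resp.status}  OK      {url}")
    except HTTPError as err:      # e.g. 404 -> broken link
        print(f"{err.code}  BROKEN  {url}")
    except URLError as err:       # DNS failure, refused connection, timeout
        print(f"ERR  BROKEN  {url} ({err.reason})")
```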
Labels:
General
Saturday, March 20, 2010
Difference between Smoke and Sanity testing
Smoke : Smoke testing originated in the hardware testing practice of turning on a new piece of hardware for the first time and considering it a success if it does not catch fire and smoke. In the software industry, smoke testing is a shallow-and-wide approach whereby all areas of the application are tested without going into too much depth.
Sanity : A sanity test is a narrow regression test that focuses on one or a few areas of functionality. Sanity testing is usually narrow and deep.
Smoke : A smoke test is scripted, either using a written set of tests or an automated test.
Sanity : A sanity test is usually unscripted.
Smoke : A smoke test is designed to touch every part of the application in a cursory way. It is shallow and wide.
Sanity : A sanity test is used to determine that a small section of the application is still working after a minor change.
Smoke : Smoke testing is conducted to ensure that the most crucial functions of a program work, without bothering with finer details (such as build verification).
Sanity : Sanity testing is a cursory testing; it is performed whenever a cursory testing is sufficient to prove the application is functioning according to specifications. This level of testing is a subset of regression testing.
Smoke : Smoke testing is a normal health check-up of a build of an application before taking it into in-depth testing.
Sanity : Sanity testing verifies whether the requirements are met or not, checking all features breadth-first.
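As a tiny illustration of a scripted smoke check, shallow and wide, the sketch below just confirms that each main area of a web application responds at all before deeper testing starts; the endpoints are hypothetical.

```python
from urllib.request import urlopen
from urllib.error import HTTPError, URLError

# Hypothetical key areas of the application: one shallow check each.
smoke_checks = {
    "home page": "https://example.com/",
    "login page": "https://example.com/login",
    "search": "https://example.com/search?q=test",
}

failures = []
for name, url in smoke_checks.items():
    try:
        with urlopen(url, timeout=10) as resp:
            ok = resp.status == 200
    except (HTTPError, URLError):
        ok = False
    print(f"{name:12s} {'OK' if ok else 'FAIL'}")
    if not ok:
        failures.append(name)

# The build is rejected for deeper testing if any basic area fails.
print("smoke result:", "PASS" if not failures else "FAIL: " + ", ".join(failures))
```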
Labels:
Software Testing Types
Difference between Verification and Validation
Verification : Am I building the product right?
Validation : Am I building the right product?
Verification : The review of interim work steps and interim deliverables during a project to ensure they are acceptable. To determine if the system is consistent, adheres to standards, uses reliable techniques and prudent practices, and performs the selected functions in the correct manner.
Validation : Determining if the system complies with the requirements and performs functions for which it is intended and meets the organization’s goals and user needs. It is traditional and is performed at the end of the project.
Verification : Am I accessing the data right (in the right place; in the right way)?
Validation : Am I accessing the right data (in terms of the data required to satisfy the requirement)?
Verification : Low level activity
Validation : High level activity
Verification : Performed during development on key artifacts, like walkthroughs, reviews and inspections, mentor feedback, training, checklists and standards
Validation : Performed after a work product is produced against established criteria ensuring that the product integrates correctly into the environment
Verification : Demonstration of consistency, completeness, and correctness of the software at each stage and between each stage of the development life cycle.
Validation : Determination of correctness of the final software product by a development project with respect to the user needs and requirements.
Labels:
Verification And Validation
Difference between QA and QC
QA : Process
QC : Product
QA : Proactive
QC : Reactive
QA : Staff function
QC : Line function
QA : Prevent defects
QC : Find defects
QA : Examples - Quality audit, Defining processes, Selection of tools, Training.
QC : Examples - Walkthrough, Testing, Inspection, Checkpoint review.
Labels:
Basic Concepts
What is Quality?
1. A high degree of excellence,
2. Conformance to requirements,
3. Fitness for use.
Labels:
Basic Concepts
Software Testing Life Cycle
Software testing life cycle identifies what test activities to carry out and when (what is the best time) to accomplish those test activities. Even though testing differs between organizations, there is a testing life cycle.
The Software Testing Life Cycle consists of the following phases:
• Test Planning,
• Test Analysis,
• Test Design,
• Construction and verification,
• Testing Cycles,
• Final Testing and Implementation and
• Post Implementation.
Software testing has its own life cycle that intersects with every stage of the SDLC. The basic requirement in the software testing life cycle is to control and manage software testing: manual, automated, and performance.
1. Test Planning
This is the phase where the project manager has to decide what needs to be tested, whether the budget is appropriate, and so on. Naturally, proper planning at this stage greatly reduces the risk of low-quality software. This planning is an ongoing process with no end point.
Activities at this stage include preparation of a high-level test plan. (According to the IEEE test plan template, the Software Test Plan (STP) is designed to prescribe the scope, approach, resources, and schedule of all testing activities. The plan must identify the items to be tested, the features to be tested, the types of testing to be performed, the personnel responsible for testing, the resources and schedule required to complete testing, and the risks associated with the plan.) Almost all of the activities done during this stage are included in this software test plan and revolve around the test plan.
2. Test Analysis
Once the test plan is made and decided upon, the next step is to delve a little deeper into the project and decide what types of testing should be carried out at different stages of the SDLC, whether we need or plan to automate (and if so, when the appropriate time to automate is), and what specific documentation is needed for testing.
Proper and regular meetings should be held between the testing team, project managers, development teams, and business analysts to check the progress of things; this gives a fair idea of the movement of the project, ensures the completeness of the test plan created in the planning phase, and helps refine the testing strategy created earlier. We also start creating test case formats and the test cases themselves. In this stage we need to develop a functional validation matrix based on the business requirements to ensure that all system requirements are covered by one or more test cases, identify which test cases to automate, and begin review of documentation, i.e. functional design, business requirements, product specifications, product externals, etc. We also have to define areas for stress and performance testing.
3. Test Design
Test plans and cases which were developed in the analysis phase are revised. The functional validation matrix is also revised and finalized. In this stage the risk assessment criteria are developed. If automation is planned, you have to select which test cases to automate and begin writing scripts for them. Test data is prepared. Standards for unit testing and pass/fail criteria are defined here. The schedule for testing is revised (if necessary) and finalized, and the test environment is prepared.
4. Construction and verification
In this phase we have to complete all the test plans and test cases, finish the scripting of the automated test cases, and complete the stress and performance testing plans. We have to support the development team in their unit testing phase, and bugs are reported as and when they are found. Integration tests are performed and errors (if any) are reported.
5. Testing Cycles
In this phase we have to complete testing cycles until the test cases execute without errors or a predefined condition is reached: run test cases --> report bugs --> revise test cases (if needed) --> add new test cases (if needed) --> fix bugs --> retest (test cycle 2, test cycle 3, ...). A toy sketch of this loop appears after this section.
6. Final Testing and Implementation
In this phase we have to execute the remaining stress and performance test cases, complete and update the testing documentation, and provide and complete the different testing metrics. Acceptance, load, and recovery testing will also be conducted, and the application needs to be verified under production conditions.
7. Post Implementation
In this phase, the testing process is evaluated and lessons learnt from it are documented. The line of attack to prevent similar problems in future projects is identified, and plans to improve the processes are created. The recording of new errors and enhancements is an ongoing process. The test environment is cleaned up and test machines are restored to baselines in this stage.
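Here is the toy sketch of the testing-cycle loop from phase 5: it simulates running a suite, counting failures, and repeating until everything passes or a predefined cycle limit is reached. The pass probabilities and cycle limit are made up purely to illustrate the control flow.

```python
import random

# Stand-in for the real suite: each "test case" passes with a probability
# that improves between cycles as defects are fixed.
def run_suite(pass_probability):
    return {f"TC-{i:03d}": random.random() < pass_probability for i in range(1, 21)}

max_cycles = 5           # predefined stopping condition
pass_probability = 0.6   # improves each cycle as bugs are fixed

for cycle in range(1, max_cycles + 1):
    results = run_suite(pass_probability)
    failed = [tc for tc, ok in results.items() if not ok]
    print(f"cycle {cycle}: {len(results) - len(failed)} passed, {len(failed)} failed")
    if not failed:
        print("all test cases passed; testing cycles complete")
        break
    pass_probability = min(1.0, pass_probability + 0.15)  # defects fixed before next cycle
else:
    print("predefined cycle limit reached with failures remaining:", failed)
```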
What is a Defect?
Nonconformance to requirements or functional / program specification.
Labels:
Basic Concepts
What is an Error?
1. A discrepancy between expected and actual result.
2. The occurrence of an incorrect result produced by a computer.
3. An error is a difference between a computed, estimated, or measured value and the true, specified, or theoretically correct value.
4. In software engineering, the term error refers to an incorrect action or calculation performed by software as a result of a fault.
If, as a result of the error, the software performs an undesired action or fails to perform a desired action, then this is referred to as a failure.
Labels:
Basic Concepts
What is a Bug?
A fault in a program which causes the program to perform in an unintended or unanticipated manner.
1. An error or defect in software or hardware that causes a program to malfunction. Often a bug is caused by conflicts in software when applications try to run in tandem. According to folklore, the first computer bug was an actual bug. Discovered in 1945 at Harvard, a moth trapped between two electrical relays of the Mark II Aiken Relay Calculator caused the whole machine to shut down.
2. A software bug is an error, flaw, mistake, failure, or fault in a computer program that prevents it from working as intended, or produces an incorrect result. Bugs arise from mistakes and errors, made by people, in either a program's source code or its design. A program that contains a large number of bugs, and/or bugs that seriously interfere with its functionality, is said to be buggy. Reports about bugs in a program are referred to as bug reports, also called PRs (problem reports), trouble reports, CRs (change requests), and so forth.
Bugs can have a wide variety of effects, with varying levels of inconvenience to the user of the program. Some bugs have only a subtle effect on the program's functionality, and may thus lie undetected for a long time. More serious bugs may cause the program to crash or freeze. Other bugs lead to security problems; for example, a common type of bug which allows a buffer overflow may allow a malicious user to execute other programs that are normally not allowed to run.
3. A bug is an error or something unexpected left by the programmer. For example, if a programmer asks the user for input between 3 and 5 but the user enters "7", the software isn't prepared for that input.
This would generally cause a script error or similar failure.
It is also the term referring to the condition where a dynamic link in the software causes the system on the user's machine to crash, hang, or simply stop responding.
Labels:
Basic Concepts