Testing as a Service (TaaS, typically pronounced 'tass') is a model of software testing in which a service provider performs software testing of applications/solutions for customers as an on-demand service. TaaS involves the on-demand execution of well-defined suites of test material, generally on an outsourced basis. The execution can be performed either on the client site or remotely from the outsourced provider's test lab/facilities.
To know more please visit TaaS
Tuesday, October 19, 2010
Cloud Testing Service Features
To know about Cloud Testing Service Features please visit Cloud Testing Service Features
Labels:
Cloud Testing
Monday, October 18, 2010
Skills Needed for Building Clouds
By Mike DiPetrillo, Global Cloud Architect, VMware
1) Networking - Networking is about the most complex piece of VMware's cloud tools. Our product manager likes to call it "flexible" which it really is (and powerful) but it's also complex. Giving end users the ability to configure their own network segments on-the-fly complete with VLAN IDs is something that would scare most network admins and yet this is something that we need to tackle to get to "true cloud". I usually suggest to customers that they go and engage their network team early on in the cloud building process and then recruit the best of the networking engineers to be on the cloud team.
2) Storage - Storage is another area that can get complex. How do you make it so end users don't have to care about the underlying storage and yet land on the right volume from a performance perspective? And don't even get me started on movement of data from one place to another or backup. All of these things are going to require an ace storage engineer on the cloud team.
3) Programming Skills - You don't need some uber code monkey on the team but you do need someone that understands APIs, how to use them, and how you would go about plugging everything together. Automation is the name of the game in the guts of cloud and that's why tools like BMC Atrium Orchestrator, VMware vCenter Orchestrator, and HPOO have become centerpieces in the cloud. Most of these are based on Java or JavaScript so find someone who can at least start there. And since nearly everything in cloud land seems to be going the path of REST it would be great to get someone that knows that and XML really well (a tiny REST-plus-XML sketch follows below).
So those are my three core skill sets that I tell people to go out and find. There are more you could add to the list such as security or billing or portal design but those can be from people that augment the core team. If you find people in the above core skill sets then you'll be well on your way to architecting a successful cloud build out.
To know more please visit VMware
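As a hedged illustration of that REST-plus-XML skill set (this is not a VMware API; the endpoint and XML layout below are invented for illustration), here is roughly what such plumbing looks like in Python:

import urllib.request
import xml.etree.ElementTree as ET

url = "https://cloud.example.com/api/v1/vms"  # hypothetical cloud REST endpoint

with urllib.request.urlopen(url) as response:
    body = response.read()

root = ET.fromstring(body)
for vm in root.findall("vm"):  # assumes the response lists <vm> elements
    print(vm.get("name"), vm.findtext("status"))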
Labels:
Basic Concepts,
Cloud Testing
Monday, September 13, 2010
Configuration Management Database
A configuration management database (CMDB) is a repository of information related to all the components of an information system. Although repositories similar to CMDBs have been used by IT departments for many years, the term CMDB stems from
ITIL (Information Technology Infrastructure Library). In the ITIL context, a CMDB represents the authorized configuration of the significant components of the IT environment. A CMDB helps an organization understand the relationships between these components and track their configuration. The CMDB is a fundamental component of the ITIL framework's Configuration Management process. CMDB implementations often involve federation, the inclusion of data into the CMDB from other sources, such as Asset Management, in such a way that the source of the data retains control of the data. Federation is usually distinguished from ETL (extract, transform, load) solutions in which data is copied into the CMDB.
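To make the idea concrete, a configuration item (CI) in a CMDB can be pictured as a record with attributes plus typed relationships to other CIs. A minimal sketch (the field names are invented for illustration and do not come from any particular CMDB product):

from dataclasses import dataclass, field

@dataclass
class ConfigurationItem:
    ci_id: str
    ci_type: str          # e.g. "server", "database", "application"
    attributes: dict = field(default_factory=dict)
    relationships: list = field(default_factory=list)  # (relation, target ci_id) pairs

server = ConfigurationItem("CI-001", "server", {"os": "Linux"})
app = ConfigurationItem("CI-002", "application", {"version": "1.4"},
                        relationships=[("runs_on", "CI-001")])

# Walking the relationships answers "what does this application depend on?"
for relation, target in app.relationships:
    print(app.ci_id, relation, target)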
Labels:
Basic Concepts
Release Management
Release Management is the relatively new but rapidly growing discipline within software engineering of managing software releases.
As software systems, software development processes, and resources become more distributed, they invariably become more specialized and complex. Furthermore, software products (especially web applications) are typically in an ongoing cycle of development, testing, and release. Add to this an evolution and growing complexity of the platforms on which these systems run, and it becomes clear there are a lot of moving pieces that must fit together seamlessly to guarantee the success and long-term value of a product or project.
The need therefore exists for dedicated resources to oversee the integration and flow of development, testing, deployment, and support of these systems. Although project managers have done this in the past, they generally are more concerned with high-level, "grand design" aspects of a project or application, and so often do not have time to oversee some of the more technical or day-to-day aspects. Release Managers (aka "RMs") address this need. They must have a general knowledge of every aspect of the Software Development Life Cycle (SDLC), various applicable operating systems and software applications or platforms, as well as various business functions and perspectives.
A Release Manager is:
1) Facilitator – serves as a liaison between varying business units to guarantee smooth and timely delivery of software products or updates.
2) Gatekeeper – “holds the keys” to production systems/applications and takes responsibility for their implementations.
3) Architect – helps to identify, create and/or implement processes or products to efficiently manage the release of code.
4) Server Application Support Engineer – helps troubleshoot problems with an application (although not typically at a code level).
5) Coordinator – utilized to coordinate disparate source trees, projects, teams and components.
Some of the challenges facing a Software Release Manager include the management of:
1) Software Defects
2) Issues
3) Risks
4) Software Change Requests
5) New Development Requests (additional features and functions)
6) Deployment and Packaging
7) New Development Tasks
Labels:
Basic Concepts
Sunday, August 22, 2010
Vaporware
Vaporware is a word used to describe products, usually computer hardware or software, that were not released on the date announced by their developer, or that were announced months or years before their release. Application of the word usually implies a negative opinion of a product, and uncertainty that it will eventually be released. The word has been applied to a growing range of products including consumer electronics, automobiles, and some stock trading practices.
Labels:
Basic Concepts
Friday, August 20, 2010
Paper Launch
A paper launch is the situation in which a product is compared or tested against other products, despite the fact that it is not available to the public at the time. Generally the term is applied to the computer and gaming industry, although it is not limited to that.
Labels:
Basic Concepts
Tuesday, August 17, 2010
What is a staging area?
The staging area is:
1. One or more database schema(s) or file stores used to “stage” data extracted from the source OLTP systems prior to being published to the “warehouse” where it is visible to end users.
2. Data in the staging area is NOT visible to end users for queries, reports or analysis of any kind. It does not hold completed data ready for querying.
3. It may hold intermediate results (if data is pipelined through a process).
4. Equally, it may hold "state" data – the keys of the data held in the warehouse, used to detect whether incoming data includes new, updated, or (for that matter) deleted rows – see the sketch at the end of this post.
5. It is likely to be equal in size to (or maybe larger than) the "presentation area" itself.
6. Although the "state" data (e.g., the last sequence loaded) may be backed up, much of the staging area data is automatically replaced during the ETL load processes and can, with care, avoid adding to the backup effort. The presentation area, however, may need backup in many cases.
7. It may include some metadata, which may be used by analysts or operators monitoring the state of previous loads (e.g., audit information, summary totals of rows loaded, etc.).
8. It’s likely to hold details of “rejected” entries – data which has failed quality tests, and may need correction and re-submission to the ETL process.
9. It’s likely to have few indexes (compared to the “presentation area”), and hold data in a quite normalised form. The presentation area (the bit the end users see), is by comparison likely to be more highly indexed (mainly bitmap indexes), with highly denormalised tables (the Dimension tables anyway).
The staging area exists to be a separate "back room" or "engine room" of the warehouse where the data can be transformed, corrected and prepared for the warehouse.
It should ONLY be accessible to the ETL processes working on the data, or administrators monitoring or managing the ETL process.
In summary, a typical warehouse generally has three distinct areas:
1. Several source systems which provide data. These can include databases (Oracle, SQL Server, Sybase, etc.), files, or spreadsheets.
2. A single “staging area” which may use one or more database schemas or file stores (depending upon warehouse load volumes).
3. One or more “visible” data marts or a single “warehouse presentation area” where data is made visible to end user queries. This is what many people think of as the warehouse – although the entire system is the warehouse – it depends upon your perspective.
The “staging area” is the middle bit.
The staging area is the place where you hold temporary tables on the data warehouse server. Staging tables are connected to the work area or fact tables. We basically need a staging area to hold the data and perform data cleansing and merging before loading the data into the warehouse.
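The use of staged keys to classify incoming rows (point 4 above) is easy to picture in SQL. A minimal sketch using an in-memory SQLite database (table and column names are invented for illustration):

import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE stage_customers (cust_id INTEGER, name TEXT);  -- incoming batch
CREATE TABLE wh_keys (cust_id INTEGER);                     -- keys already in the warehouse
INSERT INTO stage_customers VALUES (1, 'Ann'), (2, 'Bob');
INSERT INTO wh_keys VALUES (1);
""")

# Rows whose key already exists in the warehouse are updates; the rest are new.
updates = conn.execute(
    "SELECT s.cust_id FROM stage_customers s JOIN wh_keys k ON k.cust_id = s.cust_id"
).fetchall()
new_rows = conn.execute(
    "SELECT s.cust_id FROM stage_customers s LEFT JOIN wh_keys k "
    "ON k.cust_id = s.cust_id WHERE k.cust_id IS NULL"
).fetchall()
print("updates:", updates, "new:", new_rows)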
Labels:
General
What is the methodology and process followed for ETL testing in Data warehouse environment?
Typically there is an ETL spec: a document containing the source table (schema name) and the target table (another schema name), along with the logic used in transforming source to target. We have to write a database query with the logic contained in the document against the source schema and take its output. Then write a simple SELECT statement against the target schema and take its output. Compare the two outputs: if they are the same, well and fine; else it's a bug.
Database knowledge is a must for ETL testing.
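A sketch of that comparison in Python with an in-memory SQLite database (the tables and the transformation rule are invented for illustration; in practice they come from the ETL spec):

import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE src_orders (order_id INTEGER, amount REAL);      -- source schema
CREATE TABLE tgt_orders (order_id INTEGER, amount_usd REAL);  -- target schema
INSERT INTO src_orders VALUES (1, 10.0), (2, 20.0);
INSERT INTO tgt_orders VALUES (1, 10.0), (2, 20.0);
""")

# Apply the spec's transformation logic to the source (here a trivial 1:1 rule)...
expected = conn.execute(
    "SELECT order_id, amount * 1.0 AS amount_usd FROM src_orders ORDER BY order_id"
).fetchall()
# ...and read back what the ETL actually loaded into the target.
actual = conn.execute(
    "SELECT order_id, amount_usd FROM tgt_orders ORDER BY order_id"
).fetchall()

# Identical outputs mean the load is fine; any difference is a bug to raise.
assert expected == actual, "transformation mismatch: %s" % (set(expected) ^ set(actual))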
Labels:
General
What are the things to consider while testing ETL ?
The process of testing a web-based application and an ETL application is quite different. The major difference is that in a web-based application we test the GUI part of the application as well as the main functionality, but in ETL testing the starting point is different.
First, we have to understand the source structure: how many records are coming from the source and how many records are loaded into the target. This is the basic motivation for testing the ETL. Then, how many records are being rejected, and what is the reason for the rejections? (A simple reconciliation sketch follows the checklist below.)
Second, we have to perform back-end, data-driven testing.
We have to test ETL components by executing SQL and PL/SQL queries.
We have to verify that the mapping naming convention is followed with respect to the SRS.
Things to be considered:
1. Check that we can get the existing data
2. Check that we can clean up the data
3. Check that we can add new data
4. Check that we can merge data
5. Check the limitations of the data
6. Check the security controls on using the data
7. Check how much time extraction takes when the data volume goes over its limits
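The record-count reconciliation mentioned above can be automated as one balance check: every source record must end up either loaded or rejected. A hedged sketch (in practice the three counts come from SQL queries against the source, target, and reject tables):

def reconcile(source_count, loaded_count, rejected_count):
    """Every source record must be accounted for as loaded or rejected."""
    missing = source_count - (loaded_count + rejected_count)
    if missing != 0:
        raise AssertionError("%d records unaccounted for" % missing)
    return {"loaded": loaded_count, "rejected": rejected_count}

# Example: 1,000 source records, 990 loaded, 10 rejected -- the counts balance.
print(reconcile(1000, 990, 10))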
Labels:
General
ETL CHANNEL
Extract, transform and load (ETL) is the core process of data integration and is typically associated with data warehousing. ETL tools extract data from a chosen source(s), transform it into new formats according to business rules, and then load it into target data structure(s). The increasing diversity of data sources and the high volumes of data that ETL must accommodate make management, performance, and cost the primary challenges for users. The traditional ETL approach requires users to map each physical data item to a unique metadata description; newer ETL tools allow the user to create an abstraction layer of common business definitions and map all similar data items to the same definition before applying target-specific business rules, isolating business rules from data and allowing easier ETL management.
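In miniature, the extract-transform-load flow looks like this (the business rule and the data are invented for illustration):

# Extract: pull raw rows from a source (a hard-coded list standing in for a
# database extract or flat file).
source_rows = [{"name": " ann ", "amount": "10"}, {"name": "BOB", "amount": "20"}]

# Transform: apply business rules -- normalise names, cast amounts to numbers.
def transform(row):
    return {"name": row["name"].strip().title(), "amount": float(row["amount"])}

# Load: write into the target structure (a list standing in for a warehouse table).
target_table = [transform(r) for r in source_rows]
print(target_table)  # [{'name': 'Ann', 'amount': 10.0}, {'name': 'Bob', 'amount': 20.0}]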
Labels:
General
ETL Testing
Extract, Transform and Load (ETL)
General goals of testing an ETL application:
1. Data completeness. Ensures that all expected data is loaded.
2. Data transformation. Ensures that all data is transformed correctly according to business rules and/or design specifications.
3. Data quality. Ensures that the ETL application correctly rejects, substitutes default values, corrects or ignores and reports invalid data.
4. Performance and scalability. Ensures that data loads and queries perform within expected time frames and that the technical architecture is scalable.
5. Integration testing. Ensures that the ETL process functions well with other upstream and downstream processes.
6. User-acceptance testing. Ensures the solution meets users' current expectations and anticipates their future expectations.
7. Regression testing. Ensures existing functionality remains intact each time a new release of code is completed.
For more details please visit ETL Testing
Labels:
General
Tuesday, August 10, 2010
Infrastructure as a Service (IaaS)
IaaS provides and maintains the underlying hardware, operating system, and network infrastructure resources, and delivers them in a virtualized, easy-to-manage, commoditized way. IaaS doesn't care about the application at all.
IaaS is the base of the Cloud Computing paradigm; many people confuse cloud computing with IaaS, and others use the term Cloud Computing when they are in fact talking about IaaS. IaaS has also been referred to as "Everything as a Service" and "Hardware as a Service".
IaaS offers CPU, memory, storage, networking and security as a package. IaaS is the virtual machine in the sky.
With IaaS, you can choose from a range of predefined virtual machines and load packaged operating system images onto them.
Well-known and widely trusted IaaS providers that offer services to the general public include Amazon, Joyent, GoGrid, FlexiScale, and Rackspace Cloud.
Amazon is probably the best known of the providers; Joyent is also huge and hosts some Facebook applications and the social network LinkedIn, among others.
By moving your infrastructure to "the cloud", you can scale as if you owned your own hardware and data center (which is not realistic with a traditional hosting provider) while keeping upfront costs to a minimum.
Benefits of IaaS:
1. Ability to scale on demand, instantly.
2. Per-hour billing; you only pay for what you use.
3. Ideal for startup businesses, where one of the most difficult things to do is keep capital expenditures under control.
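As an illustration of the on-demand model, provisioning a machine is typically one API call. A hedged sketch against Amazon EC2 using the boto3 library (the AMI ID is a placeholder; real values depend on your account and region):

import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Request one small instance; per-hour billing starts only while it runs.
response = ec2.run_instances(
    ImageId="ami-12345678",  # placeholder image ID
    InstanceType="t2.micro",
    MinCount=1,
    MaxCount=1,
)
print(response["Instances"][0]["InstanceId"])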
Labels:
Upcoming Technology
Database as a Service (DaaS)
A new emerging option called database-as-a-service (DaaS) hosts databases in the cloud and is a good fit for some new apps. Amazon, Google, IBM, Microsoft, Oracle, and Salesforce.com, as well as small innovators such as EnterpriseDB, LongJump, and Elastra, are all targeting the DaaS market. Although most of today's DaaS solutions are very simple, in the next two to three years more sophisticated offerings will evolve to support larger and more complex apps.
What Is DaaS?
DaaS provides traditional database features, typically data definition, storage and retrieval, on a subscription basis over the web. To subscribers DaaS appears as a black box supporting logical data operations, and logical data stores where customers can only see their organization's data. Physical access is seen as a security risk and thus it is not available. As with SaaS, DaaS vendors build and manage data centers incorporating best practices in security, back-up, recovery and customer support. Data services typically are provided as SOAP or REST APIs allowing users to define data structures, perform CRUD operations, manage entitlements and query the database using a subset of standard SQL.
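From the subscriber's side, CRUD over such a REST data API might look like the following sketch (the endpoint and JSON shape are hypothetical; each DaaS vendor defines its own):

import json
import urllib.request

BASE = "https://daas.example.com/api/records"  # hypothetical DaaS endpoint

# Create: POST a new record.
req = urllib.request.Request(
    BASE,
    data=json.dumps({"name": "Ann", "city": "Pune"}).encode(),
    headers={"Content-Type": "application/json"},
    method="POST",
)
with urllib.request.urlopen(req) as resp:
    record = json.load(resp)  # assumes the service echoes back the stored record

# Read: GET the record back by its id.
with urllib.request.urlopen("%s/%s" % (BASE, record["id"])) as resp:
    print(json.load(resp))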
Labels:
Upcoming Technology
Software as a service (SaaS)
Software as a service (SaaS, typically pronounced [sæs]), sometimes referred to as "software on demand," is software that is deployed over the internet and/or is deployed to run behind a firewall on a local area network or personal computer. With SaaS, a provider licenses an application to customers as a service on demand, through a subscription or a "pay-as-you-go" model.
SaaS was initially widely deployed for sales force automation and Customer Relationship Management (CRM). Now, it has become commonplace for many business tasks, including computerized billing, invoicing, human resource management, financials, content management, collaboration, document management, and service desk management.
Labels:
Upcoming Technology
Upcoming Technology
The following are a few upcoming technologies:
1) Cloud Computing
2) SaaS (Software as a Service)
3) DaaS (Database as a Service)
4) PaaS (Platform as a Service)
5) IaaS (Infrastructure as a Service)
6) IDM (Identity Management)
Labels:
Upcoming Technology
Sunday, August 1, 2010
Cloud testing: attracting demand
Cloud Testing is a form of software testing wherein testing is done using resources, machines or servers from the cloud infrastructure. Moreover, entire testing environments can be obtained from the cloud on demand, at a cost that is practical and reasonable due to the pay-for-use nature of cloud computing, and with a lead time that is near impossible within a company's own data center.
Initially, this concept took shape when companies started using numerous machines booted up in the cloud in order to simulate web traffic and carry out performance tests on Web sites. Now remote machines in the cloud are used to provide a common ground for testers to test and developers to isolate and resolve the observed software defects.
Historically, cloud testing has been used to refer to load and performance testing of Web sites. However, with the increasing maturity of the technology, all kinds of enterprise software can be tested for functional and performance issues before going in for full-fledged enterprise deployment.
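At its simplest, simulating user traffic means many concurrent clients timing their requests. A toy single-machine sketch (a real cloud test fans this out across many rented machines; the URL is a placeholder):

import time
from concurrent.futures import ThreadPoolExecutor
from urllib.request import urlopen

URL = "https://www.example.com/"  # placeholder site under test

def one_request(_):
    start = time.time()
    with urlopen(URL) as resp:
        resp.read()
    return time.time() - start

# 50 simulated concurrent users issuing 200 requests in total.
with ThreadPoolExecutor(max_workers=50) as pool:
    timings = list(pool.map(one_request, range(200)))

print("avg %.3fs, worst %.3fs" % (sum(timings) / len(timings), max(timings)))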
For more information please visit: Cloud testing: attracting demand
Labels:
Cloud Testing
What Cloud Testing offers
Cloud Testing offers services that allow developers, testers and website managers to test their websites using industry standard frameworks and real browsers. Cloud Testing provides this using a SaaS (Software as a Service) model. There’s no need to invest in any hardware, software or consultancy.
For more information please visit the link below:
What Cloud Testing offers
Labels:
Cloud Testing
Cloud Testing
Cloud Testing is a form of software testing in which Web applications leverage cloud computing environments ("the cloud") to simulate real-world user traffic as a means of load testing and stress testing web sites.
Testing in the cloud is often discussed in the context of performance or load tests against cloud-based applications. However, all types of software application tests, be they performance, functionality, usability, etc., are eligible to be referred to as 'cloud testing'.
'The testing entity is targeting an application which resides on a third-party computing platform and is accessing that platform across the internet.'
Leading cloud computing service providers include, among others, Amazon, 3Tera, Skytap, and SOASTA. Some keys to successful testing in the cloud include:
1. understanding a platform provider's elasticity model/dynamic configuration method,
2. staying abreast of the provider's evolving monitoring services and Service Level Agreements (SLAs),
3. potentially engaging the service provider as an on-going operations partner if producing commercial off-the-shelf (COTS) software, and
4. being willing to be used as a case study by the cloud service provider. The latter may lead to cost reductions.
Labels:
Cloud Testing
Saturday, June 12, 2010
What is a test scenario?
1) The terms "test scenario" and "test case" are often used synonymously.
2) Test scenarios are test cases or test scripts, and the sequence in which they are to be executed.
3) Test scenarios are test cases that ensure that all business process flows are tested from end to end.
4) Test scenarios are independent tests, or a series of tests that follow each other, where each is dependent upon the output of the previous one.
5) Test scenarios are prepared by reviewing functional requirements, and preparing logical groups of functions that can be further broken into test procedures.
6) Test scenarios are designed to represent both typical and unusual situations that may occur in the application.
7) Test engineers define unit test requirements and unit test scenarios. Test engineers also execute unit test scenarios. It is the test team that, with assistance of developers and clients, develops test scenarios for integration and system testing.
8) Test scenarios are executed through the use of test procedures or scripts. Test procedures or scripts define a series of steps necessary to perform one or more test scenarios. Test procedures or scripts may cover multiple test scenarios.
Labels:
Basic Concepts
Scenario Testing
Scenario testing is a software testing activity that uses scenario tests, or simply scenarios, which are based on a hypothetical story to help a person think through a complex problem or system for a testing environment.
The ideal scenario has five key characteristics: it is (a) a story that is (b) motivating, (c) credible, (d) complex, and (e) easy to evaluate.
These tests are usually different from test cases in that test cases are single steps whereas scenarios cover a number of steps. Test suites and scenarios can be used in concert for complete system testing.
Labels:
Basic Concepts
Thursday, June 10, 2010
Client-server Applications And Web-based Applications
Client-server applications are loaded on the server. An .exe is installed on every client to call the application.
Web-based applications are also loaded on the server, but no .exe is installed on the client machine. Instead, the client calls the application through a web browser.
Client-server Technology:
1. Testing is performed on .exe installed on local computer
2. The number of clients is known
3. Client and server are the entities to be tested
4. Both server and client locations are fixed and known to the user
5. Server to server interaction is prohibited
6. Low multimedia type of data transaction
7. Designed and implemented on intranet environment
Web-based Technology:
1. Testing is performed on content streamed from the web server using a browser (e.g., Internet Explorer, Mozilla Firefox, etc.) installed on the local computer
2. Number of clients is difficult to predict (millions of clients)
3. Client, Server and network are the entities to be tested
4. Server location is certain; client locations are not certain
5. Server to server interaction is normal
6. Rich multimedia type of data transaction
7. Designed and implemented on internet environment
Labels:
Basic Concepts
Difference Between Desktop Application Testing And Web Application Testing
A desktop application (DA) runs on an individual machine; hence, any change is reflected only at that machine level.
A web application (WA) is an Internet-dependent program; hence, any change to the program is reflected everywhere it is used.
Labels:
Basic Concepts
Difference Between Desktop Application Testing, Client Server Testing And Web Testing
Each differs in the environment in which it is tested, and you progressively lose control over that environment as you move from desktop to web applications.
A desktop application runs on personal computers and workstations, so when you test a desktop application you are focusing on a specific environment. You will test the complete application broadly in categories like GUI, functionality, load, and back end, i.e., the DB.
In a client-server application you have two different components to test. The application is loaded on the server machine, while the application .exe is installed on every client machine. You will test broadly in categories like GUI on both sides, functionality, load, client-server interaction, and the back end. This environment is mostly used on intranet networks. You know the number of clients and servers and their locations in the test scenario.
A web application is a bit different and more complex to test, as the tester doesn't have much control over the application. The application is loaded on a server whose location may or may not be known, and no .exe is installed on the client machine; you have to test it on different web browsers. Web applications should be tested on different browsers and OS platforms, so broadly a web application is tested mainly for browser and operating system compatibility, error handling, static pages, back-end testing, and load testing.
Labels:
Basic Concepts
Saturday, May 29, 2010
Unit Test
Unit testing is a software verification and validation method in which a programmer tests if individual units of source code are fit for use. A unit is the smallest testable part of an application. In procedural programming a unit may be an individual function or procedure.
Unit tests are typically written and run by software developers to ensure that code meets its design and behaves as intended.
The primary goal of unit testing is to take the smallest piece of testable software in the application, isolate it from the remainder of the code, and determine whether it behaves exactly as you expect. Each unit is tested separately before integrating them into modules to test the interfaces between modules. Unit testing has proven its value in that a large percentage of defects are identified during its use.
The most common approach to unit testing requires drivers and stubs to be written. The driver simulates a calling unit and the stub simulates a called unit. The investment of developer time in this activity sometimes results in demoting unit testing to a lower level of priority and that is almost always a mistake. Even though the drivers and stubs cost time and money, unit testing provides some undeniable advantages. It allows for automation of the testing process, reduces difficulties of discovering errors contained in more complex pieces of the application, and test coverage is often enhanced because attention is given to each unit.
For example, if you have two units and decide it would be more cost effective to glue them together and initially test them as an integrated unit, an error could occur in a variety of places:
1) Is the error due to a defect in unit 1?
2) Is the error due to a defect in unit 2?
3) Is the error due to defects in both units?
4) Is the error due to a defect in the interface between the units?
5) Is the error due to a defect in the test?
Finding the error (or errors) in the integrated module is much more complicated than first isolating the units, testing each, then integrating them and testing the whole.
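A tiny sketch of the driver/stub idea using Python's unittest (the pricing function and its tax-service collaborator are invented for illustration). The test case plays the driver; the stub stands in for the called unit so the unit under test is exercised in isolation:

import unittest

def total_price(amount, tax_service):
    """Unit under test: adds the tax obtained from a collaborating unit."""
    return amount + tax_service.tax_for(amount)

class StubTaxService:
    """Stub simulating the called unit: returns a fixed, predictable tax."""
    def tax_for(self, amount):
        return 2.0

class TotalPriceTest(unittest.TestCase):  # the test acts as the calling driver
    def test_adds_tax_from_service(self):
        self.assertEqual(total_price(10.0, StubTaxService()), 12.0)

if __name__ == "__main__":
    unittest.main()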
Labels:
Software Testing Types
Monday, May 24, 2010
Selenium
Selenium is a portable software testing framework for web applications.
Selenium provides a record/playback tool for authoring tests without learning a test scripting language. Selenium provides a test domain specific language (DSL) to write tests in a number of popular programming languages, including Java, Ruby, Groovy, Python, PHP, and Perl. Test playback is possible in most modern web browsers. Selenium deploys on Windows, Linux, and Macintosh platforms.
It is open source software, released under the Apache 2.0 license and can be downloaded and used without charge. The latest side project is Selenium Grid, which provides a hub allowing the running of multiple Selenium tests concurrently on any number of local or remote systems, thus minimizing test execution time.
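A minimal test written against Selenium's Python bindings might look like this sketch (the target URL and title check are placeholders; a suitable browser driver must be installed):

from selenium import webdriver

driver = webdriver.Firefox()  # launches a real browser via WebDriver
try:
    driver.get("https://www.example.com")  # placeholder page under test
    assert "Example" in driver.title       # a simple functional check
finally:
    driver.quit()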
Labels:
Testing tools
Thursday, May 20, 2010
Bug Tracking System
A bug tracking system is a software application that is designed to help quality assurance staff and programmers keep track of reported software bugs in their work. It may be regarded as a sort of issue tracking system.
Many bug-tracking systems, such as those used by most open source software projects, allow users to enter bug reports directly. Other systems are used only internally in a company or organization doing software development. Typically bug tracking systems are integrated with other software project management applications.
Having a bug tracking system is extremely valuable in software development, and they are used extensively by companies developing software products.
Labels:
General
Defect Prevention
The objective of defect prevention is to identify the defects and take corrective action to ensure they are not repeated over subsequent iterative cycles. Defect prevention can be implemented by preparing an action plan to minimize or eliminate defects, generating defect metrics, defining corrective action and producing an analysis of the root causes of the defects.
Defect prevention can be accomplished by actioning the following steps:
1) Calculate defect data with periodic reviews using test logs from the execution phase: this data should be used to segregate and classify defects by root causes. This produces defect metrics highlighting the most prolific problem areas;
2) Identify improvement strategies;
3) Escalate issues to senior management or customer where necessary;
4) Draw up an action plan to address outstanding defects and improve the development process. This should be reviewed regularly for effectiveness and modified should it prove to be ineffective;
5) Undertake periodic peer reviews to verify that the action plans are being adhered to;
6) Produce regular reports on defects by age. If the defect age for a particular defect is high and the severity is sufficient to cause concern, focussed action needs to be taken to resolve it.
7) Classify defects into categories such as critical defects, functional defects, and cosmetic defects.
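The first step, turning execution-phase test logs into defect metrics by root cause, can be as simple as a grouped count. An illustrative sketch (the log records and category names are invented):

from collections import Counter

# Defects pulled from execution-phase test logs (illustrative data only).
defects = [
    {"id": 101, "root_cause": "requirements", "severity": "critical"},
    {"id": 102, "root_cause": "coding", "severity": "functional"},
    {"id": 103, "root_cause": "coding", "severity": "cosmetic"},
    {"id": 104, "root_cause": "environment", "severity": "functional"},
]

# Segregate and classify by root cause to highlight the most prolific problem areas.
for cause, count in Counter(d["root_cause"] for d in defects).most_common():
    print(cause, count)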
Labels:
General
Defect Tracking
Defect tracking is the process of finding defects in a product (by inspection, testing, or recording feedback from customers) and making new versions of the product that fix the defects. Defect tracking is important in software engineering, as complex software systems typically have tens, hundreds, or thousands of defects. Managing, evaluating, and prioritizing these defects is a difficult task; defect tracking systems are computer database systems that store defects and help people manage them.
Labels:
General
Monday, May 3, 2010
Installation Testing
Installation testing is a kind of quality assurance work in the software industry that focuses on what customers will need to do to install and set up the new software successfully. The testing process may involve full, partial, or upgrade install/uninstall processes.
This testing is typically done by the software testing engineer in conjunction with the configuration manager. Implementation testing is usually defined as testing which places a compiled version of code into the testing or pre-production environment, from which it may or may not progress into production. This generally takes place outside of the software development environment to limit code corruption from other future releases which may reside on the development network.
OR
Installation testing (in software engineering) can simply be defined as any testing that occurs outside of the development environment. Such testing will frequently occur on the computer system the software product will eventually be installed on.
Whilst the ideal installation might simply appear to be to run a setup program, the generation of that setup program itself and its efficacy in a variety of machine and operating system environments can require extensive testing before it can be used with confidence.
In distributed systems, particularly where software is to be released into an already live target environment (such as an operational web site) installation (or deployment as it is sometimes called) can involve database schema changes as well as the installation of new software. Deployment plans in such circumstances may include back-out procedures whose use is intended to roll the target environment back in the event that the deployment is unsuccessful. Ideally, the deployment plan itself should be tested in an environment that is a replica of the live environment. A factor that can increase the organisational requirements of such an exercise is the need to synchronize the data in the test deployment environment with that in the live environment with minimum disruption to live operation.
Labels:
Software Testing Types
Stochastic Testing
Stochastic testing is the same as "monkey testing", but "stochastic testing" is a more technical-sounding name for the same testing process.
Stochastic testing is black box testing, random testing, performed by automated testing tools. Stochastic testing is a series of random tests over time. The software under test typically passes the individual tests, but our goal is to see if it can pass a large number of individual tests.
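A toy sketch of the idea: fire a long series of random inputs at the unit under test and check that an invariant always holds (the function under test is invented for illustration):

import random

def saturating_add(a, b):
    """Unit under test: addition clamped to the range 0..255."""
    return max(0, min(255, a + b))

random.seed(42)  # make the random run reproducible
for _ in range(10000):
    a = random.randint(-500, 500)
    b = random.randint(-500, 500)
    result = saturating_add(a, b)
    # Any single case is easy to pass; the point is surviving thousands of them.
    assert 0 <= result <= 255, "contract violated for %d, %d" % (a, b)
print("10,000 random tests passed")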
Labels:
Software Testing Types
Sunday, May 2, 2010
What is the NUnit Framework?
The NUnit framework is a port of the JUnit framework from Java, with roots in Extreme Programming (XP).
It is an open source product. You can download it from http://www.nunit.org. The NUnit framework is developed from the ground up to make use of .NET framework functionality. It uses an attribute-based programming model. It loads test assemblies in a separate application domain; hence, we can test an application without restarting the NUnit test tools. NUnit further watches for file/assembly change events and reloads the assembly as soon as it changes. With these features in hand, a developer can run develop-and-test cycles side by side.
We should also understand what the NUnit framework is not:
1) It is not an automated GUI tester.
2) It is not a scripting language; all tests are written in a .NET-supported language, e.g. C#, VC++, VB.NET, J#, etc.
3) It is not a benchmarking tool.
4) Passing the entire unit test suite does not mean the software is production ready.
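To make the attribute-based model concrete, here is a minimal sketch of an NUnit test in C# (the class and method names are hypothetical; the attributes follow the classic NUnit 2.x style):

using System;
using NUnit.Framework;

[TestFixture]
public class CalculatorTests
{
    // [Test] marks a method that NUnit discovers and runs via reflection.
    [Test]
    public void Add_TwoPlusThree_ReturnsFive()
    {
        Assert.AreEqual(5, 2 + 3);
    }

    // Expected exceptions can be asserted on as well.
    [Test]
    public void Divide_ByZero_Throws()
    {
        int zero = 0;
        Assert.Throws<DivideByZeroException>(() => { var unused = 10 / zero; });
    }
}

Compiled into an assembly, this fixture is loaded by the NUnit GUI or console runner, which re-runs it automatically when the assembly changes.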
Labels:
Testing tools
What Is NUnit?
NUnit is a unit-testing framework for all .NET languages.
Initially ported from JUnit, the current production release, version 2.5, is the sixth major release of this xUnit-based unit testing tool for Microsoft .NET. It is written entirely in C# and has been completely redesigned to take advantage of many .NET language features, for example custom attributes and other reflection-related capabilities. NUnit brings xUnit to all .NET languages.
Labels:
Testing tools
Wednesday, April 28, 2010
Test Script
A test script is a testing work product: a program (often written in a procedural scripting language) that executes a test suite of test cases.
The goals of a test script :
1) Automate the execution of test cases.
2) Support regression testing
Objectives of a single test script :
1) Execute each test case in the test suite.
2) Report the results of the test suite.
A test script provides the following benefits:
1) It automates a single test suite, thereby supporting regression testing.
2) Conversely, failure to produce test scripts makes regression testing more expensive and less likely to occur.
Contents
1) Test script objectives
2) Test preparation (e.g., to place objects under test into the appropriate pre-test states)
3) Test stimuli (e.g., to send test messages or raise test exceptions)
4) Expected behavior (i.e., test oracle)
5) Test reporting script
6) Test finalization script
To know more visit : Test Script
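As an illustration of those six content areas, here is a minimal, self-contained sketch in C# (the authentication service and every name in it are hypothetical):

using System;

class LoginTestScript
{
    static void Main()
    {
        // 1) Objective: verify that a valid user can log in.

        // 2) Preparation: place the object under test into its pre-test state.
        var auth = new FakeAuthService();

        // 3) Stimulus: send the test message.
        bool result = auth.Login("alice", "secret");

        // 4) Expected behavior (test oracle): a valid login must succeed.
        bool passed = (result == true);

        // 5) Reporting: record the verdict.
        Console.WriteLine(passed ? "PASS" : "FAIL");

        // 6) Finalization: return the environment to its original state.
        auth.Reset();
    }
}

// Hypothetical stand-in so the sketch is self-contained.
class FakeAuthService
{
    public bool Login(string user, string pass) { return user == "alice" && pass == "secret"; }
    public void Reset() { }
}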
Labels:
Automation Testing
Test Execution Engine
A test execution engine is a type of software used to test software, hardware or complete systems.
A test execution engine may appear in two forms:
1) Module of a test software suite (test bench) or an integrated development environment
2) Stand-alone application software
The test specification is itself software. A test specification is sometimes referred to as a test sequence, which consists of test steps.
The test specification should be stored in the test repository in a text format (such as source code). Test data is sometimes generated by a test data generator tool. Test data can be stored in binary or text files. Test data should also be stored in the test repository together with the test specification.
A test specification is selected, loaded and executed by the test execution engine much as application software is selected, loaded and executed by an operating system. The test execution engine should not operate on the tested object directly, but through plug-in modules, much as application software accesses devices through drivers installed on the operating system.
The difference between the concept of a test execution engine and an operating system is that the test execution engine monitors, presents and stores the status, results, timestamp, length and other information for every test step of a test sequence, whereas an operating system typically does not perform such profiling of software execution.
Advantages of using a test execution engine:
1) Test results are stored and can be viewed in a uniform way, independent of the type of the test
2) Easier to keep track of the changes
3) Easier to reuse components developed for testing
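A minimal sketch in C# of that profiling behavior (the three test steps are hypothetical delegates; a real engine would load the test sequence from the test repository through plug-in modules):

using System;
using System.Collections.Generic;
using System.Diagnostics;

class MiniEngine
{
    static void Main()
    {
        // The test sequence: named test steps returning pass/fail.
        var sequence = new List<(string Name, Func<bool> Step)>
        {
            ("connect",  () => true),
            ("transfer", () => 1 + 1 == 2),
            ("verify",   () => "done".Length == 4),
        };

        // For every step the engine records timestamp, result and length,
        // which an operating system would not do for ordinary software.
        foreach (var (name, step) in sequence)
        {
            var sw = Stopwatch.StartNew();
            bool ok = step();
            sw.Stop();
            Console.WriteLine($"{DateTime.Now:HH:mm:ss} {name}: " +
                              $"{(ok ? "PASS" : "FAIL")} in {sw.ElapsedMilliseconds} ms");
        }
    }
}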
Labels:
General
Sunday, April 25, 2010
Difference between Monkey testing and Ad-hoc testing
Monkey Testing : Monkey testing is random testing; you do not know the application.
Ad-hoc Testing : Ad-hoc testing is informal testing where you already know the application well.
Monkey Testing : Testing a system or an application on the fly, i.e. just a few tests here and there to ensure the system or application does not crash.
Ad-hoc Testing : A testing phase where the tester tries to 'break' the system by randomly trying the system's functionality. It can include negative testing as well.
Labels:
Software Testing Types
Monkey Testing
Testing by means of a random selection from a large range of inputs and by randomly pushing buttons, ignorant of how the product is actually used.
Or
Testing a system or an application on the fly, i.e. just a few tests here and there to ensure the system or application does not crash.
Or
In computer science, a monkey test is a unit test that runs with no specific test in mind. The monkey in this case is the producer of any input. For example, a monkey test can enter random strings into text boxes to ensure handling of all possible user input or provide garbage files to check for loading routines that have blind faith in their data.
Or
Testers use the term monkey when referring to a fully automated testing tool. This tool doesn’t know how to use any application, so it performs mouse clicks on the screen or keystrokes on the keyboard randomly. The test monkey is technically known to conduct stochastic testing, which is in the category of black-box testing.
There are two types (a minimal sketch follows the list):
1) Smart monkeys : valuable for load and stress testing; they will find a significant number of bugs, but are very expensive to develop.
2) Dumb monkeys : inexpensive to develop and able to do basic testing, but they will find only a few bugs.
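A minimal "dumb monkey" sketch in C#: it feeds random printable strings to a hypothetical input-handling routine and checks only that nothing crashes; no knowledge of the application is used:

using System;
using System.Text;

class DumbMonkey
{
    static readonly Random Rng = new Random();

    // Hypothetical routine under test: it must tolerate arbitrary input.
    static void HandleInput(string s)
    {
        int value;
        int.TryParse(s, out value);   // e.g. parsing must never throw
    }

    static void Main()
    {
        for (int i = 0; i < 10000; i++)
        {
            var sb = new StringBuilder();
            int length = Rng.Next(0, 50);
            for (int j = 0; j < length; j++)
                sb.Append((char)Rng.Next(32, 127));   // random printable character

            try { HandleInput(sb.ToString()); }
            catch (Exception ex)
            {
                Console.WriteLine("Crash on input '" + sb + "': " + ex.Message);
                return;
            }
        }
        Console.WriteLine("Survived 10000 random inputs.");
    }
}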
Labels:
Software Testing Types
Some Major Test cases for web application cookie testing:
The first obvious test case is to test if your application is writing cookies properly on disk.
You can also use a cookie-tester application if you don't have any web application to test but want to understand the cookie concept for testing.
Test cases:
1) As a cookie privacy policy, make sure from your design documents that no personal or sensitive data is stored in a cookie.
2) If you have no option other than saving sensitive data in a cookie, make sure the data stored in the cookie is encrypted.
3) Make sure there is no overuse of cookies on the site under test. Overuse will annoy users if the browser prompts for cookies too often, and this could result in loss of site traffic and eventually loss of business.
4) Disable cookies in your browser settings and try to access the web site under test. If the site relies on cookies, its major functionality will not work. Navigate through the site and see whether appropriate messages are displayed to the user, such as "For smooth functioning of this site, make sure that cookies are enabled in your browser". There should not be any page crash due to disabled cookies. (Make sure that you close all browsers and delete all previously written cookies before performing this test.)
5) Accept/reject some cookies: the best way to check web site functionality is not to accept all cookies. If your web application writes 10 cookies, then randomly accept some of them, say accept 5 and reject 5. To execute this test case, set your browser options to prompt whenever a cookie is about to be written to disk, and either accept or reject it at the prompt. Then try to access the major functionality of the web site and see whether pages crash or data gets corrupted.
6) Delete cookies: allow the site to write its cookies, then close all browsers and manually delete all cookies for the web site under test. Access the web pages and check the behavior of the pages.
7) Corrupt the cookies: corrupting a cookie is easy, since you know where cookies are stored. Manually edit a cookie in Notepad and change its parameters to some vague values, e.g. alter the cookie content, the name of the cookie, or the expiry date of the cookie, and check the site functionality. In some cases a corrupted cookie allows its data to be read by another domain; this should not happen with your web site's cookies. Note that cookies written by one domain, say rediff.com, cannot be accessed by another domain, say yahoo.com, unless the cookies are corrupted and someone is trying to hack the cookie data.
8) Check the deletion of cookies from your web application's pages: sometimes a cookie written by a domain, say rediff.com, may be deleted by the same domain but by a different page under that domain. This is the general case when testing an 'action tracking' web portal: an action or purchase tracking pixel is placed on the action page, and when any action or purchase occurs, the cookie written to disk is deleted to avoid multiple action logging from the same cookie. Check that reaching your action or purchase page deletes the cookie properly and that no further invalid actions or purchases get logged from the same user.
9) Cookie testing on multiple browsers: this is an important case; check whether your web application writes cookies properly on different browsers as intended and whether the site works properly using these cookies. You can test your web application on the major browsers, such as Internet Explorer (various versions), Mozilla Firefox, Netscape, Opera, etc.
10) If your web application uses cookies to maintain a user's login state, log in to your web application with a username and password. In many cases you can see the logged-in user ID parameter directly in the browser address bar. Change this parameter to a different value, say from 100 to 101, and press Enter. A proper access-denied message should be displayed, and the user should not be able to see another user's account.
For more information about what a cookie is, visit : Cookie
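For the first test case above, a minimal sketch in C# (the URL is hypothetical, and HttpClient with a CookieContainer stands in for the browser):

using System;
using System.Net;
using System.Net.Http;
using System.Threading.Tasks;

class CookieCheck
{
    static async Task Main()
    {
        var cookies = new CookieContainer();
        var handler = new HttpClientHandler { CookieContainer = cookies };
        using (var client = new HttpClient(handler))
        {
            // Hit a page that is expected to write a cookie.
            await client.GetAsync("https://example.com/login");

            // Verify that cookies were written, and inspect their contents
            // for anything that looks like unencrypted sensitive data.
            foreach (Cookie c in cookies.GetCookies(new Uri("https://example.com")))
                Console.WriteLine(c.Name + " = " + c.Value + " (expires " + c.Expires + ")");
        }
    }
}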
Labels:
General
Thursday, April 22, 2010
STAF
The Software Testing Automation Framework (STAF) is an open source, multi-platform, multi-language framework designed around the idea of reusable components, called services (such as process invocation, resource management, logging, and monitoring). STAF removes the tedium of building an automation infrastructure, thus enabling you to focus on building your automation solution. The STAF framework provides the foundation upon which to build higher level solutions, and provides a pluggable approach supported across a large variety of platforms and languages.
To know more about STAF please visit : http://staf.sourceforge.net/
Labels:
Automation Testing
Test Automation Framework
A Test Automation Framework is a set of assumptions, concepts and tools that provide support for automated software testing. The main advantage of such a framework is the low cost of maintenance. If any test case changes, only the test case file needs to be updated; the driver script and startup script remain the same. Ideally, the scripts themselves need no updating when the application changes.
Labels:
Automation Testing
What should a test harness include?
Test harnesses should include the following capabilities (a minimal sketch follows the list):
1) A standard way to specify setup (i.e., creating an artificial runtime environment) and cleanup.
2) A method for selecting individual tests to run, or all tests.
3) A means of analyzing output for expected (or unexpected) results.
4) A standardized form of failure reporting.
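A minimal hand-rolled sketch in C# of those four capabilities (every test name and body here is hypothetical):

using System;
using System.Collections.Generic;

class MiniHarness
{
    static void Setup()   { /* create the artificial runtime environment */ }
    static void Cleanup() { /* tear it down again */ }

    static void Main(string[] args)
    {
        var tests = new Dictionary<string, Func<bool>>
        {
            { "addition", () => 2 + 2 == 4 },
            { "identity", () => "a" == "a" },
        };

        Setup();
        foreach (var test in tests)
        {
            // Selection: run only the tests named on the command line, or all.
            if (args.Length > 0 && Array.IndexOf(args, test.Key) < 0) continue;

            bool ok;
            try { ok = test.Value(); } catch { ok = false; }

            // Output analysis and standardized failure reporting.
            Console.WriteLine(test.Key + ": " + (ok ? "PASS" : "FAIL"));
        }
        Cleanup();
    }
}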
Labels:
Automation Testing
Test Harness
A test harness or automated test framework is a collection of software and test data configured to test a program unit by running it under varying conditions and monitoring its behavior and outputs.
It has two main parts:
1) Test execution engine
2) Test script repository.
Test harnesses allow for the automation of tests. They can call functions with supplied parameters and print out and compare the results to the desired value. The test harness is a hook to the developed code, which can be tested using an automation framework.
The typical objectives of a test harness are to:
1) Automate the testing process.
2) Execute test suites of test cases.
3) Generate associated test reports.
Benefits of test harness :
1) Increased productivity due to automation of the testing process.
2) Increased probability that regression testing will occur.
3) Increased quality of software components and application.
Labels:
Automation Testing
Tuesday, April 20, 2010
Globalization Testing
The goal of globalization testing is to detect potential problems in application design that could inhibit globalization. It makes sure that the code can handle all international support without breaking functionality that would cause either data loss or display problems.
Globalization testing checks proper functionality of the product with any of the culture/locale settings using every type of international input possible.
Proper functionality of the product assumes both a stable component that works according to design specification, regardless of international environment settings or cultures/locales, and the correct representation of data.
For more details please visit :
http://www.onestoptesting.com/globalization-testing/
Globalization testing checks proper functionality of the product with any of the culture/locale settings using every type of international input possible.
Proper functionality of the product assumes both a stable component that works according to design specification, regardless of international environment settings or cultures/locales, and the correct representation of data.
For more details please visit :
http://www.onestoptesting.com/globalization-testing/
Labels:
Software Testing Types
Localization Testing
Localization (L10N) is the process of customizing a software application that was originally designed for a domestic market so that it can be released in foreign markets.
This process involves translating all native language strings to the target language and customizing the GUI so that it is appropriate for the target market. Depending on the size and complexity of the software, localization can range from a simple process involving a small team of translators, linguists, desktop publishers and engineers to a complex process requiring a Localization Project Manager directing a team of a hundred specialists.
Localization is usually done using some combination of in-house resources, independent contractors and full-scope services of a localization company.
For more details please visit : http://www.onestoptesting.com/localization-testing/
This process involves translating all native language strings to the target language and customizing the GUI so that it is appropriate for the target market. Depending on the size and complexity of the software, localization can range from a simple process involving a small team of translators, linguists, desktop publishers and engineers to a complex process requiring a Localization Project Manager directing a team of a hundred specialists.
Localization is usually done using some combination of in-house resources, independent contractors and full-scope services of a localization company.
For more details please visit : http://www.onestoptesting.com/localization-testing/
Labels:
Software Testing Types
Sunday, April 18, 2010
GUI Test
GUI tests test the graphical user interface. GUI tests are considered functional tests. Applications are used to simulate users interacting with the system such as entering text into a field or clicking a button. Verifications are then made based on the response from the UI or system.
Labels:
Software Testing Types
Unit Test
A unit test is a method used to verify that a small unit of source code is working properly. Unit tests should be independent of external resources such as databases and files. A unit is generally considered a method.
Labels:
Software Testing Types
Dummy Objects
Dummy objects are used when a method or constructor requires an object as a parameter, but the object is never actually used by the code under test. As such, a common dummy object is null.
Labels:
Basic Concepts
Fake
Fake objects are yet another type of test double. They are similar to test stubs, but replace parts of the real functionality with their own implementation to make the method easier to test.
Labels:
Basic Concepts
Mock
Mock objects are also a form of test double and work in a similar fashion to stub objects. Mocks are used to simulate the behavior of a complex object. Any interactions made with the mock object are verified for correctness, unlike stub objects.
Labels:
Basic Concepts
Stub
A test stub is a specific type of test double. A stub is used when you need to replicate an object and control the output, but without verifying any interactions with the stub object for correctness. Many types of stubs exist, such as the responder, saboteur, temporary, procedural, and entity chain.
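A minimal hand-rolled sketch in C# contrasting a stub with a mock (no mocking library; the mail-sending service and all names are hypothetical):

using System;

interface IMailer { void Send(string to, string body); }

// Stub: satisfies the dependency; its interactions are never verified.
class StubMailer : IMailer
{
    public void Send(string to, string body) { /* do nothing */ }
}

// Mock: records interactions so the test can verify them for correctness.
class MockMailer : IMailer
{
    public int SendCount;
    public void Send(string to, string body) { SendCount++; }
}

class OrderService
{
    private readonly IMailer mailer;
    public OrderService(IMailer m) { mailer = m; }
    public void PlaceOrder() { mailer.Send("ops@example.com", "order placed"); }
}

class Demo
{
    static void Main()
    {
        var mock = new MockMailer();
        new OrderService(mock).PlaceOrder();
        // Unlike with a stub, the recorded interaction is asserted on:
        Console.WriteLine(mock.SendCount == 1 ? "PASS" : "FAIL");
    }
}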
Labels:
Basic Concepts
Test Double
When we cannot, or choose not, to use a real component in unit tests, the object that is substituted for the real component is called a test double.
Labels:
Basic Concepts
Behavior Driven Development (BDD)
Building on top of the fundamentals of TDD, BDD aims to take more advantage of the design and documentation aspects of TDD to provide more value to the customer and business.
Labels:
Basic Concepts
Test Driven Development (TDD)
Test Driven Development is an Agile Software Development process where a test for a procedure is created before the code is created.
Labels:
Basic Concepts
Test Fixture
Test fixtures refer to the state a test must be in before the test can be run. Test fixtures prepare any objects that need to be in place before the test is run. Fixtures ensure a known, repeatable state for the tests to be run in.
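A minimal NUnit-style sketch in C# (classic 2.x attributes; the cart object is hypothetical): [SetUp] re-establishes the known state before every test, and [TearDown] cleans up after it:

using System.Collections.Generic;
using NUnit.Framework;

[TestFixture]
public class ShoppingCartTests
{
    private List<string> cart;

    [SetUp]
    public void PrepareFixture()
    {
        // Known, repeatable starting state for every test.
        cart = new List<string> { "apple" };
    }

    [Test]
    public void Add_Item_GrowsCart()
    {
        cart.Add("pear");
        Assert.AreEqual(2, cart.Count);
    }

    [TearDown]
    public void CleanUp() { cart.Clear(); }
}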
Labels:
Basic Concepts
Fail
In the case of a fail, the functionality being tested has changed and as a result no longer works as expected. On a report, a fail is represented as red.
Labels:
Basic Concepts
Pass
A pass indicates that everything is working correctly. When represented on a report or user interface (UI), it is represented as green.
Labels:
Basic Concepts
Test
A test is a systematic procedure to ensure that a particular unit of an application is working correctly.
Labels:
Basic Concepts
Wednesday, April 7, 2010
Penetration test
A penetration test is a method of evaluating the security of a computer system or network by simulating an attack from a malicious source, known as a Black Hat Hacker, or Cracker. The process involves an active analysis of the system for any potential vulnerabilities that may result from poor or improper system configuration, known and/or unknown hardware or software flaws, or operational weaknesses in process or technical countermeasures. This analysis is carried out from the position of a potential attacker, and can involve active exploitation of security vulnerabilities. Any security issues that are found will be presented to the system owner together with an assessment of their impact and often with a proposal for mitigation or a technical solution. The intent of a penetration test is to determine feasibility of an attack and the amount of business impact of a successful exploit, if discovered. It is a component of a full security audit.
Labels:
Security Testing
Dictionary attack
A dictionary attack is a method of breaking into a password-protected computer or server by systematically entering every word in a dictionary as a password. A dictionary attack can also be used in an attempt to find the key necessary to decrypt an encrypted message or document.
Dictionary attacks work because many computer users and businesses insist on using ordinary words as passwords. Dictionary attacks are rarely successful against systems that employ multiple-word phrases, and unsuccessful against systems that employ random combinations of uppercase and lowercase letters mixed up with numerals. In those systems, the brute-force method of attack (in which every possible combination of characters and spaces is tried up to a certain maximum length) can sometimes be effective, although this approach can take a long time to produce results.
Vulnerability to password or decryption-key assaults can be reduced to near zero by limiting the number of attempts allowed within a given period of time, and by wisely choosing the password or key. For example, if only three attempts are allowed and then a period of 15 minutes must elapse before the next three attempts are allowed, and if the password or key is a long, meaningless jumble of letters and numerals, a system can be rendered immune to dictionary attacks and practically immune to brute-force attacks.
A form of dictionary attack is often used by spammers. A message is sent to every e-mail address consisting of a word in the dictionary, followed by the at symbol (@), followed by the name of a particular domain. Lists of given names (such as frank, george, judith, or donna) can produce amazing results. So can individual letters of the alphabet followed by surnames (such as csmith, jwilson, or pthomas). E-mail users can minimize their vulnerability to this type of spam by choosing usernames according to the same rules that apply to passwords and decryption keys -- long, meaningless sequences of letters interspersed with numerals.
Labels:
Security Testing
Brute force attack
In cryptography, a brute force attack is a strategy used to break the encryption of data. It involves traversing the search space of possible keys until the correct key is found.
The selection of an appropriate key length depends on the practical feasibility of performing a brute force attack. By obfuscating the data to be encoded, brute force attacks are made less effective as it is more difficult to determine when one has succeeded in breaking the code.
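A back-of-the-envelope sketch in C# of why key length governs feasibility; the guess rate of one billion keys per second is an assumed figure, purely for illustration:

using System;

class KeySpace
{
    static void Main()
    {
        double guessesPerSecond = 1e9;   // assumed attacker speed
        foreach (int bits in new[] { 40, 56, 128 })
        {
            double keys = Math.Pow(2, bits);
            double years = keys / guessesPerSecond / (3600.0 * 24 * 365);
            Console.WriteLine(bits + "-bit key: " + keys.ToString("E2") +
                              " keys, ~" + years.ToString("E2") + " years to traverse");
        }
    }
}

At that rate a 40-bit key space falls in minutes and a 56-bit space in a couple of years, while a 128-bit space would take on the order of 10^22 years, which is why longer keys defeat brute force.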
Labels:
Security Testing
Warchalking
Warchalking is the drawing of symbols in public places to advertise an open Wi-Fi wireless network.
The word is formed by analogy to wardriving, the practice of driving around an area in a car to detect open Wi-Fi nodes. That term in turn is based on wardialing, the practice of dialing many phone numbers hoping to find a modem.
Having found a Wi-Fi node, the warchalker draws a special symbol on a nearby object, such as a wall, the pavement, or a lamp post. Those offering Wi-Fi service might also draw such a symbol to advertise the availability of their Wi-Fi location, whether commercial or personal.
Labels:
Security Testing
Tuesday, April 6, 2010
War dialing And wardriving
War dialing or wardialing is a technique of using a modem to automatically scan a list of telephone numbers, usually dialing every number in a local area code to search for computers, Bulletin board systems and fax machines. Hackers use the resulting lists for various purposes, hobbyists for exploration, and crackers - hackers that specialize in computer security - for password guessing.
A single wardialing call would involve calling an unknown number, and waiting for one or two rings, since answering computers usually pick up on the first ring. If the phone rings twice, the modem hangs up and tries the next number. If a modem or fax machine answers, the wardialer program makes a note of the number. If a human or answering machine answers, the wardialer program hangs up. Depending on the time of day, wardialing 10,000 numbers in a given area code might annoy dozens or hundreds of people, some who attempt and fail to answer a phone in two rings, and some who succeed, only to hear the wardialing modem's carrier tone and hang up. The repeated incoming calls are especially annoying to businesses that have many consecutively numbered lines in the exchange, such as used with a Centrex telephone system.
A more recent phenomenon is wardriving, the searching for wireless networks (Wi-Fi) from a moving vehicle. Wardriving was named after wardialing, since both techniques involve brute-force searches to find computer networks. The aim of wardriving is to collect information about wireless access points.
Labels:
Security Testing
Common techniques for Security Testing
1) Network scanning
2) Vulnerability scanning
3) Password cracking
4) Log review
5) Integrity checkers
6) Virus detection
7) War dialing
8) War driving (wireless LAN testing)
9) Penetration testing
In actual practice, a combination of many such techniques may be used to obtain a more comprehensive assessment of overall security.
Labels:
Security Testing
Who should do the Security Testing?
The majority of security testing techniques are manual, requiring an individual to initiate and conduct the test. Automation tools can be helpful for executing simple tasks, whereas complicated tasks continue to depend largely on the expertise of the test engineer.
Irrespective of the type of testing, the test engineers who plan and conduct security testing should have significant security and networking knowledge, including expertise in the following areas:
1) Network security
2) Firewalls
3) Intrusion detection system
4) Operating systems
5) Programming and networking protocols like TCP/IP
Labels:
Security Testing
Objectives of Security Testing
- To ensure that adequate attention is provided to identify the security risks,
- To ensure that a realistic mechanism to define & enforce access to the system is in place,
- To ensure that sufficient expertise exists to perform adequate security testing,
- To conduct reasonable tests to confirm the proper functioning of the implemented security measures.
Labels:
Security Testing
When do we use Security Testing?
Security testing is carried out when the information and assets managed by the software application are of significant importance to the organization. Failures in the software security system can be serious, especially when not detected, resulting in a loss or compromise of information without the knowledge of that loss.
Security testing should be performed both before the system goes into operation and after the system is put into operation.
Rigorous security testing activities are performed to demonstrate that the system meets the specified security requirements and to identify any remaining security vulnerabilities.
The extent of testing largely depends on the security risks, and the test engineers assigned to conduct the security testing are selected according to the estimated sophistication that might be used to penetrate the security.
Labels:
Security Testing
Tuesday, March 30, 2010
Features of the Traceability Matrix
- It is a method for tracing each requirement from its point of origin, through each development phase and work product, to the delivered product
- Can indicate through identifiers where the requirement is originated, specified, created, tested, and delivered
- Will indicate for each work product the requirement(s) this work product satisfies
- Facilitates communications, helping customer relationship management and commitment negotiation
- It ensures, for each phase of the lifecycle, that all the customer's needs have been correctly accounted for
- Ensure that all requirements are correct and included in the test plan and the test cases
- Ensure that developers are not creating features that no one has requested
- Identifies the missing parts
- Without traceability, the completed system may have "extra" functionality that was never specified in the design, resulting in wasted manpower, time and effort.
- If the code components that implement the customer's high-priority requirements are not known, the areas that need to be worked on first may not be known either, decreasing the chances of shipping a useful product on schedule.
- A seemingly simple request might involve changes to several parts of the system; if a proper traceability process is not followed, the work needed to satisfy the request may not be evaluated correctly.
Labels:
Traceability matrix
Traceability matrix
A traceability matrix is a document, usually in the form of a table, that correlates any two baselined documents that require a many to many relationship to determine the completeness of the relationship. It is often used with high-level requirements (these often consist of marketing requirements) and detailed requirements of the software product to the matching parts of high-level design, detailed design, test plan, and test cases.
A traceability matrix is a document that defines the mapping between customer requirements and prepared test cases.
The traceability matrix also serves as documented proof that all the specifications have been covered by testing.
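An illustrative fragment of such a matrix (every identifier here is hypothetical):

Requirement            Design ref    Test cases         Status
REQ-001 (login)        DD-3.1        TC-101, TC-102     Covered
REQ-002 (pwd reset)    DD-3.4        TC-110             Covered
REQ-003 (audit log)    DD-5.2        (none)             Gap

A row with no test cases immediately exposes an untested requirement, and a test case that maps to no requirement exposes functionality that no one requested.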
Labels:
Traceability matrix
Monday, March 29, 2010
Test Bed
A test bed (also commonly spelled as testbed in research publications) is a platform for experimentation of large development projects. Test beds allow for rigorous, transparent, and replicable testing of scientific theories, computational tools, and new technologies.
OR
A test bed is the environment that is required to test software.
This includes requirements for hardware, software, memory, CPU speed, operating system, etc.
OR
An execution environment configured for testing is called a test bed.
OR
A test bed is an environment containing the hardware, instrumentation, simulators, software tools, and other support elements needed to conduct a test.
Labels:
Basic Concepts
BRS
BRS stands for Business Requirement Specification: the client who wants the application built gives this specification to the software development organization, which then converts it into an SRS (Software Requirement Specification) according to the needs of the software.
Labels:
Basic Concepts
Difference between Test Strategy and Test Plan
Test Strategy : It is a company-level document, developed by QA-category people such as the QA lead and the PM. This document defines the "testing approach" used to achieve the testing objectives. The test strategy is derived from the frozen BRS, which also gives us the test policy.
Test Plan : The test plan is a frozen document developed from the SRS, FS and use cases. After completion of test-team formation and risk analysis, the test lead prepares the test plan document in terms of what to test, how to test, who will test, and when to test. There is one master test plan, consisting of the reviewed project test plan and the phase test plans, so in general discussion "test plan" refers to the project test plan.
Test Strategy : Components of the test strategy are as follows:
Scope and objectives, business issues, roles and responsibilities, communication and status reporting, test deliverables, test approach, test automation and tools, testing measurements and metrics, risks and mitigation, defect reporting and tracking, change and configuration management, training plan.
Test Plan : Components of the test plan are as follows:
Test plan id, introduction, test items, features to be tested, features not to be tested, approach, testing tasks, suspension criteria, feature pass or fail criteria, test environment (entry criteria, exit criteria), test deliverables, staffing and training needs, responsibilities, schedule, risks and mitigation, approvals.
Labels:
Test Plan and Test Strategy
Test Strategy
The purpose of a test strategy is to clarify the major tasks and challenges of the test project.
Creating a Test Strategy :
The test strategy is a formal description of how a software product will be tested. A test strategy is developed for all levels of testing, as required. The test team analyzes the requirements, writes the test strategy and reviews the plan with the project team. The test plan may include test cases, conditions, the test environment, a list of related tasks, pass/fail criteria and risk assessment.
Defining Test Strategy :
A solid testing strategy provides the framework necessary to implement your testing methodology. A separate strategy should be developed for each system being developed taking into account the development methodology being used and the specific application architecture.
The heart of any testing strategy is the master testing strategy document. It aggregates all the information from the requirements, system design and acceptance criteria into a detailed plan for testing. A detailed master strategy should cover the following:
- Project Scope
- Test Objectives
- Features and Functions to be Tested
- Testing Approach
- Testing Process and Procedures
- Test Compliance
- Testing Tools
- Defect Resolution
- Roles and Responsibilities
- Process Improvement
- Deliverables
- Schedule
- Environmental Needs
- Resource Management
- Risk and Contingencies
- Approvals and Workflow
- Project Overview
- Business Risks
- Testing Milestones
- Testing Environment
Labels:
Test Plan and Test Strategy
Tuesday, March 23, 2010
Agile Testing
Agile testing emphasizes testing from the perspective of the customers who will utilize the system.
Agile testing does not emphasize rigidly defined testing procedures, but rather focuses on testing iteratively against newly developed code until quality is achieved from an end customer's perspective.
The word "agile" means "moving quickly", and this explains the whole concept of agile testing.
Testers have to adapt to rapid deployment cycles and changes in testing patterns.
Agile testing involves testing from the customer perspective as early as possible, testing early and often as code becomes available and stable enough from module/unit level testing.
Testers are no longer a form of Quality Police. Testing moves the project forward leading to new strategy called Test Driven Development. Testers provide information, feedback and suggestions rather than being last phase of defense.
Testing is no longer a phase; it integrates closely with development. Continuous testing is the only way to ensure continuous progress.
Manual testing, particularly manual exploratory testing, is still important.
Agile teams typically find that the fast feedback afforded by automated regression is a key to detecting problems quickly, thus reducing risk and rework.
Labels:
Software Testing Types
Monday, March 22, 2010
QTP Questions
- What is a reusable action in QTP?
- What is the objective of actions in QTP?
- Why do we need synchronization in QTP?
- How many modes of recording are there?
- What is a virtual object and when do we use it?
- What is the difference between per-test mode and shared mode in QTP?
- Why do you use Object Spy in QTP?
- What is the difference between properties and methods in QTP?
- What is a virtual object and where do we use virtual objects in QTP?
- What is a regular expression and when do we use regular expressions in QTP?
- How do you add objects to the Object Repository?
- When do we use update mode in QTP?
- What is the difference between a constant and a parameter in QTP?
- What are GET TO, SET TO and GET RO properties in QTP?
- What is a framework in QTP?
- Where are checkpoints stored in QTP?
- What are the objectives of low-level recording? What is elapsed time? Does QuickTest support JavaScript? What is the extension of a test script in QuickTest?
- Why do you save .vbs files as library files in QTP?
Labels:
Interview Questions
General Questions
- What do you mean by pilot testing?
- What is the difference between usability testing and GUI testing?
- Can you explain the levels in the V model in manual testing?
- What is the exact difference between debugging and testing?
- What is the difference between a scenario and a test case?
- What is determination?
- What is debugging?
- What is the prototype model in manual testing?
- What is compatibility testing?
- What is a test bed?
- What are stubs and drivers in manual testing?
- What is integration testing?
- What is unit testing?
- Can you explain the waterfall model in manual testing?
- Can you explain the V model in manual testing?
- Can you explain the spiral model in manual testing?
- What is the fish model? Can you explain it?
- What is unit testing in manual testing?
- What is test development?
- What is port testing?
- What is the V model? Can you explain it?
- What is the bug life cycle?
- What is system testing?
- What are SRS and BRS in manual testing?
- What are test metrics?
- What is a test strategy? Who prepares it, and what does it contain?
- What is a test plan? Who prepares it?
- What is the STLC? How many phases are there? Can you explain them?
- What is the spiral model in manual testing? Can you explain it?
- What is a review?
- What are the objectives of utility objects?
- What is performance testing?
- Can you explain the structure of the bug life cycle?
- What is stress testing?
- What is the difference between test scenarios and a test strategy?
- What are sanity testing, ad-hoc testing and smoke testing? When do we use each of them?
- How will you review a test case, and how many types of review are there?
- Explain the use case document.
- What is the difference between smoke testing and sanity testing?
- What is black box testing?
- What are alpha testing and beta testing?
- What is FSO? Can you explain it?
- What are the objectives of debugging?
- What are functional testing, system testing and data-driven testing?
- How do you write a test case and a bug report? Please explain with an example.
- What is meant by GUI testing? What is meant by client/server? What is meant by a web-based application?
- What is the test case life cycle?
- How would you test Microsoft Word 2003? What are the major areas to be tested? Please explain.
- What is the difference between a bug, an error, and a defect?
- How do you do regression testing? Can you give one or two examples in the same application?
- Explain Microsoft's six rules standard for user interface testing.
- What are the roles of QA and QC?
- Give an exact and appropriate definition of testing.
- How do you write a test case with a minimum of 13 columns?
- How can I do GUI testing? What is its important content? Please describe all the properties of GUI testing.
- Share a particular project where you have been able to learn enough skills to help with testing. (More for developers looking to do testing.)
- What part of the testing phase is the most important for testing in the cycle?
- How do you carry out manual testing for a background process that doesn't have any user interface?
- What is open beta testing, and at which end is it done? What is the difference between open beta testing and beta testing?
- What are application entry and exit criteria?
- What is Visual SourceSafe?
Labels:
Interview Questions
Integration Testing
Testing of combined parts of an application to determine if they function together correctly. Usually performed after unit and functional testing. This type of testing is especially relevant to client/server and distributed systems.
Approaches of integration testing :
Top Down Testing :
An approach to integration testing where the component at the top of the component hierarchy is tested first, with lower level components being simulated by stubs. Tested components are then used to test lower level components. The process is repeated until the lowest level components have been tested.
Bottom Up Testing :
An approach to integration testing where the lowest level components are tested first, then used to facilitate the testing of higher level components. The process is repeated until the component at the top of the hierarchy is tested.
Labels:
Software Testing Types
Load Testing
Testing conducted to evaluate the compliance of a system or component with specified performance requirements. Often this is performed using an automated test tool to simulate a large number of users.
A load test is usually conducted to understand the behavior of the application under a specific expected load. This load can be the expected number of concurrent users of the application performing a specific number of transactions within a set duration. The test will give the response times of all the important business-critical transactions. If the database, application server, etc. are also monitored, then this simple test can itself point toward bottlenecks in the application.
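A minimal load-test sketch in C# (the target URL and the user count are hypothetical; dedicated tools such as JMeter or LoadRunner do this at scale). It fires a batch of concurrent simulated users and reports response times:

using System;
using System.Diagnostics;
using System.Linq;
using System.Net.Http;
using System.Threading.Tasks;

class MiniLoadTest
{
    static async Task Main()
    {
        int users = 50;   // assumed expected concurrent load
        using (var client = new HttpClient())
        {
            // One timed request per simulated user, all in flight together.
            var times = await Task.WhenAll(
                Enumerable.Range(0, users).Select(async i =>
                {
                    var sw = Stopwatch.StartNew();
                    await client.GetAsync("https://example.com/");   // hypothetical target
                    sw.Stop();
                    return sw.ElapsedMilliseconds;
                }));

            Console.WriteLine("avg " + times.Average().ToString("F0") + " ms, max " +
                              times.Max() + " ms across " + users + " concurrent users");
        }
    }
}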
Labels:
Software Testing Types
Volume Testing
Testing which confirms that any values that may become large over time (such as accumulated counts, logs, and data files) can be accommodated by the program and will not cause the program to stop working or degrade its operation in any manner.
For example :
if you want to volume test your application with a specific database size, you will expand your database to that size and then test the application's performance on it.
Labels:
Software Testing Types
Stress Testing
This kind of test is done to determine the application's robustness in times of extreme load and helps application administrators to determine if the application will perform sufficiently if the current load goes well above the expected load.
Testing conducted to evaluate a system or component at or beyond the limits of its specified requirements to determine the load under which it fails and how. Often this is performance testing using a very high level of simulated load.
For example :
A web server may be stress tested using scripts, bots, and various denial-of-service tools to observe the performance of a web site during peak loads.
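As an illustration, here is a minimal sketch of one common stress-testing pattern: ramp up the number of concurrent users until requests start failing, which locates the load at which the system breaks and lets you observe how it fails. The URL and the ramp steps are placeholders.

import urllib.request
from concurrent.futures import ThreadPoolExecutor

URL = "http://example.com/"  # placeholder target

def hit(_):
    # Return True on success, False on any failure (timeout, error, refusal).
    try:
        urllib.request.urlopen(URL, timeout=5).read()
        return True
    except Exception:
        return False

for users in (10, 50, 100, 200, 400):   # ramp well beyond the expected load
    with ThreadPoolExecutor(max_workers=users) as pool:
        results = list(pool.map(hit, range(users)))
    failures = results.count(False)
    print(f"{users} users -> {failures} failures")
    if failures:
        break  # failure point found; inspect server behavior and recovery here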
Labels:
Software Testing Types
Sunday, March 21, 2010
Software Testing Types
1. Black box testing : Internal system design is not considered in this type of testing. Tests are based on requirements and functionality.
2. White box testing : This testing is based on knowledge of the internal logic of an application's code; it is also known as glass box testing. The tester must understand the internal software and how the code works. Tests are based on coverage of code statements, branches, paths, and conditions.
3. Unit testing : Testing of individual software components or modules, typically done by the programmer rather than by testers, as it requires detailed knowledge of the internal program design and code. May require developing test driver modules or test harnesses (see the sketch after this list).
4. Incremental integration testing : A bottom-up approach, i.e., continuous testing of an application as new functionality is added. Application functionality and modules should be independent enough to test separately. Done by programmers or by testers.
5. Integration testing : Testing of integrated modules to verify their combined functionality after integration. Modules are typically code modules, individual applications, or client and server applications on a network. This type of testing is especially relevant to client/server and distributed systems.
6. Functional testing : This type of testing ignores the internal parts and focuses on whether the output meets the requirements. It is black-box testing geared to the functional requirements of an application.
7. System testing : The entire system is tested against the requirements. It is black-box testing based on the overall requirements specification and covers all combined parts of the system.
8. End-to-end testing : Similar to system testing; involves testing a complete application environment in a situation that mimics real-world use, such as interacting with a database, using network communications, or interacting with other hardware, applications, or systems where appropriate.
9. Sanity testing : Testing to determine whether a new software version is performing well enough to accept it for a major testing effort. If the application crashes on initial use, the build is not stable enough for further testing and is sent back to be fixed.
10. Regression testing : Testing the application as a whole after a modification to any module or functionality. Because it is difficult to cover the entire system, automation tools are typically used for regression testing.
11. Acceptance testing : Normally done to verify that the system meets the customer-specified requirements. Users or customers perform this testing to decide whether to accept the application.
12. Load testing : Performance testing that checks system behavior under load, such as testing a web site under a range of loads to determine at what point the system's response time degrades or fails.
13. Stress testing : The system is stressed beyond its specifications to check how and when it fails. It is performed under heavy load, such as input beyond storage capacity, complex database queries, or continuous input to the system or database.
14. Performance testing : A term often used interchangeably with "stress" and "load" testing; it checks whether the system meets its performance requirements. Various performance and load tools are used for this.
15. Usability testing : A user-friendliness check. The application flow is tested: can a new user understand the application easily, and is proper help documented wherever the user may get stuck? Essentially, system navigation is checked in this testing.
16. Install/uninstall testing : Tests full, partial, and upgrade install/uninstall processes on different operating systems and under different hardware and software environments.
17. Recovery testing : Testing how well a system recovers from crashes, hardware failures, or other catastrophic problems.
18. Security testing : Can the system be penetrated by hacking? Tests how well the system protects against unauthorized internal or external access, and checks whether the system and database are safe from external attacks.
19. Compatibility testing : Testing how well software performs in a particular hardware/software/operating system/network environment, and in different combinations of the above.
20. Comparison testing : Comparison of a product's strengths and weaknesses with previous versions or with other similar products.
21. Alpha testing : An in-house virtual user environment can be created for this type of testing. It is done toward the end of development; minor design changes may still be made as a result.
22. Beta testing : Testing typically done by end users or others; the final testing before releasing the application commercially.
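As one concrete illustration of item 3 above, here is a minimal unit-test sketch using Python's built-in unittest module; the function under test is invented for the example, and a test harness for a real module would follow the same shape.

import unittest

def apply_discount(price, percent):
    """Hypothetical unit under test: apply a percentage discount."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

class ApplyDiscountTest(unittest.TestCase):
    def test_normal_discount(self):
        self.assertEqual(apply_discount(200.0, 25), 150.0)

    def test_invalid_percent_rejected(self):
        with self.assertRaises(ValueError):
            apply_discount(200.0, 120)

if __name__ == "__main__":
    unittest.main()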
Labels:
Software Testing Types
Testing Techniques
1. Static Testing :
i) During static testing, you have a checklist to check whether the work you are doing follows the organization's set standards. These standards can be for coding, integration, and deployment. Reviews, inspections, and walkthroughs are static testing methodologies.
ii) Static testing is a form of software testing where the software isn't actually executed.
iii) This is in contrast to dynamic testing. It is generally not detailed testing, but checks mainly the sanity of the code, algorithm, or document.
iv) It is primarily syntax checking of the code and manual reading of the code or document to find errors.
v) This type of testing can be done by the developer who wrote the code, in isolation. Code reviews, inspections, and walkthroughs are also used.
vi) From the black box testing point of view, static testing involves review of requirements or specifications, done with an eye toward completeness and appropriateness for the task at hand. This is the verification portion of verification and validation.
2. Dynamic Testing :
i) Dynamic testing involves working with the software, giving input values and checking whether the output is as expected.
ii) These are the validation activities.
iii) Unit tests, integration tests, system tests, and acceptance tests are a few of the dynamic testing methodologies.
iv) In dynamic testing the software must actually be compiled and run; this is in contrast to static testing (a short sketch contrasting the two follows this section).
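Here is a minimal sketch contrasting the two, using a tiny code snippet invented for the example: the static check inspects the source without executing it (a syntax check via Python's built-in compile()), while the dynamic check actually runs the code against an input and validates the output.

SOURCE = "def add(a, b):\n    return a + b\n"

# Static: parse and compile only; nothing is executed.
compile(SOURCE, "<snippet>", "exec")   # raises SyntaxError if the source is malformed

# Dynamic: actually run the code and validate its behavior.
namespace = {}
exec(SOURCE, namespace)
assert namespace["add"](2, 3) == 5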
Labels:
Testing Techniques
What should be tested in web site testing?
1. Functionality :
Links:
a)All Internal Links
b)All External Links
c)All mail links
d)Check for broken links (see the link-checker sketch after this checklist)
Forms:
a)All Field Level Checks
b)All Field Level Validations
c)Functionality of Create, Modify, Delete & View
d)Handling of wrong inputs (appropriate error messages must be displayed)
e)Optional and mandatory fields checks
2. Usability :
Navigation:
a)Application navigation works properly via the Tab key
b)Navigation through Mouse
c)Main features accessible from the main/home page
d)Any hot keys, control keys to access menus
Content:
a)Spelling and grammar
b)Updated information
General Appearance:
a)Page appearance (e.g., overlapping or missing elements)
b)Color, font and size
c)Consistent design
3. Server Side Interfaces :
Server Interface:
a)Verify that communication works correctly: web server to application server, application server to database server, and vice versa.
b)Compatibility of server software, hardware, network connections
c)Database compatibility (SQL, Oracle etc.)
4. Client Side Compatibility :
Platform:
Check for the compatibility of
a)Windows (98, 2000, NT)
b)Unix (different sets)
c)Macintosh (If applicable)
d)Linux
e)Solaris (If applicable)
Browsers:
Check for the various combinations
a)Internet Explorer (5.X, 6.X, 7.X)
b)Netscape Navigator
c)AOL
d)Mozilla
e)Browser settings
Graphics:
a)Loading of images, graphics, etc.
Printing:
a)Text and image alignment
b)Colors of text, foreground, and background
c)Scalability to fit paper size
d)Tables and borders
Performance:
a)Connection speed : Try various connection speeds and check time-out behavior
b)Load :
Check/Measure the following:
What is the estimated number of users per time period and how will it be divided over the period?
Will there be peak loads and how will the system react?
Can your site handle a large amount of users requesting a certain page?
Can the site handle large amounts of data submitted by users?
c)Stress:
Stress testing is done in order to deliberately break a site or a certain feature and determine how the system reacts.
Stress tests are designed to push and test system limitations and determine whether the system recovers gracefully from crashes. Hackers often stress systems by feeding them large amounts of invalid data until they crash, and then try to gain access during start-up.
a. Typical areas to test are forms, logins or other information transaction components.
b. Performance of memory, CPU, file handling etc.
c. Error in software, hardware, memory errors (leakage, overwrite or pointers)
d)Continuous use:
Is the application, or are certain features, going to be used only during certain periods of time, or will it be used continuously, 24 hours a day, 7 days a week?
Verify that the application is able to meet the requirements and does not run out of memory or disk space.
5. Security :
a)Valid and Invalid Login
b)Limit defined for the number of tries.
c)Can authentication be bypassed by typing the URL of an internal page directly into the browser? (see the sketch below)
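As a minimal sketch of the URL-bypass check in 5(c) (the URL is a placeholder; note also that urlopen follows redirects, so a redirect to a login page will still come back as 200 and needs a content check in practice):

import urllib.request
from urllib.error import HTTPError

PROTECTED = "http://example.com/admin/reports"  # placeholder internal page

try:
    resp = urllib.request.urlopen(PROTECTED, timeout=10)
    # A 200 on a protected page with no session suggests the check failed,
    # unless the server silently served the login page instead.
    print("FAIL: got", resp.status, "without authentication")
except HTTPError as err:
    print("PASS: server refused with", err.code)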
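And here is a minimal sketch of the broken-link check from section 1 (the page URL is a placeholder): it collects every href on the page with the standard-library HTMLParser and reports links that fail or return an error status.

import urllib.request
from html.parser import HTMLParser
from urllib.parse import urljoin

PAGE = "http://example.com/"  # placeholder page under test

class LinkCollector(HTMLParser):
    """Gathers the href of every anchor tag, resolved against the page URL."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(urljoin(PAGE, value))

html = urllib.request.urlopen(PAGE, timeout=10).read().decode("utf-8", "replace")
collector = LinkCollector()
collector.feed(html)

for link in collector.links:
    try:
        status = urllib.request.urlopen(link, timeout=10).status
    except Exception as exc:  # broken link, time-out, or unsupported scheme
        status = exc
    print(link, "->", status)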
Labels:
General