
Software Testing

This software testing tutorial will help you practise and revise the main software testing concepts.

 

Discovery Phase

–Knowledge transfer, study of business processes, and study of the functional, system and project requirements.

Planning Phase –Test plan (configuration management, milestones), test procedures, tool selection, quality plan, resources (hardware/software/human), formats for test cases and reporting, issue escalation process, and a start on test cases for the key scenarios.

Setup / Development Phase –Test execution setup, configuration setup, generation of test data, test scripts and drivers, test bed setup, tool configuration.

Execution Phase –Test bed refresh, integrity/smoke test, test execution, metrics collection, analysis of new features to be tested, enhancement of test cases (if required), regression testing, test reports.
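As a small sketch, a smoke (integrity) test is a handful of fast checks run right after a build or test bed refresh, only to confirm the build is stable enough for deeper testing. The URL and pages below are illustrative assumptions, not part of any particular project:

# smoke_test.py -- run with: pytest smoke_test.py
# Hypothetical deployment under test; a smoke suite checks only critical paths.
import urllib.request

BASE_URL = "http://localhost:8080"   # assumed address of the freshly deployed build


def test_application_responds():
    with urllib.request.urlopen(f"{BASE_URL}/health", timeout=5) as resp:
        assert resp.status == 200


def test_login_page_is_served():
    with urllib.request.urlopen(f"{BASE_URL}/login", timeout=5) as resp:
        assert resp.status == 200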

Analysis Phase –Result analysis, test reports/charts, defect tracking, defect leakage analysis (a leakage calculation is sketched below).
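A minimal sketch of a defect-leakage calculation. The formula used here (defects that escaped a phase divided by all defects attributable to that phase) is one common convention; organisations define leakage slightly differently:

def defect_leakage(found_in_phase: int, found_later: int) -> float:
    """Percentage of defects that escaped the phase under analysis.

    found_in_phase: defects caught by testing in the current phase
    found_later:    defects of the same origin caught in later phases or by the customer
    """
    total = found_in_phase + found_later
    if total == 0:
        return 0.0
    return 100.0 * found_later / total


if __name__ == "__main__":
    # e.g. 45 defects caught in system testing, 5 slipped through to production
    print(f"Leakage: {defect_leakage(45, 5):.1f}%")   # Leakage: 10.0%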

Test Plan covers – Scope (type of testing, what will be tested and what will not be tested), scheduling, input documents, guidelines for preparation of test cases, guidelines for environment setup, guidelines for testing (the process used in testing, e.g. testing based on a test procedure document, or informal testing; in the case of integration testing, the order in which the units are integrated and tested is specified here), resources (hardware, software, human), risk/mitigation plan, quality plan, tools for defect tracking, testing tool, formats for reporting, process for review/escalation, configuration management, metrics, and test completion criteria.

Regression testing –Done to ensure that changes in a work product have not resulted in the malfunctioning of a previously working system.

–Similar in scope to a functional test, a regression test allows a consistent, repeatable validation of each new release of a product or Web site. Such testing ensures that reported product defects have been corrected in each new release and that no new quality problems were introduced in the maintenance process. Though regression testing can be performed manually, an automated test suite is often used to reduce the time and resources needed to perform the required testing (a minimal example follows).
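A minimal sketch of an automated regression check in pytest. The module name pricing and the function apply_discount are hypothetical; the idea is that previously reported defects and previously working behaviour are pinned down as test cases that every new build must pass again:

# regression_test_pricing.py -- run with: pytest regression_test_pricing.py
import pytest
from pricing import apply_discount   # hypothetical module under test


def test_defect_101_zero_quantity_not_discounted():
    # Defect #101 (already fixed): a zero-quantity order was once given a discount.
    assert apply_discount(price=100.0, quantity=0) == 0.0


@pytest.mark.parametrize("price,quantity,expected", [
    (100.0, 1, 100.0),    # no discount below the threshold
    (100.0, 10, 900.0),   # 10% discount at 10 units (previously working behaviour)
])
def test_previously_working_behaviour(price, quantity, expected):
    assert apply_discount(price=price, quantity=quantity) == expected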

•Traceability matrix –A document that links test cases to functional requirements. This is useful in case of failures: a failed test case can be traced back to the affected functional requirement.
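As a sketch, a traceability matrix can also be kept as data so the lookup is mechanical. The requirement and test-case ids below are made up for illustration:

TRACEABILITY = {
    "REQ-001 Login with valid credentials": ["TC-01", "TC-02"],
    "REQ-002 Lock account after 3 failed attempts": ["TC-03"],
    "REQ-003 Password reset by email": ["TC-04", "TC-05"],
}


def requirements_affected_by(failed_test_cases):
    """Trace failed test cases back to the functional requirements they cover."""
    return sorted(
        req for req, tcs in TRACEABILITY.items()
        if any(tc in failed_test_cases for tc in tcs)
    )


print(requirements_affected_by({"TC-03", "TC-05"}))
# ['REQ-002 Lock account after 3 failed attempts', 'REQ-003 Password reset by email']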

Functional Testing –Testing done using test data derived from the specified functional requirements, without regard to the final program structure. Also known as black box testing or behavioral testing.
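A black-box sketch: the test data comes purely from the specification (a leap year is divisible by 4, except centuries, which must be divisible by 400) and nothing is assumed about the implementation. The module calendar_utils and function leap_year are hypothetical:

import pytest
from calendar_utils import leap_year   # hypothetical unit under test


@pytest.mark.parametrize("year,expected", [
    (2024, True),    # divisible by 4
    (2023, False),   # not divisible by 4
    (1900, False),   # century not divisible by 400
    (2000, True),    # century divisible by 400
])
def test_leap_year_specification(year, expected):
    assert leap_year(year) is expected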

Structural testing –A testing approach that examines the program structure and derives test data from the program logic. This is also known as white box testing.
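By contrast, a white-box sketch derives the test data from the code's own logic, aiming to execute every branch of a small, purely illustrative function:

def classify_triangle(a, b, c):
    """Illustrative unit under test with three branches."""
    if a == b == c:
        return "equilateral"
    if a == b or b == c or a == c:
        return "isosceles"
    return "scalene"


# Structural tests: one case per branch of the code above.
def test_branch_equilateral():
    assert classify_triangle(2, 2, 2) == "equilateral"

def test_branch_isosceles():
    assert classify_triangle(2, 2, 3) == "isosceles"

def test_branch_scalene():
    assert classify_triangle(2, 3, 4) == "scalene"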

Component testing –In this testing the called components are replaced with stubs, simulators or trusted components, and the calling components are replaced with drivers or trusted super-components. This is used while testing components of software built using a component-based architecture.
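A minimal sketch of component testing with a stub, using Python's unittest.mock. The OrderService component and its payment_gateway dependency are hypothetical; the called component is stubbed so the component under test can be exercised in isolation, with the test itself acting as the driver:

from unittest.mock import Mock

# Hypothetical component under test; in real code this would be imported.
class OrderService:
    def __init__(self, payment_gateway):
        self.payment_gateway = payment_gateway

    def place_order(self, amount):
        # Calls out to another component; in component testing that
        # called component is replaced with a stub.
        if self.payment_gateway.charge(amount):
            return "CONFIRMED"
        return "PAYMENT_FAILED"


def test_order_confirmed_when_payment_succeeds():
    stub_gateway = Mock()                      # stub for the called component
    stub_gateway.charge.return_value = True
    service = OrderService(stub_gateway)       # the test acts as the driver

    assert service.place_order(50.0) == "CONFIRMED"
    stub_gateway.charge.assert_called_once_with(50.0)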

•Integration testing –An orderly progression of testing in which the software components or hardware components, or both, are combined and tested until the entire system has been integrated.

•For Web applications, the server-side caching strategy used in the production environment is simulated, to get accurate results for the functionality and performance of the application.
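As a sketch of integration testing, two hypothetical components (a cart module and a tax module, names and interfaces assumed here for illustration) are exercised together with no stubs between them, so it is their interaction that gets verified:

import pytest
from cart import Cart              # assumed to expose add(item, price) and total()
from tax import add_sales_tax      # assumed to expose add_sales_tax(amount, rate)


def test_cart_total_with_tax():
    cart = Cart()
    cart.add("book", 10.00)
    cart.add("pen", 2.50)
    # The real tax module processes the real cart total -- no stubs in between.
    assert add_sales_tax(cart.total(), rate=0.10) == pytest.approx(13.75)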

•Compatibility testing –Performed in order to verify that the product functions without difficulties or discrepancies due to incompatibility with a platform configuration. Tests are run on several different computer configurations that are considered the "industry standard".

•Performance testing –Testing to check performance and behavior of the application under varying conditions of load.

•Load testing –To test the reliability of the system under a (usually statistically representative) load.

•Stress testing –To stress a system to the breaking point in order to find potentially harmful defects, by subjecting the system to an unreasonable load while denying it the resources (e.g. RAM, disk, CPU cycles, interrupts) needed to process that load. A minimal load-test sketch follows.
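In practice, load and stress testing use dedicated tools (listed under Performance Test tools below), but the basic idea can be sketched with the standard library alone. The URL, user count and request count here are illustrative assumptions:

# load_sketch.py -- crude concurrent load against a single endpoint
import time
import urllib.request
from concurrent.futures import ThreadPoolExecutor

URL = "http://localhost:8080/health"   # hypothetical endpoint under test
CONCURRENT_USERS = 20
REQUESTS_PER_USER = 10


def one_user(_):
    timings = []
    for _ in range(REQUESTS_PER_USER):
        start = time.perf_counter()
        with urllib.request.urlopen(URL, timeout=5) as resp:
            resp.read()
        timings.append(time.perf_counter() - start)
    return timings


if __name__ == "__main__":
    with ThreadPoolExecutor(max_workers=CONCURRENT_USERS) as pool:
        all_timings = [t for user in pool.map(one_user, range(CONCURRENT_USERS)) for t in user]
    all_timings.sort()
    print(f"requests: {len(all_timings)}")
    print(f"median:   {all_timings[len(all_timings) // 2]:.3f}s")
    print(f"p95:      {all_timings[int(len(all_timings) * 0.95)]:.3f}s")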

 

Test procedure

Each step is recorded as: Sl. No. | Action | Expected Result | Actual Result

The procedure header records: test procedure id, version, reference documents, test cases, date, prepared by, revision history, interdependent modules, and the application under test.

Test report

Project id, date, build (not applicable in the case of unit testing), unit id, test procedure id and version, time taken for execution, test results (test case number, result (Pass/Fail/Not Executed), defect id), tested by, and a test summary (number of test cases planned, executed, passed, failed).
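As a sketch, the test summary portion of the report can be derived directly from the per-test-case results. The result codes P/F/NE follow the convention above; the data itself is made up:

from collections import Counter

# Hypothetical per-test-case results: (test case no, result, defect id)
RESULTS = [
    ("TC-01", "P", None),
    ("TC-02", "F", "DEF-17"),
    ("TC-03", "P", None),
    ("TC-04", "NE", None),   # not executed
]

PLANNED = 5   # one planned test case never made it into RESULTS

counts = Counter(result for _, result, _ in RESULTS)
summary = {
    "planned":  PLANNED,
    "executed": counts["P"] + counts["F"],
    "passed":   counts["P"],
    "failed":   counts["F"],
}
print(summary)   # {'planned': 5, 'executed': 3, 'passed': 2, 'failed': 1}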

 

•Functional Test

–WinRunner (Mercury), Rational Robot (Rational), SilkTest (Segue), Visual Test (Rational), eTest (Empirix)

•Performance Test

–LoadRunner (Mercury), SiteLoad (Rational), WAST (Web Application Stress Tool) (Microsoft), SilkPerformer (Segue), eLoad (Empirix)

•Analysis Tools

–Visual PureCoverage, Quantify, Purify (Rational)

What is 'Software Quality Assurance'?
Software QA involves the entire software development PROCESS - monitoring and improving the process, making sure that any agreed-upon standards and procedures are followed, and ensuring that problems are found and dealt with. It is oriented to 'prevention'.

What is 'Software Testing'?
Testing involves operation of a system or application under controlled conditions and evaluating the results (e.g., 'if the user is in interface A of the application while using hardware B, and does C, then D should happen'). The controlled conditions should include both normal and abnormal conditions. Testing should intentionally attempt to make things go wrong, to determine whether things happen when they shouldn't or things don't happen when they should. It is oriented to 'detection'.

Verification typically involves reviews and meetings to evaluate documents, plans, code, requirements, and specifications. This can be done with checklists, issues lists, walkthroughs, and inspection meetings. Validation typically involves actual testing and takes place after verifications are completed. The term 'IV & V' refers to Independent Verification and Validation.

A 'walkthrough' is an informal meeting for evaluation or informational purposes. Little or no preparation is usually required.

An inspection is more formalized than a 'walkthrough', typically with 3-8 people including a moderator, reader, and a recorder to take notes. The subject of the inspection is typically a document such as a requirements spec or a test plan, and the purpose is to find problems and see what's missing, not to fix anything.

Quality software is reasonably bug-free, delivered on time and within budget, meets requirements and/or expectations, and is maintainable.

'Good code' is code that works, is bug free, and is readable and maintainable.

'Design' could refer to many things, but often refers to 'functional design' or 'internal design'. Good internal design is indicated by software code whose overall structure is clear, understandable, easily modifiable, and maintainable; is robust with sufficient error-handling and status logging capability; and works correctly when implemented.