As the complexity of software applications increases, testing becomes more crucial – and more time-consuming. Here is a look at some emerging testing practices.

Software is everywhere today and is becoming increasingly mission-critical, whether in satellites and planes or e-commerce websites. Software complexity is also on the rise – thanks to distributed, multi-tier applications targeting multiple platforms (mobile devices, thin/thick clients, the cloud, etc.). Added to that are development methodologies like extreme programming and agile development. No wonder software testing professionals are finding it hard to keep up with the pace of change.

As a result, many projects fail, while many others are completed significantly late and provide only a subset of the originally planned functionality. Poorly tested software and buggy code cost corporations billions of dollars annually, and most defects are found by end users in production environments.
Given the magnitude of the problem, software testing professionals are finding innovative means of keeping up – both in terms of tools and methodologies. This article covers some of the recent trends in software testing – and why they’re making the headlines.

Test driven development (TDD)

TDD is a software development technique that ensures your source code is thoroughly unit-tested, in contrast to traditional testing methodologies, where unit testing is recommended but not enforced. It combines test-first development (where you write a test before writing just enough code to fulfil that test) and refactoring (where, if the existing design is not the best possible one for implementing a particular piece of functionality, you improve it to enable the new feature).
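As a minimal sketch of that test-first cycle (the `is_leap_year` function and its tests here are purely illustrative):

```python
import unittest

# Test-first: these tests are written before the function exists, so
# the first run fails ("red"). The function below is then written with
# just enough logic to make them pass ("green").
class LeapYearTests(unittest.TestCase):
    def test_divisible_by_four(self):
        self.assertTrue(is_leap_year(2024))

    def test_century_is_not_leap(self):
        self.assertFalse(is_leap_year(1900))

    def test_four_hundredth_year_is_leap(self):
        self.assertTrue(is_leap_year(2000))


def is_leap_year(year):
    # Minimal implementation driven entirely by the tests above.
    return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)


if __name__ == "__main__":
    unittest.main()
```

Each new requirement starts with a failing test; once the test passes, the design can be refactored freely, with the test suite guarding against regressions.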

TDD is not a new technique, but it is suddenly centre stage, thanks to the continued popularity of software development methodologies such as agile development and extreme programming.

Optimisations to TDD include the use of tools (such as Pex for Visual Studio – http://research.microsoft.com/en-us/projects/pex/ ) to improve code coverage by creating parameterised unit tests that look for boundary conditions, exceptions, and assertion failures.

TDD is gaining popularity as it allows for incremental software development – where bugs are detected and fixed as soon as the code is written, rather than at the end of an iteration or a milestone.

For more details on TDD, use the following links:
http://en.wikipedia.org/wiki/Test-driven_development
http://www.agiledata.org/essays/tdd.html

Virtualisation testing
Testing is becoming increasingly complex – the test environment set-up, getting people access to the environment, and loading it with the right bits from development all take up about 30-50 per cent of the total testing time in a typical organisation. Worse, when testers find bugs, it is hard to re-create the same environment for developers to investigate and fix them. Test organisations are increasingly gravitating towards virtualisation technologies to cut test set-up times significantly. These technologies help teams to:

* accelerate the set-up, tear-down and restoration of complex virtual environments to a clean state, improving machine utilisation

* eliminate 'no repro' bugs by allowing developers to recreate complex environments easily

* improve quality by automating virtual machine provisioning, build deployment, and build verification testing in an integrated manner (details later)

As an offshoot, virtualisation ensures that test labs reduce their energy footprint, resulting in a positive environmental impact, as well as significant savings.

Some of the companies that have virtual test lab management solutions are VMware, VMLogix, and Surgient. Microsoft has recently announced a Lab Management (http://channel9.msdn.com/posts/VisualStudio/Lab-Management-coming-to-Visual-Studio-Team-System-2010/) product as part of its Visual Studio Team System 2010 release. Lab Management supports multiple environment management, snapshots to easily restore to a previous state, virtual network isolation to allow multiple test environments to run concurrently, and a workflow to allow developers to have easy access to environments to reproduce and fix defects.
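The snapshot-and-restore idea at the heart of these products can be modelled in a few lines. This is a toy sketch – the `VirtualLab` class and its methods are hypothetical, not any vendor's actual API:

```python
import copy

# Toy model of snapshot/restore in a virtual test lab. Real
# lab-management products expose similar operations through
# their own APIs; the names here are invented for illustration.
class VirtualLab:
    def __init__(self, machines):
        self.machines = machines        # machine name -> installed software
        self._snapshots = {}

    def snapshot(self, label):
        # Capture the state of every machine under a label.
        self._snapshots[label] = copy.deepcopy(self.machines)

    def restore(self, label):
        # Roll the whole environment back to a labelled state -- e.g. to
        # reproduce a bug, or to get a clean slate for the next test run.
        self.machines = copy.deepcopy(self._snapshots[label])


lab = VirtualLab({"web": ["os", "iis"], "db": ["os", "sql"]})
lab.snapshot("clean")

lab.machines["web"].append("build-1042")   # deploy the build under test
lab.restore("clean")                        # one-step tear-down
print(lab.machines["web"])                  # ['os', 'iis']
```

The point of the sketch is that restoring a labelled snapshot replaces hours of manual environment rebuilding with a single operation – which is also what makes 'no repro' bugs so much easier to investigate.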

Theresa Lanowitz, founder of Voke, a firm that analyses trends in the IT world, expects virtualisation to become ‘the defining technology of the 21st century’, with organisations of every size set to benefit from virtualisation as part of their core infrastructure.

Continuous integration
Continuous integration (CI) is rapidly being adopted in testing: team members integrate their work with the rest of the development team frequently, committing all changes to a central version control system. Beyond maintaining a common code repository, other characteristics of a CI environment include build automation, automatic deployment of the build into a production-like environment, and a self-test mechanism ensuring that, at the very least, a minimal set of tests is run to confirm that the code behaves as expected.

Leveraging virtualised test environments, tools such as Microsoft’s Visual Studio Team System (VSTS) can create sophisticated CI workflows. As soon as code is checked in, a build workflow kicks in that compiles the code, deploys it onto a virtualised test environment, triggers a set of unit and functional tests on that environment, and reports on the results.

VSTS takes the build workflow one step further and performs the build before the check-in is finalised, allowing the check-in to be aborted if it would break the build or fail the tests. And given historical code-coverage data from test runs, the tool can identify which of the several thousand test cases need to be run when a new build comes out – significantly reducing build validation time.
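The coverage-based selection idea can be sketched independently of any particular tool. The data below is made up for illustration – in practice the coverage map would come from instrumented test runs:

```python
# Coverage-based test selection: given a map of which source files each
# test exercised in previous runs, pick only the tests affected by the
# files changed in a new check-in.

def select_tests(coverage_map, changed_files):
    """coverage_map: {test name: set of source files it exercised}."""
    changed = set(changed_files)
    return sorted(
        test for test, files in coverage_map.items()
        if files & changed            # test touched at least one changed file
    )


coverage_map = {
    "test_login": {"auth.py", "session.py"},
    "test_checkout": {"cart.py", "payment.py"},
    "test_profile": {"auth.py", "profile.py"},
}

print(select_tests(coverage_map, ["auth.py"]))
# ['test_login', 'test_profile']
```

With thousands of tests, running only the affected subset on each check-in is what keeps a gated build workflow fast enough to be practical.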

One obvious benefit of continuous integration is transparency. Failed builds and tests are found quickly, rather than waiting for the next scheduled build. The developer who checked in the offending code is probably still nearby and can quickly fix or roll back the change.

For a complete set of tools that help enable CI, see http://en.wikipedia.org/wiki/Continuous_Integration.

Crowd testing
Crowd testing is an emerging trend in which, rather than relying on a dedicated team of testers (in-house or outsourced), companies rely on virtual test teams, created on demand, to get complete test coverage and reduce the time to market for their applications.

The company defines its test requirements in terms of scenarios, environments, and the type of testing (functional, performance, etc.). A crowd testing vendor (such as uTest – www.utest.com) identifies a pool of testers that meet the requirements, creates a project, and assigns work. Testers check the application, report bugs, and communicate with the company via an online portal. Crowd testing vendors also provide other tools, such as powerful reporting engines and test optimisation utilities. Some crowd testing vendors are domain-specific – such as Mob4hire (www.mob4hire.com), which focuses on mobile application testing. Testers bid on projects specific to their handsets; developers choose the testers they require and deploy test plans for the mobile applications they are developing. On completion of a test, the tester gets paid for the work.

One obvious advantage is in terms of reducing the test cycle time. But crowd testing is being used in various other scenarios as well – for example, to do usability studies on new user interfaces. The cost savings can be substantial.

Tool-driven developer testing
Traditionally, developer testing was limited primarily to unit testing and some code coverage metrics. However, as organisations realised that the cost of fixing a defect found in development is dramatically lower than that of one found in test or production, they have begun to invest in tooling that enables developers to find bugs early on.

IDE-integrated tools have made the self-testing practice acceptable to developers by automating the unit-testing and coverage analysis process for them. These tools also make it easy to analyse performance and compare it against a baseline by extending the unit-test infrastructure.

Development teams are also expected to perform a level of security testing (threat modelling, buffer overflows, SQL injection, etc.). For teams developing in native languages such as C/C++, developers are also required to use run-time analysis tools to check for memory leaks, memory corruption and thread deadlocks. Developers are also using static analysis tools to find accessibility, localisation and globalisation issues – and, in some cases, more sophisticated errors related to memory management and performance simulation – using data flow analysis and other techniques.
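The SQL injection risk mentioned above is easy to demonstrate. This illustrative sketch uses Python's built-in `sqlite3` module; the table and the attack string are invented for the example:

```python
import sqlite3

# Why string-built SQL is vulnerable, and how parameterised
# queries avoid it.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, is_admin INTEGER)")
conn.execute("INSERT INTO users VALUES ('alice', 0)")

malicious = "nobody' OR '1'='1"

# Unsafe: the attacker's input is concatenated into the query text,
# so the injected OR clause matches every row in the table.
unsafe = conn.execute(
    "SELECT name FROM users WHERE name = '" + malicious + "'"
).fetchall()

# Safe: the driver passes the value as a bound parameter, so it is
# treated as a literal string and matches nothing.
safe = conn.execute(
    "SELECT name FROM users WHERE name = ?", (malicious,)
).fetchall()

print(unsafe)  # [('alice',)] -- injection succeeded
print(safe)    # []           -- injection neutralised
```

A static analysis tool flags the first query pattern (user input concatenated into SQL text) precisely because the difference between the two results above is invisible in normal functional testing.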

As a result of using these innovative methods, testers can now spend much more of their time on integration, stress, platform-coverage and end-to-end scenario testing. This helps them detect higher-level defects that would otherwise have trickled down to production.