The databases and the database processes should be tested as an independent subsystem, without
using the target-of-test's User Interface as the interface to the data. Additional research into
the Database Management System (DBMS) is needed to identify the tools and techniques that may
exist to support the testing identified in the following table.
Technique Objective:
|
Exercise database access methods and processes independent of the UI so you can observe and
log incorrectly functioning target behavior or data corruption.
|
Technique:
|
Invoke each database access method and process, seeding each with valid and invalid data or
requests for data.
Inspect the database to ensure the data has been populated as intended and all database
events have occurred properly, or review the returned data to ensure that the correct data
was retrieved for the correct reasons.
|
Oracles:
|
Outline one or more strategies that can be used by the technique to accurately observe the
outcomes of the test. The oracle combines elements of both the method by which the
observation can be made and the characteristics of the specific outcome that indicate probable
success or failure. Ideally, oracles will be self-verifying, allowing automated tests to
make an initial assessment of pass or fail; however, be careful to mitigate the
risks inherent in automated results determination.
|
Required Tools:
|
The technique requires the following tools:
Test Script Automation Tool
base configuration imager and restorer
backup and recovery tools
installation-monitoring tools (registry, hard disk, CPU, memory, and so on)
database SQL utilities and tools
data-generation tools
|
Success Criteria:
|
The technique supports the testing of all key database access methods and processes.
|
Special Considerations:
|
Testing may require a DBMS development environment or drivers to enter or modify data
directly in the database.
Processes should be invoked manually.
Small or minimally sized databases (with a limited number of records) should be used to
increase the visibility of any unacceptable events.
|
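The technique above can be sketched as a small harness. This is a minimal illustration using an in-memory SQLite database; the `accounts` table, its constraint, and the seeded values are hypothetical stand-ins for the project's real schema and database access methods.

```python
import sqlite3

# Exercise a database access method directly, bypassing the UI.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE accounts (id INTEGER PRIMARY KEY, "
    "balance REAL NOT NULL CHECK (balance >= 0))"
)

# Seed with valid data, then inspect the database to confirm it was populated.
conn.execute("INSERT INTO accounts (id, balance) VALUES (1, 100.0)")
row = conn.execute("SELECT balance FROM accounts WHERE id = 1").fetchone()
assert row == (100.0,)

# Seed with invalid data and confirm the database rejects it.
try:
    conn.execute("INSERT INTO accounts (id, balance) VALUES (2, -50.0)")
    raise AssertionError("negative balance should have been rejected")
except sqlite3.IntegrityError:
    pass  # expected: the CHECK constraint fired

# Inspect again: no partial or corrupt data should remain.
count = conn.execute("SELECT COUNT(*) FROM accounts").fetchone()[0]
assert count == 1
conn.close()
```

In practice the same pattern applies to each access method and stored process, with the DBMS's own SQL utilities used for the inspection step.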
Function testing of the target-of-test should focus on any requirements for test that can be traced
directly to use cases or business functions and business rules. The goals of these tests are to verify
proper data acceptance, processing, and retrieval, and the appropriate implementation of the business
rules. This type of testing is based upon black box techniques; that is, verifying the application and its
internal processes by interacting with the application via the Graphical User Interface (GUI) and analyzing
the output or results. The following table identifies an outline of the testing recommended for each
application.
Technique Objective:
|
Exercise target-of-test functionality, including navigation, data entry, processing, and
retrieval to observe and log target behavior.
|
Technique:
|
Exercise each use-case scenario's individual flows or functions and features,
using valid and invalid data, to verify that:
the expected results occur when valid data is used
the appropriate error or warning messages are displayed when invalid data is used
each business rule is properly applied
|
Oracles:
|
Outline one or more strategies that can be used by the technique to accurately observe the
outcomes of the test. The oracle combines elements of both the method by which the
observation can be made and the characteristics of the specific outcome that indicate probable
success or failure. Ideally, oracles will be self-verifying, allowing automated tests to
make an initial assessment of pass or fail; however, be careful to mitigate the
risks inherent in automated results determination.
|
Required Tools:
|
The technique requires the following tools:
Test Script Automation Tool
base configuration imager and restorer
backup and recovery tools
installation-monitoring tools (registry, hard disk, CPU, memory, and so on)
data-generation tools
|
Success Criteria:
|
The technique supports the testing of:
all key use-case scenarios
all key features
|
Special Considerations:
|
Identify or describe those items or issues (internal or external) that impact the
implementation and execution of function test.
|
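The valid/invalid-data checks in the technique above are typically expressed as data-driven cases. A minimal sketch follows; `validate_age` and its messages are hypothetical stand-ins for a business rule in the target-of-test, which in practice would be driven through the GUI rather than called directly.

```python
# Data-driven function testing: each case pairs an input with the expected
# outcome (a result for valid data, an error message for invalid data).
def validate_age(age):
    # Hypothetical business rule standing in for the target-of-test.
    if not isinstance(age, int) or age < 0:
        return ("error", "Age must be a non-negative integer")
    if age > 150:
        return ("error", "Age exceeds allowed maximum")
    return ("ok", age)

cases = [
    (30, ("ok", 30)),                                       # valid data
    (-1, ("error", "Age must be a non-negative integer")),  # invalid data
    (200, ("error", "Age exceeds allowed maximum")),        # business rule
]

for value, expected in cases:
    actual = validate_age(value)
    assert actual == expected, f"{value!r}: got {actual}, expected {expected}"
```

Keeping the cases as data makes it straightforward to extend coverage as new business rules are identified.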
Business Cycle Testing should emulate the tasks performed on the <Project Name> over time. A
period, such as one year, should be identified, and the transactions and tasks that would occur
during that period should be executed. This includes all daily, weekly, and monthly cycles, and
date-sensitive events, such as ticklers.
Technique Objective:
|
Exercise target-of-test and background processes according to required business models and
schedules to observe and log target behavior.
|
Technique:
|
Testing will simulate several business cycles by performing the following:
The tests used for the target-of-test's function testing will be modified or enhanced to
increase the number of times each function is executed, simulating several different users
over a specified period.
All time or date-sensitive functions will be executed using valid and invalid dates or time
periods.
All functions that occur on a periodic schedule will be executed or launched at the
appropriate time.
Testing will include using valid and invalid data to verify the following:
The expected results occur when valid data is used.
The appropriate error or warning messages are displayed when invalid data is used.
Each business rule is properly applied.
|
Oracles:
|
Outline one or more strategies that can be used by the technique to accurately observe the
outcomes of the test. The oracle combines elements of both the method by which the
observation can be made and the characteristics of the specific outcome that indicate probable
success or failure. Ideally, oracles will be self-verifying, allowing automated tests to
make an initial assessment of pass or fail; however, be careful to mitigate the
risks inherent in automated results determination.
|
Required Tools:
|
The technique requires the following tools:
Test Script Automation Tool
base configuration imager and restorer
backup and recovery tools
data-generation tools
|
Success Criteria:
|
The technique supports the testing of all critical business cycles.
|
Special Considerations:
|
System dates and events may require special support tasks.
A business model is required to identify appropriate test requirements and procedures.
|
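The cycle-emulation idea above amounts to stepping simulated time through the chosen period and firing periodic tasks at the appropriate points, rather than waiting for real calendar time. A minimal sketch, in which the daily and month-end tasks are hypothetical placeholders:

```python
from datetime import date, timedelta

# Simulate one business year: run daily tasks every day and month-end tasks
# at each month boundary. The task bodies stand in for real test procedures.
executed = {"daily": 0, "monthly": 0}

def run_daily_tasks(day):      # stand-in for the system's daily batch
    executed["daily"] += 1

def run_monthly_tasks(day):    # stand-in for month-end processing
    executed["monthly"] += 1

day = date(2024, 1, 1)
end = date(2025, 1, 1)
while day < end:
    run_daily_tasks(day)
    nxt = day + timedelta(days=1)
    if nxt.month != day.month:          # month boundary crossed
        run_monthly_tasks(day)
    day = nxt

assert executed["daily"] == 366         # 2024 is a leap year
assert executed["monthly"] == 12
```

Date-sensitive functions are then exercised by passing both valid and invalid simulated dates into the same loop.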
User Interface (UI) testing verifies a user's interaction with the software. The goal of UI testing is to
ensure that the UI provides the user with the appropriate access and navigation through the functions of
the target-of-test. In addition, UI testing ensures that the objects within the UI function as expected and
conform to corporate, or industry, standards.
Technique Objective:
|
Exercise the following to observe and log standards conformance and target behavior:
Navigation through the target-of-test reflecting business functions and requirements,
including window-to-window, field-to-field, and use of access methods (tab keys, mouse
movements, accelerator keys).
Window objects and characteristics, such as menus, size, position, state,
and focus.
|
Technique:
|
Create or modify tests for each window to verify proper navigation and object states for
each application window and object.
|
Oracles:
|
Outline one or more strategies that can be used by the technique to accurately observe the
outcomes of the test. The oracle combines elements of both the method by which the
observation can be made and the characteristics of the specific outcome that indicate probable
success or failure. Ideally, oracles will be self-verifying, allowing automated tests to
make an initial assessment of pass or fail; however, be careful to mitigate the
risks inherent in automated results determination.
|
Required Tools:
|
The technique requires the Test Script Automation Tool.
|
Success Criteria:
|
The technique supports the testing of each major screen or window that will be used
extensively by the user.
|
Special Considerations:
|
Not all properties for custom and third-party objects can be accessed.
|
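A navigation or object-state check generally reduces to comparing what the UI automation tool observed against what the requirements specify. A minimal sketch of that comparison; the field names, recorded sequence, and state values here are hypothetical, and in practice the recorded data would come from the Test Script Automation Tool.

```python
# Verify tab-key navigation order against the specification.
specified_tab_order = ["name", "address", "phone", "save_button"]
recorded_tab_order = ["name", "address", "phone", "save_button"]  # from the tool

assert recorded_tab_order == specified_tab_order, (
    f"tab order mismatch: {recorded_tab_order}"
)

# Object-state checks follow the same pattern: assert each window object's
# observed properties (enabled, visible, focus) against the specification.
observed_state = {"save_button": {"enabled": False}}   # before data entry
assert observed_state["save_button"]["enabled"] is False
```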
Performance profiling is a performance test in which response times, transaction rates, and other
time-sensitive requirements are measured and evaluated. The goal of Performance Profiling is to verify
performance requirements have been achieved. Performance profiling is implemented and executed to profile
and tune a target-of-test's performance behaviors as a function of conditions, such as workload or hardware
configurations.
Note: Transactions in the following table refer to "logical business transactions". These
transactions are defined as specific use cases that an actor of the system is expected to perform using the
target-of-test, such as adding or modifying a given contract.
Technique Objective:
|
Exercise behaviors for designated functional transactions or business functions under the
following conditions to observe and log target behavior and application performance data:
normal anticipated workload
anticipated worst-case workload
|
Technique:
|
Use Test Procedures developed for Function or Business Cycle Testing.
Modify data files to increase the number of transactions or the scripts to increase the
number of iterations that occur in each transaction.
Scripts should be run on one machine (best case is to benchmark single user, single
transaction) and should be repeated with multiple clients (virtual or actual, see Special
Considerations below).
|
Oracles:
|
Outline one or more strategies that can be used by the technique to accurately observe the
outcomes of the test. The oracle combines elements of both the method by which the
observation can be made and the characteristics of the specific outcome that indicate probable
success or failure. Ideally, oracles will be self-verifying, allowing automated tests to
make an initial assessment of pass or fail; however, be careful to mitigate the
risks inherent in automated results determination.
|
Required Tools:
|
The technique requires the following tools:
Test Script Automation Tool
an application performance profiling tool, such as Rational Quantify
installation-monitoring tools (registry, hard disk, CPU, memory, and so on)
resource-constraining tools; for example, Canned Heat
|
Success Criteria:
|
The technique supports testing:
Single Transaction or single user: Successful emulation of the transaction scripts without
any failures due to test implementation problems.
Multiple transactions or multiple users: Successful emulation of the workload without any
failures due to test implementation problems.
|
Special Considerations:
|
Comprehensive performance testing includes having a background workload on the server.
There are several methods that can be used to perform this, including:
"Drive transactions" directly to the server, usually in the form of Structured Query
Language (SQL) calls.
Create "virtual" user load to simulate many clients, usually several hundred. Remote
Terminal Emulation tools are used to accomplish this load. This technique can also be used
to load the network with "traffic".
Use multiple physical clients, each running test scripts, to place a load on the
system.
Performance testing should be performed on a dedicated machine or at a dedicated time. This
permits full control and accurate measurement.
The databases used for Performance Testing should be either actual size or scaled equally.
|
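The single-user, single-transaction benchmark described above can be sketched as a timing loop over one logical business transaction. The transaction body here is a placeholder for a real script against the target-of-test, and the commented-out threshold stands in for the actual performance requirement.

```python
import time

# Single-user transaction profiling: time repeated iterations of one logical
# business transaction and report response-time statistics.
def transaction():
    return sum(range(10_000))   # placeholder workload

iterations = 100
samples = []
for _ in range(iterations):
    start = time.perf_counter()
    transaction()
    samples.append(time.perf_counter() - start)

avg = sum(samples) / len(samples)
worst = max(samples)
print(f"avg {avg * 1000:.3f} ms, worst {worst * 1000:.3f} ms over {iterations} runs")
# assert avg < REQUIRED_RESPONSE_TIME  # substitute the project's requirement
```

The same loop, repeated across multiple clients, provides the multi-user data points for the workload comparison.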
Load testing is a performance test that subjects the target-of-test to varying workloads to measure and
evaluate the performance behaviors and abilities of the target-of-test to continue to function properly
under these different workloads. The goal of load testing is to determine and ensure that the system
functions properly beyond the expected maximum workload. Additionally, load testing evaluates the
performance characteristics, such as response times, transaction rates, and other time-sensitive issues.
Note: Transactions in the following table refer to "logical business transactions". These
transactions are defined as specific functions that a user of the system is expected to perform using the
application, such as adding or modifying a given contract.
Technique Objective:
|
Exercise designated transactions or business cases under varying workload conditions to
observe and log target behavior and system performance data.
|
Technique:
|
Use Transaction Test Scripts developed for Function or Business Cycle Testing as a basis,
but remember to remove unnecessary interactions and delays.
Modify data files to increase the number of transactions or the tests to increase the
number of times each transaction occurs.
Workloads should include, for example, daily, weekly, and monthly peak loads.
Workloads should represent both average and peak loads.
Workloads should represent both instantaneous and sustained peaks.
The workloads should be executed under different Test Environment Configurations.
|
Oracles:
|
Outline one or more strategies that can be used by the technique to accurately observe the
outcomes of the test. The oracle combines elements of both the method by which the
observation can be made and the characteristics of the specific outcome that indicate probable
success or failure. Ideally, oracles will be self-verifying, allowing automated tests to
make an initial assessment of pass or fail; however, be careful to mitigate the
risks inherent in automated results determination.
|
Required Tools:
|
The technique requires the following tools:
Test Script Automation Tool
Transaction load scheduling and control tool
installation-monitoring tools (registry, hard disk, CPU, memory, and so on)
resource-constraining tools; for example, Canned Heat
data-generation tools
|
Success Criteria:
|
The technique supports the testing of Workload Emulation, which is the successful emulation
of the workload without any failures due to test implementation problems.
|
Special Considerations:
|
Load testing should be performed on a dedicated machine or at a dedicated time. This
permits full control and accurate measurement.
The databases used for load testing should be either actual size or scaled equally.
|
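The "virtual user" approach in the special considerations above can be sketched with threads standing in for Remote Terminal Emulation clients. The transaction body is a placeholder; the completion count is lock-protected so the measurement itself is race-free.

```python
import threading
import time

# Workload emulation: N virtual users each run the transaction script
# concurrently; throughput is measured over the whole run.
completed = 0
lock = threading.Lock()

def transaction():
    sum(range(1000))             # placeholder work

def virtual_user(n_transactions):
    global completed
    for _ in range(n_transactions):
        transaction()
        with lock:
            completed += 1

users, per_user = 10, 50
start = time.perf_counter()
threads = [threading.Thread(target=virtual_user, args=(per_user,))
           for _ in range(users)]
for t in threads:
    t.start()
for t in threads:
    t.join()
elapsed = time.perf_counter() - start

assert completed == users * per_user          # no transaction lost
print(f"{completed} transactions in {elapsed:.2f}s "
      f"({completed / elapsed:.0f} tx/s)")
```

Varying `users` and `per_user` gives the average, peak, instantaneous, and sustained workload shapes called for above.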
Stress testing is a type of performance test implemented and executed to understand how a system fails due
to conditions at the boundary of, or outside, the expected tolerances. This typically involves low
resources or competition for resources. Low-resource conditions reveal failure modes in the target-of-test
that are not apparent under normal conditions. Other defects might result from competition for shared
resources, such as database locks or network bandwidth, although some of these tests are usually addressed
under functional and load testing.
Note: References to transactions in the following table refer to logical business transactions.
Technique Objective:
|
Exercise the target-of-test functions under the following stress conditions to observe and
log target behavior that identifies and documents the conditions under which the system
fails to continue functioning properly:
little or no memory available on the server (RAM and persistent storage space)
maximum actual or physically capable number of clients connected or simulated
multiple users performing the same transactions against the same data or accounts
"overload" transaction volume or mix (see Performance Profiling above)
|
Technique:
|
Use tests developed for Performance Profiling or Load Testing.
To test limited resources, tests should be run on a single machine, and RAM and persistent
storage space on the server should be reduced or limited.
For remaining stress tests, multiple clients should be used, either running the same tests
or complementary tests to produce the worst-case transaction volume or mix.
|
Oracles:
|
Outline one or more strategies that can be used by the technique to accurately observe the
outcomes of the test. The oracle combines elements of both the method by which the
observation can be made and the characteristics of the specific outcome that indicate probable
success or failure. Ideally, oracles will be self-verifying, allowing automated tests to
make an initial assessment of pass or fail; however, be careful to mitigate the
risks inherent in automated results determination.
|
Required Tools:
|
The technique requires the following tools:
Test Script Automation Tool
Transaction load scheduling and control tool
installation-monitoring tools (registry, hard disk, CPU, memory, and so on)
resource-constraining tools; for example, Canned Heat
data-generation tools
|
Success Criteria:
|
The technique supports the testing of Stress Emulation. The system can be emulated
successfully in one or more conditions defined as stress conditions, and an observation of
the resulting system state, during and after the condition has been emulated, can be
captured.
|
Special Considerations:
|
Stressing the network may require network tools to load the network with messages or
packets.
The persistent storage used for the system should temporarily be reduced to restrict the
available space for the database to grow.
Synchronize simultaneous clients' access to the same records or data accounts.
|
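The "multiple users performing the same transactions against the same data" condition, together with the synchronization note above, can be sketched with a barrier that releases all clients at once so their updates genuinely collide. The shared record and the lock (modeling the database's row lock) are stand-ins; the final assertion detects lost updates.

```python
import threading

# Stress condition: many clients hammer the same record simultaneously.
CLIENTS = 8
UPDATES = 1000
record = {"balance": 0}
barrier = threading.Barrier(CLIENTS)
lock = threading.Lock()          # models the DBMS's row lock

def client():
    barrier.wait()               # all clients start at the same instant
    for _ in range(UPDATES):
        with lock:
            record["balance"] += 1

threads = [threading.Thread(target=client) for _ in range(CLIENTS)]
for t in threads:
    t.start()
for t in threads:
    t.join()

# With correct locking, no updates are lost under contention.
assert record["balance"] == CLIENTS * UPDATES
```

Removing the lock in such a harness is a quick way to confirm that the harness can actually detect lost-update failures before trusting its pass results.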
Volume testing subjects the target-of-test to large amounts of data to determine if limits are reached that
cause the software to fail. Volume testing also identifies the continuous maximum load or volume the
target-of-test can handle for a given period. For example, if the target-of-test is processing a set of
database records to generate a report, a Volume Test would use a large test database, and would check that
the software behaved normally and produced the correct report.
Technique Objective:
|
Exercise the target-of-test functions under the following high volume scenarios to observe
and log target behavior:
Maximum (actual or physically-capable) number of clients connected, or simulated, all
performing the same, worst case (performance) business function for an extended period.
Maximum database size has been reached (actual or scaled) and multiple queries or report
transactions are executed simultaneously.
|
Technique:
|
Use tests developed for Performance Profiling or Load Testing.
Multiple clients should be used, either running the same tests or complementary tests to
produce the worst-case transaction volume or mix (see Stress Testing) for an extended
period.
Maximum database size is created (actual, scaled, or filled with representative data), and
multiple clients are used to run queries and report transactions simultaneously for
extended periods.
|
Oracles:
|
Outline one or more strategies that can be used by the technique to accurately observe the
outcomes of the test. The oracle combines elements of both the method by which the
observation can be made and the characteristics of the specific outcome that indicate probable
success or failure. Ideally, oracles will be self-verifying, allowing automated tests to
make an initial assessment of pass or fail; however, be careful to mitigate the
risks inherent in automated results determination.
|
Required Tools:
|
The technique requires the following tools:
Test Script Automation Tool
Transaction load scheduling and control tool
installation-monitoring tools (registry, hard disk, CPU, memory, and so on)
resource-constraining tools; for example, Canned Heat
data-generation tools
|
Success Criteria:
|
The technique supports the testing of Volume Emulation. Large quantities of users, data,
transactions, or other aspects of the system use under volume can be successfully emulated
and an observation of the system state changes over the duration of the volume test can be
captured.
|
Special Considerations:
|
What period of time would be considered an acceptable time for high volume conditions, as
noted above?
|
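The maximum-database-size scenario above can be sketched by populating a table with a large number of records and verifying that a report query still returns correct results. The row count here is deliberately scaled down; a real volume test would use an actual-size or equally scaled database, per the load-testing considerations.

```python
import sqlite3

# Volume test sketch: load many records, then check a report over the full set.
ROWS = 100_000
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, amount REAL)")
conn.executemany(
    "INSERT INTO orders (id, amount) VALUES (?, ?)",
    ((i, float(i % 100)) for i in range(ROWS)),
)

# The "report": aggregate over the full volume and verify it is still correct.
total, count = conn.execute(
    "SELECT SUM(amount), COUNT(*) FROM orders"
).fetchone()
assert count == ROWS
assert total == sum(float(i % 100) for i in range(ROWS))
conn.close()
```

The independently computed expected total is what makes the check self-verifying rather than merely confirming the query ran.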
Security and Access Control Testing focuses on two key areas of security:
Application-level security, including access to the Data or Business Functions
System-level Security, including logging into or remotely accessing the system
Depending on the level of security required, application-level security ensures that actors are restricted
to specific functions or use cases, or are limited in the data that is available to them. For example,
everyone may be permitted to enter data and create new accounts, but only managers can delete them. If
there is security at the data level, testing ensures that "user type one" can see all customer information,
including financial data, while "user type two" sees only the demographic data for the same client.
System-level security ensures that only those users granted access to the system are capable of accessing
the applications and only through the appropriate gateways.
Technique Objective:
|
Exercise the target-of-test under the following conditions to observe and log target
behavior:
Application-level Security: an actor can access only those functions or data for which
their user type is provided permissions.
System-level Security: only those actors with access to the system and applications are
permitted to access them.
|
Technique:
|
Application-level Security: Identify and list each user type and the functions or data for
which each type has permissions.
Create tests for each user type and verify each permission by creating transactions
specific to each user type.
Modify the user type and rerun the tests for the same users. In each case, verify that the
additional functions or data are correctly available or denied.
System-level Access: See Special Considerations below.
|
Oracles:
|
Outline one or more strategies that can be used by the technique to accurately observe the
outcomes of the test. The oracle combines elements of both the method by which the
observation can be made and the characteristics of the specific outcome that indicate probable
success or failure. Ideally, oracles will be self-verifying, allowing automated tests to
make an initial assessment of pass or fail; however, be careful to mitigate the
risks inherent in automated results determination.
|
Required Tools:
|
The technique requires the following tools:
Test Script Automation Tool
"Hacker" security breach and probing tools
OS Security Administration tools
|
Success Criteria:
|
The technique supports the testing of the appropriate functions or data affected by
security settings can be tested for each known actor type.
|
Special Considerations:
|
Access to the system must be reviewed or discussed with the appropriate network or systems
administrator. This testing may not be required as it may be a function of network or
systems administration.
|
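The identify-and-list step of the technique above naturally becomes a permission matrix, with every (user type, function) pair exercised against it. The user types and functions here are hypothetical examples, and `attempt` is a stand-in for driving the real application as the given user type.

```python
# Application-level security sketch: a permission matrix drives the checks.
permissions = {
    "manager": {"create_account", "enter_data", "delete_account"},
    "clerk":   {"create_account", "enter_data"},
}

def attempt(user_type, function):
    # Stand-in for performing the function in the real application as this
    # user type and observing whether access was granted.
    return function in permissions[user_type]

all_functions = {"create_account", "enter_data", "delete_account"}
for user_type, allowed in permissions.items():
    for function in all_functions:
        granted = attempt(user_type, function)
        expected = function in allowed
        assert granted == expected, f"unexpected access: {user_type}/{function}"
```

Note that exercising every pair, including the denied ones, is what catches permissions that are accidentally too broad.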
Failover and recovery testing ensures that the target-of-test can successfully fail over and recover from a
variety of hardware, software, or network malfunctions without undue loss of data or data integrity.
For those systems that must be kept running, failover testing ensures that when a failover condition
occurs, the alternate or backup systems properly "take over" for the failed system without any loss of data
or transactions.
Recovery testing is an antagonistic test process in which the application or system is exposed to extreme
conditions, or simulated conditions, to cause a failure, such as device Input/Output (I/O) failures, or
invalid database pointers and keys. Recovery processes are invoked, and the application or system is
monitored and inspected to verify proper application, or system, and data recovery has been achieved.
Technique Objective:
|
Simulate the failure conditions and exercise the recovery processes (manual and automated)
to restore the database, applications, and system to a desired, known state. The following
types of conditions are included in the testing to observe and log behavior after recovery:
power interruption to the client
power interruption to the server
communication interruption via network servers
interruption, communication, or power loss to DASD (Direct Access Storage Devices) and DASD
controllers
incomplete cycles (data filter processes interrupted, data synchronization processes
interrupted)
invalid database pointers or keys
invalid or corrupted data elements in database
|
Technique:
|
The tests already created for Function and Business Cycle testing can be used as a basis
for creating a series of transactions to support failover and recovery testing, primarily
to define the tests that verify recovery was successful.
Power interruption to the client: power down the PC.
Power interruption to the server: simulate or initiate power down procedures for the
server.
Interruption via network servers: simulate or initiate communication loss with the network
(physically disconnect communication wires or power down network servers or routers).
Interruption, communication, or power loss to DASD and DASD controllers: simulate or
physically eliminate communication with one or more DASDs or DASD controllers.
Once the above conditions or simulated conditions are achieved, additional transactions
should be executed and upon reaching this second test point state, recovery procedures
should be invoked.
Testing for incomplete cycles utilizes the same technique as described above except that
the database processes themselves should be aborted or prematurely terminated.
Testing for the following conditions requires that a known database state be achieved.
Several database fields, pointers, and keys should be corrupted manually and directly
within the database (via database tools). Additional transactions should be executed using
the tests from Application Function and Business Cycle Testing and full cycles executed.
|
Oracles:
|
Outline one or more strategies that can be used by the technique to accurately observe the
outcomes of the test. The oracle combines elements of both the method by which the
observation can be made and the characteristics of the specific outcome that indicate probable
success or failure. Ideally, oracles will be self-verifying, allowing automated tests to
make an initial assessment of pass or fail; however, be careful to mitigate the
risks inherent in automated results determination.
|
Required Tools:
|
The technique requires the following tools:
base configuration imager and restorer
installation-monitoring tools (registry, hard disk, CPU, memory, and so on)
backup and recovery tools
|
Success Criteria:
|
The technique supports the testing of:
One or more simulated disasters involving one or more combinations of the application,
database, and system.
One or more simulated recoveries involving one or more combinations of the application,
database, and system to a known desired state.
|
Special Considerations:
|
Recovery testing is highly intrusive. Procedures to disconnect cabling (simulating power or
communication loss) may not be desirable or feasible. Alternative methods, such as
diagnostic software tools, may be required.
Resources from the Systems (or Computer Operations), Database, and Networking groups are
required.
These tests should be run after hours or on an isolated machine.
|
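The "interrupted transaction, then verify recovery" pattern above can be illustrated at small scale with a transactional database: commit some work, start more work, abandon it before commit (standing in for a power or process interruption), then reopen and verify only the committed state survived. This sketch uses SQLite; the table and data are hypothetical.

```python
import os
import sqlite3
import tempfile

# Recovery verification sketch: only committed work should survive a failure.
path = os.path.join(tempfile.mkdtemp(), "recovery.db")

conn = sqlite3.connect(path)
conn.execute("CREATE TABLE log (id INTEGER PRIMARY KEY, msg TEXT)")
conn.execute("INSERT INTO log (msg) VALUES ('committed work')")
conn.commit()

# Start a transaction and "fail" before it commits.
conn.execute("INSERT INTO log (msg) VALUES ('in-flight work')")
conn.close()   # closing without commit discards the open transaction

# "Recovery": reopen and inspect the database state.
conn = sqlite3.connect(path)
rows = [msg for (msg,) in conn.execute("SELECT msg FROM log ORDER BY id")]
assert rows == ["committed work"]
conn.close()
```

Real failover tests replace the simulated interruption with the physical or simulated conditions listed in the technique, but the inspection step is the same: compare the recovered state against the last known consistent state.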
Configuration testing verifies the operation of the target-of-test on different software and hardware
configurations. In most production environments, the particular hardware specifications for the client
workstations, network connections, and database servers vary. Client workstations may have different
software loaded (for example, applications, drivers, and so on) and, at any one time, many different
combinations may be active using different resources.
Technique Objective:
|
Exercise the target-of-test on the required hardware and software configurations to observe
and log target behavior under different configurations and identify changes in
configuration state.
|
Technique:
|
Use Function Test scripts.
Open and close various non-target-of-test related software, such as the Microsoft®
Excel® and Microsoft® Word® applications, either as part of the test or prior
to the start of the test.
Execute selected transactions to simulate actors interacting with the target-of-test and
the non-target-of-test software.
Repeat the above process, minimizing the available conventional memory on the client
workstation.
|
Oracles:
|
Outline one or more strategies that can be used by the technique to accurately observe the
outcomes of the test. The oracle combines elements of both the method by which the
observation can be made and the characteristics of the specific outcome that indicate probable
success or failure. Ideally, oracles will be self-verifying, allowing automated tests to
make an initial assessment of pass or fail; however, be careful to mitigate the
risks inherent in automated results determination.
|
Required Tools:
|
The technique requires the following tools:
base configuration imager and restorer
installation-monitoring tools (registry, hard disk, CPU, memory, and so on)
|
Success Criteria:
|
The technique supports the testing of one or more combinations of the target test items
running in expected, supported deployment environments.
|
Special Considerations:
|
What non-target-of-test software is needed, is available, and is accessible on the desktop?
What applications are typically used?
What data are the applications running; for example, a large spreadsheet opened in Excel or
a 100-page document in Word?
The entire system's network software, network servers, databases, and so on, also needs to be
documented as part of this test.
|
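The technique above is essentially a sweep over a configuration matrix, running the same smoke test in each cell. A minimal sketch; the configuration dimensions and the `smoke_test` body are placeholders for real machine set-up and the selected Function Test scripts.

```python
import itertools

# Configuration matrix sketch: enumerate supported combinations and run the
# same smoke test against each one.
os_versions = ["Windows 10", "Windows 11"]          # hypothetical dimension
background_apps = [(), ("Excel",), ("Excel", "Word")]

def smoke_test(os_version, apps):
    # Stand-in for: restore the base image, launch the background apps,
    # run the selected Function Test transactions, and report pass/fail.
    return True

results = {}
for os_version, apps in itertools.product(os_versions, background_apps):
    results[(os_version, apps)] = smoke_test(os_version, apps)

assert len(results) == len(os_versions) * len(background_apps)
assert all(results.values()), "some configuration failed the smoke test"
```

Recording results per configuration, rather than a single overall pass, is what makes configuration-specific regressions visible.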
Installation testing has two purposes. The first is to ensure that the software can be installed under
different conditions (such as a new installation, an upgrade, and a complete or custom installation) under
normal and abnormal conditions. Abnormal conditions include insufficient disk space, lack of privilege to
create directories, and so on. The second purpose is to verify that, once installed, the software operates
correctly. This usually means running a number of tests that were developed for Function Testing.
Technique Objective:
|
Exercise the installation of the target-of-test onto each required hardware configuration
under the following conditions to observe and log installation behavior and configuration
state changes:
new installation: a new machine, never previously installed with <Project Name>
update: a machine with <Project Name> previously installed, same version
update: a machine with <Project Name> previously installed, older version
|
Technique:
|
Develop automated or manual scripts to validate the condition of the target machine.
new: <Project Name> never installed
update: <Project Name> same or older version already installed
Launch or perform installation.
Using a predetermined subset of Function Test scripts, run the transactions.
|
Oracles:
|
Outline one or more strategies that can be used by the technique to accurately observe the
outcomes of the test. The oracle combines elements of both the method by which the
observation can be made and the characteristics of the specific outcome that indicate probable
success or failure. Ideally, oracles will be self-verifying, allowing automated tests to
make an initial assessment of pass or fail; however, be careful to mitigate the
risks inherent in automated results determination.
|
Required Tools:
|
The technique requires the following tools:
base configuration imager and restorer
installation-monitoring tools (registry, hard disk, CPU, memory, and so on)
|
Success Criteria:
|
The technique supports the testing of the installation of the developed product in one or
more installation configurations.
|
Special Considerations:
|
What <Project Name> transactions should be selected to comprise a confidence test
that the <Project Name> application has been successfully installed and no major software
components are missing?
|
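The first technique step, validating the condition of the target machine, can be sketched as a classifier over the machine's install state. The version marker file used here is a hypothetical mechanism; a real check would inspect whatever registry keys or files the installer actually writes.

```python
import os
import tempfile

# Classify the target machine before installation: new install, same-version
# update, or older-version update, based on a hypothetical VERSION marker.
def classify_machine(install_dir, new_version):
    marker = os.path.join(install_dir, "VERSION")
    if not os.path.exists(marker):
        return "new installation"
    with open(marker) as f:
        installed = f.read().strip()
    if installed == new_version:
        return "update: same version"
    return "update: older version"

d = tempfile.mkdtemp()
assert classify_machine(d, "2.0") == "new installation"
with open(os.path.join(d, "VERSION"), "w") as f:
    f.write("1.0")
assert classify_machine(d, "2.0") == "update: older version"
```

After the installer runs under each classified condition, the predetermined subset of Function Test scripts serves as the confidence test.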