Monday, January 12, 2009

SQL Tuning Tips for performance

1) Concatenation of Different Data Types
An SQL query that concatenates values of different data types forces an implicit data type conversion and therefore takes more time to execute.
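A minimal sketch of the idea, using a hypothetical orders table where order_id is a NUMBER and customer_name is a VARCHAR2 (TO_CHAR is Oracle-style; other databases use CAST or similar):
-- Slower: NUMBER and VARCHAR2 concatenated directly, forcing an implicit conversion
SELECT customer_name || order_id FROM orders;
-- Better: convert explicitly so the concatenated values share one data type
SELECT customer_name || TO_CHAR(order_id) FROM orders;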

2) Usage of “WHERE” Instead of “HAVING”
In GROUP BY statements, using a “WHERE” clause in place of “HAVING” is often more efficient.
The WHERE clause filters rows before the data is grouped, whereas the HAVING clause is
applied only after the groups have been built.
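A minimal sketch with a hypothetical emp table: the first query filters rows before grouping, the second aggregates every department and only then throws the unwanted groups away.
-- Preferred: rows are filtered before the GROUP BY
SELECT dept_id, COUNT(*) FROM emp WHERE dept_id = 10 GROUP BY dept_id;
-- Slower: all departments are aggregated, then filtered
SELECT dept_id, COUNT(*) FROM emp GROUP BY dept_id HAVING dept_id = 10;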

3) Position of Table with Fewer Rows in the “SELECT…FROM” Query
It is advisable to put the table that returns the fewest rows at the end of the FROM list.
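For example, with a hypothetical large sales table and a small branch table, the smaller table would be listed last (this ordering matters mainly for Oracle's older rule-based optimizer; a cost-based optimizer chooses the join order itself):
SELECT s.sale_id, b.branch_name
FROM sales s, branch b
WHERE s.branch_id = b.branch_id;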

4) Usage of “BETWEEN” in Place of Comparison Operators
If a query searches on a range of values, then using “BETWEEN” is advisable over a pair of
comparison operators.
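A small sketch against a hypothetical orders table:
-- Preferred
SELECT order_id FROM orders WHERE order_total BETWEEN 100 AND 500;
-- Instead of
SELECT order_id FROM orders WHERE order_total >= 100 AND order_total <= 500;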

5) Usage of Table Aliases
If more than one table is used in a query, then it is advisable to use table aliases, as they
would enhance the speed of the parse phase of the query.
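A small example with hypothetical emp and dept tables; the aliases let the parser resolve each column to its table without checking both table definitions:
SELECT e.emp_name, d.dept_name
FROM emp e, dept d
WHERE e.dept_id = d.dept_id;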

6) Index
An index enables faster retrieval of data, but it is an overhead when insert, update, and
delete operations are involved.
The index is a separate structure attached to a table. This structure holds the indexed
column value and a pointer to the physical data.
Hence, any query that searches on an indexed column first accesses the index
structure and then retrieves the data from the respective table.
But if the table contains more than 4 to 5 indexes, then performance (especially for inserts,
updates, and deletes) comes down.
The selectivity of an index is determined by the ratio of the number of unique values in a
given column to the total number of values.
If the ratio is nearer to 0, then using an index is not advisable.
If the ratio is nearer to 1, then using an index would enhance the performance of the
system.
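Selectivity can be estimated directly from the data; a sketch for a hypothetical status column on an orders table:
-- Ratio of distinct values to total rows: near 1 favours an index, near 0 does not
SELECT COUNT(DISTINCT status) / COUNT(*) AS selectivity FROM orders;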

7) Usage of “ORDER BY”
Avoid “ORDER BY” wherever possible, as it sorts the output of the query and hence
involves an extra processing pass. The exception is when the result is genuinely needed
in a particular (for example, descending) order.


8) Resource Intensive Operations
Avoid using resource intensive operations like UNION, MINUS, DISTINCT, INTERSECT,
ORDER BY, and GROUP BY. DISTINCT uses one sort whereas other operators use
two sorts or more.
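One standard way to soften this (an addition to the original tip): when duplicate rows are impossible or acceptable, UNION ALL avoids the sort/deduplication step that UNION performs. A sketch with hypothetical tables:
SELECT cust_id FROM online_orders
UNION ALL
SELECT cust_id FROM store_orders;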

9) Usage of NULL
NULL values are never stored in the index structure.
Any query using an “IS NULL” predicate therefore cannot make use of the index and does an FTS (Full
Table Scan), thereby taking more time to execute.
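A sketch with a hypothetical emp table whose commission column is nullable:
-- Cannot use the index on commission; the optimizer falls back to a full table scan
SELECT emp_id FROM emp WHERE commission IS NULL;
-- If the column is defined NOT NULL with a default of 0, the index stays usable
SELECT emp_id FROM emp WHERE commission = 0;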


10) Usage of “EXISTS” and “NOT EXISTS” Clauses
Wherever possible, “EXISTS” or “NOT EXISTS” should be used. Using the
“EXISTS” clause may eliminate unnecessary table accesses, because the subquery can stop as soon as a match is found.
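A sketch with hypothetical dept and emp tables: the correlated EXISTS can stop at the first matching employee instead of building the full list of department ids.
SELECT d.dept_name
FROM dept d
WHERE EXISTS (SELECT 1 FROM emp e WHERE e.dept_id = d.dept_id);
-- compared with
SELECT d.dept_name
FROM dept d
WHERE d.dept_id IN (SELECT e.dept_id FROM emp e);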

Friday, January 9, 2009

Monitoring System Resources - Linux/Unix

What to monitor?

Some general performance expectations on any system:

Run Queues – A run queue should have no more than 1-3 threads queued per processor. For
example, a dual processor system should not have more than 6 threads in the run queue.
CPU Utilization – If a CPU is fully utilized, then a healthy balance between user time and
system time should still be achieved.
Context Switches – The number of context switches is directly related to CPU utilization.

1) vmstat
vmstat is a real-time performance monitoring tool. The vmstat command provides data that can be used to find unusual system activity, such as high page faults or excessive context switches, that can lead to a degradation in system performance.
Watch the cpu section in particular: it reports the percentage of total CPU time spent as user (us), system (sy), true idleness (id), and waiting for I/O completion (wa).

You can monitor bi and bo for the block transfer rate, and in for the interrupt rate. You can monitor swpd, si, and so to see whether the system is swapping.
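A typical invocation (the interval and count arguments behave the same way as described for iostat below):
vmstat 5 10
This prints a report every 5 seconds, 10 times; the first report shows averages since the last reboot, so the later samples are the ones to read.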

2) ps Command
To find out how memory is used within a particular process, use ps for an overview of memory used per process:
$ ps aux
The output of the ps aux command shows the total percentage of system memory that each process consumes, as well as its virtual memory footprint (VSZ) and the amount of physical memory that the process is currently using (RSS).

The RSS column provides the "resident set size" of a process; this is the amount of physical memory used by the process and a good indication of how much real memory a given process is using. The VSZ column details the total amount of virtual memory used by the process, including memory that is allocated but may be swapped out to disk. Both of these columns are common to the majority of ps variants.

3) IOSTAT

iostat stands for input/output statistics. It provides data about I/O devices such as disks, terminals, and serial devices, as well as the CPU, but we will use it here for disk-related data only.
By default, iostat generates two reports, one for CPU utilization and one for device utilization. You can use the -c option to get just the CPU report or the -d option to get just the device report.
Syntax
The basic syntax is:
iostat -d -x interval count
-d : Display the device utilization report (d == disk)
-x : Display extended statistics including disk utilization
Options -- These may differ among operating systems.
Interval -- Time period in seconds between two samples. iostat 5 will give data at 5-second intervals.
Count -- Number of times the data is needed. iostat 5 4 will give data at 5-second intervals, 4 times. If no count is specified, the samples continue indefinitely and must be terminated by pressing ^C. Commonly, the command is run without a count, and the samples are observed to get a feel for the system state.

The values to look at in the iostat output are:
The average service time (svctm)
Percentage of CPU time during which I/O requests were issued (%util)
See if a hard disk reports consistently high reads/writes (r/s and w/s)

E.g.: iostat -d -x 1

4) NETSTAT

netstat provides information about network routes and cumulative statistics for network interfaces, including the number of incoming and outgoing packets and the number of packet collisions.

netstat
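A few commonly used invocations (exact flags can vary slightly between Linux and other Unix variants):
netstat -i (per-interface packet counts, errors, and collisions)
netstat -r (kernel routing table)
netstat -s (cumulative per-protocol statistics)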

Tuesday, January 6, 2009

Reliability Testing or Endurance Testing

Definition:
Reliability testing evaluates the probability of failure-free operation of a computer program in a specified environment for a specified time.

Informally, software reliability is about how well an application accurately provides, without failure, the services that were defined in the original specification. In addition to how long the application runs before failure, reliability engineering is about providing correct results and handling error detection and recovery in order to avoid failures.

The main intention is to find memory leaks, unexpected errors, and any crashes in the application when it is run for long hours.

In real business-day scenarios the application usually runs for a week, a month, or more. We need to simulate the application under a workload that matches this real usage; that is the reason we must do a reliability test. In a short-duration run we would not be able to find memory leaks or unexpected behaviour in the application.


The objective of a reliability test is to measure:

Memory leaks
JVM heap
Concurrency
Whether application performance is consistent over time
Unhandled exceptions - crashes, hangs
Number of users
Number of hits/sec
%CPU utilization on all servers
Memory utilization on all servers
GC collection (if any)

Steps:

To execute a reliability test, follow these steps:
1. Finalize the business scenarios and decide the Key Performance Indicators (KPIs) for each operation, memory, CPU, etc.
2. Verify the scenarios manually, and load the volumes of data required.
3. Create the load scripts using any load testing tool; customize the scripts to make them robust.
4. Check the environment settings and ensure that all the settings are as per the load testing recommendations.
5. Create the load test scenario based on business-day scenarios.

Design the business-day scenario to simulate real application usage: a peak-hours run (8 hrs), then idle time for 30 minutes, then ramp up and run for another 8 hrs, then idle time for 30 minutes again.
Continue this cycle around 8-10 times.

6. Add all the performance counters required.
7. Execute the load test for a pre-defined number of users, slowly ramping up the users.
8. Run the test for about 48 hours or more.
9. Collect the results and analyze them by comparing with the KPIs / previous build and release results.

Results:
Plot the graph for private bytes
Put all the performance monitor counters in the result sheet
Response time
Throughput

Main Observations:

Monitor memory - RSS and VSZ on Unix,
Private Bytes and Virtual Bytes on Windows.
Observe that all the transactions have passed successfully.
(If you have many failed transactions, there is a chance the application is hanging or crashing.)

Tips:
--> Set granularity = 5 secs for all the counters.
--> Verify that the measurement scale is set to 1 for all the counters.

Thanks
Senthil

Sunday, January 4, 2009

Bug Counts

The bug count is also something the tester is accountable for!

There has been a discussion/debate about bug counts in the industry.
I feel that the bug count is also one factor in measuring a tester's performance, and that it matters for the tester's efficiency and credibility.

Why is that?

Within a team, how does an efficient tester find more bugs than other team members, rather than simply executing the well-written test cases? It comes down to:

Thinking capability
Ideas for exploratory testing
In-depth domain knowledge and the complexity of the product
The intention to find more bugs (of course, everyone wishes to :))
Productivity hours
Finding bugs in a stable product
The severity and priority of the bugs found


Bug count also depends on other factors, as Alan mentioned on his site:
http://blogs.msdn.com/imtesty/archive/2006/06/26/647628.aspx


I am trying to emphasize that bug count is also one factor in measuring tester performance.

Thanks
Senthil

Friday, January 2, 2009

Scalability Test Methodology


This post explains scalability testing concepts, the methodology, and root cause analysis.

Scalability Testing:

Scalability testing is performance testing focused on ensuring that the application under test gracefully handles increases in workload. It is the testing of a software application to measure its capability to scale up or scale out in terms of any of its non-functional capabilities, be it the user load supported, the number of transactions, the data volume, etc.


The objective of scalability test is to measure:

1) Response time
2) Number of users the application supports
3) Number of hits/sec
4) Slow SQLs
5) Throughput
6) %CPU utilization on all servers
7) Memory utilization on all servers
8) GC collection (if any)


Prerequisite to start Scalability test execution:

Restart application/web server services
Clear cache and log files
Observe the CPU and memory usage before starting the execution, to see what the CPU and memory usage is before the application is run
Verify the DB memory and CPU utilization
Baseline the DB size. If the production DB has 10k records, then 4 times that volume of data should be made available in the test environment DB
Run DB index fragmentation
IE caching depends on the type of application being tested. E.g., if the application is a banking system, caching need not be enabled, because the frequency of user logins is rare. When the frequency of user logins in a given working day is higher, caching needs to be on.


Key Process for Scalability Testing:

Identify Performance Acceptance Criteria
Identify Key Scenarios
Create a Workload Model
Identify Target Load Levels
Identify Metrics
Run Tests
Analyze the Results

Scalability Testing Methodology:

To execute a scalability test, follow these steps:

1. Finalize the business scenarios and decide the Key Performance Indicators (KPI) for each operation, memory, CPU etc.

2. Verify the scenarios manually, load the volumes of data required.
3. Create the load scripts using any load testing tool; customize the scripts to make them robust.
4. Run the script as a warm-up run - at least two runs.
5. Take a SQL trace (see the example at the end of this post).
6. Find the list of SQLs, for all the operations in the scenario, that exceed the KPI; run these SQLs
against the database and find the time taken by each SQL.
7. If you find any slow SQLs, report them to the DB team.
8. Create the load test scenario, add all the performance counters required and execute the test
9. Check the environment settings and ensure that all the settings are as per the load testing recommendations.
10. Execute the load test for pre-defined number of users, by slowly ramping up the users and then running the test for about 3 hours.
11. Once the test is completed, filter the results for the steady state (between 2-3 hrs).
12. Collect the results and analyze them by comparing with the KPIs / previous build and release results.
13. If the results do not meet the KPI, log the defect and continue analyzing the root cause.


Take a CPU profile in case the CPU usage is higher than the KPI.
Take a memory profile if the memory usage is higher than the KPI.
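As referenced in step 5, one common way to capture a SQL trace, assuming an Oracle database (other databases have their own tracing facilities):
ALTER SESSION SET sql_trace = TRUE;
-- run the business operation under test, then
ALTER SESSION SET sql_trace = FALSE;
The resulting trace file can then be formatted with tkprof to see the time taken by each SQL.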