Wednesday, September 10, 2008

How do you measure CPU pressure?


It is important to understand whether CPU pressure is affecting SQL Server performance.  This is true even in the case where SQL Server is the only application running on a particular box.  The System object Perfmon counter Processor Queue length is not necessarily an effective way of measuring CPU pressure in SQL Server.  To see why this is the case, we first must take a brief (and simplified) look at the SQL Server Execution Model.

SQL Server uses a User Mode Scheduler (UMS) to control the execution of SQL Server user requests (SPIDs or session_ids).  The UMS does not replace the Windows scheduler but rather, manages the execution of SQL Server requests (without returning control to Windows).  So when SQL Server gets its time slice from the Windows scheduler, the SQL Server UMS manages what user requests are run during this time.   In a 4-proc scenario, there will be 4 User Mode Schedulers, one for each CPU.  Each UMS uses a number of constructs (queues, lists and worker threads) to govern execution.  At any given time, each UMS will have at most a single running user, a runnable queue of requests that are waiting for CPU, a waiter list (for resources such as IO, locks, memory), and a work queue (user requests that are waiting for worker threads).

The runnable queue can be likened to a grocery store with multiple checkout lines.  The register clerk is the CPU, and there is just one customer checking out, i.e. “running”, at any given register.  The time spent in the checkout line represents CPU pressure: the longer the line, the longer the waits, hence more CPU pressure.

OK, back to SQL Server.  Assume the following for a single UMS: SPID 51 is currently running, the runnable queue consists of SPIDs 60, 55, 87 & 79, the waiter list includes SPIDs 84 & 72, and the work queue is empty.  Now assume the running SPID 51 needs physical IO.  Two things happen: SPID 51 is moved to the waiter list until the IO is completed, and the next session_id in the runnable queue, SPID 60, begins to run.  When the IO is complete, SPID 51 is moved to the bottom of the runnable queue, which now consists of SPIDs 55, 87, 79 & 51.

Given this scenario, the total amount of time waiting consists of resource and signal waits.  The time waiting for a resource is shown as Resource Waits.  The time waiting in the runnable queue for CPU is called Signal Waits.  In SQL Server 2005, waits are shown in the Dynamic Management View (DMV) sys.dm_os_wait_stats. The query to measure CPU pressure is as follows:

---- Total waits are wait_time_ms
Select signal_wait_time_ms=sum(signal_wait_time_ms)
          ,'%signal (cpu) waits' = cast(100.0 * sum(signal_wait_time_ms) / sum(wait_time_ms) as numeric(20,2))
          ,resource_wait_time_ms=sum(wait_time_ms - signal_wait_time_ms)
          ,'%resource waits'= cast(100.0 * sum(wait_time_ms -signal_wait_time_ms) / sum (wait_time_ms) as numeric(20,2))
From sys.dm_os_wait_stats
You can initialize or clear out SQL Server 2005 wait statistics with the statement DBCC SQLPERF ('sys.dm_os_wait_stats', CLEAR) WITH NO_INFOMSGS.  In SQL Server 2000, waits can be seen by looking at DBCC SQLPERF (WAITSTATS).
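For a closer look, here is a minimal follow-up sketch (SQL Server 2005 and later) that breaks the same totals down by individual wait type, so you can see which waits dominate:
SELECT wait_type
      ,waiting_tasks_count
      ,wait_time_ms
      ,signal_wait_time_ms
      ,resource_wait_time_ms = wait_time_ms - signal_wait_time_ms
FROM sys.dm_os_wait_stats
ORDER BY wait_time_ms DESC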

If you have a SQL-dedicated box, it is conceivable that you could have high signal waits indicating CPU pressure and still have a low processor queue length.  In such cases a faster CPU could reduce signal waits, but focusing only on the low processor queue length as a measure of CPU contention may lead to the erroneous conclusion that CPU is fine.

In conclusion, if Signal Waits are a significant percentage of total waits, you have CPU pressure, which may be alleviated by faster or more CPUs.  Alternatively, CPU pressure can be reduced by eliminating unnecessary sorts (indexes can avoid sorts for ORDER BY and GROUP BY), joins, and compilations (and re-compilations).  If Signal Waits are not significant, a faster CPU will not appreciably improve performance.
Reference: http://blogs.msdn.com/sqlcat/

Useful SQL Server DBCC Commands


What is DBCC in Sql Server?

Brief introduction: DBCC stands for Database Consistency Checker. These commands are used to check the consistency of databases and to perform maintenance, validation tasks, and status checks.
E.g. DBCC CHECKDB - Checks that the tables in the database and their indexes are correctly linked.
DBCC CHECKALLOC - Checks that all pages in a database are correctly allocated.
DBCC CHECKFILEGROUP - Checks all tables in a specified filegroup for damage.


Useful SQL Server DBCC Commands:-

DBCC CACHESTATS displays information about the objects currently in the buffer cache, such as hit rates, compiled objects and plans, etc.
Example:
DBCC CACHESTATS
Sample Results (abbreviated):
Object Name       Hit Ratio 
------------      -------------
Proc              0.86420054765378507 
Prepared          0.99988494930394334 
Adhoc             0.93237136647793051 
ReplProc          0.0 
Trigger           0.99843452831887947 
Cursor            0.42319205924058612 
Exec Cxt          0.65279111666076906 
View              0.95740334726893905 
Default           0.60895011346896522 
UsrTab            0.94985969576133511 
SysTab            0.0 
Check             0.67021276595744683 
Rule              0.0 
Summary           0.80056155581812771
Here's what some of the key statistics from this command mean:
  • Hit Ratio: Displays the percentage of time that this particular object was found in SQL Server's cache. The bigger this number, the better.
  • Object Count: Displays the total number of objects of the specified type that are cached.
  • Avg. Cost: A value used by SQL Server that measures how long it takes to compile a plan, along with the amount of memory needed by the plan. This value is used by SQL Server to determine if the plan should be cached or not.
  • Avg. Pages: Measures the total number of 8K pages used, on average, for cached objects.
  • LW Object Count, LW Avg Cost, LW Avg Stay, LW Avg Use: All these columns indicate how many of the specified objects have been removed from the cache by the Lazy Writer. The lower the figure, the better.
[7.0, 2000] Updated 9-1-2005
*****
DBCC DROPCLEANBUFFERS: Use this command to remove all the data from SQL Server's data cache (buffer) between performance tests to ensure fair testing. Keep in mind that this command only removes clean buffers, not dirty buffers. Because of this, before running DBCC DROPCLEANBUFFERS, you may first want to run the CHECKPOINT command. Running CHECKPOINT will write all dirty buffers to disk, and then when you run DBCC DROPCLEANBUFFERS you can be assured that all data buffers are cleaned out, not just the clean ones.
Example:
DBCC DROPCLEANBUFFERS
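For test resets, the two commands described above are typically run back to back; a minimal sketch:
CHECKPOINT                -- write dirty buffers to disk first
DBCC DROPCLEANBUFFERS     -- then drop the (now clean) buffers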
[7.0, 2000, 2005] Updated 9-1-2005
*****
DBCC ERRORLOG: If you rarely restart the mssqlserver service, you may find that your server log gets very large and takes a long time to load and view. You can truncate (essentially create a new log) the Current Server log by running DBCC ERRORLOG. You might want to consider scheduling a regular job that runs this command once a week to automatically truncate the server log. As a rule, I do this for all of my SQL Servers on a weekly basis. Also, you can accomplish the same thing using this stored procedure: sp_cycle_errorlog.
Example:
DBCC ERRORLOG
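Alternatively, the stored procedure mentioned above accomplishes the same thing:
EXEC sp_cycle_errorlog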
[7.0, 2000, 2005] Updated 9-1-2005
*****
DBCC FLUSHPROCINDB: Used to clear out the stored procedure cache for a specific database on a SQL Server, not the entire SQL Server. The database ID number to be affected must be entered as part of the command.
You may want to use this command before testing to ensure that previous stored procedure plans won't negatively affect testing results.
Example:
DECLARE @intDBID INTEGER
SET @intDBID = (SELECT dbid FROM master.dbo.sysdatabases WHERE name = 'database_name')
DBCC FLUSHPROCINDB (@intDBID)
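A slightly shorter variant of the same idea using the built-in DB_ID() function (the database name is a placeholder):
DECLARE @intDBID INTEGER
SET @intDBID = DB_ID('database_name')   -- substitute your own database name
DBCC FLUSHPROCINDB (@intDBID)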
[7.0, 2000, 2005] Updated 9-1-2005
*****
DBCC INDEXDEFRAG: In SQL Server 2000, Microsoft introduced DBCC INDEXDEFRAG to help reduce logical index fragmentation. When this command runs, it reduces fragmentation without locking tables, allowing users to access the table while the defragmentation process is running. Unfortunately, this command doesn't do a great job of logical defragmentation.
The only way to truly reduce logical fragmentation is to rebuild your table's indexes. While this will remove all fragmentation, unfortunately it will lock the table, preventing users from accessing it during this process. This means that you will need to find a time when this will not present a problem to your users.
Of course, if you are unable to find a time to reindex your indexes, then running DBCC INDEXDEFRAG is better than doing nothing.
Example:
DBCC INDEXDEFRAG (Database_Name, Table_Name, Index_Name)
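As an illustration, a hedged example call using hypothetical database, table, and index names:
DBCC INDEXDEFRAG (Northwind, Categories, PK_Categories)
*****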
DBCC FREEPROCCACHE: Used to clear out the stored procedure cache for all SQL Server databases. You may want to use this command before testing to ensure that previous stored procedure plans won't negatively affect testing results.
Example:
DBCC FREEPROCCACHE
[7.0, 2000, 2005] Updated 10-16-2005
*****
DBCC MEMORYSTATUS: Lists a breakdown of how the SQL Server buffer cache is divided up, including buffer activity. This is an undocumented command, and one that may be dropped in future versions of SQL Server.
Example:
DBCC MEMORYSTATUS
[7.0, 2000] Updated 10-16-2005
*****
DBCC OPENTRAN: An open transaction can leave locks open, preventing others from accessing the data they need in a database. This command is used to identify the oldest open transaction in a specific database.
Example:
DBCC OPENTRAN('database_name')
[7.0, 2000] Updated 10-16-2005
*****
DBCC PAGE: Use this command to look at the contents of a data page stored in SQL Server.
Example:
DBCC PAGE ({dbid|dbname}, pagenum [,print option] [,cache] [,logical])
where:
Dbid or dbname: Enter either the dbid or the name of the database in question.
Pagenum: Enter the page number of the SQL Server page that is to be examined.
Print option: (Optional) Print option can be either 0, 1, or 2. 0 - (Default) This option causes DBCC PAGE to print out only the page header information. 1 - This option causes DBCC PAGE to print out the page header information, each row of information from the page, and the page's offset table. Each of the rows printed out will be separated from each other. 2 - This option is the same as option 1, except it prints the page rows as a single block of information rather than separating the individual rows. The offset and header will also be displayed.
Cache: (Optional) This parameter allows either a 1 or a 0 to be entered. 0 - This option causes DBCC PAGE to retrieve the page number from disk rather than checking to see if it is in cache. 1 - (Default) This option takes the page from cache if it is in cache rather than getting it from disk only.
Logical: (Optional) This parameter is for use if the page number that is to be retrieved is a virtual page rather than a logical page. It can be either 0 or 1. 0 - If the page is a virtual page number. 1 - (Default) If the page is the logical page number. 
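Note that DBCC PAGE writes its output to the error log unless trace flag 3604 is turned on first. A minimal sketch following the syntax above (the database name and page number are placeholders; on SQL Server 2000 a file number is also required between the database and the page number):
DBCC TRACEON (3604)    -- route DBCC output to the client session
DBCC PAGE ('database_name', 100, 1)
DBCC TRACEOFF (3604)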

[6.5, 7.0, 2000]Updated 10-16-2005
*****
DBCC PINTABLE & DBCC UNPINTABLE: By default, SQL Server automatically brings into its data cache the pages it needs to work with. These data pages will stay in the data cache until there is no room for them, and assuming they are not needed, these pages will be flushed out of the data cache onto disk. At some point in the future when SQL Server needs these data pages again, it will have to go to disk in order to read them again into the data cache for use. If SQL Server somehow had the ability to keep the data pages in the data cache all the time, then SQL Server's performance would be increased because I/O could be reduced on the server.
The process of "pinning a table" is a way to tell SQL Server that we don't want it to flush out data pages for specific named tables once they are read into the cache in the first place. This in effect keeps these database pages in the data cache all the time, which saves SQL Server from having to read the data pages, flush them out, and reread them when they are needed again. As you can imagine, this can reduce I/O for these pinned tables, boosting SQL Server's performance.
To pin a table, the command DBCC PINTABLE is used. For example, the script below can be run to pin a table in SQL Server:
DECLARE @db_id int, @tbl_id int 
USE Northwind 
SET @db_id = DB_ID('Northwind') 
SET @tbl_id = OBJECT_ID('Northwind..categories') 
DBCC PINTABLE (@db_id, @tbl_id)
While you can run DBCC PINTABLE directly without the rest of the above script, the script is handy because DBCC PINTABLE's parameters are the database and table IDs, not the database and table names. This script makes it a little easier to pin a table. You must run this command for every table you want to pin.
Once a table is pinned in the data cache, this does not mean that the entire table is automatically loaded into the data cache. It only means that as data pages from that table are needed by SQL Server, they are loaded into the data cache and then stay there, never being flushed out to disk until you give the command to unpin the table using DBCC UNPINTABLE. It is possible for only part of a table, rather than all of it, to end up pinned.
When you are done with a table and you no longer want it pinned, you will want to unpin your table. To do so, run this example code:
DECLARE @db_id int, @tbl_id int 
USE Northwind 
SET @db_id = DB_ID('Northwind') 
SET @tbl_id = OBJECT_ID('Northwind..categories') 
DBCC UNPINTABLE (@db_id, @tbl_id)

DBCC PROCCACHE: Displays information about how the stored procedure cache is being used.
Example:
DBCC PROCCACHE
[6.5, 7.0, 2000]Updated 10-16-2005
*****
DBCC DBREINDEX: Periodically (weekly or monthly) perform a database reorganization on all the indexes on all the tables in your database. This will rebuild the indexes so that the data is no longer fragmented. Fragmented data can cause SQL Server to perform unnecessary data reads, slowing down SQL Server's performance.
If you perform a reorganization on a table with a clustered index, any non-clustered indexes on that same table will automatically be rebuilt.
Database reorganizations can be done by scheduling SQLMAINT.EXE to run using the SQL Server Agent, or by running your own custom script via the SQL Server Agent (see below).
Unfortunately, the DBCC DBREINDEX command will not automatically rebuild all of the indexes on all the tables in a database; it can only work on one table at a time. But if you run the following script, you can reindex all the tables in a database with ease.
Example:
DBCC DBREINDEX('table_name', fillfactor)
or
--Script to automatically reindex all tables in a database 

USE DatabaseName --Enter the name of the database you want to reindex 

DECLARE @TableName varchar(255) 

DECLARE TableCursor CURSOR FOR 
SELECT table_name FROM information_schema.tables 
WHERE table_type = 'base table' 

OPEN TableCursor 

FETCH NEXT FROM TableCursor INTO @TableName 
WHILE @@FETCH_STATUS = 0 
BEGIN 
PRINT 'Reindexing ' + @TableName 
DBCC DBREINDEX(@TableName,' ',90) 
FETCH NEXT FROM TableCursor INTO @TableName 
END 

CLOSE TableCursor 

DEALLOCATE TableCursor
The script will automatically reindex every index in every table of any database you select, and provide a fillfactor of 90%. You can substitute any number you want for the 90 in the above script.
When DBCC DBREINDEX is used to rebuild indexes, keep in mind that as the indexes on a table are being rebuilt, the table becomes unavailable for use by your users. For example, when a non-clustered index is rebuilt, a shared table lock is put on the table, preventing all but SELECT operations from being performed on it. When a clustered index is rebuilt, an exclusive table lock is put on the table, preventing any table access by your users. Because of this, you should only run this command when users don't need access to the tables being reorganized. [7.0, 2000]Updated 10-16-2005
*****
DBCC SHOWCONTIG: Used to show how fragmented data and indexes are in a specified table. If data pages storing data or index information become fragmented, it takes more disk I/O to find and move the data to the SQL Server cache buffer, hurting performance. This command tells you how fragmented these data pages are. If you find that fragmentation is a problem, you can reindex the tables to eliminate the fragmentation. Note: this fragmentation is fragmentation of data pages within the SQL Server MDF file, not of the physical file itself.
Since this command requires you to know the ID of both the table and index being analyzed, you may want to run the following script so you don't have to manually look up the table name ID number and the index ID number.
Example:
DBCC SHOWCONTIG (Table_id, IndexID)
Or:
--Script to identify table fragmentation 

--Declare variables 
DECLARE 
@ID int, 
@IndexID int, 
@IndexName varchar(128) 

--Set the table and index to be examined 
SELECT @IndexName = 'index_name'           --enter name of index 
SET @ID = OBJECT_ID('table_name')          --enter name of table 

--Get the Index Values 
SELECT @IndexID = IndID 
FROM sysindexes 
WHERE id = @ID AND name = @IndexName 

--Display the fragmentation 
DBCC SHOWCONTIG (@ID, @IndexID)
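If you prefer not to look up IDs at all, a hedged alternative (assuming SQL Server 2000's behavior of scanning all tables in the current database when no table is specified) returns the output as a result set:
DBCC SHOWCONTIG WITH TABLERESULTS, ALL_INDEXES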
While the DBCC SHOWCONTIG command provides several measurements, the key one is Scan Density. This figure should be as close to 100% as possible. If the scan density is less than 75%, then you may want to reindex the tables in your database. [6.5, 7.0, 2000] Updated 3-20-2006
*****
DBCC SHOW_STATISTICS: Used to find out the selectivity of an index. Generally speaking, the higher the selectivity of an index, the greater the likelihood it will be used by the query optimizer. You have to specify both the table name and the index name you want to find the statistics on.
Example:
DBCC SHOW_STATISTICS (table_name, index_name)

DBCC SQLMGRSTATS: Used to produce three different values that can sometimes be useful when you want to find out how well caching is being performed on ad-hoc and prepared Transact-SQL statements.
Example:
DBCC SQLMGRSTATS
Sample Results:
Item                      Status 
------------------------- ----------- 
Memory Used (8k Pages)    5446 
Number CSql Objects       29098 
Number False Hits         425490
Here's what the above means:
  • Memory Used (8k Pages): If the amount of memory pages is very large, this may be an indication that some user connection is preparing many Transact-SQL statements but is not un-preparing them.
  • Number CSql Objects: Measures the total number of cached Transact-SQL statements.
  • Number False Hits: Sometimes, false hits occur when SQL Server goes to match pre-existing cached Transact-SQL statements. Ideally, this figure should be as low as possible.
[2000] Added 4-17-2003
*****
DBCC SQLPERF(): This command includes both documented and undocumented options. Let's take a look at all of them and see what they do.
DBCC SQLPERF (LOGSPACE)
This option (documented) returns data about the transaction log for all of the databases on the SQL Server, including Database Name, Log Size (MB), Log Space Used (%), and Status.
DBCC SQLPERF (UMSSTATS)
This option (undocumented) returns data about SQL Server thread management.
DBCC SQLPERF (WAITSTATS)
This option (undocumented) returns data about wait types for SQL Server resources.
DBCC SQLPERF (IOSTATS)
This option (undocumented) returns data about outstanding SQL Server reads and writes.
DBCC SQLPERF (RASTATS)
This option (undocumented) returns data about SQL Server read-ahead activity.
DBCC SQLPERF (THREADS)
This option (undocumented) returns data about I/O, CPU, and memory usage per SQL Server thread. [7.0, 2000] Updated 3-20-2006
*****
DBCC SQLPERF (UMSSTATS): When you run this command, you get output like this. (Note, this example was run on a 4 CPU server. There is 1 Scheduler ID per available CPU.)
Statistic                        Value 
-------------------------------- ------------------------ 
Scheduler ID                     0.0 
num users                        18.0 
num runnable                     0.0 
num workers                      13.0 
idle workers                     11.0 
work queued                      0.0 
cntxt switches                   2.2994396E+7 
cntxt switches(idle)             1.7793976E+7 
Scheduler ID                     1.0 
num users                        15.0 
num runnable                     0.0 
num workers                      13.0 
idle workers                     10.0 
work queued                      0.0 
cntxt switches                   2.4836728E+7 
cntxt switches(idle)             1.6275707E+7 
Scheduler ID                     2.0 
num users                        17.0 
num runnable                     0.0 
num workers                      12.0 
idle workers                     11.0 
work queued                      0.0 
cntxt switches                   1.1331447E+7 
cntxt switches(idle)             1.6273097E+7 
Scheduler ID                     3.0 
num users                        16.0 
num runnable                     0.0 
num workers                      12.0 
idle workers                     11.0 
work queued                      0.0 
cntxt switches                   1.1110251E+7 
cntxt switches(idle)             1.624729E+7 
Scheduler Switches               0.0 
Total Work                       3.1632352E+7
Below is an explanation of some of the key statistics above:
  • num users: This is the number of SQL Server threads currently in the scheduler.
  • num runnable: This is the number of actual SQL Server threads that are runnable.
  • num workers: This is the actual number of workers there are to process threads. This is the size of the thread pool.
  • idle workers: The number of workers that are currently idle.
  • cntxt switches: The number of context switches between runnable threads.
  • cntxt switches (idle): The number of context switches to the idle thread.
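On SQL Server 2005, roughly equivalent per-scheduler information is exposed through the sys.dm_os_schedulers DMV; a minimal sketch:
SELECT scheduler_id
      ,current_tasks_count
      ,runnable_tasks_count
      ,current_workers_count
      ,work_queue_count
FROM sys.dm_os_schedulers
WHERE scheduler_id < 255   -- exclude hidden/internal schedulers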

DBCC TRACEON & DBCC TRACEOFF: Used to turn on and off trace flags. Trace flags are often used to turn specific server behavior or server characteristics on and off temporarily. On rare occasions, they can be useful for troubleshooting SQL Server performance problems.
Example:
To use the DBCC TRACEON command to turn on a specified trace flag, use this syntax:
DBCC TRACEON (trace# [,...n])
To use the DBCC TRACEOFF command to turn off a specified trace flag, use this syntax:
DBCC TRACEOFF (trace# [,...n])
You can also use the DBCC TRACESTATUS command to find out which trace flags are currently turned on in your server using this syntax:
DBCC TRACESTATUS (trace# [,...n])
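As an illustration, a small sketch that turns a flag on, checks its status, and turns it off again (trace flag 3604 simply routes DBCC output to the client):
DBCC TRACEON (3604)
DBCC TRACESTATUS (3604)
DBCC TRACEOFF (3604)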
For specific information on the different kinds of trace flags available, search this website or look them up in Books Online. [6.5, 7.0, 2000] Updated 3-20-2006
*****
DBCC UPDATEUSAGE: The official use for this command is to report and correct inaccuracies in the sysindexes table, which may result in incorrect space usage reports. Apparently, it can also fix the problem of unreclaimed data pages in SQL Server. You may want to consider running this command periodically to clean up potential problems. This command can take some time to run, and you will want to run it during off hours because it will negatively affect SQL Server's performance while running. When you run this command, you must specify the name of the database that you want affected.
Example:
DBCC UPDATEUSAGE ('databasename')
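The command also accepts an optional table (and index) argument if you want to limit the work; a sketch with placeholder names:
DBCC UPDATEUSAGE ('databasename', 'table_name')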

Reference: http://www.sql-server-performance.com/
Author: Brad McGehee

Tuesday, September 9, 2008

SQL Server - Error: Fix: SharePoint Stops Working After Changing Server (Computer) Name

If Microsoft Office SharePoint Server (MOSS) and your database (MS SQL Server) are running together on the same physical server, changing the name of the server (computer) through the operating system may leave the SharePoint website non-functional.

When you change the physical server name, SharePoint is still configured to connect to the SQL instance under the old computer name (OldServerName\SQLInstance). After the rename, SharePoint will not be able to connect to SQL Server because the SQL Server instance now runs under the new computer name (NewServerName\SQLInstance).

To solve this problem you need to reconfigure Microsoft Office SharePoint Server to point at the renamed SQL Server instance. Follow these steps to fix the issue.

1. Open a command prompt and change directory to where stsadm.exe exists.

cd C:\Program Files\Common Files\Microsoft Shared\web server extensions\12\BIN

2. To invoke the setconfigdb operation we use stsadm.exe. Run the following command at the command prompt:

stsadm.exe -o setconfigdb -databaseserver ServerName -farmuser MyUserName -farmpassword MyPassword

3. Go to IIS: All Programs >> Administrative Tools >> Internet Information Services (IIS) Manager.

4. Right-click SharePoint Central Administration v3 and click Delete; the web application will be re-created by the configuration wizard in the next step.

5. After the above operation has completed successfully, run the SharePoint Products and Technologies Configuration Wizard.

All Programs>>Microsoft Office Server >> SharePoint Product and Technologies Configuration Wizard

6. Complete all the steps of the wizard by clicking the Next button.

At the end of this process you will get a success screen.

On completion of this whole procedure, Microsoft Office SharePoint Server is connected to the database under the new computer name.

Reference: Pinal Dave (www.SQLAuthority.com)

Monday, September 8, 2008

BACKUP compression in SQL Server 2008

SQL Server 2008 ships with built-in compression for backups.  Through SQL Server 2005, backup compression was only available via third-party backup software such as SQL LiteSpeed, SQLZip, etc.

This article demonstrates how to take full, differential, and transaction log backups with and without compression, and how to enable compression as the default.

Let us create a database “MyDB” as shown below.

USE [master]
GO
IF  EXISTS (SELECT name FROM sys.databases WHERE name = N'MyDB') 
DROP DATABASE [MyDB]
GO
USE [master] 
GO  
CREATE DATABASE [MyDB] ON   
PRIMARY ( NAME = N'MyDB_Data', FILENAME = N'D:\BPA SQL Project\SQL BP\TestDB\MyDB_Data.mdf' ,SIZE = 2176KB , MAXSIZE = UNLIMITED, FILEGROWTH = 1024KB )  
LOG ON ( NAME = N'MyDB_log',FILENAME = N'D:\BPA SQL Project\SQL BP\TestDB\MyDB_log.LDF' , SIZE = 504KB , MAXSIZE = 2048GB , FILEGROWTH = 10%)  
GO 
ALTER DATABASE [MyDB] SET RECOVERY FULL  
GO

Now let us create a table “MyTable” in the database “MyDB” as shown below

USE [MyDB]
GO
IF  EXISTS (SELECT * FROM sys.objects 
 WHERE object_id = OBJECT_ID(N'[dbo].[MyTable]') 
 AND type in (N'U'))
DROP TABLE [dbo].[MyTable]
GO
USE [MyDB]
GO
SET ANSI_NULLS ON
GO 
SET QUOTED_IDENTIFIER ON
GO 
SET ANSI_PADDING ON
GO 
CREATE TABLE [dbo].[MyTable](
          [id] [int] NULL,
          [name] [char](100) NULL
) ON [PRIMARY] 
GO
SET ANSI_PADDING OFF
Go
Let’s add 1,000 rows of data to the table “MyTable” as shown below.
USE [MyDB]
GO
DECLARE @myid int
SET @myid = 1
WHILE @myid <= 1000
BEGIN
    INSERT INTO MyTable SELECT @myid, 'A' + CONVERT(varchar(10), @myid)
    SET @myid = @myid + 1
END

You can use the system stored procedure sp_spaceused to see how large the database is.

EXECUTE sp_spaceused


Select the data using the following T-SQL command. [Refer Fig 1.0]

USE MyDB
GO
SELECT * FROM MyTable
GO

 
Fig 1.0

Create a folder, D:\Backup, as shown below. [Refer Fig 1.1]

 
Fig 1.1

Now let us take a full backup as shown below. [Refer Fig 1.2]

Backup Database MyDB to disk ='d:\Backup\MyDB_Full.bak' with init

 
Fig 1.2

Let’s add some more data [1000 rows] to the table “MyTable” in the database “MyDB” as shown below.

USE [MyDB]
GO
DECLARE @myid int
SET @myid = 1
WHILE @myid <= 1000
BEGIN
    INSERT INTO MyTable SELECT @myid, 'A' + CONVERT(varchar(10), @myid)
    SET @myid = @myid + 1
END

Now let us take a transaction log backup, as shown below. [Refer Fig 1.3]

Backup log  MyDB to disk ='d:\Backup\MyDB_TLog_1.bak' with init

 
Fig 1.3

By default, SQL Server does not compress the backups. We can compress the backups in two different ways.

a.      Change the default behavior of SQL Server to compress all of the backups.

b.      Add an optional keyword “With COMPRESSION” in the backup clause.

The full backup and transaction log backup of the database MyDB that we took above were taken without compression; that is the default SQL Server behavior.

Now let’s take a full backup of the database with compression as shown below. [Refer Fig 1.4]

Backup Database MyDB to disk ='d:\Backup\MyDB_Full2.bak' with COMPRESSION

 
Fig 1.4

From figure 1.4, you can see that the size of MyDB_Full2.bak is much smaller when compared to MyDB_Full.Bak and MyDB_Tlog_1.bak.
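If you prefer to compare the sizes from T-SQL rather than in Windows Explorer, the backup history in msdb records both values in SQL Server 2008; a minimal sketch:
SELECT backup_start_date
      ,type
      ,backup_size / 1048576.0 AS backup_size_mb
      ,compressed_backup_size / 1048576.0 AS compressed_size_mb
FROM msdb.dbo.backupset
WHERE database_name = 'MyDB'
ORDER BY backup_start_date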

Add some more data [1000 rows] to the table “MyTable” in the database “MyDB” as shown below.

USE [MyDB]
GO
DECLARE @myid int
SET @myid = 1
WHILE @myid <= 1000
BEGIN
    INSERT INTO MyTable SELECT @myid, 'A' + CONVERT(varchar(10), @myid)
    SET @myid = @myid + 1
END

Now let’s take a transaction log backup as shown below. [Refer Fig 1.5]

Backup log  MyDB to disk ='d:\Backup\MyDB_TLog_new.bak' with COMPRESSION

 
Fig 1.5

In figure 1.5, you can see that the size of MyDB_Tlog_new.bak is much smaller when compared to MyDB_Tlog_1.bak.

Let’s take a differential backup without compression and compare it with the differential backup with compression.

Execute the following commands as shown below. [Refer Fig 1.6]

backup database MyDB to disk ='d:\Backup\MyDB_Diff.bak' with differential
backup database MyDB to disk ='d:\Backup\MyDB_Diff2.bak' with differential, COMPRESSION

 
Fig 1.6

Fig 1.6 shows the compression ratio between MyDB_Diff.bak and MyDB_Diff2.bak.

Let’s change the default behavior of SQL Server from uncompressed backups to compressed. This can be done using the sp_configure command.

Execute the command as shown below.

USE master
GO
EXEC sp_configure 'backup compression default', '1'
GO
RECONFIGURE WITH OVERRIDE
GO
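To confirm the change took effect, you can check sys.configurations (a quick sketch):
SELECT name, value_in_use
FROM sys.configurations
WHERE name = 'backup compression default'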

Now let’s take a full backup of the MyDB database, without the optional keyword “WITH COMPRESSION”. [Refer Fig 1.7]

Backup Database MyDB to disk ='d:\Backup\MyDB_Full3.bak'

 
Fig 1.7

From figure 1.7, we can clearly see that the backup by default is compressed.

Conclusion

SQL Server 2008's built-in backup compression works for full, differential, and transaction log backups, and it can be enabled for an individual backup with the COMPRESSION option or made the server-wide default via sp_configure.

Reference: Muthusamy Anantha Kumar aka The MAK (http://www.databasejournal.com)