Friday, March 23, 2012

Maintenance Plan

I've done quite a bit of reading about SQL 2000 full backups and log
backups. I have a pretty good idea of how to make this happen (or at
least I think I do).
Here's the current problem: I have a database that was created by a
third party, and it needs to be fully backed up every night with hourly
log backups. I scheduled this to run using the maintenance wizard, and I
changed the recovery model from Simple to Full so that log backups can
be taken.
Now the transaction log is getting quite large. Through my reading,
I've found out that backing up the log does truncate the log but
doesn't reduce the physical log size. I guess I need to back up the log
and then do a DBCC SHRINKFILE in order to actually shrink the physical
log to a target size. In order to do this I need to write my own backup
scripts without using the wizard...Right?
So if that's correct my question is: Where do I write the scripts? In a
SP and then schedule it to run somehow? Additionally should I write 2
sp's, one for the nightly full backup and another for the hourly log
backups?
This 3rd party db has a lot of tables that are unused (about 100), so
when I write my maintenance script should I specify only the tables I
want to reindex? Is there any performance consideration here?
In the end, I need to fully back up the db nightly and the transaction
log hourly while keeping the physical file size as small as possible
while still being able to restore to a point-in-time.
I'm pretty sure I can write the scripts for this; I'm just not sure
where to put the scripts or how to schedule them.
Any help would be appreciated.
Israel
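
For reference, the plain T-SQL behind a nightly full backup, the hourly log backups, and the shrink step being described would look roughly like this; the database name, file paths, and logical log file name below are placeholders, not details from the post:

-- Nightly full backup (hypothetical database name and path)
BACKUP DATABASE MyThirdPartyDB
TO DISK = 'D:\Backups\MyThirdPartyDB_Full.bak'
WITH INIT

-- Hourly log backup; this is the step that truncates the inactive part of the log
BACKUP LOG MyThirdPartyDB
TO DISK = 'D:\Backups\MyThirdPartyDB_Log.trn'

-- Shrinking the physical file is a separate command (target size in MB)
DBCC SHRINKFILE (MyThirdPartyDB_Log, 500)

As the replies below point out, the shrink step is usually not worth doing.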

> Now the transaction log is getting quite large, through my readings,
> I've found out that backing up the log does truncate the log but
> doesn't reduce the physical log size. I guess I need to backup the log
> and then do a dbcc shrinkfile in order to actually shrink the physical
> log to a target size.
What do you mean by "target size"? Why would you want to shrink the log file if it will just bump up
to the size it needs to be again? See http://www.karaszi.com/SQLServer/info_dont_shrink.asp for more
details.
--
Tibor Karaszi, SQL Server MVP
http://www.karaszi.com/sqlserver/default.asp
http://www.solidqualitylearning.com/
Blog: http://solidqualitylearning.com/blogs/tibor/
"Izzy" <israel.richner@.gmail.com> wrote in message
news:1144682154.389600.122100@.t31g2000cwb.googlegroups.com...
> I've done quite a bit of reading about SQL 2000 full backups and log
> backups. I have a pretty good idea on how to make this happen (or at
> least I think I do).
> Here's the current problem, I have a database that was created by a
> third party and it needs to be fully backed up every night with hourly
> log backups. I scheduled this to run using the maintenance wizard, I
> changed the recovery model from Simple to Full in order to keep log
> backups.
> Now the transaction log is getting quite large, through my readings,
> I've found out that backing up the log does truncate the log but
> doesn't reduce the physical log size. I guess I need to backup the log
> and then do a dbcc shrinkfile in order to actually shrink the physical
> log to a target size. In order to do this I need to write my own backup
> scripts without using the wizard...Right?
> So if that's correct my question is: Where do I write the scripts? In a
> SP and then schedule it to run somehow? Additionally should I write 2
> sp's, one for the nightly full backup and another for the hourly log
> backups?
> This 3rd party db has a lot of tables that are unused (about 100), so
> when I write my maintenance script should I specify only the tables I
> want to reindex? Is there any performance consideration here?
> In the end, I need to fully back up the db nightly and the transaction
> log hourly while keeping the physical file size as small as possible
> while still being able to restore to a point-in-time.
> I'm pretty sure I can write the scripts for this, I'm just not sure
> where to put the scripts and how to schedule it?
> Any help would be appreciated.
> Israel

OK, thanks for the article, point taken.
The size of the db is 7.5GB, and indexes are rebuilt every night when the
db is backed up. I'm guessing the reindexing of the tables is what caused
the log file to grow beyond 11GB. Even if this were only done once
a week, eventually the log would grow again to a size it shouldn't
need to be.
How do I get around this problem of the log getting too big and using
up the available disk space?
In the past this database was set to Simple Recovery so this wasn't an
issue, now the need has changed and losing a day of work isn't an
option.
What would you recommend?
Thanks,
Israel

Why shouldn't it need to be that size? If it needs to hold that much
info between log backups, then that is the size it needs to be. Shrinking
it doesn't make that go away. If you want to keep the log file smaller at
all times you basically have two options. One is to do several log backups
while the reindexing is going on. Or simply reindex the indexes that
actually need to be reindexed. Forget the maintenance plan and write your
own job to issue a DBCC DBREINDEX on only the indexes that have enough
fragmentation to warrant it. You can find an example in Books Online under
DBCC SHOWCONTIG for how to do that.
--
Andrew J. Kelly SQL MVP
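
As a minimal illustration of what that Books Online example boils down to for a single table (the table and index names here are made up):

-- Report fragmentation for one table and all of its indexes
DBCC SHOWCONTIG ('dbo.SomeTable') WITH ALL_INDEXES

-- Rebuild one index with an 80 percent fill factor if it is fragmented enough
DBCC DBREINDEX ('dbo.SomeTable', 'IX_SomeTable_SomeColumn', 80)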
"Izzy" <israel.richner@.gmail.com> wrote in message
news:1144691290.716456.303060@.g10g2000cwb.googlegroups.com...
> OK, thanks for the article, point taken.
> The size of the db is 7.5GB, Indexes are rebuilt every night when the
> db is backed up. I'm guessing that is what caused the log file to grow
> beyond 11GB, reindexing of the tables. Even if this was only done once
> a week, eventually the log would grow again too a size it shouldn't
> need to be.
> How do I get around this problem of the log getting too big and using
> up the available disk space?
> In the past this database was set to Simple Recovery so this wasn't an
> issue, now the need has changed and losing a day of work isn't an
> option.
> What would you recommend?
> Thanks,
> Israel

Thanks Andrew,
That's pretty much the conclusion I've come to as well. Until I get the
scripts written I've disabled the reindexing.
In one thread someone recommended changing the Recovery Model to
Bulk-Logged during the reindexing and changing it back once it
completes, they said that would keep the transaction log from getting
so big. What is your take on that approach?
My original question was how to go about executing my own scripts. Do I
create a SP for this? If so how do I schedule the SP to run?
This is my first attempt at recovery procedures, so sorry if that
question seems a little "no brainer".

> In one thread someone recommended changing the Recovery Model to
> Bulk-Logged during the reindexing and changing it back once it
> completes, they said that would keep the transaction log from getting
> so big. What is your take on that approach?
The ldf files might not log nearly as much for some operations (like index rebuilds), but the
following transaction log backup will not only contain log records but also data pages - so it can
get very big. Also, you cannot perform a point-in-time restore for such a transaction log backup. You
can give it a spin and see how well it works out for your environment.
> My original question was how to go about executing my own scripts. Do I
> create a SP for this? If so how do I schedule the SP to run?
Encapsulating the TSQL commands in a procedure is a good idea. You can then schedule the procedure
using SQL Server Agent (EM, Management, ...).
--
Tibor Karaszi, SQL Server MVP
http://www.karaszi.com/sqlserver/default.asp
http://www.solidqualitylearning.com/
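
If you do try the bulk-logged approach, the switch is just an ALTER DATABASE wrapped around the reindexing; a rough sketch (the database name and the usp_ReindexFragmentedTables procedure are assumptions, not from the thread):

-- Switch to BULK_LOGGED for the duration of the index rebuilds
ALTER DATABASE MyThirdPartyDB SET RECOVERY BULK_LOGGED

EXEC dbo.usp_ReindexFragmentedTables  -- hypothetical reindex procedure

-- Switch back to FULL and take a log backup so point-in-time recovery resumes cleanly
ALTER DATABASE MyThirdPartyDB SET RECOVERY FULL
BACKUP LOG MyThirdPartyDB TO DISK = 'D:\Backups\MyThirdPartyDB_Log.trn'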
"Izzy" <israel.richner@.gmail.com> wrote in message
news:1144759482.900957.286230@.i40g2000cwc.googlegroups.com...
> Thanks Andrew,
> That's pretty much the conclusion I've come to as well. Until I get the
> scripts written I've disabled the reindexing.
> In one thread someone recommended changing the Recovery Model to
> Bulk-Logged during the reindexing and changing it back once it
> completes, they said that would keep the transaction log from getting
> so big. What is your take on that approach?
> My original question was how to go about executing my own scripts. Do I
> create a SP for this? If so how do I schedule the SP to run?
> This is my first attempt at recovery procedures, so sorry if that
> question seems a little "no brainer".

Thanks Tibor,
I now have a SP to reindex only tables with an index that is >= 30%
fragmented. I tried it out on my test db and got some weird results.
I found a table which is 66% fragmented, ran DBCC DBREINDEX
(EDI_210,'',80) on just that table and it still shows it being 66%
fragmented.
Any idea why that is?
Here are the results of running DBCC SHOWCONTIG (EDI_210):
DBCC SHOWCONTIG scanning 'edi_210' table...
Table: 'edi_210' (725577623); index ID: 0, database ID: 9
TABLE level scan performed.
- Pages Scanned........................: 4
- Extents Scanned.......................: 3
- Extent Switches.......................: 2
- Avg. Pages per Extent..................: 1.3
- Scan Density [Best Count:Actual Count]......: 33.33% [1:3]
- Extent Scan Fragmentation ...............: 66.67%
- Avg. Bytes Free per Page................: 2849.5
- Avg. Page Density (full)................: 64.79%

IndexID 0 means this is a heap, so DBREINDEX will not shuffle any data pages.
--
Tibor Karaszi, SQL Server MVP
http://www.karaszi.com/sqlserver/default.asp
http://www.solidqualitylearning.com/
"Izzy" <israel.richner@.gmail.com> wrote in message
news:1144767691.735932.174430@.i39g2000cwa.googlegroups.com...
> Thanks Tibor,
> I now have a SP to reindex only tables with an index that is >= 30%
> fragmented. I tried it out on my test db and got some wierd results.
> I found a table which is 66% fragmented, ran DBCC DBREINDEX
> (EDI_210,'',80) on just that table and it still shows it being 66%
> fragmented.
> Any idea why that is?
> Here is the results of running DBCC SHOWCONTIG (EDI_210)
> DBCC SHOWCONTIG scanning 'edi_210' table...
> Table: 'edi_210' (725577623); index ID: 0, database ID: 9
> TABLE level scan performed.
> - Pages Scanned........................: 4
> - Extents Scanned.......................: 3
> - Extent Switches.......................: 2
> - Avg. Pages per Extent..................: 1.3
> - Scan Density [Best Count:Actual Count]......: 33.33% [1:3]
> - Extent Scan Fragmentation ...............: 66.67%
> - Avg. Bytes Free per Page................: 2849.5
> - Avg. Page Density (full)................: 64.79%

So let me make sure I have this right.
After I write the SP to perform the maintenance, I'll create a new
Job...give it a schedule...and just put EXEC Procedure in the Command
box when creating a new job step.
Is that all there is to it?

Yep. And make that job step a TSQL one. Also check out the Advanced tab, where you can specify an
output file.
--
Tibor Karaszi, SQL Server MVP
http://www.karaszi.com/sqlserver/default.asp
http://www.solidqualitylearning.com/
Blog: http://solidqualitylearning.com/blogs/tibor/
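
If you would rather script the job than click through Enterprise Manager, the SQL Server Agent procedures in msdb create the same thing; a sketch assuming the maintenance procedure is called dbo.usp_NightlyMaintenance (all names and paths here are placeholders):

USE msdb
GO
EXEC sp_add_job @job_name = 'Nightly maintenance'
EXEC sp_add_jobstep @job_name = 'Nightly maintenance',
    @step_name = 'Run maintenance proc',
    @subsystem = 'TSQL',
    @database_name = 'MyThirdPartyDB',
    @command = 'EXEC dbo.usp_NightlyMaintenance',
    @output_file_name = 'D:\Logs\NightlyMaintenance.txt'
EXEC sp_add_jobschedule @job_name = 'Nightly maintenance',
    @name = 'Every night at 2 AM',
    @freq_type = 4,          -- daily
    @freq_interval = 1,
    @active_start_time = 020000
EXEC sp_add_jobserver @job_name = 'Nightly maintenance'
GO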
"Izzy" <israel.richner@.gmail.com> wrote in message
news:1144777279.033714.283750@.g10g2000cwb.googlegroups.com...
> So let me make sure I have this right.
> After I write the SP to performe the maintenance, I'll create a new
> Job...give it a schedule...and just put EXEC Procedure in the Command
> box when creating a new job step.
> Is that all there is too it?

Great!
You guys have been a big help and I appreciate it.
Israel Richner, SQL Server Newbie :-)

Just to add some to Tibor's reply on the heap issue. It is recommended that
every table have a clustered index if for no other reason than to allow you
to control fragmentation. This particular table has too few pages to worry
about it one way or the other. Until you get to 8 or more pages a table or
index will always use mixed extents (meaning the pages are spread across up
to 8 extents instead of just one), and the fragmentation will never look
good. Reindexing these will not hurt anything, but just keep this in mind.
--
Andrew J. Kelly SQL MVP
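
If you ever want a heap like that to respond to reindexing, the usual fix is to give it a clustered index; for example (the column name here is purely hypothetical):

-- Assumed key column; with a clustered index, DBCC DBREINDEX can reorganize the data pages
CREATE CLUSTERED INDEX IX_EDI_210_DocumentId ON dbo.EDI_210 (DocumentId)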
"Izzy" <israel.richner@.gmail.com> wrote in message
news:1144767691.735932.174430@.i39g2000cwa.googlegroups.com...
> Thanks Tibor,
> I now have a SP to reindex only tables with an index that is >= 30%
> fragmented. I tried it out on my test db and got some wierd results.
> I found a table which is 66% fragmented, ran DBCC DBREINDEX
> (EDI_210,'',80) on just that table and it still shows it being 66%
> fragmented.
> Any idea why that is?
> Here is the results of running DBCC SHOWCONTIG (EDI_210)
> DBCC SHOWCONTIG scanning 'edi_210' table...
> Table: 'edi_210' (725577623); index ID: 0, database ID: 9
> TABLE level scan performed.
> - Pages Scanned........................: 4
> - Extents Scanned.......................: 3
> - Extent Switches.......................: 2
> - Avg. Pages per Extent..................: 1.3
> - Scan Density [Best Count:Actual Count]......: 33.33% [1:3]
> - Extent Scan Fragmentation ...............: 66.67%
> - Avg. Bytes Free per Page................: 2849.5
> - Avg. Page Density (full)................: 64.79%

Ok, since the table is very small I won't worry about trying to fix
something that isn't broken.
I've got a script now that will reindex tables that are >= 30%
fragmented, but I want to employ your suggestion regarding performing
log backups during
reindexing if the tlog grows beyond a certain point.
Right now my log seems to be holding steady at about 80 MB and I'd like
to see it stay under 1 GB because storage space is a factor.
If it were my decision I'd just add enough drive space to accommodate
whatever size it needed to grow to. But it's not; in fact my IT
manager has two heavily used network shares on the same drive as our
database and Tlog file. I bet that made you cringe, huh?!
I've tried to talk him into moving the shares and putting the database
file and tlog file on separate drives, but no luck there.
Anyway, I'm wondering: is there a way to evaluate the size of the
Tlog while I'm looping through my tables and reindexing them?
Here is my script for reindexing my tables; I got it from BOL and
modified it a little.
SET NOCOUNT ON
DECLARE @TABLENAME VARCHAR (128)
DECLARE @EXECSTR VARCHAR (255)
DECLARE @MAXFRAG DECIMAL
DECLARE @FILLFACTOR TINYINT

-- Reindex any table whose logical fragmentation is at or above this threshold
SELECT @MAXFRAG = 30.0
SELECT @FILLFACTOR = 80

-- Cursor over all user tables in the database
DECLARE TABLES CURSOR FOR
SELECT TABLE_NAME
FROM INFORMATION_SCHEMA.TABLES
WHERE TABLE_TYPE = 'BASE TABLE'

-- Holding table for the DBCC SHOWCONTIG ... WITH TABLERESULTS output
CREATE TABLE #FRAGLIST (
OBJECTNAME CHAR (255),
OBJECTID INT,
INDEXNAME CHAR (255),
INDEXID INT,
LVL INT,
COUNTPAGES INT,
COUNTROWS INT,
MINRECSIZE INT,
MAXRECSIZE INT,
AVGRECSIZE INT,
FORRECCOUNT INT,
EXTENTS INT,
EXTENTSWITCHES INT,
AVGFREEBYTES INT,
AVGPAGEDENSITY INT,
SCANDENSITY DECIMAL,
BESTCOUNT INT,
ACTUALCOUNT INT,
LOGICALFRAG DECIMAL,
EXTENTFRAG DECIMAL)

-- Gather fragmentation info for every table
OPEN TABLES
FETCH NEXT FROM TABLES INTO @TABLENAME
WHILE @@FETCH_STATUS = 0
BEGIN
INSERT INTO #FRAGLIST
EXEC ('DBCC SHOWCONTIG (''' + @TABLENAME + ''')
WITH FAST, TABLERESULTS, ALL_INDEXES, NO_INFOMSGS')
FETCH NEXT FROM TABLES INTO @TABLENAME
END
CLOSE TABLES
DEALLOCATE TABLES

-- Rebuild only the tables that exceed the fragmentation threshold
DECLARE INDEXES CURSOR FOR
SELECT DISTINCT OBJECTNAME
FROM #FRAGLIST
WHERE LOGICALFRAG >= @MAXFRAG
OPEN INDEXES
FETCH NEXT FROM INDEXES INTO @TABLENAME
WHILE @@FETCH_STATUS = 0
BEGIN
SELECT @EXECSTR = 'DBCC DBREINDEX (' + @TABLENAME + ','''',' +
RTRIM(@FILLFACTOR) + ')'
EXEC (@EXECSTR)
--I'd like to evaluate the tlog here and perform a backup if size is >= value
FETCH NEXT FROM INDEXES INTO @TABLENAME
END
CLOSE INDEXES
DEALLOCATE INDEXES

DROP TABLE #FRAGLIST
SET NOCOUNT OFF
GO

I figured it out, and for anyone wondering how...it's like this:
CREATE TABLE #LOGS
(
DATABASENAME VARCHAR(25),
LOGSIZE DECIMAL(10,5),
LOGSPACEUSED DECIMAL(10,5),
LOGSTATUS TINYINT
)
INSERT INTO #LOGS
EXEC ('DBCC SQLPERF(LOGSPACE)')
SELECT LOGSIZE
FROM #LOGS
WHERE DATABASENAME = [database name]
DROP TABLE #LOGS

Yup, that is what I would have suggested had you given me the chance :-).
--
Andrew J. Kelly SQL MVP
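
Putting the two pieces together, the check inside the reindex loop (at the commented spot in the script above) could look something like this; the 1 GB threshold, database name, and backup path are placeholders, and #LOGS is the temp table from the snippet above:

-- After each DBCC DBREINDEX: refresh the log-space numbers
TRUNCATE TABLE #LOGS
INSERT INTO #LOGS
EXEC ('DBCC SQLPERF(LOGSPACE)')

-- LOGSIZE from SQLPERF is reported in MB; back up the log once it passes about 1 GB
DECLARE @LOGSIZE DECIMAL(10,5)   -- in practice, declare this once near the top of the procedure
SELECT @LOGSIZE = LOGSIZE
FROM #LOGS
WHERE DATABASENAME = 'MyThirdPartyDB'   -- assumed database name

IF @LOGSIZE >= 1024
BACKUP LOG MyThirdPartyDB TO DISK = 'D:\Backups\MyThirdPartyDB_Log.trn'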
"Izzy" <israel.richner@.gmail.com> wrote in message
news:1144852022.220627.5640@.i40g2000cwc.googlegroups.com...
>I figured it out, and for anyone wondering how...it's like this:
> CREATE TABLE #LOGS
> (
> DATABASENAME VARCHAR(25),
> LOGSIZE DECIMAL(10,5),
> LOGSPACEUSED DECIMAL(10,5),
> LOGSTATUS TINYINT
> )
> INSERT INTO #LOGS
> EXEC ('DBCC SQLPERF(LOGSPACE)')
> SELECT LOGSIZE
> FROM #LOGS
> WHERE DATABASENAME = [database name]
> DROP TABLE #LOGS
>
