Monday, March 26, 2012

Maintenance Plan

I've done quite a bit of reading about SQL 2000 full backups and log
backups. I have a pretty good idea of how to make this happen (or at
least I think I do).
Here's the current problem: I have a database that was created by a
third party, and it needs to be fully backed up every night with hourly
log backups. I scheduled this to run using the maintenance wizard, and I
changed the recovery model from Simple to Full so that I can take log
backups.
Now the transaction log is getting quite large. Through my reading,
I've found out that backing up the log does truncate the log but
doesn't reduce the physical log size. I guess I need to back up the log
and then do a DBCC SHRINKFILE in order to actually shrink the physical
log to a target size. In order to do this I need to write my own backup
scripts without using the wizard... Right?
So if that's correct, my question is: where do I write the scripts? In an
SP that I then schedule to run somehow? Additionally, should I write two
SPs, one for the nightly full backup and another for the hourly log
backups?
This 3rd party db has a lot of tables that are unused (about 100), so
when I write my maintenance script should I specify only the tables I
want to reindex? Is there any performance consideration here?
In the end, I need to fully back up the db nightly and the transaction
log hourly, keeping the physical file size as small as possible
while still being able to restore to a point in time.
I'm pretty sure I can write the scripts for this; I'm just not sure
where to put the scripts and how to schedule them.
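The commands themselves would boil down to something like this, I think
(the database name and backup paths here are just placeholders):

-- Nightly full backup (INIT overwrites the previous night's backup file)
BACKUP DATABASE MyThirdPartyDB
    TO DISK = 'D:\Backups\MyThirdPartyDB_full.bak'
    WITH INIT

-- Hourly log backup (NOINIT appends each hourly backup to the same file)
BACKUP LOG MyThirdPartyDB
    TO DISK = 'D:\Backups\MyThirdPartyDB_log.trn'
    WITH NOINIT
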
Any help would be appreciated.
Israel

> Now the transaction log is getting quite large, through my readings,
> I've found out that backing up the log does truncate the log but
> doesn't reduce the physical log size. I guess I need to backup the log
> and then do a dbcc shrinkfile in order to actually shrink the physical
> log to a target size.
What do you mean by "target size"? Why would you want to shrink the log file
if it will just bump up to the size it needs to be again? See
http://www.karaszi.com/SQLServer/in...rink.asp for more details.
Tibor Karaszi, SQL Server MVP
http://www.karaszi.com/sqlserver/default.asp
http://www.solidqualitylearning.com/
Blog: http://solidqualitylearning.com/blogs/tibor/
"Izzy" <israel.richner@.gmail.com> wrote in message
news:1144682154.389600.122100@.t31g2000cwb.googlegroups.com...
> I've done quite a bit of reading about SQL 2000 full backups and log
> backups. I have a pretty good idea on how to make this happen (or at
> least I think I do).
> Here's the current problem, I have a database that was created by a
> third party and it needs to be fully backed up every night with hourly
> log backups. I scheduled this to run using the maintenance wizard, I
> changed the recovery model from Simple to Full in order to keep log
> backups.
> Now the transaction log is getting quite large, through my readings,
> I've found out that backing up the log does truncate the log but
> doesn't reduce the physical log size. I guess I need to backup the log
> and then do a dbcc shrinkfile in order to actually shrink the physical
> log to a target size. In order to do this I need to write my own backup
> scripts without using the wizard...Right?
> So if that's correct my question is: Where do I write the scripts? In a
> SP and then schedule it to run somehow? Additionally should I write 2
> sp's, one for the nightly full backup and another for the hourly log
> backups?
> This 3rd party db has a lot of tables that are unused (about 100), so
> when I write my maintenance script should I specify only the tables I
> want to reindex? Is there any performance consideration here?
> In the end, I need to fully back up the db nightly and the transaction
> log hourly while keeping the physical file size as small as possible
> while still being able to restore to a point-in-time.
> I'm pretty sure I can write the scripts for this, I'm just not sure
> where to put the scripts and how to schedule it?
> Any help would be appreciated.
> Israel

OK, thanks for the article, point taken.
The size of the db is 7.5 GB, and the indexes are rebuilt every night when the
db is backed up. I'm guessing that is what caused the log file to grow
beyond 11 GB: the reindexing of the tables. Even if this were only done once
a week, eventually the log would grow again to a size it shouldn't
need to be.
How do I get around this problem of the log getting too big and using
up the available disk space?
In the past this database was set to Simple Recovery, so this wasn't an
issue; now the need has changed and losing a day of work isn't an
option.
What would you recommend?
Thanks,
Israel

Why shouldn't it need to be that size? If it has to hold that much
info in between log backups, then that is the size it needs to be. Shrinking
it doesn't make that go away. If you want to keep the log file smaller at
all times, you basically have two options. One is to do several log backups
while the reindexing is going on. The other is to simply reindex only the
indexes that actually need it. Forget the maintenance plan and write your
own job to issue a DBCC DBREINDEX on only the indexes that have enough
fragmentation to warrant it. You can find an example in Books Online under
DBCC SHOWCONTIG for how to do that.
Andrew J. Kelly SQL MVP
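
A rough sketch of that approach, patterned on the Books Online example under
DBCC SHOWCONTIG but issuing DBCC DBREINDEX instead of DBCC INDEXDEFRAG (the
30% threshold and fillfactor of 80 below are just example values):

SET NOCOUNT ON
DECLARE @tablename sysname, @indexname sysname, @execstr varchar(500)
DECLARE @maxfrag decimal
SELECT @maxfrag = 30.0   -- rebuild anything at or above 30% logical fragmentation

-- Work table matching the DBCC SHOWCONTIG ... WITH TABLERESULTS output
CREATE TABLE #fraglist (
   ObjectName char(255), ObjectId int, IndexName char(255), IndexId int,
   Lvl int, CountPages int, CountRows int, MinRecSize int, MaxRecSize int,
   AvgRecSize int, ForRecCount int, Extents int, ExtentSwitches int,
   AvgFreeBytes int, AvgPageDensity int, ScanDensity decimal,
   BestCount int, ActualCount int, LogicalFrag decimal, ExtentFrag decimal)

-- Gather fragmentation info for every user table in the current database
DECLARE tables CURSOR FOR
   SELECT TABLE_NAME FROM INFORMATION_SCHEMA.TABLES WHERE TABLE_TYPE = 'BASE TABLE'
OPEN tables
FETCH NEXT FROM tables INTO @tablename
WHILE @@FETCH_STATUS = 0
BEGIN
   INSERT INTO #fraglist
   EXEC ('DBCC SHOWCONTIG (''' + @tablename
      + ''') WITH TABLERESULTS, ALL_INDEXES, NO_INFOMSGS')
   FETCH NEXT FROM tables INTO @tablename
END
CLOSE tables
DEALLOCATE tables

-- Rebuild only the indexes that are fragmented enough to matter
DECLARE indexes CURSOR FOR
   SELECT RTRIM(ObjectName), RTRIM(IndexName)
   FROM #fraglist
   WHERE LogicalFrag >= @maxfrag
     AND IndexId BETWEEN 1 AND 254   -- skip heaps (0) and text/image allocations (255)
OPEN indexes
FETCH NEXT FROM indexes INTO @tablename, @indexname
WHILE @@FETCH_STATUS = 0
BEGIN
   SELECT @execstr = 'DBCC DBREINDEX (''' + @tablename + ''', ''' + @indexname + ''', 80)'
   EXEC (@execstr)
   FETCH NEXT FROM indexes INTO @tablename, @indexname
END
CLOSE indexes
DEALLOCATE indexes
DROP TABLE #fraglist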
"Izzy" <israel.richner@.gmail.com> wrote in message
news:1144691290.716456.303060@.g10g2000cwb.googlegroups.com...
> OK, thanks for the article, point taken.
> The size of the db is 7.5GB, Indexes are rebuilt every night when the
> db is backed up. I'm guessing that is what caused the log file to grow
> beyond 11GB, reindexing of the tables. Even if this was only done once
> a week, eventually the log would grow again too a size it shouldn't
> need to be.
> How do I get around this problem of the log getting too big and using
> up the available disk space?
> In the past this database was set to Simple Recovery so this wasn't an
> issue, now the need has changed and losing a day of work isn't an
> option.
> What would you recommend?
> Thanks,
> Israel

Thanks Andrew,
That's pretty much the conclusion I've come to as well. Until I get the
scripts written I've disabled the reindexing.
In one thread someone recommended changing the Recovery Model to
Bulk-Logged during the reindexing and changing it back once it
completes; they said that would keep the transaction log from getting
so big. What is your take on that approach?
My original question was how to go about executing my own scripts. Do I
create an SP for this? If so, how do I schedule the SP to run?
This is my first attempt at recovery procedures, so sorry if the
question seems like a bit of a no-brainer.

> In one thread someone recommended changing the Recovery Model to
> Bulk-Logged during the reindexing and changing it back once it
> completes, they said that would keep the transaction log from getting
> so big. What is your take on that approach?
The ldf files might not log nearly as much for some operations (like index
rebuilds), but the following transaction log backup will not only contain log
records but also data pages - so it can get very big. Also, you cannot perform
a point-in-time restore for such a transaction log backup. You can give it a
spin and see how well it works out for your environment.
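
For example, the reindex job could bracket its work like this (database name
and backup path are placeholders); the log backup right after switching back
to Full is there so the point-in-time chain picks up again as soon as possible:

-- Switch to bulk-logged just for the duration of the rebuilds
ALTER DATABASE MyThirdPartyDB SET RECOVERY BULK_LOGGED

-- ... run the DBCC DBREINDEX work here ...

-- Switch back to Full and take a log backup immediately.
-- Note: the log backup covering the bulk-logged work cannot be used
-- for a point-in-time restore within that window (as noted above).
ALTER DATABASE MyThirdPartyDB SET RECOVERY FULL
BACKUP LOG MyThirdPartyDB
    TO DISK = 'D:\Backups\MyThirdPartyDB_log.trn'
    WITH NOINIT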

> My original question was how to go about executing my own scripts. Do I
> create a SP for this? If so how do I schedule the SP to run?
Encapsulating the TSQL commands in a procedure is a good idea. You can then
schedule the procedure using SQL Server Agent (EM, Management, ...).
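
For example, two small procedures along these lines (the names, database, and
paths are just placeholders, and usp_ReindexFragmented stands in for whatever
reindex logic you end up with):

-- Nightly job: rebuild fragmented indexes, then take the full backup
CREATE PROCEDURE dbo.usp_NightlyMaintenance
AS
BEGIN
    EXEC dbo.usp_ReindexFragmented   -- placeholder for the reindex script

    BACKUP DATABASE MyThirdPartyDB
        TO DISK = 'D:\Backups\MyThirdPartyDB_full.bak'
        WITH INIT
END
GO

-- Hourly job: just the log backup
CREATE PROCEDURE dbo.usp_HourlyLogBackup
AS
    BACKUP LOG MyThirdPartyDB
        TO DISK = 'D:\Backups\MyThirdPartyDB_log.trn'
        WITH NOINIT
GO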
--
Tibor Karaszi, SQL Server MVP
http://www.karaszi.com/sqlserver/default.asp
http://www.solidqualitylearning.com/
"Izzy" <israel.richner@.gmail.com> wrote in message
news:1144759482.900957.286230@.i40g2000cwc.googlegroups.com...
> Thanks Andrew,
> That's pretty much the conclusion I've come to as well. Until I get the
> scripts written I've disabled the reindexing.
> In one thread someone recommended changing the Recovery Model to
> Bulk-Logged during the reindexing and changing it back once it
> completes, they said that would keep the transaction log from getting
> so big. What is your take on that approach?
> My original question was how to go about executing my own scripts. Do I
> create a SP for this? If so how do I schedule the SP to run?
> This is my first attempt at recovery procedures, so sorry if that
> question seems a little "no brainer".

Thanks Tibor,
I now have an SP to reindex only tables with an index that is >= 30%
fragmented. I tried it out on my test db and got some weird results.
I found a table that is 66% fragmented, ran DBCC DBREINDEX
(EDI_210, '', 80) on just that table, and it still shows as 66%
fragmented.
Any idea why that is?
Here are the results of running DBCC SHOWCONTIG (EDI_210):
DBCC SHOWCONTIG scanning 'edi_210' table...
Table: 'edi_210' (725577623); index ID: 0, database ID: 9
TABLE level scan performed.
- Pages Scanned........................: 4
- Extents Scanned.......................: 3
- Extent Switches.......................: 2
- Avg. Pages per Extent..................: 1.3
- Scan Density [Best Count:Actual Count]......: 33.33% [1:3]
- Extent Scan Fragmentation ...............: 66.67%
- Avg. Bytes Free per Page................: 2849.5
- Avg. Page Density (full)................: 64.79%

IndexID 0 means this is a heap, so DBREINDEX will not shuffle any data pages.
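
If you want to see which tables are heaps before pointing the reindex script at
them, something like this against sysindexes works in SQL 2000 (indid 0 = heap,
indid 1 = clustered index):

SELECT o.name AS TableName,
       CASE WHEN i.indid = 0 THEN 'heap' ELSE 'clustered' END AS Storage
FROM   sysobjects o
JOIN   sysindexes i ON i.id = o.id AND i.indid IN (0, 1)
WHERE  o.type = 'U'
ORDER  BY o.name

Heaps can't be rebuilt with DBCC DBREINDEX; creating a clustered index on the
table is the usual way to reorganize them, so the reindex script simply skips
indid 0.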
Tibor Karaszi, SQL Server MVP
http://www.karaszi.com/sqlserver/default.asp
http://www.solidqualitylearning.com/
"Izzy" <israel.richner@.gmail.com> wrote in message
news:1144767691.735932.174430@.i39g2000cwa.googlegroups.com...
> Thanks Tibor,
> I now have a SP to reindex only tables with an index that is >= 30%
> fragmented. I tried it out on my test db and got some wierd results.
> I found a table which is 66% fragmented, ran DBCC DBREINDEX
> (EDI_210,'',80) on just that table and it still shows it being 66%
> fragmented.
> Any idea why that is?
> Here is the results of running DBCC SHOWCONTIG (EDI_210)
> DBCC SHOWCONTIG scanning 'edi_210' table...
> Table: 'edi_210' (725577623); index ID: 0, database ID: 9
> TABLE level scan performed.
> - Pages Scanned........................: 4
> - Extents Scanned.......................: 3
> - Extent Switches.......................: 2
> - Avg. Pages per Extent..................: 1.3
> - Scan Density [Best Count:Actual Count]......: 33.33% [1:3]
> - Extent Scan Fragmentation ...............: 66.67%
> - Avg. Bytes Free per Page................: 2849.5
> - Avg. Page Density (full)................: 64.79%

So let me make sure I have this right.
After I write the SP to perform the maintenance, I'll create a new
Job...give it a schedule...and just put EXEC Procedure in the Command
box when creating a new job step.
Is that all there is to it?

Yep. And make that jobstep a TSQL one. Also check out the Advanced tab, where
you can specify an output file.
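
The same job can also be created in T-SQL instead of through the Enterprise
Manager dialogs; the job name, schedule, procedure name (from the earlier
sketch), and output file path below are just examples, and the output file
parameter corresponds to the Advanced tab setting:

USE msdb
GO
EXEC sp_add_job @job_name = 'Nightly maintenance'

EXEC sp_add_jobstep
    @job_name = 'Nightly maintenance',
    @step_name = 'Run maintenance proc',
    @subsystem = 'TSQL',
    @database_name = 'MyThirdPartyDB',
    @command = 'EXEC dbo.usp_NightlyMaintenance',
    @output_file_name = 'D:\Backups\NightlyMaintenance_output.txt'

EXEC sp_add_jobschedule
    @job_name = 'Nightly maintenance',
    @name = 'Every night at 1 AM',
    @freq_type = 4,               -- daily
    @freq_interval = 1,           -- every 1 day
    @active_start_time = 10000    -- 01:00:00

EXEC sp_add_jobserver @job_name = 'Nightly maintenance'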
Tibor Karaszi, SQL Server MVP
http://www.karaszi.com/sqlserver/default.asp
http://www.solidqualitylearning.com/
Blog: http://solidqualitylearning.com/blogs/tibor/
"Izzy" <israel.richner@.gmail.com> wrote in message
news:1144777279.033714.283750@.g10g2000cwb.googlegroups.com...
> So let me make sure I have this right.
> After I write the SP to performe the maintenance, I'll create a new
> Job...give it a schedule...and just put EXEC Procedure in the Command
> box when creating a new job step.
> Is that all there is too it?
>
