Saturday, August 18, 2012

SQL Server Maintenance

When you run a large database, you will eventually feel the need for periodic maintenance.

Typical users tend to think of backup as the main job, but from a DBA's point of view maintenance also includes updating statistics, shrinking the database, and so on.

Let's look at each of these in detail.

1. Backup

 The type of backup depends on the recovery model:

 Full + Log, or

 Full + Differential + Log.

 If data keeps accumulating, a common way to reduce the backup load is to take

 a Differential once a day, a Log backup every 3 hours, and a Full backup at a longer, fixed interval.

 In practice, this cycle is highly service-specific and should be designed

 around the transaction volume and the data retention period.

 One part that is often left out is backing up the system databases.

 The master database is small, but it holds critical system information; if it changes and you have no backup,

 losing the master database can leave you unable to recover the server.

 Frequent backups of master are not required, but when many changes have been made it is recommended to back it up periodically.
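The Full + Differential + Log pattern above can be sketched as follows. This is a minimal sketch: the database name (SampleDB) and backup paths are placeholders, and in practice each command would run as a scheduled SQL Server Agent job.

```sql
-- Occasional (e.g. weekly): full backup
BACKUP DATABASE SampleDB
TO DISK = N'D:\Backup\SampleDB_Full.bak'
WITH INIT;

-- Once a day: differential backup (changes since the last full)
BACKUP DATABASE SampleDB
TO DISK = N'D:\Backup\SampleDB_Diff.bak'
WITH DIFFERENTIAL;

-- Every 3 hours: transaction log backup (requires the FULL recovery model)
BACKUP LOG SampleDB
TO DISK = N'D:\Backup\SampleDB_Log.trn';
```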

2. Shrink Database

 For a large database with frequent transactions, the initial size, the growth

 increment, and the max size values must be chosen with considerable care.

 For example, suppose the initial space is 100MB with 10% autogrowth,

 but more than 100MB of data piles up per hour; the work of growing the database files

 will put considerable load on the server.

 This is why shrinking the database is an important part of maintenance.

 How much space to keep after shrinking, and how much space to return to the operating system,

 are the points that must be considered when maintaining the database.

 * If the database is not shrunk periodically, a table whose actual data has dropped from 100,000 rows to 1,000 rows

 may still occupy the space of 100,000 rows.
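A minimal sketch of reclaiming that space, assuming a database named SampleDB with a data file whose logical name is SampleDB_Data (both placeholders). Note that shrinking fragments indexes, so it should be a deliberate, occasional operation rather than a scheduled habit.

```sql
-- Shrink the whole database, leaving 10% free space in its files
DBCC SHRINKDATABASE (SampleDB, 10);

-- Or shrink a single data file down to a target size in MB
DBCC SHRINKFILE (SampleDB_Data, 500);
```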

3. Update Statistics & Re-Build Index

 This is the most common mistake when a database is managed by developers without a dedicated DBA.

 Suppose you design a single bulletin board table.

 At first you load about 100 test rows, think 'works fine~', and move on.

 But it only worked fine because there were literally just 100 rows.

 Unless statistics are updated, the DBMS keeps assuming the board table holds about 100 rows

 and builds its execution plans (scans) accordingly.

 What happens if, with statistics never updated, more than a million rows pile up?

 Every access to the board can end up as a full scan over a million rows.

 An index rebuild addresses a different problem: many updates and deletes cause fragmentation,

 and rebuilding resolves it by creating the index anew.

 Likewise, a query that was fast at first can, over time, show no better performance than scanning all the data,

 because updates and deletes have fragmented the index.
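For the board table example above, the two fixes can be sketched like this. The table dbo.Board and the index name IX_Board_PostedDate are placeholders for illustration.

```sql
-- Refresh statistics so the optimizer sees the real row count
UPDATE STATISTICS dbo.Board WITH FULLSCAN;

-- Rebuild one fragmented index (creates it anew, as described above)
ALTER INDEX IX_Board_PostedDate ON dbo.Board REBUILD;

-- Or rebuild every index on the table
ALTER INDEX ALL ON dbo.Board REBUILD;
```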

4. SP Re-Compile

 If your environment uses stored procedures, this must also be considered.

 Some people, after learning about statistics, think 'Now that I've updated the statistics, everything will be fine.'

 That holds for ad-hoc queries, but it is not the case for SPs.

 An SP's execution plan is compiled at the time the SP is first executed,

 and the same execution plan is kept afterwards. Even if data piles up and statistics are updated later,

 unless you tell the SP to recompile, the old execution plan will be reused again and again.

 The recompile syntax is sp_recompile 'table name';

 it is recommended to go through every table in sys.tables and mark the SPs that reference it

 for recompilation.
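The approach described above can be sketched as follows: walking sys.tables and calling sp_recompile on each table, so that any stored procedure referencing that table gets a fresh execution plan on its next run. This is a minimal sketch; in practice you might filter to the tables that actually changed.

```sql
-- Mark every user table so that SPs referencing it recompile on next run
DECLARE @tbl sysname;

DECLARE tbl_cursor CURSOR FOR
    SELECT QUOTENAME(SCHEMA_NAME(schema_id)) + N'.' + QUOTENAME(name)
    FROM sys.tables;

OPEN tbl_cursor;
FETCH NEXT FROM tbl_cursor INTO @tbl;

WHILE @@FETCH_STATUS = 0
BEGIN
    EXEC sp_recompile @tbl;
    FETCH NEXT FROM tbl_cursor INTO @tbl;
END;

CLOSE tbl_cursor;
DEALLOCATE tbl_cursor;
```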
