
Welcome to the Slackware Documentation Project

howtos:hardware:smart_hdd_diagnostics, revision 2015/01/05 01:19 (UTC) by metaschima
               arguments to this option are on and off.
</code>
This also updates attributes that are marked ''Offline''. Unlike attributes marked ''Always'', ''Offline'' attributes are only updated when this is enabled or when you run a SMART test.
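For illustration, a quick way to see which attributes are ''Offline''-updated is to filter the ''UPDATED'' column of ''smartctl -A'' output. The sketch below runs the filter on made-up sample lines rather than a real drive; on a real system you would pipe ''smartctl -A /dev/sda'' (as root) into the same ''awk'' command:

```shell
# Filter SMART attribute lines whose UPDATED column (field 8) is 'Offline'.
# The sample lines below are illustrative; on a real drive you would use:
#   smartctl -A /dev/sda | awk '$8 == "Offline" {print $1, $2}'
cat <<'EOF' | awk '$8 == "Offline" {print $1, $2}'
  5 Reallocated_Sector_Ct   0x0033   100   100   036    Pre-fail  Always       -       0
198 Offline_Uncorrectable   0x0030   100   100   000    Old_age   Offline      -       0
EOF
```

This prints only the ''Offline''-updated attribute, here ''198 Offline_Uncorrectable''.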
  
Note also that the approximate times for running various tests are listed. We will discuss SMART tests in the next section.
===== #4 Start_Stop_Count and #12 Power_Cycle_Count and #193 Load_Cycle_Count =====
  
This attribute is important for laptop HDDs, because they default to powering off when not in use. Although laptop HDDs are designed to spin up and down more times than desktop HDDs, and this is an ''Old_age'' attribute, frequent load cycles still wear down the drive. Unless you run on batteries all the time, you may want to consider turning this feature off by adding the following to a boot script such as ''/etc/rc.d/rc.local'':
<code>
hdparm -B 254 /dev/sda
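To see whether the load cycle count is actually climbing, watch attribute #193 over time (you can also read the current APM level with ''hdparm -B /dev/sda''). The sketch below applies the filter to made-up sample output; on a real system you would run ''smartctl -A /dev/sda | grep -i load_cycle'' as root:

```shell
# Print the name and raw value of the Load_Cycle_Count attribute.
# Sample 'smartctl -A' lines with illustrative values; on a real drive:
#   smartctl -A /dev/sda | awk '/Load_Cycle_Count/ {print $2 " = " $NF}'
cat <<'EOF' | awk '/Load_Cycle_Count/ {print $2 " = " $NF}'
  4 Start_Stop_Count        0x0032   099   099   000    Old_age   Always       -       1532
193 Load_Cycle_Count        0x0032   097   097   000    Old_age   Always       -       34821
EOF
```

If the raw value rises quickly while the drive is idle, the power management feature is cycling the heads and the ''hdparm'' workaround above is worth considering.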
===== #174 Unexpected power loss count and #192 Power-Off_Retract_Count =====
  
Sudden power loss is detrimental to both HDDs and [[http://hardware.slashdot.org/story/13/03/01/224257/how-power-failures-corrupt-flash-ssd-data|SSDs]]. For this reason, among many others, a UPS power backup should be used for systems that are on all the time. Also make sure to shut down your computer properly whenever possible, to prevent damage and data loss.
  
===== #190 Airflow_Temperature_Cel and 194 Temperature_Celsius =====
===== Bad Blocks (#5, 196, 197, 198) =====
  
Bad blocks are areas of the disk surface that are damaged and can no longer hold data reliably. Internally, the HDD/SSD deals with these by marking them and remapping/reallocating them to other areas. Bad blocks increase with the age of the drive, and you can expect to encounter them on every HDD and SSD. The question is when they become something to be concerned about. That is hard to say, and in general you will have to judge each device on an individual basis. A large increase in the number of bad blocks could mean the drive is nearing its end. Keep monitoring the ''Pre-fail'' attributes and decide when to change it out.
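All four bad-block attributes can be pulled out of ''smartctl -A'' output with a single ''grep''. The sketch below runs it on made-up sample values; on a real drive you would use ''smartctl -A /dev/sda | grep -E 'Reallocated|Pending|Uncorrect' '' as root:

```shell
# Show only the bad-block related attributes (#5, 196, 197, 198).
# Sample lines with illustrative values; on a real drive you would use:
#   smartctl -A /dev/sda | grep -E 'Reallocated|Pending|Uncorrect'
cat <<'EOF' | grep -E 'Reallocated|Pending|Uncorrect'
  5 Reallocated_Sector_Ct   0x0033   100   100   036    Pre-fail  Always       -       0
194 Temperature_Celsius     0x0022   035   045   000    Old_age   Always       -       35
196 Reallocated_Event_Count 0x0032   100   100   000    Old_age   Always       -       0
197 Current_Pending_Sector  0x0032   100   100   000    Old_age   Always       -       2
198 Offline_Uncorrectable   0x0030   100   100   000    Old_age   Offline      -       0
EOF
```

A non-zero raw value in ''Current_Pending_Sector'' (as in this sample) means blocks are waiting to be reallocated; watch whether the raw values keep growing between checks.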
  
====== SMART Tests ======
smartctl -t long /dev/sda
</code>

These tests can all be run on a running system without major side effects. If you want the long test to complete, minimize HDD usage while it runs, since the test has to scan the whole disk to finish.

After the test finishes, you can view the results with the ''-a'' option as shown in the previous section.
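Besides ''-a'', smartctl keeps a dedicated self-test log that you can read with ''smartctl -l selftest /dev/sda''. The sketch below just counts passing entries in a made-up sample of that log:

```shell
# Count self-test log entries that completed without error.
# Sample 'smartctl -l selftest' lines (illustrative); on a real drive:
#   smartctl -l selftest /dev/sda
cat <<'EOF' | grep -c 'Completed without error'
# 1  Extended offline    Completed without error       00%     12345         -
# 2  Short offline       Completed without error       00%     12001         -
EOF
```

Failed entries show a different status (and the percentage of the test remaining when it aborted), so the log gives a quick history of how the drive has been doing.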

Short and Conveyance tests should always pass; if they fail, check the attributes, as the drive is probably failing. A long test, however, can fail simply because of bad blocks, and that does NOT mean the drive is failing: the long test stops at the first error it finds on the disk, so a bad block aborts it. You will have to wait for the HDD to remap/reallocate the block, or technically you could try to force it to do so as described at http://www.smartmontools.org/browser/trunk/www/badblockhowto.xml. However, that method is difficult to carry out safely, so you should usually just wait for the HDD to remap/reallocate on its own.

How often should you run these tests? That depends. If you run a server, more often is better; the smartmontools site recommends weekly tests. For a home user, I usually run a long test every 1000 power-on hours, but that is up to you and also depends on the drive and the situation.
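As a sketch of the every-1000-hours rule of thumb (the starting value here is made up; on a real drive you would read it from the ''Power_On_Hours'' attribute):

```shell
# Compute the next 1000-hour mark at which to run a long test.
# 'hours' is a sample value; on a real drive read it (as root) with:
#   smartctl -A /dev/sda | awk '$2 == "Power_On_Hours" {print $NF}'
hours=12345
next=$(( (hours / 1000 + 1) * 1000 ))
echo "run the next long test around $next power-on hours"
```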

====== Is my drive failing? ======

A failing drive is defined as:
  - Having a ''Pre-fail'' attribute below or near its threshold, marked ''FAILING_NOW'' or ''In_the_past''.
  - Having an ''Old_age'' attribute below or near its threshold, marked ''FAILING_NOW'' or ''In_the_past'', **PLUS** other signs of failure such as consistent failure of SMART tests, strange noises, slowdowns, corrupt data, etc.

<note important>A failed long test does NOT mean your drive is failing; it could be just bad blocks. See the previous section.</note>

Do not ignore your senses: if the HDD sounds unusual or makes strange noises, monitor it closely and/or replace it. Again, SMART cannot tell you with great accuracy if or when a drive will fail; a drive can fail with all attributes above threshold and minimal warning signs. The only way to keep your data safe is to back it up, using the 3-2-1 strategy as mentioned above.

====== smartd ======

What is smartd? It is a daemon that monitors SMART. If you do not want to monitor attributes and run tests manually, you can set up smartd to do so on a regular basis. Refer to ''man smartd'', ''man smartd.conf'', and ''/etc/smartd.conf'' for everything you need to know about configuring it to do what you want.
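As a sketch, a ''/etc/smartd.conf'' entry along the lines of the examples in ''man smartd.conf'' might look like this (the device name and the schedule are assumptions; adjust them to your setup):

```
# Monitor /dev/sda: all attributes (-a), automatic offline data
# collection on (-o on), attribute autosave on (-S on), and a test
# schedule (-s): short test daily at 02:00, long test Saturdays at 03:00.
/dev/sda -a -o on -S on -s (S/../.././02|L/../../6/03)
```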
====== Sources ======
<!-- If you are copying information from another source, then specify that source -->