  
In order to be able to use SMART you need:
  - A HDD or SSD that supports SMART
  - SMART enabled in the UEFI/BIOS
  - Software to interface with SMART
  
Some commonly used software to interface with SMART is [[http://www.smartmontools.org/wiki|smartmontools]], or you can find individual manufacturer's utilities on [[https://www.ultimatebootcd.com/|UBCD]]. Some people prefer smartmontools because it is easily accessible from the command line. Others prefer the manufacturer's utilities because they sometimes have more features than smartmontools. Which is better is mostly down to user preference and the details of the situation. For this article we will focus on smartmontools and more specifically smartctl.
  
In order to display the SMART attributes with smartmontools you need to run the following as root:
<code>
smartctl -a /dev/sda
</code>
Note that we will be assuming that ''/dev/sda'' is your HDD/SSD device node. In many cases this is the first HDD/SSD on the system, but you need to double check to make sure it is the HDD/SSD you are interested in.
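If you are unsure which node belongs to which drive, you can list the block devices first. Both commands below are read-only examples; ''smartctl --scan'' requires smartmontools to be installed, and the output naturally depends on your hardware:

```
# List all disks with their size and model so you can pick the right node
lsblk -d -o NAME,SIZE,MODEL

# Ask smartctl to enumerate the devices it can talk to
smartctl --scan
```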
  
The output will be something like:
<code>
...
</code>
  
This is just an example from my current HDD. Technically ''smartctl -a'' lists everything, not just attributes, but the whole output is more useful than the attributes alone. Some things to note in the output are that SMART support is available and enabled. If it is not available then your device may not support SMART, which can occur if this is an external HDD with a cheap enclosure or if the device is not a HDD/SSD. If it is not enabled, go into your UEFI/BIOS settings and enable it. Also note ''SMART overall-health self-assessment test result: PASSED''; it should be PASSED unless your HDD is failing.
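As a quick sketch, the relevant lines can be filtered out with ''grep''. The sample text below is a stand-in for real output; on a live system you would pipe ''smartctl -i /dev/sda'' into the same filter instead:

```shell
# On a real system: smartctl -i /dev/sda | grep '^SMART support'
# Here a saved sample is filtered so the pipeline can be shown on its own.
sample='SMART support is: Available - device has SMART capability.
SMART support is: Enabled'
printf '%s\n' "$sample" | grep '^SMART support'
```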
  
Note the line ''Auto Offline Data Collection: Enabled''. This is a feature that is enabled by default on modern internal HDDs. ''man smartctl'' explains what this feature does and how to enable it:
<code>
...
               arguments to this option are on and off.
</code>
This also updates attributes that are marked ''Offline''. Unlike ''Always'' updated attributes, ''Offline'' attributes are only updated if this is enabled or if you run a SMART test.
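If automatic offline data collection is shown as disabled, it can be switched on from the command line, for example (as root):

```
# Enable SMART itself (-s on) and automatic offline data collection (-o on)
smartctl -s on -o on /dev/sda
```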
  
Note also that the approximate times for running various tests are listed. We will discuss SMART tests in the next section.
<code>
...
</code>
  
Thus, the most important attributes are marked ''Pre-fail''. If the value of a ''Pre-fail'' attribute is below threshold, the attribute is failing, implying that the HDD is failing. A failing attribute will be marked as ''FAILING_NOW'' or ''In_the_past'' if it has failed now or in the past, respectively. ''Old_age'' attribute failures do NOT necessarily mean imminent failure, but rather that the drive is getting old and it should be monitored more carefully or replaced at some point.
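As an illustration of the VALUE/THRESH comparison, a short ''awk'' filter can flag ''Pre-fail'' attributes at or below their threshold. The sample rows below are fabricated for the demonstration; on a live system you would pipe ''smartctl -A /dev/sda'' through the same filter:

```shell
# Attribute table columns: ID# NAME FLAG VALUE WORST THRESH TYPE ...
# The rows here are made-up examples, not real drive data.
sample='  5 Reallocated_Sector_Ct   0x0033   100   100   036    Pre-fail  Always       -       0
  1 Raw_Read_Error_Rate     0x000f   040   040   044    Pre-fail  Always       -       120'
printf '%s\n' "$sample" | awk '$7 == "Pre-fail" && $4 + 0 <= $6 + 0 {print $2, "value", $4 + 0, "<= threshold", $6 + 0}'
# prints: Raw_Read_Error_Rate value 40 <= threshold 44
```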
  
For the exact meaning of each attribute, please see the [[https://en.wikipedia.org/wiki/S.M.A.R.T.|Wiki]] page. Some specific attributes that I would like to discuss are as follows:
  
===== #4 Start_Stop_Count and #12 Power_Cycle_Count and #193 Load_Cycle_Count =====

These attributes are important for laptop HDDs, because laptop HDDs default to powering off when not in use. Although laptop HDDs are designed to spin up and down more times than desktop HDDs, and these are ''Old_age'' attributes, frequent cycling still wears down the drive. Unless you run on batteries all the time you may want to consider turning off this feature by adding this to a boot script such as ''/etc/rc.d/rc.local'':
<code>
hdparm -B 254 /dev/sda
</code>
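Before changing the setting, you can read the current APM level back with ''hdparm -B'' and no value (run as root; the reported level depends on the drive):

```
# Read the current Advanced Power Management level; lower values mean
# more aggressive power saving, 254 favors performance
hdparm -B /dev/sda
```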
===== #9 Power_On_Hours =====

This is the age of the drive in hours. This is rather important because it tells you how old the drive is and thus how likely it is to fail. HDD failure, among other things, follows the [[https://en.wikipedia.org/wiki/Bathtub_curve|Bathtub curve]]. As such, the highest failure rate is among very young (infant mortality) and very old (worn out) drives. This is important because I hear many people saying, "Oh, but the drive is brand new, it can't be failing." Wrong: a new drive is more likely to fail than a middle-aged drive, much like an old drive.
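As a rough back-of-the-envelope conversion, the raw hours can be turned into days and years with shell arithmetic. The value 20000 here is a hypothetical reading, not from any real drive:

```shell
# Hypothetical raw Power_On_Hours value
poh=20000
# Integer arithmetic: hours -> days -> years of powered-on time
echo "$((poh / 24)) days powered on, roughly $((poh / 24 / 365)) years"
```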
  
===== #174 Unexpected power loss count and #192 Power-Off_Retract_Count =====
  
Sudden power loss is detrimental to both HDDs and [[http://hardware.slashdot.org/story/13/03/01/224257/how-power-failures-corrupt-flash-ssd-data|SSDs]]. UPS power backups should be used for systems that are on all the time, for this reason as well as many others. Make sure to also shut down your computer properly whenever possible to prevent damage and data loss.
  
===== #190 Airflow_Temperature_Cel and #194 Temperature_Celsius =====
  
Although many people believe that HDDs should be kept cool and are sensitive to heat, a [[http://research.google.com/archive/disk_failures.pdf|large Google internal study]] suggests that high temperatures are only significantly detrimental to old HDDs.

===== Bad Blocks (#5, 196, 197, 198) =====

Bad blocks are areas of the disk surface that are damaged and can no longer hold data reliably. Internally the HDD/SSD deals with these by marking them and remapping/reallocating them to spare areas. Bad blocks increase with the age of the drive, and you can expect to encounter them on every HDD and SSD. The question is when this becomes something to be concerned about. That is hard to say, and in general you will have to deal with each device on an individual basis. A large increase in the number of bad blocks could mean the drive is nearing its end. Keep monitoring the ''Pre-fail'' attributes and decide when to change it out.
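A quick way to eyeball these counters is to filter the attribute table for the bad-block-related names; the raw value in the last column is the count. This is only an example invocation; attribute names can vary slightly between drive models:

```
# Show the reallocation and pending-sector counters (run as root)
smartctl -A /dev/sda | grep -E 'Reallocated|Pending|Uncorrectable'
```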

====== SMART Tests ======

There are 3 main types of SMART tests that you can perform:

  * short: a superficial test that checks electrical and mechanical performance and updates offline attributes
  * conveyance: identifies damage incurred during transport (mostly useful for external or laptop HDDs)
  * long: a short test plus a scan of the disk surface for bad blocks

These tests are run with the ''-t'' option, like:
<code>
smartctl -t long /dev/sda
</code>

These tests can all be run on a running system without major side-effects. If you want the long test to finish in a reasonable time, you should minimize HDD usage, since it has to scan the whole disk to complete the test.

After waiting for the test to finish, you can get the results using the ''-a'' option as shown in the previous section.
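The self-test results and the current test status can also be queried directly (run as root; the output depends on the drive):

```
# Show the log of past self-tests and their results
smartctl -l selftest /dev/sda

# Show capabilities, including the self-test execution status and
# the estimated time remaining for a running test
smartctl -c /dev/sda
```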

Short and conveyance tests should always pass. If these fail, check the attributes, as the drive is probably failing. A long test can fail if there are bad blocks, and this does NOT necessarily mean the drive is failing. The long test stops when it finds an error on the disk, so if there is a bad block it simply stops there. You will have to wait for the HDD to remap/reallocate the block, or technically you could try to force it to do so by following the [[http://www.smartmontools.org/browser/trunk/www/badblockhowto.xml|smartmontools bad block HOWTO]]. However, this method is difficult to implement safely, so you should usually just wait for the HDD to remap/reallocate.

How often should you run these tests? That depends. If you run a server then more often is better; the smartmontools site recommends weekly tests. For a home user, I usually run a long test every 1000 power-on hours, but that is up to you and also depends on the details of the drive and situation.

====== Is my drive failing? ======

A failing drive is defined as:
  - Having a ''Pre-fail'' attribute below or near threshold, marked ''FAILING_NOW'' or ''In_the_past''.
  - Having an ''Old_age'' attribute below or near threshold, marked ''FAILING_NOW'' or ''In_the_past'', **PLUS** other signs of failure such as consistent failure of SMART tests, strange noises, slowing down, corrupt data, etc.

<note important>A failed long test does NOT mean your drive is failing; it could be just bad blocks. See the previous section.</note>

Do not ignore your senses: if the HDD sounds unusual or makes strange noises, monitor it closely and/or replace it. Again, SMART cannot tell you with great accuracy if or when a drive will fail. The drive can fail with above-threshold attributes and minimal signs. The only way to keep your data safe is to back up your data; use the 3-2-1 strategy as mentioned above.

====== smartd ======

What is smartd? It is a daemon that monitors SMART. So if you don't want to manually monitor and run tests, you can set up smartd to run them on a regular basis. Refer to ''man smartd'', ''man smartd.conf'', and ''/etc/smartd.conf'' for everything you need to know about setting up smartd to do what you want.
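As a starting point, a single line in ''/etc/smartd.conf'' can monitor a drive and schedule tests, using the ''T/MM/DD/d/HH'' scheme documented in ''man smartd.conf''. Here ''-a'' monitors all attributes, and the ''-s'' regex runs a short test every day at 2am and a long test every Saturday at 3am (adjust the device node and schedule to your setup):

```
/dev/sda -a -s (S/../.././02|L/../../6/03)
```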
====== Sources ======
<!-- If you are copying information from another source, then specify that source -->