Not posting a link since this is an advertisement from the Diskeeper authors, but the first part, which I have posted below, does explain how SSDs degrade over time.
______________________________________________________________________________________________
by GaryQuan | Jun 1, 2020
You bought SSDs to increase your system performance, but you have noticed that performance has degraded since you first bought them. Can SSD performance degrade over time, and is there a way to prevent this? The answer is YES and YES.
The reason for this degradation is an undesirable SSD phenomenon called the Write Amplification Factor (WAF), a dirty word for SSDs. This is a numerical value that indicates the actual amount of data that was written to an SSD relative to the amount of data the host (i.e. the Windows OS) requested to be written:
WAF = data written to the SSD / data written by the host
For example, an application on a Windows Server system writes out 128KB of data to the SSD, but internally, 512KB of data had to be written on the SSD for this to occur. This degrades the SSD's write performance.
In this example, the WAF = 512KB/128KB = 4. This is bad: a 128KB write from the host resulted in 512KB of internal writes on the SSD.
Ideally, you want a WAF = 128KB/128KB = 1. This is the best case: a 128KB write from the host results in only 128KB of internal writes on the SSD.
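To make the ratio concrete, here is a minimal sketch in Python (the byte counts are just the example figures above, not measurements from any real drive):

def write_amplification_factor(nand_kb_written, host_kb_written):
    # WAF = data written to the SSD / data written by the host
    return nand_kb_written / host_kb_written

# The example above: the host asks for a 128KB write, but the SSD
# internally has to write 512KB to satisfy it.
print(write_amplification_factor(512, 128))  # 4.0 -- bad
print(write_amplification_factor(128, 128))  # 1.0 -- the ideal case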
Now, why does this occur? Unlike on HDDs, data cannot be directly overwritten in place. On SSDs, data can only be written to erased spaces. When you have a brand new, initialized SSD, all the pages are in a free/erased state, so there is no problem finding free/erased spaces to write new data. But as the SSD starts to fill up with data, erased spaces have to be created on demand, and that causes the WAF to increase. I can go into more detail on this but will save it for another time. Suffice it to say, a higher WAF value means SSD performance degradation.
Now that you understand the restrictions on writing to an SSD, let us get to the real questions.
Do SSDs degrade over time?
The answer is YES, but it has more to do with the SSDs filling up over time. I have seen recommendations on the web to keep anywhere from 10% to 30% free space on SSDs to avoid this degradation. With less free space on a highly I/O-intensive system, extra internal writes and erases must occur to create erased spaces, which drives the WAF up.
Some SSD technologies have been introduced to help with this, but they have not fully eliminated the problem:
➣SSDs are overprovisioned. For example, a 1TB SSD actually contains 1.1TB of data space. This extra space (seen only by the SSD internals) helps keep the WAF low.
➣SSD garbage collection and TRIM. Both of these processes free/erase spaces in the background so new writes can occur quickly on the newly erased spaces.
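As a rough Python sketch of the overprovisioning arithmetic (the 1TB/1.1TB figures are just the example numbers above, not the spec of any particular drive):

user_capacity_tb = 1.0  # capacity visible to the host (Windows)
raw_capacity_tb = 1.1   # actual NAND on the drive, per the example above

spare = (raw_capacity_tb - user_capacity_tb) / user_capacity_tb
print(f"Overprovisioned spare area: {spare:.0%}")  # ~10% hidden from the host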
Here is Part II from Condusiv concerning SSD degradation:
In Part 1, I explained how SSDs can degrade over time and that the reason for it was an undesirable SSD phenomenon called the Write Amplification Factor (WAF). This is a numerical value that indicates the actual amount of data that was written to an SSD relative to the amount of data the host (i.e. the Windows OS) requested to be written:
WAF = data written to the SSD / data written by the host
This occurs because, unlike on HDDs, data cannot be directly overwritten in place. On SSDs, data can only be written to erased spaces. When you have a brand new, initialized SSD, all the pages are in a free/erased state, so there is no problem finding free/erased spaces to write new data. But as the SSD starts to fill up with data, erased spaces have to be created on demand, and that causes the WAF to increase. A higher WAF value means SSD performance degrades because more writes have to occur than were originally requested.
In Part 2, I am going to explain in more detail why this occurs. To do this, I must first define two terms: SSD Pages and Blocks.
Pages: This is the smallest unit that can be read/written on an SSD from an application, and it is usually 4KB in size. So, in this 4KB case, even if the file data is less than 4KB, it still takes a full 4KB page to store the data on the SSD. If 5KB of data needs to be written out, then two 4KB pages are needed to contain it.
Blocks: Pages are organized into blocks. For example, some blocks are 512KB in size. In this 4KB page size and 512KB block size example, there would be 128 pages per block; the first block on the SSD would contain Page 0 to Page 127, the second block Page 128 to Page 255, and so on (the sketch after the diagram below shows this arithmetic). For example:
Block 0
Page 0 | Page 1 | Page 2 | Page 3 |
Page 4 | Page 5 | Page 6 | Page 7 |
:
:
Page 120 | Page 121 | Page 122 | Page 123 |
Page 124 | Page 125 | Page 126 | Page 127 |
Block 1
Page 128 | Page 129 | Page 130 | Page 131 |
Page 132 | Page 133 | Page 134 | Page 135 |
:
:
Page 248 | Page 249 | Page 250 | Page 251 |
Page 252 | Page 253 | Page 254 | Page 255 |
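A short Python sketch of the page/block arithmetic from the example geometry above (4KB pages, 512KB blocks; the sizes are the example's, not a universal standard):

import math

PAGE_SIZE_KB = 4      # smallest unit that can be written
BLOCK_SIZE_KB = 512   # smallest unit that can be erased

pages_per_block = BLOCK_SIZE_KB // PAGE_SIZE_KB  # 128

def block_of(page_number):
    # Pages 0-127 live in Block 0, pages 128-255 in Block 1, and so on.
    return page_number // pages_per_block

def pages_needed(data_kb):
    # Writes occupy whole pages, so 5KB of data still takes two 4KB pages.
    return math.ceil(data_kb / PAGE_SIZE_KB)

print(block_of(130))    # 1
print(pages_needed(5))  # 2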
Now that these terms are defined, let's show how this affects SSD performance and degradation, specifically write performance. As indicated before, data cannot be directly overwritten on SSDs. For example, suppose a small piece of existing file data that is already on a page of the SSD needs to be updated, say Page 0 of Block 0. There are a few restrictions when writing to an SSD:
• In our example, data can only be written in 4KB pages.
• In our example, pages can only be erased in whole 512KB blocks.
So, if the same page of data needs to be updated and you want to retain the data in the rest of the pages on that block, these steps would need to occur.
1. The whole block needs to be read into memory.
2. The one page is updated in memory.
3. The whole block on the SSD needs to be erased (remember, data can only be written to erased pages).
4. The updated block in memory is then written back to the newly erased block.
So, in this extreme case where all the other pages on the block had to be written back out, the original 4KB write from the host caused 512KB of data to be written (re-written) to the SSD. Here, the WAF would be 512KB/4KB = 128, the worst-case scenario. Now, I said this is an extreme case. In most cases:
1. A different page/block that is already erased is found, and the updated data is written to that free page.
2. The data pointer is remapped to this new page.
3. The previous page is marked 'stale', meaning it cannot be written to again until it is erased.
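Here is a minimal Python sketch contrasting the two update paths just described, using the same 4KB-page/512KB-block example numbers (illustrative arithmetic only, not how a real controller is implemented):

PAGE_KB = 4
BLOCK_KB = 512

def waf_in_place_update():
    # Extreme case: the whole block is read, modified, erased, and rewritten.
    host_written = PAGE_KB   # the host only asked to update one 4KB page
    nand_written = BLOCK_KB  # but all 512KB of the block get written again
    return nand_written / host_written

def waf_remap_to_free_page():
    # Common case: the new data goes to an already-erased page, the pointer
    # is remapped, and the old page is simply marked stale.
    return PAGE_KB / PAGE_KB

print(waf_in_place_update())     # 128.0 -- the worst-case scenario above
print(waf_remap_to_free_page())  # 1.0 -- as long as free pages exist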
But as an SSD fills up, there are fewer available free pages, so the extra writes and erases needed to create erased pages/blocks can and will occur. As indicated before, to keep the WAF low and help your SSDs run like new:
• Keep sufficient free space on your SSDs.
• Enforce Sequential Writes rather than Random Writes.
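If you want to check the first recommendation programmatically, here is a minimal Python sketch using the standard library (the 20% threshold is just a middle value from the 10% to 30% range mentioned in Part 1, not an official figure, and the C:\ path assumes a Windows system drive):

import shutil

def free_space_percent(path="C:\\"):
    # Percentage of the volume at `path` that is still free.
    usage = shutil.disk_usage(path)
    return usage.free / usage.total * 100

pct = free_space_percent()
print(f"Free space: {pct:.1f}%")
if pct < 20:  # illustrative threshold within the 10%-30% range above
    print("Consider freeing up space to keep the WAF low.")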