Saturday, 26 November 2011

SSDs: a step towards instant computing

Ever since I first started working on optimizing server performance, I have felt that the ultimate goal is instant computing, where I define instant as no delay perceivable by the user from request to result.  Unfortunately, few suppliers have set such a goal, natural as it may seem to the outside observer.  They are usually just happy with a bit faster than last year, or a bit faster than the competitor.  So you will run into a load of configuration limits for system parameters that haven't kept up with the explosion in hardware possibilities combined with ever-improving price/performance.

As soon as you overcome one bottleneck, it's on to the next one.  Part of this quest has been to get as much of the data into memory as possible to overcome the slowness of traditional spinning-platter disks.  With the arrival of SSDs, I thought we could be close to this goal, and for something like email searches in Outlook it is close.  If you have a few thousand emails, searches take an age because they run against the cache your Windows PC stores locally.  With an SSD in your laptop or PC, that drops from minutes, and sometimes hours, to seconds.  The greatest leap ever, but so little appreciated that even Dell stopped (for a while) offering SSDs as an option even on their high-end PCs.

The greatest gain for servers is obviously where there is a high frequency of ever-changing data, like database logs.  Unfortunately, that is also the one area where the recommendation is not to use SSDs, due to their limit on total writes.  Work is going on to automatically exclude areas that near this limit, though not fast enough for some, who hit it with total failure of whole disk shelves as a result.  This write limit should also be a consideration for SAN manufacturers that automatically tier data onto different types of disk depending on frequency of access.  Maybe one should just take the penalty and routinely swap out the disks roughly every 18 months, an easy task with proper RAID.  And if you went with the cheaper server-class or medium-sized storage SSDs instead of the super-SAN (read: super-expensive) ones, it would still be a cost-effective approach.
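As a back-of-envelope illustration of why a roughly 18-month rotation isn't unreasonable for a log-heavy workload, here is a minimal lifetime estimate in Python.  Every figure in it (capacity, program/erase cycles, write amplification, daily log volume) is an illustrative assumption, not a vendor specification.

    # Rough SSD lifetime estimate under a steady write load.
    # All figures below are illustrative assumptions, not vendor data.

    capacity_gb = 200            # assumed usable capacity of the drive
    pe_cycles = 5000             # assumed program/erase cycles per cell
    write_amplification = 2.0    # assumed internal write overhead factor
    daily_writes_gb = 1000       # assumed host writes per day (busy database log)

    # Total data the drive can absorb before wear-out, spread over all cells.
    total_endurance_gb = capacity_gb * pe_cycles / write_amplification
    lifetime_days = total_endurance_gb / daily_writes_gb

    print("Estimated lifetime: %.0f days (~%.1f years)"
          % (lifetime_days, lifetime_days / 365.0))

With these made-up numbers the drive lasts about 500 days, which is in the same ballpark as an 18-month replacement cycle; with a lighter write load the lifetime stretches accordingly.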

Aside from that log-versus-total-writes anomaly, databases have much to gain from SSDs, especially those so large that they can't all be sucked into RAM, or where there is a high frequency of updates and where, as a safety precaution, one prefers synchronous writes over asynchronous ones.
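To make the synchronous-versus-asynchronous point concrete, here is a minimal sketch in Python of the trade-off: forcing each log record to stable storage with fsync (the synchronous case) versus leaving flushing to the operating system.  The file names and record count are just placeholders.  On a spinning disk the synchronous variant is dominated by seek and rotational latency, which is exactly where an SSD pays off.

    import os
    import time

    def append_log(path, records, synchronous=True):
        # Append records to a log file; when synchronous, force each record
        # to stable storage before writing the next one.
        start = time.time()
        with open(path, "ab") as f:
            for rec in records:
                f.write(rec)
                if synchronous:
                    f.flush()
                    os.fsync(f.fileno())
        return time.time() - start

    records = [("log entry %d\n" % i).encode() for i in range(1000)]
    t_sync = append_log("sync_test.log", records, synchronous=True)
    t_async = append_log("async_test.log", records, synchronous=False)
    print("synchronous: %.3fs  asynchronous: %.3fs" % (t_sync, t_async))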

Server-internal SSDs are actually an alternative for servers that were previously optimised by relying on the caching RAM of an external storage unit, which can save you considerably on your next system hardware upgrade.
