Today has been a very hot day. The interior walls of the apartment were measuring 30 °C, which only hints at how hot it really was for the exterior-facing walls, behind which I sit with Eileen-II. Speaking of Eileen-II, I did undervolt her to −37.1 mV instead of leaving her at −35.2 mV like last time. Yes, it is borderline greedy, but any bit that can help with the thermals is a good thing.
I spent most of the day reading, having reached page 364/3790 of Harrison's Principles of Internal Medicine (20th Edition), and page 506/1321 of Handbook of Data Structures and Applications. I'm nearing the end of the section that deals with multi-dimensional and spatial structures, and I must say that the contents do fill in some gaps in my knowledge, mostly because these data structures are reaching the point where it is more efficacious to use a pre-implemented one in the form of a library/framework than to roll one's own.
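To make that library-first point concrete, here is a minimal sketch, assuming SciPy is available: instead of hand-rolling a k-d tree, scipy.spatial.cKDTree does the spatial indexing, and the point data here is made up purely for illustration.

    # Nearest-neighbour search via SciPy's pre-built k-d tree, rather than a
    # hand-rolled spatial structure. The points are made up for illustration.
    import numpy as np
    from scipy.spatial import cKDTree

    rng = np.random.default_rng(42)
    points = rng.random((10_000, 3))          # 10k random points in the unit cube
    tree = cKDTree(points)                    # build the spatial index once

    query = np.array([0.5, 0.5, 0.5])
    distance, index = tree.query(query, k=1)  # nearest neighbour of the query
    print(f"nearest point {points[index]} at distance {distance:.4f}")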
In many ways, this is a demonstration of the principles behind what I usually call the ``professional versus the knowledge base of the fourteen-year-old''. The old-school data structures that I have known since I was fourteen were the ones that could mostly be implemented by hand from scratch, mostly because of their lower level of [Kolmogorov] complexity. They are generally good enough for 80% of the cases out there, but they suffer from the flaw that they are in-memory only and are often not concurrency-safe.
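To show the tier of structure I mean, here is a sketch of a hand-rolled singly linked stack: a fourteen-year-old can write it from scratch, but it lives purely in memory, and its push/pop updates are not atomic, so it is unsafe to share across threads without an external lock.

    # A hand-rolled singly linked stack: easy to write from scratch, but
    # in-memory only and not safe for concurrent use without a lock.
    class Node:
        def __init__(self, value, next_node=None):
            self.value = value
            self.next = next_node

    class Stack:
        def __init__(self):
            self.head = None

        def push(self, value):
            self.head = Node(value, self.head)  # not atomic: racy under threads

        def pop(self):
            if self.head is None:
                raise IndexError("pop from empty stack")
            value = self.head.value
            self.head = self.head.next          # also racy under threads
            return value

    s = Stack()
    s.push(1); s.push(2)
    print(s.pop(), s.pop())  # 2 1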
Putting it a little more simply, using such data structures would strictly relegate us back to the early-2000s level of software technology. Since the mid-2000s, multi-core processors that share memory, large secondary storage (local disks), and even larger tertiary storage (think cloud storage/NAS) have been in mainstream use. It is not that the old technologies are no longer applicable, but that these new [hardware] capabilities mean that more complicated data structures which leverage massive concurrency and/or parallelism can have their constant terms improved to the point where they can potentially beat the traditional data structures.
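As a toy sketch of the contrast (standard library only, workload made up), the hand-rolled stack above gives way to something like queue.Queue, which is concurrency-safe out of the box and lets producer and consumer threads share work without the caller managing any locks:

    # queue.Queue is a concurrency-safe FIFO out of the box: producer and
    # consumer threads can share it without any manual locking.
    import queue
    import threading

    work = queue.Queue()

    def producer():
        for i in range(5):
            work.put(i)        # thread-safe enqueue

    def consumer():
        while True:
            item = work.get()  # blocks until an item is available
            if item is None:   # sentinel: time to stop
                break
            print(f"consumed {item}")
            work.task_done()

    t_prod = threading.Thread(target=producer)
    t_cons = threading.Thread(target=consumer)
    t_cons.start(); t_prod.start()
    t_prod.join()
    work.join()                # wait for every produced item to be processed
    work.put(None)             # shut the consumer down
    t_cons.join()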
I have learnt from young that part of solving a problem (after suitably defining what the problem that needs to be solved is in the first place) is to devise a good representation for the problem space and/or solution space. Data structures affect this aspect of problem solving directly, and the larger the problem, the harder we need to think about a better representation for it. Notice that I did not say ``best'', mostly because the performance and scale of the computational machines we have now are generally good enough that we often do not need ``the best''; having ``the best'' requires super-specialising the data structure and associated algorithms for that specific instance of the problem that needs to be solved. Don't get me wrong, having ``the best'' solution for an expensive enough problem with a great enough payback is sometimes the only way to proceed, but for the purposes of solving more problems overall at a faster rate, the data structures need to be sufficiently generic.
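A tiny worked example of the representation mattering, with made-up data: checking a stream of items for duplicates. Holding the ``seen'' items in a list makes each membership test a linear scan (quadratic overall); holding them in a hash set makes each test expected constant time. Same problem, different representation, very different behaviour at scale.

    # The same duplicate-detection problem under two representations.
    def has_duplicate_list(items):
        seen = []              # list: membership test scans, O(n) per item
        for x in items:
            if x in seen:
                return True
            seen.append(x)
        return False

    def has_duplicate_set(items):
        seen = set()           # hash set: expected O(1) per membership test
        for x in items:
            if x in seen:
                return True
            seen.add(x)
        return False

    print(has_duplicate_set([3, 1, 4, 1, 5]))  # True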
That's why the old ``abstract'' data structures like arrays, linked lists, stacks, queues, binary search trees, and [in-memory] adjacency lists [in arrays] were defined the way they were, and designated as fundamental. The problem space now is much more diversified, and much of the research in data structures seems to be in the specific-solution phase as opposed to the generalising one.
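For completeness, here is what I mean by an adjacency list held [in arrays]: a sketch of the CSR-style layout over a made-up four-vertex graph, where offsets[v] to offsets[v+1] delimits the neighbours of vertex v inside the flat targets array.

    # An adjacency list packed into flat arrays (CSR-style layout).
    edges = [(0, 1), (0, 2), (1, 2), (2, 3)]  # made-up directed graph
    n = 4                                     # number of vertices

    degree = [0] * n
    for u, _ in edges:
        degree[u] += 1

    offsets = [0] * (n + 1)                   # prefix sums of the degrees
    for v in range(n):
        offsets[v + 1] = offsets[v] + degree[v]

    targets = [0] * len(edges)
    cursor = offsets[:-1]                     # next free slot per vertex (a copy)
    for u, w in edges:
        targets[cursor[u]] = w
        cursor[u] += 1

    for v in range(n):
        print(v, "->", targets[offsets[v]:offsets[v + 1]])  # 0 -> [1, 2], etc.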
In many ways then, my urge to read Handbook of Data Structures and Applications is completely warranted, and is vindicated as I make my way slowly through the text. It's technically an extended survey monograph, so implementation details will probably require me to dig into the referenced papers, but most of the time, knowing that something can be done is more important than knowing all the specific details, especially since these days one can look up the specific details relatively easily through the many search engines out there.
Yes, not implementing complex next-generation algorithms from scratch does lose one some street cred, but at my age and with my time availability, it's better to be known as one who solves many large problems successfully than as the [basement] hacker who only knows how to implement that one complex data structure from scratch. To paraphrase: implementing a complex data structure from scratch is fun, but it does not pay the bills in this day and age.
What I think I'll do with the remaining time today is to finish up the chapter I am currently on in Harrison's Principles of Internal Medicine (20th Edition), before calling it a day. Doing website development/maintenance/changes at stupid o'clock without taking a nap really does take it out of me.
Oh, I think I forgot to mention that I had man-handled Blogger to ensure that the disclaimer blurb also appears in the mobile version of this page. Getting that to work required many levels of stupid to be done. As to why those hoops needed to be jumped through, I don't know.
In case it wasn't obvious enough, I didn't play any video games today, but I did watch a few more episodes of The Daria Restoration Project. I think I might be able to finish that soon-ish, after which I will need to decide what I would like to watch.
After all, being on sabbatical is more than just being occupied with intellectual pursuits---there is also a certain sense of self-entertainment and re-discovery as well.
Till the next update then.