Friday, March 12, 2021

Okay okay, I know it's stupid o'clock.
Remember how I said a few hours ago that the canonical Huffman codes were still pending?
Well, I finished writing the generalised canonical Huffman code generator. I actually went one step further and made it bi-directional---I could transform from the generalised canonical Huffman code into the code book that was generated by the normal Huffman encoding algorithm.
I went even further than that though---I added a rudimentary [lazy] encoder that emits output symbols based on an iterable input, and also a rudimentary [lazy] decoder that emits source symbols based on an iterable of coded symbols.
In short, I have a working [abstract] Huffman encoder/decoder. I just need some file I/O related support routines, and some means of generating the first order statistics that the Huffman encoder needs to generate the relevant encoding.
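Since I'm already nerding out: here's a minimal sketch in Python3 of the canonical code assignment idea---my actual generator is generalised well beyond this, so treat the names and shapes here as illustrative only:

    def canonical_codes(lengths):
        # Canonical assignment: sort symbols by (code length, symbol),
        # count upwards, and left-shift the counter whenever the code
        # length increases.
        code, prev_len, book = 0, 0, {}
        for sym, length in sorted(lengths.items(), key=lambda kv: (kv[1], kv[0])):
            code <<= length - prev_len
            book[sym] = format(code, '0{}b'.format(length))
            code += 1
            prev_len = length
        return book

    # canonical_codes({'a': 1, 'b': 2, 'c': 2}) == {'a': '0', 'b': '10', 'c': '11'}

The nice property is that the code book is fully determined by the code lengths alone, which is what makes the bi-directional transformation possible in the first place.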
And it's only nearly 0200hrs. Urgh.
I should go take a shower and turn in for the night.
Till the next update.
Thursday, March 11, 2021
Python Takes the Tedium Out of C
Thursday came and is just about gone.
I continued reading a little bit more of On Playing the Flute: The Classic of Baroque Music Instruction (2nd Edition), and spent a little time watching various YouTube videos, but I had a concept of something to work on at the back of my mind.
A long time ago (end November 2014), I started on a small side project that was tentatively called filemanip. The side project was supposed to be explorations on various compression techniques, using a ``universal'' wrapper that would provide backward compatibility to any of the previous compression algorithms. Part of the reason for starting on that project was to explore the use of online machine learning techniques to provide better compression.
However, the last commit to that code base was some time in September 2016.
Part of the reason was that I was writing it in C99, and even though I love the C programming language, it was rather... cumbersome for something that was more research-y.
That was why even at the end of the last commit, all I had was run-length encoding with some bitstream I/O using Elias gamma coding for certain unbounded integers. And even then, the RLE was only for the encoding, and not the decoding.
In short, it was a bloody mess.
So I am starting from a different tack. This time, I'm just going to rely on good old Python3 as the main language, writing the algorithms in some abstract way before doing bit-stream I/O using the struct module. And to start things rolling, I just completed the code for generalised Huffman coding, or at least, the tree-building part. It runs fast enough, even when we have ridiculous input (2^65536 counter sizes, anyone?). It's currently missing the generation of the canonical Huffman code, as well as other support modules to feed information in and out of the encoder/decoder.
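The tree-building part is the classic two-least-frequent merge. A minimal sketch of the idea in Python3 (not my actual code, which generalises things further):

    import heapq
    from itertools import count

    def build_tree(freqs):
        # Repeatedly merge the two least-frequent subtrees; the running
        # counter breaks frequency ties so that the nested subtree
        # tuples never have to be compared against each other.
        tie = count()
        heap = [(f, next(tie), sym) for sym, f in freqs.items()]
        heapq.heapify(heap)
        while len(heap) > 1:
            f1, _, a = heapq.heappop(heap)
            f2, _, b = heapq.heappop(heap)
            heapq.heappush(heap, (f1 + f2, next(tie), (a, b)))
        return heap[0][2]  # nested (left, right) pairs; leaves are symbols

Python3's arbitrary-precision integers are also exactly why the 2^65536-sized counters are a non-issue---the frequencies are just ints, however large.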
Part of the point of working with struct is to get my hands dirty with using Python3 as a tool for lower-level programming. Most of the Python3 programming that I have done is very high-level in comparison---think writing RESTful services and/or orchestrating numerical computations. Most of the I/O that I have done is also very text-centric---the concept of low-level byte or even bit I/O is something that I am not so familiar with in Python3.
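As a taste of what struct gives (just a sketch, and byte-level only---bit-level packing still has to be layered on top):

    import struct

    # Pack a 16-bit big-endian length header in front of a raw payload,
    # then read it back---about as low-level as struct-based I/O gets.
    payload = b'\xde\xad\xbe\xef'
    packed = struct.pack('>H', len(payload)) + payload
    (n,) = struct.unpack_from('>H', packed, 0)
    assert packed[2:2 + n] == payload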
But re-implementing bit I/O when I already have that set up in the C version seems like re-inventing the wheel. Perhaps what might happen is that I do some hybrid thing, where the C parts provide the lowest-level operations, but they call Python3 modules via an embedded Python3 interpreter to run the algorithms with the associated niceties.
Hmmm. That actually sounds fun. But maybe I will look into that if performance becomes more of an issue.
Anyway, that's all I have for today. Tomorrow's going to be a Friday---perhaps I should go cycling; I have not done so in a while thanks to really bad skin at the ankles where my sandal strap rubs on.
If it rains though... then it's back to being at home all over again.
Ah well, till the next update then.
Wednesday, March 10, 2021
6 Pints of Guinness Down the Hatch
Today began strangely.
A pigeon flew at high speed through the apartment from the west-side windows and exited through the east-side windows just as I was sitting down to chug a mug of instant coffee. It was a quick swoop-through of less than a second, pausing briefly to grab a low-hanging horizontal drying-rack bar to re-orient itself before exiting.
It was some class A flying that was more exhilarating than terrifying. At least the pigeon was observant enough to not ram its head into the glass.
I did more reading for a while before heading out to ``Georges @ TS'', a franchise outlet of the Georges brand, for some beer, food, and of course, reading. I could have read OpenStax College: Organisational Behaviour, but I chose to read the dead-tree version of On Playing the Flute: The Classic of Baroque Music Instruction (2nd Edition) by Johann Joachim Quantz instead.
I like that particular bar because it was actually very close to the headquarters of the company that I was working for, and it was the place that I would often head to after work to just unwind with a Guinness (or three), some food, and lots of reading on whatever I had lying around. ``Georges @ TS'' sits where the old ``Meats N Malts'' bar used to be. The location may be the same, but the vibes are definitely different.
Eventually the owner [of the outlet] and the staff figured me out as a regular and I had been going there at least once a week ever since. That is, until I resigned from my position at the company to go on my sabbatical.
Missing the Guinness and the ambience a little, I decided to head out there in the afternoon/evening for old time's sake.
The owner wasn't there, but he did have an assistant manager running the place. I think she learnt about me as a regular before I finally tapped out of the company, and among the first few things that she said when she saw me earlier today was ``woah, haven't seen you in a while''.
On a tangentially related side track, I was infamous as a regular mostly because I was the only [idiot] nerd who would actually read at a bar during happy hour. To be fair, I was also drinking like a fish while reading, but it seems that I made my mark partly also because I wasn't one of those folks who would sexually harass the wait staff, as told to me by a wait-staff-turned-friend.
I just raised an eyebrow. So my claim to infamy was being an über-nerd and not being a dick. That's really odd to be remembered for.
Anyway, I spent a nice afternoon with the reading, and based on the old eye-balling, am about halfway through On Playing the Flute: The Classic of Baroque Music Instruction (2nd Edition).
Serendipitously, just as I was about done for the day [with 6 pints of Guinness down with two meals and one snack], a couple of my ex-colleagues happened to pass by the bar, and somehow managed to recognise me. They stopped by to say hi, and asked me if I had a job already, to which I naturally replied ``nope'' in the most cheerful countenance I could have.
I think that's about it for today. Till the next update I suppose.
Tuesday, March 09, 2021
Missing the Pasar Malam
I had a bible study session with my care group today. It was a little detour from the study of Romans, heading back to the Old Testament for some background information for the upcoming Romans 9.
Other than that, it was mostly watching more VODs for ESA 2021 Winter, and some light reading.
I did head out of the apartment for a short walk to get some light snacks (there goes OMAD... but I will catch up on it), and realised that I missed the whole pasar malam concept. It's all that different food that was available---like the kebabs, the Ramly burgers, the Thai iced teas, the Taiwanese sausages, the grilled mushrooms, the ersatz ``bird's nest'' drink.
Then there's all the sundry pop-up shops with some haberdashery stuff, cooking utensils...
I don't know if the pasar malam will ever return. It does feel like the kind of thing that would just quietly die away if there is too much inertia working against its return.
This whole idea that the inertia of having stopped makes it harder for things to return scares me on many levels.
But it is getting late, so I may write more about this some other time instead.
Here's to the pasar malam.
Till the next update.
Monday, March 08, 2021
Giving Thanks
I've just had my dinner, and am just sitting quietly in my room, looking out of the window with OpenStax College: Organisational Behaviour open next to me in SumatraPDF.
I spent the day watching some videos from Linus Tech Tips, a YouTube channel that I have heard of before, but have not really spent any measure of time looking into. Being on a sabbatical afforded me the time to really pick up some of the videos at random. The coverage on enterprise-level hardware was definitely very eye-opening, and I think I have learnt quite a few new things about hardware that I would not have easily known otherwise.
Of course, nothing beats studying some basic electronics engineering, and that is on my list of things to look into at some point. For now, the focus is on ``organisational behaviour'', or ``How to be a Manager 101''. I see it less as learning to be a manager and more as reminding myself what the literal textbook definition of a good manager is, as a type of rebellion against the multitude of dysfunctional management-types that I have had the [mis-]fortune of dealing with.
But the purpose of today's entry isn't about criticising what has happened. Today, I want to write about things to give thanks for.
I'm not a very expressive person---I have been told that I am ``very hardcore'', ``very intense'', ``very serious'', and other epithets that depict me more as some kind of unrelenting mechanoid faking to be a human. I don't know, nor do I care too much about, how true those are, but ah, I'm getting side tracked again. As I was saying, I'm not a very expressive person, and because of that, I am not one who is quick to give thanks for the good things that happen to me.
I am, however, seemingly masochistically comfortable with acknowledging the bad things that happen to me. No, I do not derive pleasure from bad things that happen to me---it's more of a situation where I treat bad things as a matter of when they happen to me, and not if.
Anyway, I digress once more. Giving thanks, right.
Despite the deep muck that I find myself in (mentally), there are many other good things that I should be giving thanks for. Naturally, giving thanks to God is a must, but I don't think that's a big reason why anyone would continue to read this blog of mine. Among the other things to give thanks for (remembering that the final attribution is always back to the Lord) is that I actually have the privilege to be on a sabbatical in the middle of a pandemic, itself happening in the middle of yet another somewhat catastrophic adjustment of the economy.
The truly poor will never have the means to ``take time to rethink things'' the way I am doing now. Remember that my nett income now is zero---I am literally not doing any paying jobs. I am still alive and somewhat angsty through the twin privilege of having jobs that paid me well enough that I could save up for rainy days (like this one), and leveraging upon the already paid-for living abode that my parents have.
That is why, despite not having a job now, I am not in a state of panic in looking for one and am relatively calm in allowing myself to actually take this sabbatical.
I am also thankful that despite all the health issues that I had/have, I still managed to get my undergraduate and Masters degrees from internationally reputable universities. Maybe those degrees will prove to be less useful to me now with the heavy focus on hiring yes-men in Singapore companies, but at least they are a stamp of approval from a [superior] society that I am capable of hard work in a highly technical (yet very general) field that is computer science.
I am also thankful for the musical talents that I was given, together with the teachers and peers who have taught me many things that I needed to know to master music-making on my favoured instrument of the 笛子 and other flutes. I am also thankful for the various opportunities that I was provided to play with amazing people despite being an ungraded, and unlicensed amateur player.
I am thankful too for the companionship of those who had walked with me, either as friends, or as acquaintances. While I know that not all of them will walk with me from when we first met till ``forever'', I am still thankful for the short time that we had together. To those who decided that our life paths were not well aligned enough to keep the friendship alive and thus moved on, I am still thankful that they chose to spend their precious time with me during the times that we were still [close] friends. To those who somehow are still walking with me now, I am thankful for their steadfastness and loyalty.
Despite my frustrations at those who have questionable ethics, I am thankful that their actions were made observable to me, so that I can, from their negative example, learn how not to be the person that I do not wish to be. And for those who are good role models in their specific contexts, I am thankful for them being there so that I can learn from their positive examples.
I am thankful, in general, for all who have taken up a teaching role in my life thus far, be it explicitly or implicitly, and am thankful for the lessons that they have taught me, whether they realised it or not. In many ways, I could not have become who I am if not for all the people whom I have interacted with both directly and indirectly.
I am also thankful for all the lovely animals who had befriended me; in many ways, I think that they could sense my true nature better than anyone else can. Despite meeting them only very briefly, they have always extended a paw to me in welcome, even those whose very owners knew them not to be of the gregarious sort.
I think that's all I would like to write for today. It's time for yet another shower, and then probably more reading. Till the next update.
Sunday, March 07, 2021
Eliana is Anomalously Designed?
Hmm. I think I have an anomalously designed alto flute.
Here's the context. I was checking out entries in the Flute Forum on Facebook when I stumbled upon this picture post of the Muramatsu alto flute as put up by a member. It was a pretty picture of the Muramatsu alto flute, and I was enjoying it very much when I noticed something strange about this particular picture. I will replicate it here for ease of reference:

[picture of the Muramatsu alto flute]

Look at the upper right corner of the picture of the alto flute---this is where the upper part of the flute (i.e. the part of the flute body closest to the head joint) is located.
There are keys attached to levers that actuate the covers that cover the tone holes for the entire left hand there.
On Eliana (my alto flute), I instead have this particular setup:

[picture of Eliana's upper body]

Don't mind the yellowish colour---it is a reflection of the wall colours rather than any kind of tarnish. Look carefully at the upper right corner of this picture as well---notice that Eliana does not have those levers that I saw. In fact, the keys are what I would call ``direct driven'', in that my fingers immediately actuate the covers that cover the various tone holes for the left hand.
I got confused. Was the Muramatsu alto flute special or was Eliana weird?
I went to the Muramatsu alto flute page, and looked up their alto flute schematic. I will replicate it here for ease of reference:

[schematic of the Muramatsu alto flute]

The picture is correct; the Muramatsu alto flute does have levers in the left hand that actuate the tone hole covers.
What about alto flutes by other makers then?
A quick check with Altus' alto flutes showed the same. I replicate the image here for ease of reference:

[picture of an Altus alto flute]

Yep, levers in the left hand.
Another quick check with Kotato's and Fukushima's alto flutes showed the same too. I replicate a rotated version of one of their alto flutes here:

[picture of a Kotato & Fukushima alto flute]

Yes, levers in the left hand.
Dizhao's alto flutes? Replicating the rotated and cropped image here for ease of reference:

[picture of a Dizhao alto flute]

Yes, levers in the left hand also.
I was getting confused. The last major flute maker that I knew of who made alto flutes was Eva Kingma. This is where I was a little amused. Looking at the details of the left hand, we see the levers. I replicate the image here for ease of reference:

[picture of the Kingma alto flute's left-hand mechanism]

But if we scroll down the main alto flutes page to the section on the ``open hole'' version of the alto flute, we find that the left-hand keys are also ``direct driven'' on the key covers. I replicate that image here for ease of reference:

[picture of the Kingma open-hole alto flute]

This sort of makes sense, because the idea of the ``open hole'' alto flute is to allow better venting, like the open-hole versions of the regular concert flute.
There was one more thing I had to check, and it was The Flute and Flute-Playing by Theobald Boehm, father of the modern concert flute design. In it, he references the ``bass flute in G'', which is what we call in modern times the ``alto flute''. On p127, he shows his mechanism schematic for the left hand (or upper parts of the flute).

[Boehm's mechanism schematic for the upper part of the flute]

From this, it is clear that everyone who was making alto flutes was basically following this idea of using levers.
I mean, even the Wikipedia entry's picture of the Yamaha alto flute shows the same left-hand levers. I replicate the image here for reference:

[picture of the Yamaha alto flute]

It is therefore rather clear that Eliana's design is indeed anomalous.
Eliana's design seems to follow that of an upsized concert flute more closely than one based on Boehm's design, possibly as a way of avoiding the need to fabricate longer and more complex parts.
That said, she still plays like a charm, and I was glad that I had enough knowledge to figure out what was going on. I suppose the lever mechanism helps reduce the arm stretch that might be needed for the left hand, which can help in the support of the rather long instrument, but for me, there really isn't that much of a difference. It is possible that I'm just used to the correct support from playing the 倍大 D 笛子, which is roughly of the same tuning as that of the alto flute.
In either case, that is all for today, I suppose; I'm kinda sorry that I nerded out again. Till the next update.
Saturday, March 06, 2021
A ``Proper'' Rant?
Yeah yeah, I know I had already written something for today, but like many of my recent posts, it tended to be rather technical in nature.
For me, for this blog, ``technical'' can mean anything from this set of categories:
- Something related to computer hardware;
- Something related to computer software that I use (not program);
- Something related to computer programming that I did;
- Something related to music things like composition or research;
- Something related to 笛子, flute, or one of the many music instruments that I have experience playing.
Well, here's the first ``proper'' rant in a while.
The more I look at Singapore, the more I find that at some fundamental level, we seem to have lost what it means to be Singaporean. This is not a rant about immigration/emigration, but a rant about how the values that we inherited from our progenitors are starting to be the main reason why we are facing all kinds of strange social issues today.
In the Old Economy, with a heavy focus on manufacturing, hard work meant that more widgets could be made, which had a direct impact upon productivity. Because of that, a lot of policies were constructed then to support this mode of production; the work ethic then was also revolving around this.
Everyone was working as hard as they could in whatever they were involved in. Education levels were not as high as they are now, and thus hard work was seen as the prized trait of the workers available then.
But in the New Economy, where manufacturing is much sidelined in favour of tertiary industries like ``the knowledge economy'' or even the ``service industry'', hard work alone is insufficient. This is because the nature of work has evolved---gone are the days when quality could be quantified, controlled, and replicated, leaving productivity to be just the careful execution of the controlled process, with hard work generating more and more of the widgets. The New Economy demands actual intellectual capability, a certain nous that provides the type of competitive advantage that separates an okay company from a great one.
My generation is stuck in between the Old Economy and the New Economy, because we are in the age group that does not have enough capital to try new ideas, and so we are stuck with those with the capital who are still running things as though it were still the Old Economy. This means that we are in a situation where many of the current bosses still believe, erroneously, that putting in long hours is the right way to achieve success for the company.
They suffer from survivorship bias---``back in my day, I worked eighteen-hour days, and my bosses had to beg me to not work such long hours''. How working eighteen-hour days helps in the generating and testing of good ideas is something that I, after having undergone post-graduate training, cannot understand. And even if it were possible to work eighteen-hour days to generate/test such good ideas, working eighteen-hour days means that there is no time to go out and do something social/recreational, both of which are beneficial to one's mental health and, in some ways, the national reproductive rate.
``But if you have the passion, then working eighteen-hour days will not feel like working!''---that's the battlecry of the capital holders who thrived in the Old Economy.
I call bollocks on that. There is a reason why the forty-ish hour work-week was instituted as law all around the world---it is a right that was fought for with blood and sweat by the workers of the late nineteenth century, just after massive industrialisation, against the capitalists of that era who wanted to exploit the crap out of all the workers (both adults and children) from the under-classes.
I am not saying that no one should work eighteen-hour days. I'm saying that working eighteen-hour days should be stated up-front and presented as a choice to the potential employee, with suitable amounts of compensation/remuneration, and not be unethically sprung upon them post facto.
But who am I kidding? It has always been the nature of capitalists to exploit their labour---that is literally how they turn profits from their businesses. Of course they don't say it the way I do; they instead couch it in more pleasant-sounding terms like ``looking out for our bottom line'', or ``employee motivation while keeping expenses controlled'' and the like.
That was why slavery was a thing. It would still be a thing, except for the bunch of wars that were fought over it.
Oh wait. It is still a thing---it is only hidden behind Hobson's choice.
I think I'm getting a little too bitter to write coherently, so I'm just going to stop here now.
Till the next update.
Eileen-II with Hyperthreading and Lowered Temperatures
Okay, I left it as a question. The answer to that is ``no''.
I think this, though, is the final solution to Eileen-II's temperature issues.
First, let me put up the final numbers, then I will do the explanation:

[screenshot of the final numbers]

Okay, the last time I was talking about this, I mentioned that for heat-related issues, I had turned off hyperthreading, undervolted by 90.8 mV, and adjusted the Turbo Boost ratios to [40×,39×,35×,33×,31×,30×] for 1 to 6 active cores respectively. This was all done using Prime95 as the load-tester, and ThrottleStop as the adjustment tool.
All that work meant that the peak temperature of Eileen-II's CPU never exceeded 85°C.
That is all well and good, but as I continued to use Eileen-II, I realised that I was losing some performance from the lack of hyperthreading. Specifically, the benchmarking from WinRAR under multi-threaded computation was giving me a rough throughput of 7 MB/s. It is definitely faster than the 4 MB/s processing speed of Edythe-III, but is actually quite bad, especially since I seem to be operating on RAR files pretty often (in the form of archiving e-books, and compiling comic book archives from web comics).
I remembered that with hyperthreading on, I was getting more than 10 MB/s of processing, or something like 14 MB/s, if memory serves me well---this is anything from 42% to 100% improvement over no-hyperthreading (I'm not much of a speed demon and am not really going for something super rigorous, and so a rough ballpark is good enough for me).
With the system relatively stable, and me getting rather bored(!) at 0000hrs in the morning, it is back to more recalibration to squeeze more output.
I turned on hyperthreading from the BIOS, and proceeded with the same methodology as before. Since I was about to load my processor with about 100% more load, it seemed prudent to undervolt as much as I could to reduce the initial pre-stress thermal load as much as possible. I started at −90.8 mV as it was, and gradually lowered it by 1 mV for both CPU Core and CPU Cache, waiting each time for idle CPU usage to fall low enough for the voltage to drop so that I could test for stability---remember that when undervolting the CPU, we are finding the lowest effective voltage that does not stall the CPU.
I managed to go as low as −98.8 mV; however, the results were unstable, with the screen flickering whenever the voltage dipped low enough to ``stun'' the CPU into cranking its voltage back up. Getting low-enough idle CPU use with hyper-threading on was very finicky and hard, and it was at −99.8 mV that I hit my first blue screen of death. It was funny, because the main display was showing the blue screen, while the side display [that I use for reading PDFs---see this earlier entry for details of that display] was still showing the desktop background.
When I finally rebooted Eileen-II, I noped out and just cranked the offset voltage to −92.8 mV, a number that I was confident would not allow the lowest voltage of the CPU to drop below 0.5 V---I had noticed that it was when the voltage dropped below 0.5 V that the screen flickering would start and things would Get Weird.
Armed with that new offset voltage, I started up the stress tester and ran the same protocol. With the original turbo ratio limits, I was (as expected) getting very high spike temperatures in the 90+°C range. And so, I used the same protocol as before to fine-tune the ratios for the n hottest physical cores, for n from 6 down to 1.
Due to hyperthreading being turned on, I needed to alter the protocol a little, and adjust the affinities by the pairs of logical CPUs that would run on a physical Core. Thankfully, the formula relating them is straightforward enough. Each time after tuning, I would just take away the two logical CPUs for the coolest physical Core in the affinity, and rinse/repeat.
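Assuming the usual adjacent-sibling numbering, where logical CPUs 2k and 2k+1 sit on physical Core k, the affinity computation is a sketch like this:

    def ht_affinity_mask(cores):
        # Each physical core k contributes the two adjacent bits of its
        # logical CPUs (2k and 2k+1) to the affinity mask.
        mask = 0
        for k in cores:
            mask |= 0b11 << (2 * k)
        return mask

    # ht_affinity_mask(range(3)) == 0b111111, i.e. cores 0-2 -> logical CPUs 0-5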
The final turbo ratio limits were [37×,34×,32×,31×,29×,28×]. On average, I would be taking away anything between 1× to 3× from the original no-hyperthreading tuned ratios.
It sounds like I have reduced my CPU's capabilities through all these lowered ratios, but really I haven't, since hyperthreading means that I am getting double the number of processing cycles per active physical Core. Take 6 active cores for example. Without hyper-threading, the total work done as a ratio of base clock speed is 6 by 30×, which is 180×. With hyper-threading, the total work done as a ratio of base clock speed is now 12 by 28×, which is 336×---about 87% more available raw clock cycles in comparison to being without hyper-threading, with similar thermal loads (≤85°C). Of course, hyper-threading doesn't quite work that way, since there is actual overhead from synchronisation between hyper-threaded CPUs that share the same physical Core, but the point remains that I am increasing the total available capacity for the given generated thermal load despite having lowered turbo ratios.
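Spelling that back-of-envelope arithmetic out:

    no_ht = 6 * 30     # 6 physical cores at 30x base clock -> 180x
    ht = 12 * 28       # 12 logical CPUs at 28x base clock -> 336x
    print(ht / no_ht)  # ~1.87, i.e. about 87% more raw cycles on tap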
Oh, and all of this led to WinRAR benchmarking at about 13 MB/s compared to 7 MB/s, an overall 86% increase in throughput from no-hyperthreading to with-hyperthreading. So, it is more performance overall in the multi-threaded case.
Definitely a win in my book. By the time I was done, it was already 0145hrs...
------
In other news, I have finally updated my LilyPond installation from v2.20.0 to v2.22.0. The reason for the delay was waiting for the Cygwin version to be updated first before updating the tooling for the Windows version that I use with Frescobaldi for music writing; my auto-building scripts are designed to run under Cygwin or Linux.
Well, v2.22.0 broke all my sources for v2.20.0/v2.18.0. Reading the changelog revealed that a couple of features that I used, namely \compressFullBarRests and \partcombine, had been renamed to \compressEmptyMeasures and \partCombine respectively, with no real backward compatibility despite the existence of the \version statement.
It was a straightforward enough fix, and I completed it using my favourite combination of

    find -type f -iname '*.ly' -print0 | \
        xargs -0 sed -e 's/\\partcombine/\\partCombine/g' -i

No biggie, but it was something that needed to be done.
Alright, that's about all that I have to write for now. Till the next update.
Friday, March 05, 2021
Essay for a Forum Post
Something I wrote in a public forum on concert flutes regarding materials used for the flute and the scientific results [that they don't matter] conflicting with anecdotal evidence [that perhaps they do].
My two cents worth is that any real-world musical instrument design is an engineering problem with artistic elements. As an engineering problem, there are compromises that need to be made to ensure that the instrument is acoustically efficient enough for human playing, has the appropriate timbre as required for the instrument, is ergonomically constructed for human playing, is relatively easy to put together, and is of an affordable price point.
There are also two competing aspects that we need to take into account---the makers, and the consumers. The makers want to make and sell flutes (maybe with a profit), while the consumers want to buy and use the flutes (hopefully getting the ``best'' for the cheapest). These two perspectives are competitive in nature, so at the risk of offending people, I will say that if the science of musical instrument making is kept sufficiently muddy and vague, the makers will have the competitive advantage here over the consumers, ergo it is better for makers if there is enough confusion about (in this case) whether the material of the walls of the concert flute matters or not. The reason for this claim is that makers have more samples from the engineering design space (they make many, many flutes) compared to the consumers (they buy only a few flutes), and even if the science were actually that muddy and vague thanks to the interplay of many complex variables, the makers' empirical understanding is still light years ahead of the consumers', since ``making mistakes in understanding and then correcting for them'' is just a part of the R&D process of product development, compared to the consumer who only wants to buy that ``one flute for a life-time''.
Now, on to my perspective on the issues of the science.
Part of science, apart from all the testing methodology, is also about coming to a consensus on the terms that are used to describe various phenomena---it is the agreement on the hypothesis space that assists in defining the scientific problem, which leads to the various hypotheses that can later be subjected to various methodologies for testing. Already I see terms like ``resonance'', ``bright'', and ``dark'' being bandied about with only the vaguest of consensus on what they actually mean; we also have not come to any consensus on for whom the quality of the flute sound should be measured (player versus audience). This is problematic because it makes the hypothesis space fuzzy enough that anyone who wants to deliberately confuse matters can wade in and throw enough shade to prove/disprove any point they so choose, and those who are actively trying to have a proper discussion will find it hard to do so as well.
At some level, I think that part of the reason why some consumers (and scientists) are starting to ask harder questions and wanting quantification of various physical aspects of the modern concert flute is due to the heritage we get from Theobald Boehm. He basically ``open sourced'' his flute design, and it is from this design that many of our modern concert flutes descend. This heritage of sharing what works (with some empirical explanation) was further extended through Albert Cooper's ``open sourced'' Cooper scale. So in our common concert flute heritage, this type of ``open source'' sharing of empirical experiments on improving our favourite instrument has precedent.
And empirical experiments are just a first step towards a scientific explanation.
Then, does the ``scientific quantification'' degrade our music making on our favourite instrument? I don't think so---if anything, a deeper understanding of how something affects (or does not affect) the produced tone (and quality) from the instrument actually empowers us as musicians, because it tells us more about how we can control our instrument to achieve our specific artistic goals. It, again at the risk of offending people, gives some control back to the consumer, who can now have a somewhat more objective way of evaluating the suitability of the various concert flutes available out there against the kind of sound that they are looking for in their next flute.
My final two cents on this awkward question of material and instrument is this personal observation: beyond a certain set of material parameters (like stiffness, hardness, and porosity) to contain the air column (that is really what we are controlling when we are playing the concert flute), the key difference in the concert flute that affects its timbre is its geometry. But geometry alone does not explain the timbre of the flute---the geometry only provides a guideline to what the flute is capable of generating, provided that the acoustic system of the air column in the flute and that of the player are impedance-matched.
In layman's terms, geometry is better controlled through workmanship, and the price of the material (to me) is a proxy for the amount of care being put into the workmanship. In the end, that geometry must match the way that I, the flute player, play the flute in order to be the best possible concert flute for me. This includes player-only tangibles like whether the body vibrates (I don't like that), or if there's some kind of high frequency harmonics in the timbre when I want it (I like that), or even if the flute is heavy enough to not wobble (I like that).
Sorry for the essay---these are the thoughts that went through my head a few years ago when I was looking towards getting my ``final flute''.
------
In other news, I have started reading OpenStax College: Organisational Behaviour. I also went for a long walk about the neighbourhood, finding a park that I had never seen before despite living in the same apartment for more than thirty years. It has been an alright day, I think.
Here's to tomorrow being a good day also. Till the next update.
Thursday, March 04, 2021
Reading Day
I met up with an old colleague last evening for dinner, and we caught up on our lives since we both were done with working at I²R. He's doing alright, but was facing similar issues in matching what he was trained in with what was expected of him at work, with the single big difference that he had started a family (with a child on the way), and was therefore more ``stuck'' with holding out instead of walking away from the nonsense.
Call it confirmation bias, but hearing that coming from him was not exactly a good thing in my mind. Already I have been feeling ``stuck'' in the environment that is Singapore with respect to work, and the session that I had with my old colleague was just feeding me with more information that is leading towards exasperation at the labour market in Singapore.
I am not just a ``cheap-ass warm body'' to be pushed about by people who believe they know more than me through the blind following of so-called ``best practices'' that they themselves have no understanding of. Charlatans... society is run by many charlatans, and it makes me fume.
Did I study so much and think so hard just to be a lackey of conmen?
This is something that I need to think really hard for myself.
Of course, there's the whole other aspect of ``am I really ready to commit to a full life of singlehood with Christ at the centre of it all?''. If I commit to a life of singlehood, then there are many more options that are available since I do not have to hedge any resources that I may possess for the eventual family that I am supposed to raise.
I don't know if I am ready to make such a commitment. In many ways, such a commitment is akin to declaring the equivalent of monk-hood, or the concept of a 出家人 (one who leaves home for the monastic life), in that one becomes capable of loving all and no one at the same time. It is a very tough call to make. I pray that God will grant me the appropriate amount of wisdom to understand the consequences of the choices so that I can know what He intends me to do.
------
The thing about Feudal Alloy is how annoying the control scheme is---an analogue stick for a 2D platformer. I played a little of it yesterday afternoon, but had to put it away after being stuck on the same two screens for half an hour. That meant that today, instead of trying to play Feudal Alloy in a bid to complete it as talked about a while back, I procrastinated and ended up doing more reading.
The big book that I was reading was No Surrender: My Thirty-Year War by Hiroo Onoda, translated by Charles S. Terry. It is the semi-autobiography of Hiroo Onoda, a Japanese army officer holdout who waged a one-man guerilla war against the Filipinos even after World War II had ended. The main thing about the whole autobiography that struck me was his final doubt about just what he had done with those thirty years of his life living in the jungle, twisting every single piece of evidence by which the outside world was trying to tell him that the war was already over and that he should stand down.
It hit a little close to home for me. Yes, I am in no war with no orders from superior officers and what-not, but the same level of doubt about what I had been doing with my life till this point was a sentiment that I could understand. It was not a situation where one was just sitting around and doing nothing---things were indeed done, but at the end of it all, when one finally has the opportunity to take a step back and reflect upon all that has happened, only one question remained:
Why did I do all that; was it really worth it?
I don't think that what I had done in the past has contributed towards the glory of God. I was/am a sinner, and prior to becoming a believer and being saved by Christ Jesus, I was at best an agnostic, which definitely won no favours with God. The intentions of all that I did then had nothing to do with God in mind, and thus cannot have contributed anything towards the glorification of God.
To sum it up, I don't think I got happy from it all. I was much happier making music than working.
Anyway, the other book that I read was A Grave of Fireflies by Nosaka Akiyuki, translated by James R. Abrams. Everyone dies in A Grave of Fireflies, and they all die for the saddest of reasons---human suffering and human greed. At least in No Surrender: My Thirty-Year War, the protagonist ends up alive---his comrades died one after another over the thirty years, but he managed to live on.
I remembered how, a few years ago, back when I was still pretty close with Janet, I talked about wanting to watch the Studio Ghibli version of Grave of the Fireflies over a Chinese New Year holiday that I spent at the I²R office while working on something, without knowing anything about it other than that it was a good piece of animation. Thankfully, I heeded her advice not to, as it was a great way to get all depressed.
I wonder how she is doing now. I could reach out to her---she still exists in cyberspace---but it has been so many years since. Whatever commonality we had before has probably been reduced to the fact that she and I are both humans who once studied in UIUC.
Ah well.
Now that the ``short books'' have been read to offset the craziness that is Deep Learning, it is time to start on another text book, namely OpenStax College: Organisational Behaviour. Hopefully this one will be less excruciating than Deep Learning---OpenStax's open text books tend to be quite readable, though to be fair, most of the ones I have read are on the soft sciences rather than hard-core mathematics.
That's all for today's entry. I'm going to watch some more ESA 2021 Winter VODs before calling it a night. Till the next update.
Wednesday, March 03, 2021
CPU Cycle Reduction
I just wanted to write a quick blurb before I run off to do other things.
Yesterday I mentioned fixing the auto-injected navigation for the 笛子 Materials set of articles to be more mobile friendly, and also updating the pretty-printer to poll for changes.
I found a better way of doing the latter using MutationObserver. This reduces the CPU cycles used, by triggering the recursive walker only when there are changes in the DOM tree. I could make it more adaptive by running the recursive walker only on the nodes that changed, but doing that correctly would require some way of finding the set of sub-tree root nodes among the mutation list, which is annoying---it is just faster to note that changes were made to the DOM tree, and re-run the recursive walker wholesale.
Speaking of re-running the recursive walker, I have fixed it to be idempotent, i.e. f(f(x))=f(x), meaning that the recursive walker, when applied to the output of itself, does not change any output. This is important to avoid the leaking of unseen space characters that I use to keep certain symbols more tightly bound to each other.
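In sketch form (Python here for brevity, though the walker itself is JavaScript), idempotence is a one-liner to test for:

    def is_idempotent(f, x):
        """True when applying f a second time changes nothing further."""
        once = f(x)
        return f(once) == once

    # a whitespace normaliser is idempotent; naive escaping is not
    assert is_idempotent(lambda s: ' '.join(s.split()), 'a  b   c')
    assert not is_idempotent(lambda s: s.replace('&', '&amp;'), 'a & b')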
The auto-injected navigation fix has the hacky styles taken out and shoved into a proper CSS file, and is working beautifully.
While doing testing on the MutationObserver, I learnt that setTimeout() uses a 32-bit signed ``integer'' for its millisecond parameter, and if there is an overflow, defaults to executing immediately. setTimeout() is used to automatically update the fuzzified update-timestamp in web pages so that the smallest viewable unit gets updated accordingly. I saw this instant execution issue when I was doing testing on a page that was last updated about 1.5 years ago---the update process was being triggered every half a second or so (instead of every 0.1 years), much to my confusion. The fix was just to clamp it down to the maximum if it is exceeded.
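The arithmetic of the bug, sketched in Python for brevity (the actual fix lives in the JavaScript updater): the signed 32-bit budget tops out at about 24.8 days, while 0.1 years is about 36.5 days' worth of milliseconds.

    MAX_TIMEOUT_MS = 2**31 - 1  # 2147483647 ms, roughly 24.8 days

    def safe_delay(desired_ms):
        # anything beyond the maximum would fire immediately, so clamp
        # and let the handler re-arm itself when the timer goes off
        return min(desired_ms, MAX_TIMEOUT_MS)

    assert safe_delay(0.1 * 365.25 * 86400 * 1000) == MAX_TIMEOUT_MS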
And with that, it is off to my next adventure.
Tuesday, March 02, 2021
Done with Deep Learning
Well, thank goodness for that. I have finally powered through the remaining 313 pages, of which 65 were the index and bibliography. That was quite a doozy. I would say that I am much more confident with the concept of deep learning now. It also seems that the ``cutting edge'' (circa 2016) was still about meshing graphical models with deep learning architectures.
I wonder how much of it is still true today, some 5 years later.
To be fair, after reading Deep Learning, I am starting to veer towards the conclusion that going down that path for so-called ``data analytics'' and/or ``artificial intelligence'' is nothing but tears.
Despite all the advancements, the fundamental limit is still that of good training data. We're not even talking about labelled data, which is the precursor to supervised learning protocols. We're just talking about good quality training data, something that only the largest of large companies can have easy access to.
Puny SMEs are never going to reach that level, especially if they are hoping that all that ``deep learning'' is going to help them. The more traditional techniques are more likely to be suitable than not, but I digress.
I'm just happy that I have finally finished reading the book. Now, I'm just going to take a small break and read a collection of Chinese poems from the Tang and Song dynasties in a small booklet entitled 《唐宋诗精选》. Funny story: this booklet has actually been with me since 1992-08-28, as a prize for being Best in Chinese waaaaay back in Primary One.
I have never really read it, having been too frightened by all the difficult-looking Chinese characters.
I am just silently thanking whoever the teacher was who picked this book prize for me back in the day, nearly three decades ago. He/she picked well.
The book is so old that the postal code of the printing company (Seng Wah Cultural Publisher Co.) and the distributor (Seng Yew Book Store) were only four digits, and the telephone numbers were only seven digits.
I also want to point out that Seng Yew Book Store is still alive and well, and has not moved from Blk 231, Bain Street #01-15/17, Bras Basah Complex, Singapore 180231.
------
In other news, I fixed a couple of oddities on my personal web site. One of the things that I fixed was the auto-generated navigation bar for my 笛子 Materials set of articles. I tested it out on my cellphone, and was horrified to find that the long text that appeared fine in the web browser on the desktop was quite hard to read/use when displayed on mobile.
And so, I cobbled together some horrible styles to generate a small table for navigation. This allowed better text wrapping for the three navigation elements, and gave a little more control over their overall appearance in a space-limited setting.
I also adjusted the table width for the Books column of my bible study tracking page. Previously, Song of Solomon was mysteriously relegated to a multi-line presentation when the bible books filter was set to ``poetry'' while in mobile mode.
Apart from those two fixes, I also tweaked the prettifying JavaScript to default to polling if it is not being run on my personal domain. The polling takes into account the case where new textual elements are inserted into a page after the elements needing prettifying have been processed---the classic example is the navigation side bar of this blog. Try expanding ``2021''⟶``January'' and check out “Career” is a Misnomer. Without the polling, it would appear with unprettified straight quotes as "Career" is a Misnomer instead.
The prettifying script is originally executed during the onload event via injection, so any new text elements that occur after the script is done with the prettifying will not be updated.
Anyway, that's about all I have for today's entry. Till the next update, I suppose.
Monday, March 01, 2021
It's a March
I got rather upset yesterday evening, and because of that, I ended up having to do some unplanned-for packing/repacking of things. The only thing I want to say about this is that thanks to having a supply of surgical face masks, there was none of that rhinorrhea nonsense that I would always get when dealing with these kinds of packing---I suppose the dust is really the killer. Also, the thought of buying my own apartment and moving out came up, and has abated slightly for now.
Other than that, it has been a quiet day. I ended up playing release 1.16.5 of Minecraft. I've had Minecraft for a while, buying it when it reached release 1.8.7, but for long periods of time, I didn't do much with it. Part of the reason was that at some point, I would realise that I was just digging through random imaginary dirt, and that it felt like I could be doing something... more productive.
I played Minecraft this time round because I just wanted something mildly meditative to take my mind off several things that had been flowing through my head, and at the same time, wanting to listen to the reading of Genesis 12:1 to Exodus 12:51 for part of a bible study session that we are having with the care group. I could read it [again], but wanted to try something a little different instead.
I played Minecraft in survival mode, but with the modification of not losing my gear on death so that it was slightly less frustrating to play. I am on a single-player server anyway, and all that additional hardcore-ness was definitely not fun for the intended meditative-type gameplay I was going for.
I learnt most of my Minecraft knowledge from various Let's Play type videos, among which Joe Hills and Zisteau are the more prominent players, though Zisteau has been playing more of other games than Minecraft in recent years. I knew of both players from back in 2011 or so, when I learnt about the whole Complete the Monument (CTM) maps, particularly the ``Super Hostile Series'' by Vechs. While both players did these challenge maps, their approaches were diametrically opposite of each other, which made for good viewing.
Funnily enough, it is only today that I learnt that I could milk cows by right-clicking them with a bucket. To me, a bucket was useful for two things---lava to melt mobs, and water to unmelt myself if the lava accidentally spilt on me. I suppose this is partly influenced by all that challenge map playthrough watching I have done. The reason for the milk bucket is to make my first cake in Minecraft.
Small victories huh.
In other news, this is the first month that I will be running purely on whatever funds I have squirrelled away for my sabbatical---the last month did not count because I still had half a month's worth of pay from my last company. It is scary in some ways, but really isn't in many others---hermitting it up at home has its advantages in terms of reducing the out-flow of cash. As long as I don't go overboard with the R&R, focusing on justifiable quality over quantity, I am good.
Well, that's about all I want to write for now. It is time for a shower, and then I want to power through some more of Deep Learning. That book has been mocking me for a long time, as has Harrison's Principles of Internal Medicine, but the latter is nearly 4k pages long and is thus understandable.
Till the next update, I guess.
Sunday, February 28, 2021
Answering Two sTuPiD qUeStIoNs
Unfortunately, I didn't die. I was just otherwise pre-occupied.
The last couple of days or so were socially more interesting than what I would normally do. I spent the evening of 元宵 (the Lantern Festival) having dinner at my other sister's parents' place, and then heading off to her place proper for a housewarming. The last time I wrote about being at my other sister's place, I was helping with the unpacking. It is rather confusing to look back and realise that a full month has passed since that day. The place is definitely more ready to be lived in than it was then, since the carpentry work and other finishing touches have been completed as well. It is a cosy place, and I am very glad that my other sister and my other brother-in-law are finally together at their own place---their story has been eleven years in the making.
Yesterday itself was also socially more interesting. I had gone to church for service, as always, but after that the care group met up for dinner at Khansama Tandoori Village. While researching the place for this entry, I realised that there are two such restaurants---the main page I linked to is the one purporting to be at 166 Serangoon Road, Singapore 218050, but the one we went to was at 87 Science Park Dr, Singapore 118260 instead. I think the Serangoon one is the main branch---it makes sense because it is right smack in the middle of Little India, while the one we went to is a side branch. Anyway, the food was delicious, and the company was good.
I was subtly reminded about the time I ate at one of the North Indian restaurants out in Little India with the Muramatsu Flutes technicians when Chara was in charge of taking them out for dinner at the most recent flute festival organised by MusicGear Singapore and The Band World.
It was a happy memory, bittersweet.
------
I spent the past couple of days working on a small Python3 script that answers a stupid question: just how many pages of reading do I have left out of my stash of e-books?
My first answer was based on only counting PDF pages, and it was 192.2k pages at an average size of 15.3k bytes per page.
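That first pass was essentially a directory walk over the PDFs, something like the sketch below (assuming the 2021-era PyPDF2 API with PdfFileReader and getNumPages(); the ebooks directory name is made up):

    from pathlib import Path

    from PyPDF2 import PdfFileReader

    pages = size = 0
    for path in Path('ebooks').rglob('*.pdf'):
        with path.open('rb') as fh:
            pages += PdfFileReader(fh, strict=False).getNumPages()
        size += path.stat().st_size
    print(f"{pages} pages, averaging {size / max(pages, 1):,.0f} bytes per page")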
That didn't feel right, because I didn't process the .mobi files. That made me spend upwards of a day doing research on it and debugging the only mobi Python library (mobi-python) that I could find that didn't generate too much junk. Unfortunately, the code is at alpha quality, and it really showed---the bundled LZ77 decompressor mixed up byte versus character semantics and ran completely brokenly in Python3, among other things. I shimmed and patched as much as I could to get what I needed going, and somehow managed to get it to work.
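For the curious, the PalmDOC flavour of LZ77 that mobi files use is small enough to sketch, and the trick that mobi-python fumbled is to stay in bytes end to end. A minimal decompressor, as I understand the format (not mobi-python's code):

    def palmdoc_decompress(data: bytes) -> bytes:
        """Decompress one PalmDOC LZ77 record, staying strictly in bytes."""
        out = bytearray()
        i = 0
        while i < len(data):
            b = data[i]; i += 1
            if 1 <= b <= 8:                # a literal run of 1..8 bytes
                out += data[i:i + b]; i += b
            elif b <= 0x7F:                # self-representing literal byte
                out.append(b)
            elif b <= 0xBF:                # two-byte back-reference
                pair = ((b << 8) | data[i]) & 0x3FFF; i += 1
                distance, length = pair >> 3, (pair & 0x07) + 3
                for _ in range(length):    # byte-at-a-time to allow overlap
                    out.append(out[-distance])
            else:                          # 0xC0..0xFF encode space + char
                out += b' '
                out.append(b ^ 0x80)
        return bytes(out)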
The end result is this: it's 227.8k pages to read, at an average size of 13.3k bytes per page.
The first version was also a single-process, single-threaded, sequential file processor. It took about 4m 42s to count those pages---this included the new mobi files processing.
So, I reworked it to use multiprocessing instead. Using 5 out of the 6 cores yielded a run-time of 1m 18s to count the same pages.
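The shape of the rework is roughly the following sketch; count_pages here is a runnable stand-in for the real per-format dispatcher, and the directory name is made up:

    from multiprocessing import Pool
    from pathlib import Path

    def count_pages(path):
        # stand-in for the real per-file handler (PDF/mobi/epub dispatch);
        # pretend every 15000 bytes is a page just to keep this runnable
        size = Path(path).stat().st_size
        return max(1, size // 15000), size

    def tally(paths, workers=5):
        # fan the per-file work out over 5 of the 6 cores
        with Pool(workers) as pool:
            results = pool.map(count_pages, paths)
        pages = sum(p for p, _ in results)
        size = sum(s for _, s in results)
        return pages, size / max(pages, 1)

    if __name__ == '__main__':
        files = [str(p) for p in Path('ebooks').rglob('*') if p.is_file()]
        total, avg = tally(files)
        print(f"{total} pages at {avg:,.0f} bytes per page")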
That would have been job finished, had I not asked the next stupid question: just how many pages of reading had I already done?
Now, some qualifiers. Not everything from my read list exists as an e-book, so whatever estimates I have are, at best, a lower bound. The other thing is that I set up archives of the read e-books, each about 1G bytes in size, just for ease of making back-ups.
So to answer my second stupid question, I needed to process the lists of files within the RAR archives. I used rarfile to handle this, and did a test run on some comic book archive files to exercise the tool chain, thus adding more capability to the script. It worked well, and I started to run it on my actual archive files.
The bloody thing crashed. Something about requesting a read of 65k bytes but receiving 0 in return.
I was already sick of debugging and shimming from getting mobi-python to work, and had no appetite to do the same for rarfile. I double checked the RAR archive file with the provided tools and it checked out. There was just something stupidly wrong with the low-level methods used by RarFile.extractall().
Angrily, I just created a direct call to the unrar tool that rarfile was using in the first place to extract the contents into some temporary directory location so that the original refactored directory-walking page-counting script could work.
There were some additional shenanigans needed to transform the temporary file location (in Cygwin-land) into a path that Windows could understand.
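In sketch form, the workaround looked something like this (flags and names illustrative; unrar's x mode keeps the directory structure, and cygpath -w does the Cygwin-to-Windows path translation):

    import subprocess
    import tempfile

    def extract_rar(archive):
        """Extract via the unrar CLI instead of RarFile.extractall()."""
        tmpdir = tempfile.mkdtemp(prefix='rar-')
        subprocess.run(['unrar', 'x', '-y', archive, tmpdir + '/'], check=True)
        # the page counter runs against Windows-native paths, so translate
        result = subprocess.run(['cygpath', '-w', tmpdir], check=True,
                                capture_output=True, text=True)
        return result.stdout.strip()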
But to cut a long story short, the answer is 151.4k pages, at 28.6k bytes per page on average. And so, to answer two stupid questions, I now have a weird Python3 script that can count pages. Hurrah...
I just want to point out that it supports both mobi and epub files, and that the number of pages from these reflowable documents is itself an estimate. I also want to add that for some of the PDFs that could not be processed by the pyPDF2 module, I ended up using a 1-nearest neighbour estimator to impute the unknown number of pages when generating the final statistics. Yay...
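The imputation itself is nothing fancy; a sketch of the idea, where a failed PDF borrows the page count of the parsable file closest to it in size (the numbers are made up):

    def impute_pages(bad_size, known):
        """1-nearest-neighbour on file size; known is [(size, pages), ...]."""
        size, pages = min(known, key=lambda sp: abs(sp[0] - bad_size))
        return pages

    known = [(150000, 10), (3000000, 220), (800000, 64)]
    assert impute_pages(900000, known) == 64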
------
I have slowly made my way through Deep Learning, reaching 489/802 pages as of now. What a slog it has been, and what a slog it is going to continue to be. I think instead of heading immediately into OpenStax College: Organisational Behaviour as originally planned, I might want to detour into some fiction for a bit. But we'll see.
Till the next update, I suppose.
Thursday, February 25, 2021
Procrastination While on Sabbatical?
I talked about a small music-scale exploration sub-project in the previous couple of posts. It's not completely done yet, but I thought I should just pen down some of the crazier things that I found while doing the preparatory coding work.
There are two main ways of doing this: either start from the pitch-class set and then identify the associated intervals as part of the scale (which, incidentally, is similar to assigning degrees to the intervals a la 简谱 or cipher notation, with associated accidentals as needed), or start from the ``standard'' 7 degrees of intervals that form a scale and apply accidentals accordingly to derive the set of pitch-classes, eliminating repeated ones. An example of a repeated set could be something like {1,♭2,♭3,♭4,♭5,♭6,♭7}---this is the same pitch-class set as {1,♯1,♯2,3,♯4,♯5,♯6}, assuming 12-tone equal temperament.
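To make that equivalence concrete, here is a tiny check of the example, assuming major-scale degree offsets in 12-tone equal temperament:

    # degree -> semitones above the tonic (major scale, 12-TET)
    BASE = {1: 0, 2: 2, 3: 4, 4: 5, 5: 7, 6: 9, 7: 11}

    def pc(degree, shift=0):
        """Pitch class of a scale degree plus an accidental shift."""
        return (BASE[degree] + shift) % 12

    flats = {pc(1)} | {pc(d, -1) for d in (2, 3, 4, 5, 6, 7)}
    sharps = {pc(1), pc(3)} | {pc(d, +1) for d in (1, 2, 4, 5, 6)}
    assert flats == sharps == {0, 1, 3, 4, 6, 8, 10}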
I haven't started the exploration part yet, but have been doing preparatory coding. One of the problems I was facing was identifying the intervals (relative to the tonic) by name. I coded up something while at the library at NEX yesterday, but I didn't like the result. I was trying to be clever when there was no need to do so, simply because the sizes of the inputs we are looking at are that small.
And so I wrote a much simpler brute force way with associated preference scoring (prefer intervals with larger numbers of degree-names used, prefer using one accidental over a mix of both accidentals, do not prefer the dreaded ``double flat'' (can appear as 𝄫3, 𝄫6, or 𝄫7), avoid running into the next octave (i.e. prefer 7 over ♭1′)). The results were much better, especially after applying the conversion to 简谱 from the identified intervals.
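As a flavour of that scoring, a toy version (the real script's weights differ; the degree offsets again assume a major scale in 12-tone equal temperament):

    BASE = {1: 0, 2: 2, 3: 4, 4: 5, 5: 7, 6: 9, 7: 11}
    ACC = {0: '', -1: '♭', 1: '♯', -2: '𝄫'}

    def names(semitones):
        """All spellings of an interval above the tonic, best-scored first."""
        cands = []
        for degree, base in BASE.items():
            for shift, symbol in ACC.items():
                if (base + shift) % 12 == semitones % 12:
                    # natural beats single accidentals beats the double flat
                    score = abs(shift) + (1 if shift == -2 else 0)
                    cands.append((score, f"{symbol}{degree}"))
        return [name for _, name in sorted(cands)]

    assert names(10)[0] == '♭7'  # ♭7 preferred over ♯6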
I will probably work more on it over time.
------
I procrastinated against reading Deep Learning by reading a few more chapters of Genesis from the ESV Study Bible. For those who are not familiar, a study bible is like the regular bible (in this case, the English Standard Version of the Protestant bible), but augmented with many footnotes, endnotes, and articles to explain the context and provide references to other parts of the bible for the annotated verse. Think of it as the extended version of English literature notes that one had to take back in the day when trying to study a book for the examinations, except in this case we are looking at 66 books instead of just 1. And yes, I am seeing this as a multi-year undertaking.
It's okay, if something is worth doing, it is worth over-doing, which segues into a reason why I know that I had to take a sabbatical to think about what I am doing with my so-called career.
I am a computer scientist by training, a software engineer/data analytics engineer by practice. There is this word, engineer, that carries with it the concept of ethics; sadly, in my industry, such ethics are treated as bullshit. There are two levels of ethics violations that impinge upon my conscience: the I-won't-be-around-to-run-this-in-five-years-anyway mentality, and the degradation of humans into milkable products for peddling more useless crap.
For the former, I am referring to the tendency for programs (or ``apps'', to use the modern parlance) to be written slap-dashedly to meet business-mandated deadlines, at a level of quality that would make a civil engineer blush. Even for those that are well-written and feature complete, we see ever-increasing nonsensical ``updates'' that exist to milk more cash instead of actually improving things---the most egregious of all are the so-called UI/UX updates that are completely tone-deaf to the actual needs of the users.
For the latter, I am talking about the incessant tracking and infringement of basic human rights through forced participation of such tracking schemes. I say ``forced participation'' because while it is possible to technically operate without any digital tracking devices like smartphones or even email accounts, from a practicality perspective, it would be impossible to do so, especially if one were to want to luxuriate in urban living. The social contract has been altered over the past twenty years to this point---participate in the digital economy, or forego your rights as a citizen of the world.
I don't mind using data analytics on machinery data to improve the operation of the machines. But I mind a lot when we start to treat people's movements and words/actions as machine-like input for a central committee to control---there is something inherently wrong with this approach. It's as if, after the Internet percolated down to the masses, suddenly no one remembers what morality is, or what a good conscience means, and everyone requires The Law just to behave.
It makes me sad, downright depressed actually. When I started writing my first computer program when I was thirteen years old, I thought how much fun it would be to continue writing clever computer programs that do computation for things that were simultaneously fun and can help people. Now, looking back from a position of twenty-three years later, I just feel bad for myself.
Everything that was good, has been corrupted.
------
On slightly less depressing matters, I had this urge to play some Tyrian 2000. And I did. It is a space-shooter that has some light level-up/upgrade elements. It is fun.
And that's all I have for today. Till the next update.
Wednesday, February 24, 2021
Not Every Day is a Good Day
I am not going to sugar coat things.
The past few entries may have demonstrated a certain overwhelming level of positive energy with all the various successes and triumphs that I seemed to have obtained in the things that I was working on.
But not every day is a good day like those. Today, for instance, isn't that good a day.
Granted, I finally finished reading the Rainbow anthology by Robert Wilks, and managed to sneak in Farewell to the Master by Harry Bates (source story for The Day the Earth Stood Still starring Keanu Reeves et al). I also had a nice steak at COLLIN'S, and finally started on some of the music-scale exploration project.
But it was not that good a day. Because intrusive thoughts had crept into me again.
Nothing relating to suicide, at least in the corporeal sense. But a strong sense of committing social suicide kept on surfacing within my subconscious. I was out of the apartment today just to get a little sunlight and movement, and was at Serangoon NEX. Amid the hustle and bustle, I had this startling thought that I was never going to be gregarious, no matter what I did---all forms of gregariousness are literally a sham of sorts, a type of 敷衍 (going through the motions).
The more I hermit up, the more I like hermitting up, because hermitting up has demonstrated almost no negative consequences whatsoever, just as long as there is enough money to fuel the needs of living in the city.
Yes, no man is an island. But it does get seductively close, especially when one realises that there is not really anything to live for in general, or rather, there isn't really anything else that is worth living for.
Quitting making music did come to my mind. Quitting whatever is left of my circle of friends came to my mind also. Both thoughts are intrusive, but the former is going to be staunchly resisted by me---if I quit making music, I am literally killing off one of the few things that I do out of passion and not for money, in which case I probably should plan to off myself as well. The latter though... in many ways it has already been going down that path, what with my general aloofness to begin with, combined with spending the best years of my life studying abroad and then returning like a damn fool, and watching whoever is left of my friends get married left and right, with children.
Eh, ain't nobody got time for me. And that's without my active sabotaging of things.
I really shouldn't actively commit social suicide. Future-me is not going to like it. For the sake of my two best friends in the world (past-me and future-me), I am going to tell the intrusive thoughts to shut the hell up and go away.
I'm out of spoons for today. Till the next update.
Tuesday, February 23, 2021
Final Solution to Eileen-II's Temperature Issues?
Ooo, a somewhat early entry!
I got quite a few things done today, so let me begin.
The first thing that I did was finish The Outer Worlds, completing both the ``good ending'' (The Hope colonists revived) and the ``bad ending'' (Capitalism forever!). That happened at around 0200hrs this morning, and thus counts as a ``today'' thing, even though it was more of yesterday on extended time.
I spent some time in the morning replacing the elastic strap in my Roi backpack. This elastic strap holds the concert flute (Aurelia, in this case) in position. I have had this backpack since 2015, and after carrying it weekly with a nearly full load of flute/piccolo, music, and flute/piccolo stands, it has slowly fallen apart. The top handle's pleather has flaked off and has a corduroy handle cover (made from an old pant leg) sewn in place by Chara, one of the two zipper sliders is no longer aligning the zipper and is kept permanently at one side, and the latest of course was the loss of elasticity of the strap that was used to hold the flute case down. I carefully unpicked it and sewed in a new one of 1 in thickness, and it is now as good as new.
I just can't wait for us to get back into rehearsals at the Chinese orchestra. I have a new double-width collapsible music stand that I can't wait to make use of---I only need to either buy a carrying case for it, or make one. I'll probably make one, but to do so would mean that I need to source for cloth (probably canvas), and set up the sewing machine. There is also the possibility that I end up hand-sewing it; it's not something new. I have a tendency of doing that most of the time anyway, except for that one time that I was patching back together my pen case for my fountain pens.
------
The last time I talked about Eileen-II, I mentioned about turning off hyper-threading to reduce run-time temperatures. There was only one small problem with that entry that I want to correct: the temperature did not drop to around 80°C---it was more like 89°C to 92°C, as compared to 95°C to 100°C when hyper-threading was turned on.
The other thing that I didn't mention was that the idle temperature was about 66°C.
I don't like these numbers at all. It's partly because such high numbers would imply a shortening of the lifespan of the components (in this case, the CPU), but more importantly such high numbers meant that when I was using the keyboard of Eileen-II herself, I would end up actually burning myself slightly should I accidentally touch the interstitial parts of the frame instead of the plastic key caps of the keyboard.
I knew of a solution to this, but I have not had the wherewithal to do it because of how time consuming it is---the pseudo-inverse of overclocking, a technique called undervolting, using a tool like ThrottleStop. CPUs, like all electronic components, require a certain potential difference across them to provide enough electrical energy to do the computations that they need to do. Loosely speaking, for ``greater performance'', the CPU should get as much energy as it can take, which is indirectly controlled by regulating the voltage of the supply going into it.
But the catch is that ``greater performance'' is not the same as ``greater apparent performance'': if the CPU gets too hot (for instance, 100°C), it is forced to downclock itself to avoid burning itself from the excess heat. Following the math from this subsection on CPU power, the dynamic power dissipated is proportional to the square of the supply voltage.
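To put rough numbers on that square law, here is a quick back-of-envelope calculation in Python3. The 1.05 V nominal core voltage is purely an assumed figure for illustration (the real value varies by chip and load), and the offset is the one I eventually settled on (details below):

# Dynamic power is roughly proportional to C * V^2 * f, so at a fixed
# frequency the undervolt alone scales power by (V_new / V_old)^2.
V_NOMINAL = 1.05   # volts; an assumed illustrative figure, not measured
OFFSET = 0.0908    # the -90.8 mV offset that eventually proved stable
ratio = ((V_NOMINAL - OFFSET) / V_NOMINAL) ** 2
print(f"relative dynamic power: {ratio:.3f}")  # ~0.835, a ~16% cut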
Every CPU has its own sweet-spot voltage: the minimal supply that still gives the CPU its maximal performance without generating enough thermal energy to force a temperature-based shutdown. For reasons of sanity, the factory defaults for the make of Eileen-II are set to be ``performance-oriented'', i.e. they do not attempt to find such a ``best'' voltage. This ensures that the machine will always attempt to run at its ``best performance'' regardless of the situation.
So yeah, the first thing I did was to find out what that lowest voltage is. To help me with it, I turned off the Turbo Boost capability from within ThrottleStop, and carefully experimented on the voltage offset for the CPU Core and CPU Cache. This was the most time consuming part, because whenever I went too low, the CPU would freeze up and I would have to do a hard reset. Granted, with Eileen-II's specs, a hard reboot doesn't take too long, but it still takes time to pull off. The final value that I ended up with was −90.8 mV---the −100.0 mV voltage offset that I started at was just too much and made Eileen-II crash. This led to an idle temperature of around 50°C.
I could try to tweak down to the 1.0 mV precision, but at this point, I don't think it is worth it.
The next thing that I tried to tackle was the way Turbo Boost worked. Disabling Turbo Boost was a good way to get low temperatures, but I didn't just want low temperatures---I wanted Eileen-II to actually run more powerfully if there's a need to, but without the whole hot-enough-to-burn-my-finger-tips thing.
Turbo Boost worked by applying a multiplier on the basic clock speed. For Eileen-II, the basic clock speed is 99.768 MHz (as reported by ThrottleStop). At the published ``base frequency'' of 2.60 GHz, we are looking at a multiplier of around 26×. The maximum reported frequency that the CPU can support is 5.1 GHz, which works out to around 50--51×. Thus, any multiplier larger than 26 and less than 51 is considered a potential ``Turbo Boost ratio''.
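For concreteness, here is a tiny Python3 sanity check of the ratio-to-frequency arithmetic, using the base clock as reported by ThrottleStop:

BCLK_MHZ = 99.768  # base clock as reported by ThrottleStop
for ratio in (26, 30, 40, 50):
    print(f"{ratio}x -> {ratio * BCLK_MHZ:.2f} MHz")
# 26x -> 2593.97 MHz (close to the published 2.60 GHz base frequency)
# 30x -> 2993.04 MHz
# 40x -> 3990.72 MHz
# 50x -> 4988.40 MHz (just shy of the 5.1 GHz maximum, hence 50--51x)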
At this point, I think a screenshot of the Fully Integrated Voltage Regulator (FIVR) screen will make things more enlightening. Observe the lower left corner frame under ``Turbo Ratio Limits''. The Max value is the maximum ratio that the CPU is reported to support---going any higher than that counts as overclocking (i.e. out of specifications). The table below it lists down the multipliers to be used when Turbo Boost is triggered, sorted by the number of active Cores.
My understanding of active Cores is that they are the ones assigned a particular process/thread to operate on [at 100% capacity].
Now I just want to point out that I live in Singapore, and naturally, am operating Eileen-II in Singapore. In case you were wondering, Singapore has a climate that is simply terrible for high performance electronics if one does not have/use air-conditioning.
So those default numbers in Turbo Ratio Limits that start at 50× for one active Core to 43× for 6 active Cores? Yeah, they are waaaaaaay too optimistic. For reference, the highest I ever saw the clock speed go was about 4.2 GHz, and even then, it immediately faced thermal throttling.
I went ahead to get my favourite prime number tester to generate the types of high CPU loads that I needed to benchmark the relationship between the multiplier and resultant temperature. I first did a full 6-core ``Small FFTs'' torture test with the undervolt conditions and Turbo Boost turned off. According to the options, the ``Small FFTs'' torture test tests the L1/L2/L3 caches, generating maximum power/heat/CPU stress, which was exactly what I wanted. The idea for this initial test is to spot the empirical distribution of which Cores have a tendency to run hotter than others. From this empirical distribution, I would then pick the hottest n Cores when I am trying to tune the multiplier for n active Cores.
My goal was to ensure that no matter how many Cores are active, the maximum temperature of all the Turbo Boosted Cores does not exceed 85°C. There is no scientific reason for choosing 85°C over other temperatures, but there are two good ones:
- 85°C is roughly the top end of the temperature range where the frame is merely warm to the touch;
- Any lower target than 85°C would lead to a multiplier that is basically useless, in which case I might as well just turn Turbo Boost off.
One last piece of the puzzle was to do the actual assignment of Cores for the job. The way I did this was to pull up ``Task Manager'' with Ctrl-Shift-Esc, get it to show more details (this is the default for me), go to the ``Details'' tab, search for prime95.exe under the process name, right-click on it to pull up the context menu, then select ``Affinity''. And from there, I would choose which Cores (or CPUs in this case) to assign prime95.exe to.
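Incidentally, the same pinning can be scripted instead of clicked through. Here is a minimal sketch using the third-party psutil package, assuming it is installed and the shell has sufficient privileges (the process-name match is illustrative):

import psutil

# Pin every running prime95.exe to logical CPUs 0 and 1 only,
# mirroring Task Manager's ``Affinity'' dialogue.
for proc in psutil.process_iter(["name"]):
    if proc.info["name"] == "prime95.exe":
        proc.cpu_affinity([0, 1])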
Then it was just a case of trial and error on the multipliers with careful execution of the ``Small FFTs'' torture test with the associated number of workers.
The multipliers that you see in the FIVR screenshot above were obtained through this process. Notice that at 6 active Cores, the multiplier is 30× (corresponding to a clock frequency of not more than 2993.04 MHz), and at a single active Core, the multiplier is 40× (clock frequency not more than 3990.72 MHz). These numbers are definitely much lower than the very optimistic defaults.
Notice that I am being very conservative here---I chose to ensure that the hottest Core spike does not exceed 85°C; this does not mean that the hottest Core will operate at 85°C. To illustrate this point, I have taken yet another screenshot, this time with a 6-core ``Small FFTs'' torture test with all the settings set up in the FIVR screenshot. As you can tell, it runs much cooler now, with the lowest spike temperature being 78°C, and the highest spike temperature being 85°C. Notice also how the empirical distribution works for both the instantaneous measurement and the Max measurement.
That was not the end though. I re-ran the ``Small FFTs'' torture tests again from 6-core down to 1-core, but this time without forcing the processor affinity. The temperature profiles are definitely much cooler, since the fewer active Cores often meant that the workload was being shuffled randomly among all the available Cores, leading to less sustained work.
Hopefully I don't have to do anything more drastic to keep the temperatures at this comfortable level.
------
Technical nerdry aside, I also started reading a short anthology of poems/prose/plays by Robert Wilks. Rainbow: A Collection of Stories, Poems and Plays (Volume 2) is funny because it was actually my secondary two English Literature text book from back in the day. It is one of a few books that I kept from my secondary school days, mostly because past-me sort of knew that there would be one day where future-me would be interested in reading the wonderful writings from then. Thank you, past-me!
I have also watched Dick Tracy, a 1990 film about the eponymous comic-book detective. It's funny---I remembered that I loved the concept of Dick Tracy (he's a detective, and when I was young, I loved the idea of being a detective, and also a spy), but I never seemed to actually read/watch anything about the character. And so, I rectified that today. To my surprise, Madonna was in it as well.
Woah.
The film itself was fun, but what really drew me in was how normalised all the comic book weirdness was---this is a testament to how well the film was directed/produced. Big coats in all the colours except black, a canary yellow hat and coat while trying to be sneaky and somehow not getting seen, colourful backdrops despite the 1930s ``hard-boiled'' aesthetic.
And Madonna sings like the nightingale she is. Ho yez.
Anyway, that's all I have for today. It's a long entry, no thanks to nerding out. Till the next update, I suppose.
Monday, February 22, 2021
[Short] Fiction LaTeX Toolchain Update and Other Stories
Ah... so close to completing The Outer Worlds! If only I didn't actually spend time working on other things... but I believe I am jumping the gun here.
I finally spent some time reading Deep Learning, and have something to admit---the more I read it, the more I am starting to agree with Brian that I am wasting my time. It is not that the concept of deep learning is bad; it is just that, unlike more ``stupid'' algorithms like, say, decision trees, or even some kind of Bayesian belief network, there seems to be nothing innately interesting about deep learning. I get that they are trying to attack the machine learning problem through the use of function composition as the way of generating complexity out of algorithmically simpler units, but there is almost nothing intuitive about it, just really dry tensor mathematics with a tinge of function optimisation in disguise.
Maybe it's because I'm only [nearly] halfway through the 802-page e-book (page 387 as at the time of writing). But honestly, it doesn't look like it is going to get any better. I can probably understand what goes into a deep learning system, but it is unlikely to draw me in the way the other learning architectures do.
Heck, even SVMs are more appealing from the intuitive and theoretical aspect.
I think part of the reason is that a large part of deep learning is not amenable to proper human understanding, at least in the way it is being presented. It's a case of ``here, pick/design some deep learning architecture (equivalent to deciding on the basis functions, and the space of function composition to operate in), chuck enough data and time at it, and we can universally approximate the unknown function to `compress' the data representation for future prediction''.
But I should really be drawing conclusions only after finishing the book. As I said though, it really doesn't seem to be getting any more interesting. If this is the kind of work that people are expecting data science to be, wow, it's probably the most hypocritical thing for me to take up those kinds of jobs.
------
Deep Learning aside, I accidentally spent some time doing more fixes on my various toolchains, not necessarily related to the maintenance of my personal domain. I was actually debating on whether to work on Print Me for today when I realised that I still had Elizabeth left to complete.
For those who aren't familiar with my blog-based scribbles, I have a tendency to create nicely typeset versions of such ``serialised'' works after I have completed the story. That I hadn't generated one for Elizabeth was a hint that I hadn't completed the story, despite the 25 parts that I have written thus far.
I tried building the PDF from the current writing so as to read the story as a whole to see what is left to write, and found some warnings that were being tossed at me by LaTeX---they had something to do with the weird margins that I was using. I fiddled about before finally realising that I had already fixed the style sheet that I was using for my fiction back in 2015---I just didn't propagate the changes. And so, I spent some time updating the pipelines for each of these standalone stories' typesetting, and regenerated those PDFs to replace the old [and bad] ones. The dates of the stories are still the same as before, but the margins are definitely more consistent.
Naturally, I didn't get to read Elizabeth to determine the amount of work left. I didn't work on Print Me either. Maybe tomorrow.
------
On one final note, I finally brought myself to write a quick tool to identify duplicate images by visual perception. The situation is this: I have a set of one thousand or so image files that I have been collecting over time to act as desktop background images. They tend to be of high resolution and come from many sources, and so one of the first things that I did was to write a script that would rename them according to some schema that keeps their order and resolution ``obvious''.
Some images are just too lovely, and after a long enough time, I ended up with some duplicates from accidentally redownloading them for storage again.
Visually inspecting which ones are similar from Windows Explorer alone was stupidly impossible, and it was obvious that I needed to use automation to help, specifically some kind of similarity comparison.
Armed with Pillow, I wrote a Python3 script that would convert each image into a row-vector of 20 dimensions, with each entry being a number in [−1,+1] (absolute black to absolute white) taken from a Lanczos-resampled grayscale conversion of the original image.
Similarity was then determined using a variation of cosine similarity---I computed the angle between the vectors in radians, and claimed similarity when it fell below a threshold of 0.01.
Images that were similar were physically dumped into a directory representing the cluster, while images that had nothing similar to them were just copied wholesale.
From there, I could do the visual inspection (there were 10 clusters of similar images) and weed out what I didn't want, before re-running the renaming script on the unique-ified images.
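For the curious, a minimal sketch of the comparison step might look like the following. This is not the actual script---the renaming, clustering into directories, and file moves are omitted, and the 5×4 thumbnail shape is an assumption to arrive at the 20 dimensions:

import math
from PIL import Image

def image_vector(path, size=(5, 4)):
    # Lanczos-resampled grayscale thumbnail, flattened into a
    # 20-dimensional vector rescaled from [0, 255] to [-1, +1].
    img = Image.open(path).convert("L").resize(size, Image.LANCZOS)
    return [p / 127.5 - 1.0 for p in img.getdata()]

def angle(u, v):
    # Angle in radians between two vectors (arccos of cosine similarity).
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    if norm == 0.0:
        return math.pi  # degenerate zero vector; treat as dissimilar
    return math.acos(max(-1.0, min(1.0, dot / norm)))

def similar_pairs(paths, threshold=0.01):
    # All pairs of images whose vectors lie within the angle threshold.
    vecs = [(p, image_vector(p)) for p in paths]
    return [(p, q) for i, (p, u) in enumerate(vecs)
            for q, v in vecs[i + 1:] if angle(u, v) < threshold]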
Was it overkill? Maybe, especially when the images that were similar appeared to be exactly the same---it wasn't even the case that one was a higher resolution version of another. Since I did not have the wherewithal to check which similarity cases were actually present, it was just faster to use a more general algorithm.
The job was done, and I am happy.
------
I think that I have been writing a little too much about writing random scripts/updating various toolchains that I use to operate my cyberspace affairs. I suppose it is just the season---it seems that I am currently in the ``hot'' phase of programming useful scripts/fixing old ones.
There is still one more thing that is niggling at the back of my mind that I would want to do before I am ``programmed out'', and that involves music.
I may also want to do some lighter reading instead of Deep Learning, perhaps a short anthology of short stories, or even poems. I don't know which just yet, but I have some decent ideas. And they are likely to be dead-tree versions too, just for variety.
I think that's all I have for today. Maybe after I shower, I will try and finish up one of the endings for The Outer Worlds.
Maybe.
Anyway, till the next update.