Tuesday, August 31, 2021

SMS Musing #15

In response to a Facebook post on the Flute Forum asking for opinions on musical instruments made in China:
Which country a flute is made in matters less than the quality control (consistency of output) and the standards of said quality control ("student" grade versus "concert" grade). We use brand names as a proxy for these factors, because the well-known brands have an established track record.

Well-known brands can still produce duds, though duds are less likely from them than from some lesser-known brand. Remember also that every well-known brand once started its life as an unknown one.

The only sure way to know whether a flute works well is to play it for oneself. Trade shows are a way for companies to showcase their best work. If one is looking for a steady supply of instruments, then it is almost necessary to visit the factories themselves to assess the two aspects of quality control I talked about earlier before making a decision.

For musical instruments using more organic components (like wood or animal skins), the question is more layered. It is less about the country of manufacture and more about the ambient temperature/climate of the factory of origin. Organic materials are a bit more dynamic than metals, and can be affected quite drastically by their preparation and treatment during manufacturing. In these cases, the quality control aspect extends deeper into the material supplier as well.

It's turning into an essay here, so I shall stop for now. 😅 I'm not here to defend nor denigrate "made in China" instruments, but to advocate a more informed decision-making process as opposed to relying too heavily on mental shortcuts that use stereotypes.

Monday, August 30, 2021

``Renewable'' Rant?

Everything is ``renewable'' in the sense that, at the energy levels we can operate in, the conservation of mass (at both the macro scale and the sub-atomic scale) holds for most things. Now, whether it is ``renewable'' at the time scale of the human being/race, though, that's a completely different story altogether.

The thing is, the matter that makes up everything we possess has more or less existed since the formation of the earth---what has changed across the aeons is its arrangement within the space-time dimension. Low levels of energy are often stored in the form of chemical bonds between the atoms in molecules, and much of life on Earth is based on manipulating these chemical bonds to release energy, to create other chemical bonds, and to adjust the underlying arrangement of atoms within the molecules to achieve various outcomes, which can include things like locomotion or even reproduction.

Generating ``arrangements'' is an intrinsically mathematical way of creating complexity. It is how we take the two elements of 0 and 1 to generate first the countably infinite, and, with careful reasoning, the uncountably infinite. The universe is basically mathematics applied over an extreme space-time-energy scale, and we are the product of all that, as are our predecessors and our postdecessors.
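
(For my own later reference, the jump from two symbols to the uncountable that I am hand-waving at is just the standard Cantor argument; a rough sketch in LaTeX notation:)

  \[ \Bigl|\bigcup_{n \ge 1} \{0,1\}^n\Bigr| = \aleph_0
     \qquad \text{(finite binary strings: a countable union of finite sets)} \]
  \[ \bigl|\{0,1\}^{\mathbb{N}}\bigr| = 2^{\aleph_0} > \aleph_0
     \qquad \text{(Cantor: any listing } s_1, s_2, \ldots \text{ misses the sequence } d \text{ with } d_n = 1 - (s_n)_n\text{)} \]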

In the grand scheme of things, we are nothing. So even if the Earth's temperature rises by another 10°C, or if the days grow longer due to the slowing of the planet's rotation, it literally does not matter---all the matter that has existed will still exist, the only difference being that the arrangements that are more meta-stable will be different from the ones that are currently meta-stable. This is just an infinitesimal sample of how the large-scale universe operates, and it really drives home the level of awe one should have for the One God who created it all---literally unknowable by us.

But back to the topic. ``Renewable''. As I said before, everything is ``renewable'', but not everything is ``renewable'' at a time-scale (and with intermediate states of system-transition) that is favourable to humanity. Science and engineering are literal baby-steps towards understanding how the bigger Earth system works, and we are always prematurely optimising by exploiting whatever limited science we have at hand to gain an advantage. That's why we have things like asbestos, leaded petrol, chlorofluorocarbon coolants, and tritium, all of which were heavily exploited within years of their discovery before the detrimental aspects of their [uncontrolled] use surfaced. I don't mean that we should therefore stop all forms of science/engineering; rather, I say that since we have now more or less overcome the basic problems of survival, we should be a little more careful in how we choose to affect our environment in general, with an overall sense of leaving enough of it unchanged for future generations of humans to live in.

The tendency of using science and engineering to fix things has always been to add more things rather than to remove them, which explains the strong inertia from large corporations that have largely benefitted from all that ``new'' added stuff. In many ways, capitalism has always been more about adding new things than actually solving anything---the answer to a problem is not to do less of an existing [money-making] activity, but to do more of something that seemingly mitigates the problem away. Synthetic plastics derived from the complex hydrocarbons in fossil fuels were seen as more hygienic and longer-lasting versions of their ``organic'' counterparts of wood/paper, and yet it is this longer-lasting property that is currently causing a lot of the harm, due to the general incompatibility between being long-lasting and feeding the consumerism that enables capitalism to flourish the way it does.

Why bother fixing something when it is much easier to toss it out and buy a newer version that is usually more improved than the old?

For objects that are more sentimental in nature (i.e. whose value isn't necessarily from use-value alone but from some intangible prescribed emotional value), like musical instruments, nothing seems to beat the more traditional ways of doing things, albeit with improvements that come from more precise tool control. At the industrial scale though, sentimentality means nothing---it's all about whether it is cheaper to use/fix an existing machine/process, or to replace it with a new one. The largest consumers of a lot of the plastics aren't necessarily the end-consumers (though they definitely contribute via their consumption), but are likely the many intermediaries involved in industry. To the industries, plastics are seen as consumables of production---to be used once and then discarded, either because it is cheaper to do so, or because regulation demands it.

Thus comes the paradox of using a largely cheap but ``non-renewable'' resource over a less cheap ``renewable'' one.

Would the situation be better if it were centrally planned as opposed to being left to the devices of the hidden hand of the market? Well, it would only be better if we knew more about the ramifications/consequences, and no one has a good grasp of that since foresight is always orders of magnitude harder than hindsight. It is effectively a toss-up---on some matters, centralised planning can help, but on many, the interlocking sub-systems are so enmeshed that it becomes too complex for a simple input/output relationship to be drawn up to allow proper dictation by a centralised authority. History supports this through the many instances when centralised planning was more effective, and the many when it was more destructive, compared to a laissez-faire market-based attitude.

If both major principles of organisation are not perfect, then are we doomed? Maybe.

Only the future will tell.

Sunday, August 29, 2021

Cleaning My Old Clocky

I've owned a Clocky alarm clock since 2009. It's my favourite alarm clock, not because it can run about when triggered, but because it has a nice, loud, pseudo-random alarm, and uses 24-hour time for both display and setting of the alarm, without any other cluttering display elements. Each function also has its own dedicated button (alarm on/off, ``roll off'' on/off, snooze, hour advance, minute advance, alarm set, time set) without any silly touch screen mechanics. It's all digital, which makes setting things precisely that much easier as well.

It does run on 4× AAA batteries, but that's not really a problem. Each set of batteries lasts about a year or so---I haven't kept track because changing them is that infrequent. There is a ``low-battery'' indicator that informs me when it is about time to change the batteries, so it really isn't an issue.

I recently took apart my Clocky alarm clock to clean it up. I've not cleaned the poor bugger for more than a decade, and the amount of dust and dirt built up is just... horrible. I did try to clean it every now and then whenever I was sufficiently put off by the grime, but without taking things apart, there were just too many strange places where dust gathered that could not be cleaned. I also screwed up one of the more recent cleaning attempts by using 70% isopropyl alcohol to clean the window; somehow it leaked through the adhesive and caused the interior to fog up, making it impossible to read the display without triggering the lighting mechanism via the snooze button. That was not ideal, and so I had to take it apart to remove that plastic window so that I could see the display again.

The first obstruction to taking it apart was that the four cross-head screws were covered up by a plastic screw cover that, over the years, had been shoved in deep. There was no way to non-destructively remove them, and so, they were destructively removed via drilling. For more details on the guts, consider checking out this page from Instructables.com.

Once that was done, everything else was much more straightforward to handle. I don't have any ``before'' pictures (those are too horrific to share), but here is a fully-assembled ``after'' picture.
Yes, the wheels are no longer pristine white, and neither are all the other plastic parts, but what can I say about a device that has literally travelled with me throughout the world over the past 12 years?

Discolouration issues aside, the only other complaint I have after such a long operation time is that it has started to gain time at the rate of about 1 minute every couple of months or so. It's not that big a deal (I set my alarm clock to run 15 minutes ahead of local time anyway), but it is something that I need to remember to check on every now and then. That is just a good habit anyway for any time-keeping device that doesn't synchronise itself against the GPS constellation or the Network Time Protocol (NTP).
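
(A quick back-of-the-envelope sketch in Python, with the ``1 minute every couple of months'' figure being nothing more than my rough guess, suggests the drift is around 11 ppm, which sits comfortably within the ±20 ppm or so usually quoted for cheap 32.768 kHz clock crystals:)

  # Rough drift estimate; the inputs are loose observations, not measurements.
  SECONDS_GAINED = 60                         # about one minute gained...
  PERIOD_SECONDS = 60 * 24 * 60 * 60          # ...over roughly sixty days
  drift_ppm = SECONDS_GAINED / PERIOD_SECONDS * 1e6
  print(f"Drift: about {drift_ppm:.1f} ppm")  # prints roughly 11.6 ppm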

There aren't many electronics that last as long and remain as useful as my beloved alarm clock. Here's to another decade of wonderful operation.

Till the next update.

Saturday, August 28, 2021

The Rules of the Internet

It's 2021, or rather, nearing the end of 2021. I've lived on the 'net for years now, and after mulling over all the nonsense that I have been observing and that has been driving me somewhat nuts, I went back to my old roots to revisit The Rules of the Internet. I'll duplicate the current version here for record purposes:
  1. Don't f**k with cats.
  2. You don't talk about /b/.
  3. You DON'T talk about /b/.
  4. We are Anonymous.
  5. We are legion.
  6. We do not forgive, we do not forget.
  7. /b/ is not your personal army.
  8. No matter how much you love debating, keep in mind that no one on the Internet debates. Instead they mock your intelligence as well as your parents.
  9. Anonymous can be a horrible, senseless, uncaring monster.
  10. Anonymous is still able to deliver.
  11. There are no real rules about posting.
  12. There are no real rules about moderation either---enjoy your ban.
  13. Anything you say can and will be used against you.
  14. Anything you say can and will be turned into something else.
  15. Do not argue with trolls---it means they win.
  16. The harder you try, the harder you will fail.
  17. If you fail in epic proportions, it may just become a winning failure.
  18. Every win fails eventually.
  19. Everything that can be labelled can be hated.
  20. The more you hate it, the stronger it gets.
  21. Nothing is to be taken seriously.
  22. Pictures or it didn't happen.
  23. Original content is original only for a few seconds before it's no longer original. Every post is always a repost of a repost.
  24. On the Internet men are men, women are also men, and kids are undercover FBI agents.
  25. Girls do not exist on the Internet.
  26. You must have pictures to prove your statements; anything can be explained with a picture.
  27. Lurk more---it's never enough.
  28. If it exists, there is porn of it. No exceptions.
  29. If there is no porn of it, porn will be made of it.
  30. No matter what it is, it is somebody's fetish.
  31. No matter how fucked up it is, there is always worse than what you just saw.
  32. No real limits of any kind apply here---not even the sky.
  33. CAPS LOCK IS CRUISE CONTROL FOR COOL
  34. EVEN WITH CRUISE CONTROL YOU STILL HAVE TO STEER
  35. Desu isn't funny. Seriously guys. It's worse than Chuck Norris jokes.
  36. Nothing is Sacred.
  37. The more beautiful and pure a thing is---the more satisfying it is to corrupt it.
  38. If it exists, there is a version of it for your fandom... and it has a wiki and possibly a tabletop version with a theme song performed by a Vocaloid.
  39. If there is not, there will be.
  40. The Internet is SERIOUS FUCKING BUSINESS.
  41. The only good hentai is Yuri, that's how the Internet works. Only exception may be Vanilla.
  42. The pool is always closed.
  43. You cannot divide by zero (just because the calculator says so).
  44. A Crossover, no matter how improbable, will eventually happen in Fan Art, Fan Fiction, or official release material, often through fanfiction of it.
  45. Chuck Norris is the exception, no exceptions.
  46. It has been cracked and pirated. You can find anything if you look long enough.
  47. For every given male character, there is a female version of that character (and vice-versa). And there is always porn of that character.
  48. If it exists, there's an AU of it.
  49. If there isn't, there will be.
  50. Everything has a fandom, everything.
  51. 90% of fanfiction is the stuff of nightmares.
  52. If a song exists, there's a Megalovania version of it.
  53. The Internet makes you stupid.
I remember the rules up to Rule 63 (though Rule 0 seems new to me); the stuff after that is too new for someone whose memory of the rules dates from circa 2007, especially since Undertale (the source game for Megalovania) only came out in 2015, having been in development from 2013 till then.

As flippant and strange as the rules sound, I think they are surprisingly relevant even in today's context of ``curated'' social media. The rules epitomise how not knowing history has a way of making the same mistakes repeat themselves. The platform on which the interactions take place may be different, and the people interacting may not be technology nerds the way the old 'net crowd was, but the basic principles of the interactions are still the same.

But who am I kidding? Newfags aren't going to look at The Rules of the Internet---each of them is too busy trying to repost the repost to win imaginary points and be the hippest most up-to-date totally non-mainstream popular person.

Yeah, this isn't good content for a blog entry, but I just want to have a copy of The Rules so that I can refer back to it for later discussion.

Till the next update.

Friday, August 27, 2021

SMS Musing #14

While dining out alone:
With much of the world staying connected through electronic networks, I wonder sometimes how hard it can get to voluntarily fall off it. I don't mean it in the sense of "going off-the-grid" in a prepper sort of way, but more like, how easy it is to "delete Facebook" and stay away from voluntary electronic networks. This means that those "not compulsory but you will totally suffer otherwise" government-related electronic networks don't count. Maybe it is really easy to do so... but the ramifications... I'm not so sure about that.

Will I ever be missed if I'm gone?

Thursday, August 26, 2021

Off Tangent?

There is often a slight difference of mentality between writing a blog entry in the morning, and one that is nearer the end of the day. The morning entries have a tendency to be based on a whim/fancy that was struck during the hours of sleep, while that of the evening is after a whole day of exertions. To me, morning entries have a tendency to be more abstract and ``large scale'' in nature, with little to no direct relationship with the happenings of the world, while the evening entries tend to be inspired by the goings on that I had experienced throughout the day.

Considering that I am currently still on sabbatical, much of what I am thinking is based on what I have been reading and am already mulling over, as opposed to actually being reflective of what has happened. Because let's face it, when one is spending most of one's time at home looking metaphorically out of the window, there really isn't much of the outside world that affects one directly. The interactions with people are few, and even then, they have a tendency to be much more muted in affect. Besides, as time goes by, the fact that I am on sabbatical while the rest of the world isn't means that a divergence of general interests and behaviour is already taking place.

If I keep up this sabbatical for another year, it could well be the case that I become a total and absolute hermit. At this point, I cannot convince myself that it is not that appealing an idea to pursue.

Extended sabbatical... isn't it just another way of saying ``early retirement''? Considering that my current ``goal'' is to merely outlive my parents before I set my on/off switch permanently to ``off'', it might well be the case that it is a worthy goal to pursue.

Realistically though, I will still need to get some kind of job to pay the bills. This isn't the era where hard work will grant one the means of living a stress-free life ``in the future''---the old pension schemes that promoted that idea were scrapped within the past thirty years, once employers/corporations started realising that people were living, on average, twenty years longer than they were ``supposed'' to, which confirmed that the predicted spending required to fulfil pension obligations was never going to be enough, more so if they expected to leverage the future contributions of future workers to feed the pension and pay off the current pensioners.

Sadly, the numbers will never match up for the simple reason that every economic system currently in operation is based solely on the assumption of growth. There are the usual platitudes on why it is believable: technology is always ``improving'' the way we can do more with less, there will always be cheap workers available for the parts that we cannot automate away, and more work is being done to ``value add'' to products beyond mere manufacturing.

Unfortunately, they are all wrong.

With much more at stake, technology is mostly in the ``exploit'' phase as opposed to the ``explore'' phase. There's a tendency to see more of the same, but ``at scale'' as opposed to something truly new/extraordinary that changes the way we do things for the better. The internal combustion engine, for example, is still going to be around for quite a while, no matter what they say about electrification, as long as last-mile electricity distribution isn't improved upon. Fiat money will still be primal no matter what the crypto-currency enthusiasts say.

The game of cheap workers has been one of arbitrage; it's actually symptomatic of the entire theory of economic systems. The basic premise here is that of exploiting competitive advantage in order to secure higher profits (not just revenue). The only ``small'' catch is that for certain types of competitive advantages (specifically that of labour), the margins decrease the more they are exploited, as the associated cost of living increases while the quality of life improves. In short, there will come a time when all places offer equally expensive labour. Labour is expensive because its replacement tends to take a while and has interesting scaling issues unrelated to quantity alone---the quality (available skills and competency) of the said labour counts as well. There isn't an economic system that prioritises self-sustenance and balance---and it isn't obvious how one can be created, since all we have known over the past two hundred years of human history is growth, growth, and growth [in human population].

``Value adding'' to products after manufacturing is a bit difficult to understand, for the simple reason that it is likely to be a multi-dimensional manifold with many possible local optima. The objective of the game then is to identify enough of these local optima to provide the ``value add'', while using the least amount of resources. Put in more lay-terms, it means that one needs to identify and corner the market where the consumers in it want to consume the very services that one can produce, amidst a large and not-globally-well-understood needs space. It ends up with fragmentation, really. Think of it as the ``artisanal'' phase of service. Maybe at some point enough similarity of service requirements may be found to automate away, but there is still that intangible aspect of service that makes such automated means less welcome as compared to a real person.

Where was I again? Eh... it doesn't matter. Got my daily rant out of the way.

Off to something else then. Till the next update.

Wednesday, August 25, 2021

No Big Thoughts Today

No big thoughts today. Big thoughts take time to fester, to generate enough annoyance that it becomes harder and harder to not expel them all out with one long-winded rant.

Big thoughts get increasingly harder to come by the more contented one is. Not because the more contented one is, the more ignorant one becomes, but rather that the more contented one is, the less likely one is to waste time worrying about things. Recall that to worry is to spend brain-cycles on something that will, inevitably, do no good at all, because the said thing is, by itself, outside of one's control.

So all that armchair socio-economico-politico commentary/rants here? Yeah, I'm not some angsty teenager---I know that merely talking about them here isn't going to change anything. Even if I were some kind of high-profile blogger, that would still be the case, for as long as no one decides to take action to enact a change, the status quo will always prevail for the simple reason of inertia.

Never discount inertia in any situation. Inertia is oftentimes the most potent reason why an obviously needed change never gets off the ground. Inertia, to borrow from a physics definition, affects both motion and stasis---an object in motion (or stasis) remains in motion (or stasis) until an external force of sufficient magnitude to overcome the object's inertia acts on it to impart an acceleration that changes its direction (or gives it motion).

That's why mainstream trends are as hard to buck as an ossified bureaucracy is to make more efficient.

Alright, that's enough for someone who has no big ideas today. Till the next update.

Tuesday, August 24, 2021

Change

When people say ``the only constant thing is change'', they are making the observation that the world at large is an emergent system, in the sense that from any specific time period [in the past] to any specific time period [future relative to the past], the overall system demonstrates behaviour that is consistent with one that has moved from one state to another, often-times different, one, especially when the system is taken as a whole instead of being deliberately confined to any specific localised region.

But that observation doesn't mean that:
  1. The rate of change is a constant;
  2. Change is always for the better;
  3. We must all ``embrace'' change.
At this point, having lived through the transistor revolution (as opposed to the industrial one about three hundred years ago), people of the post-modern era are more comfortable with noting that the rate of change is not a constant. It is hard to continue to accept that old [mis-]interpretation when we observe, over the span of 10-year windows, the rapid miniaturisation of electronic ``thinking'' hardware through the rapidly increased ability to do more with smaller and smaller hardware---the ubiquity of various ``smart'' devices is a testament to this, and can prove to be enough to convince even the most recalcitrant to acknowledge that the rate of change is not constant.

That second point, that change isn't always for the better, is a little harder to get to, since it involves a judgement of value, which is itself a far trickier concept to describe objectively and to obtain consensus on across a universal scale of values. The usual way of doing so [particularly in economics] is to assign a dollar-value to the outcome, using the innate assumption that the dollar-value, being a mass hallucination that serves as a good enough fungible measure of value for trade, is also good enough for determining the value of things outside the context of trade. Naturally there are flaws in this method, and the type of change that I am referring to [in an emergent system] is the sort that isn't quantifiable, since quantifiable change can be subjected to the rigours of differential and integral calculus for further analysis. No, the type of change I am referring to is usually of the social kind, the kind that involves trends in behaviours within the emergent system that is society. Is being a more liberal-minded society [as compared to before] ``better'' than one that is less so? Is a return to a theocratic mode of government ``better'' than one that is based on the oh-so-very-obviously-flawed version of democracy? None of these questions have acceptable answers, for the simple reason that any acceptable answer will eventually need to be decomposed into a series of judgement calls on what values are prized and what aren't within the particular society.

There are, however, some useful rules of thumb (or heuristics) to guide us. Generally any change that leads to increased individual agency can be considered as being better, since more knowledge (a side effect of our information-dense societies of today) often leads to more tendencies towards doing things using one's own way instead of following the proverbial herd. Any change that reduces harm to others is usually considered as being better as well, under the slightly tweaked utilitarian principle (instead of maximising overall utility, we are talking about minimising overall dis-utility---these two statements are not mutually exclusive to each other). Any change that can provide a pathway to those who do not wish to exercise their free will too much for whatever reason can also be considered better, since a purpose can then be ascribed to these individuals to avoid a situation where they may be swayed by a more destructive [to the system, or society in this case] power. The heuristics are not necessarily consistent with each other, hence the need for the judgement of value to be applied before the change can be considered ``better''.

That change is inevitable and that we must therefore all ``embrace'' it is a two-part puzzle that needs unravelling. ``Change is inevitable'' can be considered an axiom in this context, an assumption about the behaviour of the emergent system that cannot be proven/disproven. The idea of ``embracing'' change is itself a nuanced statement---in most contexts of lay use of natural language, we would interpret ``embracing change'' as adopting wholesale the change that is happening/has happened, possibly adapting any old processes/assumptions that were used before to incorporate the changed elements. Reality is often more subtle than that, and conveying the entire nuanced character of pronouncements on various aspects of society with a compact/concise statement is truly an art-form on its own. Change should never be taken in wholesale, because change itself requires a judgement of value to determine what parts of it are relevant, and how those relevant parts affect the existing assumptions and processes, with the important distinction that the existing assumptions and processes have a proven track record (good or bad) as compared to the change.

What I mean is that it is back to the old ``exploit versus explore'' debate all over again. An existing assumption/process before the change in question has a history of its effects and associated adjustments. Based on the understanding of the system then, a certain amount of adaptation in line with ``optimisation'' has been applied and has been shown to be working (the contrary case, of existing assumptions/processes before the change having been proven not to work despite efforts of adaptation/optimisation, holds as well). The change is new territory, not necessarily proven within the [localised] context of the emergent system. Maybe it is a better fit, or maybe it isn't---we don't have a good enough predictive system to ``prove'' the outcome without any doubt. Then it becomes a judgement-of-value call with a dose of risk assessment as well. To ``exploit'' is to declare a conservative risk appetite, and prefer the known over the unknown; to ``explore'' is to declare a higher risk appetite, and prefer trying out the unknown over the known in the hopes of achieving the possibly larger potential of the unknown.
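
(As an aside, the ``exploit versus explore'' trade-off has a tidy toy formalisation in the multi-armed bandit setting; here is a minimal epsilon-greedy sketch in Python, where the pay-off numbers are entirely made up for illustration and epsilon is literally the risk-appetite dial I am describing:)

  import random

  TRUE_PAYOFFS = [0.3, 0.5, 0.7]   # hidden average pay-offs of three options
  counts = [0, 0, 0]
  estimates = [0.0, 0.0, 0.0]      # what the decision maker believes so far

  def choose(epsilon):
      # With probability epsilon, explore a random option; otherwise exploit
      # the option with the best estimate seen so far.
      if random.random() < epsilon:
          return random.randrange(len(estimates))
      return max(range(len(estimates)), key=lambda i: estimates[i])

  def play(rounds, epsilon):
      total = 0.0
      for _ in range(rounds):
          i = choose(epsilon)
          reward = 1.0 if random.random() < TRUE_PAYOFFS[i] else 0.0
          counts[i] += 1
          estimates[i] += (reward - estimates[i]) / counts[i]  # running mean
          total += reward
      return total

  print(play(1000, epsilon=0.1))   # a modest appetite for the unknown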

But ``exploit versus explore'' issues aside, there is also a time element with respect to embracing a change that is often not taken into consideration. Over time, an existing assumption/process within the emergent system will require increasing effort to maintain, especially if the change in question becomes increasingly mainstream relative to the old practices. Here, the judgement of value is still in play: whether the greater value of retaining the older assumptions/processes is worth the effort of keeping them less tainted by the changes that are going mainstream. There are people who are willing to figuratively ``die on the hill they are on'' for their beliefs, and this is a representation of that. There are also those who are willing to ``reform'' their beliefs by carefully deconstructing their original assumptions/processes to isolate what is truly fundamental and cannot be discarded (a judgement of value) from what is frou-frou that can be changed. That latter group requires much more thinking, and its members are often agents of change within their localised context. Their level of heresy is determined by how radical their excision of the frills is, and there isn't a pattern that describes what level of enthusiasm leads to a greater chance of success.

Ultimately though, any change as applied to an emergent system provides it with energy to move on to another state, and the resistance to change can be seen as an [increasing] expenditure of energy to maintain a previous state. It may be possible to hold off change for quite a while, but without enough long-term support, that holding off will still be overcome by the sheer amount of energy that change provides.

Can a change be ``undone''? Not in the literal sense of returning to a previously observed state, since each state of the emergent system is a product of the entire history of all its previous states. It's like the spiral of the complex logarithm---there is a component that can be ``undone'' to what seems to be a previous state, but due to the passage of time and the generation of histories, the ``actual'' position of the state is nowhere near the previous state; at some level, the ``undoing'' is really more illusory than real.
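
(The picture I have in mind is the multivalued complex logarithm, whose Riemann surface is an endless spiral staircase:

  \[ \log z = \ln\lvert z \rvert + i\,(\theta + 2\pi k), \qquad k \in \mathbb{Z} \]

Going once around the origin brings you back over the ``same'' point of the plane, but the imaginary part has picked up an extra 2π; you are on the next sheet of the spiral, not where you started.)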

The only constant thing may be change, but it doesn't mean that our treatment of it has to remain constant. A large part of how we deal with it boils down to exercising a judgement of value together with an application of risk assessment. In both cases, it requires thinking.

Maybe the reason why humans are the undisputed rulers of Earth is that we have vastly improved our ability to out-think all other creatures on this planet. And as long as we can keep on thinking, we can keep on surviving and thriving, which makes all those anti-intellectual movements all the more terrifying to think about.

Till the next update.

Monday, August 23, 2021

Judgement

There are many things to disagree about in the world. But perhaps there are some things that we need to agree about so that we can still go through this life without injustice.

There are two kinds of justice that I know about: the concept of divine justice, and that of society-driven consensus-based justice. In both cases, ``justice'' still carries the meaning of righting the wrong, but their key difference is the time-span of enaction; divine justice is essentially unbounded in time, since whoever is in charge of it (in my case, God) is not a being of the universe and thus has literal free rein over all of space-time to right the wrong. The other, more subtle, difference is the definition of what ``wrong'' means. Divine justice involves righting wrongs that are more cosmic in nature, the type of wrong that is done against a universal morality, while society-driven consensus-based justice can be seen as a worldly implementation of divine justice, but with more cultural aspects to it. I cannot, however, say that divine justice is a proper superset of regular justice as we know it, since there may be cultural components that introduce concepts of justice that lie outside of a universal morality and that may even be considered blasphemous.

But in either case, we can agree that the notion of ``justice'' also entails the action of ``judgement'', a singular event that pronounces that an action is just/unjust, and the meting out of punishment/rehabilitation to correct the unjust actions to bring the judged back to be among the just. ``Judgement'', though stated as a singular event, has been known to be provided by any of these three types of judges: a universally impartial supernatural judge (in my case God), a society-driven consensually appointed human proxy (i.e. the regular judges in a court of [human] law), or a society-driven amorphous consciousness that simultaneously exists but cannot be precisely defined (i.e. public opinion).

The outcomes of a universally impartial supernatural judge's opinions are known in a vague way, but not with specific outcomes that can be pin-pointed to some specific space-time range; by definition, this judge is unknowable in many senses of the word. As such, we cannot reason about this judge in a way that is meaningful for discussion---the only true thing we can reason about is whether we ought to have faith in the infallibility of this judge or not, with the latter meaning either that the judge is fallible at some level that we do not comprehend, or that this judge simply does not exist. With that in mind, we leave this judge out of the discussion.

A society-driven consensually appointed human judge enforces human law, which we have already established as a particular implementation of divine justice that is itself affected heavily by cultural contexts. For the purposes of this exposition, ``cultural context'' here simply refers to the level of human understanding of the particular population that comprises that particularly defined society---it can mean a village of people, a country of people, an ethnic group of people, or even a religious group/cult of people. Human law is no different from a sequential list of conditions and conclusions being applied to interpret a series of actions revealed through investigation, to determine justice with respect to the [human] law, or what we would more commonly call ``legality''. Judges of this sort are not necessarily as impartial as the universally impartial supernatural judge by definition, but they are generally among the most well-respected of people---they are strongly believed by the society-driven amorphous consciousness that simultaneously exists but cannot be precisely defined to perform their duties ``to the letter'' of the law that was agreed to by the said society and by which all will abide (and thus be justified).

In the case of regular judges, a principle of ``innocent until proven guilty'' seems to be in vogue. I don't think that principle is a correct statement: ``innocent'' and ``guilty'' are often misconstrued as being more polar than what they should mean in context. What I am saying is that ``innocent'' and ``guilty'' ought to be understood in the context in which a particular series of charges is being levied---attributing unnecessary extra context to these two words is one of the reasons why a legal system may be deemed unjust. Regular judges are well aware of this requirement, but the same cannot be said of public opinion.

And that is where character assassinations come into play through the use of public opinion.

It is never about ``innocent until proven guilty''. It has always been:
Let the plaintiff/complainant present their pieces of evidence and associated logical reasoning to demonstrate that the defendant is in violation of the specific laws that the defendant is charged with. The defendant may then be acquitted if they can present sufficient counter-evidence and proof of contradictions to the provided logical reasoning to demonstrate that the plaintiff/complainant has no case against them.
It really doesn't matter if the defendant is ``not innocent'' of other things not related to the specific charges---in the court of law, all they are answering to are the specific charges that are listed against them. If the plaintiff/complainant does not provide enough evidence and link it into a convincing enough narrative to explain how the defendant violated the specific laws that comprise the charges, then the defendant must be acquitted---it has nothing to do with innocence or guilt.

That is part of the reason why good lawyers are so important. And it is also the reason why the law seems to favour the rich and powerful---not because they can bribe their way through the system (they don't have to, really), but because they have the means to hire the types of lawyers who are competent enough to mesh the evidence into a narrative that requires a formidable intellect to riposte against. As plaintiffs/complainants, their chances of success improve when the evidence is strongly linked into a narrative that requires an exceedingly improbable set of events to break either the evidence or the reasoning down; as defendants, their chances of success improve when they can break the relevance of the supplied evidence and point out alternative lines of reasoning that contradict the one proposed by the plaintiff/complainant.

However, that's all technical; a classic demonstration of logos. The problem here is that public opinion (or the society-driven amorphous consciousness that simultaneously exists but cannot be precisely defined) runs on pathos, a distinctly different manner of reasoning. It's not about the logic; it's all about the emotion, the ``feels'', the intellectual laziness of lumping the shortfalls of a person's character together with the specific charge being levied against them. This is not helped at all by the media, social or otherwise, which thrives on generating lots of controversy (or as I would call it, ``stirring shit up'') to increase the visibility of that particular media outlet and feed the participation/relevance/advertising revenue loop.

That is how someone who is exonerated in the eyes of the law can still be cast as a pariah of society. Accusations get blown up into declarations of guilt in the court of public opinion, where retractions of such mistakes never travel as far as the more controversial accusations did. It is a classic example of a denial-of-service attack, in which the ``service'' that the victim is denied is that of an unassumingly normal life devoid of unnecessary and incorrect judgement.

The problem is obvious, but the solution here isn't. How are we to deal with such manipulation in general? Can we even do anything about it at all? Sadly, I think not---humans are not rational to any degree, no matter what our economist friends might say (they are also starting to realise that homo rational is a simplifying assumption the same way that physics uses ``a spherical cow'' to create models that help explain a [small] part of how an emergent system works), with the added effect that the more we try to get people to be rational, the more likely they are to take those words out of context and twist them to fit their own pathos. Examples of the latter may be found in the many ``anti-vaxx'' arguments out there that I don't want to reproduce for fear of damaging my brain even more.

Going back to my original statement, I think we really need to start thinking about the processes in which we take to thinking about things and developing our opinions. I think I might have mentioned before (either here or in meatspace) that it is often impossible to convince anyone from the extreme sides of a situation to change their minds---I said that ``they cannot be reasoned with if they don't want to listen to reason''. A recent musing that I read crystallised this concept into something that is a bit more proof-friendly:
Don't charge in to ``convince'' someone by throwing out all sorts of evidence, arguments, and associated reasonings---they are not likely to want to listen to all of that. Instead, ask these people what sorts of evidence, arguments, and associated logical reasonings they would expect to see proven true or refuted before they are willing to change their mind.
I like that perspective because it forces contrarians to be more precise about what they are contrary about. If an extremist can articulate the ``sacred cows'' that they hold dear to keep them in that particular polarity, then that person is no mere extremist any more---they are an extremist who can be reasoned with, as opposed to the intellectually lazy extremist who does not know what he/she is being extreme about, and opposes all forms of evidence, arguments, and associated reasonings simply because they can do so flippantly.

I guess this whole rant is just me trying to see how this new perspective gels with what we believe to be an integral part of a functioning society.

That's all for now. Till the next update.

Sunday, August 22, 2021

In Brief

It's Sunday. I went to church online. It rained a lot---the weather was nice. I had curry chicken rice for dinner. I watched more videos from various Hololive folks while reading a few more pages of The Mahabharata of Krishna-Dwaipayana Vyasa: Book 3: Vana Parva as translated by Kisari Mohan Ganguli. I played a little bit of Grim Dawn as well, trying to take my first player character through the Ultimate difficulty having failed to do so without twinking. I started on Voronoi diagrams in Computational Geometry in C (2nd Edition) by Joseph O'Rourke. I cut my own hair down to the shorter length that I prefer. I did area cleaning as well.

None of what I said above is in any particular chronological order.

That's all for today. Till the next update.

Saturday, August 21, 2021

Grabbag: ``Fully Vaccinated'' and Mini Rogue

Alright, so it has been about two weeks since I had my second dose of Comirnaty, so technically I'm now considered ``fully vaccinated''.

Since it is the pandemic after all, there were quite a few things that I had to do after the waiting period was over. The regulations have been updated from Aug 19 as noted on the Ministry of Health home page, replicated here because the linked page will be updated as things change.
The key point to take away here is that there are going to be differentiated compliance requirements for those who are ``fully vaccinated'' and those who aren't.

There are a few ways of demonstrating the vaccination status:
  1. Status as shown in the HealthHub portal/app (I used this to confirm my vaccination status);
  2. Status as shown in the TraceTogether app (I've never used the app, so I cannot attest to this);
  3. Reconfiguration of the standalone TraceTogether scanner to beep differently according to vaccination status (I've not seen this in action yet, since I've not gone out of the apartment yet);
  4. Use of Verify via a binary blob ``certificate'' as generated by Notαrise (I tested this with the QR code in the document generated by Notαrise); or
  5. Show the official document generated by Notαrise which states when the various doses of the vaccines were taken, and more importantly, the effective date from which the person stated was considered ``fully vaccinated''.
Of the whole lot, I only have the TraceTogether token, which is a standalone device that acts as a beacon/eavesdropper to determine close contacts. The token itself is tied to me, the digital citizen entity, in the grand database of the government, so for the purposes of everything related to COVID-19 contact tracing (and other ``extended'' uses), it should be self-contained.

But frankly, I kinda doubt that the decentralised approach towards the whole checking in/out process ensures strong compliance. And so I took the time to actually generate the relevant official certificate and print it out nicely to fit into the A5 paper size. That document comes with a QR code that leads to the Verify site, a fancy single-purpose trusted (in the sense that it is ``obviously'' a government database) server, as well as a JSON document representing various information about the vaccination regime, both of which were generated and sent via email.

Digging through the JSON document reveals that it is a proof-of-concept work for the OpenAttestation project (created and run by GovTech). It claims to be a blockchain framework for handling these kinds of certificate information, providing guarantees of immutability of the data stored within, but frankly, I don't buy it as long as the only party running it is GovTech itself. Considering that all these certificates are government-issued in the first place, having a blockchain that is hosted by only one single already-trusted entity seems like a waste of CPU cycles, and is just a way of checking off someone's KPI list.
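
(To be clear about what I mean: with a single trusted issuer, a plain published digest or signature already gives you tamper-evidence. A minimal sketch of that idea in Python, where the file name and the ``published'' digest are hypothetical placeholders, and which is emphatically not how OpenAttestation actually works:)

  import hashlib

  # Hypothetical check of a locally held certificate file against a digest
  # that the issuer publishes through some already-trusted channel.
  PUBLISHED_SHA256 = "0123abcd..."                  # placeholder digest

  with open("vaccination-cert.json", "rb") as fh:   # hypothetical file name
      blob = fh.read()

  if hashlib.sha256(blob).hexdigest() == PUBLISHED_SHA256:
      print("Digest matches the issuer's published value; document unchanged")
  else:
      print("Digest mismatch: the document was altered, or the digest is stale")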

I would happily take back my words if the network is allowed to be run distributedly across machines that do not all belong to the government; I wanted to say machines owned by citizens, but I don't think that the restriction is necessary. The key point here is that the blockchain is a means of demonstrating transparency in terms of the transactions that are taking place and the so-called ``state of the world'', and so expanding the participant list to include not-only-the-government would be the right thing to do and justify the additional computational expense from having to maintain a blockchain.

Anyway, that's it for me for the saga that is the COVID-19 vaccination programme. The next step in the ``fun'' is to see if they are going to order booster shots, and what the plans are for re-energising SIN City Inc, which will definitely include some form of relaxation of restrictions and mitigation of the accepted increase in risk as people start to re-contribute to the economy once again.

------

In other news, I have finally received my copy of Mini Rogue! I backed it about a year ago and am glad that it has arrived on schedule. Just look at what it came with!
The Kickstarter Mini Rogue is a rework and prettification of the Mini Rogue micro-game that was created back in 2016. That version was a print-it-yourself-and-play one that was easy to set up, but still had quite a bit of fun in it. The new one is much more professionally produced, coming with its own nicely printed cards and various accessories to make the game complete. The level of support that I had pledged in the Kickstarter gave me upgrades to the tracking boards (thicker cardboard with inserts cut out to better hold the counter cubes and dice), expansion cards (more emergent gameplay!), a self-assembling flat-packing dice throw tray, foil card replacements for the boss monsters (shiny, literally), and even a professionally printed copy of the original mini-game with updated mechanics as well!

It's basically the off-line version of one of my favourite genres of games---the roguelike. The randomness of events/dungeon evolution combined with the way I would react to them with my player character is just the right amount of emergent behaviour. That it is short is also a draw for the most part because the time investment in the character is just so, without having to worry too much if the current ``build'' affects something that will happen many hours from now; it is the same reason why I like the old DoomRL or the modern Jupiter Hell as compared to say NetHack, AngBand, or Dungeon Crawl Stone Soup.

``Hardcore'' permadeath mechanics with meta-gameplay isn't something that I enjoy a lot, because I cannot just start a game without remembering what over-arching thematic update I was going for. This is part of the reason why, even though I have Desktop Dungeons, Cogmind, and Sword of the Stars: The Pit, I've never really gotten as deep into them as I was into DoomRL or Jupiter Hell. Life is too short to keep dying [in-game] just to gain enough meta-elements to actually have a better-than-minuscule chance of successfully pulling off a run. If meta were important, I would play something without permadeath mechanics. If I were younger and had way more free time, I probably wouldn't mind the meta-gameplay as much, though I would probably be annoyed if there were no way of grinding out all the benefits from the meta-gameplay to get the absolute beastiest of player characters for use in-game.

Permadeath mechanics with reaction-sensitive elements are also a game-killer for me. That explains why games like Spelunky, Crypt of the NecroDancer, The Binding of Isaac: Rebirth, and Rogue Legacy don't appeal to me as much either. I mean, I have them on Steam/GOG.com, but I just don't feel like playing them as much, compared to Jupiter Hell.

In other words, I'm saying that despite being a fan of roguelikes, there are certain combinations of mechanics that I do not like as much, and am unlikely to go round looking for them in the future.

Okay, I think that's enough for now. Till the next update.

Friday, August 20, 2021

He Who Fights With Monsters...

Now I know why the whole ``woke''/``cancel'' culture grinds my gears to no end. Or rather, I know how to put in words why that ``culture'' grinds my gears to no end.

One word: outlaw.

More precisely, being outlawed with no recourse through rational discussion.

Back in the bad old days, things were extremely authoritarian. If the monarch/warlord was displeased with any particular person, there was literally no recourse for that said person, no matter what status he/she had, for the simple reason that there was no one with a status higher than the monarch/warlord. People were property then, and only the masters/owners of property could decide how they would like to dispose of them.

The Magna Carta Libertatum was among the first attempts [in the Western world] by the leaders-under-the-monarch/warlord to be given a legal status that would allow them to operate some semblance of rational law. It got ugly for a while before the more famous Constitution of the United States, its associated United States Bill of Rights, and the eventual (less famous) Universal Declaration of Human Rights came about.

All those listed documents act as a framework to define what counts as lawful behaviour of people. There are, of course, national laws, state laws, and other ordinances that are defined at various levels of various societies as well.

The problem then with ``woke''/``cancel'' culture is the systematic throwing out of all these laws/ordinances with the declaration that the laws/ordinances enshrine systemic biases that are innately unfair to various sub-sets of people. It is the taking apart of the normal processes of adjusting laws/ordinances and their replacement with a more pathos-heavy reasoning process: ``it doesn't make me feel good, therefore it is wrong; and most important of all, if you do not agree with me that it is wrong, then you are wrong and there is no way for you to not be wrong unless you agree with me''. It is an extreme position that does not allow for dissenting views to be discussed so that a better rule can be created---it is militant in the sense of weaponising ex-communication, making the person who dissents an outlaw by depriving them of the effective protection of the law, through sowing fear, uncertainty, and doubt via the loud and pathos-heavy path of public opinion.

``Woke''/``cancel'' culture is quick to accuse, quicker to crucify, and slow to admit mistakes, never truly apologising/recanting statements made in error (intentionally or otherwise). It's about mobilising the momentum of public opinion to advance a narrative of extremes: everyone is either wholly bad, or wholly good, and ``goodness'' is defined as those that are with the accuser. The vehemence and vitriol that can be spewed disgusts me greatly---everything that is said by someone who is ``woke'' has a tendency to be self-centred with no room for understanding subtlety, with the usual claim that they had been trying to open communications with those who are in the system, but all diplomacy has failed, which leads to the completely un-erring conclusion that violent reprisal (physical or verbal) is the only solution.

To me, it's like watching a bunch of three-year-olds enacting their version of justice, only it is less cute because we are talking about adults who are deliberately doing things that make life a living hell for other adults, whether or not the receiving party ``deserves it''.

I suspect that part of the reason why this ``woke''/``cancel'' culture is more prevalent now is the general improvement in people's lifestyles, such that they no longer have to worry too much about looking after their food and shelter needs. It's like the three-year-olds all over again---their form of [extreme] justice works because they literally do not have to worry about when their next meal is coming, or whether they have a place to sleep safely at night. It used to be that we would say a child has ``an old head on young shoulders'' as though it were a good thing when the said child acts more mature than their physical age might suggest. But having reflected on the cases where such things happen, I come to a different conclusion: any child that acts more mature than their physical age does so because they didn't have the luxury of living in ignorance of the daily toils that were needed to ensure that life could continue.

Am I dissing on people who are ``woke'' or practise ``cancel'' culture? Yeah, in a way. I mean, I can understand their intent in fixing a corrupt and broken system, but I do not agree with their methods. As Friedrich Nietzsche once said:
He who fights with monsters should look to it that he himself does not become a monster. And when you gaze long into an abyss the abyss also gazes into you.
Funny enough, I just realised that this concept is behind Alucard's anger at Father Alexander Anderson in Hellsing, when the latter decides to use the Nail of Helena to become a plant-type regenerator to fight Alucard, and his refrain of ``Only humans can kill monsters.''

Anyway, that's all I have for now. Till the next update.

Thursday, August 19, 2021

Hosting Bills: Paid!

Whelp, I'm glad I've enough to pay for my various hosting fees this year. They are usually small enough that I don't explicitly budget for them, but they aren't so small that they can be considered negligible, even when I am drawing an actual income instead of pulling it out of my savings, as I am during this sabbatical.

For the uninitiated, the hosting fees are for a [virtual] server that runs The_Laptop's Domain, and another that is used to host my Apache Subversion code repositories.

``Subversion? Code repositories? Use github/gitlab lol!'' No, I don't want to, and more importantly, I don't have to. The use cases for these code repositories are strongly personal in nature, and they are used mainly for me to track changes effortlessly. They also provide a fallback in case of a screw-up on my part, and a quick way to bootstrap a new system. None of them is ever intended to become some kind of publishable code, and I am the only developer touching the code base anyway. So the centralised model of Subversion, running on my own server as opposed to a public one, is good enough.

I used to run my Subversion repositories off some other hosted platform for many years for really cheap. Then they came under new management, started to think that they were some kind of CI/CD shop (with Subversion), and tossed a whole lot of new features onto it. All that was fine---I just didn't have a use for them, so I didn't bother. But then they decided to charge about 12× the price for all these new features that I didn't need---that was when I started exploring alternatives. I found that I could run my own Subversion server off Linode for about a third of the price, with the added benefit of more space and weekly virtual disk image back-ups. The caveat was that I had to manage that server on my own, but it was pretty low-maintenance Linux administration work---basically fine.

And so I pulled up my roots from that old service and moved over to the new one, which I had up and running in well under a day. It cost about 4× the original amount I paid, but that was still preferable to the 12× they were asking for---and that was after the customer service representative tried to give me a discount [for the first year(?)] to keep me on their platform.

``Linode? Use AWS lol!'' No, I don't want to. I only need one measly [puny] server to run what I want to run, and I don't want to have my entire life tied to the AWS eco-system where if any sub-system detects abuse (true or otherwise), I get kicked out of everything in the eco-system with little to no recourse. I don't need it to be scalable, I don't need access to crazy services, and I don't need it to be ``managed'' by AWS in such a way that if I want to up and go, I cannot easily do that.

Anyway, that's all I want to talk about for now. Till the next update.

Wednesday, August 18, 2021

Blurb

Headache's gone mostly. That's all I want to write today.

Till the next update.

Tuesday, August 17, 2021

Dust and Fine Ash Everywhere

So, I'm nursing a headache now. Not sure why, not sure how.

But I've gotta write an entry for today to keep that streak going. So here goes.

I bought the HATTEFJÄLL office chair back in March. And it was only yesterday that I realised the back of the chair had five different height levels that could be set through a ratcheting process that involved lifting and releasing it. It was in the manual---but somehow, I don't think I remembered that bit at all.

That would explain why I was feeling that really strange back discomfort, and why I could suddenly rest my head on the top of the back-rest.

🤦‍♂️

I was working on Eileen-II today when I started to hear some crazy mid-frequency vibration. There were only two things that moved within Eileen-II: either the left fan, or the right fan. I was afraid that something bad would happen, so I quickly powered down and unplugged everything before bringing out my screwdriver to open up the rear.

As expected, there was fine dust on the fans. It was not clogging them to the point of non-movement, but since these fans are super efficient, any disruption of their balance would cause loud and unnecessary vibrations. I cleaned them up with some cotton buds and isopropyl alcohol and put everything back together. Eileen-II has been operating really quietly ever since.

This was not the first time that such dust caused these vibrations. I think the last time I had to clean things out was maybe about six months ago, whereas the interval before that was nearly eight months.

I think there were two big contributors to this slightly more frequent occurrence:
  1. There have been more construction-related activities, as well as more incense burning, as somehow some folks decided to set up an altar at the void deck under my block that wasn't there before. The latter means that there are now three incense burners diagonally opposite my window which are used to burn incense into fine ash that eventually blows into my room, as does the concrete dust from all the drilling. All of it eventually makes its way into the fans through normal operation.
  2. I keep switching between placing Eileen-II flat on the table and propping it up on legs, using the latter for all the situations where I will not be typing for an extended period. That means there is always movement which angles the fans one way or another, which jostles the layer of fine ash/dust about. That jostling can create the situation where some of the dust falls at just the wrong angle and upsets the balance of the fan blades, giving that awful vibration noise.
There really isn't much I can do about all this---even when I was rocking a desktop, dust was always a problem. I can't stop people from burning all sorts of incense, and I do not know why people are suddenly setting up random altars at the void deck when there had been none over the past thirty years. I don't even know how such altars are allowed to exist in a common area that is not a designated Residents' Corner. And I don't even want to get into the whole spiel of why people think it is a great idea to burn incense that smokes like crazy immediately under an apartment block, where the wind will blow the smoke into the apartments themselves.

All I know is, my eyes are drying up, my nose is shot, I have a headache, and my laptop's fans were clogged with dust/ash that had to be cleaned out.

Till the next update.

Monday, August 16, 2021

Whachu Talkin' 'bout?

Live and let live, am I right? That seems to be a pretty sound principle for navigating through this world with so many people who are different from oneself in one way or another.

Then of course, the exceptions eventually start coming in, as they do for all ``short'' enough principles.

The problem with ``short'' enough principles is tied to the whole issue of information transmission/understanding in general. The chunking nature of memory, combined with the pattern-recognition architecture of the mind, means that humans tend to find low Kolmogorov complexity concepts much easier to retain, retrieve, and apply compared to high Kolmogorov complexity ones. This is just a long-winded technical way of saying we prefer simple concepts over those that require us to keep track of ``too many'' attributes/properties before reaching a conclusion.
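
To make that hand-wavy claim a touch more concrete, here is a minimal Python sketch (purely illustrative; compressed length is only a crude, computable stand-in for Kolmogorov complexity, which is itself uncomputable) showing how a short rule repeated many times squashes down to almost nothing, while ``nuanced'' noise of the same length barely compresses:

    import random
    import string
    import zlib

    def compressed_len(s):
        # Compressed size as a rough proxy for descriptive complexity.
        return len(zlib.compress(s.encode("utf-8"), 9))

    simple = "live and let live. " * 200   # one short rule, repeated
    messy = "".join(random.choice(string.ascii_lowercase + " ")
                    for _ in range(len(simple)))  # no obvious pattern

    print(len(simple), compressed_len(simple))  # 3800 characters -> a few dozen bytes
    print(len(messy), compressed_len(messy))    # 3800 characters -> a couple of thousand bytes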

Reality is nuance-filled---it is, therefore, of high Kolmogorov complexity in general. It takes an entire Universe to represent the universe, and so far, our attempts at ``compressing'' the information contained in the universe haven't been fantastic. We do have quite a few physical laws that we have divined from observation and experimentation, but those are inductive in nature---they do not contain the so-called seed information needed to kick-start the entire process of generating the Universe from ``scratch'', which would allow us to predict all the associated emergent behaviour. By constraining ourselves to various subsets of the Universe though, we do manage to get pretty good ``compression'' of the information of that localised system, examples of which include Newtonian mechanics and quantum mechanics.

And that's what models do. They provide a ``compression'' of the strongly localised region [of the Universe], where the number of seed attributes is much easier to comprehend/control, from which the inductive laws can be applied to predict the behaviour from the statistical perspective.

``Models'' is just the technical term. In the context of this blog entry, the ``principles'' that are referred to can be considered as a concrete version of a type of ``model''.

Anyway, I lost track of what I wanted to get at---maybe there isn't anything that I want to get at in the first place. My mind is drifting off into finding convex hulls in 3-space, and maybe I shouldn't have started this blog entry at the time I did.

But eh, it's me ranting/intellectually masturbating. Gotta have my daily writing fun anyway. Till the next update or something.

Sunday, August 15, 2021

Social Media?

It occurs to me that the rise of social media and platforms operating similar to social media are some sort of natural experiment with respect to culture building and interaction of the sort that would be impossible/unethical to conduct on non-social media platform communities. It's probably not a new thought, seeing that there are sociologists who have published papers studying various effects of such interactions as they intersect with existing real-world alliances/cultures.

But unlike the real-world alliance/culture, there is one major difference: there is an overt overlord over the social media platform that is more understandable than the covert overlord over the real world (i.e. God). God's will is unknowable a priori; even prophets generally can only get the rough vibe of what is to come without necessarily knowing when it is to come, thus missing the second key ingredient of a prescriptive-type premonition. Social media platform owners/operators though, their will is much easier to comprehend.

Whatever allows them to generate profits to perpetuate the existence of the corporation.

That's why the recent punitive actions of Facebook against researchers working on discovering how disinformation works within the platform don't surprise me in the least. Each platform is basically its own reality, with its physics and emergent social rules determined by the platform owner, the former explicitly, and the latter through in-world ``law enforcement'' or moderation. No matter how strong the laws of the real world may be, in the reality as defined by the platform, the source code (as controlled by the owner) is god of that world. These platforms are probably a different type of threat against the sovereignty of national governments, though they share many commonalities with the old colony corporations; the key difference being the lack of a military arm, since much of the territory is controlled digitally with little to no physical presence.

Governments tolerate social media platforms only because the latter do not ``steal'' any physical assets away from the country/nation that the government is running. Any physical assets that are used by the social media platform (like data centres) are paid for, and so can be considered a net gain for the government since it contributes directly to the economy.

Digressions aside, the intersectionality between the social media platform reality and the real-world community provides an interesting study of influence and what I would unceremoniously call ``mass hallucination''. Social media platforms are walled gardens---do not let the ease of creating a new account on any platform by anyone fool one into thinking that it is as open as going out into the middle of the street and just shouting something. The rules that operate in social media platforms, both physics and social, may not share the same values as those of the real world in which they appear. Governments understood this a long time ago, as shown by the rather draconian measures some countries' governments took to force the values to be the same, controlling the context by isolating the network to within the sovereign control of the country. What is funny though is that despite doing that, those governments haven't really taken the logical leap of running their own social media platforms for their people, to act as the modern-day digital equivalent of the piazza, where the official narrative can be disseminated from a known safe source, and where the digital divide between the poor citizens and the rich is bridged through universal access to all digital services and their associated discussion fora.

In many recent instances of disinformation/misinformation on social media, we find that it really is a replaying of old Cold War tendencies, except with much higher growth rates due to the ease of starting an exponential spread event. As to how such disinformation/misinformation benefits the party creating it, I remain mystified. I know that the outcome is always about obtaining more relative power, but the path to get there via disinformation/misinformation on issues relating to a global pandemic is fuzzy to me.

Anyway, that's enough mouthing off from me. I'm still making my way through the 1048-page The Mahabharata of Krishna-Dwaipayana Vyasa: Book 3: Vana Parva as translated into English by Kisari Mohan Ganguli; I'm roughly at page 406 of 1048.

By the end of this week, the prison gates will open, and I can head out to my favourite bar to drink and read once more; a very welcome change of environment.

And that's all I have for now. Till the next update.

Saturday, August 14, 2021

A Manual Would Have Saved Me 6 Months of Time

I managed to play with a cat within the past day or so.

All's well with the world.

------

In other news, I've been an idiot. I kept complaining about how Eileen-II was always running too bloody hot. I started with undervolting (the correct thing to do, though the undervolt offset is best applied uniformly across all applicable components instead of just the CPU, for stability reasons), then started messing with the Turboboost ratios at 100% load (sketchy) while keeping an eye on the temperature, then messed with EPP values, which lowered the temperature but introduced lag spikes/stuttering from sudden downclocks, before starting to use the alarm function in Throttlestop to trigger a much cooler profile setting while simultaneously using the original (but undervolted) turbo-ratio options.

Mind you, I was trying my hardest to ensure that the sustained temperature did not heat up the keyboard to an uncomfortable level. That was empirically determined to be about 85°C two months ago.

After one bloody big round, I just realised this little item hiding in the corner of the Options screen:
Oh my, what is this I see? PROCHOT Offset?

That's the bloody option to set the temperature threshold at which the CPU will automatically thermal throttle. Technically, it is how many degrees Celsius below the default 100°C limit the throttle point sits, i.e. the processor is forced to thermal throttle once it reaches (100 − offset)°C.

🤦‍♂️

That would have solved all my stupid problems without doing so much crazy stuff.

Anyway, after discovering that, I applied it, and did some additional adjustments and associated testing of the parameters:
  1. From experience, the PROCHOT value does trigger maximum sensed values about 7°C higher than the actual threshold(!), so for the default value of 100°C, HWiNFO64 does show a maximum CPU package temperature of 107°C; I'm not sure why this is so. Thus, the PROCHOT Offset is set to 20, which is equivalent to saying ``throttle if we exceed 80°C''. Empirical evidence shows that the 10-minute average of 500 ms temperature samples was steady at around 80°C, with very rare spikes of 87°C, as predicted (see the little arithmetic sketch after this list).
  2. I've set the AC Timer Resolution to 1.00 ms so that the CPU can adjust its clock speed much faster to avoid triggering that higher-than-expected temperature. Since it only applies when plugged in, the cost is negligible.
  3. I am still using the Alarm (and by extension, the higher poll rate for Throttlestop). I did adjust the settings so that it triggers at temperatures of 90°C or higher, using a new setting that basically just disables Turboboosting to allow the CPU to cool down faster.
  4. I pushed the EPP down to 16 (instead of 32 as noted last time).
  5. Thermal Velocity Boost is completely disabled across all the profiles since there really isn't any thermal room for it; that way we don't waste time having the CPU logic do crazy ramp-ups only to ramp down before ramping up again.
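
And because I like sanity-checking myself, here is a tiny Python sketch of the arithmetic behind the settings above (all the numbers come straight from the list; nothing new):

    # Back-of-the-envelope numbers for the settings listed above.
    PROCHOT_DEFAULT_C = 100     # default trip point of the CPU
    prochot_offset = 20         # what I keyed into Throttlestop
    sensor_overshoot_c = 7      # observed spikes above the threshold

    throttle_at = PROCHOT_DEFAULT_C - prochot_offset   # 80 degC target
    worst_spike = throttle_at + sensor_overshoot_c     # ~87 degC, as observed

    sample_period_s = 0.5       # 500 ms samples in HWiNFO64
    window_s = 10 * 60          # 10-minute averaging window
    samples = int(window_s / sample_period_s)          # 1200 samples per average

    print(throttle_at, worst_spike, samples)           # 80 87 1200
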
The result of it all is much better average performance with much better thermal performance.

Man, just what have I been doing all this while?

In my defense, it's not as though Throttlestop has a reference manual or something. The features may exist, but there is little to no explanation of what they do. Each update I make here about something else I did came about because I stumbled upon some forum post that described the feature with enough detail for me to learn how it affects both the temperature and the performance.

Now, the use of PROCHOT Offset also means that I can, in theory, run Eileen-II without the standing legs. However, a test of running Grim Dawn while watching Ina's Minecraft stream in Waterfox showed that the 10-minute average of 500 ms Core ratio samples was around 19× when flat, while doing the same with the standing legs for ventilation reached 24×. Note that while both numbers are much lower than the base ratio [of no more than 26×], the ventilation did improve the thermals enough to allow a 26% increase in clock speed. Of course, clock speed isn't everything (the actual processing load matters just as much), but empirically, from within Grim Dawn, it was a difference of about 10--20 fps. So yes, the ventilation did make a substantial difference.
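
Since I quoted a percentage up there, here is the (trivial) Python arithmetic behind it, using the same two ratios from the test:

    flat_ratio = 19.0   # 10-minute average Core ratio, flat on the table
    legs_ratio = 24.0   # same load, propped up on the standing legs

    improvement = (legs_ratio - flat_ratio) / flat_ratio
    print(f"{improvement:.0%}")   # -> 26%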

Alright, that's all for now. Till the next update.

Friday, August 13, 2021

Short and Sweet(?)

This will be a short entry to comply with my goal.

It's been a roller-coaster of a day so far. They are continuing with work on the covered-walkway just next to my window, and so the day has been full of drilling/welding sounds no more than 5 m away from where I am sitting. Hopefully they will be done soon.

It's a roller-coaster of a day because I feel somewhat emotionally impacted. Doesn't matter much for now though---I don't want to talk about it. Maybe after things are done.

The weekend is upon us soon, and it'll be a new day.

Just need to get through each day, one second at a time. We'll all reach the end, so there's no rush.

Till the next update.

Thursday, August 12, 2021

Hmmm

Hmmm.

Another week to go before I am deemed ``fully vaccinated'' by the powers that be. I'm a little eager for it so that I can go visit my favourite bar again and enjoy a pleasant afternoon reading outdoors-ish while drinking beer and eating luncheon chips (among other foods).

I've got about another quarter left before I need to commit to finding a job, creating my own job, or getting into the hikikomori lifestyle. Despite all my bluster, it seems most likely that I will either find a job or just go hikikomori.

It also seems very likely that I might end up, as what they say in economics, under-employed, doing some job in the sub-3.5k+ per month (gross) category, mostly because I don't need the stupidity that comes with the higher-paying jobs. Can't play nice with people whose motive is to backstab their way to success.

Not wanting children, and coming close to eschewing a spouse as well, also means that I have suddenly freed myself from a lot of potential financial obligations, which means a much lower upkeep cost, making it easier to operate on a much lower income.

There are some days I wish I were born with half of my thinking power. At least it would make it easier to pass the day quietly in ignorance, not having to worry about things like whether a job is ethical, or if there are problems with management/the world and the like.

Knowledge is seductive when one is ignorant, but once it is attained, a lot of the magic goes away, and one is steadily confronted with the harsh realities beneath the prettiness that has been allowed to paper over what truly is.

It's true for physics and sociology. It's also true for when Adam and Eve ate from the tree of knowledge of good and evil. God cast Adam and Eve out of Eden after they ate from that tree not just because they disobeyed His instructions, but also because the serenity of Eden would have exposed the inner turmoil that comes from having too much knowledge about reality, which would lead to the eventual destruction of the Garden itself---getting thrown into a world where suffering is prevalent is a great way to distract one from the realisation of what is out there and prevent rampancy.

Still, I do have a quarter left of my sabbatical. I'll make good use of it, and then spend the time needed to get into whatever it is I would want to get into after my sabbatical to start the next chapter of this sorry-ass life.

Till the next update.

Wednesday, August 11, 2021

Remembering to Keep Things Fun, and Them EPP Values

The numbers game is always so seductive to play. Once a quantifiable measure is defined, it becomes ever easier over time to focus on improving the numbers that make up that measure rather than the thing being measured, a phenomenon that Marilyn Strathern generalised from Charles Goodhart's original observation as:
When a measure becomes a target, it ceases to be a good measure.
It's not a new concept; Goodhart's original observation dates from 1975, and Strathern's generalisation from 1997.

Don't get me wrong---determining the quantifiability of some variable is often a very good first step towards understanding, since the use of numbers allows one to define all of these things at once (by mapping the phenomenon's properties onto those of numbers):
  1. ``Ordering'', a means of putting state space into some kind of sequence for purposes of comparison;
  2. ``Boundedness'', a determination of whether a maximum/minimum may be found, or whether suprema/infima are applicable instead; and
  3. ``Resolution'', the determination of the smallest amount of change in the phenomenon that can be measured.
The study of numbers is the most rigorous among the many different branches of knowledge available, and thus finding some kind of isomorphism with numbers is an effective way of leap-frogging knowledge in a different domain using the theories that have already been proven/established within the theory of numbers.
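
As a throwaway illustration of what that mapping buys (the page counts below are completely made up, and the snippet is just a sketch), consider a pages-read-per-year tally:

    # What mapping a property onto numbers gives us, using a made-up
    # pages-read-per-year tally as the measure.
    pages_read = {2019: 9200, 2020: 11050, 2021: 7300}   # hypothetical figures

    # 1. Ordering: numbers are totally ordered, so the years can be ranked.
    ranked = sorted(pages_read, key=pages_read.get, reverse=True)

    # 2. Boundedness: a finite set of numbers always has a maximum and a minimum.
    best, worst = max(pages_read.values()), min(pages_read.values())

    # 3. Resolution: the smallest change this measure can register is one page.
    resolution = 1

    print(ranked, best, worst, resolution)   # [2020, 2019, 2021] 11050 7300 1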

However, the quantified measures never explain the whole story except in the most restrictive of conditions---even physics, a large source of quantified measures of reality, has to define several strong assumptions and conditions before the specific equations can be used, which is often summarised as being the cosmological principle. It's a ``principle'' in the sense that it is an axiomatic assumption---its provability is nuance-filled and sketchy at best.

Heading back to the topic at hand, I am, perhaps not so obviously, not intending to talk about cosmology here, but rather about my own ``numbers game'' associated with my reading list. I have this compulsion to want to beat my old reading records [on an annual basis], and it is starting to pervert the way I choose books to read. It has also changed the way I decide to spend my time: I feel guilty at times when I am doing something other than reading some long work that can yield more than one ``named item'' as defined in my read list.

Essentially, I've made reading go from something I do for fun into something that I am doing for not-fun. I can't call it ``work'', because it isn't, but it sure is turning into some kind of a chore.

I'm on a sabbatical; I'm supposed to be recharging and recentring myself. I should be exploring, thinking, planning, improving, and not getting all hung up on the things I was previously doing at work that were driving me insane.

------

In other news, I figured out my problem with the stuttering frame rate in Grim Dawn. Turns out that I screwed up the Speed Shift EPP values---by keeping them at 255, I sort of improved the thermals by overly preferring a lower clock rate, but it meant that under sufficient load, even the 26× multiplier inevitably got dropped to 8× every so often, which caused a lag spike from the loss of a whole lot of clock cycles. I adjusted it to 32 for both the bursty and continuous performance profiles, and the micro-stuttering went away, while the thermals (at a −80.1 mV offset voltage across the board) were very tolerable.
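
To put some rough numbers on why that dip is so noticeable (assuming the usual 100 MHz base clock, which I have not actually gone and verified for Eileen-II):

    # Rough sense of the 26x -> 8x dip; the 100 MHz base clock is an assumption.
    bclk_mhz = 100
    high_mult, low_mult = 26, 8

    lost = (high_mult - low_mult) / high_mult   # fraction of throughput gone, briefly
    print(f"{high_mult * bclk_mhz} MHz -> {low_mult * bclk_mhz} MHz "
          f"({lost:.0%} fewer cycles while downclocked)")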

Grim Dawn is quite CPU intensive, and that's why that little blip was enough to cause a stutter. I think that Minecraft would benefit from this too... I'm going to get into my single-player 1.17.1 world and build an iron farm to try it out.

So dumb. All this episode does is remind me to think carefully about the whole system and do enough tests to find the bottlenecks and fix them accordingly. Maybe I'm getting a little soft around the edges now that I am on sabbatical... who knows.

That's all I have for now. Till the next update.

Tuesday, August 10, 2021

What Balance?

Where is the balance point between ``I'd like to share something that I find to be cool'' and ``I am just craving attention for attention's sake'' these days? I think that some of my internal monologue is directed towards this particular question and its associated ramifications.

I mean, at some level I'd like to share what I think is cool with people I know. And yet, I cannot shake that feeling that by doing so, I am really just trying to grab attention for attention's sake at a subconscious level.

I want to operate on my own terms, yet cannot walk away completely from the mainstream for fear of total ex-communication and being one iota away from being an outlaw. I'd like to be recognised for the good I do, yet cannot live with the thought of being under scrutiny for the very same.

Where then is that balance point?

Flipping the thought the other way, perhaps all that selection bias that I perceive on ``social media'' (man, I still dislike that term a lot) is other people's take on the same question, with the big difference being that they care much less than I do about whether their actions may be attention-seeking for attention's sake.

Does it mean that at some level, I am just envious of what they are showing?

I would be lying if I said no. It is probably human nature to envy, and with the bleak outlook that comes from the friendly neighbourhood global pandemic that is COVID-19, seeing liberties that remind one of the current predicament does end up triggering much stronger envy.

I did demonstrate that earlier in the year when my friends in the US were already getting their second COVID-19 vaccine dose, while I was still waiting [im-]patiently for my turn even as my demographic got repeatedly penalised for the actions of the elderly ``at-risk'' group, who don't seem to give too much of a damn but still somehow get much sympathy directed at them. Yet after I was given my second dose some time last week, I didn't do the same thing as my friends did by bragging about it on social media.

Instead, I wrote a rather sedate version of it here, possibly in defiance of the on-going trend of doing so on social media.

Am I virtue signalling in a different way then? If so, am I just another hypocrite?

Yeah, it's probably correct to claim so. I don't have any defense.

I'm just resigned. Tired.

Eh?

Check it out: stupid o'clock came early today!

Alright, that was a joke, sort of. The joke part is in the timing; the ``sort of'' part is the type of content that is coming up.

I like to think of ``stupid o'clock'' posts as those that have a stronger tendency to be more rant-y and personal, as compared to the arm-chair critique type stuff that I end up talking a lot about when posting at a more orthodox time.

I suppose there is that magic that comes with a quiet and cool night that makes the mind take a less combative attitude towards the world, and along with that comes the lowering of the conscious barriers that keep the sad inside.

Sort of like why people prefer drinking heavy alcohol when it's starting to get dark outside as opposed to during the day. The darkness is more permissive to a more candid view on things.

That's why campfires have existed since back in the day when humanity was still hunter-gatherers.

That is also why it is important, from the morality standpoint, to literally operate in the light. So that we are always consciously aware of what we are doing, and are thus able to exercise our willpower and self-control as things are shown in their full glory under the light.

Digressions aside, today's ``stupid o'clock topic'' is inspired by yet another Facebook post. This time, it is the sharing of the happy times of a friend and said friend's spouse celebrating yet another temporal-based milestone.

I've always been a bit miffed by the whole idea of sharing one's temporal-based milestones like that. Partly because of the semi-staged nature of the whole thing [instead of just taking in the moment], and partly because it is yet another self-selection of good events that portrays an imbalanced perspective on social media.

Be znlor V whfg unir zvkrq srryvatf juvpu vapyhqr fbzr yriry bs rail naq naablnapr ng jung V ab ybatre unir, naq znl arire unir ntnva. Pbafvqrevat gung guvf vf n ``fghcvq b'pybpx'' cbfg, V'z zber vapyvarq gb guvax gung guvf vf gur zber inyvq ernfba.

Vg'f orra zber guna n lrne fvapr gur ovt oernx-hc. Jungrire grnef gung V pbhyq unir furq, V unir nyernql furq. Gur jnyyf gung jrer bapr qbja sbe fbzrbar unir, bire gur pbhefr bs gur svany zbaguf bs wbo qrfpevcgvba qrivngvba, cnaqrzvp-vafcverq abafrafr, naq gur nffbpvngrq vfbyngvba gung pbzrf sebz gur pbzovangvba bs gur ynggre jvgu zl frys-vzcbfrq rkvyr va gur sbez bs zl fnoongvpny, fgnegrq gb erohvyq gurzfryirf ntnva.

I don't feel a strong connection with people any more.

It's truly hard to put into words what I feel here. It is not that I am going to cut myself off from human interaction completely (even though it seems to be that way now), but that I just don't feel inclined/motivated to open myself up to anyone anymore.

Crbcyr ner zber nxva gb guvatf gung zbir nobhg va gur onpxtebhaq nzbat nyy gur bgure guvatf gung rkvfg arneyl fgngvpnyyl va gur onpxtebhaq. Fbzr bs gurfr crbcyr jr vagrenpg guebhtubhg gur pbhefr bs gur qnl sbe gur fnxr bs trggvat fbzrg guvatf qbar, ohg bgure guna gung, gurl qba'g frrz gb znggre nf zhpu nal zber.

Vg'f n ovg yvxr bcrengvat cebtenzf ba n pbzchgre. Gurer whfg vfa'g n pbzchyfvba gb yrnea qrrcre nobhg gur cebtenz bgure guna gur bppnfvbany arrq gb eha vg gb trg fbzrguvat qbar.

Vg nyy fbhaqf yvxr guvatf unir orpbzr zber genafnpgvbany va angher. V zrna, va n jnl vg vfa'g jebat, fvapr n eryngvbany glcr bs eryngvbafuvc fhttrfgf n zber tvir-naq-gnxr glcr bs nggvghqr, n glcr bs obbx-xrrcvatyrff glcr bs onegrevat bs vasbezngvba.

It feels at some level that I'm just taking a lot, and have nothing to give back in return. And I do mean this for many of the remaining relationships that I have left, assuming that I still have them at the end of the day.

Senaxyl, V jbhyqa'g or fhecevfrq vs V fhqqrayl rvgure raq zl yvsr, be orpbzr fbzr xvaq bs znavchyngvir cflpubcngu. Va obgu fvghngvbaf, n pbzzbanyvgl vf gur ynpx bs n fgebat pbaarpgvba jvgu crbcyr.

Fb sne, V unira'g ernyyl tbar qbja gur veerirefvoyr cngu bs qvffbpvngvat crbcyr vagb zrer guvatf. Jung V fnvq rneyvre vf zber bs n zrgncube qrfpevovat gur graqrapvrf gung V srry engure guna n qrpynengvba bs na npghny nygrerq gubhtug.

I spend more time ``talking'' to people through typing stuff out via various messaging systems, and at some point, it feels no different from just interacting with a computer with no one on the other end, even though I am well aware that there is really someone on the other end of the conversation.

It's just that the thread of humanity that connects us seems a tad too frayed, to the point where things get too de-personalised.

Don't get me wrong, I'm not really a stranger to communicating with people almost entirely over text messages---I have been doing that for a long time, and in many cases, it is the only form of connection that I can maintain with people, since most of my friends/acquaintances are geographically far away from me.

Eh, what else is there to say? Maybe I am really losing my marbles.

V qvq fcraq fbzr gvzr guvaxvat nobhg ubj V pbhyq jevgr na raq-zl-yvsr-cyrnfr qverpgvir gung V pbhyq fgnaq ol fb gung V pna or bssrq ng n cbvag bs zl bja pubbfvat vafgrnq bs jnvgvat sbe zr gb ybfr zlfrys pbzcyrgryl, tenqhnyyl be bgurejvfr. Fhpu na raq-zl-yvsr-cyrnfr qverpgvir fubhyq vqrnyyl or choyvpyl ernqnoyr fb gung gurer vf ab qbhog nf gb zl jvfurf.

Gur bayl ceboyrz jnf gung V pbhyq abg pbaivapr zlfrys gung fhpu n qverpgvir jbhyq pbire gur evtug fvghngvbaf, naq zber vzcbegnagyl, jvyy abg or zvfhfrq ol bguref.

V zrna, vg pna or engure boivbhf jura V ybfr zl zneoyrf fhqqrayl---gur punatr vf qenfgvp, naq bsgra abg sbe gur orggre. Na raq-zl-yvsr-cyrnfr qverpgvir sbe fhpu n fvghngvba pna or engure fgenvtugsbejneq.

Jung vs V ybfr zl zneoyrf tenqhnyyl? Jung xvaq bs yvar fubhyq or qenja fb gung gur raq-zl-yvsr-cyrnfr qverpgvir pna or nccyvrq? Guvax nobhg qrzragvn. Va fbzr jnlf, gur cngvrag jvgu qrzragvn vf fgvyy fbeg bs gurzfryirf naq va fbzr xvaq bs pbageby. Rkprcg gung gurve ybphf bs pbageby unf vgf onfvf frireryl nygrerq gb cbvag ryfrjurerr bgure guna ng onfryvar ernyvgl.

Fubhyq zl raq-zl-yvsr-cyrnfr qverpgvir fgngr gung vs V nz bhg bs gbhpu jvgu onfryvar ernyvgl, V fubhyq or raqrq? Jung vs V, va gung qrzragrq fgngr, nz cresrpgyl pbagrag, gubhtu V nz abg gur zr jub vf jevgvat guvf abj? Vf vg evtug gb raq vg gura?

Dhrfgvbaf yvxr gurfr ner jul V pbhyq arire oevat zlfrys gb jevgvat na raq-zl-yvsr-cyrnfr qverpgvir, yrg nybar chggvat vg va n choyvp cynpr va pnfr V arrq fbzrbar ryfr gb rkrphgr vg orpnhfr V oybbql uryy pnaabg.

Ru... jurer jnf V ntnva orsber V tbg znffviryl fvqrgenpxrq jvgu gur raq-zl-yvsr-cyrnfr qverpgvir? Nu evtug, ybfvat pbaarpgvba jvgu crbcyr.

I don't think that my existence has been a positive impact on people. Doesn't feel like it. Can't prove it one way or another. Can't tell if people saying things like ``no, you are precious to me; I would be sad if you were to go away'' are reflexive platitudes or true emotions; don't know how it should affect me if it is one way or the other.

Truth is, eventually all of us are going to die, and we are going to die alone. There is a specific day in our lives beyond which we are no longer alive. We all know that, but we don't know which specific day it will be. And so, we just mark off each day as it comes, and when the specific day shows up, most of us would be quite surprised by it, as will many people around.

``I didn't expect him to pass away so soon.''

Fcrnxvat bs cnffvat njnl, V'z gbgnyyl abg rkcrpgvat crbcyr ng zl shareny jura vg pbzrf. Bxnl, znlor zl fvfgre'f snzvyl, naq n cnfgbe sbe sbeznyvgvrf. Gung'f vg.

Walls man... those are killer. It can feel very safe behind the many [emotional] walls, but the flip side of it is that if nothing can go out, nothing can come in either.

Ohg vs bar vfa'g rkcrpgvat nalguvat gb pbzr va, vfa'g vg nyevtug gb xrrc gubfr jnyyf hc gura?

I don't know. ``Stupid o'clock'' isn't meant to be full of solutions---it is just a brain dump with little to no inhibitions, not even designed to be thought-provoking even though it is likely that the [mostly] unfiltered response may provoke other thoughts in the reader.

And that's all the brain dump that I have for this ``stupid o'clock'' post. Till the next update.