Thursday, September 30, 2021

Main Loop Start and Mob Farm+

Ah. After a bunch of effort creating the QuickBASIC/QBasic syntax highlighting script for Vim, I finally sat down to write the skeleton for the main loop of LED2-20. The first thing that I started on was the keyboard event trapper. According to my design document, I had to define 21 keys:
  • 4 built-in for arrows;
  • 4 for ``extended'' keyboard arrows;
  • 4 weapon-related;
  • 8 relating to utility functions; and
  • 1 for quitting the main loop.
The key idea here is to use event-trapping to set a bit-wise flag recording that a key press has been detected. Then, when I reach the ``user interaction'' processing part of the main loop, I can consult the bit-wise flags ``all at once'' and deal with them. This allows the processing of so-called chorded keys, i.e. having more than one key pressed at once. Using the old method of reading directly from INKEY$ led to all kinds of synchronisation problems, among which was the sheer impossibility of tracking chorded keys.
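A minimal sketch of the idea in QuickBASIC (the KEY trap numbers for the built-in arrows come from the KEY statement; the flag layout, handler labels, and variable names are my own illustration, not LED2-20's actual code):

```basic
' Illustrative only: bit assignments and names are assumptions.
CONST KUP% = 1, KDN% = 2, KLT% = 4, KRT% = 8
DIM SHARED KeyFlags%

ON KEY(11) GOSUB TrapUp: KEY(11) ON     ' 11 = built-in up arrow
ON KEY(14) GOSUB TrapDown: KEY(14) ON   ' 14 = built-in down arrow
' ... and so on for the other trapped keys ...

DO
  ' ... rendering, game logic ...
  ' User-interaction step: consult all the flags at once.
  IF KeyFlags% AND KUP% THEN yVel% = yVel% - 1
  IF KeyFlags% AND KDN% THEN yVel% = yVel% + 1
  KeyFlags% = 0     ' releases are not trapped, so clear every frame
LOOP WHILE NOT quit%
END

TrapUp:   KeyFlags% = KeyFlags% OR KUP%: RETURN
TrapDown: KeyFlags% = KeyFlags% OR KDN%: RETURN
```

One caveat of this scheme: KEY(n) traps fire on press (and on typematic repeat), never on release, which is why the flags get cleared once per frame after processing.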

The proper tracking of chorded keys is important for movement. One of the old-school bugs/features of the day is the unintentional speed boost from chording two or more directions at once. Usually the keys provide only 4-directional movement, with each key adding one unit of movement in its associated direction. This means that if I chord [say] the right and down keys, I end up moving a distance of about 1.4 (i.e. √2) in the down-right direction, as opposed to moving 1 in the down-right direction. If the movement units are singular (like just 1 unit), then it doesn't matter too much. But since I am intending to allow the player to change the maximum thrust power of the LED, this boost is definitely unintentional. With the method I described, the direction keys now provide the correct interpretation of setting the direction, and more importantly I don't need to spend much of my interpreter time polling the keyboard via INKEY$ and doing weird math to figure out the chording.
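For concreteness, the diagonal-boost fix can be sketched like this (variable names are mine; the chord flags are assumed to have been set by the keyboard event traps described above):

```basic
' Turn the chord flags into a direction vector, then scale by the
' player-set thrust so diagonals get no free sqrt(2) speed boost.
CONST KUP% = 1, KDN% = 2, KLT% = 4, KRT% = 8
dx! = 0: dy! = 0
IF KeyFlags% AND KRT% THEN dx! = dx! + 1
IF KeyFlags% AND KLT% THEN dx! = dx! - 1
IF KeyFlags% AND KDN% THEN dy! = dy! + 1
IF KeyFlags% AND KUP% THEN dy! = dy! - 1
mag! = SQR(dx! * dx! + dy! * dy!)
IF mag! > 0 THEN                    ' e.g. down+right chord: mag! = SQR(2)
  x! = x! + thrust! * dx! / mag!    ' moves thrust! units in total,
  y! = y! + thrust! * dy! / mag!    ' not thrust! * 1.414
END IF
```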

In QuickBASIC/QBasic land, there is a need to differentiate between interpreter run-time and behind-the-scenes (compiled) run-time. Interpreter run-time is necessarily slower, since each line of code translates to multiple lines of machine code on the fly, whereas behind-the-scenes run-time tends to be more efficient since there is little to no translation required to get it running on the metal. That's why during the later parts of QuickBASIC/QBasic's run, near the end of the Windows 98 era, assembly-based libraries like DirectQB and Future.Library were a thing. At that point, much of the main loop and rendering engine was handled using behind-the-scenes run-time, while the game logic was still handled at the interpreter level.

I could use those libraries for LED2-20, but that's not the purpose of this retro-programming project.

Anyway, I wrote up the basic MAINLOOP.BAS, and used it to test the qb.vim file, doing more fixes along the way to both and publishing the updated qb.vim file as and when I reached some kind of milestone. One thing I learnt along the way was the idea of a ``dotted filetype'' in Vim, where one can define a chain of syntax highlight scripts to load in left-to-right order. I learnt about it solely to fix the problems that NERD Commenter was having when I used qb.vim only. The problem was that NERD Commenter was expecting something called ``basic'' in the filetype system variable, and not seeing it, defaulted to using C-style comments.

And that's about it for the retro-programming bit for now.

------

In Minecraft, I've finally started building up the item auto-sorter and the two automated sugar cane farms under my mob farm. They were surprisingly easy to build, and I finished them within a couple of hours of starting. So now, when I AFK at the mob farm, I can get bones (good for bone meal for an empowered manual tree farm), arrows (good in general), gunpowder (great for the rockets needed to fly with the Elytra), and sugar cane (which can be crafted into paper to make rockets).

Maybe the next Minecraft project will be to build me some nether-rails. That one will involve some adventuring and careful math. Another project might be to use my Elytra to find some other biomes containing other useful materials that I still haven't seen, like bamboo.

Anyway, that's all I have for now. Till the next update then.

Wednesday, September 29, 2021

Fixing Stewpid O'Clock Mistakes

Okay, stewpid o'clock's claim of too subtle was itself too subtle, or ``I really shouldn't be refactoring code at stupid o'clock in the morning when I am not thinking so straight''.

Here's how it should really look:
I had the relevant search patterns defined more or less correctly, but the problem was that I had put them in the wrong place. You see, in Vim syntax highlighting, the order in which the patterns appear matters a whole lot. Usually we will structure the most general matching to create an initial assignment, and then bring up specific patterns to refine it to special case input.

So in this case, the (simplified version of) general matching is that of a Comment, which is
syn region qbComment start="'" end="$" contains=qbTodo
And the pattern for the '$INCLUDE: 'FILEPAK.BI' metacommand is
syn region qbMetacommand start="'\s*\$include" end="$"
This means that the pattern for $INCLUDE is more specific than the one for regular comments.

In my bid to refactor, I had shifted all the QuickBASIC-only patterns up to the front of the file. So it had first assigned the matching for $INCLUDE correctly, but then when it encountered the rule for comments, it re-assigned the matching to that of the comments since it was more general.

I fixed that this morning, and filled in quite a few more quality-of-life type updates, specifically detecting and highlighting some lexical-level errors where possible, as well as tweaking some edge case-related stuff that I ``know''.

Anyway, I'm basically done with this side-track for now. You can download it from here if you are interested for some reason. Note that this isn't exemplary of how a Vim syntax file should look---I did hack it up in a moment of annoyance in less than a day.

Till the next update.

Stewpid O'Clock with Basic in Vim

I wasn't expecting to do this. I never wanted to do this.

But I had to. I felt compelled to.

And so, it is done. Here we have it running in QuickBASIC syntax mode:
Font of Vim (see previous post for details) and retro light-gray text on dark blue background aside, what I had done over the past few hours was to painstakingly define the various highlight groups needed for highlighting the various keywords that QuickBASIC has (there are many). I derived this list from the handy index page of the built-in QuickBASIC help file.

On a whim, I decided to pull up the QBasic help file, check out which keywords it didn't support, and incorporate a check in the syntax highlighting definition list to display them as an error. And here is how it looks with that applied:
Notice how the COMMAND$ is now highlighted in error-red. That's because QBasic does not support that keyword which returns the arguments that were passed from the command line.

Notice also the more subtle effect on '$INCLUDE: 'FILEPAK.BI'. In QuickBASIC mode, it shares the same metacommand syntax highlight as '$DYNAMIC, but in QBasic mode, it is treated as nothing more than a regular comment. The colour change is very subtle, but it is there---in fact, I had to edit this post after publishing just to state this, because I didn't even notice it myself while writing.

It works well as compared to whatever was being distributed in regular Vim or even in Vim-polyglot. I think there might be some other edge cases that I have overlooked, but c'mon, it's stupid o'clock now. I gotta go crash out before I start to massively hallucinate.

Peace.

Tuesday, September 28, 2021

Quickie

I... got annoyed at the terrible syntax highlighting for the QuickBASIC dialect of BASIC in Vim. And so, I've been spending time working on it today, after I had read the chapter on Romans from the ESV Study Bible.

I also updated my main house in Minecraft, expanding the outside porch with smooth stone, and adding a new brick wall (more like fences from a game mechanics perspective) perimeter. I experimented a little with using firework rockets for my crossbow: I didn't like the high variance in damage, and mob densities really don't justify the need for such a large AOE attack in the first place. I also did a little exploration with my Elytra to find the treasure with a treasure map that I found, and it was cool.

My next projects in my Minecraft world would be to build an auto-sorter at my mob farm, and a couple of sugar cane farms under my mob farm so that I can easily harvest and build rockets, TNT, and other fun things like make use of the bone meal which I can get from crushing the large amounts of bones that my mob farm drops.

Anyway, that's all for today's entry. I'm going to work more on that syntax highlighting script. Till the next update.

Monday, September 27, 2021

gVim Font Fixes and Other Tangents

Okay, I fixed something that has been irking me for a very long time: why gVim on Windows could not use my favourite pan-Unicode font, Unifont, despite it being distinctly monospaced in nature as compared to regular proportional fonts.

As it turns out, the TTF marks it as ``Even Width'' instead of ``Monospaced'', which explains why despite having the font installed, I simply could not select/set it in gVim via the .vimrc file.

The solution then is to grab FontForge (I had already installed it some time back), and go through the annoying process of setting the correct metadata. I'm replicating the steps here in case the information goes away:
  1. Fire up FontForge and load the relevant TTF file.
  2. Once loaded, go ``Element''→``Font Info''→``OS/2''→``Panose'' and set ``Proportion'' to ``Monospaced''.
  3. Head to ``PS Names'' and update the various fields to not clash with the existing ones for Unifont (I just shoved a capital `H' in front for ``Hacked'', so my hacked font is now ``HUnifont'').
  4. After dismissing all the dialogue boxes, head to ``File''→``Generate Fonts'' and just follow through to generate some new TTF, ignoring warnings/errors.
  5. Install the generated TTF and we are done.
And from that, we've got a ``monospaced'' version of Unifont that can be activated in gVim. The multi-cell span of double-width characters (like CJK ideograms) is rendered correctly, as is navigation through them, so all in all, I call this a success.

On a semi-related note, after getting back into QuickBASIC on my retro-programming tour, I've fallen back in love with the stupidity that is the 9×16 VGA character font. You see, when I was writing FilePak for LED2-20, I used the iconic QuickBASIC IDE with its glorious 80×25 text mode display with blue background and light gray characters in full-screen, and that really made my eyeballs feel so at home, despite these days trying my best to avoid dealing with blue light [at night] in general.

So nostalgic.

Anyway, someone has gone through the trouble and created modern equivalents of the old school text-mode fonts, with the added proportion adjustments to ensure that what is rendered is closer to the true physical proportions from back in the day.

And this brings up a different aspect of the old days. Programmers these days are spoiled for a few reasons:
  1. Physical screen aspect ratios have been synchronised with logical screen aspect ratios, leading to square pixels; and
  2. Screen resolutions are high enough that a pixel is much closer to the ideal of a dimensionless point than before.
For the first point, things have largely been determined through the HDMI standard that unifies computer screens and television screens, defining important features for video formats that get back-ported into regular computer displays. In yonder days of computing, the computer display being solely analogue meant that the actual output signal from the DAC to the display had little to do with how the screen was logically handled, which led to interesting problems like having a 4:3 (aspect ratio ≈1.33) monitor trying to display a 320×200 (aspect ratio 1.6) screen. This meant that each logical pixel had a physical form that was a somewhat squished version. This led to interesting ``adjustments'' to draw things like circles, as seen in this official documentation in QuickBASIC's help file:
I can only imagine how complex the various early versions of CAD programs were just to accommodate the myriad odd-shaped displays while still using the logical CGA/EGA/VGA/VESA standards.

By a fluke of history, the complicated aspect ratio wrangling of the past for 320×200 resolutions is completely nullified with modern screens having exactly the 1.6 aspect ratio. It makes old programs that tried to compensate for it look wrong now, but it makes lazy programmer me, who didn't see the need to mess with all this physical stuff that I cannot control, very happy to see that the circles I kept logically correct are now physically correct as well. And this is likely to stay that way, since 4:3 monitors are not making a comeback any more. Rumour has it that the reason for the 4:3 aspect ratio came from how it introduced less variance in the curvature of the CRT (cathode ray tube, you faux ``woke'' bugger) between the centre and the edges, since it was closest to being ``circular''---I don't know if it is true, nor do I really care that much.
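The help file's circle adjustment boils down to the aspect argument of CIRCLE. A quick sketch; the default-aspect formula of 4/3 × (ypixels/xpixels) is my recollection of what the QuickBASIC help file documents:

```basic
SCREEN 7   ' 320x200, 16 colours
' On a 4:3 CRT a 320x200 pixel is taller than it is wide, so CIRCLE's
' default aspect of (4 / 3) * (200 / 320) = .8333 made circles come
' out physically round on period hardware.
CIRCLE (160, 100), 80            ' round on a 4:3 CRT, squashed today
CIRCLE (160, 100), 80, , , , 1   ' logically round: correct on a 16:10 panel
```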

As for the second point, it means that more details can be added to the graphics in a brute-force sort of way, as opposed to many of the tricks that had to be employed to work with the big-ass rectangle pixels that we used to see. There's no real problem with this, just that it really makes graphics related design that much easier since it relies more on fundamentals than on advanced trickery that is tailored to the hardware of the time.

I think that's about it. Maybe I should create a new tag in addition to technical for these retro-programming stuff.

Maybe.

Till the next update then.

Sunday, September 26, 2021

FilePak for the Caveman Programmer

Oh look at the time. I should probably write an entry before I bust the deadline.

I spent much of today working on a small library in QuickBASIC that I call FilePak as part of my preparatory work for the LED2-20 retro-programming project. Previously I had talked about how I might want to use CHAIN and COMMON as mechanisms to create overlays of the code that generates the sprites from DATA statements from the code that runs the main event-loop. It was an interesting idea, but after thinking about it while working on my design document, I realised that I probably didn't want to pursue it.

If the intent was just to refactor/rewrite the old LED(II), then the mechanism works well, since all the sprites are pre-determined and are really of the same shape, and thus it is straightforward to lay out the module flow for CHAIN and COMMON to be effective.

The problem was that as I sat down to work on the design document to flesh out my vision of how LED2-20 should work out, I had changed the calculus by quite a bit. Gone were the simplistic 32×16 enemy sprites repeated over several waves---I wanted each wave to mean something more than just a change of sprite with the same behaviour. I wanted some form of procedurally generated level layout for the waves, I wanted boss fights. I wanted terrain tiles, I wanted background tiles, I wanted parallax where meaningful.

Essentially, I would be creating an intricate call chain with the CHAIN command and associated sources just to get all these done. It is not sound engineering---it is crazy mad scientist levels of cranky engineering. I could still use a program to generate the sprites, but I can no longer use it actively in the intricate dance that I was about to choreograph.

Which meant that I would have to follow the concept of Hung's Dynamite Man, or Lianne in... The Dark Crown by DarkDread/Dark Dreams, which was to use lots of small files (sub-500 bytes each) to store the binary data for the images and masks. As mentioned before, in an era where each cluster is 4 kiB as opposed to 512 bytes, storing each of these files as-is is very inefficient and can cause heavy fragmentation. It's also an ugly mess when it comes to distribution, since it's really hard to keep track of that many files.

The solution has always been to create some kind of package file to store all these assets and provide APIs to retrieve their contents during run-time. These days, the game-engine that one purchases a license to use would have this built-in, but since I'm working retro, it's time to go retro.

Enter FilePak. It's a really dumb format. What passes as a header is just a 16-bit integer giving the number of records in the File Allocation Table (FAT), with the sign bit used to mark whether the FilePak is read-only and optimised. This is followed by that many FAT entries. Each FAT entry is a 12-character space-padded Latin-1 string representing the old 8.3 file name, plus two 32-bit signed integers representing the absolute byte positions for the start and end of the stored file inside the container, with the sign bit of the start position used as an indicator of a deleted entry, since QuickBASIC only handles file positions of less than 2³¹ anyway. Immediately after the FAT come the data themselves.
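The layout described above maps naturally onto a QuickBASIC user-defined TYPE. A sketch of reading the FAT in (the identifiers FatEntry, FName, and the file name ASSETS.PAK are mine, not FilePak's actual ones):

```basic
'$DYNAMIC
TYPE FatEntry
  FName AS STRING * 12   ' space-padded 8.3 name, Latin-1
  StartPos AS LONG       ' absolute byte position; sign bit = deleted
  EndPos AS LONG
END TYPE

DIM header%
OPEN "ASSETS.PAK" FOR BINARY AS #1
GET #1, 1, header%                  ' sign bit = read-only/optimised flag
REDIM fat(1 TO ABS(header%)) AS FatEntry
FOR i% = 1 TO ABS(header%)
  GET #1, , fat(i%)                 ' FAT records follow the header
NEXT i%
```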

It's really dumb because there is no compression, no checksum, and is not even a ``real'' file system since it doesn't free up space when an existing FAT entry is marked as deleted.

How I envision myself using it will be to generate the sprites from sketch into the correct binary format for the screen mode I am using (mode 7h, or EGA's 320×200 with 16 colours (and 8 pages)) and saving them into the FilePak using the API I wrote up. Subsequently, I will just retrieve the binary data directly into the binary buffers (represented as 16-bit integer arrays in QuickBASIC) for use later on.

The programming for the binary I/O into memory was interesting. It seemed that the 16-bit integer arrays were more stable in terms of their location---their segment:offset didn't change between when they were called and when they were used. This is in opposition to the character strings (or the STRING type)---these change in between QuickBASIC statements, and had to be ``fixed'' by poking their values directly into an integer array.

It was relatively easy to write these memory-related functions in QuickBASIC because their variables are all passed by reference. So I could just pass the array (say a()) directly into the sub-routine, and call VARSEG(a(LBOUND(a))) and VARPTR(a(UBOUND(a))) respectively to get the desired segment:offset.
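In sketch form, with sub and parameter names of my own choosing, the pass-by-reference trick looks like this:

```basic
' Because a() is passed by reference, VARSEG/VARPTR here report the
' caller's actual storage, not a copy's.
SUB ArraySpan (a() AS INTEGER, seg%, firstOff%, lastOff%)
  seg% = VARSEG(a(LBOUND(a)))        ' segment of the array's storage
  firstOff% = VARPTR(a(LBOUND(a)))   ' offset of the first element
  lastOff% = VARPTR(a(UBOUND(a)))    ' offset of the last element
END SUB
```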

While trying to support multi-segment arrays (i.e. arrays larger than 64 kiB), I ran into an interesting compiler problem---it yelled at me with ``Math Overflow'' when it saw my supposed 32-bit long integer constant of 65535. I was confused for quite a while---why would it bitch about that when I could run it in the interpreter with no problems? Then it dawned upon me: I had always used DEFINT A-Z as a default for all my programs (that and the '$DYNAMIC metacommand for dynamically-sized arrays). The reason for that was subtle: by using DEFINT A-Z, I was telling the interpreter/compiler to emit 16-bit integer opcodes when running/compiling. For those who are confused, computer processors generally run optimally when the data they are operating on is of the same size as the so-called ``machine word''. The QuickBASIC compiler existed at a time when 16-bit processors ruled supreme, and thus 16-bit operations tended to be the fastest. Back then, 32-bit ``long'' integers were also unusual, and tended to be slower. Floating point performance was abysmal.

So, for the fastest generated opcodes, one would want 16-bit integer arithmetic to be used as much as possible, especially in real mode. That same property that I had been using forever had bitten me in the ass for that particular chunk of code, because in 16-bit integer land, 65535 is a bogus number. Yes, it can be represented in 16 bits, but only when unsigned, a flavour of integer that has always remained little used and has usually been the main source of the strangest of bugs, security or otherwise. I had to convince the compiler not to do that for that chunk of code, and so for that sub-routine only, I switched to DEFLNG A-Z instead.

And it compiled and it worked. Yay.
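For reference, there are two ways out of this; the second, forcing a LONG literal with the & type suffix, is an alternative that I believe also works under DEFINT, though I went with the first:

```basic
' Option 1: make LONG the default type for this sub-routine's chunk.
DEFLNG A-Z
limit = 65535         ' now folded as a 32-bit constant: no overflow

' Option 2: stay under DEFINT A-Z but force LONG where needed.
' DEFINT A-Z
' limit& = 65535&     ' & suffix on both the variable and the literal
```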

One last thing that I added to FilePak was a simple tool that optimises it for reading. The FAT is always loaded into memory by the module, and entries are usually referred to by file name. Unfortunately, the sprite generation process is likely to be ordered in a programmer-friendly way rather than alphabetically, and so the FAT had to rely on a linear search to find the correct entry. There was also the problem of handling the internal fragmentation from file deletion and file overwrite through reusing an older FAT entry---leaving these artefacts alone meant that the FilePak was just going to keep expanding with useless data, while taking ever longer to find anything.

The optimise for reading tool simply loaded the existing FAT, sorted it by file name order, discarded all FAT entries that were unused or soft-deleted, before finally writing out the adjusted FAT with updated byte positions to a new file, thus compacting it. The sorting was important because it could then allow binary search to be applied in the look up for the FAT entry, which was much faster.
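Once the FAT names are sorted, the lookup can be a textbook binary search; a sketch over the (space-padded 8.3) name strings, with illustrative function and variable names:

```basic
' Returns the index of want in the sorted names() array,
' or 0 when not found (the FAT here is assumed 1-based).
FUNCTION FindEntry% (names() AS STRING, want AS STRING)
  lo% = LBOUND(names): hi% = UBOUND(names)
  FindEntry% = 0
  DO WHILE lo% <= hi%
    m% = (lo% + hi%) \ 2
    IF names(m%) = want THEN
      FindEntry% = m%
      EXIT DO
    ELSEIF names(m%) < want THEN
      lo% = m% + 1
    ELSE
      hi% = m% - 1
    END IF
  LOOP
END FUNCTION
```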

Unfortunately, QuickBASIC didn't have a built-in sort routine like C/C++/every other modern language, and so the old faithfuls had to come out. I started with a simple bubble sort, but quickly converted it to comb sort instead. For what I had in mind, trying to build a quicksort for this was just not worth it. However, I will write quicksort for the rendering pipeline: that's one place where having a fast sort is super important, since it can occur quite frequently while trying to compute collisions among the objects as part of the associated computational geometry.
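The comb sort I ended up with looks roughly like this; a generic sketch over the FAT's name strings (the shrink factor of 1.3 is the usual textbook value), not the exact code:

```basic
SUB CombSort (names() AS STRING)
  gap% = UBOUND(names) - LBOUND(names) + 1
  DO
    gap% = INT(gap% / 1.3)             ' shrink the comparison gap
    IF gap% < 1 THEN gap% = 1
    done% = -1
    FOR i% = LBOUND(names) TO UBOUND(names) - gap%
      IF names(i%) > names(i% + gap%) THEN
        SWAP names(i%), names(i% + gap%)
        done% = 0                       ' a swap happened: not sorted yet
      END IF
    NEXT i%
  LOOP UNTIL gap% = 1 AND done%         ' stop after a clean pass at gap 1
END SUB
```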

Anyway, that's that. Till the next update, I suppose.

Saturday, September 25, 2021

``Do Your Own Research''

``Do your own research.''

Ah, the battle cry of the anti-vaxxers, the conspiracy theorists, and a whole lot of people who cannot seem to understand what ``research'' means. It's a bit like how the true meaning of ``literal'' was bowdlerised and replaced with ``figurative'' in the sense of ``I literally fell off my chair when I heard that news''.

Let's take a step back, shall we?

Colloquially, when we do focused reading on some specific topic, we have a tendency to call it ``research''. That alone isn't a problem; after all, doing focused reading is a great way to find out what the world has to say about the specific topic.

The true problem though, is two-fold.
  1. Is the material being read valid and true?
  2. Do I have enough background information to understand and evaluate what I read critically with respect to its validity?
Let's address each aspect separately.

The validity of the material seems to be the easier of the two, but in reality, it is probably the hardest of all. In the academic world, where research is the business, the validity of material rests on two traditions, namely the scientific method and peer review. The former outlines a methodology for establishing empirical facts that welcomes anyone with the prerequisites to attempt to reproduce the results, while the latter brings together trained experts to poke at the material critically, thereby strengthening it through improving on the methods/arguments used, or culling material that does not stand up to scrutiny due to weakness in method, argument, or even assumption. Because of this bias of writing for others in the profession, there is a tendency for the material to use technical jargon that simplifies for the expert but can confuse the uninitiated. Part of the PhD/MSc training process is learning how to read these scientific papers, determining what the authors are saying, and more importantly, what the authors aren't saying.

A generally good proxy for the validity of a piece of material is its longevity: the longer a paper survives with a high citation count, the more likely it is to be treated as ``axiomatic''. The caveat is that a citation need not be an endorsement---the paper could be cited as a negative example to be debunked.

Notice how I limited the materials to be researched to that of scientific papers. The chief reason is that such materials have many expert eyes looking at them, and there is an intellectual bar to clear just to have it published as a paper. A blog/Facebook/Twitter/Instagram entry saying some platitude comes nowhere close to requiring any form of systematic inquiry. One hundred thousand social media quotes mean almost nothing compared to one peer reviewed paper.

Going on to the second point on whether I the reader have enough background information to understand and evaluate what I read critically with respect to its validity, it really depends. Remember that the materials that are highly valuable in terms of validity are often written as communiques to other researchers in the field, with each of them having undergone at least an undergraduate education in the field or its allied fields. This means that there are many unspoken assumptions about the background knowledge of the person who is reading the material. It's not meant to be read by ``mere mortals'' who are untrained in the field, and it's definitely not meant to have random sentences quoted out of context to advance some personal agenda. Another subtle point is that the scientific publishing process requires hypotheses to be simplified enough to fit into the page limit, which means that more often than not the conclusions drawn in any single paper are heavily conditioned, with a fuller picture emerging only after doing meta-analysis across multiple papers over time.

My beef then with ``do your own research'' as yelled by the masses is that their approach is wrong-headed. They replace quality [of the materials being researched] with quantity, with the more hardworking of them believing that they somehow have sufficient background knowledge to make their own judgement about something completely outside their field of expertise.

Sadly, they are also the same people that like to quote random verses of the Bible out of context just to ``prove'' their [twisted] point.

Even more sadly, they are a very vocal bunch, and as condescending as it sounds, they will drag everyone down to their level, and then apply as many logical fallacies as possible to point out how everyone else isn't ``doing their own research'' while maintaining a smug demeanour. This is twice as true for things that have a time-critical component, like the various advisories on behaviours and the effectiveness/efficacy of various vaccines and their associated administration protocols. Those with the training and relevant field experience take a more cautious and measured approach towards interpreting each research outcome as it appears, updating their understanding accordingly. The ``do your own research'' crowd, on the other hand, latches quickly onto a narrative that they prefer and pontificates upon it, never entertaining the possibility that it may be wrong and may be superseded by a later piece of research whose hypotheses specifically target either strengthening the previous conclusion or, more likely, showing how the conclusion can be wrong for a particular circumstance that is more relevant than not.

But try explaining that to the ``do your own research'' crowd. They are worse than trolls---at least for trolls, we know that they know better and are just behaving that way to egg on people's [over-]reactions. They are worse than trolls because they truly believe in something that is wrong, refuse to admit that they are wrong, and more importantly, still want to coerce everyone into following their perspective.

That is how humanity goes to hell. One ignorant shouter at a time.

I think I'm gonna lie down. Am feeling feverish from this confusing weather---it was bloody sunny, then had a random convectional rain moment while the sun was still shining through brightly, before going back to being blusteringly hot. Urgh. Till the next update then.

Friday, September 24, 2021

We Retro-Programming

It's Friday. The day's been yo-yoing between bloody hot and bloody cool to the point that I am feeling feverish.

I've started working on LED2-20, the mini-project that is a ``remaster'' of the concept in LED(II) using the same tools, but with 20 years of understanding. I am seriously considering if I want to use an overlay-esque approach where I have a smaller program (with the sprite data) generate the bitmaps and bitmasks from DATA statements, storing it in an unnamed COMMON block before handing over to the primary game-play loop program via the CHAIN command. This saves conventional memory from all the DATA statements, among other things.

It's similar in concept to the Dynamite Man game in QBasic by Hung. It's a Bomberman clone written in pure QBasic, and is of a very high quality. There are some things that I would like to avoid that Hung did though.
  1. Dynamite Man uses many small binary files saved/loaded with BSAVE/BLOAD for the sprites. I'd like to avoid doing that because of the fragmentation it will induce on modern drives---each cluster is about 4 kiB, while each of these sprites is about 300+ bytes, which means that having a directory full of these small files is super inefficient.
  2. Dynamite Man is careful to limit itself to using a single screen with static background and sprites that move within the screen, while I'm thinking of something that involves a moving background with/without parallax in side-scroller action. So, what I have in mind is technically more complicated than what Dynamite Man has done.
I've written up a coarse design document on what I would like to put into my rewrite, and am excited with working on it semi-old school in DOSBox. I do use Vim outside of DOSBox to sort out some other things where it works better, but that's alright since the purpose is to apply experience to improve upon what I had done before, not necessarily to relive the old days faithfully.

I don't have much else to talk about for today, and so, till the next update.

Thursday, September 23, 2021

Maybe It's Time to Retro Program

You know, all this nostalgia about computing has started to manifest itself as some kind of idea in my head. But hold on to that thought for the moment.

Here are some screen shots of an old, old game that I wrote back in the day using QuickBASIC: LED(II). What is LED(II)? It's the second program after the first (now lost) one called LED, which was a circle-shooting game using a circle cross hair controlled with the keyboard over a screen. And why is it called LED? I don't know; my best guess was that it was just some quick random-ish bit of text typed when I had to quickly save the file in the old 8.3 format... I did eventually backronym it as ``Legendary Enemy Destroyer'' for LED(II). In the header information of the source file, I even had this back story written:
The Legendary Enemy Destroyer (LED) of Earth is on a hunt, to hone its skills. Little did it know that at the very moment, waves of extraterrestrial objects come hurtling towards Earth. The nations are caught with their pants down, and so it's all up to LED to deal with the invasion.

You are the LED. Using the keyboard, you control the LED's direction of movement, acceleration and weapons. The LED has an arsenal of powerful energy based weapons, but choose the most efficient one for use.

Survive the waves to save Earth from impending doom.

Good luck!
Eh, the story's corny, but that's fifteen-year-old me talking. I'm gonna give past-me some slack.

Anyway, here's the title screen:
Here's a typical ``wave'':
And here's the game over screen on dying:
It's a very simple 320×200×16-colour game that mimics the side-scrolling things, except it was done by a fifteen-year-old me during my spare time. The old program logs said that I started on the sources for LED(II) on 2000-03-05, and finished the first playable version on 2000-03-15, followed by on-and-off changes and updates up to 2000-12-30. The next time that I updated it on and off was in the latter half of 2002, when I added some super-minor cheats into the source while attempting to tweak it to work better with faster machines.

Man, so many things were just plain wrong. For instance, see how the sprites are drawn on screen? The short answer is that I was using XOR mode for ease of ``non-destructive'' sprite drawing. Confused? Here's a screenshot of the relevant statement in the QBasic help file:
The correct way is to use two bitmaps---one serving as the bitmask to AND with what is on the screen (leaving background pixels intact while wiping out the pixels that need to be overwritten with those of the sprite), before applying the sprite itself with an OR. In addition to that obvious problem, there was the other issue of using busy waiting in various forms to simulate appropriate amounts of delay (one was a brain-dead do-nothing counter loop; the other was a slightly less brain-dead polling loop on the TIMER value, which returns the current time of day in seconds with resolution no better than 1/18.2 s, courtesy of the IBM PC clock standard of 18.2 ticks per second), and a laggy-as-hell handling of input commands because it used INKEY$ in the delay-driven polling instead of relying on something that was more event-driven.
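For reference, that two-bitmap technique maps directly onto the action verbs of QBasic's PUT statement. A minimal sketch, with array names and coordinates of my own invention, assuming both images were captured earlier with GET:

```basic
' Assumes a graphics SCREEN mode and two images captured earlier with GET:
' MaskImg% is black (all zero bits) in the sprite's shape and all-ones
' everywhere else; SpriteImg% holds the sprite on a black background.
DIM SpriteImg%(1 TO 130), MaskImg%(1 TO 130)
' GET (0, 0)-(15, 15), SpriteImg%
' GET (16, 0)-(31, 15), MaskImg%

PUT (x%, y%), MaskImg%, AND     ' cut a sprite-shaped hole in the background
PUT (x%, y%), SpriteImg%, OR    ' fill the hole with the sprite's own pixels
```

Erasing is then a matter of re-drawing the saved background patch with PSET, rather than XOR-ing the sprite a second time.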

🤦‍♂️

In short, it was a flaming pile of horribly patched code.

QBasic actually has a pretty useful set of event-driven mechanisms. Some examples include:
  1. ON ERROR GOTO line: used for error handling (I've used this before);
  2. ON COM(n) GOSUB line: for characters that are received from the communications port n (I've not used this ever);
  3. ON KEY(n) GOSUB line: for keystrokes on key n (I've not used this, even though it is interrupt driven!);
  4. ON PEN GOSUB line: lightpen-related activity (I've never seen/used one);
  5. ON PLAY(n) GOSUB line: interrupts when the background music buffer has less than n notes in it (I used it in LED(II) for the sound effects);
  6. ON STRIG(n) GOSUB line: activity on joystick trigger n (I've never had a joystick, so I didn't use this); and
  7. ON TIMER(n) GOSUB line: triggers when n seconds have passed since the enabling of the event.
All the ON event GOSUB line statements define the handler to run when the event in question occurs, with the event trapping itself enabled with event ON, suspended-but-queued with event STOP, and completely ignored with event OFF. Since they are effectively interrupt driven, they are likely to be more responsive than the messy-ass main loop that I had been approximating before.
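As a refresher for myself, the general shape of these traps looks something like this (the flag and label names are made up for illustration; 11 is the predefined trap number for the up arrow):

```basic
DIM SHARED KeyFlags AS INTEGER    ' bit-wise flags set by the trap handlers

ON KEY(11) GOSUB UpArrow          ' handler for the up-arrow trap (key 11)
KEY(11) ON                        ' enable trapping for that key
ON TIMER(1) GOSUB Heartbeat       ' fire roughly once per second
TIMER ON

DO
    ' The main loop consults KeyFlags ``all at once'' here,
    ' instead of polling INKEY$ for keystrokes.
LOOP UNTIL (KeyFlags AND 2) <> 0  ' pretend some handler sets bit 1 to quit
END

UpArrow:
    KeyFlags = KeyFlags OR 1      ' record that the up arrow was pressed
RETURN

Heartbeat:
    ' periodic housekeeping goes here
RETURN
```

The handlers stay tiny and merely set flags; all the real work happens back in the main loop.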

I think I didn't use them for a couple of reasons. Firstly, I wasn't really familiar with the event-driven paradigm when I was fifteen---recall that I had only written programs for solving algorithmic problems in competitive programming, and even then, only from when I was thirteen. It was not like those who taught me did any form of ``serious'' programming that would use such paradigms in the first place. ``Event-driven'' without any GUI elements is something that one is more likely to see when writing servers (I wrote my first server program using the interrupt-like event-driven loop only when I was in university), and not in such console-driven programs. Secondly, there was also a purity issue at some level: each of the event-driven definitions was tied to ``old'' BASIC, where the target of the GOSUB was a line number or a label---something that I didn't understand/like, since I was exposed to the newer modular-styled SUB and FUNCTION in QBasic, as opposed to the ``assembly''-styled line-number/label sub-routines. It didn't make sense in my head then, and so I avoided it, and explored the complicated world of the strings returned by INKEY$ for keys outside of the regular printable characters (like the function keys, and the very important arrow keys). The QBasic help file contained a list of keyboard scan codes that were to be used with the ON KEY(n) GOSUB line statement, but to me then, it seemed irrelevant since I thought that I needed to write a customised assembly-based keyboard handler to use these ``raw'' codes.
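For context, those special keys arrive from INKEY$ as a two-character string: a CHR$(0) prefix followed by the keyboard scan code. A quick sketch of the kind of decoding involved (72/75/77/80 are the standard arrow-key scan codes):

```basic
' Poll once; INKEY$ returns "" if no key is waiting.
k$ = INKEY$
IF LEN(k$) = 2 THEN                  ' extended key: CHR$(0) + scan code
    SELECT CASE ASC(RIGHT$(k$, 1))
        CASE 72: PRINT "Up arrow"
        CASE 75: PRINT "Left arrow"
        CASE 77: PRINT "Right arrow"
        CASE 80: PRINT "Down arrow"
    END SELECT
ELSEIF LEN(k$) = 1 THEN
    PRINT "Regular key: "; k$
END IF
```

Workable, but it only ever reports one key per poll, which is exactly why chorded keys were impossible to track this way.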

Well, it's more than twenty years later. I feel like I can do a much better job than fifteen-year-old me using the same tools, and I think I will actually go about doing it.

In fact, I might see if I can do it in regular QBasic instead of QuickBASIC, keeping the source code small enough to fit into one file (as opposed to the current 2-file monstrosity just to ensure that it can be compiled). Let's see if twenty years of experience can make a difference given the same tools.

Now, going back to the initial statement of nostalgic computing feelings, the thought came to mind: why not do retro-programming for fun? I mean, there are hobbyists who are building custom ROMs for the old game consoles that they grew up with and loved, while I grew up with MS-DOS and loved it through and through. So why not go back to that as a hobby, building new things using the old tools, especially since I now have the means of publishing what I have done?

I mean, people have made ``fantasy'' virtual machines like the PICO-8, which has severe limitations to spur creativity. The venerable IBM PC with its 640 KiB limit is no PICO-8, but it is a beloved platform that I am quite intimately aware of, having spent my formative years poking at it. Compared to my beefy modern machine of Eileen-II (32 GiB of RAM with 6 hyperthreaded cores at a base clock speed of 2.6 GHz means that I can shove 32k memory-instances of the IBM PC into it, each running at least 104× faster than the maximum speed of the 80286), it is as good as a fantasy machine anyway, since it is very unlikely that I will ever have real hardware for it. I mean, I am using DOSBox to run the original LED(II) to get those screenshots.

Right now, I'm thinking that when I release these retro-programs, I might just build them into the traditional DOS executables, relying on DOSBox to supply the emulation layer. If things are in QBasic/QuickBASIC though, I can have the option of using QB64 or FreeBASIC to build something that can run natively in uhhh... Windows, Linux, MacOS.

Screw that, I think I'll just release it for DOS. Windows/Linux builds I can probably test, and might do so for my own personal edification, but I have no way of testing MacOS builds. Moreover, it feels less retro if I'm doing this, and it is likely to eat a stupid amount of space on the web server just to hold on to these files (a quick manual conversion of the original QuickBASIC sources of LED(II) to compile in QB64 yielded a file that was 2.68 MiB in size, compared to 151 KiB from the native QuickBASIC compiler).

Between QB64 and FreeBASIC, I've used FreeBASIC before, specifically to build a Windows version of my take on the ``Matrix digital rain'', which itself is an adaptation of my DOS text-mode version of the same thing. I liked that program because it used various VGA routines to change the font and the colour palette to fit the ``Matrix digital rain'', without ever running off into graphics mode. I used it to generate some of the special effects that were needed in the ``scholar's film'' that we had to create back in the day for the scholarship presentation ceremony.

Ah... so much nostalgia.

Anyway, this entry is getting too damn long. I'll stop here, and go mess more with Minecraft. I've been to the End, demolished a small End City for all the Purpur Blocks, found the End Ship, grabbed the Elytra and Dragon Head there, and amassed some awesome Shulker Boxes that I had sorted out earlier today.

Till the next update.

Wednesday, September 22, 2021

Damaged Goods

We are all ``damaged goods''. Some of us are damaged from the experiences that we had, some are born damaged in ways that we recognise. But all of us are damaged in the sense that we are mortal---we are born to live, and then to die once again, the only mortal record of our existence is that temporally-limited trail of influences we have on other mortal beings, dead or alive.

Only one perfect human existed, and even then, he was also divinity as well, for the single great rescue plan for us sinners. Sadly, this post isn't about him.

It's about how we all still need to live somehow while being ``damaged goods''.

It was easier to do so in the earliest of days, since we were a part of nature. Petty quarrels stemming from who should ``do the dishes'' had no place in a time where a moment's unawareness of the entire tribe signalled a bloody brutal end, either at the hands of some other tribe, or at the hands of the non-human animals that hunger enough to hunt the most dangerous hunter they have ever known.

Everyone understood power then---power meant the strongest, the fleetest, and anyone who could ensure the long term survival of the tribe either through healing, or good planning (foresight). Disputes could be easily (but brutally) settled with a simple fight, with the mightier one prevailing. Fights would get bloody, but it was unlikely to be fatal---the tribe would always need their strongest fighters around to defend against the other tribes after all. It was the type of law that was obeyed, and there probably wasn't much time to wax lyrical about rights and equality to the levels of abstraction beyond whatever that could be seen and touched then.

Now, most fights are abstract in nature, generally for the better. ``Might is right'' has been checked by ``legal is right'' in the past two centuries, though the original law still lurks in the shadows, waiting to come back into play the moment that ``legal is right'' is no longer respected.

That last point is something that many people in recent years don't seem to ``get'', despite being all woke and what-not. The mega-societies of today operate the way they do because we have decided to share the same hallucinations on how the abstract rules control us. The abstract rules are what keep us from degrading into a purely instinctual form of ``jungle law'', and they only hold because we collectively choose to allow them to hold. The abstract rules can be changed---they have been changed. Unfortunately, the process of change is usually a long one, since it requires re-hallucinating the majority of society into believing that the change is a better version, and that takes time and effort.

But all that abstract rule change hallucination mumbo-jumbo is nice and dandy only in the high-level abstract sort of way; in the end, what we will experience is still the low-level inter-personal interactions that we have. The mighty power of a country's military or even police means nothing until the said military or police starts knocking on one's own door, demanding from them actions that may not be legal/ethical.

Should one comply then?

If idealism is your schtick, then perhaps non-complying with the order might be the option you'd choose. However, the consequences of that could be injury (physical or otherwise), harassment, or even [immediate] death.

If pragmatism is your thing, then maybe compliance is the way to go, that is, until one is safe enough from immediate danger to start activating all the other checking and balancing procedures to retaliate against the order.

As they say in self-defense circles:
Not here, not now, not you.
It takes one person to realise that the system is corrupt and needs to be cleansed. But it takes the continuous effort of a substantial segment of society to actually fix the system. Those in power know this very well, and so they apply one of the oldest strategies in the book to prevent that massing of people: the old divide and conquer approach.

But I digress severely. This post wasn't supposed to be about power---it was supposed to be about people being ``damaged goods''. So let me return to that now.

Everyone's ``damaged goods'', and everyone's trying to make some meaning for themselves as they crawl about this dirt-ball generation ship travelling through space. Many try to seek their ``significant other'', looking for companionship that can last till [hopefully] the end of their journey, sharing resources and pooling talents together to form a family unit. Most don't really care how their ``significant other'' is found, they just want one, like some kind of magic genie wishing thing. Post-enlightenment folks talk about high-brow values of romance and compatibility, while everyone knows that a key feature that has been euphemised with post-facto ``values'' is that of lust. Lust. Lust of the flesh.

Feels wrong, somehow.

Eh, I'm a ``damaged goods'' myself too; it's not like I'm somehow magically better than everyone else. It does feel like that I am about to learn of a new decision point, even though I have been keeping it away with that tricky little word called ``hope''.

But that's for next time.

Tuesday, September 21, 2021

Nostalgic Reading at Bar

It's about a couple of days into my ninth month of my sabbatical proper. I feel energised, with a little bit more positivity towards what the future might bring, even though I am not ready to start the much maligned job search process.

Or it could be the natural high from visiting my favourite bar, drinking 4 pints of Guinness, and reading me some computer history articles from The Best of BYTE: Two Decades on the Leading Edge, edited by Jay Ranade and Alan Nash. The Best of BYTE was a book that I had read back in 1999 while I was still in secondary school---I loved that book because it succinctly captured the early history of the now-ubiquitous microcomputer. I remember that it was from there that I learnt of the fast Hartley transform, comb sort, and saw my favourite stack language Forth for the very first time, amidst the voluminous hardware reviews of the day (think the Intel 8086, the 80386, and also the Motorola chips that Apple used in the old Macs).

I finally came to own a copy of that book through a library clearance sale at the resource library back in I2R some time in 201X, for the price of a single dollar.

It's a nostalgia read for sure. I've also dug into my ancient collection of software, and set up DOSBox to run them. The good old QuickBASIC IDE ran well, as did some of my old favourite text editors (like the now-defunct Program Editor (Pedit) by Goldshell Media---the link provided goes to a different company by the same name now). The last time that I ran any of these things was back when Windows 98 ruled the Wintel PC space---I vaguely remember using them less and less as Windows XP became a thing, for a couple of reasons:
  1. By then, MS-DOS and QBasic were starting to be less relevant [from the competitive programming front] as Linux became more prevalent, its toolchain reaching DOS in the form of djgpp; even the venerable Turbo C++/Borland C++ and Turbo Pascal were giving way to gcc and Free Pascal respectively.
  2. My eventual drifting towards Cygwin as my final solution towards bringing Linux-like toolchains into the Windows environment.
To top it off, I was spending much more time working on competitive programming related stuff than just writing programs for fun, something that old QBasic/QuickBASIC was excellent for, with its rustic but comprehensive standard library.

Did I forget to mention that I had also started mastering Vim as my primary text editor, which contributed in no small way to my abandoning some of the old tools from the MS-DOS era?

Those were the fun times, when I was literally a fourteen-year-old. I learnt a lot about the MS-DOS platform, from the different interrupt calls (oh my good friend INT 10h, how I've missed you!), to specific segmented memory locations for faster I/O (hi b800 memory segment!), to integrating this low-level information into good old QBasic for that speed (hi CALL ABSOLUTE, together with friends VARPTR/VARSEG and DEF SEG).

Nowadays... well, we can't do stuff like that any more. The computer is no longer ``open'' for us to explore. There isn't really a user-friendly programming language/IDE available out of the box that has a simple enough interface to work with. Even doing the traditional ``hello world'' is a messy exercise that requires some rudimentary understanding of objects and other crazy stuff, not to mention the need to download some multi-megabyte nonsense juuust to write and run a program (no, not even Python escapes this criticism; Javascript in the browser is not a viable alternative either, since it requires exactly the messy amount of a priori technical knowledge that I was referring to).

I mean, just look at how stupidly easy it was in QBasic:
PRINT "Hello World"
That was it---you type it in a nice little IDE.
Then hit the F5 key (or click on <F5=Run> with the mouse, or use Run⇒Start via the menu). And boom!
You get to see the output, with a nice line to tell you the IDE is waiting for you to press any key to continue. No need to type arcane command line options to compile, manage ``project files''/make files. The IDE even comes with its own comprehensive help system on the QBasic language---all without the need of the 'net. (If you want to play with it, grab it from the Internet Archive. The file is a self-extracting executable that should open nicely under your favourite compressed file program.)

Sure, one can always do it with an old-school batch file in Windows, but more likely than not, the console window will show up fleetingly and then disappear. And writing batch files is the worst kind of ``programming'' that can be done, since the shell is really a command ordinator (hello, my lost French!), a more serious affair than frivolous exploration with, say, QBasic.

I mean, I can go on about QBasic (it is the first programming language/environment in which I learnt my foundations from zero, after all), but I won't. The point is that times are so different now.

Everyone wants to build an app for this, an app for that. Even those who go into those ``coding bootcamps'' don't really learn the joy of figuring stuff out through understanding---it's about hopping on the latest trend to make the alleged big bucks and bail out with their new line item in their resume to hop on to the next bandwagon.

Urgh. I don't really want to talk about it any more. Till the next update then.

Monday, September 20, 2021

Smoke Gets In Your Eyes

From Smoke Gets In Your Eyes & Other Lessons from the Crematory by Caitlin Doughty:
In 1961, a paper in the Journal of Abnormal and Social Psychology laid out the seven reasons humans fear dying:
  1. My death would cause grief to my relatives and friends.
  2. All my plans and projects would come to an end.
  3. The process of dying might be painful.
  4. I could no longer have any experiences.
  5. I would no longer be able to care for my dependents.
  6. I am afraid of what might happen to me if there is a life after death.
  7. I am afraid of what might happen to my body after death.
Let's see...
  1. So... just wait till the relatives that I care about are no longer around to care about it, and maybe reduce the friend list down to the bare minimum, and even then, keep only those who can understand where you might be coming from when you declare that you are done with life and would like to embrace death.
  2. If one is contented, then there are no more plans or projects that are left to be continued, for the definition of contentment is to not have things that are left undone/uncompleted.
  3. Only if we allow ourselves to suffer the ravages of time. If one can choose euthanasia/suicide, it can end relatively painlessly for the self.
  4. I don't think this is a problem. Experiences have a way of repeating themselves after a while in the sense that ``everything is just a rehash of something that I had seen before''. With the 'net being a real thing nowadays, information and experiences can be obtained at a much reduced cost than before, which allows experiences to be gotten in larger quantities for lower costs in time and money.
  5. Eh, don't have children? 🤷‍♂️
  6. That's easy to solve. Believe in Jesus Christ as your saviour, and your after-life's existence is assured.
  7. Eh, ashes to ashes, dust to dust. Only the soul (as information representing the ``you'') gets moved on---the atomic components that make up the flesh body have to return to the star-dust form from which they came, to be reconfigured into some other life-form in the future.
Seems straightforward to me.

That's all the update I have for now. Till the next one.

Sunday, September 19, 2021

Upgrading Vertical Movement in Minecraft

I upgraded my mob farm to allow auto-harvesting of drops, as well as preventing spiders from spawning (I don't need more string than I can get from normal mob-whacking). I also added an AFK box for the auto-harvesting to take place in, and added various fast water elevators to both the mob farm and my hill-side base to cut the time it takes me to get from the lower levels to the operational levels that I care about. I only built the upward-going ones, relying on gravity and a single water source block at the bottom to nullify fall damage. The magma-block speed version is quite annoying to use, since the end point requires remembering to crouch to exit, in addition to the damage ticks that come from standing directly on the block despite being in water.

I could only build the water elevators now due to a much-needed resource that wasn't available before: kelp. Kelp needs to grow in water, running or otherwise, but once planted in a water block, it will convert that block into a water source block. This is important for the upwards water elevator because the mechanism relies heavily on having the entire water column saturated with water source blocks, and the traditional way of doing it with water buckets out of an infinite water source is just... not possible, mostly because the water column is tall, which makes dumping water source blocks throughout all of the blocks it encompasses an exercise in frustration. Conversely, kelp can be easily laid down: start with a water source block at the top of the column, wait for the water to flow to the bottom, plant the first kelp into the ground, then enter the water column, holding the right-click button to lay out kelp one on top of another until one reaches the top.

Clearing the kelp is even easier: just thwack the kelp at the bottom, and the entire kelp column comes free, leaving behind only water source blocks. And the kelp conveniently floats to the top. So if one had constructed the upwards water elevator correctly, one can just hop into the now working water elevator to head to the top at high speed and pick up all the kelp for use at another day.

Anyway, I needed the auto-harvesting for more gunpowder from the creepers to create TNT. I need that TNT to go find the ancient_debris deep in the Nether, to smelt into netherite scrap as part of a chain of steps to create Netherite equipment, the highest tier available in-game.

That said though, the work with redstone to get the mob-farm working and this entire production process is making me itch for either Factorio or even Kerbal Space Program, two massive time sink games that I had previously not played because they used the same brain cells as what I used at work.

Maybe if the work I do to pay the bills relaxes the use of those specific brain cells, I can finally start scratching that itch with these games. Hmmm...

------

In other news, I've completed my reading of The Sports Gene: Inside the Science of Extraordinary Athletic Performance by David Epstein. It gives a good summary of the evidence for nature versus nurture, demonstrating that any extreme position on which contributes more to [athletic] success will run into limitations very quickly, and that it is a combination of the two that leads to success. I doubt this is anything new, but at least it tries its best to gather information and references in one place, which can assist readers in constructing their own understanding as more research turns up over time. While the book limits itself to sports as the motive for understanding how genetics/training affects sporting outcomes, some of the conclusions thus drawn have their corollaries on the disease front as well, even though Epstein was careful not to speculate in that regard.

The key takeaway is that while genetic research reveals new connections for us to learn about how genes can affect performance, it also reveals that the proverbial ``book of life'' in our genes has a level of complexity that may be beyond basic understanding---the more we understand, the more we realise that we do not really understand. Perhaps the true answer/model that can explain why we end up being whoever we are might be orders of magnitude more complex than the simple cause-and-effect type of reasoning that we have grown to expect from the early successes of intellectual endeavours.

Anyway, that's all for now. Till the next update.

Saturday, September 18, 2021

Peeled Toe Nails

So I did the Singapore Botanic Gardens walk way back in July, and what I didn't write was how I developed blisters under the nails of my two little toes.

Yep, blisters under the nails. I didn't feel comfortable leaving them alone, because I figured that un-dealt-with blisters under the nails had a high chance of causing the said nails to be ripped out if they snagged on something---I knew from experience that the skin atop a blister of any sort has much more give than a regular piece of skin. The way out was obvious.

I burst them carefully, washed them clean, applied antiseptic, and prayed hard that the nails wouldn't spontaneously decide to rip themselves out in the mean time, repeating the process until they were no longer weepy. Thankfully, there was no infection---my disinfection process was good.

The nails didn't rip themselves out, but they were in the strange situation of still seeming to be detached from the nail bed. I was worried about having things trapped there and causing infections of all sorts that I could not easily take care of (think fungus in addition to regular bacterial stuff), and so I stayed on the down-low at home, taking extra care of them. The nails were still growing in length, and I was extra cautious in trimming them down to avoid the dreaded snag-rip.

Recently, nearly two months later, the nails finally fell off.

Saying that they ``fell off'' isn't particularly correct. It's more of a case of them ``peeling off''. It happened accidentally for one toe. While checking on it, I found the gap between the nail plate and the nail bed to be suspiciously wide with no pain, and gingerly touched it. The nail peeled off the nail bed almost cleanly with my gentle checking, with just a little that was still holding tightly past the cuticle. I gently trimmed that part of the peeled off nail as closely to that cuticle as I could to avoid that ripped flesh scenario that was making me pay very close attention to the nail in the first place.

Beneath it wasn't raw flesh---it was a thin but serviceable new nail that had grown. Cool!

I waited for the rest of the day to observe if there were any complications and found that there were none. Curious, I started to fiddle with the other toe nail to see if I could remove it as well to avoid having to worry obsessively about it.

Interestingly enough, it too had developed a thin new nail over the nail bed, and was already detached from it. The key difference was that its shear edge along the cuticle wasn't as brittle as the other nail's, so I aided it along with gentle, repeated flexing.

And with that, both the nails that had come from the blisters under them were successfully removed from my toes without any other injuries, which reduced the risk of them ripping unintentionally.

Good riddance.

Friday, September 17, 2021

Still Nothing to Report

I started on The Sports Gene: Inside the Science of Extraordinary Athletic Performance by David Epstein. It looks into the nature versus nurture debate with respect to the abnormal [relative to average humans] athletic accomplishments of athletes.

I also started working on the mob farm in Minecraft. It's a pain in the ass.

That's all.

Thursday, September 16, 2021

Min[de]craft

Why is euthanasia/suicide frowned upon? Why do people claim that ``he/she has died so young'' as opposed to celebrating the life that the person has led till his/her death, especially in cases where the life was ended by their hands? Why is it that some people can believe that God has plans for each of us and cannot accept that perhaps God's plans for that person was to end his/her life at the time that he/she did by his/her own hands?

Maybe the reason isn't so much that they care about the person who had chosen euthanasia/suicide. Maybe the real reason is the innate selfishness of their own feelings, of how they might feel once they learn of this person's death. Most will claim that it is a loss if that person dies, but what is often unsaid is why they believe that it is a loss to them. Almost all will say things to that extent as a form of reflex, a guilt admission that they were afraid of death and of dealing with the death of someone that they knew---that their hitherto unquestioned ``immortality'' was now starkly disproven in their faces, that they too, like the person who passed, will die.

In a society that celebrates go-getters who have clearly articulated goals, it seems quite paradoxical that there is such a strong distaste for people who choose to end their lives on their own terms. Isn't choosing a specific means and time to die itself the ultimate hardcore goal, compared to woolly ones like ``make my first million by thirty''? Isn't the person who follows through with the plan for euthanasia/suicide one who has the same amount of determination and never-give-up attitude as that serial entrepreneur?

If so, why the double standard? If celebrating death is something that is a little hard to stomach, then providing that modicum of respect for someone's choices might be the right way to describe things.

Note of course that I am referring to euthanasia/suicide, not ``suicide attempts''. ``Suicide attempts'' differ from actual euthanasia/suicide in that they are often poorly thought through, impulsive, and are more of a cry for attention than a resolute step. Those who attempt deserve all the help they can get, and perhaps society needs to relook at its values to determine if they have strayed away from what is healthy to inculcate into its members.

But for those who have chosen with grim determination their time and manner of death, complete with using high success rate methods, they should be respected. Because it takes a certain amount of guts to tell the world to fuck off and go on one's own way into the great unknown.

Why would someone choose death over other things? Maybe it's because there just isn't anything else to look forward to in life anymore. This could be from too much physical/emotional suffering from incurable diseases that give a constant negative quality of life, or even having reached the end of a long road mostly satisfied with nothing else to look forward to.

That latter bit is usually associated with the aged, but somehow I don't think that it is limited to only the aged. It is possible to live long enough without hitting age numbers that can be considered ``old'', and it is something that only the person involved can determine; anyone else saying otherwise is just projecting.

If living has no meaning other than waking up, eating, staring out into space, clearing waste products, showering, and then sleeping, repeated day after day, then perhaps it might be time to just go. Why be held back by some unsubstantiated bullshit feel-good optimism that ``it will be better in the future''? I would argue that living such a meaningless existence is no different from being put on life support after all ways of curing a disease have failed (it's terminal)---it can be considered cruel.

In most cases, those with terminal diseases are kept on life support for a long time because it is the will of the people around them who demand it, not they themselves.

Selfish bastards.

It's the same with the ``but you have so much to live for!'' type of talk that one might get when one declares the time and manner of one's chosen death. It's a basic disrespect of another person's life---they have literally never lived in that person's shoes, and never considered their perspective before the declaration of a firm end date on terra firma, so why would their last exhortations be of any relevance at all?

If anything, I would say that those people are the ones who are hallucinating. They would rather pretend that they aren't going to die, living each day as though they were immortal, and then are bloody surprised in the end when they really do die, claiming things like ``but I have so much I haven't done!''.

Well yeah, no shit, Sherlock. You failed to plan, so you have been planning to fail.

For the record, life is one-hundred percent terminal. We all [physically] die in the end. Now whether we die spiritually depends on whether we have been saved by the Lord, which is a different question altogether.

But let's flip the question around a little. All this seeming glorification of euthanasia/suicide rests on the rather strong assumption that the person in question is rational: that he/she has actually thought through everything that mattered, and has reasoned about why this was the right path to take, without devolving into some kind of emotionally charged state. That last property is the linchpin of the defence, and is the one that can be manipulated by others who may use euthanasia/suicide as a cover for mischief.

I mean, if euthanasia/suicide is less frowned upon, I can almost guarantee that in places where this is true, there would be an increased amount of euthanasia/suicide of people who are ``problematic [to society]'', complete with water-tight reasons demonstrating that the person in question truly wanted it, as opposed to being in an emotionally charged state or being outright murdered in the first place. So, to defend against those who would usurp this power, there is some unspoken consensus to condemn euthanasia/suicide at the societal level.

That said, if one is not of society, then the rules apply only if one chooses to let them apply---a single individual can break any/all rules of society, with the associated consequences, after all. So if one is determined enough, there should be ways to achieve the end goal of euthanasia/suicide.

Just a little food for thought.

------

On less morbid matters, I built up my apartment in Minecraft and prettified it a little with the carpets that I made from the wool from my wool farm. I also dug more digital cobblestones and friends from my deep mine while simultaneously watching more Hololive videos, and ended up in some long and convoluted cave system, which I proceeded to explore and light up to prevent spawns. It's probably the third or fourth day of not reading anything substantial, and it is a nice break in my break.

I can't decide whether I want to build a mob-farm or go explore the End next. If I do the latter first, I can potentially get Shulker Boxes, which will make transporting material for building the mob-farm way out in the Deep Ocean biome much easier. But if I did the former first, I'd have a ready source of powerful enchanted equipment and/or a way to repair stuff with the Mending enchantment, which can be obtained from deep-sea fishing.

Hmmm...

Anyway, I'll decide whenever. I am apparently in a Minecraft phase now. 'Tis fun.

Till the next update.

Wednesday, September 15, 2021

Nothing Much to Report

I built a wool farm in Minecraft.

That is all.

Tuesday, September 14, 2021

Urban... Homesteading?

You know, a thought just came to me. Given my proclivity for tinkering and amassing large amounts of knowledge from seemingly esoteric fields with no intention of being certified as a specialist in any of them, it seems like homesteading might be the right type of lifestyle for me.

Work needs to be done on a homestead, and all work done on a homestead contributes to keeping the homestead alive and well. Work on a homestead requires different knowledge at different times, and there is a variety of projects of different durations and complexities, each with clearly defined goals that can be achieved. It treads the fine line between survival and death, but with a much larger safety margin than just heading out into the jungle to live. One does as much work as one needs to achieve the comfort of living one wants, and thus has time to do other things (possibly of passion) without worrying about satisfying some future lust of capitalistic corporations, which always want ``good fit'' labourers of all kinds without providing the training that would ``improve their base line''.

A very charming thought. It will be tough, but it isn't something that humanity hasn't done before---that was the way of life before we turned into Heinlein's insects and over-specialised ourselves. For the confused (with the emphasis being mine), Heinlein's words from Time Enough for Love:
A human being should be able to change a diaper, plan an invasion, butcher a hog, conn a ship, design a building, write a sonnet, balance accounts, build a wall, set a bone, comfort the dying, take orders, give orders, cooperate, act alone, solve equations, analyse a new problem, pitch manure, program a computer, cook a tasty meal, fight efficiently, die gallantly. Specialization is for insects.
I suppose the only tough part then is how to enact such a concept in an urban city the way SIN city is.

It's not like I can set up a farm for self-sustenance. Even the traditional farming areas in SIN city are getting their literal last land-lease extension to get the hell out before they get bowled over by the new zoning. And farming for food is one of those things that actually needs land, whether in the ``inefficient'' two-dimensional sprawl or in the energy-intensive, technologically supported three-dimensional ``shelf''-type farms. And land... it's not easily available in SIN city, even if one had the capital to pay for it.

But I like that idea of homesteading; it gives me a direction to re-focus my thoughts about what I want to do in the future. I don't want to live for others (how many others do I have to live for now, given that almost everything I used to cherish dearly has been taken away from me, one by one?), but I definitely want to live for myself while remaining in God's good graces, and not take bullshit from other people.

Urban homesteading... let me ponder about this more. Till the next update.

Stupid O'clock Thoughts

There isn't a good reason to be writing something at stupid o'clock ``today''. It's not like I have some stimulus that has given me the inspiration to vent something out, profound or confused.

Maybe it's just the mood of things. I mean, HoloMyth did just have their one-year anniversary, and like all things relating to anniversaries, it does put one in a more contemplative mood.

Back in the day, one of the issues that brought me to my first round of consultation with a counsellor was the loss of my inner voice. Before that, I had always had a comfortable dialogue of sorts in my head, weighing things out based on observation, a type of comforting/comfortable inner chatter that helped me retain a good grasp of who I was.

Somewhere between the ages of eighteen and twenty-five, I somehow lost that. Today, ten years later, I am going to confirm that I never did get the inner voice back. So in some sense, I lost an anchor to who I was.

Sometime between twenty-one and thirty, I lost my dreams, though I'm not sure if it should really be considered a loss if there weren't any to begin with. You see, contrary to popular belief, I am by no means an ambitious person.

Popular belief holds otherwise because of the observable outcomes of so-called successes: various awards, opportunities, or even that relentless behaviour in pursuing some truth that borders on the manic. But those are external observations---they reveal nothing about my inner world.

What I am getting at is, my inner world is really lacking in ambition. Any and all successes that I have seemingly obtained thus far are due to somehow meeting the right people under the right circumstances through happenstance.

I didn't win the ``Most Outstanding Pupil'' award back in primary school because I was gunning for it to begin with---my form teacher recommended me and suggested that I turn in my curriculum vitae for consideration.

I didn't go on television in primary five for my dizi because I was aiming for it---I was selected by the teacher-in-charge for the school Chinese orchestra to be one of the two representatives there.

I didn't pick the dizi because I wanted to be a maestro at it. I was supposed to play the erhu, but that one day when there weren't enough erhus to go around (with surplus dizis) made me switch instruments. And if not for meeting sifu, I wouldn't have advanced much in dizi to the point where I can hold my own today---in those days, the dizi section had at most three people, and there was no dedicated instructor for us. We were just left to our own devices.

I didn't represent SIN city at two international programming competitions because I knew of their existence and wanted to make a name for myself there. My first representation chance came from being in a team of senior students who were that much better than I was, but who could not represent SIN city because they were not its citizens. My second representation chance involved some minor effort on my end, and only because I enjoyed solving timed algorithm problems, and had some good luck as well.

I didn't get into River Valley High because I was an amazing student. I knew nothing about that school other than that it was ``a good school'', and somehow managed to get in, and only because I didn't get into my first choice at that time.

I didn't get a scholarship for the BSc-PhD because I was brilliant, but because I took part in a talent-search competition to spite my then form teacher [in junior college], who shot down my project proposal despite knowing absolutely nothing about it. I even wanted to drop out of the final interview round, since I would be unavailable for the time slot (I was going to be in a completely different time zone, competing at my second representation of SIN city in an international programming competition), and since I never felt like I was scholarship material in the first place. The principal figuratively slapped some sense into me by intervening on my behalf and getting me access to a global-roaming cell phone to take the interview while I was halfway around the world.

I didn't go to Carnegie Mellon University because I was aiming to be there---I would have been happy to be at NTU studying EEE. When the scholarship was awarded to me, I had no bloody clue on where to go, and it was the scholarship-counselling teacher who suggested CMU to me as a good less-crazy-to-enter university for computer science (as compared to, say, Stanford or MIT). My General Paper tutor, whom I had approached to write recommendation letters, even gently rebuked me for my choice of going to Victoria Junior College in the first place, suggesting that I probably could have done much more [and better] at a place like Raffles Junior College.

There are more such incidents, however they are still a little too close temporally for me to talk about them comfortably. Given the sample here though, the general idea should be clear: I was never one to have a big ambition to fulfil.

I was never one to have a big ambition to fulfil.

What I managed to get, where I managed to be, is because of providence. As a believer now, I will say that it has all been by the grace of God that these things happened. I am/was a nobody---my family has no pedigree to speak of, be it wealth or power, and I have shitty skin with a weird temperament. Even my mom made a comment (after my sister had graduated too) that she never expected us to get to where we were---she'd have been happy if we could get some diploma from the polytechnic and work to keep ourselves going. I ribbed her then for her lack of faith in us, but now as I look back, perhaps there is some truth in her words.

I had been floating through life with little to no ambition, with my direction gently nudged by the people that I meet. A lack of ambition is one of the many ways to reach contentment, but such a life path is highly incompatible with one of the most capitalistic societies in the world. It is also unfortunate that the one passion that I am willing to allow to be commercialised (computer science) is also the one that the world has seized as a means to create new and more insidious yokes to enslave the minds of the masses and bend them in ways whose nature we are only slowly starting to wake up to.

And that is a big part of why I am on sabbatical. Because I couldn't stand doing hypocritical things---I needed an out. Well, now I am out, but the question remains: what's next?

For someone who doesn't have ambition, ``what's next'' can be a hard question to answer. But I need to remind myself to think soberly of things, and to remember the sunk cost fallacy. Just because I possess a Masters degree in Computer Science doesn't mean that I am defined solely on the basis of possessing that Masters degree in Computer Science. Since it doesn't spark joy, there is no point in trying to continue on that path with increasing bitterness just because it can pay well---this is especially true since I have few financial commitments to worry about due to being single and not being tied to paying off a housing loan and such.

But as I had mentioned in a recent meat-space conversation, knowing what I don't want to do isn't enough: at the end of my sabbatical, I need to answer affirmatively on what I want to do so as to move forward. As long as I do not have that answer, I cannot claim success of my sabbatical in clearing my thinking.

Thus, this last quarter of my year-long sabbatical is probably the most crucial and most difficult of the four.

Monday, September 13, 2021

I Miss Rolling with a Crew

I miss rolling with a crew. It can be something formal, like colleagues to banter with at work as short respites from the daily grind, or a group of folks who share the same interests, hanging out and doing something together.

What I miss is the crew that bands together and strives to create/produce something. Part of the reason why I stepped away from Aikido (apart from the physical aspect of having to do more and more advanced breakfalls to advance) was that while we were all practising together, it was still a personal affair. This is unlike, say, playing in the TGCO, the Kiltie Band, or even the Flute Choir: in all those cases, we each had a role to play, and together we created something that no single person could have done alone, no matter how powerful.

I suppose that is why, after being in a care group for nearly a year and being associated with a church for roughly the same, I find myself feeling the same way I did when I was doing Aikido. We were together, sharing fellowship and camaraderie, but ultimately the spiritual path is still one that can only be walked by oneself---there is just nothing that can be achieved as a group. That sort of bums me out, I suppose.

For someone who used to declare himself as a misanthrope, he sure as hell isn't too happy when he gets his apparent wishes.

The global handling of COVID-19 is a shitshow that has fucked up the development, maintenance, and advancement of inter-personal relationships. At times, it feels like the nonsense that came after 9/11 all over again: anxiety and paranoia are at an all-time high, everyone is seeking certainty amidst developing situations where it becomes all too easy to mis-estimate the associated risk/exposure, and rational discussions go to die. The only difference between the two is that COVID-19 is a more invisible enemy than the nebulous ``terrorists''---it is hard to apply the social-control mechanism of ``othering'' to galvanise the population when there is no other social group that can be targeted. In the early days of the pandemic, the US President of the time tried doing just that by calling the virus by a term that the WHO eventually frowned upon as unproductive for communications and the coordination of control measures at a global scale.

Anyway, I'm not interested in talking about the big world for the moment; I just want to talk about the much smaller world that I inhabit: missing my crew.

I managed to figure out what I was feeling after watching the one-year anniversary live stream for HoloMyth, the first group of livestreamers from the then newly formed Hololive EN branch. I have probably mentioned before how I enjoyed their videos because they were like an artist group, where the ``art'' in question is video content creation in the form of live-streaming. That anniversary stream showed this plainly, as well as the all-important bonds that they had forged from rolling as a crew for one whole year.

Yeah, I miss that camaraderie. Maybe that's why I'm still a little bummed out that we haven't started rehearsals for TGCO yet, no thanks to the still-raging pandemic.

I think that as a person, I am very selective in terms of whom I call my crew. It's not about being elitist---I think that calling someone a part of one's crew is more than just casual meet-and-greet levels of acquaintance. There's something innately complementary among the members of a crew, so that while individually possibly weak, as a crew, we are strong. Perhaps that is why I have usually walked alone, with at most a couple of close friends for that particular era, until their life paths and ``aetheric resonance'' start diverging. At that point, I'll just quietly let it die away and talk about it no more.

And that's why when someone I call part of my crew suddenly turns around and stabs me, it hurts much more than the initial damage from that action alone.

Sunday, September 12, 2021

Thinking Oneself to Death?

Is it possible to think oneself to death?

Context: in the Halo video game series, the idea of ``AI Rampancy'' was a plot point in Halo 4, with the explanation that it was due to the divergence of the underlying neural-network lattice, causing too many instances to think at cross-purposes, which eventually leads to the famous line about how it really is an AI ``thinking itself to death''.

I tried to think about it (heheh) from a human perspective to see how it might work out. But I immediately ran into two issues.

Firstly, for those of us (the majority) who have learnt some form of natural language, we seem to be stuck with the ``inner talking voice'' model of thinking. This means that when we have thoughts, these thoughts tend to come in the form of words, usually ordered in some sequential manner according to whatever we managed to grok out of the associated grammar. With such a strong serialisation in place, I found it hard to apply the Halo version of ``thinking oneself to death'', since it seems that I cannot hold more than one ``inner talking voice'' at a time. I can, however, see how getting stuck in a reasoning loop of any sort (think of the Chinese idiom 钻牛角尖) is one way of ``thinking oneself to death'' through seeming hopelessness. Since this is still serial in nature though, it doesn't have the cascading-effects feel that is implied by the Halo interpretation.

Secondly, much of human thought processing is massively parallel to begin with, though not all thought processes thus harnessed are associated with the conscious abstract thought implied in the Halo context. Unlike Cortana of Halo, we are still piloting a meat-shielded carbonated-hydroxyapatite skeletal mecha that runs on chemically driven hydraulics, and all that movement requires a fair bit of coordinated control to operate properly, things that most AIs don't have to do. A large part of the human brain dedicates its cycles to processing sensory input and the associated coordinated neurological firing, but since these are unconscious/subconscious in nature, we don't usually call them ``thinking''.

That said, I still conclude that it is possible to think oneself to death, though not in the manner in which Cortana talks about in Halo. I can come up with at least two ways too.

The first was already mentioned earlier: getting stuck in a reasoning loop with seemingly no solution to the reasoning problem. This can cause death not through resource deprivation in the biomechanical sense, but in the more abstract ``future energy'', aka ``hope'', sense. It is related to the placebo effect, where the body reacts according to the level of optimism/belief within the mind. We commonly see this effect when someone is literally given a sugar pill, is told that it can provide some therapeutic effect, and that effect ends up happening despite the sugar pill being known to have no active ingredients that could affect the particular symptom in any way. Now imagine a negative version of the placebo effect (what is sometimes called the nocebo effect) arising from a reasoning loop: that is how one might ``think oneself to death''. It is not the mind that dies from the overload; it is the body that fails from the hopelessness the mind has prescribed.

The second is more obvious: suicide. Literally thinking about how one can die, and then proceeding to make it come true through physical actions directed by the brain.

But ``thinking oneself to death'' like how it is described in the ``AI Rampancy'' concept in Halo? Nah... brains don't work that way.