Sunday, May 04, 2025

Fear of the Future

I fear for the future, and not in the typical ``AI singularity is taking over the world!'' sort of way.

I fear for the future because of the amount of hand-holding and retraining that we veterans of the workforce will need to do, thanks to all the traumatic nonsense that came out of ``the COVID years''.

Let's math it out, shall we?

COVID-19 may have begun in 2019, but let's make it easier on ourselves by starting in 2020, when much of the real pain truly began. The global pandemic lasted roughly 2+ years (let's say 3 for an ``absolute range''). So folks are affected if their formative years fall between 2020 and 2022 (inclusive). Let's also take into account access to ChatGPT and other Large Language Models (LLMs), which arrived at the end of 2022 (more specifically, from 2022-12 onwards).

The 95% confidence interval for freshmen ages is [18, 21] (non-scientific). If we consider the earliest entries (18 years old in 2020) and the latest entries (21 years old in 2022), we find that the graduates will start streaming out from 2023 to 2025, which is about now. The few fresh graduates I did manage to hire during this period were bright and could work well enough in quasi-remote settings (i.e. via text messaging systems and teleconferencing), but had some minor deficiencies here and there with respect to social interactions in meatspace. That feels funny to declare, considering that (1) I'm not exactly a paragon of social interaction, and (2) I don't have a statistically significant number of subordinates to make a meaningful statement on this.
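The cohort arithmetic above can be sketched in a few lines. Note that the 3-year degree length below is an assumption I am inferring from the 2023--2025 figure, not something stated outright:

```python
# Minimal sketch of the COVID cohort timeline; the degree length is an
# inferred assumption, and none of this is demographic science.
DEGREE_YEARS = 3  # inferred: 2020 freshmen "streaming out" from 2023

def graduation_window(first_freshman_year, last_freshman_year):
    # Anyone who was a freshman during the event window graduates
    # DEGREE_YEARS later, so the window simply shifts by that amount.
    return (first_freshman_year + DEGREE_YEARS,
            last_freshman_year + DEGREE_YEARS)

print(graduation_window(2020, 2022))  # the 2020-2022 COVID window
```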

I would say that the COVID-19 impact is probably minor. Social cues are things that folks eventually learn once they are out and about in the working world, mostly because there is still a larger number of old fogeys out there establishing/maintaining the norms, which anyone with even a small amount of emotional intelligence can pick up on their own, or have forcefully recalibrated through someone lecturing (讲-ing) them.

That's the easy part.

The bigger problem is the rise and abuse of ChatGPT/LLM systems. Using the same methodology as before (a non-scientific 95% confidence interval of [18, 21] for freshmen ages), we see that the earliest such graduates come out into the working world around 2026, i.e. next year. They are probably not as problematic, as their abuse of ChatGPT-esque tools is limited to the last three years of their formal education, though to be fair, the intensity of those three years may make the effective qualitative outcome ``bad enough''.

The even bigger problem concerns secondary school children (i.e. 13 years old in 2023). They are more apt to abuse ChatGPT-esque systems, and for longer; and since the teenage epoch is usually a formative one that shapes one's future outlook, this generation (and possibly successive ones) is likely to have the most trouble actually thinking and solving problems on its own, without the crutch of ChatGPT-esque systems.

This is problematic because these folks are supposed to graduate into the working world (assuming college attendance) some time in 2032. That's seven whole years of possibly abusing ChatGPT-esque systems, whose long-term cognitive effects are unknown at this stage.
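The same back-of-the-envelope arithmetic for this cohort, with the freshman age and a 4-year degree as my own assumptions chosen to reproduce the 2032 figure:

```python
# Hedged sketch of the secondary-school cohort timeline; the freshman
# age and degree length are assumptions, mirroring the rough figures.
AGE_NOW, YEAR_NOW = 13, 2023          # secondary school child in 2023
FRESHMAN_AGE, DEGREE_YEARS = 18, 4    # assumed college entry and length

freshman_year = YEAR_NOW + (FRESHMAN_AGE - AGE_NOW)  # enters college
graduation_year = freshman_year + DEGREE_YEARS       # enters workforce
exposure_years = graduation_year - 2025              # counted from today
print(freshman_year, graduation_year, exposure_years)
```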

Seven years is an eternity in that space, especially considering the so-called ``Moore's Law of AI'': roughly a 7-month doubling time in the human-task duration-equivalent metric of AI performance.
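To put a number on ``eternity'': if that 7-month doubling figure were to hold steadily (a big if, and the figure itself is the post's premise, not my claim), seven years compounds like this:

```python
# Pure compounding arithmetic under the assumed 7-month doubling time.
months = 7 * 12                  # seven years, in months
doublings = months // 7          # one doubling every 7 months
growth = 2 ** doublings          # total multiplicative growth
print(doublings, growth)         # 12 doublings, a 4096x increase
```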

I'd be nearing my sixth decade by then, and hopefully on the path to retirement (I'm kidding---in SIN city, I don't think we're allowed to retire, what with the pathological perversion of seeing ``numbers go up'' while the quality of life hits a stagnation point).

``MT, you keep saying `abuse of ChatGPT-esque systems'. What do you mean?''

It's about delegating the reading, understanding, and critical-response aspects of what an intelligent human is supposed to do to some AI model that is touted as good at all of those, under some very restrictive interpretations. I like tools that help me work faster and more productively, but I take the short-cuts only after I have developed a deep enough understanding of the underlying matter to make the short-cuts meaningful. I like to think of this as how we all have to ``earn our way'' towards using calculators by demonstrating understanding of basic arithmetic first, and basic differential/integral calculus next. The tools that make our lives easier present either an algorithmic short-cut or a data-driven short-cut that is best understood as exactly that---a short-cut. Using these short-cuts without any form of deep understanding is dangerous in the future economy, because the future is not powered by the ability to replicate without thought---the true value has always been about creating new forms of intellectual property. If all the ideas and outcomes that one creates are based purely on the output of such ChatGPT-esque models, then one loses the ability to actually create, which is dangerous.

It is the same form of argument I have against automated driving systems. We always assume that an automated driving system should have a human ``in the loop'' who can take over should the environment exceed the normal circumstances that the system was trained on; but if everyone uses automated driving systems from the get-go, the quality of the skill/decision-making needed to handle the exceptional situation paradoxically degrades drastically instead, all because people took short-cuts and never really internalised the foundational information.

``So MT, what's your point at the end of it all?''

I'm just scared. I don't know what kind of people we will get when 2032 comes around. I may be one of the few ``crazy old man engineers'' left who straddle the old ways and the new, and I pray that I have enough strength and determination left to train the newbies in the old ways, so that they can harness the best of both worlds and truly shine.

Because the effects of a failure to do so are too catastrophic for me to even start thinking about.
