Hello everyone, today I’ve decided to write an article about GenAI, focusing specifically on LLMs. I’d like to start with some key concepts, to avoid falling into the trap of forming baseless opinions out of not following new technologies. I’m interested in GenAI, and I believe it’s important to stay updated on it because it’s already present in the market. Before being a technology enthusiast, I’m a worker, and in the tech industry keeping up is essential for survival.
Artistic Matter
A good starting point is the Greek word “αὐτόματα,” which encapsulates the principle of self-operation and efficiency. This concept reflects humanity’s age-old desire to replicate itself, enabling others to perform tasks while also contemplating the beauty of an artificial human form created by human hands. I’m not referring to a godlike feeling, but rather the satisfaction of creating the most complex random entity in the world: human thought. Given this focus on beauty, it seems reasonable to transition to art. I believe that generative AI (GenAI) is sometimes more about art than practical application. There’s a race to create the most human-like technology, aiming to perfectly project human thought into a machine. I suspect I hold this view because I only see a small part of the GenAI industry and research. My current knowledge of GenAI is limited to following and using new LLM models and considering how to integrate them into my work activities. As a DevOps Engineer and Executive Manager, the practical applications I envision are primarily to assist humans in areas where they struggle with consistency or scale:
- Software project scaffolding
- Code bug detection
- Better interpretation of monitoring data
- Automated soft remediation in response to system degradations or faults
- Helping customers maintain order despite a vast array of systems and a diverse workforce
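To make the “automated soft remediation” idea concrete, here is a minimal sketch. It is only an illustration of the shape of the thing: `classify_alert` stands in for whatever model call you would actually make (here it is just a keyword heuristic), and the alert fields, labels, and remediation actions are all assumptions I invented for the example.

```python
# Hypothetical sketch of "soft remediation": a classifier (stubbed here as a
# keyword heuristic standing in for an LLM call) labels an alert, and a
# dispatcher picks a safe, reversible action. All names and rules below are
# illustrative assumptions, not a real incident-response system.

from dataclasses import dataclass


@dataclass
class Alert:
    service: str
    message: str


def classify_alert(alert: Alert) -> str:
    """Stand-in for a model call that labels the alert."""
    text = alert.message.lower()
    if "out of memory" in text or "oom" in text:
        return "restart"
    if "disk" in text and "full" in text:
        return "cleanup"
    return "escalate"  # anything unrecognized goes to a human


# Only soft, reversible actions; anything else is escalated.
REMEDIATIONS = {
    "restart": lambda a: f"restarting {a.service}",
    "cleanup": lambda a: f"rotating logs on {a.service}",
    "escalate": lambda a: f"paging a human for {a.service}",
}


def remediate(alert: Alert) -> str:
    return REMEDIATIONS[classify_alert(alert)](alert)


print(remediate(Alert("api-gateway", "process killed: out of memory")))
# -> restarting api-gateway
```

The point of the sketch is the boundary: the model only chooses among a fixed menu of soft actions, and everything it cannot classify falls through to a human.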
It seems to me that these tasks can be adequately performed with existing LLMs, but we could achieve much greater perfection, simply to appreciate the beauty of human identity migrated into a machine. Is it art if we prioritize aesthetic achievement over practical utility? Of course, this perspective is limited to my specific viewpoint. In the medical sector, for example, GenAI can rapidly detect health problems far more effectively than humans, and this is not art; it is beneficial to humanity. I hope this technology will be adopted and made sustainable for everyone, but I have doubts, as it requires significant effort and resources.
Unbalanced Energy Equation
I am afraid of all things that are free or easy and that are on, or near, the market. I don’t want to repeat the usual lecture about social media, but it is one of the main examples: social media are free, yet they are not free. Your attention is what social media sell to make money. There are also things that cost little but promise enormous returns, such as the lottery: you can spend a small amount on a ticket for a chance to win a large sum of money. However, the statistics show that lottery operators are not worried about you winning.
So everything that appears easy or free and is on the market is rarely what it promises to be. For this reason, I think that once the cost of the work, measured in calories, is no longer fully compensated by what people pay to use LLMs, it will not be sustainable for people.
But this is not only an economic matter. The value of what humans produce tends to be proportional to the work invested in producing it. Using GenAI to build a space shuttle or to find a cure for cancer seems balanced to me: people use GenAI to avoid being distracted by small, noisy problems, so they can concentrate on hard, intellectually expensive problems. But this is a problem for people who can solve the small problems and nothing else.
For example, I am a little bit afraid of the imbalance between the calories needed to build an LLM and the calories a mind spends using it to produce simple work: sending an email, creating a spreadsheet, writing simple code to automate a simple operation, drafting a few lines of text. These are just a few examples of simple things, but the point is not to define what is simple and what is not. The point is: what happens to people who delegate the simple operations to the AI and do nothing more complicated? It sounds to me like “learn to code in 2 months”, or like buying a lottery ticket to get rich quickly. It cannot work, because of the imbalance in the equation between the work used to build and operate the model and the work it generates.
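A back-of-envelope calculation shows the scale of this equation. Every number below is an order-of-magnitude assumption I chose for illustration (training energy, energy per query, lifetime query count), not a measured value; the point is only that the total bill for trivial work is enormous.

```python
# Back-of-envelope for the "energy equation". Every figure below is an
# assumed order of magnitude chosen for illustration, not a measured value.

TRAINING_WH = 1e9        # assume ~1 GWh to train a large model
PER_QUERY_WH = 0.3       # assume ~0.3 Wh per inference query
TRIVIAL_QUERIES = 1e10   # assume 10 billion trivial requests over its lifetime

# Total energy spent if the model is used mostly for simple work.
total_wh = TRAINING_WH + PER_QUERY_WH * TRIVIAL_QUERIES
print(f"total energy spent on trivial work: {total_wh / 1e9:.1f} GWh")
# -> total energy spent on trivial work: 4.0 GWh
```

With these assumed numbers, the question becomes: do billions of delegated emails and spreadsheets generate value worth gigawatt-hours of energy? That is the imbalance I mean.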
These considerations come from looking backward and around me: I see too many examples of simple things that do not generate value, even though people are convinced that they do. Below are some examples:
- Writing an article entirely with an LLM. It is easy, and it can turn you into a writer in no time, but you cannot discuss your article in depth with other people, because you did not spend time writing it. You have not generated value.
- Finding technical solutions to system problems. Too often (in 2025; I don’t know about the future), the solutions generated by an LLM are incorrect or only partially correct. The effort needed to fix the “partially correct” parts is offset by what you studied, and by the time you spent in the past solving technical problems without an LLM. That reservoir of experience will dissolve: the new generation will not have the chance to spend that time.
Probabilistic Brute Force
When I was a child, I was fascinated by perpetual motion. I built machines with magnets, motors, and everything I could find at home, trying to create a machine that would work indefinitely. I remember that whatever I did, I could improve efficiency and reduce energy dispersion, but I couldn’t create infinite work, as the principles of thermodynamics teach us. Sometimes GenAI seems to me a bit like a brute-force statistical approach to the problem. We are transforming heat into the best possible combination of words, but not the definitive, ultimate best combination. I sometimes wonder if, in the time we spend generating the best response with models, human thought will have evolved in the meantime, and thus, like in the paradox of Achilles and the tortoise, we will generate an infinite chase for the final result.
Don’t get me wrong: I am not saying that the best approximation of the result is not acceptable. I only mean that the whole system behind the generation of LLMs is elegant, but it has nuances tending towards brute force that make the effort far more probabilistic and far from simple, effective things.