On Change and AI
A perspective on change and how it has affected humanity through the centuries
I appreciate you supporting my work by clicking the ❤️ at the bottom of this page. When you do so, it allows my content to become more visible. Thanks!
I am sitting in a skateboard shop, pondering what I should write about. A regular walks in, and I say, “One of the biggest challenges about a weekly deadline is deciding what to write about, over and over again. Any idea what I should write about?”
“AI,” he replies.
Provocative.
As a Certified Public Accountant for thirty-five years, I have stayed on top of economic changes in this country for a long time. I watch and read so-called expert opinions on big pushes in corporate investment and on the effects of material changes in the technology and methodologies businesses use. I have been privy to this kind of information for well over fifty years. The current push to implement, develop, and leverage artificial intelligence (AI) may seem groundbreakingly unique, but it is no different from many changes of the past. The development of computers, the invention of the personal computer, the popularization of the internet, and the movement from paper to paperless and from physical to virtual workspaces have all had similar impacts on industry, individuals, and the economy.
Anyone paying attention right now knows that most major corporations are investing heavily in AI. This is exactly the same as during the other pushes I describe. The tech industry is spending astronomical amounts of money on server infrastructure and other AI-related investments, working to get ahead of the curve and maximize the profits they can make off this new, magical product niche. In fact, Google AI estimates that tech companies’ investment in AI in 2025 alone is between $364 billion and $427 billion. One year.
However, experts say that, in an effort to get ahead of the curve, most of the companies making these investments are overinvesting. As with any new fad (lasting or not), the odds that this tremendous investment pays off for everyone are essentially zero. Some companies will have invested at just the right level and some will have underinvested, but the consensus among economists is that the majority have been over-optimistic, creating a problem when it comes to public investment in the companies leading the charge. Sure, they have money invested in infrastructure, but if that infrastructure ends up being overkill, the investment becomes a poor one, creating potential long-term losses, especially since most of these purchases are leveraged (made with borrowed money).
History also reminds us to proceed with caution. AI development is no different from many great discoveries of the past, rife with cautionary tales. One example is the Western world’s discovery of sugar. Sugar, of course, is delicious. Like AI, one taste and many people thought, “Oh my goodness! I must have all of this all the time! I will use it for everything.” As with many good things, people believed that if a little sugar was a good thing, a lot was even better. In fact, the wealthy went as far as creating entire feasts made entirely of sugar: sugar shaped into roasts, vegetables, every type of food product, colored to replicate the actual food. It was a sign of great affluence to eat almost nothing but sugar.
Of course, when the aristocrats started holding their feasts of sugar, they believed them to be remarkable, nutritious, effectively magical. What could go wrong? Over time, the results were disastrous. Needless to say, one cannot live on sugar alone. Nutrition suffered. People fell ill, teeth rotted out of their mouths, and they died of sepsis even more frequently than before, as infections (bacteria fed by sugar) entered the bloodstream, often through rotting and infected teeth. We now know the many disastrous long-term effects of sugar on health, diabetes included, and sugar is known to feed cancer in addition to bacterial infections. So the magical discovery of sugar came with many downsides that were understood only with the passing of time, and society continues to reel from the fanatical introduction of sugar into the food industry hundreds of years after its discovery.
Radiation is another discovery considered magical at its inception. There is no question that the advent of X-ray technology, radiation treatment for cancers, and other scientific uses have had tremendously positive results. However, as with most new developments, enthusiasm for the discovery far overshadowed its actual usefulness. Radium was sold as a "wonder cure" to the general public in the 1920s. You could actually buy radioactive liquid to drink, radioactive powders and creams, radioactive toothpaste, even radioactive chocolate bars. Not unlike AI, it was billed as a cure for all ills. Radiation was thought to be a magical, life-extending elixir, believed to cure innumerable ailments, whiten teeth, enhance and extend youthful beauty, and improve general health and longevity. Needless to say, it was actually killing people. Even after radium’s lethality was discovered, corporations continued to use it (in glow-in-the-dark paint, for example), killing and maiming many of the employees required to handle and work with it. (Click here for a related story about the Radium Girls.)
AI is similar in many ways. Psychologists have begun to observe enough individuals suffering from AI-induced psychosis that many believe the DSM should add a new category of mental illness to address it. (Click here for an American Psychological Association article on AI.) Saturday Night Live even did a great bit humorously covering how dependent some people have become on AI.
It is a rare individual who hasn’t noticed how much too much screen time affects us. Many of us have watched friends, relatives, and ourselves become so affected by virtual content that it alters our moods, our ability to function, our personalities, our opinions, our lives. Back before computers, our parents used to warn us that if we watched TV for too long, our “brains would rot.” Little did they know the impact screens would have on our lives all these years later.
The effect of AI, taking basic thought off our plates, has many side effects. It makes us less likely to think, since asking one of many bots to think for us is so much easier, while simultaneously requiring us to think on a much higher level. For example, job-market specialists have observed that while AI is quickly replacing many entry-level jobs (in coding, customer service, and data entry, for instance), it will require a higher number of experienced, project-management-level employees to oversee its results and prompt its workflow. Right now, there are experienced project managers who can do that, but with entry-level work gone, how will we develop individuals with the experience needed to assess the actual intelligence of AI? Furthermore, how do educators ensure students learn the foundations of what AI is doing while preparing them to enter the workplace at a much higher level than in the past?
AI’s challenges are further complicated by the ethics behind its development. Most AI is trained on pirated content. Ethical companies have trained their AI using purchased content: when developing its photo-editing software, for example, Adobe used only images it purchased and actually owns. That is ethically sourced and trained AI. The majority of AI developers, however, are using unpurchased content gleaned from the internet to train their AI, which is arguably copyright infringement. In fact, many successful actors, musicians, and artists are suing developers for stealing their art to create tools and results mimicking their stolen work.
All this information raises a million questions, of course. How do we stop the rampant rolling stone of AI, gobbling up pirated information across the internet? How do you plug the hole in the dike after most of the water has already drained out the other side? Is it even possible? You can’t unsink the Titanic. What have we actually done, and at what cost? What is the human cost in employment, skills, knowledge, mental health, and more? Another question my skate-shop regular posed: will this glut of AI content actually make real-world, human-created work MORE valuable instead of less? That might be an upside. I’d like to think so.
Meanwhile, the one thing we can actually be assured of is that only time will tell the outcome of the AI race. History does teach us a few things we can pretty much count on, however. First, there is no way everyone’s optimism about AI is one-hundred-percent well-founded. Industry may charge forward, but we, as individuals, should avoid going down the rabbit hole with abandon like those who died of sugar or radiation poisoning. We should watch carefully and mindfully. We should use tools that have been tested, are ethical, and have been shown to actually be beneficial. Don’t fall for the hype. There has never been a time in history when blindly following trends played out well. Use the tools available, but be smarter than the tools you work with.
If you appreciate this post, please be sure to “heart” it. It improves the visibility of “Folding Plants and Watering Laundry” on Substack. Thanks for your support!


