
AI Conclusion

What, then, should our relationship with AI be? Should we embrace it, empower it, or partner with it? How can we resist its power to control and exploit large and complex bodies of data? We know well that the distribution of some information, whether sensitive, worthless, or simply false, can be destructive and divisive. In a world increasingly dependent on AI, it will be essential to establish a common set of rules and standards governing its development, deployment, and sharing. There can be no opacity. Humanity has never achieved an equivalent system of governance among the nations of the earth, and so we fight wars and kill one another.

 

We acknowledge that the United Nations was conceived as a template for preventing humanity from repeating its darkest mistakes. Yet its record is uneven, and its failures are undeniable. Even so, the world has avoided a major global war for eighty‑one years - an achievement that should not be dismissed. Regional cooperation has also produced meaningful progress; NATO stands as a prominent example.

 

Across many domains, nations have accepted shared standards—measurement systems, timekeeping conventions, and scientific protocols. Business and innovation have benefited from international frameworks such as the Patent and Trademark Treaties, which enjoy strong compliance. China, though a signatory, remains a notable exception.

 

Still, the early architecture for global coordination in artificial intelligence is beginning to take shape. Initiatives like the Global Partnership on Artificial Intelligence (GPAI), the European Association for Artificial Intelligence (EurAI), and the Partnership on AI, with more than one hundred member organizations, represent the first scaffolding of a cooperative regime. But these beginnings are fragile. To ignore the risks posed by unregulated AI capabilities is to gamble with humanity’s future. The cost of inaction could be catastrophic.

 

The four-to-five-thousand-year-old trail of the common thread, the pursuit of reality and truth, has brought us to an unsettling reckoning with what we are and what we are not. We must understand and accept that a machine able to exceed the limitations of the human brain challenges the notion of our importance in our tiny place and microscopic moment in the life of the cosmos.

 

Where do we go from here? What is the future? One thing is certain: we don’t know. But we do know that, whatever it is, it will unfold on an exponential calendar. Human beings think linearly; the events in our world and the universe are exponential. We humans reason in a sequence of logical steps, progressing from one point to another. Our solutions, whether for what to eat for dinner or for a differential equation, are a series of decisions connecting one thought to the next until we reach a conclusion, the truth about a reality. Technical progress, by contrast, is exponential. We went from horses to cars in roughly fifteen years. In 1946, a passenger plane’s cruising speed was under 300 mph; ten years later, jet planes were flying at nearly 600 mph. The desktop computer first appeared in the late seventies. The first smartphone appeared in 1994. Everything that was once available in a desktop computer is now in the smartphone, plus capabilities we never imagined, including cameras and video recorders. Today it is something we keep in our pockets, and it is everywhere.

 

Progress doubles in ever shorter intervals. Think of what that implies, using the old exponential example with pennies. You receive one penny on day one, two pennies on day two, four on day three, eight on day four, sixteen on day five, and so on, doubling the number of pennies each day for a single thirty-day month. You end up with nearly $5.4 million in pennies for the last day alone, plus roughly the same amount accumulated over the previous twenty-nine days, for a total of about $10.7 million.
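The arithmetic is easy to verify. A few lines of Python (a sketch added for illustration, not part of the original argument) reproduce the penny totals:

```python
# Doubling pennies: on day n you receive 2**(n - 1) pennies.
days = 30
daily_pennies = [2 ** (n - 1) for n in range(1, days + 1)]

last_day = daily_pennies[-1] / 100   # day 30 alone, converted to dollars
total = sum(daily_pennies) / 100     # all 30 days; equals (2**30 - 1) / 100

print(f"Day {days} alone: ${last_day:,.2f}")   # $5,368,709.12
print(f"Thirty-day total: ${total:,.2f}")      # $10,737,418.23
```

Note that the final day alone is worth slightly more than everything received on the previous twenty-nine days combined; that lopsidedness is the hallmark of exponential growth.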

 

Social media platforms are shaping our digital world and society. Programs such as Facebook, YouTube, TikTok, Instagram, X, and Reddit dispense information everywhere at their discretion. They can shape, stop, edit, or create their own content with no form of intellectual or legal liability. Video conferencing services such as Zoom, Meet, iMessage, and FaceTime are redefining the styles and rules of dialogue, a blend of real and virtual reality. This is where humans interact with other humans virtually, electronically. The vast spectrum of ways in which humans respond to one another is filtered down to electronic sight and sound. Three-dimensional sight, sound, smell, the shared environment, and more are absent. We can legitimately conclude that we lose something important with so much missing in this monolithic digital arena. Combine this with Dark AI and the dogs of hell will be let loose.

The other dark side of AI is warfare: the enhancement of weaponry and cyber capabilities, as well as tactical, defensive, and strategic applications. As learning and discovery accelerate exponentially, our frameworks must shed rigidity and become living systems, capable of adapting, anticipating, and growing as swiftly as the technologies they seek to harness. There are many examples of companies that failed because of their reliance on linear thinking: Eastman Kodak, General Electric, RCA, BlackBerry, Motorola, Polaroid, and Sears, to mention a few.

 

Across cultures, the enduring legacies of Eastern and Western philosophy still guide modern inquiry, shaping our spiritual sensibilities, ethical reasoning, and practical approaches to living. Eastern philosophical schools often integrate spiritual and religious elements, while Western schools are more secular.

 

Both traditions have made profound contributions to global intellectual history, enriching our collective understanding of ourselves and the world. Their influence is visible in fields such as psychology, ethics, social justice movements, and cosmology. The Socratic tradition emphasizes rationality, logic, and empirical evidence. By understanding these differences, we gain a deeper appreciation for the diversity of human thought and the various paths toward understanding existence and morality. An interesting example of mixing disciplines in education is the recent trend of some physicists joining philosophy departments rather than remaining in their home departments of physics.

 

The development of the internet gave us ubiquitous connection to almost everything; it eliminated so many elements of distance. Communication through the electronic wall, which filters out many of the realities of human behavior, has become the standard, and not necessarily a better one. It has redefined the social norm with its imposed limitations. Where, then, does AI stand, with its ubiquitous power to once again affect almost everything? AI is not merely a change but a profound event in human technical evolution, equivalent to the Industrial Revolution, or perhaps greater. Mix that with super-computing – quantum computing – and all of society will change into a very different future. Conjectures about that tomorrow remain in the miasma of a thousand uncertainties, surrounded by good and evil fantasies. Goethe told us that truth must be repeated over and over to keep us from succumbing to the errors constantly preached around us. He told us that we are so often afraid of the truth and that out of fear we look away. He referred to it as the ‘fire of truth.’ What, again, is reality but accepting the consequences of data followed by verification? Plato argued that our senses mislead us: what we see, hear, and touch are mere shadows of reality, and true knowledge requires turning away from sensory impressions toward reason and intellect. Cicero regarded truth as both the highest obligation of humanity and its most fragile possession.

 

We are at an apex of evolution, using algorithms that can move at extravagant speeds across vast seas of data to chart the path toward truths. If done rightly, and therein lies the eternal caveat of human imperfection, all illusion must be cast aside. What remains is not the shadow-play of appearances but the fire Goethe spoke of: ‘a flame that burns away deception, illuminates reality, and demands the courage to face its brilliance.’ Yet the fire is double-edged: it can illuminate, but it can also burn out of control and destroy. The algorithms are not neutral oracles; they are a mirror of who we are, carrying within their creation our styles, biases, blind spots, and ambitions. To wield them is to confront both the promise of clarity and the peril of distortion. Thus, the fundamental challenge is not to attain greater speeds but to nurture acumen, ensuring that in our pursuit of knowledge we do not mistake speed for wisdom, nor data for reality. Only then does the algorithm become more than a technique: it becomes the door to Goethe’s fire, the naked truth. The Renaissance spoke of truth within the structure of mortality.

 

Truth about life has one enduring constant: birth, living, and death, a template that never varies. Everything else within that template is dynamic, subject to the variabilities of luck, choice, and fate. Algorithmic reasoning moves in the other direction. It rests on the belief that the observed world is stable: that the collected data is a preserved truth, that the patterns endure, that tomorrow will reflect yesterday. Thus continuity becomes the structure on which we build predictions; we assume the same function will always produce the same output. But life is governed by decay, by disruption, and by the erosion of every certainty we imagine secure. When systems mistake persistence for truth, they misread the world. We are creatures of constant change. Forget that, and every guide becomes a compass to illusion, not a path to wisdom. Consider Petrarch’s struggle with truth. He saw truth through categorization, examining every current and ancient document he could review. Kant’s truth was based on cognition. Combine these and you have the algorithms of AI: Petrarch’s search through mountains of information – data – and Kant’s dependence on logic, probability, and algebraic reality. The challenge, as always, is to separate the signal from the noise. AI can extract meaningful data from the deafening roar of the data tsunami, processing volumes that would crush human cognition.

 

Then there is the promise of AGI – Artificial General Intelligence. AGI has been defined as a technology capable of mirroring human intelligence, employing all human cognitive capabilities. Google DeepMind’s Chief Scientist, Shane Legg, defines AGI as a technology capable of doing anything a human can do, but better. He believes that it is both imminent and transformative, defining imminent as within the next five to ten years. He warns us of the dangers: misuse, misalignment, mistakes, and structural risks, including the need for overwhelming amounts of energy. AGI repeats once again the debates of the past about reality and truth. Demis Hassabis, the co-founder and CEO of DeepMind, like Bacon and Petrarch, trusts in the sheer accumulation of data and scale: ‘more parameters, more compute, more truth.’ Shane Legg, on the other hand, is cautious and fears misuse and misalignment. Their critics, recalling Descartes, insist that intelligence cannot be conjured by size alone; it requires structure, models, and conceptual clarity. The pursuit of AGI stages once more the ancient struggle between empiricism and rationalism, between what appears and what is, between the momentum of progress and the restraint of wisdom. Standing on the broad plateau of tomorrow, we must confront a decisive question: will our machines provide Goethe’s fire of truth, or will the understanding of reality require a form of insight no civilization has yet imagined?

 

The path to AGI has revealed the risk potential to human civilization. The fractures in its promises of safety and ethical guidance were unmasked by the conflicts among OpenAI’s founders, with their shifting policy opinions, research, and outcomes. OpenAI began as a non-profit organization dedicated to improving the world but has become a profit-making company with the potential to make its executive staff among the wealthiest people in the world. Sam Altman, OpenAI’s CEO, has confused and changed his goals many times since the company’s founding in 2015. As an observer, one suspects that the compelling allure of power corrupts good intentions and degrades character, or, more to the point, reveals true character. One of its senior researchers, Dario Amodei, along with his sister, left the organization and started Anthropic, which gave birth to Claude. That company appears to hold to the original ethical safety walls once proclaimed by OpenAI. Altman believes we are on the cusp of machines making their own decisions, backed by recent evidence of such occurrences. The claim that AGI will be smarter than humans becomes an evolutionary, and therefore an existential, problem. Is AGI the next step in evolution, replacing us, Homo sapiens? AGI becomes the smartest generation on the planet with no need for humans – perhaps we might remain as housebroken pets.

 

Will a dependence on AI destroy the element of trust required to build and maintain a moral society? There are unwritten agreements in a country like ours that make a positive and good existence possible. For example, there are certain lines we have agreed never to cross, such as the use of certain derogatory terms. Even if they are not written into law, they exist. AI has no innate sense of acceptable social behavior, and its results could be logically perfect but socially destructive.

 

The common thread in our struggle to understand reality is the persistent pursuit of truth. Thus, we are subject to an inheritance that burdens and enriches us in equal measure. When people abandon empirical discipline, whether lazily or irresponsibly, for the comfort of conviction, they hand themselves over to an authority that reshapes them into its subjects. What they believe hardens into the world they inhabit, and the world they inhabit becomes the one that rules them.

 

As I reflect on the common thread of our journey, I cannot help but wonder whether we have opened Pandora’s box. In the old legend, opening the box released misery, evil, and curses upon humanity. Today, AI spreads across every domain of life, becoming the universal instrument that extends the limits of our minds.

 

We move in a fragile dance along the lip of a darkness we do not yet understand.  The dance is unavoidable; the choice must be ubiquitous. If we choose wisely, the future that emerges may transcend every horizon we once believed impossible. If we choose poorly, Dante’s circles will become the architecture of our future.

 

Nous devons être d’accord … We must agree.

 

 

 
