
Thodoris Kouleris
Software Engineer

Raw thoughts on AI
It is a fact that artificial intelligence is the future of humanity. The real question is how humans will approach it in pursuit of the fundamental goal of their evolution. Will it be just another tool that enhances human creativity, or will humans become slaves to artificial intelligence? The capitalist system, as we have watched it evolve within our own lifetimes, suggests the latter. And what will happen to intellectual property?
For AI to function, it requires data from real people to learn from, just as a child learns by observing the people around them. AI cannot generate anything new that has not first been said or invented by humans. So the question is: to whom does AI-generated content belong?
One could quickly and thoughtlessly say that it belongs to the company that developed the model. And to some extent, that might be true. However, as noted above, no model has value without the data it was trained on. If a script-generation model has been trained on the scripts of the best Hollywood screenwriters, does the final product perhaps belong to them? The same applies to programming: if an AI model has been trained on code from GitHub, does the code it produces ultimately belong to the programmers who wrote the original material?
I believe that the greatest challenge engineers had to solve was how to feed AI with data. That problem was resolved by Microsoft's acquisition of GitHub and by the rise of social media. Data has now become a commodity, one that will in turn produce another commodity (AI) for the enrichment of large corporations.
Before AI becomes deeply integrated into the professional lives of those whose work requires creative thinking, the issue of intellectual property rights must first be resolved. I might be wrong (probably), but what happens if, in the future, two scripts or two pieces of code produced by rival AI companies (e.g., OpenAI and DeepSeek) turn out to be 99% identical? Who will own the rights? Surely, legal teams will use all their creativity to win such a battle. Or rather, since we are talking about the future, the AI-powered legal department will work from all previous similar cases. The problem is that until the first such case emerges, AI models will operate in legal uncertainty.
And I believe this will be the greatest obstacle for artificial intelligence: it will not be able to produce truly original work. Then again, I might be wrong; my human nature allows for that, just as it does for every professional in their field. But how much will we trust AI in the future? Will we accept its answers as absolute truth, or will there be room for doubt? Today, anyone can challenge the work of any scientist or professional: "The calculation is wrong," "You didn't use the right methodology," and so on, phrases we often hear in debates among professionals and scientists.
In any case, no matter how much corporations and the media try to ascribe authority to artificial intelligence, or even to replace human creativity with it, we must approach AI with caution, not like a five-year-old child who has just discovered the taste of sugar. To me, artificial intelligence is a tool that can extend the capabilities of the human species. It is not infallible, nor can it solve every problem creatively, especially since it relies on existing human creations in order to evolve.