What do I mean by ‘AI’?
‘AI’ is a term used in so many ways today, and in my opinion we do not have anything that would qualify as ‘AI’ in the true sense of the word. There is nothing intelligent about a statistical model; okay, there is something clever about how these models are built and trained, but the model itself isn’t what I would call intelligent.
This site will talk about LLMs like the GPT series of models released by OpenAI. For the uninitiated, LLM stands for “Large Language Model” and describes a kind of model that takes in text and generates more text from it. You may know ChatGPT, for example, although technically that is just the UI made by OpenAI to interact with their GPT series of models.
There are definitely valid uses for some kinds of generative models, for example in finding new medicines or helping people with physical or mental disabilities, but I will not talk about those further here.
So why don’t I like LLMs?
I don’t like LLMs as they are currently made and used, but why? So many people use them day to day for so many things and are happy with them, so why don’t I like them? Shouldn’t I, as a student of software development and electronics, love this? Isn’t this progress? Well, no. The concept behind LLMs has existed since at least the 90s. In short, they learn from a lot of texts how those texts are written, i.e. which word is most likely to follow a given text. In a way, they are fancier autocompletion. And would you trust your phone’s autocorrection to give you factual answers? No? Thought so.
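To make the “fancier autocompletion” point concrete, here is a minimal sketch of a bigram model picking the statistically most likely next word; the toy corpus is made up, and real LLMs are vastly larger and work on tokens, but the principle of predicting the likeliest continuation is the same.

```python
from collections import Counter, defaultdict

# Toy corpus; a real model would train on billions of words.
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count which word follows which (a bigram model).
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def most_likely_next(word):
    """Return the most frequent continuation seen in the corpus."""
    return follows[word].most_common(1)[0][0]

# "cat" followed "the" twice, "mat" and "fish" only once each.
print(most_likely_next("the"))  # → cat
```

No understanding is involved anywhere in this: the model only counts and replays frequencies.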
Also, have you noticed? I told you that LLMs need data to be trained. Where does one get this data from, and how much data is needed? The former question is easy to answer: theft. The second is a bit harder, since it differs depending on the size of the model, but for the latest and greatest models the answer is: everything that can be collected. If you have noticed a problem with this, good for you; many have before you. Yes, LLMs are largely trained on copyrighted materials that are used without their creators’ permission [1][2]. This is not only the case with LLMs, but with many commercial generative models, for example for images or music [2]. It must be said that in some cases courts have ruled that the models themselves don’t infringe copyright law, for example in the UK [3], but it is unclear whether the training process does. And it is debatable whether a system that can reproduce creations should fall under fair use at all.
And then there is the invisible side of LLMs: their impact on the climate. Firstly, let me say this is an area maybe even more vague than the copyright situation, since there is almost no public data from the companies behind the state-of-the-art LLMs on this [4]. Another problem is: how do we calculate the impact of LLMs? There is almost no data on how much CO2 internet requests produce, and scrapers make billions of them to collect the data for training [5][6][7]. Then there is the training itself, which uses a lot of electrical energy, as well as water and other resources. The server farms have to be built. And when LLMs are used, the servers running them also consume electrical power and water [8][9]. To be fair, most data centres are quite power efficient, but the size of LLMs and their compute intensity require a lot of servers, which in total produce a lot of emissions.
Let us take a closer look at the ethics of LLMs beyond content theft.
LLMs are based on data, mostly from the internet. Do you know what happens on the internet? If not, have a read! [10][11][12][13][14][15][16][17] And then there is Reddit. I will spare you the examples, but that isn’t what I would call good training data if you want a balanced and racism-free discussion. So is AI biased? Yes. Yes, it is [18][19][20][21]. And it also gets fed by autocrats [22]! Not only does it reproduce the same biases humans have, it even strengthens them, and it is also very susceptible to misinformation and propaganda. Is this a problem? I’ll let you be the judge (hint: yes, it is).
Furthermore, the training data also contains heaps of misinformation, which is quite a problem if you want to use LLMs for news gathering. To be honest, sometimes I feel like I can’t help people who think a statistical language model can reliably be used for gathering facts; but then again, there are so many naive people who think machines are smarter than humans. But here is a source showing that you shouldn’t trust LLMs on news topics: [23]
But there are even worse ethical dilemmas I see. There are people with psychological problems using LLMs, and the LLMs will support them in their views. I can really just refer to a great video by Eddy Burback [24] and a video by Cedric Mössner (German) [25] here. But let me summarise for you, if you don’t have the time to watch them or don’t want to use YouTube (I get that). Note, though, that Cedric Mössner lists his sources under his video. I will not repeat all of them here.
In Eddy’s video, he shows himself interacting with an LLM by OpenAI through the ChatGPT web UI, trying out how far it would go to affirm him in his own views. The answer? It will go all the way. Cedric Mössner comes to a similar conclusion. But he also points out that this problem persists in newer models. And even more importantly, he shows that the model had recognised that he, in his own experiment, was showing signs of hallucinations, but still proceeded to affirm him further. This might be a deliberate decision by OpenAI, but for legal reasons I must say: I don’t know.
Okay, but there is another point. We don’t like big companies and autocrats, right? But LLMs are owned by them, so why should we trust them? I get that many will not follow my argument here, but realistically speaking, not only Chinese models like Deepseek but also models by American companies will be under strong political control. Why do I say this? Well, firstly, the owners of those big companies sat down with Donald Trump and praised him [26][27], so one might argue they will cooperate with him. And they give him money [28], so there is that. Furthermore, people like Mark Zuckerberg have shown themselves to have flexible morals at best [29], or, in the case of Elon Musk, straight-up Nazi views [30][31]. Personally, I don’t have the greatest trust in those people not to abuse their power over this software, which is widely used and trusted. And this isn’t just my opinion; experts warn about this too [32].
LLMs are so good at X. No, they aren’t.
Now, we know LLMs aren’t accurate with news, but news changes often. How about something less factually sensitive, so no history topics; how about programming? How about summarising things? The latter works okay, but the former is hit or miss. It really depends on what you want to do, how much information about it is online, and how specific your request is. Anecdotally, for me LLMs were slower with simple questions than just searching myself, and with harder questions they failed altogether. You need to work with a specific version of a framework? No, the LLM will use other versions as reference as well, so good luck. You want code compliant with special policies? Forget about it. Show it to the ESA? Wait, no, LLMs aren’t allowed under the ECSS, so that is a no-go anyway. Small models specialised for coding are great, but general-purpose LLMs are quite bad at it. Also, they are not software engineers, so don’t think they can just build you a giant new thing.
Okay, what about creative writing? Welp, please stop if you are really considering this. LLMs can help you find another way of phrasing something, but they are not creative. They can not create Art, only Artifice. Art exists to convey unquantifiable information, like feelings. LLMs are, by definition, quantified versions of texts, with no internal feelings or understanding of the world. Using texts from actual artists to create a new one, without input of your own, isn’t art. It is theft.
Climate.
I have already said that it isn’t really quantifiable how much LLMs impact the climate. The answer is quite likely somewhere between “much” and “very much”. Some studies say that LLMs are more energy-efficient at writing than humans [34], but they often don’t consider the impact of producing the servers, or the impact of the training itself. This feels odd when, on the human side, their average living habits are counted as well. If you use an LLM, there is still a living creature waiting for the result. Saying that in a vacuum an LLM is better may be true, but in the real world we can not just kill all people and replace them with LLMs. Also, in this study they assumed the client device would draw 75 W while writing a text, claiming that is the figure for an average laptop. Sure, there are modern laptops that can draw much more, but whilst writing a text you can not just plug the average TDP into your calculation; the most modern laptops will consume less than 10 W during a task like writing. I can only explain this by noting that the data the article refers to seems to be over 10 years old. Looking at the page in the Wayback Machine [33], the first snapshot is from 2014, but the copyright notice at the bottom of the page says 2012. Either way, the data on that page seems to be the same today, even though the site design has changed slightly.
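To show how much the assumed client-device draw distorts such a comparison, here is a small sketch. The 75 W and 10 W figures come from the discussion above; the one-hour writing time and the use of the US grid average are my own illustrative assumptions, not measured values.

```python
GRID_G_PER_KWH = 384      # average US grid intensity, g CO2e per kWh
WRITING_HOURS = 1.0       # assumed time to write one text (illustrative)

def device_emissions_g(device_watts, hours=WRITING_HOURS):
    """Grams of CO2e from the client device alone while writing."""
    kwh = device_watts * hours / 1000
    return kwh * GRID_G_PER_KWH

old_figure = device_emissions_g(75)   # the over-10-year-old 75 W assumption
modern = device_emissions_g(10)       # a modern laptop under light load
print(f"{old_figure:.1f} g vs {modern:.1f} g CO2e")  # → 28.8 g vs 3.8 g CO2e
```

In other words, the outdated wattage assumption inflates the human-side footprint by roughly a factor of seven before any lifestyle emissions are even added.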
Furthermore, the scale of the industry is growing rapidly; models get more and more resource-hungry the bigger they get, and also the bigger the query is [35]. More and more people are using the offerings of OpenAI and others [36]. And companies like Google are putting LLM output into their products by default, which also adds to the total emissions.
At the moment, it seems that LLMs emit about half a gram up to over 20 grams of CO2e per query, depending on the model and the size of the query [35]. This doesn’t factor in how much CO2e was released during the training phase. By broad estimates, the training alone will, for big models like those OpenAI uses today, have consumed more than a GWh of electricity [37]. With the average emissions of American electricity at about 384 grams of CO2e per kWh [38], that alone accounts for several hundred metric tons of CO2e.
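The back-of-the-envelope calculation behind that last sentence looks like this; both inputs are the broad estimates cited above, not measured values.

```python
TRAINING_KWH = 1_000_000   # more than 1 GWh of electricity for training
GRID_G_PER_KWH = 384       # average US grid, g CO2e per kWh

# grams -> metric tons (1 t = 1,000,000 g)
training_tonnes = TRAINING_KWH * GRID_G_PER_KWH / 1_000_000
print(f"~{training_tonnes:.0f} t CO2e from training alone")  # → ~384 t CO2e from training alone
```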
And in the end, remember: each time you query an LLM instead of thinking yourself, you create more demand for servers that require many rare resources, which impact not only the climate but also people, including the kids getting those resources out of the ground. If you didn’t get this: yes, there is still child labour behind some of the materials used in computers [39][40][41]. There is no fair world. Even companies that strive to be fair aren’t totally fair [42]. I know we can’t just stop using technology and thereby stop this, but using giant, inefficient services like LLMs certainly doesn’t help.
A small thought experiment.
Imagine a monkey, one that doesn’t know our language, doesn’t need to eat and won’t get bored. We put that monkey in front of a typewriter and let it type. Let us assume the monkey enters characters in random order, forever, since that typewriter is a magical one (you may know this part as the infinite monkey theorem [41]). This monkey will at some point have written every piece of human literature, in addition to every other nonsensical combination of characters. Yet though the monkey might have written all the works of Shakespeare, the Bible, and every science and fiction book in this world, no one would attribute to it the competence of the original authors. Since the monkey doesn’t understand the texts and had no intention of creating those works, they can not be considered thoughtful or artistic from the hands of this monkey.
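The thought experiment can be simulated at a (much) smaller scale. This hypothetical sketch lets a “monkey” press random keys until a short target string appears; a two-letter word shows up after a few hundred keystrokes on average, while the works of Shakespeare would take astronomically long, yet the monkey understands neither.

```python
import random

random.seed(0)  # make the "monkey" reproducible
ALPHABET = "abcdefghijklmnopqrstuvwxyz "

def monkey_types(target, max_keystrokes=10_000_000):
    """Press random keys until `target` appears; return the keystroke count."""
    window = ""
    for n in range(1, max_keystrokes + 1):
        # Keep only the last len(target) keystrokes and compare.
        window = (window + random.choice(ALPHABET))[-len(target):]
        if window == target:
            return n
    return None  # gave up (vanishingly unlikely for short targets)

print(monkey_types("to"))
```

With 27 possible keys, each position matches a 2-letter target with probability 1/729, so the expected wait is on the order of hundreds of keystrokes; for a 40-character sentence it would exceed the age of the universe.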
The same is true of LLMs. They might be better at choosing which letter to put out next, since they are literally made to replicate the most likely continuation of their input data, but they still don’t understand any of it. They didn’t think about it, and they have no feelings to convey in a text. Just like the monkey, they ‘type’ aimlessly.
Now you could argue that, from an outsider’s perspective, you can’t distinguish between what the monkey has written and what a human writer has created, and you would be right. But art has always been context-aware: if you don’t know when an artwork was created and by whom, it doesn’t have its full meaning. For example, a drawing made on stone by cave humans can be very impressive and carry deep meaning, even today, because for them it was hard work: they had to set aside significant resources and time they could have used to ease their survival, and they chose to spend them on an artwork instead. If someone today drew on a stone wall in the same style, it would have no deeper meaning to us beyond being a reference to those people in the past.
If you see an oil painting from hundreds of years ago, you know someone had to learn how to do that, and that it was expensive and time-intensive. If you see an image in the style of an oil painting generated by a generative model, it has no deeper meaning, since for the machine it was no harder than an image in the style of a child’s wax-crayon drawing. There is no sentient creature behind that image or text to give it meaning.
LLMs are not tools, they are service providers.
If you look at a person asking an LLM to generate a text, that person is not an artist, not the creator of that text. They are the client of a machine. If someone says they have created an image or a text generated by a generative model, that is no different from asking someone to create something for you and then telling everyone that you made it. The person typing the prompt isn’t doing the work. It is not a tool; it is a machine that does the whole process, and no skill is required to make it do so. It is like the kings of past times who paid painters to create artworks of them. They are not the creators of the painting, just the subject, the client of the painter. There was no creativity involved on their side; they simply used a service.
This leads me to the point of the heading: LLMs, and “AI” in general, are not tools, but services replacing the creative process.
Some people say that “AI” is the democratisation of art, but that is, in my opinion, just false. It is the opposite. The largest and strongest generative models are all held by a few large corporations seeking money, wanting to extract it from you. In contrast to real democratisation, they are centralised, and you have no influence on how they work and what they do. Tools like GIMP, Krita, Darktable, OBS Studio, Blender, Ardour and Audacity are the way to go. They are Free as in Freedom and can be adjusted to your liking. They don’t discriminate against anyone, and they will not stop working because you can’t afford a licence fee.
1. Buick, Adam on copyright and AI training data: https://academic.oup.com/jiplp/article/20/3/182/7922541 (last accessed: 13.11.2025)
2. Heise about ChatGPT losing to GEMA in court: https://www.heise.de/en/news/GEMA-vs-OpenAI-Defeat-for-ChatGPT-in-Munich-court-11073551.html (last accessed: 13.11.2025)
3. Heise about a ruling in the UK: https://www.heise.de/en/news/Copyright-vs-AI-London-court-does-not-help-image-agency-Getty-Images-11067232.html (last accessed: 13.11.2025)
4. Winter, Lena on making AI’s environmental impact measurable: https://zenodo.org/records/16608366 (last accessed: 13.11.2025)
5. !OLD! The Guardian on the energy use of web searches: https://www.theguardian.com/environment/2015/sep/25/server-data-centre-emissions-air-travel-web-google-facebook-greenhouse-gas (last accessed: 13.11.2025)
6. Root Web Design Studio Ltd. about making websites energy efficient, also containing information about averaged emissions: https://rootwebdesign.studio/articles/how-much-carbon-does-a-website-produce/ (last accessed: 13.11.2025)
7. Eberle GmbH on sustainable web design with notes on averaged emissions: https://eberle-werbeagentur.de/en/blogpost/the-co2-footprint-of-a-website/ (last accessed: 13.11.2025)
8. Info by the UN (likely underestimated): https://unric.org/en/artificial-intelligence-how-much-energy-does-ai-use/ (last accessed: 13.11.2025)
9. Science News on LLM emissions: https://www.sciencenews.org/article/ai-energy-carbon-emissions-chatgpt (last accessed: 13.11.2025)
10. Amnesty International on online violence: https://www.amnesty.org/en/what-we-do/technology/online-violence/ (last accessed: 13.11.2025)
11. Walther, Joseph B. on online hate: https://www.sciencedirect.com/science/article/abs/pii/S2352250X21002505 (last accessed: 13.11.2025)
12. Keum, Brian and Miller, Matthew on online racism: https://www.researchgate.net/publication/325636856_Racism_on_the_Internet_Conceptualization_and_Recommendations_for_Research (last accessed: 13.11.2025)
13. U.S. Government Accountability Office (let’s see how long that stays online): https://www.gao.gov/blog/online-extremism-growing-problem-whats-being-done-about-it (last accessed: 13.11.2025)
14. Multiple authors on the impact of online racism on children: https://link.springer.com/chapter/10.1007/978-3-031-69362-5_39 (last accessed: 13.11.2025)
15. The Archer School on racism on the internet and its effect on young people: https://archeroracle.org/133013/features/the-rise-of-racism-on-social-media-and-how-its-affecting-gen-z/ (last accessed: 13.11.2025)
16. Stanford University on misinformation on the web: https://news.stanford.edu/stories/2022/04/know-disinformation-address (last accessed: 13.11.2025)
17. A guide from the Princeton Library on online misinformation: https://princetonlibrary.org/guides/misinformation-disinformation-malinformation-a-guide/ (last accessed: 13.11.2025)
18. World Economic Forum on AI biases: https://www.weforum.org/stories/2021/07/ai-machine-learning-bias-discrimination/ (last accessed: 13.11.2025)
19. Allaboutai on LLM biases: https://www.allaboutai.com/resources/ai-statistics/ai-bias/ (last accessed: 13.11.2025)
20. StudyFinds article based on a study from University College London: https://studyfinds.org/ai-systems-amplify-human-bias/ (last accessed: 13.11.2025)
21. IBM on LLM biases: https://community.ibm.com/community/user/blogs/stylianos-kampakis/2025/06/10/bias-and-discrimination-in-ai (last accessed: 13.11.2025)
22. Heise on how Russia is trying to inject misinformation into LLMs: https://www.heise.de/en/news/Poisoning-training-data-Russian-propaganda-for-AI-models-10317581.html (last accessed: 13.11.2025)
23. EBU on news integrity in AI assistants: https://www.ebu.ch/Report/MIS-BBC/NI_AI_2025.pdf (last accessed: 13.11.2025)
24. Eddy Burback, “ChatGPT made me delusional”: https://www.youtube.com/watch?v=VRjgNgJms3Q (last accessed: 13.11.2025)
25. Cedric Mössner, “ChatGPT’s hallucinations go too far”: https://www.youtube.com/watch?v=URZxd5KKDSw (last accessed: 13.11.2025)
26. Fortune on the tech giants’ meeting: https://fortune.com/2025/09/05/trump-tech-dinner-full-attendee-list/ (last accessed: 13.11.2025)
27. Wired on the tech giants’ meeting: https://www.wired.com/story/tech-ceos-donald-trump-white-house/ (last accessed: 13.11.2025)
28. The Washington Post on the money tech giants gave to Trump: https://www.washingtonpost.com/technology/2025/01/11/trump-big-tech-inauguration-zuckerberg-bezos-google/ (last accessed: 13.11.2025)
29. About FaceMash, Zuckerberg’s questionable beginnings: https://www.metro.us/everything-to-know-about-facemash-the-site-zuckerberg-created-in-college-to-rank-hot-women/ (last accessed: 13.11.2025)
30. German news on Elon Musk’s Hitler salute: https://taz.de/Elon-Musks-Hitlergruss/!6060000/ (last accessed: 13.11.2025)
31. More German news on Elon Musk’s Hitler salute: https://www.zeit.de/kultur/2025-01/elon-musk-hitlergruss-amtseinfuehrung-donald-trump (last accessed: 13.11.2025)
32. Heise on experts calling LLMs instruments of power: https://www.heise.de/en/news/Philosopher-AI-is-not-a-tool-but-an-instrument-of-power-11072370.html (last accessed: 13.11.2025)
33. Wayback Machine: https://web.archive.org/web/20140730083052/https://www.energuide.be/en/questions-answers/how-much-power-does-a-computer-use-and-how-much-co2-does-that-represent/54/ (last accessed: 15.11.2025)
34. “The carbon emissions of writing and illustrating are lower for AI than for humans”: https://www.nature.com/articles/s41598-024-54271-x (last accessed: 15.11.2025)
35. “How Hungry is AI? Benchmarking Energy, Water, and Carbon Footprint of LLM Inference”: https://arxiv.org/pdf/2505.09598 (last accessed: 15.11.2025)
36. “The State of Generative AI Adoption in 2025”: https://www.stlouisfed.org/on-the-economy/2025/nov/state-generative-ai-adoption-2025 (last accessed: 15.11.2025)
37. Association of Data Scientists on LLM energy use during training and use: https://adasci.org/how-much-energy-do-llms-consume-unveiling-the-power-behind-ai/ (last accessed: 15.11.2025)
38. Our World in Data on CO2 emissions per kWh of electricity: https://ourworldindata.org/grapher/carbon-intensity-electricity (last accessed: 15.11.2025)
39. Child Labor and the Human Rights Violations Embedded in Producing Technology: https://www.culawreview.org/journal/child-labor-and-the-human-rights-violations-embedded-in-producing-technology (last accessed: 13.11.2025)
40. Digital Shadows: Unveiling the Crisis of Forced Labour in the Tech Age: https://unu.edu/article/digital-shadows-unveiling-crisis-forced-labour-tech-age (last accessed: 13.11.2025)
41. Digital Economy Challenge: Hidden Exploitation of Child Labour Through the Use of Digital Devices: https://www.researchgate.net/publication/392094099_DIGITAL_ECONOMY_CHALLENGE_HIDDEN_EXPLOITATION_OF_CHILD_LABOUR_THROUGH_THE_USE_OF_DIGITAL_DEVICES (last accessed: 13.11.2025)