The Risks of Using (Too Much) Artificial Intelligence
April 2, 2026
I have long intended to write a piece about Artificial Intelligence (AI), but this seems to be one of those subjects where the world turns faster than my ability to process it. Nonetheless, here we go, finally.
Let’s start with some general comments.
Firstly, let’s remind ourselves that the investments made into this new technology are eye-watering, so that you must wonder what the quid pro quo looks like: According to a report on the website aimagazine, Microsoft planned to invest over US$80bn in AI data centres and cloud infrastructure in 2025 alone. Meta committed up to US$65bn by 2025 to construct data centres housing 1.3 million GPUs, declaring 2025 the “defining year for AI”. AWS is reported to see AI “as an opportunity for once-in-a-lifetime reinvention” and to have committed as much as US$100bn in capital expenditures in 2025, primarily for infrastructure expansion. UBS reported that global expenditure on data centres reached US$375bn in 2025 and predicts more than US$500bn for 2026! Whether these reports will prove to be accurate, though, remains to be seen: One interesting side effect of the current closure of the Strait of Hormuz and the bombardment of Qatar’s LNG field is that the world’s helium supply might decrease by about 30%. Helium is a necessary component for cooling the machines that build AI chips. Another necessary component of AI data centres is random access memory (RAM), the prices of which have surged so dramatically that people have started talking about a “RAMgeddon” and Samsung announced that the price of its next smartphone will have to be raised by US$100 for this reason.
In any case, the sums invested in AI are huge and, as a humble patent attorney, I wonder who will eventually pick up the bill for all of it, and when. Right now, basic versions of many AI services are still free. Call me incredulous, but I doubt that this will remain so. It is a bit hard to believe that these companies are motivated merely by a sudden charitable impulse or a genuine will to do something good for society at their own expense.
This raises a couple of questions. Firstly, what exactly is “AI” as we now use the term? What capabilities does it have, and which does it lack? What is the business model, if any, behind the AI investments? Could it be that we are seeing the rise of a giant AI bubble right now, like the dotcom bubble around the year 2000? And, more specifically, should we use AI at all, particularly in IP, where data confidentiality is perhaps more important than almost anywhere else? If so, to what extent? What are the opportunities, and what are the associated risks?
Please don’t expect a complete and final answer to each of these questions, some of which are hotly debated among much more knowledgeable experts. Furthermore, and contrary to an AI chatbot, I do not claim to be in possession of the full truth, and I am certainly not an AI expert. I merely intend to provide lay readers with some food for thought. Feel free to add your views in the comments. Self-doubt and a striving for truth (rather than a claim to possess it) are, at least in my view, signs of intelligence that are often underestimated.
What is AI?
In its current usage, “AI” (or “generative AI”, as is sometimes said) stands for artificial intelligence, which is, according to Wikipedia,
the capability of computational systems to perform tasks typically associated with human intelligence, such as learning, reasoning, problem-solving, perception, and decision-making. It is a field of research in computer science that develops and studies methods and software that enable machines to perceive their environment and use learning and intelligence to take actions that maximize their chances of achieving defined goals.
High-profile applications of AI include advanced web search engines (e.g., Google Search); recommendation systems (used by YouTube, Amazon, and Netflix); virtual assistants (e.g., Google Assistant, Siri, and Alexa); autonomous vehicles (e.g., Waymo); generative and creative tools (e.g., language models and AI art); and superhuman play and analysis in strategy games (e.g., chess and Go).
In order to keep my feet on the ground, I will limit myself to large language models (LLMs) in this piece, because it seems to me that this is what lay people mean when using the term AI. Let’s again start with Wikipedia’s definition, which is very helpful, not least because it explains where the famous abbreviation GPT comes from:
A large language model (LLM) is a language model trained with self-supervised machine learning on a vast amount of text, designed for natural language processing tasks, especially language generation. The largest and most capable LLMs are generative pre-trained transformers (GPTs) and provide the core capabilities of chatbots such as ChatGPT, Gemini and Claude.
The full article is well worth reading, but an even more illuminating (and entertaining!) piece on AI is a discussion of this subject between two highly intelligent individuals, Martin Wolf, Chief Economics Commentator of the Financial Times, and Paul Krugman, Professor at CUNY and Nobel Laureate, on YouTube, which I highly recommend watching in full. I will reproduce a few highlights below:
Martin Wolf: So Paul, when you think of what is now being called artificial intelligence, or as I read today in the FT, as one expert refers to these technologies not as artificial intelligence but stochastic parrots, which I think is a lovely description. Anyway, whichever description you want, what excites you, what disturbs you about this phenomenon?
Paul Krugman: What we’re calling artificial intelligence really isn’t, at this point, intelligence. There’s an endless dispute about whether it may be about to become something that you might really call that. But really, this is an evolution of large language models of basically taking in tonnes and tonnes of data, applying very clever algorithms, so clever we don’t quite understand how they work, to be able to answer in natural language, questions posed in natural languages. And it’s not a minor thing.
The term “stochastic parrots” sticks. The fundamental problem with LLMs is that the concept of truth or accuracy does not exist in their algorithms. These models merely try to imitate the response of an (intelligent) human being, using statistical models based on huge amounts of data, but they have as much concern as a parrot about whether their response is actually true.
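To make the “parrot” point concrete, here is a deliberately tiny sketch in Python. It is my own illustration of the principle (a toy bigram model), not how production LLMs actually work; those rely on vastly larger neural networks. But the point survives the simplification: the model samples a plausible next word from co-occurrence statistics, and nothing in the loop checks whether the output is true.

```python
import random
from collections import defaultdict

# A toy "stochastic parrot": every prediction comes from co-occurrence
# counts in the training text; truth plays no role anywhere.
corpus = ("the spirit was strong but the flesh was weak "
          "the vodka was good but the meat was spoiled").split()

counts = defaultdict(lambda: defaultdict(int))
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1

def next_word(word):
    followers = counts[word]
    if not followers:                 # dead end: no observed continuation
        return None
    words = list(followers)
    weights = [followers[w] for w in words]
    return random.choices(words, weights=weights)[0]  # sample by frequency

word = "the"
output = [word]
for _ in range(10):
    word = next_word(word)
    if word is None:
        break
    output.append(word)
print(" ".join(output))  # fluent-looking, factually indifferent
```

Scale this up by many orders of magnitude and add very clever architecture, and you get fluent text; what you do not get, at any scale, is a built-in notion of truth.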
Krugman at least admits (and I agree with him) that there are several areas where LLMs have actually helped to achieve true progress. For example:
“Translation was a joke. I don’t know if they did the old anecdote about the supposed Russian-English translation programme. That took “The spirit was strong, but the flesh was weak” and it came back as “The vodka was good, but the meat was spoiled”. So, you know, it used to be that translation was a joke. Now it’s actually quite good….”
Conversely, he is quite sceptical about the current AI hype:
What we’re seeing is a rush to implement AI before it has been proved that it’s useful. There’s this enormous fashionability of putting AI. I mean, I’m finding that stuff that I use routinely, you know, search engines have actually been degraded because the companies involved are so eager to be there on the AI and that I have to put in extra work to turn the damn stuff off, so that I can just get a plain ordinary search result. So I wonder whether this time around we’re not seeing instead something like a kind of rush to be part of the wave of the future before we’re actually even sure that it really is the wave of the future.
And Wolf joins him, highlighting the current uncertainty:
Wolf: My impression at the moment is there’s quite a lot of very different views among experts on how it will play out.
Krugman: Yeah, and this has been one of those subjects where, you know, I’ve tried to talk to people, people who really do pay attention in a way that I can’t, and have come to the conclusion that anything that I want to believe about the prospects of AI and its economic effects, all I need to do is do a little searching, and I can find some expert who will tell me whatever it is I want to believe. It’s one of the situations where there’s just such a range of possible interpretations. And I’m not saying that these people are dishonest or anything. It’s just that it’s really, really unknown at this point and there’s so little actual experience.
This brings us right to the next problem: AI can obviously produce more “content” than we will ever be able to consume or digest in a meaningful way. And it does so without regard to the veracity or even the meaning of that content. This may in turn result in flooding the internet with spam and mostly useless or even questionable content – a process that has already begun. I have learnt that this is now called slopification: the flooding of cyberspace with digital slop. According to the highly interesting documentary “KI: Der Tod des Internet” (“AI: The Death of the Internet”) by Mario Sixtus, which is currently available here via the arte Mediathek (in German), critics increasingly warn that AI-generated content could suffocate the entire internet. Instead of knowledge, culture and entertainment, they say, the web increasingly resembles a data graveyard of fake friends and meaningless clickbait. They present an example in which an AI is prompted with a few keywords to write a book, which is then put on sale on a big online sales platform, all within a few seconds. On top of that, AI also allows for the easy generation of all sorts of terrible and disrespectful pictures of real persons without their consent.
Some Risks Involving AI (LLMs)
Financial Risks
I am neither an economist nor a financial expert. I can only offer a grain of common sense and a few links to people who seem to have more insight. On the one hand, we have seen an enormous influx of money (triple-digit billions of US$ in 2025 alone) into AI research and development. Share prices of AI companies have skyrocketed. So it seems that somebody (and probably somebody with a lot of money) is ready to bet a lot on AI. On the other hand, we can read serious reports and voices predicting (in the words of Cory Doctorow) that “the real (economic) AI apocalypse is nigh”. Doctorow argues
that the AI bubble is driven by monopolists who've conquered their markets and have no more growth potential, who are desperate to convince investors that they can continue to grow by moving into some other sector, e.g. "pivot to video," crypto, blockchain, NFTs, AI, and now "super-intelligence." Further: the topline growth that AI companies are selling comes from replacing most workers with AI, and re-tasking the surviving workers as AI babysitters ("humans in the loop"), which won't work. Finally: AI cannot do your job, but an AI salesman can 100% convince your boss to fire you and replace you with an AI that can't do your job, and when the bubble bursts, the money-hemorrhaging "foundation models" will be shut off and we'll lose the AI that can't do your job, and you will be long gone, retrained or retired or "discouraged" and out of the labor market, and no one will do your job. AI is the asbestos we are shoveling into the walls of our society and our descendants will be digging it out for generations.
This is strong stuff, but even the Wall Street Journal has meanwhile (25 September 2025) published a lengthy story (by Eliot Brown and Robbie Whelan, paywall) on the catastrophic finances of AI companies, titled
„Spending on AI Is at Epic Levels. Will It Ever Pay Off?
Tech companies pour hundreds of billions into data centers, taking on heavy debt, but current revenue is relatively thin; echoes of dot-com bubble“
The WSJ writers compare the AI bubble to other bubbles, like WorldCom’s fibre-optic bonanza (which saw the company’s CEO sent to prison), and conclude that the AI bubble is vastly larger than any other bubble in recent history.
I cannot tell you who is right and who is wrong here. It seems to be quite evident to me that we are currently in a boom phase caused by AI, but whether this boom phase is a bubble and when it will be followed by a bust is the big unknown. There are signs that this might indeed be the case at some point. At least you have been duly warned.
Environmental Risks
The energy and water consumption of LLMs is enormous. The new AI data centres consume so much energy that this has at least a local impact on the electricity supply and/or grid stability.
Sophia Chen has written an excellent article about this in Nature, which I highly recommend for further study (paywall; a paper version appeared in Nature, Vol. 639, 6 March 2025). She reports that the construction of at least seven large data-centre projects in Virginia (USA) will likely double the electricity demand in that state within ten years. And there is more:
Virginia already has 340 such facilities (…), where they account for more than one quarter of the state’s electricity use, according to a report by EPRI, a research institute in Palo Alto, California. In Ireland, data centres account for more than 20% of the country’s electricity consumption – with most of them situated on the edge of Dublin. And the facilities’ electricity consumption has surpassed 10% in at least 5 US states.
Moreover, her article includes an estimate of the energy needed if Google searches use generative AI (spoiler: about 23-30 times the energy of a normal search, going by figures Google reported in 2009). Another interesting figure in this article shows the respective energy consumption of AI image generation (mean 519 Wh), text generation (288 Wh), image captioning (105 Wh), question answering (23 Wh) and summarization (7 Wh; all values estimated means). For comparison, fully charging your smartphone takes about 22 Wh. Google itself presents quite different figures, though: it says the median prompt in its Gemini AI consumes 0.24 Wh, whereas a normal search query consumes about 0.3 Wh. The website kanoppi estimates the energy per query of a normal Google search at 0.3 Wh and the energy per ChatGPT query at 2.9 Wh. According to their further calculations, AI has about 340x higher CO2 emissions and a 58x higher daily energy use. The cooling-water consumption of an AI data centre is also huge.
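These figures are hard to compare at a glance, so here is a quick back-of-envelope calculation using only the numbers quoted above. (Note that kanoppi’s 340x and 58x figures additionally factor in query volumes and other assumptions, which is why they are much larger than the simple per-query ratio computed here.)

```python
# Back-of-envelope comparison, using only the figures cited above.
google_search_wh = 0.3   # kanoppi's estimate per standard Google search
chatgpt_wh = 2.9         # kanoppi's estimate per ChatGPT query
gemini_wh = 0.24         # Google's own reported median Gemini prompt
phone_charge_wh = 22     # rough energy to fully charge a smartphone

print(f"ChatGPT vs. plain search: {chatgpt_wh / google_search_wh:.1f}x per query")
print(f"Gemini vs. plain search:  {gemini_wh / google_search_wh:.2f}x per query")
print(f"ChatGPT queries per phone charge: {phone_charge_wh / chatgpt_wh:.0f}")
# Output: ~9.7x, ~0.80x, and ~8 queries respectively
```

So depending on whose figures you believe, an AI query costs anywhere from slightly less than a plain search to roughly ten times as much; multiplied by billions of queries per day, even the optimistic end adds up.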
Maybe think about it before you mindlessly type a question into ChatGPT, MS-Copilot, Claude or the like next time.
Readers who still remember the laws of thermodynamics will also not be surprised that this enormous amount of energy has to go somewhere, and that most of it will be converted into heat. Chris Stokel-Walker reports in the New Scientist that AI data centres can warm surrounding areas by – hold your breath – up to 9.1°C. I admit that I would have nothing against the current Munich temperature (4°C) being warmed by 9.1°C, but our weather forecast has predicted temperatures of up to 20°C for Easter. People living in the vicinity of such data centers may thus well have a problem in summer.
Let us now turn briefly to other risks caused by (overly) using and/or relying on LLMs.
Risk of Degradation of Human Thinking Skills
“ChatGPT may erode critical thinking skills, according to a new MIT study” is the headline of a time.com report on an interesting, even though relatively small, study. 54 subjects were divided into three groups and asked to write several essays using ChatGPT, Google’s search engine, and just their brains, respectively. Of the three groups, ChatGPT users had the lowest brain engagement and “consistently underperformed at neural, linguistic, and behavioral levels.” Even worse, over the course of several months, “ChatGPT users got lazier with each subsequent essay, often resorting to copy-and-paste by the end of the study”.
How come I am not surprised?
Wrong Results
In a news feature of Nature, published on 21 January 2025 (paywall), Nicola Jones argues that AI hallucinations can’t be stopped, but that there are techniques that can limit their damage. Despite these tricks, Jones concludes that „large language models are still struggling to tell the truth, the whole truth and nothing but the truth“. Referencing this article, Joseph Dumit and Andreas Roepstorff write in the same journal on 4 March 2025 that AI hallucinations are a feature of LLM design, not a bug. Their argument is that LLMs are trained on texts, not truths. Each text bears traces of its context, and a correct sentence in one scientific discipline may be inaccurate or nonsensical in another. However, LLMs are not trained to understand context. They operate by constructing meaning rather than retrieving it from elsewhere, confabulate and synthesize across disparate texts, and extrapolate from partial inputs. Dumit and Roepstorff conclude that their responses are therefore inherently
„creative, context-specific and untrustworthy“.
While the latest models offer a marked reduction in hallucinations relative to those available in early 2025, the problem remains. It follows that humans had better check and approve what LLMs generate before accepting it and passing it on, and it is perhaps a good idea to think twice before putting inherently hallucinating LLMs in charge of computers, weapons and economies.
Where this is not done, embarrassing things can happen, at times with expensive consequences. Here are three examples to make the point:
(1) Krishani Dhanji reported in the Guardian on 6 October 2025 that Deloitte was caught using AI in an expensive report for the Albanese government, which had consequences:
Deloitte will provide a partial refund to the federal government over a $440,000 report that contained several errors, after admitting it used generative artificial intelligence to help produce it.
(2) Elon Musk’s AI Grok demonstrated its trustworthiness by staunchly asserting, multiple times and at least initially, that Charlie Kirk was still alive and laughing after he had been shot dead in the neck. I have linked a source for the dialogue but will not reproduce it here. I have read that Grok’s boss subsequently ordered a re-education programme for his AI. He might badly need one himself. Let’s hope both of these programmes will be successful, though I am a bit sceptical.
(3) At least in my opinion, there is also a well-substantiated suspicion, if not compelling indications, that the current US administration used AI in designing President Trump’s first set of tariffs in 2025. The first indication is the creation of a seemingly ingenious formula for calculating such tariffs, which made no sense at all upon closer inspection. Prof. Krugman, after trashing the formula itself, had the following to say about it (source):
Where is this stuff coming from? One of these days we’ll probably get the full story, but it looks to me like something thrown together by a junior staffer with only a couple of hours’ notice. That USTR note, in particular, reads like something written by a student who hasn’t done the reading and is trying to bullshit their way through an exam.
But it may be even worse than that. The Trump formula is apparently what you get if you ask ChatGPT and other AI models to make tariff policy.
The second indication, not noted in Krugman’s piece, is the fact that the US tariffs specifically covered the territories of Jan Mayen and Spitsbergen, both of which were subjected to a 10% tariff, whereas Norway (to which Jan Mayen and Spitsbergen belong) was subjected to 15%. Well, Jan Mayen is probably a lovely, but also a very tranquil, island. In fact, it is so lonely that exactly no human lives there permanently and there is no economic activity there whatsoever. I was lucky enough to spot Jan Mayen once on a return flight from Japan, where I took the picture shown above.
In my personal opinion, it is highly unlikely that anyone in the Trump administration knew anything at all about this lonesome island – otherwise they would certainly not have honoured it with a special tariff. It is much more plausible that an AI application was prompted to design tariffs for every territory in the Atlantic Ocean and decided that it was appropriate, in the case of Jan Mayen, to impose a 10% tariff on whatever is delivered from there.
Of course, the use of AI is no excuse for an ill-thought-out tariff policy, which could literally be called „headless“. Likewise, AI is no valid excuse for errors in an attorney’s brief. And attorneys, particularly those who want to keep a solid reputation, should definitely not imitate AI-like hallucinations in their briefs. This can backfire badly, as is evident from the following quote from a decision of the Regional Court of Frankfurt (2-13 S 56/24, translation by the author):
Insofar as Plaintiff's representative, in his brief of August 10, 2025, cites three lengthy citations from decisions by the FCJ (Federal Court of Justice), which are marked with quotation marks as verbatim quotations, to support his dissenting legal opinion, these are – as he had to admit – complete forgeries. Neither the references given nor the dates or file numbers exist. The ZB file numbers cited (for appeal proceedings before the FCJ) would, as should have been apparent upon closer analysis, not be used for decisions on the value in dispute, since the Federal Court of Justice is known to have no jurisdiction over appeals concerning the value in dispute (Section 66 (3) clause 3 of the Law on Court Fees). The board hopes that the forgeries were not made by the plaintiff's representative himself but were “hallucinated” by a chatbot. However, when board members conducted a neutral prompt about the assessment of the value in dispute for removal claims in WEG proceedings in standard chatbots, including legal ones, the correct legal opinion as expressed in the court’s preliminary opinion was always retrieved, even when it was asked whether there was any evidence in the case law of the Federal Court of Justice (FCJ) to support the view favoured by the plaintiff's representative.
Ouch.
AI in IP
Private Practice
Turning now to IP more specifically, one of my favorite IP commentators, Dr Rose Hughes, wrote a fantastic piece on the IPKat blog about the spectre of hallucination when using AI in the patent industry, citing a lot of hilarious examples of what can go wrong and what has gone wrong when blindly relying on LLMs. Dear colleagues, please read Rose’s contribution and learn from it.
In addition, and on a less hilarious note, European Patent Attorneys should also be aware of (and observe!) the epi guidelines on AI, which can be found here. The overarching principle is this:
When using AI of any kind in professional work, a Member must adopt the highest possible standards of probity; must take all reasonable steps to maintain confidentiality when this is required; and at all times must put the interests of clients first as required by Article 1 of the epi Code of Conduct.
epi’s warning should also be taken to heart: despite the increasing use of generative AI, its operations are often poorly understood, which can severely and adversely impact the correctness of a patent attorney’s work and can cause detriment to clients and instructing principals.
German Patent and Trademark Office (GPTO)
Perhaps unsurprisingly, AI has meanwhile also reached the GPTO, and its examiners seem keen on using LLMs for their searches. However, this raises a pretty serious problem, namely data confidentiality for unpublished applications. It is probably one of the main no-gos for attorneys in private practice to run a web search using the client’s new invention as a prompt or search query. Even with a standard web search, there is an (admittedly small) risk that such a prompt will become publicly available through the auto-complete function. This is one reason why specialist patent search firms use databases that do not make the prompts public. Caution about using new inventions as prompts applies all the more to general AI chatbots: not only may the search prompts be used to train the AI and hence be regurgitated as a public disclosure elsewhere, but the chatbot may also carry out a web search with the new invention, again risking a leak. Specialist AI software needs to be used to avoid these risks. But what about a prior art search conducted by a patent office using AI in a search engine which is (possibly) open to the internet?
Clearly, this should also not happen, and the GPTO management recognises this. The GPTO President has just issued the following notice No. 1/26, which announces an update of the search and examination guidelines of the GPTO with regard to the use of „external search sources, including the use of applications of artificial intelligence“.
To put it briefly, the GPTO recognises that it has to search the relevant state of the art upon request, which includes prior art worldwide to the extent it has been made available to the public. At the same time, the GPTO has to treat the information in unpublished applications confidentially. But here it comes (translation by DeepL, reviewed by the author):
In the age of digitalisation and artificial intelligence, the volume of knowledge that can be accessed and retrieved worldwide is growing faster than ever before. It is now regularly available only in electronic form and, in fields such as information technology or chemistry, is often stored solely in external research sources. High-quality, up-to-date searches are therefore often only possible if external electronic search sources are also used, where necessary by using artificial intelligence applications.
For this reason, the German Patent and Trade Mark Office has amended its search guidelines, examination guidelines (both available under Forms/Patents) and utility model search guidelines (available under Forms/Utility Models). The revised version expressly permits the examiners to use external electronic search sources also by using artificial intelligence applications, to the extent permitted by the DPMA, as part of the search. At the same time, it is clarified that the search must continue to be conducted in such a way that information contained in undisclosed patent applications is not made available to the public.
All well and good, but what if these „external electronic search sources“ or „artificial intelligence applications“ were open to the internet? The GPTO is refreshingly, or perhaps shockingly, candid about who bears the risk in such cases: the applicant! The President’s notice concludes with this paragraph:
Externe elektronische Recherchequellen unterliegen nur eingeschränkt der Kontrolle des DPMA. Auch unter Beachtung der im Einzelfall gebotenen Sorgfalt kann deshalb nie gänzlich ausgeschlossen werden, dass Begriffe, Sequenzen, chemische Strukturformeln oder Text aus Anmeldungen Dritten zugänglich werden. Anmelder, die dieses Restrisiko vermeiden möchten, sollten daher erwägen, erst nach der Offenlegung ihrer Anmeldung einen Antrag auf Recherche oder Prüfung der Anmeldung zu stellen.
External electronic search sources are subject only to limited control by the DPMA. Even when exercising due care in individual cases, it can therefore never be entirely ruled out that terms, sequences, chemical structural formulae or text from applications may become accessible to third parties. Applicants wishing to avoid this residual risk should therefore consider submitting a request for a search or examination of the application only after the publication of their application.
In other words, if you wish to make sure that the content of your application does not leak from the patent office too early, you had better wait to file your request for examination or search until after publication of the application. Whether this is satisfactory for applicants before the GPTO is a different question.
European Patent Office
So, are you better off, as an applicant, having your invention searched and examined by the EPO? At least the Guidelines for Examination promise that the application will be kept confidential prior to publication.
But as avid readers of this blog know only too well, the EPO has a well-polished facade to the outside world and a different reality behind the scenes. I have heard from a trustworthy source that there are EPO examiners who routinely use their own personal ChatGPT accounts for searches and do not understand the risks. If this is so, I can only hope that this article will help put a stop to such activities. Conversely, the superiors of a well-managed patent office should withstand the temptation to increase the working pressure on examiners. If fewer and fewer examiners have to deal with more and more applications, there is inherent pressure to use AI too much and too uncritically. AI should never be misunderstood as a tool that can replace human brains or help examiners and attorneys save considerable amounts of time by outsourcing their jobs to a machine. That can only end badly for quality (and for the professionals). Nor, of course, should AI be used to promote laziness. Independent thinking and careful attention may now be required more than ever.
If all goes wrong and an application is leaked to the internet through improper use of AI tools, this is very bad news for the applicant. EPO decision T 585/92, dealing with an erroneous early publication by a Latin American patent office, is quite clear about such mistakes – even Article 55 EPC does not save you if your application is leaked by a patent office.
Of course, AI can also be used in a “closed system” with no efflux to the internet, and this should be the standard for each and every patent office.
On a more general level, the EPO considers itself “AI-friendly”, because it believes AI enables improvements in the effectiveness, quality and timeliness of its services and administration. The EPO is, however, also very aware of the associated risks and has issued a comprehensive “AI Policy” that discusses both the opportunities and the risks created by the use of AI. Fundamentally, this policy provides that the impact and risks of implementing or using AI for a particular task must be appropriately assessed and managed in accordance with the policy. A standard procedure for such impact assessment and risk management has been developed and forms part of the policy. This includes a list of identified areas of high-risk AI, such as biometrics, education and vocational training, employment, access to essential services and benefits, and administrative investigations. AI technology used in interactions with humans and to generate or manipulate content is considered “limited-risk”, and users must be informed that they are interacting with an AI system, unless this is obvious from the circumstances.
All of this, including the EPO’s human-centric approach presented by the EPO’s CTO/COO Angel Aledo Lopez in this interview on LinkedIn, makes sense to me on a general or policy level. It is reassuring to read that examiners remain the ultimate decision-makers, but we will of course have to see how all of this is implemented.
One not-so-great example surfaced last year, concerning the EPO’s use of AI for patent classification via a new system called “AI-autoclose”. Under this system, classes allocated by, e.g., other patent offices are compared against AI suggestions. If the classification groups match, the classes are confirmed as final by the AI and the circulation of the file for classification purposes is closed without further human intervention. Otherwise, the file is given to a human classifier. The staff representation wrote in an open letter to the Administrative Council on 11 April 2025 that they had conducted a survey and gathered 247 responses from classifiers and their managers. About 67% replied that they were dissatisfied with the quality of classification provided by AI-autoclose, whereas only about 13% were satisfied. Hmmm. It seems that improvements are, or at least were at that time, still needed. I will try to watch the next developments as they come.
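For readers who prefer code to prose, the decision logic, as I understand it from the staff letter, boils down to something like the following sketch. This is my own illustration; the function name, inputs and the exact matching rule are assumptions, not the EPO’s actual implementation.

```python
# Sketch of the "AI-autoclose" logic as described above -- my own
# reading of the staff letter, not the EPO's actual implementation.
def autoclose(allocated_classes: set[str], ai_suggestions: set[str]) -> str:
    """Compare classes allocated by other offices with AI suggestions."""
    if allocated_classes == ai_suggestions:
        # Groups match: the classification is confirmed as final and the
        # circulation of the file is closed without human intervention.
        return "closed automatically (no human review)"
    # Groups differ: the file is routed to a human classifier.
    return "forwarded to human classifier"

print(autoclose({"G06N 3/08"}, {"G06N 3/08"}))   # closed automatically
print(autoclose({"G06N 3/08"}, {"G06F 17/00"}))  # forwarded to human classifier
```

The obvious weakness, and presumably part of what the survey respondents were unhappy about, is the first branch: when the two sources agree, nobody checks whether they are both wrong.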
Conclusion
“Artificial intelligence is changing the way we receive information and communicate, but who directs it and for what purposes? We must be vigilant in order to ensure that technology does not replace human beings, and that the information and algorithms that govern it today are not in the hands of a few.”
The same speaker emphasised
“I urge you: never sell out your authority”.
I must say that I like both of these statements and endorse them, although, to be completely candid, I am not a blind “follower” of the speaker, Pope Leo XIV. This is because I blindly trust neither humans who claim (or at least once claimed) to be infallible and in possession of the whole truth, nor AI. Being an avid tubist myself, I rather sympathize with the music and the views of the Altneiheiser Feuerwehrkapelle (video in German):
“Traue niemals der KI!” (“Never trust the AI!”)
But here, Pope Leo XIV is right in my view. We should indeed try everything to retain our authority as humans, never stop thinking (also critically) and, above all, every one of us should stay a Mensch.