
AI and its power consumption: one problem among others

Started by Redaktion, February 21, 2024, 21:22:17


Redaktion

It's no secret that ChatGPT and other AI systems are power guzzlers. And when Sam Altman, CEO of OpenAI, admits that this could become a problem, it's certainly worth doing the math and looking to the future.

https://www.notebookcheck.net/AI-and-its-power-consumption-one-problem-among-others.805608.0.html

NikoB

Funny. I wrote about this back in 2023, as soon as the naive gushing about "cool" AI began. Ordinary people will never get this coolest "AI", especially not for free, precisely for the reasons I mentioned earlier, and now here too: energy consumption per request with a really powerful neural network is simply unprofitable at mass scale. =)

MQ.Yang

It's no secret that AI models need to move operations to ASICs, for reasons of operational cost and efficiency.

lmao

Quote from: NikoB on February 21, 2024, 22:03:25
energy consumption per request with a really powerful neural network is simply unprofitable
"It's estimated that a search driven by generative AI uses four to five times the energy of a conventional web search."
"0.3 Wh is used per (google) search query"

which gives a figure of 1.2-1.5 Wh per AI request
very "unprofitable"

lmao

for people without a math background: 1.5 Wh is 0.0015 kWh, i.e. a small fraction of a cent of electricity per request; and if your GPT request runs for 20 seconds, that works out to about 270 W of average draw for those 20 seconds (1.5 Wh ÷ (20/3600) h = 270 W)
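here's the same back-of-envelope arithmetic as a quick Python sketch; the 1.5 Wh and 20 s figures are the assumptions from above, and the $0.15/kWh electricity price is just an illustrative assumption, not a number from the article

Code:
# back-of-envelope arithmetic for one AI request
energy_wh = 1.5               # assumed energy per request, ~4-5x a 0.3 Wh web search
duration_s = 20               # assumed wall-clock time of one request
price_per_kwh_usd = 0.15      # illustrative electricity price, an assumption

energy_kwh = energy_wh / 1000                   # 0.0015 kWh
avg_power_w = energy_wh / (duration_s / 3600)   # ~270 W average draw while running
cost_usd = energy_kwh * price_per_kwh_usd       # ~$0.000225 per request

print(f"{energy_kwh:.4f} kWh, ~{avg_power_w:.0f} W avg, ${cost_usd:.6f} per request")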

lmao

so the AI consumption problem is not per-request consumption, it's people asking ChatGPT millions (if not billions) of trash logic questions, asking it to summarize text for them like we've all already degraded into illiteracy and can't fucking summarize text ourselves lol, or pervs using GPT for their sex "roleplaying" chats

NikoB

This is the key problem with mass requests to truly complex neural networks. Otherwise, what is the point of all the cries in the press about a "new era of AI"? )))

Yes, there will be a new era, but for the rich, kleptocratic stratum. The poor will not have any access to a model of this level of complexity, as I predicted in 2023.
That will only increase inequality significantly, from birth, which was also predicted earlier, finally turning the lives of mere mortals into a concentration-camp routine. It will be almost impossible to resist: the difference in capabilities will be colossal.
And it is unlikely that any "neo-Luddite" movements will be able to get in the way of these rich, kleptocratic layers. The greater the difference in capabilities, the easier it is to manipulate and control the masses.

This was obvious from the beginning: the very rich always get the best things first, in all areas of human progress. The poor gain access to them at a similar level only decades later, at best, and only if it benefits the powerful stratum. But by then the rich already have a new, decades-long head start in access to the best achievements of scientific and technological progress...

Anti-Ai Activist

Could we have fewer articles on AI? Every company trying to provide such services is trying to sell you some b.s. that gives you censored results. So running locally is your only real option, but according to the resident LLM expert on this forum that requires a minimum of 96-128 GB. How many people have that?

As far as I'm concerned, it's just a rich man's plaything. What next, is NBC going to write articles on yachts? Maybe by 2030 it'll become more relevant.

RobertJasiek

Quote from: Anti-Ai Activist on February 24, 2024, 15:11:17
Could we have fewer articles on AI? Every company trying to provide such services is trying to sell you some b.s. that gives you censored results. So running locally is your only real option, but according to the resident LLM expert on this forum that requires a minimum of 96-128 GB. How many people have that?

More articles on real AI, please! Fewer articles on PR-born low-speed AI!

lmao

robert are you using two nicknames to lick your own butt lmao
you are the only 'expert' here who is convinced one needs 96 GB for an LLM

reality check -
play.google.com/store/apps/details?id=com.biprep.sherpa
runs locally on an 8 GB Android phone, but is slow because reasons lmao

RobertJasiek

Quote from: lmao on February 24, 2024, 16:48:38
robert are you using two nicknames

Unlike you, I always use my real name.

We have at least three Roberts here: me, robert and someone else, whose nickname I do not recall now.

I do use two nicknames: RobertJasiek (guest), Robert Jasiek (registered, name with space). The reason is that NBC does not allow the same spelling as a guest and registered user. It only allows different guests to use the same nickname, such as the (at least) two users called A.

(Your insult is not cited.)

Quote
you are the only 'expert' here who is convinced one needs 96 GB for an LLM

I am not an expert on LLMs. The Apple fan user A seems/claims to know something about them, and I have taken his word for the order of magnitude needed to produce sufficiently interesting results. I find this plausible, as commercial LLM instances are much larger, and I have seen examples of the limited quality of AI-generated images from text inputs, which is somewhat comparable to generating text from text inputs with an LLM.

Quote
8 GB

I do not have the slightest doubt that low memory, or slow chips (CPUs), are enough to get some results. It all depends on the desired quality. (In fact, one can even do AI calculations with pen and paper, LOL. One just will not get useful quality for interesting content.)
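As a rough sketch of why both the 8 GB and the 96 GB claims can be true, here is a simple weight-memory estimate in Python. The parameter counts, bit widths and the 20% overhead factor are common ballpark assumptions of mine, not figures anyone cited in this thread:

Code:
# rough LLM memory estimate: parameters x bytes per parameter, plus overhead
def model_ram_gb(params_billions, bits_per_param, overhead=1.2):
    # ~20% overhead assumed for KV cache and runtime buffers (an assumption)
    return params_billions * 1e9 * (bits_per_param / 8) * overhead / 1e9

for params in (7, 13, 70):      # illustrative model sizes in billions of parameters
    for bits in (16, 8, 4):     # fp16, int8, int4 quantization
        print(f"{params}B @ {bits}-bit: ~{model_ram_gb(params, bits):.0f} GB")

On these numbers, a 4-bit 7B model fits in about 4 GB (hence it runs, slowly, on an 8 GB phone), while a 70B model needs roughly 84 GB at 8-bit and 168 GB at 16-bit, which is the ballpark where the 96-128 GB figure lives.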

lmao

Quote from: RobertJasiek on February 24, 2024, 18:00:18
Unlike you, I always use my real name.
then i will have to believe you got someone willing to put his tongue that deep between your buttcheeks in a forum with <20 users lmao, and of course by chance you were the next one to comment on a topic you'd ignored for 3 days
