Quote from: Worgarthe on Today at 00:31:26
Yes it would, and it's cheaper too (as opposed to getting an absolute top-spec laptop with its insane price tag; talking in general here about laptops, not about the P1 G8, which tops out at 8 GB VRAM).
I have an RTX 5070 Ti (16 GB) working perfectly fine with both of my ThinkPads (X1 Carbon and P16). There's a bit of a bottleneck if you're chasing super-high fps (if you'd get, say, 350 on a desktop, you won't really reach more than 300 here), but if you cap that to 60-165 fps there is no real difference in gaming experience, and you save significant money in the process while getting more raw GPU power. That's a roughly 780 € GPU, and it will completely stomp both of the Blackwell options in this P1 (RTX PRO 1000 and PRO 2000), even with the mentioned Thunderbolt bottleneck (which is marginal outside of games and very high fps scenarios).
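To put numbers on that, here is a quick back-of-the-envelope sketch in Python (the 350/300 fps figures are just the example values from this post, not benchmarks): once the cap sits below what the eGPU can still deliver, both setups render the exact same frame rate.

```python
# Back-of-the-envelope check of the claim above, using the post's own numbers.
desktop_fps = 350   # what the same GPU might do in a desktop (example figure)
egpu_fps = 300      # same GPU behind Thunderbolt (example figure)

penalty = 1 - egpu_fps / desktop_fps
print(f"uncapped Thunderbolt penalty: {penalty:.0%}")  # ~14%

# With an fps cap, effective fps is the lower of the cap and what the GPU manages.
for cap in (60, 120, 165):
    desktop_eff = min(cap, desktop_fps)
    egpu_eff = min(cap, egpu_fps)
    print(f"cap {cap}: desktop {desktop_eff} fps, eGPU {egpu_eff} fps")  # identical
```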
The tradeoff is not being able to play at ultra graphics on the go. Some (homeless?) people apparently do that all day long: they travel and play continuously without doing anything else (apart from crying about how 8 GB VRAM is insufficient for games), and those same people struggle to open their in-game settings and drop textures from ultra to high, which significantly lowers VRAM usage. Other than that, no complaints; everything works flawlessly. When I get home, I put my laptop(s) on a table, plug in a single cable, and that's it: the eGPU activates automatically and I can play immediately.
Quote from: 2k for 8GB VRAM gg on Yesterday at 10:08:06
Running local LLMs / AI has been a thing for a few years now; llama.cpp and its web UI are all you need. An LLM can be fully loaded into the GPU's VRAM or, if it doesn't fit, parts of it can be offloaded to system RAM. This laptop has 32 GB RAM + 8 GB VRAM. Capable small models exist, as do big open-weights LLMs, and the more RAM+VRAM your PC has, the better. Every GB helps. So going from 8 GB to 12 GB to 16 GB VRAM would already be a good to very good improvement.
Genuinely curious: do you do anything else in your life apart from running local LLMs? Well, aside from spamming the same bs under quite literally every single review around here...
Quote from: veraverav on Yesterday at 19:24:06
Would plugging in an eGPU resolve this bottleneck for someone who absolutely has to game?
Yes it would, and it's cheaper too (as opposed to getting an absolute top-spec laptop with its insane price tag; talking in general here about laptops, not about the P1 G8, which tops out at 8 GB VRAM).
Quote from: 2k for 8GB VRAM gg on Yesterday at 10:08:06
Quote: 8 GB VRAM
2000 for only 8 GB VRAM? Nice trolling.
Even games have a problem with only 8 GB VRAM; see youtube.com/watch?v=ric7yb1VaoA ("Gaming Laptops are in Trouble - VRAM Testing w/ @Hardwareunboxed").
Most big games are made with consoles first in mind, and the PS5 has 16 GB of unified memory shared between CPU and GPU (minus roughly 4 GB reserved for the OS), so games increasingly expect your GPU to have at least 12 GB of VRAM.
Running local LLMs / AI has been a thing for a few years now; llama.cpp and its web UI are all you need. An LLM can be fully loaded into the GPU's VRAM or, if it doesn't fit, parts of it can be offloaded to system RAM. This laptop has 32 GB RAM + 8 GB VRAM. Capable small models exist, as do big open-weights LLMs, and the more RAM+VRAM your PC has, the better. Every GB helps. So going from 8 GB to 12 GB to 16 GB VRAM would already be a good to very good improvement.
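For anyone wanting to try the offloading described above, here is a minimal Python sketch using the llama-cpp-python bindings for llama.cpp (one of several ways to run it; the GGUF path is a placeholder, not a recommendation). The n_gpu_layers parameter is exactly the VRAM/system-RAM split being discussed:

```python
# Minimal sketch using the llama-cpp-python bindings for llama.cpp
# (pip install llama-cpp-python, built with GPU support).
from llama_cpp import Llama

llm = Llama(
    model_path="models/some-7b-model-q4_k_m.gguf",  # placeholder GGUF file
    n_gpu_layers=-1,  # -1 offloads all layers to VRAM; lower it if 8 GB isn't
                      # enough, remaining layers then run from system RAM
    n_ctx=4096,       # context window; a bigger context also costs VRAM
)

out = llm("Say hi in five words.", max_tokens=32)
print(out["choices"][0]["text"])
```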
Quote: matte tandem OLED can appear slightly grainy up close
How to destroy the beautiful popping colors and the sharp text of a (glossy) OLED? Take sandpaper or a grinding stone and rub it on the screen; that's the screen in this laptop.
Quote: 8 GB VRAM
2000 for only 8 GB VRAM? Nice trolling.
Quote: matte tandem OLED can appear slightly grainy up close
How to destroy the beautiful popping colors and the sharp text of a (glossy) OLED? Take sandpaper or a grinding stone and rub it on the screen; that's the screen in this laptop.
Quote: one-year base warranty instead of three years
Wow.
Quote: no ECC RAM
This is a big one. Since this is a workstation, shouldn't there at least be the option?
Quote: LPCAMM2
This is modern; at what MT/s is it running?