Tag Archives: NVIDIA

Computer freezing with black screen on game exit

I first experienced this with the game LIMBO and couldn’t figure it out until today, when it happened again in Tower Wars. The symptoms were too peculiar to be a coincidence.

Symptoms:

When you exit the game, the screen remains black and no mouse or keyboard input seems to do anything; no amount of Ctrl+Esc, Alt+Tab or Alt+F4 helps. However, if you had music playing in the background, it continues playing, meaning the system didn’t actually freeze.

Affected devices and games:

  • NVIDIA GeForce graphics card (GTX 980 in my case)
  • LIMBO (tested game)
  • Tower Wars (tested game)

Offending component/setting(s):

FreezingGamesOnExit.png

Solution:

Change the “Preferred refresh rate” from “Highest available” to “Application-controlled” in NVIDIA Control Panel.

The reason I’ve always changed this to “Highest available” is that I have a 144Hz monitor and I want everything to run at that high refresh rate. Apparently, with games that enforce their own refresh rate (60Hz instead of 144Hz), this somehow conflicts and causes this lockup with a black screen.

Everyone losing their shit about RX480 power consumption

I’m gonna drop a quick post about this, because people apparently aren’t capable of thinking rationally anymore. And then there are the double standards when it comes to NVIDIA and AMD…

As you might have heard, AMD recently released the Radeon RX480, a killer budget graphics card based on the new Polaris architecture. It’s priced at up to $230 for the 8GB version and performs a bit better than a GTX 970/R9 390, but slightly worse than a GTX 980 or R9 390X. Not bad, to be honest.

Things got complicated when reviewers found out some cards draw more than 150W of power. Now, that by itself wouldn’t be a problem if the excess power weren’t pulled from the PCIe slot, which is rated at 75W. The 6-pin PCIe power connector on the graphics card is officially rated at 75W as well, but can draw a lot more power without any problem. Combined, they deliver 150W. But the card in tests pulled 166W. That extra power has to come from somewhere, and the RX480 apparently takes it from the PCIe slot. Pulling more than 75W from the PCIe slot can potentially damage cheap, crappy motherboards.
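To put numbers on it, here’s a quick sanity check of that power budget. The 75W ratings come from the PCIe spec as described above; the even per-rail split at the end is purely my assumption for illustration:

```python
# Rated power budget of a card with one 6-pin connector (PCIe spec figures).
PCIE_SLOT_RATED_W = 75   # PCIe x16 slot rating
SIX_PIN_RATED_W = 75     # 6-pin auxiliary connector rating

rated_total = PCIE_SLOT_RATED_W + SIX_PIN_RATED_W   # 150 W total budget

measured_total = 166     # what reviewers measured under load
excess = measured_total - rated_total

print(f"Rated budget: {rated_total} W")
print(f"Measured:     {measured_total} W -> {excess} W over spec")

# If the card split the load evenly across both rails (hypothetical split),
# each rail would carry half the measured draw:
per_rail = measured_total / 2
print(f"Per rail at an even split: {per_rail} W "
      f"({per_rail - 75:+.0f} W vs. the 75 W rating)")
```

So even an idealized even split would put the slot 8W over its rating, which matches what the reviewers’ per-rail measurements were showing.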

So, up till this point, I have no problems. There apparently is an issue and reviewers are there to point it out so company behind the product is aware of the issue and is going to remedy it. After all, I’m a consumer and such behavior is preferred because it benefits consumers.

However (not unexpected eh?), AMD already officially acknowledged the issue with this statement (provided by W1zzard from TechPowerUp):

“As you know, we continuously tune our GPUs in order to maximize their performance within their given power envelopes and the speed of the memory interface, which in this case is an unprecedented 8 Gbps for GDDR5. Recently, we identified select scenarios where the tuning of some RX 480 boards was not optimal. Fortunately, we can adjust the GPU’s tuning via software in order to resolve this issue. We are already testing a driver that implements a fix, and we will provide an update to the community on our progress on Tuesday (July 5, 2016).”

As you can see, AMD is already preparing a fix. Modern GPUs are very advanced, and power delivery can be fully controlled via BIOS or driver. I don’t know the exact Polaris electrical design, but Maxwell 2 already has this (I’ve been fiddling with it myself on my GTX 980), and I see no reason why a brand new GPU like Polaris wouldn’t have it as well. Meaning AMD isn’t just selling hot air and buying time; they have a realistic solution. Tuesday, July 5th is the day they will give more info and potentially a driver with the fix. They can either strictly restrict power draw to 150W as a whole, or restrict the PCIe slot to 75W as specified and let the 6-pin connector draw a bit of excess power. From what I’ve heard, even though the 6-pin connector is rated at 75W, it can pull up to 150W just like an 8-pin. This won’t “gimp” the performance; it will just bring it to the level AMD has specified while leaving the PCIe slot within spec.
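The second option (keep the slot in spec, lean on the 6-pin) can be sketched as a simple budgeting rule. This is purely my illustration of the logic, not AMD’s actual driver code; the function, its names, and the even-split starting point are all hypothetical:

```python
def rebalance(total_draw_w, pcie_limit_w=75.0, six_pin_max_w=150.0):
    """Keep the PCIe slot within its 75 W spec and push the excess onto
    the 6-pin connector, which tolerates well above its official rating."""
    pcie = min(total_draw_w / 2, pcie_limit_w)   # clamp the slot rail
    six_pin = total_draw_w - pcie                # remainder goes to 6-pin
    if six_pin > six_pin_max_w:                  # last resort: cut total power
        six_pin = six_pin_max_w
        total_draw_w = pcie + six_pin
    return pcie, six_pin, total_draw_w

pcie, six_pin, total = rebalance(166.0)
print(pcie, six_pin, total)   # slot stays at 75 W, 6-pin absorbs the rest
```

With the measured 166W load, the slot stays at 75W and the 6-pin carries 91W, which is still far below what such connectors can physically deliver, so no performance needs to be sacrificed.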

Has that calmed people? Nope. Everyone is still losing their shit and creating more drama before anyone can even evaluate the fix. I’m pretty sure using the graphics card under these conditions for a few days won’t affect anything. So why all this fucking drama?

What’s even more hilarious, NVIDIA had the same shit on TWO occasions, and I’ve only heard about it now. Never before. Remember how NVIDIA also fucked up the GTX 1080 fan profiles on Founders Edition cards (reference models)? There were mentions of it, but nowhere near the same level of crazy nonsense people are now directing at AMD.

For fuck’s sake, stop being such goddamn fanboys. I own a GTX 980 and I’m the one defending AMD here…

Everyone calm the fuck down and wait for the fix. Evaluate it, and if performance or anything else is greatly affected by it, then start losing your shit again. But until then, calm the fuck down. Fucking hell.

UPDATE (2016/07/06)!

AMD issued an update on the matter about an hour ago on Facebook:

We promised an update today (July 5, 2016) following concerns around the Radeon RX 480 drawing excess current from the PCIe bus. Although we are confident that the levels of reported power draws by the Radeon RX 480 do not pose a risk of damage to motherboards or other PC components based on expected usage, we are serious about addressing this topic and allaying outstanding concerns. Towards that end, we assembled a worldwide team this past weekend to investigate and develop… a driver update to improve the power draw. We’re pleased to report that this driver—Radeon Software 16.7.1—is now undergoing final testing and will be released to the public in the next 48 hours.

In this driver we’ve implemented a change to address power distribution on the Radeon RX 480 – this change will lower current drawn from the PCIe bus.
Separately, we’ve also included an option to reduce total power with minimal performance impact. Users will find this as the “compatibility” UI toggle in the Global Settings menu of Radeon Settings. This toggle is “off” by default.

Finally, we’ve implemented a collection of performance improvements for the Polaris architecture that yield performance uplifts in popular game titles of up to 3%. These optimizations are designed to improve the performance of the Radeon RX 480, and should substantially offset the performance impact for users who choose to activate the “compatibility” toggle.

AMD is committed to delivering high quality and high performance products, and we’ll continue to provide users with more control over their product’s performance and efficiency. We appreciate all the feedback so far, and we’ll continue to bring further performance and performance/W optimizations to the Radeon RX 480.

Interestingly enough, they will provide the fix, but they are confident enough that the PCIe power draw problem isn’t serious that they won’t enable it by default. Which is a bit strange, but I guess they know what they are doing. They also optimized the drivers for up to a 3% boost, which offsets the roughly 1% performance penalty of enabling the fix. Meaning even if users decide to enable it, they won’t lose any performance. I’m still interested in seeing performance and power draw results in a re-test of the RX480 with the new drivers compared to the old ones (and with and without the fix), just to be really sure what’s happening. I’ll keep you posted…
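For reference, the offset math is simple, assuming AMD’s “up to 3%” uplift and my rough ~1% penalty estimate combine multiplicatively:

```python
uplift = 1.03    # up to 3% from the new driver optimizations (AMD's figure)
penalty = 0.99   # ~1% hit with the "compatibility" toggle on (my rough estimate)

net = uplift * penalty   # combined effect vs. launch drivers
print(f"Net change: {(net - 1) * 100:+.2f}%")
```

So the net result with the toggle enabled should still land around two percent ahead of the launch drivers, assuming those figures hold up in re-tests.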

UPDATE (2016/07/08)!

Read the news about resolved PCIe power issues on RX480 here. I’ve decided to post it as a new article while linking it back here for reference.

AMD fixed the issue entirely. That’s what I call a professional response to a seemingly unfixable problem…


NVIDIA ForceWare 368.39 drivers causing problems

I’ve been having a hard time figuring out random “The stub received bad data” errors as well as random failures of the Task Scheduler service.

stub_bad_data.png

Event Viewer was also filling up with “Faulting application name: svchost.exe_wuauserv” errors in connection with NVIDIA’s “nvwgf2umx.dll”.

With more digging I uncovered that the ForceWare 368.39 drivers are indeed bugged (probably the entire 368.xx series), but I didn’t know what exactly was wrong.

With more Googling and later testing of my own, I can confirm it’s caused by the “MFAA” setting. If you leave it off (the default state when drivers are installed clean), everything will be fine. But if you enable it, it will corrupt the settings and cause all these problems.

NVIDIA is already aware of it and working on it. Until they release a fix, the best way to get this stupid crap resolved is to disable MFAA. If that doesn’t help, reinstall the drivers, use the reset-settings-to-default option during installation, and never turn MFAA on (for now at least). You can adjust everything else; just leave MFAA alone.

I’ll keep you posted about the updated driver which includes an actual fix for this mess.

Fix DisplayPort not working (No Signal)

I keep experiencing this randomly, either after changes I make to the system or just when restarting it, when it all of a sudden decides to fuck itself up for no real reason. Based on Google, it seems to be a very common thing with NVIDIA graphics cards and high-end ASUS monitors like my ASUS VG248QE (144Hz gaming monitor), though I’ve seen reports with other brands as well.

It’s a really annoying one, because you often don’t think it’s “just this” and you try to rule out countless other things, wasting time when the solution takes just one minute of sitting still. Yeah…

When I was experiencing this issue, I disassembled half of my system and tested with a backup graphics card (which only happens to have an HDMI output, not DisplayPort, so I was basically just testing whether the rest of the system worked). I reinstalled my ASUS Strix GTX 980 and the damn thing wouldn’t output an image through DisplayPort to my ASUS VG248QE. Connected through HDMI, the image was there; connected through DisplayPort, “No Signal”. I was going mad wondering WTF was going on. I tried different DisplayPort ports on the graphics card, turned the monitor on and off with the power button, changed the input on the monitor — nothing. Nothing changed, because one variable in the problem wasn’t really getting changed or “reset”…

Then I figured out yet another dumbest solution to seemingly complex issue…

Unplug the monitor from power for one minute, then plug it back in. Voila, DisplayPort magically starts working again! How ridiculous is that, eh?

It seems like some electronics in the monitor go haywire every now and then, and the only way to get them unstuck is to cut power to the monitor until all the capacitors in it fully discharge. This apparently resets the “problematic” state; the monitor then manages to sync with the graphics card correctly and starts outputting the image through DisplayPort again.

Hope this helps. Just try to remember this as the first go-to solution when it happens again, so you won’t waste time testing everything else. Try this first, since it only takes a minute.

GeForce GTX 970 has dark secrets…

As users have uncovered suspicious things about the GeForce GTX 970, it has now turned out that NVIDIA was in fact lying to everyone, and they are now quietly admitting it. The GTX 970 doesn’t actually have 64 ROP units; it only has 56, despite being advertised as a 64-ROP card. Bus width? Nope, 224+32 bit instead of a full-blown 256-bit. 3.5 + 0.5GB of VRAM instead of 4GB.

Now I’m going to put a tinfoil hat on my head and draw some conclusions. You buy a GTX 970 today, all is well, games work fine. But what happens when users start playing more demanding games at 4K? If users have already noticed something isn’t right now, who really thought it would go unnoticed a year later? To me it seems like NVIDIA’s engineers said: we have a GTX 980 core here, we can use the crippled ones to make the GTX 970, and we’ll somehow patch the memory thing and no one will notice anyway. And so they did…

Now, why is this important? Today, games don’t often use enough memory to go past 3.5GB, but things will change, and it might happen very soon with GTA5 and other demanding games on the horizon. When they do, that 0.5GB segment will do nasty things to performance. A system is only as fast as its weakest link, and in this case that weak link is the 0.5GB of slow VRAM. If the rest of the GPU and VRAM has to wait for data to be fetched from this crappy slow segment, the rest will stall. While this doesn’t necessarily mean unplayable, it will result in framerate drops and other issues like the reported stuttering.
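To see why the slow segment drags everything down so hard, here’s a simple weakest-link model. The ~196 GB/s and ~28 GB/s figures are the commonly reported bandwidths of the GTX 970’s fast and slow segments; treating total access time as the sum of time spent in each segment is my simplification:

```python
def effective_bandwidth(fraction_slow, fast_gbps=196.0, slow_gbps=28.0):
    """Effective bandwidth when a fraction of memory traffic hits the
    slow 0.5GB segment: total time = fast_part/fast + slow_part/slow,
    so the result is a harmonic-style mean dominated by the slow rail."""
    time = (1 - fraction_slow) / fast_gbps + fraction_slow / slow_gbps
    return 1 / time

for frac in (0.0, 0.05, 0.125):
    print(f"{frac:>6.1%} slow traffic -> {effective_bandwidth(frac):6.1f} GB/s")
```

Even if only 12.5% of the traffic (the slow segment’s share of the full 4GB) lands in that segment, effective bandwidth drops to roughly 112 GB/s — close to half — which is exactly the kind of stall that shows up as stuttering.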

Call me biased because I’m using an AMD Radeon and because I’ve decided to wait for the R9-380X to see what’ll be better, but I’m kind of laughing here, watching people defend a broken piece of hardware just because they bought it and no one ever wants to admit they made a mistake. The GTX 970 is a hacked-together card that works well when conditions are good, but quite soon they won’t be, and then it won’t be so good anymore.

At least the GTX 980 is a proper card that isn’t hacked, but it’s a lot more expensive, and it leaves a bad aftertaste knowing NVIDIA was hacking things together and hiding it from consumers until someone caught them; now they are making excuses and explaining how it doesn’t affect anything. Draw whatever conclusion you want, but I don’t like it when companies do that…

GeForce GTX 970 suffering from dementia

Well, not quite, but apparently the GeForce GTX 970 suffers from a design flaw that causes all sorts of weird shit to happen when VRAM usage goes beyond 3.3GB. It starts to lose memory…

Not much is known so far and people are still trying to figure things out, but from the looks of it, NVIDIA chopped down the GTX 980 chip to fit the lower model segment (the GTX 970) and sort of quickly patched it to seemingly work fine with 4GB of on-board memory. In most conditions it’s fine, but apparently things go haywire when you try to allocate more than 3.3GB of VRAM.

There are some rather lame excuses along the lines of “it’s segmented memory, that extra 0.7GB is added as a bonus” and “well, you’re taxing the graphics card, that’s why it’s working so badly”, but the reality is, you just load it with a bit more textures to spill over the 3.3GB barrier and things go south, despite the cards supposedly having 4GB of VRAM. No such thing happens on the full-fledged GTX 980, and that card isn’t so much faster than the GTX 970 that it would automatically explain why this isn’t happening there.

While it all looked like NVIDIA made a really decent card, it’s starting to turn out it was a fast and sloppy job. Good thing I hadn’t pulled the trigger on a GTX 970; I was hours away from transferring the money. Luckily the R9-380X news came out and I postponed the purchase, which now seems an even better decision, seeing how the GTX 970 is less than ideal and waiting for the R9-380X looks like the reasonable thing to do. No one knows if AMD will have problems of their own, but so far it’s the better option either way…

You can follow this link to see how things unfold:

http://www.techpowerup.com/forums/threads/geforce-gtx-970-design-flaw-caps-video-memory-usage-to-3-3-gb-report.209205/