Smart power strips - why?
Gray_Area:
--- Quote from: bkenobi on December 15, 2011, 11:09:58 am ---
Propaganda... You are spending ~$30 to save money on your electric bill. Most of the drains that people try to scare you about are minuscule. If you were to check what these drains actually cost, you'd find they won't cost you much.
--- End quote ---

If you look beyond your own home, though, the total adds up quickly. Everyone pays for that.

--- Quote from: Green Giant on December 15, 2011, 07:21:30 pm ---
If you have a monitor that powers up upon receiving power, then a smart strip is really nice. Otherwise it isn't so necessary.
--- End quote ---

Some monitors will stay off until they receive a video signal.

--- Quote from: Cynicaster on December 15, 2011, 10:21:20 am ---
--- Quote from: HanoiBoi on December 15, 2011, 07:31:16 am ---
I've got a smart strip and I have my PC set to 'shut down' (like a software shut down) when pressing the power button.
--- End quote ---
I was a bit surprised to have to read so many responses before seeing this one; I didn't think anybody did it any other way.
--- End quote ---

I've thought of this, but prefer to have my front end handle shutdown.

@MonMotha: what about the 'reset' button on the PC case? I notice that Windows boots back up just fine (no "Windows wasn't shut down properly" message) if I press that.
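For reference, the "software shut down on power button" behavior is just a Power Options setting, and a front end can do the same thing by calling Windows' built-in shutdown command when you exit. A minimal sketch (the slash-style flags are the Vista/7 form; XP-era builds also accept the dash form):

--- Code: ---
REM Clean software shutdown, immediately (no countdown)
shutdown /s /t 0
--- End code ---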
MonMotha:
Whacking the reset button is the same as a hard power off followed by immediately turning the machine back on.

Windows has given up on yelling at users that they didn't shut down properly. It only does that now if you interrupt the start-up process or there's a known inconsistency indicated by the journal (which would appear random to the user); it then assumes something caused it to lock up or similar on startup (somewhat common due to updates, buggy drivers, etc.) and offers to run in safe mode for you. On a hard power off, it just recovers the journal and goes about its merry business, hoping you won't notice any problems.

The chance of data loss in this case is "relatively low", but definitely non-zero. If you're using Windows 2000 or newer, you should never lose the entire filesystem (excepting an OS bug), but spotty data loss is possible, and if that loss lands in a system file (unlikely, but possible), you could wind up with an unbootable system. Like I said, I've done it for years with few issues, but I'd never ship a production system where users are likely to routinely cut power without running the shutdown sequence (like an arcade game) unless I put some sort of protection in place, like a COW ramdisk or scratch disk.

On Win9x and DOS, total filesystem loss (as in everything goes poof and the filesystem is unusable) is entirely possible. Be careful if you're still running one of these ancient OSes.
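If you're curious whether NTFS thinks a volume survived a hard power-off cleanly, Windows includes a small tool for querying the volume "dirty" bit. A minimal sketch (the drive letter is just an example):

--- Code: ---
REM Ask NTFS whether the volume was left "dirty" (not cleanly unmounted)
fsutil dirty query C:

REM If it reports dirty, schedule a full check/repair pass for the next boot
chkdsk C: /f
--- End code ---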
Gatt:
--- Quote from: MonMotha on December 15, 2011, 09:53:11 pm ---
The chance of data loss in this case is "relatively low", but definitely non-zero. [...] Like I said, I've done it for years with few issues, but I'd never ship a production system where users are likely to routinely cut power without running the shutdown sequence (like an arcade game) unless I put some sort of protection in place, like a COW ramdisk or scratch disk.
--- End quote ---

I would disagree that it's relatively low. IIRC, newer Windows versions defrag in the background periodically without user intervention, and a hard reboot at the wrong moment would cause some pretty serious problems there; I could see a mid-defrag power cut easily hosing the drive. That goes double with delayed write caches, in either Windows or the drive itself, since modern drives pack at least 16 or 32 MB of buffer.

I would also argue that you could potentially hose Windows with ease, since it's never shutting down processes and services properly. I realize it's a lot less risky than it once was, but I'd still call it a significant risk, even if a limited one in scope.

Plus, it wasn't all that long ago that I hosed a drive by doing hard reboots while trying to excise one of those fake virus scanner demons. Something must have been updating the MFT at precisely the wrong time, because I lost the MFT, and consequently the drive.
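If you want to check whether (or when) that background defrag is actually scheduled on your cab, you can look at the built-in scheduled task. A rough sketch for Vista/7-era Windows (the task path shown is the usual default, but it may differ on your install):

--- Code: ---
REM Show the automatic defrag task and its next run time
schtasks /Query /TN "\Microsoft\Windows\Defrag\ScheduledDefrag" /V /FO LIST

REM On a dedicated cab, disable it if you'd rather defrag manually at known-safe times
schtasks /Change /TN "\Microsoft\Windows\Defrag\ScheduledDefrag" /Disable
--- End code ---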
MonMotha:
The caching issues (both the OS write cache and the cache on the hard drive) are solved by inserting "barriers" into the queue of cached writes: re-ordering writes across a barrier is not permitted. I know Linux does this with ext3 and ext4. Windows should at this point; it's not a new technique. Hardware support is a bit more spotty, but it can be emulated (at a performance cost) if the hardware doesn't support it. Linux actually maintains a blacklist of drives that claim to support cache barriers but actually ignore them.

If the OS actually journals the filesystem and uses write barriers correctly (and the drive actually implements them, or the OS otherwise knows that it doesn't), it should not be possible to hose the entire filesystem. You could lose data (unless that's journaled too, but that's uncommon due to the severe performance hit) in the form of truncated or missing files, but the filesystem metadata should always be in a "consistent" state, meaning you shouldn't lose everything. Now, that's a fair bit to assume, since it's tough to get right, but it's at least guaranteed by the design of a journaled filesystem. I'd like to think they've worked most of the bugs out of NTFS at this point, given that it's well over a decade old...

I don't know if the Windows defrag program honors any of these guarantees. It damn well should, since ignoring them would be a glaring hole in an otherwise reasonably well-designed system, but it can't really be inspected. Also, as you note, if data loss or corruption occurs in a critical system file, the OS could be rendered unusable. It's considered good practice to make the OS as tolerant of this as possible, and Windows is surprisingly good at it.

My suggestion that "the risk is low" is experience-driven, though. I've killed the power to both Windows XP and Linux ext3 systems without shutting them down for basically as long as they've been available, both intentionally and unintentionally, and I don't think I've ever lost an entire filesystem. In one case, this happened daily for over a year (a one-off test arcade system). I don't think I've ever even ended up with an unbootable OS. I *have* lost entire FAT and ext2 filesystems (neither is journaled), despite having done far less with them. I *have* also lost data, even on NTFS and ext3/4, due to hard power-down, and that's definitely bad; hence my suggestion that you should always shut down properly or use another means of ensuring things don't break, like COW to a ramdisk or scratch drive. But I've never lost an entire filesystem. The design criteria of a journaled filesystem are supposed to guarantee this exact behavior, so I'd tend to believe the implementors have gotten it at least mostly right.

All the commercial PC arcades I've torn down use some form of ramdisk to contain writes and discard them on reboot, always bringing the system up in the "factory" state (high scores and operator settings are stored elsewhere with integrity checks, and are blown away and replaced with defaults if the checks fail). On Windows, this is called EWF: the Enhanced Write Filter. Unfortunately for a typical MAME builder, these aren't the easiest things in the world to set up. EWF doesn't ship with XP Pro (it comes with the XP Embedded toolkit), though it can be installed by hand, and the only way to manage it is a somewhat obscure command-line tool.
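For anyone curious, that command-line tool is ewfmgr.exe from XP Embedded. A rough sketch of typical usage, assuming a RAM overlay protecting C: (check the EWF documentation for your exact configuration; these are the common operations):

--- Code: ---
REM Show the current EWF configuration and overlay state for the protected volume
ewfmgr C:

REM Enable the write filter (takes effect on the next reboot)
ewfmgr C: -enable

REM Commit the pending overlay contents to disk, e.g. after an intentional settings change
ewfmgr C: -commit

REM Turn the filter off again for maintenance
ewfmgr C: -disable
--- End code ---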
Doing a UnionFS/tmpfs setup on Linux is also rather involved (probably more so, but also more flexible); there are other options on Linux that are easier and more typically deployed (system in an initrd plus data on a read-only partition is the most common). I guess what I'm saying is that it comes down to whether or not you feel lucky, punk. I'd wager most users will never have a problem, especially given the largely static nature of a typical MAME setup, but like I said, I'd never ship a production system like this without addressing it.
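For the Linux route, here's a minimal sketch of the union/tmpfs idea using overlayfs (available in newer kernels; older setups used aufs or UnionFS instead). All paths are examples only:

--- Code: ---
# directories used by the example (placeholders)
mkdir -p /mnt/rw /mnt/union

# RAM-backed scratch space that will absorb all writes
mount -t tmpfs tmpfs /mnt/rw
mkdir -p /mnt/rw/upper /mnt/rw/work

# Overlay the writable tmpfs on top of the read-only MAME install;
# everything written under /mnt/union lands in RAM and vanishes on reboot
mount -t overlay overlay \
    -o lowerdir=/srv/mame,upperdir=/mnt/rw/upper,workdir=/mnt/rw/work \
    /mnt/union
--- End code ---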
BobA:
Let's just say that a smart strip is CHEAP insurance. I'd rather do it right than have to rebuild a drive, even if that only means reformatting it and copying the contents back. :D

I don't think it saves that much electricity in the long run, but it sure saves all the other components in the cab by letting them power down so they don't burn out as quickly. Most monitors go into a low-power state without a signal, and an amp that isn't being driven draws little power too, but that fluorescent light in the marquee (if you're not using LEDs) uses real power.
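As a rough back-of-the-envelope illustration (assuming a typical ~15 W fluorescent marquee fixture left on around the clock and $0.12/kWh; your wattage and rates will differ):

--- Code: ---
15 W x 24 h/day x 365 days  = 131.4 kWh/year
131.4 kWh x $0.12 per kWh  ~= $15.77/year just for the marquee light
--- End code ---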