
The reference to a read error could be to disk, network, or inter-process communication. If it's a disk error you'd see something in the event viewer complaining (with a red icon) about hardware. Most of what is in the event log IS gibberish unless you have a passionate interest in whatever it is (i.e., you wrote the software that produced it).

This is the problem with old Windows software, or software from any OS. Timing and chunking issues are a particularly awkward area and are very prone to subtle changes over time. It's really easy to build in assumptions about how data that is written arrives at the reader: as one blob, or fragmented in various ways into multiple pieces or different subdivisions of the data (see the sketch below). These assumptions can be perfectly valid for years even if they don't match the letter of the documentation (or even if they do, the documentation can change, or be incomplete to start with), but that validity can disappear as machines get faster or something deep in the OS changes the handling of data very slightly. The more complex the software is, the more likely it is to have these kinds of problems.

You're running a really good stress test on the software with what you are doing. Another option, if you have the hardware, might be to run it on a machine with more processor capacity.
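To make the chunking point concrete, here is a minimal Python sketch of the kind of assumption that can hide in stream-reading code. It is purely illustrative: the socket transport and the newline framing are my assumptions, not details of the software being discussed.

    import socket

    # Fragile pattern: assumes a single recv() returns exactly one whole message.
    # This can hold for years on one machine, then break when a faster CPU,
    # a different network stack, or heavier load changes how the bytes arrive.
    def read_message_naive(sock: socket.socket) -> bytes:
        return sock.recv(4096)  # may return half a message, or two messages glued together

    # Safer pattern: keep reading until the agreed message boundary has arrived.
    # Newline framing here is only an example choice of boundary.
    def read_message_framed(sock: socket.socket) -> bytes:
        buf = bytearray()
        while not buf.endswith(b"\n"):
            chunk = sock.recv(4096)
            if not chunk:  # peer closed the connection before the message ended
                raise ConnectionError("connection closed mid-message")
            buf.extend(chunk)
        return bytes(buf)

The same distinction applies to pipes and other byte-stream IPC: one write on the sending side does not guarantee one matching read on the receiving side.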
