Considerations.
It would appear that we will not be able to directly discern any difference in image quality made possible by the increased color depth, since the monitor must be assumed incapable of displaying it. It is also unclear whether the human eye can perceive such subtle differences, but we must presume that some people can; it may be that women are biologically more sensitive to color differences than men are (they certainly seem to have many more names for colors). In any case, it does not seem reasonable to rule the possibility out, and so we must keep open the possibility that directly displayed differences are relevant.
We have determined that, in all likelihood, all the other parts of the image rendering process are capable of carrying the extra depth, and so we must assume that whatever differences are produced will arrive intact at the monitor itself. So we must ask: what other differences created by the increased color depth might be perceivable on the average monitor?
It has been proposed that an insufficient color depth, presuming that 24-bit color is in fact insufficient for our purposes, produces a phenomenon sometimes called 'clumping': a group of pixels lies closer together in color than the depth can distinguish, and so the monitor displays them all as the same color. On the face of it the phenomenon seems plausible, and absent any compelling falsification of the proposal, it would seem reasonable to look for it. It is straightforward to determine whether it exists: zoom in on an image until the pixels themselves are visible, and look for groups of pixels that have identical values where the surrounding tonality indicates they should not.
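As a rough illustration of that check, the following Python sketch walks one scanline of an image and records runs of identical color; long runs inside what should be a smooth gradient are candidate clumps. The file name 'gradient.png' and the choice of the middle scanline are hypothetical, and Pillow and NumPy are assumed to be available.

```python
# Sketch of the zoom-and-look check: walk one scanline and record runs of
# identical color. Long runs inside what should be a smooth gradient are
# candidate 'clumps'. 'gradient.png' is a hypothetical test image;
# Pillow and NumPy are assumed.
import numpy as np
from PIL import Image

img = np.asarray(Image.open("gradient.png").convert("RGB"), dtype=np.int32)
row = img[img.shape[0] // 2]                  # the middle scanline

runs, start = [], 0
for x in range(1, len(row)):
    if not np.array_equal(row[x], row[start]):
        runs.append(x - start)                # length of the finished run
        start = x
runs.append(len(row) - start)

# In a continuously changing gradient every run should be 1 px long.
print(f"longest run of identical color: {max(runs)} px "
      f"({sum(r > 1 for r in runs)} runs longer than 1 px)")
```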
So, 'clumping' is one thing to seek.
The next question is: where would this be evident, and where would it have the potential to "degrade" the image? One place would be where very subtle and smooth gradations of tonality are to be expected. It would not be relevant in image areas where the color does not change. It might be relevant if the size of the clumps were found to be big enough to degrade edge acutance. It might be relevant at the extremes of image density, especially where the density is the lightest.
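One way to see why smooth gradations are the natural place to look is to synthesize one. The sketch below builds a shallow tonal ramp at floating-point precision, quantizes it to 8 bits per channel as 24-bit color must, and measures how wide the resulting bands of identical value become; the ramp parameters are illustrative, not taken from any real image.

```python
# Sketch: synthesize a shallow tonal ramp at high precision, quantize it
# to 8 bits per channel as 24-bit color requires, and measure how wide
# the resulting bands of identical value are. Ramp parameters are
# illustrative only.
import numpy as np

width = 2000
# A rise of just 10 levels across the whole width -- the kind of subtle
# gradation found in a sky or a studio backdrop.
ramp = np.linspace(100.0, 110.0, width)       # the 'true' continuous tone
quantized = np.round(ramp).astype(np.uint8)   # what 8 bits/channel stores

# Every distinct stored level now occupies a contiguous band -- a clump.
levels, counts = np.unique(quantized, return_counts=True)
print(f"{len(levels)} levels over {width} px; "
      f"mean band width = {counts.mean():.0f} px")   # ~182 px per band
```

The arithmetic is the point: a rise of ten levels spread over two thousand pixels forces bands nearly two hundred pixels wide, which is ample for the eye to find where the gradation was supposed to be smooth.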
Another question is: how would this be evident; i.e., how would the image be degraded? There does not seem to be a clear answer here. Some suspicions arise, however: 1) The clumps may be large enough to appear as "grain" where grain would be considered degrading. 2) The clumps may not be large enough to be directly perceived, but there may be an observed difference in smoothness and subtle detail between their presence and their absence. While the first may be directly demonstrable, the second may not be, and so the issues of quantification arise.
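The first suspicion is at least measurable. The sketch below labels every connected region of constant value and tallies how many exceed an area threshold at which a clump might begin to read as grain; the threshold is entirely arbitrary, scikit-image is assumed alongside Pillow and NumPy, and 'smooth_area.png' is a hypothetical grayscale crop of a subtle gradation.

```python
# Sketch: measure clump sizes to ask whether any are big enough to read
# as grain. Assumes scikit-image, Pillow, and NumPy; 'smooth_area.png'
# is a hypothetical grayscale crop of a subtle gradation.
import numpy as np
from PIL import Image
from skimage import measure

gray = np.asarray(Image.open("smooth_area.png").convert("L")).astype(int)

# skimage labels neighboring pixels with the same value as one region;
# background=-1 never occurs in the data, so every pixel gets a label.
labels = measure.label(gray, background=-1, connectivity=1)
sizes = np.bincount(labels.ravel())[1:]          # pixels per region

# Entirely arbitrary threshold: a clump over ~16 px in area (about 4 px
# across) starts to be resolvable as texture at normal viewing distance.
grain_like = np.count_nonzero(sizes > 16)
print(f"{grain_like} of {len(sizes)} clumps exceed 16 px in area")
```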
Presence or absence is a simple boolean quantification. Effect as observed in terms of smoothness and detail is likely to be much more difficult to quantify. If it turns out to be the second case, that may well cause a problem, in that the presence or absence of such an effect may be arguable. In that case, it will be up to the observer to make a determination, and all that can be said here is that I do or do not perceive the effect, and/or do or do not find it remarkable.
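To make the distinction concrete, here is a minimal sketch of both quantifications: a boolean "are clumps present?" test based on run lengths, and a continuous smoothness score taken as the mean absolute second difference along a scanline. Both choices are my own illustrative assumptions, not established metrics; the arguable second case is exactly the one where such a score would at least give observers a number to argue over.

```python
# Sketch of the two quantifications. has_clumps() is the simple boolean;
# smoothness() is one possible continuous score (smaller = smoother).
# Both are illustrative assumptions, not established metrics.
import numpy as np

def run_lengths(scanline: np.ndarray) -> np.ndarray:
    """Lengths of each run of identical adjacent values."""
    change = np.flatnonzero(np.diff(scanline) != 0)
    boundaries = np.concatenate(([0], change + 1, [len(scanline)]))
    return np.diff(boundaries)

def has_clumps(scanline: np.ndarray, min_run: int = 2) -> bool:
    """Boolean: does any value repeat across min_run or more pixels?"""
    return bool(run_lengths(scanline).max() >= min_run)

def smoothness(scanline: np.ndarray) -> float:
    """Continuous: mean absolute second difference of the tonal values."""
    return float(np.abs(np.diff(scanline.astype(float), n=2)).mean())

# A ramp with one tonal step per pixel versus the same ramp coarsened
# to steps of 8 -- the clumped version scores worse on both measures.
smooth = np.arange(256, dtype=np.uint8)
clumped = (smooth // 8) * 8
print(has_clumps(smooth), has_clumps(clumped))    # False, True
print(smoothness(smooth), smoothness(clumped))    # 0.0, ~1.95
```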