The short answer is 'because that's what the standard says': IEEE 754 floating point defines infinities and NaNs, whereas the integer standards don't. Decimal, despite having a floating point in the most literal sense of that term, is built on integer arithmetic (and long is obviously a native integer type). The 'special' bit patterns are typically 'most bits on', and in an integer type those patterns already represent small negative numbers, so there are none spare for Infinity or NaN.
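To make the contrast concrete, here's a minimal sketch (my own illustration, not from any spec) of how each type reacts when pushed out of range:

```csharp
using System;

class SpecialValuesDemo
{
    static void Main()
    {
        // double is IEEE 754: out-of-range results become special values.
        double inf = 1.0 / 0.0;                // PositiveInfinity, no exception
        double nan = 0.0 / 0.0;                // NaN
        Console.WriteLine(inf);                // "Infinity"
        Console.WriteLine(double.IsNaN(nan));  // True

        // long has no spare bit patterns: division by zero throws instead.
        long zero = 0;
        try { Console.WriteLine(1L / zero); }
        catch (DivideByZeroException) { Console.WriteLine("long: throws"); }

        // decimal behaves like the integers it is built on.
        decimal dzero = 0m;
        try { Console.WriteLine(1m / dzero); }
        catch (DivideByZeroException) { Console.WriteLine("decimal: throws"); }
    }
}
```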
You can alter the overflow behaviour of the native integer types in .NET with the checked { ... } and unchecked { ... } constructs, with a compiler option setting the default: see the documentation for checked and unchecked on MSDN.
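A quick illustration of the difference (again my own sketch, not code from the linked page):

```csharp
using System;

class OverflowDemo
{
    static void Main()
    {
        long big = long.MaxValue;

        // unchecked (the usual default): overflow silently wraps around.
        Console.WriteLine(unchecked(big + 1));   // prints long.MinValue

        // checked: the same addition throws at runtime instead.
        try
        {
            Console.WriteLine(checked(big + 1));
        }
        catch (OverflowException)
        {
            Console.WriteLine("checked: overflow throws");
        }
    }
}
```

Note that decimal is always checked: decimal arithmetic throws OverflowException regardless of the checked/unchecked context.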