System.OverflowException
The exception that is thrown when an arithmetic, casting, or conversion operation in a checked context results in an overflow.
Minimum version: .NET Framework >= 1.1, .NET Core >= 1.0
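For example, each of the following operations throws the exception at run time (a minimal sketch; the specific values are arbitrary):

using System;

class OverflowExamples
{
    static void Main()
    {
        int x = int.MaxValue;
        long big = long.MaxValue;

        // Arithmetic in a checked context: int.MaxValue + 1 does not fit in an int.
        try { Console.WriteLine(checked(x + 1)); }
        catch (OverflowException e) { Console.WriteLine(e.Message); }

        // Casting in a checked context: long.MaxValue does not fit in an int.
        try { Console.WriteLine(checked((int)big)); }
        catch (OverflowException e) { Console.WriteLine(e.Message); }

        // Conversion: Convert.ToInt32 checks the range of its argument.
        try { Console.WriteLine(Convert.ToInt32(double.MaxValue)); }
        catch (OverflowException e) { Console.WriteLine(e.Message); }
    }
}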
How to handle it
try
{
    // Code that may throw a System.OverflowException.
}
catch (System.OverflowException e)
{
    // Handle the exception here.
}

try
{
    // Code that may throw a System.OverflowException.
}
catch (System.OverflowException e) when (e.Message.Contains("something"))
{
    // Only handles exceptions whose message matches the filter.
}

try
{
    // Code that may throw a System.OverflowException.
}
catch (System.OverflowException e) when (LogException(e))
{
    // Never reached: LogException always returns false.
}

private static bool LogException(Exception e)
{
    logger.LogError(...);
    // Returning false means the filter never matches: the exception is logged
    // but keeps propagating with its original stack intact.
    return false;
}
How to avoid it
We haven't written anything about avoiding this exception yet. Got a good tip on how to avoid throwing System.OverflowException? Feel free to reach out through the support widget in the lower right corner with your suggestions.
Possible fixes from StackOverflow
// Persist the duration as a tick count (a long) and expose a TimeSpan wrapper
// for application code; [NotMapped] keeps the wrapper out of the database model.
[Browsable(false)]
[EditorBrowsable(EditorBrowsableState.Never)]
[Obsolete("Property '" + nameof(Duration) + "' should be used instead.")]
public long DurationTicks { get; set; }

[NotMapped]
public TimeSpan Duration
{
#pragma warning disable 618
    get { return new TimeSpan(DurationTicks); }
    set { DurationTicks = value.Ticks; }
#pragma warning restore 618
}
Update
This is now achievable since EF Core 2.1, using Value Conversion.
builder.Entity<Stage>()
.Property(s => s.Span)
.HasConversion(new TimeSpanToTicksConverter()); // or TimeSpanToStringConverter
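For context, a minimal sketch of where that call sits, assuming EF Core 2.1 or later; the Stage entity and Span property come from the snippet above, while the AppDbContext name is hypothetical:

using System;
using Microsoft.EntityFrameworkCore;
using Microsoft.EntityFrameworkCore.Storage.ValueConversion;

public class Stage
{
    public int Id { get; set; }
    public TimeSpan Span { get; set; }
}

public class AppDbContext : DbContext
{
    public DbSet<Stage> Stages { get; set; }

    protected override void OnModelCreating(ModelBuilder builder)
    {
        // Store the TimeSpan as a 64-bit tick count instead of a time column,
        // sidestepping the limited range of SQL Server's time type that can
        // otherwise overflow for long durations.
        builder.Entity<Stage>()
               .Property(s => s.Span)
               .HasConversion(new TimeSpanToTicksConverter());
    }
}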
The current implementation of System.Array uses Int32 for all its internal counters etc., so the theoretical maximum number of elements is Int32.MaxValue.
There's also a 2GB max-size-per-object limit imposed by the Microsoft CLR.
A good discussion and workaround here...
And a few related, not-quite-duplicate, questions and answers here...
Because the specification says so in section 7.6.10.4:
Each expression in the expression list must be of type int, uint, long, or ulong, or implicitly convertible to one or more of these types.
This is most likely there to allow the easy creation of arrays larger than 2 GiB, even though they are not supported yet (but will be, without a language change, once the CLR makes such a change). Mono does support this, however, and .NET 4.5 apparently will allow larger arrays too.
Regarding array length being an int, by the way: there is also LongLength, which returns a long. This was in .NET 1.1 already and was probably a future-proofing change.
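A small sketch of both points, assuming an ordinary array (the size here is arbitrary; no oversized allocation is attempted):

using System;

class ArrayLengthDemo
{
    static void Main()
    {
        var data = new byte[1000];

        int length = data.Length;          // the usual Int32 length
        long longLength = data.LongLength; // the Int64 length mentioned above

        // Index expressions of type long are accepted by the compiler, as the
        // quoted specification text allows, even though arrays are currently
        // limited to Int32.MaxValue elements.
        long index = 999;
        data[index] = 42;

        Console.WriteLine($"{length} {longLength} {data[index]}");
    }
}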
This corner-case is very specifically addressed in the compiler. Most relevant comments and code in the Roslyn source:
// Although remainder and division always overflow at runtime with arguments int.MinValue/long.MinValue and -1
// (regardless of checked context) the constant folding behavior is different.
// Remainder never overflows at compile time while division does.
newValue = FoldNeverOverflowBinaryOperators(kind, valueLeft, valueRight);
And:
// MinValue % -1 always overflows at runtime but never at compile time
case BinaryOperatorKind.IntRemainder:
return (valueRight.Int32Value != -1) ? valueLeft.Int32Value % valueRight.Int32Value : 0;
case BinaryOperatorKind.LongRemainder:
return (valueRight.Int64Value != -1) ? valueLeft.Int64Value % valueRight.Int64Value : 0;
The legacy C++ version of the compiler, going all the way back to version 1, behaved the same way. From the SSCLI v1.0 distribution, clr/src/csharp/sccomp/fncbind.cpp source file:
case EK_MOD:
// if we don't check this, then 0x80000000 % -1 will cause an exception...
if (d2 == -1) {
result = 0;
} else {
result = d1 % d2;
}
break;
So the conclusion to draw is that this was not overlooked or forgotten about, at least not by the programmers that worked on the compiler; it could perhaps be qualified as insufficiently precise language in the C# language specification. More about the runtime trouble caused by this killer poke in this post.
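A short sketch of the difference described above: the same remainder folds to 0 when the compiler evaluates it as a constant, but throws when evaluated at run time.

using System;

class RemainderOverflowDemo
{
    // Constant folding: the compiler evaluates int.MinValue % -1 to 0, so this
    // compiles. (The corresponding constant division, int.MinValue / -1, is
    // rejected at compile time with CS0220 because division does overflow.)
    const int Folded = int.MinValue % -1;

    static void Main()
    {
        Console.WriteLine(Folded); // 0

        int dividend = int.MinValue;
        int divisor = -1;
        try
        {
            // Evaluated at run time: per the C# specification this throws
            // System.OverflowException, regardless of checked/unchecked context.
            Console.WriteLine(dividend % divisor);
        }
        catch (OverflowException e)
        {
            Console.WriteLine(e.Message);
        }
    }
}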
It's not something you are doing wrong, apart from being overly precise perhaps. I don't think it's a new problem either.
You could argue it's a bug or just a gap in functionality. The .NET Decimal structure simply can't represent the value that is stored in your SQL Server decimal, so an OverflowException is thrown.
Either you need to manipulate the value into something compatible in the database before you retrieve it, or read the data out in a raw binary or string format and manipulate it on the .NET side.
Alternatively, you could write a new type that handles it.
It's probably simpler just to use a compatible decimal definition in the first place, unless you really need that precision. If you do, I'd be interested to know why.
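A minimal sketch of the string-based workaround, assuming the value comes from SQL Server via a SqlDataReader; the connection string, table, and column names are placeholders:

using System;
using System.Data.SqlClient;
using System.Data.SqlTypes;

class WideDecimalRead
{
    static void Main()
    {
        using (var connection = new SqlConnection("<connection string>"))
        {
            connection.Open();
            // The column is assumed to be a wide decimal, e.g. decimal(38, 10),
            // whose values may not fit into System.Decimal.
            using (var command = new SqlCommand("SELECT WideValue FROM SomeTable", connection))
            using (var reader = command.ExecuteReader())
            {
                while (reader.Read())
                {
                    // reader.GetDecimal(0) could throw OverflowException here;
                    // SqlDecimal keeps the full precision, which can then be
                    // inspected or manipulated as a string on the .NET side.
                    SqlDecimal raw = reader.GetSqlDecimal(0);
                    Console.WriteLine(raw.ToString());
                }
            }
        }
    }
}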
Source: Stack Overflow