Describe the bug
Attempting to cast a string containing a valid ISO 8601 timestamp with a large date (e.g., +10999-12-31T00:00:00) results in:
Error parsing timestamp from '+10999-12-31T00:00:00': error parsing date
To Reproduce
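The following is a minimal sketch of the failing cast (illustrative, not the reporter's original snippet), assuming the `arrow` crate's `cast` kernel; the microsecond target type is an arbitrary choice:

```rust
use arrow::array::{ArrayRef, StringArray};
use arrow::compute::cast;
use arrow::datatypes::{DataType, TimeUnit};
use std::sync::Arc;

fn main() {
    // A valid ISO 8601 timestamp with an expanded (five-digit) year.
    let input: ArrayRef = Arc::new(StringArray::from(vec!["+10999-12-31T00:00:00"]));

    // Fails with:
    // Error parsing timestamp from '+10999-12-31T00:00:00': error parsing date
    let result = cast(&input, &DataType::Timestamp(TimeUnit::Microsecond, None));
    println!("{result:?}");
}
```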
Expected behavior
The cast succeeds.
Additional context
I think this issue is similar to #7073, but this one affects timestamps, whereas the other affected dates.
I ran into this issue while parsing a metadata file of a Delta Lake table containing large dates. The table was created using Spark SQL, i.e., this issue occurs in the wild.
I've actually been debugging a similar issue for DataFusion Comet and will open a related issue shortly. The problem may stem from the fact that Spark still defaults to writing INT96 for timestamps. In my case, we read back a Parquet file written with large timestamp values, and arrow-rs coerces them into a Timestamp(TimeUnit::Nanosecond, None) by default, which cannot represent as large a date range as INT96.
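For reference, a quick sketch (using the `chrono` crate; not part of the original report) of where i64 nanosecond timestamps top out, which is why a year-10999 value cannot survive the coercion:

```rust
// i64 nanoseconds since the Unix epoch run out in the year 2262, whereas
// Parquet's INT96 encodes a Julian day (i32) plus nanoseconds-of-day (i64),
// covering a far wider range of days.
use chrono::DateTime;

fn main() {
    // Largest instant representable as i64 nanoseconds since the epoch.
    let max = DateTime::from_timestamp_nanos(i64::MAX);
    println!("max nanosecond timestamp: {max}"); // 2262-04-11 23:47:16.854775807 UTC
}
```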