This was found in open source issue #70.
Consider the following sequence:
Consider an ACID table `test` with a single column `i` of type integer. A DataFrame containing (1, 2, 3) is inserted into it using the HiveAcidTable API `insertInto(df: DataFrame, statementId: Option[Int] = None): Unit`, passing statement ID 1.
Then do another insert with a DataFrame containing (4, 5, 6), but this time pass no statement ID.
As a result, two directory structures are created:
first transaction:
hdfs://0.0.0.0:9000/tmp/hive/test/delta_0000001_0000001_0001/bucket_0000
second transaction:
hdfs://0.0.0.0:9000/tmp/hive/test/delta_0000003_0000003/bucket_0000
When you update or delete i = 1 and i = 4, which sit in different files but share bucket ID 0, both rows need to be written to the same bucket file in the new transaction's delete_delta and delta directories.
Note, however, that even though both rows belong to bucket 0, their rowId.bucketId values differ because they carry different statement IDs. rowId.bucketId is encoded as follows:
Represents the format of the "bucket" property in Hive 3.0:
- top 3 bits: version code
- next 1 bit: reserved for future use
- next 12 bits: the bucket ID
- next 4 bits: reserved for future use
- remaining 12 bits: the statement ID, a 0-based numbering of all statements within a transaction; each leg of a multi-insert statement gets a separate statement ID

The reserved bits align the fields so the value is easier to interpret in hex.
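The layout above can be sketched with plain bit arithmetic. This is a minimal illustration of the described encoding, not Hive's actual implementation (which lives in Hive's `BucketCodec`); the class and method names here are hypothetical:

```java
// Sketch of the 32-bit "bucket" property layout described above:
// [31:29] version | [28] reserved | [27:16] bucket ID | [15:12] reserved | [11:0] statement ID
// Class and method names are illustrative, not Hive's real API.
public class BucketCodecSketch {
    static int encode(int version, int bucketId, int statementId) {
        return (version << 29) | ((bucketId & 0xFFF) << 16) | (statementId & 0xFFF);
    }

    static int decodeBucketId(int encoded) {
        return (encoded >>> 16) & 0xFFF;  // pull out bits [27:16]
    }

    static int decodeStatementId(int encoded) {
        return encoded & 0xFFF;           // pull out bits [11:0]
    }

    public static void main(String[] args) {
        // Row i = 1: bucket 0, statement ID 1; row i = 4: bucket 0, default statement ID 0
        int a = encode(1, 0, 1);
        int b = encode(1, 0, 0);
        System.out.println(a == b);                                 // false: encoded values differ
        System.out.println(decodeBucketId(a) == decodeBucketId(b)); // true: same bucket ID
    }
}
```

This shows concretely why the two rows compare unequal on the full rowId.bucketId even though they decode to the same bucket.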
Since we are repartitioning based on the full rowId.bucketId, rows 1 and 4 will be processed by different tasks, and those tasks conflict because they need to write to the same file. Hence we will have to repartition on just the bucket ID encoded within rowId.bucketId.
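The effect of the two partitioning keys can be sketched with a toy hash-partitioner. This is a standalone illustration under the bit layout described above (the encoding helpers and the modulo partitioner are assumptions, not the real Spark shuffle):

```java
// Toy demonstration: partitioning on the full encoded value vs. the decoded bucket ID.
// Encoding follows the layout described in the issue; partitioning is a simple modulo,
// standing in for a real hash partitioner.
public class RepartitionKeySketch {
    static int encode(int version, int bucketId, int statementId) {
        return (version << 29) | ((bucketId & 0xFFF) << 16) | (statementId & 0xFFF);
    }

    static int bucketOf(int encoded) {
        return (encoded >>> 16) & 0xFFF;
    }

    public static void main(String[] args) {
        int row1 = encode(1, 0, 1); // i = 1, inserted with statementId = 1
        int row4 = encode(1, 0, 0); // i = 4, inserted with no statement ID
        int numPartitions = 8;

        // Buggy key: full rowId.bucketId -> the two rows can land in different
        // tasks, and both tasks then try to write the same bucket file.
        int p1 = Math.floorMod(row1, numPartitions);
        int p4 = Math.floorMod(row4, numPartitions);
        System.out.println(p1 != p4); // true: different tasks, write conflict

        // Fixed key: decoded bucket ID -> both rows go to the same task, so a
        // single writer produces the bucket file in delta / delete_delta.
        int q1 = Math.floorMod(bucketOf(row1), numPartitions);
        int q4 = Math.floorMod(bucketOf(row4), numPartitions);
        System.out.println(q1 == q4); // true: same task, no conflict
    }
}
```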