
[HUDI-8072] Enable log file only test for Hive MOR tables #12669

Open · wants to merge 1 commit into base: master
@@ -36,6 +36,7 @@
 import org.apache.hudi.exception.HoodieIOException;
 import org.apache.hudi.exception.HoodieUpsertException;
 import org.apache.hudi.execution.JavaLazyInsertIterable;
+import org.apache.hudi.io.AppendHandleFactory;
 import org.apache.hudi.io.CreateHandleFactory;
 import org.apache.hudi.io.HoodieMergeHandle;
 import org.apache.hudi.io.HoodieMergeHandleFactory;
@@ -268,6 +269,10 @@ public Iterator<List<WriteStatus>> handleInsert(String idPfx, Iterator<HoodieRecord<T>> recordItr) {
       LOG.info("Empty partition");
       return Collections.singletonList((List<WriteStatus>) Collections.EMPTY_LIST).iterator();
     }
+    if (table.getIndex().canIndexLogFiles()) {
+      return new JavaLazyInsertIterable<>(recordItr, true, config, instantTime, table, idPfx,
+          taskContextSupplier, new AppendHandleFactory());
+    }
Comment on lines +272 to +275 (Contributor): Should this be put into BaseJavaDeltaCommitActionExecutor? COW with commit action should not use the append handle.

     return new JavaLazyInsertIterable<>(recordItr, true, config, instantTime, table, idPfx,
         taskContextSupplier, new CreateHandleFactory<>());
   }
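The reviewer's concern boils down to a dispatch rule: only MOR delta commits may hand inserts to an append handle (writing log files directly), and only when the index can locate records inside log files; a COW commit action must always go through the create handle and produce base files. The sketch below illustrates that rule in isolation. It is not Hudi code — the names `HandleKind`, `chooseHandleKind`, and the boolean parameters are hypothetical stand-ins for `AppendHandleFactory`/`CreateHandleFactory` and `HoodieIndex#canIndexLogFiles`.

```java
// Illustrative sketch only: names below are hypothetical, not Hudi APIs.
public class HandleFactorySketch {
  enum HandleKind { CREATE, APPEND }

  // An append handle (log-file writes) is safe only for MOR delta commits,
  // and only if the index can later find records that live in log files
  // (e.g. InMemoryIndex). COW commit actions must always create base files.
  static HandleKind chooseHandleKind(boolean isDeltaCommit, boolean canIndexLogFiles) {
    if (isDeltaCommit && canIndexLogFiles) {
      return HandleKind.APPEND;
    }
    return HandleKind.CREATE;
  }

  public static void main(String[] args) {
    // COW commit never appends, even with a log-capable index.
    assert chooseHandleKind(false, true) == HandleKind.CREATE;
    // MOR delta commit with a log-capable index produces log-file-only inserts.
    assert chooseHandleKind(true, true) == HandleKind.APPEND;
    // MOR delta commit with a bloom-style index still needs base files.
    assert chooseHandleKind(true, false) == HandleKind.CREATE;
    System.out.println("ok");
  }
}
```

Placing the branch in the delta-commit executor (as the reviewer suggests) would encode the first condition structurally rather than re-checking it in a shared code path.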
@@ -67,7 +67,6 @@
 import org.apache.hadoop.mapred.Reporter;
 import org.junit.jupiter.api.AfterAll;
 import org.junit.jupiter.api.BeforeAll;
-import org.junit.jupiter.api.Disabled;
 
 import java.io.IOException;
 import java.util.ArrayList;
@@ -87,12 +86,6 @@
 import static org.junit.jupiter.api.Assertions.assertTrue;
 
 public class TestHoodieFileGroupReaderOnHive extends TestHoodieFileGroupReaderBase<ArrayWritable> {
-
-  @Override
-  @Disabled("[HUDI-8072]")
-  public void testReadLogFilesOnlyInMergeOnReadTable(RecordMergeMode recordMergeMode, String logDataBlockFormat) throws Exception {
-  }
-
   private static final String PARTITION_COLUMN = "datestr";
   private static JobConf baseJobConf;
   private static HdfsTestService hdfsTestService;
@@ -155,13 +155,14 @@ public void testReadLogFilesOnlyInMergeOnReadTable(RecordMergeMode recordMergeMode, String logDataBlockFormat) throws Exception {
     // Use InMemoryIndex to generate log only mor table
     writeConfigs.put("hoodie.index.type", "INMEMORY");
 
+
     try (HoodieTestDataGenerator dataGen = new HoodieTestDataGenerator(0xDEEF)) {
-      // One commit; reading one file group containing a base file only
-      commitToTable(dataGen.generateInserts("001", 100), INSERT.value(), writeConfigs);
+      // One commit: reading one file group containing 1 log file only.
(yihua marked this conversation as resolved.)
+      commitToTable(dataGen.generateInserts("001", 100), UPSERT.value(), writeConfigs);
Comment (Contributor): Could you add a validation on the precondition that there is no base file after each commit?

       validateOutputFromFileGroupReader(
           getStorageConf(), getBasePath(), dataGen.getPartitionPaths(), false, 1, recordMergeMode);
 
-      // Two commits; reading one file group containing a base file and a log file
+      // Two commits: reading one file group with 2 log files only.
       commitToTable(dataGen.generateUpdates("002", 100), UPSERT.value(), writeConfigs);
       validateOutputFromFileGroupReader(
           getStorageConf(), getBasePath(), dataGen.getPartitionPaths(), false, 2, recordMergeMode);
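The reviewer's requested precondition — that each commit leaves the file group with log files only and no base file — could be sketched as a small classification check. This is an illustrative sketch, not the actual test change: in the real test the check would inspect the table's file system view (latest file slices), while here the names `isBaseFile` and `hasNoBaseFile` are hypothetical helpers classifying file names by extension convention.

```java
import java.util.Arrays;
import java.util.List;

// Illustrative sketch of the suggested precondition: after each commit to the
// log-only MOR table, no file slice should contain a base file. Helper names
// are hypothetical; a real check would query Hudi's file system view.
public class LogOnlyPreconditionSketch {
  // By convention, base files use columnar formats; log files carry ".log." in the name.
  static boolean isBaseFile(String fileName) {
    return fileName.endsWith(".parquet")
        || fileName.endsWith(".orc")
        || fileName.endsWith(".hfile");
  }

  static boolean hasNoBaseFile(List<String> fileSliceFiles) {
    return fileSliceFiles.stream().noneMatch(LogOnlyPreconditionSketch::isBaseFile);
  }

  public static void main(String[] args) {
    // A log-only file slice, as the AppendHandleFactory path should produce.
    List<String> logOnly = Arrays.asList(
        ".fg1_20240101.log.1_0-1-0",
        ".fg1_20240101.log.2_0-2-0");
    assert hasNoBaseFile(logOnly);

    // A slice containing a base file would violate the precondition.
    List<String> withBase = Arrays.asList("fg1_0-1-0_20240101.parquet");
    assert !hasNoBaseFile(withBase);
    System.out.println("ok");
  }
}
```

Asserting this after each of the two commits would catch a regression where the writer silently falls back to creating base files despite the INMEMORY index configuration.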