
[SUPPORT] MOR table syncing to catalogs: Hive metadata refresh issue #12901

Open · Toroidals opened this issue Mar 3, 2025 · 2 comments
Labels: hive (Issues related to hive meta-sync)

@Toroidals commented Mar 3, 2025

Tips before filing an issue

  • Have you gone through our FAQs? Yes.

  • Join the mailing list to engage in conversations and get faster support at [email protected].

  • If you have triaged this as a bug, then file an issue directly.

Describe the problem you faced

Flink write: We use the Flink upsert method to write to a Hudi MOR table and sync it to Hive. When querying the MOR table's RO view with Spark SQL, an error occurs: "Error in query: Table or view not found: xx.xxx_ro; line 120 pos 19." Inspecting the Hive metastore shows that the table's metadata record is sometimes present and sometimes absent (as if it were being deleted and then recreated), which causes intermittent query failures reporting that the table does not exist (see the metastore probe sketch after the plan below).

Spark SQL read: The error occurs in roughly one out of every ten queries: "Error in query: Table or view not found: ods.ods_xxx_cmf_fin_pa_projects_cdc; line 120 pos 19". The unresolved logical plan at the time of failure:
'Aggregate [unresolvedalias(count(1), None)]
+- 'SubqueryAlias a
+- 'Project ['ccch.header_claim_header_id, 'ccch.header_opr_company_id, 'ccch.header_company_segment, 'ccch.header_claim_type_id, 'ccch.header_claim_type_code, unresolvedalias('to_utc_timestamp('ccch.header_creation_date, Asia/Shanghai), None), unresolvedalias('to_utc_timestamp('ccch.header_gl_date, Asia/Shanghai), None), 't.account_line_id, 't.claim_header_id, 't.code_combination_id, 't.entered_dr, 't.entered_cr, 't.description, 't.object_version_number, unresolvedalias('to_utc_timestamp('t.creation_date, Asia/Shanghai), None), 't.created_by, 't.last_updated_by, unresolvedalias('to_utc_timestamp('t.last_update_date, Asia/Shanghai), None), 't.last_update_login, 't.program_application_id, 't.program_id, unresolvedalias('to_utc_timestamp('t.program_update_date, Asia/Shanghai), None), 't.request_id, 't.attribute_category, ... 90 more fields]
+- 'Filter (('t._flink_cdc_ts_ms >= 2025-02-26 14:52:27) AND ('t._flink_cdc_ts_ms <= 2025-02-27 13:36:13))
+- 'Join LeftOuter, ('fbcm.project_id = 'cfbp1.project_id)
:- 'Join LeftOuter, ('houb.unit_id = 'fbcm.unit_id)
: :- 'Join LeftOuter, ('fcb.company_id = 'fbcm.company_id)
: : :- 'Join LeftOuter, (('cfbad.budget_id = 'fbcm.account_id) AND NOT ('cfbad._flink_cdc_op = d))
: : : :- 'Join LeftOuter, (('fbcm.comb_map_id = 't.budget_key) AND NOT ('fbcm._flink_cdc_op = d))
: : : : :- 'Join LeftOuter, (('cfba.business_activity_id = 't.business_activity_id) AND NOT ('cfba._flink_cdc_op = d))
: : : : : :- 'Join LeftOuter, (('cfbsc.bz_small_class_id = 't.bz_small_class_id) AND NOT ('cfbsc._flink_cdc_op = d))
: : : : : : :- 'Join LeftOuter, (('t.contract_id = 'con.contract_header_id) AND NOT ('con._flink_cdc_op = d))
: : : : : : : :- 'Join LeftOuter, ('t.customer_id = 'cfac.customer_id)
: : : : : : : : :- 'Join LeftOuter, ('t.budget_project_id = 'cfbp.project_id)
: : : : : : : : : :- 'Join LeftOuter, ('cfpp.project_id = 't.project_id)
: : : : : : : : : : :- Join Inner, (header_claim_header_id#107 = claim_header_id#20)
: : : : : : : : : : : :- SubqueryAlias t
: : : : : : : : : : : : +- SubqueryAlias spark_catalog.ods.ods_xxx_cmf_clm_gl_account_lines_cdc
: : : : : : : : : : : : +- Project [_hoodie_commit_time#14, _hoodie_commit_seqno#15, _hoodie_record_key#16, _hoodie_partition_path#17, _hoodie_file_name#18, account_line_id#19, claim_header_id#20, code_combination_id#21, entered_dr#22, entered_cr#23, description#24, object_version_number#25, creation_date#26, created_by#27, last_updated_by#28, last_update_date#29, last_update_login#30, program_application_id#31, program_id#32, program_update_date#33, request_id#34, attribute_category#35, attribute1#36, attribute2#37, ... 69 more fields]
: : : : : : : : : : : : +- Relation ods.ods_xxx_cmf_clm_gl_account_lines_cdc[_hoodie_commit_time#14,_hoodie_commit_seqno#15,_hoodie_record_key#16,_hoodie_partition_path#17,_hoodie_file_name#18,account_line_id#19,claim_header_id#20,code_combination_id#21,entered_dr#22,entered_cr#23,description#24,object_version_number#25,creation_date#26,created_by#27,last_updated_by#28,last_update_date#29,last_update_login#30,program_application_id#31,program_id#32,program_update_date#33,request_id#34,attribute_category#35,attribute1#36,attribute2#37,... 69 more fields] parquet
: : : : : : : : : : : +- SubqueryAlias ccch
: : : : : : : : : : : +- SubqueryAlias spark_catalog.dwd.dwd_xxx_cmf_clm_claim_headers_pub_upd
: : : : : : : : : : : +- Relation dwd.dwd_xxx_cmf_clm_claim_headers_pub_upd[header_claim_header_id#107,header_opr_company_id#108,header_company_segment#109,header_claim_type_id#110,header_claim_type_code#111,header_creation_date#112,header_gl_date#113,province_code#114] orc
: : : : : : : : : : +- 'SubqueryAlias cfpp
: : : : : : : : : : +- 'UnresolvedRelation [ods, ods_xxx_cmf_fin_pa_projects_cdc], [], false
: : : : : : : : : +- SubqueryAlias cfbp
: : : : : : : : : +- SubqueryAlias spark_catalog.ods.ods_xxx_cmf_fin_budget_projects_cdc
: : : : : : : : : +- Relation ods.ods_xxx_cmf_fin_budget_projects_cdc[_hoodie_commit_time#115,_hoodie_commit_seqno#116,_hoodie_record_key#117,_hoodie_partition_path#118,_hoodie_file_name#119,project_id#120,project_number#121,project_name#122,description#123,project_type#124,company_id#125,status#126,relevant_department_id#127,start_date#128,end_date#129,object_version_number#130,creation_date#131,created_by#132,last_updated_by#133,last_update_date#134,last_update_login#135,program_application_id#136,program_id#137,program_update_date#138,... 31 more fields] parquet
: : : : : : : : +- SubqueryAlias cfac
: : : : : : : : +- SubqueryAlias spark_catalog.ods.ods_xxx_cmf_fin_ar_customers_cdc
: : : : : : : : +- Relation ods.ods_xxx_cmf_fin_ar_customers_cdc[_hoodie_commit_time#170,_hoodie_commit_seqno#171,_hoodie_record_key#172,_hoodie_partition_path#173,_hoodie_file_name#174,customer_id#175,customer_number#176,customer_name#177,customer_short_name#178,source_system_code#179,customer_type#180,orgcert_number#181,customer_class#182,inner_code#183,vat_registration_num#184,status#185,start_date#186,end_date#187,object_version_number#188,creation_date#189,created_by#190,last_updated_by#191,last_update_date#192,last_update_login#193,... 35 more fields] parquet
: : : : : : : +- SubqueryAlias con
: : : : : : : +- SubqueryAlias spark_catalog.ods.ods_xxx_cmf_cm_contract_headers_cdc
: : : : : : : +- Relation ods.ods_xxx_cmf_cm_contract_headers_cdc[_hoodie_commit_time#229,_hoodie_commit_seqno#230,_hoodie_record_key#231,_hoodie_partition_path#232,_hoodie_file_name#233,contract_header_id#234,contract_self_number#235,contract_number#236,contract_name#237,province_code#238,create_company_id#239,create_dept_id#240,create_user_id#241,contact_tel#242,contract_subject#243,superior_sign_flag#244,retroactive_flag#245,sup_agreement_type#246,orig_contract_number#247,orig_contract_name#248,draft_date#249,approve_date#250,signed_date#251,archive_date#252,... 94 more fields] parquet
: : : : : : +- SubqueryAlias cfbsc
: : : : : : +- SubqueryAlias spark_catalog.ods.ods_xxx_cmf_fin_business_sm_class_cdc
: : : : : : +- Relation ods.ods_xxx_cmf_fin_business_sm_class_cdc[_hoodie_commit_time#347,_hoodie_commit_seqno#348,_hoodie_record_key#349,_hoodie_partition_path#350,_hoodie_file_name#351,bz_small_class_id#352,seq_num#353,class_code#354,class_name#355,enabled_flag#356,object_version_number#357,creation_date#358,created_by#359,last_updated_by#360,last_update_date#361,last_update_login#362,program_application_id#363,program_id#364,program_update_date#365,request_id#366,attribute_category#367,attribute1#368,attribute2#369,attribute3#370,... 19 more fields] parquet
: : : : : +- SubqueryAlias cfba
: : : : : +- SubqueryAlias spark_catalog.ods.ods_xxx_cmf_fin_business_activity_cdc
: : : : : +- Relation ods.ods_xxx_cmf_fin_business_activity_cdc[_hoodie_commit_time#390,_hoodie_commit_seqno#391,_hoodie_record_key#392,_hoodie_partition_path#393,_hoodie_file_name#394,business_activity_id#395,activity_code#396,activity_name#397,enabled_flag#398,object_version_number#399,creation_date#400,created_by#401,last_updated_by#402,last_update_date#403,last_update_login#404,program_application_id#405,program_id#406,program_update_date#407,request_id#408,attribute_category#409,attribute1#410,attribute2#411,attribute3#412,attribute4#413,... 32 more fields] parquet
: : : : +- SubqueryAlias fbcm
: : : : +- SubqueryAlias spark_catalog.ods.ods_xxx_cmf_fin_budget_comb_map_cdc
: : : : +- Relation ods.ods_xxx_cmf_fin_budget_comb_map_cdc[_hoodie_commit_time#446,_hoodie_commit_seqno#447,_hoodie_record_key#448,_hoodie_partition_path#449,_hoodie_file_name#450,comb_map_id#451,province_code#452,pri_key#453,company_id#454,project_id#455,unit_id#456,account_id#457,activity_code#458,business_activity_id#459,brand_code#460,usergroup_code#461,product_code#462,channel_code#463,strategy_code#464,enabled_flag#465,start_date#466,end_date#467,orig_update_date#468,object_version_number#469,... 47 more fields] parquet
: : : +- SubqueryAlias cfbad
: : : +- SubqueryAlias spark_catalog.ods.ods_xxx_cmf_fnd_budget_account_def_cdc
: : : +- Relation ods.ods_xxx_cmf_fnd_budget_account_def_cdc[_hoodie_commit_time#517,_hoodie_commit_seqno#518,_hoodie_record_key#519,_hoodie_partition_path#520,_hoodie_file_name#521,budget_id#522,budget_code#523,budget_name#524,province_code#525,enable_flag#526,object_version_number#527,creation_date#528,created_by#529,last_updated_by#530,last_update_date#531,last_update_login#532,program_application_id#533,program_id#534,program_update_date#535,request_id#536,attribute_category#537,attribute1#538,attribute2#539,attribute3#540,... 21 more fields] parquet
: : +- SubqueryAlias fcb
: : +- SubqueryAlias spark_catalog.ods.ods_xxx_fnd_company_b_cdc
: : +- Relation ods.ods_xxx_fnd_company_b_cdc[_hoodie_commit_time#562,_hoodie_commit_seqno#563,_hoodie_record_key#564,_hoodie_partition_path#565,_hoodie_file_name#566,company_id#567,company_code#568,company_type#569,address#570,company_level_id#571,parent_company_id#572,chief_position_id#573,start_date_active#574,end_date_active#575,company_short_name#576,company_full_name#577,zipcode#578,fax#579,phone#580,contact_person#581,object_version_number#582,request_id#583,program_id#584,created_by#585,... 67 more fields] parquet
: +- SubqueryAlias houb
: +- SubqueryAlias spark_catalog.ods.ods_xxx_hr_org_unit_b_cdc
: +- Relation ods.ods_xxx_hr_org_unit_b_cdc[_hoodie_commit_time#653,_hoodie_commit_seqno#654,_hoodie_record_key#655,_hoodie_partition_path#656,_hoodie_file_name#657,unit_id#658,parent_id#659,unit_code#660,name#661,description#662,manager_position#663,company_id#664,enabled_flag#665,object_version_number#666,request_id#667,program_id#668,created_by#669,creation_date#670,last_updated_by#671,last_update_date#672,last_update_login#673,unit_category#674,unit_type#675,unit_type_code#676,... 47 more fields] parquet
+- SubqueryAlias cfbp1
+- SubqueryAlias spark_catalog.ods.ods_xxx_cmf_fin_budget_projects_cdc
+- Relation ods.ods_xxx_cmf_fin_budget_projects_cdc[_hoodie_commit_time#725,_hoodie_commit_seqno#726,_hoodie_record_key#727,_hoodie_partition_path#728,_hoodie_file_name#729,project_id#730,project_number#731,project_name#732,description#733,project_type#734,company_id#735,status#736,relevant_department_id#737,start_date#738,end_date#739,object_version_number#740,creation_date#741,created_by#742,last_updated_by#743,last_update_date#744,last_update_login#745,program_application_id#746,program_id#747,program_update_date#748,... 31 more fields] parquet
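
Not from the original report: the metadata flapping described above can be confirmed by polling the metastore directly. A minimal diagnostic sketch, assuming standard Hive 3.x client APIs and reusing the database/table names and (placeholder) metastore URI from the sync config below:

import org.apache.hadoop.hive.conf.HiveConf;
import org.apache.hadoop.hive.metastore.HiveMetaStoreClient;

// Hypothetical diagnostic: polls the Hive Metastore once per second and logs
// whether the synced table is visible, to catch the drop-and-recreate window.
public class HmsTableProbe {
    public static void main(String[] args) throws Exception {
        HiveConf conf = new HiveConf();
        conf.set("hive.metastore.uris", "thrift://xx01:9083"); // placeholder URI from the report
        HiveMetaStoreClient client = new HiveMetaStoreClient(conf);
        try {
            for (int i = 0; i < 60; i++) {
                boolean exists = client.tableExists("hudi", "mor_test_01");
                System.out.println(System.currentTimeMillis() + " exists=" + exists);
                Thread.sleep(1000L);
            }
        } finally {
            client.close();
        }
    }
}

If the table intermittently disappears between polls while the Flink job is running, the sync path is dropping and recreating it rather than altering it in place.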

To Reproduce

Steps to reproduce the behavior:

1. Flink upsert write to Hudi:

import java.util.ArrayList;
import java.util.HashMap;
import java.util.Locale;
import java.util.Map;

import com.alibaba.fastjson.JSON;           // assumed: fastjson, inferred from JSON.parseObject/TypeReference usage
import com.alibaba.fastjson.TypeReference;

import org.apache.hudi.common.model.HoodieTableType;
import org.apache.hudi.common.model.OverwriteWithLatestAvroPayload;
import org.apache.hudi.common.model.WriteOperationType;
import org.apache.hudi.config.HoodieWriteConfig;
import org.apache.hudi.configuration.FlinkOptions;
import org.apache.hudi.index.HoodieIndex;
import org.apache.hudi.util.HoodiePipeline;
// NOTE: the import for OverwriteWithLatestMerger is omitted; depending on the Hudi
// version it may ship with Hudi or be a custom merger class in this project.

import lombok.extern.slf4j.Slf4j;           // assumed: the original's `log` suggests Lombok's @Slf4j

@Slf4j
public class CustomHudiStreamSink {

    public static HoodiePipeline.Builder getHoodieBuilder(HashMap<String, String> infoMap, HashMap<String, String> connectInfo) {
        HoodiePipeline.Builder builder = HoodiePipeline.builder(infoMap.get("hudi_table_name"));
        Map<String, String> options = new HashMap<>();
        options.put(FlinkOptions.DATABASE_NAME.key(), infoMap.get("hudi_database_name"));
        options.put(FlinkOptions.TABLE_NAME.key(), infoMap.get("hudi_table_name"));
        options.put(FlinkOptions.PATH.key(), infoMap.get("hudi_hdfs_path"));
        options.put("catalog.path", "hdfs:///apps/hudi/catalog/");

        String hudiFieldMap = infoMap.get("hudi_field_map").toLowerCase(Locale.ROOT);
        ArrayList<ArrayList<String>> fieldList =
                JSON.parseObject(hudiFieldMap, new TypeReference<ArrayList<ArrayList<String>>>() {});
        log.info("fieldList: {}", fieldList);
        for (ArrayList<String> columnList : fieldList) {
            builder.column("`" + columnList.get(0) + "` " + columnList.get(1));
        }

        // Add primary keys
        String[] hudiPrimaryKeys = infoMap.get("hudi_primary_key").split(",");
        builder.pk(hudiPrimaryKeys);

        options.put(FlinkOptions.PRECOMBINE_FIELD.key(), infoMap.get("hudi_precombine_field"));

        // Adjust the merge logic
        options.put(FlinkOptions.PAYLOAD_CLASS_NAME.key(), OverwriteWithLatestAvroPayload.class.getName());
        options.put(FlinkOptions.RECORD_MERGER_IMPLS.key(), OverwriteWithLatestMerger.class.getName());
        options.put(FlinkOptions.RECORD_MERGER_STRATEGY_ID.key(), "ce9acb64-bde0-424c-9b91-f6ebba25356d");

        options.put(FlinkOptions.TABLE_TYPE.key(), HoodieTableType.MERGE_ON_READ.name());
        options.put(FlinkOptions.INDEX_TYPE.key(), HoodieIndex.IndexType.BUCKET.name());
        options.put(FlinkOptions.BUCKET_INDEX_NUM_BUCKETS.key(), infoMap.get("hudi_bucket_index_num_buckets"));
        options.put(FlinkOptions.BUCKET_INDEX_ENGINE_TYPE.key(), infoMap.get("hudi_bucket_index_engine_type"));

        // Compaction strategy
        options.put(FlinkOptions.COMPACTION_TRIGGER_STRATEGY.key(), infoMap.get("hudi_compaction_trigger_strategy"));
        options.put(FlinkOptions.COMPACTION_DELTA_COMMITS.key(), infoMap.get("hudi_compaction_delta_commits"));
        options.put(FlinkOptions.COMPACTION_DELTA_SECONDS.key(), infoMap.get("hudi_compaction_delta_seconds"));
        options.put(FlinkOptions.COMPACTION_MAX_MEMORY.key(), infoMap.get("hudi_compaction_max_memory"));

        // Avoid the timeout exception: org.apache.hudi.exception.HoodieException:
        // Timeout(601000ms) while waiting for instant initialize
        options.put(HoodieWriteConfig.ALLOW_EMPTY_COMMIT.key(), "true");

        // Number of commits to retain
        options.put(FlinkOptions.CLEAN_RETAIN_COMMITS.key(), "150");

        // Hive sync settings
        options.put(FlinkOptions.HIVE_SYNC_ENABLED.key(), "true");
        options.put(FlinkOptions.HIVE_SYNC_MODE.key(), "hms");
        options.put(FlinkOptions.HIVE_SYNC_DB.key(), "hudi");
        options.put(FlinkOptions.HIVE_SYNC_TABLE.key(), "mor_test_01");
        options.put(FlinkOptions.HIVE_SYNC_CONF_DIR.key(), "/etc/hive/conf");
        options.put(FlinkOptions.HIVE_SYNC_METASTORE_URIS.key(), "thrift://xx01:9083,thrift://xx02:9083,thrift://xx03:9083");
        options.put(FlinkOptions.HIVE_SYNC_JDBC_URL.key(), "jdbc:hive2://xx01:21181,xx02:21181,xx03:21181/;serviceDiscoveryMode=zooKeeper;zooKeeperNamespace=hiveserver2");
        options.put(FlinkOptions.HIVE_SYNC_SUPPORT_TIMESTAMP.key(), "true");
        options.put(FlinkOptions.HIVE_SYNC_SKIP_RO_SUFFIX.key(), "true");

        options.put(FlinkOptions.PARTITION_PATH_FIELD.key(), "part_dt");
        options.put(FlinkOptions.HIVE_SYNC_PARTITION_FIELDS.key(), "part_dt");

        // Write rate limit
        options.put(FlinkOptions.WRITE_RATE_LIMIT.key(), "20000");

        // WRITE_TASKS: parallelism of the write tasks; defaults to the parallelism
        // of the execution environment. Must be a String in a Map<String, String>.
        options.put(FlinkOptions.WRITE_TASKS.key(), "8");

        // Write operation
        options.put(FlinkOptions.OPERATION.key(), WriteOperationType.UPSERT.value());

        builder.options(options);
        return builder;
    }
}
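
For context, a minimal sketch of how a builder like this is typically attached to a Flink streaming job via HoodiePipeline (not part of the original report; the RowData source and the two maps are placeholders to be filled from the actual job):

import java.util.HashMap;

import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.table.data.RowData;
import org.apache.hudi.util.HoodiePipeline;

public class HudiSinkJob {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
        env.enableCheckpointing(60_000L); // Hudi commits on Flink checkpoints

        HashMap<String, String> infoMap = new HashMap<>();     // populated from job config in practice
        HashMap<String, String> connectInfo = new HashMap<>();

        DataStream<RowData> rowDataStream = null; // placeholder: the CDC source mapped to RowData

        HoodiePipeline.Builder builder = CustomHudiStreamSink.getHoodieBuilder(infoMap, connectInfo);
        builder.sink(rowDataStream, false); // bounded = false for continuous upsert

        env.execute("hudi-mor-upsert");
    }
}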

  2. Spark SQL query:
    export SPARK_MAJOR_VERSION=3
    export SPARK_VERSION=3.3
    /usr/hdp/3.3.1.0-002/spark3/bin/spark-sql \
    --conf 'spark.serializer=org.apache.spark.serializer.KryoSerializer' \
    --conf 'spark.sql.extensions=org.apache.spark.sql.hudi.HoodieSparkSessionExtension' \
    --conf 'spark.sql.catalog.spark_catalog=org.apache.spark.sql.hudi.catalog.HoodieCatalog' \
    --conf 'spark.kryo.registrator=org.apache.spark.HoodieSparkKryoRegistrar' \
    --conf 'spark.port.maxRetries=100' \
    --conf 'spark.sql.codegen.maxFields=200' \
    --conf 'spark.sql.autoBroadcastJoinThreshold=-1' \
    --queue flink \
    --driver-memory 8G \
    --num-executors 20 \
    --executor-cores 2 \
    --executor-memory 8g

An error occurs approximately once every ten queries: "Error in query: Table or view not found: ods.ods_xxx_cmf_fin_pa_projects_cdc; line 120 pos 19".

Expected behavior

When Flink writes to Hudi and syncs metadata to Hive, the table should never be missing from the metastore during a refresh. The sync should update the metadata in place instead of deleting and then recreating it.
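
A related check, not from the original report: meta sync can also be run out of band with Hudi's standalone HiveSyncTool, which makes it easy to observe what a single sync pass does to the Hive table. A minimal sketch, assuming the documented hive_sync property keys; the base path and metastore URI are placeholders:

import java.util.Properties;

import org.apache.hadoop.conf.Configuration;
import org.apache.hudi.hive.HiveSyncTool;

// Hypothetical one-shot sync: run it while polling the metastore to see
// whether the pass updates the table in place or drops and recreates it.
public class ManualHiveSync {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.setProperty("hoodie.datasource.meta.sync.base.path", "hdfs:///apps/hudi/..."); // placeholder table base path
        props.setProperty("hoodie.datasource.hive_sync.mode", "hms");
        props.setProperty("hoodie.datasource.hive_sync.database", "hudi");
        props.setProperty("hoodie.datasource.hive_sync.table", "mor_test_01");
        props.setProperty("hoodie.datasource.hive_sync.metastore.uris", "thrift://xx01:9083"); // placeholder

        HiveSyncTool tool = new HiveSyncTool(props, new Configuration());
        tool.syncHoodieTable();
    }
}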

Environment Description

  • Hudi version : 1.0.0

  • Flink version : 1.15.2

  • Spark version : 3.3.2

  • Hive version : 3.1.3

  • Hadoop version : 3.3.4

  • Storage (HDFS/S3/GCS..) : HDFS

  • Running on Docker? (yes/no) : no


@danny0405 (Contributor) commented:

Are there any clues in the JM error log? Did you also check the Hive server log?

@danny0405 (Contributor) commented:

cc @cshuo to take care of this issue.

@danny0405 added labels meta-sync, flink, and hive (Issues related to hive), and removed meta-sync and flink, on Mar 3, 2025
github-project-automation bot moved this to ⏳ Awaiting Triage in Hudi Issue Support on Mar 3, 2025