From 8fa706423e2516fcbd5281a9a335072726a70fbf Mon Sep 17 00:00:00 2001
From: Scot Breitenfeld
Date: Thu, 30 May 2024 10:06:39 -0500
Subject: [PATCH 1/2] fixed spelling and added spell checking

---
 .codespellrc                    |   6 +
 .github/workflows/codespell.yml |  14 +++
 docs/source/asyncapi.rst        |   6 +-
 docs/source/bestpractice.rst    |   4 +-
 docs/source/debug.rst           |   2 +-
 docs/source/issue.rst           |   6 +-
 src/h5_async_vol.c              | 208 ++++++++++++++++----------------
 7 files changed, 133 insertions(+), 113 deletions(-)
 create mode 100644 .codespellrc
 create mode 100644 .github/workflows/codespell.yml

diff --git a/.codespellrc b/.codespellrc
new file mode 100644
index 0000000..f0c675e
--- /dev/null
+++ b/.codespellrc
@@ -0,0 +1,6 @@
+# Ref: https://github.com/codespell-project/codespell#using-a-config-file
+[codespell]
+skip = .git,.codespellrc
+check-hidden = true
+# ignore-regex =
+ignore-words-list = te
diff --git a/.github/workflows/codespell.yml b/.github/workflows/codespell.yml
new file mode 100644
index 0000000..1477b14
--- /dev/null
+++ b/.github/workflows/codespell.yml
@@ -0,0 +1,14 @@
+# GitHub Action to automate the identification of common misspellings in text files
+# https://github.com/codespell-project/codespell
+# https://github.com/codespell-project/actions-codespell
+name: codespell
+on: [push, pull_request]
+permissions:
+  contents: read
+jobs:
+  codespell:
+    name: Check for spelling errors
+    runs-on: ubuntu-latest
+    steps:
+      - uses: actions/checkout@v4.1.1
+      - uses: codespell-project/actions-codespell@master
diff --git a/docs/source/asyncapi.rst b/docs/source/asyncapi.rst
index 0c01276..018efd0 100644
--- a/docs/source/asyncapi.rst
+++ b/docs/source/asyncapi.rst
@@ -1,6 +1,6 @@
 Async VOL APIs
 ==============
-Besides the HDF5 EventSet and asynchronous I/O operation APIs, the async VOL connector also provides convinient functions for finer control of the asynchronous I/O operations. Application developers should be very careful with these APIs as they may cause unexpected behavior when not properly used. The "h5_async_lib.h" header file must be included in the application's source code and the static async VOL library (libasynchdf5.a) must be linked.
+Besides the HDF5 EventSet and asynchronous I/O operation APIs, the async VOL connector also provides convenient functions for finer control of the asynchronous I/O operations. Application developers should be very careful with these APIs as they may cause unexpected behavior when not properly used. The "h5_async_lib.h" header file must be included in the application's source code and the static async VOL library (libasynchdf5.a) must be linked.
 
 * Set the ``disable implicit`` flag to the property list, which will force all HDF5 I/O operations to be synchronous, even when the HDF5's explicit ``_async`` APIs are used.
 
@@ -19,7 +19,7 @@ Besides the HDF5 EventSet and asynchronous I/O operation APIs, the async VOL con
 .. code-block::
 
     // Set the pause flag to property list that pauses all asynchronous I/O operations.
-    // Note: similar to the disable implict flag setting, the operations are only paused when
+    // Note: similar to the disable implicit flag setting, the operations are only paused when
     // the dxpl is used by another HDF5 function call.
     herr_t H5Pset_dxpl_pause(hid_t dxpl, hbool_t is_pause);
 
@@ -47,7 +47,7 @@ Besides the HDF5 EventSet and asynchronous I/O operation APIs, the async VOL con
 
 .. note:: The operations are only delayed when the dxpl is used by another HDF5 function call.
 
-* Convinient APIs for other stacked VOL connectors
+* Convenient APIs for other stacked VOL connectors
 
 .. warning:: Following APIs are not intended for application use.
diff --git a/docs/source/bestpractice.rst b/docs/source/bestpractice.rst
index 10341e1..2d2cb8d 100644
--- a/docs/source/bestpractice.rst
+++ b/docs/source/bestpractice.rst
@@ -8,7 +8,7 @@ To take full advantage of the async I/O VOL connector, applications should have
 
 Inform async VOL on access pattern
 ----------------------------------
-By default, the async VOL detects whether the application is busy issuing HDF5 I/O calls or has moved on to other tasks (e.g., computation). If it finds no HDF5 function is called within a short wait period (600 ms by default), it will start the background thread to execute the tasks in the queue. Such status detection can avoid an effectively synchronous I/O when the application thread and the async VOL background thread acquire the HDF5 global mutex in an interleaved fashion. However, some applications may have larger time gaps than the default wait period between HDF5 function calls and experience partially asynchronous behavior. To avoid this, one can set the following environment variable to disable its active ''wait and check'' mechnism and inform async VOL when to start the async execution, this is especially useful for checkpointing data.
+By default, the async VOL detects whether the application is busy issuing HDF5 I/O calls or has moved on to other tasks (e.g., computation). If it finds no HDF5 function is called within a short wait period (600 ms by default), it will start the background thread to execute the tasks in the queue. Such status detection can avoid an effectively synchronous I/O when the application thread and the async VOL background thread acquire the HDF5 global mutex in an interleaved fashion. However, some applications may have larger time gaps than the default wait period between HDF5 function calls and experience partially asynchronous behavior. To avoid this, one can set the following environment variable to disable its active ''wait and check'' mechanism and inform async VOL when to start the async execution, this is especially useful for checkpointing data.
 
 .. code-block::
 
@@ -24,7 +24,7 @@ By default, the async VOL detects whether the application is busy issuing HDF5 I
 
 Mix sync and async operations
 -----------------------------
-It is generally discouraged to mix sync and async operations in an application, as deadlocks may occur unexpectedly. If it is unavoidable, we recommend to separate the sync and async operations as much as possible (ideally using different HDF5 file IDs, even they are opearting on the same file) and set the following FAPL property for the sync operations:
+It is generally discouraged to mix sync and async operations in an application, as deadlocks may occur unexpectedly. If it is unavoidable, we recommend to separate the sync and async operations as much as possible (ideally using different HDF5 file IDs, even they are operating on the same file) and set the following FAPL property for the sync operations:
 
 .. code-block::
 
diff --git a/docs/source/debug.rst b/docs/source/debug.rst
index c5aa6f9..17b3499 100644
--- a/docs/source/debug.rst
+++ b/docs/source/debug.rst
@@ -54,7 +54,7 @@ Following is an example of the messages printed out. By default, when running in
     [ASYNC VOL DBG] entering push_task_to_abt_pool
     [ASYNC VOL DBG] leaving push_task_to_abt_pool
     [ASYNC VOL DBG] async_file_create waiting to finish all previous tasks
-    [ASYNC ABT DBG] async_file_create_fn: trying to aquire global lock
+    [ASYNC ABT DBG] async_file_create_fn: trying to acquire global lock
     [ASYNC ABT DBG] async_file_create_fn: global lock acquired
 
 .. note::
diff --git a/docs/source/issue.rst b/docs/source/issue.rst
index 98d6c82..db8564f 100644
--- a/docs/source/issue.rst
+++ b/docs/source/issue.rst
@@ -3,7 +3,7 @@ Known Issues
 
 Slow performance with metadata heavy workload
 ---------------------------------------------
-Async VOL has additional overhead due to its internal management of asynchronous tasks and the background thread execution. If the application is metadata-intensive, e.g. create thousands of groups, datasets, or attributes, this overhead (~0.001s per operation) becomes comparable to the creation time, and could result in worse performance. There may also be additional overhead due to the *wait and check* mechnism unless ``HDF5_ASYNC_EXE_*`` is set.
+Async VOL has additional overhead due to its internal management of asynchronous tasks and the background thread execution. If the application is metadata-intensive, e.g. create thousands of groups, datasets, or attributes, this overhead (~0.001s per operation) becomes comparable to the creation time, and could result in worse performance. There may also be additional overhead due to the *wait and check* mechanism unless ``HDF5_ASYNC_EXE_*`` is set.
 
 ABT_thread_create SegFault
 --------------------------
@@ -20,7 +20,7 @@ When an application has a large number of HDF5 function calls, an error like the
 
     [ 2] 0   libabt.1.dylib      0x0000000105bdbdc0 ABT_thread_create + 128
     [ 3] 0   libh5async.dylib    0x00000001064bde1f push_task_to_abt_pool + 559
 
-This is due to the default Argobots thread stack size being too small, and can be resovled by setting the environement variable:
+This is due to the default Argobots thread stack size being too small, and can be resolved by setting the environment variable:
 
 .. code-block::
 
@@ -43,7 +43,7 @@ This `patch `_
 
 Synchronous H5Dget_space_async
 ------------------------------
-When an application calls H5Dget_space_async, and uses the dataspace ID immediately, a deadlock may occur occationally. Thus we force synchronous execution for H5Dget_space_async. To re-enable its asynchronous execution, set the following environement variable:
+When an application calls H5Dget_space_async, and uses the dataspace ID immediately, a deadlock may occur occasionally. Thus we force synchronous execution for H5Dget_space_async. To re-enable its asynchronous execution, set the following environment variable:
 
 .. code-block::
 
diff --git a/src/h5_async_vol.c b/src/h5_async_vol.c
index 638c4c0..996f2a0 100644
--- a/src/h5_async_vol.c
+++ b/src/h5_async_vol.c
@@ -64,7 +64,7 @@ works, and perform publicly and display publicly, and to permit others to do so.
 /* Experimental feature to merge dset R/W to multi-dset R/W */
 /* #define ENABLE_MERGE_DSET 1 */
 
-/* Whether to display log messge when callback is invoked */
+/* Whether to display log message when callback is invoked */
 /* (Uncomment to enable) */
 /* #define ENABLE_DBG_MSG 1 */
 /* #define PRINT_ERROR_STACK 1 */
@@ -206,7 +206,7 @@ typedef struct async_instance_t {
     bool     delay_time_env;         /* Flag that indicates the delay time is set by env variable */
     bool     disable_async_dset_get; /* Disable async execution for dataset get */
     uint64_t delay_time;             /* Sleep time before background thread trying to acquire global mutex */
-    int      sleep_time;             /* Sleep time between checking the global mutex attemp count */
+    int      sleep_time;             /* Sleep time between checking the global mutex attempt count */
     hid_t    under_vol_id;
 #ifdef ENABLE_WRITE_MEMCPY
     hsize_t max_mem;
@@ -1288,7 +1288,7 @@ async_instance_init(int backing_thread_count)
 /* fprintf(stderr, "  [ASYNC VOL DBG] Init Argobots with %d threads\n", backing_thread_count); */
 /* #endif */
 
-    /* Use mutex to guarentee there is only one Argobots IO instance (singleton) */
+    /* Use mutex to guarantee there is only one Argobots IO instance (singleton) */
     abt_ret = ABT_mutex_lock(async_instance_mutex_g);
     if (abt_ret != ABT_SUCCESS) {
         fprintf(stderr, "  [ASYNC VOL ERROR] with ABT_mutex_lock\n");
@@ -2451,7 +2451,7 @@ push_task_to_abt_pool(async_qhead_t *qhead, ABT_pool pool, const char *call_func
             fprintf(fout_g, "  [ASYNC VOL DBG] checking task func [%p] dependency\n", task_elt->func);
 #endif
             is_dep_done = 1;
-            // Check if depenent tasks are finished
+            // Check if dependent tasks are finished
             for (i = 0; i < task_elt->n_dep; i++) {
 
                 /* // If dependent parent failed, do not push to Argobots pool */
@@ -2801,7 +2801,7 @@ H5VL_async_object_wait(H5VL_async_t *async_obj)
         fprintf(fout_g, "  [ASYNC VOL ERROR] %s with H5TSmutex_release\n", __func__);
 
     // Check for all tasks on this dset of a file
-    // TODO: aquire queue mutex?
+    // TODO: acquire queue mutex?
     DL_FOREACH(async_instance_g->qhead.queue, task_iter)
     {
         if (task_iter->async_obj == async_obj) {
@@ -3177,7 +3177,7 @@ H5VL_async_set_delay_time(uint64_t time_us)
  *
  * \details     This function is a workaround of avoiding synchronous execution due to the HDF5 global
  *              mutex. If we start the background thread executing the task as they are created by
- *              the application, the backgrond thread will compete with the application thread for
+ *              the application, the background thread will compete with the application thread for
  *              acquiring the HDF5 mutex and may effective make everything synchronous. Thus we
  *              implemented this "spying" approach by checking the HDF5 global mutex counter value,
  *              if the value does not increase within a predefined (short) amount of time, then we
@@ -5241,7 +5241,7 @@ async_attr_create_fn(void *foo)
 
     pool_ptr = task->async_obj->pool_ptr;
 
-    func_log(__func__, "trying to aquire global lock");
+    func_log(__func__, "trying to acquire global lock");
 
     if ((attempt_count = check_app_acquire_mutex_fn(task, &mutex_count, &acquired)) < 0)
         goto done;
@@ -5281,7 +5281,7 @@ async_attr_create_fn(void *foo)
     }
     is_lib_state_restored = true;
 
-    /* Aquire async obj mutex and set the obj */
+    /* Acquire async obj mutex and set the obj */
     assert(task->async_obj->obj_mutex);
     assert(task->async_obj->magic == ASYNC_MAGIC);
     while (1) {
@@ -5596,7 +5596,7 @@ async_attr_open_fn(void *foo)
 
     pool_ptr = task->async_obj->pool_ptr;
 
-    func_log(__func__, "trying to aquire global lock");
+    func_log(__func__, "trying to acquire global lock");
 
     if ((attempt_count = check_app_acquire_mutex_fn(task, &mutex_count, &acquired)) < 0)
         goto done;
@@ -5636,7 +5636,7 @@ async_attr_open_fn(void *foo)
     }
     is_lib_state_restored = true;
 
-    /* Aquire async obj mutex and set the obj */
+    /* Acquire async obj mutex and set the obj */
     assert(task->async_obj->obj_mutex);
    assert(task->async_obj->magic == ASYNC_MAGIC);
     while (1) {
@@ -5940,7 +5940,7 @@ async_attr_read_fn(void *foo)
 
     pool_ptr = task->async_obj->pool_ptr;
 
-    func_log(__func__, "trying to aquire global lock");
+    func_log(__func__, "trying to acquire global lock");
 
     if ((attempt_count = check_app_acquire_mutex_fn(task, &mutex_count, &acquired)) < 0)
         goto done;
@@ -5980,7 +5980,7 @@ async_attr_read_fn(void *foo)
     }
     is_lib_state_restored = true;
 
-    /* Aquire async obj mutex and set the obj */
+    /* Acquire async obj mutex and set the obj */
     assert(task->async_obj->obj_mutex);
     assert(task->async_obj->magic == ASYNC_MAGIC);
     while (1) {
@@ -6231,7 +6231,7 @@ async_attr_write_fn(void *foo)
 
     pool_ptr = task->async_obj->pool_ptr;
 
-    func_log(__func__, "trying to aquire global lock");
+    func_log(__func__, "trying to acquire global lock");
 
     if ((attempt_count = check_app_acquire_mutex_fn(task, &mutex_count, &acquired)) < 0)
         goto done;
@@ -6271,7 +6271,7 @@ async_attr_write_fn(void *foo)
     }
     is_lib_state_restored = true;
 
-    /* Aquire async obj mutex and set the obj */
+    /* Acquire async obj mutex and set the obj */
     assert(task->async_obj->obj_mutex);
     assert(task->async_obj->magic == ASYNC_MAGIC);
     while (1) {
@@ -6547,7 +6547,7 @@ async_attr_get_fn(void *foo)
 
     pool_ptr = task->async_obj->pool_ptr;
 
-    func_log(__func__, "trying to aquire global lock");
+    func_log(__func__, "trying to acquire global lock");
 
     if ((attempt_count = check_app_acquire_mutex_fn(task, &mutex_count, &acquired)) < 0)
         goto done;
@@ -6587,7 +6587,7 @@ async_attr_get_fn(void *foo)
     }
     is_lib_state_restored = true;
 
-    /* Aquire async obj mutex and set the obj */
+    /* Acquire async obj mutex and set the obj */
     assert(task->async_obj->obj_mutex);
     assert(task->async_obj->magic == ASYNC_MAGIC);
     while (1) {
@@ -6834,7 +6834,7 @@ async_attr_specific_fn(void *foo)
 
     pool_ptr = task->async_obj->pool_ptr;
 
-    func_log(__func__, "trying to aquire global lock");
+    func_log(__func__, "trying to acquire global lock");
 
     if ((attempt_count = check_app_acquire_mutex_fn(task, &mutex_count, &acquired)) < 0)
         goto done;
@@ -6874,7 +6874,7 @@ async_attr_specific_fn(void *foo)
     }
     is_lib_state_restored = true;
 
-    /* Aquire async obj mutex and set the obj */
+    /* Acquire async obj mutex and set the obj */
     assert(task->async_obj->obj_mutex);
     assert(task->async_obj->magic == ASYNC_MAGIC);
     if (args->args.op_type != H5VL_ATTR_ITER) {
@@ -7139,7 +7139,7 @@ async_attr_optional_fn(void *foo)
 
     pool_ptr = task->async_obj->pool_ptr;
 
-    func_log(__func__, "trying to aquire global lock");
+    func_log(__func__, "trying to acquire global lock");
 
     if ((attempt_count = check_app_acquire_mutex_fn(task, &mutex_count, &acquired)) < 0)
         goto done;
@@ -7179,7 +7179,7 @@ async_attr_optional_fn(void *foo)
     }
     is_lib_state_restored = true;
 
-    /* Aquire async obj mutex and set the obj */
+    /* Acquire async obj mutex and set the obj */
     assert(task->async_obj->obj_mutex);
     assert(task->async_obj->magic == ASYNC_MAGIC);
     while (1) {
@@ -7423,7 +7423,7 @@ async_attr_close_fn(void *foo)
 
     pool_ptr = task->async_obj->pool_ptr;
 
-    func_log(__func__, "trying to aquire global lock");
+    func_log(__func__, "trying to acquire global lock");
 
     if ((attempt_count = check_app_acquire_mutex_fn(task, &mutex_count, &acquired)) < 0)
         goto done;
@@ -7463,7 +7463,7 @@ async_attr_close_fn(void *foo)
     }
     is_lib_state_restored = true;
 
-    /* Aquire async obj mutex and set the obj */
+    /* Acquire async obj mutex and set the obj */
     assert(task->async_obj->obj_mutex);
     assert(task->async_obj->magic == ASYNC_MAGIC);
     while (1) {
@@ -7715,7 +7715,7 @@ async_dataset_create_fn(void *foo)
 
     pool_ptr = task->async_obj->pool_ptr;
 
-    func_log(__func__, "trying to aquire global lock");
+    func_log(__func__, "trying to acquire global lock");
 
     if ((attempt_count = check_app_acquire_mutex_fn(task, &mutex_count, &acquired)) < 0)
         goto done;
@@ -7755,7 +7755,7 @@ async_dataset_create_fn(void *foo)
     }
     is_lib_state_restored = true;
 
-    /* Aquire async obj mutex and set the obj */
+    /* Acquire async obj mutex and set the obj */
     assert(task->async_obj->obj_mutex);
     assert(task->async_obj->magic == ASYNC_MAGIC);
     while (1) {
@@ -8069,7 +8069,7 @@ async_dataset_open_fn(void *foo)
 
     pool_ptr = task->async_obj->pool_ptr;
 
-    func_log(__func__, "trying to aquire global lock");
+    func_log(__func__, "trying to acquire global lock");
 
     if ((attempt_count = check_app_acquire_mutex_fn(task, &mutex_count, &acquired)) < 0)
         goto done;
@@ -8109,7 +8109,7 @@ async_dataset_open_fn(void *foo)
     }
     is_lib_state_restored = true;
 
-    /* Aquire async obj mutex and set the obj */
+    /* Acquire async obj mutex and set the obj */
     assert(task->async_obj->obj_mutex);
     assert(task->async_obj->magic == ASYNC_MAGIC);
     while (1) {
@@ -8390,7 +8390,7 @@ async_dataset_read_fn(void *foo)
 
     pool_ptr = task->async_obj->pool_ptr;
 
-    func_log(__func__, "trying to aquire global lock");
+    func_log(__func__, "trying to acquire global lock");
 
     if ((attempt_count = check_app_acquire_mutex_fn(task, &mutex_count, &acquired)) < 0)
         goto done;
@@ -8432,7 +8432,7 @@ async_dataset_read_fn(void *foo)
     }
     is_lib_state_restored = true;
 
-    /* Aquire async obj mutex and set the obj */
+    /* Acquire async obj mutex and set the obj */
     assert(task->async_obj->obj_mutex);
     assert(task->async_obj->magic == ASYNC_MAGIC);
     while (1) {
@@ -8894,7 +8894,7 @@ async_dataset_read_fn(void *foo)
 
     pool_ptr = task->async_obj->pool_ptr;
 
-    func_log(__func__, "trying to aquire global lock");
+    func_log(__func__, "trying to acquire global lock");
 
     if ((attempt_count = check_app_acquire_mutex_fn(task, &mutex_count, &acquired)) < 0)
         goto done;
@@ -8934,7 +8934,7 @@ async_dataset_read_fn(void *foo)
     }
     is_lib_state_restored = true;
 
-    /* Aquire async obj mutex and set the obj */
+    /* Acquire async obj mutex and set the obj */
     assert(task->async_obj->obj_mutex);
     assert(task->async_obj->magic == ASYNC_MAGIC);
     while (1) {
@@ -9247,7 +9247,7 @@ async_dataset_write_fn(void *foo)
 
     pool_ptr = task->async_obj->pool_ptr;
 
-    func_log(__func__, "trying to aquire global lock");
+    func_log(__func__, "trying to acquire global lock");
 
     if ((attempt_count = check_app_acquire_mutex_fn(task, &mutex_count, &acquired)) < 0)
         goto done;
@@ -9289,7 +9289,7 @@ async_dataset_write_fn(void *foo)
     }
     is_lib_state_restored = true;
 
-    /* Aquire async obj mutex and set the obj */
+    /* Acquire async obj mutex and set the obj */
     assert(task->async_obj->obj_mutex);
     assert(task->async_obj->magic == ASYNC_MAGIC);
     while (1) {
@@ -9702,7 +9702,7 @@ async_dataset_write(async_instance_t *aid, size_t count, H5VL_async_t **parent_o
 } // End async_dataset_write > 1.13.3
 
 #ifdef ENABLE_MERGE_DSET
-// Check and merge current write into an exisiting one in queue, must be collective
+// Check and merge current write into an existing one in queue, must be collective
 static herr_t
 async_dataset_write_merge_mdset_col(async_instance_t *aid, size_t count, H5VL_async_t **parent_obj,
                                     hid_t mem_type_id[], hid_t mem_space_id[], hid_t file_space_id[],
@@ -9905,7 +9905,7 @@ async_dataset_write_fn(void *foo)
 
     pool_ptr = task->async_obj->pool_ptr;
 
-    func_log(__func__, "trying to aquire global lock");
+    func_log(__func__, "trying to acquire global lock");
 
     if ((attempt_count = check_app_acquire_mutex_fn(task, &mutex_count, &acquired)) < 0)
         goto done;
@@ -9945,7 +9945,7 @@ async_dataset_write_fn(void *foo)
     }
     is_lib_state_restored = true;
 
-    /* Aquire async obj mutex and set the obj */
+    /* Acquire async obj mutex and set the obj */
     assert(task->async_obj->obj_mutex);
     assert(task->async_obj->magic == ASYNC_MAGIC);
     while (1) {
@@ -10288,7 +10288,7 @@ async_dataset_get_fn(void *foo)
 
     pool_ptr = task->async_obj->pool_ptr;
 
-    func_log(__func__, "trying to aquire global lock");
+    func_log(__func__, "trying to acquire global lock");
 
     if ((attempt_count = check_app_acquire_mutex_fn(task, &mutex_count, &acquired)) < 0)
         goto done;
@@ -10328,7 +10328,7 @@ async_dataset_get_fn(void *foo)
     }
     is_lib_state_restored = true;
 
-    /* Aquire async obj mutex and set the obj */
+    /* Acquire async obj mutex and set the obj */
     assert(task->async_obj->obj_mutex);
     assert(task->async_obj->magic == ASYNC_MAGIC);
     while (1) {
@@ -10582,7 +10582,7 @@ async_dataset_specific_fn(void *foo)
 
     pool_ptr = task->async_obj->pool_ptr;
 
-    func_log(__func__, "trying to aquire global lock");
+    func_log(__func__, "trying to acquire global lock");
 
     if ((attempt_count = check_app_acquire_mutex_fn(task, &mutex_count, &acquired)) < 0)
         goto done;
@@ -10622,7 +10622,7 @@ async_dataset_specific_fn(void *foo)
     }
     is_lib_state_restored = true;
 
-    /* Aquire async obj mutex and set the obj */
+    /* Acquire async obj mutex and set the obj */
     assert(task->async_obj->obj_mutex);
     assert(task->async_obj->magic == ASYNC_MAGIC);
     while (1) {
@@ -10872,7 +10872,7 @@ async_dataset_optional_fn(void *foo)
 
     pool_ptr = task->async_obj->pool_ptr;
 
-    func_log(__func__, "trying to aquire global lock");
+    func_log(__func__, "trying to acquire global lock");
 
     if ((attempt_count = check_app_acquire_mutex_fn(task, &mutex_count, &acquired)) < 0)
         goto done;
@@ -10912,7 +10912,7 @@ async_dataset_optional_fn(void *foo)
     }
     is_lib_state_restored = true;
 
-    /* Aquire async obj mutex and set the obj */
+    /* Acquire async obj mutex and set the obj */
     assert(task->async_obj->obj_mutex);
     assert(task->async_obj->magic == ASYNC_MAGIC);
     while (1) {
@@ -11157,7 +11157,7 @@ async_dataset_close_fn(void *foo)
 
     pool_ptr = task->async_obj->pool_ptr;
 
-    func_log(__func__, "trying to aquire global lock");
+    func_log(__func__, "trying to acquire global lock");
 
     if ((attempt_count = check_app_acquire_mutex_fn(task, &mutex_count, &acquired)) < 0)
         goto done;
@@ -11197,7 +11197,7 @@ async_dataset_close_fn(void *foo)
     }
     is_lib_state_restored = true;
 
-    /* Aquire async obj mutex and set the obj */
+    /* Acquire async obj mutex and set the obj */
     assert(task->async_obj->obj_mutex);
     assert(task->async_obj->magic == ASYNC_MAGIC);
     while (1) {
@@ -11456,7 +11456,7 @@ async_datatype_commit_fn(void *foo)
 
     pool_ptr = task->async_obj->pool_ptr;
 
-    func_log(__func__, "trying to aquire global lock");
+    func_log(__func__, "trying to acquire global lock");
 
     if ((attempt_count = check_app_acquire_mutex_fn(task, &mutex_count, &acquired)) < 0)
         goto done;
@@ -11496,7 +11496,7 @@ async_datatype_commit_fn(void *foo)
     }
     is_lib_state_restored = true;
 
-    /* Aquire async obj mutex and set the obj */
+    /* Acquire async obj mutex and set the obj */
     assert(task->async_obj->obj_mutex);
     assert(task->async_obj->magic == ASYNC_MAGIC);
     while (1) {
@@ -11788,7 +11788,7 @@ async_datatype_open_fn(void *foo)
 
     pool_ptr = task->async_obj->pool_ptr;
 
-    func_log(__func__, "trying to aquire global lock");
+    func_log(__func__, "trying to acquire global lock");
 
     if ((attempt_count = check_app_acquire_mutex_fn(task, &mutex_count, &acquired)) < 0)
         goto done;
@@ -11828,7 +11828,7 @@ async_datatype_open_fn(void *foo)
     }
     is_lib_state_restored = true;
 
-    /* Aquire async obj mutex and set the obj */
+    /* Acquire async obj mutex and set the obj */
     assert(task->async_obj->obj_mutex);
     assert(task->async_obj->magic == ASYNC_MAGIC);
     while (1) {
@@ -12107,7 +12107,7 @@ async_datatype_get_fn(void *foo)
 
     pool_ptr = task->async_obj->pool_ptr;
 
-    func_log(__func__, "trying to aquire global lock");
+    func_log(__func__, "trying to acquire global lock");
 
     if ((attempt_count = check_app_acquire_mutex_fn(task, &mutex_count, &acquired)) < 0)
         goto done;
@@ -12147,7 +12147,7 @@ async_datatype_get_fn(void *foo)
     }
     is_lib_state_restored = true;
 
-    /* Aquire async obj mutex and set the obj */
+    /* Acquire async obj mutex and set the obj */
     assert(task->async_obj->obj_mutex);
     assert(task->async_obj->magic == ASYNC_MAGIC);
     while (1) {
@@ -12396,7 +12396,7 @@ async_datatype_specific_fn(void *foo)
 
     pool_ptr = task->async_obj->pool_ptr;
 
-    func_log(__func__, "trying to aquire global lock");
+    func_log(__func__, "trying to acquire global lock");
 
     if ((attempt_count = check_app_acquire_mutex_fn(task, &mutex_count, &acquired)) < 0)
         goto done;
@@ -12436,7 +12436,7 @@ async_datatype_specific_fn(void *foo)
     }
     is_lib_state_restored = true;
 
-    /* Aquire async obj mutex and set the obj */
+    /* Acquire async obj mutex and set the obj */
     assert(task->async_obj->obj_mutex);
     assert(task->async_obj->magic == ASYNC_MAGIC);
     while (1) {
@@ -12682,7 +12682,7 @@ async_datatype_optional_fn(void *foo)
 
     pool_ptr = task->async_obj->pool_ptr;
 
-    func_log(__func__, "trying to aquire global lock");
+    func_log(__func__, "trying to acquire global lock");
 
     if ((attempt_count = check_app_acquire_mutex_fn(task, &mutex_count, &acquired)) < 0)
         goto done;
@@ -12722,7 +12722,7 @@ async_datatype_optional_fn(void *foo)
     }
     is_lib_state_restored = true;
 
-    /* Aquire async obj mutex and set the obj */
+    /* Acquire async obj mutex and set the obj */
     assert(task->async_obj->obj_mutex);
     assert(task->async_obj->magic == ASYNC_MAGIC);
     while (1) {
@@ -12968,7 +12968,7 @@ async_datatype_close_fn(void *foo)
 
     pool_ptr = task->async_obj->pool_ptr;
 
-    func_log(__func__, "trying to aquire global lock");
+    func_log(__func__, "trying to acquire global lock");
 
     if ((attempt_count = check_app_acquire_mutex_fn(task, &mutex_count, &acquired)) < 0)
         goto done;
@@ -13008,7 +13008,7 @@ async_datatype_close_fn(void *foo)
     }
     is_lib_state_restored = true;
 
-    /* Aquire async obj mutex and set the obj */
+    /* Acquire async obj mutex and set the obj */
     assert(task->async_obj->obj_mutex);
     assert(task->async_obj->magic == ASYNC_MAGIC);
     while (1) {
@@ -13261,7 +13261,7 @@ async_file_create_fn(void *foo)
 
     pool_ptr = task->async_obj->pool_ptr;
 
-    func_log(__func__, "trying to aquire global lock");
+    func_log(__func__, "trying to acquire global lock");
 
     if ((attempt_count = check_app_acquire_mutex_fn(task, &mutex_count, &acquired)) < 0)
         goto done;
@@ -13282,7 +13282,7 @@ async_file_create_fn(void *foo)
 
     /* async_instance_g->start_abt_push = false; */
 
-    /* Aquire async obj mutex and set the obj */
+    /* Acquire async obj mutex and set the obj */
     assert(task->async_obj->obj_mutex);
     assert(task->async_obj->magic == ASYNC_MAGIC);
     while (1) {
@@ -13584,7 +13584,7 @@ async_file_open_fn(void *foo)
 
     pool_ptr = task->async_obj->pool_ptr;
 
-    func_log(__func__, "trying to aquire global lock");
+    func_log(__func__, "trying to acquire global lock");
 
     if ((attempt_count = check_app_acquire_mutex_fn(task, &mutex_count, &acquired)) < 0)
         goto done;
@@ -13604,7 +13604,7 @@ async_file_open_fn(void *foo)
     }
     is_lib_state_restored = true;
 
-    /* Aquire async obj mutex and set the obj */
+    /* Acquire async obj mutex and set the obj */
     assert(task->async_obj->obj_mutex);
     assert(task->async_obj->magic == ASYNC_MAGIC);
     while (1) {
@@ -13894,7 +13894,7 @@ async_file_get_fn(void *foo)
 
     pool_ptr = task->async_obj->pool_ptr;
 
-    func_log(__func__, "trying to aquire global lock");
+    func_log(__func__, "trying to acquire global lock");
 
     if ((attempt_count = check_app_acquire_mutex_fn(task, &mutex_count, &acquired)) < 0)
         goto done;
@@ -13934,7 +13934,7 @@ async_file_get_fn(void *foo)
     }
     is_lib_state_restored = true;
 
-    /* Aquire async obj mutex and set the obj */
+    /* Acquire async obj mutex and set the obj */
     assert(task->async_obj->obj_mutex);
     assert(task->async_obj->magic == ASYNC_MAGIC);
     while (1) {
@@ -14182,7 +14182,7 @@ async_file_specific_fn(void *foo)
 
     pool_ptr = task->async_obj->pool_ptr;
 
-    func_log(__func__, "trying to aquire global lock");
+    func_log(__func__, "trying to acquire global lock");
 
     if ((attempt_count = check_app_acquire_mutex_fn(task, &mutex_count, &acquired)) < 0)
         goto done;
@@ -14222,7 +14222,7 @@ async_file_specific_fn(void *foo)
     }
     is_lib_state_restored = true;
 
-    /* Aquire async obj mutex and set the obj */
+    /* Acquire async obj mutex and set the obj */
     assert(task->async_obj->obj_mutex);
     assert(task->async_obj->magic == ASYNC_MAGIC);
     while (1) {
@@ -14479,7 +14479,7 @@ async_file_optional_fn(void *foo)
 
     pool_ptr = task->async_obj->pool_ptr;
 
-    func_log(__func__, "trying to aquire global lock");
+    func_log(__func__, "trying to acquire global lock");
 
     if ((attempt_count = check_app_acquire_mutex_fn(task, &mutex_count, &acquired)) < 0)
         goto done;
@@ -14519,7 +14519,7 @@ async_file_optional_fn(void *foo)
     }
     is_lib_state_restored = true;
 
-    /* Aquire async obj mutex and set the obj */
+    /* Acquire async obj mutex and set the obj */
     assert(task->async_obj->obj_mutex);
     assert(task->async_obj->magic == ASYNC_MAGIC);
     while (1) {
@@ -14773,7 +14773,7 @@ async_file_close_fn(void *foo)
 
     pool_ptr = task->async_obj->pool_ptr;
 
-    func_log(__func__, "trying to aquire global lock");
+    func_log(__func__, "trying to acquire global lock");
 
     if ((attempt_count = check_app_acquire_mutex_fn(task, &mutex_count, &acquired)) < 0)
         goto done;
@@ -14813,7 +14813,7 @@ async_file_close_fn(void *foo)
     }
     is_lib_state_restored = true;
 
-    /* Aquire async obj mutex and set the obj */
+    /* Acquire async obj mutex and set the obj */
     /* assert(task->async_obj->obj_mutex); */
     /* assert(task->async_obj->magic == ASYNC_MAGIC); */
     while (task->async_obj && task->async_obj->obj_mutex) {
@@ -15103,7 +15103,7 @@ async_group_create_fn(void *foo)
 
     pool_ptr = task->async_obj->pool_ptr;
 
-    func_log(__func__, "trying to aquire global lock");
+    func_log(__func__, "trying to acquire global lock");
 
     if ((attempt_count = check_app_acquire_mutex_fn(task, &mutex_count, &acquired)) < 0)
         goto done;
@@ -15143,7 +15143,7 @@ async_group_create_fn(void *foo)
     }
     is_lib_state_restored = true;
 
-    /* Aquire async obj mutex and set the obj */
+    /* Acquire async obj mutex and set the obj */
     assert(task->async_obj->obj_mutex);
     assert(task->async_obj->magic == ASYNC_MAGIC);
     while (1) {
@@ -15440,7 +15440,7 @@ async_group_open_fn(void *foo)
 
     pool_ptr = task->async_obj->pool_ptr;
 
-    func_log(__func__, "trying to aquire global lock");
+    func_log(__func__, "trying to acquire global lock");
 
     if ((attempt_count = check_app_acquire_mutex_fn(task, &mutex_count, &acquired)) < 0)
         goto done;
@@ -15480,7 +15480,7 @@ async_group_open_fn(void *foo)
     }
     is_lib_state_restored = true;
 
-    /* Aquire async obj mutex and set the obj */
+    /* Acquire async obj mutex and set the obj */
     assert(task->async_obj->obj_mutex);
     assert(task->async_obj->magic == ASYNC_MAGIC);
     while (1) {
@@ -15759,7 +15759,7 @@ async_group_get_fn(void *foo)
 
     pool_ptr = task->async_obj->pool_ptr;
 
-    func_log(__func__, "trying to aquire global lock");
+    func_log(__func__, "trying to acquire global lock");
 
     if ((attempt_count = check_app_acquire_mutex_fn(task, &mutex_count, &acquired)) < 0)
         goto done;
@@ -15799,7 +15799,7 @@ async_group_get_fn(void *foo)
     }
     is_lib_state_restored = true;
 
-    /* Aquire async obj mutex and set the obj */
+    /* Acquire async obj mutex and set the obj */
     assert(task->async_obj->obj_mutex);
     assert(task->async_obj->magic == ASYNC_MAGIC);
     while (1) {
@@ -16052,7 +16052,7 @@ async_group_specific_fn(void *foo)
 
     pool_ptr = task->async_obj->pool_ptr;
 
-    func_log(__func__, "trying to aquire global lock");
+    func_log(__func__, "trying to acquire global lock");
 
     if ((attempt_count = check_app_acquire_mutex_fn(task, &mutex_count, &acquired)) < 0)
         goto done;
@@ -16092,7 +16092,7 @@ async_group_specific_fn(void *foo)
     }
     is_lib_state_restored = true;
 
-    /* Aquire async obj mutex and set the obj */
+    /* Acquire async obj mutex and set the obj */
     assert(task->async_obj->obj_mutex);
     assert(task->async_obj->magic == ASYNC_MAGIC);
     while (1) {
@@ -16337,7 +16337,7 @@ async_group_optional_fn(void *foo)
 
     pool_ptr = task->async_obj->pool_ptr;
 
-    func_log(__func__, "trying to aquire global lock");
+    func_log(__func__, "trying to acquire global lock");
 
     if ((attempt_count = check_app_acquire_mutex_fn(task, &mutex_count, &acquired)) < 0)
         goto done;
@@ -16377,7 +16377,7 @@ async_group_optional_fn(void *foo)
     }
     is_lib_state_restored = true;
 
-    /* Aquire async obj mutex and set the obj */
+    /* Acquire async obj mutex and set the obj */
     assert(task->async_obj->obj_mutex);
     assert(task->async_obj->magic == ASYNC_MAGIC);
     while (1) {
@@ -16622,7 +16622,7 @@ async_group_close_fn(void *foo)
 
     pool_ptr = task->async_obj->pool_ptr;
 
-    func_log(__func__, "trying to aquire global lock");
+    func_log(__func__, "trying to acquire global lock");
 
     if ((attempt_count = check_app_acquire_mutex_fn(task, &mutex_count, &acquired)) < 0)
         goto done;
@@ -16664,7 +16664,7 @@ async_group_close_fn(void *foo)
 
     // There may be cases, e.g. with link iteration, that enters group close without a valid async_obj mutex
     if (task->async_obj->obj_mutex) {
-        /* Aquire async obj mutex and set the obj */
+        /* Acquire async obj mutex and set the obj */
         assert(task->async_obj->magic == ASYNC_MAGIC);
         while (1) {
             if (ABT_mutex_trylock(task->async_obj->obj_mutex) == ABT_SUCCESS) {
@@ -16929,7 +16929,7 @@ async_link_create_fn(void *foo)
 
     pool_ptr = task->async_obj->pool_ptr;
 
-    func_log(__func__, "trying to aquire global lock");
+    func_log(__func__, "trying to acquire global lock");
 
     if ((attempt_count = check_app_acquire_mutex_fn(task, &mutex_count, &acquired)) < 0)
         goto done;
@@ -16969,7 +16969,7 @@ async_link_create_fn(void *foo)
     }
     is_lib_state_restored = true;
 
-    /* Aquire async obj mutex and set the obj */
+    /* Acquire async obj mutex and set the obj */
     assert(task->async_obj->obj_mutex);
     assert(task->async_obj->magic == ASYNC_MAGIC);
     while (1) {
@@ -17255,7 +17255,7 @@ async_link_copy_fn(void *foo)
     assert(task->async_obj);
     assert(task->async_obj->magic == ASYNC_MAGIC);
 
-    func_log(__func__, "trying to aquire global lock");
+    func_log(__func__, "trying to acquire global lock");
 
     if ((attempt_count = check_app_acquire_mutex_fn(task, &mutex_count, &acquired)) < 0)
         goto done;
@@ -17296,7 +17296,7 @@ async_link_copy_fn(void *foo)
     }
     is_lib_state_restored = true;
 
-    /* Aquire async obj mutex and set the obj */
+    /* Acquire async obj mutex and set the obj */
     assert(task->async_obj->obj_mutex);
     assert(task->async_obj->magic == ASYNC_MAGIC);
     while (1) {
@@ -17576,7 +17576,7 @@ async_link_move_fn(void *foo)
 
     pool_ptr = task->async_obj->pool_ptr;
 
-    func_log(__func__, "trying to aquire global lock");
+    func_log(__func__, "trying to acquire global lock");
 
     if ((attempt_count = check_app_acquire_mutex_fn(task, &mutex_count, &acquired)) < 0)
         goto done;
@@ -17617,7 +17617,7 @@ async_link_move_fn(void *foo)
     }
     is_lib_state_restored = true;
 
-    /* Aquire async obj mutex and set the obj */
+    /* Acquire async obj mutex and set the obj */
assert(task->async_obj->obj_mutex); assert(task->async_obj->magic == ASYNC_MAGIC); while (1) { @@ -17898,7 +17898,7 @@ async_link_get_fn(void *foo) pool_ptr = task->async_obj->pool_ptr; - func_log(__func__, "trying to aquire global lock"); + func_log(__func__, "trying to acquire global lock"); if ((attempt_count = check_app_acquire_mutex_fn(task, &mutex_count, &acquired)) < 0) goto done; @@ -17938,7 +17938,7 @@ async_link_get_fn(void *foo) } is_lib_state_restored = true; - /* Aquire async obj mutex and set the obj */ + /* Acquire async obj mutex and set the obj */ assert(task->async_obj->obj_mutex); assert(task->async_obj->magic == ASYNC_MAGIC); while (1) { @@ -18195,7 +18195,7 @@ async_link_specific_fn(void *foo) pool_ptr = task->async_obj->pool_ptr; - func_log(__func__, "trying to aquire global lock"); + func_log(__func__, "trying to acquire global lock"); if ((attempt_count = check_app_acquire_mutex_fn(task, &mutex_count, &acquired)) < 0) goto done; @@ -18238,7 +18238,7 @@ async_link_specific_fn(void *foo) assert(task->async_obj->magic == ASYNC_MAGIC); /* No need to lock the object with iteration */ if (args->args.op_type != H5VL_LINK_ITER) { - /* Aquire async obj mutex and set the obj */ + /* Acquire async obj mutex and set the obj */ assert(task->async_obj->obj_mutex); while (1) { if (ABT_mutex_trylock(task->async_obj->obj_mutex) == ABT_SUCCESS) { @@ -18496,7 +18496,7 @@ async_link_optional_fn(void *foo) pool_ptr = task->async_obj->pool_ptr; - func_log(__func__, "trying to aquire global lock"); + func_log(__func__, "trying to acquire global lock"); if ((attempt_count = check_app_acquire_mutex_fn(task, &mutex_count, &acquired)) < 0) goto done; @@ -18536,7 +18536,7 @@ async_link_optional_fn(void *foo) } is_lib_state_restored = true; - /* Aquire async obj mutex and set the obj */ + /* Acquire async obj mutex and set the obj */ assert(task->async_obj->obj_mutex); assert(task->async_obj->magic == ASYNC_MAGIC); while (1) { @@ -18794,7 +18794,7 @@ 
async_object_open_fn(void *foo) pool_ptr = task->async_obj->pool_ptr; - func_log(__func__, "trying to aquire global lock"); + func_log(__func__, "trying to acquire global lock"); if ((attempt_count = check_app_acquire_mutex_fn(task, &mutex_count, &acquired)) < 0) goto done; @@ -18834,7 +18834,7 @@ async_object_open_fn(void *foo) } is_lib_state_restored = true; - /* Aquire async obj mutex and set the obj */ + /* Acquire async obj mutex and set the obj */ assert(task->async_obj->obj_mutex); assert(task->async_obj->magic == ASYNC_MAGIC); while (1) { @@ -19109,7 +19109,7 @@ async_object_copy_fn(void *foo) assert(task->async_obj); assert(task->async_obj->magic == ASYNC_MAGIC); - func_log(__func__, "trying to aquire global lock"); + func_log(__func__, "trying to acquire global lock"); if ((attempt_count = check_app_acquire_mutex_fn(task, &mutex_count, &acquired)) < 0) goto done; @@ -19152,7 +19152,7 @@ async_object_copy_fn(void *foo) } is_lib_state_restored = true; - /* Aquire async obj mutex and set the obj */ + /* Acquire async obj mutex and set the obj */ assert(task->async_obj->obj_mutex); assert(task->async_obj->magic == ASYNC_MAGIC); while (1) { @@ -19434,7 +19434,7 @@ async_object_get_fn(void *foo) pool_ptr = task->async_obj->pool_ptr; - func_log(__func__, "trying to aquire global lock"); + func_log(__func__, "trying to acquire global lock"); if ((attempt_count = check_app_acquire_mutex_fn(task, &mutex_count, &acquired)) < 0) goto done; @@ -19474,7 +19474,7 @@ async_object_get_fn(void *foo) } is_lib_state_restored = true; - /* Aquire async obj mutex and set the obj */ + /* Acquire async obj mutex and set the obj */ /* assert(task->async_obj->obj_mutex); */ assert(task->async_obj->magic == ASYNC_MAGIC); while (1) { @@ -19735,7 +19735,7 @@ async_object_specific_fn(void *foo) pool_ptr = task->async_obj->pool_ptr; - func_log(__func__, "trying to aquire global lock"); + func_log(__func__, "trying to acquire global lock"); if ((attempt_count = 
check_app_acquire_mutex_fn(task, &mutex_count, &acquired)) < 0) goto done; @@ -19775,7 +19775,7 @@ async_object_specific_fn(void *foo) } is_lib_state_restored = true; - /* Aquire async obj mutex and set the obj */ + /* Acquire async obj mutex and set the obj */ assert(task->async_obj->obj_mutex); assert(task->async_obj->magic == ASYNC_MAGIC); if (args->args.op_type != H5VL_OBJECT_VISIT) { @@ -20041,7 +20041,7 @@ async_object_optional_fn(void *foo) pool_ptr = task->async_obj->pool_ptr; - func_log(__func__, "trying to aquire global lock"); + func_log(__func__, "trying to acquire global lock"); if ((attempt_count = check_app_acquire_mutex_fn(task, &mutex_count, &acquired)) < 0) goto done; @@ -20081,7 +20081,7 @@ async_object_optional_fn(void *foo) } is_lib_state_restored = true; - /* Aquire async obj mutex and set the obj */ + /* Acquire async obj mutex and set the obj */ assert(task->async_obj->obj_mutex); assert(task->async_obj->magic == ASYNC_MAGIC); while (1) { @@ -20805,7 +20805,7 @@ H5VL_async_is_implicit_disabled(int op_type, const char *func_name) func_log(func_name, "implicit async disabled with disable_implicit"); ret_value = 1; } - // Need file ops to be implicit to init requried internal data structures + // Need file ops to be implicit to init required internal data structures if (op_type != FILE_OP && op_type != DSET_RW_OP && async_instance_g->disable_implicit_nondrw) { func_log(func_name, "implicit async disabled with HDF5_ASYNC_DISABLE_IMPLICIT_NON_DSET_RW"); ret_value = 1; @@ -22031,7 +22031,7 @@ H5VL_async_file_create(const char *name, unsigned flags, hid_t fcpl_id, hid_t fa if (H5VLintrospect_get_cap_flags(info->under_vol_info, info->under_vol_id, &cap_flags) < 0) return NULL; - /* Querying for the VFD is only meaninful when using the native VOL connector */ + /* Querying for the VFD is only meaningful when using the native VOL connector */ if ((cap_flags & H5VL_CAP_FLAG_NATIVE_FILES) > 0) { hid_t vfd_id; /* VFD for file */ @@ -22134,7 +22134,7 @@ 
H5VL_async_file_open(const char *name, unsigned flags, hid_t fapl_id, hid_t dxpl if (H5VLintrospect_get_cap_flags(info->under_vol_info, info->under_vol_id, &cap_flags) < 0) return NULL; - /* Querying for the VFD is only meaninful when using the native VOL connector */ + /* Querying for the VFD is only meaningful when using the native VOL connector */ if ((cap_flags & H5VL_CAP_FLAG_NATIVE_FILES) > 0) { hid_t vfd_id; /* VFD for file */ From 9825b559d654705375a6980cda844d1b2934ff63 Mon Sep 17 00:00:00 2001 From: Scot Breitenfeld Date: Thu, 30 May 2024 10:15:31 -0500 Subject: [PATCH 2/2] fixed sp --- docs/source/asyncapi.rst | 2 +- docs/source/gettingstarted.rst | 4 ++-- src/h5_async_vol.c | 2 +- 3 files changed, 4 insertions(+), 4 deletions(-) diff --git a/docs/source/asyncapi.rst b/docs/source/asyncapi.rst index 018efd0..b4646c8 100644 --- a/docs/source/asyncapi.rst +++ b/docs/source/asyncapi.rst @@ -15,7 +15,7 @@ Besides the HDF5 EventSet and asynchronous I/O operation APIs, the async VOL con herr_t H5Pget_dxpl_disable_async_implicit(hid_t dxpl, hbool_t *is_disable); .. note:: - The ``disable implicit`` flag only becomes effective when the corresponding ``fapl`` or ``dxpl`` is actually used by another HDF5 function call, e.g., with ``H5Fopen`` or ``H5Dwrite``. When a new ``fapl`` or ``dxpl`` is used by any HDF5 function without setting the ``disable implict`` flag, e.g., ``H5P_DEFAULT``, it will reset the mode back to asynchronous execution. + The ``disable implicit`` flag only becomes effective when the corresponding ``fapl`` or ``dxpl`` is actually used by another HDF5 function call, e.g., with ``H5Fopen`` or ``H5Dwrite``. When a new ``fapl`` or ``dxpl`` is used by any HDF5 function without setting the ``disable implicit`` flag, e.g., ``H5P_DEFAULT``, it will reset the mode back to asynchronous execution. .. code-block:: // Set the pause flag to property list that pauses all asynchronous I/O operations. 
diff --git a/docs/source/gettingstarted.rst b/docs/source/gettingstarted.rst index 457d516..e3f5137 100644 --- a/docs/source/gettingstarted.rst +++ b/docs/source/gettingstarted.rst @@ -185,7 +185,7 @@ If any test fails, check ``async_vol_test.err`` in the test directory for the er Implicit mode ============= -This mode is only recommended for testing. The implicit mode allows an application to enable asynchronous I/O through setting the environemental variables :ref:`Set Environmental Variables` and without any major code change. By default, the HDF5 metadata operations are executed asynchronously, and the dataset operations are executed synchronously. +This mode is only recommended for testing. The implicit mode allows an application to enable asynchronous I/O through setting the environmental variables :ref:`Set Environmental Variables` and without any major code change. By default, the HDF5 metadata operations are executed asynchronously, and the dataset operations are executed synchronously. .. note:: Due to the limitations of the implicit mode, we highly recommend applications to use the explicit mode for the best I/O performance. @@ -270,7 +270,7 @@ Applications may choose to have async VOL to manage the write buffer consistency Async vol checks available system memory before its double buffer allocation at runtime, using get_avphys_pages() and sysconf(). When there is not enough memory for duplicating the current write buffer, it will not allocate memory and force the current write to be synchronous. -With the double buffering enabled, users can also specify how much memory is allowed for async VOL to allocate, with can be set through an environment variable. When the limit is reached during runtime, async VOL will skip the memory allocation and execute the write synchronously, until previous duplicated buffers are freed after their operation compeleted. 
+With the double buffering enabled, users can also specify how much memory is allowed for async VOL to allocate, with can be set through an environment variable. When the limit is reached during runtime, async VOL will skip the memory allocation and execute the write synchronously, until previous duplicated buffers are freed after their operation completed. .. code-block:: diff --git a/src/h5_async_vol.c b/src/h5_async_vol.c index 996f2a0..2178ff8 100644 --- a/src/h5_async_vol.c +++ b/src/h5_async_vol.c @@ -2643,7 +2643,7 @@ add_task_to_queue(async_qhead_t *qhead, async_task_t *task, task_type_t task_typ task->type = task_type; - // Need to depend on the object's createion (create/open) task to finish + // Need to depend on the object's creation (create/open) task to finish /* if (task_type == DEPENDENT) { */ /* // Add unfinished parent's create task as dependent */