Detailed Description
For the FAIR EASE project we used the FUJI FAIRness assessment tool on a number of datasets from the earth system and biodiversity domains. I have a few recommendations regarding documentation for users of the assessment tool.
We found that when we put in different URLs pointing to the same ro-crate-metadata.json file, we got different results.
Now, this is not so much a problem as a fact: FUJI runs on what it is given, and different URLs may point to the same dataset but to different ways of providing that dataset to the machine (see the sketch just below).
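As a minimal sketch of why this happens (the repository URLs below are hypothetical placeholders): the same ro-crate-metadata.json reached via a GitHub HTML page and via raw.githubusercontent.com is delivered with different Content-Type headers and different bytes, so a machine harvester effectively sees two different resources.

```python
import requests

# Hypothetical example: the same ro-crate-metadata.json reached via two URLs.
# The GitHub "blob" view wraps the file in an HTML page, while the raw URL
# serves the actual JSON bytes -- so a metadata harvester sees different things.
urls = [
    "https://github.com/example-org/example-crate/blob/main/ro-crate-metadata.json",
    "https://raw.githubusercontent.com/example-org/example-crate/main/ro-crate-metadata.json",
]

for url in urls:
    response = requests.get(url, timeout=30)
    content_type = response.headers.get("Content-Type", "unknown")
    print(f"{url}\n  -> HTTP {response.status_code}, Content-Type: {content_type}")
```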
RECOMMENDATION 1: It would be useful to have a section on the “Methods” webpage specifically about “How FUJI looks for metadata when assessing a resource”. Some of the text currently on https://www.f-uji.net/index.php?action=methods under “Findability” (grep for “Our practical tests verify the presence…”) explains just this, and could be moved to this new section. It should also be made a bit more understandable for a non-data-technical audience.
RECOMMENDATION 2: I really think it would be useful to explain the scope of the FUJI assessment you can do via the website https://www.f-uji.net/index.php?action=test: that it runs programmatic tests against the FAIR principles, looking into the DOI, the XXX, and the XXX exposure of the URI you give it; and that if you really want to use the test results to assess and improve, you need to read the actual results, including the debug messages, to figure out what you have not provided as content, whether your content is present but not semantically or not technically interoperable, or whether in fact FUJI does not (yet) understand your metadata 100%. It can only find what it knows to look for, and across the entire research domain (from social science, to life science, to physics and chemistry) there are so many permutations of metadata that it cannot (yet) understand them all. (A sketch of reading the debug messages via the API follows after these recommendations.)
RECOMMENDATION 3: FUJI could add a section on their website called “community feedback” or “community collaboration” or similar, saying that there is a GitHub space where you can raise issues and create your-community versions of FUJI (for the technically capable, at least), and explaining how to feed that back into FUJI so others can benefit from it. There is a bit in About under Usage, but it could say more.
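To illustrate RECOMMENDATION 2, here is a minimal sketch of pulling the full per-metric results, including the debug messages, out of a self-hosted FUJI instance’s REST API instead of reading only the summary score. The endpoint, the demo credentials, and the response field names are assumptions based on the FUJI README; verify them against your own deployment’s Swagger UI.

```python
import requests

# Assumed defaults from the FUJI README (local server, demo credentials);
# adjust both for your own deployment.
FUJI_API = "http://localhost:1071/fuji/api/v1/evaluate"
AUTH = ("marvel", "wonderwoman")

payload = {
    # Hypothetical identifier -- replace with your dataset's URL or DOI.
    "object_identifier": "https://doi.org/10.1594/PANGAEA.908011",
    "test_debug": True,   # ask FUJI to include per-metric debug messages
    "use_datacite": True,
}

response = requests.post(FUJI_API, json=payload, auth=AUTH, timeout=300)
response.raise_for_status()
report = response.json()

# Walk the per-metric results; the field names below reflect the response
# structure as I understand it and may differ between FUJI versions.
for result in report.get("results", []):
    metric = result.get("metric_identifier", "?")
    score = result.get("score", {})
    print(f"{metric}: {score.get('earned', 0)}/{score.get('total', 0)}")
    for message in result.get("test_debug", []):
        print(f"    debug: {message}")
```

Reading the debug lines per metric is what tells you whether content is missing, present but not understood, or simply not (yet) recognized by FUJI.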
Context
This can help others better understand the scope of the tool.
Possible Implementation
Just a quick note that part of the JSON-LD confusion happened on GitHub-hosted JSON-LD/RO-Crate pages, which actually do deliver the JSON via the raw.githubusercontent.com URL; see #542.