We are using the Whisper example from the RKNN Model Zoo (https://github.com/airockchip/rknn_model_zoo/tree/main/examples/whisper/cpp) for RKNN inference on our device. However, we have observed that the model sometimes generates hallucinated text, meaning it produces words or phrases that were not present in the original audio input.
To improve transcription accuracy, we would like to disable or minimize hallucination. Could you provide guidance on how to adjust the model settings, decoding parameters, or RKNN configurations to achieve this? If modifications to the example code are required, please suggest the necessary changes.
I found some solutions in the OpenAI GitHub discussion openai/whisper#679, but they differ from the RKNN configuration.
Also this: https://community.openai.com/t/how-to-avoid-hallucinations-in-whisper-transcriptions/125300/16.
Can you help us make these changes in the RKNN C++ example?
Steps to Reproduce:
Run the Whisper C++ example on an RKNN-supported device.
Provide an input audio sample with silent or low-quality speech.
Observe that the model sometimes generates text that was not actually spoken.
Expected Behavior:
The model should generate transcriptions that closely match the input speech (in our case from mic) and avoid unnecessary hallucination.