[Bug]: Moderation api is not working #386
Hi, @kishanios123
@SimFG Yes, correct. Sorry for creating an issue by mistake ... and thank you very much for considering adding the Moderation API. By the way, one question: does Moderation also use the caching mechanism? If yes, how does it work — an exact word match or some other mechanism?
@kishanios123 Yes, it's the same as the other APIs. The biggest advantage may be that it relieves the pressure on the network. Of course, this API is free, so you can decide whether to use the cache according to your actual scenario.
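The similarity-based lookup described above can be sketched with a stdlib-only toy. Everything here (`embed`, `ToySimilarCache`, the 0.8 threshold) is hypothetical; GPTCache's real pipeline uses ONNX embeddings, a vector store such as faiss, and a pluggable similarity evaluation.

```python
import math

def embed(text):
    # Toy bag-of-words "embedding": a hypothetical stand-in for a real model.
    words = text.lower().split()
    return {w: words.count(w) for w in set(words)}

def cosine(a, b):
    keys = set(a) | set(b)
    dot = sum(a.get(k, 0) * b.get(k, 0) for k in keys)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

class ToySimilarCache:
    def __init__(self, threshold=0.8):
        self.threshold = threshold
        self.store = []  # list of (embedding, answer)

    def get(self, question):
        emb = embed(question)
        best = max(self.store, key=lambda e: cosine(emb, e[0]), default=None)
        if best and cosine(emb, best[0]) >= self.threshold:
            return best[1]  # cache hit: the network call is skipped
        return None  # cache miss: the real API would be called here

    def put(self, question, answer):
        self.store.append((embed(question), answer))

toy_cache = ToySimilarCache()
toy_cache.put("is this text offensive", "not flagged")
print(toy_cache.get("is this text offensive ?"))  # near-identical question hits
print(toy_cache.get("translate this sentence"))   # unrelated question misses
```

The key point is that a lookup is not an exact word match: any question whose embedding is close enough to a stored one reuses the stored answer.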
@SimFG Thanks for your response. I am confused about `init_similar_cache` and `pre_func`: `init_similar_cache(pre_func=get_openai_moderation_input)`.
I am including my code below to show how I am using this. Please tell me if there is a problem in my code regarding caching.
In each cache, the preprocessing method is different, because each LLM request format is different. The following is my suggestion:

```python
import os
import time

from gptcache import cache, Cache
from gptcache.adapter import openai
from gptcache.adapter.api import init_similar_cache
from gptcache.embedding import Onnx
from gptcache.manager import manager_factory
from gptcache.processor.post import temperature_softmax
from gptcache.processor.pre import get_openai_moderation_input
from gptcache.similarity_evaluation import OnnxModelEvaluation

os.environ["OPENAI_API_KEY"] = ""
cache.set_openai_key()

# Similarity cache for chat completions, backed by sqlite + faiss.
llm_cache = Cache()
onnx = Onnx()
data_manager = manager_factory("sqlite,faiss", vector_params={"dimension": onnx.dimension})
llm_cache.init(
    embedding_func=onnx.to_embeddings,
    data_manager=data_manager,
    similarity_evaluation=OnnxModelEvaluation(),
    post_process_messages_func=temperature_softmax,
)
# Maybe you can use SearchDistanceEvaluation, because it should fit a lot of
# scenes; if you say yes, try:
# llm_cache = Cache()
# init_similar_cache(data_dir="llm_cache", cache_obj=llm_cache)

# Separate cache for moderation, with a moderation-specific pre-processor.
moderation_cache = Cache()
init_similar_cache(data_dir="moderation_cache", cache_obj=moderation_cache,
                   pre_func=get_openai_moderation_input)

# The following runs inside a Flask view function
# (question, redirect, url_for come from the surrounding app):
flagged = openai.Moderation.create(input=question, cache_obj=moderation_cache)["results"][0]["flagged"]
if not flagged:
    start = time.time()
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        temperature=1.0,  # change temperature here
        messages=[{"role": "user", "content": "My Question"}],
        cache_obj=llm_cache,
    )
    print("Time elapsed:", round(time.time() - start, 3))
    print("moderation passed answer = " + response["choices"][0]["message"]["content"]
          + " - token = " + str(response["usage"]["total_tokens"]), flush=True)
else:
    print("moderation failed question = " + question, flush=True)
    answer = "Your question violates the policy. Please ask only relevant and appropriate questions."
    return redirect(url_for("index", result=answer))
```

Note:
I am using an API environment (Flask). Is this code good? `Cache()` creates a new object every time, right? So on the next start of my Flask server, will the respective old cache be used for both LLM and moderation, or do I need to do something else?
Oh, I don't know. I have tried SearchDistanceEvaluation, but it fails in many simple cases.
When you start the server, you go to init the cache, which is no problem. On the next start, the cache data will be restored from the cache directory.
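The restore-on-restart behavior can be illustrated with a stdlib sketch. This is hypothetical storage code, not GPTCache's actual format; the point is only that state written under `data_dir` survives a new process opening the same directory, which is conceptually what happens across Flask server restarts.

```python
import os
import sqlite3
import tempfile

def open_cache(data_dir):
    # Opening the same directory again reuses whatever was stored before,
    # analogous to a second server start re-initializing the cache.
    os.makedirs(data_dir, exist_ok=True)
    conn = sqlite3.connect(os.path.join(data_dir, "cache.db"))
    conn.execute("CREATE TABLE IF NOT EXISTS kv (q TEXT PRIMARY KEY, a TEXT)")
    return conn

data_dir = os.path.join(tempfile.mkdtemp(), "moderation_cache")

# First "server start": populate the cache, then shut down.
conn = open_cache(data_dir)
conn.execute("INSERT OR REPLACE INTO kv VALUES (?, ?)", ("my question", "cached answer"))
conn.commit()
conn.close()

# Second "server start": a brand-new connection sees the old data.
conn = open_cache(data_dir)
row = conn.execute("SELECT a FROM kv WHERE q = ?", ("my question",)).fetchone()
print(row[0])  # -> cached answer
```

So creating a fresh `Cache()` object on each start is fine, as long as `init_similar_cache` points at the same `data_dir` each time.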
@SimFG It's working well now. Waiting for your suggestion on the similarity algorithm ... Thanks again for your support.
I have found one more issue. I want to use exact-match evaluation in the Moderation API, but I don't know how to do this. I tried something like this but got the error `TypeError: init() got an unexpected keyword argument 'pre_func'`.
I cannot find an `init_exact_cache` function similar to `init_similar_cache` ...
Change `pre_func` to `pre_embedding_func`: when calling `Cache.init` directly, the keyword is `pre_embedding_func`; `pre_func` is an argument of `init_similar_cache`, which forwards it.
In this particular example, the two sentences are similar in structure, with the only difference being the negation in the second sentence. As a result, the embeddings of these sentences could be close in the embedding space, reflecting the shared context and structure.
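A quick way to see this effect: under a naive bag-of-words similarity (a hypothetical stand-in for a learned embedding, chosen only to make the failure mode obvious), a sentence and its negation score as nearly identical.

```python
import math

def bow(text):
    # Bag-of-words count vector over the sentence's own vocabulary.
    words = text.lower().split()
    return {w: words.count(w) for w in set(words)}

def cosine(a, b):
    dot = sum(a.get(k, 0) * b[k] for k in b)
    norm = lambda v: math.sqrt(sum(x * x for x in v.values()))
    return dot / (norm(a) * norm(b))

s1 = "i like this answer"
s2 = "i do not like this answer"
sim = cosine(bow(s1), bow(s2))
print(round(sim, 2))  # -> 0.82: very close, despite the opposite meaning
```

Real embedding models capture more context than word counts, but the underlying risk is the same: structurally similar sentences with flipped polarity can sit close together, which is why the similarity evaluation step matters.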
@kishanios123 A possible cause of the wrong cache answer, link: #388
@SimFG Got your point. I am not an expert in this, but I think ONNX handles it well, right? Since it is an ML model, does it understand the negation part?
@kishanios123 Yes, but due to limited training data, it has no way to support a large number of tokens.
Is there anything we can do on the evaluation part?
Getting this error, can someone help? `modRes = openai.Moderation.create(input=question, cache_obj=moderation_cache)`
@kishanios123 Which embedding function did you use? I guess you should not set an embedding.
@SimFG ... but it was working a few weeks ago ... I followed this code: #386 (comment) `moderation_cache = Cache()`
maybe you should use the |
Thanks for providing a solution, it works ...
I am getting this warning. Does it cause any issues in the future?
I am using the moderation cache like this: if I change
to
then the warning is not shown ...
I got another error ...
It's just a warning.
For the error, can you give a testing demo?
Sometimes the error comes, and sometimes it works well ...
I tried to run the example below and could not reproduce the error:
Please try it on different inputs ... I have hit it many times ...
It's not about updates. I tried just now with GPTCache 0.1.26 and get the same error on the first run.
The latest version is 0.1.32; maybe you can try it. From the latest source code, I think this error should not exist.
I received the exception initially in 0.1.32, then downgraded to 0.1.26. I get the exception in both versions.
Current Behavior
```
modOutputres = openai.Moderation.create(input=question)
  File "/opt/anaconda3/envs/openai/lib/python3.9/site-packages/gptcache/adapter/openai.py", line 319, in create
    res = adapt(
  File "/opt/anaconda3/envs/openai/lib/python3.9/site-packages/gptcache/adapter/adapter.py", line 39, in adapt
    pre_embedding_res = chat_cache.pre_embedding_func(
  File "/opt/anaconda3/envs/openai/lib/python3.9/site-packages/gptcache/processor/pre.py", line 19, in last_content
    return data.get("messages")[-1]["content"]
TypeError: 'NoneType' object is not subscriptable
```
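The traceback itself explains the failure: the default pre-processor, `last_content`, reads the `messages` field of a chat-completion payload, but a moderation request only carries `input`, so `data.get("messages")` returns `None` and indexing it raises the `TypeError`. Below is a simplified sketch of that function (its body matches the line shown in the traceback; the rest is hypothetical), not GPTCache's code verbatim:

```python
def last_content(data, **params):
    # Simplified sketch: assumes an OpenAI *chat* payload with a "messages" list.
    return data.get("messages")[-1]["content"]

chat_request = {"messages": [{"role": "user", "content": "hi"}]}
print(last_content(chat_request))  # -> hi

moderation_request = {"input": "some text"}  # no "messages" key
try:
    last_content(moderation_request)
except TypeError as e:
    print(e)  # -> 'NoneType' object is not subscriptable
```

Note that the failing call above, `openai.Moderation.create(input=question)`, passes no `cache_obj`, so it likely falls back to the global cache and its default `last_content` pre-processor instead of the moderation cache configured with `pre_func=get_openai_moderation_input`.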
Expected Behavior
It should return the Moderation API output.
Steps To Reproduce
No response
Environment
No response
Anything else?
No response