Sanitize input #173
base: master
Conversation
Hello @mcopik!
Looks like very good progress; thank you so much for your work!
I think the missing piece is modifying trigger and experiment code (should require only small changes), and running benchmarks to verify that they now work correctly.
import socket
from datetime import datetime
from time import sleep
from jsonschema import validate
Did you try running any of the benchmarks, e.g., on AWS? I think that jsonschema should be added to the requirements.txt of each benchmark.
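For instance, each benchmark's requirements.txt would gain a line like the following (the exact version pin here is an assumption, not something specified in this thread):

```
jsonschema>=4.0
```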
I haven't test-run the benchmarks on AWS. I really need assistance with this.
@niranjank2022 If the issue is that you cannot use the free tier of AWS, then please let me know - I'm happy to do the testing on this PR. Otherwise, if you found deploying and running benchmarks too complicated or not documented well enough, then I'm happy to help with that as well - just let me know.
client = storage.storage.get_instance()
key = client.upload(output_bucket, f'results-{request_id}.csv', '/tmp/data.csv')
return { 'status': 'success', 'result': 'Returned with no error', 'measurement': key }
This is correct, but we also need to update the experiment in sebs/experiments/... to make sure it now checks for the measurement key to download results.
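A minimal sketch of what that experiment-side change might look like: the function names and the `download` callable below are assumptions for illustration, not SeBS's actual API.

```python
# Hypothetical sketch of the experiment-side update: read the
# 'measurement' key from a benchmark response and use it to fetch
# the uploaded results file from storage.

def fetch_results(response, download):
    """Check the invocation status, then download results via the
    'measurement' storage key returned by the handler."""
    if response.get("status") != "success":
        raise RuntimeError(f"Benchmark failed: {response.get('result')}")
    key = response["measurement"]
    return download(key)

# Usage with a stubbed storage download:
downloaded = fetch_results(
    {"status": "success", "result": "Returned with no error",
     "measurement": "results-abc123.csv"},
    download=lambda key: f"contents of {key}",
)
```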
try:
    validate(event, schema=schema)
except:
    # !? To return 'measurement': {'bucket-key': None, 'timestamp': event['income-timestamp']}
Leftover comments?
I had a doubt I wanted to clarify; that's why I left the comment. When there is an exception, is it enough to return only {'status', 'result'}? I noticed that in some cases certain values do exist and can still be returned. For example, here, 'timestamp' could be returned in 'measurements'.
@niranjank2022 Yes, in that scenario we should return the exception contents (as string) and a failure status.
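A sketch of the error handling being agreed on here: on invalid input, return the exception contents as a string and a failure status. To keep the sketch self-contained, `check_event` below is a hand-rolled stand-in for `jsonschema.validate`; in the actual benchmarks the validation call would be the one shown in the diff above.

```python
# Sketch (an assumption, not the PR's exact code) of the suggested
# handling: catch the validation error and return its text with a
# failure status instead of letting the handler crash.

def check_event(event):
    """Minimal stand-in for jsonschema.validate: require the
    'income-timestamp' field used by the benchmark handlers."""
    if "income-timestamp" not in event:
        raise ValueError("'income-timestamp' is a required property")

def handler(event):
    try:
        check_event(event)
    except Exception as e:
        # Return the exception contents (as string) and a failure status.
        return {"status": "failure", "result": str(e)}
    return {"status": "success", "result": "Returned with no error"}
```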
print("Can't setup the connection")
break
server_socket.close()
# !? To return 'measurement': {'bucket-key': None, 'timestamp': event['income-timestamp']}
Leftover comments?
@@ -28,12 +46,12 @@ def handler(event):
process_time = (process_end - process_begin) / datetime.timedelta(microseconds=1)
upload_time = (upload_end - upload_begin) / datetime.timedelta(microseconds=1)
return {
    'result': {
        'status': 'success',
We also need to update the results parsing in sebs/faas/function.py and sebs/{platform}/triggers.py - make sure that we check for status, and then retrieve results from measurement.
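That parsing update might look roughly like this; the function name and response shape are assumptions based on the handler changes in this PR, not SeBS's real trigger code.

```python
# Hypothetical sketch of the trigger-side parsing: check 'status'
# first, then read the storage key from 'measurement'.

def parse_benchmark_output(response):
    """Validate a benchmark invocation response and return the
    storage key of the uploaded results file."""
    if response.get("status") != "success":
        raise RuntimeError(f"Invocation failed: {response.get('result')}")
    # 'measurement' holds the key returned by client.upload().
    return response["measurement"]

key = parse_benchmark_output(
    {"status": "success", "result": "Returned with no error",
     "measurement": "results-abc123.csv"}
)
```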
@mcopik
Meanwhile, I tried launching the benchmarks locally with MinIO storage. While launching the Docker containers for the benchmarks with this command, I observed that benchmarks 020, 030 and 040 returned errors.
I also made changes to incorporate jsonschema in requirements.txt. When I try to push my code, I get this warning:
Could you please help me resolve this?
@niranjank2022 Correct - these are internal microbenchmarks that will not work outside of an experiment. I need to update the documentation for that. Launching locally looks correct as long as you get a proper HTTP response :) EDIT: What error do you observe on that benchmark?
@niranjank2022 There is documentation on configuring the AWS account. Once you do that, you can just run the benchmarks. Can you please elaborate on which of these steps are failing for you, or if you don't know how to start with them? https://github.com/spcl/serverless-benchmarks/blob/master/docs/platforms.md
This seems like a problem with your local git configuration: maybe replacing the remote address would help.
I resolved this problem of mine: I used a personal access token with the appropriate scopes.
@niranjank2022 Hi! Do you plan to continue working on this PR? IIRC, it has not been tested on AWS?
I am extremely sorry, @mcopik. I ran into some serious health problems and, because of that, I have not had any time to work on open source. I should mostly be back by next month. Is it possible for me to continue from where I left off next month?
@niranjank2022 Sure, no worries! Let me know if you have any questions.
This PR addresses issue #118.
The handler.py files in every benchmark (000 to 500) are modified. Formatted strings using .format() have also been converted to f-strings.
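The f-string conversion is purely cosmetic; both forms produce identical strings, as a small self-contained example (the variable name mirrors the handler snippet quoted earlier) shows:

```python
# .format() style used before this PR vs. the f-string style after it.
request_id = "abc123"
old = 'results-{}.csv'.format(request_id)
new = f'results-{request_id}.csv'
# Both render the same key, e.g. 'results-abc123.csv'.
```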