
Sanitize input #173

Open · wants to merge 8 commits into master

Conversation

niranjank2022

This PR addresses issue #118.

It modifies the handler.py file in every benchmark (000 to 500). In addition, strings formatted with .format() are converted to f-strings.
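For illustration, the string-formatting conversion applied throughout the handlers looks like this (values hypothetical; the variable name matches the handler snippet below):

```python
request_id = "abc123"

# Before: str.format()
key_old = 'results-{}.csv'.format(request_id)

# After: the equivalent f-string
key_new = f'results-{request_id}.csv'

assert key_old == key_new == 'results-abc123.csv'
```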

niranjank2022 (Author)

Hello @mcopik! I would like to hear from you whether I have done the first part correctly or whether I need to revert anything.

mcopik (Collaborator) left a comment

Looks like very good progress; thank you so much for your work!

I think the missing piece is modifying trigger and experiment code (should require only small changes), and running benchmarks to verify that they now work correctly.

import socket
from datetime import datetime
from time import sleep
from jsonschema import validate
mcopik (Collaborator)

Did you try running any of the benchmarks, e.g., on AWS? I think that jsonschema should be added to requirements.txt of each benchmark.
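Per the suggestion above, this would be a one-line addition to each benchmark's requirements.txt (the version pin here is hypothetical):

```
jsonschema>=4.0
```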

niranjank2022 (Author)

I haven't test-run the benchmarks on AWS yet; I could really use assistance with that.

mcopik (Collaborator)

@niranjank2022 If the issue is that you cannot use the free tier of AWS, then please let me know - I'm happy to do the testing on this PR. Otherwise, if you found deploying and running benchmarks too complicated or not documented well enough, then I'm happy to help with that as well - just let me know.


client = storage.storage.get_instance()
key = client.upload(output_bucket, f'results-{request_id}.csv', '/tmp/data.csv')
return { 'status': 'success', 'result': 'Returned with no error', 'measurement': key }
mcopik (Collaborator)

This is correct, but we also need to update the experiment in sebs/experiments/... to make sure it now checks for measurement key to download results.

try:
validate(event, schema=schema)
except:
# !? To return 'measurement': {'bucket-key': None, 'timestamp': event['income-timestamp']}
mcopik (Collaborator)

Leftover comments?

niranjank2022 (Author)

I left that comment because I had a doubt to clarify. When there is an exception, is it enough to return only {'status', 'result'}? I noticed that in some cases certain values still exist and could be returned; for example, here 'timestamp' could be returned in 'measurement'.

mcopik (Collaborator)

@niranjank2022 Yes, in that scenario we should return the exception contents (as string) and a failure status.
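A minimal sketch of the contract suggested above (schema key and message are hypothetical; the real handler would call jsonschema.validate here, but a manual check stands in so the sketch runs without extra dependencies):

```python
def handler(event):
    try:
        # Stand-in for: validate(event, schema=schema)
        if "income-timestamp" not in event:
            raise ValueError("missing required key: income-timestamp")
    except Exception as e:
        # On invalid input, return the exception contents as a string
        # together with a failure status, rather than swallowing the error.
        return {"status": "failure", "result": str(e)}
    return {"status": "success", "result": "Returned with no error"}
```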

print("Can't setup the connection")
break
server_socket.close()
# !? To return 'measurement': {'bucket-key': None, 'timestamp': event['income-timestamp']}
mcopik (Collaborator)

Leftover comments?

@@ -28,12 +46,12 @@ def handler(event):
process_time = (process_end - process_begin) / datetime.timedelta(microseconds=1)
upload_time = (upload_end - upload_begin) / datetime.timedelta(microseconds=1)
return {
'result': {
'status': 'success',
mcopik (Collaborator)

We also need to update the results parsing in sebs/faas/function.py and sebs/{platform}/triggers.py - make sure that we check for status, and then retrieve results from measurement.
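A minimal sketch of the check the results-parsing code would need (function name hypothetical, not the actual sebs API):

```python
def parse_benchmark_response(response: dict):
    # Hypothetical helper mirroring the flow suggested for
    # sebs/faas/function.py and sebs/{platform}/triggers.py:
    # check the status first, and only then read the measurement.
    if response.get("status") != "success":
        raise RuntimeError(f"Invocation failed: {response.get('result')}")
    # Only a successful response carries the key of the uploaded results object.
    return response["measurement"]
```

For example, a success response yields its measurement key, while a failure response raises instead of being silently treated as valid.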

niranjank2022 (Author)

@mcopik I don't know how to deploy the benchmarks on AWS. I need assistance with that.

niranjank2022 (Author)

Meanwhile, I tried launching the benchmarks locally with MinIO storage. I noticed that whenever we launch the Docker containers for the benchmarks using this command:
./sebs.py local start 110.dynamic-html test out_benchmark.json --config config/local_deployment.json --deployments 1
it modifies out_benchmark.json: an input object with default values is generated for each benchmark. I copy-pasted that input as the data to invoke the function. Is this the proper way to test locally?

While doing this, I observed that benchmarks 020, 030, and 040 returned status: 'failure', and 311 returned an error. I wanted to bring this to your notice.

niranjank2022 (Author)


I also made some changes to incorporate jsonschema into requirements.txt. When I try to push my code, I get this error:

remote: Permission to niranjank2022/serverless-benchmarks.git denied to niranjank2022.
fatal: unable to access 'https://github.com/niranjank2022/serverless-benchmarks.git/': The requested URL returned error: 403

Could you please help me resolve this?

mcopik (Collaborator) commented Sep 5, 2023

> And while doing this, I observed that the benchmarks 020, 030 and 040 returned status: 'failure'. and 311 returned error.

@niranjank2022 Correct - these are internal microbenchmarks that will not work outside of an experiment. I need to update the documentation to reflect that.

Launching locally looks correct, as long as you get a proper HTTP response :)

EDIT: What error do you observe on benchmark 311? That one shouldn't fail.

mcopik (Collaborator) commented Sep 5, 2023

> @mcopik I don't know how to deploy the benchmarks in aws. I need assistance with that.

@niranjank2022 There is documentation on configuring the AWS account. Once you do that, you can just run ./sebs.py benchmark invoke .... Everything the benchmarks need for execution should be covered by the AWS Free Tier.

Could you please elaborate on which of these steps is failing for you, or whether you don't know how to start with them?

https://github.com/spcl/serverless-benchmarks/blob/master/docs/platforms.md

mcopik (Collaborator) commented Sep 5, 2023

> remote: Permission to niranjank2022/serverless-benchmarks.git denied to niranjank2022.
> fatal: unable to access 'https://github.com/niranjank2022/serverless-benchmarks.git/': The requested URL returned error: 403

This seems like a problem with your local git configuration: maybe replacing the https:// address with the SSH form (git@github.com) could help.
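The suggested remote switch can be sketched as follows (demonstrated in a throwaway repository so the commands are self-contained; in practice you would run only the set-url line inside your existing clone):

```shell
# Scratch repository for demonstration purposes.
tmp=$(mktemp -d) && cd "$tmp" && git init -q .

# The fork was originally added over HTTPS:
git remote add origin https://github.com/niranjank2022/serverless-benchmarks.git

# Switch the remote to the SSH form so pushes authenticate with your SSH key:
git remote set-url origin git@github.com:niranjank2022/serverless-benchmarks.git
git remote get-url origin
```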

niranjank2022 (Author)

> This seems like a problem with your local git configuration: maybe replacing the address from https:// to git@github.com could help.

I resolved the problem: I used a personal access token with the required scopes.

mcopik (Collaborator) commented May 13, 2024

@niranjank2022 Hi! Do you plan to continue working on this PR? IIRC, it has not been tested on AWS?

niranjank2022 (Author)

> @niranjank2022 Hi! Do you plan to continue working on this PR? IIRC, it has not been tested on AWS?

I am extremely sorry, @mcopik. I ran into some serious health problems, and because of that I have not had any time for open source. I should mostly be back by next month. Is it possible for me to continue from where I left off then?

mcopik (Collaborator) commented May 13, 2024

@niranjank2022 Sure, no worries! Let me know if you have any questions.
