s3a error #8
This is the main error in CloudWatch:
Caused by: org.apache.hadoop.util.DiskChecker$DiskErrorException: Could not find any valid directory for output-
If it helps, I can provide the configuration I entered, but I followed the documentation.
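As far as I understand, that exception comes from Hadoop failing to find a writable local directory to buffer the s3a output, so the settings below are the kind of thing involved. This is only a minimal sketch: the keys come from the standard Spark and s3a documentation, not from this project's docs, and the /tmp paths are my assumption for a Lambda container.

```scala
import org.apache.spark.SparkConf

// Minimal sketch: point every local buffer at /tmp, the only writable
// path inside a Lambda container. Keys are taken from the standard
// Spark/s3a docs, not from this project's documentation.
val sparkConf = new SparkConf()
  .set("spark.local.dir", "/tmp")                    // Spark scratch space
  .set("spark.hadoop.hadoop.tmp.dir", "/tmp")        // Hadoop temp root
  .set("spark.hadoop.fs.s3a.buffer.dir", "/tmp/s3a") // where s3a buffers output before upload
```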
Hey @webroboteu
I had already tried these parameters, without success. Now, out of desperation, I was thinking of bypassing the Hadoop interface and managing the stream directly. Is your email on LinkedIn the one you posted on your profile? I would like to add you to my network to discuss the project.
I'll try again and let you know.
If I want to recompile it, you suggest using your Hadoop version 2.6.0-qds-0.4.13, but there is no reference to your repository. Can you suggest something for version 2.8, for example?
Right. But you can just compile against the existing open source Hadoop 2.6.0 and copy the hadoop-aws jar into your binary afterwards; that should work as well. This is a comment I added in another issue: Compiling #2
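If it helps, a quick way to confirm which hadoop-aws jar actually ends up on the classpath after copying it over is plain JVM reflection (nothing specific to this repo; a minimal sketch):

```scala
// Prints the location of the jar that S3AFileSystem was loaded from,
// so you can verify the copied hadoop-aws jar is the one being used.
object CheckS3AJar {
  def main(args: Array[String]): Unit = {
    val cls = Class.forName("org.apache.hadoop.fs.s3a.S3AFileSystem")
    println(cls.getProtectionDomain.getCodeSource.getLocation)
  }
}
```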
Recompiling as you suggest, I get the following error:
I have a repository with a Docker image: https://github.com/webroboteu/sparklambdadriver
With Hadoop 2.9, referring to the 1.11.199 SDK bundle, there is progress with these Docker lines, but I still have to confirm that it works in the Lambda context:
RUN wget http://central.maven.org/maven2/com/amazonaws/aws-java-sdk-bundle/1.11.199/aws-java-sdk-bundle-1.11.199.jar
With local execution I now have this problem: java.lang.NullPointerException. I'll keep you updated.
I'm going in the right direction, since I can now recompile it correctly. For some strange reason it tries to load the data from the same executorId 4775351731:
java.io.FileNotFoundException: No such file or directory: s3://webroboteuquboleshuffle/tmp/executor-driver-4775351731/30/shuffle_0_0_0.index
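To rule out a credentials or visibility problem, the prefix can also be checked outside Spark with the plain Hadoop FileSystem API. A minimal sketch: it assumes hadoop-aws and AWS credentials are already configured, and it uses the s3a scheme rather than whatever scheme the shuffle manager uses internally.

```scala
import java.net.URI
import org.apache.hadoop.conf.Configuration
import org.apache.hadoop.fs.{FileSystem, Path}

// Lists whatever is visible under the shuffle prefix, to tell a genuinely
// missing index file apart from a credentials/endpoint problem.
object ListShufflePrefix {
  def main(args: Array[String]): Unit = {
    val conf = new Configuration()
    val fs = FileSystem.get(new URI("s3a://webroboteuquboleshuffle/"), conf)
    fs.listStatus(new Path("s3a://webroboteuquboleshuffle/tmp/"))
      .foreach(status => println(status.getPath))
  }
}
```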
In the example I have attached, the following problem appears, which seems to be related to the way Spark manages the shuffle in the S3 context. Can you confirm that the problem occurs, or is it a configuration problem on my side?
ShuffleExample.scala.zip
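I can't paste the zip contents inline, but a hypothetical sketch of the kind of job that exercises the shuffle path (not necessarily the exact attached code) would be:

```scala
import org.apache.spark.{SparkConf, SparkContext}

// Hypothetical sketch of a job that forces a shuffle stage; not the
// exact contents of the attached ShuffleExample.scala.
object ShuffleExample {
  def main(args: Array[String]): Unit = {
    val conf = new SparkConf().setAppName("ShuffleExample")
    val sc = new SparkContext(conf)

    // reduceByKey introduces a wide dependency, so shuffle files get written
    val counts = sc.parallelize(1 to 100000)
      .map(i => (i % 10, 1L))
      .reduceByKey(_ + _)

    counts.collect().foreach(println)
    sc.stop()
  }
}
```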