S3 Upload: RequestTimeout (client): Your socket connection to the server was not read from or written to within the timeout period. #885

Closed
ondrejhlavacek opened this issue Jan 22, 2016 · 8 comments

@ondrejhlavacek

Sorry for bugging you a lot lately; I spotted another quirk yesterday. Everything runs in the same region (us-east-1), and it was a one-off (so far) after updating to 3.13.1 yesterday.

http://keboola-logs.s3.amazonaws.com/debug-files/2016-01-22-02-10-39-56a1818f1b47f-exception.html

Any thoughts? Thanks!

@MiroCillik

Same problem here.

@jeskew (Contributor) commented Jan 22, 2016

No worries, @ondrejhlavacek.

S3 responds with that error when a client hasn't sent any bytes for 20 seconds. The SDK does retry those errors (error responses are checked here), and we have a test that simulates this error.

If you're seeing this error surface, that means it occurred after the SDK exhausted all of its retries. Clients attempt three retries by default, but you can override this by setting the retries option on a client to any integer. You can also override the number of retries at the per-operation level by setting @retries in the arguments passed to PutObject.
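For example, here is a minimal sketch of both overrides (the bucket, key, and file path below are placeholders, not taken from this thread):

```php
<?php
require 'vendor/autoload.php';

use Aws\S3\S3Client;

// Client-level override: attempt up to 10 retries instead of the default 3.
$s3 = new S3Client([
    'version' => 'latest',
    'region'  => 'us-east-1',
    'retries' => 10,
]);

// Per-operation override: '@retries' applies only to this PutObject call.
$s3->putObject([
    'Bucket'   => 'example-bucket',  // placeholder
    'Key'      => 'path/to/object',  // placeholder
    'Body'     => fopen('/tmp/file.bin', 'rb'),
    '@retries' => 10,
]);
```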

You might also want to investigate what's causing the error in the first place; you could be experiencing a lot of contention on your local network.

@jeskew closed this as completed Jan 22, 2016
@ondrejhlavacek (Author)

I'd like to reopen this issue; maybe I can shed some more light. We're still trying to figure this out together with AWS Support, and this might be worth a shot before we resort to tcpdumping traffic on a traffic-heavy node.

Basically, the first attempt fails with cURL error 52 (empty reply from server), and all subsequent retries are immediately rejected with 400 Bad Request (RequestTimeout).

Here's a dump of some communication with AWS Support, S3 logs, and AWS debug logs: https://gist.github.com/ondrejhlavacek/6405f56ce8e0105bb1f9.

@jeskew (Contributor) commented Mar 23, 2016

Are you providing the file to the PutObject command as a resource handle? Prior to version 6.2.0, Guzzle was not rewinding streams before sending them to cURL (guzzle/guzzle#1422), which could explain why all of the request retries time out.
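If upgrading Guzzle isn't an option right away, one possible workaround is an application-level retry loop that rewinds the handle before every attempt, so a failed attempt doesn't leave the stream at EOF. A rough sketch, assuming the body is a seekable local file handle (names and paths are placeholders):

```php
<?php
require 'vendor/autoload.php';

use Aws\S3\S3Client;
use Aws\S3\Exception\S3Exception;

$s3 = new S3Client(['version' => 'latest', 'region' => 'us-east-1']);
$handle = fopen('/tmp/file.bin', 'rb'); // placeholder path

$attempts = 0;
do {
    try {
        rewind($handle); // make sure every attempt sends the body from byte 0
        $s3->putObject([
            'Bucket' => 'example-bucket', // placeholder
            'Key'    => 'path/to/object', // placeholder
            'Body'   => $handle,
        ]);
        break; // upload succeeded
    } catch (S3Exception $e) {
        if (++$attempts >= 3) {
            throw $e; // give up after three attempts
        }
    }
} while (true);
```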

@ondrejhlavacek (Author)

Yes, it's a resource handle. I'll try updating Guzzle to version 6.2.0. Thanks a lot; I'll let you know if it helps.

@usamamashkoor

@ondrejhlavacek and @jeskew, I am uploading files to Amazon S3 using jQuery AJAX so that I can show a progress bar, and I am facing the same problem. The progress bar works correctly, but when it reaches 100%, the request keeps running in the browser for 2 to 3 minutes, after which the AJAX request returns this response:

RequestTimeout: Your socket connection to the server was not read from or written to within the timeout period. Idle connections will be closed.

The file does not get uploaded to S3. Kindly help me with this issue, thanks.

@kstich (Contributor) commented Jul 17, 2017

@usamamashkoor If you are on a version between 3.31.0 and 3.31.2, #1326 may be causing this, and we'd recommend updating to 3.31.3. If you are on another version, please file a new issue with some sample code that reproduces the problem.
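As an aside, if the browser is uploading directly to S3 rather than posting the file through your PHP backend, the PHP SDK's usual role is just to presign the request server-side. A minimal sketch of presigning a PutObject request (the bucket, key, and expiry below are placeholders, not taken from this thread):

```php
<?php
require 'vendor/autoload.php';

use Aws\S3\S3Client;

$s3 = new S3Client(['version' => 'latest', 'region' => 'us-east-1']);

// Build a PutObject command and presign it; the browser can then PUT
// the file body directly to the returned URL.
$cmd = $s3->getCommand('PutObject', [
    'Bucket' => 'example-bucket',   // placeholder
    'Key'    => 'uploads/file.bin', // placeholder
]);
$request = $s3->createPresignedRequest($cmd, '+20 minutes'); // placeholder expiry
$presignedUrl = (string) $request->getUri();
```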

@usamamashkoor

@kstich I am uploading the file with jQuery AJAX, using the JavaScript XHR object. Here is some of the code:

xhr.open('POST', 'bucket_name', true);

Then I use the progress event listener to track the upload so that I can show a progress bar to the user:

xhr.upload.addEventListener("progress", uploadProgress, false);

I think the issue is related to the Laravel PHP Amazon S3 API. Can you please help me with this, or suggest how I can fix it? If I really need to submit a new issue I will do so with more code, but I wanted to confirm here first.

Thanks.
