
immediately close the child handle for a pipe after retrieving all content #27

Closed
wants to merge 1 commit

Conversation

schmunk42
Contributor


Prevents issues that can arise from reading the content of blocking/non-blocking streams while running under FPM.

@mikehaertl
Owner

I'm very averse to changing this. Whenever we touch this code we see new reports about things breaking or blocked executions, see #20. I've already reverted a couple of fixes because each of them introduced new problems.

So I don't want to change this until we can clearly identify the issue in #20 and have test cases for it.

@mikehaertl
Owner

@schmunk42 Have you also tried the proposed fix in #20?

    // set streams to non-blocking mode
    stream_set_blocking($pipes[1], 0);
    stream_set_blocking($pipes[2], 0);

Would this also fix your problem?
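For context, the proposed fix only makes sense as part of a complete read loop. Below is a minimal, self-contained sketch of how those two `stream_set_blocking()` calls would typically be embedded; the surrounding variable names and the `echo hello` command are illustrative, not the library's actual code:

```php
<?php
// Minimal sketch: run a command with proc_open(), switch both
// output pipes to non-blocking mode, then drain them in a loop.
$descriptors = [
    1 => ['pipe', 'w'],   // child's stdout
    2 => ['pipe', 'w'],   // child's stderr
];
$process = proc_open('echo hello', $descriptors, $pipes);

// set streams to non-blocking mode (the proposed fix from #20)
stream_set_blocking($pipes[1], 0);
stream_set_blocking($pipes[2], 0);

$stdOut = '';
$stdErr = '';
// With non-blocking pipes, fread() returns immediately even when
// no data is available, so we loop until both pipes reach EOF.
while (!feof($pipes[1]) || !feof($pipes[2])) {
    $stdOut .= fread($pipes[1], 8192);
    $stdErr .= fread($pipes[2], 8192);
    usleep(1000); // avoid busy-waiting while the command runs
}
fclose($pipes[1]);
fclose($pipes[2]);
$exitCode = proc_close($process);
```

The point of non-blocking mode is that a full stderr buffer can no longer stall a blocking read on stdout (and vice versa), because neither `fread()` call ever waits.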

@schmunk42
Contributor Author

I remember now :)

I played around with stream_set_blocking(), but the streams appear to already have the correct setting (blocking, in my case).
That said, I also found reports that the stream meta-data PHP returns is not correct in all cases.

I think it would be nice if the library exposed a parameter to control stream_set_blocking(); that would make this easier to debug.

A parameter for switching the order in which STDOUT and STDERR are read might be overkill, but it could also help - I'm not sure here.

The idea behind my proposed change was just to close each stream as soon as it has been fully read, instead of keeping an open file handle for the first stream while reading the second one.

We were seeing this problem in our logs: https://serverfault.com/questions/839135/nginx-php-fpm-child-exited-with-code-0
Log messages piled up to 50,000 per minute.
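The ordering proposed in this PR can be sketched as follows. This is an illustrative reduction, not the library's actual code, and it keeps the pipes in their default blocking mode (which is exactly why it can deadlock if the unread pipe's buffer fills up - the scenario discussed in #20):

```php
<?php
// Sketch of the PR's idea: fully read a pipe, then close its
// handle immediately, before touching the next pipe. The command
// writes one line to stdout and one to stderr.
$process = proc_open(
    'echo out; echo err 1>&2',
    [1 => ['pipe', 'w'], 2 => ['pipe', 'w']],
    $pipes
);

// Read stdout to EOF and close it right away, so no stale
// handle stays open while stderr is being read.
$stdOut = stream_get_contents($pipes[1]);
fclose($pipes[1]);

// Only then read and close stderr.
$stdErr = stream_get_contents($pipes[2]);
fclose($pipes[2]);

$exitCode = proc_close($process);
```

For small outputs like this the sequential reads are safe; the risk is a command that fills the OS pipe buffer of the second stream before the first one reaches EOF.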

@schmunk42
Contributor Author

Maybe this could also be related to STDIN? Please see https://bugs.php.net/bug.php?id=73342

So shellcommand's proc_open() call does not handle STDIN at all - it would be interesting to see whether the bug can be avoided by also declaring (and closing) a descriptor for it, like:

 0   => array('pipe','r'),
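A sketch of what that would look like in practice, assuming an illustrative `cat` command (not the library's code): descriptor 0 is opened as a pipe and closed immediately after writing, so the child gets a clean EOF instead of inheriting FPM's stdin.

```php
<?php
// Sketch: declare all three descriptors, including 0 => stdin,
// and close the stdin pipe as soon as input has been written.
$descriptors = [
    0 => ['pipe', 'r'],   // stdin  - child reads
    1 => ['pipe', 'w'],   // stdout - child writes
    2 => ['pipe', 'w'],   // stderr - child writes
];
$process = proc_open('cat', $descriptors, $pipes);

fwrite($pipes[0], "ping\n");
fclose($pipes[0]);            // signal EOF to the child immediately

$stdOut = stream_get_contents($pipes[1]);
fclose($pipes[1]);
fclose($pipes[2]);
$exitCode = proc_close($process);
```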

@mikehaertl
Owner

@schmunk42 Not sure if you're still using this library, but maybe you could help test PR #41? I've now completely refactored the execution with proc_open() and try to use non-blocking mode whenever possible. This requires some nifty alternating handling of the I/O streams, though. In my tests it worked fine, but I'd like to ensure that the new solution works reliably in all cases.

@schmunk42
Contributor Author

I'll try to take a look. My usage was mainly with this lib: https://github.com/omauger/docker-compose-php

Basically I wanted to immediately see the output from docker-compose.

@mikehaertl
Owner

> Basically I wanted to immediately see the output from docker-compose.

Right, I remember now. Honestly, I think we can't support immediate output. If you check the code, using non-blocking streams is already complex enough, and I can't think of an easy way to implement a real-time output feature on top of it. So for that use case you'd need another library - or brew your own solution for that specific scenario. I doubt there's a simple generic solution.

@mikehaertl
Owner

@schmunk42 Closing this for now as the original issue should be solved with 1.5.0.

Regarding immediate output:

I was thinking about a complete rewrite of the library with a cleaner interface. In theory I think immediate output would be possible - but at the cost that the code you need to write always becomes ugly. Basically you'd need a loop in your code, something like this:

$command = new Command('sleep 20');
$command->runAsync();
while ($command->isRunning()) {
    echo $command->getOutput();
}

But to be on the safe side, that's not enough. You'd also have to copy the logic that alternately checks the stdin/stdout/stderr streams into your code:

// Due to the non-blocking streams we now have to check in
// a loop if the process is still running. We also need to
// ensure that all the pipes are written/read alternately
// until there's nothing left to write/read.
$isRunning = true;
while ($isRunning) {
    $status = proc_get_status($process);
    $isRunning = $status['running'];

    // We first write to stdin if we have an input. For big
    // inputs it will only write until the input buffer of
    // the command is full (the command may now wait until
    // we read the output buffers - see below). So we may
    // have to continue writing in another cycle.
    //
    // After everything is written it's safe to close the
    // input pipe.
    if ($isRunning && $hasInput && $isInputOpen) {
        if ($isInputStream) {
            $written = stream_copy_to_stream($this->_stdIn, $pipes[0], 16 * 1024, $writtenBytes);
            if ($written === false || $written === 0) {
                $isInputOpen = false;
                fclose($pipes[0]);
            } else {
                $writtenBytes += $written;
            }
        } else {
            if ($writtenBytes < strlen($this->_stdIn)) {
                $writtenBytes += fwrite($pipes[0], substr($this->_stdIn, $writtenBytes));
            } else {
                $isInputOpen = false;
                fclose($pipes[0]);
            }
        }
    }

    // Read out the output buffers because if they are full
    // the command may block execution. We do this even if
    // $isRunning is `false`, because there could be output
    // left in the buffers.
    //
    // The latter is only an assumption and needs to be
    // verified - but it does not hurt either and works as
    // expected.
    while (($out = fgets($pipes[1])) !== false) {
        $this->_stdOut .= $out;
    }
    while (($err = fgets($pipes[2])) !== false) {
        $this->_stdErr .= $err;
    }

    if (!$isRunning) {
        $this->_exitCode = $status['exitcode'];
        fclose($pipes[1]);
        fclose($pipes[2]);
        proc_close($process);
    } else {
        // The command is still running. Let's wait some
        // time before we start the next cycle.
        usleep(10000);
    }
}

You always need to check how much of the input/output you've already sent/received.

The library could buffer this internally to make things a little easier (e.g. with php://temp or php://memory). But to be honest, I feel like there are too many scenarios where things can go horribly wrong. Maintaining this could soon become a nightmare.
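For reference, the internal buffering mentioned above could be sketched with php://temp like this. This is a minimal illustration of the stream itself, not a proposed library API; the 2 MB threshold is an arbitrary example:

```php
<?php
// php://temp behaves like an in-memory stream that transparently
// spills to a temporary file once it grows past the maxmemory
// threshold - here 2 MB, chosen arbitrarily for illustration.
$buffer = fopen('php://temp/maxmemory:' . (2 * 1024 * 1024), 'r+');

// The library would append command output here as it arrives ...
fwrite($buffer, "chunk 1\n");
fwrite($buffer, "chunk 2\n");

// ... and the caller could rewind and read the accumulated output.
rewind($buffer);
$all = stream_get_contents($buffer);
fclose($buffer);
```

The appeal is that callers would not have to track partial reads themselves; the drawback, as noted above, is the extra state the library would have to manage correctly across all the failure scenarios.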

So I'd say: if you need this, it's really better to build your own logic with proc_open() based on the loop shown above and tailor it to fit your needs.

@mikehaertl mikehaertl closed this Aug 17, 2019
@mikehaertl mikehaertl mentioned this pull request Sep 15, 2019