
Asynchronous shell exec in PHP

I've got a PHP script that needs to invoke a shell script but doesn't care at all about the output. The shell script makes a number of SOAP calls and is slow to complete, so I don't want to slow down the PHP request while it waits for a reply. In fact, the PHP request should be able to exit without terminating the shell process.

I've looked into the various exec(), shell_exec(), pcntl_fork(), etc. functions, but none of them seem to offer exactly what I want. (Or, if they do, it's not clear to me how.) Any suggestions?

No matter which solution you choose, you should also consider using nice and ionice to prevent the shell script from overwhelming your system (e.g. /usr/bin/ionice -c3 /usr/bin/nice -n19)
Possible duplicate of php execute a background process

Community

If it "doesn't care about the output", couldn't the exec to the script be called with the & to background the process?

EDIT - incorporating what @AdamTheHut commented to this post, you can add this to a call to exec:

" > /dev/null 2>/dev/null &"

That will redirect both stdout (the first >) and stderr (2>) to /dev/null and run the command in the background.

There are other ways to do the same thing, but this is the simplest to read.

An alternative to the above double-redirect:

" &> /dev/null &"
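The effect of the redirect-and-background suffix can be sketched on its own; here `sleep 2` stands in for the slow shell script:

```shell
# Both output streams are discarded and the command is backgrounded,
# so the caller gets control back immediately instead of waiting.
start=$(date +%s)
sleep 2 > /dev/null 2>&1 &   # stand-in for the slow SOAP script
bg_pid=$!
end=$(date +%s)
elapsed=$((end - start))     # ~0, not 2: the shell did not wait
wait "$bg_pid"               # only for this demo; a real caller would just move on
```

The same suffix appended inside PHP's exec() call behaves identically, since exec() hands the whole string to a shell.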

This seems to work, but it needs a little more than an ampersand. I got it working by appending "> /dev/null 2>/dev/null &" to the exec() call. Although I have to admit I'm not exactly sure what that does.
Definitely the way to go if you want fire and forget with php and apache. A lot of production Apache and PHP environments will have pcntl_fork() disabled.
Just a note for &> /dev/null &, xdebug won't generate logs if you use this. Check stackoverflow.com/questions/4883171/…
@MichaelJMulligan it closes the file descriptors. That said, despite the efficiency gains, in hindsight, using /dev/null is the better practice, as writing to closed FDs causes errors, whereas attempts to read or write to /dev/null simply silently do nothing.
Background scripts will be killed when restarting apache ... just be aware of this for very long jobs or if you are unlucky timing-wise and you wonder why your job disappeared ...
MartyIX

I used at for this, as it is really starting an independent process.

<?php
    `echo "the command" | at now`;
?>

in some situations this is absolutely the best solution. it was the only one that worked for me to release a "sudo reboot" ("echo 'sleep 3; sudo reboot' | at now") from a webgui AND finish rendering the page .. on openbsd
if the user you run apache (usually www-data) doesn't have the permissions to use at and you can't configure it to, you can try to use <?php exec('sudo sh -c "echo \"command\" | at now" '); If command contains quotes, see escapeshellarg to save you headaches
Well thinking about it, to get sudo to run sh isn't the best idea, as it basically gives sudo a root access to everything. I reverted to use echo "sudo command" | at now and commenting www-data out in /etc/at.deny
@Julien maybe you could make a shell script with the sudo command and launch it instead. as long as you aren't passing any user submitted values to said script
The at package (and command) is not installed by default on some Linux distributions, and it requires the atd service to be running. This solution may also be hard to understand given the lack of explanation.
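Since at is not guaranteed to be present, a sketch of this approach would guard for its availability first (`echo hello` stands in for the real command):

```shell
cmd='echo hello'
if command -v at > /dev/null 2>&1; then
    # queue the command for immediate, detached execution by atd
    printf '%s\n' "$cmd" | at now
    status=queued
else
    # at is not installed (or not on PATH); report or fall back
    status=unavailable
fi
```

Note that even when the binary exists, jobs only run if the atd service is active and the invoking user is not listed in /etc/at.deny.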
LucaM

To all Windows users: I found a good way to run an asynchronous PHP script (actually it works with almost everything).

It's based on the popen() and pclose() functions, and works well on both Windows and Unix.

function execInBackground($cmd) {
    if (substr(php_uname(), 0, 7) == "Windows"){
        pclose(popen("start /B ". $cmd, "r")); 
    }
    else {
        exec($cmd . " > /dev/null &");  
    }
} 

Original code from: http://php.net/manual/en/function.exec.php#86329


This only executes a binary directly; it won't work if the target needs an interpreter such as php/python/node etc.
@davidvalentino Correct, and that's fine! If you would like to execute a PHP/Python/NodeJS script you have to actually call the interpreter and pass your script to it. E.g.: you don't put myscript.js in your terminal; instead, you write node myscript.js. That is: node is the executable, myscript.js is the, well, script to execute. There's a huge difference between an executable and a script.
Right, that's no problem in that case. In other cases, for example when you need to run Laravel's artisan (php artisan), it's worth noting here so nobody has to trace why it won't work with such commands.
Short, sweet and it works (Windows + Wamp, running the command execInBackground("curl " . $url)).
Darryl Hein

On linux you can do the following:

$cmd = 'nohup nice -n 10 php -f php/file.php > log/file.log & printf "%u" $!';
$pid = shell_exec($cmd);

This will execute the command at the command prompt and then just return the PID, which you can check (> 0) to ensure it worked.

This question is similar: Does PHP have threading?
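The PID-capture mechanism above can be sketched in the shell alone; `sleep 3` stands in for the real job and the log path is illustrative:

```shell
log=$(mktemp)
nohup sleep 3 > "$log" 2>&1 &   # nohup detaches the job from the terminal's HUP signal
pid=$!                          # $! holds the PID of the most recent background job
kill -0 "$pid" 2>/dev/null && alive=yes || alive=no   # signal 0 only checks existence
kill "$pid" 2>/dev/null         # clean up the demo job
rm -f "$log"
```

From PHP, `printf "%u" $!` at the end of the command string is what hands that PID back to shell_exec().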


This answer would be easier to read if you included only the bare essentials (eliminating the action=generate var1_id=23 var2_id=35 gen_id=535 segment). Also, since OP asked about running a shell script, you don't need the PHP-specific portions. The final code would be: $cmd = 'nohup nice -n 10 /path/to/script.sh > /path/to/log/file.log & printf "%u" $!';
Also, as a note from one who has "been there before", anyone reading this might consider using not just nice but also ionice.
What does "%u" $! do exactly?
@Twigs & runs preceding code in the background, then printf is used for formatted output of the $! variable which contains the PID
Thank you so much, I tried all sorts of solutions I was finding to output to a log with an async PHP shell call and yours is the only one that's fulfilled all of the criteria.
Community

php-execute-a-background-process has some good suggestions. I think mine is pretty good, but I'm biased :)


Leo

In Linux, you can start an independent background process by appending an ampersand to the end of the command:

mycommand -someparam somevalue &

In Windows, you can use the "start" DOS command

start mycommand -someparam somevalue

On Linux, the parent can still block until the child has finished running if it's trying to read from an open file handle held by the subprocess (i.e. stdout), so this isn't a complete solution.
Tested start command on windows, it does not run asynchronously... Could you include the source where you got that information from?
@Alph.Dev please take a look to my answer if you're using Windows: stackoverflow.com/a/40243588/1412157
@mynameis Your answer shows exactly why the start command was NOT working. Its because of the /B parameter. I've explained it here: stackoverflow.com/a/34612967/1709903
Anonymous

the right way(!) to do it is to

fork() setsid() execve()

fork() forks, setsid() makes the current process a session leader (so it has no parent), and execve() replaces the calling process with the called one, so that the parent can quit without affecting the child.

 $pid=pcntl_fork();
 if($pid==0)
 {
   posix_setsid();
   pcntl_exec($cmd,$args,$_ENV);
   // child becomes the standalone detached process
 }

 // parent's stuff
 exit();

The problem with pcntl_fork() is that you are not supposed to use it when running under a web server, as the OP does (besides, the OP has tried this already).
philfreo

I used this...

/** 
 * Asynchronously execute/include a PHP file. Does not record the output of the file anywhere.  
 * Relies on the PHP_PATH config constant.
 *
 * @param string $filename  file to execute
 * @param string $options   (optional) arguments to pass to file via the command line
 */ 
function asyncInclude($filename, $options = '') {
    exec(PHP_PATH . " -f {$filename} {$options} >> /dev/null &");
}

(where PHP_PATH is a const defined like define('PHP_PATH', '/opt/bin/php5') or similar)

It passes in arguments via the command line. To read them in PHP, see argv.


Anton Pelykh

I also found Symfony Process Component useful for this.

use Symfony\Component\Process\Process;

$process = new Process('ls -lsa');
// note: in newer Symfony versions the command must be an array: new Process(['ls', '-lsa'])
// ... run process in background
$process->start();

// ... do other things

// ... if you need to wait
$process->wait();

// ... do things after the process has finished

See how it works in its GitHub repo.


Warning: if you don't wait, the process will be killed when the request ends
Perfect tool, that is based on proc_* internal functions.
Gordon Forsythe

The only way that I found that truly worked for me was:

shell_exec('./myscript.php | at now & disown')

'disown' is a Bash built-in and doesn't work with shell_exec() this way. I tried shell_exec("/usr/local/sbin/command.sh 2>&1 >/dev/null | at now & disown") and all I get is: sh: 1: disown: not found
felipe.zkn

You can also run the PHP script as a daemon or as a cronjob, using the shebang #!/usr/bin/php -q
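For the cron route, a hypothetical crontab entry (the path and schedule here are illustrative) would run the script periodically and discard its output:

```
*/5 * * * * /usr/bin/php -q /path/to/script.php > /dev/null 2>&1
```

The -q flag suppresses the HTTP header output of the CGI binary; it is unnecessary with the CLI binary.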


geocar

Use a named fifo.

#!/bin/sh
mkfifo trigger
while true; do
    read < trigger
    long_running_task
done

Then whenever you want to start the long-running task, simply write a newline (non-blocking) to the trigger file.

As long as your input is smaller than PIPE_BUF and it's a single write() operation, you can write arguments into the fifo and have them show up as $REPLY in the script.
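The trigger mechanism can be sketched self-contained (paths are illustrative, and the reader subshell stands in for the loop running long_running_task):

```shell
dir=$(mktemp -d)
mkfifo "$dir/trigger"
# reader: blocks on the fifo until a writer appears, then records what it received
( read REPLY < "$dir/trigger"; printf '%s' "$REPLY" > "$dir/got" ) &
# writer: a single short write (below PIPE_BUF) is atomic and wakes the reader
printf '%s\n' "start-task arg1" > "$dir/trigger"
wait
result=$(cat "$dir/got")
rm -rf "$dir"
```

The PHP side only needs the writer half, e.g. file_put_contents() on the fifo path, which returns as soon as the reader has consumed the line.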


LF00

Without using a queue, you can use proc_open() like this:

    $descriptorspec = array(
        0 => array("pipe", "r"),   // stdin
        1 => array("pipe", "w"),   // stdout
        2 => array("pipe", "w")    // stderr (here curaengine logs all its info)
    );
    $command = 'ping stackoverflow.com';
    $process = proc_open($command, $descriptorspec, $pipes);
    // close the pipes (and eventually proc_close($process)) once you no
    // longer need to read the child's output