I've got a PHP script that needs to invoke a shell script but doesn't care at all about the output. The shell script makes a number of SOAP calls and is slow to complete, so I don't want to slow down the PHP request while it waits for a reply. In fact, the PHP request should be able to exit without terminating the shell process.
I've looked into the various exec(), shell_exec(), pcntl_fork(), etc. functions, but none of them seem to offer exactly what I want. (Or, if they do, it's not clear to me how.) Any suggestions?
Consider using nice and ionice to prevent the shell script from overwhelming your system (e.g. /usr/bin/ionice -c3 /usr/bin/nice -n19).
If it "doesn't care about the output", couldn't the exec call for the script be made with & to background the process?
EDIT - incorporating what @AdamTheHut commented on this post, you can add this to a call to exec:
" > /dev/null 2>/dev/null &"
That will redirect both stdout (the first >) and stderr (2>) to /dev/null and run the process in the background.
There are other ways to do the same thing, but this is the simplest to read.
An alternative to the above double-redirect (note that &> is a Bash shorthand, not POSIX sh):
" &> /dev/null &"
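A minimal sketch of the redirect described above, run directly in a shell (a hypothetical sleep stands in for the slow SOAP script). stdout (>) and stderr (2>) both go to /dev/null, and the trailing & backgrounds the job, so the caller gets control back immediately instead of waiting:

```shell
# Time how long launching a 2-second "slow script" takes when it is
# backgrounded with all output discarded - the caller is not blocked.
start=$(date +%s)
sleep 2 > /dev/null 2>/dev/null &
end=$(date +%s)
echo "exec returned after $((end - start)) seconds"
```

The same string appended to a PHP exec() call behaves identically, because exec() only blocks while the shell command itself is in the foreground.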
I used at for this, as it really starts an independent process.
<?php
`echo "the command"|at now`;
?>
If the user PHP runs as (e.g. www-data) doesn't have the permissions to use at and you can't configure it to, you can try to use <?php exec('sudo sh -c "echo \"command\" | at now" ');. If command contains quotes, see escapeshellarg to save yourself headaches.
Another option is echo "sudo command" | at now after commenting www-data out in /etc/at.deny.
Note that the at package (and command) is not installed by default on some Linux distributions, and the atd service needs to be running. This solution may be hard to understand because of the lack of explanation.
To all Windows users: I found a good way to run an asynchronous PHP script (actually it works with almost everything).
It's based on the popen() and pclose() functions, and it works well on both Windows and Unix.
function execInBackground($cmd) {
    if (substr(php_uname(), 0, 7) == "Windows") {
        // start /B launches the command without a new window;
        // pclose(popen(...)) returns without waiting for it to finish.
        pclose(popen("start /B " . $cmd, "r"));
    } else {
        // Discard output and background the job so exec() returns at once.
        exec($cmd . " > /dev/null &");
    }
}
Original code from: http://php.net/manual/en/function.exec.php#86329
You don't execute myscript.js directly; instead, you write node myscript.js. That is: node is the executable, myscript.js is the, well, script to execute. There's a huge difference between an executable and a script.
The same applies to php artisan; just leaving a comment here so there's no need to trace why it won't work with such commands.
On linux you can do the following:
$cmd = 'nohup nice -n 10 php -f php/file.php > log/file.log & printf "%u" $!';
$pid = shell_exec($cmd);
This will execute the command at the command prompt and then just return the PID, which you can check is > 0 to ensure it worked.
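A sketch of what the $cmd string above does, run directly in a shell (sleep stands in for php -f php/file.php): nohup detaches the job from the terminal, & backgrounds it, and $! holds the PID of the most recent background process, which printf prints for shell_exec() to capture.

```shell
# Launch a background job and capture its PID, exactly as the PHP
# snippet's command string does via shell_exec().
pid=$(nohup sleep 1 > /dev/null 2>&1 & printf "%u" $!)
echo "started background job with PID $pid"
```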
This question is similar: Does PHP have threading?
Arguments can be passed in the action=generate var1_id=23 var2_id=35 gen_id=535 segment. Also, since the OP asked about running a shell script, you don't need the PHP-specific portions. The final command would be: $cmd = 'nohup nice -n 10 /path/to/script.sh > /path/to/log/file.log & printf "%u" $!';
Consider using not only nice but also ionice.
& runs the preceding command in the background, then printf prints the formatted value of the $! variable, which contains the PID of the last background process.
php-execute-a-background-process has some good suggestions. I think mine is pretty good, but I'm biased :)
On Linux, you can start an independent background process by appending an ampersand at the end of the command:
mycommand -someparam somevalue &
In Windows, you can use the "start" DOS command
start mycommand -someparam somevalue
When I use the start command on Windows, it does not run asynchronously... Could you include the source where you got that information?
You need to add the /B parameter. I've explained it here: stackoverflow.com/a/34612967/1709903
The right way(!) to do it is to fork(), setsid(), and execve(). fork forks, setsid tells the current process to become a session leader (detached from its parent), and execve tells the calling process to be replaced by the called one, so that the parent can quit without affecting the child.
$pid = pcntl_fork();
if ($pid == 0) {
    // child becomes the standalone detached process
    posix_setsid();
    pcntl_exec($cmd, $args, $_ENV);
}
// parent's stuff
exit();
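A shell-level sketch of the same idea: setsid(1) (from util-linux, an assumption about your system) starts a command in a new session with no controlling terminal, much like the fork()/setsid()/execve() sequence, so the parent can exit without affecting the child (sleep stands in for the real command).

```shell
# Run a command in its own session; the invoking shell can exit freely.
setsid sleep 1 > /dev/null 2>&1 &
echo "parent keeps running as PID $$"
```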
I used this...
/**
* Asynchronously execute/include a PHP file. Does not record the output of the file anywhere.
* Relies on the PHP_PATH config constant.
*
* @param string $filename file to execute
* @param string $options (optional) arguments to pass to file via the command line
*/
function asyncInclude($filename, $options = '') {
exec(PHP_PATH . " -f {$filename} {$options} >> /dev/null &");
}
(where PHP_PATH
is a const defined like define('PHP_PATH', '/opt/bin/php5')
or similar)
It passes in arguments via the command line. To read them in PHP, see argv.
I also found Symfony Process Component useful for this.
use Symfony\Component\Process\Process;
$process = new Process('ls -lsa'); // newer Symfony versions expect an array instead: new Process(['ls', '-lsa'])
// ... run process in background
$process->start();
// ... do other things
// ... if you need to wait
$process->wait();
// ... do things after the process has finished
See how it works in its GitHub repo.
See also the proc_* family of internal functions.
The only way that I found that truly worked for me was:
shell_exec('./myscript.php | at now & disown')
I tried shell_exec("/usr/local/sbin/command.sh 2>&1 >/dev/null | at now & disown") and all I get is: sh: 1: disown: not found
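A minimal sketch of why "disown: not found" appears: disown is a Bash builtin, not a POSIX utility, and PHP's shell_exec() runs its command through /bin/sh. Invoking bash explicitly makes disown available.

```shell
# Under bash, disown removes the background job from the job table,
# so it survives the shell's exit; plain sh has no such builtin.
bash -c 'sleep 1 > /dev/null 2>&1 & disown; echo "detached under bash"'
```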
You can also run the PHP script as a daemon or cronjob: #!/usr/bin/php -q
Use a named fifo.
#!/bin/sh
mkfifo trigger
while true; do
    read < trigger
    long_running_task
done
Then, whenever you want to start the long-running task, simply write a newline (non-blocking) to the trigger file.
As long as your input is smaller than PIPE_BUF
and it's a single write()
operation, you can write arguments into the fifo and have them show up as $REPLY
in the script.
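A runnable sketch of the trigger fifo above, in a temporary directory so nothing clashes. A single reader iteration stands in for the while loop; the writer's one small write() (below PIPE_BUF) arrives intact in the read reply.

```shell
# Set up a fifo, start one reader iteration in the background, then
# trigger it with a single small write.
dir=$(mktemp -d)
mkfifo "$dir/trigger"
( read reply < "$dir/trigger"; echo "task started with arg: $reply" ) &
echo "hello" > "$dir/trigger"   # opening the fifo blocks until the reader opens it
wait
rm -r "$dir"
```

In bash, a bare read fills $REPLY instead of a named variable, which is what the answer refers to.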
Without using a queue, you can use proc_open() like this:
$descriptorspec = array(
    0 => array("pipe", "r"),
    1 => array("pipe", "w"),
    2 => array("pipe", "w")  // here CuraEngine logs all its info to stderr
);
$command = 'ping stackoverflow.com';
$process = proc_open($command, $descriptorspec, $pipes);
Note that with &> /dev/null &, Xdebug won't generate logs. Check stackoverflow.com/questions/4883171/…
Redirecting to /dev/null is the better practice, as writing to closed FDs causes errors, whereas attempts to read or write /dev/null simply do nothing, silently.