Bash script: spawning multiple processes issues
At the very least, this line:

    `process $j &`

shouldn't have any backticks in it. You probably just want:

    process $j &

Besides that, you're overwriting your log files instead of appending to them; is that intended?
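A minimal sketch of the corrected loop, assuming the question's process command and loop variable j, with >> so each run appends to its log instead of overwriting it:

    for j in 1 2 3; do
        process "$j" >> "process_$j.log" 2>&1 &    # no backticks; >> appends
    done
    wait    # block until every background job has finished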

Categories : Bash

Have bash script execute multiple programs as separate processes
You can run a job in the background like this:

    command &

This allows you to start multiple jobs in a row without having to wait for the previous one to finish. If you start multiple background jobs like this, they will all share the same stdout (and stderr), which means their output is likely to get interleaved. For example, take the following script:

    #!/bin/bash
    # countup.sh
    for i in `seq 3`; do
        echo $i
        sleep 1
    done

Start it twice in the background:

    ./countup.sh &
    ./countup.sh &

And what you see in your terminal will look something like this:

    1
    1
    2
    2
    3
    3

But could also look like this:

    1
    2
    1
    3
    2
    3

You probably don't want this, because it would be very hard to figure out which output belonged to which job. The solution? Redirect stdout (and optionally stderr) of each job to its own file.
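For instance, a sketch that gives each background run of countup.sh its own log file so the output can never interleave:

    ./countup.sh > countup.1.log 2>&1 &
    ./countup.sh > countup.2.log 2>&1 &
    wait                             # wait for both jobs to finish
    cat countup.1.log countup.2.log  # each job's output, kept separate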

Categories : Python

Bash parallel process on multiple input files, one output file
Yes, your output file gets overwritten by each process. Make each script output to its own file, and once all the scripts are finished, concatenate the output:

    i=0
    for x in /home/moleculo/x* ; do
        ExtractOutCalls2.sh /home/Scripts/000 $x > OUT.$i &
        (( i++ ))
    done
    wait
    cat OUT.* > OUT
    rm OUT.*

You have to change the script to output to standard output instead of the file, or make it accept the name of the output file to be created.

Categories : Bash

Encrypting multiple files in bash script
Iterate over the positional arguments, using $# to check that at least one was received and "$@" (quoted) to retrieve each one in order:

    if (( $# == 0 )); then
        echo "crypten <file1> [ <file2> ... ]"
        echo " - crypten is a script to encrypt file using des3"
        exit
    fi
    for FNAME in "$@"; do
        openssl des3 -salt -in "$FNAME" -out "$FNAME.des3"
    done

Categories : Bash

multiple files as argument in bash script
The following will collect an array of inputFiles, and a single variable with the output file name:

    inputFiles=( )
    outputFile=
    while (( $# )); do
        if [[ $1 = -o ]]; then
            outputFile=$2; shift
        elif [[ $1 = -i ]]; then
            inputFiles+=( "$2" ); shift
        else
            inputFiles+=( "$1" )
        fi
        shift
    done

...then, you could do something like this:

    # redirect stdout to the output file, if one was given
    [[ $outputFile ]] && exec >"$outputFile"

    # loop over the input files and process each one
    for inputFile in "${inputFiles[@]}"; do
        process "$inputFile"
    done

Categories : Linux

Condense multiple files to a single BASH script
If you can just pass the script into the stdin of sqlplus you can do:

    sqlplus $user/$pass@$db << END
    <contents of sql script here>
    END
    (cat script.body; uuencode Report.zip Report.zip) | mail -s "Report" user@domain.com -- -f no-reply@domain.com

If you still want stdin (useful if it might ask for a password or something), and assuming sqlplus won't try anything with the script file, you can do:

    sqlplus $user/$pass@$db START <(cat << END
    <contents of sql script here>
    END
    )
    (cat script.body; uuencode Report.zip Report.zip) | mail -s "Report" user@domain.com -- -f no-reply@domain.com

Categories : Bash

Bash script - Start two background processes, wait for both to complete and get their output in variables
Commands run in the background run in a child process, and there is no way for a child process to modify a parameter (variable) in the parent process. So technically, what you're looking for is impossible. However, you can store the child's stdout (and, if you wish, stderr) in a file; you'll just have to make sure to give the file a unique name (see man mktemp, for example). After you wait for the background process to finish, you can read the temporary file into a parameter, and delete the file:

    tmp1=$(mktemp)
    tmp2=$(mktemp)
    command1 > "$tmp1" &
    command2 > "$tmp2" &
    wait
    OUTPUT1=$( < "$tmp1" ) && rm "$tmp1"
    OUTPUT2=$( < "$tmp2" ) && rm "$tmp2"

Categories : Bash

Run PBS script and post-process output within bash script
I don't believe PBSPro supports this, but TORQUE (another PBS derivative) has a -x option that you might be interested in. You can submit a job like this:

    qsub -I -x <executable>

This would run your job interactively and run the executable, with all of the output directed to your terminal, and the job will exit as soon as that executable terminates. You could then begin post-processing at that point. PBSPro may have similar functionality, but what I've described here is for TORQUE.
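A sketch of how the post-processing could follow, assuming TORQUE's qsub and hypothetical paths:

    #!/bin/bash
    # qsub -I -x blocks until the executable terminates,
    # so post-processing can start on the very next line
    qsub -I -x /path/to/simulation > run.out
    ./postprocess.sh run.out    # hypothetical post-processing step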

Categories : Bash

How to allow for a multi-threaded process to use as much ressource as multiple processes
All WebClient instances (in the same AppDomain) are limited to 2 active connections by default. You can change this programmatically by setting the System.Net.ServicePointManager.DefaultConnectionLimit property. This can also be configured using app.config. This question shows several options for changing the limit. Just make sure the web API doesn't block you for making too many requests!

Categories : C#

Multiple event handlers in one process? Or distribute among processes
Take a look at the 12-factor app, which recommends running separate applications for different independent tasks. However, I recommend hosting all tasks in the same code repository, to make it easier to do end-to-end testing and keep them in sync. Then, use a technique like Procfiles to launch multiple instances of different applications.

Categories : Node Js

sqlite3 for multiple processes? how does other process gets effected after a huge update in data?
No, 100 MB is probably borderline; I would consider a real database like PostgreSQL. Other processes will need to query the data to see what changed. If you have multiple writing processes, sqlite might not be a good choice, and if in the future you will have processes running on different machines, sqlite is definitely not a good choice.

Categories : Sqlite

bash: ensure one line at a time from stdout written to file when backgrounding multiple processes
If you're tied to using Bash here, how about:

- Write your output to 10 separate numbered csv files.
- Replace the time-interval wait with a wait for all csv files to be populated.
- Add a combine step after the 10th iteration to push all the csv files into your main accounts.csv (sketched below).

Otherwise, I would suggest using a language that has more library support for multiple threads.
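A sketch of the csv-per-job approach above, with a hypothetical generate_accounts worker standing in for whatever produces each job's rows:

    for i in {1..10}; do
        generate_accounts "$i" > "accounts_$i.csv" &   # one file per job
    done
    wait                                     # replaces the time-interval wait
    cat accounts_{1..10}.csv > accounts.csv  # combine after the 10th job
    rm accounts_{1..10}.csv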

Categories : Bash

How can I get process.env.PORT from a bash script for cloud9?
$PORT gives you the variable PORT from the current shell environment. It could be different for the process. To get the variables from the started server's environment you must know its pid. Then you can run:

    cat /proc/pid/environ
    # or, more readably:
    xargs --null --max-args=1 echo < /proc/pid/environ

Replace pid with the value of the server process's pid. You can then extract the PORT variable from the above.
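Since the entries in /proc/pid/environ are NUL-separated, one way to pull out just PORT (assuming the pid is stored in $pid):

    tr '\0' '\n' < /proc/$pid/environ | grep '^PORT='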

Categories : Node Js

How to process files concurrently with bash?
In bash you can easily run part of the script in a different process just by using '(' and ')'. If you add &, the parent process will not wait for the child. So in fact you use ( command1; command2; command3; ... ) &:

    while ...
    do
        (
            your script goes here, executed in a separate process
        ) &
        CHILD_PID=$!
    done

Also, $! gives you the PID of the child process. What else do you need to know? When you reach the k processes launched, you need to wait for the others. This is done using wait <PID>:

    wait $CHILD_PID

If you want to wait for all of them, just use wait. This should be sufficient for you to implement the system.
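Putting those pieces together, a sketch that never runs more than k children at once, with a hypothetical process_file command and input glob:

    #!/bin/bash
    k=4                          # maximum number of concurrent children
    pids=()
    for f in *.dat; do           # hypothetical input files
        ( process_file "$f" ) &  # each file handled in its own subshell
        pids+=($!)
        if (( ${#pids[@]} >= k )); then
            wait "${pids[0]}"        # block on the oldest child
            pids=("${pids[@]:1}")    # drop it from the list
        fi
    done
    wait                         # wait for whatever is still running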

Categories : Bash

dump files from bash script in different directory from where python script ran it
You should change directory within the same command:

    cmd = "/path/to/executable/executable"
    outputdir = "/path/to/output/"
    subprocess.call("cd {} && {}".format(outputdir, cmd), shell=True)

Categories : Python

Can One Bash Script Launch Multiple Other Bash Scripts?
Run them in the background, just like you would in an interactive shell:

    command1 &
    command2 &
    command3 &
    wait    # Wait for all background commands to finish

The commands can be just about anything, not just other bash scripts.

Categories : Linux

Bash script that move files from one directory with lots of files to a month folder
Something like this?

    DEBUG=echo
    cd ${directory_with_files}
    for file in * ; do
        dest=$(stat -c %y "$file" | head -c 7)
        mkdir -p $dest
        ${DEBUG} mv -v "$file" $dest/$(echo "$file" | sed -e 's/.* \(.*\)/\1/')
    done

DISCLAIMER: test this on a safe copy of your files. I won't be responsible for any loss of data ;-)

Categories : Bash

Trying to capture stdout of background process in bash script
Jenkins most likely looks at the process exit code to determine whether tests failed. This is what all Unix tools do. There are multiple ways of doing this. If your test files output something like "FAIL" instead of properly returning an exit code, you can do:

    #!/bin/bash
    (
        ruby /root/selenium-tests/test/test1.rb &
        ruby /root/selenium-tests/test/test2.rb &
        ruby /root/selenium-tests/test/test3.rb &
        wait
    ) > log
    ! grep "FAIL" log
    exit $?   # <- happens implicitly at the end of the script, and can be left out

In this case, grep finding "FAIL" will cause the script to fail, and Jenkins to detect the failure. The more correct way, if your scripts return proper exit codes, is your method but without relying on job control (which by default is turned off in non-interactive shells).
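A sketch of that exit-code approach, collecting each test's PID and failing if any child failed:

    #!/bin/bash
    pids=()
    ruby /root/selenium-tests/test/test1.rb & pids+=($!)
    ruby /root/selenium-tests/test/test2.rb & pids+=($!)
    ruby /root/selenium-tests/test/test3.rb & pids+=($!)

    status=0
    for pid in "${pids[@]}"; do
        wait "$pid" || status=1   # wait returns each child's exit code
    done
    exit $status                  # non-zero is what Jenkins sees as failure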

Categories : Bash

Bash Script to process data containing input string
Kevin, you could try the following:

    #!/bin/bash
    directory='/home'
    tag=$1
    for files in $directory/*$tag*
    do
        if [ -f $files ]
        then
            # do your stuff
            echo $files
        fi
    done

where directory is your directory name (you could pass it as a command-line argument too) and tag is the search term you are looking for in a filename.

Categories : Linux

Provide arguments (via bash script) to a process that is already running
You can use an anonymous pipe:

    # open a new file descriptor (3) and provide it as stdin to myapp
    exec 3> >(run myapp)

    # do some stuff
    # ...

    # write arguments to the pipe
    echo "arg1 arg2 -arg3 ..." >&3

The advantage over a named pipe is the fact that you don't need to worry about cleaning up and you won't need any write permissions.

Categories : Bash

Files different from running process and heroku run bash
When you issue heroku run bash, a new dyno is created just for this one-off, and you are given access to it. Any file you create will "disappear" once you log off, since the Heroku file-system is ephemeral. That means the file-system is restored to its native state whenever a new dyno is created, or a dyno is rebooted. The "native" state is what's in your slug -- the "compiled" version of your app -- whatever is built by the build-pack after you "git push" to Heroku. If you want a read-only file available to all your dynos, either put it in your slug (for example: by including it in git, but also by using a different build-pack), or put it somewhere all your dynos can access (like a shared database, a Redis/Memcache instance, or most logically: S3).

Categories : Bash

How to create a bash script that will lower case all files in the current folder, then search/replace in all files?
Something like this should work:

    #!/bin/bash
    for file in *.html
    do
        lowercase=`echo $file | tr '[A-Z]' '[a-z]'`
        mv "$file" "$lowercase"
        for f in *.html
        do
            sed -i "s/$file/$lowercase/g" "$f"
        done
    done

Change *.html to *.<extension> if you're working with something other than html files.

Categories : Bash

C++ executing a bash script which terminates and restarts the current process
Use exec() instead of system(). It will replace your process with the new one. Note there is a significant difference in how exec() is called and how it behaves: system() passes its string argument to the system shell to run, while exec() actually executes an executable file, and you need to supply the arguments to the process one at a time, instead of letting the shell parse them apart for you.

Categories : C++

Passing the environment variables to the launched process bash script
First of all you need to source your shell script in order for the env variable to be set, and secondly you need to include quotes in the getenv call:

    char * env_var = getenv("SAMPLE_VAR");
    if (env_var != NULL)
        printf("var set ");
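The bash side might look like this, with a hypothetical setup script and binary:

    # setup.sh contains:  export SAMPLE_VAR="some value"
    source ./setup.sh   # run in the current shell so the export takes effect
    ./myprogram         # hypothetical binary; getenv("SAMPLE_VAR") now succeeds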

Categories : C++

Bash script to zip all the files that begin the same characters
The key part is the find command. It only selects files in the current directory with a match in the first three characters of the last part of the path (the filename):

    find . -maxdepth 1 -type f -regex '.*/fil.*' -print

And now simply provide that output to your favorite compression tool. For a bzip2 file using tar:

    tar -cjf myfile.tbz2 $(find . -maxdepth 1 -type f -regex '.*/fil.*' -print)

For a zip file using 7z:

    7z a myfile.zip $(find . -maxdepth 1 -type f -regex '.*/fil.*' -print)

Categories : Bash

Bash Script as OSX app wait for dropped files
You may have to make some small adjustments to your script in order to make it compatible with the manner in which a Platypus droplet handles dropped files. Here's the breakdown of how the droplet works:

- When you first launch the application, the script will run once.
- When you drop a file onto the droplet, the script will run taking the dropped file as arguments.

So the solution to the problem would be to refactor your script to handle the passed arguments. You can do this in bash by using $# for the total number of arguments, and $1, $2, $3… etc. for the individual arguments. You can also use $* to use all arguments. Here's an example script that will output the file names (in an applescript alert) of the passed files that the droplet sees. Note the if statement that exits if there are no arguments.
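A simplified sketch of such a script, echoing the names rather than raising an AppleScript alert:

    #!/bin/bash
    # On the initial launch Platypus runs the script with no arguments
    if (( $# == 0 )); then
        exit 0
    fi
    # Each dropped file arrives as a positional argument
    for f in "$@"; do
        echo "Dropped file: $f"
    done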

Categories : Osx

monitor a directory and pull new files as they appear - bash script
Two products come to mind depending on how you plan to approach the solution. I personally use Splunk on a variety of platforms (Windows & Linux servers/local Linux & OSX dev environments). It is a real-time log aggregator that features an API and the ability to query. Even if this doesn't solve your problem, the free version has some very robust features that you should consider: http://www.splunk.com The second approach would be synchronization of your web directories using something like RSync. I've used RSync on Linux boxes and always appreciated what it can do. I even see it now has a Windows port: https://www.itefix.no/i2/cwrsync

Categories : Bash

How to access grid files by coordinate args in a bash script?
Okay, figured it out myself. It was pretty simple, adding only 3 lines to the code above:

    # Calculates the line number and column.
    LINE=$(bc <<< "scale=0;($Y-$y11+7)/1")
    COLN=$(bc <<< "scale=0;($X-$x11+1)/1")

    # Prints the Z coordinate at X=COLN and Y=LINE
    awk -v line=$LINE 'NR == line { print $0 }' $INPUT | cut -f $COLN -d " "

Simply calculate the position of the elevation data value by subtracting the entered $X and $Y by the starting coordinates x11 and y11 as written above. The whole working code is:

    #!/bin/bash
    INPUT=$1

    # Parses starting coordinates.
    x11=$(awk '$1 == "xllcorner" { print $2 }' $INPUT)
    y11=$(awk '$1 == "yllcorner" { print $2 }' $INPUT)

    # Gets requested coordinates from args.
    X=$2
    Y=$3

    # Calculates the line number and column.
    LINE=$(bc <<< "scale=0;($Y-$y11+7)/1")
    COLN=$(bc <<< "scale=0;($X-$x11+1)/1")

    # Prints the Z coordinate at X=COLN and Y=LINE
    awk -v line=$LINE 'NR == line { print $0 }' $INPUT | cut -f $COLN -d " "

Categories : Bash

How to delete dated txt files as part of a bash script under linux
Not 100% accurate, but maybe this is good enough for the case that you describe:

    find ~/cron/obnam -type f -mtime +3 -name 'test-*.txt' -exec rm -v {} + >>$LOGFILE 2>&1

If you have some corner cases that this does not handle well, please drop a comment and I will amend.

Categories : Bash

Bash Script to Find a Symbol in a Substantial Amount of .jar Files
The jar usage seems OK, but there are too many variables, arrays, and pipes that could have been avoided. The following should work for you:

    for i in `find ${PWD} -name "*.jar"`; do
        jar -tf $i | grep -qs $1 && echo $i
    done

This would list the jar file(s) containing the symbol.

Categories : Bash

Passing N files as arguments (also in random positions) in bash script
You want to iterate over your list of filenames with shift after you get your arguments:

    shift $(( OPTIND - 1 ))
    while [ -f "$1" ]
    do
        # do whatever you want with the filename in $1
        shift
    done

Categories : Bash

bash script to get command values in two files and write is a pattern to new file
To get the common lines, you can do something simple like:

    awk 'NR==FNR{x[$1]=1} NR!=FNR && x[$1]' file1 file2

That leaves you with a list, and you need to group the elements into ranges. That's a simple awk script:

    awk 'NR==1 {s=l=$1; next}
         $1!=l+1 {if(l == s) print l; else print s ":" l; s=$1}
         {l=$1}
         END {if(l == s) print l; else print s ":" l; s=$1}'

Putting it all together:

    awk 'NR==FNR{x[$1]=1} NR!=FNR && x[$1]' file1 file2 |
        awk 'NR==1 {s=l=$1; next}
             $1!=l+1 {if(l == s) print l; else print s ":" l; s=$1}
             {l=$1}
             END {if(l == s) print l; else print s ":" l; s=$1}'

Explanation: we keep track of the start of the current range and the last value we saw.

    NR==1 {s=l=$1; next}

NR==1 only runs on the first line. It will always be the first element of a range.

Categories : Bash

Breaking a big file down into smaller files on special char in bash script
Your post appears to be cut off, but from what I gather this script should help you get started:

    awk 'BEGIN{
        FS="|"
        y=1
        outputFile="/tmp/outfile"
    }{
        for (i=1; i<=NF; i++) {
            tmpoutput=tmpoutput" "$i
            if (y == 1000) {
                y=1
                print tmpoutput > outputFile
                tmpoutput=""
            } else {
                y++
            }
        }
    }END{
        print tmpoutput > outputFile
    }' inputFile

Categories : Bash

What is the simplest method to join columns from variable number of files using bash script?
You can do this by combining two joins.

    $ join -o '0,1.3,2.3' -a1 -a2 -e 'NA' file1 file2
    Adam a1 a2
    Bills b1 NA
    Carol c1 c2
    Dean d1 NA
    Evan NA e2

First join the first two files together, using -a1 -a2 to make sure lines that are only present in one file are still printed. -o '0,1.3,2.3' controls which fields are output and -e 'NA' replaces missing fields with NA.

    $ join -o '0,1.3,2.3' -a1 -a2 -e 'NA' file1 file2 | join -o '0,1.2,1.3,2.3' -a1 -a2 -e 'NA' - file3
    Adam a1 a2 NA
    Bills b1 NA b3
    Carol c1 c2 c3
    Dean d1 NA NA
    Evan NA e2 e3

Then pipe that join to another one which joins the third file. The trick here is passing in - as the first file name, which tells join to use stdin as the first file. For an arbitrary number of files, the same idea can be applied recursively, as sketched below.
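A sketch of that recursive idea, assuming GNU join (whose -o auto pads missing fields with the -e string so the column count stays consistent):

    #!/bin/bash
    # Usage: joinall file1 file2 [file3 ...]
    result=$(mktemp)
    join -a1 -a2 -e 'NA' -o auto "$1" "$2" > "$result"
    shift 2
    for f in "$@"; do
        tmp=$(mktemp)
        join -a1 -a2 -e 'NA' -o auto "$result" "$f" > "$tmp"
        mv "$tmp" "$result"    # accumulated result grows one file at a time
    done
    cat "$result"
    rm -f "$result"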

Categories : Shell

bash script loop multiple variables
You can use arrays for that:

    A=({a..z})
    B=({1..26})
    for (( I = 0; I < 26; ++I )); do
        echo "/dev/sd${A[I]} /disk${B[I]} ext4 noatime 1 1" >> test
    done

Example output:

    /dev/sda /disk1 ext4 noatime 1 1
    ...
    /dev/sdz /disk26 ext4 noatime 1 1

Update: as suggested, you could just use the index for the values of B:

    A=('' {a..z})
    for (( I = 1; I <= 26; ++I )); do
        echo "/dev/sd${A[I]} /disk${I} ext4 noatime 1 1" >> test
    done

Also you could do some formatting with printf to get better output and cleaner code:

    A=('' {a..z})
    for (( I = 1; I <= 26; ++I )); do
        printf '/dev/sd%s /disk%d ext4 noatime 1 1\n' "${A[I]}" "$I"
    done >> test

Categories : Bash

How can I avoid multiple starting of a bash script?
There are quick-n-dirty solutions that use ps with grep (don't do this). It is better to use a lock file as a "mutex". A nice way of doing this is by using a directory as a lock file (http://mywiki.wooledge.org/BashFAQ/045). I would also suggest taking a look at http://mywiki.wooledge.org/ProcessManagement#How_do_I_make_sure_only_one_copy_of_my_script_can_run_at_a_time.3F , which mentions the use of setlock (http://cr.yp.to/daemontools/setlock.html), a tool that abstracts the lock-file handling for you.
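A minimal sketch of the directory-as-lock idea from BashFAQ/045; mkdir is atomic, so only one instance can take the lock:

    #!/bin/bash
    lockdir=/tmp/myscript.lock         # hypothetical lock location
    if mkdir "$lockdir" 2>/dev/null; then
        trap 'rmdir "$lockdir"' EXIT   # release the lock on any exit
        # ... the actual work goes here ...
    else
        echo "Another instance is already running." >&2
        exit 1
    fi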

Categories : Bash

Bash script being passed one parameter, but I want multiple
Try:

    eval set "$@"

or (safer if it might begin with shell options):

    eval set -- "$@"

After that you should be able to use "$@". As with all evals, this has all kinds of dangers. :-) Example of a danger:

    $ set '`ls`'
    $ eval set -- "$@"
    $ echo $#
    28
    $ echo "$1"
    COPYRIGHT

Edit: here's a protect shell function and an example of using it. I am not sure I protected against everything, but it gets the two obvious cases:

    #! /usr/bin/env bash
    protect() {
        local quoted
        quoted="${1//\$/\\\$}"
        quoted="${quoted//\`/\\\`}"
        # use printf instead of echo in case $1 is (eg) -n
        printf %s "$quoted"
    }

    foo=expanded
    set -- '-e -n $foo `ls` "bar baz"'
    eval set -- "$@"
    echo "without protect, $# is $#:"
    for arg do echo "[$arg]"; done

    set -- '-e -n $foo `ls` "bar baz"'
    eval set -- $(protect "$1")
    echo "with protect, $# is $#:"
    for arg do echo "[$arg]"; done

Categories : Bash

Forking two processes results in multiple processes
The problem with your code is that after the child processes have slept, they return from fork_off and repeat everything the parent is doing.

    void fork_off(int * proc_t, int * proc_i)
    {
        int f = fork();
        if (f == 0) {
            fprintf(stderr, "Proc %d started ", getpid());
            usleep(5000000);
            exit(0);    /* exit() closes the entire child process
                         * instead of simply returning from the function */
        } else if (f > 0) {
            /* Make sure there isn't an error being returned.
             * Even though I've never seen it happen with fork(2),
             * it's a good habit to get into */
            proc_t[*proc_i] = f;
            (*proc_i)++;
        } else {
            /* Adding to the aforementioned point, handle
             * the fork(2) error case (f < 0) here */
        }
    }

Categories : C

Bash script to find file older than X days, then subsequently delete it, and any files with the same base name?
I doubt this can be done cleanly in a single pass. Your best bet is to use -mtime or a variant to collect names and then use another find command to delete files matching those names.

UPDATE

With respect to your comment, I mean something like:

    # find basenames of old files
    find .... -printf '%f\n' | sort -u > oldfiles
    for file in $(<oldfiles); do
        find . -name "$file" -exec rm {} \;
    done

Categories : Bash

Writing a bash script to push local files to Amazon EC2 and S3 - needs proper owner/permissions
After everything is done, you can put this in your script to execute commands remotely (e.g. chown, chmod and the like):

    ssh -l root -i "${BUILD_DIR}mykey.pem" $SERVER_IP "chown whateveruser:whatevergroup $SERVER_DIR"

Syntax:

    ssh -l root -i key $SERVER_IP "your command"

Categories : Bash


