Analog of the grep -o command in PowerShell
One way is:

```powershell
$a = '<a id="tm_param1_text1_item_1" class="tm_param1param2_param3 xxx_zzz qqq_rrrr_www_vv_no_empty" >eeee <span id="ttt_xxx_zzz">0</span></a>'
$a -match '>([0-9])<'   # returns True and populates the $matches automatic variable
$matches[1]             # returns 0
```

Categories : Powershell

Grep multiple lines from different files, and print them to a xml
```bash
#!/bin/bash
# This function gets a field from a ".desktop" file
get_variable() {
    FILE=$1
    FIELD=$2
    # grep the first line that starts with the field and cut it after the '=' char
    grep -m 1 -e "^$FIELD" "$FILE" | cut -f 2 -d"="
}

# Get all the .desktop files in the current folder
# and read the specified fields from each of them
for file in *.desktop; do
    name=$(get_variable "$file" "Name")
    type=$(get_variable "$file" "Type")
    icon=$(get_variable "$file" "Icon")
    url=$(get_variable "$file" "URL")
    last=$(get_variable "$file" "X-KDE-LastOpenedWith")
    # echo the XML element (the inner quotes must be escaped)
    echo "<action label=\"$name\" icon=\"$icon\" exec=\"$last $url\"/>"
done
```
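As a sketch of how the field extraction behaves, here is a minimal, hypothetical .desktop file (name and contents invented for illustration) run through the same grep/cut pipeline:

```shell
# Hypothetical demo file
cat > sample.desktop <<'EOF'
[Desktop Entry]
Name=My App
Type=Application
EOF

# Same pipeline as the get_variable helper above
get_variable() {
    grep -m 1 -e "^$2" "$1" | cut -f 2 -d"="
}

get_variable sample.desktop "Name"   # prints: My App
get_variable sample.desktop "Type"   # prints: Application
```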

Categories : Bash

Grep multiple strings on large files
In response to the comments above I ran another test. I took a file produced by `md5deep -rZ` (size: 319 MB) and randomly selected 100 MD5 checksums (each 32 characters long).

```
$ time egrep '100|fixed|strings' md5 > /dev/null
real    0m16.888s
user    0m16.714s
sys     0m0.172s

$ time fgrep -f 100_lines_patt_file md5 > /dev/null
real    0m1.379s
user    0m1.220s
sys     0m0.158s
```

That is nearly 15 times faster than egrep. So when you only see a 0.3 s improvement between egrep and fgrep, IMHO that means your I/O is too slow: the egrep run is bounded not by CPU or memory but by I/O, and that is why fgrep gives you no speed-up.
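The difference is easy to reproduce at a small scale. This sketch (data made up) shows the two equivalent invocations being timed above, fixed strings from a pattern file versus an ERE alternation:

```shell
# Toy data standing in for the 319 MB md5 file
printf 'aaa\nbbb\nccc\n' > data.txt
# Pattern file of fixed strings, one per line
printf 'aaa\nccc\n' > patterns.txt

grep -F -f patterns.txt data.txt   # fixed-string matching (fgrep)
grep -E 'aaa|ccc' data.txt         # equivalent ERE alternation (egrep)
```

Both print the same two matching lines; with 100 patterns and a large input, the fixed-string version avoids the regex engine entirely, which is where the speed-up comes from.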

Categories : Bash

How to use grep command on multiple files at same time?
Do yourself a favor and install ack. You'll rarely use grep after that. ack:

- automatically recurses into subdirectories
- treats the search parameter as a Perl regular expression
- has a rich set of command-line options
- is more powerful and DWIMmy than grep

Install it by installing the App::Ack distribution from CPAN.

Categories : Perl

Grep for keyword over multiple directories and move found files
Try

```bash
for file in `grep -rl ...`
```

The -l (--files-with-matches) option prints only the file names, so you should end up with the collection of all files that match, exactly as you want.

EDIT: To deal with spaces in file names, try this:

```bash
OLDIFS="$IFS"
IFS=$'\n'
FILES=`grep -rl ...`
IFS="$OLDIFS"
for file in $FILES
do
    ...
done
```
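An alternative sketch that avoids fiddling with IFS altogether is to pipe grep -l into a read loop (directory and file names here are invented):

```shell
mkdir -p srcfiles
echo keyword > "srcfiles/file with spaces.txt"
echo other   > "srcfiles/no-match.txt"

# -l prints file names only; IFS= read -r keeps spaces intact
grep -rl "keyword" srcfiles | while IFS= read -r file; do
    echo "found: $file"
done
```

Only the file that actually contains the pattern is reported, spaces and all.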

Categories : Linux

Merge Files over multiple folders using PowerShell
Assuming that all the CSVs have the same columns, something like this should work:

```powershell
$root = 'C:\path\to\Root_dir'
$csv  = 'C:\path\to\output.csv'

Get-ChildItem $root -Filter 'message.csv' -Recurse |
    % { Import-Csv $_.FullName } |
    Export-Csv $csv -NoTypeInformation
```

To remove duplicates from the output, try this instead:

```powershell
$root = 'C:\path\to\Root_dir'
$csv  = 'C:\path\to\output.csv'

Get-ChildItem $root -Filter 'message.csv' -Recurse |
    % { Import-Csv $_.FullName } |
    ConvertTo-Csv -NoTypeInformation |
    select -Unique |
    Out-File $csv
```

Categories : Powershell

File System Monitor for multiple files at once in PowerShell
A simple option for something like this would be a scheduled task that runs a PowerShell script every 3-5 minutes. The script it calls could be as simple as:

```powershell
$folder      = "D:\Logs\temp"
$destination = "D:\Logs\XML"
$filter      = "*.xml"
Move-Item ($folder + "\" + $filter) $destination -Force
```

This moves only the files that match $filter out of the starting $folder and puts them all in the $destination folder. See http://windows.microsoft.com/en-GB/windows7/schedule-a-task for setting up the task.

Categories : Powershell

Delete multiple files or folders from a CSV file that contain more than one columns (Powershell)
First I would remove the type information from your CSV like so:

```powershell
Import-Module ActiveDirectory
Get-ADUser -SearchBase "OU=Marked for Deletion,OU=Disable Users,DC=******,DC=com" -Filter * -Property * |
    Select-Object -Property homeDirectory,profilePath |
    Export-CSV -Path .\Remove.csv -NoTypeInformation
```

Then for your delete code I would use this:

```powershell
Import-Csv "C:\lab\remove.csv" | % {
    Remove-Item -Path $_.homeDirectory -Force -Recurse
    Remove-Item -Path $_.profilePath -Force -Recurse
}
Write-Host -ForegroundColor Yellow "Delete action complete"
```

The problem with your code is that you are not looping through a column; you are looping by line and then doing it twice. To do it your way you would need to split each line at the comma.

Categories : Powershell

PowerShell Clear-ADAccountExpiration not performing in the same way as the manual method
From the documentation of the accountExpires attribute:

> The date when the account expires. This value represents the number of 100-nanosecond intervals since January 1, 1601 (UTC). A value of 0 or 0x7FFFFFFFFFFFFFFF (9223372036854775807) indicates that the account never expires.

http://msdn.microsoft.com/en-us/library/windows/desktop/ms675098(v=vs.85).aspx

Categories : Powershell

Add Powershell Snapin for Powershell Module and Import Multiple Times
You might want to try specifying the module required by your own module through a module manifest (.psd1). See RequiredModules here.

Categories : Powershell

Run Ant files when performing Build and Deploy Worklight 5.x.x Application
I believe you can add an extra project builder to your application that invokes the Ant task you need. In Eclipse, take a look at Project -> Properties -> Builders -> New. You may find the article "Ant buildfiles as project builders" a useful link.

Categories : Eclipse

How to grep files under a pattern path
From the UNIX philosophy:

> Write programs that do one thing and do it well. Write programs to work together.

I don't like the GNU extension for recursive directory searching. The tool find does a much better job, has a cleaner syntax with more options, and doesn't break the philosophy!

```shell
$ find foo/*/VIEW -name "*.groovy" -exec grep method {} \;
```
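For many files, a variant worth knowing is `find ... -print0 | xargs -0`, which batches files into fewer grep invocations. The directory layout below is invented to mirror the foo/*/VIEW example:

```shell
mkdir -p foo/app/VIEW
echo 'def method() { }' > foo/app/VIEW/page.groovy

# -print0/-0 keep odd file names safe; -l lists matching files
find foo/*/VIEW -name "*.groovy" -print0 | xargs -0 grep -l "method"
```

With `-exec ... \;` grep is spawned once per file; with xargs it is spawned once per batch, which matters on large trees.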

Categories : Unix

Using grep with perl to find files
How about

```shell
ls *keyword*
```

If you are trying to do this within Perl:

```perl
@files = glob("*keyword*");
for $file (@files) {
    print "$file\n";
}
```

Note that grep in Perl is a core function, but it has nothing to do with regular expressions. It is more like an SQL WHERE: it filters an array to a subarray by applying a function (which may or may not be a regex) to each element. If glob expressions are not good enough, you can do

```perl
@files = grep /(fun[kK]y_)keyword?/, glob("*");
```

Categories : Regex

grep -f on files in a zipped folder
If you need multiline output, better use zipgrep:

```shell
zipgrep -s "pattern" TestZipFolder.zip
```

The -s suppresses error messages (optional). This command prints every matched line along with the file name. If you want to remove duplicate names when more than one match occurs in a file, some extra processing must be done with loops/grep, or with awk or sed.

zipgrep is actually a combination of egrep and unzip, and its usage is as follows:

```
zipgrep [egrep_options] pattern file[.zip] [file(s) ...] [-x xfile(s) ...]
```

so you can pass any egrep options to it.

Categories : Linux

Powershell Script : Unzip files and execute bat file within the zip files
Why not embed the one line from the batch file in the PowerShell script? Then you can use the variables from your PowerShell script. Simply put cmd /c before the copy:

```powershell
cmd /c copy "$NewLocation\*.prn" /b \\PC\Printer
```

Categories : Windows

Copying files and renaming files into the original folder using Powershell
In PowerShell:

```powershell
Get-ChildItem 'C:\somefolder' -Filter 'main.txt' -Recurse |
    % { Copy-Item $_.FullName (Join-Path $_.Directory 'temp.txt') }
```

In batch:

```batch
@echo off
for /r "C:\somefolder" %%f in (main.txt) do (
    copy "%%~ff" "%%~dpftemp.txt"
)
```

Categories : Powershell

Mac: open the files found with grep in sublime
A bash script could solve this nicely. Here is an untested concept (it may require some tweaking):

```bash
#!/bin/bash
files=()
while read -r; do
    files+=("$REPLY")
done < <(grep --include=*.php -R -l "tribe_events_event_classes" .)

for file in "${files[@]}"; do
    subl "$file"
done
```

Categories : Osx

How to segregate files based on recursive grep
I managed to write this script, which solves my question:

```bash
PWD=$(pwd)
FILES=$PWD/*
for f in $FILES
do
    str=$(cat "$f/file1")
    if [ "$str" == "foo" ]; then
        cp -rf "$f" "$HOME_PATH/newdir/"
    fi
done
```
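Since the title asks about recursive grep, here is a sketch of the same segregation driven by grep -l instead of cat (directory names a, b and newdir are invented):

```shell
mkdir -p a b newdir
echo foo > a/file1
echo bar > b/file1

# -x matches the whole line, -l prints only the names of matching files
grep -rlx "foo" --include=file1 a b | while read -r match; do
    cp -rf "$(dirname "$match")" newdir/
done
```

Only the directory whose file1 contains exactly "foo" ends up under newdir/.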

Categories : Bash

Performing multiple updates in EF
This has nothing to do with Entity Framework. You must insert one row at a time in SQL; that's the way SQL works. Each insert is a separate statement. There is no way (outside of special bulk-insert methods, which would be pointless for 20 records) of doing it differently. Now, yes, you can insert those 20 records in one request, which is what EF will do. I don't understand your comment "As I'm using EF, I'm a little concerned about performance". EF performs no worse than anything else in this respect. It's a simple 20-record insert; there's nothing special here, and no complexity that could cause performance issues.

Categories : Entity Framework

multiple threads performing writes?
You should create a thread-safe implementation of the data structure. It can be either lock-based (implemented, for example, with mutexes) or lock-free (using the atomic operations and memory orderings supported in C++11 and Boost). I can briefly describe the lock-based approach. Suppose you want to design a thread-safe linked list. If your threads perform only read operations, everything is safe. On the other hand, if you try to write to the list, you need the previous and next node pointers (in a doubly-linked list you must update their pointers to point to the inserted node), and while you modify them some other thread might read incorrect pointer data, so you need a lock on the two nodes between which you want to insert your new node.

Categories : C++

How to mask divs with a linear gradient mask?
I would think the transparent PNG would be the best bet: make it absolutely positioned with a higher z-index, inside a container div, and let that container float over the background sliding image. I would use a very small 2 px slice and just repeat it along the y axis, but I might not be seeing your problem correctly. I tried it with a very small slice of the gradient that I repeated down (y), and the underlying image did scroll through the top transparent images, if I follow what you are trying to do. It worked in Chrome, Firefox and Safari. Here is the CSS:

```css
#container {
    background-attachment: scroll;
    background-image: url(Untitled-1.jpg);
    background-repeat: repeat-x;
    clear: both;
    float: left;
    height: 768px;
    width: 80000px;
    position: relative;
}
#container #info { float: left; wid
```

Categories : HTML

Entity Framework performing different on multiple computers
This sounds suspiciously like an already-fixed issue with EF. Have a look at this duplicated join issue. They note that removing the orderby removes the duplicate, so that's worth a test.

Categories : C#

Performing multiple inserts for a single row in a query
Because I don't know which SQL dialect you are using, it's difficult to be sure this is correct. I also don't know whether you have already tried this, but it's the best idea I have:

```sql
insert into tblPerson (Forename, Surename)
select ContactForename, ContactPersonSurename
from tblCompany

insert into tblCompanyPerson (CompanyID, PersonID)
select CompanyId, PersonID
from tblPerson, tblCompany
where ContactForename = Forename
  and ContactPersonSurename = Surename
```

Sarajog

Categories : SQL

Performing multiple operations simultaneously on repo
repo just runs git operations under the covers, so you shouldn't run into situations where the data in the individual projects is corrupted. However, if repo tries to run a git operation on a single project while another instance of repo (or git) is running git operations on that same project, the second one will fail because it can't lock the repository. This could result in half of your projects getting updated and the other half being left behind or something similar. I'd recommend not using two instances of repo on the same sandbox for this reason.

Categories : GIT

Windows - How to grep (or findstr) html files and showing the first matching expression
With grep you can do something like:

```shell
grep -oP '(?<=href=")[^"]+(?=")' html.file
```

This is not the ideal way of parsing an HTML file, but if it is a one-off thing you can probably get away with it. `?<=href="` is a look-behind search. If the above returns too much, you can add something that is unique to the URL lines.
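A self-contained sketch of that command, assuming a GNU grep built with PCRE support (-P) and an invented html.file:

```shell
printf '<a href="http://example.com/a">x</a>\n' > html.file

# The look-behind/look-ahead keep only the text between href=" and "
grep -oP '(?<=href=")[^"]+(?=")' html.file   # prints: http://example.com/a
```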

Categories : HTML

How to get Multiple Information using | grep?
Just

```bash
identify() {
    for fname in "$@"; do
        while read -r line; do
            echo "$fname $line"
        done < <(exiftool "$fname" | egrep 'File Type|File Name')
    done
}
```

Now you can run `identify *.mkv *.avi` (note: untested, I don't have these tools or any sample files available).

Update: just tested by making a dummy helper function:

```bash
exiftool() { echo File Type 5; echo 42 File Name; }
identify *
```

If you wanted all the information for a file on one line, you could add xargs:

```bash
exiftool "$fname" | egrep 'File Type|File Name' | xargs
```

Categories : Linux

performing border tracing on multiple objects in an image
You can try using the Canny edge detection technique to resolve this. You can find more about it at http://homepages.inf.ed.ac.uk/rbf/HIPR2/canny.htm

Regards, Shiva

Categories : Opencv

optimize multiple executions of grep
Let's do it step by step. First, there is no need to call Perl twice. Instead of

```shell
img_url=$(echo $line | perl -pe 's/[ ].*//g' | perl -pe 's/(.*)_.*/$1/g')
```

you can just do

```shell
img_url=$(echo $line | perl -pe 's/[ ].*//g;s/(.*)_.*/$1/g')
```

But then, we can combine the two regexes into one: `s/.*_([^ ]*).*/$1/` (find a group of non-space characters following an underscore). Also, Perl is overkill where sed suffices:

```shell
img_url=$(echo $line | sed 's/.*_\([^ ]*\).*/\1/')
```

But hey, maybe Perl should actually be your method of choice. You see, for every URL read you re-read the two files (queue and links) in their entirety to find a matching line. If only there were a way of reading them once and keeping the inventory in memory! Oh wait. Yes, we could do it in bash. No, I would not like to do it :-)
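To check that the combined sed expression really collapses the two substitutions into one, here is a tiny sketch with an invented sample line:

```shell
line="photo_bar.jpg some trailing text"
# Greedy .*_ eats up to the last underscore; \1 keeps the captured group
img_url=$(echo "$line" | sed 's/.*_\([^ ]*\).*/\1/')
echo "$img_url"   # prints: bar.jpg
```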

Categories : Shell

Grep multiple bash parameters
A safe eval could be a good solution:

```bash
#!/bin/bash
if [[ $# -gt 0 ]]; then
    TEMP=("grep" "-e" "\"$1\"" "*")
    for (( I = 2; I <= $#; ++I )); do
        TEMP=("${TEMP[@]}" "|" "egrep" "-e" "\"${!I}\"")
    done
    eval "${TEMP[@]}"
fi
```

To run it: `bash script.sh A B C`

Categories : Linux

How do I grep out multiple lines of the same pattern?
I'm not sure if there's a way to do it with grep, but it might be easier to use something like Perl:

```shell
perl -ne '$m = 0 if m/string/; print if $m++ > 29' foo.log > new_file.log
```

(Here $m is the number of lines since the last line containing string.)

Categories : Linux

Grep output with multiple Colors?
grep is a regular expression matcher, not a syntax highlighter :). You'll have to use multiple invocations of grep, with a different value of GREP_COLOR for each:

```shell
GREP_COLOR="1;32" grep foo file.txt | GREP_COLOR="1;36" grep bar
```

That highlights "foo" and "bar" in different colors in lines that match both. I don't think there is a (simple) way to handle all occurrences of either pattern, short of merging the output streams of two independent calls:

```shell
{ GREP_COLOR="1;32" grep foo file.txt
  GREP_COLOR="1;36" grep bar file.txt
} | ...
```

which will obviously look different than if there were a way to assign a separate color to each regular expression. Alternatively, you can use awk to substitute each match with itself wrapped in the right control code:

```shell
echo "foo bar" | awk '{ gsub("bar", "\033[1;36m&\033[0m"); print }'
```

Categories : Bash

Easiest way to do "git grep" for multiple strings?
You didn't mention which operating system you're using, but if it's Linux-like you can write a "wrapper" script. Create a shell script named something like git-grep1 and put it in a directory that's in your $PATH, so git can find it. Then you can type `git grep1 param1 param2...` as if your script were a built-in git command. Here's a quick example to get you started:

```shell
# Example use: find C source files that contain "pattern" or "pat*rn"
#   $ git grep1 '*.c' pattern 'pat*rn'

# Ensure we have at least 2 params: a file name and a pattern.
[ -n "$2" ] || { echo "usage: $0 FILE_SPEC PATTERN..." >&2; exit 1; }

file_spec="$1"      # First argument is the file spec.
shift
pattern="-e $1"     # Next argument is the first pattern.
shift
# Append all remaining patterns, separating them with '--or'.
for p in "$@"; do
    pattern="$pattern --or -e $p"
done
git grep $pattern -- "$file_spec"
```

Categories : GIT

PostgreSQL performing multiple dynamic WHERE conditions without dynamically writing SQL
SQL Fiddle

```sql
with s as (
    select array(select generate_series(
        a[i][1] * 64 + a[i][2],
        a[i][1] * 64 + a[i][3]
    )) as a
    from (values (array[[0,20,28],[1,12,15]])) s(a)
    cross join generate_series(1, array_length(array[[0,20,28],[1,12,15]], 1)) g(i)
)
select id
from mytable2
cross join s
group by id
having count((not(val_low && a or val_high && a)) or null) = 0
```

array[[0,20,28],[1,12,15]] is the passed parameter.

Categories : Database

Is it better to use git grep than plain grep if we want to search in versioned source code?
git grep only searches in the tracked files in the repo. With grep you have to pass the list of files to search through and you would have filter out any untracked files yourself. So if you are searching for something that you know is in the repo, git grep saves you time as all you have to do is provide the pattern. It also is useful for not having to search through anything that is untracked in the repo.
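The tracked-files-only behaviour can be approximated with plain grep via git ls-files; here is a small sketch (repository and file names invented):

```shell
git init -q gitdemo && cd gitdemo
echo "needle here" > tracked.txt
echo "needle too"  > untracked.txt
git add tracked.txt

# Only tracked files are handed to grep, mimicking `git grep -l needle`
git ls-files -z | xargs -0 grep -l "needle"   # prints: tracked.txt
```

untracked.txt also contains the pattern but never reaches grep, which is exactly the filtering the answer describes.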

Categories : Linux

Loop for applying mask to multiple columns and creating a data frame of means
You are not using the correct syntax for selecting the columns of the matrix, and the brackets are not in the correct places. Besides, using a loop for this is slow and cumbersome; use the colMeans() function:

```r
> mat1 <- matrix(rnorm(21 * 1e6), ncol = 21)
> mat1 <- data.frame(mat1)
> system.time({
+     for (i in seq_len(ncol(mat1))) {
+         if (i > 1) {
+             theMeanValues <- c(theMeanValues, mean(mat1[, i], na.rm = TRUE))
+         } else {
+             theMeanValues <- mean(mat1[, i], na.rm = TRUE)
+         }
+     }
+ })
   user  system elapsed
   0.53    0.05    0.58
> system.time({
+     theMeanValues2 <- colMeans(mat1, na.rm = TRUE)
+ })
   user  system elapsed
   0.16    0.09    0.25
> names(theMeanValues2) <- NULL
> all.equal(theMeanValues, theMeanValues2)
```

Categories : R

How to grep multiple terms and output in the order of the search?
One lame solution might be to run the grep command multiple times (you could paste this into a shell script):

```shell
grep 'name1' file.txt >  newfile.txt
grep 'name2' file.txt >> newfile.txt
grep 'name3' file.txt >> newfile.txt
grep 'name4' file.txt >> newfile.txt
```

Hope this helps!
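The same idea wrapped in a loop, so the output order follows the search order (file contents invented for the sketch):

```shell
printf 'b line\na line\n' > file.txt

: > newfile.txt                      # truncate the output file first
for name in 'a line' 'b line'; do
    grep "$name" file.txt >> newfile.txt
done
cat newfile.txt                      # a line first, then b line
```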

Categories : Osx

GREP data within multiple tags from cURL html
Assuming your string is in the variable $data, you can:

```bash
IFS=$'\n'
result=$(echo $data | sed 's/&[^;]*;//')
result=$(echo $result | sed 's/<[^>]*>/ /g')
for string in $result; do
    if [[ ! $string =~ ^\ *$ ]]; then
        echo "string=$string."
    fi
done
```

Categories : Regex

Powershell: search for multiple string multiple xml and output results to csv
I'm not sure I'm getting your question right, but: if you have an XML input such as

```xml
<persons>
  <person>
    <firstname>Mickey</firstname>
    <lastname>Mouse</lastname>
  </person>
  <person>
    <firstname>Donald</firstname>
    <lastname>Duck</lastname>
  </person>
</persons>
```

saved as d:\temp\input.xml, then this should do the trick:

```powershell
$xml = [xml](Get-Content d:\temp\input.xml)
$csv = $xml.persons.person | ConvertTo-Csv -NoTypeInformation
```

Categories : Xml

grep command on linux ( especially grep --exclude)
You can find grep's man page here. What --exclude does:

--exclude=PATTERN: recurse in directories, skipping files matching PATTERN.

Your command will search recursively in all directories, skipping files matching the pattern "*.svn*", and searching for the file pattern "1.0.0.8/*1.0.0.8.config > 1.0.0.8-REVISION.txt".
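A minimal sketch of --exclude in action (directory and file names invented):

```shell
mkdir -p srcdir
echo match > srcdir/keep.txt
echo match > srcdir/skip.svn

# skip.svn is skipped because its base name matches the glob *.svn*
grep -r --exclude='*.svn*' -l match srcdir   # prints: srcdir/keep.txt
```

Note that --exclude matches against the file's base name, not its full path.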

Categories : Linux

Powershell - Mix 2 XML files
Your files contain invalid XML; fix them and give this a try. Here's one way to do it:

```powershell
$newXml = @"
<configuration>
  <protocol>
    <NATIVE></NATIVE>
    <ICAP></ICAP>
    <RPC>
      <ClientList>
        <items>
        $(
          $xd.SelectNodes("//configuration/protocol/RPC/ClientList/items").InnerXml
          $xd2.SelectNodes("//configuration/protocol/RPC/ClientList/items").InnerXml
        )
        </items>
      </ClientList>
    </RPC>
  </protocol>
</configuration>
"@

$newXml | Out-File $xmlpath2
Remove-Item $xmlpath1 -Force
```

Categories : Xml



© Copyright 2017 w3hello.com Publishing Limited. All rights reserved.