Bash shell read names of files from folder and create an output file for each input file with the same name:
The underscore is a valid character in variable names, so your code was looking up a variable named f_july_15. Use braces to delimit the variable name when it is followed by a character that could itself be part of a name:

./"$f" > "${f}_july_15.txt"

Categories : Bash

File names (from multiple files) as a column names in one data frame
You can do the following:

import os
import pandas as pd

file_names = []
data_frames = []
for filename in os.listdir(path):
    name = os.path.splitext(filename)[0]
    file_names.append(name)
    df = pd.read_csv(filename, header=None)
    df.rename(columns={0: name}, inplace=True)
    data_frames.append(df)

combined = pd.concat(data_frames, axis=1)

Here every DataFrame column is renamed to match its file name; you can leave that step out and just use ignore_index=True.
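A minimal, self-contained sketch of the same approach (file names and contents here are invented, and the files are written to a temporary directory so the example runs standalone):

```python
# Build two one-column CSV files, name each DataFrame column after its
# file stem, and concatenate the frames side by side.
import os
import tempfile
import pandas as pd

tmpdir = tempfile.mkdtemp()
for fname, rows in [("alpha.csv", "1\n2\n"), ("beta.csv", "3\n4\n")]:
    with open(os.path.join(tmpdir, fname), "w") as fh:
        fh.write(rows)

data_frames = []
for filename in sorted(os.listdir(tmpdir)):
    name = os.path.splitext(filename)[0]          # column name = file stem
    df = pd.read_csv(os.path.join(tmpdir, filename), header=None)
    df.rename(columns={0: name}, inplace=True)
    data_frames.append(df)

combined = pd.concat(data_frames, axis=1)
```

sorted() is used so the column order does not depend on the directory listing order.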

Categories : Python

read files according to their serial numbers and not names
file_list = os.listdir(trainDir)
file_list.sort(key=lambda s: int(os.path.splitext(s)[0]))

Or, to skip the O(n lg n) cost of sorting, inside the loop do

img = imread("%d.EXT" % i)

where EXT is the appropriate extension (e.g. jpg).
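A small illustration of why the numeric key matters (the file names are invented): plain string sorting puts "10.jpg" before "2.jpg", while sorting on the integer stem restores serial order.

```python
import os

file_list = ["10.jpg", "2.jpg", "1.jpg"]
lexical = sorted(file_list)                                   # string order
numeric = sorted(file_list, key=lambda s: int(os.path.splitext(s)[0]))
```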

Categories : Python

Batch file to read txt file with file names then search for that file and copy to folder
You can try this:

FOR /D /R "%~dp0" %%I IN (*) DO for /f "usebackq delims=" %%a in ("%~dp0list.txt") do xcopy "%%~I\%%~a" "C:\your_files" /e /i

For more help, show list.txt.

Categories : Windows

According to file names in a file, duplicate files into another directory
You need to turn the first file into a set (for fast membership testing). You are also using os.walk(), which gives you three pieces of information: the path to the directory, a list of subdirectories, and a list of the files in that directory:

import os
import shutil

srcDir = 'Root'
dstDir = 'De'

with open('file.txt', 'r') as f:
    read_filenames = {fname.strip() for fname in f}  # set comprehension

for root, directories, files in os.walk(srcDir):
    for filename in read_filenames.intersection(files):
        shutil.move(os.path.join(root, filename), dstDir)

The .intersection() call returns all elements of the read_filenames set that are also in the files list. Note that shutil.move() is given the full path of the file to move to dstDir, built with os.path.join() from the root directory that os.walk() supplies.
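The membership test at the heart of this answer can be checked in isolation (the file names below are invented stand-ins for the contents of file.txt and one os.walk() directory listing):

```python
# Simulate the set built from file.txt and one directory's file list.
read_filenames = {line.strip() for line in ["a.txt\n", "b.txt\n", "c.txt\n"]}
files_in_dir = ["b.txt", "d.txt", "c.txt"]

# intersection() keeps only names present in both collections.
to_move = sorted(read_filenames.intersection(files_in_dir))
```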

Categories : Python

How to read file names and versions from XML file and add them into dictionary
C# has a fairly extensive XML library available. The class you are likely looking for is XmlReader; documentation is available here:

http://msdn.microsoft.com/en-us/library/system.xml.xmlreader(v=vs.71).aspx
http://support.microsoft.com/kb/307548

Each line in the sample text you posted is called an element in XML, so as you read through the documentation in the links above, note that the XmlNodeType element corresponds to each individual line. The information you want to parse (the file names and versions) is stored in attributes of the element on each line, so as you read the sample code in the second link, note how attributes are extracted from each element. In your case, those two terms are enough to easily parse this file using the existing methods.

Categories : C#

Read file in chunks - RAM-usage, read Strings from binary files
yield is the Python keyword used to build generators. It means that the next time the function is called (or iterated on), execution starts back up at the exact point it left off last time. The two functions behave identically; the only difference is that the first one uses a tiny bit more call-stack space than the second. However, the first one is far more reusable, so from a program-design standpoint the first one is actually better.

EDIT: Another difference is that the first one stops reading once all the data has been read, the way it should, but the second one only stops once either f.read() or process_data() throws an exception. In order to have the second one work properly, you need to modify it like so:

f = open(file, 'rb')
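Since the original chunk-reading functions are not shown in full here, this is a sketch of the generator idiom being discussed, using io.BytesIO in place of a real binary file so it runs standalone:

```python
import io

def read_chunks(f, chunk_size=4):
    """Yield successive chunks; execution resumes after the yield
    on each subsequent iteration, so only one chunk is in RAM."""
    while True:
        chunk = f.read(chunk_size)
        if not chunk:        # empty bytes => end of file, stop cleanly
            break
        yield chunk

chunks = list(read_chunks(io.BytesIO(b"abcdefghij")))
```

The `if not chunk: break` test is exactly the "stops once all the data has been read" behaviour described above.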

Categories : Python

Read Excel file sheet names
From Access you can automate Excel, open the workbook file, and read the sheet names from the Worksheets collection. This sample uses late binding. If you prefer early binding, add a reference for the Microsoft Excel [version] Object Library and enable the "early" lines instead of the "late" lines. Give the procedure the full path to your workbook file as its pWorkBook parameter.

Public Sub List_worksheets(ByVal pWorkBook As String)
    'Dim objExc As Excel.Application ' early
    'Dim objWbk As Excel.Workbook ' early
    'Dim objWsh As Excel.Worksheet ' early
    Dim objExc As Object ' late
    Dim objWbk As Object ' late
    Dim objWsh As Object ' late
    'Set objExc = New Excel.Application ' early
    Set objExc = CreateObject("Excel.Application") ' late
    Set objWbk = objExc.Workbo

Categories : Excel

Batch file to loop through contents of a text file, create variables and move files with partial names
Test this: the top portion creates the test files and folders; it then moves the files as you seem to want them moved.

@echo off
md "c:\mylibrary2\Accident Investigation Report" 2>nul
md "c:\mylibrary2\Address Change" 2>nul
type nul > "c:\mylibrary2\Accident Investigation Report\2013-06-06 16-28-59 165 Accident Investigation Report - J Bloggs.pdf"
type nul > "c:\mylibrary2\Accident Investigation Report\2013-06-06 16-28-59 165 Accident Investigation Report - J Bloggs.xml"
type nul > "c:\mylibrary2\Address Change\2013-06-06 16-28-59 165 Address Change - J Bloggs.pdf"
type nul > "c:\mylibrary2\Address Change\2013-06-06 16-28-59 165 Address Change - J Bloggs.xml"
type nul > "c:\mylibrary2\Accident Investigation Report\2013-06-11 15-38-07 147 Accident Investigation Report - L Test.pdf"
typ

Categories : Windows

Merging files (and file names) in R
You can just make a wrapper around the read.table() function that adds in your filename variable. Something like this should work:

read.data <- function(file){
    dat <- read.table(file, header=F, sep=",")
    dat$fname <- file
    return(dat)
}

Once there, you just need to apply that function across your data files. Since you didn't post any example data I'm not sure what it actually looks like, but for now I'll assume it's as clean as can be and that rbind() is sufficient to join them together, in which case this example should illustrate that function in action:

> data(iris)
> write.csv(iris, file="iris1.csv", row.names=F)
> write.csv(iris, file="iris2.csv", row.names=F)
> dataset <- do.call(rbind, lapply(list.files(pattern="csv$"), read.data))
> head(dataset)
  Sepal.Len

Categories : R

php creating zip file for files with unicode names
ZIP files don't have a specified encoding for filenames*. Consequently any use of non-ASCII characters is completely unreliable. *: Not completely true: there is an extension to the format that allows UTF-8 filenames to be used, and the zip command will use it. But Windows's ZIP interface (“Compressed Folders”) doesn't support it, and always uses the default (“ANSI”) code page to interpret the filename bytes. If you know that your target audience all have Windows boxes with a particular locale then you can target that locale... otherwise, best stick to ASCII.
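The UTF-8 extension mentioned above can be observed with Python's zipfile module (used here only as a cross-check of the ZIP format, not of PHP's behaviour): when a member name is not pure ASCII, the writer sets the UTF-8 flag bit (0x800) in that entry's header. The member names are invented.

```python
import io
import zipfile

buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as zf:
    zf.writestr("plain.txt", "x")
    zf.writestr("\u00e9t\u00e9.txt", "y")   # "été.txt", a non-ASCII name

# Re-open and inspect each entry's general-purpose flag bits.
with zipfile.ZipFile(buf) as zf:
    flags = {i.filename: bool(i.flag_bits & 0x800) for i in zf.infolist()}
```

A reader that ignores bit 0x800 (such as the Windows "Compressed Folders" UI) falls back to its local code page, which is exactly the unreliability described above.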

Categories : PHP

How to create a zip file with the multiple files sharing the same names
This is actually very easy using Archive::Zip.

my $zip = Archive::Zip->new;
$zip->addString("test one", "a.txt");
$zip->addString("test two", "a.txt");
$zip->writeToFileNamed("test.zip");

Nothing very interesting happens when trying to extract it with the standard unzip tool:

$ unzip test.zip
Archive:  test.zip
 extracting: a.txt
replace a.txt? [y]es, [n]o, [A]ll, [N]one, [r]ename: y
 extracting: a.txt

If you prefer slightly more interesting content you can, of course, use the addFile method instead of addString.
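The same duplicate-name trick works at the format level, which can be sketched with Python's zipfile module (a cross-language illustration, not Archive::Zip itself): writing two members under the same archive name stores both entries.

```python
import io
import zipfile

buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as zf:
    zf.writestr("a.txt", "test one")
    zf.writestr("a.txt", "test two")   # zipfile emits a UserWarning, but stores it

# Both entries survive in the central directory.
with zipfile.ZipFile(buf) as zf:
    names = zf.namelist()
```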

Categories : Perl

finding most recent file version from list of file path names with jumbled file names
You can try:

find $WORK.../.history -type f -printf '%T@ %p\n' | sort -nr | cut -f2- | xargs grep 'your_pattern'

Decomposed:

find finds all plain files and prints each one's modification time and path
sort sorts them numerically and in reverse, so the highest number (the most recently modified) comes first
cut removes the time from each line
xargs runs its argument on each file it receives on input; in this case it runs grep, so the first file grep matches is the most recently modified one

The above does not work when the filenames contain spaces, but hopefully that is not your case... The -printf works only with GNU find. For repetitive work, you can split the command into two parts:

find $WORK.../.history -type f -printf '%T@ %p\n' | sort -nr | cut -f2- > /somewhe
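A rough sketch of the same newest-first logic, restated in Python terms. The paths and timestamps below are simulated; in real code the values would come from os.path.getmtime() over the listing of the .history tree.

```python
# path -> modification time (seconds since epoch), invented sample data
candidates = {
    "proj/.history/a.java": 1700000100.0,
    "proj/.history/b.java": 1700000300.0,
    "proj/.history/c.java": 1700000200.0,
}

# Equivalent of "sort -nr | head -1" on the mtime column:
newest = max(candidates, key=candidates.get)
```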

Categories : Eclipse

Make subfolder names from part of file name and copy files with Robocopy
@ECHO OFF
SETLOCAL
SET "sourcedir=."
SET "destdir=c:\destdir"
FOR /f "tokens=1* delims=_" %%i IN (
  'dir /b /a-d "%sourcedir%\*_*.*"'
) DO XCOPY /b "%sourcedir%\%%i_%%j" "%destdir%\%%i"
GOTO :EOF

This should accomplish the task described. You'd need to set up the source and destination directories to suit, of course. Add >nul to the end of the XCOPY line to suppress the 'copied' messages.

Categories : Windows

Map job which splits file into small ones and generates names of these files on reduce stage
In the map phase you can do two emits per record: <status, field> and <list_statuses, status>. The 'list_statuses' must be a unique key you choose in advance. Then in the reduce phase your behaviour depends on the key: if it equals your special key, you emit a file with the statuses (this reduce function can store all statuses in a Set, for example); otherwise you generate the <status, field> file. Does this make sense to you?

Categories : Hadoop

Get all the files from a FTP location with previous week's dates in the file names using Linux
You can use the following command on Linux (tested on CentOS 6); change the -1 day offset for the appropriate dates:

yesterday=$(date +%Y%m%d --date="-1 day")

More reference: http://blog.midnightmonk.com/85/bash/bash-date-manipulation.shtml

Categories : Linux

PHP how to list all files in directory in a combobox - only file names and not with full paths
$files = glob('images/items/Done/*.jpg');
echo "<select>";
foreach ($files as $file) {
    echo "<option>" . pathinfo($file, PATHINFO_BASENAME) . "</option>";
}
echo "</select>";

Categories : PHP

Batch: Output file names with relative paths to files and additional signs
Could you please specify how you will differentiate "somefile.vi": does it not contain any numbers, or do you know its individual name? If it is the latter, I believe this should work if run from the C:\foo folder:

for /r %%a in (*) do (if %%~na NEQ somefile echo %%~pa%%~na%%~xa >> output.txt)

Note that instead of "@@ar..." you will get "fooar..." If there are multiple files you want to exclude, simply nest more if commands:

for /r %%a in (*) do (if %%~na NEQ somefile1 if %%~na NEQ somefile2 if %%~na NEQ somefile3 echo %%~pa%%~na%%~xa >> output.txt)

And so on... I've tried this on my computer and it worked fine.

Yours, Mona

Categories : File

sort files as per their dated names at the same time create filehandles for each file in perl
I propose this script:

use strict;
use warnings;

# 1. Set up the general values
my $directory = "... the dir ...";
my $out_file  = "... the out file ...";

# 2. Fill an array with the names of your files
my @files;
opendir(my $dh, $directory) or die $!;
while( my $file = readdir($dh) ) {
    next if $file =~ /^\./;          # skip . and ..
    push @files, "$directory/$file";
}
closedir $dh;

# 3. Sort the array
@files = sort {$a cmp $b} @files;

# 4. Open the target file
open my $ofh, '>', $out_file or die $!;

# 5. Iterate over each input file, open it,
#    and write its contents line by line to the target
foreach my $filename (@files) {
    open my $ifh, '<', $filename or die $!;
    while( my $line = <$ifh> ) {
        print $ofh $line;
    }
    close $ifh;
}

# 6. Close the target
close $ofh;

Categories : Perl

C# File - Read files from desktop and write them to a specific file
@charqus, this should work:

if (!File.Exists(fileName))
    File.Create(fileName).Dispose();

string[] PDFiles = Directory.GetFiles(sourceDirectory, "*.pdf", SearchOption.TopDirectoryOnly);
List<String> fileList = new List<String>();

using (FileStream fs = new FileStream(fileName, FileMode.Open, FileAccess.Read))
{
    using (BinaryReader r = new BinaryReader(fs))
    {
        fileList.Add(r.ReadString());
    }
}

string[] textFile = fileList.ToArray();

Calling the Dispose method ensures that all the resources are properly released.

Categories : C#

Rename directory names and class names of Magento PHP files
Magento's autoloader uses class names to determine the path to class files, e.g. Company_Module_Helper_Data must reside in the Company/Module/Helper/Data.php file for it to be properly loaded. So if you change the names of any folders, be sure to change the names of all the classes correspondingly.
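The name-to-path rule described above can be sketched as a one-line transformation (shown in Python purely as an illustration of the convention, not as Magento code): underscores become directory separators and ".php" is appended.

```python
def magento_class_path(class_name):
    """Map a Magento-style class name to the file path the
    autoloader would look for."""
    return class_name.replace("_", "/") + ".php"

path = magento_class_path("Company_Module_Helper_Data")
```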

Categories : PHP

How to use fscanf() to read a file containing integer number?
You should use fgets() to read the file line by line and then parse the numbers using sscanf(). You can then skip the first number of each line just as you please. Here is an example:

#include <stdio.h>
#include <string.h>

int main() {
    char fname[] = "filename.txt";
    char buf[256];
    char *p;

    /* open file for reading */
    FILE *f = fopen(fname, "r");

    /* read the file line-wise */
    while (p = fgets(buf, sizeof(buf), f)) {
        int x, i = 0, n = 0;
        /* extract numbers from line */
        while (sscanf(p += n, "%d%n", &x, &n) > 0)
            /* skip the first, print the rest */
            if (i++ > 0)
                printf("%d ", x);
        printf("\n");
    }
}

For reference:
http://linux.die.net/man/3/fgets
http://linux.die
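The skip-the-first-number logic itself is tiny and worth sanity-checking in isolation (restated in Python with an invented input line):

```python
# Parse all integers on a line, then drop the leading one.
line = "7 10 20 30"
numbers = [int(tok) for tok in line.split()]
rest = numbers[1:]   # skip the first number, keep the rest
```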

Categories : C

How to append 4 digit number to the next string read from file
Is this what you want?

while ((data = br.readLine()) != null) {
    String[] parts = data.split(" ");
    if (parts.length == 2) {
        System.out.println(parts[1] + "/" + parts[0]);
    } else {
        System.out.println("bad string!");
    }
}

Categories : Java

How to read several files to array of hashtable and get corresponding file name
You can use another Hashtable to manage your collection of Hashtables! If you want to be slightly more modern, use a HashMap instead. Use an outer hash table that maps files to inner hash tables, and the inner hash tables can then be analyzed: for each file you find, add an entry to the outer hash table, then for each entry run the per-file process you have already figured out.
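The outer-table-of-inner-tables idea can be sketched in dictionary terms (shown in Python rather than Java; the file names and word counts are invented sample data):

```python
# outer maps file name -> inner table; inner maps word -> count.
outer = {}
samples = {"a.txt": "cat dog cat", "b.txt": "dog"}

for fname, text in samples.items():
    inner = {}
    for word in text.split():
        inner[word] = inner.get(word, 0) + 1
    outer[fname] = inner          # one inner table per file
```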

Categories : Java

read csv file exportc-csv to multiple files
It's not an error; you didn't specify the InputObject parameter for the cmdlet. You can fix it either by setting this parameter explicitly:

export-csv -Path "$($row.CreateFileName).csv" -NoType -InputObject $row

or by sending the object to the cmdlet through the pipeline:

$row | export-csv -Path "$($row.CreateFileName).csv" -NoType

Categories : Powershell

Read xml files from App.config file path?
Add the app.config using the VS wizard, then add your values, for example:

<?xml version="1.0"?>
<configuration>
  <appSettings>
    <add key="MyKey" value="c:\Projects\XMLfolder\folder\abc.xml;c:\Projects\XMLfolder\folder1\xyz.xml"/>
  </appSettings>
</configuration>

and then read it in the console application, for example:

string paths = ConfigurationManager.AppSettings["MyKey"];
string[] splittedPath = paths.Split(';');
foreach (string currPath in splittedPath)
{
    // Your code here
}

Categories : C#

read specific number from different lines in a txt file and add it to the end of each line block in txt
This should work for the problem you described:

INST_HT = [1.545000, 1.335000]
lines = open('tmp.txt')
out = open('tmp2.txt', 'w')
i = -1
while True:
    try:
        line = lines.next()
    except StopIteration:
        break
    if 'slope' in line.lower():
        i += 1
        out.write(line)
        while True:
            line = lines.next()
            if 'end slope' in line.lower():
                out.write(line)
                break
            else:
                out.write(' ' + line.strip()[:-1] + ', ' + str(INST_HT[i]) + '; ')
    else:
        out.write(line)
out.close()

Categories : Python

Read filenames from a text file and then make those files?
Try:

echo -e "$correctFilePathAndName" | touch

EDIT: Sorry, the correct piping is:

echo -e "$correctFilePathAndName" | xargs touch

The '<' redirects via stdin, whereas touch needs the filename as an argument. xargs transforms stdin into arguments for touch.

Categories : Linux

How to read multiple files from directory and run the script on each file
Either do all your data gathering on each file as your glob() loop reaches it:

open file1
  get metadata1
  get metadata2
  get metadata3
  etc...
open file2
  repeat...

or do multiple loops and get each type of metadata separately:

open file1
  get metadata1
open file2
  get metadata1
...
open file1
  get metadata2
open file2
  get metadata2
...

The first option is far more efficient, since you're only visiting each file once.
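The single-pass option can be sketched like this (in Python rather than PHP; the "files" are an invented in-memory stand-in for glob() results):

```python
# Gather every piece of metadata while each file is visited once,
# instead of re-looping over the files per metadata type.
files = {"f1.txt": "alpha", "f2.txt": "beta"}   # stand-in for glob() + reads
metadata = {}
for name, contents in files.items():
    # one visit per file: collect everything at once
    metadata[name] = {"length": len(contents), "first": contents[0]}
```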

Categories : PHP

UIActivityViewController - preserve file name of attached files from read from URL
Yes, I think it is possible; try this:

NSString *str = [[NSBundle mainBundle] pathForResource:@"AppDistributionGuide" ofType:@"pdf"];
UIActivityViewController *activityViewController =
    [[UIActivityViewController alloc] initWithActivityItems:@[@"Test", [NSURL fileURLWithPath:str]]
                                      applicationActivities:nil];

Categories : IOS

how to read variable lines from two txt files in perl and write them in to another txt file
It seems you are grouping the lines by the last part after the underscore. It is a bit unclear in what order the lines should be printed (e.g. if P1_M2A came after P2_M2A in the 2nd file), but the following code gives exactly the expected output for the data you gave. It first reads the 1st_file into a hash, remembering for each id (the last part after the _) the paragraph without its first line. Then it goes over the second file and prints the remembered lines after printing the "header". It only tests the id on the third line of each paragraph; the remaining lines are ignored. As noted above, you have not specified how to get the id. If more than the last part is important, you will have to adjust the code slightly.

#!/usr/bin/perl
use warnings;
use strict;

open my $F1, '<', '1s

Categories : Perl

How do I read in MATLAB a text file of doubles with variable number of columns per line?
Here is a one-liner, broken into several lines for readability:

C = cellfun(@(x) sscanf(x, '%f').', ...
        regexp( ...
            regexprep( ...
                fileread('test.txt'), ...
                '( | $)', ''), ...
            ' ', 'split'), ...
        'uni', 0).';

Categories : Matlab

Merge the files into a new big file until the number of user id's become 10 Million
You can avoid using the Set as intermediate storage if you write at the same time as you read from the files. You could do something like this:

import java.io.BufferedReader;
import java.io.FileNotFoundException;
import java.io.FileReader;
import java.io.IOException;
import java.io.PrintWriter;

public class AppMain {
    private static final int NUMBER_REGISTERS = 10000000;
    private static String[] filePaths = {"filePath1", "filePath2", "filePathN"};
    private static String mergedFile = "mergedFile";

    public static void main(String[] args) throws IOException {
        mergeFiles(filePaths, mergedFile);
    }

    private static void mergeFiles(String[] filePaths, String mergedFile) throws IOException {
        BufferedReader[] readerArray = createReaderArray(filePaths);
        boolean[] closedRea

Categories : Java

batch file-count the number of files copied over
Read HELP XCOPY and then use the XCOPY /L /Q command to achieve what you want. Alternatively, I would use ROBOCOPY: http://en.wikipedia.org/wiki/Robocopy

Categories : Batch File

Bash script to grep through one file for a list names, then grep through a second file to match those names to get a lookup value
awk -v search="$search_string" '$0 ~ search { gsub(/"/, "", $5); print $1" "$5; }' "$filename" |
while read line
do
    result=$(awk -v search="$line" '$0 ~ search { print $3; }' "$lookup_file")
    # Do "something" with $result
done

Categories : Bash

is that possible using VS2010 to read .vspx file(or other profiling related files) generated by VS2012
According to this MSDN document, you can't:

Profiler report files: You can open profiler report files (.vsp, .vsps, .psess, and .vspf) in both Visual Studio 2012 and Visual Studio 2010 SP1. You can't open a .vspx file in Visual Studio 2010 SP1.

Categories : C#

Could not read/access the video files inside APK Expansion (.obb) file. It throws nullpointerexception
I have had limited success using the StorageManager class to access an obb created by JOBB. I haven't tried the APKExpansionSupport class. StorageManager is built into the Android libraries. I say limited success because most of the time onObbStateChange() doesn't get called when I mount the obb using mountObb(). However, it does appear that the obb is getting mounted. I can see it in the file system and I can call getMountedObbPath() to access it. I also occasionally am unable to access the files within the obb. It mounts but then appears to be empty (which led me to your post). This has, at least once, fixed itself after rebuilding and downloading a new obb. I have no idea why but at the moment I once again cannot access the contents.

Categories : Android

pattern read, match, replace from two files and create output file with the results
This awk should work:

awk -F '=' 'FNR==NR{a[$1]=$2;next} !($1 in a){print} ($1 in a){print $0 a[$1]}' myfile.txt responsefile.txt

Expanded form:

awk -F '=' '
    FNR == NR { a[$1] = $2; next }
    !($1 in a) { print }
    ($1 in a)  { print $0 a[$1] }
' myfile.txt responsefile.txt

OUTPUT:

#Please fill the user id details.
#Here is example user=urname.
user==myname
#Please fill the group id details.
#Here is example group=urgroup.
group==mygroup
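The join the awk performs can be sketched as a dictionary lookup (restated in Python as an illustration; the lines below are a shortened, invented version of the two input files):

```python
# a[] from the awk script: key = text before "=", value = text after it.
responses = {"user": "=myname", "group": "=mygroup"}

# Lines of the template file: commented lines pass through unchanged,
# "user=" / "group=" lines get their response value appended.
template = ["#Please fill the user id details.", "user=", "group="]
out = []
for line in template:
    key = line.split("=")[0]
    out.append(line + responses[key] if key in responses else line)
```

The double "==" in the output comes from the template line already ending in "=" and the stored value starting with "=", matching the OUTPUT shown above.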

Categories : Linux

Check if files have same name and store line count of files with same names
Dictionaries are very accommodating for tasks like this. You will have to modify the example below if you intend to recursively process input files at different directory depths. Also keep in mind that you can treat Python strings as lists, which allows you to slice them (this can cut down on messy regex).

D = {}
fnames = os.listdir("txt/")
for fname in fnames:
    print(fname)
    date = fname[0:8]  # this extracts the first 8 characters, aka: date
    if date not in D:
        D[date] = []
    file = open("txt/" + fname, 'r')
    numlines = len(file.readlines())
    file.close()
    D[date].append(fname + " has " + str(numlines) + " lines")

for k in D:
    datelist = D[k]
    f = open('output/' + k + '.txt', 'w')
    for m in datelist:
        f.write(m + '\n')
    f.close()

Categories : Python

Read initially unknown number of N lines from file in a nested dictionary and start in next iteration at line N+1
You could use a dictionary to keep track of all the IDX columns and just add each line's IDX column to the appropriate list in the dictionary, something like:

from collections import defaultdict
import csv

all_lines_dict = defaultdict(list)

with open('your_file') as f:
    csv_reader = csv.reader(f)
    for line_list in csv_reader:
        all_lines_dict[line_list[3]].append(line_list)

The csv reader is part of the Python standard library and makes reading csv files easy; it reads each line as a list of its columns. This differs from your requirements in that each key maps not to a dictionary of dictionaries but to a list of the lines that share the IDX key.

Categories : Python



© Copyright 2017 w3hello.com Publishing Limited. All rights reserved.