Splitting flat file rows to multiple flat files SSIS
I think you don't need a loop; a C# Script Task will do the trick, with the data flow arranged as:

    Source -> [Row Count into variable Counter] -> [Script Task] -> Destination

Script Task:

    int val = Convert.ToInt32(Dts.Variables["Counter"].Value);
    Dts.Connections["destinationConnectionName"].ConnectionString = @"c:\yourpath\" + val.ToString() + ".txt";

Of course, you have to list Counter as a read-only variable of the Script Task. This will set the connection string for each row. Combine it with the sorting, which you already have.

Categories : SQL

How to update a flat file using the same flat file in bash (Linux)?
One way with awk:

    awk '
    # set the input and output field separator to "|"
    BEGIN { FS = OFS = "|" }
    # Do this action when the number of fields on a line is 3, for the first file only.
    # The action is to strip the number portion from field 1 and store it as a key
    # along with field 2. The value of this should be field 3.
    NR==FNR && NF==3 { sub(/[0-9]+$/, "", $1); a[$1$2] = $3; next }
    # For the second file, if the number of fields is 2, store the line in a variable
    # called line. Check whether field 1 (without numbers) plus field 2 is present in
    # our array. If so, print the line followed by "|" followed by the value from the array.
    NF==2 { line = $0; sub(/[0-9]+$/, "", $1); if ($1$2 in a) { print line OFS a[$1$2] }; next }
    1
    ' file file

Test:

    $ cat file
    t1ttt01|/a1
    t1ttt01|/b1
    t1ttt01|/c1
    t1ttt03|/a1|1
    t1ttt03|/b1|1
    t1ttt03
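For readers more comfortable outside awk, here is a rough Python sketch of the same two-pass logic. The function name is mine, and the trailing `1` of the awk script, which reprints every other line unchanged, is deliberately omitted to keep the sketch short:

```python
import re

def join_lines(lines):
    """Two-pass join mirroring the awk script (hypothetical helper)."""
    mapping = {}
    # pass 1: 3-field lines define key (field1 minus trailing digits + field2) -> field3
    for line in lines:
        fields = line.split("|")
        if len(fields) == 3:
            key = re.sub(r"[0-9]+$", "", fields[0]) + fields[1]
            mapping[key] = fields[2]
    out = []
    # pass 2: 2-field lines get the mapped value appended when the key matches
    for line in lines:
        fields = line.split("|")
        if len(fields) == 2:
            key = re.sub(r"[0-9]+$", "", fields[0]) + fields[1]
            if key in mapping:
                out.append(line + "|" + mapping[key])
    return out

result = join_lines(["t1ttt01|/a1", "t1ttt01|/b1", "t1ttt03|/a1|1", "t1ttt03|/b1|1"])
```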

Categories : Linux

Creating fingerprint file from flat file in python
You can use collections.defaultdict here:

    from collections import defaultdict

    with open('abc') as f:
        dic = defaultdict(dict)
        for line in f:
            cmp, val, col = line.split()
            dic[cmp][col] = val

    print dic
    # defaultdict(<type 'dict'>,
    # {'cmp1': {'val_5': '0.127', 'val_4': '0.809', 'val_1': '0.277', 'val_3': '0.795', 'val_2': '0.097'},
    #  'cmp2': {'val_5': '0.148', 'val_4': '0.909', 'val_7': '0.599', 'val_6': '0.938', 'val_3': '0.839'}})

    # get a sorted list of all val_i from the dic
    vals = sorted(set(y for x in dic.itervalues() for y in x))
    keys = sorted(dic)

    print "name {}".format(" ".join(vals))
    for key in keys:
        print "{} {}".format(key, " ".join(dic[key].get(v, '0') for v in vals))

Output:

    name val_1 val_2 val_3 val_4 val_5 va

Categories : Python

SQL Server : export from file.sql to a flat file
In SSMS go to Tools -> Options -> Query Results -> SQL Server -> Results to Text. On this pane choose Output format: Custom delimiter, and enter '|'. Return to your query and select Query -> Results to File (Ctrl+Shift+F). Execute your query; you will be asked to select the file path and name. The file will be saved with a .rpt extension, which you can rename to .txt.

Categories : SQL

Reading a file and storing it in an array, then splitting up the file everywhere there is a },
You'd better parse that JSON into an object using the Jackson mapper, like this:

    import java.io.File;
    import java.io.IOException;
    import org.codehaus.jackson.JsonGenerationException;
    import org.codehaus.jackson.map.JsonMappingException;
    import org.codehaus.jackson.map.ObjectMapper;

    public void readJSON() {
        MyBean bean;
        ObjectMapper mapper = new ObjectMapper();
        try {
            bean = mapper.readValue(new File("/sampleProject"), MyBean.class);
        } catch (JsonGenerationException e) {
            e.printStackTrace();
        } catch (JsonMappingException e) {
            e.printStackTrace();
        } catch (IOException e) {
            e.printStackTrace();
        }
    }

You can get more information about JSON -> Java Obj

Categories : Java

XML to flat file via XSLT
Will this example suffice?

    T:\ftemp>type obs.xml
    <?xml version="1.0" encoding="UTF-8"?>
    <observations realtime_start="2013-02-08" realtime_end="2013-02-08"
     observation_start="1776-07-04" observation_end="9999-12-31" units="lin"
     output_type="2" file_type="xml" order_by="observation_date" sort_order="asc"
     count="792" offset="0" limit="100000">
      <observation date="1947-01-01" CPIAUCSL_20130208="21.48"/>
      <observation date="1947-02-01" CPIAUCSL_20130208="21.62"/>
      <observation date="1947-03-01" CPIAUCSL_20130208="22.0"/>
    </observations>

    T:\ftemp>xslt obs.xml obs.xsl obs.csv

    T:\ftemp>type obs.csv
    date,CPIAUCSL
    1947-01-01,21.48
    1947-02-01,21.62
    1947-03-01,22.0

    T:\ftemp>type obs.xsl
    <?xml version="1.0" encoding="UTF-8"?>
    &l

Categories : Xml

The batch file to edit Host file is continuosly running without stopping
Add a PAUSE at the end of your code:

    @Echo off
    :: Check for permissions
    >nul 2>&1 "%SYSTEMROOT%\system32\cacls.exe" "%SYSTEMROOT%\system32\config\system"
    :: If error flag set, we do not have admin.
    if '%errorlevel%' NEQ '0' (
        Echo Requesting administrative privileges...
        goto UACPrompt
    ) else (
        goto gotAdmin
    )

    :UACPrompt
    Echo Set UAC = CreateObject^("Shell.Application"^) > "%temp%\getadmin.vbs"
    Echo UAC.ShellExecute "%~s0", "", "", "runas", 1 >> "%temp%\getadmin.vbs"
    "%temp%\getadmin.vbs"
    Exit /B

    :gotAdmin
    if exist "%temp%\getadmin.vbs" ( Del "%temp%\getadmin.vbs" )
    Pushd "%CD%"
    CD /D "%~dp0"
    C:\Windows\System32\notepad.exe C:\Windows\System32\drivers\etc\hosts
    PAUSE

Categories : Batch File

Edit to file renaming VBA to fix runtime error: file not found
Add the following line at the top of your subroutine:

    On Error Resume Next

This statement does exactly what it says: it ignores the error and moves on to the next line of code. You need to use caution with this statement, as it does not fix the error. For your current issue it should be fine, but in many others you'll need to handle the error instead of ignoring it. A good resource to get the basics is Error Handling in VBA. If you want to learn more about Excel VBA, brettdj's answer to What is the best way to master VBA macros is a great place to start. To see how errors are affected by On Error Resume Next or On Error GoTo 0, step through the following in your VBA Editor:

    Sub ExcelVBAErrorDemo()
        Dim forceError As Integer
        ' ignore errors
        On Error Resume Next

Categories : Excel

Need assistance to decompile a jar file, edit the a class, and save as jar file
There are lots of free utilities available, like WinZip or 7-Zip, that you can use. Before opening the jar file, just change the extension from .jar to .zip and extract the particular class file that you want to edit. Then decompile it using any decompiler, make the changes, compile it back, and finally put it back in the zip file. Hope it helps. Thanks

Categories : Java

Mainframe Flat file to C# classes
This would depend an awful lot on what the format of the flat file is, and how the schema works - is it self-describing, for example? However, it sounds to me like most of the work here would be in understanding the flat-file format (and the schema-binding). From there, the choice of "deserialize into the static type" vs "deserialize into a dynamic type" is kinda moot, and I would say that there is very little point deserializing into a dynamic type just to have to map it all to the static type. Additionally, the static type can (again, depending on the file-format specifics) be a handy place to decorate the types to say "here's how to interpret this", if the file-format needs specification. For example (and I'm totally making this up as I go along - don't expect this to relate to your for

Categories : C#

how do you call a flat file via ajax?
The option you're looking for is dataType. Change it to 'text' and it should run smoothly. EDIT: You requested a txt file so I gave you one; JSON, however, is an entirely different story. jQuery has this horrible behaviour of silently cancelling a JSON ajax call if the JSON isn't valid, a problem that I'm sure has already cost thousands of hours of wasted time looking anywhere but at the JSON response - it was my problem at least twice and is most likely also yours. What you need to do is validate the JSON that you're getting from the PHP; if you can get the plain data, you can paste it into jslint.

Categories : Javascript

Faster Alternatives to cp -l for a Whole File Structure?
Depending on the files, you may achieve performance gains by compressing:

    cd "$original" && tar zcf - . | (cd "$copy" && tar zxf -)

This creates a tarball of the original directory, sends the data to stdout, then changes to the copy directory (which must exist) and untars the incoming stream. For the extraction, you may want to watch the progress:

    tar zxvf -

Categories : Shell

Edit file scanner to scan only first .01MB of each file
A quick look at the code in ent.c shows a while statement on line 181. This line does an fgetc from the file pointed to by fp. To get this to work correctly, you would just add a counter to the statement, something along the lines of:

    while ((my_count++ < MAX_COUNT) && ((oc = fgetc(fp)) != EOF))

where my_count is an int (or long, depending on how big you really want to go) and MAX_COUNT for you would be 100000. Be sure to initialize my_count to 0.
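The same cap-the-bytes-read idea can be sketched in Python; the function and variable names here are mine, not from ent.c, and the byte tally merely stands in for whatever per-byte statistics the scanner accumulates:

```python
import os
import tempfile

def scan_prefix(path, max_bytes=100_000):
    """Tally byte frequencies over at most the first max_bytes of a file."""
    counts = [0] * 256
    read = 0
    with open(path, "rb") as f:
        while read < max_bytes:
            chunk = f.read(min(65536, max_bytes - read))
            if not chunk:  # EOF reached before hitting the cap
                break
            read += len(chunk)
            for b in chunk:
                counts[b] += 1
    return read, counts

# demo: a 250 kB file, but only the first 100,000 bytes are scanned
with tempfile.NamedTemporaryFile(delete=False) as tmp:
    tmp.write(b"A" * 250_000)
read, counts = scan_prefix(tmp.name)
os.unlink(tmp.name)
```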

Categories : C

Using C# File class to edit file, but throwing exception
If this is a file on a live site then there's a good chance that the web server has a lock on it. Assuming you're working in Windows, try using Process Explorer to see what has a lock on the file.

Categories : C#

What is the best way to parse a non-flat file format in Java?
The fastest approach is to use a format that already works like this, e.g. JSON or YAML. These formats do what you describe and are well supported. Since, as you note, you have no control over the format, the best way to learn how to parse something Yaml-like (but not Yaml) is to read the code of a simple Yaml parser. Just parsing the file is unlikely to be enough; you will also want to trigger events or generate a data model from the data you load.

Categories : Java

Flat file storage of database details
In your .htaccess file you can add

    <Files "database.txt">
        Order Allow,Deny
        Deny from all
    </Files>

which will prevent people from accessing that file.

Categories : PHP

Writing text to flat file using spring batch
You can simply write a class implementing the FlatFileFooterCallback interface, or just use the default implementation, the SummaryFooterCallback class. http://static.springsource.org/spring-batch/apidocs/org/springframework/batch/item/file/FlatFileFooterCallback.html

Categories : Spring

Need to create a regex Pattern to search in a flat file
I think what you're looking for is:

    "^FILE\s+(myapps.*?(_taskmenu\.xml|\.xlf)) [A-Z0-9]*:[A-Z0-9]*:[A-Z0-9]*:1$"

That will work on the data that you provided. The path name will be in the first capture group.
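Assuming the backslashes that the page rendering stripped from the pattern (the \s after FILE and the escaped dots in the extensions), a quick Python check against a made-up sample line looks like this:

```python
import re

# reconstructed pattern; the sample line is hypothetical
pattern = r"^FILE\s+(myapps.*?(_taskmenu\.xml|\.xlf)) [A-Z0-9]*:[A-Z0-9]*:[A-Z0-9]*:1$"
line = "FILE myapps/admin/foo_taskmenu.xml AB1:CD2:EF3:1"

m = re.match(pattern, line)
path = m.group(1) if m else None  # first capture group holds the path
```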

Categories : Regex

How to save multidimensional array to a flat text file - PHP
I'd simply do this:

    <?php
    if ($_POST) { // always check, to avoid notices
        file_put_contents('theFilenameToWriteTo.json', json_encode($_POST));
    }
    ?>

The benefit of json_encode is that it's more or less standard. All languages that I know of can parse this format. If you're using node.js, it can read the data. Java? JSON-java is there for you. Python? import json. Hell, even C++ libs are easy to find. To reuse that data in PHP:

    $oldPost = json_decode(file_get_contents('theFileNameToRead.json'));
    // $oldPost will be an instance of stdClass (an object)
    // to get an assoc-array:
    $oldPostArray = json_decode(file_get_contents('sameFile.json'), true);

There are other options too: serialize and write, followed by unserialize on read. If you want that array to be copy-past
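Since the answer points out that any language can read the file json_encode produces, here is a small Python sketch of the round trip; the file name and posted data are made up:

```python
import json
import os
import tempfile

# stand-in for PHP writing json_encode($_POST) to a file
posted = {"name": "alice", "tags": ["a", "b"]}
with tempfile.NamedTemporaryFile("w", suffix=".json", delete=False) as f:
    json.dump(posted, f)

# any consumer can now load the same structure back
with open(f.name) as fh:
    restored = json.load(fh)  # same shape PHP's json_decode(..., true) returns
os.unlink(f.name)
```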

Categories : PHP

What are the pros and cons of a Relational DB vs Mongo vs Flat file behind a CDN
As per your requirements, it is better to store product descriptions in a schema-free database like MongoDB, since your products can have very different fields with wide variation in the number of attributes (and corresponding fields). Also, such information is written far less often than it is read. MongoDB has collection-level write locks, which deter write-heavy applications if you want to do consistent writes. However, reads in MongoDB are very fast because you don't have to do joins or fetch field values from an EAV schema table. Needless to say, based on your data volume, sharding and replication need to be done in a production environment. It is better than storing in a flat file since MongoDB's read performance is very good because of memory-mapped files, and you get replication/shar

Categories : Mongodb

Master/details flat file with FileHelpers in the same line
The best way to use FileHelpers is to think of it as a way to describe the file format and (for the time being) to ignore the desired destination classes. Then, after you've run the file through the FileHelpers engine, you can transform the resulting records in a separate step. There is a standard name for moving data between systems: ETL, which stands for Extract, Transform, Load. If you like, FileHelpers is a good tool for the 'Extract' part of this. So in your case, your FileHelpers class would look something like this:

    [FixedLengthRecord()]
    public class MyFileHelpersClass
    {
        [FieldFixedLength(5)]
        public String Cstrid;
        [FieldFixedLength(10)]
        public String Name;
        [FieldFixedLength(10)]
        public String Phn1;
        [FieldFixedLength(20)]
        public String Email1;

Categories : C#

Hosting a war file on cloudfoundry with custom domain? (or alternatives)
It could be done by creating a CNAME record for your app (see the Azure example here). Unfortunately, it seems that Cloud Foundry (CF) does not support it yet. As I understand it, this is because the CF router determines the exact virtual machine (and, hence, IP) by parsing the URL and determining the route according to the host name (mycoolapp in your case). Ideally there would be an interface in CF where you could register all CNAME aliases for your app (as implemented for Azure websites). If the CNAME record were enabled, it would also work for HTTPS, as it basically resolves to an IP address. And there would certainly need to be an interface for you to upload a certificate for your domain. This leads to the problems mentioned below about SSL termination. But, again, as far as I know, it is not suppo

Categories : Java

How do I set the file type for a new file in Komodo Edit/IDE?
You can choose the file type/language when you create a new file in Komodo. Instead of selecting File/New/New File, use File/New/File from Template, and choose Python (for 2.x) or Python3 from the dialog. The file type will take effect immediately for syntax highlighting and checking. After you do this for the first time, you'll notice Python or Python3 appearing in the numbered list in the submenu under File/New, along with any other languages you've used recently. If you're already editing a file and want to change its type without saving, you can use Edit/Current File Settings, and in the File Preferences panel change the Language selection. This may be useful, for example, if you need to switch a file from Python to Python3 or vice versa.

Categories : Python

azure table storage Export Data to Flat or XML File for SQL
If you're looking for a tool to export data from Azure table storage to a flat file, may I suggest you take a look at Cerebrata's Azure Management Studio (commercial, NOT free) or ClumsyLeaf's TableXplorer (commercial, NOT free). Both of these tools can export data into CSV and XML file formats. Since both tools are GUI based, I don't think you can automate the export process. For automation, I would suggest you look into Cerebrata's Azure Management Cmdlets, which provide a PowerShell-based interface to export data into CSV or XML format. Since I was associated with Cerebrata in the past, I can only speak to that one. The tool won't export on a partition-by-partition basis, but if you know all the PartitionKey values in your table, you can specify a query to ex

Categories : Database

Bulk Inserting data from Flat File to Database Table
It is not nice to have commas in fields in a comma-delimited file. There may be some way to approach this with a fancy format file. My recommendation is to load the data into a staging table with a single character column and no delimiter. Then you can load your final table as:

    insert into t(productID, productName, cost)
        select cast(left(st.line, 4) as int) as ProductId,
               substring(st.line, 6, len(st.line) - 5 - charindex(',', reverse(st.line))) as ProductName,
               cast(right(st.line, charindex(',', reverse(st.line)) - 1) as float) as cost
        from stagingTable st;

This uses string manipulation to pull out the different fields from each line.
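The same extraction logic, shown in Python for illustration; the sample row and field widths are hypothetical (first 4 characters are the id, everything after the last comma is the cost, and the remainder, which may itself contain commas, is the name):

```python
def parse_line(line):
    """Mirror of the SQL: fixed-width id, last comma-field cost, name in between."""
    product_id = int(line[:4])            # left(line, 4)
    head, _, cost = line.rpartition(",")  # split on the LAST comma only
    name = head[5:]                       # drop the "NNNN," prefix
    return product_id, name, float(cost)

row = parse_line("1234,Widget, large,9.99")
```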

Categories : Sql Server

Detecting large flat surfaces from .OBJ file (created from Kinect Fusion)
This won't be a full answer, but it might give you some ideas. I personally would try some image processing algorithms. The first would be region growing; the second would be seed fill. I think you would find other, better-suited segmentation algorithms too. The key for this method to work is to treat the face normal as the key feature: if two adjacent faces have sufficiently similar (same) normals, you can consider them part of the same surface. The analogy here is that you replace pixel intensity with the face normal and then implement the image processing segmentation algorithm. Maybe, though I doubt it would work for all cases, you could index the normals, assigning each normal an index which would substitute for the pixel intensity. EDIT: My idea is, that you take the scene snaps

Categories : C#

Been trying to write an xquery code that converts a flat xml to 5-level hierarchy xml file
It's unclear what XQuery processor you're using, or the exact schema of the data you will need to process, but here is an example of how to transform the data, assuming each row contains a unique set of entries:

    let $data :=
      <data>
        <row>
          <Year>1999</Year>
          <Quarter>8</Quarter>
          <Month>5</Month>
          <Week>10</Week>
          <Flight>6/11/1995</Flight>
          <Un>WN</Un>
          <Air>193</Air>
        </row>
      </data>
    for $row in $data/row
    return
      element row {
        element Year {
          element value { $row/Year/data() },
          element Quarter {
            element value { $row/Quarter/data() },
            element Month {
              elemen

Categories : SQL

How to feed an image slider from a flat JSON file by Handlebars template?
I found the solution; everything is fine but one line, which is

    $('#imageslider').innerHTML = tmpl(data);

It should be

    $('#imageslider').html(tmpl(data));

as that follows the jQuery syntax.

Categories : Json

Issue with parsing JSON flat file in PhantomJS (NO jquery, raw javascript please)
OMG! You used eval... :P I'm surprised this question hasn't already been downvoted 5 times. On the real, excellent question. Your problem, as @DCoder has pointed out, is your JSON. A 'flat file from mongodb' is not necessarily valid JSON. Further, to make it valid you're going to need to fix up the string first:

    wrap it in square braces
    make sure you have a comma after each data line exported from mongo

Seriously, eval? Use JSON.parse twice on your converted string. That should solve it. Everything else looks clean. (.. eval .. I can't believe this scrub)
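The repair the answer describes, wrapping the export in square braces and comma-joining the lines, can be sketched like this; the mongoexport-style sample is made up, and the same idea applies whichever language does the parsing:

```python
import json

# hypothetical export: one JSON document per line, no commas, no brackets
raw = '{"a": 1}\n{"b": 2}\n'

lines = [ln for ln in raw.splitlines() if ln.strip()]
wrapped = "[" + ",".join(lines) + "]"  # wrap in braces, comma-join the lines
docs = json.loads(wrapped)             # now it parses as a valid JSON array
```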

Categories : Javascript

How can I add a GUID column in SSIS from a Flat File source before importing to a SQL Table?
You can do this with a Script Component, or a Custom Data Flow Component. Use NewGuid to make a new guid for each row.

Categories : SQL

SSIS Package that removes or ignores multiple rows in flat file
I figured out the solution to my own question. I used the Conditional Split as shown in my diagram to filter out the rows I didn't need. For example, I put in a condition that checks whether the first column of data (member_no) is < (less than) a number. If TRUE, it goes to my OLE DB destination. If FALSE, it goes nowhere. This prevented the "SUM TOTAL" row from being passed to the database. I also edited my start range to 'Sheet1$A2:I' as opposed to 'Sheet1$A2:I20000'. That way the package scans until there are no more records to scan, then stops (I assume).

Categories : Sql Server

Splitting WP_Query Results into different sections and adding headers
This may not be the most elegant solution, but it will certainly save some 'if' statements. During each iteration of the loop, keep track of the last post's first letter, e.g., assuming posts are already in alphabetical order:

    $lastLetter = '';
    while ( have_posts() ) {
        the_post();
        $currentLetter = strtolower(substr(get_the_title(), 0, 1));
        if ($currentLetter != $lastLetter) {
            // do custom nav header code
        }
        // do list of links content here
        $lastLetter = $currentLetter;
    }

Categories : PHP

Uniformly colorize perfectly flat surfaces (contiguous pixels) in geotiff file in R
You probably just need to do

    image( df.gtiff , col = rep( "cornflowerblue" , ncell(df.gtiff) ) )

But in case that doesn't work, here are some worked examples:

    require( raster )
    # Observe difference between a brick and a raster
    logoB <- brick(system.file("external/rlogo.grd", package="raster"))
    logoR <- raster(system.file("external/rlogo.grd", package="raster"))

    # Use a brick and plotRGB to get intended colours (image uses heat.colors)
    par(mfrow=c(1,2))
    image( logoR , axes = FALSE , labels = FALSE )
    plotRGB( logoB )

    # To manually mask and colour to a single value:
    # This file doesn't have NA values so setting e.g. plot( logoR , col = "blue" ) won't work...
    # Example of cell values for background colour
    logoB[1]
         red green blue
    [1,] 255   255  255

    # Make binary ma

Categories : R

SQL Server 2012 Failure: when Importing flat file and DATES are in FORMAT of YYYY.M.DD
You could use a programmer's editor, or sed (if you have the Cygwin tools installed), to do a regular expression replace on the date strings, since the pattern is quite distinctive (and all you would need to do is replace the periods with dashes to do a bulk import, which is what I assume you're trying to do). Failing that, is there an OleDB driver for Tableau? You could import the flat files into Tableau, and then from there to SQL Server...
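If sed or Cygwin isn't handy, the same substitution can be sketched with Python's re module; the pattern assumes the YYYY.M.DD shape from the title, and the sample rows are made up:

```python
import re

def fix_dates(text):
    """Turn YYYY.M.DD into YYYY-M-DD, the same replace sed would do."""
    return re.sub(r"\b(\d{4})\.(\d{1,2})\.(\d{2})\b", r"\1-\2-\3", text)

fixed = fix_dates("2012.1.05,100\n2012.12.31,200\n")
```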

Categories : SQL

Renaming column headers after splitting data frames with split function
Simply use:

    lapply(names(split_df), function(x)
        write.csv(split_df[[x]], file = paste0(x, '_ID.csv')))

Note the double square brackets in split_df[[x]].

Categories : R

Splitting file into multiple files
I THINK what you're looking for is:

    awk '/^[[:space:]]+[[:digit:]]+\./{ if (fname) close(fname); fname="out_"$1; sub(/\..*/,"",fname) } {print > fname}' file

Commented version per @zjhui's request:

    awk '
    /^[[:space:]]+[[:digit:]]+\./ {  # IF the line starts with spaces, then digits, then a period THEN
        if (fname)                   # IF the output file name variable is populated THEN
            close(fname)             # close the file you have been writing to until now
                                     # ENDIF
        fname = "out_" $1            # set the output file name to "out_" followed by the first field of this line, e.g. "out_2.Biochem"
        sub(/\..*/, "", fname)       # strip everything from the period on from the file name so it becomes e.g. "out_2"
    }
    { print > fname }
    ' file

Categories : Osx

splitting a string based on tab in the file
You can use a regex here. The question is about tabs, which render as plain spaces above, so the tab character is written explicitly as \t below:

    >>> import re
    >>> strs = "foo\tbar\tspam"
    >>> re.split(r'\t+', strs)
    ['foo', 'bar', 'spam']

update: You can use str.rstrip to get rid of a trailing tab and then apply the regex:

    >>> yas = "yas\tbs\tcda\t"
    >>> re.split(r'\t+', yas.rstrip('\t'))
    ['yas', 'bs', 'cda']

Categories : Python

hadoop file splitting using KeyFieldBasedPartitioner
FYI, there is actually a much cleaner way to split up files, using a custom OutputFormat. This link describes how to do it really well. I ended up tailoring this other link for my specific application. Altogether, it's only a few extra lines of Java.

Categories : Hadoop

splitting string from text file : IndexOutOfBounds
    Is there a way to split them when there's more then 3 spaces between them?

Yes, use this regex:

    \s{3,}

which means 3 or more whitespace characters (in Java source, that's line.split("\\s{3,}")). See more on regular expressions here. Also, your last line

    Total: 746.54

will cause problems unless you handle it. What do you get in your file after that line is read? You could put in an if to check that the number of columns is 3:

    if (columns.length == 3)
        // Assign your values
    else
        // Log error in reading
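A quick illustration of the same 3-or-more-whitespace split, sketched in Python (the question is Java, but the regex behaves identically); the sample line is made up:

```python
import re

# fields separated by runs of 3+ spaces; single spaces inside a field survive
line = "Item A" + " " * 3 + "12.50" + " " * 6 + "3"
columns = re.split(r"\s{3,}", line)
# "Item A" keeps its internal space because one space never matches \s{3,}
```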

Categories : Java

Splitting characters from a text file in Python
Use just list(line.rstrip()); list() can convert a string to a list of single characters:

    with open('filename') as f:
        result = [list(line.rstrip()) for line in f]

And don't use file.readlines; you can iterate over the file object itself. Demo:

    >>> list("foobar")
    ['f', 'o', 'o', 'b', 'a', 'r']

Categories : Python


