Copying files and renaming files into the original folder using Powershell
In PowerShell:

Get-ChildItem 'C:\somefolder' -Filter 'main.txt' -Recurse |
    % { Copy-Item $_.FullName (Join-Path $_.Directory 'temp.txt') }

In batch:

@echo off
for /r "C:\somefolder" %%f in (main.txt) do (
    copy "%%~ff" "%%~dpftemp.txt"
)

Categories : Powershell

moving rows up and down on ext grid using ext4
It might be a lot easier to add an extra field to the store/model that ranks the records. This way you only have to change the rank number and then sort or filter the store, which should keep your selection intact. If you don't want this rank field to be synced to the proxy, set the "persist" property to false on the field. In any case, that's better than using private parameters on the remove method.
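As a rough sketch (the model and field names here are illustrative; adjust them to your own model, and note that the persist config requires a recent Ext 4 release):

Ext.define('Task', {
    extend: 'Ext.data.Model',
    fields: [
        { name: 'title', type: 'string' },
        // Local-only ordering field; persist: false keeps the proxy
        // from sending it to the server.
        { name: 'rank', type: 'int', persist: false }
    ]
});

Sorting the store on that field (store.sort('rank', 'ASC')) then reorders the grid rows without removing and re-inserting records.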

Categories : Extjs

Publishing ASP.NET vs. copying files
It used to be that copying files was not good enough, because you needed to create a virtual directory/application within the IIS configuration system for your application. Recent versions of IIS let you handle this entirely within the application's own config file (web.config), so it's much less of an issue.

Categories : C#

Copying DLL files dynamically.
Split your library in two and remove the reference to log4net.dll from the main library; reference the second library only where you need the parts tied to log4net.dll. A good way to handle such cases is dependency injection - take a look at Enterprise Library (Unity container) - though that will give you one extra dll :)

Using the Unity container: you'll have Library1 with ILog4NetUsingInterface. In Library2 you'll have class Log4NetUsingClass : ILog4NetUsingInterface, plus a bootstrapper that registers Log4NetUsingClass as the implementation of ILog4NetUsingInterface:

public static class Bootstrapper
{
    public static void Init()
    {
        ServiceLocator.IoC.RegisterSingleton<ILog4NetUsingInterface, Log4NetUsingClass>();
    }
}

This Init method is then called once at application startup, before anything resolves ILog4NetUsingInterface.

Categories : C#

json data for Ext4 store which requires a root element using SPRING MVC
Wrap the list in an object: return a @ResponseBody Users, where

class Users {
    public List<User> items;
}

And yes, brackets [] are used for JSON arrays.

Categories : Json

TeamCity dependency without copying files?
As far as I know there is no way to prevent the artifact copy from server to agent: it would be impossible for the compiler/linker to find the dependencies... In my opinion you can take the best of both configurations by publishing zipped artifacts (just append ".zip" to the destination path) and fetching them from the "last successful build". This way you will trigger the lib recompile only on the respective source code changes (decreasing overall build time), and the artifacts will be transferred as a compressed archive (decreased transfer time). Maybe you can optimize further by building each lib separately from the others: only libs with pending changes will be recompiled.
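For example, an artifact paths rule along these lines (the paths are illustrative) should publish the folder as a single archive:

mylib/build/** => mylib.zip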

Categories : C++

Copying Files From One Zip File Into Another In Gradle
If you are trying to merge wars here, you can't do that with a Copy task/method; you'll have to use a Zip task (there is no equivalent method). If you want to merge into an existing war, the way to do it is existingWar.from { zipTree(otherWar) }.
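A minimal sketch of a standalone merge task, assuming two existing war files (the paths and task name are illustrative):

task mergeWars(type: Zip) {
    // Entries from both archives are written into one output war.
    baseName = 'merged'
    extension = 'war'
    from zipTree('build/libs/app.war')
    from zipTree('libs/other.war')
}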

Categories : Gradle

DFRS has gone mad. Copying old files back
You could use robocopy for that task. Run the following on SourceServer:

robocopy C:\path\to\Folder1 C:\path\to\Folder2 /e /copyall /dcopy:t /xj
robocopy \\TargetServer\Folder1 C:\path\to\Folder2 /e /copyall /dcopy:t /xj /xo

For general troubleshooting of DFS-R see here and here.

Categories : Windows

Copying files on mounted drive
Neither cp nor the SMB protocol is smart enough to realize that the source and destination of the file are on the same remote server. cp will simply do its usual thing and slurp all the data from the source file (copying it to the client machine), then spit it back out into the target file on the server. So yes, it'll be a round trip through the client. A better solution for this sort of thing is an SSH remote command, which turns it into a purely server-side operation:

ssh imageuser@x.x.x.x 'cp sourcefile targetfile'

You can still keep the fileserver mounted on your local machine to actually see what files you're dealing with, but do all the file copy/move operations via ssh commands for efficiency. Since the server is a Windows machine, you'll probably have to install cygwin and get an SSH server running on it.

Categories : PHP

copying files from remote desktop using php
Yes, you can copy files from a remote server if you have the privileges/rights. Here is an example:

<?php
if (!@copy('http://someserver.com/somefile.zip', './somefile.zip')) {
    $errors = error_get_last();
    echo "COPY ERROR: " . $errors['type'];
    echo "<br /> " . $errors['message'];
} else {
    echo "File copied from remote!";
}
?>

Categories : PHP

Estimate time for copying files
You can't really do that unless you are in the process of sending the file. The best you can do is make estimates on each side of the process. When you begin to send/receive, store the start time. As you send/receive each group of bytes, take note of the elapsed time (the current time minus the start time). Divide that by the number of bytes sent/received to determine how long it takes to transfer one byte. Then multiply that ratio by the number of bytes left, and you have the approximate time remaining for the transfer.
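A minimal sketch of that calculation in C# (the stream handling and buffer size are illustrative):

using System;
using System.Diagnostics;
using System.IO;

static class TransferEstimator
{
    // Copies source to target while printing a rough time-remaining estimate.
    public static void CopyWithEstimate(Stream source, Stream target, long totalBytes)
    {
        var buffer = new byte[81920];
        long copied = 0;
        var watch = Stopwatch.StartNew();                 // start time
        int read;
        while ((read = source.Read(buffer, 0, buffer.Length)) > 0)
        {
            target.Write(buffer, 0, read);
            copied += read;
            // time per byte so far, multiplied by the bytes still to go
            double secondsPerByte = watch.Elapsed.TotalSeconds / copied;
            double secondsLeft = secondsPerByte * (totalBytes - copied);
            Console.WriteLine($"~{secondsLeft:F0}s remaining");
        }
    }
}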

Categories : C#

Copying folders and files using wildcards
Your second cp command has an incorrect slash in *_DocketPORT/*. It should be *_DocketPORT*. Try changing it to: cp -r -d $BACKUPDIR/*_DocketPORT* $HOME

Categories : Bash

copying files using Google script
I would suggest testing the DriveApp version of this function, for two reasons: the DocsList service is deprecated and may be discontinued as the DriveApp functions are fleshed out, and DriveApp is a little more agnostic when it comes to file types, so it may bypass your error. Check out File.makeCopy() in the Apps Script documentation for details.
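A minimal sketch (the file and folder IDs are placeholders):

// Copy a file into a destination folder with DriveApp instead of
// the deprecated DocsList service.
function copyWithDriveApp() {
  var file = DriveApp.getFileById('YOUR_FILE_ID');
  var dest = DriveApp.getFolderById('YOUR_FOLDER_ID');
  file.makeCopy(file.getName(), dest);
}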

Categories : Google Apps Script

Issues in copying files on iOS using NSStreams
This works for me for any type of file:

if (![NSFileManager.defaultManager copyItemAtPath: sourceFileName
                                           toPath: targetFileName
                                            error: &error]) {
    NSAlert *alert = [NSAlert alertWithError: error];
    [alert runModal];
    return;
}

Categories : Python

Copying files to system directory from app?
You can reach a file at a known location using Environment.getExternalStorageDirectory(). This call gives you the root of the external storage directory structure, and from there you can navigate to whatever predefined path you need, e.g.:

File root = Environment.getExternalStorageDirectory();

You can also use IOUtils.copy(inputStream, outputStream); to copy from an InputStream (your png file) to an OutputStream (where you want to save it).

Categories : Java

Its about the copying image files from one directory to another
You can try out the sample code here: http://www.codeproject.com/Questions/212217/increment-filename-if-file-exists-using-csharp

Categories : C#

Copying 8 files from a folder to 40 other subfolders
Would something like this work for selecting the destination folder?

$destination = Get-ChildItem "D:\example2" -Recurse |
    ? { $_.PSIsContainer -and $_.Name -eq "EmailTemplates" }

Otherwise you'll probably have to determine the destination like this:

$destination = Get-ChildItem "D:\example2" |
    ? { $_.PSIsContainer } |
    % { Join-Path $_.FullName "EmailTemplates" } |
    ? { Test-Path -LiteralPath $_ } |
    Get-Item

Then copy the files like this:

Get-ChildItem "D:\wwwexample" |
    ? { -not $_.PSIsContainer } |
    % { Copy-Item $_.FullName $destination.FullName }

Categories : Powershell

Copying files in the same Amazon S3 bucket
Since your source path contains your destination path, you may actually be copying things more than once -- first into the destination path, and then again when that destination path matches your source prefix. This would also explain why copying to a different bucket is faster than within the same bucket. If you're using s3s3mirror, use the -v option and you'll see exactly what's getting copied. Does it show the same key being copied multiple times?

Categories : Python

Copying content files in GruntJS?
Check out grunt-contrib-copy and its processContentExclude option. You should be able to just pass it an array like ['*.coffee', '*.whatever'].

Categories : Javascript

Copying files into asset folder in android
You cannot modify assets -- including adding or removing them -- at runtime. You are welcome to store your downloaded files on internal storage or external storage.
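A minimal sketch of saving a download to internal storage instead (the file name is illustrative, and in is assumed to be the stream from your download):

// Write a downloaded stream into this app's private internal storage.
private void saveToInternalStorage(Context context, InputStream in) throws IOException {
    FileOutputStream out = context.openFileOutput("downloaded.dat", Context.MODE_PRIVATE);
    byte[] buffer = new byte[8192];
    int read;
    while ((read = in.read(buffer)) != -1) {
        out.write(buffer, 0, read);
    }
    out.close();
    in.close();
}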

Categories : Android

Copying files between two windows servers through jenkins
Jenkins might be able to do it via a script step running the scp command; however, if this is part of a build, I would suggest attaching the file(s) to a project and distributing them through the Maven repository.

Categories : Windows

Bandwidth throttling while copying files between computers
The ThrottledStream class you linked to uses a delay calculation to determine how long to wait before performing the current write. This delay is based on the amount of data sent before the current write and on how much time has elapsed. Once the delay period has passed, it writes the entire buffer in a single chunk. The problem is that it does no checks on the size of the buffer being written in a particular write operation. If you ask it to limit throughput to 1 byte per second, then call the Write method with a 20MB buffer, it will write the entire 20MB immediately. If you then try to write another block of data that is 2 bytes long, it will wait for a very long time (20*2^20 seconds) before writing those two bytes. In order to get the ThrottledStream class to work more smoothly, you need to break large writes into many smaller chunks so the delay runs between them.
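A minimal sketch of that chunking idea in C# (the chunk size is illustrative; "throttled" is assumed to be an instance of the linked ThrottledStream):

using System;
using System.IO;

static class ThrottledWriter
{
    // Feed a large buffer to the throttled stream in small pieces so its
    // delay logic runs once per chunk instead of once per huge buffer.
    public static void WriteChunked(Stream throttled, byte[] data, int chunkSize = 4096)
    {
        int offset = 0;
        while (offset < data.Length)
        {
            int count = Math.Min(chunkSize, data.Length - offset);
            throttled.Write(data, offset, count);   // throttling applies per chunk
            offset += count;
        }
    }
}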

Categories : C#

how to avoid manual work of copying dll files to other PC
You could provide an installer/setup with your application to install the assemblies into the GAC, or just ship all the assemblies in the same directory as your application. If the assemblies are not yours, please check whether and how you may redistribute them.

Categories : Vb.Net

Copying all the files modified this month from the command line
Of course, just doing grep Jul is bad because you might have files with Jul in their name. Actually, find is probably the right tool for your job. Something like this:

find $DIR -maxdepth 1 -type f -mtime -30 -exec cp {} $DEST/ \;

where $DIR is the directory where your files are (e.g. '.') and $DEST is the target directory.

The -maxdepth 1 flag means it doesn't look inside sub-directories for files (isn't recursive).
The -type f flag means it looks only at regular files (e.g. not directories).
The -mtime -30 means it looks at files with modification time newer than 30 days (+30 would be older than 30 days).
The -exec flag means it executes the following command on each file, where {} is replaced with the file name and \; marks the end of the command.

Categories : Shell

Find command only scanning sub folders and not only copying files from the last day
/usr/bin/find /home/user/USBHD/Movies/ -mtime +1 -exec cp -n {} /home/user/NASVD/Movies \;

Two points. First, this will keep copying files more than 1 day old - that includes files 2, 3, 4 ... days old, forever. Second, I hope this is run from the user's crontab so that file permissions work correctly; if it is in root's crontab, the copied files will be owned by root. This command also does not remove files 1+ days old; it just copies them. So, as I said, it will always copy any file 1+ days old. If you change cp to mv, each file will disappear from the first directory and get moved into the second.

Categories : Linux

Copying the files based on modification date in linux
I guess I would first store the list of files temporarily and use a loop:

find . -mtime -90 > /tmp/copy.todo.txt

(-print, the default action, writes just the file names, which is what the loop below needs; -ls would write full listing lines.) You can read the list, if it is not too big, with:

for f in `cat /tmp/copy.todo.txt`; do echo $f; done

Note: the quotes around cat ... are backquotes, often in the upper left corner of the keyboard. You can then replace the echo command with a copy command:

for f in `cat /tmp/copy.todo.txt`; do cp "$f" /some/directory/; done

Categories : Linux

How to get to last NTFS MFT record?
The MFT location isn't always fixed on the volume. You should get the starting MFT offset from the boot sector (sector 0 of the volume; you can find the structure online). The first file in the MFT is the "$MFT" file, which is the file record for the entire MFT itself. You can parse the attributes of this file like any other file and get its data run list. When you know the size of each fragment in clusters, parse the last cluster of the last fragment, stepping through it in 1024-byte records (although I believe a fragmented MFT is rare). The last record in the MFT is the last record in that particular cluster marked "FILE0"; if you encounter a null magic number, you have gone 1024 bytes too far. Or you can just get the file size from its attributes and calculate the offset to the end of the MFT based on that.

Categories : Windows

Copying files to web application folder in IIS causes file permission issues
Here are a couple of things to try. One edits the file security to remove any "deny" access permissions and to give your app full rights, and the other removes any "read only" settings on a file and sets the attributes to "normal" (I've had to use this one in the past):

protected void Page_Load(object sender, EventArgs e)
{
    string path = Server.MapPath("theFileLocation");
    RemoveFileSecurity(path, @"App Pool Identity", FileSystemRights.FullControl, AccessControlType.Deny);
    AddFileSecurity(path, @"App Pool Identity", FileSystemRights.FullControl, AccessControlType.Allow);
    FileAttributes a = File.GetAttributes(path);
    a = RemoveAttribute(a, FileAttributes.ReadOnly);
    File.SetAttributes(path, FileAttributes.Normal);
}

private FileAttributes RemoveAttribute(FileAttributes attributes, FileAttributes attributesToRemove)
{
    return attributes & ~attributesToRemove;
}

private void AddFileSecurity(string fileName, string account, FileSystemRights rights, AccessControlType controlType)
{
    FileSecurity fSecurity = File.GetAccessControl(fileName);
    fSecurity.AddAccessRule(new FileSystemAccessRule(account, rights, controlType));
    File.SetAccessControl(fileName, fSecurity);
}

private void RemoveFileSecurity(string fileName, string account, FileSystemRights rights, AccessControlType controlType)
{
    FileSecurity fSecurity = File.GetAccessControl(fileName);
    fSecurity.RemoveAccessRule(new FileSystemAccessRule(account, rights, controlType));
    File.SetAccessControl(fileName, fSecurity);
}

Categories : C#

NTFS vs. File Share
Short answer: No. In Windows, each file and directory has an ACL controlling access to it. Each file share also has an ACL controlling access to the share. When you access a remote file through a share, you do so using the credentials used to log in to the local computer. (You can connect using different credentials by entering a username/password when connecting.) The remote computer tests the supplied credentials against the ACL on the share. Once you are past that, every file you attempt to access on the remote machine through this connection is checked using your credentials against the ACL on both the file and the share. This allows a file share to offer more restricted access to some files than if the same user were to attempt to access them locally. (So you could, for example, share files read-only even though the underlying file ACLs would let that user modify them locally.)

Categories : Misc

Maven: Not copying files from src/main/resources to the classpath root - Executable Jar
You can try replacing your configuration as follows:

<configuration>
  <archive>
    <manifest>
      <mainClass>xx.com.xxx.xxxx.xx.xxxx.InterfaceRunner</mainClass>
    </manifest>
  </archive>
  <descriptorRefs>
    <descriptorRef>jar-with-dependencies</descriptorRef>
  </descriptorRefs>
  <finalName>InterfaceRunner</finalName>
</configuration>

And then run mvn package.

Categories : Java

Copying/using Python files from S3 to Amazon Elastic MapReduce at bootstrap time
If you are using boto, I recommend packaging all your Python files in an archive (.tar.gz format) and then using the cacheArchive directive in Hadoop/EMR to access it. This is what I do:

1. Put all necessary Python files in a sub-directory, say "required/", and test it locally.
2. Create an archive of this: cd required && tar czvf required.tgz *
3. Upload this archive to S3: s3cmd put required.tgz s3://yourBucket/required.tgz
4. Add this command-line option to your steps: -cacheArchive s3://yourBucket/required.tgz#required

The last step will ensure that your archive file containing the Python code will be in the same directory format as on your local dev machine. To actually do step #4 in boto, here is the code:

step = StreamingStep(name=jobName,
                     mapper='...',
                     reducer='...',
                     ...
                     cache_archives=['s3://yourBucket/required.tgz#required'])

Categories : Amazon

NTFS Journal USN_REASON_HARD_LINK_CHANGE event
As always with the USN, I expect you'll need to go through a bit of trial and error to get it to work right. These observations/guesses may, I hope, be helpful: When the last hard link to a file is deleted, the file is deleted; so if the last hard link has been removed you should see USN_REASON_FILE_DELETE instead of USN_REASON_HARD_LINK_CHANGE. I believe that each reference number refers to a file (or directory, but NTFS doesn't support multiple hard links to directories AFAIK) rather than to a hard link. So immediately after the event is recorded, at least, the file reference number should still be valid and should point to another name for the file. If the file still exists, you can look it up by reference number and use FindFirstFileNameW and friends to find the current links. Comparing those against the links you saw previously should tell you which name was added or removed.
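A rough C sketch of that link enumeration (error handling trimmed; the path is a placeholder; requires Vista or later):

#define _WIN32_WINNT 0x0600
#include <windows.h>
#include <stdio.h>

/* Print every link name NTFS currently holds for one file. */
static void PrintHardLinks(const wchar_t *path)
{
    wchar_t name[MAX_PATH];
    DWORD len = MAX_PATH;
    HANDLE h = FindFirstFileNameW(path, 0, &len, name);
    if (h == INVALID_HANDLE_VALUE)
        return;
    do {
        wprintf(L"link: %ls\n", name);  /* volume-relative path */
        len = MAX_PATH;
    } while (FindNextFileNameW(h, &len, name));
    FindClose(h);
}

int wmain(void)
{
    PrintHardLinks(L"C:\\some\\file.txt");  /* placeholder */
    return 0;
}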

Categories : Windows

NTFS - file record size
It's not actually that much of a waste. You should try to look at what happens when the number of attributes stored in the file record exceeds 1 KB (by adding additional file names, streams, etc.). It is not clear (to me at least), for different versions of NTFS, whether the additional attributes are stored in the data section of the volume or in another file record. In previous versions of NTFS the size of an MFT file record was equal to the size of a cluster (generally 4 KB), which was a waste of space since sometimes all the attributes would take less than 1 KB. Since NT 5.0 (I may be wrong), after some research, Microsoft decided that all MFT file records should be 1 KB. So one reason for storing that number may be backwards compatibility: imagine you found an old hard drive whose MFT file records still used the old cluster-sized format; the stored record size is what lets a newer driver parse it correctly.

Categories : File

What is the difference between HDFS and NTFS and FAT32?
... Because NTFS and FAT aren't Distributed. The advantage of HDFS is that it is. See the HDFS Introduction.

Categories : Hadoop

Viewing Ciphertext of Encrypted File on NTFS (EFS)
The way you open an encrypted file in order to read its raw encrypted contents (e.g. for a backup/restore application) is to use the OpenEncryptedFileRaw, ReadEncryptedFileRaw, WriteEncryptedFileRaw, and CloseEncryptedFileRaw API functions. Writing the code on the fly, in a hypothetical hybrid language:

void ExportEncryptedFileToStream(String filename, Stream targetStream)
{
    Pointer context;
    res = OpenEncryptedFileRaw("C:\Users\Ian\wallet.dat", 0, ref context);
    if (res <> ERROR_SUCCESS)
        RaiseWin32Error(res);
    try
    {
        res = ReadEncryptedFileRaw(exportCallback, null, context);
        if (res != ERROR_SUCCESS)
            RaiseWin32Error(res);
    }
    finally
    {
        CloseEncryptedFileRaw(context);
    }
}

function ExportCallback(pbData: PBYTE, pvCallbackContext: PVOID, ulLength: ULONG): DWORD
{
    targetStream.Write(pbData, ulLength);
    return ERROR_SUCCESS;
}

Categories : Windows

batch file not copying files even though it says it did.. error: "UNC paths are not Supported. Defaulting to Windows Directory"
Can you run the bat file from the local machine instead of running it from a network drive? Also, you have an extra space between Application and Data in "C:\Documents and Settings\All Users\Application Data".

Categories : Batch File

NTFS sparse file data runs ($UsnJrnl)
No, it means that $UsnJrnl occupies 2576 clusters on disk. Sparse clusters don't occupy any space on disk; if you try to read a sparse cluster, e.g. cluster 10 in your example, NTFS just returns zeros. Generally, you can't determine the start and end cluster of a file, since files can be fragmented: your example says that the first 1408 clusters are not allocated on disk at all, then 128 clusters of the file occupy disk clusters 510119 - 510247, then 2448 clusters of the file occupy disk clusters 256 - 2704. So in this case you can't say that the file begins at cluster X (on disk) and ends at cluster Y (on disk); that's possible only if the file is not fragmented (when it uses only one cluster run).

Categories : Windows

How does NTFS handle the conflict of short file names?
NTFS won't create two short names like that. The first will be THISIS~1.txt and the second will be THISIS~2.txt. For example, open a command prompt and from the root of the C: drive type:

C:\> dir prog* /x /ad

On a Windows 7 64-bit system you will see output similar to this:

03/28/2013  12:24 PM    <DIR>          PROGRA~1     Program Files
07/31/2013  11:09 AM    <DIR>          PROGRA~2     Program Files (x86)
12/10/2012  05:30 PM    <DIR>          PROGRA~3     ProgramData

Categories : Windows

access to ntfs stream for a very long filename fails
As the very helpful page on CreateFile says, referring to the lpFileName parameter which specifies the filename: "In the ANSI version of this function, the name is limited to MAX_PATH characters. To extend this limit to 32,767 wide characters, call the Unicode version of the function and prepend "\\?\" to the path." Since you are contemplating BackupRead, obviously you are wanting to access this stream programmatically. If so, test things programmatically. Trying all these operations from the command prompt is a crap-shoot and will not establish anything more than the ability to perform such operations from the command prompt. With that in mind, let's try this simple program - boilerplate code removed:

#include "stdafx.h"

int APIENTRY _tWinMain(HINSTANCE, HINSTAN

Categories : Windows

Visual Studio 2012 under Windows8: how to prevent copying ASP.NET web site files into user temp folder?
The problem was related to the unit tests project: once I disabled it, the problem was resolved. I didn't try to find out what exactly was wrong (it may have been just some configuration option). I just created a new project for the tests and moved all the classes from the old one into the new one.

Categories : Asp Net


