C# Upload Files on another partition of the server
Based on the documentation at http://msdn.microsoft.com/en-us/library/system.web.ui.webcontrols.fileupload.saveas.aspx, the String filename parameter is the full path of the location to save to. That means you should be able to do, e.g.: FileUpload.SaveAs("D:\where_you_want_to_save"). By the way, what have you tried and what error did you get?

Categories : C#

Why Hive use file from other files under the partition table?
You are not getting an error because you have created a partition over your Hive table but are not supplying a partition name in your select statement. In Hive's implementation of partitioning, data within a table is split across multiple partitions. Each partition corresponds to a particular value (or values) of the partition column(s) and is stored as a sub-directory within the table's directory on HDFS. When the table is queried, only the required partitions of the table are read where applicable. Please provide a partition name in your select query, for example:

select buyer_id AS USER_ID from hive_test where pt='20130805000000' limit 1;

Please see the link to learn more about Hive partitioning.

Categories : SQL

After new partition gets created insert partition info in another table through trigger
You can do it with DDL triggers; check out this link. One important note from the author you should consider: adding a partition is not DDL if Oracle decides to do it internally; it is only DDL if it is an end-user statement. Implicit partition creation is a new feature of 11g and refers to the interval partitioning option.

Categories : Oracle

Hoare partition : is this implementation more/less efficient than the standard partition algorithm?
I always prefer the standard Hoare implementation. If you look at it, it is not very intuitive, but it has a visible advantage: fewer swaps. While your implementation effectively always does exactly N comparisons and N swaps, Hoare's implementation does only N comparisons and does not swap anything unless it is needed. The difference is significant in some scenarios. First, in environments where swapping or assigning variables/objects is a costly operation, for example C/C++ with arrays of objects. Other typical cases where Hoare's partition implementation performs better are when many of the items in your array have the same value, or when the array is almost sorted and only a few items need to be swapped. In those cases Hoare's version performs better.
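To make the swap-count difference concrete, here is a small instrumented sketch in Python (an illustrative translation, not the asker's Matlab code) that counts every swap each scheme performs, including self-swaps, on a hypothetical input:

```python
def lomuto_partition(a, lo, hi):
    """Lomuto scheme: swaps on (nearly) every element <= pivot."""
    pivot = a[hi]
    i = lo
    swaps = 0
    for j in range(lo, hi):
        if a[j] <= pivot:
            a[i], a[j] = a[j], a[i]
            swaps += 1
            i += 1
    a[i], a[hi] = a[hi], a[i]  # place the pivot
    return swaps + 1

def hoare_partition(a, lo, hi):
    """Hoare scheme: swaps only when two out-of-place items meet."""
    pivot = a[(lo + hi) // 2]
    i, j = lo - 1, hi + 1
    swaps = 0
    while True:
        i += 1
        while a[i] < pivot:
            i += 1
        j -= 1
        while a[j] > pivot:
            j -= 1
        if i >= j:
            return swaps
        a[i], a[j] = a[j], a[i]
        swaps += 1

data = [5, 1, 4, 2, 8, 7, 3, 6]
print(lomuto_partition(data[:], 0, len(data) - 1))
print(hoare_partition(data[:], 0, len(data) - 1))
```

On this input Lomuto performs six swaps while Hoare needs only one, which is the advantage the answer describes.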

Categories : Matlab

How to get to last NTFS MFT record?
The MFT location isn't always fixed on the volume. You should get the starting MFT offset from the boot sector (sector 0 of the volume; you can find the structure online). The first file in the MFT is the "$MFT" file, which is the file record for the entire MFT itself. You can parse the attributes of this file like any other file and get its data run list. When you know the size of each fragment in clusters, parse the last fragment's clusters for each 1024-byte record (although I believe a fragmented MFT is rare). The last record in the MFT is the last record in that particular cluster marked "FILE0"; if you encounter a null magic number, you have gone 1024 bytes too far. Alternatively, you can get the file size from its attributes and calculate the offset to the end of the MFT from that.

Categories : Windows

MySQL table Partition with FLOOR function (partition function not allowed)?
MySQL documents the partitioning functions here. The floor() function appears to have some special considerations. In this case, I think the issue might be that the division is returning a float/double result rather than a decimal result. This is easily fixed in your case because you do not need to do the division:

CREATE TABLE `fact_events` (
  `event_key` bigint(20) unsigned NOT NULL AUTO_INCREMENT,
  `event_type_key` tinyint(3) unsigned NOT NULL,
  `analytic_file_id` bigint(20) unsigned NOT NULL,
  `sdk_session_id` bigint(20) unsigned NOT NULL,
  `virtual_button_create_id` bigint(20) unsigned NOT NULL,
  PRIMARY KEY (`event_key`),
  KEY `idx_events_event_type` (`event_type_key`)
)
PARTITION BY RANGE(event_key) (
  PARTITION p0 VALUES LESS THAN (0),
  PARTITION

Categories : Mysql

NTFS vs. File Share
Short answer: no. In Windows, each file and directory has an ACL controlling access to it. Each file share also has an ACL controlling access to the share. When you access a remote file through a share, you do so using the credentials used to log in to the local computer. (You can connect using different credentials by entering a username/password when connecting.) The remote computer tests the supplied credentials against the ACL on the share. Once you are past that, every file you attempt to access on the remote machine through this connection is checked using your credentials against the ACLs on both the file and the share. This allows a file share to offer more restricted access to some files than if the same user were to attempt to access them locally. (So a share can expose files under tighter restrictions than local access would impose.)

Categories : Misc

What is the difference between HDFS and NTFS and FAT32?
... Because NTFS and FAT aren't Distributed. The advantage of HDFS is that it is. See the HDFS Introduction.

Categories : Hadoop

NTFS Journal USN_REASON_HARD_LINK_CHANGE event
As always with the USN, I expect you'll need to go through a bit of trial and error to get it to work right. These observations/guesses may, I hope, be helpful: When the last hard link to a file is deleted, the file is deleted; so if the last hard link has been removed you should see USN_REASON_FILE_DELETE instead of USN_REASON_HARD_LINK_CHANGE. I believe that each reference number refers to a file (or directory, but NTFS doesn't support multiple hard links to directories AFAIK) rather than to a hard link. So immediately after the event is recorded, at least, the file reference number should still be valid and should point to another name for the file. If the file still exists, you can look it up by reference number and use FindFirstFileNameW and friends to enumerate the current links. Comparing those against the links you recorded earlier should tell you which one changed.

Categories : Windows

NTFS - file record size
It's not actually that much of a waste. You should look at what happens when the number of attributes stored in the file record exceeds 1 KB (by adding additional file names, streams, etc.). It is not clear (to me at least), for different versions of NTFS, whether the additional attributes are stored in the data section of the volume or in another file record. In previous versions of NTFS the size of an MFT file record was equal to the size of a cluster (generally 4 KB), which was a waste of space since sometimes all the attributes would take less than 1 KB. Since NT 5.0 (I may be wrong), after some research, Microsoft decided that all MFT file records should be 1 KB. So one reason for storing that number may be backwards compatibility: imagine you found an old hard drive which still uses the old record size.

Categories : File

Viewing Ciphertext of Encrypted File on NTFS (EFS)
The way you open an encrypted file in order to read its raw encrypted contents (e.g. for a backup/restore application) is to use the OpenEncryptedFileRaw, ReadEncryptedFileRaw, WriteEncryptedFileRaw, and CloseEncryptedFileRaw API functions. Writing the code on the fly, in a hypothetical hybrid language:

void ExportEncryptedFileToStream(String filename, Stream targetStream)
{
    Pointer context;

    res = OpenEncryptedFileRaw("C:\Users\Ian\wallet.dat", 0, ref context);
    if (res != ERROR_SUCCESS)
        RaiseWin32Error(res);
    try
    {
        res = ReadEncryptedFileRaw(exportCallback, null, context);
        if (res != ERROR_SUCCESS)
            RaiseWin32Error(res);
    }
    finally
    {
        CloseEncryptedFileRaw(context);
    }
}

function ExportCallback(pbData: PBYTE, pvCallbackCo

Categories : Windows

access to ntfs stream for a very long filename fails
As the very helpful page on CreateFile says, referring to the lpFileName parameter which specifies the filename: "In the ANSI version of this function, the name is limited to MAX_PATH characters. To extend this limit to 32,767 wide characters, call the Unicode version of the function and prepend "\\?\" to the path." Since you are contemplating BackupRead, obviously you are wanting to access this stream programmatically. If so, test things programmatically. Trying all these operations from the command prompt is a crap-shoot and will not establish anything more than the ability to perform such operations from the command prompt. With that in mind, let's try this simple program - boilerplate code removed:

#include "stdafx.h"

int APIENTRY _tWinMain(HINSTANCE, HINSTAN

Categories : Windows

NTFS sparse file data runs ($UsnJrnl)
No, it means that $UsnJrnl occupies 2576 clusters on disk. Sparse clusters don't occupy any space on disk; if you try to read a sparse cluster, e.g. cluster 10 in your example, NTFS just returns zeros. Generally, you can't determine the start and end cluster of the file, since files can be fragmented. Your example says that the first 1408 clusters are not allocated on disk at all, the next 128 clusters of the file occupy disk clusters 510119 - 510247, and the next 2448 clusters of the file occupy disk clusters 256 - 2704. So in this case you can't say that the file begins at cluster X (on disk) and ends at cluster Y (on disk); that's possible only if the file is not fragmented (when it uses only one cluster run).
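The arithmetic above can be sketched in a few lines of Python. The run list below is a hypothetical model of the numbers in the answer, as (cluster count, starting LCN) pairs with None marking a sparse run:

```python
# Hypothetical data-run list modelled on the example in the answer:
# (cluster_count, starting_LCN); None marks a sparse (unallocated) run.
runs = [(1408, None), (128, 510119), (2448, 256)]

# Only non-sparse runs occupy space on disk.
allocated = sum(count for count, lcn in runs if lcn is not None)
total_size = sum(count for count, _ in runs)

print(allocated)   # clusters actually occupied on disk
print(total_size)  # logical size of the file in clusters
```

Summing the allocated runs gives 2576 clusters, matching the figure in the answer, while the logical size of the file is larger because of the leading sparse run.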

Categories : Windows

How does NTFS handle the conflict of short file names?
NTFS won't create two short names like that. The first will be THISIS~1.txt and the second will be THISIS~2.txt. For example, open a command prompt and from the root of the C: drive type:

C:\>dir prog* /x /ad

On a Windows 7 64-bit system you will see output similar to this:

03/28/2013  12:24 PM    <DIR>          PROGRA~1     Program Files
07/31/2013  11:09 AM    <DIR>          PROGRA~2     Program Files (x86)
12/10/2012  05:30 PM    <DIR>          PROGRA~3     ProgramData

Categories : Windows

How to make Gradle stop trying to chmod MANIFEST.MF on an NTFS drive
Gradle is trying to set default permissions for that file, and I can't see a way to stop it from doing that. (You could make it set different permissions, but I guess that won't help.) Under Windows/NTFS this normally works just fine, so it might be a problem with your Linux NTFS driver or configuration.

Categories : Linux

Create a symbolic link (or other NTFS reparse point) in Windows Driver
There isn't a direct API to create reparse points. You need to use ZwFsControlFile() to send the FSCTL_SET_REPARSE_POINT ioctl with the appropriate input buffers and parameters. I don't have an example, though.

Categories : Windows

SUM OVER PARTITION BY
Remove partition by and add a group by clause:

SELECT BrandId, SUM(ICount) totalSum
FROM Table
WHERE DateId = 20130618
GROUP BY BrandId

Categories : SQL

Only get the partition label
As far as I know, Get-PSDrive doesn't distinguish between network and local drives. You can use Get-WmiObject Win32_LogicalDisk, which supports filtering drives by type. Filtering example:

Get-WmiObject Win32_LogicalDisk |
  select-object DeviceID, DriveType, @{Name="Type";Expression={[IO.DriveType]$_.DriveType}} |
  ? {$_.Type -eq 'Fixed'}

or

Get-WmiObject Win32_LogicalDisk | ? {$_.DriveType -eq 3}

Categories : Powershell

How should I partition this table?
I suggest Sharding the database. More information can be found here importance of sharding and approaches towards MySQL sharding. Hope this helps.

Categories : Mysql

Why only one active partition in MBR
Wikipedia has a nice article on the MBR with a lot of useful links. "Only one active partition" seems to be a design choice from the early IBM/DOS bootloader, and it has remained that way since. Basically they defined multiple active partitions as an error and checked for this error at boot. It makes some sense, because you can only boot one operating system at a time anyway, and a forced single active partition prevents ambiguity. If I recall correctly, LILO and possibly GRUB (Linux bootloaders) don't mind if there are multiple active partitions, so I think this is mostly a DOS/Windows issue. As for your questions: an "active" partition only means that the first byte of its partition table entry is different from an "inactive" partition's. There's no advantage or disadvantage; it's just a flag. Partition information is stored in the partition table inside the MBR.

Categories : Misc

how do I give partition in the
Here is mark-up for the layout you describe (without all the style - just the layout)...

<ul class="menu">
  <li><a href="/home">Home</a></li>
  <li><a href="/about">About</a></li>
  <li><a href="/songs">Songlist</a></li>
</ul>

And the CSS:

ul.menu {
  margin: 0;
  padding: 0;
  text-align: center;
}

ul.menu li {
  display: inline-block;
  width: 33%;
}

ul.menu a {
  display: block;
  width: 100%;
  padding: 0.2em 0;
}

Categories : HTML

Partition by consecutive values
You need to group rows by sets of operations. One method is to use a running total that increases each time a new "set" starts, as in:

SELECT mat, op, dt,
       SUM(change_set) OVER (PARTITION BY mat ORDER BY dt) set_group
FROM (SELECT mat, op, dt,
             CASE WHEN op != LAG(op) OVER (PARTITION BY mat ORDER BY dt)
                  THEN 1
             END change_set
      FROM data);

MAT   OP         DT          SET_GROUP
----- ---------- ----------- ----------
M1004 100        25/08/2013
M1004 100        25/08/2013
M1004 100        29/08/2013
M1004 600        29/08/2013  1
M1004 600        30/08/2013  1
M1004 600        30/08/2013  1
M1004
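The "running total of change flags" idea is easier to see outside SQL. Here is a Python sketch of the same technique over a hypothetical, already-sorted list of op values; each change of value bumps a counter, and the counter value is the group id:

```python
# Hypothetical op column, already sorted by date within one mat.
rows = [100, 100, 100, 600, 600, 600, 100]

group_ids = []
group = 0
prev = None
for op in rows:
    if prev is not None and op != prev:
        group += 1  # a change of op starts a new set
    group_ids.append(group)
    prev = op

print(group_ids)
```

Consecutive equal values share a group id, and a later return to an earlier value (the trailing 100) still opens a fresh group, exactly as the SQL running SUM does.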

Categories : SQL

Parallelizing std::nth_element and std::partition
CUDA Thrust has a partition function implemented (http://docs.nvidia.com/cuda/thrust/index.html#reordering). The main idea is the following: use prefix sums to calculate the position of each element in the output array, then rearrange the array.
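To illustrate the prefix-sum idea (sequentially, in Python; on a GPU each step would run in parallel), here is a sketch under the assumption that we flag elements satisfying the predicate, exclusive-scan the flags to get output positions, and scatter:

```python
def prefix_sum_partition(xs, pred):
    """Stable partition via the flag / prefix-sum / scatter scheme."""
    flags = [1 if pred(x) else 0 for x in xs]

    # Exclusive prefix sum of the flags: output slot for each "true" item.
    pos = [0] * len(xs)
    total = 0
    for i, f in enumerate(flags):
        pos[i] = total
        total += f

    # Scatter: "true" items go to their scanned positions,
    # "false" items fill the remaining slots in order.
    out = [None] * len(xs)
    next_false = total
    for i, x in enumerate(xs):
        if flags[i]:
            out[pos[i]] = x
        else:
            out[next_false] = x
            next_false += 1
    return out

print(prefix_sum_partition([3, 8, 1, 9, 2], lambda x: x < 5))
```

Both phases touch each element independently of the others (given the scan result), which is what makes the scheme parallelizable.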

Categories : C++

Add sub partition on another column in oracle
There is no ALTER query for adding subpartitions, as far as I know. To get the desired result, perform the following steps: create a table with the structure you want, using create-as-select with the partitions and sub-partitions, then switch the names of the two tables. You can also explore dbms_redefinition, but if you have the luxury of a little downtime it's not worth it.

Categories : Oracle

Partition an array with this algorithm
Say you start with an array a of length l. Then you should create two arrays, lesser and greater, of length l - 1 (since all values other than the pivot could be smaller, or all could be larger, than the pivot):

double[] lesser = new double[a.length - 1];
double[] greater = new double[a.length - 1];

After that, it is simply (as in your exercise) a matter of copying the data into these arrays. Keep track of the length of both arrays:

int lesserLength = 0;
int greaterLength = 0;

and increment the counter each time you insert a value. That way you know where you can insert the next value in lesser and greater. Eventually you can copy everything into a new array of length l:

double[] result = new double[a.length];

You know that lesserLength + 1 + greaterLength == a.length. You can use System.arraycopy(source, srcPos, dest, destPos, len) to copy both parts, with the pivot between them, into result.

Categories : Java

Array Partition with pointer
A pointer to a pointer is not the same as an array of arrays. You can however use a pointer to an array instead:

const char (*pointer)[20];

You of course need to update the printGrid function to match the type. As for why a pointer-to-pointer and an array-of-arrays (also often called a matrix) are different, see e.g. this old answer of mine that shows the memory layout of the two.

Categories : C++

Pig Latin Partition By clause
What is the use of the "Partition By" clause in Pig Latin? It allows you to set the Partitioner of your choice. Pig uses the default one, i.e. HashPartitioner, except for order and skew join. But sometimes you might want your own implementation to enhance performance, and PARTITION BY helps there. An example usage:

DATA = LOAD '/inputs/demo.txt' using PigStorage(' ') as (no:int, name:chararray);
PARTITIONED = GROUP DATA by name PARTITION BY org.apache.pig.test.utils.SimpleCustomPartitioner parallel 2;

Does it allow only custom partitioners, or partitioning by column? It is just for specifying custom partitioners, not for partitioning directly on some field. See PIG-282 for more details.

Categories : Hadoop

Partition using Lead in Oracle
As one of the approaches, we can do the following:

with cte(key, book, prd_key, direction, trdtime, price, grp) as (
  select t.*, dense_rank() over(order by t.trdtime desc)
  from trd t
)
select q.key, q.book, q.prd_key, q.direction, q.trdtime, q.price, grp,
       (select max(c.price)
        from cte c
        where q.direction <> c.direction
          and c.grp = (select min(grp)
                       from cte l
                       where l.direction <> q.direction
                         and l.grp > q.grp)
       ) as next_price
from cte q

Result:

Key   Book   Prd_Key   Direction   Trdtime   Price   Next_Price
------------------------------------------------------

Categories : Oracle

SQL Server Switch more than one Partition at once
Not that I'm aware of. What I'd typically do is place the switch inside a loop. Something like this:

DECLARE @Partitions TABLE (PartitionId int PRIMARY KEY CLUSTERED);
DECLARE @PartitionId INT;

INSERT @Partitions(PartitionId)
SELECT prv.boundary_id PartitionId
FROM sys.partition_functions AS pf
INNER JOIN sys.partition_range_values prv ON prv.function_id = pf.function_id
WHERE (pf.name = N'PartitionFunctionName');

WHILE EXISTS (SELECT NULL FROM @Partitions)
BEGIN
    SELECT TOP 1 @PartitionId = PartitionId FROM @Partitions;

    ALTER TABLE MS_PROD SWITCH PARTITION @PartitionId TO MS_Stage PARTITION @PartitionId;
    RAISERROR('Switched PartitionId %d to Stage', 0, 1, @PartitionId) WITH NOWAIT;

    DELETE @Partitions WHERE PartitionId = @PartitionId;
END

Categories : Sql Server

Inconsistent SUM when using OVER PARTITION in MSSQL
For floating-point arithmetic, the order in which numbers are added can affect the result. But in this case you are using decimal(12,2), which is exact. The issue is that with duplicate values for nGroup, nUser, dTransaction the ROW_NUMBER is not deterministic, so different runs can return different results. To get deterministic behaviour you can add guaranteed-unique column(s) at the end of the ORDER BY to act as a tie-breaker.
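The floating-point caveat mentioned in passing is easy to demonstrate. This Python sketch (illustrative, unrelated to the asker's decimal columns) adds the same three numbers in two orders:

```python
# Floating-point addition is not associative, so summation order matters.
a = [1e16, 1.0, -1e16]

left_to_right = (a[0] + a[1]) + a[2]  # the 1.0 is absorbed by 1e16
reordered = (a[0] + a[2]) + a[1]      # the big terms cancel first

print(left_to_right, reordered)
```

Left to right, 1e16 + 1.0 rounds back to 1e16 (1.0 is below the spacing of doubles at that magnitude), so the result is 0.0; reordered, the large terms cancel exactly and the result is 1.0. With an exact decimal type, both orders agree, which is why the answer pins the blame on ROW_NUMBER instead.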

Categories : Sql Server

Cassandra: choosing a Partition Key
"Indexing" in the documentation you linked refers to secondary indexes. In Cassandra there is a difference between primary and secondary indexes. For a secondary index it would indeed be bad to have very unique values; however, for the components of a primary key this depends on which component we are focusing on. The primary key has these components: PRIMARY KEY(partitioning key, clustering key_1 ... clustering key_n). The partitioning key is used to distribute data across different nodes, and if you want your nodes to be balanced (i.e. data well distributed across each node) then you want your partitioning key to be as random as possible. That is why the example you have uses UUIDs. The clustering key is used for ordering, so that querying columns for a particular clustering key can be efficient.

Categories : Cassandra

Assigning a letter to OEM Partition
Partitions are storage units used to divide physical disks into smaller, independent parts. Basically they're containers for filesystems. Volumes are storage units with a filesystem. A volume can be inside a partition, but it's also possible for a volume to span multiple partitions. As for assigning a drive letter to that particular partition: you need to determine the type of the filesystem inside that partition first. GParted might prove helpful there. Windows can only mount FAT(32) and NTFS volumes. For other filesystems (like Ext2/3/4 or XFS) you're going to need 3rd party tools.

Categories : Powershell

PARTITION BY doesn't work in H2 db
I don't think H2 supports window functions (aka analytic functions). However, you can do the query in the link using standard SQL:

SELECT yt.*
FROM yourtable yt
JOIN (SELECT vendorname, MAX(incidentdate) AS maxdate
      FROM yourtable
      GROUP BY vendorname
     ) vn ON vn.vendorname = yt.vendorname
ORDER BY vn.maxdate DESC, yt.VendorName ASC, yt.IncidentDate DESC;

Although this should run in both environments, the OVER form probably performs better where it is supported.

Categories : Oracle

How to partition a list in Haskell?
Here's one option:

partition :: Int -> [a] -> [[a]]
partition _ [] = []
partition n xs = take n xs : partition n (drop n xs)

And here's a tail-recursive version of that function:

partition :: Int -> [a] -> [[a]]
partition n xs = partition' n xs []
  where partition' _ [] acc = reverse acc
        partition' n xs acc = partition' n (drop n xs) (take n xs : acc)

Categories : Haskell

SQL "over" partition WHERE date between two values
row_number() is evaluated after the where clause, so you'll always get a row 1. For example:

declare @t table (id int)
insert @t values (3), (1), (4)

select row_number() over (order by id)
from @t
where id > 1

This prints:

1
2

Categories : SQL

using different storage engines per partition
I would suggest having two separate tables, one for your archives and one for your working set. I don't think MySQL views are smart enough to optimize this. It would also give you complete control over the schema but puts the burden on your application.

Categories : Mysql

Find partition size using df and awk
You can use sed instead of a second awk:

df -h | grep "/partition" | awk '{print $3}' | sed -e 's,[A-Z]$,,'

Keep in mind that this (like your one-liner) does not print the size of the partition, but the amount of space used.

Categories : Linux

MySQL Partitioning: why it's not taking appropriate partition
The HASH partitioning scheme means MySQL translates your arbitrary numerical value into its own hash value. You have defined 366 partitions. What do you think would happen if your query were:

EXPLAIN PARTITIONS SELECT * FROM temp WHERE PartitionID = 400

Your PartitionID cannot mean the real partition's ID/name in this case, since there is no partition 400. Now, just between the two of us, you might be interested to learn that MySQL's hashing function is a simple modulus. Thus, 0 maps to partition p0, 1 maps to partition p1, and 400 maps to partition p34 (== 400 - 366). Generally speaking you should not be too interested in the identity of the particular partition being used. You should be more interested to know that there is a good balance across partitions. If the balance doesn't seem right, reconsider the partitioning expression.
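The modulus behaviour described above can be sketched in a couple of lines of Python (an illustration of the arithmetic, not MySQL's internals):

```python
# MySQL HASH partitioning reduces the expression modulo the partition
# count; with the 366 partitions from the question:
num_partitions = 366

for value in (0, 1, 400):
    print(value, "-> p%d" % (value % num_partitions))
```

So the value 400 wraps around to partition p34, which is why EXPLAIN PARTITIONS reports a partition other than the one the asker expected.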

Categories : Mysql

How does oracle manage a hash partition
A hash is not random, it divides the data in a repeatable (but perhaps difficult-to-predict) fashion so that the same ID will always map to the same partition. Oracle uses a hash algorithm that should usually spread the data evenly between partitions.

Categories : Oracle

Aggregation over order-dependent partition?
You didn't specify engine, dialect, or version, so I assumed SQL Server 2012. Here is an example that you can run to see the solution: http://sqlfiddle.com/#!6/f5d38/21. You solve it by creating the correct partitions in your set. The code looks like this:

WITH groupLimits AS (
  SELECT [Key] AS groupend,
         COALESCE(LAG([Key]) OVER (ORDER BY [Key]), 0) + 1 AS groupstart
  FROM sourceData
  WHERE F1 = 'Y'
)
SELECT MIN(sourceData.F2)
FROM groupLimits
INNER JOIN sourceData
        ON sourceData.[Key] BETWEEN groupLimits.groupstart AND groupLimits.groupend
GROUP BY groupLimits.groupstart
ORDER BY groupLimits.groupstart

Categories : SQL


