Block Replication Limits in HDFS
HDFS throttles the rate of replication work so that re-replication after failures does not interfere with regular cluster traffic. The properties that control this are dfs.namenode.replication.work.multiplier.per.iteration (default 2), dfs.namenode.replication.max-streams (default 2) and dfs.namenode.replication.max-streams-hard-limit (default 4). The first controls how much replication work is scheduled to a DataNode at every heartbeat; the other two cap the maximum parallel threaded network transfers a DataNode performs at a time. These are described at https://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-hdfs/hdfs-default.xml. You can perhaps try raising the set of values to (10, 50, 100) respectively to speed up re-replication, at the cost of more network usage during recovery.
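
For reference, a sketch of how those raised values would look in hdfs-site.xml (the 10/50/100 figures are just the illustrative ones from above, not universal recommendations):

    <!-- hdfs-site.xml: illustrative values, tune for your cluster -->
    <property>
      <name>dfs.namenode.replication.work.multiplier.per.iteration</name>
      <value>10</value>
    </property>
    <property>
      <name>dfs.namenode.replication.max-streams</name>
      <value>50</value>
    </property>
    <property>
      <name>dfs.namenode.replication.max-streams-hard-limit</name>
      <value>100</value>
    </property>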

Categories : Hadoop

Hadoop input split size vs block size
To 1) and 2): I'm not 100% sure, but if a task cannot complete - for whatever reason, including something being wrong with its input split - then it is terminated and another one is started in its place. So each map task gets exactly one split with file info (you can quickly tell whether this is the case by debugging against a local cluster and seeing what information is held in the input split object; I seem to recall it's just the one location). To 3): if the file format is splittable, then Hadoop will attempt to cut the file down into input-split-sized chunks; if not, then it's one task per file, regardless of the file size. If you raise the minimum input split size, you can prevent too many mapper tasks from being spawned when each of your input files is divided into many block-sized chunks.
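
As a concrete illustration of that last point, here is a short driver sketch (class name and the 256 MB figure are illustrative) that raises the minimum split size so fewer mappers are spawned:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.mapreduce.Job;
    import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;

    public class SplitSizeDemo {
        public static void main(String[] args) throws Exception {
            Configuration conf = new Configuration();
            Job job = Job.getInstance(conf, "split-size-demo");
            // Raise the minimum split size to 256 MB so several small blocks
            // are coalesced into one split, i.e. fewer map tasks overall.
            FileInputFormat.setMinInputSplitSize(job, 256L * 1024 * 1024);
            // ... set mapper/reducer, input/output paths, and submit as usual.
        }
    }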

Categories : Hadoop

Take high supported picture size in android
It's solved; here is the corrected method:

    public void getHighResolutionForCamera() {
        int max = 0;
        Camera.Parameters params = mCamera.getParameters();
        List<Camera.Size> resolutions = params.getSupportedPictureSizes();
        // Start from a real entry so the chosen size is never null
        Camera.Size best = resolutions.get(0);
        for (Camera.Size size : resolutions) {
            if (max < size.height) {
                best = size;
                max = size.height;
            }
        }
        params.setPictureSize(best.width, best.height);
        mCamera.setParameters(params);
    }

The original problem was that the chosen size (MR in the old code) could remain null.

Categories : Android

Big array of size 1mega caused high CPU?
I just wrote this:

    #include <iostream>
    #include <cstdio>
    using namespace std;

    static __inline__ unsigned long long rdtsc(void) {
        unsigned hi, lo;
        __asm__ __volatile__ ("rdtsc" : "=a"(lo), "=d"(hi));
        return ((unsigned long long)lo) | (((unsigned long long)hi) << 32);
    }

    const int M = 1024 * 1024;

    void bigstack() {
        FILE *f = fopen("test.txt", "r");
        unsigned long long time;
        char buffer[M];                  // 1 MB on the stack
        time = rdtsc();
        fread(buffer, M, 1, f);
        time = rdtsc() - time;
        fclose(f);
        cout << "bs: Time = " << time / 1000 << endl;
    }

    void bigheap() {
        FILE *f = fopen("test.txt", "r");
        unsigned long long time;
        char *buffer = new char[M];      // 1 MB on the heap
        time = rdtsc();
        fread(buffer, M, 1, f);
        time = rdtsc() - time;
        // (the answer is cut off here; the tail below mirrors bigstack)
        delete [] buffer;
        fclose(f);
        cout << "bh: Time = " << time / 1000 << endl;
    }

Categories : C++

Move data from oracle to HDFS, process and move to Teradata from HDFS
Looks like you have several questions, so let's try to break it down.

Importing into HDFS

It seems you are looking for Sqoop. Sqoop is a tool that lets you easily transfer data in and out of HDFS, and it can connect to various databases, including Oracle, natively. Sqoop is compatible with the Oracle JDBC thin driver. Here is how you would transfer from Oracle to HDFS:

    sqoop import --connect jdbc:oracle:thin@myhost:1521/db --username xxx --password yyy \
        --table tbl --target-dir /path/to/dir

For more information: here and here. Note that you can also import directly into a Hive table with Sqoop, which could be convenient for your analysis.

Processing

As you noted, since your data is initially relational, it is a good idea to use Hive for your analysis, since you may be more familiar with SQL-like syntax.
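
The answer is cut off before the Teradata leg, but Sqoop's export mode is the usual counterpart for HDFS-to-database transfers. A sketch, assuming a Teradata JDBC driver or connector is on Sqoop's classpath (host, database, table, and paths are placeholders):

    sqoop export --connect jdbc:teradata://myhost/DATABASE=mydb \
        --username xxx --password yyy \
        --table results_tbl --export-dir /path/to/results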

Categories : Oracle

What's the difference between distcp hdfs and hftp, and why is distcp hdfs effective?
distcp over hftp should be used when copying data between two clusters running different versions of Hadoop. The command should be executed from the destination cluster (more specifically, on TaskTrackers that can write to the destination cluster), and the source should be specified with the hftp:// scheme.
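
A sketch of the usual invocation, run from the destination side (host names, ports, and paths are placeholders; 50070 is the typical NameNode HTTP port that hftp uses):

    hadoop distcp hftp://source-namenode:50070/src/path hdfs://dest-namenode:8020/dest/path

hftp is a read-only filesystem, which is why the copy has to be driven from, and written by, the destination cluster.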

Categories : Hadoop

Problems listing block size
Unless you want to re-format, I believe you are SOL. Check fdisk. Block size is often a characteristic of how data is organized on the media (e.g., a hard drive), and changing it needs a low-level re-organization, such as re-formatting the disk to the desired block size. Variant block sizes may not even be allowed for some media.

Categories : C

Azure Cloud Computing high availability vs Neo4j high availability?
The short answer is probably yes. Windows Azure provides infrastructure that allows you to build a high-availability system; it won't make any system highly available by magic. As Neo4j is stateful, each node will need to share some state (with only one node, Azure doesn't give you any SLA; your instance will go down), and how you do that depends on how Neo4j works, so you will need to rely on Neo4j's own mechanisms. I don't know Neo4j's internals, but you won't be able to skip designing a highly available architecture around Neo4j on Windows Azure infrastructure. "Cloud" may be a magic buzzword that makes things happen at the management level, but down at the hard real-world level, Harry Potter's magic wand doesn't exist.

Categories : Neo4j

Simple average over a block of a fixed size
Here's one approach that's pretty straightforward. First, some sample data:

    set.seed(1)
    x <- cbind(Plant = letters[1:5], as.data.frame(matrix(rnorm(60), ncol = 12)))
    x
    #   Plant         V1         V2         V3          V4          V5          V6
    # 1     a -0.6264538 -0.8204684  1.5117812 -0.04493361  0.91897737 -0.05612874
    # 2     b  0.1836433  0.4874291  0.3898432 -0.01619026  0.78213630 -0.15579551
    # 3     c -0.8356286  0.7383247 -0.6212406  0.94383621  0.07456498 -1.47075238
    # 4     d  1.5952808  0.5757814 -2.2146999  0.82122120 -1.98935170 -0.47815006
    # 5     e  0.3295078 -0.3053884  1.1249309  0.59390132  0.61982575  0.41794156
    #           V7         V8         V9        V10       V11       V12
    # 1 1.35867955 -0.4149946 -0.1645236 -0.7074952 0.3981059 1.9803999

Categories : R

why is my font-size on my p tag messing up my inline-block?
You have two inline blocks next to each other in a container that's not wide enough for both, so they are wrapping. That's expected for inline blocks. Make your browser wide enough and they fit fine.

Categories : HTML

quicksand sorting price values low to high / high to low
val() gives you a string, so > and < comparisons are lexicographic (not numeric). Try wrapping the values in parseInt() or parseFloat(), and make sure to add appropriate error handling as well.

Categories : Jquery

Centering a variable-size block element on a page
One way is to adjust it. Here's a way I did that got it pretty much dead center:

    <html>
    <body>
    <table height='100%' width='100%'>
      <tr style="vertical-align: middle; text-align: center;">
        <td style="position: relative; top: -6%">
          <pre style="font-size: xx-large;">
    Q: How many Bell Labs Vice Presidents does it take to change a light bulb?
    A: That's proprietary information. Answer available from AT&T on payment of
       license fee (binary only).
          </pre>
        </td>
      </tr>
    </table>
    </body>
    </html>

Edit: to center the block and not the text, I used this (the original snippet is cut off; the tail mirrors the first example):

    <html>
    <body>
    <table height='100%' style="margin-left: auto; margin-right: auto;">
      <tr style="vertical-align: middle;">
        <td style="position: relative; top: -6%">
          ...
        </td>
      </tr>
    </table>
    </body>
    </html>

Categories : HTML

Show different adsense block based on screen size?
A less efficient but simple way would be to use JavaScript to dynamically load your adverts: on page load, check which class tag is visible and then load the appropriate ad. You lose on SEO, but since 99.99% of adverts are loaded at runtime by JS anyway, this won't affect you at all. Plus, you probably don't want SEO from adverts anyway, so no problem there.

Categories : Javascript

Zurb Foundation Block Grid with different image size?
You could insert every image into a div with overflow: hidden; and position the image in the center of the div, then:

    $("img").each(function(){
        var width = $(this).width();
        var height = $(this).height();
        if (height < width) {
            // if width > height, set the image height equal to the div height
            // (assuming this image is inside the div with id="div")
            $(this).height($("#div").height());
        } else {
            $(this).width($("#div").width());
        }
    });

Categories : CSS

Specified initialization vector (IV) does not match the block size for this algorithm
Try a different string encoding. While I used a base64 string just as you did in my class, I had control over the IV generation and encoding; perhaps wherever you got the string-encoded form of the IV byte array used UTF-8 or some such. Note that only certain byte-array lengths are valid as IVs: for AES the IV must be exactly one block, i.e., 16 bytes (128 bits), regardless of whether the key is 128, 192, or 256 bits. One way to check would be to decode the string with various encodings and use the byte array that comes out to exactly 16 bytes.
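
The question is ASP.NET, but the IV rule is platform-independent. Here is a minimal Java sketch of the safer pattern, generating a correctly sized IV directly instead of round-tripping it through string encodings (class and method names are illustrative):

    import java.security.SecureRandom;
    import javax.crypto.Cipher;
    import javax.crypto.spec.IvParameterSpec;
    import javax.crypto.spec.SecretKeySpec;

    public class IvDemo {
        // Encrypts plaintext with AES/CBC; keyBytes must be 16, 24, or 32 bytes.
        public static byte[] encrypt(byte[] keyBytes, byte[] plaintext) throws Exception {
            byte[] iv = new byte[16];               // AES block size: always 16 bytes
            new SecureRandom().nextBytes(iv);
            Cipher cipher = Cipher.getInstance("AES/CBC/PKCS5Padding");
            cipher.init(Cipher.ENCRYPT_MODE,
                        new SecretKeySpec(keyBytes, "AES"),
                        new IvParameterSpec(iv));
            byte[] ciphertext = cipher.doFinal(plaintext);
            // Prepend the IV so the decrypting side can recover it.
            byte[] out = new byte[iv.length + ciphertext.length];
            System.arraycopy(iv, 0, out, 0, iv.length);
            System.arraycopy(ciphertext, 0, out, iv.length, ciphertext.length);
            return out;
        }
    }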

Categories : Asp Net

Error due to variable size data in Matlab function block
First you should decide whether you really want your data to be variable-size. Based on the sample code, it looks like you want sizes to differ based on dataType, and dataType feels like a compile-time (not run-time) variable. If that's the case, you should make it a parameter of the MATLAB Function block and make sure it is not tunable; you can then set its value using the set_param function. The value of dataType is then known at compile time, so only one of the "if" branches is analyzed, which results in a fixed-size ak. If that's not the case and you want to switch dataType at run time, then you must check the "Variable size" option in the data and ports manager; ak is variable-size data in this case, and that output can only be used with other blocks that support variable-size signals.

Categories : Matlab

Varnish High DB Connections In High Traffic
Spiking backend connections usually have little to do with your Varnish configuration, but a lot to do with the cachability of your site. Are there cookies that prevent you from caching efficiently? You can choose to strip them, or remove all but chosen ones; there are examples of both on the Varnish site. Run varnishstat and check your hit rates during peaks: is it a good cache hit ratio, and is it the same as during low load? If it's the same or higher at low load, it's easy to work on improving it at any time. Run varnishtop -i txurl to see which requests are most frequently sent to the backend servers. Maybe some URLs are just not cached due to faulty headers? Maybe some pages can be cached longer? Maybe some parts of the pages can be cached with ESI? Make sure your Varnish is not

Categories : Mysql

Hadoop - appropriate block size for unsplittable files of varying sizes (200-500mb)
If you have non-splittable files then you are better off using larger block sizes - as large as the files themselves (or larger; it makes no difference). If the block size is smaller than the overall file size, then you run into the possibility that the blocks are not all on the same DataNode and you lose data locality. This isn't a problem with splittable files, since a map task is created for each block. As for an upper limit on block size, I know that for certain older versions of Hadoop the limit was 2 GB (above which the block contents were unobtainable) - see https://issues.apache.org/jira/browse/HDFS-96 There is no downside to storing smaller files with larger block sizes - to emphasize this point, consider a 1 MB file and a 2 GB file, each with a block size of 2 GB: the 1 MB file occupies a single block that uses only 1 MB on disk (HDFS blocks are not padded out to the full configured size), and the 2 GB file likewise occupies a single block.

Categories : Hadoop

ZMQ throughput optimization
The ZeroMQ guide illustrates the Black Box Pattern for high-speed subscribers. In essence, it uses a two-stream approach (per node), where each stream has its own I/O thread and subscriber, both of which are bound to a specific network interface (NIC) and core, so you'll need two network adapters and multiple cores per node for this to work. You can read the full details in the guide.

Categories : C++

Does WebSockets have a throughput limit?
An exact answer probably depends on which WebSocket implementations you are using, but in general there shouldn't be any WebSocket-specific bandwidth limitations. There is some minor overhead for framing, UTF-8 validation (text messages only), and masking (client-to-server messages only) that is not present in a raw binary TCP connection. With sufficient CPU, these should scale up to the available bandwidth.

Categories : HTML

Throughput of the unix TCP socket
Your question is worth a paper in itself; I'm not sure there is a single, clear and unique answer :) Generally speaking, if the two computers are connected over a WAN (i.e., the Internet :)), the limiting factor will most probably not be the throughput of the socket (TCP or UDP) on the sending host, but the network itself. The kind of test you are talking about is often called BTC (Bulk Transfer Capacity) for a single link. BTC is mainly meaningful for a TCP connection, where packets are retransmitted by the stack and the sliding-window mechanics may slow the transmission rate if the producer/consumer speeds do not match. Once you know the link capacity between the two hosts, using a single link or multiple links can be evaluated, taking into account other elements such as the application's architecture.

Categories : Sockets

Can JMeter Assert on Throughput?
Yes, it's possible. You can use the Jenkins Text Finder plugin together with JMeter's "aggregate report". With the aggregate report you can write a CSV or XML file, search that file for your throughput with the Jenkins Text Finder plugin, and then mark the build as failed or unstable. Alternatively, you can use a bash script to search the generated JMeter report file and return a non-zero exit code, which will make your build fail.

Categories : Maven

How to view throughput using a SELECT?
A simple way I can think of is to load up perfmon on your machine and watch it while you're doing the select. This will give you the transfer rate to your machine from the DB. If you want to know the IO to disk on the DB, then you'll probably have to stop all other loads, load up perfmon on the DB, and watch it while you're executing the select. This result is highly dependent on how much of the data is already in the cache. If you can't isolate your select, then you can average what your baseline is and see how much more throughput there is during your select. If you can't pull up perfmon, then you can see if the relevant counters are in sys.dm_os_performance_counters (http://technet.microsoft.com/en-us/library/ms187743.aspx).

Categories : Sql Server

Illegal Block Size Exception Input length must be multiple of 16 when decrypting with padded cipher
I was able to run the code without any problem. However, I used Apache's Base64 for encoding/decoding - maybe your Base64 has bugs. If you wrote it yourself, there is a big chance that you missed some cases; for real production code, use heavily tested libraries such as Apache's. You can find the library I used for Base64 here: http://commons.apache.org/proper/commons-codec/download_codec.cgi Here is the working code (the original answer is cut off mid-method; the body below reconstructs the standard pattern the imports suggest):

    package security.symmatric;

    import javax.crypto.Cipher;
    import javax.crypto.spec.SecretKeySpec;
    import org.apache.commons.codec.binary.Base64;

    public class AES {
        public static String symmetricEncrypt(String text, String secretKey) {
            try {
                byte[] raw = Base64.decodeBase64(secretKey);
                SecretKeySpec skeySpec = new SecretKeySpec(raw, "AES");
                Cipher cipher = Cipher.getInstance("AES");
                cipher.init(Cipher.ENCRYPT_MODE, skeySpec);
                return Base64.encodeBase64String(cipher.doFinal(text.getBytes("UTF-8")));
            } catch (Exception e) {
                e.printStackTrace();
                return null;
            }
        }
        // symmetricDecrypt would mirror this with Cipher.DECRYPT_MODE.
    }

Categories : Java

When to use low < high or low + 1 < high for loop invariant
If your invariant is that the target must lie in low <= i <= high, then you use while (low < high); if your invariant is that the target must lie in low <= i < high then you use while (low + 1 < high). [Thanks to David Eisenstat for confirming this.]
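A short Java sketch of the first convention - a closed interval [low, high] with loop condition low < high (the array and predicate are illustrative):

    // Find the first index i with a[i] >= key, assuming a is sorted
    // and such an index exists.
    // Invariant: the answer always lies in the closed interval [low, high].
    static int lowerBound(int[] a, int key) {
        int low = 0, high = a.length - 1;
        while (low < high) {              // stop when the interval is one index
            int mid = low + (high - low) / 2;
            if (a[mid] >= key) {
                high = mid;               // answer is at mid or to its left
            } else {
                low = mid + 1;            // answer is strictly right of mid
            }
        }
        return low;                       // low == high: the remaining candidate
    }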

Categories : Algorithm

java HashMap keys comparison with low throughput
otherMap.get(key) will not find an entry for key="", and thus the call to equals(...) will throw an NPE. Since you seem to be checking whether there is an entry for me.getKey() in otherMap, try otherMap.get(me.getKey()) != null or otherMap.containsKey(me.getKey()) instead. Additionally, otherMap.get(key).equals(me.getKey()) will never be true in your case (independent of the value of key), since you're comparing a value from otherMap with a key from origMap. Also note that calling toString() might result in an NPE as well, unless you are sure there are no null values. I'll try to restructure your code to what I think you want (the original answer is cut off in the middle of the loop):

    Map<String, String> newMap = new HashMap<>(); // as of Java 7
    BigDecimal value1 = null;
    for (Map.Entry<String, Object> me : origMap.entrySet()) {
        // compare otherMap.get(me.getKey()) with me.getValue() as described above
    }

Categories : Java

Minimizing throughput hits on disk I/O in Java?
I don't think it is a magic number or anything. The stream just buffers data up to that limit before it actually writes it to the disk, so if you have short data blocks, it can batch them and write once instead of many times. This saves disk seeks, because the disk would otherwise need to find the correct location at the start of each block; so it simply saves a number of the expensive (much less so on an SSD) disk seeks when writing many small chunks of data. Update: 8 MB is the same figure one unit up from the default buffer size, which is 8 KB.
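
A minimal sketch of the batching effect in Java (the file name, chunk size, and the 8 MB buffer are illustrative):

    import java.io.BufferedOutputStream;
    import java.io.FileOutputStream;
    import java.io.IOException;

    public class BufferedWriteDemo {
        public static void main(String[] args) throws IOException {
            // Each write() lands in the in-memory buffer; the underlying file
            // only sees one big write per 8 MB, not a million tiny ones.
            try (BufferedOutputStream out = new BufferedOutputStream(
                    new FileOutputStream("data.bin"), 8 * 1024 * 1024)) {
                byte[] chunk = new byte[64];          // a "short data block"
                for (int i = 0; i < 1_000_000; i++) {
                    out.write(chunk);                 // buffered, no seek per call
                }
            }                                         // close() flushes the rest
        }
    }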

Categories : Java

Block pointer variable 'block' is uninitialized when captured by block
To just declare your block you would use:

    void (^block)(void);

then initialize it with:

    block = ^{ // Get warning here!
        next = dispatch_time(next, DELAY_IN_MS * 1000000L);
        // Do my periodic thing ...
        dispatch_after(next, dispatch_get_main_queue(), block);
    };

That is why putting in the semicolon works. As for why it warns without the semicolon: you are referencing block in its own declaration/assignment. You are using it in the dispatch_after call, but it's not fully set up yet.

Categories : IOS

Measuring performance/throughput of fast code ignoring processor speed?
I'm pretty sure this has to be equivalent to the halting problem, in which case it can't be done. Things such as branch prediction, memory accesses, and memory caching will all change performance irrespective of the speed of the CPU upon which the program is run.

Categories : C++

Moving a lot of data from HDFS to HDFS
Try distcp, a tool for large inter-/intra-cluster copying. See http://hadoop.apache.org/docs/r0.19.0/distcp.html
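
A sketch of a basic invocation (NameNode hosts, ports, and paths are placeholders):

    hadoop distcp hdfs://nn1:8020/source/path hdfs://nn2:8020/target/path

distcp runs as a MapReduce job, so the copy is parallelized across the cluster instead of being funneled through a single machine.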

Categories : Multithreading

inline-block Pseudo Element forcing size and squeezed out of main element
Add font-size: 0; to the main element .header_handle. This eliminates any space between inline elements. I got the trick from the "Fighting the Space Between Inline Block Elements" article on CSS-Tricks.

Categories : Misc

Show text block under an image block on clicking the image block
You want something like this; check the demo at http://jsfiddle.net/yeyene/gNQVR/1/

jQuery:

    $('#imageDiv img').on('click', function(){
        $(this).hide();
        $(this).siblings('#textDiv').show();
    });
    $('#textDiv .close').on('click', function(){
        $(this).parent('#textDiv').hide();
        $(this).parent().siblings('img').show();
    });

HTML:

    <div id="imageDiv">
        <img src="http://wallpaper-fullhd.com/wp-content/uploads/2013/03/at-the-beach-hd-wallpaper-1920x1200.jpg"
             class="close" width="200" height="200">
        <div id="textDiv">
            <a class="close">x</a>
            This is the text for the image. ...
        </div>
    </div>

Categories : Javascript

Trying to have a view & block access a variable from another block that determines an element's index in an array
Is there no natural order to these steps? Don't you have an 'i' you can pass in when you include the partial (partial local variables)? If not, it sounds like you need a slightly smarter Step object that is aware of its index in the workflow. You could also place the icon list in a helper, like so:

    def icons
      ["icon-link", "icon-bar-chart", "icon-edit", "icon-beaker",
       "icon-link", "icon-bar-chart", "icon-edit", "icon-beaker"]
    end

That gets it out of your view.

Categories : Ruby On Rails

A block from a module I uploaded is displaying before the catalog category block. How do I change it?
The before and after attributes only work in one of two cases: when you insert into a core/text_list block, or when your template block calls getChildHtml without any parameters. For example:

    <reference name="root">
        <block type="core/template" name="example_block" after="content"
               template="page/html/example-block.phtml"/>
    </reference>

Categories : Xml

Error regarding using a block within a block in Ruby - Array can't be coerced into Fixnum (TypeError)
commonMultiples is already an Array, and in find_sum_for_each_multiple it gets wrapped into an Array of Arrays. Change this line:

    sumsOfCommonMultiples = find_sum_for_each_multiple(from, to, commonMultiples)

to:

    sumsOfCommonMultiples = find_sum_for_each_multiple(from, to, *commonMultiples)

The splat (*) expands the array into individual arguments instead of passing it as a single nested array.

Categories : Ruby

Keep another div displayed block when I mouse over button but then close block element if I were to roll over it
Set the other div to display: none in your CSS and use jQuery's fadeIn() and fadeOut() methods to toggle it on/off:

    $(".button").on({
        mouseenter: function (e) {
            $(".other-div").fadeIn();
        },
        mouseleave: function () {
            $(".other-div").fadeOut();
        }
    });

See DEMO.

Categories : Jquery

Is the size of a union equal to the size of its member of the largest size?
Looking at the C1x draft standard, it says: "The size of a union is sufficient to contain the largest of its members." So you cannot rely on the size being exactly equal to the size of the largest member; whether it is depends on the compiler implementation, flags used, padding and alignment requirements, etc.

Categories : C

Webkit displaying non-cached inline-block images as block
If the images have been cached then they will always display correctly. Make sure the width of the container is less than 100%, make sure the position and float properties are not set, and make sure the !important value is syntactically correct: display: inline-block !important;

Categories : CSS

blktrace output & block sizes in linux block layer
blktrace gets its data from the Linux kernel, which considers sectors to be 512 bytes long. So I think that, regardless of the device's physical sector size, blktrace displays offsets and sizes in 512-byte sectors. You can test this with dd and record the disk accesses with btrace. For instance:

    dd if=/dev/something of=/dev/null bs=512 count=1 skip=512

For your second question: a lot happens in the block layer - I/O requests are buffered, merged, and scheduled - so don't be surprised if the kernel makes disk accesses with a different block size than the one specified in your application.

Categories : Linux

Block in a block not called in ad hoc archive, but runs normally with Xcode
Your local variable is __weak. ARC is allowed to nuke this reference the minute requestWithURL: ... returns (the return decrements the retain count, and __weak doesn't increment it), so your request variable is going to be released and set to nil before you can call startAsynchronous on it. Why did you make that local variable __weak? There is no unbreakable retain cycle involved in your code when you remove it: I believe the completion block is nil'ed out (and released/destroyed) after it is invoked. If that isn't the case and you do have a true leak here, then you'll have to create another, non-weak reference to the ASIFormDataRequest so that it doesn't get blown away during your method. The reason the AdHoc version doesn't work is that AdHoc builds use the Release configuration, where ARC optimizes more aggressively, while running from Xcode uses the Debug configuration, which tends to keep such references alive longer.

Categories : Iphone


