jQuery - Dividing tables into varying lengths of sections
To achieve this functionality you will have to identify the rows which need to be grouped together and add a class, plus a plus/minus image to indicate the collapsible section. In the click handler of the plus/minus image, use jQuery's toggle() function to collapse or expand the grouped rows.
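A minimal sketch of that approach; the group class, data attribute, and markup below are invented for illustration:

    // Each section header row holds a plus/minus image; the data rows share a class
    $(document).ready(function () {
        $('img.plusminus').on('click', function () {
            var group = $(this).closest('tr').data('group'); // e.g. "section-1"
            $('tr.' + group).toggle();                       // collapse or expand the section
            $(this).toggleClass('collapsed');                // swap the plus/minus styling
        });
    });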

Categories : Javascript

android Dividing circle into N equal parts and know the coordinates of each dividing point
You need to convert between polar and Cartesian coordinates. The angle you need is the angle between the (imaginary) vertical line that splits the circle in half and the line that connects the center with the circle's boundary. With this formula you can calculate the X and Y offsets from the center. In your example image the first angle is 0, and the second one is 360/n. Each subsequent angle is i*(360/n), where i is the index of the current line you need to draw. Applying this gives you the X and Y offsets in clockwise order (and you can just add them to the X and Y coordinates of the center to find the coordinates of each point). EDIT: some kind of pseudo-code:

    //x0, y0 - center's coordinates
    for(i = 1 to n) {
        angle = i * (360/n);
        point.x = x0 + r * cos(angle);
        point.y = y0 + r * sin(angle);
    }
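A small Java version of that pseudo-code; note that Math.cos/Math.sin expect radians, so the degree-based angle must be converted first (method and variable names are illustrative):

    // Returns n equally spaced points on a circle of radius r centered at (x0, y0)
    static double[][] circlePoints(double x0, double y0, double r, int n) {
        double[][] points = new double[n][2];
        for (int i = 0; i < n; i++) {
            double angle = Math.toRadians(i * (360.0 / n)); // degrees -> radians
            points[i][0] = x0 + r * Math.cos(angle);        // X coordinate
            points[i][1] = y0 + r * Math.sin(angle);        // Y coordinate
        }
        return points;
    }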

Categories : Android

Data size increases in hbase
It depends. According to this blog, the record size is calculated as follows:

    Fixed part needed by the KeyValue format = Key Length + Value Length + Row Length + CF Length + Timestamp + Key Type = (4 + 4 + 2 + 1 + 8 + 1) = 20 bytes
    Variable part needed by the KeyValue format = Row + Column Family + Column Qualifier + Value
    Total bytes required = Fixed part + Variable part

So for the above example, let's calculate the record size:

    First column  = 20 + (4 + 4 + 10 + 3) = 41 bytes
    Second column = 20 + (4 + 4 + 9 + 3)  = 40 bytes
    Third column  = 20 + (4 + 4 + 8 + 6)  = 42 bytes
    Total size for row1 in the above example = 123 bytes

To store 1 billion such records, the space required = 123 * 1 billion = ~123 GB. I would presume your calculations are grossly incorrect.

Categories : Hadoop

dividing data and fitting in R
I'm not sure why you want to (1) break the data into two regions, (2) eliminate records where there was no rainfall, and (3) fit the model you describe. You may want to consult a statistician on these matters. To answer your question, though, I came up with an example for a second model and show the fits from both models on the same plot.

    x <- seq(dat.1951)
    sel <- dat.1951 >= 100
    model1 <- lm(dat.1951[sel] ~ poly(x[sel], 2))
    model2 <- lm(log(dat.1951[!sel]) ~ x[!sel])
    plot(dat.1951, cex=1.5)
    lines(x[sel], fitted(model1), col="blue", lwd=3)
    lines(x[!sel], exp(fitted(model2)), col="navy", lwd=3)

Just for grins, I added a third model that fits all of the data with a generalized additive model, using the function gam() from the package mgcv:

    library(mgcv)
    model3 <- gam(dat.1951 ~ s(x))
    lines(x, fitted(model3), col="red", lwd=3)

Categories : R

Dividing the One Column Data into Four Columns
    $quarter = floor(mysql_num_rows($fetchstate) / 4);
    $count = 0;
    while ($row = mysql_fetch_array($fetchstate)) {
        if ($count < $quarter) {
            // echo table content 1
        }
        if ($count >= $quarter && $count < $quarter * 2) {
            // echo table content 2
        }
        if ($count >= $quarter * 2 && $count < $quarter * 3) {
            // echo table content 3
        }
        if ($count >= $quarter * 3 && $count < $quarter * 4) {
            // echo table content 4
        }
        $count++;
    }

Categories : PHP

Does specifying the schema name for tables affect performance?
The second one is faster, but barely. SET is extremely cheap. Generally, a schema-qualified table name has the potential to be slightly faster, since the query to the system catalog can be more specific. But you won't be able to measure that. It's just irrelevant, performance-wise. Setting the search_path does have implications for security and convenience though. It is generally a good idea. For instance, consider the advice in the manual on "Writing SECURITY DEFINER Functions Safely". Are you aware that there are many ways to set the search_path?
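For illustration (schema and role names here are made up):

    -- Schema-qualified: the catalog lookup is fully specified
    SELECT * FROM myschema.orders;

    -- Session-level search_path: unqualified names resolve through myschema first
    SET search_path TO myschema, public;
    SELECT * FROM orders;

    -- One of the other ways to set it: persistently, per role
    ALTER ROLE report_user SET search_path TO myschema, public;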

Categories : Postgresql

Sql performance more columns or more tables for reporting
It's not clear which workload you want to improve. Normalisation is likely to improve performance for collecting and maintaining data, but denormalisation tends to improve reporting. So the trick is to have two databases, one for each requirement, and populate the reporting database by mining the operational one. It all depends on the load on the database and where you want to take the hit if you don't want the cost of setting up a reporting database: reporting will slow down operational transactions and vice versa. If you want the best performance you can get for both requirements, then two databases is the way to go; otherwise you compromise.

Categories : Sql Server

bulk delete from two related tables performance
The objectName column should be indexed for better performance. Then try the following queries:

    SELECT ext_id FROM ai_tr_tbl_extids WHERE objectName = '$object' AND sfID = '$s';
    DELETE FROM ai_tr_tbl_extids WHERE ext_id = '$idToBeDeleted';
    DELETE FROM ai_$object WHERE ext_id = '$idToBeDeleted';

But if you can collect the full list of ext_id values to be deleted, you can remove the loop and just delete with IN:

    DELETE FROM ai_tr_tbl_extids WHERE ext_id IN ($idToBeDeletedCommaSeparatedValue);
    DELETE FROM ai_$object WHERE ext_id IN ($idToBeDeletedCommaSeparatedValue);

Categories : PHP

Javascript performance suffers when hiding/displaying nested tables
It looks to me like you are getting the HTML content of the div that follows the button, and then inserting it after the nearest tr node. The DOM has to give you the HTML as a string, and then parse the string that you give back to it. If you just want to show and hide rows, why not just call .hide() or .show() on them? That way there is no DOM manipulation, which is actually quite expensive because the browser has to recalculate everything in case your new additions alter anything else. CSS changes are usually faster. If you are getting the HTML from elsewhere, you might want to look at that as the bottleneck. Try using something like console.log((new Date()).getTime()) to profile different points in your code and see where the problem is. If you can recreate the delay in your fiddle, that will make it much easier to track down.
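As a sketch, assuming the rows to toggle sit directly after the row containing the button (the class names are made up):

    // Show/hide the nested rows instead of serializing and re-parsing their HTML
    $('.expand-button').on('click', function () {
        $(this).closest('tr').nextUntil('tr.parent-row').toggle();
    });

    // Cheap timing probes around the suspect code
    var t0 = (new Date()).getTime();
    // ... code under test ...
    console.log('elapsed ms:', (new Date()).getTime() - t0);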

Categories : Javascript

Database Design: Many tables VS One "long" table - Performance-Wise?
Databases are designed to store and manage data. Thousands of rows is considered a "small" table. You should store all the information about attributes in one table, assuming the attributes are common across all items. The reasons for using multiple databases would involve security or backup requirements. If you do get really big tables (say tens of millions of rows) and performance is an issue, then you can start learning about (vertical) partitions.

Categories : Mysql

FULL TEXT INDEX - Huge Performance Decrease on Multiple Tables
Your question comments suggest you improved performance to a satisfactory level by rewriting an unrelated part of the query (not shown in the question). This is fair enough if it works, but it doesn't explain why the two separate queries and the combined query differ so significantly when the other, unrelated parts of the query are kept constant. It's difficult to say confidently without seeing a query plan and statistics results; however, I can think of two possibilities based solely on reasoning about how the SQL queries are written. One or both of the ID columns (from FooBar and FooBalls) may be non-unique in the row set after these two tables have been inner joined. Doing two joins to CONTAINSTABLE result sets, rather than one, may thus be "breeding" rather more records than a single join would.
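To make that shape concrete, a hypothetical version of the combined query (the FooBar/FooBalls names come from the answer above; columns and the search term are invented):

    -- If the inner join of FooBar and FooBalls yields repeated ID values,
    -- each extra CONTAINSTABLE join multiplies the matching rows
    SELECT fb.ID
    FROM FooBar fb
    INNER JOIN FooBalls fbl ON fbl.ID = fb.ID
    INNER JOIN CONTAINSTABLE(FooBar, SomeColumn, 'term') ft1 ON ft1.[KEY] = fb.ID
    INNER JOIN CONTAINSTABLE(FooBalls, OtherColumn, 'term') ft2 ON ft2.[KEY] = fbl.ID;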

Categories : Sql Server

Simple Inner Join on 2 tables resulting in wrong estimated rows and slow performance
Out-of-date statistics might have something to do with it. Try running exec sp_updatestats and see if the plan changes. Also make sure you have a maintenance job running on the SQL Server that updates the statistics periodically.

Categories : SQL

One view using two tables with data that might not be in both tables, Oracle SQL
To get all records from one table regardless of whether they are in the table you are joining to, you need an outer join. To do this you just add (+) to the side of the join that may have records missing. However, looking at the example above, you have records in Table A that are not in B and vice versa. For this you will need a full outer join. I don't know of a concise way to express this in Oracle, but one way that should work is a union of two outer joins, one in each direction; a sketch follows below.
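The quoted query was cut off in the original; here is a hedged reconstruction of the union-of-outer-joins pattern it describes, reusing its column names and assuming FORM_JOURNAL_ID is the join key:

    -- All rows from TableA, with TableB values where present
    select a.FORM_JOURNAL_ID, a.COMPANY_ID,
           a.RETAIL_PRICE as RETAIL, a.WHOLE_SALE_PRICE as WHOLESALE,
           nvl(b.INDIVIDUALS,0) as INDIVIDUALS, nvl(b.ENTITIES,0) as ENTITIES,
           nvl(b.COMPLEX,0) as COMPLEX
    from TableA a, TableB b
    where a.FORM_JOURNAL_ID = b.FORM_JOURNAL_ID (+)
    union
    -- Plus rows from TableB that have no match in TableA
    select b.FORM_JOURNAL_ID, a.COMPANY_ID,
           a.RETAIL_PRICE, a.WHOLE_SALE_PRICE,
           nvl(b.INDIVIDUALS,0), nvl(b.ENTITIES,0), nvl(b.COMPLEX,0)
    from TableA a, TableB b
    where a.FORM_JOURNAL_ID (+) = b.FORM_JOURNAL_ID;

(Oracle has also supported the ANSI FULL OUTER JOIN syntax since 9i, which expresses this more concisely.)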

Categories : SQL

Applying the Data Table Join Operator to a List of Data Tables
Per ?lapply: For historical reasons, the calls created by lapply are unevaluated, and code has been written (e.g. bquote) that relies on this. This means that the recorded call is always of the form FUN(X[[0L]], ...), with 0L replaced by the current integer index. This is not normally a problem, but it can be if FUN uses sys.call or match.call or if it is a primitive function that makes use of the call. This means that it is often safer to call primitive functions with a wrapper, so that e.g. lapply(ll, function(x) is.numeric(x)) is required in R 2.7.1 to ensure that method dispatch for is.numeric occurs correctly. You need to wrap [ in a function, thus: lapply(l, function(d) `[`(d, data))

Categories : R

Having data stored across tables representing individual data types - Why is it wrong?
Why shouldn't you separate out the fields from your tables based on their data types? Well, there are two reasons, one philosophical and one practical. Philosophically, you're breaking normalization. A properly normalized database will have different tables for different THINGS, with each table having all fields necessary and unique for that specific "thing." If the only way to find the make, model, color, mileage, manufacture date, and purchase date of a given car in my CarCollectionDatabase is to join meaningless keys on three tables demarcated by data type, then my database has almost zero discoverability and no real cohesion. If you designed a database like that, you'd find writing queries and debugging statements obnoxiously tiresome - which is kind of the reason you'd want a normalized design in the first place.

Categories : Mysql

How to split a large data table to some small data tables?
You shouldn't need to split tables like that just out of concern for joins. If the related tables use primary keys for FKs, or indexes with the correct columns, data retrieval will be quite efficient. See this post and its comments and answers for an example of how query analysis should be used to make sure your queries are using the right indexes, refs, etc. Also, if you're going to split tables to move things like location (a varchar field?) out of the table, are the fields you are left with fixed-width? If not, the data-retrieval speed benefit you get from moving out one variable-width column is lost if there are other variable-width columns still left in the table, like item_name. (On a side note, see this answer about the type of optimizations people consider to improve performance.)

Categories : Database

LOAD DATA INFILE for inserting data into two tables concurrently?
Row filtering is not possible with LOAD DATA INFILE. You can, however, create a small shell script using grep or awk that parses your file and only inserts the records that match your criteria:

    cat file.txt | awk '/ .+/' | mysql -u your_username -pyour_password -e "LOAD DATA LOCAL INFILE '/dev/stdin' IGNORE INTO TABLE table_name COLUMNS TERMINATED BY '\t' LINES TERMINATED BY '\n' (col1, col2);" database_name

Another, better approach would be to load all the data into a temporary table and then use that table to load the original table, filtering for the required data.

Categories : Mysql

Joining data that exists in two tables and aggregating non joined data
How about this: get the matched items and do a union with a second query. The second query would select name, 'others', sum(count), and would group on name where id is not in the ids from the first table (you can check that with a subquery). I can type it out if you want, but you look like you know what you are doing and just need a general idea of how to do it; a rough sketch follows below.
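A rough sketch of that shape, with invented table and column names:

    -- Matched items, joined and aggregated
    SELECT a.name, b.category, SUM(a.cnt) AS total
    FROM items a
    JOIN categories b ON b.id = a.category_id
    GROUP BY a.name, b.category
    UNION ALL
    -- Everything unmatched, lumped together as 'others'
    SELECT a.name, 'others', SUM(a.cnt)
    FROM items a
    WHERE a.category_id NOT IN (SELECT id FROM categories)
    GROUP BY a.name;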

Categories : Asp Net

Sending TCP packets via Netty, Netty is dividing the data into different packets?
If you'd like to abstract fragmentation away from your handler, you need to frame your messages. This can be done easily by making use of the LengthFieldPrepender when sending and the LengthFieldBasedFrameDecoder when receiving. This ensures that your message decoder only sees a byte buffer representing a full message. Note that your frame handlers should be first in your ChannelPipeline, except when you are also using an SSL handler and/or compression handler, in which case SSL should be first, followed by compression, followed by the frame handlers. Being first in the pipeline means that a handler will be first to handle an inbound event and last to handle an outbound event.
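A minimal pipeline sketch (Netty 4 API; the 4-byte length field, 1 MB frame limit, and the business handler are illustrative):

    import io.netty.channel.ChannelInitializer;
    import io.netty.channel.socket.SocketChannel;
    import io.netty.handler.codec.LengthFieldBasedFrameDecoder;
    import io.netty.handler.codec.LengthFieldPrepender;

    public class FramedInitializer extends ChannelInitializer<SocketChannel> {
        @Override
        protected void initChannel(SocketChannel ch) {
            // Inbound: strip the 4-byte length prefix and emit one buffer per full message
            ch.pipeline().addLast(new LengthFieldBasedFrameDecoder(1048576, 0, 4, 0, 4));
            // Outbound: prepend a 4-byte length field to every message
            ch.pipeline().addLast(new LengthFieldPrepender(4));
            // Your own decoder/handler now only ever sees complete frames
            // ch.pipeline().addLast(new MyMessageHandler()); // hypothetical handler
        }
    }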

Categories : Java

Sorting data from two different sorted cursors data of different tables into One
You could combine both queries into a single query. First, ensure that both results have the same number of columns; if not, you might need to add some dummy column(s) to one query. Then combine the two with UNION ALL:

    SELECT alpha, beeta, gamma, Remark, id, number FROM X
    UNION ALL
    SELECT Type, Date, gamma, Obs, NULL, number FROM Y

Then pick one column of the entire result that you want to order by. (The column names of the result come from the first query.) In this case, the Start column is not part of the result, so we have to add it (and the Date column is duplicated in the second query, but this is necessary for its values to end up in the result column that is used for sorting):

    SELECT alpha, beeta, gamma, Remark, id, number, Start AS SortThis FROM X
    UNION ALL
    SELECT Type, Date, gamma, Obs, NULL, number, Date FROM Y
    ORDER BY SortThis

Categories : Android

JScrollPane increases its size
BoxLayout respects minimum, maximum and preferred sizes, so override those methods on the JPanel. Alternatively, use a JSplitPane; there you can hide the divider.
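A sketch of the first suggestion, assuming the panel should simply never grow beyond its preferred size:

    // BoxLayout honors maximum sizes, so capping it stops the panel from stretching
    JPanel panel = new JPanel() {
        @Override
        public java.awt.Dimension getMaximumSize() {
            return getPreferredSize();
        }
    };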

Categories : Java

mod_rewrite increases load time
From the mod_rewrite documentation: "Using a high trace log level for mod_rewrite will slow down your Apache HTTP Server dramatically!" You might be using such a level. Judging from the documentation, that seems like the only way mod_rewrite could slow down Apache.

Categories : PHP

iOS memory usage increases until crash
viewDidUnload is called if your controller's view is evicted from memory due to a low-memory warning. Dropping the view and loading it again later was how UIViewControllers dealt with low-memory warnings under iOS 2–5. Under iOS 6 the view controller never drops its view, so you never get viewDidUnload. In your case that means you add another UIImageView every time the first block of code runs (I assume it's not in viewDidLoad?). The old one won't be deallocated because it has a superview; releasing your own reference to it makes no difference. Furthermore, the initWithContentsOfFile: would be better expressed as [UIImage imageNamed:], as the latter explicitly caches the images whereas the former reloads from disk every time, creating a new copy of the pixel contents.

Categories : IOS

Android Webview memory increases
You should use caching to avoid such memory leaks! Loaded images can be cached on disk and loaded incrementally. A generic caching library with example usage can help here.

Categories : Android

Time complexity of a loop increases by log?
Try thinking about it this way: the outer loop definitely runs O(n) times, so if you can determine the amount of work done in the inner loop, you can multiply it by O(n) to get an upper bound on the total work done. So let's look at the inner loop. Notice that k will take on the values j, j + log n, j + 2 log n, ..., j + i log n as the loop iterates, where i is the number of iterations that the loop has run for. So think about the maximum number of times that loop can execute. It stops as soon as k ≥ n, which means it stops as soon as j + i log n ≥ n, so after (n - j) / log n iterations the loop will stop running. To get a conservative upper bound, notice that in the worst case we have j = 1, so the inner loop runs at most about n / log n times. Multiplying by the O(n) outer iterations gives a total bound of O(n² / log n).

Categories : Algorithm

IE 7 offset increasing as child increases
Check your HTML; it looks like a tag is not being closed correctly. Most modern browsers will correct this as they render the page, but IE7 can be pretty picky. Additionally, make sure your page is in standards mode (by setting a valid DTD), or IE7 will not recognise the > direct-child selector.

Categories : HTML

Score increases randomly onActionMove
You can use a SurfaceScrollDetector, which detects when the user slides a finger across the screen. It has the following events associated with it: onScrollStarted, onScroll, and onScrollFinished. I guess you could increase the score by 50 when you reach onScrollFinished. I am currently using the SurfaceScrollDetector in a project, but I haven't used it in the way you are asking about, so I can't say for sure that it will work as expected. Here is one of the examples that uses it (in addition to the PinchZoomDetector): https://github.com/nicolasgramlich/AndEngineExamples/blob/GLES2/src/org/andengine/examples/PinchZoomExample.java

Categories : Java

How to create product id when your db auto_increment increases by 10?
You can change the auto-increment step back to 1 using:

    SET @@auto_increment_increment=1;

I would not do it, however - it may screw up your Azure/ClearDB installation if you are using replication. In general, you should not be fixated on how exactly primary keys are allocated. Even with auto_increment_increment=1 there is no 100% guarantee that keys will be allocated without any gaps - a typical reason for gaps is aborted transactions.

Categories : Misc

RestKit Performance and Core Data
Hard to say much without more context, but what are your RestKit logging options? In my experience, mapping logging in RestKit is very verbose, slowing things down by a factor of 10. Disable all RestKit logging and see if anything improves. If there is still a problem, use Instruments to profile your application - you should easily see which code paths are taking most of the time (parsing, RestKit mapping, Core Data, etc.).

Categories : IOS

Mysql performance sending data
This is your query:

    SELECT COUNT(*)
    FROM table
    WHERE Position LIKE 'BAG%' AND Userid = ".USERID.";

This is a pretty simple query. The only way you can speed it up is with an index. I would suggest a composite index on table(Userid, Position). MySQL should optimize the LIKE because the initial prefix is constant. But if you want to be really, really sure the index gets used, you can do:

    SELECT COUNT(*)
    FROM table
    WHERE Position IN ('BAG0', 'BAG1', 'BAG2', 'BAG3', 'BAG4', 'BAG5', 'BAG6')
      AND Userid = ".USERID.";
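Creating and sanity-checking the suggested index might look like this (the literal table name and user id are placeholders):

    ALTER TABLE `table` ADD INDEX idx_userid_position (Userid, Position);

    -- Confirm the index is chosen for the prefix LIKE
    EXPLAIN SELECT COUNT(*) FROM `table`
    WHERE Userid = 123 AND Position LIKE 'BAG%';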

Categories : Mysql

Updating core data performance
I'd say option 1 would be most efficient, as there is rarely a case where downloading everything (especially in a large database with large amounts of data) is more efficient than only downloading the parts that you need.

Categories : IOS

NSFetchedResultsController loads the whole data and bad performance
I believe you need to set the fetch limit on your fetch request, not just the batch size. Try calling [fetchRequest setFetchLimit:20]; and see if that helps. This should cause the first 20 records to be fetched. Then as the user scrolls through your data, make additional fetch requests to get more data.

Categories : IOS

reading data from socket - no performance
The normal way to get decent performance from a socket (for int read() and write(int)) is to use a buffered stream / reader / writer. That reduces the number of system calls and makes byte-at-a-time or character-at-a-time I/O much faster. I don't expect that using NIO will make a significant difference. If the problem is not buffering, then (IMO) it is probably a network bandwidth and/or latency issue. If that is the case, there is little you can do in Java that would make much difference.
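A sketch of the buffered wrapping (the host, port, and the int exchange are invented for the example):

    import java.io.BufferedInputStream;
    import java.io.BufferedOutputStream;
    import java.io.DataInputStream;
    import java.io.DataOutputStream;
    import java.io.IOException;
    import java.net.Socket;

    public class BufferedSocketExample {
        public static void main(String[] args) throws IOException {
            try (Socket socket = new Socket("example.com", 12345)) { // hypothetical endpoint
                // Buffer both directions so each read/write is not a separate system call
                DataInputStream in = new DataInputStream(
                        new BufferedInputStream(socket.getInputStream()));
                DataOutputStream out = new DataOutputStream(
                        new BufferedOutputStream(socket.getOutputStream()));
                out.writeInt(42);
                out.flush(); // flush when a complete message has been written
                int reply = in.readInt();
                System.out.println("reply = " + reply);
            }
        }
    }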

Categories : Java

Is core data performance better with less attributes?
Yes, it's advised to keep your entities small. When you have a list view, for example, you generally don't need all the information about the objects; but when you tap one and go to the detail view, you will want to fetch more detailed information. Then you can fetch it from the other entities. Of course, you should create relationships between these entities.

Categories : IOS

Dividing by zero warning
The division by zero is caused by your use of the % operator. This operator returns the remainder of a division and so will fail if the second operand is zero. So, if numbers[r] % min leads to a zero divide error, then min is zero. You'll need to treat that case specially. And similarly for the other use of the % operator.
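A guarded sketch (the names follow the snippet in the question; the surrounding logic is assumed):

    // Only use % when the divisor is non-zero
    if (min != 0 && numbers[r] % min == 0)
    {
        // numbers[r] is an exact multiple of min
    }
    else if (min == 0)
    {
        // decide explicitly what should happen when min is zero
    }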

Categories : C#

VBA script when dividing by zero
Try using an Else, something like:

    Range("H" & Row).Activate
    If Range("F" & Row) = 0 Then
        ActiveCell.Formula = 0
    Else
        ActiveCell.Formula = Range("G" & Row) / Range("F" & Row)
    End If

Categories : Excel

Dividing by 100 returns 0
The problem is that what you're putting in your float variable is the result of operations on integers: it's an integer. In other words, a.precio * a.iva / 100 is first evaluated as an integer (that's where you lose precision), and then this integer is assigned to a.total as a float. You therefore need to specify that the operation a.precio * a.iva / 100 has to be done on floats by casting the integer values. Change

    a.total = a.precio * a.iva / 100;

to

    a.total = ((float) a.precio) * a.iva / 100;

Categories : Java

Why is it implying I'm dividing by zero?
If b*c is large, j will eventually equal a multiple of 65536 (= 2^16), and j*j will then be 0 (remember, Java ints are always 32 bits). Performing % when the divisor is 0 results in your ArithmeticException. Note that any multiple of 65536 will cause this error; 2147418112 (= 2^31 - 2^16) is just the largest such multiple that will fit in an int. Example code (you can run it yourself at http://ideone.com/iiKloY):

    public class Main {
        public static void main(String[] args) {
            // show that all multiples of 65536 yield 0 when squared
            for (int j = Integer.MIN_VALUE; j <= Integer.MAX_VALUE - 65536; j += 65536) {
                if ((j * j) != 0) {
                    System.out.println(j + "^2 != 0");
                }
            }
            System.out.println("done");
        }
    }

Categories : Java

Number of Likes increases when page is refreshed
Since .live() is deprecated, use .on() instead:

    $("#likeThis").on("click", function () {

and put these handlers inside document.ready:

    $(document).ready(function () {
        $("#likeThis").on("click", function () {
            // ...
        });
    });

Categories : Jquery

Increases movement speed Image after scale
Adding an answer for anybody who has the same problem in the future. As the image changes size, the pinch movement speed increases or decreases along with it. To compensate for this and keep the image moving at a constant speed, the amount the image translates by needs to be adjusted relative to the scaling. I suggested dividing the translation by the scale factor scaleImage.ScaleX:

    translateImage.X += (e.DeltaManipulation.Translation.X / scaleImage.ScaleX);

The OP found the solution by tweaking this and multiplying by scaleImage.ScaleX instead, which gave the desired result:

    translateImage.X += (e.DeltaManipulation.Translation.X * scaleImage.ScaleX);

Categories : C#


