Dividing a circle into N equal parts and finding the coordinates of each dividing point
You need to convert between polar and Cartesian coordinates. The angle you need is the angle between the (imaginary) vertical line that splits the circle in half and the line that connects the center to the circle's boundary. With this formula you can calculate the X and Y offsets from the center. In your example image the first angle is 0, and the second one is 360/n. Each subsequent angle is i*(360/n), where i is the index of the current line you need to draw. Applying this gives you the X and Y offsets in clockwise order; just add them to the X and Y coordinates of the center to find the coordinates of each point. EDIT: some kind of pseudo-code (note that most math libraries expect cos() and sin() to take radians, so the angle in degrees is converted first):

// x0, y0 - center's coordinates; r - the circle's radius
for (i = 1; i <= n; i++) {
    angle = i * (360.0 / n);       // angle of the i-th point, in degrees
    rad = angle * PI / 180.0;      // convert degrees to radians
    point.x = x0 + r * cos(rad);
    point.y = y0 + r * sin(rad);
}

Categories : Android

SQL Server - Dividing one column by another in a SELECT query
Is this what you want?

select (thisweek / lastweek) - 1
from (select (case when TIMESTAMP >= getdate() - 7 and TIMESTAMP < getdate()
                   then ORDER_ID end) as thisweek,
             (case when TIMESTAMP >= getdate() - 14 and TIMESTAMP < getdate() - 7
                   then ORDER_ID end) as lastweek
      from . . .
     ) t

It defines the variables in a subquery and then uses them. Note that there is no performance penalty for this; SQL Server does not "instantiate" the subquery.
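Incidentally, the outer select here divides row by row, so a single week-over-week number would need aggregates on top of the same subquery. A sketch under that assumption (the orders table name and the NULLIF guard against an empty last week are mine, not from the question):

select count(thisweek) * 1.0 / nullif(count(lastweek), 0) - 1 as week_over_week
from (select (case when TIMESTAMP >= getdate() - 7 and TIMESTAMP < getdate()
                   then ORDER_ID end) as thisweek,
             (case when TIMESTAMP >= getdate() - 14 and TIMESTAMP < getdate() - 7
                   then ORDER_ID end) as lastweek
      from orders   -- assumed table name
     ) t

COUNT ignores NULLs, so each column effectively counts only the orders that fall in its week.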

Categories : SQL

MySQL simple insert query slow on "query end" step
For anyone else that runs into this, I think I've tracked down the issue. I had changed innodb_flush_method from ALL_O_DIRECT on the regular box to O_DIRECT on the VM because MySQL was giving this warning:

kernel: EXT4-fs (vdb1): Unaligned AIO/DIO on inode 17565314 by mysqld; performance will be poor.

Reverting back to ALL_O_DIRECT makes that warning show up again, but my performance is 100x better on the "query end" step, so I'm going to go with it for now. I hope this helps anyone else who runs into the same issue.
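For reference, a minimal sketch of the my.cnf setting being discussed (ALL_O_DIRECT is a Percona Server/XtraDB value rather than stock MySQL, and innodb_flush_method is not dynamic, so a server restart is required):

[mysqld]
innodb_flush_method = ALL_O_DIRECT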

Categories : Mysql

Postgresql slow query. Change query or postgresql.conf?
I don't know the Postgres dialect of SQL, but it may be worth experimenting with outer joins. In many other DBMSs they can offer better performance than subselects. Something along the lines of:

SELECT us.id, us.email
FROM users_users us
LEFT JOIN users_blacklist uo ON uo.email = us.email
LEFT JOIN games_user2game ug ON us.id = ug.user_id
WHERE uo.email IS NULL
  AND ug.id IS NULL

I think that is doing the same thing as your original query, but you'd have to test to make sure.

Categories : SQL

Make Query More Efficient (Really slow query taking forever!)
The subquery in the second SELECT column will execute for every m_users row that passes the WHERE condition:

SELECT COUNT(*) AS users,
       (SELECT COUNT(*) FROM enswitch_mobile_users) AS total   <-- here's the problem
FROM enswitch_mobile_users AS musers
WHERE musers.phone_type = whatever

If I'm reading this correctly, you need a one-row result with the following columns:

users - number of enswitch_mobile_users rows with the specified phone_type
total - count of all enswitch_mobile_users rows

You can get the same result with this query:

SELECT COUNT(CASE WHEN musers.phone_type = whatever THEN 1 END) AS users,
       COUNT(*) AS total
FROM enswitch_mobile_users AS musers

The CASE checks the phone type, and if it matches the one you're interested in, it yields a 1, which is counted. If it doesn't match, the CASE yields NULL, which COUNT ignores.

Categories : PHP

Oracle query runs very slow when used sub-query. Can this be rectified?
Since your subquery is eligible for Oracle's scalar subquery caching feature, I suspect that the reason for slow performance could be a missing index on any (or both) of:

allen.main_table.program_status
allen.main_table.alert_logged_time
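A sketch of what those indexes could look like (the index names are illustrative; whether two single-column indexes or one composite index serves the query better depends on how the predicates are combined):

CREATE INDEX main_table_status_idx ON allen.main_table (program_status);
CREATE INDEX main_table_logtime_idx ON allen.main_table (alert_logged_time);

-- or a single composite index covering both columns:
CREATE INDEX main_table_status_time_idx ON allen.main_table (program_status, alert_logged_time);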

Categories : SQL

Avoid RAND() query from slow query log
I am not aware of any way to disable the logging of individual queries. log_queries_not_using_indexes is global, and changing it on the fly would prevent the logging of any concurrent queries (although I understand this is quite unlikely if the query is that quick). Since you actually want to lower the load induced by this logging, you may want to play with the log_throttle_queries_not_using_indexes option (added in v5.6.5 only) or the min_examined_row_limit server option instead. The latter exists at session level: it can be increased to an absurdly large value just before your query, with no impact on concurrent connections. Surprisingly, no special privilege is required.
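A sketch of the session-level approach (the threshold value and the RAND() query itself are placeholders):

SET SESSION min_examined_row_limit = 1000000000;  -- queries examining fewer rows skip the slow log
SELECT ... ORDER BY RAND() LIMIT 1;               -- the offending query
SET SESSION min_examined_row_limit = DEFAULT;     -- restore the server default

Only the current session is affected, so concurrent connections keep their normal slow-query logging.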

Categories : PHP

Combining two SQL queries into one works too slowly
You are missing the GROUP BY clause. I would also try replacing the count(*) with exists:

SELECT Sum(CASE WHEN exists(SELECT 1 FROM `retry` AS `rty`
                            WHERE `rty`.`phone` = `msgs`.`phone`
                              AND `rty`.`sushi_subscription_id` = `msgs`.`sushi_sub_id`)
           THEN 1 ELSE 0 END) AS `In_Retry`,
       Sum(CASE WHEN exists(SELECT 1 FROM `retry` AS `rty`
                            WHERE `rty`.`phone` = `msgs`.`phone`
                              AND `rty`.`sushi_subscription_id` = `msgs`.`sushi_sub_id`)
           THEN 0 ELSE 1 END) AS `Not_In_Retry`,
       phone,
       sushi_sub_id
FROM (SELECT phone, sushi_sub_id
      FROM `yesnotmp`.`msg`
      LEFT JOIN `yesnotmp`.`msg_t` ON (`msg`.`id` = `msg_t`.`msg_id`)
      WHERE `msg_t`.`send_time` BETWEEN '2013-06-02' AND '2013-06-03'
        AND `msg_t`.`status` = 'Failure.Provider.Connection') AS `msgs`
GROUP BY phone, sushi_sub_id

Categories : PHP

Very, very slow query
Your joins do not depend on each other; that's why the temp tables are exploding. A simple fix is to make:

SELECT a.id, a.name,
       (select count(*) from vessels b where a.id = b.organization_id group by b.organization_id),
       (select count(*) from licenses b where a.id = b.organization_id group by b.organization_id),
       (select count(*) from fleets b where a.id = b.organization_id group by b.organization_id),
       (select count(*) from users b where a.id = b.organization_id group by b.organization_id)
FROM organizations a

It will be far faster if you do it like this:

SELECT a.id, a.name, v.total, w.total, x.total, y.total
FROM organizations a
LEFT JOIN (select b.organization_id, count(*) total from vessels b group by b.organization_id) v
       ON v.organization_id = a.id
LEFT JOIN (select b.organization_id, count(*) total from licenses b group by b.organization_id) w
       ON w.organization_id = a.id
LEFT JOIN (select b.organization_id, count(*) total from fleets b group by b.organization_id) x
       ON x.organization_id = a.id
LEFT JOIN (select b.organization_id, count(*) total from users b group by b.organization_id) y
       ON y.organization_id = a.id

Categories : Mysql

SQL query is too slow
You might try splitting the search up to remove the OR, which is notorious for poor performance:

SELECT two.id, two.username, one.firstname, one.middlename, one.lastname
FROM (
    SELECT id, firstname, middlename, lastname FROM table_one WHERE firstname LIKE '%|%'
    UNION
    SELECT id, firstname, middlename, lastname FROM table_one WHERE middlename LIKE '%|%'
    UNION
    SELECT id, firstname, middlename, lastname FROM table_one WHERE lastname LIKE '%|%'
) one
INNER JOIN table_two two ON two.id = one.id

With an index on each of the name columns, there's a chance each will be used in the separate unioned queries. The use of UNION conveniently discards duplicates, so the case where multiple name columns contain a pipe character won't cause duplicate output.

Categories : SQL

Very slow query
One thing that I could imagine is that MySQL does not use the index (or uses it ineffectively) because one of the fields has arithmetic on it. That is speculation. You can write the query using variables. Not my favorite approach, but it might work in this case:

Create TEMPORARY table temp_table as
SELECT pcur.RECORD_ID,
       pcur.Price,
       (pcur.Price - @prevPrice) as 'Price_Difference',
       CASE when @prevPrice between 0 and 0.25 then ((pcur.Price - @prevPrice)/0.001)
            when @prevPrice between 0.2501 and 0.5 then ((pcur.Price - @prevPrice)/0.005)
            when @prevPrice between 0.5001 and 10 then ((pcur.Price - @prevPrice)/0.01)
            when @prevPrice between 10.0001 and 20 then ((pcur.Price - @prevPrice)/0.02)
            when @prevPrice between 20.0001 and 100 then ((pcur.

Categories : Mysql

MySQL query becomes slow
You can try an alternative:

SELECT a.dak_dept,
       b.dept_name,
       SUM(CASE WHEN a.dak_stat = 'N' THEN 1 ELSE 0 END) new,
       SUM(CASE WHEN a.dak_stat = 'O' THEN 1 ELSE 0 END) open,
       SUM(CASE WHEN a.dak_stat = 'C' THEN 1 ELSE 0 END) closed
FROM dak_dept_mast a
JOIN tapal_dept_mast b ON a.dak_dept = b.dept_code
GROUP BY a.dak_dept

Categories : Mysql

Terrible and slow query
First of all, I think you have to explain to yourself why you are using INT and BIGINT. Do you really expect that much data? Try using SMALLINT or MEDIUMINT; they need less memory and are much faster. If you use MEDIUMINT and SMALLINT as UNSIGNED, they can still hold a pretty large value; take a look at: http://dev.mysql.com/doc/refman/5.0/en/integer-types.html

Second, you need to combine some fields into one key:

ALTER TABLE `employees_log` ADD INDEX (`uid`, `id`);
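A sketch of the corresponding type change (assuming uid never exceeds MEDIUMINT UNSIGNED's maximum of 16,777,215; adjust NULL-ability to your schema):

ALTER TABLE `employees_log` MODIFY `uid` MEDIUMINT UNSIGNED NOT NULL;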

Categories : Mysql

Slow query with group by
You can do the aggregation without the join:

select injury.age_group_id, injury.body_part_id, count(*)
from injury
group by injury.age_group_id, injury.body_part_id

This will only return results where there are injuries. If this performs well, then do the join afterwards:

select t1.id, t2.id, coalesce(i.cnt, 0)
from age_group as t1
cross join body_part as t2
left outer join (select injury.age_group_id, injury.body_part_id, count(*) as cnt
                 from injury
                 group by injury.age_group_id, injury.body_part_id
                ) i
     on t1.id = i.age_group_id and t2.id = i.body_part_id

Categories : Mysql

MySQL Query slow
You can get a more accurate count by phrasing the query like this:

SELECT page_id, COUNT(DISTINCT user_id_hash)
FROM user_likes ul
GROUP BY page_id
LIMIT 0, 30;

Speeding it up in MySQL is tricky because of the GROUP BY. You might try the following. Create an index on user_likes(page_id, user_id_hash). Then try this:

select p.page_id,
       (select count(distinct user_id_hash)
        from user_likes ul
        where ul.page_id = p.page_id
       )
from (select distinct page_id
      from user_likes ul
     ) p

The idea behind this query is to avoid group by -- a poorly implemented operator in MySQL. The inner query should use the index to get the list of unique page_ids. The subquery in the select should use the same index for the count. With the index-based operations, the counts should be much faster.
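A sketch of the index suggested above (the index name is illustrative):

CREATE INDEX idx_user_likes_page_user ON user_likes (page_id, user_id_hash);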

Categories : Mysql

PostgreSQL query is slow when using NOT IN
get_customer_trans() is not a table - probably a stored procedure, so the query is not really trivial. You'd need to look at what this stored procedure really does to understand why it might be slow. However, regardless of the stored procedure's behavior, adding the following index should help a lot:

CREATE INDEX do_not_email_tbl_idx1 ON do_not_email_tbl (do_not_email_address);

This index lets the NOT IN query return its answer quickly. However, NOT IN is known to have issues in older PostgreSQL versions - so make sure that you are running PostgreSQL 9.1 or later.

UPDATE. Try changing your query to:

SELECT t.*
FROM get_customer_trans() AS t
WHERE NOT EXISTS (
    SELECT 1
    FROM do_not_email_tbl
    WHERE do_not_email_address = t.user_email
    LIMIT 1
)

This query does not use NOT IN, so it sidesteps those issues entirely.

Categories : SQL

Slow SQL query optimization
It's rather difficult to answer this question well since there isn't enough background detail in the question. However, looking at the query, some of these points might help you out:

- Try to perform fewer joins.
- Avoid LIKE queries with both a wildcard prefix and suffix (i.e. '%thing%') - they result in a full table scan, something that will cripple performance if there are a large number of rows.
- Try to avoid sub-selects. They're not always a problem, but they might be indicative of approaching the query in the wrong way.
- Use the EXPLAIN syntax to understand where you might be missing important indexes.

Good luck!

Categories : Mysql

SQL query by time too slow
Yes, the ORDER BY is processed before the LIMIT, but that's the correct behavior - paging wouldn't work otherwise. Some ideas for optimization:

- Don't SELECT * if it's not absolutely necessary. I feel like it's probably not, because if you're paging results it's almost certainly not every field in both tables the user is looking at.
- Create a covering index on (AutoIncID, TimeStamp) to keep the query from reading the data page. Add Name to that index if it comes from Objects.
- Create a covering index on (rowid, Name) if Name comes from FTSObjects.
- If the returned fields can be limited, consider adding those fields to the covering indexes if it's only a couple of fields. You don't want the index to get too big, because then it will affect write times.
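A sketch of the first index suggestion (the Objects table name and the index name are assumptions based on the question):

CREATE INDEX IX_Objects_AutoIncID_TimeStamp ON Objects (AutoIncID, TimeStamp);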

Categories : C#

MySQL query is slow
SELECT DISTINCT l.*
FROM links l
INNER JOIN keyworks_links kl ON kl.link_id = l.id
INNER JOIN keywords k ON kl.keyword_id = k.id
WHERE k.keyword IN ("facebook", "google", "apple");

EDIT: Added DISTINCT to remove duplicates.

Categories : Mysql

slow GROUP_CONCAT in sub query
You might be better off using this:

SELECT a.actiondate,
       GROUP_CONCAT(IFNULL(al.accessory, '')) AS acc,
       SUM(IF(al.actionid IS NULL, 0, 1)) AS acccount
FROM `action` a
LEFT JOIN accessorieslink al ON al.actionid = a.actionid
GROUP BY a.actionid
ORDER BY NULL

Categories : Mysql

MySQL query become really slow when using ORDER BY
I am afraid that if you must allow your users to sort on any field (and have the sort use an index) then you need an index for each possible sort; it is impossible to do otherwise, by definition. A sort on a given column can only make use of an index on that column. I see few alternatives here: either reduce the number of rows to be sorted (25k lines is a rather large result set - do your users really need that many lines?) or do not allow sorts on every column. Notice that a query will usually not be able to use more than one index per table. As advised by others, a compound index is better for the query you mentioned, although I would advise the opposite column order ((guid, date)): the query first needs to select each guid and then, for each of them, sort the corresponding rows.
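A sketch of the suggested compound index (the table name is a placeholder):

ALTER TABLE your_table ADD INDEX idx_guid_date (`guid`, `date`);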

Categories : Mysql

MySQL Query Running Very Slow - But only some of them
There's nothing inherently wrong with the example query you've given. There are no indexes it could use, but unless it's a really wide table, I would expect it to be very fast. In a shared server environment, you have no idea how many other databases are being hosted or how heavily they are being used. If you can migrate to a local database you'll be able to see whether performance improves. If you have 300 active users of your installation, there could be a lot of lock contention, especially on MyISAM, which uses full-table locks. For example, it looks like the Joomla users table has a lastVisitDate which, if it is being updated on every page load by a logged-in user, could definitely cause locking problems.

Categories : Mysql

Extremely slow MongoDB query
I believe that there is a problem with the index. I have two similar collections: about 200,000 records and about 500,000. A similar query takes about 40 seconds with an index and a very long time without one. Run the query:

db.poss_opt.find({poss_idx: "some_id"}).explain()

If the above query could not use an index, you will see:

{
  "cursor": "BasicCursor",
  "nscannedObjects": 532543,
  "nscanned": 532543,
  "millis": 712,
  "indexBounds": {}
}

Otherwise:

{
  "cursor": "BtreeCursor poss_idx_1",
  "nscannedObjects": 0,
  "nscanned": 0,
  "millis": 0,
  "indexBounds": {"poss_idx": [["some_id", "some_id"]]}
}

To view index information for the collection, use db.poss_opt.stats() and db.poss_opt.getIndexes(). If the problem is with the index, create one on the poss_idx field.

Categories : Performance

Django Query extremely slow
You should first try to isolate the problem. Run manage.py shell and execute the following:

scope = Scope.objects.get(pk='Esoterik I')
print scope

Django queries are not executed until they absolutely have to be. That is to say, if you're experiencing slowness after the first line, the problem is somewhere in the creation of the query, which would suggest problems with the object manager. The next step would be to try executing raw SQL through Django, to make sure the problem is really with the manager and not a bug in Django in general. If you're experiencing slowness with the second line, the problem is either with the actual execution of the query or with the display/printing of the data. You can force-execute the query without printing it (check the documentation) to find out which one it is.

Categories : Django

MySQL query slow despite using the index?
Based on the comments (Thx, people!), I wrote the following JOIN which I will use to UPDATE:

UPDATE `data_der`
SET `v1305` = '-95'
WHERE `def` IN (
    SELECT * FROM (
        SELECT t1.`def`
        FROM `data_der` AS t1
        JOIN `data_der` AS t2
          ON (t1.`cntry`, t1.`var`, t1.`type`, t1.`track`, t1.`year`)
           = (t2.`cntry`, t2.`var`, t2.`type`, t2.`track`, t2.`year`)
        WHERE t1.`type` = 'str'
          && t1.`svar` != '99'
          && t1.`v1305` = '-90'
          && t2.`v1305` != '-90'
    ) AS sub
)

And the EXPLAIN of the subquery:

1  SIMPLE  t1  ref  idb,sdef,cntry,var,type,track,year,ddef  type  14  const  65338  Using where

Categories : Mysql

MySQL Slow Group By Query
In MySQL, GROUP BY implies ORDER BY. If no particular order is needed, add ORDER BY NULL. From the manual:

By default, MySQL sorts all GROUP BY col1, col2, ... queries as if you specified ORDER BY col1, col2, ... in the query as well. If you include an ORDER BY clause explicitly that contains the same column list, MySQL optimizes it away without any speed penalty, although the sorting still occurs. If a query includes GROUP BY but you want to avoid the overhead of sorting the result, you can suppress sorting by specifying ORDER BY NULL. For example:

SELECT a, COUNT(b) FROM test_table GROUP BY a ORDER BY NULL;

Categories : Mysql

MySQL query still slow even when using indexes
Reverse the order of the tables and use a join condition which includes the extra condition:

select distinct customer_id
from systems_address
join customers_address
  on systems_address.address_id = customers_address.address_id
 and customer_id != -1
where system_id = 2

This should perform very well, using indexes and minimizing the number of rows accessed. Make sure you have the following indexes defined:

create index idx1 on systems_address(system_id);
create index idx2 on customers_address(address_id);

Just to be sure, also update the statistics:

analyze table systems_address, customers_address;

Categories : Mysql

Really slow cypher query (neo4j)
Problem solved. The performance issue was caused by not using a global or thread-local ExecutionEngine. Do not create an ExecutionEngine per request; always keep it thread-local (or global), otherwise you will kill the cache.

Categories : Neo4j

MySQL Slow Query - Even with all the indices
You have the required indexes to perform the joins efficiently. However, it looks like MySQL is joining the tables in a less efficient manner. The EXPLAIN output shows that it is doing a full index scan of the flows table and then joining the comments table. It would probably be more efficient to read the comments table first, before joining - that is, in the order you specified in your query, so that the comment set is restricted by the predicates you supplied (probably what you intended). Running OPTIMIZE TABLE or ANALYZE TABLE can improve the decisions the query optimiser makes, particularly on tables that have had extensive changes. If the query optimiser still gets it wrong, you can force tables to be read in the order you specify in the query by beginning your statement with SELECT STRAIGHT_JOIN.
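A sketch of that last suggestion (the column names beyond the comments and flows tables are illustrative, not from the original question):

SELECT STRAIGHT_JOIN c.*, f.*
FROM comments c
JOIN flows f ON f.id = c.flow_id;

STRAIGHT_JOIN makes MySQL join the tables in the order they are written, so comments is read before flows.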

Categories : Mysql

Slow wordpress query time
Try an EXPLAIN as follows. Paste the following into phpMyAdmin and execute it:

EXPLAIN SELECT wp_posts.*
FROM wp_posts
INNER JOIN wp_term_relationships ON (wp_posts.ID = wp_term_relationships.object_id)
INNER JOIN wp_term_taxonomy ON (wp_term_relationships.term_taxonomy_id = wp_term_taxonomy.term_taxonomy_id)
INNER JOIN wp_terms ON (wp_term_taxonomy.term_id = wp_terms.term_id)
WHERE 1=1
  AND wp_posts.blog_id = '1'
  AND wp_term_taxonomy.taxonomy = 'post_tag'
  AND wp_terms.slug IN ('rate')
  AND wp_posts.post_type = 'post'
  AND (wp_posts.post_status = 'publish')
GROUP BY wp_posts.ID
ORDER BY wp_posts.post_date DESC
LIMIT 0, 20

Then copy the results and paste them into your original question. It will give details of which keys were used in which tables, etc.

Categories : Mysql

Slow Query: Data categorization
If the data is re-loaded every day, then you should just fix it when it is reloaded. Perhaps that is not possible. I would suggest the following approach, assuming that the triple url, shop, InsertionTime is unique. First, build an index on (url, shop, InsertionTime). Then use this query:

select ap.*
from AllProducts ap
where ap.InsertionTime = (select InsertionTime
                          from AllProducts ap2
                          where ap2.url = ap.url
                            and ap2.shop = ap.shop
                          order by InsertionTime
                          limit 1
                         );

MySQL does not allow subqueries in the from clause of a view. It does allow them in the select and where (and having) clauses. This should cycle through the table, using the index to find the first insertion time for each url/shop pair.
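A sketch of the suggested index (the index name is illustrative; if url is a long string column, MySQL may require a prefix length on it):

CREATE INDEX idx_allproducts_url_shop_time ON AllProducts (url, shop, InsertionTime);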

Categories : Mysql

Slow query after upgrading MySQL from 5.5 to 5.6
The slow query in v5.6 is caused by the engine being unable to, or deciding not to, merge the two relevant indexes (index_contents_on_slug_hash, index_contents_on_slug) in order to process your query. Remember that a query may only use one index per table. In order to take advantage of several indexes on the same table, it needs to pre-merge these indexes on the fly into one (in memory). This is the meaning of the index_merge and Using union(...) notices in your execution plan. This consumes time and memory, obviously. Quick fix (and probably the preferred solution anyway): add a two-column index on slug and slug_hash:

ALTER TABLE pens ADD INDEX index_contents_on_slug_and_slug_hash (slug, slug_hash);

Categories : Mysql

Slow SQL Server linq query
For LINQ to SQL, it should be something like:

Dim result = From a In dbCtx.Orders
             Join b In dbCtx.ItemsByCustomers On b.ItemId Equals a.ItemId
             Join c In dbCtx.Customers On c.CustomerId Equals b.CustomerId
             Where c.custCity = "Atlanta"
             Select a

Categories : Sql Server

Why is MySQL slow when using LIMIT in my query?
Indexes do not necessarily improve performance. To better understand what is happening, it would help if you included the EXPLAIN output for the different queries. My best guess is that you have an index on id_state, or even (id_state, id_mp), that can be used to satisfy the where clause. If so, the first query without the order by would use this index and should be pretty fast. Even without an index, this requires a sequential scan of the pages in the orders table, which can still be pretty fast. Then, when you add the index on creation_date, MySQL decides to use that index instead for the order by. This requires reading each row in the index and then fetching the corresponding data page to check the where conditions and return the columns (if there is a match). This reading is highly inefficient, because it accesses the table in essentially random order rather than sequentially.
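A sketch of how to compare the two plans (the state value is a placeholder; the orders table is the one from the question):

EXPLAIN SELECT * FROM orders WHERE id_state = 5 LIMIT 10;
EXPLAIN SELECT * FROM orders WHERE id_state = 5 ORDER BY creation_date LIMIT 10;

If the key column switches to the creation_date index in the second plan, that confirms the behavior described above.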

Categories : Mysql

Neo4j cypher query is really slow
Disclaimer: This advice is for 1.8 and 1.9. If you're using 2.0 or 2.1, these comments may no longer be valid.

Query 1: Make your WITH your RETURN, and skip that extra step.

Query 2: Don't do distinct in WITH as you are now. Go as far as you can without doing distinct. This looks like a premature optimization in the query that makes it not be lazy and forces it to store many more intermediate results to calculate the WITH results.

Query 3: Don't do -[*1..1]->; that's the same as -[]-> or -->, but it uses a slower matcher for variable-length paths when it really just needs adjacent nodes and could use a fast matcher. Make the WITH your RETURN and take out that extra pipe it needs to go through, so it can be lazier (although the ORDER BY kind of makes it hard to be lazy).

Categories : Performance

Neo4j slow cypher query
Try this, so the pattern matcher can pull your f<>f1 comparison into the pattern match:

start mag=node(1387), f=node(53)
MATCH mag-[:MAGASINS]->t-[:CONTAINS_SF]->sf1-[:IN_FAMILLY]->f
WITH distinct t, f
MATCH t-[:CONTAINS_SF]->sf2-[:IN_FAMILLY]->f1
WHERE f<>f1
RETURN sf2, count(distinct t) as count
ORDER BY count desc
LIMIT 15

What does profiling your query (profile start ...) return? Is that the first run or a subsequent one? Make sure to use parameters if you run this in production.

Categories : Neo4j

Slow query and composite index
You can do two queries and UNION them:

CREATE TABLE ff_atl AS
SELECT universe.customer_id AS ID,
       orders.order_date AS orddt,
       orders.order_sequence AS ordnum,
       taxonomy.age AS prodage,
       taxonomy.category AS prodcat,
       taxonomy.source AS prodsrc,
       orders.order_category AS channel,
       orders.quantity AS quantity,
       orders.price_after_discount AS pad,
       orders.number_of_issues_left AS nIssuesLeft,
       orders.number_of_times_renewed AS nTimesRenewed,
       orders.number_of_invoice_effort AS nInvoiceEfforts,
       CASE WHEN orders.order_status IN (1,2,3,4) THEN 1 ELSE 0 END AS cancelled,
       customer.zip AS zipcode,
       customer.create_date AS fordt,
       orders.item AS item,
       orders.subscription_id AS subid
FROM p

Categories : Mysql

MySQL query performance slow
You're right in thinking that JOINs are usually faster than WHERE IN subqueries. Try this:

SELECT T3.stop_id, T3.stop_name
FROM trips AS T1
JOIN stop_times AS T2 ON T1.trip_id = T2.trip_id AND route_id = <routeid>
JOIN stops AS T3 ON T2.stop_id = T3.stop_id
GROUP BY T3.stop_id, T3.stop_name

Categories : Mysql

Optimizing slow LINQ query
I guess this may improve it partly:

foreach (IGrouping<DateTime, DateTime> item in groups)
{
    var common = initialGroups
        .GroupBy(grp =>
        {
            var c = CalculateArg(grp.a.Arg);
            return (c == item.Key && grp.b.Arg == someId) ? 1
                 : c == item.Key ? 2
                 : 3;
        })
        .OrderBy(g => g.Key)
        .Select(g => g.Sum(c => c.Value))
        .ToList();

    var countMatchId = common[0];
    var countAll = common[0] + common[1];
}

Categories : C#

Slow MySQL Query on ~400.000 entries
What if you put the AND conditions into the joins, rather than joining first and then filtering the result set? Give it a try:

SELECT post_id
FROM ap_props
LEFT JOIN ap_moneda ON ( ap_props.rela_moneda = ap_moneda.id_moneda
                         AND `table`.rela_inmuebleoper = "2"
                         AND `table`.rela_inmuebletipo = "1" )
LEFT JOIN wp_posts ON ( ap_props.post_id = wp_posts.id
                        AND wp_posts.post_status = "publish" )
WHERE rela_barrio IN ( 6, 23085, 23086, 23087, 7, 23088, 23089, 23090, 23091,
                       23092, 26, 23115, 23116, 23117, 23118, 23119, 23120,
                       32, 43, 23123, 23124, 23125 )
  AND (( approps_precio * Ifnull(moneda_valor, 0) >= 2000
  AND approps_precio * Ifnull(

Categories : Mysql


