Determining what indexes should be created in DB2 to optimize the performance of a particular query
If you are using DB2 for LUW, you can feed your actual query to the DB2 Design Advisor, which may suggest indexes and other approaches to improving the expected query performance.
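A hedged example of invoking the advisor from the command line, assuming a database named MYDB (the database name and the query text are placeholders, not from the original answer):

db2advis -d MYDB -s "SELECT ... FROM ... WHERE ..."

The advisor prints recommendations, such as CREATE INDEX statements, that you can review and apply.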

Categories : SQL

SQL Server 2008: Optimize Query Performance with known empty result
Depending on the parameter, it selects records from the respective table, so there is no need for UNION ALL. Use an IF/ELSE construct:

DECLARE @Input VARCHAR(20) = 'Cars'

IF (@Input = 'Cars')
BEGIN
    SELECT cars.id, cars.name FROM cars
END
ELSE IF (@Input = 'Planes')
BEGIN
    SELECT planes.id, planes.name FROM planes
END

This would also help the SQL optimizer use the parameter sniffing technique and pick the best execution plan, which would improve your query performance. More on parameter sniffing: "Parameter Sniffing" and "Parameter Sniffing (or Spoofing) in SQL Server".

Categories : SQL

Performance mongodb query with starts-with
From the index documentation: every query, including update operations, uses one and only one index. In other words, MongoDB doesn't support index intersection, so creating a huge number of indexes is pointless unless there are queries that use that index and that index only. Also, make sure you're calling the correct Count() method here: if you call the LINQ-to-objects extension (IEnumerable's Count() rather than MongoCursor's Count), it will actually have to fetch and hydrate all objects. It is probably easier to throw these into a single multi-key index like this:

{ "References" : [ { id: new ObjectId("..."), "_t" : "OemReference", ... }, { id: new ObjectId("..."), "_t" : "CrossReferences", ...} ], ... }

where References.id is indexed.

Categories : C#

MongoDB $IN query performance issue
In some MongoDB versions $in does not use an index, and MongoDB is limited to using one index per query. Your query comprises flight_id, arrival, duration, capacity and rooms. Try setting up an index on arrival, duration, capacity and rooms; that gives you a good index on the selective criteria instead of putting all the fields in. The flight_id is then just a final selection, after the hard work has already been done by the selective criteria. It also does not help that the indexBounds from explain() were not pasted; they could give clues as to whether the index composition is optimal.

Categories : Performance

MongoDB C# query performance much worse than mongotop reports
MongoTop is not reporting the total query time; it reports the amount of time MongoDB spends holding particular locks. That query returned in 0ms according to the explain, which is extremely quick. What you are describing sounds like network latency. What is the latency when you ping the node? Is it possible that the network is flaky? What version of MongoDB are you using? Consider upgrading both MongoDB and the C# driver to the latest stable versions.

Categories : Mongodb

MongoDB, performance of query by regular expression on indexed fields
Indexes generally can't help when you use regular expressions. An index only helps when you know what you are searching for, and a regular expression typically has to read the whole content of each string to find a match; the exception is a case-sensitive prefix expression anchored with ^, which can use an index. When you don't need the full power of regular expressions and only search for complete words, as in this example, you can create a text index and use text search. To find out whether a specific query uses an index, use the explain method of the cursor.

Categories : Mongodb

MongoDB complex query for matching elements in array and performance
If you need to query on the difference between price and salesChange.price, I'd recommend storing the computed value with each item in the array; just set it before each insert and update. You've got a couple of options for getting computed values like that in the query, but it's going to be slow any way you do it. If this is something you'll need to do often, just set it before you insert and create an index on it. Folks schooled in more traditional databases would cringe at the idea of denormalizing your data willy-nilly, but Mongo isn't a relational database; it's based on documents. In general, you can feel free to denormalize data within a document, because the entire document is generally updated atomically, so there's not the same risk of ending up with invalid data.

Categories : Mongodb

How to Optimize Performance with appstats
On the plus side, appstats has narrowed it down to three lines for you:

loginInfo.setLogoutUrl(userService.createLogoutURL(requestUri));
loginInfo.setIsGoogleLogin(true);
ch.zhaw.ams.server.auth.user.User userAms = DatabaseHelper.findByParama(user.getEmail(), "emailAddress", "String", ch.zhaw.ams.server.auth.user.User.class);

You can probably fiddle around and figure out which line causes the delay. My best guess is that this is the first time you're loading the class ch.zhaw.ams.server.auth.user.User, which might in turn cause other classes to load; the long delay you see might just be class loading time. You might be able to add a startup handler to load some of these classes, so that the delay only appears rarely.

Categories : Java

I have a query in mongodb and the reference key's are in hashmap list , I need to process a simple query using java in mongodb
When you post code, it helps if you indent it so it is more readable. As I mentioned to you on another forum, you need to go back and review the Java collection classes, since you have multiple usage errors in the above code. Here are a few things you need to do to clean it up: 1) You don't need the itkey iterator. Instead, use for (String key : likey) and get rid of all the itkey.next() calls; your current code only processes every second element of the List, while the others are merely printed out. 2) Your HashMap maps a key to a Boolean. Is that what you want? You said you want to count the number of non-zero values for the key, so the line Boolean listone = table.distinct(keys).contains(nullvalue); is almost certainly in error.

Categories : Java

Trying to optimize I/O for MongoDB
Here are my thoughts: 1) Properly explain your performance concerns. So far I can't really figure out what the issue is, or whether you have one at all. As far as I can tell you're doing around a GB of updates and writing about a GB of data to the disk... not much of a shock. Oh, and do some damn testing: you say "Not sure if this is a lot worse in performance than doing $set or not"; why don't you know? What do your tests say? 2) Check whether there is any hardware mismatch. Is your disk just slow? Is your working set bigger than RAM? 3) Ask on mongo-user and other MongoDB-specific communities, simply because you might get a better answer there than the lack of answers here. 4) Consider trying TokuMX.

Categories : C#

How to optimize the performance of a MySQL view
You're using InnoDB; glad you mentioned that. Here's a checklist: Optimize your tables, which reorganizes them on disk for quicker input/output:

OPTIMIZE TABLE b;
OPTIMIZE TABLE a;

Check out more about InnoDB optimization in the MySQL documentation. What I don't understand is why you have created a key for each and every single column; that is a little redundant, as you could tie multiple columns together as a single key (a composite index), especially if you are only going to compare a single column to another column from another table. Ideally, you would also create an index in the same order as the GROUP BY. Logically, the engine would then only compare a single key index when using the GROUP BY, rather than sorting through each key and then placing them next to the correct values.

Categories : Mysql

Is there any way to optimize performance of reading stream?
Here is the solution:

Public Shared Function CallWebService(ByVal an As String, ByVal xmlcommand As String) As String
    Dim _url = "http://testapi.interface-xml.com/appservices/ws/FrontendService"
    Dim soapEnvelopeXml As XmlDocument = CreateSoapEnvelope(xmlcommand)
    Dim webRequest As HttpWebRequest = CreateWebRequest(_url, an)
    webRequest.Proxy = System.Net.WebRequest.DefaultWebProxy
    webRequest.Headers.Add("Accept-Encoding", "gzip, deflate")
    InsertSoapEnvelopeIntoWebRequest(soapEnvelopeXml, webRequest)
    Dim asyncResult As IAsyncResult = webRequest.BeginGetResponse(Nothing, Nothing)
    asyncResult.AsyncWaitHandle.WaitOne()
    Dim soapResult As String
    Using webResponse As WebResponse = webRequest.EndGetResponse(asyncResult)
        ' The original answer was cut off here; reading the response stream is a reconstruction
        Using rd As New StreamReader(webResponse.GetResponseStream())
            soapResult = rd.ReadToEnd()
        End Using
    End Using
    Return soapResult
End Function

Categories : Http

MySQL optimize GROUP BY index performance?
You need to increase the amount of stored data, not decrease it: you have two function calls in the GROUP BY clause, and if you calculate YEAR(FROM_UNIXTIME(timestamp/1000)) and DAYOFYEAR(FROM_UNIXTIME(timestamp/1000)) beforehand in a trigger and store the values in additional fields, your SELECT statement will be much faster. Other than that, you may simply truncate the timestamp to the nearest day by integer-dividing it by 1000*3600*24 = 86400000 and group by only one expression, because there is no point in grouping by year and day of year separately when you can group by date alone:

SELECT MAX(timestamp) AS timestamp, SUM(value) AS value, COUNT(timestamp) AS count
FROM data
WHERE channel_id = 4
AND timestamp >= 1356994800000
AND timestamp <= 1375009341000
GROUP BY timestamp DIV 86400000;
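A minimal sketch of the trigger approach, assuming a hypothetical derived column event_date added to the data table (column and trigger names are assumptions):

ALTER TABLE data ADD COLUMN event_date DATE;

CREATE TRIGGER data_before_insert BEFORE INSERT ON data
FOR EACH ROW SET NEW.event_date = DATE(FROM_UNIXTIME(NEW.timestamp/1000));

-- the SELECT can then GROUP BY event_date, ideally with an index on (channel_id, event_date)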

Categories : Mysql

How to Optimize Performance for a Full Table Update
EF is not the solution for raw performance... It's the "easy way" to build a data access layer (DAL), but it comes with a fair bit of overhead. I'd highly recommend using Dapper or raw ADO.NET to do a bulk update; it would be a lot faster. http://www.ormbattle.net/ Now, to answer your question: to do a batch update in EF, you'll need to download extensions and third-party plugins that enable such abilities. See: Batch update/delete EF5

Categories : C#

How can I optimize these two code snippits that does DB operations, for better performance?
Unfortunately EF doesn't support set-based operations natively, although it is something they would like to add at some stage (feel free to add your two cents here: http://entityframework.codeplex.com/discussions/376901). There is, however, an extension which adds support for set-based deletes, but I'm not too sure about the performance of this method; it would be worth benchmarking before and after trying it: https://github.com/loresoft/EntityFramework.Extended The other key thing to note is that you could drastically improve performance by calling SaveChanges only once; EF will then push it all to the DB at once and only have to wait for one roundtrip to the database server, e.g. (the condition and loop body below are hypothetical; the original snippet was cut off):

foreach (var newsitemz in GetAllItems) {
    if (newsitemz.Date < cutoff) { context.NewsItems.Add(newsitemz); }
}
context.SaveChanges(); // called once, outside the loop: one roundtrip for all changes

Categories : C#

Removing subquery of select statement used multiple times to optimize the performance
You have this as a subquery in the select clause, hence it must return one value, and hence you can replace it with:

( SELECT TOP 1 CAST(decCostFactor AS decimal(6,4))
  FROM tblsubteam WITH (NOLOCK)
  WHERE intstore = st.intStore
  AND strsubteam = SUBSTRING(tblDetail.strMiscText,12,4)
)

SQL Server is pretty good about optimization. However, the DISTINCT might pose a problem; removing it and just taking an arbitrary value might improve the optimization. The way to check, though, is by looking at the execution plan. Performance would probably also be improved by having an index on tblsubteam(intstore, strsubteam, decCostFactor).
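A hedged sketch of that index (the index name is hypothetical):

CREATE INDEX IX_tblsubteam_store_subteam ON tblsubteam (intstore, strsubteam, decCostFactor);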

Categories : SQL

Translating MongoDB query to a MongoDB java driver query
I haven't checked the syntax, and I don't know if your query works or not, but here's my try:

// unwind
DBObject unwind = new BasicDBObject("$unwind", "$scorearray");
// now the $group operation
DBObject groupFields = new BasicDBObject("player", "$player");
groupFields.put("venue", "$scorearray.venue");
groupFields.put("score", "$scorearray.score");
DBObject group = new BasicDBObject("$group", new BasicDBObject("_id", groupFields));
// sort
DBObject sort = new BasicDBObject("$sort", new BasicDBObject("_id.score", 1));
// second group
DBObject secondGroupFields = new BasicDBObject("_id", "$_id.player");
secondGroupFields.put("maxScore", new BasicDBObject("$last", "$_id.score"));
DBObject secondGroup = new BasicDBObject("$group", secondGroupFields);
// run the pipeline (the original answer was cut off; this final call is a reconstruction)
AggregationOutput output = collection.aggregate(unwind, group, sort, secondGroup);

Categories : Mongodb

Need to optimize the query by avoiding the union. Please Help the other option the query is as below
Why do you want to avoid it? If it's too slow, you might check whether it's possible to change to UNION ALL instead, which avoids the costly DISTINCT. Additionally, the NOT IN might be more efficient when you rewrite it as a NOT EXISTS.
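A minimal sketch of the NOT IN to NOT EXISTS rewrite, using hypothetical tables a and b (the original query was not posted):

SELECT a.id FROM a WHERE a.id NOT IN (SELECT b.a_id FROM b);

-- rewritten:
SELECT a.id FROM a WHERE NOT EXISTS (SELECT 1 FROM b WHERE b.a_id = a.id);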

Categories : SQL

query based on matching elements in DBRef list for mongodb using spring-data-mongodb
Querying for one element of an array is exactly like querying for field equality; you can read the MongoDB documentation on this. So your query will be:

new Query(Criteria.where("users.$id").is(new ObjectId(userId)))

Categories : Mongodb

Performance in MongoDB and GridFS
For your first question: both the _id and filename fields are indexed by default. While _id is unique, filename is not, so if you have files with the same filename, getting a file by filename will be relatively slower than getting it by _id. For your second question: you can always attach metadata to any GridFS file you insert, which means you don't need anything more than GridFS. Use GridFS to insert the data, but just before inserting, assign your metadata to the file. That way you can query files using the metadata. If the metadata you want is fixed for all documents, you can have those fields indexed too, and queryable of course.

Categories : Mongodb

How to optimize SQL query more?
Instead of using IN, try it with a JOIN and add a few more indexes:

SELECT DISTINCT u.name, u.username, p.content, p.post_time
FROM post p
INNER JOIN user u ON u.user_id = p.user_id
INNER JOIN (
    SELECT friend_id id FROM friend WHERE you = 1
    UNION ALL
    SELECT follows id FROM follow WHERE user_id = 1
) s ON p.user_id = s.id
ORDER BY post_time DESC
LIMIT 10
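A hedged sketch of the supporting indexes (the index names are hypothetical; adjust to the real schema):

CREATE INDEX idx_friend_you ON friend (you, friend_id);
CREATE INDEX idx_follow_user ON follow (user_id, follows);
CREATE INDEX idx_post_user_time ON post (user_id, post_time);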

Categories : Mysql

how to optimize my query?
The query doesn't look too bad IMO. However, the normalization of the data looks a bit strange: why would you have a country (name) field on the user_data table, just to join into country on name to look up the code? The more logical thing would be to reference country by country code (or another indexed key constraint). This would also save a join to country if you just need the code, as in your example query. If user_data is a high-volume table, you will want to keep the data in it to a minimum to reduce IO when reading (density). Also, as an aside, joining using JOIN instead of in the WHERE clause will improve the readability of your code, IMO:

SELECT cd.code, COUNT(ud.country) AS count
FROM topic_data AS td
INNER JOIN user_data AS ud ON td.code = ud.topic_code
INNER JOIN country AS cd ON cd.name = ud.country -- the original snippet was cut off; this join and the grouping are reconstructions
GROUP BY cd.code

Categories : PHP

Can we optimize this SQL Query?
Right, I've tried to do what I can: mostly some reordering, some swapping to INNER JOINs, and movement of WHERE conditions into the joins. I also renamed some aliases, because you had used alias names matching the table names, which negates their purpose. This will return rows where there are no matching transactions, so you might want to change that join to INNER as well; that rather depends on the purpose / intended output, but this should be a good starting point. It allows MySQL to reduce the number of rows it looks at. Further improvement can be had with suitable indexes, but advising on those is hard without knowing the data types / the variance in the data etc.

SELECT DISTINCT cnt.last_name, cnt.bus_ph_num, cnt.email_addr, u.login, cnt.fst_name, cnt.fk_id
FROM contact cnt -- Changed

Categories : Mysql

How can I optimize the below query?
Something along these lines should work:

SELECT event_date::date AS date,
       SUM(CASE WHEN event_name = 'event_A' THEN 1 ELSE 0 END) AS count_eventA,
       SUM(CASE WHEN event_name = 'event_B' THEN 1 ELSE 0 END) AS count_eventB
FROM tblname
GROUP BY event_date::date

If you have more events, you only need to add more SUM(CASE ...) lines. The DB engine only runs through the table once to give you all the totals, independently of the number of events you want to count, whereas with a high rowcount you would observe significant delay with the original query.

Categories : SQL

How do I optimize this query further?
Based on the information you provided, the right approach to solving this will be to select the entire query, right-click on that selection, and choose "Analyze Query in Database Engine Tuning Advisor". It will give you some ideas on how to optimize it. There is no way we can optimize this without significant knowledge of the schema, execution plan, and the current run time. We would further need knowledge of the hardware it's running on. Generally speaking the tuning advisor will give you insight into what changes you could make to the underlying schema to produce a faster result. It will also tell you how much of an improvement you should see (e.g. 98%).

Categories : SQL

Optimize SQL Sub-Query for Max Value
I think you can replace it with a window function on the di table (done here as a subquery). Note that the WHERE clause is not in the subquery, because it affects the rows used for the max(); this is probably the problem in your second query:

SELECT dlp.ParamID AS ParamID, dp.ParamName AS ParamName, dlp.LocID AS LocationID, ml.LocName AS LocationName,
       di.Entered_On AS DateEntered, dlp.FreqDays AS Frequency
FROM data_LocParams dlp
INNER JOIN (SELECT di.*, MAX(Entered_On) OVER (PARTITION BY LocId, ParamId) AS maxeo
            FROM data_Input di
           ) di ON dlp.LocID = di.LocID
INNER JOIN data_Parameters dp ON dp.ParamID = di.ParamID
INNER JOIN map_Locations ml ON ml.LocId = dlp.LocId
WHERE dlp.FreqDays IS NOT NULL
AND di.Entered_On = di.maxeo

Categories : SQL

SQL Optimize query to get max value
Try using ROW_NUMBER() instead of a correlated subquery, ordering descending so the newest row gets row number 1:

SELECT *
FROM (
    SELECT dlp.ParamID, dp.ParamName, di.Entered_On,
           ROW_NUMBER() OVER (PARTITION BY dlp.LocId, dlp.ParamID ORDER BY di.Entered_On DESC) AS RowNum
    FROM data_LocP dlp
    JOIN data_In di ON dlp.LocID = di.LocID
    JOIN data_Parms dp ON dp.ParamID = di.ParamID
    JOIN map_Loc ml ON ml.LocId = dlp.LocId
) t
WHERE RowNum = 1

If multiple records can match the same Entered_On value, use RANK() instead of ROW_NUMBER().

Categories : SQL

MongoDB read performance dependency
You have to do it yourself:

1. db.coll1.find({}).explain()
2. db.coll2.find({}).explain()

Afterwards you can measure the difference in performance between the two queries.

Categories : Performance

MongoDB - Geospatial intersection performance
After tearing my hair out trying to figure out the best way to accomplish better performance in MongoDB, I decided to try our existing standard DB, SQL Server. I guess my low expectations for SQL Server's geospatial functionality were unfounded. The query ran in < 12 seconds without an index, and didn't scale up exponentially like MongoDB for larger drawn polygons. After adding an index, most queries are in the 1 second range. I guess I'll be sticking with what I know. I really had high hopes for MongoDB, but geospatial performance is severely lacking (or severely under-documented on how to improve it).

Categories : Mongodb

MongoDB performance - having multiple databases
"Our application needs 5 collections in a db. When we add clients to our application we would like to maintain a separate db for each customer. For example, if we have 500 customers, we would have 500 dbs and 2500 collections (each db has 5 collections). This way we can separate each customer's data." That's a great idea. On top of the logical separation this will provide, you will also be able to use database-level security in MongoDB to help prevent inadvertent access to other customers' data. "My concern is, will it lead to any performance problems?" No, and in fact it will help: with the database-level lock, extremely heavy lock contention for one customer (if that's possible in your scenario) would not affect performance for another customer (it still might if they are competing for the same underlying resources, such as RAM and disk I/O).

Categories : Database

MongoDB - Java Driver performance
The default write concern changed from NORMAL to SAFE in the Java driver as of v2.10.0 (see here). This means that in older driver versions, insert operations by default return as soon as the message is written to the socket; in newer driver versions, operations by default must be acknowledged by the server before returning, which is much slower.

Categories : Java

MongoDB Bulk Insert Performance
One approach is to store one document per user, with a ratings field that is a hash of item ids to ratings, for example:

class UserRating
  include MongoMapper::Document
  key :ratings
  key :user_id
end

UserRating.create(:user_id => 1, :ratings => {"1" => 4, "2" => 3})

You have to use string keys for the hash. This approach doesn't make it easy to retrieve all the ratings for a given item; if you do that a lot, it might be easier to store a document per item instead. It's also probably not very efficient if you only ever need a small proportion of a user's ratings at a time. Obviously you can combine this with other approaches to increasing write throughput, such as batching your inserts or sharding your database.

Categories : Ruby On Rails

How to optimize the performance for Azure Service Bus REST Service
1) If your clients are using ChannelFactory, then cache the channels. 2) Is your endpoint in a data center near you and your customers? If not, I would highly suggest changing that. 3) According to the docs, if you are using NetTcpRelayBinding you can also set the TcpConnectionMode to Hybrid, which will establish "direct connections between two parties that sit behind opposing Firewalls and NAT devices".

Categories : Azure

MySQL: How to optimize this query?
Update your column data types to INT or one of its variants, since the values you are checking against are all integer IDs (assumption). Create indexes on the following columns (in all tables, if possible): prod.status, prod.supplier_id, sup.active_package_id. Use an IN clause instead of concatenating OR segments. Here is the updated WHERE clause:

WHERE prod.status IN (1, 3, 5)
AND (sup.active_package_id NOT IN (1, 5, 6) OR prod.supplier_id = 0)
AND 1989 IN (prod.supplier_id, prod.credit_supplier_id, prod.photo_supplier_id)
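A hedged sketch of those indexes (prod and sup are aliases in the query; the underlying table names here are assumptions):

CREATE INDEX idx_products_status ON products (status);
CREATE INDEX idx_products_supplier ON products (supplier_id);
CREATE INDEX idx_suppliers_package ON suppliers (active_package_id);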

Categories : Mysql

optimize query sql for messenger
First note on optimization: it's a lot more involved than asking "how can I optimize this?". Secondly, some ideas: Don't use SELECT * if it's not necessary; just bring back the fields needed. Building on that, create a covering index: if the fields a, b, c are the only fields used anywhere in the query, you can build an index on (a, b, c) on the table. This allows the database to read from the index pages rather than having to seek, load, and read the data pages.
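A minimal sketch of a covering index, using a hypothetical messages table and columns:

CREATE INDEX idx_messages_cover ON messages (sender_id, recipient_id, sent_at);

-- a query that touches only these columns can be answered from the index alone:
SELECT recipient_id, sent_at FROM messages WHERE sender_id = 42;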

Categories : Mysql

sql optimize a query using the join
I tried this solution:

SELECT *
FROM productHistory x
INNER JOIN (
    SELECT MAX(id_H) AS maxId
    FROM productHistory
    GROUP BY id_product
) y ON x.id_H = y.maxId
AND x.TSINSERT >= :start
AND x.TSINSERT <= :end

Categories : SQL

Optimize expanding SQL query
For SQL Server, here's a method of finding the nearest neighbor using an expanding radius in a single query. It can easily be modified to find k neighbors: http://blogs.msdn.com/b/isaac/archive/2008/10/23/nearest-neighbors.aspx

Categories : Mysql

how to optimize sql query with many conditions
Try this:

SELECT A.ticketid,
       B.station_name AS stationid,
       B.station_name AS destination,
       CONVERT(VARCHAR(5), GETDATE(), 108) AS Time,
       CONVERT(VARCHAR(12), GETDATE(), 103) AS Date,
       B.station_name AS ticket_type,
       B.station_name AS journey_type,
       amount,
       issuedby
FROM ticketcollections A, station_tbl B
WHERE A.ticketidparent = '" + Request("ParentId") + "'
AND A.stationid = B.stationid
AND B.type IN (0, 4, 3)

Categories : SQL

How would you optimize this mysql query?
I hope this is better:

SELECT DISTINCT A.X1, A.X2
FROM TABLEAA AS A
INNER JOIN TABLEBB AS B ON (A.Y = B.Y)
INNER JOIN TABLECC AS C ON (A.Y = C.Y)
WHERE B.Z1 = 'SELECTED1'
AND B.W NOT LIKE '%SLECTED3%'
AND C.Z2 = 'SELECTED2'
AND A.W NOT LIKE '%SLECTED3%'

Categories : Mysql

Optimize the query(mysql, php)
Try removing the NOT IN clause. Here it is removed and replaced with a LEFT JOIN:

SELECT slno
FROM invoice_master
LEFT JOIN invoice_refresh ON (Inv_slno = slno)
WHERE Inv_slno IS NULL

Categories : PHP


