Does performance of query improve on increasing text length of index for text fields
It's hard to answer your question without knowing more. Firstly, you might want to run an EXPLAIN to see if your index is being used at all, and what its effect is. Secondly, yes, lengthening the index might improve performance - but only if you have a LOT of entries where the first 10 characters in the text field are identical, and the data is longer than 10 characters. You might want to look at full text searching instead - it's usually a lot faster for searching long blocks of text.
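
A quick way to judge whether lengthening the prefix would help is to measure how many distinct prefixes your data actually has at each candidate length. A rough Python sketch (the `prefix_selectivity` helper and the sample URLs are made up for illustration):

```python
# Estimate how selective a column-prefix index would be at different
# lengths. If many values share the same 10-character prefix, a longer
# prefix (or full-text search) may be worthwhile.
def prefix_selectivity(values, length):
    """Fraction of distinct prefixes at this length (1.0 = fully selective)."""
    prefixes = {v[:length] for v in values}
    return len(prefixes) / len(values)

urls = [
    "https://example.com/articles/1",
    "https://example.com/articles/2",
    "https://example.com/about",
    "https://other.org/home",
]
print(prefix_selectivity(urls, 10))  # 0.5 -- most URLs share "https://ex"
print(prefix_selectivity(urls, 30))  # 1.0 -- every value is distinct
```

A selectivity close to 1.0 at the current prefix length suggests the index is already doing its job; a low value suggests a longer prefix could help.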

Categories : Mysql

Clojure performance - why does the "ugly" "array swap trick" improve lcs performance?
As Chas mentions, loops with primitive hints are problematic. Clojure attempts to keep ints unboxed when you provide a hint, but it will (for the most part) silently fail when it can't honor the hints. Therefore he forces unboxing to happen by creating a deftype with mutable fields and setting those inside the loop. It's an ugly hack, but it gets around a few limitations in the compiler.

Categories : Arrays

RavenDB polymorphic Index
What are you trying to do? If the end result is to query all Principals, then load the entire User or AppInstance, why not just go straight for querying all Users or all AppInstances? Raven won't store base classes; it will only store the most derived type (User or AppInstance, in your case). So you'll have a collection of Users and a collection of AppInstances. If you really need to query both at once, you can use Multi Map indexes. You can also change the entity name and store both in a single Principal collection. But it's difficult to recommend a solution without knowing what you're trying to do. Explain what you're trying to do and we'll tell you the proper way to get there.

Categories : C#

Improve performance of PL/SQL
It seems that the table apps.fnd_concurrent_requests is selected several times in this procedure. Try placing the relevant data in a WITH clause at the procedure's start to avoid unnecessary I/O on this table. You can find more information about the WITH clause in the Oracle documentation.

Categories : SQL

How can I improve this performance?
The best would be to use threads to deal with the data faster. Also, it would give you the opportunity to give some feedback to the user making the lag almost unnoticeable or at least less annoying - especially if you consider sending data even if all is not processed yet. If you're looking for code optimizations, I would suggest that you use a profiler and modify the code accordingly.

Categories : IOS

Improve the performance of ElasticSearch
You don't need to search before updating. Read the es docs on updating and scroll down to the upsert section. upsert is a parameter which holds a document to use if the document does not exist on the server, otherwise the upsert is ignored and it works like a normal update request (as you are doing now). Good luck!

Categories : Elasticsearch

Improve performance of ivy resolve
Are you using <ivy:cleancache>? That would explain why your rebuilds are short but your initial builds are so long. Ivy is going to be slower than using Ant without Ivy: it has to download each and every jar you specify, plus all of their dependencies, into the Ivy cache (by default $HOME/.ivy2/cache on Unix/Mac/Linux and %USERPROFILE%\.ivy2\cache on Windows) before it can begin. You might specify 4 or 5 jars, but these could depend upon others, which might depend upon even more. There really isn't a way to speed up Ivy's downloading of jars, but once the jars are downloaded, there really isn't a reason to clean the Ivy cache each and every time you start a new project, or when you do a clean. What you can do is set up your clean target so that it avoids cleaning the Ivy cache unless you specifically request it.

Categories : Ant

SQL Server : how to improve performance
You could include columns in the index (stored on the leaf nodes, not in the actual index key); that should save some time, because after finding the right match in the leaf node, no pointer lookup of the row data is required. The bottleneck for performance is the match check that has to be done for every row, whether the check turns out positive or not. Even if your query returns no rows because no match was found, it will still need the same magnitude of time. If the result set is large and that large amount of data must then be transported over the network, that of course has a negative influence as well. EDIT: sorry for claiming result set size was irrelevant. For each matching ID the desired fields must be read out; if they are not included with the index, they must be retrieved from the normal data pages.

Categories : Performance

Improve query performance
Your naive approach is flawed for the very reason that you already know: the stuff eating your memory is the model objects dangling in memory waiting to be flushed to MySQL. The easiest way out is to not use the ORM for conversion operations at all. Use the SQLAlchemy table objects directly, as they're also much faster. Also, you can create two sessions and bind the two engines into separate sessions; then you can commit the MySQL session for each batch.

Categories : Python

How to improve performance of following loop
Please check whether the start address of the array is aligned to 16 bytes; the i7 supports higher-latency unaligned AVX loads without complaining with a "bus error". Check the cache hit and miss rates using a profiling tool; memory access seems to be the bottleneck of version 2 of the loop. You can downgrade the precision, or use a lookup table for the sin and cos calculations. Also consider how much performance improvement you can realistically achieve: since version 1 of the loop takes only 1/3 of the total running time, even optimizing that loop down to zero improves overall performance by only about 30%.

Categories : C

How can I improve the performance of this code?
If you want to search in those collections, give HashSet<T> a try; search in a hash set is amortized O(1).

HashSet<Dictionary<string, string>> tr = new HashSet<Dictionary<string, string>>();
HashSet<string> _authorizedBks = new HashSet<string>();
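
The same idea carries over to any language with hash-based sets. A minimal Python analogue of the HashSet<T> suggestion (the book IDs are made up for illustration):

```python
# Hash-based sets give amortized O(1) membership tests, versus O(n)
# for scanning a list on every lookup.
authorized_books = {"book-001", "book-002", "book-003"}

def is_authorized(book_id):
    return book_id in authorized_books  # O(1) average-case lookup

print(is_authorized("book-002"))  # True
print(is_authorized("book-999"))  # False
```

The speedup matters once you perform many lookups: each `in` test on a list is a full scan, while the set test is a single hash probe.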

Categories : C#

How can i improve performance of search script?
If you can avoid ajax requests, do so. You might need to use them though if you are having client-side problems, the browser can handle only so much data. If you need to work with ajax requests or a lot of data, set up your keydown event to start a 100-300 ms timeout that is reset and restarted on every subsequent keydown, which in turn calls your search function. That way your search will run only when the user is idle for 200ms (not much, but enough to cut down number of search calls by a lot).
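
The keydown-timeout trick above is a debounce. A hedged Python sketch of the same pattern (the `Debouncer` class and delay values are assumptions for illustration; in the browser you would use setTimeout/clearTimeout instead):

```python
import threading
import time

# Debounce: run `fn` only after `delay` seconds pass with no further
# calls, so rapid keystrokes trigger a single search.
class Debouncer:
    def __init__(self, fn, delay=0.2):
        self.fn, self.delay = fn, delay
        self._timer = None

    def call(self, *args):
        if self._timer is not None:
            self._timer.cancel()  # restart the countdown on every call
        self._timer = threading.Timer(self.delay, self.fn, args)
        self._timer.start()

results = []
d = Debouncer(results.append, delay=0.05)
for query in ["a", "ap", "app"]:
    d.call(query)          # earlier calls are cancelled
time.sleep(0.2)            # wait out the idle period
print(results)             # ['app'] -- only the final query ran
```

Only the last call within the idle window survives, which is exactly the "cut down the number of search calls" effect the answer describes.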

Categories : Javascript

sql server 2008 r2 improve performance
I am not familiar with what unit "lacs" is, but you could look at the following to try to improve performance. Try selecting only the fields that you need. Remove any kind of formatting from the control. A friend of mine was encountering some issue with a Telerik grid when he was trying some non-standard configuration. See if there is a problem with the server (is cpu/ram maxing out?) Is there a problem with the network? Do other servers have the same issue? You could compare configurations to see what the difference is. Is the table fragmented? Not sure how much this would affect select performance though. If you are not using any joins I do not think that putting an index would improve anything.

Categories : Database

How to improve notes database performance?
There are many possible reasons for poor performance, and many things that we could suggest. For example, we could talk about disk fragmentation. We could discuss what Domino version you have (which you didn't mention) and possible upgrades. We could discuss your server's I/O subsystem... But the first and most glaringly obvious issue is that you have too many views! The fact is that 30,000 documents is not at all unreasonable for a Notes database, even with a decent amount of new and edited documents per day. On a properly designed and maintained database on fairly basic hardware, this should perform just fine. I've seen databases with 100,000 or more docs, with tens of thousands of new and deleted docs every day, perform okay, but I've also seen them perform badly.

Categories : Performance

How to improve performance for query with CHARINDEX?
I posted this approach as a tongue-in-cheek answer to this question. Unfortunately, too many people take it seriously. Though it's OK for ad-hoc querying, it's really not appropriate for production. The accepted answer at that question is much better and can address your performance problem also.

Categories : SQL

Does hiding DOM elements improve performance?
ng-show and ng-hide will only set a CSS display style, and will still process the bindings. ng-switch, however, will completely comment out the cases that do not apply, which in turn means bindings in those are not processed. I agree, however, with Edmondo1984's reply, that I doubt you should base your choices on this. Do not rewrite your ng-shows as ng-switches because of this! You can verify this with the Chrome extension Batarang, the performance tab shows which watches are active.

Categories : Performance

How to improve EF query performance with 20+ Includes()
In my project I created repositories for using EF, so I wrote the following:

public IQueryable<Company> Companies
{
    get
    {
        return context.Companies
            .Include("BankAccounts")
            .Include("CompanyContacts")
            .Include("MyClients")
            .Include("MyCompanies")
            .Include("Address");
    }
}

Now I can use Companies everywhere in my code without repeating the Includes:

CompanyRepository comrep = new CompanyRepository();
var company = comrep.Companies.SingleOrDefault(c => c.FullName == "Made in Azerbaijan");

I hope this helps.

Categories : Entity Framework

get random int from a set in python and how to improve performance
Read the file only once, and turn the set into a list; the random.sample() implementation already turns a set into a tuple just to be able to pick a sample. Avoid that overhead and use random.choice() instead:

books = None

def getRandomBook():
    global books
    if books is None:
        books = list(getBookSet())
    return random.choice(books)

There is no need to call int(), because you already converted the read values. This at least speeds up picking a random value on repeat calls to getRandomBook(). If you need to call this only once per run of your program, there is no way around the file read, other than creating a simpler file with just the unique book values.

Categories : Python

How to improve performance of contacting WebService?
I am using the exact same code to update a database in my app, sending a large amount of data with php/echo/json, and the only time I have experienced a slowdown is when the internet speed is low or the server gets slow. After upgrading to a better server, the only time this operation takes a few seconds to finish is when the signal strength gets low on the phone and the data transfer slows down to dial-up speeds or lower. Your PHP and Java code are correct; I suspect the problem is with your server. You should test while your phone is on Wi-Fi with good signal strength to rule out network speed, and measure how long your DB helper in PHP takes to get the results. If it's a SQL connection, and not Mongo or something else, it should run faster than 50 ms even for 10,000 rows.

Categories : PHP

How to improve my query performance by indexing
I would combine these indexes into one index, instead of having three separate indexes. For example: CREATE INDEX ix_cols ON dbo.Table1 (Col1, Col2, Description) If this combination of columns is unique within the table, then you should add the UNIQUE keyword to make the index unique. This is for performance reasons, but, also, more importantly, to enforce uniqueness. It may also be created as a primary key if that is appropriate. Placing all of the columns into one index will give better performance because it will not be necessary for SQL Server to use multiple passes to find the row you are seeking.
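
The same effect can be demonstrated with SQLite from Python. This is a sketch under assumed names (the table `t1`, its columns, and the sample data are made up); the query plan should show the lookup being satisfied from the single composite index alone:

```python
import sqlite3

# One composite index over (col1, col2, description) can serve the whole
# query, instead of three separate single-column indexes.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t1 (col1 INTEGER, col2 INTEGER, description TEXT)")
con.executemany(
    "INSERT INTO t1 VALUES (?, ?, ?)",
    [(i, i % 10, f"row {i}") for i in range(1000)],
)
con.execute("CREATE UNIQUE INDEX ix_cols ON t1 (col1, col2, description)")

# All selected columns are in the index, so no table lookup is needed.
plan = con.execute(
    "EXPLAIN QUERY PLAN "
    "SELECT description FROM t1 WHERE col1 = 42 AND col2 = 2"
).fetchall()
print(plan)  # the plan names ix_cols as a covering index
print(con.execute(
    "SELECT description FROM t1 WHERE col1 = 42 AND col2 = 2"
).fetchone())
```

The `EXPLAIN QUERY PLAN` output is the quickest way to confirm that the engine actually uses the index you expect, which mirrors the EXPLAIN advice elsewhere on this page.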

Categories : Sql Server

How to improve the performance of stored procedure?
There's nothing you can do about the CASE. As for the WHERE clause, it should use an index on SoldDate if one exists (yes, your WHERE clause is sargable). But if, the first time (in a long time) you run the query, the date range is so wide that SQL Server does a full scan, you could have a parameter-sniffing problem. So it might be good to use a query hint to force index usage (you wouldn't lose much performance in the cases where a full scan would have been more efficient anyway):

FROM dbo.vatcs WITH (INDEX(IDX_vatcsInnerTable_SoldDate))

Categories : Sql Server

Improve Performance of VBA String Comparison
See if this runs any faster for you:

Sub checkTemplate(shnam1 As Worksheet, shnam2 As Worksheet, shnam3 As Worksheet)
    Dim lCalc As XlCalculation
    Dim arrResults(1 To 65000, 1 To 5) As Variant
    Dim arrTable() As Variant
    Dim varCriteria As Variant
    Dim rIndex As Long
    Dim cIndex As Long
    Dim ResultIndex As Long
    With Application
        lCalc = .Calculation
        .Calculation = xlCalculationManual
        .EnableEvents = False
        .ScreenUpdating = False
    End With
    On Error GoTo CleanExit
    arrTable = shnam1.Range("A1").CurrentRegion.Value
    For rIndex = 2 To UBound(arrTable, 1)
        For cIndex = 3 To UBound(arrTable, 2)
            varCriteria = shnam2.Cells(cIndex, "A").Value
            If arrTable(rIndex, cIndex) <> varCriteria Then

Categories : Performance

Improve selectors performance with jQuery
It has changed now. Most browsers natively implement:

var matches = document.body.querySelectorAll('div.highlighted > p');

That's what jQuery uses internally now: it embeds Sizzle.js, a selector library that chooses between querySelector and the regular getElementsByTagName-style functions. For example, in the jQuery constructor $(), if the first argument is a string, $(iAmAString), and the first character of that string is a #, jQuery will call document.getElementById() on the rest of the string. If not, it lets Sizzle decide whether to call querySelector, depending on the browser's compatibility and the complexity of the string, among lots of other clever things. Being as precise as possible when selecting your element generally gives the fastest result.

Categories : Javascript

Dynamic column - how to improve performance
This gives you the list that you need. If you want to concatenate the values into a long string with delimiters based on the CTE's Delimiter field, see here: Concatenate many rows into a single text string?

use master;
go
with cte (TypeParent, ParentID, TypeEnfant, EnfantID, Numero, Delimiter) as
(
    select l.TypeParent
         , l.ParentID
         , l.TypeEnfant
         , l.EnfantID
         , e.Numero
         , '§' as Delimiter
    from dbo.ges_Jumelages_Liens as l
    join dbo.ges_Equipements as e on l.EnfantID = e.EquipmentID
    where l.TypeParent = 1 and l.TypeEnfant = 2
    union all
    select l.TypeParent
         , l.ParentID
         , l.TypeEnfant

Categories : SQL

how to improve select performance in mysql?
Ideally, you'd use the primary key, (id) within your program. But failing that, an index on the link column would solve your problem. Right now, the query you are running requires that all one million rows from the table be loaded from disk, and then compared to your query string value. Using an index, this could be reduced to no more than three or four disk reads. Also, a text datatype is stored separately from the rest of the table, which is not very efficient. I would re-define the two text columns to something like varchar(300), which is certainly long enough for any URL that might be encountered, but provides for more efficient storage as well: the TEXT type is really for (potentially) long fields like memos, or the content of web pages.

Categories : Mysql

SQL Update How to Improve Performance using Primary Key
My random guess (no table definition was given) is that the primary key datatype and the constant value have a mismatch requiring a conversion; this means the index won't be used, so it scans every row in the table.

Categories : SQL

Improve Performance of CSS url() with DataURI Image
You could create your own embedded web server that runs from inside the app and use URLs like the following to load the image: http://127.0.0.1/myimage.png Still, I'm not sure this should be an app if it doesn't run offline and it doesn't utilize any native functionality.

Categories : Javascript

How to improve the Solr "OR" query performance
You may try lucene-c-boost: optimized implementations of certain Apache Lucene queries in C++ (via JNI), for anywhere from 0 to 7.8x speedup. See https://github.com/mikemccand/lucene-c-boost.

Categories : Performance

How can I improve Perl compare performance
O(n²): you compare each element in @$users against every element in there. That is (5E4)² = 2.5E9 comparisons. But you don't need to compare an element against itself, and you don't need to compare an element against one you have already compared. I.e., in this comparison table

  X Y Z
X - + +
Y - - +
Z - - -

only three comparisons are needed to have compared each element against all others. The nine comparisons you are doing are 66% unnecessary (asymptotically: 50% unnecessary). You can implement this by looping over indices:

for my $i (0 .. $#$users) {
    my $userA = $users->[$i];
    for my $j ($i + 1 .. $#$users) {
        my $userB = $users->[$j];
        ...;
    }
}

But this means that upon a match, you have to increment the weight of both matching users.
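
The triangular loop above translates directly to any language. A small Python sketch (the `pairwise_weights` helper and `match` predicate are made up for illustration):

```python
# Compare each pair only once by starting the inner loop at i+1,
# halving the O(n^2) work; on a match, credit BOTH users, since the
# inner loop will never revisit that pair.
def pairwise_weights(users, match):
    weights = {u: 0 for u in users}
    for i in range(len(users)):
        for j in range(i + 1, len(users)):  # skip self and repeats
            if match(users[i], users[j]):
                weights[users[i]] += 1
                weights[users[j]] += 1
    return weights

users = ["X", "Y", "Z"]
# With a predicate that always matches, each user pairs with the
# other two, so every weight ends up at 2 after only 3 comparisons.
print(pairwise_weights(users, lambda a, b: True))
```

For n = 50,000 this cuts roughly 2.5E9 comparisons down to about 1.25E9, exactly the asymptotic 50% saving the answer describes.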

Categories : Perl

Improve string parse performance
You could "cheat" and work at the Encoder level:

public class UTF8NoZero : UTF8Encoding
{
    public override Decoder GetDecoder()
    {
        return new MyDecoder();
    }
}

public class MyDecoder : Decoder
{
    public Encoding UTF8 = new UTF8Encoding();

    public override int GetCharCount(byte[] bytes, int index, int count)
    {
        return UTF8.GetCharCount(bytes, index, count);
    }

    public override int GetChars(byte[] bytes, int byteIndex, int byteCount, char[] chars, int charIndex)
    {
        int count2 = UTF8.GetChars(bytes, byteIndex, byteCount, chars, charIndex);
        int i, j;
        for (i = charIndex, j = charIndex; i < charIndex + count2; i++)
        {
            if (chars[i] != '\0')
            {
                chars[j] = chars[i];

Categories : C#

Improve SimpleMembership query performance?
Can the SQL query be changed? No, the SQL query is burnt into the code of the simple membership provider. Check with Reflector the code of the WebMatrix.WebData.SimpleMembershipProvider.GetUserId method, which looks like this:

internal static int GetUserId(IDatabase db, string userTableName, string userNameColumn, string userIdColumn, string userName)
{
    object obj2 = db.QueryValue(
        "SELECT " + userIdColumn + " FROM " + userTableName +
        " WHERE (UPPER(" + userNameColumn + ") = @0)",
        new object[] { userName.ToUpperInvariant() });
    if (<GetUserId>o__SiteContainer5.<>p__Site6 == null)
    {
        <GetUserId>o__SiteContainer5.<>p__Site6 =
            CallSite<Func<CallSite, object, bool>>.Create(
                Binder.UnaryOperation(CSharpBinderFlags.None, ExpressionType.

Categories : Asp Net Mvc

improve python performance in networkx
I am not sure why this is slow when converting a numpy matrix to a DiGraph. Please try the approach below and see if it helps:

def create_directed_graph(rows):
    g = nx.DiGraph()
    for row in rows:
        curRow = row['r']
        curCol = row['c']
        weight = row['val']
        g.add_edge(curRow, curCol, Weight=weight)
    return g

Categories : Python

How can I improve the performance of a Seda queue?
SEDA with concurrentConsumers > 1 would absolutely help with throughput, because it would allow multiple threads to run in parallel. But you'll need to implement your own locking mechanism to make sure only a single thread is hitting a given HTTP endpoint at a given time. Otherwise, here is an overview of your options: http://camel.apache.org/parallel-processing-and-ordering.html. In short, if you can use JMS, then consider using ActiveMQ message groups: it's trivial to use and is designed for exactly this use case (parallel processing, but single-threaded by groups of messages, etc.).

Categories : Apache

How to improve performance of this linear interpolation in r
Apart from using rollmean, the base R filter function can do this sort of thing as well. E.g.:

linint <- function(vec) {
  c(vec[2], filter(vec, c(0.5, 0, 0.5))[-c(1, length(vec))], vec[length(vec) - 1])
}

x <- c(1, 3, 6, 10, 1)
linint(x)
#[1]  3.0  3.5  6.5  3.5 10.0

And it's pretty quick, chewing through 10M cases in less than a second:

x <- rnorm(1e7)
system.time(linint(x))
#  user  system elapsed
#  0.57    0.18    0.75
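
For readers outside R, the linint function above amounts to replacing each interior point with the mean of its two neighbours, with the endpoints copying their single neighbour's value. A plain-Python rendition of that logic (the function name is reused for clarity, but this is a sketch, not the R implementation):

```python
# Each interior point becomes the mean of its neighbours; the endpoints
# copy vec[2] and vec[n-1] in R's 1-based terms, i.e. their only
# neighbour's value.
def linint(vec):
    n = len(vec)
    inner = [(vec[i - 1] + vec[i + 1]) / 2 for i in range(1, n - 1)]
    return [vec[1]] + inner + [vec[-2]]

print(linint([1, 3, 6, 10, 1]))  # [3, 3.5, 6.5, 3.5, 10]
```

The output matches the R example, which is a handy cross-check that the filter coefficients c(0.5, 0, 0.5) are doing what the answer claims.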

Categories : R

Does "Detach Elements to Work with Them" improve performance always?
It improves performance when you make modifications to the element that would cause reflows, i.e. the browser would have to recompute the layout of the element. Reading the DOM usually does not cause reflows. As far as I know, reading certain layout information (such as the offset of an element) can trigger reflows to ensure that the correct values are returned. But in such a case, you probably want the element to be part of the document since you might not get accurate information otherwise.

Categories : Javascript

Does splitting up an update query improve performance
To be honest, it may be better not to use that method at all, but to instead use BULK INSERT, as described here: Handling Bulk Insert from CSV to SQL. It is quite simple:

BULK INSERT dbo.TableForBulkData
FROM 'C:\BulkDataFile.csv'
WITH
(
    FIELDTERMINATOR = ',',
    ROWTERMINATOR = '\n'
)

If you're doing it through C#, you can use the SqlBulkCopy class, or if you need to do it from the command line, you can always use BCP. Note that the method you're currently using is up to 10 times slower. Quote from the article: data can be inserted into the database from a CSV file using the conventional SqlCommand class, but this is a very slow process; compared to the other three ways I have already discussed, this process is at least 10 times slower. It is strongly recommended not to loop through the CSV file row by row.
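
The same "one batch instead of one statement per row" idea shows up in most database APIs. A hedged Python/SQLite sketch (the table name and CSV content are made up; SQL Server's BULK INSERT itself is a server-side feature with no direct SQLite equivalent):

```python
import csv
import io
import sqlite3

# Parse the CSV once, then hand the driver all rows in a single
# executemany() call rather than issuing one INSERT per row.
csv_text = "id,name\n1,alpha\n2,beta\n3,gamma\n"
rows = list(csv.reader(io.StringIO(csv_text)))[1:]  # skip the header row

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE bulk_data (id INTEGER, name TEXT)")
con.executemany("INSERT INTO bulk_data VALUES (?, ?)", rows)  # one batch

print(con.execute("SELECT COUNT(*) FROM bulk_data").fetchone()[0])  # 3
```

Batching amortizes the per-statement overhead (parsing, round trips, per-row transaction cost), which is where the "10 times slower" figure in the quoted article comes from.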

Categories : SQL

JPA/Hibernate improve batch insert performance
Here are two good answers on the subject: Hibernate batch size confusion and How do you enable batch inserts in hibernate? Note that with the identity generator (the generator used by default with Play), batch insert is disabled.

Categories : Hibernate

Improve query performance from many Master table
SELECT STRAIGHT_JOIN *
FROM t_responden a
INNER JOIN t_kepemilikan_tik b ON a.res_id = b.res_id
INNER JOIN t_televisi c ON a.res_id = c.res_id
INNER JOIN t_telepon_hp d ON a.res_id = d.res_id
INNER JOIN t_radio e ON a.res_id = e.res_id
INNER JOIN t_media_cetak f ON a.res_id = f.res_id
INNER JOIN t_internet g ON a.res_id = g.res_id
LEFT JOIN t_mst_pendidikan n ON a.res_pendidikan_kk = n.kd_pendidikan
LEFT JOIN t_mst_pendidikan o ON a.res_pendidikan = o.kd_pendidikan
LEFT JOIN t_mst_penghasilan p ON a.res_penghasilan = p.kd_penghasilan
LEFT JOIN t_mst_aksesibilitas q

Categories : Mysql

Improve the performance for enumerating files and folders using .NET
This is (probably) as good as it's going to get:

DateTime sixtyLess = DateTime.Now.AddDays(-60);
DirectoryInfo dirInfo = new DirectoryInfo(myBaseDirectory);
FileInfo[] oldFiles =
    dirInfo.EnumerateFiles("*.*", SearchOption.AllDirectories)
           .AsParallel()
           .Where(fi => fi.CreationTime < sixtyLess)
           .ToArray();

Changes: made the "60 days ago" DateTime a constant computed once, and therefore less CPU load; used EnumerateFiles; made the query parallel. It should run in a smaller amount of time (not sure how much smaller). Here is another solution, which might be faster or slower than the first depending on the data:

DateTime sixtyLess = DateTime.Now.AddDays(-60);
DirectoryInfo dirInfo = new DirectoryInfo(myBaseDirectory);
FileInfo[] oldFiles = dirInfo.EnumerateDirect
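
A rough Python counterpart of the first C# snippet above (function and directory names are assumptions; note it uses the portable modification time where the C# code used CreationTime):

```python
import os
import tempfile
import time

# Walk the tree once, computing the age cutoff a single time up front
# instead of doing date math per file, and collect files older than
# the given number of days.
def old_files(base_dir, days=60):
    cutoff = time.time() - days * 86400  # one constant, not per-file math
    matches = []
    for root, _dirs, files in os.walk(base_dir):
        for name in files:
            path = os.path.join(root, name)
            if os.path.getmtime(path) < cutoff:
                matches.append(path)
    return matches

# Demo on a throwaway directory: a freshly created file is never "old".
demo_dir = tempfile.mkdtemp()
open(os.path.join(demo_dir, "fresh.txt"), "w").close()
print(old_files(demo_dir))  # []
```

As in the C# version, hoisting the cutoff out of the loop is the cheap win; parallelizing the walk (the AsParallel step) only pays off when the directory tree is large.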

Categories : C#

How to improve InnoDB's SELECT performance while INSERTing
I imagine you have already solved this nearly a year later :) but I thought I would chime in. According to MySQL's documentation on internal locking (as opposed to explicit, user-initiated locking): table updates are given higher priority than table retrievals. Therefore, when a lock is released, the lock is made available to the requests in the write lock queue and then to the requests in the read lock queue. This ensures that updates to a table are not "starved" even if there is heavy SELECT activity for the table. However, if you have many updates for a table, SELECT statements wait until there are no more updates. So it sounds like your SELECT is getting queued up until your inserts/updates finish (or at least until there's a pause). Information on altering that priority can be found in the MySQL documentation.

Categories : Mysql


