Why does adding extra check in loop make big difference on some machines, and small difference on others?
One big difference between CPUs is pipeline optimization. The CPU can execute several instructions in parallel until it reaches a conditional branch. At that point, instead of waiting for the condition to be evaluated, the CPU can guess a branch and keep executing it in parallel while the condition is being computed. If the guess was correct, we have a gain; otherwise the CPU discards the speculative work and continues with the other branch. So the tricky part for a CPU is to make the best guesses and to execute as many instructions in parallel as possible.
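A minimal C++ sketch (not from the original answer; the data and the threshold are made up) of the kind of loop where this matters: with sorted data the branch inside the loop becomes predictable and the loop typically runs much faster.

#include <algorithm>
#include <cstdlib>
#include <iostream>
#include <vector>

int main() {
    std::vector<int> data(1 << 20);
    for (int &x : data) x = std::rand() % 256;

    // std::sort(data.begin(), data.end()); // uncomment: the branch below becomes predictable

    long long sum = 0;
    for (int x : data)
        if (x >= 128)          // conditional branch the CPU has to guess
            sum += x;

    std::cout << sum << '\n';
}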

Categories : C++

What is the difference between these 2 queries
This query:

select distinct af.Code
from AIR af
inner join Float spf on spf.Station = af.AirID
inner join Float spf1 on spf1.DeptStation = af.AirID

is equivalent to an "and" join:

select distinct af.Code
from AIR af
inner join Float spf on spf.Station = af.AirID and spf.DeptStation = af.AirID

Edit: you had an error in your second query. It should be inner join Float spf1 on spf1.DeptStation = af.AirID

Categories : SQL

What's the difference between queries?
If a movie has no ratings, rating.mid will be null, so movie.mid = rating.mid will not be true. If a movie does have ratings, then movie.mid = rating.mid will match, but (presumably) stars is null will be false. So your condition is never satisfied. The correct answer that you posted works because the join condition is separate from the where condition: first the movies table is joined to the rating table, then the result is filtered down to the rows that had nothing joined to them.
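A sketch of the pattern the answer describes, reusing the movie/rating/mid names from the question (the exact query from the accepted answer isn't shown above, so this is an assumption):

SELECT movie.mid
FROM movie
LEFT JOIN rating ON movie.mid = rating.mid
WHERE rating.mid IS NULL;

The LEFT JOIN keeps every movie; the WHERE clause then keeps only the movies for which the join found no rating row.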

Categories : SQL

Two SQL Queries - Performance difference?
The answer to your question is that the second should perform better in MySQL than the first, for exactly the reason you gave. MySQL will run the full group by on all the data and then select the one group. You can see the difference in execution paths by putting an explain in front of each query. That will give you some idea of what the query is really doing. If you have an index on user_id, timestamp, then the second query will run quite fast, using only the index. Even without an index, the second query would do a full table scan of the two tables -- and that is it. The first will do a full table scan and a file sort for the aggregation, so the first takes longer. If you wanted to pass in the userid only once, you could do something like: select coalesce(greatest(bc_last_timestamp,

Categories : Mysql

Explain the difference between these 3 queries and which is the best to use?
As people have started to point out in the comments, none of these methods is advisable or safe. The problem is SQL injection, as explained here. You want to use PDO. See this tutorial or this reference. So to connect:

$dsn = "mysql:host=127.0.0.1;dbname=myDatabase";
$username = "myUsername";
$password = "myPassword";
try {
    $DBH = new PDO($dsn, $username, $password);
} catch (PDOException $e) {
    echo $e->getMessage();
}

And a sample insertion:

$STH = $DBH->prepare("INSERT INTO job (snum, date, length, type, ref) VALUES (:u,:d,:l,:t,:r)");
$STH->bindParam(':u', $myVariable1);
$STH->bindParam(':d', $myVariable2);
$STH->bindParam(':l', $myVariable3);
$STH->bindParam(':t', $myVariable4);
$STH->bindParam(':r', $myVariable5);
try { $S

Categories : PHP

Two MongoDB queries same result, what is the difference?
The difference is that $elemMatch requires the conditions to be satisfied by one single array element. This solution:

db.users.find({ "linkedProviders.userId": "1XXXXXXXX6", "linkedProviders.providerId": "facebook" })

finds any user that has that userId and that providerId, but possibly in different items of linkedProviders; e.g., if linkedProviders[0].userId matches the first part and linkedProviders[1].providerId matches the second part of the query, the full document (i.e., the user) will match that query. On the other hand,

db.users.find({ "linkedProviders": { "$elemMatch": { "userId": "1XXXXXXXX6", "providerId": "facebook" } } })

will match only if the index values (0 and 1 in the previous example) are the same, i.e., only if one array element satisfies both conditions.

Categories : Mongodb

Behavior difference between two cypher queries
What version are you using? I tried to reproduce the faulty result in http://console.neo4j.org/r/6lvxd8 and could not. If you can recreate it in the console, please file an issue!

Categories : Neo4j

PHP + MySQL: Difference between buffered and unbuffered queries
See: http://php.net/manual/en/mysqlinfo.concepts.buffering.php Unbuffered MySQL queries execute the query and then return a resource while the data is still waiting on the MySQL server to be fetched. This uses less memory on the PHP side, but can increase the load on the server. Until the full result set has been fetched from the server, no further queries can be sent over the same connection. Unbuffered queries can also be referred to as "use result". Given these characteristics, buffered queries should be used in cases where you expect only a limited result set or need to know the number of returned rows before reading all of them. Unbuffered mode should be used when you expect larger results. Buffered queries are the default. Unbuffered example: <?php $mysqli = new mysqli("localho
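A minimal sketch of an unbuffered query with mysqli (the connection details and table name are placeholders):

<?php
$mysqli = new mysqli("localhost", "user", "password", "mydb");

// MYSQLI_USE_RESULT makes the query unbuffered: rows stay on the MySQL server
// until they are fetched one by one.
$result = $mysqli->query("SELECT id, name FROM big_table", MYSQLI_USE_RESULT);

while ($row = $result->fetch_assoc()) {
    // process $row; memory usage stays low even for huge result sets
}

// The result must be freed before another query can be sent on this connection.
$result->free();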

Categories : PHP

Scoring difference between seemingly equivalent Solr queries
I've followed femtoRgon's advice to check the debug element of the score calculation. What I've found is that the calculations are indeed mathematically equivalent. The only difference is that in the conjunction-of-queries calculation we store intermediate results. More precisely, we store the contribution of each sub-query to the sum in a variable. Apparently, stopping to store intermediate results has the effect of accumulating numerical error: each time we store an intermediate result we lose some accuracy. Since the actual queries in the application are quite big (not like the trivial example query), there's plenty of accuracy to be lost, and the accumulated error sometimes even changes the ranking order of the returned documents. So the conjunction-of-terms query is e
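A hypothetical Java illustration (not Solr/Lucene code; the values are made up) of how rounding each intermediate contribution to a narrower type lets error accumulate across a large sum:

public class PrecisionDemo {
    public static void main(String[] args) {
        double direct = 0.0;   // accumulate in full double precision
        float viaFloat = 0.0f; // store each intermediate result in a float

        for (int i = 0; i < 1_000_000; i++) {
            double contribution = 0.1 * (i % 7 + 1); // made-up sub-query scores
            direct += contribution;
            viaFloat += (float) contribution;        // rounded at every step
        }

        // Two mathematically equivalent sums end up noticeably different.
        System.out.println(direct - viaFloat);
    }
}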

Categories : Java

Difference between using SMO and SQL queries (through SqlConnection) when building a SQL Server table
I'm not sure that SMO covers every single attribute and property that you'd want to set for a table, and scrounging around to find them all can be an exercise in futility. Personally I would much rather use DDL (CREATE TABLE, ALTER TABLE, etc) not only because I don't have to look up all the properties but also because it helps me reinforce my knowledge of those commands for cases where I don't want to or can't use SMO. But that may just be a preference thing. As for speed, no difference whatsoever unless you're measuring the parse/interpretation through the SMO layer with a nanosecond stopwatch. It's going to eventually build the same CREATE TABLE command you would write yourself, and send that on to SQL Server.

Categories : C#

Finding the difference between the results of two queries in Access 2003
Since the [not_zero] query results will always be a subset of the [all] query results, the "difference" between the two are the rows in [all] that do not appear in [not_zero]. The following query finds them:

SELECT all.*
FROM all LEFT JOIN not_zero
  ON (all.NAME = not_zero.NAME)
 AND (all.LOCATION = not_zero.LOCATION)
 AND (all.[BUSINESS UNIT] = not_zero.[BUSINESS UNIT])
WHERE (not_zero.NAME Is Null)
  AND (not_zero.LOCATION Is Null)
  AND (not_zero.[BUSINESS UNIT] Is Null);

Categories : SQL

MYSQL COLLATE Performance
In this case, since the forced collation is defined over the same character set as the column's encoding, there won't be any performance impact. However, if one forced a collation that is defined over a different character set, MySQL may have to transcode the column's values (which would have a performance impact); I think MySQL will do this automatically if the forced collation is over the Unicode character set, and any other situation will raise an "illegal mix of collations" error. Note that the collation recorded against a column's definition is merely a hint to MySQL over which collation is preferred; it may or may not be used in a given expression, depending on the rules detailed under Collation of Expressions.
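For illustration, two hedged MySQL examples (the users table, the name column and the specific collations are hypothetical):

-- Forcing a collation defined over the column's own character set (here latin1):
-- no transcoding is needed.
SELECT name FROM users ORDER BY name COLLATE latin1_german2_ci;

-- To sort by a collation from a different character set, transcode explicitly:
SELECT name FROM users ORDER BY CONVERT(name USING utf8) COLLATE utf8_unicode_ci;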

Categories : Mysql

How do I indicate collate order in Roxygen2?
The include tag is used to state that one file needs another to work. (The name include might not have been the best choice, but such is life). If you want to make sure that file B is loaded before file A, then make sure to @include B in A. Roxygen will take care of ordering the collate field to satisfy your restrictions.
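A small hypothetical sketch: if A.R defines a class that extends one defined in B.R, a roxygen block like the following in A.R forces B.R ahead of A.R in the Collate field.

#' @include B.R
NULL

# A.R can now rely on the class defined in B.R being available:
setClass("A", contains = "B")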

Categories : R

SQL latin1 to utf-8 converting
1. Create a new database in utf8.
2. Restore your database structure into it.
3. Run "ALTER TABLE the_latin_one CONVERT TO CHARACTER SET utf8;" for each of your tables.
4. Create a PHP script that builds an sql.txt file of INSERT statements, like this:
   file_put_contents(PATH2."sql.txt", "insert into ".$TABLE[0]." values(".(rtrim($q_col, ",")).");", FILE_APPEND);
   You really must generate the INSERT queries for all tables; the queries "show FULL tables where Table_type<>'VIEW'" and "set character set 'latin1'" may help you do this.
5. On the command line, load the file: mysql -u xxx -p yourDB < sql.txt
If you have any problem, write a comment for me ;) I did this and solved my problem after 3 years :))

Categories : Mysql

using latin1 encoding in a rails app
First, you need to supply the program versions. Next, you need to understand that the default locale setting LC_CTYPE is determined at cluster creation time. I quote the manual here: Some locale categories must have their values fixed when the database is created. You can use different settings for different databases, but once a database is created, you cannot change them for that database anymore. LC_COLLATE and LC_CTYPE are these categories. They affect the sort order of indexes, so they must be kept fixed, or indexes on text columns would become corrupt. (But you can alleviate this restriction using collations, as discussed in Section 22.2.) The default values for these categories are determined when initdb is run, and those values are used when new databases are created.

Categories : Ruby On Rails

Does the method I use to make classes make a difference in the long run?
It makes no difference how you create the class as long as it's syntactically correct. You can do it manually or use productivity tools like Visual Studio or ReSharper to help you, but the end result is the same. In fact, you could write your entire program in Notepad and have it work the same way... If you get used to using productivity tools you may never want to do it manually, but that's a personal preference.

Categories : C#

When to case fold and when to collate using Boost.Locale?
I think it depends on what you want to achieve. In the collation case (using collator_base::secondary), punctuation will be ignored as well. This is sometimes what you want, and sometimes not. So it's up to you to decide which is preferable in a particular case. The documentation says: fold_case is generally a locale-independent operation, but it receives a locale as a parameter to determine the 8-bit encoding. To me, "generally" means in this case that fold_case is locale-independent, and the locale is only used to determine the 8-bit encoding. (But I'm not a native English speaker...)

Categories : C++

Latin1/UTF-8 Encoding Problems in AngularJS
I found one way that works, using the transformRequest config parameter:

transformRequest: function (data, headersGetter) {
    return encode_utf8(JSON.stringify(data));
}

function encode_utf8(s) {
    return unescape(encodeURIComponent(s));
}

I'm using the encode function found and explained at http://ecmanaut.blogspot.com/2006/07/encoding-decoding-utf8-in-javascript.html and the JSON library found at http://www.JSON.org/json2.js.

Categories : Django

MYSQL latin1 and utf8 after mysqldump
Your database contains some data that's encoded as utf-8, but you have stored it in table columns declared to be latin1. Presumably you want GÃ©nÃ©ral to be displayed as Général. You're going to have to repair your data to fix this. To find the offending data, see this article: How can I find Non-Ascii characters in MySQL

Categories : Mysql

iOS: trouble when querying swedish letters with sqlite3
From the reference manual; NOCASE - The same as binary, except the 26 upper case characters of ASCII are folded to their lower case equivalents before the comparison is performed. Note that only ASCII characters are case folded. SQLite does not attempt to do full UTF case folding due to the size of the tables required. Your best bet is probably to compile your own SQLite with ICU extensions.

Categories : IOS

Perl: Why do i need to set the latin1 flag explicitly since JSON 2.xx?
This is such a huge can of worms that you're opening here. I suspect that the answer is something along the lines of "a bug was fixed in the character handling of JSON.pm". But it's hard to know what is going on without a lot more information about your situation. How is $string_with_umlauts being set? How are you encoding the data that you write to the HTML document? Do you want to handle utf8 data correctly (you really should) or are you happy assuming that you live in a Latin1 world? It's important to realise that if you completely ignore Unicode considerations then it can often seem that your programs are working correctly, as errors often cancel each other out. When you start to address Unicode issues, it can seem that your programs are getting worse until you address all of the issues.

Categories : Json

Convert huge database from Latin1 to UTF8?
You're hoping to do something that is possible, but very hard, and risky as well. Give up on clever: there is nothing magical that makes this easy. You're trading off downtime on the one hand against your labor cost and the risk of data loss on the other. Your labor cost will probably be ten times higher than it would be if you took the 15 hours of downtime. Is it possible to write a SELECT query for every table that is guaranteed to retrieve every row that has been added or changed since a particular date/time, and to do so quickly? If so, write this query for every table and keep it at hand. If not, you can't use this method. You can do this table by table. The small tables won't take much to do; you can probably do them while your application is live, at off-peak hours. Just convert them.

Categories : PHP

Writing and then reading a string in file encoded in latin1
Your data was written out as UTF-8:

>>> 'On écrit ça dans un fichier.'.encode('utf8').decode('latin1')
'On Ã©crit Ã§a dans un fichier.'

This either means you did not write out Latin-1 data, or your source code was saved as UTF-8 but you declared your script (using a PEP 263-compliant header) to be Latin-1 instead. If you saved your Python script with a header like:

# -*- coding: latin-1 -*-

but your text editor saved the file with UTF-8 encoding instead, then the string literal:

s = 'On écrit ça dans un fichier.'

will be misinterpreted by Python as well, in the same manner. Saving the resulting unicode value to disk as Latin-1, then reading it again as Latin-1, will preserve the error. To debug, please take a close look at print(s.encode('unicode_escape')) in the fi

Categories : Python

C++, linux, how to pop_back() efficiently a non latin1 char from a string
On modern Linux in particular most (all?) text and code editors save "Motörhead" in the file with 10 bytes between the quotation marks. Try hexdump on your source code file and you'll see something like 00000050 32 20 3d 20 22 4d 6f 74 c3 b6 72 68 65 61 64 22 |2 = "Mot..rhead"| You can achieve this behavior in a portable manner with C++11 if you use u8"Motörhead" As for finding out how many bytes are in each multibyte character, it's rarely necessary, but if you really need it, std::mblen, std::mbrlen and related functions can help.
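If you do need to pop a whole UTF-8 character off the end of a std::string, here is a small sketch (it assumes the string holds valid UTF-8):

#include <string>

// Remove trailing UTF-8 continuation bytes (10xxxxxx), then the lead byte.
void pop_back_utf8(std::string &s) {
    while (!s.empty() && (static_cast<unsigned char>(s.back()) & 0xC0) == 0x80)
        s.pop_back();               // continuation byte of a multibyte character
    if (!s.empty())
        s.pop_back();               // lead byte, or a plain one-byte ASCII character
}

On u8"Motörhead" stored in a std::string this removes the final 'd' in one call, and it would remove the two-byte 'ö' as a single character if it were last.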

Categories : C++

Symfony2: How can I set twig |date("d F, Y") filter to output months in Swedish?
The Twig Intl Extension
You can use the Twig Intl Extension found in fabpot's official Twig extension repository. It provides a localized date filter which can be used like this:

{{ date | localizeddate('full', 'none', app.request.locale) }}

Use app.request.locale as the third parameter for the current locale, or just 'sv'.

Integration into your project
Add the official extensions to your composer.json using:

composer require twig/extensions:1.0.*@dev
composer update twig/extensions

config.yml:

# enable intl extension
services:
    twig.extension.intl:
        class: Twig_Extensions_Extension_Intl
        tags:
            - { name: twig.extension }

Quick tip: another handy extension is the Text extension, providing truncate, etc. filters:

services:
    twig.extension.text:

Categories : PHP

How to get list of records from table where the time difference found using DATEDIFF function between 2 variables that are select queries themselves?
You can't use variables this way. Now, it's hard to tell for sure without seeing your table schema and sample data, but you should be able to do what you want using a JOIN, with a query like this:

SELECT l1.*
FROM log.time l1
JOIN log.time l2
  ON l1.sender = l2.sender
 AND l1.receiver = l2.receiver
 AND l1.code = 158
 AND l2.code = 189
WHERE l1.sender = 'Japan'
  AND l1.receiver = 'USA'
  AND DATEDIFF(minute, l1.log_time, l2.log_time) >= 10

If you were to provide a table schema, sample data and desired output, then it would be possible to test your query.

Categories : Mysql

Python3 - ascii/utf-8/iso-8859-1 can't decode byte 0xe5 (Swedish characters)
You are running this in xterm, which does not support UTF-8 by default. Run it as xterm -u8 or use uxterm to fix that. The other way to work around it is to use a different locale; set your locale to Latin-1, for example:

export LANG=sv_SE.ISO-8859-1

but then you are limited to 256 codepoints, versus the full range (several million) of the Unicode standard. Note that Python 2 never decoded the input; writing out what you read from the terminal will look fine because the raw bytes you read are interpreted by the terminal in the same locale; reading and writing Latin-1 bytes works just fine. That's not quite the same as processing Unicode data, however.

Categories : Python

How to make function of SQL Queries
You need a mapping table to look up the make when you specify the type (or vice versa).

Make        Type
Ford        Explorer
Chevrolet   Lumina
Ford        Crown
Subaru      Legacy

Then, when you call your function GetMake(type), it runs a query filtered on the Type, returns the Make, and you can then use the result in your query above.
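A hedged sketch of the lookup GetMake could perform (the CarModel table name and columns are assumptions, not from the answer):

-- Mapping table: CREATE TABLE CarModel (Make VARCHAR(50), Type VARCHAR(50));

SELECT Make
FROM CarModel
WHERE Type = 'Explorer';   -- returns 'Ford'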

Categories : SQL

How to make a partial queries in MongoDb
One alternative can be using the aggregation framework:

db.testing.aggregate({$match : {'user.name':'John'}}, {$unwind : '$user.sports'}, {$skip: 0}, {$limit : 2})
db.testing.aggregate({$match : {'user.name':'John'}}, {$unwind : '$user.sports'}, {$skip: 2}, {$limit : 2})

Change the (0, 2) for (skip, limit) above to get the next set of 2.

Categories : Mongodb

make better LinQ queries mvc3
Fiddler doesn't monitor the data sent from the SQL Server to the web server; Fiddler will only show you the size of the HTML that the view generates. So to answer your question: YES! The performance is much better if you ask only for the fields you need rather than either asking for all of them or blindly using the Select method. The SQL Server should/may run the query faster: it may be able to retrieve all the fields you are asking for directly from an index rather than having to actually read each row. There are many other reasons as well, but they get more technical. As for the web server, it too will execute faster, since it doesn't have to receive as much data from the SQL Server, and it will use less memory (leaving more memory available for caching, etc). A good analogy would b
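For instance, a hypothetical LINQ projection (the context, entity and field names are made up) that asks SQL Server for only three columns instead of whole rows:

// Translated roughly to: SELECT Id, Name, Price FROM Products WHERE IsActive = 1
var rows = db.Products
             .Where(p => p.IsActive)
             .Select(p => new { p.Id, p.Name, p.Price })
             .ToList();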

Categories : Asp Net Mvc

facebook opengraph, how to make customized queries?
me/friends?fields=music will give you the music interests of your friends. For nearby restaurants you will have to execute a more complex query like search?q=cafe&type=place&center=37.76,-122.427&distance=1000. Refer to this page for the search types.

Categories : Facebook

COGNOS Report Studio : Make a certain query run before any other queries
This doesn't seem like a good case for a "live" report. I would use a SQL scheduler to run your SPs beforehand and write the last completed date to a debug table. Have Cognos read from the tables you've created, along with the last run date from the debug table.

Categories : Sql Server

Buffer MySQL queries or make multiple connections?
Making connections is expensive, so it's probably best not to keep making them. However, holding one connection open all the time has its own problems (what happens if it closes for some reason?). Why not have a look at database connection pooling; Google will turn up several competing connection pool implementations for you. You'll get the best of both worlds: to your application, the connection will appear to be permanently open, but if the connection fails for some reason, the connection pool will handle re-opening it for you.
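A sketch using Apache Commons DBCP2, one of several pool implementations; the JDBC URL, credentials and pool size are placeholders.

import java.sql.Connection;
import java.sql.ResultSet;
import java.sql.Statement;
import org.apache.commons.dbcp2.BasicDataSource;

public class PoolExample {
    public static void main(String[] args) throws Exception {
        BasicDataSource pool = new BasicDataSource();
        pool.setUrl("jdbc:mysql://localhost:3306/mydb");  // hypothetical connection details
        pool.setUsername("user");
        pool.setPassword("password");
        pool.setMaxTotal(10);                             // cap on open connections

        // Closing a pooled connection returns it to the pool instead of tearing it down.
        try (Connection con = pool.getConnection();
             Statement st = con.createStatement();
             ResultSet rs = st.executeQuery("SELECT 1")) {
            while (rs.next()) {
                System.out.println(rs.getInt(1));
            }
        }
    }
}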

Categories : Java

How make dns queries in dns-python as dig (with aditional records section)?
I found a solution; I make the request the way dig does:

import dns.name
import dns.message
import dns.query
import dns.flags
import dns.rdatatype

domain = 'google.com'
name_server = '8.8.8.8'
ADDITIONAL_RDCLASS = 65535

domain = dns.name.from_text(domain)
if not domain.is_absolute():
    domain = domain.concatenate(dns.name.root)

request = dns.message.make_query(domain, dns.rdatatype.ANY)
request.flags |= dns.flags.AD
request.find_rrset(request.additional, dns.name.root,
                   ADDITIONAL_RDCLASS, dns.rdatatype.OPT,
                   create=True, force_unique=True)

response = dns.query.udp(request, name_server)

print response.answer
print response.additional
print response.authority

With ADDITIONAL_RDCLASS = 4096, as dig uses, everything works too, but I set it to the maximum to be on the safe side. It works pretty well.

Categories : Python

How to make arbitrary SQL-queries very quick from a huge table in the database
If you are using a single table, it should be for reports only, or else you are doing it wrong. Report data should not be updated often, certainly not every hour. If an Oracle hint fails, you can change it manually; that happens from time to time, but it's mostly because of updates/inserts. And know your queries: don't blindly create indexes for every column (you do use indexes, right?); check where these slow queries spend their time, and then you will know where to optimize. Relational databases don't suck if you know how to use them.

Categories : SQL

Does it make sense to wrap PHP mysqli prepared statements for executing single queries?
To have such a function is actually the only sane way. Using raw API functions right in the application code, so heavily advertised on this blessed site of Stack Overflow, is one of the worst practices ever. And yes, it makes sense even for single query execution, as the prepared statement's only purpose is to format your query properly and unconditionally. Though, to create such a function for mysqli using native prepared statements is a durn complex task. One needs A LOT of experience and research to accomplish it. Say, just to add an arbitrary number of parameters to a query, you will need a screenful of code: Bind multiple parameters into mysqli query. And you will need twice that to get your results into an array! However, for the emulated approach it would be much easier

Categories : PHP

Is there any difference between iPhone 4 and iPhone 5 when I use media queries?
According to this site, the difference is in the max-device-width: on the iPhone 4 it's 480px, on the iPhone 5 it's 568px.

iPhone 4:
@media only screen and (min-device-width : 320px) and (max-device-width : 480px) { /* STYLES GO HERE */ }

iPhone 5:
@media only screen and (min-device-width : 320px) and (max-device-width : 568px) { /* STYLES GO HERE */ }

Both have a device-pixel-ratio of 2, so the iPhone 4 has a screen height of 960px (actual pixels) and the iPhone 5 has a screen height of 1136px (actual pixels).

Categories : CSS

In Django, how do I make a model that queries my legacy SQL Server and Oracle databases and sends that to the view?
You can query different databases in your ORM calls by leveraging the using statement: https://docs.djangoproject.com/en/1.5/ref/models/querysets/#using This would allow you to set up as many database definitions as you need in settings.py, then specify which DB to query at the view level. That way, you wouldn't have to change your model definition should you decide to consolidate your databases, etc.
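A short sketch, assuming a 'legacy_mssql' alias defined under DATABASES in settings.py and a model named LegacyOrder (both names are made up):

from myapp.models import LegacyOrder

# Route this ORM call to the legacy SQL Server database instead of 'default'.
orders = LegacyOrder.objects.using('legacy_mssql').filter(status='open')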

Categories : Django

I'm trying to make a long-polling chat app using Twisted and jQuery (with Django). How do I pass queries back to the JS?
Sometimes a function in a Twisted application may conditionally return data, and at other times return a Deferred. In those cases, you can't check to see if you got data; you probably won't, and in the cases where you do get a Deferred, no amount of rechecking will change that. You must always turn such functions into real Deferreds, with maybeDeferred, and then attach a callback to the result. That said, t.e.adbapi.ConnectionPool.runQuery() is not such a function: it always returns a Deferred. The only way to work with that data is to attach a callback. In general, you won't ever see the result of an asynchronous call in a Twisted application in the same function that makes the initial call. This means that, since you want to run a query for every long polling request, and since those
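A minimal sketch of that callback pattern (the database name, credentials and query are placeholders):

from twisted.enterprise import adbapi

# runQuery always returns a Deferred; the rows arrive later, in a callback.
dbpool = adbapi.ConnectionPool("MySQLdb", db="chat", user="user", passwd="password")

def on_rows(rows):
    # called once the query has finished; 'rows' is a list of result tuples
    return rows

d = dbpool.runQuery("SELECT message FROM messages WHERE id > %s", (0,))
d.addCallback(on_rows)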

Categories : Jquery

What difference does add method make over setContentPane?
JFrame#add basically calls JFrame#getContentPane().add, so it's just shorthand. So your code is actually saying:

fr.setContentPane(new JPanel() {...});
fr.getContentPane().add(new JButton("Press Me"));

Now, when you comment out the setContentPane line, JFrame uses a BorderLayout by default, so the button is laid out in the CENTER position of the (frame's default) content pane and fills the entire available space. You could get the same effect as your original code by calling:

JPanel background = new JPanel() {...};
background.add(new JButton("Press Me"));
fr.add(background);

Take a look at Using top-level containers and How to use Root Panes for more details.

Categories : Java


