mongodb - add column to one collection find based on value in another collection
In MongoDB, the simplest way is probably to handle this with application-side logic rather than trying to do it in a single query. There are many ways to structure your data, but here is one possibility:

    user_document = {
        name: "User1",
        postsIhaveLiked: [ "post1", "post2" ... ]
    }

    post_document = {
        postID: "post1",
        content: "my awesome blog post"
    }

With this structure, you would first query for the user's user_document. Then, for each post returned, you check whether the post's postID is in that user's postsIhaveLiked list. The main idea is that you get your data in two steps, not one. This is different from a join, but it rests on the same underlying idea of using one key (in this case, the postID) to relate two different pieces of data.
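
A minimal sketch of that two-step lookup in plain JavaScript. The in-memory arrays stand in for the two collections; in a real application, findUser and the post fetch would be driver queries:

```javascript
// Simulated collections (in practice these would come from driver queries).
const users = [
  { name: "User1", postsIhaveLiked: ["post1", "post2"] }
];
const posts = [
  { postID: "post1", content: "my awesome blog post" },
  { postID: "post3", content: "another post" }
];

// Step 1: fetch the user's document.
function findUser(name) {
  return users.find(u => u.name === name);
}

// Step 2: for each post, check membership in the user's liked list.
function annotateLikes(user, postDocs) {
  const liked = new Set(user.postsIhaveLiked);
  return postDocs.map(p => ({ ...p, likedByMe: liked.has(p.postID) }));
}

const result = annotateLikes(findUser("User1"), posts);
// result[0].likedByMe === true, result[1].likedByMe === false
```

The Set makes the per-post membership check O(1), which matters when the liked list is long.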

Categories : Mongodb

mongodb distinct values of a row in a generic fashion
Salat author here. Look at SalatDAO#primitiveProjections - MyDAO.primitiveProjections[String](/* some query or DBObject.empty for all */, "hybridType") See the section on projections at SalatDAO wiki page

Categories : Mongodb

MongoDB query for distinct field values that meet a conditional
You're going to want to use the aggregation framework:

    db.sentences.aggregate([
        { $group: { _id: "$last_syls", count: { $sum: 1 } } },
        { $match: { count: { $gt: 10 } } }
    ])

This groups documents by their last_syls field with a count per group, then filters that result set down to the groups with a count greater than 10.
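
The same group-then-filter logic can be sketched in plain JavaScript, purely to illustrate what the two pipeline stages compute:

```javascript
// Count documents per `key` value, mirroring the $group stage.
function groupCounts(docs, key) {
  const counts = new Map();
  for (const d of docs) {
    counts.set(d[key], (counts.get(d[key]) || 0) + 1);
  }
  return counts;
}

// Keep only groups whose count exceeds `threshold`, mirroring $match.
function frequentValues(docs, key, threshold) {
  return [...groupCounts(docs, key)]
    .filter(([, count]) => count > threshold)
    .map(([value, count]) => ({ _id: value, count }));
}

// 12 sentences ending in "ing", 2 ending in "ed":
const sentences = [
  ...Array.from({ length: 12 }, () => ({ last_syls: "ing" })),
  ...Array.from({ length: 2 }, () => ({ last_syls: "ed" })),
];
const out = frequentValues(sentences, "last_syls", 10);
// out: [{ _id: "ing", count: 12 }]
```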

Categories : Mongodb

Mongodb. Cannot find defined collection in C#
You must add a class like the code below:

    var connectionString = "mongodb://localhost";
    var client = new MongoClient(connectionString);
    var server = client.GetServer();
    var database = server.GetDatabase("testdb");   // "testdb" is the name of the database

    // "Users" is the name of the collection
    var collection = database.GetCollection<Entity>("Users");
    // var searchQuery = Query.EQ("firstname", "Tom");
    var cursor = collection.FindAll();

What is Entity? It's the class I mentioned, used to get and set the fields of the documents in this collection. In my case, it looks like this:

    class Entity
    {
        public ObjectId Id { get; set; }
        public string firstname { get; set; }
        public string lastname { get; set; }
    }

Categories : C#

PHP MongoDb, find all referenced documents when collection is indexed array
I don't quite see why you have such a complicated structure. The "0" and "1" keys in particular are problematic, especially from PHP, which doesn't handle arrays with numerical string keys well. The $ref/$id fields come from MongoDBRef, which you should avoid as it doesn't provide you with any extra functionality. You should just have:

    {
        "_id": ObjectId("5188deba4c2c989909000000"),
        "_type": "Model_Discs",
        "title": "really cool cd",
        "referencedBy": [
            ObjectId("4e171cade3a9f23359e98552"),
            ObjectId("5045c3222b0a82ec46000000")
        ]
    }

Then you can simply query with:

    db.collection.find( { referencedBy: new ObjectId("5045c3222b0a82ec46000000") } );

Categories : PHP

Mongodb: Skip collection values from between (not a normal pagination)
I went through several resources and concluded that this is currently impossible to do in one query. There are only two options to overcome the problem:

1.) Run several $slice queries in a loop, increasing the position of the slice each time, similar to the resource I linked:

    var skip = NUMBER_OF_ITEMS * (PAGE_NUMBER - 1)
    db.companies.find({}, { arrayField: { $slice: [skip, NUMBER_OF_ITEMS] } })

(where arrayField is the embedded array you are paginating). However, depending on the data, I would not want to run 5000 individual queries just to get half of the array contents, so I decided to use option 2, which seems relatively fast and performance-friendly.

2.) Make a single query by _id for the row you want and, before returning results to the client or another part of your code, skip the unwanted array items in application code.
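
The skip arithmetic from option 1 can be sketched in plain JavaScript, applied to an in-memory array rather than a query, purely to illustrate the offset calculation:

```javascript
// Compute the slice for a given 1-based page number.
function paginate(items, pageNumber, itemsPerPage) {
  const skip = itemsPerPage * (pageNumber - 1);
  return items.slice(skip, skip + itemsPerPage);
}

const contents = Array.from({ length: 25 }, (_, i) => "item" + i);
paginate(contents, 1, 10); // item0 .. item9
paginate(contents, 3, 10); // item20 .. item24 (last, partial page)
```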

Categories : Mongodb

Clojure idiomatic way to find duplicate values in a lazy collection (excluding some values from the check)
You could try using a set instead of a list for the exceptions and the accumulator. Each check then becomes "is this item in the accumulator or the exceptions?" — probably a lot faster while still preserving laziness.

    (defn duplicates? [coll except]
      (let [_duplicates?
            (fn [coll except accum]
              (if (seq coll)
                (let [item (first coll)]
                  (if (contains? except item)
                    (recur (rest coll) except accum)
                    (if (contains? accum item)
                      true
                      (recur (rest coll) except (conj accum item)))))
                false))]
        (_duplicates? coll except #{})))

    user=> (duplicates? '(1 2 3 4 1) #{1})
    false
    user=> (duplicates? '(1 2 3 4 1) #{})
    true

Categories : Clojure

Find all distinct field values for a Model that occur N times in CakePHP
In "raw" SQL you would use GROUP BY and HAVING:

    select `date`
    from FeaturedListings fl
    group by `date`
    having count(*) >= N;

If you want the listings on those dates, you need to join this back to the original data. Here is one method:

    select fl.*
    from FeaturedListings fl
    join (select `date`
          from FeaturedListings fl
          group by `date`
          having count(*) >= N
         ) fld
         on fl.`date` = fld.`date`

Categories : Mysql

MongoDb java async driver : What is the correct query for sorting and limiting values from a collection
I don't think the Aggregation Framework is the right choice here. I would just do a straight find. There is a Find class and a nested Find.Builder class for constructing the more complex queries.

    import static com.allanbank.mongodb.builder.QueryBuilder.where;

    import com.allanbank.mongodb.MongoClient;
    import com.allanbank.mongodb.MongoCollection;
    import com.allanbank.mongodb.MongoFactory;
    import com.allanbank.mongodb.MongoIterator;
    import com.allanbank.mongodb.bson.Document;
    import com.allanbank.mongodb.builder.Find;
    import com.allanbank.mongodb.builder.Sort;

    public class StackOverFlow {
        // SELECT * FROM collect
        // WHERE time >= input1 AND userId = input2
        // ORDER BY time DESC
        // LIMIT 30
        public static void query(long input1, String input2) {
            // ...
        }
    }

Categories : Sorting

Greatest n per group with multiple criteria for greatest
Not sure I am interpreting your question correctly; it would be easier if you had supplied table definitions including primary and foreign keys. If you want the most recent term that ends soonest, with the largest number of students, per school, this might do it:

    SELECT DISTINCT ON (t.school_id)
           t.school_id, t.term_id, s.size, t.start_date, t.end_date
    FROM   term t
    JOIN  (
        SELECT term_id, COUNT(s.id) AS size
        FROM   students s
        GROUP  BY term_id
        ) s USING (term_id)
    ORDER  BY t.school_id, t.start_date DESC, t.end_date, size DESC;

More explanation for DISTINCT ON in this related answer: Select first row in each GROUP BY group?
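
DISTINCT ON keeps the first row per school_id under the given ORDER BY. Those semantics can be sketched in JavaScript (field names mirror the query above; the comparator is an assumption about which row should win):

```javascript
// Keep the first row per key after sorting — the DISTINCT ON semantics.
function distinctOn(rows, keyFn, compareFn) {
  const seen = new Set();
  const result = [];
  for (const row of [...rows].sort(compareFn)) {
    const key = keyFn(row);
    if (!seen.has(key)) {
      seen.add(key);
      result.push(row);
    }
  }
  return result;
}

const terms = [
  { school_id: 1, term_id: "a", start_date: "2013-01-01" },
  { school_id: 1, term_id: "b", start_date: "2013-06-01" },
  { school_id: 2, term_id: "c", start_date: "2013-03-01" },
];
// Order by school_id, then latest start_date first — so the latest term wins:
const latest = distinctOn(
  terms,
  t => t.school_id,
  (x, y) => x.school_id - y.school_id || y.start_date.localeCompare(x.start_date)
);
// latest: term "b" for school 1, term "c" for school 2
```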

Categories : Postgresql

MongoDB: Different return values on .find() at shell access, and at php
SOLVED, many thanks to Vitaly Muminov! "I'm not quite sure about that, but you can try setting ini_set('mongo.native_long', 1); or wrapping your numbers in the MongoInt64 class." The solution is to wrap the number in MongoInt64!

Categories : PHP

Find greatest common subtype of two Scala types
Sounds like you want a Least Upper Bound (LUB) of the Scala types? I would look to Miles' Shapeless library for inspiration, where they actually have a LUBConstraint. Or, if you want the Greatest Lower Bound (GLB), I'm afraid I'd have to refer you to using a macro definition, wherein you can get either the LUB or the GLB; see Types.

Categories : Scala

Find greatest in multiple groups across tables MySQL
I think you want to remove the repo from the max query:

    select p1.Name, p1.Version, p1.Arch, d1.repo, p1.Date
    from Packages p1
    inner join Distribution d1 on p1.id = d1.id
    inner join (select Name, Arch, max(Date) as Date
                from Packages
                group by Name, Arch
               ) sq
        on p1.Name = sq.Name and p1.Arch = sq.Arch and p1.Date = sq.Date;

EDIT: If not all repos have a date, then you want to filter by distribution in the subquery. However, you still don't want to put repo in the aggregation:

    select p1.Name, p1.Version, p1.Arch, d1.repo, p1.Date
    from Packages p1
    inner join Distribution d1 on p1.id = d1.id
    inner join (select Name, Arch, max(Date) as Date
                from Packages p
                join Distribution d on p.id = d.id
                group by Name, Arch
               ) sq
        on p1.Name = sq.Name and p1.Arch = sq.Arch and p1.Date = sq.Date;

Categories : Mysql

use GREATEST in MySQL when there are null values
Use the COALESCE function, e.g.:

    greatest(
        COALESCE((follette_title.usedbuying_price * 1.37), 0),
        COALESCE((amtext.price * 1.37), 0),
        COALESCE((nebraska.price * 1.2), 0),
        COALESCE((tichenor.price * 1.25), 0)
    )

Categories : Mysql

Posix Sockets: Find out greatest number of file descriptors
You should have a variable, e.g. int maxfd, which you adjust every time your code calls FD_SET() or FD_CLR(). The answer to this question contains an example of adjusting maxfd properly. Unlike the comments suggest, I don't think you need to make "the" variable static (it's not clear which variable the comments mean). The comments are right about poll and epoll, but knowing how to use select is useful as well.

Categories : C

Is it possible to find the list of attributes which would yield to the greatest sum without brute forcing?
No polynomial algorithm to solve this problem comes to my mind. I can only suggest a greedy heuristic:

1. For each attribute, compute its expected_score, i.e. the addend it would bring to your SUM if selected alone. In your example, the score of 1 is 3 - 87 = -84.

2. Sort the attributes by expected_score in non-increasing order.

3. Following that order, greedily add the attributes to L. Call actual_score the score that attribute a will actually bring to your sum (it can be better or worse than expected_score, depending on the attributes already in L). If actual_score(a) is not strictly positive, discard a.

This will not give you the optimal L, but I think a "fairly good" one.
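
A generic sketch of that greedy heuristic in JavaScript. The marginalScore callback is a hypothetical stand-in for however your actual_score is computed against the current selection; the toy gains table below is purely illustrative:

```javascript
// Greedy attribute selection: rank by standalone (expected) score,
// then add each attribute only if its marginal contribution is positive.
function greedySelect(attributes, marginalScore) {
  const ranked = [...attributes].sort(
    (a, b) => marginalScore([], b) - marginalScore([], a)
  );
  const selected = [];
  for (const attr of ranked) {
    if (marginalScore(selected, attr) > 0) {
      selected.push(attr);
    }
  }
  return selected;
}

// Toy scoring: each attribute carries a fixed gain, independent of L.
const gains = { a: 5, b: -2, c: 3 };
const picked = greedySelect(["a", "b", "c"], (sel, attr) => gains[attr]);
// picked: ["a", "c"]
```

With a real scoring function, marginalScore(selected, attr) would re-evaluate the SUM with and without attr given the attributes already selected.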

Categories : Algorithm

Rearranging a matrix by the greatest values of the rows
Just use sortrows — no need for nested loops and multiple sorts. For example:

    A = [v1; v2; v3; v4]
    B = flipud(sortrows(A))

    B =
        13    14    15    16
         9    10    11    12
         5     6     7     8
         1     2     3     4

EDIT: What you want can be done using:

    [~, IX] = sort(max(A')', 'descend')

then A(IX,:):

    A(IX,:) =
         0     0    10     0
         0     4     0     0
         0     0     0     2
         0     0     1     0

Categories : Matlab

Select distinct values of distinct group
You can use the following code to get your desired array with only one SQL query executed:

    $sql = "SELECT title, group_concat(year) as years FROM table_name GROUP BY title";
    $rs = mysql_query($sql);
    $arr = array();
    while ($row = mysql_fetch_assoc($rs)) {
        $arr[$row['title']] = explode(',', $row['years']);
    }

Categories : PHP

Better way to move MongoDB Collection to another Collection
You could use a MapReduce job for this. MapReduce allows you to specify an out-collection to store the results in. When you have a map function that emits each document with its own _id as key, and a reduce function that returns the first (and, because _ids are unique, only) entry of the values array, the MapReduce is essentially a copy operation from the source collection to the out-collection. Untested code:

    db.runCommand({
        mapReduce: "mongo_collection",
        map: function(document) { emit(document._id, document); },
        reduce: function(key, values) { return values[0]; },
        out: { merge: "mongo_his_collection" }
    })

Categories : Mongodb

MongoDB aggregate using distinct
You can use two $group stages in the pipeline: the first groups by accountId, and the second does the usual counting. Something like this:

    db.InboundWorkItems.aggregate(
        { $match: { notificationDate: { $gte: ISODate("2013-07-18T04:00:00Z") }, dropType: 'drop' } },
        { $group: { _id: "$accountId", notificationDate: { $first: "$notificationDate" } } },
        { $group: { _id: 1, nd: { $first: "$notificationDate" }, count: { $sum: 1 } } },
        { $sort: { nd: 1 } }
    )

Categories : Mongodb

get distinct items in a collection if other item is not null
Unfortunately, the easiest way I can come up with at the moment is to unwind the Books array, match only those that have an email value, and then group by book name:

    db.collection.aggregate([
        { $unwind: '$Books' },
        { $match: { 'Books.auth_email': { $nin: ['', null] } } },
        { $group: { _id: '$Books.name', email: { $first: '$Books.auth_email' } } }
    ])

Another way, which should work as well, forms the root documents back up:

    db.collection.aggregate([
        { $unwind: '$Books' },
        { $match: { 'Books.auth_email': { $nin: ['', null] } } },
        { $group: { _id: '$_id', Country: { $first: '$Country' }, Books: { $addToSet: '$Books' } } }
    ])

I believe that in Python you can just do:

    self.mongo.aggregate([
        {"$unwind": "$Books"},
        {"$match": {"Books.auth_email": {"$nin": ['', None]}}},
        {"$group": {"_id": "$Books.name", "email": {"$first": "$Books.auth_email"}}}
    ])

Categories : Mongodb

How to use distinct with pipeline in mongodb using python
You will need to group on device and user first. You can do that with the following pipeline operator:

    { '$group': { '_id': { d: '$Device', u: '$UserId' } } }

Then, secondly, you need to count the number of distinct users per device (like you already had, but slightly modified):

    { '$group': { '_id': '$_id.d', 'count': { '$sum': 1 } } }

With the following dataset:

    { "_id" : "1234gbrghr", "Device" : "samsung", "UserId" : "12654", "Month" : "july" }
    { "_id" : "1278gbrghr", "Device" : "nokia",   "UserId" : "87654", "Month" : "july" }
    { "_id" : "1239gbrghr", "Device" : "samsung", "UserId" : "12654", "Month" : "july" }
    { "_id" : "1238gbrghr", "Device" : "samsung", "UserId" : "12653", "Month" : "july" }

And the following aggregate command:

    db.so.aggregate([
        { '$match': { 'Month': 'july' } },
        { '$group': { '_id': { d: '$Device', u: '$UserId' } } },
        { '$group': { '_id': '$_id.d', 'count': { '$sum': 1 } } }
    ])

Categories : Python

Get distinct sets of fields from MongoDB
It is not that easy, but you can do it. :-) The main challenge is building the BSON query and using the runCommand syntax from MongoDB. Here is some example code; the data can be found at docs.mongodb.org. An example SQL query could look like this:

    SELECT state, SUM(pop) AS totalPop
    FROM zips
    GROUP BY state
    HAVING SUM(pop) >= 10000

In the MongoDB shell you would run something like this:

    db.zipcodes.aggregate(
        { $group : { _id : "$state", totalPop : { $sum : "$pop" } } },
        { $match : { totalPop : { $gte : 10000 } } }
    )

You can run the same with db.runCommand, which has the following default syntax for the aggregation framework:

    db.runCommand({
        aggregate: "zipcodes",
        pipeline: [
            { $group : { _id : "$state", totalPop : { $sum : "$pop" } } },
            { $match : { totalPop : { $gte : 10000 } } }
        ]
    })

Categories : R

How to get the distinct list of ids from the Mongo Collection by datetime ordering?
This is tricky, because each distinct value of _stdid can have a different dateTime value — which one do you want to pick? The first, the last, the average? You will also need to use the aggregation framework, not a straight distinct. In the MongoDB shell, you would use (if you wanted the first of all the dateTime values that map to a single _stdid field):

    db.messages.aggregate( [
        { $group: { _id: '$_stdid', dateTime : { $first: '$dateTime' } } },
        { $sort: { dateTime : -1 } }
    ] );

In Java, this looks like:

    // create our pipeline operations, first with the $group operation
    DBObject groupFields = new BasicDBObject( "_id", "$_stdid" );
    groupFields.put( "dateTime", new BasicDBObject( "$first", "$dateTime" ) );
    DBObject group = new BasicDBObject( "$group", groupFields );

    // then the $sort operation, and run the pipeline
    DBObject sort = new BasicDBObject( "$sort", new BasicDBObject( "dateTime", -1 ) );
    AggregationOutput output = messages.aggregate( group, sort );

Categories : Mongodb

How to remove duplicates from collection using IEqualityComparer, LinQ Distinct
You need to override the GetHashCode method in your Employee class; you haven't done this. One example of a good hashing method is given below (generated by ReSharper):

    public override int GetHashCode()
    {
        return ((this.fName != null ? this.fName.GetHashCode() : 0) * 397)
             ^ (this.lName != null ? this.lName.GetHashCode() : 0);
    }

Now, after Distinct is called, the foreach loop prints:

    abc
    def
    lmn
    def

In your case you were calling object's GetHashCode, which knows nothing about the internal fields. One further note: MoreLINQ contains a DistinctBy extension method, which allows you to write:

    IEnumerable<Employee> coll = Employeecollection.DistinctBy(employee => new { employee.fName, employee.lName });

Anonymous objects have correct implementations of both GetHashCode and Equals.

Categories : C#

MongoDB Java driver: distinct with sort
MongoDB doesn't support server-side sorting with the distinct command. What's happening in the console is that the distinct('myKey') call returns an array, and you're then calling the JavaScript sort method on that array, which returns a sorted version of it. The parameters you pass into sort are ignored. To do the equivalent in Java:

    List myKeys = myCollection.distinct("myKey");
    java.util.Collections.sort(myKeys);

To get the unique keys using a server-side sort, you could use aggregate. Here's how you'd do that in the shell:

    db.mycollection.aggregate([
        { $group: { _id: '$myKey' } },
        { $sort: { _id: 1 } }
    ])

However, when I tested this, the simple client-side sort approach performed much better.

Categories : Java

MongoDB Aggregation: Counting distinct fields
I figured this out by using the $addToSet and $unwind operators (see: Mongodb Aggregation count array/set size):

    db.collection.aggregate([
        { $group: { _id: { account: '$account' }, vendors: { $addToSet: '$vendor' } } },
        { $unwind: "$vendors" },
        { $group: { _id: "$_id", vendorCount: { $sum: 1 } } }
    ]);

Hope it helps someone.
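
The same count-distinct-per-group logic, sketched in plain JavaScript with a Set playing the role of $addToSet; the field names follow the pipeline above:

```javascript
// Count distinct `valueKey` values per `groupKey` value.
function distinctCount(docs, groupKey, valueKey) {
  const sets = new Map();
  for (const d of docs) {
    if (!sets.has(d[groupKey])) sets.set(d[groupKey], new Set());
    sets.get(d[groupKey]).add(d[valueKey]);
  }
  return [...sets].map(([k, s]) => ({ _id: k, vendorCount: s.size }));
}

const rows = [
  { account: "acme", vendor: "v1" },
  { account: "acme", vendor: "v2" },
  { account: "acme", vendor: "v1" }, // duplicate, absorbed by the set
  { account: "other", vendor: "v1" },
];
const perAccount = distinctCount(rows, "account", "vendor");
// perAccount: [{ _id: "acme", vendorCount: 2 }, { _id: "other", vendorCount: 1 }]
```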

Categories : Mongodb

Distinct node js MongoDB case insensitive
That would be a brilliant use case for a "$downcase" operator in MongoDB's aggregation framework; yet, at this time, no such operator exists. The best you can do is store a searchable, downcased version of the title in every document, so your schema would look like:

    {
        title: "War and Peace",
        searchable_title: "war and peace"
    }

Your query logic would then downcase the input value before querying on the searchable title. For scale, stay away from case-insensitive regexes, and go with downcased titles.
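
A sketch of the write-side and query-side halves of that scheme in plain JavaScript; prepareDoc and titleQuery are illustrative names, not part of any API:

```javascript
// Write side: store a downcased copy alongside the display title.
function prepareDoc(title) {
  return { title: title, searchable_title: title.toLowerCase() };
}

// Query side: downcase the user's input before matching.
function titleQuery(input) {
  return { searchable_title: input.toLowerCase() };
}

prepareDoc("War and Peace");
// → { title: "War and Peace", searchable_title: "war and peace" }
titleQuery("WAR AND peace");
// → { searchable_title: "war and peace" }
```

Because both sides normalize to the same casing, an ordinary equality match (and an ordinary index on searchable_title) does the case-insensitive work.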

Categories : Node Js

Include related data from second collection in collection.find
Don't use $regex unless it's the direct (MongoDB $regex) syntax. This should work:

    searchFoo = function(query) {
        var re = new RegExp(query, "i");
        return FooCollection.find({ name: re });
    };

as should this:

    searchFoo = function(query) {
        return FooCollection.find({ name: { $regex: query, $options: "i" } });
    };

Additionally, to link up your bar query, use a transform instead of a regex:

    FooCollection.find({ name: "SOMENAME" }, {
        transform: function(doc) {
            if (doc.barId) doc.bar = FooCollection.findOne(doc.barId);
            return doc;
        }
    });

This should give you the result (after a fetch, or if you iterate the cursor):

    [{ name: SOMENAME, barId: MONGO_ID, bar: { name: ANOTHERNAME } }]

More information about transforms can be found in the Meteor docs.

Categories : Meteor

MongoDB: Sort distinct keys by number of occurances
One possible solution is a simple map / reduce. The advantage is that you could use it to aggregate additional information, such as the number of commits, committers, files... It might be too heavy a solution for what you want, though. I'm not entirely familiar with the modern aggregation framework, but I believe that if there's a solution other than map / reduce, that's where you're likely to find it.

Categories : Mongodb

Distinct sorting and grouping with MongoDB aggregation framework
It's actually a pretty easy fix if you think about how the aggregation framework is described. From the docs:

    Conceptually, documents from a collection pass through an aggregation pipeline,
    which transforms these objects as they pass through. For those familiar with
    UNIX-like shells (e.g. bash), the concept is analogous to the pipe (i.e. |)
    used to string text filters together.

You may have read that before, but the reason to repeat it is that you can pass operations into that pipeline in just about any order, and more than once. Whereas in MySQL, for example, LIMIT always comes at the end of the query and applies to the result set after all other grouping functions, in MongoDB the operations run in the order you've added them to the pipeline. So order your stages accordingly.

Categories : Python

how to query group by and distinct with a limit in MongoDB PHP codeigniter?
First of all, using group_by and distinct together doesn't make sense — you use either group_by or distinct. If you want to do some kind of pagination for a grouped query, you have to use map-reduce or the aggregation pipeline. In your case the aggregation would look like this:

    db.users.aggregate(
        { '$match' => { 'user' => 'ashok' } },
        { '$group' => { '_id' => '$details.url' } },
        { '$skip'  => 0 },
        { '$limit' => 10 }
    )

I am using this aggregation feature to display references at VersionEye. The aggregation framework allows grouping and paging at the database level, which is why it's much faster than other ORM or pagination solutions.

Categories : PHP

Removing from collection based on values from another collection - Lambda/Linq
Removal has to be done with foreach (as LINQ isn't supposed to alter collections), but other than that you can fetch the unwanted brands, and then the items to remove from carlist1, with LINQ:

    var carlist2Brands = carlist2
        .GroupBy(c => c.Brand)
        .Select(g => g.Key)
        .ToArray();

    var carsToRemove = carlist1
        .Where(c => carlist2Brands.Contains(c.Brand))
        .ToArray();

    foreach (var car in carsToRemove)
    {
        carlist1.Remove(car);
    }

Categories : C#

SQL select single distinct values from multiple columns (not combined values)
Try a union over both columns:

    select Department1 from tab
    union
    select Department2 from tab

NOTE: the union command eliminates duplicates.

Categories : SQL

MySQL remove duplicates in one column, leaving smallest and greatest values based on second column
MySQL supports join syntax for the delete command. Using this syntax, you can set up a filter for the appropriate records to delete (note that `when` needs backticks, as it is a reserved word):

    delete t
    from yourtable t
    join (select `count`, min(`when`) as minwhen, max(`when`) as maxwhen
          from yourtable
          group by `count`
         ) tokeep
        on t.`count` = tokeep.`count` and
           (t.`when` > tokeep.minwhen and t.`when` < tokeep.maxwhen);
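
Before running a destructive DELETE like this, the keep-the-endpoints logic can be checked against an in-memory copy. This JavaScript sketch marks which rows the query above would remove (rows strictly between the min and max `when` of each `count` group):

```javascript
// Return the rows the DELETE would target: per `count` group,
// everything strictly between the group's min and max `when`.
function rowsToDelete(rows) {
  const bounds = new Map();
  for (const r of rows) {
    const b = bounds.get(r.count) || { min: r.when, max: r.when };
    b.min = Math.min(b.min, r.when);
    b.max = Math.max(b.max, r.when);
    bounds.set(r.count, b);
  }
  return rows.filter(r => {
    const b = bounds.get(r.count);
    return r.when > b.min && r.when < b.max;
  });
}

const rows = [
  { count: 7, when: 1 },
  { count: 7, when: 2 }, // strictly between min and max: deleted
  { count: 7, when: 3 },
  { count: 9, when: 5 }, // lone row (min == max): kept
];
const doomed = rowsToDelete(rows);
// doomed: [{ count: 7, when: 2 }]
```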

Categories : Mysql

Comma separated values from SQL, getting distinct values and their count
How about:

    var result = from word in entries.SelectMany(e => e.HowFound.Split(','))
                 group word by word into gr
                 select new { Entry = gr.Key, Count = gr.Count() };

Grouping instead of Distinct lets you get the count as well.
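
The same split / group / count pipeline in plain JavaScript terms, with flatMap playing the role of SelectMany; the HowFound field name follows the example above:

```javascript
// Split comma-separated values, then count each distinct word.
function wordCounts(entries) {
  const counts = new Map();
  for (const word of entries.flatMap(e => e.HowFound.split(","))) {
    counts.set(word, (counts.get(word) || 0) + 1);
  }
  return [...counts].map(([entry, count]) => ({ Entry: entry, Count: count }));
}

const entries = [
  { HowFound: "google,friend" },
  { HowFound: "google" },
];
const counts = wordCounts(entries);
// counts: [{ Entry: "google", Count: 2 }, { Entry: "friend", Count: 1 }]
```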

Categories : C#

How to get core data distinct values with relationship values?
The property used as sectionNameKeyPath must be included in the propertiesToFetch: [req setPropertiesToFetch:@[@"identityID", @"sortname", @"contact.sectionIndex"]];

Categories : IOS

Should the MongoDb Official driver translate a Distinct() linq operator into a database operation?
From what I can see by turning profiling on, the distinct should be sent to MongoDB. Below is one of the traces for distinct queries against a test DB:

    {
        "op" : "command",
        "ns" : "test.$cmd",
        "command" : { "distinct" : "testing", "key" : "Value" },
        "ntoreturn" : 1,
        "keyUpdates" : 0,
        "numYield" : 0,
        "lockStats" : {
            "timeLockedMicros" : { "r" : NumberLong(49), "w" : NumberLong(0) },
            "timeAcquiringMicros" : { "r" : NumberLong(2), "w" : NumberLong(1) }
        },
        "responseLength" : 209,
        "millis" : 0,
        "ts" : ISODate("2013-06-...")
    }

Categories : Linq

MongoDB - aggregate to another collection?
The aggregation framework currently cannot output to another collection directly. However, you can try the answer in this discussion: SO-questions-output aggregate to new collection. Map-reduce is much slower, and I too have been waiting for a solution. You can also try the Hadoop-to-MongoDB connector, which is supported on the MongoDB website — Hadoop is faster at map-reduce — but I do not know whether it would suit your specific case. Link to hadoop + MongoDB connector. All the best.

Categories : Mongodb

Count MongoDB collection row
I've not used the Node.js driver at all, but looking at the documentation, it appears that Collection has a count function:

    // Assuming db has an open connection...
    db.collection("my_collection", function(err, collection) {
        collection.count(function(err, count) {
            // Assuming no errors, 'count' should have your answer
        });
    });

Does that help at all? As for the second part of your question, I'm not entirely sure what you're asking, but here are a couple of ways of getting your data.

Using the toArray() function to give you an array of all documents (see docs):

    collection.find().toArray(function(err, docs) {
        // 'docs' should contain an array of all your documents
        // 'docs.length' will give you the length of the array (how many docs)
    });

Using the each() function to iterate over the documents in the cursor one at a time.

Categories : Node Js


