Include a Javascript variable in the MongoDB aggregate query with $project
I figured it out. ...
{ $project : {
    ip : "$ip",
    model : "$model",
    cmts : "$cmts",
    prov_group : { $substr : [ "$prov", 0, 5 ] }
} } );
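As a side note, here is a minimal sketch of how a JavaScript variable can be dropped into such a pipeline, since the pipeline is just a plain object built in the shell (the collection name "devices" and the variables field and prefixLength are hypothetical, not from the original question):

// Hypothetical: the field to truncate is held in a JS variable.
var field = "prov";
var prefixLength = 5;
db.devices.aggregate([
    { $project : {
        ip : "$ip",
        model : "$model",
        cmts : "$cmts",
        prov_group : { $substr : [ "$" + field, 0, prefixLength ] }   // variable interpolated into the field path
    } }
]);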

Categories : Javascript

I have a query in mongodb and the reference key's are in hashmap list , I need to process a simple query using java in mongodb
When you post code, it helps if you indent it so it is more readable. As I mentioned to you on another forum, you need to go back and review the Java collection classes, since you have multiple usage errors in the above code. Here are a few things you need to do to clean up your code:
1) You don't need to use the itkey iterator. Instead, use for (String key : likey) and get rid of all the itkey.next() calls. Your current code only processes every second element of the List; the other elements are merely printed out.
2) Your HashMap will map a key to a Boolean. Is that what you want? You said you want to count the number of non-zero values for the key, so the line Boolean listone = table.distinct(keys).contains(nullvalue); is almost certainly in error.
3) When you iterate over the HashMap ...

Categories : Java

MongoDB - Aggregate Sum
$sum only works with ints, longs and doubles. Right now there is no operator to parse a string into a number, although that would be very useful. You can do this yourself as described in "Mongo convert all numeric fields that are stored as string", but that would be slow. I would suggest you make sure that your application stores numbers as int/long/double, and that you write a script that iterates over all your documents and updates the values. I would also suggest that you add a feature request at https://jira.mongodb.org/browse/SERVER for an operator that converts a string to a number.
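A minimal sketch of such a one-off conversion script in the mongo shell, assuming a collection called orders with a string field amount (both names are placeholders for your own schema):

// Hypothetical collection/field names; $type 2 matches string values.
db.orders.find({ amount: { $type: 2 } }).forEach(function (doc) {
    db.orders.update(
        { _id: doc._id },
        { $set: { amount: parseFloat(doc.amount) } }   // re-store the value as a number
    );
});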

Categories : Javascript

MongoDB - aggregate to another collection?
The aggregation framework currently cannot output to another collection directly. However, you can try the answer in this discussion: SO-questions-output aggregate to new collection. Map-reduce is much slower, and I too have been waiting for a solution. You can also try the Hadoop to MongoDB connector, which is supported on the MongoDB website; Hadoop is faster at map-reduce, but I do not know if it would be well suited to your specific case. Link to Hadoop + MongoDB connector. All the best.
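Until direct output is supported, one workaround along the lines of the linked discussion is to insert the aggregation result into the target collection yourself from the shell. A hedged sketch, with placeholder collection names (in the 2.4-era shell the aggregate result is an array under .result):

// Hypothetical "source" and "aggregated_results" collections.
var res = db.source.aggregate([ { $group: { _id: "$key", total: { $sum: "$value" } } } ]);
res.result.forEach(function (doc) {
    db.aggregated_results.insert(doc);   // copy each result document into the target collection
});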

Categories : Mongodb

aggregate request MongoDB
There are several issues with your sample document and aggregation:
- the sample doc will not match your aggregation query because you are matching on the createdDate field existing
- the $group operator works on a document level and needs an _id field to group by
- your list field is an embedded document, not an array
- aside from formatting, there is no obvious way to relate the sample values to the calculated result you are looking for
Here is an adjusted example document with list as an array, as well as some unique values for each item that happen to add up to the value numbers you mentioned:
db.people.insert({
    "ad" : "noc2",
    "createdDate" : ISODate(),
    "list" : [
        { "id" : "p45", "date" : ISODate("201 ...

Categories : Mongodb

MongoDB aggregate using distinct
You can use two $group stages in the pipeline: the first groups by accountId, and the second does the usual operation. Something like this:
db.InboundWorkItems.aggregate(
    {$match: {notificationDate: {$gte: ISODate("2013-07-18T04:00:00Z")}, dropType: 'drop'}},
    {$group: {_id: "$accountId", notificationDate: {$first: "$notificationDate"}}},
    {$group: {_id: 1, nd: {$first: "$notificationDate"}, count: {$sum: 1}}},
    {$sort: {nd: 1}}
)

Categories : Mongodb

Mongodb aggregate with 'join'
No, you can't do that, as there are no joins in MongoDB (not in normal queries, MapReduce, or the Aggregation Framework). Only one collection can be accessed at a time from the aggregation functions. Mongoose won't directly help, or necessarily make the query for additional user information any more efficient than doing an $in on a large batch of Users at a time (an array of userIds). ($in docs here.) There really aren't workarounds for this, as the lack of joins is currently an intentional design of MongoDB (i.e., it's how it works). Two paths you might consider: you may find that another database platform would be better suited to the types of queries you're trying to run, or you might try using $in as suggested above after the aggregation results are returned to your client code.
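A minimal sketch of that $in follow-up, with assumed names ("orders", "users", "userId", "amount" are placeholders): collect the ids from the aggregation output, then fetch all the matching users in one batched query instead of one query per user.

// Hypothetical collections and fields.
var results = db.orders.aggregate([ { $group: { _id: "$userId", total: { $sum: "$amount" } } } ]).result;
var userIds = results.map(function (r) { return r._id; });          // ids produced by the aggregation
var users = db.users.find({ _id: { $in: userIds } }).toArray();     // one $in lookup for the extra user info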

Categories : Mongodb

How to aggregate queries in mongodb
This is going to be ugly with the aggregation framework, but it can be done:
db.collection.aggregate(
    {$match: {"activity.gear": "glasses"}},
    {$unwind: "$activity"},
    {$group: {
        _id: {_id: "$_id", name: "$name"},
        _count: {$sum: {$cond: [{$eq: ["glasses", "$activity.gear"]}, 1, 0]}}
    }},
    {$match: {_count: {$gt: 1}}}
)
When analyzing the above query, I would recommend walking through it one step at a time: start with just the $match, then the $match and $unwind, and so on. You will see how each step works. The response is not the full document. If you are looking for the full document, include a $project step that passes through a dummy activity, and reconstruct the full document on the output.

Categories : Mongodb

How to aggregate sum in MongoDB to get a total count?
Sum: To get the sum of a grouped field when using the Aggregation Framework of MongoDB, you'll need to use $group and $sum:
db.characters.aggregate([
    { $group: { _id: null, total: { $sum: "$wins" } } }
])
In this case, if you want to get the sum of all of the wins, you need to refer to the field name using the $ syntax as "$wins", which fetches the value of the wins field from each grouped document and sums them together.
Count: You can sum other values as well by passing in a specific value (as you'd done in your comment). If you had { "$sum" : 1 }, that would actually be a count of the grouped documents (one per character), rather than a total of the wins.
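For contrast, a minimal count sketch against the same characters collection: each matching document contributes 1, so the result is a document count rather than a sum of the wins values.

db.characters.aggregate([
    { $group: { _id: null, count: { $sum: 1 } } }   // counts documents, not wins
])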

Categories : Mongodb

Mongodb aggregate on filter like subdocument
If you change your data structure to something like this (note that all the values are arrays, even the ones with single values):
{ _id: 1, filters: [ { key: 'f1', values: ['v1-1'] }, { key: 'f2', values: ['v2-1'] }, { key: 'f3', values: ['v3-1', 'v3-3'] } ] }
{ _id: 2, filters: [ { key: 'f1', values: ['v1-1'] }, { key: 'f2', values: ['v2-2'] }, { key: 'f3', values: ['v3-2', 'v3-3'] } ] }
{ _id: 3, filters: [ { key: 'f1', values: ['v1-1'] }, { key: 'f2', values: ['v2-2'] }, { key: 'f3', values: ['v3-1', 'v3-3'] } ] }
you could do an aggregate function something like this:
db.test.aggregate(
    { $unwind: "$filters" },
    { $project: {
        _id: 1,
        key: "$filters.key",
        values: "$filters.values" ...

Categories : Mongodb

Mongodb Aggregate: Nested totals
You can run two $group stages, one after another:
db.collection.aggregate([
    {$group: {_id: {account: '$account', CountryCode: '$CountryCode', ReferalSite: '$ReferalSite'}, number: {$sum: 1}}},
    {$group: {_id: {CountryCode: '$_id.CountryCode', ReferalSite: '$_id.ReferalSite'}, number: {$sum: '$number'}}}
])

Categories : Mongodb

How aggregate in mongoDB by _id consists of two element(in Java)?
You have this working in the shell, the question is how to turn this into Java.
db.workers.aggregate([{$group: {_id: {department: "$department", type: "$type"}, amount_sum: {$sum: "$amount"}}}])
This is very similar to the example in the Java tutorial for MongoDB. The only difference is that they use a simple DBObject for the _id part of $group and you need to make a document to use as your _id. Replace the line:
DBObject groupFields = new BasicDBObject("_id", "$department");
with:
DBObject docFields = new BasicDBObject("department", "$department");
docFields.put("type", "$type");
DBObject groupFields = new BasicDBObject("_id", docFields);
and you should be all set.

Categories : Java

Matching on compound _id fields in MongoDB aggregate
I think the best way to address this issue is by storing your data differently. Your "_id" sort of has arbitrary values as keys, and that is something you should avoid. I would probably store the documents as:
{ _id: { u: "rick", d: ISODate("2010-10-10T14:00:00Z"), type: "hobby", value: "wizardry" } }
{ _id: { u: "rick", d: ISODate("2010-10-10T14:00:00Z"), type: "gender", value: "male" } }
And then your match becomes simple, even without having to create a different match for each type.
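For illustration, a hedged sketch of what that simpler match could look like against the restructured documents (the collection name "profile" is a placeholder):

// Hypothetical collection; one $match now covers every type for the user/date.
db.profile.aggregate([
    { $match: { "_id.u": "rick", "_id.d": ISODate("2010-10-10T14:00:00Z") } }
])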

Categories : Mongodb

MongoDB - aggregate by date, right-aligned boundaries
You can do it using MongoDB map-reduce:
var map = function() {
    var date = new Date(this.perEnd.getTime());
    if (date.getMinutes() > 0) {
        date.setHours(date.getHours() + 1, 0, 0, 0);
    } else {
        date.setHours(date.getHours(), 0, 0, 0);
    }
    emit(date, this.val);
};
var reduce = function(key, values) {
    return Array.sum(values);
};
db.collection.mapReduce(map, reduce, {out: {inline: 1}}, callback);
For your data I got the following result:
[ { _id: Wed Jun 05 2013 21:00:00 GMT+0300 (EEST), value: 7.3 },
  { _id: Wed Jun 05 2013 22:00:00 GMT+0300 (EEST), value: 37.54 },
  { _id: Wed Jun 05 2013 23:00:00 GMT+0300 (EEST), value: 15.68 } ]

Categories : Mongodb

mongodb aggregate embedded document values
$unwind only goes down one level, so you have to call it as many times as there are levels of nesting. If you do it like this:
[
    { "$project" : { "text" : "$periods.tables.rows.text", "_id" : "$_id" } },
    { "$unwind" : "$text" },
    { "$unwind" : "$text" },
    { "$unwind" : "$text" },
    { "$group" : { "_id" : "$_id", "texts" : { "$addToSet" : "$text" } } },
    { "$project" : { "_id" : 0, "texts" : 1 } }
]
it will work as you expect.

Categories : Mongodb

sort by date with aggregate request in mongodb
Your aggregate query is incorrect. You add the sort and limit to the match, but that's not how you do that; you use separate pipeline operators:
db.friends.aggregate([
    { $match: { advertiser: "noc3" } },
    { $sort: { createdDate: -1 } },
    { $limit: 1 },
    ...
Your other pipeline operators are a bit strange too, and your code vs. query mismatches on timestamps vs. createdDate. If you add the expected output, I can update the answer to include the last bits of the query too.

Categories : Mongodb

Aggregate MongoDB results by ObjectId date
So this doesn't answer my question directly, but I did find a better way to replace all that lambda nonsense above using Python's setdefault:
d = {}
for message in messages:
    key = message['_id'].generation_time.date()
    d.setdefault(key, []).append(message)
Thanks to @raymondh for the hint in his PyCon talk: Transforming Code into Beautiful, Idiomatic Python.

Categories : Mongodb

Return a max value of an array with aggregate request in mongodb
Try this:
db.people.aggregate([
    {$match: {ad: "noc2"}},
    {$unwind: "$list"},
    {$project: {"_id": "$list.id", "value": {$add: ["$list.value1", "$list.value2", "$list.value3"]}}},
    {$sort: {value: -1}},
    {$limit: 1}
])
Output:
{ "result" : [ { "_id" : "p45", "value" : 587 } ], "ok" : 1 }

Categories : Arrays

Translating MongoDB query to a MongoDB java driver query
I haven't checked the syntax, and I don't know if your query works or not, but here's my try:
// unwind
DBObject unwind = new BasicDBObject("$unwind", "$scorearray");
// now the $group operation
DBObject groupFields = new BasicDBObject("player", "$player");
groupFields.put("venue", "$scorearray.venue");
groupFields.put("score", "$scorearray.score");
DBObject group = new BasicDBObject("$group", new BasicDBObject("_id", groupFields));
// sort
DBObject sort = new BasicDBObject("$sort", new BasicDBObject("_id.score", 1));
// second group
DBObject secondGroupFields = new BasicDBObject("_id", "$_id.player");
secondGroupFields.put("maxScore", new BasicDBObject("$last", "$_id.score"));
secondGroupFields ...

Categories : Mongodb

Is it possible to include other documents from the same collection in MongoDB's aggregate() function?
As of MongoDB 2.4, the Aggregation Framework does not support fetching additional documents into a pipeline or referencing documents relative to the current document. You will have to implement these sorts of calculations in your application logic. You may want to upvote and watch SERVER-4437 in the MongoDB Jira issue tracker; this feature suggestion is to add support for windowing operations on pipelines.
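As a rough illustration of doing this in application logic, here is a hedged sketch (the "events" collection, "ts"/"value" fields and someId variable are all assumptions) of fetching the document preceding the current one by a sort key and combining the two client-side:

// Hypothetical collection ordered by a "ts" field; someId is a placeholder.
var current = db.events.findOne({ _id: someId });
var previous = db.events.find({ ts: { $lt: current.ts } })
                        .sort({ ts: -1 }).limit(1).toArray()[0];     // the "relative" document
var delta = previous ? current.value - previous.value : null;        // cross-document calculation done in the client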

Categories : Mongodb

Counting user activity with MongoDB Aggregate Framework
You need to use the $unwind operation to explode the array, then $group by date (using the granularity that you want) and $project only the date and count, as below:
db.user.aggregate(
    { $unwind: "$logins" },
    { $group: {
        _id: {
            year: { $year: "$logins" },
            month: { $month: "$logins" },
            day: { $dayOfMonth: "$logins" },
            hour: { $hour: "$logins" }
        },
        date: { $first: "$logins" },
        count: { $sum: 1 }
    } },
    { $project: { _id: 0, date: "$date", number_of_users_logged_in: "$count" } }
)
I grouped by year/month/day/hour.

Categories : Mongodb

Mongodb, how do I aggregate match / group some documents but only if one or other conditions are satisfied?
Unfortunately, you can't do that with MongoDB in a single step on the database server; you'll need to do it client side. While you can project (documentation) your results to include/exclude some fields (or the first matching result in an array, for example, as shown here), you can't conditionally do it based on a search across multiple arrays, and the projection operator returns only the first match, not all of the results that match. You might need to consider a different document/collection structure to meet your requirements. MongoDB doesn't have sub-document level filtering/searching yet.
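To make the projection limitation concrete, a small sketch with hypothetical names ("things" collection, "items"/"tag" fields): $elemMatch in a projection returns only the first array element that satisfies the condition, not every matching element.

// Hypothetical document: { _id: 1, items: [ {tag: "a", v: 1}, {tag: "a", v: 2}, {tag: "b", v: 3} ] }
db.things.find(
    { "items.tag": "a" },
    { items: { $elemMatch: { tag: "a" } } }   // returns only { tag: "a", v: 1 }, not both "a" elements
)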

Categories : Mongodb

Mongoose / Mongodb mapReduce or aggregate to group by single field
I ended up doing this:
// Aggregate pipeline
Conversation.aggregate(
    { $match: { listing: new ObjectId(listingId) } },
    { $group: {
        _id: '$threadId',
        message: { $last: "$message" },
        to: { $last: "$to" },
        from: { $last: "$from" }
    } },
    { $project: {
        _id: 0,
        threadId: "$_id",
        message: "$message",
        to: "$to",
        from: "$from"
    } },
    function(err, threads) {
        console.log(err);
        console.log(threads);
    }
);
It seems to work fine. Let me know if there is any simpler way or if this snippet can be optimized. Hope this helps someone.

Categories : Node Js

Multiple MapReduce Functions or Aggregate Frameworks for unique value and count in Mongodb?
I think you can get the format you want using aggregation, specifically the $group and $project operators. Take a look at this aggregation call:
var agg_output = db.answers.aggregate([
    { $group: {
        _id: { city: "$city", state: "$state", answerArray: "$answerArray", pickId: "$pickId" },
        count: { $sum: 1 }
    } },
    { $project: {
        city: "$_id.city",
        state: "$_id.state",
        answerArray: "$_id.answerArray",
        pickId: "$_id.pickId",
        count: "$count",
        _id: 0
    } }
]);
db.answer_counts.insert(agg_output.result);
The $group stage takes care of summing the occurrences of each unique combination of city/state/answerArray/pickId, while the $project ...

Categories : Mongodb

query based on matching elements in DBRef list for mongodb using spring-data-mongodb
Querying for one element of an array is exactly like querying for field equality. You can read the MongoDB documentation here. So your query will be:
new Query(Criteria.where("users.$id").is(new ObjectId(userId)))

Categories : Mongodb

SQL aggregate query with bit-field
SELECT CAST([Note].Date AS DATE) AS Date,
       SUM([Note].Days) AS Total,
       CASE WHEN MIN(Locked) = 0 THEN 'false' ELSE 'true' END AS SignedOff
FROM Note
WHERE [Note].ID_Employee = N'E6A0E609-F8B2-4B48-A17C-4A4E117A4077'
GROUP BY CAST(Note.Date AS DATE)

Categories : Sql Server

GROUP BY and aggregate function query
Make this task easier on yourself by starting with a smaller piece of the problem. First get the minimum Time from TimeTrials for each combination of MemberID and Distance:
SELECT tt.MemberID, tt.Distance, Min(tt.Time) AS MinOfTime
FROM TimeTrials AS tt
GROUP BY tt.MemberID, tt.Distance;
Assuming that SQL is correct, use it in a subquery which you join back to TimeTrials again:
SELECT tt2.*
FROM TimeTrials AS tt2
INNER JOIN (
    SELECT tt.MemberID, tt.Distance, Min(tt.Time) AS MinOfTime
    FROM TimeTrials AS tt
    GROUP BY tt.MemberID, tt.Distance
) AS sub
ON tt2.MemberID = sub.MemberID
AND tt2.Distance = sub.Distance
AND tt2.Time = sub.MinOfTime;

Categories : SQL

SQL query for aggregate on multiple rows
Try this:
SELECT Name
FROM Tablename
WHERE indicator IN (1, 2)
GROUP BY Name
HAVING COUNT(DISTINCT indicator) = 2;
See it in action here: SQL Fiddle Demo

Categories : SQL

SQl Query : how to aggregate on part of date
I wonder if something like this would work (don't know much about Jet):
select oi.product_name,
       sum(iif(month(o.order_date) = 1, oi.items_purchased_count, 0)) as JAN,
       sum(iif(month(o.order_date) = 2, oi.items_purchased_count, 0)) as FEB,
       ...
from orders o
inner join order_items oi on o.order_id = oi.order_id
group by oi.product_name

Categories : SQL

Peforming an Update via a MySQL Aggregate Query
In the process of writing this question, I managed to find the solution to my own problem. I was indeed quite close to getting the query to work. Apparently, the only thing I had wrong was that I had placed an extra parenthesis after j.`Auto Number`. I removed that parenthesis and now the code runs fine. I thought about not posting since I had managed to figure out my own problem, but since I was having difficulty finding an answer when I searched for this issue, I figured I might as well post my problem and its answer. Here is the successful code:
UPDATE `t inventory1` i
INNER JOIN (
    SELECT Sum(p.Qty) AS SumOfQty, p.Category AS Category, p.StockNu AS StockNu
    FROM `t purchorderitems` p
    INNER JOIN `t jobenv` j ON p.`Order Nu` = j.`Auto Number`
    WHERE ((p.PickedUp) Is Null AND (j.`Date In` ...

Categories : Mysql

SQL query - how to aggregate only contiguous periods without lag or lead?
This is a hard problem that would be made easier with cumulative sums and lag() or lead(). You can still do the work. I prefer to express it using correlated subqueries. The logic starts by identifying which records are connected to the "next" record by an overlap. The following query uses this logic to define OverlapWithPrev:
select *
from (select t.*,
             (select top 1 1
              from t t2
              where t2.personid = t.personid and
                    t2.fromd < t.fromd and
                    t2.tod >= dateadd(d, -1, t.fromd)
              order by t2.fromd
             ) as OverlapWithPrev
      from t
     ) t
This takes on the value of 1 when there is a previous record and NULL when the ...

Categories : SQL

Query in relational algebra without using aggregate functions
I forget the proper relational algebra syntax now, but you can do:
(Worked on >= 1 site on 1st May)
minus
(Worked on > 1 site on 1st May)
--------------------------------------
equals
(Worked on 1 site on 1st May)
A SQL solution using only the operators mentioned in the comments (and assuming rename) is below:
SELECT Name
FROM Work
WHERE Date = '1st May'      /* Worked on at least one site on 1st May */
EXCEPT
SELECT W1.Name              /* Worked on more than one site on 1st May */
FROM Work W1
CROSS JOIN Work W2
WHERE W1.Name = W2.Name
  AND W1.Date = '1st May'
  AND W2.Date = '1st May'
  AND W1.Site <> W2.Site
I assume this will be relatively straightforward to translate.

Categories : Database

Aggregate functions conflict with some column in my query
Try this:
SELECT f1.match_static_id, f2.comments_no, f2.maxtimestamp, users.username
FROM forum AS f1
INNER JOIN (
    SELECT match_static_id, max(timestamp) AS maxtimestamp, count(comment) AS comments_no
    FROM Forum
    GROUP BY match_static_id
) AS f2 ON f1.match_static_id = f2.match_static_id AND f1.timestamp = f2.maxtimestamp
INNER JOIN users ON users.user_id = f1.user_id;
See it in action here: SQL Fiddle Demo

Categories : Mysql

mysql query for different section of same report (aggregate and detail )
First of all, you can introduce a row number into the A and B sections, e.g.:
SELECT @ROW := @ROW + 1 AS row, first_name
FROM users, (SELECT @ROW := 0) r;
Then:
SELECT A.tot_count as tot_countA, A.emptid as emptidA, A.ind_count as ind_countA,
       B.tot_count as tot_countB, B.emptid as emptidB, B.ind_count as ind_countB
FROM (subquery A with row column) A
INNER JOIN (subquery B with row column) B ON A.row = B.row

Categories : Mysql

Querydsl generates invalid SQL on basic aggregate query
I ended up switching to jOOQ. So far, the experience has been very positive. The API is somewhat similar, the documentation is better and it doesn't generate invalid SQL.

Categories : Sql Server

unable to use aggregate function on multiple queries in one query
This doesn't have anything to do with PHP; it's to do with your top line:
$qu = "SELECT distinct calls.c_number, count(type) as count1, SUM(charges * duration) as total, sum(duration) as duration1, billing_details.payment as pay, packages.(count)activation as act
What do you expect packages.(count)activation to do? Do you mean count(packages.activation)?

Categories : PHP

Aggregate function error in Access update query using Max()
See whether a DMax expression (see DMin, DMax Functions) gets what you need from dbo_tblStats ... ask for the max stopframe where Video matches the current FileName value. Assuming Video and FileName are both text data type, try this query:
UPDATE tblFiles
SET CurRecord = DMax("stopframe", "dbo_tblStats", "Video='" & FileName & "'")
WHERE Progress < 90;

Categories : Ms Access

aggregate and distinct linq query output to list
Use GroupBy and Sum:
var result = db.Table
    .GroupBy(r => r.Date)
    .Select(g => new { Day = g.Key, Hours = g.Sum(r => r.Hours) });
If Date is actually a DateTime you should use r.Date.Date to remove the time portion.

Categories : C#

MySQL query to aggregate the number of versions created over previous 7 days
Nailed it (yay!) thanks to @Goat CO's suggestion:
SELECT p.created_at_date,
       SUM(status = 'draft') as draft,
       SUM(status = 'active') as active,
       SUM(status = 'archived') as archived
FROM `product_versions` p
JOIN (
    SELECT product_id, MAX(id) AS latest_version
    FROM product_versions
    GROUP BY created_at_date, product_id
) grouped_versions
    ON p.product_id = grouped_versions.product_id
    AND p.id = grouped_versions.latest_version
GROUP BY created_at_date
ORDER BY created_at_date DESC
Result:
+-----------------+-------+--------+----------+
| created_at_date | draft | active | archived |
+-----------------+-------+--------+----------+
| 2013-09-07      |     2 |      1 |        1 |
| 2013-09-06      |     1 |      1 |        0 |
+-----------------+-------+--------+----------+

Categories : Mysql

Mongodb query optimization - running query in parallel
There are so many things wrong with this query. Your nested conditional with regexes will never get faster in MongoDB. MongoDB is not the best tool for "data discovery" (e.g. ad-hoc, multi-conditional queries for uncovering unknown information). MongoDB is blazing fast when you know the metrics you are generating, but not for data discovery.
If this is a common query you are running, then I would create an attribute called "united_states_or_health_care" and set its value to the timestamp of the create date. With this method, you are moving your logic from your query to your document schema. This is one common way to think about scaling with MongoDB.
If you are doing data discovery, you have a few different options: have your application concatenate the results of the different ...
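A hedged sketch of that precomputed-attribute idea, with made-up collection and field names ("articles", "title", "created_at"): the application tags matching documents at write time with the create date, the field is indexed, and the recurring query becomes a cheap indexed range scan instead of regex matching.

// Hypothetical collection/fields; the flag is set only when the document matches the conditions.
db.articles.insert({
    title: "Health care reform in the United States",
    created_at: new Date(),
    united_states_or_health_care: new Date()       // same value as the create date, per the suggestion above
});
db.articles.ensureIndex({ united_states_or_health_care: 1 });

// The recurring query is now a simple indexed lookup.
db.articles.find({ united_states_or_health_care: { $gte: ISODate("2013-01-01T00:00:00Z") } });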

Categories : Mongodb


