Constraints on an embedded subclass - Grails, GORM, Mongo
I've solved my issue in case anyone else comes across this. It may not be the optimal solution, but I haven't found an alternative. I've added a custom validator to the Media class that calls validate() on the embedded Film class and adds any errors that arise to the Media object's errors: class Media{ ObjectId id; String name; Film film; static mapWith = "mongo" static embedded = ["film"] s

Categories : Mongodb

Using RESTFul Oracle APEX
I found that APEX takes the where condition as an encoded parameter in the URL, something like: https://api.appery.io/rest/1/db/collections/Outlet_Details?where=%7B%22Oracle_Flag%22%3A%22Y%22%7D The header is the same and there are no input parameters. This can be done from Application Builder > New Application > Database > Create Application > Shared Components > Create > REST, and then start inserting the headers.
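For reference, the encoding in that URL is just the JSON filter passed through standard URL encoding. A minimal JavaScript sketch (the collection URL and filter value are taken from the example above):
var where = JSON.stringify({ "Oracle_Flag": "Y" });           // {"Oracle_Flag":"Y"}
var url = "https://api.appery.io/rest/1/db/collections/Outlet_Details"
        + "?where=" + encodeURIComponent(where);              // ...?where=%7B%22Oracle_Flag%22%3A%22Y%22%7D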

Categories : Mongodb

MongoDB MapReduce Calculate Delta
As per your comments and my suggestion, this map/reduce gives you both. Please read the comments for clarification. db.runners.mapReduce( // Map function(){ for(var i = 0; i < this.RunningSpeed.length; i++){ var value = { date: this.RunningSpeed[i].Date, speed: this.RunningSpeed[i].Value }; // We emit everything as single key/value pairs emit(this.Name, value); }

Categories : Mongodb

MongoDB: FailedToParse: Bad characters in value
That's not a valid query. --query must be a JSON document. Your error is in thinking that mongodump is something programmatic like the mongo shell that can evaluate the findOne and substitute the value into the query. This is not the case. You can find the _id from the result of the findOne and put it in the mongodump --query manually. Use extended JSON format for an ObjectId type, if that is the
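As a sketch of that last point (the database, collection and ObjectId value below are placeholders, not taken from the question), an ObjectId in a mongodump --query is written in extended JSON with $oid:
mongodump --db mydb --collection mycoll \
  --query '{ "_id": { "$oid": "53b2c2d7e4b0a7b9f0d4e8a1" } }'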

Categories : Mongodb

MongoDB many-to-many relationship logic
Note: This is one way to model it. Data modeling has a lot to do with your use cases and the questions you want to ask of the data. Your use case might need a different model. I'd probably model it like this: manufacturer { _id:"ACME", name: "ACME Corporation" … } products { _id:ObjectId(...), manufacturer: "ACME", name: "SuperFoo", description: "Without SuperFoo, you can't bar o
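A sketch of how the two sides of the relation would be queried with this model (it assumes the collections are literally named manufacturer and products, as above, and that the looked-up product exists):
// All products made by a given manufacturer
db.products.find({ manufacturer: "ACME" })

// The manufacturer of a given product, resolved with a second lookup
var product = db.products.findOne({ name: "SuperFoo" });
db.manufacturer.findOne({ _id: product.manufacturer })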

Categories : Mongodb

Spring mongodb get ID of inserted item after Save
This is pretty interesting and I thought I would share it. I just figured out the solution with the help of BatScream's comment above: You would create an object and insert it into your MongoDB: Animal animal = new Animal(); animal.setName(name); animal.setCat(cat); mongoTemplate.insert(animal); Your Animal class looks like this, with getters and setters for all fields: pub

Categories : Mongodb

MongoDB in Docker container runs out of space for journal
Simply remove unneeded images: first have a look at which images are there with docker images and remove unneeded ones with docker rmi <image id or name>. Keep in mind though that images depend on each other - you have to find out which are the ones you really want to remove. The good news is that unneeded dependencies are automatically removed.

Categories : Mongodb

Create an Index for a find() operation in a mongodb collection
Your .find() call has no arguments, therefore the query will need to access all documents in the collection. Up to this point, an index would only introduce some overhead. Your .sort() instruction requires that the returned documents (the whole collection) be sorted, and this is a case where you can benefit from an index: db.posts.ensureIndex({date:1}) (or -1) In this case,
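Put together, a short shell sketch of the index plus the sort that can use it (the collection name comes from the answer; the descending sort is an assumption about the original query):
db.posts.ensureIndex({ date: 1 })
// The sort can now walk the index (in either direction)
// instead of sorting the whole collection in memory:
db.posts.find().sort({ date: -1 })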

Categories : Mongodb

MongoDB: Not authorized on Admin
I figured it out. I accidentally put the data and logs in the same directory as the MongoDB bin directory. The solution is to put the MongoDB data and logs in a separate folder or drive -- I differentiated using the C:\ and D:\ drives.
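For illustration, a hypothetical startup line with the data and logs on a separate drive (the paths are made up; only the --dbpath and --logpath options themselves come from mongod):
mongod --dbpath D:\mongodbdata --logpath D:\mongodblogs\mongod.log --logappend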

Categories : Mongodb

Mongo -Select parent document with maximum child documents count, Faster way?
If I got you right, you want the parent with the most children. This is easy to accomplish using the aggregation framework. When each child can only have one parent, the aggregation query would look like this: db.childs.aggregate( { $group: { _id:"$parent_id", children:{$sum:1} } }, { $sort: { "children":-1 } }, { $limit : 1 } ); which should return a document like: { _id:"SomePare
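If a child could instead reference several parents in an array field, a hedged variant would unwind that array first (the field name parent_ids is hypothetical):
db.childs.aggregate(
    { $unwind: "$parent_ids" },                                // one document per (child, parent) pair
    { $group: { _id: "$parent_ids", children: { $sum: 1 } } },
    { $sort: { children: -1 } },
    { $limit: 1 }
);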

Categories : Mongodb

Find ignores Second Value on Collections
Promociones.find( {'metadata.tipoMenu' : { $in: [searchMenu] } }, {'metadata.diaOferta' : { $in: [diaDeHoy] } } ) should be Promociones.find( { 'metadata.tipoMenu' : { $in: [searchMenu] } , 'metadata.diaOferta' : { $in: [diaDeHoy] } } ) See the Meteor docs on collection.find.

Categories : Mongodb

Why does MongoDB not support queries of properties of embedded documents that are stored in hashed arrays?
As a rule of thumb: Usually, these problems aren't technical ones, but problems with data modeling. I have yet to find a use case where it makes sense to have keys hold semantic value. If you had something like 'products':[ {sku:12432,price:49.99,qty_in_stock:4}, {sku:54352,price:29.99,qty_in_stock:5} ] It would make a lot more sense. But: you are modelling invoices. An invoice sh
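To illustrate why the array-of-subdocuments shape is preferable: the usual query operators then apply directly. A sketch, assuming the collection is called invoices and using the field values from the example above:
// Any invoice containing a given SKU
db.invoices.find({ "products.sku": 12432 })

// Several conditions that must hold for the same array element
db.invoices.find({ products: { $elemMatch: { sku: 12432, qty_in_stock: { $gt: 0 } } } })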

Categories : Mongodb

Insert an embedded document to a new field in mongodb document
You can do it with db.test.update( { _id : 133 }, { $set : { PackSizes: {_id: 123, PackSizeName:"xyz", UnitName:"pqr"}} } ) PackSizes could be any document, with or without an array. Your resulting document will be { "_id" : 133, "Name" : "abc", "Price" : 20, "PackSizes" : { "_id" : 123, "PackSizeName" : "xyz", "UnitName" : "pqr" } } Updated:

Categories : Mongodb

Check configuration of mongodb setup
Since your diagram shows otherwise: you can have either exactly 1 or exactly 3 config servers. All your mongos instances should use the exact same string for the configdb parameter, and this string has to list all config servers in the same order. Otherwise, you risk metadata corruption. All config servers and mongos instances need to be able to connect to, and be reachable from, all nodes in the cluster.
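As a sketch of the second point (hostnames and ports are placeholders), every mongos would be started with an identical, identically ordered configdb string:
mongos --configdb cfg1.example.net:27019,cfg2.example.net:27019,cfg3.example.net:27019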

Categories : Mongodb

MongoDB WriteError "code" : 9
From the capped collection docs that BatScream linked: You cannot delete documents from a capped collection. To remove all records from a capped collection, use the ‘emptycapped’ command. To remove the collection entirely, use the drop() method. Documents expire from the capped collection as new documents are inserted. You cannot manually delete documents. See the docs for more info
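In the shell, the two options from that quote look roughly like this (the collection name is a placeholder):
// Remove all documents but keep the capped collection itself
db.runCommand({ emptycapped: "myCappedCollection" })

// Or remove the collection entirely
db.myCappedCollection.drop()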

Categories : Mongodb

pymongo.errors.ConnectionFailure: timed out from an ubuntu ec2 instance running scrapyd
I solved the issue. Initially, I had set up my EC2 security group's outbound rules as:
Type: HTTP, Protocol: TCP, Port Range: 80, Destination: 0.0.0.0/0
Type: Custom, Protocol: TCP, Port Range: 6800, Destination: 0.0.0.0/0
Type: HTTPS, Protocol: TCP, Port Range: 443, Destination: 0.0.0.0/0
However, this wasn't enough, as I also needed a specific Custom TCP rule for the actual port of the MongoDB instance.

Categories : Mongodb

MongoDB - how do i see everything in a collection using the shell?
To list all the documents in a collection, just call find without any parameters. db.myCollection.find() If there are a lot of documents, it will batch them up. You will then be able to show the next batch by typing the command it.
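Roughly how that looks in the shell; the DBQuery.shellBatchSize line is optional and just changes how many documents are printed per batch (20 by default):
db.myCollection.find()        // prints the first batch of results
it                            // prints the next batch
DBQuery.shellBatchSize = 50   // show 50 documents per batch from now on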

Categories : Mongodb

Why won't my insert callback run?
It looks like you are mixing node.js and the mongodb shell. In the mongodb shell all code is synchronous and runs line by line, so db.sets.insert will simply return the inserted document. So, try to rewrite it as follows: if (newSet) { insertedSet = db.sets.insert(newSet); print('in') // never outputs db.practices.insert({ type: 'set', srcId: insertedSet._id, st

Categories : Mongodb

querying mongodb (mongoose) for a timespan in subdocument
$elemMatch is for arrays. The query you are looking for is: {'foo.fooDate': {$gt: new Date('2014-01-01') }} $elemMatch would work for the following schema, and would return results if at least one item in the foo array matches the specified query (or all specified queries): var wishSchema = mongoose.Schema({ foo: [{ bar: String, fooDate: Date }] }); If you need to run only one query
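For completeness, $elemMatch pays off when several conditions must hold for the same array element. A hedged sketch against the schema above (the collection name wishes and the field values are made up):
db.wishes.find({
    foo: { $elemMatch: { bar: "someValue", fooDate: { $gt: new Date('2014-01-01') } } }
})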

Categories : Mongodb

How to modify multi documents' field in Mongodb? The new value is the extension of the original one
As @Disposer pointed out, it's not possible to have multiple documents in a collection with the same _id. Assuming that was a typo, you can make use of the cursor method forEach() in the mongo shell to achieve what you want: db.foo.find().forEach( function(myDoc) { db.foo.update({"_id": myDoc._id}, {$set: {"site": "http://" + myDoc.site}}); })

Categories : Mongodb

"$**" wildcard specifier for text index - MongoDB in JAVA
This is a resolved issue JAVA-814 fixed in 2.11.2/2.12. Please update the driver to an appropriate version and then your attempts to index all string fields with $** should succeed.
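For reference, the shell form of the index that the $** wildcard specifier creates (the collection name is a placeholder):
// Index every string field in the collection for text search
db.myCollection.ensureIndex({ "$**": "text" })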

Categories : Mongodb

Get all documents in a collection which match a set of keys - mongoDB
If I understood your problem correctly this would be your answer db.user_details.find( { $or : [ {'roles.administrator' : "Y2FZnfx9Zi4NR6J6e"}, {'roles.mod' : "Y2FZnfx9Zi4NR6J6e"}, {'roles.writer' : "Y2FZnfx9Zi4NR6J6e"} ] })

Categories : Mongodb

mongo convert or $rename, $set, $unset a lat lon into 2dsphere Point coordinates array
There is currently no way to update a field by manipulating the existing value of that field, but you can do the following: db.collection.find().forEach(function(doc){ var location = {"type":"Point","coordinates":[doc.loc.lon,doc.loc.lat]}; db.collection.update({"_id":doc._id},{$set:{"loc":location}}); }) Basically, if we want to update a field with a new value that is derived from its old value

Categories : Mongodb

How to Cap a Collection based on Size - MOngodb
As the documentation for MongoDB 2.6 says, "If the size field is less than or equal to 4096, then the collection will have a cap of 4096 bytes. Otherwise, MongoDB will raise the provided size to make it an integer multiple of 256." You can see the size MongoDB actually chose by querying system.namespaces: > // Create collection of size 10. > db.createCollection('my_collection', {capped: tru

Categories : Mongodb

MongoDB Java Driver creating Database and Collection
I prefer to use the createCollection method on the DB object, but found that it does not create the database/collection unless the first document is inserted. MongoDB creates a collection implicitly when the first document is saved into it. The createCollection() method explicitly creates a collection if and only if an options object is passed to it as an argument. Now this makes sense

Categories : Mongodb

Mongo database size inconsistency
MongoDB 2.6 uses Powers of Two Record Allocation by default. Prior to loading your data, you can either change the mongod parameter newCollectionsUsePowerOf2Sizes or run collMod on your collection: db.runCommand( { collMod: "myCollection", usePowerOf2Sizes: false })
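The parameter route from the first option would look roughly like this (a sketch; whether it can be changed at runtime depends on the server version):
// At startup:
//   mongod --setParameter newCollectionsUsePowerOf2Sizes=false

// Or, where runtime changes are supported, from the shell:
db.adminCommand({ setParameter: 1, newCollectionsUsePowerOf2Sizes: false })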

Categories : Mongodb

response.write not working in embedded function on nodejs
Assuming res.end() is supposed to be response.end(), you're calling response.end() before your res.write calls in the async connect callback. You need to move that call into the callback, like this: var server = require('http').createServer(function(request,response) { switch (url.parse(request.url).pathname) { case '/mongotest': mongoClient.connect('mongodb://localhost:27017', funct

Categories : Mongodb

How do I structure my data with mongodb?
I'd do this: Schemas: var UserSchema = new Schema({ name: { type: String }, company: { type: Schema.Types.ObjectId, ref: 'Company' }, stores: [{ type: Schema.Types.ObjectId, ref: 'Store' }], sections: [{ type: Schema.Types.ObjectId, ref: 'Sections' }] }) var CompanySchema = new Schema({ name: { type: String }, store: { type: Schema.Types.ObjectId, ref: 'Store' } }); v

Categories : Mongodb

MongoDB schema for timeseries with metadata
A lot of best practice for dealing with time-series data is contained in the MongoDB document Pre-Aggregated Reports. Typically, you will use some or all of the following patterns:
Bucketing by day (or some other period) using upserts.
Pre-aggregating summary values (eg. $inc) at various levels (eg. minute, hour, day) by using in-place updates whenever each new event/tick is consumed, thus enabl
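A hedged sketch of the bucketing/upsert pattern, assuming one document per sensor per day and purely illustrative collection and field names:
db.stats.daily.update(
    { sensor: "sensor-1", day: ISODate("2014-09-28T00:00:00Z") },   // the day bucket
    { $inc: { total: 1, "hours.14": 1 } },                          // roll the tick into day and hour counters
    { upsert: true }                                                // create the bucket on the first tick of the day
);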

Categories : Mongodb

MongoDB can't set 'maxIncomingConnections'
I don't know which user you ran the ulimit command as, but keep in mind that this is only valid for the current user in the current environment. A better approach is to set the open file limit in /etc/security/limits.conf like this: # Max is 64k anyway, and there is a hard limit # of 20k connections in MongoDB anyway # 40k open files should be more than enough # unless you have _very_ large disks

Categories : Mongodb

Find $near location wrong result
maxDistance is given in meters, whereas the distance between the points is much bigger than 5m. If you adjust the values accordingly (I assume you meant miles), the query should work. Furthermore, your places are stored within a single document. MongoDB won't sort arrays inside a document according to distance. If you had the locations in different documents, they'd happily be sorted by distance.
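With one document per place and a 2dsphere index on loc, a hedged sketch of a metric query would be (collection name, coordinates and the 5 km radius are made up):
db.places.ensureIndex({ loc: "2dsphere" })
db.places.find({
    loc: {
        $near: {
            $geometry: { type: "Point", coordinates: [ -73.9667, 40.78 ] },
            $maxDistance: 5000   // meters
        }
    }
})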

Categories : Mongodb

MongoDB PHP Date
lastSeenDate is not in MongoDB date format; maybe you should do something like this: $js = "function() { return this.lastSeenDate > '2013-02-18' && this.lastSeenDate <= '2014-09-28'; }"; $collection->find(array('$where' => $js));

Categories : Mongodb

MongoDB $oid vs ObjectId
The MongoLab UI uses Strict MongoDB Extended JSON so Object IDs are represented thusly, as in the second code block of the OP: { "$oid": "<id>" }

Categories : Mongodb

What do the oplog fields actually mean?
h is a hash (signed Long).
ts is the internal timestamp format (the "\x11" type shown at bsonspec.org; search the API docs for your driver at api.mongodb.org for further information).
You are correct on op, ns, o, and o2.
There's also a "v" field (I'm gonna speculate that this is a version, which would allow them to update the schema for the oplog).
b is True for all the delete operations I could find,
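If you want to inspect the fields yourself, a quick way on a replica set member is to look at the latest entry, e.g.:
// From the mongo shell, on a replica set member
use local
db.oplog.rs.find().sort({ $natural: -1 }).limit(1).pretty()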

Categories : Mongodb

Elasticsearch failing to work with MongoDB river
I bet you made the same mistake as I did. You not only need to install the dependency by issuing the following: plugin --install elasticsearch/elasticsearch-mapper-attachments/2.4.1 You also need to install the river itself: plugin --install com.github.richardwilly98.elasticsearch/elasticsearch-river-mongodb/2.0.6 Elasticsearch looks up the class name by the value of type.

Categories : Mongodb

Mongodb update guarantee using w=0
No, a write with w=0 can fail; it is only: http://docs.mongodb.org/manual/core/write-concern/#unacknowledged "Unacknowledged is similar to errors ignored; however, drivers will attempt to receive and handle network errors when possible." Which means that the write can fail silently within MongoDB itself. It is not reliable if you wish to specifically guarantee the write. At the end of the day, if you wish to touc
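For contrast, a small shell sketch of the two concerns (collection and document are placeholders); only the second one waits for the primary to acknowledge the write:
// Fire and forget: errors within the server can be lost
db.orders.insert({ item: "foo" }, { writeConcern: { w: 0 } })

// Acknowledged: the shell reports whether the primary accepted the write
db.orders.insert({ item: "foo" }, { writeConcern: { w: 1 } })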

Categories : Mongodb

MongoDB in Go (golang) with mgo: how to use logical operators to query?
Your mongo query can be translated to the following: pipeline := bson.D{ {"key1", 1}, {"$or", []interface{}{ bson.D{{"key2", 2}}, bson.D{{"key3", 2}}, }}, } The query should be equivalent to the following in the mongo console: db.mycollection.find({"key1" : 1, "$or" : [{"key2" : 2}, {"key3" : 2}]}) If you would rather use unordered maps, bson.M, it would be li

Categories : Mongodb

How can I improve aggregation processing time with Map Reduce?
Yes, it could work. It's a common pattern to improve querying speed. But MongoDB is special in that case, because Map Reduce needs JavaScript evaluation, while the aggregation framework is implemented natively; therefore aggregation is faster. I advise separating the concept from the technology. It's still a good idea to do pre-calculations in a batch job, but you should do it with the aggregati
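A hedged sketch of such a batch job done with the aggregation framework rather than map/reduce ($out needs MongoDB 2.6+; collection and field names are made up):
db.events.aggregate([
    { $group: { _id: { day: { $dayOfYear: "$ts" } }, count: { $sum: 1 } } },
    { $out: "events_daily_summary" }   // materialize the pre-calculated totals
])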

Categories : Mongodb

How to pass variable while filtering Mongodb INPUT in Kettle
I would suggest trying the following: within the job that calls the transformation, create a variable that meets the format expectations. You could use the JavaScript step to evaluate and store the variable. An example of a short script that stores a value in a variable: // do some alterations to 'modifiedDate', // then store the variable in memory: parent_job.setVariable("Extraction.MongoDB.

Categories : Mongodb



